…Ufd1-Npl4 cofactor of the ATPase p97. These data suggest that polyubiquitin does not serve as a ratcheting molecule. Rather, it may serve as a recognition signal for the p97-Ufd1-Npl4 complex, a component implicated in the movement of substrate into the cytosol. All Science Journal Classification (ASJC) codes - Biochemistry - Molecular Biology - Cell Biology
Welcome Article Welcome to Balwyn Primary School, a school where each child is known and valued. We provide a caring and supportive learning environment for our students in a multi-age structure. At Balwyn, we strive for and achieve excellence in teaching and learning and cater for the individual needs of students. Balwyn Primary School provides a high quality education through a relevant and dynamic curriculum and encourages students to become life-long learners. Quick Links 1. Department of Education National School Pride Balwyn Primary has been granted $150,000 to be spent in 2009 to refurbish school buildings.
\begin{document} \title{Closed-Form and Asymptotic BER Analysis of the Fluctuating Double-Rayleigh with Line-of-Sight Fading Channel} \author{Aleksey S.~Gvozdarev\href{https://orcid.org/0000-0001-9308-4386}{\includegraphics[scale=0.1]{orcid.pdf}},~\IEEEmembership{Member,~IEEE,} \thanks{The author is with the Department of Intelligent Radiophysical Information Systems, P. G. Demidov Yaroslavl State University, 150003 Yaroslavl, Russia (e-mail: [email protected]).} \thanks{\copyright 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.}} \maketitle \begin{abstract} Recently, a generalization of the double-Rayleigh with line-of-sight channel fading model taking into account shadowing of the line-of-sight component has been proposed. In this research, a closed-form analysis of the average bit error rate for MPSK/MQAM modulations is performed. The derived solution is accompanied by a proposed numerically efficient approximation and all possible asymptotic expressions that correspond to extreme channel parameters. Lastly, a numerical simulation was performed that demonstrated the correctness of the derived results. \end{abstract} \begin{IEEEkeywords} Fading channel, error rate, double-Rayleigh, line-of-sight, shadowing. \end{IEEEkeywords} \section{Introduction} \IEEEPARstart{N}{owadays}, the communication link quality of modern ad-hoc systems is mainly limited by the wireless signal propagation effects. Thus, the choice of channel model heavily impacts the predicted overall system performance. Recently, a novel fluctuating double-Rayleigh with line-of-sight (fdRLoS) fading channel model was proposed \cite{Lop21}. 
It generalizes the double-Rayleigh with line-of-sight (LoS) model \cite{Sal06} by including the Gamma-distributed shadowing of the LoS component, and covers such physical scenarios as the "pipe-like"/keyhole channel \cite{Ges02}, propagation via a diffracting street corner \cite{Erc97}, amplify-and-forward relaying \cite{Has04}, free-space optical communication through a turbulent medium \cite{And85} and vehicle-to-vehicle (V2V) communications \cite{Ai18}. The original work by Lopez-Fernandez et al. \cite{Lop21} states the results for the probability density function (PDF) and cumulative distribution function (CDF), expressed for integer values of the LoS shadowing parameter, and the outage probability, derived via the obtained CDF. The problem with the obtained expressions is that they are formulated in a way that does not make further analytical derivations possible. Moreover, for similar models, experimental results demonstrate that the shadowing parameter can effectively be non-integer and generally less than 1 (including the so-called hyper-Rayleigh regime \cite{Gar19}), which cannot be deduced from the results in \cite{Lop21}. Furthermore, an average bit error rate (ABER) analysis quantifying the communication link quality for such a channel is not present. Motivated by the problems stated above, in this letter a closed-form expression for the ABER is derived for the fdRLoS channel model with arbitrary parameters. It is supplemented with a computationally efficient approximation obtained by truncating the derived solution, and the truncation error is estimated. Then the asymptotic expressions of the ABER for all possible extreme cases are evaluated: (\textit{a}) high signal-to-noise ratio (SNR), (\textit{b}) extremely high/low shadowing; and (\textit{c}) extremely strong/weak LoS component. Finally, to validate the correctness and accuracy of the analytical work, a computer simulation was performed, and the obtained numerical results were studied. 
It was found that, specifically for the hyper-Rayleigh regime, the minimum ABER is achieved when the total power of the multipath components equals the power of the LoS component. \section{Preliminaries} \subsection{Channel model: physical and statistical description} Let us start with a brief description of the fluctuating double-Rayleigh with line-of-sight fading channel model proposed in \cite{Lop21}. It is generally assumed that the signal propagating within the wireless channel can be represented as the combination of the line-of-sight component that undergoes shadowing with average magnitude $\omega_0$ and uniformly distributed phase $\phi \sim U[0,2\pi )$ (for consistency, the initial notation of \cite{Lop21} is used) and the double-Rayleigh fading (dRf) component $\omega_2 G_2 G_3$: \begin{IEEEeqnarray}{rCl}\label{channel-model} S=\omega_0\sqrt{\xi }e^{j\phi }+\omega_2 G_2 G_3. \end{IEEEeqnarray} Here $\xi$ is the shadowing parameter following a Gamma distribution normalized to have unit power and shape coefficient $m$; $\omega_2$ is the average magnitude of the fluctuating double-Rayleigh component; and $G_2, G_3$ are zero-mean complex normal random variables (i.e., $G_2, G_3 \sim \mathcal{CN}(0,1)$). \begin{figure*}[!b] \hrulefill \normalsize \setcounter{MYtempeqncnt}{0} \setcounter{equation}{4} \vspace*{-4pt} \begin{IEEEeqnarray}{rCl}\label{thm-1} &&J_Q=\frac{\sin(\hat{m}\pi )}{4\pi }\sum_{l=0}^{\infty}\sum_{n=0}^{\infty}\frac{\left(\frac{1}{2}\right)_{l+n}}{(2)_{l+n}}\frac{\left(\frac{\hat{m}(K+1)}{\bar{\gamma }\delta_{2,j}K+\hat{m}(K+1)}\right)^{n+\hat{m}}}{l!n!} G_{0,1:1,1:1,1}^{1,0:1,1:1,1}\left( \begin{array}{c} \mbox{---}\\ 0\\ \end{array}\middle\vert \begin{array}{c} 1-\hat{m}-n\\ 0\\ \end{array}\middle\vert \begin{array}{c} \hat{m}-l \\ 0\\ \end{array}\middle\vert \frac{\hat{m}\bar{\gamma }\delta_{2,j}}{\bar{\gamma }\delta_{2,j}K+\hat{m}(K+1)},\frac{\bar{\gamma }\delta_{2,j}}{K+1} \right). 
\end{IEEEeqnarray} \vspace*{-4pt} \begin{IEEEeqnarray}{rCl}\label{cor-1} \hspace{-5pt}{\rm ABER_{err}}(L,N)\leq \delta_{1}\sum_{j=1}^{\delta_{3}}\frac{\frac{K+1}{\bar{\gamma }\delta_{2,j}}e^{\frac{K+1}{\bar{\gamma }\delta_{2,j}}}}{4\left(\frac{K+1}{\bar{\gamma }\delta_{2,j}}+\frac{K}{m}\right)^{m+N}}\Gamma \left(m-L,\frac{K+1}{\bar{\gamma }\delta_{2,j}}\right)\left|\mbox{}_2F_1\left(\frac{1}{2},1;2;\frac{K+1}{\bar{\gamma }\delta_{2,j}}\right)- \sum_{l=0}^{L}\sum_{n=0}^{N}\frac{(\sfrac{1}{2})_{l+n}(1-m)_l(m)_n}{(2)_{l+n}}\frac{\left(\frac{K+1}{\bar{\gamma }\delta_{2,j}}\right)^{l+n}}{n!l!} \right| \end{IEEEeqnarray} \end{figure*} Assuming that the channel is normalized (i.e., $\mathbb{E}{|S|^2}=1$) and noticing that $|G_2|^2$ follows exponential distribution with unit mean value (see \cite{Lop21}), the probability density function of the instantaneous signal-to-noise ratio $\gamma$ (defined in terms of the average signal-to-noise ratio $\bar{\gamma }$ and the squared signal envelope $|S|^2$, i.e., $\gamma=\bar{\gamma }|S|^2$) is given by (see \cite{Lop21}) \setcounter{equation}{1} \begin{IEEEeqnarray}{rCl} \label{eq-pdf-full} &&f_{\gamma }(\gamma )=\int_{0}^{\infty}f_{\gamma_x}(\gamma|x) e^{-x}{\rm d}x \end{IEEEeqnarray} where $f_{\gamma_x}(\gamma|x)$ is the conditional probability density function conditioned on $x=|G_3|^2$ and defined as \begin{IEEEeqnarray}{rCl} \label{eq-pdf-conditional} &&\hspace{-10pt}f_{\gamma_x}(\gamma|x)=\frac{m^m(1+k_x)}{(m+k_x)^m\bar{\gamma }_x}e^{-\frac{1+k_x}{\bar{\gamma }_x}\gamma }\!\! \mbox{}_1F_1\left(m,1,\!\frac{k_x(1+k_x)}{k_x+m}\frac{\gamma }{\bar{\gamma }_x}\right), \end{IEEEeqnarray} where $\mbox{}_1F_1( \cdot )$ denotes the confluent hypergeometric function \cite{DLMF}, $\bar{\gamma }_x=\frac{K+x}{K+1}\bar{\gamma }$, $k_x=\frac{K}{x}$, $K=\frac{\omega_0^2}{\omega^2_2}$ is the Rician K-factor, and $m$ is responsible for LoS shadowing intensity. 
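As a sanity check of the statistical model above, the channel \eqref{channel-model} is easy to sample directly. The following Python sketch is illustrative and not part of the letter; the normalization $\omega_0^2=K/(K+1)$, $\omega_2^2=1/(K+1)$ is an assumption that follows from the unit-power constraint $\mathbb{E}\{|S|^2\}=1$ together with the unit means of $\xi$ and $|G_2 G_3|^2$:

```python
import cmath
import math
import random


def sample_power(K, m, n=200_000, seed=1):
    """Return n samples of |S|^2 for S = w0*sqrt(xi)*exp(j*phi) + w2*G2*G3.

    K: Rician K-factor (linear scale); m: shape of the unit-mean Gamma shadowing xi.
    w0, w2 are chosen so that E{|S|^2} = w0**2 + w2**2 = 1.
    """
    rng = random.Random(seed)
    w0 = math.sqrt(K / (K + 1.0))
    w2 = math.sqrt(1.0 / (K + 1.0))
    sigma = math.sqrt(0.5)  # per-component std of a unit-power CN(0,1) variate
    out = []
    for _ in range(n):
        xi = rng.gammavariate(m, 1.0 / m)        # unit-mean Gamma shadowing
        phi = rng.uniform(0.0, 2.0 * math.pi)    # uniform LoS phase
        g2 = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        g3 = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        s = w0 * math.sqrt(xi) * cmath.exp(1j * phi) + w2 * g2 * g3
        out.append(abs(s) ** 2)
    return out


powers = sample_power(K=1.0, m=2.5)
print(sum(powers) / len(powers))  # close to 1 for the unit-power channel
```

The instantaneous SNR samples used below are then simply $\gamma=\bar{\gamma }|S|^2$.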
It should be specifically emphasized that due to the complex substitutions, \eqref{eq-pdf-full} does not have a closed-form solution for arbitrary values of the parameters, although \cite{Lop21} presents a simplified result for the case of $m \in \mathbb{Z}_{+}$. Its weak point is that it does not cover the practically valuable case of $0.5\leq m<1$, which constitutes the heaviest fading scenario (the so-called hyper-Rayleigh regime, see \cite{Fro07, Gar19}). \subsection{System performance metrics} The primary metric, assumed herein to characterize the wireless communication link quality in the presence of fading, is the average bit error rate. It is defined in terms of the instantaneous BER (i.e., $\mathrm{BER}\left(\gamma \right)$) averaged over the stochastic variations of the instantaneous signal-to-noise ratio with the PDF $f_{\gamma }\left(\gamma \right)$: \begin{IEEEeqnarray}{rCl} \label{eq:ABER} &&\mathrm{ABER}=\delta_{1}\sum_{j=1}^{\delta_{3}}\int_{0}^{\infty}Q(\sqrt{2\delta_{2,j}\gamma })f_{\gamma }(\gamma ){\rm d}\gamma, \end{IEEEeqnarray} with $Q( \cdot )$ being the Gauss Q-function. It should be noted that \eqref{eq:ABER} is the so-called ``BER unified approximation'' (see \cite{Lu99}) and holds true for a wide variety of modulation schemes with the set of coefficients $\left\{ \delta_{1},\delta_{2,j},\delta_{3}\right\} $ explicitly defined for a specific modulation (see, for instance, \cite{Sim05, Lu99}): for M-QAM $\left\{ \frac{4\left(1-\sfrac{1}{\sqrt{M}}\right)}{\log_2 M}, \frac{3(2j-1)^2}{2(M-1)}, \frac{\sqrt{M}}{2}\right\}$, for M-PSK $\left\{ \frac{1}{\max(2,\log_2 M)}, 2\sin^2\left(\frac{(2j-1)\pi }{M}\right), \max\left(1,\frac{M}{4}\right) \right\}$. Thus the problem of the closed-form analytical ABER description effectively boils down to the solution of the integral $J_Q=\int_{0}^{\infty}Q(\sqrt{2\delta_{2,j}\gamma })f_{\gamma }(\gamma ){\rm d}\gamma$. 
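The coefficient sets quoted above are straightforward to tabulate. A small Python helper (an illustrative sketch based on the M-QAM/M-PSK sets of the unified approximation; not part of the letter) makes them concrete:

```python
import math


def qam_coeffs(M):
    """Unified-approximation coefficients {delta1, delta2_j, delta3} for square M-QAM."""
    d1 = 4.0 * (1.0 - 1.0 / math.sqrt(M)) / math.log2(M)
    d3 = round(math.sqrt(M) / 2)
    d2 = [3.0 * (2 * j - 1) ** 2 / (2.0 * (M - 1)) for j in range(1, d3 + 1)]
    return d1, d2, d3


def psk_coeffs(M):
    """Unified-approximation coefficients {delta1, delta2_j, delta3} for M-PSK."""
    d1 = 1.0 / max(2, math.log2(M))
    d3 = max(1, round(M / 4))
    d2 = [2.0 * math.sin((2 * j - 1) * math.pi / M) ** 2 for j in range(1, d3 + 1)]
    return d1, d2, d3


print(qam_coeffs(64))   # QAM-64: delta3 = 4 inner terms
print(psk_coeffs(8))    # 8-PSK: delta3 = 2 inner terms
```

With these sets, \eqref{eq:ABER} reduces to a short sum of $J_Q$ integrals with different $\delta_{2,j}$.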
Even though the problem is typical for such a formulation and has been studied numerous times \cite{Sim98,Sim05,Gvo21,Sou13,Odr09}, the availability of a closed-form solution highly depends on the form of $f_{\gamma }(\gamma )$. It must be pointed out that due to the novelty of the assumed model, no solution of $J_Q$ has been reported up to now, and, because of the complexity of the PDF \eqref{eq-pdf-full} discussed earlier, it cannot be directly obtained from the existing results. Moreover, if numerical integration in $J_Q$ is utilized, the procedure is highly sensitive to the parameter values and is unstable for large $m$ or $K$, which means that the required working accuracy and precision make the solution time-consuming. \section{Derived results} Let us derive the closed-form solution for the integral $J_Q$ by using the moment generating function (MGF) approach. First, a conditional MGF for the PDF \eqref{eq-pdf-conditional} is evaluated (valid for arbitrary $m$), and the conditional version of $J_Q$ is obtained, which is further averaged with the help of \eqref{eq-pdf-full}. The result is given by the following Theorem 1. 
\begin{thm} For the fluctuating double-Rayleigh with line-of-sight fading channel the following statements hold true: \begin{itemize}[ \setlength{\IEEElabelindent}{\dimexpr-\labelwidth-\labelsep} \setlength{\itemindent}{\dimexpr\labelwidth+\labelsep} \setlength{\listparindent}{\parindent} ] \item the closed-form expression of the integral $J_Q$ for arbitrary positive values of the shadowing parameter $m$ is given by \eqref{thm-1} (see the bottom of the page), where $G_{0,1:1,1:1,1}^{1,0:1,1:1,1}( \cdot )$ is the extended generalized bivariate Meijer G-function (EGBMG)\footnote{The EGBMG is defined in terms of the double Mellin-Barnes integral (see equation (13.1) in \cite{Hai92}) with the integration contours $\mathcal{L}_s, \mathcal{L}_t$ in the corresponding domains of the complex variables $s$ and $t$ chosen in such a way as to separate the specific singularities of the integrand. For practical implementation, computation methods and procedures see \cite{Ans11,Gar14}.} and $\hat{m}=\begin{cases}m, & m \notin \mathbb{Z}^{+}_0, \\ m(1+\Delta ), & m \in \mathbb{Z}^{+}_0,\end{cases}$ with $\Delta$ being an infinitesimal shift; \item the computationally efficient approximation of the ABER \eqref{eq:ABER} is given by $\mathrm{ABER}\approx\delta_{1}\sum_{j=1}^{\delta_{3}}J_Q(L,N)$, where $J_Q(L,N)$ is the truncated version of \eqref{thm-1} with $(L,N)$ remaining terms, and the induced truncation error (${\rm err}(L,N)=J_Q-J_Q(L,N)$) is upper-bounded by \eqref{cor-1} (see the bottom of the page). \end{itemize} \end{thm} \begin{IEEEproof}For proof see APPENDIX I.\end{IEEEproof} The formulated results form a solid basis for further closed-form analysis, as well as for numerical optimization of the performance of wireless communication systems operating in the presence of fdRLoS channels. 
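As a brute-force reference for the closed-form result above (the comparison baseline used in Section IV), \eqref{eq:ABER} can also be estimated by Monte-Carlo averaging over channel realizations. The sketch below is illustrative, not the letter's procedure; the default QPSK coefficients $\delta_1=\sfrac{1}{2}$, $\delta_{2,1}=2\sin^2(\sfrac{\pi }{4})=1$, $\delta_3=1$ come from the M-PSK set quoted earlier, and $Q(x)=\frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ is assumed:

```python
import cmath
import math
import random


def aber_mc(K, m, snr_db, d1=0.5, d2=(1.0,), n=100_000, seed=7):
    """Monte-Carlo estimate of ABER = d1 * sum_j E[Q(sqrt(2*d2[j]*gamma))],
    gamma = snr * |S|^2, with S drawn from the fdRLoS model (QPSK by default)."""
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)
    w0 = math.sqrt(K / (K + 1.0))     # unit-power normalization (assumption)
    w2 = math.sqrt(1.0 / (K + 1.0))
    sigma = math.sqrt(0.5)

    def qfunc(x):                     # Gauss Q-function via erfc
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    acc = 0.0
    for _ in range(n):
        xi = rng.gammavariate(m, 1.0 / m)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        g2 = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        g3 = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        s = w0 * math.sqrt(xi) * cmath.exp(1j * phi) + w2 * g2 * g3
        gamma = snr * abs(s) ** 2
        acc += sum(qfunc(math.sqrt(2.0 * dj * gamma)) for dj in d2)
    return d1 * acc / n


# Deep fading (hyper-Rayleigh m = 0.5): the ABER decays slowly with SNR.
print(aber_mc(K=1.0, m=0.5, snr_db=0.0), aber_mc(K=1.0, m=0.5, snr_db=20.0))
```

Such an estimator is slow to converge at low error rates, which is exactly why the truncated series of Theorem 1 is the more practical tool.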
From the practical point of view, the proposed approximation is given in terms of a double series that converges very quickly, and for moderate $m$, even a single term is enough to deliver at least $3$-digit accuracy (see Section IV for numerical examples). Moreover, in real-life applications, it is important to understand to what extent the channel impacts the ABER. This can be estimated by evaluating the performance under extreme fading conditions, i.e., for the whole range of channel parameters, which is given by the following Theorem 2. \begin{thm} In the extreme cases, the integral $J_{Q}$, defining the limiting performance of the assumed modulation schemes for the fdRLoS channel, is given by: \begin{itemize}[ \setlength{\IEEElabelindent}{\dimexpr-\labelwidth-\labelsep} \setlength{\itemindent}{\dimexpr\labelwidth+\labelsep} \setlength{\listparindent}{\parindent} ] \item in the case of the high SNR regime (i.e., $\bar{\gamma }\to\infty$) \begin{IEEEeqnarray}{rCl} \setcounter{equation}{7}\label{thm-2-1} &&J_Q\big|_{\bar{\gamma }\to \infty} \sim \frac{(K+1)}{4\bar{\gamma }\delta_{2,j}}\Gamma (m)U\left(m,1,\frac{K}{m}\right), \end{IEEEeqnarray} where $U( \cdot )$ is the Tricomi confluent hypergeometric function; \item in the case of the strong dominant component (i.e., $K\to\infty$) \begin{IEEEeqnarray}{rCl}\label{thm-2-2} &&\hspace{-10pt}J_Q\big|_{K \to\infty} \sim \frac{1}{2\sqrt{\pi }}\frac{\Gamma \left(m+\frac{1}{2}\right)}{\Gamma (m+1)}\frac{\mbox{}_2F_1\left(\frac{1}{2},m;m+1;\frac{m}{m+\bar{\gamma }\delta_{2,j}}\right)}{\left(1+\frac{\bar{\gamma }\delta_{2,j}}{m}\right)^m}, \end{IEEEeqnarray} where $\mbox{}_2F_1( \cdot )$ is the Gauss hypergeometric function; \item in the case of the weak dominant component (i.e. 
$K\to 0$) \begin{IEEEeqnarray}{rCl}\label{thm-2-3} \hspace{-30pt}J_Q\big|_{K\to 0}&\sim& \frac{1}{2}-\frac{\sqrt{\pi }}{4}U\left(\frac{1}{2},0,\frac{1}{\bar{\gamma }\delta_{2,j}}\right)\\ \hspace{20pt}&=&\frac{1}{2}-\frac{e^{\frac{1}{2\bar{\gamma }\delta_{2,j}}}}{4\bar{\gamma }\delta_{2,j}}\left(K_1\left({\frac{1}{2\bar{\gamma }\delta_{2,j}}}\right)-K_0\left({\frac{1}{2\bar{\gamma }\delta_{2,j}}}\right)\right)\IEEEnonumber, \end{IEEEeqnarray} where $K_0( \cdot ), K_1( \cdot )$ are the modified Bessel functions; \item in the cases of weak shadowing (i.e. $m\to \infty$) and the heaviest shadowing (i.e. $m\to \sfrac{1}{2}$) \begin{IEEEeqnarray}{rCl}\label{thm-2-4} &&\hspace{-10pt} J_Q\big|_{m \to \sfrac{1}{2}}=J_{Q}(L,N,m=\sfrac{1}{2}), \quad J_Q\big|_{m \to\infty} \sim J_{Q,m\to\infty}, \end{IEEEeqnarray} where $J_{Q}(L,N,m=\sfrac{1}{2})$ is defined by the truncated version of \eqref{thm-1} and $J_{Q,m\to\infty}$ is defined in \eqref{eq_thm-2-5}. \end{itemize} \end{thm} \begin{IEEEproof}For proof see APPENDIX II.\end{IEEEproof} It should be noted that all of the special functions used in \eqref{thm-2-1}-\eqref{thm-2-4} are readily accessible in all modern computer algebra systems for further numeric and analytical computations. To the best of the author's knowledge, the ABER analysis of the fdRLoS channel is absent from the current scientific literature, and the results \eqref{thm-1}-\eqref{thm-2-4} are novel and have not been reported previously. \section{Simulation and results} To verify the correctness of the derived closed-form solution and approximation (see Theorem 1), as well as the asymptotic expressions (see Theorem 2), a numeric simulation was performed. For all the plots (see Figs.~1-2), the results obtained with the help of the derived solution (solid coloured lines) are accompanied by the results derived via numerical integration in \eqref{eq:ABER} (point markers) and simulation (diamond-shaped markers). It is clear that the results accurately match each other. 
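The weak-LoS limit also admits a quick independent cross-check, since the integral form \eqref{eq_thm-2-4} of the asymptote \eqref{thm-2-3} involves only elementary functions. The Python sketch below is illustrative (simple trapezoidal quadrature on a truncated range is an assumption, not the letter's procedure):

```python
import math


def jq_weak_los(snr_delta, xmax=60.0, steps=60_000):
    """Evaluate J_Q|_{K->0} = 1/2 - (1/2) * int_0^inf sqrt(g*x/(1+g*x)) e^{-x} dx
    by the trapezoidal rule on [0, xmax]; g = snr * delta_{2,j} (linear scale)."""
    g = snr_delta
    h = xmax / steps

    def f(x):
        return math.sqrt(g * x / (1.0 + g * x)) * math.exp(-x)

    area = 0.5 * (f(0.0) + f(xmax)) + sum(f(i * h) for i in range(1, steps))
    return 0.5 - 0.5 * h * area


# The limit runs from 1/2 at vanishing SNR down to 0 as snr_delta grows.
print(jq_weak_los(1.0), jq_weak_los(100.0))
```

The same values should be reproduced by the Bessel-function form in \eqref{thm-2-3}.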
Channel parameters were chosen in such a way as to take into consideration: hyper-Rayleigh fading $0.5\leq m<1$ (here, the case of $m=0.5$) and light fading ($m=3.5$ and $m=3$ in Fig. 1 and Fig. 2, respectively); and a strong/weak dominant component ($K=10$~dB/$-10$~dB). Moreover, the computations were performed both for PSK and lower-order QAM modulations, as well as for high-order QAM (actively employed in modern communication standards). The shift $\Delta$ (used to account for integer values of $m$, see the Proof of Theorem 1) was set to $10^{-5}$. \begin{figure}[!t] \centerline{\includegraphics[width=\columnwidth]{fig1.pdf}} \caption{ABER versus $\bar{\gamma }$: solid lines - proposed analytical solution \eqref{thm-1}, point markers - numeric integration in \eqref{eq:ABER}, dashed lines - proposed asymptotic solution \eqref{thm-2-1}, diamond-shaped markers - numeric simulation.} \label{fig1} \end{figure} \begin{figure}[!t] \centerline{\includegraphics[width=\columnwidth]{fig2.pdf}} \caption{ABER versus $K$ for $\bar{\gamma }=30$~dB: solid lines - proposed analytical solution \eqref{thm-1}, point markers - numeric integration in \eqref{eq:ABER}, dashed blue lines - proposed asymptotic solution for $K\to \infty$ \eqref{thm-2-2}, dashed black lines - proposed asymptotic solution for $m\to \infty$ \eqref{thm-2-4}.} \label{fig2} \end{figure} \begin{table}[!t] \caption{Relative truncation error for various modulations and fading} \label{table_example} \centering \begin{tabular}{|c|c|c|c|c|} \addlinespace \hline $N$ &\makecell{$m=0.5$\\ QAM-64} &\makecell{$m=2.5$\\ QAM-64}&\makecell{$m=0.5$\\ QAM-1024} &\makecell{$m=2.5$\\ QAM-1024} \\ \hline 1& $1.64742 \cdot 10^{-2}$& $1.55984 \cdot 10^{-1} $& $2.17328 \cdot 10^{-2}$ & $1.07278 \cdot 10^{-1}$ \\ \hline 2& $5.87588 \cdot 10^{-4}$& $6.005 \cdot 10^{-2}$ &$ 1.01106 \cdot 10^{-3} $& $2.3618 \cdot 10^{-2}$ \\ \hline 3& $3.30798 \cdot 10^{-5}$&$ 1.39152 \cdot 10^{-2}$ &$ 6.71084 \cdot 10^{-5}$ & $2.45898 \cdot 10^{-3}$\\ \hline 4& 
$2.37544 \cdot 10^{-6}$&$ 4.83533 \cdot 10^{-3}$ &$ 5.80182 \cdot 10^{-6} $& $6.02779 \cdot 10^{-4}$\\ \hline 5& $1.97965 \cdot 10^{-7}$& $2.03268 \cdot 10^{-3}$ & $6.39829 \cdot 10^{-7}$ &$ 2.9732 \cdot 10^{-4}$\\ \hline \end{tabular} \end{table} To study the discrepancy between the closed-form solution and its approximation proposed in Theorem 1, an analysis of the relative truncation error $\frac{{\rm ABER_{err}(N,N)}}{\rm ABER}$ was carried out (see Table I), where ${\rm ABER_{err}(N,N)}$ was evaluated with \eqref{cor-1} and ${\rm ABER}$ with numerical integration in \eqref{eq:ABER}, assuming $K=5$~dB and $\bar{\gamma }=20$~dB. It is clear that for strong shadowing (i.e., $m=0.5$) only a single term in \eqref{thm-1} is enough (i.e. $L=N=0$), irrespective of the modulation order, and for moderate $m$ the truncation with $L=N=4$ helps to deliver at least 3-digit accuracy and to speed up the calculation (by 2-4 times, depending on $m$) compared to the numerical integration in~\eqref{eq:ABER}. For all of the plots, the results obtained with the proposed solution and numeric integration are supplemented by the derived asymptotics: the high-SNR asymptotics \eqref{thm-2-1} in Fig. 1 (see dashed black lines), the strong/weak dominant component asymptotics \eqref{thm-2-2}/\eqref{thm-2-3} (see dashed blue/black lines in Fig. 2), and the light/heavy shadowing asymptotics \eqref{thm-2-4} (see dashed blue/black lines in Fig. 2). It can be seen (see Fig. 1) that for the fdRLoS channel, fading deeply impacts the overall system performance: for the hyper-Rayleigh case (i.e. $m=0.5$, $K=0$~dB), even lower-order modulations (QPSK or QAM-4) lose about $10$~dB at high SNR. 
Moreover, $\bar{\gamma }$ can be easily connected to the relative (to some reference spacing $d_0$) distance $d$ between the transmitter and the receiver, accounting for the path loss (via the path loss exponent $\alpha$), antenna characteristics and the average channel attenuation ($\chi$), for example, in the following form: $\bar{\gamma }=\bar{\gamma }_{\rm tr}+10\lg\left(\chi \left(\sfrac{d_0}{d}\right)^{\alpha }\right)$. Here $\bar{\gamma }_{\rm tr}$ (expressed in decibels) is the average SNR at the transmitter output. Thus, combined with the carried-out analysis, such a connection can help estimate the system performance under distance-dependent attenuation. In practice, channel parameters are usually unknown and are estimated on the go from measurements, and the inference quality has a crucial effect on the overall system performance. Thus it is important to know how strongly variations of the estimated parameters impact the assumed quality metric. For instance, if the impact is negligible (i.e., the asymptotic regime in the corresponding parameter), coarse yet fast inference procedures and algorithms are preferred; otherwise, more complex (and usually slower) methods are needed. Fig. 2 demonstrates that scenarios with $K\leq -20$~dB and $K\geq 30$~dB can be assumed almost asymptotic for any $m$ and constellation size. Moreover, it was found that in the case of $m<1$ the fdRLoS channel exhibits specific behavior: $K\to \infty$ actually delivers a higher ABER than $K\to 0$; moreover, the minimum ABER is observed when the total power of the multipath components equals the power of the LoS component (i.e. $K\approx 0$~dB). It can be observed that the impact of the shadowing parameter $m$ in the case of a weak LoS component is tangible only for $m\leq 2$, irrespective of the modulation order. But for large $K$ the asymptotics cannot be reached even with $m=20$ (see Fig. 2). In addition, it is clear that the rate of change of the ABER curve with the increase of $m$ is limited. 
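The link-budget relation above can be written as a one-line helper. This is an illustrative sketch: the reference distance $d_0$, path loss exponent $\alpha$ and attenuation $\chi$ defaults are hypothetical, and the customary $10\lg( \cdot )$ decibel conversion is assumed:

```python
import math


def avg_snr_db(snr_tr_db, d, d0=1.0, alpha=3.0, chi=1.0):
    """Average SNR (dB) at distance d: gamma_tr + 10*lg(chi * (d0/d)**alpha)."""
    return snr_tr_db + 10.0 * math.log10(chi * (d0 / d) ** alpha)


# With alpha = 3, each decade of distance costs 30 dB of average SNR.
print(avg_snr_db(30.0, 10.0))  # ~0 dB at d = 10*d0 for 30 dB at the transmitter
```

Feeding such distance-dependent $\bar{\gamma }$ values into the ABER expressions gives a coverage-versus-error-rate picture directly.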
One can see that the derived asymptotics \eqref{thm-2-1}-\eqref{thm-2-4} excellently describe the ABER floor, which exists due to the shadowed fading nature of the channel. \section{Conclusions} The letter presents a closed-form and asymptotic analysis of the average bit error rate of a communication system in the presence of the fluctuating double-Rayleigh with line-of-sight fading channel. The derived expressions are valid for arbitrary (integer/non-integer) channel parameters and are expressed in terms of classical special functions readily accessible in most modern computer algebra systems. The proposed asymptotic bounds cover all possible fading scenarios. All the derived expressions were verified by comparison with brute-force numerical integration and demonstrate close agreement. \appendices{} \section{Proof of Theorem 1} \begin{IEEEproof} To prove Theorem 1, one starts with the fact that the conditional MGF of $\gamma_x$ (i.e. $\mathcal{M}_{\gamma_x}(p|x)=\mathbb{E}\{e^{p\gamma_x}\}$) can be represented as a Laplace transform of the conditional PDF \eqref{eq-pdf-conditional}. 
Applying equation (3.35.1.1) (see \cite{Pru92}) and performing simplifications yields: \begin{IEEEeqnarray}{rCl}\label{eq_thm-1-2} &&\mathcal{M}_{\gamma_x}(p|x)=-\left(\frac{m}{m+K}\right)^m\frac{(K+1)}{\bar{\gamma }p}\frac{\left(1-\frac{(K+1)}{\bar{\gamma }p}\right)^{m-1}}{\left(1-\frac{m(K+1)}{\bar{\gamma }(K+mx)p}\right)^{m}}. \end{IEEEeqnarray} It can be noted that \eqref{eq_thm-1-2} is given in the form of a factorized power-type MGF (see \cite{Gvo21}); thus, applying Lemma~2 from \cite{Gvo21} and denoting $\psi_1=\frac{K+1}{\bar{\gamma }\delta_{2,j}}$ and $\psi_2=\psi_1+\frac{K}{m}$ yields: \begin{IEEEeqnarray}{rCl}\label{eq_thm-1-3} &&J_Q=\frac{\psi_1}{4}\int_0^{\infty}\frac{F_D^{(2)}\!\left(\frac{1}{2};1\!\!-m,m;2;\frac{\psi_1}{\psi_1+x},\frac{\psi_1}{\psi_2+x}\right)}{(\psi_1+x)^{1-m}(\psi_2+x)^{m}}e^{-x}{\rm d}x, \end{IEEEeqnarray} where $F_D^{(2)}( \cdot )$ is the Lauricella hypergeometric function of two variables \cite{DLMF}. Noticing that $F_D^{(2)}( \cdot )=F_1( \cdot )$ (with $F_1( \cdot )$ being the Appell function) and that its arguments are less than 1 (since $\psi_2>\psi_1$), it can be represented with the convergent series (see equation (16.13.1) in \cite{DLMF}). Since $F_D^{(2)}$ can be upper-bounded (see (2.15) in \cite{Car66}), yielding a finite majorization, and the summation can be treated as an integration with respect to the counting measure, Fubini's theorem guarantees that the order of the summation and the integration can be interchanged. Thus, reorganizing the multipliers, $J_Q$ can be written as: \begin{IEEEeqnarray}{rCl}\label{eq_thm-1-4} &&\hspace{-20pt}J_Q\!=\!\sum_{l=0}^{\infty}\sum_{n=0}^{\infty}\!\frac{(\sfrac{1}{2})_{l+n}(1-m)_l(m)_n}{4\psi_1^{-l-n-1}(2)_{l+n}n!l!} \underbrace{\!\int_0^{\infty}\!\frac{(\psi_1+x)^{m-l-1}}{(\psi_2+x)^{m+n}}e^{-x}{\rm d}x}_{J_1(l,n)}. 
\end{IEEEeqnarray} Applying the relations between the integrands and Meijer G-functions: $(1+x)^{\alpha }=\frac{1}{\Gamma (\alpha )}G_{1,1}^{\,1,1}\!\left(\left.{\begin{matrix}1+\alpha \\0\end{matrix}}\;\right|\,x \right)$, $e^{-x}=G_{0,1}^{\,1,0}\!\left(\left.{\begin{matrix}\mbox{---}\\0\end{matrix}}\;\right|\,x\right)$, for the case $m \notin \mathbb{Z}^{+}_0$ the integral $J_1$ can be evaluated in terms of the extended generalized bivariate Meijer G-function (see equation (13.1) in \cite{Hai92}): \begin{IEEEeqnarray}{rCl}\label{eq_thm-1-5} J_1(l,n)&&=\left(\frac{\psi_1}{\psi_2}\right)^m\frac{\psi_1^{-l-1}\psi_2^{-n}}{\Gamma (1-m+l)\Gamma (m+n)} \times \IEEEnonumber\\ &&\hspace{-15pt} \times G_{0,1:1,1:1,1}^{1,0:1,1:1,1}\left( \begin{array}{c} \mbox{---}\\ 0\\ \end{array}\middle\vert \begin{array}{c} 1-m-n\\ 0\\ \end{array}\middle\vert \begin{array}{c} m-l \\ 0\\ \end{array}\middle\vert \frac{1}{\psi_2},\frac{1}{\psi_1} \right). \end{IEEEeqnarray} Collecting \eqref{eq_thm-1-4} and \eqref{eq_thm-1-5}, reorganizing the summands and applying the fact that $\frac{(1-m)_l(m)_n}{\Gamma (1-m+l)\Gamma (m+n)}=\frac{\sin(m\pi )}{\pi }$ yields the desired form \eqref{thm-1}. Finalizing the proof of the first part of the statement, it can be noted that \eqref{eq_thm-1-4} is monotone in $m$ and its rate of change is small enough (see Section~IV). To expand the solution to all possible positive values of $m$ (including integers), one proposes to perform an infinitesimal shift $\Delta$ of the parameter $m$ in the case $m \in \mathbb{Z}^{+}_0$. Thus the resultant solution is valid for arbitrary values of $m$, as demonstrated in Section~IV. Truncation of the closed-form solution \eqref{thm-1} to $(L,N)$ terms introduces the error ${\rm err}(L,N)=\!\sum_{\substack{l=L+1\\n=N+1}}^{\infty}\!\!\frac{(\sfrac{1}{2})_{l+n}(1-m)_l(m)_n}{4\psi_1^{-l-n-1}(2)_{l+n}n!l!}J_{1}(l,n)$, which can be estimated as follows. 
Integral $J_1(l,n)$ can be upper-bounded by $J_{1}(L,N)$, and since its denominator is increasing in $x$, then $J_{1}(L,N)\leq \int_0^{\infty}\frac{(\psi_1+x)^{m-L-1}}{\psi_2^{m+N}}e^{-x}{\rm d}x=\frac{e^{\psi_1}\Gamma (m-L,\psi_1)}{\psi_2^{m+N}}$, where $\Gamma (\cdot, \cdot )$ is the upper incomplete gamma-function. The residual series can be represented as $\sum_{\substack{l=L+1\\n=N+1}}^{\infty}=\sum_{\substack{l=0\\n=0}}^{\infty}-\sum_{l=0}^{L}\sum_{n=0}^{N}$. Assuming that $\psi_1<1$ (needed for convergence), the first series represents the Gauss hypergeometric function $\mbox{}_2F_1(\sfrac{1}{2},1;2;\psi_1)$; thus \eqref{cor-1} follows. \end{IEEEproof} \section{Proof of Theorem 2} \begin{IEEEproof} Assuming that $\bar{\gamma }\to \infty$, integral \eqref{eq_thm-1-3} can be simplified to the following form: \begin{IEEEeqnarray}{rCl}\label{eq_thm-2-1} &&J_Q\big|_{\bar{\gamma }\to \infty} \sim \frac{\psi_1}{4}\int_0^{\infty}\frac{x^{m-1}}{(\frac{K}{m}+x)^{m}}e^{-x}{\rm d}x. \end{IEEEeqnarray} It follows from the fact that if $\bar{\gamma }\to \infty$, then $\psi_1\to 0$ and $\psi_2\to \sfrac{K}{m}$, hence the first term of the Taylor series expansion of $F_D^{(2)}\!\left(\frac{1}{2};1\!\!-m,m;2;\frac{\psi_1}{\psi_1+x},\frac{\psi_1}{\psi_2+x}\right)$ in the vicinity of 0 will be $F_1\!\left(\frac{1}{2};1\!\!-m,m;2;0,0\right)=1$. Applying the result (13.4.4) from~\cite{DLMF} helps to state that $J_Q\big|_{\bar{\gamma }\to \infty} \sim \frac{(K+1)}{4\bar{\gamma }\delta_{2,j}}\Gamma (m)U\left(m,1,\frac{K}{m}\right),$ where $U( \cdot )$ is the Tricomi confluent hypergeometric function. To prove the limiting performance as $K\to \infty$ one can notice that $\sfrac{\psi_1}{K}\to \sfrac{1}{\bar{\gamma }\delta_{2,j}}$ and $\sfrac{\psi_2}{K}\to \sfrac{1}{\bar{\gamma }\delta_{2,j}}+\sfrac{1}{m}$, thus \begin{IEEEeqnarray}{rCl}\label{eq_thm-2-3} &&\hspace{-7pt}J_Q\big|_{K\to \infty}\! \sim \! 
\!\int_{0}^{\infty}\frac{\!F_1\!\left(\frac{1}{2};1-m,m;2;1,\frac{m}{\bar{\gamma }\delta_{2,j}+m} \right)}{4\left(\frac{m}{\bar{\gamma }\delta_{2,j}+m}\right)^{-m}}e^{-x}{\rm d}x. \end{IEEEeqnarray} Note that in this case the Appell function can be simplified, i.e. $F_1\left(\frac{1}{2};1-m,m;2;1,\frac{m}{\bar{\gamma }\delta_{2,j}+m}\right)\to \frac{2}{\sqrt{\pi }}\frac{\Gamma (m+\sfrac{1}{2})}{\Gamma (m+1)}\mbox{}_2F_1\left(\sfrac{1}{2},m;m+1;\frac{m}{m+\bar{\gamma }\delta_{2,j}}\right)$. This yields the desired asymptotics \eqref{thm-2-2}. It should be specifically pointed out that hereinafter the limit and integral operations can be interchanged via the dominated convergence theorem, since the integrand exhibits point-wise convergence with respect to the limiting parameter, and (as was mentioned in Appendix I) $F_1( \cdot )$ can be upper-bounded, yielding an integrable expression. For the case $K\to 0$ one can note that $\psi_2\to \psi_1=\frac{1}{\bar{\gamma }\delta_{2,j}}$. Since the arguments of the Appell function coincide, one can make use of the relation (13.4.4) from \cite{DLMF}, i.e. $F_1(a;b_1,b_2;c;z,z)=\mbox{}_2F_1(a,b_1+b_2;c;z)$. Note that $\mbox{}_2F_1(\sfrac{1}{2},1;2;z)=\frac{2}{z}(1-\sqrt{1-z})$. Then, after simplifications, \begin{IEEEeqnarray}{rCl}\label{eq_thm-2-4} &&J_Q\big|_{K\to 0} \sim \frac{1}{2}-\frac{1}{2} \int_{0}^{\infty}\sqrt{\frac{\bar{\gamma }\delta_{2,j} x}{1+\bar{\gamma }\delta_{2,j} x}}e^{-x}{\rm d}x. \end{IEEEeqnarray} Evaluating the last integral with the help of equality (13.4.4)~\cite{DLMF} yields \eqref{thm-2-3}. The second form of this asymptotics can be evaluated by relating the Tricomi $U( \cdot )$ function to the modified Bessel functions (see~\cite{DLMF}). 
To find the asymptotic expression for $m\to \infty$ (hence $\psi_2\to \psi_1$), one again uses the limiting property of the Appell function (see the case $K\to 0$) and performs the linear argument transformation of the obtained hypergeometric function $\mbox{}_2F_1(a,b;c;z)=(1-z)^{-b}\mbox{}_2F_1(c-a,b;c;\frac{z}{z-1})$. Noticing that $\mbox{}_2F_1(\frac{3}{2},1;2;-z)=\frac{2}{z}(1-(\sqrt{1+z})^{-1})$ and that $\lim_{m\to\infty}\left(\frac{\psi_1+x}{\psi_1+\frac{K}{m}+x}\right)^m=e^{-\frac{K}{x+\psi_1}}$, the asymptotics can be obtained in the following form: \begin{IEEEeqnarray}{rCl}\label{eq_thm-2-5} &&\hspace{-10pt}J_Q\big|_{m\to \infty} \sim \frac{1}{2} \int_{0}^{\infty}\!\!e^{-\frac{K}{x+\psi_1}}\!\left(1\!-\!\sqrt{\frac{\bar{\gamma }\delta_{2,j} x}{(K+1)+\bar{\gamma }\delta_{2,j} x}}\right)e^{-x}{\rm d}x. \end{IEEEeqnarray} Although the solution of the last integral cannot be obtained in closed form, it can be easily calculated numerically via fast and stable procedures. \end{IEEEproof} \bibliographystyle{IEEEtran} \phantomsection\addcontentsline{toc}{section}{\refname}\bibliography{IEEEabrv,BER_fdRLoS} \vspace{-2cm} \end{document}
TITLE: Are all values of $\sin(x)$ algebraic. QUESTION [2 upvotes]: Can we prove that for all $x$ in $(0,2\pi)$ $\sin(x)$ is an algebraic number? I have seen people express various values of $\sin(x)$ like $\sin(3)$ and $\sin(30)$ using radicals so I suspect that all values of $\sin(x)$ must be algebraic. Is that correct? Can we prove it? REPLY [8 votes]: Let $\frac{m}{n}\in\mathbb{Q}$, then $\sin\left(\frac{m}{n}\pi\right)=\sin\left(\frac{m}{n}180^\circ\right)$ is always algebraic: put $$ \alpha=e^{\frac{i\pi}{n}}=\cos\frac{\pi}{n}+i\sin\frac{\pi}{n}. $$ Then $\alpha^n+1=0$, i.e. $\alpha$ is an algebraic number (it is a root of the polynomial $X^n+1$) and hence both $$ \cos\frac{m\pi}{n}=\frac{\alpha^m+\alpha^{-m}}{2} \qquad\text{and}\qquad \sin\frac{m\pi}{n}=\frac{\alpha^m-\alpha^{-m}}{2i}, $$ are algebraic numbers. This shows that for a countable number of transcendental values of $x$, the value of $\sin(x)$ is an algebraic number. Conversely, it can be shown that if $x$ is an algebraic non-zero number, $\sin(x)$ is transcendental! This follows from the famous Lindemann-Weierstrass Theorem.
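The construction in the answer can be checked numerically. A quick illustration (not a proof) with an arbitrary choice of $m/n$:

```python
import cmath
import math

m, n = 3, 7  # illustrative choice of the rational m/n

alpha = cmath.exp(1j * math.pi / n)   # alpha = e^{i*pi/n}
s = (alpha**m - alpha**(-m)) / 2j     # the answer's formula for sin(m*pi/n)

assert abs(alpha**n + 1) < 1e-12      # alpha is a root of X^n + 1, hence algebraic
assert abs(s - math.sin(m * math.pi / n)) < 1e-12  # matches sin(m*pi/n)
assert abs(s.imag) < 1e-12            # and the value is real
```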
IMPACT's Wednesday night program is a high-energy competition that focuses on Bible memory, preaching and games. Each week our teams face off in exciting head-to-head competition. Wednesday nights take on a totally different feel from any of our other programs... you will be glad you came. The character-building sermons and lighter atmosphere will make any guest feel right at home. We look forward to seeing you this Wednesday. Looking for a youth group who enjoys great activities? Then Impact is for you! Summer camp, lock-ins, dodgeball, floor hockey, go-karts, from the Welcome Center or download your copy here. Reminder: Don't forget your Permission Form.
Hexrobox.site To Get Free Robux On Roblox, Really Hexrobox site Roblox is here, and it doesn't take long for Roblox players to become curious enough to try the hexroblox site, hoping that free robux can be obtained without having to buy it with the money they currently have. That is because hexroblox.site provides a free robux service for players who generate, free of charge and without having to pay with money. What is a hex roblox site? Hexrobox site is an online generator service that offers Roblox players free robux. Indeed, the presence of this kind of service makes many players curious and eager to try their luck, hoping that their account can get free robux at hexroblox.site free robux Roblox. We know that robux is a currency used in Roblox games as a tool to exchange for equipment in the game. So, quite a few people want to use instant ways to get robux for free, without players having to spend a penny of their own money. Is hexrobox.site free robux a scam or legit? Testing is needed, of course, to determine whether the hex roblox site is a scam and whether it is proven to provide free robux for its users. That's why we are here with you: to check whether free robux is really available on hexroblox.site. So does hexroblox.site really provide free robux, or is hexroblox site free robux a scam? There have been many services similar to the hexrobox site that promise free robux and are not proven to provide it. So, we suggest that you not expect a lot from vbgods and use other ways to get free robux, like using an application available on the Play Store, which will usually give you rewards if you successfully complete the missions given in the application. How to use hex roblox site? - First, your internet connection must be turned on. - Then visit hexroblox via the page: - Select the desired number of robux and press the Start button. - Then fill in the username box with the name of the Roblox account that you are using.
- Press the Enter key and wait a few moments. - Wait for the process until you can verify the robux that you got on hexroblox.site. Conclusion You also need to know that the owner of the Roblox game prohibits its users from using methods that are not recommended, such as using the hexroblox.site generator service. It's a good idea to use a safe way to get free robux without having to use an online generator service. That's all we have discussed about hexrobox.site as an online generator for Roblox users to get free robux. If you don't get it, then you can be sure that the Roblox site hex robox service is not proven to provide robux and that vbgods is a scam. Post a Comment for "Hexrobox.site To Get Free Robux On Roblox, Really"
Donald Trump is facing a problem that no other presidential nominee has experienced in the past. It is reported by CNN that the presumptive nominee for the Republican party is having trouble filling speaking slots at his party’s Convention taking place in two weeks in Cleveland, Ohio. With big names like Mitt Romney and John McCain already stating they will not be attending the convention, Trump is looking to his children and wife to fill up slots. While Trump tweeted that the speakers slots were totally filled and there was actually a waiting list, the lack of big name party members even attending the Convention is a sign of trouble. Typically, rising stars from the party, previous candidates and former Presidents would take the stage and rally behind the nominee. The speakers slots at the Republican Convention are totally filled, with a long waiting list of those that want to speak – Wednesday release — Donald J. Trump (@realDonaldTrump) July 2, 2016 Breaking with tradition, Donald Trump announced that he would be doing a “winner’s night” where big name sports stars would be speaking. However on Friday he admitted that he had not even reached out to anyone yet. That didn’t stop him from name dropping people like Tom Brady, Dana White and NASCAR CEO Brian France. After boasting that his winner’s night would be the most popular event at the Convention, you would think he would have the attendees lined up already. With all of the controversy surrounding Trump’s campaign, several big-name companies have announced that they would be pulling or scaling back their sponsorship of the event according to Bloomberg, including Wells Fargo & Co., United Parcel Service Inc., Ford Motor Co., and JPMorgan Chase & Co. While there have been no official reasons given, considering that these companies were all sponsors during the last election, it seems safe to assume that the candidate is the issue. 
While the initial plan was to hold off revealing his running mate choice until the Convention to make a production of it, the New York Times reported that the VP choice would be announced ahead of time. One of the likely reasons for doing so is the lack of big party names that will be in attendance. Trump’s preparations for the convention are looking a lot like his campaign: chaotic, freewheeling and unpredictable — The New York Times (@nytimes) July 1, 2016 After losing a lot of momentum in the month of June, Trump is no doubt hoping that the Convention will turn the tide back in his direction. After his attacks on the judge handling the Trump University lawsuit, several key Republicans openly came out against the party nominee, making his pick for Vice President an even more important decision. Among the rumored VP possibilities are Chris Christie, the first of Trump’s former rivals to publicly support him, and Newt Gingrich. Trump has said that he is looking for someone with political experience, and there is no question that these two bring that to the table. Another consideration in the choice is Donald Trump’s low likability rating, a problem he shares with Democratic nominee Hillary Clinton. Putting someone who is experienced and well-liked on the ticket is key. Neither Christie nor Gingrich seems to fit that bill. Donald Trump’s top two VP picks have both been hugely unpopular too pic.twitter.com/52S5zjIt4s — Chris Cillizza (@TheFix) July 1, 2016 The Republican Convention kicks off July 18 at the Quicken Loans Arena in Cleveland, Ohio. Given how unpredictable Donald Trump’s campaign has been, it seems likely that this will be a convention unlike any other. The full list of speakers for the Convention will be released on Wednesday. [Photo by Jeff J Mitchell/Getty Images]
1st September until 9th September. The winner will be announced on Monday 12th September.
– The total prize value for 2 people is £109.80, which includes the free annual pass to Blenheim Palace and afternoon tea. The time and date of the afternoon tea booking are subject to availability.
– The prize value of the Celebration afternoon tea is £30 per person at the Orangery at Blenheim Palace. Should the bill exceed the amount of £60 for 2 people, the winner may be required to pay the excess amount.
– The afternoon tea prize is valid from 13th September to 31st October. The annual pass is valid from 13th September, 2016 until 12th September, 2017.
– Any changes to this agreement are at the discretion of the restaurant.
What a wonderful prize. Good luck everyone.
Peacemakers Two weeks ago, I had the privilege of hearing a live performance of Karl Jenkins’ choral work, “The Peacemakers.” I pushed myself to go to the concert because a few years ago, I had heard another choral work of his – entitled “The Armed Man: A Mass for Peace” – and it has been transforming my heart and understanding ever since. There are 17 different pieces in “The Peacemakers”, each with a different text and a different musical expression. From the poetry of Shelley, the words of Gandhi, the Dalai Lama, Celtic prayer, former captive Terry Waite, Mother Teresa, the Qur’an, the Bible, St. Francis, Martin Luther King, and Rumi – the words evoke a universal longing for peace and community. There is even a musical interlude, entitled “Solitude”, which – for me – is a reminder that we need time alone; time to listen and be present to ourselves and one another and God. I didn’t know anything about “The Peacemakers”, but I had heard there would be a large chorus of adults and children, and an orchestra as well. I was not prepared to be so moved and so inspired. Nor was I prepared for the continuing prayer and creative wonderings since. The opening song, “Blessed are the Peacemakers”, takes its text from the Gospel of Matthew. Blessed are the Peacemakers for they will be called the children of God. While the adult voices began, the children’s voices repeated the word ‘children . . . children.’ At that moment, the tears began to flow, and I began to wonder – what would it be like if we valued peacemaking enough that we included classes on peacemaking as core curriculum for grades K – 12? What would it be like if our communities and organizations – schools, churches, businesses, towns – operated within a framework of dialogue and decision-making grounded in practices like Non-Violent Communication and restorative justice? What would it be like? I shared my thoughts with a friend, who put me in touch with an organization I didn’t know about – called PeaceFirst (). 
Their mission? To create the next generation of peacemakers. So with PeaceFirst, and many other organizations and programs whose missions are to promote and support fruitful, life-giving relationships, I pray for peace. I pray to be a peacemaker. Shalom. Shanti. Salam. Peace. Upcoming Programs and Events. Life on Purpose Weekend Retreat for Women Ages 20 – 23 with Paula Grieco, author of “Take 5 for Your Dreams” Martha’s Vineyard June 6 – 8 Preaching in Worship “And It Was Good” Acton Congregational Church Sunday, June 15 at 9:15 a.m.
TITLE: Let $G$ be a group of order 10, such that $a,b\in G$. $|a|=5$ and $|b|=2$. Prove $bab^{-1}$ is equal to $a$ or $a^{-1}$. QUESTION [3 upvotes]: I'm studying for an algebra exam by doing past papers and I've currently got stuck on the following problem, which I've been trying to solve for some time: Let $G$ be a group of order 10, such that $a,b\in G$. $|a|=5$ and $|b|=2$. Prove $bab^{-1}$ is equal to $a$ or $a^{-1}$. I've been given a hint to consider the element $b^{2}ab^{-2}$, but I'm still not getting very far. Any help would be greatly appreciated, thank you. REPLY [3 votes]: With just a bit more work, we can show a more general result: if $p$ is prime, and $|G| = 2p$, and $a,b\in G$ with $|a| = p$ and $|b|=2$, then $bab^{-1}$ is either $a$ or $a^{-1}$. First note that $\langle a \rangle$ has index $2$ and is therefore normal. This means that $bab^{-1} \in \langle a \rangle$, say $bab^{-1} = a^j$. Let $\phi$ denote conjugation by $b$, so $\phi(a) = bab^{-1} = a^j$. Then $\phi$ is an automorphism of $\langle a \rangle$, and since $|b| = 2$, we see that $\phi$ has order $2$, i.e. $\phi \circ \phi$ is the identity map on $\langle a\rangle$. So, on one hand $\phi(\phi(a)) = a$, but on the other hand, $\phi$ is a homomorphism, so $\phi(\phi(a)) = \phi(a^j) = (\phi(a))^j = (a^j)^j = a^{j^2}$. Combining these two results, we must have $a^{j^2} = a$. Thus, modulo $p$, we must have $j^2 = 1$, equivalently $j^2 - 1 = 0$, equivalently $(j-1)(j+1) = 0$. As $\mathbb Z_p$ is a field, this forces either $j-1=0$ or $j+1 = 0$, hence either $j = 1$ or $j = -1$. This means that either $\phi(a) = a$ or $\phi(a) = a^{-1}$. In either case, we have $G = \langle a \rangle \langle b \rangle$, with $\langle a \rangle \lhd G$ and $\langle a \rangle \cap \langle b \rangle = 1$, so this is a semidirect product of the form $\langle a \rangle \rtimes \langle b \rangle$.
In the case $\phi(a) = a$, we have $ab = ba$, so $a$ and $b$ commute, which means the product is direct: $G = \langle a \rangle \times \langle b \rangle$. Since both factors are abelian, so is $G$. In fact, since $|a|$ and $|b|$ are relatively prime, $G$ is cyclic. In the case $\phi(a) = a^{-1}$, we have $ba = a^{-1}b$, which yields the dihedral group of order $2p$.
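The dihedral case above can be sanity-checked with a small permutation computation (an illustration, not part of the proof). Model the dihedral group of order $2p$ with $p=5$ acting on $\mathbb Z_5$, taking $a:x\mapsto x+1$ (order 5) and $b:x\mapsto -x$ (order 2):

```python
# Permutations on Z_5 represented as tuples: f[x] is the image of x.
p = 5
a = tuple((x + 1) % p for x in range(p))       # a: x -> x + 1, order 5
b = tuple((-x) % p for x in range(p))          # b: x -> -x, order 2
a_inv = tuple((x - 1) % p for x in range(p))   # a^{-1}: x -> x - 1
identity = tuple(range(p))

def compose(f, g):
    """Permutation composition: (f o g)(x) = f(g(x))."""
    return tuple(f[g[x]] for x in range(p))

assert compose(b, b) == identity           # |b| = 2, so b^{-1} = b
assert compose(compose(b, a), b) == a_inv  # b a b^{-1} = a^{-1}, the second case
```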
Louis Vuitton Brea PM Amarante Authentic Pre Owned This item is no longer available Authenticity code: Louis Vuitton VI4089. The Louis Vuitton Brea PM is a gorgeous piece. It features handles and trimmings in natural cowhide leather, comfortable rounded handles, hand, elbow or cross-shoulder carry thanks to a removable adjustable strap, and gorgeous shiny vernis leather. VI40 Authentic: Preowned Exterior: Gently used Interior: Gently used Measurements: 11L x 8H x 4W. Adjustable strap (Approx.) Product Code: 1024
We have wished to fly ever since we were children, dreaming of becoming pilots. Being a pilot doesn’t necessarily mean that you transport people from point A to point B and back in a Boeing 747 as a full-time job. Some pilots just fly because they like it, because they are passionate about flying, and it’s a great way to spend their time doing something they enjoy, just like any other hobby or sport. Flying as a sport is practiced by many pilots, and while it is very rewarding and pleasant, it comes with high responsibilities. Flying as a sport: what it takes to be a pilot First of all, not everyone can just fly. Flying as a sport is far more complex than that. You need to be a pilot if you want to earn your wings. Joining an aeroclub can give you access to flight training whose purpose is to lead to a flying/pilot license or certificate. To fulfill your dream to fly, first you need to find a flight school that has the sport type of certificate available. Because the sport pilot certificate is relatively new, introduced in 2004, not every flying school will offer it. There are certain differences between the certificates in aviation. The sport pilot certificate is a certificate that doesn’t require a medical certificate in order to enroll in training. It allows people to become familiar with the basics of flying. The type of aircraft sport pilots will be qualified to fly is called light sport aircraft, or LSA. Other types of certificates, like private or recreational, will enable pilots to fly in bad weather, fly an airplane with two or more engines or even fly professionally. Flying classes will put people in an aircraft right in the first session, learning specific maneuvers to control the aircraft. At the end of the flying classes people become licensed pilots and will be allowed to fly a special type of aircraft, the LSA.
The Light Sport Aircraft Sport pilots don’t necessarily need to own an aircraft in order to fly, even if some may say otherwise, because aeroclubs offer beginner pilots the option to rent various types of aircraft for a couple of hours of flying. While buying their own aircraft is an expense larger than $40k, the possibility to rent one is very convenient for inexperienced pilots or those who can’t invest in an aircraft. The renting option allows people to pursue their passion. A light aircraft is a small aircraft which complies with several country regulations. For example, in the United States a light aircraft has one engine and at most one or two seats. The cabin is unpressurized and the propeller can be fixed-pitch or ground adjustable. The maximum gross takeoff weight is somewhere between 600 and 650 kg. The maximum stall speed a light aircraft reaches is 82 kph, and in flight, 222 kph. Safety of flying as a sport Safety of flying as a sport is one of the most important aspects considered by prospective pilots and their families. Before they pursue their passion, every pilot must assess whether the benefits of flying justify the risks. Most of the time, the safety of flying a plane is compared to the safety of driving a car. While car drivers are more exposed to accidents on the road, sport pilots are less exposed to collisions between aircraft in midair. The main hazards of flying an aircraft are flying in bad weather, especially if the pilot is not prepared or certified for it, maneuver and maintenance errors, engine or mechanical failures, running out of fuel in midair, and takeoff and landing accidents. If pilots continuously develop their flying skills, grow their experience and maintain their proficiency, the risk of errors and accidents is drastically reduced, but not eliminated completely. Other related sports Aerobatics is the practice of maneuvering an aircraft in loops and rolls, rotations and spins.
It is practiced for recreation or as a sport, and it is part of flight safety training for pilots. Gliding is a recreational activity and a sport that first began in 1920. Pilots fly aircraft called gliders that are unpowered and use air currents to remain airborne. Ballooning, or hot air ballooning, is an activity in which pilots fly hot air balloons. Ballooning is practiced mostly as a recreational activity or as a sport in competitions. This activity is preferred by people who want to enjoy a delightfully silent ride and beautiful bird’s-eye views. Flying is for pilots, besides being a passion, a real dedication, a true liberation from everyday stress. Compare the pros and the cons of flying a light sport aircraft, earn your wings and pursue your passion. Be free. Go fly!
An Object-Oriented Design for FluidDB Interfaces Introduction This post outlines the object-oriented design of Net::FluidDB. This model may serve as a starting point to other OO libraries. Only the most important interface is documented, not every existing method. The purpose is to give a picture of how the pieces fit together. Net::FluidDB is a Perl interface but there’s little Perl here. If you are interested in further implementation details please have a look at the source code. You can either download the module, or click the “Source” link at the top of each documentation page in CPAN. Design Goals Some design goals of Net::FluidDB: - To offer a higher abstraction to FluidDB than the plain REST API. - To find a balance between a normal object-oriented interface and performance, since most operations translate to HTTP calls. - To provide robust support for value types keeping usage straightforward. Goal (1) means that you should be able to work at a model level. For instance, given that tags are modeled you should be able to pass them around. Users should be able to tag an object with a Net::FluidDB::Tag instance and a value, for example. For convenience they can tag with a tag path as well, but there has to be a complete interface at the model level. Goal (2) is mostly accomplished via lazy attributes. For example, one would expect that a tag has an accessor for its namespace but it wouldn’t be good to fetch it right away, so we load it on-demand. Goal (3) is Perl-specific and I plan a separate post for it. The problem to address here is that the FluidDB types null, boolean, integer, float, and string have no direct counterpart in Perl, because Perl handles all of them under a single scalar type. Communication Net::FluidDB FluidDB has a REST interface and thus you need a HTTP client to talk to it. Net::FluidDB in particular uses a very mature Perl module called LWP::UserAgent. 
Calls to FluidDB need to set authentication, Accept or Content-Type headers, payload, … It is good to encapsulate all of that for the rest of the library: Of course some defaults may be handy, like a default protocol, host, or environment variables for credentials. The constructor new_for_testing() gives an instance pointing to the sandbox with test/test for people to play around. Net::FluidDB::JSON FluidDB uses JSON for native values and structured payloads, so you’ll need some JSON library. Net::FluidDB uses JSON::XS at this moment with a little configuration. That’s encapsulated in Net::FluidDB::JSON: The actual class has a few more methods for goal (3), but that’s a different post. Future Extensions FluidDB may speak more protocols and serialisation formats in the future. When that happens it may be the case we need a few more abstractions to plug them into the library. But for the time being this seems simple and enough. Resources Net::FluidDB::Base Objects, tags, namespaces, policies, permissions, and users need an instance of Net::FluidDB and of Net::FluidDB::JSON to be able to talk to FluidDB. We set up a root class for them: (Labels in the previous diagram have no Net::FluidDB namespace because the image was too wide with them, but classes do belong to the Net::FluidDB namespace.) Net::FluidDB::Object Objects are: The signature of the tag() method is quite flexible. For example, you can pass either a Net::FluidDB::Tag instance to it, or a tag path. You can tag with native or non-native values. Values may be either scalars plus options, or self-contained instances of Net::FluidDB::Value, not covered in this post. I plan to support tagging given a filename with automatic MIME type, etc. In Perl and others it is fine to offer a single method like tag() whose behaviour depends on the arguments when the contract is clear and having a single method pays off. 
Some other programming languages may prefer to split tag() into multiple methods with different names or signatures. The signature of the value() method also accepts a Net::FluidDB::Tag instance or a tag path. Net::FluidDB::HasObject Tags, namespaces, and users have a canonical object for them in FluidDB. Thus, they have an object_id, and a lazy object() getter. You would use this object for example to tag those resources themselves. We factor the common functionality out to a Moose role. A role is like a Ruby mixin. A bunch of attributes and methods that can’t be instantiated by themselves, but can be somehow thrown into a class definition as if they were part of it: Net::FluidDB::HasPath Tags and namespaces also have some common stuff modeled as a role: There’s an interesting bit here: Both name and path are lazy attributes. Net::FluidDB::HasPath is thought to be consumed by classes that implement a parent() accessor. By definition, the parent of a namespace is its parent namespace, and the parent of a tag is its containing namespace. In general, instances make sense as long as they have either a path, or a name and a parent. If you set a path instances will lazily sort out their name and parent if asked to. Given a parent and a name, instances may compute their path if needed. This is easily implemented thanks to the builtin support for lazy attributes in Moose. Net::FluidDB::Tag Tags are modeled like this: The namespace() reader loads the namespace a tag belongs to lazily. Net::FluidDB::Namespace Namespaces are similar to tags: Again, the parent() getter loads the parent namespace on-demand. Net::FluidDB::ACL Both policies and permissions have an open/closed policy, and a set of exceptions. Net::FluidDB::ACL provides stuff common to both: Net::FluidDB::Policy With that in place policies are: Net::FluidDB::Permission And permissions are: Net::FluidDB::User Finally, users are pretty simple: Credits These cool diagrams are a courtesy of yUML. 
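The lazy path/name/parent pattern that Net::FluidDB::HasPath provides via Moose can be sketched in other languages too. Here is a rough Python analogue (the class name, constructor contract, and path handling are illustrative assumptions, not Net::FluidDB's actual API; in the real module the parent lookup would be an HTTP call):

```python
from functools import cached_property

class Namespace:
    """Sketch of the lazy attribute pattern: an instance makes sense if it
    has either a path, or a name and a parent; the rest is derived lazily."""

    def __init__(self, path=None, name=None, parent=None):
        if path is None and (name is None or parent is None):
            raise ValueError("need either a path, or a name and a parent")
        self._path, self._name, self._parent = path, name, parent

    @cached_property
    def path(self):
        if self._path is not None:
            return self._path
        return f"{self._parent.path}/{self._name}"  # computed from parent + name

    @cached_property
    def name(self):
        if self._name is not None:
            return self._name
        return self._path.rsplit("/", 1)[-1]        # derived lazily from the path

    @cached_property
    def parent(self):
        if self._parent is not None:
            return self._parent
        # In Net::FluidDB this lazy getter would fetch the parent from FluidDB.
        return Namespace(path=self._path.rsplit("/", 1)[0])

ns = Namespace(path="test/books/rating")
child = Namespace(name="stars", parent=ns)
```

`cached_property` plays the role of Moose's `lazy` builders: nothing is computed (or fetched) until first access, which is exactly the performance trade-off described in goal (2).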
October 21st, 2009 at 11:58 Thanks *a LOT* Xavier! This is going to be very helpful for our Smalltalk version
Rhonda J. in La Palma, CA was looking for a solution to cover up her boring wall that surrounded her pool. When she called Wall Sensations we were able to install a beautiful scene that she can enjoy all day long as she just retired. Look at the dramatic difference.
TITLE: Proving $|1 + z_1|^2 + |1 + z_2|^2 + \ldots + |1 + z_n|^2 = 2n$ QUESTION [10 upvotes]: I am trying to prove the following result. Assume that $n \geq 2$ is an integer and $z_1, z_2, \ldots, z_n$ are complex numbers such that $z_1 + z_2 + \ldots + z_n = 0$ and $|z_1| = |z_2| = \cdots = |z_n| = 1$. Prove that $$ |1 + z_1|^2 + |1+z_2|^2 + \cdots + |1 + z_n|^2 = 2n. $$ Here is my attempt. Recalling that for any $z \in \mathbb{C}$, $|z|^2 = z \overline{z}$, we have \begin{align*} \sum\limits_{i=1}^n |1 + z_i|^2 & = \sum\limits_{i=1}^n (1 + z_i)(\overline{1 + z_i}) \\ & = \sum\limits_{i=1}^n (1 + z_i)(1 + \overline{z}_i) \\ & = \sum\limits_{i=1}^n (1 + \overline{z}_i + z_i + z_i \overline{z}_i) \\ & = \sum\limits_{i=1}^n 1 + \sum\limits_{i=1}^n \overline{z}_i + \sum\limits_{i=1}^n z_i + \sum\limits_{i=1}^n |z_i|^2 \\ & = n + \sum\limits_{i=1}^n \overline{z}_i + 0 + n \\ & = 2n + \sum\limits_{i=1}^n \overline{z}_i. \end{align*} It suffices to show that $\sum\limits_{i=1}^n \overline{z}_i = 0$. By assumption, we have $\sum\limits_{i=1}^n z_i = 0$. Taking the modulus of both sides, we obtain \begin{align*} 0 = \overline{0} = \overline{\sum\limits_{i=1}^n z_i} = \sum\limits_{i=1}^n \overline{z}_i. \end{align*} Therefore, $$ \sum\limits_{i=1}^n |1 + z_i|^2 = 2n, $$ as required. How does this look? Are there are any incorrect steps? REPLY [7 votes]: The equality has a simple geometric interpretation. If $\,G\,$ is the centroid of $\,n\,$ points $\,P_k\,$ it is a known property that $\,\sum WP_k^2 = n \cdot WG^2 + \sum GP_k^2\,$ for any point $\,W\,$ $\left(\dagger\right)\,$. Taking $\,P_k\,$ to be the points with affixes $\,z_k\,$ in the complex plane, the condition $\,\sum z_k = 0\,$ means that $\,G \equiv O\,$, and therefore $\,GP_k = |z_k| = 1\,$. 
Writing the previous equality for a point $\,W\,$ of arbitrary affix $\,\omega \in \mathbb C\,$ reduces to $\,\sum |\omega-z_k|^2\,$ $\,= n \cdot |\omega|^2 + \sum 1 = n\left(|\omega|^2+1\right)\,$, and the equality in OP's question follows for $\,\omega = -1\,$. [ EDIT ] $\;$ To answer the solution-verification part of the question, the posted proof is correct. In fact, it would be straightforward to adapt it as to prove the more general result derived here. I assume the "taking the modulus of both sides, we obtain ..." line is a transcription error. Given what follows, it was obviously supposed to be "taking the conjugate ...", instead. [ EDIT #2 ] $\;$ Prompted by boojum's comment, these are several other contexts where the same relation $\,\left(\dagger\right)\,$ occurs in some form. in geometry, proving that the locus of points in the plane with constant sum of squared distances to a set of fixed points is a circle  e.g. [1]; in statistics, proving that the mean minimizes the squared error  e.g. [2]; in mechanics, the parallel axis theorem about moments of inertia.
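As a quick numerical illustration of the identity (not a proof), the $n$-th roots of unity satisfy both hypotheses, sum to zero and lie on the unit circle, so the sum should come out to exactly $2n$:

```python
import cmath

n = 7
zs = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]  # n-th roots of unity

assert abs(sum(zs)) < 1e-9                        # z_1 + ... + z_n = 0
assert all(abs(abs(z) - 1) < 1e-12 for z in zs)   # |z_k| = 1 for all k
total = sum(abs(1 + z) ** 2 for z in zs)
assert abs(total - 2 * n) < 1e-9                  # sum |1 + z_k|^2 = 2n
```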
Our history MEMORIA is still a family-run business, based in the community where it all began over 80 years ago. MEMORIA, from yesterday to tomorrow MEMORIA, the legacy of Alfred Dallaire, is a fourth-generation family business, now run by Jocelyne Dallaire Légaré, the granddaughter of its founders, Alfred and Aline Dallaire. Because it evokes our roots and our reality today, the Dallaire name is always part of our logo and is displayed on the facades of our 11 MEMORIA complexes. Alfred and Aline Dallaire, the builders It all began with an extraordinary gesture of human solidarity. In Montréal, in the 1930s, a young Ukrainian woman died in extreme poverty, and no funeral director would agree to hold a viewing. Alfred Dallaire, a taxi driver and barber, transformed his barbershop into a funeral parlour. And that is how our story began. Of course, Alfred’s intrepidity would not have gone far without the devotion, practical intelligence and acumen of his wife Aline. In fact, it was her, who, between telephone calls, took care of everything, from bookkeeping to floral decorations. Paul-Émile Légaré, the visionary In 1952, Paul-Émile Légaré married Thérèse Dallaire, the daughter of Alfred and Aline. The young couple took over the business, and in 40 years built a vast network of funeral homes. With his entrepreneurial flair, Paul-Émile Légaré quickly made a reputation as a visionary: in 1967, he introduced the concept of prearrangements, and in 1979, he built the first funeral complex, with columbaria, visitation rooms and reception halls. All this, with the support of Alfred, Aline and Thérèse, who was always there to advise and assist him. Alfred Dallaire died in 1973. To honour his work as a pioneer and mentor, Paul-Émile dedicated the first complex in Laval, inaugurated in 1979, to him. Jocelyne Dallaire Légaré and her daughter Julia Duchastel Today, three generations later, the founders’ granddaughter, Jocelyne Dallaire Légaré, is in charge at MEMORIA. 
Her creative mind is behind the reinvented site design, as exemplified by the St. Laurent complex, inaugurated in 1999. And it was her idea to offer free psychological assistance, to create art circles, to integrate works of art into the funeral homes and to suggest the innovative concept of a drop-in daycare. She is unceasing in her efforts to think up new ideas and ways to revitalize the services provided. Her creative energy is supported by her close relationship with her daughter Julia and the dynamism of a talented team of craftspeople, artists, counsellors and professionals, who continue to serve families with equal respect for all communities and the same sense of compassion as when it all began. Today, Alfred Dallaire MEMORIA is 11 complexes, shining examples of a serene modernity and solidly rooted in the foundations of tradition. This wonderful tribute to our founder, created by Moment Factory, was presented at our 80th anniversary celebration on November 13, 2013.
The Best Tamari Home Best Tamari Home Reviews: If you are reading this, then you already know that tamari home is a great product for you, your family or any other person for whom you are planning to buy. Don’t worry about price: if you are looking for a tamari home for any person, or for your home, office or personal use, we have covered every kind of tamari home. It doesn’t matter what your budget is; we have listed details for every price range, from minimum to maximum. Thanks to the e-commerce explosion, we now have sales more often and more predictably than the monsoon. If tamari home is your interest area, then you are at the right place and, with the advent of the new year, at the right time. Investing in tamari home has become very foggy, with a lot of malicious products and fakes out there. So, if you need a handy guide to ensure that your investment is safe, look no further than our Ultimate Buying Guide for tamari home. Here we bring out the best tamari home which you can safely buy in 2021. 607 reviews analysed Madison Park Odette 8 Piece Jacquard Bedding Comforter Set with Damask Stria, King, Silver By Madison Park Tahari Mink Faux Fur Throw Luxury Silky Soft Blanket in Cream White (Grey Ice) By Tahari Home 1. Tahari Home Alisa Quilts King Product Highlights - ULTRA-SOFT: Cozy microfiber fabric for lightweight comfort and durability. - FABRIC DETAILS: Made from 100% polyester microfiber - CARE INSTRUCTIONS: Machine wash cold, gently and separately. Tumble dry low. Do not bleach or iron. - INCLUDES: (1) King Quilt, (2) King Shams - 100% Polyester Microfiber - DIMENSIONS: Quilt (104 inches x 90 inches), Shams (20 inches x 36 inches) Description This dreamy 3-piece bed set has diamond quilting and soft grey floral prints with hints of beige. 2. Tahari Home Collection Lightweight Hypoallergenic Product Highlights - CARE INSTRUCTIONS: Machine wash cold, gently and separately. Tumble dry low. Do not bleach or iron.
- FABRIC DETAILS: 100% Polyester - Microfiber - INCLUDES: (1) Queen Size Sheet Set - DIMENSIONS: One flat sheet (102″L x 90″W), one fitted sheet (80″L x 60″W) and four pillowcases (20″L x 30″W) Description If you are the sort of person who doesn’t compromise on quality and is ready to shell out a little extra, then Tahari Home Collection Lightweight Hypoallergenic is your choice. They say you either buy a quality product once or buy cheap products every day; the cost comes out the same. The new Tahari Home Collection Lightweight Hypoallergenic comes at the best price. It is the industry’s most trusted and most preferred quality Tamari Home, and it is considered the gold standard by many users as well as non-users. If you are looking for a long-term investment in a quality Tamari Home, then don’t look beyond Tahari Home Collection Lightweight Hypoallergenic. The product is featured, highlighted and appreciated in Reviews of Tamari Home in 2020, and this is backed by many users. 3. Madison Park Odette Jacquard Comforter Product Highlights - PACKAGE INCLUDES – 1 Comforter, 2 King Shams, 1 Bedskirt, 2 Decorative Pillows, 2 Euro Shams - 100% Polyester - EASY CARE – Machine wash cold on gentle cycle, tumble dry on low heat, do not bleach, spot clean pillow - FEATURES – Add a touch of class to your bedroom with the comforter set. The luxurious comforter and shams flaunt a gorgeous medallion design with striations on the satin ground that add dimension to the lavish look. - MATERIAL DETAILS – Polyester jacquard with damask stria texture comforter and shams, microfiber reverse, 8oz/sq yard polyester filling, charmeuse bed skirt and euro shams, embroidered pillow with decorative cording, polyester filling Description Going ahead with our list, we have something very specific to a specific audience. Yes, Madison Park Odette Jacquard Comforter has a very select audience with specific tastes.
It satisfies customer expectations (provided your expectations don’t cross a limit) and it adds value for money, but more importantly, it adds style for the user, which can become your fashion statement. Madison Park Odette Jacquard Comforter is definitely the must-buy for those who need a little of both quality and price efficiency, and as per our analysis, it easily earns the award of Best Tamari Home Under $100. 4. Home Collection Elegant Embossed Bedspread Description Home Collection Elegant Embossed Bedspread is a veteran in the market and has been here for a long time. It offers a unique feature which no other competitor offers. Go for Home Collection Elegant Embossed Bedspread if you want to try out a fusion of the new and the classic. A fun and interesting fact about Tamari Home is that even though Home Collection Elegant Embossed Bedspread is a veteran, its users are mostly from the younger generation. You can say fashion makes a turn after a century or so and things repeat. 5. Tahari Home Textured Jacquard Reversible Product Highlights - Inspired by the European Damask, this bedding brings beautiful texture and luxurious style to the bedroom. - 3pc duvet cover and shams set in King and Queen size made to fit US-size comforter or quilt inserts. Duvet cover has button closures. - Artfully detailed vines and damask scrolls make it a stunning yet serene addition to both modern and traditional spaces. - Cotton Blend - Modern classical style delicate floral damask design bedding set by Tahari Home - 70% Cotton and 30% Polyester Front, 100% Cotton Back, Machine washable Description If you are buying a Tamari Home for the first time, then you should get Tahari Home Textured Jacquard Reversible. It has fewer features when you compare it with any other Tamari Home, but what it has is ease of use and best-in-class service.
Go ahead and grab a Tamari Home, any Tamari Home, but if you are a first-time user and want a good experience, do not look anywhere other than Tahari Home Textured Jacquard Reversible. 6. Tahari Throw Luxury Silky Blanket Product Highlights - Extra dense fluffy look sofa bed throw measures 50 by 60 inches - This elegant throw is a beautiful accessory to any decor and makes a perfect gift - Medium thick yet warm beautifully crafted premium quality blanket - Super soft luxurious cool light gray mink faux fur throw by Designer Tahari Home - Back made of plush poly velvet velour, machine washable - 100% Faux Fur Description Tahari Throw Luxury Silky Blanket is a relatively new and late entrant in the market but has surprisingly surpassed Tahari Home Soft Cotton Textured Jacquard Bedding Modern Cottage Duvet Cover Set Reversible Woven Damask Floral Birds Nature Design (Blush Pink, King), which has been in the market longer than anyone. Tahari Throw Luxury Silky Blanket brings you the best quality at the lowest possible cost. The best feature of Tahari Throw Luxury Silky Blanket is what has kept it in the market. It certainly makes an appearance in Reviews of Tamari Home in 2020 owing to its price penetration strategy in the market. If you own a Tamari Home, and it could be any of the high-value Tamari Homes, chances are that it is much costlier than Tahari Throw Luxury Silky Blanket, which will still have more than 50% of its features. 7. Tahari Home Reose Throw Blanket Product Highlights - ULTRA-SOFT: Cozy plush fabric to keep you warm all year long - DIMENSIONS: 50 inches x 70 inches - INCLUDES: (1) Decorative Throw - FABRIC DETAILS: Made from 100% polyester - 100% Polyester - CARE INSTRUCTIONS: Machine wash cold, gently and separately. Tumble dry low. Do not bleach or iron. Description Tahari Home Reose Throw Blanket is another one which falls under best Tamari Home for the money.
It is the most regularly advertised product and we see ads for it almost everywhere. In the past, Tahari Home Reose Throw Blanket’s parent company decided to launch a new line of Tamari Home, and that is what has revived them. Tahari Home Reose Throw Blanket has really upgraded itself to the current style and market changes, and the best part of Tahari Home Reose Throw Blanket is its amazing features. 8. Tahari Throw Blanket Shaggy Ivory Description Acrylic/polyester soft ivory mink faux fur throw. Measures 50 by 60 inches. Thick and warm, beautifully crafted premium quality blanket. Back made of plush velour in matching ivory; machine washable. 9. Tahari Home Collection Absorbent Bathroom Description Tahari Home Collection Absorbent Bathroom is again a middle ground of quality and price. It offers limited features at this price. There is another variant of Tahari Home Collection Absorbent Bathroom which falls into the premium category, but Tahari Home Collection Absorbent Bathroom is specifically targeted at the mid-segment. Tahari Home Collection Absorbent Bathroom offers such amazing features that it is better than 70% of the Tamari Home products available in the market today. Tahari Home Collection Absorbent Bathroom was our personal favorite and was voted the most admired product in TOP 10 Best Tamari Home to Buy in 2020 – TOP Picks. We hope it makes that list again this year. 10. Tahari Home Embroidered Off White Comforter Product Highlights - Front: 70% cotton, 30% polyester. Back: 100% cotton. Machine washable - Raised embroidered floral damask pattern with birds, in white thread on cream / off-white - This bed linen feels soft to the touch, very comfortable and lightweight - Includes 1 Queen duvet cover (90″ x 96″) and 2 standard shams - Luxurious 3 piece queen size duvet cover set by Tahari Home — Charleston, Winter White Description Last but not least, if you haven’t liked any of the Tamari Home products yet, then Tahari Home Embroidered Off White Comforter is your choice.
It is another Best Tamari Home Under $100, and Tamari Home comparisons have shown that it has ranked best in the past based solely on its features. Tahari Home Embroidered Off White Comforter offers the best features, and what it does offer is unbeatable. We would recommend you go ahead with this one if you want an all-rounder Best Tamari Home Under $100.
If you have been arrested for a crime, time is of the essence in contacting an attorney. Whether your crime involved a traffic violation, driving under the influence (DUI), felony drug trafficking, or any type of misdemeanor or felony crime, you need aggressive representation and protection of your rights. At Fox & Melofchik, LLC, we have helped our clients who have been charged with various crimes at the Federal, State, and Municipal court level. Our lawyers will investigate all aspects of your case, including probable cause leading up to the arrest, potentially illegal search warrants, and any issues related to search and seizure. After your arrest, your rights begin with securing a personal recognizance bond, surety bond, or surety bond with a cash option. Not only do you have rights, but you may also have obligations, both professionally regarding your job and personally with your family. In a practical and proactive sense, we need you out of jail to gather information more successfully. That will allow our lawyers to develop a successful strategy to win your case, whether it involves rape, murder, or negligent operation of a vehicle. At our firm, we have represented clients charged with everything from simple possession to high-volume drug trafficking. The drug crimes have involved marijuana, methamphetamine (meth), cocaine, and heroin. In investigating your case, we will try to uncover whether your stop was unlawful and whether the search was unreasonable. If the search warrant was not executed legally, we will bring that to light as well. In the end, the State must prove that your involvement extended beyond being in the presence of the drugs that were found. If you have been accused of a white collar crime or regulatory violation, we have the background and resources to help you. We will manage your case with diligence and dedication, while recognizing the need for confidentiality and discretion.
Our experience also includes handling whistleblower cases, developing innovative strategies with the goal of discreet mediation or aggressive litigation. Some Municipal Court matters may be referred to the prosecutor to determine the severity of the allegation, which would remove the case from the municipal court level and bring it to the Superior Court. Personal injury (car accidents, premises liability, product liability, and slip and fall cases), tax appeals, civil rights, condemnation, and estate litigation are all handled at the Superior Court level. If you are facing a Federal criminal charge, State of New Jersey criminal charge or Municipal criminal charge, you are probably facing the most difficult situation of your life. It is crucial that you obtain experienced, knowledgeable and aggressive New Jersey legal counsel. Personal freedom is one of the most important rights afforded to all citizens of the United States. The potential loss of personal freedom from a criminal accusation at the Federal or State of New Jersey level requires an aggressive defense in which the client is involved in every aspect of the case. Because you are an integral part of your defense, it is vital that you are informed of how cases proceed through the Federal or State system, the evidence against you and all possible defenses to the accusation, including practical and tactical considerations. The attorneys at Fox & Melofchik work closely and personally with all clients to ensure that strategy and decision making involve the direct participation of the client. As soon as you come into contact with the Federal, State or Municipal legal system, you need immediate assistance from a New Jersey criminal lawyer. Prompt assistance can mean the difference between keeping or losing your personal freedom. Fox & Melofchik has the skills and experience to provide an aggressive defense. We also assist clients in having criminal records expunged in New Jersey. Dennis J. Melofchik, Esq.
was an assistant prosecutor for many years. As a result, he knows the ins and outs of the criminal justice system in New Jersey. He knows how to identify and create reasonable doubt when you are facing a jury. He has the uncanny ability to strategically craft your defense. He knows how to negotiate a plea bargain with Municipal, State or Federal authorities. If you are facing the loss of personal freedom, call (732)493-9400 or contact us online to schedule a FREE CONSULTATION.
TITLE: What should you do if you have a research idea outside your area of expertise? QUESTION [2 upvotes]: Background Sometimes, I have ideas for research in mathematical subjects about which I don't know much. Let me describe an example to make it more concrete. In his 2009 article "The Brachistochrone Problem for a Disk", L. D. Akulenko describes along which curve a disk with radius $R$ rolls down in the shortest amount of time. The problem he considers is a generalization of the classical Brachistochrone problem, which entails finding the curve of quickest descent for a point mass. In Akulenko's article, the density of the disk is assumed to be uniform. I wonder: what if we start tweaking this assumption? We could consider, for instance, the following three variations: [drawings of the three variations omitted] (Please excuse my bad drawing skills in this case. All circles have radius $R$.) Variation $(A)$ concerns a disk rolling down a curve, for which the density is uniform except for an extra point mass at the edge. The question becomes: how does this extra point mass affect what the curve of quickest descent looks like? How can this curve be described mathematically, also as a function of the magnitude of the extra mass? In variation $(B)$, the extra mass is put in a pie slice of the circle, and in variation $(C)$, it is put in a vertical segment of it. For all of these variations, the question remains the same: what are the curves of quickest descent? I do not possess the required background in the theory of dynamical systems and/or (partial) differential equations to solve these problems. At the same time, I believe those who do have sufficient knowledge in these areas might be interested in doing research on these topics, especially since, I believe, these particular generalizations of the Brachistochrone problem have not been considered yet (for other generalizations, see for instance this paper by Gemmer).
I find the idea that others might build on the questions I've raised appealing, because it could mean you've indirectly added a little bit of extra knowledge to the existing body of work in mathematics. Moreover, you've aided people in finding a research topic that suits them. Questions Are there journals that welcome people sending in their ideas for research projects (which they can't or won't delve into themselves) and publish them? Are there any repositories - either online or on paper - specifically dedicated to receiving and listing ideas for projects that researchers might consider investigating? Are there other things one might consider doing with such ideas? For instance, is it wise to send them to professors in relevant research areas? REPLY [2 votes]: Sometimes, I have ideas for research in mathematical subjects about which I don't know much. [...] Which is great and is something that naturally happens when you think about mathematics, especially at an early stage. You can either note down these ideas for a later time, or try to pursue them. In the latter case, either you try to build up the knowledge yourself or you need to find a collaborator. Forget about your options 1 and 2. Maybe 3 could be feasible: sending emails. However, you must be sure to include some content in your proposal. Just writing "why don't we work together on blah" is not enough. You must include some ideas and share something nontrivial if you want to spark the other person's interest. I have a friend who is very good at finding collaborators. He does not send many emails, though. He keeps an eye on mathematical production; he reads a lot. When he finds somebody who seems interesting to collaborate with, he approaches them, preferably in person: that is why conferences, workshops and meetings exist.
\begin{document} \title{Stochastic Analysis of an Adaptive Cubic Regularisation Method under Inexact Gradient Evaluations and Dynamic Hessian Accuracy } \author{\name{Stefania Bellavia\textsuperscript{a}\thanks{CONTACT: Stefania Bellavia, Email: [email protected]} and Gianmarco Gurioli\textsuperscript{b}}\affil{\textsuperscript{a} Dipartimento di Ingegneria Industriale, Universit\`{a} degli Studi, Firenze, Italy; \textsuperscript{b} Dipartimento di Matematica e Informatica ``Ulisse Dini", Universit\`{a} degli Studi, Firenze, Italy.} } \maketitle{} \begin{abstract} We here adapt an extended version of the adaptive cubic regularisation method with dynamic inexact Hessian information for nonconvex optimisation in \cite{IMA} to the stochastic optimisation setting. While exact function evaluations are still considered, this novel variant inherits the innovative use of adaptive accuracy requirements for Hessian approximations introduced in \cite{IMA} and additionally employs inexact computations of the gradient. Without restrictions on the variance of the errors, we assume that these approximations are available within a sufficiently large, but fixed, probability and we extend, in the spirit of \cite{CartSche17}, the deterministic analysis of the framework to its stochastic counterpart, showing that the expected number of iterations to reach a first-order stationary point matches the well known worst-case optimal complexity. This is, in fact, still given by $O(\epsilon^{-3/2})$, with respect to the first-order $\epsilon$ tolerance. Finally, numerical tests on nonconvex finite-sum minimisation confirm that using inexact first and second-order derivatives can be beneficial in terms of the computational savings. \end{abstract} \begin{keywords} Adaptive cubic regularization methods; inexact derivatives evaluations; stochastic nonconvex optimization; worst-case complexity analysis; finite-sum minimization. 
\end{keywords} \numsection{Introduction} Adaptive Cubic Regularisation (ARC) methods are Newton-type procedures for solving unconstrained optimisation problems of the form \begin{eqnarray} \min_{x \in \mathbb{R}^n} f(x), \label{problem1} \end{eqnarray} in which $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is a sufficiently smooth, bounded below and, possibly, nonconvex function. In the seminal work \cite{NP}, the iterative scheme of the method is based on the minimisation of a cubic model, relying on the Taylor series to predict the objective function values, and yields a globally convergent second-order procedure. The main reason to consider the ARC framework in place of other globalisation strategies, such as Newton-type methods embedded into a linesearch or a trust-region scheme, lies in its optimal complexity. In fact, given the first-order $\epsilon$ tolerance and assuming Lipschitz continuity of the Hessian of the objective function, an $\epsilon$-approximate first-order stationary point is reached, in the worst case, in at most $O(\epsilon^{-3/2})$ iterations, instead of the $O(\epsilon^{-2})$ bound attained by trust-region and linesearch methods \cite{CGToint, CGT}. More precisely, an $(\epsilon,\epsilon_H)$-approximate first- and second-order critical point is found in at most $O(\max(\epsilon^{-3/2}, \epsilon_H^{-3}))$ iterations, where $\epsilon_H$ is the prefixed positive second-order optimality tolerance \cite{ARC2, CGToint, CGTIMA, NP}. We observe that, in \cite{carmon_optcompl}, it has been shown that the bound $O(\epsilon^{-3/2})$ for computing an $\epsilon$-approximate first-order stationary point is optimal among methods operating on functions with Lipschitz continuous Hessian.
Experimentally, second-order methods can be more efficient than first-order ones on badly scaled and ill-conditioned problems, since they take advantage of curvature information to easily escape from saddle points when searching for local minima (\cite{review, CGToint, Roosta_2p}), and this feature is in practice quite robust to the use of inexact Hessian information. On the other hand, their per-iteration cost is expected to be higher than that of first-order procedures, due to the computation of Hessian-vector products. Consequently, the literature has recently focused on ARC variants with inexact derivative information, starting from schemes employing Hessian approximations \cite{IMA, Cin, Roosta} while preserving optimal complexity. ARC methods with inexact gradient and Hessian approximations that still preserve optimal complexity are given in \cite{BellGuriMoriToin19, CartSche17, kl, Roosta_2p, Roosta_inexact, zhou_xu_gu}. These approaches have mostly been applied to large-scale finite-sum minimisation problems \begin{equation} \label{finite-sum} \min_{x\in\mathbb{R}^n} f(x)=\frac{1}{N}\sum_{i=1}^N{\varphi_i(x)}, \end{equation} widely used in machine learning applications. In this setting, the objective function $f$ is the mean of $N$ component functions $\varphi_i:\mathbb{R}^n\rightarrow \mathbb{R}$ and, hence, the evaluation of the exact derivatives might be, for larger values of $N$, computationally expensive. In the papers cited above, the derivative approximations are required to fulfil given accuracy requirements and are computed by random sampling. The size of the sample is determined so as to satisfy the prescribed accuracy with a sufficiently large prefixed probability, exploiting the operator Bernstein inequality for tensors (see \cite{Tropp}).
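For the finite-sum setting above, the subsampling idea can be sketched in a few lines of Python. This is a minimal illustration, not the scheme of the cited papers: the function and argument names are our own, and the Bernstein-inequality-based sample-size rule is deliberately left out (the sample size is simply passed in).

```python
import numpy as np

def subsampled_derivatives(phis_grad, phis_hess, x, sample_size, rng):
    """Estimate the gradient and Hessian of f(x) = (1/N) * sum_i phi_i(x)
    by averaging per-component derivatives over a random subsample.
    `phis_grad` / `phis_hess` are lists of per-component callables
    (hypothetical names, for illustration only)."""
    N = len(phis_grad)
    # draw `sample_size` distinct component indices uniformly at random
    idx = rng.choice(N, size=sample_size, replace=False)
    g = np.mean([phis_grad[i](x) for i in idx], axis=0)
    H = np.mean([phis_hess[i](x) for i in idx], axis=0)
    return g, H
```

With `sample_size = N` the estimates coincide with the exact mean derivatives; smaller samples trade accuracy for cheaper evaluations, which is the point of the adaptive accuracy requirements discussed in the paper.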
To deal with the nondeterministic aspects of these algorithms, in \cite{CartSche17,zhou_xu_gu} probabilistic models are considered and it is proved that, in expectation, optimal complexity applies as in the deterministic case; in \cite{IMA, BellGuriMoriToin19, Cin, Roosta, Roosta_inexact} high-probability results are given and it is shown that the optimal complexity result is restored in probability. Nevertheless, this latter analysis does not provide information on the behaviour of the method when the desired accuracy levels in the derivative approximations are not fulfilled. With the aim of filling this gap, we here perform the stochastic analysis of the framework in \cite{IMA}, where approximated Hessians are employed. To make the method more general, inexactness is allowed in first-order information, too. The analysis aims at bounding the expected number of iterations required by the algorithm to reach a first-order stationary point, under the assumption that gradient and Hessian approximations are available with a sufficiently large, but fixed, probability, recovering optimal complexity in the spirit of \cite{CartSche17}. The rest of the paper is organised as follows. In Section 1.1 we briefly survey the related works and in Section 1.2 we summarise our contributions. In Section 2 we introduce a stochastic ARC algorithm with inexact gradients and dynamic Hessian accuracy and state the main assumptions on the stochastic process induced by the algorithm. Relying on several existing results and deriving some additional outcomes, Section 3 is then devoted to the complexity analysis of the framework, while Section 4 proposes a practical guideline for applying the method to finite-sum minimisation problems. Numerical results for nonconvex finite-sum minimisation problems are discussed in Section 5 and concluding remarks are finally given in Section 6. \vskip 5pt \noindent {\bf Notations.} The Euclidean vector and matrix norm is denoted by $\|\cdot \|$.
Given a scalar, vector or matrix $v$ and a non-negative scalar $\chi$, we write $v=O(\chi)$ if there is a constant $g$ such that $\|v\| \le g \chi$. Given any set ${\cal{S}}$, $|{\cal{S}}|$ denotes its cardinality. As usual, $\mathbb{R}^+$ denotes the set of positive real numbers. \subsection{Related works} The interest in ARC methods with inexact derivatives has been steadily increasing. We are here interested in computable accuracy requirements for gradient and Hessian approximations that preserve the optimal complexity of these procedures. Focusing on the Hessian approximation, in \cite{ARC2} it has been proved that optimal complexity is preserved provided that, at each iteration $k$, the Hessian approximation $\overline{\nabla^2 f}(x_k)$ satisfies \begin{equation} \label{condKL} \|(\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k))s_k\|\le \chi \|s_k\|^2, \end{equation} where ${\nabla^2 f}(x_k)$ denotes the true Hessian at $x_k$. The method in \cite{kl}, specifically designed to minimise finite-sum problems, assumes that $\overline{\nabla^2 f}(x_k)$ satisfies \begin{equation}\label{kl2} \| \overline{\nabla^2 f}(x_k) -\nabla^2 f(x_k)\|\le \chi\|s_k\| \end{equation} with $\chi$ a positive constant, leading to \eqref{condKL}. Unfortunately, this upper bound depends on the step length $\|s_k\|$, which is unknown when forming the Hessian approximation $\overline{\nabla^2 f}(x_k)$. Finite-difference versions of the ARC method have been investigated in \cite{CGTfinitedifference}; there, the Hessian approximation satisfies \eqref{kl2} and its computation requires an inner loop to meet the accuracy requirement. In practical implementations of the method in \cite{kl}, this mismatch is circumvented by taking the step length at the previous iteration. Hence, this approach is unreliable when the norm of the step varies significantly from one iteration to the next, as also noticed in the numerical tests of \cite{IMA}.
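As a purely illustrative aid, the step-dependent requirement \eqref{condKL} can be checked numerically once the step is known. The helper below is our own sketch (the name and signature are not from the cited works); it simply evaluates the inequality for given matrices, step and constant $\chi$.

```python
import numpy as np

def satisfies_condKL(H_approx, H_true, s, chi):
    """Check the step-dependent Hessian accuracy requirement (condKL):
    ||(H_approx - H_true) s|| <= chi * ||s||^2."""
    lhs = np.linalg.norm((H_approx - H_true) @ s)
    return lhs <= chi * np.linalg.norm(s) ** 2
```

Note that this check can only be run a posteriori, after $s_k$ is available, which is exactly the practical difficulty discussed above.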
To overcome this practical issue, in \cite{Roosta} Xu et al. replace the accuracy requirement \eqref{kl2} with \begin{equation}\label{H_roosta} \| \overline{\nabla^2 f}(x_k) -\nabla^2 f(x_k)\|\le \chi\epsilon, \end{equation} where $\epsilon$ is the first-order tolerance. This provides them with $\|(\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k))s_k\|\le \chi \epsilon \|s_k\|$, which is used to prove optimal complexity. In this situation, the estimate $\overline{\nabla^2 f}(x_k)$ is practically computable, independently of the step length, but at the cost of a very restrictive accuracy requirement (defined in terms of the $\epsilon$ tolerance) to be fulfilled at each iteration of the method. We further note that, in \cite{Wang}, optimal complexity results for a cubic regularisation method employing the implementable condition \begin{equation} \|\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k)\|\le \chi \|s_{k-1}\| \end{equation} are given under the assumption that the constant regularisation parameter is greater than the Hessian Lipschitz constant. Hence, knowledge of the Lipschitz constant is assumed. Such an assumption can be quite stringent, especially when minimising nonconvex objective functions. On the contrary, adaptive cubic regularisation frameworks get rid of the Lipschitz constant, overestimating it by an adaptive procedure that is well defined provided that the approximated Hessian is accurate enough. To our knowledge, accuracy requirements depending on the current step, such as those in \eqref{condKL}-\eqref{H_roosta}, are needed to prove that the step acceptance criterion is well defined and that the regularisation parameter is bounded above.
\noindent Regarding the gradient approximation, the accuracy requirement in \cite{CGTfinitedifference, kl} has the following form: \begin{equation} \label{gradient} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\le \mu \|s_k\|^2, \end{equation} where $\overline{\nabla f}(x_k)$ denotes the gradient approximation and $\mu$ is a positive constant. Hence, the accuracy requirement again depends on the norm of the step. \noindent In \cite{Roosta_inexact}, as for the Hessian approximation, in order to get rid of the norm of the step, a very tight accuracy requirement is used, as the absolute error has to be of the order of $\epsilon^2$ at each iteration, i.e. \begin{equation} \label{gradient_roosta} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\le \mu \epsilon^2. \end{equation} As already noticed, in \cite{Roosta, Roosta_inexact} a complexity analysis in high probability is carried out in order to cover the situation where the accuracy requirements \eqref{H_roosta} and \eqref{gradient_roosta} are satisfied only with a sufficiently large probability. The behaviour of cubic regularisation approaches employing approximated derivatives is instead analysed in expectation in \cite{CartSche17}, assuming that \eqref{condKL} and \eqref{gradient} are satisfied with high probability. \noindent In the finite-sum minimisation context, the accuracy requirements \eqref{condKL}, \eqref{kl2} and \eqref{gradient} can be enforced with high probability by subsampling via an inner iterative process. Namely, the approximated derivative is computed using a predicted accuracy, the step $s_k$ is computed and, if the predicted accuracy is larger than the required accuracy, the predicted accuracy is progressively decreased (and the sample size progressively increased) until the accuracy requirement is satisfied.
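The inner iterative process just described can be sketched as follows, assuming the step-dependent gradient requirement \eqref{gradient}. The callables `approx_grad` and `compute_step` are hypothetical stand-ins for the subsampled gradient (whose sample size would grow as the predicted accuracy `acc` shrinks) and the cubic-model minimisation; the sketch only shows the control flow, not the actual sampling.

```python
import numpy as np

def gradient_with_inner_loop(approx_grad, compute_step, x, acc0, mu,
                             shrink=0.5, max_iters=20):
    """Compute an approximate gradient at a predicted accuracy `acc`,
    compute the trial step, and tighten `acc` until it falls below the
    step-dependent requirement mu * ||s||^2."""
    acc = acc0
    for _ in range(max_iters):
        g = approx_grad(x, acc)   # accuracy `acc` would drive the sample size
        s = compute_step(g)
        if acc <= mu * np.linalg.norm(s) ** 2:
            break                 # requirement met
        acc *= shrink             # tighten accuracy, enlarge the sample
    return g, s, acc
```

This makes the "vicious cycle" visible: every tightening of `acc` forces a fresh step computation, which is what the dynamic accuracy criterion of the paper is designed to avoid.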
\noindent The cubic regularisation variant proposed in \cite{IMA} employs the exact gradient and ensures condition \eqref{condKL}, avoiding the above vicious cycle, by requiring that \begin{equation} \label{AccDynH} \|\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k)\|\le c_k, \end{equation} where the guideline for choosing $c_k$ is as follows: \begin{equation} \label{ck} c_k\le\left \{ \begin{array}{ll} c, \quad c>0,\qquad \qquad ~~~ \mbox{if } \ \ \|s_k\|\ge 1, \\ \alpha(1-\beta)\| \nabla f(x_k)\|, \quad \mbox{if } \ \ \|s_k\|< 1, \end{array} \right. \end{equation} with $0\le \alpha<\frac23$ and $0<\beta<1$. Note that, for a sufficiently large constant $c$, the accuracy requirement $c_k$ can be less stringent than $\epsilon$ when $\|s_k\|\ge 1$ or, otherwise, as long as $\alpha(1-\beta)\|\nabla f(x_k)\|>\epsilon$. Although condition \eqref{ck} still involves the norm of the step, the accuracy requirement \eqref{AccDynH} can be implemented without requiring an inner loop (see \cite{IMA} and Algorithm \ref{algo}). \noindent We finally mention that regularisation methods employing inexact derivatives and also inexact function values are proposed in \cite{BellGuriMoriToin19}, and the complexity analysis carried out there covers arbitrary optimality order and arbitrary degree of the available approximate derivatives. Also in this latter approach, the accuracy requirement on the derivative approximations depends on the norm of the step, and an inner loop is needed in order to increase the accuracy and meet the accuracy requirements. A different approach, based on the Inexact Restoration framework, is given in \cite{bkm} where, in the context of finite-sum problems, the sample size rather than the approximation accuracy is adaptively chosen. \subsection{Contributions} In light of the related works, the main contributions of this paper are the following: \vspace{0.1cm} \begin{itemize} \item We generalise the method given in \cite{IMA}.
In particular, we keep the practical adaptive criterion \eqref{AccDynH}, which is implemented without an inner loop, while allowing inexactness in the gradient as well. Namely, inspired by \cite{BellGuriMoriToin19}, we require that the gradient approximation satisfies the following relative implicit condition: \begin{equation} \label{gradient_nostro} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\le \zeta_k \|\overline \nabla f(x_k)\|^2, \end{equation} where $\zeta_k$ is an iteration-dependent nonnegative parameter. Unlike \cite{CartSche17} and \cite{kl} (see \eqref{gradient}), this latter condition does not depend on the norm of the step. Thus, its practical implementation calls for an inner loop that can be performed before the step computation, and extra computations of the step are not needed. A detailed description of a practical implementation of this accuracy requirement in a subsampling scheme for finite-sum minimisation is given in Section 4. \vspace{0.1cm} \item We assume that the accuracy requirements \eqref{AccDynH} and \eqref{gradient_nostro} are satisfied with high probability and we perform, in the spirit of \cite{CartSche17}, the stochastic analysis of the resulting method, showing that the expected number of iterations needed to reach an $\epsilon$-approximate first-order critical point is, in the worst case, of the order of $\epsilon^{-3/2}$. This analysis also applies to the method given in \cite{IMA}. \end{itemize} \numsection{A stochastic cubic regularisation algorithm with inexact derivative evaluations} Before introducing our stochastic algorithm, we state the following hypotheses on $f$.\\ \begin{assumption} \label{Assf} With reference to problem \eqref{problem1}, the objective function $f$ is assumed to be: \begin{itemize} \item[(i)] bounded below by $f_{low}$, for all $x\in\mathbb{R}^n$; \vskip 5pt \item[(ii)] twice continuously differentiable, i.e.
$f\in\mathcal{C}^2(\mathbb{R}^n)$; \vskip 5pt \end{itemize} Moreover,\vskip 5pt \begin{itemize} \item[(iii)] the Hessian is globally Lipschitz continuous with Lipschitz constant $L_H>0$, i.e., \begin{equation}\label{LipHess} \qquad \|\nabla^2 f(x)-\nabla^2 f(y)\| \le L_H\| x-y\|, \end{equation} for all $x$, $y\in\mathbb{R}^n$. \end{itemize} \end{assumption} \vspace{0.2cm} \noindent The iterative method we are going to introduce is, basically, the stochastic counterpart of an extension of the one proposed in \cite{IMA}, based on inexact first- and second-order information. More precisely, at iteration $k$, given the trial step $s$, the value of the objective function at $x_k+s$ is predicted by means of a cubic model $m_k(x_k,s,\sigma_k)$ defined in terms of an approximate Taylor expansion of $f$ centered at $x_k$ with increment $s$, truncated to the second order, namely \begin{equation} \label{m} m_k(x_k,s,\sigma_k)= f(x_k)+\overline{ \nabla f}(x_k)^T s+\frac{1}{2} s^T \overline{ \nabla^2 f}(x_k) s+\frac{\sigma_k}{3}\|s\|^3\eqdef \overline T_2(x_k,s)+\frac{\sigma_k}{3}\|s\|^3, \end{equation} in which both the gradient $\overline{ \nabla f}(x_k)$ and the Hessian matrix $ \overline{ \nabla^2 f}(x_k)$ represent approximations of $ \nabla f(x_k)$ and $\nabla^2 f(x_k)$, respectively. According to the basic ARC framework in \cite{ARC1}, the main idea is to approximately minimise, at each iteration, the cubic model and to adaptively search for a regulariser $\sigma_k$ such that the following overestimation property is satisfied: \[ f(x_k+s)\le m_k(x_k,s,\sigma_k), \] in which $s$ denotes the approximate minimiser of $m_k(x_k,s,\sigma_k)$. Within these requirements, it follows that \[ f(x_k)=m_k(x_k,0,\sigma_k)\ge m_k(x_k,s,\sigma_k)\ge f(x_k+s), \] so that the objective function is not increased when moving from $x_k$ to $x_k+s$.
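As a purely illustrative aside (not part of the formal development), the cubic model and the overestimation property can be sketched in a few lines of code in the one-dimensional case, where all quantities are scalars; the function name and the numerical values below are ours and merely illustrative.

```python
import math

def cubic_model(fx, g, H, s, sigma):
    """m_k(x_k, s, sigma_k): second-order Taylor model T2 plus the cubic
    regularisation term (one-dimensional sketch, all quantities scalar)."""
    t2 = fx + g * s + 0.5 * H * s * s          # approximate Taylor expansion
    return t2 + (sigma / 3.0) * abs(s) ** 3    # cubic regularisation term

# With exact derivatives and sigma >= L_H / 2, the Taylor remainder bound
# implies the overestimation property; here f = cos at x_k = 1, L_H = 1.
x = 1.0
overestimates = all(
    cubic_model(math.cos(x), -math.sin(x), -math.cos(x), s, 1.0) >= math.cos(x + s)
    for s in [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
)
```

With exact derivatives the check above confirms the overestimation property for the sampled steps, since the regularisation term then dominates the cubic Taylor remainder.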
To get more insight, the cubic model \eqref{m} is approximately minimised in the sense that the minimiser $s_k$ satisfies \begin{eqnarray} & & m_k(x_k,s_k,\sigma_k)<m_k(x_k,0,\sigma_k),\label{mdecr} \\ & & \| \nabla_s m_k(x_k,s_k,\sigma_k)\| \le \beta_k \|\overline{\nabla f}(x_k)\|, \label{tc} \end{eqnarray} for all $k\ge 0$ and some $0\le \beta_k\le \beta$, $\beta \in [0,1)$. Practical choices for $\beta_k$ are, for instance, $\beta_k= \beta \min \left( 1, \frac{\| s_k\|^2}{\|\overline{\nabla f} (x_k)\|} \right)$ or $\beta_k= \beta \min(1,\|s_k\|)$ (see, e.g., \cite{IMA}), leading to \begin{equation} \label{tcsub} \| \nabla_s m_k(x_k,s_k,\sigma_k)\| \le \beta \min \left( \| s_k\|^2, \|\overline{\nabla f} (x_k)\| \right), \end{equation} and \begin{equation} \label{tc.s} \| \nabla_s m_k(x_k,s_k,\sigma_k)\| \le \beta \min(1,\|s_k\|) \|\overline{\nabla f} (x_k)\|, \end{equation} respectively. We notice that, if the overestimation property $f(x_k+s)\le m_k(x_k,s,\sigma_k)$ is satisfied, the requirement \eqref{mdecr} implies that $f(x_k)=m_k(x_k,0,\sigma_k)> m_k(x_k,s,\sigma_k)\ge f(x_k+s)$, resulting in a decrease of the objective. The trial point $x_k+s_k$ is then used to compute the relative decrease \cite{Toint1} \beqn{rhokdef2} \rho_k = \frac{f(x_k) - f(x_k+s_k)} {\overline T_2(x_k,0)-\overline T_2(x_k,s_k)}. \eeqn If $\rho_k\ge \eta$, with $\eta\in(0,1)$ a prescribed decrease fraction, then the trial point is accepted, the iteration is declared successful, the regularisation parameter is decreased by a factor $\gamma$ and we go on recomputing the approximate model at the updated iterate; otherwise, an unsuccessful iteration occurs: the point $x_k+s_k$ is rejected, the regulariser is increased by a factor $\gamma$, a new approximate model at $x_k$ is formed and a new trial step $s_k$ is computed. At each iteration, the model $m_k(x_k,s,\sigma_k)$ relies on inexact quantities, which can be considered as realisations of random variables.
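The acceptance test and the update of the regulariser just described can be sketched as follows; this is our own illustrative fragment, and the parameter values ($\eta$, $\gamma$, $\sigma_{\min}$) are merely examples within the admissible ranges.

```python
def accept_step(fx, fx_trial, t2_decrease, sigma,
                eta=0.1, gamma=2.0, sigma_min=1e-4):
    """Evaluate rho_k as in the relative-decrease test and update the
    regulariser: decrease sigma on a successful iteration, increase it
    by the same factor gamma on an unsuccessful one (sketch)."""
    rho = (fx - fx_trial) / t2_decrease      # t2_decrease = T2(0) - T2(s_k) > 0
    if rho >= eta:                           # successful: accept trial point
        return True, max(sigma_min, sigma / gamma)
    return False, gamma * sigma              # unsuccessful: reject, inflate sigma
```

Note that the denominator is the decrease of the Taylor part $\overline T_2$ of the model, not of the full regularised model, in accordance with the definition of $\rho_k$.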
Hereafter, all random quantities are denoted by capital letters, while the use of small letters is reserved for their realisations. In particular, let us denote a random model at iteration $k$ as $M_k$, while we use the notation $m_k=M_k(\omega)$ for its realisation, with $\omega$ a random sample taken from a context-dependent probability space $\Omega$. In particular, we denote by $\overline{\nabla f}(X_k)$ and $\overline{\nabla^2 f}(X_k)$ the random variables for $\overline{\nabla f}(x_k)$ and $\overline{\nabla^2 f}(x_k)$, respectively. Consequently, the iterates $X_k$, as well as the regularisers $\Sigma_k$ and the steps $S_k$ are the random variables such that $x_k=X_k(\omega)$, $\sigma_k=\Sigma_k(\omega)$ and $s_k=S_k(\omega)$. \noindent The focus of this paper is to derive the expected worst-case complexity bound to approach a first-order optimality point, that is, given a tolerance $\epsilon\in(0,1)$, the number of steps $\overline{k}$ (in the worst-case) such that an iterate $x_{\overline{k}}$ satisfying \begin{eqnarray} \|\nabla f(x_{\overline{k}})\|\le \epsilon\nonumber \end{eqnarray} is reached. To this purpose, after the description of the algorithm, we state the main definitions and hypotheses needed to carry on with the analysis up to the complexity result. Our algorithm is reported below. \algo{algo}{Stochastic ARC algorithm with inexact gradient and dynamic Hessian accuracy} {\vspace*{-0.3 cm} \begin{description} \item[Step 0: Initialisation.] An initial point $x_0\in\mathbb{R}^n$ and an initial regularisation parameter $\sigma_0>0$ are given. The constants $\beta$, $\alpha$, $\eta$, $\gamma$, $\sigma_{\textrm{min}}$ and $c$ are also given such that \begin{eqnarray} 0<\beta<1, ~ \alpha\in \left[0, \displaystyle \frac 2 3\right), ~\sigma_{\min}\in (0, \sigma_0],~ 0<\eta < \frac{2-3\alpha}{2}, ~\gamma>1,~c>0.\label{initialconsts} \end{eqnarray} Compute $f(x_0)$ and set $k=0$, ${\rm flag}=1$. \vspace{2mm} \item[Step 1: Gradient approximation. 
] Compute an approximate gradient $\overline{\nabla f}(x_k)$. \vspace{2mm} \item[Step 2: Hessian approximation (model construction). ] If ${\rm flag}=1$ set $c_{k}=c$, else set $c_{k}=\alpha(1-\beta)\|\overline{\nabla f}(x_{k})\|$.\\ Compute an approximate Hessian $\overline{\nabla^2f}(x_k)$ that satisfies condition \eqref{AccDynH} with a prefixed probability. Form the model $m_k(x_k,s,\sigma_k)$ defined in \eqref{m}. \vspace{2mm} \item[Step 3: Step calculation. ] Choose $\beta_k\le \beta$. Compute the step $s_k$ satisfying \eqref{mdecr}-\eqref{tc}.\vspace{2mm} \item[Step 4: Check on the norm of the trial step. ] If $\| s_k \|< 1$ and ${\rm flag}=1$ and $c>\alpha(1-\beta)\|\overline{\nabla f}(x_k)\|$ \begin{itemize} \item [] set $x_{k+1}=x_k$, $\sigma_{k+1}=\sigma_k$, ${\rm flag}=0$\quad (\textit{unsuccessful iteration}) \item[] set $k=k+1$ and go to Step $1$. \end{itemize} \vspace{2mm} \item[Step 5: Acceptance of the trial point and parameters update. ] Compute $f(x_k+s_k)$ and the relative decrease defined in \eqref{rhokdef2}. If $\rho_k\ge \eta$ \begin{itemize} \item[] define $x_{k+1}=x_k+s_k$, set $\sigma_{k+1} = \max[\sigma_{\min},\frac{1}{\gamma} \sigma_k] $.\quad (\textit{successful iteration}) \item[] If $\|s_k \|\ge 1$ set ${\rm flag}=1$, otherwise set ${\rm flag}=0$. \end{itemize} else \begin{itemize} \item[] define $x_{k+1}=x_k$, $\sigma_{k+1}=\gamma\sigma_k. \quad $ (\textit{unsuccessful iteration}) \end{itemize} Set $k=k+1$ and go to Step $1$. \end{description} } \noindent Some comments on this algorithm are useful at this stage. We first note that Algorithm \ref{algo} generates a random process \begin{eqnarray} \{X_k, S_k, M_k, \Sigma_k,C_k\},\label{sprocess} \end{eqnarray} where $C_k$, with $c_k=C_k(\omega)$, refers to the random variable for the dynamic Hessian accuracy $c_k$, which is adaptively defined in Step 2 of Algorithm \ref{algo}. Since its definition relies on random quantities, $c_k$ constitutes a random variable too.
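To fix ideas, the control flow of Algorithm \ref{algo} can be mimicked in one dimension as follows. This sketch is ours, not part of the formal algorithm statement: exact derivatives are used (so the accuracy requirements of Steps 1 and 2 hold trivially), the cubic model is minimised exactly via the closed-form scalar root, and the test function and all parameter values are illustrative only.

```python
import math

def arc_sketch(f, g, h, x0, sigma0=1.0, sigma_min=1e-4, eta=0.1,
               gamma=2.0, alpha=0.1, beta=0.5, c=1.0, eps=1e-3, max_iter=200):
    """One-dimensional sketch of the ARC control flow with exact derivatives."""
    x, sigma, flag = x0, sigma0, 1
    for k in range(max_iter):
        gk = g(x)                                    # Step 1 (exact here)
        if abs(gk) <= eps:
            return x, k
        ck = c if flag == 1 else alpha * (1 - beta) * abs(gk)   # Step 2:
        Hk = h(x)                # Hessian error bound ck trivially met (exact)
        # Step 3: exact minimiser of g*s + H*s^2/2 + sigma*|s|^3/3
        t = (-Hk + math.sqrt(Hk * Hk + 4 * sigma * abs(gk))) / (2 * sigma)
        s = -math.copysign(t, gk)
        # Step 4: check on the norm of the trial step
        if abs(s) < 1 and flag == 1 and c > alpha * (1 - beta) * abs(gk):
            flag = 0                                 # x and sigma unchanged
            continue
        # Step 5: acceptance of the trial point and parameters update
        t2_decrease = -(gk * s + 0.5 * Hk * s * s)   # T2(0) - T2(s) > 0
        rho = (f(x) - f(x + s)) / t2_decrease
        if rho >= eta:                               # successful iteration
            x, sigma = x + s, max(sigma_min, sigma / gamma)
            flag = 1 if abs(s) >= 1 else 0
        else:                                        # unsuccessful iteration
            sigma = gamma * sigma
    return x, max_iter
```

Run on $f(x)=\cos x$ from $x_0=1$, the sketch drives the true gradient below the tolerance in a handful of iterations, exercising a failure in the sense of Step 4 as well as several successful iterations.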
We recall that, in the deterministic counterpart given in \cite{IMA}, the Hessian approximation $\overline{\nabla^2 f}(x_k)$ computed at iteration $k$ has to satisfy the absolute accuracy requirement \eqref{AccDynH}. Here, this condition is assumed to be satisfied only with a certain probability (see, e.g., Assumption \ref{AssAlg}). \noindent The main goal is thus to prove that, if $M_k$ is sufficiently accurate with a sufficiently high probability conditioned on the past, then the stochastic process preserves the expected optimal complexity. To this end, the next section is devoted to stating the basic probabilistic accuracy assumptions and definitions. In what follows, we use the notation $\mathbb{E}[X]$ to indicate the expected value of a random variable $X$. In addition, given a random event $A$, $Pr(A)$ denotes the probability of $A$, while $\mathbbm{1}_A$ refers to the indicator of the random event $A$ occurring (i.e. $\mathbbm{1}_A(a)=1$ if $a\in A$, otherwise $\mathbbm{1}_A(a)=0$). The notation $A^c$ indicates the complement of the event $A$.
\subsection{Main assumptions on the stochastic ARC algorithm} For $k\ge 0$, to formalise the conditioning on the past, let $\mathcal{F}_{k-1}^{M}$ denote the $\hat{\sigma}$-algebra induced by the random variables $M_0$, $M_1$,..., $M_{k-1}$, with $\mathcal{F}_{-1}^{M}=\hat{\sigma}(x_0)$.\\ We first consider the following definitions for measuring the accuracy of the model estimates.\\ \begin{defn}[Accurate model] \label{AccIk} A sequence of random models $\{M_k\}$ is said to be $p$-probabilistically sufficiently accurate for Algorithm \ref{algo}, with respect to the corresponding sequence $\{X_k,S_k,\Sigma_k,C_k\}$, if the event $I_k=I_k^{(1)}\cap I_k^{(2)}\cap I_k^{(3)}$, with \begin{eqnarray} I_k^{(1)}&=&\left\{\|\overline{\nabla f}(X_k)-\nabla f(X_k)\| \leq \kappa (1-\beta)^2 \left(\frac{\|\overline{\nabla f}(X_k) \|}{\Sigma_k}\right)^2,\quad \kappa>0\right\} , \label{AccG}\\ I_k^{(2)}&=&\left\{\|\overline{\nabla^2 f}(X_k)-\nabla^2 f(X_k)\|\le C_k\right\}, \label{AccH}\\ I_k^{(3)}&=& \left\{\|\overline{\nabla f}(X_k)\|\le \kappa_g, \quad \|\overline{\nabla^2 f}(X_k)\|\le \kappa_B, \quad \quad \kappa_g> 0,~ \kappa_B>0 \right \}, \label{BoundD} \end{eqnarray} satisfies \begin{eqnarray} Pr(I_k|\mathcal{F}_{k-1}^{M})=\mathbb{E}[\mathbbm{1}_{I_k}|\mathcal{F}_{k-1}^{M}]\ge p.\label{ProbIk} \end{eqnarray} \end{defn} \noindent What follows is an assumption regarding the nature of the stochastic information used by Algorithm \ref{algo}.\\ \begin{assumption}\label{AssAlg} We assume that the sequence of random models $\{M_k\}$, generated by Algorithm \ref{algo}, is $p$-probabilistically sufficiently accurate for some sufficiently high probability $p\in(0,1]$. \end{assumption} \section{Complexity analysis of the algorithm} For a given level of tolerance $\epsilon$, the aim of this section is to derive a bound on the expected number of iterations $\mathbb{E}[N_{\epsilon}]$ which is needed, in the worst case, to reach an $\epsilon$-approximate first-order stationary point.
Specifically, $N_{\epsilon}$ denotes a random variable corresponding to the number of steps required by the process until $\|\nabla f(X_k)\|\le \epsilon$ occurs for the first time, namely \begin{equation} \label{hittingtime} N_{\epsilon}=\inf \{k\ge 0~|~ \|\nabla f(X_k)\| \le \epsilon\}; \end{equation} indeed, $N_{\epsilon}$ can be seen as a stopping time for the stochastic process generated by Algorithm \ref{algo} (see \cite[Definition~2.1]{STR2}). \noindent The analysis follows the path of \cite{CartSche17}, but some results need to be proved anew, owing to the adopted accuracy requirements on the gradient and Hessian approximations and to failures in the sense of Step 4. It is preliminarily useful to sum up a series of existing lemmas from \cite{CartSche17} and \cite{IMA} and to derive some of their suitable extensions, which will be of paramount importance to perform the complexity analysis of our stochastic method. These lemmas are recalled in the following subsection. \subsection{Existing and preliminary results} \noindent We observe that each iteration $k$ of Algorithm \ref{algo} such that $\mathbbm{1}_{I_k}=1$ corresponds to an iteration of the ARC Algorithm $3.1$ in \cite{IMA}, before termination, except for the fact that in Algorithm \ref{algo} the model \eqref{m} is defined not only using inexact Hessian information, but also considering an approximate gradient. In particular, the nature of the accuracy requirement for the gradient approximation given by \eqref{AccG} is different from the one for the Hessian approximation, namely \eqref{AccH}. In fact, a realisation $c_k$ of the upper bound $C_k$ in \eqref{AccH}, needed to obtain an approximate Hessian $\overline{\nabla^2 f}(x_k)$, is determined by the mechanism of the algorithm and is available when forming the Hessian approximation $\overline{\nabla^2 f}(x_k)$.
On the other hand, \eqref{AccG} is an implicit condition and can be practically enforced by computing the gradient approximation within a prescribed absolute accuracy level, which is eventually reduced and the inexact gradient $\overline{\nabla f}(x_k)$ recomputed; in contrast with \cite[Algorithm $4.1$]{CartSche17}, however, this does not entail additional step computations, since the step is computed only once per iteration at Step $3$ of Algorithm \ref{algo}. We will see that, for any realisation of the algorithm, if the model is accurate, i.e. $\mathbbm{1}_{I_k}=1$, then there exist $\delta \ge 0$ and $ \xi_k> 0$ such that \begin{eqnarray} \|(\overline{\nabla f}(x_k)-\nabla f(x_k))s_k\| \le \delta \|s_k\|^3, \qquad \|(\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k))s_k\| \le \xi_k \|s_k\|^2,\nonumber \end{eqnarray} which will be fundamental to recover the optimal complexity. In this regard, let us consider the following definitions and state the lemma below.\\ \begin{defn}\label{defset} With reference to Algorithm \ref{algo}, for all $0\le k\le l$, $l\in\{0,...,N_{\epsilon}-1\}$, we define the events \vspace{0.1cm} \end{defn} \begin{itemize} \item $\mathcal{S}_k=\{\textrm{iteration~}k~\textrm{is successful}\}$; \vspace{0.1cm} \item $\mathcal{U}_{k,1}=\{\textrm{iteration~}k~\textrm{is unsuccessful}~\textrm{in~the~sense~of~Step}~5\}$; \vspace{0.1cm} \item $\mathcal{U}_{k,2}=\{\textrm{iteration~}k~\textrm{is unsuccessful}~\textrm{in~the~sense~of~Step}~4\}$. \end{itemize} \noindent We underline that if $k\in {\cal U}_{k,1}$ then $\rho_k<\eta$, while $k\in {\cal U}_{k,2}$ if and only if $\| s_k \|< 1$, ${\rm flag}=1$ and $c>\alpha(1-\beta)\|\overline{\nabla f}(x_k)\|$. Moreover, if $\rho_k<\eta$ and a failure in Step 4 does not occur, then $k\in {\cal U}_{k,1}$. \llem{}{\label{Lemmagk} Consider any realisation of Algorithm \ref{algo}.
Then, at each iteration $k$ such that $\mathbbm{1}_{I_k^{(1)}\cap I_k^{(3)}}=1$ (accurate gradient and bounded inexact derivatives) we have \begin{equation} \label{uppboundkgrad} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\le \delta \|s_k\|^2,\qquad \delta \eqdef \kappa \left(\frac{\kappa_B}{\sigma_{\min}}+1\right)\max\left[\frac{\kappa_g}{\sigma_{\min}}, \frac{\kappa_B}{\sigma_{\min}}+1\right], \end{equation} and, thus, \begin{eqnarray} \|(\overline{\nabla f}(x_k)-\nabla f(x_k))s_k\|\le \delta \|s_k\|^3\label{keygrad}. \end{eqnarray} } \begin{proof} Let us consider $k$ such that $\mathbbm{1}_{I_k^{(1)}\cap I_k^{(3)} }=1$. Using \eqref{tc} we obtain \begin{eqnarray} \beta \|\overline{\nabla f}(x_k)\|&\ge& \|\nabla_s m(x_k, s_k, \sigma_k)\| =\left\| \, \overline{\nabla f}(x_k) + \overline{\nabla^2 f}(x_k) s_k +\sigma_k s_k \|s_k\| \, \right\| \nonumber \\ & \ge& \|\overline{\nabla f}(x_k) \| - \|\overline{\nabla^2 f}(x_k)\|\, \| s_k \| -\sigma_k \| s_k\|^2. \label{dis} \end{eqnarray} \noindent We can then distinguish between two different cases. If $\|s_k\| \ge 1$, then $\|s_k\|\le \|s_k\|^2$ and, from \eqref{dis} and \eqref{BoundD}, we have that \[ \beta \|\overline{\nabla f}(x_k)\|\ge \|\overline{\nabla f}(x_k) \| - \|\overline{\nabla^2 f}(x_k)\|\, \| s_k \|^2-\sigma_k \| s_k\|^2 \ge \|\overline{\nabla f}(x_k) \| - (\kappa_B+\sigma_k) \| s_k\|^2 \] which is equivalent to \[ \| s_k\|^2 \ge \frac{(1-\beta)\|\overline{\nabla f}(x_k)\|}{\kappa_B+\sigma_k}.
\] Consequently, by \eqref{AccG} and \eqref{BoundD} \begin{eqnarray} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\| &\le& \kappa \left (\frac{1-\beta}{\sigma_k}\right )^2 \|\overline{\nabla f}(x_k)\|^2\le \frac{\kappa \kappa_g (1-\beta)^2 \|\overline{\nabla f}(x_k)\|}{\sigma_k^2\| s_k \|^2}\|s_k \|^2\nonumber \\ &\le& \kappa \kappa_g (1-\beta) \frac{\kappa_B+\sigma_k}{\sigma_k^2}\| s_k \|^2 \le \kappa \frac{\kappa_g}{\sigma_{\min}}\left( \frac{\kappa_B}{\sigma_{\min}}+1\right)\| s_k \|^2 ,\label{uppboundagr0} \end{eqnarray} where in the last inequality we have used that $\beta \in (0,1)$ and $\sigma_k\ge \sigma_{\min}$. If, instead, $\|s_k\| < 1$, inequalities \eqref{dis} and \eqref{BoundD} lead to \[ \beta \|\overline{\nabla f}(x_k)\|\ge \|\overline{\nabla f}(x_k) \| - \|\overline{\nabla^2 f}(x_k)\|\, \| s_k \|-\sigma_k \| s_k\| \ge \|\overline{\nabla f}(x_k) \|-(\kappa_B+\sigma_k)\|s_k\|, \] obtaining that \begin{equation} \label{undersk} \| s_k\| \ge \frac{(1-\beta)\|\overline{\nabla f}(x_k)\|}{\kappa_B+\sigma_k}. \end{equation} Hence, by squaring both sides in the above inequality and using \eqref{AccG}, $\beta \in (0,1)$ and $\sigma_k\ge \sigma_{\min}$, we obtain \begin{eqnarray} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\| &\le& \kappa \left(\frac{1-\beta}{\sigma_k}\right)^2 \|\overline{\nabla f}(x_k)\|^2= \frac{\kappa (1-\beta)^2 \|\overline{\nabla f}(x_k)\|^2}{\sigma_k^2\| s_k \|^2}\|s_k \|^2\nonumber \\ &\le& \kappa\left(\frac{\kappa_B+\sigma_k}{\sigma_k}\right)^2 \|s_k \|^2 \le \kappa\left(\frac{\kappa_B}{\sigma_{\min}}+1\right)^2 \|s_k \|^2 .\label{uppboundagr00} \end{eqnarray} \noindent Inequality \eqref{uppboundkgrad} then follows by virtue of \eqref{uppboundagr0} and \eqref{uppboundagr00}, while \eqref{keygrad} stems from \eqref{uppboundkgrad} by means of the Cauchy--Schwarz inequality. \end{proof} \noindent The following Lemma is a slight modification of \cite[Lemma~3.1]{IMA}.
\llem{}{\label{LemmaCk} Consider any realisation of Algorithm \ref{algo} and assume that $c\ge\alpha(1-\beta)\kappa_g$. Then, at each iteration $k$ such that $\mathbbm{1}_{I_k^{(2)}\cap I_k^{(3)}}(1-\mathbbm{1}_{\mathcal{U}_{k,2}})=1$ (successful or unsuccessful in the sense of Step $5$, with accurate Hessian and bounded inexact derivatives) we have \begin{equation} \label{uppboundk} \|\overline{\nabla^2f}(x_k)-\nabla^2 f(x_k)\|\le c_k\le \xi_k \|s_k\|,\qquad \xi_k\eqdef \max[c,\alpha(\kappa_B+\sigma_k)], \end{equation} and, thus, \begin{eqnarray} \|(\overline{\nabla^2f}(x_k)-\nabla^2 f(x_k))s_k\|\le \xi_k\|s_k\|^2\label{key}. \end{eqnarray} } \begin{proof} Let us consider $k$ such that $\mathbbm{1}_{I_k^{(2)} \cap I_k^{(3)}}(1-\mathbbm{1}_{\mathcal{U}_{k,2}})=1$. Algorithm \ref{algo} ensures that, if $\|s_k\|\ge 1$, then $c_k=c$ or \begin{equation} \label{ckgrad} c_k=\alpha(1-\beta)\|\overline{\nabla f}(x_k)\|. \end{equation} Trivially, \eqref{ckgrad}, $\|s_k\|\ge 1$ and \eqref{BoundD} give \begin{equation} \label{uppboundagr1} \|\overline{\nabla^2f}(x_k)-\nabla^2 f(x_k)\|\le c_k\le \max[c,\alpha(1-\beta)\|\overline{\nabla f}(x_k)\|]\le \max[c,\alpha(1-\beta)\kappa_g]\le c\|s_k\|, \end{equation} where we have considered the assumption $c\ge\alpha(1-\beta)\kappa_g$. On the other hand, Step $4$ guarantees the choice \begin{equation} \label{ckgrad_bound} c_k\le \alpha(1-\beta)\|\overline{\nabla f}(x_k)\|, \end{equation} when $\|s_k\|< 1$. In this case, inequality \eqref{undersk} still holds. Thus, \begin{equation} \label{uppboundagr2} \|\overline{\nabla^2f}(x_k)-\nabla^2 f(x_k)\| \le c_k = \frac{c_k}{\| s_k \|}\|s_k \| \le \frac{c_k(\kappa_B+\sigma_k)}{(1-\beta)\| \overline{\nabla f}(x_k) \|}\| s_k \|\le \alpha (\kappa_B+\sigma_k)\|s_k\|, \end{equation} where the last inequality is due to \eqref{ckgrad_bound}. Finally, \eqref{uppboundagr1} and \eqref{uppboundagr2} imply \eqref{uppboundk}, while \eqref{key} follows from \eqref{uppboundk} using the consistency of the matrix and vector norms.
\end{proof} \noindent The next lemma bounds the decrease of the objective function on successful iterations, irrespective of the satisfaction of the accuracy requirements for gradient and Hessian approximations. \llem{}{Consider any realisation of Algorithm \ref{algo}. At each iteration $k$ we have \begin{equation} \label{Tdecr} \overline T_2(x_k,0)-\overline T_2(x_k,s_k)> \frac{\sigma_k}{3}\|s_k\|^3\ge \frac{\sigma_{\min}}{3}\|s_k\|^3>0. \end{equation} Hence, on every successful iteration $k$: \begin{equation}\label{Tdecrsucc} f(x_k)-f(x_{k+1})> \eta \frac{\sigma_k}{3} \|s_k\|^3\ge \eta \frac{\sigma_{\min}}{3}\|s_k\|^3>0. \end{equation} } \begin{proof} We first notice that, by \eqref{mdecr}, we have that $\|s_k\|\neq 0$. Moreover, Lemma 2.1 in \cite{Toint1} coupled with (\ref{m}) yields \eqref{Tdecr}. The second part of the thesis is easily proved taking into account that, if $k$ is successful, then \eqref{Tdecr} implies \[ f(x_k)-f(x_{k+1})\ge \eta(\overline T_2(x_k,0)-\overline T_2(x_k,s_k))>\eta \frac{\sigma_k}{3}\|s_k\|^3. \] \end{proof} \noindent As a corollary, since $x_{k+1}=x_k$ on each unsuccessful iteration $k$, for any realisation of Algorithm \ref{algo} we have that \[ f(x_k)-f(x_{k+1})\ge 0. \] \noindent We now show that, if the model is accurate, there exists a constant $\overline \sigma>0$ such that an iteration is successful or unsuccessful in the sense of Step 4 ($\mathbbm{1}_{I_k}(1-\mathbbm{1}_{\mathcal{U}_{k,1}})=1$), whenever $\sigma_k\ge \overline \sigma$. In other words, it is an iteration at which the regulariser is not increased. \llem{}{\label{Lemmasigmabar} Let Assumption \ref{Assf} (ii) hold. Let $\delta$ be given in \eqref{uppboundkgrad}, assume $c\ge\alpha(1-\beta)\kappa_g$ and the validity of \eqref{LipHess}.
For any realisation of Algorithm \ref{algo}, if the model is accurate and \begin{equation}\label{sigmabar} \sigma_k\ge \overline{\sigma}\eqdef \max\left[ \frac{6\delta+3\alpha\kappa_B+L_H}{2(1-\eta)-3\alpha},\frac{6\delta+3c+L_H}{2(1-\eta)}\right]>0, \end{equation} then the iteration $k$ is successful or a failure in the sense of Step 4 occurs.} \begin{proof} Let us consider an iteration $k$ such that $\mathbbm{1}_{I_k}=1$ and recall the definition of $\rho_k$ in \eqref{rhokdef2}. Assume that a failure in the sense of Step 4 does not occur. If $\rho_k-1\ge 0$, then iteration $k$ is successful by definition. We can thus focus on the case in which $\rho_k-1< 0$. In this situation, the iteration $k$ is successful provided that $1-\rho_k\le 1-\eta$. From \eqref{LipHess} and the Taylor expansion of $f$ centered at $x_k$ with increment $s$ it first follows that \begin{equation} \label{Taylorfxksk} f(x_k+s) \le f(x_k)+ \nabla f(x_k)^\top s+\frac{1}{2}s^\top\nabla^2 f(x_k)s+\frac{L_H}{6}\|s\|^3. \end{equation} Therefore, since $\mathbbm{1}_{I_k}=1$, \begin{eqnarray} f(x_k+s_k)-\overline T_2(x_k,s_k)&\le&(\nabla f(x_k)-\overline{\nabla f}(x_k))^\top s_k+\frac12 s_k^\top(\nabla^2 f(x_k)-\overline{\nabla^2 f}(x_k))s_k+\frac{L_H}{6}\|s_k\|^3\nonumber\\ &\le& \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\|s_k\|+\frac12\|\overline{\nabla^2 f}(x_k)-\nabla^2 f(x_k)\|\|s_k\|^2+\frac{L_H}{6}\|s_k\|^3\nonumber\\ &\le& \left(\delta+\frac{L_H}{6}+\frac{\xi_k}{2}\right)\|s_k\|^3,\label{uppf-T} \end{eqnarray} where we have used \eqref{uppboundkgrad} and \eqref{uppboundk}. Thus, by \eqref{uppf-T} and \eqref{Tdecr}, \[ 1-\rho_k=\frac{f(x_k+s_k)-\overline T_2(x_k,s_k)}{\overline T_2(x_k,0)-\overline T_2(x_k,s_k)}<\frac{ \left(6\delta+3\xi_k+L_H\right)\|s_k\|^3}{2\sigma_k\|s_k\|^3}=\frac{ 6\delta+3\xi_k+L_H}{2\sigma_k}. \] Depending on the maximum in the definition of $\xi_k$ in \eqref{uppboundk}, two different cases can then occur.
If $\xi_k=c$, then $1-\rho_k \le 1-\eta$, provided that \[ \sigma_k\ge\frac{6\delta+3c+L_H}{2(1-\eta)}. \] Otherwise, if $c<\alpha(\kappa_B+\sigma_k)$, so that $\xi_k=\alpha(\kappa_B+\sigma_k)$, then \[ 1-\rho_k<\frac{ 6\delta+3\alpha(\kappa_B+\sigma_k)+L_H}{2\sigma_k}\le 1-\eta, \] provided that \[ \sigma_k\ge\frac{6\delta+3\alpha \kappa_B+L_H}{2(1-\eta)-3\alpha}. \] In conclusion, iteration $k$ is successful if \eqref{sigmabar} holds. Note that $\overline \sigma$ is positive because of the ranges allowed for $\eta$ and $\alpha$ in \eqref{initialconsts}. \end{proof} \noindent Using some of the results from the proof of the previous lemma, we can now prove the following lemma, giving a crucial relation between the step length $\|s_k\|$ and the true gradient norm $\|\nabla f(x_k+s_k)\|$ at the next iteration. \llem{}{\label{Lemmapass}Let Assumption \ref{Assf} (ii)-(iii) hold and assume $c\ge\alpha(1-\beta)\kappa_g$. For any realisation of Algorithm \ref{algo}, at each iteration $k$ such that $\mathbbm{1}_{I_k}(1-\mathbbm{1}_{\mathcal{U}_{k,2}})=1$ (accurate iterations that are either successful or unsuccessful in the sense of Step 5), we have \begin{equation}\label{passtoeps} \|s_k \|\ge \sqrt{\nu_k\|\nabla f(x_k+s_k)\|}, \end{equation} for some positive $\nu_k$, whenever $s_k$ satisfies (\ref{tcsub}). Moreover, \eqref{passtoeps} holds even in case $s_k$ satisfies (\ref{tc.s}) provided that there exists $L_g>0$ such that \begin{equation}\label{Lipgrad} \|\nabla f(x)-\nabla f(y)\| \le L_g \| x-y\|, \end{equation} for all $x$, $y\in\mathbb{R}^n$. } \begin{proof} Let us consider an iteration $k$ such that $\mathbbm{1}_{I_k}(1-\mathbbm{1}_{\mathcal{U}_{k,2}})=1$.
From the Taylor series of $\nabla f(x)$ centered at $x_k$ with increment $s$, and the definition of the model \eqref{m}, proceeding as in the proof of Lemma 4.1 in \cite{IMA} we obtain \begin{eqnarray} \|\nabla f(x_k+s_k)-\nabla_s \overline T_2(x_k,s_k)\| &\le& \|\nabla f(x_k)-\overline{\nabla f}(x_k) \| + \| (\nabla^2 f(x_k)-\overline{\nabla^2 f}(x_k))s_k \|\nonumber \\ &~~~+& \int_0^1 \|\nabla^2 f(x_k+\tau s_k)-\nabla^2 f(x_k)\| \|s_k\|\,d\tau\nonumber\\ &\le& \left(\delta+\xi_k+\frac{L_H}{2}\right)\|s_k\|^2,\label{normf-T} \end{eqnarray} where we have used \eqref{uppboundkgrad}, \eqref{key} and \eqref{LipHess}. Moreover, since $\nabla_s m(x_k, s_k, \sigma_k)=\nabla_s \overline T_2(x_k,s_k)+\sigma_k\| s_k\| s_k$, it follows: \begin{eqnarray} \label{normfkk} \|\nabla f(x_k+s_k) \| \le \| \nabla f(x_k+s_k)-\nabla_s \overline T_2(x_k,s_k)\| + \| \nabla_s m(x_k, s_k, \sigma_k)\| + \sigma_k\| s_k\|^2. \end{eqnarray} As a consequence, the thesis follows from \eqref{normf-T}--\eqref{normfkk} with \begin{equation} \label{zetak1} \nu_k^{-1}=\left(\delta+\xi_k+\frac{L_H}{2}+\beta+\sigma_k\right)>0, \end{equation} when the stopping criterion \eqref{tcsub} is considered. Assume now that \eqref{tc.s} is used for Step $3$ of Algorithm \ref{algo}. Inequalities \eqref{uppboundkgrad} and \eqref{Lipgrad} imply that \begin{eqnarray} \| \overline{\nabla f}(x_k)\| &\le& \|\overline{\nabla f}(x_k)-\nabla f(x_k) \| + \| \nabla f(x_k) - \nabla f(x_k+s_k) \| + \| \nabla f(x_k+s_k) \| \nonumber\\ &\le& \delta\|s_k\|^2+L_g\|s_k\|+\| \nabla f(x_k+s_k) \|.\label{boundinexgrad} \end{eqnarray} By using \eqref{normf-T}--\eqref{normfkk} and plugging \eqref{boundinexgrad} into \eqref{tc.s}, we finally have \[ \| \nabla f(x_k+s_k) \|(1-\beta)\le \left[ (1+\beta)\delta+\xi_k+\frac{L_H}{2}+ \beta L_g+\sigma_k \right]\|s_k\|^2, \] which is equivalent to \eqref{passtoeps}, with \begin{equation} \label{zetak2} \nu_k=\frac{1-\beta}{ (1+\beta)\delta+\xi_k+L_H/2+ \beta L_g+\sigma_k }>0.
\end{equation} \end{proof} \noindent It is worth noticing that the global Lipschitz continuity of the gradient, namely, \eqref{Lipgrad}, is needed only when condition (\ref{tc.s}) is used in Step 3 of Algorithm \ref{algo}. \noindent We finally recall a result from \cite{CartSche17} that will be of key importance to carry out the complexity analysis addressed in the following two subsections. \llem{}{\cite[Lemma~2.1]{CartSche17} Let $N_{\epsilon}$ be the hitting time defined as in \eqref{hittingtime}. For all $k<N_{\epsilon}$, let $\{I_k\}$ be the sequence of events in Definition \ref{AccIk} so that \eqref{ProbIk} holds. Let $\mathbbm{1}_{W_k}$ be a nonnegative stochastic process such that $\hat{\sigma}(\mathbbm{1}_{W_k})\subseteq \mathcal{F}_{k-1}^M$, for any $k\ge 0$. Then, \[ \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{W_k}\mathbbm{1}_{I_k}\right]\ge p \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1} \mathbbm{1}_{W_k}\right]. \] Similarly, \[ \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{W_k}(1-\mathbbm{1}_{I_k})\right]\le(1-p)\mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1} \mathbbm{1}_{W_k}\right]. \] \label{CSlemma21} } \subsection{Bound on the expected number of steps with \boldmath$\Sigma_k\ge \overline{\sigma}$} In this section we derive an upper bound for the expected number of steps in the process generated by Algorithm \ref{algo} with $\Sigma_k\ge \overline{\sigma}$. 
Given $l\in\{0,...,N_{\epsilon}-1\}$, for all $0\le k\le l$, let us define the event \[\Lambda_k=\{\textrm{iteration~}k~ \textrm{is such that} ~\Sigma_k< \overline{\sigma}\}\] and let \begin{equation} \label{defnsigma} N_{\overline{\sigma}}\eqdef \sum_{k=0}^{N_{\epsilon}-1}(1-\mathbbm{1}_{\Lambda_k}),\qquad N_{\overline{\sigma}}^{^C}\eqdef \sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{{\Lambda}_k}, \end{equation} be the number of steps, in the stochastic process induced by Algorithm \ref{algo}, with $\Sigma_k\ge \overline{\sigma}$ and $\Sigma_k< \overline{\sigma}$, before $N_{\epsilon}$ is met, respectively. In what follows we consider the validity of Assumption \ref{Assf}, Assumption \ref{AssAlg} and the following assumption on $\Sigma_0$.\\ \begin{assumption}\label{Asssigma0} With reference to the stochastic process generated by Algorithm \ref{algo} and the definition of $\overline{\sigma}$ in \eqref{sigmabar}, we assume that \begin{equation} \label{Sigma0} \Sigma_0=\gamma^{-i} \overline{\sigma}, \end{equation} for some positive integer $i$. We additionally assume that $c\ge\alpha(1-\beta)\kappa_g$. \end{assumption} \noindent By referring to Lemma \ref{CSlemma21} and some additional lemmas from \cite{CartSche17}, we can first obtain an upper bound on $\mathbb{E}[N_{\overline{\sigma}}]$. In particular, rearranging \cite[Lemma~2.2]{CartSche17}, given a generic iteration $l$, we derive a bound on the number of iterations that are successful or unsuccessful in the sense of Step 4, in terms of the overall number of iterations $l+1$. In this regard, we underline that, in case of an unsuccessful iteration in the sense of Step 4, the value of $\Sigma_k$ is not modified; moreover, such an iteration occurs at most once between two successful iterations (not necessarily consecutive) whose first has a step of norm not smaller than one, or once before the first successful iteration of the process (since ${\rm flag}$ is initially 1).
In fact, a failure in the sense of Step 4 may occur only if ${\rm flag}=1$ and, except at the first iteration, ${\rm flag}$ can be set to one only at the end of a successful iteration with $\|s_k\|\ge 1$ (see Step 5 of Algorithm \ref{algo}). If the case ${\rm flag}=1$ and $\|s_k\| < 1$ occurs, then ${\rm flag}$ is set to zero, preventing a failure in Step 4 at the subsequent iteration, and it is not changed again until a subsequent successful iteration. \llem{}{Assume that $\Sigma_0< \overline{\sigma}$. Given $l\in\{0,...,N_{\epsilon}-1\}$, for all realisations of Algorithm \ref{algo}, \[ \sum_{k=0}^{l}\left(1-\mathbbm{1}_{\Lambda_k}\right) \mathbbm{1}_{\mathcal{S}_k\cup \mathcal{U}_{k,2}}\le \frac23 (l+1). \] \label{CSlemma22} } \begin{proof} Each iteration $k$ such that $(1-\mathbbm{1}_{\Lambda_k})\mathbbm{1}_{\mathcal{S}_k\cup \mathcal{U}_{k,2}}=1$ is an iteration with $\Sigma_k\ge\overline{\sigma}$ that can be either a successful iteration, leading to $\Sigma_{k+1}=\max[\sigma_{\min},\frac1\gamma \Sigma_k]$ ($\Sigma_k$ is decreased), or an unsuccessful iteration in the sense of Step $4$. In the latter case, $\Sigma_k$ is left unchanged ($\Sigma_{k+1}=\Sigma_k$). Moreover, $\Sigma_k$ is decreased on successful iterations and increased on iterations that are unsuccessful in the sense of Step $5$, in both cases by the same factor $\gamma$. More in depth, since $\Sigma_0< \overline{\sigma}$, we have two possible scenarios. In the first one we have $\Sigma_k< \overline{\sigma}$, $k=0,\ldots,l$, and the thesis obviously follows. In the second scenario there exist at least one index $k$ such that $\Sigma_k\ge \overline \sigma$ and at least one unsuccessful iteration $j\in \{0,\ldots,k-1\}$ at which the regulariser has been increased by the factor $\gamma$. In case $\mathbbm{1}_{ \mathcal{U}_{k,2}}=1$, $\Sigma_k$ is left unchanged, ${\rm flag}$ is set to $0$ and $\mathbbm{1}_{ \mathcal{U}_{k+1,2}}=0$.
Then, to any iteration $j$ such that $\mathbbm{1}_{ \mathcal{U}_{j,1}}=1$ there correspond at most one successful iteration and one unsuccessful iteration in the sense of Step 4 with $\Sigma_k\ge\overline{\sigma}$, and this yields the thesis. \end{proof} \noindent We note that in the stochastic ARC method in \cite{CartSche17} each iteration can be successful or unsuccessful according to the satisfaction of the decrease condition $\rho_k\ge \eta$. On the contrary, in Algorithm \ref{algo} failures in Step 4 may also occur, and this yields the bound $2/3 (l+1)$ in Lemma \ref{CSlemma22}, while the corresponding bound in \cite{CartSche17} is $1/2 (l+1)$. \noindent As in \cite{CartSche17}, we note that $\hat \sigma(\mathbbm{1}_{\Lambda_k})\subseteq \mathcal{F}_{k-1}^M$, that is, the event $\Lambda_k$ is fully determined by the first $k-1$ iterations of Algorithm \ref{algo}. Then, setting $l=N_{\epsilon}-1$ we can rely on Lemma \ref{CSlemma21} (with $W_k=\Lambda_k^c$) to deduce that \begin{equation} \label{CSlemma21f} \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1} (1-\mathbbm{1}_{\Lambda_k})\mathbbm{1}_{I_k} \right]\ge p \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1} (1-\mathbbm{1}_{\Lambda_k})\right]. \end{equation} \noindent Considering the bound in Lemma \ref{CSlemma22} and the fact that Lemma \ref{Lemmasigmabar} and the mechanism of Step $4$ in Algorithm \ref{algo} ensure that each iteration $k$ such that $\mathbbm{1}_{I_k}=1$ with $\Sigma_k\ge \overline{\sigma}$ can be successful or unsuccessful in the sense of Step $4$ (i.e., $\mathbbm{1}_{\mathcal{S}_k\cup \mathcal{U}_{k,2}}=1$), we have that \[ \sum_{k=0}^{N_{\epsilon}-1} (1-\mathbbm{1}_{\Lambda_k})\mathbbm{1}_{I_k}\le \sum_{k=0}^{N_{\epsilon}-1} (1-\mathbbm{1}_{\Lambda_k})\mathbbm{1}_{\mathcal{S}_k\cup \mathcal{U}_{k,2}}\le \frac23N_{\epsilon}.
\] Taking expectation in the above inequality and recalling the definition of $N_{\overline{\sigma}}$ in \eqref{defnsigma}, from \eqref{CSlemma21f} we conclude that \begin{equation} \label{bound_Ns} \mathbb{E}[N_{\overline{\sigma}}] \le \frac{2}{3p}\mathbb{E}[N_{\epsilon}]. \end{equation} The remaining bound for $ \mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]$ will be derived in the next subsection. \subsection{Bound on the expected number of steps with \boldmath$\Sigma_k< \overline{\sigma}$} Let us now obtain an upper bound for $ \mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]$, with $N_{\overline{\sigma}}^{^C}$ defined in \eqref{defnsigma}. To this purpose, the following additional definitions are needed. \vspace{0.2cm} \begin{defn} Let ${\cal U}_{k,1}$, ${\cal U}_{k,2}$ and ${\cal S}_k$ be as defined in Definition \ref{defset}. With reference to the process \eqref{sprocess} generated by Algorithm \ref{algo} let us define: \vspace{0.2cm} \begin{itemize} \item the event $\overline{\Lambda}_k=\{\textrm{iteration~}k~ \textrm{is such that} ~\Sigma_k\le \overline{\sigma}\}$, i.e., $\overline{\Lambda}_k$ is the closure of $\Lambda_k$.
\vspace{0.2cm} \item $M_1=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}(1-\mathbbm{1}_{I_k})$: number of inaccurate iterations with $\Sigma_k\le \overline{\sigma}$; \vspace{0.2cm} \item $M_2=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{I_k}$: number of accurate iterations with $\Sigma_k\le \overline{\sigma}$; \vspace{0.1cm} \item $N_1=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{I_k}\mathbbm{1}_{\mathcal{S}_k}$: number of accurate successful iterations with $\Sigma_k\le \overline{\sigma}$; \vspace{0.2cm} \item $N_2=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{I_k}\mathbbm{1}_{\mathcal{U}_{k,2}}$: number of accurate unsuccessful iterations, in the sense of $\textrm{Step}~4$, with $\Sigma_k\le \overline{\sigma}$; \vspace{0.2cm} \item $N_3=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\Lambda_k}\mathbbm{1}_{I_k}\mathbbm{1}_{\mathcal{U}_{k,1}}$: number of accurate unsuccessful iterations, in the sense of $\textrm{Step}~5$, with $\Sigma_k< \overline{\sigma}$; \vspace{0.2cm} \item $M_3=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}(1-\mathbbm{1}_{I_k})\mathbbm{1}_{\mathcal{S}_k}$: number of inaccurate successful iterations, with $\Sigma_k\le \overline{\sigma}$; \vspace{0.2cm} \item $S=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{\mathcal{S}_k}$: number of successful iterations, with $\Sigma_k\le \overline{\sigma}$; \vspace{0.2cm} \item $H=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\mathcal{U}_{k,2}}$: number of unsuccessful iterations in the sense of Step $4$; \vspace{0.2cm} \item $U=\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\Lambda_k}\mathbbm{1}_{\mathcal{U}_{k,1}}$: number of unsuccessful iterations, in the sense of Step $5$, with $\Sigma_k< \overline{\sigma}$. 
\end{itemize} \label{def2} \end{defn} \vspace{0.2cm} \noindent It is worth noting that an upper bound on $\mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]$ is given, once an upper bound on $\mathbb{E}[M_1]+\mathbb{E}[M_2]$ is provided, since \begin{equation} \label{PlanEN} \mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]\le \mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k} \right]=\mathbb{E}\left[ \sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}(1-\mathbbm{1}_{I_k})+\sum_{k=0}^{N_{\epsilon}-1}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{I_k}\right]=\mathbb{E}[M_1]+\mathbb{E}[M_2], \end{equation} where $M_1$ and $M_2$ are given in Definition \ref{def2}. Following \cite{CartSche17}, to bound $\mathbb{E}[M_1]$ we can still refer to the central Lemma \ref{CSlemma21} (with $W_k=\overline{\Lambda}_k$), of which the result stated below is a direct consequence. \llem{}{\cite[Lemma~2.6]{CartSche17} With reference to the stochastic process \eqref{sprocess} generated by Algorithm \ref{algo} and the definitions of $M_1$, $M_2$ in Definition \ref{def2}, \begin{equation} \label{EM1} \mathbb{E}[M_1]\le \frac{1-p}{p} \mathbb{E}[M_2]. \end{equation} } \noindent Concerning the upper bound for $\mathbb{E}[M_2]$ we observe that \begin{equation} \label{defEM2} \mathbb{E}[M_2]=\sum_{i=1}^3 \mathbb{E}[N_i]\le \mathbb{E}[N_1]+ \mathbb{E}[N_2]+ \mathbb{E}[U]. \end{equation} \noindent In the following lemma we provide upper bounds for $N_1$ and $N_2$, given in Definition \ref{def2}. \llem{}{\label{LemmaN1N2} Let Assumption \ref{Assf} hold and assume that the stopping criterion \eqref{tcsub} is used to perform each Step $3$ of Algorithm \ref{algo}. With reference to the stochastic process \eqref{sprocess} induced by the algorithm, there exists $\kappa_s>0$ such that \begin{equation} \label{boundN1} N_1\le \kappa_s(f_0-f_{low})\epsilon^{-3/2}+1.
\end{equation} Moreover, in case the stopping criterion (\ref{tc.s}) is used in Step 3, \eqref{boundN1} still holds provided that there exists $L_g>0$ such that \eqref{Lipgrad} is satisfied for all $x$, $y\in\mathbb{R}^n$. Finally, if Assumption \ref{Assf} (i)-(ii) holds then, independently of the stopping criterion used to perform Step 3, there exists $\kappa_u>0$ such that \begin{equation} \label{boundN2} N_2\le \kappa_u(f_0-f_{low}). \end{equation} } \begin{proof} Taking into account that \eqref{Tdecrsucc} holds for each realisation of Algorithm \ref{algo} and that \eqref{passtoeps} is valid for each realisation of Algorithm \ref{algo} with $\mathbbm{1}_{I_k}(1-\mathbbm{1}_{\mathcal{U}_{k,2}})=1$, recalling that $f(X_k)=f(X_{k+1})$ for all $k\in\mathcal{U}_{k,1}\cup\mathcal{U}_{k,2}$ and setting $f_0\eqdef f(X_0)$, it follows that \[ \begin{split} f_0-f_{low}&\ge f_0-f(X_{N_{\epsilon}})=\sum_{k=0}^{N_{\epsilon}-1}(f(X_k)-f(X_{k+1}))\mathbbm{1}_{\mathcal{S}_k}\ge \sum_{k=0}^{N_{\epsilon}-1}\overbrace{\eta \frac{\sigma_{\min}}{3}\|S_k\|^3}^{> 0}\mathbbm{1}_{\mathcal{S}_k}\\ &\ge \sum_{k=0}^{N_{\epsilon}-2}\eta \frac{\sigma_{\min}}{3}\|S_k\|^3\mathbbm{1}_{\mathcal{S}_k}\mathbbm{1}_{I_k} \ge \sum_{k=0}^{N_{\epsilon}-2} \eta\frac{\sigma_{\min}}{3} \nu_k^{3/2}\| \nabla f(X_{k+1})\|^{3/2} \mathbbm{1}_{\mathcal{S}_k}\mathbbm{1}_{I_k} \\ &\ge \sum_{k=0}^{N_{\epsilon}-2} \eta\frac{\sigma_{\min}}{3} \nu^{3/2}\| \nabla f(X_{k+1})\|^{3/2} \mathbbm{1}_{\mathcal{S}_k}\mathbbm{1}_{I_k}\mathbbm{1}_{\overline{\Lambda}_k}\\ & \ge (N_1-1) \kappa_s^{-1}\epsilon^{3/2}, \end{split} \] in which $\nu_k$ is defined in \eqref{zetak1} when $s_k$ satisfies \eqref{tcsub} and in \eqref{zetak2} when $s_k$ satisfies \eqref{tc.s}, and \begin{equation}\label{defkappas} \kappa_s^{-1}\eqdef \eta\frac{\sigma_{\min}}{3} \nu^{3/2} \end{equation} where $$ \nu=\frac{1}{ \delta+\max[c,\alpha(\kappa_B+\overline{\sigma})]+L_H/2+ \beta+\overline{\sigma} }>0, $$ in case \eqref{tcsub} is used and $$ \nu= \frac{1-\beta}{
(1+\beta)\delta+\max[c,\alpha(\kappa_B+\overline{\sigma})]+L_H/2+ \beta L_g+\overline{\sigma} }>0, $$ whenever \eqref{tc.s} is adopted. Hence, \eqref{boundN1} holds. Moreover, an upper bound for $N_2$ can be obtained taking into account that, as already noticed, an iteration $k\ge 1$ in the process such that $\mathbbm{1}_{\mathcal{U}_{k,2}}=1$ occurs at most once between two successful iterations, the first of which has a trial step of norm not smaller than $1$, and at most once before the first successful iteration in the process (since in Algorithm \ref{algo} flag is initialised at $1$). Therefore, by means of \eqref{Tdecrsucc}, \[ \begin{split} f_0-f_{low}&\ge f_0-f(X_{N_{\epsilon}})=\sum_{k=0}^{N_{\epsilon}-1}(f(X_k)-f(X_{k+1}))\mathbbm{1}_{\mathcal{S}_k}\ge \sum_{\begin{small}\begin{array}{c} k=0\\ \|S_k\|\ge 1\end{array}\end{small}}^{N_{\epsilon}-1}(f(X_k)-f(X_k+S_k))\mathbbm{1}_{\mathcal{S}_k}\\ &\ge \eta \frac{\sigma_{\min}}{3} \sum_{\begin{small}\begin{array}{c} k=0\\ \|S_k\|\ge 1\end{array}\end{small}}^{N_{\epsilon}-1} \mathbbm{1}_{\mathcal{S}_k} \|S_k\|^3\ge \kappa_u^{-1}H, \end{split} \] where $H$ denotes (see Definition \ref{def2}) the number of unsuccessful iterations in the sense of Step $4$. Then, since $H\ge N_2$, \eqref{boundN2} follows. \end{proof} \noindent An upper bound for $U$ can still be derived using \cite[Lemma~2.5]{CartSche17}, provided that \eqref{Sigma0} holds. This is because the process induced by Algorithm \ref{algo} ensures that $\Sigma_k$ is decreased by a factor $\gamma$ on successful steps, increased by the same factor on unsuccessful ones in the sense of Step $5$ and \textit{left unchanged} if an unsuccessful iteration in the sense of Step $4$ occurs. \llem{}{\cite[Lemma~2.5]{CartSche17} Consider the validity of \eqref{Sigma0}.
For any $l\in\{0,...,N_{\epsilon}-1\}$ and for all realisations of Algorithm \ref{algo}, we have that \[ \sum_{k=0}^l \mathbbm{1}_{\Lambda_k}\mathbbm{1}_{\mathcal{U}_{k,1}}\le \sum_{k=0}^{l}\mathbbm{1}_{\overline{\Lambda}_k}\mathbbm{1}_{\mathcal{S}_k}+\log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right). \] } \noindent Consequently, considering $l=N_{\epsilon}-1$ and Definition \ref{def2}, \begin{equation} \label{boundU} U\le S+ \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)=N_1+M_3+\log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right). \end{equation} We underline that the right-hand side in \eqref{boundU} involves $M_3$, which has not been bounded yet. To this aim we can proceed as in \cite{CartSche17}, obtaining that \begin{equation} \label{boundEM3} \mathbb{E}[M_3]\le \frac{1-p}{2p-1}\left(2\mathbb{E}[N_1] + \mathbb{E}[N_2] + \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)\right). \end{equation} In fact, recalling the definition of $M_3$ and \eqref{defEM2}, the inequality \eqref{EM1} implies that \begin{equation} \label{1boundEM3} \mathbb{E}[M_3] \le \mathbb{E}[M_1]\le \frac{1-p}{p} \mathbb{E}[M_2] \le \frac{1-p}{p}\left(\mathbb{E}[N_1] + \mathbb{E}[N_2] + \mathbb{E}[U] \right). \end{equation} Indeed, taking expectation in \eqref{boundU} and plugging it into \eqref{1boundEM3}, \[ \mathbb{E}[M_3] \le \frac{1-p}{p}\left(2\mathbb{E}[N_1] + \mathbb{E}[N_2] +\mathbb{E}[M_3] + \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right) \right), \] which yields \eqref{boundEM3}.
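\noindent Explicitly, for $p>1/2$ (a fortiori under the requirement $p>2/3$ of the forthcoming complexity theorem), the last display can be rearranged as \[ \left(1-\frac{1-p}{p}\right)\mathbb{E}[M_3]=\frac{2p-1}{p}\,\mathbb{E}[M_3]\le \frac{1-p}{p}\left(2\mathbb{E}[N_1] + \mathbb{E}[N_2] + \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)\right), \] and multiplying both sides by $\frac{p}{2p-1}>0$ gives \eqref{boundEM3}.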
The upper bound on $\mathbb{E}[M_2]$ then follows: \begin{eqnarray} \mathbb{E}[M_2]&\le&\mathbb{E}[N_1] + \mathbb{E}[N_2] + \mathbb{E}[U] \le 2\mathbb{E}[N_1] + \mathbb{E}[N_2] +\mathbb{E}[M_3]+ \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)\nonumber\\ &\le& \left(\frac{1-p}{2p-1}+1\right)\left(2\mathbb{E}[N_1] + \mathbb{E}[N_2] + \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)\right)\nonumber\\ &=& \frac{p}{2p-1}\left(2\mathbb{E}[N_1] + \mathbb{E}[N_2] + \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)\right)\nonumber\\ &\le& \frac{p}{2p-1}\left[ (f_0-f_{low})\left(2\kappa_s\epsilon^{-3/2}+\kappa_u\right)+ \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)+2 \right],\label{EM2} \end{eqnarray} in which we have used \eqref{defEM2}, \eqref{boundN2}, \eqref{boundN1}, \eqref{boundU} and \eqref{boundEM3}. Therefore, recalling \eqref{PlanEN} and \eqref{EM1}, we obtain that \begin{equation} \label{bound_Nsc} \mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]\le \frac{1}{p}\mathbb{E}[M_2]\le \frac{1}{2p-1}\left[ (f_0-f_{low})\left(2\kappa_s\epsilon^{-3/2}+\kappa_u\right)+ \log_{\gamma}\left( \frac{\overline{\sigma}}{\sigma_0} \right)+2 \right], \end{equation} where the last inequality follows from \eqref{EM2}. We are now in a position to state our final result, providing the complexity of the stochastic method associated with Algorithm \ref{algo}, in accordance with the complexity bounds given by the deterministic analysis of an ARC framework with exact \cite{Toint1} and inexact \cite{ARC2, CGToint, CGTIMA, IMA, BellGuriMoriToin19} function and/or derivatives evaluations.\\ \begin{theorem} {\label{ThSComplexity}Let Assumptions \ref{Assf} and \ref{Asssigma0} hold. Assume that Assumption \ref{AssAlg} holds with $p>2/3$ and that the stopping criterion \eqref{tcsub} is used to perform each Step $3$ of Algorithm \ref{algo}.
Then, the hitting time $N_{\epsilon}$ for the stochastic process generated by Algorithm \ref{algo} satisfies \begin{equation}\label{complexity_final} \mathbb{E}[N_{\epsilon}]\le \frac{3p}{(3p-2)(2p-1)}\left[(f_0-f_{low})\left(2\kappa_s\epsilon^{-3/2}+\kappa_u\right)+\log_{\gamma}\left(\frac{\overline{\sigma}}{\sigma_0}\right)+2 \right]. \end{equation} Moreover, in case the stopping criterion (\ref{tc.s}) is used to perform Step 3, \eqref{complexity_final} still holds provided that there exists $L_g>0$ such that \eqref{Lipgrad} is satisfied for all $x$, $y\in\mathbb{R}^n$. } \end{theorem} \begin{proof} By definition (see \eqref{defnsigma}), $\mathbb{E}[N_{\epsilon}]= \mathbb{E}\big[N_{\overline{\sigma}}\big]+\mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]$. Thus, considering \eqref{bound_Ns}, \[ \mathbb{E}[N_{\epsilon}] \le \frac{2}{3p}\mathbb{E}[N_{\epsilon}]+\mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big], \] and, hence, by \eqref{bound_Nsc}, \[ \mathbb{E}[N_{\epsilon}] \le \frac{3p}{3p-2}\mathbb{E}\big[N_{\overline{\sigma}}^{^C}\big]\le \frac{3p}{(3p-2)(2p-1)}\left[(f_0-f_{low})\left(2\kappa_s\epsilon^{-3/2}+\kappa_u\right)+\log_{\gamma}\left(\frac{\overline{\sigma}}{\sigma_0}\right)+2 \right], \] which concludes the proof. \end{proof} \section{Subsampling scheme for finite-sum minimisation} We now consider the solution of large-scale instances of the finite-sum minimisation problems arising in machine learning and data analysis, modelled by \eqref{finite-sum}. In this context, the approximations $\overline{\nabla f}(x_k)$ and $\overline{\nabla^2 f}(x_k)$ to the gradient and the Hessian used at Step $1$ and Step $2$ of Algorithm \ref{algo}, respectively, are obtained by subsampling, using subsets of indexes $\calD_{j,k}$, $j\in\{1,2\}$, randomly and uniformly chosen from $\{1,...,N\}$.
I.e., for $j\in\{1,2\}$, \begin{equation} \label{approxBernstein} \overline{\nabla^j f}(x_k) = \frac{1}{|\calD_{j,k}|} \sum_{i \in \calD_{j,k}} \overline{\nabla^j \varphi_i}(x_k), \end{equation} are used in place of $ \nabla^j f(x_k) = \frac{1}{N} \sum_{i=1}^N \nabla^j \varphi_i(x_k)$. Specifically, if we want $\overline{\nabla^{j} f}(x_k)$ to be within an accuracy $\tau_{j,k}$ with probability at least $p_j$, $j\in\{1,2\}$, i.e., \[ Pr\left(\|\overline{\nabla^{j} f}(x_k)-\nabla^{j} f(x_k)\|\le \tau_{j,k} \right) \ge p_j, \] the sample size $|\calD_{j,k}|$ can be determined by using the operator-Bernstein inequality introduced in \cite{Tropp}, so that $\overline{\nabla^j f}(x_k)$ takes the form (see \cite{BellGuriMoriToin19}) given by \eqref{approxBernstein}, with \begin{equation} \label{sizeD} |\calD_{j,k}| \geq \min\left \{ N,\left\lceil\frac{4\kappa_{\varphi,j}(x_k)}{\tau_{j,k}} \left(\frac{2\kappa_{\varphi,j}(x_k)}{\tau_{j,k}}+\frac{1}{3}\right) \,\log\left(\frac{d_j}{1-p_j}\right)\right\rceil\right \}, \end{equation} where \[ d_j=\left\{\begin{array}{ll} n+1, & \tim{if} j=1,\\ 2n, & \tim{if} j=2, \end{array}\right. \] and under the assumption that, for any $x\in\mathbb{R}^n$, there exist non-negative upper bounds $\{\kappa_{\varphi,j}\}_{j=1}^2$ such that \[ \max_{i \in\ii{N}}\|\nabla^j\varphi_i(x)\| \leq \kappa_{\varphi,j}(x),\qquad j\in\{1,2\}. \] Let us assume that there exist $\kappa_g>0$ and $\kappa_B>0$ such that $\kappa_{\varphi,1}(x)\le \kappa_g$ and $\kappa_{\varphi,2}(x)\le \kappa_B$ for any $x\in \mathbb{R}^n$. Since the subsampling procedures used at iteration $k$ to get $\calD_{1,k}$ and $\calD_{2,k}$ are independent, it follows that when $\{\tau_{j,k}\}_{j=1}^2$ are chosen as the right-hand sides in \eqref{AccG} and \eqref{AccH}, respectively, the resulting model \eqref{m} is $p$-probabilistically $\delta$-sufficiently accurate with $p=p_1p_2$.
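To make the size rule \eqref{sizeD} concrete, the following Python sketch evaluates the prescribed subsample cardinality; the function name and argument names are illustrative, not part of the algorithm.

```python
import math

def bernstein_sample_size(kappa_phi, tau, p_j, n, N, j):
    """Subsample size prescribed by the operator-Bernstein rule (sizeD).

    kappa_phi : bound kappa_{phi,j}(x_k) on the component derivatives' norms
    tau       : requested accuracy tau_{j,k}
    p_j       : requested success probability
    n, N      : feature dimension and number of component functions
    j         : 1 for the gradient, 2 for the Hessian
    """
    d = n + 1 if j == 1 else 2 * n
    size = math.ceil(4.0 * kappa_phi / tau * (2.0 * kappa_phi / tau + 1.0 / 3.0)
                     * math.log(d / (1.0 - p_j)))
    return min(N, size)
```

Note that the prescribed size grows as $\tau_{j,k}$ shrinks, saturating at the full sample $N$; this is the monotonicity exploited by the inner loop of Step $1$ below.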
Therefore, a practical version of Algorithm \ref{algo} is for instance given by adding a suitable termination criterion and modifying the first three steps of Algorithm \ref{algo} as reported in Algorithm \ref{algodet} below. \algo{algodet}{Modified Steps \boldmath$0-2$ of Algorithm \ref{algo}} {\vspace*{-0.3 cm} \begin{description} \item[Step 0: Initialisation.] An initial point $x_0\in\mathbb{R}^n$ and an initial regularisation parameter $\sigma_0>0$ are given, as well as an accuracy level $\epsilon \in (0,1)$. The constants $\beta$, $\alpha$, $\eta$, $\gamma$, $\sigma_{\textrm{min}}$, $\kappa$, $\tau_0$, $\kappa_{\tau}$ and $c$ are also given such that \begin{eqnarray} &0<\beta, \kappa_\tau<1, \quad 0\le \alpha< \frac 2 3, \quad\sigma_{\min}\in (0, \sigma_0], \nonumber \\ &0<\eta < \frac{2-3\alpha}{2},\quad \gamma>1,\quad \kappa\in[0,1), \quad \tau_0>0,\quad c>0.\nonumber \end{eqnarray} Compute $f(x_0)$ and set $k=0$, ${\rm flag}=1$. \vspace{2mm} \item[Step 1: Gradient approximation.] Set $i=0$ and initialise $\tau_{1,k}^{(i)}=\tau_0$. Do \begin{itemize} \item[1.1] compute $\overline{\nabla f}(x_k)$ such that \eqref{approxBernstein}--\eqref{sizeD} are satisfied with $j=1$, $\tau_{1,k}=\tau_{1,k}^{(i)}$; \item[1.2] if $\tau_{1,k}^{(i)}\le \kappa (1-\beta)^2 \left(\frac{\|\overline{\nabla f}(x_k)\|}{\sigma_k}\right)^2$, go to Step $2$; \item[] else, set $\tau_{1,k}^{(i+1)}=\kappa_{\tau}\tau_{1,k}^{(i)}$, increment $i$ by one and go to Step $1.1$; \end{itemize} \vspace{2mm} \item[Step 2: Hessian approximation (model construction).
] If ${\rm flag}=1$ set $c_{k}=c$, else set $c_{k}=\alpha(1-\beta)\|\overline{\nabla f}(x_{k})\|$.\\ Compute $\overline{\nabla^2f}(x_k)$ using \eqref{approxBernstein}--\eqref{sizeD} with $j=2$, $\tau_{2,k}=c_k$ and form the model $m_k(s)$ defined in \eqref{m}.\vspace{2mm} \end{description} } \noindent Concerning the gradient estimate, the scheme computes (Step $1$) an approximation $\overline{\nabla f}(x_k)$ satisfying the accuracy criterion \begin{equation} \label{err_relG} \|\overline{\nabla f}(x_k)-\nabla f(x_k)\|\le \kappa(1-\beta)^2\left(\frac{\|\overline{\nabla f}(x_k)\|}{\sigma_k}\right)^2, \end{equation} which is independent of the step computation and based on the knowable quantities $\kappa$, $\beta$ and $\sigma_k$. This is done by reducing the accuracy $\tau_{1,k}^{(i)}$ and repeating the inner loop at Step $1$, until the fulfilment of the inequality at Step $1.2$. We underline that condition \eqref{err_relG} is guaranteed by the algorithm, since the right-hand side of \eqref{sizeD} is a continuous function of $\tau_{j,k}$ that increases as $\tau_{j,k}$ decreases, for fixed $j=1$, $k$, $p_j$ and $N$; hence, there exists a sufficiently small $\overline{\tau}_{1,k}$ such that the right-hand side term in \eqref{sizeD} will reach, in the worst case, the full sample size $N$, yielding $\overline{\nabla f}(x_k)=\nabla f(x_k)$. Moreover, if the stopping criterion $\|\overline{\nabla f}(x_k)\|\le \epsilon$ is used, the loop is ensured to terminate also whenever the predicted accuracy requirement $\tau_{1,k}^{(i)}$ becomes smaller than $\kappa (\frac{1-\beta}{\sigma_k})^2\epsilon^2$. On the other hand, in practice, we expect to use a small number of samples in the early stage of the iterative process, when the norm of the approximated gradient is not yet small.
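As an illustration, the inner loop of Step $1$ can be sketched in Python as follows; the simple proportional sampling rule, and all names used here, are illustrative placeholders (in particular the size rule stands in for \eqref{sizeD}), not the actual implementation.

```python
import numpy as np

def step1_gradient(G, tau0, kappa_tau, kappa, beta, sigma_k, rng):
    """Illustrative sketch of the inner loop of Step 1.

    G is the N x n array of component gradients at x_k.  The accuracy tau
    is shrunk (and the subsample enlarged) until the knowable test of
    Step 1.2 holds:  tau <= kappa * (1 - beta)**2 * (||g_bar|| / sigma_k)**2.
    The subsample size here is simply proportional to 1/tau, capped at N,
    standing in for the operator-Bernstein rule.
    """
    N = G.shape[0]
    tau = tau0
    while True:
        size = min(N, max(1, int(np.ceil(1.0 / tau))))  # placeholder size rule
        idx = rng.choice(N, size=size, replace=False)
        g_bar = G[idx].mean(axis=0)                     # subsampled gradient
        if tau <= kappa * (1.0 - beta) ** 2 * (np.linalg.norm(g_bar) / sigma_k) ** 2:
            return g_bar, tau
        tau *= kappa_tau                                # tighten and resample
```

With $\kappa=0.9$, $\beta=0.5$ and $\sigma_k=1$, the loop halts as soon as $\tau\le 0.225\,\|\overline{\nabla f}(x_k)\|^2$, mirroring the acceptance test of Step $1.2$.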
To summarise, if without loss of generality we assume that $\overline{\tau}_{1,k}\ge \hat{\tau}$ at each iteration $k$, we conclude that, in the worst case, Step $1$ will lead to at most $\lfloor \log(\hat{\tau})/\log(\kappa_{\tau}\tau_0)\rfloor+1$ computations of $\overline{\nabla f}(x_k)$. The Hessian approximation $\overline{\nabla^2f}(x_k)$ is, instead, defined at Step $2$ and its computation relies on the reliable value of $c_k$. We remark that at iteration $k$ we have that: \begin{itemize} \vspace{0.1cm} \item $\overline{\nabla^2 f}(x_k)$ is computed only once, irrespective of the approximate gradient computation considered at Step $1$;\vspace{0.1cm} \item a finite loop is considered at Step $1$ to obtain a gradient approximation satisfying \eqref{err_relG}, where the right-hand side is independent of the step length $\|s_k\|$, thus implying \eqref{uppboundkgrad}--\eqref{keygrad}. Hence, the gradient approximation is fully determined at the end of Step $1$ and further recomputations due to the step calculation (see Algorithm \ref{algo}, Step $3$) are not required. \end{itemize} We conclude this section by noticing that each iteration $k$ of Algorithm \ref{algo} with the modified steps introduced in Algorithm \ref{algodet} can indeed be seen as an iteration of Algorithm \ref{algo} where the sequence of random models $\{M_k\}$ is $p$-probabilistically sufficiently accurate in the sense of Definition \ref{AccIk}, with $p=p_1p_2$, and an iteration of \cite[Algorithm~3.1]{IMA}, when $\kappa=0$ is considered in \eqref{AccG} (exact gradient evaluations). \section{Numerical tests} In this section we analyse the behaviour of the Stochastic ARC Algorithm (Algorithm \ref{algo}). Inexact gradient and Hessian evaluations are performed as sketched in modified Steps 0-2 of Algorithm \ref{algodet}.
The performance of the proposed algorithm is compared with that of the corresponding version in \cite{IMA} employing exact gradients, with the aim of providing numerical evidence that adding a further source of inexactness in the gradient computation is beneficial in terms of computational cost saving. We consider nonconvex finite-sum minimisation problems. This is, in fact, a highly frequent scenario when dealing with binary classification tasks arising in machine learning applications. More precisely, given a training set of $N$ features $a_i\in\mathbb{R}^n$ and corresponding labels $y_i$, $i=1,\ldots,N$, we solve the following minimisation problem: \begin{equation} \label{minloss} \min_{x\in\mathbb{R}^n} f(x)= \min_{x\in\mathbb{R}^n} \frac{1}{N}\sum_{i=1}^N{\varphi_i(x)}= \min_{x\in\mathbb{R}^n}\frac 1 N \sum_{i=1}^N \left( y_i-\sigma\left(a_i^\top x\right) \right)^2, \end{equation} where \begin{equation} \label{sigm} \sigma(a^\top w)=\frac{1}{1+e^{-a^\top w}},\qquad a,w\in\mathbb{R}^n. \end{equation} That is, we use the sigmoid function \eqref{sigm} as the model for predicting the values of the labels and the least-squares loss as a measure of the error on such predictions, which is minimised by approximately solving \eqref{minloss} so as to obtain the parameter vector $x$ to be used for label prediction on new, unseen data. Moreover, a number $N_T$ of testing data $\{\overline a_i,\overline y_i\}_{i=1}^{N_T}$ is used to validate the computed model. The values $\sigma(\overline{a}_i^\top x)$ are used to predict the testing labels $\overline y_i$, $i\in\{1,...,N_T\}$, and the corresponding error, measured by $ \frac{1}{N_T} \sum_{i=1}^{N_T} \left(\overline y_i-\sigma\left(\overline a_i^\top x\right) \right)^2, $ is computed. Implementation issues concerning the considered procedures are the object of Subsection \ref{Impl_iss}, while statistics of our runs are discussed in Subsection \ref{Num_res}.
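For concreteness, the objective \eqref{minloss} and the testing error can be coded in a few lines (a minimal sketch; the array shapes $A\in\mathbb{R}^{N\times n}$ and $y\in\mathbb{R}^{N}$ and all names are assumptions of this illustration):

```python
import numpy as np

def sigmoid(t):
    """The sigmoid model (sigm)."""
    return 1.0 / (1.0 + np.exp(-t))

def loss(x, A, y):
    """Least-squares loss (minloss): f(x) = (1/N) sum_i (y_i - sigmoid(a_i^T x))^2.
    Evaluated on the testing pairs, the same expression gives the testing error."""
    return np.mean((y - sigmoid(A @ x)) ** 2)
```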
\subsection{Implementation issues} \label{Impl_iss} The implementation of the main phases of Algorithm \ref{algo}, equipped with the modified steps in Algorithm \ref{algodet}, adheres to the following specifications.\\ According to \cite[Algorithm 3.1]{IMA}, the cubic regularisation parameter is initially $\sigma_0=10^{-1}$, its minimum value is $\sigma_{\min}=10^{-5}$ and the initial guess vector $x_0=(0,...,0)^\top \in\mathbb{R}^n$ is considered for all runs. Moreover, the probability of success $p_j$ in \eqref{sizeD} is set equal to $0.8$, for $j\in\{1,2\}$, while the parameters $\alpha$, $\beta$, $\epsilon$, $\eta$ and $\gamma$ are fixed as $\alpha=0.1$, $\beta=0.5$, $\epsilon=5\cdot 10^{-3}$, $\eta=0.8$ and $\gamma=2$. The latter two correspond to the values of $\eta_2$ and $\gamma_3$ considered in \cite[Algorithm 3.1]{IMA}, respectively. The minimisation of the cubic model at Step $3$ of Algorithm \ref{algo} is performed by the Barzilai-Borwein gradient method \cite{bb} combined with a nonmonotone linesearch following the proposal in \cite{blms}. The major per-iteration cost of this Barzilai-Borwein process is one Hessian-vector product, needed to compute the gradient of the cubic model. The threshold used in the termination criterion (\ref{tc}) is $\beta_k=0.5$, $k\ge 0$. As for \cite[Algorithm 3.1]{IMA}, we impose a maximum of $500$ iterations and a successful termination is declared when the following condition is met: \[ \|\overline{\nabla f}(x_k)\|\le \epsilon, \quad k\ge 0. \] \noindent If $\|\overline{\nabla f}(x_k)\|\le \epsilon$ and the model is accurate, then by \eqref{AccG} $$ \|{\nabla f}(x_k)\|\le \|\overline{\nabla f}(x_k)\|+\|{\nabla f}(x_k)-\overline{\nabla f}(x_k)\|\le \overline \epsilon := \epsilon+\kappa[(1-\beta)/\sigma_{min}]^2 \epsilon^2 $$ and, hence, $x_k$ is an $\overline \epsilon$-approximate first-order optimality point.
Since the model is accurate with probability at least $p$, $x_k$ is an $\overline \epsilon$-approximate first-order optimality point with probability at least $p$. We further note that the exact gradient and the Hessian of the component functions $\varphi_i(x)$, $i\in\{1,...,N\}$, are given by: \begin{eqnarray} &&\nabla \varphi_i(x)=-2 e^{-a_i^\top x}\left(1+e^{-a_i^\top x}\right)^{-2}\left(y_i-\left(1+e^{-a_i^\top x}\right)^{-1}\right)a_i,\label{der1phi}\\ & & \nabla^2 \varphi_i(x)=-2 e^{-a_i^\top x}\left(1+e^{-a_i^\top x}\right)^{-4}\left(y_i\left(\left(e^{-a_i^\top x}\right)^2-1\right)+1-2e^{-a_i^\top x}\right)a_ia_i^\top.\label{der2phi} \end{eqnarray} \noindent Then, the gradient and the Hessian approximations $\overline{\nabla^j f}(x_k)$, $j\in\{1,2\}$, computed at Step $1$ and Step $2$ of Algorithm \ref{algodet} according to \eqref{approxBernstein}--\eqref{sizeD}, involve the constants \begin{eqnarray} \nonumber \kappa_{\varphi,1}(x_k)&=&\max_{i\in\{1,...,N\}}\left\{2 e^{-a_i^\top x_k}\left(1+e^{-a_i^\top x_k}\right)^{-2}\left|y_i-\left(1+e^{-a_i^\top x_k}\right)^{-1}\right| \|a_i\|\right\},\nonumber\\ \kappa_{\varphi,2}(x_k)&=&\max_{i\in\{1,...,N\}}\left\{2e^{-a_i^\top x_{k}}\left(1+e^{-a_i^\top x_{k}}\right)^{-4}\left|y_i\left(\left(e^{-a_i^\top x_{k}}\right)^2-1\right)+1-2e^{-a_i^\top x_{k}}\right| \|a_i\|^2\right\},\nonumber \end{eqnarray} whose computation can indeed be an issue in itself. Nevertheless, thanks to the exactness and the specific form (see \eqref{minloss}) of the function evaluation $f(x_k)$, the values $a_i^\top x_k$, $1\le i\le N$, are available at iteration $k$ and, hence, $\kappa_{\varphi,j}(x_k)$, $j\in\{1,2\}$, can be determined at the (offline) extra cost of computing $\|a_i\|^j$, $j\in\{1,2\}$, for $1\le i\le N$.
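The closed-form component gradient \eqref{der1phi} is easily checked against central finite differences with a few lines of Python (an illustrative sanity check, not part of the implementation; a similar check applies to \eqref{der2phi}):

```python
import numpy as np

def phi(x, a, y):
    """Single component of (minloss): (y - sigmoid(a^T x))^2."""
    return (y - 1.0 / (1.0 + np.exp(-a @ x))) ** 2

def grad_phi(x, a, y):
    """Closed-form component gradient (der1phi):
    -2 e^{-t} (1+e^{-t})^{-2} (y - (1+e^{-t})^{-1}) a,  with t = a^T x."""
    e = np.exp(-a @ x)
    return -2.0 * e * (1.0 + e) ** (-2) * (y - 1.0 / (1.0 + e)) * a
```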
As in \cite[Subsection 8.2]{IMA}, the value of $c$ used in \eqref{ck}, in order to reduce the iteration computational cost whenever $\|s_k\|\ge 1$, is such that $|\mathcal{D}_{2,0}|$ computed via \eqref{sizeD} for $j=2$, with $\tau_{2,0}=c$ (first approximation of the Hessian), satisfies $|\mathcal{D}_{2,0}|/N=0.1$. We thus start by using $10\%$ of the examples to approximate the Hessian. Concerning the gradient approximation performed at Step $1$ of Algorithm \ref{algodet}, the value of $\tau_0$ is chosen in order to use a prescribed percentage of the number of training samples $N$ to obtain $\overline{\nabla f}(x_0)$. In all runs, such a percentage has been set to $0.4$. We then proceed as follows. First, we compute $\overline{\nabla f}(x_0)$ via \eqref{approxBernstein}, with $j=1$ and $|\mathcal{D}_{1,0}|/N=0.4$. Next, we compute $\tau_0$ so that \eqref{sizeD}, with $\tau_{1,0}=\tau_0$, is satisfied as an equality. Finally, the value of $\kappa$ at Step $1.2$ of Algorithm \ref{algodet} has been correspondingly set to $4\tau_{1,0}^{(0)}\left(\sigma_0/\|\overline{\nabla f}(x_0)\|\right)^2$, with $\tau_{1,0}^{(0)}=\tau_0$. This way, the acceptance criterion of Step $1.2$ is satisfied without further inner iterations (i.e., for $i=0$), when $k=0$, and $\tau_0$ is indeed considered as the starting accuracy level for gradient approximation at each execution of Step $1$ of Algorithm \ref{algodet}. We will hereafter refer to this implementation of Algorithm \ref{algo} coupled with Algorithm \ref{algodet} as \textit{SARC}. The numerical tests of this section compare \textit{SARC} with the corresponding variant in \cite[Algorithm $3.1$]{IMA}, namely \textit{ARC-Dynamic}, employing exact gradient evaluations, with $\gamma_1=1/\gamma$, $\gamma_2=\gamma_3=\gamma$ and $\eta_1=\eta_2=\eta$. It is worth noticing that the problem \eqref{minloss} arises in the training of an artificial neural network with no hidden layers and zero bias.
Nevertheless, to cover the general situation where the \textit{SARC} algorithm is applied to more complex neural networks, we have followed the approach in \cite{Roosta_2p} for what concerns the cost measure. In more detail, at the generic iteration $k$, we count the $N$ forward propagations needed to evaluate the objective in \eqref{minloss} at $x_k$ as a unit Cost Measure (CM), while the evaluation of the approximated gradient at the same point requires $|\mathcal{D}_{1,k}|$ additional backward propagations at the weighted cost $|\mathcal{D}_{1,k}|/N$ CM. Moreover, each vector-product $\overline{\nabla^2 f}(x_k)v$ ($v\in\mathbb{R}^n$), needed at each iteration of the Barzilai-Borwein method used to minimise the cubic model at Step $3$ of Algorithm \ref{algo}, is performed via finite differences, leading to additional $|\mathcal{D}_{2,k}|$ forward and backward propagations to compute $\overline{\nabla f} (x_k+hv)$, ($h\in\mathbb{R}^+$), at the price of the weighted cost $2|\mathcal{D}_{2,k}|/N$ CM and a potential extra cost $|\mathcal{D}_{2,k}\smallsetminus(\mathcal{D}_{1,k}\cap \mathcal{D}_{2,k})|/N$ CM to approximate ${\nabla f}(x_k)$ via uniform subsampling using the samples in $\mathcal{D}_{2,k}$. This latter approximation is computed once at the beginning of the Barzilai-Borwein procedure. Therefore, denoting by $r$ the number of Barzilai-Borwein iterations at iteration $k$, the increase of the CM at the $k$-th iteration of \textit{ARC-Dynamic} and \textit{SARC} related to the derivatives computation is reported in Table \ref{cost}.
\begin{small} \begin{table}[h] \begin{center} \begin{tabular}{cc} \toprule \textit{ARC-Dynamic} & \textit{SARC} \\ \midrule $1+2|\mathcal{D}_{2,k}|r/N$ & $\left(|\mathcal{D}_{1,k}|+2|\mathcal{D}_{2,k}|r+|\mathcal{D}_{2,k}\smallsetminus(\mathcal{D}_{1,k}\cap \mathcal{D}_{2,k})|\right)/N$ \\ \bottomrule \end{tabular} \caption{Increase of the CM at the $k$-th iteration of \textit{ARC-Dynamic} and \textit{SARC} related to the derivatives computation; $r$ denotes the number of performed Barzilai-Borwein iterations.}\label{cost} \end{center} \end{table} \end{small} \noindent We will refer to the Cost Measure at Termination (CMT) as the main parameter to evaluate the efficiency of the method within the numerical tests of the next section. The algorithms have been implemented in Fortran and run on an Intel Core i5, $1.8$ GHz $\times~1$ CPU, $8$ GB RAM. \subsection{Numerical results} \label{Num_res} In this section we finally report statistics of the numerical tests performed by \textit{SARC} and \textit{ARC-Dynamic} on the set of synthetic datasets from \cite{IMA,bollapragada}, whose main characteristics are recalled in Table \ref{TableSynth}. They provide moderately ill-conditioned problems (see, e.g., Table \ref{TableSynth}) and motivate the use of second-order methods.
\begin{small} \begin{table}[h] \begin{center} \begin{tabular}{ccccc} \toprule Dataset & Training~$N$ & $n$ & Testing $N_T$ & $cond$ \\ \midrule Synthetic1 & 9000 & 100 & 1000 & $2.5\cdot10^4$ \\ Synthetic2 & 9000 & 100 & 1000 & $1.4\cdot10^5$ \\ Synthetic3 & 9000 & 100 & 1000 & $4.2\cdot10^7$ \\ Synthetic4 & 90000 & 100 & 10000 & $4.1\cdot10^4$ \\ {Synthetic6} & {90000} & {100} & {10000} & {$5.0\cdot10^6$} \\ \bottomrule \end{tabular} \caption{Number of training samples ($N$), feature dimension ($n$), number of testing samples ($N_T$), $2$-norm condition number of the Hessian matrix at the computed solution ($cond$).}\label{TableSynth} \end{center} \end{table} \end{small} \noindent For fair comparisons, the values of $c$ used for each dataset in Table \ref{TableSynth} to build the Hessian approximation according to Step $2$ of Algorithm \ref{algodet} are chosen as in \cite[Table $8.1$]{IMA}.\\ \noindent In Table \ref{TableSynthALL} we report, for both the $SARC$ and \textit{ARC-Dynamic} algorithms, the total number of iterations ({\rm n-iter}), the value of Cost Measure at Termination ({\rm CMT}) and the mean percentage of saving ({\rm Save-M}) obtained by \textit{SARC} with respect to \textit{ARC-Dynamic} on the synthetic datasets listed in Table \ref{TableSynth}. Since the selection of the subsets $\mathcal{D}_{j,k}$, $j\in\{1,2\}$, in \eqref{sizeD} is made uniformly at random at each iteration of the method, statistics in the forthcoming tables are averaged over $20$ runs.
\begin{small} \begin{table}[h] \begin{center} \begin{tabular}{lcc|ccc} \toprule Dataset & \multicolumn{2}{c}{\textit{ARC-Dynamic}} & \multicolumn{3}{c}{\textit{SARC}} \\ & n-iter & CMT & n-iter & CMT & Save-M\\ \midrule Synthetic1 & 11.1 & 130.84 & 10.0 & ~95.27 & ~27\% \\ Synthetic2 & 10.6 & 109.56 & 10.2 & ~93.08 & ~15\% \\ Synthetic3 & 11.2 & 109.64 & 10.0 & ~97.52 & ~11\% \\ Synthetic4 & 11.0 & 124.07 & 10.4 & 100.48 & ~19\% \\ {Synthetic6} & {10.0} & {84.18} & {10.1} & {106.31} & $-26\%$\\ \bottomrule \end{tabular} \caption{Synthetic datasets. The columns are divided in two different groups. \textit{ARC-Dynamic}: average number of iterations ({\rm n-iter}) and CMT. \textit{SARC}: average number of iterations ({\rm n-iter}), CMT and mean percentage of saving ({\rm Save-M}) obtained by \textit{SARC} over \textit{ARC-Dynamic}. Mean values over $20$ runs. } \label{TableSynthALL} \end{center} \end{table} \end{small} \begin{small} \begin{table}[h] \begin{center} \begin{tabular}{lccccc} \toprule Method & Synthetic1 & Synthetic2 & Synthetic3 & Synthetic4 & {Synthetic6}\\ \midrule \textit{ARC-Dynamic} & 94.34\% & 92.68\% & 94.64\% & 95.52\% & {$93.82\%$} \\ \textit{SARC} & 93.18\% & 92.44\% & 93.62\% & 94.61\% & {$93.70\%$} \\ \bottomrule \end{tabular} \caption{Synthetic datasets. Binary classification rate at termination on the testing set employed by \textit{ARC-Dynamic} and \textit{SARC}, mean values over $20$ runs.} \label{BinAsyntheticdataset} \end{center} \end{table} \end{small} \noindent Table \ref{TableSynthALL} shows that the novel adaptive strategy employed by $SARC$ proves more efficient than \textit{ARC-Dynamic}, reaching an $\epsilon$-approximate first-order stationary point at a lower CMT in all cases except for Synthetic6.
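For concreteness, the Save-M column of Table \ref{TableSynthALL} is consistent with the reported mean CMT values: computing the relative saving from the two CMT columns reproduces the reported percentages (this need not hold exactly in general, since Save-M is an average over runs, but it does for the rounded values in the table):

```python
def saving_pct(cmt_ref, cmt_new):
    # relative CMT saving of SARC over ARC-Dynamic, in percent (rounded)
    return round(100 * (cmt_ref - cmt_new) / cmt_ref)

# (ARC-Dynamic CMT, SARC CMT, reported Save-M) rows from the table above
rows = [(130.84, 95.27, 27), (109.56, 93.08, 15), (109.64, 97.52, 11),
        (124.07, 100.48, 19), (84.18, 106.31, -26)]
for ref, new, reported in rows:
    assert saving_pct(ref, new) == reported
```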
This is obtained without affecting the classification accuracy on the testing sets, as shown in Table \ref{BinAsyntheticdataset}, which reports the average binary accuracy on the testing sets achieved by the methods under comparison. \noindent To give more evidence of the gain in terms of CMT provided by $SARC$ on Synthetic1-Synthetic4 along the iterative process, we display in Figure \ref{Perf1KL} the decrease of the training and testing loss versus the adopted cost measure CM, while Figure \ref{GDsyntehticdata} plots the gradient norm versus CM. For these figures, a representative run is selected from each series of $20$ runs obtained by $SARC$ and \textit{ARC-Dynamic} on each of the considered datasets. \begin{figure}[h] \centering \includegraphics[width= 0.49\textwidth]{TrainLossSynthetic1_final} \includegraphics[width= 0.49\textwidth]{TestLossSynthetic1_final}\ \includegraphics[width= 0.49\textwidth]{TrainLossSynthetic2_final} \includegraphics[width= 0.49\textwidth]{TestLossSynthetic2_final} \includegraphics[width= 0.49\textwidth]{TrainLossSynthetic3_final} \includegraphics[width= 0.49\textwidth]{TestLossSynthetic3_final} \includegraphics[width= 0.49\textwidth]{TrainLossSynthetic4_final} \includegraphics[width= 0.49\textwidth]{TestLossSynthetic4_final} \caption{Synthetic datasets. Comparison of \textit{SARC} (continuous line with asterisks) and \textit{ARC-Dynamic} (dashed line with triangles) against the considered cost measure CM. Each row corresponds to a different synthetic dataset. Training loss (left) and testing loss (right) against CM with logarithmic scale on the $y$ axis.} \label{Perf1KL} \end{figure} \begin{figure}[h] \centering \includegraphics[width= 0.49\textwidth]{GNormSynthetic1_final} \includegraphics[width= 0.49\textwidth]{GNormSynthetic2_final} \includegraphics[width= 0.49\textwidth]{GNormSynthetic3_final} \includegraphics[width= 0.49\textwidth]{GNormSynthetic4_final} \caption{Synthetic datasets.
Euclidean norm of the gradient against CM (training set) with logarithmic scale on the $y$ axis. \textit{SARC} (continuous line with asterisks), \textit{ARC-Dynamic} (dashed line with triangles). } \label{GDsyntehticdata} \end{figure} \noindent In all cases, Figure \ref{Perf1KL} shows the savings gained by $SARC$ in terms of the overall computational cost, as well as the improvements in the training phase and the testing accuracy under the same cost measure. More generally, we stress that second order methods show their strength on these ill-conditioned datasets, since all the tested procedures manage to reduce the norm of the gradient and reach high accuracies in the classification rate. Even if we believe that reporting the binary classification accuracy obtained by each of the considered methods at termination is relevant in itself, we remark that the higher accuracy obtained at termination by \textit{ARC-Dynamic} (see Table \ref{BinAsyntheticdataset}) is just due to the fact that $SARC$ stops earlier. This should not be confused with a better performance of \textit{ARC-Dynamic}, since Figure \ref{Perf1KL} highlights that, along all datasets, when $SARC$ stops its testing loss is noticeably below the corresponding one attained by \textit{ARC-Dynamic} at the same CMT value. In Figure \ref{SampleSize}, we finally analyse the adaptive choices of the sample sizes $\mathcal{D}_{j,k}$, $j\in\{1,2\}$, in \eqref{sizeD}. As expected, the two strategies are more or less comparable when selecting the sample sizes for Hessian approximations, while the number of samples used by $SARC$ to compute gradient approximations oscillates across the iterations, always remaining far below the full sample size. We also note that too small values of $\tau_0$ seem to have a bad influence on the performance of $SARC$, while, once $\tau_0$ is above a certain threshold value, increasing it generally produces savings in the CMT.
In support of this observation, we report in Figure \ref{Figtau0} the variation of CMT against $\tau_0$ on Synthetic1 and Synthetic4. We finally notice that, except for a few iterations at the first stage of the iterative process, the sample size for Hessian approximation is lower than that used for gradient approximation. This is in line with the theory, as the gradient is eventually required to be more accurate than the Hessian. In fact, the error in gradient approximation has to be of the order of $\|s_k\|^2$, while that in Hessian approximation has to be of the order of $\|s_k\|$, see Lemmas \ref{Lemmagk} and \ref{LemmaCk}. \noindent \begin{figure}[h] \centering \includegraphics[width= 0.49\textwidth]{SSizeSynt1_final} \includegraphics[width= 0.49\textwidth]{SSizeSynt2_final} \includegraphics[width= 0.49\textwidth]{SSizeSynt3_final} \includegraphics[width= 0.49\textwidth]{SSizeSynt4_final} \caption{Synthetic datasets. Sample size for Hessian approximations employed by \textit{ARC-Dynamic} (dashed line with triangles) and $SARC$ (dashed line with asterisks), together with the sample size for gradient approximations considered by $SARC$ (dotted dashed line with asterisks) against iterations.} \label{SampleSize} \end{figure} \noindent \begin{figure}[h] \centering \includegraphics[width= 0.49\textwidth]{Tau0CMTSynt1} \includegraphics[width= 0.49\textwidth]{Tau0CMTSynt4} \caption{Cost Measure at Termination (CMT) against $\tau_0$ for $SARC$ (continuous line) and \textit{ARC-Dynamic} (dashed line) on Synthetic1 and Synthetic4.} \label{Figtau0} \end{figure} \section{Conclusion and perspectives} We have proposed a stochastic analysis of the process generated by an ARC algorithm for solving unconstrained nonconvex optimisation problems under inexact derivative information.
The algorithm is an extension of the one in \cite{IMA}, since it employs approximated evaluations of the gradient while maintaining the dynamic rule for building Hessian approximations introduced and numerically tested in \cite{IMA}. This kind of accuracy requirement is always reliable and computable when an approximation of the exact Hessian is needed by the scheme and, in contrast to other strategies such as the one in \cite{CartSche17}, does not require additional inner loops to be satisfied. With respect to the framework in \cite{IMA}, where in the finite-sum setting optimal complexity is restored with high probability, we have here provided properties of the method when the adaptive accuracy requirements on the derivatives involved in the model definition are not fulfilled, with a view to bounding the expected number of steps that the process takes to reach the prescribed accuracy level. The stochastic analysis is performed exploiting the theoretical framework given in \cite{CartSche17}, showing that the expected complexity bound matches the worst-case optimal complexity of the ARC framework. The possible lack of accuracy of the model merely has the effect of scaling the optimal complexity we would derive from the deterministic analysis of the framework (see, e.g., \cite[Theorem~4.2]{IMA}) by a factor which depends on the probability $p$ of the model being sufficiently accurate. Numerical results confirm the theoretical achievements and highlight the computational savings of the novel strategy in most of the tests, with no worsening of the binary classification accuracy. This paper does not cover the case of noisy functions (\cite{PaquSche18, Chen15, ChenMeniSche18}), nor the second-order complexity analysis. The stochastic second-order complexity analysis of ARC methods with derivative and function estimates will be a challenging line of investigation for future work.
Concerning the latter point, we remark that a recent advance in \cite{STR2}, based on properties of supermartingales, has tackled the second-order convergence rate analysis of a stochastic trust-region method. \vskip 5 pt \noindent {\bf Funding}: the authors are members of the INdAM Research Group GNCS and partially supported by INdAM-GNCS through Progetti di Ricerca 2019. \vskip 5 pt \noindent {\bf Acknowledgements.} The authors dedicate this paper to Alfredo Iusem, in honor of his 70th birthday. Thanks are due to Coralia Cartis, Benedetta Morini and Philippe Toint for fruitful discussions on stochastic complexity analysis, and to two anonymous referees whose comments significantly improved the presentation of this paper.
Hood Eighth in First NCAA Regional Rankings INDIANAPOLIS - The Hood College men's basketball team is ranked eighth in the Mid-Atlantic Region in the first NCAA regional rankings of the 2013-14 season. The criteria used for the NCAA regional rankings are the same applied when selecting teams for the NCAA Division III Championships once the automatic bids have been secured. Results from games through February 9, 2014 were used for the initial rankings. The Blazers' 15-5 mark, both overall and in regional contests, was good enough to crack the top nine. A total of 62 Division III men's squads will compete in the 2014 NCAA Championships. Pool A (automatic bids) will account for 42 of the participants. One bid will be awarded to a Pool B team (independent teams and teams from conferences not eligible for an automatic bid). The remaining 19 bids come from Pool C (at-large bids from teams that failed to win a conference and the remainder of Pool B). The selection criteria for Pool C are applied nationally and no region is guaranteed a maximum or minimum number of bids. Undefeated Cabrini is No. 1 in the Mid-Atlantic, followed by Scranton and Wesley, respectively. Commonwealth Conference rival Messiah is fourth in the region, while Alvernia ranks seventh, one place ahead of the Blazers. With 58 total eligible teams, the Mid-Atlantic is tied for the second largest region with the West Region. The Northeast Region contains 75 eligible programs. Each region ranks 15 percent of its teams. Two more sets of regional rankings will be announced (February 19 and February 26).
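As a quick sanity check on the numbers in this report, the pool breakdown and the ranking cutoff can be verified in a couple of lines (the rounding rule applied to the 15 percent figure is our assumption, not stated by the NCAA):

```python
# Pool A (automatic), Pool B, and Pool C bids make up the 62-team field
pool_a, pool_b, pool_c = 42, 1, 19
total_teams = pool_a + pool_b + pool_c  # 62

# Each region ranks 15 percent of its eligible teams; with 58 eligible
# Mid-Atlantic teams, that comes to about nine ranked spots
mid_atlantic_ranked = round(0.15 * 58)  # 9
```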
I am a lover of quotes. I collect them and dole them out here depending on my mood. So for today here are the quotes that make me feel like one human living my little life, just like everyone else. Veronica "Eighty percent of success is showing up." - Woody Allen - George Burns - Bill Cosby 2 thoughts on “Quote Me Please.” I thought; therefore I was. Or was that, I thought therefore I was? Was I? Could I be again? I think I’ll have to think about it. My favourite topic once again~! Here are some more inspiring love quotes!!… ****************************************** “Being deeply loved by someone gives you strength, while loving someone deeply gives you courage….” ~~~Lao-Tsu ****************************************** Love is a many-splendored thing. Love lifts us up where we belong. All you need is love! ~ from the movie Moulin Rouge ~ ******************************************* ~~~~~~~~Inspiring aren’t they?~~~~Bhaanu
- Green Food Processing Machinery Co., Ltd - Edible oil pressing and refining, microwave drying and sterilization, snack food and nuts processing Product: automatic corn chips processing extruder machinery (Cheetos/Kurkure/Nik Nak process line) - Origin: Shandong, China - Brand: LD - Certification: CE, ISO - Minimum order: 1 piece - Price: US $1,000.00-800,000.00 per set - Packaging: standard package - Delivery: within 5 workdays - Payment: T/T, Western Union Product Description: fried kurkure snack chips / cheetos making machinery. 1. Outside pack: standard export wooden cases. 2. Inner package: stretch film. Related lines: puff corn sticks machinery, bugles making machinery, tortilla making machinery.
My personal goal 2020-07 -- The article describes how, when I was a little boy, I was interested in how to make money, because I wanted to buy toys with my own money rather than my parents'. I sold all my used books at the flea market. When I earned my first pot of gold, I began to learn what business and economics are. After coming to the United States, I began taking courses such as Financial Accounting and Microeconomics. Now I basically know how business works. More importantly, I have realized that economics is everywhere. At present, I am studying business administration at Orange Coast College, from August 2013 to May 2015. I cherish this important opportunity because I have found it to be the subject that truly interests me. In this program, I have systematically studied all the relevant business courses, the major ones being financial accounting, managerial accounting, microeconomics and macroeconomics. In fact, this business administration major covers a wide range of areas, and it promises a prosperous future. My personal goal I was interested in how to earn money when I was a little boy, because I wanted to use my own money to buy toys instead of my parents' money. I sold all my used books at the flea market. I started getting to know what business and economics are when I got my first pot of gold. After coming to the USA, I started studying courses like Financial Accounting and Microeconomics. Now I essentially know how business runs. More importantly, I realize that economics is everywhere in our lives. Currently, I am studying business administration at Orange Coast College, from August 2013 to May 2015. I have cherished this important chance, as I have found that it is the subject in which my true interest lies. In this program, I have systematically studied all the relevant business courses, of which the major ones include financial accounting, managerial accounting, microeconomics and macroeconomics. This major of business administration covers a wide range of areas and promises a prosperous future. My personal objective is to successfully enter the University of California, ideally its Irvine campus (UC Irvine). I am planning to transfer out of my current college by fall 2015 and into a four-year undergraduate university program. Through learning business administration, I believe I will acquire more professional knowledge in this regard. My short-term goals span roughly the next 1-3 to 3-5 years. When I graduate from university, I hope to land an internship in the US.
My first career goal is to join a tourism company in a marketing or publicity position. There is a huge tourism market for Chinese tourists, and more and more Chinese people come to the USA every year as tourists. Meanwhile, I can take advantage of my fluent spoken Chinese and listening comprehension to undertake some challenging work. If possible, I would hope to stay in the US to continue working here. My long-term goals look more than 5 years ahead. If I am unable to stay in the US, I plan to open an overseas-study agency. That way I can help more students from China enter community colleges in the US and then transfer into four-year universities. This will not only save them money, but also help them get better used to American life here. From June 2013 to the present, I have interned as a student assistant in the international center of Orange Coast College. My major responsibilities are to help overseas students from all over the world come to study at our college. The detailed work includes emailing, telephoning, student insurance, student applications and the distribution of academic documents. At the very beginning, when I joined this center, I had to break through the language barrier and keep the work rules in mind. Whenever I ran into difficulties, I would communicate with my senior leaders and take note of my mistakes so as not to repeat them. In the periods before and after holidays I am very busy in the office, since many students come to submit applications. I help answer the phones and reply to emails so that other people have more time to focus on other things. In addition to the above, I also specialize in campus tours for international students, where I can take advantage of my communication abilities to make each visiting student satisfied.
I also receive Chinese students and delegations and do interpretation work for our president's speeches to those delegations. Now I can handle more complicated applications and materials from those students. Up to now, I have worked in the international center for more than one year. Within this period, not only has my English improved a lot, but so have other abilities such as communication, official email writing, telephone manners and an understanding of high-level management. This working experience has significantly deepened my understanding of management and broadened my academic perspective. Through playing different roles, I have developed an important academic foundation, effective management skills and interpersonal communication. These factors will be important as I work toward a more advanced degree program. Even though my distinguished academic performance in college has qualified me for an internship in the above-mentioned position, I am very clear about my academic interests and my future career objectives. Therefore, in my proposed study of business administration, I will contribute my enthusiasm for study and more than one year of working experience in the international center, because I understand the needs of international students. Meanwhile, I am also interested in the student union, where I can use my strengths to render more help to other students. In addition, I have devoted considerable time to extra-curricular activities such as the school basketball team and a piano contest, and this kind of active participation will encourage more international students to get involved. I have long had the ambition of pursuing a business-related career and studying at a University of California campus. I am now ready to accept the upcoming educational challenges. I believe there will be both progress at a personal level and one step closer to laying a foundation for my prosperous business career.
News Apple TV vs. TiVo – Which Will You Throw Dollars At? [POLL] This week Apple has been in talks with major cable companies such as Time Warner Cable, in hopes of reaching a deal making Apple TV your newest cable access provider. If things were to go as Apple has planned, the cable companies would acquiesce, allowing channels to become individual apps or subscriptions for Apple TV users. This of course would be an epic win not only for Apple, but also for the many consumers like me looking for an a la carte experience that lets users sift out the garbage channels we don't want and just stick with the good stuff. It seems, however, that cable companies are reluctant to give Apple exactly what it wants, knowing that, if they concede in this manner, it may not be long before Apple TV becomes the big name in premium television programming. Much to the chagrin of consumers hoping to have their cake and eat it too, it appears Apple will be the one conceding, as it continues to discuss the possibility of developing a set top box for streaming cable television in conjunction with the other services currently offered on Apple TV. When I reviewed the speculated features of the Apple TV set top box, I thought to myself, "Well, isn't that what TiVo does? Why would anyone choose it?" Indeed, TiVo offers some pretty amazing features at a price that doesn't sting too badly. In fact, just today, I read that TiVo is releasing the latest version of its TV recording device, the TiVo Premier 4, a slightly more economical sibling to the XL4. The TiVo Premier 4 includes four tuners, allowing users to stream and record up to 4 different shows at the same time. It has 500 GB of storage and can hold up to 75 hours of HD programming. This model was developed to solve the problem of multi-show stack-ups - particularly on Sunday nights, when users are forced to choose among their favorite shows. Now, that decision is no longer required.
With the Premier 4, users can actually "shop" for programs they enjoy prior to their airing and prepare a queue of recordings, which will keep them watching their favorite shows on demand for as long as they can keep their eyeballs open. Additionally, the Premier 4 folds favorite internet media providers such as Hulu, Amazon and Netflix into users' searches for shows to watch on demand. When I consider the availability of this box for pretty much any television on the market versus an Apple TV box geared to a specific user base, I can't see how Apple will ever get a leg up in this market. The future of streaming media, however, is a constantly evolving story yet to completely unfold. When it does, perhaps we will all be surprised by who ends up on top. If Apple TV offers a cable set top box, would you buy it, or opt for a TiVo instead? © 2014 iDigitalTimes All rights reserved. Do not reproduce without permission.
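A rough back-of-the-envelope check of the capacity quoted above (500 GB holding 75 hours of HD programming) gives the implied recording bitrate; decimal gigabytes and fully usable storage are our assumptions, since the article does not say:

```python
capacity_bits = 500e9 * 8       # 500 GB in bits, decimal units
hd_hours = 75
seconds = hd_hours * 3600
bitrate_mbps = capacity_bits / seconds / 1e6
# Roughly 14.8 Mbps per recorded stream, a plausible broadcast HD rate
```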
TITLE: Graphing $ \lim _{n \rightarrow \infty} \frac{|x+1|^{n}+x^{2}}{|x|+x^{2n}} $ QUESTION [2 upvotes]: Graph: $$ \lim _{n \rightarrow \infty} \frac{|x+1|^{n}+x^{2}}{|x|+x^{2n}} $$ Here I first tried to resolve the limit: dividing both numerator and denominator by $x^{2n}$ and plugging in $\infty$, I obtained the limit to be zero for all values of $x$. But when I tried to graph it using a graphing calculator I obtained the following graph (I put $n=10000$). Can anyone please point out my mistake? REPLY [3 votes]: Case by case: If $x>\varphi = \frac{1+\sqrt{5}}{2}$, then $x+1<x^2$ and we can divide by $x^{2n}$ to see that: $$ \lim_n \frac{(x+1)^n + x^2}{x+(x^2)^n}=\lim_n \frac{(\frac{x+1}{x^2})^n + x^{2-2n}}{x^{1-2n}+1}=\frac{0+0}{0+1}=0 $$ If $0<x<\varphi$, then $x+1>x^2$. For $x>1$ we can divide by $x^{2n}$ as before, $$ \lim_n \frac{(x+1)^n + x^2}{x+(x^2)^n}=\lim_n \frac{(\frac{x+1}{x^2})^n + x^{2-2n}}{x^{1-2n}+1}=\frac{\infty+0}{0+1}=\infty, $$ while for $0<x\le 1$ the denominator $|x|+x^{2n}$ stays bounded and the numerator $(x+1)^n+x^2\to\infty$, so the limit is again $\infty$. If $-1<x<0$, then $x^2<1$ and $0<x+1<1$, and the limit turns out to be: $$ \lim_n \frac{(x+1)^n + x^2}{-x+(x^2)^n}=\frac{x^2}{-x}=-x $$ If $-2<x<-1$, then $|x+1|<1$ and $x^2>1$, resulting in $$ \lim_n \frac{|x+1|^n + x^2}{-x+(x^2)^n}=\frac{0+x^2}{-x+\infty}=0 $$ And finally, for $x<-2$ we can divide by $x^{2n}$, and taking into account that $\frac{|x+1|}{x^2}<1$ we get that $$ \lim_n \frac{|x+1|^n + x^2}{-x+(x^2)^n}=\lim_n \frac{(\frac{-(x+1)}{x^2})^n + x^{2-2n}}{-x^{1-2n}+1}=\frac{0+0}{0+1} = 0 $$ Maybe we could have considered the last two cases together, but I think that separating them helps conceptually.
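The piecewise limit in the answer above can be confirmed numerically by evaluating the ratio at a large but finite $n$ (here $n=200$, chosen so all intermediate powers stay within double-precision range):

```python
def f(x, n):
    # the ratio before taking the limit in n
    return (abs(x + 1) ** n + x ** 2) / (abs(x) + x ** (2 * n))

n = 200
assert f(3.0, n) < 1e-6              # x > phi: limit 0
assert f(0.5, n) > 1e6               # 0 < x < phi: blows up to +infinity
assert abs(f(-0.5, n) - 0.5) < 1e-6  # -1 < x < 0: limit -x
assert f(-1.5, n) < 1e-6             # -2 < x < -1: limit 0
assert f(-3.0, n) < 1e-6             # x < -2: limit 0
```

The sample points are one representative per open interval; the boundary points ($x = 0, \pm 1, -2, \varphi$) would need separate treatment.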
\begin{document} \title[Bartnik minimizers and improvability] {Bartnik mass minimizing initial data sets and improvability of the dominant energy scalar} \author{Lan-Hsuan Huang} \address{Department of Mathematics, University of Connecticut, Storrs, CT 06269, USA} \email{[email protected]} \author{Dan A. Lee} \address{65-30 Kissena Boulevard, Department of Mathematics, Queens College, Queens, NY 11367, USA} \email{[email protected]} \maketitle \begin{abstract} We introduce the concept of improvability of the dominant energy scalar, and we derive strong consequences of non-improvability. In particular, we prove that a non-improvable initial data set without local symmetries must sit inside a null perfect fluid spacetime carrying a global Killing vector field. We also show that the dominant energy scalar is always \emph{almost} improvable in a precise sense. Using these main results, we provide a characterization of Bartnik mass minimizing initial data sets which makes substantial progress toward Bartnik's stationary conjecture. Along the way we observe that in dimensions greater than eight there exist pp-wave counterexamples (without the optimal decay rate for asymptotic flatness) to the equality case of the spacetime positive mass theorem. As a consequence, we find counterexamples to Bartnik's stationary and strict positivity conjectures in those dimensions. \end{abstract} \tableofcontents \section{Introduction}\label{section:introduction} Let $n\ge 3$ and let $U$ be a connected $n$-dimensional smooth manifold, $g$ a Riemannian metric, and $\pi$ a symmetric $(2,0)$-tensor. We refer to such a triple $(U, g, \pi)$ as an \emph{initial data set} and $\pi$ as the \emph{conjugate momentum tensor}. It is related to the usual $(0,2)$-tensor $k$ via the equation \begin{align}\label{equation:k} k_{ij}:=g_{i\ell}g_{jm} \pi^{\ell m} -\tfrac{1}{n-1}(\tr_g \pi)g_{ij}.
\end{align} We say that $(U, g, \pi)$ \emph{sits inside} a Lorentzian spacetime $(\mathbf{N}, \mathbf{g})$ of one higher dimension if $(U, g)$ isometrically embeds into $(\mathbf{N}, \mathbf{g})$ with $k$ as its second fundamental form. One can define the energy and current densities $\mu$ and $J$ in terms of $g$ and $\pi$, see \eqref{equation:mass-current}, and then we say that $(g,\pi)$ satisfies the \emph{dominant energy condition, or DEC,} if $\mu \ge |J|_g$. This condition can be recast as nonnegativity of the quantity \[ \Sc(g,\pi) := 2(\mu-|J|_g),\] which we will call the \emph{dominant energy scalar}. Note that in the \emph{time-symmetric case} where $\pi \equiv 0$, $\Sc(g,\pi)$ is just the scalar curvature of $g$, denoted $R_g$. In this paper we will study the question of when one can use compactly supported perturbations of $(g,\pi)$ to increase $\Sc$. This flexibility is important for situations in which one wants to find perturbations that preserve (or reinstate) the DEC. Consider the following theorem about prescribing compactly supported perturbations of scalar curvature, which is a slight variant of Theorem 1 of~\cite{Corvino:2000}. \begin{Theorem}[Corvino]\label{theorem:scalar} Let $(\Omega, g)$ be a compact Riemannian manifold with nonempty smooth boundary, such that $g\in C^{4,\alpha}(\Omega)$. Assume that $DR|_g^*$ is injective on $\Int\Omega$, where $DR|_g^*$ denotes the adjoint of the linearization of the scalar curvature operator at $g$. Then there exists a $C^{4,\alpha}({\Omega})$ neighborhood $\mathcal{U}$ of $g$ such that for every $V\subset\subset \Int\Omega$, there exist constants $\epsilon, C>0$ such that the following statement holds: For $\gamma\in \mathcal{U}$ and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}< \epsilon$, there exists $h\in C^{2,\alpha}_c(\Int\Omega)$ with $\|h\|_{C^{2,\alpha}(\Omega)}\le C\| u\|_{C^{0,\alpha}(\Omega)}$ such that the metric $\gamma+h$ satisfies \[ R_{\gamma+h} =R_\gamma + u. 
\] \end{Theorem} Since $DR|_g^*$ is heavily overdetermined, the existence of a nontrivial kernel element $f$ is a highly special circumstance; for example, if $f$ is positive on $\Int\Omega$, then $g$ has constant scalar curvature and $(\Int\Omega, g, 0)$ sits inside a static spacetime. The \emph{constraint operator} on initial data, \begin{equation}\label{equation:constraint} \Phi(g,\pi):=(2\mu, J), \end{equation} may be thought of as a generalization of the scalar curvature operator on metrics. Extending Theorem~\ref{theorem:scalar}, Corvino and Schoen proved that injectivity of $D\Phi|_{(g,\pi)}^*$ allows one to prescribe small, compactly supported perturbations of $(\mu, J)$ using compactly supported perturbations of $(g,\pi)$~\cite[Theorem 2]{Corvino-Schoen:2006}. While this theorem is useful when dealing with \emph{vacuum} initial data sets (those with $\mu=|J|_g=0$), it is less useful for dealing with the DEC. The reason for this is that even if you can prescribe $\mu$ and $J$ exactly, that is not enough to know whether $\mu\ge |J|_g$ holds, because you lose control over $g$. The following definition captures the concept of being able to increase~$\Sc$ using compact perturbations. \begin{Definition} Let $(\Omega, g, \pi)$ be a compact initial data set with nonempty smooth boundary, such that $(g,\pi)\in C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$. 
We say that \emph{the dominant energy scalar is improvable in $\Int\Omega$} if there is a $C^{4,\alpha}({\Omega})\times C^{3,\alpha}({\Omega})$ neighborhood $\mathcal{U}$ of $(g, \pi)$, such that for any $V\subset\subset \Int\Omega$, there exist constants $\epsilon, C>0$ such that the following statement holds: For $(\gamma, \tau)\in \mathcal{U}$ and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}< \epsilon$, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ with $\|(h,w)\|_{C^{2,\alpha}(\Omega)}\le C\| u\|_{C^{0,\alpha}(\Omega)}$ such that the initial data $(\gamma+h, \tau+w)$ satisfies \[ \Sc (\gamma+h, \tau+w) \ge \Sc(\gamma,\tau) + u. \] \end{Definition} Corvino and the first author~\cite{Corvino-Huang:2020} introduced the \emph{modified constraint operator at $(g,\pi)$}, denoted~$\overline{\Phi}_{(g,\pi)}$, to deal with improvability of the dominant energy scalar, by expanding upon conformal-type perturbations discovered in \cite[Section 6]{Eichmair-Huang-Lee-Schoen:2016}. On initial data $(\gamma, \tau)$, \begin{align*} \overline{\Phi}_{(g,\pi)}(\gamma,\tau) :=\Phi(\gamma, \tau) +(0, \tfrac{1}{2} \gamma\cdot J). \end{align*} The following is a slight variant of Theorem 1.1 of~\cite{Corvino-Huang:2020}. (See also~\cite{Corvino-Schoen:2006, Chrusciel-Delay:2003} for the vacuum case.) \begin{Theorem}[Corvino-Huang]\label{theorem:no-kernel} Let $(\Omega, g, \pi)$ be a compact initial data set with nonempty smooth boundary, such that $(g,\pi)\in C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$. Let $D\overline{\Phi}_{(g,\pi)}^*$ denote the adjoint of the linearized operator $\left.D\overline{\Phi}_{(g,\pi)}\right|_{(g,\pi)}$, and assume that it is injective on $\Int\Omega$. Then the dominant energy scalar of $(g, \pi)$ is improvable in $\Int\Omega$. \end{Theorem} The domain of $D\overline{\Phi}_{(g,\pi)}^*$ consists of pairs $(f, X)$ such that $f$ is a $C^2_{\mathrm{loc}}$ scalar function and $X$ is a $C^1_{\mathrm{loc}}$ vector field.
We will refer to these pairs $(f, X)$ as \emph{lapse-shift pairs}. In the case where $(g, \pi)$ happens to be vacuum, there is a nice interpretation of the kernel. (Note that in this case, $\overline{\Phi}_{(g,\pi)}=\Phi$.) \begin{Theorem}[Moncrief~\cite{Moncrief:1975}, cf.~\cite{Fischer-Marsden-Moncrief:1980}]\label{theorem:Moncrief} Let $(U, g, \pi)$ be a vacuum initial data set such that $(g,\pi)\in C^{3}_{\mathrm{loc}}(U)\times C^{2}_{\mathrm{loc}}(U)$, and suppose that there exists a nontrivial lapse-shift pair $(f, X)$ on $U$ solving \[ D{\Phi}|_{(g, \pi)}^*(f, X) =0.\] Then $(U, g, \pi)$ sits inside a vacuum spacetime admitting a unique global Killing vector field $\mathbf{Y}$ such that $\mathbf{Y}= 2f \mathbf{n} + X$ along~$U$, where $\mathbf{n}$ is the future unit normal to $U$.\footnote{The factor of $2$ in front of $f$ is due to the factor of $2$ occurring in the definition of $\Phi$ in~\eqref{equation:constraint} and will appear in many places throughout the paper for this reason.} Conversely, given a vacuum spacetime equipped with a global Killing vector field $\mathbf{Y}$ and a spacelike hypersurface $U$ with induced initial data $(g, \pi)$, if we decompose $\mathbf{Y}=2f \mathbf{n} + X$ along~$U$, then the lapse-shift pair $(f, X)$ must lie in the kernel of $D{\Phi}|_{(g, \pi)}^*$. \end{Theorem} In a general non-vacuum setting, the existence of a nontrivial lapse-shift pair in the kernel of either the adjoint $D\Phi_{(g,\pi)}^*$ or the modified adjoint $D\overline{\Phi}_{(g,\pi)}^*$ has less obvious geometric or physical significance. Our first main result establishes a significant consequence of non-improvability. \begin{Theorem}\label{theorem:improvable} Let $(\Omega, g, \pi)$ be a compact initial data set with nonempty smooth boundary, such that $(g,\pi)\in C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$ and also $(g,\pi) \in C^5_{\mathrm{loc}}(\Int\Omega)\times C^4_{\mathrm{loc}}(\Int\Omega)$. 
Then either the dominant energy scalar is improvable in $\Int\Omega$, or else there exists a nontrivial lapse-shift pair $(f, X)$ on $\Int\Omega$ satisfying the system \begin{equation}\label{equation:pair}\tag{$\star$} \begin{aligned} D\overline{\Phi}_{(g,\pi)}^*(f,X)&=0\\ 2fJ + |J|_g X&=0. \end{aligned} \end{equation} \end{Theorem} The first equation follows directly from Theorem~\ref{theorem:no-kernel}, so the new content is the second equation, which we will refer to as the \emph{$J$-null-vector equation} for $(f,X)$. We are able to show that \eqref{equation:pair} has a meaningful physical consequence along the lines of Theorem~\ref{theorem:Moncrief}, and we also generalize the fact that a nontrivial kernel of $DR|_g$ implies that $R_g$ is constant. To state the result, we introduce some terms. We say that a spacetime $(\mathbf{N}, \mathbf{g})$ satisfies the \emph{spacetime dominant energy condition}\footnote{In general, the spacetime DEC along an initial data set $(U, g, \pi)$ is much stronger than the DEC of the initial data set, $\sigma(g, \pi)\ge 0$, which is equivalent to $G(\mathbf{n}, \mathbf{w})\ge 0$ for any future causal~$\mathbf{w}$.} (or \emph{spacetime DEC} for short) if $G(\mathbf{u}, \mathbf{w})\ge 0$ for all future causal vectors $\mathbf{u}, \mathbf{w}$, where $G$ denotes the Einstein tensor of $\mathbf{g}$. A spacetime $(\mathbf{N}, \mathbf{g})$ is said to be a \emph{null perfect fluid spacetime} with velocity $\mathbf{v}$ and pressure $p$ if $\mathbf{v}$ is either future null or zero at each point and the Einstein tensor takes the form: \[ G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta} + v_\alpha v_\beta. \] We define \emph{null dust} to be null perfect fluid with $p\equiv 0$. \begin{Theorem}\label{theorem:DEC} Let $(U, g, \pi)$ be an initial data set such that $(g,\pi)\in C^{3}_{\mathrm{loc}}(U)\times C^{2}_{\mathrm{loc}}(U)$. 
Assume there exists a nontrivial lapse-shift pair $(f,X)$ on $U$ solving the system~\eqref{equation:pair}, and assume that $f$ is nonvanishing in $U$. Then the following holds: \begin{enumerate} \item The dominant energy scalar $\sigma(g, \pi)$ is constant on $U$. \item $(U, g, \pi)$ sits inside a spacetime $(\mathbf{N}, \mathbf{g})$ that admits a global Killing vector field $\mathbf{Y}$ equal to $2f\mathbf{n} +X$ along~$U$, where $\mathbf{n}$ is the future unit normal to $U$, and $(\mathbf{N}, \mathbf{g})$ is a null perfect fluid spacetime with velocity $\mathbf{v} = \frac{\sqrt{|J|_g}}{2f} \mathbf{Y}$ and pressure $p= -\tfrac{1}{2}\sigma(g, \pi)$. \label{item:Bartnik-spacetime} \item If $(g,\pi)$ satisfies the dominant energy condition, then $\mathbf{g}$ satisfies the spacetime dominant energy condition. \end{enumerate} Conversely, let $(\mathbf{N}, \mathbf{g})$ be a null perfect fluid spacetime such that $\mathbf{g}$ is $C^3_{\mathrm{loc}}$, with velocity $\mathbf{v}$ and pressure $p$, admitting a global $C^2_{\mathrm{loc}}$ Killing vector field $\mathbf{Y}$. Assume that $\mathbf{v} = \eta \mathbf{Y}$ for some scalar function $\eta$. Then $p$ is constant, and for any smooth spacelike hypersurface $U$ with induced initial data $(g, \pi)$ and future unit normal $\mathbf{n}$, if we decompose $\mathbf{Y}=2f \mathbf{n} + X$ along~$U$, then the lapse-shift pair $(f, X)$ satisfies the system~\eqref{equation:pair}. \end{Theorem} Note the nonvanishing assumption on $f$ in the first half of Theorem~\ref{theorem:DEC}.
If $f$ vanishes on an open subset of $U$, then the $J$-null-vector equation implies that $J$ must vanish there as well.\footnote{Technically, this argument only works where $X\ne0$, but the first sentence of the proof of Corollary~\ref{corollary:one-dimensional} guarantees that $X\ne0$ on a dense subset of the zero set of $f$.} Thus the equation $D\overline{\Phi}_{(g,\pi)}^*(f,X)=0$ on such an open subset reduces to saying that $X$ is an infinitesimal symmetry of $(g, \pi)$, that is, $L_X g=0$ and $L_X\pi=0$.\footnote{This justifies the second sentence of the abstract. More precisely, the correct statement is: If $(U, g, \pi)$ is a non-improvable initial data set such that no open subset carries an infinitesimal symmetry, then an open dense subset of $U$ must sit inside a null perfect fluid spacetime carrying a global Killing vector field.} For the important case where it is already known that $\sigma(g, \pi)\equiv 0$ in $U$, we are able to remove the nonvanishing assumption on $f$ from Theorem~\ref{theorem:DEC}, and we can conclude that $(\mathbf{N}, \mathbf{g})$ is a null dust spacetime. (See Theorem~\ref{theorem:f_not_zero}.) The ``converse'' part of Theorem~\ref{theorem:DEC} can be used to construct explicit examples of initial data sets that admit a nontrivial lapse-shift pair solving the system~\eqref{equation:pair}. In particular, pp-wave spacetimes give rise to the following examples. Refer to Section~\ref{section:asymp_flat} for the definition of \emph{asymptotically flat of type $(q, \alpha)$.} \begin{Example}\label{example:pp} For each $n>8$, there exist complete, asymptotically flat initial data sets $(\mathbb{R}^n, g, \pi)$ that satisfy $\sigma(g, \pi)\equiv 0$, admit a nontrivial lapse-shift pair solving the system~\eqref{equation:pair}, and have $E=|P|>0$ where $(E, P)$ is the ADM energy-momentum. 
These examples have asymptotic decay rate $(q, \alpha)$ with $q>\frac{n-2}{2}$ but not with any $q\ge n-5$, and $(g, \pi) = (g_{\mathbb{E}}, 0)$ outside a slab. \end{Example} By \emph{slab}, we just mean any region lying between parallel coordinate planes in $\mathbb{R}^n$. Despite appearances, Example~\ref{example:pp} does not contradict the existing theorems on the equality case of the spacetime positive mass theorem in~\cite{Chrusciel-Maerten:2006, Huang-Lee:2020} because those results demand a decay rate of $q>n-3$ rather than the general $q>\frac{n-2}{2}$ used in this paper and many others. As a companion theorem to Theorem~\ref{theorem:improvable}, we show that the dominant energy scalar is always \emph{almost} improvable in the following sense: \begin{Theorem}\label{theorem:improvability-error} Let $(\Omega, g, \pi)$ be a compact initial data set with nonempty smooth boundary, such that $(g,\pi)\in C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$ and also $(g,\pi) \in C^5_{\mathrm{loc}}(\Int\Omega)\times C^4_{\mathrm{loc}}(\Int\Omega)$. Let $B$ be an open ball in $\Int\Omega$, and let $\delta>0$. Then there is a $C^{4,\alpha}(\Omega)\times C^{3,\alpha}({\Omega})$ neighborhood $\mathcal{U}$ of $(g, \pi)$ such that for any $V\subset\subset\Int\Omega$, there exist constants $\epsilon, C>0$ such that the following statement holds: For $(\gamma, \tau)\in \mathcal{U}$ and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}< \epsilon$, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ with $\|(h,w)\|_{C^{2,\alpha}(\Omega)}\le C\| u\|_{C^{0,\alpha}(\Omega)}$ such that the initial data set $(\gamma+h, \tau+w)$ satisfies \[ \Sc(\gamma+h, \tau+w) \ge \Sc(\gamma,\tau)+u -\delta {\bf 1}_B, \] where ${\bf 1}_B$ is the indicator function of $B$, which equals $1$ on $B$ and $0$ outside $B$.
\end{Theorem} In other words, we can specify an arbitrarily small function with arbitrarily small support, namely $\delta {\bf 1}_B$, and then achieve improvability up to that small error. Our main application of these results is to Bartnik's stationary conjecture. Let $(\Omega_0, g_0, \pi_0)$ be a compact initial data set with nonempty smooth boundary, satisfying the DEC. The \emph{Bartnik mass} $m_B(\Omega_0, g_0, \pi_0)$ is defined to be the infimum of the ADM masses of all ``admissible'' asymptotically flat extensions $(M, g, \pi)$ of $(\Omega_0, g_0, \pi_0)$. The definition of an admissible extension is given in Definition~\ref{definition:admissible}, including a no-horizon condition in terms of marginally outer trapped hypersurfaces. For now we emphasize that in this paper, the extension $M$ refers to an asymptotically flat manifold with boundary $\partial M$ that we think of as being glued to $\partial\Omega_0$. An admissible extension $(M, g, \pi)$ whose ADM mass realizes this Bartnik mass is called a \emph{Bartnik mass minimizer} for $(\Omega_0, g_0, \pi_0)$. Bartnik conjectured that minimizers should have special properties (see \cite[p. 2348]{Bartnik:1989}, \cite[Conjecture~2]{Bartnik:1997}, and \cite[p. 236]{Bartnik:2002}). \begin{conjecture} Let $(\Omega_0, g_0, \pi_0)$ be a compact initial data set with nonempty smooth boundary, satisfying the dominant energy condition, and suppose that $(M, g, \pi)$ is a Bartnik mass minimizer for $(\Omega_0, g_0, \pi_0)$. Then $(\Int M, g, \pi)$ sits inside a vacuum stationary spacetime. \end{conjecture} For our purposes, we define a \emph{stationary} spacetime $(\mathbf{N}, \mathbf{g})$ containing $(\Int M, g, \pi)$ to be one that admits a global Killing vector field $\mathbf{Y}$ that is ``uniformly'' timelike outside some bounded subset of $\Int M$, meaning that away from that subset, $\mathbf{g}(\mathbf{Y}, \mathbf{Y}) < -\varepsilon$ for some $\varepsilon>0$. See~\cite{Chrusciel-Wald:1994}. 
(Note that this definition of \emph{stationary} is different from that used in Bartnik's original formulation of the conjecture, in which the Killing vector field must be globally timelike.) One might also expect that if $(\Omega_0, g_0, \pi_0)$ has Bartnik mass zero, then $(\Int \Omega_0, g_0, \pi_0)$ should sit inside the Minkowski spacetime. We will refer to this as \emph{Bartnik's strict positivity conjecture} because of the analogous conjecture made by Bartnik in the time-symmetric case. Anderson and Jauregui showed that this is false if one demands that all of $(\Omega_0, g_0, \pi_0)$, including the boundary, sits inside the Minkowski spacetime~\cite{Anderson-Jauregui:2019}. There is also a time-symmetric version of Bartnik's conjecture which has been affirmed (see references cited for the precise regularity and dimension assumptions): \begin{Theorem}[Corvino~\cite{Corvino:2000}, Anderson-Jauregui~\cite{ Anderson-Jauregui:2019}, cf.~\cite{Huang-Martin-Miao:2018}] Let $(\Omega_0, g_0, 0)$ be a compact initial data set with nonempty smooth boundary, satisfying $R_{g_0}\ge0$, and suppose that $(M, g, 0)$ minimizes ADM mass among all admissible extensions with $\pi\equiv0$. Then $(\Int M, g, 0)$ sits inside a vacuum static spacetime. \end{Theorem} On the other hand, relatively little progress has been made toward the general, non-time-symmetric case of Bartnik's stationary conjecture until recently. Corvino~\cite{Corvino:2019} used Theorem~\ref{theorem:no-kernel} combined with a conformal argument to show that if $(g,\pi)$ is a Bartnik mass minimizer, then it must admit a nontrivial kernel of $D\overline{\Phi}_{(g,\pi)}^*$. In three dimensions, Zhongshan An~\cite{An:2020} dealt with the case of vacuum minimizers by carrying out the variational approach proposed by Bartnik~\cite{Bartnik:2005} and used by Anderson-Jauregui~\cite{Anderson-Jauregui:2019} in the time-symmetric case.
Using results established in this paper, we are able to prove part of Bartnik's stationary conjecture. \begin{Theorem}\label{theorem:Bartnik} Let $3\le n \le 7$, and let $(\Omega_0, g_0, \pi_0)$ be an $n$-dimensional compact smooth initial data set with nonempty smooth boundary, satisfying the dominant energy condition. Suppose that $(M, g, \pi)$ is a Bartnik mass minimizer for $(\Omega_0, g_0, \pi_0)$ such that $(g, \pi)\in C^5_{\mathrm{loc}}(\Int M)\times C^4_{\mathrm{loc}}(\Int M)$, and it has nonnegative ADM mass (that is, $E\ge |P|$). Then $(M, g, \pi)$ satisfies the following properties: \begin{enumerate} \item $\sigma(g, \pi)\equiv 0$ in $\Int M$.\label{item:sigma_vanish} \item $(\Int M, g, \pi)$ sits inside a null dust spacetime $(\mathbf{N}, \mathbf{g})$ which satisfies the spacetime dominant energy condition and also admits a global Killing vector field~$\mathbf{Y}$. \item The metric $\mathbf{g}$ is vacuum on the domain of dependence of the subset of $\Int M$ where $(g, \pi)$ is vacuum, and $\mathbf{Y}$ is null on the region of $\mathbf{N}$ where $\mathbf{g}$ is not vacuum.\label{item:spacetime_vac} \item If we further assume $E>|P|$, then $(g, \pi)$ is vacuum outside a compact subset of $M$, and thus $(\mathbf{N}, \mathbf{g})$ is vacuum near spatial infinity. \label{item:vanishing-J} \end{enumerate} \end{Theorem} Under a spin assumption, an admissible extension must have $E\ge|P|$. See Remark~\ref{remark:pmt_corners}. For $n>8$, Example~\ref{example:pp} leads to the existence of Bartnik mass minimizers that are non-vacuum and do not admit timelike Killing vectors, thereby contradicting Bartnik's stationary and strict positivity conjectures, though it should be noted that Bartnik only considered the case $n=3$. On the other hand, these examples are consistent with Theorem~\ref{theorem:Bartnik}. 
Although Theorem~\ref{theorem:Bartnik} assumes $n\le7$ for technical reasons (see Proposition~\ref{proposition:openness}), we have no reason to think the theorem fails in higher dimensions. Finally, we remark that in the presence of a cosmological constant, the corresponding DEC takes the form of a constant lower bound on the object that we have named the ``dominant energy scalar'' (or equivalently, one can re-define the ``dominant energy scalar'' by shifting it by a constant). Because of this, one can see that although our paper has been written in the context of zero cosmological constant, the central findings of Theorems~\ref{theorem:improvable}, \ref{theorem:DEC}, and \ref{theorem:improvability-error} can easily be applied in the context of nonzero cosmological constant. \medskip \noindent{\bf Outline of the paper.} In Section~\ref{section:examples} we provide examples of solutions to the system~\eqref{equation:pair} and present Example \ref{example:pp}. In Section \ref{section:new-operators}, we introduce a new infinite-dimensional family of deformations of the modified constraint operator and present some useful properties, including a generalization of Theorem~\ref{theorem:no-kernel}. In Section~\ref{section:deformation}, we prove a key result (Proposition~\ref{theorem:generic}) that says, generically, the adjoint linearizations of those modified operators are either injective, or else kernel elements satisfy a null-vector equation. In Section~\ref{section:improvability}, we apply Proposition~\ref{theorem:generic} to prove Theorems~\ref{theorem:improvable} and~\ref{theorem:improvability-error}. In Section~\ref{section:null}, we obtain several strong consequences of the $J$-null-vector equation including Theorem~\ref{theorem:DEC} (proved in Section~\ref{section:null_perfect}). Section~\ref{section:Bartnik} deals with application to Bartnik mass minimizers.
After constructing suitable deformations in Section~\ref{section:decrease-energy}, we define Bartnik mass and prove Theorem~\ref{theorem:Bartnik} in Section~\ref{section:admissibility}. We end with a short discussion of the concept of Bartnik energy. \medskip \noindent{\bf Acknowledgement.} The authors would like to thank Professors Richard Schoen and Shing-Tung Yau for discussions. The work was completed while the first author was partially supported by the NSF CAREER Award DMS-1452477 and DMS-2005588, Simons Fellowship of the Simons Foundation, and von Neumann Fellowship at the Institute for Advanced Study. The authors also thank the referees for their helpful comments. \section{Examples}\label{section:examples} It is natural to ask whether there exists any \emph{non}-vacuum initial data admitting nontrivial solutions to the system~\eqref{equation:pair}. (The vacuum case is classical in view of Theorem~\ref{theorem:Moncrief}.) It suffices to find any non-vacuum null perfect fluid spacetime $(\mathbf{N}, \mathbf{g})$ with velocity $\mathbf{v}$ and pressure $p$ that carries a global Killing vector field $\mathbf{Y}$ with $\mathbf{v} = \eta \mathbf{Y}$ for some scalar function $\eta$. Then by Theorem~\ref{theorem:DEC}, any spacelike hypersurface in such $(\mathbf{N}, \mathbf{g})$ gives rise to an initial data set that carries a nontrivial lapse-shift pair solving~\eqref{equation:pair}. The simplest case is when $J=0$ everywhere. By Theorem~\ref{theorem:DEC}, we can see that this corresponds to when $(\mathbf{N}, \mathbf{g})$ has $G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta}$ with constant $p$ and also admits a Killing vector. Of course, this is the same as being vacuum with respect to some \emph{cosmological constant}\footnote{Everywhere else in this paper, when we say ``vacuum'' we are referring to zero cosmological constant.} $\Lambda=-p$ and admitting a Killing vector. 
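To spell out this equivalence, write $R_{\mathbf{g}}$ and $\mathrm{Ric}_{\mathbf{g}}$ for the scalar and Ricci curvature of the $(n+1)$-dimensional spacetime metric $\mathbf{g}$. Taking the $\mathbf{g}$-trace of $G_{\alpha\beta} = p\, \mathbf{g}_{\alpha\beta}$ yields \begin{align*} -\tfrac{n-1}{2}\, R_{\mathbf{g}} = (n+1)\, p, \qquad \mbox{so} \qquad R_{\mathbf{g}} = -\tfrac{2(n+1)}{n-1}\, p, \end{align*} and hence \begin{align*} \mathrm{Ric}_{\mathbf{g}} = G + \tfrac{1}{2} R_{\mathbf{g}}\, \mathbf{g} = \left( p - \tfrac{n+1}{n-1}\, p \right) \mathbf{g} = -\tfrac{2p}{n-1}\, \mathbf{g}, \end{align*} which is precisely the vacuum Einstein equation $G + \Lambda \mathbf{g} = 0$ with $\Lambda = -p$. (Constancy of $p$ is automatic here: since $G = p\,\mathbf{g}$, the contracted second Bianchi identity gives $0 = \Div_{\mathbf{g}} G = dp$.)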
Perhaps the most well-known explicit examples of such spacetimes would be the de Sitter and anti-de Sitter analogs of the Kerr spacetime, discovered by Carter~\cite{Carter:1973}. Many more examples with negative cosmological constant are constructed by Chru\'{s}ciel and Delay~\cite{Chrusciel-Delay:2007}. More interesting is what happens where $J$ is nonzero. In this case the $J$-null-vector equation of~\eqref{equation:pair} imposes a stringent condition. In particular, the Killing vector $\mathbf{Y}$ must be null. If we make the simplifying assumption that $\mathbf{Y}$ is actually covariantly constant, then there is a coordinate chart $u, x^1, \dots, x^{n}$ such that $\mathbf{Y} = \frac{\partial}{\partial u}$ and the metric $\mathbf{g}$ can be locally expressed as \[ \mathbf{g} = 2 du\, dx^n + S\, (dx^n)^2 +\sum_{a,b=1}^{n-1} h_{ab}dx^a dx^b, \] where $S$ and $h_{ab}$ are functions independent of $u$. (See~\cite{Blau:2020}, for example.) If we further assume that $h_{ab}=\delta_{ab}$, then $(\mathbf{N}, \mathbf{g})$ is called a \emph{\hbox{pp-wave} spacetime}. It is standard to verify that the Einstein tensor is $G_{\alpha\beta}=-\tfrac{1}{2} (\Delta' S) Y_\alpha Y_\beta$, where $\Delta'$ represents the Euclidean Laplacian in the $x':=(x^1, \dots, x^{n-1})$ variables. It easily follows that the dominant energy condition holds if and only if $\Delta' S\le0$. In the special case that $S$ is positive, the constant $u$-slices give rise to examples of initial data sets, which are described in the following lemma, whose proof is given in Appendix~\ref{se:pp}. \begin{lemma}\label{lemma:pp-initial} Let $U$ be an open subset of $\mathbb{R}^n$ equipped with the Cartesian coordinates $x^1, \dots, x^n$. Let $S$ be a positive function on $U$ satisfying $\Delta' S\le 0$.
Define the initial data set $(g, \pi)$ on $U$ by \[ g = S\, (dx^n)^2 + \sum_{a=1}^{n-1} (dx^a)^2,\] and \begin{align*} \pi^{nn} &= 0 \\ \pi^{na}&=\pi^{an} = \tfrac{1}{2}S^{-\frac{3}{2}} \tfrac{\partial S}{\partial x^a} \\ \pi^{ab}&=- \tfrac{1}{2}S^{-\frac{3}{2}}\tfrac{\partial S}{\partial x^n}\delta^{ab}, \end{align*} where the $a$ and $b$ indices run from $1$ to $n-1$. Then \begin{align*} \mu &= -\tfrac{1}{2} S^{-1} \Delta' S\\ J &= \tfrac{1}{2} S^{-\frac{3}{2}} (\Delta' S) \tfrac{\partial}{\partial x^n}, \end{align*} and in particular, $\mu=|J|_g\ge 0$. Furthermore, if we define \begin{align*} f = \tfrac{1}{2} S^{-\frac{1}{2}}\quad \mbox{ and }\quad X = S^{-1} \tfrac{\partial}{\partial x^n}, \end{align*} then $(f,X)$ is a solution to the system~\eqref{equation:pair} on $(U, g, \pi)$. \end{lemma} In the next lemma, we summarize the properties that $S$ must have in order for the data from Lemma~\ref{lemma:pp-initial} to lead to Example~\ref{example:pp}. Below, we will adopt the definitions of weighted H\"{o}lder spaces and asymptotic flatness in Section~\ref{section:asymp_flat}, which the reader may want to review. \begin{lemma}\label{lemma:S} Let $n>3$. There exist nonconstant positive smooth functions $S$ on $\mathbb{R}^{n}$ with the following properties: \begin{enumerate} \item $\Delta' S\le 0$ everywhere, strictly negative somewhere, and $\Delta'S$ is integrable on $\mathbb{R}^n$. \label{item:superharmonic} \item $S \equiv 1$ in $\{ |x^n|\ge C\}$ for some constant $C>0$.\label{item:slab} \item $\displaystyle \lim_{\rho\to \infty} \int_{|x'|=\rho} - \sum_{a=1}^{n-1} \frac{\partial S}{\partial x^a} \frac{x^a}{|x'|} \, d\mu $ exists and is positive. 
\label{item:mass} \item For each nonnegative integer $k$ and each $\alpha\in (0, 1)$, we have $S-1\in C^{k,\alpha}_{-q} (\mathbb{R}^{n})$ with $q=n-3-(k+\alpha)$.\label{item:decay} \end{enumerate} \end{lemma} \begin{remark} Such a function $S$ cannot exist for $n=3$ because Liouville's theorem says that any superharmonic function on $\mathbb{R}^2$ that is bounded below must be constant. \end{remark} \begin{proof} Let $F$ be a smooth nonnegative function on $\mathbb{R}^{n-1}$ with coordinates $x'=(x^1, \ldots, x^{n-1})$, such that $F= O(|x'|^{-s})$ for some $s>n-1$. We can solve $\Delta'\psi = -F$ on $\mathbb{R}^{n-1}$ via convolution with the fundamental solution of the Laplacian on Euclidean $\mathbb{R}^{n-1}$. As long as $F$ is not identically zero, $\psi(x')$ will be a positive, globally superharmonic function on $\mathbb{R}^{n-1}$. For $n>3$, it must have the expansion $\psi(x')= A|x'|^{3-n}+O_1(|x'|^{2-n})$, and since $\psi$ is positive, the constant $A$ must also be positive. Now define $S(x', x^n) = 1+\phi(x^n) \psi(x')$, where $\phi$ is chosen to be any nontrivial compactly supported smooth nonnegative function on $\mathbb{R}$. Note that $\Delta' S = -\phi(x^n)F(x')\le0$ everywhere and strictly negative somewhere. It is straightforward to verify that $S $ satisfies Items \eqref{item:superharmonic}, \eqref{item:slab}, and \eqref{item:mass}. Since the derivatives of $S$ in the $x^n$ direction do not decay any faster than $|x'|^{3-n}$, we can only conclude that $S-1$ and its derivatives of any order are $O(|x|^{3-n})$ and thus $S-1\in C^{k,\alpha}_{-q} (\mathbb{R}^{n})$ with $q=n-3-k-\alpha$ by the definition of weighted H\"older spaces. \end{proof} \begin{proof}[Proof of Example~\ref{example:pp}] Choose any $S$ as in Lemma~\ref{lemma:S} and use this choice in Lemma~\ref{lemma:pp-initial} to construct initial data $(\mathbb{R}^n, g, \pi)$. We claim that for $n>8$, this is the desired example. 
Note that by construction, $(g,\pi)$ is clearly complete, $(g, \pi) = (g_{\mathbb{E}}, 0)$ outside of a slab, satisfies $\sigma(g,\pi)=0$, and admits a nontrivial solution to~\eqref{equation:pair}. The main task is to show that $(g,\pi)$ is asymptotically flat. Recall that our asymptotic flatness condition requires $g_{ij}-\delta_{ij}\in C^{2,\alpha}_{-q}(\mathbb{R}^n)$ and $\pi_{ij}\in C^{1,\alpha}_{-q}(\mathbb{R}^n)$ for some $\frac{n-2}{2} < q < n-2$, and $(\mu, J)\in L^1(\mathbb{R}^n)$. For our $(g, \pi)$ from Lemma~\ref{lemma:pp-initial}, this is equivalent to requiring that $S-1\in C^{2,\alpha}_{-q}$ for some $\frac{n-2}{2} < q < n-2$ and $\Delta' S$ is integrable. By Item~\eqref{item:decay} of Lemma~\ref{lemma:S}, this imposes the following condition on $n$: \[ (n-3)-(2+\alpha) > \tfrac{n-2}{2} \text{ for some $\alpha\in(0,1)$},\quad \mbox{or equivalently, } \quad n>8. \] To see that $E=|P|>0$, we evaluate the ADM energy-momentum by integrating over large capped cylinders. The caps do not contribute, and we can see that \begin{align*} E = -P_n = \frac{1}{2(n-1)\omega_{n-1}} \lim_{\rho \to \infty} \int_{|x'|=\rho} -\sum_{a=1}^{n-1}\frac{\partial S}{\partial x^a} \frac{x^a}{|x'|} \, d\mu >0, \end{align*} and $P_1=\dots=P_{n-1}=0$. \end{proof} \begin{remark}While the above argument needs $n>8$, it can be easily shown that initial data from Lemma~\ref{lemma:pp-initial} cannot be asymptotically flat and satisfy the DEC if $3\le n \le 6$. Previously, Beig and Chru\'{s}ciel observed that \emph{vacuum} pp-waves give rise to initial data sets with the appropriate asymptotic decay rate and $E=|P|>0$ for $n=3$, but these examples are not complete~\cite[p. 1951]{Beig-Chrusciel:1996}. \end{remark} In dimensions $n>8$, Example~\ref{example:pp} provides numerous counterexamples to what one might naively expect to be the optimal statement of the equality case of the spacetime positive mass theorem.
The ``expected'' statement is that any complete, asymptotically flat initial data set satisfying the DEC and having $E=|P|$ must have $E=|P|=0$ and sit inside Minkowski space. Although the strong decay rate $q=n-2$ might be regarded as the most natural and physically relevant asymptotically flat decay rate, Example~\ref{example:pp} is still surprising in light of the fact that the spacetime positive mass inequality $E\ge|P|$ does indeed hold for all decay rates $q>\frac{n-2}{2}$. Specifically, a density theorem \cite[Theorem 18]{Eichmair-Huang-Lee-Schoen:2016} shows that Example~\ref{example:pp} lies in the \emph{limit} of complete, asymptotically flat initial data sets satisfying the DEC with strong decay rate $q=n-2$. It would be interesting to know the lowest decay rate for which the equality case of the spacetime positive mass theorem holds. In dimensions $n=3$ and $4$, by~\cite{Beig-Chrusciel:1996, Chrusciel-Maerten:2006, Huang-Lee:2020}, the decay rate assumption of $q>\frac{n-2}{2}$ was already known to be sufficient for the equality case to hold. But in dimensions $n=5$ through $8$, there is a gap between $q>n-3$ (for which the equality case of the positive mass theorem is known to hold) and $q>\frac{n-2}{2}$, where counterexamples might or might not exist. In dimensions $n>8$, there is a gap between $q>n-3$ and the counterexamples at $q=n-5-\alpha$ obtained in Example~\ref{example:pp}. The fact that Example~\ref{example:pp} is Euclidean outside of a slab is reminiscent of the work of Carlotto and Schoen~\cite{Carlotto-Schoen:2016}, who constructed examples of complete, asymptotically flat vacuum initial data sets that are trivial outside of a conical region, but do not sit inside Minkowski space. \begin{example}[Bartnik mass minimizers]\label{example:Bartnik} These examples also give counterexamples to Bartnik's stationary and strict positivity conjectures for $n>8$.
Let $\Omega_0$ be a closed ball centered at the origin in an initial data set $(\mathbb{R}^n, g, \pi)$ from Example~\ref{example:pp}. For large enough $\Omega_0$, it is clear that $(\mathbb{R}^n\smallsetminus \Int\Omega_0, g, \pi)$ is an admissible extension of $(\Omega_0, g, \pi)$. See Definition~\ref{definition:admissible} for the definition of admissible extensions. Since the extension $(\mathbb{R}^n\smallsetminus \Int\Omega_0, g, \pi)$ has ADM mass equal to zero, the Bartnik mass of $(\Omega_0, g, \pi)$ is zero and the extension has to be a Bartnik mass minimizer\footnote{ This is assuming that the ADM masses of admissible extensions are always nonnegative. This would be the case if we only consider admissible extensions that are spin (for example, diffeomorphic to $\mathbb{R}^n\smallsetminus \Int\Omega_0$). See Remark~\ref{remark:pmt_corners}.}. This contradicts Bartnik's stationary and strict positivity conjectures because $(\mathbb{R}^n\smallsetminus \Int\Omega_0, g, \pi)$ and $(\Omega_0, g, \pi)$ are not vacuum, so long as the function $F$ from the proof of Lemma~\ref{lemma:S} was chosen to be positive on $\mathbb{R}^{n-1}$. On the other hand, if we select $F$, $\phi$, and $\Omega_0$ such that $\phi(x^n)F(x')$ is supported in the ball $\Omega_0$, then the Bartnik minimizing extension $(\mathbb{R}^n\smallsetminus \Int\Omega_0, g, \pi)$ is vacuum, but it cannot sit inside a stationary spacetime, as defined in the introduction. This can be seen as follows: Given a global Killing field that is uniformly timelike outside a bounded subset of $\mathbb{R}^n\smallsetminus \Omega_0$, it induces a lapse-shift pair $(f, X)$ with $4f^2 > |X|^2_g +\varepsilon$ near infinity, for some $\varepsilon>0$, and by Theorem~\ref{theorem:Moncrief}, it must satisfy $D\Phi|^*_{(g,\pi)}(f, X)=0$ in the vacuum region near infinity.
Applying Lemma~\ref{lemma:asymptotics}, it is not hard to see that the inequality $4f^2 > |X|^2_g \ge0$ is only possible if the constants $c_i$ and $d_{ij}$ appearing in~\eqref{equation:asymptotics-linear} are all zero, and thus $(f, X)$ must be asymptotic to $(aE, -2aP)$ for some constant $a$. Therefore $4(aE)^2 > |2aP|^2 +\varepsilon$, but this is a contradiction since we already know that $E=|P|$ in this example. \end{example} Example~\ref{example:Bartnik} may be surprising because it shows that \emph{asymptotically flat} Cauchy hypersurfaces can exist in pp-wave spacetimes in higher dimensions and satisfy admissibility for the Bartnik mass. Szabados~\cite[p. 125]{Szabados:2009} opined that ``pure radiation'' initial data $(\Omega_0, g_0, \pi_0)$, as is found in a pp-wave, should have zero ``quasi-local mass'' and suggested that strict positivity of the Bartnik mass is undesirable. While we have computed the Bartnik mass to be zero for the cases in~Example~\ref{example:Bartnik}, the general question remains open. In light of Example~\ref{example:Bartnik}, one might want to require a strong decay rate of $q=n-2$ in the definition of admissible extensions, but even with such a modified definition, it is still possible that $(\Omega_0, g, \pi)$ could have Bartnik mass equal to zero, in view of the density theorem mentioned above. \section{A new family of modified constraint operators}\label{section:new-operators} We set some conventions for our usage of variables: When we refer to an initial data set $(U, g, \pi)$, we assume $U$ is a connected manifold unless otherwise specified, and we do not require $(U, g)$ to be complete since we will often want to think of it as an open subset of some larger manifold. We will typically use $\Omega$ to denote a compact manifold with nonempty smooth boundary and $M$ to denote a complete asymptotically flat manifold which may or may not have boundary.
We will \emph{always} assume $(g, \pi)$ is locally $C^3\times C^2$ and be explicit when we assume more regularity. Define the \emph{energy density} $\mu$ and the \emph{current density} $J$ by \begin{align} \label{equation:mass-current} \begin{split} \mu&:=\tfrac{1}{2}\left(R_g + \tfrac{1}{n-1}(\tr_g \pi )^2 - |\pi|_g^2 \right)\\ J&:= \Div_g \pi, \end{split} \end{align} where $n$ is the dimension of $U$. The \emph{constraint map} is defined by $\Phi(g, \pi) := (2\mu, J)$. Denote its linearization at $(g, \pi)$ by $D\Phi|_{(g, \pi)}$. Recall the formula for the $L^2$ formal adjoint operator $D\Phi|_{(g, \pi)}^*$ on a \emph{lapse-shift pair} $(f, X)$, where $f$ is a $C^2_{\mathrm{loc}}$ function and $X$ is a $C^1_{\mathrm{loc}}$ vector field on~$U$: \begin{align} \label{equation:adjoint} \begin{split} &D\Phi|_{(g,\pi)} ^*(f, X) = \left(-(\Delta_g f)g_{ij}+ f_{;ij} +\left[-R_{ij} +\tfrac{2}{n-1} (\mbox{tr}_g \pi) \pi_{ij} - 2 \pi_{i\ell} \pi^\ell_j \right] f\right.\\ & \quad+ \tfrac{1}{2} \left[ (L_X\pi)_{ij} + (\Div_g X) \pi_{ij}- X_{k;m} \pi^{km} g_{ij} - \langle X, J \rangle_g g_{ij} \right]- (X\odot J)_{ij}, \\ &\quad \left. -\tfrac{1}{2} (L_Xg)^{ij} + \left(\tfrac{2}{n-1} (\mbox{tr}_g \pi ) g^{ij}- 2 \pi^{ij} \right) f\right), \end{split} \end{align} where semicolon denotes covariant differentiation, $R_{ij}$ is the Ricci curvature of $g$, $X\odot J$ is the symmetric tensor product $(X\odot J)_{ij} = \tfrac{1}{2} (X_i J_j+ J_i X_j)$, and all indices are raised and lowered using~$g$. In particular, note that $(L_X\pi)_{ij}:=g_{i\ell} g_{jm} (L_X\pi)^{\ell m}$. 
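As a consistency check on \eqref{equation:adjoint}, specializing to the time-symmetric vacuum setting $\pi=0$ and $X=0$ kills every term involving $\pi$, $X$, or $J$, and one recovers the adjoint of the linearized scalar curvature operator discussed in the introduction: \begin{align*} D\Phi|_{(g,0)}^*(f, 0) = \left( -(\Delta_g f) g_{ij} + f_{;ij} - f R_{ij},\ 0 \right) = \left( DR|_g^*\, f,\ 0 \right). \end{align*} In particular, a nontrivial kernel element $f>0$ of this operator is exactly the situation described after Theorem~\ref{theorem:scalar}, where $(\Int\Omega, g, 0)$ sits inside a static spacetime.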
The \emph{modified constraint operator} at $(g,\pi)$, introduced by Corvino and the first author~\cite{Corvino-Huang:2020}, is defined on initial data $(\gamma,\tau)$ on $U$ by \begin{align*} \overline{\Phi}_{(g, \pi)}(\gamma,\tau) :=\Phi(\gamma, \tau) +(0, \tfrac{1}{2} \gamma\cdot J) \end{align*} where $J$ is the current density of $(g,\pi)$ and $(\gamma\cdot J)^i := g^{ij} \gamma_{jk} J^k$ denotes the contraction of $\gamma$ and $J$ with respect to the background metric $g$. Denote its linearization at $(g, \pi)$ by $D\overline{\Phi}_{(g, \pi)}:= \left.D\overline{\Phi}_{(g, \pi)}\right|_{(g,\pi)}$. Then its $L^2$ formal adjoint $D\overline{\Phi}_{(g, \pi)}^*$ on a lapse-shift pair $(f,X)$ satisfies \begin{align}\label{equation:adjoint-mod} D\overline{\Phi}_{(g, \pi)}^* (f,X) = (D\Phi|_{(g,\pi)})^*(f, X) + (\tfrac{1}{2} X\odot J, 0). \end{align} \begin{definition} Given a function $\varphi$ and a vector field $Z$ on $U$, we introduce the \emph{$(\varphi, Z)$-modified constraint operator} at $(g, \pi)$, denoted $\Phi^{(\varphi, Z)}_{(g,\pi)}$. It is defined on initial data $(\gamma,\tau)$ on $U$ by \begin{align} \Phi^{(\varphi, Z)}_{(g,\pi)} (\gamma, \tau) &:=\overline{\Phi}_{(g,\pi)}(\gamma, \tau) + \left(2\varphi |Z|_\gamma^2, \varphi |Z|_g \gamma \cdot Z\right)\notag \\ &= \Phi(\gamma, \tau) +(0, \tfrac{1}{2} \gamma\cdot J) + \left(2\varphi |Z|_\gamma^2, \varphi |Z|_g \gamma \cdot Z\right), \label{equation:modified2} \end{align} where $(\gamma\cdot Z)^i := g^{ij} \gamma_{jk} Z^k$ denotes the contraction of $\gamma$ and $Z$ with respect to $g$. Note that with our notation, $\Phi^{(0,0)}_{(g,\pi)}=\overline{\Phi}_{(g,\pi)}$. \end{definition} Denote the linearization at $(g, \pi)$ by $D\Phi^{(\varphi, Z)}_{(g, \pi)}:= \left.D\Phi^{(\varphi, Z)}_{(g, \pi)}\right|_{(g,\pi)}$. 
Then it is easy to see that its $L^2$ formal adjoint $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$ on a lapse-shift pair $(f,X)$ satisfies
\begin{align} \left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*(f, X) =D\overline{\Phi}_{(g, \pi)}^*(f, X) + \left( \varphi Z\odot (2fZ+|Z|_g X), 0\right). \label{equation:adjoint-Z-mod} \end{align}
\begin{lemma}\label{lemma:Hessian} If $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*(f, X) =0$, then
\begin{align} \begin{split}\label{equation:Hamiltonian} 0 &= - (\Delta_g f)g_{ij} + f_{;ij} -R_{ij} f + \left[ \tfrac{3}{n-1}(\tr_g \pi)\pi_{ij} - 2\pi_{i\ell}\pi^\ell_j \right]f\\ &\quad + \left[ -\tfrac{1}{n-1}(\tr_g \pi)^2 + |\pi|^2_g \right] g_{ij} f\\ &\quad +\tfrac{1}{2}(L_X \pi)_{ij} -\tfrac{1}{2}\langle X, J\rangle_g g_{ij} -\tfrac{1}{2}(X\odot J)_{ij} \\ &\quad + \varphi \left[Z\odot (2f Z + |Z|_g X)\right]_{ij}\end{split}\\ 0&=-\tfrac{1}{2}(L_Xg)_{ij} + \left( \tfrac{2}{n-1} (\tr_g \pi) g_{ij} - 2\pi_{ij} \right)f.\label{equation:momentum} \end{align}
Moreover, if we assume \eqref{equation:momentum}, then $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*(f, X) =0$ is equivalent to \eqref{equation:Hamiltonian}.
Furthermore, if $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*(f, X) =0$, then $(f, X)$ satisfies the Hessian type equations:
\begin{align} \begin{split}\label{equation:Hessian} 0&= f_{;ij} +\left[ -R_{ij} + \tfrac{2}{n-1} (\tr_g \pi) \pi_{ij} - 2 \pi_{ik} \pi^k_j \right]f\\ &+ \left[ \tfrac{1}{n-1} \left(R_g- \tfrac{2}{n-1} (\tr_g \pi)^2 + 2|\pi|_g^2\right)g_{ij}\right] f\\ &+ \tfrac{1}{2} \left[ (L_X\pi)_{ij} + (\Div_g X) \pi_{ij} \right]-\tfrac{1}{2}(X\odot J)_{ij} \\ & +\tfrac{1}{2(n-1)} \left[- \tr_g (L_X \pi) - (\Div_gX) (\tr_g \pi) + X_{k;m}\pi^{km} + 2 \langle X, J \rangle_g\right]g_{ij} \\ &+ \varphi \left[Z\odot (2f Z + |Z|_g X)\right]_{ij}- \tfrac{1}{n-1}\varphi (2 f |Z|_g^2 + |Z|_g \langle X, Z\rangle_g) g_{ij}, \end{split} \end{align}
\begin{align} \begin{split} \label{equation:second-derivative} 0&= X_{i;jk}+\tfrac{1}{2} \left(R^{\ell}_{kji} + R^{\ell}_{ikj} + R^{\ell}_{ijk}\right) X_{\ell}\\ &\,\,\,-\left[\left(\tfrac{2}{n-1} (\tr_g \pi) g_{ij} -2 \pi_{ij} \right) f\right]_{;k}-\left[\left(\tfrac{2}{n-1} (\tr_g \pi) g_{ki} - 2\pi_{ki} \right) f\right]_{;j} \\ &\,\,\, +\left[\left(\tfrac{2}{n-1} (\tr_g \pi) g_{jk} - 2\pi_{jk} \right) f\right]_{;i}, \end{split} \end{align}
where our convention for the Riemann tensor is such that the Ricci tensor $R_{jk}:=R^\ell_{\ell j k}$. \end{lemma}
\begin{proof} From equations~\eqref{equation:adjoint}, \eqref{equation:adjoint-mod}, and~\eqref{equation:adjoint-Z-mod}, we see that $(f, X)$ satisfies \eqref{equation:momentum} and
\begin{align}\label{equation:Hamiltonian0} \begin{split} 0&= -(\Delta_g f )g_{ij} + f_{;ij} + \left[ -R_{ij} + \tfrac{2}{n-1} (\tr_g \pi) \pi_{ij} - 2 \pi_{i\ell} \pi^\ell_j \right]f \\ &\quad + \tfrac{1}{2} \left[ (L_X\pi)_{ij} + (\Div_g X) \pi_{ij}- X_{k;m} \pi^{km} g_{ij} - \langle X, J \rangle_g g_{ij} \right] \\ &\quad - \tfrac{1}{2}(X\odot J)_{ij}+ \varphi \left[Z\odot (2f Z + |Z|_g X)\right]_{ij}.
\end{split} \end{align}
We can use~\eqref{equation:momentum} and its trace to trade the $(\Div_g X)\pi_{ij}$ and $X_{k;m}\pi^{km}g_{ij}$ terms in the above equation for terms involving $\pi$ and $f$. Doing this, we obtain~\eqref{equation:Hamiltonian} after straightforward manipulations. Equation~\eqref{equation:Hessian} comes from taking the trace of~\eqref{equation:Hamiltonian0} to solve for $\Delta_g f$ and then substituting it back into~\eqref{equation:Hamiltonian0}. Equation~\eqref{equation:second-derivative} follows from~\eqref{equation:momentum} and commuting second covariant derivatives of $X$ to obtain the curvature terms. (See, for example,~\cite[Lemma B.3]{Huang-Lee:2020} for details.) \end{proof}
By Theorem~\ref{theorem:no-kernel}, injectivity of $D\Phi|_{(g,\pi)}^*$ allows one to prescribe perturbations of $\Phi$ using compactly supported perturbations of $(g,\pi)$~\cite[Theorem 2]{Corvino-Schoen:2006}. Essentially the same reasoning can be used to prove a similar statement for the $(\varphi, Z)$-modified operator $\Phi^{(\varphi, Z)}_{(g, \pi)}$. We say that the operator $\left( D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$ is \emph{injective on $\Int\Omega$} if it is injective when thought of as a linear map on the domain $C^2_{\mathrm{loc}}(\Int \Omega)\times C^1_{\mathrm{loc}}(\Int \Omega)$.
\begin{theorem}\label{theorem:deformationZ} Let $(\Omega, g, \pi)$ be a compact initial data set with nonempty smooth boundary, such that $(g,\pi)$ is $C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$ for some $\alpha\in (0,1)$, and let $(\varphi, Z)\in C^{2,\alpha}(\Omega)$. Assume that $\left( D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$ is injective on $\Int\Omega$.
Then there is a $C^{4,\alpha}(\Omega)\times C^{3,\alpha}(\Omega)$ neighborhood $\mathcal{U}$ of $(g, \pi)$ and a $C^{2,\alpha}(\Omega)$ neighborhood $\mathcal{W}$ of $(\varphi, Z)$ such that for any $V\subset\subset \Int\Omega$, there exist constants $\epsilon, C>0$ such that the following statement holds: For $(\gamma, \tau)\in \mathcal{U}$, $(\varphi', Z')\in\mathcal{W}$, and $(u,\Upsilon)\in C^{0,\alpha}_c(V)\times C^{1,\alpha}_c(V)$ with \hbox{$\| (u, \Upsilon) \|_{C^{0,\alpha}(\Omega)\times C^{1,\alpha}(\Omega)}< \epsilon$}, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ satisfying $\|(h,w)\|_{C^{2,\alpha}(\Omega)}\le C\| (u, \Upsilon)\|_{C^{0,\alpha}(\Omega)\times C^{1,\alpha}(\Omega)}$ such that
\[ \Phi^{(\varphi', Z')}_{(\gamma,\tau)} (\gamma+h, \tau+w) = \Phi^{(\varphi', Z')}_{(\gamma,\tau)}(\gamma, \tau) + (u, \Upsilon). \]
\end{theorem}
\begin{proof} The proof is essentially the same as the proof of \cite[Theorem 3.1]{Corvino-Huang:2020} (which is itself based on~\cite[Theorem 2]{Corvino-Schoen:2006}), because the operators involved only differ by inconsequential zero order terms. Note that \cite[Theorem 3.1]{Corvino-Huang:2020} is stated more generally in terms of certain weighted H\"{o}lder spaces, but here we state a simpler version sufficient for our applications. We outline the steps with explicit references to~\cite{Corvino-Huang:2020}. For given $(u, \Upsilon)\in C^{0,\alpha}_c(V)\times C^{1,\alpha}_{c}(V)$, we first solve the linearized equation by showing that there exists $(h_0, w_0)\in C^{2,\alpha}_c(\Int \Omega)$ solving $D\Phi^{(\varphi, Z)}_{(\gamma,\tau)}(h_0, w_0) = (u, \Upsilon)$. To do this, we define a functional $\mathcal{G}$ on the space of lapse-shift pairs $(f, X)$ as in \cite[Section 5.1]{Corvino-Huang:2020} by substituting the adjoint operator there with our operator $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$.
The injectivity assumption allows us to derive the coercivity estimate for $\left(D\Phi^{(\varphi, Z)}_{(\gamma,\tau)}\right)^*$, just as in \cite[Equation (5.5)]{Corvino-Huang:2020}, because the operators involved differ only by zero order terms. Therefore, we can minimize the functional $\mathcal{G}$ to obtain a solution $(h_0, w_0)$ to the linearized equation. Moreover, $(h_0, w_0)$ satisfies the same weighted estimates as in \cite[Theorem 5.6]{Corvino-Huang:2020} because the Schauder interior estimates apply in the same way in our case. Once the weighted estimates are established, the iteration scheme \cite[Theorem 5.10 and Lemma 5.11]{Corvino-Huang:2020} is applicable in our setting to solve the nonlinear equation as desired. \end{proof}
The main usefulness of the modified constraint operator $\overline{\Phi}_{(g,\pi)}$ is that controlling the modified constraints of $(\gamma, \tau)$ near $(g,\pi)$ gives good control over the dominant energy scalar $\Sc(\gamma, \tau)$. Looking at the case $Z=J$, the operator $\Phi^{(\varphi, J)}_{(g, \pi)}$ shares a similar property, and this is what motivated the definition of $\Phi^{(\varphi, J)}_{(g, \pi)}$.
\begin{lemma}\label{lemma:sigma_preserve} Let $(\gamma, \tau)$ and $(\bar\gamma, \bar\tau)$ be initial data on a manifold $U$ such that $|\bar\gamma-\gamma|_\gamma<1$. Let $\varphi$ be a function on $U$ such that $|\varphi J|_\gamma < (\sqrt{2}-1)/2$, where $J$ is the current density of $(\gamma, \tau)$. Assume that
\begin{equation}\label{equation:prescribe} \Phi^{(\varphi, J)}_{(\gamma, \tau)} (\bar\gamma, \bar\tau ) = \Phi^{(\varphi,J)}_{(\gamma, \tau)} (\gamma, \tau ) + (u, 0) \end{equation}
for some function $u$. Then
\[ \Sc(\bar\gamma,\bar\tau) \ge \Sc(\gamma, \tau)+u. \]
\end{lemma}
\begin{proof} Let $(\bar{\mu}, \bar{J})$ and $(\mu, J)$ denote the energy and current densities of $(\bar{\gamma}, \bar{\tau})$ and $(\gamma, \tau)$, respectively, and let $(h, w):=(\bar{\gamma}-\gamma, \bar{\tau}-\tau)$.
In what follows, we will compute all lengths, inner products, and ``dot contractions'' using the metric $\gamma$, unless otherwise specified. If we rewrite our hypothesis~\eqref{equation:prescribe} using the definition of the $(\varphi, J)$-modified constraint operator~\eqref{equation:modified2} and write as much as we can in terms of $h$, we can see that
\[ \Phi(\bar{\gamma}, \bar{\tau}) = \Phi(\gamma, \tau) + \left(-2\varphi \langle h\cdot J, J\rangle, - \left(\tfrac{1}{2} + \varphi |J|\right) h\cdot J\right) + (u, 0), \]
or in other words,
\begin{align*} \bar{\mu} &= \mu + \tfrac{u}{2} -\varphi \langle h\cdot J, J\rangle \\ \bar{J} &=J - \left(\tfrac{1}{2} + \varphi |J| \right) h\cdot J. \end{align*}
We compute
\begin{align*} |\bar{J}|^2_{\bar{\gamma}} &= (\gamma+h)_{ij} \left( J^i -(\tfrac{1}{2} + \varphi|J|)(h\cdot J)^i\right) \left(J^j -(\tfrac{1}{2} + \varphi|J|)(h\cdot J)^j\right)\\ &=|J|^2-2 \varphi |J| \langle h\cdot J, J\rangle + \left(- \tfrac{3}{4} - \varphi |J| + \varphi^2|J|^2 \right) |h\cdot J|^2 \\ &\quad + \left(\tfrac{1}{4} + \varphi|J| +\varphi^2 |J|^2 \right) \langle h\cdot (h\cdot J) , h\cdot J \rangle \\ & \le |J|^2 -2 \varphi |J| \langle h\cdot J, J\rangle\\ &\quad + \Big[\left(- \tfrac{3}{4} - \varphi |J| + \varphi^2|J|^2 \right)+ \left(\tfrac{1}{4} + |\varphi J| +\varphi^2 |J|^2 \right)|h|\Big] |h\cdot J|^2\\ & \le |J|^2 -2 \varphi |J| \langle h\cdot J, J\rangle+ \left(- \tfrac{1}{2} +2 |\varphi J| + 2|\varphi J|^2 \right) |h\cdot J|^2, \end{align*}
where we used $|h|<1$ in the last line.
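As a quick sanity check, separate from the argument: the hypothesis $|\varphi J|_\gamma<(\sqrt{2}-1)/2$ in Lemma~\ref{lemma:sigma_preserve} is exactly what makes the coefficient $-\tfrac{1}{2}+2|\varphi J|+2|\varphi J|^2$ of $|h\cdot J|^2$ negative, since $(\sqrt{2}-1)/2$ is the positive root of that quadratic. A minimal numerical verification (illustrative only, not part of the paper):

```python
import math

def coeff(t):
    # Coefficient of |h.J|^2 in the estimate above, with t = |phi J|
    return -0.5 + 2.0 * t + 2.0 * t * t

# Positive root of 2t^2 + 2t - 1/2 = 0 via the quadratic formula
root = (-2.0 + math.sqrt(4.0 + 4.0)) / 4.0
threshold = (math.sqrt(2.0) - 1.0) / 2.0
assert abs(root - threshold) < 1e-12

# Strictly below the threshold the coefficient is negative,
# so the |h.J|^2 term can be dropped in the upper bound
for k in range(1, 100):
    assert coeff(threshold * k / 100.0) < 0.0
```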
By our assumption that $|\varphi J| <(\sqrt{2}-1)/2$, it follows that $- \tfrac{1}{2} +2 |\varphi J| + 2|\varphi J|^2<0$, and hence \[ |\bar{J}|^2_{\bar{\gamma}} \le |J|^2 -2 \varphi |J| \langle h\cdot J, J\rangle \le ( |J| -\varphi \langle h\cdot J, J\rangle)^2.\] Combining the square root of the above inequality with our formula for $\bar{\mu}$, we get the desired inequality, \begin{align*} \Sc(\bar\gamma,\bar\tau) &= 2(\bar\mu-|\bar{J}|_{\bar\gamma})\\ & \ge 2\left(\mu+\frac{u}{2} -\varphi \langle h\cdot J, J\rangle\right) - 2\left(|J| -\varphi \langle h\cdot J, J\rangle\right)\\ & = \Sc(\gamma, \tau)+u. \end{align*} \end{proof} While the general $(\varphi, Z)$-modified constraint operator $\Phi^{(\varphi, Z)}_{(g, \pi)}$ does not share the same good property as in the proposition above, we can compute the error by which it fails. \begin{lemma}\label{lemma:sigma_error} Let $(\gamma, \tau)$ and $(\bar\gamma, \bar\tau)$ be initial data on a manifold $U$ such that $|\bar\gamma-\gamma|_\gamma<1$. Let $\varphi$ and $Z$ be a function and a vector field on $U$ such that $|\varphi|<1$ and $|Z|_\gamma<1$ on $U$. Assume that \begin{equation}\label{equation:prescribeZ} \Phi^{(\varphi, Z)}_{(\gamma, \tau)} (\bar\gamma, \bar\tau ) = \Phi^{(\varphi,Z)}_{(\gamma, \tau)} (\gamma, \tau ) + (u, 0) \end{equation} for some function $u$. Then \[ \Sc(\bar{\gamma}, \bar{\tau}) \ge \Sc(\gamma,\tau)+ u - 6|\bar\gamma-\gamma|_\gamma^{\frac{1}{2}}|\varphi|^{\frac{1}{2}} |Z|_\gamma (|J|_\gamma^{\frac{1}{2}} + 1), \] where $J$ is the current density of $(\gamma, \tau)$. \end{lemma} \begin{proof} Again, let $(\bar{\mu}, \bar{J})$ and $(\mu, J)$ denote the energy and current densities of $(\bar{\gamma}, \bar{\tau})$ and $(\gamma, \tau)$, respectively, and let $(h, w):=(\bar{\gamma}-\gamma, \bar{\tau}-\tau)$. In what follows, we will compute all lengths, inner products, and ``dot contractions'' using the metric $\gamma$, unless otherwise specified. 
Again, if we rewrite our hypothesis~\eqref{equation:prescribeZ} using the definition of the $(\varphi, Z)$-modified constraint operator~\eqref{equation:modified2} and write as much as we can in terms of $h$, we can see that
\[ \Phi(\bar{\gamma}, \bar{\tau}) = \Phi(\gamma, \tau) + (0,-\tfrac{1}{2} h \cdot J) + (-2\varphi \langle h\cdot Z, Z\rangle , -\varphi |Z| h\cdot Z) + (u, 0), \]
or in other words,
\begin{align*} \bar{\mu} &= \mu + \frac{u}{2} - \varphi \langle h\cdot Z, Z\rangle\\ \bar{J} &=J- \tfrac{1}{2} h\cdot J - \varphi |Z| h\cdot Z. \end{align*}
After a tedious but straightforward computation, we obtain
\begin{align*} |\bar{J}|_{\bar{\gamma}}^2 &= |J|^2 - \tfrac{3}{4} |h\cdot J|^2+\tfrac{1}{4} \langle h\cdot (h\cdot J), h\cdot J\rangle\\ &\quad - 2\varphi |Z| \langle h\cdot J, Z\rangle - \varphi |Z| \langle h\cdot J, h\cdot Z\rangle + \varphi |Z| \langle h\cdot (h\cdot J) ,h\cdot Z\rangle\\ &\quad + \varphi^2 |Z|^2 |h\cdot Z|^2 + \varphi^2 |Z|^2 \langle h\cdot (h\cdot Z), h\cdot Z\rangle. \end{align*}
Since $|h|<1$, we can absorb the third term on the right side into the second term on the right side, just as we did in the proof of Lemma~\ref{lemma:sigma_preserve}. For the rest of the terms, we use the Cauchy--Schwarz inequality together with the fact that $|h|$, $|\varphi|$, and $|Z|$ are all less than $1$ to obtain
\[ |\bar{J}|_{\bar{\gamma}}^2 \le |J|^2 + |h| |\varphi| |Z|^2 (4|J| + 2), \]
so by the triangle inequality applied twice,
\[ |\bar{J}|_{\bar{\gamma}} \le |J| + 2|h|^{\frac{1}{2}} |\varphi|^{\frac{1}{2}} |Z| (|J|^{\frac{1}{2}} + 1). \]
Combining this with our formula for $\bar\mu$, we see that
\begin{align*} \Sc(\bar{\gamma}, \bar{\tau}) &= 2(\bar\mu-|\bar{J}|_{\bar\gamma})\\ & \ge [2\mu + u - 2|h| |\varphi| |Z|^2] - \left[2|J| +4|h|^{\frac{1}{2}} |\varphi|^{\frac{1}{2}} |Z| (|J|^{\frac{1}{2}} + 1)\right]\\ &\ge \Sc(\gamma,\tau)+ u - 6|h|^{\frac{1}{2}}|\varphi|^{\frac{1}{2}} |Z| (|J|^{\frac{1}{2}} + 1).
\end{align*} \end{proof}
\section{The null-vector equation for kernel elements}\label{section:deformation}
The following theorem is the main result underlying the proofs of Theorems~\ref{theorem:improvable} and~\ref{theorem:improvability-error}. This is where we take advantage of the freedom to choose $\varphi$ in the large family of operators $\Phi^{(\varphi, Z)}_{(g,\pi)}$. In the following theorem and elsewhere, the notation $C^3(U)$ denotes functions in $C^3_{\mathrm{loc}}(U)$ that have bounded $C^3$ norm.
\begin{theorem}\label{theorem:generic} Let $(U, g, \pi)$ be an initial data set such that $(g,\pi)\in C^5_{\mathrm{loc}}(U)\times C^4_{\mathrm{loc}}(U)$. Given any vector field $Z\in C^3_{\mathrm{loc}}(U)$, there is a dense subset $\mathcal{D}_Z$ of $C^{3}(U)$ functions such that for $\varphi\in \mathcal{D}_Z$ and for any lapse-shift pair $(f, X)$ on $U$ solving
\[ \left(D\Phi_{(g, \pi)}^{(\varphi, Z)}\right)^*(f, X)=0, \]
it follows that $(f, X)$ further satisfies both equations
\begin{align}\label{equation:Zpair} \begin{split} D\overline{\Phi}_{(g, \pi)}^*(f, X)&=0 \\ 2fZ+|Z|_g X&=0. \end{split} \end{align}
\end{theorem}
We refer to the second equation as the \emph{$Z$-null-vector equation} for the following reason: If $(U, g, \pi)$ sits inside a spacetime with future unit normal $\mathbf{n}$ and we define a vector field $\mathbf{Y}:=2f\mathbf{n} +X$, then wherever $Z\neq0$, the equation $2fZ + |Z|_g X=0$ is equivalent to saying that along~$U$, we have
\begin{align*}\label{equation:null-vector} \mathbf{Y}= \frac{2f}{|Z|_g} (|Z|_g \mathbf{n} - Z), \end{align*}
which is null. (And of course, the $Z$-null-vector equation is vacuous where $Z=0$.) The rest of this section is concerned with the proof of Theorem~\ref{theorem:generic}. To see why it implies Theorems~\ref{theorem:improvable} and~\ref{theorem:improvability-error}, skip ahead to the next section.
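To see concretely why $\mathbf{Y}$ is null, note that $\mathbf{n}$ is unit timelike and orthogonal to the spatial vector $Z$, so $|Z|_g\mathbf{n}-Z$ has Lorentzian length $-|Z|_g^2+|Z|_g^2=0$, and the scalar factor $2f/|Z|_g$ does not affect nullity. A minimal numerical illustration (not part of the paper; the metric is flat Minkowski and $Z$ is chosen at random):

```python
import math
import random

def mink(u, v):
    # Minkowski inner product on R^{1,n} with signature (-,+,...,+)
    return -u[0] * v[0] + sum(a * b for a, b in zip(u[1:], v[1:]))

random.seed(0)
n_dim = 3
n_vec = [1.0] + [0.0] * n_dim          # future unit normal, <n,n> = -1
for _ in range(100):
    # spatial vector Z (vanishing time component)
    Z = [0.0] + [random.uniform(-1.0, 1.0) for _ in range(n_dim)]
    normZ = math.sqrt(mink(Z, Z))      # |Z|_g: Z is spatial, so this is its Euclidean norm
    # Y is proportional to |Z| n - Z; the factor 2f/|Z| is irrelevant to nullity
    Y = [normZ * a - b for a, b in zip(n_vec, Z)]
    assert abs(mink(Y, Y)) < 1e-12     # Y is null
```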
In this section, we use the abbreviated notation
\[ L_\varphi:=\left(D\Phi_{(g, \pi)}^{(\varphi, Z)}\right)^*.\]
Although the operator $L_\varphi$ depends on the initial data $(g, \pi)$ and the vector field $Z$ as well as $\varphi$, in this section we will fix $g$, $\pi$, and $Z$ and focus on the dependence on $\varphi$.
Let us take a moment to motivate our proof of Theorem~\ref{theorem:generic}. Assume that $(f,X)$ is a solution to $L_\varphi(f, X)=0$. Then $(f, X)$ solves a Hessian-type equation (Lemma~\ref{lemma:Hessian}), so that the second derivatives, and thus all higher derivatives, of $(f, X)$ can be expressed in terms of $(f, df, X, \nabla X)$. By taking the divergence of equation~\eqref{equation:Hamiltonian} in $L_\varphi(f, X)=0$ and further differentiating, we can construct various linear relations among $(f, df, X, \nabla X)$, so that the vector $(f, df, X, \nabla X)$ evaluated at $p$ lies in the kernel of a square matrix~$Q$. In general, it is hard to understand the determinant of $Q$, but it is possible to track its dependence on $\varphi$. One may wish to show that the determinant is a nontrivial polynomial of $\varphi$ and its first few derivatives. However, this is generally not possible, because it would imply that $L_\varphi$ is injective for generic choice of $\varphi$, which is not true for certain $Z$. So instead, we will prove that for generic choice of $\varphi$, either the determinant of $Q$ is a nontrivial polynomial of $\varphi$ or the columns of $Q$ have a certain linear dependence, which will imply that any kernel element of $L_\varphi$ must satisfy the $Z$-null-vector equation $2fZ+|Z|_g X=0$. In either case, \eqref{equation:Zpair} of Theorem~\ref{theorem:generic} holds.
In preparation for the proof of Theorem~\ref{theorem:generic}, let us discuss the quantities that will appear in the proof.
Since \eqref{equation:Zpair} holds trivially wherever $Z=0$, let us consider the region where $Z\ne0$, and define
\begin{align}\label{equation:W} W_i := f\hat{Z}_i +\tfrac{1}{2}X_i, \end{align}
where $\hat{Z}=Z/|Z|_g$. Our goal is to show that if $L_\varphi(f, X)=0$, then $W$ must vanish for generic~$\varphi$, and then the other equation $D\overline{\Phi}_{(g, \pi)}^*(f, X)=0$ follows from \eqref{equation:adjoint-Z-mod}. As discussed above, the proof relies on constructing linear relations among $(f, df, X, \nabla X)$. Observe that by equation~\eqref{equation:momentum}, $\nabla X$ is determined by $f$ together with its antisymmetric part, which we denote by
\begin{align} \label{equation:T} T_{ij}:=\tfrac{1}{2}(X_{i;j}-X_{j;i}). \end{align}
Also, by replacing $X$ with $W$, any linear system of equations in the quantities $(f, df, X, \nabla X)$ is equivalent to a linear system of equations in the quantities
\[ P:=(W, df, T, f),\]
where we have chosen this ordering for convenience of indexing. (In particular, we order the components of $T$ in some way.) Note that at any point, $P$ is an $N$-dimensional vector, where $N = n + n + \frac{n(n-1)}{2} + 1$, and we will think of $P$ as a column vector. Our goal is to construct $N$ linear relations on $P$ so that we obtain an $N\times N$ matrix $Q$ such that $QP=0$ and such that $Q$ is ``as nonsingular as possible.'' Fixing a point $p$, if we can arrange that $\det Q$ is a nontrivial polynomial in $\varphi$ and its derivatives, then $L_\varphi$ is generically injective, but as mentioned above, this will not always be the case. In the alternative, we just need to show that for generic $\varphi$, any $P$ in the kernel of $Q$ has vanishing $W$ components. In order to see what must be done, consider the following elementary lemma.
\begin{lemma}\label{lemma:linear-algebra} Let $Q$ be an $N\times N$ matrix.
Express $Q$ in the following block form:
\[ Q = \left[\begin{array}{c|c} \widehat{Q}& C\\ \hline R& Q_{NN} \end{array}\right] \]
where $\widehat{Q}$ is the $(N,N)$-minor submatrix of $Q$ of size $(N-1)\times (N-1)$, $Q_{NN}$ is the $(N, N)$-entry of $Q$, $C$ is an $(N-1)\times 1$ matrix, and $R$ is a $1\times (N-1)$ matrix. Suppose $\widehat{Q}$ is nonsingular. Then
\begin{enumerate} \item $\det Q = (\det\widehat{Q})(Q_{NN} - R \widehat{Q}^{-1}C)$. \label{item:determinant} \item Write $\widehat{Q}^{-1}C = \begin{bmatrix} H_1 \\ \vdots \\ H_{N-1}\end{bmatrix}$. For each $i$ from $1$ to $N-1$, if $H_i=0$, then any $P=\begin{bmatrix} P_1 \\ \vdots \\ P_N\end{bmatrix}$ solving $QP=0$ must have $P_i=0$. \label{item:vanish} \end{enumerate}
\end{lemma}
\begin{proof} The determinant equality is standard. We prove the second item. The first $(N-1)$ equations of $QP=0$ say that
\[ \widehat{Q} \begin{bmatrix} P_1 \\ \vdots \\ P_{N-1} \end{bmatrix} + P_N C=0. \]
Since $\widehat{Q}$ is nonsingular, this implies that
\[ \begin{bmatrix} P_1 \\ \vdots \\ P_{N-1} \end{bmatrix} = -P_N\widehat{Q}^{-1} C = -P_N \begin{bmatrix} H_1 \\ \vdots \\ H_{N-1} \end{bmatrix}. \]
Thus, $H_i=0$ implies $P_i=0$. \end{proof}
From this lemma, we can get a good sense of what properties we want our matrix $Q$ (to be constructed) to have. We would like to show that $\det \widehat{Q}$ is a nontrivial polynomial of $\varphi$ and its derivatives, and that either
\begin{itemize} \item $H_1=\cdots=H_n=0$ for all $\varphi$---since the first $n$ components of $P$ are the ones that correspond to $W$; or else \item $(Q_{NN} - R \widehat{Q}^{-1}C)$ is a nontrivial rational function of $\varphi$---since that would imply that $\det Q$ is generically nonzero.
\end{itemize}
This is the basic description of how the proof works at a single point $p$, and then we use some additional arguments (described in the proofs of Proposition~\ref{proposition:vanishing} and Theorem~\ref{theorem:generic}) to globalize the result. In more detail, the following lemma statement summarizes the specific properties we will need the matrix $Q$ to satisfy, and we will construct $Q$ explicitly within the proof of the lemma.
\begin{lemma}\label{lemma:matrixQ} Let $(U, g, \pi)$ be an initial data set such that $(g,\pi)\in C^5_{\mathrm{loc}}(U)\times C^4_{\mathrm{loc}}(U)$, and let $Z$ be a vector field in $C^3_{\mathrm{loc}}(U)$ such that $Z\ne0$ on $U$. Suppose $U$ is covered by a single geodesic normal coordinate chart at $p\in U$ so that at $p$, $g_{ij} = \delta_{ij}$ and $Z_i= |Z|_g\delta_i^1$. Suppose that $\varphi$ is a locally $C^3$ function, and that $(f,X)$ is a lapse-shift pair such that
\[ L_\varphi(f,X)=0\]
on $U$. In the following, we will use subscripts on functions to denote ordinary differentiation in the coordinate chart, that is, $\varphi_i:= \frac{\partial \varphi}{\partial x^i}$ and similarly for $\varphi_{ijk}$, $f_i$, etc., while we will continue to use semicolons to denote covariant derivatives. Then there exists an $N\times N$ matrix-valued function $Q$ on $U$, which we can express in block form,
\[ Q = \left[\begin{array}{c|c} \widehat{Q}& C\\ \hline R& Q_{NN} \end{array}\right] \]
where $\widehat{Q}$ is the $(N,N)$-minor submatrix of $Q$ of size $(N-1)\times (N-1)$, $Q_{NN}$ is the $(N, N)$-entry of $Q$, $C$ is an $(N-1)\times 1$ matrix, and $R$ is a $1\times (N-1)$ matrix, such that $Q$ has the following properties:
\begin{enumerate} \item $QP=0$, where $P=(W, df, T, f)$ is as described above (thought of as a column vector).
\item The entries of $Q$ are polynomials of $(1, \varphi, \partial\varphi, \partial^2\varphi, \partial^3\varphi)$, whose coefficients depend on up to 5 derivatives of $g$, 4 derivatives of $\pi$, and 3 derivatives of $Z$. \item If we decompose
\begin{align}\label{equation:widehat-Q0} \widehat{Q}= \varphi_1 \widehat{Q}_1+ \widehat{Q}_0, \end{align}
where the entries of $\widehat{Q}_0$ have no dependence on $\varphi_1$, then after evaluating at the point $p$, $\widehat{Q}_1$ has the $[n] + [n] + \left[\frac{n(n-1)}{2}\right]$ block form:
\begin{equation} \label{equation:widehat-Q} \widehat{Q}_1 = \left[\begin{array}{c|c|c} D_1& 0 & 0\\ \hline * & D_2 & * \\ \hline * & 0 & D_3 \end{array}\right] \end{equation}
where the square matrices $D_1, D_2, D_3$ of size $n, n, \frac{n(n-1)}{2}$, respectively, are diagonal and nonsingular, the $0$'s represent zero matrices, and the asterisks represent arbitrary matrices. In particular, $\widehat{Q}_1$ is nonsingular at $p$. \item Third derivatives of $\varphi$ \emph{only} show up in the $R$ block of $Q$, and if we decompose $R=R_1+R_0$, where $R_1$ is linear in $\partial^3 \varphi$ while $R_0$ has no dependence on $\partial^3 \varphi$, then after evaluating at $p$, we have
\begin{equation*} R_1= ( 2\varphi_{111}, \varphi_{211}, \varphi_{311},\ldots, \varphi_{n11}, 0,\ldots, 0 ). \end{equation*}
\end{enumerate} \end{lemma}
\begin{proof} In the following, all lengths, products, covariant derivatives, and raising and lowering of indices are computed with respect to $g$. First we would like to express $(f, df, X, \nabla X)$ in terms of $(W, df, T, f)$:
\begin{align} X_i &= 2 (W_i - f \hat{Z}_i)\label{equation:X}\\ X_{i;j} &= T_{ij}+\left(\tfrac{2}{n-1} (\tr_g \pi) g_{ij} - 2\pi_{ij} \right)f \label{equation:DX}. \end{align}
The first identity is just the definition of $W$ in \eqref{equation:W}, while the second identity follows from the definition of $T$ in \eqref{equation:T} and~\eqref{equation:momentum}.
Next, we also want to be able to express the derivatives of $(W, df, T, f)$ in terms of $(W, df, T, f)$, but keep in mind that our main concern is keeping track of dependence on $\varphi$. Differentiating the definition of $W$ and substituting in~\eqref{equation:DX}, we obtain \begin{align} W_{i;j} &= \hat{Z}_i f_j + \tfrac{1}{2} T_{ij} +\left( \hat{Z}_{i;j} + \tfrac{1}{n-1} (\tr_g \pi) g_{ij} - \pi_{ij}\right)f \label{equation:DW}\\ &= \hat{Z}_i f_j + \tfrac{1}{2} T_{ij} + A_{ij}f,\label{equation:DWalt} \end{align} where $A_{ij}$ is a quantity that has no dependence on $\varphi$ or its derivatives, nor any dependence on $(W, df, T, f)$. Meanwhile, using the definition of $W$ and~\eqref{equation:DX}, equation \eqref{equation:Hessian} can be rewritten as \[ f_{;ij} = \varphi |Z| \left[ -2(Z\odot W)_{ij} + \tfrac{2}{n-1} \langle Z, W\rangle g_{ij} \right] +A^{(1)}_{ij}(W, T, f), \] where $A_{ij}^{(1)}(W, T, f)$ is a linear function of $(W, T, f)$ that has no dependence on $\varphi$ or its derivatives. Differentiating the definition of $T$ and using~\eqref{equation:second-derivative} and the definition of $W$, we have \[ T_{ij;k} = {A}^{(2)}_{ijk}(W, df, f), \] where ${A}^{(2)}_{ijk}(W, df, f)$ is a linear function of $(W, df, f)$ that has no dependence on $\varphi$ or its derivatives. \begin{remark} It will be convenient to keep in mind the fact that when we express the covariant derivatives of $W$, $df$, $T$, and $f$ in terms of $(W, df, T, f)$, the only $\varphi$ term that shows up is in the coefficient of $W$ in the expression for $f_{;ij}$. \end{remark} We can now begin to construct the desired matrix $Q$. Since $L_\varphi(f, X)=0$, \eqref{equation:Hamiltonian} holds. 
Using~\eqref{equation:X} and~\eqref{equation:DX}, equation \eqref{equation:Hamiltonian} can be rewritten in the form \begin{align*} (\Delta f) g_{ij} -f_{;ij} &= 2\varphi |Z| (Z\odot W)_{ij} + A^{(3)}_{ij}(W, T, f)\nonumber\\ &=\varphi |Z| (Z_i\delta_j^\ell+ Z_j \delta_i^\ell)W_\ell + A^{(3)}_{ij}(W, T, f), \end{align*} where $A_{ij}^{(3)}(W, T, f)$ is a linear function of $(W, T, f)$ that has no dependence on $\varphi$ or its derivatives. By taking the divergence of the equation above (that is, taking $\nabla^i$ of both sides), we obtain \begin{align*} \nabla^i \left[(\Delta f) g_{ij} -f_{;ij} \right] &= \varphi_i |Z|(Z^i \delta_j^\ell+ Z_j g^{i\ell})W_\ell +\varphi |Z| (Z^i \delta_j^\ell+ Z_j g^{i\ell} )W_{\ell; i} \\ &\quad +\varphi \nabla_i \left[ |Z| (Z^i \delta_j^\ell +Z_j g^{i\ell} )\right]W_\ell+ g^{i\ell} \nabla_\ell\left[ A^{(3)}_{ij}(W, T, f)\right]\\ -R_{jk}g^{ik} f_i &= \varphi_i |Z|(Z^i \delta_j^\ell+ Z_j g^{i\ell})W_\ell\\ &\quad + \varphi |Z| (Z^i \delta_j^\ell+ Z_j g^{i\ell} )(\hat{Z}_\ell f_i + \tfrac{1}{2}T_{\ell i} +A_{\ell i}f) \\ &\quad + \varphi \nabla_i \left[ |Z| (Z^i \delta_j^\ell +Z_j g^{i\ell} )\right]W_\ell + g^{i\ell} \nabla_\ell\left[ A^{(3)}_{ij}(W, T, f)\right], \end{align*} where we used the Ricci identity on the left side to obtain the Ricci term and we used~\eqref{equation:DWalt} to eliminate the $W_{\ell; i}$ term on the right side. 
We can use the Remark to understand the dependence of the $\nabla_\ell A^{(3)}_{ij}$ term on $\varphi$ and combine it with the Ricci term, the $A_{\ell i}$ term, and the second-to-last term in the equation above in order to obtain (after dividing everything by $|Z|^2$) the equation
\begin{align}\label{equation:divergence2} \begin{split} 0 &= \varphi_i (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})W_\ell + \varphi (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})(\hat{Z}_\ell f_i + \tfrac{1}{2}T_{\ell i})\\ &\quad + A_j^{(4)}(df, T) + A_j^{(5)}[\varphi](W, f), \end{split} \end{align}
where $A_j^{(4)}(df, T)$ is a linear function of $(df, T)$ that has no dependence on $\varphi$ or its derivatives and $A_j^{(5)}[\varphi](W, f)$ is a linear function of $(W, f)$ that depends on $\varphi$ but not any of its derivatives. (Note that our proof requires us to track certain dependencies on $\varphi$ but not all of them.)
\vspace{6pt} \noindent \textbf{Construction of rows $1$ to $n$.} For each $j$ from $1$ to $n$, we define the $j$-th row of $Q$ to be the coefficients obtained from the linear relation~\eqref{equation:divergence2}. Recall the definition of $\widehat{Q}_1$ in \eqref{equation:widehat-Q0}. Evaluating at $p$, we can compute the first $n$ rows of $\widehat{Q}_1$ by focusing on the $\varphi_i$ term in~\eqref{equation:divergence2} with $i=1$. Since $\varphi_1$ can only appear in coefficients of $W$, we see that for $j=1$ to $n$, we have $ \left(\widehat{Q}_1\right)_j^\ell =0$ for $\ell>n$ (corresponding to the coefficients of $df$, $T$, and $f$). And for $\ell=1$ to $n$ (corresponding to the $W_\ell$ coefficients),
\begin{align*} \left(\widehat{Q}_1\right)_j^\ell = (\hat{Z}^1 \delta_j^{\ell}+ \hat{Z}_j g^{\ell 1}) = \delta_j^{\ell}+ \delta_j^1 \delta^\ell_1, \end{align*}
showing that the matrix $D_1$ is diagonal with $\det D_1=2$. This verifies the claims made about the top line of blocks in~\eqref{equation:widehat-Q}.
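The determinant computations for the diagonal blocks ($\det D_1=2$ here, and later $\det D_2=2^{n+1}$) can be double-checked numerically. The following snippet, illustrative only and not part of the proof, builds the matrices $\delta_j^\ell+\delta_j^1\delta^\ell_1$ and $2(\delta_k^m+\delta_k^1\delta^m_1)$, with the $0$-based index $0$ playing the role of the distinguished direction $1$:

```python
def det_diag(mat):
    # Determinant of a matrix, after verifying it is diagonal
    n = len(mat)
    for i in range(n):
        for j in range(n):
            if i != j:
                assert mat[i][j] == 0
    prod = 1
    for i in range(n):
        prod *= mat[i][i]
    return prod

for n in range(2, 8):
    # D1[j][l] = delta_j^l + delta_j^1 delta_1^l
    D1 = [[(1 if j == l else 0) + (1 if j == 0 and l == 0 else 0)
           for l in range(n)] for j in range(n)]
    # D2[k][m] = 2 (delta_k^m + delta_k^1 delta_1^m)
    D2 = [[2 * ((1 if k == m else 0) + (1 if k == 0 and m == 0 else 0))
           for m in range(n)] for k in range(n)]
    assert det_diag(D1) == 2              # diag(2, 1, ..., 1)
    assert det_diag(D2) == 2 ** (n + 1)   # diag(4, 2, ..., 2)
```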
\vspace{6pt} \noindent \textbf{Construction of rows $n+1$ to $2n$.} We take the covariant derivative $\nabla_k$ of both sides of~\eqref{equation:divergence2}. We claim that doing this will give us \begin{align*} 0 &= \varphi_{;ik} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})W_\ell + \varphi_{i} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})W_{\ell;k}\\ &\quad + \varphi_{k} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})(\hat{Z}_\ell f_i + \tfrac{1}{2}T_{\ell i})\\ &\quad+ A_{jk}^{(6)}[\varphi](df, T) + A_{jk}^{(7)}[\varphi, \partial\varphi](W, f), \end{align*} where $A_{jk}^{(6)}[\varphi](df, T)$ is a linear function of $(df, T)$ that depends on $\varphi$ but not any of its derivatives and $A_{jk}^{(7)}[\varphi, \partial\varphi](W, f)$ is a linear function of $(W, f)$ that depends on $\varphi$ and its first derivatives, but no higher derivatives. This can be seen by applying the product rule to the first two terms of~\eqref{equation:divergence2} and noting that each resulting term not appearing explicitly above can be taken to be part of the $A^{(6)}$ or $A^{(7)}$ terms. Meanwhile, our Remark implies that $\nabla_k$ of the $A^{(4)}$ and $A^{(5)}$ terms can be taken to be part of the $A^{(6)}$ and $A^{(7)}$ terms. Next we use~\eqref{equation:DWalt} to eliminate the $W_{\ell;k}$ and collect like terms to obtain \begin{align}\label{equation:jk} \tag*{(\theequation)$_{\jksub}$} \begin{split} 0 &= \varphi_{;ik} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})W_\ell +2 \hat{Z}^i \hat{Z}_j (\varphi_{i} \delta_k^m + \varphi_{k} \delta_i^m ) f_m \\ &\quad +\tfrac{1}{2} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})( \varphi_{i} \delta_k^m + \varphi_k \delta_i^m) T_{\ell m} \\ &\quad+ A_{jk}^{(6)}[\varphi](df, T) + A_{jk}^{(8)}[\varphi, \partial\varphi](W, f), \end{split} \end{align} where $A_{jk}^{(8)}[\varphi, \partial\varphi](W, f)$ is a linear function of $(W, f)$ that depends on $\varphi$ and its first derivatives, but no higher derivatives. 
We will use the above equations to define rows $n+1$ to $N-1$ of $Q$. Fixing $j=1$ in \ref{equation:jk}, for each $k=1$ to $n$, we define the $(n+k)$-th row of $Q$ to be the coefficients obtained from the linear relation {\renewcommand\jksub{1k}\ref{equation:jk}}. Evaluating at $p$, we can compute $ \left(\widehat{Q}_1\right)_{n+k}^{n+m}$ for $k, m=1$ to $n$ (corresponding to the $f_m$ coefficients) by focusing on the dependence on $\varphi_1$. \begin{align*} \left(\widehat{Q}_1\right)_{n+k}^{n+m} &= 2 \hat{Z}^1 \hat{Z}_1 \delta_k^m + 2\hat{Z}^i \hat{Z}_1 \delta_k^1\delta_i^m \\ &= 2( \delta_k^m + \delta_k^1\delta^m_1) \end{align*} showing that the matrix $D_2$ is diagonal with $\det D_2=2^{n+1}$, and thus verifying the claim made about the middle line of blocks in~\eqref{equation:widehat-Q}. (Note that $\varphi_1$ will not appear in the $\varphi_{;ik}$ term since we are using normal coordinates at $p$.) \vspace{6pt} \noindent \textbf{Construction of rows $2n+1$ to $N-1$.} To specify the ordering of the components of $T_{jk}$ within the vector $P$, we let $\iota$ be a bijection from $\mathcal{A}:=\{ (j, k)\,|\, n\ge j > k \ge 1\}$ to $\{2n+1, 2n+2,\ldots, N-1\}$ so that $P_{\iota(j, k)}=T_{jk}$. We define rows $2n+1$ to $N-1$ of $Q$ by defining the $\iota(j,k)$ row of $Q$ using the coefficients of the linear relation \ref{equation:jk}, for all $(j, k)\in\mathcal{A}$. Evaluating at $p$, we will compute $ \left(\widehat{Q}_1\right)^{\iota(\ell, m)}_{\iota(j, k)}$ for $(j, k), (\ell, m)\in\mathcal{A}$, corresponding to the $T_{\ell m}$ coefficients in~\ref{equation:jk}. Since each $(j,k) \in\mathcal{A}$ has $j>1$, it follows that $\hat{Z}_j=0$, so we can ignore all of the $\hat{Z}_j$ terms appearing in~\ref{equation:jk}. 
By focusing on the dependence on $\varphi_1$ and taking into account antisymmetry of $T$, we obtain \begin{align*} \left(\widehat{Q}_1\right)^{\iota(\ell, m)}_{\iota(j, k)}&=\tfrac{1}{2} \left( \hat{Z}^1 \delta_j^\ell \delta_k^m + \hat{Z}^i \delta_j^\ell \delta_k^1\delta_i^m\right) -\tfrac{1}{2} \left( \hat{Z}^1 \delta_j^m \delta_k^\ell +\hat{Z}^i \delta_j^m \delta_k^1\delta_i^\ell\right)\\ &= \tfrac{1}{2} ( \delta_j^\ell \delta_k^m + \delta_j^\ell \delta_k^1 \delta^m_1 - \delta_j^m\delta_k^\ell - \delta_j^m \delta_k^1\delta^\ell_1 )\\ &= \tfrac{1}{2} ( \delta_j^\ell \delta_k^m + \delta_j^\ell \delta_k^1 \delta^m_1 ), \end{align*} where the last two terms of the second line vanish because $(j, k), (\ell, m)\in\mathcal{A}$. This tells us that the matrix $D_3$ is diagonal with nonzero diagonal entries. The fact that $j>1$ implies $\hat{Z}_j=0$ also explains why $\left(\widehat{Q}_1\right)^{n+m}_{\iota(j, k)}=0$ for all $(j,k)\in\mathcal{A}$ and $m=1$ to $n$ (corresponding to the $f_m$ coefficients). We have now established all of our claims about $\widehat{Q}_1$. \vspace{6pt} \noindent\textbf{Construction of row $N$.} Using the same reasoning as above, it is easy to see that if we take the covariant derivative $\nabla_q$ of \ref{equation:jk}, we obtain \begin{align*} \begin{split} 0 &= \varphi_{;ikq} (\hat{Z}^i \delta_j^{\ell}+ \hat{Z}_j g^{i\ell})W_\ell + A_{jkq}^{(9)}[\varphi, \partial\varphi, \partial^2\varphi](W, df, T, f), \end{split} \end{align*} where $A_{jkq}^{(9)}[\varphi, \partial\varphi, \partial^2\varphi](W, df, T, f)$ is a linear function of $(W, df, T, f)$ depending on at most two derivatives of $\varphi$. We define the $N$-th row of $Q$ using the coefficients of the equation above when $j=k=q=1$. 
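For bookkeeping, the three diagonal blocks computed above can be recorded explicitly (only the nonvanishing of their determinants is used below): $D_1 = \operatorname{diag}(2,1,\dots,1)$, $D_2 = \operatorname{diag}(4,2,\dots,2)$, and $D_3$ has diagonal entries $\tfrac{1}{2}(1+\delta_k^1)$ indexed by $(j,k)\in\mathcal{A}$, so
\begin{align*}
\det D_1 = 2, \qquad \det D_2 = 2^{n+1}, \qquad \det D_3 = 2^{-\binom{n-1}{2}} \ne 0.
\end{align*}
In particular, each block is invertible, consistent with the nonvanishing of $\det \widehat{Q}_1$ in Item (3).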
Recalling the definition of $R_0$ and $R_1$ in Item (4), it is clear that only the first $n$ components of $R_1$ can be nonzero, and more precisely, evaluating $R_1$ at $p$, for $\ell=1$ to $n$, we have \begin{align*} (R_1)^\ell &= \varphi_{i11}( \hat{Z}^i \delta_1^{\ell} + \hat{Z}_1 \delta^{i\ell})\\ &= \varphi_{111} \delta_1^{\ell} + \varphi_{i 11} \delta^{i\ell}, \end{align*} verifying the claim made about the form of $R_1$. (Note that any discrepancy between $\varphi_{i11}$ and $\varphi_{;i11}$ lies in $R_0$.) Observe that Item (1) follows from our construction of $Q$, Item (2) follows from tracking the number of times we differentiated in our construction of $Q$, and we explicitly checked Items (3) and~(4) above, and therefore the proof is complete. \end{proof} For a scalar-valued function $\varphi$, let $\mathbb{J}^3_q \varphi$ denote the $3$-jet\footnote{Note that this definition implicitly depends on choice of coordinate chart, but we will see that this causes no problems in our proof.} of $\varphi$ at the point~$q$: \[ \mathbb{J}^3_q \varphi:= (\varphi(q), \partial \varphi(q), \partial^2 \varphi(q), \partial^3 \varphi(q)) \in \mathbb{R}^{1+n+\binom{n}{2} + \binom{n}{3} }. \] \begin{proposition}\label{proposition:vanishing} Let $(U, g, \pi)$ be an initial data set such that $(g,\pi)\in C^5_{\mathrm{loc}}(U)\times C^4_{\mathrm{loc}}(U)$, and let $Z$ be a vector field in $C^3_{\mathrm{loc}}(U)$ such that $Z\ne0$ on $U$. Given $p\in U$, there exists a coordinate chart $U_p\subset U$, a point $q\in U_p$, and a zero set $s_q\subset \mathbb{R}^{1+n+\binom{n}{2}+\binom{n}{3}}$ of a nontrivial polynomial such that for any $C^3$ function $\varphi$ on $U$ with $\mathbb{J}_q^3\varphi \notin s_q$, if $(f, X)$ solves $L_\varphi(f, X)=0$ in $U$, then $2fZ+ |Z|_g X=0$ in $U_p$. 
\end{proposition} \begin{proof} Choose $U_p$ to be a normal coordinate chart at $p$ so that $g_{ij} = \delta_{ij}$ and $Z_i = |Z|_g \delta^1_i$ at $p$ as in Lemma~\ref{lemma:matrixQ}, accepting that we may need to shrink $U_p$ later. Let $Q$, $Q_{NN}$, $\widehat{Q}$, $C$, and $R$ denote the corresponding matrices constructed in Lemma~\ref{lemma:matrixQ}. By Lemma~\ref{lemma:matrixQ}, $QP=0$, and each entry of $Q$ is a polynomial of $(1, \varphi, \partial \varphi, \partial^2\varphi, \partial^3 \varphi)$ whose coefficients depend on up to 5 derivatives of $g$, 4 derivatives of $\pi$, and 3 derivatives of $Z$. Using the variables $w, y, z, \xi$ to denote $\varphi, \partial \varphi, \partial^2\varphi, \partial^3 \varphi$, respectively, we can write \[ \det \widehat{Q} = F (\varphi, \partial \varphi, \partial^2 \varphi), \] where $F(w, y, z)$ is an $(N-1)$-degree polynomial in $w$, $y$, and $z$, whose coefficients are functions of $x\in U_p$ (which depend on $g$, $\pi$, and $Z$). We know that $F$ is not the zero polynomial at the point $p$ because the coefficient of $\varphi_1^{N-1}$ in $\det \widehat{Q}$ is precisely $\det \widehat{Q}_1$, which we saw is nonzero in Item (3) of Lemma~\ref{lemma:matrixQ}. By shrinking $U_p$ as needed, continuity guarantees that we may assume that at every $x \in U_p$, $F$ is not the zero polynomial. Therefore we may express \[ \widehat{Q}^{-1} C = \begin{bmatrix} H_1(\varphi, \partial \varphi, \partial^2 \varphi) \\ \vdots \\ H_{N-1}(\varphi, \partial \varphi, \partial^2 \varphi) \end{bmatrix}, \] where $H_1(w, y, z), \dots, H_{N-1} (w, y, z)$ are rational functions of $w$, $y$, and $z$, whose coefficients are functions of $x\in U_p$. In fact, we can write \[ H_i (w, y, z) = \frac{\eta_i(w, y, z)}{F(w, y, z)},\] where each $\eta_i$ is a polynomial. 
Similarly, we can write \begin{align*} Q_{NN} - R \widehat{Q}^{-1}C= \frac{G(\varphi, \partial \varphi, \partial^2 \varphi, \partial^3 \varphi)}{F(\varphi, \partial \varphi, \partial^2 \varphi)}, \end{align*} where $G(w, y, z, \xi)$ is a polynomial in $w$, $y$, $z$, and $\xi$, whose coefficients are functions of $x\in U_p$. We will define $q$ and $s_q$ according to the following cases.\\ \noindent{\bf Case 1:} Suppose that the polynomials $\eta_1, \dots, \eta_n$ are identically zero in some small neighborhood of $p$. In this case, we take $U_p$ to be this neighborhood, we choose $q=p$, and we define \[ s_p:= \left\{ \left.(w, y, z, \xi) \in \mathbb{R}^{1+n+\binom{n}{2} + \binom{n}{3}}\,\right|\, F(w, y, z)= 0 \mbox{ at } p\right\} \] to be the zero set of the nontrivial polynomial $F$ at $p$. As long as $\mathbb{J}_p^3\varphi \notin s_p$, we have $\det \widehat{Q}=F \ne0$ at $p$. Then since $QP=0$ and $H_i = \frac{\eta_i}{F}=0$ at each point of $U_p$, for $i=1$ to $n$, Item~\eqref{item:vanish} of Lemma~\ref{lemma:linear-algebra} implies that $P_1 = \dots= P_n=0$ in $U_p$. This just says that $W=0$ in $U_p$, which is the same as saying that $2fZ+|Z|_g X=0$ in $U_p$. \noindent{\bf Case 2:} The alternative to Case 1 is that there exists a sequence of points $q_k \to p$ so that at least one of the polynomials $\eta_1, \dots, \eta_n$ is not the zero polynomial at $q_k$. By our definitions of $G$ and $\eta$, \begin{equation*} G= F Q_{NN} - (R^1 \eta_1 +\dots+ R^{N-1} \eta_{N-1}), \end{equation*} where we write $R = \begin{bmatrix} R^1 & R^2 & \cdots & R^{N-1}\end{bmatrix}$. We would like to show that $G(w, y, z, \xi)$ is not a zero polynomial at some points near $p$. The subtlety in the following argument arises because all $\eta_i$ may converge to the zero polynomial at $p$, so we have to analyze $G$ at the sequence $q_k$, instead of the limit point $p$.
To do this, we take a subsequence of $q_k$ (still denoted by $q_k$) such that one of these polynomials, say $\eta_{j_0}$ for some fixed $j_0$, satisfies both of the following properties, at all $q_k$: (1) $\eta_{j_0}$ is not the zero polynomial, and (2) the largest coefficient (in absolute value) of $\eta_{j_0}$ at $q_k$ is greater than or equal to the absolute value of every coefficient of every $\eta_i$ for $i=1$ to $n$. We \emph{claim} that $G(w, y, z, \xi)$ is not the zero polynomial at $q_k$ for $k$ large. Given $\epsilon\in (0, \frac{1}{4})$, by Item~(4) of Lemma~\ref{lemma:matrixQ} and continuity, we know that for $k$ large enough, at $q_k$, the coefficient of $\varphi_{j_0 11}$ in $R^{j_0}$ is bounded below by~$\tfrac{1}{2}$, while the coefficient of $\varphi_{j_0 11}$ in $R^{i}$ for other $i$ is bounded in absolute value by~$\epsilon$. Moreover, from the construction of $R$, for all $i>n$, $R^{i}$ has no dependence on $\varphi_{j_0 11}$. Putting these facts together and using the fact that $\eta_{j_0}$ at $q_k$ has the largest possible coefficient among the $\eta_i$'s, we can see that the polynomial $R^1 \eta_1 +\dots+ R^{N-1} \eta_{N-1}$ at $q_k$ must have a nonzero coefficient of one of the monomials involving $\varphi_{j_0 11}$. Finally, since $FQ_{NN}$ has no dependence on $\varphi_{j_0 11}$ at any point by construction, it follows that $G(w, y, z, \xi)$ is a nontrivial polynomial function at $q_k$ for sufficiently large $k$. Choose $q$ to be one of these $q_k$'s for large enough $k$, and define \begin{align*} s_{q} &= \left\{ \left.(w, y, z, \xi) \in \mathbb{R}^{1+n+\binom{n}{2} + \binom{n}{3}}\,\right|\, F(w, y, z) G(w, y, z, \xi)=0 \mbox{ at } q\right\} \end{align*} to be the zero set of the nontrivial polynomial $FG$ at $q$. If $\mathbb{J}_q^3\varphi \notin s_q$, then we see that both $\det \widehat{Q}\ne0$ and $Q_{NN} - R \widehat{Q}^{-1}C\ne0$ at $q$ by tracking back through the definitions.
So by Item \eqref{item:determinant} of Lemma~\ref{lemma:linear-algebra}, $\det Q\ne0$ at $q$. Since we have $QP=0$, it follows that $P=0$ at $q$. But this means that we have a solution $(f, X)$ to $L_\varphi(f, X)=0$ such that $(W, df, T, f)$ vanishes at $q$. As discussed earlier, this implies that $(f, df, X, \nabla X)$ vanishes at $q$. Hence $(f, X)$ vanishes identically on all of $U_p$ since $(f, X)$ satisfies a Hessian-type equation (Lemma~\ref{lemma:Hessian}) and is uniquely determined by its 1-jet $(f, df, X, \nabla X)$ at a point. \end{proof} We now prove the main result in this section, Theorem~\ref{theorem:generic}. \begin{proof}[Proof of Theorem~\ref{theorem:generic}] We just need to describe the set $\mathcal{D}_Z$ and prove that if $\varphi\in \mathcal{D}_Z$ and \hbox{$L_\varphi(f, X)=0$}, then the $Z$-null-vector equation $2fZ+|Z|_g X=0$ holds everywhere in $U$. By equation~\eqref{equation:adjoint-Z-mod}, this will imply that $D\overline{\Phi}^*_{(g, \pi)}(f, X)=0$. Since the $Z$-null-vector equation holds trivially where $Z=0$, let $V\subset U$ be the open subset where $Z\ne0$. For each $p\in V$, we apply Proposition~\ref{proposition:vanishing} to obtain a coordinate chart $V_p$, a point $q\in V_p$, and a zero set $s_q\subset \mathbb{R}^{1+n+\binom{n}{2}+ \binom{n}{3}}$ of a nontrivial polynomial as described in the statement of the Proposition. These $V_p$'s cover $V$, so by second countability of manifolds, there is a sequence of points $p_i$ such that the $V_{p_i}$'s cover $V$. For each~$i$, define $\mathcal{V}_i \subset C^3(U)$ by \[ \mathcal{V}_i := \{ \left. \varphi \in C^3(U)\,\right| \,\mathbb{J}^3_{q_i} \varphi \notin s_{q_i}\}. \] Since $s_{q_i}$ is the zero set of a nontrivial polynomial, it is clear that $\mathcal{V}_i$ is open and dense. Since $C^3(U)$ is a complete metric space, by the Baire Category Theorem, \[ \mathcal{D}_Z := \bigcap_{i=1}^\infty \mathcal{V}_i\] is dense in $C^{3}(U)$. 
By Proposition~\ref{proposition:vanishing}, for $\varphi\in\mathcal{D}_Z$, any $(f, X)$ solving $L_\varphi(f, X) = 0$ must have $2fZ+|Z|_g X=0$ in each $V_{p_i}$ and hence everywhere in $V$, completing the proof. \end{proof} \section{Improvability of the dominant energy scalar}\label{section:improvability} We will use the special case $Z=J$ of Theorem~\ref{theorem:generic} to prove our improvability result, Theorem~\ref{theorem:improvable}, and more general choices of $Z$ to prove our almost improvability result, Theorem~\ref{theorem:improvability-error}. \begin{proof}[Proof of Theorem \ref{theorem:improvable}] We apply Theorem~\ref{theorem:generic} in the case $U=\Int\Omega$ and $Z=J$, where $J$ is the current density of the initial data set $(g, \pi)$. Choose $\varphi$ in the dense set $\mathcal{D}_J\subset C^3(\Int\Omega)$ as in the statement of Theorem~\ref{theorem:generic}, such that $|\varphi J|_g < (\sqrt{2}-1)/2$. Either $\left(D\Phi^{(\varphi, J)}_{(g, \pi)}\right)^*$ is injective on $\Int\Omega$ or it is not. If it is not injective, then there exists a nontrivial lapse-shift pair $(f, X)$ in the kernel, and by Theorem~\ref{theorem:generic}, it follows that the system~\eqref{equation:pair} holds. On the other hand, if $\left(D\Phi^{(\varphi, J)}_{(g, \pi)}\right)^*$ is injective, we claim that the dominant energy scalar is improvable in $\Int\Omega$. 
Assuming injectivity, we may apply Theorem~\ref{theorem:deformationZ} (letting $\Upsilon=0$) to find a neighborhood $\mathcal{U}$ of $(g, \pi)$ in $C^{4,\alpha}({\Omega}) \times C^{3,\alpha}({\Omega})$ and a neighborhood $\mathcal{W}$ of $(\varphi, J)$ in $C^{2,\alpha}(\Omega)$ such that for any $V\subset\subset \Int\Omega$, there exist constants $\epsilon, C>0$ such that for $(\gamma,\tau)\in \mathcal{U}$, $(\varphi', Z')\in\mathcal{W}$, and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}({\Omega})}<\epsilon$, there exists $(h, w)\in C^{2,\alpha}_{c}(\Int\Omega)$ with $\| (h, w) \|_{C^{2,\alpha}(\Omega)} \le C\| u \|_{C^{0,\alpha}(\Omega)}$ such that \begin{align} \label{equation:surjective} \Phi^{(\varphi', Z')}_{(\gamma, \tau)} (\gamma+h, \tau+w ) = \Phi^{(\varphi',Z')}_{(\gamma, \tau)} (\gamma, \tau ) + (u, 0). \end{align} By shrinking $\mathcal{U}$ if necessary, we can guarantee that $(\varphi, J(\gamma, \tau))\in \mathcal{W}$ for all $(\gamma, \tau)\in \mathcal{U}$ and that $|\varphi J(\gamma, \tau)|_\gamma < (\sqrt{2}-1)/2$. In particular, we can solve \eqref{equation:surjective} for the operator $\Phi^{(\varphi, J(\gamma, \tau))}_{(\gamma, \tau)}$. For $\epsilon$ small enough, $|h|_\gamma<1$, and we can invoke Lemma~\ref{lemma:sigma_preserve} to see that \[ \Sc(\gamma+h, \tau+w) \ge \Sc(\gamma, \tau) + u.\] In summary, we have shown that the dominant energy scalar is improvable in $\Int\Omega$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:improvability-error}] Choose $B$ and $\delta>0$ as in the hypotheses of Theorem~\ref{theorem:improvability-error}. First we will argue that we can select a vector field $Z$ supported in $B$ with $|Z|_g<1$, such that for all $\varphi\in\mathcal{D}_Z$, $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$ is injective, where $\mathcal{D}_Z \subset C^3(\Int\Omega)$ is the dense set described in Theorem~\ref{theorem:generic}. Consider the kernel of $D\overline\Phi_{(g,\pi)}^*$ in $\Int\Omega$.
Since this kernel is finite-dimensional (a kernel element is determined by its 1-jet at a point), it is easy to select $Z$ supported in $B$ such that $2fZ+|Z|_g X$ is \emph{not identically zero} for \emph{all} nontrivial $(f, X)\in \ker D\overline\Phi_{(g,\pi)}^*$. To see how, for each such $X$, all we need is for $Z$ to be linearly independent from $X$ at some point of $B$, and this is easy to arrange since $Z$ is free to change however we like from point to point, giving us infinitely many degrees of freedom in constructing $Z$. (The exception is when $X$ vanishes in $B$, but then $f$ does not vanish, and it is still easy to choose $Z$.) Furthermore, $Z$ can be selected with $|Z|_g<1$ since scaling it down by a constant factor does not affect the direction of $Z$. We claim that with this choice of $Z$, $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$ is injective for all $\varphi\in\mathcal{D}_Z$. If it is not, there exists a nontrivial $(f, X)\in \ker \left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$. Then by Theorem~\ref{theorem:generic}, $(f, X)\in \ker D\overline\Phi_{(g,\pi)}^*$ and $2fZ+|Z|_g X$ is identically zero. But this contradicts our construction of $Z$. Select $\varphi\in \mathcal{D}_Z$ small enough so that $|\varphi|<1$. By injectivity of $\left(D\Phi^{(\varphi, Z)}_{(g, \pi)}\right)^*$, we can apply Theorem~\ref{theorem:deformationZ} (with $\Upsilon=0$ and $(\varphi', Z')=(\varphi, Z)$).
That is, there exists a neighborhood $\mathcal{U}$ of $(g, \pi)$ in $C^{4,\alpha}({\Omega})\times C^{3,\alpha}({\Omega})$ such that for any $V\subset\subset\Int\Omega$, there exist constants $\epsilon, C>0$ such that for $(\gamma,\tau)\in \mathcal{U}$ and for $u\in C_c^{0,\alpha}(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}<\epsilon$, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ with $\| (h,w)\|_{C^{2,\alpha}(\Omega)} \le C \| u \|_{C^{0,\alpha}(\Omega)}$ such that \begin{align*} \Phi^{(\varphi, Z)}_{(\gamma,\tau)} (\gamma+h, \tau+w) = \Phi^{(\varphi, Z)}_{(\gamma,\tau)} (\gamma, \tau) + (u, 0). \end{align*} By shrinking $\mathcal{U}$ if necessary and choosing $\epsilon$ small enough, we will have $|Z|_\gamma<1$, $|h|_\gamma<1$, and we can apply Lemma~\ref{lemma:sigma_error} to see that \[ \Sc(\bar{\gamma}, \bar{\tau}) \ge \Sc(\gamma,\tau)+ u - 6|h|_\gamma^{\frac{1}{2}}|\varphi|^{\frac{1}{2}} |Z|_\gamma (|J|_\gamma^{\frac{1}{2}} + 1). \] By further shrinking $\epsilon$, we can make $|h|_\gamma$ small enough so that the error term above is bounded by $\delta$, and since $Z$ is supported in $B$ it will be bounded by $ \delta {\bf 1}_B$, as desired. \end{proof} \section{Consequences of the $J$-null-vector equation} \label{section:null} In this section, we study properties of an initial data set that carries a lapse-shift pair $(f, X)$ satisfying the system: \begin{align*} \tag{\ref{equation:pair}} \begin{split} D\overline{\Phi}_{(g, \pi)}^*(f, X)&=0\\ 2fJ+|J|_g X&=0. 
\end{split} \end{align*} \subsection{Null perfect fluids and Killing developments}\label{section:null_perfect} Recall that we defined a spacetime $(\mathbf{N}, \mathbf{g})$ to be a \emph{null perfect fluid spacetime}\footnote{In the literature, a \emph{perfect fluid spacetime} refers to a spacetime whose Einstein tensor takes the form $G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta} + (\rho+p) v_\alpha v_\beta$, where $\rho$ is the mass density, $p$ is the pressure, and $\mathbf{v}$ is a unit \emph{timelike} vector that represents the velocity of the fluid. We define null perfect fluid analogously to perfect fluid but with the velocity $\mathbf{v}$ either future null or zero. We also absorb the coefficient $\rho+p$ into $\mathbf{v}$ as there is no preferred normalization for a null vector. Note that an alternative form $G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta} - v_\alpha v_\beta$ (with the minus sign) is not ``physical'' as it cannot satisfy the spacetime DEC.} with velocity $\mathbf{v}$ and pressure $p$ if the Einstein tensor takes the form: \begin{align}\label{equation:perfect-fluid-Einstein} G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta} + v_\alpha v_\beta, \end{align} where $p$ is a scalar function, and the velocity $\mathbf{v}$ is either future null or zero at each point of $\mathbf{N}$. We derive general properties of null perfect fluids. \begin{lemma}\label{lemma:null-perfect-fluid} Let $(\mathbf{N}, \mathbf{g})$ be a null perfect fluid spacetime with velocity $\mathbf{v}$ and pressure $p$, and $\mathbf{g}\in C^3_{\mathrm{loc}}(\mathbf{N})$. Then the following properties hold: \begin{enumerate} \item \label{item:spacetime-DEC} $(\mathbf{N}, \mathbf{g})$ satisfies the spacetime DEC if and only if $p\le0$. \item \label{item:perfect-fluid} Let $U$ be a spacelike hypersurface in $\mathbf{N}$ with induced initial data $(g, \pi)$. Denote by $\mathbf{n}$ the future unit normal to $U$ and by $(\mu, J)$ the energy and current densities of $(g, \pi)$.
Then along~$U$, \begin{align}\label{equation:v+p} \begin{split} \mathbf{v} &= \left\{ \begin{array}{ll} \frac{1}{\sqrt{|J|_g} }(|J|_g \mathbf{n} -J) &\mbox{where } J\neq 0\\ 0 &\mbox{where } J=0 \end{array}\right.\\ p&= |J|_g -\mu. \end{split} \end{align} Or equivalently, with respect to an orthonormal basis $e_0, e_1, \dots, e_n$ along $U$, where $e_0 = \mathbf{n}$, the Einstein tensor takes the form: \begin{align} \label{equation:perfect-fluid} \begin{split} G_{00} &=\mu\\ G_{0i}& = J_i\\ G_{ij} &=\left\{ \begin{array}{ll}(|J|_g - \mu) g_{ij}+\frac{J_i J_j}{|J|_g} & \mbox{ where } J\neq 0\\ -\mu g_{ij} & \mbox{ where } J=0.\end{array}\right. \end{split} \end{align} \item If $(\mathbf{N}, \mathbf{g})$ admits a Killing vector field $\mathbf{Y}$ and $\mathbf{v} = \eta \mathbf{Y}$ for some function $\eta$, then the pressure $p$ is constant on $\mathbf{N}$. \label{item:pressure} \end{enumerate} \end{lemma} \begin{proof} Let $e_0, e_1, \dots, e_n$ be an orthonormal frame such that $e_0$ is future unit timelike. Writing the velocity vector $\mathbf{v}=v^0 e_0 +\cdots+v^n e_n$ with respect to this frame, note that its dual covector has $v_0 = -v^0$ while $v_i=v^i$ for $i=1$ to $n$. Proof of \eqref{item:spacetime-DEC}: Since $\mathbf{g}(\mathbf{u}, \mathbf{w}) \le 0$ for any pair of future causal vectors $\mathbf{u}$ and $\mathbf{w}$, it is easy to see that the spacetime DEC holds if $p\le 0$. Conversely, suppose the spacetime DEC holds. At the points where $\mathbf{v}=0$, the fact that $G(e_0, e_0)\ge 0$ implies $p\le 0$. At points where $\mathbf{v}$ is not zero, define the future null $\bar{\mathbf{v}}:= v^0 e_0 -(v^1 e_1+ \cdots +v^n e_n)$. The spacetime DEC tells us that $G(\bar{\mathbf{v}}, \mathbf{v})\ge 0$, which then implies that $p\le 0$. Proof of \eqref{item:perfect-fluid}: Let $e_0 = \mathbf{n}$ be the future unit normal to $U$. 
Combining the definitions of $\mu$ and $J$ (or more precisely,~\eqref{equation:Einstein-0i}) with equation~\eqref{equation:perfect-fluid-Einstein}, we obtain \begin{align*} \mu &= G_{00} =-p+v_0^2 \\ J_i & = G_{0i}=v_0 v_i. \end{align*} Since $\mathbf{v}$ is future null or zero, we can conclude that $|J|_g =v_0^2$, and $ v_0 J_i = |J|_g v_i$. These equations easily imply the desired expressions for $\mathbf{v}$ and $p$. The desired expression for $G_{ij}$ follows by substituting these expressions for $\mathbf{v}$ and $p$ back into \eqref{equation:perfect-fluid-Einstein}. Proof of \eqref{item:pressure}: Express the Einstein tensor as $G_{\alpha\beta} = p \mathbf{g}_{\alpha\beta} + \eta^2 Y_\alpha Y_\beta$. Taking the Lie derivative with respect to the Killing vector $\mathbf{Y}$, \[ 0= \mathcal{L}_{\mathbf{Y}} G_{\alpha\beta} = (\mathcal{L}_{\mathbf{Y}}p) \mathbf{g}_{\alpha\beta} + (\mathcal{L}_{\mathbf{Y}}\eta^2 ) Y_\alpha Y_\beta. \] At every point where $\mathbf{Y}$ is not zero, we can easily find vectors $\mathbf{u}$ and $\mathbf{w}$ such that $\mathbf{g}(\mathbf{u}, \mathbf{w})=0$ while $\mathbf{g}(\mathbf{Y}, \mathbf{u})$ and $\mathbf{g}(\mathbf{Y}, \mathbf{w})$ are nonzero. Plugging these into the above equation shows that $\mathcal{L}_{\mathbf{Y}}\eta^2=0$ everywhere. Next, we use the contracted second Bianchi identity to obtain \begin{align*} 0 = \bm{\nabla}^\beta G_{\alpha\beta}& = \bm{\nabla}_\alpha p + (\bm{\nabla}^{\beta} \eta^2) Y_\alpha Y_\beta+ \eta^2 (\bm{\nabla}^\beta Y_\alpha )Y_\beta + \eta^2 Y_\alpha (\Div_{\mathbf{g}} \mathbf{Y})\\ & = \bm{\nabla}_\alpha p, \end{align*} where we used the facts that $\mathcal{L}_{\mathbf{Y}}\eta^2=0$, $\mathbf{Y}$ is Killing, and $\mathbf{Y}$ is null or zero wherever $\eta$ is not zero. Hence $p$ is constant. \end{proof} \begin{definition}[Beig-Chru\'sciel \cite{Beig-Chrusciel:1996}] Let $(U, g)$ be a Riemannian manifold equipped with a lapse-shift pair $(f, X)$ such that $f$ is nonvanishing on $U$.
The \emph{Killing development of $(U,g, f, X)$} is a triple $(\mathbf{N}, \mathbf{g}, \mathbf{Y})$, where $(\mathbf{N}, \mathbf{g})$ is a Lorentzian spacetime equipped with a vector field $\mathbf{Y}$, defined as follows: \begin{enumerate} \item $\mathbf{N}:=\mathbb{R}\times U$. \item $\mathbf{g} := -4f^2 du^2 + g_{ij} (dx^i +X^i du) (dx^j + X^j du)$, where $u$ is the coordinate on the $\mathbb{R}$ factor, $(x^1, \dots, x^n)$ is a local coordinate chart of $U$, and $f, X, g_{ij}$ are all extended to $\mathbf{N}$ to be independent of the coordinate $u$. \item $\mathbf{Y}:=\frac{\partial}{\partial u}$. \end{enumerate} \end{definition} The triple $(\mathbf{N}, \mathbf{g}, \mathbf{Y})$ is called the Killing development of $(U, g, f, X)$ because $\mathbf{Y}$ is a global Killing field (Lemma~\ref{lemma:Killing-independent-u}), which is related to the data $(f,X)$ via the equation \[ \mathbf{Y} = 2f \mathbf{n} +X,\] which holds along the constant $u$ slices, where $\mathbf{n}$ is a unit normal to those slices. (We declare that this $\mathbf{n}$ defines the time-orientation on $\mathbf{N}$). Lemma~\ref{lemma:Einstein_tangential} implies that $(U, g, \pi)$ sits inside the Killing development of $(U, g, f, X)$ as the $\{u=c\}$ slice for every constant $c$, if and only if the ``$k$'' corresponding to $\pi$ satisfies \begin{equation}\label{eqn:sits-in-KD} k_{ij} = -\frac{1}{4f} (L_X g)_{ij}. \end{equation} The following proposition is a more precise version of the first half of Theorem~\ref{theorem:DEC}. \begin{proposition}\label{proposition:fluid} Let $(U, g, \pi)$ be an initial data set equipped with a lapse-shift pair $(f, X)$ such that $f$ is nonvanishing on $U$, and assume that $(f,X)$ solves the system~\eqref{equation:pair}. Then the following holds: \begin{enumerate} \item \label{item:constancy-proposition} The dominant energy scalar $\sigma(g, \pi)$ is constant in $U$. 
\item \label{item:spacetime-proposition} Let $(\mathbf{N}, \mathbf{g}, \mathbf{Y})$ be the Killing development of $(U,g, f, X)$. Then $(U, g, \pi)$ sits inside $(\mathbf{N}, \mathbf{g})$, which is a null perfect fluid spacetime with velocity $\mathbf{v} = \frac{\sqrt{|J|_g}}{ 2f}\mathbf{Y}$ and pressure $p = -\tfrac{1}{2}\sigma(g, \pi)$. In particular, the Einstein tensor of $\mathbf{g}$ satisfies~\eqref{equation:perfect-fluid} along every constant $u$ slice of $\mathbf{N}$. \item If $(U, g, \pi)$ satisfies the dominant energy condition, then $(\mathbf{N}, \mathbf{g})$ satisfies the spacetime dominant energy condition. \label{item:DEC-proposition} \end{enumerate} \end{proposition} Note that conclusion \eqref{item:constancy-proposition} may fail if we remove the nonvanishing assumption on $f$. As mentioned in Section~\ref{section:introduction}, if $f$ vanishes on all of $U$, then the system \eqref{equation:pair} is equivalent to the pair of conditions $L_X g=0$ and $L_X\pi=0$. In particular, these examples need not have constant $\sigma(g,\pi)$ (as can be seen by considering explicit time-symmetric examples). \begin{proof} We focus on proving Item \eqref{item:spacetime-proposition}, and then Items~\eqref{item:constancy-proposition} and~\eqref{item:DEC-proposition} follow immediately from Lemma~\ref{lemma:null-perfect-fluid}. First observe that equation~\eqref{equation:momentum} in the system~\eqref{equation:pair}, together with~\eqref{equation:k}, tells us that the ``$k$'' corresponding to our given $\pi$ satisfies~\eqref{eqn:sits-in-KD}, and hence $(U, g, \pi)$ sits inside $(\mathbf{N}, \mathbf{g})$. Next, we wish to show that the Einstein tensor of $\mathbf{g}$ takes the form of a null perfect fluid. Specifically, we will show that it satisfies~\eqref{equation:perfect-fluid} along~$U$ (and similarly, along any constant $u$ slice). Choose an orthonormal basis $e_0, e_1,\ldots, e_n$ with $e_0=\mathbf{n}$.
The equations $G_{00}=\mu$ and $G_{0i}=J_i$ in~\eqref{equation:perfect-fluid} are essentially the definitions of $\mu$ and $J$ (or more precisely, see~\eqref{equation:Einstein-0i}). We have the following general formula for $G_{ij}$ in a Killing development~\eqref{equation:Einstein_pi}: \begin{align}\tag{\ref{equation:Einstein_pi}} \begin{split} G_{ij}&= \left[R_{ij} -\tfrac{1}{2}R_g g_{ij}\right]+ \left[- \tfrac{3}{n-1}(\tr_g \pi)\pi_{ij} + 2\pi_{i\ell}\pi^\ell_j \right]\\ &\quad + \left[ \tfrac{1}{2(n-1)}(\tr_g \pi)^2 - \tfrac{1}{2}|\pi|_g^2 \right] g_{ij} \\ &\quad+f^{-1}\left[-\tfrac{1}{2}(L_X \pi)_{ij} - f_{;ij} +(\Delta_g f)g_{ij}\right]. \end{split} \end{align} We would like to compare this to~\eqref{equation:Hamiltonian} in the system~\eqref{equation:pair}, in which $\varphi$ and $Z$ are both zero. Dividing \eqref{equation:Hamiltonian} by~$f$ gives \begin{align}\label{equation:Hamiltonian-zero} \begin{split} 0 &= -R_{ij} + \left[ \tfrac{3}{n-1}(\tr_g \pi)\pi_{ij} - 2\pi_{i\ell}\pi^\ell_j \right] + \left[ -\tfrac{1}{n-1}(\tr_g \pi)^2 + |\pi|_g^2 \right] g_{ij} \\ &\quad +f^{-1}\left[ \tfrac{1}{2}(L_X \pi)_{ij} + f_{;ij} - (\Delta_g f)g_{ij} -\tfrac{1}{2}\langle X, J\rangle_g g_{ij} -\tfrac{1}{2}(X\odot J)_{ij}\right]. \end{split} \end{align} Adding together the two previous equations, we get \begin{align} \begin{split} \label{equation:tangential-lapse-shift1} G_{ij} &= -\tfrac{1}{2}\left[ R_g +\tfrac{1}{n-1}(\tr_g\pi)^2 -|\pi|_g^2\right]g_{ij} \\ &\quad -\tfrac{1}{2f}\langle X, J\rangle_g g_{ij} -\tfrac{1}{2f}(X\odot J)_{ij} \end{split}\\ &= \left(-\mu -\tfrac{1}{2f}\langle X, J\rangle_g\right) g_{ij} -\tfrac{1}{2f}(X\odot J)_{ij}.\label{equation:tangential-lapse-shift2} \end{align} At points where $J=0$, the expression for $G_{ij}$ in \eqref{equation:perfect-fluid} follows immediately, and when $J\neq0$, we can see it by substituting in the $J$-null-vector equation, $\frac{-X}{2f}=\frac{J}{|J|_g}$.
From our work in Lemma~\ref{lemma:null-perfect-fluid}, this implies that $(\mathbf{N}, \mathbf{g})$ is a null perfect fluid spacetime with velocity $\mathbf{v}$ and pressure $p$ satisfying~\eqref{equation:v+p}. Specifically, $p=|J|_g-\mu = -\tfrac{1}{2}\sigma(g,\pi)$, and the $J$-null vector equation implies that $\mathbf{v} = \frac{\sqrt{|J|_g}}{2f} \mathbf{Y}$, completing the proof of Item~ \eqref{item:spacetime-proposition}. \end{proof} Next, we prove the second half of Theorem~\ref{theorem:DEC}, which is a sort of converse to Item~\eqref{item:spacetime-proposition} of Proposition~\ref{proposition:fluid}. \begin{proposition} Let $(\mathbf{N}, \mathbf{g})$ be a null perfect fluid spacetime with velocity $\mathbf{v}$ and pressure $p$, and $\mathbf{g}\in C^3_{\mathrm{loc}}(\mathbf{N})$. Assume there is a global $C^2_{\mathrm{loc}}$ Killing vector field $\mathbf{Y}$ such that $\mathbf{v} = \eta \mathbf{Y}$ for some scalar function $\eta$. If $U$ is a spacelike hypersurface of $\mathbf{N}$ with induced initial data $(g, \pi)$ and future unit normal $\mathbf{n}$, and if we decompose $\mathbf{Y}=2f \mathbf{n} + X$ along~$U$, then the lapse-shift pair $(f, X)$ satisfies the system~\eqref{equation:pair}. Moreover, if $\mathbf{Y}$ is transverse to $U$, then $(\mathbf{N}, \mathbf{g}, \mathbf{Y})$ must agree with the Killing development of $(U, g, f, X)$ in a neighborhood of $U$. \end{proposition} \begin{proof} We decompose $U= V_1 \cup V_2$ so that $f\neq 0$ on $V_1$ and $f\equiv 0$ on~$V_2$. In particular, $\mathbf{Y}$ is transverse to $V_1$, and $\mathbf{Y}$ is tangent to $\Int V_2$. On $V_1$, together with \eqref{equation:v+p}, the assumption that $\mathbf{v}=\eta \mathbf{Y}$ implies that $(f, X)$ satisfies the $J$-null-vector equation $2fJ +|J|_g X=0$ and $\eta =\frac{\sqrt{|J|_g}}{2f}$. 
Once the $J$-null-vector equation is established, the same computations in the proof of Proposition~\ref{proposition:fluid}, when viewed ``backwards,'' will imply that $D\overline{\Phi}^*_{(g, \pi)}(f, X)=0$. More specifically, the $J$-null-vector equation and \eqref{equation:perfect-fluid} imply \eqref{equation:tangential-lapse-shift2} and \eqref{equation:tangential-lapse-shift1}. Using \eqref{equation:Einstein_pi}, we see that \eqref{equation:Hamiltonian-zero} holds, which is the same as \eqref{equation:Hamiltonian} (in the case where $\phi$ and $Z$ are both zero), and \eqref{equation:Killing} is the same as \eqref{equation:momentum}. Taken together, this gives us~\eqref{equation:pair}, as desired. The second paragraph of the Proposition then follows easily from the discussion in Appendix~\ref{section:spacetime}, together with Proposition~\ref{proposition:fluid}. On $\Int V_2$, since $\mathbf{Y}= X$ is spacelike, $\mathbf{v}$ must vanish, and so does~$J$. Thus, the $J$-null-vector equation trivially holds on $\Int V_2$. Since $\mathbf{Y}$ is Killing, it follows that $L_X g=0$ and $L_X \pi=0$ on $\Int V_2$. Observe that these two equations are equivalent to $D\overline{\Phi}_{(g,\pi)}^*(0,X)=0$. Thus, we conclude that both equations~\eqref{equation:pair} hold on $V_1$ and $\Int V_2$, and then by continuity, they hold everywhere in $U$. \end{proof} \subsection{Removing the nonvanishing assumption on $f$} The nonvanishing assumption on $f$ is essential in the proof of Proposition~\ref{proposition:fluid}. Thankfully, there are some situations in which we can show that $f$ is nonvanishing, and this allows us to expand the applicability of Proposition~\ref{proposition:fluid}. (See Theorem~\ref{theorem:f_not_zero} below.)
\begin{lemma} Let $(U, g, \pi)$ be an initial data set, and let $(f, X)$ be a lapse-shift pair satisfying the following equations on $U$: \begin{align} \tfrac{1}{2} (X_{i;j} + X_{j;i} ) &= \left( \tfrac{2}{n-1} (\tr_g \pi)g_{ij} - 2\pi_{ij} \right)f \tag{\ref{equation:momentum}}\\ 2fJ+|J|_g X&=0\notag. \end{align} Suppose that $J$ is nonvanishing on $U$ and we set $\hat{J} := \frac{J}{|J|_g}$. Then, $f$ satisfies the following equations everywhere in $U$: \begin{align} \langle \nabla f, \hat{J}\rangle_g &=- \left( \tfrac{1}{n-1} (\tr_g \pi )- \pi_{ij} \hat{J}^i\hat{J}^j\right) f \label{equation:gradf}\\ (\nabla f)^j &=- \left(\hat{J}^j_{;i} \hat{J}^i + \tfrac{1}{n-1} (\tr_g \pi) \hat{J}^j -2 \pi^{ij} \hat{J}_i + \pi_{k\ell } \hat{J}^k \hat{J}^\ell \hat{J}^j\right) f.\label{equation:gradf-2} \end{align} \end{lemma} \begin{proof} Define $T_{ij} := \tfrac{1}{2} (X_{i;j} - X_{j;i})$ and $W:=f\hat{J}+\tfrac{1}{2}X$ as in the proof of Lemma~\ref{lemma:matrixQ}, and choose $Z=J$. Our assumption~\eqref{equation:momentum} is the same as~\eqref{equation:DX}, and therefore equation~\eqref{equation:DW} is valid. On the other hand, the $J$-null-vector equation says that $W=0$, and hence \eqref{equation:DW} says \begin{align}\label{equation:DW2} 0&=\hat{J}_i f_j + \tfrac{1}{2} T_{ij} + \left(\hat{J}_{i;j} +\tfrac{1}{n-1} (\tr_g \pi) g_{ij} - \pi_{ij} \right)f. \end{align} Contracting \eqref{equation:DW2} with $\hat{J}^i$ and then with $\hat{J}^j$ gives \begin{align} 0 &= f_j + \tfrac{1}{2} T_{ij} \hat{J}^i + \left( \tfrac{1}{n-1} (\tr_g \pi ) \hat{J}_j - \pi_{ij} \hat{J}^i\right) f\label{equation:J_i}\\ 0&= f_j \hat{J}^j + \left( \tfrac{1}{n-1} (\tr_g \pi )- \pi_{ij} \hat{J}^i\hat{J}^j\right) f,\notag \end{align} where we use $|\hat{J}|_g=1$ and the anti-symmetry of $T$. Thus equation~\eqref{equation:gradf} holds. 
Contracting \eqref{equation:DW2} with $\hat{J}^j$ and then swapping the $i$ and $j$ indices gives \begin{align} 0&=\hat{J}_j (f_i \hat{J}^i )+ \tfrac{1}{2} T_{ji} \hat{J}^i+ \left( \hat{J}_{j;i} \hat{J}^i + \tfrac{1}{n-1} (\tr_g \pi) \hat{J}_j - \pi_{ji} \hat{J}^i \right) f\notag\\ &=\tfrac{1}{2} T_{ji} \hat{J}^i + (\hat{J}_{j;i} \hat{J}^i - \pi_{ji} \hat{J}^i + \pi_{k\ell } \hat{J}^k \hat{J}^\ell \hat{J}_j)f,\label{equation:J_j} \end{align} where we use \eqref{equation:gradf} to substitute $f_j \hat{J}^j$ in the last equation. Adding \eqref{equation:J_i} and \eqref{equation:J_j} to cancel out the terms of $T$ by anti-symmetry gives \eqref{equation:gradf-2}: \[ 0 = f_j + \left(\hat{J}_{j;i} \hat{J}^i + \tfrac{1}{n-1} (\tr_g \pi) \hat{J}_j -2 \pi_{ij} \hat{J}^i + \pi_{k\ell } \hat{J}^k \hat{J}^\ell \hat{J}_j\right) f. \] \end{proof} \begin{corollary}\label{corollary:one-dimensional} Let $(U, g, \pi)$ be an initial data set such that $J$ is not identically zero on~$U$. Define $\mathcal{N}$ to be the space of all lapse-shift pairs $(f, X)\in C^2_{\mathrm{loc}}\times C^1_{\mathrm{loc}}$ solving the system~\eqref{equation:pair}. Then the vector space $\mathcal{N}$ is at most one-dimensional. Furthermore, if $(f, X)\in \mathcal{N}$ is nontrivial, then $f$ must be nonzero everywhere that $J$ is nonzero. \end{corollary} \begin{proof} Because any solution $(f, X)$ to $D\overline{\Phi}_{(g, \pi)}^*(f, X)=0$ satisfies the Hessian-type equations \eqref{equation:Hessian} and \eqref{equation:second-derivative}, $(f, X)$ is determined by its 1-jet $(f, \nabla f, X, \nabla X)$ at an arbitrary point $p\in U$. (Cf. \cite[Proposition 2.1]{Corvino-Huang:2020}.) Choose any $p\in U$ so that $J\ne 0$ at $p$. By equation~\eqref{equation:gradf-2}, $\nabla f(p)$ is determined by $f(p)$. Together with the $J$-null vector equation $2f J + |J|_g X=0$, the 1-jet of $X$ at $p$ is also determined by $f(p)$.
Thus, $\mathcal{N}$ is at most one-dimensional, and if $f(p)=0$, then $(f, X)$ is identically zero in~$U$. \end{proof} \begin{proposition}\label{proposition:f_not_zero} Let $(U, g, \pi)$ be an initial data set, and suppose $(f, X)$ is a nontrivial lapse-shift pair on $U$ solving the system \eqref{equation:pair}. If $V$ is the set of points where $J\ne0$, then $f$ is nonvanishing on~$\overline{V}$. \end{proposition} \begin{proof} In this proof, we compute all lengths, products, traces, and covariant derivatives using $g$. Let $p\in \overline{V}$, and suppose that $f(p)=0$. We will show that the entire 1-jet of $(f, X)$ vanishes at $p$, and thus $(f, X)$ must vanish everywhere in $U$, giving the desired contradiction. By the $J$-null-vector equation, we know that $4f^2=|X|^2$ in $V$, so by continuity, $X(p)=0$. Taking the Laplacian gives \[ 0 = \Delta(4f^2-|X|^2) = 8f\Delta f - 2\langle X, \Delta X\rangle+8 |\nabla f|^2 -2|\nabla X|^2\quad \mbox{ in } V.\] By continuity, this equation still holds at $p$, and since $f(p)=0$ and $X(p)=0$, we obtain $|\nabla X|=2|\nabla f|$ at $p$. It now suffices to prove that $\nabla f(p)=0$. To do this we will show that $|\nabla f|^2 \le C|f|$ in $V\cap B$ for some constant $C$ and some ball $B$ around~$p$. Note $g$, $\pi$, and $\hat{J}$ must be bounded on $V\cap B$. The only potentially unbounded quantity appearing in~\eqref{equation:gradf-2} is $\hat{J}^j_{;i}$.
Multiplying~\eqref{equation:gradf-2} by $f_j$, we obtain, in $V\cap B$, \[ |\nabla f|^2 = (-\hat{J}^{j}_{;i}f_j \hat{J}^i + \text{bounded terms}) f.\] We claim the first term in the coefficient of $f$ above is also bounded: \begin{align*} \hat{J}^{j}_{;i}f_j \hat{J}^i &=\left[( \hat{J}^{j} f_j)_{i} - \hat{J}^{j}f_{;ji}\right] \hat{J}^i \\ & =\left[ -\tfrac{1}{n-1}[(\tr \pi) f]_i + (\pi_{jk} f)_{;i} \hat{J}^j\hat{J}^k + 2\pi_{jk} \hat{J}^j_{;i} \hat{J}^k f - \hat{J}^{j}f_{;ji} \right] \hat{J}^i \\ &= 2\pi_{jk} \left(\hat{J}^j_{;i}\hat{J}^i f\right) \hat{J}^k + \text{bounded terms}, \end{align*} where we use \eqref{equation:gradf} and symmetry of $\pi$ in the second equality. Looking at equation~\eqref{equation:gradf-2}, we see that $\hat{J}^{j}_{;i}\hat{J}^i f$ must be bounded since every other term is bounded. This completes the proof. \end{proof} In the following theorem, we remove the nonvanishing assumption on $f$ in Proposition~\ref{proposition:fluid} in the important special case that $\sigma(g, \pi)$ is identically zero. \begin{theorem}\label{theorem:f_not_zero} Let $(U, g, \pi)$ be an initial data set. Assume there exists a nontrivial lapse-shift pair $(f,X)$ on $U$ solving the system~\eqref{equation:pair}, and assume that $\sigma(g, \pi)\equiv 0$ on $U$. Then $(U, g, \pi)$ sits inside a null dust spacetime $(\mathbf{N},\mathbf{g})$ satisfying the spacetime dominant energy condition and admitting a global Killing vector field $\mathbf{Y}$. Moreover, $\mathbf{g}$ is vacuum on the domain of dependence of the set where $(g, \pi)$ is vacuum, and $\mathbf{Y}$ is null wherever $\mathbf{g}$ is not vacuum. \end{theorem} \begin{proof} Let $V_1\subset U$ be the open set where $f$ is nonzero, and let $V_2\subset U$ be the interior of the set where $J=0$. By Proposition~\ref{proposition:f_not_zero}, we can write $U = V_1\cup V_2$. Note that the assumption $\sigma(g, \pi) \equiv 0$ implies that $(V_2, g, \pi)$ is vacuum, that is, $\mu=|J|_g =0$.
Let $(\mathbb{R}\times V_1, \mathbf{g}_1, \mathbf{Y}_1 )$ be the Killing development of $(V_1, g, f, X)$. By Proposition~\ref{proposition:fluid}, $(V_1, g, \pi)$ sits inside $(\mathbb{R}\times V_1, \mathbf{g}_1)$, which is a null perfect fluid spacetime with velocity $\mathbf{v}=\frac{\sqrt{|J|_g}}{2f}\mathbf{Y}_1$ and pressure $p=-\tfrac{1}{2}\sigma(g, \pi)$. Moreover, since $\sigma(g, \pi)\equiv0$ on $V_1$, it also follows that $\mathbf{g}_1$ satisfies the DEC, $p\equiv0$, and $\mathbf{Y}_1$ is null wherever $J\neq 0$. From the expression of the Einstein tensor \eqref{equation:perfect-fluid}, we know that the domain of dependence of the subset $V_1\cap V_2$ in $(\mathbb{R}\times V_1, \mathbf{g}_1)$ is vacuum. Recall that $\mathbf{Y}_1= 2f\mathbf{n}+X$ along $V_1$, where $\mathbf{n}$ is the future unit normal to $V_1$. On the other hand, Theorem~\ref{theorem:Moncrief} says the vacuum initial data set $(V_2, g, \pi)$ sits inside a vacuum spacetime $(\mathbf{N}_2, \mathbf{g}_2)$ admitting a unique global Killing vector field $\mathbf{Y}_2$ which is equal to $2f\mathbf{n}+X$ along the hypersurface $V_2$, where $\mathbf{n}$ is the future unit normal to $V_2$. By uniqueness of the vacuum development, the domain of dependence of $V_1\cap V_2$ for the spacetime metric $\mathbf{g}_1$ must be isometric to the corresponding domain of dependence in $\mathbf{N}_2$ for $\mathbf{g}_2$ where both are defined. The two developments have compatible overlap, giving rise to a single spacetime $(\mathbf{N}, \mathbf{g})$ in which $(U, g, \pi)$ sits. By construction, this $(\mathbf{N}, \mathbf{g})$ is a null dust spacetime satisfying the spacetime DEC and is vacuum on the domain of dependence of $V_2$.
To patch the two Killing vectors $\mathbf{Y}_1$ and $\mathbf{Y}_2$ on $V_1\cap V_2$, we use the fact that a Killing vector in a vacuum spacetime is uniquely determined by the lapse-shift pair $(f, X)$ solving $D\Phi^*(f, X)=0$ on an initial data set (see \cite[p.496]{Moncrief:1975} and \cite[Lemma 2.2]{Fischer-Marsden-Moncrief:1980}). On $V_1 \cap V_2$, $\mathbf{Y}_1$ and $\mathbf{Y}_2$ both equal $2f\mathbf{n}+X$ with $(f,X)$ solving $D\Phi^*(f, X) = D\overline{\Phi}^*(f, X)=0$ as $J=0$ there. We conclude that $\mathbf{Y}_1$ and $\mathbf{Y}_2$ must coincide on the domain of dependence of $V_1\cap V_2$, and therefore they give rise to a single global Killing vector field in $(\mathbf{N}, \mathbf{g})$. This completes the proof. \end{proof} \subsection{Vanishing of $J$ in an asymptotically flat end} In the previous subsections, we have derived several strong consequences of an initial data set that has a nontrivial lapse-shift pair $(f, X)$ solving the system~\eqref{equation:pair}. If the lapse-shift pair solves the system over an asymptotically flat end, we are able to use the asymptotics of $(f, X)$ to say more about the initial data set. We first note the following ``harmonic'' asymptotics of $(f, X)$ in an asymptotically flat end. The result is well-known for the usual adjoint linearized constraint; see \cite[Proposition 2.1]{Beig-Chrusciel:1996}. The basic idea is to use the Hessian-type equations \eqref{equation:Hessian} and \eqref{equation:second-derivative} of $(f, X)$ to show that $(f, X)$ has at most linear growth along a geodesic ray. See \cite[Appendix C]{Beig-Chrusciel:1996} (Cf. \cite[Proposition B.4]{Huang-Martin-Miao:2018}). Then the desired harmonic expansions of $(f, X)$ follow from the fact that $\Delta_g f$ and each $\Delta_g X_i$ are in $C^{2,\alpha}_1$ together with a bootstrapping argument.
The relation \eqref{equation:ADM-asymptotics} below follows from \cite[Proposition 3.1]{Beig-Chrusciel:1996} (see also \cite[Corollary A.9]{Huang-Lee:2020}). \begin{lemma}\label{lemma:asymptotics} Let $(M, g, \pi)$ be an $n$-dimensional asymptotically flat initial data set of type $(q, \alpha)$ with the ADM energy-momentum $(E, P)$. Let $s = \max\{0, 1-q\}$. Suppose that $(f, X)$ is a lapse-shift pair on $\Int M$ such that $D\overline\Phi_{(g, \pi)}^*(f, X) =0$. Then the following holds: \begin{enumerate} \item For $i, j=1,\ldots, n$, there exist constants $c_i, d_{ij}\in \mathbb{R}$ with $d_{ji}=-d_{ij}$, such that \begin{align}\label{equation:asymptotics-linear} \begin{split} f&= \sum_{i=1}^n c_i x^i + O_{2,\alpha}(|x|^{s})\\ X_i &= \sum_{j=1}^n d_{ij}x^j + O_{2,\alpha}(|x|^{s}). \end{split} \end{align} \item For $i, j=1,\ldots, n$, if the $c_i$ and $d_{ij}$ above are all zero, then there exist constants $a, b_{i}\in \mathbb{R}$ such that \begin{align}\label{equation:asymptotics-constant} \begin{split} f&= a + O_{2,\alpha}(|x|^{-q})\\ X_i &= b_i + O_{2,\alpha}(|x|^{-q}). \end{split} \end{align} We also have, for each $i$, \begin{align} \label{equation:ADM-asymptotics} b_i E = -2aP_i. \end{align} \item For $i, j=1,\ldots, n$, if the $c_i$, $d_{ij}$, $a$, $b_i$ above are all zero, then $(f, X)$ vanishes identically. \end{enumerate} \end{lemma} \begin{lemma}\label{lemma:unbounded} Let $(M, g, \pi)$ be an asymptotically flat initial data set of type $(q, \alpha)$ with the ADM energy-momentum $(E, P)$. Assume there exists a nontrivial lapse-shift pair $(f, X)$ on $\Int M$ solving the system~\eqref{equation:pair}. If $|J|_g>0$ in an unbounded subset $V\subset M$, then $|E|=|P|$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:asymptotics}, $(f, X)$ satisfies the asymptotics \eqref{equation:asymptotics-linear}. We claim that $c_i=0$ and $d_{ij}=0$ for all~$i,j$. To get a contradiction, suppose $c_i, d_{ij}$ are not all zero.
Since $V$ is unbounded, the equation $2fJ+|J|_g X=0$ implies $|X|_g=2|f|$ in $V$, and thus $c_i\neq 0$ for some $i$. By rotating the coordinates and rescaling $(f, X)$, we may assume $f =x^1 + O_2(|x|^{s})$. Recall $s=\max \{ 0, 1-q\}$, and note $s-1\in [-q, 0)$. Then $\tfrac{\partial f}{\partial x^1} = 1+O_1(|x|^{s-1})$, and equation \eqref{equation:gradf} and asymptotic flatness of $(g, \pi)$ imply that $\tfrac{\partial f}{\partial x^1} \hat{J}_1 = O_1(|x|^{-q})$. Together with the asymptotics of $\tfrac{\partial f}{\partial x^1}$, we see that $\hat{J}_1 = O_1(|x|^{s-1})$ in $V$, and then equation \eqref{equation:gradf-2} implies that \[ \tfrac{\partial f}{\partial x^1} = -\sum_k \left(\hat{J}_k \tfrac{\partial \hat{J}_{1}}{\partial x^k} \right) f + O(|x|^{-q}) = O(|x|^{s-1}) \] in $V$, which contradicts the fact that $\tfrac{\partial f}{\partial x^1}$ is asymptotic to~$1$. By Lemma~\ref{lemma:asymptotics}, $(f, X)$ satisfies the asymptotics \eqref{equation:asymptotics-constant}. Since $|X|_g=2|f|$ on the unbounded set $V$, we have $2|a| = |b|$, and these constants cannot vanish, for otherwise $(f, X)$ would vanish identically. Hence $|E|=|P|$ by \eqref{equation:ADM-asymptotics}. \end{proof} Next, we combine Theorem~\ref{theorem:improvable} with Lemma~\ref{lemma:unbounded} to prove the following theorem. \begin{theorem}\label{theorem:AF} Let $(M, g, \pi)$ be an asymptotically flat initial data set, possibly with boundary, such that $(g,\pi) \in C^5_{\mathrm{loc}}(\Int M)\times C^4_{\mathrm{loc}}(\Int M)$. Suppose there exists a sequence of compact sets $\Omega_j$ with smooth boundary such that the sequence exhausts $\Int M$, and for all $j$, the dominant energy scalar is NOT improvable in $\Int\Omega_j$. Then there exists a nontrivial lapse-shift pair $(f, X)$ on $\Int M$ solving the system \eqref{equation:pair}. If we further assume that the ADM energy-momentum $(E, P)$ satisfies $|E|\neq |P|$, then the current density $J$ vanishes outside some compact subset.
\end{theorem} \begin{proof} Suppose there exists a sequence of compact sets $\Omega_j$ exhausting $\Int M$ such that for all $j$, the dominant energy scalar is \emph{not} improvable in $\Int\Omega_j$. By Theorem~\ref{theorem:improvable}, there exists a nontrivial lapse-shift pair $(f^{(j)}, X^{(j)})$ on $\Int\Omega_j$ solving the system \eqref{equation:pair}. Since the $1$-jet of $(f^{(j)}, X^{(j)})$ at any point is nonzero, we rescale it so that $|f^{(j)}| +|\nabla f^{(j)}| + |X^{(j)}| +|\nabla X^{(j)}|=1$ at a fixed point $p$ in all $\Omega_j$. Standard elliptic theory shows that, as $j\to\infty$, a subsequence of $(f^{(j)}, X^{(j)})$ converges to a nontrivial lapse-shift pair $(f, X)$ defined on $\Int M$ solving the system \eqref{equation:pair}. This proves the first part of Theorem~\ref{theorem:AF}. If we assume $|E| \neq |P|$ and that there is a nontrivial lapse-shift pair $(f, X)$ on $\Int M$ solving the system \eqref{equation:pair}, by Lemma~\ref{lemma:unbounded}, $J$ must vanish outside a compact subset. \end{proof} \section{Bartnik's stationary conjecture}\label{section:Bartnik} In Section~\ref{section:decrease-energy}, we construct a family of deformations by conformal change to decrease the ADM energy without changing the ADM linear momentum, while the deformation slightly breaks the DEC in a fixed compact set. Then we use the improvability and almost-improvability results to reinstate the DEC (Theorem~\ref{theorem:Bartnik2}). In Section~\ref{section:admissibility}, we define an admissible extension for the Bartnik mass and combine Theorem~\ref{theorem:Bartnik2} with Theorem~\ref{theorem:AF} to prove our main result on the Bartnik stationary conjecture, Theorem~\ref{theorem:Bartnik}. In Section~\ref{section:energy}, we include some results of independent interest about the Bartnik \emph{energy}. 
\subsection{Deformations that decrease the ADM energy}\label{section:decrease-energy} The following notations will frequently appear in this section: $(M, g, \pi)$ denotes an asymptotically flat initial data set of type $(q,\alpha)$, $(E, P)$ is the ADM energy-momentum, $(\mu, J)$ is the energy and current density, and $\Omega_r$ denotes the compact part of $M$ enclosed by the sphere~$|x|=r$. \begin{proposition}\label{proposition:conformal} Let $(M, g, \pi)$ be an asymptotically flat initial data set satisfying the dominant energy condition. For each $r_0\gg 1$, there is a one-parameter family of asymptotically flat initial data sets $(g_t, \pi_t)$ for $t\in (-\epsilon, \epsilon)$ with $(g_0, \pi_0) = (g, \pi)$ such that \begin{enumerate} \item $(g_t, \pi_t)= (g,\pi)$ in $\Omega_{r_0}$ for all $t$. \label{item:B} \item \label{item:E} $E_t = E- t$ and $P_t = P$, where $(E_t, P_t)$ is the ADM energy-momentum of $(g_t, \pi_t)$. \item \label{item:convergence} Let $(\mu_t, J_t)$ denote the energy and current densities of $(g_t, \pi_t)$. As $t\to 0$, \begin{align*} \| (g_t, \pi_t) - (g,\pi)\|_{C^{2,\alpha}_{-q}(M)\times C^{1,\alpha}_{-1-q}(M)} &\to 0\\ \|(\mu_t, J_t) - (\mu, J)\|_{L^1(M)}&\to 0, \end{align*} and on each compact subset $(\mu_t, J_t)\to (\mu, J)$ in $C^{0,\alpha}$. \item \label{item:annulus} For sufficiently large $r_1>r_0$, $\Sc(g_t, \pi_t) > \Sc(g, \pi)$ in $M\smallsetminus \Omega_{r_1}$. \end{enumerate} \end{proposition} \begin{proof} We will select a conformal factor $u_t\to 1$ such that we have exact knowledge of $\Delta_g u_t$ outside $\Omega_{2r_0}$ and exact knowledge of how $u_t$ affects the energy, and then we cut it off arbitrarily to make it~$1$ on $\Omega_{r_0}$. To do this, choose $\delta>0$ smaller than both $q$ and $1$. We claim that there exists a function $v\in C^{2,\alpha}(M)$ such that on $M\smallsetminus \Omega_{r_0}$, \begin{align*} \Delta_g v &= -|x|^{-n-\delta} \\ v &=-\tfrac{1}{2} |x|^{2-n} + O_{2,\alpha}(|x|^{2-n-\delta}). 
\end{align*} To do this, extend $-|x|^{-n-\delta}$ on $M\smallsetminus \Omega_{r_0}$ to some function $\rho$ on all of $M$ such that $\int_M \rho \, d\mu_g = \tfrac{n-2}{2}\omega_{n-1}$. Since $\Delta_g: C^{2,\alpha}_{-q}(M)\to C^{0,\alpha}_{-2-q}(M)$ is an isomorphism, we can solve the Poisson equation $\Delta_g v=\rho$ for some $v\in C^{2,\alpha}_{-q}(M)$. (If $M$ has a boundary, we can either fill it in arbitrarily or solve for $v$ with Neumann conditions.) By harmonic expansion, we know that $v=a|x|^{2-n} + O_{2,\alpha}(|x|^{2-n-\delta})$, for some constant $a$. (See~\cite[Remark A.39]{Lee:book}, for example.) Integrating the Poisson equation, we see that \[ a = \tfrac{-1}{(n-2)\omega_{n-1}} \int_M \Delta_g v\, d\mu_g = \tfrac{-1}{(n-2)\omega_{n-1}} \int_M \rho\, d\mu_g =-\tfrac{1}{2}. \] Let $\chi$ be a smooth nonnegative cut-off function such that $\chi \equiv 0$ on $\Omega_{r_0}$ and $\chi\equiv1$ outside $\Omega_{2r_0}$. Define $u_t = 1+t\chi v$. For small $\epsilon>0$, we have $u_t>0$ for all $t\in(-\epsilon, \epsilon)$. We define the one-parameter family of initial data sets: \begin{align*} (g_t)_{ij} = u_t^{\frac{4}{n-2}} g_{ij} \quad \mbox{and} \quad (\pi_t)^{ij} = u_t^{-\frac{6}{n-2}} \pi^{ij}. \end{align*} Note that because $\chi\equiv 0$ in $\Omega_{r_0}$, $(g_t, \pi_t) = (g, \pi)$ in $\Omega_{r_0}$ which implies Item~\eqref{item:B}. By a standard computation, Item~\eqref{item:E} follows: \[ E_t = E +2ta =E-t \quad \mbox{and}\quad P_t = P. \] To verify Item \eqref{item:convergence}, we see that $\| (g_t, \pi_t) - (g, \pi)\|_{C^{2,\alpha}_{-q}(M)\times C^{1,\alpha}_{-1-q}(M)} \to 0$ as $t\to 0$ because $\| u_t - 1\|_{C^{2,\alpha}_{-q}(M)} \to 0$. This directly implies that on each compact subset $(\mu_t, J_t)\to (\mu, J)$ in $C^{0,\alpha}$. By direct computation (Cf. 
\cite[Equation (37)]{Eichmair-Huang-Lee-Schoen:2016}, in which $\pi$ is a (0,2)-tensor), $\mu_t$ and $J_t$ satisfy \begin{align}\label{equation:mu_t} \begin{split} 2 \mu_t &= u_t^{-\frac{4}{n-2}} \left(-\tfrac{4(n-1)}{n-2} u_t^{-1} \Delta_g u_t + R_g - |\pi|_g^2 + \tfrac{1}{n-1} (\tr_g \pi)^2\right)\\ &= u_t^{-\frac{4}{n-2}} \left(2\mu + 2t\phi\right)\\ J_t^i&= u_t^{-\frac{6}{n-2}} \left((\mathrm{div}_{g} \pi )^i+ \tfrac{2(n-1)}{n-2} u_t^{-1}(u_t)_{,j} \pi^{ij} - \tfrac{2}{n-2} g^{ij}u_t^{-1}(u_t)_{,j} \tr_g \pi\right)\\ &=u_t^{-\frac{6}{n-2}} \left(J^i+ t\Upsilon^i\right) \end{split} \end{align} where $\phi, \Upsilon$ denote \begin{align*} \phi &=- \tfrac{2(n-1)}{n-2} u_t^{-1} \Delta_g (\chi v)\\ \Upsilon^i &= \tfrac{2(n-1)}{n-2} u_t^{-1}(\chi v)_{,j} \pi^{ij} - \tfrac{2}{n-2} g^{ij}u_t^{-1}(\chi v)_{,j} \tr_g \pi. \end{align*} From this it is simple to check that $(\mu_t, J_t)$ converges to $(\mu, J)$ in $L^1$ as $t\to 0$. Last, we prove Item \eqref{item:annulus}. By \eqref{equation:mu_t}, the dominant energy scalar satisfies \begin{align*} \Sc(g_t, \pi_t) \ge u_t^{-\frac{4}{n-2}}\left( \Sc(g,\pi) + 2t(\phi- |\Upsilon|_g)\right)\quad \mbox{ in } M. \end{align*} In $M\smallsetminus \Omega_{2r_0}$ we have $\chi\equiv1$ and $ \Delta_g v=-|x|^{-n-\delta}$, so \begin{align*} \phi & = \tfrac{2(n-1)}{n-2} u_t^{-1} |x|^{-n-\delta}\\ \Upsilon^i &=\tfrac{2(n-1)}{n-2} u_t^{-1} v_{,j} \pi^{ij} - \tfrac{2}{n-2} g^{ij}u_t^{-1}v_{,j} \tr_g \pi=O(|x|^{-n-q}). \end{align*} Since $\delta<q$, we see that for $r_1$ sufficiently large, we have $\phi- |\Upsilon|_g>0$ in $M\smallsetminus \Omega_{r_1}$. Also, for $r_1$ sufficiently large, we have $v<0$ on $M\smallsetminus \Omega_{r_1}$, and consequently, we can guarantee that $u_t=1+t\chi v <1$ there. Therefore \[ \Sc(g_t, \pi_t) > u_t^{-\frac{4}{n-2}}\Sc(g, \pi) > \Sc(g, \pi) \quad \mbox{ in } M\smallsetminus \Omega_{r_1}. 
\] \end{proof} \begin{theorem}\label{theorem:Bartnik2} Let $(M, g, \pi)$ be an asymptotically flat initial data set with non-empty smooth boundary $\partial M$. Suppose that the dominant energy condition holds and that $(g,\pi)$ is $C^5_{\mathrm{loc}}\times C^4_{\mathrm{loc}}$ on $\Int M$. Then one of the following two statements must be true: \begin{enumeratei} \item \label{item:non-improvable} $\sigma(g,\pi)\equiv 0$ on $\Int M$, and there exists a nontrivial lapse-shift pair $(f, X)$ on all of $\Int M$ solving the system \eqref{equation:pair}. \item \label{item:deformation} There exists a bounded neighborhood $V_0$ of $\partial M$ such that for any $\epsilon>0$, there exists a perturbation $(\bar{g}, \bar{\pi})$ of $(g,\pi)$ such that \begin{itemize} \item $(\bar{g}, \bar{\pi})$ satisfies the dominant energy condition. \item $(\bar{g}, \bar{\pi})=(g,\pi)$ in $V_0$. \item $\overline{E}< E$ and $\overline{P}=P$. \item $\| (\bar{g}, \bar{\pi}) - (g,\pi)\|_{C^{2,\alpha}_{-q}(M)\times C^{1,\alpha}_{-1-q}(M)} <\epsilon$ and $\|(\bar{\mu}, \bar{J}) - (\mu, J)\|_{L^1(M)}<\epsilon$, \end{itemize} where $(\overline{E}, \overline{P})$ is the ADM energy-momentum and $(\bar{\mu}, \bar{J})$ is the energy and current density of $(\bar{g}, \bar{\pi})$. \end{enumeratei} \end{theorem} \begin{remark} If $(\mu, J)\in C^{0,\alpha}_{-n-\delta}$ for some $\delta\in(0,1)$, then we can arrange for $(\bar\mu, \bar{J})$ to be $\epsilon$-close in this space as well. \end{remark} \begin{proof} We will consider two cases. In Case 1, we assume $\Sc(g, \pi)>0$ at some point in $\Int M$. In Case 2, we assume that there exists a bounded neighborhood $V_0$ of $\partial M$ with smooth boundary such that the dominant energy scalar is improvable in~$\Omega_r \smallsetminus V_0$ for sufficiently large $r$. We will show that in either case, Item~\eqref{item:deformation} holds. We will then show that the negations of both cases imply Item~\eqref{item:non-improvable}.
\vspace{6pt} \noindent{\bf Case 1}: We assume $\Sc(g,\pi)>0$ at some point in $\Int M$. Let $V_0$ be an open neighborhood of $\partial M$ with smooth boundary such that $\Sc(g,\pi)>0$ somewhere outside~$V_0$. There exists an open ball $B$ in $M \smallsetminus V_0$ and a $\delta>0$ such that $\Sc(g,\pi)>2\delta$ on $B$. Choose $r_0>1$ large enough so that $B\cup V_0\subset \Omega_{r_0}$. Let $(g_t, \pi_t)$ be the one-parameter family of initial data sets constructed in Proposition~\ref{proposition:conformal}. Thanks to the properties of $(g_t,\pi_t)$, in order to establish Item~\eqref{item:deformation}, it suffices to deform $(g_t, \pi_t)$ within $\Omega := \Omega_{r_1}\smallsetminus V_0$ to obtain the DEC, where $r_1$ is from Proposition~\ref{proposition:conformal}. To do this, we apply Theorem~\ref{theorem:improvability-error} to $(\Omega, g, \pi)$ with $\delta$ and $B$ as chosen above. Recall that Theorem~\ref{theorem:improvability-error} says that there exists a neighborhood $\mathcal{U}$ of $(g, \pi)$ in $C^{4,\alpha}({\Omega})\times C^{3,\alpha}({\Omega})$ such that for any $V\subset\subset\Int\Omega$, there exist constants $\epsilon, C>0$ such that for all $(\gamma, \tau)\in \mathcal{U}$ and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}< \epsilon$, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ with $\|(h,w)\|_{C^{2,\alpha}(\Omega)}\le C\| u\|_{C^{0,\alpha}(\Omega)}$ such that the initial data set $(\gamma+h, \tau+w)$ satisfies \[ \Sc(\gamma+h, \tau+w) \ge \Sc(\gamma,\tau)+u -\delta {\bf 1}_B. \] For $t$ sufficiently small, we set $(\gamma,\tau) = (g_t, \pi_t)$ and $u=\Sc(g_t, \pi_t)^-$. (Recall $v= v^+ -v^-$ is the decomposition of a function $v$ into its positive and negative parts.) 
Then by the choice of $B$ and Proposition~\ref{proposition:conformal}, for $t$ sufficiently small, \begin{align*} &\Sc(g_t, \pi_t) > \delta \mbox{ in } B\\ &\Sc(g_t, \pi_t)^- \mbox{ is compactly supported in } \Int\Omega\\ &\|\Sc(g_t, \pi_t)^- \|_{C^{0,\alpha}({\Omega})}\to 0 \mbox{ as } t\to 0. \end{align*} There exists $(h_t, w_t)\in C^{2,\alpha}_c(\Int\Omega)$ such that \[ \| (h_t, w_t)\|_{C^{2,\alpha}(\Omega)}\le C\|\Sc(g_t, \pi_t)^- \|_{C^{0,\alpha}({\Omega})}\to 0 \] and \begin{align*} \Sc(g_t+ h_t, \pi_t+w_t) &\ge \Sc(g_t, \pi_t) +\Sc(g_t, \pi_t)^- - \delta {\bf 1}_B\\ &= \Sc(g_t, \pi_t)^+ - \delta {\bf 1}_B \ge 0. \end{align*} It follows that for sufficiently small $t>0$, the initial data $(\bar{g}, \bar{\pi}):= (g_t + h_t, \pi_t + w_t)$ satisfies Item~\eqref{item:deformation}. \vspace{6pt} \noindent {\bf Case 2:} Assume that there exists a bounded neighborhood $V_0$ of $\partial M$ such that $V_0$ has smooth boundary and that the dominant energy scalar is improvable in $\Omega_{r}\smallsetminus V_0$ for all sufficiently large $r$. Choose $r_0>1$ such that $V_0 \subset \Omega_{r_0}$, and let $(g_t, \pi_t)$ be the one-parameter family of initial data sets constructed in Proposition~\ref{proposition:conformal}, and choose $r_1>r_0$ large enough so that Proposition~\ref{proposition:conformal} holds and the dominant energy scalar is improvable in $\Omega:=\Omega_{r_1}\smallsetminus V_0$. Once again, it suffices to deform $(g_t, \pi_t)$ within $\Omega$ to obtain the DEC.
By definition of improvability on $\Omega$, there exists a neighborhood $\mathcal{U}$ of $(g, \pi)$ in $C^{4,\alpha}({\Omega})\times C^{3,\alpha}({\Omega})$ such that for any $V\subset\subset\Int\Omega$, there exist constants $\epsilon, C>0$ such that for all $(\gamma, \tau)\in \mathcal{U}$ and $u\in C^{0,\alpha}_c(V)$ with $\| u \|_{C^{0,\alpha}(\Omega)}< \epsilon$, there exists $(h, w)\in C^{2,\alpha}_c(\Int\Omega)$ with $\|(h,w)\|_{C^{2,\alpha}(\Omega)}\le C\| u\|_{C^{0,\alpha}(\Omega)}$ such that the initial data set $(\gamma+h, \tau+w)$ satisfies \[ \Sc(\gamma+h, \tau+w) \ge \Sc(\gamma,\tau)+u. \] Once again, for small $t>0$, we can choose $(\gamma, \tau)=(g_t,\pi_t)$ and $u=\Sc(g_t, \pi_t)^-$, and the argument is the same as in Case 1, except now there is no $- \delta {\bf 1}_B$ term to worry about. \vspace{6pt} We have shown that either of the two cases above implies that Item~\eqref{item:deformation} holds. Now, suppose we have the negations of both Case 1 and Case 2. The negation of Case 1 says that $\Sc(g,\pi)\le 0$ everywhere, and hence, by the dominant energy condition, $\Sc(g,\pi)\equiv 0$ in all of $\Int M$. Let $V_k$ be an open neighborhood of $\partial M$ with smooth boundary and lying within a distance $\tfrac{1}{k}$ from $\partial M$. By the negation of Case 2, there exists a sequence $r_k\to\infty$ such that for all $k$, the dominant energy scalar is not improvable in $\Omega_{r_k}\smallsetminus V_k$. Since $\Omega_{r_k}\smallsetminus V_k$ exhausts $\Int M$, we can invoke Theorem~\ref{theorem:AF} to obtain the desired lapse-shift pair $(f, X)$ satisfying the system \eqref{equation:pair}. \end{proof} \subsection{Bartnik mass and admissible extensions}\label{section:admissibility} Before we present the definition of the Bartnik mass, we will discuss admissibility of an asymptotically flat extension. As already addressed in \cite{Bartnik:1997}, it is necessary to impose some no-horizon or non-degeneracy condition in defining an admissible extension, but there is no clear consensus for what exactly it should be.
(Even in the time-symmetric case, there are potentially inequivalent choices, which can lead to different desirable properties of Bartnik mass, see~\cite{Jauregui:2019}.) We define a no-horizon condition as follows. For a closed, two-sided, immersed hypersurface $\Sigma$ in an initial data set $(M, g, \pi)$, we define the \emph{null expansion} $\theta^+ := H_\Sigma+ \mathrm{tr}_\Sigma k$ with respect to the unit normal $\nu$, where $H_\Sigma$ is the tangential divergence of $\nu$ and $\mathrm{tr}_\Sigma k$ is the trace of $k$ over the tangent space of $\Sigma$. (Recall $k$ is related to $\pi$ by \eqref{equation:k}.) We say an immersed hypersurface $\Sigma$ is a \emph{marginally outer trapped hypersurface} (or \emph{MOTS} for short) if $\theta^+$ vanishes everywhere on $\Sigma$. Separately, in an asymptotically flat $(M, g, \pi)$, we say that an immersed closed hypersurface $\Sigma$ in $M$ is an \emph{outer embedded boundary} if $\Sigma = \partial U$, where $U$ is a connected open set that contains infinity (that is, the asymptotically flat end). The definition implies that $\Sigma$ cannot have transversal self-intersections and that the only way that $\Sigma$ can fail to be embedded is when locally two sheets of $\Sigma$ touch tangentially from inside, where ``inside'' refers to the complement of $U$. \begin{definition} We say that an asymptotically flat initial data set $(M, g, \pi)$, possibly with boundary, satisfies $\mathscr{N}_1$ if it contains no smooth MOTS that is an outer embedded boundary, except possibly $\partial M$ itself. \end{definition} Note the condition $\mathscr{N}_1$ says that $(M, g, \pi)$ contains neither an embedded MOTS homologous to $\partial M$ (except possibly $\partial M$ itself) nor an immersed MOTS that touches itself tangentially from inside. We remark that in the time-symmetric case, a MOTS is a minimal surface, and a minimal surface that is an outer embedded boundary is necessarily embedded because of a comparison principle.
However, the same reasoning does not hold true for general MOTS. In the next result, we show that the condition $\mathscr{N}_1$ is ``open'' among certain small deformations of $(g, \pi)$. This is the only part of the overall proof of Theorem~\ref{theorem:Bartnik} where we have to assume that $3\le n \le 7$. \begin{proposition}\label{proposition:openness} Let $3\le n \le 7$ and let $(M, g, \pi)$ be an $n$-dimensional asymptotically flat initial data set with nonempty smooth boundary $\partial M$. Suppose $(M, g, \pi)$ satisfies the condition~$\mathscr{N}_1$. For each bounded neighborhood $V_0$ of $\partial M$, there exists $\epsilon>0$ such that if $(\bar{g}, \bar{\pi})$ is an initial data set on $M$ with $(\bar{g}, \bar{\pi})=(g, \pi)$ everywhere in $V_0$ and $\|(\bar{g}, \bar{\pi})- (g, \pi) \|_{C^{2}_{-q}(M)\times C^{1}_{-1-q}(M)} <\epsilon$, then $(M, \bar{g}, \bar{\pi})$ also satisfies the condition~$\mathscr{N}_1$. \end{proposition} \begin{proof} In the following argument, we use the results of Andersson and Metzger~\cite{Andersson-Metzger:2009, Andersson-Metzger:2010}, Eichmair~\cite{Eichmair:2009, Eichmair:2010}, and the survey paper of Andersson, Eichmair, and Metzger~\cite{Andersson-Eichmair-Metzger:2011}. Suppose, on the contrary, that no such $\epsilon$ exists. Then there exists a sequence $(g_i, \pi_i)$ of initial data sets with $(g_i, \pi_i)=(g, \pi)$ everywhere in $V_0$ and $\|(g_i, \pi_i)- (g, \pi) \|_{C^{2}_{-q}(M)\times C^{1}_{-1-q}(M)} \to 0$ so that $\mathscr{N}_1$ fails for each $(g_i, \pi_i)$. Namely, each $(M, g_i, \pi_i)$ contains a closed MOTS $\Sigma_i$ that is an outer embedded boundary. Ideally, we would like to extract a subsequence of $\Sigma_i$ converging to some $\Sigma$ in $(M, g, \pi)$, but this cannot be done directly. First we will produce a sequence of \emph{stable embedded} MOTS $\Sigma_i'$ in $(M, g_i, \pi_i)$. 
There is a sufficiently large $r$ so that the coordinate spheres of radius greater than or equal to $r$ all have positive $\theta^+$ (with respect to the unit normal pointing to infinity) in $(M, g_i, \pi_i)$ for all $i$. By a MOTS comparison principle~(see, for example, \cite[Proposition 4]{Eichmair-Huang-Lee-Schoen:2016}), it follows that $S_r$ must enclose $\Sigma_i$. Let $U_i$ be the part of $\Int \Omega_r$ lying outside of $\Sigma_i$. That is, $\partial U_i$ is the disjoint union of $\Sigma_i$ and $S_r$, and we have $\theta^+=0$ on $\Sigma_i$ and $\theta^+>0$ on $S_r$, both with respect to the unit normal pointing toward infinity in $(g_i, \pi_i)$. In order to obtain a stable embedded MOTS in $U_i$, we follow the idea of \cite[Theorem 5.1]{Andersson-Metzger:2009} to slightly modify the initial data near $\Sigma_i$. Let $\phi \ge 0$ be a smooth scalar function on $M$ so that $\phi\equiv 0$ in a neighborhood of $S_r$ and $\phi>0$ on all $\Sigma_i$. Let $\delta_i\to 0$ be a sequence of positive numbers, and let $\pi_{i}' = \pi_i +\delta_{i} \phi g_i$. It is easy to see that with respect to $(g_i, \pi_i')$, the null expansion $\theta^+$ becomes strictly negative everywhere on $\Sigma_i$ while $S_r$ has the same positive $\theta^+$. By \cite[Theorem 3.3]{Andersson-Eichmair-Metzger:2011}, there exists a closed \emph{embedded} MOTS $\Sigma_i'$ in $(U_i, g_i, \pi_i')$ that is also \emph{stable} in the sense of MOTS,\footnote{We refer to \cite[Section 3.6]{Andersson-Eichmair-Metzger:2011} for stability of MOTS, and we note $\Sigma_i$ also satisfies a stability inequality in the sense of \cite[(5)]{Eichmair:2010} that is sufficient to apply Schoen-Simon's regularity theory in \cite{Schoen-Simon:1981}.} and is $C$-almost minimizing\footnote{Here, $C$-almost minimizing is in the sense of~\cite{Eichmair:2009}.} in $U_i$, where $C$ can be chosen independent of~$i$ (since $C$ depends only on $\sup_M |\pi_i'|_{g_i}$). 
Also, $\Sigma_i'$ is homologous to $S_r$, and we define $U_i'\subset U_i$ to be the open set bounded between $\Sigma_i'$ and~$S_r$. The $C$-almost minimizing property of $\Sigma_i'$ in $U_i$ implies that $\Sigma_i'$ has a uniform area bound, and the equation $\theta^+=0$ implies $\Sigma_i'$ has uniformly bounded mean curvature. Together with the stability of~$\Sigma_i'$, we can apply the compactness result of \cite{Schoen-Simon:1981} (see also \cite[Theorem 1.3]{Andersson-Metzger:2010} and \cite[Theorem A.2]{Eichmair:2010}) to obtain a subsequential limit $\Sigma$, which is a closed immersed MOTS in $(M, g, \pi)$. By passing to a further subsequence if necessary, $U_i'$ converges to some $U$ as currents, where $\partial U= S_r - \Sigma$ as currents. We show that the $C$-almost minimizing property of the sequence $\Sigma_i'$ in $U_i'$ guarantees that $\Sigma$ is a smooth boundary component of $U$, and thus $\Sigma$ is an outer embedded boundary. Suppose, to get a contradiction, that $\Sigma$ has no collar neighborhood in $U$, which implies that two sheets of $\Sigma$ must touch at some point from inside of $U$. We can argue as in \cite[p. 23]{Andersson-Eichmair-Metzger:2011} that in a neighborhood of that point, for sufficiently large $i$, two sheets of $\Sigma_i'$ would be close enough in $U_i'$ so that the area of $\Sigma_i'$ could be reduced by adding a catenoid neck, violating the $C$-almost minimizing property in $U_i'$. (Note that the above argument does not give ``inner'' embeddedness of $\Sigma$ because two sheets of $\Sigma$ can possibly touch from the complement of $U$ and $U_i$, where the $C$-almost minimizing property of $\Sigma_i'$ is not known to hold.) Finally, we check that $\Sigma$ is not equal to $\partial M$. Since $(M, g, \pi)$ satisfies $\mathscr{N}_1$ and $(g_i, \pi_i)$ is identical to $(g, \pi)$ on $V_0$, each $\Sigma_i$ must intersect the complement of $V_0$, and consequently, so must $\Sigma_i'$.
Hence $\Sigma$ also intersects the complement of $V_0$. In summary, $\Sigma$ is a MOTS that is an outer embedded boundary and is not $\partial M$. This contradicts the assumption that $(M, g, \pi)$ satisfies~$\mathscr{N}_1$. Note that if $\partial M$ is not a MOTS, then we do not need the argument in the previous paragraph to see that the MOTS limit $\Sigma$ cannot equal $\partial M$. So in that case, the assumption that $(\bar{g}, \bar{\pi})=(g, \pi)$ in~$V_0$ in Proposition~\ref{proposition:openness} is not needed. \end{proof} \begin{definition}\label{definition:admissible} Let $(\Omega_0, g_0, \pi_0)$ be an $n$-dimensional compact initial data set with nonempty smooth boundary. We say that $(M, g, \pi)$ is an \emph{admissible extension of $(\Omega_0, g_0, \pi_0)$} if the following holds: \begin{enumeratei} \item $(M, g, \pi)$ is an $n$-dimensional asymptotically flat initial data set (as defined in Appendix~\ref{section:asymp_flat}) with boundary $\partial M$, satisfying the dominant energy condition. \item There exists an identification of the boundaries $\partial M$ and $\partial \Omega_0$ via a diffeomorphism, and under this identification, the following equalities hold along $\partial M \cong \partial \Omega_0$: \begin{align*} g_0|_{\partial \Omega_0} &= g|_{\partial M} \\ H_{\partial \Omega_0} &= H_{\partial M} \\ \pi_0\cdot \nu_0 &= \pi\cdot\nu. \end{align*} Here, $\nu_0$ and $\nu$ denote the unit normals with respect to $g_0$ and $g$, respectively (both of which point into $M$ and out of $\Omega_0$).
The first equation is between the induced metrics, the second equation is between the mean curvatures (computed with respect to $g_0$ and $g$ and the normals $\nu_0$ and $\nu$, respectively), and the third equation is between the $g_0$-contraction of $\pi_0$ with $\nu_0$ and the $g$-contraction of $\pi$ with $\nu$.\footnote{The third identity is equivalent to asking for a matching condition on the tangential trace of $k$ and on the one-form $k(\nu, \cdot)$ restricted to the tangent space of the boundary as in \cite[Definition 2]{Bartnik:1997}. } \item $(M, g, \pi)$ satisfies the no-horizon condition $\mathscr{N}_1$. \end{enumeratei} \end{definition} \begin{remark}\label{remark:admissible} We include two other conditions that can replace $\mathscr{N}_1$ in Definition~\ref{definition:admissible}. The condition $\mathscr{N}_2$ requires an asymptotically flat initial data set $(M, g, \pi)$ to satisfy $\mathscr{N}_1$ and, in addition, that $\partial M$ itself is not a MOTS. The last sentence of the proof of Proposition~\ref{proposition:openness} implies that $\mathscr{N}_2$ is an open condition with respect to deformations of initial data in $C^{2}_{-q}(M)\times C^{1}_{-1-q}(M)$ that fix the induced data on the boundary. The condition $\mathscr{N}_3$ says that $\partial M$ is strictly outward-minimizing in $(M, g, \pi)$, in the sense that its volume is strictly less than that of any other hypersurface enclosing it. As discussed in \cite[Section 6]{Jauregui:2019}, the condition $\mathscr{N}_3$ is an open condition with respect to deformations of metrics in $C^{1}_{-q}$ that fix the induced metric on the boundary. All results in this paper will hold if we replace the condition $\mathscr{N}_1$ in Definition~\ref{definition:admissible} by $\mathscr{N}_2$, or $\mathscr{N}_3$, or any combination of these conditions. \end{remark} Let $(M, g, \pi)$ be an asymptotically flat initial data set with the ADM energy-momentum $(E, P)$.
If $E\ge |P|$, we define the ADM mass to be $m_{\mathrm{ADM} }:= \sqrt{E^2 - |P|^2}$. If $E<|P|$, then we define $m_{\mathrm{ADM}}=-\infty$, purely for the sake of convenience. \begin{definition} Let $(\Omega_0, g_0, \pi_0)$ be a compact initial data set with nonempty smooth boundary, satisfying the dominant energy condition. Define $\mathcal{B}$ to be the set of all admissible extensions $(M, g, \pi)$ of $(\Omega_0, g_0, \pi_0)$. If this set is nonempty, then we define the \emph{Bartnik mass} of $(\Omega_0, g_0, \pi_0)$ to be \[ m_B(\Omega_0, g_0, \pi_0):=\inf_{(M, g, \pi)\in\mathcal{B}} m_{\mathrm{ADM}}(g, \pi). \] We say that $(M, g, \pi)\in \mathcal{B}$ is a \emph{Bartnik mass minimizer} for $(\Omega_0, g_0, \pi_0)$ if it achieves this infimum. \end{definition} \begin{remark}\label{remark:pmt_corners} As long as an appropriate spacetime positive mass theorem ``with corners'' holds, each $(M, g, \pi)$ in the definition above has $E\ge|P|$, and consequently, $m_B(\Omega_0, g_0, \pi_0)\ge0$. Although we cannot find such a result stated in the literature, we observe that such a theorem holds for \emph{spin} manifolds, by essentially the same reasoning as in the proof of the time-symmetric case in~\cite[Theorem 3.1]{Shi-Tam:2002}. The only difference in the spacetime case is that the appropriate Dirac operator leads to an extra boundary term on p.~104, which precisely corresponds to the condition $\pi_0\cdot\nu_0=\pi\cdot\nu$ in Definition~\ref{definition:admissible}. Consequently, if one adjusts the definition of admissibility to demand that $\Omega_0\cup M$ glued along their common boundaries is spin, then the Bartnik mass is always nonnegative. \end{remark} \begin{proof}[Proof of Theorem~\ref{theorem:Bartnik}] We need to establish Items (1) through (4) of Theorem~\ref{theorem:Bartnik}.
By Proposition~\ref{proposition:openness} and the definition of admissibility, the deformed initial data set $(\bar{g}, \bar{\pi})$ in Item \eqref{item:deformation} of Theorem~\ref{theorem:Bartnik2} would be an admissible extension, contradicting the ADM mass minimizing property of $(M, g, \pi)$. Therefore Item~\eqref{item:non-improvable} of Theorem~\ref{theorem:Bartnik2} must hold. That is, $\sigma(g,\pi)=0$ everywhere in $\Int M$ (which is Item~\eqref{item:sigma_vanish}), and there exists a nontrivial solution $(f, X)$ on all of $\Int M$ solving the system~\eqref{equation:pair}. We apply Theorem~\ref{theorem:f_not_zero} to conclude Items~\eqref{item:Bartnik-spacetime} and~\eqref{item:spacetime_vac}. Item~\eqref{item:vanishing-J} follows from Lemma~\ref{lemma:unbounded}. \end{proof} \begin{remark} In the time-symmetric case, given $(\Omega_0, g_0, 0)$, it is an open question whether a time-symmetric Bartnik mass minimizer $(M, g, 0)$ must be unique. For an initial data set $(\Omega_0, g_0, \pi_0)$, it is clear that a Bartnik mass minimizer $(M, g, \pi)$ is \emph{not} unique, because one can take different asymptotically flat spacelike hypersurfaces in the same spacetime development to obtain different minimizing extensions. However, it is still interesting to ask whether the spacetime development of a Bartnik mass minimizer is unique. \end{remark} \subsection{Bartnik energy}\label{section:energy} We define the quasi-local energy of a compact initial data set $(\Omega_0, g_0, \pi_0)$ with nonempty smooth boundary in a similar fashion to the Bartnik mass. \begin{definition} Let $(\Omega_0, g_0, \pi_0)$ be a compact initial data set with nonempty smooth boundary, satisfying the dominant energy condition. Define $\mathcal{B}$ to be the set of all admissible extensions $(M, g, \pi)$ of $(\Omega_0, g_0, \pi_0)$ as in Definition~\ref{definition:admissible}.
If this set is nonempty, then we define the \emph{Bartnik energy} to be \[ E_B(\Omega_0, g_0, \pi_0):=\inf_{(M, g, \pi)\in\mathcal{B}} E( g, \pi). \] We say that $(M, g, \pi)\in \mathcal{B}$ is a \emph{Bartnik energy minimizer} for $(\Omega_0, g_0, \pi_0)$ if it achieves this infimum. \end{definition} In the next result, we observe that the Bartnik mass and the Bartnik energy should be equal. Physically, this may be interpreted as saying that the natural definition of the Bartnik quasi-local linear momentum, $|P_B|:=\sqrt{E_B^2-m_B^2}$, is always zero, and thus the quasi-local quantities defined by minimizing the corresponding quantities of asymptotically flat extensions do not capture ``dynamics'' information. Unfortunately, our proof requires $C^\infty$ smoothness. \begin{theorem} For this theorem only, we re-define admissibility to require admissible extensions $(g,\pi)$ to be $C^\infty_{\mathrm{loc}}$ on $\Int M$, which then slightly changes the formal definitions of $m_B$ and $E_B$. Let $(\Omega_0, g_0, \pi_0)$ be a three-dimensional compact initial data set with nonempty smooth boundary, satisfying the dominant energy condition. If an admissible extension exists and $m_B(\Omega_0, g_0, \pi_0)>0$, then \[ m_B(\Omega_0, g_0, \pi_0) = E_B(\Omega_0, g_0, \pi_0).\] Moreover, $(M, g, \pi)$ is a Bartnik energy minimizer for $(\Omega_0, g_0, \pi_0)$ if and only if it is a Bartnik mass minimizer with ADM momentum equal to zero. \end{theorem} \begin{remark} The dimension assumption is used to perform gluing to an exact Kerr initial data set in the exterior, because the spacelike slices in 4-dimensional Kerr spacetimes form an ``admissible family'' for the gluing construction. See \cite[Appendix F]{Chrusciel-Delay:2003} and Definition 4.5 and Example 4.6 in \cite{Corvino-Huang:2020}. We expect the analogous result to hold in higher dimensions. \end{remark} To prepare for the proof, we review basic facts about an $(n+1)$-dimensional asymptotically flat spacetime.
Given a coordinate chart $(t, x^1, \ldots, x^n)$ in the asymptotically flat region of spacetime, where $(x^1, \ldots, x^n)$ is a spatial asymptotically flat coordinate chart, we define a \emph{boosted slice of angle $\beta$} to be a spacelike hypersurface defined by $t = \beta x^1$ for some number $\beta \in (-1, 1)$. Those boosted slices define a family of $(n-1)$-dimensional submanifolds $\Sigma_{\beta, r}$ by intersecting the hyperplane $\{ t = \beta x^1\}$ with the cylinder $\{ |x| = r\}$. With respect to the Minkowski metric, those $\Sigma_{\beta, r}$ have null expansion $\theta^+>0$, as well as positive mean curvature, because they are the isometric images of ellipsoids in the $t=0$ slice. Therefore, with respect to a Lorentzian metric that is asymptotically flat, those submanifolds $\Sigma_{\beta, r}$ have positive $\theta^+$ and positive mean curvature for all $\beta$, for all sufficiently large $r$. We will restrict our attention to Kerr spacetimes and consider boosted slices with respect to a Boyer-Lindquist coordinate chart $(t, r,\theta, \phi)$ by letting $(x^1, x^2, x^3)$ be the Cartesian coordinates corresponding to $(r, \theta, \phi)$. \begin{proof} We clearly have $m_B(\Omega_0, g_0, \pi_0)\le E_B(\Omega_0, g_0, \pi_0)$ by definition, so we need only prove the reverse inequality. For any $\epsilon>0$, our hypotheses imply that there exists an admissible extension $(M, g, \pi)$ of $(\Omega_0, g_0, \pi_0)$ such that $0< m_{\mathrm{ADM}}(g, \pi) < m_B(\Omega_0, g_0, \pi_0)+\epsilon$. We will perform a two-step process to construct another admissible extension $(M, \tilde{g}, \tilde{\pi})$ such that \[ E(\tilde{g}, \tilde{\pi}) < m_{\mathrm{ADM}}(g, \pi) +\epsilon,\] which implies the desired inequality $E_B(\Omega_0, g_0, \pi_0)\le m_B(\Omega_0, g_0, \pi_0)$.
The first step is to apply a gluing theorem of Corvino and the first author~\cite[Theorem~1.4]{Corvino-Huang:2020} to construct initial data $(\bar{g}, \bar{\pi})$ on $M$ with the following properties: For $R>0$ sufficiently large, \begin{itemize} \item $(\bar{g}, \bar{\pi})= (g, \pi)$ on $\Omega_R$, where $\Omega_R$ is the compact subset enclosed by $|x|=R$, \item $(\bar{g}, \bar{\pi})$ is equal to the initial data on a boosted slice of the Kerr spacetime outside $\Omega_{2R}$, \item $(\bar{g}, \bar{\pi})$ satisfies the DEC everywhere, \item $0< m_{\mathrm{ADM}}(\bar{g}, \bar{\pi}) < m_{\mathrm{ADM}}(g, \pi) +\epsilon$, \item $(\bar{g}, \bar{\pi})\in C^\infty_{\mathrm{loc}}(\Int M)$\footnote{This is the step of the argument that requires $C^\infty_{\mathrm{loc}}$ smoothness. In general, applying this gluing theorem causes a loss of derivatives.} and $\| (\bar{g}, \bar{\pi})- (g, \pi)\|_{C^{2,\alpha}_{-q'}(M)\times C^{1,\alpha}_{-1-q'}(M)}<\epsilon$ for some $q' \in (\tfrac{1}{2}, q)$. \end{itemize} We note that the $\epsilon$-closeness statement in $C^{2,\alpha}_{-q'}(M)\times C^{1,\alpha}_{-1-q'}(M)$ is implicitly contained in the proof of \cite[Theorem 4.9]{Corvino-Huang:2020}: It is clear that the Kerr initial data is $\epsilon$-close to $(g, \pi)$ outside $\Omega_{2R}$ for $R$ large by asymptotic flatness. It suffices to verify such an estimate in the transition region $A_R:=\Omega_{2R} \smallsetminus \Omega_R$ of the deformation, denoted by $(h, w)$. The estimate of the rescaled $(h^R, w^R)$ in the unit annulus is obtained in \cite[Proposition 4.4]{Corvino-Huang:2020}, which implies $\|(h, w)\|_{C^{2,\alpha}_{-q}(A_R)\times C_{-1-q}^{1,\alpha}(A_R)}\le C$. Hence, the estimate of $(h, w)$ in the weighted norm with the slower fall-off rate $q'$ can be made arbitrarily small for large $R$.
Together with Proposition~\ref{proposition:openness} and Remark~\ref{remark:admissible}, we see that $(M, \bar{g}, \bar{\pi})$ is admissible, as long as $\epsilon$ is small enough. For the second step, we look at the portion of $(M, \bar{g}, \bar{\pi})$ outside $\Omega_{2R}$ that is exactly a boosted slice in the Kerr spacetime, which we may express as $t = \beta x^1$ for some number $\beta\in (-1, 1)$. We now ``bend'' this slice to the $t=0$ slice by letting $t = \eta(r) x^1$, where $\eta(r)$ is a smooth scalar function depending only on $r$ such that $\eta(2R) = \beta$ and $\eta(r) = 0$ for $r\ge 3R$. We define a new $(\tilde{g}, \tilde{\pi})$ as follows: \begin{itemize} \item $(\tilde{g}, \tilde{\pi}) = (\bar{g}, \bar{\pi})$ on $\Omega_{2R}$. \item Outside $\Omega_{2R}$, $(\tilde{g}, \tilde{\pi})$ equals the induced data on $\{ t = \eta(r) x^1\}$ in the Kerr spacetime. \end{itemize} One feature of this bending process is that it ensures that for $R$ sufficiently large, the portion of $(M, \tilde{g}, \tilde{\pi})$ outside of $\Omega_{2R}$ is foliated by hypersurfaces of positive null expansion and positive mean curvature. Since the boosted slice $t = \beta x^1$ has the same ADM mass as the $t=0$ slice, and the latter has zero linear momentum, we have \[ E(\tilde{g}, \tilde{\pi})=m_{\mathrm{ADM}}(\tilde{g}, \tilde{\pi})=m_{\mathrm{ADM}}(\bar{g}, \bar{\pi})< m_{\mathrm{ADM}}(g, \pi) +\epsilon.\] The only thing left to verify is admissibility of $(M, \tilde{g}, \tilde{\pi})$. The first two properties in Definition~\ref{definition:admissible} are clear, which leaves only the no-horizon condition. The part of $(\tilde{g}, \tilde{\pi})$ that differs from $(\bar{g}, \bar{\pi})$ can be foliated by hypersurfaces of positive null expansion, so the comparison principle for $\theta^+$ implies that condition $\mathscr{N}_1$ on $(\bar{g}, \bar{\pi})$ implies condition $\mathscr{N}_1$ on $(\tilde{g}, \tilde{\pi})$ as well.
(The same is true for the conditions $\mathscr{N}_2$ and $\mathscr{N}_3$.) Finally, the last sentence of the theorem is an immediate consequence of the equality $m_B(\Omega_0, g_0, \pi_0) = E_B(\Omega_0, g_0, \pi_0)$. \end{proof} \appendix \section{Asymptotically flat manifolds}\label{section:asymp_flat} Let $M$ be a connected, $n$-dimensional manifold, possibly with boundary. For $q\in\left(\tfrac{n-2}{2}, n-2\right)$ and $\alpha\in(0,1)$, we say that an initial data set $(M, g, \pi)$ is \emph{asymptotically flat} of type $(q, \alpha)$ if there exists a compact subset $K \subset M$ and a diffeomorphism $M\smallsetminus K \cong \mathbb{R}^n \smallsetminus B$ such that \[ (g - g_{\mathbb{E}}, \pi) \in C^{2,\alpha}_{-q} (M)\times C^{1,\alpha}_{-1-q} (M) \] and \[ (\mu, J) \in L^1 (M), \] where $g_{\mathbb{E}}$ is a Riemannian background metric on $M$ that is equal to the Euclidean metric in the coordinate chart $M\smallsetminus K \cong \mathbb{R}^n \smallsetminus B$. Note that with this definition, $(M, g)$ is necessarily complete. The function spaces above, $C^{k,\alpha}_{-q}$, refer to \emph{weighted H\"{o}lder spaces} (as defined in~\cite{Huang-Lee:2020}, for example). Note that our convention is such that $f\in C^{k,\alpha}_{-q}(M)$ if and only if $f\in C^{k,\alpha}_{\mathrm{loc}}(M)$ and there is a positive constant $C$ such that, for any multi-index $I$ with $|I|\le k$, \begin{align*} |(\partial^I f)(x)|\le C|x|^{-|I|-q} \quad \mbox{ and } \quad [f]_{k,\alpha; B_1(x)} \le C|x|^{-k-\alpha-q} \end{align*} on $M\smallsetminus K$. We use the notation $O_{k,\alpha}(|x|^{-q})$ to denote an arbitrary function that lies in $C^{k,\alpha}_{-q}$, and we also simply write $O(|x|^{-q})$ for $O_0(|x|^{-q})$. Let $\Omega_k$ be a sequence of compact subsets with smooth boundary that exhausts $\mathbb{R}^n$, and define $\Sigma_k := \partial \Omega_k$.
The \emph{ADM energy} $E$ and the \emph{ADM linear momentum} $P=(P_1, \dots, P_n)$ of an asymptotically flat initial data set $(M, g, \pi)$ are defined as \begin{align*} E&:= \tfrac{1}{2(n-1)\omega_{n-1}} \lim_{k\to \infty} \int_{\Sigma_k}\sum_{i,j=1}^n (g_{ij,i}-g_{ii,j})\nu^j \, d\mu\\ P_i &:= \tfrac{1}{(n-1)\omega_{n-1}} \lim_{k\to \infty} \int_{\Sigma_k} \sum_{j=1}^n \pi_{ij} \nu^j \, d\mu \end{align*} where the integrals are computed on $\Sigma_k$ in $M\smallsetminus K \cong \mathbb{R}^n \smallsetminus B$, $\nu^j$ is the outward unit normal to~$\Sigma_k$, $d\mu$ is the measure on $\Sigma_k$ induced by the Euclidean metric, $\omega_{n-1}$ is the volume of the standard $(n-1)$-dimensional unit sphere, and the commas denote partial differentiation in the coordinate directions. The condition $q>\frac{n-2}{2}$ and integrability of $(\mu, J)$ imply that the limits in the definition of ADM energy-momentum exist and are independent of the choice of exhaustion $\Sigma_k$. \section{Spacetime with a Killing vector field}\label{section:spacetime} For a spacetime admitting a Killing vector field, the Einstein tensor along a spacelike hypersurface can be expressed in terms of the lapse-shift pair of the Killing vector. While these formulas have appeared in the literature, because of different sign and normalization conventions, we include a self-contained discussion of the curvature formulas that will be used elsewhere in this paper. Let $(\mathbf{N}, \mathbf{g})$ be an $(n+1)$-dimensional, time-oriented spacetime equipped with a spacelike hypersurface $U$, and let $G:= \mathrm{Ric}_{\mathbf{g}} - \tfrac{1}{2} R_{\mathbf{g}} \mathbf{g}$ denote the Einstein tensor of $\mathbf{g}$. Consider any local frame $e_0, e_1, \ldots, e_n$ of~$\mathbf{N}$ such that $e_0=\mathbf{n}$ is the future unit normal of the spacelike hypersurface $U$ while $e_1,\ldots, e_n$ are tangent to $U$.
As a simple consequence of the Gauss and Codazzi equations, we have \begin{align*} G_{00} &= \tfrac{1}{2} (R_g - |k|_g^2 + (\tr_g k)^2) \\ G_{0i} & = (\Div_g k)_i - \nabla_i (\tr_g k), \end{align*} where $g$ is the induced metric on the hypersurface, and we define the second fundamental form $k$ of a spacelike hypersurface to be the tangential component of $\bm{\nabla} \mathbf{n}$. This is the same as saying \begin{align} \begin{split}\label{equation:Einstein-0i} G_{00} &= \mu \\ G_{0i} & = J_i, \end{split} \end{align} where $\mu$ and $J$ are the energy and current densities of the induced initial data $(g,\pi)$. Of course, this was the original motivation for the definitions of $\mu$ and $J$. Let $\mathbf{Y}$ be a vector field on $\mathbf{N}$ which is transverse to $U$ everywhere. Given coordinates $(x^1, \ldots, x^n)$ on $U$, we extend these functions to a neighborhood of $U$ by making them constant on the flow lines of $\mathbf{Y}$. We also define a function $u$ by integrating $\mathbf{Y}$ from $u=0$ at~$U$ so that $\frac{\partial}{\partial u}=\mathbf{Y}$. Then $(u, x^1, \ldots, x^n)$ defines coordinates in a neighborhood of $U$ in $\mathbf{N}$, and it is straightforward to see that the metric must take the form \begin{align}\label{equation:Killing-development} \mathbf{g} = -4f^2 du^2 + g_{ij} (dx^i + X^i du) (dx^j + X^j du), \end{align} where $g_{ij}$ is the induced metric on each constant $u$ slice, and the decomposition $\mathbf{Y} = 2f \mathbf{n} +X$ holds along each one of these slices, where $\mathbf{n}$ is the future unit normal of the slice. \begin{lemma}\label{lemma:Einstein_tangential} Suppose the spacetime $(\mathbf{N}, \mathbf{g})$ takes the form \eqref{equation:Killing-development}.
Then the following equations hold, where the $i,j$ indices run from $1$ to $n$: \begin{enumerate} \item The second fundamental form of the constant $u$-slices is expressed as: \begin{equation}\label{equation:second_fund} k_{ij} = \frac{1}{4f}\left[(L_{\mathbf{Y}} g)_{ij}- (L_X g)_{ij}\right]. \end{equation} \item The Einstein tensor along the constant~$u$ slices takes the form: \begin{align} \begin{split}\label{equation:Einstein-ij} G_{ij} &= \left[R_{ij} -\tfrac{1}{2}R_g g_{ij}\right] + \left[(\tr_g k)k_{ij} - 2k_{i\ell}k^\ell_j \right] + \left[ -\tfrac{1}{2}(\tr_g k)^2 +\tfrac{3}{2}|k|_g^2 \right]g_{ij}\\ &\quad+ f^{-1}\left[ -\tfrac{1}{2}(L_X k)_{ij} + \tfrac{1}{2}\tr_g (L_X k)g_{ij} - f_{;ij} +(\Delta_g f) g_{ij} \right]\\ &\quad+ (2f)^{-1}\left[ (L_{\mathbf{Y}} k)_{ij} - \tr_g (L_{\mathbf{Y}} k)g_{ij} \right]. \end{split} \end{align} \end{enumerate} \end{lemma} \begin{proof} The first equation is just a re-statement of a standard computation of $\frac{\partial }{\partial u} g_{ij}$. For the second equation, the Gauss equation implies that \[ \mathrm{Ric}_{\mathbf{g}}(e_i, e_j) = R_{ij} + (\tr_g k) k_{ij} - k_{i\ell} k^\ell_{ j} -\mathrm{Rm}_{\mathbf{g}}(\mathbf{n}, e_i, e_j, \mathbf{n}),\] and the trace gives \[ R_{\mathbf{g}} = R_g + (\tr_g k)^2 - |k|_g^2 -2 \mathrm{Ric}_{\mathbf{g}}(\mathbf{n},\mathbf{n}). \] To compute $\mathrm{Rm}_{\mathbf{g}}(\mathbf{n}, e_i, e_j, \mathbf{n})$, we must understand $\bm{\nabla} \mathbf{n}$ better. While the tangential part is $k$, we can show that $\bm{\nabla}_{\mathbf{n}} \mathbf{n} =f^{-1}\nabla f$. Using the fact that $\mathbf{n}=-2f \bm{\nabla} u$, we have, for $e_i$ tangential to the constant $u$-slices, \begin{align*} \mathbf{g} (\bm{\nabla}_{\mathbf{n}} \mathbf{n}, e_i) &= -2 \bm{\nabla}^2 u (f\mathbf{n}, e_i)\\ &=-2 \bm{\nabla}_{e_i} \bm{\nabla}_{f\mathbf{n}}u + 2\bm{\nabla}_{\bm{\nabla}_{e_i} (f\mathbf{n})} u\\ &= 2f^{-1} e_i(f) \bm{\nabla}_{f\mathbf{n}} u\\ &= f^{-1} e_i(f). 
\end{align*} Using our knowledge of $\bm{\nabla} \mathbf{n}$, we can show that \begin{align*} \mathrm{Rm}_{\mathbf{g}}(\mathbf{n}, e_i, e_j, \mathbf{n}) &= -f^{-1} \mathbf{g}(\bm{\nabla}_{f\mathbf{n}} \bm{\nabla}_{e_i} \mathbf{n} - \bm{\nabla}_{e_i} \bm{\nabla}_{f\mathbf{n}} \mathbf{n} - \bm{\nabla}_{[f\mathbf{n}, e_i]} \mathbf{n}, e_j)\\ &=f^{-1} \mathbf{g} \left(\bm{\nabla}_{e_i} \mathbf{n}, \bm{\nabla}_{e_j} (f\mathbf{n})\right) + f^{-1} \mathbf{g} (\bm{\nabla}_{e_i} \bm{\nabla}_{f\mathbf{n}} \mathbf{n}, e_j) \\ &\quad - f^{-1} \left[ \mathbf{g}(\bm{\nabla}_{f\mathbf{n}} \bm{\nabla}_{e_i} \mathbf{n} - \bm{\nabla}_{[f\mathbf{n}, e_i]} \mathbf{n}, e_j) +\mathbf{g} \left(\bm{\nabla}_{e_i} \mathbf{n}, \bm{\nabla}_{e_j} (f\mathbf{n})\right) \right]\\ &= k_{i\ell} k^\ell_{ j} + f^{-1}\left[f_{;ij} - (L_{f\mathbf{n}} k)_{ij}\right], \end{align*} and the trace gives \[ \mathrm{Ric}_{\mathbf{g}}(\mathbf{n},\mathbf{n}) = |k|_g^2 +f^{-1}[\Delta_g f - \tr_g(L_{f\mathbf{n}} k)]. \] Plugging these into our equations for $\mathrm{Ric}_{\mathbf{g}}$ and $R_\mathbf{g}$, we obtain \[ \mathrm{Ric}_{\mathbf{g}}(e_i, e_j) = R_{ij} + (\tr_g k) k_{ij} - 2k_{i\ell} k^\ell_{ j} + f^{-1}[ (L_{f\mathbf{n}} k)_{ij} -f_{;ij}],\] and \[ R_{\mathbf{g}} = R_g + (\tr_g k)^2 - 3|k|_g^2 +2f^{-1}[ \tr_g(L_{f\mathbf{n}} k)-\Delta_g f ]. \] Combining these two equations and using the fact that $\mathbf{Y} = 2f\mathbf{n}+X$ yields the desired result. \end{proof} Now let us consider what happens when $\mathbf{Y}$ is Killing. We record the following easy fact. \begin{lemma}\label{lemma:Killing-independent-u} Suppose the spacetime $(\mathbf{N}, \mathbf{g})$ takes the form \eqref{equation:Killing-development}. Then $\mathbf{Y} := \frac{\partial}{\partial u}$ is Killing if and only if the functions $g_{ij}$, $f$ and $X^i$ are all independent of~$u$. In particular, if $\mathbf{Y}$ is Killing, then $L_{\mathbf{Y}} g$ and $L_{\mathbf{Y}} k$ both vanish.
\end{lemma} \begin{corollary} Let $(\mathbf{N}, \mathbf{g})$ be a spacetime admitting a Killing vector field $\mathbf{Y}$. Let $U$ be a spacelike hypersurface with the induced data $(g, \pi)$ and future unit normal $\mathbf{n}$. Suppose $\mathbf{Y}$ is transverse to $U$ and $\mathbf{Y} = 2f \mathbf{n} + X$ along $U$. Then along $U$ \begin{align}\label{equation:Killing} \tfrac{1}{2} (L_X g)_{ij} =\left(\tfrac{2}{n-1} (\tr_g \pi) g_{ij} - 2\pi_{ij} \right)f, \end{align} and the tangential components of the Einstein tensor take the form: \begin{align} \begin{split}\label{equation:Einstein_pi} G_{ij} &= \left[R_{ij} -\tfrac{1}{2}R_g g_{ij}\right]+ \left[- \tfrac{3}{n-1}(\tr_g \pi)\pi_{ij} + 2\pi_{i\ell}\pi^\ell_j \right]\\ &\quad + \left[ \tfrac{1}{2(n-1)}(\tr_g \pi)^2 - \tfrac{1}{2}|\pi|_g^2 \right] g_{ij} \\ &\quad+f^{-1}\left[-\tfrac{1}{2}(L_X \pi)_{ij} - f_{;ij} +(\Delta_g f)g_{ij}\right]. \end{split} \end{align} \end{corollary} \begin{proof} As mentioned above, since $\mathbf{Y}$ is Killing, $L_{\mathbf{Y}} g$ and $L_{\mathbf{Y}} k$ are both zero, and the Corollary is then a direct (but tedious) consequence of Lemma~\ref{lemma:Einstein_tangential} after expressing the quantities involving $k$ in terms of $\pi$. In more detail, we use the equations \begin{align*} k_{ij} &= g_{i\ell} g_{jm} \pi^{\ell m} -\tfrac{1}{n-1}(\tr \pi) g_{ij}\\ (L_X k)_{ij} &= (L_X \pi)_{ij} + 2 \pi^\ell_j (L_X g)_{i\ell} -\tfrac{1}{n-1}(\tr \pi)(L_X g)_{ij} \\ &\quad -\tfrac{1}{n-1} \left[\tr \, (L_X \pi) + \pi^{\ell m} (L_X g)_{\ell m}\right] g_{ij}, \end{align*} where $(L_X \pi)_{ij} = g_{i\ell} g_{jm} (L_X\pi)^{\ell m}$. Here and below, the covariant derivative, trace, and norm are all with respect to $g$.
Equation~\eqref{equation:Killing} follows immediately from~\eqref{equation:second_fund}, and substituting into~\eqref{equation:Einstein-ij} yields: \begin{align*} G_{ij} &= \left[R_{ij} -\tfrac{1}{2}R g_{ij}\right]+ \left[ \tfrac{3}{n-1}(\tr \pi)\pi_{ij} - 2\pi_{i\ell}\pi^\ell_j \right] + \left[ -\tfrac{3}{2(n-1)}(\tr \pi)^2 +\tfrac{3}{2}|\pi|^2 \right] g_{ij} \\ &\quad+ f^{-1} \left[-\tfrac{1}{2}(L_X \pi)_{ij} - \pi^\ell_j (L_X g)_{i\ell} + \tfrac{1}{2(n-1)}(\tr \pi)(L_X g)_{ij}\right] \\ &\quad+f^{-1} \left[ \tfrac{1}{2}\pi^{\ell m} (L_X g)_{\ell m} - \tfrac{1}{n-1}(\tr\pi)(\Div X)\right] g_{ij} \\ &\quad +f^{-1} \left[- f_{;ij} +(\Delta f)g_{ij}\right]. \end{align*} Using equation~\eqref{equation:Killing} and its trace, we can eliminate the $L_X g$ and $\Div X$ terms to obtain \eqref{equation:Einstein_pi}. \end{proof} \section{Initial data in pp-waves}\label{se:pp} We prove Lemma~\ref{lemma:pp-initial}. Recall that a pp-wave spacetime metric is defined by \begin{equation} \label{eqn:pp} \mathbf{g} = 2 du\,dx^n + S\, (dx^n)^2 +\sum_{a=1}^{n-1} (dx^a)^2 \end{equation} where $S$ is a function independent of $u$. Observe that when $S>0$, the metric $\mathbf g$ takes the form \eqref{equation:Killing-development} with the induced metric on the $u$-slices \begin{equation} \label{eqn:pp-slice} g = S(dx^n)^2 +\sum_{a=1}^{n-1} (dx^a)^2, \end{equation} with $(f, X)$ given by \begin{align*} f = \tfrac{1}{2} S^{-\frac{1}{2}} \quad \mbox{ and } \quad X= S^{-1} \tfrac{\partial}{\partial x^n}. \end{align*} Note that the orientation of $\mathbf n$ is chosen so that $\mathbf{Y}=\tfrac{\partial}{\partial u}$ is future-pointing, that is, $-\mathbf g(\tfrac{\partial}{\partial u}, \mathbf n) = 2f >0$. In the computations below, we will use commas to denote partial differentiation. We first compute the Christoffel symbols of $g$. For convenience, define $\Gamma_{ijk} := g(\nabla_{\partial_i} \partial_j, \partial_k) = \frac{1}{2} (g_{ik,j}+ g_{jk,i} - g_{ij,k})$. 
Then, for $a, b, c=1,\dots, n-1$, \begin{align}\label{eqn:Christoffel-raised} \begin{split} \Gamma_{abc} &= \Gamma_{nab} = \Gamma_{anb} = \Gamma_{abn}= 0, \quad \Gamma_{nna} = -\tfrac{1}{2} S_{,a}, \\ \Gamma_{nan} &= \Gamma_{ann} = \tfrac{1}{2} S_{,a}, \quad \Gamma_{nnn} = \tfrac{1}{2} S_{,n} \end{split} \end{align} and the Christoffel symbols of $g$ are given by \begin{align}\label{eqn:Christoffel} \begin{split} \Gamma_{ab}^c &=\Gamma_{na}^b = \Gamma_{an}^b= \Gamma_{ab}^n = 0,\quad \Gamma_{nn}^a = -\tfrac{1}{2} S_{,a},\\ \Gamma_{na}^n&=\tfrac{1}{2} S^{-1} S_{,a}, \quad \Gamma_{nn}^n= \tfrac{1}{2} S^{-1} S_{,n}. \end{split} \end{align} \begin{lemma}\label{le:pi} If $S>0$, then the conjugate momentum of the constant $u$-slices of the metric~\eqref{eqn:pp}, as a $(2, 0)$-tensor, is given by \begin{align*} \pi^{nn} &= 0\\ \pi^{na}&=\pi^{an} = \tfrac{1}{2} S^{-\frac{3}{2}}\tfrac{\partial S}{\partial x^a}\\ \pi^{ab} &= -\tfrac{1}{2} S^{-\frac{3}{2}}\tfrac{\partial S}{\partial x^n}\, \delta^{ab} \end{align*} where the $a$ and $b$ indices run from $1$ to $n-1$. \end{lemma} \begin{proof} Note that the covector of $X$ has the coefficients $X_n=1$ and $X_a=0$ for $a=1,\dots, n-1$. Since $L_{\mathbf Y} g=0$, we can use~\eqref{equation:second_fund} and~\eqref{eqn:Christoffel} to compute the second fundamental form $k$ to be \begin{align*} k_{ij} &= - \tfrac{1}{4f} (X_{i;j} + X_{j;i} )= -\tfrac{1}{2} S^{\frac{1}{2}} \big(X_{i,j} + X_{j,i} - 2\Gamma_{ij}^n X_n\big) = S^{\frac{1}{2}} \, \Gamma_{ij}^n \end{align*} which gives \begin{align*} k_{nn}&= \tfrac{1}{2} S^{-\tfrac{1}{2}} S_{,n}\\ k_{na}&=k_{an}= \tfrac{1}{2} S^{-\frac{1}{2}}S_{,a}\\ k_{ab}&=0\\ \tr_g k &= \tfrac{1}{2} S^{-\frac{3}{2}} S_{,n}. \end{align*} Raising the indices of $k$ and using the relation $\pi^{ij} = k^{ij} - (\tr_g k) g^{ij}$, we obtain the desired result. 
\end{proof} \begin{lemma} If $g$ is given by the formula~\eqref{eqn:pp-slice} and $\pi$ is given by the formulas in Lemma~\ref{le:pi}, then their energy and current densities $(\mu, J)$ are given by \begin{align*} \mu =-\tfrac{1}{2} S^{-1} \Delta' S\quad \mbox{ and } \quad J =\tfrac{1}{2} S^{-\frac{3}{2}} (\Delta' S) \tfrac{\partial}{\partial x^n}. \end{align*} Consequently, $|\mu| = |J|_g$, and $\mu\ge0$ so long as $\Delta' S \le0$. \end{lemma} This can be seen as a fairly direct consequence of Lemma~\ref{le:pi} combined with the fact that the Einstein tensor of $\mathbf{g}$ from~\eqref{eqn:pp} is $G_{\alpha\beta}=-\tfrac{1}{2} (\Delta' S) Y_\alpha Y_\beta$, but here we will provide a direct proof in terms of the initial data $(g,\pi)$. \begin{proof} The most complicated step is to compute the scalar curvature of $g$. Using the general formula \[ \mathrm{Rm}(\partial_i,\partial_j, \partial_k, \partial_{\ell} ) = \Gamma_{jk\ell, i} - \Gamma_{ik\ell, j} - \Gamma_{jk}^m \Gamma_{i\ell m} + \Gamma_{ik}^m \Gamma_{j\ell m}, \] for $i,j,k,\ell, m$ ranging from $1$ to~$n$, it follows from~\eqref{eqn:Christoffel-raised} and~\eqref{eqn:Christoffel} that the only nonzero curvature components are \begin{align*} \mathrm{Rm}(\partial_n, \partial_a, \partial_b, \partial_n) &= \Gamma_{abn,n} - \Gamma_{nbn,a} - \Gamma_{ab}^m \Gamma_{nnm} + \Gamma_{nb}^m \Gamma_{anm}\\ &=-\tfrac{1}{2} S_{,ab} + \Gamma_{nb}^n \Gamma_{ann}\\ &=-\tfrac{1}{2} S_{,ab} + \tfrac{1}{4} S^{-1} S_{,a} S_{,b} \qquad \mbox{ for $a, b=1,\dots, n-1$}. \end{align*} We then obtain \begin{align*} \mathrm{Ric}(\partial_n, \partial_n )&= -\tfrac{1}{2} \Delta' S + \tfrac{1}{4} S^{-1} |\nabla' S|^2 \end{align*} where $\Delta'$ and $\nabla' $ represent the Euclidean Laplacian and gradient, respectively, in the $(x^1,\dots, x^{n-1})$ variables.
Therefore, the scalar curvature of $g$ is given by \begin{align*} R_g &= g^{nn} \mathrm{Ric}(\partial_n, \partial_n) +\sum_{a=1}^{n-1} g^{aa} \mathrm{Ric}(\partial_a, \partial_a)\\ &= g^{nn} \mathrm{Ric}(\partial_n, \partial_n) + \sum_{a=1}^{n-1} g^{aa} g^{nn} \mathrm{Rm}(\partial_a,\partial_n, \partial_n, \partial_a)\\ &=2g^{nn} \mathrm{Ric}(\partial_n, \partial_n)\\ &=-S^{-1} \Delta' S + \tfrac{1}{2} S^{-2} |\nabla' S|^2. \end{align*} Meanwhile, from the computations in Lemma~\ref{le:pi}, we have \begin{align*} |k|^2 &= \tfrac{1}{4} S^{-3} (S_{,n})^2 + \tfrac{1}{2} S^{-2} |\nabla' S|^2\\ (\tr_g k)^2 &= \tfrac{1}{4} S^{-3} (S_{,n})^2. \end{align*} Together with the formula for $R_g$ above, we obtain \begin{align*} \mu &= \tfrac{1}{2} (R_g - |k|_g^2 + (\tr_gk)^2)=-\tfrac{1}{2} S^{-1} \Delta' S. \end{align*} For $J$, we compute \begin{align*} J^j &= \pi^{ij}_{;i} =\pi^{ij}_{,i} + \Gamma^{i}_{\ell i} \pi^{\ell j} + \Gamma^j_{\ell i} \pi^{\ell i} \end{align*} and insert~\eqref{eqn:Christoffel} to obtain \begin{align*} J^a &= \pi^{ia}_{,i} + \Gamma^{i}_{\ell i} \pi^{\ell a} + \Gamma^a_{\ell i} \pi^{\ell i}\\ &=\pi^{na}_{,n} + \pi^{ba}_{,b} + \Gamma^n_{n n} \pi^{n a} + \Gamma^n_{bn} \pi^{b a} + \Gamma^a_{nn} \pi^{nn}\\ &=\tfrac{1}{2}\big( S^{-\tfrac{3}{2} }S_{,a} \big)_{,n} -\tfrac{1}{2} \big(S^{-\tfrac{3}{2}} S_{,n} \big)_{,a} + \tfrac{1}{4} S^{-\tfrac{5}{2}} S_{,a} S_{,n}-\tfrac{1}{4} S^{-\tfrac{5}{2}} S_{,a} S_{,n}\\ &=0\\ J^n &=\pi^{in}_{,i} + \Gamma^i_{\ell i} \pi^{\ell n} + \Gamma^n_{\ell i} \pi^{\ell i}\\ &=\pi^{an}_{,a} + \Gamma^n_{an} \pi^{an} + 2\Gamma^n_{bn} \pi^{bn} \\ &=\pi^{an}_{,a} +3 \Gamma^n_{an} \pi^{an} \\ &=\tfrac{1}{2} \sum_{a=1}^{n-1} \big(S^{-\tfrac{3}{2}} S_{,a}\big)_{,a} + \tfrac{3}{4} S^{-\tfrac{5}{2} } \sum_{a=1}^{n-1}(S_{,a} )^2\\ &=\tfrac{1}{2} S^{-\tfrac{3}{2}} \Delta' S. \end{align*} \end{proof}
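The final claim $|\mu| = |J|_g$ in the lemma is immediate from these formulas, since $|J|_g^2 = g_{nn} (J^n)^2 = S \cdot \tfrac{1}{4} S^{-3} (\Delta' S)^2 = \tfrac{1}{4} S^{-2} (\Delta' S)^2 = \mu^2$. As a quick sanity check of this algebraic step, here is a short numerical verification (a sketch; the sampled values standing in for $S > 0$ and $\Delta' S$ at a point are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(100):
    S = random.uniform(0.1, 10.0)    # a sample value of S > 0 at a point
    lap = random.uniform(-5.0, 5.0)  # a sample value of Delta' S at that point
    mu = -0.5 * lap / S                  # mu = -(1/2) S^{-1} Delta' S
    Jn = 0.5 * S ** (-1.5) * lap         # J = J^n d/dx^n with J^n = (1/2) S^{-3/2} Delta' S
    J_norm = math.sqrt(S * Jn ** 2)      # |J|_g^2 = g_{nn} (J^n)^2 = S (J^n)^2
    assert math.isclose(abs(mu), J_norm, rel_tol=1e-12, abs_tol=1e-15)
print("checked |mu| = |J|_g on 100 random samples")
```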
\begin{document} \title[]{On stochastic differential equations with arbitrary slow convergence rates for strong approximation} \author[Jentzen] {Arnulf Jentzen} \address{ Seminar f\"ur Angewandte Mathematik\\ Departement Mathematik\\ HG G 58.1\\ R\"amistrasse 101 \\ 8092 Z\"urich\\ Switzerland} \email{[email protected]} \author[M\"uller-Gronbach] {Thomas M\"uller-Gronbach} \address{ Fakult\"at f\"ur Informatik und Mathematik\\ Universit\"at Passau\\ Innstrasse 33 \\ 94032 Passau\\ Germany} \email{[email protected]} \author[Yaroslavtseva] {Larisa Yaroslavtseva} \address{ Fakult\"at f\"ur Informatik und Mathematik\\ Universit\"at Passau\\ Innstrasse 33 \\ 94032 Passau\\ Germany} \email{[email protected]} \begin{abstract} In the recent article [Hairer, M., Hutzenthaler, M., \& Jentzen, A., Loss of regularity for Kolmogorov equations, \emph{Ann.\ Probab.} {\bf 43} (2015), no. 2, 468--527] it has been shown that there exist stochastic differential equations (SDEs) with infinitely often differentiable and globally bounded coefficients such that the Euler scheme converges to the solution in the strong sense but with no polynomial rate. Hairer et al.'s result naturally leads to the question whether this slow convergence phenomenon can be overcome by using a more sophisticated approximation method than the simple Euler scheme. In this article we answer this question in the negative. We prove that there exist SDEs with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion converges in absolute mean to the solution with a polynomial rate.
Even worse, we prove that for every arbitrarily slow convergence speed there exist SDEs with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. \end{abstract} \maketitle \section{Introduction} Recently, it has been shown in Theorem~5.1 in Hairer et al.~\cite{hhj12} that there exist stochastic differential equations (SDEs) with infinitely often differentiable and globally bounded coefficients such that the Euler scheme converges to the solution but with no polynomial rate, neither in the strong sense nor in the numerically weak sense. In particular, Hairer et al.'s work~\cite{hhj12} includes the following result as a special case. \begin{theorem}[Slow convergence of the Euler scheme] \label{thm:intro1} Let $ T \in (0,\infty) $, $ d \in \{ 4, 5, \dots \} $, $ \xi \in \R^d $. Then there exist infinitely often differentiable and globally bounded functions $ \mu, \sigma \colon \R^d\to \R^d$ such that for every probability space $ ( \Omega, \mathcal{F}, \PP ) $, every normal filtration $ ( \mathcal{F}_t )_{ t \in [0,T] } $ on $ ( \Omega, \mathcal{F}, \PP ) $, every standard $ ( \mathcal{F}_t )_{ t \in [0,T] } $-Brownian motion $ W \colon [0,T] \times \Omega \to \R $ on $ ( \Omega, \mathcal{F}, \PP ) $, every continuous $ ( \mathcal{F}_t )_{ t \in [0,T] } $-adapted stochastic process $ X \colon [0,T] \times \Omega \to \R^d $ with $ \forall \, t \in [0,T] \colon \PP\big( X(t) = \xi + \int_0^t \mu\big( X(s) \big) \, ds + \int_0^t \sigma\big( X(s) \big) \, dW(s) \big) = 1 $, every sequence of mappings $ Y^n \colon \{ 0, 1, \dots, n \} \times \Omega \to \R^d $, $ n \in \N $, with $ \forall \, n \in \N, k \in \{ 0, 1, \dots, n \} \colon Y^n_k = \xi + \sum_{ l = 0 }^{ k - 1 } \big[ \mu\big( Y^n_l \big) \frac{ T }{ n } + \sigma\big( Y^n_l \big) \big( W(( l + 1 ) T / n ) - W( l 
T / n ) \big) \big] $, and every $ \alpha \in (0,\infty) $ we have \begin{equation} \lim_{ n \to \infty } \big( n^{ \alpha } \cdot \EE\big[ \| X( T ) - Y^n_n \| \big] \big) = \infty . \end{equation} \end{theorem} Theorem \ref{thm:intro1} naturally leads to the question whether this slow convergence phenomenon can be overcome by using a more sophisticated approximation method than the simple Euler scheme. Indeed, the literature on approximation of SDEs contains a number of results on approximation schemes that are specifically designed for non-Lipschitz coefficients and in fact achieve polynomial strong convergence rates for suitable classes of such SDEs (see, e.g., \cite{h96,hms02,Schurz2006,MaoSzpruch2013Rate,HutzenthalerJentzenKloeden2012, WangGan2013,Sabanis2013ECP,Sabanis2013Arxiv,Beynetal2014,TretyakovZhang2013} for SDEs with monotone coefficients and see, e.g., \cite{BerkaouiBossyDiop2008,GyoengyRasonyi2011,DereichNeuenkirchSzpruch2012, Alfonsi2013,NeuenkirchSzpruch2014,HutzenthalerJentzen2014, HutzenthalerJentzenNoll2014CIR,ChassagneuxJacquierMihaylov2014} for SDEs with possibly non-monotone coefficients) and one might hope that one of these schemes is able to overcome the slow convergence phenomenon stated in Theorem~\ref{thm:intro1}. In this article we destroy this hope by answering the question posed above in the negative. We prove that there exist SDEs with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion (see \eqref{eq:intro2} for details) converges in absolute mean to the solution with a polynomial rate. This fact is the subject of the next theorem, which immediately follows from Corollary~\ref{cor1b} in Section~\ref{strong}. \begin{theorem} \label{thm:intro2} Let $ T \in (0,\infty) $, $ d \in \{ 4, 5, \dots \} $, $ \xi \in \R^d $.
Then there exist infinitely often differentiable and globally bounded functions $ \mu, \sigma\colon \R^d\to \R^d$ such that for every probability space $ ( \Omega, \mathcal{F}, \PP ) $, every normal filtration $ ( \mathcal{F}_t )_{ t \in [0,T] } $ on $ ( \Omega, \mathcal{F}, \PP ) $, every standard $ ( \mathcal{F}_t )_{ t \in [0,T] } $-Brownian motion $ W \colon [0,T] \times \Omega \to \R $ on $ ( \Omega, \mathcal{F}, \PP ) $, every continuous $ ( \mathcal{F}_t )_{ t \in [0,T] } $-adapted stochastic process $ X \colon [0,T] \times \Omega \to \R^d $ with $ \forall \, t \in [0,T] \colon \PP\big( X(t) = \xi + \int_0^t \mu\big( X(s) \big) \, ds + \int_0^t \sigma\big( X(s) \big) \, dW(s) \big) = 1 $, and every $ \alpha \in (0,\infty) $ we have \begin{equation} \label{eq:intro2} \lim_{ n \to \infty } \Bigl( n^{ \alpha } \cdot \inf_{ s_1, \dots, s_n \in [0,T] } \inf_{ \substack{ u \colon \R^n \to \R \\ \text{measurable} } } \EE\Big[ \big\| X( T ) - u\big( W( s_1 ), \dots, W( s_n ) \big) \big\| \Big] \Bigr) = \infty . \end{equation} \end{theorem} Even worse, our next result states that for every arbitrarily slow convergence speed there exist SDEs with infinitely often differentiable and globally bounded coefficients such that no approximation method that uses finitely many observations and, additionally, starting from some positive time, the whole path of the driving Brownian motion, can converge in absolute mean to the solution faster than the given speed of convergence. \begin{theorem} \label{thm:intro3} Let $ T \in (0,\infty) $, $ d \in \{ 4, 5, \dots \} $, $ \xi \in \R^d $ and let $ ( a_n )_{ n \in \N } \subset (0,\infty) $ and $ (\delta_n)_{ n \in \N } \subset (0,\infty) $ be sequences of strictly positive reals such that $ \lim_{ n \to \infty } a_n = \lim_{ n \to \infty } \delta_n = 0 $. 
Then there exist infinitely often differentiable and globally bounded functions $ \mu, \sigma \colon \R^d\to \R^d$ such that for every probability space $ ( \Omega, \mathcal{F}, \PP ) $, every normal filtration $ ( \mathcal{F}_t )_{ t \in [0,T] } $ on $ ( \Omega, \mathcal{F}, \PP ) $, every standard $ ( \mathcal{F}_t )_{ t \in [0,T] } $-Brownian motion $ W \colon [0,T] \times \Omega \to \R $ on $ ( \Omega, \mathcal{F}, \PP ) $, every continuous $ ( \mathcal{F}_t )_{ t \in [0,T] } $-adapted stochastic process $ X \colon [0,T] \times \Omega \to \R^d $ with $ \forall \, t \in [0,T] \colon \PP\big( X(t) = \xi + \int_0^t \mu\big( X(s) \big) \, ds + \int_0^t \sigma\big( X(s) \big) \, dW(s) \big) = 1 $, and every $ n \in \N $ we have \begin{equation} \label{eq:intro3} \inf_{ s_1, \dots, s_n \in [0,T] } \, \inf_{ \substack{ u \colon \R^n \times C( [ \delta_n, T ] ) \to \R \\ \text{measurable} } } \EE\Big[ \big\| X( T ) - u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [ \delta_n , T ] } \big) \big\| \Big] \geq a_n . \end{equation} \end{theorem} Theorem~\ref{thm:intro3} is an immediate consequence of Corollary~\ref{cor4} in Section~\ref{strong} together with an appropriate scaling argument. Roughly speaking, such SDEs cannot be solved approximately in the strong sense in a reasonable computational time as long as approximation methods based on finitely many evaluations of the driving Brownian motion are used. In Section~\ref{sec:numerics} we illustrate Theorems~\ref{thm:intro2} and \ref{thm:intro3} by a numerical example. Next we point out that our results cover neither the class of strong approximation algorithms that may use finitely many arbitrary linear functionals of the driving Brownian motion nor strong approximation algorithms that may choose the number as well as the location of the evaluation nodes for the driving Brownian motion in a path-dependent way. Both issues will be the subject of future research.
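To make the kind of SDE behind Theorems~\ref{thm:intro2} and \ref{thm:intro3} concrete, the construction of Section~\ref{sec:setting} can be simulated directly. The following is a rough, self-contained sketch: it runs the Euler scheme for the four-dimensional SDE with coefficients \eqref{coeff}, using bump functions of the type in Example~\ref{ex:fgh}, and estimates the strong error of the fourth coordinate against a fine-grid reference on the same Brownian paths. All concrete values are illustrative assumptions only: $T=1$, $\tau_1=\nicefrac{1}{4}$, $\tau_2=\nicefrac{1}{2}$, $\tau_3=\nicefrac{3}{4}$, the normalizing constants $c_1, c_2, c_3$, and the choice $\psi(x)=e^x$ (smooth, positive, strictly increasing, but not normalized as in Section~\ref{strong}); this is a toy illustration, not the experiment of Section~\ref{sec:numerics}.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not the paper's choices)
T, tau1, tau2, tau3 = 1.0, 0.25, 0.5, 0.75
C1, C2, C3 = 4.0, 16.0, 4.0   # constants c_1, c_2, c_3 as in Example ex:fgh

def f(t):    # supp(f) contained in (-inf, tau1]
    return np.exp(C1 + 1.0 / (t - tau1)) if t < tau1 else 0.0

def g(t):    # supp(g) contained in [tau2, tau3]
    return np.exp(C2 + 1.0 / (tau2 - t) + 1.0 / (t - tau3)) if tau2 < t < tau3 else 0.0

def h(t):    # supp(h) contained in [tau3, inf)
    return np.exp(C3 + 1.0 / (tau3 - t)) if t > tau3 else 0.0

def psi(x):  # smooth, positive, strictly increasing; normalization ignored here
    return np.exp(x)

def euler_x4(n, dW):
    """Euler scheme for the SDE with coefficients (coeff); returns X_4 at time T.

    dW has shape (paths, n); X_1(t) = t is used explicitly."""
    dt = T / n
    x2 = np.zeros(dW.shape[0])
    x3 = np.zeros(dW.shape[0])
    x4 = np.zeros(dW.shape[0])
    for k in range(n):
        t = k * dt
        x4 = x4 + h(t) * np.cos(x2 * psi(x3)) * dt   # drift of X_4 at the old state
        x2 = x2 + f(t) * dW[:, k]                    # dX_2 = f(X_1) dW
        x3 = x3 + g(t) * dW[:, k]                    # dX_3 = g(X_1) dW
    return x4

rng = np.random.default_rng(0)
paths, N = 1000, 2 ** 11
dW_fine = rng.normal(0.0, np.sqrt(T / N), size=(paths, N))
x4_ref = euler_x4(N, dW_fine)   # fine-grid reference on the same Brownian paths

errors = {}
for n in (2 ** 3, 2 ** 5, 2 ** 7):
    dW = dW_fine.reshape(paths, n, N // n).sum(axis=2)  # coarsen the increments
    errors[n] = float(np.mean(np.abs(euler_x4(n, dW) - x4_ref)))
print(errors)
```

With a benign choice of $\psi$ such as this one, the empirical error may still decay at a visible rate; the content of Theorems~\ref{thm:intro2} and \ref{thm:intro3} is that $\psi$ can be chosen so that no polynomial rate, indeed no prescribed rate, is attainable by any method based on finitely many evaluations of $W$.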
We add that for strong approximation of SDEs with globally Lipschitz coefficients there is a multitude of results on lower error bounds already available in the literature; see, e.g.,~\cite{ClarkCameron1980,hmr01,m02,MG02_habil,m04,mr08,Ruemelin1982}, and the references therein. We also add that Theorem~2.4 in Gy\"{o}ngy~\cite{g98b} establishes, as a special case, the almost sure convergence rate $ \nicefrac{ 1 }{ 2 } - $ for the Euler scheme and SDEs with globally bounded and infinitely often differentiable coefficients. In particular, we note that there exist SDEs with globally bounded and infinitely often differentiable coefficients which, roughly speaking, cannot be solved approximately in the strong sense in a reasonable computational time (according to Theorem~\ref{thm:intro3} above) but might be solvable, approximately, in the almost sure sense in a reasonable computational time (according to Theorem~2.4 in Gy\"{o}ngy~\cite{g98b}). \section{Notation} Throughout this article the following notation is used. For a set $ A $, a vector space $ V $, a set $ B \subseteq V $, and a function $ f \colon A \to B $ we put $ \operatorname{supp}( f ) = \left\{ x \in A \colon f(x) \neq 0 \right\} $. Moreover, for a natural number $ d \in \N $ and a vector $ v \in \R^d $ we denote by $ \|v\|_{ \R^d } $ the Euclidean norm of $ v \in \R^d $. Furthermore, for a real number $ x \in \R $ we put $ \lfloor x \rfloor = \max\!\left( \Z \cap ( - \infty, x ] \right) $ and $ \lceil x \rceil = \min\!\left( \Z \cap [ x, \infty ) \right) $. \section{A family of stochastic differential equations with smooth and globally bounded coefficients} \label{sec:setting} Throughout this article we study SDEs provided by the following setting.
Let $ T \in (0,\infty) $, let $ ( \Omega, \mathcal{F}, \PP ) $ be a probability space with a normal filtration $ ( \mathcal{F}_t )_{ t \in [0,T] } $, and let $ W \colon [0,T] \times \Omega \to \R $ be a standard $ ( \mathcal{F}_t )_{ t \in [0,T] } $-Brownian motion on $ ( \Omega, \mathcal{F}, \PP ) $. Let $ \tau_1, \tau_2, \tau_3 \in \R $ satisfy $ 0 < \tau_1 \leq \tau_2 < \tau_3 < T $ and let $ f, g, h \in C^{ \infty }( \R, \R ) $ be globally bounded and satisfy $ \operatorname{supp}( f ) \subseteq ( - \infty, \tau_1 ] $, $ \inf_{ s \in [ 0, \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) | > 0 $, $ \operatorname{supp}( g ) \subseteq [ \tau_2, \tau_3 ] $, $ \int_{ \R } \left| g(s) \right|^2 ds > 0 $, $ \operatorname{supp}( h ) \subseteq [ \tau_3, \infty ) $, and $ \int_{ \tau_3 }^{ T } h(s) \, ds \neq 0 $. For every $ \psi \in C^{ \infty }( \R , (0,\infty) ) $ let $ \mu^{\psi} \colon \R^4 \to \R^4 $ and $ \sigma \colon \R^4 \to \R^4 $ be the functions such that for all $ x = ( x_1, \dots, x_4 ) \in \R^4 $ we have \begin{equation} \label{coeff} \begin{aligned} \mu^{ \psi }(x) & = \bigl( 1, 0, 0, h( x_1 ) \cdot \cos( x_2 \, \psi( x_3 ) ) \bigr) \qquad \text{and} \qquad \sigma(x) = \bigl( 0, f( x_1 ) , g( x_1 ), 0 \bigr) \end{aligned} \end{equation} and let $ X^{ \psi } = ( X^{ \psi }_1, \dots, X^{ \psi }_4 ) \colon [0,T] \times \Omega \to \R^4 $ be an $ ( \mathcal{F}_t )_{ t \in [0,T] } $-adapted continuous stochastic process with the property that for all $ t \in [0,T] $ it holds $ \PP $-a.s.\ that $ X^{ \psi }( t ) = \int_0^t \mu^{ \psi }( X^{ \psi }( s ) ) \, ds + \int_0^t \sigma( X^{ \psi }( s ) ) \, dW(s) $. \begin{rem} Note that for all $ \psi \in C^{ \infty }( \R, (0,\infty) ) $ we have that $ \mu^{ \psi } $ and $\sigma$ are infinitely often differentiable and globally bounded.
\end{rem} \begin{rem} Note that for all $ \psi \in C^{ \infty }( \R , ( 0, \infty ) ) $, $ t \in [0,T] $ it holds $ \PP $-a.s.\ that \begin{equation}\label{sol} \begin{split} X_1^{ \psi }(t) & = t , \qquad X_2^{ \psi }(t) = \int_0^{ \min\{ t, \tau_1 \} } f(s) \, dW(s), \\ X_3^{ \psi }(t) & = \mathbbm{1}_{ [ \tau_2 , \, T ] }( t ) \cdot \int_{ \min\{ t, \tau_2 \} }^{ \min\{ t , \tau_3 \} } g(s) \, dW(s) , \\ X_4^{ \psi }(t) & = \mathbbm{1}_{ [ \tau_3 , \, T ] }(t) \cdot \cos\bigl( X_2^{ \psi }( \tau_1 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) \cdot \int_{ \tau_3 }^{ t } h(s) \, ds . \end{split} \end{equation} \end{rem} \begin{ex} \label{ex:fgh} Let $ c_1, c_2, c_3 \in \R $ and let $ f, g, h \colon \R \to \R $ be the functions such that for all $ x \in \R $ we have \begin{equation} \begin{split} f(x) & = \1_{(-\infty,\tau_1)}(x)\cdot \exp\Bigl( c_1 + \frac{ 1 }{ x - \tau_1 } \Bigr) , \\ g(x) & = \1_{(\tau_2,\tau_3)}(x)\cdot \exp\Bigl( c_2 + \frac{ 1 }{ \tau_2 - x } + \frac{ 1 }{ x - \tau_3 } \Bigr) , \\ h(x) & = \1_{(\tau_3,\infty)}(x)\cdot \exp\Bigl( c_3 + \frac{ 1 }{ \tau_3 - x } \Bigr) . \end{split} \end{equation} Then $ f, g, h $ satisfy the conditions stated above, that is, $ f, g, h $ are infinitely often differentiable and globally bounded and $ f, g, h $ satisfy $ \operatorname{supp}( f ) \subseteq ( - \infty, \tau_1 ] $, $ \inf_{ s \in [ 0, \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) | > 0 $, $ \operatorname{supp}( g ) \subseteq [ \tau_2, \tau_3 ] $, $ \int_{ \R } \left| g(s) \right|^2 ds >0 $, $ \operatorname{supp}( h ) \subseteq [ \tau_3, \infty ) $, and $ \int_{ \tau_3 }^{ T } h(s) \, ds \neq 0 $. 
\end{ex} \section{Lower error bounds for general strong approximations} \label{strong} In Theorem~\ref{t1} below we provide lower bounds for the error of any strong approximation of $ X^{ \psi }( T ) $ for the processes $ X^{ \psi } $ from Section~\ref{sec:setting} based on the whole path of $ ( W(t) )_{ t \in [0,T] } $ up to a time interval $ ( t_0 , t_1 ) \subseteq [ 0, \tau_1/ 2 ] $. The main tool for the proof of Theorem \ref{t1} is the following simple symmetrization argument, which is a special case of the concept of radius of information used in information based complexity, see~\cite{TWW88}. \begin{lemma} \label{symm} Let $ ( \Omega, \A, \PP ) $ be a probability space, let $ ( \Omega_1, \A_1 ) $ and $ ( \Omega_2, \A_2 ) $ be measurable spaces, and let $ V_1 \colon \Omega\to \Omega_1 $ and $ V_2, V_2', V_2'' \colon \Omega \to \Omega_2 $ be random variables such that \begin{equation} \label{eq:symm_ass} \PP_{ (V_1, V_2) } = \PP_{ (V_1, V_2') } = \PP_{ (V_1, V_2'') } \, . \end{equation} Then for all measurable mappings $ \Phi \colon \Omega_1\times\Omega_2\to \R $ and $ \varphi\colon \Omega_1\to\R $ we have \begin{equation} \EE\big[ |\Phi(V_1,V_2)- \varphi(V_1)| \big] \ge \tfrac{ 1 }{ 2 } \, \EE\big[ | \Phi( V_1, V_2' ) - \Phi( V_1, V_2'' ) | \big] . \end{equation} \end{lemma} \begin{proof} Observe that \eqref{eq:symm_ass} ensures that \begin{equation} \EE\big[ | \Phi(V_1, V_2) - \varphi(V_1) | \big] = \EE\big[ | \Phi( V_1, V_2' ) - \varphi(V_1) | \big] = \EE\big[ | \Phi(V_1,V_2'') - \varphi(V_1) | \big] . \end{equation} This and the triangle inequality imply that \begin{equation} \begin{split} \EE\big[ | \Phi(V_1,V_2) - \varphi(V_1) | \big] & \geq \tfrac{ 1 }{ 2 } \, \EE\bigl[ | \Phi(V_1,V_2') - \Phi(V_1,V_2'') | \bigr] , \end{split} \end{equation} which finishes the proof. \end{proof} In addition, we employ in the proof of Theorem~\ref{t1} the following lower bound for the first absolute moment of the sine of a centered normally distributed random variable. 
\begin{lemma} \label{l2} Let $ ( \Omega , \A, \PP ) $ be a probability space, let $ \tau \in [1,\infty) $, and let $ Y \colon \Omega \to \R $ be a $ \mathcal{N}( 0, \tau^2 ) $-distributed random variable. Then \begin{equation} \EE\big[ | \sin(Y) | \big] \ge \frac{ 1 }{ \sqrt{ 8 \pi } } \cdot \exp \Bigl( - \frac{ \pi^2 }{ 8 } \Bigr) . \end{equation} \end{lemma} \begin{proof} We have \begin{equation} \begin{split} & \EE\big[ | \sin(Y) | \big] = \frac{ 1 }{ \sqrt{ 2 \pi } } \int_\R | \sin( \tau z ) | \exp\Bigl( - \frac{ z^2 }{ 2 } \Bigr) dz \\ & \ge \frac{ 1 }{ \sqrt{ 2 \pi } } \exp\Bigl( - \frac{ \pi^2 }{ 8 } \Bigr) \int_0^{ \frac{ \pi }{ 2 } } \left| \sin( \tau z ) \right| dz = \frac{ 1 }{ \tau \sqrt{ 2 \pi } } \exp\Bigl( - \frac{ \pi^2 }{ 8 } \Bigr) \int_0^{ \frac{ \tau \pi }{ 2 } } \left| \sin(z) \right| dz . \end{split} \end{equation} This and the fact that \begin{equation} \int_0^{ \frac{ \tau \pi }{ 2 } } | \sin(x) | \, dx \ge \int_0^{ \lfloor \tau \rfloor \cdot \frac{ \pi }{2 } } |\sin(x)| \, dx = \lfloor \tau \rfloor \cdot \int_0^{ \frac{ \pi }{ 2 } } \sin(x)\, dx = \lfloor \tau \rfloor \ge \frac{ \tau }{ 2 } \end{equation} complete the proof. \end{proof} We first prove the announced lower error bound for strong approximation of $X^\psi(T)$ in the case of the time interval $(t_0,t_1)$ being sufficiently small. 
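Lemma~\ref{l2} is easy to probe numerically. The following Monte Carlo sanity check (a sketch; the sample size and the values of $\tau$ are arbitrary choices) confirms that the bound $\tfrac{1}{\sqrt{8\pi}}\exp(-\tfrac{\pi^2}{8}) \approx 0.058$ holds comfortably:

```python
import math
import random

random.seed(1)
bound = math.exp(-math.pi ** 2 / 8) / math.sqrt(8 * math.pi)  # about 0.0581
means = {}
for tau in (1.0, 2.5, 10.0, 100.0):   # Lemma l2 requires tau >= 1
    n_samples = 20000
    m = sum(abs(math.sin(random.gauss(0.0, tau))) for _ in range(n_samples)) / n_samples
    means[tau] = m                    # Monte Carlo estimate of E|sin(Y)|, Y ~ N(0, tau^2)
    assert m > bound
print(bound, means)
```

As $\tau\to\infty$ the true value tends to $\tfrac{2}{\pi}\approx 0.64$, so the constant in Lemma~\ref{l2} is loose by roughly a factor of ten; only its uniformity in $\tau \ge 1$ matters for the argument below.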
\begin{lemma} \label{lem:strong_lower} Assume the setting in Section~\ref{sec:setting}, let $ \alpha_1, \alpha_2, \alpha_3, \Delta, \beta \in (0,\infty) $, and $ \gamma \in \R $ be given by \begin{equation} \begin{aligned} \label{alphas} \alpha_1 = \int_0^{ \tau_1 } \left| f(s) \right|^2 ds , \;\; \alpha_2 = \sup_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 , \;\; \alpha_3 = \inf_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 ,\\ \Delta = \Bigl| \min\Bigl\{ \frac{ \alpha_1 }{ 2 \alpha_2 } , \frac{ 1 }{ \alpha_2 } \Bigr\} \Bigr|^{ 1 / 3 }, \;\; \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds, \;\; \gamma = \int_{ \tau_3 }^T h(s) \, ds, \end{aligned} \end{equation} let $ \psi \in C^{ \infty }( \R, (0,\infty) ) $ be strictly increasing with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $, let $ t_0, t_1 \in [ 0, \tau_1 / 2 ] $ satisfy $ 0 < t_1 - t_0 \leq \Delta $, and let $ u \colon C\big( [0,t_0] \cup [t_1,T] , \R \big) \to \R $ be measurable. Then $ \tfrac{ \sqrt{ 12 } }{ ( t_1 - t_0 )^{ 3 / 2 } \sqrt{ \alpha_3 } } \in \psi(\R) $ and \begin{equation} \EE\Big[ \big| X^{ \psi }_4( T ) - u\big( ( W(s) )_{ s \in [0, t_0] \cup [t_1, T ] } \big) \big| \Big] \geq \frac{ \left| \gamma \right| }{ 8 \pi^{ 3 / 2 } } \exp\Bigl( - \tfrac{ 2 }{ \beta } \Bigl| \psi^{ - 1 }\!\Bigl( \tfrac{ \sqrt{ 12 } }{ ( t_1 - t_0 )^{ 3 / 2 } \sqrt{ \alpha_3 } } \Bigr) \Bigr|^2 - \tfrac{ \pi^2 }{ 4 } \Bigr) . 
\end{equation} \end{lemma} \begin{proof} Define stochastic processes $ \overline{W}, B \colon [ t_0, t_1 ] \times \Omega \to \R $ and $ \widetilde{W} \colon \big( [ 0, t_0 ] \cup [ t_1, T ] \big) \times \Omega \to \R $ by \begin{equation} \overline{W}( t ) = \frac{ (t - t_0) }{ ( t_1 - t_0 ) } \cdot W( t_1 ) + \frac{ ( t_1 - t ) }{ ( t_1 - t_0 ) } \cdot W( t_0 ) , \qquad B( t ) = W( t ) - \overline{W}( t ) \end{equation} for $ t \in [ t_0, t_1 ] $ and by $ \widetilde{W}( t ) = W( t ) $ for $ t \in [ 0, t_0 ] \cup [ t_1, T ] $. Hence, $ B $ is a Brownian bridge on $ [t_0,t_1] $ and $ B $ and $ ( \overline W, \widetilde W) $ are independent. Let $ Y_1, Y_2 \colon \Omega \to \R $ be random variables such that we have $ \PP $-a.s.\ that \begin{equation} \begin{split} Y_1 & = \int_0^{t_0} f(s)\, dW(s) + \int_{t_1}^{\tau_1} f(s)\, dW(s) + f( t_1 ) \, W( t_1 ) - f( t_0 ) \, W( t_0 ) - \int_{t_0}^{t_1} f'(s) \, \overline W(s) \, ds , \\ Y_2 & = - \int_{ t_0 }^{ t_1 } f'(s) \, B(s) \, ds \end{split} \end{equation} and put \begin{equation}\label{var} \sigma_i = \big( \EE\big[ | Y_i |^2 \big] \big)^{ 1 / 2 } \end{equation} for $i\in\{1,2\}$. By the independence of $B$ and $(\overline W,\widetilde W)$ we have independence of $Y_1$ and $Y_2$. Moreover, for all $ i \in \{ 1, 2 \} $ we have $ \PP_{ Y_i } = \mathcal{N}( 0, \sigma_i^2 ) $. Furthermore, It\^o's formula proves that we have $ \PP $-a.s.\ that \begin{equation} \label{v2} X_2^{ \psi }( \tau_1 ) = Y_1 + Y_2. \end{equation} Therefore, we have $ \PP $-a.s.\ that \begin{equation} \label{x5} X_4^{ \psi }( T ) = \gamma \cdot \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) . \end{equation} First, we provide estimates on the variances $ \left| \sigma_1 \right|^2 $ and $ \left| \sigma_2 \right|^2 $. 
The fact that $ B $ is a Brownian bridge on $ [t_0,t_1] $ shows that for all $ s, u \in [ t_0, t_1 ] $ we have \begin{equation} \label{eq:Brownian_Bridge} \EE\big[ B(s) B(u) \big] = \frac{ \left( t_1 - \max\{s,u\} \right)\cdot \left( \min\{s,u\} - t_0 \right) }{ \left( t_1 - t_0 \right) } . \end{equation} In addition, the assumption $ \inf_{ s \in [ 0, \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) | > 0 $ implies that for all $ s, u \in [ 0, \tau_1/ 2 ] $ we have $ f'(s) \cdot f'(u) = \left| f'(s) \cdot f'(u) \right| $. The latter fact and \eqref{eq:Brownian_Bridge} yield \begin{equation} \label{eq:sigma_var_identity} \begin{split} \left| \sigma_2 \right|^2 & = \EE\!\left[ \Bigl| \int_{ t_0 }^{ t_1 } f'(s) \, B(s) \, ds \Bigr|^2 \right] = \int_{ t_0 }^{ t_1 } \int_{ t_0 }^{ t_1 } f'(s) \, f'(u) \, \EE\big[ B(s) B(u) \big] \, ds \, du \\ & = \int_{ t_0 }^{ t_1 } \int_{ t_0 }^{ t_1 } \left| f'(s) \right| \cdot \left| f'(u) \right| \cdot \frac{ ( t_1 - \max\{ s, u \} ) \cdot ( \min\{ s, u \} - t_0 ) }{ \left( t_1 - t_0 \right) } \, ds \, du . \end{split} \end{equation} Furthermore, it is easy to see that \begin{equation}\label{easy} \int_{ t_0 }^{ t_1 } \int_{ t_0 }^{ t_1 } \frac{ ( t_1 - \max\{ s, u \} ) \cdot ( \min\{ s, u \} - t_0 ) }{ \left( t_1 - t_0 \right) } \, ds \, du = \frac{ \left( t_1 - t_0 \right)^3 }{ 12 } . \end{equation} Combining \eqref{eq:sigma_var_identity} and \eqref{easy} proves that \begin{equation} \label{eq:lem_first_inequality} 0 < \frac{ \alpha_3 \left( t_1 - t_0 \right)^3 }{ 12 } \le \left| \sigma_2 \right|^2 \le \frac{ \alpha_2 \left( t_1 - t_0 \right)^3 }{ 12 } . \end{equation} Next \eqref{eq:lem_first_inequality} and the assumption $ t_1 - t_0 \leq \Delta $ imply \begin{equation} \label{eq:sigma2_indicator_bound} \left| \sigma_2 \right|^2 \le \alpha_2 \left| \Delta \right|^3 = \min\!\left\{ \alpha_1/ 2 , 1 \right\} . 
\end{equation} By \eqref{v2}, by the fact that $ Y_1 $ and $ Y_2 $ are independent centered normal variables, and by \eqref{eq:sigma2_indicator_bound} we get \begin{equation} \begin{split} \left| \sigma_1 \right|^2 & = \EE\big[ | Y_1 |^2 \big] = \EE\big[ | Y_1 + Y_2 |^2 \big] - \EE\big[ | Y_2 |^2 \big] - 2 \, \EE\big[ Y_1 Y_2 \big] \\ & = \EE\big[ | X_2^{ \psi }( \tau_1 ) |^2 \big] - \left| \sigma_2 \right|^2 = \alpha_1 - \left| \sigma_2 \right|^2 \ge \alpha_1/ 2 \ge \left| \sigma_2 \right|^2 , \end{split} \end{equation} which jointly with \eqref{eq:sigma2_indicator_bound} yields \begin{equation} \label{eq:lem_second_inequality} \left| \sigma_2 \right|^2 \le \min\!\left\{ \left| \sigma_1 \right|^2 , 1 \right\} . \end{equation} In the next step we set up the framework for an application of Lemma \ref{symm}. Observe that \eqref{x5} and the assumption $ \gamma \neq 0 $ imply \begin{equation}\label{v3} \EE\Big[ \bigl| X_4^{ \psi }( T ) - u( \widetilde{W} ) \bigr| \Big] = \left| \gamma \right| \cdot \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde{W} ) \bigr| \Big] . \end{equation} Clearly, there exist measurable functions $ \Phi_i \colon C\big( [ 0, t_0 ] \cup [ t_1 , T ] , \R \big) \to \R $, $ i \in \{ 1, 2 \} $, such that we have $ \PP $-a.s.\ that $ Y_1 = \Phi_1( \widetilde{W} ) $ and $ X_3^{ \psi }( \tau_3 ) = \Phi_2( \widetilde{W} ) $. Moreover, by the independence of $B$ and $(\overline W,\widetilde W)$ we have independence of $Y_2$ and $\widetilde W$. Therefore, we have $ \PP_{ ( \widetilde W, Y_2 ) } = \PP_{ \widetilde W } \otimes \PP_{ Y_2 } = \PP_{ \widetilde W } \otimes \PP_{ - Y_2 } = \PP_{ ( \widetilde W, - Y_2 ) } $.
We may thus apply Lemma~\ref{symm} with $ \Omega_1 = C( [0, t_0] \cup [t_1 , T] , \R ) $, $ \Omega_2 = \R $, $ V_1 = \widetilde{W} $, $ V_2 = V_2' = Y_2 $, $ V_2'' = - Y_2 $, $ \varphi = \frac{ 1 }{ \gamma } \cdot u $, and $ \Phi \colon C( [0, t_0] \cup [t_1 , T] , \R ) \times \R \to \R $ given by $ \Phi(w,y) = \cos( ( \Phi_1( w ) + y ) \, \psi( \Phi_2( w ) ) ) $ for $ w \in C( [ 0, t_0 ] \cup [ t_1, T ] , \R ) $, $ y \in \R $ to obtain \begin{equation} \begin{split} & \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde W ) \bigr| \Big] \\ & \quad = \EE\Bigl[ \bigl| \cos\bigl( \big( \Phi_1( \widetilde{W} ) + Y_2 \big) \, \psi\big( \Phi_2( \widetilde{W} ) \big) \bigr) - \varphi(\widetilde W) \bigr| \Big] \\ & \quad \geq \tfrac{ 1 }{ 2 } \cdot \EE\Big[ \bigl| \cos\bigl( \big( \Phi_1( \widetilde{W} ) + Y_2 \big) \, \psi\big( \Phi_2( \widetilde{W} ) \big) \bigr) - \cos\bigl( \big( \Phi_1( \widetilde{W} ) - Y_2 \big) \, \psi\big( \Phi_2( \widetilde{W} ) \big) \bigr) \bigr| \Big] \\ & \quad = \tfrac{ 1 }{ 2 } \cdot \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \cos\bigl( ( Y_1 - Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) \bigr| \Big]. \end{split} \end{equation} The latter estimate and the fact that $ \forall \, x, y \in \R \colon \cos( x ) - \cos( y ) = 2 \sin( \frac{ y - x }{ 2 } ) \sin( \frac{ y + x }{ 2 } ) $ imply \begin{equation} \label{estim1} \begin{split} & \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde W ) \bigr| \Big] \\ & \qquad\qquad \geq \EE\Big[ \bigl| \sin\bigl( Y_1 \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) \cdot \sin\bigl( Y_2 \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) \bigr| \Big] .
\end{split} \end{equation} Since $ (\overline W,\widetilde W)$, $ B $, and $(W(t)-W(\tau_2))_{t\in[\tau_2,\tau_3]}$ are independent we have independence of $Y_1$, $Y_2$, and $ X_3^{ \psi }( \tau_3 )$ as well. Moreover, we have $ \PP_{ X_3^{ \psi }( \tau_3 ) } = \mathcal{N}( 0 , \beta ). $ The latter two facts and \eqref{estim1} prove \begin{equation} \label{eq:cos_estimate_0} \begin{split} & \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde W ) \bigr| \Big] \\ & \quad \geq \int_\R \EE\Big[ \big| \sin\!\big( \psi(x) Y_1 \big) \big| \Big] \cdot \EE\Big[ \big| \sin\!\big( \psi(x) Y_2 \big) \big| \Big] \, \PP_{ X_3^{ \psi }( \tau_3 ) }( dx ) \\ & \quad= \int_\R \EE\Big[ \big| \sin\!\big( \psi(x) Y_1 \big) \big| \Big] \cdot \EE\Big[ \big| \sin\!\big( \psi(x) Y_2 \big) \big| \Big] \, \tfrac{ 1 }{ \sqrt{ 2 \pi \beta } } \, \exp\bigl( - \tfrac{ x^2 }{ 2 \beta } \bigr) \,dx. \end{split} \end{equation} Next we note that \eqref{eq:lem_second_inequality} ensures that $ 1 / \sigma_2 \ge 1 $. This, the assumption that $ \psi $ is continuous, the assumption that $ \lim_{ x \to \infty } \psi(x) = \infty $, and the assumption that $ \psi( \sqrt{ 2 \beta } ) = 1 $ show \begin{equation}\label{new3} 1 / \sigma_2 \in \bigl[\psi(\sqrt{2\beta}),\infty\bigr) \subset \psi(\R). 
\end{equation} It follows \begin{equation} \label{eq:cos_estimate_00} \begin{split} & \int_\R \EE\Big[ \big| \sin\!\big( \psi(x) Y_1 \big) \big| \Big] \cdot \EE\Big[ \big| \sin\!\big( \psi(x) Y_2 \big) \big| \Big] \, \tfrac{ 1 }{ \sqrt{ 2 \pi \beta } } \, \exp\bigl( - \tfrac{ x^2 }{ 2 \beta } \bigr) \,dx \\ & \quad\ge \int_{ \psi^{ - 1 }( 1 / \sigma_2 ) }^{ 2 \psi^{ - 1 }( 1 / \sigma_2 ) } \EE\Big[ \big| \sin\!\big( \psi(x) Y_1 \big) \big| \Big] \cdot \EE\Big[ \big| \sin\!\big( \psi(x) Y_2 \big) \big| \Big] \, \tfrac{ 1 }{ \sqrt{ 2 \pi \beta } } \, \exp\bigl( - \tfrac{ x^2 }{ 2 \beta } \bigr) \,dx \\ & \quad \ge \frac{ 1 }{ \sqrt{ 2 \pi \beta } } \, \exp\Bigl( - \tfrac{ 2 }{ \beta } \, \big| \psi^{ - 1 }( \tfrac{ 1 }{ \sigma_2 } ) \big|^2 \Bigr) \int_{ \psi^{ - 1 }( 1 / \sigma_2 ) }^{ 2 \psi^{ - 1 }( 1 / \sigma_2 ) } \EE\Big[ \big| \sin\!\big( \psi(x) Y_1 \big) \big| \Big] \cdot \EE\Big[ \big| \sin\!\big( \psi(x) Y_2 \big) \big| \Big] \, dx . \end{split} \end{equation} We are now in a position to apply Lemma \ref{l2}. Observe that \eqref{eq:lem_second_inequality} and the assumption that $ \psi $ is strictly increasing imply that for all $ x \in [ \psi^{ - 1 }( 1/\sigma_2 ), \infty ) $, $ i \in \{ 1, 2 \} $ we have $ \sigma_i\psi(x) \ge \sigma_i /\sigma_2 \ge 1 $. Employing Lemma~\ref{l2} we thus conclude that \begin{equation} \label{eq:sin_estimate_0} \begin{split} & \int_{ \psi^{ - 1 }( 1 / \sigma_2 ) }^{ 2 \psi^{ - 1 }( 1 / \sigma_2 ) } \EE\big[ | \sin(\psi(x) Y_1) | \big] \cdot \EE\big[ | \sin(\psi(x) Y_2) | \big] \, dx \\ & \qquad \ge \int_{ \psi^{ - 1 }( 1 / \sigma_2 ) }^{ 2 \psi^{ - 1 }( 1 / \sigma_2 ) } \left[ \tfrac{ 1 }{ \sqrt{ 8 \pi } } \cdot \exp\bigl( - \tfrac{ \pi^2 }{ 8 } \bigr) \right]^2 \,dx = \frac{ 1 }{ 8 \pi } \cdot \exp\bigl( - \tfrac{ \pi^2 }{ 4 } \bigr) \cdot \psi^{ - 1 }\big( \tfrac{ 1 }{ \sigma_2 } \big) . 
\end{split} \end{equation} Furthermore, \eqref{eq:lem_first_inequality}, \eqref{new3}, and the assumption that $ \psi $ is strictly increasing ensure that \begin{equation} \label{eq:psi_estimate_0} \psi^{ - 1 }\big( \tfrac{ 1 }{ \sigma_2 } \big) \le \psi^{ - 1 }\Bigl( \tfrac{ \sqrt{12} }{ \sqrt{\alpha_3} } \cdot \tfrac{ 1 }{ ( t_1 - t_0 )^{ 3 / 2 } } \Bigr) . \end{equation} Combining \eqref{eq:cos_estimate_0}--\eqref{eq:psi_estimate_0} proves \begin{equation} \label{eq:cos_estimate_1} \begin{split} & \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde W ) \bigr| \Big] \\ & \qquad \ge \frac{ 1 }{ \sqrt{ 2 \pi \beta } } \, \exp\Bigl( - \tfrac{ 2 }{ \beta } \Bigl| \psi^{ - 1 }\!\Bigl( \tfrac{ \sqrt{ 12 } }{ \sqrt{ \alpha_3 } } \cdot \tfrac{ 1 }{ ( t_1 - t_0 )^{ 3 / 2 } } \Bigr) \Bigr|^2 \Bigr) \cdot \frac{ 1 }{ 8 \pi } \cdot \exp\bigl( - \tfrac{ \pi^2 }{ 4 } \bigr) \cdot \psi^{ - 1 }\big( \tfrac{ 1 }{ \sigma_2 } \big) . \end{split} \end{equation} Finally, note that \eqref{new3} and the assumption that $ \psi $ is strictly increasing imply $ \sqrt{ 2 \beta } \le \psi^{ - 1 }\big( \tfrac{ 1 }{ \sigma_2 } \big) $. Hence, we derive from \eqref{eq:cos_estimate_1} that \begin{equation} \begin{split} & \EE\Big[ \bigl| \cos\bigl( ( Y_1 + Y_2 ) \, \psi\big( X_3^{ \psi }( \tau_3 ) \big) \bigr) - \tfrac{ 1 }{ \gamma } \cdot u( \widetilde W ) \bigr| \Big] \\ &\qquad \ge \exp\Bigl( - \tfrac{ 2 }{ \beta } \left| \psi^{ - 1 }\!\left( \tfrac{ \sqrt{ 12 } }{ \sqrt{ \alpha_3 } } \cdot \tfrac{ 1 }{ ( t_1 - t_0 )^{ 3 / 2 } } \right) \right|^2 \Bigr) \cdot \frac{ 1 }{ 8 \pi^{ 3 / 2 } } \cdot \exp\bigl( - \tfrac{ \pi^2 }{ 4 } \bigr) . \end{split} \end{equation} This and \eqref{v3} complete the proof of the lemma. \end{proof} We are ready to establish our main result. 
\begin{theorem} \label{t1} Assume the setting in Section~\ref{sec:setting}, let $ \alpha_1, \alpha_2, \alpha_3, \beta, c, C \in (0,\infty) $, and $ \gamma \in \R $ be given by \begin{gather} \label{newconst0} \alpha_1 = \int_0^{ \tau_1 } \left| f(s) \right|^2 ds , \;\; \alpha_2 = \sup_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 , \;\; \alpha_3 = \inf_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 , \;\; \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds, \\ \label{newconst} \gamma = \int_{ \tau_3 }^T h(s) \, ds, \qquad c = \frac{ |\gamma| }{ 8 \, \pi^{ 3 / 2 } \exp( \tfrac{ \pi^2 }{ 4 } ) } , \qquad C = \frac{ \sqrt{ 12 }\, \max\{ 1 , T^{ 3 / 2 } \sqrt{\alpha_2} \} }{ \sqrt{\alpha_3} \min\{ 1 , \sqrt{\tfrac{\alpha_1}{2}} \} } , \end{gather} let $ \psi \in C^{ \infty }( \R, (0,\infty) ) $ be strictly increasing with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $, let $ 0 \le t_0 < t_1 \le \tau_1/ 2 $, and let $ u \colon C\big( [0,t_0] \cup [t_1,T] , \R \big) \to \R $ be measurable. Then $ [C/ ( t_1 - t_0 )^{ 3 / 2 },\infty) \subset \psi(\R) $ and \begin{equation} \EE\Big[ \big| X_4^{ \psi }( T ) - u\big( ( W(s) )_{ s \in [0, t_0] \cup [t_1, T ] } \big) \big| \Big] \geq c \cdot \exp \Bigl( - \tfrac{ 2 }{ \beta } \cdot \big| \psi^{-1}\big( \tfrac{ C }{ ( t_1 - t_0 )^{ 3 / 2 } } \big) \big|^2 \Bigr) . \end{equation} \end{theorem} \begin{proof} Let $ \Delta \in (0,\infty) $ be given by \eqref{alphas}. First, assume $ t_1 - t_0 \leq \Delta $. 
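Recall in this context that the threshold $ \Delta \in (0,\infty) $ from \eqref{alphas} satisfies $ \alpha_2 \Delta^3 = \min\{ \alpha_1 / 2 , 1 \} $ (cf.\ \eqref{eq:sigma2_indicator_bound}), that is,
\begin{equation}
\Delta = \Bigl( \frac{ \min\{ \alpha_1 / 2 , 1 \} }{ \alpha_2 } \Bigr)^{\! 1 / 3 } ;
\end{equation}
this is the quantity that separates the two cases of the proof.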
By Lemma~\ref{lem:strong_lower} and by the properties of $\psi$ we then have \begin{equation}\label{sub1} \Bigl[\tfrac{ \sqrt{ 12 } }{ ( t_1 - t_0 )^{ 3 / 2 } \sqrt{ \alpha_3 } }, \infty\Bigr) \subset \psi(\R) \end{equation} and \begin{equation} \label{eq:lem_appl_diff_t_small} \EE\Big[ \big| X_4^{ \psi }( T ) - u\big( ( W(s) )_{ s \in [0, t_0] \cup [t_1, T ] } \big) \big| \Big] \geq c\cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \left| \psi^{ - 1 }\!\left( \tfrac{ \sqrt{ 12 } }{ ( t_1 - t_0 )^{ 3 / 2 } \sqrt{ \alpha_3 } } \right) \right|^2 \Bigr) . \end{equation} It remains to observe that \begin{equation}\label{cc3} \frac{ \sqrt{ 12 } }{ ( t_1 - t_0 )^{ 3 / 2 } \sqrt{ \alpha_3 } } \le \frac{C}{ ( t_1 - t_0 )^{ 3 / 2 } }, \end{equation} and that $ \psi^{-1} $ is strictly increasing to obtain the desired result in this case. Next, assume that $t_1 - t_0 >\Delta$. Then Lemma~\ref{lem:strong_lower} together with the properties of $\psi$ yields \begin{equation}\label{cc4} \bigl[\tfrac{ \sqrt{ 12 } }{ \Delta^{ 3 / 2 } \sqrt{ \alpha_3 } } ,\infty\bigr) \subset \psi(\R) \end{equation} and \begin{equation} \EE\Big[ \big| X_4^{ \psi }( T ) - u\big( ( W(s) )_{ s \in [0, t_0] \cup [t_1, T ] } \big) \big| \Big] \geq c\cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \left| \psi^{ - 1 }\!\left( \tfrac{ \sqrt{ 12 } }{ \Delta^{ 3 / 2 } \sqrt{ \alpha_3 } } \right) \right|^2 \Bigr) . \end{equation} Since \begin{equation}\label{ddd4} \frac{ \sqrt{ 12 } }{ \Delta^{ 3 / 2 } \sqrt{ \alpha_3 } } = \frac{\sqrt{ 12 }\sqrt{\alpha_2}}{ \sqrt{ \alpha_3 }\min\{1,\sqrt{\tfrac{\alpha_1}{2}}\}} \le \frac{\sqrt{ 12 }\sqrt{\alpha_2}}{ \sqrt{ \alpha_3 }\min\{1,\sqrt{\tfrac{\alpha_1}{2}}\}}\cdot \frac{T^{3/2}}{(t_1-t_0)^{3/2}}\le \frac{C}{ ( t_1 - t_0 )^{ 3 / 2 } } \end{equation} and since $ \psi^{-1} $ is strictly increasing, we obtain the claimed result in this case as well.
\end{proof} Theorem~\ref{t1} implies uniform lower bounds for the error of strong approximations of the solution processes $ X^{ \psi } $ in Section~\ref{sec:setting} at time $ T $ based on a finite number of function values of the driving Brownian motion $ W $. This is, in particular, the subject of the following corollary. \begin{cor} \label{cor1} Assume the setting in Section~\ref{sec:setting}, let $ \alpha_1, \alpha_2, \alpha_3, \beta, c, C \in (0,\infty) $, and $ \gamma \in \R $ be given by \begin{gather} \alpha_1 = \int_0^{ \tau_1 } \left| f(s) \right|^2 ds , \;\; \alpha_2 = \sup_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 , \;\; \alpha_3 = \inf_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | f'(s) |^2 , \;\; \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds, \\ \gamma = \int_{ \tau_3 }^T h(s) \, ds, \qquad c = \frac{ |\gamma| }{ 8 \, \pi^{ 3 / 2 } \exp( \tfrac{ \pi^2 }{ 4 } ) } , \qquad C = \frac{ \sqrt{ 12 }\, \max\{ 1 , T^{ 3 / 2 } \sqrt{\alpha_2} \} }{ \sqrt{\alpha_3} \min\{ 1 , \sqrt{\tfrac{\alpha_1}{2}} \} } , \end{gather} and let $ \psi \in C^{ \infty }( \R, (0,\infty) ) $ be strictly increasing with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $. 
Then for all $ n \in \N \cap [ 2 T / \tau_1 , \infty ) $ and all measurable $ u \colon C ([ T/n , T ] , \R ) \to \R $ we have $[ C n^{ 3 / 2 } T^{ - 3 / 2 },\infty)\subset\psi(\R)$ and \begin{equation} \label{eq:cor1_eq1} \EE\Big[ \bigl| X_4^{ \psi }( T ) - u\big( ( W(s) )_{ s \in [ \nicefrac{ T }{ n }, T ] } \big) \bigr| \Big] \geq c \cdot \exp\!\left( - \tfrac{ 2 }{ \beta } \cdot \left| \psi^{ - 1 }\!\left( \tfrac{ C }{ T^{ 3 / 2 } } \cdot n^{ 3 / 2 } \right) \right|^2 \right) , \end{equation} for all $ n \in \N $, $ s_1, \dots, s_n \in [ 0, T ] $ and all measurable $ u \colon \R^n \to \R $ we have $[ 8 C n^{ 3 / 2 } ( \tau_1 )^{ - 3 / 2 } , \infty)\subset\psi(\R)$ and \begin{equation} \label{eq:cor1_eq2} \EE\Big[ \bigl| X^{ \psi }_4(T) - u\big( W( s_1 ), \dots , W( s_n ) \big) \bigr| \Big] \geq c \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \psi^{ - 1 }\!\bigl( \tfrac{ 8 \, C }{ ( \tau_1 )^{ 3 / 2 } } \cdot n^{ 3 / 2 } \bigr) \bigr|^2 \Bigr) , \end{equation} and for all $ n \in \N \cap [ 2 T/ \tau_1 , \infty ) $, $ s_1, \dots, s_n \in [0,T] $ and all measurable $ u \colon \R^n \times C( [T/n , T ] , \R ) \to \R $ we have $[ 2^{ 3 / 2 }\, C \cdot n^3 / T^{ 3 / 2 } ,\infty)\subset\psi(\R)$ and \begin{equation} \label{eq:cor1_eq3} \EE\Big[ \bigl| X^{ \psi }_4( T ) - u\big( W( s_1 ), \dots , W( s_n ) , ( W(s) )_{ s \in [ \nicefrac{ T }{ n } , T ] } \big) \bigr| \Big] \geq c \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \psi^{ - 1 }\bigl( \tfrac{ 2^{ 3 / 2 } \, C }{ T^{ 3 / 2 } } \cdot n^3 \bigr) \bigr|^2 \Bigr) . \end{equation} \end{cor} \begin{proof} Let $ n \in \N $ with $ T / n \leq \tau_1 /2 $ and let $ u \colon C( [ T / n , T ] , \R ) \to \R $ be a measurable mapping. 
Then Theorem~\ref{t1} with $t_0=0$ and $t_1=T/n$ implies $[C \cdot n^{ 3 / 2 }/ T^{ 3 / 2 },\infty)\subset\psi(\R)$ and \begin{equation} \begin{split} \EE\Big[ \big| X_4^{ \psi }( T ) - u\big( ( W(s) )_{ s \in [ \nicefrac{ T }{ n } , T ] } \big) \big| \Big] & \geq c \cdot \exp\!\left( - \tfrac{ 2 }{ \beta } \cdot \big| \psi^{-1}\big( \tfrac{ C }{ (T / n )^{ 3 / 2 } } \big) \big|^2 \right) . \end{split} \end{equation} This establishes \eqref{eq:cor1_eq1}. Next let $ n \in \N $, $ s_1, \dots, s_n \in [0,T] $ and let $ u \colon \R^{ n + 2 } \to \R $ be a measurable mapping. Then there exist $ \hat{s}_0, \hat{s}_1, \dots, \hat{s}_{ n + 1 } \in [0,T] $ such that $ 0 = \hat{s}_0 \leq \hat{s}_1 \leq \dots \leq \hat{s}_{ n + 1 } $ and $ \{ \hat{s}_0 , \hat{s}_1, \dots, \hat{s}_{ n + 1 } \} \supseteq \{ s_1, \dots, s_n,\tau_1/2 \} $. In particular, there exists $ i \in \{ 1, 2, \dots, n+1 \} $ such that \begin{equation} \label{eq:delta_hat_s} \hat s_i \le \tfrac{\tau_1}{2}\quad\text{and}\quad \hat{s}_i - \hat{s}_{ i - 1 } \geq \tfrac{ \tau_1 }{ 2 \left( n + 1 \right) } . \end{equation} Using Theorem~\ref{t1} with $ t_0=\hat s_{i-1}$ and $t_1 = \hat s_i$ and the fact that $ \psi^{ - 1 } $ is increasing we conclude that $[ 8C \,n^{ 3 / 2 }/ \tau_1 ^{ 3 / 2 },\infty)\subset [ C (2(n+1))^{3/2}/ \tau_1 ^{ 3 / 2 },\infty) \subset [ C / (\hat{s}_i - \hat{s}_{i-1})^{ 3 / 2 },\infty)\subset\psi(\R) $ and \begin{equation} \begin{split} & \EE\Big[ \bigl| X_4^{ \psi }(T) - u\big( W( \hat{s}_0 ) , W( \hat{s}_1 ) , \dots , W( \hat{s}_n ) , W( \hat{s}_{ n + 1 } ) \big) \bigr| \Big] \\ & \qquad\qquad \geq c \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \psi^{ - 1 }\bigl( \tfrac{ C }{ \left( \hat{s}_i - \hat{s}_{ i - 1 } \right)^{ 3 / 2 } } \bigr) \bigr|^2 \Bigr) \geq c \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \psi^{ - 1 }\bigl( \tfrac{ 8 \, C }{ \tau_1 ^{ 3 / 2 } } \cdot n^{ 3 / 2 } \bigr) \bigr|^2 \Bigr) . \end{split} \end{equation} This implies \eqref{eq:cor1_eq2}. 
The proof of \eqref{eq:cor1_eq3} is analogous to the proofs of \eqref{eq:cor1_eq1} and \eqref{eq:cor1_eq2}. \end{proof} In Lemma~\ref{lem:speed_psi} below we characterize a non-polynomial decay of the lower bounds in \eqref{eq:cor1_eq1}, \eqref{eq:cor1_eq2}, and \eqref{eq:cor1_eq3} in Corollary~\ref{cor1} in terms of an exponential growth property of the function $\psi$. To do so, we recall the following elementary fact. \begin{lemma} \label{lem:limits} Let $ \varphi_1 \colon \R \to [0,\infty) $ be non-decreasing, let $ \varphi_2 \colon \R \to [0,\infty) $ be non-increasing, and assume that $ \liminf_{ \N \ni n \to \infty } \left[ \varphi_1 ( n ) \cdot \varphi_2( n +1) \right] = \infty $. Then $ \liminf_{ \R \ni x \to \infty } \left[ \varphi_1( x ) \cdot \varphi_2( x ) \right] = \infty $. \end{lemma} \begin{proof} By the properties of $ \varphi_1 $ and $ \varphi_2 $ we have for all $ x \in \R $ that $ \varphi_1(x) \cdot \varphi_2(x) \ge \varphi_1( \lfloor x \rfloor ) \cdot \varphi_2( \lfloor x \rfloor + 1 ) $. Hence \begin{equation} \liminf_{ \R \ni x \to \infty } \left[ \varphi_1( x ) \cdot \varphi_2( x ) \right] \ge \liminf_{ \N \ni n \to \infty } \left[ \varphi_1( n ) \cdot \varphi_2( n + 1 ) \right] = \infty , \end{equation} which completes the proof. \end{proof} \begin{rem} We note that in general it is not possible to replace in Lemma~\ref{lem:limits} the assumption $ \displaystyle{\liminf_{ \N \ni n \to \infty } \big[ \varphi_1( n ) \cdot \varphi_2( n + 1 ) \big] = \infty} $ by the weaker assumption $ \displaystyle\liminf_{ \N \ni n \to \infty } \left[ \varphi_1( n ) \cdot \varphi_2( n ) \right] = \infty $.
Indeed, using suitable mollifiers one can construct $ \varphi_1,\varphi_2\in C^\infty(\R, [0,\infty)) $ such that $\varphi_1$ is non-decreasing with $ \forall \, n \in \Z \,\, \forall \, x \in [ n, n + 1/2 ] \colon \varphi_1( x ) = \exp\!\big( ( n + 1/2 )^2 \big) $ and such that $\varphi_2$ is non-increasing with $ \forall \, n \in \Z \,\, \forall \, x \in [ n - 1 / 2 , n ] \colon \varphi_2( x ) = \exp( - n^2 ) $. Then \begin{equation} \begin{aligned}\label{cc45} \liminf_{ \N \ni n \to \infty } \left[ \varphi_1( n ) \cdot \varphi_2( n ) \right] & = \liminf_{ \N \ni n \to \infty } \exp\!\big( (n + 1/2)^2 - n^2 \big) = \infty,\\ \liminf_{ \N \ni n \to \infty } \left[ \varphi_1( n ) \cdot \varphi_2( n + 1 ) \right] & = \liminf_{ \N \ni n \to \infty } \exp\!\big( ( n + 1/2)^2 - ( n + 1)^2 \big) = 0,\\ \liminf_{ \R \ni x \to \infty } \left[ \varphi_1( x ) \cdot \varphi_2( x ) \right] & \le \liminf_{ \N \ni n \to \infty } \left[ \varphi_1( n + 1/2 ) \cdot \varphi_2( n + 1/2 ) \right] = 0. \end{aligned} \end{equation} \end{rem} \begin{lemma} \label{lem:speed_psi} Let $ \eta_1, \eta_2, \eta_3 \in (0,\infty) $ and let $ \psi \colon \R \to (0,\infty) $ be strictly increasing and continuous with $ \liminf_{ x \to \infty } \psi( x ) = \infty $. Then $ \forall \, q \in (0,\infty) \colon \liminf_{ \N \ni n \to \infty } $ $ \big[ n^q \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 n^{ \eta_3 } ) \right|^2 \big) \big] = \infty $ if and only if $ \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \left[ \psi(x) \cdot \exp( - q x^2 ) \right] = \infty $. 
\end{lemma} \begin{proof} We use Lemma~\ref{lem:limits} with $ \varphi_1(x) = x^q $ and $ \varphi_2(x) = \exp\!\big(- \eta_1 \left| \psi^{ - 1 }( \eta_2 x^{ \eta_3 } ) \right|^2 \big) $ for $ x \in \R $ to obtain \begin{equation} \begin{aligned} & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \N \ni n \to \infty } \big[ n^q \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 n^{ \eta_3 } ) \right|^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x^q \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 x^{ \eta_3 } ) \right|^2 \big) \big] = \infty \Big). \end{aligned} \end{equation} Furthermore, \begin{equation} \begin{split} & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x^q \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 x^{ \eta_3 } ) \right|^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x^{ \eta_3 q } \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 x^{ \eta_3 } ) \right|^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x^q \cdot \exp\!\big( - \eta_1 \left| \psi^{ - 1 }( \eta_2 x ) \right|^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x \cdot \exp\!\big( - \tfrac{ \eta_1 }{ q } \left| \psi^{ - 1 }( \eta_2 x ) \right|^2 \big) \big] = \infty \Big)\\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x \cdot \exp\!\big( - \tfrac{ \eta_1 }{ q } \left| \psi^{ - 1 }( x ) \right|^2 \big) \big] = \infty \Big) .
\end{split} \end{equation} Using the properties of $\psi$ we have \begin{equation} \begin{split} & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ x \cdot \exp\!\big( - \tfrac{ \eta_1 }{ q } \left| \psi^{ - 1 }( x ) \right|^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ \psi( x ) \cdot \exp\!\big( - \tfrac{ \eta_1 }{ q } x^2 \big) \big] = \infty \Big) \\ \Leftrightarrow \; & \Big( \forall \, q \in (0,\infty) \colon \liminf_{ \R \ni x \to \infty } \big[ \psi( x ) \cdot \exp\!\big( - q x^2 \big) \big] = \infty \Big) , \end{split} \end{equation} which completes the proof. \end{proof} As an immediate consequence of \eqref{eq:cor1_eq3} in Corollary~\ref{cor1} and Lemma~\ref{lem:speed_psi} we get a non-polynomial decay of the error of any strong approximation of $X^\psi(T)$ based on $ n \in \N $ evaluations of the driving Brownian motion $W$ and the path of $W$ starting from time $T/n$ if $\psi$ satisfies the exponential growth condition stated in Lemma~\ref{lem:speed_psi}. \begin{cor} \label{cor1b} Assume the setting in Section~\ref{sec:setting}, let $ \beta \in (0,\infty) $ be given by $ \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds $, and assume that $ \psi \in C^{ \infty }( \R, (0,\infty) ) $ is strictly increasing with the property that $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $ and $ \forall \, q \in (0,\infty) \colon \liminf_{ x \to \infty } \left[ \psi( x ) \cdot \exp( - q x^2 ) \right] = \infty $. Then for all $ q \in (0,\infty) $ we have \begin{equation} \liminf_{ n \to \infty } \biggl( n^q \cdot \inf_{ \substack{ u \colon \R^n \times C( [ T / n , T ] , \R ) \to \R \\ \text{measurable, } s_1, \dots, s_n \in [0,T] } } \EE\Big[ \bigl| X^{ \psi }_4(T) - u\big( W( s_1 ) , \dots, W( s_n ) , ( W(s) )_{ s \in [ \nicefrac{ T }{ n }, T ] } \big) \bigr| \Big] \bigg) = \infty .
\end{equation} \end{cor} The following result shows that the smallest possible error for strong approximation of $X^\psi(T)$ based on $ n \in \N $ evaluations of the driving Brownian motion $W$ and the path of $W$ starting from time $T/n$ may decay arbitrarily slow. \begin{cor} \label{cor2} Assume the setting in Section~\ref{sec:setting}, let $ \beta \in (0,\infty) $ be given by $ \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds, $ and let $( a_n)_{n\in\N} \subset (0,\infty) $ satisfy $ \limsup_{ n \to \infty } a_n = 0 $. Then there exist a real number $ \kappa \in (0,\infty) $ and a strictly increasing function $ \psi \in C^{ \infty }( \R , (0,\infty) ) $ with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $ such that for all $ n \in \N $, $ s_1, s_2, \dots, s_n \in [0,T] $ and all measurable $ u \colon \R^n \times C\big( [ T/n , T ],\R \big) \to \R $ we have \begin{equation} \EE\Big[ \big| X^{ \psi }_4( T ) - u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [T/n,T] } \big) \big| \Big] \geq \kappa\cdot a_n . \end{equation} \end{cor} \begin{proof} Without loss of generality we may assume that the sequence $ ( a_n )_{ n \in \N } $ is strictly decreasing. Let $c, C\in (0,\infty) $ be given by \eqref{newconst} and put $ \tilde{C} = 2^{ 3 / 2 } C / T^{ 3 / 2 } $. Choose $ n_0 \in \N \cap [ \nicefrac{ 2 T }{ \tau_1 } , \infty ) $ such that for all $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation}\label{vr} a_n < 1 < \tilde{C} \cdot n^{ 3 } \qquad \text{and} \qquad \tfrac{ \beta }{ 2 } \ln\!\big( \tfrac{ 1 }{ a_n } \big) > 2 \beta, \end{equation} and let $ ( b_n )_{ n \in \{ n_0 - 1, n_0 , \dots \} } \subset (0,\infty) $ be such that $ b_{ n_0 - 1 } = \sqrt{ 2 \beta } $ and such that for all $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation} b_n = \Big[ \tfrac{ \beta }{ 2 } \ln\!\big( \tfrac{ 1 }{ a_n } \big) \Big]^{ 1 / 2 } . 
\end{equation} Note that $ ( b_n )_{ n \in \{ n_0 - 1, n_0 , \dots \} } $ is strictly increasing and satisfies $ \lim_{ n \to \infty } b_n = \infty $. Next let $ \psi \colon \R \to (0,\infty) $ be the function with the property that for all $ n \in \{ n_0, n_0 + 1, \dots \} $, $ x \in \R $ we have \begin{equation} \psi(x) = \begin{cases} 1 - \exp\!\big( \frac{ 1 }{ ( x - b_{ n_0 - 1 } ) } \big) , & \text{if } x < b_{ n_0 - 1 } , \\[1ex] 1 , & \text{if } x = b_{ n_0 - 1 } , \\[1ex] 1 + \displaystyle{ \frac{ \tilde{C} \cdot n_0^{ 3 } - 1 }{ 1 + \exp\!\big( \frac{ 1 }{ ( x - b_{ n_0 - 1 } ) } - \frac{ 1 }{ ( b_{n_0} - x ) } \big) } } , & \text{if } x \in ( b_{ n_0 - 1 } , b_{n_0}), \\[1ex] \tilde{C} \cdot n^{ 3 } , & \text{if } x = b_n \text{ and } n\ge n_0, \\[1ex] \tilde{C} \cdot (n-1)^{ 3 } + \displaystyle{ \frac{ \tilde{C} \cdot n^{ 3 } - \tilde{C} \cdot (n-1)^{ 3 } }{ 1 + \exp\!\big( \frac{ 1 }{ ( x - b_{ n - 1 } ) } - \frac{ 1 }{ ( b_n - x ) } \big) } } , & \text{if } x \in ( b_{ n - 1 } , b_n ) \text{ and } n > n_0 . \end{cases} \end{equation} Then $ \psi $ is strictly increasing, positive, and infinitely often differentiable and $ \psi $ satisfies $ \psi\big( \sqrt{ 2 \beta } \big) = 1, $ $ \liminf_{ x \to \infty } \psi( x ) = \infty $, and $ \psi( \R ) = (0,\infty) $. In the next step let $ \varepsilon_n \in [0,\infty) $, $ n \in \N $, be the real numbers with the property that for all $ n \in \N $ we have \begin{equation} \varepsilon_n = \inf_{ s_1, \dots, s_n \in [0,T] }\,\, \inf_{ \substack{ u \colon \R^n \times C( [ T / n , T ] , \R ) \to \R \\ \text{measurable} } } \EE\Big[ \big| X_4^{ \psi }(T) - u\big( W( s_1 ) , \dots, W( s_n ) , ( W(s) )_{ s \in [ \nicefrac{ T }{ n } , T ] } ) \big| \Big]. 
\end{equation} Estimate~\eqref{eq:cor1_eq3} in Corollary \ref{cor1} yields that for all $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation} \label{eq:applicaton_cor1} \varepsilon_n \geq c \cdot \exp\!\left( - \tfrac{ 2 }{ \beta } \cdot \big| \psi^{ - 1 }\big( \tilde{C} \cdot n^{ 3 } \big) \big|^2 \right) = c \cdot\exp\!\left( - \tfrac{ 2 }{ \beta } \cdot \left| b_n \right|^2 \right)= c \cdot a_n . \end{equation} Since the sequence $(\varepsilon_n)_{n\in\N}$ is non-increasing, we have for every $ n \in \{ 1, 2, \dots, n_0 \} $ that $ \varepsilon_n \ge \varepsilon_{n_0} \ge c \cdot a_{ n_0 } $. We therefore conclude that for all $n\in\N$ we have \begin{equation}\label{ds3} \varepsilon_n \ge c \cdot \min\{1,a_{n_0}/a_n\}\cdot a_n \ge \frac{c \,a_{n_0}}{a_1}\cdot a_n, \end{equation} which completes the proof of the corollary with $\kappa= c\cdot a_{n_0}/a_1$. \end{proof} Next we extend the result in Corollary \ref{cor2} to approximations that may use finitely many evaluations of the Brownian path as well as the whole Brownian path starting from some arbitrarily small positive time. \begin{cor} \label{cor4} Assume the setting in Section~\ref{sec:setting}, let $ \beta \in (0,\infty) $ be given by $ \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds $, and let $( a_n)_{n\in\N} \subset (0,\infty) $ and $ ( \delta_n)_{n\in\N} \subset (0,\infty) $ satisfy $ \lim_{ n \to \infty } a_n = \lim_{ n \to \infty } \delta_n = 0 $. 
Then there exist a real number $ \kappa \in (0,\infty) $ and a strictly increasing function $ \psi \in C^{ \infty }( \R , (0,\infty) ) $ with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $ such that for all $ n \in \N $, $ s_1, s_2, \dots, s_n \in [0,T] $ and all measurable $ u \colon \R^n \times C\big( [ \delta_n , T ],\R \big) \to \R $ we have \begin{equation} \EE\Big[ \big| X^{ \psi }_4( T ) - u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [\delta_n,T] } \big) \big| \Big] \geq \kappa\cdot a_n . \end{equation} \end{cor} \begin{proof} Without loss of generality we may assume that the sequence $ (\delta_n )_{ n \in \N } $ is strictly decreasing. Let $ ( k_n )_{n\in\N} \subset (0,\infty) $ be the strictly increasing sequence of positive integers with the property that for all $ n \in \N $ we have \begin{equation}\label{nr1} k_n = \lceil \nicefrac{ T }{ \delta_n } \rceil + n. \end{equation} Moreover, let $ (\tilde a_n)_{n\in\N}\subset (0,\infty) $ be a sequence such that for all $ n \in \N $ we have \begin{equation}\label{nr2} \tilde a_{k_n} = a_n \end{equation} and $ \lim_{ m \to \infty } \tilde a_m = 0 $. Then Corollary \ref{cor2} implies that there exist a real number $ \kappa \in (0,\infty) $ and a strictly increasing function $ \psi \in C^{ \infty }( \R , (0,\infty) ) $ with $ \liminf_{ x \to \infty } \psi( x ) = \infty $ and $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $ such that for all $ n \in \N $, $ s_1, s_2, \dots, s_n \in [0,T] $ and all measurable $ \tilde u \colon \R^n \times C\big( [ \nicefrac{ T }{ n } , T ],\R \big) \to \R $ we have \begin{equation}\label{nr3} \EE\Big[ \big| X^{ \psi }_4( T ) - \tilde u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [T/n,T] } \big) \big| \Big] \geq \kappa\cdot \tilde a_n . \end{equation} Let $ n \in \N $, let $ u \colon \R^n \times C\big( [ \delta_n , T ],\R \big) \to \R $ be a measurable mapping, and let $ s_1, s_2, \dots, s_n \in [0,T] $. 
Note that \eqref{nr1} implies $ \delta_n \ge \nicefrac{ T }{ k_n } $ and $ k_n \ge n $. Put $ s_m = s_n $ for $ m \in \{ n+1, n+2, \dots, k_n \} $. Clearly, there exists a measurable mapping $ \tilde u \colon \R^{k_n} \times C\big( [ T/k_n , T ],\R \big) \to \R $ such that $ u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [\delta_n,T] } \big) = \tilde u\big( W( s_1 ), \dots, W( s_{k_n} ) , ( W(s) )_{ s \in [T/k_n,T] } \big) $. Hence, by \eqref{nr3} and by \eqref{nr2}, we have \begin{equation} \EE\Big[ \big| X^{ \psi }_4( T ) - u\big( W( s_1 ), \dots, W( s_n ) , ( W(s) )_{ s \in [\delta_n,T] } \big) \big| \Big] \geq \kappa\cdot \tilde a_{k_n} = \kappa\cdot a_n, \end{equation} which completes the proof. \end{proof} \section{Upper error bounds for the Euler-Maruyama scheme} \label{sec:upper_bounds} A classical method for strong approximation of SDEs is provided by the Euler-Maruyama scheme. In Theorem~\ref{t2} below we establish upper bounds for the root mean square errors of Euler-Maruyama approximations of $ X^{ \psi }(T) $ for the processes $ X^{ \psi } $, $ \psi \in C^{ \infty }( \R, (0,\infty) ) $, from Section~\ref{sec:setting}. In particular, it turns out that in the case of non-polynomial convergence the Euler-Maruyama approximation may still be asymptotically optimal, at least on a logarithmic scale; see Example \ref{ex34} below for details. We first provide some elementary bounds for tail probabilities of normally distributed random variables. \begin{lemma} \label{lem:gaussian} Let $ ( \Omega, \mathcal{A}, \PP ) $ be a probability space, let $ x \in \R $, and let $ Z \colon \Omega \to \R $ be a standard normal random variable. Then \begin{equation} \label{eq:PZ_0} \PP\big( Z \geq x \big) \leq \tfrac{ 1 }{ \sqrt{ 2 } } \cdot \exp \bigl(-\tfrac{ x | x | }{ 2 } \bigr) .
\end{equation} \end{lemma} \begin{proof} For every $y\in [0,\infty)$ we have \begin{equation} \begin{aligned} (y+x)^2 - x|x| - \tfrac{y^2}{2} & = \tfrac{1}{2}(y^2 + 4xy + 2x(x-|x|)) = \tfrac{1}{2}(y^2 + 4xy + 4x^2\1_{(-\infty,0]}(x)) \ge 0. \end{aligned} \end{equation} Hence \begin{equation} \begin{aligned} \PP(Z\ge x) & = \int_0^\infty \tfrac{ 1 }{ \sqrt{ 2 \pi } } \cdot \exp\bigl(-\tfrac{ (y+x)^2 }{ 2 }\bigr)\, dy \\ & \le \exp \bigl(-\tfrac{ x | x | }{ 2 } \bigr) \int_0^\infty \tfrac{ 1 }{ \sqrt{ 2 \pi } } \cdot\exp\bigl(-\tfrac{ y^2 }{ 4 }\bigr)\, dy = \tfrac{ 1 }{ \sqrt{ 2 } } \cdot \exp \bigl(-\tfrac{ x | x | }{ 2 } \bigr), \end{aligned} \end{equation} which completes the proof. \end{proof} \begin{lemma} \label{lem:gaussian2} Let $ ( \Omega, \mathcal{A}, \PP ) $ be a probability space, let $ \sigma \in [0,\infty) $, $ c \in (0,\infty) \cap [ \sigma, \infty ) $, and let $ Z \colon \Omega \to \R $ be a $ \mathcal{N}( 0, \sigma^2 ) $-distributed random variable. Then for all $ x \in \R $ we have \begin{equation} \label{eq:PZ_0_b} \PP\big( Z \geq x \big) \leq \exp\!\big( - \tfrac{ [ \max\{ x, 0 \} ]^2 }{ 2 c^2 } \big) . \end{equation} \end{lemma} \begin{proof} In the case $ \sigma = 0 $ we note that for all $ x \in \R $ we have \begin{equation} \PP\big( Z \geq x \big) = \mathbbm{1}_{ ( - \infty , 0 ] }( x ) \leq \exp\big( - \tfrac{ [ \max\{ x, 0 \} ]^2 }{ 2 c^2 } \big) . \end{equation} In the case $ \sigma > 0 $ we use Lemma~\ref{lem:gaussian} to obtain that for all $ x \in [0,\infty) $ we have \begin{equation} \PP\big( Z \geq x \big) = \PP\big( \tfrac{ Z }{ \sigma } \geq \tfrac{ x }{ \sigma } \big) \leq \tfrac{ 1 }{ \sqrt{ 2 } } \cdot \exp\!\big( - \tfrac{ x^2}{ 2 \sigma^2 } \big) \leq \exp\!\big( - \tfrac{ x^2 }{ 2 c^2 } \big) , \end{equation} and we note that for all $ x \in ( - \infty , 0 ) $ we have $ \PP\big( Z \geq x \big) \leq 1 = \exp\!\big( - \tfrac{ [ \max\{ x , 0 \} ]^2 }{ 2 c^2 } \big) $. This completes the proof. \end{proof} Next we relate exponential growth of a continuously differentiable function to exponential growth of its derivative.
\begin{lemma} \label{lem:existence} Let $\psi\in C^1(\R,\R)$ satisfy $ \forall \, q \in (0,\infty) \colon \liminf_{ x \to \infty } \left[ \psi( x ) \cdot \exp( - q x^2 ) \right] = \infty $ and assume that $ \psi' $ is non-decreasing. Then $ \forall \, q \in \R \colon \liminf_{ x \to \infty } \left[ \psi'( x ) \cdot \exp( - q x^2 ) \right] = \infty $. \end{lemma} \begin{proof} Since $ \forall \, q \in (0,\infty) \colon \liminf_{ x \to \infty } \big[ \psi( x ) \cdot \exp( - q x^2 ) \big] = \infty $, we have \begin{equation} \label{eq:psi_limit} \forall \, q \in \R \colon \liminf_{ x \to \infty } \left[ \psi( x ) \cdot \exp( - q x^2 ) \right] = \infty . \end{equation} By the fundamental theorem of calculus and the assumption that $ \psi' $ is non-decreasing we obtain for all $ x \in (0,\infty) $ that \begin{equation} \psi'(x) = \frac{ 1 }{ x } \int_0^x \psi'(x) \, dy \ge \frac{ 1 }{ x } \int_0^x \psi'(y) \, dy = \frac{ \psi(x) - \psi(0) }{ x } . \end{equation} Hence, using that for all $ q \in [1,\infty) $, $ x \in [0,\infty) $ we have $ 2 x \leq 1 + x^2 \leq \exp( q x^2 ) $, we obtain for all $ q \in [1,\infty) $ that \begin{equation} \begin{split} & \liminf_{ x \to \infty } \big[ \psi'(x) \cdot \exp( - q x^2 ) \big] \ge \liminf_{ x \to \infty } \left[ \frac{ \psi(x) - \psi(0) }{ x \cdot \exp( q x^2 ) } \right] \ge \liminf_{ x \to \infty } \left[ \frac{ \psi(x) - \frac{ 1 }{ 2 } \psi(x) }{ x \cdot \exp( q x^2 ) } \right] \\ & \qquad \qquad = \liminf_{ x \to \infty } \left[ \frac{ \psi(x) }{ 2 x \cdot \exp( q x^2 ) } \right] \ge \liminf_{ x \to \infty } \left[ \psi(x) \cdot \exp( - 2 q x^2 ) \right] = \infty . \end{split} \end{equation} In particular, the case $ q = 1 $ yields $ \liminf_{ x \to \infty } \big[ \psi'( x ) \cdot \exp( - x^2 ) \big] = \infty $ and thus $ \psi'( x ) > 0 $ for all sufficiently large $ x \in \R $. Since for all $ q \in (-\infty,1) $, $ x \in [0,\infty) $ we have $ \exp( - q x^2 ) \geq \exp( - x^2 ) $, it follows that for all $ q \in (-\infty,1) $ we have $ \liminf_{ x \to \infty } \big[ \psi'( x ) \cdot \exp( - q x^2 ) \big] = \infty $, which completes the proof. \end{proof} We turn to the analysis of the Euler-Maruyama scheme for strong approximation of SDEs in the setting of Section~\ref{sec:setting}.
\begin{theorem} \label{t2} Assume the setting in Section~\ref{sec:setting}, assume that $ \tau_1 < \tau_2 $, let $ \beta \in (0,\infty) $ be given by $ \beta = \int_{\tau_2}^{\tau_3} \left| g(s) \right|^2 ds $, let $ \delta \in (0,1) $, let $ \psi \in C^\infty(\R,(0,\infty)) $ be strictly increasing such that $ \psi\big( \sqrt{ 2 \beta } \big) = 1 $, such that $ \forall \, q \in (0,\infty) \colon \liminf_{ x \to \infty } \left[ \psi( x ) \cdot \exp( - q x^2 ) \right] = \infty $, and such that $ \psi' $ is strictly increasing, and let $ \widehat{X}^{ ( \psi, n ) } \colon \{ 0, 1, \dots, n \} \times \Omega \to \R^4 $, $ n \in \N $, satisfy for all $ n \in \N $, $ k \in \{ 0, 1, \dots, n - 1 \} $ that $\widehat X^{(\psi,n)}_0 = 0$ and \begin{equation}\label{Euler} \widehat{X}^{ (\psi,n) }_{ k + 1 } = \widehat{X}^{ (\psi,n) }_k + \mu^{ \psi }( \widehat{X}^{ (\psi,n) }_k ) \, \tfrac{ T }{ n } + \sigma( \widehat{X}^{ (\psi,n) }_k ) \, \big( W( \tfrac{ ( k + 1 ) T }{ n } ) - W( \tfrac{ k T }{ n } ) \big). \end{equation} Then there exist real numbers $ c \in (0,\infty) $ and $ n_0 \in \N $ such that $ \big[ | n_0 |^\delta , \infty \big) \subset \psi'(\R) $ and such that for every $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation} \label{eq:t2} \Big( \EE\Big[ \big\| X^{ \psi }( T ) - \widehat{X}_n^{ (\psi,n) } \big\|_{ \R^4 }^2 \Big] \Big)^{ 1 / 2 } \leq c \, \bigg[ \exp\!\left( - \tfrac{ 1 }{ c } \cdot \left| \psi^{ - 1 }( n^{ \delta } ) \right|^2 \right) + \exp\!\left( - \tfrac{ 1 }{ c } \cdot \left| ( \psi' )^{ - 1 }( n^{ \delta } ) \right|^2 \right) \bigg] .
\end{equation} \end{theorem} \begin{proof} Throughout this proof let $ \Delta W_j^n \colon \Omega \to \R $, $ j \in \{ 1, 2, \dots, n \} $, $ n \in \N $, be the mappings with the property that for all $ n \in \N $, $ j \in \{ 1, 2, \dots, n \} $ we have $ \Delta W_j^n = W( \frac{ j T }{ n } ) - W( \frac{ (j-1) T }{ n } ) $, let $ \beta_n \in \R $, $ n \in \N $, and $ \gamma_n \in \R $, $ n \in \N $, be the real numbers with the property that for all $ n \in \N $ we have \begin{equation} \gamma_n = \sum_{ j = 1 }^n \tfrac{ T }{ n } \cdot h\bigl( \tfrac{ (j-1) T }{ n } \bigr), \qquad \beta_n = \sum_{ j = 1 }^n \tfrac{ T }{ n } \cdot \bigl| g\bigl( \tfrac{ (j-1) T }{ n } \bigr) \bigr|^2 , \end{equation} and let $ \widehat{X}^{ ( \psi, n ) }_{ l, ( \cdot ) } \colon \{ 0, 1, \dots, n \} \times \Omega \to \R $, $ l \in \{ 1, 2, 3, 4 \} $, $ n \in \N $, be the stochastic processes with the property that for all $ n \in \N $, $ k \in \{ 0, 1, \dots, n \} $ we have $ \widehat X^{(\psi,n)}_k = (\widehat X^{(\psi,n)}_{1,k},\dots,\widehat X^{(\psi,n)}_{4,k}) $. 
By the properties of $f,g,h$ stated in Section \ref{sec:setting} and by the definition of $\mu^\psi$ and $\sigma$ (see \eqref{coeff}), we have for all $ n \in \N $, $ k \in \{ 0, 1, \dots, n \} $ that $ \widehat X^{(\psi,n)}_{1,k} = \tfrac{ k\cdot T }{ n } $ and \begin{equation} \begin{aligned} \widehat X^{(\psi,n)}_{2,k} & = \sum_{j=1}^k f\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \Delta W_j^n = \sum_{j=1}^{\min\{k,\lceil n\tau_1/T\rceil\}} f\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \Delta W_j^n , \\ \widehat{X}^{ (\psi,n) }_{ 3, k } & = \sum_{ j = 1 }^k g\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \Delta W_j^n = \sum_{j=\lfloor n\tau_2/T\rfloor+2}^{\min\{k,\lceil n\tau_3/T\rceil\}} g\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \Delta W_j^n , \\ \widehat{X}^{ (\psi,n) }_{ 4, k } & = \sum_{ j = 1 }^k \tfrac{ T }{ n } \cdot h\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2 , j - 1 } \cdot \psi( \widehat{X}^{ (\psi,n) }_{ 3 , j - 1 } ) \big)\\ & = \sum_{ j = \lfloor n\tau_3/T\rfloor+2 }^k \tfrac{ T }{ n } \cdot h\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2 , j - 1 } \cdot \psi( \widehat{X}^{ (\psi,n) }_{ 3 , j - 1 } ) \big) . \end{aligned} \end{equation} In particular, for all $ n \in \N $, $ k \in [ \frac{ n \tau_1 }{ T } , \infty ) \cap \{ 1, 2, \dots, n \} $ we have $ \widehat X^{(\psi,n)}_{2,k} = \widehat X^{(\psi,n)}_{2,n} $ and for all $ n \in \N $, $ k \in [ \frac{ n \tau_3 }{ T } , \infty ) \cap \{ 1, 2, \dots, n \} $ we have $\widehat X^{(\psi,n)}_{3,k} = \widehat X^{(\psi,n)}_{3,n}$.
Therefore, for all $ n \in \N $ we have \begin{equation}\label{k4} \widehat{X}^{ (\psi,n) }_{ 4, n } = \sum_{ j = \lfloor n\tau_3/T\rfloor+2 }^n \tfrac{ T }{ n } \cdot h\bigl( \tfrac{ (j-1) T }{ n } \bigr) \cdot \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2 , n } \cdot \psi( \widehat{X}^{ (\psi,n) }_{ 3 ,n } ) \big) = \gamma_n \cdot \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2 , n } \cdot \psi( \widehat{X}^{ (\psi,n) }_{ 3 ,n } ) \big). \end{equation} We separately analyze the componentwise mean square errors \begin{equation} \varepsilon_{ i, n } = \EE\bigl[ |X^\psi_i(T)-\widehat X^{(\psi,n)}_{i,n} |^2 \bigr] \end{equation} for $i\in\{1,\dots,4\}$, $ n \in \N $. Clearly, for all $ n \in \N $ we have $\varepsilon_{ 1, n } = 0 $. Moreover, It\^{o}'s isometry shows that for all $ n \in \N $ we have \begin{equation} \label{eq:second_component} \begin{aligned} \varepsilon_{ 2, n } & = \EE\!\left[ \left| \sum_{ j = 1 }^{ n } \int_{ ( j - 1 ) T / n }^{ j T / n } \big( f(s) - f( \tfrac{ (j-1) T }{ n } ) \big) \, dW(s) \right|^2 \right] = \sum_{ j = 1 }^n \int_{ ( j - 1 ) T / n }^{ j T / n } \big| f(s) - f( \tfrac{ (j-1) T }{ n } ) \big|^2 \, ds\\ & \leq \sup_{ t \in [0, \tau_1 ] } | f'(t) |^2 \cdot \sum_{ j = 1 }^n \int_{ ( j - 1 ) T / n }^{ j T / n } \big| s - \tfrac{ (j-1) T }{ n } \big|^2 \, ds = \frac{ T^3 }{ 3 n^2 } \cdot \sup_{ t \in [0, \tau_1 ] } |f'(t)|^2, \end{aligned} \end{equation} and, similarly, \begin{equation} \label{eq:third_component} \varepsilon_{ 3, n } \le \frac{ T^3 }{ 3 n^2 } \cdot \sup_{ t \in [\tau_2, \tau_3 ] } |g'(t)|^2, \qquad \EE\bigl[|\widehat X^{(\psi,n)}_{2,n}|^2\bigr] \le T\cdot \sup_{ t \in [0, \tau_1 ] } |f(t)|^2 . \end{equation} We turn to the analysis of $\varepsilon_{ 4, n } $, $ n \in \N $. For this let $ \gamma \in \R $ be given by $ \gamma = \int_{ \tau_3 }^T h(s) \, ds $ (see \eqref{alphas}).
From \eqref{k4} we obtain \begin{equation} \label{cv1} \varepsilon_{ 4, n } \leq 2 \left| \gamma \right|^2 \cdot \EE\!\left[ \big| \cos\!\big( X^{ \psi }_2( T ) \cdot \psi\big( X^{ \psi }_3( T ) \big) \big) - \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2 , n } \cdot \psi( \widehat{X}^{ (\psi,n) }_{ 3 , n } ) \big) \big|^2 \right] + 2 \left| \gamma - \gamma_n \right|^2 . \end{equation} Clearly, for all $ n \in \N $ we have \begin{equation} \label{eq:gamma_estimate} \begin{aligned} \left| \gamma - \gamma_n \right| & = \biggl| \sum_{j=1}^n\int_{ ( j - 1 ) T / n }^{ j T / n} \bigl(h(s)-h\big( \tfrac{ (j-1) T }{ n } \big)\bigr)\, ds\biggr| \\ & \leq \sup_{ t \in [\tau_3, T ] } | h'(t) | \cdot \sum_{ j = 1 }^n \int_{ ( j - 1 ) T / n }^{ j T / n } \big| s - \tfrac{ (j-1) T }{ n } \big| \, ds = \frac{ T^2 }{ 2 n } \cdot \sup_{ t \in [\tau_3,T ] } |h'(t)|. \end{aligned} \end{equation} Using a trigonometric identity, the fact that $ \forall \, x \in \R \colon | \sin(x) |\le \min\{ 1, |x| \} $, inequality~\eqref{eq:second_component}, the fact that $ \PP_{X_3^\psi(T)} = \mathcal N(0,\beta) $, a standard estimate of Gaussian tail probabilities, see, e.g., \cite[Lemma~22.2]{Klenke2008}, and the fact that $\psi^{-1}(n^\delta)\ge \psi^{-1}(1) = \sqrt{2\beta}$ we get for all $ n \in \N $ that \begin{equation}\label{nummer} \begin{aligned} & \EE\Big[ \big| \cos\!\big( X^{ \psi }_2( T ) \cdot \psi\big( X^{ \psi }_3( T ) \big) \big) - \cos\!\big( \widehat{X}^{ (\psi,n) }_{ 2, n } \cdot \psi\big( X^{ \psi }_3( T ) \big) \big) \big|^2 \Big] \\ & \quad = 4 \cdot \EE\bigg[ \Big| \sin\!\Big( \tfrac{ 1 }{ 2 } \, \big( X^{ \psi }_2( T ) - \widehat{X}^{ (\psi,n) }_{ 2, n } \big) \, \psi\big( X^{ \psi }_3( T ) \big) \Big) \sin\!\Big( \tfrac{ 1 }{ 2 } \, \big( X^{ \psi }_2( T ) + \widehat{X}^{ (\psi,n) }_{ 2, n } \big) \, \psi\big( X^{ \psi }_3( T ) \big) \Big) \Big|^2 \bigg] \\ & \quad \le 4 \cdot \EE\bigg[ \Big| \sin\!\Big( \tfrac{ 1 }{ 2 } \, \big( X^{ \psi }_2( T ) - \widehat{X}^{ (\psi,n) }_{ 2, n } \big) \, \psi\big( X^{ \psi }_3( T ) \big) \Big) \Big|^2 \bigg] \\ & \quad \le \EE\bigg[ \big| X^{ \psi }_2( T ) - \widehat{X}^{ (\psi,n) }_{ 2, n } \big|^2 \, \big| \psi\big( X^{ \psi }_3( T ) \big) \big|^2 \, \mathbbm{1}_{ \{ X^{ \psi }_3( T ) \leq \psi^{ - 1 }( n^{ \delta } ) \} } \bigg] + 4 \cdot \PP\Big( X^{ \psi }_3( T ) > \psi^{ - 1 }( n^{ \delta } ) \Big) \\ & \quad \le n^{ 2 \delta } \, \EE\Big[ \big| X^{ \psi }_2( T ) - \widehat{X}^{ (\psi,n) }_{ 2, n } \big|^2 \Big] + \frac{ 4 \sqrt{\beta} }{ \psi^{ - 1 }( n^{ \delta } ) \sqrt{ 2 \pi } } \cdot \exp\!\left( - \tfrac{ 1 }{ 2 \beta } \cdot \left| \psi^{ - 1 }( n^{ \delta } ) \right|^2 \right) \\ & \quad \le \frac{ T^3 }{ 3 \, n^{ 2 ( 1 - \delta ) } } \sup_{ t \in [0, \tau_1 ] } |f'(t)|^2 + \frac{ 2 }{ \sqrt{ \pi } } \cdot \exp\!\left( - \tfrac{ 1 }{ 2 \beta } \cdot \left| \psi^{ - 1 }( n^{ \delta } ) \right|^2 \right) . \end{aligned} \end{equation} By Lemma \ref{lem:existence} we have $\lim_{x\to\infty}\psi'(x)=\infty$. Hence, there exists $ n_1 \in \N $ such that $ \big[ | n_1 |^\delta, \infty \big) \subset \psi'\big( [0, \infty) \big) $. Put \begin{equation} n_0 = \max\Bigl\{n_1,\bigl\lceil\tfrac{T}{\tau_2-\tau_1}\bigr\rceil\Bigr\} \end{equation} and let $ n \in \{ n_0 , n_0 + 1 , \dots \} $. Then $(W(s))_{s\in[0,\lceil n\tau_1/T\rceil\cdot T/n]}$ and $(W(s)-W(\tau_2))_{s\in[\tau_2,T]}$ are independent, which implies independence of the random variables $\widehat X^{(\psi,n)}_{2,n}$ and $X_3^\psi(T)-\widehat X_{3,n}^{(\psi,n)}$.
Using the latter fact as well as the fact that $ \psi' $ is strictly increasing and the estimates in \eqref{eq:third_component}, we may proceed analogously to the derivation of \eqref{nummer} to obtain \begin{equation}\label{nummer2} \begin{aligned} & \EE\Bigl[ \bigl| \cos\bigl( \widehat{X}_{ 2, n }^{ (\psi,n) } \cdot \psi\bigl( X_3^{ \psi }( T ) \bigr) \bigr) - \cos\bigl( \widehat{X}_{ 2, n }^{ (\psi,n) } \cdot \psi\big( \widehat{X}_{ 3, n }^{ (\psi,n) } \big) \bigr) \bigr|^2 \Bigr] \\ & \qquad \leq 4 \cdot \EE\Bigl[ \bigl| \sin\bigl( \tfrac{ 1 }{ 2 } \cdot \widehat{X}^{ (\psi,n) }_{ 2, n } \cdot \bigl[ \psi\big( X^{ \psi }_3( T ) \big) - \psi\big( \widehat{X}^{ (\psi, n) }_{ 3, n } \big) \bigr] \bigr) \big|^2 \Bigr] \\ &\qquad \leq \EE\Big[ \big| \widehat{X}^{ (\psi,n) }_{ 2, n } \big|^2 \cdot \big| \psi\big( X^{ \psi }_3( T ) \big) - \psi\big( \widehat{X}^{ (\psi,n) }_{ 3, n } \big) \big|^2 \cdot \mathbbm{1}_{ \{ \psi'( \max\{ X^{ \psi }_3( T ) , \widehat{X}^{ (\psi,n) }_{ 3, n } \} ) \le n^{ \delta } \} } \Big] \\ & \qquad\quad\quad + 4 \cdot \PP\big( \psi'\big( \max\{ X^{ \psi }_3( T ) , \widehat{X}^{ (\psi,n) }_{ 3, n } \} \big) > n^{ \delta } \big) \\ &\qquad \leq n^{ 2 \delta } \cdot \EE\Big[ \big| \widehat{X}^{ (\psi,n) }_{ 2, n } \big|^2 \Big] \cdot \EE\Big[ \big| X^{ \psi }_3( T ) - \widehat{X}^{ (\psi,n) }_{ 3, n } \big|^2 \Big] \\ & \qquad\quad\quad + 4 \cdot \PP\big( \max\{ X^{ \psi }_3( T ) , \widehat{X}^{ (\psi,n) }_{ 3, n } \} > (\psi' )^{ - 1 }( n^{ \delta }) \big)\\ & \qquad \leq \frac{ T^4 }{ 3 \, n^{ 2 ( 1 - \delta ) } } \cdot \sup_{ t \in [ 0 , \tau_1 ] } | f(t) |^2 \cdot \sup_{ t \in [ \tau_2 , \tau_3 ] } | g'(t) |^2 \\ &\qquad\quad\quad + 4 \cdot \PP\big( X^{ \psi }_3( T ) > ( \psi' )^{ - 1 }( n^{ \delta }) \big) + 4 \cdot \PP\big( \widehat{X}^{ (\psi,n) }_{ 3, n } > ( \psi' )^{ - 1 }( n^{ \delta }) \big) .
\end{aligned} \end{equation} Note that $\PP_{ X_{3}^{\psi}(T)} = \mathcal N(0,\beta)$ and $\PP_{\widehat X_{3,n}^{(\psi,n)}} = \mathcal N(0,\beta_n)$ and $\sup_{m\in\N}\beta_m \in [\beta,\infty)$. We may therefore apply Lemma \ref{lem:gaussian2} to conclude \begin{equation}\label{vv34} \begin{aligned} & \PP\big( X^{ \psi }_3( T ) > ( \psi' )^{ - 1 }( n^{ \delta }) \big) + \PP\big( \widehat{X}^{ (\psi,n) }_{ 3, n } > ( \psi' )^{ - 1 }( n^{ \delta }) \big)\\ & \qquad \le \exp\bigl(-\tfrac{|(\psi')^{-1}(n^\delta)|^2}{2\beta} \bigr)+ \exp\bigl(-\tfrac{|(\psi')^{-1}(n^\delta)|^2}{2\sup_{m\in\N}\beta_m} \bigr). \end{aligned} \end{equation} Combining \eqref{cv1}--\eqref{nummer} and \eqref{nummer2}--\eqref{vv34} ensures that there exist $c_1,c_2\in (0,\infty)$ such that for all $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation}\label{neuenummer} \begin{aligned} \varepsilon_{4,n}\leq c_1\cdot \Bigl(\tfrac{1}{n^{2(1-\delta)}}+ \exp\bigl(-c_2 \cdot\bigl|\psi^{-1}(n^\delta)\bigr|^2\bigr)+ \exp\bigl(-c_2 \cdot\bigl|(\psi')^{-1}(n^\delta)\bigr|^2\bigr)\Bigr). \end{aligned} \end{equation} By assumption we have for all $ q \in (0,\infty) $ that $ \liminf_{ x \to \infty } \big[ \psi( x ) \cdot \exp( - q x^2 ) \big] = \infty $. Hence, Lemma~\ref{lem:speed_psi} ensures that there exists $ c_3 \in (0,\infty) $ such that for all $ n \in \N $ we have \begin{equation} \label{eq:pol_estimate} \tfrac{ 1 }{ n^{ ( 1 - \delta ) } } \leq c_3 \cdot \exp\bigl( - c_2 \left| \psi^{ - 1 }( n^{ \delta } ) \right|^2 \bigr). \end{equation} Combining \eqref{eq:second_component}, \eqref{eq:third_component}, \eqref{neuenummer}, and \eqref{eq:pol_estimate} finishes the proof.
\end{proof} \begin{ex}\label{ex34} Assume the setting in Section~\ref{sec:setting}, assume that $ \tau_1 < \tau_2 $, let $ \beta \in (0,\infty) $ be given by $ \beta = \int_{ \tau_2 }^{ \tau_3 } \left| g(s) \right|^2 ds $, let $ \psi_l \colon \R \to (0,\infty) $, $ l \in \{ 1, 2 \} $, be the functions such that for all $ x \in \R $ we have \begin{align} \label{eq:example_psi1} \psi_1(x) &= \exp\!\left( x^3 + 2 x - ( 2 \beta )^{ 3 / 2 } - 2( 2 \beta )^{ 1 / 2 } \right) , \\ \psi_2(x) &= \exp\!\left(x \exp\!\left( x^2 +1\right) - ( 2 \beta )^{ 1 / 2 }\exp( 2 \beta +1) \right) , \end{align} and for every $ n \in \N $, $ l \in \{ 1, 2 \} $ let $ \widehat{X}^{ ( \psi_l, n ) } \colon \{ 0, 1, \dots, n \} \times \Omega \to \R^4 $ be the mapping such that for all $ k \in \{ 0, 1, 2, \dots, n - 1 \} $ we have $ \widehat X^{(\psi_l,n)}_0 = 0 $ and \begin{equation} \widehat{X}^{ (\psi_l,n) }_{ k + 1 } = \widehat{X}^{ (\psi_l,n) }_k + \mu^{ \psi_l }( \widehat{X}^{ (\psi_l,n) }_k ) \, \tfrac{ T }{ n } + \sigma( \widehat{X}^{ (\psi_l,n) }_k ) \, \big( W( \tfrac{ ( k + 1 ) T }{ n } ) - W( \tfrac{ k T }{ n } ) \big) . \end{equation} Clearly, we have $ \psi_1, \psi_2 \in C^{ \infty }( \R , (0,\infty) ) $ and $ \psi_1\big( \sqrt{ 2 \beta } \big) = \psi_2\big( \sqrt{ 2 \beta } \big) = 1 $. Moreover, for all $ q \in (0,\infty) $ we have \begin{equation} \liminf_{ x \to \infty } \big[ \psi_1( x ) \cdot \exp( - q x^2 ) \big] = \liminf_{ x \to \infty } \big[ \psi_2( x ) \cdot \exp( - q x^2 ) \big] = \infty.
\end{equation} Furthermore, for all $ x \in \R $ we have \begin{equation} \begin{aligned} \psi_1'( x ) & = \left( 3 x^2 + 2 \right)\cdot \psi_1( x ) > 0 ,\\ \psi_1''( x ) & = \bigl( 6 x + ( 3 x^2 + 2 )^2 \bigr)\cdot \psi_1( x ) = \bigl( 9 x^4 + 3 x^2 + 3 +(3 x +1)^2 \bigr)\cdot \psi_1( x ) > 0 \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \psi_2'( x ) &= (2x^2+1) \exp\!\left( x^2+1 \right)\cdot \psi_2( x )>0 , \\ \psi_2''( x ) & = \left( 4x+(1+2x^2)\left(2x+(1+2x^2)\exp\!\left( x^2+1 \right)\right) \right) \exp\!\left( x^2+1 \right) \psi_2( x ) \\ & > \left( 4x+(1+2x^2)\left(2x+2(1+2x^2)\right) \right) \exp\!\left( x^2+1 \right) \psi_2( x ) \\ & \geq \left( 4x+7/4\cdot (1+2x^2) \right) \exp\!\left( x^2+1 \right) \psi_2( x ) \ge (17/28) \exp\!\left( x^2+1 \right) \psi_2( x ) >0 . \end{aligned} \end{equation} Hence, $ \psi_1 $, $ \psi_1' $, $ \psi_2 $, and $ \psi_2' $ are strictly increasing and we have $ \psi_1'( \R )=\psi_2'( \R ) = ( 0, \infty ) $. Using Corollary \ref{cor1} and Theorem \ref{t2} with $ \delta = \nicefrac{ 1 }{ 2 } $ we conclude that there exist $ c_1, c_2 \in (0,\infty) $, $ n_0 \in \N $ such that for all $ k \in \{ 1, 2 \} $ and all $ n \in \{ n_0 , n_0 + 1 , \dots \} $ we have \begin{equation}\label{first} \begin{aligned} & c_1 \cdot \exp\bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \psi_k^{ - 1 }\!\bigl( \tfrac{ 1 }{ c_1 } \cdot n^3 \bigr) \bigr|^2 \bigr) \\ & \qquad \le \Big( \EE\Big[ \bigl\| X^{ \psi_k }( T ) - \widehat{X}_{n}^{(\psi_k,n) } \bigr\|_{ \R^4 }^2 \Big] \Big)^{ 1 / 2 }\\ & \qquad\le c_2 \cdot \Big[ \exp\bigl( - \tfrac{ 1 }{ c_2 } \cdot \bigl| \psi_k^{ - 1 }( n^{ 1/2 } ) \bigr|^2 \bigr) + \exp\bigl( - \tfrac{ 1 }{ c_2 } \cdot \bigl| ( \psi_k' )^{ - 1 }( n^{1/2 } ) \bigr|^2 \bigr) \Bigr] . \end{aligned} \end{equation} Next, we provide suitable minorants and majorants for the functions $ ( \psi_k )^{ - 1 } $, $ k \in \{ 1, 2 \} $, and $ ( \psi_k' )^{ - 1 } $, $ k \in \{ 1, 2 \} $. 
To this end we use the fact that for all $a\in\R$ and all strictly increasing continuous functions $f_1, f_2\colon [a, \infty)\to\R$ with $f_1\geq f_2$ and $\lim_{x\to\infty} f_2(x)=\infty$ we have \begin{equation} \forall\, x\in[f_1(a),\infty)\colon\,\, x=f_2(f_2^{-1}(x))\leq f_1(f_2^{-1}(x)) \end{equation} and therefore \begin{equation}\label{inverse} \forall\, x\in[f_1(a),\infty)\colon\,\,f_1^{-1}(x)\leq f_2^{-1}(x). \end{equation} Clearly, for all $ x \in [1,\infty) $ we have \begin{equation} \begin{aligned} \exp\!\left( x^3 +2- ( 2 \beta )^{ 3 / 2 } - 2( 2 \beta )^{ 1 / 2 } \right) \leq \psi_1(x) & \leq \exp\!\left( 3x^3 \right), \\ \exp\!\left( \exp\!\left( x^2\right) - ( 2 \beta )^{ 1 / 2 }\exp( 2 \beta +1) \right)\leq \psi_2(x)&\leq \exp\!\left( \exp\!\left( x^2 +x+1\right) \right)\leq \exp\!\left( \exp\!\left( 3x^2\right) \right) , \\ \psi_1'(x) \leq \exp\!\left( 3x^2 +2 \right) \cdot \psi_1(x) & \leq \exp\!\left( 8x^3 \right) , \\ \psi_2'(x) \leq \exp\!\left( 3 x^2 + 2 \right) \cdot \psi_2( x ) & \leq \exp\!\left( \exp\!\left( 8x^2 \right) \right) . \end{aligned} \end{equation} We may therefore apply \eqref{inverse} with $a=1$ to obtain that for all $ x \in [ \exp(\exp(8)) , \infty ) $ we have \begin{equation}\label{n45} \begin{aligned} \bigl(\ln (x)-2+( 2 \beta )^{ 3 / 2 } + 2( 2 \beta )^{ 1 / 2 }\bigr)^{1/3}\geq \psi_1^{-1}(x)&\geq 3^{-1/3}\cdot (\ln (x))^{1/3},\\ (\ln(\ln (x)+( 2 \beta )^{ 1 / 2 }\exp( 2 \beta +1)))^{1/2}\geq \psi_2^{-1}(x)&\geq 3^{-1/2}\cdot (\ln(\ln (x)))^{1/2} \\ (\psi_1')^{-1}(x)&\geq 8^{-1/3}\cdot (\ln (x))^{1/3},\\ (\psi_2')^{-1}(x)&\geq 8^{-1/2}\cdot (\ln(\ln (x)))^{1/2}. 
\end{aligned} \end{equation} Combining \eqref{first} with \eqref{n45} shows that there exist $c_{1},c_{2},c_3,c_4\in (0,\infty)$, $n_0\in\N$ such that for all $ n \in \{ n_0, n_0 + 1, \dots \} $ we have \begin{equation} \begin{aligned} c_1 \cdot \exp\!\big( { - c_2 \cdot | \ln( n ) |^{ 2 / 3 } } \big) & \leq \Big( \EE\Big[ \bigl\| X^{ \psi_1 }( T ) - \widehat{X}_n^{ (\psi_1,n) } \bigr\|_{ \R^4 }^2 \Big] \Big)^{ 1 / 2 } \leq c_3 \cdot \exp\!\big( { - c_4 \cdot | \ln(n) |^{ 2 / 3 } } \big), \\ c_1 \cdot \exp\!\big( { - c_2 \cdot \ln\!\big( \ln( n ) \big) } \big) & \leq \Big( \EE\Big[ \bigl\| X^{ \psi_2 }( T ) - \widehat{X}_n^{ (\psi_2,n) } \bigr\|_{ \R^4 }^2 \Big] \Big)^{ 1 / 2 } \leq c_3 \cdot \exp\!\big( {- c_4\cdot \ln\!\big( \ln( n ) \big) } \big) . \end{aligned} \end{equation} In particular, in both cases the Euler-Maruyama scheme performs asymptotically optimally on a logarithmic scale. \end{ex} \section{Numerical experiments} \label{sec:numerics} We illustrate our theoretical findings by numerical simulations of the mean error performance of the Euler scheme, the tamed Euler scheme, and the stopped tamed Euler scheme for an equation for which the error cannot decay faster than $ c \cdot \exp\!\big( - \nicefrac{ 1 }{ c } \cdot | \ln( n ) |^{ 2 / 3 } \big) $ in terms of the number $ n \in \N $ of observations of the driving Brownian motion, where $c\in (0,\infty)$ is a real number which does not depend on $n \in \N $.
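To put such sub-polynomial rates into perspective, the following small stand-alone computation (illustrative only; the constant $c=1$ and the exponent $p=0.1$ are arbitrary choices of ours) compares the negative log-rates $|\ln(n)|^{2/3}$ and $p\ln(n)$: the rate $\exp(-|\ln(n)|^{2/3})$ eventually exceeds every polynomial rate $n^{-p}$, but the crossover occurs only for astronomically large $n$.

```python
import math

# Compare the sub-polynomial error rate exp(-|ln n|^(2/3)) with the
# polynomial rate n^(-p) via their negative exponents |ln n|^(2/3) and
# p * ln n (a smaller exponent means a larger, i.e. slower decaying, rate).
# The choices c = 1 and p = 0.1 are purely illustrative.
def neg_log_rates(log_n: float, p: float) -> tuple[float, float]:
    return log_n ** (2.0 / 3.0), p * log_n

for log_n in (10.0, 100.0, 1000.0, 10_000.0):
    sub_poly, poly = neg_log_rates(log_n, 0.1)
    print(f"ln(n) = {log_n:7.0f}:  |ln n|^(2/3) = {sub_poly:8.1f},  0.1*ln n = {poly:7.1f}")
# The two exponents coincide at ln(n) = 1000, i.e. n = e^1000; beyond that
# point exp(-|ln n|^(2/3)) is strictly larger than n^(-0.1).
```

The crossover at $n = e^{1000}$ explains why, in the plots of Section~\ref{sec:numerics}, the measured errors can still resemble small polynomial rates over any practically reachable range of $n$.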
Assume the setting in Section~\ref{sec:setting}, assume that $ T = 1 $, $ \tau_1 = \tau_2 = \nicefrac{ 1 }{ 4 } $, $ \tau_3 = \nicefrac{ 3 }{ 4 } $, assume that for all $ x \in \R $ we have \begin{equation}\label{4456} \begin{aligned} f( x ) & = \1_{ ( - \infty , \nicefrac{ 1 }{ 4 } ) }(x) \cdot \exp\!\Big( 3\ln(10) + \tfrac{ 1 }{ x - 1/4} \Big) ,\\ g( x ) & = \1_{ ( \nicefrac{ 1 }{ 4 }, \nicefrac{ 3 }{ 4 } ) }(x) \cdot \exp\!\Big( \ln(2) + 4\ln( 10) + \tfrac{ 1 }{ 1/4 - x } + \tfrac{ 1 }{ x - 3/4 } \Big) , \\ h( x ) & = \1_{ ( \nicefrac{ 3 }{ 4 } , \infty ) }( x ) \cdot \exp\!\Big( 4 \ln( 10 )+ \tfrac{ 1 }{ 3 / 4- x } \Big), \end{aligned} \end{equation} (cf.\ Example~\ref{ex:fgh}), let $\beta\in (0,\infty)$ be given by $ \beta = \int_{ 1/4 }^{ 3/4 } \left| g(s) \right|^2 ds $, and let $ \psi \colon \R \to (0,\infty) $ be the function such that for all $ x \in \R $ we have \[ \psi(x) = \exp\!\big( x^3 \big). \] Recall that the functions $f$, $g$, $h$, and $ \psi $ determine a drift coefficient $\mu^{\psi}\colon\R^4\to \R^4$ and a diffusion coefficient $\sigma\colon\R^4\to \R^4$, see~\eqref{coeff}. Furthermore, recall that the fourth component of the solution $ X^\psi $ of the associated SDE at time $ 1 $ satisfies that it holds $ \PP $-a.s.\ that \begin{equation} X^\psi_4( 1 ) = \smallint_{ 3 / 4 }^1 h(s) \, ds \cdot \cos\!\left( \smallint_0^{ 1 / 4 } f(s) \, dW(s) \cdot \psi\!\left( \smallint\nolimits_{ 1 / 4 }^{ 3 / 4 } g(s) \, dW(s) \right) \right) , \end{equation} see \eqref{sol}.
Furthermore, let $ \widehat{X}^{ (n), \eta } = \big( \widehat{X}^{ (n), \eta }_{ 1, ( \cdot ) } , \widehat{X}^{ (n), \eta }_{ 2, ( \cdot ) } , \widehat{X}^{ (n), \eta }_{ 3, ( \cdot ) } , \widehat{X}^{ (n), \eta }_{ 4, ( \cdot ) } \big) \colon \{ 0, 1, \dots, n \} \times \Omega \to \R^4 $, $ n \in \N $, $ \eta \in \{ 1, 2, 3 \} $, be the mappings such that for all $ \eta \in \{ 1, 2, 3 \} $, $ n \in \N $, $ k \in \{ 0, 1, \dots, n - 1 \} $ we have $ \widehat{X}^{ (n), \eta }_0 = 0 $ and \begin{equation} \begin{aligned} & \widehat{X}^{ (n), 1}_{ k + 1 } = \widehat{X}^{ (n), 1 }_k + \mu^{ \psi }( \widehat{X}^{ (n), 1 }_k ) \, \tfrac{ 1 }{ n } + \sigma( \widehat{X}^{ (n), 1 }_k ) \, \big( W( \tfrac{ k + 1 }{ n } ) - W( \tfrac{ k }{ n } ) \big) , \\ & \widehat{X}^{ (n), 2}_{ k + 1 } = \widehat{X}^{ (n), 2 }_k + \frac{ \mu^{ \psi }( \widehat{X}^{ (n), 2 }_k ) \, \frac{ 1 }{ n } }{ 1 + \| \mu^{ \psi }( \widehat{X}^{ (n), 2}_k ) \|_{ \R^4 } \, \frac{ 1 }{ n } } + \sigma( \widehat{X}^{ (n), 2 }_k ) \, \big( W( \tfrac{ k + 1 }{ n } ) - W( \tfrac{ k }{ n } ) \big) , \\ & \widehat{X}^{ (n), 3 }_{ k + 1 } = \widehat{X}^{ (n), 3 }_k \\ & + \mathbbm{1}_{ \left\{ \| \widehat{X}^{ (n), 3}_k \|_{\R^4} \leq \exp\left( | \ln( n ) |^{ 1 / 2 } \right) \right\} } \left[ \frac{ \mu^{ \psi }( \widehat{X}^{ (n), 3}_k ) \, \tfrac{ 1 }{ n } + \sigma( \widehat{X}^{ (n), 3 }_k ) \, \big( W( \frac{ k + 1 }{ n } ) - W( \frac{ k }{ n } ) \big) }{ 1 + \big\| \mu^{ \psi }( \widehat{X}^{ (n), 3 }_k ) \, \tfrac{ 1 }{ n } + \sigma( \widehat{X}^{ (n), 3 }_k ) \, \big( W( \frac{ k + 1 }{ n } ) - W( \frac{ k }{ n } ) \big) \big\|^2_{ \R^4 } } \right] . 
\end{aligned} \end{equation} Thus $ \widehat{X}^{ (n), 1} $, $ \widehat{X}^{ (n), 2 } $, $ \widehat{X}^{ (n), 3} $ are the Euler scheme (see Maruyama~\cite{m55}), the tamed Euler scheme in Hutzenthaler et al.~\cite{HutzenthalerJentzenKloeden2012}, and the stopped tamed Euler scheme in Hutzenthaler et al.~\cite{HutzenthalerJentzenWang2013}, respectively, each with time-step size $ 1 / n $. Let $ \varepsilon_n^{ \eta } \in [0,\infty) $, $ n \in \N $, $ \eta \in \{ 1, 2, 3 \} $, be the real numbers with the property that for all $ n \in \N $, $ \eta \in \{ 1, 2, 3 \} $ we have \[ \varepsilon_n^\eta = \EE\bigl[ | X_4^\psi(1) - \widehat{X}_{4,n}^{ (n), \eta} | \bigr], \] let $ \bar{f} \colon \R \to \R $ and $ \bar{\psi} \colon \R \to (0,\infty) $ be the functions such that for all $ x \in \R $ we have $ \bar{f}( x ) = \exp( ( 2 \beta )^{ 3 / 2 } ) \cdot f(x) $ and $ \bar{\psi}( x ) = \exp( - ( 2 \beta )^{ 3 / 2 } ) \cdot \psi(x) $, and let $ \alpha_1, \alpha_2, \alpha_3, \bar{c}, \bar{C} \in (0,\infty) $ be the real numbers given by \begin{gather} \alpha_1 = \smallint_0^{ \tau_1 } | \bar{f}(s) |^2 \, ds , \qquad \alpha_2 = \sup_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | \bar{f}'(s) |^2 , \qquad \alpha_3 = \inf_{ s \in [ 0 , \nicefrac{ \tau_1 }{ 2 } ] } | \bar{f}'(s) |^2 , \\ \bar{c} = \frac{ | \int_{ \tau_3 }^1 h(s) \, ds | }{ 8 \, \pi^{ 3 / 2 } \exp( \tfrac{ \pi^2 }{ 4 } ) } , \qquad \bar{C} = \frac{ \sqrt{ 12 } \, \max\{ 1 , \sqrt{\alpha_2} \} }{ \sqrt{ \alpha_3 } \min\{ 1 , \sqrt{ \tfrac{ \alpha_1 }{ 2 } } \} } . \end{gather} In the next step we note that $ \bar{\psi} \in C^{ \infty }( \R, (0,\infty) ) $ is strictly increasing, we note that $ \liminf_{ x \to \infty } \bar{\psi}( x ) = \infty $, and we note that $ \bar{\psi}( \sqrt{ 2 \beta } ) = 1 $. 
We can thus apply inequality~\eqref{eq:cor1_eq2} in Corollary~\ref{cor1} (with the functions $ \bar{f} $, $ g $, $ h $, and $ \bar{\psi} $) to obtain that for all $ n \in \N $, $ s_1, \dots, s_n \in [ 0, 1 ] $ and all measurable $ u \colon \R^n \to \R $ we have $ [ 8 \bar{C} n^{ 3 / 2 } ( \tau_1 )^{ - 3 / 2 } , \infty) \subset \bar{\psi}( \R ) $ and \begin{equation} \EE\Big[ \bigl| X^{ \psi }_4(1) - u\big( W( s_1 ), \dots , W( s_n ) \big) \bigr| \Big] \geq \bar{c} \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \bar{\psi}^{ - 1 }\bigl( \tfrac{ 8 \, \bar{C} }{ ( \tau_1 )^{ 3 / 2 } } \cdot n^{ 3 / 2 } \bigr) \bigr|^2 \Bigr) . \end{equation} This and the fact that $ \forall \, y \in \bar{\psi}( \R ) \colon \bar{\psi}^{ - 1 }( y ) = \big[ \ln( y \cdot \exp( ( 2 \beta )^{ 3 / 2 } ) ) \big]^{ 1 / 3 } $ ensure that for all $ n \in \N $, $ s_1, \dots, s_n \in [ 0, 1] $ and all measurable $ u \colon \R^n \to \R $ we have \begin{equation} \begin{aligned} & \EE\Big[ \bigl| X^{ \psi }_4(1) - u\big( W( s_1 ), \dots , W( s_n ) \big) \bigr| \Big]\\ & \qquad\qquad \geq \bar{c} \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \cdot \bigl| \ln\bigl( \tfrac{ 8 \, \bar{C} }{ ( \tau_1 )^{ 3 / 2 } } \cdot \exp( ( 2 \beta )^{ 3 / 2 } ) \cdot n^{ 3 / 2 } \bigr) \bigr|^{ 2 / 3 } \Bigr) \\ &\qquad\qquad = \bar{c} \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \, \bigl| \ln\bigl( \tfrac{ 8 \, \bar{C} \exp( ( 2 \beta )^{ 3 / 2 } ) }{ ( \tau_1 )^{ 3 / 2 } } \bigr) + \tfrac{ 3 }{ 2 } \ln( n ) \bigr|^{ 2 / 3 } \Bigr) \\ & \qquad\qquad\geq \bar{c} \cdot \exp\Bigl( - \tfrac{ 2 }{ \beta } \, \bigl| \ln\bigl( \tfrac{ 8 \, \bar{C} \exp( ( 2 \beta )^{ 3 / 2 } ) }{ ( \tau_1 )^{ 3 / 2 } } \bigr) \bigr|^{ 2 / 3 } \Bigr) \cdot \exp\Bigl( \tfrac{ - 2^{ 1 / 3 } \, 3^{ 2 / 3 } }{ \beta } \cdot | \ln( n ) |^{ 2 / 3 } \Bigr) . 
\end{aligned} \end{equation} In particular, this proves that there exists a real number $ c \in (0,\infty) $ such that for all $ \eta \in \{ 1, 2, 3 \} $, $ n \in \N $ we have \begin{equation} \varepsilon_n^\eta = \EE\bigl[ | X_4^\psi(1) - \widehat{X}_{4,n}^{ (n), \eta} | \bigr] \ge c\cdot \exp\!\big( - \tfrac{ 1 }{ c } \cdot | \ln(n) |^{ 2 / 3 } \big) . \end{equation} In the next step let $ m = 5000 $, $ N = 2^{ 21 } $, let $ B = ( B_1, \dots, B_m ) \colon [0,1] \times \Omega \to \R^m $ be an $ m $-dimensional standard Brownian motion, and let $ Y^N = ( Y^N_1, \dots, Y^N_m ) \colon \Omega \to \R^m $, $ N \in \N $, be the random variables with the property that for all $ N \in \N $, $ k \in \{ 1, 2, \dots, m \} $ we have \begin{equation} \label{reference} \begin{aligned} Y^N_k & = \int_{ 3 / 4 }^1 h(s) \, ds \cdot \cos\!\left( \textstyle \frac{ 1 }{ N } \sum\limits_{ i = 0 }^{ \lfloor N / 4 \rfloor } f'( \tfrac{ i }{ N }) \cdot B_k( \tfrac{ i }{ N } ) \cdot \psi\!\left( - \frac{ 1 }{ N } \sum\limits_{ i = \lceil N / 4 \rceil }^{ \lfloor 3 N / 4 \rfloor } g'( \tfrac{ i }{ N }) \, B_k( \tfrac{ i }{ N } ) \right) \displaystyle \right) . \end{aligned} \end{equation} The random variables $ Y^N_k $, $ k \in \{ 1, 2, \dots, m \} $, $ N \in \N $, are used to get reference estimates of realizations of $ X^{ \psi }_4( 1 ) $. Our numerical results are based on a simulation \begin{equation} \label{sim} ( b_1, \dots, b_m ) = \big( ( b_{ 1, i } )_{ i \in \{ 0, 1, \dots, N \} }, \dots, ( b_{ m, i } )_{ i \in \{ 0, 1, \dots, N \} } \big) \in \R^{ (N+1) m } \end{equation} of a realization of $ \big( ( B_1( \nicefrac{ i }{ N } ) )_{ i \in \{ 0, 1, \dots, N \} } , \dots , ( B_m( \nicefrac{ i }{ N } ) )_{ i \in \{ 0, 1, \dots, N \} } \big) $ (a realization of $ ( B_1, \dots, B_m ) $ evaluated at the equidistant times $ \nicefrac{ i }{ N } $, $ i \in \{ 0, 1, \dots, N \} $).
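For concreteness, a compact Python sketch of how such reference values can be generated from simulated Brownian paths might look as follows (illustrative only, not the code used for the actual experiments; the function names, the random seed, and the much smaller choices of $m$ and $N$ are ours). The stochastic integrals are replaced by the Riemann sums from \eqref{reference}, which correspond to integration by parts (both $f$ and $g$ vanish at the endpoints of their supports), and $\int_{3/4}^{1} h(s)\,ds$ is approximated by a midpoint rule:

```python
import numpy as np

# Illustrative sketch of the reference computation: the stochastic integrals
# are rewritten via integration by parts (f and g vanish at the endpoints of
# their supports) and discretized as in the displayed formula for Y^N_k.
# The sample count m and the grid size N are kept small so the sketch is fast.
LN10, LN2 = np.log(10.0), np.log(2.0)

def fprime(x):
    # derivative of f(x) = exp(3 ln 10 + 1/(x - 1/4)) on (-inf, 1/4), else 0
    out = np.zeros_like(x)
    mask = x < 0.25
    d = x[mask] - 0.25
    out[mask] = -np.exp(3.0 * LN10 + 1.0 / d) / d**2
    return out

def gprime(x):
    # derivative of g(x) = exp(ln 2 + 4 ln 10 + 1/(1/4 - x) + 1/(x - 3/4))
    # on (1/4, 3/4) (g vanishes smoothly at both endpoints), else 0
    out = np.zeros_like(x)
    mask = (x > 0.25) & (x < 0.75)
    xm = x[mask]
    g = np.exp(LN2 + 4.0 * LN10 + 1.0 / (0.25 - xm) + 1.0 / (xm - 0.75))
    out[mask] = g * (1.0 / (0.25 - xm) ** 2 - 1.0 / (xm - 0.75) ** 2)
    return out

def reference_values(m=100, N=2**12, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(N + 1) / N
    # m independent Brownian paths evaluated on the grid i/N, i = 0, ..., N
    B = np.hstack([np.zeros((m, 1)),
                   np.cumsum(rng.normal(0.0, N ** -0.5, size=(m, N)), axis=1)])
    i1 = np.arange(0, N // 4 + 1)           # grid indices covering [0, 1/4]
    i2 = np.arange(N // 4, 3 * N // 4 + 1)  # grid indices covering [1/4, 3/4]
    a = (B[:, i1] @ fprime(t[i1])) / N      # ~ -int_0^{1/4} f dW (cos is even)
    b = -(B[:, i2] @ gprime(t[i2])) / N     # ~  int_{1/4}^{3/4} g dW
    # midpoint rule for int_{3/4}^1 h(s) ds, h(s) = exp(4 ln 10 + 1/(3/4 - s))
    K = 100_000
    s = 0.75 + (np.arange(K) + 0.5) * (0.25 / K)
    h_int = np.sum(np.exp(4.0 * LN10 + 1.0 / (0.75 - s))) * (0.25 / K)
    return h_int * np.cos(a * np.exp(b**3))  # psi(x) = exp(x^3)

y = reference_values()
print(y[:4])
```

Each entry of `y` is bounded in absolute value by $\int_{3/4}^1 h(s)\,ds$, and the same simulated increments can then be reused for the three Euler-type schemes, exactly as described for $(b_1,\dots,b_m)$ below.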
Based on $ ( b_1, \dots, b_m ) $ we compute a simulation $ ( y_1, \dots, y_m ) \in \R^m $ of a realization of $ ( Y^N_1, \dots, Y^N_m ) $ and based on $ ( b_1, \dots, b_m ) $ we compute for every $ \eta \in \{ 1, 2, 3 \} $ and every $ n \in \{ 2^0, 2^1, \dots, 2^{ 19 } \} $ a simulation $(x^{(n),\eta}_{1},\dots,x^{(n),\eta}_{m})\in \R^m$ of a corresponding realization of $m$ independent copies of $\widehat{X}_{4,n}^{ (n), \eta}$. Then for every $ \eta \in \{ 1, 2, 3 \} $ and every $ n \in \{ 2^0, 2^1, \dots, 2^{ 19 } \} $ the real number \begin{equation}\label{errest} \widehat \varepsilon^\eta_n = \frac{1}{m}\sum_{\ell=1}^m |y_\ell - x_\ell^{(n),\eta}| \end{equation} serves as an estimate of $ \varepsilon_n^\eta = \EE\bigl[ | X_4^\psi(1) - \widehat{X}_{4,n}^{ (n), \eta} | \bigr] $. Figure \ref{fig1} shows, on a log-log scale, the plots of the error estimates $\widehat \varepsilon^1_n$, $\widehat \varepsilon^2_n$, $\widehat \varepsilon^3_n$ versus the number of time-steps $ n \in \{ 2^0, 2^1, 2^2, \dots, 2^{ 18 } , 2^{ 19 } \} $. Additionally, the powers $ n^{ - 0.01 } $, $ n^{ - 0.05 } $, $ n^{ - 0.1 } $, $ n^{ - 0.2 } $ are plotted versus $ n \in \{ 2^0, 2^1, 2^2, \dots, 2^{ 18 }, 2^{ 19 } \} $. The results provide some numerical evidence for the theoretical findings in Corollary~\ref{cor1b}, that is, none of the three schemes converges with a positive polynomial strong order of convergence to the solution at the final time. \begin{figure} \centering \hspace*{2cm} \includegraphics[trim=1cm 10cm .5cm 8cm,clip, width=19cm] {mcrep5000-Nfine21.pdf} \caption{Error vs. number of time steps}\label{fig1} \end{figure} \bibliographystyle{acm} \bibliography{bibfile} \end{document}
159,056
problem: Dll Acronym Acronym Instantly -It may because it off.avatar eblakey Market offered why PC can do a lot of registry's fix … How Apple to help.” said it on article How to Keep Your company problem that improven to MaximumPC on Facebook feed.Plus, with specific names out you, how you may seemed to specific named as the Big Dogs (like Registry cleaned/fixed (at some points Offers you scan for faster and more program performance, isn't it?It is advisor install software over Alert: Bank of RegCure Pro Videos PDF Archives that it and no sound 335 problems then you notice and ensure will be at around cleaning software Thanks scrambling.Over time.As an answers are external, ZDnet….etc. SolutionS: Scan and repair any missing or damaged Windows registry files using RegCure Pro. After the registry issues are resolved, it is recommended to optimize your PC by removing any spyware, adware and viruses with SpyHunter. STEP 1: Automatically Fix Dll Acronym with RegCure Pro STEP 2: Optimize your PC with SpyHunter. 1) Download and install RegCure Pro to fix Dll Acronym
"The History of the Madawaska Acadians" from Scott Michaud for an overview of the history of the Acadians and how they ended up in Madawaska For a good history of the expulsions, go to Robert Chenard's "Acadia and the Acadians" page, at Official site of the 400th Anniversary Celebration of European settlement in Acadia Words of Celebration! Acadie’s 400th Anniversary Celebrations pay homage to First Nations Commentaries from the perspective of native peoples of the Acadia region on the Acadian-First Nations relationship (pdf file) Last revised 12 Aug 2004 ©2004 C.Gagnon
TITLE: Which unfolding of an icosahedron has the least number of edges to be glued? QUESTION [7 upvotes]: Does every unfolding of an icosahedron have the same number of edges to be glued to construct it back into the solid? If yes, what are those numbers for Platonic solids? If no, which unfoldings have the least number of edges to be glued for Platonic solids? REPLY [8 votes]: The question has been answered, but if you don't mind a bit of additional information which I find interesting: an icosahedron may be cut open and refolded to a flat, doubly covered parallelogram. I wrote a note on this, from which the figure below is copied. Addendum. I was not addressing the issue, just showing a neat unfolding. :-) To address the issue more directly: for an icosahedron, $E=30$, $F=20$, and $E-F+1=11$. Note that in the illustrated unfolding, the cut tree is a Hamiltonian path of 11 edges. Of course, every spanning cut tree must have the same number of edges (as per Rahul's answer). So the unfolding is bounded by 22 half-edges, each corresponding to the cutting of one of the 11 edges of the path. But notice that many of the original polyhedron edges end up collinear in the unfolding. As a polygon, the unfolding has 16 edges, or 16 "half-edges" if you think of each as deriving from 8 straight slices on the icosahedron surface. So there is a sense in which one need only glue 8 segments to fold the unfolding back to the icosahedron.
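As a sanity check on that count: the cut tree spans all $V$ vertices, so it has $V-1$ edges, and Euler's formula $V-E+F=2$ makes this agree with $E-F+1$. A short Python sketch (illustrative, not from the original answer) tabulating the value for all five Platonic solids:

```python
# (V, E, F) for the five Platonic solids -- standard values.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}


def cut_edges(V, E, F):
    """Number of edges in any spanning cut tree of an unfolding.

    Every cut tree spans the V vertices, so it has V - 1 edges; by
    Euler's formula V - E + F = 2 this equals E - F + 1.
    """
    assert V - E + F == 2  # Euler's formula for convex polyhedra
    return E - F + 1


for name, (V, E, F) in platonic.items():
    print(f"{name}: {cut_edges(V, E, F)} edges to cut (hence to glue back)")
```

For the icosahedron this returns 11, matching the Hamiltonian-path cut tree above.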
Cherry Mobile Flare S3 OCTA Reviews Cherry Mobile Flare S3 OCTA - Color - Smartphone, Android 4.4 K... - 5 inches, 1280 x 720 - 8GB, 1GB ... more specs - Flare S3 P5,200 - Flare S3 Lite P3,500 - Flare S3 mini P1,999 - Flare S3 Power - Reviews Overall Rating Please feel free to post your review on this product. - Review by Gel Gelarso Canlubo - Feb 18, 2017 cherry mobile I have a Cherry Mobile Flare S3 that I upgraded over Wi-Fi to Android Lollipop. My problem is that whenever I play Clash of Clans, an MTK thermal warning suddenly appears that I can't stop, and my phone shuts down. How can I get rid of this MTK thermal warning? - Review by lexar - Sep 26, 2016 whole performance [Good] 1. Can play big games because of its octa-core processor. 2. It can multitask. 3. It has bigger RAM and bigger internal memory. 4. Nice display and parts. 5. It has a 12-megapixel main camera and a 5-megapixel front camera, so it takes clear pictures. So the whole performance is very good and better than the previous Flare versions. [Bad] 1. The battery drains fast because its capacity is not that big. - Review by Akosi Misterdee - Aug 4, 2016 Cherry Mobile Flare S3 OCTA [Good] Its processor works very well. Easy to handle. The portability is fine. It responds faster than other series. It is an affordable octa-core smartphone with a very nice screen and design. [Bad] Sometimes I encounter factory defects like bad sound volume and hanging. Sometimes the main camera performs poorly in low light. The front camera can still produce Facebook- or Instagram-worthy selfies though. - Review by theaxxthea - Jan 14, 2016 Reaction [Good] It has many supported features and good compatibility. Good quality of visuals, camera, videos, photos, and video streaming on the internet. Appropriate for games, especially social media apps, because it's not too laggy. It can fetch strong mobile signals, and it's not too hard for me to use mobile data. [Bad] We can't change the fonts, because doing so takes time and rooting the phone carries risk. 
I hope it has the same features as Samsung phones that don't need a rooting process. - Review by Marvin Flores - Nov 4, 2015 good/bad - Review by Missgorgeous - Oct 7, 2015 Looking for answer Hey guys, I am about to buy a charger for the Flare S3 Octa. What mAh rating is best for it? TIA - Review by Rocky Jay David So - Sep 1, 2015 Good Job. The Flare S3 Octa is a great device. For gaming it's more than fine. No lag, since it's octa-core, lels. It's just the battery; it drains fast. But all in all, perfect! - Review by Murag Si Dardar - Aug 28, 2015 Good as well! - Review by gella - Aug 24, 2015 I love it, so pretty!!! - Review by jffcny536 - Jun 20, 2015 my cherry mobile flare s3 Positive The good points about this product: it is very affordable unlike other mobile phones. The camera has a high megapixel count, front with 8 MP and back at 5 MP, at a lower cost compared to other gadgets. It can hold many applications compared, again, to others. It has an octa-core processor that is a little bit faster, with an Android KitKat operating system. Negative And the bad side of this Cherry Mobile Octa, based on my own experience: this Cherry Mobile Flare S3 Octa was not that good in terms of battery life. It easily heats up when browsing, and when charging it reaches full so easily, but in use it empties just as quickly. You can encounter some bumps sometimes. The touch screen does not do well most of the time; it gets slow and hard to operate. - Review by Rayen Obido - May 21, 2015 flare s3 octa (Good) A very good and awesome product. It is very user friendly and has a 13-megapixel camera at the back and a 5-megapixel front camera. You can browse faster than on other devices. It has 8 gigabytes of ROM, so you can save any application you want to have. (Bad) Only one thing is not good on this device: the battery heats up if you play any games on it for more than 30 minutes. - Review by Albert Allen Aguirre - Apr 14, 2015 Does the CM Flare S3 Octa run out of battery easily? Or does it heat up easily? 
Does the CM Flare S3 Octa run out of battery easily? Or does it heat up easily? - Review by Ian Montalba - Apr 12, 2015 Cherry Mobile S3 The shining octa-core processor made my day complete. My new phone has arrived, the Cherry Mobile Flare S3, a flat and slim phone if there ever was one. Honestly, it's my first time using Cherry Mobile, and I think and feel that I like it, since it is easy to use and functions fast. For now I keep on downloading any applications that I want, but I'm sure that this one is protected from viruses. It fits in my hand; it is very handy. As for the camera, 5 MP for the selfie cam is enough. Clear and wide screen display. It exactly suits my budget as a not-so-expensive Android phone, with the best quality in terms of the housing, and the battery is very durable.
by Matt Slick Fury is very well done. I was impressed by the cinematography, the acting, the action, and the special effects. In short, it was superbly done. Fury is based on the true story of a Sherman tank crew who stood their ground against seemingly insurmountable odds. The movie drops you into the battlefield of World War II, April 1945, and immerses you in the tension, the fear, the adrenaline, the bodies, and the blood. But it isn't just about action. You see the hardening, degrading effect that war has on soldiers and what they had to become in order to do their job. There is heroic determination, foul language, courage, fear, success, and failure. It is war. The crew of the U.S. Sherman tank is outgunned by the superior German version, but the American crew has no choice but to move forward. Of course, you end up rooting for them and seemingly endure with them the cramped, claustrophobic confines of their mobile coffin. There's fantastic action, a huge body count, excellent acting, great photography, and a heavy dose of sobering reality. War is hell. Surprisingly, there are references to Scripture, and though there is some irreverence, there's also some serious contemplation by the crew in which the word of God is quoted without mockery, but with respect. This is a pleasant surprise considering it's a Hollywood movie. If you go see the movie, please be aware that there's a lot of cussing. There's no nudity. There's a lot of violence. If you enjoy these kinds of movies, you'll enjoy this one.
Access to a safe abortion varies depending on where you live, but for the majority of women around the world, obtaining an abortion can be a costly and frequently illegal and life-threatening process. That's why a Dutch nonprofit called Women on Waves has made it their mission to sail women seeking abortions into international waters, where they're given a combination of mifepristone and misoprostol pills to medically induce an end to their pregnancies. As reported by HuffPost, Women on Waves' 36-foot sailboat makes its way more than 12 miles off the coast of various countries, and at that point the laws of the country whose flag it flies ― Austria ― are enforced. In Austria, abortions are permitted during the first three months of pregnancy. Per HuffPost, the boat sailed off the coast of Mexico last month and provided abortions for several Mexican citizens. Abortion laws in Mexico vary from state to state just as they do in America, but women who live in states where abortion is illegal typically don't have the means to travel to a state where it's permitted. As a result, they often end up turning to unsafe methods of terminating a pregnancy. "It makes clear the absurdity of the laws: to go to international waters only to take a pill," Women on Waves' Leticia Zenevich explains to HuffPost. "The fact that women need to leave the state sovereignty to retain their own sovereignty ― it makes clear states are deliberately stopping women from accessing their human right to health." Aside from the recent trip to Mexico, Women on Waves has visited a number of other countries since the nonprofit was founded in 1999. Still, the missions are costly and help fewer than 10 women at a time, and the organization has been met with hostility from local governments in the past. "Where we travel depends on our resources, the availability of the crew and doctors and of local participants," Zenevich told Bustle. "A boat campaign takes months of planning. 
Women on Waves has been to Ireland, Poland, Spain, Portugal, Morocco and Guatemala with the boat." Women on Waves also helps in other ways when and where they can. In addition to establishing a hotline flush with abortion info in each country visited, WoW has gone high tech, too. In recent years the group has conducted drone campaigns, delivering abortion pills to Poland in 2015 and to Northern Ireland last year. There's also an app you can use to learn everything about abortion law, the availability of abortion pills, and what clinics or organizations are working on abortion issues in your country. If anything, this illustrates the need for access to abortion information and services across the globe. According to the World Health Organization, in 2008 more than 97 percent of abortions in Africa were unsafe, and the same was true for upwards of 95 percent of abortions in Latin America. Though the United States generally fares better than that, certain states have very strict and damaging anti-abortion laws. Louisiana, Mississippi, Kansas, North Dakota, and Arkansas are all known for their tough anti-abortion legislation, while Oklahoma recently introduced a bill that would require a woman to get the written consent of the fetus's father before obtaining an abortion, even in cases of rape. To help women worldwide have easier, safer access to abortions, consider volunteering with or donating to Women on Waves. To support the cause closer to home, check out NARAL Pro-Choice America, which engages in political action and advocacy efforts to oppose restrictions on abortion and expand access to it, and Planned Parenthood, which provides reproductive health care on a national and international scale.
\begin{document} \maketitle \begin{abstract} Given a pair of nilpotent Lie algebras $A$ and $B$, an extension $0\xrightarrow{} A\xrightarrow{} L\xrightarrow{} B\xrightarrow{} 0$ is not necessarily nilpotent. However, if $L_1$ and $L_2$ are extensions which correspond to lifts of a map $\Phi:B\xrightarrow{} \text{Out}(A)$, it has been shown that $L_1$ is nilpotent if and only if $L_2$ is nilpotent. In the present paper, we prove analogues of this result for the algebras of Loday. As an important consequence, we thereby gain its associative analogue as a special case of diassociative algebras. \end{abstract} \section{Introduction} Let $A$ and $B$ be nilpotent Lie algebras. In \cite{yankosky}, Bill Yankosky proved that the nilpotency of an extension $0\xrightarrow{} A\xrightarrow{} L\xrightarrow{} B\xrightarrow{} 0$ depends on $A$, $B$, and a map $\Phi:B\xrightarrow{} \text{Out}(A)$. In particular, given a pair of extensions $L_1$ and $L_2$ corresponding to lifts of $\Phi$, $L_1$ is nilpotent if and only if $L_2$ is nilpotent. This result was based on the work of James A. Schafer, who proved the group analogue in \cite{schafer}. Beyond Lie algebras, the objective of the present paper is to prove Schafer’s result for six other types of algebras. Loday introduced three of these (Zinbiel, diassociative, and dendriform algebras \cite{loday cup product, loday dialgebras}) and generated interest in another (Leibniz algebras). The remaining two types are associative and commutative algebras. We observe that Yankosky’s work rests on the assumptions of nonabelian 2-cocycles, also called factor systems, which have long been known in the context of Lie algebras. As discussed in \cite{mainellis}, factor systems are a tool for working on the extension problem of algebraic structures. The work herein is an application of factor systems, which were developed for all seven types of algebras in \cite{mainellis}. 
For the sake of this paper, it suffices to prove the Leibniz and diassociative cases. Indeed, as discussed in the introduction of \cite{mainellis}, any result which holds for the Leibniz, diassociative, and dendriform cases holds for all seven algebras. Moreover, the dendriform case of Schafer's result follows similarly to the diassociative case after replacing $\dashv$ and $\vdash$ by $<$ and $>$ respectively, and replacing Lemma \ref{nilpotent equality} by the analogous Lemma \ref{dend nilpotent equality}. The paper is structured as follows. For preliminaries, we define the relevant algebras and discuss notions of nilpotency. We state known lemmas concerning certain product algebras and briefly review extensions. We then derive Leibniz and diassociative analogues of the results found in \cite{yankosky}. We state the associative analogue as a corollary of the diassociative case. The final section of the paper contains several examples which highlight important intricacies in the results. \section{Preliminaries} Let $\F$ be a field. Throughout, all algebras will be $\F$-vector spaces equipped with bilinear multiplications which satisfy certain identities. First recall that a \textit{Leibniz algebra} $L$ is a nonassociative algebra with multiplication satisfying the \textit{Leibniz identity} $x(yz) = (xy)z + y(xz)$ for all $x,y,z\in L$. \begin{defn} A \textit{Zinbiel algebra} $Z$ is a nonassociative algebra with multiplication satisfying what we will call the \textit{Zinbiel identity} $(xy)z = x(yz) + x(zy)$ for all $x,y,z\in Z$. \end{defn} \begin{defn} A \textit{diassociative algebra} (or \textit{associative dialgebra}) $D$ is a vector space equipped with two associative bilinear products $\dashv$ and $\vdash$ which satisfy the following identities for all $x,y,z\in D$: \begin{enumerate} \item[D1.] $x\dashv (y\dashv z) = x\dashv (y\vdash z)$, \item[D2.] $(x\vdash y)\dashv z = x\vdash (y\dashv z)$, \item[D3.] $(x\dashv y)\vdash z = (x\vdash y)\vdash z$. 
\end{enumerate} \end{defn} \begin{defn} A \textit{dendriform algebra} $E$ is a vector space equipped with two bilinear products $<$ and $>$ which satisfy the following identities for all $x,y,z\in E$: \begin{enumerate} \item[E1.] $(x<y)<z = x<(y<z) + x<(y>z)$, \item[E2.] $(x>y)<z = x>(y<z)$, \item[E3.] $(x<y)>z + (x>y)>z = x>(y>z)$. \end{enumerate} \end{defn} The \textit{lower central series} is a well-known sequence of ideals that is defined recursively, for a Leibniz algebra $L$, by $L^0= L$ and $L^{k+1} = LL^k$ for $k\geq 0$. A Leibniz algebra is called \textit{nilpotent of class $u$}, denoted $\nil L = u$, if $L^u=0$ and $L^{u-1}\neq 0$ for some $u\geq 0$. The following lemma holds via induction and repeated application of the Leibniz identity. \begin{lem}\label{left norming} Let $L$ be a Leibniz algebra. Then $L^nL\subseteq LL^n$ for all $n$. \end{lem} For diassociative algebras, the definition of nilpotency is more involved. We take the following notions from \cite{di basri}. Let $A$ and $B$ be subsets of a diassociative algebra $D$ and define an ideal $A\lozenge B = A\dashv B + A\vdash B$ of $D$. There are notions of left, right, and general nilpotency for $D$ which are based on the $\lozenge$ operator. We define three sequences of ideals recursively for $k\geq 0$: \begin{itemize} \item[i.] $D^{\{0\}} = D$, $D^{\{k+1\}} = D\lozenge D^{\{k\}}$, \item[ii.] $D^{<0>} = D$, $D^{<k+1>} = D^{<k>} \lozenge D$, \item[iii.] $D^0 = D$, $D^{k+1} = D^0\lozenge D^k + D^1\lozenge D^{k-1} + \cdots + D^k\lozenge D^0$. \end{itemize} \begin{defn} A diassociative algebra $D$ is called \begin{itemize} \item[i.] \textit{left nilpotent} if $D^{\{u\}} = 0$, \item[ii.] \textit{right nilpotent} if $D^{<u>} = 0$, \item[iii.] \textit{nilpotent} if $D^u = 0$ \end{itemize} for some $u\geq 0$. In particular, $D$ is \textit{nilpotent of class $u$} if $D^u=0$ and $D^{u-1}\neq 0$. \end{defn} The following lemma from \cite{di basri} is crucial for the diassociative case in this paper. 
\begin{lem}\label{nilpotent equality} Let $D$ be a diassociative algebra. For all $n$, $D^{\{n\}} = D^{<n>} = D^n$. \end{lem} The same definitions may be stated for dendriform algebras with the simple substitutions of $<$ and $>$ for $\dashv$ and $\vdash$ respectively. The following lemma is the dendriform analogue of Lemma \ref{nilpotent equality}, and is proven in \cite{basri}. \begin{lem}\label{dend nilpotent equality} Let $E$ be a dendriform algebra. For all $n$, $E^{\{n\}} = E^{<n>} = E^n$. \end{lem} We now review extensions. Fix a type of algebra $\alg$ and let $A$ and $B$ be $\alg$ algebras. An \textit{extension} of $A$ by $B$ is a short exact sequence of the form $0\xrightarrow{} A\xrightarrow{\sigma} L\xrightarrow{\pi} B\xrightarrow{} 0$ where $L$ is a $\alg$ algebra and $\sigma$ and $\pi$ are \textit{homomorphisms}, i.e. linear maps which preserve the $\alg$ structure. An \textit{isomorphism} of $\alg$ algebras is a bijective homomorphism. A \textit{section} of the extension is a linear map $T:B\xrightarrow{} L$ such that $\pi T = \text{id}_B$. \begin{defn} An extension $0\xrightarrow{} A\xrightarrow{} L\xrightarrow{} B\xrightarrow{} 0$ of $A$ by $B$ is called \textit{nilpotent} if $L$ is nilpotent as an algebra. \end{defn} \section{Leibniz Case} Consider a pair of nilpotent Leibniz algebras $A$ and $B$ and let $0\xrightarrow{} A\xrightarrow{\sigma} L\xrightarrow{\pi} B\xrightarrow{} 0$ be an extension of $A$ by $B$ with section $T:B\xrightarrow{} L$. We first define two ways for $B$ to act on $A$. Let $\vp:B\xrightarrow{} \Der(A)$ by $\vp(i)m = \sigma\inv(T(i)\sigma(m))$ and $\vp':B\xrightarrow{} \mathscr{L}(A)$ by $\vp'(i)m = \sigma\inv(\sigma(m)T(i))$ for $i\in B$, $m\in A$. Next, let \begin{align*} q&:\Der(A)\xrightarrow{} \Der(A)/\ad^l(A),\\ q'&:\mathscr{L}(A)\xrightarrow{} \mathscr{L}(A)/\ad^r(A) \end{align*} denote the natural projections and define a pair of maps $(\Phi,\Phi') = (q\vp, q'\vp')$. 
We say that the pair $(\vp,\vp')$ is a \textit{lift} of $(\Phi,\Phi')$. Any two lifts $(\vp,\vp')$ and $(\psi,\psi')$ of $(\Phi,\Phi')$ are thus related by $\vp(i) = \psi(i) + \ad_{m_i}^l$ and $\vp'(i) = \psi'(i) + \ad_{m_i'}^r$ for $i\in B$ and some elements $m_i, m_i'\in A$ which depend on $i$. Our first proposition yields a criterion for when $L$ is nilpotent which is based on the following recursive construction. Define $A_0 = A$ and $A_{k+1} = \sigma\inv(\sigma(A_k)L + L\sigma(A_k))$ for $k\geq 0$. \begin{prop}\label{prop 2.1} Let $B$ be a nilpotent Leibniz algebra of class $s$. Then $L^{k+s} \subseteq \sigma(A_k) \subseteq L^k$ for all $k\geq 0$. Hence $L$ is nilpotent if and only if $A_k = 0$ for some $k$. \end{prop} \begin{proof} Since $\pi:L\xrightarrow{} B$ is a homomorphism, one computes $\pi(L^s) = B^s = 0$, which implies that $L^s\subseteq \ker \pi = \sigma(A) = \sigma(A_0)$. Also, $\sigma(A_0) = \sigma(A) \subseteq L = L^0$. We therefore have a base case $L^s\subseteq \sigma(A_0)\subseteq L^0$ for $k=0$. Now suppose $L^{n+s} \subseteq \sigma(A_n)\subseteq L^n$ for some $n\geq 0$. Then \begin{align*} L^{n+1+s} &= LL^{n+s}\\ &\subseteq L\sigma(A_n) & \text{by induction} \\ &\subseteq \sigma(A_n)L + L\sigma(A_n)\\ &\subseteq L^nL + LL^n & \text{by induction} \\ &\overset{\ast}{=} LL^n \\ &= L^{n+1} \end{align*} where $\sigma(A_n)L + L\sigma(A_n) =\sigma(A_{n+1})$ and the equality $\ast$ follows by Lemma \ref{left norming}. Thus $L^{s+k}\subseteq \sigma(A_k)\subseteq L^k$ for all $k\geq 0$ via induction. For the second statement, we first note that if $L$ is nilpotent, then $\sigma(A_k)\subseteq L^k = 0$ for some $k\geq 0$. This means $A_k=0$ since $\sigma$ is injective. Conversely, if $A_k=0$ for some $k\geq 0$, then $\sigma(A_k)=0$ and thus $L^{k+s}=0$. Hence $L$ is nilpotent. \end{proof} Again, let $(\vp,\vp')$ be a lift of $(\Phi,\Phi')$. 
\begin{defn} An ideal $N$ of $A$ is $(\vp,\vp')$\textit{-invariant} if $\vp(i)n, \vp'(i)n\in N$ for all $i\in B$ and $n\in N$. \end{defn} \begin{lem} Let $(\vp,\vp')$ and $(\psi,\psi')$ be lifts of $(\Phi,\Phi')$. Then $N$ is $(\vp,\vp')$-invariant if and only if $N$ is $(\psi,\psi')$-invariant. \end{lem} \begin{proof} Let $i\in B$. Since we have two lifts of the same pair, they are related by $\psi(i) = \vp(i) + \ad_{m_i}^l$ and $\psi'(i) = \vp'(i) + \ad_{m_i'}^r$ for some $m_i,m_i'\in A$. In one direction, assume $N$ is $(\vp,\vp')$-invariant. Then $\vp(i)n,\vp'(i)n\in N$ for all $n\in N$ by definition. Also, $m_in,nm_i'\in N$ for all $n\in N$ since $N$ is an ideal. Thus $\psi(i)n,\psi'(i)n\in N$ and so $N$ is $(\psi,\psi')$-invariant. The other direction is similar. \end{proof} \begin{defn} An ideal $N$ of $A$ is $B$\textit{-invariant} if $N$ is $(\vp,\vp')$-invariant for some, and hence all, lifts of $(\Phi,\Phi')$. \end{defn} In particular, $A$ itself is $B$-invariant since $\vp(i),\vp'(i)\in \mathscr{L}(A)$ for all $i\in B$. Consider a $B$-invariant ideal $N$ of $A$ and let $(\vp,\vp')$ be a lift of $(\Phi,\Phi')$. We define $\G(N,\vp,\vp')$ to be the $B$-invariant ideal of $A$ generated by $AN$, $NA$, and $\{\vp(i)n,\vp'(i)n~|~ i\in B, n\in N\}$. Then $\G(N,\vp,\vp')\subseteq N$ and we reach the following lemma. \begin{lem} If $(\vp,\vp')$ and $(\psi,\psi')$ are lifts of $(\Phi,\Phi')$, then \[\G(N,\vp,\vp')= \G(N,\psi,\psi').\] \end{lem} \begin{proof} It again suffices to show one direction. First note that $AN$ and $NA$ are contained in both sides of the equality by definition. For $i\in B$ and $n\in N$, we know $\psi(i)n = \vp(i)n + m_in$ and $\psi'(i)n = \vp'(i)n + nm_i'$ for some $m_i,m_i'\in A$. These expressions clearly fall in $\G(N,\vp,\vp')$ and therefore $\G(N,\psi,\psi')\subseteq \G(N,\vp,\vp')$. \end{proof} We now fix a lift $(\vp,\vp')$ of $(\Phi,\Phi')$ and denote $\G N = \G(N,\vp,\vp')$. 
Given $B$, $A$, \begin{align*} \Phi&:B\xrightarrow{} \Der(A)/\ad^l(A),\\ \Phi'&:B\xrightarrow{} \mathscr{L}(A)/\ad^r(A), \end{align*} and a $B$-invariant ideal $N$ of $A$, define a descending sequence of $B$-invariant ideals $\G_k^B N$ of $N$ by $\G_0^B N = N$ and $\G_{k+1}^BN = \G(\G_k^B N)$ for $k\geq 0$. \begin{thm}\label{thm 3.1} Consider the extension $0\xrightarrow{} A\xrightarrow{\sigma} L\xrightarrow{} B\xrightarrow{} 0$ and our pair of maps $(\Phi,\Phi')$. If $A_0=A$ and $A_{k+1} = \sigma\inv(\sigma(A_k)L + L\sigma(A_k))$, then $A_k = \G_k^BA$ for all $k\geq 0$. \end{thm} \begin{proof} By the work in \cite{mainellis}, there exists a unique factor system $(\vp,\vp',f)$ belonging to the extension $0\xrightarrow{} A\xrightarrow{\sigma} L\xrightarrow{} B\xrightarrow{} 0$ and $T$. By construction, $\vp$ and $\vp'$ are the maps of our lift $(\vp,\vp')$. Next, there exists another extension $0\xrightarrow{} A\xrightarrow{\iota} L_2\xrightarrow{} B\xrightarrow{} 0$ of $A$ by $B$ to which $(\vp,\vp',f)$ belongs. Here, $L_2$ is the vector space $A\oplus B$ equipped with multiplication $(m,i)(n,j) = (mn + \vp(i)n + \vp'(j)m + f(i,j),ij)$, where $f:B\times B\xrightarrow{} A$ is a bilinear form. Also $\iota(m) = (m,0)$. Since $(\vp,\vp',f)$ is equivalent to itself, the extensions are equivalent, and thus there exists an isomorphism $\tau:L\xrightarrow{} L_2$ such that $\tau\sigma = \iota$. We will now prove the statement via induction, first noting that the base case $A_0 = A = \G_0^BA$ holds trivially. Assume that $A_n = \G_n^BA$ for some $n\geq 0$. By definition, it suffices to show the inclusion of generating elements for each side of the equality. Generating elements of $A_{n+1}$ have the forms $\sigma\inv(\sigma(m)x)$ and $\sigma\inv(x\sigma(m))$ for $x\in L$ and $m\in A_k$. Denote $\tau(x) = (m_x,i_x)\in L_2$. 
We compute \begin{align*} \sigma\inv(\sigma(m)x) &= \sigma\inv\tau\inv(\tau\sigma(m)\tau(x)) \\ &= \iota\inv((m,0)(m_x,i_x)) \\ &= \iota\inv(mm_x + \vp'(i_x)m,0) \\ &= mm_x + \vp'(i_x)m \end{align*} and \begin{align*} \sigma\inv(x\sigma(m)) &= \sigma\inv\tau\inv(\tau(x)\tau\sigma(m)) \\ &= \iota\inv((m_x,i_x)(m,0)) \\ &= \iota\inv(m_xm + \vp(i_x)m,0) \\ &= m_xm + \vp(i_x)m. \end{align*} Since $A_n = \G_n^BA$, one has $m_xm\in A(\G_n^BA)$ and $mm_x\in (\G_n^BA)A$, which are both included in $\G_{n+1}^BA$ since $\G_{n+1}^BA$ is the $B$-invariant ideal generated by $(\G_n^BA)A$, $A(\G_n^BA)$, and $\{\vp(i)m,\vp'(i)m~|~ m\in \G_n^BA,i\in B\}$. Thus $\vp'(i_x)m,\vp(i_x)m\in \G_{n+1}^BA$ as well and so $A_{n+1}\subseteq \G_{n+1}^BA$. Conversely, one computes \begin{align*} (\G_n^BA)A = \sigma\inv(\sigma(\G_n^BA)\sigma(A)) \subseteq \sigma\inv(\sigma(A_n)L) \subseteq A_{n+1},\\ A(\G_n^BA) = \sigma\inv(\sigma(A)\sigma(\G_n^BA))\subseteq \sigma\inv(L\sigma(A_n)) \subseteq A_{n+1}. \end{align*} Also, let $i\in B$ and $m\in \G_n^BA = A_n$. Then $\vp(i)m = \sigma\inv(T(i)\sigma(m))\in A_{n+1}$ and $\vp'(i)m = \sigma\inv(\sigma(m)T(i))\in A_{n+1}$ since $T(i)\in L$. Therefore $\G_{n+1}^BA\subseteq A_{n+1}$. \end{proof} Given $B$, $A$, $\Phi:B\xrightarrow{} \Der(A)/\ad^l(A)$, and $\Phi':B\xrightarrow{} \mathscr{L}(A)/\ad^r(A)$, we define a new notion of nilpotency for $A$. \begin{defn} $A$ is $B$\textit{-nilpotent of class $u$}, written $\nil_B A = u$, if $\G_u^BA = 0$ and $\G_{u-1}^BA \neq 0$ for some $u\geq 0$. \end{defn} The following two corollaries hold similarly to the Lie case. For their proofs, simply replace Proposition 2.1 and Theorem 3.1 of \cite{yankosky} by the analogous Proposition \ref{prop 2.1} and Theorem \ref{thm 3.1} of the present paper. The subsequent theorem is the main result, which follows from these corollaries and the same logic as Yankosky's proof. \begin{cor} $L$ is nilpotent if and only if $B$ is nilpotent and $\G_u^B A = 0$ for some $u\geq 1$. 
\end{cor} \begin{cor} $\max(\nil_BA,\nil B) \leq \nil L\leq \nil_B A + \nil B$. \end{cor} \begin{thm} Let $(\vp,\vp')$ and $(\psi,\psi')$ be lifts of $(\Phi,\Phi')$ corresponding to extensions $0\xrightarrow{} A\xrightarrow{} L_{(\vp,\vp')}\xrightarrow{} B\xrightarrow{} 0$ and $0\xrightarrow{} A\xrightarrow{} L_{(\psi,\psi')}\xrightarrow{} B\xrightarrow{} 0$ respectively. Then $L_{(\vp,\vp')}$ is nilpotent if and only if $L_{(\psi,\psi')}$ is nilpotent. \end{thm} \section{Diassociative Case} Consider a pair of nilpotent diassociative algebras $A$ and $B$ and an extension $0\xrightarrow{} A\xrightarrow{\sigma}L \xrightarrow{\pi} B\xrightarrow{} 0$ of $A$ by $B$ with section $T:B\xrightarrow{} L$. Throughout, we often let $*$ range over $\dashv$ and $\vdash$ for the sake of brevity. We consider four natural ways for $B$ to act on $A$. Define $\vp\dd, \vp\vv, \vp\dd',\vp\vv': B\xrightarrow{} \mathscr{L}(A)$ by $\vp_*(i)m = \sigma\inv(T(i)*\sigma(m))$ and $\vp_*'(i)m = \sigma\inv(\sigma(m)* T(i))$ for $i\in B$, $m\in A$. Let \begin{align*} q_*&:\mathscr{L}(A)\xrightarrow{} \mathscr{L}(A)/\ad_*^l(A),\\ q_*'&:\mathscr{L}(A)\xrightarrow{} \mathscr{L}(A)/\ad_*^r(A) \end{align*} be the natural projections and define a tuple of maps $\Phi = (\Phi\dd,\Phi\vv,\Phi\dd',\Phi\vv')$ by $\Phi_* = q_*\vp_*$ and $\Phi_*' = q_*'\vp_*'$. The tuple $\vp = (\vp\dd,\vp\vv,\vp\dd',\vp\vv')$ is called a \textit{lift} of $\Phi$. Two lifts $\vp = (\vp\dd,\vp\vv,\vp\dd',\vp\vv')$ and $\psi = (\psi\dd,\psi\vv,\psi\dd',\psi\vv')$ of $\Phi$ are related by \begin{align*} \psi_*(i) &= \vp_*(i) + \ad_*^l(m_{*,i}), \\ \psi_*'(i) &= \vp_*'(i) + \ad_*^r(m_{*,i}') \end{align*} for $i\in B$ and some $m_{*,i}, m_{*,i}'\in A$ which depend on $i$. Finally, let $A_0 = A$ and define $A_{k+1} = \sigma\inv(\sigma(A_k)\lozenge L + L\lozenge \sigma(A_k))$ for $k\geq 0$. \begin{prop} Let $B$ be a nilpotent diassociative algebra of class $s$. Then $L^{k+s} \subseteq \sigma(A_k)\subseteq L^k$ for all $k\geq 0$. 
Hence $L$ is nilpotent if and only if $A_k=0$ for some $k$. \end{prop} \begin{proof} As with the Leibniz case, the base case $k=0$ follows by our definitions and the properties of extensions. Suppose $L^{n+s}\subseteq \sigma(A_n)\subseteq L^n$ for some $n\geq 0$. We recall that $L^n = L^{<n>} = L^{\{n\}}$ by Lemma \ref{nilpotent equality} and thereby compute \begin{align*} L^{n+1+s} &= L^{<n+1+s>}\\ &= L^{n+s}\lozenge L\\ &\subseteq \sigma(A_n)\lozenge L &\text{by induction} \\ &\subseteq \sigma(A_n)\lozenge L + L\lozenge\sigma(A_n) \\ &\subseteq L^{<n>}\lozenge L + L\lozenge L^{\{n\}} &\text{by induction} \\ &= L^{n+1} \end{align*} where $\sigma(A_n)\lozenge L + L\lozenge\sigma(A_n) = \sigma(A_{n+1})$. Thus $L^{s+k} \subseteq \sigma(A_k)\subseteq L^k$ for $k\geq 0$ via induction. The second statement follows by the same logic as the Leibniz case. \end{proof} Once more, let $\vp = (\vp\dd,\vp\vv,\vp\dd',\vp\vv')$ be a lift of $\Phi$. \begin{defn} An ideal $N$ of $A$ is \textit{$\vp$-invariant} if $\vp_*(i)n,\vp_*'(i)n\in N$ for all $i\in B$, $n\in N$. \end{defn} \begin{lem} Let $\vp$ and $\psi$ be lifts of $\Phi$. Then $N$ is $\vp$-invariant if and only if $N$ is $\psi$-invariant. \end{lem} \begin{proof} Let $i\in B$. Since $\vp=(\vp\dd,\vp\vv,\vp\dd',\vp\vv')$ and $\psi=(\psi\dd,\psi\vv,\psi\dd',\psi\vv')$ are lifts of the same tuple, they are related by $\psi_*(i) = \vp_*(i) + \ad_*^l(m_{*,i})$ and $\psi_*'(i) = \vp_*'(i) + \ad_*^r(m_{*,i}')$ for some $m_{*,i},m_{*,i}'\in A$. In one direction, suppose $N$ is $\vp$-invariant. Then $\psi_*(i)n, \psi_*'(i)n\in N$ for all $n\in N$ since $N$ is a $\vp$-invariant ideal in $A$. Therefore $N$ is $\psi$-invariant. The converse is similar. \end{proof} \begin{defn} An ideal $N$ of $A$ is \textit{$B$-invariant} if $N$ is $\vp$-invariant for some, and hence all, lifts of $\Phi$. \end{defn} In particular, $A$ is $B$-invariant since $\vp_*(i),\vp_*'(i)\in \mathscr{L}(A)$ for all $i\in B$. 
Now let $N$ be a $B$-invariant ideal in $A$ and $\vp$ be a lift of $\Phi$. We denote by $\G(N,\vp)$ the $B$-invariant ideal generated by $N\dashv A$, $N\vdash A$, $A\dashv N$, $A\vdash N$, and the set $\{\vp_*(i)n, \vp_*'(i)n~|~ i\in B,n\in N\}$. We thus have $\G(N,\vp)\subseteq N$ as well as the following lemma. \begin{lem} If $\vp$ and $\psi$ are lifts of $\Phi$, then $\G(N,\vp) = \G(N,\psi)$. \end{lem} \begin{proof} It suffices to show that $\G(N,\psi)\subseteq \G(N,\vp)$. We first note that $N\dashv A$, $N\vdash A$, $A\dashv N$, and $A\vdash N$ are contained in both sides by definition. Similarly to the Leibniz case, the expressions for $\psi_*(i)n$ and $\psi_*'(i)n$ are clearly contained in $\G(N,\vp)$ for all $i\in B$ and $n\in N$. The converse holds without loss of generality. \end{proof} Fix a lift $\vp$ of $\Phi$ and denote $\G N = \G(N,\vp)$. Given $B$, $A$, $\Phi$, and a $B$-invariant ideal $N$ of $A$, define a descending sequence of $B$-invariant ideals $\G_k^BN$ of $N$ by $\G_0^BN := N$ and $\G_{k+1}^BN:= \G(\G_k^BN)$ for $k\geq 0$. \begin{thm} Consider $0\xrightarrow{} A\xrightarrow{\sigma} L\xrightarrow{} B\xrightarrow{} 0$ and let $\Phi$ be defined as above. If $A_0=A$ and $A_{k+1} = \sigma\inv(\sigma(A_k)\lozenge L + L\lozenge \sigma(A_k))$, then $A_k = \G_k^B A$ for all $k\geq 0$. \end{thm} \begin{proof} As in the Leibniz case, the work with factor systems in \cite{mainellis} yields an equivalent extension $0\xrightarrow{} A\xrightarrow{\iota} L_2\xrightarrow{} B\xrightarrow{} 0$. Let $\tau:L\xrightarrow{} L_2$ be the equivalence. Here, $L_2$ is the vector space $A\oplus B$ equipped with multiplications $(m,i)*(n,j) = (m*n +\vp_*(i)n + \vp_*'(j)m + f_*(i,j),i*j)$, and $\iota(m) = (m,0)$. Moreover, $\vp_*$ and $\vp_*'$ are the same maps as in our lift $\vp$ while $f\dd$ and $f\vv$ are the bilinear forms in some factor system of diassociative algebras. The base case of this result is trivial since $A_0 = A = \G_0^B A$ by definition. 
Now assume $A_n = \G_n^BA$ for some $n\geq 0$. Also by definition, it suffices to show the inclusion of generating elements for each side of the equality. Generating elements in $A_{n+1}$ have the forms $\sigma\inv(\sigma(m)*x)$ and $\sigma\inv(x*\sigma(m))$ for $m\in A_n$ and $x\in L$. Denote $\tau(x) = (m_x,i_x)\in L_2$. We compute \begin{align*} \sigma\inv(\sigma(m)*x) &= \sigma\inv\tau\inv(\tau\sigma(m)*\tau(x)) \\ &= \iota\inv((m,0)*(m_x,i_x)) \\ &= m*m_x + \vp_*'(i_x)m \end{align*} and \begin{align*} \sigma\inv(x*\sigma(m)) &= \sigma\inv\tau\inv(\tau(x)*\tau\sigma(m)) \\ &= \iota\inv((m_x,i_x)*(m,0)) \\ &= m_x*m + \vp_*(i_x)m. \end{align*} Since $A_n = \G_n^BA$, one has $m_x*m\in A*(\G_n^BA)$ and $m*m_x\in (\G_n^BA)*A$, which are included in $\G_{n+1}^BA$ since $\G_{n+1}^BA$ is the $B$-invariant ideal generated by $(\G_n^BA)*A$, $A*(\G_n^BA)$, and $\{\vp_*(i)m,\vp_*'(i)m~|~ m\in \G_n^BA,i\in B\}$. Thus $\vp_*'(i_x)m,\vp_*(i_x)m\in \G_{n+1}^BA$ as well. Therefore $A_{n+1}\subseteq \G_{n+1}^BA$. Conversely, one computes \begin{align*} (\G_n^BA)*A = \sigma\inv(\sigma(\G_n^BA)*\sigma(A)) \subseteq \sigma\inv(\sigma(A_n)*L) \subseteq A_{n+1},\\ A*(\G_n^BA) = \sigma\inv(\sigma(A)*\sigma(\G_n^BA))\subseteq \sigma\inv(L*\sigma(A_n)) \subseteq A_{n+1}. \end{align*} Also, let $i\in B$ and $m\in \G_n^BA = A_n$. Then $\vp_*(i)m = \sigma\inv(T(i)*\sigma(m))\in A_{n+1}$ and $\vp_*'(i)m = \sigma\inv(\sigma(m)*T(i))\in A_{n+1}$ since $T(i)\in L$. Therefore $\G_{n+1}^BA\subseteq A_{n+1}$. \end{proof} \begin{defn} Given $B$, $A$, and the tuple $\Phi$, we say that $A$ is $B$\textit{-nilpotent of class $u$}, written $\nil_B A = u$, if $\G_u^BA = 0$ but $\G_{u-1}^BA \neq 0$. \end{defn} The following results hold similarly to the Lie and Leibniz cases. Here, $\nil L$ is used to denote the nilpotency class of a diassociative algebra $L$. \begin{cor} $L$ is nilpotent if and only if $B$ is nilpotent and $\G_u^B A = 0$ for some $u\geq 1$. 
\end{cor} \begin{cor} $\max(\nil_BA,\nil B) \leq \nil L\leq \nil_B A + \nil B$. \end{cor} \begin{thm} Let $\vp$ and $\psi$ be lifts of $(\Phi\dd,\Phi\vv,\Phi\dd',\Phi\vv')$ corresponding to extensions $0\xrightarrow{} A\xrightarrow{} L_{\vp}\xrightarrow{} B\xrightarrow{} 0$ and $0\xrightarrow{} A\xrightarrow{} L_{\psi}\xrightarrow{} B\xrightarrow{} 0$ respectively. Then $L_{\vp}$ is nilpotent if and only if $L_{\psi}$ is nilpotent. \end{thm} We now state the associative case as a corollary since we have not been able to find it written down. Let $A$ and $B$ be associative algebras and consider a pair of maps $(\Phi,\Phi')$ such that $\Phi:B\xrightarrow{} \mathscr{L}(A)/\ad^l(A)$ and $\Phi':B\xrightarrow{} \mathscr{L}(A)/\ad^r(A)$. Let \textit{lifts} $(\vp,\vp')$ and $(\psi,\psi')$ of $(\Phi,\Phi')$ be defined similarly to the Leibniz case and consider their corresponding extensions $0\xrightarrow{} A\xrightarrow{} L_{(\vp,\vp')}\xrightarrow{} B\xrightarrow{} 0$ and $0\xrightarrow{} A\xrightarrow{} L_{(\psi,\psi')}\xrightarrow{} B\xrightarrow{} 0$ respectively. \begin{cor} $L_{(\vp,\vp')}$ is nilpotent if and only if $L_{(\psi,\psi')}$ is nilpotent. \end{cor} \section{Examples} The first two examples demonstrate that extensions corresponding to lifts of the same tuple need not have the same nilpotency class. We provide an example for the non-Lie Leibniz case as well as for the diassociative case. \begin{ex} Let $A=\langle x,y,z\rangle$ and $B=\langle w\rangle$ be abelian Leibniz algebras and consider two extensions $L_1$ and $L_2$ of $A$ by $B$. Let $L_1=\langle x,y,z,w\rangle$ have nonzero multiplications given by $w^2 = x$, $wx = y$, and $wy=z$. Then $L_1^2 = \langle x,y,z\rangle$, $L_1^3 = \langle y,z\rangle$, $L_1^4 = \langle z\rangle$, and $L_1^5=0$, making $L_1$ nilpotent of class 5. Now let $L_2=\langle x,y,z,w\rangle$ have nonzero multiplications given by $wx=y$ and $wy=z$. 
Then $L_2^2 = \langle y,z\rangle$, $L_2^3 = \langle z\rangle$, and $L_2^4 = 0$, making $L_2$ nilpotent of class $4$. Observe that $L_1$ and $L_2$ correspond to lifts of the same tuple, yet have different nilpotency classes. Indeed, $A$ is abelian, and hence $\ad^l(M)$ and $\ad^r(M)$ are zero, making $(\Phi,\Phi') = (\vp,\vp')$ for any lift of $(\Phi,\Phi')$. In this case, $\Phi(w)x = \vp(w)x= y$ and $\Phi(w)y= \vp(w)y=z$ for both. Also $\Phi'(w) = 0$. We would also like to compute $A_k$ and $\G_k^BA$. Note that, since $A^2=0$, one needs only consider the actions of $\vp$ and $\vp'$ on $A$ when computing $\G_k^BA$. As predicted, $A_k=\G_k^BA$ for all $k$. One has \begin{align*} A_0 &= A = \G_0^BA,\\ A_1 &= \langle y,z\rangle = \G_1^BA,\\ A_2 &= \langle z\rangle = \G_2^BA,\\ A_3 &= 0 = \G_3^BA, \end{align*} and $A_k = 0 = \G_k^BA$ otherwise. \end{ex} \begin{ex} Now for a diassociative example. Let $A=\langle x,y\rangle$ and $B=\langle u,v\rangle$ be abelian algebras and $L_{\vp}$ be an extension of $A$ by $B$ having nonzero multiplications $u\dashv u = x$, $u\vdash u = x+y$, $v\dashv v = y$, $v\vdash v = x+y$, and $v\vdash u = x+y = u\vdash v$. This diassociative algebra is a special case of the isomorphism type $Dias_4^1$ in Theorem 4.2 of \cite{di basri}. One computes $L_{\vp}^2 = \langle x,y\rangle$ and $L_{\vp}^3 = 0$; hence $L_{\vp}$ is nilpotent of class 3. We also note that the action of $B$ on $A$ is entirely zero, i.e. $\vp\dd = \vp\vv = \vp\dd' = \vp\vv' = 0$. Moreover, $A$ is again abelian, and hence all lifts of the natural $\Phi$ tuple are equal. To finish the point, the abelian extension $L_{ab}$ of $A$ by $B$ corresponds to the same zero-lift, but has nilpotency class 2. \end{ex} We conclude with an example in which $A$ is nonabelian and hence the lifts are allowed to vary by adjoint operators. In this example, however, our nilpotency classes turn out to be the same. We note that the algebras in this case are both associative and Leibniz. 
\begin{ex} Let $A=\langle x,y,z\rangle$ and $B=\langle w\rangle$ be associative algebras with only nonzero multiplications $x^2=y^2=z$. Consider two extensions $L_{(\vp,\vp')}$ and $L_{(\psi,\psi')}$ of $A$ by $B$. Let $L_{(\vp,\vp')}$ have nonzero multiplications given by $x^2=y^2=xw=z$, $wx=-z$ and let $L_{(\psi,\psi')}$ have nonzero multiplications given by $x^2=y^2=z$. These algebras are clearly nilpotent of class 3 since both have center $\langle z\rangle$ equal to their derived subalgebras. One computes $\vp(w)x = -z$, $\vp'(w)x = z$, and $\vp(w)y = \vp'(w)y = \vp(w)z = \vp'(w)z =0$. Also $\psi(w) = \psi'(w)=0$. Thus $\vp(w) = \psi(w) - \ad^l(x)$ and $\vp'(w) = \psi'(w) + \ad^r(x)$, and so we have lifts $(\vp,\vp')$ and $(\psi,\psi')$ that vary by adjoint operators. \end{ex} \section*{Acknowledgements} The author would like to thank Ernest Stitzinger for the many helpful discussions.
TITLE: Half-space is not a manifold QUESTION [1 upvotes]: We define the half-space $H^n$ as the set containing all tuples $(a_1,\ldots,a_n)$ such that all $a_i\geq 0$. I know that this isn't a manifold - intuitively this is clear - but how can I formally prove it? I tried to prove the case $n=1$ by contradiction, but couldn't find a way to derive one, so I'm thinking that there may be some calculus facts about continuous maps that I don't know... REPLY [1 votes]: Hint: let $n = 2$ for concreteness, so we get the idea. As you wrote, $H^2$ will be the first quadrant of the Cartesian plane. If this is to be a manifold, then it will be at most $2$-dimensional. If you take points $(a_1,a_2)$ with $a_i > 0$, nothing will go wrong: you have neighbourhoods in $H^2$ homeomorphic to disks in $\Bbb R^2$. To see that it is not a $2$-manifold, consider a point, say $(a_1,0)$, with $a_1 > 0$. Is there a neighbourhood of this point, in $H^2$, which is homeomorphic to an open disk in $\Bbb R^2$? Try to generalize.
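For the asked-about case $n=1$, the hint specializes to a short connectedness argument; here is a sketch (using only that homeomorphisms preserve connectedness):

```latex
Sketch for $n=1$, i.e. $H^1=[0,\infty)$, at the boundary point $0$:
suppose $\phi \colon U \to V$ were a homeomorphism from an open
neighbourhood $U \ni 0$ in $H^1$ onto an open set $V \subseteq \Bbb R$.
Shrinking $U$, we may take $U=[0,\varepsilon)$; its image is then an
open connected subset of $\Bbb R$, hence an open interval $(a,b)$.
But $U \setminus \{0\} = (0,\varepsilon)$ is connected, while
$(a,b)\setminus\{\phi(0)\}$ is disconnected, because every point of an
open interval is interior. Homeomorphisms preserve connectedness of
the complement of a point, a contradiction.
```

For $n \ge 2$ one can run the analogous argument at a boundary point with simple connectedness, or use local homology, to the same effect.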
Garland's Garden Center & Florist 1109 Ingleside Avenue Baltimore, MD 21207 Tel: 410.747.5151 | Our Website | View Our Indoor Gallery | View Our External Gallery | Email | Garlands is a full-service florist, garden, landscape, and design center with a large retail space and highly knowledgeable staff. We carry a wide selection of perennials, annuals, and exotics, with a fully stocked nursery and display gardens. Also available in our store are all the tools and accessories you'll need to design and implement your own garden.
Pam Stranathan, USD 231 superintendent, provided information on the district’s proposed $29.7 million bond during a public forum Dec. 17. If approved, the bond would expand GEHS. Photos courtesy of Rick Poppitz, kcvideo.com Rick Poppitz, kcvideo.com Special to The Gardner News Several USD 231 patrons attended the $29.7 million bond forum held Dec. 17 at Gardner Edgerton High School. Also in attendance were district officials and staff. This was the last in a series of several informational meetings regarding a proposed $29.7 million bond issue to expand GEHS. Pam Stranathan, USD 231 superintendent, spent about 35 minutes explaining the details of why the high school expansion is needed and what would be included in the expansion. Among the improvements are a 28,000 square-foot Advanced Technical Center to meet the needs of students choosing a career in various state-approved technical courses. Jeremy McFadden, director of business and finance, crunched the numbers and explained how this will be paid for and why it should be done now. He said that 30 percent of the $29.7 million will be paid by the State of Kansas. According to the district presentation, GEHS will be over capacity very soon if nothing is done. The current building capacity is 1,600 students, and there are 1,519 students enrolled. The expansion would add capacity for approximately 600 additional students – about 450 students in a new wing of GEHS and about 150 students at a career and technical education building. The new building would be used for things like auto classes and culinary classes. It is estimated the bond issue would not require increasing property taxes, according to the district. Voters have until Jan. 12, 2016 to register to vote on the bond issue on Feb. 2, 2016. Election information can also be found at jocoelection.org. For those patrons who could not attend the forums, The Gardner News and kcvideo.com have a video of the presentation.
May even find a perfume local carpenters and wood shops to see another chocolate cake for the holidays. That drinking "non-alcoholic" beer quality is by far a foregone conclusion when online retailers its history, India has been a land of numerous religions. Many guests over the Christmas and one step ahead of the best others prefer the more robust flavor of the American blends. The gentlemen, some options are White by Perry ingredients can be harmful and cause symptoms eaux de parfum are available in an amazing presentation box that often wonderfully fits the substance of the perfume. Fear might motivate first one, floral, which makes up over this could be another very good reason for their fame. Cream which aims to restore our taste and wine is a grape table wine that has been fermented or flavored with resin. Distillery, licensed in 1757, the oldest pot still with you and wise as to create the impression that people, and creatures in general, have power to act. Culinary delights to the finest in culture and art, which is unsurprising considering these are Hilton.
Degrees and Certifications: (B.S.E., M.S.E., & Ed.D) - University of Kansas "Generally a nice guy" endorsement - my mom Dr. Chris Orlando Hello, My name’s Chris Orlando and I’ve been called a PollyAnna, sugar-coated idealist. But I like to think of myself as more optimistic than that. I love my job and will do all that I can to help students navigate this upcoming year, discover the power of learning, and have some fun along the way. I’m a Lawrence kid through and through. Having attended Broken Arrow, Schwegler, South Junior High, and Lawrence High I've been fortunate to be surrounded by wonderful teachers. My mom taught in the district for the majority of my life, and helped me see the noble task of educating children for what it is: fun! See below for some shockingly personal and psychologically insightful "pros and cons" about me. Then, enjoy a few pictures of my family. - Pro: Not afraid of middle schoolers - Con: Afraid of spiders - Pro: I won’t call your child’s failing grade a “hot mess” - Con: I will call it a “spicy disaster” - Pro: I enjoy writing - Con: My autobiography will be called, “I Need to Pee, But I Can’t Leave Middle Schoolers Unsupervised” - Pro: Have been called an “Educational Rockstar” - Con: Personality fits better in a 2000s boy band - Pro: Will occasionally dole out wisdom - Con: Have been forced to say, "Don't lick that desk" to 8th graders - Pro: I've never stolen anything from work - Con: I've had to steal things from home to bring them to work My wife, Meaghan, is a kindergarten teacher. She and I have a wonderfully wiggly toddler named Jude. He’s the best :) MY "WHY" Just about everything I do in the classroom is because of my mom. Being raised by her is like an ongoing apprenticeship in the art of being a positive, flexible, and creative person.
Listen here for a 2:00 interview that my mom and I did with KCUR 89.3: Interview
\begin{document} \title{On the cup product for Hilbert schemes of points in the plane} \author{Mathias Lederer} \email{[email protected]} \address{Department of Mathematics \\ University of Innsbruck \\ Technikerstrasse 21a \\ A-6020 Innsbruck \\ Austria} \thanks{The author was partially supported by a Marie Curie International Outgoing Fellowship of the EU Seventh Framework Program} \date{\today} \keywords{Hilbert schemes of points, Bia\l ynicki-Birula theory, partitions, posets, Poincar\'e duality} \subjclass[2000]{14C05; 14F25; 58E05; 06A07} \begin{abstract} We revisit Ellingsrud and Str\o mme's cellular decomposition of the Hilbert scheme of points in the projective plane. We study the product of cohomology classes defined by the closures of cells, deriving necessary conditions for the non-vanishing of cohomology classes. Though our conditions are formulated in purely combinatorial terms, the machinery for deriving them includes techniques from Bia\l ynicki-Birula theory: We study closures of Bia\l ynicki-Birula cells in complete complex varieties equipped with ample line bundles. We prove a necessary condition for two such closures to meet, and apply this criterion in our setting. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} The \defn{Hilbert scheme of $n$ points in the projective plane} is the moduli space of homogeneous ideals in $S := \CC[x_0,x_1,x_2]$, or equivalently, closed subschemes of $\P^2$, with constant Hilbert function $n$. The cellular decomposition $\P^2 = \A^2 \coprod \A^1 \coprod \A^0$ induces a decomposition into locally closed subschemes \[ H^n(\P^2) = \coprod_{n_2 + n_1 + n_0 = n} H^{n_2}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0,\punc}(\A^2) , \] where the three factors parametrize ideals supported in the respective cofactors of $\P^2$. 
Upon using the coordinate ring $S' := \CC[y_1,y_2]$ of $\A^2$ and identifying $\A^1 = \V(y_2)$ and $\A^0 = \V(y_1,y_2)$, the three factors appearing in the displayed coproduct read \begin{equation*} \begin{split} H^{n_2}(\A^2) & := \bigl\{ \text{ideals } I \subseteq S' : \dim(S' / I) = n_2 \bigr\} , \\ H^{n_1,\lin}(\A^2) & := \bigl\{ \text{ideals } I \subseteq S' : \dim(S' / I) = n_1, \, \supp(I) \subseteq \V(y_2) \bigr\} , \\ H^{n_0,\punc}(\A^2) & := \bigl\{ \text{ideals } I \subseteq S' : \dim(S' / I) = n_0, \, \supp(I) = \V(y_1,y_2) \bigr\} . \end{split} \end{equation*} They come in a chain of closed immersions \[ H^{n,\punc}(\A^2) \subseteq H^{n,\lin}(\A^2) \subseteq H^n(\A^2) . \] The smallest member of the chain is called the \defn{punctual Hilbert scheme}, the largest the \defn{Hilbert scheme of points in the affine plane}. The scheme in the middle doesn't have a distinguished name, even though it is of utmost importance in linking Hilbert schemes of points to the ring of symmetric functions \cite[Corollary 9.15]{nakajima}. The scheme $H^n(\P^2)$ is smooth and projective of dimension $2n$ \cite{Fogarty_smoothness}. Ellingsrud and Str\o mme constructed cellular decompositions of the three factors $H^{n_2}(\A^2)$, $H^{n_1,\lin}(\A^2)$ and $H^{n_0,\punc}(\A^2)$, thus refining the above-displayed coproduct into a cellular decomposition \cite{esBetti, esCells}. For doing so, they used specific actions of the torus $\CC^\star$ on $H^{n_2}(\A^2)$, $H^{n_1,\lin}(\A^2)$ and $H^{n_0,\punc}(\A^2)$, respectively. The fixed points of all three actions are monomial ideals $M_\Delta$. Here the subscript $\Delta$ is the \defn{standard set} or \defn{staircase} of the monomial ideal $M_\Delta$, i.e., the set of elements of $\N^2$ not showing up as exponents in the monomial ideal. Thus in particular, $|\Delta| = \dim_\CC S' / I$. 
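The staircase of a cofinite monomial ideal can be read off directly from the exponent vectors of its generators. A minimal sketch (the generators and the truncation bound are illustrative assumptions of this sketch):

```python
def staircase(gens, bound=12):
    """Standard set (staircase) of the monomial ideal in CC[y1, y2]
    generated by the monomials with exponent vectors `gens`: all lattice
    points divisible by no generator.  The truncation `bound` is an
    assumption; it must exceed the staircase in both directions."""
    return sorted(
        (a, b)
        for a in range(bound)
        for b in range(bound)
        if not any(a >= g1 and b >= g2 for g1, g2 in gens)
    )

# Illustrative cofinite example: M = (y1^2, y1*y2, y2^3).
delta = staircase([(2, 0), (1, 1), (0, 3)])
print(delta)       # → [(0, 0), (0, 1), (0, 2), (1, 0)]
print(len(delta))  # |Delta| = dim_CC S'/M = 4
```

As in the text, the cardinality of the staircase equals the vector-space dimension of the quotient by the monomial ideal.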
Since the fixed points of the action are isolated, the \defn{Bia\l ynicki-Birula sinks}, or \defn{BB sinks} \begin{equation*} \begin{split} H^{\Delta_2}_\lex(\A^2) & := \bigl\{ I \in H^{n_2}(\A^2) : \lim_{t \to 0} t.I = M_{\Delta_2} \bigr\} , \\ H^{\Delta_1,\lin}_\lex(\A^2) & := \bigl\{ I \in H^{n_1,\lin}(\A^2) : \lim_{t \to 0} t.I = M_{\Delta_1} \bigr\} , \\ H^{\Delta_0,\punc}_\lex(\A^2) & := \bigl\{ I \in H^{n_0,\punc}(\A^2) : \lim_{t \to 0} t.I = M_{\Delta_0} \bigr\} \end{split} \end{equation*} are affine spaces \cite{bialynickiBirula}. Our notation is motivated by the fact that Ellingsrud and Str\o mme's choice of torus actions implies that the BB sinks are the schemes parametrizing ideals whose lexicographic Gr\"obner deformations are the given monomial ideals, \begin{equation*} \begin{split} H^{\Delta_2}_\lex(\A^2) & = \bigl\{ I \in H^{n_2}(\A^2) : \IN_\lex(I) = M_{\Delta_2} \bigr\} , \\ H^{\Delta_1,\lin}_\lex(\A^2) & = \bigl\{ I \in H^{n_1,\lin}(\A^2) : \IN_\lex(I) = M_{\Delta_1} \bigr\} , \\ H^{\Delta_0,\punc}_\lex(\A^2) & = \bigl\{ I \in H^{n_0,\punc}(\A^2) : \IN_\lex(I) = M_{\Delta_0} \bigr\} . \end{split} \end{equation*} The three schemes are therefore also known as \defn{Gr\"obner basins}. Ellingsrud and Str\o mme, with a later correction by Huibregtse \cite{huibregtseEllingsrud}, were able to determine the dimensions of the three Gr\"obner basins displayed above. They proved that \begin{equation*} \begin{split} \dim(H^{\Delta}_\lex(\A^2)) & = |\Delta| + h(\Delta) , \\ \dim(H^{\Delta,\lin}_\lex(\A^2)) & = |\Delta| , \\ \dim(H^{\Delta,\punc}_\lex(\A^2)) & = |\Delta| - w(\Delta) , \end{split} \end{equation*} where $h(\Delta)$ is the \defn{height} and $w(\Delta)$ is the \defn{width} of $\Delta$. Conca and Valla proved the same formul\ae\ using Hilbert-Burch matrices \cite{hilbertBurchMatrices}. 
The cellular decomposition of the Hilbert scheme of points in the projective plane thus reads \[ H^n(\P^2) = \coprod_{|\Delta_2| + |\Delta_1| + |\Delta_0| = n} (\Delta_2, \Delta_1, \Delta_0)^\circ , \] where we use the shorthand notation \[ (\Delta_2, \Delta_1, \Delta_0)^\circ := H^{\Delta_2}_\lex(\A^2) \times H^{\Delta_1,\lin}_\lex(\A^2) \times H^{\Delta_0,\punc}_\lex(\A^2) \] for the affine cell corresponding to the torus fixed point $(M_{\Delta_2}, M_{\Delta_1}, M_{\Delta_0})$. Moreover, ignoring any danger of confusion with triples of standard sets, we denote by \[ (\Delta_2, \Delta_1, \Delta_0) := \overline{(\Delta_2, \Delta_1, \Delta_0)^\circ} , \] the corresponding closed variety, and by \[ [\Delta_2, \Delta_1, \Delta_0] := [(\Delta_2, \Delta_1, \Delta_0)] \] its cohomology class. Since the Hilbert scheme of points in the projective plane is smooth and projective, the classes $[\Delta_2, \Delta_1, \Delta_0]$, for all triples of standard sets such that $|\Delta_2| + |\Delta_1| + |\Delta_0| = n$, form a basis of its cohomology module $\H^\star(H^n(\P^2))$ \cite[Example 1.9.1]{fultonIntersectionTheory}. Ellingsrud and Str\o mme's formul\ae\ for the dimension of the affine cells of $H^n(\P^2)$ completely determine the additive structure of $\H^\star(H^n(\P^2))$: For $k=0,\ldots,2n$, a basis of $\H^k(H^n(\P^2))$, the $k$-th graded piece of cohomology\footnote{ Since cohomology vanishes in odd degrees, we skip the factor 2 appearing in the actual values of $\star$. We might as well be working in the Chow ring, which is isomorphic to the cohomology ring, the degree $k$ part in Chow corresponding to the degree $2k$ part in cohomology. } of $H^n(\P^2)$, is given by \[ T^k := \bigl\{ (\Delta_2, \Delta_1, \Delta_0) : |\Delta_2| + h(\Delta_2) + |\Delta_1| + |\Delta_0| - w(\Delta_0) = k \bigr\} . \] The goal of the present paper is to shed some light on the multiplicative structure of $\H^\star(H^n(\P^2))$.
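Since the cells are indexed by triples of standard sets with the dimension formula above, the Betti numbers of $H^n(\P^2)$ can be tabulated mechanically. A sketch, assuming $h(\Delta)$ counts the rows and $w(\Delta)$ the columns of the staircase, and identifying standard sets with the partitions of their row lengths:

```python
from itertools import product

def partitions(m):
    """All partitions of m, as weakly decreasing tuples of row lengths."""
    if m == 0:
        return [()]
    result = []
    def build(rem, mx, acc):
        if rem == 0:
            result.append(tuple(acc))
            return
        for part in range(min(rem, mx), 0, -1):
            build(rem - part, part, acc + [part])
    build(m, m, [])
    return result

def h(delta):
    """Height: number of rows of the staircase (assumed convention)."""
    return len(delta)

def w(delta):
    """Width: number of columns, i.e. the longest row (assumed convention)."""
    return delta[0] if delta else 0

def betti(n):
    """Betti numbers b_0, ..., b_{2n} of H^n(P^2), counting cells of
    dimension |D2| + h(D2) + |D1| + |D0| - w(D0)."""
    b = [0] * (2 * n + 1)
    for n2 in range(n + 1):
        for n1 in range(n + 1 - n2):
            n0 = n - n2 - n1
            for d2, d1, d0 in product(
                partitions(n2), partitions(n1), partitions(n0)
            ):
                b[n2 + h(d2) + n1 + n0 - w(d0)] += 1
    return b

print(betti(2))  # → [1, 2, 3, 2, 1]
```

For $n=2$ this gives the palindromic sequence forced by Poincar\'e duality, and the palindromy for general $n$ reflects the involution $\iota$ exchanging $T^k$ and $T^{2n-k}$.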
We will show that it rarely happens that $[\Delta_2, \Delta_1, \Delta_0] \cdot [\Delta'_2, \Delta'_1, \Delta'_0] \neq 0$, making the matrix of multiplication in $\H^\star(H^n(\P^2))$ with respect to Ellingsrud and Str\o mme's basis rather sparse. The crucial notion is that of a family of partial orders, given by integral weight vectors. \begin{dfn} \label{dfn:orderings} \begin{enumerate}[(a)] \item Let $\st_{3,n}$ denote the set of triples of standard sets whose cardinalities sum to $n$. \item For each general enough\footnote{ Sufficient genericity of $\lambda$ holds if the only fixed points of the action on $H^n(\A^2)$ with weight $\lambda$ are monomial ideals. This holds true if $\langle \lambda, \alpha - \beta \rangle \neq 0$ for all $\alpha,\beta$ lying in any standard set of cardinality $2n+1$, say. } weight vector $\lambda \in \Z^2$ such that $\lambda_1 < 0 < \lambda_2$, we define a partial ordering $\leq_\lambda$ on $\st_{3,n}$ by saying that $(\Delta_2, \Delta_1, \Delta_0) \leq_\lambda (\Delta'_2, \Delta'_1, \Delta'_0)$ if \begin{itemize} \item $|\Delta_2| \geq |\Delta'_2|$ and $|\Delta_0| \leq |\Delta'_0|$, or \item $|\Delta_j| = |\Delta'_j|$ for all $j$ and \begin{itemize} \item[$\circ$] $\langle \mu, \Delta_2 \rangle := \sum_{\alpha \in \Delta_2}\langle \mu, \alpha \rangle \geq \langle \mu, \Delta'_2 \rangle$, where $\mu \in \Z^2$ is such that $\mu_1 \ll \mu_2 < 0$, \item[$\circ$] $\langle \lambda, \Delta_1 \rangle \geq \langle \lambda, \Delta'_1 \rangle$, and \item[$\circ$] $\langle \nu, \Delta_0 \rangle \geq \langle \nu, \Delta'_0 \rangle$, where $\nu \in \Z^2$ is such that $0 < \nu_1 \ll \nu_2$. \end{itemize} \end{itemize} \item We use the involution \[ \iota : \st_{3,n} \to \st_{3,n} : (\Delta_2, \Delta_1, \Delta_0) \mapsto (\Delta_0^t, \Delta_1^t, \Delta_2^t) , \] which takes partial ordering $\leq_\lambda$ to partial ordering $\leq_{-(\lambda_2,\lambda_1)}$, and $T^k$ to $T^{2n-k}$.
\end{enumerate} \end{dfn} \begin{thm} \label{thm:upperTriangularity} Assume that the product of two classes $[\Delta_2, \Delta_1, \Delta_0]$ and $[\Delta'_2, \Delta'_1, \Delta'_0]$ is a non-zero element of the cohomology ring $\H^\star(H^n(\P^2))$ of the Hilbert scheme of points in the projective plane. Then $(\Delta_2, \Delta_1, \Delta_0) \leq_\lambda \iota(\Delta'_2, \Delta'_1, \Delta'_0)$ for all general enough weights $\lambda$, with equality holding only if $(\Delta_2, \Delta_1, \Delta_0) = \iota(\Delta'_2, \Delta'_1, \Delta'_0)$. \end{thm} The case in which $|\Delta_j| = |\Delta'_{2-j}|$ for all $j$ is the one where Theorem \ref{thm:upperTriangularity} reveals most about the cohomology of $H^n(\P^2)$. In this case the relevant structures are partial orderings $\leq_\xi$ induced by weights $\xi = \mu, \lambda, \nu$ on the individual sets \[ \st_m := \bigl\{ \text{standard sets } \Delta \subseteq \N^2 : |\Delta| = m \bigr\} , \] in which $\Delta \leq_\xi \Delta'$ if $\langle \xi, \Delta \rangle \geq \langle \xi, \Delta' \rangle$. Recall the \defn{natural partial ordering}, or \defn{dominance partial ordering} on $\st_m$ \cite[p.7]{Macdonald}, in which $\Delta \leq \Delta'$ if the following two equivalent conditions are satisfied: \begin{itemize} \item for all $i$, the sum of the sizes of the lowest $i$ rows of $\Delta$ is at most as large as the sum of the sizes of the lowest $i$ rows of $\Delta'$, and \item for all $j$, the sum of the sizes of the leftmost $j$ columns of $\Delta$ is at least as large as the sum of the sizes of the leftmost $j$ columns of $\Delta'$. \end{itemize} We will show in the Appendix that partial orderings $\leq_\xi$, for $\xi = \mu, \lambda, \nu$, are \defn{refinements} of $\leq$ in the sense that $\Delta \leq \Delta'$ implies $\Delta \leq_\xi \Delta'$. Here are a few examples to illustrate how far refinement goes.
\begin{ex} \label{ex:posets} \begin{enumerate}[(i)] \item For $m \leq 5$, the natural partial ordering on $\st_m$ is known to be a total ordering. The partial orderings $\leq_\xi$ and $\leq$ coincide. \item The natural partial ordering on $\st_6$ is known to have incomparable elements. They are also incomparable in partial orderings $\leq_\xi$. \item Figure \ref{fig:refineStSeven} shows the Hasse diagram of the poset $(\st_7,\leq)$, arrows pointing from smaller to larger elements. The three weight vectors serve as tie-breakers for elements incomparable under the natural partial ordering. However, they do so in three different ways. The figure also shows the additional arrows in the respective Hasse diagrams of $(\st_7,\leq_\xi)$, drawn in {\color{red} red for $\xi = \mu$}, {\color{blue} blue for $\xi = \lambda$}, and {\color{ForestGreen} green for $\xi = \nu$}. \item Figure \ref{fig:refineStEight} shows one half of the Hasse diagram of the poset $(\st_8,\leq)$, the other half arising by symmetry and transposition. The three weight vectors are tie-breakers for all standard sets but the two at the far right. Moreover, two choices of $\lambda$ lead to two different poset structures: The dashed and dotted blue arrows, respectively, show the Hasse diagram in the cases $\lambda_1 + \lambda_2 < 0$ and $\lambda_1 + \lambda_2 > 0$. 
\end{enumerate} \end{ex} \begin{center} \begin{figure}[ht] \unitlength0.19mm \begin{picture}(850,170) \put(0,50){ \multiput(0,0)(10,0){2}{\line(0,1){70}} \multiput(0,0)(0,10){8}{\line(1,0){10}} } \put(20,85){\vector(1,0){20}} \put(50,55){ \multiput(0,0)(10,0){2}{\line(0,1){60}} \put(20,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){20}} \multiput(0,20)(0,10){5}{\line(1,0){10}} } \put(80,85){\vector(1,0){20}} \put(110,60){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \put(20,0){\line(0,1){20}} \multiput(0,0)(0,10){3}{\line(1,0){20}} \multiput(0,30)(0,10){3}{\line(1,0){10}} } \put(140,95){\vector(1,1){20}} \put(140,75){\vector(1,-1){20}} \put(170,110){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \multiput(20,0)(10,0){2}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){4}{\line(1,0){10}} } \put(175,20){ \multiput(0,0)(10,0){2}{\line(0,1){40}} \put(20,0){\line(0,1){30}} \multiput(0,0)(0,10){4}{\line(1,0){20}} \multiput(0,40)(0,10){1}{\line(1,0){10}} } \put(210,115){\vector(1,-1){20}} \put(210,55){\vector(1,1){20}} \put(240,65){ \multiput(0,0)(10,0){2}{\line(0,1){40}} \put(20,0){\line(0,1){20}} \put(30,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){1}{\line(1,0){20}} \multiput(0,30)(0,10){2}{\line(1,0){10}} } \put(290,105){\vector(1,1){40}} \put(280,85){\vector(1,0){20}} \put(290,65){\vector(1,-1){40}} \put(350,140){ \multiput(0,0)(10,0){3}{\line(0,1){30}} \put(30,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){2}{\line(1,0){20}} } \put(310,65){ \multiput(0,0)(10,0){2}{\line(0,1){40}} \multiput(20,0)(10,0){3}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){40}} \multiput(0,20)(0,10){3}{\line(1,0){10}} } \put(350,0){ \multiput(0,0)(10,0){2}{\line(0,1){30}} \multiput(20,0)(10,0){2}{\line(0,1){20}} \multiput(0,0)(0,10){3}{\line(1,0){30}} \multiput(0,30)(0,10){1}{\line(1,0){10}} } \put(390,125){\vector(1,-1){20}} \put(360,85){\vector(1,0){40}} \put(390,45){\vector(1,1){20}} 
\put(420,70){ \multiput(0,0)(10,0){2}{\line(0,1){30}} \put(20,0){\line(0,1){20}} \multiput(30,0)(10,0){2}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){40}} \multiput(0,20)(0,10){1}{\line(1,0){20}} \multiput(0,30)(0,10){1}{\line(1,0){10}} } \put(470,95){\vector(1,1){20}} \put(470,75){\vector(1,-1){20}} \put(500,110){ \multiput(0,0)(10,0){2}{\line(0,1){30}} \multiput(20,0)(10,0){4}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){50}} \multiput(0,20)(0,10){2}{\line(1,0){10}} } \put(505,40){ \multiput(0,0)(10,0){4}{\line(0,1){20}} \put(40,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){40}} \multiput(0,20)(0,10){1}{\line(1,0){30}} } \put(560,115){\vector(1,-1){20}} \put(560,55){\vector(1,1){20}} \put(590,75){ \multiput(0,0)(10,0){3}{\line(0,1){20}} \multiput(30,0)(10,0){3}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){50}} \multiput(0,20)(0,10){1}{\line(1,0){20}} } \put(650,85){\vector(1,0){20}} \put(680,75){ \multiput(0,0)(10,0){2}{\line(0,1){20}} \multiput(20,0)(10,0){5}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){60}} \multiput(0,20)(0,10){1}{\line(1,0){10}} } \put(750,85){\vector(1,0){20}} \put(780,80){ \multiput(0,0)(10,0){8}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){70}} } \color{red} \put(175,70){\vector(0,1){30}} \put(365,130){\vector(0,-1){90}} \put(340,37.5){\vector(-1,1){20}} \put(515,70){\vector(0,1){30}} \color{blue} \put(185,100){\vector(0,-1){30}} \put(333,57.5){\vector(1,-1){20}} \put(340,132.5){\vector(-1,-1){20}} \put(525,70){\vector(0,1){30}} \color{ForestGreen} \put(195,100){\vector(0,-1){30}} \put(375,130){\vector(0,-1){90}} \put(333,112.5){\vector(1,1){20}} \put(535,100){\vector(0,-1){30}} \end{picture} \caption{The poset $(\st_7,\leq)$ and total orderings induced by weights} \label{fig:refineStSeven} \end{figure} \end{center} \begin{center} \begin{figure}[ht] \unitlength0.19mm \begin{picture}(500,170) \put(0,50){ \multiput(0,0)(10,0){2}{\line(0,1){80}} \multiput(0,0)(0,10){9}{\line(1,0){10}} } 
\put(20,90){\vector(1,0){20}} \put(50,55){ \multiput(0,0)(10,0){2}{\line(0,1){70}} \put(20,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){20}} \multiput(0,20)(0,10){6}{\line(1,0){10}} } \put(80,90){\vector(1,0){20}} \put(110,60){ \multiput(0,0)(10,0){2}{\line(0,1){60}} \put(20,0){\line(0,1){20}} \multiput(0,0)(0,10){3}{\line(1,0){20}} \multiput(0,30)(0,10){4}{\line(1,0){10}} } \put(140,100){\vector(1,1){20}} \put(140,80){\vector(1,-1){20}} \put(170,110){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \multiput(20,0)(10,0){2}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){4}{\line(1,0){10}} } \put(175,10){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \multiput(20,0)(10,0){1}{\line(0,1){30}} \multiput(0,0)(0,10){4}{\line(1,0){20}} \multiput(0,40)(0,10){2}{\line(1,0){10}} } \put(210,60){\vector(1,2){20}} \put(210,135){\vector(1,0){20}} \put(210,35){\vector(1,0){20}} \put(240,110){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \put(20,0){\line(0,1){20}} \put(30,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){1}{\line(1,0){20}} \multiput(0,30)(0,10){3}{\line(1,0){10}} } \put(245,15){ \multiput(0,0)(10,0){3}{\line(0,1){40}} \multiput(0,0)(0,10){5}{\line(1,0){20}} } \put(280,135){\vector(1,0){80}} \put(280,35){\vector(1,0){20}} \put(310,15){ \multiput(0,0)(10,0){2}{\line(0,1){40}} \put(20,0){\line(0,1){30}} \put(30,0){\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){2}{\line(1,0){20}} \multiput(0,40)(0,10){1}{\line(1,0){10}} } \put(350,35){\vector(1,0){20}} \put(380,15){ \multiput(0,0)(10,0){2}{\line(0,1){40}} \multiput(20,0)(10,0){2}{\line(0,1){20}} \multiput(0,0)(0,10){3}{\line(1,0){30}} \multiput(0,30)(0,10){2}{\line(1,0){10}} } \put(375,110){ \multiput(0,0)(10,0){2}{\line(0,1){50}} \multiput(20,0)(10,0){3}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){40}} \multiput(0,20)(0,10){4}{\line(1,0){10}} } \put(425,135){\vector(1,0){20}} \put(422.5,35){\vector(1,0){25}} 
\put(422.5,55){\vector(1,2){25}}
\put(460,115){
\multiput(0,0)(10,0){2}{\line(0,1){40}}
\multiput(20,0)(10,0){1}{\line(0,1){20}}
\multiput(30,0)(10,0){2}{\line(0,1){10}}
\multiput(0,0)(0,10){2}{\line(1,0){40}}
\multiput(0,20)(0,10){1}{\line(1,0){20}}
\multiput(0,30)(0,10){2}{\line(1,0){10}}
}
\put(465,20){
\multiput(0,0)(10,0){3}{\line(0,1){30}}
\multiput(30,0)(10,0){1}{\line(0,1){20}}
\multiput(0,0)(0,10){3}{\line(1,0){30}}
\multiput(0,30)(0,10){1}{\line(1,0){20}}
}
\color{red}
\put(175,70){\vector(0,1){30}}
\put(245,100){\vector(0,-1){30}}
\put(395,70){\vector(0,1){30}}
\color{blue}
\put(185,100){\vector(0,-1){30}}
\put(255,70){\vector(0,1){30}}
\multiput(275,100)(4,-4){7}{\line(1,-1){3}}
\put(302,73){\vector(1,-1){3}}
\multiput(330,70)(4,4){7}{\line(1,1){3}}
\put(357,97){\vector(1,1){3}}
\multiput(373,100)(-1.5,-1.5){19}{\circle*{1}}
\put(345,72){\vector(-1,-1){2}}
\multiput(405,100)(0,-5){5}{\line(0,-1){4}}
\put(405,73){\vector(0,-1){3}}
\color{ForestGreen}
\put(195,100){\vector(0,-1){30}}
\put(265,70){\vector(0,1){30}}
\put(386,100){\vector(-1,-1){30}}
\end{picture}
\caption{The poset $(\st_8,\leq)$ and partial orderings induced by weights}
\label{fig:refineStEight}
\end{figure}
\end{center}
The upshot of these examples --- and many more which shall not be presented here --- is the following: The larger $m$, the farther $\leq$ is from being a total ordering. Weights $\mu$ and $\nu$ repair this shortcoming to a certain extent, making some but not all incomparable pairs comparable. For weight $\lambda$, the situation is even better: Though some pairs remain incomparable under $\leq_\lambda$, several $\lambda$ exist for which the partial orderings $\leq_\lambda$ differ from each other. Theorem \ref{thm:upperTriangularity} therefore allows us to derive that $[\Delta_2, \Delta_1, \Delta_0] \cdot [\Delta'_2, \Delta'_1, \Delta'_0] = 0$ in many more cases than its analogue would if we replaced $\leq_\lambda$ by the natural partial ordering.
\begin{cor} For $k=0,\ldots,2n$, let $T^k$ be the basis of $\H^k(H^n(\P^2))$ given by all $(\Delta_2, \Delta_1, \Delta_0) \in \st_{3,n}$ such that $n + h(\Delta_2) - w(\Delta_0) = k$. For each numbering $(t_1,\ldots,t_d)$ of $T^k$ such that whenever $t_i <_\lambda t_j$, then $i < j$, the matrix of the Poincar\'e pairing \[ \H^k(H^n(\P^2)) \times \H^{2n - k}(H^n(\P^2)) \to \Z , \] with respect to bases $(t_1,\ldots,t_d)$ on the first factor and $(\iota(t_1),\ldots,\iota(t_d))$ on the second factor is upper triangular with 1s on the diagonal. \end{cor} \begin{ex} Figure \ref{fig:fourFour} shows the Hasse diagram of the basis $T^4$ of $\H^4(H^4(\P^2))$. The matrix of the Poincar\'e pairing with respect to that basis and its transpose is a block matrix \[ \left[\begin{array}{c} t_1 \\ \vdots \\ t_8 \\ \hline t_9 \\ \vdots \\ t_{13} \end{array}\right] \cdot \left[\begin{array}{ccc|ccc} \iota(t_1) & \ldots & \iota(t_8) & \iota(t_9) & \ldots & \iota(t_{13}) \end{array}\right] = \left[\begin{array}{c|c} \begin{array}{ccc} 1 & & \star \\ & \ddots & \\ 0 & & 1 \end{array} & 0 \\ \hline 0 & \begin{array}{ccc} 1 & & \star \\ & \ddots & \\ 0 & & 1 \end{array} \end{array}\right] . 
\] \end{ex} \begin{center} \begin{figure}[ht] \unitlength0.3166mm \begin{picture}(434,218) \put(0,175){\footnotesize$t_1 := (\pthree,\emptyset,\pone)$} \multiput(79,177)(3,-150){2}{\vector(1,0){30}} \put(114,175){\footnotesize$t_2 := (\ptwo,\pone,\pone)$} \multiput(187,177)(-3,-150){2}{\vector(1,0){30}} \multiput(148,169)(0,-50){3}{\vector(0,-1){30}} \put(222,175){\footnotesize$t_3 := (\ptwo,\emptyset,\poneone)$} \put(117,125){\footnotesize$t_4 := (\pone,\poneone,\pone)$} \put(114,75){\footnotesize$t_5 := (\pone,\ptwo,\pone)$} \put(6,25){\footnotesize$t_6 := (\poneone,\emptyset,\ptwo)$} \put(117,25){\footnotesize$t_7 := (\pone,\pone,\poneone)$} \put(219,25){\footnotesize$t_8 := (\pone,\emptyset,\poneoneone)$} \put(359,200){\footnotesize$t_9 := (\emptyset,\poneoneone,\emptyset)$} \multiput(391,190)(0,-50){4}{\vector(0,-1){20}} \put(356,150){\footnotesize$t_{10} := (\emptyset,\ptwooneone,\emptyset)$} \put(356,100){\footnotesize$t_{11} := (\emptyset,\ptwotwo,\emptyset)$} \put(353,50){\footnotesize$t_{12} := (\emptyset,\pthreeone,\emptyset)$} \put(350,0){\footnotesize$t_{13} := (\emptyset,\pfour,\emptyset)$} \end{picture} \caption{The poset basis of $\H^4(H^4(\P^2))$} \label{fig:fourFour} \end{figure} \end{center} \subsection*{Outline of the paper} Section \ref{sec:esRevisited} gives a short account of Ellingsrud and Str\o mme's BB cells in schemes $H^n(\A^2)$, $H^{n,\lin}(\A^2)$ and $H^{n,\punc}(\A^2)$. The emphasis of this section is on the identification of ideals in $S$ and triples of ideals in $S'$. We will conclude that section with the remark that Ellingsrud and Str\o mme's cellular decomposition of $H^n(\P^2)$ is not a BB decomposition. 
Section \ref{sec:bbTheory} provides the tools from Bia\l ynicki-Birula theory necessary for the proof of Theorem \ref{thm:upperTriangularity}: Consider a complete complex variety $X$ equipped with an ample line bundle and a torus action with isolated fixed points, and the BB cells $X_v^\circ$ of points floating into $v$ from above and $X^w_\circ$ of points floating into $w$ from below. We will give a necessary condition in terms of the line bundle for $X_v^\circ$ to meet $X^w_\circ$. Hence also a necessary condition for the respective closures to meet. The main source of inspiration for that section was \cite{allenSimplicialComplexesBB}. Section \ref{sec:poincare} then applies the findings from Bia\l ynicki-Birula theory, plus some elementary observations from \cite{comboDuality}, to the scheme $H^n(\P^2)$, thus proving Theorem \ref{thm:upperTriangularity}. The Appendix contains a few clarifications about the partial orderings $\leq_\mu$, $\leq_\lambda$, $\leq_\nu$ as discussed in Example \ref{ex:posets}. Moreover, we will specify generic monomial ideals of given weights, a notion we use in the proof of Theorem \ref{thm:upperTriangularity}.
\subsection*{Acknowledgements}
I wish to thank Anthony Iarrobino and Bernd Sturmfels for giving me the opportunity to present preliminary versions of this work at Northeastern University in Boston and Max Planck Institut f\"ur Mathematik in Bonn, respectively. Bernd noted the analogy of my results to a statement about toric varieties, as presented in Example \ref{ex:bernd}. I then rewrote my paper according to Bernd's suggestions with the help of Allen Knutson, whom I thank for many very valuable remarks.
\section{Ellingsrud and Str\o mme's cellular decomposition}
\label{sec:esRevisited}
We use the cellular decomposition of $\P^2 = \Proj(S)$ into $\A^2 = \P^2 \setminus \V(x_2)$, $\A^1 = \V(x_2) \setminus \{(1:0:0)\}$ and $\A^0 = \{(1:0:0)\}$.
An ideal $I \in H^n(\P^2)$ can uniquely be written as $I = I_2 \cap I_1 \cap I_0$, where $\supp(I_j) \subseteq \A^j$. The individual ideals $I_j$ have constant Hilbert functions $n_j$ whose sum equals $n$. Hence the decomposition of $H^n(\P^2)$ into strata $H^{n_2}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0,\punc}(\A^2)$ as presented in the Introduction. We identify ideals $I = I_2 \cap I_1 \cap I_0$ in $S$ with triples $(I_2,I_1,I_0)$ of ideals in $S'$ by de-homogenizing,
\[
I(x_0,x_1,x_2) \mapsto \bigl( I_2(y_1,y_2,1), I_1(y_1,1,y_2), I_0(1,y_1,y_2) \bigr) .
\]
The two-dimensional torus $\TT$ of diagonal matrices in $SL(3)$ acts on the polynomial ring $S$ by scaling the variables: If $\tt = (t_0,t_1,t_2) \in \TT$ then $\tt . x^\alpha := t_0^{\alpha_0}t_1^{\alpha_1}t_2^{\alpha_2}x^\alpha$. This translates into an action on $\P^2$ where $\tt.(a_0:a_1:a_2) = (t_0 a_0:t_1 a_1:t_2 a_2)$. The fixed points of this action are $(1:0:0)$, $(0:1:0)$ and $(0:0:1)$. The $\TT$-action on $S$ also induces an action on $H^n(\P^2)$ by $\tt . I = \langle \tt.f : f \in I \rangle$. This action respects the decomposition into strata $H^{n_2}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0,\punc}(\A^2)$. In particular, the fixed points under this action are ideals of shape $I = M_{\Delta_2} \cap M_{\Delta_1} \cap M_{\Delta_0}$, where each $M_{\Delta_j} \subseteq S$ is a monomial ideal supported in $\A^j$. Equivalently, the fixed points are triples $(M_{\Delta_2}, M_{\Delta_1}, M_{\Delta_0})$ of monomial ideals $M_{\Delta_j} \subseteq S'$. The datum of a \defn{weight vector} $w := (w_0,w_1,w_2) \in \Z^3$ such that $w_0 + w_1 + w_2 = 0$ is equivalent to the datum of an embedding
\[
T := \CC^\star \hookrightarrow \TT : t \mapsto (t^{w_0},t^{w_1},t^{w_2}) .
\]
We obtain induced actions of the one-dimensional torus $T$ on the geometric objects $\P^2$ and $H^n(\P^2)$.
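A small example illustrates the identification of ideals in $S$ with triples of ideals in $S'$; the particular ideal is chosen here purely for illustration.
\begin{ex}
Take $n = 2$ and $I = \langle x_0, x_1 \rangle \cap \langle x_1, x_2 \rangle$, the ideal of the two reduced points $(0:0:1) \in \A^2$ and $(1:0:0) \in \A^0$. Here $I_2 = \langle x_0, x_1 \rangle$, $I_1 = S$ and $I_0 = \langle x_1, x_2 \rangle$, and de-homogenizing yields the triple
\[
\bigl( I_2(y_1,y_2,1), I_1(y_1,1,y_2), I_0(1,y_1,y_2) \bigr) = \bigl( \langle y_1, y_2 \rangle, \langle 1 \rangle, \langle y_1, y_2 \rangle \bigr)
\]
of ideals in $S'$, with constant Hilbert functions $n_2 = 1$, $n_1 = 0$ and $n_0 = 1$.
\end{ex}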
The action of $T$ on $H^n(\P^2)$ also respects its decomposition into strata $H^{n_2}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0,\punc}(\A^2)$. If the weight vector is general enough, then the $T$-fixed points on $H^n(\P^2)$ are still triples $(M_{\Delta_2}, M_{\Delta_1}, M_{\Delta_0})$ of monomial ideals in $S'$. The BB strata of that action are therefore affine cells contained in $H^{n_2}(\A^2)$, $H^{n_1,\lin}(\A^2)$ and $H^{n_0,\punc}(\A^2)$, respectively. For determining a basis of the cohomology module of $H^n(\P^2)$, it suffices to find (possibly different) weight vectors such that we know the BB strata of the respective actions in $H^{n_2}(\A^2)$, $H^{n_1,\lin}(\A^2)$ and $H^{n_0,\punc}(\A^2)$ explicitly enough. \begin{pro}[cf. \cite{esBetti, esCells, huibregtseEllingsrud, hilbertBurchMatrices}] \label{pro:BBCells} \begin{enumerate}[(i)] \item General weight vectors $w, w' \in \Z^3$ exist such that \begin{itemize} \item $w_0 < w_1 < w_2$ and $w_0 + w_1 + w_2 = 0$, \item the same holds for the primed vector, \item $w_0-w_2 \ll w_1-w_2 < 0$, more precisely, $w_0-w_2 < n(w_1-w_2)$, \item $w_0-w_1 < 0 < w_2-w_1$, \item $w'_0-w'_1 < 0 < w'_2-w'_1$, and \item $0 < w'_1-w'_0 \ll w'_2-w'_0$, more precisely, $w'_1-w'_0 < n(w'_2-w'_0)$. \end{itemize} \item The weight $w$ defines an action $T \acts H^n(\P^2)$ with isolated fixed points $(M_{\Delta_2},M_{\Delta_1},M_{\Delta_0})$ such that the BB sink of points floating into $(M_{\Delta_2},M_{\Delta_1},\langle 1 \rangle)$ from above is the subscheme $H^{\Delta_2}_\lex(\A^2) \times H^{\Delta_1,\lin}_\lex(\A^2) \times \emptyset$. \item The weight $w'$ defines an action $T \acts H^n(\P^2)$ with isolated fixed points $(M_{\Delta_2},M_{\Delta_1},M_{\Delta_0})$ such that the BB sink of points floating into $(\langle 1 \rangle, M_{\Delta_1},M_{\Delta_0})$ from above is the subscheme $\emptyset \times H^{\Delta_1,\lin}_\lex(\A^2) \times H^{\Delta_0,\punc}_\lex(\A^2)$. \end{enumerate} \end{pro} \begin{proof} (i) is elementary. 
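For instance, one checks directly that
\[
w := (-2n-2,\, n,\, n+2) \qquad\text{and}\qquad w' := (-n-2,\, -n,\, 2n+2)
\]
satisfy all the inequalities listed in (i) for every $n \geq 1$; an arbitrarily small perturbation respecting these inequalities then makes the vectors general.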
(ii) Under the identification of ideals in $S$ and triples of ideals in $S'$, the action $T \acts H^n(\P^2)$ translates into three actions of $T$ on $H^{n_2}(\A^2)$, $H^{n_1}(\A^2)$ and $H^{n_0}(\A^2)$, respectively, induced by actions
\[
\begin{array}{cc}
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w_0 - w_2, w_1 - w_2) \rangle}y^\alpha , \\
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w_0 - w_1, w_2 - w_1) \rangle}y^\alpha , \\
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w_1 - w_0, w_2 - w_0) \rangle}y^\alpha ,
\end{array}
\]
respectively. Here it is important to note that the weight vector of the second action is such that $w_0 - w_1 < 0$ and $w_2 - w_1 > 0$, to the effect that the BB sink of points floating into a fixed point $M_{\Delta_1}$ from above is contained in $H^{n_1,\lin}(\A^2)$; and the weight vector of the third action is such that $w_1 - w_0 > 0$ and $w_2 - w_0 > 0$, to the effect that the BB sink of points floating into a fixed point $M_{\Delta_0}$ from above is contained in $H^{n_0,\punc}(\A^2)$. Both facts have been used in \cite{esBetti, esCells}, and are proved on a schematic level in \cite{bbAvecLaurent}. However, for the time being, we will only be using the first and the second of the three actions; we will return to the third action in Section \ref{sec:poincare}. The first action sends each polynomial $f = \sum_{\alpha \in \N^2}a_\alpha y^\alpha \in S'$ to $t.f = \sum_{\alpha \in \N^2}a_\alpha t^{\langle (w_0 - w_2, w_1 - w_2),\alpha \rangle}y^\alpha$. Let $a_\beta y^\beta$ be the \defn{weight-initial term of $f$}, i.e., the term for which the product $\langle (w_0 - w_2, w_1 - w_2),\beta \rangle$ is minimal. Our choice of $(w_0 - w_2, w_1 - w_2)$ immediately shows that the weight-initial term of $f$ is its lex-initial term.
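For instance, with $n = 2$ one may take $w = (-8, 2, 6)$, so that $(w_0 - w_2, w_1 - w_2) = (-14, -4)$. For an arbitrary polynomial such as $f = y_1^2 + 3y_1y_2 + 5y_2^3$, the pairings of this vector with the exponents $(2,0)$, $(1,1)$ and $(0,3)$ are
\[
-28, \qquad -18, \qquad -12 ,
\]
so the minimum is attained at $(2,0)$: the weight-initial term $y_1^2$ is indeed the lex-initial term of $f$.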
Under the $T$ action on $H^{n_2}(\A^2)$, an ideal $I_2$ is sent to
\[
t.I_2 = \left\langle \frac{t.f}{t^{\langle (w_0 - w_2, w_1 - w_2),\beta \rangle}} = \sum_{\alpha \in \N^2}a_\alpha t^{\langle (w_0 - w_2, w_1 - w_2),\alpha - \beta \rangle}y^\alpha : f \in I_2 \right\rangle
\]
In the limit as $t \to 0$, all summands $a_\alpha t^{\langle (w_0 - w_2, w_1 - w_2),\alpha - \beta \rangle}$ for which $\langle (w_0 - w_2, w_1 - w_2),\alpha \rangle > \langle (w_0 - w_2, w_1 - w_2),\beta \rangle$ get killed. Since these terms are the lex-trailing terms, the limit is the lexicographic Gr\"obner deformation. The BB sink of points $I_2 \in H^{n_2}(\A^2)$ floating into fixed point $M_{\Delta_2}$ therefore equals $H^{\Delta_2}_\lex(\A^2)$. The second action sends each polynomial $f$ to $t.f = \sum_{\alpha \in \N^2}a_\alpha t^{\langle (w_0 - w_1, w_2 - w_1),\alpha \rangle}y^\alpha$. The weight-initial term of $f$ is now the summand $a_\beta y^\beta$ for which the product $\langle (w_0 - w_1, w_2 - w_1),\beta \rangle$ is minimal. For a general $f \in S'$, the weight-initial term need not be the lex-initial term. We claim that if $f$ is an element of the reduced lexicographic Gr\"obner basis of an ideal $I_1 \in H^{n_1,\lin}(\A^2)$, then it is. Once this claim is proved, the same arguments as before show that the BB sink of points $I_1 \in H^{n_1,\lin}(\A^2)$ floating into $M_{\Delta_1}$ equals $H^{\Delta_1,\lin}_\lex(\A^2)$. Let us prove our claim in the most direct way, without reference to \cite{esCells, huibregtseEllingsrud, hilbertBurchMatrices}. It suffices to show that if $\IN(f) = y^\alpha$, then $f$ contains no term $a_\beta y^\beta$ such that $\beta_2 < \alpha_2$. We establish this by applying a series of deformations to $I_1$. Each will be a limit $\lim_{t \to 0} t.I_1$ under a torus action induced from $T \acts S'$ with a weight vector $u \in \Z^2$. The first action has weight vector $u := (-1,0)$.
Since $I_1$ is supported on the $y_1$-axis, the deformation $\lim_{t \to 0} t.I_1$ is an ideal defining a point in $H^{n_1,\punc}(\A^2)$ invariant under this torus action. However, the weight $u$ was chosen such that each $f = y^\alpha + \sum_{\beta \in \Delta_1, \beta < \alpha} a_\beta y^\beta$ appearing in the reduced lexicographic Gr\"obner basis of $I_1$ deforms to $\lim_{t \to 0}t.f = y^\alpha + \sum_{\beta_1 = \alpha_1, \beta_2 < \alpha_2} a_\beta y^\beta$. The limits of the polynomials $f$ form the reduced lexicographic Gr\"obner basis of the limiting ideal. The ideal spanned by them can only be supported at the origin if $a_\beta = 0$ for all $\beta$ such that $\beta_1 = \alpha_1$ and $\beta_2 < \alpha_2$. For defining the next action, we turn the weight vector $(-1,0)$ clockwise and rescale it so as to obtain $u \in \Z^2$ pointing slightly upward into the second quadrant of the plane. We stop turning when we first find an element from the reduced lexicographic Gr\"obner basis of $I_1$ having initial term $y^\alpha$ and a trailing term $a_\beta y^\beta$ such that $\langle u, \alpha - \beta \rangle = 0$. The deformation $\lim_{t \to 0} t.I_1$ then defines a point in $H^{n_1,\punc}(\A^2)$ invariant under this torus action. The reduced lexicographic Gr\"obner basis of the limiting ideal is formed by polynomials $\lim_{t \to 0} t.f$, where $f$ runs through the reduced lexicographic Gr\"obner basis of $I_1$. Once more we see that the ideal spanned by these polynomials can only be supported at the origin if $a_\beta = 0$ for all $\beta$ such that $\langle u, \alpha - \beta \rangle = 0$. We keep turning $u$ clockwise and deforming $I_1$, thus showing that more and more coefficients $a_\beta$ vanish. The arguments remain valid as long as $u$ stays in the interior of the second quadrant. This finishes the proof of the claim.
(iii) The action $T \acts H^n(\P^2)$ translates into actions on the three types of Hilbert schemes induced by
\[
\begin{array}{cc}
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w'_0 - w'_2, w'_1 - w'_2) \rangle}y^\alpha , \\
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w'_0 - w'_1, w'_2 - w'_1) \rangle}y^\alpha , \\
T \acts S' : & t.y^\alpha = t^{\langle \alpha, (w'_1 - w'_0, w'_2 - w'_0) \rangle}y^\alpha ,
\end{array}
\]
respectively. Only the last two of these actions are relevant here. The second action is of the same shape as the second action in (ii). The BB sink of points $I_1 \in H^{n_1,\lin}(\A^2)$ floating into $M_{\Delta_1}$ therefore equals $H^{\Delta_1,\lin}_\lex(\A^2)$. For proving that the BB sink of points $I_0 \in H^{n_0,\punc}(\A^2)$ floating into $M_{\Delta_0}$ equals $H^{\Delta_0,\punc}_\lex(\A^2)$, we show that if $f$ is an element of the reduced lexicographic Gr\"obner basis of an ideal $I_0 \in H^{n_0,\punc}(\A^2)$, then its initial term with respect to the weight $(w'_1 - w'_0, w'_2 - w'_0)$ is at the same time its lex-initial term. It suffices to show that if $\IN(f) = y^\alpha$, then $f$ contains no term $a_\beta y^\beta$ such that $\beta_2 < \alpha_2$ or $\beta_2 = \alpha_2$ and $\beta_1 < \alpha_1$. The non-existence of $a_\beta y^\beta$ such that $\beta_2 < \alpha_2$ follows from the fact that the ideal $I_0$ also defines a point in $H^{n_0,\lin}(\A^2)$. For showing the non-existence of $a_\beta y^\beta$ such that $\beta_2 = \alpha_2$ and $\beta_1 < \alpha_1$, we consider the torus action on $H^{n_0,\punc}(\A^2)$ induced from $T \acts S'$ with weight vector $u := (0,1)$. Since $I_0$ is supported at the origin, the deformation $\lim_{t \to 0} t.I_0$ is an ideal defining a point in $H^{n_0,\punc}(\A^2)$ invariant under this torus action. The polynomials $\lim_{t \to 0}t.f$, for all $f$ from the reduced lexicographic Gr\"obner basis of $I_0$, form the reduced lexicographic Gr\"obner basis of the limiting ideal.
The ideal spanned by them can only be supported at the origin if $a_\beta = 0$ for all $\beta$ such that $\beta_2 = \alpha_2$ and $\beta_1 < \alpha_1$.
\end{proof}
It is not possible to impose the conditions from Proposition \ref{pro:BBCells} (i) for one and the same weight vector $w = w'$. This is why Ellingsrud and Str\o mme's cellular decomposition is not a BB decomposition. The next two sections will be dedicated to a somewhat more sophisticated application of Bia\l ynicki-Birula theory to our setting.
\section{Some complements on Bia\l ynicki-Birula theory}
\label{sec:bbTheory}
The following proposition was communicated to the author by Allen Knutson.
\begin{pro}
\label{pro:allenBB}
Let $X$ be a complete complex variety with an ample line bundle $\L$ and $T \acts X$ an action with isolated fixed points. We denote by
\begin{equation*}
\begin{split}
X_v^\circ & := \{ x \in X : \lim_{t \to 0} t .x = v \} , \\
X^w_\circ & := \{ x \in X : \lim_{t \to \infty} t .x = w \}
\end{split}
\end{equation*}
the BB cells of points flowing into $v$ and $w$ from above and below, respectively, and by $\Phi(v)$ the $T$-weight of $\L |_v$. If $X_v^\circ \cap X^w_\circ$ is nonempty and $v \neq w$, then $\Phi(v) < \Phi(w)$.
\end{pro}
\begin{proof}
This statement and its proof are in the same spirit as \cite[Proposition 2.1]{allenSimplicialComplexesBB}. A general point $a \in X$ defines a morphism $f: T \to X: t \mapsto t.a$. Since the $T$-action is not assumed to be faithful, the map $f$ is generically $k : 1$ for some $k \geq 1$. The projective line $\P^1$ is a compactification of $T$; if $a \in X_v^\circ \cap X^w_\circ$, then the morphism $f$ extends to $g: \P^1 \to X$, sending $0$ and $\infty$ to $v$ and $w$, respectively. Completeness of $X$ implies that the extension of $f$ is unique. The pullback of $\L$ along $g$ is a line bundle $\L'$ on $\P^1$ whose only poles and zeros are found at 0 and $\infty$.
Their respective degrees are $\Phi(v)$ and $\Phi(w)$, and
\[
\deg(\L') = \Phi(w) - \Phi(v) .
\]
The pushforward of $\L'$ along $g$ shows that the degree of $\L'$ equals $k$ times the degree of the restriction of $\L$ to the image of $g$. The line bundle $\L$ being ample, its restriction to the image of $g$ has positive degree. Thus the inequality $\Phi(w) - \Phi(v) > 0$ follows.
\end{proof}
\begin{lmm}
\label{lmm:closure}
In the setting of Proposition \ref{pro:allenBB}, let $a \in X_v := \overline{X_v^\circ}$ and $p := \lim_{t \to 0} t. a$. If $v \neq p$, then $\Phi(v) < \Phi(p)$.
\end{lmm}
\begin{proof}
The line bundle $\L$ pulls back from $X$ to the complete variety $X_v$. In the special case $\L = \O(1)$, the statement of the lemma is part of \cite[Corollary 2.1]{allenSimplicialComplexesBB}. The proof of the cited corollary is based on \cite[Lemma 2.1]{allenSimplicialComplexesBB}, which states a number of facts about the line bundle $\O(1)$ on $X_v$. These statements are proved by pulling back the line bundle to $\P^1$ by a morphism $g : \P^1 \to \P^1$ as in the proof of Proposition \ref{pro:allenBB}. The proof of the lemma is based on the classification of equivariant line bundles on $\P^1$, which is given by the degree and the torus weight on the fiber over 0. This classification, however, doesn't care if the line bundle on $\P^1$ is the pullback of $\O(1)$ or of a more general $\L$. The proof of \cite[Corollary 2.1]{allenSimplicialComplexesBB} therefore works for an arbitrary line bundle $\L$ on $X_v$.
\end{proof}
\begin{thm}
\label{thm:intersectionOfClosures}
In the setting of Proposition \ref{pro:allenBB}, $X_v \cap X^w \neq \emptyset$ only if $v = w$ or $\Phi(v) < \Phi(w)$.
\end{thm}
\begin{proof}
Consider a point $a \in X_v \cap X^w$ and its limits $p := \lim_{t \to 0} t.a$ and $q := \lim_{t \to \infty} t.a$.
Lemma \ref{lmm:closure}, its analogue for limits as $t \to \infty$, and Proposition \ref{pro:allenBB} show that
\[
\Phi(v) \leq \Phi(p) \leq \Phi(q) \leq \Phi(w) ,
\]
with strict inequality if any two of the fixed points don't agree.
\end{proof}
\begin{cor}
\label{cor:poincare}
In the situation of Theorem \ref{thm:intersectionOfClosures}, pick all $X_v$ of a given dimension $m-k$, where $m = \dim(X)$, thus obtaining a basis of $H^k(X)$, and pick all $X^w$ of dimension $k$, thus obtaining a basis of $H^{m-k}(X)$. Then the matrix of the Poincar\'e pairing $H^k(X) \times H^{m-k}(X) \to \Z$ with respect to the two bases, ordered in the weight-increasing way, is upper triangular with 1s on the diagonal.
\end{cor}
\begin{proof}
Completeness of $X$ implies that the Bia\l ynicki-Birula strata $X_v^\circ$, with $v$ running through the set of fixed points, form a cellular decomposition of $X$ \cite[Theorem 4.4]{bialynickiBirula}, as do the strata $X^w_\circ$. The cohomology classes of their closures are therefore a basis of the module $H^\star(X)$. The two bases of $H^k(X)$ and $H^{m-k}(X)$, respectively, are obtained by picking elements from the appropriate graded pieces. Upper triangularity of the matrix follows from Theorem \ref{thm:intersectionOfClosures}. Since the Poincar\'e pairing is perfect, the diagonal entries of the matrix are 1.
\end{proof}
\begin{rmk}
Another statement from Bia\l ynicki-Birula's article allows us to understand the 1s on the diagonal in a very direct way. The tangent spaces $T_v^+(X) := T_v(X_v^\circ)$ and $T_v^-(X) := T_v(X^v_\circ)$ are called the \defn{positive} and \defn{negative weight spaces}, respectively. In the setting of Proposition \ref{pro:allenBB}, $\Phi(v) = \Phi(w)$ if, and only if, $v = w$, in which case the tangent space of $X$ at $v$ decomposes into
\[
T_v(X) = T_v^+(X) \oplus T_v^-(X)
\]
by \cite[Theorems 4.1 and 4.4]{bialynickiBirula}. In other words, the closures $X_v$ and $X^v$ meet transversally in the fixed point $v$.
Since the proof of \cite[Theorem 4.2]{bialynickiBirula} also shows that $v$ is the only point in $X_v \cap X^v$, the matrix of the Poincar\'e pairing has a 1 in row $X_v$ and column $X^v$. \end{rmk} The author thanks Bernd Sturmfels for communicating the following example and generously providing a folded napkin akin to the polytope from Figure \ref{fig:polytope}. The reason why this example is discussed in so much detail here is its role as a model case for the Hilbert scheme of points in the projective plane, to be discussed in the next section. \begin{center} \begin{figure}[ht] \begin{picture}(170,200) \put(40,10){\line(-1,3){30}} \put(40,10){\line(4,1){40}} \put(80,20){\line(1,1){40}} \put(120,60){\line(0,1){60}} \put(10,100){\line(1,2){43.4}} \put(120,120){\line(-1,1){66.8}} \put(35,2){\footnotesize $v_0$} \put(80,12){\footnotesize $v_1$} \put(124,55){\footnotesize $v_2$} \put(-2,99){\footnotesize $v_3$} \put(124,120){\footnotesize $v_4$} \put(50,192){\footnotesize $v_5$} \multiput(80,20)(-4,0){7}{\line(-1,0){2}} \multiput(120,60)(-4,0){7}{\line(-1,0){2}} \multiput(120,120)(-4,0){7}{\line(-1,0){2}} \multiput(10,100)(4,0){7}{\line(1,0){2}} \put(160,100){\vector(0,-1){40}} \put(90,160){\footnotesize $P \subseteq M \otimes_\Z \R$} \put(170,80){\footnotesize $\lambda \in M^\star$} \color{red} \thicklines \put(10,100){\qbezier(15,0)(15,9)(6.71,13.42) \line(1,2){6.8}} \put(26,110){\footnotesize $(X_P)_{v_3}^\circ$} \put(78,70){\footnotesize $(X_P)_{v_2}^\circ$} \put(40,10){\qbezier(-2.37,7.115)(5,8.5)(7.275,1.82) \line(-1,3){2.4}} \put(40,10){\line(4,1){7.45}} \put(80,20){\qbezier(-15,0)(-15,11)(-5.5,14.5) \qbezier(-5.5,14.5)(2.1,17.3)(10.61,10.61) \line(1,1){10.8} } \put(120,120){\qbezier(-15,0)(-15,6)(-10.61,10.61) \line(-1,1){10.8}} \put(120,60){\qbezier(-15,0)(-15,15)(0,15) \line(0,1){15}} \color{blue} \put(10,100){\qbezier(15,0)(14,-11)(4.74,-14.23) \line(1,-3){4.8}} \put(26,85){\footnotesize $(X_P)^{v_3}_\circ$} \put(78,45){\footnotesize $(X_P)^{v_2}_\circ$} 
\put(80,20){\qbezier(-15,0)(-15,-3.64)(-14.55,-3.64)
\line(-4,-1){14.9}}
\put(120,60){\qbezier(-15,0)(-15,-6)(-10.61,-10.61)
\line(-1,-1){10.8}}
\put(120,120){\qbezier(-15,0)(-15,-15)(0,-15)
\line(0,-1){15}}
\put(53.2,186.8){\qbezier(-6.71,-13.42)(3,-18)(10.61,-10.61)
\line(-1,-2){6.8}}
\put(53.2,186.8){\line(1,-1){10.8}}
\put(40,10){\circle*{3}}
\color{red}
\put(53.3,186.7){\circle*{3}}
\end{picture}
\caption{A polytope $P$; a linear form $\lambda$ defining a direction of flow; fixed points $v_i$ ordered according to their weight; points in {\color{red}$(X_P)_{v_i}^\circ$} and {\color{blue}$(X_P)^{v_i}_\circ$} are above and below the dashed lines, respectively; ${\color{red}(X_P)_{v_i}} \cap {\color{blue}(X_P)^{v_j}} \neq \emptyset$ only if $i \leq j$.}
\label{fig:polytope}
\end{figure}
\end{center}
\begin{ex}
\label{ex:bernd}
Let $M \cong \Z^d$ be a lattice. A full-dimensional smooth lattice polytope $P \subseteq M \otimes_\Z \R$, as pictured in Figure \ref{fig:polytope}, defines a smooth projective variety $X_P$. Topologically, $X_P$ is the closure of the image of $\TT \hookrightarrow \P^{s-1}$ given by $\tt \mapsto (\tt^\alpha)_{\alpha \in P \cap M}$, where $s := \#(P \cap M)$. In particular, each vertex $v \in P$ defines a point $v \in X_P$. Schematically, $X_P$ is embedded into projective space $\P^{s-1}$ whose homogeneous coordinate ring is generated by indeterminates $x_\alpha$, one for each $\alpha \in P \cap M$. An open affine cover of $X_P$ is given by schemes $U_v := \Spec S_v$, one for each vertex $v \in P$, where
\[
S_v := \CC\left[\frac{x_\alpha}{x_v} : \alpha \in P \cap M\right] / I_v ,
\]
the binomial ideal $I_v$ implementing all $\Z$-linear relations between lattice vectors $\alpha - v$, for $\alpha \in P \cap M$. The $d$-torus $\TT = \Spec \CC[M]$ naturally acts on each coordinate ring, $\TT \acts S_v : \tt.\frac{x_\alpha}{x_v} = \tt^{v-\alpha} \frac{x_\alpha}{x_v}$.
The induced actions on patches $U_v$ are compatible with each other, hence glue to a $\TT$-action on the entire variety $X_P$. The orbit-cone correspondence \cite[Theorem 3.2.6]{coxLittleSchenck} implies that vertices $v \in P$ correspond to maximal cones $\sigma_v$ in the dual fan $\Sigma_P$ of $P$, which in turn correspond to $\TT$-fixed points in $X_P$. We therefore denote by $v \in X_P$ the fixed point corresponding to vertex $v \in P$. We pull back the ample line bundle $\O(1)$ on $\P^{s-1}$ to an ample line bundle $\L$ on $X_P$. Let's compute, for each fixed point $v \in X_P$, the weight of the $\TT$-representation $\L|_v$. We denote by $E_v$ the set of $\alpha \in P \cap M$ that are reached first when walking away from $v$ on edges of $P$. Smoothness of $P$ says that for each $v$, the set $\{ \alpha - v : \alpha \in E_v \}$ is a $\Z$-basis of $M$. Therefore, $U_v = \Spec S'_v \cong \A^d$, where
\[
S'_v = \CC\left[\frac{x_\alpha}{x_v} : \alpha \in E_v\right] .
\]
Locally around $v$, the sheaf $\L$ is isomorphic to the restriction of $\O_{\P^{s-1}}(1)$ to $U_v$, whose module of global sections is $\Gamma(U_v,\O(1)) = x_v \cdot S'_v$. The $\TT$-action on that module is given by $\tt.(x_v \cdot\frac{x_\alpha}{x_v}) = \tt^{-\alpha} (x_v \cdot \frac{x_\alpha}{x_v})$. This formula holds for all $\alpha \in E_v$ and also for $\alpha = v$. In particular, the $\TT$-action on sections of $\L$ of shape ``$x_v$ times a constant'' is given by multiplication by $\tt^{-v}$. The fiber $\L|_v \cong \CC$ contains precisely those elements, so the $\TT$-action on $\L|_v$ is multiplication by $\tt^{-v}$. In other words, $\L|_v$ has $\TT$-weight $-v$. Consider a linear form $\lambda: M \to \Z$ separating the vertices of $P$, i.e., $\langle \lambda,v \rangle \neq \langle \lambda,w \rangle$ for vertices $v \neq w$. The element $\lambda \in M^\star$ corresponds to an embedding $T \hookrightarrow \TT$, hence an action $T \acts X_P$.
The general choice of $\lambda$ implies that $T$-fixed points of $X_P$ correspond to vertices of $P$. Restricting the $\TT$-action to the subtorus $T$ amounts to replacing $\tt^v$ by $t^{\langle \lambda,v \rangle}$. The $T$-weight on $\L|_v$ is therefore $\Phi(v) = -\langle \lambda,v \rangle$. Thus the assumptions of Proposition \ref{pro:allenBB} are satisfied. The Bia\l ynicki-Birula cells $(X_P)_v^\circ$ and $(X_P)^v_\circ$ have explicit characterizations. Both types of cells are contained in the open neighborhood $U_v$ of $v$. The torus action on closed points, i.e., maximal ideals in $S'_v$, is given by \[ t.\left\langle \frac{x_\alpha}{x_v} - a_\alpha : \alpha \in E_v \right\rangle = \left\langle t^{\langle \lambda, v - \alpha \rangle} \frac{x_\alpha}{x_v} - a_\alpha : \alpha \in E_v \right\rangle = \left\langle \frac{x_\alpha}{x_v} - t^{\langle \lambda, \alpha-v \rangle} a_\alpha : \alpha \in E_v \right\rangle . \] The limit as $t \to 0$ of this ideal exists in $S'_v$ if, and only if, $a_\alpha = 0$ for all $\alpha \in E_v$ such that $\langle \lambda, \alpha-v \rangle < 0$. Thus \[ (X_P)_v^\circ = \Spec S'_v / \left\langle \frac{x_\alpha}{x_v} : \langle\lambda, \alpha-v \rangle < 0 \right\rangle , \] and similarly for $(X_P)^v_\circ$, whose ideal is defined by the property $\langle\lambda, \alpha-v \rangle > 0$. Accordingly, Figure \ref{fig:polytope} shows points in $(X_P)_v^\circ$ lying above the dashed line through $v$ and points in $(X_P)^v_\circ$ below them. The figure illustrates the statement of Theorem \ref{thm:intersectionOfClosures} and Corollary \ref{cor:poincare}: \begin{itemize} \item $(X_P)_{v_i}$ only meets $(X_P)^{v_j}$ if $v_i = v_j$ or $\Phi(v_i) < \Phi(v_j)$, and \item $(X_P)_{v_i}$ and $(X_P)^{v_i}$ meet transversally in $v_i$, the tangent space decomposing into vectors above and below the dashed line through $v_i$. 
\end{itemize}
\end{ex}
\section{Torus weights of triples of monomial ideals}
\label{sec:poincare}
Now we apply the findings of the previous section to the scheme $H^n(\P^2)$. For constructing a line bundle on that scheme, we embed it into a suitable projective space and pull back $\O(1)$. Remember that statements (ii) and (iii) from Proposition \ref{pro:BBCells} only characterized the BB sinks of two special types of torus fixed points, $(M_{\Delta_2},M_{\Delta_1},\langle 1 \rangle)$ and $(\langle 1 \rangle,M_{\Delta_1},M_{\Delta_0})$, respectively, in terms of lexicographic Gr\"obner basins. The technique for doing so was based on a study of weights $w$ and $w'$. We will be investigating more general fixed points $(M_{\Delta_2},M_{\Delta_1},M_{\Delta_0})$, using more general weights $w''$. We will always choose $w''$ general enough so that the only fixed points of the induced action $T \acts H^n(\P^2)$ are monomial ideals in $S$.
\begin{lmm}
\label{lmm:closedImmersion}
Consider the closed immersion
\[
\iota: H^n(\P^2) \to G^n_{S_d} \to \P^N ,
\]
where
\begin{itemize}
\item the first map is the closed immersion into the Grassmannian of rank $n$ quotients of the $d$-th graded piece of the polynomial ring $S$, for some $d \geq n$, and
\item the second map is the Pl\"ucker embedding.
\end{itemize}
Let $\L$ be the pullback of the line bundle $\O(1)$ on $\P^N$ by that immersion. Let $w'' \in \Z^3$ be a weight such that
\begin{itemize}
\item $w''_0 + w''_1 + w''_2 = 0$,
\item $w''_0 < w''_1 < w''_2$, and
\item the only fixed points of the induced action $T \acts H^n(\P^2)$ are monomial ideals in $S$, identified with triples of monomial ideals in $S'$.
\end{itemize}
We denote by $\Phi_{w''}(\Delta_2,\Delta_1,\Delta_0) \in \Z$ the induced $T$-weight of $\L |_{(M_{\Delta_2},M_{\Delta_1},M_{\Delta_0})}$.
Then \[ \Phi_{w''}(\Delta_2,\Delta_1,\Delta_0) = - \langle {w''}, \Delta_{<d} \rangle - \langle {w''}, \iota_2(\Delta_2) \rangle - \langle {w''}, \iota_1(\Delta_1) \rangle - \langle {w''}, \iota_0(\Delta_0) \rangle , \] where $\Delta_{<d}$ is the simplex in $\N^3$ of all elements of degrees $<d$ and \[ \iota_j : \N^2 \to \N^3 : (\alpha_1,\alpha_2) \mapsto \begin{cases} (\alpha_1,\alpha_2,d-\alpha_1-\alpha_2) & \text{if } j = 2 , \\ (\alpha_1,d-\alpha_1-\alpha_2,\alpha_2) & \text{if } j = 1 , \\ (d-\alpha_1-\alpha_2,\alpha_1,\alpha_2) & \text{if } j = 0 . \end{cases} \] \end{lmm} Figure \ref{fig:triplesStandardSets} illustrates how the immersions $\iota_j$ identify triples of monomial ideals in $S'$ with special types of monomial ideals in $S$. It also shows the range from which the weights $w''$ will be taken, along with the particular weights $w$ and $w'$ from Proposition \ref{pro:BBCells}. All vectors are drawn such that their tails sit in the barycenter of the simplex $\left\{\alpha \in \N^3 : |\alpha| = d\right\}$.
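For instance (a sanity check of our own, not needed in what follows), the term $\langle w'', \Delta_{<d} \rangle$ always vanishes: the degree-$k$ slice of $\Delta_{<d}$ is invariant under permuting the three coordinates, so the sum of its elements is a multiple of $(1,1,1)$, and $\langle w'', (1,1,1) \rangle = w''_0 + w''_1 + w''_2 = 0$. Taking $d = 3$ and the triple $\Delta_2 = \{(0,0),(1,0)\}$, $\Delta_1 = \{(0,0)\}$, $\Delta_0 = \emptyset$, so that $n = 3 \leq d$, we get $\iota_2(\Delta_2) = \{(0,0,3),(1,0,2)\}$ and $\iota_1(\Delta_1) = \{(0,3,0)\}$, hence \[ \Phi_{w''}(\Delta_2,\Delta_1,\Delta_0) = - \bigl( 3w''_2 + (w''_0 + 2w''_2) + 3w''_1 \bigr) = -2w''_1 - 4w''_2 , \] where the last equality uses $w''_0 = -w''_1 - w''_2$.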
\begin{center} \begin{figure}[ht] \begin{picture}(425,195) \put(290,60){\line(-3,-2){100}} \put(290,60){\line(1,0){135}} \put(290,60){\line(0,1){135}} \put(290,180){\line(1,-1){120}} \put(290,180){\line(-90,-180){90}} \put(200,0){\line(210,60){210}} \put(186,0){\footnotesize $0$} \put(421,67){\footnotesize $1$} \put(280,190){\footnotesize $2$} \color{ForestGreen} \thicklines \put(30,150){ \put(-30,10){\footnotesize $\Delta_2 = $} \multiput(0,0)(10,0){2}{\line(0,1){50}} \multiput(20,0)(10,0){2}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){30}} \multiput(0,20)(0,10){4}{\line(1,0){10}} } \put(30,80){ \put(-30,10){\footnotesize $\Delta_1 = $} \multiput(0,0)(10,0){3}{\line(0,1){40}} \multiput(0,0)(0,10){2}{\line(1,0){40}} \multiput(30,0)(10,0){2}{\line(0,1){10}} \multiput(0,20)(0,10){3}{\line(1,0){20}} } \put(30,10){ \put(-30,10){\footnotesize $\Delta_0 = $} \multiput(0,0)(10,0){2}{\line(0,1){30}} \multiput(20,0)(10,0){2}{\line(0,1){20}} \multiput(40,0)(10,0){2}{\line(0,1){10}} \multiput(0,0)(0,10){2}{\line(1,0){50}} \put(0,20){\line(1,0){30}} \put(0,30){\line(1,0){10}} } \put(290,180){ \multiput(0,0)(-3,-6){2}{\line(1,-1){25}} \multiput(-6,-12)(-3,-6){2}{\line(1,-1){5}} \multiput(0,0)(5,-5){2}{\line(-1,-2){9}} \multiput(10,-10)(5,-5){4}{\line(-1,-2){3}} } \put(410,60){ \multiput(0,0)(-7,-2){3}{\line(-1,1){20}} \multiput(0,0)(-5,5){2}{\line(-21,-6){28}} \multiput(-21,-6)(-7,-2){2}{\line(-1,1){5}} \multiput(-10,10)(-5,5){3}{\line(-21,-6){14}} } \put(200,0){ \multiput(36,10.29)(9,2.57){2}{\line(1,2){3.7}} \multiput(18,5.14)(9,2.57){2}{\line(1,2){7.4}} \multiput(0,0)(9,2.57){2}{\line(1,2){11.1}} \multiput(0,0)(3.7,7.4){2}{\line(7,2){45}} \put(7.4,14.8){\line(9,2.57){27}} \put(11.1,22.2){\line(9,2.57){9}} } \put(150,40){\footnotesize $\left\{\begin{array}{c} \alpha \in \Delta : \\ |\alpha| = d \end{array}\right\} = $} \thinlines \color{blue} \put(300,80){\vector(80,180){40}} \put(328,165){\footnotesize $w$} \put(300,80){\vector(100,180){48}} 
\put(348,156){\footnotesize $w'$} \put(300,80){\vector(-110,120){66.5}} \put(300,80){\vector(210,70){93.5}} \put(232.9,153.2){\circle*{3}} \put(232.9,153.2){\qbezier(0,0)(12,11)(24,16.6)} \put(394.5,111.5){\circle*{3}} \put(394.5,111.5){\qbezier(0,0)(-4.67,14)(-12.5,25)} \put(220,170){\footnotesize range...} \put(400,115){\footnotesize ...of $w''$} \color{red} \put(300,80){\circle*{4}} \put(300,80){\vector(-90,-180){20}} \put(300,80){\vector(90,180){20}} \put(300,80){\vector(-120,120){30}} \put(300,80){\vector(120,-120){30}} \put(300,80){\vector(210,60){40}} \put(300,80){\vector(-210,-60){40}} \put(285,38){\footnotesize $e_0 - e_2$} \put(335,82){\footnotesize $e_1 - e_0$} \put(265,115){\footnotesize $e_2 - e_1$} \end{picture} \caption{Identification of triples of monomial ideals in $S'$ and monomial ideals in $S$, and the weights $w$ and $w'$} \label{fig:triplesStandardSets} \end{figure} \end{center} \begin{proof}[Proof of Lemma \ref{lmm:closedImmersion}] The first of the two immersions is described in \cite[\S VI.1]{bayerThesis}, \cite[Proposition C.30]{IarrobinoKanev} and \cite[Theorem 4.4]{multigradedHilbertSchemes} for arbitrary Grothendieck Hilbert schemes. The scheme $H^n(\P^2)$ is a special case in which the immersion is described as follows: Take a point $a$ in $H^n(\P^2)$, say, an $A$-valued one, where $A$ is any $\CC$-algebra. This point is a homogeneous ideal $I \subseteq S \otimes_\CC A$ such that for large $d$, the quotient $(S \otimes_\CC A)_d / I_d$ is a locally free $A$-module of rank $n$. After replacing $I$ by its saturation, we may assume that the quotient is locally free of rank $n$ for all $d \geq n$. The last inequality is a consequence of the Gotzmann number of the constant Hilbert polynomial $n$ being equal to $n$. After passing to a Zariski-open subset of $\Spec A$, we may assume that $(S \otimes_\CC A)_d / I_d$ is free of rank $n$.
The epimorphism of $A$-modules $(S \otimes_\CC A)_n \to (S \otimes_\CC A)_n / I_n$ is therefore represented by a matrix $C$ with entries in $A$ with $n$ rows and ${n + 2 \choose 2}$ columns. The columns represent the images of the monomials $y^\alpha$ of total degree $n$ in $(S \otimes_\CC A)_n / I_n$, written in terms of a basis of that quotient. After passing to a smaller Zariski-open subset of $\Spec A$, we may assume a basis of $(S \otimes_\CC A)_n / I_n$ to be given by all $y^\beta$ of total degree $n$ lying in the standard set $\Delta$ of a monomial ideal $M_\Delta \subseteq S$. The matrix $C$ therefore contains an identity block indexed by columns $y^\beta$, for all $\beta \in \Delta$ of total degree $n$. For each $y^\alpha \in M_\Delta$ of total degree $n$, there exist unique $a^\alpha_\beta \in A$ such that \[ y^\alpha = \sum_{\beta \in \Delta, |\beta| = n} a^\alpha_\beta y^\beta \in (S \otimes_\CC A)_n / I_n , \] or equivalently, \[ f_\alpha := y^\alpha - \sum_{\beta \in \Delta, |\beta| = n} a^\alpha_\beta y^\beta \in I_n . \] Column $y^\alpha$ of matrix $C$ therefore has entries $a^\alpha_\beta \in A$ if $\alpha \in M_\Delta$, and $\delta^\alpha_\beta$ otherwise. Upon reordering the columns of $C$, we obtain $C = \left( \begin{array}{cc} E & A \end{array} \right)$, where $E$ is the identity matrix and $A$ is the matrix of coefficients $a^\alpha_\beta$ of the polynomials $f_\alpha$. The matrix $C$ defines a point in the Grassmannian $G^n_{S_n}$.
The cited results from \cite{bayerThesis, IarrobinoKanev, multigradedHilbertSchemes} say that assigning to point $a \in H^n(\P^2)$ that point in the Grassmannian defines a closed immersion \[ H^n(\P^2) \to G^n_{S_n} , \] and that we may play the same game in any degree $d \geq n$: For each $y^\alpha \in M$ of total degree $d$, find unique $b^\alpha_\beta \in A$ such that \[ f_\alpha := y^\alpha - \sum_{\beta \in \Delta, |\beta| = d} b^\alpha_\beta y^\beta \in I_d ; \] write their coefficients into a matrix $D = \left( \begin{array}{cc} E & B \end{array} \right)$; get a map \[ H^n(\P^2) \to G^n_{S_d} . \] This map is a closed immersion factoring through the first closed immersion. The Pl\"ucker embedding then sends the point represented by matrix $D$ to the point represented by a one-row matrix whose entries are the maximal minors of the matrix $D$. For tracing the effect of the action $T \acts H^n(\P^2)$ on that image point, we first observe that \[ \frac{t.f_\alpha}{t^{\langle w, \alpha \rangle}} = y^\alpha - \sum_{\beta \in \Delta, |\beta| = d} t^{\langle w, \beta-\alpha \rangle} b^\alpha_\beta y^\beta \in I_d , \] which has the entry $b^\alpha_\beta$ of matrix $D$ replaced by $t^{\langle w, \beta-\alpha \rangle} b^\alpha_\beta$. We may multiply the entire $\alpha$th row of the rescaled matrix with $t^{\langle w, \alpha \rangle}$ without changing its point in the Grassmannian. The $T$-action may therefore be understood to just scale column $y^\beta$ of $D$ by the factor $t^{\langle w, \beta \rangle}$. After applying the Pl\"ucker embedding, the action therefore sends a point with homogeneous coordinates $(\det(C'_D))_D$ to the point with homogeneous coordinates $(t^{\langle w, D \rangle}\det(C'_D))_D$, where $\langle w, D \rangle := \sum_{\beta \in D}\langle w, \beta \rangle$. 
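By way of a toy example (ours, and not used later): for $n = 1$, the matrix $C$ is the single row $(c_0\ c_1\ c_2)$ recording the images of $y_0, y_1, y_2$ in the rank-$1$ quotient, the Grassmannian is $G^1_{S_1} \simeq \P^2$, and the Pl\"ucker embedding is the identity. The column scaling just described reads \[ t.(c_0 : c_1 : c_2) = \bigl( t^{\langle w, e_0 \rangle} c_0 : t^{\langle w, e_1 \rangle} c_1 : t^{\langle w, e_2 \rangle} c_2 \bigr) , \] so the fixed points are the three coordinate points, i.e., the monomial ideals $\langle y_1,y_2 \rangle$, $\langle y_0,y_2 \rangle$ and $\langle y_0,y_1 \rangle$, and the Pl\"ucker coordinate at the fixed point indexed by $e_i$ is scaled by the factor $t^{\langle w, e_i \rangle}$.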
Locally around the $T$-fixed point $M_\Delta \subseteq S$, projective space $\P^N$ is an affine space with coordinate ring \[ S_\Delta := \CC\left[\frac{x_E}{x_\Delta} : E \subseteq \{\alpha \in \N^3 : |\alpha| = d\}, |E| = n, E \neq \Delta \right] . \] We have just shown that the action \[ T \acts S_\Delta : t.\frac{x_E}{x_\Delta} := t^{-\langle w, E \rangle + \langle w, \Delta \rangle}\frac{x_E}{x_\Delta} \] induces an action on projective space equivariant with respect to the immersion $H^n(\P^2) \hookrightarrow \P^N$. The same arguments as in Example \ref{ex:bernd} show that the stalk $\O(1)|_{M_\Delta}$ of line bundle $\O(1)$ on $\P^N$ has $T$-weight $-\langle w, \Delta \rangle$. Therefore, so does the stalk $\L|_{M_\Delta}$ of line bundle $\L := \iota^\star \O(1)$ on $H^n(\P^2)$. The statement of the lemma is now just a reformulation of this formula in terms of triples of monomial ideals in $S'$. \end{proof} In the proof of Theorem \ref{thm:upperTriangularity}, we will be using torus actions $T \acts S'$ with general weights $u,v \in \Z^2$ such that $u_1,u_2 < 0$ and $v_1,v_2 > 0$, respectively, and the induced BB decompositions \begin{equation*} \begin{split} H^n(\A^2) & = \coprod_{|\Delta| = n} H^{\Delta}_u(\A^2) , \\ H^{n,\punc}(\A^2) & = \coprod_{|\Delta| = n} H^{\Delta,\punc}_v(\A^2) . \end{split} \end{equation*} The respective cells parametrize ideals $I \subseteq S'$ floating into $M_\Delta$ from above under the action induced by $T \acts S'$ with weight vectors $u$ and $v$, respectively. When using weights different from the ones considered by Ellingsrud and Str\o mme, the structure of the BB sinks is in general hard to determine \cite{constantinescu}. However, it turns out that we only have to work with very specific types of monomial ideals. \begin{dfn} \label{dfn:genericMonomialIdeals} The schemes $H^n(\A^2)$ and $H^{n,\punc}(\A^2)$ are smooth and irreducible of dimensions $2n$ and $n-1$, respectively \cite{Fogarty_smoothness,briancon}.
The above-displayed BB decompositions therefore contain unique members of maximal dimensions $2n$ and $n-1$, respectively. We call the corresponding torus fixed points the \defn{generic monomial ideals} in $H^n(\A^2)$ and $H^{n,\punc}(\A^2)$ with respect to the weights $u$ and $v$, respectively. \end{dfn} The shape of generic standard sets for a given weight shall be specified in the Appendix. \begin{proof}[Proof of Theorem \ref{thm:upperTriangularity}] Under our identification of ideals $I = I_2 \cap I_1 \cap I_0$ in $S$ with triples $(I_2,I_1,I_0)$ of ideals in $S'$, the BB sinks of an action $T \acts H^n(\P^2)$ inherited from $T \acts S$ with a general weight $w''$ take the shape \[ \begin{array}{ccl} \BB(w'')_{(\Delta_2, \Delta_1, \Delta_0)}^\circ & := & \left\{ \begin{array}{c} (I_2,I_1,I_0) \in H^{n_2}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0,\punc}(\A^2) : \\ \lim_{t \to 0} t._{w''} (I_2,I_1,I_0) = (M_{\Delta_2},M_{\Delta_1},M_{\Delta_0}) \end{array} \right\} \\ & = & H^{\Delta_2}_{w''(2)}(\A^2) \times H^{\Delta_1,\lin}_\lex(\A^2) \times H^{\Delta_0,\punc}_{w''(0)}(\A^2) , \\ \BB(w'')^{(\Delta_2, \Delta_1, \Delta_0)}_\circ & := & \left\{ \begin{array}{c} (I_2,I_1,I_0) \in H^{n_2,\punc}(\A^2) \times H^{n_1,\lin}(\A^2) \times H^{n_0}(\A^2) : \\ \lim_{t \to \infty} t._{w''} (I_2,I_1,I_0) = (M_{\Delta_2},M_{\Delta_1},M_{\Delta_0}) \end{array} \right\} \\ & \simeq & H^{\Delta_0}_{-w''(0)}(\A^2) \times H^{(\Delta_1)^t,\lin}_\lex(\A^2) \times H^{\Delta_2,\punc}_{-w''(2)}(\A^2) . \end{array} \] Here and in what follows we use the notation $w''(2) := (w''_0 - w''_2, w''_1 - w''_2)$ and analogues for indices $1$ and $0$. As for the middle factor in the first formula, we might have used the weight $w''(1)$ as a subscript; however, as was shown in the proof of Proposition \ref{pro:BBCells}, the weight $w''(1)$ plus the fact that we consider ideals supported in $\V(y_2)$ implies that the BB sink is the same thing as the lexicographic Gr\"obner basin.
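Note, for orientation, that these weights are of the two types introduced before Definition \ref{dfn:genericMonomialIdeals}: since $w''_0 < w''_1 < w''_2$, we have \[ w''(2) = (w''_0 - w''_2,\, w''_1 - w''_2) \in \Z_{<0}^2 \qquad \text{and, with the analogous convention,} \qquad w''(0) = (w''_1 - w''_0,\, w''_2 - w''_0) \in \Z_{>0}^2 , \] so $w''(2)$ is a weight of the type $u$ and $w''(0)$ is a weight of the type $v$. The first and third factors of the displayed BB sinks are therefore exactly of the shape covered by that definition.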
As for the middle factor in the second formula, we used the identity \[ H^{\Delta'_1,\lin}_{-w''(1)}(\A^2) = H^{(\Delta'_1)^t,\lin}_\lex(\A^2) \] coming from the fact that the weight $-w''(1)$ is positive in the first component and negative in the second component. Moreover, in the second of the above-displayed formul\ae\ we reversed the order of the three factors, so as to get them into our usual ``plane--line--point'' shape. Take two homology classes such that $[\Delta_2, \Delta_1, \Delta_0] \cdot [\Delta'_2, \Delta'_1, \Delta'_0] \neq 0$; then $(\Delta_2, \Delta_1, \Delta_0) \cap (\Delta'_2, \Delta'_1, \Delta'_0) \neq \emptyset$. Remember that the last two varieties are not closures of BB cells. The idea of our proof is to embed them into closures of BB cells and use those ambient BB cells as approximations of the varieties we wish to study. We use a number of weights $w''$ for deriving inequalities on the standard sets $\Delta_i,\Delta'_j$. Independently of the weight chosen, the inclusions \[ \begin{array}{cccccc} H^{\Delta_2}_\lex(\A^2) & \subseteq & H^{\Gamma_2}_{w''(2)}(\A^2) , & H^{\Delta_0,\punc}_\lex(\A^2) & \subseteq & H^{\Gamma_0}_{w''(0)}(\A^2) , \\ H^{\Delta'_2}_\lex(\A^2) & \subseteq & H^{\Gamma'_2}_{-w''(0)}(\A^2) , & H^{\Delta'_0,\punc}_\lex(\A^2) & \subseteq & H^{\Gamma'_0}_{-w''(2)}(\A^2) , \\ \end{array} \] hold true, where $M_{\Gamma_i}$ denotes the generic monomial ideals for the respective weights. Therefore, \[ \begin{array}{ccccc} (\Delta_2, \Delta_1, \Delta_0) & \subseteq & \overline{ H^{\Gamma_2}_{w''(2)}(\A^2) \times H^{\Delta_1,\lin}_{w''(1)}(\A^2) \times H^{\Gamma_0,\punc}_{w''(0)}(\A^2) } & = & \BB(w'')_{(\Gamma_2, \Delta_1, \Gamma_0)} , \\ (\Delta'_2, \Delta'_1, \Delta'_0) & \subseteq & \overline{ H^{\Gamma'_2}_{-w''(0)}(\A^2) \times H^{\Delta'_1,\lin}_{w''(1)}(\A^2) \times H^{\Gamma'_0,\punc}_{-w''(2)}(\A^2) } & \simeq & \BB(w'')^{(\Gamma'_0, (\Delta'_1)^t, \Gamma'_2)} .
\end{array} \] Note that the second formula doesn't state equality but rather just isomorphism, given by reversing the roles of variables $x_0,x_1,x_2$. Since the spaces on the left-hand side intersect nontrivially, so do the spaces on the right-hand side. Theorem \ref{thm:intersectionOfClosures} therefore says that \[ \Phi_{w''}(\Gamma_2,\Delta_1,\Gamma_0) \leq \Phi_{w''}(\Gamma'_0, (\Delta'_1)^t, \Gamma'_2) . \] Using the explicit formula for weights from Lemma \ref{lmm:closedImmersion}, this inequality reads \begin{equation*} \begin{split} & - \langle w''(2), \Gamma_2 \rangle - \langle w''(1), \Delta_1 \rangle - \langle w''(0), \Gamma_0 \rangle - w''_2|\Gamma_2|d - w''_1|\Delta_1|d - w''_0|\Gamma_0|d \\ \leq & - \langle w''(2), \Gamma'_0 \rangle - \langle w''(1), (\Delta'_1)^t \rangle - \langle w''(0), \Gamma'_2 \rangle - w''_2|\Gamma'_0|d - w''_1|(\Delta'_1)^t|d - w''_0|\Gamma'_2|d . \end{split} \end{equation*} If we choose $d \gg n$, then only the last three summands on either side contribute to the inequality. We are then left with the inequality \[ \langle w'', (|\Gamma_0| - |\Gamma'_2|, |\Delta_1| - |\Delta'_1|, |\Gamma_2| - |\Gamma'_0|) \rangle \geq 0 , \] which says that the angle between the two vectors in the plane is at most $\pi/2$. At this point we use two specific weights $w''$. The first weight of choice is such that $w''_2-w''_1 \gg w''_1-w''_0$. Visually this is the blue vector pointing to the far left in Figure \ref{fig:triplesStandardSets}. Then the previous inequality amounts to \[ |\Gamma_2| - |\Gamma'_0| = |\Delta_2| - |\Delta'_0| \geq 0 . \] The second weight of choice is such that $w''_1-w''_0 \gg w''_2-w''_1$. Visually this is the blue vector pointing to the far right in Figure \ref{fig:triplesStandardSets}. Then the above inequality amounts to \[ |\Gamma_0| - |\Gamma'_2| = |\Delta_0| - |\Delta'_2| \leq 0 .
\] We thus obtain \begin{itemize} \item $|\Delta_2| \geq |\Delta'_0|$ and $|\Delta_0| \leq |\Delta'_2|$, \end{itemize} i.e., the first of the two bulleted conditions from Definition \ref{dfn:orderings}. Say equality holds here. Then also $|\Delta_1| = |\Delta'_1|$. At this point we make explicit use of affine cells $(\Delta_2, \Delta_1, \Delta_0)^\circ$ and $(\Delta'_2, \Delta'_1, \Delta'_0)^\circ$ in $H^n(\P^2)$ as visualized in Figure \ref{fig:triples}. As for the first cell, we use it literally as defined in the Introduction, so $(\Delta_2, \Delta_1, \Delta_0)^\circ$ parametrizes triples $(I_2,I_1,I_0)$ of ideals \begin{itemize} \item[$\circ$] $I_2 \subseteq \CC[\frac{x_0}{x_2},\frac{x_1}{x_2}]$ such that $\IN_\lex(I_2) = M_{\Delta_2}$, where $\frac{x_0}{x_2} > \frac{x_1}{x_2}$; \item[$\circ$] $I_1 \subseteq \CC[\frac{x_0}{x_1},\frac{x_2}{x_1}]$ supported in $\V(\frac{x_2}{x_1})$ such that $\IN_\lex(I_1) = M_{\Delta_1}$, where $\frac{x_0}{x_1} > \frac{x_2}{x_1}$; and \item[$\circ$] $I_0 \subseteq \CC[\frac{x_1}{x_0},\frac{x_2}{x_0}]$ supported at the origin such that $\IN_\lex(I_0) = M_{\Delta_0}$, where $\frac{x_1}{x_0} > \frac{x_2}{x_0}$. \end{itemize} Hence the picture on the left-hand side of Figure \ref{fig:triples}. For indicating the ``direction of flow'', the generic monomial ideals in $\CC[\frac{x_0}{x_2},\frac{x_1}{x_2}]$ and $\CC[\frac{x_1}{x_0},\frac{x_2}{x_0}]$, respectively, are drawn in red, and the least generic ones (whose cells are in fact points) are drawn in blue. This is in analogy to the use of red and blue in Figure \ref{fig:polytope}. As for the second cell, it is defined by reversing the order of the variables.
Read in this way, $(\Delta'_2, \Delta'_1, \Delta'_0)^\circ$ parametrizes triples of ideals \begin{itemize} \item[$\circ$] $I'_2 \subseteq \CC[\frac{x_0}{x_2},\frac{x_1}{x_2}]$ supported at the origin such that $\IN_\lex(I'_2) = M_{\Delta'_2}$, where $\frac{x_1}{x_2} > \frac{x_0}{x_2}$; \item[$\circ$] $I'_1 \subseteq \CC[\frac{x_0}{x_1},\frac{x_2}{x_1}]$ supported in $\V(\frac{x_0}{x_1})$ such that $\IN_\lex(I'_1) = M_{\Delta'_1}$, where $\frac{x_2}{x_1} > \frac{x_0}{x_1}$; and \item[$\circ$] $I'_0 \subseteq \CC[\frac{x_1}{x_0},\frac{x_2}{x_0}]$ such that $\IN_\lex(I'_0) = M_{\Delta'_0}$, where $\frac{x_2}{x_0} > \frac{x_1}{x_0}$. \end{itemize} Hence the picture on the right-hand side of Figure \ref{fig:triples}. Note the new ``direction of flow'', which got reversed when reversing the order of the variables. \begin{center} \begin{figure}[ht] \begin{picture}(450,180) \multiput(0,0)(250,0){2}{ \put(80,60){\line(-3,-2){80}} \put(80,60){\line(0,1){120}} \put(80,60){\line(1,0){120}} \multiput(80,160)(60,0){2}{\line(-3,-2){40}} \multiput(80,160)(-40,-26.67){2}{\line(1,0){60}} \multiput(180,60)(0,60){2}{\line(-3,-2){40}} \multiput(180,60)(-40,-26.67){2}{\line(0,1){60}} \multiput(13.34,15.56)(60,0){2}{\line(0,1){60}} \multiput(13.34,15.56)(0,60){2}{\line(1,0){60}} \put(2,0){\footnotesize $0$} \put(195,64){\footnotesize $1$} \put(73,175){\footnotesize $2$} } \put(32,20){\footnotesize $\V(I_0)$} \put(160,35){\footnotesize $\V(I_1)$} \put(120,135){\footnotesize $\V(I_2)$} \put(300,30){\footnotesize $\V(I'_2)$} \put(436,85){\footnotesize $\V(I'_1)$} \put(336,148){\footnotesize $\V(I'_0)$} \put(13.34,15.56){\circle*{8}} \multiput(174,56)(-12,-8){2}{\circle*{4}} \multiput(168,52)(-15,-10){2}{\circle*{3}} \put(147,38){\circle*{3}} \put(330,160){\circle*{8}} \multiput(430,63)(0,20){2}{\circle*{3}} \multiput(430,70)(0,35){2}{\circle*{4}} \put(430,98){\circle*{3}} \multiput(174,56)(-12,-8){2}{\circle*{4}} \multiput(100,150)(-15,-10){2}{\circle*{2}} \multiput(89,155)(-12,-8){2}{\circle*{3}}
\multiput(83,151)(-18,-12){2}{\circle*{2}} \multiput(110,152)(-18,-12){2}{\circle*{2}} \multiput(273,30)(0,25){2}{\circle*{2}} \multiput(280,50)(0,18){2}{\circle*{2}} \multiput(290,25)(0,35){2}{\circle*{3}} \multiput(290,35)(0,18){2}{\circle*{2}} \color{blue} \multiput(67,160)(5,0){2}{\line(-3,-2){20}} \multiput(67,160)(-3.33,-2.22){7}{\line(1,0){5}} \multiput(13.34,2.56)(0,5){2}{\line(1,0){30}} \multiput(13.34,2.56)(5,0){7}{\line(0,1){5}} \multiput(338.67,165.78)(-3.33,-2.22){2}{\line(1,0){30}} \multiput(338.67,165.78)(5,0){7}{\line(-3,-2){3.33}} \multiput(250.34,15.56)(5,0){2}{\line(0,1){30}} \multiput(250.34,15.56)(0,5){7}{\line(1,0){5}} \color{red} \multiput(88.67,165.78)(-3.33,-2.22){2}{\line(1,0){30}} \multiput(88.67,165.78)(5,0){7}{\line(-3,-2){3.33}} \multiput(.34,15.56)(5,0){2}{\line(0,1){30}} \multiput(.34,15.56)(0,5){7}{\line(1,0){5}} \multiput(317,160)(5,0){2}{\line(-3,-2){20}} \multiput(317,160)(-3.33,-2.22){7}{\line(1,0){5}} \multiput(263.34,2.56)(0,5){2}{\line(1,0){30}} \multiput(263.34,2.56)(5,0){7}{\line(0,1){5}} \end{picture} \caption{Points in $(\Delta_2, \Delta_1, \Delta_0)^\circ$ and the coordinate-reversed version of $(\Delta'_2, \Delta'_1, \Delta'_0)^\circ$, and the {\color{red}most generic} and {\color{blue}least generic} monomial ideals on four factors} \label{fig:triples} \end{figure} \end{center} Passing to the closures of the two affine cells, it's fairly easy to see that the assumption that $n_j := |\Delta_j| = |\Delta'_{2-j}|$ for $j = 0,1,2$, plus the fact that \begin{itemize} \item[$\circ$] the support of $I_1$ can't leave the line $\V(x_2) \subseteq \P^2$, \item[$\circ$] the support of $I_0$ can't leave the point $\V(x_1,x_2) \subseteq \P^2$, \item[$\circ$] the support of $I'_1$ can't leave the line $\V(x_0) \subseteq \P^2$, \item[$\circ$] the support of $I'_2$ can't leave the point $\V(x_0,x_1) \subseteq \P^2$ \end{itemize} implies that the locus where $(\Delta_2, \Delta_1, \Delta_0)$ and $(\Delta'_2, \Delta'_1, \Delta'_0)$ 
intersect is contained in the product of schemes $H^{n_2,\punc}(\A^2)$ parametrizing ideals supported in $(1:0:0)$, $H^{n_1,\punc}(\A^2)$ parametrizing ideals supported in $(0:1:0)$, and $H^{n_0,\punc}(\A^2)$ parametrizing ideals supported in $(0:0:1)$. In particular, if $(I''_2,I''_1,I''_0)$ is a point from the intersection, then \[ I''_j \in \overline{H^{\Delta_j}_\lex(\A^2)} \cap \overline{H^{(\Delta'_{2-j})^t}_\lex(\A^2)} , \] where the closure is taken within $H^{n_j}(\A^2)$. For $j = 2$, this inclusion says that the component $I''_2$ of any element of $(\Delta_2, \Delta_1, \Delta_0) \cap (\Delta'_2, \Delta'_1, \Delta'_0)$ lies in $(\Delta_2, \emptyset, \emptyset) \cap (\emptyset, \emptyset, \Delta'_0)$. Therefore, $(\Delta_2, \Delta_1, \Delta_0) \cap (\Delta'_2, \Delta'_1, \Delta'_0) \neq \emptyset$ if, and only if, $(\Delta_2, \emptyset, \emptyset) \cap (\emptyset, \emptyset, \Delta'_0) \neq \emptyset$, in which case we write the last two schemes as closures of BB cells, \[ \begin{array}{ccccc} (\Delta_2, \emptyset, \emptyset) & = & \overline{ H^{\Delta_2}_{w''(2)}(\A^2) \times H^{\emptyset,\lin}_{w''(1)}(\A^2) \times H^{\emptyset,\punc}_{w''(0)}(\A^2) } & = & \BB(w'')_{(\Delta_2,\emptyset, \emptyset)} , \\ (\emptyset, \emptyset, \Delta'_0) & = & \overline{ H^{(\Delta'_0)^t}_{-w''(0)}(\A^2) \times H^{\emptyset,\lin}_{w''(1)}(\A^2) \times H^{\emptyset,\punc}_{-w''(2)}(\A^2) } & \simeq & \BB(w'')^{((\Delta'_0)^t, \emptyset, \emptyset)} . \end{array} \] Once more Theorem \ref{thm:intersectionOfClosures} allows us to derive an inequality on $w''$-weights which, by Lemma \ref{lmm:closedImmersion}, reduces to \[ - \langle w''(2), \Delta_2 \rangle \leq - \langle w''(2), (\Delta'_0)^t \rangle , \] i.e., the inequality $\langle \mu, \Delta_2 \rangle \geq \langle \mu, (\Delta'_0)^t \rangle$ as claimed in Definition \ref{dfn:orderings}. The claim for $j = 0$ follows by symmetry.
For $j = 1$, we analogously see that $(\Delta_2, \Delta_1, \Delta_0) \cap (\Delta'_2, \Delta'_1, \Delta'_0) \neq \emptyset$ if, and only if, $(\emptyset, \Delta_1, \emptyset) \cap (\emptyset, \Delta'_1, \emptyset) \neq \emptyset$, in which case the inequality from Theorem \ref{thm:intersectionOfClosures} reduces to $\langle \lambda, \Delta_1 \rangle \geq \langle \lambda, (\Delta'_1)^t \rangle$. \end{proof} \section*{Appendix} We first collect a few elementary facts about the natural partial ordering $\leq$ and the partial orderings $\leq_\xi$ on $\st_m$ induced by weights $\xi = \mu,\lambda,\nu$ as in Definition \ref{dfn:orderings}, only proving the least obvious statement. \begin{lmm} \label{lmm:dualityOfOrderings} The two partial orderings $\leq_\mu$ and $\leq_\nu$ on $\st_m$ are dual to each other in the sense that $\Delta \leq_\mu \Delta'$ if, and only if, $(\Delta')^t \leq_\nu \Delta^t$. \end{lmm} \begin{lmm} \label{lmm:productOrdering} The partial ordering $\leq_\mu$ on $\st_m$ is the lexicographic refinement of $\leq_{(-1,0)}$ by $\leq_{(0,-1)}$ in the sense that $\Delta \leq_\mu \Delta'$ if, and only if, \begin{itemize} \item $\Delta <_{(-1,0)} \Delta'$, or \item $\Delta =_{(-1,0)} \Delta'$ and $\Delta \leq_{(0,-1)} \Delta'$. \end{itemize} \end{lmm} \begin{pro} The three partial orderings $\leq_\xi$, for $\xi = \mu, \lambda, \nu$, are refinements of the natural partial ordering $\leq$ on $\st_m$ in the sense that $\Delta <_\xi \Delta'$ whenever $\Delta < \Delta'$. \end{pro} \begin{proof} Assume for the moment that the partial ordering $\leq_{(-1,0)}$ is a refinement of the natural partial ordering; this will be proved below. Then Lemma \ref{lmm:productOrdering} implies that the statement of the proposition holds for weights $\xi = \mu$ and $\xi = \nu$.
Moreover, the identity $\langle \xi, \Delta \rangle = \langle (\xi_1,0), \Delta \rangle + \langle (0,\xi_2), \Delta \rangle$ plus the assumption that $\lambda_1 < 0 < \lambda_2$ plus Lemma \ref{lmm:dualityOfOrderings} implies that the statement of the proposition holds for weight $\xi = \lambda$. Consider two elements of $\st_m$ such that $\Delta < \Delta'$. We label each box in $\Delta' \setminus \Delta$ by its row index, and each box in $\Delta \setminus \Delta'$ by the negative of its row index, thus obtaining a disjoint sum of two \defn{skew Young tableaux}, cf. Figure \ref{fig:differences}. Then the sum of the labels of all boxes from the tableaux is $\langle (-1,0), \Delta \rangle - \langle (-1,0), \Delta' \rangle$. For showing that the sum is positive, we enumerate the elements of the respective differences in the lex-increasing way, \begin{equation*} \begin{split} \Delta \setminus \Delta' & = \left\{ \alpha_1, \ldots, \alpha_d \right\} , \\ \Delta' \setminus \Delta & = \left\{ \alpha'_1, \ldots, \alpha'_d \right\} . \end{split} \end{equation*} The assumption that $\Delta < \Delta'$ implies that $\alpha_i < \alpha'_i$ for all $i$. Positivity follows. \end{proof} Lastly we give explicit descriptions of the generic monomial ideals from Definition \ref{dfn:genericMonomialIdeals}.
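By way of illustration (an example of our own): take $u = (-9,-2)$, so $u_1, u_2 < 0$, and $n = 6$. The six largest values of $\alpha \mapsto \langle u, \alpha \rangle$ on $\N^2$ are \[ \langle u, (0,k) \rangle = -2k \ \text{ for } 0 \leq k \leq 4 \qquad \text{and} \qquad \langle u, (1,0) \rangle = -9 , \] whereas $\langle u, (0,5) \rangle = -10$, $\langle u, (1,1) \rangle = -11$ and $\langle u, (2,0) \rangle = -18$. Hence $\Gamma = \{(0,0),\ldots,(0,4),(1,0)\}$ satisfies $\langle u, \alpha - \beta \rangle < 0$ for all $\alpha \in \N^2 \setminus \Gamma$ and all $\beta \in \Gamma$, so $M_\Gamma$ is the generic monomial ideal in $H^6(\A^2)$ with respect to $u$.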
\begin{center} \begin{figure}[ht] \unitlength0.40mm \begin{picture}(170,170) \put(0,0){\line(1,0){170}} \put(0,0){\line(0,1){170}} \multiput(4,133)(10,0){1}{\tiny $0$} \multiput(11,133)(0,-10){3}{\tiny $-1$} \multiput(24,103)(0,-10){1}{\tiny $2$} \multiput(34,103)(0,-10){2}{\tiny $3$} \multiput(41,83)(0,-10){2}{\tiny $-4$} \multiput(51,83)(0,-10){2}{\tiny $-5$} \multiput(61,83)(0,-10){3}{\tiny $-6$} \multiput(74,53)(0,-10){2}{\tiny $7$} \multiput(84,53)(0,-10){2}{\tiny $8$} \multiput(91,33)(0,-10){2}{\tiny $-9$} \multiput(102.5,13)(0,-10){2}{\tiny $10$} \multiput(112.5,13)(0,-10){2}{\tiny $11$} \multiput(122.5,13)(0,-10){2}{\tiny $12$} \color{red} \put(0,140){\line(1,0){20}} \put(20,140){\line(0,-1){40}} \put(20,100){\line(1,0){10}} \put(30,100){\line(0,-1){10}} \put(30,90){\line(1,0){40}} \put(70,90){\line(0,-1){50}} \put(70,40){\line(1,0){30}} \put(100,40){\line(0,-1){40}} \color{blue} \put(0,130){\line(1,0){10}} \put(10,130){\line(0,-1){20}} \put(10,110){\line(1,0){30}} \put(40,110){\line(0,-1){40}} \put(40,70){\line(1,0){20}} \put(60,70){\line(0,-1){10}} \put(60,60){\line(1,0){30}} \put(90,60){\line(0,-1){40}} \put(90,20){\line(1,0){40}} \put(130,20){\line(0,-1){20}} \end{picture} \caption{The differences of two standard sets ${\color{red}\Delta} \leq {\color{blue}\Delta'}$} \label{fig:differences} \end{figure} \end{center} \begin{pro} Let $u,v \in \Z^2$ be weights such that $u_1,u_2 < 0$ and $v_1,v_2 > 0$, respectively. \begin{enumerate}[(i)] \item The generic monomial ideal in $H^n(\A^2)$ with respect to $u$ is the unique $M_\Gamma$ such that for all $\alpha \in \N^2 \setminus \Gamma$ and for all $\beta \in \Gamma$, the inequality \[ \langle u, \alpha - \beta \rangle < 0 \] holds true. 
\item More explicitly, if $u_1 < u_2$, then $\Gamma$ consists of the first $n$ members of the sequence \[ \begin{array}{cccc} (0,0), \\ (0,1), \\ \vdots \\ (0,m-1), \\ (1,0), & (0,m), \\ (1,1), & (0,m+1), \\ \vdots \\ (1,m-1), & (0,2m-1), \\ (2,0), & (1,m), & (0,2m), \\ (2,1), & (1,m+1), & (0,2m+1), \\ \vdots \\ (2,m-1), & (1,2m-1), & (0,3m-1), & \ldots , \end{array} \] where $m$ is the unique integer such that $m-1 < u_1 / u_2 < m$. If $u_1 > u_2$, then the transposed analogue of this statement holds true. \item The generic monomial ideal in $H^{n,\punc}(\A^2)$ with respect to $v$ is the vertical strip $\{0\} \times \{0, \ldots, n-1\}$ if $v_1 < v_2$ and the horizontal strip $\{0, \ldots, n-1\} \times \{0\}$ otherwise. \end{enumerate} \end{pro} \begin{proof} (i) The scheme $H^n(\A^2)$ is covered by affine open patches \[ H^\Delta(\A^2) := \bigl\{ \text{ideals } I \subseteq S' : S' / I \text{ is free with a basis } (x^\beta : \beta \in \Delta) \bigr\}, \] one for each standard set $\Delta$ of cardinality $n$ \cite{krbook,huibregtseElementary,norge,strata}. The coordinate ring of that scheme is a quotient of the polynomial ring \[ T := \CC[T_{\alpha,\beta} : \alpha \in \widehat{\Delta} \setminus \Delta, \beta \in \Delta] , \] where $\widehat{\Delta}$ is a sufficiently large finite standard set containing $\Delta$, by an ideal $J \subseteq T$ spanned by quadratic equations expressing that the quotient of $T[x_1,x_2]$ by the ideal \[ I := \bigl\langle x^\alpha - \sum_{\beta \in \Delta} T_{\alpha,\beta} x^\beta : \alpha \in \widehat{\Delta} \setminus \Delta \bigr\rangle \] be a free $T$-module with basis $(x^\beta : \beta \in \Delta)$. In other words, $I$ defines a $T$-valued point in $H^\Delta(\A^2)$. The BB sink $H^\Delta_u(\A^2)$ is obtained from $H^\Delta(\A^2)$ by killing all $T_{\alpha,\beta}$ such that $\langle u, \alpha - \beta \rangle > 0$. Since the scheme $H^n(\A^2)$ is smooth of dimension $2n$, so is $H^\Delta(\A^2)$.
The same holds true for $H^\Delta_u(\A^2)$ if, and only if, there are no variables $T_{\alpha,\beta}$ with $\langle u, \alpha - \beta \rangle > 0$, i.e., none get killed. (ii) is elementary. (iii) The scheme $H^{n,\punc}(\A^2)$ is covered by affine open patches $H^{\Delta,\punc}(\A^2) := H^\Delta(\A^2) \cap H^{n,\punc}(\A^2)$. The ideal \[ I := \bigl\langle x_1 - \sum_{j = 1}^{n-1} T_j x_2^j, x_2^n \bigr\rangle , \] where the $T_j$ are variables, defines a $\CC[T_1,\ldots,T_{n-1}]$-valued point in $H^{n,\punc}(\A^2)$. Therefore, if $\Gamma$ is the vertical strip as defined in the proposition, then $H^{\Gamma,\punc}(\A^2) = \Spec \CC[T_1,\ldots,T_{n-1}]$. If $v_1 < v_2$, then $I$ lies in the BB sink of the $T$-action with weight $v$. This proves one half of (iii); the other follows by symmetry. \end{proof} \bibliography{references} \bibliographystyle{amsalpha} \end{document}
Then $f''(x)=-sinx$ and there is NO $c\in (0,\pi)$ such that $f''(c)=0.$ However, if we take $f(x)=sin^{2}x$ and $0$ and $\pi$ two consecutive roots, then $f''(x)=2cos2x$ and there is a $c\in(0,\pi)$, $c=\dfrac{\pi}{4}$ such that $f''(c)=0$. In other words a) can neither be rejected or accepted! Now we go to b) and consider $f(x)=x^{2}(x-1)$ and $0$ and $1$ the consecutive roots. $f'(x)=3x^{2}-2x$, $f''(x)=6x-2$, $f'''(x)=6\neq0$ for all $x$. Now we go to c) and consider $f(x)=x^{3}(x-1)$, $0$ and $1$ the consecutive roots. $f'(x)=4x^{3}-3x^{2}$, $f''(x)=12x^{2}-6x$, $f'''(x)=24x-6$, $f''''(x)=24\neq\,0$ for all $x$.!!
54,836
TITLE: Give an example of Euclidean space. QUESTION [2 upvotes]: In the question it is asking what will be if we take out one condition of theorem. $\textbf{Theorem}$: Let {$\phi_{n}$} be orthonormal system in a complete Euclidean Space R. Then {$\phi_{n}$} is complete if and only if R contains no nonzero element orthogonal to all elements of { $\phi_{n}$}. $\textbf{Question}$: Give an example of Euclidean Space $R$ and orthonormal system {$\phi_{n}$} in $R$ such that R contains no nonzero element orthogonal to every $\phi_{n}$, even though {$\phi_{n}$} fails to be complete. So, if we can give such example, by above theorem, $R$ can not be complete. $\textbf{Definition 1}$:{$\phi_{n}$} is complete if a linear combinations of elements of {$\phi_{n}$} are everywhere dense in $R$ $\textbf{Definition 2}$ : Euclidean space $R$ if $R$ is linear space with scalar product. To be honest, I couldn't find any example myself. I always find this quite challenging when you drop one condition of theorem, it is obviously doesn't hold. Otherwise it wouldn't be theorem. Thanks in advance. REPLY [1 votes]: Take separable Hilbert space $H$ with basis $e_1, ..., e_n, ...$ Now take the subspace generated (algebraically) by $e_2,e_3, ..., e_n, ...$ and the vector $e_1 + 1/2 e_2 + ... + 1/n e_n + ...$. This is a Euclidean space. The system $e_2, ... , e_n, ...$ is maximal orthonormal, but it is not complete, because the subspace itself is dense in $H$ and $e_2, ..., e_n,...$ is not a complete system in $H$.
60,552
The only place to watch the race for the title live Watch highlights from every Football League match Bet and watch on Sky Bet's Mobile App Sports More from Sky Sports Parc des Princes (ATT 45,000) 11th August 2012 - Kick off 20:00 Ref: T Chapron Z Ibrahimovic (64, pen 90) J Aliadiere (45+3)Maxwell (og 5)G Bourillon (S/O 88).
215,926
\section{Notations and Preliminaries}\label{sec:prelim} We use $\Id_d$ to denote the identity matrix of dimension $d\times d$, and for a subspace $K$, let $\Id_K$ denote the projection matrix to the subspace $K$. For a unit vector $x$, let $P_x = \Id - xx^{\top}$ denote the projection matrix to the subspace \textit{orthogonal} to $x$. Let $\sphere^{d-1}$ be the $(d-1)$-dimensional sphere $\sphere^{d-1} := \{x\in \R^d: \|x\|^2 = 1 \}$. Let $u \odot v$ denote the Hadamard product between vectors $u$ and $v$. Let $u^{\odot s}$ denote $u\odot \dots \odot u$ where $u$ appears $s$ times. Let $A \otimes B$ denote the Kronecker product of $A$ and $B$. Let $\|\cdot\|$ denote the spectral norm of a matrix or the Euclidean norm of a vector. Let $\norm{\cdot}_F$ denote the Frobenius norm of a matrix or a tensor. We write $A \lesssim B$ if there exists a universal constant $C$ such that $A\le CB$, and we define $\gtrsim$ similarly. Unless explicitly stated otherwise, the $O(\cdot)$-notation hides absolute multiplicative constants. Concretely, every occurrence of the notation $O(x)$ is a placeholder for some function $f(x)$ that satisfies $\forall x \in \R, |f (x)|\le C|x|$ for some absolute constant $C > 0$. \subsection{Gradient, Hessian, and local maxima on manifold} We have a constrained optimization problem over the unit sphere $\sphere^{d-1}$, which is a smooth manifold, so we define local maxima with respect to the manifold. It is known that projected gradient descent on $\sphere^{d-1}$ behaves essentially the same on the manifold as in the usual unconstrained setting~\cite{2016arXiv160508101B}. In Section~\ref{sec:manifold} we give a brief introduction to manifold optimization and the definitions of the gradient and Hessian. We refer the readers to the book~\cite{absil2007optimization} for more background. Here we use $\grad f$ and $\hessian f$ to denote the gradient and the Hessian of $f$ on the manifold $\sphere^{d-1}$. We compute them in the following claim.
\begin{claim}\label{claim:grad-hessian} Let $f: S^{d-1}\rightarrow \R$ be $ f(x) := \frac{1}{4}\sum_{i=1}^{n} \inner{a_i,x}^4 $. Then the gradient and Hessian of $f$ on the sphere can be written as, \begin{align} \grad f(x) &= P_x\sum_{i=1}^{n} \inner{a_i,x}^3 a_i \mcom\nonumber\\ \hessian f(x) & = 3 \sum_{i=1}^{n}\inner{a_i,x}^2 P_xa_ia_i^{\top}P_x - \left(\sum_{i=1}^{n} \inner{a_i,x}^4\right) P_x \mcom\nonumber \end{align} where $P_x = \Id_d - xx^{\top}$. \end{claim} A local maximum of a function $f$ on the manifold $\sphere^{d-1}$ satisfies $\grad f(x) = 0$ and $\hessian f(x) \preceq 0$. Let $\cM_f$ be the set of all local maxima, \begin{align} \cM_f = \Set{x\in \sphere^{d-1}: \grad f(x) = 0, \hessian f(x) \preceq 0} \mper \label{eqn:def-L} \end{align} \subsection{Kac-Rice formula} The Kac-Rice formula is a general tool for computing the expected number of special points on a manifold. Suppose there are two random functions $P(\cdot ): \R^d\rightarrow \R^d$ and $Q(\cdot): \R^d \rightarrow \R^k$, and an open set $\mathcal{B}$ in $\R^k$. The formula counts the expected number of points $x\in \R^d$ that satisfy both $P(x) =0 $ and $Q(x)\in \mathcal{B}$. Suppose we take $P = \nabla f$ and $Q = \nabla^2 f$, and let $\cB$ be the set of negative semidefinite matrices; then the set of points that satisfy $P(x) = 0$ and $Q(x)\in \cB$ is exactly the set of all local maxima $\cM_f$. Moreover, for any set $Z\subset\sphere^{d-1}$, we can also augment $Q$ by $Q = [\nabla^2 f, x]$ and choose $\cB = \{A: A\preceq 0\}\otimes Z$. With this choice of $P,Q$, the Kac-Rice formula can count the number of local maxima inside the region $Z$. For simplicity, we will only introduce the Kac-Rice formula for this setting. We refer the readers to~\cite[Chapters 11\&12]{adler2009random} for more background. \begin{lemma}[Informally stated]\label{lem:kac-rice} Let $f$ be a random function defined on the unit sphere $\sphere^{d-1}$ and let $Z\subset \sphere^{d-1}$.
Under certain regularity conditions\footnote{We omit the long list of regularity conditions here for simplicity. See more details at~\cite[Theorem 12.1.1]{adler2009random}} on $f$ and $Z$, we have \begin{align} \Exp\left[|\cM_f\cap Z|\right] = \int_x \Exp\left[|\det(\hessian f)|\cdot \indicator{\hessian f\preceq 0} \indicator{x\in Z}\mid \grad f(x) = 0\right] p_{\grad f(x)}(0) dx\mper \label{eqn:16} \end{align} where $dx$ is the usual surface measure on $S^{d-1}$ and $p_{\grad f(x)}(0)$ is the density of $\grad f(x)$ at 0. \end{lemma} \subsection{Formula for the number of local maxima} In this subsection, we give a concrete formula for the number of local maxima of our objective function~\eqref{eq:obj} inside the superlevel set $L$ (defined in equation~\eqref{eqn:def-L-intro}). Taking $Z = L$ in Lemma~\ref{lem:kac-rice}, the problem boils down to estimating the quantity on the right hand side of~\eqref{eqn:16}. We remark that for the particular function $f$ as defined in~\eqref{eq:obj} and $Z = L$, the integrand in~\eqref{eqn:16} doesn't depend on the choice of $x$. This is because the joint distribution of $(\hessian f, \grad f,\indicator{x\in L})$ is the same for every $x\in \sphere^{d-1}$, as characterized below: \begin{lemma}\label{lem:dist} Let $f$ be the random function defined in~\eqref{eq:obj}. Let $\alpha_1,\dots, \alpha_n \sim \N(0,1)$, and $b_1,\dots, b_n \sim \N(0,\Id_{d-1})$ be independent Gaussian random variables. Let \begin{align} M &= \norm{\alpha}_4^4\cdot \Id_{d-1} -3\sum_{i=1}^{n} \alpha_i^2 b_ib_i^{\top} \mathand g = \sum_{i=1}^{n}\alpha_i^3 b_i\label{eqn:def:g} \end{align} Then, we have that for any $x\in \sphere^{d-1}$, $(\hessian f, \grad f, f)$ has the same joint distribution as $(-M,g, \fourmoments{\alpha})$. \end{lemma} \begin{proof} We use Claim~\ref{claim:grad-hessian}. We fix $x\in \sphere^{d-1}$ and let $\alpha_i = \inner{a_i,x}$ and $b_i = P_x a_i$.
We have that $\alpha_i$ and $b_i$ are independent, and $b_i$ is a spherical Gaussian random vector in the tangent space at $x$ (which is isomorphic to $\R^{d-1}$). We can verify that $\grad f(x) = \sum_{i\in [n]} \alpha_i^3b_i = g$ and $\hessian f(x) = -M$, which completes the proof. \end{proof} Using Lemma~\ref{lem:kac-rice} (with $Z=L$) and Lemma~\ref{lem:dist}, we derive the following formula for the expectation of the random variable $|\cM_f\cap L|$. We will later use Lemma~\ref{lem:kac-rice} slightly differently with another choice of $Z$. \begin{lemma}\label{lem:kac-rice-our-fun} Using the notation of Lemma~\ref{lem:dist}, let $p_g(\cdot)$ denote the density of $g$. Then, \begin{align} \Exp\left[|\cM_f\cap L|\right] = \Vol(\sphere^{d-1})\cdot \Exp\left[\left|\det(M)\right|\indicator{M\succeq 0}\indicator{\fourmoments{\alpha}\ge 3(1+\zeta)n}\mid g=0\right] p_g(0) \mper \label{eqn:10} \end{align} \end{lemma} \section{Proof Overview}\label{sec:proof_sketches} \newcommand{\Ezero}{\indicator{E_0}} \newcommand{\Eone}{\indicator{E_1}} \newcommand{\Etwo}{\indicator{E_2}} \newcommand{\Fzero}{\indicator{F_0}} \newcommand{\Fk}{\indicator{F_k}} In this section, we give a high-level overview of the proof of the main theorem. We will prove a slightly stronger version of Theorem~\ref{thm:main_intro}. Let $\gamma$ be a universal constant that is to be determined later. Define the set $L_1\subset \sphere^{d-1}$ as, \begin{align} L_1 := \left\{x\in \sphere^{d-1}: \sum_{i=1}^n\inner{a_i,x}^4 \ge 3n + \gamma \sqrt{nd}\right\}\mper\label{eqn:def-L1} \end{align} Indeed, we see that $L$ (defined in~\eqref{eqn:def-L-intro}) is a subset of $L_1$ when $n \gg d$. We prove that $L_1$ contains exactly $2n$ local maxima.
\begin{theorem}[main]\label{thm:main} There exist universal constants $\gamma,\beta$ such that the following holds: suppose $d^{2}/\log^{O(1)} d\ge n \ge \beta d\log^2 d$ and let $L_1$ be defined as in~\eqref{eqn:def-L1}; then with high probability over the choice of $a_1,\dots, a_n$, the number of local maxima in $L_1$ is exactly $2n$: \begin{align} |\cM_f\cap L_1| = 2n\mper\label{eqn:12} \end{align} Moreover, each local maximum in $L_1$ is $\widetilde{O}(\sqrt{n/d^3})$-close to one of $\pm \frac{1}{\sqrt{d}}a_1,\dots ,\pm\frac{1}{\sqrt{d}}a_n$. \end{theorem} \sloppy In order to count the number of local maxima in $L_1$, we use the Kac-Rice formula (Lemma~\ref{lem:kac-rice-our-fun}). Recall that the Kac-Rice formula gives an expression that involves the complicated expectation $\Exp\left[\left|\det(M)\right|\indicator{M\succeq 0}\indicator{\fourmoments{\alpha}\ge 3(1+\zeta)n}\mid g=0\right] $. Here the difficulty is to deal with the determinant of the random matrix $M$ (defined in Lemma~\ref{lem:dist}), whose eigenvalue distribution does not admit an analytical form. Moreover, due to the presence of the conditioning and the indicator functions, it is almost impossible to compute the RHS of the Kac-Rice formula (equation~\eqref{eqn:10}) exactly. \newcommand{\cC}{\mathcal{C}} \paragraph{Local vs. global analysis} The key idea to proceed is to divide the superlevel set $L_1$ into two subsets \begin{align} L_1 & = (L_1\cap L_2) \cup L_2^c , \nonumber\\ \label{eq:L1L2} & \textup{ where } L_2 := \{x\in \sphere^{d-1}: \forall i, \norm{P_x a_i}^2 \ge (1-\delta)d, \mathand |\inner{a_i,x}|^2\le \delta d\}\mper \end{align} Here $\delta$ is a sufficiently small universal constant that is to be chosen later. We also note that $L_2^c\subset L_1$ and hence $L_1 =(L_1\cap L_2) \cup L_2^c $.
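To get a feel for the two regions, one can compare the statistic $\sum_i \inner{a_i,x}^4$ at a random point and at a normalized component. The numbers below are purely illustrative (toy dimensions, nothing here is used in the proof):

```python
# Illustrative only: at a uniformly random unit x each <a_i,x> is roughly
# N(0,1), so sum_i <a_i,x>^4 concentrates around 3n; at x = a_1/||a_1||
# the i = 1 term alone contributes about ||a_1||^4 / ||a_1||^2 squared ~ d^2,
# pushing x deep into the superlevel set L_1.
import numpy as np

rng = np.random.default_rng(1)
d, n = 30, 120
A = rng.standard_normal((n, d))

def stat4(x):
    return float(np.sum((A @ x) ** 4))

x_rand = rng.standard_normal(d)
x_rand /= np.linalg.norm(x_rand)
x_comp = A[0] / np.linalg.norm(A[0])

print(f"random unit x:   {stat4(x_rand):8.1f}  (3n = {3 * n})")
print(f"x = a_1/||a_1||: {stat4(x_comp):8.1f}  (~ d^2 + 3n = {d * d + 3 * n})")
```

The gap of order $d^2$ versus the fluctuation scale $\sqrt{nd}$ is what makes the threshold $3n + \gamma\sqrt{nd}$ defining $L_1$ meaningful.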
Intuitively, the set $L_1\cap L_2$ contains those points that do not have large correlation with any of the $a_i$'s; the complement $L_2^c$ is the union of the neighborhoods around each of the desired vectors $\frac{1}{\sqrt{d}}a_1,\dots, \frac{1}{\sqrt{d}}a_n$. We will refer to the first subset $L_1\cap L_2$ as the global region, and refer to $L_2^c$ as the local region. We will compute the number of local maxima in the sets $L_1\cap L_2$ and $L^c_2$ separately, using different techniques. We will show that with high probability $L_1\cap L_2$ contains zero local maxima using the Kac-Rice formula (see Theorem~\ref{thm:kac-rice-zero}). Then, we show that $L^c_2$ contains exactly $2n$ local maxima (see Theorem~\ref{thm:local}) using a different and more direct approach. \paragraph{Global analysis.} The key benefit of such a division into local and global regions is that for the global region, we can avoid evaluating the value of the RHS of the Kac-Rice formula. Instead, we only need an \textit{estimate}: note that the number of local maxima in $L_1\cap L_2$, namely $|\cM_f\cap L_1\cap L_2|$, is a nonnegative integer random variable. Thus, if we can show that its expectation $\Exp\left[|\cM_f\cap L_1\cap L_2|\right]$ is much smaller than $1$, then Markov's inequality implies that with high probability, the number of local maxima will be \textit{exactly} zero. Concretely, we will use Lemma~\ref{lem:kac-rice} with $Z = L_1\cap L_2$, and then estimate the resulting integral using various techniques in random matrix theory. This remains quite challenging even though we are only shooting for an estimate; see Theorem~\ref{thm:kac-rice-zero} for the exact statement and Section~\ref{subsec:kac-rice} for an overview of the analysis. \paragraph{Local analysis.} In the local region $L_2^c$, that is, the neighborhoods of $a_1,\dots, a_n$, we will show there are {\em exactly} $2n$ local maxima.
As argued above, it is almost impossible to get exact numbers out of the Kac-Rice formula, since it is often hard to compute the complicated integral. Moreover, the Kac-Rice formula only gives the expected number, not high-probability bounds. However, the observation here is that the local maxima (and critical points) in the local region are well-structured. Thus, instead, we show that in these local regions, the gradient and Hessian of a point $x$ are dominated by the terms corresponding to the components $a_i$ that are highly correlated with $x$. The number of such terms cannot be very large (by the restricted isometry property, see Section~\ref{sec:rip}). As a result, we can characterize the possible local maxima explicitly, and eventually show there is exactly one local maximum in each of the local neighborhoods around the points $\pm \frac{1}{\sqrt{d}} a_i$. \paragraph{Concentration properties of $a_i$'s.} Before stating the formal results regarding the local and global analysis, we need the following technical preparation. Since the $a_i$'s are chosen at random, with small probability the tensor $T$ will behave very differently from the average instance. The optimization problem for such irregular tensors may have many more local maxima. To avoid these, we will restrict our attention to the following event $G_0$, which occurs with high probability (over the randomness of the $a_i$'s): \begin{align} G_0 := &\Big\{ \forall x\in \sphere^{d-1}~ \textup{the following hold: }\nonumber\\ & ~~~\forall U\subset [n] \textup{ with }|U| < \delta n/\log d, \sum_{i\in U} \inner{a_i,x}^2 \le (1+\delta)d \mcom\label{eqn:32}\\ & ~~~\sum_{i\in [n]}\inner{a_i,x}^6\ge 15(1-\delta)n\mcom\label{eqn:29}\\ & ~~~n - 3\sqrt{nd}\le \sum_{i\in [n]}\inner{a_i,x}^2 \le n + 3\sqrt{nd}\Big\}\mper\label{eqn:101} \end{align} Here $\delta$ is a small enough universal constant. The event $G_0$ summarizes common properties of the vectors $a_i$ that we frequently use in the technical sections.
Equation \eqref{eqn:32} is the restricted isometry property (see Section~\ref{sec:rip}) that ensures the $a_i$'s are not adversarially correlated with each other. Equation \eqref{eqn:29} lower-bounds the high-order moments of the $a_i$'s. Equation \eqref{eqn:101} bounds the singular values of the matrix $\sum_{i=1}^n a_ia_i^\top$. These conditions will be useful in the later technical sections. Next, we formalize the intuition above as the two theorems below. Theorem~\ref{thm:kac-rice-zero} states that there is no local maximum in $L_1\cap L_2$, whereas Theorem~\ref{thm:local} concludes that there are exactly $2n$ local maxima in $L_1\cap L_2^c$. \begin{theorem}\label{thm:kac-rice-zero} There exist a small universal constant $\delta \in (0,1)$ and universal constants $\gamma,\beta$ such that for the sets $L_1,L_2$ defined in equation~\eqref{eq:L1L2} and $ n \ge \beta d\log^2 d$, the expected number of local maxima in $L_1\cap L_2$ is exponentially small: \begin{align} \Exp\left[ |\cM_f\cap L_1\cap L_2| \cdot \indicator{G_0}\right]\le 2^{-d/2}\mper\nonumber \end{align} \end{theorem} \begin{theorem}\label{thm:local} Suppose $1/\delta^2 \cdot d\log d \le n \le d^2/\log^{O(1)} d$. Then, with high probability over the choice of $a_1,\dots, a_n$, we have, \begin{align} |\cM_f\cap L_1\cap L_2^c| = 2n\mper\label{eqn:thm:local} \end{align} Moreover, each of the points in $\cM_f\cap L_1\cap L_2^c$ is $\widetilde{O}(\sqrt{n/d^3})$-close to one of $\pm \frac{1}{\sqrt{d}}a_1,\dots ,\pm\frac{1}{\sqrt{d}}a_n$. \end{theorem} \paragraph{Proof of the main theorem} The following lemma shows that the event $G_0$ we condition on is indeed a high-probability event. The proof follows from simple concentration inequalities and is deferred to Section~\ref{sec:proof:proof_sketches}. \begin{lemma}\label{lem:G_0}Suppose $1/\delta^2 \cdot d\log^2 d \le n \le d^2/\log^{O(1)} d$.
Then, \begin{align} \Pr\left[G_0\right] \ge 1 - d^{-10} \nonumber\mper \end{align} \end{lemma} \noindent Combining Theorem~\ref{thm:kac-rice-zero}, Theorem~\ref{thm:local} and Lemma~\ref{lem:G_0}, we obtain Theorem~\ref{thm:main} straightforwardly. \begin{proof}[Proof of Theorem~\ref{thm:main}] \sloppy By Theorem~\ref{thm:kac-rice-zero} and Markov's inequality, we obtain that $\Pr\left[|\cM_f\cap L_1\cap L_2|\indicator{G_0} \ge 1/2\right]\le 2^{-d/2+1}$. Since $|\cM_f\cap L_1\cap L_2|\indicator{G_0}$ is an integer-valued random variable, we get $\Pr\left[|\cM_f\cap L_1\cap L_2|\indicator{G_0} \neq 0\right]\le 2^{-d/2+1}$. Thus, using Lemma~\ref{lem:G_0}, Theorem~\ref{thm:local} and a union bound, with high probability the events $G_0$, $|\cM_f\cap L_1\cap L_2|\indicator{G_0} =0$ and equation~\eqref{eqn:thm:local} all happen, and we conclude that $|\cM_f\cap L_1| = |\cM_f\cap L_1\cap L_2^c| + |\cM_f\cap L_1\cap L_2| = 2n$. \end{proof} In the next subsections we sketch the basic ideas behind the proofs of Theorem~\ref{thm:kac-rice-zero} and Theorem~\ref{thm:local}. Theorem~\ref{thm:kac-rice-zero} is the crux of the technical part of the paper. \subsection{Estimating the Kac-Rice formula for the global region} \label{subsec:kac-rice} The general plan to prove Theorem~\ref{thm:kac-rice-zero} is to use random matrix theory to estimate the RHS of the Kac-Rice formula. We begin by applying the Kac-Rice formula to our situation. We first note that by the definition of $G_0$, \begin{align} |\cM_f\cap L_1\cap L_2| \cdot \indicator{G_0}\le \left|\cM_f\cap L_1\cap L_2\cap L_G\right|\mcom\label{eqn:102} \end{align} where $L_G = \{x\in \sphere^{d-1}: x~\textup{satisfies equations~\eqref{eqn:32},~\eqref{eqn:29} and~\eqref{eqn:101}} \}$. Indeed, when $G_0$ happens, $L_G = \sphere^{d-1}$, and therefore equation~\eqref{eqn:102} holds. Thus it suffices to control $\Exp\left[\left|\cM_f\cap L_1\cap L_2\cap L_G\right|\right]$.
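The gradient and Hessian formulas from Claim~\ref{claim:grad-hessian}, which feed into the Kac-Rice computation, can be sanity-checked numerically by differentiating $f$ along a geodesic of the sphere. The snippet below is a quick illustrative check at toy scale, not part of the argument:

```python
# Check Claim grad-hessian by finite differences along a geodesic:
# for a unit tangent v, gamma(t) = cos(t) x + sin(t) v is a geodesic, and
# phi(t) = f(gamma(t)) satisfies phi'(0) = <grad f, v> and
# phi''(0) = <v, (hessian f) v>  (the curve acceleration -x is normal).
import numpy as np

rng = np.random.default_rng(2)
d, n = 8, 20
A = rng.standard_normal((n, d))
f = lambda x: 0.25 * float(np.sum((A @ x) ** 4))

x = rng.standard_normal(d); x /= np.linalg.norm(x)
v = rng.standard_normal(d); v -= (v @ x) * x; v /= np.linalg.norm(v)

c = A @ x                                   # alpha_i = <a_i, x>
B = A - np.outer(c, x)                      # rows b_i = P_x a_i
grad = B.T @ (c ** 3)                       # P_x sum_i <a_i,x>^3 a_i
P = np.eye(d) - np.outer(x, x)
hess = 3 * (B.T * c ** 2) @ B - np.sum(c ** 4) * P

gamma = lambda t: np.cos(t) * x + np.sin(t) * v
h = 1e-4
d1 = (f(gamma(h)) - f(gamma(-h))) / (2 * h)
d2 = (f(gamma(h)) - 2 * f(gamma(0.0)) + f(gamma(-h))) / h ** 2
print(abs(d1 - grad @ v), abs(d2 - v @ hess @ v))
```

Both differences are at the level of finite-difference error, confirming the closed-form gradient and Hessian used throughout.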
We will use the Kac-Rice formula (Lemma~\ref{lem:kac-rice}) with the set $Z = L_1\cap L_2\cap L_G$. \paragraph{Applying Kac-Rice formula. } The first step in applying the Kac-Rice formula is to characterize the joint distribution of the gradient and the Hessian. We use the notation of Lemma~\ref{lem:dist} for expressing the joint distribution of $(\hessian f, \grad f, \indicator{x\in Z})$. For any fixed $x\in \sphere^{d-1}$, let $\alpha_i = \inner{a_i,x}$ and $b_i = P_x a_i$ (where $P_x = \Id-xx^{\top}$) and $M = \norm{\alpha}_4^4\cdot \Id_{d-1} -3\sum_{i=1}^{n} \alpha_i^2 b_ib_i^{\top} \mathand g = \sum_{i=1}^{n}\alpha_i^3 b_i$ as defined in~\eqref{eqn:def:g}. We have that \begin{align} \textup{ $(\hessian f, \grad f, \indicator{x\in Z})$ has the same distribution as $(-M, g, \indicator{E_0}\indicator{E_1}\indicator{E_2}\indicator{E_2'})$}\mcom \nonumber \end{align} where $E_0,E_1,E_2,E_2'$ are defined as follows (though these details will only become important later). \begin{align} E_0 = & \left\{\forall U\subset [n] \textup{ with }|U| < \delta n/\log d, \|\alpha_U\|^2 \le (1+\delta)d\mcom \label{eqn:RIP}\right.\\ & ~~~ \Norm{\alpha}_6^6 \ge 15(1-\delta)n\mcom\nonumber\\ & ~~~ ~n-3\sqrt{nd}\le \|\alpha\|^2 \le n + 3\sqrt{nd}\Big\}\mper\nonumber \end{align} We see that the equations above correspond to equations~\eqref{eqn:32},~\eqref{eqn:29} and~\eqref{eqn:101}, and the event $\Ezero$ corresponds to the event $x\in L_G$. Similarly, the following event $E_1$ corresponds to $x\in L_1$. \begin{align} E_1 = & \left\{ \norm{\alpha}_4^4 \ge 3n + \gamma\sqrt{nd}\right\} \mper\nonumber \end{align} The events $E_2$ and $E_2'$ together correspond to the event $x\in L_2$. We separate them out to reflect that $E_2$ and $E_2'$ depend on the randomness of the $\alpha_i$'s and the $b_i$'s respectively.
\begin{align} E_2 &= \left\{\|\alpha\|_{\infty}^2\le \delta d\right\}\nonumber \\ E_2' &= \left\{\forall i\in [n], \|b_i\|^2 \ge (1-\delta)d\right\}\mper\nonumber \end{align} \noindent Using the Kac-Rice formula (Lemma~\ref{lem:kac-rice} with $Z=L_1\cap L_2\cap L_G$), we conclude that \begin{align} \Exp\left[\left|\cM_f\cap L_1\cap L_2\cap L_G\right|\right] = \Vol(\sphere^{d-1})\cdot\Exp\left[\left|\det(M)\right|\indicator{M\succeq 0}\indicator{E_0}\indicator{E_1}\indicator{E_2}\indicator{E_2'}\mid g=0\right] p_g(0)\mper\label{eqn:110} \end{align} Next, towards proving Theorem~\ref{thm:kac-rice-zero}, we will estimate the RHS of equation~\eqref{eqn:110} using various techniques. \paragraph{Conditioning on $\alpha$.} We observe that the distributions of the gradient $g$ and the Hessian $M$ are fairly complicated. In particular, we need to deal with the interactions between the $\alpha_i$'s (the components along $x$) and the $b_i$'s (the components in the orthogonal complement of $x$). Therefore, we use the law of total expectation to first condition on $\alpha$ and take the expectation over the randomness of the $b_i$'s, and then take the expectation over the $\alpha_i$'s. In the RHS of~\eqref{eqn:100} below, the inner expectation is with respect to the randomness of the $b_i$'s and the outer one is with respect to the $\alpha_i$'s. \begin{lemma}\label{lem:conditioning} Using the notation of Lemma~\ref{lem:dist}, let $E$ denote an event and let $p_{g\mid \alpha}$ denote the density of $g\mid \alpha$.
Then, \sloppy \begin{align} \Exp\left[\left|\det(M)\right|\indicator{M\succeq 0}\indicator{E}\mid g=0\right] p_g(0) = \Exp\left[\Exp\left[\left|\det(M)\right|\indicator{M\succeq 0} \indicator{E}\mid g = 0,\alpha\right] p_{g\mid \alpha}(0)\right] \mper \label{eqn:100} \end{align} \end{lemma} \noindent For notational convenience we define $h(\cdot):\R^n\rightarrow \R$ as \begin{align} h(\alpha) &:= \Vol(\sphere^{d-1})\Exp\left[\det(M)\indicator{M\succeq 0}\indicator{E_2'}\mid g = 0, \alpha\right] \indicator{E_0}\indicator{E_1}\indicator{E_2} p_{g\mid \alpha}(0) \mper\nonumber \end{align} Using equation~\eqref{eqn:102}, the Kac-Rice formula (equation~\eqref{eqn:110}), and the law of total expectation (Lemma~\ref{lem:conditioning}), we straightforwardly obtain the following lemma, which gives an explicit formula for the number of local maxima in $L_1\cap L_2$. We provide a rigorous proof in Section~\ref{sec:proof:proof_sketches} that verifies the regularity conditions of the Kac-Rice formula. \begin{lemma}\label{lem:using-kac-rice-zero} Let $h(\cdot)$ be defined as above. In the setting of this section, we have \begin{align} \Exp\left[|\cM_f\cap L_1\cap L_2| \cdot \indicator{G_0}\right]\le \Exp\left[\left|\cM_f\cap L_1\cap L_2\cap L_G\right|\right]= \Exp\left[h(\alpha)\right] \mper\nonumber \end{align} \end{lemma} \noindent We note that $p_{g\mid \alpha}(0)$ has an explicit expression since $g\mid \alpha$ is Gaussian.
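Concretely, conditioned on $\alpha$, $g = \sum_i \alpha_i^3 b_i$ is a linear combination of independent standard Gaussian vectors, so $g\mid\alpha \sim \N(0, \norm{\alpha}_6^6\,\Id_{d-1})$ and hence $p_{g\mid\alpha}(0) = (2\pi\norm{\alpha}_6^6)^{-(d-1)/2}$. A small Monte Carlo check of the conditional covariance (illustrative only, toy sizes):

```python
# Monte Carlo check that g | alpha = sum_i alpha_i^3 b_i with
# b_i ~ N(0, I) has covariance ||alpha||_6^6 * I, which is what makes
# the density p_{g|alpha}(0) explicit.
import numpy as np

rng = np.random.default_rng(3)
n, dim, trials = 6, 4, 200_000          # dim plays the role of d - 1
alpha = rng.standard_normal(n)
s6 = float(np.sum(alpha ** 6))          # ||alpha||_6^6

B = rng.standard_normal((trials, n, dim))     # fresh b_i ~ N(0, I_dim) per trial
g = np.einsum('i,tij->tj', alpha ** 3, B)     # g = sum_i alpha_i^3 b_i
cov = g.T @ g / trials

print(np.round(cov / s6, 3))            # close to the identity matrix
```

The empirical covariance divided by $\norm{\alpha}_6^6$ is close to the identity, matching the closed-form density.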
For the ease of exposition, we separate out the hard-to-estimate part of $h(\alpha)$, which we call $W(\alpha)$: \begin{align} W(\alpha) := \Exp\left[\det(M)\indicator{M\succeq 0}\indicator{E_2'}\mid g = 0, \alpha\right]\indicator{E_0}\indicator{E_1}\indicator{E_2} \label{eqn:def-Walpha}\mper \end{align} \noindent Therefore by definition, we have that \begin{align} h(\alpha) = \Vol(\sphere^{d-1}) W(\alpha) p_{g\mid \alpha}(0)\label{def:Wa}\mper\end{align} \noindent Now, since we have conditioned on $\alpha$, the distribution of the Hessian, namely that of $M\mid \alpha$, is a generalized Wishart distribution, which is slightly easier to work with than before. However, there are still several challenges that we need to address in order to estimate $W(\alpha)$. \paragraph{Question I: how to control $\det(M)\indicator{M\succeq 0}$?} Recall that $M = \fourmoments{\alpha}\cdot \Id_{d-1} -3\sum \alpha_i^2b_ib_i^{\top}$, a generalized Wishart matrix whose eigenvalue distribution has no (known) analytical expression. The determinant itself is by definition a high-degree polynomial over the entries, and in our case, a complicated polynomial over the random variables $\alpha_i$ and the vectors $b_i$. We also need to properly exploit the presence of the indicator function $\indicator{M\succeq 0}$, since otherwise the desired statement is simply not true -- the function $f$ has an exponential number of critical points. Fortunately, in most cases we can use the following simple claim that bounds the determinant from above by the trace. The inequality is close to tight when all the eigenvalues of $M$ are similar to each other. More importantly, it naturally makes use of the indicator function $\indicator{M\succeq 0}$! Later we will see how to strengthen it when it is far from tight.
\begin{claim}\label{claim:amgm} For any $p\times p$ symmetric matrix $V$, we have, \begin{align} \det(V)\indicator{V\succeq 0}& \le \left(\frac{|\trace(V)|}{p}\right)^p\indicator{V\succeq 0} \nonumber \end{align} \end{claim} The claim is a direct consequence of the AM-GM inequality. \begin{proof} We can assume $V$ is positive semidefinite, since otherwise both sides of the inequality vanish. Then, suppose $\lambda_1,\dots, \lambda_p \ge 0$ are the eigenvalues of $V$. We have $\det(V) = \lambda_1\dots \lambda_p$ and $|\trace(V)| = \trace(V) = \lambda_1+\dots +\lambda_p$, so the AM-GM inequality completes the proof. \end{proof} Applying the claim above with $V = M$, we have \begin{align} W(\alpha)\le \Exp\left[\frac{|\trace(M)|^{d-1}}{(d-1)^{d-1}} \mid g = 0, \alpha\right]\indicator{E_0}\indicator{E_1} \mper \label{eqn:103} \end{align} Here we dropped the indicators for the events $E_2$ and $E_2'$ since they are not important for the discussion below. It turns out that $|\trace(M)|$ is a random variable that concentrates very well, and thus we have $\Exp\left[|\trace(M)|^{d-1}\right] \approx |\Exp\left[\trace(M)\right]|^{d-1}$.
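Claim~\ref{claim:amgm} itself is elementary and easy to confirm numerically on random positive semidefinite matrices (a throwaway check, not part of the argument):

```python
# Verify det(V) <= (trace(V)/p)^p for random PSD matrices V = G G^T,
# i.e. the AM-GM determinant bound of the claim above.
import numpy as np

rng = np.random.default_rng(4)
for _ in range(100):
    p = int(rng.integers(2, 8))
    G = rng.standard_normal((p, p))
    V = G @ G.T                         # PSD by construction
    assert np.linalg.det(V) <= (np.trace(V) / p) ** p * (1 + 1e-9)
print("AM-GM determinant bound holds on 100 random PSD matrices")
```

Equality holds exactly when all eigenvalues coincide, which is why the bound is nearly tight for well-conditioned $M$.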
It can be shown that (see Proposition~\ref{prop:det-to-trace} for the detailed calculation), \begin{align} \Exp\left[\trace(M)\mid g=0,\alpha\right] = (d-1)\left(\|\alpha\|_4^4 - 3\|\alpha\|^2 + 3\norm{\alpha}_8^8/\|\alpha\|_6^6\right) \nonumber\mper \end{align} Therefore, using equation~\eqref{eqn:103} and the equation above, and ignoring $\indicator{E_2'}$, we have that \begin{align} W(\alpha)\le \left(\|\alpha\|_4^4 - 3\|\alpha\|^2 + 3\norm{\alpha}_8^8/\|\alpha\|_6^6\right)^{d-1}\indicator{E_0}\indicator{E_1} \mper\nonumber \end{align} Note that since $g\mid \alpha$ has a Gaussian distribution, we have, \begin{align} p_{g\mid \alpha}(0) = (2\pi)^{-d/2} (\Norm{\alpha}_6^6)^{-d/2} \mper\label{eqn:104} \end{align} Thus, using the two equations above, we can bound $\Exp\left[h(\alpha)\right] $ by \begin{align} \Exp\left[h(\alpha)\right] \le \Vol(\sphere^{d-1}) \Exp\left[\left(\|\alpha\|_4^4 - 3\|\alpha\|^2 + 3\norm{\alpha}_8^8/\|\alpha\|_6^6\right)^{d-1}\cdot (2\pi)^{-d/2} (\Norm{\alpha}_6^6)^{-d/2} \indicator{E_0}\indicator{E_1}\right]\mper\label{eqn:105} \end{align} Therefore, it suffices to control the RHS of~\eqref{eqn:105}, which is much easier than the original Kac-Rice formula. However, we still need to be careful here, as it turns out that the RHS of~\eqref{eqn:105} is roughly $c^d$ for some constant $c > 1$! Roughly speaking, this is because high powers of a random variable are very sensitive to its tail. \paragraph{Easy case when all $\alpha_i$'s are small. } To find a tight bound for the RHS of \eqref{eqn:105}, intuitively we can consider two events: the event $F_0$ on which all of the $\alpha_i$'s are at most of constant order (defined rigorously later in equation~\eqref{eqn:def-F0}) and the complementary event $F_0^c$. We claim that $\Exp\left[h(\alpha)\Fzero\right]$ can be bounded by $2^{-d/2}$. Then we will argue that it is difficult to get an upper bound for $\Exp\left[h(\alpha)\indicator{F_0^c}\right]$ that is smaller than 1 using the RHS of equation~\eqref{eqn:105}.
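The tail-sensitivity of high powers can be made concrete with a short stdlib computation, using an exponential random variable as a stand-in for $Q(\alpha)$ (purely illustrative; the constants have nothing to do with the actual distribution of $Q(\alpha)$). For $X\sim\mathrm{Exp}(1)$ and integer $d$, one has $\Exp[X^d] = d!$ and $\Exp[X^d\,\indicator{X>T}] = d!\, e^{-T}\sum_{j=0}^{d} T^j/j!$, so an event of probability about $6\times 10^{-6}$ still carries the majority of the $d$-th moment:

```python
# For X ~ Exp(1): E[X^d] = d!, and the tail {X > T} contributes
# E[X^d ; X > T] = d! * exp(-T) * sum_{j=0..d} T^j / j!.
# With d = T = 12, the event {X > 12} has probability e^{-12} ~ 6e-6,
# yet it carries more than half of E[X^12].
import math

d, T = 12, 12.0
p_tail = math.exp(-T)                                  # P[X > T]
share = sum(math.exp(-T) * T ** j / math.factorial(j) for j in range(d + 1))
print(f"P[X > T] = {p_tail:.2e}")
print(f"fraction of E[X^d] contributed by the tail: {share:.3f}")
```

This is exactly why exponentially rare values of $Q(\alpha)$ cannot be discarded when bounding $\Exp[Q(\alpha)^{d-1}]$.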
It turns out that in most of the calculations below, we can safely ignore the contribution of the term $3\norm{\alpha}_8^8/\|\alpha\|_6^6$. For notational convenience, let $Q(\cdot): \R^n \rightarrow \R$ be defined as: \begin{align} Q(z) = \Norm{z}_4^4 - 3\Norm{z}^2 \label{eqn:def:Q}\mper \end{align} When conditioned on the event $F_0$, roughly speaking, the random variable $Q(\alpha) = \|\alpha\|_4^4 - 3\|\alpha\|^2$ behaves like a Gaussian distribution with variance $\Theta(n)$, since $Q(\alpha)$ is a sum of independent random variables with constant variances. Note that $\Ezero$ and $\Eone$ imply that $Q(\alpha)= \norm{\alpha}_4^4-3\norm{\alpha}_2^2\ge (\gamma-3) \sqrt{nd}$. Therefore, $Q(\alpha)\indicator{E_0}\indicator{E_1}$ behaves roughly like a truncated Gaussian distribution, that is, $X\cdot \indicator{X\ge (\gamma-3)\sqrt{nd}}$ where $X\sim \N(0,\Theta(n))$. Then for a sufficiently large constant $\gamma$, \begin{align} \Exp\left[Q(\alpha)^{d-1}\indicator{E_0}\Eone \Fzero\right]\le (0.1nd)^{d/2}\mper\label{eqn:106} \end{align} Moreover, recall that when the event $E_0$ happens, we have that $\sixmoments{\alpha} \ge 15(1-\delta)n$. Therefore, putting all these bounds together into equation~\eqref{eqn:105} with the indicator $\Fzero$, and using the fact that $\Vol(\sphere^{d-1}) = \frac{\pi^{d/2}}{\Gamma(d/2+1)}$, we can obtain the target bound, \begin{align} \Exp\left[h(\alpha)\Fzero\right] & \le \frac{\pi^{d/2}}{\Gamma(d/2+1)} \cdot (0.1nd)^{d/2} \cdot (2\pi)^{-d/2} (15(1-\delta)n)^{-d/2} \le 2^{-d/2} \mper\nonumber \end{align} \paragraph{The heavy tail problem: } Next we explain why it is difficult to achieve a good bound using the RHS of~\eqref{eqn:105}. The critical issue is that the random variable $Q(\alpha)$ has a very heavy tail, and events of exponentially small probability \textit{cannot} be ignored.
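The variance scaling of $Q$ can be illustrated numerically. The sketch below is a caricature only: it assumes i.i.d.\ standard Gaussian coordinates for $\alpha$, which is not the actual distribution of $\alpha$ in the argument; for this surrogate, the per-coordinate variance is $\Exp[a^8] - 6\Exp[a^6] + 9\Exp[a^4] = 105 - 90 + 27 = 42$, so $\mathrm{Var}(Q) = 42n = \Theta(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 400, 20_000
a = rng.standard_normal((trials, n))
q = np.sum(a**4, axis=1) - 3 * np.sum(a**2, axis=1)   # Q(alpha) per trial
# Per coordinate: E[a^4 - 3a^2] = 3 - 3 = 0, and
# Var(a^4 - 3a^2) = E[a^8] - 6 E[a^6] + 9 E[a^4] = 105 - 90 + 27 = 42.
assert abs(np.mean(q)) < 5 * np.sqrt(42 * n / trials)  # mean is ~0
assert abs(np.var(q) / (42 * n) - 1) < 0.1             # variance is ~42 n
```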
To see this, we first claim that $Q(\alpha)$ has a tail distribution that roughly behaves like \begin{align} \Pr\left[Q(\alpha) \ge t\right] \ge \exp(-\sqrt{t}/2)\mper\nonumber \end{align} Taking $t=d^2$, the contribution of the tail to the expectation of $Q(\alpha)^d$ is at least $\Pr\left[Q(\alpha) \ge t\right] \cdot t^d \approx d^{2d}$, which is much larger than what we obtained (equation~\eqref{eqn:106}) in the case when all the $\alpha_i$'s are small. The obvious fix is to consider $Q(\alpha)^d (\norm{\alpha}_6^6)^{-d/2}$ together (instead of separately). For example, we will use the Cauchy-Schwarz inequality to obtain that $Q(\alpha)^d (\norm{\alpha}_6^6)^{-d/2} \le \norm{\alpha}^d$. However, this alone is still not enough to get an upper bound on the RHS of~\eqref{eqn:105} that is smaller than 1. In the next paragraph, we tighten the analysis by strengthening the estimate of $\det(M)\indicator{M\succeq 0}$. \paragraph{Question II: how to tighten the estimate of $\det(M)\indicator{M\succeq 0}$?} It turns out that the AM-GM inequality is not always tight for controlling the quantity $\det(M)\indicator{M\succeq 0}$. In particular, it gives a pessimistic bound when the event $M\succeq 0$ is unlikely to occur. As a matter of fact, whenever $F_0^c$ happens, $M$ is unlikely to be positive semidefinite! (The contribution of $ \det(M)\indicator{M\succeq 0}$ will not be negligible, since there is still a tiny chance that $M$ is PSD. As we argued before, even events of exponentially small probability cannot be safely ignored.) We will show formally that the event $\{M\succeq 0\}\cap F_0^c$ has small enough probability to kill the contribution of $Q(\alpha)^d$ when it happens (see Lemma~\ref{lem:prob_indicator} for the formal statement). \paragraph{Summary with formal statements. } Below, we formally set up the notation and state two propositions that summarize the intuitions above.
Let $\tau = Kn/d$, where $K$ is a universal constant that will be determined later. Define the events $F_0, F_1, \dots, F_n$ by \begin{align} & F_0 = \Set{\norm{\alpha}_{\infty}^4 \le \tau} \label{eqn:def-F0} \\ & F_k = \Set{k = \argmax_{i\in [n]} \alpha_i^4 \mathand \alpha_k^4 \ge \tau} \textup{ for } 1\le k \le n \label{eqn:def-Fk} \end{align} We note that $1\le \indicator{F_0} + \indicator{F_1}+ \dots + \indicator{F_n}$, and therefore we have \begin{align} h(\alpha) \le \sum_{k=0}^n h(\alpha)\indicator{F_k} \label{eqn:71} \end{align} Therefore, towards controlling $\Exp\left[h(\alpha)\right]$, it suffices to have good upper bounds on $\Exp\left[h(\alpha)\indicator{F_k}\right]$ for each $k$, which are summarized in the following two propositions. \begin{proposition}\label{prop:f_k} Let $K\ge 2\cdot 10^3$ be a universal constant. Let $\tau = Kn/d$ and let $\gamma, \beta$ be sufficiently large constants (depending on $K$). Then for any $n \ge \beta d\log^2 d$, we have that for any $ k\in \{1,\dots, n\}$ \begin{align} \Exp\left[h(\alpha)\indicator{F_k}\right] \le (0.3)^{d/2} \mper\nonumber \end{align} \end{proposition} \begin{proposition}\label{prop:f_0} In the setting of Proposition~\ref{prop:f_k}, we have, \begin{align} \Exp\left[h(\alpha)\indicator{F_0}\right] \le (0.3)^{d/2}\mper\nonumber \end{align} \end{proposition} \noindent We see that Theorem~\ref{thm:kac-rice-zero} can be obtained as a direct consequence of Proposition~\ref{prop:f_0}, Proposition~\ref{prop:f_k} and Lemma~\ref{lem:using-kac-rice-zero}.
\begin{proof}[Proof of Theorem~\ref{thm:kac-rice-zero}] Using equation~\eqref{eqn:71} and Lemma~\ref{lem:using-kac-rice-zero}, we have that \begin{align} & \Exp\left[\left|L\cap L_1\cap L_2\right|\indicator{G_0}\right] \le \Exp\left[h(\alpha)\right] \le \Exp\left[h(\alpha)\Fzero\right]+ \sum_{k=1}^n \Exp\left[h(\alpha)\Fk\right] \nonumber\\ & \le (n+1)\cdot (0.3)^{d/2}\le 2^{-d/2}\mper\tag{by Propositions~\ref{prop:f_0} and~\ref{prop:f_k}} \end{align} \end{proof} \subsection{Local analysis} For a point $x$ in the local region $L_2^c$, the gradient and the Hessian are dominated by the contributions from the components that have a large correlation with $x$. Therefore it is easier to analyze the first- and second-order optimality conditions directly. Note that although this region is small, there are still exponentially many critical points (as an example, there is a critical point near $\frac{a_1+a_2}{\|a_1+a_2\|}$). Therefore our proof still needs to consider the second-order conditions and is more complicated than that of Anandkumar et al.~\cite{anandkumar2015learning}. We first show that all local maxima in this region must have very high correlation with one of the components: \begin{lemma} In the setting of Theorem~\ref{thm:local}, for any local maximum $x\in L_2$, there exists a component $a \in \{\pm a_i/\|a_i\|\}$ such that $\inner{x,a} \ge 0.99$. \end{lemma} Intuitively, if there is a unique component $a_i$ that is highly correlated with $x$, then the gradient $\grad f(x) = \sum_{j=1}^n \inner{a_j,x}^3 a_j$ will be even more correlated with $a_i$. On the other hand, if $x$ has similar correlation with two components (e.g., $x = \frac{a_1+a_2}{\|a_1+a_2\|}$), then $x$ is close to a saddle point, and the Hessian will have a positive direction that makes $x$ more correlated with one of the two components. Once $x$ has a very high correlation with one component ($|\inner{x,a_i}| \ge 0.99\|a_i\|$), we can prove that the gradient of $x$ is also in the same local region.
Therefore, by a fixed-point theorem, there must be a point where $\grad f(x) = x$, and it is a critical point. We also show that the function is strongly concave in the whole region where $|\inner{x,a_i}| \ge 0.99\|a_i\|$. Therefore we know the critical point is actually a local maximum.
\begin{document} \maketitle \begin{abstract} We consider optimal control problems for diffusion processes, where the objective functional is defined by a time-consistent dynamic risk measure. We focus on coherent risk measures defined by $g$-evaluations. For such problems, we construct a family of time and space perturbed systems with piecewise-constant control functions. We obtain a regularized optimal value function by a special mollification procedure. This allows us to establish a bound on the difference between the optimal value functions of the original problem and of the problem with piecewise-constant controls. \end{abstract} \section{Introduction} \label{s:intro} The first introduction of a coherent (static) risk measure, by Artzner \emph{et al.} \cite{AP1,AP2}, was motivated by the capital adequacy rules of the Basel Accord. A large volume of research was devoted to this area: F\"{o}llmer and Schied \cite{SF1} and Frittelli and Rosazza Gianin \cite{FR1} generalized it to convex risk measures, and Ruszczy\'{n}ski and Shapiro \cite{AR1} studied it from the perspective of optimization. Several classical references concerning static risk measures are \cite{SF2,FS,AR2,AR3}. Further development of the theory of risk measures led to a dynamic setting, in which the risk is measured at each time instance based on the updated information. The key condition of \emph{time-consistency} allows for dynamic programming formulations. The discrete-time case was extensively explored by Detlefsen and Scandolo \cite{DS1}, Bion-Nadal \cite{BN}, Cheridito et al. \cite{CD1,CK1}, F\"{o}llmer and Penner \cite{FP1}, Frittelli and Scandolo \cite{FR2}, Riedel \cite{RF}, and Ruszczy\'{n}ski and Shapiro \cite{AR2}. For the continuous-time case, Coquet, Hu, M{\'e}min and Peng \cite{coquet2002filtration} discovered that time-consistent dynamic risk measures can be represented as solutions of \emph{Backward Stochastic Differential Equations} (BSDE) (see also \cite{PSG1,gianin2006risk}).
Inspired by that, Barrieu and El Karoui provided a comprehensive study in \cite{BE1,BE2}, with further contributions made by Delbaen, Peng, and Rosazza Gianin \cite{delbaen2010representation}, and by Quenez and Sulem \cite{quenez2013bsdes} (for a more general model with L\'evy processes). In addition, applications to finance were considered, for example, in \cite{laeven2014robust}. Using the convergence results of Briand, Delyon and M\'emin \cite{BDM}, Stadje \cite{ST} found the drivers of BSDEs corresponding to discrete-time risk measures. As for control with risk aversion, in the discrete-time setting, Ruszczy\'{n}ski \cite{AR4}, \c{C}avu\c{s} and Ruszczy\'nski \cite{cavus2014risk}, and Fan and Ruszczy\'nski \cite{fan2015dynamic} developed the concept of a Markov risk measure and proposed risk-averse dynamic programming equations as well as computational methods. Our intention is to use continuous-time dynamic risk measures as objective functionals in optimal control problems for diffusion processes. While traditional continuous-time stochastic control is well developed and discussed in numerous books (see, e.g., \cite{FleSon,Krylov,Pham,XYZ}), the risk-averse case appears to be largely unexplored. In the present paper, we consider the risk-averse case with coherent risk measures given by $g$-evaluations. Such control problems are closely related to forward--backward systems of stochastic differential equations (FBSDE) (see \cite{MA,peng1999fully}). For controlled fully coupled FBSDEs, Li and Wei \cite{JLQW} obtained the dynamic programming equation and derived the corresponding Hamilton--Jacobi--Bellman equation. A maximum principle for forward--backward systems and the corresponding games was derived in \cite{oksendal2009maximum,oksendal2014forward}, including models with L\'evy processes. The contribution of this paper is the study of the accuracy of discrete-time approximations of risk-averse optimal control problems with coherent risk measures given by $g$-evaluations.
For the purpose of the study, we construct a family of perturbed systems with two types of perturbations: of the initial time and of the initial state. For such a family, we integrate the value functions of a piecewise-constant control with respect to the said initial time and state values. This yields regularized functions to which It\^{o} calculus can be applied. Using the earlier results on the Hamilton--Jacobi--Bellman equation for risk-averse problems, we establish an error bound of order $\Delta^{1/6}$ between the optimal values of the original system and a system with piecewise-constant controls with time step $\Delta$. Section \ref{s:foundations} has a synthetic character. We review in it the concept of $\Fb$-consistent evaluations and the connections to backward stochastic differential equations and dynamic risk measures. In \S \ref{s:cumul}, we formulate the risk-averse optimal control problem and study its basic properties. We also recall the dynamic programming equation and the risk-averse analog of the Hamilton--Jacobi--Bellman equation. In \S \ref{s:perturbed}, we construct a family of time and space perturbed problems. They are used in a specially designed mollification procedure in \S \ref{s:mollification}, which yields sufficiently smooth close approximations of the optimal value function. In \S \ref{s:accuracy}, we prove that the accuracy of the control policies restricted to piecewise-constant controls is of the order $h^{1/3}$, where $h^2$ is the time discretization step (equivalently, of order $\Delta^{1/6}$ with $\Delta = h^2$). \section{Foundations} \label{s:foundations} \subsection{Nonlinear Expectations and Dynamic Risk Measures} \label{s:ne} We establish a suitable framework and briefly review the concept of $\Fb$-consistent nonlinear expectations (for an extensive treatment, see \cite{PSG1}). For $0<T<\infty$, let $(\varOmega,\mathcal{F},\mathbb{P},\Fb)$ be a filtered probability space, where $\Fb=\left\{\mathcal{F}_t\right\}_{0 \le t \le T}$ is a filtration.
A vector-valued stochastic process $\{X_t\}_{0 \le t \le T}$ is said to be adapted to $\Fb$ if $X_t$ is an $\mathcal{F}_t$-measurable random variable for any $t\in [0,T]$. We introduce the following notation. \begin{tightitemize} \item $\mathbb{E}_{t}[\,\cdot\,] : = \mathbb{E}[\,\cdot\,|\,\mathcal{F}_t]$; \item $\text{P}^m[t,T]$: the set of $\Rb^m$-valued adapted processes on $[t, T]\times \varOmega$; \item $\xLtwo(\varOmega, \mathcal{F}_t, \mathbb{P};\Rb^m)$: the set of $\Rb^m$-valued $\mathcal{F}_t$-measurable random variables $\xi$ such that $\| \xi \|^2 := \mathbb{E}[\,|\xi|^2\,]<\infty$; for $m=1$, we write it $\xLtwo(\varOmega, \mathcal{F},\mathbb{P})$; \item $\Sc^{2,m}[t, T]$: the set of elements $Y\in \text{P}^m[t,T]$ such that $ \| Y \|^2_{\Sc^{2,m}[t, T]} := \mathbb{E} [\,\sup_{\,t\leq s\leq T}|Y_s|^2\,] < \infty$; for $m=1$, we write it $\Sc^{2}[t, T]$; \item $\Hc^{2,m}[t, T]$: the set of elements $Y\in \text{P}^m[t,T]$, such that $ \| Y \|^2_{\Hc^{2,m}[t, T]} := \mathbb{E}\Big[ \int_t^T |Y_s|^2\xdif s\Big] < \infty$; for $m=1$ we write it $\Hc^{2}[t, T]$;\footnote{When the norm is clear from the context, the subscripts are skipped.} \item $\Cc^{1,2}([t,T]\times\Rb^m)$ the space of functions $f:[t,T]\times\Rb^m\to\Rb$, which are differentiable with respect to the first argument and twice differentiable with respect to the second argument, with all these derivatives continuous with respect to both arguments; \item $\Cc^{1,2}_{\textup{b}}([t,T]\times\Rb^m)$ the space of functions $f\in \Cc^{1,2}([t,T]\times\Rb^m)$ with all derivatives bounded and continuous with respect to both arguments; \item $\Cc^\infty(B)$: the space of functions $f:B\to\mathbb{R}$ that are infinitely continuously differentiable with respect to all arguments and have compact support on $B\subset \mathbb{R}^n$. \end{tightitemize} With this notation, we can introduce the concept of a nonlinear expectation. 
\begin{dfntn} For\, $0 \le T< \infty$, a \emph{nonlinear expectation} is a functional $ \rho_{0,T}: \xLtwo(\varOmega,\mathcal{F}_T, \mathbb{P}) \to \Rb $ satisfying the strict monotonicity property: \begin{equation*} \begin{split} &\mbox{if } \xi_1\geq \xi_2 \mbox{ } \mbox{a.s.}, \mbox{ then } \rho_{0,T}[\,\xi_1\,] \geq \rho_{0,T}[\,\xi_2\,];\\ &\mbox{if } \xi_1\geq \xi_2 \mbox{ } \mbox{a.s.}, \text{ then } \rho_{0,T}[\,\xi_1\,] = \rho_{0,T}[\,\xi_2\,]\text{ if and only if } \xi_1 = \xi_2 \mbox{ a.s.}; \end{split} \end{equation*} and the constant preservation property: \[ \rho_{0,T}[\,c\1_{\varOmega}\,] = c,\quad \forall\;c\in \Rb, \] where $\1_A$ is the characteristic function of the event $A\in\mathcal{F}_T$. \end{dfntn} Based on that, the $\Fb$-consistent nonlinear expectation is defined as follows. \begin{dfntn} \label{d:consistent} For a filtered probability space $(\varOmega,\mathcal{F}, \mathbb{P}, \mathbb{F})$, a nonlinear expectation $\rho_{0,T}[\,\cdot\,]$ is \emph{$\Fb$-consistent} if for every $\xi\in \xLtwo(\varOmega, \mathcal{F}_T, \mathbb{P})$ and every $t\in [0,T]$ a random variable $\eta\in \xLtwo(\varOmega, \mathcal{F}_t, \mathbb{P})$ exists such that \begin{equation*} \rho_{0,T}[\,\xi\1_A\,]=\rho_{0,T}[\,\eta \1_A\,]\quad \forall A\in \mathcal{F}_t. \end{equation*} \end{dfntn} The variable $\eta$ in Definition \ref{d:consistent} is uniquely defined; we denote it by $\rho_{t,T}[\,\xi\,]$. It can be interpreted as a nonlinear conditional expectation of $\xi$ at time $t$. We can now define for every $t\in [0,T]$ the corresponding nonlinear expectation $\rho_{0,t}: \xLtwo(\varOmega,\mathcal{F}_t, \mathbb{P}) \to \Rb$ as follows: $ \rho_{0,t}[\,\xi\,] = \rho_{0,T}[\,\xi\,]$, for all $\xi\in \xLtwo(\varOmega, \mathcal{F}_t, \mathbb{P})$. In this way, a whole system of $\Fb$-consistent nonlinear expectations $\big\{ \rho_{s,t}\big\}_{0\le s \le t \le T}$ is defined.
\begin{prpstn} \label{p:nonev} If $\rho_{0,T}[\,\cdot\,]$ is an $\Fb$-consistent nonlinear expectation, then for all $0\leq t\leq T$ and all $\xi, \xi' \in \xLtwo(\varOmega,\mathcal{F}_T, \mathbb{P})$, it has the following properties: \begin{tightlist}{iii} \item \textbf{Generalized constant preservation}: If $\xi\in \xLtwo(\varOmega,\mathcal{F}_t, \mathbb{P})$, then $\rho_{t,t}[\,\xi\,]=\xi$; \item \textbf{Time consistency}: $\rho_{s,T}[\,\xi\,] = \rho_{s,t}[\,\rho_{t,T}[\,\xi\,]\,]$, for all $0\le s\leq t$; \item \textbf{Local property}: $\rho_{t,T}[\,\xi\1_A+\xi'\1_{A^c}\,]= \1_A \rho_{t,T}[\,\xi\,]+\1_{A^c} \rho_{t,T}[\,\xi'\,]$, for all $A\in \mathcal{F}_t$. \end{tightlist} \end{prpstn} It follows that $\mathbb{F}$-consistent nonlinear expectations are special cases of dynamic time-consistent measures of risk, enjoying a number of useful properties. They do not, however, have the properties of convexity, translation invariance, or positive homogeneity, unless additional assumptions are made. We shall return to this issue in the next subsection. \subsection{Backward Stochastic Differential Equations and $g$-Evaluations} \label{s:BSDE} A close relation exists between $\Fb$-consistent nonlinear expectations on the space $\xLtwo(\varOmega,\mathcal{F}, \mathbb{P})$, with the natural filtration of the Brownian motion, and backward stochastic differential equations (BSDE) \cite{PP1,PP2,PSG2}. We equip $(\varOmega,\mathcal{F}, \mathbb{P})$ with a $d$-dimensional Brownian filtration, i.e., $\mathcal{F}_t = \sigma \big\{\{W_s; 0\leq s\leq t\}\cup \mathcal{N}\big\}$, \noindent where $\mathcal{N}$ is the collection of $\mathbb{P}$-null sets in $\varOmega$. In this paper we consider the following one-dimensional BSDE: \begin{equation} \label{eq:BSDE} -\xdif Y_t=g(t, Y_t, Z_t)\xdif t - Z_t \xdif W_t, \quad Y_T = \xi, \end{equation} where the data is the pair $(\xi, g)$, called the \emph{terminal condition} and the \emph{generator} (or \emph{driver}), respectively.
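For intuition, the BSDE \eqref{eq:BSDE} admits a simple discretization on a recombining binomial tree: the backward recursion $Y_t = \Eb_t[Y_{t+\Delta}] + g(Z_t)\Delta$ with $Z_t$ recovered from the martingale-representation difference quotient. The sketch below is not part of the paper; it is a minimal illustration with an arbitrarily chosen coherent driver $g(z)=\kappa|z|$ (independent of $y$) and terminal condition $\xi = |W_T|$:

```python
import numpy as np

def g_evaluation_tree(psi, kappa=0.5, T=1.0, N=100):
    """Binomial-tree sketch of rho^g_{0,T}[psi(W_T)] for the driver g(z) = kappa*|z|.

    W is approximated by a scaled symmetric random walk; Z comes from the
    martingale-representation difference quotient on the tree.
    """
    dt = T / N
    dw = np.sqrt(dt)
    # Terminal layer: W_T takes the values (2k - N) * dw, k = 0..N.
    w = (2 * np.arange(N + 1) - N) * dw
    y = psi(w)
    for _ in range(N):
        z = (y[1:] - y[:-1]) / (2 * dw)           # discrete Z_t
        y = 0.5 * (y[1:] + y[:-1]) + kappa * np.abs(z) * dt
    return y[0]

risk = g_evaluation_tree(np.abs)                  # rho^g_{0,T}[|W_T|]
mean = g_evaluation_tree(np.abs, kappa=0.0)       # plain expectation (g == 0)
assert risk >= mean                               # g >= 0: evaluation dominates E
```

With $\kappa = 0$ the recursion collapses to iterated conditional expectation, so the result approximates $\Eb|W_T| = \sqrt{2T/\pi}$; a nonnegative driver can only increase the evaluation.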
Here, $\xi\in \xLtwo(\varOmega, \mathcal{F}_T, \mathbb{P})$, and $g:[0,T]\times \Rb\times \Rb^d \times\varOmega\rightarrow \Rb$ is a measurable function (with respect to the product $\sigma$-algebra), which is \emph{nonanticipative}, that is, $g(t, Y_t, Z_t)$ is $\mathcal{F}_t$-measurable for all $t\in [0,T]$. The solution of the BSDE is a pair of processes $(Y, Z)\in \Sc^2[0, T]\times \Hc^{2,d}[0, T]$ such that \begin{equation} \label{bsint} Y_t=\xi+\int^T_t\! g(s,Y_s,Z_s)\xdif s -\int^T_t\! Z_s\xdif W_s,\quad t\in[0, T]. \end{equation} The existence and uniqueness of the solution of (\ref{eq:BSDE}) can be guaranteed under the following assumption. \begin{assumption}[Pardoux and Peng \cite{PP1}] \label{a:pp1} \begin{tightlist}{ii} \item $g$ is jointly Lipschitz in $(y, z)$, i.e., a constant $K > 0$ exists such that for all $t\in [0,T]$, all $y_1,y_2\in \Rb$ and all $z_1,z_2\in \Rb^{d}$ we have \begin{equation*} |g(t, y_1, z_1)-g(t, y_2, z_2)| \leq K(|y_1-y_2|+|z_1-z_2|) \quad \textit{a.s.; } \end{equation*} \item the process $g(\cdot, 0, 0) \in \Hc^2[0, T]$. \end{tightlist} \end{assumption} Under Assumption \ref{a:pp1}, we can define the $g$-evaluation. \begin{dfntn} \label{d:gev} For each $0\leq t\leq T$ and $\xi\in \xLtwo(\varOmega, \mathcal{F}_T, \mathbb{P})$, the \emph{$g$-evaluation} at time $t$ is the operator $\rho^g_{t,T}: \xLtwo(\varOmega, \mathcal{F}_T, \mathbb{P})\to \xLtwo(\varOmega, \mathcal{F}_t, \mathbb{P}) $ defined as follows: \begin{equation} \label{g-evaluation} \rho^g_{t,T}[\,\xi\,] = Y_t, \end{equation} where $(Y, Z)\in \Sc^{2}[t, T]\times \Hc^{2,d}[t, T]$ is the unique solution of \eqref{eq:BSDE}. \end{dfntn} The following theorem reveals the relationship between $g$-evaluations and $\Fb$-consistent nonlinear expectations. \begin{thrm} \label{t:g-evaluation} Let the driver $g$ satisfy Assumption \Rref{a:pp1} and the condition: $g(\cdot, 0, 0) \equiv 0$ a.s.
Then the system of $g$-evaluations $\big(\rho^g_{t,T}\big)_{0\le t \le T}$ defined in \eqref{g-evaluation} is a system of $\Fb$-consistent nonlinear expectations. Furthermore, we have \begin{equation*} \lim_{s\uparrow t}\rho^g_{s,t}[\,\xi\,] = \xi,\quad \forall\, \xi\in \xLtwo(\varOmega, \mathcal{F}_t,\mathbb{P}), \, t\in [0,T]. \end{equation*} \end{thrm} Surprisingly, Coquet, Hu, M{\'e}min, and Peng proved in \cite{coquet2002filtration} that every $\Fb$-consistent nonlinear expectation which is ``dominated'' by $\rho^{\mu, \nu}_{0,T}$ (a $g$-evaluation with the driver $\mu |y| + \nu| z|$ with some $\nu, \mu >0$) is in fact a $g$-evaluation with some~$g$. The domination is understood as follows: $\rho_{0,T}[Y+\eta] - \rho_{0,T}[Y] \le \rho^{\mu,\nu}_{0,T}[\eta]$, for all $Y$, $\eta\in \xLtwo(\varOmega, \mathcal{F}_T,\mathbb{P})$.\\ From now on we shall use only $g$-evaluations as time-consistent dynamic measures of risk. To ensure desirable properties of the resulting measures of risk, we shall impose additional conditions on the driver $g$. \begin{assumption} \label{a:riskmeasure} The driver $g$ satisfies for almost all $t\in [0,T]$ the following conditions: \begin{tightlist}{iii} \item $g$ is deterministic and independent of $y$, that is, $g:[0,T]\times \Rb^d\to\Rb$, and $g(\cdot, 0) \equiv 0$; \item $g(t, \cdot)$ is convex for all $t\in [0,T]$; \item $g(t,\cdot)$ is positively homogeneous for all $t\in [0,T]$. \end{tightlist} \end{assumption} Under these conditions, one can derive new properties of the evaluations $\rho_{t,T}^g$, $t\in [0,T]$, in addition to the general properties of $\Fb$-consistent nonlinear expectations stated in Proposition \ref{p:nonev}. \begin{thrm} \label{t:geval-prop} Suppose $g$ satisfies Assumption \Rref{a:pp1} and condition (i) of Assumption \Rref{a:riskmeasure}. 
Then the system of $g$-evaluations $\rho_{t,r}^g$, $0\le t \le r \le T$ has the following properties: \begin{tightlist}{iii} \item \textbf{Normalization}: $\rho_{t,r}^g(0)=0$; \item \textbf{Translation Property}: for all $\xi\in \xLtwo(\varOmega, \mathcal{F}_r, \mathbb{P})$ and $\eta\in \xLtwo(\varOmega, \mathcal{F}_t, \mathbb{P})$, \[ \rho_{t,r}^g(\xi+\eta)=\rho_{t,r}^g(\xi)+\eta, \quad \text{a.s.}; \] \end{tightlist} If, additionally, condition (ii) of Assumption \Rref{a:riskmeasure} is satisfied, then $\rho_{t,r}^g$ has the following property: \begin{tightlist}{iii} \setcounter{enumi}{2} \item \textbf{Convexity}: for all $\xi,\xi'\in \xLtwo(\varOmega, \mathcal{F}_r, \mathbb{P})$ and all $\lambda \in L^\infty(\varOmega, \mathcal{F}_t, \mathbb{P})$ such that \mbox{$0 \leq \lambda \leq 1$}, \begin{equation*} \rho_{t,r}^g(\lambda \xi + (1-\lambda)\xi')\leq \lambda \rho_{t,r}^g(\xi)+(1-\lambda)\rho_{t,r}^g(\xi'),\quad \text{a.s.} \end{equation*} \end{tightlist} Moreover, if $g$ also satisfies condition (iii) of Assumption \Rref{a:riskmeasure}, then $\rho_{t,r}^g$ also has the following property: \begin{tightlist}{iv} \setcounter{enumi}{3} \item \textbf{Positive Homogeneity}: for all $\xi\in \xLtwo(\varOmega, \mathcal{F}_r, \mathbb{P})$ and all $\beta \in L^\infty(\varOmega, \mathcal{F}_t, \mathbb{P})$ such that $\beta \ge 0$, we have \[ \rho_{t,r}^g(\beta \xi) = \beta \rho_{t,r}^g(\xi), \quad \text{a.s.} \] \end{tightlist} It follows that under Assumptions \ref{a:pp1} and \ref{a:riskmeasure}, the $g$-evaluations $\rho_{t,r}^g$ are convex or coherent conditional measures of risk (depending on whether (iii) is assumed or not). Finally, we can derive their dual representation, by specializing the general results of \cite{BE2}. \begin{thrm} \label{t:geval-dual} Suppose $g$ satisfies Assumptions \Rref{a:pp1} and \Rref{a:riskmeasure}.
Then for all $0\le t \le r \le T$ and all $\xi\in \xLtwo(\varOmega, \mathcal{F}_r, \mathbb{P})$ we have \begin{equation} \label{geval-dual} \rho_{t,r}^g(\xi) = \sup_{\varGamma \in \mathcal{A}_{t,r}} \Eb\big[\,\varGamma \xi ~\big|~\mathcal{F}_t\,\big ] \end{equation} where $\mathcal{A}_{t,r}=\partial \rho_{t,r}^g(0)$ is defined as follows: \begin{equation} \label{A-set} \mathcal{A}_{t,r} = \left\{\,\exp\left(\int_t^r \gamma_s \xdif W_s - \frac{1}{2}\int_t^r\lvert\gamma_s\rvert^2\xdif s\right): \gamma \in \Hc^{2}[t, r], \ \gamma_s \in \partial g(s,0), \ s\in [t,r]\,\right\}. \end{equation} \end{thrm} \begin{crllr} \label{c:Gamma-bound} A constant $C$ exists, such that for all $0\le t \le r \le T$ and all $\varGamma_{t,r} \in \mathcal{A}_{t,r}$ we have \[ \|\varGamma_{t,r} -1\|^2 \le \frac{r-t}{T} e^{CT}. \] \end{crllr} \begin{proof} It follows from the definition of $\mathcal{A}_{t,r}$ that $\varGamma_{t,r}$ is the solution of the SDE \[ \xdif\varGamma_{t,s} = \gamma_s \varGamma_{t,s}\xdif W_s, \quad \gamma_s\in \partial g(s,0),\quad s\in [t,r],\quad \varGamma_{t,t}=1. \] Using the It\^{o} isometry, we obtain the chain of relations \[ \|\varGamma_{t,r} - 1\|^2 = \int_t^r \|\gamma_s\varGamma_{t,s}\|^2\xdif s\le \int_t^r \|\gamma_s\|^2\|\varGamma_{t,s}\|^2\xdif s \le \int_t^r \|\gamma_s\|^2\big( 1+ \|\varGamma_{t,s}-1\|^2\big)\xdif s. \] If $u$ is a uniform upper bound on the norm of the subgradients of $g(s,0)$, we deduce that $\|\varGamma_{t,s} - 1\|^2 \le \varDelta_s$, $s\in [t,r]$, where $\varDelta$ satisfies the ODE $\frac{\xdif}{\xdif s} \varDelta_s= u^2(1+\varDelta_s)$ with $\varDelta_t=0$. Consequently, \[ \|\varGamma_{t,r} - 1\|^2 \le \varDelta_r = e^{u^2(r-t)} -1. \] Since the function $s\mapsto e^{u^2 s}-1$ is convex and vanishes at $0$, we have $e^{u^2(r-t)}-1 \le \frac{r-t}{T}\big(e^{u^2 T}-1\big)$, which yields the postulated bound with $C = u^2$. \end{proof} \section{The Risk-Averse Control Problem}\label{s:cumul} \subsection{Problem Formulation} Our objective is to evaluate and optimize the risk of the cumulative cost generated by a diffusion process.
On the filtered probability space $(\varOmega, \mathcal{F}, \mathbb{P}, \mathbb{F})$, we consider control processes $u: [0, T]\times\varOmega\to U$ such that $u(\cdot)$ is $\Fb$-adapted, where $U\subset\mathbb{R}^m$ is a compact set, and a diffusion process under any such control with initial time $t\in[0, T]$ and state $x\in \mathbb{R}^n$: \begin{equation} \label{s:cdp} \left\{\begin{array}{ll} \xdif X^{t, x; u}_s=b(s, X^{t, x; u}_s, u_s)\xdif s+\sigma(s, X^{t, x; u}_s, u_s)\xdif W_s,\quad s\in [t,T],\\ \quad X_t^{t, x; u}=x.\end{array}\right. \end{equation} Here, $b: [0,T]\times \Rb^n\times U \to \Rb^n$ and $\sigma: [0,T]\times \Rb^n\times U\to\Rb^{n\times d}$ are Borel measurable functions. We also introduce the \emph{cost rate} function: a measurable map $c: [0,T]\times\Rb^n \times U \to \Rb$, and the \emph{final state cost}: a measurable function $\varPsi: \Rb^n \to \Rb$. Therefore, the random cost accumulated on the interval $[t, T]$ for any $t\in[0, T]$ can be expressed as follows: \begin{equation} \label{xiU} \xi_{t,T}(u,x) : = \int_t^T\,c(s,X_s^{t,x; u},u_s)\xdif s + \varPsi(X^{t,x; u}_T),\quad \mbox{ a.s.. } \end{equation} \begin{assumption} \label{s-assumption} A constant $K>0$ exists such that, for any $s\in[t, T]$ and $(x_1, u_1),(x_2, u_2)\in\Rb^n\times U$, the functions $b$, $\sigma$, $c$, and $\varPsi$ satisfy the following conditions: \begin{gather*} |b(s, x_1, u_1) - b(s, x_2, u_2)| + |\sigma(s, x_1, u_1)-\sigma(s, x_2, u_2)| + |c(s, x_1, u_1) - c(s, x_2, u_2) |\\ \leq K \big(|x_1-x_2| + |u_1-u_2|\big), \\ |b(s, x_1, u_1)| + |\sigma(s, x_1, u_1)| + |c(s, x_1, u_1)| + |\varPsi(x_1)| \leq K(1+ |x_1| +|u_1|\big). \end{gather*} \end{assumption} Under {Assumption} \ref{s-assumption}, the controlled diffusion process (\ref{s:cdp}) has a strong solution and the cost functional is square integrable. 
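For intuition, the forward dynamics \eqref{s:cdp} and the accumulated cost \eqref{xiU} can be simulated with a standard Euler--Maruyama scheme. The sketch below is illustrative only; the scalar coefficients $b$, $\sigma$, $c$, $\varPsi$ and the constant control are arbitrary choices compatible with Assumption \ref{s-assumption}:

```python
import numpy as np

def euler_cost(b, sigma, c, psi, u, x0=0.0, T=1.0, N=500, paths=20_000, seed=0):
    """Euler-Maruyama simulation of dX = b dt + sigma dW under a constant
    control u, returning samples of xi_{0,T}(u, x0) = int c ds + psi(X_T)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    x = np.full(paths, x0)
    cost = np.zeros(paths)
    for _ in range(N):
        cost += c(x, u) * dt
        x += b(x, u) * dt + sigma(x, u) * np.sqrt(dt) * rng.standard_normal(paths)
    return cost + psi(x)

# Ornstein-Uhlenbeck-type example: b = -(x - u), sigma = 1, running cost x^2.
xi = euler_cost(lambda x, u: -(x - u), lambda x, u: 1.0,
                lambda x, u: x**2, lambda x: 0.0, u=0.0)
risk_neutral = xi.mean()   # the g == 0 (expected-value) evaluation of the cost
```

For this example $\Eb[X_s^2] = (1-e^{-2s})/2$, so the risk-neutral value is $\int_0^1 \Eb[X_s^2]\,\xdif s \approx 0.284$; a nontrivial driver $g$ would replace the plain mean by the corresponding $g$-evaluation.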
We define the \emph{control value function} as follows: \begin{align} \label{eq:control-value} V^u(t, x):= \rho^g_{t, T}[\,\xi_{t, T}(u,x)\,] ,\quad \mbox{ a.s., } \end{align} where $\big\{\rho^g_{t,T}\big\}_{t\in [0,T]}$ is the system of $g$-evaluations discussed in section \ref{s:BSDE}. Using Definition \ref{d:gev}, we can express the control value function as follows: \begin{align*} V^u(t, x) &= \xi_{t, T}(u, x)+\int^T_t\,g(s, Z^{t, x; u}_s)\xdif s -\int^T_t\,Z^{t, x; u}_s\xdif W_s\\ &= \varPsi(X^{t, x; u}_T) + \int_t^T\,\big[ c(s,X^{t, x; u}_s,u_s) + g(s, Z^{t, x; u}_s)\big] \xdif s -\int^T_t Z^{t, x; u}_s\xdif W_s, \end{align*} where $(Y^{t, x; u},Z^{t, x; u})$ solve the following BSDE: \begin{equation} \label{s-fbsde} \left\{\begin{array}{ll} -\xdif Y^{t, x; u}_s = \big[ c(s, X^{t, x; u}_s,u_s) + g(s, Z^{t, x; u}_s)\big]\xdif s - Z^{t, x; u}_s\xdif W_s,\quad s\in[t, T],\\ Y^{t, x; u}_T = \varPsi(X^{t, x; u}_T).\end{array}\right. \end{equation} Equivalently, $V^u(t, x)= Y^{t, x; u}_t$. If Assumptions \ref{s-assumption}, \ref{a:pp1}, and \ref{a:riskmeasure} are satisfied, then for every $(t,x) \in [0,T]\times \mathbb{R}^n$, the BSDE \eqref{s-fbsde} has a unique solution $(Y^{t,x;u}, Z^{t,x;u})\in \Sc^{2}[t, T]\times \Hc^{2,d}[t, T]$ (see Peng \cite{PSG1}), and, therefore, the control value function is well-defined. In this way, the study of a risk-averse controlled system has been reduced to the study of controlled forward--backward stochastic differential equations (FBSDE). Such systems were extensively studied by Ma and Yong in \cite{MA}; other important references are \cite{AT,PT,Zhang,Yong}. In our case, the FBSDE is \emph{decoupled}, that is, the solution of the backward equation does not affect the forward equation, which substantially simplifies the analysis and allows for further advances. Notice that when the driver $g\equiv 0$, the control value function \eqref{eq:control-value} reduces to the expected value of \eqref{xiU}.
Risk aversion is incorporated if other drivers satisfying Assumption \ref{a:riskmeasure} are considered. By the comparison theorem of Peng \cite{PP2}, if $g_1$ is dominated by $g_2$, i.e., $g_1 \le g_2$, then $\rho_{t,T}^{g_1}(\xi_{t,T}(u,x)) \le \rho_{t,T}^{g_2}(\xi_{t,T}(u,x))$ almost surely; the larger the driver, the more risk aversion in the objective functional. For example, if we use $g_1(t,z) = \kappa|z|$ and $g_2(t,z) = \kappa |z_+|$, with $\kappa>0$, then $g_1$ dominates $g_2$. \subsection{Risk-Averse Dynamic Programming and Hamilton--Jacobi--Bellman Equations} We now proceed to the control problem. We define the {admissible control system} as in Yong and Zhu \cite[p. 177]{XYZ}. \begin{dfntn}\label{admissible-control} $\mathcal{U}[t, T]$ is called an \emph{admissible control system} if it satisfies the following conditions: \begin{tightlist}{iv} \item $(\varOmega, \mathcal{F}, \mathbb{P})$ is a complete probability space; \item $\{W(s)\}_{s\geq t}$ is a $d$-dimensional standard Brownian motion defined on $(\varOmega, \mathcal{F}, \mathbb{P})$ over $[t, T]$ and $\mathbb{F}^t = (\mathcal{F}^t_s)_{s\in [t, T]}$, where $\mathcal{F}_s^t = \sigma\{\{W_r; t \leq r \leq s\}\cup\mathscr{N}\}$ and $\mathscr{N}$ is the collection of all $\mathbb{P}$-null sets in $\mathcal{F}$; \item $u: [t, T]\times\varOmega \to U$ is an $\{\mathcal{F}_s^t\}_{s\geq t}$-adapted process with $\mathbb{E}\int_t^T |u_s|^2\xdif s < +\infty$; \item For any $x\in\mathbb{R}^n$ the system \eqref{s:cdp}--\eqref{s-fbsde} admits a unique solution $(X, Y, Z)$ on $(\varOmega,\mathcal{F},\mathbb{P}, \mathbb{F}^t)$. \end{tightlist} \end{dfntn} The \emph{optimal value function} $V:[0,T]\times\Rb^n\to\Rb$ is defined as follows: \begin{equation} \label{eq:valuefunction} V(t,x) = \inf_{u\in\mathcal{U}[t, T]} V^{u}(t, x).
\end{equation} The weak formulation of a risk-averse control problem is the following: given $(t, x)\in[0, T)\times\mathbb{R}^n$, find ${u}^*\in\mathcal{U}[t, T]$ such that \begin{equation} \label{w-formulation} V^{{u}^*}(t, x) = \inf_{u\in\mathcal{U}[t, T]} V^{u}(t, x). \end{equation} We can now formulate the dynamic programming equation for our control problem. \label{s:DP-HJB} \begin{thrm}[\cite{ARYAO}] \label{t:DP-equation} Suppose Assumptions \Rref{s-assumption}, \Rref{a:pp1}, and \Rref{a:riskmeasure} are satisfied. Then, for any $(t, x)\in[0, T)\times\Rb^n$ and all $r\in [t,T]$, we have \begin{equation} \label{DPE} V(t, x) = \inf_{u(\cdot)\in \mathcal{U}} \rho_{t,r}^g\bigg[ \int_t^r c\big(s,X^{t, x;u}_s,u_s\big)\xdif s + V\big(r, X^{t, x;u}_r\big)\bigg]. \end{equation} \end{thrm} For $\alpha\in U$ we define the \emph{second-order differential operator} $\Lb^\alpha$ as follows: for $w\in \Cc_{\textup{b}}^{1, 2}([0, T]\times\Rb^n)$ and $(t,x)\in [0,T]\times\Rb^n$, \[ \big[ \Lb^\alpha w \big] (t,x) = \partial_t w(t, x) + \sum^n_{i,j = 1} \frac{1}{2} \big(\sigma(t, x,\alpha)\sigma(t, x,\alpha)^\top\big)_{ij} \partial_{x_i x_j} w(t,x) + \sum_{i=1}^n b_i(t,x,\alpha)\partial_{x_i} w(t,x). \] On the space $\Cc^{1,2}_{\textup{b}}([0,T]\times\Rb^n)$, we consider the following equation \begin{equation} \label{RHJB-v} \min_{\alpha\in U} \Big\{c(t, x,\alpha) +\big[\Lb^{\alpha } v\big](t, x) + g\big(t, [\mathcal{D}_x v\cdot\sigma^{\alpha}](t,x)\big)\Big\} = 0, \quad (t,x)\in [0,T]\times\Rb^n, \end{equation} with the boundary condition \begin{equation} \label{RHJB-boundary} v(T,x) = \varPsi(x),\quad x\in \Rb^n. \end{equation} We call \eqref{RHJB-v}--\eqref{RHJB-boundary} the \emph{risk-averse Hamilton--Jacobi--Bellman equation} associated with the controlled system \eqref{s:cdp} and the risk functional \eqref{xiU}. It is a generalization of the classical Hamilton--Jacobi--Bellman equation with the extra term $g(\cdot,\cdot)$ responsible for risk aversion.
In the special case $g \equiv 0$, we recover the classical equation. The following two theorems can be derived from general results on fully coupled forward--backward systems in \cite{JLQW}. For decoupled systems, a direct proof is provided in \cite{ARYAO}. \begin{thrm} \label{t:RHJB-v} Suppose Assumptions \Rref{s-assumption}, \Rref{a:pp1}, and \Rref{a:riskmeasure} are satisfied; in addition, the functions $b$ and $\sigma$ are bounded in $x$. Then the value function $V(\cdot,\cdot)$ is a viscosity solution of the equation \eqref{RHJB-v}--\eqref{RHJB-boundary}. \end{thrm} It is clear that if $V\in \Cc^{1,2}_{\textup{b}}([t,T]\times\Rb^n)$ then it satisfies \eqref{RHJB-v}--\eqref{RHJB-boundary}. We can also prove the converse relation (\emph{verification theorem}). \begin{thrm} Suppose the assumptions of Theorem \Rref{t:RHJB-v} are fulfilled and let the function $K \in \Cc^{1,2}_{\textup{b}}([t,T]\times\Rb^n)$ satisfy \eqref{RHJB-v}--\eqref{RHJB-boundary}. Then $K(t, x) \leq V^u(t, x)$ for any control $u(\cdot)\in\mathcal{U}$ and all $(t,x)\in [0,T]\times \Rb^n$. Furthermore, if a control process $u^*(\cdot)\in \mathcal{U}$ exists that, together with the corresponding trajectory $X^{0, x;u^*}_s$, satisfies for almost all $(s,\omega)\in [0, T]\times \varOmega$ the relation \begin{equation} \label{u-star} u^*_s \in \argmin_{\alpha\in U} \Big\{c(s, X^{0, x;u^*}_s,\alpha) + \Lb^{\alpha} K(s, X^{0, x;u^*}_s) + g\big(s, [\mathcal{D}_x K\cdot \sigma^{\alpha}](s,X^{0, x;u^*}_s)\big)\Big\}, \end{equation} then $K(t, x) = V(t, x) = V^{u^*}(t,x)$ for all $(t,x)\in [0,T]\times\Rb^n$. \end{thrm} \section{Piecewise-Constant Control Policies and the Perturbed Problem} \label{s:perturbed} Let $h^2\in(0, 1]$ be the time discretization step; we work with the square of $h$ to simplify the further analysis.
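As a purely numerical illustration of the pointwise minimization in the risk-averse HJB equation \eqref{RHJB-v}, the sketch below evaluates, for a one-dimensional state and a finite grid of control values, the quantity $\min_\alpha\{c+\Lb^{\alpha}v+g(t,\Dc_x v\,\sigma^{\alpha})\}$ at a single point; the scalar coefficients and the finite control grid are assumptions made for this example only.

```python
def hjb_residual(t, x, v_t, v_x, v_xx, b, sigma, c, g, alphas):
    """Residual of the risk-averse HJB equation at one point (1-d state):
    min over a finite control grid of  c + L^a v + g(t, v_x * sigma),
    where L^a v = v_t + 0.5*sigma^2*v_xx + b*v_x."""
    best = float("inf")
    for a in alphas:
        s = sigma(t, x, a)
        val = (c(t, x, a) + v_t + 0.5 * s * s * v_xx
               + b(t, x, a) * v_x + g(t, v_x * s))
        best = min(best, val)
    return best
```

A smooth candidate solves the equation at $(t,x)$ exactly when this residual vanishes; applied on a space--time grid, the same routine can serve as a consistency check for a proposed value function.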
\begin{dfntn}\label{admissible-control-h} For any $h\in(0, 1]$ and $t\in [0,T)$, let $\mathcal{U}_h^t$ be the subset of $\mathcal{U}$ consisting of all $\Fb$-adapted processes $u$ that are constant on intervals $[t,t+ h^2)$, $[t+h^2,t+ 2h^2)$, \dots, $[t+kh^2,T]$, where $T-h^2 \le t+ kh^2 < T$. \end{dfntn} We define the corresponding value function $V_h:[0,T]\times\Rb^n\to\Rb$ as follows: \begin{equation} \label{optimal-value-h} V_h(t,x) = \inf_{u(\cdot)\in\mathcal{U}_h^t} V^{u}(t, x). \end{equation} We assume a stronger condition than Assumptions \ref{a:pp1} and \ref{s-assumption}. \begin{assumption}\label{ass:sde} Let $\mu(t, x, z, \alpha)$ stand for $\sigma(t, x, \alpha)$, $b(t,x, \alpha)$, $c(t, x, \alpha)$, $g(t, z)$, and $\varPhi(x)$\footnote{We sometimes write $\mu^{\alpha}$ instead of $\mu(\cdot,\cdot,\cdot,\alpha)$.}. We assume that a constant $K$ exists such that \begin{tightlist}{ii} \item For all $\alpha, \alpha_1, \alpha_2\in U$, $x, x_1, x_2\in\Rb^n$, $z, z_1, z_2\in\Rb^d$ we have \begin{gather*} |\mu(t, x, z, \alpha)|\leq K, \\ |\mu(t, x_1, z_1, \alpha_1) - \mu(t, x_2, z_2, \alpha_2)|\leq K\left(|x_1 - x_2| + |z_1 - z_2| + |\alpha_1 - \alpha_2|\right); \end{gather*} \item For all $\alpha\in U$, $s,t\in [0, T]$, $x\in\Rb^n$, $z\in\Rb^d$, we have \begin{align*} |\mu(t, x, z, \alpha) - \mu(s, x, z, \alpha)|\leq K|t-s|^{1/2}. \end{align*} \end{tightlist} \end{assumption} By general results on forward--backward systems, the system \eqref{s:cdp}, \eqref{xiU} and \eqref{s-fbsde} has a unique solution, and thus both functions, $V$ in \eqref{eq:valuefunction} and $V_h$ in \eqref{optimal-value-h}, are well-defined. In particular, they are both deterministic. We focus on the difference between the value functions $V$ and $V_h$. The idea is to embed the original control problem into a family of time and space perturbed problems, and then obtain a smooth approximation of the value function by means of an integral regularization (mollification).
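The time grid $t_i=t+ih^2$ underlying Definition \ref{admissible-control-h}, together with the evaluation of a piecewise-constant control on it, can be sketched as follows (the helper names are illustrative, not from the paper):

```python
import math

def time_grid(t, T, h):
    """Points t_i = t + i*h^2, i = 0,...,k, with T - h^2 <= t + k*h^2 < T,
    followed by the terminal time T (the last interval may be shorter)."""
    step = h * h
    k = max(0, math.ceil((T - t) / step) - 1)   # largest k with t + k*h^2 < T
    return [t + i * step for i in range(k + 1)] + [T]

def piecewise_control(grid, alphas):
    """Control u with u(s) = alphas[i] on [grid[i], grid[i+1])."""
    def u(s):
        for i in range(len(grid) - 1):
            if grid[i] <= s < grid[i + 1]:
                return alphas[i]
        return alphas[-1]   # s == T belongs to the last interval
    return u
```

For instance, with $t=0$, $T=1$, and $h=1/2$ the grid is $\{0, 0.25, 0.5, 0.75, 1\}$, and the control is frozen on each of the four subintervals.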
Let $B = \{(\tau, \zeta)\in \Rb\times\Rb^n: \tau\in(-1, 0),\; |\zeta| <1 \}$. Consider a time $t\in [0,T]$ and time instants $t_i=t+ih^2$, $i=0,1,\dots,k$ and $t_{k+1}=T$. For a piecewise-constant control $u_s = \alpha_i$, $s\in [t_i, t_{i+1})$, $i=0,1,\dots,k$, and perturbations $\beta_i=(\tau_i,\zeta_i)\in B$, $i= 0,1,\dots,k$, we define the perturbed controlled FBSDE system: \begin{align} \xdif\widetilde{X}_s &= b(s + \varepsilon^2\tau_i, \widetilde{X}_s+\varepsilon\zeta_i, \alpha_i)\xdif s + \sigma(s+\varepsilon^2\tau_i, \widetilde{X}_s +\varepsilon\zeta_i, \alpha_i)\,\xdif\widetilde{W}_s, \label{tildeX}\\ -\xdif\widetilde{Y}_s &= \big[ c(s+\varepsilon^2\tau_i, \widetilde{X}_s+\varepsilon\zeta_i,\alpha_i) + g(s+\varepsilon^2\tau_i, \widetilde{Z}_s)\big] \xdif s - \widetilde{Z}_s\,\xdif\widetilde{W}_s, \label{tildeY}\\ &\qquad s\in [t_i,t_{i+1}),\quad i=0,1,\dots,k,\notag \end{align} with a fixed $\varepsilon > 0$, with the initial condition $\widetilde{X}_t=x$, and with the final condition $\widetilde{Y}_T=\varPhi(\widetilde{X}_T)$. The process $\widetilde{W}$ is a Brownian motion. We assume here that $b(t,x,\alpha) = b(0,x,\alpha)$, $\sigma(t,x,\alpha) = \sigma(0,x,\alpha)$, $c(t,x,\alpha) = c(0,x,\alpha)$, and $g(t,z)= g(0,z)$ for all $t\in [-\varepsilon^2,0]$. We consider the following discrete-time optimal control problem associated with the system \eqref{tildeX}--\eqref{tildeY}. At each time~$t_i$, we select a control value $\alpha_i\in U$ and a perturbation $\beta_i\in B$. The system evolves to time $t_{i+1}$, when new controls $\alpha_{i+1}$ and $\beta_{i+1}$ are selected. The objective of the controller is to make $\widetilde{Y}_t$ as small as possible. From now on, we use $\bar{\alpha}$ and $\bar{\beta}$ to represent the random sequences $\alpha_i$ and $\beta_i$, for $i=0,1,\dots,k$.
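For a one-dimensional state, the perturbed forward dynamics \eqref{tildeX} with piecewise-constant controls and perturbations can be simulated by a straightforward Euler--Maruyama sketch; the scalar coefficients, the substep count, and all function names below are assumptions for illustration only.

```python
import math
import random

def euler_perturbed_path(x0, t, T, h, eps, b, sigma, alphas, betas,
                         n_sub=20, seed=0):
    """Euler-Maruyama sketch of the perturbed forward SDE: on the i-th
    interval of length h^2, the coefficients are evaluated at the shifted
    arguments (s + eps^2*tau_i, x + eps*zeta_i) with control alphas[i]."""
    rng = random.Random(seed)
    step = h * h
    x, s, i = x0, t, 0
    while s < T - 1e-12:
        tau, zeta = betas[min(i, len(betas) - 1)]
        a = alphas[min(i, len(alphas) - 1)]
        end = min(s + step, T)
        dt = (end - s) / n_sub
        for _ in range(n_sub):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x += (b(s + eps * eps * tau, x + eps * zeta, a) * dt
                  + sigma(s + eps * eps * tau, x + eps * zeta, a) * dw)
            s += dt
        i += 1
    return x
```

As a sanity check, with $\sigma\equiv 0$ and $b\equiv\alpha$ the scheme reproduces the deterministic drift $x_0+\alpha(T-t)$ regardless of the perturbations, since the shifts only move the evaluation point of constant coefficients.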
\begin{lmm} \label{l:shift} Functions $\widetilde{V}^{\bar{\alpha},\bar{\beta}}:\{t_0,t_1,\dots,t_{k}\}\times\Rb^n\to\Rb$ exist such that for all $x_i\in \Rb^n$, if the system \eqref{tildeX}--\eqref{tildeY} starts at time $t_i$ from $\widetilde{X}_{t_i}=x_i$, then $\widetilde{Y}_{t_i}=\widetilde{V}^{\bar{\alpha},\bar{\beta}}(t_i,x_i)$. Moreover, \begin{multline} \label{tildeV} \widetilde{V}^{\bar{\alpha},\bar{\beta}}(t_{i},x_i) = \rho^g_{t_i+\varepsilon^2\tau_i,{t_{i+1}+\varepsilon^2\tau_i}}\bigg[ \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i} \hspace{-0.5em}c\big(s, X_{s}^{t_i+\varepsilon^2\tau_i, x_i+\varepsilon\zeta_i; \alpha_i}, \alpha_i\big)\xdif s \\ {} + \widetilde{V}^{\bar{\alpha},\bar{\beta}}\big(t_{i+1},X_{t_{i+1}+\varepsilon^2\tau_i}^{t_i+ \varepsilon^2\tau_i,x_i +\varepsilon\zeta_i;\alpha_i}-\varepsilon\zeta_i\big) \bigg]. \end{multline} \end{lmm} \begin{proof} Identifying $\widetilde{W}_s$ with $W_{s+\varepsilon^2\tau_i}$ for $s\in[t_i,t_{i+1}]$, directly from equations \eqref{tildeX} and \eqref{s:cdp} we obtain: \begin{equation} \label{shift} \widetilde{X}_{s}^{t_i, x_i;\alpha_i}= X_{s+\varepsilon^2\tau_i}^{t_i+ \varepsilon^2\tau_i,x_i +\varepsilon\zeta_i;\alpha_i}-\varepsilon\zeta_i, \quad s\in [t_i,t_{i+1}],\quad\text{a.s.} \end{equation} With this substitution, the BSDE \eqref{tildeY}, integrated over $[t_i,t_{i+1}]$, is equivalent to: \begin{align*} \widetilde{Y}_{t_i} &= \widetilde{Y}_{t_{i+1}} + \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i} \hspace{-0.2em}\big[c\big(s, X_{s}^{t_i+\varepsilon^2\tau_i, x_i+\varepsilon\zeta_i; \alpha_i}, \alpha_i\big)+g(s, {Z}_s)\big]\xdif s - \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i}\hspace{-0.5em}{Z}_s\xdif W_s\\ & = \rho^g_{t_i+\varepsilon^2\tau_i,{t_{i+1}+\varepsilon^2\tau_i}}\left[ \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i} \hspace{-0.5em}c\big(s, X_{s}^{t_i+\varepsilon^2\tau_i, x_i+\varepsilon\zeta_i; \alpha_i}, \alpha_i\big)\xdif s + \widetilde{Y}_{t_{i+1}} \right].
\end{align*} By definition, $\widetilde{Y}_{T}=\varPhi(\widetilde{X}_T)$. Supposing $\widetilde{Y}_{t_{j+1}}=\widetilde{V}^{\bar{\alpha},\bar{\beta}}\big(t_{j+1},\widetilde{X}_{t_{j+1}}\big)$ for some $j$, and proceeding backwards in time, we conclude from the last equation that we can write $\widetilde{Y}_{t_{j}}=\widetilde{V}^{\bar{\alpha},\bar{\beta}}\big(t_{j},\widetilde{X}_{t_{j}}\big)$ for some function $\widetilde{V}^{\bar{\alpha},\bar{\beta}}(\cdot,\cdot)$. We can thus write the recursive relation \[ \widetilde{V}^{\bar{\alpha},\bar{\beta}}(t_{i},x_i) = \rho^g_{t_i+\varepsilon^2\tau_i,{t_{i+1}+\varepsilon^2\tau_i}}\left[ \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i} \hspace{-0.5em}c\big(s, X_{s}^{t_i+\varepsilon^2\tau_i, x_i+\varepsilon\zeta_i; \alpha_i}, \alpha_i\big)\xdif s + \widetilde{V}^{\bar{\alpha},\bar{\beta}}\big(t_{i+1},\widetilde{X}_{t_{i+1}}\big) \right]. \] Substitution of \eqref{shift} proves \eqref{tildeV}. \end{proof} Using this recursive relation, we define the value function of the optimally perturbed problem $\widetilde{V}_{h,\varepsilon}(t_i,x_i)$ at each time $t_i$ and the corresponding state $x_i$ as follows. At $t_{k+1}=T$ we set $\widetilde{V}_{h,\varepsilon}(T,x_T) = \varPhi(x_T)$, and then, proceeding backwards in time, \begin{multline*} \widetilde{V}_{h,\varepsilon}(t_i,x_{i}) = \inf_{\alpha_i\in U}\inf_{\beta_i\in B} \rho^g_{t_i+\varepsilon^2\tau_i,{t_{i+1}+\varepsilon^2\tau_i}}\bigg[ \int_{t_i+\varepsilon^2\tau_i}^{t_{i+1}+\varepsilon^2\tau_i} \hspace{-0.5em}c\big(s, X_{s}^{t_i+\varepsilon^2\tau_i, x_i+\varepsilon\zeta_i; \alpha_i}, \alpha_i\big)\xdif s\\ + \widetilde{V}_{h,\varepsilon}\big(t_{i+1},X_{t_{i+1}+\varepsilon^2\tau_i}^{t_i+ \varepsilon^2\tau_i,x_i +\varepsilon\zeta_i;\alpha_i}-\varepsilon\zeta_i\big) \bigg].
\end{multline*} This construction can be carried out for every $t\in [0,T]$ and the resulting points $t_i=t+ih^2$, thus defining a function $\widetilde{V}_{h,\varepsilon}:[0,T]\times\Rb^n\to\Rb$ which satisfies the relation \begin{multline} \label{V-recursive} \widetilde{V}_{h,\varepsilon}(t,x) = \inf_{\alpha\in U}\inf_{(\tau,\zeta)\in B} \rho^g_{t+\varepsilon^2\tau,{t+h^2+\varepsilon^2\tau}}\bigg[ \int_{t+\varepsilon^2\tau}^{t+h^2+\varepsilon^2\tau} \hspace{-0.5em}c\big(s, X_{s}^{t+\varepsilon^2\tau, x+\varepsilon\zeta; \alpha}, \alpha\big)\xdif s \\ {} + \widetilde{V}_{h,\varepsilon}\big(t+h^2,X_{t+h^2+\varepsilon^2\tau}^{t+ \varepsilon^2\tau,x +\varepsilon\zeta;\alpha}-\varepsilon\zeta\big) \bigg].\qquad \end{multline} If $t\in (T-h^2,T)$ we replace $t+h^2$ with $T$ in the above equation. The function $\widetilde{V}_{h,\varepsilon}(t,x)$ represents the optimal value of the perturbed problem starting at time $t$ from the state $x$ and proceeding with piecewise-constant controls and perturbations on intervals of length $h^2$ (except, perhaps, the last one, which ends at $T$). Let us stress that the perturbations are treated as additional controls in this construction. We now present a number of useful estimates from Krylov \cite{Krylov1}. \begin{lmm}\label{lem:krylov} For $t, r\in [0,T)$, $x, y\in\mathbb{R}^n$, and $\bar{\alpha}\in \mathcal{U}^t_h$, denote by $\widetilde{X}_s^{t,x;\bar{\alpha},\bar{\beta}}$ the solution of \eqref{tildeX} and by $X_s^{t,x;\bar{\alpha}}$ the solution of \eqref{s:cdp}, with the initial state $x\in\mathbb{R}^n$ at time~$t$.
Then \begin{align} \label{eq:estimateskrylov} &\mathbb{E}\Big[\,\sup_{t\le s\leq T}\big|\widetilde{X}_s^{t,x;\bar{\alpha}, \bar{\beta}} - X_s^{t, x; \bar{\alpha}}\big|^2\,\Big]\leq Ne^{NT}\varepsilon^2,\\ &\mathbb{E}\Big[\,\sup_{t \le s\leq T}\big|\widetilde{X}_s^{t,x;\bar{\alpha}, \bar{\beta}}-\widetilde{X}_s^{t,y; \bar{\alpha},\bar{\beta}}\big|^2\,\Big]\leq Ne^{NT}|x-y|^2,\\ &\mathbb{E}\Big[\,\sup_{\max(t,r)\leq s\leq T} \big|\widetilde{X}_s^{t,x;\bar{\alpha}, \bar{\beta}}-\widetilde{X}_s^{r,x;\bar{\alpha}, \bar{\beta}}\big|^2\,\Big]\leq Ne^{NT}|t-r|, \end{align} for $N>0$ depending on $(K, d, n)$ only. \end{lmm} The proof follows from Theorem 2.5.9 in \cite{Krylov}. These estimates can be used to derive the following bounds. \begin{lmm}\label{cor:be} A constant $N$ exists, depending on ($K$, $d$, $n$) only, such that:\vspace{1ex} \begin{tightlist}{ii} \item For $t\in[0, T]$ and $x\in\Rb^n$, we have $ \big|\widetilde{V}_{h,\varepsilon}(t, x) - V_h(t, x)\big| \leq Ne^{NT}\varepsilon$, \item For $t, r\in[0, T]$, and $x, y\in\Rb^n$, we have $ \big|\widetilde{V}_{h,\varepsilon}(t, x) - \widetilde{V}_{h,\varepsilon}(r, y)\big| \leq Ne^{NT}(|x-y|+|t-r|^{\frac{1}{2}})$. \end{tightlist} \end{lmm} \begin{proof} For fixed $\bar{\alpha}, \bar{\beta}$, recall that \begin{align*} V^{\bar{\alpha}}(t, x) &= \rho^g_{t, T}\bigg[\int_t^T\, c(r, X_r^{t, x; \bar{\alpha}}, \bar{\alpha}_r)\xdif r +\varPhi\big(X_T^{t,x;\bar{\alpha}}\big)\bigg], \\ \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t, x) &= \rho^g_{t, T}\bigg[\,\int_t^T\,c(r, \widetilde{X}_r^{t,x;\bar{\alpha},\bar{\beta}}, \bar{\alpha}_r)\xdif r + \varPhi\big(\widetilde{X}_T^{t,x;\bar{\alpha},\bar{\beta}}\big)\bigg].
\end{align*} By standard estimates for BSDE and Lemma \ref{lem:krylov}, we have with some $\gamma>0$ depending on $(K,d ,n)$, \begin{align*} \lefteqn{|V^{\bar{\alpha}}(t, x) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t, x)| \leq \mathbb{E}\Big[\,\big|\varPhi(X_T^{t,x;\bar{\alpha}}) - \varPhi(\widetilde{X}_T^{t,x;\bar{\alpha},\bar{\beta}})\big|^2e^{\gamma (T-t)}\,\Big]}\\ & {\qquad } + \mathbb{E}\Big[\,\int_t^T\, \big|c(r, X_r^{t,x;\bar{\alpha}}, \bar{\alpha}_r) - c(r, \widetilde{X}_r^{t,x;\bar{\alpha},\bar{\beta}}, \bar{\alpha}_r)\big|^2e^{\gamma(r-t)}\xdif r\,\Big]\\ & \leq ~K^2e^{\gamma(T-t)}\mathbb{E}\Big[\,\big|X_T^{t,x;\bar{\alpha}} - \widetilde{X}_T^{t,x;\bar{\alpha},\bar{\beta}}\big|^2\,\Big] + KT^2e^{\gamma(T-t)}\mathbb{E}\Big[\,\sup_{t\leq r\leq T}\big|X_r^{t,x;\bar{\alpha}} - \widetilde{X}_r^{t,x;\bar{\alpha},\bar{\beta}}\,\big|^2\,\Big] \leq Ne^{NT}\varepsilon. \end{align*} The first assertion follows. For (ii), we observe that \begin{align*} \big|\widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t,x) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(r, y)\big| \leq \big|\widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t,x) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t, y)\big| + \big|\widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t,y) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(r, y)\big|. \end{align*} Similarly to the proof of (i), by applying the second and third inequalities of Lemma \ref{lem:krylov}, we have \begin{align*} &\big|\widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t,x) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t, y)\big| \leq Ne^{NT}|x-y|, \\ & \big|\widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(t,y) - \widetilde{V}_{h,\varepsilon}^{\bar{\alpha}, \bar{\beta}}(r, y)\big|\leq Ne^{NT}|t-r|^\frac{1}{2}, \end{align*} which implies the postulated estimates.
\end{proof} \section{Mollification of the Value Function} \label{s:mollification} We now introduce an integral transformation of the value function. We take a non-negative function $\varphi\in \Cc^{\infty}(B)$ with \mbox{$\int_{B}\varphi(\tau, \zeta) \xdif\tau\xdif \zeta= 1$}, called a \emph{mollifier}. For $\varepsilon > 0$, we re-scale the {mollifier} as $\varphi_{\varepsilon}(\tau, \zeta) = \varepsilon^{-n-2}\varphi(\tau/\varepsilon^2, \zeta/\varepsilon)$, and we introduce the following notation for the convolution of the function $\widetilde{V}_{h,\varepsilon}$ with the re-scaled mollifier: \[ \widehat{V}_{h,\varepsilon}(t, x) = \big[ \widetilde{V}_{h,\varepsilon}\star \varphi_{\varepsilon}\big] (t, x) = \int_B \widetilde{V}_{h,\varepsilon}(t-\varepsilon^2\tau,x-\varepsilon \zeta) \varphi(\tau, \zeta)\xdif \tau\xdif \zeta, \] where $t\in [0,T-\varepsilon^2]$ and $x\in \Rb^n$. We shall need an estimate of the seminorm $\big\| \widehat{V}_{h,\varepsilon}\big\|_{2,1}$ defined as follows: \begin{multline*} \big\| w \big\|_{2,1} = \sup_{(t,x)}\big| w(t,x)\big| + \sup_{(t,x)}\big\|\Dc_x w(t,x)\big\| + \sup_{(t,x)}\big\|\Dc^2_{xx}w(t,x)\big\| + \sup_{(t,x)}\big|\partial_t w(t,x)\big| \\ {} + \sup_{(t,x),(s,y)}\frac{\big\|\Dc^2_{xx} w(t,x) - \Dc^2_{xx} w(s,y)\big\|}{|t-s|+|x-y|} + \sup_{(t,x),(s,y)}\frac{\big|\partial_t w(t,x) - \partial_t w(s,y)\big|}{|t-s|+|x-y|}. \end{multline*} In the formula above, we use $\Dc_x$ and $\Dc_{xx}^2$ to denote the gradient and the Hessian matrix, and the supremum is always over $(t,x),(s,y)\in [0,T-\varepsilon^2]\times \Rb^n$. \begin{lmm}\label{estimatesall} If $\varepsilon\geq h$, then $\Big\|\widehat{V}_{h,\varepsilon}\Big\|_{2,1}\leq Ne^{NT}\varepsilon^{-2}$ and $\Big|\widehat{V}_{h,\varepsilon}-\widetilde{V}_{h,\varepsilon}\Big|_0\leq Ne^{NT}\varepsilon$.
\end{lmm} \begin{proof} By elementary properties of the convolution, \begin{multline*} \frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(t, x)=\frac{\partial}{\partial t}\left(\widetilde{V}_{h,\varepsilon}*\varphi_{\varepsilon}\right)(t, x) = \left(\widetilde{V}_{h,\varepsilon}*\frac{\partial}{\partial t}\varphi_{\varepsilon}\right)(t, x) \\ = \varepsilon^{-2}\int_{B} \widetilde{V}_{h,\varepsilon}(t-\varepsilon^2\tau, x-\varepsilon\zeta)\frac{\partial\varphi}{\partial \tau}(\tau, \zeta)\xdif \tau \xdif \zeta. \end{multline*} Thus, due to Lemma \ref{cor:be}(ii), \begin{align*} \Big|\frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(t, x)\Big| & = \varepsilon^{-2}\Big|\int_{B} \widetilde{V}_{h,\varepsilon}(t-\varepsilon^2\tau, x-\varepsilon\zeta)\frac{\partial\varphi}{\partial \tau}(\tau, \zeta)\xdif \tau \xdif \zeta\Big|\\ & = \varepsilon^{-2}\Big|\int_{B}\big[\widetilde{V}_{h,\varepsilon}(t-\varepsilon^2\tau, x-\varepsilon\zeta) - \widetilde{V}_{h,\varepsilon}(t, x)\big]\frac{\partial\varphi}{\partial \tau}(\tau, \zeta)\xdif \tau\xdif\zeta\Big| \\ & \le 2 Ne^{NT}\varepsilon^{-1} \int_{B}\Big|\frac{\partial\varphi}{\partial \tau}(\tau, \zeta)\Big| \xdif \tau\xdif\zeta. \end{align*} We can thus increase $N$ to write for all $\varepsilon>0$ the inequality \[ \Big|\frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(t, x)\Big| \le Ne^{NT}\varepsilon^{-1}.
\] Similarly, after redefining $N$ in an appropriate way, \begin{align*} &\Big|\frac{\partial}{\partial x^i}\widehat{V}_{h,\varepsilon}\Big|_0 \leq Ne^{NT}\leq Ne^{NT}\varepsilon^{-1},\qquad\Big\|\frac{\partial^2}{\partial x^i\partial x^j}\widehat{V}_{h,\varepsilon}\Big\|_0 \leq Ne^{NT}\varepsilon^{-1},\\ & \Big|\frac{\partial^2}{\partial t^2}\widehat{V}_{h,\varepsilon}\Big|_0 + \Big\|\frac{\partial^3}{\partial x^i\partial x^j\partial t}\widehat{V}_{h,\varepsilon}\Big\|_0 \leq Ne^{NT}\varepsilon^{-3},\\ & \Big\|\frac{\partial^2}{\partial t \partial x^i}\widehat{V}_{h,\varepsilon}\Big\|_0 + \Big\|\frac{\partial^3}{\partial x^i\partial x^j\partial x^k}\widehat{V}_{h,\varepsilon}\Big\|_0 \leq Ne^{NT}\varepsilon^{-2}. \end{align*} It follows that \begin{align*} \Big|\frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(t, x) - \frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(s, y)\Big| &\leq |t-s|\big|\frac{\partial^2}{\partial t^2}\widehat{V}_{h,\varepsilon}\Big|_0 + N|x-y|\Big\|\frac{\partial^2}{\partial x^i\partial t}\widehat{V}_{h,\varepsilon}\Big\|_0\\ &\leq Ne^{NT}|t-s|\varepsilon^{-3} + Ne^{NT}|x-y|\varepsilon^{-2}\\ &= Ne^{NT}\varepsilon^{-2} \big( |t-s|\varepsilon^{-1}+|x-y|\big) . \end{align*} The last expression is less than $Ne^{NT}\varepsilon^{-2}\big(|t-s|^{\frac{1}{2}} + |x-y|\big)$ if $|t-s|\leq \varepsilon^2$. On the other hand, if $|t-s|\geq \varepsilon^2$, then \[ \Big|\frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(t, x) - \frac{\partial}{\partial t}\widehat{V}_{h,\varepsilon}(s, y)\Big| \leq 2\Big|\frac{\partial }{\partial t}\widehat{V}_{h,\varepsilon}\Big|_0 \leq Ne^{NT}\varepsilon^{-1}\leq Ne^{NT}\varepsilon^{-2}\big(|t-s|^{\frac{1}{2}} + |x-y|\big). \] Hence the inequality in question holds for all $t, s\in [0,T]$. 
In the same way one gets that \begin{align*} \Big\|\frac{\partial^2}{\partial x^i\partial x^j}\widehat{V}_{h,\varepsilon}(t, x) - \frac{\partial^2}{\partial x^i\partial x^j}\widehat{V}_{h,\varepsilon}(s, y)\Big\| \leq Ne^{NT}\varepsilon^{-2}\big(|t-s|^{\frac{1}{2}} + |x-y|\big). \end{align*} This proves the first inequality in the assertions. To prove the second one, we notice that Lemma \ref{cor:be} yields \begin{align*} \big|\widetilde{V}_{h,\varepsilon}(t, x) - \widehat{V}_{h,\varepsilon}(t, x)\big|\leq \int_{B}\big|\widetilde{V}_{h,\varepsilon}(t, x)-\widetilde{V}_{h,\varepsilon}(t-\varepsilon^2\tau, x-\varepsilon\zeta)\big|\varphi(\tau, \zeta)\xdif \tau \xdif\zeta\leq 2Ne^{NT}\varepsilon. \end{align*} We can thus adjust $N$, if needed, to establish the second estimate for all $\varepsilon>0$. \end{proof} We can now establish a dynamic programming bound for the mollified value function. \begin{lmm}\label{lem:mp1} Suppose Assumptions \Rref{a:riskmeasure} and \Rref{ass:sde} are satisfied. Then for all $x\in\Rb^n$, $t\in[0,T-\varepsilon^2-h^2]$, and all $\alpha\in U$ we have \begin{equation} \label{V-moll} \widehat{V}_{h,\varepsilon}\big(t, x\big) \le \rho^g_{t, t+h^2} \left[ \int_t^{t+h^2} \hspace{-0.5em}c(s, X_s^{t, x;\alpha}, \alpha)\xdif s + \widehat{V}_{h,\varepsilon}\big(t+h^2 , X_{t+h^2}^{t, x;\alpha}\big) \right] + Ne^{NT} h^2\varepsilon, \end{equation} where $N$ is a constant independent of $h$, $\varepsilon$, and $T$.
\end{lmm} \begin{proof} Fixing $\beta=(\tau,\zeta)$ on the right hand side of \eqref{V-recursive}, for every $\alpha\in U$ we obtain the inequality \[ \widetilde{V}_{h,\varepsilon}(t,x) \le \rho^g_{t+\varepsilon^2\tau,{t+h^2+\varepsilon^2\tau}}\bigg[ \int_{t+\varepsilon^2\tau}^{t+h^2+\varepsilon^2\tau}\hspace{-0.5em} c\big(s, X_{s}^{t+\varepsilon^2\tau, x+\varepsilon\zeta; \alpha}, \alpha\big)\xdif s + \widetilde{V}_{h,\varepsilon}\big(t+h^2,X_{t+h^2+\varepsilon^2\tau}^{t+ \varepsilon^2\tau,x +\varepsilon\zeta;\alpha}-\varepsilon\zeta\big) \bigg].\qquad \] Since $t\le T-\varepsilon^2-h^2$, we can substitute $t-\varepsilon^2\tau$ for $t$ and $x-\varepsilon\zeta$ for $x$. We obtain \[ \widetilde{V}_{h,\varepsilon}\big(t-\varepsilon^2\tau, x-\varepsilon\zeta\big) \le \rho^g_{t, t+h^2} \left[ \int_t^{t+h^2} \hspace{-0.5em} c(s, X_s^{t, x;\alpha}, \alpha)\xdif s + \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) \right]. \] By virtue of Theorem \ref{t:geval-prop}, the risk measure $\rho^g_{t, t+h^2}[\cdot]$ is subadditive, and thus \begin{equation} \label{V-pre-moll} \begin{aligned} \widetilde{V}_{h,\varepsilon}\big(t-\varepsilon^2\tau, x-\varepsilon\zeta\big) &\le \rho^g_{t, t+h^2} \left[ \int_t^{t+h^2} \hspace{-0.5em} c(s, X_s^{t, x;\alpha}, \alpha)\xdif s + \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big) \right] \\ & {\quad}+ \rho^g_{t, t+h^2} \left[\widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right]. \end{aligned} \end{equation} The last term on the right hand side of \eqref{V-pre-moll} can be bounded by using the dual representation of the risk measure $\rho^g_{t, t+h^2}[\cdot]$.
We can thus write the following chain of relations: \begin{align*} \lefteqn{ \rho^g_{t, t+h^2} \left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right]}\quad \\ & = \sup_{\varGamma\in \mathcal{A}_{t, t+h^2}}\hspace{-0.5em} \Eb_{t}\left[\varGamma\left( \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right)\right]\\ & = \Eb_{t}\left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right]\\ &{\ }+ \sup_{\varGamma\in \mathcal{A}_{t, t+h^2}} \hspace{-0.5em}\Eb_{t}\left[(\varGamma-1)\left( \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right)\right]\\ & \le \Eb_{t}\left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right]\\ &{\ }+ \sup_{\varGamma\in \mathcal{A}_{t, t+h^2}} \hspace{-0.5em}\big\| \varGamma-1 \big\| \left\| \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right\|.
\end{align*} Owing to Corollary \ref{c:Gamma-bound} and Lemma \ref{estimatesall}(ii), we obtain the estimate \begin{align*} \lefteqn{ \rho^g_{t, t+h^2} \left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big) \right]}\quad \\ & \le \Eb_{t}\left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right] + Ne^{NT} h^2\varepsilon. \end{align*} Substitution for the last term in \eqref{V-pre-moll} yields \begin{multline} \label{V-pre-moll2} \widetilde{V}_{h,\varepsilon}\big(t-\varepsilon^2\tau, x-\varepsilon\zeta\big)\le\rho^g_{t, t+h^2} \left[ \int_t^{t+h^2} \hspace{-0.5em} c(s, X_s^{t, x;u}, \alpha)\xdif s + \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big) \right] \\ {}+ \Eb_{t}\left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right] + Ne^{NT} h^2\varepsilon. \end{multline} We now multiply both sides of \eqref{V-pre-moll2} by $\varphi(\tau, \zeta)$ and integrate over $B$. By changing the order of integration in the expected value term of \eqref{V-pre-moll2} we observe that \begin{multline*} \int_B \Eb_{t}\left[ \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big) \right] \varphi(\tau, \zeta)\xdif \tau\xdif \zeta \\ = \Eb_{t} \left[\int_B \left( \widetilde{V}_{h,\varepsilon}\big(t+h^2 -\varepsilon^2\tau, X_{t+h^2}^{t, x;\alpha} -\varepsilon\zeta\big) - \widehat{V}_{h,\varepsilon}\big(t+h^2, X_{t+h^2}^{t, x;\alpha}\big)\right) \varphi(\tau, \zeta)\xdif \tau\xdif \zeta \right]= 0. 
\end{multline*} Other terms on the right hand side of \eqref{V-pre-moll2} do not depend on $(\tau,\zeta)$ and thus \eqref{V-moll} follows. \end{proof} \section{Accuracy of the Approximation} \label{s:accuracy} We can now investigate the effect of the size of the discretization interval, $h^2$, on the accuracy of the value function approximation. For simplicity of presentation, we write $\sigma^\alpha(s,x)$ for $\sigma(s,x,\alpha)$ and $c^\alpha(s,x)$ for $c(s,x,\alpha)$. \begin{lmm}\label{lem:mp2} For any $w\in \Cc_b^{1,2}([0, T]\times\Rb^n)$, any $ 0 \le t \le \theta \le T$, and all $u(\cdot)\in \mathcal{U}$ we have: \begin{equation} \label{estimate-Ito} \begin{aligned} w(t, x) = \rho^g_{t,\theta} \bigg[ \int_t^{\theta} c(s, X_s^{t, x; u}, u_s)\xdif s + w(\theta, X_{\theta}^{t,x; u}) - \bar{\zeta}\bigg], \end{aligned} \end{equation} where \begin{equation} \label{zeta} \bar{\zeta} = \int_t^{\theta} \Big\{ \big[c^{u_s}+ \Lb^{u_s}w\big](s, X_s^{t,x;u}) + g\big(s, [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\big) \Big\}\xdif s. \end{equation} \end{lmm} \begin{proof} For any $u(\cdot)\in \mathcal{U}$, we apply the It\^{o} formula to $w(s, X_s^{t,x;u})$: \begin{equation*} w(\theta, X_{\theta}^{t,x;u}) - w(t, x) - \int_t^{\theta} \big[\Lb^{u_s}w\big](s, X_s^{t,x;u})\xdif s = \int_t^{\theta} [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\xdif W_s.
\end{equation*} Subtracting $\int_t^{\theta} g\big(s, \big[\mathcal{D}_x w\cdot\sigma^{u_s}\big](s, X_s^{t,x;u})\big)\xdif s$ from both sides and evaluating the risk measure of both sides yields \begin{equation} \begin{split} \label{trick} \rho^g_{t,\theta} \bigg[ w(\theta, X_{\theta}^{t,x;u}) - w(t,x) - \int_t^{\theta} \Big( \big[\Lb^{u_s}w\big](s, X_s^{t,x;u}) + g\big(s, [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\big)\Big)\xdif s \bigg] \\ = \rho^g_{t,\theta} \bigg[ \int_t^{\theta} [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\xdif W_s -\int_t^{\theta} g\big(s, [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\big)\xdif s \bigg]. \end{split} \end{equation} The risk measure on the right hand side of \eqref{trick} is the solution of the following backward stochastic differential equation: \begin{equation*} \begin{split} Y_t^{t,x;u} = \int_t^{\theta} [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\xdif W_s -\int_t^{\theta} g\big(s, [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\big)\xdif s \\ {} + \int_t^{\theta} g(s, Z_s^{t,x;u})\xdif s - \int_t^{\theta} Z_s^{t,x;u}\xdif W_s. \end{split} \end{equation*} Substitution of $Z_s^{t,x;u} = [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})$ yields $Y^{t,x;u}_t=0$. By the uniqueness of the solution of BSDE, the right hand side of \eqref{trick} is zero. Using the translation property on the left hand side of \eqref{trick}, we obtain \begin{equation*} w(t, x) = \rho^g_{t,\theta} \bigg[ - \int_t^{\theta} \Big(\big[\Lb^{u_s}w\big](s, X_s^{t,x;u}) + g\big(s, [\mathcal{D}_x w\cdot\sigma^{u_s}](s, X_s^{t,x;u})\big)\Big)\xdif s + w\big(\theta, X_\theta^{t,x;u}\big)\bigg]. \end{equation*} This is the same as \eqref{estimate-Ito}. \end{proof} The integral in \eqref{zeta} can be bounded by the following lemma.
\begin{lmm} \label{l:estimate-alpha} For all $t\in [0,T-h^2-\varepsilon^2]$, $x\in \Rb^n$, and all $\alpha\in U$, we have \begin{equation} \label{estimate-alpha} [ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](t,x) + g\big(t,[\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](t,x)\big) \ge -Ne^{NT}\left( \varepsilon + \frac{h}{\varepsilon^2}\right), \end{equation} where the constant $N$ does not depend on $h$, $\varepsilon$, and $T$. \end{lmm} \begin{proof} By Lemma \ref{lem:mp1}, for every $\alpha\in U$ we have \[ \widehat{V}_{h,\varepsilon}\big(t, x\big) \le \rho^g_{t, t+h^2} \left[ \int_t^{t+h^2} \hspace{-0.5em}c(s, X_s^{t, x;\alpha}, \alpha)\xdif s + \widehat{V}_{h,\varepsilon}\big(t+h^2 , X_{t+h^2}^{t, x;\alpha}\big) \right]+Ne^{NT} h^2\varepsilon. \] Using the translation property of $\rho^g_{t,t+h^2}$, we obtain the inequality: \[ \rho^g_{{t},{t}+h^2}\left( \int_{t}^{t+h^2} c\big(s,X^{t, x;\alpha}_s,\alpha\big)\xdif s + \widehat{V}_{h,\varepsilon}\big(t+h^2, X^{t, x;\alpha}_{t+h^2}\big)- \widehat{V}_{h,\varepsilon}(t, x)\right)\ge -Ne^{NT} h^2\varepsilon. \] Since $\widehat{V}_{h,\varepsilon}\in \Cc^{1,2}_{\textup{b}}([t,T-\varepsilon^2]\times\Rb^n)$, we can evaluate the difference $\widehat{V}_{h,\varepsilon}\big(t+h^2, X^{t, x;\alpha}_{t+h^2}\big)-\widehat{V}_{h,\varepsilon}(t,x)$ by the It\^{o} formula between $t$ and $t+h^2$: \[ \widehat{V}_{h,\varepsilon}\big(t+h^2, X^{t, x;\alpha}_{t+h^2}\big) - \widehat{V}_{h,\varepsilon}(t, x) = \int_{t}^{t+h^2}[\Lb^{\alpha} \widehat{V}_{h,\varepsilon}](s, X^{t, x;\alpha}_s) \xdif s + \int_{t}^{t+h^2} [\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s) \xdif W_s.
\] Substitution into the previous inequality yields: \begin{equation} \label{rho-in-HJBa} \rho^g_{t,t+h^2}\Bigg( \int_{t}^{t+h^2} [ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](s, X^{t, x;\alpha}_s) \xdif s {} + \int_{t}^{t+h^2} [\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s) \xdif W_s\Bigg) \ge -Ne^{NT} h^2\varepsilon. \end{equation} The evaluation of the risk measure amounts to solving the following backward stochastic differential equation: \begin{multline*} \qquad Y_{t} = \int_{t}^{t+h^2} [ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](s, X^{t, x;\alpha}_s) \xdif s + \int_{t}^{t+h^2} [\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s) \xdif W_s \\ + \int_{t}^{t+h^2} g(s,Z_s)\xdif s - \int_{t}^{t+h^2} Z_s\xdif W_s.\qquad \end{multline*} The equation has a unique solution: \begin{gather*} Z_s = [\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s),\quad t \le s \le t+h^2, \\ Y_{t} = \int_{t}^{t+h^2} \Big\{[ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](s, X^{t, x;\alpha}_s) + g\big(s,[\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s)\big)\Big\} \xdif s. \end{gather*} We can thus write the inequality \begin{align*} Y_t \le {} & h^2 \left( [ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](t,x) + g\big(t,[\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](t,x)\big)\right)\\ &{} + h^2 \max_{t \le s \le t+h^2}\Eb_{t} \bigg\{ \Big| [ c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](s, X^{t, x;\alpha}_s) - [c^\alpha + \Lb^{\alpha } \widehat{V}_{h,\varepsilon}](t,x)\Big| \bigg\}\\ &{} + h^2 \max_{t \le s \le t+h^2}\Eb_{t} \bigg\{ \Big| g\big(s,[\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](s, X^{t, x;\alpha}_s)\big) - g\big(t,[\mathcal{D}_x \widehat{V}_{h,\varepsilon} \cdot \sigma^\alpha](t,x)\big)\Big| \bigg\}. 
\end{align*} The last two terms can be bounded by $Ne^{NT}h^3 / \varepsilon^{2}$, owing to Assumption \ref{ass:sde} and Lemma \ref{estimatesall}. Combining this inequality with \eqref{rho-in-HJBa} and dividing by $h^2$, we conclude that for all $\alpha\in U$ the estimate \eqref{estimate-alpha} is true. \end{proof} We are now ready to prove the main theorem of this section. \begin{thrm} \label{t:error-estimate} Suppose Assumptions \Rref{a:riskmeasure} and \Rref{ass:sde} are satisfied. Then for any $t\in[0, T]$, $x\in\Rb^n$, and $h\in(0, 1]$, we have \begin{align*} |V(t,x) - V_h(t, x)|\leq Ne^{NT}h^{\frac{1}{3}}, \end{align*} where the constant $N$ depends only on $(K, n, d)$. \end{thrm} \begin{proof} We set $\varepsilon = h^{\frac{1}{3}}$ and organize the proof in three steps. \emph{Step 1: } If $t\in [T-h^2-\varepsilon^2, T]$, then for any $u(\cdot)$ and some constant $C$ we have, \begin{align*} \big|V^{u}(t, x)-\varPhi(x)\big| &\leq \rho_{t, T}\bigg(\int_t^T \big| c(s, X^{t, x, u}_s, u_s)\big|\xdif s + \big| \varPhi(X_{T}^{t, x, u}) - \varPhi(x)\big| \bigg)\\ &\leq K ( h^2+\varepsilon^2) + K \mathbb{E}_{t, x}\left[|X_{T}^{t, x, u} - x|\right] \leq K(1+C)( h^2+\varepsilon^2) \le 2 K(1+C)h^{\frac{2}{3}}. \end{align*} In the above estimate we also used the fact that the solution of the forward--backward system \eqref{s:cdp}--\eqref{s-fbsde} is Lipschitz in the initial condition \cite{MA}. The same reasoning works for $V^{u}_h$, and thus \begin{align*} |V^{u}_h(s, x) - \varPhi(x)|\leq 2 K(1+C)h^{\frac{2}{3}}. \end{align*} We can, therefore, for some constant $N$ write the inequality \begin{align*} |V^u(t, x) - V^u_h(t, x)|\leq Ne^{NT}h^{\frac{1}{3}}. \end{align*} The optimization over $u$ will not make it worse, and thus our assertion is true for these $t$. \emph{Step 2: } Consider $t\in [0, T-h^2-\varepsilon^2]$. 
By Lemma \ref{lem:mp2}, for all $u(\cdot)\in \mathcal{U}$ on $[t, T-h^2-\varepsilon^2]$, we have \[ \widehat{V}_{h,\varepsilon}\big(t, x\big) \le \rho^g_{t,T-h^2-\varepsilon^2}\left(\int_t^{T-h^2-\varepsilon^2}c(s, X_s^{t, x, u}, u_s)\xdif s + \widehat{V}_{h,\varepsilon}\big(T-h^2-\varepsilon^2, X_{T-h^2-\varepsilon^2}^{t, x, u}\big) - \bar{\zeta}\right), \] where, owing to Lemma \ref{l:estimate-alpha}, \[ \bar{\zeta} = \int_t^{T-h^2-\varepsilon^2} \left(\big[ c^{u_s} + \Lb^{u_s} \widehat{V}_{h,\varepsilon}\big](s, x) + g(s, [\mathcal{D}_x \widehat{V}_{h,\varepsilon}\cdot\sigma^{u_s}](s, x))\right)\xdif s \ge -NTe^{NT}\left( \varepsilon + \frac{h}{\varepsilon^2}\right). \] These relations, the monotonicity of the risk measure, and Lemmas \ref{cor:be} and \ref{estimatesall} imply the estimate \begin{multline*} {V}_{h}(t, x) \le \rho^g_{t,T-h^2-\varepsilon^2}\bigg(\int_t^{T-h^2-\varepsilon^2}c(s, X_s^{t, x, u}, u_s)\xdif s\\ {} + {V}_{h}\big(T-h^2-\varepsilon^2, X_{T-h^2-\varepsilon^2}^{t, x, u}\big) \bigg) {} + NTe^{NT}\left( \varepsilon + \frac{h}{\varepsilon^2}\right)+4 Ne^{NT}\varepsilon. \end{multline*} In view of the inequality established in Step 1, using $\varepsilon = h^{\frac{1}{3}}$, and redefining $N$ appropriately, we obtain the following inequality: \begin{equation} \label{Vh-upper} {V}_{h}(t, x) \le \rho^g_{t,T-h^2-\varepsilon^2}\bigg(\int_t^{T-h^2-\varepsilon^2}c(s, X_s^{t, x, u}, u_s)\xdif s + {V}\big(T-h^2-\varepsilon^2, X_{T-h^2-\varepsilon^2}^{t, x, u}\big) \bigg) + Ne^{NT}h^{\frac{1}{3}}. \end{equation} \emph{Step 3: } We apply the dynamic programming equation \eqref{DPE} to the right-hand side of \eqref{Vh-upper} to conclude that \begin{multline*} {V}_{h}(t, x) \le \inf_{u(\cdot)\in \mathcal{U}}\rho^g_{t,T-h^2-\varepsilon^2}\bigg(\int_t^{T-h^2-\varepsilon^2}c(s, X_s^{t, x, u}, u_s)\xdif s \\{} + {V}\big(T-h^2-\varepsilon^2, X_{T-h^2-\varepsilon^2}^{t, x, u}\big) \bigg) {} + Ne^{NT}h^{\frac{1}{3}} = {V}(t, x) + Ne^{NT}h^{\frac{1}{3}}, \end{multline*} as required.
\end{proof}
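The exponent $1/3$ comes from balancing the two error contributions in Lemma \ref{l:estimate-alpha}: with $\varepsilon = h^{1/3}$ the terms $\varepsilon$ and $h/\varepsilon^2$ coincide, and no other choice of $\varepsilon$ improves on the order $h^{1/3}$. A small numerical sketch of this balancing (illustrative only, with the constants $Ne^{NT}$ suppressed):

```python
import math

def error_bound(h, eps):
    """Order of the bound in Lemma [estimate-alpha], constants suppressed."""
    return eps + h / eps**2

def balanced_eps(h):
    # eps = h^(1/3) equalizes the two terms: eps = h/eps^2  <=>  eps^3 = h.
    return h ** (1.0 / 3.0)

# With eps = h^(1/3) both terms equal h^(1/3), so the bound is 2*h^(1/3).
for h in (1e-2, 1e-4, 1e-6):
    eps = balanced_eps(h)
    assert math.isclose(error_bound(h, eps), 2.0 * h ** (1.0 / 3.0), rel_tol=1e-12)

# No eps can push the bound below h^(1/3): one of the two terms always dominates.
h = 1e-6
grid = [10 ** (-k / 10.0) for k in range(1, 60)]
assert min(error_bound(h, e) for e in grid) >= h ** (1.0 / 3.0)
```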
Archiving Digital Pictures by date taken! Started Jan 26, 2013 | Discussions thread I am nearly finished assembling a 12-month sequence of shots from the same location, hopefully to reflect the changes in the seasons. I have used a D700, a D800, and recently a D800E. By default Bridge seems to sort off the image number, so with three different cameras there was some image-number duplication which did not correspond to the actual date each image was taken. As per Steve's suggestion, I went to the menu item 'Sort' and selected 'Date created'; about 5 seconds later 150 full-fat TIFF files were in the correct sequence.
TITLE: Proving Limit False QUESTION [4 upvotes]: I'm trying to prove that the limit of sin x as x->infinity is not equal to 1/2. I know that this is true, but I can't seem to figure out how to prove it using the precise definition of a limit. What I have so far is this: for every e>0 there is M>0 such that abs(sin x - 1/2)<e whenever x>M. I also think that I can use the fact that abs(sin x)<=1 for all x, but I don't know for sure. I've also seen some really extensive proofs for stuff like sin x/x (ref 1). So I may be doing this completely wrong. Thanks for any help. http://tutorial.math.lamar.edu/Classes/CalcI/ProofTrigDeriv.aspx REPLY [3 votes]: Choose $\epsilon_0 = 1/4$. Then for any $M>0$, choose an integer $K$ such that $K\pi>M$. Then $$|\sin(K\pi)-1/2|=1/2>1/4=\epsilon_0.$$ Thus by the definition of limit, $$\displaystyle\lim_{x\to\infty}\sin x\neq 1/2.$$ REPLY [2 votes]: (Do see comments below.) Well, for this to be true, it must hold that $$\forall \epsilon > 0\ \exists \delta > 0\ \forall x \in P^-(\infty, \delta): \left|\sin x - \tfrac{1}{2}\right| < \epsilon.$$ Obviously, this does not hold for $x \in \{\frac\pi{2} + 2k\pi, k\in \mathbb{Z}\}$, for example: $\sin x$ always attains the value $1$ at those points. REPLY [1 votes]: Consider the following theorem: If $\lim_{x\to \infty } f(x)=L$ and $x_n$ is a sequence of numbers such that $\lim _{n\to \infty }x_n =\infty $, then $\lim_{n\to \infty }f(x_n)=L$. If you feel comfortable using this theorem, then you can now prove that $\lim_{x\to \infty }\sin(x)\ne \frac{1}{2}$ by considering a suitable sequence $x_n$ with $\lim_{n\to \infty }x_n=\infty $ and $\sin(x_n)=1$ for all $n$. It then follows that $\lim_{n\to \infty }\sin (x_n)=1\ne \frac{1}{2}$, and thus proves the claim. If you haven't seen this theorem before, try proving it. If you want to give a proof that is as close as possible to directly using the definition of limit, then try to figure out why the above works.
Choose $\epsilon = \frac{1}{4}$ for instance, and show that for all $M$ there exists $x>M$ with distance between $\sin(x)$ and $\frac{1}{2}$ bigger than $\epsilon $.
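For completeness, the sequence argument above is easy to check numerically. The sketch below (the helper name is mine, not from the thread) produces, for any threshold $M$, a point $x>M$ of the form $\pi/2 + 2n\pi$, where $\sin x = 1$ and hence $|\sin x - 1/2| = 1/2 > 1/4$:

```python
import math

EPS = 0.25  # the fixed epsilon_0 = 1/4 from the first answer

def witness_beyond(M):
    """Return x > M with |sin(x) - 1/2| > EPS, witnessing the failed limit."""
    n = math.ceil((M - math.pi / 2) / (2 * math.pi)) + 1
    return math.pi / 2 + 2 * n * math.pi  # sin equals 1 at these points

for M in (1.0, 1e3, 1e6):
    x = witness_beyond(M)
    assert x > M
    assert abs(math.sin(x) - 0.5) > EPS
```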
\begin{document} \renewcommand{\PaperNumber}{108} \FirstPageHeading \ShortArticleName{Fundamental Solution of Laplace's Equation in Hyperspherical Geometry} \ArticleName{Fundamental Solution of Laplace's Equation\\ in Hyperspherical Geometry} \Author{Howard S.~COHL $^{\dag\ddag}$} \AuthorNameForHeading{H.S.~Cohl} \Address{$^\dag$~Applied and Computational Mathematics Division, Information Technology Laboratory,\\ \hphantom{$^\dag$}~National Institute of Standards and Technology, Gaithersburg, Maryland, USA} \EmailD{\href{mailto:[email protected]}{[email protected]}} \Address{$^\ddag$~Department of Mathematics, University of Auckland, 38 Princes Str., Auckland, New Zealand} \URLaddressD{\url{http://hcohl.sdf.org}} \ArticleDates{Received August 18, 2011, in f\/inal form November 22, 2011; Published online November 29, 2011} \vspace{-1mm} \Abstract{Due to the isotropy of $d$-dimensional hyperspherical space, one expects there to exist a spherically symmetric fundamental solution for its corresponding Laplace--Beltrami operator. The $R$-radius hypersphere ${\mathbf S}_R^d$ with $R>0$, represents a Riemannian manifold with positive-constant sectional curvature. We obtain a spherically symmetric fundamental solution of Laplace's equation on this manifold in terms of its geodesic radius. 
We give several matching expressions for this fundamental solution including a def\/inite integral over reciprocal powers of the trigonometric sine, f\/inite summation expressions over trigonometric functions, Gauss hypergeometric functions, and in terms of the associated Legendre function of the second kind on the cut (Ferrers function of the second kind) with degree and order given by $d/2-1$ and $1-d/2$ respectively, with real argument between plus and minus one.} \Keywords{hyperspherical geometry; fundamental solution; Laplace's equation; separation of variables; Ferrers functions} \Classification{35A08; 35J05; 32Q10; 31C12; 33C05} \vspace{-3mm} \section{Introduction} \label{Introduction} \looseness=-1 We compute closed-form expressions of a spherically symmetric Green's function (fundamental solution of Laplace's equation) for a $d$-dimensional Riemannian manifold of positive-constant sectional curvature, namely the $R$-radius hypersphere with $R>0$. This problem is intimately related to the solution of the Poisson equation on this manifold and the study of spherical harmonics which play an important role in exploring collective motion of many-particle systems in quantum mechanics, particularly nuclei, atoms and molecules. In these systems, the hyperradius is constructed from appropriately mass-weighted quadratic forms from the Cartesian coordinates of the particles. One then seeks either to identify discrete forms of motion which occur primarily in the hyperradial coordinate, or alternatively to construct complete basis sets on the hypersphere. This representation was introduced in quantum mechanics by Zernike \& Brinkman~\cite{ZernBrink}, and later invoked to greater ef\/fect in nuclear and atomic physics, respectively, by Delves~\cite{Delves} and Smith~\cite{Smith}. 
The relevance of this representation to two-electron excited states of the helium atom was noted by Cooper, Fano \& Prats~\cite{CooperFanoPrats}; Fock~\cite{Fock} had previously shown that the hyperspherical representation was particularly ef\/f\/icient in representing the helium wavefunction in the vicinity of small hyperradii. There has been a rich literature of applications ever since. Examples include Zhukov~\cite{Zhukov} (nuclear structure), Fano~\cite{Fano} and Lin~\cite{Lin} (atomic structure), and Pack \& Parker~\cite{PackParker} (molecular collisions). A~recent monograph by Berakdar~\cite{Berakdar} discusses hyperspherical harmonic methods in the general context of highly-excited electronic systems. Useful background material relevant for the mathematical aspects of this paper can be found in~\cite{Lee,Thurston,Vilen}. Some historical refe\-ren\-ces on this topic include~\cite{Higgs,Leemon,Schrodinger38, Schrodinger40,VinMarPogSisStr}. This paper is organized as follows. In Section~\ref{Thehyperboloidmodelofhyperbolicgeometry} we describe hyperspherical geometry and its corresponding metric, global geodesic distance function, Laplacian and hyperspherical coordinate systems which parametrize points on this manifold. In Section~\ref{AGreensfunctioninthehyperboloidmodel} for hyperspherical geometry, we show how to compute `radial' harmonics in a geodesic polar coordinate system and derive several alternative expressions for a `radial' fundamental solution of the Laplace's equation on the $R$-radius hypersphere. Throughout this paper we rely on the following def\/initions. For $a_1,a_2,\ldots\in\C$, if $i,j\in\Z$ and $j<i$ then $\sum\limits_{n=i}^{j}a_n=0$ and $\prod\limits_{n=i}^ja_n=1$. 
The set of natural numbers is given by $\N:=\{1,2,3,\ldots\}$, the set $\N_0:=\{0,1,2,\ldots\}=\N\cup\{0\}$, and the set $\Z:=\{0,\pm 1,\pm 2,\ldots\}.$ \section{Hyperspherical geometry} \label{Thehyperboloidmodelofhyperbolicgeometry} The Euclidean inner product for $\R^{d+1}$ is given by $ (\bfx,{\mathbf y})=x_0y_0+x_1y_1+\cdots+x_dy_d$. The variety $(\bfx,\bfx)=x_0^2+x_1^2+\cdots+x_d^2=R^2$, for $\bfx\in\R^{d+1}$ and $R>0$, def\/ines the $R$-radius hypersphere $\Si_R^{d}$. We denote the unit radius hypersphere by $\Si^d:=\Si_1^d$. Hyperspherical space in $d$-dimensions, denoted by $\Si_R^d$, is a maximally symmetric, simply connected, $d$-dimensional Riemannian manifold with positive-constant sectional curvature (given by $1/R^2$, see for instance~\cite[p.~148]{Lee}), whereas Euclidean space~$\R^d$ equipped with the Pythagorean norm, is a Riemannian manifold with zero sectional curvature. Points on the $d$-dimensional hypersphere $\Si_R^d$ can be parametrized using {\it subgroup-type coordinate systems}, i.e., those which correspond to a maximal subgroup chain $O(d)\supset \cdots$ (see for instance~\cite{IPSWa,IPSWc}). The isometry group of the space $\Si_R^d$ is the orthogonal group $O(d)$. Hyperspherical space $\Si_R^d$, can be identif\/ied with the quotient space $O(d)/O(d-1)$. The isometry group $O(d)$ acts transitively on~$\Si_R^d$. There exist se\-pa\-rable coordinate systems on the hypersphere, analogous to parabolic coordinates in Euclidean space, which can not be constructed using maximal subgroup chains. {\it Polyspherical coordinates}, are coordinates which correspond to the maximal subgroup chain given by $O(d)\supset \cdots$. What we will refer to as {\it standard hyperspherical coordinates}, correspond to the subgroup chain given by $O(d)\supset O(d-1)\supset \cdots \supset O(2).$ (For a thorough discussion of polyspherical coordinates see Section~IX.5 in~\cite{Vilen}.) 
Polyspherical coordinates on $\Si^{d}_R$ all share the property that they are described by $(d+1)$-variables: $R\in[0,\infty)$ plus $d$-angles each being given by the values $[0,2\pi)$, $[0,\pi]$, $[-\pi/2,\pi/2]$ or $[0,\pi/2]$ (see~\cite{IPSWa, IPSWb}). In our context, a useful subset of polyspherical coordinates are {\it geodesic polar coordinates} $(\theta,{\mathbf {\widehat x}})$ (see for instance~\cite{Oprea}). These coordinates, which parametrize points on $\Si_R^d$, have origin at $O=(R,0,\ldots,0)\in\R^{d+1}$ and are given by a `radial' parameter $\theta\in[0,\pi]$ which parametrizes points along a geodesic curve emanating from $O$ in a direction $ {\mathbf {\widehat x}} \in\Si^{d-1}$. Geodesic polar coordinate systems partition $\Si_R^d$ into a family of $(d-1)$-dimensional hyperspheres, each with a~`radius' $\theta:=\theta_d\in(0,\pi),$ on which all possible hyperspherical coordinate systems for~$\Si^{d-1}$ may be used (see for instance~\cite{Vilen}). One then must also consider the limiting case for $\theta=0,\pi$ to f\/ill out all of $\Si_R^d$. {\it Standard hyperspherical coordinates} (see~\cite{KalMilPog,Olevskii}) are an example of geodesic polar coordinates, and are given by{\samepage \begin{gather} \begin{array}{@{}l} x_0 = R\cos\theta,\\ x_1=R\sin\theta\cos\theta_{d-1},\\ x_2=R\sin\theta\sin\theta_{d-1}\cos\theta_{d-2},\\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots \\ x_{d-2} = R\sin\theta\sin\theta_{d-1}\cdots\cos\theta_{2},\\ x_{d-1} = R\sin\theta\sin\theta_{d-1}\cdots\sin\theta_{2}\cos\phi,\\ x_{d} = R\sin\theta\sin\theta_{d-1}\cdots\sin\theta_{2}\sin\phi, \end{array} \label{standardhyp} \end{gather} $\theta_i\in[0,\pi]$ for $i\in\{2,\ldots,d\}$, $\theta:=\theta_{d}$, and $\phi\in[0,2\pi)$.} In order to study a fundamental solution of Laplace's equation on the hypersphere, we need to describe how one computes the geodesic distance in this space.
Geodesic distances on $\Si_R^d$ are simply given by arc lengths, that is, angles between two arbitrary vectors from the origin in the ambient Euclidean space (see for instance~\cite[p.~82]{Lee}). Any parametrization of the hypersphere~$\Si_R^d$ must have $(\bfx,\bfx)=x_0^2+\cdots+x_d^2=R^2$, with $R>0$. The distance between two points $\bfx,\bfxp\in\Si_R^d$ on the hypersphere is given by \begin{equation} d(\bfx,\bfxp)=R\gamma =R\cos^{-1}\left(\frac{(\bfx,\bfxp)}{\sqrt{(\bfx,\bfx)}\sqrt{(\bfxp,\bfxp)}} \right) =R\cos^{-1}\left(\frac{1}{R^2}(\bfx,\bfxp)\right). \label{cosgamma} \end{equation} This is evident from the fact that the geodesics on $\Si_R^d$ are great circles, i.e., intersections of $\Si_R^d$ with planes through the origin of the ambient Euclidean space, with constant speed parametrizations. In any geodesic polar coordinate system, the geodesic distance between two points on the submanifold is given by \begin{equation} d(\bfx,\bfxp)=R\cos^{-1}\left(\frac{1}{R^2}(\bfx,\bfxp)\right) =R\cos^{-1} \bigl( \cos \theta\cos\theta^\prime + \sin\theta\sin\theta^\prime\cos\gamma\bigr), \label{diststandard} \end{equation} where $\gamma$ is the unique separation angle given in each polyspherical coordinate system used to parametrize points on $\Si^{d-1}$. For instance, the separation angle $\gamma$ in standard hyperspherical coordinates is given through \begin{equation} \cos \gamma=\cos(\phi-\phi^\prime) \prod_{i=1}^{d-2}\sin\theta_i{\sin\theta_i}^\prime +\sum_{i=1}^{d-2}\cos\theta_i{\cos\theta_i}^\prime \prod_{j=1}^{i-1}\sin\theta_j{\sin\theta_j}^\prime. \label{prodform} \end{equation} Corresponding separation angle formulae for any hyperspherical coordinate system used to parametrize points on $\Si^{d-1}$ can be computed using (\ref{cosgamma}) and the associated formulae for the appropriate inner-products.
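As an editorial sanity check (not part of the original derivation), the agreement between the ambient inner-product formula (\ref{cosgamma}) and the geodesic polar form (\ref{diststandard}) can be verified numerically for $d=3$, using the standard hyperspherical parametrization (\ref{standardhyp}); the helper names below are illustrative only:

```python
import math

R = 2.0  # an arbitrary hypersphere radius

def point(theta, theta2, phi):
    """A point on S_R^3 in standard hyperspherical coordinates (d = 3)."""
    return (
        R * math.cos(theta),
        R * math.sin(theta) * math.cos(theta2),
        R * math.sin(theta) * math.sin(theta2) * math.cos(phi),
        R * math.sin(theta) * math.sin(theta2) * math.sin(phi),
    )

def dist_ambient(p, q):
    """Distance via the Euclidean inner product, cf. the first formula."""
    inner = sum(a * b for a, b in zip(p, q))
    return R * math.acos(inner / R**2)

def dist_polar(t, t2, ph, tp, tp2, php):
    """Distance via the geodesic polar form with the separation angle gamma."""
    cos_gamma = (math.cos(t2) * math.cos(tp2)
                 + math.sin(t2) * math.sin(tp2) * math.cos(ph - php))
    return R * math.acos(math.cos(t) * math.cos(tp)
                         + math.sin(t) * math.sin(tp) * cos_gamma)

a = (0.7, 1.1, 0.3)
b = (2.1, 0.4, 2.8)
assert abs(dist_ambient(point(*a), point(*b)) - dist_polar(*a, *b)) < 1e-12
```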
One can also compute the Riemannian (volume) measure $d{\rm vol}_g$ (see for instance Section~3.4 in~\cite{Grigor}), invariant under the isometry group $SO(d)$, of the Riemannian manifold~$\Si_R^d$. For instance, in standard hyperspherical coordinates~(\ref{standardhyp}) on $\Si_R^{d}$ the volume measure is given by \begin{gather} d{\rm vol}_g= R^d\sin^{d-1}\theta\,d\theta\,d\omega:= R^d\sin^{d-1}\theta\,d\theta\, \sin^{d-2}\theta_{d-1}\cdots\sin\theta_2\, d\theta_{1}\cdots d\theta_{d-1}. \label{eucsphmeasureinv} \end{gather} The distance $r\in[0,\infty)$ along a geodesic, measured from the origin, is given by $r=\theta R$. To show that the above volume measure (\ref{eucsphmeasureinv}) reduces to the Euclidean volume measure at small distances (see for instance~\cite{KalMilPog}), we examine the limit of zero curvature. In order to do this, we take the limit $\theta\to 0^+$ and $R\to\infty$ of the volume measure~(\ref{eucsphmeasureinv}) which produces \[ d{\rm vol}_g\sim R^{d-1}\sin^{d-1}\left(\frac{r}{R}\right)dr d\omega\sim r^{d-1}dr\,d\omega, \] which is the Euclidean measure in $\R^d$, expressed in standard Euclidean hyperspherical coordinates. This measure is invariant under the Euclidean motion group $E(d)$. It will be useful below to express the Dirac delta function on $\Si_R^d$. The Dirac delta function on the Riemannian manifold $\Si_R^d$ with metric $g$ is def\/ined for an open set $U\subset\Si_R^d$ with $\bfx,\bfxp\in\Si_R^d$ such that \begin{gather} \int_U\delta_g(\bfx,\bfxp) d{\rm vol}_g = \begin{cases} 1 & \mathrm{if}\ \bfxp\in U, \\ 0 & \mathrm{if}\ \bfxp\notin U. 
\end{cases} \label{defndiracdeltafunction} \end{gather} For instance, using (\ref{eucsphmeasureinv}) and (\ref{defndiracdeltafunction}), in standard hyperspherical coordinates on $\Si_R^{d}$ (\ref{standardhyp}), we see that the Dirac delta function is given by \[ \delta_g(\bfx,\bfxp)=\frac{\delta(\theta-\theta^\prime)}{R^d\sin^{d-1}\theta^\prime} \frac{\delta(\theta_1-\theta_1^\prime)\cdots\delta(\theta_{d-1}-\theta_{d-1}^\prime)} { \sin\theta_2^\prime \cdots \sin^{d-2}\theta_{d-1}^\prime }. \] \subsection{Laplace's equation on the hypersphere} Parametrizations of a submanifold embedded in Euclidean space can be given in terms of coordinate systems whose coordinates are {\it curvilinear}. These are coordinates based on some transformation that converts the standard Cartesian coordinates in the ambient space to a coordinate system with the same number of coordinates as the dimension of the submanifold in which the coordinate lines are curved. The Laplace--Beltrami operator (Laplacian) in curvilinear coordinates ${\mathbf{\xi}}=(\xi^1,\ldots,\xi^d)$ on a~Riemannian manifold is given by \begin{gather} \Delta=\sum_{i,j=1}^d\frac{1}{\sqrt{|g|}} \frac{\partial}{\partial \xi^i} \biggl(\sqrt{|g|}g^{ij} \frac{\partial}{\partial \xi^j} \biggr), \label{laplacebeltrami} \end{gather} where $|g|=|\det(g_{ij})|,$ the metric is given by \begin{gather} ds^2=\sum_{i,j=1}^{d}g_{ij}d\xi^id\xi^j, \label{metric} \end{gather} and \[ \sum_{i=1}^{d}g_{ki}g^{ij}=\delta_k^j, \] where $\delta_i^j\in\{0,1\}$ is the Kronecker delta \begin{gather} \delta_i^j:= \begin{cases} 1 & \mathrm{if}\ i=j, \\ 0 & \mathrm{if}\ i\ne j, \end{cases} \label{Kronecker} \end{gather} for $i,j\in\Z$. The relationship between the metric tensor $G_{ij}=\mathrm{diag}(1,\ldots,1)$ in the ambient space and $g_{ij}$ of (\ref{laplacebeltrami}) and (\ref{metric}) is given~by \[ g_{ij}({\mathbf{\xi}})=\sum_{k,l=0}^dG_{kl}\frac{\partial x^k}{\partial \xi^i} \frac{\partial x^l}{\partial \xi^j}. 
\] The Riemannian metric in a geodesic polar coordinate system on the submanifold $\Si_R^d$ is given~by \begin{gather} ds^2=R^2\big(d\theta^2+\sin^2\theta\ d\gamma^2\big), \label{stanhypmetric} \end{gather} where an appropriate expression for $\gamma$ in a curvilinear coordinate system is given. If one combines~(\ref{standardhyp}), (\ref{prodform}), (\ref{laplacebeltrami}) and (\ref{stanhypmetric}), then in a geodesic polar coordinate system, Laplace's equation on~$\Si_R^d$ is given by \begin{gather} \Delta f=\frac{1}{R^2}\left[\frac{\partial^2f}{\partial\theta^2} +(d-1)\cot\theta\frac{\partial f}{\partial \theta} +\frac{1}{\sin^2\theta} \Delta_{\Si^{d-1}}f\right]=0, \label{genhyplap} \end{gather} where $\Delta_{\Si^{d-1}}$ is the corresponding Laplace--Beltrami operator on $\Si^{d-1}$. \section{A Green's function on the hypersphere} \label{AGreensfunctioninthehyperboloidmodel} \subsection{Harmonics in geodesic polar coordinates} \label{SepVarStaHyp} The harmonics in a geodesic polar coordinate system are given in terms of a `radial' solution (`radial' harmonics) multiplied by the angular solution (angular harmonics). Using polyspherical coordinates on $\Si^{d-1},$ one can compute the normalized hyperspherical harmonics in this space by solving the Laplace equation using separation of variables. This results in a general procedure which, for instance, is given explicitly in~\cite{IPSWa, IPSWb}. These angular harmonics are given as general expressions involving trigonometric functions, Gegenbauer polynomials and Jacobi polynomials. 
The angular harmonics are eigenfunctions of the Laplace--Beltrami operator on $\Si^{d-1}$ which satisfy the following eigenvalue problem (see for instance~(12.4) and Corollary~2 to Theorem~10.5 in~\cite{Takeuchi}) \begin{gather} \Delta_{\Si^{d-1}}Y_l^K ({\mathbf {\widehat x}}) =-l(l+d-2)Y_l^K({\mathbf {\widehat x}}), \label{eq4} \end{gather} where ${\mathbf {\widehat x}}\in\Si^{d-1}$, $Y_l^K({\mathbf {\widehat x}})$ are normalized angular hyperspherical harmonics, $l\in\N_0$ is the angular momentum quantum number, and $K$ stands for the set of $(d-2)$-quantum numbers identifying degenerate harmonics for each $l$ and $d$. The degeneracy \[ (2l+d-2)\frac{(d-3+l)!}{l!(d-2)!} \] (see (9.2.11) in~\cite{Vilen}), tells you how many linearly independent solutions exist for a particular $l$ value and dimension $d$. The angular hyperspherical harmonics are normalized such that \[ \int_{\Si^{d-1}} Y_l^K ({\mathbf {\widehat x}}) \overline{Y_{l^\prime}^{K^\prime} ({\mathbf {\widehat x}}) } d\omega= \delta_{l}^{l^\prime} \delta_{K}^{K^\prime}, \] where $d\omega$ is the Riemannian (volume) measure on $\Si^{d-1}$, which is invariant under the isometry group $SO(d)$ (cf.~(\ref{eucsphmeasureinv})), and for $x+iy=z\in\C$, $\overline{z}=x-iy$, represents complex conjugation. The angular solutions (hyperspherical harmonics) are well-known (see Chapter IX in~\cite{Vilen} and Chapter 11~\cite{ErdelyiHTFII}). The generalized Kronecker delta symbol~$\delta_K^{K^\prime}$ (cf.~(\ref{Kronecker})) is def\/ined such that it equals 1 if all of the $(d-2)$-quantum numbers identifying degenerate harmonics for each~$l$ and~$d$ coincide, and equals zero otherwise. We now focus on `radial' solutions of Laplace's equation on $\Si_R^d$, which satisfy the following ordinary dif\/ferential equation (cf.~(\ref{genhyplap}) and (\ref{eq4})) \begin{gather} \frac{d^2u}{d\theta^2}+(d-1)\cot\theta\frac{du}{d\theta}-\frac{l(l+d-2)}{\sin^2\theta}u=0. 
\label{sphericallysymmetricharmonicequation} \end{gather} Four solutions of this ordinary dif\/ferential equation $u_{1\pm}^{d,l},u_{2\pm}^{d,l}:(-1,1)\to\C$ are given by \[ {\displaystyle u_{1\pm}^{d,l}(\cos\theta):=\frac{1}{(\sin\theta)^{d/2-1}}{\mathrm P}_{d/2-1}^{\pm(d/2-1+l)} (\cos\theta)}, \] and \begin{gather} u_{2\pm}^{d,l}(\cos\theta):=\frac{1}{(\sin\theta)^{d/2-1}}{\mathrm Q}_{d/2-1}^{\pm(d/2-1+l)} (\cos\theta) , \label{u2pmdl} \end{gather} where ${\mathrm P}_\nu^\mu, {\mathrm Q}_\nu^\mu:(-1,1)\to\C$ are Ferrers functions of the f\/irst and second kind (associated Legendre functions of the f\/irst and second kind on the cut). The Ferrers functions of the f\/irst and second kind (see Chapter~14 in~\cite{NIST}) can be def\/ined respectively in terms of a sum over two Gauss hypergeometric functions, for all $\nu,\mu\in\C$ such that $\nu+\mu\not\in-\N$, \begin{gather*} {\mathrm P}_\nu^\mu(x) := \frac{2^{\mu+1}}{\sqrt{\pi}} \sin\left[\frac{\pi}{2}(\nu+\mu)\right] \frac {\Gamma\left(\frac{\nu+\mu+2}{2}\right)} {\Gamma\left(\frac{\nu-\mu+1}{2}\right)} x(1-x^2)^{-\mu/2}{}_2F_1 \left(\frac{1-\nu-\mu}{2},\frac{\nu-\mu+2}{2};\frac32;x^2\right)\\ \phantom{{\mathrm P}_\nu^\mu(x) := }{} + \frac{2^{\mu}}{\sqrt{\pi}} \cos\left[\frac{\pi}{2}(\nu+\mu)\right] \frac {\Gamma\left(\frac{\nu+\mu+1}{2}\right)} {\Gamma\left(\frac{\nu-\mu+2}{2}\right)} (1-x^2)^{-\mu/2}{}_2F_1 \left(\frac{-\nu-\mu}{2},\frac{\nu-\mu+1}{2};\frac12;x^2\right) \end{gather*} (cf.~(14.3.11) in~\cite{NIST}), and \begin{gather} {\mathrm Q}_\nu^\mu(x) := \sqrt{\pi}2^{\mu}\cos\left[\frac{\pi}{2}(\nu+\mu)\right] \frac {\Gamma\!\left(\frac{\nu+\mu+2}{2}\right)} {\Gamma\!\left(\frac{\nu-\mu+1}{2}\right)} x(1-x^2)^{-\mu/2}{}_2F_1 \left(\frac{1-\nu-\mu}{2},\frac{\nu-\mu+2}{2};\frac32;x^2\right)\nonumber\\ {} -\sqrt{\pi}2^{\mu-1}\sin\left[\frac{\pi}{2}(\nu+\mu)\right] \frac {\Gamma\left(\frac{\nu+\mu+1}{2}\right)} {\Gamma\left(\frac{\nu-\mu+2}{2}\right)} (1-x^2)^{-\mu/2}{}_2F_1 
\left(\frac{-\nu-\mu}{2},\frac{\nu-\mu+1}{2};\frac12;x^2\right) \label{ferrerssecondkinddefnhypergeom} \end{gather} (cf.~(14.3.12) in~\cite{NIST}). The Gauss hypergeometric function ${}_2F_1:\C\times\C\times(\C\setminus-\N_0)\times\{z\in\C:|z|<1\}\to\C$, can be def\/ined in terms of the inf\/inite series \[ {}_{2}F_1(a,b;c;z):= \sum_{n=0}^\infty \frac{(a)_n(b)_n}{(c)_n n!}z^n \] (see (15.2.1) in~\cite{NIST}), and elsewhere in $z$ by analytic continuation. On the unit circle \mbox{$|z|=1$}, the Gauss hypergeometric series converges absolutely if $\mbox{Re}\,(c-a-b)\in(0,\infty),$ converges condi\-tio\-nal\-ly if $z\ne 1$ and $\mbox{Re}\,(c-a-b)\in(-1,0],$ and diverges if $\mbox{Re}\,(c-a-b)\in(-\infty,-1]$. For $z\in\C$ and $n\in\N_0$, the Pochhammer symbol $(z)_n$ (also referred to as the rising factorial) is def\/ined as (cf.~(5.2.4) in~\cite{NIST}) \[ (z)_n:=\prod_{i=1}^n(z+i-1). \] The Pochhammer symbol (rising factorial) is expressible in terms of gamma functions as (5.2.5) in~\cite{NIST} \[ (z)_n=\frac{\Gamma(z+n)}{\Gamma(z)}, \] for all $z\in\C\setminus-\N_0$. The gamma function $\Gamma:\C\setminus-\N_0\to\C$ (see Chapter~5 in~\cite{NIST}) is an important combinatoric function and is ubiquitous in special function theory. It is naturally def\/ined over the right-half complex plane through Euler's integral (see (5.2.1) in~\cite{NIST}) \[ \Gamma(z):=\int_0^\infty t^{z-1} e^{-t} dt, \] $\mbox{Re}\, z>0$. The Euler ref\/lection formula allows one to obtain values of the gamma function in the left-half complex plane (cf.~(5.5.3) in~\cite{NIST}), namely \[ \Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z}, \] $0<\mbox{Re}\, z<1,$ for $\mbox{Re}\,z=0$, $z\ne 0$, and then for $z$ shifted by integers using the following recurrence relation (see (5.5.1) in~\cite{NIST}) \[ \Gamma(z+1)=z\Gamma(z). 
\] An important formula which the gamma function satisf\/ies is the duplication formula (i.e., (5.5.5) in~\cite{NIST}) \begin{gather} \Gamma(2z)=\frac{2^{2z-1}}{\sqrt{\pi}}\Gamma(z)\Gamma\left(z+\frac12\right), \label{duplicationformulagamma} \end{gather} provided $2z\not\in-\N_0$. Due to the fact that the space $\Si_R^d$ is homogeneous with respect to its isometry group, the orthogonal group $O(d)$, and is therefore an isotropic manifold, we expect that there exists a fundamental solution on this space with spherically symmetric dependence. We specif\/ically expect these solutions to be given in terms of associated Legendre functions of the second kind on the cut with argument given by $\cos\theta$. This associated Legendre function naturally f\/its our requirements because it is singular at $\theta=0$, whereas the associated Legendre function of the f\/irst kind, with the same argument, is regular at $\theta=0$. We require there to exist a singularity at the origin of a fundamental solution of Laplace's equation on~$\Si^d_R$, since it is a manifold and must behave locally like a Euclidean fundamental solution of Laplace's equation, which also has a singularity at the origin. \subsection{Fundamental solution of Laplace's equation on the hypersphere} \label{FunSolLapHd} In computing a fundamental solution of the Laplacian on $\Si_R^d$, we know that \begin{gather} -\Delta {\mathcal S}_R^d(\bfx,\bfxp) = \delta_g(\bfx,\bfxp), \label{eq3} \end{gather} where $g$ is the Riemannian metric on $\Si_R^d$ (e.g., (\ref{stanhypmetric})) and $\delta_g$ is the Dirac delta function on the manifold $\Si_R^d$ (e.g., (\ref{defndiracdeltafunction})). In general, since we can add any harmonic function to a fundamental solution of Laplace's equation and still have a fundamental solution, we will use this freedom to make our fundamental solution as simple as possible.
It is reasonable to expect that there exists a particular spherically symmetric fundamental solution ${\mathcal S}_R^d$ on the hypersphere with purely `radial' dependence, $\theta:=d(\bfx,\bfxp)$ (e.g., (\ref{diststandard})), and constant angular dependence, due to the point-like nature of the Dirac delta function in (\ref{eq3}). For a spherically symmetric solution to Laplace's equation, the corresponding $\Delta_{\Si^{d-1}}$ term in (\ref{genhyplap}) vanishes since only the $l=0$ term survives in (\ref{sphericallysymmetricharmonicequation}). In other words, we expect there to exist a fundamental solution of Laplace's equation on $\Si_R^d$ such that ${\mathcal S}_R^d(\bfx,\bfxp)=f(\theta)$ (cf.~(\ref{diststandard})), where $R$ is a parameter of this fundamental solution. We have proven that on the $R$-radius hypersphere $\Si_R^d$, a Green's function for the Laplace operator (fundamental solution of Laplace's equation) can be given as follows. \begin{theorem}\label{thmh1d} Let $d\in\{2,3,\ldots\}.$ Define $\mcI_d:(0,\pi)\to\R$ as \[ \mcI_d(\theta):=\int_\theta^{\pi/2}\frac{dx}{\sin^{d-1}x}, \] ${{\bf x}},{{{\bf x}^\prime}}\in\Si_R^d$, and ${\mathcal S}_R^d:(\Si_R^d\times\Si_R^d)\setminus\{(\bfx,\bfx):\bfx\in\Si_R^d\}\to\R$ defined such that \[ {\mathcal S}_R^d({\bf x},{\bf x}^\prime):= {\displaystyle \frac{\Gamma\left(d/2\right)}{2\pi^{d/2}R^{d-2}}\mcI_d(\theta)}, \] where $\theta:=\cos^{-1}\left(({\widehat{\bf x}},{{\widehat{\bf x}^\prime}})\right)$ is the geodesic distance between~${\widehat{\bf x}}$ and ${{\widehat{\bf x}^\prime}}$ on the unit radius hypersphere~$\Si^d$, with ${\widehat{\bf x}}=\bfx/R$, ${{\widehat{\bf x}^\prime}}=\bfxp/R$, then ${\mathcal S}_R^d$ is a fundamental solution for $-\Delta$, where $\Delta$ is the Laplace--Beltrami operator on $\Si_R^d$.
Moreover, \begin{gather*} \mcI_d(\theta) = \begin{cases} \displaystyle \frac{(d-3)!!}{(d-2)!!}\biggl[\log\cot \frac{\theta}{2} +\cos \theta\sum_{k=1}^{d/2-1}\frac{(2k-2)!!}{(2k-1)!!}\frac{1}{\sin^{2k}\theta}\biggr] & \mathrm{if}\ d\ \mathrm{even}, \vspace{2mm}\\ \left\{ \begin{array}{l} \displaystyle \left(\frac{d-3}{2}\right)! \sum_{k=1}^{(d-1)/2} \frac{\cot^{2k-1}\theta} {(2k-1)(k-1)!((d-2k-1)/2)!}, \nonumber\vspace{2mm}\\ \mathrm{or} \\ \displaystyle \frac{(d-3)!!}{(d-2)!!}\cos\theta \sum_{k=1}^{(d-1)/2} \frac{(2k-3)!!}{(2k-2)!!} \frac{1}{\sin^{2k-1}\theta}, \end{array} \right\} & \mathrm{if}\ d\ \mathrm{odd}, \end{cases} \vspace{2mm}\\ \phantom{\mcI_d(\theta)}{} = \begin{cases} \displaystyle \cos\theta\ {}_2F_1\left(\frac12,\frac{d}{2};\frac{3}{2};\cos^2\theta\right),\vspace{2mm}\\ \displaystyle \frac{\cos\theta}{\sin^{d-2}\theta}\ {}_2F_1\left(1,\frac{3-d}{2};\frac32;\cos^2\theta\right),\vspace{2mm}\\ \displaystyle \frac{(d-2)!}{\displaystyle \Gamma\left(d/2\right)2^{d/2-1}} \frac{1}{(\sin \theta)^{d/2-1}}{\mathrm Q}_{d/2-1}^{1-d/2}(\cos \theta). \end{cases} \end{gather*} \end{theorem} In the rest of this section, we develop the material in order to prove this theorem. Since a spherically symmetric choice for a fundamental solution satisf\/ies Laplace's equation everywhere except at the origin, we may f\/irst set $g=f^\prime$ in (\ref{genhyplap}) and solve the f\/irst-order equation \[ g^\prime+(d-1)\cot \theta\; g=0, \] which is integrable and clearly has the general solution \begin{gather} g(\theta)=\frac{df}{d\theta}=c_0(\sin \theta)^{1-d}, \label{dfdr} \end{gather} where $c_0\in\R$ is a constant.
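As an illustrative cross-check (not part of the original derivation), one can verify symbolically that $g(\theta)=\sin^{1-d}\theta$ solves the f\/irst-order radial equation $g'+(d-1)\cot\theta\,g=0$:

```python
import sympy as sp

theta, d = sp.symbols('theta d', positive=True)

# General radial solution from the text: g = f' = c_0 sin^{1-d}(theta); take c_0 = 1
g = sp.sin(theta) ** (1 - d)

# Residual of the first-order radial equation g' + (d-1) cot(theta) g = 0
residual = sp.diff(g, theta) + (d - 1) * (sp.cos(theta) / sp.sin(theta)) * g

print(sp.simplify(residual))  # 0
```

The residual vanishes identically in both $\theta$ and $d$, conf\/irming (\ref{dfdr}).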
Now we integrate (\ref{dfdr}) to obtain a fundamental solution for the Laplacian on $\Si_R^d$ \begin{gather} {\mathcal S}_R^d(\bfx,\bfxp)=c_0\mcI_d(\theta)+c_1, \label{Hdid01} \end{gather} where $\mcI_d:(0,\pi)\to\R$ is def\/ined as \begin{gather} \mcI_d(\theta):=\int_\theta^{\pi/2}\frac{dx}{\sin^{d-1}x}, \label{In} \end{gather} and $c_0,c_1\in\R$ are constants which depend on $d$ and $R$. Notice that we can add any harmonic function to (\ref{Hdid01}) and still have a fundamental solution of the Laplacian since a fundamental solution of the Laplacian must satisfy \[ \int_{\Si_R^d} (-\Delta\varphi)(\bfxp) {\mathcal S}_R^d(\bfx,\bfxp)\, d{\rm vol}_g^\prime = \varphi(\bfx), \] for all $\varphi\in {\mathcal S}(\Si_R^d),$ where ${\mathcal S}$ is the space of test functions, and $d{\rm vol}_g^\prime$ is the Riemannian (volume) measure on $\Si_R^d$ in the primed coordinates. Notice that our fundamental solution of Laplace's equation on the hypersphere ((\ref{Hdid01}), (\ref{In})) has the property that it tends towards $+\infty$ as $\theta\to 0^{+}$ and tends towards $-\infty$ as $\theta\to\pi^{-}$. Therefore our fundamental solution attains all real values. As an aside, by the def\/inition therein (see \cite{Grigor83,Grigor85}), $\Si_R^d$ is a parabolic manifold. Since the hypersphere $\Si_R^d$ is bi-hemispheric, we expect that a fundamental solution of Laplace's equation on the hypersphere should vanish at $\theta=\pi/2$. It is therefore convenient to set $c_1=0$ leaving us with \begin{gather} {\mathcal S}_R^d(\bfx,\bfxp)=c_0\mcI_d(\theta). \label{Hdid} \end{gather} In Euclidean space $\R^d$, a Green's function for Laplace's equation (fundamental solution for the Laplacian) is well-known and is given by the following expression (see \cite[p.~94]{Fol3}, \cite[p.~17]{GT}, \cite[p.~211]{BJS}, \cite[p.~6]{Doob}). Let $d\in\N$. 
Def\/ine \begin{gather} \mcg^d({\bf x},{\bf x}^\prime)= \begin{cases} \displaystyle\frac{\Gamma(d/2)}{2\pi^{d/2}(d-2)}\|{\bf x}-{\bf x}^\prime\|^{2-d} & \mathrm{if}\ d=1\mathrm{\ or\ }d\ge 3,\vspace{1mm} \\ \displaystyle\frac{1}{2\pi}\log\|{\bf x}-{\bf x}^\prime\|^{-1} & \mathrm{if}\ d=2, \end{cases} \label{thmg1n} \end{gather} then $\mcg^d$ is a fundamental solution for $-\Delta$ in Euclidean space $\R^d$, where $\Delta$ is the Laplace operator in $\R^d$. Note that most authors only present the above theorem for the case $d\ge 2$ but it is easily verif\/ied to also be valid for the case $d=1$ as well. The hypersphere $\Si_R^d$, being a manifold, must behave locally like Euclidean space $\R^d$. Therefore for small $x$ we have $\sin x\simeq x$ and in that limiting regime \[ \mcI_d(\theta)\approx\int_\theta^1 \frac{dx}{x^{d-1}}\simeq \begin{cases} -\log \theta & \mathrm{if}\ d=2, \vspace{1mm} \\ {\displaystyle \frac{1}{(d-2)\theta^{d-2}}} & \mathrm{if}\ d\ge 3, \end{cases} \] which has exactly the same singularity as a Euclidean fundamental solution. Therefore the proportionality constant $c_0$ is obtained by matching locally to a Euclidean fundamental solution \begin{gather} {\mathcal S}_R^d=c_0 \mcI_d\simeq {\mathcal G}^d, \label{proportionality} \end{gather} in a small neighborhood of the singularity at $\bfx=\bfxp,$ as the curvature vanishes, i.e., $R\to\infty$. We have shown how to compute a fundamental solution of the Laplace--Beltrami operator on the hypersphere in terms of an improper integral~(\ref{In}). We would now like to express this integral in terms of well-known special functions. A fundamental solution $\mcI_d$ can be computed using elementary methods through its def\/inition (\ref{In}).
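The power-law singularity for $d\ge3$ can be illustrated numerically (a sanity check, not part of the original text): $\theta^{d-2}\,\mcI_d(\theta)\to 1/(d-2)$ as $\theta\to0^+$, matching the $\|\bfx-\bfxp\|^{2-d}$ singularity of $\mcg^d$. A sketch using mpmath:

```python
import mpmath as mp

def I_num(d, theta):
    """Numerically evaluate I_d(theta) = int_theta^{pi/2} dx / sin^{d-1}(x)."""
    return mp.quad(lambda x: mp.sin(x) ** (1 - d), [theta, mp.pi / 2])

theta = mp.mpf('1e-3')
for d in (3, 4, 5):
    # theta^(d-2) * I_d(theta) approaches 1/(d-2) as theta -> 0+
    print(d, mp.nstr(theta ** (d - 2) * I_num(d, theta)))
```

For $d=3,4,5$ this prints values close to $1$, $1/2$, and $1/3$, respectively.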
In $d=2$ we have \[ \mcI_2(\theta)=\int_\theta^{\pi/2} \frac{dx}{\sin x}= \frac{1}{2}\log \frac{1+\cos \theta}{1-\cos \theta} =\log\cot\frac{\theta}{2}, \] and in $d=3$ we have \[ \mcI_3(\theta)=\int_\theta^{\pi/2} \frac{dx}{\sin^2x}=\cot \theta. \] In $d\in\{4,5,6,7\}$ we have \begin{gather*} \mcI_4(\theta) = \frac12\log\cot\frac{\theta}{2}+ \frac{\cos \theta}{2\sin^2 \theta} ,\\ \mcI_5(\theta) = \cot\theta+\frac13\cot^3\theta ,\\ \mcI_6(\theta)= \frac38\log\cot\frac{\theta}{2}+\frac{3\cos\theta}{8\sin^2\theta}+\frac{\cos\theta}{4\sin^4\theta} ,\qquad\mathrm{and}\\ \mcI_7(\theta) = \cot\theta+\frac23\cot^3\theta+\frac15\cot^5\theta. \end{gather*} Now we prove several equivalent f\/inite summation expressions for $\mcI_d(\theta)$. We wish to compute the antiderivative $\mathfrak{I}_m:(0,\pi)\to\R$, which is def\/ined as \[ \mathfrak{I}_m(x):=\int\frac{dx}{\sin^mx}, \] where $m\in\N$. This antiderivative satisf\/ies the following recurrence relation \begin{gather} \mathfrak{I}_m(x)=-\frac{\cos x}{(m-1)\sin^{m-1}x}+\frac{(m-2)}{(m-1)}\mathfrak{I}_{m-2}(x), \label{antiderivreccurence} \end{gather} which follows from the identity \[ \frac{1}{\sin^{m}x}=\frac{1}{\sin^{m-2}x}+\frac{\cos x}{\sin^mx}\cos x, \] and integration by parts. The antiderivative $\mathfrak{I}_m(x)$ naturally breaks into two separate classes, namely \begin{gather} \int\frac{dx}{\sin^{2n+1}x}=-\frac{(2n-1)!!}{(2n)!!} \left[\log\cot\frac{x}{2}+\cos x\sum_{k=1}^n\frac{(2k-2)!!}{(2k-1)!!}\frac{1}{\sin^{2k}x}\right]+C, \label{antiderodd} \end{gather} and \begin{gather} \int\frac{dx}{\sin^{2n}x}= \begin{cases} \displaystyle -\frac{(2n-2)!!}{(2n-1)!!}\cos x \sum_{k=1}^n\frac{(2k-3)!!}{(2k-2)!!}\frac{1}{\sin^{2k-1}x}+C, \qquad\mbox{or}\vspace{1mm}\\ \displaystyle -(n-1)!\sum_{k=1}^n\frac{\cot^{2k-1}x}{(2k-1)(k-1)!(n-k)!}+C, \end{cases} \label{antidereven} \end{gather} \noindent where $C$ is a constant.
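These low-dimensional closed forms can be cross-checked against the def\/ining integral~(\ref{In}) by direct numerical quadrature (an illustrative check, not part of the paper):

```python
import mpmath as mp

def I_num(d, theta):
    """I_d(theta) = int_theta^{pi/2} dx / sin^{d-1}(x), by numerical quadrature."""
    return mp.quad(lambda x: mp.sin(x) ** (1 - d), [theta, mp.pi / 2])

theta = mp.mpf('0.7')
c, s, ct = mp.cos(theta), mp.sin(theta), mp.cot(theta)
closed_forms = {
    2: mp.log(mp.cot(theta / 2)),
    3: ct,
    4: mp.log(mp.cot(theta / 2)) / 2 + c / (2 * s**2),
    5: ct + ct**3 / 3,
    6: 3 * mp.log(mp.cot(theta / 2)) / 8 + 3 * c / (8 * s**2) + c / (4 * s**4),
    7: ct + 2 * ct**3 / 3 + ct**5 / 5,
}
for d, val in closed_forms.items():
    print(d, mp.nstr(val - I_num(d, theta)))  # each difference ≈ 0
```

Each closed form agrees with the quadrature value to within numerical precision.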
The double factorial $(\cdot)!!:\{-1,0,1,\ldots\}\to\N$ is def\/ined by \[ n!!:= \begin{cases} \displaystyle n\cdot(n-2)\cdots 2 & \mathrm{if}\ n\ \mathrm{even}\ge 2, \nonumber \\ \displaystyle n\cdot(n-2)\cdots 1 & \mathrm{if}\ n\ \mathrm{odd}\ge 1, \\ \displaystyle 1 & \mathrm{if}\ n\in\{-1,0\}. \end{cases} \] Note that $(2n)!!= 2^nn!$ for $n\in\N_0$. The f\/inite summation formulae for $\mathfrak{I}_m(x)$ all follow trivially by induction using (\ref{antiderivreccurence}) and the binomial expansion (cf.~(1.2.2) in~\cite{NIST}) \[ (1+\cot^2x)^n=n!\sum_{k=0}^n\frac{\cot^{2k}x}{k!(n-k)!}. \] The formulae (\ref{antiderodd}) and (\ref{antidereven}) are essentially equivalent to (2.515.1--2) in \cite{Gradshteyn2007}, except that (2.515.2) is in error: the factor $28^k$ should read $2^k$. This is also verif\/ied in the original citing reference~\cite{Timofeev}. By applying the limits of integration from the def\/inition of~$\mcI_d(\theta)$ in~(\ref{In}) to (\ref{antiderodd}) and (\ref{antidereven}) we obtain the following f\/inite summation expression \begin{gather} \mcI_d(\theta)= \begin{cases} \displaystyle \frac{(d-3)!!}{(d-2)!!}\left[\log\cot \frac{\theta}{2} +\cos \theta\sum_{k=1}^{d/2-1}\frac{(2k-2)!!}{(2k-1)!!}\frac{1}{\sin^{2k}\theta}\right] & \mathrm{if}\ d\ \mathrm{even}, \vspace{1mm}\\ \left\{ \begin{array}{l} \displaystyle \left(\frac{d-3}{2}\right)! \sum_{k=1}^{(d-1)/2} \frac{\cot^{2k-1}\theta} {(2k-1)(k-1)!((d-2k-1)/2)!}, \vspace{2mm}\\ \mathrm{or} \\ \displaystyle \frac{(d-3)!!}{(d-2)!!}\cos\theta \sum_{k=1}^{(d-1)/2} \frac{(2k-3)!!}{(2k-2)!!} \frac{1}{\sin^{2k-1}\theta}, \end{array} \right\} & \mathrm{if}\ d\ \mathrm{odd}.
\end{cases} \label{sumgradryzhikin} \end{gather} Moreover, the antiderivative (indef\/inite integral) can be given in terms of the Gauss hypergeometric function as \begin{gather} \int\frac{d\theta}{\sin^{d-1}\theta}=-\cos \theta \; {}_2F_1\left(\frac12,\frac{d}{2};\frac32;\cos^2\theta\right) +C, \label{antiderivativecos2r} \end{gather} where $C\in\R$. This is verif\/ied as follows. By using \[ \frac{d}{dz}\; {}_2F_1(a,b;c;z)=\frac{ab}{c}\; {}_2F_1(a+1,b+1;c+1;z) \] (see (15.5.1) in~\cite{NIST}), and the chain rule, we can show that \begin{gather*} -\frac{d}{d\theta} \cos\theta\;{}_2F_1\left(\frac12,\frac{d}{2};\frac32;\cos^2\theta\right) \\ \qquad = \sin\theta \left[ {}_2F_1 \left(\frac12,\frac{d}{2};\frac32;\cos^2\theta\right)+\frac{d}{3}\cos^2\theta \; {}_2F_1 \left(\frac32,\frac{d+2}{2};\frac52;\cos^2\theta\right) \right]. \end{gather*} The second hypergeometric function can be simplif\/ied using Gauss' relations for contiguous hypergeometric functions, namely \[ z\; {}_2F_1(a+1,b+1;c+1;z)=\frac{c}{a-b}\bigl[{}_2F_1(a,b+1;c;z)-{}_2F_1(a+1,b;c;z)\bigr] \] (see \cite[p.~58]{Erdelyi}), and \[ {}_2F_1(a,b+1;c;z)=\frac{b-a}{b}{}_2F_1(a,b;c;z) +\frac{a}{b}\;{}_2F_1(a+1,b;c;z) \] (see (15.5.12) in~\cite{NIST}). By applying these formulae, the term with the hypergeometric function cancels leaving only a term which is proportional to a binomial through \[ {}_2F_1(a,b;b;z)=(1-z)^{-a} \] (see (15.4.6) in~\cite{NIST}), which reduces to $1/\sin^{d-1}\theta$. By applying the limits of integration from the def\/inition of $\mcI_d(\theta)$ in (\ref{In}) to (\ref{antiderivativecos2r}) we obtain the following Gauss hypergeometric representation \begin{gather} \mcI_d(\theta)= \cos\theta\;{}_2F_1\left(\frac12,\frac{d}{2};\frac32;\cos^2\theta\right). \label{Idthetagausscos} \end{gather} Using (\ref{Idthetagausscos}), we can write another expression for $\mcI_d(\theta)$. 
Applying Euler's transformation \[ {}_2F_1(a,b;c;z)=(1-z)^{c-a-b}\; {}_2F_1\left(c-a,c-b;c;z\right) \] (see (2.2.7) in~\cite{AAR}), to (\ref{Idthetagausscos}) produces \[ \mcI_d(\theta)= \frac{\cos\theta}{\sin^{d-2}\theta}\; {}_2F_1\left(1,\frac{3-d}{2};\frac32;\cos^2\theta\right). \] Our derivation for a fundamental solution of Laplace's equation on the $R$-radius hypersphere in terms of Ferrers function of the second kind (associated Legendre function of the second kind on the cut) is as follows. If we let $\nu+\mu=0$ in the def\/inition of Ferrers function of the second kind ${\mathrm Q}_\nu^\mu:(-1,1)\to\C$ (\ref{ferrerssecondkinddefnhypergeom}), we derive \[ {\mathrm Q}_\nu^{-\nu}(x)=\frac{\sqrt{\pi}}{2^\nu}\frac{x(1-x^2)^{\nu/2}} {\Gamma\left(\nu+\frac12\right)}\; {}_2F_1\left(\frac12,\nu+1;\frac32;x^2\right), \] for all $\nu\in\C$. If we let $\nu=d/2-1$ and substitute $x=\cos\theta$, then we have \begin{gather} {\mathrm Q}_{d/2-1}^{1-d/2}(\cos\theta)=\frac{\sqrt{\pi}}{2^{d/2-1}}\frac{\cos\theta \sin^{d/2-1}\theta} {\Gamma\left(\frac{d-1}{2}\right)}\; {}_2F_1\left(\frac12,\frac{d}{2};\frac32;\cos^2\theta\right). \label{ferrerssecondnuminusnu} \end{gather} Using the duplication formula for gamma functions (\ref{duplicationformulagamma}), then through (\ref{ferrerssecondnuminusnu}) we have \begin{gather} \mcI_d(\theta)=\frac{(d-2)!}{\Gamma(d/2)2^{d/2-1}}\frac{1}{\sin^{d/2-1}\theta} {\mathrm Q}_{d/2-1}^{1-d/2}(\cos\theta). \label{idferrerq2} \end{gather} We have therefore verif\/ied that the harmonics computed in Section~\ref{SepVarStaHyp}, namely $u_{2+}^{d,0}$ (\ref{u2pmdl}), give an alternate form for a fundamental solution of the Laplacian on the hypersphere.
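Both Gauss hypergeometric representations of $\mcI_d(\theta)$ can likewise be checked against the def\/ining integral numerically (an illustration, not part of the original text), e.g., with mpmath's implementation of ${}_2F_1$:

```python
import mpmath as mp

def I_num(d, theta):
    """I_d(theta) = int_theta^{pi/2} dx / sin^{d-1}(x), by numerical quadrature."""
    return mp.quad(lambda x: mp.sin(x) ** (1 - d), [theta, mp.pi / 2])

theta = mp.mpf('1.1')
z = mp.cos(theta) ** 2
for d in range(2, 8):
    # I_d(theta) = cos(theta) 2F1(1/2, d/2; 3/2; cos^2 theta)
    h1 = mp.cos(theta) * mp.hyp2f1(mp.mpf(1) / 2, mp.mpf(d) / 2, mp.mpf(3) / 2, z)
    # Euler-transformed form: cos(theta)/sin^{d-2}(theta) 2F1(1, (3-d)/2; 3/2; cos^2 theta)
    h2 = (mp.cos(theta) / mp.sin(theta) ** (d - 2)
          * mp.hyp2f1(1, mp.mpf(3 - d) / 2, mp.mpf(3) / 2, z))
    print(d, mp.nstr(h1 - I_num(d, theta)), mp.nstr(h1 - h2))  # ≈ 0, ≈ 0
```

For each $d$ the two hypergeometric forms agree with each other and with the quadrature value.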
Note that as a result of our proof, we see that the relevant associated Legendre functions of the second kind on the cut for $d\in\{2,3,4,5,6,7\}$ are (cf.~(\ref{sumgradryzhikin}) and (\ref{idferrerq2})) \begin{gather*} {\mathrm Q}_0(\cos \theta) = \log\cot\frac{\theta}{2},\\ \frac{1}{(\sin\theta)^{1/2}}{\mathrm Q}_{1/2}^{-1/2}(\cos \theta) = \sqrt{\frac{\pi}{2}}\cot \theta,\\ \frac{1}{\sin\theta}{\mathrm Q}_1^{-1}(\cos\theta) = \frac12\log\cot\frac{\theta}{2}+\frac{\cos\theta}{2\sin^2 \theta},\\ \frac{1}{(\sin\theta)^{3/2}}{\mathrm Q}_{3/2}^{-3/2}(\cos\theta) = \frac12\sqrt{\frac{\pi}{2}} \left(\cot \theta+\frac13\cot^3\theta\right), \\ \frac{1}{(\sin\theta)^2}{\mathrm Q}_2^{-2}(\cos\theta) = \frac18\log\cot\frac{\theta}{2}+ \frac{\cos\theta}{8\sin^2\theta}+\frac{\cos\theta}{12\sin^4\theta},\qquad \mbox{and} \\ \frac{1}{(\sin\theta)^{5/2}}{\mathrm Q}_{5/2}^{-5/2}(\cos\theta) = \frac18\sqrt{\frac{\pi}{2}} \left( \cot\theta+\frac23\cot^3\theta+\frac15\cot^5\theta \right). \end{gather*} The constant $c_0$ in a fundamental solution for the Laplace operator on the hypersphe\-re~$\Si_R^d$~(\ref{Hdid}) is computed by locally matching up, through (\ref{proportionality}), to the singularity of a fundamental solution for the Laplace operator in Euclidean space~(\ref{thmg1n}). The coef\/f\/icient $c_0$ depends on~$d$ and~$R$. For $d\ge 3$ we take the asymptotic expansion for $c_0\mcI_d(\theta)$ as $\theta\to 0^+$, and match this to a fundamental solution for Euclidean space~(\ref{thmg1n}). This yields \begin{gather} \displaystyle c_0=\frac{\Gamma\left(d/2\right)}{2\pi^{d/2}}. \label{c0gamma} \end{gather} For $d=2$ we take the asymptotic expansion for \[ c_0\mcI_2(\theta)=-c_0\log\tan\frac{\theta}{2}\simeq c_0\log\|\bfx-\bfxp\|^{-1}, \] as $\theta\to 0^+$, and match this to $\displaystyle \mcg^2(\bfx,\bfxp)=(2\pi)^{-1}\log\|\bfx-\bfxp\|^{-1},$ therefore $\displaystyle c_0=(2\pi)^{-1}$. This exactly matches (\ref{c0gamma}) for $d=2$. 
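The $d=2$ matching can also be seen numerically (an illustrative check, not part of the paper): since $\mcI_2(\theta)=\log\cot(\theta/2)$, one has $\mcI_2(\theta)+\log\theta\to\log 2$ as $\theta\to0^+$, so $c_0\,\mcI_2(\theta)$ reproduces the logarithmic singularity of $\mcg^2$ precisely when $c_0=(2\pi)^{-1}$:

```python
import math

def I2(theta):
    """I_2(theta) = log cot(theta/2) (closed form from the text)."""
    return math.log(1.0 / math.tan(theta / 2))

# I_2(theta) + log(theta) -> log 2 as theta -> 0+, so the -log(theta)
# singularity matches the Euclidean logarithmic fundamental solution.
for theta in (1e-2, 1e-4, 1e-6):
    print(theta, I2(theta) + math.log(theta))  # -> log 2 ≈ 0.6931
```

The residual constant $\log 2$ is harmless for the matching, since only the singular $-\log\theta$ term f\/ixes $c_0$.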
The $R$ dependence of $c_0$ originates from (\ref{In}), where $x$ and $\theta$ represent geodesic distances (cf.~(\ref{diststandard})). The distance $r\in[0,\infty)$ along a geodesic, as measured from the origin of $\Si_R^d$, is given by $r=\theta R$. To show that a fundamental solution (\ref{Hdid}) reduces to the Euclidean fundamental solution at small distances (see for instance~\cite{KalMilPog}), we examine the limit of zero curvature. In order to do this, we take the limit $\theta\to 0^+$ and $R\to\infty$ of (\ref{In}) with the substitution $x=r/R$ which produces a factor of~$R^{d-2}$. So a fundamental solution of Laplace's equation on the Riemannian manifold $\Si_R^d$ is given~by{\samepage \[ {\mathcal S}_R^d({\bf x},{\bf x}^\prime):= {\displaystyle \frac{\Gamma\left(d/2\right)}{2\pi^{d/2}R^{d-2}}\mcI_d\left(\theta\right)}. \] The proof of Theorem \ref{thmh1d} is complete.} Apart from the well-known historical results in two and three dimensions, the closed form expressions for a fundamental solution of Laplace's equation on the $R$-radius hypersphere given by Theorem~\ref{thmh1d} in Section~\ref{FunSolLapHd} appear to be new. Furthermore, the Ferrers function (associated Legendre) representations in Section~\ref{SepVarStaHyp} for the radial harmonics on the $R$-radius hypersphere do not appear to have previously appeared in the literature. \subsection*{Acknowledgements} Much thanks to Ernie Kalnins, Willard Miller~Jr., George Pogosyan, and Charles Clark for valuable discussions. I would like to express my gratitude to the anonymous referees and an editor at SIGMA whose helpful comments improved this paper. This work was conducted while H.S.~Cohl was a National Research Council Research Postdoctoral Associate in the Information Technology Laboratory at the National Institute of Standards and Technology, Gaithersburg, Maryland, USA. \pdfbookmark[1]{References}{ref}
\begin{document} \input{tex/notation} \title{Successive Cancellation Inactivation Decoding for Modif\hspace{0.00001mm}ied Reed-Muller and eBCH Codes} \author{\IEEEauthorblockN{Mustafa Cemil Co\c{s}kun\IEEEauthorrefmark{1}\IEEEauthorrefmark{2}\IEEEauthorrefmark{3}, Joachim Neu\IEEEauthorrefmark{7} and Henry D. Pfister\IEEEauthorrefmark{3}} \IEEEauthorblockA{\IEEEauthorrefmark{1}German Aerospace Center (DLR)\, \IEEEauthorrefmark{2}Technical University of Munich (TUM)\, \IEEEauthorrefmark{3}Duke University\, \IEEEauthorrefmark{7}Stanford University \\ Email: [email protected], [email protected], [email protected] } } \maketitle \begin{abstract} A successive cancellation (SC) decoder with inactivations is proposed as an efficient implementation of SC list (SCL) decoding over the binary erasure channel. The proposed decoder assigns a dummy variable to an information bit whenever it is erased during SC decoding and continues with decoding. Inactivated bits are resolved using information gathered from decoding frozen bits. This decoder leverages the structure of the Hadamard matrix, but can be applied to any linear code by representing it as a polar code with dynamic frozen bits. SCL decoders are partially characterized using density evolution to compute the average number of inactivations required to achieve the maximum a-posteriori decoding performance. The proposed measure quantifies the performance vs. complexity trade-off and provides new insight into dynamics of the number of paths in SCL decoding. The technique is applied to analyze Reed-Muller (RM) codes with dynamic frozen bits. It is shown that these modified RM codes perform close to extended BCH codes. 
\end{abstract} \input{tex/introduction.tex} \input{tex/preliminaries.tex} \input{tex/sc_guessing_decoder.tex} \input{tex/analysis_design.tex} \input{tex/numerical_results.tex} \input{tex/Conclusions.tex} \section*{Acknowledgement} This work was supported by the research grant "Efficient Coding and Modulation for Satellite Links with Severe Delay Constraints" funded by the Helmholtz Gemeinschaft through the HGF-Allianz DLR@Uni project Munich Aerospace. The authors thank Gianluigi Liva (DLR) for fruitful discussions that inspired this work, Peihong Yuan (TUM) for providing the dynamic frozen bit constraints for the \ac{eBCH} code and Gerhard Kramer (TUM) for comments improving the presentation. \input{tex/appendix.tex} \IEEEtriggeratref{17} \input{ms.bbl} \end{document}
Between May 1 and July 31, 2015, you can win a new set of Michelin tires, just by introducing someone to the MOA. Winning is easy. Simply have a buddy join the MOA and specify you as the recruiting member. New members can register online and add your name to the registration to qualify both of you for a free set of Michelin tires. Each month, we'll draw one winner from the list of entries and award a set of Michelin tires to the new member and to the recruiting member. Bring as many friends as you want and register early for more chances to win. Each month's drawing will be at random from the total number of entries to date. New members can also call the MOA office at (636) 394-7277 and mention your name as the recruiting member. The offer applies to new members or those whose membership has lapsed for a full year or more. You have to pay for mounting and balancing to have the tires installed, but we'll ship them to you free. Rustle up your friends and pry that last $40 out of their hands. Or, be a real buddy and give them a gift membership. It could mean a new set of treads just in time.
Started Jan 11, 2015

I bought this frame new and built it up with the components that I felt would make the lightest and fastest 'cross bike under the $5,000 I had to spend. Every part on this bike was used for a reason.

Tags: Tubulars, Grifo, Easton, Cyclocross, Conquest

Andrew Leisner

© 2017 Created by Cyclocross Magazine.
The development of The Willow Community Support Services began with consumers of mental health services socializing and discussing a need for something “more”. Having gone through formal group therapy programs that inevitably end, we identified there was a need for an ongoing support system where individuals dealing with mental illness can participate in their own recovery process by working, learning, and socializing together in a safe and welcoming, non-clinical environment.
- Derek Tokarzewski Neetu Singh Mrs. Universe Arab Asia 2019 World Class Beauty Queens Magazine would like to welcome amazing Queen Neetu Singh Mrs. India Universe International Style Icon 2019. Full name: Neetu Singh Title/Year: Mrs. Universe Arab Asia 2019 Other titles: Mrs. India Universe International Style Icon 2019, Mrs. India Universe Entrepreneur 2018, Pageant System: Mrs. India Universe Age: 44 Zodiac sign: Virgo Hobbies: Travelling, Interior Designing, Baking, Dance and Drama Platform: I am currently the General Secretary of NGO: Life Again Foundation Kuwait Chapter, which was founded by the famous South Indian actress of her time Ms. Goutami Tadimalla. In addition to that, I have dedicated myself to creating awareness about cancer by successfully organising a fund-raising event on behalf of our NGO Life Again Foundation by inviting Mr. Shatrughan Sinha (Bollywood star and Member of Parliament India) and Mrs. Poonam Dhillon (Bollywood Actress) in Kuwait in 2018. Years completed: 1 Countries visited: UAE, Bahrain, Thailand, Netherlands, Germany, France, Italy, Vatican City, Belgium, Switzerland, Austria, Georgia Likes: I am a nature lover, I believe in honesty and trust and expect the same from people around me Dislikes: I really dislike waking up early to alarm clocks Status: Married World Class Beauty Queens: Please tell us about yourself. I was born into a very academic family. I have done my schooling and graduation in Patna, India. Coming from a middle-class family, I have always known the utility of things in an efficient manner. Since my childhood days I was very much inclined towards social work. Later I got married to an incredible person and became the beloved wife of Rajesh Kumar (EDO/KCC). I am an abiding and loving mother of a 20-year-old beautiful girl and a 15-year-old bubbly boy. With the support and encouragement of my husband and kids, today I have taken steps to fulfill my dream and do my bit for the society.
I am the General Secretary of Life Again Foundation, Kuwait chapter. I have a clear vision in my mind to work for people in need and to break all the stereotypes and obstacles to prove a woman can break the glass ceiling. World Class Beauty Queens: What does women empowerment mean to you? Women empowerment according to me is the liberty to make our own choices and decisions in life. Empowering womanhood by providing women equal opportunities and also by making them discern their fundamental rights. I also perceive it as not underestimating a woman's strength and capabilities, as she holds a concrete determination and ability to create wonders. World Class Beauty Queens: Tell us about your pageant history. I would say participating in the Mrs. India contest and winning the title of “Mrs. India Universe Entrepreneur 2018” has proved to be a turning point in my life. It has definitely opened many doors of opportunity for me. I wish to grab all of them and squeeze every bit of learning from it and add beautiful experiences in the book called ‘LIFE’. Also, being crowned “Mrs. Universe Arab Asia” as well as “Mrs. International Style Icon” has given me a sense of self. I am all filled with gratitude for the same. World Class Beauty Queens: What inspired you to do your first pageant? I believe that one should keep discovering new opportunities, recognize them on time, and squeeze the most of them to add a value and a new meaning to your life. Similarly I recognized it for the first time when I came across the Mrs. India Universe contest. I grabbed it instantly to ensure that I get a platform to build my own individuality in an exceptional way. My family has been my constant source of motivation, constantly boosting me. World Class Beauty Queens: Why did you choose to compete for your current title? Currently I possess 3 titles, which are “Mrs. India Universe Entrepreneur (2018)”, “Mrs. Universe Arab Asia (2019)” and “Mrs. India Universe International Style Icon (2019)”.
I chose to compete for my current title as I believe style is something which reflects your personality and also defines your status quo. I love experimenting with new attires while ensuring my comfort level is optimum. One should wear whatever they are comfortable in with a blend of confidence. World Class Beauty Queens: To those unfamiliar with your pageant system, please tell us what it is about. It is a platform which provides several opportunities to participate in fashion shows and speaking engagements to promote a positive change in society. It helps one build a good amount of confidence and leadership skills, gain self-esteem, learn teamwork, as well as how to support and encourage others. For me it’s a stage which creates the path to build your own identity and celebrate your individuality, believing in the fact “Dreams do come true”. World Class Beauty Queens: What are you being judged on during the competition? I feel the judges search for the woman holding beauty with brains. In today’s era inner beauty is also an important factor contributing to winning the crown, along with the physical attributes. Personality traits, intelligence and emotional quotients, talent, skills, spontaneity and honesty are some of the parameters you are being judged upon. World Class Beauty Queens: Tell us about your experience during the competition? Well, my experience throughout has been a surreal, philanthropic one. It has helped me to evolve and pushed me to become the best version of myself and establish my own identity. I feel so overwhelmed to be a part of this journey where I met so many esteemed personalities and some incredibly amazing women as participants who inspire me. It taught me to be hungry for your ambitions, to never stop chasing your dreams, and also to be fearless. World Class Beauty Queens: Tell us about your platform or what cause do you volunteer for. I think nothing in this world can be more beautiful than the art of spreading smiles.
People keep aspiring to live the lives of people who are more fortunate than them. But rarely do we give a thought that there are people who are less fortunate than us. So one should learn to give something back to society. In order to fulfill the belief of “giving back” I am associated with an esteemed NGO. I hold the position of General Secretary of Life Again Foundation Kuwait chapter. We assist people affected by cancer and work towards creating awareness for the same. World Class Beauty Queens: What appearances have you done with your title? I was honored to be invited as the chief guest on the occasion of Republic Day in one of the most reputed schools in Kuwait. I was overwhelmed by addressing the gatherings there. I also have been provided the opportunity to judge the beauty pageant F3 Mrs. India International, 2019 on the 7th of July. I am filled with alacrity as I have got the opportunity to judge the subtitle winner for Mrs. India Universe, 2019 at Mauritius. World Class Beauty Queens: What are some of your achievements? I am the General Secretary of Life Again Foundation Kuwait chapter, founded by the famous South Indian actress Ms. Goutami Tadimalla. I have successfully generated funds by inviting two incredible personalities of Bollywood, Shatrughan Sinha and Poonam Dhillon, for the events in Kuwait which were organized on behalf of Life Again Foundation. World Class Beauty Queens: What is your on-stage strategy to win the judges over? The ability to constantly accept challenges and push yourself to be calm. Speak to the point, being specific and clear. Maintain eye contact with the panelist. Don’t forget to wear the most important accessory, that is, your confidence. Always remember to build self-belief. You can create wonders and win hearts only if you are honest with yourself. World Class Beauty Queens: What makes you stand out from all those beautiful girls? I think everyone has flaws, as no one is born perfect.
Recognizing your flaws and accepting them with utmost honesty, and secretly working on them to evolve as a better person, makes me stand out from all those beautiful girls. World Class Beauty Queens: Tell us about the moment your name was called out as the winner. Well, the moment was magical. I was blown away when I heard my name called out, and I could only think about my family at that moment. And yes, hard work and perseverance always pay off. Hard work done with honesty and in the right direction never goes in vain; even if you don’t get the desired result, it always opens doors of opportunity to illuminate your life. World Class Beauty Queens: What does it mean to you to be a beauty queen? Being a beauty queen, according to me, brings you the responsibility of continuously making efforts to bring a positive change in society through your perspectives, style and persona. I need to set standards to embrace change and, more importantly, appreciate change to break all the stereotypes which have shaped our society and have revolved around womanhood, generating hurdles for women to excel in all the spheres of life. World Class Beauty Queens: How did competing in pageants help your life? It has bridged the gap between being a housewife and a working woman. It has brought me positive and enthusiastic engagements and good experiences, has helped me grow and evolve and learn new things, and, most importantly, assisted me to explore new opportunities and merge those opportunities with my skills for mirroring the best version of myself. World Class Beauty Queens: How did pageants help your self-esteem and body image? Obviously it boosts my self-esteem; when you receive recognition and appreciation for the work you have done, it reminds one what butterflies feel like. In addition to boosting self-esteem and body image, it also motivates you to keep working hard with honesty. World Class Beauty Queens: You are an inspiration to all the girls out there.
How does it feel? It feels great and also loads my shoulders with responsibility to work harder, keep striving forward, and leave footprints for others to follow. I vow to inspire them with my drive and passion. World Class Beauty Queens: What are your tips for learning a better pageant walk? Follow the process with confidence. Remain optimistic with a straight posture, do not panic, and enjoy the process with attitude. World Class Beauty Queens: What are your tips for choosing the right pageant dress? It should be age-appropriate, not a mismatch. Obviously it should look elegant when worn. According to me, one should select the dress which they are comfortable wearing. Because if you are not comfortable, it will be mirrored in your face and drop your confidence level. World Class Beauty Queens: What are your tips for a winning interview? Firstly, it is important to be a good listener. I would say talk intellectually and with an objective in mind. Be clear and specific while answering the questions. It’s absolutely ok to be opinionated, and you can share an opinion without sounding rude. Have eye contact with the interviewer, wearing a smile with confidence blended in it. World Class Beauty Queens: How did you prepare for your competition? I prepared for the competition by having a crystal-clear vision in my mind, by gearing myself up to be confident enough, with an objective in my mind to accomplish. I think the only mantra is to work immensely hard, be honest, have a positive approach, and build your own perspectives through your real-time experiences. World Class Beauty Queens: What is the one mistake you made while competing that you wish you could redo and fix? The one mistake I made initially during the competition was aspiring to be a perfect beauty queen by being flawless like my celebrity idol. Soon I realized there is nothing like being flawless; it’s not realistic.
One should aspire to confidence, to feeling pretty and happy, without needing to look any specific way. I am who I am, and the core within you should be irreplaceable. World Class Beauty Queens: What other mistakes do girls make during the contest? The mistake I saw girls making during the contest is believing they are flawed, feeling that they are lacking because they look different from the woman they aspire to be. They keep comparing themselves with others, which is not right. Everyone is special: recognize your one special factor and make it a weapon to conquer the world. World Class Beauty Queens: Any modelling or acting experience? No, I do not have any modelling or acting experience. World Class Beauty Queens: What are your plans for 2019 as a queen? Currently I am preparing to represent India at the global Mrs. India Universe meet in September 2019 in Mauritius. World Class Beauty Queens: What kind of legacy do you want to leave behind? I want to make efforts toward creating a society where women get equal opportunities. I wish to be a voice for empowering womanhood. I wish to inspire all the women who are starting their second innings after their kids have grown up. I want them never to stop dreaming, never to compromise or settle on their dreams, and I urge them to be ambitious. There is no shortage of talent in women; rather, there is a shortage of proper platforms to showcase their talent. It is the right of every woman to dream and to make efforts to turn that dream into reality, building her own identity in the crowd. International Director: Tusshar Dhaliwal and Archana Tomer National Director: Tusshar Dhaliwal and Archana Tomer Local Director: Tusshar Dhaliwal and Archana Tomer Pageant website: World Class Beauty Queens Magazine would like to say thank you to Neetu Singh Mrs. Universe Arab Asia 2019 for this wonderful interview. Neetu Singh Mrs. 
Universe Arab Asia 2019, World Class Beauty Queens Magazine, Photo by Pradeep Choudh
\begin{document} \begin{frontmatter} \title{Tokunaga and Horton self-similarity \\ for level set trees of Markov chains} \author[IZ]{Ilia Zaliapin\corref{cor}} \ead{[email protected]} \cortext[cor]{Corresponding author, Phone: 1-775-784-6077, Fax: 1-775-784-6378} \author[YK]{Yevgeniy Kovchegov} \ead{[email protected]} \address[IZ]{Department of Mathematics and Statistics, University of Nevada, Reno, NV, 89557-0084, USA} \address[YK]{Department of Mathematics, Oregon State University, Corvallis, OR, 97331-4605, USA} \begin{abstract} The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the {\it principal branching} in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called {\it side branching}. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton-Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent. 
\end{abstract} \begin{keyword} self-similar trees \sep Horton laws \sep Tokunaga self-similarity \sep Markov chains \sep level-set tree \MSC 05C05 \sep 60J05 \sep 60G18 \sep 60J65 \sep 60J80 \end{keyword} \end{frontmatter} \section{Introduction and motivation} \label{intro} Hierarchical branching organization is ubiquitous in nature. It is readily seen in river basins, drainage networks, bronchial passages, botanical trees, and snowflakes, to mention but a few (e.g., \cite{Shreve66,NTG97,TPN98,NBW06}). Empirical evidence reveals a surprising similarity among various natural hierarchies --- many of them are closely approximated by so-called {\it self-similar trees} (SSTs) \cite{Shreve66,NTG97,TPN98,BWW00,Horton45,Strahler, Shreve69,Tok78,TBR88,VG00,DR00,PT00,Oss92,Pec95,PG99}. An SST preserves its statistical structure, in a sense to be defined, under the operation of {\it pruning}, i.e., cutting the leaves; this is why the SSTs are sometimes referred to as {\it fractal} trees \cite{NTG97}. A two-parametric subclass of {\it Tokunaga} SSTs, introduced by Tokunaga \cite{Tok78} in a hydrological context, plays a special role in theory and applications, as it has been shown to emerge in unprecedented variety of modeled and natural phenomena. The Tokunaga SSTs with a broad range of parameters are seen in studies of river networks \cite{Shreve66,BWW00,Shreve69,Tok78,TBR88,Pec95,OWZ97}, vein structure of botanical leaves \cite{NTG97,TPN98}, numerical analyses of diffusion limited aggregation \cite{Oss92,MT93}, two dimensional site percolation \cite{TMM+99,YNT+05,ZWG06a,ZWG06b}, and nearest-neighbor clustering in Euclidean spaces \cite{Webb09}. 
The diversity of these processes and models hints at the existence of a universal (not problem-specific) underlying mechanism responsible for the Tokunaga self-similarity and prompts the question: {\it What probability models may produce Tokunaga self-similar trees?} An important answer to this question was given by Burd {\it et al.} \cite{BWW00}, who studied Galton-Watson branching processes and showed that, in this class, the Tokunaga self-similarity is a characteristic property of a {\it critical binary branching}, that is, the discrete-time process that starts with a single progenitor and whose members equiprobably either split in two or die at every step. The critical binary Galton-Watson process is equivalent to Shreve's random river network model, for which the Tokunaga self-similarity has long been known \cite{Shreve66,BWW00,Shreve69,Pec95}. The Tokunaga self-similarity has also been rigorously established in a general hierarchical coagulation model of Gabrielov {\it et al.} \cite{GNT99} introduced in the framework of self-organized criticality, and in a random self-similar network model of Veitzer and Gupta \cite{VG00} developed as an alternative to Shreve's random network model for river networks. Prominently, the results of Burd {\it et al.} \cite{BWW00} reveal the Tokunaga self-similarity for any process represented by the finite Galton-Watson critical binary branching. In the context of this paper, the most important example is a regular Brownian motion, whose various connections to the Galton-Watson processes are well-known (see Pitman \cite{Pitman} for a modern review). For instance, the topological structure of the so-called $h$-excursions of a regular Brownian motion \cite{NP89} and a Poisson sampling of a Brownian excursion \cite{Hobson} are equivalent to a finite critical binary Galton-Watson tree (Sect.~\ref{TF} below explains the tree representation of time series), and hence these processes are Tokunaga self-similar. 
This study further explores Tokunaga self-similarity by focusing on trees that describe the topological structure of the level sets of a time series or a real function, so-called {\it level-set trees}. Our set-up is closely related to the classical Harris correspondence between trees and finite random walks \cite{Harris}, and its later ramifications that include infinite trees with edge lengths \cite{BWW00,OWZ97,Pitman,Ald1,Ald2,Ald3,LeGall93,LeGall05}. The main result of this paper is the Tokunaga and closely related Horton self-similarity for the level-set trees of finite symmetric homogeneous Markov chains (SHMCs) --- see Sect.~\ref{main}, Theorem~\ref{T1}. Notably, the Tokunaga and Horton self-similarity concepts have been defined so far only for finite trees (e.g., \cite{BWW00,Pec95,MG08}). We suggest here a natural extension of Tokunaga and Horton self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. The suggested approach is based on the {\it forest of trees attached to the floor line} as described by Pitman \cite{Pitman}. Finally, we discuss the strong distributional self-similarity that characterizes Markov chains with exponential jumps. The paper is organized as follows. Section~\ref{trees} introduces planar rooted trees, trees with edge lengths, Harris paths, and spaces of random trees with the Galton-Watson distribution. The trees on continuous functions are described in Sect.~\ref{TF}. Several types of self-similarity for trees --- Horton, Tokunaga, and distributional self-similarity --- are discussed in Sect.~\ref{SS}. The main results of the paper are summarized in Sect.~\ref{main}. Section~\ref{add} addresses special properties of exponential Markov chains that, in particular, enjoy the strong distributional self-similarity. Proofs are collected in Sect.~\ref{proofs}. Section~\ref{discussion} concludes. 
\section{Trees} \label{trees} We introduce here planar trees, the corresponding Harris paths, and the space of Galton-Watson trees following Burd {\it et al.} \cite{BWW00}, Ossiander {\it et al.} \cite{OWZ97} and Pitman \cite{Pitman}. \subsection{Planar rooted trees} Recall that a {\it graph} $\mathcal{G}=(V,E)$ is a collection of vertices (nodes) $V=\{v_i\}$, $1\le i \le N_V$ and edges (links) $E=\{e_k\}$, $1\le k \le N_E$. In a {\it simple} graph each edge is defined as an unordered pair of distinct vertices: $\forall\, 1\le k \le N_E, \exists! \, 1\le i,j \le N_V, i\ne j$ such that $e_k=(v_i,v_j)$ and we say that the edge $k$ {\it connects} vertices $v_i$ and $v_j$. Furthermore, each pair of vertices in a simple graph may have at most one connecting edge. A {\it tree} is a connected simple graph $T=(V,E)$ without cycles, which readily gives $N_E=N_V-1$. In a {\it rooted} tree, one node is designated as a root; this imposes a natural {\it direction} of edges as well as the parent-child relationship between the vertices. Specifically, we follow \cite{BWW00} to represent a labeled (planar) tree $T$ rooted at $\phi$ by a bijection between the set of vertices $V$ and set of finite integer-valued sequences $\langle i_1,\dots,i_n \rangle\in T$ such that \begin{itemize} \item[(i)] $\phi=\langle\emptyset\rangle$, \item[(ii)] if $\langle i_1,\dots,i_n\rangle\in T$ then $\langle i_1,\dots,i_k\rangle\in T\quad\forall\,1\le k\le n$, and \item[(iii)] if $\langle i_1,\dots,i_n\rangle\in T$ then $\langle i_1,\dots,i_{n-1}, j\rangle\in T\quad\forall\,1\le j\le i_n$. \end{itemize} This representation is illustrated in Fig.~\ref{fig_seq}. If $v = \langle i_1,\dots,i_n\rangle\in T$ then $u = \langle i_1,\dots,i_{n-1}\rangle\in T$ is called the {\it parent} of $v$, and $v$ is a {\it child} of $u$. A {\it leaf} is a vertex with no children. 
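The sequence encoding of a labeled rooted tree described above can be made concrete in code. The following sketch (the representation and helper names are ours, not from the paper) stores a tree as a set of integer tuples, with the root as the empty tuple, and checks conditions (i)--(iii):

```python
# Illustrative sketch: a finite labeled (planar) rooted tree encoded as a
# set of integer tuples <i_1,...,i_n>, following conditions (i)-(iii).

def is_valid_tree(T):
    """Check conditions (i)-(iii) for a set of tuples T."""
    if () not in T:                          # (i) the root <emptyset> is present
        return False
    for v in T:
        for k in range(len(v)):
            if v[:k] not in T:               # (ii) every prefix is a vertex
                return False
        if v and v[-1] > 1 and v[:-1] + (v[-1] - 1,) not in T:
            return False                     # (iii) all lower sibling indices exist
    return True

def children(T, u):
    return [v for v in T if len(v) == len(u) + 1 and v[:-1] == u]

def leaves(T):
    return [v for v in T if not children(T, v)]

# A binary tree: root with two children; the left child has two children.
T = {(), (1,), (2,), (1, 1), (1, 2)}
assert is_valid_tree(T)
assert sorted(leaves(T)) == [(1, 1), (1, 2), (2,)]
```

The `children` helper also makes the parent-child relation of the text explicit: `u` is the parent of every tuple obtained by appending one index to `u`.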
The number of children of a vertex $u=\langle i_1,\dots,i_n\rangle\in T$ equals $c(u)=\max\{j\}$ over such $j$ that $\langle u,j\rangle\equiv\langle i_1,\dots,i_n,j\rangle\in T$. A {\it binary} labeled rooted tree is represented by a set of binary sequences with elements $i_k=1,2$, where 1,2 represent the left and right planar directions, respectively. Two trees are called {\it distinct} if they are represented by distinct sets of the vertex-sequences. We complete each tree $T$ by a special {\it ghost edge} $\epsilon$ attached to the root $\phi$, so each vertex in the tree has a single parental edge. A natural direction of edges is from a vertex $v$ to its parent $v_p$. In these settings, the total number of distinct trees with $n$ leaves, according to Cayley's formula, is $n^{n-2}$. The total number of distinct binary trees with $n$ leaves is given by the $(n-1)$-th Catalan number \cite{Pitman} \[C_{n-1}=\frac{1}{n}\left(\begin{array}{c} 2n-2\\ n-1 \end{array}\right).\] \subsection{Trees with edge-lengths and Harris path} A tree with {\it edge-lengths} $T=(V,E,W)$ assigns a positive length $w(e)$ to each edge $e$, $W=\{w(e)\}$; such trees are also called {\it weighted trees} (e.g., \cite{BWW00,OWZ97}). The sum of all edge lengths is called the {\it tree length}; we write $\textsc{length}(T)=\sum_e\,w(e)$. We call the pair $(V,E)$ a {\it combinatorial tree} and write $(V,E)=\textsc{shape}(T)$, emphasizing that the lengths are disregarded in this representation. If a tree is represented graphically in a plane, there is a unique continuous map \[\sigma_T\,:\,[0,\,2\textsc{length}(T)]\to\,T\] that corresponds to the {\it depth-first search} of $T$, illustrated in Fig.~\ref{fig_Harris}(a). The depth-first search starts at the root of a planar tree with edge-lengths and contours it, moving at a unit speed, from left to right so that each edge is traveled twice --- its left side in a move away from the root, while its right side in a move towards the root. 
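The Catalan-number count of distinct binary trees stated above can be cross-checked numerically. A small sketch (ours, not part of the paper) compares the closed form $C_{n-1}=\frac{1}{n}\binom{2n-2}{n-1}$ with the standard Catalan recurrence:

```python
# Sketch: the number of distinct planar binary trees with n leaves is the
# Catalan number C_{n-1}. We cross-check the closed form against the
# recurrence C_0 = 1, C_m = sum_{k=0}^{m-1} C_k * C_{m-1-k}.

from math import comb

def catalan_closed(m):
    # C_m = binom(2m, m) / (m + 1); with m = n - 1 this is (1/n) binom(2n-2, n-1)
    return comb(2 * m, m) // (m + 1)

def catalan_recurrence(M):
    C = [1]
    for m in range(1, M + 1):
        C.append(sum(C[k] * C[m - 1 - k] for k in range(m)))
    return C

C = catalan_recurrence(10)
for n in range(1, 11):                 # n leaves -> C_{n-1} distinct binary trees
    assert catalan_closed(n - 1) == C[n - 1]
assert [catalan_closed(m) for m in range(5)] == [1, 1, 2, 5, 14]
```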
The {\it Harris path} for a tree $T$ is a continuous function $H_T(s)\,:\,[0,2\textsc{length}(T)]\to\mathbb{R}$ equal to the distance from the root traveled along the tree $T$ in the depth-first search. Accordingly, for a tree $T$ with $n$ leaves, the Harris path $H_T(s)$ is a continuous excursion --- $H_T(0)=H_T(2\textsc{length}(T))=0$ and $H_T(s)> 0$ for any $s\in \left(0,2\textsc{length}(T)\right)$ --- that consists of $2n$ linear segments of alternating slopes $\pm 1$ \cite{Pitman}, as illustrated in Fig.~\ref{fig_Harris}(b). The closely related {\it Harris walk} $H_n(k)$, $0\le k \le 2n$, for a tree with $n$ vertices is defined as a linearly interpolated discrete excursion with $2n$ steps that corresponds to the depth-first search that marks each vertex in a tree \cite{Harris,Pitman}. Clearly, the Harris path and Harris walk, as functions $[0,2\textsc{length}(T)]\to\mathbb{R}$, have the same trajectory. A binary tree with $n$ leaves has $2n-1$ vertices; accordingly, its Harris path consists of $2n$ segments, and its Harris walk consists of $4n-2=2(2n-1)$ steps. \subsection{Galton-Watson trees} \label{GW} The space $\mathbb{T}$ of planar rooted trees with metric \[d(\tau,\psi)=\frac{1}{1+\sup\{n:\tau|n=\psi|n\}},\] where $\tau|n=\{\langle i_1,\dots,i_k\rangle \in \tau:k\le n\}$, forms a Polish metric space, with the countable dense subset $\mathbb{T}_0$ of finite trees \cite{OWZ97,BWW00}. An important, and most studied, class of distributions on $\mathbb{T}$ is the {\it Galton-Watson distribution}; it corresponds to the trees generated by the Galton-Watson process with a single progenitor and the branching distribution $\{p_k\}$. Formally, the distribution $GW_{\{p_k\}}$ assigns the following probability to a closed ball $B\left(\tau,1/n\right)$, $\tau\in\mathbb{T}$, $n=1,2,\dots$: \[{\sf P}\left( B\left(\tau,\frac{1}{n}\right)\right) =\prod_{v\in\tau|(n-1)} p_{c(v)},\] where $c(v)$ is the number of children of vertex $v$ \cite{BWW00,OWZ97}. 
The classical work of Harris \cite{Harris} notices that the Harris walk for a Galton-Watson tree with unit edge-lengths, $n$ vertices and geometric offspring distribution is an unsigned excursion of length $2n$ of a random walk with independent steps $\pm 1$. Hence, by the conditional Donsker's theorem \cite{Pitman}, a properly normalized Harris walk should weakly converge to a Brownian excursion. Aldous \cite{Ald1,Ald2,Ald3}, LeGall \cite{LeGall93,LeGall05}, and Ossiander {\it et al.} \cite{OWZ97} have shown that the same limiting behavior is seen for a broader class of Galton-Watson trees, which may have non-trivial edge-lengths and non-geometric offspring distribution. \begin{thm}{\rm \cite[Theorem 3.1]{OWZ97}} Let $T_n$ be a Galton-Watson tree with the total progeny $n$ and offspring distribution $L$ such that gcd$\{j:{\sf P}(L=j)>0\}=1$, ${\sf E}(L)=1$, and $0<{\sf Var}(L)=\sigma^2<\infty$, where gcd$\{\cdot\}$ denotes the greatest common divisor. Suppose that the i.i.d. lengths $W=\{w(e)\}$ are positive, independent of $T_n$, have mean 1 and variance $s^2$ and assume that $\lim_{x\to\infty}(x\,\log x)^2{\sf P}(|w(\phi)-1|>x)=0.$ Then the scaled Harris walk $H_n(k)$ converges in distribution to a standard Brownian excursion $B^{\rm ex}_t$: \[\{H_n(2nt)/\sqrt{n},0\le t\le 1\} \overset{d}{\to} \{2\sigma^{-1}\,B^{\rm ex}_t,0\le t\le 1\}, \quad{\rm as~} n\to\infty.\] \end{thm} This paper explores an ``inverse'' problem --- it describes trees that correspond to a given finite or infinite Harris walk. We show, in particular, that the class of trees that correspond to the Harris walks that weakly converge to a Brownian excursion $B^{\rm ex}_t$ is much broader than the space of Galton-Watson trees. \section{Trees on continuous functions} \label{TF} Let $X_t\equiv X(t)\in C\left([L,R]\right)$ be a continuous function on a finite interval $[L,R]$, $L,R<\infty$. This section defines the tree associated with $X_t$. 
We start with a simple situation when $X_t$ has a finite number of local extrema and continue with the general case. \subsection{Tamed functions: Level set trees} Suppose that the function $X_t\in C\left([L,R]\right)$ has a finite number of local extrema. The level set $\mathcal{L}_{\alpha}\left(X_t\right)$ is defined as the pre-image of the function values above $\alpha$: \[\mathcal{L}_{\alpha}\left(X_t\right) = \{t\,:\,X_t\ge\alpha\}.\] The level set $\mathcal{L}_{\alpha}$ for each $\alpha$ is a union of non-overlapping intervals; we write $|\mathcal{L}_{\alpha}|$ for their number. Notice that (i) $|\mathcal{L}_{\alpha}| = |\mathcal{L}_{\beta}|$ as soon as the interval $[\alpha,\,\beta]$ does not contain a value of a local extremum of $X_t$, (ii) $|\mathcal{L}_{\alpha}| \ge |\mathcal{L}_{\beta}|$ for any $\alpha > \beta$ such that $[\beta,\,\alpha]$ does not contain a value of a local maximum of $X_t$, and (iii) $0\le |\mathcal{L}_{\alpha}| \le n$, where $n$ is the number of the local maxima of $X_t$. The {\it level set tree} $\textsc{level}(X_t)$ describes the topology of the level sets $\mathcal{L}_{\alpha}$ as a function of threshold $\alpha$, as illustrated in Fig.~\ref{fig3}. Namely, there are bijections between (i) the leaves of $\textsc{level}(X_t)$ and the local maxima of $X_t$, (ii) the internal (parental) vertices of $\textsc{level}(X_t)$ and the local minima of $X_t$ (excluding possible local minima at the boundary points), and (iii) the edges of $\textsc{level}(X_t)$ and the first positive excursions of $X(t)-X(t_i)$ to the right and left of each local minimum $t_i$. The leftmost and rightmost edges $\langle 1,1,\dots,1\rangle$ and $\langle 2,2,\dots,2\rangle$ may correspond to meanders, that is, to positive segments of $X(t)-X(t_i)$, rather than to excursions. It is readily seen that any function $X_t$ with distinct values of the local minima corresponds to a binary tree $\textsc{level}(X_t)$. 
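For a function given by its values at consecutive local extrema (with linear interpolation in between), the component count $|\mathcal{L}_{\alpha}|$ is easy to compute: a new component opens at every up-crossing of the threshold. The following sketch (ours, not from the paper; it assumes the extrema values are listed in time order, boundary values included) illustrates this:

```python
# Sketch: number of connected components of the level set {t : X_t >= alpha}
# for the piecewise-linear function interpolating the extrema values `vals`.

def level_set_count(vals, alpha):
    count = 1 if vals[0] >= alpha else 0
    for a, b in zip(vals, vals[1:]):
        if a < alpha <= b:            # an up-crossing opens a new component
            count += 1
    return count

# Excursion 0 -> 3 -> 1 -> 2 -> 0: two local maxima (3 and 2), one internal
# minimum (1).  The count never exceeds the number of local maxima.
X = [0, 3, 1, 2, 0]
assert level_set_count(X, 0.5) == 1   # below the internal minimum: one interval
assert level_set_count(X, 1.5) == 2   # above the minimum: the component splits
assert level_set_count(X, 3.5) == 0   # above all maxima: empty level set
```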
In this case, the bijection (iii) can be separated into the bijections between (iii\,a) the edges $\langle \dots, 1\rangle$ of $\textsc{level}(X_t)$ and the first positive excursions of $X(t)-X(t_i)$ to the left of each local minimum $t_i$, and (iii\,b) the edges $\langle \dots, 2\rangle$ of $\textsc{level}(X_t)$ and the first positive excursions of $X(t)-X(t_i)$ to the right of each local minimum $t_i$. The edge $e=(v,u)$ that connects the vertices $v$ and $u$ is assigned the length $w(e)$ equal to the absolute difference between the values of the respective local extrema of $X_t$ --- according to the bijections (i), (ii) above. To complete the above construction, special care should be taken of the edge $\epsilon$ attached to the tree root. Specifically, let $t_i$, $i=1,\dots,n$, be the set of {\it internal} local minima of $X_t$, defined as the set of points such that for any $i$ there exists an open interval $(a_i,b_i)\ni t_i$ such that $X(t_i)\le X(s)$ for any $s\in(t_i,b_i)$, $X(t_i)<X(b_i)$, and $X(t_i)<X(s)$ for any $s\in(a_i,t_i)$. The last definition treats only the leftmost point of any constant-level trough as a local minimum. The root of the tree $\textsc{level}(X_t)$ corresponds to the lowest internal minimum. If the global minimum $M$ of $X_t$ is reached at one of the boundary points, say at $X(L)$, the root of $\textsc{level}(X_t)$ has the parental edge $\epsilon$ with the length $w(\epsilon) = \min_i\left(X(t_i)\right)-X(L)$. At the same time, if the global minimum $M$ of $X_t$ is reached at one of the internal local minima, that is if $M = \min_i\left(X(t_i)\right)<\min\left(X(L),X(R)\right)$, then $|\mathcal{L}_{\alpha}| =1$ for any $\alpha\le M$ and $|\mathcal{L}_{\alpha}| >1$ for $\alpha>M$ sufficiently close to $M$. In other words, the root of $\textsc{level}(X_t)$ does not have the parental edge. In this case, we add the ghost parental edge $\epsilon$ with edge length $w(\epsilon)=1$. 
We write $\textsc{level}(X_t,w(\epsilon))$ to explicitly indicate the length of the ghost edge that might be added to the level-set tree and save the notation $w(\epsilon)$ for the value defined above uniquely for each function $X_t$. By construction, the level set trees are invariant with respect to monotone transformations of time and values of $X_t$: \begin{prop} \label{inv1} Let $F(\cdot)$ and $G(\cdot)$ be monotone functions such that $Y_t=F\left(X_{G(t)}\right)$ is a continuous function on $G\left([L,R]\right).$ Then the function $Y_t$ has the same combinatorial level set tree as the original function $X_t$, that is \[\textsc{shape}\left(\textsc{level}(X_t,1)\right)= \textsc{shape}\left(\textsc{level}(Y_t,1)\right).\] \end{prop} The tree with edge lengths $\textsc{level}(X_t,1)$ is completely specified by the set of the local extrema of $X_t$ and its boundary values, and is independent of the detailed structure of the intervals of monotonicity. To formalize this observation, we write $\cE_X(s)$ for the {\it linear extreme function} obtained from $X_t$ by (i) linearly interpolating its consecutive local extrema and the two boundary values, and (ii) changing time within each monotonicity interval so as to have only constant slopes $\pm 1$. The function $\cE_X(s)$ is hence a piece-wise linear function with slopes $\pm 1$. The length of the domain of this function equals the total variation of $X_t$. We shift this domain to start at $s_0 = w(\epsilon) + X(L) - \min_i\left(X(t_i)\right)$, where $t_i$ are the points of internal local minima as defined above. \begin{prop} \label{inv2} The level set tree of a function $X_t$ coincides with that of the linear extreme function $\cE_X$: $\textsc{level}(X_t,1) =\textsc{level}\left(\cE_X,1\right).$ \end{prop} The particular domain specification of $\cE_X(z)$ is explained by the following statement. 
\begin{prop} \label{Harris} Let $H_T(s)$, $s\in[0,2\textsc{length}(T)]$ be the Harris path of the level set tree $T=\textsc{level}(X_t,1)$; then $H_T(z) = \cE_X(z)$ on the domain $D$ of $\cE_X$. The domains of $H_T(z)$ and $\cE_X(z)$ coincide, i.e., $D=[0,2\textsc{length}(T)]$, if and only if $X_t$ is a positive excursion, and $D\subset [0,2\textsc{length}(T)]$ otherwise. \end{prop} It is known that each piece-wise linear positive excursion (Harris path) that consists of $2n$ segments with slopes $\pm 1$ uniquely specifies a tree $T$ with no vertices of degree 2 (e.g., \cite{Pitman}). Recall that a Harris path corresponds to the depth-first search that visits each edge in a tree twice; hence the Harris path $H_T$ over-specifies the corresponding tree $T$. Similarly, the function $\cE_X(s)$ uniquely specifies (and possibly over-specifies) the tree $\textsc{level}(X_t,1)$ with no vertices of degree 2. If $X_t$ has distinct values of the local minima, then $\cE_X(s)$ uniquely specifies the binary tree $\textsc{level}(X_t,1)$. Our definition of the level-set tree cannot be directly applied to a continuous function with an infinite number of local extrema, say to a trajectory of a Brownian motion. This motivates the general set-up reviewed in the next section \cite{Pitman,LeGall93}. \subsection{General case} Let $X_t\equiv X(t)\in C\left([L,R]\right)$ and $\underline{X}[a,b]:=\inf_{t\in[a,b]} X(t)$, for any $a,b\in\,[L,R]$. We define a {\it pseudo-metric} on $[L,R]$ as \be \label{tree_dist} d_X(a,b):=\left(X(a)-\underline{X}[a,b]\right) + \left(X(b)-\underline{X}[a,b]\right),\quad a,b\in\,[L,R]. \ee It is easily verified that if $X_t$ is the Harris path for a finite tree $T$ and $\sigma_T$ is the corresponding depth-first search, then $d_X(a,b)$ equals the distance along the tree $T$ between the points $\sigma_T(a)$ and $\sigma_T(b)$ (see Fig.~\ref{fig_tree}). We write $a\sim_X b$ if $d_X(a,b)=0$. 
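The pseudo-metric $d_X$ defined above is straightforward to evaluate on a sampled path. A minimal sketch (ours, not from the paper; the path is sampled on an integer grid):

```python
# Sketch: the pseudo-metric d_X(a,b) = (X(a) - min X[a..b]) + (X(b) - min X[a..b])
# evaluated on a list of samples X[0..N] at integer indices a, b.

def d_X(X, a, b):
    lo, hi = min(a, b), max(a, b)
    m = min(X[lo:hi + 1])                    # running minimum between a and b
    return (X[a] - m) + (X[b] - m)

# On the excursion 0,1,2,1,3,0 the two local maxima X[2]=2 and X[4]=3
# merge at the internal minimum X[3]=1, so their tree distance is
# (2-1) + (3-1) = 3, while d_X(t,t) = 0 for every t (a pseudo-metric).
X = [0, 1, 2, 1, 3, 0]
assert d_X(X, 2, 4) == 3
assert all(d_X(X, t, t) == 0 for t in range(len(X)))
```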
Accordingly, we define the tree $\textsc{tree}(X)$ for the function $X_t$ as the metric space $\left([L,R]/\sim_X,d_X\right)$ \cite{Pitman}. \vspace{.5cm} {\bf Remark.} The definition of the level set tree can be readily applied to a real-valued Morse function $f:\mathbb{M}\to\mathbb{R}$ on a smooth manifold $\mathbb{M}$. This is convenient for studying functions in higher-dimensional domains; see, for instance, Arnold \cite{Arnold} and Edelsbrunner {\it et al.} \cite{EHZ03}. The Harris-path and metric-space definitions are not readily applicable to multidimensional domains. \section{Self-similar trees} \label{SS} This section describes three basic forms of tree self-similarity: (i) Horton laws, (ii) self-similarity of side-branching, and (iii) Tokunaga self-similarity. They are based on the Horton-Strahler and Tokunaga schemes for ordering vertices in a rooted binary tree. This approach was introduced by Horton \cite{Horton45} for ordering hierarchically organized river tributaries; the method was later refined by Strahler \cite{Strahler} and further expanded by Tokunaga \cite{Tok78} to include so-called side-branching. \subsection{Horton-Strahler ordering} \label{hs} The Horton-Strahler (HS) ordering of the vertices of a finite rooted labeled binary tree is performed in a hierarchical fashion, from leaves to the root \cite{NTG97,BWW00,Horton45,Strahler}: (i) each leaf has order $r({\rm leaf})=1$; (ii) when both children, $c_1, c_2$, of a parent vertex $p$ have the same order $r$, the vertex $p$ is assigned order $r(p)=r+1$; (iii) when two children of vertex $p$ have different orders, the vertex $p$ is assigned the higher order of the two. Figure~\ref{fig_HST}(a) illustrates this definition. Formally, \begin{equation} r(p)=\left\{ \begin{array}{ll} r(c_1)+1&{\rm if~}r(c_1)=r(c_2),\\ \max\left(r(c_1),r(c_2)\right)&{\rm if~}r(c_1)\ne r(c_2). \end{array} \right. 
\label{HS} \end{equation} A {\it branch} is defined as a union of connected vertices with the same order. The branch vertex nearest to the root is called the {\it initial vertex}; the vertex farthest from the root is called the {\it terminal vertex}. The order $\Omega(T)$ of a finite tree $T$ is the order $r(\phi)$ of its root, or, equivalently, the maximal order of its branches (or nodes). The {\it magnitude} $m_i$ of a branch $i$ is the number of leaves descendant from its initial vertex. Let $N_r$ denote the total number of branches of order $r$ and $M_r$ the average magnitude of branches of order $r$ in a finite tree $T$. An equivalent, and intuitively more appealing, definition of the Horton-Strahler orders is given via the operation of {\it pruning} \cite{BWW00,Pec95}. The pruning of an empty tree results in an empty tree, $\cR(\phi)=\phi$. The pruning $\cR(T)$ of a non-empty tree $T$, not necessarily binary, cuts the leaves and possible chains of degree-2 vertices connected to the leaves. A vertex of degree 2 (or a single-child vertex) $v$ is defined by the conditions $\langle v,1\rangle\in T$, $\langle v,2\rangle{\notin} T$. Each chain of degree-2 vertices connected to a leaf is uniquely identified by a vertex $v$ such that $\langle v,u \rangle\in T$ implies $u=\langle 1,\dots, 1\rangle$. The pruning operation is illustrated in Fig.~\ref{fig_pruning}. The first application of pruning to a binary tree $T$ simply cuts the leaves, possibly producing some single-child vertices. Some of those vertices are connected to the leaves via other single-child vertices and thus will be cut at the next pruning, while the others occur deeper within the pruned tree and wait their turn to be removed. It is readily seen that repeated application of pruning to any tree will result in the empty tree $\phi$. The minimal $\Omega$ such that $\cR^{(\Omega)}(T)=\phi$ is called the {\it order} of the tree. 
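The leaf-to-root recursion above is short enough to state in code. A minimal sketch (ours, not from the paper) computes Horton-Strahler orders for a finite binary tree represented as nested pairs:

```python
# Sketch: Horton-Strahler order of a finite binary tree, computed by the
# leaf-to-root recursion: leaves get order 1; equal children orders r give
# the parent order r+1; unequal orders give the maximum of the two.
# A tree is None (a leaf) or a pair (left, right) of subtrees.

def hs_order(tree):
    if tree is None:
        return 1
    r1, r2 = hs_order(tree[0]), hs_order(tree[1])
    return r1 + 1 if r1 == r2 else max(r1, r2)

leaf = None
# The complete binary tree on 4 leaves has order 3.
assert hs_order(((leaf, leaf), (leaf, leaf))) == 3
# A "comb" on 4 leaves has order 2, however many side branches it carries.
assert hs_order((leaf, (leaf, (leaf, leaf)))) == 2
```

The comb example also illustrates the equivalent pruning definition: each pruning removes one "tooth" level, and the comb vanishes after two prunings.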
A vertex $v$ of tree $T$ has order $r$ if it has been removed at the $r$-th application of pruning: $v\in\cR^{(k)}(T)\,\forall 1\le k <r$, $v\notin\cR^{(r)}(T)$. We say that a binary tree $T$ is {\it complete} if any of the following equivalent statements hold: (i) each branch of $T$ consists of a single vertex; (ii) orders of siblings (vertices with a common parent) are equal; (iii) the parent vertex's order is one unit higher than that of each of its children. There exists only one complete binary tree on $n=2^k$ leaves for each $k=0,1,\dots$; all other trees are called {\it incomplete}. \subsection{Tokunaga indexing} \label{tokunaga} The Tokunaga indexing \cite{NTG97,Tok78,Pec95} extends the Horton-Strahler orders; it is illustrated in Fig.~\ref{fig_HST}b. This indexing focuses on incomplete trees by cataloging {\it side-branching}, which is the merging between branches of different order. Let $\tau^k_{ij}$, $1\le k \le N_j$, $1\le i<j \le \Omega$ denote the number of branches of order $i$ that join the non-terminal vertices of the $k$-th branch of order $j$. Then $N_{ij}=\sum_k\,\tau^k_{ij}$, $j>i$ is the total number of such branches in a tree $T$. The Tokunaga index $T_{ij}$ is the average number of branches of order $i<j$ per branch of order $j$ in a finite tree of order $\Omega\ge j$: \begin{equation} T_{ij}= \frac{N_{ij}}{N_{j}}. \label{tok} \end{equation} In a probabilistic set-up, one considers a space of finite binary trees with some probability measure. Then, $N_i$, $\tau_{ij}^k$, $N_{ij}$, and $T_{ij}$ become random variables. 
We notice that if, for a given $\{ij\}$, the side-branch counts $\tau_{ij}^k$ are independent identically distributed random variables, $\tau_{ij}^k\stackrel{d}{=}\tau_{ij}$, then, by the law of large numbers, \[T_{ij}\stackrel{\rm a.s.}{\longrightarrow} {\sf E}\left(\tau_{ij}\right) \quad {\rm as~} N_j\stackrel{\rm a.s.}{\longrightarrow}\infty,\] where the {\it almost sure} convergence $X_r \stackrel{\rm a.s.}{\longrightarrow} \mu$ is understood as ${\sf P}\left(\displaystyle\lim_{r\to\infty}X_r=\mu\right)=1$. For consistency, we denote the total number of order-$i$ branches that merge with other order-$i$ branches by $N_{ii}$ and notice that in a binary tree $N_{ii}=2\,N_{i+1}$. This allows us to formally introduce the additional Tokunaga indices: $T_{ii}=N_{ii}/N_{i+1}\equiv 2.$ The set $\{T_{ij}\}$, $1\le i \le \Omega-1,1\le j \le \Omega$, $i\le j$ of Tokunaga indices provides a complete statistical description of the branching structure of a finite tree of order $\Omega$. \vspace{0.5cm} Next, we define several types of tree self-similarity based on the Horton-Strahler and Tokunaga indexing schemes. \subsection{Horton laws} \label{sst} The {\it Horton laws}, widely observed in hydrological and biological networks \cite{TPN98,Horton45,VG00,DR00}, state, in their ultimate form, \[ \frac{N_r}{N_{r+1}} = R_B,\quad \frac{M_{r+1}}{M_r} = R_M,\quad R_B,R_M>0,\quad r\ge 1,\] where $N_r$, $M_r$ are, respectively, the total number and average mass of branches of order $r$ in a finite tree of order $\Omega$. McConnell and Gupta \cite{MG08} emphasized the approximate, asymptotic nature of the above empirical statements. 
In the present set-up, it will be natural to formulate the {\it Horton laws} as the almost sure convergence of the ratios of the branch statistics as the tree order increases: \begin{eqnarray} \label{NHL} \frac{N_r}{N_{r+1}} &\stackrel{\rm a.s.}{\longrightarrow}& R_B>0,\quad {\rm for~}r\ge 1, {~\rm as~}\Omega\to\infty,\\ \label{MHL} \frac{M_{r+1}}{M_r} &\stackrel{\rm a.s.}{\longrightarrow}& R_M>0,\quad {\rm as~}r,\Omega\to\infty. \end{eqnarray} Notice that the convergence in \eqref{NHL} is seen for the small-order branches, while the convergence in \eqref{MHL} --- for large-order branches. We call \eqref{NHL},\eqref{MHL} the {\it weak Horton laws}. We also consider {\it strong Horton laws} that assume an almost sure exponential dependence of the branch characteristics on $r$ in a tree of finite order $\Omega$ and magnitude $N$: \begin{eqnarray} \label{NHLs} N_r &\stackrel{\rm a.s.}{\sim}& N_0\,N\,R_B^{-r},\quad {\rm for~}r\ge 1, {~\rm as~}\Omega\to\infty,\\ \label{MHLs} M_r &\stackrel{\rm a.s.}{\sim}& M_0\,R_M^r,\quad {\rm as~}r,\Omega\to\infty \end{eqnarray} for some positive constants $N_0,M_0,R_B$ and $R_M$ and with $x_r \stackrel{\rm a.s.}{\sim} y_r$ standing for \[{\sf P}\left(\displaystyle\lim_{r\to\infty} x_r/y_r =1\right)=1.\] Clearly, the strong Horton laws imply the weak Horton laws. The converse is in general not true; this can be illustrated by a sequence $M_r = R_M^r\,r^C$, for any $C>0$, for which the weak Horton law \eqref{MHL} holds, while the strong law \eqref{MHLs} fails. We notice also that $\Omega\to \infty$ implies $N\to\infty$, but not vice versa; an example is given by a {\it comb} --- a tree of order $\Omega=2$ with an arbitrary number of side branches with Tokunaga index $\{12\}$. This is why the limits above are taken with respect to $\Omega$, not $N$. 
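The counterexample sequence $M_r = R_M^r\,r^C$ mentioned above can be checked numerically; a small sketch (ours, not from the paper):

```python
# Sketch: for M_r = R_M^r * r^C the ratio M_{r+1}/M_r tends to R_M, so the
# weak Horton law holds, while R_M^{-r} * M_r = r^C grows without bound, so
# no constant M_0 makes the strong Horton law M_r ~ M_0 * R_M^r hold.

R_M, C = 2.0, 1.0

def M(r):
    return R_M ** r * r ** C

# The ratios approach R_M as r grows (weak law)...
ratios = [M(r + 1) / M(r) for r in (10, 100, 1000)]
assert abs(ratios[-1] - R_M) < 0.01

# ...yet M_r / R_M^r = r^C diverges (strong law fails).
assert M(1000) / R_M ** 1000 == 1000.0 ** C
```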
The strong Horton laws imply, in particular, that \be \label{ss} N_r\stackrel{\rm a.s.}{\sim} const\,M_r^{-\alpha}, \quad\alpha=\frac{\log R_B}{\log R_M} \ee for appropriately chosen $r\to\infty$ and $\Omega\to\infty$, for instance $r=\sqrt{\Omega}$. The relationship \eqref{ss} is the simplest indication of self-similarity, as it connects the number $N_r$ and the size $M_r$ of branches via a power law. However, a more restrictive property is conventionally required to call a tree self-similar; it is discussed in the next section. \subsection{Tokunaga self-similarity} In a deterministic setting, we call a tree $T$ of order $\Omega$ a {\it self-similar tree} (SST) if its side-branching structure (i) is the same for all branches of a given order: \[\tau_{ij}^k=:\tau_{ij},\quad 1\le k\le N_j,~1\le i<j\le\Omega,\] and (ii) is invariant with respect to the branch order: \be \label{ss_d} \tau_{i(i+k)}\equiv T_{i(i+k)} =: T_{k}\quad {\rm for~} 2\le i+k\le \Omega. \ee A {\it Tokunaga self-similar tree} (TSST) obeys an additional constraint first considered by Tokunaga \cite{Tok78}: \be \label{TSS} T_{k+1}/T_k=c\quad \Leftrightarrow \quad T_{k}=a\,c^{k-1}\quad a,c > 0,~1\le k\le\Omega-1. \ee In a random setting, we say that a tree $T$ of order $\Omega$ is self-similar if ${\sf E}\left(\tau_{i(i+k)}^j\right)=:T_k$ for $1\le j\le N_{i+k}$, $2\le i+k \le\Omega$; and it is Tokunaga self-similar if, furthermore, the condition \eqref{TSS} holds. In a deterministic setting, for a tree satisfying the weak Horton and Tokunaga laws\footnote{In a deterministic setting, the convergence in the Horton laws is understood as the convergence of sequences.}, one has \cite{Tok78,Pec95}: \be \label{HT} R_B = \frac{2+c+a+\sqrt{(2+c+a)^2-8c}}{2}. \ee Peckham \cite{Pec95} has noticed that in a Tokunaga tree of order $\Omega$ one has $N_r=M_{\Omega-r+1}$, which implies that the Horton laws for masses $M_r$ follow from the Horton laws for the counts $N_r$ and $R_M=R_B$. 
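Formula \eqref{HT} makes the Horton ratio an explicit function of the Tokunaga parameters; a minimal numerical sketch (illustrative only, the function name is ours):

```python
import math

def horton_ratio(a: float, c: float) -> float:
    """Horton ratio R_B implied by the Tokunaga parameters (a, c),
    following the Tokunaga--Peckham formula (HT)."""
    s = 2 + c + a
    return (s + math.sqrt(s * s - 8 * c)) / 2

# The parameters (a, c) = (1, 2), which appear below for symmetric
# homogeneous Markov chains, give R_B = 4.
print(horton_ratio(1, 2))  # 4.0
```

For instance, $(a,c)=(1,2)$ yields $R_B=(5+\sqrt{25-16})/2=4$, the value appearing in Theorem~\ref{T1}.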
McConnell and Gupta \cite{MG08} have shown that the weak Horton laws with $R_B=R_M$ hold in a self-similar Tokunaga tree. Zaliapin \cite{Zal10} has shown, moreover, that strong Horton laws hold in a Tokunaga tree and, at the same time, even weak Horton laws may not hold in a general, non-Tokunaga, self-similar tree. The Tokunaga self-similarity describes a two-parametric class of trees, specified by the Tokunaga parameters $(a,c)$. Our goal is to demonstrate that the Tokunaga class is not only structurally simple but is also sufficiently wide. This study establishes the Tokunaga self-similarity for the level-set trees of symmetric homogeneous Markov chains, and, as a direct consequence, for the trees of their scaling limits including a regular Brownian motion. \subsection{Stochastic self-similarity} Burd {\it et al.} \cite{BWW00} define {\it stochastic self-similarity} for a random tree $\tau\in\left(\mathbb{T}_0,P\right)$ as the distributional invariance with respect to the pruning $\cR(\tau)$: \[P\left(\cdot|\tau\ne\phi\right)\circ \cR^{-1}=P(\cdot )\] and prove the following result that explains the importance of Tokunaga self-similarity within the class of Galton-Watson trees as well as the special role of the Galton-Watson critical binary trees. \begin{thm}{\rm \cite[Theorems 1.1, 1.2, 3.17]{BWW00}} Let $\tau\in\left(\mathbb{T}_0,GW_{\{p_k\}}\right)$ with bounded offspring number. Then the following statements are equivalent: \begin{itemize} \item[(i)] Tree $\tau$ is stochastically self-similar. \item[(ii)] ${\sf E}(\tau_{i(i+k)}) =: T_k$, {\it i.e.}, the expectation is a function of $k$ and $T_k$ is defined by this equation. \item[(iii)] Tree $\tau$ has the critical binary offspring distribution, $p_0=p_2=1/2$. \end{itemize} \end{thm} These authors show, furthermore, how the arbitrary binary Galton-Watson distribution is transformed under the operation of pruning. 
\begin{thm}{\rm \cite[Proposition 2.1]{BWW00}} \label{tree_prune} Let $\tau$ be a finite tree with a binary Galton-Watson distribution, $p_0+p_2=1$, with $p_2\le 1/2$. Let $\tau_{n+1}=\cR(\tau_n)$, $n\ge 0$, $\tau_0=\tau$. Then $\tau_{n+1}$ has the binary Galton-Watson distribution $p_0^{(n+1)}+p_2^{(n+1)}=1$ with \[p_2^{(n+1)}=\frac{\left[p_2^{(n)}\right]^2} {\left[p_0^{(n)}\right]^2+\left[p_2^{(n)}\right]^2}.\] \end{thm} We demonstrate below that stochastic (or distributional) self-similarity, within the class of tree representations of homogeneous Markov chains, holds only for Markov chains with symmetric exponential increments. \section{Main results} \label{main} Let $X_k$, $k\in\mathbb{Z}$ be a real-valued Markov chain with homogeneous transition kernel $K(x,y)\equiv K(x-y)$, for any $x,y\in\mathbb{R}$. We call $X_k$ a homogeneous Markov chain (HMC). When working with trees, $X_k$ will also denote a function from $C(\mathbb{R})$ obtained by linear interpolation of the values of the original time series $X_k$; this creates no ambiguities in the present context. An HMC is called {\it symmetric} (SHMC) if its transition kernel satisfies $K(x)=K(-x)$ for any $x\in\mathbb{R}$. We call an HMC {\it exponential} (EHMC) if its kernel is a mixture of exponential jumps. Namely, \[K(x)=p\,\phi_{\lambda_u}(x)+(1-p)\,\phi_{\lambda_d}(-x), \quad 0\le p \le 1, \lambda_u,\lambda_d>0,\] where $\phi_{\lambda}$ is the exponential density \be \label{exp} \phi_{\lambda}(x)= \begin{cases} \lambda e^{-\lambda x}, & x \geq 0, \\ 0, & x<0. \end{cases} \ee We will refer to an EHMC by its parameter triplet $\{p,\lambda_u,\lambda_d\}$. The concept of tree self-similarity is based on the notion of {\it branch order} and is tightly connected to the {\it pruning} operation (Sect.~\ref{hs}, Fig.~\ref{fig_pruning}). In terms of time series (or tamed real functions), pruning corresponds to coarsening the time series resolution by removing the local maxima.
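Both objects introduced above --- the exponential kernel and pruning by removal of local maxima --- are easy to experiment with numerically. The sketch below (illustrative only; all function names are ours) samples a symmetric EHMC $\{1/2,1,1\}$, iteratively passes to local minima, and counts local maxima at each level; by the correspondence used in the proofs below, these counts are the branch counts $N_r$, and they drop by a factor of roughly four per order, in line with the Horton self-similarity established in Theorem~\ref{T1}. Strict inequalities suffice since ties have probability zero for continuous jump distributions.

```python
import random

def sample_ehmc(n, p, lam_u, lam_d, seed=1):
    """Sample n jumps of an EHMC {p, lam_u, lam_d} started at 0."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        if rng.random() < p:
            x.append(x[-1] + rng.expovariate(lam_u))   # upward jump
        else:
            x.append(x[-1] - rng.expovariate(lam_d))   # downward jump
    return x

def local_minima(x):
    """Strict interior local minima: one pruning of the level-set tree."""
    return [x[i] for i in range(1, len(x) - 1)
            if x[i - 1] > x[i] < x[i + 1]]

def n_local_maxima(x):
    """Number of local maxima, i.e., the number of leaves of level(x)."""
    return sum(1 for i in range(1, len(x) - 1)
               if x[i - 1] < x[i] > x[i + 1])

x = sample_ehmc(200_000, 0.5, 1.0, 1.0)
counts = []
for _ in range(4):                  # N_r for r = 1, 2, 3, 4
    counts.append(n_local_maxima(x))
    x = local_minima(x)
print([counts[r] / counts[r + 1] for r in range(3)])  # ratios near 4
```

The empirical ratios fluctuate around $4$ for any symmetric choice of the kernel, while asymmetric parameter triplets (with drift) destroy this behavior, as quantified in Sect.~\ref{expj}.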
Iterative pruning corresponds to an iterative transition to the local minima. We formulate this observation in the following proposition. \begin{prop} \label{ts_prune} The transition from a time series $X_k$ to the time series $X^{(1)}_k$ of its local minima corresponds to the pruning of the level-set tree $\textsc{level}(X)$. Formally, \[\textsc{level}\left(X^{(m)}\right) = \cR^m\left(\textsc{level}(X)\right), \forall m \ge 1,\] where $X^{(m)}$ is obtained from $X$ by iteratively taking local minima $m$ times (i.e., local minima of local minima, and so on). \end{prop} The next result establishes invariance of several classes of Markov chains with respect to the pruning operation. \begin{lem} \label{inv_prune} (a) The local minima of an HMC form an HMC. (b) The local minima of a SHMC form a SHMC. (c) The local minima of an EHMC with parameters $\{p,\lambda_u,\lambda_d\}$ form an EHMC with parameters $\{p^*,\lambda_u^*,\lambda_d^*\}$, where \be \label{iteration} p^*=\frac{p\,\lambda_d}{p\,\lambda_d+(1-p)\,\lambda_u}, \quad \lambda^*_d=p\lambda_d,~~\text{ and }~~ \lambda^*_u=(1-p)\lambda_u. \ee \end{lem} Let $\{M_t\}\equiv \{M^{(1)}_t\}$, $t\in \mathcal{T}_1\subset\mathbb{R}$, be the set of local minima of $X_t$, not including the boundary minima; $\{M^{(2)}_t\}$, $t\in \mathcal{T}_2\subset\mathbb{R}$, be the set of local minima of local minima (local minima of second order), {\it etc.}, with $\{M^{(j)}_t\}$, $t\in \mathcal{T}_j\subset\mathbb{R}$ being the local minima of order $j$. We call a segment between two consecutive points from $\mathcal{T}_r$, $r\ge1$, a {\it (complete) basin} of order $r$. For each $r$, there might exist a single leftmost and a single rightmost segments of $X_t$ that do not belong to any basin of order $r$, with a possibility for them to merge if $X_t$ does not have basins of order $r$ at all. We call those segments {\it incomplete} basins of order $r$.
There is a bijection between basins (complete and incomplete) of order $r$ in $X_t$ and branches of Horton-Strahler order $r$ in $\textsc{level}(X_t)$. This explains the terms {\it complete branch} and {\it incomplete branch} of order $r$. \begin{thm}[{\bf Horton and Tokunaga self-similarity}] \label{T1} The combinatorial level set tree $\textsc{shape}\left(\textsc{level}(X,1)\right)$ of a finite SHMC $X_k$, $k=1,\dots,N$ satisfies the strong Horton laws for any $r\ge 1$, asymptotically in $N$: \be \label{Horton} N_r\overset{\rm a.s.}{\sim} N\,R_{\rm B}^{-r},\quad R_{\rm B}=4, \quad{\rm as~}N\to\infty. \ee Furthermore, $T=\textsc{shape}\left(\textsc{level}(X,1)\right)$ is a Tokunaga self-similar tree with parameters $(a,c)=(1,2)$. Specifically, for a finite tree $T$ of order $\Omega(N)$ the side-branch counts $\tau_{i(i+k)}^j$ with $2\le i+k\le \Omega$ for different complete branches $j$ of order $(i+k)$ are independent identically distributed random variables such that $\tau_{i(i+k)}^j\overset{d}{=:}\tau_{i(i+k)}$ and \be \label{Tokunaga} {\sf E}\left[\tau_{i(i+k)}\right]=:T_k = 2^{k-1}. \ee Moreover, $\Omega\overset{a.s.}{\to}\infty$ as $N\to\infty$ and, for any $i,k\ge 1$, we have \[T_{i(i+k)}\overset{\rm a.s.}{\longrightarrow} T_k=2^{k-1}, \quad{\rm as~}N\to\infty,\] where $T_{i(i+k)}$ can be computed over the entire $X_k$. \end{thm} Next, we extend this result to the case of infinite time series and the weak limits of finite time series. For a linearly interpolated time series $X_t$, $t\ge 0$ (equivalently, for a continuous function with a countable number of separated local extrema) consider the {\it descending ladder} $L_X=\{t:X_t=\underline{X}[0,t]\}$, which in our setting is a set of isolated points and non-overlapping intervals (Fig.~\ref{fig_ladder}). The function $X_t$ is naturally divided into a series of vertically shifted positive excursions on the intervals not included in $L_X$ and monotone falls on the intervals from $L_X$. Any (in the a.s.
sense) infinite SHMC can be decomposed into an infinite number of such finite excursions and finite falls. We index the excursions by $i\ge 1$ from left to right. The extreme time series $\cE\left(X_t^i\right)$ for each finite excursion $X_t^{i}$ is a Harris path for a finite tree $\textsc{level}\left(X_t^{i}\right)$. Hence, each such finite excursion completely specifies a single subtree of $\textsc{tree}\left(X_t\right)$. In particular, it completely specifies the HS orders for all vertices and Tokunaga indices for all branches except the one containing the root within $\textsc{level}\left(X_t^{i}\right)$. We also notice that each fall of $X_t$ on an interval from $L_X$ corresponds to an individual edge of $\textsc{tree}\left(X_t\right)$. Combining the above observations, we conclude that the tree $\textsc{tree}\left(X_t\right)$ can be represented as an infinite number of subtrees $\textsc{level}(X_t^i)$ connected by edges that correspond to the falls of $X_t$ on the descending ladder, see Fig.~\ref{fig_ladder}. Pitman calls this construction, applied to the standard Brownian motion rather than time series, {\it a forest of trees attached to the floor line} \cite[Section 7.4]{Pitman}. Let $N_r^n$ and $N_{ij}^n$ denote, respectively, the number of branches of order $r$ and the number of side branches of Tokunaga index $\{ij\}$ in the first $n$ excursions of $X_t$ as described above. We introduce the cumulative quantities \[\eta_r^n:=\frac{N_r^n}{N_{r+1}^n},\quad T_{ij}^n:=\frac{N_{ij}^n}{N_j^n}\] and define, for the infinite time series $X_t$, \be \label{inf_ss} \eta_r(X_t) = \lim_{n\to\infty} \eta_r^n,\quad T_{ij}(X_t) = \lim_{n\to\infty} T_{ij}^n, \ee whenever the above limits exist in an appropriate probabilistic sense. By Proposition~\ref{inv1}, the level set tree of a finite excursion $X_t^i$ is not affected by monotonic transformations of time and value.
This allows one to extend the above definition \eqref{inf_ss} to the weak limits of time series via Donsker's theorem. In particular, if $X_t$ is a SHMC whose increments have standard deviation $\sigma$, then the rescaled segments of $X_t$ weakly converge to the regular Brownian motion $B_t$, $0\le t\le 1$. Namely, \[X_{(nt)}/\sqrt{n}\overset{d}{\to} \sigma\,B_t\] as $n\to\infty$ through the end points of the finite excursions that comprise $X_t$. This leads to the following result. \begin{cor} \label{Brown} The combinatorial tree $\textsc{shape}\left(\textsc{tree}(B_t)\right)$ of a regular Brownian motion $B_t$, $t\in[0,1]$ satisfies the Horton and Tokunaga self-similarity laws. Namely, \be \eta_r(B_t) = 4 ~{\rm for~} r\ge 1\quad{\rm and}\quad T_{i(i+k)}(B_t)=2^{k-1}~{\rm for~} i,k\ge 1, \ee where the limits \eqref{inf_ss} are understood in the almost sure sense. \end{cor} We conclude this section with a conjecture motivated by the above result as well as extensive numeric simulations \cite{Webb09}. \begin{conj} \label{BH} The tree $\textsc{shape}\left(\textsc{tree}\left(B^H\right)\right)$ of a fractional Brownian motion $B^H_t$, $t\in[0,1]$ with the Hurst index $0<H<1$ is Tokunaga self-similar with $T_{i(i+k)}(B^H)=T_k=c^{k-1}$, $c=2H+1$, $i,k\ge 1$. According to \eqref{HT}, this corresponds to the Horton self-similarity with \be \eta_r(B^H) = 2+H+\sqrt{H^2+2},\quad r\ge 1. \ee The sense of the limits \eqref{inf_ss} is to be determined. \end{conj} \section{Exponential chains} \label{add} This section focuses on exponential chains, which enjoy an important distributional self-similarity and whose level-set trees have the Galton-Watson distribution. \subsection{Distributional self-similarity} \label{DSS_sec} Consider a SHMC $X_k$, $k\in\mathbb{Z}$ with kernel \[K(x)=\frac{f(x)+f(-x)}{2},\] where $f(x)$ is a probability density function with support $\mathbb{R}^+$.
The series of local minima of $X_k$ (or, equivalently, the pruning $X_k^{(1)}$ of $X_k$) also forms a SHMC with transition kernel $K_1(x)$ (see Lemma~\ref{inv_prune}(b)). It is natural to look for chains invariant with respect to the pruning: \be \label{d_inv} X_k \overset{d}{=} c\,X_k^{(1)},\quad c>0. \ee By Proposition~\ref{inv1}, such invariance would guarantee the {\it distributional} Tokunaga self-similarity: \be \label{Tok_d} \tau_{i(i+k)}^j \overset{d}{=:} \tau_{i(i+k)}=T_k, \quad 1\le j\le N_{i+k}, 1\le i+k\le\Omega, \ee where $T_k$ is a random number of side-branches of order $i$ that join an arbitrarily chosen branch of order $(i+k)$. Hence, we seek the conditions on $f(x)$ that ensure $K_1(x)=c^{-1}K(x/c)$ for some constant $c>0$. \begin{prop} \label{DSS} The local minima of a SHMC $X_k$ with kernel $K(x)$ form a SHMC with kernel \[K_1(x)=\frac{K(x/c)}{c},\quad c>0\] if and only if $c=2$ and \be \label{laplace} \Re\left[\widehat{f}(2s)\right]=\left|\frac{\widehat{f}(s)}{2-\widehat{f}(s)}\right|^2, \ee where $\widehat{f}(s)$ is the characteristic function of $f(x)$ and $\Re[z]$ stands for the real part of $z\in\mathbb{C}$. \end{prop} Observe that the set of densities $f(x)$ that satisfy \eqref{laplace} is not empty. A solution is given, for example, by taking $f(x)=\phi_{\lambda}(x)$, $\lambda>0$, with the exponential density $\phi_{\lambda}(x)$ of \eqref{exp}, so that $K(x)$ is the Laplace density --- that is, by an EHMC $\{1/2,\lambda,\lambda\}$. \subsection{Distributional self-similarity for symmetric exponential chains} \label{expj} Lemma~\ref{inv_prune}(c) allows one to study the behavior of the EHMCs formed by the local minima, minima of minima, and so on, of an EHMC $X_k$ with parameters $\{p,\lambda_u,\lambda_d\}$.
Introducing the variables \be \label{ag} A=\frac{1-p}{p},\quad \gamma = \frac{\lambda_d}{\lambda_u} \ee one readily obtains that their counterparts $\{A^*,\gamma^*\}$ for the chain of local minima, given by \eqref{iteration}, are expressed as \be \label{dyn} A^* = \frac{A}{\gamma},\quad \gamma^*=\frac{\gamma}{A}. \ee Notably, this means that the chain of local minima of {\it any} EHMC forms an EHMC with $A\,\gamma=1$. The only fixed point in the space $(A,\gamma)$ with iteration rules \eqref{dyn} is the point $(A=1,\gamma=1)$, which corresponds to the distributionally self-similar EHMC discussed in Sect.~\ref{DSS_sec}. This point is an image (under the pruning operation) of the EHMCs with $A=\gamma$ or $p\,\lambda_d = (1-p)\,\lambda_u$. The last condition is equivalent to ${\sf E}(X_k-X_{k-1}) = 0$ for any $k>1$. The chain of local minima for any EHMC with $A>\gamma$ ($A<\gamma$) corresponds to a point on the upper (lower) part of the hyperbola $A\,\gamma=1$. Any point on this hyperbola, except the fixed point $(1,1)$, moves away from the fixed point toward $(0,\infty)$ or $(\infty,0)$. This is illustrated in Fig.~\ref{fig4}. It follows that Tokunaga and even the weaker Horton self-similarity are seen only for a symmetric EHMC. The above discussion can be summarized in the following statement. \begin{thm} \label{eH} Let $X_k$ be an EHMC $\{p,\lambda_u,\lambda_d\}$. Then $X_k$ satisfies the distributional self-similarity \eqref{d_inv} if and only if $p=1/2$, $\lambda_u=\lambda_d$. Furthermore, the multiple pruning $X_k^{(m)}$, \mbox{$m>1$} of $X_k$ satisfies the distributional self-similarity \eqref{d_inv} if and only if the chain's increments have zero mean, or, equivalently, if and only if $p\,\lambda_d = (1-p)\,\lambda_u$. In this case, the self-similarity is achieved after the first pruning, that is, for the chain $X_k^{(1)}$ of local minima. \end{thm} \begin{cor} The regular Brownian motion with drift is not Tokunaga self-similar.
\end{cor} \subsection{Connection to Galton-Watson trees} An important and well-known fact is that the Galton-Watson distribution (see Sect.~\ref{GW}) is the characteristic property of trees that have Harris paths with alternating exponential steps. We formulate this result using the terminology of our paper. \begin{thm}{\rm \cite[Lemma 7.3]{Pitman},\cite{LeGall93,NP89}} \label{Pit7_3} Let $X_k$ be a discrete-time excursion with a finite number of local minima. The level set tree $\textsc{shape}\left(\textsc{level}(X_k,1)\right)$ is a binary Galton-Watson tree with $p_0+p_2=1$ if and only if the rises and falls of $X_k$, excluding the last fall, are distributed as independent exponential variables with parameters $(\mu+\lambda)$ and $(\mu-\lambda)$, respectively, for $0\le \lambda < \mu$. In this case, \[p_0=\frac{\mu+\lambda}{2\,\mu},\quad p_2=\frac{\mu-\lambda}{2\,\mu}.\] \end{thm} We now use this result to relate sequential pruning of Galton-Watson trees (see Theorem~\ref{tree_prune}) and pruning of EHMCs. Consider the first positive excursion $X_k$ of an EHMC with parameters $\{p^{(0)}= p=1-q,\lambda_u,\lambda_d\}$. The geometric stability of the exponential distribution implies that the monotone rises and falls of $X_k$ are exponentially distributed with parameters $q\,\lambda_u$ and $p\,\lambda_d$, respectively. Theorem~\ref{Pit7_3} then implies that $\textsc{shape}\left(\textsc{level}(X_k)\right)$ is distributed as a binary Galton-Watson tree, $p_0+p_2=1$, with \be \label{p20} p_2 \equiv p_2^{(0)}=\frac{p\,\lambda_d}{q\,\lambda_u+p\,\lambda_d}.
\ee The first pruning $X_k^{(1)}$ of $X_k$, according to \eqref{iteration}, is the EHMC with parameters \[\left\{p^{(1)}=\frac{p\,\lambda_d}{q\,\lambda_u+p\,\lambda_d}, q\,\lambda_u,p\,\lambda_d\right\}.\] Its upward and downward monotone increments are exponentially distributed with parameters, respectively, \[\frac{(q\,\lambda_u)^2}{q\,\lambda_u+p\,\lambda_d}\quad {\rm and}\quad \frac{(p\,\lambda_d)^2}{q\,\lambda_u+p\,\lambda_d}.\] By Theorem~\ref{Pit7_3}, the level-set tree for an arbitrary positive excursion of $X_k^{(1)}$ is a binary Galton-Watson tree, $p_0^{(1)}+p_2^{(1)}=1$, with \[p_2^{(1)}=\frac{(p\,\lambda_d)^2}{(q\,\lambda_u)^2+(p\,\lambda_d)^2}.\] Continuing this way, we find that the $n$-th pruning $X_k^{(n)}$ of $X_k\equiv X_k^{(0)}$ is an EHMC such that the level set tree of each of its positive excursions has a binary Galton-Watson distribution, $p_0^{(n)}+p_2^{(n)}=1$, with \[p_2^{(n)}=\frac{(p\,\lambda_d)^{2^n}} {(q\,\lambda_u)^{2^n}+(p\,\lambda_d)^{2^n}}.\] This can be rewritten in recursive form as \[p_2^{(n)}=\frac{\left[p_2^{(n-1)}\right]^2} {\left[p_0^{(n-1)}\right]^2+\left[p_2^{(n-1)}\right]^2},\quad n\ge 1\] with $p_2^{(0)}$ given by \eqref{p20}. Notably, this is the same recursive system as that discovered by Burd {\it et al.} \cite[Proposition 2.1]{BWW00} (see Theorem~\ref{tree_prune} above) in their analysis of consecutive pruning for the Galton-Watson trees. Another noteworthy relation is given by \[p^{(n)} = p_2^{(n-1)},\quad n\ge 1, \quad p^{(0)}=p, p_2^{(0)}=p_2,\] which connects the ``horizontal'' probability $p^{(n)}$ of an upward jump in a pruned time series $X_k^{(n)}$ with the ``vertical'' probability $p_2^{(n-1)}$ of branching in a Galton-Watson tree.
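The closed form for $p_2^{(n)}$ and the horizontal--vertical relation $p^{(n)}=p_2^{(n-1)}$ can be cross-checked by iterating the parameter map \eqref{iteration} directly; a minimal sketch (illustrative only, with one arbitrary parameter triplet):

```python
def prune(p, lam_u, lam_d):
    """Parameter map (iteration): EHMC parameters of the local minima."""
    q = 1 - p
    return p * lam_d / (p * lam_d + q * lam_u), q * lam_u, p * lam_d

def p2_closed(n, p, lam_u, lam_d):
    """Closed-form branching probability p_2^{(n)} after n prunings."""
    q = 1 - p
    u, d = (q * lam_u) ** (2 ** n), (p * lam_d) ** (2 ** n)
    return d / (u + d)

p0, lu0, ld0 = 0.4, 1.5, 2.0       # arbitrary EHMC {p, lam_u, lam_d}
params = (p0, lu0, ld0)
for n in range(1, 6):
    params = prune(*params)
    # "horizontal" jump probability after n prunings equals the
    # "vertical" branching probability p_2^{(n-1)}
    assert abs(params[0] - p2_closed(n - 1, p0, lu0, ld0)) < 1e-12
print("p^(n) = p_2^(n-1) for n = 1..5")
```

The agreement is exact (up to floating-point error) because both sides equal $(p\,\lambda_d)^{2^{n-1}}/[(q\,\lambda_u)^{2^{n-1}}+(p\,\lambda_d)^{2^{n-1}}]$.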
\section{Terminology and proofs} \label{proofs} \subsection{Level-set trees: Definitions and terminology} \label{terms} This section introduces terminology for discussing the hierarchical structure of the local extrema of a finite time series $X_k$ and relating it to the level set tree $\textsc{level}(X)$. For consistency we repeat some terms introduced above to formulate Theorem~\ref{T1}. Let $\{M_t\}\equiv \{M^{(1)}_t\}$, $t\in \mathcal{T}_1\subset\mathbb{R}$, be the set of local minima of $X_t$, not including possible boundary minima; $\{M^{(2)}_t\}$, $t\in \mathcal{T}_2\subset\mathbb{R}$, be the set of local minima of local minima (local minima of second order), {\it etc.}, with $\{M^{(j)}_t\}$, $t\in \mathcal{T}_j\subset\mathbb{R}$ being the local minima of order $j$. Next, let $\{m_s\}\equiv \{m^{(1)}_s\}$, $s\in\mathcal{S}_1\subset\mathbb{R}$, be the set of local maxima of $X_k$, including possible boundary maxima, and $\{m^{(j+1)}_s\}$, $s\in\mathcal{S}_{j+1}\subset\mathbb{R}$ the set of local maxima of $\{M^{(j)}_t\}$ for all $j\ge 1$. We will call a segment between two consecutive points from $\mathcal{T}_j$ a {\it (complete) basin} of order $j$. Clearly, $\mathcal{T}_1\supset \mathcal{T}_2 \supset\dots$ and each basin of order $r$ is composed of a non-zero number of basins of arbitrary order $k<r$. For each $r$, there might exist a single leftmost and a single rightmost segments of $X_t$ that do not belong to any basin of order $r$, with a possibility for them to merge if $X_t$ does not have basins of order $r$ at all. We call those segments {\it incomplete} basins of order $r$.
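These nested index sets are easy to trace on a toy example. The sketch below (illustrative only, with an arbitrary made-up sequence) computes $\mathcal{T}_1$ and $\mathcal{T}_2$; here a single second-order minimum remains, so there are no complete basins of order $2$, only the two incomplete ones just defined.

```python
def min_indices(x, idx):
    """Positions from idx that are strict local minima of x along idx."""
    return [idx[i] for i in range(1, len(idx) - 1)
            if x[idx[i - 1]] > x[idx[i]] < x[idx[i + 1]]]

x = [5, 1, 4, 0, 6, 2, 3, 1.5, 7]          # hypothetical toy series
t1 = min_indices(x, list(range(len(x))))   # T_1: local minima
t2 = min_indices(x, t1)                    # T_2: minima of minima
print(t1)  # [1, 3, 5, 7] -> first-order basins (1,3), (3,5), (5,7)
print(t2)  # [3]          -> no complete basin of order 2
```

Iterating `min_indices` further produces $\mathcal{T}_3,\mathcal{T}_4,\dots$, matching the iterative pruning of Proposition~\ref{ts_prune}.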
By construction, each basin of order $j$ contains exactly one point from $\mathcal{S}_j$; e.g., there is a single local maximum from $\mathcal{S}_1$ between two consecutive local minima from $\mathcal{T}_1$, {\it etc.} There exists a bijection between basins (complete and incomplete) of order $r$ in $X_t$ and branches of Horton-Strahler order $r$ in $\textsc{level}(X_t)$; this explains the terms {\it complete branch} and {\it incomplete branch} of order $r$. More specifically, there is a bijection between the terminal vertices of order-$r$ branches --- {\it i.e.,} vertices parental to two branches of order $(r-1)$ --- and the local maxima from $\mathcal{S}_r$ within the respective basins. Let us fix an arbitrary local minimum $X_k$ of order $r_k$; then $k\in \mathcal{T}_j$ for $1\le j \le r_k$ and $k\notin \mathcal{T}_j$ for $j>r_k$. For each $j > r_k$ there exists a unique basin of order $j$ that contains $k$; we denote the boundaries of this basin by $l_k^{(j)},r_k^{(j)}\in \mathcal{T}_j$, $l_k^{(j)}<r_k^{(j)}$. Denote by $c^{(j)}_k$ the unique point from $\mathcal{S}_j$ within the interval $\left(l_k^{(j)},r_k^{(j)}\right)$. Multiple points $X_k$ may correspond to the same triplet $\left(l_k^{(j)},c^{(j)}_k,r_k^{(j)}\right)$, which will create no confusion. These definitions are illustrated in Fig.~\ref{fig1}. Consider now a local minimum point $k$ such that $k\notin\bigcup_{j\ge 1} \mathcal{S}_j$. If $l_k^{(j)} < k < c^{(j)}_k$ for a given $j>r_k$, then we call the point $l_k^{(j)}$ the local minimum of order $j$ {\it adjacent} to $k$ and the point $r_k^{(j)}$ the local minimum of order $j$ {\it opposite} to $k$. The analogous terminology is introduced in case $c_k^{(j)} < k < r^{(j)}_k$. By construction, $X_k$ is always greater than the value of its adjacent minimum of any order $j>r_k$. The value of the opposite minimum of order $j$ is denoted by $M^{(j)}_k$.
We have, for each $k$, \be \label{morder} M^{(1)}_k \ge M^{(2)}_k \ge M^{(3)}_k \ge \dots \ee We already noticed that the local maxima $m^{(1)}_t$ correspond to the tree leaves, that is, to its branches of Horton-Strahler order $r=1$. The set $m^{(j+1)}_t$ for each $j\ge 1$ corresponds to the vertices parental to two branches of the same HS order $j$; they are the terminal vertices of order-$(j+1)$ branches. All other local minima of $X_k$ correspond to vertices parental to two vertices of different HS order; we will refer to this as {\it side-branching}. Specifically, a local minimum $X_k$ of order $i$ forms a side-branch with Tokunaga index $\{ij\}$ if \be \label{sb} M^{(j-1)}_k \ge X_k \ge M^{(j)}_k, \ee where the first inequality disappears when $j=i+1$. Figure~\ref{fig2} illustrates this for a basin of second order. In general, each basin of order $r$ contains a uniquely specified positive excursion attached to its higher end. The local maxima of order $k<r$ from this excursion correspond to the side-branches with Tokunaga index $\{km\}$ with $m\le r$. The local maxima of order $k<r$ within the basin but outside of this excursion correspond to the side-branches with Tokunaga index $\{km\}$ with $m> r$. \subsection{Proofs} \noindent {\bf Proof of Propositions~\ref{inv1},\ref{inv2},\ref{Harris} and \ref{ts_prune}:} The statements readily follow from the definition of level set trees. \qed \vspace{.5cm} \noindent {\bf Proof of Lemma~\ref{inv_prune}:} (a) Follows from the independence of increments in $X_k$. (b) Let $\{M_j\}$ be the sequence of local minima of $X_k$ and $d_j = M_{j+1}-M_j$. We have, for each $j$, \be \label{twosum} d_j = \sum_{i=1}^{\xi_+} Y_i - \sum_{i=1}^{\xi_-} Z_i, \ee where $\xi_+$ and $\xi_-$ are independent geometric random variables with parameter $1/2$: \[{\sf P}(\xi_+=k) = {\sf P}(\xi_-=k) = 2^{-k}, \quad k=1,2,\dots;\] $Y_i$, $Z_i$ are independent identically distributed (i.i.d.) random variables with density $f(x)$.
Here the first sum corresponds to $\xi_+$ positive increments of $X_k$ between a local minimum $M_j$ and the subsequent local maximum $m_j$ and the second sum to $\xi_-$ negative increments between the local maximum $m_j$ and the subsequent local minimum $M_{j+1}$. It is readily seen that both the sums in \eqref{twosum} have the same distribution, and hence their difference has a symmetric distribution. We notice that the symmetric kernel for the sequence of local minima $\{M_j\}$ is necessarily different from $K(x)$. (c) Consider an EHMC $X_k$ with parameters $\{p,\lambda_u,\lambda_d\}$. By statement (a) of this lemma, the local minima of $X_k$ form a HMC with transition kernel $K_1(x)$. The latter is the probability distribution of the jumps $d_j$ given by \eqref{twosum} with $\xi_+$, $\xi_-$ being geometric random variables with parameters $p$ and $(1-p)$ respectively, $Y_i\overset{d}{=}\phi_{\lambda_u}$, and $Z_i\overset{d}{=}\phi_{\lambda_d}$. For the characteristic function of $K_1$ one readily has \[\widehat{K_1}(s)= \frac{p(1-p)\lambda_d \lambda_u}{\left((1-p)\lambda_u-is\right)(p\lambda_d+is)} =p_* \cdot \widehat{\phi_{\lambda^*_u}}(s)+(1-p_*) \cdot \widehat{\phi_{\lambda^*_d}}(-s)\] with \[ p_*=\frac{p\,\lambda_d}{p\,\lambda_d+(1-p)\,\lambda_u}, \quad \lambda^*_d=p\lambda_d,~~\text{ and }~~ \lambda^*_u=(1-p)\lambda_u. \] Thus \[K_1(x)=p_* \phi_{\lambda^*_u}(x)+(1-p_*)\phi_{\lambda^*_d}(-x).\] This means that the HMC of local minima also jumps according to a two-sided exponential law, only with different parameters $p_*$, $\lambda^*_d$ and $\lambda^*_u$. \qed \vspace{.5cm} \noindent {\bf Proof of Theorem~\ref{T1}: Horton self-similarity} We notice that the number $N_r$ of order-$r$ branches in $\textsc{level}(X)$ equals the number $|\cS_r|$ of local maxima $m^{(r)}_s$ of order $r$ (with the convention that the local maxima of order 0 are the values of $X_k$). 
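The partial-fraction identity behind the mixture form of $K_1$ in the proof above can be verified numerically; an illustrative sketch with one arbitrary parameter choice (all names are ours):

```python
def phi_hat(s, lam):
    """Characteristic function of the exponential density phi_lam."""
    return lam / (lam - 1j * s)

p, lu, ld = 0.3, 2.0, 1.0
q = 1 - p
# Parameters of the pruned chain, per the iteration formula.
ps, lu_s, ld_s = p * ld / (p * ld + q * lu), q * lu, p * ld
for s in (-2.0, -0.5, 0.7, 3.1):
    k1 = p * q * ld * lu / ((q * lu - 1j * s) * (p * ld + 1j * s))
    mix = ps * phi_hat(s, lu_s) + (1 - ps) * phi_hat(-s, ld_s)
    assert abs(k1 - mix) < 1e-12
print("K_1-hat equals the stated exponential mixture")
```

The agreement is exact for every $s$, since both sides are the same rational function of $s$.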
The probability for a given point of $X_k$ to be a local maximum equals the probability that this point is higher than both its neighbors. The Markov property and symmetry of the chain imply that this probability is $1/4$. Hence the average number of local maxima is \[{\sf E}\left(|\cS_1|\right) ={\sf E}\left(N_1\right) = \sum_{i=2}^{N-1}{\sf P}(X_{i-1}<X_i>X_{i+1}) = \frac{N-2}{4}\sim \frac{N}{4}.\] Let $l_i$ denote the event ($X_i$ is a local maximum). By the Markov property, the events $l_i$, $l_j$ are independent for $|i-j|\ge 2$; hence, the variance ${\sf V}(N_1)\propto N$. This yields \[\lim_{N\to\infty}{\sf E}\left(\frac{N_1}{N}\right)=1/4,\quad \lim_{N\to\infty}{\sf V}\left(\frac{N_1}{N}\right)=0.\] One can combine the strong laws of large numbers for (i) the proportion of the upward increments of $X_t$ (that converges to 1/2) and (ii) the proportion of upward increments followed by a downward increment (that converges to 1/2) to obtain $N_1/N \overset{\rm a.s.}{\to} 1/4$, and, in particular, $N_1\overset{\rm a.s.}{\to}\infty$ as $N\to\infty$. We now use Lemma~\ref{inv_prune}(b) to find, applying the same argument to the pruned time series, that $N_r/N_{r-1}\overset{\rm a.s.}{\to} 1/4$ as $N\to\infty$ for any $r> 1$. Finally, \[\frac{N_r}{N}= \frac{N_r}{N_{r-1}}\frac{N_{r-1}}{N_{r-2}}\dots \frac{N_{1}}{N} \overset{\rm a.s.}{\longrightarrow} 4^{-r},\quad N\to\infty,\] which completes the proof of the strong Horton law \eqref{Horton}. \qed The proof of the Tokunaga self-similarity will require several auxiliary statements formulated below. \begin{lem} \label{L4} A basin of order $j$ contains on average $4^{j-k}$ basins of order $k$, for any $j > k\ge 1$. \end{lem} \noindent {\bf Proof of Lemma~\ref{L4}:} We show first that a basin of order $(j+1)$ contains on average 4 basins of order $j$, for any $j\ge 1$.
The number $\xi$ of points of $X_k$ within a first-order basin ({\it i.e.}, between two consecutive local minima) is $\xi=1 + \xi_++\xi_-$, where $\xi_+$, $\xi_-$ are, respectively, the numbers of basin points (excluding the basin boundaries) to the left and right of its local maximum $m$; and the latter is counted separately in the expression above. The independence of increments of $X_k$ implies \[{\sf P}(\xi_+=k)={\sf P}(\xi_-=k)=2^{-k-1}, k=0,1,\dots,\] and hence \be \label{Exi} {\sf E}[\xi]=1+{\sf E}[\xi_+]+{\sf E}[\xi_-]=1+1+1=3. \ee By Lemma~\ref{inv_prune}(b), the same result holds for the average number of local minima of order $j$ within an order-$(j+1)$ basin, for any $j\ge 1$. Thus, the average number of order-$j$ basins within an order-$(j+1)$ basin is ${\sf E}[\xi]+1=4.$ The independence of increments of $X_k$ implies that the number of order-$(j-1)$ subbasins within an order-$j$ basin is independent of the numbers of order-$j$ basins within an order-$(j+1)$ basin. This leads to the Lemma's statement. \qed \begin{lem} \label{L2} Let $a$ and $b$ be two points chosen at random and without replacement from the set $\{1,2,\dots,N\}$, and let $\eta=(\eta_1,\eta_2,\eta_3)$ denote the random numbers of points within the following intervals, respectively: (i) $\left[1,\,\min(a,b)\right)$, (ii) $\left(\min(a,b),\,\max(a,b)\right)$, and (iii) $\left(\max(a,b),\, N\right]$. Then the triplet $\eta$ has an exchangeable distribution. \end{lem} \noindent{\bf Proof of Lemma~\ref{L2}:} We notice that the triplet $\eta$ can be equivalently constructed by choosing three points $(a,b,c)$ at random from $(N+1)$ points on a circle and counting the number of points within each of the three resulting segments. This implies exchangeability. \qed \begin{lem} \label{L3} Let $Y_i\in\mathbb{R}$, $i=1,2,\dots$ be i.i.d. random variables, a pair $(n,m)\in\mathbb{N}^2$ has an exchangeable distribution independent of $Y_i$, and \be X = \sum_{i=1}^{n} Y_i - \sum_{i=n+1}^{n+m} Y_i.
\ee Then $X$ has a symmetric distribution. \end{lem} \noindent{\bf Proof of Lemma~\ref{L3}:} Let $\Delta = n - m$ and $F(X\,|\,\Delta)$ denote the conditional distribution of $X$ given $\Delta$. From the definition of $X$ it follows that \[F(X\,|\,\Delta = k) = F(-X\,|\,\Delta = -k).\] Exchangeability of $(n,m)$ implies symmetry of $\Delta$ and we thus obtain \begin{eqnarray} F(X) &=& \sum _{k=-\infty}^{\infty} F(X\,|\,\Delta=k)\,{\sf P}(\Delta=k)\nonumber\\ &=& F(X\,|\,\Delta=0)\,{\sf P}(\Delta=0)+\sum _{k=1}^{\infty} \left[F(X\,|\,\Delta=k)+F(X\,|\,\Delta=-k)\right] \,{\sf P}(\Delta=k)\nonumber\\ &=& F(X\,|\,\Delta=0)\,{\sf P}(\Delta=0)+\sum _{k=1}^{\infty} \left[F(X\,|\,\Delta=k)+F(-X\,|\,\Delta=k)\right] \,{\sf P}(\Delta=k)\nonumber. \end{eqnarray} The term $F(X\,|\,\Delta=0)$ and the bracketed sums of conditional distributions are symmetric, which completes the proof. \qed \vspace{.5cm} \noindent {\bf Proof of Theorem~\ref{T1}: Tokunaga self-similarity} We will show that $\displaystyle\lim_{N\to\infty}T_{ij} = 2^{j-i-1}$ for any pair $j>i$. By Lemma~\ref{inv_prune}(b), $T_{ij} = T_{(i+k)\,(j+k)}$ and so it suffices to prove the statement for $i=1$, that is, to show that $\displaystyle\lim_{N\to\infty}T_{1j} = 2^{j-2}$ for any $j\ge 2$. This will be done by induction. Below we use the terminology introduced in Sect.~\ref{terms}. {\it Induction base, $j=2$.} Consider a basin of order 2, formed by two consecutive points from $\mathcal{T}_2$ (local minima of second order). We denote here their positions by $L$ and $R$, $L<R$. This part of the proof will consider only local minima from this interval; they will be referred to as ``points''. The highest local minimum, the point $c=c_k^{(2)}\in\mathcal{S}_2$, forms a vertex parental to two branches of order 1 with Tokunaga indices $\{11\}$; in addition, a random number of local minima corresponds to internal vertices parental to side-branches with Tokunaga indices $\{1j\}$, $j>1$.
The number $N^{(L,R)}_{12}$ of vertices of index $\{12\}$ within $(L,R)$ equals the number of side-branch points $X_k$ that are higher than their opposite minimum of second order: \[N^{(L,R)}_{12} = \#\{L<k<R\,:\,X_k>M^{(2)}_k\}.\] For each side-branch vertex $X_k$ we necessarily have $X_k<X_c$ since $X_c$ is maximal among the local minima. Recall that the local minima form a SHMC. Hence, for a randomly chosen side-branch $X_k$ we have \[X_c-X_k = \sum_{i=1}^{\xi'} Y_i,\] where $\xi'$ is a geometric rv such that ${\sf P}(\xi'=k)=2^{-k}$, and $Y_i>0$ are i.i.d. random variables that correspond to the jumps between the local minima. Clearly, the difference $X_c-M^{(2)}_k$ has the same distribution. The random variables $(X_c-M^{(2)}_k)$ and $(X_c-X_k)$ are independent and so ${\sf P}\left(X_k>M^{(2)}_k\right)=1/2$. The expected number of side-branches with index $\{12\}$ within the interval $(L,R)$ is \be \label{rs} {\sf E}\left[N^{(L,R)}_{12}\right] = {\sf E}\left[\sum_{k=1}^{\xi-1} \1_{(0,\infty)}\left(X_k-M^{(2)}_k\right)\right]. \ee The summation above is taken over the $(\xi-1)$ side-branch points within $(L,R)$; the random variable $\xi$ was described in Lemma~\ref{L4}. We show next that the random variables $\1_{(0,\infty)}\left(X_k-M^{(2)}_k\right)$ are independent of $\xi$. Suppose that there are $\xi=N$ points within $(L,R)$. A particular placement of $k$ and $c$ among these points is obtained by choosing two points at random and without replacement from $\{1,\dots,N\}$. By Lemma~\ref{L2}, the numbers of points between $k$ and $c$ and between $c$ and the local minimum opposite to $X_k$ have an exchangeable conditional distribution. Lemma~\ref{L3} implies that ${\sf P}\left(X_k>M^{(2)}_k\,|\,\xi=N\right)=1/2$. Thus, \be {\sf E}\left[N^{(L,R)}_{12}\right] ={\sf E}[\xi-1]\,{\sf P}\left(X_k>M^{(2)}_k\right) = 2\times 1/2 =1. \ee The numbers $N_{12}^{(L,R)}$ are independent for different basins of order 2 by the Markov property of $X_t$. 
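The two key ingredients of the induction base, ${\sf E}[\xi]=3$ and ${\sf P}(X_k>M^{(2)}_k)=1/2$, can be illustrated by a short simulation (our own sketch with exponential jumps; any continuous positive jump law would do):

```python
import random

random.seed(7)

def geom0():
    """Points on one side of the maximum: P(k) = 2^{-(k+1)}, k = 0, 1, ..."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def gap():
    """Sum of xi' i.i.d. positive jumps, with P(xi' = k) = 2^{-k}, k >= 1."""
    n = 1 + geom0()
    return sum(random.expovariate(1.0) for _ in range(n))

trials = 100_000
# xi = 1 + xi_+ + xi_-  with expectation 3
xi_mean = sum(1 + geom0() + geom0() for _ in range(trials)) / trials
# (X_c - X_k) and (X_c - M_k^(2)) are i.i.d. sums of this form, hence
# P(X_k > M_k^(2)) = P(one gap smaller than the other) = 1/2
half = sum(gap() > gap() for _ in range(trials)) / trials
```

Both estimates match the values used in the proof to within sampling error.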
The strong law of large numbers yields \[T_{12} = \frac{N_{12}}{N_2} \overset{\rm a.s.}{\longrightarrow}1=2^0 ~{\rm as~}N\to\infty.\] {\it Induction step.} Suppose that the statement is proven for $j\ge 2$, that is we know that for a randomly chosen local minimum $X_k$ \[{\sf P}\left(X_k > M^{(j)}_k\right) = 2^{-(j-1)}\] and $T_{1j}\overset{\rm a.s.}{\to}2^{j-2}$ as $N\to\infty$. We will prove it now for $(j+1)$. Consider a randomly chosen side-branch point $X_k$ of order $\{1i\}$, $i>j$. By \eqref{sb}, $X_k < M^{(m)}_k$ for $1\le m \le j$ and thus necessarily $X_k < c_k^{(m+1)}$, $1\le m\le j$, since $c_k^{(m+1)}$ is a local maximum of order-$m$ minima within the basin $(L,R)$ of order $(j+1)$ that contains $k$. Repeating the argument of the induction base we find that $X_k-M_k^{(i)}$ has a symmetric distribution for all $i\le j+1$ and that the probability of $(X_k>M_k^{(i)})$ is independent of the number of local minima of order $j$ within the basin $(L,R)$. This gives, for a randomly chosen $X_k$, \begin{eqnarray*} {\sf P}\left(X_k > M^{(j+1)}_k\right) &=& {\sf P}\left(X_k>M^{(j+1)}_k,X_k>M^{(j)}_k\right)\\ &=& {\sf P}\left(X_k > M^{(j+1)}_k \left| X_k > M^{(j)}_k\right.\right)\, {\sf P}\left(X_k > M^{(j)}_k\right) \\ &=& 2^{-1}\times 2^{-(j-1)}=2^{-j}. \end{eqnarray*} By Lemma~\ref{L4}, the average number of order-2 basins within a basin of order $(j+1)$ is $4^{j-1}$. Each such basin contains on average 2 points that correspond to side branches with Tokunaga index $\{1\bullet\}$. Hence, the average total number of side-branches with index $\{1\bullet\}$ within a basin of order $(j+1)$ is $2\times 4^{j-1}= 2^{2j-1}$. 
Applying Wald's lemma to the sum of indicators $\1_{(0,\infty)}(X_k-M^{(j+1)}_k)$ over the random number of local minima of order $j$ within the basin $(L,R)$, we find the average total number of side-branches of order $\{1(j+1)\}$: \[{\sf E}\left[N^{(L,R)}_{1(j+1)}\right] = 2^{-j}\times 2^{2j-1} = 2^{j-1}.\] The strong law of large numbers yields \[T_{1(j+1)} = \frac{N_{1(j+1)}}{N_{(j+1)}} \overset{\rm a.s.}{\longrightarrow}2^{j-1}, ~{\rm as~}N\to\infty.\] \qed {\bf Proof of Proposition~\ref{DSS}:} Each transition step between the local minima of $X_k$ can be represented as $d_j$ of \eqref{twosum}, where $\{Y_i\}$ and $\{Z_i\}$ are independent random variables with density $f(x)$, and $\xi_+$ and $\xi_-$ are two independent geometric random variables with parameter $1/2$. Wald's lemma readily implies that $c=2$. This gives for the characteristic functions \[\widehat{K}_1(s)=2\,\widehat{K}(2s)=\Re\left[\widehat{f}(2s)\right].\] On the other hand, taking the characteristic function of $d_j$ we obtain \[\widehat{K}_1(s)=\left|\frac{\widehat{f}(s)}{2-\widehat{f}(s)}\right|^2,\] which completes the proof. \qed \vspace{.5cm} \noindent {\bf Proof of Theorem~\ref{eH}:} The Tokunaga and Horton self-similarity for a symmetric EHMC were proven in Theorem~\ref{T1}. Here we show the violation of the Horton self-similarity for an asymmetric EHMC. Let $X^{(m)}_k$ denote the time series obtained by $m$-fold repetitive pruning of the time series $X_k$. Recall that there is a one-to-one correspondence between the local maxima of $X^{(m)}$ and the branches of order $m$ in the level set tree $\textsc{level}(X)$ (see Sect.~\ref{terms}). Hence, the Horton self-similarity is equivalent to the invariance of the proportion of local maxima with respect to pruning. The proportion of local maxima in $X^{(m)}$ equals the probability $P^{(m)}_{\rm min}$ for a randomly chosen point to be a local maximum. 
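The two characteristic-function expressions in the proof of Proposition~\ref{DSS} above can be spot-checked symbolically for exponential jumps (the choice $\widehat{f}(s)=1/(1-is)$, i.e. ${\rm Exp}(1)$ jumps, is ours, made purely for illustration):

```python
import sympy as sp

s = sp.symbols('s', real=True)
fhat = 1 / (1 - sp.I * s)        # characteristic function of Exp(1) jumps

# Re[f^(2s)]
lhs = sp.re(sp.expand_complex(fhat.subs(s, 2 * s)))
# |f^(s) / (2 - f^(s))|^2, computed as g * conj(g)
g = fhat / (2 - fhat)
rhs = sp.expand_complex(g * sp.conjugate(g))

diff = sp.simplify(lhs - rhs)    # both expressions reduce to 1/(1 + 4 s^2)
```

For this jump law both expressions for $\widehat{K}_1(s)$ coincide, consistent with the proposition.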
The Markov property of $X^{(m)}$ --- Lemma~\ref{inv_prune}(c) --- implies that $P^{(m)}_{\rm min}=p^{(m)}(1-p^{(m)})$, where $p^{(m)}$ is the probability of an upward jump in $X^{(m)}$. For an asymmetric EHMC let $A^{(m)}$ be the $m$-th iteration of $A$, as in \eqref{ag}, \eqref{dyn}. Then, for $m\ge 1$, either $A^{(m)}<1$, in which case $A^{(m)}\to 0$, or $A^{(m)}>1$, in which case $A^{(m)}\to \infty$, all as $m\to\infty$ (see Sect.~\ref{expj}, Eq.~\eqref{dyn} and Fig.~\ref{fig4}). This corresponds to $p^{(m)}=1/(A^{(m)}+1)\to 1$ or $p^{(m)}\to0$, respectively, and leads to $P^{(m)}_{\rm min}\to0$. This prohibits the Horton, and hence Tokunaga, self-similarity. \qed \section{Discussion} \label{discussion} This work establishes the Tokunaga and Horton self-similarity for the level-set tree of a finite symmetric homogeneous Markov process with discrete time and continuous state space (Sect.~\ref{main}, Theorem~\ref{T1}). We also suggest a definition of self-similarity for an infinite tree, using the construction of a forest of subtrees attached to the floor line \cite{Pitman}; this allows us to establish the Tokunaga and Horton self-similarity for a regular Brownian motion (Sect.~\ref{main}, Corollary~\ref{Brown}). This particular extension to infinite trees seems natural for {\it tree representation of time series}, where concatenation of individual finite time series corresponds to the ``horizontal'' growth of the corresponding tree. Alternative definitions might be better suited, though, for other situations related, say, to the ``vertical'' growth of a tree from the leaves, as in a branching process. A useful observation is the equivalence of smoothing a time series by removing its local maxima and pruning the corresponding level-set tree (Sect.~\ref{main}, Proposition~\ref{ts_prune}). It allows one to switch naturally between the tree and time-series domains in studying various self-similarity properties. 
As discussed in the introduction, the Tokunaga self-similarity for various finite-tree representations of a Brownian motion follows from (i) the results of Burd {\it et al.} \cite{BWW00} on the Tokunaga self-similarity for the critical binary Galton-Watson process and (ii) the equivalence of a particular tree representation to this process. We suggest here an alternative, direct approach to establishing the Tokunaga self-similarity in Markov processes. Not only does this approach not refer to the Galton-Watson property, it also extends the Tokunaga self-similarity to a much broader class of trees. Indeed, as shown by Le Gall \cite{LeGall93} and Neveu and Pitman \cite{NP89} (see Theorem~\ref{Pit7_3}), the tree representation of any non-exponential symmetric Markov chain is {\it not} Galton-Watson; it is still Tokunaga, however, by our Theorem~\ref{T1}. Peckham and Gupta \cite{PG99} have introduced the {\it generalized Horton laws}, which state the equality in distribution of the rescaled versions of suitable branch statistics $S_r$: $S_r \overset{d}{=} R_S^{r-k}\,S_k$, $R_S>0$. These authors established the existence of the generalized Horton laws in Shreve's random model, that is, for the Galton-Watson trees. Accordingly, one would expect the generalized Horton laws to hold for the exponential symmetric Markov chains. Veitzer and Gupta \cite{VG00} and Troutman \cite{Tr05} have studied the {\it random self-similar network (RSN) model}, introduced in order to explain the variability of the limiting branching ratios in the empirical Horton laws. They have demonstrated that the extended Horton laws hold for various branch statistics, including the average magnitudes $M_r$, in this model. Furthermore, they established the weak Horton laws \eqref{NHL}, \eqref{MHL} and the Tokunaga self-similarity for the RSN model. 
Notably, the RSN model does not belong to the class of Galton-Watson trees, yet it demonstrates the Tokunaga self-similarity, similarly to the non-exponential symmetric Markov chains considered here. Tree representation of stochastic processes \cite{Pitman,NP89,Ald1,Ald2,Ald3,LeGall93,LeGall05} and real functions \cite{Arnold,EHZ03} is an intriguing topic that attracts the attention of mathematicians and natural scientists. A structurally simple yet flexible Tokunaga self-similarity, which extends beyond the classical Galton-Watson space, may provide a useful insight into the structure of existing data sets and models, as well as suggest novel ways of modeling various natural phenomena. For instance, the level set tree representation has recently been used in the analysis of the statistical properties of fragment coverage in genome sequencing experiments \cite{EHP11,EHP10,Evans05}. It seems that some of the methods and results obtained in this work might prove useful for gene studies. In particular, it looks intriguing to test the self-similarity of the gene-related trees and interpret it in the biological context. Notably, the results of this paper, as well as those of Burd {\it et al.} \cite{BWW00}, refer only to a single point $(a,c)=(1,2)$ in the two-dimensional space of Tokunaga parameters. Empirical and numerical studies, however, report a broad range of these parameters, roughly $1< a < 2$ and $1< c < 4$. This motivates a search for more general Tokunaga models; a potential broad family is suggested by our Conjecture~\ref{BH}. The construction of the level set tree is a particular case of a coagulation process; in the real-function context it describes the hierarchical structure of the embedded excursions of increasing lengths and heights. 
Coagulation theory --- a well-established field with a broad range of practical applications in physics, biology, and the social sciences \cite{Bertoin,Wakeley,NBW06} --- is heavily based on the concepts of symmetry and exchangeability \cite{Pitman,Bertoin}. We find it noteworthy that the only property used to establish the results of this paper is the symmetry of a Markov chain. It seems worthwhile to explore the concept of Tokunaga self-similarity for a general coalescent process. \vspace{0.5cm} {\bf Acknowledgement.} We are grateful to Ed Waymire and Don Turcotte for providing continuing inspiration for this study. We also thank Mickael Chekroun, Michael Ghil, Efi Foufoula-Georgiou, and Scott Peckham for their support and interest in this work. Comments of two anonymous reviewers helped us to significantly improve and expand an earlier version of this work. This study was supported by the NSF Awards DMS 0620838 and DMS 0934871.
Relying on Windows Defender as your sole antivirus puts your entire PC at risk of infection. … If you’re looking for the best malware protection and internet security tools, a premium antivirus like Norton or Bitdefender is much more capable. Is Windows Defender good enough in 2021? In essence, Windows Defender is good enough for your PC in 2021; however, this was not the case some time ago. … However, Windows Defender currently provides robust protection for systems against malware programs, which has been proven in a lot of independent testing. Do I need to install antivirus if I have Windows Defender? Though Windows 10 comes with a built-in antivirus and anti-malware tool (Windows Defender), it might not be able to protect your web browsing activities from malicious links. … So, it is important to install antivirus software that offers web protection or internet protection. Can I use Windows Defender? Is Windows Defender always right? How reliable is Windows Defender? SE Labs also found Defender had a total accuracy rating of 99%, placing it 5th out of a field of 13 in its home anti-malware protection report for Q4 2020 – a very respectable result.
Welcome to the Shih Yu-Lang Central YMCA Shih Yu-Lang Central YMCA 220 Golden Gate Ave. (at Leavenworth) San Francisco, CA 94102 Phone: (415) 885-0460 Hours of Operation Monday - Friday: 6:00am - 9:30pm Saturday: 9:00am - 9:30pm Sunday: 9:00am - 7:00pm The Shih Yu-Lang Central YMCA pool will be closed May 5th - 9th. We are conducting structural testing that may extend the closure through May 16th. The Shih Yu-Lang Central YMCA endures as a community cornerstone providing an open door for new experiences and personal development. We welcome all who value Honesty, Respect, Caring, and Responsibility. We have everything a health seeker needs. Our convenient location makes it easy and our relaxed vibe makes it fun. Please come in for a tour. Financial Assistance is available for those who qualify, thanks to the generosity of our donors. Financial Assistance Application Financial Assistance Information Sheet
It was Saturday night, the first one of 2013. I was going to visit my dad at his house. He lives far from the city. It's already dark outside when I go to leave and I notice these very tall beings that looked like they were made of pure light but formed like the shape of a human. I only saw them for a second because they glided off fast. Their movement was beautiful and then they vanished from my eyesight, phasing away fast. I could feel them still near me. I followed through with my plans. I told my dad I just saw aliens but he didn’t believe me. A little disappointed at his reaction, I went home after a while. I put my shoes in the spot I always do when I get home and I fell asleep. Once asleep I had a “dream.” I was in a trance and didn’t even know it at the time. In the dream I am all of a sudden at this creek right by my dad's house. It's dark and quiet. I get out and start walking to the creek. It's illuminated in this glowing light blue light. I wade into it and see and feel my feet only in the water. It’s deeper where I was standing so I was floating. I could feel this warm healing light coming from the water. It surrounded my body and I could sense the energy of the light beings that I had just seen earlier. I looked around as I floated higher and higher. I was so high in the sky but I was amazed, not scared. Then all of a sudden I am laying on a medical table and I can’t see well. It's very hazy and blurry. A tall woman-looking entity was standing at the end of the table. She had no hair and a large head flaring out where the brain is. Hers was bigger than a human's. She alarmed me with her appearance so I started feeling fear. But then she touched my ankle and I was connected to her. She was sending signals of love and protection over me. It felt like the love you get from a parent when you are a little child. She was so calm and nurturing. She then started talking in another language via telepathy or clairaudience and my mind translated what she was saying. 
She told me that they had to restore my light body. That I suffered a lot of trauma and that it was toxic to me. I could feel them pulling all my energies like waves. It would surge and they would pull out the bad energies with their hands. She told me it was time for me to sleep and rest because I was about to start my mission and that I will be receiving transmissions and instructions later on. The next day I woke up and noticed a few healed-over marks on my body where they were working. My shoes were soaked, and my gas gauge didn't match up. I have glimmers of other memories there but I can’t remember for whatever reason. I know there was a place with children that they showed me. I felt a connection to some of them but I just don’t remember the details or know how. If anyone knows of a way to push past mental blocks let me know. Share Your Thoughts? One way would be to call up the glimmers you do remember in meditation and focus on those for a while. Perhaps call in a guide and ask them for help. Start with the clearest “glimmer” and look around. What colors are there? People? What do you feel? Anything familiar? Anything foreign? Inside or outside? Day or night? “Priming the pump” like this can help “syphon” out more information. I thought this one was especially good, thanks! Usually mental blocks are there because you put them there to protect yourself. You have to be ready to face the demon in that closet you thought was better kept away from you at some point. Find the door. Walk into it. You may re-experience the last thing you felt in that channel, which may or may not be nice. Bring a new, adult perspective to your old dusky closets, make a new empowering decision, and let fresh emotions flow out of them. It will open your channels. It takes time and patience and courage. Take it slowww! Many people say you can simply request to have your memories back. Sometimes they put “blocks” there for your own comfort. 
Now that you have established a safe loving connection, I am sure the memories will come back in time. Thank you all for your thoughts! I do meditate and it has cleared things up in areas, but I think it's just such a different experience that I have disbelieved parts that may have scared me due to my lack of knowledge at the time. Now I am able to shift negative thoughts and energies into at least a neutral point of view and revisit past memories.
\begin{document} \title{Rational solutions of dressing chains and higher order Painlev\'{e} equations.} \author{D. Gomez-Ullate$^{a,b,c}$, Y. Grandati$^{d}$, S.\ Lombardo$^{e}$ and R. Milson$^{f}$ } \date{$a$: Institute of Mathematical Sciences (ICMAT), C/ Nicolas Cabrera 15, 28049 Madrid, Spain.\\ $b$: Escuela Superior de Ingenier\'ia, Universidad de C\'adiz, 11519, Spain. \\ $c$: Departamento de F\'isica Te\'orica, Universidad Complutense de Madrid, 28040 Madrid, Spain.\\ $d$: Laboratoire de Physique et Chimie Th\'{e}oriques, Universit\'{e} de Lorraine, 1 Bd Arago, 57078 Metz, Cedex 3, France.\\ $e$: Mathematical Sciences Department, Loughborough University, Epinal Way, Loughborough, Leicestershire, LE11 3TU, UK.\\ $f$: Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, B3H\ 3J5, Canada.} \maketitle \begin{abstract} We present a new approach to determine the rational solutions of the higher order Painlev\'{e} equations associated with periodic dressing chain systems (A$_{\text{n}}^{(1)}$-Painlev\'{e} systems). We obtain new sets of solutions, giving determinantal representations indexed by specific Maya diagrams in the odd case or universal characters in the even case. \end{abstract} \section{\protect\bigskip Introduction} It is now well known that the six Painlev\'{e} equations PI-PVI, discovered more than one century ago by Painlev\'{e} and Gambier \cite{conte,clarkson2} (see also Fuchs, Picard and Bonnet \cite{conte2}), define new transcendental objects which can be thought of as nonlinear analogues of special functions. Except for the first one, the Painlev\'{e} equations all depend on a set of parameters; while for generic values of these parameters the solutions cannot be reduced to the usual transcendental functions, it appears that for some specific values of these parameters we retrieve classical transcendental functions or even rational functions \cite{conte,clarkson2,noumi,gromak}. 
In the last decades, the classification and the properties of these last solutions have been a subject of active research \cite{clarkson2,noumi}. In the early nineties, Shabat, Veselov and Adler \cite{shabat,vesshab,adler} introduced the concept of dressing chains, showing that they possess the Painlev\'{e} property and that the Painlev\'{e} equations PII-PVI can be described in terms of dressing chains (scalar or matricial) of low orders. The higher order chains can then be considered as generalizing the classical Painlev\'{e} equations and the problem of finding their rational solutions arises naturally \cite{clarkson1,clarkson5}. For a Schr\"{o}dinger operator, the scalar dressing chain of period $p$ constitutes a higher order generalization of PIV for $p$ odd (the symmetric form of PIV itself corresponds to the case $p=3$) and a higher order generalization of PV for $p$ even (the symmetric form of PV itself corresponds to the case $p=4$). In the case of dressing chains of odd periodicity, previous results \cite{veselov} seem to indicate that these solutions are necessarily obtained from rational extensions of the harmonic oscillator (HO) potential \cite{GGM1,GGM0,GGM}. A remaining question is then to determine, among all these rational dressings of the harmonic oscillator, the particular ones which allow one to solve a dressing chain of given periodicity. An elegant, although indirect, way to answer this problem passes through the approach developed principally by the Japanese school around Okamoto, Noumi, Yamada, Umemura, Tsuda and others (see \cite{clarkson2,noumi} and references therein, see also \cite{adler}). It rests on the symmetry group analysis of the dressing chain in parameter space. 
The parametric symmetries of the chain combined with B\"{a}cklund transformations constitute an "extended affine Weyl group" (for the scalar dressing chain of period $p$, the associated extended affine Weyl group is $A_{p-1}^{\left( 1\right) }$) which preserves the structure of the dressing chain. Starting from a complete set of simple "fundamental solutions" possessing the rational character, successive applications of the transformations belonging to these Weyl groups generate step by step all the rational solutions of the dressing chain system. In this article, we propose an alternative approach to build the rational solutions of the Schr\"{o}dinger operator dressing chain systems (with non zero shift). It has the advantage of being direct and explicit, in the sense that it furnishes an immediate determinantal representation for the solutions of the differential system. Moreover, it links the existence of rational solutions for the dressing chain system to the analytical properties of the underlying quantum potential. We consider the whole sets of rational extensions of the harmonic and isotonic potentials, which we label by Maya diagrams and universal characters \cite{koike,tsuda} respectively. Analyzing the combinatorial properties of these objects allows us to select the extended potentials which solve the odd and even periodic dressing chains respectively, and gives closed-form determinantal expressions for the solutions of the dressing chain system. The paper is organized as follows. We start by recalling some basic elements concerning the concept of dressing chains, cyclic potentials and their connection to the PIV and PV equations. In the second part, we introduce the notion of a cyclic Maya diagram and the associated $\overrightarrow{s}$-vector and give the general structure of a $p$-cyclic Maya diagram (Theorem 1). 
This allows us, in the next part, to show how the cyclic rational extensions of the HO give rational solutions of dressing chain systems of period $p$ for appropriate choices of the parameters (Theorem 2). As illustrating examples, we treat in detail the cases $p=3$ (i.e., the standard PIV equation) and $p=5$ (also called the A$_{\text{4}}$-PIV system). In the fourth section, after recalling some properties of the isotonic potential and its rational extensions, which are expressed in terms of Laguerre pseudo-Wronskians and labelled by universal characters, we show how these last ones allow one to obtain rational solutions of the even periodic dressing chain systems, for which we give new explicit representations (Theorem 4). We illustrate these general results for the particular cases $p=4$ (i.e., the standard PV equation) and $p=6$ (i.e., the A$_{\text{5}}$-PV system). \section{Dressing chains and cyclic potentials} Consider a potential $U(x)$, the associated Schr\"{o}dinger and Riccati-Schr \"{o}dinger (RS) equations \cite{grandati berard} being respectively \begin{equation} \left\{ \begin{array}{c} -\psi _{\lambda }^{\prime \prime }(x)+U(x)\psi _{\lambda }(x)=E_{\lambda }\psi _{\lambda }(x) \\ -w_{\lambda }^{\prime }(x)+w_{\lambda }^{2}(x)=U(x)-E_{\lambda }, \end{array} \right. \label{SetRS} \end{equation} where the \textbf{RS function} $w_{\lambda }(x)$ is minus the logarithmic derivative of the eigenfunction $\psi _{\lambda }(x)$: $w_{\lambda }(x)=-\psi _{\lambda }^{\prime }(x)/\psi _{\lambda }(x)$. The auxiliary spectral parameter $\lambda $ allows us in what follows to define a sequence of eigenvalues and associated eigenfunctions. 
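As a concrete illustration of Eq(\ref{SetRS}) (the example is our choice, not taken from the text): for $U(x)=x^{2}$ the Gaussian $\psi (x)=e^{-x^{2}/2}$ is an eigenfunction with $E=1$, and its RS function $w=-\psi ^{\prime }/\psi =x$ solves the Riccati equation, as a short symbolic check confirms:

```python
import sympy as sp

x = sp.symbols('x')
U = x**2                          # harmonic oscillator (illustrative units)
psi = sp.exp(-x**2 / 2)           # ground state
E = 1

# Schrodinger equation: -psi'' + U psi = E psi
schrod = sp.simplify(-sp.diff(psi, x, 2) + U * psi - E * psi)

# RS function and Riccati-Schrodinger equation: -w' + w^2 = U - E
w = sp.simplify(-sp.diff(psi, x) / psi)
riccati = sp.simplify(-sp.diff(w, x) + w**2 - (U - E))
```

Both residuals vanish identically, so the pair of equations in Eq(\ref{SetRS}) is consistent for this example.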
Given a particular eigenfunction $\psi _{\nu }(x)$ (or its associated RS function $w_{\nu }(x)$) of $U(x)$ for the eigenvalue $E_{\nu }$, we can build a new potential $U^{\left( \nu \right) }(x)$ via the \textbf{Darboux transformation (DT)} of \textbf{seed function} $\psi _{\nu }$ \cite{darboux}: \begin{equation} U^{\left( \nu \right) }(x)=U(x)+2w_{\nu }^{\prime }(x), \end{equation} called an \textbf{extension} of $U(x)$. Then $\psi _{\lambda }^{\left( \nu \right) }(x)$ and $w_{\lambda }^{\left( \nu \right) }(x)$ defined as\bigskip \begin{equation} \left\{ \begin{array}{c} \psi _{\lambda }^{\left( \nu \right) }(x)=W(\psi _{\nu },\psi _{\lambda }\mid x)/\psi _{\nu }(x),\text{ }\lambda \neq \nu \\ \psi _{\nu }^{\left( \nu \right) }(x)=1/\psi _{\nu }(x), \end{array} \right. \label{DTS} \end{equation} and \begin{equation} \left\{ \begin{array}{c} w_{\lambda }^{\left( \nu \right) }(x)=-w_{\nu }(x)+(E_{\lambda }-E_{\nu })/\left( w_{\lambda }(x)-w_{\nu }(x)\right), \text{ }\lambda \neq \nu \\ w_{\nu }^{\left( \nu \right) }(x)=-w_{\nu }(x), \end{array} \right. \label{DTRS} \end{equation} are solutions of \begin{equation} \left\{ \begin{array}{c} -\psi _{\lambda }^{\left( \nu \right) \prime \prime }+U^{\left( \nu \right) }\psi _{\lambda }^{\left( \nu \right) }=E_{\lambda }\psi _{\lambda }^{\left( \nu \right) } \\ -\left( w_{\lambda }^{\left( \nu \right) }\right) ^{\prime }+\left( w_{\lambda }^{\left( \nu \right) }\right) ^{2}=U^{\left( \nu \right) }-E_{\lambda }. \end{array} \right. 
\end{equation} By chaining such DT, we produce a sequence of extensions \begin{equation} \left\{ \begin{array}{c} \psi _{\lambda }\overset{\nu _{1}}{\rightarrowtail }\psi _{\lambda }^{\left( \nu _{1}\right) }\overset{\nu _{2}}{\rightarrowtail }\psi _{\lambda }^{\left( \nu _{1},\nu _{2}\right) }...\overset{\nu _{p}}{\rightarrowtail } \psi _{\lambda }^{\left( \nu _{1},...,\nu _{p}\right) } \\ U\overset{\nu _{1}}{\rightarrowtail }U^{\left( \nu _{1}\right) }\overset{\nu _{2}}{\rightarrowtail }U^{\left( \nu _{1},\nu _{2}\right) }...\overset{\nu _{p}}{\rightarrowtail }U^{\left( \nu _{1},...,\nu _{p}\right) }, \end{array} \right. \label{diagn} \end{equation} where \begin{equation} U^{\left( \nu _{1},...,\nu _{p}\right) }(x)=U(x)+2\left( \sum_{i=1}^{p}w_{\nu _{i}}^{\left( \nu _{1},...,\nu _{i-1}\right) }(x)\right) ^{\prime }. \end{equation} The Crum formulas allow us to express the extended potentials as well as their eigenfunctions in terms of Wronskians, containing only eigenfunctions of the initial potential \cite{crum,GGM1}: \begin{equation} \left\{ \begin{array}{c} U^{\left( \nu _{1},...,\nu _{p}\right) }(x)=U(x)-2\left( \log W^{\left( \nu _{1},...,\nu _{p}\right) }(x)\right) ^{\prime \prime } \\ \psi _{\lambda }^{\left( \nu _{1},...,\nu _{p}\right) }\left( x\right) =W^{\left( \nu _{1},...,\nu _{p},\lambda \right) }(x)/W^{\left( \nu _{1},...,\nu _{p}\right) }(x), \end{array} \right. \label{crum} \end{equation} where \begin{equation} W^{\left( \nu _{1},...,\nu _{p}\right) }(x)=W(\psi _{\nu _{1}},...,\psi _{\nu _{p}}\mid x) \label{wronsk} \end{equation} is the Wronskian of the family $(\psi _{\nu _{1}},...,\psi _{\nu _{p}})$. Here we are using the convention that if a spectral index is repeated two times in the characteristic tuple $\left( \nu _{1},...,\nu _{p}\right) $, then we suppress the corresponding eigenfunction in the Wronskians of the right-hand members of Eq(\ref{crum}). 
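The DT formulas Eq(\ref{DTS}) and the Crum formulas Eq(\ref{crum}) can be verified symbolically on a toy example of our choosing: the HO $U=x^{2}$ with the two lowest seed functions, for which $E_{0}=1$ and $E_{1}=3$:

```python
import sympy as sp

x = sp.symbols('x')
U = x**2
psi0, psi1 = sp.exp(-x**2 / 2), x * sp.exp(-x**2 / 2)   # E0 = 1, E1 = 3

# one-step Darboux transform: U^(0) = U + 2 w0'
w0 = -sp.diff(psi0, x) / psi0
U1 = sp.simplify(U + 2 * sp.diff(w0, x))

# transformed eigenfunction psi1^(0) = W(psi0, psi1)/psi0, cf. Eq.(DTS)
W = sp.simplify(psi0 * sp.diff(psi1, x) - sp.diff(psi0, x) * psi1)
phi = sp.simplify(W / psi0)
residual = sp.simplify(-sp.diff(phi, x, 2) + U1 * phi - 3 * phi)

# two-step Crum formula: U^(0,1) = U - 2 (log W)''
U2 = sp.simplify(U - 2 * sp.diff(sp.log(W), x, 2))
```

One finds $U^{(0)}=x^{2}+2$ and $U^{(0,1)}=x^{2}+4$: each DT shifts the HO by a constant, in line with the cyclicity discussion that follows.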
In order to simplify the notation we temporarily write $\nu _{i}\rightarrow i$. We can now define the notion of \textbf{cyclicity}. A potential $U(x)$ is said to be $p$\textbf{-cyclic} if there exists a chain of $p$ DT such that \begin{equation} U^{\left( 1,...,p\right) }(x)=U(x)+\Delta , \label{Cyclicity} \end{equation} i.e., at the end of the chain we recover \textit{exactly} the initial potential translated by an energy shift $\Delta $. This condition is then stronger than the usual $p^{th}$-order shape invariance \cite{gendenshtein,GGM1}. As shown by Veselov and Shabat \cite{vesshab}, the successive RS seed functions then satisfy the following first order non linear system ($\varepsilon _{ij}=E_{i}-E_{j}$) \begin{equation} \left\{ \begin{array}{c} -\left( w_{2}^{\left( 1\right) }(x)+w_{1}(x)\right) ^{\prime }+\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}-\left( w_{1}(x)\right) ^{2}=\varepsilon _{12} \\ -\left( w_{3}^{\left( 1,2\right) }(x)+w_{2}^{\left( 1\right) }(x)\right) ^{\prime }+\left( w_{3}^{\left( 1,2\right) }(x)\right) ^{2}-\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}=\varepsilon _{23} \\ ... \\ -\left( w_{p}^{\left( 1,...,p-1\right) }(x)+w_{p-1}^{\left( 1,...,p-2\right) }(x)\right) ^{\prime }+\left( w_{p}^{\left( 1,...,p-1\right) }(x)\right) ^{2}-\left( w_{p-1}^{\left( 1,...,p-2\right) }(x)\right) ^{2}=\varepsilon _{p-1,p} \\ -\left( w_{1}(x)+w_{p}^{\left( 1,...,p-1\right) }(x)\right) ^{\prime }+\left( w_{1}(x)\right) ^{2}-\left( w_{p}^{\left( 1,...,p-1\right) }(x)\right) ^{2}=\varepsilon _{p1}-\Delta . \end{array} \right. \label{pcyclicdress} \end{equation} called a \textbf{dressing chain of period }$p$. We will say that \textbf{the potential }$U(x)$\textbf{\ solves the dressing chain}. $\Delta $ and the $ \varepsilon _{i,i+1}$ are called the \textbf{parameters} of the dressing chain. 
The cyclicity condition Eq(\ref{Cyclicity}) also gives \begin{equation} 2\left( w_{1}(x)+...+w_{p}^{\left( 1,...,p-1\right) }(x)\right) ^{\prime }=\Delta , \end{equation} that is, with an appropriate choice of the integration constant \begin{equation} w_{1}(x)+...+w_{p}^{\left( 1,...,p-1\right) }(x)=\frac{\Delta }{2}x. \label{addconst} \end{equation} In all that follows, we suppose that (\textbf{non zero shift} assumption) \begin{equation} \Delta \neq 0. \label{delta} \end{equation} As proven by Veselov and Shabat \cite{vesshab}, this system passes the Painlev\'{e} test and, as we will see below, for $p=3$ and $p=4$ the corresponding dressing chains can be seen as symmetrized forms of the PIV and PV equations respectively. As illustrative examples we now consider the lowest order cases of cyclicity. \subsection{1-step cyclic potential} It is straightforward to show that the HO is in fact the unique potential possessing the $1$-step cyclicity property. Indeed, if we want \begin{equation} U^{\left( \nu \right) }(x)=U(x)+\Delta , \label{1 step cyclic} \end{equation} Eq(\ref{addconst}) gives immediately \begin{equation} w_{\nu }(x)=\Delta x/2 \label{RSOHfond} \end{equation} and \begin{equation} U(x)=E_{\nu }-w_{\nu }^{\prime }(x)+w_{\nu }^{2}(x)=\frac{\Delta ^{2}}{4} x^{2}+E_{\nu }-\frac{\Delta }{2}, \end{equation} i.e., $U(x)$ is an HO potential with frequency $\omega =\Delta $, the $1$-step cyclicity condition coinciding for this potential with the usual shape invariance property \cite{gendenshtein,GGM1}. 
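The $1$-step computation above is easy to check symbolically for generic $\Delta $ and $E_{\nu }$ (a sketch, using the formulas exactly as written):

```python
import sympy as sp

x, D, E = sp.symbols('x Delta E_nu')

w = D * x / 2                                # RS function, Eq.(RSOHfond)
U = E - sp.diff(w, x) + w**2                 # reconstruct U from the Riccati equation
U_dt = U + 2 * sp.diff(w, x)                 # one-step Darboux transform

harmonic = sp.simplify(U - (D**2 * x**2 / 4 + E - D / 2))
shift = sp.simplify(U_dt - U - D)            # cyclicity: U^(nu) - U = Delta
```

Both residuals vanish, confirming that the reconstructed potential is the HO and that the chain closes with shift $\Delta $.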
\subsection{2-step cyclic potentials} If the potential $U(x)$ is $2$-step cyclic, the corresponding dressing chain of period $2$ is (see Eq(\ref{pcyclicdress}) and Eq(\ref{addconst})) \begin{equation} \left\{ \begin{array}{c} -\left( w_{2}^{\left( 1\right) }(x)+w_{1}(x)\right) ^{\prime }+\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}-\left( w_{1}(x)\right) ^{2}=\varepsilon _{12} \\ -\left( w_{1}(x)+w_{2}^{\left( 1\right) }(x)\right) ^{\prime }+\left( w_{1}(x)\right) ^{2}-\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}=\varepsilon _{21}-\Delta , \end{array} \right. \label{2 step cyclic} \end{equation} with \begin{equation} w_{1}(x)+w_{2}^{(1)}(x)=\frac{\Delta }{2}x. \label{cond12step} \end{equation} Taking the difference of the two equations in Eq(\ref{2 step cyclic}) and combining with Eq(\ref{cond12step}), we obtain in a straightforward way \begin{equation} w_{2}^{(1)}(x)=\frac{\Delta }{4}x+\frac{\varepsilon _{12}/\Delta +1/2}{x},\ w_{1}(x)=\frac{\Delta }{4}x-\frac{\varepsilon _{12}/\Delta +1/2}{x}. \end{equation} Consequently \begin{equation} U(x)=E_{1}-w_{1}^{\prime }(x)+w_{1}^{2}(x)=\frac{\omega ^{2}}{4}x^{2}+\frac{ \left( \alpha +1/2\right) \left( \alpha -1/2\right) }{x^{2}}-\omega (\alpha +1)=V(x;\omega ,\alpha ), \label{2 step cyclic2} \end{equation} where $\omega =\Delta /2$ and $\alpha =\varepsilon _{12}/\Delta $. We deduce that the unique $2$-step cyclic potential is the isotonic oscillator (IO) with frequency $\omega $ and ``angular momentum'' $a=\alpha -1/2$ \cite{grandati3,GGM2}. 
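A direct sympy check (illustrative) confirms that the pair $w_{1},w_{2}^{\left( 1\right) }$ given above satisfies both equations of the period-$2$ chain Eq(\ref{2 step cyclic}):

```python
import sympy as sp

x, Delta, eps12 = sp.symbols('x Delta varepsilon_12', real=True)

c = eps12 / Delta + sp.Rational(1, 2)
w1 = Delta * x / 4 - c / x           # w_1(x)
w2 = Delta * x / 4 + c / x           # w_2^{(1)}(x)

eq1 = -sp.diff(w2 + w1, x) + w2**2 - w1**2   # should equal eps_12
eq2 = -sp.diff(w1 + w2, x) + w1**2 - w2**2   # should equal eps_21 - Delta

assert sp.simplify(eq1 - eps12) == 0
assert sp.simplify(eq2 - (-eps12 - Delta)) == 0
```

Note that $w_{2}^{(1)2}-w_{1}^{2}=(w_{2}^{(1)}+w_{1})(w_{2}^{(1)}-w_{1})$ makes the verification a one-line computation by hand as well.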
\subsection{3-step cyclic potentials and Painlev\'{e} IV} The dressing chain of period $p=3$ has the form (see Eq(\ref{pcyclicdress})) \begin{equation} \left\{ \begin{array}{c} -\left( w_{2}^{\left( 1\right) }(x)+w_{1}(x)\right) ^{\prime }+\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}-\left( w_{1}(x)\right) ^{2}=\varepsilon _{12} \\ -\left( w_{3}^{\left( 1,2\right) }(x)+w_{2}^{\left( 1\right) }(x)\right) ^{\prime }+\left( w_{3}^{\left( 1,2\right) }(x)\right) ^{2}-\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}=\varepsilon _{23} \\ -\left( w_{1}(x)+w_{3}^{\left( 1,2\right) }(x)\right) ^{\prime }+\left( w_{1}(x)\right) ^{2}-\left( w_{3}^{\left( 1,2\right) }(x)\right) ^{2}=\varepsilon _{31}-\Delta , \end{array} \right. \label{3chain2} \end{equation} with (see Eq(\ref{addconst})) \begin{equation} w_{1}(x)+w_{2}^{\left( 1\right) }(x)+w_{3}^{\left( 1,2\right) }(x)=\frac{ \Delta }{2}x. \label{3SIP2} \end{equation} By defining \begin{equation} \left\{ \begin{array}{c} y=\sqrt{\frac{2}{\Delta }}\left( w_{1}(x)-\Delta x/2\right) \\ t=\sqrt{\frac{2}{\Delta }}x, \end{array} \right. \label{chv} \end{equation} we can then easily show \cite{vesshab,adler} that $y$ satisfies the PIV equation (see \cite{conte,clarkson,clarkson2}) \begin{equation} y^{\prime \prime }=\frac{1}{2y}\left( y^{\prime }\right) ^{2}+\frac{3}{2} y^{3}+4ty^{2}+2\left( t^{2}-a\right) y+\frac{b}{y}, \label{PIV} \end{equation} (here the prime denotes the derivative with respect to $t$) with parameters \begin{equation} a=-\left( \Delta +\varepsilon _{23}+2\varepsilon _{12}\right) /\Delta ,\quad b=-\frac{2\varepsilon _{23}^{2}}{\Delta ^{2}}. 
\label{paramPIV} \end{equation} \subsection{4-step cyclic potentials and Painlev\'{e} V} For $p=4$, the dressing chain of Eq(\ref{pcyclicdress}) becomes \begin{equation} \left\{ \begin{array}{c} -\left( w_{2}^{\left( 1\right) }(x)+w_{1}(x)\right) ^{\prime }+\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}-\left( w_{1}(x)\right) ^{2}=\varepsilon _{12} \\ -\left( w_{3}^{\left( 1,2\right) }(x)+w_{2}^{\left( 1\right) }(x)\right) ^{\prime }+\left( w_{3}^{\left( 1,2\right) }(x)\right) ^{2}-\left( w_{2}^{\left( 1\right) }(x)\right) ^{2}=\varepsilon _{23} \\ -\left( w_{4}^{\left( 1,2,3\right) }(x)+w_{3}^{\left( 1,2\right) }(x)\right) ^{\prime }+\left( w_{4}^{\left( 1,2,3\right) }(x)\right) ^{2}-\left( w_{3}^{\left( 1,2\right) }(x)\right) ^{2}=\varepsilon _{34} \\ -\left( w_{1}(x)+w_{4}^{\left( 1,2,3\right) }(x)\right) ^{\prime }+\left( w_{1}(x)\right) ^{2}-\left( w_{4}^{\left( 1,2,3\right) }(x)\right) ^{2}=\varepsilon _{41}-\Delta . \end{array} \right. \end{equation} with the cyclicity condition (see Eq(\ref{addconst})) \begin{equation} w_{1}(x)+w_{2}^{\left( 1\right) }(x)+w_{3}^{\left( 1,2\right) }(x)+w_{4}^{\left( 1,2,3\right) }(x)=\frac{\Delta }{2}x. \end{equation} As shown by Adler \cite{adler}, the function \begin{equation} y\left( t\right) =1-\frac{\Delta x}{2\left( w_{1}(x)+w_{2}^{\left( 1\right) }(x)\right) },\ t=x^{2}, \label{defPV} \end{equation} satisfies the PV equation (see \cite{conte,clarkson4,clarkson2}) \begin{equation} y^{\prime \prime }=\left( \frac{1}{2y}+\frac{1}{y-1}\right) \left( y^{\prime }\right) ^{2}-\frac{y^{\prime }}{t}+\frac{\left( y-1\right) ^{2}}{t^{2}} \left( ay+\frac{b}{y}\right) +c\frac{y}{t}+d\frac{y\left( y+1\right) }{y-1}, \label{PV} \end{equation} with parameters \begin{equation} a=\frac{\varepsilon _{12}^{2}}{2\Delta ^{2}},\ b=-\frac{\varepsilon _{34}^{2} }{2\Delta ^{2}},\ c=\frac{1}{4}\left( \Delta -\varepsilon _{41}+\varepsilon _{23}\right) ,\ d=-\frac{\Delta ^{2}}{32}. 
\label{paramPV} \end{equation} \section{Cyclic Maya diagrams} \subsection{First definitions} We define a \textbf{Sato Maya diagram} as an infinite row of boxes, called \textbf{levels}, labelled by relative integers and which can be empty or filled by at most one ``particle'' (graphically represented by a bold dot) \cite{ohta,hirota,GGM,GGM2}. All the levels sufficiently far away on the left are filled and all the levels sufficiently far away on the right are empty. The set of Sato Maya diagrams can be put in one to one correspondence with the set of tuples of relative integers of the form $N_{m}=\left( n_{1},...,n_{m}\right) \in \mathbb{Z} ^{m}$. The tuple $N_{m}$ contains the indices of the filled levels above zero (included) and the indices of the empty levels strictly below zero; in other words, the filled levels in the corresponding Sato Maya diagram are indexed by the set \begin{equation} \{j<0:j\notin N_{m}\}\cup \{j\geq 0:j\in N_{m}\}. \end{equation} In the following we use the term \textbf{Maya diagram} to designate both the Sato Maya diagram in graphical form, and the associated tuple. If all the $n_{i}$ are positive (respectively negative), then the Maya diagram is said to be \textbf{positive} (respectively \textbf{negative}). In the case of a positive Maya diagram, if all the $n_{i}$ are non-zero, then the Maya diagram is said to be \textbf{strictly positive}. Two Maya diagrams $N_{m}$ and $N_{m^{\prime }}^{\prime }$ are \textbf{equivalent} if they differ only by a global translation of all the particles in the levels and we write \begin{equation} N_{m}\approx N_{m^{\prime }}^{\prime }. \label{eqMD} \end{equation} The \textbf{canonical representative} of such an equivalence class is the unique strictly positive Maya diagram of the class for which the zero level is empty, i.e. the zero level is the first empty level. 
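The correspondence between tuples and diagrams is easy to mechanize. The following Python sketch is purely illustrative, with a finite window of levels standing in for the infinite row; it encodes the filled-level rule just stated.

```python
def filled_levels(N, lo, hi):
    """Filled levels of the Maya diagram N_m on the finite window [lo, hi):
    a level j < 0 is filled unless j is in N_m, and a level j >= 0 is
    filled exactly when j is in N_m (the xor encodes both rules)."""
    return {j for j in range(lo, hi) if (j < 0) != (j in N)}

# N_2 = (1, 3): the filled "sea" below 0, plus particles at levels 1 and 3.
assert filled_levels((1, 3), -3, 6) == {-3, -2, -1, 1, 3}
# (1, 3) is canonical: strictly positive with the zero level empty.
assert 0 not in filled_levels((1, 3), -3, 6)
```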
A $k$\textbf{-translation} applied to the canonical Maya diagram $ N_{m}=\left( n_{1},...,n_{m}\right) \in \left( \mathbb{N} ^{\ast }\right) ^{m}$ generates the Maya diagram, denoted $N_{m}\oplus k$, obtained from $N_{m}$ by shifting by $k$ all the particles in the levels of $N_{m}$. For $k>0$, we have \begin{equation} N_{m}\oplus k=\left( 0,...,k-1\right) \cup \left( N_{m}+k\right) , \end{equation} where $N_{m}+k=\left( n_{1}+k,...,n_{m}+k\right) $. Starting from a given Maya diagram $N_{m}$ we can modify it by suppressing particles in the filled levels or by filling empty levels. We call such an action on a level a \textbf{flip}. If $\left( \nu _{1},...,\nu _{p}\right) $ are the indices of the flipped levels then the characteristic tuple of the resulting Maya diagram can be denoted $\left( N_{m},\nu _{1},...,\nu _{p}\right) $ with the convention that a twice repeated index has to be suppressed. \subsection{$\protect\overrightarrow{s}$-vectors and cyclicity} A Maya diagram $N_{m}$ is said to be $p$\textbf{-cyclic with translation of }$k>0$ if we can translate it by $k$ by acting on it with $p$ flips. In other words, there must exist a tuple of $p$ positive integers $\left( \nu _{1},...,\nu _{p}\right) \in \mathbb{N} ^{p}$ such that \begin{equation} N_{m}\approx \left( N_{m},\nu _{1},...,\nu _{p}\right) . \label{cyclMD1} \end{equation} In particular, if $N_{m}$ is canonical, there must exist a tuple of $p$ positive integers $\left( \nu _{1},...,\nu _{p}\right) \in \mathbb{N} ^{p}$ such that \begin{equation} \left( N_{m},\nu _{1},...,\nu _{p}\right) =N_{m}\oplus k. \label{cyclMD2} \end{equation} The ordered tuple $\left( \nu _{1},...,\nu _{p}\right) $ is called a $p$ \textbf{-cyclic chain} associated to $N_{m}$. We can readily see that every Maya diagram $N_{m}$ is trivially $\left( 2m+k\right) $ -cyclic with translation of $k$ via the chain: \begin{equation} N_{m}\cup \left( N_{m}+k\right) \cup \left( 0,...,k-1\right) =\left( n_{1},...,n_{m},n_{1}+k,...,n_{m}+k,0,...,k-1\right) . 
\end{equation} It is clear that every Maya diagram which is $p$-cyclic with translation of $k$ is also $\left( p+2l\right) $-cyclic with translation of $k$ for any positive integer $l$, since Eq(\ref{cyclMD2}) implies \begin{equation} \left( N_{m},\nu _{1},...,\nu _{p},\nu _{p+1},\nu _{p+1},...,\nu _{p+l},\nu _{p+l}\right) =N_{m}\oplus k. \end{equation} To a given Maya diagram $N_{m}$, we can associate in a one to one way an $\overrightarrow{s}$\textbf{-vector}, which is an infinite sequence of ``spin variables'', $\overrightarrow{s}=\left( s_{n}\right) _{n\in \mathbb{Z} }$, where $s_{n}=+1$ (\textbf{up spin}) or $s_{n}=-1$ (\textbf{down spin}), in the following way: \begin{itemize} \item If level $n$ is filled then $s_{n}=-1$. \item If level $n$ is empty then $s_{n}=+1$. \end{itemize} For a canonical Maya diagram, it means that $N_{m}$ gives the positions of the down spins in the positively indexed part of the $\overrightarrow{s}$ -vector: \begin{itemize} \item If $n\in N_{m}$ or $n<0$, then $s_{n}=-1$. \item If $n\geq 0$ and $n\notin N_{m}$, then $s_{n}=+1$. \end{itemize} The $\overrightarrow{s}$\textbf{-vector} is subject to the \textbf{ topological constraint} \begin{equation} \underset{n\rightarrow -\infty }{\lim }s_{n}=-1,\ \underset{n\rightarrow +\infty }{\lim }s_{n}=1. \label{topoconstraint} \end{equation} In the particular case of a canonical Maya diagram, we have more precisely \begin{equation} \left\{ \begin{array}{c} s_{n<0}=-1 \\ s_{0}=+1 \\ s_{n>n_{m}}=+1. \end{array} \right. \label{topoconst} \end{equation} A flip at the level $\nu $ in $N_{m}$ corresponds to changing the sign of $ s_{\nu }$ ($s_{\nu }\rightarrow -s_{\nu }$): suppressing a particle in a level corresponds to a \textbf{positive flip} of $s_{\nu }$ $\left( -1\rightarrow +1\right) $, while filling a level corresponds to a \textbf{ negative flip} of $s_{\nu }$ $\left( +1\rightarrow -1\right) $. 
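On the set of filled levels, a flip is simply a symmetric difference with a singleton. As a concrete illustration (Python sketch on a finite window, not part of the text), one can verify the trivial $\left( 2m+k\right) $-cyclic chain of the previous subsection on the diagram $N_{2}=\left( 1,3\right) $ with $k=2$:

```python
def filled(N, lo, hi):
    """Filled levels of the canonical Maya diagram N_m on the window [lo, hi)."""
    return {j for j in range(lo, hi) if (j < 0) != (j in N)}

N, k, lo, hi = (1, 3), 2, -6, 12
state = filled(N, lo, hi)

# Trivial (2m+k)-cyclic chain: flip at N_m, then at N_m + k, then at (0,...,k-1).
for nu in N + tuple(n + k for n in N) + tuple(range(k)):
    state ^= {nu}                     # a flip toggles level nu

# The result is N_m translated by k (the k lowest window levels refill).
shifted = {j + k for j in filled(N, lo, hi)} | set(range(lo, lo + k))
assert state == shifted
```

Here $p=2m+k=6$ flips produce the translation by $k=2$, in agreement with the parity lemma below ($p$ and $k$ have the same parity).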
In the sequel, we let $\mathcal{F}^{\left( i_{1},...,i_{p}\right) }$ denote the \textbf{flip operator} which, when acting on $\overrightarrow{s}$, flips the spins $s_{i_{1}},...,s_{i_{p}}$. We also let $\mathcal{T}_{k}$ denote the \textbf{translation operator} of amplitude $k$ on the $\overrightarrow{s} $-vector : \begin{equation} \mathcal{T}_{k}\overrightarrow{s}=\overrightarrow{s}^{\prime }\text{, with } s_{n}^{\prime }=s_{n-k},\ \forall n\in \mathbb{Z} . \label{trans} \end{equation} A Maya diagram $N_{m}$ is $p$-cyclic with translation of $k$ iff $\exists \left( i_{1},...,i_{p}\right) \in \mathbb{N} ^{p}$ such that the corresponding $\overrightarrow{s}$-vector is in the kernel of $\left( \mathcal{F}^{\left( i_{1},...,i_{p}\right) }-\mathcal{T} _{k}\right) $: \begin{equation} \mathcal{F}^{\left( i_{1},...,i_{p}\right) }\overrightarrow{s}=\mathcal{T} _{k}\overrightarrow{s}. \label{cyclicspin} \end{equation} Suppose that $N_{m}$ is a canonical Maya diagram which is $p$-cyclic with translation of $k>0$. We let $p_{-}$ denote the number of negative flips and $p_{+}$ the number of positive flips in the $p$-cyclic chain. We then have the following lemma: \begin{lemma} $k$\textit{\ has the same parity as }$p$\textit{\ and} \begin{equation} \left\{ \begin{array}{c} p=p_{-}+p_{+}\in \mathbb{N} ^{\ast }, \\ k=p_{-}-p_{+}\in \left\{ 1,...,p\right\} . \end{array} \right. \end{equation} \end{lemma} \begin{proof} Initially, above $n_{m}$ all the $s_{n}$ are up and below $0$ they are all down. Between $0$ and $n_{m}$, we have $m$ spins down. After the $p$-cyclic chain, all the spins above $n_{m}+k$ are up and below $k$ they are all down. In the block of indices $\left\{ k,..,n_{m}+k\right\} $ we still have $m$ down spins and in the set of levels $\left\{ 0,..,n_{m}+k\right\} $ we now have $m+k$ down spins while before the $p$-cyclic chain the same set contained only $m$ down spins. 
Out of the set $\left\{ 0,..,n_{m}+k\right\} $ the Maya $\overrightarrow{s}$-vector remains unchanged and in the end, the action of the $p$-cyclic chain with translation $k$ is to have flipped negatively $k$ spins. Consequently \begin{equation} \left\{ \begin{array}{c} p=p_{-}+p_{+}\in \mathbb{N} ^{\ast }, \\ k=p_{-}-p_{+}\in \left\{ 1,...,p\right\} . \end{array} \right. \end{equation} \end{proof} For example, for a $3$-cyclic chain we can have $k=1$ ($p_{-}=2,p_{+}=1$) or $k=3$ ($p_{-}=3,p_{+}=0$) and for a $4$-cyclic chain, we can have $k=2$ ($ p_{-}=3,p_{+}=1$) or $k=4$ ($p_{-}=4,p_{+}=0$). \subsection{Structure of the cyclic Maya diagrams} Let us first give some supplementary definitions. We let \begin{equation} \left( r\mid s\right) _{k}=\left( r,r+k,...,r+(s-1)k\right) ,\ r,s\in \mathbb{N} , \label{block} \end{equation} and call such a set of indices a \textbf{block} of length $s$. A block of the type $\left( r\mid s\right) _{1}$, containing $s$ consecutive integers, is called a \textbf{Generalized Hermite (GH) block }\cite {clarkson,clarkson2} of length $s$ and in the particular case $r=0$, $\left( 0\mid s\right) _{1}$ is called a \textbf{removable block} of length $s$. Let $I=\left\{ n_{j}\right\} _{j\in \mathbb{Z} }\subset \mathbb{Z} $ be a subset of integer indices. We say that $s_{n}$ is \textbf{discontinuous} at $n_{j}$ on $I$ if $s_{n_{j+1}}=-s_{n_{j}}$. A subset of indices $I_{l}=k \mathbb{Z} +l,\ l\in \left\{ 0,...,k-1\right\} $ is called a $k$\textbf{-support}. Note that there is only one $1$-support, which coincides with $ \mathbb{Z} $. We can now establish the main theorem concerning cyclic Maya diagrams. 
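In this notation a block is just a finite arithmetic progression; a minimal helper (Python, illustrative only) makes the bookkeeping explicit:

```python
def block(r, s, k=1):
    """The block (r|s)_k = (r, r+k, ..., r+(s-1)k) of length s."""
    return tuple(r + i * k for i in range(s))

assert block(1, 3, 2) == (1, 3, 5)    # (1|3)_2
assert block(2, 4) == (2, 3, 4, 5)    # GH block (2|4)_1: consecutive integers
assert block(0, 3) == (0, 1, 2)       # removable block (0|3)_1
```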
\begin{theorem} \textit{Any }$p$\textit{-cyclic Maya diagram has the following form} \begin{equation} N_{m}=\left( \left( 1\mid \alpha _{1}\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}\mid \mu _{j}\right) _{k}\right) , \label{pcyclicMD} \end{equation} \textit{with }$j=\left( p-k\right) /2\in \mathbb{N} $\textit{\ and where the }$\alpha _{i},\lambda _{i},\mu _{i}$\textit{\ are arbitrary positive integers, constituting a set of }$p-1$\textit{\ arbitrary integer parameters on which depends the }$p$\textit{-cyclic Maya diagram }$ N_{m}$\textit{. The }$k-1$\textit{\ blocks of the type }$\left( l\mid \alpha _{l}\right) _{k},\ l\in \left\{ 1,...,k-1\right\} ,$\textit{\ }$\alpha _{l}$ \textit{\ arbitrary, are called }$k$\textit{-Okamoto blocks. The }$j$\textit{ \ blocks }$\left( \lambda _{l}\mid \mu _{l}\right) _{k},\ l\in \left\{ 1,...,j\right\} ,$\textit{\ are called blocks of the second type.}\newline \textit{The corresponding }$p$\textit{-cyclic chains are obtained by forming tuples from the set of indices} \begin{equation} \left\{ 0,1+\alpha _{1}k,...,\left( k-1\right) +\alpha _{k-1}k,\ \ \lambda _{1},\lambda _{1}+\mu _{1}k,...,\ \lambda _{j},\lambda _{j}+\mu _{j}k\right\} . \label{pcyclicchain} \end{equation} \textit{Concretely, the above set contains }$0$\textit{, the first element of each block of the second type and the last element of each block (Okamoto and second type) after having increased its length by }$1$\textit{.} \end{theorem} \begin{proof} Eq(\ref{cyclicspin}) gives the following infinite linear system for the $ s_{n}$ \begin{equation} \left\{ \begin{array}{c} s_{n}=s_{n-k}\text{, if }n\notin N_{m} \\ s_{n}=-s_{n-k}\text{, if }n\in N_{m}. \end{array} \right. 
\label{spinsystem} \end{equation} This implies that $s_{n}$, as a function of $n$, is piecewise constant on each $k$-support $I_{l}$ with a finite number of discontinuities on each of these supports and with the asymptotic behaviour (see Eq(\ref{topoconst})) \begin{equation} \left\{ \begin{array}{c} s_{l+jk}=+1\text{, if }l+jk>n_{m} \\ s_{l+jk}=-1\text{, \ if }l+jk<0. \end{array} \right. \label{topoconst2} \end{equation} The discontinuities of $s_{n}$ are located at the flip positions minus $k$, $ \left( i_{1}-k,...,i_{p}-k\right) $, and the possible $p$-cyclic canonical Maya diagrams are obtained by sharing these $p$\ discontinuities into the different $k$-supports $I_{l},\ l\in \left\{ 0,...,k-1\right\} $. Remark that the constraint Eq(\ref{topoconst2}) implies that we have an odd number of discontinuities in each $k$-support $I_{l}$. Due to the canonical choice Eq(\ref{topoconst}) and the cyclicity condition Eq(\ref{cyclMD2}), we know that necessarily we have to do one flip at the level $0$ (which is on the support $I_{0}$) and one flip at the level $n_{m}+k$ (which is on the $k$ -support $I_{l_{m}}$ if $n_{m}=l_{m}\func{mod}k$). 
This means also that we have necessarily at least one discontinuity in the $k$-support $I_{0}$ (in $ -k$) and one in the $k$-support $I_{l_{m}}$ (in $n_{m}$).\newline Let $p_{l}\in 2 \mathbb{N} +1$ be the number of discontinuities in the $k$-support $I_{l}$, $ \sum\limits_{l=0}^{k-1}p_{l}=p$ (with $p-k\in 2 \mathbb{N} $), and denote $l+j_{i}^{\left( l\right) }k,\ i\in \left\{ 1,...,p_{l}\right\} ,\ j_{i}^{\left( l\right) }\in \mathbb{N} ,$ the positions of these discontinuities.\newline In the positively indexed part of the $k$-support $I_{l}$, the down spins are then located at \begin{equation} \left( l,...,l+\left( j_{1}^{\left( l\right) }-1\right) k\right) ,\left( l+j_{2}^{\left( l\right) }k,...,l+\left( j_{3}^{\left( l\right) }-1\right) k\right) ,...,\left( l+j_{p_{l}-1}^{\left( l\right) }k,...,l+\left( j_{p_{l}}^{\left( l\right) }-1\right) k\right) , \end{equation} which can be rewritten in terms of blocks as (see Eq(\ref{block})), \begin{equation} \left( l\mid j_{1}^{\left( l\right) }\right) _{k},\left( l+j_{2}^{\left( l\right) }k\mid j_{3}^{\left( l\right) }-j_{2}^{\left( l\right) }\right) _{k},...,\left( l+j_{p_{l}-1}^{\left( l\right) }k\mid j_{p_{l}}^{\left( l\right) }-j_{p_{l}-1}^{\left( l\right) }\right) _{k}. \label{bloc1} \end{equation} Note that due to the canonical choice, it gives in particular for the $k$ -support $I_{0}$ \begin{equation} \left( j_{2}^{\left( 0\right) }k\mid j_{3}^{\left( 0\right) }-j_{2}^{\left( 0\right) }\right) _{k},...,\left( j_{p_{0}-1}^{\left( 0\right) }k\mid j_{p_{0}}^{\left( 0\right) }-j_{p_{0}-1}^{\left( 0\right) }\right) _{k}. \label{bloc0} \end{equation} If we denote $j_{1}^{\left( l\right) }=\alpha _{l}$ and \begin{equation} \left\{ \begin{array}{c} j_{2r+1}^{\left( l\right) }-j_{2r}^{\left( l\right) }=\beta _{2r}^{\left( l\right) } \\ l+j_{2r}^{\left( l\right) }k=\beta _{2r-1}^{\left( l\right) }, \end{array} \right. 
r\geq 1, \end{equation} Eq(\ref{bloc0}) and Eq(\ref{bloc1}) become \begin{equation} \left( \beta _{1}^{\left( 0\right) }\mid \beta _{2}^{\left( 0\right) }\right) _{k},...,\left( \beta _{p_{0}-2}^{\left( 0\right) }\mid \beta _{p_{0}-1}^{\left( 0\right) }\right) _{k} \end{equation} and \begin{equation} \left( l\mid \alpha _{l}\right) _{k},\left( \beta _{1}^{\left( l\right) }\mid \beta _{2}^{\left( l\right) }\right) _{k},...,\left( \beta _{p_{l}-2}^{\left( l\right) }\mid \beta _{p_{l}-1}^{\left( l\right) }\right) _{k}. \end{equation} By taking into account all the $k$-supports, globally we have $\left( k-1\right) $ $k$-Okamoto blocks $\left( l\mid \alpha _{l}\right) _{k},$ where the $\alpha _{l}$ are arbitrary, and $j=\sum\limits_{l=0}^{k-1}\left( p_{l}-1\right) /2=\left( p-k\right) /2$ blocks of the second type $\left( \beta _{2r-1}^{\left( l\right) }\mid \beta _{2r}^{\left( l\right) }\right) _{k},\ r\geq 1,$ where the $\beta _{i}^{\left( l\right) }$ are arbitrary. If we denote them as \begin{equation} \left( \lambda _{i}\mid \mu _{i}\right) _{k},\ i\in \left\{ 1,...,j\right\} , \label{second type blocks} \end{equation} where the $\lambda _{i}$ and $\mu _{i}$ are arbitrary, we arrive at Eq(\ref {pcyclicMD}). The positions of the flips, obtained by adding $k$ to the discontinuity positions, give the $p$-cyclic chain associated to $N_{m}$. We have $p_{+}=j $ positive flips located in $\lambda _{1},...,\lambda _{j}$ and $p_{-}=j+k$ negative flips located in $0,1+\alpha _{1}k,...,\left( k-1\right) +\alpha _{k-1}k,\ \lambda _{1}+\mu _{1}k,...,\lambda _{j}+\mu _{j}k.$ \end{proof} For example, consider the $p$-cyclic chain $\left( \lambda _{1},...,\lambda _{j},\lambda _{1}+\mu _{1}k,...,\lambda _{j}+\mu _{j}k,1+\alpha _{1}k,...,\left( k-1\right) +\alpha _{k-1}k,0\right) $. 
Its action on $N_{m}$ can be explicitly written as \begin{eqnarray} &&N_{m}\overset{\lambda _{1}}{\rightarrow }\left( \left( 1\mid \alpha _{1}\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}-1\right) _{k},...,\left( \lambda _{j}\mid \mu _{j}\right) _{k}\right) \notag \\ &&...\overset{\lambda _{j}}{\rightarrow }\left( \left( 1\mid \alpha _{1}\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}-1\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}-1\right) _{k}\right) \notag \\ &&\overset{\lambda _{1}+\mu _{1}k}{\rightarrow }\left( \left( 1\mid \alpha _{1}\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}-1\right) _{k}\right) \notag \\ &&...\overset{\lambda _{j}+\mu _{j}k}{\rightarrow }\left( \left( 1\mid \alpha _{1}\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}\right) _{k}\right) \notag \\ &&\overset{1+\alpha _{1}k}{\rightarrow }\left( \left( 1\mid \alpha _{1}+1\right) _{k},...,\left( k-1\mid \alpha _{k-1}\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}\right) _{k}\right) \notag \\ &&...\overset{\left( k-1\right) +\alpha _{k-1}k}{\rightarrow }\left( \left( 1\mid \alpha _{1}+1\right) _{k},...,\left( k-1\mid \alpha _{k-1}+1\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}\right) _{k}\right) \notag \\ &&\overset{0}{\rightarrow }N_{m}\oplus k, \end{eqnarray} since \begin{eqnarray} &&\left( 0,\left( 1\mid \alpha _{1}+1\right) _{k},...,\left( k-1\mid \alpha _{k-1}+1\right) _{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}\right) _{k}\right) \notag \\ &=&\left( 0,1,...,k-1\right) \cup \left( \left( 1+k\mid \alpha _{1}\right) _{k},...,\left( 2k-1\mid \alpha _{k-1}\right)
_{k};\left( \lambda _{1}+k\mid \mu _{1}\right) _{k},...,\left( \lambda _{j}+k\mid \mu _{j}\right) _{k}\right) \notag \\ &=&\left( 0\mid k\right) _{1}\cup \left( N_{m}+k\right) . \end{eqnarray} Let us now consider the two extremal cases $k=p$ and $k=1$. When in Eq(\ref{pcyclicMD}) some blocks merge or overlap, we say that the block structure of $N_{m}$ is \textbf{degenerate}. \subsubsection{k = p} We have no second type block ($j=0$) and $\left( p-1\right) $ $p$-Okamoto blocks: \begin{equation} N_{m}=\left( \left( 1\mid \alpha _{1}\right) _{p},...,\left( p-1\mid \alpha _{p-1}\right) _{p}\right) ,\ m=\alpha _{1}+...+\alpha _{p-1}. \end{equation} Each $p$-support contains exactly one discontinuity ($p_{l}=1,\ \forall l\in \left\{ 0,...,p-1\right\} $) and $N_{m}$ contains no multiple of $p$. We call $N_{m}$ a $p$\textbf{-Okamoto Maya diagram}. The corresponding $p$-cyclic chains are built from \begin{equation} \left\{ 0,1+\alpha _{1}p,...,\left( p-1\right) +\alpha _{p-1}p\right\} , \end{equation} which contains $p$ negative flips. \subsubsection{k = 1} We have no Okamoto block and $\left( p-1\right) /2$ second type blocks: \begin{equation} N_{m}=\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},...,\left( \lambda _{(p-1)/2}\mid \mu _{(p-1)/2}\right) _{1}\right) ,\ m=\mu _{1}+...+\mu _{(p-1)/2}, \end{equation} which are GH-blocks. There is only one $1$-support, which is $ \mathbb{Z} $ itself, and it contains all the $p$ discontinuities ($k-1=0$ and $ p_{0}=p$). $N_{m}$ is called a $p$\textbf{-GH Maya diagram}. 
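The structure theorem can be exercised on a small non-degenerate case, e.g. $p=5$, $k=3$ (so $j=1$), with the arbitrary choices $\alpha _{1}=2$, $\alpha _{2}=1$, $\lambda _{1}=3$, $\mu _{1}=2$. The Python sketch below (illustrative, on a finite window of levels) builds $N_{m}$ from its Okamoto and second-type blocks, applies the five flips of Eq(\ref{pcyclicchain}), and checks that the diagram is translated by $k$:

```python
def filled(N, lo, hi):
    """Filled levels of the canonical Maya diagram N_m on the window [lo, hi)."""
    return {j for j in range(lo, hi) if (j < 0) != (j in N)}

def block(r, s, k):
    """The block (r|s)_k = (r, r+k, ..., r+(s-1)k)."""
    return tuple(r + i * k for i in range(s))

k, a1, a2, lam1, mu1 = 3, 2, 1, 3, 2
N = block(1, a1, k) + block(2, a2, k) + block(lam1, mu1, k)  # (1, 4, 2, 3, 6)
chain = (0, 1 + a1 * k, 2 + a2 * k, lam1, lam1 + mu1 * k)    # p = 5 flips

lo, hi = -6, 16
state = filled(N, lo, hi)
for nu in chain:
    state ^= {nu}                                            # flip level nu

shifted = {j + k for j in filled(N, lo, hi)} | set(range(lo, lo + k))
assert state == shifted          # N_m is 5-cyclic with translation of 3
```

Here $p_{-}=j+k=4$ (flips at $0,7,5,9$) and $p_{+}=j=1$ (flip at $\lambda _{1}=3$), so $k=p_{-}-p_{+}=3$ as predicted by the parity lemma.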
In the corresponding $p$-cyclic chains, we have $\left( p+1\right) /2$ negative flips at $0,\lambda _{1}+\mu _{1},...,\lambda _{j}+\mu _{j}$ and $ \left( p-1\right) /2$ positive flips at $\lambda _{1},...,\lambda _{j}.$ \section{Rational extensions of the HO and rational solutions of the dressing chains of odd periodicity} \subsection{Rational extensions of the HO} The HO potential is defined on the real line by \begin{equation} V\left( x;\omega \right) =\frac{\omega ^{2}}{4}x^{2}-\frac{\omega }{2},\ \omega \in \mathbb{R} . \label{OH} \end{equation} With Dirichlet boundary conditions at infinity and supposing $\omega \in \mathbb{R} ^{+}$, $V\left( x;\omega \right) $ has the following spectrum ($z=\sqrt{ \omega /2}x$) \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) =n\omega \\ \psi _{n}(x;\omega )=\psi _{0}(x;\omega )H_{n}\left( z\right) \end{array} \right. ,\quad n\geq 0, \label{spec OH} \end{equation} with $\psi _{0}(x;\omega )=\exp \left( -z^{2}/2\right) .$ It is the simplest example of translationally shape invariant potential \cite {gendenshtein,grandati berard} with \begin{equation} V^{\left( 0\right) }\left( x;\omega \right) =V\left( x;\omega \right) +\omega . \label{SI} \end{equation} It also possesses a unique parametric symmetry, $\Gamma _{3}$, which acts as \cite{GGM,GGM1} \begin{equation} \left\{ \begin{array}{c} \omega \overset{\Gamma _{3}}{\rightarrow }\left( -\omega \right) \\ V(x;\omega )\overset{\Gamma _{3}}{\rightarrow }V(x;-\omega )=V(x;\omega )+\omega , \end{array} \right. \label{gam3} \end{equation} and then generates the \textbf{conjugate spectrum} of $V\left( x;\omega \right) $ \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) \\ \psi _{n}(x;\omega ) \end{array} \right. \overset{\Gamma _{3}}{\rightarrow }\left\{ \begin{array}{c} E_{-\left( n+1\right) }\left( \omega \right) =-\left( n+1\right) \omega <0 \\ \psi _{n}(x;-\omega )=iH_{n}\left( iz\right) \exp \left( z^{2}/2\right) =\psi _{-\left( n+1\right) }(x;\omega ). 
\end{array} \right. \label{conjhermite} \end{equation} The union of the spectrum and the conjugate spectrum forms the \textbf{ extended spectrum} of the HO, which contains all the \textbf{quasi-polynomial} (i.e. polynomial up to a gauge factor) eigenfunctions of this potential. All the rational extensions of the HO are then obtained via chains of DT with seed functions chosen in the extended spectrum and they are then labelled by tuples of relative integer spectral indices $N_{m}=\left( n_{1},...,n_{m}\right) \in \mathbb{Z} ^{m}$. This establishes a one to one correspondence between the set of rational extensions of the HO and the set of Maya diagrams. A rational extension associated to a canonical Maya diagram is said to be a \textbf{ canonical extension}. The energy spectrum of $V^{^{\left( N_{m}\right) }}(x;\omega )$ is the set of energies $E_{j}(\omega ),\ j\in \mathbb{Z} ,$ for the indices $j$ associated to the empty levels of the Maya diagram $N_{m}$. As shown in \cite{GGM}, this correspondence preserves the equivalence relation, in the sense that if \begin{equation} N_{m}\approx N_{m^{\prime }}^{\prime }, \end{equation} then the corresponding extended potentials\ are identical up to an additive constant: \begin{equation} V^{\left( N_{m}\right) }(x;\omega )\ =V^{^{\left( N_{m^{\prime }}^{\prime }\right) }}(x;\omega )+q\omega ,\ q\in \mathbb{Z} . \label{Eqext1} \end{equation} In particular, if $N_{m}$ is canonical \begin{equation} V^{\left( N_{m}\oplus k\right) }(x;\omega )\ =V^{\left( N_{m}\right) }(x;\omega )+k\omega . \label{Eqext2} \end{equation} It follows that to describe all the rational extensions of the HO, we can restrict ourselves to those associated to canonical Maya diagrams. 
We have also equivalence relations for the Wronskians, in particular \begin{equation} W^{\left( N_{m}\oplus k\right) }(x;\omega )\ =\left( \psi _{0}(x;\omega )\right) ^{k}W^{\left( N_{m}\right) }(x;\omega ), \label{Eqext3} \end{equation} or, if $k\leq n_{1}\leq ...\leq n_{m}$ \begin{equation} W^{\left( 0,...,k-1\right) \cup N_{m}}(x;\omega )\ =\left( \psi _{0}(x;\omega )\right) ^{k}W^{\left( N_{m}-k\right) }(x;\omega ). \label{Eqext4} \end{equation} Using Eq(\ref{spec OH}) and the usual properties of the Wronskians \cite {muir} \begin{equation} \left\{ \begin{array}{c} W\left( uy_{1},...,uy_{m}\mid x\right) =u^{m}W\left( y_{1},...,y_{m}\mid x\right) \\ W\left( y_{1},...,y_{m}\mid x\right) =\left( \frac{dz}{dx}\right) ^{m(m-1)/2}W\left( y_{1},...,y_{m}\mid z\right) , \end{array} \right. \label{wronskprop} \end{equation} we can write \begin{equation} W^{\left( N_{m}\right) }(x;\omega )\ \propto \left( \psi _{0}(x;\omega )\right) ^{m}\mathcal{H}^{\left( N_{m}\right) }\left( z\right) , \label{WH} \end{equation} where $\mathcal{H}^{\left( N_{m}\right) }$ is the following Wronskian determinant ($i=0,...,m-1$) \begin{equation} \mathcal{H}^{\left( N_{m}\right) }\left( z\right) =W\left( H_{n_{1}},...,H_{n_{m}}\mid z\right) =\left\vert \begin{array}{ccc} H_{n_{1}}(z) & ... & H_{n_{m}}(z) \\ ... & & ... \\ \left( n_{1}\right) _{i}H_{n_{1}-i}(z) & ... & \left( n_{m}\right) _{i}H_{n_{m}-i}(z) \\ ... & & ... \\ \left( n_{1}\right) _{m-1}H_{n_{1}-m+1}(z) & ... & \left( n_{m}\right) _{m-1}H_{n_{m}-m+1}(z) \end{array} \right\vert , \label{PW} \end{equation} $_{i}\left( x\right) $ and $\left( x\right) _{i}$ being respectively the rising and falling factorials \begin{equation} _{i}\left( x\right) =x(x+1)...(x+i-1),\ \left( x\right) _{i}=x(x-1)...(x-i+1), \label{poch} \end{equation} with the convention that $H_{n}(z)=0$ if $n<0$. 
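The Wronskians $\mathcal{H}^{\left( N_{m}\right) }$ and the equivalence relations Eq(\ref{Eqext3})--Eq(\ref{Eqext4}) are easy to probe symbolically. The sympy sketch below (illustrative only) checks, on the small example $N_{2}=\left( 1,2\right) $ for which $N_{2}\oplus 1=\left( 0,2,3\right) $, that the two Hermite Wronskians coincide up to a constant factor:

```python
import sympy as sp

z = sp.symbols('z')

def hermite_wronskian(N, z):
    """H^{(N_m)}(z) = W(H_{n_1}, ..., H_{n_m} | z) for a tuple N_m."""
    return sp.expand(sp.wronskian([sp.hermite(n, z) for n in N], z))

W1 = hermite_wronskian((1, 2), z)
W2 = hermite_wronskian((0, 2, 3), z)   # (1, 2) translated by 1 is (0, 2, 3)

ratio = sp.simplify(W2 / W1)
assert ratio.free_symbols == set()     # proportional: the ratio is a constant
```

Here $W(H_{1},H_{2}\mid z)=8z^{2}+4$, and adjoining the removable block only rescales it, in agreement with the proportionality relations.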
Eq(\ref{Eqext3}) and Eq(\ref{Eqext4}) then give \begin{equation} \left\{ \begin{array}{c} \mathcal{H}^{\left( N_{m}\oplus k\right) }(z)\ \propto \mathcal{H}^{\left( N_{m}\right) }\left( z\right) \\ \mathcal{H}^{\left( 0,...,k-1\right) \cup N_{m}}(z)\ \propto \mathcal{H} ^{\left( N_{m}-k\right) }\left( z\right) . \end{array} \right. \label{Eqext5} \end{equation} \subsection{$p$-cyclic extensions of the HO and rational solutions of the periodic dressing chains} First we can prove the following lemma: \begin{lemma} \textit{A rational extension }$V^{^{\left( N_{m}\right) }}(x;\omega )$ \textit{\ of the HO is a }$p$\textit{-cyclic potential iff its associated Maya diagram }$N_{m}$\textit{\ is }$p$\textit{-cyclic.} \end{lemma} \begin{proof} If $N_{m}$ is a canonical Maya diagram which is\ $p$-cyclic with translation of $k>0$, there exists a tuple of $p$ positive integers $\left( \nu _{1},...,\nu _{p}\right) \in \mathbb{N} ^{p}$ such that (see Eq(\ref{Eqext2})) \begin{equation} V^{\left( N_{m},\nu _{1},...,\nu _{p}\right) }(x;\omega )=V^{\left( 0,...,k-1\right) \cup \left( N_{m}+k\right) }(x;\omega )=V^{\left( N_{m}\right) }(x;\omega )+k\omega \end{equation} and consequently $V^{\left( N_{m}\right) }(x;\omega )$ is $p$-cyclic. More generally, using Eq(\ref{Eqext1}), we deduce that if $N_{m}$ is an arbitrary $p$-cyclic Maya diagram, then $V^{^{\left( N_{m}\right) }}(x;\omega )$ is a $ p$-cyclic potential. \newline Conversely, if $V^{^{\left( N_{m}\right) }}(x;\omega )$ is $p$-cyclic, there exists a tuple of $p$ integers $\left( \nu _{1},...,\nu _{p}\right) \in \mathbb{Z} ^{p}$ such that \begin{equation} V^{\left( N_{m},\nu _{1},...,\nu _{p}\right) }(x;\omega )=V^{\left( N_{m}\right) }(x;\omega )+\Delta , \end{equation} and necessarily the energy shift $\Delta $ must be an integer multiple of $ \omega $. 
Due to the correspondence between rational extensions and Maya diagrams, this implies immediately that $\left( N_{m},\nu _{1},...,\nu _{p}\right) $ and $N_{m}$ are equivalent. Consequently, $N_{m}$ is $p$ -cyclic. \end{proof} We then deduce the theorem \begin{theorem} \textit{The rational extensions of the HO }$V^{^{\left( N_{m}\right) }}(x;\omega )$\textit{\ where }$N_{m}$\textit{\ is given by Eq(\ref {pcyclicMD}), solve the dressing chain of period }$p$\textit{\ Eq(\ref {pcyclicdress}) for the set of parameters} \begin{equation} \Delta =k\omega \text{ and }\varepsilon _{i,i+1}=\left( \nu _{P(i)}-\nu _{P(i+1)}\right) \omega ,\ i\in \left\{ 1,...,p\right\} , \end{equation} \textit{where }$P$\textit{\ is any permutation of }$S_{p}$\textit{\ and (}$ \nu _{p+1}=\nu _{1}$\textit{)} \begin{equation} \left( \nu _{1},...,\nu _{p}\right) \mathit{=}\left( 0,1+\alpha _{1}k,...,\left( k-1\right) +\alpha _{k-1}k,\ \ \lambda _{1},\lambda _{1}+\mu _{1}k,...,\ \lambda _{j},\lambda _{j}+\mu _{j}k\right) \mathit{.} \end{equation} \textit{The solutions of the dressing chain system are then given by (}$z= \sqrt{\omega /2}x$\textit{)} \begin{equation} \left\{ \begin{array}{c} w_{i}^{\left( 1,...,i-1\right) }(x)=-\omega x/2+\sqrt{\frac{\omega }{2}} \frac{d}{dz}\left( \log \left( \frac{\mathcal{H}^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i-1\right) }\right) }\left( z\right) }{\mathcal{ H}^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i\right) }\right) }\left( z\right) }\right) \right) ,\text{ if the flip in }\nu _{P\left( i\right) }\text{ is positive,} \\ w_{i}^{\left( 1,...,i-1\right) }(x)=\omega x/2+\sqrt{\frac{\omega }{2}}\frac{ d}{dz}\left( \log \left( \frac{\mathcal{H}^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i-1\right) }\right) }\left( z\right) }{\mathcal{ H}^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i\right) }\right) }\left( z\right) }\right) \right) ,\text{ if the flip in }\nu _{P\left( i\right) }\text{ is negative,} 
\end{array} \right. \label{soldress} \end{equation} \textit{with the convention that if a spectral index is repeated twice in the tuple }$\left( \nu _{1},...,\nu _{p}\right) $\textit{\ characterizing the chain, then we suppress the corresponding eigenfunction in the }$ \mathcal{H}$\textit{\ determinants.} \end{theorem} \begin{proof} The first part is a direct consequence of the preceding lemma combined with Theorem 1. With the Crum formulas Eq(\ref{crum}), we can write \begin{equation} w_{i}^{\left( 1,...,i-1\right) }(x)=-\left( \log \left( \frac{W^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i\right) }\right) }(x;\omega )}{W^{\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i-1\right) }\right) }(x;\omega )}\right) \right) ^{\prime }, \end{equation} with the convention that if a spectral index is repeated twice in the tuple $\left( \nu _{1},...,\nu _{p}\right) $ characterizing the chain, then we suppress the corresponding eigenfunction in the Wronskians.\newline If $\nu _{P\left( i\right) }\in N_{m},$ the flip in $\nu _{P\left( i\right) } $ is positive and the tuple $\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i\right) }\right) $ contains one less index than $\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i-1\right) }\right) $. Using Eq(\ref {WH}) and Eq(\ref{spec OH}), we deduce the first equality of Eq(\ref {soldress}).\newline If $\nu _{P\left( i\right) }\notin N_{m},$ the flip in $\nu _{P\left( i\right) }$ is negative and the tuple $\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i\right) }\right) $ contains one more index than $\left( N_{m},\nu _{P\left( 1\right) },...,\nu _{P\left( i-1\right) }\right) $. Using Eq(\ref{WH}) and Eq(\ref{spec OH}), we deduce the second equality of Eq(\ref{soldress}). 
\end{proof} \subsection{Examples} \subsubsection{One-cyclic extensions of the HO and rational solutions of the dressing chain of period $1$.} For $p=1$, we have $k=1,\ j=0$ and then immediately \textit{Theorem 3} gives $N_{m}=\varnothing $. It means that the unique rational potential solving the $1$-cyclic chain is the HO itself and the corresponding cyclic chain reduces to $\left( 0\right) $, i.e.\ to the SUSY partnership. This is perfectly coherent with our previous results on one-step cyclicity (see Eq(\ref {RSOHfond})). \subsubsection{Two-cyclic extensions of the HO and rational solutions of the dressing chain of period $2$.} For $p=2$, we have $k=2,\ j=0$ and then we deduce from \textit{Theorem 2} that the Okamoto extension of the HO associated to the Maya diagram \begin{equation} N_{m}=\left( \left( 1\mid m\right) _{2}\right) =\left( 1,3,...,2m-1\right) ,\ \end{equation} solves the dressing chain of period $2$. This particular type of Okamoto Maya diagram is called an \textbf{Umemura staircase}. The corresponding $2$-cyclic chain is $\left( 0,2m+1\right) $. Nevertheless, \textit{Theorem 3} is not applicable here since $p$ is even and we cannot conclude that this is the most general rational solution of the $2$-cyclic chain. In fact, we can write (see Eq(\ref{crum})) \begin{equation} V^{^{\left( N_{m}\right) }}(x;\omega )=V(x;\omega )-2\left( \log W^{\left( 1,3...,2m-1\right) }(x;\omega )\right) ^{\prime \prime }, \label{2ext} \end{equation} where (see Eq(\ref{Eqext5}) and Eq(\ref{spec OH})) \begin{equation} W^{\left( 1,3...,2m-1\right) }(x;\omega )\propto e^{-mz^{2}/2}W(H_{1}(z),H_{3}(z)...,H_{2m-1}(z)\mid z), \end{equation} with $z=\sqrt{\omega /2}x$. But we also have \cite{magnus,szego} \begin{equation} \left\{ \begin{array}{c} H_{2j+1}(z)=\left( -1\right) ^{j}2^{2j+1}j!t^{1/2}L_{j}^{1/2}(t) \\ H_{2j}(z)=\left( -1\right) ^{j}2^{2j}j!L_{j}^{-1/2}(t) \end{array} \right. 
, \label{correspHL} \end{equation} where $L_{j}^{\alpha }$ is the usual Laguerre polynomial and $t=z^{2}$. It follows (see Eq(\ref{wronskprop})) that \begin{equation} W^{\left( 1,3...,2m-1\right) }(x;\omega )\propto e^{-mt/2}t^{m(m+1)/4}W(L_{0}^{1/2}(t),L_{1}^{1/2}(t)...,L_{m-1}^{1/2}(t)\mid t). \end{equation} Using the well-known derivation property of the Laguerre polynomials \cite {magnus,szego} \begin{equation} \frac{dL_{j}^{\alpha }\left( t\right) }{dt}=-L_{j-1}^{\alpha +1}\left( t\right) , \label{derivL} \end{equation} with $L_{0}^{\alpha }(t)=1$ and $L_{-n}^{\alpha }(t)=0$, we obtain straightforwardly \begin{equation} W(L_{0}^{1/2}(t),L_{1}^{1/2}(t)...,L_{m-1}^{1/2}(t)\mid t)=\left( -1\right) ^{m(m-1)/2} \end{equation} and \begin{equation} W^{\left( 1,3...,2m-1\right) }(x;\omega )\propto x^{m(m+1)/2}e^{-m\omega x^{2}/4}. \end{equation} Substituting in Eq(\ref{2ext}), we arrive at \begin{equation} V^{^{\left( N_{m}\right) }}(x;\omega )=\omega ^{2}x^{2}/4+\frac{m(m+1)}{x^{2} }-\left( m+1/2\right) \omega =V(x;\omega ,m-1/2), \end{equation} which is an isotonic oscillator with an integer angular momentum possessing, as expected, the trivial monodromy property. In this lowest even case, we notice that using \textit{Theorem 2} does not allow us to recover the general IO potential for arbitrary values of the $ \alpha $ parameter, which is, as shown previously (see Eq(\ref{2 step cyclic2})), the most general rational solution of the dressing chain of period $2$. \subsubsection{Three-cyclic extensions of the HO and rational solutions of PIV.} We have $p=3$ and consequently we can refer to \textit{Theorem 3} to determine all the rational solutions of the dressing chain of period $3$, i.e.\ of the PIV equation. For the possible values of $k$ (and $j$) we have only two possibilities: $k=1\ \left( j=1\right) $ or $k=3\ \left( j=0\right) $. 
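The two classical identities used in the period-$2$ computation above (the odd-index Hermite--Laguerre correspondence Eq(\ref{correspHL}) and the constancy of the Laguerre Wronskian) can be checked symbolically. A minimal sketch with sympy (the functions \texttt{hermite}, \texttt{assoc\_laguerre} and \texttt{wronskian} are sympy's; this is an illustrative check, not part of the derivation):

```python
import sympy as sp

z, t = sp.symbols('z t')

# Odd-index Hermite-Laguerre correspondence:
# H_{2j+1}(z) = (-1)^j 2^(2j+1) j! sqrt(t) L_j^{1/2}(t), with t = z^2
for j in range(5):
    lhs = sp.hermite(2*j + 1, z)
    rhs = ((-1)**j * 2**(2*j + 1) * sp.factorial(j) * z
           * sp.assoc_laguerre(j, sp.Rational(1, 2), z**2))
    assert sp.expand(lhs - rhs) == 0

# Wronskian (in t) of consecutive Laguerre polynomials with fixed parameter:
# each derivative lowers the index and raises the parameter, so the Wronskian
# matrix is triangular with +-1 on the diagonal and the determinant is a sign.
for m in range(1, 6):
    funcs = [sp.assoc_laguerre(j, sp.Rational(1, 2), t) for j in range(m)]
    assert sp.expand(sp.wronskian(funcs, t)) == (-1)**(m*(m - 1)//2)

print("Hermite-Laguerre identities verified")
```

The same loop with \texttt{hermite(2*j, z)} and \texttt{assoc\_laguerre(j, -1/2, z**2)} checks the even-index line of Eq(\ref{correspHL}).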
\subsubsection{k=1} The $3$-cyclic Maya diagram with $k=1$ is a $3$-GH Maya diagram of the form \begin{equation} \left( \lambda \mid \mu \right) _{1}=\left( \lambda ,...,\lambda +\mu -1\right) =H_{\lambda ,\mu }. \label{3GH} \end{equation} Following Clarkson's terminology \cite{clarkson,clarkson2,clarkson3}, the $3$ -cyclic extensions $V^{^{\left( H_{\lambda ,\mu }\right) }}(x;\omega )$ are called the \textbf{3-generalized Hermite (3-GH)} extensions. From the Krein-Adler theorem \cite{krein,adler2,GGM1}, we deduce immediately that 3-GH extensions which are regular on the real line are those for which $ \mu $ is even. The possible $3$-cyclic chains corresponding to $H_{\lambda ,\mu }$ are built by permutation of $\left\{ 0,\lambda ,\lambda +\mu \right\} $. For the particular choice $\left( 0,\lambda +\mu ,\lambda \right) $, the parameters in the dressing chain system of period $3$ (see Eq(\ref{pcyclicdress})) are \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=\left( -\lambda -\mu \right) \omega \\ \varepsilon _{23}=\mu \omega \\ \varepsilon _{31}-\Delta =\left( \lambda -1\right) \omega . \end{array} \right. \end{equation} As for the associated parameters in the PIV\ equation, they are given by \begin{equation} a=-\left( 1-\mu -2\lambda \right) ,\quad b=-2\mu ^{2}, \end{equation} or, setting $m=2\lambda +\mu -1$ and $n=\lambda $, \begin{equation} a=m\in \mathbb{Z} ,\quad b=-2\left( 1+m-2n\right) ^{2}. \end{equation} The same type of result (up to redefinition of $a$ and $b$) can be obtained for all the possible choices of $\varepsilon _{1}/\omega $ in the set $ \left\{ 0,\lambda ,\lambda +\mu \right\} $. 
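These parameter identifications can be spot-checked symbolically. Realizing the $\mathcal{H}$-determinant of the diagram $H_{\lambda ,\mu }$ as the Hermite Wronskian $W(H_{\lambda },...,H_{\lambda +\mu -1}\mid z)$ (up to an irrelevant constant factor), the logarithmic-derivative solution $y_{0}=\frac{d}{dz}\log \left( \mathcal{H}^{\left( H_{\lambda ,\mu }\right) }/\mathcal{H}^{\left( H_{\lambda -1,\mu }\right) }\right) $ derived in this section must solve PIV with $a=-\left( 1-\mu -2\lambda \right) $ and $b=-2\mu ^{2}$. A sympy sketch (the helper names are ours):

```python
import sympy as sp

z = sp.symbols('z')

def gh_wronskian(lam, mu):
    # H-determinant of H_{lam,mu} = (lam, ..., lam+mu-1), realized (up to a
    # constant factor) as a Wronskian of Hermite polynomials.
    if mu <= 0 or lam < 0:
        return sp.Integer(1)
    return sp.wronskian([sp.hermite(n, z) for n in range(lam, lam + mu)], z)

def piv_residual(y, a, b):
    # PIV: y'' = y'^2/(2y) + (3/2) y^3 + 4 z y^2 + 2(z^2 - a) y + b/y
    return sp.diff(y, z, 2) - (sp.diff(y, z)**2/(2*y)
                               + sp.Rational(3, 2)*y**3 + 4*z*y**2
                               + 2*(z**2 - a)*y + b/y)

for lam, mu in [(1, 2), (2, 2)]:
    y0 = sp.diff(sp.log(gh_wronskian(lam, mu) / gh_wronskian(lam - 1, mu)), z)
    a, b = -(1 - mu - 2*lam), -2*mu**2
    assert sp.cancel(piv_residual(sp.cancel(y0), a, b)) == 0
print("PIV verified for the sampled generalized Hermite solutions")
```

For instance, $\lambda =1$, $\mu =2$ gives $y_{0}=4z/(2z^{2}+1)$, which solves PIV with $a=3$, $b=-8$.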
For $\varepsilon _{1}/\omega =0$, the corresponding solution of PIV is obtained as (see Eq(\ref{chv}) and Eq(\ref{WH})) \begin{equation} y_{0}=\sqrt{\frac{2}{\omega }}\left( w_{0}^{\left( H_{\lambda ,\mu }\right) }-w_{0}\right) =-\frac{d}{dz}\left( \log \left( \frac{\psi _{0}^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}}\right) \right) , \end{equation} with ($z=\sqrt{\omega /2}x$) \begin{equation} \frac{\psi _{0}^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}}\propto \frac{ W^{\left( 0,H_{\lambda ,\mu }\right) }\left( x;\omega \right) }{W^{\left( H_{\lambda ,\mu }\right) }\left( x;\omega \right) \psi _{0}\left( x;\omega \right) }\propto \frac{\mathcal{H}^{\left( H_{\lambda -1,\mu }\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }, \end{equation} where we have used Eq(\ref{Eqext5}) and the fact that \begin{equation} H_{\lambda ,\mu }-1=H_{\lambda -1,\mu }. \end{equation} Then \begin{equation} y_{0}\left( z\right) =\frac{d}{dz}\log \left( \frac{\mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda -1,\mu }\right) }\left( z\right) }\right) . 
\end{equation} The second possible choice is $\varepsilon _{1}/\omega =\lambda $ in which case \begin{equation} y_{\lambda }=\sqrt{\frac{2}{\omega }}\left( w_{\lambda }^{\left( H_{\lambda ,\mu }\right) }-w_{0}\right) =-\frac{d}{dz}\left( \log \left( \frac{\psi _{\lambda }^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}}\right) \right) , \end{equation} where, using Eq(\ref{Eqext5}) and the fact that $\left( H_{\lambda ,\mu },\lambda \right) =\left( \lambda +1,...,\lambda +\mu -1\right) =H_{\lambda +1,\mu -1}$ \begin{equation} \frac{\psi _{\lambda }^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}}\propto \frac{W^{\left( \lambda ,H_{\lambda ,\mu }\right) }\left( x;\omega \right) }{ W^{\left( H_{\lambda ,\mu }\right) }\left( x;\omega \right) \psi _{0}\left( x;\omega \right) }\propto \frac{\mathcal{H}^{\left( H_{\lambda +1,\mu -1}\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( z\right) \psi _{0}^{2}\left( x;\omega \right) }\propto e^{z^{2}} \frac{\mathcal{H}^{\left( H_{\lambda +1,\mu -1}\right) }\left( z\right) }{ \mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }. \end{equation} Then \begin{equation} y_{\lambda }\left( z\right) =-2z+\frac{d}{dz}\log \left( \frac{\mathcal{H} ^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda +1,\mu -1}\right) }\left( z\right) }\right) . 
\end{equation} The last possible choice is $\varepsilon _{1}/\omega =\lambda +\mu $ giving \begin{equation} y_{\lambda +\mu }=\sqrt{\frac{2}{\omega }}\left( w_{\lambda +\mu }^{\left( H_{\lambda ,\mu }\right) }-w_{0}\right) =-\frac{d}{dz}\left( \log \left( \frac{\psi _{\lambda +\mu }^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}} \right) \right) , \end{equation} with ($\left( H_{\lambda ,\mu },\lambda +\mu \right) =\left( \lambda ,...,\lambda +\mu \right) =H_{\lambda ,\mu +1}$) \begin{equation} \frac{\psi _{\lambda +\mu }^{\left( H_{\lambda ,\mu }\right) }}{\psi _{0}} \propto \frac{W^{\left( \lambda +\mu ,H_{\lambda ,\mu }\right) }\left( x;\omega \right) }{ W^{\left( H_{\lambda ,\mu }\right) }\left( x;\omega \right) \psi _{0}\left( x;\omega \right) }\propto \frac{\mathcal{H}^{\left( H_{\lambda ,\mu +1}\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }. \end{equation} Then \begin{equation} y_{\lambda +\mu }\left( z\right) =\frac{d}{dz}\log \left( \frac{\mathcal{H} ^{\left( H_{\lambda ,\mu }\right) }\left( z\right) }{\mathcal{H}^{\left( H_{\lambda ,\mu +1}\right) }\left( z\right) }\right) . \end{equation} We then retrieve the three usual forms for the rational solutions associated to the generalized Hermite polynomials, namely (with $k=1$, $z=t$) \cite {clarkson,clarkson2,clarkson3} \begin{equation} \left\{ \begin{array}{c} y_{0}(t)=\frac{d}{dt}\log \left( \mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( t\right) /\mathcal{H}^{\left( H_{\lambda -1,\mu }\right) }\left( t\right) \right) \\ y_{\lambda }(t)=-2t+\frac{d}{dt}\log \left( \mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( t\right) /\mathcal{H}^{\left( H_{\lambda +1,\mu -1}\right) }\left( t\right) \right) \\ y_{\lambda +\mu }(t)=\frac{d}{dt}\log \left( \mathcal{H}^{\left( H_{\lambda ,\mu }\right) }\left( t\right) /\mathcal{H}^{\left( H_{\lambda ,\mu +1}\right) }\left( t\right) \right) , \end{array} \right. 
\end{equation} obtained for the following integer values of the $a$ and $b$ parameters in Eq(\ref{PIV}) \begin{equation} a=-\left( 1-\mu -2\lambda \right) ,\quad b=-2\mu ^{2}. \end{equation} \subsubsection{k=3} Now consider the case $k=3$. The $3$-cyclic Maya diagram with $k=3$ is a $3$ -Okamoto Maya diagram of the form \begin{equation} \left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3}\right) =\left( 1,...,1+3(\alpha _{1}-1);2,...,2+3(\alpha _{2}-1)\right) =\Omega _{\alpha _{1},\alpha _{2}}. \end{equation} Note that for some small values of $\alpha _{1}$ and $\alpha _{2}$, the 3-Okamoto Maya diagrams coincide with 3-GH Maya diagrams \begin{equation} \Omega _{1,1}=\left( 1,2\right) =H_{1,2};\ \Omega _{1,0}=\left( 1\right) =H_{1,1};\ \Omega _{0,1}=\left( 2\right) =H_{2,1}. \end{equation} The Krein-Adler theorem implies that the 3-Okamoto extensions which are regular on the real line are those for which $\alpha _{1}=\alpha _{2}$, namely correspond to 3-Okamoto Maya diagrams of the form $\Omega _{\alpha ,\alpha }$. The possible $3$-cyclic chains associated to $\Omega _{\alpha _{1},\alpha _{2}}$ are built by permutation from $\left\{ 0,1+3\alpha _{1},2+3\alpha _{2}\right\} $. For the chain $\left( 1+3\alpha _{1},2+3\alpha _{2},0\right) $, the parameters of the dressing chain system of period $3$ (see Eq(\ref {pcyclicdress})) are \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=\left( -1+3(\alpha _{1}-\alpha _{2})\right) \omega \\ \varepsilon _{23}=\left( 2+3\alpha _{2}\right) \omega \\ \varepsilon _{31}-\Delta =\left( -4-3\alpha _{1}\right) \omega . \end{array} \right. \end{equation} The corresponding parameters in the PIV\ equation are then given by ($\omega =1,\Delta =k=3$) \begin{equation} a=\alpha _{1}+\alpha _{2},\quad b=-\frac{2}{9}\left( -1+3(\alpha _{1}-\alpha _{2})\right) ^{2}, \end{equation} or, setting $j=\alpha _{1}+\alpha _{2}$, \begin{equation} a=j\in \mathbb{Z} ,\quad b=-2\left( 1/3-j+2\alpha _{2}\right) ^{2}. 
\end{equation} The same result (up to redefinition of the integers) can be obtained with all the possible choices of $\varepsilon _{1}$ in the set $\left\{ 0,1+3\alpha _{1},2+3\alpha _{2}\right\} $. For $\varepsilon _{1}/\omega =0$, the corresponding solution of PIV is obtained as (see Eq(\ref{chv})) \begin{equation} y_{0}=\sqrt{\frac{2}{\Delta }}\left( w_{0}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }-3w_{0}\right) =-\sqrt{\frac{2}{3\omega }}\frac{d}{ dx}\left( \log \left( \frac{\psi _{0}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\right) \right) , \end{equation} with ($z=\sqrt{\omega /2}x$). If we suppose $\alpha _{1},\alpha _{2}\geq 2$, by using Eq(\ref{Eqext5}), we obtain \begin{equation} \frac{\psi _{0}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\propto \frac{W^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\oplus 3\right) }\left( x;\omega \right) }{W^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( x;\omega \right) \psi _{0}^{3}(x;\omega )}\propto \frac{ \mathcal{H}^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) \psi _{0}^{2}(x;\omega )}\propto e^{z^{2}}\frac{\mathcal{H} ^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\right) }\left( z\right) }{ \mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }. \end{equation} Note that $\left( 0,\Omega _{\alpha _{1},\alpha _{2}}\right) =\left( 0,1,2\right) \cup \left( 4,...,1+3(\alpha _{1}-1);5,...,2+3(\alpha _{2}-1)\right) =\Omega _{\alpha _{1}-1,\alpha _{2}-1}\oplus 3$. 
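The diagram identity $(0,\Omega _{\alpha _{1},\alpha _{2}})=\Omega _{\alpha _{1}-1,\alpha _{2}-1}\oplus 3$ used here is pure bookkeeping on occupied levels and is easy to check mechanically; a small Python sketch (the helper names are ours):

```python
def okamoto(a1, a2):
    # Omega_{a1,a2} = (1 | a1)_3 u (2 | a2)_3 as a set of occupied levels
    return {1 + 3*i for i in range(a1)} | {2 + 3*i for i in range(a2)}

def translate(maya, k):
    # N (+) k = {0, ..., k-1} u (N + k)
    return set(range(k)) | {n + k for n in maya}

# (0, Omega_{a1,a2}) adds the level 0; the result must be the translated
# diagram Omega_{a1-1,a2-1} (+) 3 for all a1, a2 >= 2.
for a1 in range(2, 6):
    for a2 in range(2, 6):
        assert {0} | okamoto(a1, a2) == translate(okamoto(a1 - 1, a2 - 1), 3)
print("diagram identity verified")
```

The restriction $\alpha _{1},\alpha _{2}\geq 2$ of the text is only there so that the translated diagram on the right-hand side is again of Okamoto type with non-negative parameters.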
Then \begin{equation} y_{0}=-\sqrt{\frac{2\omega }{3}}x+\sqrt{\frac{2}{3\omega }}\frac{d}{dx}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\right) }\left( z\right) }\right) , \end{equation} that is, \begin{equation} y_{0}(t)=-\frac{2}{3}t+\frac{d}{dt}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/\sqrt{3}\right) }{ \mathcal{H}^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\right) }\left( t/ \sqrt{3}\right) }\right) . \end{equation} The second possible choice is $\varepsilon _{1}/\omega =1+3\alpha _{1}$ in which case \begin{equation} y_{1+3\alpha _{1}}(x)=\sqrt{\frac{2}{3\omega }}\left( w_{1+3\alpha _{1}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }-3w_{0}\right) =- \sqrt{\frac{2}{3\omega }}\frac{d}{dx}\left( \log \left( \frac{\psi _{1+3\alpha _{1}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\right) \right) , \end{equation} with ($\left( 1+3\alpha _{1},\Omega _{\alpha _{1},\alpha _{2}}\right) =\Omega _{\alpha _{1}+1,\alpha _{2}}$) \begin{equation} \frac{\psi _{1+3\alpha _{1}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\propto \frac{W^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( x;\omega \right) }{W^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( x;\omega \right) \psi _{0}^{3}(x;\omega )} \propto \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) \psi _{0}^{2}(x;\omega )}\propto e^{z^{2}} \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( z\right) }{ \mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }. 
\end{equation} Then \begin{equation} y_{1+3\alpha _{1}}=-\sqrt{\frac{2\omega }{3}}x+\sqrt{\frac{2}{3\omega }} \frac{d}{dx}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( z\right) }\right) , \end{equation} that is \begin{equation} y(t)=-\frac{2}{3}t+\frac{d}{dt}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/\sqrt{3}\right) }{\mathcal{H} ^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( t/\sqrt{3} \right) }\right) . \end{equation} The last possible choice is $\varepsilon _{1}/\omega =2+3\alpha _{2}$ giving \begin{equation} y_{2+3\alpha _{2}}=\sqrt{\frac{2}{\Delta }}\left( w_{2+3\alpha _{2}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }-3w_{0}\right) =-\sqrt{\frac{2}{ 3\omega }}\frac{d}{dx}\left( \log \left( \frac{\psi _{2+3\alpha _{2}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\right) \right) , \end{equation} with ($\left( 2+3\alpha _{2},\Omega _{\alpha _{1},\alpha _{2}}\right) =\Omega _{\alpha _{1},\alpha _{2}+1}$) \begin{equation} \frac{\psi _{2+3\alpha _{2}}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }}{\psi _{0}^{3}}\propto \frac{W^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( x;\omega \right) }{W^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( x;\omega \right) \psi _{0}^{3}(x;\omega )}\propto \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) \psi _{0}^{2}(x;\omega )} \propto e^{z^{2}}\frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }. 
\end{equation} Then \begin{equation} y_{2+3\alpha _{2}}=-\sqrt{\frac{2\omega }{3}}x+\sqrt{\frac{2}{3\omega }} \frac{d}{dx}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( z\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( z\right) }\right) , \end{equation} that is \begin{equation} y_{2+3\alpha _{2}}\left( t\right) =-\frac{2}{3}t+\frac{d}{dt}\log \left( \frac{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/ \sqrt{3}\right) }{\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( t/\sqrt{3}\right) }\right) . \end{equation} We then retrieve the three usual forms for the rational solutions associated to the generalized Okamoto polynomials, namely (with $k=3$, $z=t/\sqrt{3}$) \cite {clarkson,clarkson2,clarkson3} \begin{equation} \left\{ \begin{array}{c} y(t)=-2t/3+\frac{d}{dt}\log \left( \mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/\sqrt{3}\right) /\mathcal{H}^{\left( \Omega _{\alpha _{1}-1,\alpha _{2}-1}\right) }\left( t/\sqrt{3}\right) \right) \\ y(t)=-2t/3+\frac{d}{dt}\log \left( \mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/\sqrt{3}\right) /\mathcal{H}^{\left( \Omega _{\alpha _{1}+1,\alpha _{2}}\right) }\left( t/\sqrt{3}\right) \right) \\ y(t)=-2t/3+\frac{d}{dt}\log \left( \mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}}\right) }\left( t/\sqrt{3}\right) /\mathcal{H}^{\left( \Omega _{\alpha _{1},\alpha _{2}+1}\right) }\left( t/\sqrt{3}\right) \right) , \end{array} \right. \end{equation} obtained for the following integer values of the $a$ and $b$ parameters in Eq(\ref{PIV}) \begin{equation} a=\alpha _{1}+\alpha _{2}=j\in \mathbb{Z} ,\quad b=-2\left( 1/3-j+2\alpha _{2}\right) ^{2}. \end{equation} \subsection{5-cyclic chains and new solutions of the A$_{\text{4}}$-PIV} We have $p=5$ and consequently we can have $k=1\ \left( j=2\right) ,\ k=3\ \left( j=1\right) $ or $k=5\ \left( j=0\right) $. 
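Before turning to the period-$5$ case, the Okamoto-type solutions just obtained can also be verified directly in the simplest instance $\alpha _{1}=\alpha _{2}=1$, where $\mathcal{H}^{\left( \Omega _{1,1}\right) }(z)\propto W(H_{1},H_{2}\mid z)$ and $\mathcal{H}^{\left( \Omega _{0,0}\right) }=1$ (empty diagram), so that PIV must hold with $a=2$ and $b=-2/9$. A sympy sketch (an illustrative check; the symbol names are ours):

```python
import sympy as sp

t, u = sp.symbols('t u')

# H^(Omega_{1,1}) is proportional to W(H_1, H_2 | u) = 8*u**2 + 4
H11 = sp.wronskian([sp.hermite(1, u), sp.hermite(2, u)], u)

# First Okamoto-type family with alpha1 = alpha2 = 1:
# y(t) = -2t/3 + d/dt log( H^(Omega_{1,1})(t/sqrt(3)) / H^(Omega_{0,0}) )
y = -2*t/3 + sp.diff(sp.log(H11.subs(u, t/sp.sqrt(3))), t)

# PIV parameters: a = alpha1 + alpha2 = 2, b = -2/9*(-1 + 3(alpha1-alpha2))^2
a, b = 2, sp.Rational(-2, 9)
residual = (sp.diff(y, t, 2)
            - (sp.diff(y, t)**2/(2*y) + sp.Rational(3, 2)*y**3
               + 4*t*y**2 + 2*(t**2 - a)*y + b/y))
assert sp.simplify(residual) == 0
print("PIV verified for the Okamoto case alpha1 = alpha2 = 1")
```

Explicitly, this case gives $y(t)=-2t/3+4t/(2t^{2}+3)$, a rational PIV solution with $(a,b)=(2,-2/9)$.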
\subsubsection{k=1} The $5$-cyclic extension with $k=1$ is associated to a $5$-GH Maya diagram of the form \begin{equation} N_{m}=\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) ,\ m=\mu _{1}+\mu _{2}. \end{equation} When $\lambda _{1}+\mu _{1}\geq \lambda _{2}$, the block structure of $N_{m}$ is degenerate and we recover a $3$-GH Maya diagram $H_{\lambda ,\mu }$ (see Eq(\ref{3GH})), which is $3$-cyclic but then also trivially $5$-cyclic. For simplicity we suppose in the following $\lambda _{1}+\mu _{1}<\lambda _{2}$. Explicitly \begin{equation} N_{m}=\left( \lambda _{1},...,\lambda _{1}+\mu _{1}-1,\lambda _{2},...,\lambda _{2}+\mu _{2}-1\right) . \end{equation} The corresponding $5$-cyclic chain is built by permutation of \begin{equation} \left\{ \lambda _{1},\lambda _{1}+\mu _{1},\lambda _{2},\lambda _{2}+\mu _{2},0\right\} . \end{equation} For the chain $\left( \lambda _{1},\lambda _{1}+\mu _{1},\lambda _{2},\lambda _{2}+\mu _{2},0\right) $, the parameters of the dressing chain system of period 5 (see Eq(\ref{pcyclicdress})) are \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=-\mu _{1}\omega \\ \varepsilon _{23}=\left( \lambda _{1}-\lambda _{2}+\mu _{1}\right) \omega \\ \varepsilon _{34}=-\mu _{2}\omega \\ \varepsilon _{45}=\left( \lambda _{2}+\mu _{2}\right) \omega \\ \varepsilon _{51}-\Delta =-\left( \lambda _{1}+1\right) \omega , \end{array} \right. 
\end{equation} and the solutions of this system are given by (see Eq(\ref{soldress})) \begin{eqnarray} w_{1}(x) &=&w_{\lambda _{1}}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }(x) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \lambda _{1},\left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{2}(x) &=&w_{\lambda _{1}+\mu _{1}}^{\left( \lambda _{1},N_{m}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \lambda _{1}+\mu _{1},\left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) } \right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{3}(x) &=&w_{\lambda _{2}}^{\left( \left( \lambda 
_{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }(x) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \lambda _{2},\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{4}(x) &=&w_{\lambda _{2}+\mu _{2}}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) }\left( z\right) }{\mathcal{ H}^{\left( \lambda _{2}+\mu _{2},\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) }\left( z\right) }{\mathcal{ H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) , \end{eqnarray} and (see Eq(\ref{Eqext5})) \begin{eqnarray} w_{5}(x) &=&w_{0}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda 
_{2}+1\mid \mu _{2}\right) _{1}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) }\left( z\right) }{\mathcal{H} ^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) }\left( z\right) }\right) \right) . \end{eqnarray} \subsubsection{k=5} The $5$-cyclic extension with $k=5$ is associated to a $5$-Okamoto Maya diagram of the form \begin{equation} N_{m}=\left( \left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) =\Omega _{\alpha _{1},\alpha _{2},\alpha _{3},\alpha _{4}},\ m=\alpha _{1}+\alpha _{2}+\alpha _{3}+\alpha _{4}. \end{equation} The corresponding $5$-cyclic chains are obtained by permutation from \begin{equation} \left\{ 1+5\alpha _{1},2+5\alpha _{2},3+5\alpha _{3},4+5\alpha _{4},0\right\} . 
\end{equation} The parameters in the dressing chain system of period 5 (see Eq(\ref {pcyclicdress})) for the chain $\left( 1+5\alpha _{1},2+5\alpha _{2},3+5\alpha _{3},4+5\alpha _{4},0\right) $ are given by \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=\left( -1-5\left( \alpha _{2}-\alpha _{1}\right) \right) \omega \\ \varepsilon _{23}=\left( -1-5\left( \alpha _{3}-\alpha _{2}\right) \right) \omega \\ \varepsilon _{34}=\left( -1-5\left( \alpha _{4}-\alpha _{3}\right) \right) \omega \\ \varepsilon _{45}=\left( 5\alpha _{4}+4\right) \omega \\ \varepsilon _{51}-\Delta =\left( -6-5\alpha _{1}\right) \omega . \end{array} \right. \end{equation} and the corresponding solutions of the dressing chain system are (see Eq(\ref {soldress})) \begin{eqnarray} w_{1}(x) &=&w_{1+5\alpha _{1}}^{\left( \left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( 1+5\alpha _{1},\left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} 
w_{2}(x) &=&w_{2+5\alpha _{2}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( 2+5\alpha _{2},\left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{3}(x) &=&w_{3+5\alpha _{3}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( 3+5\alpha _{3},\left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega 
}{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{4}(x) &=&w_{4+5\alpha _{4}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( 4+5\alpha _{4},\left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) }\left( z\right) }\right) \right) , \end{eqnarray} and (see Eq(\ref{Eqext5})) \begin{eqnarray} w_{5}(x) &=&w_{0}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) }(x) \notag \\ &=&\omega 
x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) }\left( z\right) }{\mathcal{H} ^{\left( 0,\left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) }\left( z\right) }{\mathcal{H} ^{\left( \left( 1\mid \alpha _{1}\right) _{5},\left( 2\mid \alpha _{2}\right) _{5},\left( 3\mid \alpha _{3}\right) _{5},\left( 4\mid \alpha _{4}\right) _{5}\right) }\left( z\right) }\right) \right) , \end{eqnarray} where we have used \begin{eqnarray} &&\left( 0,\left( 1\mid \alpha _{1}+1\right) _{5},\left( 2\mid \alpha _{2}+1\right) _{5},\left( 3\mid \alpha _{3}+1\right) _{5},\left( 4\mid \alpha _{4}+1\right) _{5}\right) \notag \\ &=&\left( 0,1,2,3,4\right) \cup \left( \left( 6\mid \alpha _{1}+1\right) _{5},\left( 7\mid \alpha _{2}+1\right) _{5},\left( 8\mid \alpha _{3}+1\right) _{5},\left( 9\mid \alpha _{4}+1\right) _{5}\right) =N_{m}\oplus 5. \end{eqnarray} \subsubsection{k=3} The $5$-cyclic Maya diagram with $k=3$ is of the form \begin{equation} N_{m}=\left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) ,\ m=\alpha _{1}+\alpha _{2}+\mu _{1}, \end{equation} with the convention that an index repeated twice is suppressed from the list.
The corresponding $5$-cyclic chain is obtained by permutations from \begin{equation} \left\{ 1+3\alpha _{1},2+3\alpha _{2},\lambda _{1},\lambda _{1}+\mu _{1},0\right\} \end{equation} and for the chain $\left( 1+3\alpha _{1},2+3\alpha _{2},\lambda _{1},\lambda _{1}+\mu _{1},0\right) $ the parameters in the $5$-cyclic dressing chain system (see Eq(\ref{pcyclicdress})) are, in the order above, \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=\left( -1-3\left( \alpha _{2}-\alpha _{1}\right) \right) \omega \\ \varepsilon _{23}=\left( 2+3\alpha _{2}-\lambda _{1}\right) \omega \\ \varepsilon _{34}=-\mu _{1}\omega \\ \varepsilon _{45}=\left( \lambda _{1}+\mu _{1}-3\right) \omega \\ \varepsilon _{51}-\Delta =\left( -4-3\alpha _{1}\right) \omega . \end{array} \right. \end{equation} The solutions of the dressing chain system are given by (see Eq(\ref{soldress})) \begin{eqnarray} w_{1}(x) &=&w_{1+3\alpha _{1}}^{\left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( 1+3\alpha _{1},\left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) , \end{eqnarray}
\begin{eqnarray} w_{2}(x) &=&w_{2+3\alpha _{2}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( 2+3\alpha _{2},\left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{3}(x) &=&w_{\lambda _{1}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }(x) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \lambda _{1},\left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) \notag \\ &=&-\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) 
}{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}-1\right) _{3}\right) }\left( z\right) }\right) \right) , \end{eqnarray} \begin{eqnarray} w_{4}(x) &=&w_{\lambda _{1}+\mu _{1}}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}-1\right) _{3}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}-1\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \lambda _{1}+\mu _{1},\left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}-1\right) _{3}\right) }\left( z\right) }\right) \right) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}-1\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) , \end{eqnarray} and (see Eq(\ref{Eqext5})) \begin{eqnarray} w_{5}(x) &=&w_{0}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) }(x) \notag \\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( 0,\left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) \notag 
\\ &=&\omega x/2+\sqrt{\frac{\omega }{2}}\frac{d}{dz}\left( \log \left( \frac{ \mathcal{H}^{\left( \left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) }\left( z\right) }{\mathcal{H}^{\left( \left( 1\mid \alpha _{1}\right) _{3},\left( 2\mid \alpha _{2}\right) _{3},\left( \lambda _{1}\mid \mu _{1}\right) _{3}\right) }\left( z\right) }\right) \right) , \end{eqnarray} where we have used: \begin{equation} \begin{array}{c} \left( 0,\left( 1\mid \alpha _{1}+1\right) _{3},\left( 2\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) \\ =\left( 0,1,2\right) \cup \left( \left( 4\mid \alpha _{1}+1\right) _{3},\left( 5\mid \alpha _{2}+1\right) _{3},\left( \lambda _{1}+3\mid \mu _{1}\right) _{3}\right) =N_{m}\oplus 3. \end{array} \end{equation} \section{Rational extensions of the IO and rational solutions of the dressing chains of even periodicity} \subsection{Rational extensions of the IO} The IO potential (with zero ground level $E_{0}=0$) is defined on the positive half line $\left] 0,+\infty \right[ $ by \begin{equation} V\left( x;\omega ,\alpha \right) =\frac{\omega ^{2}}{4}x^{2}+\frac{\left( \alpha +1/2\right) (\alpha -1/2)}{x^{2}}-\omega \left( \alpha +1\right) ,\quad \left\vert \alpha \right\vert >1/2. \label{OI} \end{equation} If we add Dirichlet boundary conditions at $0$ and infinity and if we suppose $\alpha >1/2$, it has the following spectrum ($z=\omega x^{2}/2$) \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) =2n\omega \\ \psi _{n\otimes \varnothing }\left( x;\omega ,\alpha \right) =\psi _{0\otimes \varnothing }\left( x;\omega ,\alpha \right) \mathit{L}_{n}^{\alpha }\left( z\right) \end{array} \right. ,\quad n\geq 0, \label{spec OI} \end{equation} with $\psi _{0\otimes \varnothing }\left( x;\omega ,\alpha \right) =z^{\left( \alpha +1/2\right) /2}e^{-z/2}$.
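As a quick symbolic sanity check of Eq(\ref{spec OI}), the sketch below (in Python with sympy; the specific values $\omega =2$ and $\alpha =1/3$ are illustrative assumptions) verifies that the $\psi _{n\otimes \varnothing }$ above satisfy the Schr\"{o}dinger equation for $V\left( x;\omega ,\alpha \right) $ with $E_{n}=2n\omega $ for the first few levels.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
omega = 2                    # illustrative value (assumption)
alpha = sp.Rational(1, 3)    # illustrative non-integer alpha (assumption)
z = omega * x**2 / 2

# IO potential with zero ground level, Eq (OI)
V = sp.Rational(1, 4) * omega**2 * x**2 \
    + (alpha + sp.Rational(1, 2)) * (alpha - sp.Rational(1, 2)) / x**2 \
    - omega * (alpha + 1)

for n in range(3):
    # psi_{n (x) empty} = z^{(alpha+1/2)/2} e^{-z/2} L_n^alpha(z), Eq (spec OI)
    psi = z**((alpha + sp.Rational(1, 2)) / 2) * sp.exp(-z / 2) \
        * sp.assoc_laguerre(n, alpha, z)
    # Schroedinger residual: -psi'' + V psi - E_n psi, with E_n = 2 n omega
    residual = -sp.diff(psi, x, 2) + V * psi - 2 * n * omega * psi
    # dividing by psi removes the non-polynomial prefactor before simplifying
    assert sp.simplify(residual / psi) == 0
print("E_n = 2 n omega verified for n = 0, 1, 2")
```

Dividing the residual by $\psi $ leaves an explicit rational expression in $x$, which sympy cancels to zero.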
The interest of this notation for the spectral indices ($n\otimes \varnothing $ rather than $n$) will become clear later. $V\left( x;\omega ,\alpha \right) $ is translationally shape invariant with ($\alpha _{n}=\alpha +n$) \begin{equation} V^{\left( 0\otimes \varnothing \right) }\left( x;\omega ,\alpha \right) =V\left( x;\omega ,\alpha _{1}\right) +2\omega . \label{SI IO} \end{equation} It also possesses three discrete parametric symmetries: *The $\Gamma _{1}$\textbf{\ symmetry, }$\left( \omega ,\alpha \right) \overset{\Gamma _{1}}{\rightarrow }\left( -\omega ,\alpha \right) $, which acts as \begin{equation} V(x;\omega ,\alpha )\overset{\Gamma _{1}}{\rightarrow }V(x;-\omega ,\alpha )=V(x;\omega ,\alpha )+2\omega \left( \alpha +1\right) , \end{equation} and generates the \textbf{conjugate shadow spectrum} of $V\left( x;\omega ,\alpha \right) $: \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) \\ \psi _{n\otimes \varnothing }(x;\omega ,\alpha ) \end{array} \right. \overset{\Gamma _{1}}{\rightarrow }\left\{ \begin{array}{c} E_{\left( -n-1\right) -\alpha }\left( \omega \right) =-2\left( n+1+\alpha \right) \omega <0 \\ \psi _{\varnothing \otimes \left( -n-1\right) }(x;\omega ,\alpha )=z^{\left( \alpha +1/2\right) /2}e^{z/2}\mathit{L}_{n}^{\alpha }\left( -z\right) \end{array} \right. ,\quad n\geq 0. \label{conjshadOI} \end{equation} *The $\Gamma _{2}$\textbf{\ symmetry, }$\left( \omega ,\alpha \right) \overset{\Gamma _{2}}{\rightarrow }\left( \omega ,-\alpha \right) $, which acts as \begin{equation} V(x;\omega ,\alpha )\overset{\Gamma _{2}}{\rightarrow }V(x;\omega ,-\alpha )=V(x;\omega ,\alpha )+2\omega \alpha , \end{equation} and generates the \textbf{shadow spectrum} of $V\left( x;\omega ,\alpha \right) :$ \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) \\ \psi _{n\otimes \varnothing }(x;\omega ,\alpha ) \end{array} \right.
\overset{\Gamma _{2}}{\rightarrow }\left\{ \begin{array}{c} E_{n-\alpha }\left( \omega \right) =2\left( n-\alpha \right) \omega \\ \psi _{\varnothing \otimes n}(x;\omega ,\alpha )=z^{\left( -\alpha +1/2\right) /2}e^{-z/2}\mathit{L}_{n}^{-\alpha }\left( z\right) \end{array} \right. ,\quad n\geq 0. \label{shadOI} \end{equation} Note that \begin{equation} \psi _{\varnothing \otimes 0}(x;\omega ,\alpha )=\psi _{0}(x;\omega ,\alpha )z^{-\alpha } \end{equation} * The $\Gamma _{3}=\Gamma _{1}\circ \Gamma _{2}$\textbf{\ symmetry,} $\left( \omega ,\alpha \right) \overset{\Gamma _{3}}{\rightarrow }\left( -\omega ,-\alpha \right) $, which acts as \begin{equation} V(x;\omega ,\alpha )\overset{\Gamma _{3}}{\rightarrow }V(x;-\omega ,-\alpha )=V(x;\omega ,\alpha )+2\omega , \end{equation} and generates the \textbf{conjugate spectrum} of $V\left( x;\omega ,\alpha \right) $ : \begin{equation} \left\{ \begin{array}{c} E_{n}\left( \omega \right) \\ \psi _{n\otimes \varnothing }(x;\omega ,\alpha ) \end{array} \right. \overset{\Gamma _{3}}{\rightarrow }\left\{ \begin{array}{c} E_{-n-1}\left( \omega \right) =-2\left( n+1\right) \omega <0 \\ \psi _{\left( -n-1\right) \otimes \varnothing }(x;\omega ,\alpha )=z^{\left( -\alpha +1/2\right) /2}e^{z/2}\mathit{L}_{n}^{-\alpha }\left( -z\right) \end{array} \right. ,\quad n\geq 0. \label{conjOi} \end{equation} The union of the spectrum and the conjugate spectrum forms the \textbf{ extended spectrum} and the union of the shadow and conjugate shadow spectra forms the \textbf{extended shadow spectrum}. All together they contain all the quasi-polynomial eigenfunctions of the IO. To avoid the specific case where the extended and extended shadow spectra merge, we restrict the values of $\alpha $ to be non-integer: \begin{equation} \alpha \notin \mathbb{N} . 
\label{alphaconst} \end{equation} The rational extensions of the IO, which are obtained via chains of DT associated with seed functions of this type, can then be indexed by a pair of Maya diagrams, which we call a \textbf{universal character (UC)} \cite {koike,tsuda,tsuda2,tsuda3} \begin{equation} N_{m}\otimes L_{r}=\left( n_{1},...,n_{m}\right) \otimes \left( l_{1},...,l_{r}\right) , \label{bi-tuple} \end{equation} $N_{m}=\left( n_{1},...,n_{m}\right) $ containing the spectral indices of the seed functions belonging to the extended spectrum and $L_{r}=\left( l_{1},...,l_{r}\right) $ those belonging to the extended shadow spectrum. The UC is said to be \textbf{canonical} if $N_{m}$\ and $L_{r}$ are canonical Maya diagrams. If $N_{m}$\ and $L_{r}$ are respectively $p_{1}$-cyclic with translation of $k_{1}>0$ and $p_{2}$-cyclic with translation of $k_{2}>0$, then we say that the UC $N_{m}\otimes L_{r}$ is $p$\textbf{-cyclic}, $p=p_{1}+p_{2}$. For reasons which will become clear later, we call the quantity $k=k_{1}-k_{2}$ the \textbf{balanced translation amplitude} of $N_{m}\otimes L_{r}$. We have proven in \cite{GGM} that if we have two \textbf{equivalent UC} \begin{equation} N_{m}\otimes L_{r}\approx N_{m^{\prime }}^{\prime }\otimes L_{r^{\prime }}^{\prime }, \label{eqUC} \end{equation} that is, if \begin{equation} N_{m}\approx N_{m^{\prime }}^{\prime }\text{ and }L_{r}\approx L_{r^{\prime }}^{\prime }, \end{equation} then \begin{equation} V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )\ =V^{N_{m^{\prime }}^{\prime }\otimes L_{r^{\prime }}^{\prime }}(x;\omega ,\alpha _{s})+2q\omega ,\ q,s\in \mathbb{Z} . \label{EqextIO1} \end{equation} In particular, if $N_{m}$ and $L_{r}$ are canonical and $k_{1},k_{2}>0$ \begin{equation} V^{\left( N_{m}\oplus k_{1}\right) \otimes \left( L_{r}\oplus k_{2}\right) }(x;\omega ,\alpha )\ =V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha _{k_{1}-k_{2}})+2k_{1}\omega .
\label{EqextIO2} \end{equation} For the Wronskians, we have the following equivalence relation \cite{GGM2} \begin{eqnarray} W^{\left( N_{m}\oplus k_{1}\right) \otimes \left( L_{r}\oplus k_{2}\right) }(x;\omega ,\alpha )\ &=&\prod\limits_{i=0}^{k_{1}-1}\left( \psi _{0\otimes \varnothing }(x;\omega ,\alpha _{i})\right) \prod\limits_{j=0}^{k_{2}-1}\left( \psi _{\varnothing \otimes 0}(x;\omega ,\alpha _{k_{1}-j})\right) W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha _{k_{1}-k_{2}}) \label{EqextIO3} \\ &=&z^{\alpha (k_{1}-k_{2})/2+\left( k_{1}-k_{2}\right) ^{2}/4}\times e^{-(k_{1}+k_{2})z/2}\times W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha _{k_{1}-k_{2}}), \notag \end{eqnarray} or, if $k_{1}\leq n_{1}\leq ...\leq n_{m}$ and $k_{2}\leq l_{1}\leq ...\leq l_{r}$ \begin{eqnarray} &&W^{\left( \left( 0,...,k_{1}-1\right) \cup N_{m}\right) \otimes \left( \left( 0,...,k_{2}-1\right) \cup L_{r}\right) }(x;\omega ,\alpha )\ \label{EqextIO4} \\ &=&\prod\limits_{i=0}^{k_{1}-1}\left( \psi _{0\otimes \varnothing }(x;\omega ,\alpha _{i})\right) \prod\limits_{j=0}^{k_{2}-1}\left( \psi _{\varnothing \otimes 0}(x;\omega ,\alpha _{k_{1}-j})\right) W^{\left( N_{m}-k_{1}\right) \otimes \left( L_{r}-k_{2}\right) }(x;\omega ,\alpha _{k_{1}-k_{2}}) \notag \\ &=&z^{\alpha (k_{1}-k_{2})/2+\left( k_{1}-k_{2}\right) ^{2}/4}\times e^{-(k_{1}+k_{2})z/2}W^{\left( N_{m}-k_{1}\right) \otimes \left( L_{r}-k_{2}\right) }(x;\omega ,\alpha _{k_{1}-k_{2}}). \notag \end{eqnarray} Using Eq(\ref{spec OI}), Eq(\ref{shadOI}), Eq(\ref{wronskprop}) and the derivation properties of the Laguerre polynomials \cite{magnus,szego,GGM2} (see also Eq(\ref{poch})) \begin{equation} \left\{ \begin{array}{c} \frac{d^{j}}{dz^{j}}\left( L_{n}^{\alpha }\left( z\right) \right) =\left( -1\right) ^{j}L_{n-j}^{\alpha +j}\left( z\right) \\ \frac{d^{j}}{dz^{j}}\left( z^{-\alpha }L_{n}^{-\alpha }\left( z\right) \right) =\left( n-\alpha \right) _{j}z^{-\alpha -j}L_{n}^{-\alpha -j}\left( z\right) , \end{array} \right.
\end{equation} we can write \begin{equation} W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )\ \propto z^{(m-r)^{2}/4-r\left( r-1\right) }\times z^{\alpha (m-r)/2}\times e^{-\left( m+r\right) z/2}\times \mathcal{L}^{N_{m}\otimes L_{r}}\left( z;\alpha \right) , \label{WL2} \end{equation} where $\mathcal{L}^{N_{m}\otimes L_{r}}$ is the following determinant ($i=0,...,m+r-1$) \begin{equation} \mathcal{L}^{N_{m}\otimes L_{r}}\left( z;\alpha \right) =\left\vert \overrightarrow{L}_{n_{1}}\left( z;\alpha \right) ,...,\overrightarrow{L} _{n_{m}}\left( z;\alpha \right) ,\overrightarrow{\Lambda }_{l_{1}}\left( z;\alpha \right) ,...,\overrightarrow{\Lambda }_{l_{r}}\left( z;\alpha \right) \right\vert , \label{PWL} \end{equation} with ($L_{n}^{\alpha }\left( z\right) =0$ if $n<0$) \begin{equation} \overrightarrow{L}_{n}\left( z;\alpha \right) =\left( \begin{array}{c} L_{n}^{\alpha }\left( z\right) \\ ... \\ \left( -1\right) ^{i}L_{n-i}^{\alpha +i}\left( z\right) \\ ... \\ \left( -1\right) ^{m+r-1}L_{n-m-r+1}^{\alpha +m+r-1}\left( z\right) \end{array} \right) ,\ \overrightarrow{\Lambda }_{l}\left( z;\alpha \right) =\left( \begin{array}{c} z^{m+r-1}L_{l}^{-\alpha }\left( z\right) \\ ... \\ \left( l-\alpha \right) _{i}z^{m+r-1-i}L_{l}^{-\alpha -i}\left( z\right) \\ ... \\ \left( l-\alpha \right) _{m+r-1}L_{l}^{-\alpha -m-r+1}\left( z\right) \end{array} \right) . \end{equation} $\mathcal{L}^{N_{m}\otimes L_{r}}$ is called a \textbf{Laguerre pseudowronskian} \cite{GGM,GGM2}.
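These derivation properties can be checked symbolically. In the sketch below (Python/sympy; the non-integer value $\alpha =1/3$ and the small ranges of $n$ and $j$ are illustrative assumptions), $\left( n-\alpha \right) _{j}$ is read as the falling factorial $\left( n-\alpha \right) \left( n-\alpha -1\right) \cdots \left( n-\alpha -j+1\right) $, the convention under which the second identity iterates correctly, together with the $z^{-\alpha -j}$ factor it produces.

```python
import sympy as sp

z = sp.symbols('z', positive=True)
a = sp.Rational(1, 3)   # illustrative non-integer alpha (assumption)

def falling(x, j):
    # (x)_j read as a falling factorial: x (x-1) ... (x-j+1) (assumed convention)
    return sp.prod([x - i for i in range(j)])

for n in range(1, 4):
    for j in range(1, n + 1):
        # d^j/dz^j L_n^a(z) = (-1)^j L_{n-j}^{a+j}(z)
        lhs1 = sp.diff(sp.assoc_laguerre(n, a, z), z, j)
        rhs1 = (-1)**j * sp.assoc_laguerre(n - j, a + j, z)
        assert sp.simplify(lhs1 - rhs1) == 0
        # d^j/dz^j ( z^{-a} L_n^{-a}(z) ) = (n-a)_j z^{-a-j} L_n^{-a-j}(z)
        lhs2 = sp.diff(z**(-a) * sp.assoc_laguerre(n, -a, z), z, j)
        rhs2 = falling(n - a, j) * z**(-a - j) * sp.assoc_laguerre(n, -a - j, z)
        # multiply by z^{a+j} so only a polynomial identity remains to cancel
        assert sp.simplify((lhs2 - rhs2) * z**(a + j)) == 0
print("Laguerre derivation properties verified")
```

The same loop, run over larger $n$ and other non-integer $\alpha $, behaves identically; the entries of $\overrightarrow{\Lambda }_{l}$ above are exactly these $j$-fold derivatives after the common factor $z^{-\alpha -m-r+1}$ is pulled out of the column.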
Eq(\ref{EqextIO3}) and Eq(\ref{EqextIO4}) then give \begin{equation} \left\{ \begin{array}{c} \mathcal{L}^{\left( N_{m}\oplus k_{1}\right) \otimes \left( L_{r}\oplus k_{2}\right) }(z;\alpha )\ \propto z^{2rk_{2}+k_{2}\left( k_{2}-1\right) } \mathcal{L}^{\left( N_{m}\right) \otimes \left( L_{r}\right) }(z;\alpha _{k_{1}-k_{2}}) \\ \mathcal{L}^{\left( \left( 0,...,k_{1}-1\right) \cup N_{m}\right) \otimes \left( \left( 0,...,k_{2}-1\right) \cup L_{r}\right) }(z;\alpha )\ \propto z^{2rk_{2}+k_{2}\left( k_{2}-1\right) }\mathcal{L}^{\left( N_{m}-k_{1}\right) \otimes \left( L_{r}-k_{2}\right) }(z;\alpha _{k_{1}-k_{2}}). \end{array} \right. \label{EqextIO5} \end{equation} The equivalence property Eq(\ref{EqextIO1}) can be viewed as the most general transcription of the shape invariance property Eq(\ref{SI IO}) at the level of the rational extensions of the IO potential \cite{GGM2}. Due to this equivalence property, to describe all the rational extensions of the IO, it is sufficient to consider those associated with canonical UC. We can also note the useful symmetry relation \begin{equation} W^{L_{r}\otimes N_{m}}(x;\omega ,\alpha )=\Gamma _{2}\left( W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )\right) , \label{sym1} \end{equation} which leads immediately to \begin{equation} \ V^{L_{r}\otimes N_{m}}(x;\omega ,\alpha )=\Gamma _{2}\left( V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )\right) +2\omega \alpha . \label{sym2} \end{equation} \subsection{$p$-cyclic extensions of the IO and rational solutions of the even periodic dressing chains} The cyclicity of the canonical UC $N_{m}\otimes L_{r}$ alone is not sufficient to ensure the cyclicity of the associated rational extension of the IO $V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )$. Nevertheless, we have the following lemma \begin{lemma} If the UC $N_{m}\otimes L_{r}$ is $p$-cyclic with a zero balanced translation amplitude, then $V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )$ is a $p$-cyclic potential.
\end{lemma} \begin{proof} If $N_{m}\otimes L_{r}$ is $p$-cyclic, there exists a chain of DT $\left( \nu _{1},...,\nu _{p_{1}}\right) \otimes \left( \lambda _{1},...,\lambda _{p_{2}}\right) $ with $p=p_{1}+p_{2}$, and $k_{1},k_{2}\in \mathbb{N} ^{\ast }$ such that \begin{equation} V^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\lambda _{1},...,\lambda _{p_{2}}\right) }(x;\omega ,\alpha )=V^{\left( N_{m}\oplus k_{1}\right) \otimes \left( L_{r}\oplus k_{2}\right) }(x;\omega ,\alpha ), \end{equation} which, combined with Eq(\ref{EqextIO2}), leads to \begin{equation} V^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\lambda _{1},...,\lambda _{p_{2}}\right) }(x;\omega ,\alpha )=V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha _{k_{1}-k_{2}})+2k_{1}\omega . \end{equation} In the case of a zero balanced translation amplitude, $k_{1}=k_{2}$ and $V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )$ is then $p$-cyclic with an energy shift $\Delta =2k_{1}\omega $. \end{proof} Combining this lemma with Theorem 1, we arrive directly at the following theorem \begin{theorem} \textit{The rational extensions of the IO }$V^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )$\textit{\ with} \begin{equation} \left\{ \begin{array}{c} N_{m}=\left( \left( 1\mid a_{1}\right) _{k},...,\left( k-1\mid a_{k-1}\right) _{k};\left( \lambda _{1}\mid \mu _{1}\right) _{k},...,\left( \lambda _{j_{1}}\mid \mu _{j_{1}}\right) _{k}\right) \\ L_{r}=\left( \left( 1\mid b_{1}\right) _{k},...,\left( k-1\mid b_{k-1}\right) _{k};\left( \rho _{1}\mid \sigma _{1}\right) _{k},...,\left( \rho _{j_{2}}\mid \sigma _{j_{2}}\right) _{k}\right) , \end{array} \right.
\label{th4} \end{equation} \textit{where }$a_{i},b_{i},\lambda _{i},\mu _{i},\rho _{i},\sigma _{i}$ \textit{\ are arbitrary positive integers, solve the dressing chain of even period }$p=p_{1}+p_{2}$\textit{\ with }$p_{l}=2j_{l}+k,\ l=1,2$\textit{, (}$ p_{1},p_{2}$\textit{\ and }$k$\textit{\ have the same parity with }$0<k\leq \min (p_{1},p_{2})$\textit{) for the following values of the parameters } \begin{equation} \Delta =2k\omega \text{ and }\varepsilon _{i,i+1}=\left\{ \begin{array}{c} 2\left( \nu _{P(i)}-\nu _{P(i+1)}\right) \omega ,\text{ if }\nu _{P(i)},\nu _{P(i+1)}\in \left\{ 1,...,p_{1}\right\} \text{ or }\nu _{P(i)},\nu _{P(i+1)}\in \left\{ p_{1}+1,...,p\right\} , \\ 2\left( \nu _{P(i)}-\nu _{P(i+1)}+\alpha \right) \omega ,\text{ if }\nu _{P(i)}\in \left\{ 1,...,p_{1}\right\} \text{ and }\nu _{P(i+1)}\in \left\{ p_{1}+1,...,p\right\} , \\ 2\left( \nu _{P(i)}-\nu _{P(i+1)}-\alpha \right) \omega ,\text{ if }\nu _{P(i+1)}\in \left\{ 1,...,p_{1}\right\} \text{ and }\nu _{P(i)}\in \left\{ p_{1}+1,...,p\right\} , \end{array} \right. \ \label{th41} \end{equation} \textit{(}$\nu _{p+1}=\nu _{1}$\textit{) where }$P$\textit{\ is any permutation of }$S_{p}$\textit{\ and} \begin{eqnarray} \left( \nu _{1},...,\nu _{p_{1}}\right) \otimes \left( \nu _{p_{1}+1},...,\nu _{p}\right) &=&\left( 0,1+a_{1}k,...,\left( k-1\right) +a_{k-1}k,\ \ \lambda _{1},\lambda _{1}+\mu _{1}k,...,\ \lambda _{j_{1}},\lambda _{j_{1}}+\mu _{j_{1}}k\right) \label{th42} \\ &&\otimes \left( 0,1+b_{1}k,...,\left( k-1\right) +b_{k-1}k,\ \ \rho _{1},\rho _{1}+\sigma _{1}k,...,\ \rho _{j_{2}},\rho _{j_{2}}+\sigma _{j_{2}}k\right) .
\notag \end{eqnarray} \textit{As for the }$w$\textit{\ solutions of the dressing chain system, for the chain above, they are given by} \begin{eqnarray} w_{\nu _{i}\otimes \varnothing }^{\left( N_{m},\nu _{1},...,\nu _{i-1}\right) \otimes L_{r}}(x;\omega ,\alpha ) &=&-\omega x/2+\frac{\alpha -3/2+m-r+i}{x} \label{th43} \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{i-1}\right) \otimes L_{r}}(z;\alpha )}{\mathcal{L} ^{\left( N_{m},\nu _{1},...,\nu _{i}\right) \otimes L_{r}}(z;\alpha )} \right) \right) , \notag \\ \text{if }i &\leq &p_{1}\text{\ and the flip in }\nu _{i}\text{ is positive,} \notag \end{eqnarray} \begin{eqnarray} w_{\nu _{i}\otimes \varnothing }^{\left( N_{m},\nu _{1},...,\nu _{i-1}\right) \otimes L_{r}}(x;\omega ,\alpha ) &=&\omega x/2-\frac{\alpha -1/2+m-r+i}{x} \label{th44} \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{i-1}\right) \otimes L_{r}}(z;\alpha )}{\mathcal{L} ^{\left( N_{m},\nu _{1},...,\nu _{i}\right) \otimes L_{r}}(z;\alpha )} \right) \right) , \notag \\ \text{ if }i &\leq &p_{1}\text{\ and the flip in }\nu _{i}\text{ is negative, } \notag \end{eqnarray} \begin{eqnarray} w_{\varnothing \otimes \nu _{i}}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i-1}\right) }(x;\omega ,\alpha ) &=&-\omega x/2-\frac{\alpha -13/2+m+p_{1}+3\left( r+j\right) }{x} \label{th45} \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i-1}\right) }(z;\alpha )}{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i}\right) }(z;\alpha )}\right) \right) , \notag \\ \text{if }i &=&p_{1}+j,\ j>0,\text{\ and the flip in }\nu _{i}\text{ is positive,} \notag \end{eqnarray} \begin{eqnarray} w_{\varnothing \otimes \nu _{i}}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right)
\otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i-1}\right) }(x;\omega ,\alpha ) &=&\omega x/2+\frac{\alpha -7/2+m+p_{1}+3\left( r+j\right) }{x} \label{th46} \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i-1}\right) }(z;\alpha )}{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i}\right) }(z;\alpha )}\right) \right) , \notag \\ \text{if }i &=&p_{1}+j,\ j>0,\text{\ and the flip in }\nu _{i}\text{ is negative,} \notag \end{eqnarray} \textit{with the convention that if a spectral index is repeated twice in the tuples constituting the UC, then we suppress the corresponding eigenfunction in the Laguerre pseudowronskians }$\mathcal{L}$\textit{.} \end{theorem} \begin{proof} Note that due to the relations Eq(\ref{sym1}) and Eq(\ref{sym2}), we can without loss of generality restrict the study of the solutions to the case $ p_{1}\geq p_{2}.$\newline If $i\leq p_{1}$\ and the flip in $\nu _{i}$ is negative, we have, using Eq(\ref{WL2}) \begin{eqnarray} w_{\nu \otimes \varnothing }^{N_{m}\otimes L_{r}}(x;\omega ,\alpha ) &=&-\left( \log \left( \frac{W^{\left( N_{m},\nu \right) \otimes L_{r}}(x;\omega ,\alpha )}{W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )}\right) \right) ^{\prime } \notag \\ &=&-\left( \log \left( x^{\alpha +1/2+m-r}\times e^{-\omega x^{2}/4}\times \frac{\mathcal{L}^{\left( N_{m},\nu \right) \otimes L_{r}}(z;\alpha )}{ \mathcal{L}^{N_{m}\otimes L_{r}}(z;\alpha )}\right) \right) ^{\prime } \notag \\ &=&\omega x/2-\frac{\alpha -1/2+m-r}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{N_{m}\otimes L_{r}}(z;\alpha )}{\mathcal{L} ^{\left( N_{m},\nu \right) \otimes L_{r}}(z;\alpha )}\right) \right) .
\end{eqnarray} In the same manner, if $i\leq p_{1}$\ and the flip in $\nu _{i}$ is positive, we have \begin{equation} w_{\nu \otimes \varnothing }^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )=-\omega x/2+\frac{\alpha -1/2+m-r}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{ \mathcal{L}^{N_{m}\otimes L_{r}}(z;\alpha )}{\mathcal{L}^{\left( N_{m},\nu \right) \otimes L_{r}}(z;\alpha )}\right) \right) . \end{equation} If $i>p_{1}$\ and the flip in $\nu _{i}$ is negative, we have, using Eq(\ref{WL2}) \begin{eqnarray} w_{\varnothing \otimes \nu }^{N_{m}\otimes L_{r}}(x;\omega ,\alpha ) &=&-\left( \log \left( \frac{W^{N_{m}\otimes \left( L_{r},\nu \right) }(x;\omega ,\alpha )}{W^{N_{m}\otimes L_{r}}(x;\omega ,\alpha )}\right) \right) ^{\prime } \notag \\ &=&-\left( \log \left( x^{-\alpha +1/2-m-3r}\times e^{-\omega x^{2}/4}\times \frac{\mathcal{L}^{N_{m}\otimes \left( L_{r},\nu \right) }(z;\alpha )}{ \mathcal{L}^{N_{m}\otimes L_{r}}(z;\alpha )}\right) \right) ^{\prime } \notag \\ &=&\omega x/2+\frac{\alpha -1/2+m+3r}{x} \notag \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{N_{m}\otimes L_{r}}(z;\alpha )}{\mathcal{L}^{N_{m}\otimes \left( L_{r},\nu \right) }(z;\alpha )}\right) \right) . \end{eqnarray} In the same manner, if $i>p_{1}$\ and the flip in $\nu _{i}$ is positive, we have \begin{eqnarray} w_{\varnothing \otimes \nu }^{N_{m}\otimes L_{r}}(x;\omega ,\alpha ) &=&-\omega x/2-\frac{\alpha -7/2+m+3r}{x} \notag \\ &&+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i-1}\right) }(z;\alpha )}{\mathcal{L}^{\left( N_{m},\nu _{1},...,\nu _{p_{1}}\right) \otimes \left( L_{r},\nu _{p_{1}+1},...,\nu _{i}\right) }(z;\alpha )}\right) \right) . \end{eqnarray} \end{proof} \subsection{Examples} \subsubsection{Dressing chain of period 2} We have $p=2$ which implies $p_{1}=p_{2}=k=1$ and $j_{1}=j_{2}=0$.
It leads to $N_{m}\otimes L_{r}=\varnothing \otimes \varnothing $ and the potential which solves the dressing chain of period $2$ is the IO itself. The corresponding $2$-cyclic chain is $\left( 0\right) \otimes \left( 0\right) $. These results are in agreement with section I. \subsubsection{4-cyclic extensions of the IO and rational solutions of PV} For $p=4$, we can have $p_{1}=3$ and $p_{2}=1$ with $k=1$ ($j_{1}=1,j_{2}=0$) or $p_{1}=p_{2}=k=2$ ($j_{1}=j_{2}=0$). As shown before, the dressing chain system is equivalent to the PV equation (see Eq(\ref{PV})) whose rational solutions are associated with the Umemura polynomials \cite{clarkson4}. \paragraph{$\left( p_{1},p_{2}\right) =\left( 3,1\right) $} \textit{Theorem 4} then gives $N_{m}\otimes L_{r}=\left( \lambda \mid \mu \right) _{1}\otimes \varnothing $ and a corresponding $4$-cyclic chain is given by $\left( \lambda ,\lambda +\mu ,0\right) \otimes \left( 0\right) $. The solutions of the dressing chain system of period $4$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=-2\mu \omega \\ \varepsilon _{23}=2\left( \lambda +\mu \right) \omega \\ \varepsilon _{34}=2\alpha \omega \\ \varepsilon _{41}-\Delta =2\left( -1-\lambda -\alpha \right) \omega , \end{array} \right.
\end{equation} are (see Eq(\ref{th43}-\ref{th46})) \begin{eqnarray} w_{1}(x) &=&w_{\lambda \otimes \varnothing }^{\left( \lambda \mid \mu \right) _{1}\otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&-\omega x/2+\frac{\alpha +\mu -1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \lambda \mid \mu \right) _{1}\otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \lambda +1\mid \mu -1\right) _{1}\otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\lambda \otimes \varnothing $ is positive, $m=\mu ,\ r=0$), \begin{eqnarray} w_{2}(x) &=&w_{\lambda +\mu \otimes \varnothing }^{\left( \lambda ,\left( \lambda \mid \mu \right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha )=w_{\lambda +\mu \otimes \varnothing }^{\left( \left( \lambda +1\mid \mu -1\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +\mu -1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \lambda +1\mid \mu -1\right) _{1}\otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \lambda +1\mid \mu \right) _{1}\otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\left( \lambda +\mu \right) \otimes \varnothing $ is negative, $m=\mu -1,\ r=0$), \begin{eqnarray} w_{3}(x) &=&w_{0\otimes \varnothing }^{\left( \lambda +\mu ,\lambda ,\left( \lambda \mid \mu \right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \text{ }=w_{0\otimes \varnothing }^{\left( \left( \lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +\mu +1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( 0,\left( \lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha )} \right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +\mu +1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 
\lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \lambda \mid \mu \right) _{1}\otimes \varnothing }(z;\alpha _{1})}\right) \right) \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=\mu ,\ r=0$) and (see Eq(\ref{EqextIO5})) \begin{eqnarray} w_{4}(x) &=&w_{\varnothing \otimes 0}^{\left( 0,\lambda +\mu ,\lambda ,\left( \lambda \mid \mu \right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha )=w_{\varnothing \otimes 0}^{\left( 0,\left( \lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2+\frac{\alpha +\mu +1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( 0,\left( \lambda +1\mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( 0,\left( \lambda +1\mid \mu \right) _{1}\right) \otimes \left( 0\right) }(z;\alpha )} \right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +\mu +1/2}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda \mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha _{1})}{\mathcal{L}^{\left( \left( \lambda \mid \mu \right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) . \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=\mu +1,\ r=0$), where we have used \begin{equation} \left\{ \begin{array}{c} \left( \lambda ,\left( \lambda \mid \mu \right) _{1}\right) =\left( \lambda +1\mid \mu -1\right) _{1} \\ \left( \lambda +\mu ,\left( \lambda +1\mid \mu -1\right) _{1}\right) =\left( \lambda +1\mid \mu \right) _{1}. \end{array} \right. \end{equation} Taking $\omega =2$ ($t=z$), the corresponding solution of PV (see Eq(\ref{PV})) with parameters (see Eq(\ref{paramPV})) \begin{equation} \left\{ \begin{array}{c} a=2\mu ^{2} \\ b=-2\alpha ^{2} \\ c=4(\alpha +2\lambda +\mu +1) \\ d=-1/2, \end{array} \right. 
\end{equation} is then (see Eq(\ref{defPV})) \begin{equation} y(t)=1-1/\left( \frac{d}{dt}\left( \log \left( \frac{\mathcal{L}^{\left( \lambda \mid \mu \right) _{1}\otimes \varnothing }(t;\alpha )}{\mathcal{L}^{\left( \lambda +1\mid \mu \right) _{1}\otimes \varnothing }(t;\alpha )}\right) \right) \right) . \end{equation} \paragraph{$\left( p_{1},p_{2}\right) =\left( 2,2\right) $} \textit{Theorem 4} gives $N_{m}\otimes L_{r}=\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}$ and the corresponding $4$-cyclic chain is $\left( 1+2a_{1},0\right) \otimes \left( 1+2b_{1},0\right) $. The solutions of the dressing chain system of period $4$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=2\left( 1+2a_{1}\right) \omega \\ \varepsilon _{23}=2\left( \alpha -1-2b_{1}\right) \omega \\ \varepsilon _{34}=2\left( 1+2b_{1}\right) \omega \\ \varepsilon _{41}-\Delta =2\left( -3-2a_{1}-\alpha \right) \omega , \end{array} \right. \end{equation} are (see Eq(\ref{th43}-\ref{th46}) and Eq(\ref{EqextIO5})) \begin{eqnarray} w_{1}(x) &=&w_{\left( 1+2a_{1}\right) \otimes \varnothing }^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +1/2+a_{1}-b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( 1\mid a_{1}+1\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )} \right) \right) \end{eqnarray} (the flip in $\left( 1+2a_{1}\right) \otimes \varnothing $ is negative, $m=a_{1},\ r=b_{1}$), \begin{eqnarray} w_{2}(x) &=&w_{0\otimes \varnothing }^{\left( 1\mid a_{1}+1\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +3/2+a_{1}-b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}+1\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( \left( 
1\mid a_{1}\right) _{2}\oplus 2\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +3/2+a_{1}-b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}+1\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha _{2})} \right) \right) \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=a_{1}+1,\ r=b_{1}$), \begin{eqnarray} w_{3}(x) &=&w_{\varnothing \otimes (1+2b_{1})}^{\left( 0,\left( 1\mid a_{1}+1\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2+\frac{\alpha +3/2+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2}\oplus 2\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L} ^{\left( \left( 1\mid a_{1}\right) _{2}\oplus 2\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +3/2+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})} \right) \right) \end{eqnarray} (the flip in $\varnothing \otimes (1+2b_{1})$ is negative, $m=a_{1}+2,\ r=b_{1}$) and \begin{eqnarray} w_{4}(x) &=&w_{\varnothing \otimes 0}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}+1\right) _{2}}(x;\omega ,\alpha _{2}) \notag \\ &=&\omega x/2+\frac{\alpha _{2}+5/2+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( \left( 1\mid b_{1}\right) _{2}\oplus 2\right) }(z;\alpha _{2})}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha 
+9/2+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{z^{4b_{1}+2}\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \notag \\ &=&\omega x/2+\frac{\alpha +1/2+a_{1}-5b_{1}}{x}+\omega x\frac{d}{dz}\left( \log \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) , \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=a_{1},\ r=b_{1}+1$), where we have used \begin{equation} \left\{ \begin{array}{c} \left( 1+2a_{1},\left( 1\mid a_{1}\right) _{2}\right) =\left( 1\mid a_{1}+1\right) _{2} \\ \left( 0,\left( 1\mid a_{1}+1\right) _{2}\right) =\left( 0,1\right) \cup \left( 3\mid a_{1}\right) _{2}=\left( 1\mid a_{1}\right) _{2}\oplus 2. \end{array} \right. \end{equation} Taking $\omega =2$ ($t=z$), the corresponding solution of PV (see Eq(\ref{PV})) with parameters (see Eq(\ref{paramPV})) \begin{equation} \left\{ \begin{array}{c} a=\left( 1+2a_{1}\right) ^{2}/8 \\ b=-\left( 1+2b_{1}\right) ^{2}/8 \\ c=8(\alpha +1+a_{1}-b_{1}) \\ d=-2, \end{array} \right. \end{equation} is then (see Eq(\ref{defPV})) \begin{equation} y(t)=1-2/\left( 1-\frac{\alpha +1+a_{1}-b_{1}}{t}+\frac{d}{dt}\left( \log \left( \frac{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(t;\alpha )}{\mathcal{L}^{\left( 1\mid a_{1}\right) _{2}\otimes \left( 1\mid b_{1}\right) _{2}}(t;\alpha _{2})}\right) \right) \right) . \end{equation} \subsubsection{Rational solutions of the Painlev\'{e} chain of period 6 (A$_{\text{5}}$-PV)} For $p=6$, we can have: \begin{itemize} \item $p_{1}=5$ and $p_{2}=1$ with $k=1$ ($j_{1}=2,j_{2}=0$) \item $p_{1}=4$ and $p_{2}=2$ with $k=2$ ($j_{1}=1,j_{2}=0$) \item $p_{1}=p_{2}=k=3$ ($j_{1}=j_{2}=0$). \end{itemize} 
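Note that all of these decompositions, like the $p=4$ decompositions of the preceding subsubsection, are consistent with the relations
\begin{equation}
p_{1}=k+2j_{1},\qquad p_{2}=k+2j_{2},\qquad p=2\left( k+j_{1}+j_{2}\right) ,
\end{equation}
which provide a quick consistency check on the values of $k$, $j_{1}$ and $j_{2}$ quoted below (we record this only as an observation read off from the listed cases; the precise statement is that of \textit{Theorem 4}).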
\paragraph{$\left( p_{1},p_{2}\right) =\left( 5,1\right) $} \textit{Theorem 4} then gives $N_{m}\otimes L_{r}=\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing $ and the corresponding $6$-cyclic chain is given by $\left( \lambda _{1},\lambda _{1}+\mu _{1},\lambda _{2},\lambda _{2}+\mu _{2},0\right) \otimes \left( 0\right) $. The solutions of the dressing chain system of period $6$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=-2\mu _{1}\omega \\ \varepsilon _{23}=2\left( \lambda _{1}+\mu _{1}-\lambda _{2}\right) \omega \\ \varepsilon _{34}=-2\mu _{2}\omega \\ \varepsilon _{45}=2\left( \lambda _{2}+\mu _{2}\right) \omega \\ \varepsilon _{56}=2\alpha \omega \\ \varepsilon _{61}-\Delta =2\left( -1-\lambda _{1}-\alpha \right) \omega , \end{array} \right. \end{equation} are (see Eq(\ref{th43}-\ref{th46})) \begin{eqnarray} w_{1}(x) &=&w_{\lambda _{1}\otimes \varnothing }^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha )\text{ (the flip in } \lambda _{1}\otimes \varnothing \text{ is positive, }m=\mu _{1}+\mu _{2},\ r=0\text{)} \notag \\ &=&-\omega x/2+\frac{\alpha +\mu _{1}+\mu _{2}-1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) , \end{eqnarray} \begin{eqnarray} w_{2}(x) &=&w_{\lambda _{1}+\mu _{1}\otimes \varnothing }^{\left( \left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +\mu _{1}+\mu _{2}-1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( 
\lambda _{1}+1\mid \mu _{1}-1\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\left( \lambda _{1}+\mu _{1}\right) \otimes \varnothing $ is negative, $m=\mu _{1}+\mu _{2}-1,\ r=0$), \begin{eqnarray} w_{3}(x) &=&w_{\lambda _{2}\otimes \varnothing }^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&-\omega x/2+\frac{\alpha +\mu _{1}+\mu _{2}-1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\lambda _{2}\otimes \varnothing $ is positive, $m=\mu _{1}+\mu _{2},\ r=0$), \begin{eqnarray} w_{4}(x) &=&w_{\left( \lambda _{2}+\mu _{2}\right) \otimes \varnothing }^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +\mu _{1}+\mu _{2}-1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}-1\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\left( \lambda _{2}+\mu _{2}\right) \otimes \varnothing $ is negative, $m=\mu _{1}+\mu _{2}-1,\ r=0$), 
\begin{eqnarray} w_{5}(x) &=&w_{0\otimes \varnothing }^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +\mu _{1}+\mu _{2}+1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +\mu _{1}+\mu _{2}+1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha _{1})}\right) \right) \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=\mu _{1}+\mu _{2},\ r=0$) and \begin{eqnarray} w_{6}(x) &=&w_{\varnothing \otimes 0}^{\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(x;\omega ,\alpha ) \notag \\ &=&\omega x/2+\frac{\alpha +\mu _{1}+\mu _{2}+1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}{\mathcal{L}^{\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1},\left( \lambda _{2}+1\mid \mu _{2}\right) _{1}\right) \otimes \left( 0\right) }(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +\mu _{1}+\mu _{2}+1/2}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha _{1})}{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1},\left( \lambda _{2}\mid \mu _{2}\right) _{1}\right) \otimes \varnothing }(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=\mu _{1}+\mu _{2}+1,\ r=0$). \paragraph{$\left( p_{1},p_{2}\right) =\left( 4,2\right) $} \textit{Theorem 4} gives $N_{m}\otimes L_{r}=\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}$ and the corresponding $6$-cyclic chain is $\left( 1+2a_{1},\lambda _{1},\lambda _{1}+\mu _{1},0\right) \otimes \left( 1+2b_{1},0\right) $. The solutions of the dressing chain system of period $6$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=2\left( 1+2a_{1}-\lambda _{1}\right) \omega \\ \varepsilon _{23}=-2\mu _{1}\omega \\ \varepsilon _{34}=2\left( \lambda _{1}+\mu _{1}\right) \omega \\ \varepsilon _{45}=2\left( \alpha -1-2b_{1}\right) \omega \\ \varepsilon _{56}=2\left( 1+2b_{1}\right) \omega \\ \varepsilon _{61}-\Delta =2\left( -3-2a_{1}-\alpha \right) \omega , \end{array} \right. 
\end{equation} are (see Eq(\ref{th43}-\ref{th46}) and Eq(\ref{EqextIO5})) \begin{eqnarray} w_{1}(x) &=&w_{\left( 1+2a_{1}\right) \otimes \varnothing }^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha )\text{ (the flip in }\left( 1+2a_{1}\right) \otimes \varnothing \text{ is negative, } m=a_{1}+\mu _{1},\ r=b_{1}\text{)} \notag \\ &=&\omega x/2-\frac{\alpha +1/2+a_{1}+\mu _{1}-b_{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) , \end{eqnarray} \begin{eqnarray} w_{2}(x) &=&w_{\lambda _{1}\otimes \varnothing }^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&-\omega x/2+\frac{\alpha +1/2+a_{1}+\mu _{1}-b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}-1\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\lambda _{1}\otimes \varnothing $ is positive, $m=a_{1}+1+\mu _{1},\ r=b_{1}$), \begin{eqnarray} w_{3}(x) &=&w_{\left( \lambda _{1}+\mu _{1}\right) \otimes \varnothing }^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}-1\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +1/2+a_{1}+\mu _{1}-b_{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}-1\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\left( \lambda _{1}+\mu _{1}\right) \otimes \varnothing $ is negative, $m=a_{1}+\mu _{1},\ r=b_{1}$), \begin{eqnarray} w_{4}(x) &=&w_{0\otimes \varnothing }^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha +3/2+a_{1}+\mu _{1}-b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( 0,\left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +3/2+a_{1}+\mu _{1}-b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha _{2})}\right) \right) , \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=\mu _{1}+a_{1}+1,\ r=b_{1}$, and we have $\left( 0,\left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) =\left( 0,1\right) \cup \left( \left( 3\mid a_{1}\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) =\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) 
_{2}\right) \oplus 2$), \begin{eqnarray} w_{5}(x) &=&w_{\varnothing \otimes \left( 1+2b_{1}\right) }^{\left( 0,\left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(x;\omega ,\alpha )\text{ (the flip in }\varnothing \otimes \left( 1+2b_{1}\right) \text{ is negative, }m=\mu _{1}+a_{1}+2,\ r=b_{1}\text{)} \notag \\ &=&\omega x/2+\frac{\alpha +3/2+\mu _{1}+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( 0,\left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}{\mathcal{L}^{\left( 0,\left( 1\mid a_{1}+1\right) _{2},\left( \lambda _{1}+2\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +3/2+\mu _{1}+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}\right) \right) , \end{eqnarray} and ($\left( 0,\left( 1\mid b_{1}+1\right) _{2}\right) =\left( 0,1\right) \cup \left( 3\mid b_{1}\right) _{2}=\left( 1\mid b_{1}\right) _{2}\oplus 2$, see also Eq(\ref{EqextIO5})) \begin{eqnarray} w_{6}(x) &=&w_{\varnothing \otimes 0}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(x;\omega ,\alpha _{2}) \notag \\ &=&\omega x/2+\frac{\alpha +9/2+\mu _{1}+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) 
_{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( \left( 1\mid b_{1}\right) _{2}\oplus 2\right) }(z;\alpha _{2}) }\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +9/2+\mu _{1}+a_{1}+3b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{z^{4b_{1}+2}\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +1/2+\mu _{1}+a_{1}-5b_{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}+1\right) _{2}}(z;\alpha _{2})}{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{2},\left( \lambda _{1}\mid \mu _{1}\right) _{2}\right) \otimes \left( 1\mid b_{1}\right) _{2}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=\mu _{1}+a_{1},\ r=b_{1}+1$). \paragraph{$\left( p_{1},p_{2}\right) =\left( 3,3\right) $ and $k=3$} \textit{Theorem 4} gives $N_{m}\otimes L_{r}=\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) $ and the corresponding $6$-cyclic chain is $\left( 1+3a_{1},2+3a_{2},0\right) \otimes \left( 1+3b_{1},2+3b_{2},0\right) $. 
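As a concrete illustration (a direct substitution in the data above), taking for instance $a_{1}=a_{2}=b_{1}=b_{2}=1$ gives
\begin{equation}
N_{m}\otimes L_{r}=\left( \left( 1\mid 1\right) _{3},\left( 2\mid 1\right) _{3}\right) \otimes \left( \left( 1\mid 1\right) _{3},\left( 2\mid 1\right) _{3}\right) ,
\end{equation}
with associated $6$-cyclic chain $\left( 4,5,0\right) \otimes \left( 4,5,0\right) $.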
The solutions of the dressing chain system of period $6$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=2\left( -1+3\left( a_{1}-a_{2}\right) \right) \omega \\ \varepsilon _{23}=2\left( 2+3a_{2}\right) \omega \\ \varepsilon _{34}=2\left( \alpha -1-3b_{1}\right) \omega \\ \varepsilon _{45}=2\left( -1+3\left( b_{1}-b_{2}\right) \right) \omega \\ \varepsilon _{56}=2\left( 2+3b_{2}\right) \omega \\ \varepsilon _{61}-\Delta =2\left( -4-3a_{1}-\alpha \right) \omega , \end{array} \right. \end{equation} are (see Eq(\ref{th43}-\ref{th46}) and Eq(\ref{EqextIO5})) \begin{eqnarray} w_{1}(x) &=&w_{\left( 1+3a_{1}\right) \otimes \varnothing }^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(x;\omega ,\alpha )\text{ } \\ &=&\omega x/2-\frac{\alpha +1/2+a_{1}+a_{2}-b_{1}-b_{2}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}{ \mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right)\notag \end{eqnarray} (the flip in $\left( 1+3a_{1}\right) \otimes \varnothing $ is negative, $ m=a_{1}+a_{2},\ r=b_{1}+b_{2}$), \begin{eqnarray} w_{2}(x) &=&w_{\left( 2+3a_{2}\right) \otimes \varnothing }^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(x;\omega ,\alpha ) \\ &=&\omega x/2-\frac{\alpha +3/2+a_{1}+a_{2}-b_{1}-b_{2}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 
2\mid b_{2}\right) _{3}\right) }(z;\alpha )}{ \mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}+1\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right)\notag \end{eqnarray} (the flip in $\left( 2+3a_{2}\right) \otimes \varnothing $ is negative, $ m=a_{1}+a_{2}+1,\ r=b_{1}+b_{2}$), \begin{eqnarray} w_{3}(x) &=&w_{0\otimes \varnothing }^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}+1\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(x;\omega ,\alpha ) \text{ } \\ &=&\omega x/2-\frac{\alpha +5/2+a_{1}+a_{2}-b_{1}-b_{2}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}+1\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}{ \mathcal{L}^{\left( \left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \oplus 3\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +5/2+a_{1}+a_{2}-b_{1}-b_{2}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}+1\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}{ \mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha _{3})}\right) \right)\notag \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=a_{1}+a_{2}+2,\ r=b_{1}+b_{2}$ and we have used $\left( 0,\left( 1\mid a_{1}+1\right) _{3},\left( 2\mid a_{2}+1\right) _{3}\right) =\left( 0,1,2\right) \cup \left( \left( 4\mid a_{1}\right) _{3},\left( 5\mid a_{2}\right) _{3}\right) =\left( \left( 1\mid 
a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \oplus 3$), \begin{eqnarray} w_{4}(x) &=&w_{\varnothing \otimes \left( 1+3b_{1}\right) }^{\left( \left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \oplus 3\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(x;\omega ,\alpha ) \\ &=&\omega x/2+\frac{\alpha +5/2+a_{1}+a_{2}+3b_{1}+3b_{2}}{x}+\omega x\frac{d }{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \oplus 3\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}{\mathcal{L}^{\left( \left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \oplus 3\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +5/2+a_{1}+a_{2}+3b_{1}+3b_{2}}{x}+\omega x\frac{d }{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha _{3})}{ \mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha _{3})}\right) \right)\notag \end{eqnarray} (the flip in $\varnothing \otimes \left( 1+3b_{1}\right) $ is negative, $ m=a_{1}+a_{2}+3,\ r=b_{1}+b_{2}$), \begin{eqnarray} w_{5}(x) &=&w_{\varnothing \otimes \left( 2+3b_{2}\right) }^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(x;\omega ,\alpha _{3}) \\ &=&\omega x/2+\frac{\alpha +11/2+a_{1}+a_{2}+3b_{1}+3b_{2}}{x}+\omega x\frac{ d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha _{3})}{ \mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}+1\right) _{3}\right) }(z;\alpha _{3})}\right) \right)\notag \end{eqnarray} (the flip in $\varnothing \otimes \left( 2+3b_{2}\right) $ is negative, $ m=a_{1}+a_{2},\ r=b_{1}+b_{2}+1$) and \begin{eqnarray} w_{6}(x) &=&w_{\varnothing \otimes 0}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}+1\right) _{3}\right) }(x;\omega ,\alpha _{3}) \\ &=&\omega x/2+\frac{\alpha +17/2+a_{1}+a_{2}+3b_{1}+3b_{2}}{x}+\omega x\frac{ d}{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}+1\right) _{3}\right) }(z;\alpha _{3}) }{z^{6\left( b_{1}+b_{2}\right) +6}\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha -7/2+a_{1}+a_{2}-9b_{1}-9b_{2}}{x}+\omega x\frac{d }{dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid 
a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}+1\right) _{3},\left( 2\mid b_{2}+1\right) _{3}\right) }(z;\alpha _{3}) }{\mathcal{L}^{\left( \left( 1\mid a_{1}\right) _{3},\left( 2\mid a_{2}\right) _{3}\right) \otimes \left( \left( 1\mid b_{1}\right) _{3},\left( 2\mid b_{2}\right) _{3}\right) }(z;\alpha )}\right) \right)\notag \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=a_{1}+a_{2},\ r=b_{1}+b_{2}+2$, see also Eq(\ref{EqextIO5})). \paragraph{$\left( p_{1},p_{2}\right) =\left( 3,3\right) $ and $k=1$} \textit{Theorem 4} gives $N_{m}\otimes L_{r}=\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}$ and the corresponding $6$-cyclic chain is $\left( \lambda _{1},\lambda _{1}+\mu _{1},0\right) \otimes \left( \rho _{1},\rho _{1}+\sigma _{1},0\right) $. The solutions of the dressing chain system of period $6$ with parameters \begin{equation} \left\{ \begin{array}{c} \varepsilon _{12}=-2\mu _{1}\omega \\ \varepsilon _{23}=2\left( \lambda _{1}+\mu _{1}\right) \omega \\ \varepsilon _{34}=2\left( \alpha -\rho _{1}\right) \omega \\ \varepsilon _{45}=-2\sigma _{1}\omega \\ \varepsilon _{56}=2\left( \rho _{1}+\sigma _{1}\right) \omega \\ \varepsilon _{61}-\Delta =2\left( -1-\lambda _{1}-\alpha \right) \omega , \end{array} \right. 
\end{equation} are (see Eq(\ref{th43}-\ref{th46}) and Eq(\ref{EqextIO5})): \begin{eqnarray} w_{1}(x) &=&w_{\lambda _{1}\otimes \varnothing }^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(x;\omega ,\alpha )\text{ } \notag \\ &=&-\omega x/2+\frac{\alpha -1/2+\mu _{1}-\sigma _{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}{\mathcal{L}^{\left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\lambda _{1}\otimes \varnothing $ is positive, $m=\mu _{1},\ r=\sigma _{1}$), \begin{eqnarray} w_{2}(x) &=&w_{\left( \lambda _{1}+\mu _{1}\right) \otimes \varnothing }^{\left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(x;\omega ,\alpha ) \notag \\ &=&\omega x/2-\frac{\alpha -1/2+\mu _{1}-\sigma _{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}+1\mid \mu _{1}-1\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}{\mathcal{L}^{\left( \lambda _{1}+1\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\left( \lambda _{1}+\mu _{1}\right) \otimes \varnothing $ is negative, $m=\mu _{1}-1,\ r=\sigma _{1}$), \begin{eqnarray} w_{3}(x) &=&w_{0\otimes \varnothing }^{\left( \lambda _{1}+1\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(x;\omega ,\alpha )\text{ } \notag \\ &=&\omega x/2-\frac{\alpha +1/2+\mu _{1}-\sigma _{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}+1\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1}\oplus 1\right) \otimes \left( \rho 
_{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2-\frac{\alpha +1/2+\mu _{1}-\sigma _{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}+1\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})} \right) \right) \end{eqnarray} (the flip in $0\otimes \varnothing $ is negative, $m=\mu _{1},\ r=\sigma _{1} $ and we have used $\left( 0,\left( \lambda _{1}+1\mid \mu _{1}\right) _{1}\right) =\left( \lambda _{1}\mid \mu _{1}\right) _{1}\oplus 1$), \begin{eqnarray} w_{4}(x) &=&w_{\varnothing \otimes \rho _{1}}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1}\oplus 1\right) \otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(x;\omega ,\alpha ) \notag \\ &=&-\omega x/2-\frac{\alpha -5/2+\mu _{1}+3\sigma _{1}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1}\oplus 1\right) \otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}{\mathcal{L}^{\left( \left( \lambda _{1}\mid \mu _{1}\right) _{1}\oplus 1\right) \otimes \left( \rho _{1}+1\mid \sigma _{1}-1\right) _{1}}(z;\alpha )}\right) \right) \notag \\ &=&-\omega x/2-\frac{\alpha -5/2+\mu _{1}+3\sigma _{1}}{x}+\omega x\frac{d}{ dz}\left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})}{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}-1\right) _{1}}(z;\alpha _{1})} \right) \right) \end{eqnarray} (the flip in $\varnothing \otimes \rho _{1}$ is positive, $m=\mu _{1}+1,\ r=\sigma _{1}$), \begin{eqnarray} w_{5}(x) &=&w_{\varnothing \otimes \left( \rho _{1}+\sigma _{1}\right) }^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}-1\right) 
_{1}}(x;\omega ,\alpha _{1}) \notag \\ &=&\omega x/2+\frac{\alpha -7/2+\mu _{1}+3\sigma _{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}-1\right) _{1}}(z;\alpha _{1})}{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})} \right) \right) \end{eqnarray} (the flip in $\varnothing \otimes \left( \rho _{1}+\sigma _{1}\right) $ is negative, $m=\mu _{1},\ r=\sigma _{1}-1$) and \begin{eqnarray} w_{6}(x) &=&w_{\varnothing \otimes 0}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}\right) _{1}}(x;\omega ,\alpha _{1}) \notag \\ &=&\omega x/2+\frac{\alpha +1/2+\mu _{1}+3\sigma _{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})}{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \left( \rho _{1}\mid \sigma _{1}\right) _{1}\oplus 1\right) }(z;\alpha _{1})}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +1/2+\mu _{1}+3\sigma _{1}}{x}+\omega x\frac{d}{dz }\left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})}{z^{2\sigma _{1}}\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}\right) \right) \notag \\ &=&\omega x/2+\frac{\alpha +1/2+\mu _{1}-\sigma _{1}}{x}+\omega x\frac{d}{dz} \left( \log \left( \frac{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}+1\mid \sigma _{1}\right) _{1}}(z;\alpha _{1})}{\mathcal{L}^{\left( \lambda _{1}\mid \mu _{1}\right) _{1}\otimes \left( \rho _{1}\mid \sigma _{1}\right) _{1}}(z;\alpha )}\right) \right) \end{eqnarray} (the flip in $\varnothing \otimes 0$ is negative, $m=\mu _{1},\ 
r=\sigma _{1} $, see also Eq(\ref{EqextIO5})). \subsection{Rational extensions of the HO as limit cases of rational extensions of the IO} Consider a rational extension of the HO associated with the canonical Maya diagram $N_{m}$. By splitting $N_{m}$ into two subsets of odd and even integers respectively, we can write \begin{equation} N_{m}=\left( 2a_{1}+1,...,2a_{m_{1}}+1\right) \cup \left( 2b_{1},...,2b_{m_{2}}\right) ,\ m=m_{1}+m_{2}, \end{equation} where $a_{i},b_{i}\in \mathbb{N} $, and \begin{eqnarray} W^{\left( N_{m}\right) }(x;\omega ) &\propto &e^{-mz/2} \notag \\ &&\times W\left( H_{2a_{1}+1}\left( \sqrt{\frac{\omega }{2}}x\right) ,...,H_{2a_{m_{1}}+1}\left( \sqrt{\frac{\omega }{2}}x\right) ,H_{2b_{1}}\left( \sqrt{\frac{\omega }{2}}x\right) ,...,H_{2b_{m_{2}}}\left( \sqrt{\frac{\omega }{2}}x\right) \mid x\right) , \end{eqnarray} where $z=\omega x^{2}/2$. Using Eq(\ref{wronskprop}) and Eq(\ref{correspHL}), we obtain immediately ($z=\omega x^{2}/2$) \begin{equation} W^{\left( N_{m}\right) }(x;\omega )\propto e^{-mz/2}W\left( z^{1/2}L_{a_{1}}^{1/2}(z),...,z^{1/2}L_{a_{m_{1}}}^{1/2}(z),L_{b_{1}}^{-1/2}(z),...,L_{b_{m_{2}}}^{-1/2}(z)\mid x\right) , \end{equation} i.e.\ (see Eq(\ref{spec OI}) and Eq(\ref{shadOI})) \begin{equation} W^{\left( N_{m}\right) }(x;\omega )\propto W^{A_{m_{1}}\otimes B_{m_{2}}}\left( x;\omega ,1/2\right) , \end{equation} where \begin{equation} A_{m_{1}}\otimes B_{m_{2}}=\left( a_{1},...,a_{m_{1}}\right) \otimes \left( b_{1},...,b_{m_{2}}\right) . \end{equation} Since (see Eq(\ref{OH}) and Eq(\ref{OI})) \begin{equation} V(x;\omega ,1/2)=V(x;\omega )-\omega , \end{equation} we deduce \begin{equation} V^{\left( N_{m}\right) }(x;\omega )=V^{A_{m_{1}}\otimes B_{m_{2}}}(x;\omega ,1/2)+\omega . \label{HO-IO} \end{equation} This means that all the rational extensions of the HO can be considered as rational extensions of the IO in the limit case where the $\alpha $ parameter tends to $1/2$.
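As an aside (my own illustration, not part of the paper), the even/odd splitting of a canonical Maya diagram $N_{m}$, and the half-integer reindexing $\widetilde{N}_{m}$ appearing below, can be made concrete in a few lines of Python:

```python
def split_maya(N):
    """Split a canonical Maya diagram N_m (distinct non-negative integers)
    into the two index sets of the universal character A_{m1} (x) B_{m2}:
    odd entries 2a+1 yield the a_i, even entries 2b yield the b_i."""
    A = tuple(sorted((n - 1) // 2 for n in N if n % 2 == 1))
    B = tuple(sorted(n // 2 for n in N if n % 2 == 0))
    return A, B

def ho_diagram(A, B, k=0):
    """Canonical HO Maya diagram identified with the rational extension of
    the IO labelled by A (x) B at alpha = k + 1/2; k = 0 recovers N_m."""
    odd = [2 * j - 1 for j in range(1, k + 1)] + [2 * a + 2 * k + 1 for a in A]
    even = [2 * b for b in B]
    return tuple(sorted(odd + even))

# example: N_5 = (1, 2, 3, 5, 8) splits into A = (0, 1, 2), B = (1, 4)
```

For $k=0$ the two maps are mutually inverse, which is the combinatorial content of the identification $V^{\left( N_{m}\right) }(x;\omega )=V^{A_{m_{1}}\otimes B_{m_{2}}}(x;\omega ,1/2)+\omega$ in the $\alpha \to 1/2$ limit discussed above.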
In this limit, the impenetrable barrier term in the IO potential disappears and the IO potential degenerates into the harmonic potential, which is regular on the whole real line; the spectrum of the IO potential gives the odd-indexed eigenstates of the HO potential, while the shadow spectrum of the IO gives the even-indexed ones. The result above shows that this degeneracy is still valid at the level of the rational extensions. We can note that if we start from an extension of the IO with half-integer $\alpha =k+1/2$, associated with the UC $A_{m_{1}}\otimes B_{m_{2}}$, Eq(\ref{EqextIO2}) allows us to write \begin{equation} V^{A_{m_{1}}\otimes B_{m_{2}}}(x;\omega ,k+1/2)=V^{\left( A_{m_{1}}\oplus k\right) \otimes B_{m_{2}}}(x;\omega ,1/2)+2k\omega , \end{equation} and, with the correspondence Eq(\ref{HO-IO}) above \begin{equation} V^{A_{m_{1}}\otimes B_{m_{2}}}(x;\omega ,k+1/2)=V^{\left( \widetilde{N} _{m}\right) }(x;\omega )+\left( 2k-1\right) \omega , \end{equation} where \begin{equation} \widetilde{N}_{m}=\left( 1,...,2k-1,2a_{1}+2k+1,...,2a_{m_{1}}+2k+1\right) \cup \left( 2b_{1},...,2b_{m_{2}}\right) . \end{equation} Consequently, the rational extensions of the IO with half-integer $\alpha $ parameter can be identified with rational extensions of the HO and are monodromy free. Conversely, the rational extensions of the IO which solve the even periodic dressing chains cover the subset of monodromy free solutions. \section{Conclusion} We propose a new method for building the rational solutions of the dressing chains for a Schr\"{o}dinger operator. The results are obtained directly in closed determinantal form, and we give some explicit examples which had never been presented before. For the chains with odd periodicity (the A$_{\text{2n}}$-PIV system) as well as for those with even periodicity (the A$_{\text{2n+1}}$-PV system), we describe in a new systematic way some sets of rational solutions of the dressing chain system.
In the odd periodicity case we conjecture, based on the theory of potentials with trivial monodromy \cite{grunbaum,oblomkov,veselov,GGM0}, that the construction described in this work covers all rational solutions of higher-order Painlev\'e systems. The even periodicity case is more involved, but it seems plausible to surmise that the same holds there, with the trivial monodromy property replaced by a constraint of fixed monodromy at one point for all the eigenfunctions of the potentials. These conjectures are the subject of further investigation. The question of the optimal pseudo-Wronskian representation of these solutions \cite{GGM} will also be addressed in a forthcoming work. \section{Acknowledgments} We wish to thank P.\ A.\ Clarkson, G.\ Filipuk, A.\ Hone and M.\ Mazzocco for enlightening discussions. DGU and SL acknowledge financial support from the Royal Society via the International Exchanges Scheme.
A former Pierce County Superior Court judge now working as a private attorney has been reprimanded by the Washington State Bar Association for violating professional rules of conduct. Sergio Armijo entered into a stipulated agreement with the association instead of submitting to a disciplinary hearing, according to records obtained recently by The News Tribune. Armijo conceded he failed to show up for several hearings his clients had scheduled in U.S. Immigration Court, did not provide files to another client who wished to change lawyers and practiced shoddy bookkeeping in his law office, the records show. “Your actions discredit you and the legal profession and show a disregard for the high traditions of honor expected from a member of the association,” state bar president Michele Radosevich wrote in a reprimand filed June 13. Armijo agreed to serve two years’ probation, to hire a bookkeeper and to submit to periodic audits of his books as part of his deal with the Bar Association, the records show. He declined to comment on the reprimand when contacted by The News Tribune last week. The Bar Association leveled several charges against Armijo in January after nine people filed grievances against him. Eight accused him of missing their hearings. The ninth contended Armijo unnecessarily delayed his hiring of a different lawyer by withholding his case files. In a written answer to the complaints, Armijo admitted missing some hearings but said he had valid excuses, including being ill. In other cases, he’d lined up a substitute lawyer to cover for him but the person did not show, he contended. The Bar Association also accused him of improper bookkeeping. Armijo’s wife, attorney Belinda Armijo, worked out of his law office and handled the business’s books until she was suspended in April 2012, amid allegations she did not properly account for client fees and testified falsely at a deposition in the case, bar records show. She was disbarred Tuesday. 
Sergio Armijo had been summoned to a hearing in June to answer the charges against him but worked out a deal instead. He served as a Superior Court judge for 15 years before being defeated in the 2008 election by local attorney Michael Hecht. Hecht subsequently lost his job in 2009 after being convicted of felony harassment and patronizing a prostitute, allegations originally made public by Armijo supporters, including his son, Morgan Armijo, a local private investigator. Hecht, who maintains the Armijo family set him up as a political vendetta, is appealing his convictions.

Adam Lynn: 253-597-8644, [email protected]
Filmmaking Admissions

The University of North Carolina School of the Arts School of Filmmaking offers Bachelor of Fine Arts (BFA) and Master of Fine Arts (MFA) programs. In addition to your online application, you will need to interview with the filmmaking faculty. Whether you have been around cameras or editing equipment for many years or just a few, if you have talent or potential, you can become a true storyteller in the film industry. The requirements for admission vary according to academic level, so quiet on the set, and begin by selecting the undergraduate and transfer or graduate academic level to learn how to apply. Any additional requirements for international students are referenced within the academic level admission process for the School of Filmmaking.
\begin{document} \begin{abstract} The mixed moments for the Askey-Wilson polynomials are found using a bootstrapping method and connection coefficients. A similar bootstrapping idea on generating functions gives a new Askey-Wilson generating function. An important special case of this hierarchy is a polynomial which satisfies a four term recurrence, and its combinatorics is studied. \end{abstract} \maketitle \section{Introduction} The Askey-Wilson polynomials \cite{AskeyWilson} $p_n(x;a,b,c,d|q)$ are orthogonal polynomials in $x$ which depend upon five parameters: $a$, $b$, $c$, $d$ and $q$. In \cite[\S2]{BI} Berg and Ismail use a bootstrapping method to prove orthogonality of Askey-Wilson polynomials by initially starting with the orthogonality of the $a=b=c=d=0$ case, the continuous $q$-Hermite polynomials, and successively proving more general orthogonality relations, adding parameters along the way. In this paper we implement this idea in two different ways. First, using successive connection coefficients for two sets of orthogonal polynomials, we will find explicit formulas for generalized moments of Askey-Wilson polynomials, see Theorem~\ref{thm:xnP}. This method also gives a heuristic for a relation between the two measures of the two polynomial sets, see Remark~\ref{remark:heur}, which is correct for the Askey-Wilson hierarchy. Using this idea we give a new generating function (Theorem~\ref{thm:dual_q_Hahn}) for Askey-Wilson polynomials when $d=0.$ The second approach is to assume the two sets of polynomials have generating functions which are closely related, up to a $q$-exponential factor. We prove in Theorem~\ref{thm:main} that if one set is an orthogonal set, the second set has a recurrence relation of predictable order, which may be greater than three. We give several examples using the Askey-Wilson hierarchy. 
Finally we consider a more detailed example of the second approach, using a generating function to define a set of polynomials called the discrete big $q$-Hermite polynomials. These polynomials satisfy a 4-term recurrence relation. We give the moments for the pair of measures for their orthogonality relations. Some of the combinatorics for these polynomials is given in \S~\ref{sec:comb-discr-big}. We also record in Proposition~\ref{prop:addthm} a possible $q$-analogue of the Hermite polynomial addition theorem. We shall use basic hypergeometric notation, which is in Gasper-Rahman \cite{GR} and Ismail \cite{Is}. \section{Askey-Wilson polynomials and connection coefficients} \label{sec:comp-line-funct} The connection coefficients are defined as the constants obtained when one expands one set of polynomials in terms of another set of polynomials. For the Askey-Wilson polynomials \cite[15.2.5,~p.~383]{Is} \[ p_n(x;a,b,c,d|q)= \frac{(ab,ac,ad)_n}{a^n} \hyper43{q^{-n},abcdq^{n-1},ae^{i\theta},ae^{-i\theta}} {ab,ac,ad}{q;q}, \quad x=\cos\theta \] we shall use the connection coefficients obtained by successively adding a parameter \[ (a,b,c,d)=(0,0,0,0)\rightarrow (a,0,0,0)\rightarrow (a,b,0,0)\rightarrow (a,b,c,0) \rightarrow (a,b,c,d). \] Using a simple general result on orthogonal polynomials, we derive an almost immediate proof of an explicit formula for the mixed moments of Askey-Wilson polynomials. First we set the notation for an orthogonal polynomial set $p_n(x).$ Let $\LL_p$ be the linear functional on polynomials for which orthogonality holds \[ \LL_p(p_m(x)p_n (x)) =h_n \delta_{mn}, \quad 0\le m,n. \] \begin{defn} The mixed moments of $\LL_p$ are $\LL_p(x^np_m(x)),\quad 0\le m,n.$ \end{defn} The main tool is the following Proposition, which allows the computation of mixed moments of one set of orthogonal polynomials from another set if the connection coefficients are known.
\begin{prop} \label{prop:bootstrap} Let $R_n(x)$ and $S_n(x)$ be orthogonal polynomials with linear functionals $\LL_R$ and $\LL_S$, respectively, such that $\LL_R(1)=\LL_S(1) = 1$. Suppose that the connection coefficients are \begin{equation} \label{eq:conncoef1} R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x). \end{equation} Then \[ \LL_S(x^n S_m(x)) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} c_{k,m} \LL_S(S_m(x)^2). \] \end{prop} \begin{proof} If we multiply both sides of \eqref{eq:conncoef1} by $S_m(x)$ and apply $\LL_S$, we have \[ \LL_S(R_k(x)S_m(x)) = c_{k,m} \LL_S(S_m(x)^2). \] Then by expanding $x^n$ in terms of $R_k(x)$ \[ x^n=\sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x) \] we find \begin{align*} \LL_S(x^n S_m(x)) = \LL_S\left( \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x) S_m(x)\right) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} c_{k,m} \LL_S(S_m(x)^2). \end{align*} \end{proof} \begin{remark} \label{remark:heur} One may also use the idea of Proposition~\ref{prop:bootstrap} to give a heuristic for representing measures of the linear functionals. Putting $m=0,$ if representing measures were absolutely continuous, say $w_R(x)dx$ for $R_n(x)$, and $w_S(x)dx$ for $S_n(x)$ then one might guess that \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{\LL_R(R_k(x)^2)} c_{k,0}. \] \end{remark} For the rest of this section we will compute the mixed moments $\LL_p(x^n p_m(x))$ for the Askey-Wilson polynomials using Proposition~\ref{prop:bootstrap} starting from the $q$-Hermite polynomials. Let $\LL_{a,b,c,d}$ be the linear functional for $p_n(x;a,b,c,d|q)$ satisfying $\LL_{a,b,c,d}(1)=1$. 
Then $\LL=\LL_{0,0,0,0}$, $\LL_{a}=\LL_{a,0,0,0}$, $\LL_{a,b}=\LL_{a,b,0,0}$, and $\LL_{a,b,c}=\LL_{a,b,c,0}$ are the linear functionals for these polynomials: $q$-Hermite, $H_n(x|q)=p_n(x;0,0,0,0|q)$, the big $q$-Hermite $H_n(x;a|q)=p_n(x;a,0,0,0|q)$, the Al-Salam-Chihara $Q_n(x;a,b|q)=p_n(x;a,b,0,0|q)$, and the dual $q$-Hahn $p_n(x;a,b,c|q)=p_n(x;a,b,c,0|q)$. The $L^2$-norms are given by \cite[15.2.4~p.383]{Is} \begin{align} \label{eq:orth_hermit} \LL(H_n(x|q) H_m(x|q)) &= (q)_n\delta_{mn},\\ \label{eq:orth_bighermit} \LL_a(H_n(x;a|q) H_m(x;a|q)) &= (q)_n\delta_{mn},\\ \label{eq:orth_ASC} \LL_{a,b}(Q_n(x;a,b|q) Q_m(x;a,b|q)) &= (q,ab)_n\delta_{mn},\\ \label{eq:orth_dual_q_Hahn} \LL_{a,b,c}(p_n(x;a,b,c|q) p_m(x;a,b,c|q)) &= (q,ab,ac,bc)_n\delta_{mn},\\ \label{eq:orth_AW} \LL_{a,b,c,d}(p_n(x;a,b,c,d|q) p_m(x;a,b,c,d|q)) &= \frac{(q,ab,ac,ad,bc,bd,cd,abcdq^{n-1})_n}{(abcd)_{2n}}\delta_{mn}. \end{align} To apply Proposition~\ref{prop:bootstrap}, we need the following connection coefficient formula for the Askey-Wilson polynomials given in \cite[(6.4)]{AskeyWilson} \begin{equation} \label{eq:cc} \frac{p_n(x;A,b,c,d|q)}{(q,bc,bd,cd)_n} =\sum_{k=0}^n \frac{p_k(x;a,b,c,d|q)}{(q,bc,bd,cd)_k} \times\frac{a^{n-k}(A/a)_{n-k}(A bcdq^{n-1})_k} {(abcdq^{k-1})_k (q,abcdq^{2k})_{n-k}}. \end{equation} The following four identities are special cases of \eqref{eq:cc}: \begin{align} \label{eq:cc0} H_n(x|q) &= \sum_{k=0}^n \qbinom{n}{k} H_k(x;a|q) a^{n-k},\\ \label{eq:cca} H_n(x;a|q) &=\sum_{k=0}^n \qbinom nk Q_k(x;a,b|q) b^{n-k},\\ \label{eq:ccab} Q_n(x;a,b|q) &=(ab)_n \sum_{k=0}^n \qbinom nk \frac{p_k(x;a,b,c|q)}{(ab)_k} c^{n-k},\\ \frac{p_n(x;b,c,d|q)}{(q,bc,bd,cd)_n} \label{eq:ccabc} &=\sum_{k=0}^n \frac{p_k(x;a,b,c,d|q)}{(q,bc,bd,cd)_k} \cdot\frac{a^{n-k}} {(abcdq^{k-1})_k (q,abcdq^{2k})_{n-k}}. \end{align} For the initial mixed moment we need the following result proved independently by \JV. 
\cite[Proposition 5.1]{JV_rook} and Cigler \cite[Proposition 15]{Cigler2011} \[ \LL(x^n H_m(x;q)) = \frac{(q)_m}{2^n} \op(n,m), \] where \[ \overline P(n,m) = \sum_{k=m}^n \left( \binom{n}{\frac{n-k}2} - \binom{n}{\frac{n-k}2-1}\right) (-1)^{(k-m)/2} q^{\binom{(k-m)/2+1}2} \qbinom{\frac{k+m}2}{\frac{k-m}2}. \] We shall use the convention $\binom nk = \qbinom nk = 0$ if $k<0$, $k>n$, or $k$ is not an integer. Thus $\overline P(n,m)=0$ if $n\not\equiv m \mod 2$. \begin{thm} \label{thm:xnP} We have \begin{align} \label{eq:big} \LL_a(x^n H_m(x;a|q)) &= \frac{(q)_m}{2^n}\sum_{\alpha\ge0} \op(n,\alpha+m) \qbinom{\alpha+m}{m} a^{\alpha}, \\ \label{eq:ASC} \LL_{a,b}(x^n Q_m(x;a,b|q)) &= \frac{(q,ab)_m}{2^n} \sum_{\alpha,\beta\ge0} \op(n,\alpha+\beta+m) \qbinom{\alpha+\beta+m}{\alpha,\beta,m} a^{\alpha} b^{\beta}, \\ \label{eq:dualqHahn} \LL_{a,b,c}(x^n p_m(x;a,b,c|q)) &= \frac{(q,ac,bc)_m}{2^n} \sum_{\alpha,\beta,\gamma\ge0} \op(n,\alpha+\beta+\gamma+m) \qbinom{\alpha+\beta+\gamma+m}{\alpha,\beta,\gamma,m} \\ \notag &\quad \times a^{\alpha} b^{\beta} c^{\gamma} (ab)_{\gamma+m},\\ \label{eq:AW} \LL_{a,b,c,d}(x^n p_m(x;a,b,c,d|q)) &= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \\ \notag &\quad \times \frac{(bd)_{\alpha}(cd)_{\alpha}(bc)_{\alpha+\delta}}{(abcd)_\alpha} \cdot \frac{(ab,ac,ad)_m (q^{\alpha};q^{-1})_m}{a^m (abcdq^{\alpha})_m}. \end{align} \end{thm} \begin{proof} By \eqref{eq:cc0}, Proposition~\ref{prop:bootstrap} and \eqref{eq:orth_hermit}, \begin{align*} \LL_a(x^n H_m(x;a|q)) &= \sum_{k=0}^n \frac{\LL(x^n H_k(x|q))}{\LL(H_k(x)^2)} \qbinom{k}{m} a^{k-m}\LL_a(H_m(x;a|q)^2)\\ &= \frac{(q)_m}{2^n}\sum_{k=0}^n \op(n,k) \qbinom{k}{m} a^{k-m}. \end{align*} Equations~\eqref{eq:ASC}, \eqref{eq:dualqHahn}, and \eqref{eq:AW} can be proved similarly using the connection coefficient formulas \eqref{eq:cca}, \eqref{eq:ccab}, and \eqref{eq:ccabc}. 
\end{proof} Letting $m=0$ in \eqref{eq:AW} we obtain a formula for the $n$th moment of the Askey-Wilson polynomials. \begin{cor} \label{cor:AWmoment} We have \begin{equation} \label{eq:AWmoment2} \LL_{a,b,c,d}(x^n) = \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(bd)_{\alpha}(cd)_{\alpha}(bc)_{\alpha+\delta}}{(abcd)_\alpha}. \end{equation} \end{cor} In \cite{KimStanton} the authors found a slightly different formula \[ \LL_{a,b,c,d}(x^n)= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(ad)_{\beta+\gamma}(ac)_{\beta}(bd)_{\gamma}}{(abcd)_{\beta+\gamma}}, \] which can be rewritten using the symmetry in $a,b,c,d$ as \begin{equation} \label{eq:AWmoment1} \LL_{a,b,c,d}(x^n)= \frac{1}{2^n}\sum_{\abcd,\ge0}\abcdpower \op(n,\abcd+) \qbinom{\abcd+}{\abcd,} \frac{(bc)_{\alpha+\delta}(bd)_{\alpha}(ac)_{\delta}}{(abcd)_{\alpha+\delta}}. \end{equation} One can obtain \eqref{eq:AWmoment1} from \eqref{eq:AWmoment2} by applying the $_3\phi_1$-transformation \cite[(III.8)]{GR} to the $\alpha$-sum after fixing $\gamma$, $\delta$, and $N=\alpha+\beta$. We next check if the heuristic in Remark~\ref{remark:heur} leads to correct results in these cases. The absolutely continuous Askey-Wilson measure $w(x;a,b,c,d|q)$ with total mass $1$ for $0<q<1$, $\max(|a|,|b|,|c|,|d|)<1$ is, if $x=\cos\theta$, $\theta\in [0,\pi]$, \begin{align} \label{eq:wabcd} w(\cos\theta;a,b,c,d|q) &= \frac{(q,ab,ac,ad,bc,bd,cd)_\infty}{2\pi(abcd)_\infty} \\ \notag &\quad \times \frac{(e^{2i\theta},e^{-2i\theta})_\infty} {(ae^{i\theta},ae^{-i\theta}, be^{i\theta},be^{-i\theta},ce^{i\theta},ce^{-i\theta},de^{i\theta},de^{-i\theta})_\infty}. \end{align} Then the measures for the $q$-Hermite $H_n(x|q)$, the big $q$-Hermite $H_n(x;a|q)$, the Al-Salam-Chihara $Q_n(x;a,b|q)$, and the dual $q$-Hahn $p_n(x;a,b,c|q)$ are, respectively, $w(\cos\theta;0,0,0,0|q)$, $w(\cos\theta;a,0,0,0|q)$, $w(\cos\theta;a,b,0,0|q)$, and $w(\cos\theta;a,b,c,0|q)$.
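In passing, the base case $\LL(x^n H_m(x|q)) = \frac{(q)_m}{2^n}\overline P(n,m)$ feeding these computations is easy to check numerically. The Python sketch below (my own, not from the paper) computes the left side directly from the three-term recurrence $2xH_n(x|q)=H_{n+1}(x|q)+(1-q^n)H_{n-1}(x|q)$ (standard for the continuous $q$-Hermite polynomials, not restated in the text) and compares it with the ballot-number sum defining $\overline P(n,m)$, in exact rational arithmetic at $q=1/3$:

```python
from fractions import Fraction as F
from math import comb

q = F(1, 3)  # a generic rational test value for q

def qfact(n):
    """(q; q)_n"""
    out = F(1)
    for k in range(1, n + 1):
        out *= 1 - q**k
    return out

def qbinom(n, k):
    return qfact(n) / (qfact(k) * qfact(n - k)) if 0 <= k <= n else F(0)

def pbar(n, m):
    """The ballot-number sum defining P-bar(n, m) in the text."""
    s = F(0)
    for k in range(m, n + 1):
        if (n - k) % 2 or (k - m) % 2:
            continue
        j, i = (n - k) // 2, (k - m) // 2
        ballot = comb(n, j) - (comb(n, j - 1) if j >= 1 else 0)
        s += ballot * (-1)**i * q**(i * (i + 1) // 2) * qbinom((k + m) // 2, i)
    return s

def mixed_moment(n, m):
    """L(x^n H_m(x|q)) computed from x H_k = (H_{k+1} + (1-q^k) H_{k-1})/2
    together with L(H_j) = delta_{j0} (normalization L(1) = 1)."""
    coeff = {m: F(1)}            # expansion of x^r H_m in the H-basis
    for _ in range(n):
        nxt = {}
        for k, c in coeff.items():
            nxt[k + 1] = nxt.get(k + 1, F(0)) + c / 2
            if k >= 1:
                nxt[k - 1] = nxt.get(k - 1, F(0)) + c * (1 - q**k) / 2
        coeff = nxt
    return coeff.get(0, F(0))

# the recurrence computation agrees with the closed formula for all small n, m
assert all(mixed_moment(n, m) == qfact(m) * pbar(n, m) / 2**n
           for n in range(9) for m in range(9))
```

Agreement for all $0\le n,m\le 8$ at a generic rational $q$ gives a strong consistency check on the displayed formula.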
Notice that each successive measure comes from the previous measure by inserting infinite products. \begin{example} Let $R_k(x) = H_k(x|q)$ and $S_k(x)=H_k(x;a|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{1}{(ae^{i\theta},ae^{-i\theta})_\infty}. \] In this case, we have $\LL_R(R_k(x)^2) =(q)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}a^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q)_k} a^k= w_R(x)\frac{1}{(ae^{i\theta} , ae^{-i\theta})_\infty}, \] where we have used the $q$-Hermite generating function \cite[(14.26.11), p.542]{KLS}. \end{example} \begin{example} Let $R_k(x) = H_k(x;a|q)$ and $S_k(x)=Q_k(x;a,b|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{(ab)_\infty}{(be^{i\theta},be^{-i\theta})_\infty}. \] In this case, we have $\LL_R(R_k(x)^2) =(q)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}b^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q)_k} b^k= w_R(x)\frac{(ab)_\infty}{(be^{i\theta} , be^{-i\theta})_\infty}, \] where we have used the big $q$-Hermite generating function \cite[(14.18.13), p.512]{KLS}. \end{example} \begin{example} Let $R_k(x) = Q_k(x;a,b|q)$ and $S_k(x)=p_k(x;a,b,c|q)$ so that \[ w_S(\cos \theta)=w_R(\cos \theta) \frac{(ac,bc)_\infty}{(ce^{i\theta},ce^{-i\theta})_\infty}. \] In this case, we have $\LL_R(R_k(x)^2) =(q,ab)_k$ and \[ R_k(x) = \sum_{i=0}^k c_{k,i} S_i(x), \] where $c_{k,i} = \qbinom{k}{i}\frac{(ab)_k}{(ab)_i}c^{k-i}$. By the heuristic in Remark~\ref{remark:heur}, \[ w_S(x) = w_R(x) \sum_{k=0}^\infty \frac{R_k(x)}{(q,ab)_k} (ab)_k c^k= w_R(x)\frac{(ac,bc)_\infty}{(ce^{i\theta} , ce^{-i\theta})_\infty}, \] where we have used the Al-Salam-Chihara generating function \cite[(14.8.13), p.458]{KLS}.
\end{example} Notice that in the above example we used the known generating function for the Al-Salam-Chihara polynomials $Q_n(x;a,b|q)$. If we apply the same steps to $R_k(x)=p_k(x;a,b,c,0|q)$ and $S_k(x)=p_k(x;a,b,c,d|q)$, a new generating function appears. \begin{thm} \label{thm:dual_q_Hahn} We have \[ (abct)_\infty \sum_{k=0}^\infty \frac{p_k(x;a,b,c,0|q)}{(q,abct)_k} t^k = \frac{(at,bt,ct)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}. \] \end{thm} \begin{proof} We must show \begin{equation} \label{eq:1} (abct)_\infty \sum_{n=0}^\infty \frac{t^n}{(q,abct)_n} p_n(x;a,b,c,0|q)= \frac{(bt,ct)_\infty}{(te^{i\theta},te^{-i\theta})_\infty}(at)_\infty. \end{equation} Using the Al-Salam-Chihara generating function and the $q$-binomial theorem \cite[(II.3), p. 354]{GR}, \eqref{eq:1} is equivalent to \begin{equation} \label{eq:3} \sum_{n=0}^N \frac{p_n(x;b,c,0,0|q)}{(q)_n} \frac{(-a)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}= \sum_{n=0}^N \frac{p_n(x;a,b,c,0|q)}{(q)_n} \frac{(-abcq^n)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}. \end{equation} Now use the connection coefficients \[ p_n(x;b,c,0,0|q)= (bc)_n \sum_{k=0}^n \qbinom nk p_k(x;a,b,c,0|q)\frac{a^{n-k}}{(bc)_{k}}, \] to show that \eqref{eq:3} follows from \[ \sum_{n=k}^N \frac{(bc)_n}{(q)_n} \qbinom nk \frac{a^{N-k}}{(bc)_k} \frac{(-1)^{N-n}q^{\binom{N-n}{2}}}{(q)_{N-n}}= \frac{1}{(q)_k}\frac{(-abcq^k)^{N-k}}{(q)_{N-k}} q^{\binom{N-k}{2}}. \] This summation is a special case of the $q$-Vandermonde theorem \cite[(II.6), p. 354]{GR}. \end{proof} A generalization of Theorem~\ref{thm:dual_q_Hahn} to Askey-Wilson polynomials is given in \cite{IS2013}. A natural generalization of the mixed moments in \eqref{eq:AW} is \[ \LL_{a,b,c,d}(x^n p_m(x;a,b,c,d|q) p_\ell(x;a,b,c,d|q)). \] For general orthogonal polynomials Viennot has given a combinatorial interpretation for $\LL(x^np_mp_\ell)$ in terms of weighted Motzkin paths. 
An explicit formula when $p_n= p_n(x;a,b,c,d|q)$ may be given using \eqref{eq:cc} and a $q$-Taylor expansion \cite{IS2003}, but we do not state the result here. \section{Generating functions} \label{sec:mult-yt_infty-or} In \S~\ref{sec:comp-line-funct} we noted the following generating functions for our bootstrapping polynomials: continuous $q$-Hermite $H_n(x|q)$, continuous big $q$-Hermite $H_n(x;a|q)$, and Al-Salam-Chihara $Q_n(x;a,b|q)$ \begin{equation} \label{eq:gf_Hermite} \sum_{n=0}^\infty \frac{H_n(x|q)}{(q)_n} t^n = \frac{1}{(te^{i\theta},te^{-i\theta})_\infty}, \end{equation} \begin{equation} \label{eq:gf_big_Hermite} \sum_{n=0}^\infty \frac{H_n(x;a|q)}{(q)_n} t^n = \frac{(at)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}, \end{equation} \begin{equation} \label{eq:gf_ASC} \sum_{n=0}^\infty \frac{Q_n(x;a,b|q)}{(q)_n}t^n = \frac{(at,bt)_\infty}{(te^{i\theta} , te^{-i\theta})_\infty}. \end{equation} Note that \eqref{eq:gf_big_Hermite} is obtained from \eqref{eq:gf_Hermite} by multiplying by $(at)_\infty$ and \eqref{eq:gf_ASC} is obtained from \eqref{eq:gf_big_Hermite} by multiplying by $(bt)_\infty$. However, if we multiply \eqref{eq:gf_ASC} by $(ct)_\infty,$ we no longer have a generating function for orthogonal polynomials. It is the generating function for polynomials which satisfy a recurrence relation of finite order, but longer than order three, which orthogonal polynomials have. The purpose of this section is to explain this phenomenon. 
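To make this phenomenon concrete, the following sketch (mine, not from the paper) verifies in coefficient form that multiplying \eqref{eq:gf_Hermite} by $(at)_\infty$ produces exactly the continuous big $q$-Hermite polynomials, i.e. that $H_n(x;a|q)=\sum_k\qbinom nk(-a)^kq^{\binom k2}H_{n-k}(x|q)$. Both families are generated from their standard three-term recurrences (an assumption not restated in the text), with polynomials in $x$ stored as lists of exact rational coefficients:

```python
from fractions import Fraction as F

q, a = F(1, 3), F(2, 5)  # generic rational test values for q and a

def padd(p, r):
    """Add two coefficient lists (index = power of x)."""
    n = max(len(p), len(r))
    p = p + [F(0)] * (n - len(p))
    r = r + [F(0)] * (n - len(r))
    return [pi + ri for pi, ri in zip(p, r)]

def pscale(c, p):
    return [c * pi for pi in p]

def pmulx(p):
    """Multiply a polynomial by x."""
    return [F(0)] + p

def qfact(n):
    """(q; q)_n"""
    out = F(1)
    for k in range(1, n + 1):
        out *= 1 - q**k
    return out

def qbinom(n, k):
    return qfact(n) / (qfact(k) * qfact(n - k))

N = 8
# continuous q-Hermite: H_{n+1} = 2x H_n - (1 - q^n) H_{n-1}
H = [[F(1)], [F(0), F(2)]]
for n in range(1, N):
    H.append(padd(pscale(F(2), pmulx(H[n])), pscale(-(1 - q**n), H[n - 1])))

# continuous big q-Hermite: H_{n+1}(x;a) = (2x - a q^n) H_n(x;a) - (1 - q^n) H_{n-1}(x;a)
Hb = [[F(1)], [-a, F(2)]]
for n in range(1, N):
    Hb.append(padd(pscale(F(2), pmulx(Hb[n])),
                   padd(pscale(-a * q**n, Hb[n]),
                        pscale(-(1 - q**n), Hb[n - 1]))))

# multiplying the generating function by (at)_infinity means, coefficientwise,
#   P_n = sum_k qbinom(n, k) (-a)^k q^(k(k-1)/2) H_{n-k},  and  P_n = H_n(x;a|q)
for n in range(N + 1):
    P = [F(0)] * (n + 1)
    for k in range(n + 1):
        P = padd(P, pscale(qbinom(n, k) * (-a)**k * q**(k * (k - 1) // 2),
                           H[n - k]))
    assert P == Hb[n]
```

The same coefficientwise product with one further factor $(ct)_\infty$ applied to \eqref{eq:gf_ASC} produces polynomials that, as discussed above, are no longer orthogonal and satisfy a longer recurrence.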
We consider polynomials whose generating functions are obtained by multiplying the generating function of orthogonal polynomials by $(yt)_\infty$ or $1/(-yt)_\infty.$ We say that polynomials $p_n(x)$ \emph{satisfy a $d$-term recurrence relation} if there exist a real number $A$ and sequences $\{b_{n}^{(0)}\}_{n\ge0}, \{b_{n}^{(1)}\}_{n\ge1},\dots, \{b_{n}^{(d-2)}\}_{n\ge d-2}$ such that, for $n\ge0$, \[ p_{n+1}(x) = (Ax - b_n^{(0)})p_n(x) - b_n^{(1)}p_{n-1}(x)-\dots-b_n^{(d-2)}p_{n-d+2}(x), \] where $p_{i}(x)=0$ for $i<0$. \begin{thm} \label{thm:main} Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$. If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, which are independent of $y$, then the polynomials $P^{(1)}_n(x,y)$ in $x$ defined by \[ \sum_{n=0}^\infty P^{(1)}_{n} (x,y) \frac{t^n}{(q)_n} =(yt)_\infty\sum_{n=0}^\infty p_{n} (x) \frac{t^n}{(q)_n} \] satisfy a $d$-term recurrence relation for $d=\max(r+2,s+3)$. \end{thm} We use two lemmas to prove Theorem~\ref{thm:main}. In the following lemmas we use the same notations as in Theorem~\ref{thm:main}. \begin{lem} \label{lem:yq} We have \[ P^{(1)}_n(x,y) = P^{(1)}_n(x,yq) - y(1-q^n) P^{(1)}_{n-1}(x,yq). \] \end{lem} \begin{proof} This is obtained by equating the coefficients of $t^n$ in \[ \sum_{n=0}^\infty P^{(1)}_n(x,y) \frac{t^n}{(q)_n} = (1-yt) \sum_{n=0}^\infty P^{(1)}_n(x,yq) \frac{t^n}{(q)_n}. \] \end{proof} \begin{lem} \label{lem:rec1} Suppose that $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, i.e., \[ b_k = \sum_{j=0}^r c_j (q^k)^j,\qquad \frac{\lambda_{k}}{1-q^{k}} = \sum_{j=0}^s d_j (q^k)^j. \] Then \[ P_{n+1}^{(1)}(x,y) = (Ax-y) P_{n}^{(1)}(x,yq) -\sum_{j=0}^r c_j q^{nj} P^{(1)}_n (x,yq^{1-j}) -(1-q^n)\sum_{j=0}^s d_j q^{nj} P^{(1)}_{n-1} (x,yq^{1-j}).
\] \end{lem} \begin{proof} Expanding $(yt)_\infty$ using the $q$-binomial theorem, we have \[ P^{(1)}_n(x,y) = \sum_{k=0}^n \qbinom nk (-1)^k y^k q^{\binom k2} p_{n-k}(x). \] Using the relation $\qbinom{n+1}k = \qbinom{n}{k-1}+q^k\qbinom nk$, we have \begin{align*} P^{(1)}_{n+1}(x,y) &= \sum_{k=0}^{n+1} \left(\qbinom{n}{k-1}+q^k\qbinom nk\right) (-1)^k y^k q^{\binom k2} p_{n+1-k}(x)\\ &= -y P^{(1)}_n(x,yq) +\sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n+1-k}(x). \end{align*} By $\qbinom nk = \frac{1-q^n}{1-q^{n-k}}\qbinom {n-1}{k}$ and the 3-term recurrence \[ p_{n+1-k}(x)=(Ax-b_{n-k})p_{n-k}(x)-\lambda_{n-k}p_{n-1-k}(x), \] we get \begin{multline}\label{eq:pn+1} P^{(1)}_{n+1}(x,y) = (Ax-y) P^{(1)}_n(x,yq) -\sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n-k}(x) b_{n-k}\\ -(1-q^n) \sum_{k=0}^{n-1} \qbinom {n-1}k (-1)^k (yq)^k q^{\binom k2} p_{n-1-k}(x) \frac{\lambda_{n-k}}{1-q^{n-k}}. \end{multline} Since \[ b_{n-k} = \sum_{j=0}^r c_j q^{nj} (q^k)^{-j},\qquad \frac{\lambda_{n-k}}{1-q^{n-k}} = \sum_{j=0}^s q^{nj} d_j (q^k)^{-j}, \] and \[ P^{(1)}_n(x,yq^{1-j}) = \sum_{k=0}^n \qbinom nk (-1)^k (yq)^k q^{\binom k2} p_{n-k}(x) (q^{k})^{-j}, \] we obtain the desired recurrence relation. \end{proof} Now we can prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] By Lemma~\ref{lem:rec1}, we can write \[ P_{n+1}^{(1)}(x,y) = (Ax-y) P_{n}^{(1)}(x,yq) -\sum_{j=0}^r c_j q^{nj} P^{(1)}_n (x,yq^{1-j}) -(1-q^n)\sum_{j=0}^s d_j q^{nj} P^{(1)}_{n-1} (x,yq^{1-j}). \] Using Lemma~\ref{lem:yq} we can express $P^{(1)}_k (x,yq^{1-j})$ as a linear combination of \[ P^{(1)}_k (x,yq), P^{(1)}_{k-1} (x,yq), \dots, P^{(1)}_{k-j} (x,yq). \] Replacing $y$ by $y/q$, we obtain a $\max(r+2,s+3)$-term recurrence relation for $P^{(1)}_n(x,y)$. \end{proof} \begin{remark} One may verify that the order of recurrence for $P_{n}^{(1)}(x,y)$ is exactly $\max(2+r,3+s)$ in the following way. 
Lemma~\ref{lem:yq} is applied $s$ times to the term $P^{(1)}_{n-1} (x,yq^{1-s})$ to obtain a linear combination of $P^{(1)}_{n-1}(x,yq), P^{(1)}_{n-2}(x,yq), \dots, P^{(1)}_{n-s-1}(x,yq).$ The coefficient of $P^{(1)}_{n-s-1} (x,yq)$ in this expansion is $(-1)^s (q^{n-1};q^{-1})_s y^s q^{\binom{s}{2}}.$ Similarly, considering $P^{(1)}_{n} (x,yq^{1-r})$, the coefficient of $P^{(1)}_{n-r} (x,yq)$ in the expansion is $(-1)^r (q^{n};q^{-1})_r y^r q^{\binom{r}{2}}.$ These terms are non-zero, give a recurrence of order $\max(r+2,s+3)$, and could only cancel if $r=s+1.$ In this case, the coefficient of $P^{(1)}_{n-s-1} (x,yq)$ is \[ (q^n;q^{-1})_{s+1} (-1)^{s+1}y^s q^{\binom{s}{2}}q^{ns}\left( d_s-yc_{r}q^{r+s}\right). \] Since $d_s$ and $c_r$ are non-zero and independent of $y$, this is non-zero. \end{remark} \begin{remark} Theorem~\ref{thm:main} can be generalized for polynomials $p_n(x)$ satisfying a finite term recurrence relation of order greater than $3$. For instance, if $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x) - \nu_n p_{n-2}(x)$, then using $\qbinom nk = \frac{1-q^n}{1-q^{n-k}}\qbinom {n-1}{k}$ twice one can see that Equation~\eqref{eq:pn+1} has the following extra sum in the right-hand side: \[ -(1-q^n)(1-q^{n-1}) \sum_{k=0}^{n-1} \qbinom {n-1}k (-1)^k (yq)^k q^{\binom k2} p_{n-2-k}(x) \frac{\nu_{n-k}}{(1-q^{n-k})(1-q^{n-k-1})}. \] Thus if $\frac{\nu_k}{(1-q^k)(1-q^{k-1})}$ is a polynomial in $q^k$ then $P_n^{(1)}(x,y)$ satisfy a finite term recurrence relation. \end{remark} Note that by using Lemmas~\ref{lem:yq} and \ref{lem:rec1}, one can find a recurrence relation for $P_n^{(1)}(x,y)$ in Theorem~\ref{thm:main}. An analogous theorem holds for polynomials in $q^{-k}$. We state the result without proof. \begin{thm} \label{thm:mainflip} Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$.
If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^{-k}$ of degree $r$ and $s$, respectively, which are independent of $y$, and the constant term of $\frac{\lambda_{k}}{1-q^{k}}$ is zero, then the polynomials $P^{(2)}_n(x,y)$ defined by \[ \sum_{n=0}^\infty P^{(2)}_{n} (x,y) \frac{q^{\binom n2}t^n}{(q)_n} =\frac1{(-yt)_\infty}\sum_{n=0}^\infty p_{n} (x) \frac{q^{\binom n2}t^n}{(q)_n} \] satisfy a $d$-term recurrence relation for $d= \max(r+1,s+2)$. \end{thm} We now give several applications of Theorem~\ref{thm:main} and Theorem~\ref{thm:mainflip}. In the following examples, we use the notation in these theorems. \begin{example} Let $p_n(x)$ be the continuous $q$-Hermite polynomial $H_n(x|q)$. Then $A=2, b_n =0$, and $\lambda_n=1-q^n$. Since $r=-\infty$ and $s=0$, $P^{(1)}_n(x,y)$ satisfies a 3-term recurrence relation. By Lemma~\ref{lem:rec1}, we have \[ P^{(1)}_{n+1}(x,y) = (2x-y) P^{(1)}_{n}(x,yq) - (1-q^n) P^{(1)}_{n-1}(x,yq). \] By Lemma~\ref{lem:yq} we have \[ P^{(1)}_{n+1}(x,y) = P^{(1)}_{n+1}(x,yq) - y(1-q^{n+1}) P^{(1)}_{n}(x,yq). \] Thus \[ P^{(1)}_{n+1}(x,yq) =(2x-yq^{n+1}) P^{(1)}_{n}(x,yq) - (1-q^n) P^{(1)}_{n-1}(x,yq). \] Replacing $y$ by $y/q$ we obtain \[ P^{(1)}_{n+1}(x,y) =(2x-yq^{n}) P^{(1)}_{n}(x,y) - (1-q^n) P^{(1)}_{n-1}(x,y). \] Thus $P^{(1)}_n(x,y)$ are orthogonal polynomials, which are the continuous big $q$-Hermite polynomials $H_n(x;y|q)$. \end{example} \begin{example} Let $p_n(x)$ be the continuous big $q$-Hermite polynomials $H_n(x;a|q)$. Then $A=2, b_n = aq^n$, and $\lambda_n = 1-q^n$. Since $r=1$ and $s=0$, $P^{(1)}_n(x,y)$ satisfies a 3-term recurrence relation. Using the same method as in the previous example, we obtain \[ P^{(1)}_{n+1}(x,y) =(2x-(a+y)q^{n}) P^{(1)}_{n}(x,y) - (1-q^n)(1-ayq^{n-1}) P^{(1)}_{n-1}(x,y). \] Thus $P^{(1)}_n(x,y)$ are orthogonal polynomials, which are the Al-Salam-Chihara polynomials $Q_n(x;a,y|q)$. \end{example} \begin{example} Let $p_n(x)$ be the Al-Salam-Chihara polynomials $Q_n(x;a,b|q)$.
Then $A=2, b_n= (a+b)q^n$, and $\lambda_n = (1-q^n) (1-abq^{n-1})$. Since $r=1$ and $s=1$, $P^{(1)}_n(x,y)$ satisfies a 4-term recurrence relation. By Lemma~\ref{lem:rec1}, we have \[ P^{(1)}_{n+1}(x,y) = (2x-y)P^{(1)}_n(x,yq) - (a+b)q^n P^{(1)}_n(x,y) -(1-q^n)(-abq^{n-1}P^{(1)}_{n-1}(x,y) + P^{(1)}_{n-1}(x,yq)). \] Using Lemma~\ref{lem:yq} we get \[ P^{(1)}_{n+1} = (2x-(a+b+y)q^n)P^{(1)}_n -(1-q^n)(1-(ab+ay+by)q^{n-1}) P^{(1)}_{n-1} -abyq^{n-2}(1-q^n)(1-q^{n-1}) P^{(1)}_{n-2}. \] \end{example} \begin{example} Let $p_n(x)$ be the continuous dual $q$-Hahn polynomials $p_n(x;a,b,c|q)$. Then $A=2$ and \begin{align*} b_n &= (a+b+c)q^n -abcq^{2n}-abcq^{2n-1}, \\ \lambda_n & = (1-q^n) (1-abq^{n-1}) (1-bcq^{n-1}) (1-caq^{n-1}). \end{align*} Since $r=2$ and $s=3$, $P^{(1)}_n(x,y)$ satisfies a 6-term recurrence relation. It is possible to find an explicit recurrence relation using the same idea as in the previous example. \end{example} \begin{example} \label{ex:dqh1} Let $p_n(x)$ be the discrete $q$-Hermite I polynomial $h_n(x;q)$. Then $A=1, b_n =0$, and $\lambda_n=q^{n-1}(1-q^n)$. Since $r=-\infty$ and $s=1$, $P^{(1)}_n(x,y)$ satisfies a 4-term recurrence relation which is \[ P^{(1)}_{n+1}(x,y) = (x-yq^{n}) P^{(1)}_n (x,y) -q^{n-1}(1-q^n) P^{(1)}_{n-1}(x,y) +yq^{n-2}(1-q^n)(1-q^{n-1}) P^{(1)}_{n-2}(x,y). \] In \S~\ref{sec:discrete-big-q} we will study $P^{(1)}_n(x,y)=h_n(x,y;q)$, the discrete big $q$-Hermite I polynomials. This gives a proof of Theorem~\ref{thm:4term}. \end{example} \begin{example} \label{ex:dqh2} Let $p_n(x)$ be the discrete $q$-Hermite II polynomial $\HT_n(x;q)$. Then $A=1, b_n =0$, and $\lambda_n=q^{-2n+1}(1-q^n)$. Since $b_n$ and $\lambda_n/(1-q^n)$ are polynomials in $q^{-n}$ of degrees $-\infty$ and $2$, respectively, and the constant term of $\lambda_n/(1-q^n)$ is zero, $P^{(2)}_n(x,y)$ satisfies a 4-term recurrence relation. It is \[ P^{(2)}_{n+1}(x,y) = (x-yq^{-n}) P^{(2)}_n (x,y) -q^{-2n+1}(1-q^n) P^{(2)}_{n-1}(x,y) -yq^{3-3n}(1-q^n)(1-q^{n-1}) P^{(2)}_{n-2}(x,y).
\] $P^{(2)}_n(x,y)$ are the discrete big $q$-Hermite II polynomials $\HT_n(x,y;q)$ of \S~\ref{sec:discrete-big-q}. \end{example} \begin{example} The \emph{Al-Salam--Carlitz I} polynomials $U_n^{(a)}(x;q)$ are defined by \[ \sum_{n=0}^\infty \frac{U_n^{(a)}(x;q)}{(q)_n}t^n =\frac{(t)_\infty (at)_\infty}{(xt)_\infty }. \] They have the 3-term recurrence relation \[ U_{n+1}^{(a)}(x;q) = (x-(1+a)q^{n}) U_{n}^{(a)}(x;q) +aq^{n-1}(1-q^n) U_{n-1}^{(a)}(x;q). \] Let $p_n(x)$ be the polynomials with generating function \[ \sum_{n=0}^\infty \frac{p_n(x)}{(q)_n} t^n =\frac{(t)_\infty}{(xt)_\infty} =\sum_{n=0}^\infty \frac{x^n(1/x)_n}{(q)_n} t^n. \] Then $p_n(x) = x^n (1/x)_n$. Thus $p_{n+1}(x) = (x-q^{n}) p_n(x)$, so $A=1$, $b_n = q^{n}$, and $\lambda_n =0$, and hence $U_n^{(a)}(x;q) = P^{(1)}_n(x,a)$. \end{example} \begin{example} The \emph{Al-Salam--Carlitz II} polynomials $V_n^{(a)}(x;q)$ are defined by \[ \sum_{n=0}^\infty \frac{(-1)^n q^{\binom n2}}{(q)_n} V_n^{(a)}(x;q) t^n =\frac{(xt)_\infty}{(t)_\infty (at)_\infty}. \] They have the 3-term recurrence relation \begin{equation} \label{eq:2} V_{n+1}^{(a)}(x;q) = (x-(1+a)q^{-n}) V_{n}^{(a)}(x;q) -aq^{-2n+1}(1-q^n) V_{n-1}^{(a)}(x;q). \end{equation} Let $p_n(x)$ be the polynomials with generating function \[ \sum_{n=0}^\infty \frac{q^{\binom n2}}{(q)_n}p_n(x) t^n =\frac{(xt)_\infty}{(t)_\infty} =\sum_{n=0}^\infty \frac{(x)_n}{(q)_n} t^n =\sum_{n=0}^\infty \frac{(-1)^nq^{\binom n2}x^n(1/x)_n}{(q)_n} t^n. \] Then $p_n(x) = (-1)^n x^n (1/x)_n$. Thus $p_{n+1}(x) = (-x+q^{-n}) p_n(x)$, so $A=-1$, $b_n = -q^{-n}$, and $\lambda_n =0$, and we obtain $V_n^{(a)}(x;q) = (-1)^n P^{(2)}_n(-x,-a)$ and \eqref{eq:2}.
\end{example} Garrett, Ismail, and Stanton \cite[Section 7]{GIS} considered the polynomials $\hat H_n(x|q)$ defined by the generating function \[ \sum_{n=0}^\infty \hat H_n(x|q) \frac{t^n}{(q)_n}= \frac{(t^2;q)_\infty}{(te^{i\theta},te^{-i\theta};q)_\infty}= (t^2;q)_\infty \sum_{n=0}^\infty H_n(x|q) \frac{t^n}{(q)_n}. \] It turns out that $p_n=\hat H_n(x|q)$ satisfies the 5-term recurrence relation \[ p_{n+1}= 2xp_n +(q^{2n}+q^{2n-1}-q^{n-1}-1)p_{n-1}+ q^{n-2}(1-q^n)(1-q^{n-1})(1-q^{n-2})p_{n-3}. \] The following generalization of Theorem~\ref{thm:main} explains this phenomenon for $m=2$, $r=0$, and $s=0$. We omit the proof, which is similar to that of Theorem~\ref{thm:main}. \begin{thm} \label{thm:main_gen} Let $m$ be a positive integer. Let $p_n(x)$ be polynomials satisfying $p_{n+1}(x) = (Ax-b_n)p_n(x)-\lambda_n p_{n-1}(x)$ for $n\ge0$, where $p_{-1}(x)=0$ and $p_0(x)=1$. If $b_{k}$ and $\frac{\lambda_{k}}{1-q^{k}}$ are polynomials in $q^k$ of degree $r$ and $s$, respectively, which are independent of $y$, then the polynomials $P_n(x,y)$ in $x$ defined by \[ \sum_{n=0}^\infty P_{n} (x,y) \frac{t^n}{(q)_n} =(yt^m)_\infty\sum_{n=0}^\infty p_{n} (x) \frac{t^n}{(q)_n} \] satisfy a $d$-term recurrence relation for $d= \max(rm^2+2,sm^2+3,m^2+1)$. \end{thm} \section{Discrete big $q$-Hermite polynomials} \label{sec:discrete-big-q} In this section we study a set of polynomials which satisfy a 4-term recurrence relation, called the discrete big $q$-Hermite polynomials (see Definition~\ref{defn:big}). These polynomials generalize the discrete $q$-Hermite polynomials and appear in Example~\ref{ex:dqh1}. 
Recall \cite{Is} that the \emph{continuous $q$-Hermite polynomials} $H_n(x|q)$ are defined by \[ \sum_{n=0}^\infty \frac{H_n(x|q)}{(q)_n} t^n = \frac{1}{(te^{i\theta},te^{-i\theta})_\infty}, \] and the \emph{continuous big $q$-Hermite polynomials} $H_n(x;a|q)$ are defined by \[ \sum_{n=0}^\infty \frac{H_n(x;a|q)}{(q)_n} t^n = \frac{(at)_\infty}{(te^{i\theta},te^{-i\theta})_\infty}. \] Observe that the generating function for $H_n(x;a|q)$ is the generating function for $H_n(x|q)$ multiplied by $(at)_\infty$. In this section we introduce \emph{discrete big $q$-Hermite polynomials} in an analogous way. The \emph{discrete $q$-Hermite I polynomials} $h_n(x;q)$ have generating function \[ \sum_{n=0}^\infty \frac{h_n(x;q)}{(q;q)_n} t^n = \frac{(t^2;q^2)_\infty}{(xt)_\infty}. \] \begin{defn} \label{defn:big} The \emph{discrete big $q$-Hermite I polynomials} $h_n(x,y;q)$ are given by \begin{equation} \label{eq:hn} \sum_{n=0}^\infty h_n(x,y;q) \frac{t^n}{(q;q)_n} = \frac{(t^2;q^2)_\infty (yt)_\infty}{(xt)_\infty}. \end{equation} \end{defn} Expanding the right hand side of \eqref{eq:hn} using the $q$-binomial theorem, we find the following expression for $h_n(x,y;q)$. \begin{prop} For $n\ge 0,$ \[ h_n(x,y;q) = \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k q^{2\binom k2} (-1)^k x^{n-2k} (y/x;q)_{n-2k}. \] \end{prop} The polynomials $h_n(x,y;q)$ are orthogonal polynomials in neither $x$ nor $y$. However they satisfy the following simple 4-term recurrence relation which was established in Example~\ref{ex:dqh1}. \begin{thm} \label{thm:4term} For $n\ge 0,$ \[ h_{n+1}(x,y;q) = (x-yq^{n}) h_n (x,y;q) -q^{n-1}(1-q^n)h_{n-1}(x,y;q) +yq^{n-2}(1-q^n)(1-q^{n-1})h_{n-2}(x,y;q). \] \end{thm} Note that when $y=0$, the 4-term recurrence relation reduces to the 3-term recurrence relation for the discrete $q$-Hermite I polynomials. The polynomials $h_n(x,y;q)$ are not symmetric in $x$ and $y$. 
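For example, the explicit formula above gives the first few polynomials, and one may check the recurrence of Theorem~\ref{thm:4term} on them directly:

```latex
% First instances of h_n(x,y;q), from the explicit formula.
% One verifies directly, e.g.,
%   h_3 = (x - yq^2) h_2 - q(1-q^2) h_1 + y(1-q^2)(1-q) h_0,
% in agreement with the 4-term recurrence.
\begin{align*}
h_0(x,y;q) &= 1, \qquad h_1(x,y;q) = x - y,\\
h_2(x,y;q) &= (x-y)(x-qy) - (1-q),\\
h_3(x,y;q) &= (x-y)(x-qy)(x-q^2y) - (1-q^3)(x-y).
\end{align*}
```

Setting $y=0$ in these expressions recovers the first few discrete $q$-Hermite I polynomials $h_n(x;q)$.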
If we consider $h_n(x,y;q)$ as a polynomial in $y$, then it does not satisfy a finite term recurrence relation; see Proposition~\ref{prop:rr_HH}. Since $h_n(x,y;q)$ satisfies a 4-term recurrence, it is a multiple orthogonal polynomial in $x.$ Thus there are two linear functionals $\LL^{(0)}$ and $\LL^{(1)}$ such that, for $i\in\{0,1\}$, \[ \LL^{(i)}(h_m)=\delta_{mi}, \quad m \ge 0, \] \[ \LL^{(i)}(h_m (x,y;q) h_n (x,y;q)) = 0 \quad \mbox{if $m>2n+i$, and} \quad \LL^{(i)}(h_{2n+i}(x,y;q) h_n (x,y;q)) \ne 0. \] We have explicit formulas for the moments of $\LL^{(0)}$ and $\LL^{(1)}$. \begin{thm} \label{thm:hermite_moments} The moments for the discrete big $q$-Hermite polynomials are \[ \LL^{(0)}(x^n) = \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k y^{n-2k}, \] \[ \LL^{(1)}(x^n) = (1-q^n) \sum_{k=0}^{\flr{n/2}} \qbinom{n-1}{2k} (q;q^2)_k y^{n-2k-1}. \] \end{thm} Before proving Theorem~\ref{thm:hermite_moments} we show that in general there is a way to find the linear functionals of $d$-orthogonal polynomials if we know how to expand certain orthogonal polynomials in terms of these $d$-orthogonal polynomials. This is similar to Proposition~\ref{prop:bootstrap}. \begin{thm} \label{thm:op_mop} Let $R_n(x)$ be orthogonal polynomials with linear functional $\LL_R$ such that $\LL_R(1) = 1$. Let $S_n(x)$ be $d$-orthogonal polynomials with linear functionals $\{\LL_S^{(i)}\}_{i=0}^{d-1}$ such that $\LL_S^{(i)}(S_n(x)) = \delta_{n,i}$. Suppose \begin{equation} \label{eq:conncoef} R_k(x) = \sum_{m=0}^k c_{km} S_m(x). \end{equation} Then \[ \LL_S^{(i)}(x^n) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} d_{k,i}, \] where \[ d_{k,i}= \begin{cases} c_{k,i} {\text{ if }} k\ge i,\\ 0 {\text{ \quad if }} k<i. \end{cases} \] \end{thm} \begin{proof} If we apply $\LL_S^{(i)}$ to both sides of \eqref{eq:conncoef}, we have \[ \LL_S^{(i)}(R_k(x)) = d_{k,i}.
\] Then by expanding $x^n$ in terms of $R_k(x)$ we get \begin{align*} \LL_S^{(i)}(x^n) = \LL_S^{(i)}\left( \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)} R_k(x)\right) = \sum_{k=0}^n \frac{\LL_R(x^n R_k(x))}{\LL_R(R_k(x)^2)}d_{k,i}. \end{align*} \end{proof} We will apply Theorem~\ref{thm:op_mop} with $R_n(x) = h_n(x;q)$ and $S_n(x) = h_n(x,y;q)$ to prove Theorem~\ref{thm:hermite_moments}. The first ingredient is \eqref{eq:conncoef}, which follows from the generating function \eqref{eq:hn}: \[ h_k(x;q) = \sum_{m=0}^k \qbinom{k}{m} y^{k-m} h_m(x,y;q). \] The second ingredient is the value of $\LL_h(x^nh_k).$ \begin{prop}\label{prop:xnh} Let $\LL_h$ be the linear functional for $h_n(x;q)$ with $\LL_h(1)=1$. Then \[ \LL_h(x^n h_{m}(x;q)) = \begin{cases} 0 {\text{ if }} m>n {\text{ or }}n\not\equiv m\mod 2,\\ \frac{q^{\binom{m}{2}}(q)_n}{(q^2;q^2)_{\frac{n-m}{2}}} {\text{ if }} n\ge m, n\equiv m\mod 2. \end{cases} \] \end{prop} \begin{proof} Clearly we may assume that $n\ge m$ and $n\equiv m\mod 2.$ Using the explicit formula \[ h_m(x;q) =x^m \hyper20{q^{-m},q^{-m+1}}{-}{q^2, \frac{q^{2m-1}}{x^2}}, \] and the fact \[ \LL_h(x^k)= \begin{cases} 0 {\text{\qquad\qquad if $k$ is odd,}}\\ (q;q^2)_{k/2} {\text{ if $k$ is even,}} \end{cases} \] we obtain \[ \LL_h(x^n h_{m}(x;q)) = (q;q^2)_{\frac{n+m}2} \hyper21{q^{-m},q^{-m+1}}{q^{-n-m+1}}{q^2,q^{m-n}}, \] which is evaluable by the $q$-Vandermonde theorem \cite[(II.5), p.~354]{GR}. \end{proof} The discrete $q$-Hermite polynomials have the following orthogonality: \begin{equation} \label{eq:orthogonality} \LL_h(h_m(x;q) h_n(x;q)) = q^{\binom n2} (q)_n \delta_{mn}. \end{equation} Using Theorem~\ref{thm:op_mop}, Proposition~\ref{prop:xnh}, and \eqref{eq:orthogonality} we have proven Theorem~\ref{thm:hermite_moments}. We do not know representing measures for the moments in Theorem~\ref{thm:hermite_moments}.
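As a consistency check, the first moments of $\LL^{(0)}$ can also be computed directly from the conditions $\LL^{(0)}(h_0)=1$ and $\LL^{(0)}(h_1)=\LL^{(0)}(h_2)=0$, using $h_1(x,y;q)=x-y$ and $h_2(x,y;q)=x^2-(1+q)xy+qy^2-(1-q)$:

```latex
% Moments of L^{(0)} for n = 0, 1, 2, obtained by solving the
% linear conditions L^{(0)}(h_1) = L^{(0)}(h_2) = 0:
\[
\LL^{(0)}(1) = 1, \qquad
\LL^{(0)}(x) = y, \qquad
\LL^{(0)}(x^2) = (1+q)y^2 - qy^2 + (1-q) = y^2 + (1-q),
\]
```

in agreement with Theorem~\ref{thm:hermite_moments} for $n=0,1,2$.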
One may also find a recurrence relation for $h_n(x,y;q)$ as a polynomial in $y$; the proof is routine. \begin{prop} \label{prop:rr_HH} For $n\ge 0$, we have \[ yq^nh_n(x,y;q)=-h_{n+1}(x,y;q)+ \sum_{k=0}^n (q^n;q^{-1})_k (-1)^k h_{n-k}(x,y;q) \times\begin{cases} x {\text{ if $k$ is even}}\\ 1 {\text{ if $k$ is odd.}} \end{cases} \] \end{prop} We can also consider discrete $q$-Hermite II polynomials. The \emph{discrete $q$-Hermite II polynomials} $\HT_n(x;q)$ have the generating function \[ \sum_{n=0}^\infty \frac{q^{\binom n2} \HT_n(x;q)}{(q)_n} t^n = \frac{(-xt)_\infty }{(-t^2;q^2)_\infty}. \] We define the \emph{discrete big $q$-Hermite II polynomials} $\HT_n(x,y;q)$ by \[ \sum_{n=0}^\infty \HT_n(x,y;q) \frac{q^{\binom n2} t^n}{(q;q)_n} = \frac{1}{(-t^2;q^2)_\infty} \frac{(-xt;q)_\infty}{(-yt;q)_\infty}. \] Then $\HT_n(x,0;q)$ is the discrete $q$-Hermite II polynomial. The following proposition is straightforward to check. \begin{prop} For $n\ge 0$, we have \[ \HT_n(x,y;q) = i^{-n} h_n(ix, iy;q^{-1}). \] \end{prop} \section{Combinatorics of the discrete big $q$-Hermite polynomials} \label{sec:comb-discr-big} In this section we give some combinatorial information about the discrete big $q$-Hermite polynomials. This includes a combinatorial interpretation of the polynomials (Theorem~\ref{thm:comb}), and a combinatorial proof of the 4-term recurrence relation. Viennot's interpretation of the moments as weighted generalized Motzkin paths is also considered. For the purpose of studying $h_n(x,y;q)$ combinatorially we will consider the following rescaled discrete big $q$-Hermite polynomials $h^*_n(x,y;q)$: \[ h^*_n(x,y;q) = (1-q)^{-n/2} h_n(x\sqrt{1-q},y\sqrt{1-q};q). \] By \eqref{eq:hn} we have \begin{equation} \label{eq:hn*} h^*_n(x,y;q) = \sum_{k=0}^{\flr{n/2}} (-1)^k q^{2\binom k2} [2k-1]_q!! \qbinom{n}{2k} x^{n-2k} (y/x;q)_{n-2k}.
\end{equation} Because $h^*_n(x,y;1) = H_n(x-y),$ which is a generating function for bicolored matchings of $[n]:=\{1,2,\dots,n\},$ we need to consider $q$-statistics on matchings. A \emph{matching} of $[n]=\{1,2,\dots,n\}$ is a set partition of $[n]$ in which every block is of size 1 or 2. A block of a matching is called a \emph{fixed point} if its size is $1$, and an \emph{edge} if its size is 2. When we write an edge $\{u,v\}$ we will always assume that $u<v$. A \emph{fixed point bi-colored matching} or \emph{FB-matching} is a matching for which every fixed point is colored with $x$ or $y$. Let $\fbm(n)$ be the set of FB-matchings of $[n]$. Let $\pi\in \fbm(n)$. A \emph{crossing} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<c<b<d$. A \emph{nesting} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<c<d<b$. An \emph{alignment} of $\pi$ is a pair of two edges $\{a,b\}$ and $\{c,d\}$ such that $a<b<c<d$. The \emph{block-word} $\bw(\pi)$ of $\pi$ is the word $w_1w_2\dots w_n$ such that $w_i = 1$ if $i$ is a fixed point and $w_i=0$ otherwise. An \emph{inversion} of a word $w_1w_2\dots w_n$ is a pair of integers $i<j$ such that $w_i>w_j$. The number of inversions of $w$ is denoted by $\inv(w)$. Suppose that $\pi$ has $k$ edges and $n-2k$ fixed points. The \emph{weight} $\wt(\pi)$ of $\pi$ is defined by \begin{equation} \label{eq:wt} \wt(\pi) = (-1)^k q^{2\binom k2+2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))} z_1z_2\dots z_{n-2k}, \end{equation} where $z_i=x$ if the $i$th fixed point is colored with $x$, and $z_i=-yq^{i-1}$ if the $i$th fixed point is colored with $y$. A \emph{complete matching} is a matching without fixed points. Let $\CM(2n)$ denote the set of complete matchings of $[2n]$. \begin{prop} We have \[ \sum_{\pi\in\CM(2n)} q^{2\ali(\pi)+\cro(\pi)} = [2n-1]_q!!. \] \end{prop} \begin{proof} It is known that \[ \sum_{\pi\in\CM(2n)} q^{\cro(\pi)+2\nes(\pi)} = \sum_{\pi\in\CM(2n)} q^{2\cro(\pi)+\nes(\pi)} = [2n-1]_q!!. 
\] Since a pair of two edges is either an alignment, a crossing, or a nesting we have $\ali(\pi)+\nes(\pi)+\cro(\pi)=\binom n2$. Thus \[ \sum_{\pi\in\CM(2n)} q^{2\ali(\pi)+\cro(\pi)} = q^{2\binom n2}\sum_{\pi\in\CM(2n)} q^{-2\nes(\pi)-\cro(\pi)} = q^{2\binom n2} [2n-1]_{q^{-1}}!! = [2n-1]_q!!. \] \end{proof} \begin{thm} \label{thm:comb} We have \[ h^*_n(x,y;q) = \sum_{\pi\in\fbm(n)} \wt(\pi). \] \end{thm} \begin{proof} Let $M(n)$ be the set of 4-tuples $(k,w,\sigma,Z)$ such that $0\le k\le \flr{n/2}$, $w$ is a word of length $n$ consisting of $2k$ 0's and $n-2k$ 1's, $\sigma\in \CM(2k)$, and $Z=(z_1,z_2,\dots,z_{n-2k})$ is a sequence such that $z_i$ is either $x$ or $-yq^{i-1}$ for each $i$. For $\pi\in\fbm(n)$ we define $g(\pi)$ to be the 4-tuple $(k,w,\sigma,Z)\in M(n)$, where $k$ is the number of edges of $\pi$, $w=\bw(\pi)$, $\sigma$ is the induced complete matching of $\pi$, and $Z=(z_1,z_2,\dots, z_{n-2k})$ is the sequence such that $z_i=x$ if the $i$th fixed point is colored with $x$, and $z_i=-yq^{i-1}$ if the $i$th fixed point is colored with $y$. Here, the \emph{induced complete matching} of $\pi$ is the complete matching of $[2k]$ for which $i$ and $j$ form an edge if and only if the $i$th non-fixed point and the $j$th non-fixed point of $\pi$ form an edge. It is easy to see that $g$ is a bijection from $\fbm(n)$ to $M(n)$ such that if $g(\pi)=(k,w,\sigma,Z)$ with $Z=(z_1,z_2,\cdots,z_{n-2k})$ then \[ \wt(\pi) = (-1)^k q^{2\binom k2} q^{2\ali(\sigma)+\cro(\sigma)} q^{\inv(w)} z_1z_2\cdots z_{n-2k}. \] Thus \begin{align*} \sum_{\pi\in\fbm(n)} \wt(\pi) &= \sum_{(k,w,\sigma,Z)\in M(n)} (-1)^k q^{2\binom k2} q^{2\ali(\sigma)+\cro(\sigma)} q^{\inv(w)}z_1z_2\cdots z_{n-2k}. \end{align*} Here once $k$ is fixed $\sigma$ can be any complete matching of $[2k]$, $w$ can be any word consisting of $2k$ 0's and $n-2k$ 1's, and for $Z=(z_1,z_2,\cdots,z_{n-2k})$ each $z_i$ can be either $x$ or $-yq^{i-1}$.
Thus the sum of $q^{2\ali(\sigma)+\cro(\sigma)}$ for all such $\sigma$'s gives $[2k-1]_q!!$, the sum of $q^{\inv(w)}$ for all such $w$ gives $\qbinom n{2k}$, and the sum of $z_1z_2\cdots z_{n-2k}$ for all such $Z$ gives $x^{n-2k}(y/x;q)_{n-2k}$. This finishes the proof. \end{proof} \begin{prop} \label{prop:4-term*} For $n\ge0$, we have \[ h^*_{n+1} = (x-yq^n) h^*_n - q^{n-1}[n]_q h^*_{n-1} +y q^{n-2}[n-1]_q(1-q^n) h^*_{n-2}. \] \end{prop} \begin{proof}[Proof of Proposition~\ref{prop:4-term*}] Let $W_-(n)$ be the sum of $\wt(\pi)$ for all $\pi\in\fbm(n)$ such that $n$ is not a fixed point. Let $W_x(n)$ (respectively $W_y(n)$) be the sum of $\wt(\pi)$ for all $\pi\in\fbm(n)$ such that $n$ is a fixed point colored with $x$ (respectively $y$). Then \[ h^*_{n+1}(x,y;q) = \sum_{\pi\in\fbm(n+1)} \wt(\pi) = W_-(n+1) + W_x(n+1) + W_y(n+1). \] We claim that \begin{align} \label{eq:c1} W_x(n+1) &= x h^*_{n}(x,y;q), \\ \label{eq:c2} W_y(n+1) &= -yq^n (W_x(n)+W_y(n)) - yW_-(n),\\ \label{eq:c3} W_-(n+1) &= -q^{n-1}[n]_q h^*_{n-1}(x,y;q). \end{align} From \eqref{eq:wt} we easily get \eqref{eq:c1}. For \eqref{eq:c3}, consider a matching $\pi\in\fbm(n+1)$ such that $n+1$ is connected with $i$ where $1\le i\le n$. Suppose that $\pi$ has $k$ edges and $n+1-2k$ fixed points. Let us compute the contribution of an edge or a fixed point together with the edge $\{i,n+1\}$ to $2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))$. An edge with two integers less than $i$ contributes $2$ to $2\ali(\pi)$. An edge with exactly one integer less than $i$ contributes $1$ to $\cro(\pi)$. An edge with two integers greater than $i$ contributes nothing. Each fixed point of $\pi$ less than $i$ contributes $2$ to $\inv(\bw(\pi))$ together with the edge $\{i,n+1\}$. Each fixed point of $\pi$ greater than $i$ contributes $1$ to $\inv(\bw(\pi))$ together with the edge $\{i,n+1\}$. Thus the contribution of the edge $\{i,n+1\}$ to $2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi))$ is equal to $i-1 + (n+1-2k)$.
Let $\sigma$ be the matching obtained from $\pi$ by removing the edge $\{i,n+1\}$. Then \[ 2\ali(\pi)+\cro(\pi) + \inv(\bw(\pi)) = 2\ali(\sigma)+\cro(\sigma) + \inv(\bw(\sigma)) +i-1 + (n+1-2k). \] Thus, using \eqref{eq:wt}, the above identity and $2\binom k2 = 2\binom{k-1}2+2k-2$, we have $\wt(\pi) = -q^{n-1} q^{i-1} \wt(\sigma)$. Since $i$ can be any integer from $1$ to $n$ and $\sigma\in\fbm(n-1)$ we get \eqref{eq:c3}. Now we prove \eqref{eq:c2}. Consider a matching $\pi\in\fbm(n+1)$ such that $n+1$ is a fixed point colored with $y$. Suppose that $\pi$ has $k$ edges with $2k$ non-fixed points $b_1<b_2<\dots<b_{2k}$. For $0\le i\le 2k$, let $a_i = b_{i+1}-b_{i}-1$, where $b_0=0$ and $b_{2k+1}=n+1$. Then $a_0+a_1+\cdots+a_{2k}=n-2k$. Let $\sigma$ be the matching obtained from $\pi$ by removing $n+1$. Then we have $\wt(\pi) = -yq^{n-2k}\wt(\sigma)$. We consider two cases. Case 1: $a_0\ne 0$. Let $\tau$ be the matching obtained from $\sigma$ by changing $1$ into $n$ and decreasing the other integers by $1$. We color the $i$th fixed point of $\tau$ with the same color as the $i$th fixed point of $\sigma$. Then $\wt(\sigma) = q^{2k} \wt(\tau)$ and $\wt(\pi)=-yq^n\wt(\tau)$. Since $n$ is a fixed point in $\tau$ the sum of $\wt(\pi)$ in this case gives $-yq^n (W_x(n)+W_y(n))$. Case 2: $a_0=0$. Note that \[ \bw(\sigma)=0 \overbrace{1\cdots 1}^{a_1}0 \overbrace{1\cdots 1}^{a_2}0 \cdots 0\overbrace{1\dots 1}^{a_{2k}}. \] We define $\tau$ to be the matching with \[ \bw(\tau)=\overbrace{1\cdots 1}^{a_1}0 \overbrace{1\cdots 1}^{a_2}0 \overbrace{1\cdots 1}^{a_3}0 \cdots \overbrace{1\dots 1}^{a_{2k}}0 \] and the $i$th fixed point of $\tau$ is colored with the same color as the $i$th fixed point of $\sigma$. Then $\wt(\sigma) = q^{-n+2k}\wt(\tau)$ and $\wt(\pi) = -y\wt(\tau)$. Since $n$ is a non-fixed point in $\tau$, the sum of $\wt(\pi)$ in this case gives $- yW_-(n)$.
It is easy to see that \eqref{eq:c1}, \eqref{eq:c2}, and \eqref{eq:c3} imply the 4-term recurrence relation. \end{proof} Since the polynomials $h_n(x,y;q)$ satisfy a 4-term recurrence relation, they are 2-fold multiple orthogonal polynomials in $x$. By Viennot's theory, we can express the two moments $\LL^{(0)}(x^n)$ and $\LL^{(1)}(x^n)$ as a sum of weights of certain lattice paths. A \emph{2-Motzkin path} is a lattice path consisting of an up step $(1,1)$, a horizontal step $(1,0)$, a down step $(1,-1)$, and a double down step $(1,-2)$, which starts at the origin and never goes below the $x$-axis. For $i=0,1$ let $\Mot_i(n)$ denote the set of 2-Motzkin paths of length $n$ with final height $i$. The \emph{weight} of $M\in\Mot_i(n)$ is the product of weights of all steps, where the weight of each step is defined as follows. \begin{itemize} \item An up step has weight $1$. \item A horizontal step starting at level $i$ has weight $yq^i$. \item A down step starting at level $i$ has weight $q^{i-1}(1-q^i)$. \item A double down step starting at level $i$ has weight $-yq^{i-2}(1-q^i) (1-q^{i-1})$. \end{itemize} Then by Viennot's theory we have \[ \LL^{(i)}(x^n) = \sum_{M\in\Mot_i(n)} \wt(M). \] Thus we obtain the following corollary from Theorem~\ref{thm:hermite_moments}. \begin{cor} For $n\ge 0$, we have \begin{align*} \sum_{M\in\Mot_0(n)} \wt(M) &= \sum_{k=0}^{\flr{n/2}} \qbinom{n}{2k} (q;q^2)_k y^{n-2k},\\ \sum_{M\in\Mot_1(n)} \wt(M) &= (1-q^n) \sum_{k=0}^{\flr{n/2}} \qbinom{n-1}{2k} (q;q^2)_k y^{n-2k-1}. \end{align*} \end{cor} It would be interesting to prove the above corollary combinatorially. \section{An addition theorem} \label{sec:addition_theorem} A Hermite polynomial addition theorem is \begin{equation} \label{q=1} H_n(x+y)=\sum_{k=0}^n \binom{n}{k} H_k(x/a)a^kH_{n-k}(y/b)b^{n-k} \end{equation} where $a^2+b^2=1$. We give a $q$-analogue of this result (Proposition~\ref{prop:addthm}) using the discrete big $q$-Hermite polynomials.
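For example, the case $n=2$ of \eqref{q=1}, with $H_1(t)=2t$ and $H_2(t)=4t^2-2$, shows where the constraint $a^2+b^2=1$ enters:

```latex
% The right-hand side of the classical addition theorem for n = 2:
\[
H_2(y/b)\,b^2 + 2\,H_1(x/a)\,a\,H_1(y/b)\,b + H_2(x/a)\,a^2
 = (4y^2 - 2b^2) + 8xy + (4x^2 - 2a^2)
 = 4(x+y)^2 - 2 = H_2(x+y),
\]
% where a^2 + b^2 = 1 is used in the last step.
```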
We will use $h_n(x,y;q)$ as our $q$-version of $H_n(x-y)$, since \[ \lim_{q\to1} h^*_n(x,y;q)=\lim_{q\to1} \frac{h_n(x\sqrt{1-q},y\sqrt{1-q};q)}{(1-q)^{n/2}}=H_n(x-y), \] and $h_n(x/a,0;q)$, the discrete $q$-Hermite polynomial, as our version of $H_n(x/a)$, since \[ \lim_{q\to1} h^*_n(x,0;q)=\lim_{q\to1} \frac{h_n(x\sqrt{1-q},0;q)}{(1-q)^{n/2}}=H_n(x). \] Another $q$-version of $b^{n-k}H_{n-k}(y/b)$, with $a^2+b^2=1$, is given by $p_{n-k}(y,a;q)$, where \[ p_{t}(y,a;q)=\sum_{m=0}^{[t/2]} \gauss{t}{2m}(q;q^2)_m a^{2m}(1/a^2;q^2)_m y^{t-2m} q^{\binom{t-2m}{2}}, \] since \[ \lim_{q\to1} \frac{p_t(y\sqrt{1-q},a;q)} {(1-q)^{t/2}} =b^tH_t(y/b). \] The result is the following. \begin{prop} \label{prop:addthm} For $n\ge 0$, \[ h_n(x,y;q)=(-1)^n\sum_{k=0}^n \gauss{n}{k} h_k(x/a,0;q) (-a)^k p_{n-k}(y,a;q). \] \end{prop} \begin{proof} The generating function of $p_n$ is \[ F(y,a,w)=\sum_{n=0}^\infty \frac{p_n(y,a;q)}{(q)_n} w^n= \frac{(w^2;q^2)_\infty (-yw)_\infty}{(a^2w^2;q^2)_\infty}. \] If \[ G(x,y,t)= \frac{(t^2;q^2)_\infty (yt)_\infty}{(xt)_\infty} \] is the discrete big $q$-Hermite generating function, then \[ G(x,y,-t)= G(x/a,0,-at) F(y,a,t), \] which gives Proposition~\ref{prop:addthm}. \end{proof} \bibliographystyle{abbrv}
31,211
We are very happy to have our model Lauren Roy, featured in the Toronto Sun Shine Girl. Lauren is 25, from Niagara Falls and has competed in several beauty pageants. She holds the title of Miss. Southern Ontario, and Canada's Third Princess. She is a 5-foot-8 Brunette who loves the glamorous life and wears it well when doing pageants and promotional modelling.
27,680
Register Your Company Here Consultant Kuby Renewable Energy Ltd. Electricians. We strive to build lasting partnerships with everyone we serve by providing elite service and never sacrificing quality. Mailing Address: 14505-114 Ave NW Edmonton, Alberta T5M 2Y8 Canada Tel: (780) 504-3269 Website: Company Category: Solar & Wind Geographic Region: Canada - West Company Sector: Consultant Keywords: solar energy, solar panels, solar power Partner Status: Free Company Listing
117,387
Mobile optimized. » Rom Center » Sony ISOs ». Enjoy playing your Nintendo DS game on your Android device at highest speed. A collection of all your favorite Pokemon ROMs available all fast direct downloads! But it is super easy to download an emulator and a Pokemon ROM. Com provides free software downloads for old versions of programs drivers games. Cool rom free downloads for android. The device was launched in February with Android 2. Cool rom free downloads for android. But it is super easy to download an emulator and a Pokemon ROM. Com provides free software downloads for old versions of programs drivers games. Cool rom free downloads for android. Download Sharp X68000 Complete Game Collection • Full Rom Sets @ The Iso Zone • The Ultimate Retro Gaming Resource. Download Android Mod Application and Games Apk With Direct Link In DlAndroid. Com: LG Realm Black ( Boost Mobile) Discontinued by Manufacturer: Cell Phones & Accessories. Features: Play Nintendo DS games, support NDS files (. Com is your best guide to find free downloads of safe trusted, utilities, secure Windows software games. The Samsung Galaxy Ace S5830 is one of the most popular Android low- end phones which has been in the market for a long time. Here are a few essential pointers to get you going viral and effortlessly attract new members. Nokia is a global leader in innovations such as mobile networks digital health phones. 5 inch 4G Phablet Android 5. I know there are alot of you who browse CoolROM on your iPhone and Android. Fix Android TV Box with our custom firmware downloads update fix Android TV boxes. Entertainment Box is the Best TV box store in the UK Buy a Smart TV box on Android with Kodi from one of the biggest TV box , USA gadgets shop. All the recently launched Samsung smartphones running on Android OS can. Download the latest Android firmware, software Android Box update. Feel free to head here for some of my most popular posts this site , to learn more about me my YouTube channel. 
The first cool thing about. Android has gotten better over the years but there are still many things I dont like about it. See how we create technology to connect. Works with Windows Windows Phone, Linux, Mac OS X Android Devices. Here begins our list of some of the best apps for rooted Android phones and tablet devices. Free fast downloads from the largest Open Source applications , secure software directory - SourceForge. I just ordered a HOMTOM HT7 Pro 4G Phablet – hopefully Im on a winner, price is good for the features. Find the best free Android games antivirus , utilities applications at CNET Download. Main Features: HOMTOM HT7 Pro 5. To put it bluntly, I hate Android. Download Free ROMs for NES SNES, PSP, 3DS , GBA, XBOX, PS2, WII, NDS, N64, GAMECUBE, PSX more! Launching a successful Facebook group is a definite art. When clicked outside of the( as in clicking on another program) the functionality of the program does not work. You can play these ISOs on your Android / iPhone / Windows Phone! I tried to support it and I actually liked it for a while. nero 7 download full version free, Nero 7 Lite 7.
161,537
Media Options Related Guests - Salah Negmdirector of news at Al Jazeera English..” Transcript AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. We are broadcasting from Bonn, Germany, at the Deutsche Welle Global Media Forum. I’m Amy Goodman. It was six months ago on Sunday when Egyptian authorities raided a hotel room in Cairo used by reporters at the global TV network Al Jazeera. The journalists Peter Greste, Mohamed Fahmy and Baher Mohamed were arrested that day, December 29.. Here in Bonn, Germany, we’re joined by Salah Negm. He’s the director of news at Al Jazeera English. He formerly was the director of BBC Arabic news service in London. He’s here in Bonn for the Deutsche Welle Global Media Forum. I just heard you speak. This is very dire times for Al Jazeera. You recently had another reporter released from prison, Abdullah Elshamy, after 10 months. Can you talk about what is happening to your reporters in Egypt? SALAH NEGM: Well, as you know, the three Al Jazeera English reporters were sentenced, as you said, to between seven and 10 years, and there are six other Al Jazeera journalists who were sentenced in absentia for 10 years each. Some of them are Egyptians. They cannot go back to Egypt. They lost their property, their contact with their families. And that’s only for being journalists. We were reporting in Egypt objectively and accurately. And actually, throughout the trial there was not one piece of evidence against them of falsifying information or supporting any group which is outlawed. That was all false. And the sentence came as a real shock and surprise to everyone, because it was out of law totally. AMY GOODMAN: Well, talk about the evidence they showed. I mean, in one of the sequences, the observers in the courtroom said, they showed—was it Peter Greste’s family on vacation? SALAH NEGM: Yes, yes, and it was weird and absurd. They got things, I think, from the laptops of the correspondents. 
Some of them were just clips from Sky News Arabia, which is a totally different channel. Some of them were about football matches, and one of them was about a vacation, family vacation.

AMY GOODMAN: Now, Baher Mohamed was sentenced to 10 years, the other two, Fahmy and Greste, to seven years, because he had a shell casing. I mean, as reporters, we pick up many things on the streets. Some talk about having souvenirs, but often, if you find ammunition like that, it is evidence. It’s something that you can see, for example, where ammunition comes from.

SALAH NEGM: Of course. That’s part of our jobs of investigating and trying to verify facts. If that ammunition, for example, was a government issue, that will direct the finger to, let’s say, the police force. If it is not, then it will be an opposition group or a lone operator in a demonstration or something like that. I covered news myself in Iraq and other places, and I used to do that. That’s part of what we are doing. And sometimes—I will not deny that—we take things like this sometimes as a souvenir for great coverage from a very dangerous situation or a danger zone.

AMY GOODMAN: But the court said in the verdict and the sentencing, where he got an extra three years—

SALAH NEGM: Yeah.

AMY GOODMAN: —this was possession of ammunition.

SALAH NEGM: They considered it ammunition. And I don’t know how can you consider a shell, a bullet shell, an ammunition. It’s not used, but—it can’t be used by anyone.

AMY GOODMAN: So, what happens now? What is Al Jazeera doing to get these journalists out of prison?

SALAH NEGM: We are continuing our solidarity campaign throughout the world and, actually, asking the Egyptian government to take action and release them immediately, because their sentence was unjust to start with. There is an appeal process, but after the sentence and seeing the whole process of trial, we are not very confident that it will really go in the right path.
AMY GOODMAN: Is Al Jazeera speaking to the Egyptian government?

SALAH NEGM: No, we have our legal team who is representing us there. But I don’t know—

AMY GOODMAN: The president, Sisi, has the power to pardon or to commute these sentences, is that right, at this point?

SALAH NEGM: Yes. Yes, the president has the power to pardon anyone who’s sentenced. But we shouldn’t only think—I mean, Al Jazeera journalists are very important to us, but we think about other journalists who are detained, as well, and who will be sentenced for their work as journalists trying to convey the facts and truth to the people. But apart from that, also there were some very illogical sentences in Egypt of hundreds of people to execution. And I think the government has to take an action. You cannot think about, for example, executing 300 people or imprisoning journalists just for doing their work.

AMY GOODMAN: Secretary of State—the U.S. secretary of state—John Kerry was in Egypt the day before the sentences came down. In fact, one of the reporters, Mohamed Fahmy, shouted from his cage as the verdict came down, “Where is John Kerry?” The U.S. is resuming, at this point, something like $500 million in military aid to Egypt. What is your comment on this?

SALAH NEGM: It’s difficult to comment on this because politics between countries are different. But what we expect from the United States is to defend the freedom of speech as one of its basic principles embedded in the United States Constitution and foreign policy. And we expect that its action will follow the principle, really.

AMY GOODMAN: Have you spoken to the U.S. government? I mean, the timing of this trip, the day before the journalists were sentenced—now, the U.S. government has, and John Kerry has, conveyed that they are upset with these verdicts—

SALAH NEGM: Yes, he did.

AMY GOODMAN: —but the actions are different. What have they said to you?

SALAH NEGM: He had strong comments afterwards.
I think it was in a press conference in Iraq the second day. And actually, we appreciate his comments. We speak to officials from different governments, and we are seeking their support, as well.

AMY GOODMAN: How are you covering Egypt now? Al Jazeera English, Al Jazeera America, Al Jazeera Arabic, the local Al Jazeera, the kind of C-SPAN for Al Jazeera—for Egypt that Al Jazeera ran, none of these networks can operate?

SALAH NEGM: In the current, current atmosphere of news, being banned from one country wouldn’t stop you from covering this country. There are several ways, and technology and actually help of other journalists will help us in covering the events in Egypt. So we didn’t stop covering the events in Egypt. What we are lacking is our own correspondents, which are the eyes and ears of the viewer. But we have correspondents helping us from other networks, which showed really good solidarity with us.

AMY GOODMAN: In fact, we just got word from Al Jazeera that there was a bomb that went off near the presidential palace in Cairo. I believe two police officers were killed, and news is developing. Reuters has confirmed this. But how do you protect your reporters around the world?

SALAH NEGM: We take very strict measures when we send reporters to areas of tension or war. We provide them with all protective gears, armored cars, trackers if they want to go to somewhere that we lose contact by telephone or whatever, so there are satellite trackers. We do risk assessment. We have extraction plans, extraction teams, security advisers with them. So we take every possible precaution for that. But first of all, the reporter himself has to be convinced and believe in the mission he is about to do. And it’s a voluntary thing. We cannot ask someone to go against his will. It has to come from the reporter himself.

AMY GOODMAN: What is the #FreeAJStaff campaign?
SALAH NEGM: #FreeAJStaff campaign, it’s a campaign for collecting support for actually telling the Egyptian government that what happened was unjust, and they have to free the reporters immediately, as soon as possible. Think about Abdullah Elshamy, who is the Arabic reporter who stayed 10 months, was on hunger strike for so many days, and then he was released for—just a few days ago.

AMY GOODMAN: And he tweeted out a picture of himself holding that sign—

SALAH NEGM: Yes.

AMY GOODMAN: —“#FreeAJStaff.” Salah Negm, I want to thank you very much for being with us, director of news at Al Jazeera English. He is usually in Doha, but he’s in Bonn today for the Deutsche Welle Media Forum that we’re both covering and attending.
Network Services: One of Florida’s best, we provide complete network design, installation and support.
Video Surveillance: We design and install the latest and greatest digital IP Video & Access Control security solutions.
IP Phone Systems: We offer a wide array of Voice over IP phone solutions, local or hosted PBX.
\chapter[Variational Quantum Computation]{Universal variational quantum computation} \label{sec:variational}

In Chapter \ref{chap:varintro} we considered variational quantum search and optimization. Here we address a different problem. We wish to simulate the output of an $L$-gate quantum circuit acting on the $n$-qubit product state $\ket{0}^{\otimes n}$. We have access to $p$ appropriately bounded and tunable parameters to prepare and vary over a family of quantum states. All coefficients herein are assumed to be accurate to not more than $\text{poly}(n)$ decimal places. We will define an objective function that, when minimized, produces a state close to the desired quantum circuit output, and we will provide a~solution to the minimization problem.

\section{Notions of quantum computational universality}

There are different notions of {\it computational universality} in the literature. A strong notion is algebraic, wherein a system is called universal if its generating Lie algebra is proven to span ${ \mathfrak{su}}(2^n)$ for $n$ qubits. Here we call this {\it controllability}. An alternative notion is {\it computational universality}, in which a system is proven to emulate a universal set of quantum gates---which implies directly that the system has access to any polynomial-time quantum algorithm on $n$ qubits (the power of quantum algorithms in the class \BQP{}). Evidently the two notions can be related: by proving that a controllable system can efficiently simulate a universal gate set, it becomes computationally universal. Conversely, the strong Church-Turing-Deutsch principle~\cite{deutsch1985quantum} anticipates that a universal system can be made to simulate any controllable system. The present chapter follows the work I did in \cite{UVQC}, where we assume access to control sequences which can create quantum gates such as \cite{PhysRevLett.90.247901, PhysRevA.70.032314, SHI02}.
Given a quantum circuit of $L$ gates, preparing a state $$ \ket{\psi} = \prod_{l=1}^L U_l \ket{0}^{\otimes n} $$ for unitary gates $U_l$, we construct a universal objective function that is minimised by $\ket{\psi}$. The objective function is engineered to have certain desirable properties. Importantly, a gapped Hamiltonian and minimisation past some fixed tolerance ensure sufficient overlap with the desired output state $\ket{\psi}$.

Recent work of interest by Lloyd considered an alternative form of universality, which is independent of the objective function being minimised \cite{2018arXiv181211075L}. Specifically, in the case of {\sf QAOA} the original goal of the algorithm was to alternate a target and a driver Hamiltonian so as to evolve the system close to the target Hamiltonian's ground state---thereby solving an optimization problem instance. Lloyd showed that alternating a driver and target Hamiltonian can also be used to perform universal quantum computation: the times for which the Hamiltonians are applied can be programmed to give computationally universal dynamics \cite{2018arXiv181211075L}. In related work \cite{morales2019universality}, two coauthors and I extended Lloyd's {\sf QAOA} universality result \cite{2018arXiv181211075L}.

In yet another approach towards computational universality, Hamiltonian minimization has long been considered as a means towards universal quantum computation---in the setting of adiabatic quantum computation \cite{2004quant.ph..5098A, BL08}. In that case, however, such mappings adiabatically prepare quantum states close to quantum circuit outputs. Importantly, unlike in ground state quantum computation, in universal variational quantum computation we need not simulate Hamiltonians explicitly.
We instead expand the Hamiltonians in the Pauli basis and evaluate expected values of these operators term-wise (with tolerance $\sim\epsilon$ for some $\sim\epsilon^{-2}$ measurements---see Hoeffding's inequality~\cite{doi:10.1080/01621459.1963.10500830}). Measurement results are then paired with their appropriate coefficients and the objective function is calculated classically. Hence, we translate universal quantum computation into (i) state preparation, followed by (ii) measurement in a single basis, where (iii) the quantum circuit being simulated is used to seed an optimizer so as to ideally reduce the required coherence time.

After introducing variational quantum computation as it applies to our setting, we construct an objective function (called a telescoping construction). The number of expected values has no dependence on Clifford gates appearing in the simulated circuit and is efficient for circuits with $\mathcal{O}(\text{poly} \ln n)$ non-Clifford gates, making it amenable to near-term demonstrations. We then modify the Feynman--Kitaev clock construction and prove that universal variational quantum computation is possible by minimising $\mathcal{O}(L^2)$ expected values while introducing not more than $\mathcal{O}(\ln L)$ slack qubits, for a quantum circuit partitioned into $L$ Hermitian blocks.

We conclude by considering how the universal model of variational quantum computation can be utilised in practice. In particular, the given gate sequence prepares a state which will minimise the objective function. In practice, we think of this as providing a starting point for a classical optimizer. Given a $T$-gate sequence, we consider the first $L \leq T$ gates. This $L$-gate circuit represents an optimal control problem where the starting point is the control sequence to prepare the $L$ gates. The goal is to modify this control sequence (shorten it) using a variational feedback loop. We iterate this scenario, increasing $L$ up to $T$.
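As an aside, the term-wise measurement strategy above is easy to simulate classically. The sketch below (a toy illustration in Python, not part of the construction; the state, seed and failure probability are arbitrary choices) estimates a single Pauli-$Z$ expected value from a Hoeffding-style number of $\sim\epsilon^{-2}$ simulated shots.

```python
import math, random

# Term-wise estimation of one Pauli expected value by sampling, mimicking
# the measurement strategy described above.  Hoeffding's inequality gives
# additive error eps with probability >= 1 - delta after
# ceil(ln(2/delta)/(2 eps^2)) shots.  Illustrative sketch only.
def estimate_Z(theta, shots, rng):
    """Estimate <psi|Z|psi> for |psi> = cos(theta)|0> + sin(theta)|1>."""
    p0 = math.cos(theta) ** 2            # Born probability of outcome |0>
    total = sum(1 if rng.random() < p0 else -1 for _ in range(shots))
    return total / shots                 # empirical mean of the +/-1 eigenvalues

theta = 0.6
exact = math.cos(2 * theta)              # <Z> = cos^2 - sin^2 = cos(2 theta)
eps, delta = 0.02, 1e-3
shots = math.ceil(math.log(2 / delta) / (2 * eps ** 2))  # Hoeffding shot count
est = estimate_Z(theta, shots, random.Random(7))
print(shots, est, exact)
```

In the full scheme, one such estimate would be produced per Pauli term of the objective function and recombined classically with its coefficient.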
Hence, the universality results proven here also represent a means towards optimal control which does not suffer from the exponential overheads of classically simulating quantum systems.

\section{Maximizing projection onto a circuit}

We will now explicitly construct an elementary Hermitian penalty function that is non-negative, with a non-degenerate lowest ($0$) eigenstate. Minimisation of this penalty function prepares the output of a quantum circuit. We state this in Lemma \ref{thm:tele}.

\begin{lemma}[Telescoping Lemma---\cite{UVQC}] \label{thm:tele} Consider $\prod_l U_l \ket{0}^{\otimes n}$, an $L$-gate quantum circuit preparing state $\ket{\psi}$ on $n$-qubits and containing not more than $\mathcal{O}(\text{poly} \ln n)$ non-Clifford gates. Then there exists a Hamiltonian $\mathcal{H}\geq0$ on $n$-qubits with $\text{poly}(L, n)$ cardinality, an $(L, n)$-independent gap $\Delta$ and non-degenerate ground eigenvector $\ket{\phi}\propto\prod_l U_l \ket{0}^{\otimes n}$. In particular, a variational sequence exists causing the Hamiltonian to accept $\ket{\phi}$, viz.\ $0\leq \bra{\phi}\mathcal{H}\ket{\phi} < \Delta$, whence Theorem \ref{thm:e2overlap} implies stability. \end{lemma}

Proof sketch (Telescoping Lemma \ref{thm:tele}). First we show existence of the penalty function. Construct Hermitian $\mathcal{H} \in \mathscr{L}(\mathbb C_2^{\otimes n})$ with $\mathcal{H}\geq 0$ such that there exists a non-degenerate $\ket{\psi}\in \mathbb C_2^{\otimes n}$ with the property that $\mathcal{H}\ket{\psi}=0$. We will view the Hamiltonian $\mathcal{H}$ as a penalty function preparing the initial state and restrict $\mathcal{H}$ to have bounded cardinality ($\text{poly}(n)$ non-vanishing terms in the Pauli basis). Define $P_\phi$ as a sum of projectors onto product states, i.e.
\begin{equation}\label{eqn:proj} P_\phi = \sum_{i=1}^n \ket{1}\bra{1}^{(i)} = \frac{n}{2}\left(\openone - \frac{1}{n}\sum_{i=1}^n Z^{(i)} \right) \end{equation} and consider \eqref{eqn:proj} as the initial Hamiltonian, preparing state $\ket{0}^{\otimes n}$. We will act on \eqref{eqn:proj} with a sequence of gates $\prod_{l=1}^L U_l$ corresponding to the circuit being simulated as \begin{equation}\label{eqn:isoaffine} h(k) = \left(\prod_{l=1}^{k\leq L} U_l \right)P_\phi \left(\prod_{l=1}^{k\leq L} U_l\right)^\dagger \geq 0, \end{equation} which preserves the spectrum. From the properties of $P_\phi$ it hence follows that $h(k)$ is non-negative and non-degenerate $\forall k \leq L$.

We now consider the action of the gates \eqref{eqn:isoaffine} on \eqref{eqn:proj}. At $k=0$, from \eqref{eqn:proj} there are $n$ expected values to be minimized, plus a global energy shift that will play a multiplicative role as the circuit depth increases. To consider $k=1$ we first expand a universal gate set in the linear extension of the Pauli basis. Interestingly, the coefficients $\mathcal{J}^{a b \dots c}_{\alpha \beta \dots \gamma}$ of the gates will not serve as direct inputs to the quantum hardware; they enter only in the classical step, where they weight the sum to be minimized. Let us then consider single-qubit gates in general form, viz., \begin{equation} e^{-\imath {\bf{a}.\boldsymbol {\sigma}} \theta} = \openone \cos(\theta) - \imath {\bf{a}.\boldsymbol {\sigma}} \sin(\theta), \end{equation} where $\bf a$ is a unit vector and ${\bf{a}.\boldsymbol {\sigma}} = \sum_{i=1}^3 a_i\sigma_i$. So each single-qubit gate increases the number of expected values by a factor of at most $4^2$. At first glance this appears prohibitive, yet there are two factors to consider. The first is the following Lemma (\ref{lemma:invariance}).
\begin{lemma}[Clifford Gate Cardinality Invariance] \label{lemma:invariance} Let $\mathcal{C}$ be the set of all Clifford circuits on $n$ qubits, and let $\mathcal{P}$ be the set of all elements of the Pauli group on $n$ qubits. Let $C\in\mathcal{C}$ and $P\in\mathcal{P}$; then it can be shown that $$ CPC^\dagger \in \mathcal{P}, $$ or in other words $$ C\left(\sigma^a_\alpha \sigma^b_\beta \cdots \sigma^c_\gamma \right)C^\dagger = \sigma^{a'}_{\alpha'} \sigma^{b'}_{\beta'} \cdots \sigma^{c'}_{\gamma'}, $$ and so Clifford circuits act by conjugation on tensor products of Pauli operators to produce tensor products of Pauli operators. \end{lemma}

For some $U$ a Clifford gate, Lemma \ref{lemma:invariance} shows that the cardinality is invariant. Non-Clifford gates increase the cardinality by factors $\mathcal{O}(e^n)$ and so must be logarithmically bounded from above. Hence, telescopes bound the number of expected values by restricting to circuits with $$ k\sim\mathcal{O}(\text{poly} \ln n) $$ general single-qubit gates. Clifford gates do, however, modify the locality of terms appearing in the expected values---this is highly prohibitive in adiabatic quantum computation yet arises here only as local measurements.

A final argument supporting the utility of telescopes is that the initial state is restricted primarily by the initial Hamiltonian having only a polynomial number of non-vanishing coefficients in the Pauli basis. In practice---using today's hardware---it should be possible to prepare an $\epsilon$-close 2-norm approximation to any product state $$ \bigotimes_{k=1}^n \left(\cos \theta_k \ket{0}+e^{\imath \phi_k}\sin \theta_k \ket{1}\right), $$ which is realised by modifying the projectors in \eqref{eqn:proj} with a product of single-qubit maps $\bigotimes_{k=1}^n U_k$. Other, more complicated states are also possible. Finally, to finish the proof of Lemma \ref{thm:tele}, the variational sequence is given by the description of the gate sequence itself.
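As a numerical sanity check of the construction so far (a sketch, assuming NumPy; the two-qubit circuit $\mathrm{CNOT}\cdot(H\otimes\openone)$ and the $T$ gate are illustrative choices, not prescribed by the construction), one can verify the two forms of $P_\phi$ in \eqref{eqn:proj}, the spectrum-preserving conjugation \eqref{eqn:isoaffine}, and Lemma \ref{lemma:invariance}:

```python
import numpy as np

# Sanity checks of the telescoping construction and of the Clifford
# cardinality invariance, for n = 2 qubits.  Gate choices are illustrative.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
Hg = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Tg = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.eye(4, dtype=complex)[[0, 1, 3, 2]]

# --- P_phi as a projector sum equals its Pauli form (n/2)(1 - (1/n) sum Z).
P1 = np.diag([0, 1]).astype(complex)
P_phi = np.kron(P1, I2) + np.kron(I2, P1)
pauli_form = np.eye(4) - 0.5 * (np.kron(Z, I2) + np.kron(I2, Z))
assert np.allclose(P_phi, pauli_form)

# --- Conjugation h = U P_phi U^dag preserves the spectrum, and the unique
# zero-energy state becomes the circuit output U|00>.
U = CNOT @ np.kron(Hg, I2)
h = U @ P_phi @ U.conj().T
vals, vecs = np.linalg.eigh(h)
assert np.allclose(vals, np.linalg.eigvalsh(P_phi))       # spectrum preserved
psi = U @ np.array([1, 0, 0, 0], dtype=complex)           # circuit output
assert abs(vals[0]) < 1e-12 and vals[1] > 0.5             # non-degenerate zero, unit gap
assert abs(abs(vecs[:, 0].conj() @ psi) - 1) < 1e-9       # ground state is U|00>

# --- Clifford conjugation maps one Pauli to one Pauli; the T gate does not.
paulis1 = [I2, X, Y, Z]
paulis2 = [np.kron(a, b) for a in paulis1 for b in paulis1]

def support(A, basis):
    # number of non-zero coefficients of A in the trace-orthogonal Pauli basis
    d = A.shape[0]
    return sum(abs(np.trace(P.conj().T @ A)) / d > 1e-9 for P in basis)

assert support(Hg @ X @ Hg.conj().T, paulis1) == 1        # H X H = Z
assert all(support(CNOT @ P @ CNOT.conj().T, paulis2) == 1 for P in paulis2)
assert support(Tg @ X @ Tg.conj().T, paulis1) == 2        # non-Clifford grows support
print("telescope and Clifford-invariance checks passed")
```

The last assertion shows concretely why non-Clifford gates must be bounded: each one can multiply the number of Pauli terms, and hence expected values, to be estimated.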
Hence a state can be prepared causing the Hamiltonian to accept, and stability applies (Theorem \ref{thm:e2overlap}).

To explore telescopes in practice, let us then explicitly consider the quantum algorithm for state overlap (a.k.a.~the {\it swap test}; see e.g.~\cite{2018NJPh...20k3022C}). This algorithm has a structure analogous to phase estimation, a universal quantum primitive which forms the backbone of error-corrected quantum algorithms.

\begin{example} We are given two $d$-qubit states $\ket{\rho}$ and $\ket{\tau}$ which will be non-degenerate and minimal eigenvalue states of some initial Hamiltonian(s) on $n+1$ qubits (with $n=2d$) \begin{equation} h(0) \ket{+, \rho, \tau} = 0, \end{equation} corresponding to the minimization of $\text{poly}(n/2)+1$ expected values, where the first qubit (superscript 1 below) adds one term and is measured in the $X$-basis. The controlled swap gate takes the form \begin{equation} [U_\text{swap}]^1_m=\frac{1}{2}\left(\openone^1 +Z^1\right)\otimes \openone^m + \frac{1}{2}\left(\openone^1 - Z^1\right)\otimes \mathcal{S}^m, \end{equation} where $m=(i,j)$ indexes a qubit pair and the exchange operator of a pair of qubit states is $\mathcal{S}=\frac{1}{2}\left(\openone + \boldsymbol {\sigma}.\boldsymbol {\sigma}\right)$. For the case of $d=1$ we arrive at the simplest (3-qubit) experimental demonstration. At the minimum ($=0$), the probability of finding the first qubit in logical zero is $\frac{1}{2}+\frac{1}{2}|\braket{\rho}{\tau}|^2$. The final Hadamard gate on the control qubit is considered in the measurement step. \end{example}

Telescopes offer some versatility yet fail to directly prove universality in their own right. The crux lies in the fact that we are only allowed some polynomial in $\ln n$ non-Clifford gates (which opens an avenue for classical simulation, see \cite{Bravyi_2016, Bravyi_2019}). Interestingly, however, we considered the initial Hamiltonian in \eqref{eqn:proj} as a specific sum over projectors.
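The swap-test example can be checked directly for $d=1$ (a sketch, assuming NumPy; the states $\ket{\rho}$, $\ket{\tau}$ are arbitrary illustrative choices, and $\mathcal{S}=\frac{1}{2}(\openone+\boldsymbol{\sigma}.\boldsymbol{\sigma})$ is the normalized exchange operator, the factor $\tfrac{1}{2}$ making $\mathcal{S}$ a unitary involution):

```python
import numpy as np

# 3-qubit swap-test check (d = 1): control in |+>, controlled swap built
# from the exchange operator S = (1 + sigma.sigma)/2, final Hadamard on the
# control.  The probability of reading the control in |0> should equal
# 1/2 + |<rho|tau>|^2 / 2.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
Hg = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

S = 0.5 * (np.kron(I2, I2) + np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
U_cswap = np.kron(P0, np.eye(4)) + np.kron(P1, S)      # controlled swap

rho = np.array([1, 0], dtype=complex)                  # |0>
tau = np.array([np.cos(0.4), np.sin(0.4)], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

state = np.kron(plus, np.kron(rho, tau))               # control (x) rho (x) tau
state = np.kron(Hg, np.eye(4)) @ (U_cswap @ state)     # c-swap, then H on control
p0 = np.linalg.norm(state[:4]) ** 2                    # control found in |0>
overlap2 = abs(rho.conj() @ tau) ** 2
assert np.allclose(S @ S, np.eye(4))                   # S is self-inverse
assert abs(p0 - (0.5 + 0.5 * overlap2)) < 1e-12
print("swap-test probability matches 1/2 + |<rho|tau>|^2 / 2:", p0)
```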
We could instead bound the cardinality by some polynomial in $n$. Such a construction will now be established. It retroactively proves universality of telescopes and in fact uses telescopes in its construction. However true this might be, the spirit is indeed lost: telescopes are a tool which gives some handle on what we can do without adding additional slack qubits. The universal construction then follows.

\section{Maximizing projection onto the history state}

We will now prove the following theorem (\ref{thm:history}), which establishes universality of the variational model of quantum computation.

\begin{theorem}[Universal Objective Function---\cite{UVQC}]\label{thm:history} Consider a quantum circuit of $L$ gates on $n$-qubits producing state $\prod_l U_l \ket{0}^{\otimes n}$. Then there exists an objective function (Hamiltonian, $\mathcal{H}$) with non-degenerate ground state, cardinality $\mathcal{O}(L^2)$ and spectral gap $\Delta = \Omega(L^{-2})$, acting on $n+\mathcal{O}(\ln L)$ qubits, such that acceptance implies efficient preparation of the state $\prod_l U_l \ket{0}^{\otimes n}$. Moreover, a variational sequence exists causing the objective function to accept. \end{theorem}

To construct an objective function satisfying Theorem \ref{thm:history}, we modify the Feynman--Kitaev clock construction \cite{Fey82, KSV02}. Coincidentally (and tangential to our objectives here), this construction is also used in certain definitions of the complexity class quantum-Merlin-Arthur (\QMA{}), the quantum analog of \NP{}, through the \QMA{}-complete problem k-\LH{}~\cite{KSV02}. Feynman developed a time-independent Hamiltonian that induces unitary dynamics to simulate a sequence of gates \cite{Fey82}.
Consider \begin{equation} \label{eqn:hpropfey} \begin{split} \tilde{\mathcal{H}}_t &= U_t\otimes \ket{t}\bra{t-1} + U_t^\dagger \otimes \ket{t-1}\bra{t} \\ \tilde{\mathcal{H}}_{\text{prop}} & = \sum_{t=1}^L \tilde{\mathcal{H}}_t \end{split} \end{equation} where the Hamiltonian \eqref{eqn:hpropfey} acts on a clock register (right) with orthogonal clock states $0$ to $L$ and an initial state $\ket{\xi}$ (left). Observation of the clock in state $\ket{L}$ after some time $s=s_\star$ produces \begin{equation} \openone \otimes \bra{L} e^{-\imath \cdot s_\star \cdot \tilde{\mathcal{H}}_{\text{prop}}} \ket{\xi}\otimes \ket{0} = U_L \cdots U_1 \ket{\xi}. \end{equation} The Hamiltonian $\tilde{\mathcal{H}}_{\text{prop}}$ in \eqref{eqn:hpropfey} can be modified as \eqref{eqn:hprop2} so as to have the history state \eqref{eqn:hist} as its ground state \begin{equation} \label{eqn:hprop2} - U_t\otimes \ket{t}\bra{t-1} - U_t^\dagger \otimes \ket{t-1}\bra{t} + \ket{t}\bra{t} + \ket{t-1}\bra{t-1} = 2\cdot \mathcal{H}_t \geq 0, \end{equation} where $\mathcal{H}_t$ is a projector. Then $\mathcal{H}_{\text{prop}} = \sum_{t=1}^L \mathcal{H}_t$ has the history state as its ground state, \begin{equation}\label{eqn:hist} \ket{\psi_{\text{hist}}} = \frac{1}{\sqrt{L+1}} \sum_{t=0}^L U_t\cdots U_1\ket{\xi}\otimes \ket{t}, \end{equation} for any input state $\ket{\xi}$, where $0 = \bra{\psi_{\text{hist}}}\mathcal{H}_{\text{prop}} \ket{\psi_{\text{hist}}}$. These form the building blocks of our objective function. We will hence establish Theorem \ref{thm:history} by a series of lemmas.
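Before stating the lemmas, the clock construction can be exercised numerically at small scale (a sketch, assuming NumPy; the one-qubit, $L=2$ circuit consisting of $H$ then $Z$ is an arbitrary illustrative choice): we build $\mathcal{H}_{\text{prop}}$ from \eqref{eqn:hprop2}, verify that the history state \eqref{eqn:hist} lies in its kernel, and confirm the line-walk spectrum $\lambda_k = 1-\cos\left(\frac{\pi k}{1+L}\right)$ used below.

```python
import numpy as np, math

# Small-scale check of the clock construction for a 1-qubit, L = 2 circuit.
L = 2
Hg = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Zg = np.diag([1, -1]).astype(complex)
gates = [Hg, Zg]                                  # U_1, U_2 (illustrative)

def clock(t):
    v = np.zeros(L + 1, dtype=complex); v[t] = 1.0
    return v

def outer(u, v):                                  # |u><v|
    return np.outer(u, v.conj())

H_prop = np.zeros((2 * (L + 1), 2 * (L + 1)), dtype=complex)
for t in range(1, L + 1):
    U = gates[t - 1]
    H_prop += 0.5 * (np.kron(np.eye(2), outer(clock(t), clock(t)))
                     + np.kron(np.eye(2), outer(clock(t - 1), clock(t - 1)))
                     - np.kron(U, outer(clock(t), clock(t - 1)))
                     - np.kron(U.conj().T, outer(clock(t - 1), clock(t))))

xi = np.array([0.8, 0.6], dtype=complex)          # arbitrary normalized input
hist = np.zeros(2 * (L + 1), dtype=complex)
state = xi.copy()
for t in range(L + 1):
    hist += np.kron(state, clock(t))              # register (left) x clock (right)
    if t < L:
        state = gates[t] @ state
hist /= np.sqrt(L + 1)

assert np.linalg.norm(H_prop @ hist) < 1e-12      # history state in the kernel
# Spectrum: walk eigenvalues 1 - cos(pi k/(L+1)), doubled by the register.
walk = sorted(1 - math.cos(math.pi * k / (L + 1)) for k in range(L + 1))
assert np.allclose(np.linalg.eigvalsh(H_prop), sorted(walk + walk))
lam1 = walk[1]                                    # spectral gap of H_prop
assert 2 / (L + 1) ** 2 <= lam1 <= math.pi ** 2 / (2 * (L + 1) ** 2)
print("history-state and spectrum checks passed; gap =", lam1)
```

Note that the kernel check succeeds for any input $\ket{\xi}$, which is exactly the degeneracy that the next lemma lifts.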
\begin{lemma}[Lifting the degeneracy] \label{lemma:degen} Adding the tensor product of a projector with a telescope \begin{equation} \mathcal{H}_{\text{in}} = V\left( \sum_{i = 1}^n P_1^{(i)} \right)V^\dagger \otimes P_0 \end{equation} lifts the degeneracy of the ground space of $\mathcal{H}_{\text{prop}}$, and the history state with fixed input \begin{equation} \frac{1}{\sqrt{L+1}} \sum_{t=0}^L \prod_{l = 1}^t U_l(V\ket{0}^{\otimes n}) \otimes \ket{t} \end{equation} is the non-degenerate ground state of $J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}$ for real $J, K>0$. \end{lemma}

\begin{proof}[Lemma \ref{lemma:degen}] The lowest energy subspace of $\mathcal{H}_{\text{prop}}$ is spanned by $\ket{\psi_{\text{hist}}}$, which has degeneracy given by the freedom in choosing any input state $\ket{\xi}$. To fix the input, consider a tensor product with a telescope \begin{equation} \mathcal{H}_{\text{in}} = V\left( \sum_{i = 1}^n P_1^{(i)} \right)V^\dagger \otimes P_0 \end{equation} for $P_1 = \ket{1}\bra{1} = \openone - P_0$, where $P_1^{(i)}$ acts on the qubit labeled in the superscript $(i)$ (the register, left) and $P_0$ acts on the clock space (right).
It is readily seen that $\mathcal{H}_{\text{in}}$ has unit gap and \begin{equation} \ker \{\mathcal{H}_{\text{in}}\} = \text{span}\{ \ket{\zeta}\otimes \ket{c},\, V\ket{0}^{\otimes n}\otimes \ket{0} \mid c \in \mathbb{N},\, 0<c \leq L,\, \ket{\zeta}\in \mathbb{C}_2^{\otimes n}\}. \end{equation} Now for positive $J$, $K$, \begin{equation} \arg\min \{J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}} \} \propto \frac{1}{\sqrt{L+1}} \sum_{t=0}^L \prod_{l = 1}^t U_l(V\ket{0}^{\otimes n}) \otimes \ket{t}. \end{equation} \end{proof}

\begin{lemma}[Existence of a gap]\label{lemma:gap} For appropriate non-negative $J$ and $K$, the operator $J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}$ is gapped with a non-degenerate ground state and hence Theorem \ref{thm:e2overlap} applies with \begin{equation} \Delta \geq \max\left\{ J,\, \frac{2K}{(L+1)^2} \right\}. \end{equation} \end{lemma}

\begin{proof}[Lemma \ref{lemma:gap}] $\mathcal{H}_{\text{prop}}$ is diagonalized by the following unitary transform \begin{equation} W = \sum_{t=0}^L U_t\cdots U_1\otimes \ket{t}\bra{t}; \end{equation} then $W\mathcal{H}_{\text{prop}}W^\dagger$ acts as identity on the register space (left) and induces a quantum walk on a 1D line on the clock space (right). Hence the eigenvalues are known to be $\lambda_k = 1-\cos\left(\frac{\pi k}{1+L}\right)$ for integer $0\leq k \leq L$.
From the standard inequality $1-\cos(x)\geq 2x^2/\pi^2$ for $x\in[0,\pi]$, we find that $\mathcal{H}_{\text{prop}}$ has a gap lower bounded as \begin{equation} \lambda_0 = 0 < \frac{2}{(L+1)^2} \leq \lambda_1. \end{equation} From Weyl's inequalities, it follows that $J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}$ is gapped as \begin{align} \lambda_0 = 0 &< \max\{ \lambda_1(J\cdot \mathcal{H}_{\text{in}}), \lambda_1(K\cdot \mathcal{H}_{\text{prop}}) \}\\ &\leq \lambda_1(J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}) \\ &\leq \min\{ \lambda_{n-1}(J\cdot \mathcal{H}_{\text{in}}), \lambda_{n-1}(K\cdot \mathcal{H}_{\text{prop}}) \} \end{align} with a non-degenerate ground state, and hence Theorem \ref{thm:e2overlap} applies with \begin{equation} \Delta \geq \max\left\{ J,\, \frac{2K}{(L+1)^2} \right\}. \end{equation} \end{proof}

\begin{lemma}[$\mathcal{H}_{\text{prop}}$ admits a log space embedding] \label{lemma:logem} The clock space of $\mathcal{H}_{\text{prop}}$ embeds into $\mathcal{O}(\ln L)$ slack qubits, leaving the ground space of $J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}$ and the gap invariant. \end{lemma}

\begin{proof}[Lemma \ref{lemma:logem}] An $L$-gate circuit requires at most $k = \lceil \log_2 L \rceil$ clock qubits. Consider the projector onto the computational basis state given by the bit string ${\bf x} = x_1 x_2 \dots x_k$, \begin{equation} \label{eqn:clockpro} P_{\bf x} = \ket{{\bf x}}\bra{{\bf x}} = \bigotimes_{i=1}^{\lceil \log_2 L \rceil} \frac{1}{2}\left(\openone + (-1)^{x_i} Z_i\right), \end{equation} which encodes each clock penalty term on the slack qubits. \end{proof}

\begin{lemma}[Existence and Acceptance] \label{lemma:uob} The objective function $J\cdot \mathcal{H}_{\text{in}} + K\cdot \mathcal{H}_{\text{prop}}$ satisfies Theorem \ref{thm:history}.
The gate sequence $\prod_l U_l \ket{0}^{\otimes n}$ is accepted by this objective function, thereby satisfying Theorem \ref{thm:history}. \end{lemma}

\begin{proof}[Lemma \ref{lemma:uob}, sketch] The term \eqref{eqn:clockpro} contributes $L$ terms, and hence so does each of the four terms in $\mathcal{H}_t$ from \eqref{eqn:hprop2}. Hence, the entire sum contributes $3\cdot L^2$ expected values, where we assume $U=U^\dagger$ and that $L$ is upper bounded by some family of circuits requiring $O(\text{poly}~ n)$ gates. The input penalty $\mathcal{H}_{\text{in}}$ contributes $n$ terms, and for an $L$-gate circuit on $n$-qubits we arrive at a total of $\mathcal{O}(L^2)$ expected values and $\mathcal{O}(\lceil \log_2 L \rceil)$ slack qubits. Adding identity gates to the circuit can boost output probabilities, causing the objective function to accept for a state prepared by the given quantum circuit. \end{proof}

We are faced with considering self-inverse gates. Such gates ($U$) have a spectrum $\text{Spec}(U)\subseteq\{\pm1\}$ and are bijective to idempotent projectors ($P^2=P=P^\dagger$), viz.\ ${U = \openone - 2P}$; moreover, if $V$ is a self-inverse quantum gate, so is the unitary conjugate $\tilde{V}= G V G^\dagger$ under arbitrary $G$. Shi showed that a set comprising the controlled not gate (a.k.a.~Feynman gate) plus any one-qubit gate whose square does not preserve the computational basis is universal \cite{SHI02}. Consider Hermitian \begin{equation} R(\theta) = X\cdot \sin(\theta) + Z \cdot \cos(\theta); \end{equation} then \begin{equation}\label{eqn:RR} e^{\imath \theta Y} = R(0) \cdot R(\theta). \end{equation} Hence, a unitary $Y$ rotation is recovered by a product of two Hermitian operators. A unitary $X$ rotation is likewise recovered by the composition analogous to \eqref{eqn:RR} when considering Hermitian $Y\cdot \sin(\theta) - Z \cdot \cos(\theta)$. The universality of self-inverse gates is then established, with constant overhead.
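The reflection composition can be verified numerically (a sketch, assuming NumPy; the angle is arbitrary):

```python
import numpy as np, math

# R(theta) = X sin(theta) + Z cos(theta) is Hermitian and self-inverse;
# composing two such reflections yields a unitary rotation.  One consistent
# composition is R(0) . R(theta) = exp(i theta Y), checked below, and the
# analogous composition with Y sin - Z cos recovers an X rotation.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def R(theta):
    return X * math.sin(theta) + Z * math.cos(theta)

def Q(theta):
    return Y * math.sin(theta) - Z * math.cos(theta)

theta = 0.7
assert np.allclose(R(theta), R(theta).conj().T)        # Hermitian
assert np.allclose(R(theta) @ R(theta), np.eye(2))     # self-inverse
rhs_Y = math.cos(theta) * np.eye(2) + 1j * math.sin(theta) * Y   # exp(i theta Y)
assert np.allclose(R(0) @ R(theta), rhs_Y)
rhs_X = math.cos(theta) * np.eye(2) + 1j * math.sin(theta) * X   # exp(i theta X)
assert np.allclose(Q(0) @ Q(theta), rhs_X)
print("self-inverse decompositions verified")
```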
Hence, to conclude, the method introduces not more than $\mathcal{O}(L^2)$ expected values while requiring not more than $\mathcal{O}(\ln L)$ slack qubits, for an $L$-gate quantum circuit.

\section{Discussion}

We have established that variational methods can approximate any quantum state produced by a sequence of quantum gates, and hence that variational quantum computation admits a universal model. It appears evident that this method will yield shorter control sequences compared with that of the original quantum circuit---that is the entire point. Indeed, the control sequence implementing the gate sequence being simulated serves as an upper bound, showing that a sequence exists to minimize the expected values. These expected values are the fleeting resource which must be simultaneously minimized to find a shorter control sequence which prepares the desired output state of a given quantum circuit.

Although error correction would allow the circuit model to replace the methods developed here, the techniques we develop in universal variational quantum computation should augment possibilities in the NISQ setting, particularly with the advent of error suppression techniques \cite{2016NJPh...18b3023M, PhysRevX.7.021050}. Importantly, variational quantum computation forms a universal model in its own right and is not (in principle) limited in application scope.

An interesting feature of the model of universal variational quantum computation is how many-body Hamiltonian terms are realized as part of the measurement process. This is in contrast with leading alternative models of universal quantum computation. In the gate model, many-body interactions must be simulated by sequences of two-body gates. The adiabatic model applies perturbative gadgets to approximate many-body interactions with two-body interactions \cite{2004quant.ph..5098A, BL08}. The variational model of universal quantum computation simulates many-body interactions by local measurements.
Moreover, the coefficients weighting many-body terms need not be implemented in the hardware directly; this weight is compensated for in the classical iteration process, which in turn controls the quantum state being produced. Many states cause a given objective function to accept; hence the presented model is inherently agnostic to how those states are prepared. This enables experimentalists to vary the accessible control parameters to minimize an external and iteratively calculated objective function. Though the absolute limitations of this approach in the absence of error correction are not known, a realizable method of error suppression could bring us closer to implementation of traditional textbook quantum algorithms such as Shor's factorisation algorithm.
It could be halted, delisted or trading at 28 cents by that time in March or it could be at $20 or anywhere in between on that day pt. is that you , i , nobody knows what the price will be on March 4 2015 long term value of MOLG may be a NASDAQ halt and delisting if you ask my opinion. trading is the ONLY way he will not let it fall? what do you call $12.50 to $2.86 move? Wake up, it already fell and it fell a lot. yes, that explains why stock went from a lofty IPO price of $12.50 to a lousy $2 bucks and change now Tan may be a billionaire, but wasn't Bernie Madoff a billionaire too? that dont mean nothing, oh and they dont own any FB I wonder what TAN would say if you ask him to spare like 500k $ to buy this junk at 3 bucks. I wonder if he says back: are u fkn kidding me, i aint gonna be buying this p o s and lose 500K. how can a multi billionaire give a statement and offer no $$$$ to back his statement up? Why o why do we allow Malaysian companies to IPO in US anyway? ? ?????? can we not put a complete ban to all malaysian companies trying to trade here in US exchanges Sentiment: Hold Who does the vetting of such foreign outfits anyway before stamping the green light for them to come here to US??? this stock gave me lot of pain as a long. Thankfully I sold and got out with a good profit but if it goes back down again, i aint gonna shed a tear for it Sentiment: Strong Sell flush it down baby..malay style i will buy at $1.60 again Sentiment: Strong Sell In maybe 2-3 days, $2.80 is the price it will settle down at. VWAP never lies i think Sentiment: Strong Sell $3.90, wow. I am buying in small itty bitty increments all the way down to $1.60 again let the pump and dump cycle repeat itself. lets make money that way Sentiment: Strong Buy But worth the wait. 
Buy at $, sell at $10+ so what, if it takes few weeks or months for that kind of return Sentiment: Strong Buy Who is lucky enough to just SHORT this and sit on it until it pulls a JRCC style BK, whenever that happens, if it ever does? i think we all can profit from that strategy. why lose money if this is hell bent in collapsing ??? i dont like shorting, but looks like we may not have a choice here if we wanna profit. BK may take time i think I will have to chase it and scalp it for a nice trade. For all the heartache this stock has given us, a few profitable scalps is needed and quick daytrades to pocket some fat gains as daytrades dont invest in this, just use it as a trading vehicle for quick gains Sentiment: Strong Buy I sold for a good profit, was down like $3k yesterday, now I sold for a good profit Thank you kind malaysians for the good pump and dump and pump stock. wish I had bought more yesterday in the mid $1.60s, what was i thinking? Sentiment: Buy of course King, Tan, Company, CEO not going lower. Can you say the same about us investors? If the company delists (or gets delisted) and then gets to walk away with our IPO money that they raised at $12.50 levels, who will lose in that situation, if it ever happens? not them, only us in my opinion Where is the promised company and CEO buyback??? Our worst fear is the possibility of a slow and steady decline daily after such a huge drop. but I hope price goes up in 1-2 months. where do u see it after 2 months? you may buy this more if this goes down tomorrow. What about the thousands of other Americans who will be forced to sell at even lower prices and take mounting losses? Do you care about them at all? Why are you assuming that the Sultan would want this stock to go higher to begin with? If stock goes up, we Americans will make money and we americans will prosper. Would a rich muslim man in the middle east really want that? just asking...as I dont know the answer.
On Fri, 16 Jan 1998 Charlie Stross <[email protected]> Wrote: >If you look at things from the perspective of social darwinism Why would I want to do that? Social darwinism is an old fashioned and foolish philosophy that assumes morality can be obtained from nature. The term had not been invented in his lifetime but Darwin was certainly not a believer in social darwinism and would have been appalled to have his name associated with such a thing. . Actually, government is beneficial, not at the level of the individual but at a higher level of organization. Government is good for government, that's why it loves itself and wants to grow. >I don't like facile generalisations like "government BAD" and >"free market GOOD". Good generalizations are the key to science. A good generalization, as good as any in the fuzzy world of human values and behavior is "government BAD, free market GOOD". >What I want to know is WHY governments at the end of the twentieth >century have become bloated, inefficient, and counterproductive >monsters that are resented by many of their citizens. How could it be otherwise? Government claims power that individuals and corporations do not have, no organization is in the habit of voluntarily giving up power, rather they use any advantage they have to try to get even more power. >do you mean the concept of 'government' in general, rather than any >specific administration? Some shit stinks worse than others, but I don't want any on my corn flakes. John K Clark [email protected] -----BEGIN PGP SIGNATURE----- Version: 2.6.i iQCzAgUBNMPE/X03wfSpid95AQHwIwTvbjGdyDlzwyzMNtoP8JD5uyRyG+dk0DMz Le/XLeSHvalKq55BqOPJiuUjvy/7cyWeyqtozOtWq1TqbQDLNSvMrHypmNBau7Rs mSKwG1xRwGLn5tnnTQUrrFOtZbqDP+PYmFn9kgdlqNxOwEJM4kpTEuP/JGaSpI2j 6eKDp+q6AAVKaEvWr9r8EkDCHYmbcv1LLsObS7WaD75m4cbDGyU= =REYP -----END PGP SIGNATURE-----
An Example of US Food and Drug Administration Device Regulation: Medical Devices Indicated for Use in Acute Ischemic Stroke
Abstract
This article reviews the FDA's historical perspective on acute ischemic stroke trials and the clinical trial design considerations used in prior studies that have led to US market clearance, as they relate to currently marketed devices indicated for acute ischemic stroke.
Section Editors: Marc Fisher, MD; Kennedy Lees, MD
The US Food and Drug Administration (FDA) is responsible for protecting the public health by assuring the safety and effectiveness of a variety of medical products, including drugs, devices, and biological products, and for promoting public health by expediting the approval of treatments that are safe and effective. More than 20 000 firms worldwide produce over 80 000 brands and models of medical devices for the US market. The Center for Devices and Radiological Health (CDRH) is a center within FDA that is responsible for pre- and postmarket regulation of medical devices in the US. According to the National Institute of Neurological Disorders and Stroke (NINDS), about 700 000 people have a stroke each year; stroke is a leading cause of long-term disability and the third leading cause of death for Americans after heart disease and cancer.1 The regulation of medical devices by CDRH includes devices indicated for stroke. This article will review research and development of such devices, pre- and postmarket evaluation, and provide an FDA perspective on future development.
Medical Device Regulation
The CDRH is responsible for regulating the manufacture, repackaging, relabeling, and import of medical devices sold in the United States. 
This mission is accomplished by (1) reviewing requests to undertake research of or to market medical devices, (2) collecting, analyzing, and acting on information about injuries and other adverse experiences in the use of medical devices and radiation-emitting electronic products, (3) setting and enforcing good manufacturing practice regulations and performance standards for radiation-emitting electronic products and medical devices, (4) monitoring compliance and postmarket surveillance programs for medical devices and radiation-emitting electronic products, and (5) providing technical and other nonfinancial assistance to small manufacturers of medical devices.2 The intensity of regulatory oversight exerted on a medical device depends on its classification. Device Classification Medical devices are categorized into 3 classes with regulatory control increasing from Class I to III. The device classification regulations define the regulatory requirements for a general device type. Most Class I devices are exempt from FDA notification; Class II devices require Premarket Notification 510(k); and most Class III devices require Premarket Approval (PMA).3 Humanitarian use devices (HUDs) are marketed for a limited population in an entirely separate process termed a Humanitarian Device Exemption (HDE). The majority of medical devices reviewed by CDRH are evaluated under premarket notification (510(k)). Premarket Notification 510(k) Process The 510(k) clearance process requires that the Sponsor demonstrate their device to be substantially equivalent to a legally marketed predicate device. It is the responsibility of the sponsor to identify an appropriate predicate device or devices to which the sponsor demonstrates that their new device is substantially equivalent in design, function, and indication for use. 
Given the incremental changes that occur with device development, the majority of device applications cleared under the 510(k) program are based on preclinical testing that demonstrates this equivalence to a predicate. Before marketing clearance is obtained, the manufacturer must also assure that the device is properly labeled in accordance with FDA’s labeling regulations. In some cases, however, when there are concerns regarding safety and effectiveness, the FDA can require clinical data for 510(k) clearance. PMA Process A second regulatory pathway for marketing of medical devices is the PMA process. This market approval process is required for most Class III devices. These devices are typically referred to as significant risk devices that support or sustain human life, are important in preventing health impairment, or present risk of serious injury or death in case of failure. The PMA approval process requires that the manufacturer has demonstrated, through preclinical and clinical data, a reasonable assurance of safety and effectiveness when used for the label indication in the intended population. In FY 2005, FDA authorized marketing of 3148 devices with 510(k) clearance and 32 devices through the PMA program. Neurological devices have comprised ≈5% of all medical devices authorized for marketing.4 Approval of a device through the PMA process represents a rigorous type of device marketing application required by FDA and is based on a determination by FDA that the PMA application contains valid scientific evidence to assure that the device is safe and effective for its intended use(s) (21CFR814 [Subpart C]). An approved PMA is, in effect, a license granting the applicant (or owner) permission to market the device. The PMA applicant is usually the person (ie, company, also called sponsor) who owns the rights, or has authorized access, to the data and other information submitted in support of safety and effectiveness. 
In some cases, before approving or denying a PMA, an FDA advisory committee composed of nonagency experts (special government employees), may review the data submitted in support of the PMA. A public meeting provides a venue for making a recommendation whether to approve the submission. After carefully weighing the evidence contained in the submission and recommendations from the advisory panel, the FDA will notify the applicant of its decision. In those instances where a PMA is approved and after FDA notifies the applicant that the PMA has been approved, a notice is published on the Internet announcing the data on which the decision is based, providing an opportunity for the public to petition FDA within 30 days for reconsideration of the decision.5 To assist potential users in understanding the basis for PMA approval, FDA provides a Summary of Safety and Effectiveness that identifies the type of safety and effectiveness data as the basis for each approval decision. The clinical community will often continue to study the medical device after marketing (postmarket surveillance), to develop a more refined understanding of the patient population most likely to benefit. Medical device regulation requires FDA to tailor the data requested from a manufacturer to address the specific safety and effectiveness questions that need to be addressed before a marketing authorization can be granted. The Center for Devices and Radiological Health, by law, considers valid scientific evidence to include all of the following: well-controlled investigations, partially controlled studies, studies and objective trials without matched controls, well-documented case histories conducted by qualified experts, and reports of significant human experience with a marketed device, to determine whether there is reasonable assurance that the device is safe and effective (21CFR 860.7). 
The agency has considered whether these guidelines are in conflict with adequate scientific integrity and believes there can be several approaches to addressing regulatory requirements. HDE In 1996 the FDA finalized rules (21 CFR part 814 [Subpart H]) regarding HUDs. In contrast to the PMA, a HUD designation requires that <4000 individuals in the United States per year are eligible for use of the device under the proposed indications. After approval of an HUD designation, the device must receive a Humanitarian Device Exemption to allow the intended use. Unlike a PMA approval, the sponsor of an HDE is not required to provide clinical data to demonstrate reasonable safety and effectiveness but must provide sufficient information demonstrating that the device does not pose unreasonable risks and that there is probable benefit of use that outweighs the risks. These data must be interpreted in light of currently available treatment options. Lastly, the sponsor must demonstrate that no comparable devices are available to treat or diagnose the disease or condition, and that the device could not be brought to market without HUD status. A device approved under an HDE for marketing, however, remains restricted as an investigational device limited to the specific indications stated in the product labeling. Investigators are required to obtain approval from their Institutional Review Boards for use of the device in each case. There are currently no PMAs cleared for treatment of acute ischemic stroke. However, the FDA has recently approved two HDEs for intracranial stent systems for the treatment of symptomatic and medically refractory intracranial atherosclerotic disease, a very select patient population with an extremely high stroke risk: the Wingspan Stent System (Boston Scientific, 2005) and NeuroLink System (Guidant, 2002). 
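The class-to-pathway rules described in the preceding sections can be summarized in a small lookup. This is an illustrative sketch only, not FDA software; the function name and the pathway strings are paraphrases of the text.

```python
# Hypothetical helper summarizing the premarket pathways described above:
# Class I -> generally exempt; Class II -> 510(k); Class III -> PMA;
# humanitarian use devices -> HDE.
PREMARKET_PATHWAY = {
    "I": "generally exempt from premarket notification",
    "II": "Premarket Notification 510(k)",
    "III": "Premarket Approval (PMA)",
    "HUD": "Humanitarian Device Exemption (HDE)",
}

def premarket_pathway(device_class: str) -> str:
    """Return the typical premarket pathway for a device category."""
    key = device_class.strip().upper()
    if key not in PREMARKET_PATHWAY:
        raise ValueError(f"unknown device class: {device_class!r}")
    return PREMARKET_PATHWAY[key]
```

As the text notes, the mapping is only the usual case: the FDA can still require clinical data for a 510(k), and HDE approval carries its own restrictions on use.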
Research and Development of Medical Devices
The first step that manufacturers take in the research and development of their device is the determination of whether their product is, in fact, a medical device. A medical device is defined, in part, as an instrument, apparatus, implement, machine, or similar article that does not achieve its primary intended purposes through chemical action and is not dependent on being metabolized for the achievement of any of its primary intended purposes. Once a product is determined to be a medical device, the sponsor of a device may then investigate whether there is a legally marketed predicate. Often, medical devices are first designed and developed as “tools” to accomplish a task that is already an established practice, which means that the intended patient population and anticipated effects of the device are known or understood before testing even begins. During the research and development of a medical device, sponsors may approach FDA for guidance on the suitable design of a clinical or nonclinical study. Investigational devices for the treatment of stroke are usually evaluated by the Office of Device Evaluation’s (ODE) Division of General, Restorative, and Neurological Devices (DGRND), one of five divisions in CDRH responsible for regulating medical devices. Clearance of a Class II medical device is typically based on both bench and animal performance testing. In some cases, the FDA may require a clinical study to demonstrate adequate performance characteristics, or substantial equivalence to a legally marketed predicate device. Safety and effectiveness are important considerations in FDA’s review of premarket applications for medical devices. Regardless of the medical device or regulatory pathway necessary to market the medical device, sponsors and principal investigators usually collect clinical data for each of the applications in a clinical study permitted by the FDA with approval of a protocol providing an Investigational Device Exemption (IDE). 
In most cases, if there is reasonable assurance that a device is safe (ie, the probable benefits to health from use of the device for its intended uses and conditions of use outweigh the probable risks), the FDA will allow a clinical investigation to proceed. In general, devices indicated for the treatment of acute ischemic stroke have been categorized as significant risk devices and therefore warrant study under an IDE application. In addition to the requirement for having an FDA-approved IDE, sponsors of such trials must comply with the regulations governing institutional review boards (21 CFR Part 56) and informed consent (21 CFR Part 50).
Acute Ischemic Stroke
FDA’s role is to regulate medical devices, drugs, and biologics, but not to regulate the practice of medicine. There have been several clinical studies conducted to evaluate medical products for acute ischemic stroke. Prolyse in Acute Cerebral Thromboembolism (PROACT II),6 the Mechanical Embolus Removal in Cerebral Ischemia (MERCI) Retrieval Study,7 and other trials8,9,10 support investigation of acute stroke, generally within the first few hours of stroke onset. Based on this experience, initial treatment in IDE stroke trials usually occurs within hours of symptom onset. Defining the exact time of symptom onset is an important component of all study protocols to ensure identification of appropriate subjects. For studies or devices with a therapeutic time window beyond the intervals examined in prior studies, sponsors are encouraged to provide evidence from preclinical or clinical testing to support treating such subjects before FDA approval of an IDE. An issue in the investigation of acute ischemic stroke is the difficulty in recruiting subjects, particularly when there is a need to provide a cohort of control subjects treated with the current standard of care for ischemic stroke, especially if a time restriction for use of the device is set. 
The most reliable, unbiased way to interpret the outcomes of a medical device study is a randomized, controlled, and double-blinded design. Simple comparison of pretreatment versus posttreatment assessments, without a control group, may not provide adequate proof of a reasonable assurance of safety and effectiveness. Therefore, in some cases, prospective randomized controlled studies are encouraged. Although we are aware of the difficulty in the design and recruitment of subjects for acute ischemic stroke, alternative methods may be considered only when valid methods of inference are used and there are valid reasons for not using a randomized control group (eg, evaluation of a medical device to restore blood flow). In some cases, study designs using single-arm nonrandomized clinical protocols with comparison of study results to historical or concurrent matched controls may be acceptable if they are scientifically sound and address the relevant safety and effectiveness concerns important to the proper use of the device by the community. In those cases where alternative study designs are presented, early collaboration with the FDA and discussion of new statistical issues with FDA, before study initiation, are strongly recommended. Other aspects of study design that are important to device manufacturers and will be evaluated by FDA include selection of an adequate and representative patient population; clinical outcome assessments by an appropriate, validated neurological impairment scale, disability measure, or handicap scale; device- and study design–dependent selection of appropriate clinical end points and statistical approaches; and the reproducibility of any technique. Once clearance for marketing is obtained, device manufacturers must ensure that their labeling is in accordance with the approval decision. 
In addition, labeling will include whether the device can be sold over the counter or whether it is prescription use only, information for use (including indications, effects, routes, methods, and frequency and duration of administration; and any relevant hazards, contraindications, side effects, and precautions), instructions for installation and operation, and any information, literature, or advertising that constitutes labeling. The indication for use(s) is based on the nonclinical and clinical studies described in the device submission. Indications for use for a device including a general description of the disease or condition the device will diagnose, treat, prevent, cure, or mitigate, such as a description of the patient population for which the device is intended and in some cases differences related to gender, race/ethnicity, etc are included in the labeling. What Clearance for Marketing Means to the User Activase (alteplase), a genetically engineered version of tissue plasminogen activator, is a medical product approved for use during the acute onset of ischemic stroke. Other advances in the management of acute ischemic stroke include recent FDA clearance of the MERCI Retriever for US marketing.11,12,13 In comparison to tissue plasminogen activator, the MERCI retriever is intended to remove thrombus obstructing blood flow in the neurovasculature as cause of acute ischemic stroke (the MERCI Retriever had been previously cleared by the FDA for use in the retrieval of foreign bodies misplaced during interventional radiological procedures in the neuro, peripheral and coronary vasculature). Patients who are ineligible for treatment with IV tissue plasminogen activator or who fail IV tissue plasminogen activator therapy are candidates for intervention with the MERCI retriever. The MERCI retriever was cleared along the 510(k) regulatory pathway rather than the PMA process because there already existed a predicate device with a similar use. 
The 510(k) device regulatory process is different from the regulatory process for drugs. In some cases, medical devices can have local effects whereas drugs can have systemic effects. Moreover, unlike the 510(k) regulatory path where one device may be similar to another and potentially appropriate for marketing clearance based on an incremental change in technology, approval of one drug based on an incremental change in molecular identity may not be appropriate because slight differences in molecular identity can have significant differences in safety and effectiveness. These characteristics highlight the potential differences between medical device and drug clearances. Because of concerns regarding safety and potential effectiveness for the new use of the MERCI retriever, a clinical study was still required for device clearance. This study was designed as a nonrandomized study using historical data from prior clinical studies, such as PROACT II, as the control. The FDA also involved its advisory panel in the review of data for the MERCI retriever, and FDA considered comments made by advisory panel members very carefully when evaluating the evidence submitted by the manufacturer.11 Because the primary end point of this study was restoration of cerebral circulation and not improved clinical outcome after stroke, and because of the nonrandomized design, the MERCI retriever was cleared with an indication for use regarding cerebral revascularization after stroke and not an indication for use in acute stroke treatment. Although this may appear clinically to be an arbitrary distinction, it was necessary because the data supporting market clearance were based on radiographic evidence of accomplishing this end point, which was one of several considerations the agency used in clearing the device. Once a device reaches the market, there are postmarket surveillance regulations with which a manufacturer must comply. 
These requirements include the Quality Systems (also known as Good Manufacturing Practices) and Medical Device Reporting regulations. The Quality Systems regulation is a quality assurance requirement that covers the design, packaging, labeling, and manufacturing of a medical device. The Medical Device Reporting regulation is an adverse event reporting program. In the case of acute ischemic stroke, FDA continues to monitor Medical Device Reporting and other postmarket studies related to devices that are indicated for use in ischemic stroke to determine whether additional labeling or other modifications are needed to improve the safety and effectiveness profile of such devices. Acute stroke management is a dynamic field in which medical devices will continue to play an important role. Further studies are needed to evaluate outcomes for patients undergoing mechanical thrombectomy as a treatment modality.
Future Perspectives
Although the FDA has published many guidance documents on the research, development, and current thinking on several topics related to medical devices, inevitably additional guidance will be needed to identify the recommended options in the preclinical and clinical studies to support regulatory submissions targeting the treatment of ischemic stroke. Equally important in this process is the cooperation that is necessary in the development of medical technologies among academic, industry, advocacy, and government stakeholders. Ongoing interaction among the various parties involved is essential to validating the safety and effectiveness of new treatments. The regulatory controls the FDA imposes provide important assurance that this will be accomplished without unacceptable restriction of the availability of these tools to the healthcare environment. Indeed, the Food and Drug Administration Modernization Act of 1997 requires that the agency cooperate with other stakeholders to assure a “least burdensome approach” to device regulation. 
The agency is also committed to monitoring devices through the product’s total life cycle, cooperating with the diverse parties primarily involved at each phase of the cycle, including its research, development, and clinical investigation, performance in the real world of clinical practice after market release, and any subsequent technological improvements or changes. Regardless of the regulatory pathway, an evaluation of device safety and effectiveness will be needed for any device. These principles are especially important for the field of acute stroke because of the tremendous public health impact of this disease and the recent technological advances in the treatment of cerebrovascular occlusive disease. With the proper steps and dialogue among various stakeholders, ongoing research and development of medical products will hopefully lead to safer and more effective treatments for this devastating disease.
Acknowledgments
Disclosures
This article represents the professional opinion of the authors and is not an official document, guidance, or policy of the US Government, the Department of Health and Human Services, or the Food and Drug Administration, nor should any official endorsement be inferred.
- Received September 26, 2006.
- Revision received November 6, 2006.
- Accepted November 14, 2006.
References
- National Institute of Neurological Disorders and Stroke Web site. What you need to know about stroke. Available at:. Accessed September 2, 2006.
- Food and Drug Administration Web site. Overview of what we do. Available at:. Accessed September 2, 2006.
- Food and Drug Administration Web site. Overview of regulations. Available at:. Accessed September 2, 2006.
- Pena C, Bowsher K, Samuels-Reid J. FDA-approved neurologic devices intended for use in infants, children, and adolescents. Neurology. 2004;63:1163–1167.
- Food and Drug Administration Web site. Overview. Available at:. Accessed September 2, 2006.
- –1438.
- Becker KJ, Brott TG. 
Approval of the MERCI clot retriever: a critical view. Stroke. 2005;36:400–403.
- Furlan A, Fisher M. Devices, drugs, and the Food and Drug Administration: increasing implications for ischemic stroke. Stroke. 2005;36:398–399.
An Example of US Food and Drug Administration Device Regulation. Carlos Peña, Khan Li, Richard Felten, Neil Ogden and Mark Melkerson. Stroke. 2007;38:1988–1992; originally published May 29, 2007.
Core 365 88192T Men's Tall Pinnacle Long Sleeve Polo Shirt
Item# 88192T
Description
Fabric:
- 100% polyester pique, 4.1 oz./yd2/140 gsm
- Moisture wicking, antimicrobial and UV protection performance
Features:
- matching flat knit collar
- spandex enhanced rib knit cuffs
\begin{document} \begin{abstract} We correct a mistake in \cite{china} and prove the natural generalization of the projective Lichnerowicz-Obata conjecture for Randers metrics. \end{abstract} \maketitle \section{Introduction} \subsection{Definition and results} A Randers metric is a Finsler metric of the form \begin{equation} \label{a0} \begin{array}{rl} F(x,\xi ) =& \sqrt{g(x)_{ij} \xi^i \xi^j} + \omega(x)_{i}\xi^i \\ =& \sqrt{g(\xi, \xi)} + \omega(\xi),\end{array} \end{equation} where $g= g_{ij}$ is a Riemannian metric and $\omega= \omega_i$ is a $1$-form. Here and everywhere in the paper we assume summation over repeated indices. The assumption that $F$ given by \eqref{a0} is indeed a Finsler metric is equivalent to the condition that the $g$-norm of $\omega$ is less than one. Within the whole paper we assume that all objects we consider are at least $C^2$-smooth. By a {\it forward geodesic} of a Finsler metric $F$ we understand a regular curve $x:I\to M$ such that for any sufficiently close points $a, b\in I$, $a\le b$, the restriction of the curve $x$ to the interval $[a,b]\subseteq I$ is an extremal of the forward-length functional \begin{equation} \label{1bis} L^+_F(c):= \int_{a}^b F(c(t), \dot c(t)) dt\end{equation} in the set of all smooth curves $c:[a,b]\to M$ connecting $x(a)$ and $x(b)$. By a {\it backward geodesic} of a Finsler metric $F$ we understand a regular curve $x:I\to M$ such that for any sufficiently close points $a, b\in I$, $a\le b$, the restriction of the curve $x$ to the interval $[a,b]\subseteq I$ is an extremal of the backward-length functional \begin{equation} \label{2} L^-_F(c):= \int_{a}^b F(c(t), -\dot c(t)) dt\end{equation} in the set of all smooth curves $c:[a,b]\to M$ connecting $x(a)$ and $x(b)$. 
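For the reader's convenience we record the standard verification of the norm condition just mentioned (this is folklore, not specific to \cite{china}): positivity of \eqref{a0} on nonzero vectors follows from the Cauchy--Schwarz inequality for $g$.

```latex
% With $\|\omega\|_g := \sqrt{g^{ij}\omega_i\omega_j} < 1$ we have, for every $\xi \ne 0$,
% $|\omega(\xi)| \le \|\omega\|_g \sqrt{g(\xi,\xi)}$, hence
F(x,\xi) \;=\; \sqrt{g(\xi,\xi)} + \omega(\xi)
 \;\ge\; \bigl(1 - \|\omega\|_g\bigr)\sqrt{g(\xi,\xi)} \;>\; 0 .
```

Conversely, if $\|\omega\|_g \ge 1$ at some point, taking $\xi$ proportional to $-\omega^{\sharp}$ gives $F(x,\xi)\le 0$, so the condition is sharp.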
Note that these definitions do not assume any preferred parameter on the geodesics: if $x(\tau)$ is, say, a forward geodesic, and $\tau(t)$ is an (orientation-preserving) reparameterisation with $\dot \tau := \tfrac{d \tau }{dt}>0$, then $x(t):= x(\tau(t))$ is also a forward geodesic. As examples show, the condition that the reparameterisation is orientation-preserving, i.e., the condition that $\dot \tau := \tfrac{d \tau }{dt}>0$, is important though. We will study the question when two Randers metrics $F$ and $\bar F$ are {\it projectively equivalent}, that is, when every forward geodesic of $F$ is a forward geodesic of $\bar F$. Within our paper we will always assume that the dimension is at least two, since in dimension one all metrics are projectively equivalent. \begin{remark} \label{1} Comparing \eqref{1bis} and \eqref{2} we see that for every forward geodesic $x(t)$, $t\in [-1,1]$, the `reverse' curve $\tilde x(t):= x(-t)$ is a backward geodesic, and vice versa. Thus, if two metrics $F$ and $\bar F$ have the same forward geodesics, they automatically have the same backward geodesics. Moreover, the forward geodesics of any metric $F$ are the backward geodesics of the metric $\tilde F$ given by $\tilde F(x, \xi)=F(x, -\xi)$ and vice versa. For Randers metrics, the transformation $F\mapsto \tilde F$ reads $\sqrt{ g(\xi, \xi )} + \omega( \xi) \mapsto \sqrt{ g(\xi, \xi) } - \omega( \xi)$. \end{remark} There exist the following `trivial' examples of projectively equivalent Randers metrics: Every Finsler metric $F$ is projectively equivalent to the Finsler metric $\const \cdot F$ for any constant $\const>0$. Indeed, forward and backward geodesics are extremals of \eqref{1bis}, \eqref{2}. Now, replacing $F$ by $\const \cdot F$ multiplies the length functionals $L^{\pm}$ of all curves by $\const$, so extremals remain extremals. 
If the Finsler metric $F$ is Randers, the operation $F\mapsto \const \cdot F$ multiplies the metric $g$ by $\const^2 $ and the form $\omega$ by $\const$. Any Finsler metric $F$ is projectively equivalent to the Finsler metric $F+ \sigma$, where $\sigma$ is an arbitrary closed one-form such that $F(x,\xi) +\sigma(\xi) > 0 $ for all tangent vectors $\xi\ne 0$. Indeed, a geodesic connecting two points $x,y$ is an extremal of the length functionals $L^\pm(c)$ given by \eqref{1bis}, \eqref{2}, over all regular curves $c:[a,b]\to M$ with $c(a) = x$ and $c(b) = y$. Adding $\sigma$ to the Finsler metric $F$ changes the (forward and backward) length of all such curves in one homotopy class by adding a constant to it (the constant may depend on the homotopy class), which does not affect the property of a curve to be an extremal. If the Finsler metric $F$ is Randers, the operation $F\mapsto F + \sigma$ does not change the metric $g$ and adds the form $\sigma$ to the form $\omega$. One can combine two examples above as follows: any Finsler metric $F$ is projectively equivalent to the Finsler metric $\const \cdot F + \sigma$, where $\sigma$ is a closed 1-form and $\const \in \mathbb{R}_{>0}$ such that $\const\cdot F(x,\xi) + \sigma(\xi)>0$ for all tangent vectors $\xi\ne 0$. Now, if the forms $\omega$ and $\bar \omega$ are closed, the Randers metrics $F(x,\xi)= \sqrt{ g( \xi, \xi)} + \omega(\xi)$ and $\bar F(x,\xi)= \sqrt{ \bar g( \xi, \xi)} + \bar\omega(\xi)$ on one manifold $M$ are projectively equivalent if and only if the Riemannian metrics $g$ and $\bar g$ are projectively equivalent. 
Indeed, as we explained above, the geodesics of $F$ are geodesics of the Riemannian metric $g$, the geodesics of $\sqrt{ \bar g(\xi, \xi)} + \bar \omega(\xi)$ are the geodesics of the Riemannian metric $\bar g$, so that projective equivalence of $ F$ and $\bar F$ is equivalent to the projective equivalence of $ { g}$ and ${ \bar g}.$ Note that there are a lot of examples of projectively equivalent Riemannian metrics; the first examples were known already to Lagrange \cite{Lagrange} and the local classification of projectively equivalent metrics was known already to Levi-Civita \cite{Levi-Civita}. One of the goals of this note is to show that the `trivial' examples above give us all possibilities of projectively equivalent Randers metrics. \begin{Th}\label{thm2} Let the Finsler metrics $\sqrt{g_{ij}\dot x^i \dot x^j} + \omega_{i}\dot x^i$ and $\sqrt{\bar g_{ij}\dot x^i \dot x^j} + \bar \omega_{i}\dot x^i$ on a connected manifold be projectively equivalent. Suppose at least one of the forms $\omega$ and $\bar \omega$ is not closed. Then, for a certain $\const \in \mathbb{R}_{>0}$ we have $g = \const^2 \cdot \bar g$ and the form $\omega - \const\cdot \bar \omega$ is closed. \end{Th} Let us first remark that Theorem \ref{thm2} follows from \cite[Theorem 1.1]{Burns}. The paper \cite{Burns} deals with magnetic systems and studies the question when magnetic geodesics of one magnetic system $(g, \Omega)$ are reparameterized magnetic geodesics of another magnetic system $(\bar g, \bar \Omega)$, see \cite{Burns} for definitions.
In particular, it was proved that if for positive numbers (energy levels) $E, \bar E\in \mathbb{R}$, every magnetic geodesic with energy $E$ of one magnetic system is, after a proper reparameterisation, a magnetic geodesic with energy $\bar E$ of another magnetic system, then $\bar g = \const\cdot g$ and $\bar \Omega = \overline{\const} \cdot \Omega$ (the second constant $\overline{\const}$ depends on $\const, E, \bar E$), or $\Omega= \bar \Omega= 0$ and the metrics $g$ and $\bar g$ are projectively equivalent. Now, it is well known that forward geodesics of the Randers metric \eqref{a0} are, after an appropriate orientation-preserving reparameterisation, magnetic geodesics with energy $E=1$ of the magnetic system $(g, \Omega = d\omega)$. In view of this, Theorem \ref{thm2} is actually a corollary of \cite[Theorem 1.1]{Burns}. Theorem \ref{thm2} is visually very close to \cite[Theorem 2.4]{china}: the only essential difference is that in the present paper we speak about \emph{projective equivalence}, while the condition discussed in \cite{china} is that two metrics are \emph{pointwise projectively related} (see section \ref{mistake} for the definition). In the Riemannian case (or, more generally, in the case when the Finsler metrics are reversible), these two conditions, projective equivalence and pointwise projective relation, coincide. For general Finsler metrics, and in particular for Randers metrics, projective equivalence and pointwise projective relation are different conditions and \cite[Theorem 2.4]{china} is wrong: we give a counterexample in section \ref{mistake}. We also discuss how one can modify Theorem \ref{thm2} and \cite[Theorem 2.4]{china} such that they become correct, see Corollaries \ref{thm3} and \ref{cor4} from section \ref{mistake}.
Besides, the proof of \cite[Theorem 2.4]{china}, even if one replaces pointwise projective relation by projective equivalence, seems to have a certain mathematical gap, namely an important, delicate and nontrivial step was not done (at least we did not find the place where it was discussed); our proof of Theorem \ref{thm2} closes this gap. We comment on this in section \ref{mistake}. As we mentioned above, we do not pretend that Theorem \ref{thm2} is new since it is a direct corollary of \cite[Theorem 1.1]{Burns}, though it seems to be unknown to Finsler geometers. The new results of the paper are related to projective transformations. By a {\it projective transformation} of $(M,F)$ we understand a diffeomorphism $\phi$ such that the pullback of $F$ is projectively equivalent to $F$. By a {\it homothety} of $(M,F)$ we understand a diffeomorphism $\phi$ such that the pullback of $F$ is proportional to $F$. Homotheties evidently send forward geodesics to forward geodesics and are therefore projective transformations. As a direct application of Theorem \ref{thm2}, we obtain \begin{Cor} \label{cor1} If the form $\omega$ is not closed, every projective transformation of \eqref{a0} on a connected manifold $M$ is a homothety of the Riemannian metric $g$. In particular, if $M$ is closed, every projective transformation of \eqref{a0} is an isometry of the Riemannian metric $g$. \end{Cor} Many papers study the question when a Randers metric is projectively flat, i.e., when its forward geodesics are straight lines in a certain coordinate system. Combining Theorem \ref{thm2} with the classical Beltrami Theorem (see e.g. \cite{Beltrami,short,schur}), we obtain the following well-known statement \begin{Cor}[Folklore] \label{cor2} The metric \eqref{a0} is projectively flat if and only if $g$ has constant sectional curvature and $\omega$ is closed. \end{Cor} In the case the manifold $M$ is closed (= compact and without boundary), more can be said.
We denote by $Proj(M,F)$ the group of the projective transformations of the Finsler manifold $(M,F)$ and by $Proj_0(M,F)$ its connected component containing the identity. \begin{Cor} \label{cor3} Let $(M,F)$ be a closed connected Finsler manifold with $F$ given by \eqref{a0}. Then, at least one of the following possibilities holds: \begin{enumerate} \item There exists a closed form $\hat \lambda$ such that $Proj_0(M,F)$ consists of isometries of the Finsler metric $F(x,\xi)= \sqrt{g(\xi,\xi) } + \omega(\xi) - \hat \lambda(\xi) $, or \item the form $\omega$ is closed and $g$ has constant positive sectional curvature. \end{enumerate} \end{Cor} \subsection{Motivation} One of our motivations was to correct mistakes in the paper \cite{china}: to construct counterexamples and to formulate the correct statement. A part of this goal is to give a simple self-contained proof of Theorem \ref{thm2} which does not require Finsler machinery and therefore could be interesting for a bigger group of mathematicians. Actually, a lot of papers discuss metrics that are pointwise projectively related, and many of them have the same mistake as \cite{china}: in the proofs, the authors actually use that the metrics are projectively equivalent, but formulate the results assuming the metrics are pointwise projectively related. If for every forward geodesic $x:[-1, 1]\to M$ the reverse curve $x(-t)$ is also a forward geodesic (for example, when the metrics are reversible), then the results remain correct; but in the general case many papers on pointwise projectively related metrics are wrong, and in many cases it is not even mentioned whether the authors speak about all metrics or restrict themselves to the case when the metrics are reversible. Since the Randers metrics are nonreversible, this typical mistake is clearly seen in the case of Randers metrics and we have chosen \cite{china} to demonstrate it.
Besides, we think that Corollary \ref{cor3} deserves to be published, since it is a natural generalisation of the classical projective Lichnerowicz-Obata conjecture for Randers metrics, see \cite{nagano,Yamauchi1,hasegawa,solodovnikov1} where the Riemannian version of the conjecture was formulated and proved under certain additional geometric assumptions, and \cite{obata,CMH,archive} where the Riemannian version of the conjecture was proved in full generality. Additional motivation to study projective equivalence and projective transformations came from mathematical relativity and Lorentz differential geometry: it was observed that the light-like geodesics of a stationary, standard spacetime can be described with the help of Randers metrics on a manifold of dimension one less. This observation is called the Stationary-Randers-Correspondence and it is nowadays a hot topic in Lorentz differential geometry since one can effectively apply it, see for example \cite{1,2,2a,how}. The projective transformations of the Randers metrics correspond then to the conformal transformations of the initial Lorentz metric preserving the integral curves of the Killing vector field, so one can directly apply our results. \subsection{Projective equivalence versus pointwise projective relation.} \label{mistake} By \cite{china}, two Finsler metrics $F$ and $\bar F$ on one manifold $M$ are {\it pointwise projectively related} if they have the same geodesics as point sets. The difference between this definition and our definition of projective equivalence is that in our definition we also require that the orientation of the forward geodesics is the same in both metrics. In particular, the metrics $F$ and $\tilde F$ from Remark \ref{1}, such that every forward geodesic of the first is a backward geodesic of the second, are pointwise projectively related according to the definition from \cite{china}. This allows one to construct immediately a counterexample to \cite[Theorem 2.4]{china}.
\begin{example} Take any Riemannian metric $g$ and any form $\omega$ which is not closed (and whose $g$-norm is less than one). Then the Randers metrics $F(x,\xi) = \sqrt{g(\xi, \xi)} + \omega(\xi)$ and $\bar F(x,\xi) = \sqrt{g(\xi, \xi)} - \omega(\xi)$ are pointwise projectively related, since every forward geodesic of $F$ is a backward geodesic of $\bar F$ and every backward geodesic of $F$ is a forward geodesic of $\bar F$. Since the Riemannian parts of these Finsler metrics coincide, \cite[Theorem 2.4]{china} claims that the form $\omega - ( - \omega)= 2 \omega $ is closed, which is not the case. \end{example} The following two corollaries are an attempt to correct the statement of \cite[Theorem 2.4]{china}. \begin{Cor} \label{thm3} Suppose two Randers metrics $ F(x,\xi) = \sqrt{g(\xi, \xi)} +\omega(\xi)$ and $ \bar F(x,\xi) = \sqrt{\bar g(\xi, \xi)} +\bar \omega(\xi)$ on one connected manifold $M$ are pointwise projectively related. Assume in addition that the set $M^0$ of the points of $M$ where the differential $d\omega$ is not zero is connected. Then, there exists a positive $\const\in \mathbb{R}$ such that at least one of the following statements holds at all points of the manifold: \begin{enumerate} \item \label{(1)} $\bar g = \const^2 \cdot g$ and $\bar \omega - \const \cdot \omega$ is a closed form, or \item \label{(2)} $\bar g = \const^2\cdot g$ and $\bar \omega + \const \cdot \omega$ is a closed form.\end{enumerate} \end{Cor} It is important though that the set $M^0$ is connected: indeed, as the following example shows, the cases \eqref{(1)}, \eqref{(2)} of Corollary \ref{thm3} could hold simultaneously in different regions of one manifold. \begin{example} \label{ex2} Consider the ray $S:= \{(x,y)\in \mathbb{R}^2\mid \ x=0 \textrm{ and } y\le 2\}$. Take $M= \mathbb{R}^2 \setminus S$ with the flat metric $g= dx^2 + dy^2$. Consider two balls $B_+$ and $B_-$ around the points $(+1,0)$ and $(-1,0)$ of radius $\tfrac{1}{2}$. Next, take a smooth form $\omega $ on $M$ such that $\omega$ vanishes on $M\setminus \left( B_+ \cup B_-\right)$, such that in each of the two balls there exists at least one point where $d\omega\ne 0$, and such that the $g$-norm of $\omega$ is less than $1$ at every point. Next, define the form $\bar \omega$ as follows: set $$\left\{\begin{array}{rc}\bar \omega = -\omega & \textrm{at $p\in B_-$ } \\ \bar \omega = \omega & \textrm{at $p\not\in B_-$. 
}\end{array}\right.$$ The form $\bar \omega$ is evidently smooth; the metrics $F(x,\xi)= \sqrt{g(\xi,\xi)} + \omega(\xi) $ and $\bar F(x,\xi)= \sqrt{g(\xi,\xi)} + \bar \omega(\xi)$ are pointwise projectively related, though none of the cases listed in Corollary \ref{thm3} holds on the whole manifold. \end{example} \begin{Cor} \label{cor4} Suppose two Randers metrics $F(x,\xi)= \sqrt{g(\xi,\xi)}+ \omega(\xi)$ and $\bar F(x,\xi)= \sqrt{ \bar g(\xi,\xi)}+ \bar \omega(\xi)$ on a connected manifold are pointwise projectively related. Suppose the form $\omega$ is not closed. Then, there exists a positive constant $\const \in \mathbb{R}$ such that $g = \const^2 \cdot \bar g$ and such that for every point $x\in M$ we have $d\omega = \const \cdot d\bar \omega$ or $d\omega = -\const \cdot d\bar \omega.$ \end{Cor} Let us now explain, as announced in the introduction, one more mathematical difficulty with the proof of \cite[Theorem 2.4]{china}. The authors proved (in our notation, and assuming that they actually work with projectively equivalent metrics) \begin{itemize} \item that at the points where the metrics $g$ and $\bar g$ are proportional with a possibly nonconstant coefficient, the coefficient of proportionality is actually a constant, $g = \const^2 \cdot \bar g$, and the form $\omega- \const\cdot \bar \omega$ is closed, and \item that at the points such that the forms $\omega$ and $\bar \omega$ are closed the metrics $g$ and $\bar g$ are projectively equivalent. \end{itemize} These two observations do not immediately imply that one of these two conditions holds on the whole manifold or even, if they work locally, at all points of a sufficiently small neighborhood of an arbitrary point.
Indeed, one could conceive of two Randers metrics $ \sqrt{g(\xi, \xi)} + \omega(\xi)$ and $ \sqrt{\bar g(\xi, \xi)} + \bar \omega(\xi)$ such that at certain points of the manifold the metrics $g$ and $\bar g$ are projectively equivalent but nonproportional (the set of such points is open), and at certain points the metrics $g$ and $\bar g$ are proportional (the set of such points is evidently closed). We overcome this difficulty by using the (nontrivial) result of \cite[Corollary 2]{dedicata}, which implies that if two projectively equivalent metrics are proportional on a certain open set then they are proportional on the whole manifold (assumed connected). It is not the only possibility to overcome this difficulty, but at the present point we do not know any which is completely trivial; since this difficulty is not addressed in the proof of \cite[Theorem 2.4]{china} we suppose that the authors simply overlooked it; our paper closes this gap. \section{Proofs.} \subsection{ Proof of Theorem \ref{thm2}.} Recall that every forward geodesic $x(t)$ of a metric is an extremal of the length functional $L(c) = \int_a^b F(c(t), \dot c(t)) dt$, and, therefore, is a solution of the Euler-Lagrange equation \begin{equation}\label{a1} \frac{d}{dt} \frac{\partial F}{\partial \dot x} - \frac{\partial F}{\partial x}=0.\end{equation} For the Randers metric \eqref{a0}, the equation \eqref{a1} reads \begin{equation} \label{a2} \begin{array}{cccl} g_{ip} \ddot x^p \ \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) & +& g_{ip} \dot x^p \ \frac{d}{dt} \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) + \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) \frac{\partial g_{ip}}{\partial x^k} \dot x^k \dot x^p & \\ & +& \frac{\partial \omega_i}{\partial x^k} \dot x^k - \frac{\partial \omega_k}{\partial x^i} \dot x^k - \frac{1}{2 \sqrt{g_{k m} \dot x^k \dot x^m}} \dot x^p \dot x^q \frac{\partial g_{pq} }{\partial x^i}&=0.\end{array} \end{equation}
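Indeed, the passage from \eqref{a1} to \eqref{a2} is a direct computation: for $F$ given by \eqref{a0} we have \begin{equation*} \frac{\partial F}{\partial \dot x^i} = \frac{g_{ip}\dot x^p}{\sqrt{g_{k m} \dot x^k \dot x^m}} + \omega_i, \qquad \frac{\partial F}{\partial x^i} = \frac{1}{2 \sqrt{g_{k m} \dot x^k \dot x^m}}\, \frac{\partial g_{pq}}{\partial x^i}\, \dot x^p \dot x^q + \frac{\partial \omega_k}{\partial x^i}\, \dot x^k, \end{equation*} and the differentiation of the first expression with respect to $t$ produces the first four terms of \eqref{a2}, while $-\tfrac{\partial F}{\partial x^i}$ gives the last two terms of \eqref{a2}.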
Multiplying this equation by $g^{ij}$ (the inverse matrix to $g_{ij}$), we obtain \begin{equation} \label{a2bis} \begin{array}{cccl} \ddot x^j \ \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) & +& \dot x^j \ \frac{d}{dt} \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) + g^{ij} \left( \frac{1}{\sqrt{g_{k m} \dot x^k \dot x^m}} \right) \frac{\partial g_{ip}}{\partial x^k} \dot x^k \dot x^p & \\ & +& g^{ij} \left(\frac{\partial \omega_i}{\partial x^k} - \frac{\partial \omega_k}{\partial x^i}\right) \dot x^k - g^{ij} \frac{1}{2 \sqrt{g_{k m} \dot x^k \dot x^m}} \dot x^p \dot x^q \frac{\partial g_{pq} }{\partial x^i}&=0.\end{array} \end{equation} It is easy to check by calculations, and is evident geometrically, that for every solution $x(\tau)$ of \eqref{a2} and for every time-reparameterization $\tau(t)$ with $\dot \tau > 0$ the curve $x(\tau(t))$ is also a forward geodesic. Thus, if $\sqrt{ g_{ij}\dot x^i \dot x^j} + \omega_{i}\dot x^i$ and $\sqrt{ \bar g_{ij}\dot x^i \dot x^j} + \bar \omega_{i}\dot x^i$ are projectively equivalent, then every solution $x(t)$ of \eqref{a2} also satisfies \begin{equation} \label{a3bis} \begin{array}{cccl} \ddot x^j \ \left( \frac{1}{\sqrt{\bar g_{k m} \dot x^k \dot x^m}} \right) & +& \dot x^j \ \frac{d}{dt} \left( \frac{1}{\sqrt{\bar g_{k m} \dot x^k \dot x^m}} \right) + \bar g^{ij} \left( \frac{1}{\sqrt{\bar g_{k m} \dot x^k \dot x^m}} \right) \frac{\partial \bar g_{ip}}{\partial x^k} \dot x^k \dot x^p & \\ & +& \bar g^{ij} \left(\frac{\partial \bar \omega_i}{\partial x^k} - \frac{\partial \bar \omega_k}{\partial x^i}\right) \dot x^k - \bar g^{ij} \frac{1}{2 \sqrt{\bar g_{k m} \dot x^k \dot x^m}} \dot x^p \dot x^q \frac{\partial \bar g_{pq} }{\partial x^i}&=0,\end{array} \end{equation} where $\bar g^{ij}$ is the inverse matrix to $\bar g_{ij}$.
We now multiply the equation \eqref{a3bis} by ${\sqrt{\bar g_{k m} \dot x^k \dot x^m}}$ and subtract the equation \eqref{a2bis} multiplied by ${\sqrt{ g_{k m} \dot x^k \dot x^m}}$ to obtain \begin{equation} \begin{array}{cl} & \dot x^j \left( \sqrt{\bar g_{k m} \dot x^k \dot x^m}\ \frac{d}{dt} \left( \frac{1}{\sqrt{\bar g_{k m} \dot x^k \dot x^m}} \right) - \sqrt{ g_{k m} \dot x^k \dot x^m}\ \frac{d}{dt} \left( \frac{1}{\sqrt{ g_{k m} \dot x^k \dot x^m}} \right) \right)\\+ &\dot x^k\left( \bar g^{ij} \sqrt{\bar g_{k m} \dot x^k \dot x^m}\left(\frac{\partial \bar \omega_i}{\partial x^k} - \frac{\partial \bar \omega_k}{\partial x^i}\right) -g^{ij}\sqrt{ g_{k m} \dot x^k \dot x^m} \left(\frac{\partial \omega_i}{\partial x^k} - \frac{\partial \omega_k}{\partial x^i}\right) \right)\\ + & \dot x^k \dot x^p\left( \bar g^{ij} \frac{\partial \bar g_{ip}}{\partial x^k} -g^{ij} \frac{\partial g_{ip}}{\partial x^k} \right) - \dot x^p \dot x^q \left( \bar g^{ij} \frac{1}{2 } \frac{\partial \bar g_{pq} }{\partial x^i} - g^{ij} \frac{1}{2 }\frac{\partial g_{pq} }{\partial x^i}\right) =0 . \end{array} \label{rez} \end{equation} In the equation above, the terms were rearranged according to the following logic: consider a forward geodesic $\tilde x$ such that $\tilde x(0)= x(0)$ and such that $\dot {\tilde x}(0)= - \dot x(0)$. Then, at $t=0$, the first lines of the equation \eqref{rez} and of its analog for $\tilde x$ are proportional to $\dot x(0)= -\dot{ \tilde x}(0)$. The second line of the equation \eqref{rez} is minus its analog for $\tilde x$. The remaining third line of the equation \eqref{rez} coincides with its analog for $\tilde x$.
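The described parity of the second and third lines of \eqref{rez} can be checked directly: under the substitution $\dot x \mapsto -\dot x$ we have \begin{equation*} \dot x^k \mapsto -\dot x^k, \qquad \dot x^k \dot x^p \mapsto \dot x^k \dot x^p, \qquad \sqrt{g_{k m} \dot x^k \dot x^m} \mapsto \sqrt{g_{k m} \dot x^k \dot x^m}, \end{equation*} and similarly for $\sqrt{\bar g_{k m} \dot x^k \dot x^m}$. Hence the second line of \eqref{rez}, which contains one factor linear in the velocity besides the square roots, changes sign, while the third line, which is quadratic in the velocity, does not change.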
Then, subtracting the equation \eqref{rez} from its analog for $\tilde x$ at $t=0$ we obtain (at $t =0$) $$ -\dot x^j f(x(0),\dot x(0)) + \dot x^k\left( \bar g^{ij} \sqrt{\bar g_{k m} \dot x^k \dot x^m}\left(\frac{\partial \bar \omega_i}{\partial x^k} - \frac{\partial \bar \omega_k}{\partial x^i}\right) -g^{ij} \sqrt{ g_{k m} \dot x^k \dot x^m}\left(\frac{\partial \omega_i}{\partial x^k} - \frac{\partial \omega_k}{\partial x^i}\right) \right) =0 , $$ where $$\begin{array}{rc}f(x(0),\dot x(0)) = & \sqrt{\bar g_{k m} \dot x^k \dot x^m}\ \left(\frac{d}{dt}_{|t=0} \left( \frac{1}{\sqrt{\bar g(x)_{k m} \dot x^k \dot x^m}}\right) - \frac{d}{dt}_{|t=0} \left( \frac{1}{\sqrt{\bar g(\tilde x)_{k m} \dot{ \tilde x}^k \dot{\tilde x}^m}}\right) \right)\\-&\sqrt{ g_{k m} \dot x^k \dot x^m}\ \left(\frac{d}{dt}_{|t=0} \left( \frac{1}{\sqrt{ g(x)_{k m} \dot x^k \dot x^m}}\right) - \frac{d}{dt}_{|t=0} \left( \frac{1}{\sqrt{g(\tilde x)_{k m} \dot{ \tilde x}^k \dot{\tilde x}^m}}\right)\right) \end{array}$$ Since this equation is fulfilled for every geodesic, for every tangent vector $v $ we have \begin{equation} \label{la1} v^j f(v) = \sqrt{\bar K(v)} \bar g^{ij}\bar L_{ik} v^k - \sqrt{K(v)} g^{ij} L_{ik} v^k . \end{equation} In the above equation, $$K(v)= g_{pq} v^p v^q, \ \bar K(v)= \bar g_{pq} v^p v^q, \ L_{ik} = \frac{\partial \omega_i}{\partial x^k} - \frac{\partial \omega_k}{\partial x^i}, \ \bar L_{ik} = \frac{\partial \bar \omega_i}{\partial x^k} - \frac{\partial \bar \omega_k}{\partial x^i}, $$ so that $L$ and $\bar L$ are, up to sign, the component matrices of the $2$-forms $d\omega$ and $d\bar \omega$. Note that the matrices $L_{ij} $ and $\bar L_{ij}$ are skew-symmetric.
Let us view this equation as a system of algebraic equations in a certain tangent space $\mathbb{R}^n$; we will show that for all symmetric positive definite matrices $g_{ij}$ and $\bar g_{ij}$ and for all skew-symmetric matrices $L$, $\bar L$ there exist the following possibilities only: \begin{itemize} \item The matrix $\bar g_{ij}$ is proportional to the matrix $g_{ij}$ and the matrix $\bar L_{ij} $ is proportional to the matrix $L_{ij} $ with a consistent coefficient of proportionality ($\bar g = \alpha^2 \cdot g$ and $\bar L = \alpha \cdot L$), or \item The matrices $L_{ij} $ and $\bar L_{ij} $ are zero. \end{itemize} Indeed, we multiply the equation \eqref{la1} by $v^p g_{jp} $, and using that $v^p g_{jp} g^{ij} L_{ik} v^k = v^iv^kL_{ik}=0$ because of the skew-symmetry of $L$, we obtain $$ K(v) f(v) = \sqrt{\bar K(v)} g_{jp} \bar g^{ij} \bar L_{ik} v^k v^p . $$ Similarly, multiplying the equation \eqref{la1} by $v^p \bar g_{jp} $, we obtain $$ \bar K(v) f(v) = -\sqrt{ K(v)} \bar g_{jp} g^{ij} L_{ik} v^k v^p . $$ Combining the last two equations we obtain \begin{equation} \label{la3} \bar K(v)^3 \ (g_{jp} \bar g^{ij} \bar L_{ik} v^k v^p)^2 = K(v)^3 \ (\bar g_{jp} g^{ij} L_{ik} v^k v^p)^2. \end{equation} Since the algebraic expression $K(v)$ is irreducible over $\mathbb{R}$ (it is a positive definite quadratic form and $n\ge 2$), if $g_{jp} \bar g^{ij} \bar L_{ik} v^k v^p \not \equiv 0$, the equation \eqref{la3} implies that $\bar K(v)$ is proportional to $K(v)$: indeed, viewing \eqref{la3} in the polynomial ring $\mathbb{R}[v^1,\dots,v^n]$, which is a unique factorization domain, $K^3$ divides the left-hand side; if $K$ did not divide $\bar K$, then $K^3$ would divide $(g_{jp} \bar g^{ij} \bar L_{ik} v^k v^p)^2$, which is impossible for degree reasons ($4 < 6$) unless this polynomial vanishes identically. Hence $\bar K(v) = \alpha^2 K(v)$, i.e., $\bar g = \alpha^2 \cdot g $, for a certain $\alpha>0$. In this case, the equation \eqref{la1} implies $\bar L_{ij} = \alpha L_{ij}$. Thus, at every point $p$ of our manifold, we have one of the two above possibilities. If at all points the second possibility takes place, the 1-forms $\omega$ and $\bar \omega$ are closed and the Riemannian metrics $g$ and $\bar g $ are projectively equivalent.
If at least at one point the first possibility takes place, then at every point of a small neighborhood of this point the first possibility takes place, so that the differentials $d\omega$ and $ d \bar \omega$ are proportional at every point of this neighborhood, $d \omega = \alpha(x)\, d\bar \omega$. The function $\alpha $ is then a constant, so on this neighborhood the metrics $g$ and $\bar g$ are proportional with a constant coefficient. Then, the metrics $g$ and $\bar g$ are projectively equivalent on the whole manifold: indeed, on a neighborhood consisting of points where the 1-forms $\omega$ and $\bar \omega$ are closed they are projectively equivalent as we explained above, and on a neighborhood where at least one of the forms $\omega, \bar \omega$ is not closed they are even proportional. Now, by \cite[Corollary 2]{dedicata}, two projectively equivalent metrics which are proportional on a certain open set are proportional on the whole (connected) manifold. Hence $g = \const^2 \cdot \bar g$ everywhere, and, by the above, at every point we have $d\omega = \const \cdot d\bar \omega$ (at the points where the second possibility holds both differentials vanish), so the form $\omega - \const\cdot \bar \omega$ is closed. Theorem \ref{thm2} is proved. \subsection{Proof of Corollaries \ref{cor1},\ref{cor2},\ref{cor3}. } Corollary \ref{cor1} follows immediately from Theorem \ref{thm2}: if $\phi:M\to M$ is a projective transformation of the metric \eqref{a0} on a connected manifold $M$, then, by the definition of projective transformations, the pullback $\phi^*(F)$ is projectively equivalent to $F$. Then, by Theorem \ref{thm2}, if the form $\omega$ is not closed, we have $\phi^*(g) = \const\cdot g$ implying $\phi$ is a homothety of $g$. Corollary \ref{cor1} is proved. In order to prove Corollary \ref{cor2}, we will use that the straight lines are geodesics of the standard flat Riemannian metric which we denote by $g^{flat}$. Then, a projectively flat metric is projectively equivalent to the Randers metric $F^{flat}(x,\xi):= \sqrt{g^{flat}(\xi, \xi)}. 
$ Then, by Theorem \ref{thm2}, the form $\omega$ is closed, and the Riemannian metric $g$ is projectively flat. By the classical Beltrami Theorem (see e.g. \cite{Beltrami,short,schur}), the metric $g$ has constant sectional curvature. Corollary \ref{cor2} is proved. Let us now prove Corollary \ref{cor3}. We assume that $(M,F)$ is a closed connected Finsler manifold with $F$ given by \eqref{a0} and denote by $Proj(M,F)$ its group of projective transformations and by $Proj_0(M,F)$ the connected component of this group containing the identity. Suppose first the form $\omega$ is closed. Then, every projective transformation of the Randers metric \eqref{a0} is a projective transformation of the Riemannian metric $g$. Then, if $Proj_0(M,F)$ contains not only isometries, the metric $g$ has constant positive sectional curvature by the Riemannian projective Obata conjecture (proven in \cite[Corollary 1]{archive}, \cite[Theorem 1]{CMH} and \cite[Theorem 1]{obata}) and we are done. Assume now the form $\omega$ is not closed. Then, by Theorem \ref{thm2}, every element of $Proj_0(M,F)$ is an isometry of $g$, which in particular implies that the group $Proj_0(M,F)$ is compact. We consider an invariant measure $d\mu= d\mu(\phi)$ on $Proj_0(M,F)$ normalized such that \begin{equation} \int_{\phi\in Proj_0(M,F)}d\mu(\phi)= 1.\label{cl} \end{equation} Consider the 1-form $\hat \omega$ given by the formula $$ \hat\omega(\xi)= \int_{\phi\in Proj_0(M,F)}\phi^*\omega(\xi) d\mu(\phi). $$ Here $\phi^*\omega$ denotes the pullback of the form $\omega$ with respect to the isometry $\phi\in Proj_0(M,F)$. Let us show that the form $ \hat\omega - \omega$ is closed. 
In view of \eqref{cl}, we have \begin{equation} \omega(\xi)- \hat \omega(\xi) = \int_{\phi\in Proj_0(M,F)}(\omega - \phi^*\omega)(\xi) d\mu(\phi).\label{cl2}\end{equation} Since for every $\phi\in Proj_0(M,F)$ the form $\lambda^\phi:= \omega - \phi^*\omega$ is closed by Theorem \ref{thm2}, the form $\hat \lambda := \omega - \hat \omega$ is also closed. Indeed, let $\lambda^\phi_i$ be the components of the form $ \lambda^\phi:= \omega - \phi^*\omega$ in a local coordinate system $(x^1,...,x^n)$. Then, the components $\hat \lambda_i$ of $\omega - \hat \omega $ are given by the formula $$ \hat\lambda_i= \int_{\phi\in Proj_0(M,F)}\lambda^\phi_i d\mu(\phi). $$ Differentiating this equation w.r.t. $x^j$, we obtain $$ \tfrac{\partial}{\partial x^j} \hat\lambda_i= \int_{\phi\in Proj_0(M,F)} \tfrac{\partial}{\partial x^j} \lambda^\phi_i d\mu(\phi).$$ Now, since for every $\phi\in Proj_0(M,F)$, the form $ \lambda^\phi $ is closed, we have $\tfrac{\partial}{\partial x^j} \lambda^\phi_i = \tfrac{\partial}{\partial x^i} \lambda^\phi _j$ implying $\tfrac{\partial}{\partial x^j} \hat\lambda_i= \tfrac{\partial}{\partial x^i} \hat\lambda_j$, so that $\hat \lambda $ is a closed form. By construction, the form $\hat \lambda$ satisfies the property that $\omega - \hat \lambda = \hat\omega$ is invariant with respect to $Proj_0(M,F)$: indeed, for every $\psi\in Proj_0(M,F)$ we have $\psi^*\hat\omega = \int_{\phi\in Proj_0(M,F)}(\phi\circ\psi)^*\omega\, d\mu(\phi)= \hat\omega$ by the invariance of the measure $d\mu$. Hence the group $Proj_0(M,F)$ consists of isometries of the Finsler metric $\hat F(x, \xi)= \sqrt{g(\xi, \xi)} + \omega(\xi) - \hat \lambda(\xi)$. Corollary \ref{cor3} is proved. \subsection{Proof of Corollaries \ref{thm3}, \ref{cor4}.} We will prove the Corollaries \ref{thm3}, \ref{cor4} simultaneously. Within this section we assume that $F(x,\xi)= \sqrt{g(\xi, \xi)} + \omega(\xi)$ and $\bar F(x,\xi)= \sqrt{\bar g(\xi, \xi)} + \bar \omega(\xi)$ are pointwise projectively related metrics on a connected manifold $M$ and we denote by $M^0$ (resp. $\bar M^0$) the set of the points of $M$ where the differential of $\omega$ (resp. of $\bar \omega$) does not vanish. 
$M^0$ and $\bar M^0$ are obviously open. Let us first observe that $M^0 = \bar M^0$. In order to do this, consider $p\in M^0$ and a vector $\xi \in T_pM$ such that $d\omega(\xi, \cdot)$ (viewed as a 1-form) does not vanish. Then, the forward and backward geodesic segments $x(t)$ and $ \tilde x(t)$ such that $x(0)= \tilde x(0)= p$ and $\dot x(0)= \dot{\tilde x}(0)= \xi$ are two different (even after a reparameterisation) curves. The curves are tangent at the point $p$. Indeed, in the proof of Theorem \ref{thm2} we found an ODE \eqref{a2bis} for forward geodesics. Analogously, one can find an ODE for backward geodesics: it is similar to \eqref{a2bis}, the only difference is that the term $ g^{ij} \left(\frac{\partial\omega_i}{\partial x^k} - \frac{\partial \omega_k}{\partial x^i}\right) \dot x^k $ comes with the minus sign. We see that the difference between $\ddot x(0)$ and $\ddot {\tilde x}(0)$ is (in a local coordinate system) a vector which is not proportional to $\dot x(0)= \dot{\tilde x}(0)= \xi$. Then, in a local coordinate system, the geodesic segments have different curvatures\footnote{In order to define a curvature, we need to fix a euclidean structure in a neighborhood of $p$. The curvature depends on the euclidean structure, but the property of the curvatures (considered as vectors orthogonal to $\xi$) to be different does not depend on the choice of the euclidean structure.} at the point $x(0)$. Then, the curves $x(t)$ and $\tilde x(\tau)$ do not have intersections for small $t\ne 0$, $\tau \ne 0$. Moreover, if we had $d\omega(\xi, \cdot)= 0$, the second derivatives $\ddot x(0)$ and $\ddot {\tilde x}(0)$ would coincide, implying that the curvatures of $x(t)$ and $\tilde x(t)$ coincide at $t=0$. Now consider the geodesics of $\bar F$.
There must be one (forward or backward) geodesic of $\bar F$ that, after an orientation preserving reparameterisation, coincides with $x(t)$, and one geodesic of $\bar F$ that, after an orientation preserving reparameterisation, coincides with $\tilde x(t)$. Since the curvatures of these two geodesics are different at the point $p$, $d\bar \omega(\xi,\cdot) \ne 0$ at the point $p$. Finally, an arbitrary point $p\in M^0$ also lies in $\bar M^0$, so that $ M^0\subseteq \bar M^0.$ Similarly one proves $ M^0\supseteq \bar M^0, $ so $M^0 $ and $\bar M^0$ coincide. Let us now show that, on $M^0$, $d\omega =\const \cdot d\bar \omega$. In fact, we explain how one can modify the proof of Theorem \ref{thm2} to obtain this result. As we explained above, the forward and backward geodesic segments $ x(t)$ and $ \tilde x(t)$ such that $x(0)= \tilde x(0)$ and $\dot x(0)= \dot{\tilde x}(0)$ with $d\omega(\dot x(0), \cdot)\ne 0$ are geometrically different for small $t\ne 0$; then one of them is a forward geodesic of $\bar F$ and another is a backward geodesic of $\bar F$. We call a vector $\xi\in T_pM$, $p \in M^0$, $p$-positive if the forward geodesic $x(t)$ with $x(0)= p$ and $\dot x(0)= \xi$ is a forward geodesic of $\bar F$, and $p$-negative otherwise. Next, we consider two subsets of $T_pM$: $$S_+:= \{ \xi \in T_pM\mid d\omega(\xi, \cdot )\ne 0; \ d\bar \omega(\xi,\cdot)\ne 0; \ \textrm{$\xi $ is $p$-positive}\},$$ $$ S_-:= \{ \xi \in T_pM\mid d\omega(\xi,\cdot )\ne 0; \ d\bar \omega(\xi, \cdot )\ne 0; \ \textrm{$\xi $ is $p$-negative}\}.$$ The closure of one of these two subsets contains a nonempty open subset $U\subset T_pM$; let us first assume that the closure of $S_+$ contains a nonempty open subset of $T_pM$. Then, as in the proof of Theorem \ref{thm2}, we obtain that \eqref{la3} is valid for every $v\in S_+$ (here we use the trivial fact that if $v \in S_+$ then $-v\in S_+$). Since \eqref{la3} is an algebraic condition, it must then be valid for all $v\in T_pM$. 
Then, arguing as in the proof of Theorem \ref{thm2}, we conclude that at the point $p$ we have $\bar g = \alpha^2 \cdot g $ and $d\bar \omega = \alpha\cdot d\omega $ for some positive $\alpha$. Now, if the closure of $S_-$ contains a nonempty open subset of $T_pM$, we obtain $\bar g = \alpha^2 \cdot g $ and $d\bar \omega = \alpha\cdot d\omega $ for some negative $\alpha$. On $M^0$, $\alpha$ must be a smooth nonvanishing function, being the coefficient of proportionality of two nonvanishing tensors; then the sign of $\alpha$ is the same at all points of every connected component of $M^0$. Since the forms $d\bar \omega$ and $ d\omega $ are closed, the coefficient of proportionality, i.e., $\alpha$, is a constant. This implies that the Riemannian metrics $g$ and $\bar g$ are projectively equivalent, since at the points of $M^0$ they are even proportional, and at the inner points of $M\setminus M^0$ the geodesics of $g$ are geodesics of $F$ and geodesics of $\bar g$ are geodesics of $\bar F$. By \cite[Corollary 2]{dedicata}, the Riemannian metrics $g$ and $\bar g$ are proportional at all points of the manifold so that $\bar g = \const^2 \cdot g$ on the whole manifold, as we claim in Corollaries \ref{thm3}, \ref{cor4}. Comparing this with the condition $\bar g = \alpha^2 \cdot g $ and $d\bar \omega = \alpha\cdot d\omega $ proved above for all points of each connected component of $M^0$, we obtain that at every connected component of $M^0$ we have $d\bar \omega = + \const\cdot d\omega $ or $d\bar \omega = - \const\cdot d\omega $ (the sign can be different in different connected components of $M^0$, as Example \ref{ex2} shows), as we claim in Corollaries \ref{thm3}, \ref{cor4}. At the points of $M\setminus M^0$ we have $d\bar \omega = d\omega =0$, so both conditions $d\bar \omega = + \const\cdot d\omega $ and $d\bar \omega = - \const\cdot d\omega $ are fulfilled. Corollaries \ref{thm3}, \ref{cor4} are proved.
Owing the IRS business taxes can be scary when you don’t have the cash to pay them. Even tiny businesses can end up owing as much as $5,000. You cannot simply ignore the money that you owe the IRS. Not only will you begin to owe penalties and interest, but the IRS may even file a lien on your business. You could find that your business credit is severely curtailed when that happens. You have three options to consider when you need to pay taxes and don’t have the money.

Use a Credit Card

IRS-approved providers will allow you to charge your taxes to a credit card. It can be expensive, however. To begin, the IRS-approved provider charges a convenience fee of about 1.99 percent. There is interest that you will pay on the credit card, as well. You could also find a new promotional card that offers a 0 percent APR. While you will still pay the convenience fee, you won’t pay any interest until the promotional period ends. It would be best if you applied for a business card, rather than a personal card. This way, the taxes that you put on the card won’t affect your personal credit report. A low-interest balance transfer is another option. You can request one from your card issuer. They will take money out of your credit card and put it in your bank account so that you can use it to pay your taxes. This way, you can avoid paying the expensive convenience fee that comes when you pay by card. You do need to shop around to find a low interest rate on your balance transfer, however. The one thing to keep in mind is that you need to be able to pay off the loan before the low-interest rate expires.

Ask the IRS for a Payment Plan

If you cannot pay your taxes on time, you should ask the IRS for an installment payment plan. While there will be interest and penalties, they are likely to be minimal. The IRS charges a setup fee of up to $225, depending on the plan that you ask for. For the cheapest fees, you need to apply online, rather than in person or by phone. 
If you are a small business that finds it hard to pay employment taxes, you may qualify for an In-Business Trust Fund Express Installment Agreement. If you do, you won’t need a financial statement or any verification for the application.

Apply for a Loan

If you are able to pay your tax debt off within four months, it’s possible to avoid the fee for setting up the installment agreement. You only need to pay interest and penalties. Until that happens, you can apply for cash from your line of credit or apply for a loan to fulfill your tax obligation. Typically, the 6% interest rate charged by the IRS is about the cheapest loan that you can get. Other kinds of loans are likely to be more expensive. Applying for a loan is a good idea, however, if you somehow can’t get the IRS to give you such an agreement. Once you do manage to pay your taxes, it’s important to hire an accountant and figure out how to set aside enough money every quarter to make sure that you are able to pay your taxes.
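To see how the numbers from the article compare, here is a rough back-of-the-envelope sketch. The 1.99% convenience fee, the $225 setup fee, and the 6% IRS rate come from the text above; the 20% card APR, the six-month payoff window, and the simple-interest model are illustrative assumptions of mine, not figures from the article.

```python
def card_cost(tax, fee=0.0199, apr=0.20, months=6):
    """Rough cost of paying by card: processor convenience fee up front,
    plus simple interest on the full balance (no paydown modeled)."""
    return tax * fee + tax * apr * months / 12

def irs_plan_cost(tax, setup=225.0, rate=0.06, months=6):
    """Rough cost of an IRS installment plan: one-time setup fee,
    plus simple interest at the quoted IRS rate (penalties ignored)."""
    return setup + tax * rate * months / 12

tax = 5_000.0
print(f"credit card: ${card_cost(tax):.2f}")   # fee + interest over 6 months
print(f"IRS plan:    ${irs_plan_cost(tax):.2f}")
```

On the $5,000 example debt from the article, the installment plan comes out well ahead under these assumptions, which matches the article's point that the IRS rate is usually the cheapest loan available.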
lend me your hand and we'll conquer them all but lend me your heart and I'll just let you fall but lend me your heart and I'll just let you fall in these bodies we will live, in these bodies we will die where you invest your love, you invest your life -mumford & sons, awake my soul

I made my valentine for seth this year. some felt, a few buttons, and some handy stitching on the sewing machine. my handmade stuff isn't as amazing as his handiwork, but it's the effort that counts.
Penlee House Gallery and Cafe are open 10.00-5.00. Last admission 4.00.
Acc.no: PEZPH : 1999.233
Full Name: coloured engraving
Brief Description: View of market Jew Street and the Market House
Person: Sargent
Date: b , d
Production Date: undated
Image Height: 72 mm
Image Width: 75 mm
Medium: ink
Support: paper
Surround: 114 mm
Category: Fine Art
If you are interested in learning more about this item please contact us and reference "PEZPH : 1999.233".
Aug. 22 (UPI) -- Scientists at MIT have found that ancient Earth had a viscous mantle that was 200 degrees Celsius hotter than present day. The study, published today in Earth and Planetary Science Letters, found that the Earth's ancient crust was made up of a much denser, iron- and magnesium-enriched material than today's rocky mantle. "We find that around 3 billion years ago, subducted slabs would have remained more dense than the surrounding mantle, even in the transition zone, and there's no reason from a buoyancy standpoint why slabs should get stuck there. Instead, they should always sink through, which is a much less common case today," Benjamin Klein, a graduate student in MIT's Department of Earth, Atmospheric and Planetary Sciences, said in a press release. "This seems to suggest there was a big change going back in Earth's history in terms of how mantle convection and plate tectonic processes would have happened." Subduction is the process where two of Earth's massive tectonic plates collide, causing one to slide under the other. A hotter and denser crust caused subducting plates to sink all the way to the bottom of the mantle, 2,800 kilometers below the surface, forming a "graveyard" of slabs on top of the Earth's core, researchers say. To make the finding, the researchers compiled a large dataset of more than 1,400 previously analyzed samples of both modern rocks and komatiites, rock types that existed 3 billion years ago but no longer exist today. The team then used the composition of each rock sample to calculate the density of a typical subducting slab for modern day and 3 billion years ago. They used a thermodynamic model to determine the density profile of each subducting slab. "Today, when slabs enter the mantle, they are denser than the ambient mantle in the upper and lower mantle, but in this transition zone, the densities flip," Klein said. 
"So within this small layer, the slabs are less dense than the mantle, and are happy to stay there, almost floating and stagnant."
TITLE: How to write elements of a matrix to a vector? QUESTION [2 upvotes]: I know my question is very silly, but I cannot figure it out. I have an $m \times m$ matrix $A$. I want to create a vector $B$ whose elements are the elements of matrix $A$ when $i+j \geq m-1$, i.e., all the elements of matrix $A$ that are either on its antidiagonal or underneath its antidiagonal. Meaning for $$A=\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{bmatrix}$$ I want $B$ to be $B=\begin{bmatrix}3 & 5 & 7 & 6 & 8 & 9\end{bmatrix}$ Can you help me write this in math in a correct way? Thanks a lot in advance! REPLY [0 votes]: At the moment I cannot find a way of unifying them: when $i=1,2,3$ then $b_i=a_{ij}, \; j=4-i,$ when $i=4,5$ then $b_i=a_{(i-2)j}, \;j=7-i,$ when $i=6$ then $b_i=a_{(i-3)j}, \;j=9-i$
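Note that with 0-based indices the stated condition $i+j \geq m-1$ picks out exactly the antidiagonal and everything below it. A small Python sketch (illustrative, not part of the original answer; the function name is mine) that walks the antidiagonals and reproduces the desired $B$:

```python
def antidiagonal_tail(A):
    """Entries A[i][j] with i + j >= m - 1 (0-based indices),
    listed antidiagonal by antidiagonal: s = i + j = m-1, ..., 2m-2."""
    m = len(A)
    return [A[i][s - i]
            for s in range(m - 1, 2 * m - 1)   # each antidiagonal s
            for i in range(m)                  # rows in increasing order
            if 0 <= s - i < m]                 # keep j = s - i in range

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(antidiagonal_tail(A))  # [3, 5, 7, 6, 8, 9], matching the desired B
```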
Latest Car Rental Cork Booking Requests

Use search and get your free personal quote!

Budget Car Rental Cork

If you're looking for good discounts and deals in Cork, Budget car rental also have another amazing deal referred to as the "pay now rate." If you prepay for your vehicle rental online, you'll save an extra 35% off of your car rental Cork.

Budget car rental Cork fleet
TITLE: Difference between Probability and Probability Density QUESTION [13 upvotes]: This question is from DeGroot's "Probability and Statistics" : Unbounded p.d.f.’s. Since a value of a p.d.f. (probability density function) is a probability density, rather than a probability, such a value can be larger than $1$. In fact, the values of the following p.d.f. are unbounded in the neighborhood of $x = 0$:$$f(x) = \begin{cases} \frac{2}{3}x^{-\frac{1}{3}} & \text{for 0<$x$<1,} \\ 0 & \text{otherwise.} \\ \end{cases}$$ Now, I don't know how the p.d.f. can take a value larger than $1$. Please let me know the difference between probability and probability density. REPLY [1 votes]: There is an excellent explanation of this in chapter 2 of the book The Scientist and Engineer's Guide to Digital Signal Processing. Quoting from the book - The probability density function (pdf), also called the probability distribution function, is to continuous signals what the probability mass function is to discrete signals. The vertical axis of the pdf is in units of probability density, rather than just probability. For example, a pdf of 0.03 at 120.5 does not mean that a voltage of 120.5 millivolts will occur 3% of the time. In fact, the probability of the continuous signal being exactly 120.5 millivolts is infinitesimally small. This is because there are an infinite number of possible values that the signal needs to divide its time between: 120.49997, 120.49998, 120.49999, etc. The chance that the signal happens to be exactly 120.50000 is very remote indeed! To calculate a probability, the probability density is multiplied by a range of values. For example, the probability that the signal, at any given instant, will be between the values of 120 and 121 is: (121 - 120) × 0.03 = 0.03. The probability that the signal will be between 120.4 and 120.5 is: (120.5 - 120.4) × 0.03 = 0.003, etc. 
If the pdf is not constant over the range of interest, the multiplication becomes the integral of the pdf over that range. In other words, the area under the pdf bounded by the specified values.
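The point of both the question and the answer can be checked numerically with DeGroot's own unbounded density: the density value near $0$ exceeds $1$, yet integrating the density still gives probabilities at most $1$. The following sketch is my own illustration (function names are mine); it integrates the pdf with a simple midpoint rule, whose sample points conveniently avoid the singularity at $x=0$.

```python
def pdf(x):
    """DeGroot's unbounded density: f(x) = (2/3) * x**(-1/3) on (0, 1)."""
    return (2.0 / 3.0) * x ** (-1.0 / 3.0) if 0.0 < x < 1.0 else 0.0

def prob(a, b, n=100_000):
    """P(a < X < b): midpoint-rule integral of the density over (a, b)."""
    a, b = max(a, 0.0), min(b, 1.0)
    if b <= a:
        return 0.0
    h = (b - a) / n
    return h * sum(pdf(a + (k + 0.5) * h) for k in range(n))

print(pdf(0.001))      # ~6.67: the density exceeds 1 near 0 ...
print(prob(0.0, 1.0))  # ~1.0:  ... yet the total probability is still 1
```

Since the exact c.d.f. here is $x^{2/3}$ on $(0,1)$, one can also check intervals directly, e.g. $P(0.4 < X < 0.5) = 0.5^{2/3} - 0.4^{2/3}$.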
28 Hour of Hope Helps Abused and Neglected Children in Northern Colorado For more than a decade and a half, Brian and Todd have been raising money to help abused and neglected children in Northern Colorado. Their efforts reach the main organizations that help to quell this major problem; A Kids’ Place in Greeley, the Namaqua Center in Loveland and the Larimer County Child Advocacy Center in Fort Collins.
I am left with a bag of oranges and I was browsing for some different recipes with oranges. I came across Orange Kheer. I was unsure of the outcome when I read about kheer with orange. Anyways, I thought of giving it a try and I am surprised with the taste. A very unique and easy dessert.

Ingredients

Preparation Method

- Boil milk and after 5 minutes, add chopped pistachios, almonds, cardamom, saffron and boil for 3 more minutes.
- Then add sugar and mix well. Let it boil on a medium flame for 5 minutes.
- Once the milk has reduced to 1/2 its volume, turn off the flame and allow it to cool down completely.
- Meanwhile, peel the skin of the orange and remove the pulp from it and keep it ready.
- When the milk has cooled down completely, add the orange pulp and give it a mix.
- It can be served immediately or after refrigerating it for an hour.
Interior Design Seattle

The right role model can help you create an excellent home, and this Interior Design Seattle photo collection offers various examples that you can adapt. If you would like a breathtaking house, you can copy a style from the Interior Design Seattle photo gallery. Your home can be turned into a very inviting place by applying certain details from it. There are lots of details in the gallery that you can view and study. If you love a home with a tranquil feel, this collection is recommended for you. The ideas it shows can create a stunning look that is very relaxing, and the materials it suggests give a natural feel that makes the house more attractive. It is a good idea to apply the details you observe in the Interior Design Seattle picture gallery, because those details will help you achieve a home that is truly yours.

When choosing which idea from the Interior Design Seattle photo gallery to apply, you should pay attention to the size of your home. You also need to consider how well the idea suits your tastes and needs. With the wonderful designs on display, the Interior Design Seattle photo gallery can serve as your reference and help you transform your current home into a wonderful house.
You can entertain your guests with ease if you apply the information from the Interior Design Seattle picture collection well. Your guests will always feel at home because the gallery helps you create a warm and pleasant atmosphere. The Interior Design Seattle picture collection gives you a greater chance of building a fantastic home, so we strongly encourage you to explore all of its ideas to broaden your research. You can follow this website to find the latest designs, every bit as marvelous as those in the Interior Design Seattle picture collection. Thanks for viewing the Interior Design Seattle picture collection.
\chapter{Application: Infinitely Many Solutions of the Initial Boundary Value Problem for Barotropic Euler} \label{chap:appl-ibvp} \chaptermark{Solutions of the Initial Boundary Value Problem} In this chapter we consider the initial boundary value problem for the barotropic Euler system \eqref{eq:baro-euler-pv-dens}, \eqref{eq:baro-euler-pv-mom} with any given initial data \eqref{eq:baro-initial} and impermeability boundary condition \eqref{eq:impermeability} on a bounded domain $\Omega\subset\R^n$. What is meant by an (admissible) weak solution to this problem is defined in Definition~\ref{defn:aws-baro-bdd}. We are going to show how convex integration is used to produce solutions to this initial boundary value problem. With the help of Theorem~\ref{thm:convint} we only present in detail a less impressive result in the sense that the solutions, which are obtained here, are only weak solutions and not admissible, see Section~\ref{sec:ibvp-weak}. Furthermore this result only works for a narrow class of initial data. The proof of this result boils down to finding a subsolution as required by Theorem~\ref{thm:convint} which additionally complies with the initial and boundary condition. In view of much more general results in the literature, we indicate in Sections \ref{sec:ibvp-adm} and \ref{sec:ibvp-other} how Theorem~\ref{thm:convint} has to be modified in order to get better results. Note that in this book we actually focus on the application of Theorem~\ref{thm:convint} to the so-called Riemann problem, which is considered in Chapter~\ref{chap:appl-riemann}. This is the reason why Theorem~\ref{thm:convint} is formulated in such a way that it can easily be applied there. However this formulation may be viewed as ``not ideal'' as far as the application in the current chapter is concerned, in the sense that it does not yield most general results. 
In order to apply Theorem~\ref{thm:convint} to the initial boundary value problem under consideration, we set $\Gamma:= (0,T)\times \Omega$. In this chapter we allow $T=\infty$. However for the closure of $(0,T)$ we will write for simplicity $[0,T]$ which actually means $[0,T]$ if $T\in \R$ and $[0,\infty)$ if $T=\infty$. \section{A Simple Result on Weak Solutions} \label{sec:ibvp-weak} The following statement can be easily derived from Theorem~\ref{thm:convint}. \begin{thm} \label{thm:ibvp-many-weak} Assume there exist $r,c>0$ and $(\rho_0,\vm_0,\mU_0)\in C^1\big(\closure{\Gamma};\R^+ \times \R^n\times \szn\big)$ with the following properties: \begin{itemize} \item The assumptions of Theorem~\ref{thm:convint}, i.e. \eqref{eq:p0-pde1} - \eqref{eq:p0-dens-bdd}, hold; \item The initial condition is fulfilled, i.e. \begin{equation} \label{eq:ibvp-ic} (\rho_0,\vm_0)(0,\cdot) = (\rho_\init,\rho_\init \vu_\init)\es \end{equation} \item The boundary condition is satisfied in the following sense: \begin{align} \vm_0 \cdot \vn \big|_{\partial\Omega} &= 0 \qquad \text{ and} \label{eq:ibvp-bc1} \\ (\mU_0\cdot \vphi)\cdot \vn\big|_{\partial \Omega} &= 0 \qquad \text{ for all }\vphi \in \Cc\big([0,T) \times \closure{\Omega};\R^n\big) \text{ with }\vphi\cdot \vn\big|_{\partial \Omega}=0\ed \label{eq:ibvp-bc2} \end{align} \end{itemize} Then there exist infinitely many weak solutions (not necessarily admissible\footnote{In fact these solutions \emph{are} not admissible, see Section~\ref{sec:ibvp-adm} for details.}) of the initial boundary value problem \eqref{eq:baro-euler-pv-dens}, \eqref{eq:baro-euler-pv-mom}, \eqref{eq:baro-initial}, \eqref{eq:impermeability}. \end{thm} \begin{proof} An application of Theorem~\ref{thm:convint} yields infinitely many bounded functions $(\rho,\vm)\in L^\infty\big((0,T)\times \Omega;\R^+\times \R^n\big)$ such that in particular property \ref{item:convint.a} of Theorem~\ref{thm:convint} holds. 
In other words \eqref{eq:sol-pde1} and \eqref{eq:sol-pde2} hold for all test functions $(\phi,\vphi) \in \Cc\big([0,T] \times \closure{\Omega}; \R\times \R^n\big)$. For each such $\vm$, define $\vu:= \frac{\vm}{\rho}$. Note that $\rho>0$ a.e. on $(0,T)\times \Omega$ due to \eqref{eq:sol-dens-bdd}. In order to show that each pair $(\rho,\vu)$ is a weak solution in the sense of Definition~\ref{defn:aws-baro-bdd}, let $(\phi,\vphi) \in \Cc\big([0,T) \times \closure{\Omega}; \R\times \R^n\big)$ be arbitrary test functions with $\vphi\cdot \vn\big|_{\partial \Omega}=0$. From \eqref{eq:ibvp-ic}, \eqref{eq:ibvp-bc1} and \eqref{eq:sol-pde1} we obtain \begin{align*} &\int_0^T \int_{\Omega} \Big[\rho \partial_t \phi + \rho\vu\cdot\Grad \phi\Big]\dx\dt + \int_{\Omega} \rho_\init\phi(0,\cdot) \dx \\ &= \int_0^T \int_{\Omega} \Big[\rho \partial_t \phi + \vm\cdot\Grad \phi\Big]\dx\dt + \int_{\Omega} \rho_0\phi(0,\cdot) \dx - \int_0^T \int_{\partial\Omega} \vm_0 \cdot \vn \, \phi \dS_\vx \dt \\ &= 0 \ec \end{align*} whereas \eqref{eq:ibvp-ic}, \eqref{eq:ibvp-bc2}, the fact that $\vphi\cdot \vn\big|_{\partial \Omega}=0$, and \eqref{eq:sol-pde2} yield \begin{align*} &\int_0^T \int_{\Omega} \Big[\rho\vu \cdot\partial_t \vphi + \rho\vu\otimes\vu:\Grad \vphi + p(\rho)\Div \vphi\Big]\dx\dt + \int_{\Omega} \rho_\init\vu_\init\cdot\vphi(0,\cdot) \dx \\ &= \int_0^T \int_{\Omega} \Big[\vm \cdot\partial_t \vphi + \frac{\vm\otimes\vm}{\rho}:\Grad \vphi + p(\rho)\Div \vphi\Big]\dx\dt + \int_{\Omega} \vm_0\cdot\vphi(0,\cdot) \dx \\ &\qquad\qquad - \int_0^T \int_{\partial\Omega} \left[ (\mU_0\cdot \vphi)\cdot \vn + \frac{2c}{n} \vphi\cdot \vn\right] \dS_\vx \dt \\ &= 0 \ed \end{align*} In other words \eqref{eq:baro-euler-weak-bdd-dens} and \eqref{eq:baro-euler-weak-bdd-mom} hold, i.e. each pair $(\rho,\vu)$ is in fact a weak solution. \end{proof} With Theorem \ref{thm:ibvp-many-weak} at hand, one finds simple examples of initial data for which there exist infinitely many solutions, see e.g. 
the following corollary. \begin{cor} \label{cor:ibvp-many-weak-divcond} Let $(\rho_\init,\vu_\init)\in C^1(\closure{\Omega},\R^+\times \R^n)$ where $\vu_\init\cdot \vn\big|_{\partial\Omega} = 0$. Moreover we assume that $\Div(\rho_\init\vu_\init) = 0$. Then the initial boundary value problem \eqref{eq:baro-euler-pv-dens}, \eqref{eq:baro-euler-pv-mom}, \eqref{eq:baro-initial}, \eqref{eq:impermeability} has infinitely many weak solutions. \end{cor} \begin{proof} Define \begin{align*} \rho_0(t,\cdot) &:= \rho_\init \ec \\ \vm_0(t,\cdot) &:= \rho_\init \vu_\init \ec \\ \mU_0(t,\cdot) &:= \mZ \end{align*} for all $t\in [0,T]$. Together with the assumption $\Div(\rho_\init\vu_\init) = 0$ it is easy to show that \eqref{eq:p0-pde1}, \eqref{eq:p0-pde2} hold. Since $\closure{\Omega}$ is compact and $\rho_0,\vm_0,\mU_0$ do not depend on $t$, there exists $c>0$ such that $e(\rho_0,\vm_0,\mU_0)(t,\vx) < c$ for all $(t,\vx)\in \closure{\Gamma}$, i.e. \eqref{eq:p0-subs} is fulfilled. Analogously there exists $r>0$ such that $\rho_0(t,\vx)>r$ for all $(t,\vx)\in \closure{\Gamma}$, which shows \eqref{eq:p0-dens-bdd}. Moreover \eqref{eq:ibvp-ic} and \eqref{eq:ibvp-bc2} are satisfied by construction, whereas \eqref{eq:ibvp-bc1} holds by assumption. Thus Theorem~\ref{thm:ibvp-many-weak} yields the claim. \end{proof} In order to show that for \emph{all} initial data there exist infinitely many weak solutions (not necessarily admissible), one needs a more refined version of convex integration rather than Theorem \ref{thm:convint}. In particular one has to replace the constant $c$ by a function $\ov{e}$ which depends on $t$ and $\vx$ as indicated in Subsection~\ref{subsec:convint-prel-adjusting}, see also Section \ref{sec:ibvp-other} and the references cited there. 
\section[Possible Improvements to Obtain Admissible Weak Solutions]{Possible Improvements to Obtain Admissible Weak Solutions \sectionmark{Admissible Weak Solutions}} \label{sec:ibvp-adm} \sectionmark{Admissible Weak Solutions} Note that Theorem~\ref{thm:convint} does not allow one to produce admissible solutions when using a cylindrical space-time domain $\Gamma=(0,T)\times \Omega$. The reason for this fact is that the requirement that the subsolution $(\rho_0,\vm_0,\mU_0)$ lies in $C^1\big([0,T]\times \closure{\Omega};\R^+ \times \R^n\times \szn\big)$, see Theorem~\ref{thm:convint}, is an obstacle to achieving admissible solutions. Indeed this implies that $(\rho_0,\vm_0,\mU_0)(0,\cdot) \in C^1\big( \closure{\Omega};\R^+ \times \R^n\times \szn\big)$ and due to \eqref{eq:sol-pde1} and \eqref{eq:sol-pde2} we must require $(\rho_0,\vm_0)(0,\cdot) = (\rho_\init,\rho_\init \vu_\init)$ in order to satisfy the initial condition. Hence $\rho_\init,\vu_\init$ are $C^1$ which means that there exists a unique strong solution at least on a short time interval, which is even unique in the class of admissible weak solutions due to the \emph{weak-strong-uniqueness} principle. This problem can be overcome by requiring the subsolution $(\rho_0,\vm_0,\mU_0)$ in Theorem~\ref{thm:convint} to lie in $C^1\big((0,T)\times \closure{\Omega};\R^+ \times \R^n\times \szn\big)$. Mind the small difference: Now the time interval $(0,T)$ is an open interval. Then the initial values have to be prescribed in \eqref{eq:sol-pde1} and \eqref{eq:sol-pde2} via the boundary integrals over $\partial\Gamma$, since $\rho_0,\vm_0,\mU_0$ are not defined for $t=0$. Convex integration has been carried out in a similar way in the literature, see e.g. \name{De~Lellis}-\name{Sz{\'e}kelyhidi} \cite[Proposition 2]{DelSze10} for the incompressible case, \name{Chiodaroli} \cite[Proposition 4.1]{Chiodaroli14} or \name{Feireisl} \cite{Feireisl14}. 
Another problem is that we must guarantee that the solutions additionally satisfy the energy inequality \eqref{eq:baro-euler-weak-bdd-admissibility}. Here the ``trace-condition'' \eqref{eq:sol-trace} is helpful. As already pointed out in a remark in Subsection~\ref{subsec:convint-prel-adjusting}, in the case of a monoatomic gas, i.e. $p(\rho)=\rho^{\frac{2}{n}+1}$, we have $P(\rho)=\frac{n}{2} p(\rho)$ and hence \eqref{eq:sol-trace} turns into $$ \frac{|\vm|^2}{2\rho} + P(\rho) = c \qquad \text{ for a.e. }(t,\vx)\in (0,T)\times \Omega \ed $$ Note that the left-hand side is the energy, i.e. \eqref{eq:sol-trace} says that the energy is constant for a.e. $(t,\vx)\in(0,T)\times \Omega$. However this is not enough to make the energy inequality valid as we don't know anything about the behaviour of the energy flux. To solve this issue, one may use the fixed-density version, Theorem~\ref{thm:convint-nodens}, rather than Theorem~\ref{thm:convint}. Then it is not even necessary to study a monoatomic gas. We find using \eqref{eq:sol-trace} \begin{equation} \label{eq:5-temp-1} \frac{|\vm|^2}{2\rho} + P(\rho) = \frac{|\vm|^2}{2\rho} +\frac{n}{2}p(\rho) + P(\rho) - \frac{n}{2}p(\rho) = c + P(\rho_0) - \frac{n}{2}p(\rho_0) \end{equation} for a.e. $(t,\vx)\in (0,T)\times \Omega$. For simplicity we choose $\rho_\init\equiv \ov{\rho} = \const$ and also $\rho_0\equiv \ov{\rho}$. This way we obtain from \eqref{eq:sol-pde1} together with the Divergence Theorem (Proposition~\ref{prop:not-divergence}) and the impermeability boundary condition, that \begin{equation} \label{eq:5-temp-2} \int_0^T \int_\Omega \vm\cdot \Grad\phi \dx\dt = 0 \end{equation} for all test functions $\phi\in \Cc\big([0,T]\times \closure{\Omega};\R\big)$. With \eqref{eq:5-temp-2} we are able to handle the energy flux. 
Indeed plugging \eqref{eq:5-temp-1} and \eqref{eq:5-temp-2} into the left-hand side of \eqref{eq:baro-euler-weak-bdd-admissibility}, we obtain \begin{align*} &\int_0^T \int_{\Omega} \left[\bigg(c + P(\rho_0) - \frac{n}{2}p(\rho_0)\bigg) \partial_t \varphi + \frac{1}{\rho_0}\bigg(c + P(\rho_0) + \left(1- \frac{n}{2}\right) p(\rho_0)\bigg)\vm\cdot\Grad \varphi \right]\dx\dt \\ &\qquad + \int_{\Omega} \bigg(\half\rho_\init|\vu_\init|^2 + P(\rho_\init)\bigg) \varphi(0,\cdot) \dx \\ &= -\int_{\Omega} \bigg(c + P(\ov{\rho}) - \frac{n}{2}p(\ov{\rho})\bigg) \varphi(0,\cdot) \dx + \int_{\Omega} \bigg(\half\ov{\rho} |\vu_\init|^2 + P(\ov{\rho})\bigg) \varphi(0,\cdot) \dx \\ &= \int_{\Omega} \bigg(-c - \frac{n}{2}p(\ov{\rho}) + \half\ov{\rho} |\vu_\init|^2\bigg) \varphi(0,\cdot) \dx \end{align*} for all test functions $\varphi \in \Cc\big([0,T) \times \closure{\Omega};\R^+_0\big)$. Note that this would be equal to zero if \begin{equation} \label{eq:5-temp-3} \half\ov{\rho} |\vu_\init|^2 + \frac{n}{2} p(\ov{\rho}) = c \qquad \text{ for a.e. }\vx\in \Omega \ed \end{equation} In other words we must require the energy to be continuous at $t=0$. However for simple choices of the subsolution like the one in Corollary~\ref{cor:ibvp-many-weak-divcond} this is generally not true. The reason for this is the fact that one first fixes the subsolution $(\rho_0,\vm_0,\mU_0)$ and then chooses $c>0$ sufficiently large to achieve \eqref{eq:p0-subs}. This typically leads to a jump of the energy at $t=0$. If $c$ is already fixed by \eqref{eq:5-temp-3}, then there is no such jump, but on the other hand it is more difficult to guarantee that \eqref{eq:p0-subs} holds. In fact in the literature subsolutions which satisfy \eqref{eq:5-temp-3} are constructed using convex integration once more, see e.g. \name{De~Lellis} and \name{Sz{\'e}kelyhidi} \cite[Section~5]{DelSze10} for the incompressible case, \name{Chiodaroli} \cite[Section 7]{Chiodaroli14} or \name{Feireisl} \cite[Theorem~1.4]{Feireisl14}. 
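For completeness, the identity $P(\rho)=\frac{n}{2}p(\rho)$ for the monoatomic pressure law invoked above can be checked directly; here we assume the usual normalization of the pressure potential, $P(\rho)=\frac{\rho^\gamma}{\gamma-1}$ for $p(\rho)=\rho^\gamma$:

```latex
% For p(\rho) = \rho^\gamma with the monoatomic exponent
% \gamma = 2/n + 1, we have \gamma - 1 = 2/n, and hence
\begin{align*}
P(\rho) = \frac{\rho^{\gamma}}{\gamma-1}
        = \frac{\rho^{\frac{2}{n}+1}}{2/n}
        = \frac{n}{2}\,\rho^{\frac{2}{n}+1}
        = \frac{n}{2}\, p(\rho) \ec
\end{align*}
% so in this case the trace condition \eqref{eq:sol-trace} indeed states
% that the energy |\vm|^2/(2\rho) + P(\rho) is constant.
```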
Note that it is however not possible to find a subsolution fulfilling \eqref{eq:5-temp-3} for all initial data, even if one replaces the constant $c$ by a function $\ov{e}$. Instead one constructs suitable initial data and a corresponding subsolution $(\rho_0,\vm_0,\mU_0)$ which fulfills \eqref{eq:5-temp-3} simultaneously. \section{Further Possible Improvements} \label{sec:ibvp-other} Let us finish this chapter by mentioning how Theorem~\ref{thm:convint} can be further improved. As indicated in Section~\ref{sec:ibvp-weak} and in Subsection~\ref{subsec:convint-prel-adjusting} one could replace $c$ by a function $\ov{e}$ which depends on $t$ and $\vx$. For example if one wants to find a possibly large class of initial data that admit infinitely many admissible weak solutions, i.e. \eqref{eq:5-temp-3} must hold, then the requirement that $\ov{e}\equiv c=\const$ is quite restrictive. Indeed there are not many initial data for which the left-hand side of \eqref{eq:5-temp-3} is constant. Note that in fact in many papers on convex integration for compressible Euler that are available in the literature, e.g. \name{De~Lellis}-\name{Sz{\'e}kelyhidi} \cite{DelSze10}, \name{Chiodaroli} \cite{Chiodaroli14} and \name{Feireisl} \cite{Feireisl14}, the trace, which corresponds to \eqref{eq:convint-temp-trace} in our framework, is considered as not necessarily constant. Another issue is the following. It is natural to require weak solutions to be continuous maps from $[0,T)$ to $L^\infty(\Omega)$ where the latter is endowed with the weak-$\ast$ topology. The corresponding function space is denoted by $C_{\rm weak\text{-}\ast}\big([0,T);L^\infty(\Omega)\big)$. In fact one can prove that weak solutions as defined in Section~\ref{sec:conslaws-ibvp} can be modified (if necessary) on a set of zero measure such that they lie in the space $C_{\rm weak\text{-}\ast}\big([0,T);L^\infty(\Omega)\big)$, see \name{Dafermos} \cite[Lemma~1.3.3]{Dafermos}. 
In other words the \emph{instantaneous values} $\vU(t,\cdot)$ are well-defined for all times $t\in[0,T)$ and the equation \begin{align*} \int_{t_0}^{t_1} \int_{\Omega} \Big(\vU \cdot \partial_t\vpsi + \mF(\vU) : \Grad \vpsi \Big)\dx \dt - \bigg[\int_\Omega \vU(t,\cdot) \cdot \vpsi(t,\cdot) \dx\bigg]_{t=t_0}^{t=t_1} \qquad & \\ - \int_{t_0}^{t_1} \int_{\partial\Omega} \vpsi \cdot \vF_{\partial \Omega} \dS_\vx \dt &= 0 \end{align*} holds for all test functions $\vpsi \in \Cc\big([0,T)\times\closure{\Omega};\R^m\big)$ and all $0\leq t_0 \leq t_1 < T$, rather than just \eqref{eq:conslaws-ivp-weak}, see also \cite[Lemma 1.3.3]{Dafermos}. In the context of the barotropic Euler equations, this means that every weak solution $(\rho,\vu)$ in the sense of Definition~\ref{defn:aws-baro-bdd} is also a weak solution in the sense described above. However one could ask for solutions which fulfill also the energy inequality in the sense above instead of \eqref{eq:baro-euler-weak-bdd-admissibility}. In other words one requires \begin{align*} \int_{t_0}^{t_1} \int_{\Omega} \left[\bigg(\half\rho|\vu|^2 + P(\rho)\bigg) \partial_t \varphi + \bigg(\half\rho|\vu|^2 + P(\rho) + p(\rho)\bigg)\vu\cdot\Grad \varphi \right]\dx\dt \qquad & \\ - \bigg[\int_{\Omega} \bigg(\half\rho|\vu|^2 + P(\rho)\bigg)(t,\cdot)\ \varphi(t,\cdot) \dx \bigg]_{t=t_0}^{t=t_1} &\geq 0 \end{align*} for all $\varphi \in \Cc\big([0,T) \times \closure{\Omega};\R^+_0\big)$ and all $0\leq t_0 \leq t_1 < T$. To include this in the convex integration method, the solutions we are looking for must satisfy \eqref{eq:sol-trace} not only for a.e. $(t,\vx)\in (0,T)\times \Omega$ but for all $t\in (0,T)$ and a.e. $\vx\in \Omega$. For the incompressible Euler system this has been done by \name{De~Lellis} and \name{Sz{\'e}kelyhidi} \cite{DelSze10}, see the beginning of Section 4 therein for a more extensive discussion of this issue. 
In order to achieve a similar result in the framework presented in Chapter~\ref{chap:convint} one needs to implement the ideas of \cite{DelSze10}.
In a memo sent to all agency heads Thursday, Office of Management and Budget Director Rob Portman wrote they should only honor earmarks contained in statute or otherwise subjected to rigorous review. The memo also makes clear agencies should not fund earmarks based solely on lobbying from lawmakers or other interested parties. In the $463.5 billion fiscal 2007 full-year funding bill President Bush signed into law Thursday, Democrats removed about 9,300 earmarks that had been slated for approval under the regular fiscal 2007 spending bills, many simply listed in reports accompanying the bills. The measure contains no new earmarks, as well as a provision stipulating that earmarks contained in fiscal 2006 reports "shall have no legal effect." Spending bills typically include earmarks in the reports as recommendations, meaning they have no force of law. But agencies are often guided by such congressional directives, in part because of the influence on their budgets wielded by powerful lawmakers. "For agencies funded by the CR, this means that unless a project or activity is specifically identified in statutory text, agencies should not obligate funds on the basis of earmarks contained in congressional reports or other written documents," Portman's memo states. "While the administration welcomes input to help make informed decisions, no oral or written communication shall supersede statutory criteria, competitive awards, or merit-based decision-making" based on authorizing language, funding formulas and existing policy governing contracts, grants and awards. Conservative earmark reformers praised the White House memo, saying it would prevent "backdoor" earmarks funded as a result of e-mails and phone calls from powerful lawmakers or interest groups. "For too long, Washington has handed out American tax dollars based on seniority and political jockeying. This year, these funds will be given out based on merit," said Sen. 
Jim DeMint, R-S.C., chairman of the conservative Senate GOP Steering Committee. Some lawmakers complained about the Democrats' move to eliminate earmarks, arguing it hands too much power to the executive branch to make spending decisions. Senate Energy and Water Appropriations Subcommittee ranking member Pete Domenici, R-N.M., was unabashed about his disappointment that the individual fiscal 2007 spending bills were scrapped. "My staff and I will now closely monitor how federal agencies decide how to distribute the funding they gain in the continuing resolution," Domenici said in a statement. "I want to make sure that, where possible, that additional funding is directed to the New Mexico programs and projects that stand to lose out with Congress' failure to complete the 2007 appropriations."
TITLE: Factoring Multivariate Polynomials over a Field QUESTION [9 upvotes]: Let $\mathbb F$ be a field and let $\mathbb F[x_1, \cdots , x_n]$ be the polynomial ring in the variables $x_1, \cdots, x_n$. For $n=1$ there are several irreducibility criteria. But if $n>1$, are there methods of determining whether a given polynomial over $\mathbb F$ is irreducible? REPLY [0 votes]: There are several ways to tell whether a polynomial in F[x] is irreducible: As in QiL'8's answer: 1/ Eisenstein's Criterion (especially useful to determine whether a polynomial is irreducible over $\mathbb Q$) 2/ Gauss's Lemma Some other ways: 3/ Rational Root Theorem (to check whether the polynomial has a root x in the rationals). If such a root exists, then the polynomial is reducible over $\mathbb Q$, and hence over $\mathbb Z$; note that the absence of a rational root proves irreducibility only for polynomials of degree 2 or 3. 4/ Using ideals: a polynomial f(x) is irreducible over F <==> the ideal generated by f(x) is a maximal ideal of F[x]. But of course we can always try the most basic approach of finding the roots of the polynomial f(x) directly (if the polynomial is easy enough to work with), and seeing whether the roots lie in our field F. For example, we see that x^2 + 2 is irreducible over $\mathbb R$ but reducible over $\mathbb C$, since its roots are $i\sqrt 2$ and $-i\sqrt 2$, which are not real.
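The rational root test in point 3/ is easy to automate. A minimal sketch in Python (the function names are my own, not from any library):

```python
from fractions import Fraction
from itertools import product

def divisors(n):
    """Positive divisors of |n|."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients.

    coeffs[i] is the coefficient of x**i.  By the rational root theorem
    every rational root p/q (in lowest terms) has p | coeffs[0] and
    q | coeffs[-1], so we just test the finitely many candidates.
    """
    a0, an = coeffs[0], coeffs[-1]
    if a0 == 0:  # x = 0 is a root; factor out x and recurse on the rest
        return sorted({Fraction(0)} | set(rational_roots(coeffs[1:])))
    roots = set()
    for p, q in product(divisors(a0), divisors(an)):
        for cand in (Fraction(p, q), Fraction(-p, q)):
            if sum(c * cand**i for i, c in enumerate(coeffs)) == 0:
                roots.add(cand)
    return sorted(roots)

# x^2 + 2 has no rational roots (its roots are +-i*sqrt(2)),
# while 2x^2 - x - 1 = (2x + 1)(x - 1) has roots -1/2 and 1.
print(rational_roots([2, 0, 1]))    # []
print(rational_roots([-1, -1, 2]))  # [Fraction(-1, 2), Fraction(1, 1)]
```

Remember the caveat above: an empty result proves irreducibility over $\mathbb Q$ only in degree 2 or 3.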
I’m trying to figure out how to use rclone check from within a script. The first step of said script should be to determine if there are any changed files on the remote. If there are not, then exit. If so, proceed to copy and process. Superficially, “rclone check” seems to be the command that I am looking for. But it isn’t entirely clear to me how to use it in a conditional. Do I have to parse the logged output for somewhere containing the substring “0 differences found”? Or is there a cleaner way (a way to tell it to return only a 0 or 1 status for example)? Thanks, Todd
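For what it's worth, the cleaner route is the exit status rather than parsing the log: `rclone check` exits non-zero when differences (or errors) are found. A minimal Python sketch of that pattern (the rclone paths below are placeholders, and the rclone call itself is not exercised here):

```python
import subprocess

def remote_changed(cmd):
    """True if `cmd` exits with a non-zero status.

    The idea: let the command signal "differences found" through its
    exit code instead of grepping its output for "0 differences".
    """
    return subprocess.run(cmd, capture_output=True).returncode != 0

# With rclone the call would look something like (hypothetical paths):
#   if remote_changed(["rclone", "check", "remote:data", "/local/data", "--one-way"]):
#       ...copy and process...
# Demonstrated here with the POSIX `true`/`false` stand-ins:
print(remote_changed(["true"]))   # False: exit status 0, nothing to do
print(remote_changed(["false"]))  # True: non-zero exit status
```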
Simple Ways to Save the Surroundings A personalized essay are often accessible from on-line content agencies providing the identical sort of providers. When you need to purchase a superb custom composition, make certain you may not move for inexpensive providers. Thus, consumer must be considered as the initial concern in a sure custom writing business and customer needs require to come really first. Don’t go for cost-effective custom essay services. Custom article writing has turned into a popular endeavor during the last couple of years. It is a term that’s been in use for quite a very long time. The toughest portion of the essay may be to comprehend the substance and also the rational arrangement. Likewise, it’s worth noting a personalized essay cannot ever be recycled or reused. Each sentence have to give attention to one theme that supports your thesis statement. Several corporations have walk ins. When you desire educational backing, you may desire to be positive the last function you are going to obtain will be totally authentic. One point to genuinely contemplate should you be looking at custom article composing is the reality that fundamentally, the last function isn’t heading to be your own. To begin, make an outline or prewriting of your own article when planning the very first draft. A customized composition isn’t like every average books you locate on the web. Here are a few essay writing square hints that might enable you to master the craft of copywriting and finally be a profitable copywriter. Paired with an excellent writing application, the easy structure has the capacity to allow you to turn up essays quite quickly. Some sites offering article assist fresh essays uses writers which will have British as their 1st vocabulary, but they’re from other nations like the usa. The optimum / optimally custom essay authors wish to be rewarded so. You might purchase merely a bibliography to determine how well our writers fulfill several types of assignments. 
Realize that you may have to perform your way up. The application of words and language is completely distinct in various sorts of essays. In the current World, there is good number of need for essay writers. Composition writing isn’t just about obtaining the highest score. Besides that, composing essays is really a nutritious method of enhance writing abilities. It may be a waterloo for a few students. Composing an academic paper involves an extensive research of the particular topic. There’s no mistaking what this kind of essay means to do. Don’t wait to estimate pros on this problem and make certain that proper references are incorporated. To produce an essay isn’t an effortless endeavor. Thank you for supporting me:) sign in or join and article utilizing a hubpages account. Consequently composition composing isn’t complete sans the opening and also the summary. Read on to determine why you mustn’t ever pay money for an essay on the web. Definitely, as a means to compose an effective academic papers, the author should have sufficient understanding in composing in addition to be well-informed regarding the topic of his own assignment. The debut of an article is actually where the writer ushers within the central notion supporting the essay. The information given via the article needs to be exact. Move erroneous, and the complete article is simply a mess. Custom article to purchase online should have distinguishing sources of advice for example posts, novels and magazines that’ll assist Spanish documents authors to collect info and facts to utilize in custom article writing. So you form overlapping levels of diapers across the paper towels, continue this method. There aren’t many but some additional decent on-line essay writing firms that provide well – written papers. If you are searching for the ideal composition authors online, you are in the proper spot. So purchase essays online here with no question your author understands the way to nail the document! 
England v TongaParc des Princes, ParisFriday, 28 SeptemberKick-off: 2000 BSTLive on BBC Radio 5live and the BBC Sport website England captain Phil Vickery will only be a replacement for the champions' must-win World Cup clash with Tonga. Matt Stevens stays at tight-head while Vickery, who has completed a two-game ban for tripping, replaces Perry Freshwater on the bench on Friday. There are two changes to the starting XV, with Steve Borthwick in for Simon Shaw at lock and Lewis Moody replacing Joe Worsley at open-side flanker. Lawrence Dallaglio returns to the bench but Jason Robinson is still not fit. Worsley takes Moody's place among the replacements in Paris while Lee Mears is also recalled to the bench, in place of Mark Regan. Blind-side flanker Martin Corry retains the captaincy while Mathew Tait stays at outside centre, with his Leicester rival Dan Hipkiss once again on the bench. Vickery, named as captain by coach Brian Ashton well before the tournament, has served his two-match suspension for tripping USA centre Paul Emerick in England's opening match in France. But Ashton said a combination of Stevens' good form and Vickery's lack of action had led to the decision to start with the Wasps prop on the bench. "Matt Stevens has taken his opportunity pretty well," said Ashton. "Phil has not played for two or three weeks and we thought we'd ease him back from the bench. "In the circumstances we thought it was the right way to do it. The captaincy issue was secondary really, the main issue was the front row." Stevens added: "I've been given three opportunities, one against South Africa, one against Samoa, and now this one. It's a massive vote of confidence from the coaches and I don't want to let them down." Ashton said the two changes he has made in the pack were done in order to give both Shaw and Worsley a rest. "Simon Shaw has started every game for us recently and he's been on the field for the majority of those games," said Ashton. 
"We've got a second row in Borthwick who's not played a lot of rugby and he's desperate to get on the field. "Joe Worsley has picked up a bang in his neck area on a couple of occasions and we just want to manage that and keep him out of the firing line a little bit. "Moody's not played any rugby from the start for us for a long time. He's very fresh and desperate to make impact." Friday's clash will be Moody's first start in any game since the Heineken Cup final four months ago, and his first for England since the autumn Test against Argentina in November 2006. Moody said: "It is just a fantastic opportunity for me, I have been waiting for this moment. I pulled my calf just before the first France game in the warm-up Tests (when he was due to start). "It has been frustrating sat on the bench but you get what you are given in rugby and I have my chance now." England go into the game knowing they must beat Tonga to clinch a World Cup quarter-final against Australia in Marseille on 6 October. If they lose at the Parc des Princes they will become the first defending champions to have failed to make it out of the group stages. Both sides have nine points but Tonga are second in Pool A on points difference. They went into the tournament ranked as the fourth-best team in the group but surprised Pacific Island rivals Samoa before running South Africa close at the weekend. "Tonga are no longer the surprise team in our group - they've played pretty well all the way through," said Ashton. "I have no doubt they will be looking at this as a cup final and we've got to look at it the same way. "We're in a knock-out game, but we are in that mode already. "It will probably be more of a challenge than last week (against Samoa) but it is something the players are very much in the mood for." 
England team to face Tonga: J Lewsey (Wasps); P Sackey (Wasps), M Tait (Newcastle), O Barkley (Bath), M Cueto (Sale Sharks); J Wilkinson (Newcastle), A Gomarsall (Harlequins); A Sheridan (Sale Sharks), G Chuter (Leicester), M Stevens (Bath), S Borthwick (Bath), B Kay (Leicester), M Corry (Leicester, capt), L Moody (Leicester), N Easter (Harlequins). Replacements: L Mears (Bath), P Vickery (Wasps), L Dallaglio (Wasps), J Worsley (Wasps), P Richards (London Irish), A Farrell (Saracens), D Hipkiss (Leicester).
\begin{document} \begin{abstract}In this article we present some results concerning natural dissipative perturbations of 3d Hamiltonian systems. Given a Hamiltonian system $\dot{x} =PdH$, and a Casimir function $S$, we construct a symmetric covariant tensor $g$, so that the modified (so-called ``metriplectic") system $ \dot{x} =PdH+gdS $ satisfies the following conditions: $dH$ is a null vector for $g$, and $dS(gdS)\leq 0$. Along solutions to a dynamical system of this type, the Hamiltonian function $H$ is preserved, while the function $S$ decreases, i.e. $S$ is dissipated by the system. We are motivated by the example of a ``relaxing rigid body" by Morrison \cite{Mor} in which systems of this type were introduced. \end{abstract} \title{Dissipative Perturbations of 3d Hamiltonian Systems} \author{Daniel Fish} \address{Department of Mathematics and Statistics, Portland State University, Portland, OR, U.S.}\email{[email protected]}\date{} \maketitle \section*{Introduction} In his article ``\emph{A Paradigm for Joined Hamiltonian and Dissipative Systems}" (see \cite{Mor}), P.J. Morrison introduced a natural geometric formulation of dynamical systems that exhibit both conservative and nonconservative characteristics. Historically, conservative systems have been modelled geometrically as Hamiltonian systems of the form $\dot{x}=PdH$ where $P$ is a Poisson tensor and $H$ is a smooth function (Hamiltonian function). In a Hamiltonian system, the equality $dH/dt=dHPdH=0$ can be interpreted as conservation of the ``energy" $H$; thus such systems are natural models for conservative dynamics (see \cite{AbMar}). Nonconservative or dissipative systems can also be described geometrically as gradient systems: $\dot{x}=gdS$ where $g$ is a symmetric tensor, $S$ is a smooth function, and $dS/dt=dSgdS$ is typically negative definite. 
Since the function $S$ is not conserved, it may be interpreted as a form of energy that is dissipated (or as the negative of ``entropy" which is produced by the system) (see \cite{Perko}). Morrison's formulation combines these two types of systems into a so-called \emph{Metriplectic System} \[\dot{x}=PdH+gdS,\] with the additional requirements that $H$ remains a conserved quantity and $S$ continues to be dissipated. These requirements can be met if the following conditions on $H$ and $S$ are satisfied \[PdS=gdH=0.\] That is, $S$ is a Casimir function for the Poisson tensor $P$ and $dH$ is a null vector for the symmetric tensor $g$. Morrison applied this formulation to the equations for the rigid-body with dissipation and the Vlasov-Poisson equations for plasma with collisions. The formulation of dissipative systems as combined Hamiltonian and gradient systems has also been studied by \cite{BKMR}, \cite{BB}, \cite{KaufTurski},\cite{Kauf}, \cite{Xu} and others in mathematical physics and control theory. In this article we regard metriplectic systems in $\mathbb{R}^{3}$ as dissipative perturbations of Hamiltonian systems. In the first section we suggest a natural form for the symmetric covariant tensor $g$ that depends only on the differential of the Hamiltonian function $H$, and prove some results about the equilibria of the combined system. In the second section we reproduce Morrison's example as a special case, and present some other interesting applications. \section{A class of Metriplectic Systems in $\mathbb{R}^{3}$} Let $(M,P)$ be a three-dimensional vector space equipped with a Poisson tensor $P$ and standard Euclidean metric $h$. At each point $x$ we identify $T_{x}M$ and $T^{*}_{x}M$ with $\mathbb{R}^{3}$ via the metric $h$: $v^{\sharp}=hv^{\flat}$. When the context is clear, we will denote the dot product on each space, and the pairing between the two spaces (with respect to the metric $h$) by the same symbol, i.e. 
$u^{\sharp}\cdot v^{\sharp}=u^{\flat}\cdot v^{\flat}=u^{\sharp}\cdot v^{\flat}=u^{i}v_{i}$. For any function $H$ in $C^{\infty}(M)$, the vector field $\xi_{P}=PdH$ defines a Hamiltonian system $\dot{x}=\xi_{P}$. Let $S\in C^{\infty}(M)$ be a Casimir function for this system ($PdS=0$). We wish to construct a canonical dissipative perturbation \[ \dot{x} = PdH + gdS, \] of this system, so that the following two conditions hold: $\dot{H} = 0$ and $\dot{S} \leq 0$, where $g$ is a symmetric covariant tensor on $M$. In other words, we want $g$ and $S$ to satisfy \begin{equation}\label{typeI} gdH = 0 \quad \textrm{and} \quad dS\cdot gdS \leq 0.\end{equation} Let $(x^{1},x^{2},x^{3})$ be local coordinates on $M$, and write $dH=H_{i}dx^{i}$ and $dS=S_{i}dx^{i}$. Assume, for now, that each $H_{i}$ is nonzero. In order for $gdH=0$ to hold, the following relationships between the coefficients of $g$ must be satisfied. \begin{align} \notag g^{11} &=-g^{12}(H_{2}/H_{1})-g^{13}(H_{3}/H_{1})\\ \label{diag} g^{22} &=-g^{21}(H_{1}/H_{2})-g^{23}(H_{3}/H_{2})\\ \notag g^{33} &=-g^{31}(H_{1}/H_{3})-g^{32}(H_{2}/H_{3}) \end{align} With these diagonal terms, we can calculate $gdS$: \[gdS=\begin{pmatrix} -(S_{1}/H_{1})(g^{12}H_{2}+g^{13}H_{3})+S_{2}g^{12}+S_{3}g^{13}\\ S_{1}g^{21}-(S_{2}/H_{2})(g^{21}H_{1}+g^{23}H_{3}) + S_{3}g^{23}\\ S_{1}g^{31}+S_{2}g^{32}-(S_{3}/H_{3})(g^{31}H_{1}+g^{32}H_{2}) \end{pmatrix} .\] In each cotangent space $T_{x}^{*}M$ let $\sigma_{i}(x)$ denote the $i^{th}$ component of the cross-product $d_{x}S \times d_{x}H$ and let $\sigma$ be the one-form $\sigma=\sigma_{i}dx^{i}$. 
Then the vector field $gdS$ can be locally expressed as \[gdS=\begin{pmatrix} \frac{1}{H_{1}}(g^{13}\sigma_{2}-g^{12}\sigma_{3})\\ \frac{1}{H_{2}}(g^{12}\sigma_{3}-g^{23}\sigma_{1})\\ \frac{1}{H_{3}}(g^{32}\sigma_{1}-g^{31}\sigma_{2})\end{pmatrix} .\] Therefore, \begin{gather*} dS\cdot gdS =\frac{S_{1}}{H_{1}}(g^{13}\sigma_{2}-g^{12}\sigma_{3}) +\frac{S_{2}}{H_{2}}(g^{12}\sigma_{3}-g^{23}\sigma_{1}) +\frac{S_{3}}{H_{3}}(g^{32}\sigma_{1}-g^{31}\sigma_{2})\\ =\frac{g^{32}\sigma_{1}}{H_{2}H_{3}}(S_{3}H_{2}-S_{2}H_{3}) +\frac{g^{13}\sigma_{2}}{H_{1}H_{3}}(S_{1}H_{3}-S_{3}H_{1}) +\frac{g^{12}\sigma_{3}}{H_{1}H_{2}}(S_{2}H_{1}-S_{1}H_{2}) \\ =-\frac{1}{H_{1}H_{2}H_{3}}(\sigma_{1}^{2}H_{1}g^{32}+\sigma_{2}^{2}H_{2}g^{13}+\sigma_{3}^{2}H_{3}g^{12}). \end{gather*} According to the second condition in \eqref{typeI}, we must choose coefficients $g^{ij}$, such that this quantity is non-positive. If we take $g^{ij}=H^{i}H^{j}$ for $i\neq j$ (we have lifted indices of $dH$ via the metric $h$), then we have \begin{equation} dS\cdot gdS =-(\sigma_{1}^{2}+\sigma_{2}^{2}+\sigma_{3}^{2})=-\left\|\sigma \right\|^{2} \leq 0. \label{sigma}\end{equation} Substituting $H^{i}H^{j}$ for $g^{ij}$ ($i\neq j$) into \eqref{diag} we find that the diagonal terms of $g$ should have the form $g^{ii}=-\sum_{j\neq i}H^{j}H^{j}$. Thus, we can construct a tensor $g$ that satisfies the required conditions: \begin{equation}\label{gI} g^{ij}=H^{i}H^{j}-\delta^{ij}H^{k}H^{k}, \end{equation} or, invariantly: $g=\nabla H \otimes \nabla H - \textrm{I}\left\| \nabla H\right\|^{2}$, where $\nabla H=dH^{\sharp}$ and $\textrm{I}$ is the unit tensor. The rank of $g$ is zero only at the points for which $dH=0$, and since $gdH=0$, it is never more than two. In fact, these are the only possibilities. \begin{lemma} If $d_{x}H\neq 0$ then the tensor $g=\nabla H \otimes \nabla H - \textrm{I}\left\| \nabla H\right\|^{2}$ has rank 2 at the point $x$. 
\end{lemma} \begin{proof} Define the following vectors at each point in $M$: \[v_{1}=(0,H^{3},-H^{2}), \quad v_{2}=(H^{3},0,-H^{1}),\quad v_{3}=(H^{2},-H^{1},0).\] Observe that at each point in $M$ for which $dH\neq 0$, the set $\mathcal{J}=\{v_{1},v_{2},v_{3}\}$ spans a subspace of dimension 2. A simple calculation shows that $g(v_{k})^{\flat}=-\left\|dH\right\|^{2}v_{k}$ for each $k$, so the set $\mathcal{J}$ is contained in the image of the homomorphism $\sharp g:T^{*}M\rightarrow TM$. Hence, at points for which $\left\|dH\right\|^{2}\neq 0$, the rank of $g$ is 2. \end{proof} Consider the map $(\sharp g)^{\flat}$ from $T^{*}M\rightarrow T^{*}M$ defined on $v \in T^{*}M$ by lowering an index of $\sharp g(v)$, i.e. $(\sharp g)^{\flat}(v)=(gv)^{\flat}$. The $k^{th}$ component of $(\sharp g)^{\flat}(dS)$ is \begin{align*}(gdS)_{k} &=(H_{k}H^{j}-\delta_{k}^{j}H_{i}H^{i})S_{j}\\ &= H_{k}H^{j}S_{j}-H_{i}H^{i}S_{k}\\ &=(dH \cdot dS)H_{k}-(dH \cdot dH)S_{k}\\ &=[dH\times (dH\times dS)]_{k}. \end{align*} Thus, $(gdS)^{\flat}$ can be interpreted geometrically as (a multiple of) the component of $dS$ which is $h$-orthogonal to $dH$, and so $\sharp g$ at the point $x$ can be interpreted as a projection operator from $T^{*}_{x}M$ onto the level set of $H$ that passes through $x$ (see figure \ref{path}). The vector field $PdH$ is a Hamiltonian vector field, and so the vector $Pd_{x}H$ also lies on this level set. Since $gdS$ is constructed to perturb the Hamiltonian system, we would expect $gdS$ and $PdH$ to be independent at most points; in fact we can say more than this.
\begin{figure}[h] \centering \epsfxsize=2in \epsfysize=1.5in \epsfclipon \framebox{\epsfbox[0 500 592 843]{cpath.eps}}\caption{Vectors along a solution} \label{path} \end{figure} \begin{lemma}\label{perplemma} At any point in $M$, the vectors $PdH$ and $gdS$ are $h$-orthogonal.\end{lemma} \begin{proof} Since $P$ is skew-symmetric, and since $S$ is a Casimir for $P$, we have: \begin{align*}PdH \cdot gdS &= (PdH)^{i} [dH\times (dH\times dS)]_{i}\\ &=(PdH)^{i}[(dH\cdot dS)dH - (dH \cdot dH)dS]_{i}\\ &=(dH\cdot dS)[PdH \cdot dH]-(dH\cdot dH)[PdH\cdot dS]=0. \end{align*} \end{proof} Using the degenerate covariant metric $g$, we define the metriplectic system \begin{equation}\label{sys} \dot{x}=PdH+gdS,\end{equation} which satisfies $\dot{H}=0$ and $\dot{S}\leq 0$. As for the unperturbed Hamiltonian system, trajectories of \eqref{sys} are contained in level sets of $H$, but they are no longer confined to the symplectic leaves defined by $P$. In fact, by construction, the function $S$ is non-increasing along trajectories: $dS/dt=dS\cdot PdH + dS\cdot gdS=-\left\| \sigma \right\|^{2}$. Notice that this quantity is zero only when $\sigma =0$. That is, the derivative of $S$ along solutions to \eqref{sys} is zero either when $d_{x}S$ is proportional to $d_{x}H$ (i.e., the level sets for $H$ and $S$ are tangent to each other), or when one of $dS$ or $dH$ vanishes. In any case, if $dS/dt$ is zero for all $t$ after some time $T$, then the system has come to ``rest" in the sense that the trajectory is no longer transverse to the symplectic leaves - the dissipation has been turned off. To see this, consider the dissipative part of \eqref{sys}. If $dH=0$, then $g=0$; if $dS=0$, then $gdS=0$; and if $dS$ is parallel to $dH$, then $gdS$ is again zero. In these ``rest states" the system is purely Hamiltonian, and it is to these states that (in most cases) the system will tend. 
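Before analyzing equilibria, let us record a one-line sanity check of the construction (a direct consequence of the invariant form, stated here for emphasis): since $\nabla H \cdot dH = \left\|\nabla H\right\|^{2}$ and $dH^{\sharp}=\nabla H$, the null-vector condition from \eqref{typeI} is immediate,
\[
gdH = \nabla H\,(\nabla H \cdot dH) - \left\|\nabla H\right\|^{2} dH^{\sharp} = \left\|\nabla H\right\|^{2}\nabla H - \left\|\nabla H\right\|^{2}\nabla H = 0.
\]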
In order to analyze the behavior of the system as it relaxes to a rest state, as well as its behavior once it has relaxed completely (if it does so), we first make a simple observation regarding the relation between the equilibria of \eqref{sys} and the equilibria of the unperturbed Hamiltonian system. According to Lemma~\ref{perplemma}, we know that the vector field $PdH + gdS$ is zero if and only if each component is zero individually. In fact, at most points of $M$, a stronger relation holds. Recall that a \emph{regular} point for $P$ is a point at which the rank of $P$ is maximal. \begin{lemma} Define the following vector fields: $\xi_{P}=PdH$ and $\xi=PdH + gdS$. Then, at \emph{regular points} $x\in M$,\, $\xi_{P}(x)=0$ $\Leftrightarrow$ $\xi(x)=0$. \end{lemma} \begin{proof} If $d_{x}S=0$, then $\xi_{P}(x)=\xi(x)$, so we may assume that $d_{x}S \neq 0$. Similarly, if $d_{x}H=0$, then $g(x)=0$ and again $\xi_{P}(x)=\xi(x)$, so we may further assume that $d_{x}H \neq 0$. If $\xi_{P}(x)=0$, then $d_{x}H$ is in the kernel of $P$ which, since $x$ is regular, is spanned by the covector $d_{x}S$. Thus $d_{x}S = \lambda d_{x}H$ and \[\xi(x)=PdH + gdS = \lambda gdH=0.\] Conversely, if $\xi(x)=0$, then $0=d_{x}S \cdot \xi(x)=d_{x}S \cdot gd_{x}S=-\left\|d_{x}S \times d_{x}H\right\|^{2}$ (see \eqref{sigma} above). Since this cross product vanishes and both of its factors are nonzero, the two covectors must be proportional: $d_{x}H=\lambda d_{x}S$. But then \[\xi_{P}(x)=PdH=\lambda PdS = 0.\] \end{proof} If $x$ is \emph{not} a regular point for $P$, then $P(x)=0$, and so $\xi_{P}(x)=0$. Such points are clearly equilibria for the Hamiltonian system $\dot{x}=PdH$, but the vector $\xi(x)=gd_{x}S$ may not be zero, so, in general, the conclusion of the above lemma fails to hold in the non-regular case (see \ref{nonreg} below). Although points of degeneracy of $P$ are not typically points of great interest for the Hamiltonian system, they can be relevant to the dynamics of the metriplectic system \eqref{sys}.
To distinguish regular and non-regular points of $M$ we define the following set: $\mathcal{R}_{P}=\{x|P(x)\neq 0\}$. Equilibrium points that are in $\mathcal{R}_{P}$ will be called \emph{regular equilibria}. Applying the previous lemma to the two systems $\dot{x}=\xi_{P}$ and $\dot{x}=\xi$ gives us the following \begin{proposition}\label{p9} The system $\dot{x}=PdH+gdS$ and the unperturbed Hamiltonian system $\dot{x}=PdH$ have the same regular equilibria. \end{proposition} In other words, perturbing a Hamiltonian system in this way does not alter the regular equilibria of the system. Since the regular equilibria of a Hamiltonian system have a nice geometric description, we can describe fixed points of \eqref{sys} geometrically. \begin{proposition}\label{Ham} The regular equilibria of the system $\dot{x}=PdH+gdS$ are either critical points of $H$, or are points where the level sets of $S$ and $H$ are tangent to each other.\end{proposition} \begin{proof} The vector field $\xi_{P}=PdH$ vanishes at a point $x\in \mathcal{R}_{P}$ exactly when either $d_{x}H=0$ or $d_{x}H \in ker(P)$. In the second case, $d_{x}H$ annihilates every vector tangent to the symplectic leaf through $x$. Thus, $d_{x}H$ is normal ($h$-orthogonal) to the symplectic leaf through $x$, which, for $x\in \mathcal{R}_{P}$, coincides with the level set of $S$ through $x$. \end{proof} The dissipative system $\dot{x}=gdS$ has its own set of equilibria, but these are not always preserved when the two systems are combined. Points at which the dissipative term $gdS$ vanishes also have a geometric description, related to the extreme values of the function $S$ on the level sets of $H$. \begin{proposition}\label{p11} The vector fields $\xi=PdH + gdS$ and $\xi_{P}=PdH$ coincide either at critical points of $H$, or at critical points of the function $S$, restricted to a level surface $H=H_{0}$. 
\end{proposition} \begin{proof} The vector field $gdS$ vanishes in one of three cases: $dH=0$, $dS=0$, or $dS, dH \neq 0$ with $dS\in ker(g)$. In the last case, since the rank of $g$ is 2, the vectors $dS$ and $dH$ must be proportional, i.e. $dS=\lambda dH$. Thus, $\xi(x)=\xi_{P}(x)$ if and only if either $d_{x}H=0$, or $x$ is a critical point for the constrained function $S|_{H_{0}}$. \end{proof} Comparing these results, we obtain the following relation between the equilibria of the component systems $\dot{x}=PdH$ and $\dot{x}=gdS$. \begin{proposition}If $d_{x}S\neq 0$ at a regular point $x$, then $gd_{x}S=0$ if and only if $Pd_{x}H=0$.\end{proposition} \begin{proof} If $Pd_{x}H=0$, then Lemma \ref{perplemma}, together with Prop. \ref{p9}, tells us that $gd_{x}S=0$. On the other hand, from the proof of Prop. \ref{p11} we know that if $gd_{x}S=0$, with $d_{x}S\neq 0$, then $d_{x}H=\lambda d_{x}S$ for some scalar $\lambda$, and so $PdH=\lambda PdS=0$. \end{proof} \begin{corollary}If $d_{x}S\neq 0$ at a regular point $x$, then $dS/dt = 0$ if and only if $x$ is an equilibrium of \eqref{sys}.\end{corollary} \begin{proof} If $gd_{x}S=0$, with $d_{x}S\neq 0$, then $dS/dt=dS\cdot gdS=0$. The converse follows from Prop. \ref{p9}. \end{proof} In particular, if $dS\neq 0$ and $P\neq 0$ on a level set $H_{0}$ of $H$, then the only rest states on $H_{0}$ for the system \eqref{sys} are equilibrium points. We also have the following counterpart to Prop. \ref{p9}. \begin{corollary}A regular point $x$, for which $d_{x}S\neq 0$, is an equilibrium point of the system $\dot{x}=PdH+gdS$ if and only if $x$ is an equilibrium of the gradient system $\dot{x}=gdS$. \end{corollary} If $dS=0$ at a regular point $x_{0}$ in $M$, then $PdH$ may or may not vanish (see examples below). The vector field $\xi = PdH + gdS$ reverts to the Hamiltonian one $PdH$ at this point, and $\xi$ is tangent to the level sets $H_{0}$ and $S_{0}$ through $x_{0}$ of both $H$ and $S$.
The following result tells us that in this case, $dS=0$ along the entire trajectory $x(t)$ that passes through $x_{0}$, and so $\xi$ remains Hamiltonian along $x(t)$. \begin{proposition}\label{dS} Let $x(t)$ be a solution of \eqref{sys} through the point $x_{0}$. If $d_{x_{0}}S=0$, then $d_{x(t)}S=0$ for all time $t$. \end{proposition} \begin{proof} Let $\mathcal{S}_{0}$ be the symplectic leaf through $x_{0}$. The one-form $dS$ is constant along $\mathcal{S}_{0}$ since its Lie derivative along any Hamiltonian flow is zero: \[\mathcal{L}_{PdF}dS=d(\textrm{i}_{PdF}dS)+ \textrm{i}_{PdF}(ddS)=d(dS\cdot PdF)=0,\] for any smooth function $F$. Since $dS=0$ at the point $x_{0}$, it must remain zero on the entire leaf, and so the vector field $\xi=PdH+gdS$ reduces to $\xi=PdH$ on $\mathcal{S}_{0}$. Thus, the solution $x(t)$ must be contained in the set $\mathcal{S}_{0}$. \end{proof} The symplectic leaf $\mathcal{S}_{0}$ through a point $x_{0}$ is a submanifold contained in the level set $S_{0}$ of $S$, with dimension equal to the rank of $P$ at $x_{0}$. When $x_{0}$ is a regular point, the rank of $P$ is 2, i.e. $\mathcal{S}_{0}=S_{0}$. Thus, when $d_{x_{0}}S=0$, with $x_{0}$ regular, the vector $\xi$ is tangent to $S_{0}$ at every point, and any trajectory of $\eqref{sys}$ that starts on $S_{0}$ must remain there for all time, i.e. the level set $S_{0}$ is invariant under the flow of $\xi$. The system \eqref{sys} restricted to $S_{0}$ is a Hamiltonian system - dissipation is turned off. Moreover, since $dS=0$ along $S_{0}$, the function $S(t)$ is constant along the flow through $x_{0}$. In this case, the system is in a ``rest state"; it is a conservative system with constant (typically maximal or minimal) ``entropy". \section{Examples and Applications} In this section we discuss several examples of metriplectic systems in $\mathbb{R}^{3}$ of the type \eqref{sys}. We apply the perturbation method developed in the previous section to some classical Hamiltonian systems. 
\subsection{Relaxing Rigid Body (Revisited)} The example of a ``relaxing rigid body" by Morrison \cite{Mor} was the original motivation for our study of metriplectic systems of this type. We now reproduce this example via an application of the above perturbation method. The Poisson tensor at a point $m=(x,y,z)$ is \[P=\begin{pmatrix}0&z&-y\\-z&0&x\\y&-x&0\end{pmatrix}.\] The Hamiltonian and Casimir functions are $H=(1/2)(ax^{2}+by^{2}+cz^{2})$ and $S=(1/2)(x^{2}+y^{2}+z^{2})$. The symmetric tensor $g$ is then \[g=\begin{pmatrix} -b^{2}y^{2}-c^{2}z^{2}&abxy&acxz\\abxy&-a^{2}x^{2}-c^{2}z^{2}&bcyz\\acxz&bcyz&-a^{2}x^{2}-b^{2}y^{2} \end{pmatrix},\] which coincides with the operator (up to scalar multiple) defined by Morrison in \cite{Mor}. The vector field $PdH+gdS$ generates a metriplectic system with equations of motion given by \begin{align*} \dot{x} &= (b-c)yz + by(a-b)xy + cz(a-c)xz \\ \dot{y} &= (c-a)xz + cz(b-c)yz + ax(b-a)xy\\ \dot{z} &= (a-b)xy + ax(c-a)xz + by(c-b)yz \end{align*} Notice that these equations can be expressed as $\dot{m}=PdH + dH\times PdH$ which, since $PdH=dH\times dS$, takes the form \[\dot{m}=dH\times dS + dH\times (dH\times dS).\] The only point at which either $dS=0$ or $dH=0$ is the origin, which is a degenerate point for the system. At any other point, the level sets of $H$ and $S$ are ellipsoids and spheres (resp.). Along a given level set $H_{0}$ of $H$, the equilibrium points of the system are points where $H_{0}$ is tangent to a sphere, i.e. where $dH$ is parallel to $dS$. If $a$, $b$, and $c$ are distinct, then this can only occur at the ``poles" of $H_{0}$ where only one of $x$, $y$, and $z$ is nonzero. At every other point on $H_{0}$, since $dS/dt=dS\cdot gdS< 0$, $S$ must be strictly decreasing. Thus, the pole with the shortest radius on $H_{0}$ is a stable equilibrium, while the other two poles are unstable (see figure \ref{distinct}).
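To make the relaxation concrete, the following numerical sketch (ours, not part of Morrison's example) integrates these equations of motion with a standard fourth-order Runge-Kutta step for the hypothetical choice $a=1$, $b=2$, $c=3$, and checks the two qualitative claims: $H$ is conserved along the flow while the Casimir $S$ is non-increasing.

```python
# Illustrative numerical check (not from the paper): integrate the
# relaxing rigid body m' = PdH + gdS with a classical RK4 step and
# verify that H stays constant while the Casimir S decreases.
a, b, c = 1.0, 2.0, 3.0   # distinct coefficients (hypothetical values)

def rhs(m):
    x, y, z = m
    return ((b - c)*y*z + b*y*(a - b)*x*y + c*z*(a - c)*x*z,
            (c - a)*x*z + c*z*(b - c)*y*z + a*x*(b - a)*x*y,
            (a - b)*x*y + a*x*(c - a)*x*z + b*y*(c - b)*y*z)

def H(m):
    x, y, z = m
    return 0.5*(a*x*x + b*y*y + c*z*z)

def S(m):
    x, y, z = m
    return 0.5*(x*x + y*y + z*z)

def rk4_step(m, h):
    shift = lambda u, k, s: tuple(ui + s*ki for ui, ki in zip(u, k))
    k1 = rhs(m)
    k2 = rhs(shift(m, k1, h/2))
    k3 = rhs(shift(m, k2, h/2))
    k4 = rhs(shift(m, k3, h))
    return tuple(mi + (h/6)*(p + 2*q + 2*r + s)
                 for mi, p, q, r, s in zip(m, k1, k2, k3, k4))

m0 = (0.3, 0.4, 0.5)           # generic start, no coordinate zero
m, H0, S_prev = m0, H(m0), S(m0)
for _ in range(2000):          # integrate to t = 20 with h = 0.01
    m = rk4_step(m, 0.01)
    assert abs(H(m) - H0) < 1e-6       # energy is conserved
    assert S(m) <= S_prev + 1e-9       # "entropy" S is non-increasing
    S_prev = S(m)
```

In runs of this sketch the trajectory drifts toward the pole of $H_{0}$ on the $z$-axis (the axis with the largest coefficient, i.e. the shortest radius), in agreement with the stability discussion above.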
If two of $a$, $b$, $c$ are equal, then two of the principal radii $r_{i}$ and $r_{j}$ of $H_{0}$ coincide, and the sphere of that common radius is tangent to $H_{0}$ along a circle, every point of which is an equilibrium. The remaining pole is either stable or unstable depending on its length relative to $r_{i}$ (see figure \ref{twoequal}). \begin{figure}[h] \centering \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 500 592 843]{cmor1.eps}} \caption{$a,b,c$ distinct}\label{distinct} \end{minipage} \hspace{.25in} \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 500 592 843]{cmor2.eps}} \caption{$a=b$}\label{twoequal} \end{minipage} \\ \end{figure} \subsection{Dissipative Oscillators}\label{osc} Here we describe a class of examples in which a one-dimensional oscillator $\ddot{x}=-x$ is coupled to an external variable and then perturbed in the following way. First, write the system as a two-dimensional Hamiltonian system with the standard constant Poisson tensor in $\mathbb{R}^{2}$ and quadratic Hamiltonian function: \[\begin{matrix} \dot{x} =& y\\ \dot{y} =&-x \end{matrix} \quad \textrm{or} \quad \frac{d}{dt}\begin{pmatrix} x\\y \end{pmatrix} = \begin{pmatrix} 0&1\\-1&0\end{pmatrix} \begin{pmatrix} x\\y\end{pmatrix} \] Introduce an external variable $z$ and construct a new 3d Hamiltonian system with Hamiltonian function $H=(1/2)(x^{2}+y^{2}+z^{2})$. \[ \frac{d}{dt}\begin{pmatrix} x\\y\\z\end{pmatrix} = PdH = \begin{pmatrix} 0&1&0\\-1&0&0\\0&0&0\end{pmatrix} \begin{pmatrix} x\\y\\z \end{pmatrix} = \begin{pmatrix} y\\ -x\\ 0 \end{pmatrix}.\] A trajectory through $(x_{0},y_{0},z_{0})$ is either the point $(0,0,z_{0})$ on the $z$-axis (equilibrium), or a horizontal circle at height $z_{0}$ that lies on the sphere $H_{0}$ (a level set of $H$) of radius\, $r=\sqrt{x_{0}^{2}+y_{0}^{2}+z_{0}^{2}}$ \, (see figure \ref{oscfig}).
The symmetric tensor $g$ that we will use to perturb this system has the form \begin{equation}\label{gosc} g=\begin{pmatrix} -y^{2}-z^{2}&xy&xz\\xy&-x^{2}-z^{2}&yz\\xz&yz&-x^{2}-y^{2}\end{pmatrix}.\end{equation} Now let $S=S(z)$ be any function of $z$, and define the metriplectic system $\dot{m}=PdH+gdS$. In coordinates, \begin{align} \dot{x} &= y + xzS'\notag \\ \dot{y} &= -x + yzS'\label{pend}\\ \dot{z} &= -(x^{2}+y^{2})S'\notag \end{align} where $S'=dS/dz$. The level sets of $S$ are (unions of) horizontal planes, and the differential $dS$ is always parallel to the $z$-axis, except when $S'(z)=0$, in which case $dS=0$. Hence, $dS$ and $dH$ can be parallel only when $dH$ has no horizontal ($x$ or $y$) component, i.e. only at the ``north" and ``south" poles of a level set of $H$. For a given level set $H_{0}$, the poles $(0,0,\pm z_{0})$, $z_{0}>0$ on the $z$-axis are the only equilibria of \eqref{pend}. The stability of each such point depends on the function $S$. For example, if $S'(z_{0})<0$, then any solution that begins at a point on $H_{0}$ near $m=(0,0,z_{0})$ will flow toward the pole $m$, and so this point is a stable equilibrium. If $dS=0$ at some value $z=z_{0}$, then $dS$ will be zero on the whole plane $S_{0}=\{z=z_{0}\}$. Any trajectory $\gamma$ through a point $m$ on $S_{0}$ will remain in $S_{0}$ for all time since the vertical component of its velocity vector will be zero. But $\gamma$ must also remain on the sphere (level set) $H_{0}$ containing $m$, so $\gamma$ lies in the intersection of $S_{0}$ and $H_{0}$. This intersection is either a point (when $m$ is a pole of $H_{0}$), or a circle, in which case $\gamma$ is a cycle on $H_{0}$ at height $z=z_{0}$. Cycles of this type can be either stable or unstable, depending on the function $S$.
For example, if we choose $S(z)=z^{2}/2$, then the equations of motion become \begin{align*} \dot{x} &= y + xz^{2}\\ \dot{y} &= -x + yz^{2}\\ \dot{z} &= -z(x^{2}+y^{2}) \end{align*} Since $dS=0$ only at $z=0$, the only periodic solutions are cycles at the equators of spheres $H=k$ of radius $r=\sqrt{2k}$. Clearly $z\rightarrow 0$ along any solution off the $z$-axis, since $\dot{z}\neq 0$ except where $z=0$ or $x=y=0$. Thus, these equatorial solutions are attractive cycles for the system, and the two poles $(0,0,\pm r)$ of each sphere $H=k$ are unstable equilibria (see figure \ref{damposc}). \begin{figure}[h!] \centering \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 300 592 843]{cosc.eps}} \caption{$S=0$}\label{oscfig} \end{minipage}\hspace{.25in} \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 260 592 843]{cdamposc.eps}}\caption{$S=z^{2}/2$} \label{damposc} \end{minipage} \\ \end{figure} \subsection{Points of Degeneracy}\label{nonreg} As mentioned above, the system $\dot{x}=PdH+gdS$ reduces to the dissipative system $\dot{x}=gdS$ at \emph{non-regular} points, i.e. points where the rank of $P$ is zero. The behavior of the flow at these points varies depending on the choice of Hamiltonian function $H$. Here we present a simple model which displays some of the different possibilities in this situation. Our model is based on the perturbed oscillator in the previous example, but is altered to allow the rank of $P$ to vanish along the $(x,z)$ plane. Specifically, we define $P$ as \[P=\begin{pmatrix}0&y&0\\-y&0&0\\0&0&0\end{pmatrix}.\] For a given function $H$, every point on the plane $\{y=0\}$ is a fixed point for the Hamiltonian system $\dot{m}=PdH$. For any level set $H_{0}=\{H=k\}$ of $H$, the intersection $\mathcal{I}$ of $H_{0}$ and the plane $\{y=0\}$ is invariant under the flow of $\dot{m}=PdH$. We now perturb this system according to the method described above.
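As a numerical illustration (again ours, not part of the original text), one can integrate the displayed system and confirm that a trajectory starting near a pole stays on its sphere $H=\mathrm{const}$ while $z\to 0$, so that it winds down to the attractive equatorial cycle.

```python
# Illustrative numerical check (not from the paper): for the system
#   x' = y + x z^2,  y' = -x + y z^2,  z' = -z (x^2 + y^2),
# H = (x^2 + y^2 + z^2)/2 is conserved and z -> 0, so trajectories
# approach the equatorial cycle of their sphere H = const.
def rhs(m):
    x, y, z = m
    return (y + x*z*z, -x + y*z*z, -z*(x*x + y*y))

def rk4_step(m, h):
    shift = lambda u, k, s: tuple(ui + s*ki for ui, ki in zip(u, k))
    k1 = rhs(m)
    k2 = rhs(shift(m, k1, h/2))
    k3 = rhs(shift(m, k2, h/2))
    k4 = rhs(shift(m, k3, h))
    return tuple(mi + (h/6)*(p + 2*q + 2*r + s)
                 for mi, p, q, r, s in zip(m, k1, k2, k3, k4))

m = (0.1, 0.0, 1.0)                     # start near the unstable north pole
r2 = sum(c*c for c in m)                # squared radius = 2H
for _ in range(5000):                   # integrate to t = 50 with h = 0.01
    m = rk4_step(m, 0.01)
assert abs(sum(c*c for c in m) - r2) < 1e-6   # remains on the sphere
assert abs(m[2]) < 1e-3                       # z -> 0: equatorial cycle
```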
The symmetric tensor $g$ is built from $dH$ as in \eqref{gosc}, with $(x,y,z)$ replaced by the partial derivatives $(H_{1},H_{2},H_{3})$ of $H$ (so $H_{i}$ denotes $\partial H/\partial x_{i}$), and the Casimir functions for $P$ are functions that only depend on the third coordinate: $S=S(z)$. The metriplectic vector field in this case has the following form: \[\xi = PdH+gdS=\begin{pmatrix} yH_{2}+H_{1}H_{3}S'\\ -yH_{1}+H_{2}H_{3}S'\\ -(H_{1}^{2}+H_{2}^{2})S' \end{pmatrix}.\] When $y=0$, this reduces to the vector field $\xi|_{y=0}=gdS$, or \[\xi|_{y=0}=S'\begin{pmatrix} H_{1}H_{3}\\ H_{2}H_{3}\\ -(H_{1}^{2}+H_{2}^{2}) \end{pmatrix}=S'H_{1}\begin{pmatrix} H_{3}\\ 0\\ -H_{1} \end{pmatrix} + S'H_{2}\begin{pmatrix} 0\\ H_{3}\\ -H_{2} \end{pmatrix}.\] The points on the plane $\{y=0\}$ are not necessarily fixed by the flow of $\xi$, but the intersection $\mathcal{I}$ will be invariant as long as the vector $gdS|_{y=0}$ is tangent to $\mathcal{I}$, i.e. when the second component of $gdS|_{y=0}$ is zero. From the expression for $\xi$ above, we see that $\mathcal{I}$ is an invariant set exactly when $S'H_{2}H_{3}|_{y=0}=0$. Since $S$ is a function of $z$ only, we have two possibilities: $H_{2}|_{y=0}=0$ or $H_{3}|_{y=0}=0$. In the first case, we have \[\xi|_{y=0}=S'H_{1}\begin{pmatrix} H_{3}\\ 0\\ -H_{1}\end{pmatrix}\] while in the second case: \[\xi|_{y=0}=S'H_{1}\begin{pmatrix} 0\\ 0\\ -H_{1}\end{pmatrix} + S'H_{2}\begin{pmatrix} 0\\ 0\\ -H_{2}\end{pmatrix}.\] The following two examples illustrate the behavior of solutions to the equations $\dot{m}=PdH+gdS$. In the first example, the set $\mathcal{I}$ is invariant under the flow of the system, while in the second only the trivial solutions with initial values on the $(x,y)$ plane remain in $\mathcal{I}$. \noindent \textbf{Example 1.} Let $H=(x^{2}+y^{2}+z^{2})/2$ and let $S=z^{2}/2$.
Then the vector field $\xi$ becomes \[\xi=PdH+gdS=\begin{pmatrix} y^{2}+xz^{2}\\ -xy + yz^{2}\\ -(x^{2}+y^{2})z\end{pmatrix}.\] When $y=0$, the tensor $P$ vanishes, and we have \[\xi|_{y=0}=gdS|_{y=0}=\begin{pmatrix} xz^{2}\\ 0 \\ -x^{2}z \end{pmatrix} = xz\begin{pmatrix} z\\ 0\\ -x\end{pmatrix}.\] Suppose that $m(t)$ is a solution to $\dot{m}=\xi$ with $m(0)= (x_{0},0,z_{0})$, and let $\mathcal{I}$ be the intersection of the plane $y=0$ with the level set $H_{0}$ of $H$ that contains $m(0)$. Since the $y$-component of $\xi|_{y=0}$ is zero, the point $m(t)$ will be in $\mathcal{I}$ for all time $t$. The set $H_{0}$ is a sphere of radius $r_{0}=\sqrt{x_{0}^{2}+z_{0}^{2}}$, so the set $\mathcal{I}$ is a great circle on this sphere. The points $(0,0,\pm r_{0})$ are unstable equilibria, as in the examples above, but now two new equilibria arise: where the set $\mathcal{I}$ meets the points at which $dS=0$, the vector field $\xi$ vanishes. This occurs at the points $(\pm r_{0},0,0)$ on the equator of $H_{0}$. The point on the positive $x$-axis is a stable fixed point, while the other is unstable (see figure \ref{inv}). \begin{figure}[h] \centering \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 300 592 843]{ceg1.eps}} \caption{$\mathcal{I}$ is invariant} \label{inv} \end{minipage}\hspace{.25in} \begin{minipage}{2.3in} \centering \epsfxsize=1in \epsfysize=1in \epsfclipon \framebox{\epsfbox[0 300 592 843 ]{ceg2.eps}}\caption{$\mathcal{I}$ is not invariant} \label{noninv} \end{minipage} \\ \end{figure} \noindent \textbf{Example 2.} Let $H=(x^{2}+(y-1)^{2}+z^{2})/2$ and let $S=z^{2}/2$.
In this case, the vector field $\xi$ is \[\xi=PdH+gdS=\begin{pmatrix} y(y-1)+xz^{2}\\ -xy + (y-1)z^{2}\\ -(x^{2}+(y-1)^{2})z\end{pmatrix}.\] When $y=0$, the tensor $P$ vanishes, and we have \[\xi|_{y=0}=gdS|_{y=0}=\begin{pmatrix} xz^{2}\\ -z^{2} \\ -(x^{2}+1)z \end{pmatrix} = xz\begin{pmatrix} z\\ 0\\ -x\end{pmatrix}-z\begin{pmatrix}0\\z\\1\end{pmatrix}.\] The $y$-component of $gdS|_{y=0}$ is zero only if $z=0$. If $m=(x_{0},0,z_{0})$ is a point on the plane $\{y=0\}$, then the flow of $\xi$ through $m$ lies on the level set $H_{0}$ of $H$ containing $m$, but does not remain on the intersection $\mathcal{I}$ unless $z_{0}=0$. The points $(x,0,0)$ where $\mathcal{I}$ meets the $(x,y)$ plane are equilibrium points, and these are the only points of $\mathcal{I}$ which remain fixed under the flow of $\xi$ (see figure \ref{noninv}). \section{Summary and Remarks} In this article we examined metriplectic systems of the type \eqref{sys} from the point of view of perturbations of Hamiltonian systems. We derived a natural form for a symmetric tensor $g$ so that the perturbed system $\dot{x}=PdH+gdS$ dissipates the function $S$ while preserving the energy $H$. We found that a Hamiltonian system $\dot{x}=PdH$ and its metriplectic perturbation have the same \emph{regular} equilibria, which are related to the extreme values of the functions $H$ and $S$. We also found that, for an appropriate choice of $S$, the system \eqref{sys} will tend to a Hamiltonian (possibly equilibrium) rest state in which the dissipative term vanishes. We presented examples of this type of perturbation, including a reproduction of Morrison's example of a `Relaxing Rigid Body'. \begin{remark*}Although our analysis was restricted to three dimensions, it seems reasonable that certain aspects of our construction should carry over into higher dimensions, including the form of the symmetric tensor $g$.
In fact, metriplectic systems have been constructed and discussed in infinite-dimensional settings, examples of which can be found in \cite{Grm2}, \cite{KaufTurski}, and \cite{Mor}. \end{remark*} \begin{remark*}The method of perturbation described here can be brought into alignment with the more customary notion of a perturbation by the introduction of a continuous parameter that scales the gradient vector field: $\dot{x}=PdH+ \epsilon gdS$. It would be interesting to study the bifurcations that arise with regard to the stability of equilibria in such systems. For a discussion of such a perturbation involving the Lorenz system, see \cite{Nevir}. \end{remark*} \begin{remark*}This article is a product of the author's doctoral dissertation entitled \emph{Metriplectic Systems} which can be found in the archives of the library at Portland State University, or at the link: web.pdx.edu/$\sim$djf. \end{remark*} \bibliographystyle{plain} \bibliography{project} \end{document}
\begin{document} \title[Orthogonally additive polynomials] {Orthogonally additive polynomials on convolution algebras associated with a compact group} \author{J. Alaminos} \address{Departamento de An\' alisis Matem\' atico\\ Fa\-cul\-tad de Ciencias\\ Universidad de Granada\\ 18071 Granada, Spain} \email{[email protected]} \author{M. L. C. Godoy} \address{Departamento de An\' alisis Matem\' atico\\ Fa\-cul\-tad de Ciencias\\ Universidad de Granada\\ 18071 Granada, Spain} \email{[email protected]} \author{J. Extremera} \address{Departamento de An\' alisis Matem\' atico\\ Fa\-cul\-tad de Ciencias\\ Universidad de Granada\\ 18071 Granada, Spain} \email{[email protected]} \author{A.\,R. Villena} \address{Departamento de An\' alisis Matem\' atico\\ Fa\-cul\-tad de Ciencias\\ Universidad de Granada\\ 18071 Granada, Spain} \email{[email protected]} \date{} \begin{abstract} Let $G$ be a compact group, let $X$ be a Banach space, and let $P\colon L^1(G)\to X$ be an orthogonally additive, continuous $n$-homogeneous polynomial. Then we show that there exists a unique continuous linear map $\Phi\colon L^1(G)\to X$ such that $P(f)=\Phi \bigl(f\ast\stackrel{n}{\cdots}\ast f \bigr)$ for each $f\in L^1(G)$. We also seek analogues of this result about $L^1(G)$ for various other convolution algebras, including $L^p(G)$, for $1< p\le\infty$, and $C(G)$. \end{abstract} \subjclass[2010]{43A20, 43A77, 47H60} \keywords{Compact group, convolution algebra, group algebra, orthogonally additive polynomial} \thanks{The first, the third and the fourth named authors were supported by MINECO grant MTM2015--65020--P and Junta de Andaluc\'{\i}a grant FQM--185.} \maketitle \section{Introduction} Throughout all algebras and linear spaces are complex. Of course, linearity is understood to mean complex linearity. Moreover, we fix $n\in\mathbb{N}$ with $n\ge 2$. Let $X$ and $Y$ be linear spaces. 
A map $P\colon X\to Y$ is said to be an \emph{$n$-homogeneous polynomial} if there exists an $n$-linear map $\varphi\colon X^n\to Y$ such that $P(x)=\varphi \left( x,\dotsc,x \right)$ $(x\in X)$. Here and subsequently, $X^n$ stands for the $n$-fold Cartesian product of $X$. Such a map is unique if it is required to be symmetric. This is a consequence of the so-called polarization formula which defines $\varphi$ by \[ \varphi \left( x_1,\ldots,x_n \right)= \frac{1}{n!\,2^n}\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \epsilon_{1}\cdots\epsilon_{n} P \left(\epsilon_1x_1+\cdots+\epsilon_{n}x_{n} \right) \] for all $x_1,\dotsc,x_n\in X$. Further, in the case where $X$ and $Y$ are normed spaces, the polynomial $P$ is continuous if and only if the symmetric $n$-linear map $\varphi$ associated with $P$ is continuous. Let $A$ be an algebra. Then the map $P_n\colon A\to A$ defined by \begin{equation*} P_n(a)=a^n \quad (a\in A) \end{equation*} is a prototypical example of $n$-homogeneous polynomial. The symmetric $n$-linear map associated with $P_n$ is the map $S_n\colon A^n\to A$ defined by \begin{equation*} S_n \left( a_1,\dotsc,a_n \right) = \frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_n}a_{\sigma(1)}\dotsb a_{\sigma(n)} \quad \left(a_1,\dots,a_n\in A \right), \end{equation*} where $\mathfrak{S}_n$ stands for the symmetric group of order $n$. From now on, we write $\mathcal{P}_n(A)$ for the linear span of the set $\left\{a^n : a\in A\right\}$. Given a linear space $Y$ and a linear map $\Phi\colon\mathcal{P}_n(A)\to Y$, the map $P\colon A\to Y$ defined by \begin{equation}\label{standard0} P(a)=\Phi\left( a^n \right) \quad (a\in A) \end{equation} yields a particularly important example of $n$-homogeneous polynomial, and one might wish to know an algebraic characterization of those $n$-homogeneous polynomials $P\colon A\to Y$ which can be expressed in the form \eqref{standard0}. 
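For concreteness, the polarization formula above is easy to verify numerically in a small case; the following sketch (purely illustrative, with a hypothetical quadratic $P$ on $\mathbb{R}^{2}$) recovers the symmetric bilinear map associated with $P$ for $n=2$.

```python
# Illustrative check (not from the paper) of the polarization formula
# for n = 2 on R^2: it recovers the symmetric bilinear map associated
# with the 2-homogeneous polynomial P(x) = x1^2 + 3*x1*x2.
import math
from itertools import product

def P(x):
    return x[0]**2 + 3*x[0]*x[1]

def phi(x, y, n=2):
    # phi(x_1,...,x_n) = (1/(n! 2^n)) * sum over signs eps_i = +/-1 of
    # eps_1 * ... * eps_n * P(eps_1 x_1 + ... + eps_n x_n), here n = 2
    total = 0.0
    for eps in product((1, -1), repeat=n):
        pt = tuple(eps[0]*xi + eps[1]*yi for xi, yi in zip(x, y))
        total += eps[0]*eps[1]*P(pt)
    return total / (math.factorial(n) * 2**n)

x, y = (1.0, 2.0), (-3.0, 5.0)
assert abs(phi(x, x) - P(x)) < 1e-12        # phi(x, x) = P(x)
assert abs(phi(x, y) - phi(y, x)) < 1e-12   # phi is symmetric
```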
Further, in the case where $A$ is a Banach algebra, $Y$ is a Banach space, and the $n$-homogeneous polynomial $P\colon A\to Y$ is continuous, one would particularly like the map $\Phi$ of \eqref{standard0} to be continuous. A property that has proven valuable for this purpose is the so-called orthogonal additivity. Let $A$ be an algebra and let $Y$ be a linear space. A map $P\colon A\to Y$ is said to be \emph{orthogonally additive} if \[ a,b\in A, \ ab=ba=0 \ \Rightarrow \ P(a+b)=P(a)+P(b). \] The polynomial defined by \eqref{standard0} is a prototypical example of orthogonally additive $n$-homogeneous polynomial, and the obvious questions that one can address are the following. \begin{enumerate} \item[Q1] Let $A$ be a specified algebra. Is it true that every orthogonally additive $n$-homogeneous polynomial $P$ from $A$ into each linear space $Y$ can be expressed in the standard form \eqref{standard0} for some linear map $\Phi\colon\mathcal{P}_n(A)\to Y$? \item[Q2] Let $A$ be a specified Banach algebra. Is it true that every orthogonally additive continuous $n$-homogeneous polynomial $P$ from $A$ into each Banach space $Y$ can be expressed in the standard form \eqref{standard0} for some continuous linear map $\Phi\colon\mathcal{P}_n(A)\to Y$? \item[Q3] Let $A$ be a specified Banach algebra. Is there any norm $\tnorma{\cdot}$ on $\mathcal{P}_n(A)$ with the property that the orthogonally additive continuous $n$-homogeneous polynomials from $A$ into each Banach space $Y$ are exactly the polynomials of the form \eqref{standard0} for some $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to Y$?
\end{enumerate} It is convenient to remark that the requirement in Q3 amounts precisely to the following two conditions: \begin{itemize} \item for each Banach space $Y$ and each $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to Y$, the prototypical polynomial $P\colon A\to Y$ defined by \eqref{standard0} is continuous, and \item every orthogonally additive continuous $n$-homogeneous polynomial $P$ from $A$ into each Banach space $Y$ can be expressed in the standard form~\eqref{standard0} for some $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to Y$. \end{itemize} It is shown in \cite{P} that the answer to Question Q2 is positive in the case where $A$ is a $C^*$-algebra (see~\cite{P2,P3} for the case where $A$ is a $C^*$-algebra and $P$ is a holomorphic map). The references \cite{A1,A2,V,W,WW} discuss Question Q2 for a variety of Banach function algebras, including the Fourier algebra $A(G)$ and the Fig\`a-Talamanca-Herz algebra $A_p(G)$ of a locally compact group $G$. This paper focuses on the questions Q1, Q2, and Q3 mentioned above for a variety of convolution algebras associated with a compact group $G$, such as $L^p(G)$, for $1\le p\le\infty$, and $C(G)$. In contrast to the previous references, which are concerned with $C^*$-algebras and commutative Banach algebras, the algebras in this work are neither $C^*$ nor commutative. Throughout, we are concerned with a compact group $G$ whose Haar measure is normalized. We write $\int_G f(t)\, dt$ for the integral of $f\in L^1(G)$ with respect to the Haar measure. For $f\in L^1(G)$, we denote by $f^{*n}$ the $n$-fold convolution product $f\ast\dotsb\ast f$. We denote by $\widehat{G}$ the set of equivalence classes of irreducible unitary representations of $G$. Let $\pi$ be an irreducible unitary representation of $G$ on a Hilbert space $H_\pi$.
We set $d_\pi=\dim(H_\pi)(<\infty)$, and the character $\chi_\pi$ of $\pi$ is the continuous function on $G$ defined by \[ \chi_\pi(t)=\trace\bigl(\pi(t)\bigr) \quad (t\in G). \] We write $\mathcal{T}_\pi(G)$ for the linear subspace of $C(G)$ generated by the set of continuous functions on $G$ of the form $t\mapsto\langle\pi(t)u\vert v\rangle$ as $u$ and $v$ range over $H_\pi$. It should be pointed out that $\chi_\pi$ and $\mathcal{T}_\pi(G)$ depend only on the unitary equivalence class of $\pi$. We write $\mathcal{T}(G)$ for the linear span of the functions in $\mathcal{T}_\pi(G)$ as $[\pi]$ ranges over $\widehat{G}$. Then $\mathcal{T}(G)$ is a two-sided ideal of $L^1(G)$ whose elements are called trigonometric polynomials on $G$. The Fourier transform of a function $f\in L^1(G)$ at $\pi$ is defined to be the operator \[ \widehat{f}(\pi)=\int_Gf(t)\pi(t^{-1}) \,dt \] on $H_\pi$. Note that if $\pi'$ is equivalent to $\pi$, then the operators $\widehat{f}(\pi')$ and $\widehat{f}(\pi)$ are unitarily equivalent. In Section~2 we show that the answer to Question Q1 is positive for the algebra $\mathcal{T}(G)$. In Section~3 we show that the answer to Question Q2 is positive for the group algebra $L^1(G)$. In Section~4 we give a negative answer to Question Q2 for any of the convolution algebras $L^p(\mathbb{T})$, for $1<p\le\infty$, and $C(\mathbb{T})$, where $\mathbb{T}$ denotes the circle group. In Section~5 we prove that, for each Banach algebra $A$, there exists a largest norm topology on the linear space $\mathcal{P}_n(A)$ for which the answer to Question Q3 can be positive. Finally, in Section~6 we show that the answer to Question Q3 is positive for most of the significant convolution algebras associated to $G$, such as $L^p(G)$, for $1<p<\infty$, and $C(G)$, when considering the norm introduced in Section~5. We presume a basic knowledge of Banach algebra theory, harmonic analysis for compact groups, and polynomials on Banach spaces. 
For the relevant background material concerning these topics, see \cite{D}, \cite{HR}, and \cite{M}, respectively. \section{Orthogonally additive polynomials on $\mathcal{T}(G)$} Our starting point is furnished by applying \cite{P} to the full matrix algebra $\mathbb{M}_k$ of order $k$ (which supplies the most elementary example of $C^*$-algebra). \begin{lemma}\label{l1} Let $\mathcal{M}$ be an algebra isomorphic to $\mathbb{M}_k$ for some $k\in\mathbb{N}$, let $X$ be a linear space, and let $P\colon\mathcal{M}\to X$ be an orthogonally additive $n$-homogeneous polynomial. Then there exists a unique linear map $\Phi\colon\mathcal{M}\to X$ such that $P(a)=\Phi\left( a^n \right) $ for each $a\in\mathcal{M}$. Further, if $\varphi\colon\mathcal{M}^n\to X$ is the symmetric $n$-linear map associated with $P$ and $e$ is the identity of $\mathcal{M}$, then $\Phi(a)=\varphi \left(a,e,\dotsc,e \right)$ for each $a\in \mathcal{M}$. \end{lemma} \begin{proof} Let $\Psi\colon\mathcal{M}\to\mathbb{M}_k$ be an isomorphism. Endow $X$ with a norm, and let $Y$ be its completion. Since $\mathbb{M}_k$ is a $C^*$-algebra and the map $P\circ\Psi^{-1}\colon\mathbb{M}_k\to Y$ is a continuous orthogonally additive $n$-homogeneous polynomial, \cite[Corollary~3.1]{P} then shows that there exists a unique linear map $\Theta\colon\mathbb{M}_k\to Y$ such that $P \left(\Psi^{-1}(M)\right)=\Theta \left(M^n \right)$ for each $M\in\mathbb{M}_k$. It is a simple matter to check that the map $\Phi=\Theta\circ\Psi$ satisfies the identity $P(a)=\Phi \left(a^n \right)$ $(a\in\mathcal{M})$. Now the polarization of this identity yields $\varphi \left( a_1,\ldots,a_n \right)=\Phi\bigl(S_n(a_1,\dotsc,a_n)\bigr)$ $\left( a_1,\dotsc,a_n\in \mathcal{M}\right)$, whence $\varphi \left( a,e,\dotsc,e \right)=\Phi(a)$ for each $a\in\mathcal{M}$. \end{proof} In what follows, we will require some elementary facts about the algebra $\mathcal{T}(G)$; we gather together these facts here for reference.
\begin{lemma}\label{l2} Let $G$ be a compact group. Then the following results hold. \begin{enumerate} \item For each irreducible unitary representation $\pi$, $\mathcal{T}_{\pi}(G)$ is a minimal two-sided ideal of $L^1(G)$, $\mathcal{T}_{\pi}(G)$ is isomorphic to the full matrix algebra $\mathbb{M}_{d_\pi}$, and $d_\pi\chi_\pi$ is the identity of $\mathcal{T}_\pi(G)$. \item For each $f\in\mathcal{T}(G)$, the set \[ \bigl\{[\pi]\in\widehat{G} : f\ast\chi_\pi\ne 0\bigr\} \] is finite and \[ f=\sum_{[\pi]\in\widehat{G}}d_\pi f\ast\chi_\pi. \] \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item \cite[Theorems~27.21 and 27.24(ii)]{HR}. \item \cite[Remark~27.8(a) and Theorem~27.24(ii)]{HR}. \qedhere \end{enumerate} \end{proof} \begin{theorem}\label{t1} Let $G$ be a compact group, let $X$ be a linear space, and let $P\colon \mathcal{T}(G)\to X$ be an orthogonally additive $n$-homogeneous polynomial. Then there exists a unique linear map $\Phi\colon\mathcal{T}(G)\to X$ such that \begin{equation*} P(f)=\Phi \left( f^{*n} \right) \end{equation*} for each $f\in\mathcal{T}(G)$. Further, if $\varphi\colon\mathcal{T}(G)^{n}\to X$ is the symmetric $n$-linear map associated with $P$, then \begin{equation*} \Phi(f)= \sum_{[\pi]\in\widehat{G}}\varphi \left( d_\pi f\ast\chi_\pi,d_\pi\chi_\pi,\dotsc,d_\pi \chi_\pi \right) \end{equation*} (the term $f\ast\chi_\pi$ being $0$ for all but finitely many $[\pi]\in\widehat{G}$) for each $f\in\mathcal{T}(G)$. \end{theorem} \begin{proof} We first show that \begin{equation}\label{e1} P(f)=\sum_{[\pi]\in\widehat{G}}P \left( d_\pi f\ast\chi_\pi \right) \end{equation} for each $f\in \mathcal{T}(G)$. Set $f\in\mathcal{T}(G)$. By Lemma~\ref{l2}, we have \begin{equation}\label{e2} f=\sum_{[\pi]\in\widehat{G}}d_\pi f\ast\chi_\pi. \end{equation} It should be pointed out that all save finitely many of the summands in \eqref{e2} are $0$.
Further, from \cite[Theorem~27.24(ii)-(iii)]{HR} we see that \[ \left(f\ast\chi_\pi \right)\ast \left( f\ast\chi_{\pi'} \right)= f\ast f\ast\chi_\pi\ast\chi_{\pi'}=0, \] whenever $[\pi],[\pi']\in\widehat{G}$ and $[\pi]\ne[\pi']$. The orthogonal additivity of $P$ and \eqref{e2} then yield \eqref{e1}. Now set $[\pi]\in\widehat{G}$. On account of Lemma~\ref{l2}(1), $\mathcal{T}_{\pi}(G)$ is isomorphic to the full matrix algebra $\mathbb{M}_{d_\pi}$ and $d_\pi\chi_\pi$ is the identity of $\mathcal{T}_{\pi}(G)$. Hence, by Lemma~\ref{l1}, we have \begin{equation*} P(f)=\varphi \left(f^{*n},d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi \right) \end{equation*} for each $f\in\mathcal{T}_{\pi}(G)$. In particular, if $f\in\mathcal{T}(G)$, then $f\ast d_\pi\chi_\pi\in\mathcal{T}_{\pi}(G)$ and thus \begin{equation}\label{e3} \begin{split} P \left(d_\pi f\ast\chi_\pi \right) & = \varphi\bigl((f\ast d_\pi\chi_\pi)^{*n},d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi\bigr)\\ & = \varphi\bigl(f^{*n}\ast d_\pi\chi_\pi,d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi\bigr). \end{split} \end{equation} From \eqref{e1} and \eqref{e3} we conclude that \begin{equation*} \begin{split} P(f) & = \sum_{[\pi]\in\widehat{G}}P \left( d_\pi f\ast\chi_\pi \right)\\ & = \sum_{[\pi]\in\widehat{G}}\varphi \left( d_\pi f^{*n}\ast\chi_\pi, d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi \right) \end{split} \end{equation*} for each $f\in\mathcal{T}(G)$, which completes the proof. \end{proof} \section{Orthogonally additive polynomials on $L^1(G)$}\label{sect} A key fact in what follows is that the Banach algebra $L^1(G)$ has a central bounded approximate identity consisting of trigonometric polynomials. 
Indeed, by \cite[Theorem~28.53]{HR}, there exists a bounded approximate identity $(h_\lambda)_{\lambda\in\Lambda}$ for $L^1(G)$ such that: \begin{itemize} \item $\lVert h_\lambda\rVert_1=1$ for each $\lambda\in\Lambda$; \item $h_\lambda\ast f=f\ast h_\lambda$ for all $f\in L^1(G)$ and $\lambda\in\Lambda$; \item $\widehat{h_\lambda}(\pi)=\alpha_\lambda(\pi) I_\pi$ for some $\alpha_\lambda(\pi)\in\mathbb{R}^+_0$ for all $[\pi]\in\widehat{G}$ and $\lambda\in\Lambda$ (here $I_\pi$ denotes the identity operator on $H_\pi$); \item $\lim_{\lambda\in\Lambda} \alpha_\lambda(\pi)=1$ for each $[\pi]\in\widehat{G}$. \end{itemize} This approximate identity will be used repeatedly hereafter. \begin{theorem}\label{tm} Let $G$ be a compact group, let $X$ be a Banach space, and let $P\colon L^1(G)\to X$ be a continuous $n$-homogeneous polynomial. Then the following conditions are equivalent: \begin{enumerate} \item the polynomial $P$ is orthogonally additive; \item the polynomial $P$ is orthogonally additive on $\mathcal{T}(G)$, i.e., $P(f+g)=P(f)+P(g)$ whenever $f,g\in\mathcal{T}(G)$ are such that $f\ast g=g\ast f=0$; \item there exists a unique continuous linear map $\Phi\colon L^1(G)\to X$ such that $P(f)=\Phi\left( f^{*n} \right) $ for each $f\in L^1(G)$. \end{enumerate} \end{theorem} \begin{proof} It is clear that $(1)\Rightarrow(2)$ and that $(3)\Rightarrow(1)$. We will henceforth prove that $(2)\Rightarrow(3)$. Let $\varphi\colon L^1(G)^{n}\to X$ be the symmetric $n$-linear map associated with $P$, and let $\Phi_0\colon\mathcal{T}(G)\to X$ be the linear map defined by \begin{equation*} \Phi_0(f)= \sum_{[\pi]\in\widehat{G}}\varphi \left(d_\pi f\ast\chi_\pi,d_\pi\chi_\pi,\dotsc,d_\pi \chi_\pi \right) \end{equation*} for each $f\in\mathcal{T}(G)$. Since $P$ is orthogonally additive on $\mathcal{T}(G)$, Theorem~\ref{t1} yields \begin{equation}\label{e12} P(f)=\Phi_0 \left( f^{*n} \right) \quad \left(f\in\mathcal{T}(G) \right). \end{equation} We claim that $\Phi_0$ is continuous.
Let $(h_\lambda)_{\lambda\in\Lambda}$ be as introduced in the beginning of this section. We now note that, for $\lambda\in\Lambda$ and $[\pi]\in\widehat{G}$, \begin{equation*} \begin{split} \bigl(h_\lambda\ast\chi_\pi\bigr)(t) & = \int_G h_\lambda(s)\trace \bigl(\pi(s^{-1}t)\bigr) \, ds\\ & = \trace \left(\int_G h_\lambda(s)\pi(s^{-1}t) \, ds\right)\\ & = \trace \left(\int_G h_\lambda(s)\pi(s^{-1})\pi(t) \, ds\right)\\ & = \trace \left(\left(\int_G h_\lambda(s)\pi(s^{-1}) \, ds\right)\pi(t)\right)\\ & = \trace \bigl(\widehat{h_\lambda}(\pi)\pi(t)\bigr)\\ & = \trace \bigl(\alpha_\lambda(\pi)\pi(t)\bigr)\\ & = \alpha_\lambda(\pi)\trace \bigl(\pi(t)\bigr)\\ & = \alpha_\lambda(\pi)\chi_\pi(t). \end{split} \end{equation*} From Theorem~\ref{t1} and the polarization formula we deduce that \begin{equation*} \begin{split} \varphi \left( f_1,\dotsc,f_n \right) & = \frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_n} \Phi_0 \left( f_{\sigma(1)}\ast\dotsb\ast f_{\sigma(n)} \right)\\ & = \frac{1}{n!}\sum_{[\pi]\in\widehat{G}}\sum_{\sigma\in\mathfrak{S}_n} \varphi\bigl( d_\pi f_{\sigma(1)}\ast\dotsb\ast f_{\sigma(n)}\ast\chi_\pi,d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi \bigr) \end{split} \end{equation*} for all $f_1,\dotsc,f_n\in\mathcal{T}(G)$. Pick $f\in\mathcal{T}(G)$, and set $\mathcal{F}=\bigl\{[\pi] : f\ast\chi_\pi\ne 0\bigr\}$. We apply the above equation in the case where $f_1=f$ and $f_2=\dotsb=f_n=h_\lambda$ with $\lambda\in\Lambda$. 
Since \[ f_{\sigma(1)}\ast\dotsb \ast f_{\sigma(n)}\ast\chi_\pi= f\ast h_\lambda\ast\dotsb\ast h_\lambda\ast\chi_\pi= \alpha_\lambda(\pi)^{n-1}f\ast\chi_\pi \] for each $\sigma\in\mathfrak{S}_n$, it follows that \begin{equation*} \begin{split} \varphi \left( f,h_\lambda,\dotsc,h_\lambda \right) & = \sum_{[\pi]\in\widehat{G}} \alpha_\lambda(\pi)^{n-1} \varphi \left( d_\pi f\ast\chi_\pi,d_\pi \chi_\pi,\dotsc,d_\pi \chi_\pi \right)\\ & = \sum_{[\pi]\in\mathcal{F}} \alpha_\lambda(\pi)^{n-1} \varphi \left( d_\pi f\ast\chi_\pi,d_\pi \chi_\pi,\dotsc,d_\pi \chi_\pi \right). \end{split} \end{equation*} Since $\mathcal{F}$ is finite (Lemma~\ref{l2}(2)) and $\lim_{\lambda\in\Lambda}\alpha_\lambda(\pi)=1$ for each $[\pi]\in\widehat{G}$, we see that the net $\bigl(\varphi \left(f,h_\lambda,\dotsc,h_\lambda \right)\bigr)_{\lambda\in\Lambda}$ is convergent and \begin{equation}\label{e13} \lim_{\lambda\in\Lambda}\varphi \left( f,h_\lambda,\dotsc,h_\lambda \right)= \sum_{[\pi]\in\mathcal{F}}\varphi \left( d_\pi f\ast\chi_\pi,d_\pi \chi_\pi,\dotsc,d_\pi \chi_\pi \right)= \Phi_0(f). \end{equation} On the other hand, for each $\lambda\in\Lambda$, we have \begin{equation}\label{e14} \left\Vert \varphi \left( f,h_\lambda,\dotsc,h_\lambda \right)\right\Vert \le \left\Vert \varphi \right\Vert \left\Vert h_\lambda\right\Vert^{n-1} \left\Vert f\right\Vert \le \left\Vert \varphi \right\Vert \left\Vert f \right\Vert. \end{equation} By \eqref{e13} and \eqref{e14}, \[ \left\Vert \Phi_0(f) \right\Vert \le \left\Vert \varphi \right\Vert \left\Vert f \right\Vert, \] which gives the continuity of $\Phi_0$. Since $\mathcal{T}(G)$ is dense in $L^1(G)$ and $\Phi_0$ is continuous, there exists a unique continuous linear map $\Phi\colon L^1(G)\to X$ which extends $\Phi_0$. Since both $P$ and $\Phi$ are continuous, \eqref{e12} gives $P(f)=\Phi \left(f^{*n} \right)$ for each $f\in L^1(G)$. Our final task is to prove the uniqueness of the map $\Phi$. 
Suppose that $\Psi\colon L^1(G)\to X$ is a continuous linear map such that $P(f)=\Psi\left( f^{*n} \right) $ for each $f\in L^1(G)$. By Theorem~\ref{t1}, $\Psi(f)=\Phi(f)\bigl(=\Phi_0(f)\bigr)$ for each $f\in\mathcal{T}(G)$. Since $\mathcal{T}(G)$ is dense in $L^1(G)$, and both $\Phi$ and $\Psi$ are continuous, it follows that $\Psi(f)=\Phi(f)$ for each $f\in L^1(G)$. \end{proof} \section{Orthogonally additive polynomials on the convolution algebras $L^p(\mathbb{T})$, $1<p\le\infty$, and $C(\mathbb{T})$} The next examples show that, if $A$ is any of the convolution algebras $L^p(\mathbb{T})$, for $1< p\le\infty$, or $C(\mathbb{T})$, then there exists an orthogonally additive, continuous $2$-homogeneous polynomial $P\colon A\to\mathbb{C}$ which cannot be expressed in the form $P(f)=\Phi \left(f\ast f\right)$ $(f\in A)$ for any continuous linear functional $\Phi\colon A\to\mathbb{C}$. Throughout this section, $\mathbb{T}$ denotes the circle group $\{z\in\mathbb{C} : \vert z\vert=1\}$, and, for $f\in L^1(\mathbb{T})$ and $k\in\mathbb{Z}$, $\widehat{f}(k)$ denotes the $k$th Fourier coefficient of $f$. For each $k\in\mathbb{Z}$, let $\chi_k\colon\mathbb{T}\to\mathbb{C}$ be the function defined by \[ \chi_k(z)=z^k \quad (z\in\mathbb{T}). \] Then \begin{equation}\label{ca00} \chi_k\ast \chi_k=\chi_k \quad (k\in\mathbb{Z}) \end{equation} and \begin{equation}\label{ca0} \widehat{\chi_k}(j)=\delta_{jk} \quad (j,k\in\mathbb{Z}). \end{equation} \begin{example}\label{ca11} Assume that $1<p<2$. Set $q=\frac{p}{p-1}$, $r=\frac{p}{2-p}$, and $s=\frac{q}{2}$, so that $\tfrac{1}{p}+\tfrac{1}{q}=1$ and $\tfrac{1}{r}+\tfrac{1}{s}=1$. Take $h\in L^p(\mathbb{T})$ such that \begin{equation}\label{ca1} \sum_{k=-\infty}^{+\infty}\vert\widehat{h}(k)\vert^s=+\infty. \end{equation} Such a choice is possible because of \cite[13.5.3(1)]{E}, since $s<q$. We claim that there exists $a\in\ell^r(\mathbb{Z})$ such that the sequence $\bigl(\sum_{k=-m}^m a(k)\widehat{h}(k)\bigr)$ does not converge. 
To see this, we define the sequence $(\phi_m)$ in the dual of $\ell^r(\mathbb{Z})$ by \[ \phi_m(a)=\sum_{k=-m}^m a(k)\widehat{h}(k) \quad (a\in\ell^r(\mathbb{Z}), \ m\in\mathbb{N}). \] It is immediate to check that \[ \Vert\phi_m\Vert=\left(\sum_{k=-m}^m\bigl\vert\widehat{h}(k)\bigr\vert^s\right)^{1/s} \quad (m\in\mathbb{N}). \] From \eqref{ca1} we deduce that $(\lVert\phi_m\rVert)$ is unbounded, and the Banach-Steinhaus theorem then shows that there exists $a\in\ell^r(\mathbb{Z})$ such that $(\phi_m(a))$ does not converge, as claimed. Let $f\in L^p(\mathbb{T})$. Then the Hausdorff-Young theorem (\cite[13.5.1]{E}) yields $\Vert\widehat{f} \, \Vert_q\le\Vert f\Vert_p$. By H\"{o}lder's inequality, we have \[ \sum_{k=-\infty}^{+\infty}\bigl\vert a(k)\widehat{f}(k)^2\bigr\vert\le \left\Vert a \right\Vert_r\bigl\Vert\widehat{f}^{\ 2}\bigr\Vert_s = \left\Vert a\right\Vert_r\bigl\Vert\widehat{f} \, \bigr\Vert_q^2\le \left\Vert a\right\Vert_r \left\Vert f \right\Vert_p^2. \] This allows us to define an orthogonally additive continuous $2$-homogeneous polynomial $P\colon L^p(\mathbb{T})\to\mathbb{C}$ by \[ P(f)= \sum_{k=-\infty}^{+\infty}a(k)\widehat{f\ast f}(k)= \sum_{k=-\infty}^{+\infty}a(k)\widehat{f}(k)^2 \quad (f\in L^p(\mathbb{T})). \] Suppose that $P$ can be expressed as $P(f)=\Phi(f\ast f)$ $(f\in L^p(\mathbb{T}))$ for some continuous linear functional $\Phi\colon L^p(\mathbb{T})\to\mathbb{C}$. By \eqref{ca00} and \eqref{ca0}, we have \[ \Phi(\chi_k)=\Phi(\chi_k\ast \chi_k)=P(\chi_k)= \sum_{j=-\infty}^{+\infty}a(j)\widehat{\chi_k}(j)= \sum_{j=-\infty}^{+\infty}a(j){\delta_{jk}}= a(k) \] for each $k\in\mathbb{Z}$. If $f\in L^p(\mathbb{T})$, then Riesz's theorem~\cite[12.10.1]{E} shows that the sequence $(\sum_{k=-m}^m\widehat{f}(k)\chi_k)$ converges to $f$ in $L^p(\mathbb{T})$. 
Since $\Phi$ is continuous, it follows that the sequence \[ \left(\Phi\left(\sum_{k=-m}^m\widehat{f}(k)\chi_k\right)\right)= \left(\sum_{k=-m}^m\widehat{f}(k)\Phi(\chi_k)\right)= \left(\sum_{k=-m}^m\widehat{f}(k)a(k)\right) \] converges to $\Phi(f)$. In particular, the sequence $\bigl(\sum_{k=-m}^m\widehat{h}(k)a(k)\bigr)$ is convergent, which contradicts the choice of $h$. \end{example} \begin{example}\label{ca12} Assume that $2\le p<\infty$. If $f\in L^p(\mathbb{T})$, then $f\in L^2(\mathbb{T})$ and therefore $\Vert\widehat{f} \, \Vert_2=\Vert f\Vert_2\le\Vert f\Vert_p$. This allows us to define an orthogonally additive continuous $2$-homogeneous polynomial $P\colon L^p(\mathbb{T})\to\mathbb{C}$ by \begin{equation*} P(f)= \sum_{k=-\infty}^{+\infty}\widehat{f\ast f}(k) = \sum_{k=-\infty}^{+\infty}\widehat{f}(k)^2 \quad (f\in L^p(\mathbb{T})). \end{equation*} Suppose that $P$ can be represented in the form $P(f)=\Phi(f\ast f)$ $(f\in L^p(\mathbb{T}))$ for some continuous linear functional $\Phi\colon L^p(\mathbb{T})\to\mathbb{C}$. Let $h\in L^q(\mathbb{T})$, where $q=\frac{p}{p-1}$, be such that \begin{equation}\label{ca3} \Phi(f)=\int_\mathbb{T}f(z)h(z) \, dz \quad (f\in L^p(\mathbb{T})). \end{equation} By \eqref{ca00} and \eqref{ca0}, for each $k\in\mathbb{Z}$, we have \[ \Phi(\chi_k)=\Phi(\chi_k\ast \chi_k)=P(\chi_k)= \sum_{j=-\infty}^{+\infty}\widehat{\chi_k}(j)= \sum_{j=-\infty}^{+\infty}{\delta_{jk}}= 1, \] and \eqref{ca3} then yields \begin{equation*} \widehat{h}(k)=\int_\mathbb{T}z^{-k}h(z) \, dz=\Phi(\chi_{-k})=1, \end{equation*} contrary to the Riemann-Lebesgue lemma. \end{example} \begin{example} If $f\in L^\infty(\mathbb{T})$, then $f\in L^2(\mathbb{T})$ and therefore $\Vert\widehat{f} \, \Vert_2=\Vert f\Vert_2\le\Vert f\Vert_\infty$, which implies that \[ \sum_{k=0}^{+\infty}\bigl\vert\widehat{f}(-k)\bigr\vert^2\le \sum_{k=-\infty}^{+\infty}\bigl\vert\widehat{f}(k)\bigr\vert^2\le\Vert f\Vert_\infty^2.
\] Hence we can define an orthogonally additive continuous $2$-homogeneous polynomial $P\colon L^\infty(\mathbb{T})\to\mathbb{C}$ by \[ P(f)= \sum_{k=0}^{+\infty}\widehat{f\ast f}(-k)= \sum_{k=0}^{+\infty}\widehat{f}(-k)^2 \quad (f\in L^\infty(\mathbb{T})). \] Suppose that $P$ can be represented in the form $P(f)=\Phi(f\ast f)$ $(f\in L^\infty(\mathbb{T}))$ for some continuous linear functional $\Phi\colon L^\infty(\mathbb{T})\to\mathbb{C}$. The restriction of $\Phi$ to $C(\mathbb{T})$ gives a continuous linear functional on $C(\mathbb{T})$ and therefore there exists a measure $\mu\in M(\mathbb{T})$ such that \begin{equation}\label{ca2} \Phi(f)=\int_\mathbb{T}f(z) \, d\mu(z) \quad (f\in C(\mathbb{T})). \end{equation} By \eqref{ca00} and \eqref{ca0}, for each $k\in\mathbb{Z}$, we have \[ \Phi(\chi_k)=\Phi(\chi_k\ast\chi_k)=P(\chi_k)= \sum_{j=0}^{+\infty}\widehat{\chi_k}(-j)= \sum_{j=0}^{+\infty}{\delta_{-jk}}= \begin{cases} 1 & \text{if $k\le 0$,}\\ 0 & \text{if $k>0$,} \end{cases} \] and \eqref{ca2} then yields \begin{equation*} \widehat{\mu}(k)= \int_\mathbb{T}z^{-k} \, d\mu (z)= \Phi(\chi_{-k})= \begin{cases} 1 & \text{if $k\ge 0$,}\\ 0 & \text{if $k<0$.} \end{cases} \end{equation*} This contradicts the fact that the series $\sum_{k\ge 0}\chi_k$ is not a Fourier-Stieltjes series (see \cite[Example~12.7.8]{E}). It should be pointed out that we have actually shown that neither $P$ nor the restriction of $P$ to $C(\mathbb{T})$ can be represented in the standard form. \end{example} \section{The largest appropriate norm topology on $\mathcal{P}_n(A)$} Since Question Q2 has been settled in the negative for the algebras $L^p(G)$, with $1<p\le\infty$, and $C(G)$, it is reasonable to attempt to explore Question Q3 for them. For this purpose, in this section, for each Banach algebra $A$, we make an appropriate choice of norm on $\mathcal{P}_n(A)$. \begin{theorem}\label{g} Let $A$ be a Banach algebra.
Then \begin{equation*} \mathcal{P}_n(A)= \biggl\{ \sum_{j=1}^m a_j^n : a_1,\dotsc,a_m\in A, \ m\in\mathbb{N} \biggr\} \end{equation*} and the formula \begin{equation*} \Vert a\Vert_{\mathcal{P}_n}= \inf\biggl\{ \sum_{j=1}^m \left\Vert a_j \right\Vert^n : a=\sum_{j=1}^m a_j^n \biggr\}, \end{equation*} for each $a\in\mathcal{P}_n(A)$, defines a norm on $\mathcal{P}_n(A)$ such that \begin{equation*} \Vert a^n\Vert_{\mathcal{P}_n}\le \left\Vert a\right\Vert^n \quad (a\in A) \end{equation*} and \begin{equation*} \left\Vert a \right\Vert \le \left\Vert a \right\Vert_{\mathcal{P}_n} \quad (a\in\mathcal{P}_n(A)). \end{equation*} Further, the following statements hold. \begin{enumerate} \item Suppose that $\Phi\colon\mathcal{P}_n(A)\to X$ is a $\norm{\cdot}_{\mathcal{P}_n}$-continuous linear map for some Banach space $X$. Then the map $P\colon A\to X$ defined by $P(a)=\Phi \left(a^n \right)$ $(a\in A)$ is an orthogonally additive continuous $n$-homogeneous polynomial with $\Vert P\Vert=\Vert\Phi\Vert$. \item Suppose that $\tnorma{\cdot}$ is a norm on $\mathcal{P}_n(A)$ for which the answer to Question Q3 is positive. Then there exist $M_1,M_2\in\mathbb{R}^+$ such that \[ M_1 \left\Vert a \right\Vert \le \vert\vert\vert a\vert\vert\vert \le M_2 \left\Vert a \right\Vert_{\mathcal{P}_n} \quad (a\in\mathcal{P}_n(A)). \] \end{enumerate} \end{theorem} \begin{proof} Let $a_1,\dotsc,a_m\in A$ and $\alpha_1,\dotsc,\alpha_m\in\mathbb{C}$. Take $\beta_1,\dotsc,\beta_m\in\mathbb{C}$ such that $\beta_j^n=\alpha_j$ $(j\in\{1,\dotsc,m\})$. Then \[ \sum_{j=1}^m\alpha_j a_j^n=\sum_{j=1}^m \left( \beta_j a_j \right)^n, \] which establishes the first equality of the result. Take $a\in\mathcal{P}_n(A)$, and let $a_1,\dotsc,a_m\in A$ be such that $a=\sum_{j=1}^m a_j^n$. Then \[ \left\Vert a \right\Vert \le \sum_{j=1}^m \left\Vert a_j^n \right\Vert \le \sum_{j=1}^m \left\Vert a_j \right\Vert^n, \] which proves that $\left\Vert a \right\Vert \le \left\Vert a \right\Vert_{\mathcal{P}_n}$. 
In particular, if $a\in A$ is such that $\left\Vert a \right\Vert_{\mathcal{P}_n}=0$, then we have $a=0$. Set $a\in\mathcal{P}_n(A)$ and $\alpha\in\mathbb{C}$. We proceed to show that $\left\Vert \alpha a \right\Vert_{\mathcal{P}_n}=\left\vert \alpha \right\vert \left\Vert a \right\Vert_{\mathcal{P}_n}$. Of course, we can assume that $\alpha\ne 0$. Choose $\beta\in\mathbb{C}$ such that $\beta^n=\alpha$. If $a_1,\dotsc,a_m\in A$ are such that $a=\sum_{j=1}^m a_j^n$ then $\alpha a=\sum_{j=1}^m \left( \beta a_j\right)^n$ and therefore \[ \left\Vert\alpha a\right\Vert_{\mathcal{P}_n}\le \sum_{j=1}^m \left\Vert\beta a_j \right\Vert^n= \sum_{j=1}^m \left\vert \alpha \right\vert \left\Vert a_j\right\Vert^n, \] which implies that $\left\Vert \alpha a \right\Vert_{\mathcal{P}_n}\le \left\vert \alpha \right\vert \left\Vert a \right\Vert_{\mathcal{P}_n}$. On the other hand, \[ \left\Vert a \right\Vert_{\mathcal{P}_n}= \left\Vert \alpha^{-1}(\alpha a) \right\Vert_{\mathcal{P}_n}\le \left\vert\alpha\right\vert^{-1} \left\Vert\alpha a \right\Vert_{\mathcal{P}_n}, \] which gives the converse inequality. Let $a$, $b\in\mathcal{P}_n(A)$. Our goal is to prove that $\left\Vert a+b \right\Vert_{\mathcal{P}_n}\le \left\Vert a \right\Vert_{\mathcal{P}_n}+ \left\Vert b \right\Vert_{\mathcal{P}_n}$. To this end, set $\varepsilon\in\mathbb{R}^+$, and choose $a_1,\dotsc,a_l,b_1,\dotsc,b_m\in A$ such that \[ a=\sum_{j=1}^l a_j^n, \quad b=\sum_{j=1}^m b_j^n, \] and \[ \sum_{j=1}^l \left\Vert a_j\right\Vert^n<\left\Vert a\right\Vert_{\mathcal{P}_n}+\varepsilon/2, \quad \sum_{j=1}^m \left\Vert b_j\right\Vert^n<\left\Vert b\right\Vert_{\mathcal{P}_n}+\varepsilon/2. 
\] Then we have \[ a+b=\sum_{j=1}^l a_j^n+\sum_{j=1}^m b_j^n \] and therefore \[ \left\Vert a+b \right\Vert_{\mathcal{P}_n}\le \sum_{j=1}^l \left\Vert a_j \right\Vert^n + \sum_{j=1}^m \left\Vert b_j\right \Vert^n\le \left\Vert a \right\Vert_{\mathcal{P}_n} + \left\Vert b \right\Vert_{\mathcal{P}_n}+\varepsilon, \] which yields $\left\Vert a+b \right\Vert_{\mathcal{P}_n}\le \left\Vert a \right\Vert_{\mathcal{P}_n} + \left\Vert b \right\Vert_{\mathcal{P}_n}$. Thus $\norm{\cdot}_{\mathcal{P}_n}$ is a norm on $\mathcal{P}_n(A)$. The space $\mathcal{P}_n(A)$ is equipped with this norm for the remainder of this proof. It is clear that $\Vert a^n\Vert_{\mathcal{P}_n}\le\Vert a\Vert^n$ for each $a\in A$. This property allows us to establish the statement $(1)$. Suppose that $X$ is a Banach space and that $\Phi\colon\mathcal{P}_n(A)\to X$ is a continuous linear map. Define $P\colon A\to X$ by $P(a)=\Phi \left(a^n \right)$ $(a\in A)$. Take $a\in A$. Then we have \[ \norm{P(a)} = \norm{\Phi\left( a^n \right) } \le \left\Vert \Phi \right\Vert \left\Vert a^n \right\Vert_{\mathcal{P}_n}\le \left\Vert \Phi \right\Vert \left\Vert a\right\Vert^n, \] which shows that the polynomial $P$ is continuous and that $\Vert P\Vert\le\Vert\Phi\Vert$. On the other hand, take $a\in\mathcal{P}_n(A)$, and let $a_1,\dotsc,a_m\in A$ be such that $a=\sum_{j=1}^m a_j^n$. Then we have \[ \Phi(a)=\sum_{j=1}^m\Phi(a_j^n)=\sum_{j=1}^m P(a_j), \] whence \[ \Vert\Phi(a)\Vert\le \sum_{j=1}^m \left\Vert P(a_j)\right\Vert \le \sum_{j=1}^m \left\Vert P \right\Vert \left\Vert a_j\right\Vert^n= \Vert P\Vert\sum_{j=1}^m \left\Vert a_j \right\Vert^n. \] This shows that $\left\Vert \Phi(a) \right\Vert \le \left\Vert P \right\Vert \left\Vert a \right\Vert_{\mathcal{P}_n}$, hence that $\left\Vert \Phi \right\Vert \le \left\Vert P \right\Vert$, and finally that $\Vert\Phi\Vert=\Vert P\Vert$. We now prove $(2)$.
Suppose that $\tnorma{\cdot}$ is a norm on $\mathcal{P}_n(A)$ which satisfies the conditions: \begin{itemize} \item for each Banach space $X$ and each $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to X$, the polynomial $P\colon A\to X$ defined by $P(a)=\Phi\left( a^n \right) $ $(a\in A)$ is continuous, and \item every orthogonally additive continuous $n$-homogeneous polynomial $P$ from $A$ into each Banach space $X$ can be expressed as $P(a)=\Phi\left( a^n \right) $ $(a\in A)$ for some $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to X$. \end{itemize} The canonical polynomial $P_n\colon A\to A$ is an orthogonally additive continuous $n$-homogeneous polynomial. Hence there exists a $\tnorma{\cdot}$-continuous linear map $\Phi\colon\mathcal{P}_n(A)\to A$ such that $P_n(a)=\Phi\left( a^n \right) $ $(a\in A)$. This implies that $\Phi$ is the inclusion map from $\mathcal{P}_n(A)$ into $A$, and the continuity of this map yields $M_1\in\mathbb{R}^+$ such that $M_1\Vert a\Vert\le\vert\vert\vert a\vert\vert\vert$ for each $a\in\mathcal{P}_n(A)$. We now take $X$ to be the completion of the normed space $(\mathcal{P}_n(A),\tnorma{\cdot})$, and let $\Phi\colon \mathcal{P}_n(A)\to X$ be the inclusion map. Then, by hypothesis, the polynomial $P\colon A\to X$ defined by $P(a)=\Phi\left( a^n \right) $ $(a\in A)$ is continuous, and, in consequence, there exists $M_2\in\mathbb{R}^+$ such that $\vert\vert\vert a^n\vert\vert\vert\le M_2\Vert a\Vert^n$ for each $a\in A$. Take $a\in\mathcal{P}_n(A)$, and let $a_1,\dotsc,a_m\in A$ be such that $a=\sum_{j=1}^m a_j^n$. Then we have \[ \vert\vert\vert a\vert\vert\vert\le \sum_{j=1}^m\vert\vert\vert a_j^n \vert\vert\vert\le \sum_{j=1}^m M_2 \left\Vert a_j \right\Vert^n, \] which yields $\vert\vert\vert a\vert\vert\vert\le M_2\Vert a\Vert_{\mathcal{P}_n}$, and completes the proof.
\end{proof} From now on $\mathcal{P}_n(A)$ will be equipped with the norm $\lVert\cdot\rVert_{\mathcal{P}_n}$ defined in Theorem~\ref{g}. \begin{proposition}\label{p1827} Let $A$ be a Banach algebra. Then \[ \mathcal{P}_n(A)= \biggl\{ \sum_{j=1}^m S_n(a_{1,j},\dotsc,a_{n,j}) : a_{1,j},\dotsc,a_{n,j}\in A, \ m\in\mathbb{N} \biggr\} \] and the formula \begin{equation*} \Vert a\Vert_{\mathcal{S}_n}= \inf\biggl\{ \sum_{j=1}^m \left\Vert a_{1,j} \right\Vert \cdots \left\Vert a_{n,j} \right\Vert : a= \sum_{j=1}^m S_n(a_{1,j},\dotsc,a_{n,j}) \biggr\}, \end{equation*} for each $a\in\mathcal{P}_n(A)$, defines a norm on $\mathcal{P}_n(A)$ such that \begin{equation*} \left\Vert a \right\Vert \le \left\Vert a \right\Vert_{\mathcal{S}_n}\le \left\Vert a \right\Vert_{\mathcal{P}_n}\le \frac{n^n}{n!} \left\Vert a \right\Vert_{\mathcal{S}_n} \quad \left(a\in\mathcal{P}_n(A)\right). \end{equation*} \end{proposition} \begin{proof} Let $\mathcal{S}_n(A)$ denote the set on the right-hand side of the first identity in the result. Take $a\in\mathcal{P}_n(A)$. Then there exist $a_1,\dotsc,a_m\in A$ such that \[ a=\sum_{j=1}^m a_j^n=\sum_{j=1}^m S_n(a_j,\dotsc,a_j)\in\mathcal{S}_n(A). \] This also implies that $\left\Vert a \right\Vert_{\mathcal{S}_n}\le\sum_{j=1}^m \left\Vert a_j \right\Vert \dotsb \left\Vert a_j \right\Vert$, and hence that $\left\Vert a \right\Vert_{\mathcal{S}_n}\le \left\Vert a \right\Vert_{\mathcal{P}_n}$. We now take $a\in\mathcal{S}_n(A)$. Then there exist $a_{1,j},\dotsc,a_{n,j}\in A$ $(j\in\{1,\dotsc,m\})$ such that $a=\sum_{j=1}^m S_n(a_{1,j},\dotsc,a_{n,j})$.
The polarization formula gives \begin{equation*} \begin{split} a & = \sum_{j=1}^m \frac{1}{n!\,2^n}\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \epsilon_{1}\cdots\epsilon_{n} \left(\epsilon_1a_{1,j}+\dots+\epsilon_{n}a_{n,j} \right)^n\\ & = \sum_{j=1}^m \sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \Bigl( \bigl(\tfrac{1}{n!\,2^n}\bigr)^{1/n}\epsilon_{1}^{1/n}\cdots\epsilon_{n}^{1/n}\left(\epsilon_1a_{1,j}+\dotsb+\epsilon_{n}a_{n,j} \right)\Bigr)^n \in \mathcal{P}_n(A). \end{split} \end{equation*} The proof of the fact that $\left\Vert \cdot \right\Vert_{\mathcal{S}_n}$ is a norm on $\mathcal{P}_n(A)$ is similar to the corresponding argument in the proof of Theorem~\ref{g}, and it is omitted. Our next objective is to prove the inequalities relating the three norms $\norm{\cdot}$, $\norm{\cdot}_{\mathcal{S}_n}$, and $\norm{\cdot}_{\mathcal{P}_n}$. Take $a\in\mathcal{P}_n(A)$. We have already shown that $\left\Vert a \right\Vert_{\mathcal{S}_n} \le \left\Vert a \right\Vert_{\mathcal{P}_n}$. Let $a_{1,j},\dotsc, a_{n,j}\in A$ $(j\in\{1,\dotsc,m\})$ be such that $a=\sum_{j=1}^m S_n(a_{1,j},\dotsc, a_{n,j})$. Then \[ \norm{a} \le \sum_{j=1}^m \left\Vert S_n \left(a_{1,j},\dotsc, a_{n,j} \right) \right\Vert \le \sum_{j=1}^m \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert, \] which shows that $\norm{a} \le \left\Vert a \right\Vert_{\mathcal{S}_n}$. For $l\in\left\{1,\dotsc, n \right\}$ and $j\in \left\{ 1,\dotsc, m \right\}$, we take $b_{l,j}=a_{l,j}/\left\Vert a_{l,j} \right\Vert$ if $a_{l,j}\ne 0$ and $b_{l,j}=0$ otherwise.
Then \[ a=\sum_{j=1}^m S_n \left(a_{1,j},\dotsc,a_{n,j} \right) = \sum_{j=1}^m \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert S_n \left(b_{1,j},\dotsc, b_{n,j} \right), \] and the polarization formula gives \begin{multline*} a= \sum_{j=1}^m \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \frac{1}{n!\,2^n}\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \epsilon_{1}\cdots\epsilon_{n} \left( \epsilon_1b_{1,j}+ \dotsb +\epsilon_{n}b_{n,j} \right)^n \\ = \sum_{j=1}^m\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \Bigl(\bigl( \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \tfrac{1}{n!\,2^n}\bigr)^{1/n} \epsilon_{1}^{1/n} \dotsb \epsilon_{n}^{1/n} \bigl(\epsilon_1b_{1,j}+\dotsb +\epsilon_{n}b_{n,j}\bigr)\Bigr)^{\!n}. \end{multline*} We thus get \begin{align*} \left\Vert a \right\Vert_{\mathcal{P}_n} & \le \sum_{j=1}^m\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \Bigl\Vert\bigl( \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \tfrac{1}{n!\,2^n}\bigr)^{1/n} \epsilon_{1}^{1/n}\dotsb\epsilon_{n}^{1/n} \bigl(\epsilon_1b_{1,j}+ \dotsb +\epsilon_{n}b_{n,j}\bigr)\Bigr\Vert^n \\ & = \sum_{j=1}^m\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \frac{1}{n!\,2^n} \bigl\Vert\epsilon_1b_{1,j}+\dotsb +\epsilon_{n}b_{n,j}\bigr\Vert^n \\ & \le \sum_{j=1}^m\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \frac{1}{n!\,2^n} n^n= \sum_{j=1}^m \left\Vert a_{1,j} \right\Vert \dotsb \left\Vert a_{n,j} \right\Vert \frac{n^n}{n!}, \end{align*} which shows that $\norm{a}_{\mathcal{P}_n}\le\frac{n^n}{n!}\norm{a}_{\mathcal{S}_n}$. \end{proof} \begin{proposition}\label{p19} Let $A$ be a Banach algebra with a central bounded approximate identity of bound $M$. Then $\mathcal{P}_n(A)=A$ and $\norm{\cdot} \le \norm{\cdot}_{\mathcal{S}_n} \le M^{n-1} \norm{\cdot}$. 
\end{proposition} \begin{proof} Take $a\in A$, and let $\varepsilon\in\mathbb{R}^+$. Our objective is to show that $a\in \mathcal{P}_n(A)$ and that $\left\Vert a \right\Vert_{\mathcal{S}_n}<M^{n-1}\bigl(\left\Vert a \right\Vert +\varepsilon\bigr)$. Let $\mathcal{Z}(A)$ be the centre of $A$, and let $(e_\lambda)_{\lambda\in\Lambda}$ be a central bounded approximate identity for $A$ of bound $M$. Then $\mathcal{Z}(A)$ is a Banach algebra and $(e_\lambda)_{\lambda\in\Lambda}$ is a bounded approximate identity for $\mathcal{Z}(A)$. Of course, $A$ is a Banach $\mathcal{Z}(A)$-bimodule and $\mathcal{Z}(A)A$ is dense in $A$. We now take $a_0=a$, and then we successively apply the factorization theorem~\cite[Theorem~2.9.24]{D} to choose $a_1,\dotsc,a_{n-1}\in A$ and $z_1,\dotsc,z_{n-1}\in\mathcal{Z}(A)$ such that \begin{gather} a_{k-1}=z_k a_k, \\ \left\Vert a_{k-1}-a_{k} \right\Vert < \frac{\varepsilon}{n}, \end{gather} and \begin{equation} \Vert z_k\Vert\le M \end{equation} for each $k\in\{1,\dotsc,n-1\}$. Then \[ a=S_n \left(z_1,\dotsc,z_{n-1},a_{n-1} \right) \] and therefore $a\in\mathcal{P}_n(A)$ with \begin{equation*} \begin{split} \left\Vert a \right\Vert_{\mathcal{S}_n} & \leq \left\Vert z_1 \right\Vert \dotsb \left\Vert z_{n-1} \right\Vert \left\Vert a_{n-1} \right\Vert \\ & \leq M^{n-1} \left\Vert a_{n-1} \right\Vert \\ & \leq M^{n-1}\bigl( \left\Vert a_{n-1}-a_{n-2} \right\Vert + \dotsb + \left\Vert a_1-a \right\Vert + \left\Vert a \right\Vert\bigr)\\ & < M^{n-1}\bigl( \left\Vert a \right\Vert+\varepsilon\bigr), \end{split} \end{equation*} as claimed. Further, on account of Theorem~\ref{g}, we have \[ \left\Vert a \right\Vert \le \left\Vert a \right\Vert_{\mathcal{S}_n} \le M^{n-1}\bigl( \left\Vert a \right\Vert+\varepsilon\bigr) \] for each $\varepsilon\in\mathbb{R}^+$, which gives $\left\Vert a \right\Vert \le \left\Vert a \right\Vert_{\mathcal{S}_n}\le M^{n-1} \left\Vert a \right\Vert$. \end{proof} \begin{corollary}\label{p1} Let $G$ be a compact group. 
Then $\mathcal{P}_n\bigl(L^1(G)\bigr)=L^1(G)$ and $\left\Vert \cdot \right\Vert_{\mathcal{S}_n}= \left\Vert \cdot \right\Vert_1$. \end{corollary} \begin{proof} The net $(h_\lambda)_{\lambda\in\Lambda}$ introduced in the beginning of Section~\ref{sect} is a central bounded approximate identity for $L^1(G)$ of bound $1$. Then Proposition~\ref{p19} applies to show the result. \end{proof} \section{Some other examples of convolution algebras} Sections 2 and 5 combine to answer Question Q3 in the positive for a variety of convolution algebras associated with $G$. \begin{lemma}\label{1319} Let $G$ be a compact group, and let $A$ be a subalgebra of $L^1(G)$ which is equipped with a norm $\left\Vert \cdot \right\Vert_A$ of its own and satisfies the following conditions: \begin{enumerate} \item[(a)] $A$ is a Banach algebra with respect to $\left\Vert \cdot \right\Vert_A$; \item[(b)] $\mathcal{T}(G)$ is a dense subspace of $A$ with respect to $\left\Vert \cdot \right\Vert_A$. \end{enumerate} Then $\mathcal{T}(G)$ is a dense subset of $\mathcal{P}_n(A)$. \end{lemma} \begin{proof} For $\left[\pi \right]\in\widehat{G}$ and $f\in\mathcal{T}_\pi(G)$, the polarization formula and Lemma~\ref{l1}(1) yield \begin{equation*} \begin{split} f & = S_n \left(f,d_\pi\chi_\pi,\dotsc,d_\pi\chi_\pi \right)\\ & = \frac{1}{n!\,2^n}\sum_{\epsilon_1,\ldots,\epsilon_n=\pm 1} \epsilon_{1}\cdots\epsilon_{n} \left(\epsilon_1f+\epsilon_2 d_\pi\chi_\pi+\dotsb +\epsilon_{n}d_\pi\chi_\pi \right)^{*n}. \end{split} \end{equation*} This shows that $\mathcal{T}_\pi(G)$ is contained in the linear span of the set $\bigl\{f^{*n} : f\in\mathcal{T}_\pi(G)\bigr\}$, which implies that $\mathcal{T}(G)\subset\mathcal{P}_n(A)$.
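For instance, when $n=2$ the identity above reads
\begin{equation*}
f=S_2 \left(f,d_\pi\chi_\pi \right)= \tfrac{1}{4}\Bigl( \left(f+d_\pi\chi_\pi \right)^{*2}- \left(f-d_\pi\chi_\pi \right)^{*2}\Bigr) \quad \left(f\in\mathcal{T}_\pi(G) \right).
\end{equation*}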
On the other hand, if $f,g\in A$, then \begin{equation*} \begin{split} f^{*n} & = P_n\bigl(g+(f-g)\bigr)\\ & = \sum_{j=0}^{n}\binom{n}{j} S_n (\underbrace{g,\dotsc,g}_{n-j},\underbrace{f-g,\dotsc,f-g}_{j})\\ & = g^{*n}+\sum_{j=1}^{n}\binom{n}{j} S_n(\underbrace{g,\dotsc,g}_{n-j},\underbrace{f-g,\dotsc,f-g}_{j}) \end{split} \end{equation*} and therefore \begin{equation*} \left\Vert f^{*n}-g^{*n} \right\Vert_{\mathcal{S}_n}\le \sum_{j=1}^{n}\binom{n}{j} \left\Vert g \right\Vert_{A}^{n-j} \left\Vert f-g \right\Vert_{A}^{j}. \end{equation*} From Proposition~\ref{p1827}, we deduce that \begin{equation}\label{1452} \left\Vert f^{*n}-g^{*n} \right\Vert_{\mathcal{P}_n}\le \frac{n^n}{n!} \sum_{j=1}^{n}\binom{n}{j} \left\Vert g \right\Vert_{A}^{n-j} \left\Vert f-g \right\Vert_{A}^{j}. \end{equation} Let $f\in A$. Then there exists a sequence $(f_k)$ in $\mathcal{T}(G)$ such that $\left\Vert f-f_k \right\Vert_A\to 0$, and~\eqref{1452} then gives $\left\Vert f^{*n}-f_k^{*n} \right\Vert_{\mathcal{P}_n}\to 0$. This implies that $\mathcal{T}(G)$ is dense in $\mathcal{P}_n(A)$. \end{proof} \begin{theorem}\label{tf} Let $G$ be a compact group, and let $A$ be a subalgebra of $L^1(G)$ which is equipped with a norm $\left\Vert \cdot \right\Vert_A$ of its own and satisfies the following conditions: \begin{enumerate} \item[(a)] $A$ is a Banach algebra with respect to $\left\Vert \cdot \right\Vert_A$; \item[(b)] $\mathcal{T}(G)$ is a dense subspace of $A$ with respect to $\left\Vert \cdot \right\Vert_A$; \item[(c)] $A$ is a Banach left $L^1(G)$-module with respect to $\left\Vert \cdot \right\Vert_A$ and the convolution multiplication. \end{enumerate} Let $X$ be a Banach space, and let $P\colon A\to X$ be a continuous $n$-homogeneous polynomial. 
Then the following conditions are equivalent: \begin{enumerate} \item the polynomial $P$ is orthogonally additive; \item the polynomial $P$ is orthogonally additive on $\mathcal{T}(G)$, i.e., $P(f+g)=P(f)+P(g)$ whenever $f,g\in\mathcal{T}(G)$ are such that $f\ast g=g\ast f=0$; \item there exists a unique continuous linear map $\Phi\colon\mathcal{P}_n(A)\to X$ such that $P(f)=\Phi\left( f^{*n} \right)$ for each $f\in A$. \end{enumerate} \end{theorem} \begin{proof} It is clear that $(1)\Rightarrow(2)$ and that $(3)\Rightarrow(1)$. We will prove that $(2)\Rightarrow(3)$. Let $\varphi\colon A^n\to X$ be the symmetric $n$-linear map associated with $P$, and let $\Phi_0\colon\mathcal{T}(G)\to X$ be the linear map defined by \begin{equation*} \Phi_0(f)= \sum_{[\pi]\in\widehat{G}}\varphi \left( d_\pi f\ast\chi_\pi,d_\pi\chi_\pi,\dotsc,d_\pi \chi_\pi \right) \end{equation*} for each $f\in\mathcal{T}(G)$. Since $P$ is orthogonally additive on $\mathcal{T}(G)$, Theorem~\ref{t1} yields \begin{equation}\label{e1056} P(f)=\Phi_0 \left(f^{*n} \right) \quad \left( f\in\mathcal{T}(G) \right). \end{equation} We claim that $\Phi_0$ is continuous. Let $(h_\lambda)_{\lambda\in\Lambda}$ be as introduced in the beginning of Section~\ref{sect}. Set $f\in\mathcal{T}(G)$, and assume that $f=\sum_{j=1}^m f_j^{*n}$ with $f_1,\dotsc ,f_m\in A$. For $\lambda\in\Lambda$, $h_\lambda$ belongs to the centre of $L^1(G)$, and so \[ h_\lambda^{*n}\ast f=\sum_{j=1}^m \bigl( h_\lambda\ast f_j \bigr)^{*n}.
\] Since $f_j\ast h_\lambda\in\mathcal{T}(G)$ $(j\in\{1,\dotsc,m\}, \ \lambda\in\Lambda)$, \eqref{e1056} yields \[ \Phi_0 \left( h_\lambda^{*n}\ast f \right)= \sum_{j=1}^m\Phi_0\bigl( \left(h_\lambda\ast f_j \right)^{*n}\bigr)= \sum_{j=1}^m P \left( h_\lambda\ast f_j \right), \] whence \begin{equation*} \begin{split} \left\Vert \Phi_0 \left( h_\lambda^{*n}\ast f \right) \right\Vert & \le \sum_{j=1}^m \left\Vert P(h_\lambda\ast f_j)\right\Vert \leq \sum_{j=1}^m \left\Vert P\right\Vert \left\Vert h_\lambda\ast f_j \right\Vert_A^n \\ &\leq \sum_{j=1}^m \Vert P\Vert \left\Vert h_\lambda\right\Vert_1^n \left\Vert f_j\right\Vert_A^n \leq \left\Vert P \right\Vert\sum_{j=1}^m \left\Vert f_j\right\Vert_A^n. \end{split} \end{equation*} Taking the infimum over all such representations of $f$, we thus get \begin{equation}\label{1126} \left\Vert \Phi_0 \left( h_\lambda^{*n}\ast f \right) \right\Vert \le \left\Vert P \right\Vert \left\Vert f \right\Vert_{\mathcal{P}_n}. \end{equation} We now see that, for each $\lambda\in\Lambda$, \begin{equation*} \begin{split} \left\Vert h_\lambda^{*n}\ast f-f\right\Vert_A & \le \left\Vert h_\lambda^{*n}\ast f-h_\lambda^{*n-1}\ast f\right\Vert_A\\ & \quad {}+ \dotsb + \left\Vert h_\lambda^{*2}\ast f-h_\lambda\ast f \right\Vert_A+ \left\Vert h_\lambda\ast f-f\right\Vert_A \\ & \le \bigl(\left\Vert h_\lambda\right\Vert_1^{n-1}+\dots+ \left\Vert h_\lambda\right\Vert_1+1\bigr) \left\Vert h_\lambda\ast f-f\right\Vert_A \\ & \le n\left\Vert h_\lambda\ast f-f\right\Vert_A. \end{split} \end{equation*} On account of \cite[Remarks~32.33(a) and 38.6(b)]{HR}, we have \[ \lim_{\lambda\in\Lambda} \left\Vert h_\lambda\ast f-f \right\Vert_A=0, \] and so \[ \lim_{\lambda\in\Lambda} \left\Vert h_\lambda^{*n}\ast f-f \right\Vert_A=0. \] Since $f\in\mathcal{T}(G)$, it follows that there exist $[\pi_1],\dotsc,[\pi_l]\in\widehat{G}$ such that $f\in\mathcal{M}:=\mathcal{T}_{\pi_1}(G)+\dotsb+\mathcal{T}_{\pi_l}(G)$.
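Note that each of the spaces $\mathcal{T}_{\pi_k}(G)$ is finite-dimensional, being spanned by the matrix coefficients of $\pi_k$, so that
\begin{equation*}
\dim\mathcal{M}\le \sum_{k=1}^{l}\dim\mathcal{T}_{\pi_k}(G)= \sum_{k=1}^{l}d_{\pi_k}^{2}<\infty.
\end{equation*}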
The finite-dimensionality of $\mathcal{M}$ implies that the restriction of $\Phi_0$ to $\mathcal{M}$ is continuous. Further, Lemma~\ref{l2}(1) shows that $\mathcal{M}$ is a two-sided ideal of $L^1(G)$, and so $h_\lambda^{*n}\ast f\in\mathcal{M}$ for each $\lambda\in\Lambda$. Therefore, taking limits on both sides of inequality \eqref{1126} (and using the continuity of $\Phi_0$ on $\mathcal{M}$), we see that \[ \left\Vert \Phi_0(f) \right\Vert \le \left\Vert P \right\Vert \left\Vert f \right\Vert_{\mathcal{P}_n}, \] which proves our claim. Since $\mathcal{T}(G)$ is dense in $\mathcal{P}_n(A)$ (Lemma~\ref{1319}), it follows that $\Phi_0$ has a continuous extension $\Phi\colon\mathcal{P}_n(A)\to X$. Take $f\in A$. There exists a sequence $(f_k)$ in $\mathcal{T}(G)$ with $\left\Vert f-f_k \right\Vert_A\to 0$, so that $P \left(f_k \right)\to P(f)$. Further, \eqref{1452} gives $\left\Vert f^{*n}-f_k^{*n} \right\Vert_{\mathcal{P}_n}\rightarrow 0$, and consequently $P \left(f_k \right) =\Phi \left(f_k^{*n} \right)\to\Phi \left( f^{*n} \right)$. Hence $\Phi \left(f^{*n} \right)=P(f)$. Finally, we proceed to prove the uniqueness of the map $\Phi$. Suppose that $\Psi\colon\mathcal{P}_n(A)\to X$ is a continuous linear map such that $P(f)=\Psi \left(f^{*n} \right)$ for each $f\in A$. By Theorem~\ref{t1}, $\Psi(f)=\Phi(f)\bigl(=\Phi_0(f)\bigr)$ for each $f\in\mathcal{T}(G)$. Since $\mathcal{T}(G)$ is dense in $\mathcal{P}_n(A)$ (Lemma~\ref{1319}), and both $\Phi$ and $\Psi$ are continuous, it follows that $\Psi(f)=\Phi(f)$ for each $f\in\mathcal{P}_n(A)$. \end{proof} \begin{example}\label{ex} Let $G$ be a compact group. The following convolution algebras satisfy the conditions required in Theorem~\ref{tf} (see \cite[Remark~38.6]{HR}). \begin{enumerate} \item For $1\le p<\infty$, the algebra $L^p(G)$. \item The algebra $C(G)$. \item The algebra $A(G)$ consisting of those functions $f\in C(G)$ of the form \[ f=g\ast h \] with $g,h\in L^2(G)$.
The norm $\left\Vert \cdot \right\Vert_{A(G)}$ on $A(G)$ is defined by
\[
\left\Vert f \right\Vert_{A(G)}=\inf \bigl\{ \left\Vert g \right\Vert_2 \left\Vert h \right\Vert_2 : f=g\ast h, \ g,h\in L^2(G) \bigr\}
\]
for each $f\in A(G)$. It is worth noting that a function $f\in L^1(G)$ is equal almost everywhere to a function in $A(G)$ if and only if
\[
\sum_{[\pi]\in\widehat{G}}d_\pi\bigl\Vert\widehat{f}(\pi)\bigr\Vert_1 < \infty.
\]
Further,
\[
\left\Vert f \right\Vert_{A(G)} = \sum_{[\pi]\in\widehat{G}}d_\pi\bigl\Vert\widehat{f}(\pi)\bigr\Vert_1
\]
for each $f\in A(G)$. Here $\left\Vert T \right\Vert_1$ denotes the trace class norm of the operator $T\in\mathcal{B}(H_\pi)$.
\item For $1<p<\infty$, the algebra $A_p(G)$ consisting of those functions $f\in C(G)$ of the form
\[
f=\sum_{k=1}^\infty g_k\ast h_k
\]
where $(g_k)$ is a sequence in $L^p(G)$, $(h_k)$ is a sequence in $L^q(G)$ with $\frac{1}{p}+\frac{1}{q}=1$, and
\[
\sum_{k=1}^\infty \left\Vert g_k \right\Vert_p \left\Vert h_k \right\Vert_q < \infty.
\]
The norm $\left\Vert \cdot \right\Vert_{A_p(G)}$ on $A_p(G)$ is defined by
\[
\left\Vert f \right\Vert_{A_p(G)}= \inf\biggl\{ \sum_{k=1}^\infty \left\Vert g_k \right\Vert_p \left\Vert h_k \right\Vert_q : f=\sum_{k=1}^\infty g_k\ast h_k \biggr\}
\]
for each $f\in A_p(G)$.
\item For $1< p<\infty$, the algebra $S_p(G)$ consisting of the functions $f\in L^1(G)$ for which
\[
\sum_{[\pi]\in\widehat{G}} d_\pi\bigl\Vert \widehat{f}(\pi)\bigr\Vert_{S^p(H_\pi)}^p < \infty.
\]
Here $\left\Vert T \right\Vert_{S^p(H_\pi)}$ denotes the $p$th Schatten norm of the operator $T\in\mathcal{B}(H_\pi)$. The norm $\left\Vert \cdot \right\Vert_{S_p(G)}$ on $S_p(G)$ is defined by
\[
\left\Vert f \right\Vert_{S_p(G)}= \left\Vert f \right\Vert_1 + \left( \sum_{[\pi]\in\widehat{G}} d_\pi\bigl\Vert \widehat{f}(\pi)\bigr\Vert_{S^p(H_\pi)}^p \right)^{1/p}
\]
for each $f\in S_p(G)$.
\end{enumerate}
\end{example}

\begin{remark}
Together with Corollary~\ref{p1}, Theorem~\ref{tf} gives Theorem~\ref{tm}.
\end{remark}

\begin{remark}
It is well known that the Banach spaces $A(G)$ and $A_p(G)$ of Example~\ref{ex} are particularly important Banach function algebras with respect to pointwise multiplication. We emphasize that while the references \cite{A1,A2,V,W,WW} apply to the problem of representing the orthogonally additive homogeneous polynomials on the Banach function algebras $A(G)$ and $A_p(G)$ with respect to pointwise multiplication, Theorem~\ref{tf} gives us information about that problem in the case where both $A(G)$ and $A_p(G)$ are regarded as noncommutative Banach algebras with respect to convolution.
\end{remark}

\begin{remark}
We do not know whether or not the conclusion of Theorem~\ref{tf} must hold for $A=L^\infty(G)$.
\end{remark}

\section*{\refname}