Abstract: The top quark mass and the flavor mixing are studied in the context of a Seesaw model of quark masses based on the gauge group $SU(2)_L \times SU(2)_R \times U(1)$. Six isosinglet quarks are introduced to give rise to the mass hierarchy of the ordinary quarks. In this scheme, we reexamine a mechanism for the generation of the top quark mass. It is shown that, in order to prevent the Seesaw mechanism from acting on the top quark, the mass parameter of its isosinglet partner must be much smaller than the breaking scale of $SU(2)_R$. As a result, the fourth lightest up quark must have a mass of the order of the breaking scale of $SU(2)_R$, and a large mixing between the right-handed top quark and its singlet partner occurs. We also show that this mechanism is compatible with the mass spectrum of the light quarks and their flavor mixing.
A generalization of the concept of an [[Affine variety|affine variety]], which plays the role of a local object in the theory of schemes. Let $ A $ be a commutative ring with a unit. An affine scheme consists of a topological space $ \operatorname{Spec}(A) $ and a sheaf of rings $ \widetilde{A} $ on $ \operatorname{Spec}(A) $. Here, $ \operatorname{Spec}(A) $ is the set of all prime ideals of $ A $ (called the '''points of the affine scheme'''), equipped with the [[Zariski topology|Zariski topology]] (or, equivalently, with the spectral topology), in which a basis of open sets is given by $ D(f) \stackrel{\text{df}}{=} \{ \mathfrak{p} \in \operatorname{Spec}(A) \mid f \notin \mathfrak{p} \} $, where $ f $ runs through the elements of $ A $. The sheaf $ \widetilde{A} $ of local rings is defined by the condition that $ \Gamma \! \left( D(f),\widetilde{A} \right) = A_{f} $, where $ A_{f} $ is the [[Localization in a commutative algebra|localization]] of the ring $ A $ with respect to the multiplicative system $ \{ f^{n} \}_{n \in \Bbb{N}_{0}} $. Affine schemes were first introduced by A. Grothendieck ([[#References|[1]]]), who created the theory of schemes. A scheme is a ringed space that is locally isomorphic to an affine scheme. An affine scheme $ \operatorname{Spec}(A) $ is called '''Noetherian''' ('''integral''', '''reduced''', '''normal''', or '''regular''', respectively) if the ring $ A $ is Noetherian (integral, without nilpotents, integrally closed, or regular, respectively). An affine scheme is called '''connected''' ('''irreducible''', '''discrete''', or '''quasi-compact''', respectively) if the topological space $ \operatorname{Spec}(A) $ has the corresponding property. The space $ \operatorname{Spec}(A) $ of an affine scheme is always compact (and usually not Hausdorff). 
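For a finite quotient ring such as $ \mathbb{Z}/n\mathbb{Z} $, the points of the spectrum and the basic open sets $ D(f) $ can be enumerated explicitly: the prime ideals of $ \mathbb{Z}/n\mathbb{Z} $ are $ (p) $ for the primes $ p $ dividing $ n $, and $ D(f) $ consists of those $ (p) $ with $ p \nmid f $. A minimal illustrative sketch (the helper names are ours, not a standard library API):

```python
def prime_divisors(n):
    """Distinct prime factors of n, by trial division."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def spec_zmod(n):
    """Points of Spec(Z/nZ): the prime ideals (p) for primes p dividing n."""
    return prime_divisors(n)

def basic_open(n, f):
    """D(f) = { (p) in Spec(Z/nZ) : f not in (p) }, i.e. primes p | n not dividing f."""
    return [p for p in spec_zmod(n) if f % p != 0]

print(spec_zmod(12))      # [2, 3]  -- the two points of Spec(Z/12Z)
print(basic_open(12, 9))  # [2]     -- 9 lies in the ideal (3), so (3) is excluded
```

For instance, $ D(9) $ in $ \operatorname{Spec}(\mathbb{Z}/12\mathbb{Z}) $ contains only the point $ (2) $, since $ 9 \in (3) $.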
Affine schemes form a category when the morphisms of these schemes are defined as morphisms of locally ringed spaces. Each ring homomorphism $ \phi: A \to B $ defines a morphism of affine schemes $ \left( \operatorname{Spec}(B),\widetilde{B} \right) \to \left( \operatorname{Spec}(A),\widetilde{A} \right) $, consisting of the continuous mapping $ \operatorname{Spec}(\phi): \operatorname{Spec}(B) \to \operatorname{Spec}(A) $ ($ [\operatorname{Spec}(\phi)](\mathfrak{p}) = {\phi^{\leftarrow}}[\mathfrak{p}] $ for $ \mathfrak{p} \in \operatorname{Spec}(B) $) and a homomorphism of sheaves of rings $ \widetilde{\phi}: \widetilde{A} \to \widetilde{B} $ that transforms the section $ a / f $ of the sheaf $ \widetilde{A} $ over the set $ D(f) $ into the section $ \phi(a) / \phi(f) $. The morphisms of an arbitrary scheme $ (X,\mathcal{O}_{X}) $ into an affine scheme $ \left( \operatorname{Spec}(A),\widetilde{A} \right) $ (also called '''$ X $-valued points''' of $ \operatorname{Spec}(A) $) are in one-to-one correspondence with ring homomorphisms $ A \to \Gamma(X,\mathcal{O}_{X}) $; thus, the correspondence $ A \mapsto \left( \operatorname{Spec}(A),\widetilde{A} \right) $ is a contravariant functor from the category of commutative rings with a unit into the category of affine schemes, which establishes an anti-equivalence of these categories. In particular, in the category of affine schemes, there are finite direct sums and fiber products, dual to the constructions of the direct sum and the tensor product of rings. The morphisms of affine schemes that correspond to surjective homomorphisms of rings are called '''closed imbeddings''' of affine schemes. The most important examples of affine schemes are affine varieties; other examples are affine group schemes (cf. [[Group scheme|Group scheme]]). 
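The functoriality can likewise be checked by hand in the finite case: for the reduction homomorphism $ \phi: \mathbb{Z}/12\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z} $ (well defined since $ 4 \mid 12 $), the induced map $ \operatorname{Spec}(\phi) $ sends a prime ideal to its preimage. A small sketch, with illustrative helper names of our own:

```python
def ideal(n, p):
    """The ideal (p) in Z/nZ, represented as a set of residues."""
    return {x for x in range(n) if x % p == 0}

def preimage(n, m, subset):
    """Preimage under the reduction hom phi: Z/nZ -> Z/mZ, x mod n |-> x mod m
    (well defined exactly when m divides n)."""
    assert n % m == 0
    return {x for x in range(n) if (x % m) in subset}

# Spec(phi) sends the point (2) of Spec(Z/4Z) to phi^{-1}((2)) = (2) in Spec(Z/12Z)
assert preimage(12, 4, ideal(4, 2)) == ideal(12, 2)
```

The preimage of a prime ideal under a ring homomorphism is again prime, which is exactly what makes $ \operatorname{Spec}(\phi) $ well defined on points.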
In a manner similar to the construction of the sheaf $ \widetilde{A} $, it is possible to construct, for any $ A $-module $ M $, a sheaf $ \widetilde{M} $ of $ \widetilde{A} $-modules on $ \operatorname{Spec}(A) $ for which $$ \Gamma \! \left( D(f),\widetilde{M} \right) = M_{f} = M \otimes_{A} A_{f}. $$ Such sheaves are called '''quasi-coherent'''. The category of $ A $-modules is equivalent to the category of quasi-coherent sheaves of $ \widetilde{A} $-modules on $ \operatorname{Spec}(A) $; projective modules correspond to locally free sheaves. The cohomology groups of quasi-coherent sheaves on an affine scheme are described by Serre's theorem: $$ H^{q} \! \left( \operatorname{Spec}(A),\widetilde{M} \right) = 0 \quad \text{if $ q > 0 $}. $$ The converse of this theorem (Serre's criterion for affinity) states that if $ (X,\mathcal{O}_{X}) $ is a quasi-compact separated scheme, and if $ {H^{1}}(X,F) = 0 $ for any quasi-coherent sheaf $ F $ of $ \mathcal{O}_{X} $-modules, then $ X $ is an affine scheme. Other criteria for affinity also exist ([[#References|[1]]], [[#References|[4]]]). ====References==== <table> <TR><TD valign="top">[1]</TD><TD valign="top"> A. Grothendieck, J. Dieudonné, "Éléments de géométrie algébrique", ''Publ. Math. IHES'', '''4''' (1960). {{MR|0217083}} {{MR|0163908}} {{ZBL|0118.36206}}</TD></TR> <TR><TD valign="top">[2]</TD><TD valign="top"> J. Dieudonné, "Algebraic geometry", ''Adv. in Math.'', '''1''' (1969), pp. 233–321. {{MR|0244267}} {{ZBL|0185.49102}} </TD></TR> <TR><TD valign="top">[3]</TD><TD valign="top"> Yu.I. Manin, "Lectures on algebraic geometry", '''1''', Moscow (1970) (In Russian). {{MR|0284434}} {{ZBL|0204.21302}} </TD></TR> <TR><TD valign="top">[4]</TD><TD valign="top"> J. Goodman, R. Hartshorne, "Schemes with finite-dimensional cohomology groups", ''Amer. J. Math.'', '''91''' (1969), pp. 258–266. {{MR|0241432}} {{ZBL|0176.18303}}</TD></TR> </table> ====Comments==== Reference [[#References|[a1]]] is, of course, standard. 
It replaces [[#References|[3]]]. An alternative to [[#References|[1]]] is [[#References|[a2]]]. ====References==== <table> <TR><TD valign="top">[a1]</TD><TD valign="top"> R. Hartshorne, "Algebraic geometry", Springer (1977). {{MR|0463157}} {{ZBL|0367.14001}} </TD></TR> <TR><TD valign="top">[a2]</TD><TD valign="top"> A. Grothendieck, J. Dieudonné, "Éléments de géométrie algébrique", '''I. Le langage des schémas''', Springer (1971). {{MR|0217085}} {{ZBL|0203.23301}}</TD></TR> </table> Affine scheme. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Affine_scheme&oldid=39991 This article was adapted from an original article by V.I. Danilov, I.V. Dolgachev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
\begin{document} \title{Self-Exciting Multifractional Processes } \author{Fabian A. Harang~~~~~~~ Marc Lagunas-Merino~~~~~~~ Salvador Ortiz-Latorre \\ } \date{\today} \begin{abstract} We propose a new multifractional stochastic process which allows for self-exciting behavior, similar to what can be seen, for example, in earthquakes and other self-organizing phenomena. The process can be seen as an extension of a multifractional Brownian motion, where the Hurst function depends on the past of the process. We define this through a stochastic Volterra equation, and we prove existence and uniqueness of the solution of this equation, as well as give bounds on its $p$-order moments, for all $p\geq1$. We show convergence of an Euler-Maruyama scheme for the process, and also give the rate of convergence, which depends on the self-exciting dynamics of the process. Moreover, we discuss different applications of this process, and give examples of different functions to model self-exciting behavior. \end{abstract} \maketitle \section{Introduction and Notation } In recent years, increased computing power and better statistical tools have shown that many natural phenomena do not follow the standard normal distribution, but rather exhibit different types of memory, sometimes with these properties changing over time. Several extensions of standard stochastic processes have therefore been proposed in order to give a more realistic picture of what we observe in nature. Among the stochastic processes that are popular today for modeling varying memory, one is the Hawkes process, see for example \cite{Ha18}. This is a point process which allows for self-exciting behavior by letting the conditional intensity depend on the past events of the process. In this note, we will consider a continuous type of process which is inspired by the multifractional Brownian motion. 
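As a point of comparison, the self-exciting character of the Hawkes process mentioned above is easy to simulate. The sketch below uses a thinning scheme for the conditional intensity $\lambda(t)=\mu+\sum_{t_{i}<t}\alpha e^{-\beta(t-t_{i})}$; the parameter values and function names are our own illustrative choices and are not taken from \cite{Ha18}.

```python
import math
import random

random.seed(0)

def simulate_hawkes(mu, alpha, beta, T):
    """Thinning simulation of a Hawkes process with conditional intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Between events the intensity decays, so its current value is a valid
    upper bound for the rejection step."""
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)        # candidate arrival under the bound
        if t >= T:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)

# alpha/beta < 1 keeps the process stable (each event triggers < 1 offspring on average)
ev = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, T=10.0)
```

Each accepted event raises the intensity, making further events near it more likely; this is the self-exciting clustering that the continuous process below mimics through its Hurst function.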
This process is interesting for being a non-stationary Gaussian process whose regularity properties change in time. A simple version of this process is known as the Riemann-Liouville multifractional Brownian motion and can be represented by the integral \begin{equation} B_{t}^{h}=\int_{0}^{t}\left(t-s\right)^{h\left(t\right)-\frac{1}{2}}dB_{s},\label{eq:mBm} \end{equation} where $\left\{ B_{t}\right\} _{t\in[0,T]}$ is a Brownian motion and $h$ is a deterministic function. Interestingly, if we restrict the process to a small interval, say $\left[t-\epsilon,t+\epsilon\right]$, the local $\alpha$-Hölder regularity of this process on that interval is of order $\alpha\sim h(t)$ if $\epsilon$ is sufficiently small. Thus the regularity of the process depends on time. Applications of such processes have been found in fields ranging from Internet traffic and image synthesis to finance, see for example \cite{BeJaRo97,BiPaPi13,BiPaPi15,BiPi07,BiPi15,CoLeVe14,LeVe14,PiBiPa18}. In 2010 D. Sornette and V. Filimonov proposed a self-excited multifractal process to be considered in the modeling of earthquakes and financial market crashes, see \cite{FiSo11}. By a self-excited process, the authors mean a process whose future states depend directly on all the past states of the process. The model they proposed was defined in a discrete manner. They also suggested a possible continuous-time version of their model, but they did not study its existence rigorously. This article is therefore an attempt to propose a continuous-time version of a model similar to that of Sornette and Filimonov, and to study its mathematical properties. 
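A left-point Riemann sum gives a quick way to visualize (\ref{eq:mBm}). The sketch below is only a naive discretization under assumptions of our own (step size, seed, and the particular Hurst function are illustrative; no convergence claim is made here):

```python
import numpy as np

rng = np.random.default_rng(0)

def rl_mbm(h, T=1.0, n=1000):
    """Left-point Euler sum for B^h_t = int_0^t (t - s)^(h(t) - 1/2) dB_s,
    with a deterministic Hurst function h: [0, T] -> (0, 1)."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.zeros(n + 1)
    for k in range(1, n + 1):
        # left endpoints t_0, ..., t_{k-1} lie strictly below t_k, so the
        # (possibly singular) kernel is never evaluated at zero
        X[k] = np.sum((t[k] - t[:k]) ** (h(t[k]) - 0.5) * dB[:k])
    return t, X

# Hurst function increasing in time: rougher paths near t = 0, smoother near t = T
t, X = rl_mbm(lambda t: 0.3 + 0.4 * t)
```

Plotting $X$ against $t$ shows the announced change of local regularity along the path.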
We will first consider an extension of a multifractional Brownian process, which is found as the solution to the stochastic differential equation \begin{equation} X_{t}^{h}=\int_{0}^{t}\left(t-s\right)^{h(t,X_{s}^{h})-\frac{1}{2}}dB_{s},\label{eq:SEM} \end{equation} where $\left\{ B_{t}\right\} _{t\in[0,T]}$ is a general $d$-dimensional Brownian motion, and $h$ is bounded and takes values in $(0,1)$. Already at this point one may expect that the local regularity of the process $X$ depends on the history of $X$ through $h$, in a similar manner as for the multifractional Brownian motion in equation (\ref{eq:mBm}). As we can see, the process is formulated through a stochastic Volterra equation with a possibly singular kernel. We will therefore show the existence and uniqueness of the solution of this equation, and call this solution a Self-Exciting Multifractional Process (SEM) $X^{h}$. We will study its probabilistic properties, and discuss examples of functions $h$ which give different dynamics for the process $X^{h}$. The process is neither stationary nor Gaussian in general, and is therefore mathematically challenging to apply in any standard model, for example in finance, but it does, at this point, have some interesting properties of its own. The study of such processes could also shed some light on natural phenomena behaving outside the scope of standard stochastic processes, such as the self-excited dynamics of earthquakes, as argued in \cite{FiSo11}. We will first show the existence and uniqueness of the solution of Equation (\ref{eq:SEM}) and then study probabilistic and path properties such as the variance and regularity of the process. We will introduce an Euler-Maruyama scheme to approximate the process, and show its strong convergence as well as estimate its rate of convergence. Finally, we will discuss an extension of the process to a Gamma-type process, which might be interesting for various applications. 
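To make equation (\ref{eq:SEM}) concrete, an Euler-Maruyama discretization of the type announced above can be sketched as follows. This is a naive one-dimensional implementation with $g\equiv0$; the particular Hurst function below is one admissible choice satisfying the boundedness and Lipschitz requirements introduced later, not one prescribed by the model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sem_euler(h, g, T=1.0, n=500):
    """Euler-Maruyama sketch for X_t = g(t) + int_0^t (t - s)^(h(t, X_s) - 1/2) dB_s:
    X_{t_k} = g(t_k) + sum_{i < k} (t_k - t_i)^(h(t_k, X_{t_i}) - 1/2) dB_i."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    X = np.zeros(n + 1)
    X[0] = g(0.0)
    for k in range(1, n + 1):
        expo = h(t[k], X[:k]) - 0.5          # Hurst exponent driven by the past path
        X[k] = g(t[k]) + np.sum((t[k] - t[:k]) ** expo * dB[:k])
    return t, X

# illustrative Hurst function with values in [0.2, 0.8], Lipschitz in both arguments
h = lambda t, x: 0.5 + 0.3 * np.tanh(x)
t, X = sem_euler(h, g=lambda s: 0.0)
```

Here large past values of $X$ push the local Hurst value up (smoother paths), while negative values push it down; other monotone bounded choices of $h$ give different self-exciting dynamics.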
\subsection{Notation and preliminaries} Let $T>0$ be a fixed constant. We will use the standard notation $L^{\infty}\left(\left[0,T\right]\right)$ for essentially bounded functions on the interval $\left[0,T\right]$. Furthermore, let $\triangle^{(m)}\left([a,b]\right)$ denote the $m$-simplex, defined by \[ \triangle^{(m)}\left([a,b]\right):=\left\{ (s_{m},\ldots,s_{1}):a\leq s_{1}<\ldots<s_{m}\leq b\right\} . \] We will consider functions $k:\triangle^{\left(2\right)}\left(\left[0,T\right]\right)\rightarrow\mathbb{R}_{+}$ which will be used as kernels in an integral operator, in the sense that we consider integrals of the form \[ \int_{0}^{t}k\left(t,s\right)f\left(s\right)ds, \] whenever the integral is well defined. We call these functions Volterra kernels. \begin{defn} Let $k:\triangle^{(2)}\left(\left[0,T\right]\right)\rightarrow\mathbb{R}_{+}$ be a Volterra kernel. If $k$ satisfies \[ t\mapsto\int_{0}^{t}k\left(t,s\right)ds\in L^{\infty}\left(\left[0,T\right]\right) \] and \[ \limsup_{\epsilon\downarrow0}\parallel\int_{\cdot}^{\cdot+\epsilon}k\left(\cdot+\epsilon,s\right)ds\parallel_{L^{\infty}\left(\left[0,T\right]\right)}=0, \] then we say that $k\in\mathcal{K}_{0}.$ \end{defn} We will frequently use $C$ to denote a generic constant, which might vary throughout the text. When it is important, we will indicate what this constant depends upon in a subscript, i.e. $C=C_{T}$ denotes dependence on $T$. \section{Zhang's Existence and Uniqueness of Stochastic Volterra Equations} In this section we will assume that $\left\{ B_{t}\right\} _{t\in\left[0,T\right]}$ is a $d$-dimensional Brownian motion defined on a filtered probability space $(\Omega,\mathcal{F},\left\{ \mathcal{F}_{t}\right\} _{t\in\left[0,T\right]},P)$. 
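The singular kernel $k(t,s)=(t-s)^{2h_{*}-1}$ used later belongs to the class $\mathcal{K}_{0}$ defined above, since $\int_{t}^{t+\epsilon}(t+\epsilon-s)^{2h_{*}-1}\,ds=\epsilon^{2h_{*}}/(2h_{*})\rightarrow0$. A small numerical sanity check of this closed form (the value $h_{*}=0.3$ is an arbitrary illustration):

```python
import numpy as np

h_star = 0.3  # an arbitrary lower Hurst bound in (0, 1)

def window_mass_exact(eps):
    """Closed form of int_0^eps u^(2 h* - 1) du = eps^(2 h*) / (2 h*)."""
    return eps ** (2 * h_star) / (2 * h_star)

def window_mass_numeric(eps, n=200_000):
    """Midpoint rule for the same integral; midpoints avoid the singularity at u = 0."""
    du = eps / n
    u = (np.arange(n) + 0.5) * du
    return float(np.sum(u ** (2 * h_star - 1)) * du)

print(abs(window_mass_numeric(0.1) - window_mass_exact(0.1)))  # small discretization error
print(window_mass_exact(1e-6))                                 # window mass vanishing with eps
```

The mass over a shrinking window vanishes uniformly in $t$, which is exactly the second condition in the definition of $\mathcal{K}_{0}$.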
Consider the following Volterra equation \begin{equation} X_{t}=g\left(t\right)+\int_{0}^{t}\sigma\left(t,s,X_{s}\right)dB_{s},\qquad0\leq t\leq T,\label{eq:Stoch Volterra equation} \end{equation} where $g$ is a measurable, $\left\{ \mathcal{F}_{t}\right\} $-adapted stochastic process and $\sigma:\triangle^{(2)}\left(\left[0,T\right]\right)\times\mathbb{R}^{n}\rightarrow\mathcal{L}\left(\mathbb{R}^{d},\mathbb{R}^{n}\right)$ is a measurable function, where $\mathcal{L}\left(\mathbb{R}^{d},\mathbb{R}^{n}\right)$ is the linear space of $n\times d$-matrices. Next we state a simplified version of the hypotheses for $\sigma$ and $g$, introduced previously by Zhang in \cite{Zha10}, which will be used to prove that there exists a unique solution to equation $\left(\ref{eq:Stoch Volterra equation}\right)$. \begin{description} \item [{(H1)}] There exists $k_{1}\in\mathcal{K}_{0}$ such that the function $\sigma$ satisfies the following linear growth inequality for all $(s,t)\in\triangle^{(2)}\left([0,T]\right)$ and $x\in\mathbb{R}^{n}$, \[ \left|\sigma\left(t,s,x\right)\right|^{2}\leq k_{1}\left(t,s\right)\left(1+\left|x\right|^{2}\right). \] \item [{(H2)}] There exists $k_{2}\in\mathcal{K}_{0}$ such that the function $\sigma$ satisfies the following Lipschitz inequality for all $(s,t)\in\triangle^{(2)}\left([0,T]\right)$ and $x,y\in\mathbb{R}^{n}$, \[ \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\leq k_{2}\left(t,s\right)\left|x-y\right|^{2}. 
\] \item [{(H3)}] For some $p\geq2$, we have \[ \sup_{t\in\left[0,T\right]}\int_{0}^{t}\left[k_{1}\left(t,s\right)+k_{2}\left(t,s\right)\right]\cdot\mathbb{E}\left[\left|g\left(s\right)\right|^{p}\right]ds<\infty, \] where $k_{1}$ and $k_{2}$ satisfy $\mathbf{H1}$ and $\mathbf{H2}.$ \end{description} Based on the above hypotheses, we can use the following tailor-made version of the existence and uniqueness theorem found in \cite{Zha10} to show that there exists a unique solution to equation $\left(\ref{eq:Stoch Volterra equation}\right)$. \begin{thm} \label{thm:Zhang Existence thm}$\left(\textnormal{Xicheng Zhang}\right)$ Assume that $\sigma:\triangle^{(2)}\left([0,T]\right)\times\mathbb{R}^{n}\rightarrow\mathcal{L}\left(\mathbb{R}^{d},\mathbb{R}^{n}\right)$ is measurable, and $g$ is an $\mathbb{R}^{n}$-valued, $\left\{ \mathcal{F}_{t}\right\} $-adapted process satisfying $\mathbf{H1}-\mathbf{H3}$. Then there exists a unique measurable, $\mathbb{R}^{n}$-valued, $\left\{ \mathcal{F}_{t}\right\} $-adapted process $X_{t}$ satisfying for all $t\in[0,T]$ the equation \[ X_{t}=g\left(t\right)+\int_{0}^{t}\sigma\left(t,s,X_{s}\right)dB_{s}. \] Furthermore, for some $C_{T,p,k_{1}}>0$ we have that \[ \mathbb{E}\left[\left|X_{t}\right|^{p}\right]\leq C_{T,p,k_{1}}\left(1+\mathbb{E}\left[\left|g\left(t\right)\right|^{p}\right]+\sup_{u\in[0,T]}\int_{0}^{u}k_{1}\left(u,s\right)\mathbb{E}\left[\left|g\left(s\right)\right|^{p}\right]ds\right), \] where $p$ is from $\mathbf{H3}.$ \end{thm} It will also be useful, in later sections, to consider the following additional hypothesis. \begin{description} \item [{(H4)}] The process $g$ is continuous and satisfies, for some $\delta>0$ and for any $p\geq2$, \[ \mathbb{E}\left[\sup_{t\in\left[0,T\right]}\left|g\left(t\right)\right|^{p}\right]<\infty, \] and \[ \mathbb{E}\left[\left|g\left(t\right)-g\left(s\right)\right|^{p}\right]\leq C_{T,p}\left|t-s\right|^{\delta p}. 
\] \end{description} \section{Self-Exciting Multifractional Stochastic Processes} Consider the stochastic process given formally by the Volterra equation \[ X_{t}^{h}=g\left(t\right)+\int_{0}^{t}\left(t-s\right)^{h\left(t,X_{s}^{h}\right)-\frac{1}{2}}dB_{s}, \] where $g$ is an $\left\{ \mathcal{F}_{t}\right\} $-adapted, one-dimensional process, $h:\left[0,T\right]\times\mathbb{R}\rightarrow\mathbb{R}_{+}$ and $B$ is a one-dimensional Brownian motion. In this section, we will show the existence and uniqueness of the solution of the above equation by means of Theorem \ref{thm:Zhang Existence thm}. Moreover, we will discuss the continuity properties of the solution. \begin{defn} We say that a function $h:\left[0,T\right]\times\mathbb{R}\rightarrow\mathbb{R}_{+}$ is a Hurst function with parameters $(h_{*},h^{*})$, where $h_{*}\leq h^{*}$, if $h\left(t,x\right)$ takes values in $\left[h_{*},h^{*}\right]\subset\left(0,1\right)$ for all $x\in\mathbb{R}$ and $t\in\left[0,T\right]$, and $h$ satisfies the following Lipschitz conditions for all $x,y\in\mathbb{R}$ and $t,t^{\prime}\in\left[0,T\right]$: \[ \left|h\left(t,x\right)-h\left(t,y\right)\right|\leq C\left|x-y\right|, \] \[ \left|h\left(t,x\right)-h\left(t^{\prime},x\right)\right|\leq C\left|t-t^{\prime}\right|, \] for some $C>0$. \end{defn} \begin{lem} \label{lem:Well defined sigma}Let $\sigma\left(t,s,x\right)=\left(t-s\right)^{h(t,x)-\frac{1}{2}}$ and let $h$ be a Hurst function with parameters $(h_{*},h^{*})$. Then \begin{equation} \left|\sigma\left(t,s,x\right)\right|^{2}\leq k\left(t,s\right)\left(1+\left|x\right|^{2}\right),\label{eq:sigma linear growth} \end{equation} where \[ k\left(t,s\right)=C_{T}\left(t-s\right)^{2h_{*}-1}, \] and \begin{equation} \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\leq C_{T}k\left(t,s\right)\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}.\label{eq:Lipschitz sigma} \end{equation} Moreover, $\sigma$ satisfies $\mathbf{H1}$-$\mathbf{H2}$. 
\end{lem} \begin{proof} We prove the three claims in the order they are stated in Lemma \ref{lem:Well defined sigma}, and start to prove equation $\left(\ref{eq:sigma linear growth}\right)$. Remember that \[ h\left(t,x\right)\in\left[h_{*},h^{*}\right]\subset\left(0,1\right), \] for all $t\in[0,T]$ and $x\in\mathbb{R}$, therefore we can trivially find \begin{equation} \left|\sigma\left(t,s,x\right)\right|^{2}=\left(t-s\right)^{2h(t,x)-1}=\left(t-s\right)^{2\left(h(t,x)-h_{*}\right)+2h_{*}-1}\leq T^{2\left(h^{*}-h_{*}\right)}\left(t-s\right)^{2h_{*}-1},\label{eq:BoundSigma2_No_x} \end{equation} which yields equation $\left(\ref{eq:sigma linear growth}\right)$ with $k\left(t,s\right)=C_{T}\left(t-s\right)^{2h_{*}-1}$. Next we consider equation $\left(\ref{eq:Lipschitz sigma}\right)$, and using that $x=\exp\left(\log\left(x\right)\right),$ we write \[ \sigma\left(t,s,x\right)=\exp\left(\left(\log\left(t-s\right)\right)\left(h(t,x)-\frac{1}{2}\right)\right), \] where $(t,s)\in\triangle^{(2)}\left(\left[0,T\right]\right)$ and consider the following inequality derived from the fundamental theorem of calculus \begin{equation} \left|e^{x}-e^{y}\right|\leq e^{\max(x,y)}\left|x-y\right|,\qquad x,y\in\mathbb{R}.\label{eq: expinequality} \end{equation} Using the Lipschitz assumption on $h$ together with the above inequality, we have that \begin{align*} & \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\\ & \leq\exp\left(2\max\left(\log\left(t-s\right)\left(h\left(t,x\right)-\frac{1}{2}\right),\log\left(t-s\right)\left(h\left(t,y\right)-\frac{1}{2}\right)\right)\right)\\ & \quad\times\left|\log\left(t-s\right)\right|^{2}\left|h\left(t,x\right)-h\left(t,y\right)\right|^{2}\\ & \leq C^{2}\exp\left(\max\left(\log\left(t-s\right)\left(2h\left(t,x\right)-1\right),\log\left(t-s\right)\left(2h\left(t,y\right)-1\right)\right)\right)\\ & \quad\times\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}. 
\end{align*} If $\left|t-s\right|\geq1$ then \[ \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\leq C_{T}\left|x-y\right|^{2}, \] since $h$ is bounded. If $\left|t-s\right|<1$ then $\log\left(t-s\right)<0$ and, using that if $\theta<0$ then $\max\left(\theta x,\theta y\right)=\theta\min\left(x,y\right)$, we have \begin{align*} & \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\\ & \leq C^{2}\exp\left(\log\left(t-s\right)\min\left(\left(2h\left(t,x\right)-1\right),\left(2h\left(t,y\right)-1\right)\right)\right)\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}\\ & \leq C^{2}\left|t-s\right|^{2h_{*}-1}\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}. \end{align*} These estimates yield equation $\left(\ref{eq:Lipschitz sigma}\right)$. Let $k$ be defined as above and $\nu\geq0$. Then, for any $0\leq a<T$, $0\leq t<T-a$ and $\delta\in\left(0,1\right)$ fixed we have \begin{align*} \phi_{\nu}\left(t,a\right) & :=\int_{a}^{a+t}k\left(a+t,s\right)\left|\log\left(a+t-s\right)\right|^{2\nu}ds\\ & \leq C_{T}\int_{a}^{a+t}\left(a+t-s\right)^{\left(2h_{*}-1\right)}\left|\log\left(a+t-s\right)\right|^{2\nu}ds\\ & =C_{T}\int_{0}^{t}u^{2h_{*}-1}\left|\log\left(u\right)\right|^{2\nu}du\\ & \leq C_{T}C_{T,\delta}^{2\nu}\int_{0}^{t}u^{2h_{*}-1-2\nu\delta}du=C_{T,\delta,\nu,h_{*}}t^{2\left(h_{*}-\nu\delta\right)}, \end{align*} where we have used that $\left|\log\left(u\right)\right|\leq C_{T,\delta}u^{-\delta}$ for some constant $C_{T,\delta}>0$. Note that $2\left(h_{*}-\nu\delta\right)>0$ if and only if $h_{*}>\nu\delta$. Therefore, choosing $a=0$, we have that \[ t\mapsto\int_{0}^{t}k\left(t,s\right)\left|\log\left(t-s\right)\right|^{2\nu}ds\in L^{\infty}\left(\left[0,T\right]\right), \] if $h_{*}>0$, for $\nu=0$, and if $h_{*}>\delta$, for $\nu=1$. 
Furthermore, by setting $a=t^{\prime}$ and $t=\epsilon$ in the estimate for $\phi_{\nu}\left(t,a\right)$, we have \begin{align*} \limsup_{\epsilon\rightarrow0} & \parallel\int_{\cdot}^{\cdot+\epsilon}k\left(\cdot+\epsilon,s\right)\left|\log\left(\cdot+\epsilon-s\right)\right|^{2\nu}ds\parallel_{L^{\infty}\left(\left[0,T\right]\right)}\leq\limsup_{\epsilon\rightarrow0}C_{T,\delta,\nu,h_{*}}\epsilon^{2\left(h_{*}-\nu\delta\right)}=0, \end{align*} if $h_{*}>0$, for $\nu=0$, and if $h_{*}>\delta$, for $\nu=1$. Since $\delta$ can be chosen arbitrarily close to zero, the condition $h_{*}>\delta$ imposes no real restriction on $h_{*}$. These estimates yield that $\sigma$ satisfies $\mathbf{H1\text{-}H2}$. \end{proof} Now, we can give the following theorem showing that the self-exciting multifractional process from equation $\left(\ref{eq:SEM}\right)$ indeed exists and is unique. \begin{thm} \label{thm:Existence and unique}Let $\sigma\left(t,s,x\right)=\left(t-s\right)^{h(t,x)-\frac{1}{2}}$ and $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$. Moreover, let $g$ be an $\left\{ \mathcal{F}_{t}\right\} $-adapted, $\mathbb{R}$-valued stochastic process satisfying $\mathbf{H3}.$ Then, there exists a unique process $X_{t}^{h}$ satisfying the equation \begin{equation} X_{t}^{h}=g\left(t\right)+\int_{0}^{t}\left(t-s\right)^{h\left(t,X_{s}^{h}\right)-\frac{1}{2}}dB_{s},\label{eq:SEMequation} \end{equation} where $\left\{ B_{t}\right\} _{t\in\left[0,T\right]}$ is a one-dimensional Brownian motion. Furthermore, we have the following inequality for the $p\geq2$ from $\mathbf{H3}$: \[ \mathbb{E}\left[\left|X_{t}^{h}\right|{}^{p}\right]\leq C_{T,p,k_{1}}\left(1+\mathbb{E}\left[\left|g\left(t\right)\right|{}^{p}\right]+\sup_{u\in\left[0,T\right]}\int_{0}^{u}\left(u-s\right)^{2h_{*}-1}\mathbb{E}\left[\left|g\left(s\right)\right|{}^{p}\right]ds\right). \] We call this process a Self-Exciting Multifractional process (SEM). 
\end{thm} \begin{proof} We have seen in Lemma \ref{lem:Well defined sigma} that $\sigma\left(t,s,x\right)=\left(t-s\right)^{h\left(t,x\right)-\frac{1}{2}}$ satisfies $\mathbf{H1-H2}.$ Applying Zhang's theorem gives the existence and uniqueness, as well as the bounds on the $p$-moments, for the solution of (\ref{eq:SEMequation}). \end{proof} Next we will show the Hölder regularity of the solution of (\ref{eq:SEMequation}). We will need some preliminary lemmas. \begin{lem} \label{lem:FundIneq}Let $T>u>v>0.$ Then, for any $\alpha\leq0$ and $\beta\in\left[0,1\right]$ we have \[ \left|u^{\alpha}-v^{\alpha}\right|\leq2^{1-\beta}\left|\alpha\right|^{\beta}\left|u-v\right|^{\beta}\left|v\right|^{\alpha-\beta}, \] and for $\alpha\in\left(0,1\right)$ \[ \left|u^{\alpha}-v^{\alpha}\right|\leq\left|\alpha\right|^{\beta}\left|u-v\right|^{\alpha+\beta\left(1-\alpha\right)}\left|v\right|^{-\beta\left(1-\alpha\right)}. \] \end{lem} \begin{proof} For $\alpha=0$ the claim is clear. For $\alpha<1$ and $\alpha\neq0,$ writing the remainder of Taylor's formula in integral form we get \begin{align} \left|u^{\alpha}-v^{\alpha}\right| & =\left|\left(u-v\right)\int_{0}^{1}\alpha\left(v+\theta\left(u-v\right)\right)^{\alpha-1}d\theta\right|\nonumber \\ & \leq\left|\alpha\right|\left|u-v\right|\int_{0}^{1}\left|v+\theta\left(u-v\right)\right|^{\alpha-1}d\theta\leq\left|\alpha\right|\left|u-v\right|\left|v\right|^{\alpha-1},\label{eq: Bound1} \end{align} where we have used that $\left|v+\theta\left(u-v\right)\right|^{\alpha-1}\leq\left|v\right|^{\alpha-1}$. 
Using that $\left|v+\theta\left(u-v\right)\right|^{\alpha-1}\leq\theta^{\alpha-1}\left|u-v\right|^{\alpha-1}$ and assuming that $\alpha\in\left(0,1\right)$ we obtain \begin{equation} \left|u^{\alpha}-v^{\alpha}\right|\leq\alpha\left|u-v\right|^{\alpha}\int_{0}^{1}\theta^{\alpha-1}d\theta=\left|u-v\right|^{\alpha}.\label{eq: Bound2} \end{equation} In what follows we will use the interpolation inequality $a\wedge b\leq a^{\beta}b^{1-\beta}$ for any $a,b>0$ and $\beta\in\left[0,1\right]$. Consider the case $\alpha<0.$ Using the interpolation inequality with the simple bound $\left|u^{\alpha}-v^{\alpha}\right|\leq2\left|v\right|^{\alpha}$ and the bound (\ref{eq: Bound1}) we get \begin{align*} \left|u^{\alpha}-v^{\alpha}\right| & \leq2^{1-\beta}\left|\alpha\right|^{\beta}\left|u-v\right|^{\beta}\left|v\right|^{\beta\left(\alpha-1\right)}\left|v\right|^{(1-\beta)\alpha}=2^{1-\beta}\left|\alpha\right|^{\beta}\left|u-v\right|^{\beta}\left|v\right|^{\alpha-\beta}. \end{align*} Consider the case $\alpha\in\left(0,1\right)$. Using the interpolation inequality with the bounds (\ref{eq: Bound1}) and (\ref{eq: Bound2}) we can write \[ \left|u^{\alpha}-v^{\alpha}\right|\leq\left|\alpha\right|^{\beta}\left|u-v\right|^{\beta}\left|v\right|^{\beta\left(\alpha-1\right)}\left|u-v\right|^{\left(1-\beta\right)\alpha}=\left|\alpha\right|^{\beta}\left|u-v\right|^{\alpha+\beta\left(1-\alpha\right)}\left|v\right|^{-\beta\left(1-\alpha\right)}. \] \end{proof} \begin{lem} \label{lem:LambdaFunction}Let $\sigma\left(t,s,x\right)=\left(t-s\right)^{h(t,x)-\frac{1}{2}}$ and $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$. 
Then, for any $0<\gamma<2h_{*}$ there exists $\lambda_{\gamma}:\triangle^{(3)}\left(\left[0,T\right]\right)\rightarrow\mathbb{R}$ such that \begin{equation} \left|\sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)\right|^{2}\leq\lambda_{\gamma}\left(t,t^{\prime},s\right),\label{eq:time reg sigma} \end{equation} and \begin{equation} \int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma},\label{eq:TimeLipschitzIntegralLambda} \end{equation} for some constant $C_{T,\gamma}>0$. \end{lem} \begin{proof} We have that \[ \sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)=\left(t-s\right)^{h\left(t,x\right)-\frac{1}{2}}-\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}. \] Furthermore, notice that for all $t>t^{\prime}>s>0$, we can add and subtract the term $\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}$ to get \[ \sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)=J^{1}\left(t,t^{\prime},s,x\right)+J^{2}\left(t,t^{\prime},s,x\right), \] where \begin{align*} J^{1}\left(t,t^{\prime},s,x\right) & :=\left(t-s\right)^{h(t,x)-\frac{1}{2}}-\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}},\\ J^{2}\left(t,t^{\prime},s,x\right) & :=\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}-\left(t^{\prime}-s\right)^{h(t^{\prime},x)-\frac{1}{2}}. \end{align*} First we bound $J^{1}$. Using the inequality $\left(\ref{eq: expinequality}\right)$ and that $h$ is Lipschitz in the time argument, by similar arguments as in Lemma \ref{lem:Well defined sigma}, we obtain for any $\delta\in\left(0,1\right)$ \begin{align*} \left|J^{1}\left(t,t^{\prime},s,x\right)\right| & \leq C_{T}\left|t-t^{\prime}\right|\left|t-s\right|^{h_{*}-\frac{1}{2}}\left|\log\left(t-s\right)\right|\\ & \leq C_{T,\delta}\left|t-t^{\prime}\right|\left|t-s\right|^{h_{*}-\frac{1}{2}-\delta}\\ & \leq C_{T,\delta}\left|t-t^{\prime}\right|\left|t'-s\right|^{h_{*}-\frac{1}{2}-\delta}, \end{align*} since $s<t^{\prime}<t$. 
Next, in order to bound the term $J^{2}$, we apply Lemma \ref{lem:FundIneq} with $u=t-s$, $v=t'-s$ and $\alpha=h(t^{\prime},x)-\frac{1}{2}$. Note that, since $h(t^{\prime},x)\in\left[h_{*},h^{*}\right]\subset\left(0,1\right)$, we have $\alpha\in\left[h_{*}-\frac{1}{2},h^{*}-\frac{1}{2}\right]\subset\left(-\frac{1}{2},\frac{1}{2}\right)$. Hence, if $\alpha=h(t^{\prime},x)-\frac{1}{2}\leq0$ (which implies $h_{*}\leq1/2$ and $\alpha\in\left(-\frac{1}{2},0\right]$), we get \begin{align*} \left|J^{2}\left(t,t^{\prime},s,x\right)\right| & \leq2\left|t-t^{\prime}\right|^{\beta_{1}}\left|t^{\prime}-s\right|^{h(t^{\prime},x)-\frac{1}{2}-\beta_{1}}\\ & \leq C_{T}\left|t-t^{\prime}\right|^{\beta_{1}}\left|t^{\prime}-s\right|^{h_{*}-\frac{1}{2}-\beta_{1}}, \end{align*} for any $\beta_{1}\in\left(0,1\right)$. If $\alpha=h(t^{\prime},x)-\frac{1}{2}>0$ (which implies $h^{*}>1/2$ and $\alpha\in\left(0,\frac{1}{2}\right)$), we get \begin{align*} \left|J^{2}\left(t,t^{\prime},s,x\right)\right| & \leq\frac{1}{2}\left|t-t^{\prime}\right|^{\alpha+\beta_{2}\left(1-\alpha\right)}\left|t^{\prime}-s\right|^{-\beta_{2}\left(1-\alpha\right)}\\ & \leq\frac{1}{2}\left|t-t^{\prime}\right|^{\alpha+\frac{1}{2}-\varepsilon\left(1-\alpha\right)}\left|t^{\prime}-s\right|^{-\frac{1}{2}+\varepsilon\left(1-\alpha\right)}\\ & \leq\frac{1}{2}\left|t-t^{\prime}\right|^{h_{*}-\varepsilon}\left|t^{\prime}-s\right|^{-\frac{1}{2}+\frac{\varepsilon}{2}}, \end{align*} where in the second inequality we have chosen $\beta_{2}=\frac{1}{2\left(1-\alpha\right)}-\varepsilon$, $\varepsilon>0$, and in the third inequality we have used that $(1-\alpha)\in\left(\frac{1}{2},1\right)$. 
Therefore, we can write the following bound \begin{align*} \left|\sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)\right|^{2} & \leq2\left(\left|J^{1}\left(t,t^{\prime},s,x\right)\right|^{2}+\left|J^{2}\left(t,t^{\prime},s,x\right)\right|^{2}\right)\\ & \leq2\left(C_{T,\delta}\left|t-t^{\prime}\right|^{2}\left|t'-s\right|^{2h_{*}-1-2\delta}\right.\\ & \left.\quad+C_{T}\left|t-t^{\prime}\right|^{2\beta_{1}}\left|t^{\prime}-s\right|^{2h_{*}-1-2\beta_{1}}+\frac{1}{2}\left|t-t^{\prime}\right|^{2h_{*}-2\varepsilon}\left|t^{\prime}-s\right|^{-1+\varepsilon}\right)\\ & \leq C_{T,\beta_{1}}\left|t-t^{\prime}\right|^{2\beta_{1}}\left|t'-s\right|^{-1+h_{*}-\beta_{1}}, \end{align*} where to get the last inequality we have chosen $\delta=\beta_{1}$ and $\varepsilon=h_{*}-\beta_{1}$. Thus, defining \[ \lambda_{\gamma}\left(t,t^{\prime},s\right):=C_{T,\gamma}\left(t-t^{\prime}\right)^{\gamma}\left(t^{\prime}-s\right)^{-1+h_{*}-\frac{\gamma}{2}}, \] and choosing $\gamma$ such that $0<\gamma<2h_{*},$ we can compute \[ \int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left(t^{\prime}\right)^{h_{*}-\frac{\gamma}{2}}\left(t-t^{\prime}\right)^{\gamma}, \] which concludes the proof. \end{proof} \begin{prop} \label{prop:HolderContSEM}Let $\left\{ X_{t}^{h}\right\} _{t\in\left[0,T\right]}$ be the SEM process defined in Theorem \ref{thm:Existence and unique}, and assume that $g$ satisfies $\mathbf{H4}$ for some $\delta>0$. Then there exists a set of paths $\mathcal{N}\subset\Omega$ with $\mathbb{P}\left(\mathcal{N}\right)=0$, such that for all $\omega\in\mathcal{N}^{c}$ the path $t\mapsto X_{t}^{h}\left(\omega\right)$ is $\alpha$-Hölder continuous for any $\alpha<h_{*}\wedge\delta$. In particular, we have \[ \left|\left(X_{t}^{h}-X_{s}^{h}\right)\left(\omega\right)\right|\leq C\left(\omega\right)\left|t-s\right|^{\alpha},\qquad\omega\in\mathcal{N}^{c}.
\] \end{prop} \begin{proof} By Theorem \ref{thm:Existence and unique}, there exists a unique solution $X_{t}^{h}$ to Equation (\ref{eq:SEMequation}) with bounded $p$-order moments. We will show that $X_{t}^{h}$ also has Hölder continuous paths. To this end, we will show that for any $p\in\mathbb{N}$ there exist constants $C_{p}>0$ and $\alpha_{p}>0$, both depending on $p$, such that \[ \mathbb{E}\left[\left|X_{s,t}^{h}\right|^{2p}\right]\leq C_{p}\left|t-s\right|^{\alpha_{p}}, \] where $X_{s,t}^{h}=X_{t}^{h}-X_{s}^{h}$. From this we apply Kolmogorov's continuity theorem (e.g. Theorem 2.8 in \cite{KarShre}, page 53) in order to obtain the claim. Note that the increment $X_{s,t}^{h}$ minus the increment of $g$ satisfies \[ X_{s,t}^{h}-\left(g\left(t\right)-g\left(s\right)\right)=\int_{s}^{t}\left(t-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}dB_{r}+\int_{0}^{s}\left[\left(t-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}-\left(s-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}\right]dB_{r}, \] and thus using that \begin{equation} \left|a+b\right|^{q}\leq2^{q-1}\left(\left|a\right|^{q}+\left|b\right|^{q}\right),\label{eq:simpleQineq} \end{equation} for any $q\in\mathbb{N},$ we obtain \begin{align*} \mathbb{E}\left[\left|X_{s,t}^{h}-\left(g\left(t\right)-g\left(s\right)\right)\right|^{2p}\right] & \leq C_{p}\mathbb{E}\left[\left|\int_{s}^{t}\left(t-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}dB_{r}\right|^{2p}\right]\\ & +C_{p}\mathbb{E}\left[\left|\int_{0}^{s}\left[\left(t-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}-\left(s-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}\right]dB_{r}\right|^{2p}\right]\\ & =:C_{p}\left(J_{s,t}^{1}+J_{s,t}^{2}\right).
\end{align*} Clearly, as $h\left(t,x\right)\in\left[h_{*},h^{*}\right]\subset\left(0,1\right)$, we have by the Burkholder-Davis-Gundy (BDG) inequality that \begin{align} J_{s,t}^{1} & \leq C_{p}\mathbb{E}\left[\left|\int_{s}^{t}\left(t-r\right)^{2h\left(t,X_{r}\right)-1}dr\right|^{p}\right]\label{eq: J1}\\ & \leq C_{p,T}\left|\int_{s}^{t}\left(t-r\right)^{2h_{*}-1}dr\right|^{p}=C_{p,T,h_{*}}\left|t-s\right|^{2ph_{*}}.\nonumber \end{align} Consider now the term $J_{s,t}^{2}$. Applying again the BDG inequality together with the bounds $\left(\ref{eq:time reg sigma}\right)$ and $\left(\ref{eq:TimeLipschitzIntegralLambda}\right)$ from Lemma \ref{lem:LambdaFunction}, we have that for any $\gamma<2h_{*}$ \begin{align} J_{s,t}^{2} & \leq C_{p}\mathbb{E}\left[\left|\int_{0}^{s}\left[\left(\left(t-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}-\left(s-r\right)^{h\left(t,X_{r}\right)-\frac{1}{2}}\right)^{2}\right]dr\right|^{p}\right]\nonumber \\ & \leq C_{p}\mathbb{E}\left[\left|\int_{0}^{s}\lambda_{\gamma}\left(t,s,r\right)dr\right|^{p}\right]\leq C_{p,T,\gamma}\left|t-s\right|^{p\gamma}.\label{eq: J2} \end{align} Combining (\ref{eq: J1}) and (\ref{eq: J2}) we can see that \[ \mathbb{E}\left[\left|X_{s,t}^{h}-\left(g\left(t\right)-g\left(s\right)\right)\right|^{2p}\right]\leq C_{p,T,\gamma}\left|t-s\right|^{p\gamma}. \] Furthermore, again using (\ref{eq:simpleQineq}) we see that \[ \mathbb{E}\left[\left|X_{s,t}^{h}\right|^{2p}\right]\leq2^{2p-1}\left(\mathbb{E}\left[\left|X_{s,t}^{h}-\left(g\left(t\right)-g\left(s\right)\right)\right|^{2p}\right]+\mathbb{E}\left[\left|g\left(t\right)-g\left(s\right)\right|^{2p}\right]\right). \] Thus, invoking the bound from $\mathbf{H4}$ on $g$, we obtain that \[ \mathbb{E}\left[\left|X_{s,t}^{h}\right|^{2p}\right]\leq C_{p,T,\gamma}\left|t-s\right|^{2p\left(\frac{\gamma}{2}\wedge\delta\right)}, \] and it follows from Kolmogorov's continuity theorem that $X^{h}$ has $\mathbb{P}$-a.s.
$\alpha$-Hölder continuous trajectories with $\alpha\in\left(0,h_{*}\wedge\delta\right)$. \end{proof} \section{Simulation of Self-Exciting Multifractional Stochastic Processes} The aim of this section is to study a discretization scheme for the self-exciting multifractional (SEM) processes proposed in the previous sections. In particular, we will consider an Euler type discretization and prove that it converges strongly to the original process at a rate depending on $h_{*}$. We end the section by providing two examples of numerical simulations using the Euler discretization. \subsection{Euler-Maruyama Approximation Scheme} Consider a time discretization of the interval $\left[0,T\right],$ using a step-size $\Delta t=\frac{T}{N}>0$. The discrete time Euler-Maruyama scheme (EM) is given by \begin{align} \bar{X}_{0}^{h} & =X_{0}^{h}=0\\ \bar{X}_{k}^{h} & =\sum_{i=0}^{k-1}\left(t_{k}-t_{i}\right)^{h\left(t_{k},\bar{X}_{i}^{h}\right)-\frac{1}{2}}\Delta B_{i},\qquad k\in\left\{ 1,\ldots,N\right\} ,\label{eq:EM_1} \end{align} where $\Delta B_{i}=B\left(t_{i+1}\right)-B\left(t_{i}\right)$, and yields an approximation of $X_{t_{k}}^{h}$ for $t_{k}=k\Delta t$ with $k\in\left\{ 0,\ldots,N\right\} .$ In order to study the approximation error, it is convenient to consider the continuous time interpolation of $\left\{ \bar{X}_{k}^{h}\right\} _{k\in\left\{ 0,\ldots,N\right\} }$ given by \begin{equation} \bar{X}_{t}^{h}=\int_{0}^{t}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}dB_{s},\qquad t\in\left[0,T\right],\label{eq:DefContEuler} \end{equation} where $\eta\left(s\right):=\sum_{i=0}^{N-1}t_{i}\boldsymbol{1}_{\left[t_{i},t_{i+1}\right)}\left(s\right)$. The following theorem is the main result in this section and its proof uses Lemma \ref{lem:Bound=000026TimeLipschContEuler} and Theorem \ref{theo:VolterraGronwall}, see below.
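As a concrete illustration, the scheme $\left(\ref{eq:EM_1}\right)$ can be implemented in a few lines. The following Python sketch is ours and not part of the paper's formal development; the function name \texttt{sem\_euler} and its interface are hypothetical, and the callable $h$ is a user-supplied Hurst function.

```python
import numpy as np

def sem_euler(h, T=1.0, N=100, rng=None):
    """Euler-Maruyama scheme for the SEM process.

    h : callable h(t, x) returning a Hurst value in (0, 1).
    Returns the time grid t_k and the approximations X_k (with X_0 = 0).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    t = dt * np.arange(N + 1)
    dB = rng.normal(scale=np.sqrt(dt), size=N)  # Brownian increments
    X = np.zeros(N + 1)
    for k in range(1, N + 1):
        # X_k = sum over i < k of (t_k - t_i)^{h(t_k, X_i) - 1/2} * dB_i
        H = np.array([h(t[k], x) for x in X[:k]])
        X[k] = np.sum((t[k] - t[:k]) ** (H - 0.5) * dB[:k])
    return t, X
```

For instance, the Hurst function of Example \ref{exa:1} below corresponds to the call \texttt{sem\_euler(lambda t, x: 0.5 + 0.5 / (1 + x**2))}.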
\begin{thm} \label{thm:EM_StrongConvergence} Let $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$ and let $X_{t}^{h}$ be the solution of equation $\left(\ref{eq:SEM}\right)$. Then the Euler-Maruyama scheme $\left(\ref{eq:DefContEuler}\right)$ satisfies \begin{equation} \sup_{0\leq t\leq T}\mathbb{E}\left[\left|X_{t}^{h}-\bar{X}_{t}^{h}\right|^{2}\right]\leq C_{T,\gamma,h_{*}}E_{h_{*}}\left(C_{T,\gamma,h_{*}}\Gamma\left(h_{*}\right)T^{h_{*}}\right)\left|\Delta t\right|^{\gamma},\label{eq:EM_2} \end{equation} where $\gamma\in\left(0,2h_{*}\right)$, and $C_{T,\gamma,h_{*}}$ is a positive constant, which does not depend on $N$. \end{thm} \begin{proof} Define \[ \delta_{t}:=X_{t}^{h}-\bar{X}_{t}^{h},\qquad\varphi\left(t\right):=\sup_{0\leq s\leq t}\mathbb{E}\left[\left|\delta_{s}\right|^{2}\right],\quad t\in\left[0,T\right]. \] For any $t\in\left[0,T\right],$ we can write \begin{align*} \delta_{t} & =\int_{0}^{t}\left(\left(t-s\right)^{h\left(t,X_{s}^{h}\right)-\frac{1}{2}}-\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}\right)dB_{s}\\ & =\int_{0}^{t}\left(\left(t-s\right)^{h\left(t,X_{s}^{h}\right)-\frac{1}{2}}-\left(t-s\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}\right)dB_{s}\\ & \quad+\int_{0}^{t}\left(\left(t-s\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}-\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}\right)dB_{s}\\ & =:I_{1}\left(t\right)+I_{2}\left(t\right). \end{align*} First we bound the second moment of $I_{1}\left(t\right)$ in terms of a Volterra integral of $\varphi$.
Using the Itô isometry, equation $\left(\ref{eq:Lipschitz sigma}\right)$ and the Lipschitz property of $h$, we get \begin{align*} \mathbb{E}\left[\left|I_{1}\left(t\right)\right|^{2}\right] & \leq\int_{0}^{t}k\left(t,s\right)\left(\log\left(t-s\right)\right)^{2}\mathbb{E}\left[\left(h\left(t,X_{s}^{h}\right)-h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)\right)^{2}\right]ds\\ & \leq C_{T,\delta}\int_{0}^{t}\left(t-s\right)^{2\left(h_{*}-\delta\right)-1}\mathbb{E}\left[\left|X_{s}^{h}-\bar{X}_{\eta\left(s\right)}^{h}\right|^{2}\right]ds, \end{align*} for $\delta>0$ arbitrarily small. By adding and subtracting $\bar{X}_{s}^{h}$, we easily get that \[ \mathbb{E}\left[\left|X_{s}^{h}-\bar{X}_{\eta\left(s\right)}^{h}\right|^{2}\right]\leq2\varphi\left(s\right)+2\mathbb{E}\left[\left|\bar{X}_{s}^{h}-\bar{X}_{\eta\left(s\right)}^{h}\right|^{2}\right]. \] Moreover, equation $\left(\ref{eq:SecondMomentTLCE}\right)$ in Lemma \ref{lem:Bound=000026TimeLipschContEuler} yields \[ \int_{0}^{t}\left(t-s\right)^{2\left(h_{*}-\delta\right)-1}\mathbb{E}\left[\left|\bar{X}_{s}^{h}-\bar{X}_{\eta\left(s\right)}^{h}\right|^{2}\right]ds\leq C_{T}\frac{T^{2\left(h_{*}-\delta\right)}}{2\left(h_{*}-\delta\right)}\left|\Delta t\right|^{\gamma}. \] Therefore, choosing $\delta=\frac{h_{*}}{2}$, we obtain \begin{equation} \mathbb{E}\left[\left|I_{1}\right|^{2}\right]\leq C_{T,h_{*}}\left\{ \int_{0}^{t}\left(t-s\right)^{h_{*}-1}\varphi\left(s\right)ds+\left|\Delta t\right|^{\gamma}\right\} .\label{eq:I1Bound} \end{equation} Next, we find a bound for the second moment of $I_{2}\left(t\right)$.
Using again the Itô isometry, equations $\left(\ref{eq:time reg sigma}\right)$ and $\left(\ref{eq:TimeLipschitzIntegralLambda}\right),$ and Lemma \ref{lem:Bound=000026TimeLipschContEuler} we can write \begin{align} \mathbb{E}\left[\left|I_{2}\left(t\right)\right|^{2}\right] & \leq\int_{0}^{t}\lambda_{\gamma}\left(t+\left(s-\eta\left(s\right)\right),t,s\right)ds\leq C_{T,\gamma}\left|\Delta t\right|^{\gamma},\label{eq:I2Bound} \end{align} for any $\gamma<2h_{*}$. Combining the inequalities $\left(\ref{eq:I1Bound}\right)$ and $\left(\ref{eq:I2Bound}\right)$ we obtain \[ \varphi\left(t\right)\leq C_{T,\gamma,h_{*}}\left\{ \int_{0}^{t}\left(t-s\right)^{h_{*}-1}\varphi\left(s\right)ds+\left|\Delta t\right|^{\gamma}\right\} . \] Using Theorem \ref{theo:VolterraGronwall} with $a\left(t\right)=C_{T,\gamma,h_{*}}\left|\Delta t\right|^{\gamma}$, $g\left(t\right)=C_{T,\gamma,h_{*}}$ and $\beta=h_{*}$ we can conclude that \[ \varphi\left(T\right)\leq C_{T,\gamma,h_{*}}E_{h_{*}}\left(C_{T,\gamma,h_{*}}\Gamma\left(h_{*}\right)T^{h_{*}}\right)\left|\Delta t\right|^{\gamma}. \] \end{proof} \begin{rem} In \cite{Zha08}, Zhang introduced an Euler type scheme for stochastic differential equations of Volterra type and showed that his scheme converges at a certain positive rate, without making that rate explicit. A direct application of his result to our case provides a worse rate than the one we obtain in Theorem \ref{thm:EM_StrongConvergence}. The reason is that, due to our particular kernel, we are able to use a fractional Gronwall lemma. \end{rem} \begin{lem} \label{lem:Bound=000026TimeLipschContEuler} Let $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$ and let $\bar{X}^{h}=\left\{ \bar{X}_{t}^{h}\right\} _{t\in\left[0,T\right]}$ be given by $\left(\ref{eq:DefContEuler}\right)$.
Then \begin{equation} \mathbb{E}\left[\left|\bar{X}_{t}^{h}\right|^{2}\right]\leq C_{T},\quad0\leq t\leq T,\label{eq:SecondMomentContEuler} \end{equation} and \begin{equation} \mathbb{E}\left[\left|\bar{X}_{t}^{h}-\bar{X}_{t^{\prime}}^{h}\right|^{2}\right]\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma},\quad0\leq t^{\prime}\leq t\leq T,\label{eq:SecondMomentTLCE} \end{equation} for any $\gamma<2h_{*}$, where $C_{T}$ and $C_{T,\gamma}$ are positive constants. \end{lem} \begin{proof} Recall that $k\left(t,s\right)=C_{T}\left(t-s\right)^{2h_{*}-1}$ and, since $\eta\left(s\right)\leq s$, we have the following inequality \begin{equation} k\left(t,\eta\left(s\right)\right)\leq k\left(t,s\right).\label{eq: Volterrabound} \end{equation} Using the Itô isometry, equation $\left(\ref{eq:BoundSigma2_No_x}\right)$ and equation (\ref{eq: Volterrabound}), we obtain \begin{align*} \mathbb{E}\left[\left|\bar{X}_{t}^{h}\right|^{2}\right] & =\mathbb{E}\left[\int_{0}^{t}\left(t-\eta\left(s\right)\right)^{2h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-1}ds\right]\\ & \leq\int_{0}^{t}k\left(t,\eta\left(s\right)\right)ds\leq\int_{0}^{t}k\left(t,s\right)ds\leq C_{T}. \end{align*} To prove the bound $\left(\ref{eq:SecondMomentTLCE}\right),$ note that \begin{align*} \bar{X}_{t}^{h}-\bar{X}_{t^{\prime}}^{h} & =\int_{t^{\prime}}^{t}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}dB_{s}\\ & \quad+\int_{0}^{t^{\prime}}\left\{ \left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}-\left(t^{\prime}-\eta\left(s\right)\right)^{h\left(t^{\prime},\bar{X}_{\eta\left(s\right)}^{h}\right)-\frac{1}{2}}\right\} dB_{s}\\ & =:J_{1}+J_{2}.
\end{align*} Due to the Itô isometry, equation $\left(\ref{eq:BoundSigma2_No_x}\right)$ and $\left(\ref{eq: Volterrabound}\right)$, we obtain the bounds \begin{align*} \mathbb{E}\left[\left|J_{1}\right|^{2}\right] & =\mathbb{E}\left[\int_{t^{\prime}}^{t}\left(t-\eta\left(s\right)\right)^{2h\left(t,\bar{X}_{\eta\left(s\right)}^{h}\right)-1}ds\right]\\ & \leq\int_{t'}^{t}k\left(t,\eta\left(s\right)\right)ds\leq\int_{t'}^{t}k\left(t,s\right)ds=C_{T}\left|t-t^{\prime}\right|^{2h_{*}}. \end{align*} Using again the Itô isometry, equation $\left(\ref{eq:time reg sigma}\right)$ and equation $\left(\ref{eq:TimeLipschitzIntegralLambda}\right)$ we can write, for any $\gamma<2h_{*},$ that \begin{align*} \mathbb{E}\left[\left|J_{2}\right|^{2}\right] & \leq\int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},\eta\left(s\right)\right)ds\leq\int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma}, \end{align*} where in the second inequality we have used $\lambda_{\gamma}\left(t,t^{\prime},\eta\left(s\right)\right)\leq\lambda_{\gamma}\left(t,t^{\prime},s\right)$, because $\lambda_{\gamma}$ is essentially a negative fractional power of $(t^{\prime}-s)$ and $\eta\left(s\right)\leq s$. Combining the bounds for $\mathbb{E}\left[\left|J_{1}\right|^{2}\right]$ and $\mathbb{E}\left[\left|J_{2}\right|^{2}\right]$ the result follows. \end{proof} The following result is a combination of Theorem 1 and Corollary 2 in \cite{YeGaoDing07}. \begin{thm} \label{theo:VolterraGronwall}Suppose $\beta>0$, $a\left(t\right)$ is a nonnegative function locally integrable on $0\leq t<T<+\infty$ and $g\left(t\right)$ is a nonnegative, nondecreasing continuous function defined on $0\leq t<T$, $g\left(t\right)\leq M$ (constant), and suppose $u\left(t\right)$ is nonnegative and locally integrable on $0\leq t<T$ with \[ u\left(t\right)\leq a\left(t\right)+g\left(t\right)\int_{0}^{t}\left(t-s\right)^{\beta-1}u\left(s\right)ds, \] on this interval.
Then, \[ u\left(t\right)\leq a\left(t\right)+\int_{0}^{t}\left(\sum_{n=1}^{\infty}\frac{\left(g\left(t\right)\Gamma\left(\beta\right)\right)^{n}}{\Gamma\left(n\beta\right)}\left(t-s\right)^{n\beta-1}a\left(s\right)\right)ds,\qquad0\leq t<T. \] If, in addition, $a\left(t\right)$ is a nondecreasing function on $\left[0,T\right)$, then \[ u\left(t\right)\leq a\left(t\right)E_{\beta}\left(g\left(t\right)\Gamma\left(\beta\right)t^{\beta}\right), \] where $E_{\beta}$ is the Mittag-Leffler function defined by $E_{\beta}\left(z\right)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma\left(k\beta+1\right)}$. \end{thm} \subsection{Examples} Let us now discuss some functions $h:\mathbb{R}\rightarrow\left(0,1\right)$ that produce some interesting self-exciting processes. \begin{example} \label{exa:1}Let $h\left(x\right)=\frac{1}{2}+\frac{1/2}{1+x^{2}}\in\left(\frac{1}{2},1\right)\subset\left(0,1\right),$ and $\left\{ B_{t}\right\} _{t\in[0,T]}$ be a one-dimensional Brownian motion. Assume $X_{t}^{h}$ starts at zero and define the process as given in equation $\left(\ref{eq:SEM}\right)$. Figure $\left(\ref{plot: SEM_ProcessEX1}\right)$ shows the plot of $h$ on the left hand side and, on the right hand side, a sample path of the process resulting from the implementation\footnote{All simulations were run with a step-size $\Delta t=1/100$.} of the EM-approximation given by equation $\left(\ref{eq:EM_1}\right)$. Notice that this process is smoother than a Brownian motion at the origin and rapidly converges to the classical Brownian motion as the process departs from zero. Indeed, $h\rightarrow\frac{1}{2}$ as the process moves away from zero, and the value $h=1$ is only attained when the sample path crosses the $x$-axis again. \begin{figure}\label{plot: SEM_ProcessEX1} \end{figure} \end{example} \begin{example} \label{exa:2}Let $h\left(x\right)=\frac{1}{2}-\frac{1/2}{1+x^{2}}\in\left(0,\frac{1}{2}\right)\subset\left(0,1\right),$ and $\left\{ B_{t}\right\} _{t\in[0,T]}$ be a one-dimensional Brownian motion.
Assume $X_{t}^{h}$ starts at zero and define the process as given in equation $\left(\ref{eq:SEM}\right)$. Figure $\left(\ref{plot: SEM_ProcessEX2}\right)$ shows the plot of $h$ on the left hand side and, on the right hand side, a sample path of the process resulting from the implementation of the EM-approximation given by equation $\left(\ref{eq:EM_1}\right)$. It is interesting to notice that in this case, contrary to the previous example, the process is rougher than a Brownian motion at the origin, temporarily resembles the classical Brownian motion as the sample path departs from zero, and gets rougher again whenever the process crosses the $x$-axis. This makes the process go away from zero even faster due to the increased roughness. \begin{figure}\label{plot: SEM_ProcessEX2} \end{figure} \end{example} \begin{example} \label{exa:3}Let $h\left(x\right)=\frac{1}{1+x^{2}}\in\left(0,1\right),$ and $\left\{ B_{t}\right\} _{t\in[0,T]}$ be a one-dimensional Brownian motion. Assume $X_{t}^{h}$ starts at zero and define the process as given in equation $\left(\ref{eq:SEM}\right)$. Figure $\left(\ref{plot: SEM_ProcesEX3}\right)$ shows the plot of $h$ on the left hand side and, on the right hand side, a sample path of the process resulting from the implementation of the EM-approximation given by equation $\left(\ref{eq:EM_1}\right)$. Notice that the Hurst function collapses to zero as the process departs from zero, making the process as rough as possible. Smoother values, in particular $h=1$, are only recovered when the sample path crosses the $x$-axis again. \begin{figure}\label{plot: SEM_ProcesEX3} \end{figure} \end{example} \section{Self-Exciting Multifractional Gamma Processes} In \cite{B-N16}, Barndorff-Nielsen introduces a class of self-exciting gamma-type processes in order to model turbulence, since they capture the intermittency effect observed in turbulent data.
We would also like to extend our process in order to capture the previously mentioned intermittency effect. One could believe that if we were to choose a trigonometric function as a Hurst function, e.g. $h\left(t,x\right)=\alpha+\beta\cdot\sin\left(\gamma x\right),$ for some $\alpha,\beta,\gamma\in\mathbb{R}$ in the SEM process, then we might observe a regime switch in the Hurst parameters. Since the values of the process $X_{t}$ may get very large, the oscillations may take place more and more frequently. By introducing a type of gamma process (SEM-Gamma) we dampen the Volterra kernel in (\ref{eq:SEM}) by an exponential function and make the process oscillate around a mean value, obtaining a more stable intermittency effect in the Hurst function. \begin{defn} We say that a function $f:\left[0,T\right]\times\mathbb{R}\rightarrow\mathbb{R}_{+}$ is a dampening function if it is nonnegative, satisfies the following Lipschitz conditions for all $x,y\in\mathbb{R}$ and $t,t^{\prime}\in\left[0,T\right]$ \[ \left|f\left(t,x\right)-f\left(t,y\right)\right|\leq C\left|x-y\right|, \] \[ \left|f\left(t,x\right)-f\left(t^{\prime},x\right)\right|\leq C\left|t-t^{\prime}\right|, \] and satisfies the following linear growth condition for all $x\in\mathbb{R}$ and $t\in\left[0,T\right]$ \[ \left|f\left(t,x\right)\right|\leq C\left(1+\left|x\right|\right), \] for some constant $C>0$. \end{defn} Let $f$ be a dampening function and let $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$. The self-exciting multifractional gamma process is formally given by the Volterra equation \begin{equation} X_{t}^{h,f}=\int_{0}^{t}\exp\left(-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)\right)\left(t-s\right)^{h(t,X_{s}^{h,f})-\frac{1}{2}}dB_{s}.\label{eq: SEMP_OU-Process} \end{equation} The following lemma shows the existence and uniqueness of the solution of the above equation by means of Theorem \ref{thm:Zhang Existence thm}.
\begin{lem} \label{lem:SEM-OU_Well_defined}Let $\sigma(t,s,x)=\exp\left(-f\left(t,x\right)\left(t-s\right)\right)\left(t-s\right)^{h(t,x)-\frac{1}{2}}$, such that $f$ is a dampening function and $h$ is a Hurst function with parameters $\left(h_{*},h^{*}\right)$. Then, we have that \begin{equation} \left|\sigma\left(t,s,x\right)\right|^{2}\leq k\left(t,s\right)\left(1+\left|x\right|^{2}\right),\label{eq:sigma linear growth-gamma} \end{equation} where \[ k\left(t,s\right)=C_{T}\left(t-s\right)^{2h_{*}-1}, \] and \begin{equation} \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\leq C_{T}k\left(t,s\right)\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}.\label{eq:Lipschitz sigma-gamma} \end{equation} Moreover, $\sigma$ satisfies $\mathbf{H1}$-$\mathbf{H2}$. \end{lem} Proofs for all the results in this section are reported in the appendix, since they are analogous to the ones provided previously for the SEM process. Since the new $\sigma$ proposed for the SEM-Gamma process also satisfies $\mathbf{H1}$-$\mathbf{H2}$, we can again apply Zhang's theorem, in the same way as in Theorem \ref{thm:Existence and unique}, yielding existence, uniqueness and bounds on the $p$-th moments for the solution of \begin{equation} X_{t}^{h,f}=g\left(t\right)+\int_{0}^{t}e^{-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)}\left(t-s\right)^{h\left(t,X_{s}^{h,f}\right)-\frac{1}{2}}dB_{s}.\label{eq: SEM-Gammaequation} \end{equation} We call this process a \textit{Self-Exciting Multifractional Gamma process (SEM-Gamma)}. The following lemma is key to studying the Hölder regularity of the solution of $\left(\ref{eq: SEM-Gammaequation}\right)$, which coincides with that of the SEM process and can be derived in the same way.
It is also useful for the discussion of the strong convergence of the approximating scheme given in Theorem \ref{thm:EM_StrongConvergenceGamma}. \begin{lem} \label{lem:LambdaFunctionGamma} Let $\sigma(t,s,x)=\exp\left(-f\left(t,x\right)\left(t-s\right)\right)\left(t-s\right)^{h(t,x)-\frac{1}{2}}$, such that $f$ is a dampening function and $h$ is a Hurst function with parameters $\left(h_{*},h^{*}\right)$. Then, for any $0<\gamma<2h_{*}$ there exists $\lambda_{\gamma}:\triangle^{(3)}\left(\left[0,T\right]\right)\rightarrow\mathbb{R}$ such that \begin{equation} \left|\sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)\right|^{2}\leq\lambda_{\gamma}\left(t,t^{\prime},s\right)\left(1+\left|x\right|^{2}\right),\label{eq:time reg sigma-gamma} \end{equation} and \begin{equation} \int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma},\label{eq:TimeLispchitzIntegralLambda-gamma} \end{equation} for some constant $C_{T,\gamma}>0.$ \end{lem} In order to simulate this process we will use, again, the Euler-Maruyama approximation to discretize the continuous equation $\left(\ref{eq: SEMP_OU-Process}\right)$. Consider a time discretization of the interval $\left[0,T\right],$ using a step-size $\Delta t=\frac{T}{N}>0$. The EM method yields a discrete time approximation $\bar{X}_{k}^{h,f}$ of the process $X_{t_{k}}^{h,f}$ for $t_{k}=k\Delta t$ with $k\in\left\{ 0,\ldots,N\right\} .$ Therefore we have the following discrete time equations \begin{align} \bar{X}_{0}^{h,f} & =X_{0}^{h,f}=0\\ \bar{X}_{k}^{h,f} & =\sum_{i=0}^{k-1}\exp\left(-f\left(t_{k},\bar{X}_{i}^{h,f}\right)\left(t_{k}-t_{i}\right)\right)\left(t_{k}-t_{i}\right)^{h\left(t_{k},\bar{X}_{i}^{h,f}\right)-\frac{1}{2}}\Delta B_{i}\quad\forall k\in\left\{ 1,\ldots,N\right\} ,\label{eq:SEMP-OU-discretization} \end{align} where $\Delta B_{i}=B\left(t_{i+1}\right)-B\left(t_{i}\right)$.
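For reference, the discretization $\left(\ref{eq:SEMP-OU-discretization}\right)$ can be sketched in Python as follows. The helper name \texttt{sem\_gamma\_euler} is ours rather than part of the paper, and the callables $h$ and $f$ stand for a user-supplied Hurst function and dampening function, respectively.

```python
import numpy as np

def sem_gamma_euler(h, f, T=1.0, N=100, rng=None):
    """Euler-Maruyama scheme for the SEM-Gamma process.

    h : callable h(t, x) returning a Hurst value in (0, 1).
    f : callable f(t, x) returning a nonnegative dampening value.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    t = dt * np.arange(N + 1)
    dB = rng.normal(scale=np.sqrt(dt), size=N)  # Brownian increments
    X = np.zeros(N + 1)
    for k in range(1, N + 1):
        lag = t[k] - t[:k]  # t_k - t_i for i < k
        H = np.array([h(t[k], x) for x in X[:k]])
        F = np.array([f(t[k], x) for x in X[:k]])
        # exponentially dampened kernel: exp(-f * lag) * lag^{h - 1/2}
        X[k] = np.sum(np.exp(-F * lag) * lag ** (H - 0.5) * dB[:k])
    return t, X
```

Taking $f\equiv0$ recovers the plain SEM scheme $\left(\ref{eq:EM_1}\right)$.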
Before implementing this approximation in order to study the process numerically, we prove the following results to ensure that the approximation converges strongly to the process itself. It will be convenient, just as we did with the SEM process, to consider a continuous time interpolation of $\left\{ \bar{X}_{k}^{h,f}\right\} _{k\in\left\{ 0,\ldots,N\right\} }$ given by \begin{equation} \bar{X}_{t}^{h,f}=\int_{0}^{t}\exp\left(-f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)\right)\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}dB_{s},\quad\forall t\in\left[0,T\right],\label{eq:DefContEulerGamma} \end{equation} where, again, $\eta\left(s\right):=\sum_{i=0}^{N-1}t_{i}\boldsymbol{1}_{\left[t_{i},t_{i+1}\right)}\left(s\right)$. We also have the following technical result. \begin{lem} \label{lem:Bound=000026TimeLipschContEulerGamma} Let $f$ be a dampening function, $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$ and $\bar{X}^{h,f}=\left\{ \bar{X}_{t}^{h,f}\right\} _{t\in\left[0,T\right]}$ be given by $\left(\ref{eq:DefContEulerGamma}\right)$. Then \begin{equation} \mathbb{E}\left[\left|\bar{X}_{t}^{h,f}\right|^{2}\right]\leq C_{T},\quad0\leq t\leq T,\label{eq:SecondMomentContEuler-gamma} \end{equation} and \begin{equation} \mathbb{E}\left[\left|\bar{X}_{t}^{h,f}-\bar{X}_{t^{\prime}}^{h,f}\right|^{2}\right]\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma},\quad0\leq t^{\prime}\leq t\leq T,\label{eq:SecondMomentTLCE-gamma} \end{equation} for any $\gamma<2h_{*}$, where $C_{T}$ and $C_{T,\gamma}$ are positive constants. \end{lem} Using Lemma \ref{lem:Bound=000026TimeLipschContEulerGamma} and Theorem \ref{theo:VolterraGronwall} we can show the order of convergence of the approximating scheme.
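The error bounds in Theorem \ref{thm:EM_StrongConvergence} and in the theorem below involve the Mittag-Leffler function $E_{\beta}$ defined in Theorem \ref{theo:VolterraGronwall}. For moderate arguments it can be evaluated by simply truncating the defining series, as in the following Python sketch (the helper name \texttt{mittag\_leffler} is ours).

```python
import math

def mittag_leffler(beta, z, terms=100):
    """Truncated series E_beta(z) = sum_{k >= 0} z**k / Gamma(k*beta + 1).

    Suitable for beta in (0, 1] and moderate |z|; for large |z| the
    truncated series becomes numerically unreliable.
    """
    return sum(z ** k / math.gamma(k * beta + 1) for k in range(terms))
```

For example, $E_{1}\left(z\right)=e^{z}$, which gives a quick sanity check of the truncation.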
\begin{thm} \label{thm:EM_StrongConvergenceGamma}Let $f$ be a dampening function, let $h$ be a Hurst function with parameters $\left(h_{*},h^{*}\right)$, let $X^{h,f}$ be the solution of $\left(\ref{eq: SEM-Gammaequation}\right)$ and let $\bar{X}^{h,f}=\left\{ \bar{X}_{t}^{h,f}\right\} _{t\in\left[0,T\right]}$ be given by $\left(\ref{eq:DefContEulerGamma}\right)$. Then the Euler-Maruyama scheme $\left(\ref{eq:DefContEulerGamma}\right)$ satisfies \begin{equation} \sup_{0\leq t\leq T}\mathbb{E}\left[\left|X_{t}^{h,f}-\bar{X}_{t}^{h,f}\right|^{2}\right]\leq C_{T,\gamma,h_{*}}E_{h_{*}}\left(C_{T,\gamma,h_{*}}\Gamma\left(h_{*}\right)T^{h_{*}}\right)\left|\Delta t\right|^{\gamma},\label{eq:SEMGamma_StrongConv} \end{equation} where $\gamma\in\left(0,2h_{*}\right)$ and $C_{T,\gamma,h_{*}}$ is a positive constant, which does not depend on $N$. \end{thm} \begin{example} We will continue the previous example (\ref{exa:3}). In \cite{FiSo11}, V. Filimonov and D. Sornette suggested a class of self-excited processes that may exhibit all stylized facts found in financial time series, such as heavy tails (the asset return distribution displays heavy tails with positive excess kurtosis), absence of autocorrelations (autocorrelations in asset returns are negligible, except for very short time scales $\simeq20$ minutes), volatility clustering (absolute returns display a positive, significant and slowly decaying autocorrelation function) and the leverage effect (volatility measures of an asset are negatively correlated with its returns), among others stated in \cite{CoTa04}. As we will see, the SEM-Gamma process displays these properties for some choices of $h$. The SEM-Gamma process could also be interesting for modeling commodity markets, given its mean reversion property and the clustering and stationarity of its increments, induced by the dampening through the exponential function.
The right plot in Figure $\left(\ref{plot: SEM-Gamma}\right)$ corresponds to a simulation of a sample path of the process (\ref{eq:SEMP-OU-discretization}), where the Hurst function $h$ is the same as in example (\ref{exa:3}), namely $h\left(x\right)=\frac{1}{1+x^{2}}$. Notice also that in this first example of the SEM-Gamma process we have taken $f\left(x\right)=0$, which recovers the regular SEM process of the previous section; accordingly, the left plot looks very similar to the left plot in Figure $\left(\ref{plot: SEM_ProcesEX3}\right)$. \begin{figure}\label{plot: SEM-Gamma} \end{figure} \end{example} \begin{example} \label{exa:4_comparative} Figure $\left(\ref{plot: SEM-Gamma_Comparative}\right)$ shows the change in the behavior of the Hurst exponent (a transition from rougher values to smoother values, i.e. $h\approx0$ to $h\approx1$) as we shift the speed of mean reversion, i.e. $f$, from lower to higher values. In particular we compare $f\in\left\{ 0,0.5,1,10\right\} $. \begin{figure}\label{plot: SEM-Gamma_Comparative} \end{figure} \end{example} \begin{rem} Notice that one can control the clustering effect of the increments and the varying regularity of the process through the dampening function $f$, regardless of the Hurst function chosen as $h$. This is desirable in numerous fields, for example in financial markets modeling, when trying to capture shocks in asset prices. It is also important to remark that using $f\left(x\right)=5$ we reduced the number of spikes to none, shifting the nature of the process from very rough with a large drift to a very smooth and driftless process. Figure $\left(\ref{plot: scale comparison}\right)$ shows how, by zooming in close enough on the case $f\left(x\right)=5$, we observe the rough nature hidden at a smaller scale.
\begin{figure}\label{plot: scale comparison} \end{figure} It also makes sense to let $f\left(x\right)$ be a genuine function of $x$ rather than a constant. In particular, if we take $f\left(x\right)=h\left(x\right)=\frac{1}{1+x^{2}}$, we can see in Figure $\left(\ref{plot: SEM-Gamma h1equalh2}\right)$ that the regime switch in the Hurst exponent is less abrupt, favoring sustained differences of roughness in time. \begin{figure}\label{plot: SEM-Gamma h1equalh2} \end{figure} \end{rem} \begin{rem} The plots in Figure $\left(\ref{plot: ACF}\right)$ show the autocorrelation function of the absolute values of the increments of the SEM process from example (\ref{exa:3}) (left graph) and of the SEM-Gamma process with $f\left(x\right)=0.1$ (right graph). Notice that the autocorrelation in the second case is clearly much higher. \begin{figure}\label{plot: ACF} \end{figure} \end{rem} \section{Appendix} This appendix contains the proofs of the results related to the SEM-Gamma process, which are analogous to the proofs in the previous sections. \subsection{Proof of Lemma \ref{lem:SEM-OU_Well_defined}} \begin{proof} We again prove the results stated in the lemma by reducing them to the case treated in Lemma \ref{lem:Well defined sigma}. To do so we start by proving equation $\left(\ref{eq:sigma linear growth-gamma}\right)$. By definition, we have that \begin{align*} \sigma\left(t,s,x\right) & =\exp\left(-f\left(t,x\right)\left(t-s\right)\right)\left(t-s\right)^{h\left(t,x\right)-\frac{1}{2}}. \end{align*} Note that \[ \exp\left(-f\left(t,x\right)\left(t-s\right)\right)\leq1, \] since $f\geq0$ and $s<t$, for all $\left(t,s\right)\in\triangle^{\left(2\right)}\left(\left[0,T\right]\right)$. We also have that $h\left(t,x\right)\in\left[h_{*},h^{*}\right]\subset\left(0,1\right)$ for all $\left(t,x\right)\in\left[0,T\right]\times\mathbb{R}$. Therefore the result trivially follows from Lemma \ref{lem:Well defined sigma}.
Next we consider equation $\left(\ref{eq:Lipschitz sigma-gamma}\right)$. Using the fact that we can rewrite $\sigma\left(t,s,x\right)$ as \[ \sigma\left(t,s,x\right)=\exp\left(-f\left(t,x\right)\left(t-s\right)+\log\left(t-s\right)\left(h\left(t,x\right)-\frac{1}{2}\right)\right), \] we can again make use of inequality $\left(\ref{eq: expinequality}\right)$, for all $x,y\in\mathbb{R}$, to write the following upper bound \begin{align*} & \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|\\ & \leq\exp\left(\max\left(-f\left(t,x\right),-f\left(t,y\right)\right)\left(t-s\right)\right)\exp\left(\left(\max\left(h\left(t,x\right),h\left(t,y\right)\right)-\frac{1}{2}\right)\cdot\log\left(t-s\right)\right)\\ & \quad\times\left(\left|f\left(t,x\right)-f\left(t,y\right)\right|\left|t-s\right|+\left|\log\left(t-s\right)\right|\left|h\left(t,x\right)-h\left(t,y\right)\right|\right). \end{align*} Recalling that $\left|e^{-f\left(t,x\right)\left(t-s\right)}\right|\leq1$ and that $f$ and $h$ are uniformly Lipschitz, we have that \begin{align*} & \left|\sigma\left(t,s,x\right)-\sigma\left(t,s,y\right)\right|^{2}\\ & \leq C^{2}\exp\left(\max\left(\log\left(t-s\right)\left(2h\left(t,x\right)-1\right),\log\left(t-s\right)\left(2h\left(t,y\right)-1\right)\right)\right)\\ & \quad\times\left|\log\left(t-s\right)\right|^{2}\left|x-y\right|^{2}. \end{align*} This reduces the proof to the previous case of an SEM process, see Lemma \ref{lem:Well defined sigma}. \end{proof} \subsection{Proof of Lemma \ref{lem:LambdaFunctionGamma}} \begin{proof} In order to prove equation $\left(\ref{eq:time reg sigma-gamma}\right)$, first note that \begin{align*} \sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right) & =e^{-f\left(t,x\right)\left(t-s\right)}\left(t-s\right)^{h\left(t,x\right)-\frac{1}{2}}-e^{-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)}\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}. 
\end{align*} Furthermore, notice that for all $t>t^{\prime}>s>0,$ we can add and subtract the term \[ e^{-f\left(t,x\right)\left(t-s\right)}\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}, \] to get \[ \sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)=\tilde{J}^{1}\left(t,t^{\prime},s,x\right)+\tilde{J}^{2}\left(t,t^{\prime},s,x\right), \] where \begin{align*} \tilde{J}^{1}\left(t,t^{\prime},s,x\right) & :=e^{-f\left(t,x\right)\left(t-s\right)}\left(\left(t-s\right)^{h\left(t,x\right)-\frac{1}{2}}-\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}\right),\\ \tilde{J}^{2}\left(t,t^{\prime},s,x\right) & :=e^{-f\left(t,x\right)\left(t-s\right)}\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}-e^{-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)}\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}. \end{align*} First we bound $\tilde{J}^{1}$ by using that $e^{-f\left(t,x\right)\left(t-s\right)}\leq1$: \begin{align*} \left|\tilde{J}^{1}\left(t,t^{\prime},s,x\right)\right| & \leq\left|J^{1}\left(t,t^{\prime},s,x\right)\right|\leq C_{T,\delta}\left|t-t^{\prime}\right|\left|t'-s\right|^{h_{*}-\frac{1}{2}-\delta}, \end{align*} since $s<t^{\prime}<t$, where $J^{1}$ is the term appearing in the proof of Lemma \ref{lem:LambdaFunction}. 
Next, in order to bound the term $\tilde{J}^{2}$, we add and subtract the quantity \[ e^{-f\left(t,x\right)\left(t-s\right)}\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}, \] to obtain \begin{align*} \left|\tilde{J}^{2}\left(t,t^{\prime},s,x\right)\right| & \leq\left|e^{-f\left(t,x\right)\left(t-s\right)}\left(\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}-\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\right)\right.\\ & \qquad\left.-\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\left(e^{-f\left(t,x\right)\left(t-s\right)}-e^{-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)}\right)\right|\\ & \leq\left|e^{-f\left(t,x\right)\left(t-s\right)}\right|\left|\left(t-s\right)^{h(t^{\prime},x)-\frac{1}{2}}-\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\right|\\ & \qquad+\left|\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\right|\left|e^{-f\left(t,x\right)\left(t-s\right)}-e^{-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)}\right|\\ & \leq\left|J^{2}\left(t,t^{\prime},s,x\right)\right|+\left|\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\right|\left|e^{-f\left(t,x\right)\left(t-s\right)}-e^{-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)}\right|, \end{align*} where $J^{2}$ is the term appearing in the proof of Lemma \ref{lem:LambdaFunction}. 
Using inequality $\left(\ref{eq: expinequality}\right)$ and that the exponential factor is bounded by one, we can rewrite the previous expression as \begin{align*} \left|\tilde{J}^{2}\left(t,t^{\prime},s,x\right)\right| & \leq\left|J^{2}\left(t,t^{\prime},s,x\right)\right|+\left|\left(t^{\prime}-s\right)^{h\left(t^{\prime},x\right)-\frac{1}{2}}\right|\\ & \qquad\times\left|e^{\max\left(-f\left(t,x\right)\left(t-s\right),-f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)\right)}\right|\left|f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)-f\left(t,x\right)\left(t-s\right)\right|\\ & \leq\left|J^{2}\left(t,t^{\prime},s,x\right)\right|+\left|t^{\prime}-s\right|^{h\left(t^{\prime},x\right)-\frac{1}{2}}\left|f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)-f\left(t,x\right)\left(t-s\right)\right|. \end{align*} Then, adding and subtracting $f\left(t,x\right)\left(t'-s\right)$, and using the linear growth and Lipschitz conditions on $f$, we obtain \begin{align*} \left|f\left(t^{\prime},x\right)\left(t^{\prime}-s\right)-f\left(t,x\right)\left(t-s\right)\right| & \leq\left|f\left(t,x\right)\right|\left|t'-t\right|+\left|t'-s\right|\left|f\left(t',x\right)-f\left(t,x\right)\right|\\ & \leq C\left|t'-t\right|\left(1+\left|x\right|\right)+C\left|t'-s\right|\left|t'-t\right|, \end{align*} and, since $\left|t^{\prime}-s\right|\leq T$, we can conclude that \begin{align*} \left|\tilde{J}^{2}\left(t,t^{\prime},s,x\right)\right| & \leq\left|J^{2}\left(t,t^{\prime},s,x\right)\right|+C_{T}\left|t^{\prime}-s\right|^{h\left(t^{\prime},x\right)-\frac{1}{2}}\left|t-t^{\prime}\right|\left(1+\left|x\right|\right)\\ & \leq\left|J^{2}\left(t,t^{\prime},s,x\right)\right|+C_{T}\left|t^{\prime}-s\right|^{h_{*}-\frac{1}{2}}\left|t-t^{\prime}\right|\left(1+\left|x\right|\right). 
\end{align*} Therefore, if we define \[ \lambda_{\gamma}\left(t,t^{\prime},s\right):=C_{T,\gamma}\left(t-t^{\prime}\right)^{\gamma}\left(t^{\prime}-s\right)^{-1+h_{*}-\frac{\gamma}{2}}, \] for $0<\gamma<2h_{*},$ and use the final bounds for $J^{1}$ and $J^{2}$ in Lemma \ref{lem:LambdaFunction}, we get that \begin{align*} & \left|\sigma\left(t,s,x\right)-\sigma\left(t^{\prime},s,x\right)\right|^{2}\\ & \leq4\left(\left|J^{1}\left(t,t^{\prime},s,x\right)\right|^{2}+\left|J^{2}\left(t,t^{\prime},s,x\right)\right|^{2}+\left|C_{T}\left|t^{\prime}-s\right|^{h_{*}-\frac{1}{2}}\left|t-t^{\prime}\right|\left(1+\left|x\right|\right)\right|^{2}\right)\\ & \leq\lambda_{\gamma}\left(t,t^{\prime},s\right)\left(1+\left|x\right|^{2}\right), \end{align*} and \[ \int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left(t^{\prime}\right)^{h_{*}-\frac{\gamma}{2}}\left(t-t^{\prime}\right)^{\gamma}, \] which concludes the proof. \end{proof} \subsection{Proof of Lemma \ref{lem:Bound=000026TimeLipschContEulerGamma}} \begin{proof} Recall that $k\left(t,s\right)=C_{T}\left(t-s\right)^{2h_{*}-1}$ and, since $\eta\left(s\right)\leq s$, we have the following inequality \begin{equation} k\left(t,\eta\left(s\right)\right)\leq k\left(t,s\right).\label{eq: Volterrabound-1} \end{equation} Using the Itô isometry, the fact that $e^{-2f\left(t,x\right)\left(t-s\right)}\leq1$, equation $\left(\ref{eq:BoundSigma2_No_x}\right)$ and equation (\ref{eq: Volterrabound-1}), we obtain \begin{align*} \mathbb{E}\left[\left|\bar{X}_{t}^{h,f}\right|^{2}\right] & =\mathbb{E}\left[\int_{0}^{t}e^{-2f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{2h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-1}ds\right]\\ & \leq\mathbb{E}\left[\int_{0}^{t}\left(t-\eta\left(s\right)\right)^{2h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-1}ds\right]\\ & \leq\int_{0}^{t}k\left(t,\eta\left(s\right)\right)ds\leq\int_{0}^{t}k\left(t,s\right)ds\leq C_{T}. 
\end{align*} To prove the bound $\left(\ref{eq:SecondMomentTLCE}\right),$ note that \begin{align*} \bar{X}_{t}^{h,f}-\bar{X}_{t^{\prime}}^{h,f} & =\int_{t^{\prime}}^{t}e^{-f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}dB_{s}\\ & \quad+\int_{0}^{t^{\prime}}\left\{ e^{-f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right.\\ & \qquad\qquad\left.-e^{-f\left(t',\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t^{\prime}-\eta\left(s\right)\right)}\left(t^{\prime}-\eta\left(s\right)\right)^{h\left(t^{\prime},\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right\} dB_{s}\\ & =:J_{1}+J_{2}. \end{align*} Due to the Itô isometry, the fact that $e^{-2f\left(t,x\right)\left(t-s\right)}\leq1$, equation $\left(\ref{eq:BoundSigma2_No_x}\right)$ and $\left(\ref{eq: Volterrabound-1}\right)$, we obtain the bounds \begin{align*} \mathbb{E}\left[\left|J_{1}\right|^{2}\right] & =\mathbb{E}\left[\int_{t^{\prime}}^{t}e^{-2f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{2h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-1}ds\right]\\ & \leq\int_{t'}^{t}k\left(t,\eta\left(s\right)\right)ds\leq\int_{t'}^{t}k\left(t,s\right)ds\leq C_{T}\left|t-t^{\prime}\right|^{2h_{*}}. 
\end{align*} Using again the Itô isometry, equation $\left(\ref{eq:time reg sigma-gamma}\right)$ and equation $\left(\ref{eq:TimeLispchitzIntegralLambda-gamma}\right)$ we can write, for any $\gamma<2h_{*},$ that \begin{align*} \mathbb{E}\left[\left|J_{2}\right|^{2}\right] & \leq\int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},\eta\left(s\right)\right)\left(1+\mathbb{E}\left[\left|\bar{X}_{\eta\left(s\right)}^{h,f}\right|^{2}\right]\right)ds\leq C_{T}\int_{0}^{t^{\prime}}\lambda_{\gamma}\left(t,t^{\prime},s\right)ds\leq C_{T,\gamma}\left|t-t^{\prime}\right|^{\gamma}, \end{align*} where in the second inequality we have used that $\lambda_{\gamma}\left(t,t^{\prime},\eta\left(s\right)\right)\leq\lambda_{\gamma}\left(t,t^{\prime},s\right)$, because $\lambda_{\gamma}$ is essentially a negative fractional power of $\left(t^{\prime}-s\right)$ and $\eta\left(s\right)\leq s$, and also that $\mathbb{E}\left[\left|\bar{X}_{t}^{h,f}\right|^{2}\right]\leq C_{T}$, $0\leq t\leq T$, which we have just proved above. Combining the bounds for $\mathbb{E}\left[\left|J_{1}\right|^{2}\right]$ and $\mathbb{E}\left[\left|J_{2}\right|^{2}\right]$ the result follows. \end{proof} \subsection{Proof of Theorem \ref{thm:EM_StrongConvergenceGamma}} \begin{proof} We will reduce the proof to the case in Theorem \ref{thm:EM_StrongConvergence}. To do so, proceeding in the same way as before, we define \[ \delta_{t}:=X_{t}^{h,f}-\bar{X}_{t}^{h,f},\qquad\varphi\left(t\right):=\sup_{0\leq s\leq t}\mathbb{E}\left[\left|\delta_{s}\right|^{2}\right],\quad t\in\left[0,T\right]. 
\] For any $t\in\left[0,T\right]$, we can write \begin{align*} \delta_{t} & =\int_{0}^{t}\left(e^{-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)}\left(t-s\right)^{h(t,X_{s}^{h,f})-\frac{1}{2}}\right.\\ & \qquad-\left.e^{-f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right)dB_{s}\\ & =\int_{0}^{t}\left(e^{-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)}\left(t-s\right)^{h(t,X_{s}^{h,f})-\frac{1}{2}}\right.\\ & \qquad-\left.e^{-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)}\left(t-s\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right)dB_{s}\\ & +\int_{0}^{t}\left(e^{-f\left(t,X_{s}^{h,f}\right)\left(t-s\right)}\left(t-s\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right.\\ & \qquad-\left.e^{-f\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\left(t-\eta\left(s\right)\right)}\left(t-\eta\left(s\right)\right)^{h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)-\frac{1}{2}}\right)dB_{s}\\ & =:\tilde{I}_{1}\left(t\right)+\tilde{I}_{2}\left(t\right). \end{align*} First we bound the second moment of $\tilde{I}_{1}\left(t\right)$ in terms of a certain integral of $\varphi$. Using the Itô isometry, equation $\left(\ref{eq:Lipschitz sigma-gamma}\right)$ and the Lipschitz property of $h$ we get \begin{align*} \mathbb{E}\left[\left|\tilde{I}_{1}\left(t\right)\right|^{2}\right] & \leq\int_{0}^{t}k\left(t,s\right)\left(\log\left(t-s\right)\right)^{2}\mathbb{E}\left[\left(h\left(t,X_{s}^{h,f}\right)-h\left(t,\bar{X}_{\eta\left(s\right)}^{h,f}\right)\right)^{2}\right]ds\\ & \leq C_{T,\delta}\int_{0}^{t}\left(t-s\right)^{2\left(h_{*}-\delta\right)-1}\mathbb{E}\left[\left|X_{s}^{h,f}-\bar{X}_{\eta\left(s\right)}^{h,f}\right|^{2}\right]ds, \end{align*} for $\delta>0,$ arbitrarily small. 
By the same arguments as in the proof of Theorem \ref{thm:EM_StrongConvergence} we obtain the following bound \begin{equation} \mathbb{E}\left[\left|\tilde{I}_{1}\right|^{2}\right]\leq C_{T,h_{*}}\left\{ \int_{0}^{t}\left(t-s\right)^{h_{*}-1}\varphi\left(s\right)ds+\left|\Delta t\right|^{\gamma}\right\} .\label{eq: barI1Bound} \end{equation} Next, we find a bound for the second moment of $\tilde{I}_{2}\left(t\right)$. Using again the Itô isometry, equations $\left(\ref{eq:time reg sigma-gamma}\right)$ and $\left(\ref{eq:TimeLispchitzIntegralLambda-gamma}\right)$, and Lemma \ref{lem:Bound=000026TimeLipschContEuler} we can write \begin{equation} \mathbb{E}\left[\left|\tilde{I}_{2}\right|^{2}\right]\leq\int_{0}^{t}\lambda_{\gamma}\left(t+\left(s-\eta\left(s\right)\right),t,s\right)\left(1+\mathbb{E}\left[\left|\bar{X}_{\eta\left(s\right)}^{h,f}\right|^{2}\right]\right)ds\leq C_{T,\gamma}\left|\Delta t\right|^{\gamma},\label{eq: barI2Bound} \end{equation} for any $\gamma<2h_{*}$, and where we have used that \[ \mathbb{E}\left[\left|\bar{X}_{s}^{h,f}\right|^{2}\right]\leq C_{T},\qquad0\leq s\leq T. \] Combining the inequalities $\left(\ref{eq: barI1Bound}\right)$ and $\left(\ref{eq: barI2Bound}\right)$ we obtain \[ \varphi\left(t\right)\leq C_{T,\gamma,h_{*}}\left\{ \int_{0}^{t}\left(t-s\right)^{h_{*}-1}\varphi\left(s\right)ds+\left|\Delta t\right|^{\gamma}\right\} . \] Applying again Lemma \ref{theo:VolterraGronwall} concludes the proof. \end{proof} \end{document}
\begin{document} \title{Qubit-induced phonon blockade as a signature of quantum behavior in nanomechanical resonators} \author{Yu-xi Liu} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Institute of Microelectronics, Tsinghua University, Beijing 100084, China} \affiliation{Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University, Beijing 100084, China} \author{Adam Miranowicz} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland} \author{Y. B. Gao} \affiliation{College of Applied Science, Beijing University of Technology, Beijing, 100124, China} \author{Ji\v r\'\i\ Bajer} \affiliation{Department of Optics, Palack\'{y} University, 772~00 Olomouc, Czech Republic} \author{C. P. Sun} \affiliation{Institute of Theoretical Physics, The Chinese Academy of Sciences, Beijing, 100080, China} \author{Franco Nori} \affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \affiliation{Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA} \date{\today} \begin{abstract} The observation of quantized nanomechanical oscillations by detecting femtometer-scale displacements is a significant challenge for experimentalists. We propose that phonon blockade can serve as a signature of quantum behavior in nanomechanical resonators. In analogy to photon blockade and Coulomb blockade for electrons, the main idea for phonon blockade is that the second phonon cannot be excited when there is one phonon in the nonlinear oscillator. To realize phonon blockade, a superconducting quantum two-level system is coupled to the nanomechanical resonator and is used to induce the phonon self-interaction. 
Using Monte Carlo simulations, the dynamics of the induced nonlinear oscillator is studied via the Cahill-Glauber $s$-parametrized quasiprobability distributions. We show how the oscillation of the resonator can occur in the quantum regime and demonstrate how the phonon blockade can be observed with currently accessible experimental parameters. \pacs{85.85.+j, 03.65.Yz, 85.25.Cp, 42.50.Dv} \end{abstract} \maketitle \pagenumbering{arabic} \section{Introduction} Many efforts (e.g., see Refs.~\cite{Huang03,Knobel03,Blencowe04,LaHaye04} and reviews~\cite{Blencowe04review,Schwab05review,Ekinci05review}) have been made to explore quantum effects in nanomechanical resonators (NAMRs) and optomechanical systems (e.g., in Refs.~\cite{add1,add2,add3,add4,add5,add6} and the review~\cite{vahala08review}). Reaching the quantum limit of NAMRs would have important applications in, e.g., small-mass or weak-force detection~\cite{Caves,Bocko96,Buks06}, quantum measurements~\cite{Braginsky92}, and quantum-information processing. Only recently has the quantum limit in NAMRs been reached experimentally~\cite{OConnell}. Quantum or classical behavior of a NAMR oscillation depends on its environment, which induces the decoherence and dissipation of the NAMR states. In principle, if the NAMR is cooled to very low temperatures (in the mK range) and has sufficiently high oscillation frequencies (in the GHz range), then its oscillation can approach the quantum limit. In other words, if the energies of the NAMR quanta, which are referred to as {\em phonons}~\cite{Cleland-book}, are larger than (or at least comparable to) the thermal energy, then the mechanical oscillation can be regarded as quantum. Even when the NAMR beats the thermal energy and approaches the quantum regime, measurements of the quantum oscillations of NAMRs remain very challenging. 
One encounters: (i) fundamental problems, since measurements are usually performed by position detection, and the quantum uncertainty due to the zero-point fluctuation limits the measurement accuracy; and (ii) practical problems, since, for a beam oscillating with a frequency in the gigahertz range, the typical displacement of this oscillation is on the order of a femtometer. Detecting such a tiny displacement is a difficult task for current experimental techniques. Various signatures and applications of quantum behavior (or nonclassicality) in nanomechanical resonators have been studied. Examples include: generation of quantum entanglement~\cite{Cleland04,Armour02,Tian05}, generation of squeezed states~\cite{Hu96,Wang04,Rabl04,Xue07}, Fock states~\cite{Santamore04,Buks08}, Schr\"odinger cat states~\cite{Semiao09}, and other nonclassical states~\cite{Tian04,Jacobs07}, prediction of classical-like~\cite{Gronbech05} and quantum~\cite{Shevchenko08} Rabi oscillations, transport measurements~\cite{Lambert}, quantum nondemolition measurements~\cite{Braginsky92,Irish03,Santamore04,Buks08,Gong08}, quantum tunneling~\cite{Savelev06}, proposal of quantum metrology~\cite{Woolley08} and of quantum decoherence engineering~\cite{Wang04}. The problem of how to perform quantum measurements on a system containing a NAMR plays a fundamental role in reaching the quantum limit of the NAMR and testing its nonclassical behavior. Quantum measurements are usually done by coupling an external probe (detector) to the NAMR (see, e.g.,~\cite{Knobel03,LaHaye04,Naik06,Wei06,Jacobs07,Srinivasan07,Regal08,Lambert08} and references therein). Our approach for detecting quantum oscillations of NAMRs is based on: (i) recent theoretical proposals (e.g., Refs.~\cite{Gao}) to perform quantum measurement on a NAMR without using an external probe and (ii) experimental demonstrations (e.g., Refs.~\cite{ntt,nec}) of the couplings between superconducting quantum devices and the NAMRs. 
Instead of directly detecting a tiny displacement, we propose to indirectly observe quantum oscillations of the NAMR via {\em phonon blockade}, which is a purely quantum phenomenon. We assume that the phonon decay rate is much smaller than the phonon self-interaction strength. In such a case, we show that when the oscillations of the NAMR are in the nonclassical regime, the phonon excitation can be blockaded. In analogy to the photon (e.g., see Refs.~\cite{Imamoglu,Leonski94}) and Coulomb (e.g., see Ref.~\cite{Coulomb}) blockades, the main idea for the phonon blockade is that the {\em second phonon cannot be excited when there is one phonon in the nonlinear oscillator.} Therefore, by analyzing correlation spectra for the electromotive force generated between two ends of the NAMR, the phonon blockade can be distinguished from excitations of two or more phonons. \begin{figure}\label{fig1} \end{figure} An important ingredient for the realization of the phonon blockade is strong phonon {\it self-interaction}. To obtain such {\it nonlinear} phonon-phonon interaction, the NAMR is assumed to be coupled to a superconducting two-level system, which can be either a charge, flux, or phase qubit circuit~\cite{you1,you2,you3,you4}. By choosing appropriate parameters of two-level systems, a nonlinear phonon interaction can be induced. The interactions of each of these qubits~\cite{you1,you2,you3,you4} with NAMRs are very similar, e.g., the coupling constants are of the same order and the frequencies of these qubits are in the same GHz range. Therefore, in this paper, we only use charge qubits as an example to demonstrate our approach. However, this approach can also be applied to demonstrate the oscillation of the NAMR in the quantum regime when the NAMR is coupled to other superconducting qubits, e.g., phase or flux qubits. The paper is organized as follows. 
In Sec.~II, we describe the couplings between the superconducting qubits and the NAMR, and then study how the qubit induces the phonon-phonon interaction. In Sec.~III, we discuss how to characterize the quantum oscillation by using the Cahill-Glauber $s$-parametrized quasiprobability distributions for $s>0$, in contrast to the Wigner function (for $s=0$). In Sec.~IV, the basic principle of the phonon blockade is demonstrated, and we show that the phonon blockade can occur for different parameters. In Sec.~V, we study the measurement of the phonon blockade by using the correlation spectrum of the electromotive force between two ends of the NAMR. Finally, we summarize the main results of the paper in Sec.~VI. \section{Qubit-induced phonon-phonon interaction} Let us now focus on the coupling between a NAMR (with mass $m$ and length $L$) and a superconducting charge qubit (with Josephson energy $E_{J}$ and junction capacitance $C_{J}$). As schematically shown in Fig.~\ref{fig1}, a direct-current (d.c.) voltage $V_{g}$ and an a.c.~voltage $V_{g}(t)=V_{0}\cos(\omega_{1}t)$ are applied to the charge qubit (or Cooper pair box) through the gate capacitor $C_{g}$. The NAMR is coupled to the charge qubit by applying a static voltage $V_{x}$ through the capacitor $C(x)$, which depends on the displacement $x$ of the NAMR around its equilibrium position. A weak detecting current $I(t)=I_{0}\cos(\omega_{2}t)$ is applied to the NAMR, with its long axis perpendicular to the static magnetic field $B$. 
In the rotating wave approximation and neglecting two-phonon terms, the Hamiltonian $H=H^{(0)}+H^{({\rm d})}$ of the interaction system between the charge qubit and the NAMR can be described by~\cite{Gao}: \begin{eqnarray} H^{(0)}&=& \frac12 \hbar\omega_{0}\sigma_z+\hbar\omega a^{\dagger}a +\hbar g (a \sigma_{+}+a^{\dagger}\sigma_{-})\nonumber\\ &&\,+\hbar \Omega \left(\sigma_{+}e^{-i\omega_{1}t}+\sigma_{-}e^{i\omega_{1}t}\right)\,,\label{eq:1}\\ H^{({\rm d})}&=&\hbar\epsilon \left(a^{\dagger}e^{-i\omega_{2}t}+a e^{i\omega_{2}t}\right)\,. \label{eq:2} \end{eqnarray} Here, the frequency shift of the NAMR, due to its coupling to the charge qubit, has been neglected because it just renormalizes the NAMR frequency and will not affect the calculations below. This frequency shift is determined~\cite{Tian04} by the qubit-NAMR distance $l$, the charging energy $E_{c}=e^{2}/[2(C_{J}+C_{g}+C)]$, the mass $m$ and the frequency $\omega$ of the NAMR. It should be noted that below we consider a large detuning between the qubit and the NAMR, i.e., $(\omega_{0}-\omega)$ is several times larger (but not much larger) than the coupling constant $g$; thus, the rotating wave approximation can be applied. The effect of the counter-rotating terms on the results can also be calculated in the large detuning case~\cite{hanggi}. However, here we have neglected this effect because it only produces a small frequency shift and two-photon processes. The charge qubit, described by the spin operator $\sigma_{z}=|e\rangle\langle e|-|g\rangle\langle g|$, is assumed to be near the optimal point, i.e., $(C_{g}V_{g}+C V_{x})/2e\approx 0.5$ with $C=C(x=0)$, and thus $\omega_{0}\approx E_{J}/\hbar$. The qubit ground and excited states are denoted by $|g\rangle$ and $|e\rangle$, respectively. 
The operator $a$ ($a^{\dagger}$) denotes the annihilation (creation) operator of the NAMR with frequency $\omega$, which can be written as \begin{eqnarray} a&=& \sqrt{\frac{m\omega}{2\hbar}}\left(x+\frac{i}{m\omega}p\right),\\ a^{\dagger}&=& \sqrt{\frac{m\omega}{2\hbar}}\left(x-\frac{i}{m\omega}p\right) \end{eqnarray} with the momentum operator $p$ of the NAMR. The third term of Eq.~(\ref{eq:1}) represents the NAMR-qubit interaction with the strength \begin{eqnarray} g=\frac{4E_{c}N_{x} X_{0}}{d} \label{e1} \end{eqnarray} determined by the charging energy $E_{c}$, the effective Cooper pair number $N_{x}=CV_{x}/2e$, the distance $d$ between the NAMR and the superconducting qubit, and the NAMR amplitude $X_{0}=\sqrt{\hbar/2m\omega}$ of zero-point motion. Also, $\Omega$ is the Rabi frequency of the qubit driven by the classical field with frequency $\omega_{1}$. The parameter \begin{eqnarray} \epsilon=-BI_{0}L X_{0} \label{e2} \end{eqnarray} in Eq.~(\ref{eq:2}) describes the interaction strength between the NAMR and an external weak probe a.c.~current with frequency $\omega_{2}$. Hereafter, we assume that the resonant driving condition for the qubit is satisfied, i.e., $\omega_1=\omega_0$. The couplings between a phase qubit~\cite{Cleland04} (or a flux qubit~\cite{ntt}) and the NAMR also have the same form as that given in Eqs.~(\ref{eq:1}) and (\ref{eq:2}), except that all parameters of the Hamiltonian should be specified for the concrete systems. Thus, our discussions below can also be applied to those systems. The frequency of the NAMR is usually much lower than that of the qubit. 
If the Rabi frequency $\Omega$ satisfies the condition $\Omega \gg (g^2/\Delta)$ with the detuning $\Delta=\omega_{0}-\omega$ between the frequencies of the qubit and the NAMR, then in the rotating reference frame with $V=\exp(-i\omega_{0}\sigma_{z}t/2)$, Eq.~(\ref{eq:1}) is equivalent to an effective Hamiltonian \begin{equation} H_{\rm eff}^{(0)}=\hbar\omega a^{\dagger}a+\hbar \left[\frac{g^2}{\Delta} a^{\dagger}a -\kappa (a^{\dagger} a)^2 \right]\rho_{z} \end{equation} with the effective phonon self-interaction constant (nonlinearity constant) \begin{eqnarray} \kappa=\frac{g^4}{\Omega\Delta^2}\,. \label{e3} \end{eqnarray} Here, $\rho_{z}=|+\rangle\langle +|-|-\rangle\langle -|$ with the dressed qubit states $|\pm\rangle=(|g\rangle \pm |e\rangle)/\sqrt{2}$. Therefore, if the dressed charge qubit, which was theoretically proposed~\cite{liu06} and has been experimentally realized~\cite{delsing1,delsing2}, is always in its ground state $|-\rangle$, the effective Hamiltonian for the driven NAMR is \begin{equation} H_{\rm eff}=\hbar \left(\omega-\frac{g^2}{\Delta} \right)a^{\dagger}a +\hbar\kappa (a^{\dagger}a)^2+\hbar\epsilon (a^{\dagger}{\rm e}^{-i\omega_{2} t}+a{\rm e}^{i\omega_{2} t}).\label{eq:4-1} \end{equation} The nonlinear Hamiltonian of the driven NAMR in Eq.~(\ref{eq:4-1}) can also be directly obtained when the driving field is strong; however, here we only consider a weak probe current. Thus, the coupling of the NAMR to a controllable superconducting two-level system is necessary for inducing phonon-phonon interactions. 
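To make the blockade mechanism behind Eq.~(\ref{eq:4-1}) concrete: without the drive, the energy levels are $E_{n}=\hbar(\omega-g^{2}/\Delta)n+\hbar\kappa n^{2}$, so adding a second phonon costs $2\hbar\kappa$ more than the first. A short numerical check of this anharmonicity (the parameter values below are our own illustrative choices, not taken from the text):

```python
import numpy as np

# Illustrative parameters in units with hbar = 1 (our choices, not the paper's).
omega_eff = 1.0   # omega - g**2 / Delta
kappa = 0.1       # induced nonlinearity, kappa = g**4 / (Omega * Delta**2)

n = np.arange(6)                      # truncated Fock basis |0>, ..., |5>
E = omega_eff * n + kappa * n**2      # spectrum of the undriven H_eff

gap01 = E[1] - E[0]                   # energy of the 0 -> 1 transition
gap12 = E[2] - E[1]                   # energy of the 1 -> 2 transition
anharmonicity = gap12 - gap01         # equals 2 * kappa

# A drive resonant with 0 -> 1 is detuned by 2*kappa from 1 -> 2,
# so the second phonon cannot be excited when kappa dominates the drive.
```

This spectral gap is exactly what is exploited in the phonon-blockade discussion below: the drive can be tuned to the $0\to1$ transition while remaining far off-resonant from $1\to2$.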
\begin{figure} \caption{(Color online) Quasidistribution functions for the NAMR steady state obtained by solving master equation (\ref{N03}) for $\epsilon=3\gamma, \kappa=30\gamma$, and $\bar{n}=0.01$ with $\gamma$ as units: (a) Wigner function $W^{(0)}(x,y)$, which is non-negative in the whole phase space, and (b) 1/2-parametrized quasi-probability distribution (QPD) $W^{(1/2)}(x,y)$, which is negative for $\alpha=x+iy$ close to zero indicating nonclassicality of the NAMR state. The figures show the bottom of the functions.} \label{fig2} \end{figure} \section{Quantum behavior described by quasiprobability distributions} Decoherence imposes strict conditions on the observation of quantum behavior in a NAMR. To demonstrate the effects of the environment on the NAMR, let us now assume that the NAMR is coupled to a thermal reservoir. Under the Markov approximation, the evolution of the reduced density operator $\rho$ for the NAMR can be described by the master equation~\cite{Carmichael}: \begin{eqnarray} \frac{\partial }{\partial t}{\rho} &=&-\frac{i}{\hbar}[H_{\rm eff},{\rho} ] +\frac{\gamma }{2}\bar{n}(2 a^{\dag}\rho a-a a^{\dag}\rho-\rho a a^{\dag}) \nonumber \\ && +\frac{\gamma }{2}(\bar{n}+1)(2a\rho a^{\dag}-a^{\dag}a\rho-\rho a^{\dag}a). \label{N03} \end{eqnarray} In Eq.~(\ref{N03}), $\gamma $ is the damping rate and $\bar{n}=\{\exp [\hbar \omega /(k_{B}T)]-1\}^{-1}$ is the mean number of thermal phonons, where $k_{B}$ is the Boltzmann constant, and $T$ is the reservoir temperature at thermal equilibrium. Equation~(\ref{N03}) can be solved, e.g., by applying the Monte Carlo wave function simulation~\cite{Carmichael,Dalibard,Tan} and introducing the collapse operators \begin{eqnarray} C_1=\sqrt{\gamma(\bar n+1)}a\,,\quad C_2=\sqrt{\gamma\bar n}a^\dagger\,. \label{N03a} \end{eqnarray} We now study the steady-state solution, which is independent of the initial states. 
For the system without a drive, the time evolution ends in a state without phonons (vacuum state) at zero temperature. For a driven system, however, the asymptotic state is neither the vacuum nor a pure state, even at zero temperature, and can have intriguing noise properties. A state is considered to be {\em nonclassical} if its Glauber-Sudarshan $P$ function cannot be interpreted as a probability density, i.e., it is negative or more singular than Dirac's $\delta$ function. Due to such singularities, the $P$ function is usually hard to visualize. To characterize the nonclassical behavior of the NAMR states generated in our system, we consider the Cahill-Glauber $s$-parametrized quasiprobability distribution (QPD) functions~\cite{Cahill69}: \begin{eqnarray} {\cal W}^{(s)}(\alpha) &=& \frac{1}{\pi} \,{\rm Tr}\,[ {\rho}\, {T}^{(s)}(\alpha)]\,, \label{N09} \end{eqnarray} where \begin{equation} {T}^{(s)}(\alpha) = \frac1{\pi} \int \exp(\alpha\xi^*-\alpha^*\xi) {D}^{(s)}(\xi) \,{\rm d}^2 \xi\,, \end{equation} and \begin{equation} {D}^{(s)}(\xi) = \exp\left(s\frac{|\xi|^2}{2}\right) {D}(\xi)\; , \end{equation} with \begin{equation} {D}(\xi)=\exp\left(\xi a^\dagger-\xi^{*}a\right)\; , \end{equation} being the displacement operator. The QPD is defined for $-1 \le s\le 1$, which in special cases reduces to the $P$ function (for $s=1$), the Wigner function (for $s=0$), and the Husimi $Q$ function (for $s=-1$). QPDs contain the full information about states. Let us analyze the differences between the 1/2-parametrized QPD and the Wigner function under resonant driving of the NAMR with $\omega_{2}=\omega-(g^2/\Delta)$. As an example, in Fig.~\ref{fig2}, we plotted the steady-state Wigner function and the 1/2-parametrized QPD, which are the numerical solutions of the master equation for the set of parameters $\bar{n}=0.01$, $\epsilon=3\gamma$, and $\kappa=30\gamma$ in units of $\gamma$. Fig.~\ref{fig2}(a) shows the non-negative Wigner function of the steady state of the NAMR for these parameters. 
It can also be shown analytically that the steady-state Wigner function for this system is always non-negative. However, the plot for the QPD function ${\cal W}^{(1/2)}(\alpha)$ in Fig.~\ref{fig2}(b), with the same parameters as for Fig.~\ref{fig2}(a), clearly shows negative values, corresponding to a nonclassical state of the NAMR. Below, we will discuss how to demonstrate this nonclassicality of the NAMR via the phonon blockade. The Wigner function for the NAMR steady state is non-negative in the whole phase space. This is in contrast to the Wigner function for various nonclassical states, including Fock states or finite superpositions of coherent states (often referred to as Schr\"odinger cat states), which is negative in some areas of phase space. It should be noted that there are other well-known nonclassical states, including squeezed states, for which the Wigner function is non-negative, as for the NAMR steady state. In general, the negativity of the Wigner function is a sufficient but not a necessary condition for nonclassicality. The complete characterization of nonclassicality (the ``if and only if'' condition) is based on the positivity of the $P$-function. Unfortunately, this function is usually too singular to be presented graphically. The larger the parameter $s$, the more nonclassical states are revealed by the negativity of the $s$-parametrized QPD. In our case, to demonstrate the nonclassicality of the NAMR steady state, it was enough to calculate the $s$-parametrized QPD for $s=1/2$ but not for $s=0$. \begin{figure} \caption{(Color online) Probabilities $P_n=\langle n|\rho(t)|n\rangle$ of measuring $n$ phonons as a function of rescaled time, $\epsilon t$, assuming $\kappa=10\epsilon$ and: (a) no dissipation ($\gamma=0$) and (b) including dissipation with the same parameters as in Fig.~\ref{fig2}: $P_0$ (red curves), $P_1$ (blue), and $P_2$ (green). $F=P_0+P_1$ (thick black) describes the fidelity of the phonon blockade.
Additionally, the coherences $X={\rm Re}\langle 0|\rho|1 \rangle$ (magenta curves) and $Y={\rm Im}\langle 0|\rho|1\rangle$ (cyan) show that the steady states partially preserve coherence. } \label{fig3} \end{figure} \section{Phonon blockade} We now consider the case when the {\it phonon self-interaction} strength $\kappa$ is much larger than the phonon decay rate $\gamma$. When the oscillation of the NAMR is in the quantum regime, the phonon transmission {\it can be blockaded} in analogy to the single-photon blockade in a cavity~\cite{Imamoglu,Leonski94}. This is because the existence of the second phonon requires an additional energy $\hbar\kappa$. To demonstrate the phonon blockade, let us rewrite Eq.~(\ref{eq:4-1}) as \begin{equation} H_{\rm eff}=\hbar \bar \omega a^{\dagger}a +\hbar\kappa a^{\dagger}a(a^{\dagger}a-1)+\hbar\epsilon (a^{\dagger}{\rm e}^{-i\omega_{2} t}+a{\rm e}^{i\omega_{2} t})\label{eq:4} \end{equation} with a renormalized frequency \begin{eqnarray} \bar \omega=\omega +\kappa-\frac{g^2}{\Delta}. \label{eq:4a} \end{eqnarray} In the rotating reference frame defined by $V^{\prime}=\exp(-i\omega_{2} a^{\dagger}a \,t)$ with $\omega_{2}=\bar\omega$, the Hamiltonian in Eq.~(\ref{eq:4}) becomes \begin{equation} H_{\rm eff}=\hbar\kappa a^{\dagger}a(a^{\dagger}a-1)+\hbar\epsilon (a^{\dagger}+a).\label{eq:4-2} \end{equation} It is now easy to see that the two states $|0\rangle$ and $|1\rangle$ with zero eigenvalues are degenerate in the first term $\kappa a^{\dagger}a(a^{\dagger}a-1)$ of Eq.~(\ref{eq:4-2}). This degeneracy plays a crucial role in the phonon blockade. Indeed, if we assume that the interaction strength $\epsilon$ is much smaller than the nonlinearity constant $\kappa$ (i.e., $\epsilon \ll\kappa$), then the phonon eigenstates of the Hamiltonian in Eq.~(\ref{eq:4-2}) can become a superposition of only two states, $|0\rangle$ and $|1\rangle$, in the lowest-order approximation of the expansion in the strength $\epsilon$.
We now study the solution of the Hamiltonian in Eq.~(\ref{eq:4}) under the assumption of a weak driving current, i.e., $\epsilon\ll\kappa$. Using standard perturbation theory, the state governed by the time-dependent periodic Hamiltonian in Eq.~(\ref{eq:4}) with the initial condition $|\psi(t=0)\rangle=|0\rangle$ can be obtained by introducing the auxiliary operator \begin{equation} H_{F}=H_{\rm eff}-i\frac{\partial}{\partial t} \end{equation} based on the Floquet theory (e.g., see Ref.~\cite{Peskin}). The solution can be approximately given as \begin{equation} |\psi(t)\rangle = \cos(\epsilon t)|0\rangle -i \sin(\epsilon t)|1\rangle +{\cal O} (\epsilon^2). \label{N10} \end{equation} The solution~in Eq.~(\ref{N10}) shows that the number of phonons varies between $0$ and $1$ if all terms proportional to $\epsilon^2$ are neglected. In this small $\epsilon$ limit, the Floquet solution~(\ref{N10}) explicitly demonstrates the {\em phonon} blockade in analogy to the photon blockade~\cite{Imamoglu} or the Coulomb blockade~\cite{Coulomb}, i.e., there is only one-phonon excitation and the excitation with more than one phonon is negligibly small. The photon blockade is also referred to as the optical state truncation~\cite{Leonski94,Leonski01}. The phonon-blockaded state is nonclassical as it is a superposition of a finite number (practically two) of Fock states. Only (some) superpositions of an infinite number of Fock states can be considered classical. The time-dependent probabilities $P_n=\langle n|\rho(t)|n\rangle$ of measuring the $n$-phonon state with and without dissipation are numerically simulated using the Monte Carlo method. In the ideal non-dissipative case, as shown in Fig.~\ref{fig3}(a), the sum of the probabilities $P_{0}$ and $P_{1}$ with phonon numbers $0$ and $1$ is almost one, which means that phonon blockade occurs. 
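The two-level Floquet solution~(\ref{N10}) can be checked against exact (non-dissipative) evolution under the rotating-frame Hamiltonian of Eq.~(\ref{eq:4-2}). The sketch below (illustrative, with $\epsilon=1$ and $\kappa=10\epsilon$ as in Fig.~\ref{fig3}(a), in arbitrary units) diagonalizes $H$ on a truncated Fock basis and verifies that the population stays almost entirely in $\{|0\rangle,|1\rangle\}$ and that $P_1(t)$ follows the $\sin^2(\epsilon t)$ Rabi oscillation:

```python
import numpy as np

N = 10                      # Fock-space truncation
eps, kappa = 1.0, 10.0      # weak drive: eps << kappa, as in Fig. 3(a)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
num = ad @ a
H = kappa * (num @ (num - np.eye(N))) + eps * (ad + a)   # Eq. (eq:4-2)

w, V = np.linalg.eigh(H)    # exact evolution via diagonalization
psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0               # initial condition |psi(0)> = |0>

def evolve(t):
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

ts = np.linspace(0.0, 2 * np.pi / eps, 201)
P01, P1 = [], []
for t in ts:
    p = np.abs(evolve(t)) ** 2
    P01.append(p[0] + p[1])  # population inside the qubit subspace
    P1.append(p[1])          # single-phonon probability
P01 = np.array(P01)
P1 = np.array(P1)
# P01 stays near 1 (blockade) and P1 tracks sin^2(eps*t), Eq. (N10).
```

The small residual deviation from $\sin^2(\epsilon t)$ comes from the ${\cal O}(\epsilon^2)$ leakage to $|2\rangle$ neglected in Eq.~(\ref{N10}).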
For the dissipative case, Fig.~\ref{fig3}(b) shows the time evolutions of the elements $\langle m |\rho(t)|n \rangle $ (with $m,\,n=0,\,1$) for the same parameters as those in Fig.~\ref{fig2}. The amplitudes of $P_{0}$ and $P_{1}$ exhibit decaying oscillations; however, their sum is still near one and thus the sum of the other probabilities $P_{n}$ with $n> 1$ is near zero. Therefore, the phonon blockade occurs even in the long-time limit (e.g., steady state). The non-zero off-diagonal element $\langle 0|\rho(t)|1 \rangle$ shown in Fig.~\ref{fig3}(b) in the steady state means that the NAMR is in a nonclassical state, which is also consistent with the steady-state plot of the QPD in Fig.~\ref{fig2}(b). Thus, we see that a non-negative Wigner function does not imply that the state is classical. \begin{figure} \caption{(Color online) Probabilities $P_n$, fidelity $F$, and coherences $X$ and $Y$ for steady states as a function of (a) $\beta=\hbar \omega /(k_{B}T)$, assuming $\kappa=10\epsilon$, and of (b) $\kappa/\gamma$, assuming $\bar{n}=0.01$, which corresponds to $\beta\approx 4.6$. In both (a) and (b), we set $\gamma=1$. The other parameters are the same as in Fig.~\ref{fig3}.} \label{fig4} \end{figure} To study how the environmental temperature $T$ affects the phonon blockade, the probability distributions $P_{n}=\langle n|\rho(t)|n\rangle$ (for $n=0,\,1,\,2,\,3$) are plotted versus the rescaled inverse temperature $\hbar\omega/(k_{B}T)$ in Fig.~\ref{fig4}(a). It clearly shows that the phonon blockade cannot be achieved when the thermal energy is much larger than that of the NAMR. The $\kappa$-dependent matrix elements $\langle m|\rho(t)|n \rangle$ are plotted in Fig.~\ref{fig4}(b), which shows that a larger nonlinearity parameter $\kappa$ corresponds to a more effective phonon blockade. However, to observe the phonon blockade, it is enough to make $\kappa$ larger than a certain value.
For instance, if the ratio $\kappa/\gamma$ is larger than $10$, then the sum of the probabilities $P_{0}$ and $P_{1}$ is more than $0.95$, and the phonon blockade should occur. Let us make a few comments to clarify the relation between the phonon blockade and nonclassicality in terms of the $s$-parametrized QPDs: (i) If the $s$-parametrized QPD, for some $s\in(-1,1]$ and for a given state, is negative in some region of the phase space, then the state is nonclassical. (ii) Even if the phonon blockade is not observed (for a given choice of parameters $\epsilon$, $\kappa$, $\gamma$, and $\bar n$), the $1/2$-parametrized QPD (or the QPD for any $s>-1$) can still be nonpositive. (iii) Even if we choose the parameters such that the $1/2$-parametrized QPD is positive, this does not imply that the state is classical. (iv) Even if the phonon blockade does not appear, the state can be nonclassical as described by the nonpositive $P$-function (the QPD for $s=1$). A good blockade of phonons can be observed for nonclassical states only. However, a poor blockade of phonons does not imply that the state is classical. Similarly to other quantum effects like squeezing or antibunching: if a specific nonclassical effect is not exhibited by a given state, it does not imply that the state is classical. We can choose the parameters $\epsilon$, $\kappa$, $\gamma$, and $\bar n$ in order to observe a change (transition) from a nonpositive $1/2$-parametrized QPD to a positive function. However, this transition is not important in the context of nonclassicality. For various $s>-1$, one could observe such transitions for different parameters. Only the transition of the $P$-function corresponds to a transition from the quantum to the classical regime. As already mentioned, a good criterion of nonclassicality should be based on the $P$-function, but it is usually too singular to be presented graphically. Thus, we have chosen the QPD for another value of $s\in(0,1)$.
A nonclassicality criterion based on the QPD for $s=1/2$ is more sensitive than that based on the Wigner function (the QPD for $s=0$), but still it is not sensitive enough in the general case, i.e., there are nonclassical fields described by the positive $1/2$-parametrized QPD. \section{Proposed measurements of the phonon blockade} Let us now discuss how to measure the phonon blockade via the magnetomotive technique, which is one of the basic methods to detect the motion of NAMRs~\cite{small}. As shown in Fig.~\ref{fig1}, the induced electromotive force $V$ between the two ends of the NAMR is~\cite{Gao,small} \begin{equation} V=BL\frac{p}{m}=iBL\sqrt{\frac{\hbar \omega}{2m}}(a^{\dagger}-a)\,, \end{equation} which can be experimentally measured as discussed in Ref.~\cite{small}. Here, $p$ is the momentum of the NAMR's center of mass. We analyze the power spectrum \begin{equation} S_{V}(\omega^{\prime})=\int_{-\infty}^{\infty}\langle V(0)V(\tau)\rangle e^{-i\omega^{\prime}\tau}\, d\tau \end{equation} defined by the Fourier transform of the induced electromotive-force two-time correlation function \begin{equation} \langle V(0)V(\tau)\rangle\equiv \lim_{t\rightarrow\infty}\langle V(t)V(t+\tau)\rangle\,. \end{equation} This power spectrum can be measured effectively. The power spectrum $S_{V}(\omega^{\prime})$, obtained from the two-time correlation function $\langle V(0)V(\tau)\rangle$, is plotted for zero temperature with different decay rates $\gamma$ in Fig.~\ref{fig5}(a), and for a given decay rate with different temperatures $T$ (i.e., different thermal phonon numbers $\bar{n}$) in Fig.~\ref{fig5}(b). We find that low dissipation and low temperatures produce high spectral peaks, which enable an easier observation of the phonon blockade. Thus, the environment (or some background) will limit the usefulness of the power spectrum for observing the phonon blockade. When $\kappa$ is negligible compared with the decay rate $\gamma$, all spectral peaks disappear and there is no phonon blockade.
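The stationary correlation spectrum can be computed from the quantum regression theorem: for $\omega'\neq 0$, $S_{V}(\omega')=2\,{\rm Re}\,{\rm Tr}\!\left[V\left(-({\cal L}-i\omega')^{-1}\right)(\rho_{\rm ss}V)\right]$, where ${\cal L}$ is the Liouvillian. The sketch below (illustrative, not the code used for Fig.~\ref{fig5}; the prefactor $BL\sqrt{\hbar\omega/2m}$ is set to one) evaluates this on a truncated basis and locates the peak near $\omega'\approx 2\epsilon$ predicted by Eq.~(\ref{ps1}):

```python
import numpy as np

N = 10
gamma, eps, kappa, nbar = 0.5, 3.0, 30.0, 0.0   # cf. Fig. 5(a), blue curve

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
num = ad @ a
H = kappa * (num @ (num - np.eye(N))) + eps * (ad + a)
c_ops = [np.sqrt(gamma * (nbar + 1)) * a]
if nbar > 0:
    c_ops.append(np.sqrt(gamma * nbar) * ad)

# Liouvillian acting on the row-major-flattened density matrix
I = np.eye(N)
L = -1j * (np.kron(H, I) - np.kron(I, H.T))
for C in c_ops:
    CdC = C.conj().T @ C
    L += np.kron(C, C.conj()) - 0.5 * (np.kron(CdC, I) + np.kron(I, CdC.T))

# steady state: L rho = 0 with Tr(rho) = 1
M = np.vstack([L, I.reshape(1, -1)])
b = np.zeros(N * N + 1, dtype=complex)
b[-1] = 1.0
rho = np.linalg.lstsq(M, b, rcond=None)[0].reshape(N, N)

Vop = 1j * (ad - a)          # electromotive force, prefactor set to 1

def S(wp):
    """S_V(w') = 2 Re int_0^inf <V(0)V(tau)> e^{-i w' tau} d tau  (w' != 0),
    via the quantum regression theorem and the resolvent of L."""
    x = np.linalg.solve(L - 1j * wp * np.eye(N * N), -(rho @ Vop).flatten())
    return 2.0 * np.real(np.trace(Vop @ x.reshape(N, N)))

# Peak near w' = 2*eps = 6 predicted by Eq. (ps1):
peak = max(S(w) for w in np.linspace(5.0, 7.0, 81))
```

Scanning further out also reproduces the weak double peak near $\omega'\approx 2\kappa\pm\epsilon$ discussed below for imperfect blockade.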
By other numerical calculations, we also find that a large or giant nonlinearity $\kappa$ corresponds to sharp peaks and, in this case, the phonon blockade is also easier to observe. Assuming perfect phonon blockade, i.e., truncation to an exact qubit state, one can analyze the whole evolution of our system confined to a two-dimensional Hilbert space. To some extent this approximation can be applied in our model if the conditions $\epsilon \ll \kappa $ and $\bar{n} \approx 0$ are satisfied. Then, we find that the corresponding power spectrum should have at most three peaks at frequencies \begin{eqnarray} \omega'_{0}=0,\quad \omega'_{1,2}=\pm\frac{1}{4} \sqrt{(8\epsilon)^{2}-\gamma^{2} \left(1+2\bar{n}\right)^{2}}\approx \pm 2\epsilon. \label{ps1} \end{eqnarray} It is seen that these frequencies are independent of $\kappa$. A peak at $\omega'_{0}=0$ does not appear for real $\epsilon$, which is the case analyzed in the paper. Examples of such power spectra for $\omega'>0$ are shown in Fig. 5(a) for $\bar{n}=0$ and in Fig. 5(b) for $\bar{n}=0.01$ (blue curve). In contrast, new peaks appear in the spectrum in the case of imperfect phonon blockade. This can be understood by analyzing a Hilbert space of dimension $d>2$. For example, by analyzing the system evolution confined to a three-dimensional Hilbert space, we find that the spectrum can have at most seven peaks centered at \begin{equation} \omega ^{\prime }\approx 0,\pm 2\epsilon \left( 1-\delta \right) ,\pm \left[ 2\kappa \left( 1+6\delta\right) \pm \epsilon \left( 1-\delta\right) \right] , \label{ps2} \end{equation} where $\delta=\epsilon ^{2}/(8\kappa ^{2})$, which depend on $\kappa$, contrary to Eq.~(\ref{ps1}). Frequencies in Eq.~(\ref{ps2}) can be approximated as $\omega'\approx 0,\pm 2\epsilon,\pm (2\kappa\pm\epsilon)$. Thus, for $\omega'>0$, the first peak occurs at $2\epsilon$, which corresponds approximately to $\omega'_{1}$ given in Eq.~(\ref{ps1}).
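The quality of these approximations is easy to check numerically. A quick sanity check with the illustrative parameters of Fig.~5 ($\epsilon=3$, $\kappa=30$, $\gamma=0.5$, $\bar{n}=0$, in arbitrary units) shows that the exact frequency of Eq.~(\ref{ps1}) and the qutrit-level frequencies of Eq.~(\ref{ps2}) agree with their simple approximations to within about one percent:

```python
import math

eps, kappa, gamma, nbar = 3.0, 30.0, 0.5, 0.0

# Eq. (ps1): peak position at the qubit level of approximation
w1 = 0.25 * math.sqrt((8 * eps) ** 2 - gamma ** 2 * (1 + 2 * nbar) ** 2)

# Eq. (ps2): positive peak positions at the qutrit level, and their
# simple approximations 2*eps and 2*kappa +/- eps
delta = eps ** 2 / (8 * kappa ** 2)
qutrit = [2 * eps * (1 - delta),
          2 * kappa * (1 + 6 * delta) + eps * (1 - delta),
          2 * kappa * (1 + 6 * delta) - eps * (1 - delta)]
approx = [2 * eps, 2 * kappa + eps, 2 * kappa - eps]

print(w1)       # close to 2*eps = 6
print(qutrit)   # close to [6, 63, 57]
```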
The second characteristic double peak is at $2\kappa\pm\epsilon$, as seen in Fig. 5(b) for $\bar{n}=0.5$ (red) and $\bar{n}=1$ (black curves). Eq.~(\ref{ps2}) explains only the occurrence of the first three peaks for $\omega'>0$ in Fig. 5(b). To explain the appearance of the other two peaks at $\omega'\approx 4\kappa$ and $6\kappa$, one should analyze the evolution of our system confined to an (at least) four-dimensional Hilbert space. Thus, these extra peaks are a signature of an imperfect single-phonon blockade. The spectra are not symmetric in frequency around zero, $S_{V}(\omega^{\prime})\neq S_{V}(-\omega^{\prime})$. Nevertheless, we depicted only the positive-frequency half of the spectra in Fig. 5 to better compare the peaks for different values of $\bar{n}$. We note that a double peak is observed at negative frequencies $\omega'\approx - (2\kappa\pm\epsilon)$ even for the cases shown in Fig. 5(a). This means that the contribution of terms ${\cal O} (\epsilon^2)$ in Eq.~(\ref{N10}) is not negligible for the parameters chosen in Fig. 5(a), and the spectrum for $\omega'<0$ does not correspond to a (mathematically) perfect single-phonon blockade. In Fig. 5(b), the power spectra are plotted as a function of $\omega'/\kappa$. There, it is seen that the position of the first positive peak depends on the ratio $\epsilon/\kappa$ in agreement with Eq.~(\ref{ps2}). The center of this peak is closer to zero for a smaller ratio $\epsilon/\kappa$. However, the position of the center of the double peak (split peak) is approximately independent of $\epsilon$ and $\kappa$ (assuming that $\kappa\gg\epsilon$, so $\delta\approx 0$), which follows from Eq.~(\ref{ps2}). Moreover, the splitting vanishes with increasing $\gamma$. The smaller $\epsilon$, the smaller the value of $\gamma$ at which the splitting vanishes. In conclusion, the observation of extra peaks at frequencies different from those in Eq.~(\ref{ps1}) shows a deterioration of the single-phonon blockade.
The higher such peaks are, the worse the phonon blockade is. Note that the double peak at $\omega^{\prime}\approx 2\kappa\pm\epsilon$ was found assuming the output state to be a qutrit (three-dimensional) state. This double peak can, in general, be predicted for a qudit state, i.e., a $d$-dimensional state for $2<d\ll\infty$. This corresponds to a phonon-truncation up to state $|d-1\rangle$ and can be interpreted as a generalized multi-phonon blockade. Any qudit states are nonclassical since arbitrary finite superpositions of number states are nonclassical. However, with increasing dimension $d$ of qudit states it becomes more difficult to distinguish them from classical infinite-dimensional states generated in our system. For this reason, here we analyze the standard single-phonon blockade only. \begin{figure} \caption{(Color online) Power spectra $S_{V}(\omega^{\prime})$ for $\kappa=30,$ $\epsilon=3$ and: (a) $\bar n=0$ with $\gamma=0.5$ (blue), 1 (red), 1.5 (black), and (b) $\gamma=0.5$ with $\bar n=0.01$ (blue), 0.5 (red), 1 (black curves). Parameters $\kappa$, $\epsilon$ and $\gamma$ are in units of $g$ on the order of MHz.} \label{fig5} \end{figure} We now discuss the experimental feasibility of our proposal. In current experiments coupling a superconducting phase~\cite{Cleland04} (or charge~\cite{Tian04,Rabl04,Tian05} or flux~\cite{ntt}) qubit to the NAMR, the coupling constants are on the order of hundreds of MHz (e.g., $200$ MHz), the environmental temperature can reach several tens of millikelvin (e.g., $20$ mK), and the frequency of the NAMR can be in the GHz range (e.g., $1$ GHz). If the qubit frequency $\omega_{0}$ and the Rabi frequency $\Omega$ are taken as, e.g., $\omega_{0}=2$ GHz and $\Omega= 200$ MHz, then the nonlinear parameter is $\kappa=8$ MHz. The observation of the phonon blockade should be possible for a quality factor $Q$ larger than $10^3$, which is in the NAMR quality factor range $10^{3}\,\sim\,10^{6}$ of current experiments.
By engineering $\kappa$ as in Refs.~\cite{Jacobs09,Gheri,Rebic}, $\kappa$ can be made much larger than $8$ MHz, and then the phonon blockade should be easier to observe in our proposed system. \section{Conclusions} We have studied the quantum mechanics of the NAMR by coupling it to a superconducting two-level system. To demonstrate our approach, a classical driving microwave is applied to the qubit so that a dressed qubit is formed. If the Rabi frequency of the driving field is strong enough, then the nonlinear phonon interaction can be induced when the dressed qubit is in its ground state. We mention that dressed charge qubits have been experimentally realized~\cite{delsing1,delsing2}. The dressed phase (see, e.g., Refs.~\cite{Wei04,martinis1,martinis2}) and flux (see, e.g., Ref.~\cite{Liu05,ntt1}) qubits should also be experimentally realizable. The states of the nonlinear NAMR can be completely characterized by the Cahill-Glauber $s$-parametrized quasiprobability distribution (QPD). A state is considered to be nonclassical if it is described by a $P$-function (QPD for $s=1$) that cannot be interpreted as a probability density. As a drawback, the $P$-function is usually too singular to be presented graphically. Thus, other QPDs are often analyzed: if, for a given state, a QPD with $s>-1$ is negative in some regions of phase space, then the state is nonclassical. We have shown that the Wigner function (QPD for $s=0$) is always non-negative for the nonclassical steady states generated in our dissipative system. Thus, we have calculated the $1/2$-parametrized QPD, which is negative in some regions of phase space and clearly indicates the nonclassical character of the steady states generated in our NAMR system. Nevertheless, from an experimental point of view, the quantum-state tomography of the $1/2$-parametrized QPD is very challenging. Thus, we have proposed another experimentally-feasible test of nonclassicality: the phonon blockade.
We considered the case when the phonon self-interaction strength $\kappa$ significantly exceeds the phonon decay rate $\gamma$. We showed that when the NAMR oscillations are in the quantum regime, the phonon transmission can be blockaded in analogy to the single-photon blockade in a cavity~\cite{Imamoglu,Leonski94} or the Coulomb blockade for electrons~\cite{Coulomb}. We also showed that, when the phonon blockade occurs, the NAMR is in a nonclassical state even if its Wigner function is non-negative. Therefore, the nonclassicality of the NAMR can be demonstrated by the phonon blockade, instead of trying to detect the tiny displacements when the NAMR approaches the quantum limit. We further demonstrated that the phonon blockade can be experimentally observed by measuring the correlation spectrum of the electromotive force. All parameters in our approach are within current experimental regimes and, therefore, the quantum signature of the NAMR might be demonstrated in the near future by using this proposed approach. We have shown that the phonon blockade can be demonstrated by a qubit-induced nonlinear NAMR. However, the temperature of the environment, the decay rate of the NAMR, the driving current, and the nonlinear coupling constant $\kappa$ limit the measured power spectrum. To more efficiently observe the phonon blockade, the following conditions should also be satisfied: (i) the temperature should be low enough so that thermal excitations are negligibly small, i.e., the thermal energy is smaller than the oscillation energy of the NAMR; (ii) the quality factor of the NAMR should be high; (iii) the driving current through the NAMR should be very weak, so that the heating effect induced by the driving current can be neglected; (iv) a giant nonlinear constant $\kappa$ of the NAMR might be more useful for the phonon blockade, and this might be obtained using the approaches explored, e.g., in Refs.~\cite{Jacobs09,Gheri,Rebic}.
In our proposal, the larger coupling constant $g$ between the qubit and the NAMR corresponds to a larger $\kappa$, and the phonon blockade should be more easily observable for larger $\kappa$. Also the frequency of the NAMR should be large enough, so that the qubit and the NAMR are in the large detuning regime, but the detuning should not be extremely large. \begin{acknowledgments} FN acknowledges partial support from the Laboratory of Physical Sciences, National Security Agency, Army Research Office, National Science Foundation grant No. 0726909, JSPS-RFBR contract No. 09-02-92114, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and Funding Program for Innovative R\&D on S\&T (FIRST). YXL is supported by the National Natural Science Foundation of China under Nos.~10975080 and 60836001. YBG is supported by the NSFC Grant Nos.~10547101 and 10604002. CPS is supported by the NSFC Grant No.~10935010. AM acknowledges support from the Polish Ministry of Science and Higher Education under Grant No.~2619/B/H03/2010/38. JB was supported by the Czech Ministry of Education under Project No. MSM6198959213. \end{acknowledgments} \end{document}
September 1991, pages 179-268 pp 179-185 September 1991 Integrated luminosity distribution of Galactic open clusters B. C. Bhatt A. K. Pandey H. S. Mahra The integrated magnitudes of 221 Galactic open clusters have been used to derive the luminosity function. The completeness of the data has also been discussed. In the luminosity distribution the maximum frequency of clusters occurs near I(Mv) = −3m.5, and some plausible reasons for a sharp cut-off at I(Mv) = −2m.0 have been discussed. It is concluded that the paucity of the clusters fainter than I(Mv) = −2m.0 is not purely due to selection effects. The surface density of the clusters for different magnitude intervals has been obtained using the completeness radius estimated from the log N - log d plots. A relation between I(Mv) and surface density has been obtained which yields a steeper slope than that obtained by van den Bergh & Lafontaine (1984). Study of faint young open clusters as tracers of spiral features in our galaxy - Paper 5: NGC 2236 (OCl 501) G. S. D. Babu Continuing the study of faint young open clusters as tracers of spiral features in our Galaxy, photoelectric and photographic photometry of 39 stars was done in the field of the faint open cluster NGC 2236 ≡ OCl 501 in the direction of the Monoceros constellation. Out of these stars, a total of 22 down to mv ≃ 15.4 mag have been found to be probable members. There is apparently a variable extinction across the field of the cluster with E(B - V) ranging between 0.84 mag and 0.68 mag. The median age of this cluster is estimated to be 7.6 × 10^7 years and the cluster is thereby considered as belonging to the marginally old category. Thus, it cannot be specifically used as a spiral arm tracer in the study of our Galaxy. This cluster is located at a distance of 3.72 ± 0.13 kpc, which places it at the inner edge of the outer Perseus spiral feature of the Milky Way. Low-frequency observations of the Vela supernova remnant and their implications K. S.
Dwarakanath We have studied the Vela supernova remnant in the light of the 34.5 MHz observations made with the GEETEE low frequency array. The flux densities of Vela X and YZ at 34.5 MHz are estimated to be 1800 and 3900 Jy respectively. These values, along with those from earlier observations at higher frequencies, imply spectral indices (S ∝ ν^α) of −0.16 ± 0.02 for Vela X and −0.53 ± 0.03 for Vela YZ. This situation is further substantiated by the spectral-index distribution over the region obtained between 34.5 and 408 MHz. The spectral-index estimates, along with other known characteristics, strengthen the earlier hypothesis that Vela X is a plerion, while Vela YZ is a typical shell-type supernova remnant. We discuss the implications of this result. On the shell star Pleione (BU Tauri) D. K. Ojha S. C. Joshi BU Tauri (Pleione), an interesting star in the Pleiades cluster, has been observed spectrophotometrically. The energy distribution curves of the star have been discussed vis-à-vis model atmospheres for normal stars in the appropriate range of temperature and effective gravity. The changes in the energy distribution curve noticed during our observations and previous observations taken from the literature have been pointed out. On the basis of the measured Hα emission equivalent width, a rough estimate of the dimensions of the extended envelope of the star has been made. Spot modelling and elements of the RS CVn eclipsing binary WY Cancri P. Vivekananda Rao M. B. K. Sarma B. V. N. S. Prakash Rao Results of analysis of photoelectric observations of the RS CVn eclipsing binary WY Cancri in the standard passbands of UBV during 1973-74, 1976-79 and in UBVRI during 1984-86 are reported. A preliminary analysis of the eclipses suggested the primary eclipse to be a transit.
A study of the percentage contribution of the distortion wave amplitudes in all the colours with respect to the luminosities of both components showed the hotter component to be the source of the distortion wave. The clean (wave removed) light curves of different epochs have not merged, suggesting residual effects of spot activity. The reason for this is attributed to the presence of either (1) polar spots or (2) small spots uniformly distributed all over the surface of the hotter component. This additional variation is found to have a periodicity of about 50 years or more. The distortion waves in yellow colour are modelled according to Budding's (1977) method. For getting the best fit of the observations and theory, it was found necessary to assume three or four spots on the surface of the hot component. Out of these four spot groups, three are found to have direct motion with migration periods of 1.01, 1.01 and 2.51 years while the fourth one has a retrograde motion with a migration period of 3.01 years. From these periods and the latitudes of the spots derived from the model a co-rotating latitude of 4° is obtained. The temperatures of these spots are found to be lower than that of the photosphere by about 700°K to 800°K. Assuming the light curve of 1985-86, which is the brightest of all the observed seasons, to be least affected by the spots, the light curves of the other seasons are all brought up to the quadrature level of this season by applying suitable corrections. The merged curves in the UBVRI colours are analysed for the elements by the Wilson-Devinney method.
This analysis yielded the following absolute elements:$$\begin{gathered} m_h = 0.86 \pm 0.03{\text{ }}M_ \odot \hfill \\ m_c = 0.51 \pm 0.03{\text{ }}M_ \odot \hfill \\ R_h = 0.99 \pm 0.02{\text{ }}R_ \odot \hfill \\ R_c = 0.65 \pm 0.02{\text{ }}R_ \odot \hfill \\ T_h = 5520^o K \pm 100^o K \hfill \\ T_c = 3740^o K \pm 20^o K \hfill \\ M_{h(bol)} = 4\mathop .\limits^m 96 \pm 0.10 \hfill \\ M_{c(bol)} = 7\mathop .\limits^m 58 \pm 0.15 \hfill \\ Spectral type hotter component = G5 \pm 1 \hfill \\ cooler component = K9 \pm 1 \hfill \\ \end{gathered} $$ For a mass ratio of 0.506 and with the derived fractional radii rh = 0.241 and rc = 0.157, both the components are found to be within their Roche lobes. Hence we have classified WY Cnc as a detached system. From their positions on the HR diagram it is concluded that both the components of WY Cnc belong to the main sequence. Spectroscopic binaries near the north galactic pole - Paper 20: HD 111068 R. F. Griffin Photoelectric radial-velocity measurements show that HD 111068 is a spectroscopic binary with a period of 206 days. The primary star is probably about type K5 III; the secondary, only detected through the photometric compositeness of the system, may well be an F dwarf. The orbit is circular within observational uncertainty; it is near the upper limit of periods for which tidal circularization operates for giant stars. Journal of Astrophysics and Astronomy
Fredholm kernel

In mathematics, a Fredholm kernel is a certain type of kernel on a Banach space, associated with nuclear operators on the Banach space. They are an abstraction of the idea of the Fredholm integral equation and the Fredholm operator, and are one of the objects of study in Fredholm theory. Fredholm kernels are named in honour of Erik Ivar Fredholm. Much of the abstract theory of Fredholm kernels was developed by Alexander Grothendieck and published in 1955. Main article: Fredholm theory

Definition

Let B be an arbitrary Banach space, and let B* be its dual, that is, the space of bounded linear functionals on B. The tensor product $B^{*}\otimes B$ has a completion under the norm $\Vert X\Vert _{\pi }=\inf \sum _{\{i\}}\Vert e_{i}^{*}\Vert \Vert e_{i}\Vert $ where the infimum is taken over all finite representations $X=\sum _{\{i\}}e_{i}^{*}\otimes e_{i}\in B^{*}\otimes B$ The completion, under this norm, is often denoted as $B^{*}{\widehat {\,\otimes \,}}_{\pi }B$ and is called the projective topological tensor product. The elements of this space are called Fredholm kernels.

Properties

Every Fredholm kernel has a representation in the form $X=\sum _{\{i\}}\lambda _{i}e_{i}^{*}\otimes e_{i}$ with $e_{i}\in B$ and $e_{i}^{*}\in B^{*}$ such that $\Vert e_{i}\Vert =\Vert e_{i}^{*}\Vert =1$ and $\sum _{\{i\}}\vert \lambda _{i}\vert <\infty .\,$ Associated with each such kernel is a linear operator ${\mathcal {L}}_{X}:B\to B$ which has the canonical representation ${\mathcal {L}}_{X}f=\sum _{\{i\}}\lambda _{i}e_{i}^{*}(f)e_{i}.\,$ Associated with every Fredholm kernel is a trace, defined as ${\mbox{tr}}X=\sum _{\{i\}}\lambda _{i}e_{i}^{*}(e_{i}).\,$

p-summable kernels

A Fredholm kernel is said to be p-summable if $\sum _{\{i\}}\vert \lambda _{i}\vert ^{p}<\infty $ A Fredholm kernel is said to be of order q if q is the infimum of all $0<p\leq 1$ for which it is p-summable.
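In finite dimensions every operator is nuclear, so the canonical representation and the trace formula above can be illustrated directly. The sketch below (an illustrative finite-rank example with made-up vectors, not from the article) builds $L_X=\sum_i \lambda_i\, e_i^*\otimes e_i$ with absolutely summable coefficients and checks that ${\rm tr}\,X=\sum_i \lambda_i\, e_i^*(e_i)$ agrees with the matrix trace (and with the sum of eigenvalues, as in Grothendieck's theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 4                               # ambient dimension, number of terms

lam = 0.5 ** np.arange(1, r + 1)          # sum |lam_i| < inf: a 1-summable kernel
e = [v / np.linalg.norm(v) for v in rng.standard_normal((r, n))]   # e_i in B
f = [v / np.linalg.norm(v) for v in rng.standard_normal((r, n))]   # e_i* in B*

# L_X f = sum_i lam_i e_i*(f) e_i   <->   matrix  sum_i lam_i e_i f_i^T
LX = sum(l * np.outer(ei, fi) for l, ei, fi in zip(lam, e, f))

# trace formula: tr X = sum_i lam_i e_i*(e_i)
trX = sum(l * np.dot(fi, ei) for l, ei, fi in zip(lam, e, f))
```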
Nuclear operators on Banach spaces

An operator L : B→B is said to be a nuclear operator if there exists an X ∈ $B^{*}{\widehat {\,\otimes \,}}_{\pi }B$ such that L = LX. Such an operator is said to be p-summable and of order q if X is. In general, there may be more than one X associated with such a nuclear operator, and so the trace is not uniquely defined. However, if the order q ≤ 2/3, then there is a unique trace, as given by a theorem of Grothendieck.

Grothendieck's theorem

If ${\mathcal {L}}:B\to B$ is an operator of order $q\leq 2/3$ then a trace may be defined, with ${\mbox{Tr}}{\mathcal {L}}=\sum _{\{i\}}\rho _{i}$ where $\rho _{i}$ are the eigenvalues of ${\mathcal {L}}$. Furthermore, the Fredholm determinant $\det \left(1-z{\mathcal {L}}\right)=\prod _{i}\left(1-\rho _{i}z\right)$ is an entire function of z. The formula $\det \left(1-z{\mathcal {L}}\right)=\exp {\mbox{Tr}}\log \left(1-z{\mathcal {L}}\right)$ holds as well. Finally, if ${\mathcal {L}}$ is parameterized by some complex-valued parameter w, that is, ${\mathcal {L}}={\mathcal {L}}_{w}$, and the parameterization is holomorphic on some domain, then $\det \left(1-z{\mathcal {L}}_{w}\right)$ is holomorphic on the same domain.

Examples

An important example is the Banach space of holomorphic functions over a domain $D\subset \mathbb {C} ^{k}$. In this space, every nuclear operator is of order zero, and is thus of trace-class.

Nuclear spaces

The idea of a nuclear operator can be adapted to Fréchet spaces. A nuclear space is a Fréchet space where every bounded map of the space to an arbitrary Banach space is nuclear.

References

• Grothendieck A (1955). "Produits tensoriels topologiques et espaces nucléaires". Mem. Amer. Math. Soc. 16.
• Grothendieck A (1956). "La théorie de Fredholm". Bull. Soc. Math. France. 84: 319–84. doi:10.24033/bsmf.1476.
• B.V. Khvedelidze, G.L. Litvinov (2001) [1994], "Fredholm kernel", Encyclopedia of Mathematics, EMS Press
• Fréchet M (November 1932).
"On the Behavior of the nth Iterate of a Fredholm Kernel as n Becomes Infinite". Proc. Natl. Acad. Sci. U.S.A. 18 (11): 671–3. doi:10.1073/pnas.18.11.671. PMC 1076308. PMID 16577494. Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Reparable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology • Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons Topological tensor products and nuclear spaces Basic concepts • Auxiliary normed spaces • Nuclear space • Tensor product • Topological tensor product • of Hilbert spaces Topologies • Inductive tensor product • Injective tensor product • Projective tensor product Operators/Maps • Fredholm determinant • Fredholm kernel • Hilbert–Schmidt operator • Hypocontinuity • Integral • Nuclear • between Banach spaces • Trace class Theorems • Grothendieck trace theorem • Schwartz kernel theorem
\begin{document} \title{Pivots, Determinants, and Perfect Matchings of Graphs} \author{Robert Brijder\inst{1} \thanks{corresponding author: \email{[email protected]}} \and Tero Harju\inst{2} \and Hendrik Jan Hoogeboom\inst{1}} \institute{Leiden Institute of Advanced Computer Science, Universiteit Leiden,\\ Niels Bohrweg 1, 2333 CA Leiden, The Netherlands \and Department of Mathematics, University of Turku, FI-20014 Turku, Finland } \maketitle \begin{abstract} We give a characterization of the effect of sequences of pivot operations on a graph by relating it to determinants of adjacency matrices. This allows us to deduce that two sequences of pivot operations are equivalent iff they contain the same set $S$ of vertices (modulo two). Moreover, given a set of vertices $S$, we characterize whether or not such a sequence using precisely the vertices of $S$ exists. We also relate pivots to perfect matchings to obtain a graph-theoretical characterization. Finally, we consider graphs with self-loops to carry over the results to sequences containing both pivots and local complementation operations. \end{abstract} \section{Introduction} The operation of local complementation in an undirected graph takes the neighbourhood of a vertex in the graph and replaces that neighbourhood by its graph complement. The related operation of edge local complementation, here called pivoting, can be defined in terms of local complementation. It starts with an edge in the graph and toggles edges based on the way its endpoints are connected to the endpoints of the pivot-edge. The operations are connected in a natural way to overlap graphs (also called circle graphs \cite{gavril}). Given a finite set of chords of a circle, the overlap graph contains a vertex for each chord, and two vertices are connected if the corresponding chords cross. 
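As an illustration (not part of the paper's formal development), the overlap graph of a set of chords can be computed directly from a double-occurrence word: every symbol occurs exactly twice, and two symbols are adjacent exactly when their occurrences interleave. The following Python sketch makes that assumption explicit.

```python
# Illustrative sketch: the overlap (circle) graph of a double-occurrence
# word. Each symbol occurs exactly twice; two symbols are adjacent iff
# their occurrences interleave, as in  x .. y .. x .. y.

from itertools import combinations

def overlap_graph(word):
    positions = {}
    for index, symbol in enumerate(word):
        positions.setdefault(symbol, []).append(index)
    assert all(len(p) == 2 for p in positions.values()), \
        "expected a double-occurrence word"
    edges = set()
    for x, y in combinations(sorted(positions), 2):
        a, b = positions[x]
        c, d = positions[y]
        if a < c < b < d or c < a < d < b:  # the chords cross
            edges.add((x, y))
    return sorted(positions), edges

# 'a' and 'b' interleave in abab (crossing chords), but nest in abba
assert overlap_graph("abab")[1] == {("a", "b")}
assert overlap_graph("abba")[1] == set()
```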
Taking out a piece of the perimeter of the circle delimited by the two endpoints of a chord, and reinserting it in reverse, changes the way the chords intersect, and hence changes the associated overlap graph. The effect of this reversal on the overlap graph can be obtained by a local complementation on the vertex corresponding to the chord. Similarly, interchanging two pieces of the perimeter of the circle, each starting at the different endpoints of one common chord and ending at the endpoints of another, can be modelled by a pivot in the overlap graph. Overlap graphs naturally occur in theories of genetic rearrangements \cite{pevzner,ciliates}, but local complementation and edge local complementation operations are applied in many settings, like the relationships between Eulerian tours, the equivalence of certain codes \cite{danielsen}, the rank-width of graphs \cite{oum}, and quantum graph states \cite{nest}. In the present paper we are interested in sequences of pivots in arbitrary simple graphs. In defining a single pivot one usually distinguishes three disjoint neighbourhoods in the graph, and edges are updated according to the neighbourhoods to which the endpoints belong. Describing the effect of a sequence of pivot operations in terms of neighbourhood connections is involved -- the number of neighbourhoods to consider grows exponentially in the size of the sequence. It turns out that by considering determinants of adjacency matrices (in the spirit of \cite{geelen}) we can effectively describe the effect of sequences of pivot operations. Subsequently, we relate this to perfect matchings (a perfect matching is a set of edges that forms a partition of the set of vertices) to obtain a graph-theoretical characterization. A direct proof of the characterization in terms of perfect matchings is given in the appendix.
We obtain the surprising result that the connection between two vertices after a series of pivots directly depends on the number (modulo two) of perfect matchings in the subgraph induced by the two vertices and the vertices of the pivot-edges (with `multiplicity' if vertices occur more than once). As an immediate consequence we obtain that the result of a sequence of pivots, provided all pivot operations are defined, i.e., based on an edge in the graph to which they are applied, does not depend on the order of the pivots, but only on the vertices involved, counted modulo two. Also, we show that for any applicable sequence of pivots there exists an equivalent \emph{reduced} sequence where each node appears at most once in the sequence. Finally, we consider the case where graphs can have self-loops, and generalize the results for sequences of pivots to sequences having both local complementation operations and pivots. \section{Preliminaries} \label{sec:preliminaries} Usually we write $\pair xy$ for the pair $\{x,y\}$. We use $\oplus$ to denote both the logical exclusive-or as well as the related operation of symmetric set difference. The operation $\oplus$ is associative: the exclusive or over a sequence of Booleans is true iff an odd number of the arguments is true. Let $A$ be a $V\times V$ matrix. For a set $X\subseteq V$ we use $A\sub{X}$ to denote the submatrix induced by $X$, which keeps the rows and columns indexed by $X$. The determinant of $A$ is defined as $\det (A) = \sum_{\sigma\in \Pi(V)} \mathrm{sgn}(\sigma) \prod_{u\in V} a_{u,\sigma(u)}$, where $\Pi(V)$ is the set of permutations of $V$, and $\mathrm{sgn}(\sigma)$ is the sign (or parity) of the permutation, which is well defined after choosing an ordering on $V$. We will mainly consider the determinant over $GF(2)$, i.e., modulo 2, where the signs do not matter. The determinant of the empty matrix is considered to be $1$ (contributed by the empty permutation).
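For concreteness (this snippet is not part of the formal development), the determinant over $GF(2)$ can be computed by Gaussian elimination; modulo $2$ row swaps and signs are irrelevant, and the empty matrix has determinant $1$.

```python
# Illustrative sketch: determinant of a (0,1)-matrix over GF(2) by
# Gaussian elimination. Over GF(2) the result is either 0 or 1.

def det_gf2(matrix):
    M = [row[:] for row in matrix]  # work on a copy
    n = len(M)
    for col in range(n):
        # find a row with a 1 in this column, at or below the diagonal
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0  # no pivot available: the matrix is singular
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            if M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[col])]
    return 1

assert det_gf2([]) == 1                                   # empty matrix
assert det_gf2([[0, 1], [1, 0]]) == 1                     # a single edge
assert det_gf2([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) == 0    # triangle K_3
```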
\paragraph{Graphs.} The graphs we consider here are simple (undirected and without loops and parallel edges). For graph $G=(V,E)$ we use $V(G)$ and $E(G)$ to denote its set of vertices $V$ and set of edges $E$, respectively. We define $x \sim_G y$ if either $\pair xy \in E$ or $x=y$. For $X \subseteq V$, we denote the subgraph of $G$ induced by $X$ as $G\sub{X}$. Let $N_G(v) = \{w \in V \mid \pair vw \in E\}$ denote the neighbourhood of vertex $v$ in graph $G$. With a graph $G$ one associates its adjacency matrix $A(G)$, which is a $V\times V$ $(0,1)$-matrix $\left(a_{u,v}\right)$ with $a_{u,v} = 1$ iff $\pair uv \in E$. Obviously, for $X\subseteq V$, $A(G\sub{X}) = A(G)\sub{X}$. By the determinant of graph $G$, denoted $\det G$, we will mean the determinant $\det A(G)$ of its adjacency matrix, computed over $GF(2)$. \section{Pivot Operation} \label{sec_pivots} Let $G = (V,E)$ be a graph. The graph obtained by \emph{local complementation} at $u \in V$ on $G$, denoted by $G * u$, is the graph that is obtained from $G$ by complementing the edges in the neighbourhood $N_G(u)$ of $u$ in $G$. Using a logical expression we can write for $G*u$ the definition $\pair xy \in E(G*u)$ iff $(\pair xy\in E)\oplus (\pair xu\in E \land \pair yu\in E)$. For a vertex $x$ consider its closed neighbourhood $N'_G(x)= N_G(x)\cup \{x\} = \{ y\in V_G \mid x\sim_G y \}$. The edge $\pair uv$ partitions the vertices of $G$ connected to $u$ or $v$ into three sets $V_1 = N'_G(u) \setminus N'_G(v)$, $V_2 = N'_G(v) \setminus N'_G(u)$, $V_3 = N'_G(u) \cap N'_G(v)$. Note that $u,v \in V_3$. \begin{figure} \caption{Pivoting $\pair uv$. Connection $\pair xy$ is toggled if $x\in V_i$ and $y\in V_j$ with $i\neq j$. Note that $u$ and $v$ are connected to all vertices in $V_3$; these edges are omitted in the diagram. The operation does not affect edges adjacent to vertices outside the sets $V_1,V_2,V_3$.} \label{fig:pivot} \end{figure} Let $\pair uv \in E(G)$.
The graph obtained from $G$ by \emph{pivoting} $\pair uv$, denoted by $G[uv]$, is constructed by `toggling' all edges between different $V_i$ and $V_j$: for $\pair xy$ with $x\in V_i$ and $y\in V_j$ ($i\neq j$): $\pair xy \in E(G)$ iff $\pair xy \notin E(G[uv])$, see Figure~\ref{fig:pivot}. The remaining edges are unchanged.\footnote{Usually the definition of this operation additionally swaps the vertices $u$ and $v$. Here this is avoided by including $u$ and $v$ in the set $V_3$.} It turns out that $G[uv]$ equals $G *u*v*u = G *v*u*v$. \begin{Example}\label{ex:overlap} We start with six segments, of which the relative positions of the endpoints can be represented by the string $3\; 5\; 2\; 6\; 5\; 4\; 1\; 3\; 6\; 1\; 2\; 4 $. The `entanglement' of these segments can be represented by the overlap graph to the left in Figure~\ref{fig:overlap}. When we pivot on the edge $\pair 23$ we obtain the graph to the right. This new graph is the overlap graph of $\underline{3\; 6\; 1\; 2}\; 6\; 5\; 4\; 1\; \underline{3\; 5\; 2}\; 4 $. \begin{figure} \caption{A graph $G$ and its pivot $G[\pair 23]$, cf. Example~\ref{ex:overlap}.} \label{fig:overlap} \end{figure} \end{Example} In order to derive properties of pivoting in an algebraic way, rather than using combinatorial methods on graphs, Oum \cite{oum} shows that $G[uv]$ can be described using a logical formula. It turns out that the expression can be stated elegantly in terms of $\sim_G$ rather than in terms of $E(G)$. \begin{Lemma}\label{lem:oum} Let $G$ be a graph, and let $\pair uv\in E(G)$. Then $G[uv]$ is defined by the expression $$ x \sim_{G[uv]} y = x \sim_G y \oplus ((x \sim_G u) \wedge (y \sim_G v)) \oplus ((x \sim_G v) \wedge (y \sim_G u)).
$$ for all $x,y \in V(G)$.\qed \end{Lemma} \paragraph{Pivots and matrices.} In a 1997 paper \cite{geelen} on unimodular $(0,1)$-matrices, Geelen defines a general pivot operation on matrices that is defined for subsets of the indices (thus not only for edges), and which turns out to extend the classic pivot operation introduced above. Let $A$ be a $V$ by $V$ $(0,1)$-matrix, and let $X\subseteq V$ be such that $\det A\sub{X} \neq 0$. Then the \emph{pivot} of $A$ by $X$, denoted by $A*X$\footnote{The local complementation operation $G*u$ differs from $A*\{u\}$, where $A$ is the adjacency matrix; see Section~\ref{sec_self_loops}.}, is defined as follows. If $P = A\sub{X}$ and $A = \left( \begin{array}{c|c} P & Q \\ \hline R & S \end{array} \right)$, then $$ A*X = \left( \begin{array}{c|c} -P^{-1} & P^{-1} Q \\ \hline R P^{-1} & S - R P^{-1} Q \end{array} \right). $$ Based on a similar operation from \cite{tucker} (see also \cite[p.230]{cottle}), the following basic result can be obtained; see \cite[Theorem~2.1]{geelen} and \cite{geelenphd} for a full proof. \begin{Proposition}\label{prop:geelen} Let $A$ be a $V\times V$ matrix, and let $X\subseteq V$ be such that $\det A\sub{X} \neq 0$. Then, for $Y \subseteq V$, \[ \det (A*X)\sub{Y} = \pm \det A\sub{X \oplus Y} / \det A\sub{X} \]\qed \end{Proposition} We will apply this result to our (edge) pivots in graphs. Let $A$ be the adjacency matrix of graph $G$. We start by observing that for vertices $u\neq v$, $\pair uv$ is an edge in $G$ iff the submatrix $A\sub{\pair uv}$ is of the form $\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) $ or equivalently $\det G\sub{\pair uv} = 1$.
If $\pair uv$ is an edge in $G$, then (after rearranging rows and columns) $A$ can be written in the form \[ A = \left( \begin{array}{c|c|c} 0 & 1 & \chi_u^T \\\hline 1 & 0 & \chi_v^T \\\hline \chi_u & \chi_v & A\sub{V-u-v} \end{array} \right) \] where $\chi_u$ is the column vector belonging to $u$ without elements $a_{uu}$ and $a_{vu}$, and, for vector $x$, $x^T$ is the transpose of $x$. As $\det A\sub{\pair uv} \neq 0$, the pivot operation $A * \pair uv$ of \cite{geelen} is well defined. It equals the following matrix, which is in fact the adjacency matrix of $G[\pair uv]$: the component $(\chi_v\chi_u^T + \chi_u\chi_v^T)$ in the matrix has the same functionality as the expression $((x \sim_G u) \wedge (y \sim_G v)) \oplus ((x \sim_G v) \wedge (y \sim_G u))$ from the characterization of Oum, Lemma~\ref{lem:oum}. \[ A * \pair uv = \left( \begin{array}{c|c|c} 0 & 1 & \chi_v^T \\\hline 1 & 0 & \chi_u^T \\\hline \chi_v & \chi_u & A\sub{V-u-v} - (\chi_v\chi_u^T + \chi_u\chi_v^T) \end{array} \right) \] We now rephrase the result cited from \cite{geelen}, Proposition~\ref{prop:geelen} above, for pivots in graphs (and where the computations are over $GF(2)$). It will be the main tool in our paper. \begin{Theorem}\label{thm:geelen} Let $G$ be a graph, and let $\pair uv \in E(G)$. Then, for $Y \subseteq V(G)$, \[ \det ((G[\pair uv])\sub{Y}) = \det (G\sub{Y \oplus \{u,v\}}) \]\qed \end{Theorem} It is noted in Little~\cite{little} that over $GF(2)$ the equation $\det (G) = 0$ has a graph interpretation: $\det (G) = 0$ iff there exists a non-empty set $S \subseteq V(G)$ such that every $v \in V(G)$ is adjacent to an even number of vertices in $S$. Indeed, $S$ represents a linearly dependent set of rows modulo $2$. Finally note that $x \sim_G y$ iff $\det (G\sub{\{x\}\oplus\{y\}}) = 1$. Indeed, if $x=y$, then $\det (G\sub{\emptyset}) = 1$, and if $x \not= y$, then $\pair xy$ is an edge iff $\det (G\sub{\pair xy}) = 1$.
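Theorem~\ref{thm:geelen} is easy to check mechanically on small examples. The following Python sketch (illustrative only, not part of the paper) implements the pivot via the formula of Lemma~\ref{lem:oum} and compares determinants over $GF(2)$ for every subset $Y$ on the path $1$--$2$--$3$.

```python
# Illustrative check of Theorem thm:geelen on the path 1-2-3:
# det((G[uv])|Y) = det(G|(Y xor {u,v})) over GF(2), for every Y.

from itertools import combinations

def det_gf2(rows):
    M = [row[:] for row in rows]
    n = len(M)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c]), None)
        if p is None:
            return 0
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            if M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[c])]
    return 1

def sim(E, x, y):  # x ~_G y: equal or adjacent
    return x == y or frozenset((x, y)) in E

def pivot(V, E, u, v):  # the pivot G[uv] via Oum's formula (Lemma lem:oum)
    assert frozenset((u, v)) in E
    return {frozenset((x, y)) for x, y in combinations(sorted(V), 2)
            if sim(E, x, y)
            ^ (sim(E, x, u) and sim(E, y, v))
            ^ (sim(E, x, v) and sim(E, y, u))}

def sub_det(E, Y):  # determinant of the induced subgraph, over GF(2)
    Y = sorted(Y)
    return det_gf2([[1 if frozenset((a, b)) in E else 0 for b in Y]
                    for a in Y])

V = [1, 2, 3]
E = {frozenset((1, 2)), frozenset((2, 3))}  # the path 1 - 2 - 3
P = pivot(V, E, 1, 2)
for k in range(4):
    for Y in combinations(V, k):
        assert sub_det(P, set(Y)) == sub_det(E, set(Y) ^ {1, 2})
```

Pivoting $[12]$ on the path removes the edge $\pair 23$ and adds $\pair 13$, and the determinant identity holds for all eight subsets $Y$.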
\section{Sequences of Pivots} \label{sec_seq_pivots} In this section we study series of pivots that are applied consecutively to a graph. It is shown that using determinants there is an elegant formula that describes whether a certain pair of vertices is adjacent in the resulting graph. From this result we then conclude that the effect of a sequence of pivots only depends on the vertices involved, and not on the order of the operations. Without determinants, using combinatorial arguments on graphs, this result seems hard to obtain. A sequence of pivoting operations $\varphi = [v_1v_2] [v_3v_4] \cdots [v_{n-1}v_n]$ is \emph{applicable} if each pair $[v_{i}v_{i+1}]$ in the sequence corresponds to an edge $\pair {v_{i}}{v_{i+1}}$ in the graph obtained at the time of application. For such a sequence we define $\sup(\varphi) = \bigoplus_i \{v_i\}$, the set of vertices that occur an odd number of times in the sequence of operations. This is called the \emph{support} of $\varphi$. Note that the support always contains an even number of vertices. Using the correspondence between pivot operations and determinants of submatrices, we can formulate a condition that specifies the edges present in a graph after a sequence of pivots. \begin{Theorem}\label{thm:iterate} Let $\varphi$ be an applicable sequence of pivoting operations for $G$, and let $S = \sup(\varphi)$. Then $\det(G\varphi\sub{\pair xy}) = \det(G\sub{S \oplus \{x,y\}})$ for $x,y \in V(G)$, $x\neq y$. Consequently, $\pair xy \in E(G\varphi)$ iff this value equals $1$. \end{Theorem} \begin{Proof} We prove the equality in the statement by induction on the number of pivot operations. When $\varphi$ is the empty sequence, we read the identity $\det G\sub{\pair xy} = \det G\sub{\varnothing \oplus \{x,y\}}$. So assume $\varphi = [\pair uv] \varphi' $. Let $S = \sup(\varphi)$, then $S' = S \oplus \{u,v\}$ is the support of $\varphi'$.
We apply the induction hypothesis to the applicable sequence $\varphi'$ in the graph $G[\pair uv]$. Then $\det G\varphi\sub{\pair xy} = \det G[\pair uv]\varphi' \sub{\pair xy} = \det G[\pair uv] \sub{S' \oplus \{x,y\}}$. Now we can apply Theorem~\ref{thm:geelen} to obtain $\det G \sub{S' \oplus \{x,y\} \oplus \{u,v\}}$, which obviously equals $\det G\sub{S \oplus \{x,y\}}$. \end{Proof} We now have the following surprising direct consequence of the previous theorem. \begin{Theorem}\label{thm:equaldom} If $\varphi$ and $\varphi'$ are applicable sequences of pivoting operations for $G$, then $\sup(\varphi) = \sup(\varphi')$ implies $G\varphi = G\varphi'$. \end{Theorem} As a consequence, when calculating the orbit of graphs under the pivot operation, as done in \cite{danielsen}, we need not consider every sequence -- only those that have different support. The next lemma shows, as a direct corollary to Theorem~\ref{thm:iterate}, that the vertices of the support of an applicable sequence $\varphi$ induce a subgraph that has a nonzero determinant. \begin{Lemma}\label{lem:applic=>odd} Let $\varphi$ be a sequence of pivots applicable in graph $G$, and let $S= \sup(\varphi)$. Then $\det G\sub{S} = 1$. \end{Lemma} \begin{Proof} If $S$ is empty, then indeed $\det G\sub{\varnothing} = 1$. Now let $S$ (and $\varphi$) be non-empty. Let $\varphi = \varphi' [uv]$, so $S = \sup \varphi' \oplus \{u,v\}$. As $\varphi$ is applicable, $\pair uv$ must be an edge in $G\varphi'$. By Theorem~\ref{thm:iterate}, $\det G\varphi'\sub{\pair uv} = \det G\sub{S} =1$. \end{Proof} Two special cases of Theorem~\ref{thm:equaldom} are known from the literature: the triangle equality (involving three vertices) and commutativity (involving four vertices). The \emph{triangle equality} is a classic result in the theory of pivots. Arratia et al. give a proof \cite[Lemma~10]{arratia} involving certain graphs with 11 vertices.
Independently Genest obtains this result in his Thesis \cite[Proposition~1.3.5]{genest}. The cited work of Oum \cite[Proposition~2.5]{oum} contains a proof which applies Lemma~\ref{lem:oum}. \begin{Corollary}\label{cor:triangle} Let $u,v,w$ be three distinct vertices in graph $G$ such that $\pair uv$ and $\pair uw$ are edges. Then $G[uv][vw] = G[uw]$. \end{Corollary} \begin{Proof} Note that $\pair vw$ is an edge in $G[uv]$ iff $\det G\sub{ \{v,w\}\oplus\{u,v\} } = \det G\sub{ \{w,u\} } = 1$. The latter holds iff $\pair uw$ is an edge in $G$. Hence the pivots on both sides are applicable, and the result follows from Theorem~\ref{thm:equaldom}. \end{Proof} Another result that fits in our framework is the commutativity of pivots on disjoint sets of nodes. It was obtained by Harju et al. \cite{harju:parallelism} (see also \cite{note}) studying graph operations modelled after gene rearrangements in organisms called ciliates. The property states that two disjoint pivots $[uv]$ and $[wz]$, when applicable in either order, have a result independent of the order in which they are applied. The next lemma is also proved in Corollary~7 of \cite{nest} using linear fractional transformations. Essentially, it states that `twins' stay `twins' after pivoting. Here we obtain it as a consequence of Theorem~\ref{thm:iterate}. \begin{Lemma} Let $v,v'$ be vertices in graph $G$ such that $v \sim_G x$ iff $v' \sim_G x$ for each vertex $x$. Then for each applicable sequence $\varphi$ of pivots, $v \sim_{G\varphi} x$ iff $v' \sim_{G\varphi} x$ for each vertex $x$. \end{Lemma} \begin{Proof} Let $S = \sup(\varphi)$. We have $v \sim_{G\varphi} x$ iff $\det (G\varphi\sub{\{v\}\oplus\{x\}}) = 1$ iff $\det (G\sub{S \oplus \{v\}\oplus\{x\}}) = 1$ iff $\det (G\sub{S \oplus \{v\}\oplus\{x\}\oplus\{v\}\oplus\{v'\}}) = 1$ (since $v \sim_G x$ iff $v' \sim_G x$ for each vertex $x$) iff $\det (G\sub{S \oplus \{v'\}\oplus\{x\}}) = 1$ iff $\det (G\varphi\sub{\{v'\}\oplus\{x\}}) = 1$ iff $v' \sim_{G\varphi} x$.
\end{Proof} \section{Pivots and Perfect Matchings} There is a direct correspondence between (the parity of) the determinant of a graph and (the parity of) the number of perfect matchings in that graph. This correspondence is explained in a paper by Little \cite{little}, which we essentially follow below. We include it in our presentation because it allows us to reformulate some results in terms of a property of the graph itself, rather than a property of the associated adjacency matrix. We also give an application, illustrating that the link to perfect matchings adds some intuition to results from the literature. We say that a partition $P$ of $V$ is a \emph{pairing of $V$} if it consists of sets of cardinality two. Let $\mathrm{pair}(V)$ be the set of pairings of $V$. A \emph{perfect matching} in $G$ is a pairing $P$ of $V(G)$ such that $P \subseteq E(G)$. Let $\mathrm{pm}(G)$ be the number of perfect matchings of $G$, modulo 2. For a $V \times V$ matrix $A$, the \emph{Pfaffian} of $A$, denoted by $\mbox{Pf}(A)$, is defined as $ \sum_{P \in \mathrm{pair}(V)} \mathrm{sgn}(P) \prod_{\pair{x}{y} \in P} a_{x,y} $ where $\mathrm{sgn}(P)$ is the sign of a permutation on the vertices associated with the pairing. As with the determinants, we apply this notion only for adjacency matrices of graphs over $GF(2)$, which means $\mathrm{sgn}(P)$ can be dropped from the formula. If we evaluate this expression for the adjacency matrix $A$ of a graph $G$ then we obtain the parity of the number of perfect matchings of $G$: the formula determines, for each pairing of $V(G)$, whether or not it is a perfect matching. For skew-symmetric matrices (where $a_{u,v} = -a_{v,u}$ for all $u,v$) it is known that $\mbox{Pf}(A)^2 = \det(A)$. However, over $GF(2)$ every symmetric matrix is skew-symmetric, and also the square can be dropped without changing the value. Thus, for a graph $G$ we know that $\det(G) = \mathrm{pm}(G)$.
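The correspondence $\det(G) = \mathrm{pm}(G)$ over $GF(2)$ can be verified by brute force on small graphs. The Python sketch below (illustrative and exponential-time, not part of the paper) counts perfect matchings by recursion on the first vertex and compares the parity with the determinant.

```python
# Illustrative brute-force check of det(G) = pm(G) over GF(2).

def det_gf2(rows):
    M = [row[:] for row in rows]
    n = len(M)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c]), None)
        if p is None:
            return 0
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            if M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[c])]
    return 1

def pm_parity(vertices, edges):
    """Parity of the number of perfect matchings: match the first
    vertex with each of its neighbours and recurse on the rest."""
    if not vertices:
        return 1  # the empty graph has exactly one (empty) matching
    v, rest = vertices[0], vertices[1:]
    parity = 0
    for w in rest:
        if frozenset((v, w)) in edges:
            parity ^= pm_parity([x for x in rest if x != w], edges)
    return parity

def adjacency(vertices, edges):
    return [[1 if frozenset((a, b)) in edges else 0 for b in vertices]
            for a in vertices]

V = [1, 2, 3, 4]
C4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}  # two matchings
P4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}          # one matching
assert pm_parity(V, C4) == 0 == det_gf2(adjacency(V, C4))
assert pm_parity(V, P4) == 1 == det_gf2(adjacency(V, P4))
```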
If we rephrase Theorem~\ref{thm:iterate} we obtain an elegant characterization of the edges after pivoting. \begin{Theorem}\label{thm:again} Let $\varphi$ be an applicable sequence of pivoting operations for $G$, and let $S = \sup(\varphi)$. Then, for vertices $x,y$ with $x\neq y$, $\pair xy \in E(G\varphi)$ iff $\mathrm{pm}( G\sub{S \oplus \{x,y\} } ) = 1$. \end{Theorem} For small graphs the number of perfect matchings might be easier to determine than the determinant. For instance, for a graph $G$ on four vertices there are only three pairings of the vertices, each consisting of two potential edges, that can contribute to the value $\mathrm{pm}(G)$. \centerline{\begin{picture}(80,20) \gasset{AHnb=0,Nw=1.5,Nh=1.5,Nframe=n,Nfill=y} \gasset{ExtNL=y,NLdist=1.5,NLangle=90} \node(u1)(05,15){} \node(u2)(15,15){} \node(v1)(05,05){} \node(v2)(15,05){} \drawedge(u1,u2){} \drawedge(v1,v2){} \node(u1)(35,15){} \node(u2)(45,15){} \node(v1)(35,05){} \node(v2)(45,05){} \drawedge(u2,v2){} \drawedge(v1,u1){} \node(u1)(65,15){} \node(u2)(75,15){} \node(v1)(65,05){} \node(v2)(75,05){} \drawedge(u2,v1){} \drawedge(v2,u1){} \end{picture} } A commutativity result is obtained in \cite[Theorem~6.1(iii)]{harju:parallelism}. Assume $\pair uv$ and $\pair zw$ are edges in $G$ on four different vertices $u,v,w,z$. Then both $[uv][wz]$ and $[wz][uv]$ are applicable iff the induced subgraph $G\sub{\{u,v,w,z\}}$ is not isomorphic to $C_4$ or $D_4$. Its proof in \cite{harju:parallelism} is not difficult; a simple case analysis suffices. Here we note that $\pair uv$ and $\pair wz$ must be edges in order for $[uv]$ and $[wz]$ to be applicable. Both $[uv][wz]$ and $[wz][uv]$ are applicable iff $\mathrm{pm}(G\sub{\{u,v,w,z\}})=1$. Thus the subgraph $G\sub{\{u,v,w,z\}}$ must contain either one or three perfect matchings, where the first $\{\pair uv, \pair wz\}$ is given. Two perfect matchings occur precisely when the subgraph is isomorphic to $C_4$ or $D_4$.
\centerline{\begin{picture}(50,20) \gasset{AHnb=0,Nw=1.5,Nh=1.5,Nframe=n,Nfill=y} \gasset{ExtNL=y,NLdist=1.5,NLangle=90} \node(u1)(05,15){} \node(u2)(15,15){} \node(v1)(05,05){} \node(v2)(15,05){} \drawedge(u1,u2){} \drawedge(u2,v2){} \drawedge(v2,v1){} \drawedge(v1,u1){} \put(18,05){$C_4$} \node(u1)(35,15){} \node(u2)(45,15){} \node(v1)(35,05){} \node(v2)(45,05){} \drawedge(u1,u2){} \drawedge(u2,v2){} \drawedge(v2,v1){} \drawedge(v1,u1){} \drawedge(v1,u2){} \put(48,05){$D_4$} \end{picture} } As was noted below Theorem~\ref{thm:geelen}, we may also look for a non-empty set $S$ such that every $v \in V(G)$ is adjacent to an even number of vertices of $S$. E.g., for $D_4$ we can take $S$ to be the set of the two vertices that are not connected by an edge. \section{Reduced Sequences} We have seen that if we have an applicable sequence of pivots, then the result of that series of operations only depends on the support, the set of vertices occurring an odd number of times as a pivot-vertex. This does not automatically mean that the sequence can be reduced to an equivalent sequence in which each vertex occurs only once. This is because one needs to verify that all the operations are applicable, i.e., that all pivot-pairs are edges in the graph to which they are applied. We call a sequence of pivots \emph{reduced} \cite{genest} if no vertex occurs more than once in the pivots. It turns out that we can use a greedy strategy to reduce a sequence of pivot operations, growing a sequence with given support. Let $G$ be a graph, and let $S\subseteq V(G)$ be the support of an applicable sequence of pivots for $G$. We will construct a reduced sequence of pivots with support $S$. Obviously we may assume that $S$ is non-empty. Observe that since $\det G\sub {S} =1$, there must be at least one element in the adjacency matrix of $G\sub{S}$ that is non-zero, i.e., there is an edge $\pair uv$ in $G\sub {S}$.
Apply $[uv]$ to graph $G$, and proceed iteratively with graph $G[uv]$ and support set $S-\{u,v\}$, where $\det G[uv] \sub{S-\{u,v\}} = 1$ again holds (by Theorem~\ref{thm:geelen}), and we stop when we have exhausted the support. \begin{Theorem} For every applicable sequence of pivots $\varphi$ there exists an applicable reduced sequence $\varphi'$ such that $\sup(\varphi) = \sup(\varphi')$ --- and therefore $G\varphi = G\varphi'$. \end{Theorem} \begin{Remark} The possibility to construct an applicable reduced sequence with given support depends on the fact that there must be at least one edge to obtain a non-zero determinant. In fact every column in the matrix must contain at least one non-zero entry, i.e., every vertex of $S$ is incident to an edge in $G\sub{S}$. This means we can even choose one of the vertices of the pivot. As an example, we return to the topic of commutativity. It is known that if $[uv][wz]$ is applicable, then we cannot conclude that $[wz][uv]$ is applicable. However, $\det G\sub{\{u,v,w,z\}} =1$, so we can construct an applicable sequence with support $\{u,v,w,z\}$. Fixing $z$ we know that there is an edge incident to that vertex, which can be either $\pair wz$, $\pair vz$ or $\pair uz$. When pivoting over this edge, the remaining two vertices must form an edge in the graph. Hence, we have shown the following fact: if, for four different vertices $u,v,w,z$, $[uv][wz]$ is applicable, then at least one of the pivot sequences $[wz][uv]$, $[vz][uw]$, or $[uz][vw]$ is applicable. This is essentially Lemma~1.2.11 of \cite{genest}. \end{Remark} The previous theorem shows that also the converse of Lemma~\ref{lem:applic=>odd} holds. \begin{Theorem}\label{thm:applic<=>odd} Let $S$ be a set of vertices of graph $G$. Then $\det G\sub S = 1$ iff there exists a (reduced) sequence of pivots $\varphi$ with support $S$ that is applicable in $G$. \end{Theorem} The size of $\{ S \subseteq V \mid \det G\sub S = 1 \}$ is precisely the value of the interlace polynomial $q(G)$ of $G$ at $x=1$, see \cite[Corollary~2]{aigner}.
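The greedy construction above is short enough to execute directly. The Python sketch below (illustrative only, not part of the paper; the pivot is implemented via the formula of Lemma~\ref{lem:oum}) reduces a support set on the path $1$--$2$--$3$--$4$ and also illustrates Theorem~\ref{thm:equaldom}: pivoting the same support in a different order yields the same graph.

```python
# Illustrative greedy reduction: given S with det G|S = 1 over GF(2),
# repeatedly pivot on some edge inside S and remove its two endpoints.

from itertools import combinations

def pivot(V, E, u, v):  # the pivot G[uv] via Oum's formula (Lemma lem:oum)
    assert frozenset((u, v)) in E, "pivot is only defined on an edge"
    def sim(x, y):
        return x == y or frozenset((x, y)) in E
    return {frozenset((x, y)) for x, y in combinations(sorted(V), 2)
            if sim(x, y)
            ^ (sim(x, u) and sim(y, v))
            ^ (sim(x, v) and sim(y, u))}

def reduce_support(V, E, S):
    """Build a reduced applicable pivot sequence with support S."""
    S, sequence = set(S), []
    while S:
        # an edge inside S exists while the determinant condition holds
        u, v = next((a, b) for a, b in combinations(sorted(S), 2)
                    if frozenset((a, b)) in E)
        sequence.append((u, v))
        E = pivot(V, E, u, v)
        S -= {u, v}
    return sequence, E

V = [1, 2, 3, 4]
P4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}  # path 1-2-3-4
seq, final = reduce_support(V, P4, {1, 2, 3, 4})
assert len(seq) == 2 and {x for e in seq for x in e} == {1, 2, 3, 4}
# same support, different order, same result (Theorem thm:equaldom)
assert final == pivot(V, pivot(V, P4, 3, 4), 1, 2)
```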
\section{Graphs with Self-Loops} \label{sec_self_loops} Until now we have considered simple graphs (graphs without loops or parallel edges). In this section we consider graphs $G$ with loops but without parallel edges. The adjacency matrices $A$ corresponding to such graphs are precisely the symmetric $(0,1)$-matrices. If vertex $u$ has a loop in $G$, then the matrix $A \sub{\{u\}}$ is equal to the $1 \times 1$ matrix $(1)$. Hence $\det A\sub{\{u\}} = 1$, so the general pivot $A*\{u\}$ of Section~\ref{sec_pivots} is defined, and modulo $2$ it equals $$ A*\{u\} = \left( \begin{array}{c|c} 1 & \chi_u^T \\ \hline \chi_u & A\sub{V-u} - \chi_u \chi_u^T \end{array} \right), $$ where $\chi_u$ is the column vector belonging to $u$ without element $a_{uu}$. We define the elementary pivot $G*u$ for loop vertex $u$ on $G$ by the graph corresponding to adjacency matrix $A*\{u\}$. The elementary pivot $G*u$ is obtained from $G$ by complementing the neighbourhood $N_G(u)$ of $u$ (just as in simple graphs) \emph{and}, for $v \in N_G(u)$, we add a loop to $v$ if $v$ is a non-loop vertex in $G$ and remove the loop if $v$ is a loop vertex in $G$. Hence, we will call $G*u$ \emph{local complementation} (on graph $G$ with loop $u$). We can apply Proposition~\ref{prop:geelen}, and similar to Theorem~\ref{thm:geelen} we obtain (in $GF(2)$) the following result. \begin{Theorem}\label{thm:geelen2} Let $G$ be a graph, and let $u \in V(G)$ be a vertex that has a loop. Then, for $Y \subseteq V(G)$, \[ \det ((G*u)\sub{Y}) = \det (G\sub{Y \oplus \{u\}}) \]\qed \end{Theorem} The pivot operation on edge $e$ for graphs with loops is identical to that operation for simple graphs: it is only defined if both vertices of $e$ do not have loops, and it does not remove or add any loop of the graph. Results of the previous sections carry over to sequences $\varphi$ of operations having both local complementation and pivot operations. In particular, Theorems~\ref{thm:equaldom} and~\ref{thm:applic<=>odd} carry over.
\begin{Theorem}\label{thm:equaldom2} If $\varphi$ and $\varphi'$ are applicable sequences for $G$ having (possibly) both local complementation and pivot operations, then $\sup(\varphi) = \sup(\varphi')$ implies $G\varphi = G\varphi'$. Also, let $S \subseteq V(G)$. Then $\det G\sub S = 1$ iff there exists a (reduced) sequence $\varphi$ with support $S$, having (possibly) both local complementation and pivot operations, that is applicable in $G$. \end{Theorem} The size of $\{ S \subseteq V \mid \det G\sub S = 1 \}$, for graph $G$ with loops, is precisely the value of a polynomial $Q(G)$, defined in \cite[Section~4]{aigner} and related to the interlace polynomial $q(G)$, of $G$ at $x=2$, see \cite[Corollary~5]{aigner}. Moreover, the previous theorem can also be stated in terms of \emph{general perfect matchings}: considering a loop on $x$ as the edge $\{x\} \in E(G)$, a general perfect matching is a $P \subseteq E(G)$ that is a partition of $V(G)$. \begin{Remark} In the theory of gene assembly in ciliates \cite{ciliates}, the local complementation operation on $u$ with the removal of $u$ is called the \emph{graph positive rule}, and the pivot operation on $uv$ with the removal of both $u$ and $v$ is called the \emph{graph double rule}. These rules are defined on \emph{signed graphs}, where each vertex is labelled by either $-$ or $+$. Now, label $-$ corresponds to a non-loop vertex and label $+$ corresponds to a loop vertex. Hence, we obtain the result that any two sequences of these graph rules with equal support yield the same graph. Moreover, we obtain that a signed graph can be transformed into the empty graph by these graph rules iff its adjacency matrix has determinant $1$ modulo $2$. \end{Remark} \section{Discussion} We have related applicable sequences of pivot operations to determinants and perfect matchings in a graph.
In this way, we have shown that two applicable sequences of pivot operations with equal support have the same effect on the graph. Moreover, for a given set $S$ of vertices, we have shown that there is an applicable sequence $\varphi$ of pivot operations with support $S$ precisely when the number of perfect matchings of the subgraph induced by $S$ is odd (or equivalently, when the determinant of the adjacency matrix of the subgraph is odd). In fact, there is an applicable reduced sequence $\varphi'$ with the same support as $\varphi$. Finally, we have shown that pivots and local complementation can `work together' in the case of graphs with loops, in the sense that equal support renders equal graphs. \appendix \section{Pivots and Matchings} In this appendix we give an independent proof of Theorem~\ref{thm:again} in the style of Oum \cite{oum}, using perfect matchings instead of determinants, as it may be of independent interest. The proof was made superfluous when the authors discovered references \cite{little} and \cite{geelen}. However, the proofs in this appendix are straightforward, and the reader may therefore prefer this approach. Recall that $x \sim_G y$ if either $\pair xy \in E(G)$ or $x=y$. As a technical tool we need a formula that can be used to compute the number of perfect matchings in a graph, but which can also be applied when we have duplicate vertices. For (an even number of) variables $x_1,\dots, x_n$ let $\mathrm{pm}_G(x_1,\dots,x_n)$ denote the following logical expression: \[ \bigoplus_{P \in \mathrm{pair} \{x_1,\dots,x_n\} } \bigwedge_{\pair{x}{y} \in P} (x \sim_G y) \] The number of variables used in the expression varies; we assume this number is clear from the context. Clearly, $\mathrm{pm}_G(x,y)$ equals $x \sim_G y$. Moreover, $\mathrm{pm}_G()$ is true -- the logical conjunction $\wedge$ over $0$ arguments is (considered) true, and the logical exclusive or over a single argument $a$ is (considered) $a$.
This is in line with the fact that there is a single perfect matching on zero vertices. If we evaluate this expression for the (pairwise different) vertices $v_1,\dots,v_n$ of a graph $G$, then we obtain the value $\mathrm{pm}_G(v_1,\dots,v_n)$, which equals $\mathrm{pm}(G\sub{\{v_1,\dots,v_n\}})$, the parity of the number of perfect matchings of the subgraph of $G$ induced by $v_1,\dots,v_n$ (identifying 0 and 1 with false and true, respectively). Due to the highly symmetric form of the formula $\mathrm{pm}_G$, the ordering of the vertices as arguments to the formula does not affect the value. We will use this fact frequently below. The formula can also be evaluated when two (or more) of its arguments are chosen to be the same vertex in the graph. The next result shows that equal vertices can be omitted (in pairs). \begin{Lemma}\label{lem:equivmat} Let $v_1,\dots,v_{n-2},v,v'$ be vertices in a graph $G$ such that $v \sim_G x$ iff $v' \sim_G x$ for each vertex $x$. Then $\mathrm{pm}_G(v_1,\dots,v_{n-2},v,v') = \mathrm{pm}_G(v_1,\dots,v_{n-2})$. \end{Lemma} \begin{Proof} Observe that the condition of the lemma on $v$ and $v'$ implies that $v\sim_G v'$ holds (take $x = v$). For $n=2$ the left hand side $\mathrm{pm}_G(v,v')$ equals $v\sim_G v'$, which is true by this observation; the right hand side $\mathrm{pm}_G()$ has been set to true as well. Now let $n>2$. In the formal expression $\mathrm{pm}_G$ each pairing $P$ that does not contain $\pair{x_{n-1}}{x_n}$ has two pairs $\pair{x_{n-1}}{x_i}$ and $\pair{x_n}{x_j}$. For $P$ there is a (unique) $P'$ corresponding to $P$ where $P' \backslash P = \{\pair{x_{n-1}}{x_j}, \pair{x_n}{x_i} \}$ (and hence $P \backslash P' = \{\pair {x_{n-1}}{x_i}, \pair{x_n}{x_j} \}$). Since $v \sim_G x$ iff $v' \sim_G x$ for each vertex $x$, we have $(v \sim_G x_i) \land (v' \sim_G x_j) = (v \sim_G x_j) \land (v' \sim_G x_i)$. Hence the contributions of pairings $P$ and $P'$ cancel.
The remaining pairings all contain $\pair{v}{v'}$, for which the factor $v \sim_G v'$ can be dropped from the formula, as $v \sim_G v'$ holds. The resulting formula equals that of $\mathrm{pm}_G(v_1,\dots,v_{n-2})$. \end{Proof} The next lemma shows that we can characterize pivoting by the parity of the number of perfect matchings in subgraphs. It is a simple reformulation of the result of Oum, but essential as a first step to understand the connection between pivoting and perfect matchings. \begin{Lemma}\label{lem:oumagain} Let $G = (V,E)$ be a graph, and fix $\pair uv \in E$. For $x,y \in V$ we have $\mathrm{pm}_{G[uv]}(x,y) = \mathrm{pm}_G(x,y,u,v)$. \end{Lemma} \begin{Proof} In the evaluation of $\mathrm{pm}_{G[uv]}(x,y)$ we consider only the single factor $x \sim_{G[uv]} y$, the left-hand side of Lemma~\ref{lem:oum}. As $u \sim_G v$ holds, we may replace the factor $x \sim_G y$ in the statement of Lemma~\ref{lem:oum} by $(x \sim_G y \land u \sim_G v)$. Now the right-hand side of the formula equals $\mathrm{pm}_G(x,y,u,v)$. \end{Proof} The main technical result is a generalization of the previous lemma, which now includes an additional sequence of nodes on both sides. Before stating this result we explicitly compute the simplest of these generalizations, with variables $x_1,x_2,x_3,x_4$ instead of $x,y$. This example visualizes the more general arguments in the proof of our general result, which follows the example. \begin{Example} $\mathrm{pm}_{G[uv]}(x_1,x_2,x_3,x_4)$ equals $ (x_1 \sim_{G[uv]} x_2 \land x_3 \sim_{G[uv]} x_4) \oplus (x_1 \sim_{G[uv]} x_3 \land x_2 \sim_{G[uv]} x_4) \oplus (x_1 \sim_{G[uv]} x_4 \land x_2 \sim_{G[uv]} x_3) $.
Now substitute each $x\sim_{G[uv]}y$ by the formula given in Lemma~\ref{lem:oum}, to obtain \noindent $ (\; [x_1x_2 \oplus (x_1u\land x_2v) \oplus (x_1v\land x_2u) ] \land [x_3x_4 \oplus (x_3u\land x_4v) \oplus (x_3v\land x_4u) ] \;) \bigoplus (\; [x_1x_3 \oplus (x_1u\land x_3v) \oplus (x_1v\land x_3u) ] \land [x_2x_4 \oplus (x_2u\land x_4v) \oplus (x_2v\land x_4u) ] \;) \bigoplus (\; [x_1x_4 \oplus (x_1u\land x_4v) \oplus (x_1v\land x_4u) ] \land [x_2x_3 \oplus (x_2u\land x_3v) \oplus (x_2v\land x_3u) ] \;) $, \\ where we write $xy$ rather than $x\sim_G y$. By distributivity (i.e. using the logical identity $a \wedge (b \oplus c) = (a \wedge b) \oplus (a \wedge c)$) this is equivalent to \noindent $ ( x_1x_2 \land x_3x_4 ) \oplus ( x_1x_2 \land x_3u \land x_4v ) \oplus ( x_1x_2 \land x_3v \land x_4u ) \oplus ( x_1u \land x_2v \land x_3x_4 ) \oplus ( x_1u \land x_2v \land x_3u \land x_4v ) \oplus ( x_1u \land x_2v \land x_3v \land x_4u ) \oplus ( x_1v \land x_2u \land x_3x_4 ) \oplus ( x_1v \land x_2u \land x_3u \land x_4v ) \oplus ( x_1v \land x_2u \land x_3v \land x_4u ) \bigoplus ( x_1x_3 \land x_2x_4 ) \oplus ( x_1x_3 \land x_2u \land x_4v ) \oplus ( x_1x_3 \land x_2v \land x_4u ) \oplus ( x_1u \land x_3v \land x_2x_4 ) \oplus ( x_1u \land x_3v \land x_2u \land x_4v ) \oplus ( x_1u \land x_3v \land x_2v \land x_4u ) \oplus ( x_1v \land x_3u \land x_2x_4 ) \oplus ( x_1v \land x_3u \land x_2u \land x_4v ) \oplus ( x_1v \land x_3u \land x_2v \land x_4u ) \bigoplus ( x_1x_4 \land x_2x_3 ) \oplus ( x_1x_4 \land x_2u \land x_3v ) \oplus ( x_1x_4 \land x_2v \land x_3u ) \oplus ( x_1u \land x_4v \land x_2x_3 ) \oplus ( x_1u \land x_4v \land x_2u \land x_3v ) \oplus ( x_1u \land x_4v \land x_2v \land x_3u ) \oplus ( x_1v \land x_4u \land x_2x_3 ) \oplus ( x_1v \land x_4u \land x_2u \land x_3v ) \oplus ( x_1v \land x_4u \land x_2v \land x_3u ) $ There are twelve terms with four factors, which are six different terms each occurring twice, hence cancelling each other. 
The three terms with two factors can be extended by adding a third factor $uv$ (which is true). Rearranging these 15 remaining terms we get \noindent $ ( x_1x_2 \land x_3x_4 \land uv ) \oplus ( x_1x_2 \land x_3u \land x_4v ) \oplus ( x_1x_2 \land x_3v \land x_4u ) \oplus ( x_1x_3 \land x_2x_4 \land uv ) \oplus ( x_1x_3 \land x_2u \land x_4v ) \oplus ( x_1x_3 \land x_2v \land x_4u ) \oplus ( x_1x_4 \land x_2x_3 \land uv ) \oplus ( x_1x_4 \land x_2u \land x_3v ) \oplus ( x_1x_4 \land x_2v \land x_3u ) \oplus ( x_1u \land x_2v \land x_3x_4 ) \oplus ( x_1u \land x_3v \land x_2x_4 ) \oplus ( x_1u \land x_4v \land x_2x_3 ) \oplus ( x_1v \land x_2u \land x_3x_4 ) \oplus ( x_1v \land x_3u \land x_2x_4 ) \oplus ( x_1v \land x_4u \land x_2x_3 ) $ These happen to be the fifteen pairings making up $\mathrm{pm}_G(x_1,x_2,x_3,x_4,u,v)$. \end{Example} As announced, the proof of our general result follows the path sketched in the previous example. It is the `perfect matching counterpart' of Theorem~\ref{thm:geelen}. \begin{Theorem}\label{thm:basic} Let $G$ be a graph, let $v_1,\dots,v_n$ be vertices in $G$, and let $\pair uv \in E(G)$. Then $\mathrm{pm}_{G[uv]}(v_1,\dots,v_n) = \mathrm{pm}_G(v_1,\dots,v_n,u,v)$. \end{Theorem} \begin{Proof} If $n=0$ the left hand side equals $\mathrm{pm}_{G[uv]}()$, which is true, while the right hand side $\mathrm{pm}_G(u,v)$ is equivalent to $u \sim_G v$, which is also true, as $\pair uv$ is an edge in $G$. Now let $n\ge 2$. For $\mathrm{pm}_{G[uv]}(v_1,\dots,v_n)$ the following formula has to be evaluated \[ \bigoplus_{P \in \mathrm{pair}\{x_1,\dots,x_n\}} \bigwedge_{\pair{x}{y} \in P} (x \sim_{G[uv]} y) \] According to Lemma~\ref{lem:oum} the relation $\sim_{G[uv]}$ can be replaced by a suitable expression involving $\sim_G$ in the original graph $G$.
\[ \bigoplus_{P \in \mathrm{pair}\{x_1,\dots,x_n\}} \bigwedge_{\pair xy \in P} \left( (x \sim_G y) \oplus (x \sim_G u \land y \sim_G v) \oplus (x \sim_G v \land y \sim_G u) \right) \] Now, we apply the logical identity $a \wedge (b \oplus c) = (a \wedge b) \oplus (a \wedge c)$ iteratively to the inner part $\bigwedge_{\pair xy \in P} (\dots)$, and we obtain for each $P \in \mathrm{pair}\{x_1,\dots,x_n\}$ the exclusive or over a total of $3^{n/2}$ terms, each of which is a conjunction of factors of one of the forms $x \sim_G y$, $(x \sim_G u \land y \sim_G v)$ and $(x \sim_G v \land y \sim_G u)$. Moreover, in each such term the variables $x_1,\dots,x_n$ each occur exactly once. Now consider such a term in which the constant $u$ occurs $k$ times, paired to $x_{i_1}, \dots, x_{i_k}$; this implies that $v$ also occurs $k$ times, paired to certain $x_{j_1}, \dots, x_{j_k}$. Up to the order of factors, this term is present in the list that belongs to any $P'$ that pairs the variables $x_{i_1}, \dots, x_{i_k}$ to the variables $x_{j_1}, \dots, x_{j_k}$ (in any combination) and equals $P$ for the other variables. There are $k!$ such pairings, thus $k!$ copies of equivalent terms. These copies cancel if $k!$ is even, which means if $k\ge 2$. Hence, for each $P$ we need only consider those terms with at most one occurrence of both $u$ and $v$. Thus we have reduced the previous equation to \[\begin{array}{cl} \bigoplus_{P \in \mathrm{pair}\{x_1,\dots,x_n\}} & \left( \bigwedge_{\pair{x}{y} \in P} x \sim_G y \right) \oplus \\ & \bigoplus_{\pair{x_1}{y_1} \in P} \left[ \left( x_1 \sim_G u \wedge y_1 \sim_G v \wedge \bigwedge_{\pair xy \in P \backslash \pair{x_1}{y_1}} x \sim_G y \right) \right. \\ & \left.
\oplus \left( y_1 \sim_G u \wedge x_1 \sim_G v \wedge \bigwedge_{\pair xy \in P \backslash \pair{x_1}{y_1}} x \sim_G y \right) \right] \end{array} \] Because $\bigwedge_{\pair xy \in P} x \sim_G y = \bigwedge_{\pair xy \in P \cup \{\pair uv\}} (x \sim_G y)$, as $u \sim_G v$ holds, this is equivalent to $$\bigoplus_{P \in \mathrm{pair} \{x_1,\dots,x_n,u,v\} } \left( \bigwedge_{\pair xy \in P} x \sim_G y \right)$$ and this in turn is the expression that has to be evaluated for $\mathrm{pm}_G(x_1,\dots,x_n,u,v)$. \end{Proof} By Lemma~\ref{lem:equivmat}, the previous theorem may be rephrased as follows, cf. Theorem~\ref{thm:geelen}. \begin{Theorem}\label{thm:basic2} Let $G$ be a graph, and let $\pair uv \in E(G)$. Then, for $Y \subseteq V(G)$, \[ \mathrm{pm} ((G[\pair uv])\sub{Y}) = \mathrm{pm} (G\sub{Y \oplus \{u,v\}}) \]\qed \end{Theorem} The results of Section~\ref{sec_seq_pivots} involving $\det(G)$ can hence also be developed using $\mathrm{pm}(G)$ through Theorem~\ref{thm:basic2}. In this way we obtain, e.g., Theorem~\ref{thm:again}. The following special case of our general result Theorem~\ref{thm:basic} is a reformulation in the style of the original Lemma~\ref{lem:oum}, summing over edges in the subgraph of $G$ induced by $\{u,v,w,z\}$ (with some care in the case of multiple occurrences of vertices). \begin{Theorem} If $[uv][wz]$ is applicable to $G$, then \[ x \sim_{G[uv][wz]} y = x\sim_G y \bigoplus_{\mbox{\shortstack{ $\{\pair{x_1}{x_2},\pair{x_3}{x_4} \}$\\ pairing of\\ $\{u,v,w,z\}$ \\ with $ x_1\sim_G x_2$ }}} ((x \sim_G x_3) \wedge (y \sim_G x_4)) \oplus ((x \sim_G x_4) \wedge (y \sim_G x_3)) \] \end{Theorem} \begin{Proof} The result is obtained by rewriting the expression for $\mathrm{pm}_G(x,y,u,v,w,z)$, and using the fact that $\mathrm{pm}_G(u,v,w,z)$ holds. \end{Proof} \end{document}
\begin{definition}[Definition:Successor Mapping] Let $\struct {P, s, 0}$ be a Peano structure. Then the mapping $s: P \to P$ is called the '''successor mapping on $P$'''. The image element $\map s x$ of an element $x$ is called the '''successor element''' or just '''successor''' of $x$. \end{definition}
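For readers who like to see such definitions formalised, the structure $\struct {P, s, 0}$ can be sketched in Lean 4 (the structure and field names below are our own choice, and the Peano axioms themselves are deliberately omitted):

```lean
-- A bare-bones Peano structure ⟨P, s, 0⟩: a carrier type, a zero
-- element, and a successor mapping s : P → P.  The Peano axioms
-- (injectivity of s, zero not a successor, induction) are omitted here.
structure PeanoStructure where
  P    : Type
  zero : P
  succ : P → P

-- The natural numbers give the canonical instance, with Nat.succ
-- playing the role of the successor mapping.
def natPeano : PeanoStructure :=
  { P := Nat, zero := 0, succ := Nat.succ }
```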
Absolute homology theory of stereotype algebras Akbarov S. S. Functional Analysis and Its Applications. 2000. Vol. 34. No. 1. P. 60-63. Absolutely Convergent Fourier Series. An Improvement of the Beurling-Helson Theorem V. V. Lebedev. Functional Analysis and Its Applications. 2012. Vol. 46. No. 2. P. 121-132. We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained the first nontrivial results. A criterion of smoothness at infinity for an arithmetic quotient of the future tube Schwarzman O., Vinberg E. Functional Analysis and Its Applications. 2017. Vol. 51. No. 1. P. 32-47. Let Γ be an arithmetic group of affine automorphisms of the n-dimensional future tube T. It is proved that the quotient space T/Γ is smooth at infinity if and only if the group Γ is generated by reflections and the fundamental polyhedral cone ("Weyl chamber") of the group dΓ in the future cone is a simplicial cone (which is possible only for n ≤ 10). As a consequence of this result, a smoothness criterion for the Satake–Baily–Borel compactification of an arithmetic quotient of a symmetric domain of type IV is obtained. An algebra of continuous functions as a continuous envelope of its subalgebras Akbarov S. S. Functional Analysis and Its Applications. 2016. Vol. 50. No. 2. P. 143-145.
To an arbitrary involutive stereotype algebra A the continuous envelope operation assigns its nearest, in some sense, involutive stereotype algebra Env_C A so that homomorphisms to various C*-algebras separate the elements of Env_C A but do not distinguish between the properties of A and those of Env_C A. If A is an involutive stereotype subalgebra in the algebra C(M) of continuous functions on a paracompact locally compact topological space M, then, for C(M) to be a continuous envelope of A, i.e., Env_C A = C(M), it is necessary but not sufficient that A be dense in C(M). In this note we announce a necessary and sufficient condition for this: the involutive spectrum of A must coincide with M up to a weakening of the topology such that the system of compact subsets in M and the topology on each compact subset remains the same. A resultant system as the set of coefficients of a single resultant Abramov Y. V. Functional Analysis and Its Applications. 2013. Vol. 47. No. 3. P. 82-87. Explicit expressions for polynomials forming a homogeneous resultant system of a set of m+1 homogeneous polynomial equations in n+1 variables A short and simple proof of the Jurkat–Waterman theorem on conjugate functions It is well-known that certain properties of continuous functions on the circle T, related to the Fourier expansion, can be improved by a change of variable, i.e., by a homeomorphism of the circle onto itself. One of the results in this area is the Jurkat–Waterman theorem on conjugate functions, which improves the classical Bohr–Pál theorem. In the present work we propose a short and technically very simple proof of the Jurkat–Waterman theorem. Our approach yields a stronger result. Asymptotics of products of nonnegative random matrices Protasov V. Y. Functional Analysis and Its Applications. 2013. Vol. 47. No. 2. P. 138-147. Asymptotic properties of products of random matrices ξ_k = X_k⋯X_1 as k → ∞ are analyzed.
All product terms X_i are independent and identically distributed on a finite set of nonnegative matrices A = {A_1, …, A_m}. We prove that if A is irreducible, then all nonzero entries of the matrix ξ_k almost surely have the same asymptotic growth exponent as k → ∞, which is equal to the largest Lyapunov exponent λ(A). This generalizes previously known results on products of nonnegative random matrices. In particular, this removes all additional "nonsparsity" assumptions on matrices imposed in the literature. We also extend this result to reducible families. As a corollary, we prove that Cohen's conjecture (on the asymptotics of the spectral radius of products of random matrices) is true in the case of nonnegative matrices. Brion's theorem for Gelfand–Tsetlin polytopes Makhlin I. Functional Analysis and Its Applications. 2016. Vol. 50. No. 2. P. 98-106. This work is motivated by the observation that the character of an irreducible gl_n-module (a Schur polynomial), being the sum of exponentials of integer points in a Gelfand–Tsetlin polytope, can be expressed by using Brion's theorem. The main result is that, in the case of a regular highest weight, the contributions of all nonsimplicial vertices vanish, while the number of simplicial vertices is n! and the contributions of these vertices are precisely the summands in Weyl's character formula. An Anosov C^1-diffeomorphism with a horseshoe that attracts almost any point Bonatti C., Minkov S. S., Okunev A. V. et al. Functional Analysis and Its Applications. 2017. Vol. 51. No. 2. P. 83-86. Characters of Feigin-Stoyanovsky subspaces and Brion's theorem Makhlin I. Functional Analysis and Its Applications. 2015. Vol. 49. No. 1. P. 15-24. We give an alternative proof of the main result of [1]; the proof relies on Brion's theorem about convex polyhedra.
The result itself can be viewed as a formula for the character of the Feigin-Stoyanovsky subspace of an integrable irreducible representation of the affine Lie algebra $\widehat{\mathfrak{sl}}_n(\mathbb{C})$. Our approach is to assign integer points of a certain polytope to vectors comprising a monomial basis of the subspace and then compute the character by using (a variation of) Brion's theorem. Coulomb Branch of a Multiloop Quiver Gauge Theory Goncharov E. A., Finkelberg M. V. Functional Analysis and Its Applications. 2019. Vol. 53. P. 241-249. We compute the Coulomb branch of a multiloop quiver gauge theory for the quiver with a single vertex, r loops, one-dimensional framing, and dim V = 2. We identify it with a Slodowy slice in the nilpotent cone of the symplectic Lie algebra of rank r. Hence it possesses a symplectic resolution with 2r fixed points with respect to a Hamiltonian torus action. We also identify its flavor deformation with a base change of the full Slodowy slice. Degenerate group of type A: Representations and flag varieties Feigin E. Functional Analysis and Its Applications. 2014. Vol. 48. No. 1. P. 59-71. The degenerate Lie group is a semidirect product of the Borel subgroup with the normal abelian unipotent subgroup. We introduce a class of the highest weight representations of the degenerate group of type A, generalizing the PBW-graded representations of the classical group. Following the classical construction of the flag varieties, we consider the closures of the orbits of the abelian unipotent subgroup in the projectivizations of the representations. We show that the degenerate flag varieties $\Fl^a_n$ and their desingularizations $R_n$ can be obtained via this construction. We prove that the coordinate ring of $R_n$ is isomorphic to the direct sum of duals of the highest weight representations of the degenerate group. In the end, we state several conjectures on the structure of the highest weight representations.
Diffeomorphisms with intermingled attracting basins Ilyashenko Y. Functional Analysis and Its Applications. 2008. No. 42(4). P. 60-71. Diffusion processes on the Thoma cone Olshanski G. Functional Analysis and Its Applications. 2016. Vol. 50. No. 3. P. 237-240. The Thoma cone is a certain infinite-dimensional space that arises in the representation theory of the infinite symmetric group. The present note is a continuation of a paper by A. M. Borodin and the author (Electr. J. Probab. 18 (2013), no. 75), where a 2-parameter family of continuous-time Markov processes on the Thoma cone was constructed. The purpose of the note is to show that these processes are diffusions. Examples of Families of Strebel Differentials on Hyperelliptic Curves Artamkin I., Levitskaya Y., Shabat G. B. Functional Analysis and Its Applications. 2009. Vol. 43. No. 2. P. 140-142. The paper contains an explicit construction of Strebel differentials on one-parameter families of hyperelliptic curves of even genus. Descriptions of the corresponding separatrices are presented. Extended Gelfand–Tsetlin graph, its q-boundary, and q-B-splines The boundary of the Gelfand–Tsetlin graph is an infinite-dimensional locally compact space whose points parameterize the extreme characters of the infinite-dimensional group U(∞). The problem of harmonic analysis on the group U(∞) leads to a continuous family of probability measures on the boundary—the so-called zw-measures. Recently Vadim Gorin and the author have begun to study a q-analogue of the zw-measures. It turned out that constructing them requires introducing a novel combinatorial object, the extended Gelfand–Tsetlin graph. In the present paper it is proved that the Markov kernels connected with the extended Gelfand–Tsetlin graph and its q-boundary possess the Feller property. This property is needed for constructing a Markov dynamics on the q-boundary. A connection with the B-splines and their q-analogues is also discussed. 
Gluing of surfaces with polygonal boundaries Akhmedov E., Shakirov S. Functional Analysis and Its Applications. 2009. Vol. 43. No. 4. P. 245-253. Integrable Crystals and Restriction to Levi Subgroups Via Generalized Slices in the Affine Grassmannian Krylov V. Functional Analysis and Its Applications. 2018. Vol. 52. No. 2. P. 113-133. Let $G$ be a connected reductive algebraic group over $\mathbb{C}$. Let $\Lambda^{+}_{G}$ be the monoid of dominant weights of $G$. We construct the integrable crystals $\mathbf{B}^{G}(\lambda),\ \lambda\in\Lambda^{+}_{G}$, using the geometry of generalized transversal slices in the affine Grassmannian of the Langlands dual group. We construct the tensor product maps $\mathbf{p}_{\lambda_{1},\lambda_{2}}\colon \mathbf{B}^{G}(\lambda_{1}) \otimes \mathbf{B}^{G}(\lambda_{2}) \rightarrow \mathbf{B}^{G}(\lambda_{1}+\lambda_{2})\cup\{0\}$ in terms of multiplication of generalized transversal slices. Let $L \subset G$ be a Levi subgroup of $G$. We describe the restriction to Levi $\operatorname{Res}^G_L\colon\operatorname{Rep}(G)\rightarrow\operatorname{Rep}(L)$ in terms of the hyperbolic localization functors for the generalized transversal slices. J-invariants of ornaments and framed chord diagrams Lando S. K. Functional Analysis and Its Applications. 2006. Vol. 40. No. 1. Lagrange Intersections in a Symplectic Space Pushkar P. E. Functional Analysis and Its Applications. 2000. Vol. 34. No. 4. P. 288-292. Mixed Problems in a Lipschitz Domain for Strongly Elliptic Second-Order Systems Agranovich M. S. Functional Analysis and Its Applications. 2011. Vol. 45. No. 2. P. 81-98. We consider mixed problems for strongly elliptic second-order systems in a bounded domain with Lipschitz boundary in the space R^n. For such problems, equivalent equations on the boundary in the simplest L_2-spaces H^s of Sobolev type are derived, which permits one to represent the solutions via surface potentials.
We prove a result on the regularity of solutions in the slightly more general spaces H^s_p of Bessel potentials and Besov spaces B^s_p. Problems with spectral parameter in the system or in the condition on a part of the boundary are considered, and the spectral properties of the corresponding operators, including the eigenvalue asymptotics, are discussed.
\begin{document} \title[Correlation functions of RS sequence] {Correlation functions of the Rudin--Shapiro sequence} \author{Jan Maz\'a\v c} \address{Fakult\"at f\"ur Mathematik, Universit\"at Bielefeld, \newline \indent Postfach 100131, 33501 Bielefeld, Germany} \email{[email protected]} \begin{abstract} In this paper, we show that all odd-point correlation functions of the balanced Rudin--Shapiro sequence vanish and that all even-point correlation functions depend only on a single number, which holds for any weighted correlation function as well. For the four-point correlation functions, we provide a more detailed exposition which reveals some arithmetic structures and symmetries. In particular, we show that one can obtain the autocorrelation coefficients of its topological factor with maximal pure point spectrum among them. \end{abstract} \maketitle \centerline{Dedicated to the memory of Uwe Grimm} \section{Introduction} The \emph{Rudin--Shapiro sequence}, sometimes called the Golay--Rudin--Shapiro sequence, is an infinite sequence discovered and studied within the scope of Fourier analysis, independently by Golay, Rudin and Shapiro in 1950 \cite{Golay_1,Golay_2,Rudin,Shapiro}. The original definition of the $n$-th member of the sequence counts the number of pairs of two consecutive ones in the binary expansion of $n$. Formally, let $(e_k\ e_{k-1}\ \dots e_2 \ e_1 \ e_0)_2$ be the binary expansion of $n$ and let $b_n$ denote the number of occurrences of ``$1 \ 1$'' in this expansion. We have $b_0=0$ and, for $n\geqslant 1$, this number can be calculated as \[ b_n = \displaystyle \sum_{i=0}^{k-1} e_i e_{i+1}. \] Then, the $n$-th digit of the (one-sided) Rudin--Shapiro sequence $a_n$ reads \[a_n \mathrel{\mathop:}= (-1)^{b_n}. \] This definition also implies that the Rudin--Shapiro sequence is an automatic sequence \cite{Allouche_Shallit}, and it provides a possible way to generalise the sequence; see \cite{Q87,AL91} for further details.
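The counting definition above translates directly into code. The following illustrative sketch (the function names are ours) uses the observation that $b_n$ equals the number of $1$-bits of $n \mathbin{\&} (n \gg 1)$, since each such bit marks a pair $e_i e_{i+1} = 1$:

```python
def b(n):
    """Number of (overlapping) occurrences of '11' in the binary
    expansion of n: each 1-bit of n & (n >> 1) marks a pair e_i e_{i+1} = 1."""
    return bin(n & (n >> 1)).count("1")

def a(n):
    """n-th term of the Rudin--Shapiro sequence, a_n = (-1)^{b_n}."""
    return -1 if b(n) % 2 else 1

print([a(n) for n in range(16)])

# The well-known recursions a_{2n} = a_n and a_{2n+1} = (-1)^n a_n
# serve as a sanity check: appending a 0-bit creates no new '11',
# appending a 1-bit creates one exactly when n is odd.
assert all(a(2 * n) == a(n) and a(2 * n + 1) == (-1) ** n * a(n)
           for n in range(1000))
```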
The standard and generalised Rudin--Shapiro sequences have already been studied from different points of view; for the spectral properties of the generalised ones, see \cite{Chan_Grimm}. In particular, one can study the complexity of the Rudin--Shapiro sequence with respect to finite arithmetic progressions contained in the sequence. Konieczny showed that the Rudin--Shapiro sequence is Gowers uniform for any uniformity norm \cite{Konieczny}. This result, roughly speaking, demonstrates that the Rudin--Shapiro sequence is not too distant from a random sequence, which is also apparent from its spectral properties \cite{TAO}, as both binary sequences possess absolutely continuous diffraction only. This fact is also visible at the level of the autocorrelation, where, for both sequences, the autocorrelation coefficients vanish except at $0$. In order to understand the statistical behaviour and possible differences better, we exploit the higher-order correlation functions, which are, to some extent, related to the Gowers norms. Correlation functions have been widely studied and used in statistical mechanics \cite{Domb_Green}, and some examples of one-dimensional quasicrystals have been discussed in the corresponding literature, for example \cite{vanE}. It was also shown that one can construct a suitable Ising model whose Hamiltonian has the Thue--Morse sequence as its ground state once its 4-point correlations are known \cite{GMRE}. Recently, Baake and Coons \cite{BC} described the higher-order correlation functions for the Thue--Morse word using renormalisation techniques. We aim to understand the statistical structure of the Rudin--Shapiro word in terms of higher-order correlation functions as well. This may bring a different insight into the type of its long-range order.
\section{Rudin--Shapiro sequence} The Rudin--Shapiro (RS) sequence can be obtained via a bi-infinite fixed point of a constant-length substitution over the quaternary alphabet $\mathcal{A} = \{ 0,1,2,3 \}$, namely \[\varrho_{_{\mathrm{RS}}} = \left\{ \begin{array}{rcl} 0 & \mapsto & 02, \\ 1 & \mapsto & 32, \\ 2 & \mapsto & 01, \\ 3 & \mapsto & 31, \\ \end{array} \right. \] with any of the legal seeds $2|0$, $2|3$, $1|0$ or $1|3$. The resulting fixed points (under the square of the substitution) define the \emph{quaternary} RS sequences. Any of them gives rise to the \emph{quaternary RS hull} in the standard way. We can further use these quaternary sequences to define the \emph{binary} ones via the mapping $\varphi: \{0,1,2,3\} \longrightarrow \{a,b \}$ defined as \[\varphi(0) = \varphi(2) = a, \qquad \varphi(1)=\varphi(3)=b.\] The four bi-infinite fixed points are mapped to four locally indistinguishable binary sequences. Any of them defines, in the usual way, the \emph{binary RS hull}. The binary bi-infinite sequences can be obtained as fixed points of a \emph{non-local} binary substitution rule. \begin{lemma}\cite[Lemma 4.9]{TAO} The four bi-infinite binary Rudin--Shapiro sequences are fixed under the substitution rules \[ \varrho_{\mathrm{even}} = \left\{\begin{array}{rcl} a & \mapsto & aaab, \\ b & \mapsto & bbba, \end{array} \right. \quad \varrho_{\mathrm{odd}} = \left\{\begin{array}{rcl} a & \mapsto & aaba, \\ b & \mapsto & bbab, \end{array} \right.\] where $\varrho_{\mathrm{even}}$ and $\varrho_{\mathrm{odd}}$ have to be applied to letters at even and odd positions, respectively. In particular, this rule is non-local, as one needs a reference point to apply the rule. \qed \end{lemma} Denote by $\mathbb{X}_{\mathrm{RS},4}$ the quaternary RS hull and by $\mathbb{X}_{\mathrm{RS},2}$ the binary one.
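To make the construction concrete, the following sketch (dictionary and function names are ours) iterates $\varrho_{_{\mathrm{RS}}}$ on the one-sided fixed point starting from the letter $0$ (the right half of the seed $2|0$; note that $0 \mapsto 02$ is prefix-preserving), applies the coding $\varphi$ with weights $a = +1$, $b = -1$, and checks the result against the digit-counting definition $a_n = (-1)^{b_n}$:

```python
RHO = {"0": "02", "1": "32", "2": "01", "3": "31"}   # the substitution rho_RS
PHI = {"0": 1, "2": 1, "1": -1, "3": -1}             # coding phi, a = +1, b = -1

def quaternary_rs(length):
    """One-sided fixed point of rho_RS starting from 0."""
    w = "0"
    while len(w) < length:
        w = "".join(RHO[c] for c in w)
    return w[:length]

w = quaternary_rs(1024)
binary = [PHI[c] for c in w]

# Compare with a_n = (-1)^{b_n}, where b_n counts 11-blocks
# (overlapping) in the binary expansion of n:
rs = [(-1) ** bin(n & (n >> 1)).count("1") for n in range(1024)]
assert binary == rs
```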
The mapping $\varphi$ induces a continuous mapping commuting with the substitution action, i.e., the following diagram commutes \begin{equation} \label{eq:diagram} \begin{CD} \mathbb{X}_{\mathrm{RS},4} @>\varrho_{_{\mathrm{RS}}}>> \mathbb{X}_{\mathrm{RS},4}\\ @V{\varphi}VV @VV{\varphi}V\\ \mathbb{X}_{\mathrm{RS},2} @>\varrho>> \mathbb{X}_{\mathrm{RS},2} \end{CD} \quad. \end{equation} Moreover, the induced mapping between the hulls is invertible. Thus, the hulls are topologically conjugate and even mutually locally derivable (MLD). See \cite[Rem. 4.11]{TAO} for further details. From the spectral point of view, the RS sequence is a paradigm of a substitution sequence with an absolutely continuous spectrum, both in the diffraction and in the dynamical sense. For a more detailed discussion of the spectral properties, we refer to \cite{Frank}. To gain deeper insight into the structure of the quaternary RS sequence, one can study its two-letter legal subwords \cite{TAO}. In particular, one can define a sliding block map $\chi$, \begin{align*} \chi(01)=\chi(32)=A, & \qquad \chi(02)=\chi(31)=B, \\ \chi(10) = \chi(23) = C, & \qquad \chi(20)=\chi(13)=D. \end{align*} \noindent The quaternary RS substitution then induces a substitution $\varrho^{}_{2}$ on the alphabet $\{A, B, C, D\}$ which reads \begin{equation} \label{eq:subst_induced} \varrho^{}_{2} = \left\{ \begin{array}{rcl} A & \mapsto & BC, \\ B & \mapsto & BD, \\ C & \mapsto & AD, \\ D & \mapsto & AC. \\ \end{array} \right. \end{equation} This substitution rule makes the following diagram with the natural $\mathbb{Z}$-action commutative, \begin{equation} \label{eq:diagram_2} \begin{CD} \mathbb{X}_{\mathrm{RS},4} @>S_{\mathrm{RS,4}}>> \mathbb{X}_{\mathrm{RS},4}\\ @V{\chi}VV @VV{\chi}V\\ \chi(\mathbb{X}_{\mathrm{RS},4}) @>S^{}_{2}>> \chi(\mathbb{X}_{\mathrm{RS},4}) \end{CD} \quad. \end{equation} The square of the substitution $\varrho^{}_{2}$ possesses two bi-infinite fixed points with starting seeds $C|D$ and $D|B$, respectively.
They coincide at all positions but $-1$. As in the RS case, we can use the same trick to obtain a binary sequence, namely the coding $\Tilde{\varphi}: \{A,B,C,D\} \longrightarrow \{a,b \}$, \[\Tilde{\varphi}(A) = \Tilde{\varphi}(C) = a, \qquad \Tilde{\varphi}(B)=\Tilde{\varphi}(D)=b.\] Moreover, this binary sequence can be described as a fixed point of a certain paper-folding-like substitution.\footnote{There is also an interesting connection to space-filling curves arising from this substitution. See Example 5.1.5 in \cite{Allouche_Shallit} for further details. } \begin{lemma} \label{lem:induced} The bi-infinite binary versions of the fixed points of the substitution $\varrho^{}_{2}$ are fixed under the substitution rules \begin{equation} \label{eq:RS-derived} \varrho_{\, \mathrm{2,even}} = \left\{\begin{array}{rcl} a & \mapsto & bbab, \\ b & \mapsto & bbaa, \end{array} \right. \quad \varrho_{\, \mathrm{2,odd}} = \left\{\begin{array}{rcl} a & \mapsto & baaa, \\ b & \mapsto & baab, \end{array} \right. \end{equation} where $\varrho_{\mathrm{2,even}}$ and $\varrho_{\mathrm{2,odd}}$ have to be applied to letters at even and odd positions, respectively. \begin{proof} Both bi-infinite fixed points $\boldsymbol{w}$ of the quaternary substitution $\varrho^{}_{2}$ satisfy $w^{}_{2i} \in \{A,B\}$ and $w^{}_{2i+1} \in \{C,D\}$ for all $i\in\mathbb{Z}$. Taking the square of the substitution \eqref{eq:subst_induced}, one gets at even and odd positions \[ \begin{array}{rlrl} A & \mapsto BDAD, & C & \mapsto BCAC, \\ B & \mapsto BDAC, & D & \mapsto BCAD. \\ \end{array} \] Recalling the coding $\Tilde{\varphi}$ gives the stated rules. \end{proof} \end{lemma} \begin{remark} \label{eq:derived-description} Assign weights to the letters, namely let $a=-b=1$, and let $m\in\mathbb{Z}$.
Then, \begin{equation*} w_{4m} = -1, \qquad w_{4m+1} = (-1)^{m+1}, \qquad w_{4m+2} = 1, \qquad w_{4m+3} = (-1)^{m+1}w_m \end{equation*} provide a direct description of the fixed point as a consequence of Lemma \ref{lem:induced}. $\Diamond$ \end{remark} \begin{remark} This system possesses a pure point spectrum, as seen by applying Wiener's criterion. Moreover, the comparison with the pure point part of the \emph{dynamical} spectrum of the RS sequence reveals that the dynamical system $(\tilde{\varphi}\chi(\mathbb{X}_{\mathrm{RS},4}),\mathbb{Z})$ is a topological factor of the RS hull with maximal pure point spectrum \cite{BC, Q_book}. $\Diamond$ \end{remark} \section{Correlation functions} Due to the MLD relations, we now concentrate on the binary version. Recall that, if one considers the binary RS bi-infinite sequence as a word $\boldsymbol{w}$ over the alphabet $\{1,-1\}$, the following recursive relation holds for its letters \begin{equation} \label{eq:renorm_letters} w_{4m+\ell} = \left\{\begin{array}{rcl} w_m, & \mathrm{if} & \ell \in \{0,1\}, \\ (-1)^{m+\ell}\,w_m, & \mathrm{if} & \ell \in \{2,3\}. \end{array} \right. \end{equation} We define the $n$-point correlation function $\eta^{(n)}$ as \begin{equation} \label{eq:def_corr_n} \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) =\lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N}^{N}w_i\, w_{i+m_1}\, w_{i+m_2}\dots w_{i+m_{n-1}}. \end{equation} Note that one can also work with one-sided averages. The limit exists due to the (unique) ergodicity of the subshift $\mathbb{X}_{\mathrm{RS},2} $ via Birkhoff's ergodic theorem. 
Indeed, one has \begin{equation} \label{eq:def_eta_integral} \lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N}^{N}w_i\, w_{i+m_1}\, w_{i+m_2}\dots w_{i+m_{n-1}} \ = \ \int_{\mathbb{X}_{\mathrm{RS},2}} x_{_0}\, x_{m_1}\, x_{m_2}\dots x_{m_{n-1}} \,\mathrm{d} \mu(x) , \end{equation} where $\mu$ denotes the unique shift-invariant probability measure on the RS shift space, which is the \emph{patch frequency measure} of the subshift. In the same way, we define the signed $n$-point correlation function $\vartheta^{(n)}$, which is well defined for the same reasons as above, by \begin{equation} \label{eq:theta-def} \vartheta^{(n)}(m_1,m_2,\dots,m_{n-1}) =\lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N}^{N}(-1)^{i} w_i\, w_{i+m_1}\, w_{i+m_2}\dots w_{i+m_{n-1}}. \end{equation} Both $\eta^{(n)}$ and $\vartheta^{(n)}$ possess several symmetries which play a crucial role in our further investigation. Due to the commutativity of the standard multiplication, $\eta^{(n)}$ and $\vartheta^{(n)}$ are invariant under permutations of their arguments. For every permutation $\sigma \in S_{n-1}$, one has \[\eta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \eta^{(n)}(m_{\sigma(1)},m_{\sigma(2)},\dots,m_{\sigma(n-1)}), \] \[\vartheta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \vartheta^{(n)}(m_{\sigma(1)},m_{\sigma(2)},\dots,m_{\sigma(n-1)}). \] Further, in \eqref{eq:def_corr_n}, one can shift the summation index and omit several terms. Indeed, for example, for $\eta^{(n)}$ one has for any fixed $t>0$ (the case $t<0$ is analogous) \begin{align*} \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) {}&=\lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N}^{N}w_i\, w_{i+m_1}\, w_{i+m_2}\dots w_{i+m_{n-1}} \\ {}&=\lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N-t}^{N-t}w_{i+t}\, w_{i+t+m_1}\, \dots w_{i+t+m_{n-1}}\\ {}&= \lim_{N\to \infty}\myfrac{1}{2N+1}\left( \sum_{i=-N-t}^{-N+t-1}w_{i+t}\, w_{i+t+m_1}\, \dots w_{i+t+m_{n-1}} \right. \\ {}& \qquad \qquad \qquad \qquad \left.
+ \sum_{i=-N+t}^{N-t}w_{i+t}\, w_{i+t+m_1}\, \dots w_{i+t+m_{n-1}}\right) \\ {}&= \ 0 + \lim_{N\to\infty}\myfrac{1}{2N-2|t|+1}\sum_{i=-N+t}^{N-t}w_{i+t}\, w_{i+t+m_1}\, \dots w_{i+t+m_{n-1}}. \end{align*} In the calculation, we used that the first sum consists of at most $2|t|+1$ summands of modulus one for all $N$, and that $\myfrac{2N-2|t|+1}{2N+1} \xrightarrow{N \to \infty} 1$. This property of the correlation function becomes more transparent if one tracks all relative positions of the elements of the sums, i.e., if one includes the 0 term. That is, we write \[\langle 0, m_1,m_2,\dots,m_{n-1} \rangle \mathrel{\mathop:}= \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) . \] Now, the symmetry described above is nothing but a translation symmetry within $\mathbb{Z}^n$ in the direction $(1,1,\dots, 1)$, so \[ \langle 0, m_1,m_2,\dots,m_{n-1} \rangle = \langle \ell, m_1+\ell,m_2+\ell,\dots,m_{n-1}+\ell \rangle \quad \mbox{for every } \ell \in \mathbb{Z}. \] To rewrite the last expression again as a correlation function, one needs to obtain at least one zero in the expression $\langle \ell, m_1+\ell,m_2+\ell,\dots,m_{n-1}+\ell \rangle $. This corresponds to the choices $\ell \in \{0,-m_1, -m_2, \dots, - m_{n-1} \}$. Then, we obtain a ``shadow'' of this $\mathbb{Z}^n$-translation in the coordinate space $\mathbb{Z}^{n-1}$, namely \[ \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \eta^{(n)}(-m_1,m_2-m_1,\dots,m_{n-1}-m_1). \] Together with the permutation-invariance of the correlation functions, we can attach to each point its orbit under these two symmetries. It turns out that there are at most $n \cdot (n-1)! = n!$ elements in each orbit. Moreover, these two symmetries permit a restriction to non-negative arguments, namely to $\mathbb{Z}^{n-1}_{\geqslant 0}$. We show that every element of $\mathbb{Z}^{n-1}\backslash \mathbb{Z}^{n-1}_{\geqslant 0}$ lies in the orbit of an element from $\mathbb{Z}^{n-1}_{\geqslant 0}$.
Let $(m_1,m_2,\dots,m_{n-1}) \in \mathbb{Z}^{n-1}\backslash \mathbb{Z}^{n-1}_{\geqslant 0}$ and let us assume that $|m_1|\geqslant |m_2| \geqslant \dots \geqslant |m_{n-1}|$ with $m_1<0$. This can always be achieved via a suitable permutation. Then, since \[ \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \eta^{(n)}(-m_1,m_2-m_1,\dots,m_{n-1}-m_1), \] $(-m_1,m_2-m_1,\dots,m_{n-1}-m_1)$ is a vector with non-negative entries that can be used for the calculations. Since we consider a word over the binary alphabet $\{-1,1\}$, we also get the so-called \emph{cancellation property}. Because $w_i^2=1$ for every $i$, we have for any $n\geqslant2$ \[ \eta^{(n+2)}(m_1,m_2,\dots,M,\dots,M,\dots,m_{n-1}) =\eta^{(n)}(m_1,m_2,\dots,m_{n-1}), \] so a reduction is possible whenever two entries agree. Let us summarise as follows. \begin{prop} \label{prop:symmetries} Let $\boldsymbol{w}$ be a bi-infinite word over the alphabet $\mathcal{A}$ and let $\eta^{(n)}$ and $\vartheta^{(n)}$ denote the $n$-point correlation functions of the word $\boldsymbol{w}$ according to \eqref{eq:def_eta_integral} and \eqref{eq:theta-def}. Further, let us suppose that $\eta^{(n)}$ and $\vartheta^{(n)}$ are well defined. Then, they possess the following symmetries. \begin{enumerate} \item Invariance under the action of $S_{n-1}$, namely for every $\sigma \in S_{n-1}$ \[\eta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \eta^{(n)}(m_{\sigma(1)},m_{\sigma(2)},\dots,m_{\sigma(n-1)}), \] \[\vartheta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \vartheta^{(n)}(m_{\sigma(1)},m_{\sigma(2)},\dots,m_{\sigma(n-1)}). \] \item Invariance under a higher-dimensional translation, namely \[ \eta^{(n)}(m_1,m_2,\dots,m_{n-1}) = \eta^{(n)}(-m_1,m_2-m_1,\dots,m_{n-1}-m_1), \] \[ \vartheta^{(n)}(m_1,m_2,\dots,m_{n-1}) = (-1)^{m_1}\,\vartheta^{(n)}(-m_1,m_2-m_1,\dots,m_{n-1}-m_1). \] \end{enumerate} The orbit of any point in $\mathbb{Z}^{n-1}$ under these two operations has at most $n!$ elements.
If the word $\boldsymbol{w}$ is over the binary alphabet $\{-1,1\}$, the correlation functions with $n\geqslant2$ have the \emph{cancellation property}, i.e., \[ \eta^{(n+2)}(m_1,m_2,\dots,M,\dots,M,\dots,m_{n-1}) =\eta^{(n)}(m_1,m_2,\dots,m_{n-1}), \] \[ \pushQED{\qed} \vartheta^{(n+2)}(m_1,m_2,\dots,M,\dots,M,\dots,m_{n-1}) =\vartheta^{(n)}(m_1,m_2,\dots,m_{n-1}). \qedhere \popQED \] \end{prop} \begin{remark} For the autocorrelation function $\eta^{(2)}$, the second property is nothing but a~mirror symmetry with respect to the origin. In other words, \[ \eta^{(2)}(m) {} = {} \eta^{(2)}(-m) \] holds for every $m \in \mathbb{Z}$. This mirror symmetry is generally not satisfied for higher-order correlation functions. $\Diamond$ \end{remark} \section{Renormalisation relations} This section aims to present the renormalisation structure of the correlation functions of the RS sequence and investigate their behaviour, based on the recursive relation \eqref{eq:renorm_letters} and the ergodicity of the subshift. The renormalisation equations can be derived by splitting the summation into four terms, one for each residue class of the summation index modulo 4. We demonstrate this procedure in one particular case; its generalisation is then obvious. Let us start with the three-point correlation function $\eta^{(3)}$. We wish to determine its value at $(4m_1+2,4m_2+3)$ using the values of $\eta^{(3)}$ and $\vartheta^{(3)}$ at $(m_1+r_1,m_2+r_2)$ with $r_i\in\{0,1,2,3\}$. \begin{align*} \eta^{(3)}(4m_1 +&{}2,4m_2 +3) = \lim_{N\to \infty}\myfrac{1}{2N+1} \sum_{i=-N}^{N}w_i\, w_{i+4m_1+2}\, w_{i+4m_2+3} \\ {}&=\lim_{N\to \infty}\myfrac{1}{8N\! + \!1} \sum_{i=-4N}^{4N}w_i\, w_{i+4m_1+2}\, w_{i+4m_2+3} \\ {}&= \begin{multlined}[t] \lim_{N\to \infty}\myfrac{1}{8N\! + \!1} \sum_{j=-N}^{N}\left(w_{4j}\, w_{4j+4m_1+2}\, w_{4j+4m_2+3} +w_{4j+1}\, w_{4j+4m_1+3}\, w_{4j+4m_2+4}\right. \\ \left.
+ w_{4j+2}\, w_{4j+4m_1+4}\, w_{4j+4m_2+5} +w_{4j+3}\, w_{4j+4m_1+5}\, w_{4j+4m_2+6} \right) \end{multlined}\\ {}&=\begin{multlined}[t] \lim_{N\to \infty}\myfrac{1}{8N\! + \!1} \sum_{j=-N}^{N}\!\bigl(\!-\!(-1)^{m_1+m_2}w_{j} w_{j+m_1}w_{j+m_2}\! -\!(-1)^{m_1+j}w_{j} w_{j+m_1} w_{j+m_2+1} \\ + (-1)^{j}w_{j}\, w_{j+m_1+1}\, w_{j+m_2+1} + (-1)^{m_2}w_{j}\, w_{j+m_1+1}\, w_{j+m_2+1} \bigr) \end{multlined}\\ {}&=\begin{multlined}[t] \myfrac{1}{4}\left(-(-1)^{m_1+m_2}\eta^{(3)}(m_1,m_2)-(-1)^{m_1} \vartheta^{(3)}(m_1,m_2\!+\!1)+\vartheta^{(3)}(m_1\!+\!1,m_2\!+\!1) \right.\\ \left. +(-1)^{m_2}\eta^{(3)}(m_1\!+\!1,m_2\!+\!1)\right). \end{multlined} \end{align*} This calculation reveals a general (and useful!) fact which holds for an arbitrary correlation function of the RS sequence. Namely, independently of the left-hand side of the renormalisation equation, the right-hand side only contains arguments of the form $m_i$ or $m_i+1$ (and never $m_i+2$ or $m_i+3$). This observation is a simple consequence of the recursion \eqref{eq:renorm_letters}, but it has far-reaching consequences. The renormalisation equations form an infinite set of linear equations in infinitely many variables which can be split into two parts --- the \emph{self-consistent} part and the \emph{recursive} part. The former consists of the equations that cannot be simplified further via the renormalisation equations; it closes on itself, and the above observation shows that it is \emph{finite}. The recursive part of the renormalisation equations is then entirely determined by the solution of the self-consistent part. This shows that the dimension of the solution space of the renormalisation equations is \emph{finite}. Let us start with the self-consistent part of the renormalisation equations for $n=2$ and $n=3$. It turns out that these two solutions are sufficient for determining the correlation functions for arbitrary $n$.
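Both the letter recursion \eqref{eq:renorm_letters} and the per-term identities used in this computation can be verified numerically. The following sketch assumes the standard characterisation of the one-sided binary RS word, $w_n = (-1)^{\#\{11\text{-blocks in the binary expansion of }n\}}$, which matches the fixed point under the coding $a=1$, $b=-1$; it checks the recursion and the identity behind the first summand above.

```python
# Sketch: check the letter recursion (eq:renorm_letters) and the per-term
# identity used for the first summand of the eta^(3) computation above,
#   w_{4j} w_{4j+4m1+2} w_{4j+4m2+3} = -(-1)^(m1+m2) w_j w_{j+m1} w_{j+m2}.
# We assume the standard one-sided normalisation of the binary RS word,
# w_n = (-1)^(number of occurrences of "11" in the binary expansion of n),
# which matches the fixed point under the coding a = +1, b = -1.

def w(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[i:i + 2] == '11' for i in range(len(b) - 1))

# recursion: w_{4m+l} = w_m for l in {0,1}, and (-1)^(m+l) w_m for l in {2,3}
for m in range(256):
    for ell in range(4):
        sign = (-1) ** (m + ell) if ell >= 2 else 1
        assert w(4 * m + ell) == sign * w(m)

# per-term identity behind the first summand of the renormalisation step
for j in range(64):
    for m1 in range(8):
        for m2 in range(8):
            lhs = w(4 * j) * w(4 * j + 4 * m1 + 2) * w(4 * j + 4 * m2 + 3)
            rhs = -(-1) ** (m1 + m2) * w(j) * w(j + m1) * w(j + m2)
            assert lhs == rhs
print('recursion and per-term identity verified on the sampled range')
```

The remaining three summands can be checked in exactly the same way.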
Setting $m_i=0$ for all $i$ on the RHS of the renormalisation equations leads to the self-consistent part with arguments $(0),(1)$, and $(0,0), (0,1), (1,1)$, respectively. For the self-consistent part of the 2-point correlation function, one obtains \begin{align*} \label{eq:s_c_2point} \eta^{(2)}(0) &= \eta^{(2)}(0), & \vartheta^{(2)}(0) &= 0, \\ \eta^{(2)}(1) &= \myfrac{1}{4}\left( \vartheta^{(2)}(0) - \vartheta^{(2)}(1)\right), & \vartheta^{(2)}(1) &= \myfrac{1}{4}\left( -\vartheta^{(2)}(0) + \vartheta^{(2)}(1)\right). \end{align*} The dimension of the solution space is one, and the unique solution is fully determined by $\eta^{(2)}(0)$. This value can be calculated directly from the definition, which gives $\eta^{(2)}(0) =1$. These results are well known and can be found together with the set of all renormalisation equations for the autocorrelation function in \cite[Sec. 10.2.]{TAO}. We state the result as follows. \begin{lemma} The autocorrelation coefficients $\eta^{(2)}$ of the signed Dirac comb of the binary Rudin--Shapiro sequence exist for all $m\in\mathbb{Z}$ and are given by $\eta^{(2)}(m) = \delta_{m,0}$. \qed \end{lemma} This result excludes the RS sequence from being pure point diffractive, due to Wiener's criterion \cite[Prop. 8.9.]{TAO}. Employing the explicit relation for $\eta^{(2)}(m)$, we can derive the autocorrelation measure $\gamma = \sum_{m\in \mathbb{Z}} \eta^{(2)}\!(m)\,\delta_m$, whose Fourier transform $\widehat{\gamma}$ is the Lebesgue measure. This shows that the balanced RS word possesses a purely absolutely continuous diffraction spectrum. It is perhaps unexpected that, on the level of autocorrelation coefficients, there is no difference between the RS sequence (a fixed point of a primitive substitution) and a binary Bernoulli sequence in $\{\pm1\}$ with equal probabilities. Therefore, it is worth studying the higher-order correlations to understand the differences between these two structures. For the Bernoulli sequence, all correlations vanish.
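A finite-size estimate illustrates the lemma. The sketch below approximates $\eta^{(2)}(m)$ from a prefix of length $2^{16}$ of the one-sided binary RS word; the normalisation $w_n = (-1)^{\#\{11\text{-blocks in the binary expansion of }n\}}$ is an assumption on our part, matching the coding $a=1$, $b=-1$.

```python
# Sketch: finite-size estimate of the autocorrelation eta^(2)(m) from a
# prefix of the one-sided binary RS word, using the standard normalisation
# w_n = (-1)^(number of "11" occurrences in the binary expansion of n).

def w(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[i:i + 2] == '11' for i in range(len(b) - 1))

N = 1 << 16
word = [w(n) for n in range(N + 8)]

def eta2(m):
    return sum(word[i] * word[i + m] for i in range(N)) / N

print([round(eta2(m), 3) for m in range(5)])
# eta2(0) = 1 holds exactly; the remaining estimates are close to 0
```

The estimate at $m=0$ is exact even at finite size, since every summand equals $w_i^2 = 1$.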
We expect this to be different for the RS sequence. Let us move to the case of 3-point correlations. The complete set of renormalisation equations can be found in the Appendix. The self-consistent part of this system simply reads \begin{align*} \eta^{(3)}(0,0) {}&= \myfrac{1}{2}\, \eta^{(3)}(0,0) , \\ \eta^{(3)}(0,1) {}&= \myfrac{1}{4} \left(\eta^{(3)}(0,0) + \eta^{(3)}(0,1)\right), \\ \eta^{(3)}(1,1) {}&= \myfrac{1}{4} \left(2\eta^{(3)}(0,0) + \vartheta^{(3)}(0,0) - \vartheta^{(3)}(1,1) \right), \\ \vartheta^{(3)}(0,0) {}&= 0, \\ \vartheta^{(3)}(0,1) {}&= \myfrac{1}{4} \left(\eta^{(3)}(0,0)- 2\vartheta^{(3)}(0,0) - \eta^{(3)}(0,1) \right), \\ \vartheta^{(3)}(1,1) {}&= \myfrac{1}{4} \left(\vartheta^{(3)}(0,0) + \vartheta^{(3)}(1,1)\right), \end{align*} and has the trivial solution $\eta^{(3)}(k,\ell) = \vartheta^{(3)}(k,\ell) = 0$ only, as one can easily see. Thus, we can profit from these two results and the properties of the correlation functions as follows. \begin{prop} The $n$-point correlation functions $\eta^{(n)}, \vartheta^{(n)}$ of the signed Dirac comb of the binary Rudin--Shapiro sequence exist for all $n\in\mathbb{N}$. They are all determined by the value of $\eta^{(n)}(0,\dots,0)$. For odd $n$, the correlation functions $\eta^{(n)}, \vartheta^{(n)}$ vanish. \begin{proof} The existence follows from the unique ergodicity of the system and Birkhoff's ergodic theorem. The calculation above shows that the correlation functions in the self-consistent part of the renormalisation equations for 3-point correlations vanish. The recursive structure of the problem implies that all three-point correlation functions are zero. Consider now any $(2k+1)$-point correlation function and any point $x \in \mathbb{Z}^{2k}$ which belongs to the self-consistent part of the renormalisation equations. As discussed above, we have $x\in \{0,1\}^{2k}$.
From the pigeonhole principle and the cancellation property, it follows that the self-consistent part of any correlation function can always be reduced to the 2- or 3-point one, which gives the one-dimensional solution space and, moreover, implies that the odd-$n$ correlation functions vanish. \end{proof} \end{prop} As a direct consequence of the previous proof, we obtain the values of the correlation functions $\eta^{(n)}, \ \vartheta^{(n)}$ on the vertices of the $(n{-}1)$-dimensional hypercube. \begin{coro} Let $(r_1, \ r_2, \dots, r_{n-1}) \in \{0,1\}^{n-1}$ and denote $r = r_1 +r_2 + \cdots + r_{n-1}$. Then, for the correlation functions $\eta^{(n)},\ \vartheta^{(n)}$ of the binary Rudin--Shapiro word, one has \begin{equation} \eta^{(n)}(r_1, \ r_2, \dots, r_{n-1}) = \left\{\begin{array}{rl} 0, & \mbox{ if $n$ or $r$ is odd}, \\[5pt] 1, & \mbox{ if $n$ and $r$ are even}, \end{array} \right. \end{equation} and $\vartheta^{(n)}(r_1, \ r_2, \dots, r_{n-1}) = 0$ for all $n$ and for all \/ $r_i$. \qed \end{coro} We can exploit the underlying idea of the proof even further. Namely, we can generalise the formula above to arbitrary binary sequences with weights $\pm 1$. \begin{prop} Let $\boldsymbol{w}$ be a (bi-)infinite word over the binary alphabet $\{\pm1\}$, and assume that the letter frequencies exist. Let the correlation functions $\eta^{(n)}$ exist for arbitrary $n$. Let further $(r_1, \ r_2, \dots, r_{n-1}) \in \{0,1\}^{n-1}$ and set $r = r_1 +r_2 + \cdots + r_{n-1}$. Then, the correlation function $\eta^{(n)}$, $n\geqslant3$, can only attain three values on these vertices, namely \begin{equation} \eta^{(n)}(r_1, \ r_2, \dots, r_{n-1}) = \left\{\begin{array}{rl} \nu_{1}-\nu_{-1}, & \mbox{ if $n$ is odd}, \\[5pt] \eta^{(2)}(0), & \mbox{ if $n$ and $r$ are even},\\[5pt] \eta^{(2)}(1), & \mbox{ if $n$ is even and $r$ is odd}, \end{array} \right. \end{equation} where $\nu_{a}$ denotes the letter frequency of the letter $a$ in the word $\boldsymbol{w}$.
\begin{proof} Let us start with $n$ even. The pigeonhole principle implies that one can profit from the cancellation property. Namely, the summands in the defining sum \eqref{eq:def_corr_n} can be reduced to one of the following products --- in case $r$ is even, $w_iw_i$ or $w_{i+1}w_{i+1}$, and $w_i w_{i+1}$ otherwise. The resulting limits then give $\eta^{(2)}(0)$ in the first two cases and $ \eta^{(2)}(1)$ in the last one. If $n$ is odd, the products can be reduced to one of the following four triples: $w_iw_iw_i$, $w_{i+1}w_{i+1}w_{i+1}$, $w_{i+1}w_{i+1}w_{i}$ and $w_iw_i w_{i+1}$. Nevertheless, we can once again use the fact that $w_i^2=1$ and reduce the products to either $w_i$ or $w_{i+1}$. Then, we get $\eta^{(n)}(r_1, \ r_2, \dots, r_{n-1}) = \lim_{N\to \infty}\frac{1}{2N+1}\sum_{i=-N}^{N}w_i= \lim_{N\to \infty}\frac{1}{2N+1}\sum_{i=-N}^{N}w_{i+1} = \nu_{1}-\nu_{-1}$, which completes the proof. \end{proof} \end{prop} We have already determined the values of an arbitrary correlation function $\eta^{(n)}$ in the case of balanced weights, i.e., $w_i\in\{\pm 1\}$ for all $i\in \mathbb{Z}$. Now, we can extend the result to the $n$-point correlation functions of the RS word with general weights given by an arbitrary weight function $f : \{\pm1 \} \to \mathbb{R}\ts$. The $n$-point correlation function with weight $f$ is defined as \[ \eta_f^{(n)}(m_1,\dots, m_{n-1}) \mathrel{\mathop:}= \int_{\mathbb{X}_{\mathrm{RS},2}} f(x_0)f(x_{m_1}) \dots f(x_{m_{n-1}}) \,\mathrm{d} \mu(x). \] \noindent Further, we can follow the steps in \cite[Sec. 5]{BC} and define two values playing a key role in the description of $\eta_f^{(n)}$, namely the expectation \[ \mathbb{E}(f) = \int_{\mathbb{X}_{\mathrm{RS},2}} f(x_0)\,\mathrm{d} \mu(x) = \myfrac{f(1)+f(-1)}{2}, \quad \mbox{and}\quad h_f \mathrel{\mathop:}= \myfrac{f(1)-f(-1)}{2}.
\] Rephrasing all arguments given by Baake and Coons, which hold not only for the Thue--Morse word but for general binary ones with $\nu_{1}-\nu_{-1}=0$ as well (see Proposition~5.1 and the following discussion in \cite{BC}), one obtains the desired result. \begin{theorem} For any $n\geqslant2$ and for any $f : \{\pm1 \} \to \mathbb{R}\ts$, the $n$-point correlation function of the $f$-weighted binary Rudin--Shapiro word can be calculated from the balanced correlations $\eta^{(n)}$. In particular, the functions $\eta_f^{(n)}$ are determined by the single value $\eta^{(n)}(0,\dots,0)$. \qed \end{theorem} \section{Matrix representation} In the previous section, we introduced the renormalisation equations and (in the Appendix) gave the list for 3- and 4-point correlation functions. It would be helpful to derive a general formula for an arbitrary renormalisation equation for arbitrary $n$, as Baake and Coons did in~\cite{BC} for the Thue--Morse sequence, where they also introduced a suitable matrix formalism. In what follows, we carry out the same procedure for the RS case. For better readability, we omit the upper index in the notation. Then, the matrix version of the renormalisation equations reads \begin{equation*} \begin{pmatrix} \eta(4m) \\ \eta(4m+1) \\ \eta(4m+2) \\ \eta(4m+3) \\ \vartheta(4m) \\ \vartheta(4m+1) \\ \vartheta(4m+2) \\ \vartheta(4m+3) \end{pmatrix} = \myfrac{1}{4} \begin{pmatrix} 2+2(-1)^m & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1-(-1)^m & 0 & 0 & 0 & (-1)^m & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1+(-1)^m & 0 & 0 & -(-1)^m & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1-(-1)^m & 0 & 0 & 0 & -(-1)^m & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2(-1)^m & 2 & 0 & 0 \\ 0 & 1+(-1)^m & 0 & 0 & -(-1)^m & 1 & 0 & 0 \\ \end{pmatrix} \begin{pmatrix} \eta(m) \\ \eta(m+1) \\ \eta(m+2) \\ \eta(m+3) \\ \vartheta(m) \\ \vartheta(m+1) \\ \vartheta(m+2) \\ \vartheta(m+3) \end{pmatrix}.
\end{equation*} This description has the disadvantage that one has to deal with the \emph{non-locality} of our substitution, which shows up here via the terms containing $(-1)^m$. Therefore, we must double the dimension and treat the odd and even positions separately. To do so, let us introduce $\boldsymbol{\eta}(8m+r) \in \mathbb{R}\ts^{8+8}$ with entries \[ \boldsymbol{\eta}(8m+r)_k = \left\{\begin{array}{rcl} \eta(8m+r+i), & \mathrm{if} & k=i, \\[10pt] \vartheta(8m+r+i), & \mathrm{if} & k=8+i, \end{array} \right. \] \noindent for $i\in\{0,1,\dots,7\}$. Then, we can rewrite the renormalisation equations for the autocorrelation as \[\boldsymbol{\eta}(8m+r) = \boldsymbol{B}_{(r)} \ \boldsymbol{\eta}(m). \] To proceed further, we introduce a set of eight integer matrices $M_{(i,j)}$ and $N_{(i,j)}$ of size $8\times 8$, which will later naturally appear in the description of the matrices $\boldsymbol{B}$. \begin{align*} M_{(0,0)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, & \qquad M_{(0,1)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \end{align*} \begin{align*} M_{(1,0)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, & \qquad M_{(1,1)} = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 &
0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \end{align*} \begin{align*} N_{(0,0)} = \begin{pmatrix} 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, & \qquad N_{(0,1)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0& -1 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \end{align*} \begin{align*} N_{(1,0)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 &-1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, & \qquad N_{(1,1)} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0& 1 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \\ \end{align*} Using these matrices, the matrix $\boldsymbol{B}_{(0)} \in \mathrm{Mat}(16,\mathbb{Q})$ decomposes into a block matrix, \begin{equation} 4\boldsymbol{B}_{(0)} =\begin{pmatrix}N_{(0,0)} & N_{(0,1)} \\[10pt] N_{(1,0)} & N_{(1,1)} \end{pmatrix} + \begin{pmatrix}M_{\tau(0,0)} & M_{\tau(0,1)} \\[10pt] M_{\tau(1,0)} & M_{\tau(1,1)} \end{pmatrix}, \end{equation} where $\tau$ is defined for every $(i,j)\in\{0,1\}^2$ as $\tau(i,j) = (i+1,j+1) \mod 2$. 
All remaining matrices $\boldsymbol{B}_{(r)}$ can be obtained from $\boldsymbol{B}_{(0)}$ via a power of a suitable transformation $\mathcal{J}$. This mapping naturally encodes the rearrangement of the renormalisation equations done by going from $\boldsymbol{\eta}(8m)$ to $\boldsymbol{\eta}(8m+1)$. It shifts the $i$-th row of the matrix to the position $i-1$, and the first row becomes the last one, with a column shift. In terms of matrices, we can rewrite it as follows \begin{equation} \label{eq:transf_J} \mathcal{J}(X) = R_1\cdot X + L_7\cdot X\cdot S \quad \mbox{for any $X\in\mathrm{span}\{M_{(i,j)},N_{(i,j)}\}$}, \end{equation} with $S$ being the permutation matrix (in the column notation, meaning that $s_{ij} = 1$ iff $j=\pi(i)$) of the permutation $\pi=(1\,3\,5\,7)(2\,4\,6\,8)$ and for $m,n \leqslant 7$ \[\bigl(R_m\bigr)_{i,\,j} = \delta_{i,\, j-m}, \qquad \bigl(L_n\bigr)_{i,\,j} = \delta_{i-n,\,j}. \] Note that $S^4 = \mathrm{Id}$, and that the matrices $R_1$ and $L_7$ are of ranks $7$ and $1$, respectively, implying $R_1^8 =O$ and $L_7^2=O$ with $O$ standing for the zero matrix. These properties enable one to explicitly write down the powers of $\mathcal{J}$. \begin{remark} For the transformation $\mathcal{J}$ defined in \eqref{eq:transf_J}, it is not hard to show that \[\mathcal{J}^k(X) = R_{k}\cdot X + L_{8-k}\cdot X \cdot S \] holds for every $k\in\{1,2,\dots 7\}$ and for every $X\in\mathrm{span}\{M_{(i,j)},N_{(i,j)}\}$ with matrices $R_m$ and $L_n$ defined above.
$\Diamond$ \end{remark} With this transformation, we can get the desired matrices $\boldsymbol{B}_{(r)}$ for $r\in\{1,2,\dots 7\}$ as \begin{equation} 4\boldsymbol{B}_{(r)} = 4\mathcal{J}^r\boldsymbol{B}_{(0)} = \begin{pmatrix}\mathcal{J}^r(N_{(0,0)}) & \mathcal{J}^r(N_{(0,1)}) \\[10pt] \mathcal{J}^r(N_{(1,0)}) & \mathcal{J}^r(N_{(1,1)}) \end{pmatrix} + \begin{pmatrix}\mathcal{J}^r(M_{\tau(0,0)}) & \mathcal{J}^r(M_{\tau(0,1)}) \\[10pt] \mathcal{J}^r(M_{\tau(1,0)}) & \mathcal{J}^r(M_{\tau(1,1)}) \end{pmatrix}. \end{equation} In order to describe the matrices for the higher-order correlation functions, we have to generalise the notion of vectors $\boldsymbol{\eta}$. In particular, for 3-point correlations, we employ the vectors $\boldsymbol{\eta}(8m_1+r_1, 8m_2+r_2) \in\mathbb{R}\ts^{8^2+8^2}$ with entries \begin{equation} \label{eq:eta_def} \boldsymbol{\eta}(8m_1+r_1,\ 8m_2+r_2)_k = \left\{\begin{array}{rcl} \eta(8m_1+r_1+i,\ 8m_2+r_2+j), & \mathrm{if} & k=8i+j, \\[10pt] \vartheta(8m_1+r_1+i,\ 8m_2+r_2+j), & \mathrm{if} & k=64+8i+j. \end{array} \right. \end{equation} Thus, the renormalisation equations for the 3-point correlations can be rewritten as \[\boldsymbol{\eta}(8m_1+r_1,\ 8m_2+r_2) = \boldsymbol{B}_{(r_1,r_2)} \ \boldsymbol{\eta}(m_1,\ m_2). \] We describe $\boldsymbol{B}_{(0,0)}$ and profit from the same trick as above to get $\boldsymbol{B}_{(r_1,r_2)}$. The matrix $\boldsymbol{B}_{(0,0)}\in \mathrm{Mat}(128,\mathbb{Q})$ can be decomposed as a sum of Kronecker products of matrices $M_{(i,j)}$ and $N_{(i,j)}$. Direct calculation shows that the Klein four-group $K$ naturally appears in the structure of $\boldsymbol{B}_{(0,0)}$. Denote by $g(a,b)$ the standard action of $K$ on the tuple $(a,b)\in \{0,1\}^2$ and let $\tau \in K$ be as defined above. 
Then, we obtain the desired decomposition \begin{equation} \label{eq:matrix_dim2} 8\boldsymbol{B}_{(0,0)} = \sum_{g\in K}\begin{pmatrix}N_{g(0,0)} & N_{g(0,1)} \\[10pt] N_{g(1,0)} & N_{g(1,1)} \end{pmatrix} \otimes N_{g\tau(0,0)}+ \begin{pmatrix}M_{g(0,0)} & M_{g(0,1)} \\[10pt] M_{g(1,0)} & M_{g(1,1)} \end{pmatrix} \otimes M_{g(0,0)} . \end{equation} Then, the general matrix $\boldsymbol{B}_{(r_1,r_2)}$ with $r_i \in \{0,\dots,\ 7\}$ can be expressed as \begin{multline*} 8\boldsymbol{B}_{(r_1,r_2)} = \sum_{g\in K}\mathcal{J}^{r_1}\!\begin{pmatrix}N_{g(0,0)} & N_{g(0,1)} \\[10pt] N_{g(1,0)} & N_{g(1,1)} \end{pmatrix} \otimes \mathcal{J}^{r_2}(N_{g\tau(0,0)}) \\ +{} \mathcal{J}^{r_1}\!\begin{pmatrix}M_{g(0,0)} & M_{g(0,1)} \\[10pt] M_{g(1,0)} & M_{g(1,1)} \end{pmatrix} \otimes \mathcal{J}^{r_2}(M_{g(0,0)}) . \end{multline*} The Kronecker product structure is a consequence of our choice of the vector $\boldsymbol{\eta}$ and is not surprising. The extension of the relation \eqref{eq:eta_def} for higher-order correlations is straightforward. Unfortunately, the decomposition of the matrices $\boldsymbol{B}_{(0,\dots, 0)}$, similar to \eqref{eq:matrix_dim2}, is a difficult task, mostly due to the fact that the dimension of the matrix $\boldsymbol{B}_{(0,\dots, 0)}$ representing the $n$-point correlations is $2\cdot8^{n-1}$. Thus, in the case of 4-point correlations, we already obtain a~matrix of size $1024\times1024$. The decomposition, in this case, is still obtainable and results in \begin{multline*} 16\,\mathbf{B}_{(0,0,0)} = \sum_{g\in K}\sum_{(i,j)\in \{0,1\}^2} \begin{pmatrix}N_{g(0,0)} & N_{g(0,1)} \\[10pt] N_{g(1,0)} & N_{g(1,1)} \end{pmatrix} \otimes N_{g(i,j)} \otimes N_{(i,j)} \\ + {} \begin{pmatrix}M_{g(0,0)} & M_{g(0,1)} \\[10pt] M_{g(1,0)} & M_{g(1,1)} \end{pmatrix} \otimes M_{g(i,j)} \otimes M_{\tau(i,j)}. \end{multline*} As in the 3-point correlation case, we sum over all elements of the Klein four-group and all pairs $(i,j)\in \{0,1\}^2$.
An increase of the dimension by one leads to four times as many terms in the summation. On the other hand, there is still an open question about the general decomposition formula for the $n$-point correlation matrix $\boldsymbol{B}_{(0, \dots, \ 0)}$, which we cannot answer at present. \section{4-point correlation functions} We have already obtained an exact form of the correlation functions for 2 points and an odd number of points. The remaining cases are more complex. As discussed in the previous section, even the matrix description does not suffice to produce a general formula for the renormalisation relations for arbitrary $n$. On the other hand, for fixed $n$, we can still study the given $n$-point correlation functions without knowing the general formulas. This section focuses on the first non-trivial higher-order correlation function, namely the 4-point one. We show the asymptotic behaviour of its sums and describe the points at which the correlation function attains a given value. First, we state several facts that follow immediately from the recursive structure of the renormalisation equations and the solution of the self-consistent part. \begin{fact} The values of the $4$-point correlation function of the RS sequence form a~proper subset of the dyadic rationals. $\Diamond$ \end{fact} \begin{fact} The correlation functions fulfill \[ |\eta^{(4)}(m_1,m_2,m_3)| \leqslant 1 \quad \mbox{and} \quad |\vartheta^{(4)}(m_1,m_2,m_3)| \leqslant 1 \] for all triples $(m_1,m_2,m_3)\in\mathbb{Z}^3$. $\Diamond$ \end{fact} Now, we can start our discussion of the \emph{level sets}, i.e., those subsets of $\mathbb{Z}^3$ on which the function $\eta^{(4)}$ is constant. \begin{prop} \label{prop:level_set_1} For the $4$-point correlation function of the balanced Rudin--Shapiro sequence, we have \[ \eta^{(4)}(m_1,m_2,m_3)=1 \quad \Longleftrightarrow \quad m_1 = 0 \ \mbox{and} \ m_2=m_3, \] up to a permutation of the indices.
Moreover, $|\vartheta^{(4)}(m_1,m_2,m_3)| < 1$ holds for all triples $(m_1,m_2,m_3)\in\mathbb{Z}^3$. \begin{proof} The implication from right to left always holds, since $w_iw_iw_{i+k}w_{i+k} =1$ for any $i, k\in \mathbb{Z}$. For the converse, note that the unique ergodicity of $\mathbb{X}_{\mathrm{RS},2}$ implies the existence of a unique strictly positive translation invariant measure, the patch frequency measure. If any patch contributing $-1$ appeared in the sum in the definition \eqref{eq:def_corr_n}, it would occur with a strictly positive frequency, and the result would thus be strictly smaller than $1$. Therefore, one needs to find a~triple of indices $(m_1,m_2,m_3)\in\mathbb{Z}^{3}$ such that $w_iw_{i+m_1}w_{i+m_2}w_{i+m_3} =1$ holds for any $i\in \mathbb{Z}$. It immediately follows that $m_j=0$ for some $j$ and therefore (since the two-point correlation is non-zero only at $0$), one has $m_{\ell}-m_k=0$ for the two remaining indices. The second claim follows immediately via the same argument. \end{proof} \end{prop} We begin our further discussion of the level sets with the observation \[ \eta^{(4)}(1,2,3) = -\myfrac{1}{2}. \] The renormalisation equation (with $M = m_1+m_2+m_3$) \begin{equation} \label{eq:renorm_1} \eta^{(4)}(4m_1,4m_2,4m_3) = \myfrac{1}{2}(1+(-1)^M) \eta^{(4)}(m_1,m_2,m_3) \end{equation} gives for all $m\in\mathbb{N}_0$ \[ \eta^{(4)}\bigl(4^m(1,2,3)\bigr) = -\myfrac{1}{2}. \] Proposition \ref{prop:level_set_1} describes the $1$-level set, which we can use to obtain other coordinates $(x,y,z)$ for which $\eta^{(4)}(x,y,z) = -\tfrac{1}{2}$.
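Both parts of the proposition and the observation above are easy to probe empirically. The sketch below assumes the standard RS sign formula $w_n=(-1)^{\#\{11\text{-blocks}\}}$ and approximates the defining average by a finite sum (neither convention is restated in this section); it reproduces the exact $1$-level set and the value $-\tfrac{1}{2}$ along $4^m(1,2,3)$.

```python
def w(n):
    # Rudin-Shapiro sign: parity of overlapping "11" blocks in binary n
    return -1 if bin(n & (n >> 1)).count("1") % 2 else 1

def eta4(m1, m2, m3, N=1 << 16):
    # finite-size estimate of eta^(4)(m1, m2, m3)
    return sum(w(i) * w(i + m1) * w(i + m2) * w(i + m3) for i in range(N)) / N

print(eta4(0, 7, 7))  # exactly 1.0, already for finite N
for m in range(3):    # eta^(4)(4^m (1, 2, 3)) = -1/2
    print(eta4(4 ** m, 2 * 4 ** m, 3 * 4 ** m))
```

The $1$-level case is exact for every finite $N$ because each summand equals $1$; the $-\tfrac{1}{2}$ values carry only a small finite-size error.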
The renormalisation equation \begin{multline*} \eta^{(4)}(4m_1+1,4m_2+2,4m_3+3) = \myfrac{1}{4}\bigl(-(-1)^{m_2+m_3}\eta^{(4)}(m_1,m_2,m_3)\\ - (-1)^{m_1+m_2}\eta^{(4)}(m_1,m_2,m_3+1) -(-1)^{m_1}\eta^{(4)}(m_1,m_2+1,m_3+1) \\ + (-1)^{m_3}\eta^{(4)}(m_1+1,m_2+1,m_3+1)\bigr) \end{multline*} with the choice $m_1=0$, $m_2=m_3=\ell$ yields \begin{equation} \label{eq:level_set_1/2} \eta^{(4)} (1,4\ell+2,4\ell+3) = -\myfrac{1}{2}. \end{equation} The other possible (non-trivial) choice $m_1=-1$, $m_2=m_3=\ell$ results, after a translation and a permutation, in $\eta^{(4)} (-1,4\ell+1,4\ell+2) = \eta^{(4)} (1,4\ell+2,4\ell+3)$. Another renormalisation equation, namely, \begin{multline} \label{eq:renorm_eq_B} \eta^{(4)}(4m_1,4m_2+2,4m_3+2) = \myfrac{1}{2}\bigl((-1)^{m_2+m_3} \eta^{(4)}(m_1,m_2,m_3) \\ +(-1)^{m_1}\eta^{(4)}(m_1,m_2+1,m_3+1)\bigr) \end{multline} gives with the choice $m_1=m_2=2\ell+1$, $m_3=0$ (after a permutation) \[\eta^{(4)}(2,8\ell+4,8\ell+6) = -\myfrac{1}{2}. \] The other possible choices ($m_1=m_3=2\ell+1$, $m_2=0$, and $m_1=2\ell+1$, $m_2=2\ell$, $m_3=-1$) result in the same set of coordinates. Thus, up to now, we have found an infinite set of coordinates \begin{equation} \label{eq:level_set_1/2_2} \left\{ 2^m (1,4\ell+2,4\ell+3) \ : \ m\in\mathbb{N}_0, \ \ell\in\mathbb{Z}\right\} \end{equation} which is a subset of the $\bigl(-\tfrac{1}{2}\bigr)$-level set. We obtained the coordinates \eqref{eq:level_set_1/2_2} by studying the renormalisation equations; we used those equations that allow a decomposition of $-\tfrac{1}{2}$ of the form $-\tfrac{1}{2} = \tfrac{1}{4}(-1-1)$ in the first case, and $-\tfrac{1}{2} = \tfrac{1}{2} (-1)$ in the second one. There are other ways to decompose $-\tfrac{1}{2}$ as a quarter of a sum of terms, for example $-\tfrac{1}{2} = \tfrac{1}{4}\bigl(-1-\tfrac{1}{2}-\tfrac{1}{2}\bigr)$.
One can check that the assumption that the $\bigl(-\tfrac{1}{2}\bigr)$-level set equals \eqref{eq:level_set_1/2_2} is fully consistent with the renormalisation relations, in the sense that no other coordinates can appear in the $\bigl(-\tfrac{1}{2}\bigr)$-level set. Numerical simulations suggest that there are no other points in this level set, but a proof of this statement is still missing. We can proceed further with a scheme that describes how to obtain other infinite series of points belonging to the same level set. In order to study $\eta^{(4)}(1,2\ell_0, 2\ell_0+1)$ using the renormalisation equations, one has to treat the odd and even $\ell_0$ separately. Indeed, taking $\ell_0=2\ell_1+1$ results in \eqref{eq:level_set_1/2}. It remains to discuss the even case, i.e., $\ell_0 = 2\ell_1$. The renormalisation equation then reads \begin{equation} \label{eq:renorm_eq_A} \eta^{(4)}(1,4\ell_1,4\ell_1+1) = \myfrac{1}{4}\bigl((2+(-1)^{\ell_1})\eta^{(4)}(0,\ell_1,\ell_1)+(-1)^{\ell_1}\eta^{(4)}(1,\ell_1,\ell_1 +1) \bigr). \end{equation} To solve this equation, one has to distinguish the odd and even cases again. The process described above can be visualised as a tree as follows.
\begin{center} \begin{tikzpicture}[scale=1.8] \node (0) at (2,0) {\small$(1,2\ell_0,2\ell_0+1)$}; \node (10) at (1,-1) {$\substack{(1,4\ell_1+2,4\ell_1+3)\\[5pt] \eta^{(4)} (\cdots) = -\frac{1}{2}}$}; \draw[->] (0) -- node[midway, above right, sloped, pos=1]{\tiny$2\ell_1+1$} (10); \node (01) at (3.5,-1) {\small$(1,4\ell_1,4\ell_1+1)$}; \draw[->] (0) -- node[midway, above left, sloped, pos=0.75]{\tiny$2\ell_1$} (01); \node (20) at (2.5,-2) {$\substack{(1,8\ell_2+4,8\ell_2+5)\\[5pt] \eta^{(4)} (\cdots) = \frac{1}{4}}$}; \draw[->] (01) -- node[midway, above right, sloped, pos=1]{\tiny$2\ell_2+1$} (20); \node (02) at (5,-2) {\small$(1,8\ell_2,8\ell_2+1)$}; \draw[->] (01) -- node[midway, above left, sloped, pos=0.75]{\tiny$2\ell_2$} (02); \node (30) at (4,-3) {$\substack{(1,16\ell_3+8,16\ell_3+9)\\[5pt] \eta^{(4)} (\cdots) = \frac{5}{8}}$}; \draw[->] (02) -- node[midway, above right, sloped, pos=1]{\tiny$2\ell_3+1$} (30); \node (03) at (6.5,-3) {\small$(1,16\ell_3,16\ell_3+1)$}; \draw[->] (02) -- node[midway, above left, sloped, pos=0.75]{\tiny$2\ell_3$} (03); \node (40) at (5.5,-4) {$\substack{(1,32\ell_4+16,32\ell_4+17)\\[5pt] \eta^{(4)} (\cdots) = \frac{13}{16}}$}; \draw[->] (03) -- node[midway, above right, sloped, pos=1]{\tiny$2\ell_4+1$} (40); \node (04) at (8,-4) {$\ddots$}; \draw[->] (03) -- node[midway, above left, sloped, pos=0.75]{\tiny$2\ell_4$} (04); \end{tikzpicture} \end{center} The relation $\eta^{(4)}(1,2m+1,2m+2) = 0$ is often used in the calculation and covers the case $\eta^{(4)}(1,\ell,\ell+1)$ for $\ell $ odd. Further, we can derive a formula for the values of $\eta^{(4)}$ at each level and obtain a sequence of level sets whose correlations grow towards the maximal possible value. 
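The branch values displayed in the tree can be checked numerically. In the sketch below, which only assumes the standard RS sign formula $w_n=(-1)^{\#\{11\text{-blocks}\}}$ and approximates the correlation by a finite average (the definitions are not restated in this section), the empirical values approach $-\tfrac{1}{2}$, $\tfrac{1}{4}$, $\tfrac{5}{8}$, $\tfrac{13}{16}$, i.e., $1-3/2^{n}$.

```python
def w(n):
    # Rudin-Shapiro sign: parity of overlapping "11" blocks in binary n
    return -1 if bin(n & (n >> 1)).count("1") % 2 else 1

def eta4(m1, m2, m3, N=1 << 16):
    # finite-size estimate of eta^(4)(m1, m2, m3)
    return sum(w(i) * w(i + m1) * w(i + m2) * w(i + m3) for i in range(N)) / N

# branch values of the tree: eta^(4)(1, 2^n(2l+1), 2^n(2l+1)+1) = 1 - 3/2^n
for n in range(1, 5):
    k = 2 ** n  # the representative with l = 0
    print(n, eta4(1, k, k + 1), 1 - 3 / 2 ** n)
```

The empirical and predicted columns should agree up to a small finite-size error.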
\begin{prop} \label{prop:level_set_gen1} For the balanced Rudin--Shapiro sequence, we have \begin{align*} \eta^{(4)}\bigl(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 \bigr) \, &{}= \, 1-\myfrac{3}{2^n} \\ \eta^{(4)}(1,\,2\ell+1,\,2\ell+2) &{}= 0 \end{align*} for each $n\in\mathbb{N}$ and for $\ell \in \mathbb{Z}$. \begin{proof} The second identity can be obtained by separately treating odd and even $\ell$. We show the first claim by induction on $n$. For $n=1$ we have already proved that $\eta^{(4)}(1,4\ell+2,4\ell+3) \ =\ -\tfrac{1}{2} \ =\ 1 - \tfrac{3}{2}$. For $n=2$ we get $\eta^{(4)}(1,8\ell+4,8\ell+5)$ and using \eqref{eq:renorm_eq_A} one has $\eta^{(4)}\bigl(1,4(2\ell+1),4(2\ell+1)+1 \bigr) = \tfrac{1}{4} = 1-\tfrac{3}{4}$. Now, suppose that the claim holds for any $n'\leqslant n-1$ and recall the equation \eqref{eq:renorm_eq_A}. Then, for any $n\geqslant 3$, one obtains \begin{align*} \eta^{(4)}&\bigl(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 \bigr) = \eta^{(4)}\bigl(1,\, 4(2^{n-2}(2\ell+1)),\, 4(2^{n-2}(2\ell+1))+1 \bigr) \\[5pt] &= \myfrac{1}{4}\bigl(3\,\eta^{(4)}(0,\, 2^{n-2}(2\ell+1),\, 2^{n-2}(2\ell+1)) + \eta^{(4)} (1,\, 2^{n-2}(2\ell+1),\, 2^{n-2}(2\ell+1)+1 ) \bigr) \\[5pt] &=\myfrac{1}{4}\bigl(3+1-\myfrac{3}{2^{n-2}} \bigr) = 1-\myfrac{3}{2^n}, \end{align*} using $\eta^{(4)} (1,\, 2^{n-2}(2\ell+1),\, 2^{n-2}(2\ell+1)+1 ) = 1-\frac{3}{2^{n-2}}$, which holds by the induction hypothesis. \end{proof} \end{prop} We can profit from this result and extend the current level sets, ``doubling'' their cardinality. \begin{prop} \label{prop:level_set_gen2} For the balanced Rudin--Shapiro sequence, we have \[\eta^{(4)}\bigl(2,\, 2^{n+1}(2\ell+1),\,2^{n+1}(2\ell+1)+2 \bigr) \, = \, 1-\myfrac{3}{2^n} \] for each $n\in\mathbb{N}$ and $\ell \in \mathbb{Z}$. \end{prop} \pushQED{\qed} \noindent \textit{Proof.} We already know that $\eta^{(4)}(2,8\ell+4,8\ell+6) = -\myfrac{1}{2}$. For $n\geqslant 2$, one obtains the result with the help of \eqref{eq:renorm_eq_B} and Proposition \ref{prop:level_set_gen1}.
\begin{align*} \eta^{(4)}&\bigl(2, 2^{n+1}(2\ell+1),2^{n+1}(2\ell+1)+2 \bigr) \\ &=\myfrac{1}{2}\bigl( \eta^{(4)}\bigl(0, 2^{n-1}(2\ell+1),2^{n-1}(2\ell+1) \bigr) + \eta^{(4)}\bigl(1, 2^{n-1}(2\ell+1),2^{n-1}(2\ell+1)+1 \bigr) \bigr) \\ &= \myfrac{1}{2} \bigl(1+1- \myfrac{3}{2^{n-1}}\bigr) = 1-\myfrac{3}{2^{n}}. \qedhere \end{align*} \popQED \noindent Observe that the sum of coordinates $1+2^{n}(2\ell+1)+2^{n}(2\ell+1)+1$ is always even, and so is $2+2^{n+1}(2\ell+1)+2^{n+1}(2\ell+1)+2$. Thus, we can recall the renormalisation equation \eqref{eq:renorm_1} and get the final description of certain sets in the positive octant where the function $\eta^{(4)}$ is constant. We can further extend these sets to all of $\mathbb{Z}^3$ via Proposition \ref{prop:symmetries}. \begin{theorem} \label{thm:level_set} For every $n\geqslant 1$, the $4$-point correlation function $\eta^{(4)}$ of the Rudin--Shapiro sequence with balanced weights is constant on the set \[ \mathcal{C}_n \mathrel{\mathop:}= \left\{ 2^{m} \bigl(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 \bigr) \ \mbox{and all permutations} \ : \ m\geqslant 0, \ \ell\in\mathbb{Z} \right\}\] and takes the value \[\pushQED{\qed}\eta^{(4)}{\Big|}_{\mathcal{C}_n} \ = \ 1- \myfrac{3}{2^{n}}. \qedhere \popQED \] \end{theorem} Similar considerations as above lead to a description of the function $\vartheta^{(4)}$ evaluated at points from the set $\mathcal{C}_n$. Note that if we want to extend the results to $\mathbb{Z}^3$ in this case, the translation symmetry may add an additional minus sign (as stated in Proposition \ref{prop:symmetries}).
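The constancy asserted in Theorem \ref{thm:level_set}, including the $2^m$-multiples and the permutations of the coordinates, can be spot-checked numerically. The sketch below assumes the standard RS sign formula $w_n=(-1)^{\#\{11\text{-blocks}\}}$ and replaces the limit in the definition of $\eta^{(4)}$ by a finite average (neither is restated in this section).

```python
def w(n):
    # Rudin-Shapiro sign: parity of overlapping "11" blocks in binary n
    return -1 if bin(n & (n >> 1)).count("1") % 2 else 1

def eta4(m1, m2, m3, N=1 << 16):
    # finite-size estimate of eta^(4)(m1, m2, m3)
    return sum(w(i) * w(i + m1) * w(i + m2) * w(i + m3) for i in range(N)) / N

# elements of C_n for n = 2 (value 1/4) and n = 3 (value 5/8),
# including a 2^m multiple and a permutation of the coordinates
for pt, val in [((1, 12, 13), 0.25), ((2, 24, 26), 0.25),
                ((12, 1, 13), 0.25), ((1, 8, 9), 0.625), ((2, 16, 18), 0.625)]:
    print(pt, eta4(*pt), val)
```

Note that invariance under permutations of the shifts is immediate from the symmetry of the product, so the permuted point serves only as a sanity check.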
\begin{prop} For the function $\vartheta^{(4)}$ of the Rudin--Shapiro sequence with balanced weights, one has, for all $\ell \in \mathbb{Z}$, $n\in\mathbb{N}$ and $m\in \mathbb{N}_0$, \begin{align*} \pushQED{\qed} \vartheta^{(4)}\bigl(2^{m}(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 ) \bigr) &{}= \left( \frac{3}{2^{n}}-2\,\delta_{_{n,1}}\right)\delta_{_{0,m}}, \\[5pt] \vartheta^{(4)}\bigl(2^{m}(1,\, 2\ell+1,\,2\ell+2 ) \bigr)&{} = 0. \qedhere \end{align*} \popQED \end{prop} Of course, the result also holds for all permutations of the coordinates, but we do not repeat this in the upcoming propositions. It turns out that the strategy described above can be applied to various ``starting'' vectors (i.e., different from $(1,\ell,\ell+1)$), and one gets a description of the correlation functions on different infinite subsets of $\mathbb{Z}^3$. The proofs of the following propositions are technical, follow the above scheme (using suitable renormalisation equations), and profit from the results of Theorem \ref{thm:level_set}; therefore, we omit them. \begin{prop}[Starting vector $(1,\ell,\ell+2)$] \label{prop:vectors1} For the functions $\eta^{(4)}$, $\vartheta^{(4)}$ of the Rudin--Shapiro sequence with balanced weights, one has for all $\ell \in \mathbb{Z}$ \begin{align*} \pushQED{\qed} \eta^{(4)}\bigl(2^{m}(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+2 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m=0 \mbox{ and } n=1, \\[5pt] \frac{1}{4}, & \mbox{ if } m=0 \mbox{ and } n=2, \\[5pt] -\frac{3}{2^{n}}, & \mbox{ if } m=0 \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 1. \end{array} \right. \\[10pt] \vartheta^{(4)}\bigl(2^{m}(1,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+2 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=1, \\[5pt] -(-1)^m \frac{1}{4}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=2, \\[5pt] (-1)^m\frac{3}{2^{n}}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 2.
\end{array} \right. \\[10pt] \eta^{(4)}\bigl(2^{m}(1,\, 2^{n}(2\ell+1)-1,\,2^{n}(2\ell+1)+1 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m=0 \mbox{ and } n=1, \\[5pt] -\frac{1}{4}, & \mbox{ if } m=0 \mbox{ and } n=2, \\[5pt] \frac{3}{2^{n}}, & \mbox{ if } m=0 \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 1. \end{array} \right. \\[10pt] \vartheta^{(4)}\bigl(2^{m}(1,\, 2^{n}(2\ell+1)-1,\,2^{n}(2\ell+1)+1 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=1, \\[5pt] (-1)^m \frac{1}{4}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=2, \\[5pt] -(-1)^m\frac{3}{2^{n}}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 2. \qquad \qquad \qquad \quad \qedhere \end{array} \right. \end{align*} \popQED \end{prop} \begin{prop}[Starting vector $(2,\ell,\ell+1)$] \label{prop:vectors2} For the functions $\eta^{(4)}$, $\vartheta^{(4)}$ of the Rudin--Shapiro sequence with balanced weights one has for all $\ell \in \mathbb{Z}$ \begin{align*} \eta^{(4)}\bigl(2^{m}(2,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m=0 \mbox{ and } n=1, \\[5pt] \frac{1}{4}, & \mbox{ if } m=0 \mbox{ and } n= 2,\\[5pt] -\frac{3}{2^{n}}, & \mbox{ if } m=0 \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 1, \end{array} \right. \\[10pt] \vartheta^{(4)}\bigl(2^{m}(2,\, 2^{n}(2\ell+1),\,2^{n}(2\ell+1)+1 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=1, \\[5pt] -\frac{1}{4}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n= 2,\\[5pt] \frac{3}{2^{n}}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 2, \end{array} \right. 
\end{align*} \begin{align*} \pushQED{\qed} \eta^{(4)}\bigl(2^{m}(2,\, 2^{n}(2\ell+1)+1,\,2^{n}(2\ell+1)+2 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m=0 \mbox{ and } n=1, \\[5pt] -\frac{1}{4}, & \mbox{ if } m=0 \mbox{ and } n= 2,\\[5pt] \frac{3}{2^{n}}, & \mbox{ if } m=0 \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 1, \end{array} \right. \\[10pt] \vartheta^{(4)}\bigl(2^{m}(2,\, 2^{n}(2\ell+1)+1,\,2^{n}(2\ell+1)+2 ) \bigr) {}&= \left\{\begin{array}{rl} 0, & \mbox{ if } m\in\{0,1\} \mbox{ and } n=1, \\[5pt] -(-1)^{m}\frac{1}{4}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n= 2,\\[5pt] (-1)^{m}\frac{3}{2^{n}}, & \mbox{ if } m\in\{0,1\} \mbox{ and } n\geqslant 3,\\[5pt] 0, & \mbox{ if } m\geqslant 2. \qquad \qquad \qquad \quad \qedhere \end{array} \right. \end{align*} \popQED \end{prop} \begin{prop}[Starting vector $(2,\ell,\ell+2)$] \label{prop:vectors3} For the functions $\eta^{(4)}$, $\vartheta^{(4)}$ of the Rudin--Shapiro sequence with balanced weights, one has for all $\ell \in \mathbb{Z}$ and for all $m,n\in\mathbb{N}_{0}$ \begin{align*}\pushQED{\qed} \eta^{(4)}\bigl(2^{m}(2,\, 2^{n+2}(2\ell+1)+1,\,2^{n+2}(2\ell+1)+3 ) \bigr) {}&= -\myfrac{1}{2} + \myfrac{3}{2^{n+2}}, \\[5pt] \vartheta^{(4)}\bigl(2^{m}(2,\, 2^{n+2}(2\ell+1)+1,\,2^{n+2}(2\ell+1)+3 ) \bigr) {}&= \delta_{_{m,0}}\bigl(-\myfrac{1}{2} + \myfrac{3}{2^{n+2}} \bigr), \\[5pt] \eta^{(4)}\bigl(2^{m}(2,\, 2^{n+2}(2\ell+1)-1,\,2^{n+2}(2\ell+1)+1 ) \bigr) {}&= -\myfrac{1}{2} + \myfrac{3}{2^{n+2}}, \\[5pt] \vartheta^{(4)}\bigl(2^{m}(2,\, 2^{n+2}(2\ell+1)-1,\,2^{n+2}(2\ell+1)+1 ) \bigr) {}&= \delta_{_{m,0}}\bigl(-\myfrac{1}{2} + \myfrac{3}{2^{n+2}} \bigr). \qedhere \end{align*} \popQED \end{prop} Similarly, one can continue this procedure and generate infinite series on which the correlation functions remain constant.
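Selected values from Propositions \ref{prop:vectors1}--\ref{prop:vectors3} can be verified in the same empirical fashion. The sketch below assumes the standard RS sign formula $w_n=(-1)^{\#\{11\text{-blocks}\}}$ and, as a working hypothesis (the definition is not restated in this section), that $\vartheta^{(4)}$ is the correlation average twisted by the extra factor $(-1)^i$.

```python
def w(n):
    # Rudin-Shapiro sign: parity of overlapping "11" blocks in binary n
    return -1 if bin(n & (n >> 1)).count("1") % 2 else 1

def corr4(m1, m2, m3, twist=False, N=1 << 16):
    # eta^(4) for twist=False; with twist=True, the average carries the
    # extra sign (-1)^i (our assumed convention for vartheta^(4))
    s = sum((-1) ** (i * twist) * w(i) * w(i + m1) * w(i + m2) * w(i + m3)
            for i in range(N))
    return s / N

checks = [  # (point, expected eta^(4))
    ((1, 2, 4), 0.0),    # starting vector (1, l, l+2), m = 0, n = 1
    ((1, 4, 6), 0.25),   # starting vector (1, l, l+2), m = 0, n = 2
    ((2, 4, 5), 0.25),   # starting vector (2, l, l+1), m = 0, n = 2
    ((2, 5, 7), 0.25),   # starting vector (2, l, l+2), m = n = 0: -1/2 + 3/4
]
for pt, val in checks:
    print(pt, corr4(*pt), val)
print(corr4(2, 5, 7, twist=True))  # vartheta^(4)(2, 5, 7), also 1/4
```

Under these assumed conventions, the empirical values agree with the tabulated ones up to a small finite-size error.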
Theorem \ref{thm:level_set} and Propositions \ref{prop:vectors1}, \ref{prop:vectors2}, \ref{prop:vectors3} provide infinitely many infinite arithmetic progressions in $\mathbb{Z}^3$ along which $\eta^{(4)}$ is constant and non-zero. Therefore, if we move to the averages of the correlation functions, one might expect that the averages over the cube $\{0,1,\dots, N-1\}^3$ for any $N$ cannot be arbitrarily small. In what follows, we prove the opposite, namely, that the mean of the absolute values $|\eta^{(4)}|$ vanishes asymptotically, and we conclude that the average vanishes asymptotically as well. \begin{prop} For the $4$-point Rudin--Shapiro correlation functions, one has \begin{align*} \lim_{N\to \infty} \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr| &{}= 0, \\ \lim_{N\to \infty} \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \bigl|\vartheta^{(4)}(m_1,m_2,m_3) \bigr| &{}= 0. \end{align*} \begin{proof} Set \[ \mathbf{\Sigma}(N) = \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr|, \quad \mbox{and} \quad \mathbf{\Theta}(N) = \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \bigl|\vartheta^{(4)}(m_1,m_2,m_3) \bigr|. \] We provide the following calculations for $\mathbf{\Sigma}$. The estimates for $\mathbf{\Theta}$ are analogous. First, observe that since all correlations are at most $1$ in modulus, one has \begin{align*} \mathbf{\Sigma}(4N+1) {}&= \myfrac{1}{(4N+1)^3}\sum_{0\leqslant m_i\leqslant 4N} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr|\\ {}&= \myfrac{1}{(4N+1)^3}\sum_{0\leqslant m_i\leqslant 4N-1} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr| + \myfrac{1}{(4N+1)^3}\underbrace{ \sum_{\substack{m_i =4N \\[3pt] \mbox{\scriptsize for some } i }} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr|}_{\leqslant 3(4N+1)^2} \\ {}&= \myfrac{(4N)^3}{(4N+1)^3}\ \mathbf{\Sigma}(4N) + O(N^{-1}) \end{align*} as $N \to \infty$. Similar relations hold for $ \mathbf{\Sigma}(4N+2)$ and $\mathbf{\Sigma}(4N+3)$.
Hence, if we show that $\mathbf{\Sigma}(4N) \rightarrow 0$ as $N \to \infty$, our claim follows. Thus, consider \begin{align*} \mathbf{\Sigma}(4N) {}& = \myfrac{1}{(4N)^3}\sum_{0\leqslant m_i\leqslant 4N-1} \bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr|\\[2mm] {}& = \myfrac{1}{(4N)^3}\sum_{0\leqslant m_i\leqslant N-1} \sum_{r_i \in \{ 0,1,2,3\}} \bigl|\eta^{(4)}(4m_1+r_1,4m_2+r_2,4m_3+r_3) \bigr|\\ {}& \leqslant \myfrac{1}{4(4N)^3}\left(\sum_{0\leqslant m_i\leqslant N-1} \left( 128\,\bigl|\eta^{(4)}(m_1,m_2,m_3) \bigr|+112\,\bigl|\vartheta^{(4)}(m_1,m_2,m_3) \bigr|\right) + O(N^2)\right) \\[10pt] {}& =\myfrac{1}{4^4}\bigl( 128\,\mathbf{\Sigma}(N) + 112\,\mathbf{\Theta}(N)\bigr) + O(N^{-1}). \end{align*} In the third line, we inserted the renormalisation equations and used the triangle inequality for each of them together with \[ \sum_{0\leqslant m_i\leqslant N-1} \eta^{(n)}(m_1+r_1,\dots,m_{n-1}+r_{n-1}) = \sum_{0\leqslant m_i\leqslant N-1} \eta^{(n)}(m_1,\dots,m_{n-1}) + O(N^{n-2}), \] which holds \footnote{This suffices because only terms of this form appear on the right-hand side of the renormalisation equations.} for any $(r_1,\dots,r_{n-1}) \in \{0,1\}^{n-1}$. Analogous estimates hold for $\vartheta^{(n)}$ as well. For the summatory function $\mathbf{\Theta}$, we get the following relation \[\mathbf{\Theta}(4N) \leqslant \myfrac{1}{4^4}\bigl( 112\,\mathbf{\Sigma}(N) + 128\,\mathbf{\Theta}(N)\bigr) + O(N^{-1}). \] Combining both inequalities, one gets \[\mathbf{\Sigma}(4N) + \mathbf{\Theta}(4N) \leqslant \myfrac{15}{16}\bigl(\mathbf{\Sigma}(N) + \mathbf{\Theta}(N) \bigr) + O(N^{-1}). \] Since the sequence $\mathbf{\Sigma}(N) + \mathbf{\Theta}(N)$ is bounded, iterating this inequality implies $\lim_{N\to+\infty} \bigl(\mathbf{\Sigma}(4N) + \mathbf{\Theta}(4N)\bigr) =0 $. Since $\mathbf{\Sigma}(4N)$ and $\mathbf{\Theta}(4N)$ are non-negative, we get the desired result, namely, $\lim_{N\to+\infty} \mathbf{\Sigma}(4N) =0 $ and $\lim_{N\to+\infty} \mathbf{\Theta}(4N) =0 $.
\end{proof} \end{prop} This convergence has an immediate consequence: the triangle inequality gives the desired result on the asymptotic behaviour of the mean of the coefficients. Moreover, since all coefficients are in modulus smaller than or equal to one, we obtain asymptotically vanishing means for arbitrary powers $\alpha \geqslant 1$ of the correlation functions. \begin{coro} For the $4$-point Rudin--Shapiro correlation functions and any $\alpha \geqslant 1$, one has \begin{align*}\pushQED{\qed} \lim_{N\to \infty} \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \eta^{(4)}(m_1,m_2,m_3)^{\alpha} &{}= 0, \\ \lim_{N\to \infty} \myfrac{1}{N^3} \sum_{0\leqslant m_i\leqslant N-1} \vartheta^{(4)}(m_1,m_2,m_3)^{\alpha} &{}= 0. \qedhere \end{align*} \popQED \end{coro} \begin{remark} Using the renormalisation equations without the absolute value and the triangle inequality, one could improve the estimates for the sums of the correlation functions themselves (and not their absolute values). We leave this part to the interested reader. $\Diamond$ \end{remark} Even though we showed that the average of 4-point correlations is zero, we can still recognise a difference from a random structure. The presence of infinitely many infinitely long arithmetic progressions with a constant non-zero value of $\eta^{(4)}$ suggests the presence of a certain long-range order. We show an explicit example of a structure in the original RS sequence detected by the 4-point correlation function. The $\bigl(-\tfrac{1}{2}\bigr)$-level set contains as a subset the vectors of the form $(1,4k+2,4k+3)$. It is therefore worth evaluating the 4-point correlation function at the points $(1,k,k+1)$. Proposition \ref{prop:level_set_gen1} provides their description, but we wish to have the renormalisation equations for these particular vectors at hand.
They form a~closed set of equations, namely \begin{align*} \eta^{(4)}(1,4m,4m+1) & = \myfrac{1}{4}\bigl(2+(-1)^m(1+\eta^{(4)}(1,m,m+1))\bigr),\\ \eta^{(4)}(1,4m+1,4m+2) & =0, \\ \eta^{(4)}(1,4m+2,4m+3) & = - \myfrac{1}{2}, \\ \eta^{(4)}(1,4m+3,4m+4) & =0. \end{align*} The 4-point correlation function evaluated at the point $(1,k,k+1)$ effectively measures the correlation between two pairs of consecutive points at distance $k$. Nevertheless, these pairs are nothing but elements of a fixed point of the induced two-letter substitution $\varrho^{}_{2}$ defined in \eqref{eq:subst_induced}, which can also be studied in terms of its correlation functions. The corresponding set of renormalisation equations reads \begin{align*} \eta^{}_2(4k) & = \myfrac{1}{4}\bigl(2+(-1)^k(1+\eta^{}_2(k))\bigr),\\ \eta^{}_2(4k+1) & =0, \\ \eta^{}_2(4k+2) & = - \myfrac{1}{2}, \\ \eta^{}_2(4k+3) & =0. \end{align*} To derive them, we used the relations from \eqref{eq:derived-description} and the fact that one has \[\lim_{N\to \infty} \myfrac{1}{N} \sum_{i=k}^{k+N-1}w_i = \lim_{N\to \infty} \myfrac{1}{N} \sum_{i=k}^{k+N-1}(-1)^i w_i = 0 \qquad \mbox{for all $k\in\mathbb{Z}$}. \] Comparing this set of equations for $\eta^{(4)}(1,m,m+1)$ and $\eta^{}_2 (m)$ together with the corresponding initial conditions, one can see that the functions $\eta^{(4)}(1,m,m+1)$ and $\eta^{}_2(m)$ coincide. This observation illustrates that the higher-order correlation functions at certain points can be understood as ordinary correlation functions for patches in the original sequence. In summary, we have explored the structure of higher-order correlation functions for the binary RS sequence. Using the renormalisation approach, we have shown that all odd-point correlations vanish, and for arbitrary even $n>2$, we have found a non-zero point at which the correlation does not vanish.
On the other hand, we have proved that the average of these coefficients, as well as the average of their absolute values, vanishes asymptotically. These results provide a better understanding of the statistical differences between the RS sequence and a random binary one. Further, we have given a detailed description of 4-point correlations and shown that they contain many arithmetic structures that can (and should) be studied further. One can ask what further symmetries can be found within them and how they can help us complete the description of 4-point correlations. \section*{Appendix - Renormalisation equations for 3-point correlation functions} In this appendix, we omit the upper index and write $\eta, \vartheta$ instead of $\eta^{(n)},\vartheta^{(n)}$. The number of points should be clear from the context. We include only the necessary equations. {\footnotesize \begin{align*} \eta(4m_1,4m_2) ={}& \myfrac{1}{2}\,\eta(m_1,m_2),\\ \eta(4m_1,4m_2\!+\!1) ={}& \myfrac{1}{4} \left(\eta(m_1,m_2) +(-1)^{m_2}(1-(-1)^{m_1})\vartheta(m_1,m_2) + (-1)^{m_1}\eta(m_1,m_2\!+\!1) \right),\\ \eta(4m_1,4m_2\!+\!2) ={}& \myfrac{1}{2}(-1)^{m_1}\eta(m_1,m_2+1),\\ \eta(4m_1,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_2}\vartheta(m_1,m_2) + (1+(-1)^{m_1})\eta(m_1,m_2\!+\!1)-(-1)^{m_1+m_2}\vartheta(m_1,m_2\!+\!1) \right),\\ \eta(4m_1\!+\!1,4m_2\!+\!1) ={}& \myfrac{1}{4} \left((1+(-1)^{m_1+m_2})\eta(m_1,m_2) + (-1)^{m_1+m_2} \vartheta(m_1,m_2) - \vartheta(m_1\!+\!1,m_2\!+\!1) \right),\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!2) ={}& \myfrac{1}{4} \left(-(-1)^{m_1+m_2}\eta(m_1,m_2) + (-1)^{m_2} \vartheta(m_1,m_2) \right. \\ & \hspace{5.2cm} \left. -(-1)^{m_1} \eta(m_1,m_2\!+\!1) -\vartheta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_2} \vartheta(m_1,m_2) +(-1)^{m_1} \vartheta(m_1,m_2\!+\!1) \right. \\ & \hspace{4cm} \left.
-(-1)^{m_1} \eta(m_1,m_2\!+\!1) +(-1)^{m_2}\eta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \eta(4m_1\!+\!2,4m_2\!+\!2) ={}& \myfrac{1}{2}(-1)^{m_1+m_2}\eta(m_1,m_2),\\ \begin{split} \eta(4m_1\!+\!2,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_1+m_2} \eta(m_1,m_2) -(-1)^{m_1} \vartheta(m_1,m_2\!+\!1) \right. \\ & \hspace{4cm} \left. + (-1)^{m_2}\eta(m_1\!+\!1,m_2\!+\!1) +\vartheta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \eta(4m_1\!+\!3,4m_2\!+\!3) ={}& \myfrac{1}{4} \left((-1)^{m_1+m_2}\eta(m_1,m_2) + \eta(m_1\!+\!1,m_2\!+\!1) +(1-(-1)^{m_1+m_2}) \vartheta(m_1\!+\!1,m_2\!+\!1) \right),\\ \vartheta(4m_1,4m_2) ={}& \myfrac{1}{2}(-1)^{m_1+m_2}\vartheta(m_1,m_2),\\ \vartheta(4m_1,4m_2\!+\!1) ={}& \myfrac{1}{4} \left(\eta(m_1,m_2) -(-1)^{m_2}(1+(-1)^{m_1})\vartheta(m_1,m_2) - (-1)^{m_1}\eta(m_1,m_2\!+\!1) \right),\\ \vartheta(4m_1,4m_2\!+\!2) ={}& \myfrac{1}{2}(-1)^{m_2}\vartheta(m_1,m_2),\\ \vartheta(4m_1,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_2}\vartheta(m_1,m_2) + ((-1)^{m_1}-1)\eta(m_1,m_2\!+\!1)+(-1)^{m_1+m_2}\vartheta(m_1,m_2\!+\!1) \right),\\ \vartheta(4m_1\!+\!1,4m_2\!+\!1) ={}& \myfrac{1}{4} \left((1-(-1)^{m_1+m_2})\eta(m_1,m_2) + (-1)^{m_1+m_2} \vartheta(m_1,m_2) + \vartheta(m_1\!+\!1,m_2\!+\!1) \right),\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!2) ={}& \myfrac{1}{4} \left((-1)^{m_1+m_2}\eta(m_1,m_2) + (-1)^{m_2} \vartheta(m_1,m_2) \right. \\ & \hspace{5.2cm} \left. -(-1)^{m_1} \eta(m_1,m_2\!+\!1) +\vartheta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_2} \vartheta(m_1,m_2) -(-1)^{m_1} \vartheta(m_1,m_2\!+\!1) \right. \\ & \hspace{4cm} \left. -(-1)^{m_1} \eta(m_1,m_2\!+\!1) -(-1)^{m_2}\eta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \vartheta(4m_1\!+\!2,4m_2\!+\!2) ={}& \myfrac{1}{2}\vartheta(m_1\!+\!1,m_2\!+\!1),\\ \begin{split} \vartheta(4m_1\!+\!2,4m_2\!+\!3) ={}& \myfrac{1}{4} \left(-(-1)^{m_1+m_2} \eta(m_1,m_2) +(-1)^{m_1} \vartheta(m_1,m_2\!+\!1) \right. 
\\ & \hspace{4cm} \left. - (-1)^{m_2}\eta(m_1\!+\!1,m_2\!+\!1) +\vartheta(m_1\!+\!1,m_2\!+\!1) \right), \end{split}\\ \vartheta(4m_1\!+\!3,4m_2\!+\!3) ={}& \myfrac{1}{4} \left((-1)^{m_1+m_2}\eta(m_1,m_2) - \eta(m_1\!+\!1,m_2\!+\!1) +(1+(-1)^{m_1+m_2}) \vartheta(m_1\!+\!1,m_2\!+\!1) \right).\\ \end{align*} } \section*{Appendix - Renormalisation equations for 4-point correlation functions} We denote by $M$ the sum of all indices, i.e. $M=m_1+m_2+m_3$ {\footnotesize \begin{align*} \eta(4m_1,4m_2,4m_3) ={}& \myfrac{1}{2}(1\!+\!(-1)^M) \eta(m_1,m_2,m_3),\\ \eta(4m_1,4m_2,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!-\!(-1)^M) \eta(m_1,m_2,m_3) \!+\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!-\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1)\bigr),\\ \eta(4m_1,4m_2,4m_3\!+\!2) ={}& 0,\\ \eta(4m_1,4m_2,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl(-\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!+\!(1\!+\!(-1)^M) \eta(m_1,m_2,m_3\!+\!1) \!+\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1)\bigr),\\ \eta(4m_1,4m_2\!+\!1,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!+\!(-1)^{m_2+m_3}\!+\!(-1)^M) \eta(m_1,m_2,m_3) \!+\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \begin{split} \eta(4m_1,4m_2\!+\!1,4m_3\!+\!2) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!+\! (-1)^{m_3}\vartheta(m_1,m_2,m_3) \right. \\ & \hspace{3cm} \left. -\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \!+\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1,4m_2\!+\!1,4m_3\!+\!3) ={}& \myfrac{1}{4}\left( -\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!+\!(-1)^{m_2}(1\!-\!(-1)^{m_1})\vartheta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. -\!(-1)^{m_1+m_3}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \eta(4m_1,4m_2\!+\!2,4m_3\!+\!2) ={}& \myfrac{1}{2}\bigl((-1)^{m_2+m_3} \eta(m_1,m_2,m_3) \!+\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \begin{split} \eta(4m_1,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!-\! (-1)^{m_2}\vartheta(m_1,m_2,m_3\!+\!1) \right. 
\\ & \hspace{3cm} \left. +\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!-\!(-1)^{m_1+m_3}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \eta(4m_1,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl((-1)^{m_2+m_3} \eta(m_1,m_2,m_3) \!+\!(1\!+\!(-1)^{m_1}\!+\!(-1)^M)\eta(m_1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \eta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!-\!(-1)^M) \eta(m_1,m_2,m_3) \!+\!(-1)^{M}\vartheta(m_1,m_2,m_3)\!-\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!2) ={}& \myfrac{1}{4}\big( ((-1)^{m_3}\!-\!(-1)^M)\vartheta(m_1,m_2,m_3)\!+\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \\ & \hspace{3cm} \left. - \vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!3) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!+\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. +\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \!+\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!2,4m_3\!+\!2) ={}& \myfrac{1}{4}\big((-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!+\! (-1)^M\vartheta(m_1,m_2,m_3) \\ & \hspace{3cm} \left. -\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!-\!\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!-\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. -\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!+\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1\!+\!1,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\left((-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!-\! (-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. 
+\!(-1)^{m_1}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1) \!-\!(-1)^{m_2+m_3}\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \eta(4m_1\!+\!2,4m_2\!+\!2,4m_3\!+\!2) ={}& 0,\\ \begin{split} \eta(4m_1\!+\!2,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\big(-(-1)^M\vartheta(m_1,m_2,m_3)\!+\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \\ & \hspace{3cm} \left. +\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1) \!+\!\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \eta(4m_1\!+\!2,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\big((-1)^M \vartheta(m_1,m_2,m_3) \!-\!(-1)^{m_1}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1) \\ & \hspace{3cm} \left. +\!(1\!-\!(-1)^{m_2+m_3})\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \eta(4m_1\!+\!3,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl(\!-\!(-1)^M\vartheta(m_1,m_2,m_3) \!+\!(1\!+\!(-1)^M)\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\!+\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \end{align*} } {\footnotesize \begin{align*} \vartheta(4m_1,4m_2,4m_3) ={}& 0,\\ \vartheta(4m_1,4m_2,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!-\!(-1)^M) \eta(m_1,m_2,m_3) \!-\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!+\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1)\bigr),\\ \vartheta(4m_1,4m_2,4m_3\!+\!2) ={}& \myfrac{1}{2} \bigl((-1)^{m_3}\vartheta(m_1,m_2,m_3)+(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \bigr),\\ \vartheta(4m_1,4m_2,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl(-\!(-1)^{m_3} \vartheta(m_1,m_2,m_3)\!-\!(1\!+\!(-1)^M) \eta(m_1,m_2,m_3\!+\!1) \!+\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1)\bigr),\\ \vartheta(4m_1,4m_2\!+\!1,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!-\!(-1)^{m_2+m_3}\!+\!(-1)^M) \eta(m_1,m_2,m_3) \!-\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \begin{split} \vartheta(4m_1,4m_2\!+\!1,4m_3\!+\!2) ={}& \myfrac{1}{4}\left(\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!+\! (-1)^{m_3}\vartheta(m_1,m_2,m_3) \right. \\ & \hspace{3cm} \left. 
-\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \!-\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1,4m_2\!+\!1,4m_3\!+\!3) ={}& \myfrac{1}{4}\left( -\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!-\!(-1)^{m_2}(1\!+\!(-1)^{m_1})\vartheta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. +\!(-1)^{m_1+m_3}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \vartheta(4m_1,4m_2\!+\!2,4m_3\!+\!2) ={}& 0,\\ \begin{split} \vartheta(4m_1,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!+\! (-1)^{m_2}\vartheta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. +\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!+\!(-1)^{m_1+m_3}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \vartheta(4m_1,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl((-1)^{m_2+m_3} \eta(m_1,m_2,m_3) \!+\!(-1\!+\!(-1)^{m_1}\!-\!(-1)^M)\eta(m_1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \vartheta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!1) ={}& \myfrac{1}{4}\bigl((1\!-\!(-1)^M) \eta(m_1,m_2,m_3) \!-\!(-1)^{M}\vartheta(m_1,m_2,m_3)\!+\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\bigr),\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!2) ={}& \myfrac{1}{4}\big( ((-1)^{m_3}\!+\!(-1)^M)\vartheta(m_1,m_2,m_3)\!+\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \\ & \hspace{3cm} \left. +\! \vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!1,4m_3\!+\!3) ={}& \myfrac{1}{4}\big(-\!(-1)^{m_3}\vartheta(m_1,m_2,m_3)\!-\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \\ & \hspace{3cm} \left. +\!(-1)^{m_1+m_2}\vartheta(m_1,m_2,m_3\!+\!1) \!-\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!2,4m_3\!+\!2) ={}& \myfrac{1}{4}\big((-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!-\! (-1)^M\vartheta(m_1,m_2,m_3) \\ & \hspace{3cm} \left. 
-\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!+\!\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\left(-\!(-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!+\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. -\!(-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \!-\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!1,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\left((-1)^{m_2+m_3}\eta(m_1,m_2,m_3)\!-\! (-1)^{m_1}\eta(m_1,m_2\!+\!1,m_3\!+\!1) \right. \\ & \hspace{3cm} \left. -\!(-1)^{m_1}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1) \!+\!(-1)^{m_2+m_3}\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \vartheta(4m_1\!+\!2,4m_2\!+\!2,4m_3\!+\!2) ={}& \myfrac{1}{2} \bigl((-1)^M\vartheta(m_1,m_2,m_3)+\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1) \bigr),\\ \begin{split} \vartheta(4m_1\!+\!2,4m_2\!+\!2,4m_3\!+\!3) ={}& \myfrac{1}{4}\big(-(-1)^M\vartheta(m_1,m_2,m_3)\!-\! (-1)^{m_1+m_2}\eta(m_1,m_2,m_3\!+\!1) \\ & \hspace{3cm} \left. -\!(-1)^{m_3}\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1) \!+\!\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \begin{split} \vartheta(4m_1\!+\!2,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\big((-1)^M \vartheta(m_1,m_2,m_3) \!+\!(-1)^{m_1}\vartheta(m_1,m_2\!+\!1,m_3\!+\!1) \\ & \hspace{3cm} \left. +\!(1\!-\!(-1)^{m_2+m_3})\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\right), \end{split}\\ \vartheta(4m_1\!+\!3,4m_2\!+\!3,4m_3\!+\!3) ={}& \myfrac{1}{4}\bigl(\!-\!(-1)^M\vartheta(m_1,m_2,m_3) \!-\!(1\!+\!(-1)^M)\eta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\!+\vartheta(m_1\!+\!1,m_2\!+\!1,m_3\!+\!1)\bigr).\\ \end{align*} } \end{document}
\begin{document} \baselineskip=18pt \begin{center} {\bf \Large Efficient Estimation of the Additive Risks Model for Interval-Censored Data} \end{center} \thispagestyle{empty} \vskip 5mm \begin{center} {\sc Tong Wang$^{1, 3}$, Dipankar Bandyopadhyay$^2$ and Samiran Sinha$^{3, \dagger}$} \\ \vskip 3mm $^1$ School of Statistics and Data Science, Nankai University, Tianjin, China\\ $^2$ Department of Biostatistics, Virginia Commonwealth University, Richmond, VA, USA\\ $^{3}$ Department of Statistics, Texas A\&M University, College Station, TX, USA \\ $^\dagger$email: [email protected]\\ \end{center} \vskip 5mm \begin{center} Abstract \end{center} In contrast to the popular Cox model, which specifies a multiplicative covariate effect on the time-to-event hazard, the semiparametric additive risks model (ARM) offers an attractive additive specification, allowing direct assessment of changes or differences in the hazard function across covariate values. The ARM is a flexible model, accommodating both time-independent and time-varying covariates. It has a nonparametric component and a regression component identified by a finite-dimensional parameter. This chapter presents an efficient approach for maximum-likelihood (ML) estimation of the nonparametric and the finite-dimensional components of the model via the minorize-maximize (MM) algorithm for case-II interval-censored data. The operating characteristics of our proposed MM approach are assessed via simulation studies, with illustration on a breast cancer dataset via the \texttt{R} package \texttt{MMIntAdd}. It is expected that the proposed computational approach will not only provide scalability for ML estimation but may also ease the computational burden of other complex likelihoods or models.
\vskip 8mm \noindent {\sc Key Words:} Additive risks model; Interval-censored data; MM algorithm; Newton-Raphson method; Optimization; Survival function. \baselineskip=24pt \allowdisplaybreaks \setcounter{page}{1} \section{Introduction} Interval-censoring \citep{Boga2018}, which occurs when the failure time is only known to lie in an interval instead of being observed precisely, abounds in demographic, sociological, and biomedical studies \citep{zhang2010interval}. There are broadly two main types of interval-censored data: case-I and case-II interval-censored data. Case-I interval-censored data, also called current status data \citep{Martinussen2002}, is not the focus of this chapter. Here, we focus on case-II interval censoring, where the observed times are a mixture of left-, right-, and interval-censored time-to-events. Specifically, case-II interval-censored data consists of some left-censored time-to-events, some right-censored time-to-events, and some interval-censored time-to-events, and the proportion of interval-censored time-to-events never goes to zero as the sample size increases. This work aims to present an efficient algorithm for maximum likelihood (ML) estimation of the additive risks model \citep{lin1994semiparametric}, henceforth ARM, for case-II interval-censored data. The ARM is specified by the hazard function \begin{eqnarray}\label{eqm1} h(t|X(t))=\lambda(t)+\beta^\top X(t), \end{eqnarray} where $X(t)$ denotes a vector of possibly time-dependent covariates, $\beta$ is the corresponding regression parameter, and $\lambda(t)$ is the baseline hazard function. In this model, the effect of a covariate can be measured via the difference in the hazard function for different covariate values at any given time. In (\ref{eqm1}), the effect of a covariate on the hazard function is assumed to be constant over time; however, this can be relaxed to any known, possibly time-dependent, parametric form.
\cite{Lin1994} used this ARM to analyze right-censored data. Under case-II interval-censoring, \cite{Zeng2006} proposed an ML method to estimate both the baseline hazard function and the regression parameters of the model. In contrast, \cite{Wang2010} considered a martingale-based estimation procedure, focusing only on the estimation of the regression parameters while bypassing estimation of the baseline hazard -- a critical component for studying the event of interest. Furthermore, \cite{Martinussen2002} and \cite{Wang2020Korean} proposed to use a sieve ML approach to model the baseline hazard $\lambda(t)$ under current status and case-II interval-censoring, respectively. The sieve method requires an appropriate choice of the sieve parameter space and the number of knots. In our ML approach of fitting the ARM to interval-censored data, the baseline cumulative hazard function is modeled as a nonparametric step function with jumps at the observed inspection time points. The computation of the ML estimates through direct maximization of the observed-data likelihood function is problematic due to the large number of parameters. Note that although the regression parameter is finite-dimensional, the baseline hazard function contributes a large number of parameters that tends to increase with the sample size when the inspection time is continuous \citep{Zeng2006}. To circumvent this computational difficulty in high-dimensional ML maximization, we develop a novel minorize-maximize (MM) algorithm \citep{Hunter2004,Wu2010}. The proposed method can handle both time-independent and time-dependent covariates. By applying this technique, the original high-dimensional optimization problem reduces to simple Newton-Raphson updates of the parameters. Moreover, in each step of the Newton-Raphson method, we do not need to invert any high-dimensional matrix. All this is possible with a clever choice of the surrogate function, and details of this choice are discussed in the next section.
Extensive simulation studies confirm that the proposed MM algorithm can estimate the parameters adequately, with significantly reduced computation time compared with direct maximization. The efficiency of an MM algorithm relies on choosing an appropriate minorizing function, which requires understanding and applying mathematical inequalities in the right places. MM algorithms have been developed in quantile regression \citep{Hunter2000}, variable selection \citep{Hunter2005}, and various areas of machine learning; see the review article by \cite{Nguyen2017} and the references therein. This algorithm has been used in analyzing censored time-to-event data with the proportional odds model \citep{Hunter2002}, clustered time-to-event data with the Gamma frailty model \citep{Huang2019}, and recently in analyzing clustered current status data with the generalized odds ratio model \citep{TongWang2020}. To the best of our knowledge, this book chapter presents the first attempt to employ the MM algorithm for inference under the ARM for interval-censored data. The novelty of the work lies in developing an efficient ML estimation procedure for this semiparametric ARM for analyzing case-II interval-censored data. For the consistency and asymptotic normality of the ML estimator, we refer to \cite{Zeng2006}. The remainder of the chapter is organized as follows. After specifying the notation and the hazard specification, Section \ref{sec:model} presents the likelihood of our proposed ARM. Section \ref{sec:MM} presents the relevant details of the proposed MM algorithm, including variance estimation and complexity analysis. The finite-sample performances of our estimators are evaluated via simulation studies using synthetic data in Section \ref{sec:simulation}. Section \ref{sec:realdata} illustrates our proposed methodology via application to a well-known breast cosmesis dataset with interval-censored endpoints.
Relevant model-fitting and implementation using our \texttt{R} package \texttt{MMIntAdd} are presented in Section \ref{sec:forR}. Finally, Section \ref{sec:conclusion} concludes, alluding to some future work. \section{Statistical Model} \label{sec:model} \subsection{Notation and Setup} Let $T_i$ denote the time-to-event for the $i$th subject. Our observed interval-censored data from $n$ independent subjects are given by $\{L_i,R_i,X_i,\Delta_{L,i},\Delta_{I,i},\Delta_{R,i}\}$, $i=1, \dots, n$, where $L_{i}$ and $R_{i}$ are the left- and right-endpoints of the intervals, $X_i$ is a $p\times 1$ vector of time-dependent covariates, and $\Delta_{L,i}$, $\Delta_{I,i}$ and $\Delta_{R,i}$ represent the left-, interval-, and right-censoring indicators, respectively. If $T_i$ is left-censored, then $T_i$ falls in $(0, L_i]$ and $\Delta_{L, i}=1$ while $\Delta_{I, i}=\Delta_{R, i}=0$. If $T_i$ is interval-censored, then $T_i$ falls in $(L_i, R_i]$ and $\Delta_{L, i}=\Delta_{R, i}=0$ while $\Delta_{I, i}=1$. Finally, if $T_i$ is right-censored, then $T_i$ falls in $(R_i, \infty)$ and $\Delta_{L, i}=\Delta_{I, i}=0$ while $\Delta_{R, i}=1$. As a placeholder, we can set $R_i$ to any number larger than $L_i$ for a left-censored time-to-event, and $L_i$ to any number smaller than $R_i$ for a right-censored time-to-event. With the hazard function of the ARM given in (\ref{eqm1}), the cumulative hazard is $H(t; X)= \Lambda(t)+ \beta^\top Z_x(t)$, where $\Lambda(t)=\int_0^t \lambda(s)ds$ and $Z_x(t)=\int^t_0 X(s)ds$. When the covariate is time-independent, $Z_x(t)=\int^t_0 X(s) ds= X t$. Given the covariates, the survival probability is \begin{eqnarray*} S(t; X)=\exp[-\{\Lambda(t)+\beta^\top Z_x(t)\}]. \end{eqnarray*} For the nonparametric ML estimation, assume that $\Lambda(t)$ is a step function with jump $\lambda_k$ at $t_k\;(k=1,\ldots,m)$, i.e., $\Lambda(t)=\sum_{k: t_k\le t}\lambda_k$, where $t_1<\cdots<t_m$ denote the unique inspection time points.
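This step-function representation is easy to evaluate directly; a minimal Python sketch (the inspection grid and jump values below are hypothetical illustrations, not estimates):

```python
import bisect

def cum_hazard(t, t_points, jumps):
    """Lambda(t) = sum of jumps lambda_k over inspection times t_k <= t,
    a nondecreasing step function; t_points must be sorted increasingly."""
    k = bisect.bisect_right(t_points, t)  # number of t_k that are <= t
    return sum(jumps[:k])

# hypothetical inspection times and jump sizes lambda_1, ..., lambda_10
t_points = [0.5, 1.0, 1.5, 2.0, 2.25, 2.5, 3.0, 3.2, 4.2, 5.0]
jumps = [0.1] * 10
```

For instance, `cum_hazard(1.75, t_points, jumps)` returns $\lambda_1+\lambda_2+\lambda_3$, since only the first three inspection times do not exceed $1.75$.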
In the example below, we further illustrate the calculation of $\Lambda(t)$ for the interval-censored scenario. \begin{Exa} Consider a hypothetical dataset with interval-censored time-to-events from eight subjects, $(0, 0.5]$, $(0,5]$, $(2,5]$, $(1,2.5]$, $(1.5,2.25]$, $(3,4.2]$, $(2,\infty)$, $(3.2,\infty)$, where the first two are left-censored, the next four are interval-censored and the last two are right-censored. Then the unique inspection time points are $(t_1,t_2,\ldots,t_{10})^\top =(0.5,1,1.5,2,2.25,2.5,3,3.2,4.2,5)^\top $. Let $(\lambda_1,\lambda_2,\ldots,\lambda_{10})^\top $ be the jumps corresponding to the $t_k$'s. Then $\Lambda(1.75)=\lambda_1+\lambda_2+\lambda_3$ and likewise $\Lambda(3.5)=\lambda_1+\cdots+\lambda_7+\lambda_8$. \end{Exa} \subsection{Likelihood} It is assumed that the distribution of the inspection window $(L, R)$ is independent of the time-to-event $T$, and that the support of $(L, R)$ is $\Omega=\{(l, r): 0< l_0\leq l<r\leq r_0<\infty\}$. The density function of $(L, R)$ is assumed to be positive over $\Omega$, and $\hbox{pr}(T<l_0|X)$ and $\hbox{pr}(T>r_0|X)$ are assumed to be bounded away from zero. As in \cite{Zeng2006}, $\beta$ is assumed to lie in a compact set of a multidimensional Euclidean space, $\Lambda(0)=0$, $\Lambda(t)>0$ is assumed to be a non-decreasing function, and the covariates are assumed to lie in a compact set of a multidimensional Euclidean space.
Let $\lambda=(\lambda_1,\ldots, \lambda_m)^\top $; then the observed likelihood and log-likelihood functions are \begin{eqnarray*} \mathcal{L}(\lambda,\beta)=\prod_{i=1}^n\{1-S(L_i; X_i)\}^{\Delta_{L,i}}\{S(L_i; X_i)-S(R_i; X_i)\}^{\Delta_{I,i}}\{S(R_i; X_i)\}^{\Delta_{R,i}}, \end{eqnarray*} and \begin{eqnarray} \ell(\lambda,\beta)&=&\sum_{i=1}^n\biggl[\Delta_{L,i}{\rm log}\{1-S(L_i; X_i)\}+\Delta_{I,i}{\rm log}\{S(L_i; X_i)-S(R_i; X_i)\}+\Delta_{R,i}{\rm log}\{S(R_i; X_i)\}\biggl]\nonumber\\ &=&\sum_{i=1}^n\biggl[ \Delta_{L,i}{\rm log}\{1-S(L_i; X_i)\}+\Delta_{I,i}{\rm log}\{S(L_i; X_i)\}+\Delta_{I,i}{\rm log}\{1-S^{-1}(L_i; X_i)S(R_i; X_i)\}\nonumber\\ && +\Delta_{R,i}{\rm log}\{S(R_i; X_i)\} \biggl]\nonumber\\ &=& \ell_1(\lambda,\beta)+\ell_2(\lambda,\beta)+\ell_3(\lambda,\beta)+\ell_4(\lambda,\beta),\label{log-likelihood} \end{eqnarray} where \begin{eqnarray*} \ell_1(\lambda,\beta)&=&\sum_{i=1}^n\Delta_{L,i}{\rm log}\{1-S(L_i; X_i)\} =\sum_{i=1}^n\Delta_{L,i}{\rm log}[1-\exp\{- \sum_{k: t_k\le L_i}\lambda_k-\beta^\top Z_{x_i}(L_i)\}],\\ \ell_2(\lambda,\beta)&=&\sum_{i=1}^n\Delta_{I,i}{\rm log}\{S(L_i; X_i)\}=-\sum_{i=1}^n\Delta_{I,i}\left\{\sum_{k: t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)\right\},\\ \ell_3(\lambda,\beta)&=&\sum_{i=1}^n\Delta_{I,i}{\rm log}\{1-S^{-1}(L_i; X_i)S(R_i; X_i)\}\\ &=&\sum_{i=1}^n\Delta_{I,i}{\rm log}\Bigg(1-\exp\bigg[-\sum_{k: L_i<t_k\leq R_i}\lambda_k-\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}\bigg]\Bigg),\\ \ell_4(\lambda,\beta)&=&\sum_{i=1}^n\Delta_{R,i}{\rm log}\{S(R_i; X_i)\}=-\sum_{i=1}^n\Delta_{R,i}\left\{\sum_{k:t_k\leq R_i}\lambda_k+\beta^\top Z_{x_i}(R_i)\right\}. \end{eqnarray*} It is understood that maximization of $\ell(\lambda, \beta)$ is not straightforward because $\lambda$ and $\beta$ enter the likelihood in a non-separable functional form. Therefore, in the next section, we develop an efficient optimization technique aided by the MM algorithm to estimate $\lambda$ and $\beta$.
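For concreteness, the observed-data log-likelihood above can be evaluated directly under the step-function parameterization; a Python sketch for a time-independent scalar covariate (so $Z_x(t)=Xt$; all function and variable names are ours):

```python
import math

def log_likelihood(lam, beta, t_points, data):
    """Observed-data log-likelihood for the additive risks model with a
    time-independent scalar covariate x, so Z_x(t) = x * t.
    Each record: (L, R, x, dL, dI, dR), exactly one indicator equal to 1."""
    def surv(t, x):
        # S(t; x) = exp(-[Lambda(t) + beta * x * t])
        Lam = sum(l for tk, l in zip(t_points, lam) if tk <= t)
        return math.exp(-(Lam + beta * x * t))

    ll = 0.0
    for L, R, x, dL, dI, dR in data:
        if dL:       # left-censored: T in (0, L]
            ll += math.log(1.0 - surv(L, x))
        elif dI:     # interval-censored: T in (L, R]
            ll += math.log(surv(L, x) - surv(R, x))
        else:        # right-censored: T in (R, infinity)
            ll += math.log(surv(R, x))
    return ll
```

Direct maximization of this function over $(\lambda,\beta)$ jointly is exactly the high-dimensional problem the next section avoids.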
\section{Estimation} \subsection{MM algorithm}\label{sec:MM} For developing a computationally efficient MM algorithm, we need to find a suitable minorization function. To develop such a minorization function, we use a result from the recent literature \citep{TongWang2020} along with some standard mathematical inequalities. Define $\lambda_0=(\lambda_{10},\ldots,\lambda_{m0})^\top $ and $u_0(L_i,X_i)=\sum_{k: t_k\leq L_i}\lambda_{k0}+\beta_0^\top Z_{x_i}(L_i)$, $u_0(R_i,X_i)=\sum_{k: t_k\leq R_i}\lambda_{k0}+\beta^\top_0Z_{x_i}(R_i)$ and $u_0(L_i, R_i,X_i)=\sum_{k: L_i<t_k\leq R_i}\lambda_{k0}+\beta_0^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}$. We now present the main result in the following theorem, whose proof is given in the Appendix. \begin{Th}\label{ourlemma1} The minorization function for $\ell(\lambda,\beta)$ is $\ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0)$, such that $\ell(\lambda,\beta)\ge \ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0)$ $\forall \lambda, \lambda_0>0$ and $\beta, \beta_0\in \mathcal{R}^p$ and the equality holds when $\lambda=\lambda_0$ and $\beta=\beta_0$, and \begin{eqnarray*} \ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0) \equiv \sum_{k=1}^m\mathcal{M}_{1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_2(\beta|\lambda_0,\beta_0)+\mathcal{M}_3(\lambda_0,\beta_0), \end{eqnarray*} where \begin{eqnarray*} &&\mathcal{M}_{1,k}(\lambda_k|\lambda_0,\beta_0)\\ &\equiv& -\frac{\lambda_{k0}^2}{\lambda_k} \sum^n_{i=1}\left\{ \frac{\Delta_{L,i}}{u_0(L_i,X_i)}I(t_k\leq L_i) + \frac{\Delta_{I,i}}{u_0(L_i,R_i,X_i)}I(L_i< t_k\leq R_i)\right\}\\ && + \lambda_k \sum_{i=1}^n\biggl[\Delta_{L,i}\left\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i)-\frac{1}{u_0(L_i,X_i)}\right\}I(t_k\leq L_i)\\ &&\hskip 10mm +\Delta_{I,i}\left\{A_1(u_0(L_i,R_i,X_i))+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i) -\frac{1}{u_0(L_i,R_i,X_i)}\right\}\\ && \hskip 10mm \times I(L_i< t_k\leq R_i)-\Delta_{I,i}I(t_k\leq L_i)- \Delta_{R,i}I(t_k\leq R_i)\biggl]\\ && -\frac{\lambda_k^2}{\lambda_{k0}} 
\sum^n_{i=1}\biggl\{\Delta_{L,i}A_2(u_0(L_i,X_i))u_0(L_i,X_i)I(t_k\leq L_i)\\ &&\hskip 10mm +\Delta_{I,i}A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)I(L_i< t_k\leq R_i)\biggl\},\quad k=1,\ldots,m, \end{eqnarray*} \begin{eqnarray*} &&\mathcal{M}_2(\beta|\lambda_0,\beta_0)\\ &\equiv& -\sum_{i=1}^n\biggl[\frac{\Delta_{L,i}}{ u_0(L_i,X_i) }\times \frac{\{\beta_0^\top Z_{x_i}(L_i)\}^2}{ \beta^\top Z_{x_i}(L_i)}+\frac{\Delta_{I,i}}{u_0(L_i,R_i,X_i) }\times \frac{\{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\}^2}{\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\biggl]\\ &&+ \sum_{i=1}^n\biggl[\Delta_{L,i}\left\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i) - \frac{1}{u_0(L_i,X_i)}\right\}\beta^\top Z_{x_i}(L_i)\\ &&\hskip 10mm +\Delta_{I,i}\left\{A_1(u_0(L_i,R_i,X_i))+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i) - \frac{1}{u_0(L_i,R_i,X_i)}\right\}\\ &&\hskip 10mm \times \beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\} -\Delta_{I,i}\beta^\top Z_{x_i}(L_i)-\Delta_{R,i}\beta^\top Z_{x_i}(R_i)\biggl]\\ &&- \sum_{i=1}^n\biggl(\Delta_{L,i}A_2(u_0(L_i,X_i))\frac{u_0(L_i,X_i)}{\beta_0^\top Z_{x_i}(L_i)}\{\beta^\top Z_{x_i}(L_i)\}^2\\ &&\hskip 10mm +\Delta_{I,i}A_2(u_0(L_i,R_i,X_i))\left\{\frac{u_0(L_i,R_i,X_i)}{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\right\}[\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}]^2 \bigg), \end{eqnarray*} $A_1(u)=\exp(-u)/\{1-\exp(-u)\}$, $A_2(u)=\exp(-u)/[2\{1-\exp(-u)\}^2]$, and the expression of $\mathcal{M}_3(\lambda_0,\beta_0)$ is given in the appendix. \end{Th} As opposed to a direct maximization of $\ell(\lambda, \beta)$, for a given $(\lambda_0, \beta_0)$, the MM algorithm maximizes $\ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0)$ with respect to $\lambda$ and $\beta$. In the next step, these new estimates replace $(\lambda_0,\beta_0)$, followed by the maximization of $\ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0)$ with respect to $(\lambda,\beta)$. The iteration continues until $(\lambda,\beta)$ and $(\lambda_0,\beta_0)$ are sufficiently close.
It is important to note that although the MM and EM algorithms appear similar in their iterative way of function maximization, they differ in terms of the objective function being maximized. The paper by \cite{zhou2012vs} nicely articulates the similarities and differences between the EM and MM algorithms via a case study. In the EM algorithm, a conditional expectation of the complete-data likelihood is maximized, whereas in the MM, the minorization function of the log-likelihood is maximized. Most importantly, our specific choice of the minorization function allows separation of the parameters, thereby easing the maximization process. Furthermore, $\mathcal{M}_{1,k}(\lambda_k|\lambda_0, \beta_0)$ and $\mathcal{M}_2(\beta|\lambda_0, \beta_0)$ turn out to be concave functions of $\lambda_k$ and $\beta$, respectively. To ensure the positivity of $\lambda_k$, $k=1,\ldots,m$, we use the transformed parameters $\eta_k={\rm log}(\lambda_k)$, $k=1,\dots,m$, in the optimization. Define $\eta=(\eta_1,\ldots,\eta_m)^\top $ and $\eta_0=(\eta_{10},\ldots,\eta_{m0})^\top $, and then replace $\lambda$ and $\lambda_0$ by $\exp(\eta)$ and $\exp(\eta_0)$, respectively, in $\mathcal{M}_{1, k}$ and $\mathcal{M}_2$ of the minorization function. Also, hereafter, we will refer to $\ell(\lambda, \beta)$ as $\ell(\eta, \beta)$. Consequently, the minorization function of $\ell(\eta, \beta)$ is $\ell_\dagger(\eta, \beta)$, obtained from $\ell_\dagger(\lambda, \beta)$ after replacing $\lambda$ and $\lambda_0$ by $\exp(\eta)$ and $\exp(\eta_0)$, respectively. Next, we propose to estimate $\eta_k$ by solving $S_{1, k}(\eta_k|\eta_0,\beta_0) \equiv\partial \mathcal{M}_{1,k}(\exp(\eta_k)|\exp(\eta_0),\beta_0)/\partial\eta_k=0$ for $k=1,\ldots,m$ and $\beta$ by solving $S_2(\beta|\eta_0,\beta_0)\equiv\partial \mathcal{M}_2(\beta|\exp(\eta_0),\beta_0)/\partial\beta=0$. Note that given $(\eta_0, \beta_0)$, $S_{1, k}(\eta_k|\eta_0, \beta_0)$ is a function of only the scalar parameter $\eta_k$.
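Before presenting the algorithm, it may help to see the MM principle on a toy problem. The sketch below (in the equivalent majorize-minimize form, in the spirit of the quantile-regression MM of \cite{Hunter2000}) majorizes each $|x_i-\theta|$ by a quadratic tangent at the current iterate and then optimizes the surrogate; the function name and data are ours:

```python
def mm_median(x, theta=0.0, iters=200, eps=1e-9):
    """MM iteration for minimizing sum_i |x_i - theta|: majorize |r| by
    r^2/(2|r0|) + |r0|/2 at the current residual r0, then minimize the
    quadratic surrogate, giving a weighted-mean update. Each step cannot
    increase the objective (the descent analogue of monotone ascent)."""
    for _ in range(iters):
        w = [1.0 / max(abs(xi - theta), eps) for xi in x]
        theta = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return theta
```

The iterate converges to the sample median (for example, 3 for `x = [1, 2, 3, 10, 11]`); the same surrogate-then-optimize template, with inequalities reversed, underlies our minorize-maximize updates.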
Now, following the general strategy of the gradient MM algorithm \citep{Hunter2004}, given $(\eta_0, \beta_0)$, $(\eta, \beta)$ will be updated by a one-step Newton-Raphson update, and the entire method can be summarized in the following steps. \vskip 3mm \noindent Step 0. Initialize $(\eta, \beta)$. \noindent Step 1. At the $\iota$th step of the iteration, we update the parameters as follows: \begin{eqnarray} \eta_k^{(\iota)}&=&\eta_k^{(\iota-1)}-S^{-1}_{1, kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})S_{1, k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)}), \mbox{ for } k=1,\ldots,m,\label{eq:mmeta}\\ \beta^{(\iota)}&=&\beta^{(\iota-1)}-S^{-1}_{22}(\beta^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})S_2(\beta^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)}), \label{eq:mmbeta} \end{eqnarray} where $(\eta^{(\iota-1)}, \beta^{(\iota-1)})$ and $(\eta^{(\iota)}, \beta^{(\iota)})$ denote the parameter estimates at the $(\iota-1)$th and $\iota$th iterations, respectively. \noindent Step 2. Repeat Step 1 until $(\eta^{(\iota-1)}, \beta^{(\iota-1)})$ and $(\eta^{(\iota)}, \beta^{(\iota)})$ are sufficiently close. \vskip 3mm In the above iteration, both $S_{1, k}$ and $S_{1, kk}$ are scalar-valued functions, and $S_2$ is a $p$-dimensional vector while $S_{22}$ is a $p\times p$ matrix. After convergence, the final estimates of $\beta$ and $\eta$ will be denoted by $\widehat\beta$ and $\widehat\eta$.
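The update loop above (initialize, one-step Newton updates on the surrogates, repeat until convergence) can be sketched generically. The sketch below instantiates it on a toy separable concave objective in which the surrogate coincides with the objective and $\beta$ is scalar; the callables `S1k`, `S1kk`, `S2`, `S22` are stand-ins supplied by the user, not the paper's actual score expressions:

```python
def gradient_mm(eta, beta, S1k, S1kk, S2, S22, tol=1e-10, max_iter=500):
    """Gradient-MM skeleton: one scalar Newton step per eta_k on its
    surrogate M_{1,k}, then one Newton step for beta on M_2 (scalar beta
    here for simplicity); iterate until successive estimates are close."""
    for _ in range(max_iter):
        eta_new = [ek - S1k(k, ek, eta, beta) / S1kk(k, ek, eta, beta)
                   for k, ek in enumerate(eta)]
        beta_new = beta - S2(eta, beta) / S22(eta, beta)
        if max(abs(a - b) for a, b in zip(eta_new + [beta_new],
                                          eta + [beta])) < tol:
            return eta_new, beta_new
        eta, beta = eta_new, beta_new
    return eta, beta

# toy instantiation: surrogate = objective = -sum_k (eta_k - a_k)^2 - (beta - b)^2
a, b = [0.5, 1.2, 2.0], 0.7
S1k  = lambda k, ek, eta, beta: -2.0 * (ek - a[k])   # d M_{1,k} / d eta_k
S1kk = lambda k, ek, eta, beta: -2.0                 # d^2 M_{1,k} / d eta_k^2
S2   = lambda eta, beta: -2.0 * (beta - b)
S22  = lambda eta, beta: -2.0
eta_hat, beta_hat = gradient_mm([0.0, 0.0, 0.0], 0.0, S1k, S1kk, S2, S22)
```

Note that each $\eta_k$ update is a scalar operation and no matrix larger than $p\times p$ is ever inverted, which is the source of the computational savings quantified in the complexity analysis.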
The expressions of the terms involved in (\ref{eq:mmeta}) and (\ref{eq:mmbeta}) are \begin{eqnarray} &&S_{1,k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)}) \nonumber\\ &=&\exp(\eta_k^{(\iota-1)})\sum_{i=1}^n\biggl\{\Delta_{L,i}A_1(u_{(\iota-1)}(L_i,X_i))I(t_k\leq L_i)-\Delta_{I,i}I(t_k\leq L_i)-\Delta_{R,i}I(t_k\leq R_i)\nonumber\\ &&\quad\quad\quad\quad\quad+\Delta_{I,i}A_1(u_{(\iota-1)}(L_i,R_i,X_i))I(L_i< t_k\leq R_i)\biggl\},\quad k=1,\ldots,m,\label{eqs1k}\\ &&S_{1,kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})\nonumber\\ &=&\exp(\eta_k^{(\iota-1)})\sum_{i=1}^n\biggl[\Delta_{L,i}\Bigg\{A_1(u_{(\iota-1)}(L_i,X_i))-2A_2(u_{(\iota-1)}(L_i,X_i))u_{(\iota-1)}(L_i,X_i)\nonumber\\ &&\quad\quad\quad\quad\quad -\frac{2}{u_{(\iota-1)}(L_i,X_i)} \Bigg\}I(t_k\leq L_i) -\Delta_{I,i}I(t_k\leq L_i)-\Delta_{R,i}I(t_k\leq R_i)\nonumber\\ &&\quad\quad\quad\quad\quad+\Delta_{I,i}\Bigg\{A_1(u_{(\iota-1)}(L_i,R_i,X_i))-2A_2(u_{(\iota-1)}(L_i,R_i,X_i))u_{(\iota-1)}(L_i,R_i,X_i)\nonumber\\ &&\quad\quad\quad\quad\quad\quad\quad\quad-\frac{2}{u_{(\iota-1)}(L_i,R_i,X_i)} \Bigg\}I(L_i< t_k\leq R_i)\biggl],\quad k=1,\ldots,m,\label{eqs1kk}\\ &&S_2(\beta^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})\nonumber\\ &=&\sum_{i=1}^n\biggl\{\Delta_{L,i}A_1(u_{(\iota-1)}(L_i,X_i))Z_{x_i}(L_i)-\Delta_{I,i}Z_{x_i}(L_i)-\Delta_{R,i}Z_{x_i}(R_i)\nonumber\\ &&\quad +\Delta_{I,i}A_1(u_{(\iota-1)}(L_i,R_i,X_i))(Z_{x_i}(R_i)-Z_{x_i}(L_i))\biggl\},\nonumber\\ &&S_{22}(\beta^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})\nonumber\\ &=&-2\sum_{i=1}^n\biggl[\Delta_{L,i}\Bigg\{A_2(u_{(\iota-1)}(L_i,X_i))u_{(\iota-1)}(L_i,X_i)+\frac{1}{u_{(\iota-1)}(L_i,X_i)}\Bigg\}\frac{Z_{x_i}(L_i)^{\otimes 2}}{Z_{x_i}(L_i)^\top \beta^{(\iota-1)}}\nonumber\\ &&\quad\quad +\Delta_{I,i}\Bigg\{A_2(u_{(\iota-1)}(L_i,R_i,X_i))u_{(\iota-1)}(L_i,R_i,X_i)+\frac{1}{u_{(\iota-1)}(L_i,R_i,X_i)}\Bigg\}\nonumber\\ &&\quad\quad \times \frac{(Z_{x_i}(R_i)-Z_{x_i}(L_i))^{\otimes 2}}{(Z_{x_i}(R_i)-Z_{x_i}(L_i))^\top \beta^{(\iota-1)}}\biggl],\nonumber \end{eqnarray} where $u_{(\iota-1)}(L_i, X_i)$, $u_{(\iota-1)}(R_i, X_i)$ and $u_{(\iota-1)}(L_i, R_i, X_i)$ are $u_{0}(L_i, X_i)$, $u_{0}(R_i, X_i)$ and $u_{0}(L_i, R_i, X_i)$, respectively, with $\beta_0$ and $\lambda_0$ replaced by $\beta^{(\iota-1)}$ and $\exp(\eta^{(\iota-1)})$. For the computation of the estimator or the standard error, if any term turns out to be $0/0$, it is redefined as $0$. \subsection{Variance estimation} \cite{Zeng2006} studied the asymptotic properties of the ML estimator and used the profile likelihood method \citep{Murphy2000} to calculate the asymptotic standard error of the estimator. We follow their approach to the standard error calculation, aided by our computational tools. Specifically, the authors studied the consistency of the estimators of $\beta$ and $\Lambda(t)=\int^t_0 \lambda(u)du$, the baseline cumulative hazard function, and the asymptotic properties of $\widehat\beta$. Suppose that the estimator of the covariance matrix of $\widehat\beta$ is $-D^{-1}$. Then, the $(r, s)$th element of the $p\times p$ matrix $D$ is \begin{eqnarray*} \frac{{\rm pl}(\widehat{\beta})-{\rm pl}(\widehat{\beta}+h_ne_r)-{\rm pl}(\widehat{\beta}+h_ne_s)+{\rm pl}(\widehat{\beta}+h_ne_r+h_ne_s)}{h_n^2}, \end{eqnarray*} where $e_r$ is the $p\times 1$ vector with 1 at the $r$th position and 0 elsewhere, $h_n$ is a constant of order $n^{-1/2}$, and ${\rm pl}(\beta)$ stands for the profile log-likelihood function defined as ${\rm pl}(\beta)=\ell(\widehat\eta^\beta,\beta)$, where $\widehat{\eta}^{\beta}={\rm argmax}_{\eta\in\mathcal{R}^m}\ell(\eta,\beta)$. To obtain $\widehat{\eta}^{\beta}$, we use the proposed minorization function, and specifically use the $m$ equations given in (\ref{eq:mmeta}) after replacing $\beta^{(\iota-1)}$ by $\beta$. That is, to obtain $\widehat{\eta}^{\beta}$, we maximize the log-likelihood function $\ell(\eta,\beta)$ with respect to $\eta$ only.
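The second-difference formula for $D$ is straightforward to implement. The sketch below (names ours) applies it to a toy quadratic profile log-likelihood, for which the formula is exact and recovers the Hessian:

```python
def curvature_matrix(pl, beta_hat, h):
    """(r, s)th element: {pl(b) - pl(b + h e_r) - pl(b + h e_s)
    + pl(b + h e_r + h e_s)} / h^2, a second-difference estimate of
    the Hessian of the profile log-likelihood pl at beta_hat."""
    p = len(beta_hat)

    def shifted(r=None, s=None):
        out = list(beta_hat)
        if r is not None:
            out[r] += h
        if s is not None:
            out[s] += h  # when r == s this adds 2h to the same coordinate
        return out

    base = pl(beta_hat)
    return [[(base - pl(shifted(r=r)) - pl(shifted(s=s))
              + pl(shifted(r=r, s=s))) / h**2
             for s in range(p)] for r in range(p)]

# toy quadratic profile log-likelihood with Hessian [[-2, -1], [-1, -2]]
pl = lambda b: -(b[0]**2 + b[0]*b[1] + b[1]**2)
D = curvature_matrix(pl, [0.0, 0.0], h=0.1)
```

The covariance estimate of $\widehat\beta$ is then $-D^{-1}$; in the actual procedure, each `pl` evaluation hides an inner MM maximization over $\eta$, which is why a fast inner solver matters.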
The minorization function for $\ell(\lambda,\beta)$ is $\ell_\dagger(\lambda,\beta|\lambda_0,\beta_0=\beta)$. Since $\beta$ is fixed, we only need to maximize the functions $\mathcal{M}_{1,k}(\lambda_k|\lambda_0,\beta)$ for $k=1,\ldots,m$. Following the general strategy of the gradient MM algorithm, at the $\iota$th step of the iteration, $\eta_k^{(\iota)}(={\rm log}(\lambda^{(\iota)}_k)) $ is updated as follows: \begin{eqnarray*} \eta_k^{(\iota)}&=&\eta_k^{(\iota-1)}-S^{-1}_{1, kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta)S_{1, k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta), \mbox{ for } k=1,\ldots,m, \end{eqnarray*} where $S_{1, k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta)$ and $S_{1, kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta)$ are $S_{1, k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})$ and $S_{1, kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})$, respectively, with $\beta^{(\iota-1)}$ set to $\beta$. The expressions of $S_{1, k}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})$ and $S_{1, kk}(\eta_k^{(\iota-1)}|\eta^{(\iota-1)},\beta^{(\iota-1)})$ are given in (\ref{eqs1k}) and (\ref{eqs1kk}), respectively. For any given $\beta$, the computation of $\widehat{\eta}^{\beta}$ is very fast when $\widehat\eta=(\widehat\eta_1,\ldots,\widehat\eta_m)^\top $, the MLE, is used as the initial value. Obtaining $\widehat{\eta}^{\beta}$ using any generic optimization of $\ell(\eta, \beta)$ can be very time-consuming. \subsection{Complexity analysis}\label{sec:complexity} In the proposed method, parameters are updated via equations (\ref{eq:mmeta}) and (\ref{eq:mmbeta}). Now, we inspect the computational complexity (or simply complexity) of a single update. The complexity of calculating $S_2(\beta|\eta,\beta)$ and $S_{22}(\beta|\eta,\beta)$ is $O(np+np^2)$, where $n$ is the sample size. Next, the complexity of inverting $S_{22}(\beta|\eta,\beta)$ is $O(p^3)$. Therefore, the complexity of one update of $\beta$ is $O(np+np^2+p^3)$.
Similarly, for any $k=1, \dots, m$, the complexity of a one-step update of $\eta_k$ is $O(2n+1)$. Hence, the total computational cost for updating $\eta$ and $\beta$ is $O((2n+1)m+np+np^2+p^3)$. Now, we look closely at the computational complexity of the generic optimization of the log-likelihood $\ell(\lambda, \beta)$ (i.e., $\ell(\exp(\eta), \beta)$) using the Newton-Raphson approach. In each step, the computational cost of the gradient and the Hessian matrix of the log-likelihood is $O(n(m+p)+n(m+p)^2)$, and inverting a matrix of order $m+p$ will cost $O((p+m)^3)$. The total complexity for a single update is then $O(n(p+m)+n(m+p)^2+(p+m)^3)$, which is obviously larger than $O((2n+1)m+np+np^2+p^3)$. Since $m$ increases with the sample size $n$, the difference between the two complexities increases with $n$. As an alternative to Newton's method, if the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm \citep{fletcher2013practical} is used, the complexity becomes $O(n(m+p)+(n+1)(m+p)^2)$. Note that the BFGS algorithm avoids matrix inversion, so the cubic-order complexity is avoided. The complexity of the BFGS method involves $m^2$ and $p^2$ terms, whereas the complexity of the proposed method has $m$ and $p^3$ terms. Usually, for a semiparametric regression model, $p$ is much smaller than $m$, which tends to increase with $n$, indicating that the complexity of MM is smaller than that of BFGS in this context. This complexity calculation indicates the advantage of the MM algorithm. \section{Simulation study}\label{sec:simulation} In this section, we conducted a numerical study to assess the finite-sample performances of the proposed MM algorithm. We considered two main scenarios: 1) time-independent and 2) time-dependent covariates. For Scenario 1, we simulated a scalar covariate $X$ from ${\rm Bernoulli}(0.5)$. Conditional on the covariate, we considered the following hazard function: $h(t|X)=0.2+\beta X$.
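Under Scenario 1 the conditional hazard is constant, so $T\mid X$ is exponential with rate $0.2+\beta X$ and data generation has a closed form; a Python sketch using the inspection-window design of this section (the function name and seed are ours):

```python
import random

def simulate_scenario1(n, beta, seed=2023):
    """Case-II interval-censored data under h(t|X) = 0.2 + beta * X with
    X ~ Bernoulli(0.5): draw T ~ Exponential(rate = 0.2 + beta * X),
    then L ~ Uniform(0.1, 2), R ~ Uniform(L + 0.5, 4), and record
    (L, R, x, dL, dI, dR) with the appropriate censoring indicator."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = int(rng.random() < 0.5)
        t = rng.expovariate(0.2 + beta * x)
        L = rng.uniform(0.1, 2.0)
        R = rng.uniform(L + 0.5, 4.0)
        dL, dI, dR = int(t <= L), int(L < t <= R), int(t > R)
        data.append((L, R, x, dL, dI, dR))
    return data
```

Each record carries exactly one nonzero censoring indicator; Scenario 2, with a time-varying hazard, requires inverting the cumulative hazard rather than this closed-form draw.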
For Scenario 2, the hazard function was $ h(t|X)=0.2+\beta X\exp(t), $ with $X\sim {\rm Bernoulli}(0.5)$. We considered two different values of $\beta$, 0.5 and 1. For both scenarios, we simulated the left censoring time $L_i$ from ${\rm Uniform}(0.1,\, 2)$ and the right censoring time $R_i$ from ${\rm Uniform}(L_i+0.5, 4)$. The proportion of left censoring ranged from 30\% to 50\% and the proportion of right censoring ranged from 25\% to 35\% across all the scenarios. For each scenario, we considered three sample sizes, $n=100$, $200$ and $500$. For the profile-likelihood-based standard error calculation, we used $h_n=1.5n^{-1/2}$ because, among several trial values of $h_n$, this one yielded good agreement between the standard deviation and the standard error of the estimators. We did not face any convergence issues in our proposed MM algorithm. We fit the ARM (\ref{eqm1}) to each of the simulated datasets using the proposed MM algorithm. The results of the simulation study with $500$ replications are presented in Table \ref{simumytab1}. \begin{table}[h] \begin{center} \begin{threeparttable} \addtolength{\tabcolsep}{-4pt} \caption{Results of the simulation study with a scalar covariate, for both time-independent and time-dependent scenarios.
Est: the average of the estimates, SD: the standard deviation of the estimates, SE: the average of the standard errors, CP: the coverage probability of the 95\% Wald's confidence interval} \label{simumytab1} {\small \begin{tabular}{p{1cm}p{2cm} p{1cm} p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm} p{1cm}p{1cm}p{1cm}p{1cm} } \hline \multicolumn{14}{c}{Time-independent covariate: $h(t|X)=0.2+\beta X$}\\ & & \multicolumn{4}{c}{$n=100$} & \multicolumn{4}{c}{$n=200$}&\multicolumn{4}{c}{$n=500$}\\ $\lambda(t)$& $\beta$ & Est &SD &SE &CP& Est &SD &SE &CP & Est &SD &SE &CP\\ \hline 0.2 & $0.5$&$0.495$ & $0.145$ & $0.150$ & $0.956$ & $0.496$ & $0.096$ & $0.099$ & $0.952$ & $0.499$ & $0.059$ & $0.058$ & $0.946$\\ 0.2 & $1.0$&$1.047$ & $0.222$ & $0.248$ & $0.978$ & $1.005$ & $0.161$ & $0.160$ & $0.944$ & $1.012$ & $0.100$ & $0.091$ & $0.936$\\ \hline \multicolumn{14}{c}{Time-dependent covariate: $h(t|X)=0.2+\beta X\exp(t)$}\\ & & \multicolumn{4}{c}{$n=100$} & \multicolumn{4}{c}{$n=200$}&\multicolumn{4}{c}{$n=500$}\\ $\lambda(t)$& $\beta$ & Est &SD &SE &CP& Est &SD &SE &CP & Est &SD &SE &CP\\ \hline 0.2 & $0.5$&$0.518$ & $0.134$ & $0.160$ & $0.992$ & $0.504$ & $0.090$ & $0.102$ & $0.980$ & $0.505$ & $0.053$ & $0.059$ & $0.974$\\ 0.2 & $1.0$&$1.085$ & $0.314$ & $0.317$ & $0.986$ & $1.040$ & $0.200$ & $0.202$ & $0.978$ & $1.013$ & $0.110$ & $0.113$ & $0.950$\\ \hline \end{tabular} } \end{threeparttable} \end{center} \end{table} For each scenario, we report the average of the estimates (Est) for $\beta$, empirical standard deviation (SD), the average of the estimated standard error (SE), and the 95\% coverage probability (CP) based on Wald's confidence interval. The results indicate that the proposed MM algorithm can estimate the parameters very well, while the bias could be up to $8.5\%$ across all scenarios. Overall, the bias and SD decrease with the sample size $n$. There is a reasonable agreement between the empirical standard deviation and the estimated standard error. 
The CPs are close to the nominal level, $0.95$. To assess the performance of the algorithm in the multiple-covariate scenario, we conducted another simulation study with $h(t|X_1,X_2)=0.2t^{1/2}+\beta_1 X_1+\beta_2 X_2$. We simulated both covariates $X_1$ and $X_2$ from ${\rm Bernoulli}(0.5)$, and set $\beta_1=0.5$ and $\beta_2=1$. After simulating the time-to-event $T$ using the additive hazard $h(t|X_1,X_2)$, we simulated the left-censoring time $L$ from ${\rm Uniform}(0.1,\, 1.5)$ and the right-censoring time $R$ from ${\rm Uniform}(L+1.5,\, 4)$. This resulted in 42\% left-censored, 42\% interval-censored, and 16\% right-censored subjects. We fit the ARM (\ref{eqm1}) to each of the simulated datasets. We observe adequate performance of the proposed algorithm (Table \ref{simumytab2}), with results similar to those in Table \ref{simumytab1}.
\begin{table}[h]
\begin{center}
\begin{threeparttable}
\addtolength{\tabcolsep}{-4pt}
\caption{Results of the simulation study with two covariates, $X_1\sim {\rm Bernoulli(0.5)}$ and $X_2\sim {\rm Bernoulli(0.5)}$.
Est: the average of the estimates, SD: the standard deviation of the estimates, SE: the average of the standard errors, CP: the coverage probability of the 95\% Wald's confidence interval} \label{simumytab2}
{\small
\begin{tabular}{p{2cm} p{1cm} p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm} p{1cm}p{1cm}p{1cm}p{1cm} }
\hline
& \multicolumn{4}{c}{$n=100$} & \multicolumn{4}{c}{$n=200$}&\multicolumn{4}{c}{$n=500$}\\
& Est &SD &SE &CP& Est &SD &SE &CP & Est &SD &SE &CP\\
\hline
$\beta_1=0.5$&$0.490$ & $0.193$ & $0.202$ & $0.958$ & $0.493$ & $0.127$ & $0.130$ & $0.950$ & $0.501$ & $0.077$ & $0.076$ & $0.940$\\
$\beta_2=1.0$&$1.027$ & $0.287$ & $0.287$ & $0.968$ & $1.021$ & $0.181$ & $0.186$ & $0.964$ & $1.010$ & $0.107$ & $0.104$ & $0.934$ \\
\hline
\end{tabular}
}
\end{threeparttable}
\end{center}
\end{table}
In all computations, the iteration was stopped when the sum of the absolute differences of the estimates of $\eta$ and $\beta$ at two successive iterations was less than $10^{-3}$. All computations were conducted on an Intel(R) Xeon(R) CPU E5-2680 v4 at 2.40 GHz machine. In Table \ref{tabcom}, we provide the average computation times to obtain parameter estimates and standard errors for varying sample sizes under the scalar-covariate and two-covariate scenarios, using the proposed method and the direct optimization of the log-likelihood via the BFGS algorithm. Here, the specific form of the log-likelihood function is given in expression \eqref{log-likelihood}. To derive estimates using the BFGS algorithm, we first coded the negative of the log-likelihood function and used it as one of the input arguments of the \texttt{optim} function in \texttt{R} with the BFGS method. The initial values were the same as those in the proposed MM algorithm. The standard errors of the estimates are the square roots of the diagonal elements of the inverse of the negative Hessian matrix returned by the optimization.
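The per-iteration complexity expressions stated earlier suggest why the gap between the two methods should widen with $n$. A quick tabulation of the two expressions (Python; the choices $m=n$ and $p=2$ are ours and purely illustrative):

```python
def mm_cost(n, m, p):
    # per-iteration cost of the proposed MM updates: O((2n+1)m + np + np^2 + p^3)
    return (2 * n + 1) * m + n * p + n * p ** 2 + p ** 3

def bfgs_cost(n, m, p):
    # per-iteration cost of BFGS on the joint (m+p)-dimensional problem:
    # O(n(m+p) + (n+1)(m+p)^2)
    return n * (m + p) + (n + 1) * (m + p) ** 2

# illustrative assumption: m grows linearly with n, p fixed at 2
for n in (100, 200, 500):
    ratio = bfgs_cost(n, n, 2) / mm_cost(n, n, 2)
    print(f"n = {n:4d}: BFGS/MM cost ratio = {ratio:6.1f}")
```

Under these illustrative choices, the cost ratio grows roughly linearly in $n$, consistent with the widening timing gap reported in Table \ref{tabcom}.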
\begin{table}[h]
\begin{center}
\caption{The average time (in seconds) to compute estimates (ATE) and standard errors (ATS). Case 1: scalar covariate; Case 2: two covariates; MM: proposed MM algorithm; Direct: direct optimization} \label{tabcom}
\begin{tabular}{ll rr rr rr}
\hline
& & \multicolumn{2}{c}{$n=100$} & \multicolumn{2}{c}{$n=200$}&\multicolumn{2}{c}{$n=500$}\\
& & ATE &ATS & ATE &ATS & ATE &ATS \\
\hline
Case 1& MM & 1.08 & 0.39 & 11.92 & 7.33 & 78.96 & 80.04 \\
& Direct & 3.50 & 1.24 & 37.79 & 18.88 & 1587.08 &666.62 \\
Case 2& MM & 1.91 & 1.88 & 13.14 & 16.93 & 87.78 & 208.13\\
& Direct & 8.32 & 6.23 & 92.81 & 65.10 & 1988.76 & 1812.97 \\
\hline
\end{tabular}
\end{center}
\end{table}
The results show that the proposed method is several times faster than the direct optimization of the log-likelihood function. The relative gain in computation time increases with the sample size.

\section{Application: Breast Cancer Data}\label{sec:realdata}
To illustrate the proposed method, we analyzed the breast cancer data considered in \cite{Finkelstein1986} and \cite{Finkelstein1985}. In this breast cosmesis study, the subjects under adjuvant chemotherapy after tumorectomy were periodically followed up for the cosmetic effect of the therapy; patients generally visited the clinic every 4 to 6 months. Thus, the time of the appearance of breast retraction was recorded as an interval. In particular, if the recorded time for a patient is $(0, 4]$, then the breast retraction happened before four months, whereas a recorded time of $(6, 12]$ signifies that the event happened between six and twelve months. There were 94 early breast cancer patients in the study, of whom 46 were given radiation therapy alone and 48 were given radiation therapy plus adjuvant chemotherapy. The analysis aimed to study the effect of chemotherapy on the time until the appearance of retraction.
We set $X=1$ if a patient had received adjuvant chemotherapy following the initial radiation treatment and 0 otherwise. Hence, $X$ is a time-independent covariate, and we fit the model $h(t|X)=\lambda(t)+X\beta$ to the data using the proposed method. Here, $\beta$ represents the difference in the hazard of breast retraction between the $X=1$ and $X=0$ groups at any time point. We obtain $\widehat{\beta}=0.031$. Since the choice of $h_n$ is quite arbitrary in the profile-likelihood-based method of standard error calculation, we used different values of $h_n$, namely $1.5n^{-1/2}$, $n^{-1/2}/20$, $n^{-1/2}/100$ and $n^{-1/2}/1000$, and obtained $0.09$, $0.08$, $0.06$ and $0.007$ as the corresponding standard errors. For the standard error $0.007$, $\widehat\beta$ is significantly different from zero at the $5\%$ level, while for the other standard errors $\widehat\beta$ is not significantly different from zero. To investigate this issue further, we calculated the bootstrap standard error using $200$ bootstrap samples, which came out to be 0.06. Figure \ref{fig:SurvivalCurves} plots the estimated survival curves for the two groups along with their 95\% pointwise confidence intervals calculated using the bootstrap method. This analysis shows no significant difference between the two survival functions or the two hazard functions at any time. In contrast, \cite{Finkelstein1986} fit a proportional hazards model to these data and found a statistically significant effect of chemotherapy.
\begin{center}
{\bf [Figure 1 should be here]}
\end{center}

\section{Implementation: \texttt{R} package \texttt{MMIntAdd}}\label{sec:forR}
For the implementation of our proposed method, we have developed an \texttt{R} package, which is available on GitHub: \url{https://github.com/laozaoer/MMIntAdd}. In this section, we discuss how the package can be used to analyze the breast cosmesis dataset. The first step is installing the package.
One can use the \texttt{R} package \texttt{devtools} to install our \texttt{R} package as follows.
\begin{lstlisting}[language=R]
>library(devtools)
>devtools::install_github("laozaoer/MMIntAdd")
\end{lstlisting}
If the above method fails, then alternatively one may use the \texttt{remotes} package to install \texttt{MMIntAdd}. The code is
\begin{lstlisting}[language=R]
>library(remotes)
>remotes::install_github("laozaoer/MMIntAdd")
\end{lstlisting}
During the installation, when asked, it is advisable to update the dependent packages \texttt{Rcpp}, \texttt{RcppArmadillo}, and \texttt{boot}. After installation, load the package in the \texttt{R} console using the command
\begin{lstlisting}[language=R]
>library(MMIntAdd)
\end{lstlisting}
Let us now analyze the breast cosmesis data available in the package. This dataset was taken from the \texttt{interval} package and reformatted. Unlike the description given in Section 2, the first two columns of the dataset do not represent the finite inspection time window; rather, they represent the two boundary points of the time-to-event. Specifically, for a left-censored subject the entry in the first column is zero, while for a right-censored subject the entry in the second column is infinity. The next three columns are the left-, interval-, and right-censoring indicators. Note that the sum of these indicators must equal one for every subject. The sixth column of the data represents the covariate value.
\begin{lstlisting}[language=R]
> data(bcos)
> head(bcos)
  left right L I R covariate
1   45   Inf 0 0 1         0
2    6    10 0 1 0         0
3    0     7 1 0 0         0
4   46   Inf 0 0 1         0
5   46   Inf 0 0 1         0
6    7    16 0 1 0         0
\end{lstlisting}
There are two functions in the \texttt{MMIntAdd} package, \verb=Add_case2_inte= and \verb=Add_ci_boot=.
To find them, use the command
\begin{lstlisting}[language=R]
> lsf.str("package:MMIntAdd")
Add_case2_inte : function (data, hn.m, Max_iter = 1000, Tol = 0.001)
Add_ci_boot : function (data, time_points, covariate_value,
    CItype = c("norm", "basic", "perc", "bca"), conf = 0.95,
    boot.num = 200, object_type = c("reg"), Max_iter = 1000, Tol = 0.001)
\end{lstlisting}
The first function returns the regression parameter estimates and the standard error calculated using the profile likelihood approach. The standard error calculation requires a bandwidth, which is supplied through the input argument \texttt{hn.m} of the function. Different values of \texttt{hn.m} return different standard errors but the same parameter estimates.
\begin{lstlisting}[language=R]
> result_hn1=Add_case2_inte(bcos,hn.m=1.5)
> print(result_hn1$beta)
            Est         SE
[1,] 0.03136608 0.09057521
> result_hn2=Add_case2_inte(bcos,hn.m=1/20)
> print(result_hn2$beta)
            Est         SE
[1,] 0.03136608 0.08259436
> result_hn3=Add_case2_inte(bcos,hn.m=1/100)
> print(result_hn3$beta)
            Est         SE
[1,] 0.03136608 0.05657365
> result_hn4=Add_case2_inte(bcos,hn.m=1/1000)
> print(result_hn4$beta)
            Est       SE
[1,] 0.03136608 0.007612
\end{lstlisting}
The other returned objects of \verb=Add_case2_inte= are the estimates of $\lambda=(\lambda_1,\ldots,\lambda_m)^\top $, the log-likelihood value, and the set of distinct inspection time points. The other function of the \texttt{MMIntAdd} package is used to obtain the bootstrap standard error and confidence interval. It has many input arguments; among them, \texttt{boot.num} denotes the number of bootstrap samples to be used.
\begin{lstlisting}[language=R]
> Add_ci_boot(bcos,boot.num = 200)
$beta_boot_se
                 Est    boot_se
covariate 0.03136608 0.06354992

$CI_beta
$CI_beta$normal
       index method        lwr       upr
normal     1 normal -0.1092515 0.1398596

$CI_beta$basic
      index method        lwr        upr
basic     1  basic -0.1277781 0.06273215

$CI_beta$percent
        index  method          lwr       upr
percent     1 percent 3.127308e-55 0.1905102

$CI_beta$bca
    index method          lwr      upr
bca     1   bca 3.142232e-37 0.240083
\end{lstlisting}
The above function returns the bootstrap standard error and bootstrap confidence intervals of the regression parameter, which vary according to the method chosen. Although the default confidence level is 0.95, the level can be set to a different value. These functions can also handle multiple covariates. All the covariates must be binary or numeric, and they are placed from the sixth column onwards in the data frame. For analyzing data with a categorical covariate with $k$ nominal categories, $(k-1)$ dummy variables must be incorporated in the data frame. Next, we analyze a simulated dataset using the \texttt{MMIntAdd} package.
\begin{lstlisting}[language=R]
> set.seed(10)
> n=100
> # Generation of three covariates
> x1=rbinom(n, 1, 0.5) # the first covariate
> x2=rbinom(n, 1, 0.4) # the second covariate
> x3=rbinom(n, 1, 0.3) # the third covariate
>
> #caplambda=0.2*t+ t*(0.5*x1+1*x2+0.6*x3), the true value of the
> #regression parameters are 0.5, 1 and 0.6.
> r=runif(n, 0, 1)
> time_to_event=-log(r)/(0.2+ 0.5*x1+1*x2+0.6*x3)
> # Generation of inspection time window (L, R)
> myl= runif(n,0.1,1.5)
> myr=runif(n, myl+1.5, 4)
> #### Censoring indicator
> delta_ell=as.numeric(time_to_event<myl)
> delta_r=as.numeric(time_to_event>myr)
> delta_i=1-delta_ell-delta_r
>
> myr[delta_ell==1]=myl[delta_ell==1]
> myl[delta_ell==1]=0
> myl[delta_r==1]=myr[delta_r==1]
> myr[delta_r==1]=Inf
> # Creation of the final data object
> mydata=data.frame(myl, myr, delta_ell, delta_i, delta_r, x1,x2,x3)
> mydata=as.matrix(mydata)
> # Analysis of the data by invoking the following function
> testresult=Add_case2_inte(mydata,hn.m=1.5)
> testresult$beta
         Est        SE
x1 0.7008246 0.2899771
x2 1.0521943 0.3808892
x3 0.4904499 0.2564767
\end{lstlisting}
Suppose that, for this example, we are interested in obtaining the bootstrap standard error of the regression parameters and the bootstrap confidence interval of the survival probability at select time points and for a given set of covariate values. For illustration, suppose that the interest is in the survival probability at only two time points, 0.5 and 0.6, and for a covariate value of (0, 1, 0).
The code is
\begin{lstlisting}[language=R]
> mytimepoints=c(0.5, 0.6)
> mycov=c(0, 1, 0)
> out=Add_ci_boot(mydata,time_points=mytimepoints,
+                 covariate_value = mycov, object_type = c("reg","surv"))
> names(out)
[1] "beta_boot_se" "CI_beta"      "surv_boot_se" "CI_surv"
> out$beta_boot_se
         Est   boot_se
x1 0.7008246 0.2365580
x2 1.0521943 0.3334506
x3 0.4904499 0.2776679
> out$CI_beta
$normal
        index method         lwr      upr
normal      1 normal  0.29612686 1.223417
normal1     2 normal  0.37157108 1.678673
normal2     3 normal -0.04340368 1.045034

$basic
       index method        lwr      upr
basic      1  basic  0.2265320 1.248173
basic1     2  basic  0.2608937 1.607011
basic2     3  basic -0.1692522 0.980646

$percent
         index  method          lwr      upr
percent      1 percent 0.1534763545 1.175117
percent1     2 percent 0.4973772979 1.843495
percent2     3 percent 0.0002538463 1.150152

$bca
     index method         lwr      upr
bca      1   bca 0.406808700 1.703756
bca1     2   bca 0.478899213 1.833237
bca2     3   bca 0.001413554 1.175776

> out$surv_boot_se
        Est   boot_se
1 0.4096685 0.1283081
2 0.2226586 0.1198392
> out$CI_surv
$normal
        index method        lwr       upr
normal      1 normal 0.23085708 1.0000000
normal1     2 normal 0.07454217 0.5108168

$basic
       index method        lwr       upr
basic      1  basic 0.27802103 1.0000000
basic1     2  basic 0.08806235 0.6798638

$percent
         index  method        lwr       upr
percent      1 percent 0.11100018 0.6036531
percent1     2 percent 0.07292174 0.5629745

$bca
     index method        lwr       upr
bca      1   bca 0.23570171 0.6307615
bca1     2   bca 0.02607938 0.4095530
\end{lstlisting}
After examining all the results, we recommend using the BCA confidence interval \citep{Efron1993} for the regression parameters and the survival probabilities.

\section{Conclusions}\label{sec:conclusion}
This chapter proposed an efficient MM algorithm for maximizing a complex likelihood function to obtain the ML estimates in the ARM with interval-censored responses. An attractive feature of the method is that it enables the separation of the finite- and infinite-dimensional parameters.
This separation of components provides significant computational advantages because the dimension of the infinite-dimensional parameter increases with the sample size. Numerical studies show that the algorithm works well; we have not encountered any convergence issues in the simulation settings or the real data analysis. We believe that this MM proposal will help generate new ideas for handling computational bottlenecks in complex models and likelihoods. Model (\ref{eqm1}) assumes a constant effect of the covariate. However, rather than a constant regression parameter, one can consider a time-dependent coefficient $\beta(t)$ without specifying any parametric form \citep{Huffer1991}. Other interesting topics for future research include developing computationally efficient MM-based methods and algorithms for clustered case-I or case-II interval-censored responses \citep{Huang1996, TongWang2020}, including exploration of big-data scalability in line with recent advances in asynchronous distributed EM algorithms \citep{srivastava2019asynchronous}. Additionally, developing computationally efficient methods when the inspection time is informative \citep{Zhao2021} could also be a direction of future research.

\section*{Appendix}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{section}{0}
\renewcommand{\thesection}{A.\arabic{section}}
\renewcommand{\thesubsection}{\Alph{section}.\arabic{subsection}}

We shall use the second part of Lemma 1 from \cite{TongWang2020} in proving Theorem 1, and we present this result in the following proposition. The proof of proposition \ref{prop1} can be found in \cite{TongWang2020}.
\begin{proposition}\label{prop1} \citep{TongWang2020} For any $\tau,\tau_0> 0$,
\begin{eqnarray*}
{\rm log}\left\{\frac{1-\exp(-\tau)}{1-\exp(-\tau_0)}\right\}\geq (\tau-\tau_0)A_1(\tau_0)-(\tau-\tau_0)^2A_2(\tau_0)+{\rm log}\left(\frac{\tau_0}{\tau}\right)+1-\frac{\tau_0}{\tau},
\end{eqnarray*}
where $ A_1(\tau_0)=\exp(-\tau_0)/\{1-\exp(-\tau_0)\}$ and $A_2(\tau_0)=\exp(-\tau_0)/[2\{1-\exp(-\tau_0)\}^2]$.
\end{proposition}

\section{Proof of Theorem \ref{ourlemma1}}
In $\ell_2(\lambda,\beta)$ and $\ell_4(\lambda,\beta)$, $(\lambda_1,\ldots,\lambda_m)^\top $ are not entangled with $\beta$. Therefore, there is no need to develop minorization functions for them. In the following, we show how to find the minorization functions for $\ell_1(\lambda,\beta)$ and $\ell_3(\lambda,\beta)$. Define $u(L_i,X_i)=\sum_{k: t_k\leq L_i}\lambda_{k}+\beta^\top Z_{x_i}(L_i)$, $u(R_i,X_i)=\sum_{k: t_k\leq R_i}\lambda_{k}+\beta^\top Z_{x_i}(R_i)$ and $u(L_i, R_i,X_i)=\sum_{k: L_i<t_k\leq R_i}\lambda_{k}+\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}$. According to our model assumption (\ref{eqm1}), $u(L_i, X_i)>0$, $u(R_i, X_i)>0$ and $u(L_i, R_i, X_i)>0$ for all $i$. Now, we can rewrite
\begin{eqnarray*}
\ell_1(\lambda,\beta)&=&\sum_{i=1}^n\Delta_{L,i}{\rm log}[1-\exp\{- \sum_{k: t_k\le L_i}\lambda_k-\beta^\top Z_{x_i}(L_i)\}]\\
&=&\sum_{i=1}^n\Delta_{L,i}{\rm log}[1-\exp\{-u(L_i, X_i)\}]\\
&=&\sum_{i=1}^n\Delta_{L,i}\left({\rm log}[1-\exp\{-u_0(L_i, X_i)\}]+{\rm log}\left[\frac{1-\exp\{-u(L_i, X_i)\}}{1-\exp\{-u_0(L_i, X_i)\}}\right]\right).
\end{eqnarray*} Applying proposition \ref{prop1} to the second term of the above display with $\tau=u(L_i,X_i)$ and $\tau_0=u_0(L_i,X_i)$, we obtain \begin{eqnarray} \ell_1(\lambda,\beta)&\geq& \sum_{i=1}^n\Delta_{L,i}\biggl( {\rm log}[1-\exp\{-u_0(L_i, X_i)\}]+ \{ u(L_i, X_i)-u_0(L_i, X_i)\} A_1(u_0(L_i, X_i))\nonumber \\ &&-\{ u(L_i, X_i)-u_0(L_i, X_i)\}^2 A_2(u_0(L_i, X_i))+ {\rm log}\left\{ \frac{u_0(L_i, X_i)}{u(L_i, X_i)}\right\} +1- \frac{u_0(L_i, X_i)}{u(L_i, X_i)} \biggl)\nonumber \\ &=&\sum_{i=1}^n\Delta_{L,i}\Bigg[\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i)\}u(L_i,X_i)-A_2(u_0(L_i,X_i))u^2(L_i,X_i)\nonumber \\ &&+{\rm log}\left\{\frac{u_0(L_i,X_i)}{u(L_i,X_i)}\right\}-\frac{u_0(L_i,X_i)}{u(L_i,X_i)} +C_1(u_0(L_i,X_i))\Bigg]\nonumber\\ &=&\sum_{i=1}^n\Delta_{L,i}\Bigg[\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i)\}\left(\sum_{k: t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)\right)\nonumber\\ &&-A_2(u_0(L_i,X_i))\left(\sum_{k: t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)\right)^2+{\rm log}\left(\frac{u_0(L_i,X_i)}{\sum_{k: t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)}\right)\nonumber\\ && -\left(\frac{u_0(L_i,X_i)}{\sum_{k: t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)}\right)+C_1(u_0(L_i,X_i))\Bigg],\label{appmyeq1} \end{eqnarray} where $C_1(u_0(L_i,X_i))$ is the constant term that only depends on $u_0(L_i,X_i)$, given as $C_1(u_0(L_i,X_i))={\rm log}[1-\exp\{-u_0(L_i, X_i)\}]-A_1(u_0(L_i,X_i))u_0(L_i,X_i)-A_2(u_0(L_i,X_i))u_0^2(L_i,X_i)+1.$ Next, we look into the following three terms of (\ref{appmyeq1}). 
First, \begin{eqnarray*} -\left(\sum_{t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)\right)^2&=&-\left(\sum_{t_k\le L_i}\frac{\lambda_{k0}}{u_0(L_i,X_i)}\frac{u_0(L_i,X_i)}{\lambda_{k0}}\lambda_k+\frac{\beta_0^\top Z_{x_i}(L_i)}{u_0(L_i,X_i)}\frac{u_0(L_i,X_i)}{\beta_0^\top Z_{x_i}(L_i)}\beta^\top Z_{x_i}(L_i)\right)^2\nonumber\\ &\ge&-\biggl\{\sum_{t_k\leq L_i}\frac{u_0(L_i,X_i)}{\lambda_{k0}}\lambda_k^2+\frac{u_0(L_i,X_i)}{\beta_0^\top Z_{x_i}(L_i)}(\beta^\top Z_{x_i}(L_i))^2\biggl\}, \end{eqnarray*} where, the inequality is obtained by applying Jensen's inequality on the concave function $f(x)=-x^2$ and noting that $\sum_{k: t_k\le L_i}\lambda_{k0}/u_0(L_i,X_i)+ \beta_0^\top Z_{x_i}(L_i)/u_0(L_i,X_i)=1$. Second, applying the standard inequality ${\rm log}(x)\ge1-1/x$ for any generic $x>0$, we have \begin{eqnarray*} {\rm log}\left(\frac{u_0(L_i,X_i)}{\sum_{t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)}\right)\geq 1-\frac{\sum_{t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)}{u_0(L_i,X_i)}, \end{eqnarray*} and third, \begin{eqnarray*} -\frac{u_0(L_i,X_i)}{\sum_{t_k\leq L_i}\lambda_k+\beta^\top Z_{x_i}(L_i)}&=&-u_0(L_i,X_i)\biggl\{\sum_{t_k\le L_i}\frac{\lambda_{k0}}{u_0(L_i,X_i)}\frac{u_0(L_i,X_i)}{\lambda_{k0}}\lambda_k\\ &&+\frac{\beta_0^\top Z_{x_i}(L_i)}{u_0(L_i,X_i)}\frac{u_0(L_i,X_i)}{\beta_0^\top Z_{x_i}(L_i)}\beta^\top Z_{x_i}(L_i)\biggl\}^{-1}\\ &\ge &- \biggl[ \sum_{t_k\le L_i} \frac{\lambda_{k0}^2}{u_0(L_i,X_i)}\lambda_k^{-1}+\frac{\{\beta_0^\top Z_{x_i}(L_i)\}^2}{u_0(L_i,X_i)}\{\beta^\top Z_{x_i}(L_i)\}^{-1}\biggl], \end{eqnarray*} where, the last inequality is obtained by applying Jensen's inequality on the concave function $f(x)=-1/x$, and noting that $\sum_{k: t_k\le L_i}\lambda_{k0}/u_0(L_i,X_i)+ \beta_0^\top Z_{x_i}(L_i)/u_0(L_i,X_i)=1$. 
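These three bounds can be sanity-checked numerically in the two-block case, with $x$ standing for the $\lambda$-part and $y$ for the $\beta^\top Z_{x_i}(L_i)$-part of $u$, and $(x_0, y_0)$ for their values at the current iterate. The following sketch (Python; all names are ours) verifies them on random positive inputs:

```python
import math
import random

def minorizer_bounds_hold(x, y, x0, y0, tol=1e-12):
    """Check the three inequalities used above in the two-block case,
    with u = x + y and u0 = x0 + y0 (all quantities positive)."""
    u, u0 = x + y, x0 + y0
    # Jensen on the concave f(t) = -t^2 with weights x0/u0 and y0/u0
    sq = -(u ** 2) >= -((u0 / x0) * x ** 2 + (u0 / y0) * y ** 2) - tol
    # elementary bound log(t) >= 1 - 1/t evaluated at t = u0/u
    lg = math.log(u0 / u) >= 1 - u / u0 - tol
    # Jensen on the concave f(t) = -1/t with the same weights
    rc = -(u0 / u) >= -((x0 ** 2) / (u0 * x) + (y0 ** 2) / (u0 * y)) - tol
    return sq and lg and rc

rng = random.Random(0)
all_hold = all(
    minorizer_bounds_hold(*(rng.uniform(0.01, 5.0) for _ in range(4)))
    for _ in range(1000)
)
```

All three bounds hold for arbitrary positive inputs, which is what makes the resulting surrogate a valid minorizer regardless of the current iterate.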
Then, applying the last three inequalities in (\ref{appmyeq1}), we obtain $\ell_1(\lambda,\beta)\ge \ell_{1,\dagger}(\lambda,\beta|\lambda_0,\beta_0)\equiv\sum_{k=1}^m\mathcal{M}_{1,1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_{1,2}(\beta|\lambda_0,\beta_0)+\mathcal{M}_{1,3}(\lambda_0,\beta_0)$, where for $k=1, \dots, m$,
\begin{eqnarray*}
\mathcal{M}_{1,1,k}(\lambda_k|\lambda_0,\beta_0)
&=&\sum_{i=1}^n\Delta_{L,i}\Bigg[\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i)\}\lambda_k\\
&&-A_2(u_0(L_i,X_i))\left\{\frac{u_0(L_i,X_i)}{\lambda_{k0}}\right\}\lambda_k^2-\frac{\lambda_k}{u_0(L_i,X_i)}-\frac{\lambda_{k0}^2}{u_0(L_i,X_i)}\lambda_k^{-1}\Bigg]I(t_k\leq L_i),
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{1,2}(\beta|\lambda_0,\beta_0)
&=&\sum_{i=1}^n\Delta_{L,i}\Bigg[\{A_1(u_0(L_i,X_i))+2A_2(u_0(L_i,X_i))u_0(L_i,X_i)\}\beta^\top Z_{x_i}(L_i)\\
&&-A_2(u_0(L_i,X_i))\frac{u_0(L_i,X_i)}{\beta_0^\top Z_{x_i}(L_i)}\{\beta^\top Z_{x_i}(L_i)\}^2-\frac{\beta^\top Z_{x_i}(L_i)}{u_0(L_i,X_i)}\\
&&-\frac{\{\beta_0^\top Z_{x_i}(L_i)\}^2}{u_0(L_i,X_i)}\{\beta^\top Z_{x_i}(L_i)\}^{-1}\Bigg],
\end{eqnarray*}
and $ \mathcal{M}_{1,3}(\lambda_0,\beta_0)=\sum_{i=1}^n\Delta_{L,i}\{{\rm log}[1-\exp\{-u_0(L_i, X_i)\}]-A_1(u_0(L_i,X_i))u_0(L_i,X_i)-A_2(u_0(L_i,X_i))u_0^2(L_i,X_i)+1\} $. Next, consider finding the minorization function for $\ell_3(\lambda,\beta)$. Here, we use the same techniques as for finding the minorization function for $\ell_1(\lambda,\beta)$. Note that
\begin{eqnarray*}
\ell_3(\lambda,\beta)
&=&\sum_{i=1}^n\Delta_{I,i}{\rm log}\left(1-\exp\left[-\sum_{k: L_i< t_k\leq R_i}\lambda_k-\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}\right]\right)\\
&=&\sum_{i=1}^n\Delta_{I,i}{\rm log}[1-\exp\{-u(L_i,R_i, X_i)\}]\\
&=&\sum_{i=1}^n\Delta_{I,i}\left({\rm log}[1-\exp\{-u_0(L_i,R_i, X_i)\}]+{\rm log}\left[\frac{1-\exp\{-u(L_i,R_i, X_i)\}}{1-\exp\{-u_0(L_i,R_i, X_i)\}}\right]\right).
\end{eqnarray*}
Now applying proposition \ref{prop1} to the second term of the above display with $\tau=u(L_i,R_i,X_i)$ and $\tau_0=u_0(L_i,R_i,X_i)$, we obtain
\begin{eqnarray}
\ell_3(\lambda,\beta)&\geq& \sum_{i=1}^n\Delta_{I,i}\biggl( {\rm log}[1-\exp\{-u_0(L_i,R_i, X_i)\}]+ \{ u(L_i,R_i, X_i)-u_0(L_i,R_i, X_i)\} A_1(u_0(L_i,R_i, X_i))\nonumber \\
&&-\{ u(L_i,R_i, X_i)-u_0(L_i,R_i, X_i)\}^2 A_2(u_0(L_i,R_i, X_i))\nonumber\\
&&+ {\rm log}\left\{ \frac{u_0(L_i,R_i, X_i)}{u(L_i,R_i, X_i)}\right\} +1- \frac{u_0(L_i,R_i, X_i)}{u(L_i,R_i, X_i)} \biggl)\nonumber \\
&=&\sum_{i=1}^n\Delta_{I,i}\Bigg[\{A_1(u_0(L_i,R_i,X_i))+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)\}u(L_i,R_i,X_i)\nonumber\\
&&-A_2(u_0(L_i,R_i,X_i))u^2(L_i,R_i,X_i)\nonumber \\
&&+{\rm log}\left\{\frac{u_0(L_i,R_i,X_i)}{u(L_i,R_i,X_i)}\right\}-\frac{u_0(L_i,R_i,X_i)}{u(L_i,R_i,X_i)} +C_1(u_0(L_i,R_i,X_i))\Bigg]\nonumber\\
&=&\sum_{i=1}^n\Delta_{I,i}\Bigg[\{A_1(u_0(L_i,R_i,X_i))\nonumber\\
&&+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)\}\left(\sum_{k: L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\right)\nonumber\\
&&-A_2(u_0(L_i,R_i,X_i))\left(\sum_{k: L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\right)^2\nonumber\\
&&+{\rm log}\left(\frac{u_0(L_i,R_i,X_i)}{ \sum_{k: L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))} \right)\nonumber\\
&& -\left(\frac{u_0(L_i,R_i,X_i)}{\sum_{k: L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\right)+C_1(u_0(L_i,R_i,X_i))\Bigg]\label{appmyeq1_l3}
\end{eqnarray}
where, $C_1(u_0(L_i,R_i,X_i))$ is the constant term that only depends on $u_0(L_i,R_i,X_i)$, given by $ C_1(u_0(L_i,R_i,X_i))={\rm log}[1-\exp\{-u_0(L_i,R_i, X_i)\}]-A_1(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)-A_2(u_0(L_i,R_i,X_i))u_0^2(L_i,R_i,X_i)+1.
$ Similarly, we have the following three inequalities,
\begin{eqnarray*}
&-&\left(\sum_{L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\right)^2\\
&=&-\left(\sum_{L_i<t_k\le R_i}\frac{\lambda_{k0}}{u_0(L_i,R_i,X_i)}\frac{u_0(L_i,R_i,X_i)}{\lambda_{k0}}\lambda_k\right.\\
&&\left.+\frac{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}{u_0(L_i,R_i,X_i)}\frac{u_0(L_i,R_i,X_i)}{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\right)^2\nonumber\\
&\ge&-\biggl\{\sum_{L_i<t_k\leq R_i}\frac{u_0(L_i,R_i,X_i)}{\lambda_{k0}}\lambda_k^2+\frac{u_0(L_i,R_i,X_i)}{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}(\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i)))^2\biggl\},
\end{eqnarray*}
\begin{eqnarray*}
{\rm log}\left(\frac{u_0(L_i,R_i,X_i)}{\sum_{L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\right)\geq 1-\frac{\sum_{L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}{u_0(L_i,R_i,X_i)},
\end{eqnarray*}
and
\begin{eqnarray*}
&-&\frac{u_0(L_i,R_i,X_i)}{\sum_{L_i<t_k\leq R_i}\lambda_k+\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\\
&=&-u_0(L_i,R_i,X_i)\biggl\{\sum_{L_i<t_k\leq R_i}\frac{\lambda_{k0}}{u_0(L_i,R_i,X_i)}\frac{u_0(L_i,R_i,X_i)}{\lambda_{k0}}\lambda_k\\
&&+\frac{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}{u_0(L_i,R_i,X_i)}\frac{u_0(L_i,R_i,X_i)}{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))}\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\biggl\}^{-1}\\
&\ge &- \biggl[ \sum_{L_i<t_k\le R_i} \frac{\lambda_{k0}^2}{u_0(L_i,R_i,X_i)}\lambda_k^{-1}+\frac{\{\beta_0^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\}^2}{u_0(L_i,R_i,X_i)}\{\beta^\top (Z_{x_i}(R_i)-Z_{x_i}(L_i))\}^{-1}\biggl],
\end{eqnarray*}
where the first and the third inequalities are obtained by applying Jensen's inequality to the concave functions $f(x)=-x^2$ and $f(x)=-1/x$, respectively, and the second inequality is obtained by applying the standard inequality ${\rm log}(x)\ge1-1/x$.
Applying the above three inequalities in (\ref{appmyeq1_l3}), we obtain $\ell_3(\lambda,\beta)\ge\ell_{3,\dagger}(\lambda,\beta|\lambda_0,\beta_0)\equiv\sum_{k=1}^m\mathcal{M}_{3,1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_{3,2}(\beta|\lambda_0,\beta_0)+\mathcal{M}_{3,3}(\lambda_0,\beta_0)$, where
\begin{eqnarray*}
\mathcal{M}_{3,1,k}(\lambda_k|\lambda_0,\beta_0)
&=&\sum_{i=1}^n\Delta_{I,i}\Bigg[\{A_1(u_0(L_i,R_i,X_i))+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)\}\lambda_k\\
&&-A_2(u_0(L_i,R_i,X_i))\left\{\frac{u_0(L_i,R_i,X_i)}{\lambda_{k0}}\right\}\lambda_k^2\\
&&-\frac{\lambda_k}{u_0(L_i,R_i,X_i)}-\frac{\lambda_{k0}^2}{u_0(L_i,R_i,X_i)}\lambda_k^{-1}\Bigg]I(L_i< t_k\le R_i),\quad k=1,\ldots,m,
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{3,2}(\beta|\lambda_0,\beta_0)
&=&\sum_{i=1}^n\Delta_{I,i}\Bigg(\{A_1(u_0(L_i,R_i,X_i))+2A_2(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)\}\\
&&\times \beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\} -A_2(u_0(L_i,R_i,X_i))\frac{u_0(L_i,R_i,X_i)[\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}]^2}{\beta_0^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}}\\
&&-\frac{\beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}}{u_0(L_i,R_i,X_i)} -\frac{[\beta_0^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\}]^2}{u_0(L_i,R_i,X_i) \beta^\top \{Z_{x_i}(R_i)-Z_{x_i}(L_i)\} } \Bigg),
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{M}_{3,3}(\lambda_0,\beta_0)&=&\sum_{i=1}^n\Delta_{I,i}\Bigg[{\rm log}[1-\exp\{-u_0(L_i,R_i,X_i)\}]\\
&&-A_1(u_0(L_i,R_i,X_i))u_0(L_i,R_i,X_i)-A_2(u_0(L_i,R_i,X_i))u_0^2(L_i,R_i,X_i)+1\Bigg].
\end{eqnarray*}
Finally, we obtain
\begin{eqnarray*}
\ell(\lambda, \beta)&=&\ell_1(\lambda, \beta)+ \ell_2(\lambda, \beta)+\ell_3(\lambda, \beta)\\
&\geq& \ell_{\dagger}(\lambda,\beta|\lambda_0,\beta_0)\\
& \equiv& \ell_{1,\dagger}(\lambda,\beta|\lambda_0,\beta_0)+\ell_2(\lambda, \beta)+\ell_{3,\dagger}(\lambda,\beta|\lambda_0,\beta_0)\\
&=& \sum_{k=1}^m\mathcal{M}_{1,1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_{1,2}(\beta|\lambda_0,\beta_0)+\mathcal{M}_{1,3}(\lambda_0,\beta_0)+\ell_2(\lambda, \beta)\\
&&+\sum_{k=1}^m\mathcal{M}_{3,1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_{3,2}(\beta|\lambda_0,\beta_0)+\mathcal{M}_{3,3}(\lambda_0,\beta_0)\\
&\equiv & \sum_{k=1}^m\mathcal{M}_{1,k}(\lambda_k|\lambda_0,\beta_0)+\mathcal{M}_2(\beta|\lambda_0,\beta_0)+\mathcal{M}_3(\lambda_0,\beta_0),
\end{eqnarray*}
where $ \mathcal{M}_{1,k}(\lambda_k|\lambda_0,\beta_0) = \mathcal{M}_{1,1,k}(\lambda_k|\lambda_0,\beta_0)+ \mathcal{M}_{3,1,k}(\lambda_k|\lambda_0,\beta_0)- \lambda_k\sum^n_{i=1} \Delta_{I, i}I(t_k\leq L_i) $, $ \mathcal{M}_{2}(\beta|\lambda_0,\beta_0) = \mathcal{M}_{1, 2}(\beta|\lambda_0,\beta_0)+ \mathcal{M}_{3, 2}(\beta|\lambda_0,\beta_0)- \sum_{i=1}^n\Delta_{I,i}\beta^\top Z_{x_i}(L_i)$, and $\mathcal{M}_3(\lambda_0,\beta_0) = \mathcal{M}_{1, 3}(\lambda_0,\beta_0) +\mathcal{M}_{3, 3}(\lambda_0,\beta_0)$.
\begin{figure}
\caption{Estimated survival curves of the breast cancer data. The red and black curves correspond to patients with $X=1$ (adjuvant chemotherapy $+$ radiation) and $X=0$ (only radiation), respectively. The pink and gray shaded areas are the confidence bands for red and black curves, respectively.}
\label{fig:SurvivalCurves}
\end{figure}
\end{document}
Symmetries of Symplectic Manifolds and Related Topics Session code: ssm Henrique Bursztyn (IMPA, Brazil) Lisa Jeffrey (University of Toronto) Liviu Mare (University of Regina) Catalin Zara (University of Massachusetts Boston) Tuesday, Jul 25 [McGill U., McConnell Engineering Building, Room 13] 11:45 Rebecca Goldin (George Mason University, USA), On equivariant structure constants for G/B 12:15 Jeffrey Carlson (University of Toronto, Canada), Equivariant formality beyond Hamiltonian actions 14:15 Victor Guillemin (MIT, USA), Torus actions with collinear weights 15:45 Nasser Heydari (Memorial University of Newfoundland, Canada), Equivariant Perfection and Kirwan Surjectivity in Real Symplectic Geometry 16:15 Alejandro Cabrera (Universidade Federal do Rio de Janeiro, Brazil), Odd symplectic supergeometry, characteristic classes and reduction 17:00 Shlomo Sternberg (Harvard University, USA), The Stasheff associahedron 17:30 Yael Karshon (University of Toronto, Canada), Classification results in equivariant symplectic geometry Wednesday, Jul 26 [McGill U., McConnell Engineering Building, Room 13] 11:15 Alessia Mandini (Pontifícia Universidade Católica do Rio de Janeiro, Brazil), Symplectic embeddings and infinite staircases -- Part I 11:45 Ana Rita Pires (Fordham University, USA), Symplectic embeddings and infinite staircases -- Part II 13:45 Daniele Sepe (Universidade Federal Fluminense, Brazil), Integrable billiards and symplectic embeddings 14:15 Eckhard Meinrenken (University of Toronto, Canada), On the quantization of Hamiltonian loop group spaces 14:45 Leonardo Mihalcea (Virginia Tech University, USA), An affine quantum cohomology ring 15:15 Jonathan Weitsman (Northeastern University, USA), On Geometric Quantization of (some) Poisson Manifolds 16:15 Steven Rayan (University of Saskatchewan, Canada), The quiver at the bottom of the twisted nilpotent cone on $\mathbb{CP}^1$ 16:45 Elisheva Adina Gamse (University of Toronto, Canada), Vanishing theorems in the
cohomology ring of the moduli space of parabolic vector bundles Rebecca Goldin George Mason University, USA On equivariant structure constants for G/B Schubert calculus concerns the product structure for rings associated with a flag manifold, $G/B$. For equivariant cohomology and equivariant $K$-theory, the coefficients are positive in an appropriate sense, reflecting underlying geometric structure. Symmetries coming from the $G$ action lead to enumerative formulas in equivariant and ordinary cohomology and equivariant and ordinary $K$-theory. I will present such a formula, with a discussion of some underlying geometry. Much of this work is joint with Allen Knutson. Location: McGill U., McConnell Engineering Building, Room 13 Jeffrey Carlson University of Toronto, Canada Equivariant formality beyond Hamiltonian actions It is well known that Hamiltonian torus actions on compact symplectic manifolds are equivariantly formal; particular cases include coadjoint orbits and generalized flag manifolds $G/K$. Less is known in the case of the isotropy action of a Lie group $K$ on a homogeneous space $G/K$ when $K$ is not of full rank in $G$. In this talk I will explain the known cases and characterizations of equivariant formality of such actions in terms of ordinary cohomology, rational homotopy theory, invariant theory, and equivariant K-theory. We will also state a structure theorem for the equivariant cohomology and rationalized K-theory of such equivariantly formal actions. Some of this work is joint with Chi-Kwong Fok. Victor Guillemin MIT, USA Torus actions with collinear weights Let $G$ be an $n$-torus, $M$ a compact manifold and $G\times M \to M$ an action of $G$ on $M$ having the property that the fixed point sets are isolated points. 
For such an action the equivariant cohomology ring of $M$ sits inside a larger ring: the "assignment ring" (a ring which describes the "orbitype stratification" of $M$ by fixed point sets of subgroups of $G$), and these two rings coincide if and only if $M$ is a GKM manifold, i.e. if and only if for every fixed point, $p$, the weights of the isotropy action of $G$ on the tangent space to $M$ at $p$ are pairwise non-collinear. In this talk I will describe what happens when one slightly weakens this condition: i.e. requires that at most two weights be collinear. P.S. The results I will report on are joint with Catalin Zara and Sue Tolman. Nasser Heydari Memorial University of Newfoundland, Canada Equivariant Perfection and Kirwan Surjectivity in Real Symplectic Geometry Let $(M,\omega,G,\mu,\sigma,\phi)$ be a real Hamiltonian system. In this case, the real subgroup $G_{\mathbb{R}}=G^{\phi}$ acts on the real locus $Q=M^{\sigma}$. Consider an invariant inner product on the Lie algebra $\mathfrak{g}$ and define the norm squared function $f=||\mu||^{2}:M \rightarrow \mathbb{R}$. We show that under certain conditions on pairs $(G,\phi)$ and $(M,\sigma)$, the restricted map $f_{Q}:Q\rightarrow \mathbb{R}$ is $G_{\mathbb{R}}$-equivariantly perfect. In particular, when the action of $G$ on the zero level set $M_{0}=f^{-1}(0)$ is free, the real Kirwan map is surjective. As an application of these results, we compute the Betti numbers of the real reduction $Q//G_{\mathbb{R}}$ of the action of the unitary group on a product of complex Grassmannians. Alejandro Cabrera Universidade Federal do Rio de Janeiro, Brazil Odd symplectic supergeometry, characteristic classes and reduction We give an overview of the role of odd symplectic supergeometry in the description of Mathai-Quillen representatives of the Euler and Thom classes of a vector bundle. Using this language, we propose natural generalizations involving (ordinary) symplectic reduction by symmetries. This is joint work with F. Bonechi.
Shlomo Sternberg Harvard University, USA The Stasheff associahedron Show and tell about the Stasheff associahedron K5 Yael Karshon Classification results in equivariant symplectic geometry I will report on some old and new classification results in equivariant symplectic geometry, expanding on my classification, joint with Sue Tolman, of Hamiltonian torus actions with two dimensional quotients. Alessia Mandini Pontifícia Universidade Católica do Rio de Janeiro, Brazil Symplectic embeddings and infinite staircases -- Part I McDuff and Schlenk studied an embedding capacity function, which describes when a 4-dimensional ellipsoid can symplectically embed into a 4-ball. The graph of this function includes an infinite staircase related to the odd index Fibonacci numbers. Infinite staircases have been shown to exist also in the graphs of the embedding capacity functions when the target manifold is a polydisk or the ellipsoid E(2,3). This talk describes joint work with Cristofaro-Gardiner, Holm, and Pires, where we find new examples of symplectic toric 4-manifolds for which the graph of the embedding capacity function has an infinite staircase. Ana Rita Pires Fordham University, USA Symplectic embeddings and infinite staircases - Part II This talks continues the one with the same title, on joint work with Cristofaro-Gardiner, Holm, and Mandini. I will explain the proof of the existence of infinite staircases in the graphs of the embedding capacity functions for certain symplectic toric 4-manifolds, which uses ECH capacities and Ehrhart quasipolynomials as its main tools. I will also explain why we conjecture that these are the only such manifolds for which an infinite staircase can occur. 
Daniele Sepe Universidade Federal Fluminense, Brazil Integrable billiards and symplectic embeddings The problem of (finding non-trivial obstructions to) embedding a symplectic manifold into another is one of the oldest in symplectic topology and started with the seminal non-squeezing theorem due to Gromov. In dimension 4, many techniques have been developed to shed light on this hard question. Recently, ECH capacities have proved effective in studying symplectic embeddings between subsets of $\left(\mathbb{R}^4, \omega_{\mathrm{can}}\right)$ called toric domains, i.e. saturated with respect to the moment map of the standard Hamiltonian $\mathbb{T}^2$-action on $\left(\mathbb{R}^4, \omega_{\mathrm{can}}\right)$. Motivated by work of Ramos, which uses complete integrability of the billiard on the disc to obtain some interesting embedding results for the Lagrangian bidisc by showing that the latter is symplectomorphic to a toric domain, this talk outlines how to obtain sharp obstructions to finding symplectic embeddings for some other subsets of $\left(\mathbb{R}^4, \omega_{\mathrm{can}}\right)$ by relating them to suitable toric domains. These subsets are related to integrable billiards on squares and rectangles. This is ongoing joint work with Vinicius G. B. Ramos. Eckhard Meinrenken On the quantization of Hamiltonian loop group spaces We will describe the construction of a spinor bundle for Hamiltonian loop group actions with proper moment maps, and various consequences. This is based on joint work with Yiannis Loizides and Yanli Song. Leonardo Mihalcea Virginia Tech University, USA An affine quantum cohomology ring A theorem of B. Kim identified the relations of the quantum cohomology ring of the (generalized) flag manifolds with the conserved quantities for the Toda lattice. It is expected that a similar statement exists, relating a quantum cohomology ring for the affine flag manifolds to the periodic Toda lattice. 
I will show how to construct a deformation of the usual quantum cohomology ring, depending on an additional affine quantum parameter. It turns out that the conserved quantities of the (dual) periodic Toda lattice give the ideal of relations in the new ring. The construction of the ring multiplication involves the "curve neighborhoods" of Schubert varieties in the affine flag manifold. For ordinary flag manifolds, these were defined and studied earlier by the speaker in several joint works with A. Buch, P.E. Chaput, and N. Perrin. This is joint with Liviu Mare. Jonathan Weitsman Northeastern University, USA On Geometric Quantization of (some) Poisson Manifolds Abstract: Geometric Quantization is a program of assigning to Classical mechanical systems (Symplectic manifolds and the associated Poisson algebras of $C^\infty$ functions) their quantizations --- algebras of operators on Hilbert spaces. Geometric Quantization has had many applications in Mathematics and Physics. Nevertheless the main proposition at the heart of the theory, invariance of polarization, though verified in many examples, is still not proved in any generality. This causes numerous conceptual difficulties: For example, it makes it very difficult to understand the functoriality of the theory. Nevertheless, during the past 20 years, powerful topological and geometric techniques have clarified at least some of the features of the program. In 1995 Kontsevich showed that formal deformation quantization can be extended to Poisson manifolds. This naturally raises the question as to what one can say about Geometric Quantization in this context. In recent work with Victor Guillemin and Eva Miranda, we explored this question in the context of Poisson manifolds which are "not too far" from being symplectic---the so-called b-symplectic or b-Poisson manifolds---in the presence of an Abelian symmetry group.
In this talk we review Geometric Quantization in various contexts, and discuss these developments, which end with a surprise. Steven Rayan University of Saskatchewan, Canada The quiver at the bottom of the twisted nilpotent cone on $\mathbb{CP}^1$ For the moduli space of Higgs bundles on a Riemann surface of positive genus, critical points of the natural Morse-Bott function lie along the nilpotent cone of the Hitchin fibration and are representations of $\mbox{A}$-type quivers in a twisted category of holomorphic bundles. The fixed points that globally minimize the function are representations of $\mbox{A}_1$. For twisted Higgs bundles on the projective line, the quiver describing the bottom of the cone is more complicated. We determine it and show that the moduli space is topologically connected whenever the rank and degree are coprime. This talk is based on arXiv:1609.08226. Elisheva Adina Gamse University of Toronto, Canada Vanishing theorems in the cohomology ring of the moduli space of parabolic vector bundles Let $\Sigma$ be a compact connected oriented 2-manifold of genus $g \geq 2$, and let $p$ be a point on $\Sigma$. We define a space $S_g(t)$ consisting of certain irreducible representations of the fundamental group of $\Sigma \setminus p$, modulo conjugation by $SU(n)$. This space has interpretations in algebraic geometry, gauge theory and topological quantum field theory; in particular if $\Sigma$ has a Kähler structure then $S_g(t)$ is the moduli space of parabolic vector bundles of rank $n$ over $\Sigma$. For $n=2$, Weitsman considered a tautological line bundle on $S_g(t)$, and proved that the $(2g)^{th}$ power of its first Chern class vanishes, as conjectured by Newstead. In this talk I will present his proof and then outline my extension of his work to $SU(n)$ and $SO(2n+1)$.
Compute $\left(\sqrt{625681}\right)^2$.

For any nonnegative number $n$, the value of $\sqrt{n}$ is the number whose square is $n$. So, when we square $\sqrt{n}$, we get $n$. Therefore, $\left(\sqrt{625681}\right)^2 = \boxed{625681}$.
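The answer is easy to verify numerically. The short Python check below is illustrative only; it also happens to confirm that $625681 = 791^2$ is a perfect square, although the identity $\left(\sqrt{n}\right)^2 = n$ holds for any nonnegative $n$:

```python
import math

n = 625681
root = math.sqrt(n)        # 625681 = 791**2, so the square root is exactly 791.0
squared = root ** 2        # squaring the square root recovers n

print(root)     # 791.0
print(squared)  # 625681.0

# math.isqrt gives the exact integer square root, confirming the result
assert math.isqrt(n) ** 2 == n
```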
Modelling of stellar convection

Friedrich Kupka & Herbert J. Muthsam

Living Reviews in Computational Astrophysics, volume 3, Article number: 1 (2017)

Abstract: The review considers the modelling process for stellar convection rather than specific astrophysical results. For achieving reasonable depth and length we deal with hydrodynamics only, omitting MHD. A historically oriented introduction offers first glimpses on the physics of stellar convection. Examination of its basic properties shows that two very different kinds of modelling keep being needed: low dimensional models (mixing length, Reynolds stress, etc.) and "full" 3D simulations. A list of affordable and not affordable tasks for the latter is given. Various low dimensional modelling approaches are put in a hierarchy and basic principles which they should respect are formulated. In 3D simulations of low Mach number convection the inclusion of then unimportant sound waves with their rapid time variation is numerically impossible. We describe a number of approaches where the Navier–Stokes equations are modified for their elimination (anelastic approximation, etc.). We then turn to working with the full Navier–Stokes equations and deal with numerical principles for faithful and efficient numerics. Spatial differentiation as well as time marching aspects are considered. A list of codes allows assessing the state of the art. An important recent development is the treatment of even the low Mach number problem without prior modification of the basic equation (obviating side effects) by specifically designed numerical methods. Finally, we review a number of important trends such as how to further develop low-dimensional models, how to use 3D models for that purpose, what effect recent hardware developments may have on 3D modelling, and others.

Introduction and historical background

The goal of this review is to provide an overview of the subject of modelling stellar convection.
It is supposed to be accessible not only to specialists on the subject, but to a wider readership including astrophysicists in general, students who would like to specialize in stellar astrophysics, and also to researchers from neighbouring fields such as geophysics, meteorology, and oceanography, who have an interest in convection from the viewpoint of their own fields. A detailed introduction into the subject would easily lead to a book of several hundred pages. To keep the material manageable and thus the text more accessible we have made a specific selection of topics. The very recent review of Houdek and Dupret (2015) chiefly deals with the subject of the interaction between stellar convection and pulsation, but indeed contains an extended, detailed introduction into local and non-local mixing length models of convection that are frequently used in that field of research. An extended review on the capabilities of numerical simulations of convection at the surface of the Sun to reproduce a large set of different types of observations has been given in Nordlund et al. (2009). The large scale dynamics of the solar convection zone and numerical simulations of the deep, convective envelope of the Sun have been reviewed in Miesch (2005). Numerical simulations of turbulent convection in solar and stellar astrophysics have also been reviewed in Kupka (2009b). An introduction into the Reynolds stress approach to model convection in astrophysics and geophysics has been given in Canuto (2009). Keeping these and further reviews on the subject in mind, we here focus on computational aspects of the modelling of convection which so far have found much less deliberation, in particular within the literature more accessible to astrophysicists. 
We are thus going to pay particular attention to computability in convection modelling and thus very naturally arrive at the necessity to describe both the advanced two-dimensional and three-dimensional numerical simulation approach as well as more idealized but also more affordable models of convection. The latter are not only based on the fundamental conservation laws, which are the foundation of numerical simulations of stellar convection, but introduce further hypotheses to keep the resulting mathematical model computationally more manageable. Indeed, there are two different types of problems which make convection an inherently difficult topic. One class of problems is of a general, physical nature. The other ones relate to the specific modelling approach. We deal with both of them in this review. Following the intention of keeping this text accessible in terms of contents and volume we also exclude here the highly important separate topic of magnetohydrodynamics (MHD), since this opens a whole new range of specific problems. Even within the purely hydrodynamic context, some specific problems such as rotation–convection interaction had to be omitted, essential as they are, considering that this review has already about twice the default length applicable for the present series. Before we summarize the specific topics we are going to deal with, we undertake a short tour on the history of the subject, as it provides a first overview on the methods and the problems occurring in dealing with a physical understanding and mathematical modelling of turbulent convection in stellar astrophysics. The first encounter with stellar convection occurred, not surprisingly, in the case of the Sun, with the discovery of solar granulation, even if the physical background was naturally not properly recognized. Early sightings are due to Galileo and to Scheiner (Mitchell 1916 and Mitchell's other articles of 1916 in that journal, see also Vaquero and Vázquez 2009, p. 143). 
Quite frequently, however, Herschel (1801), who reported a mottled structure of the whole solar surface, is termed the discoverer of solar granulation in the literature. The subject started to be pursued more vividly and also controversially in the 1860s (Bartholomew 1976). The photographic recording of solar granulation, the first one being due to Janssen (1878), clarified the subject of the shapes of solar granules and remained the principal method of direct investigation for many decades to come. These observations required a closer physical understanding. In 1904, Karl Schwarzschild raised the question whether the layering of the solar atmosphere was adiabatic, as was known to hold for the atmosphere of the Earth when strong up- and downflows prevail, or whether a then new concept of layering was appropriate, which he dubbed radiative equilibrium ("Strahlungsgleichgewicht" in the original German version, Schwarzschild 1906). By comparing theoretical predictions with the observed degree of solar limb darkening he concluded that the solar atmosphere rather obeyed radiative equilibrium. This applies even from our present viewpoint in the sense that the convective flux, which is non-zero in the lower part of the solar atmosphere, is small compared to the radiative one. Yet the very occurrence of granulation made it obvious that there had to be some source of mechanical energy able to stir it. For a period of time, the presence of flows in a rotating star, known through work of von Zeipel, Milne, and Eddington to necessarily occur, was considered a candidate. However, this turned out not to lead to a viable explanation when Eddington gave estimates of the velocities of these flows. Indeed, Eddington at first also considered convection not to be important for the physics of stars (cf. Eddington 1926), although he later on changed his opinion (Eddington 1938). The proper mechanism was figured out by Unsöld (1930).
He noted that under the conditions of the solar subsurface a descending parcel of material (say) would not only have to do work in order to increase pressure and temperature, but that ionization work would be required as well. From such considerations Unsöld concluded that below the solar surface a convection zone was to be expected, caused by the ionization of hydrogen. Another early line of thought had the stellar interior in mind and considered cases of energy sources strongly concentrated near the center of the star. For such a situation, convective zones were predicted to occur under appropriate circumstances by Biermann (1932). In that paper an analytical convection model was proposed, too: the mixing length theory of convection—which had initially not been developed to model solar convection and granulation.

"Classical" modelling

Once the basic causes of solar granulation, or rather of solar and stellar convection zones, had been identified in the early 1930s, theoreticians faced the problem of deriving models of stellar and, in the first instance, of solar convection. Ideally, one would of course solve the Navier–Stokes (or Euler's) equations of hydrodynamics, augmented with realistic microphysics and opacities, in some cases also elaborate radiative transfer calculations and other ingredients. Naturally that was out of the question at a time when, at best, only mechanical calculators were available. As a consequence, models were derived which were computationally (!) sufficiently simple to be, ultimately, incorporated into routine stellar structure or stellar atmosphere calculations. The mixing length paradigm, i.e., the concept of a characteristic length scale over which an ascending convective element survives, dissolving then and delivering its surplus energy, appears first in the work of Biermann (1932, 1942, 1948) and Siedentopf (1935).
In her influential 1953 paper, Vitense developed a model where the essential transition region between radiative and convective zone was considered more accurately. In its improved form derived in her 1958 paper (Böhm-Vitense 1958), and in several variants thereof, the mixing length model of stellar convection is still the most widely applied prescription in investigations of stellar structure and evolution (cf. Weiss et al. 2004). This is true despite shortcomings, some of which were mentioned already in the original paper of Biermann (1932).

Non-local models and large scale numerical simulations

From a computational point of view the "classical" approach to modelling amounts to greatly reducing the number of space dimensions from three to zero (in local models, where one has, at each point, just to solve some nonlinear algebraic equation) or from three to one (in non-local models, where a depth variable is retained and ordinary differential equations result). Regarding the time variable, it may either be discarded, as in mixing length theory or other local models of convection (Böhm-Vitense 1958; Canuto and Mazzitelli 1991; Canuto et al. 1996) or even some Reynolds stress models (Xiong 1985, 1986), or it may be retained, since that allows either finding a stationary solution more easily (Canuto and Dubovikov 1998) or making the theory applicable to pulsating stars, such as in non-local mixing length models (Gough 1977a, b) or other non-local models developed for this purpose (Kuhfuß 1986; Xiong et al. 1997). The need for non-local models of convection was motivated by two physical phenomena. First, there is "overshooting" of convective flow into neighbouring, "stable" layers and thus mixing between the two. Second, in pulsating stars convection interacts with pulsation: convection may cause pulsation, drive it or damp it, or convection may be modulated by pulsation (see Houdek and Dupret 2015, for a review).
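On the computational side, the "zero-dimensional" local models mentioned above reduce, at each depth point, to a single scalar nonlinear equation. The sketch below illustrates only this computational structure; the residual function, its coefficients, and the bracketing interval are invented for illustration and do not reproduce any published mixing length formulation (e.g., the Böhm-Vitense cubic, which couples the gradient to local thermodynamic quantities and a free mixing length parameter):

```python
def solve_local_gradient(residual, lo, hi, tol=1e-12, max_iter=200):
    """Bisection for a single scalar equation residual(x) = 0 on [lo, hi].

    In a local convection model, `residual` would encode the balance of
    convective and radiative energy flux at one depth point, and x would
    play the role of the local (super)adiabatic temperature gradient.
    """
    f_lo, f_hi = residual(lo), residual(hi)
    if f_lo * f_hi > 0:
        raise ValueError("root not bracketed on [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = residual(mid)
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if f_lo * f_mid <= 0:   # root lies in the lower half-interval
            hi = mid
        else:                   # root lies in the upper half-interval
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Toy stand-in for the per-layer equation (purely illustrative coefficients):
# a cubic residual whose root plays the role of the local gradient.
grad = solve_local_gradient(lambda x: x**3 + 2.0 * x - 1.0, 0.0, 1.0)
```

In a stellar structure code such a root-find is carried out once per mesh point (and time step), which is what makes local models so cheap compared to integrating differential equations in depth, let alone solving the full hydrodynamic equations.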
For a while the existence of overshooting had remained a controversial issue, as can be seen from the introductory summary in Marcus et al. (1983), who provided their own model for this process. Disagreement concerned both the modelling approach [the modal approach of Latour et al. (1976) is an example of a method "between analytical modelling and numerical simulation"] and the existence and importance of the phenomenon. The latter was settled later on (Andersen et al. 1990; Stothers and Chin 1991), but the disagreement on how to model it remained (cf. the comparison in Canuto 1993). As for pulsating stars, the classical formalism clearly required an extension. This development progressed from non-local mixing length models by Unno (1967) and Gough (1977a, b) to ever more advanced models including the Reynolds stress approach (Xiong et al. 1997) (see again Houdek and Dupret 2015). While the latter was pioneered by Xiong (1978), the most complete models intended for applications in astrophysics were published in a series of papers by Canuto (1992, 1993, 1997a), Canuto and Dubovikov (1998) and Canuto (1999). Nevertheless, most frequently used in practice are probably the non-local models of convection by Stellingwerf (1982) and Kuhfuß (1986) in studies of pulsating stars. We note that, in parallel to the developments in astrophysics, the need for a non-local description of convection has also been motivated by laboratory experiments on the relation of the heat flux (which is measured by the Nusselt number) as a function of Rayleigh number in Rayleigh–Bénard convection. A transition between "soft" (Gaussian distributed) and "hard" turbulence in such flows was noted (Heslot et al. 1987), followed by the demonstration of the existence of a large-scale, coherent flow (Sano et al. 1989). These experiments are no longer compatible with the assumption of a simple scaling relation which underlies also the local mixing length model used in astrophysics (cf.
its derivation in Spruit 1992). A much more complex set of scaling relations is required (see Grossmann and Lohse 2000; Stevens et al. 2013 for an attempt at a unifying theory) just to describe the interior of a convective zone as a function of fluid parameters (viscosity, heat conductivity) and system parameters (heat flux, temperature gradient). Such experiments have also been made for cases which exhibit what astrophysicists call overshooting and what meteorologists and oceanographers usually describe as "entrainment": field experiments and laboratory models of the convective, planetary boundary layer of the atmosphere of the Earth began in the late 1960s and early 1970s, respectively. Precision laboratory data on overshooting were obtained in water tank experiments (Deardorff and Willis 1985), followed by similarly accurate, direct measurements in the planetary boundary layer by probes (Hartmann et al. 1997). In both scenarios a convective zone is generated by heating from below and the convective layer is topped by an entrainment layer. Successful attempts to explain these data required the construction of either large scale numerical simulations or complex, non-local models of convection (cf. Canuto et al. 1994; Gryanik and Hartmann 2002 as examples). Although it is encouraging if a model of convection successfully explains such data, this does not imply that it also works in the case of stars: the physical boundary conditions play a crucial role for convective flows, and the terrestrial and laboratory cases fundamentally differ in this respect from the stellar case, which features no solid walls but can be subject to strong (and non-local) radiative losses and in general occurs and interacts with magnetic fields. Given the long-lasting need for models of convection which are physically more complete than the classical models, it is surprising that the latter are still in widespread use. However, as we discuss in Sect.
3, none of these more advanced models can achieve their wider region of applicability without additional assumptions, and in most cases those cannot be obtained solely from first principles, the Navier–Stokes equations we introduce in Sect. 2. Moreover, such models usually introduce considerable computational and analytical complexity and, until the more recent advent of space-based helio- and asteroseismology on the observational side and of advanced numerical simulations on the theoretical side, they were difficult to test. Furthermore, the traditional, integral properties of stars can easily be reproduced merely by adjusting free parameters of the classical models (for example, adjusting the mixing length to match the solar radius and luminosity, Gough and Weiss 1976). However, the advent of computers has made it possible to solve, in principle, the "complete", spatially multidimensional and time-dependent equations, often also for realistic microphysics and opacities and other physical ingredients as deemed necessary for the investigation at hand. Of course, in particular in early investigations, the space dimensionality had to be reduced to 2, microphysics had to be kept simple, and the like. But until today and for the foreseeable future it remains true that only a limited part of the relevant parameter range—in terms of achievable spatial resolution and computationally affordable time scales—is accessible in all astrophysically relevant cases. Such numerical simulations require a style of work differing from the one applicable to the papers we have cited up to now. Whereas papers devoted to classical modelling are often authored by just one person, numerical simulations practically always require researchers working in teams.
If we consider compressibility and coverage of a few pressure scale heights as the hallmark of many convection problems in stellar physics, the first simulations aiming at understanding what might be going on in stellar convection, such as Graham (1975) and Latour et al. (1976), date from the mid-1970s. Quite soon two rather different avenues of research were followed in the modelling community. In the first strand of research, interest focussed on solar (and later on stellar) granulation. The two-dimensional simulations of Cloutman (1979) are probably the earliest work in this direction. Indeed it took only a short time until the basic properties of solar granulation could be reproduced, in the beginning invoking the anelastic approximation (see Nordlund 1982, 1984). In contrast to many contemporary papers this pioneering work was already based on numerical simulations in three dimensions. The topic has since evolved in an impressive manner and now includes investigations of spectral lines and chemical abundances, generation of waves, magnetic fields, interaction with the chromosphere, among others. We refer to the review provided in Nordlund et al. (2009). Since its completion, important new results have been achieved, regarding excitation of pressure waves, local generation of magnetic fields, high resolution simulations, and others. An impression of recent advances in such areas can be gained, e.g., from Kitiashvili et al. (2013). These simulations require, in particular, a detailed treatment of the radiative transfer equation, since they involve the modelling of layers of the Sun or a star which directly emit radiation into space, i.e., which are optically thin. The second strand of investigations is directed more towards general properties of compressible convection and has stellar interiors in mind. As a consequence, it can treat radiative transfer in the diffusion approximation. Early work addressed modal expansions (Latour et al.
1976), but subsequently there was a trend towards using finite difference methods (Graham 1975; Hurlburt et al. 1984). In the course of time, the arsenal of numerical methods which were applied expanded considerably. Efforts to model a star as a whole or to consider a spherical shell, where deep convection occurs, made it necessary to abandon Cartesian (rectangular) box geometry. Simulations based on spectral methods for spherical shells were hence developed (see, e.g., Gilman and Glatzmaier 1981). In the meantime, many investigations have addressed the question of convection under the influence of rotation (including dynamo action), the structure of the lower boundary of the convection zone in solar-like stars (tachocline), core convection in stars more massive than the Sun, and others. Figure 1 provides an example. A number of such recent advances are covered in Hanasoge et al. (2015).
Fig. 1 Convection cells and differential rotation in simulations of an F-type star (spherical shell essentially containing the convective zone; near-surface regions not included in the calculations). Rotation rate increasing from top to bottom. Left column: radial velocity of convective flows in the upper part of the computational domain. Following columns: rotation rate, temperature deviation from horizontal mean, and stream function of meridional flow. Differential rotation is clearly visible. Image reproduced with permission from Augustson et al. (2012), copyright by AAS.
A rather different form of convection, dubbed semiconvection, occurs during various evolutionary stages of massive stars. In an early paper Ledoux (1947) addressed the question of how a jump or gradient in molecular weight within a star would develop and derived what is today known as the Ledoux criterion for convective instability.
Semiconvection occurs when a star (at a certain depth) would be unstable against convection in the sense of the temperature gradient (Schwarzschild criterion) but stable considering the gradient in mean molecular weight which arises from the dependence of chemical composition on depth (Ledoux criterion). In most of the literature the term semiconvection is restricted to the case where stability is predicted to hold according to the criterion of Ledoux (while instability is predicted by the Schwarzschild criterion), since both criteria coincide in the case of instability according to the Ledoux criterion, which implies efficient mixing and no difference from the case without a gradient in chemical composition. Since application of one or the other of the criteria leads to different mixing in stellar models, calculations of stellar evolution are affected accordingly (see the critical discussion, also on computational issues, by Gabriel et al. 2014). In addition, thermal and "solute" (helium in stars; salt in the ocean) diffusivities play a role in the physics of the phenomena. Semiconvection gained interest with the advent of models of stellar evolution, specifically through a paper by Schwarzschild and Härm (1958). The interest was, and is, based on the material transport, and therefore the stellar mixing, thought to be effected by semiconvection. Unlike ordinary convection, for which there existed and exists a standard recipe (in the form of mixing length theory) used in most stellar structure or evolution codes, such a recipe has not appeared for semiconvection (cf. Kippenhahn and Weigert 1994). Indeed, the semiconvection models which are used in studies of stellar physics are based on quite different physical pictures and arguments. Knowledge is at an early stage even regarding the most basic physical picture. Likewise, numerical simulations referring to the astrophysical context have appeared only more recently.
Early simulations, which however remained somewhat isolated, have been reported in Merryfield (1995) and Biello (2001). Only during the last few years has activity increased. For recent reviews on various aspects consult Zaussinger et al. (2013) and Garaud (2014) as well as the papers cited there (see also Zaussinger and Spruit 2013). The simulation parameters are far from stellar ones, but the ordering of the size of certain parameters is the same as in the true stellar case. Such simulations suggest that a popular picture of semiconvection, namely many convecting layers, horizontally quite extended, more or less stacked vertically and divided by (more or less) diffusive interfaces (Spruit 1992), is at least tenable, since it is possible to generate such layers also numerically, with an admittedly judicious choice of parameters.
Consequences and choosing the subjects for this review
Considering the advances made during the last decades, why is convection still considered an "unsolved problem" in stellar astrophysics? The reason is that although numerical simulations can nowadays provide answers to many of the questions posed in this context, they cannot be applied to all the problems of stellar structure, evolution, and pulsation modelling: this is simply unaffordable. At the same time, no computationally affordable model can fully replace such simulations: the uncertainties they introduce are tied to the whole range of physical approximations and assumptions that have to be made in those models, to the poor robustness of their results, and to the lack of universality of their model-specific parameters. Instead of providing an extended compilation of success and failure of the available modelling approaches, in this review we rather want to shed some more light on the origins of these difficulties and on how some of them can be circumvented while others cannot. Our focus is hence—in a broad sense—on the computational aspects of modelling stellar convection.
In particular, we want to provide an overview of which types of problems are accessible to numerical simulations now or in the foreseeable future and which ones are not. These questions are dealt with in Sect. 2 and motivate an overview in Sect. 3 on the state-of-the-art of convection modelling based on various types of (semi-) analytical models, since at least for a number of key applications they cannot be replaced by numerical simulations. Multidimensional modelling and the available techniques are reviewed in the next sections. The different variants of equations used in this context are reviewed in Sect. 4. There is a particular need for such a description, since a detailed summary of the available alternatives appears to be missing in the present literature, most clearly in the context of stellar astrophysics. We then proceed with an equally needed review of the available numerical methods along with their strengths and weaknesses in Sect. 5. We conclude this review with a discussion of modelling-specific limitations of the available approaches in Sect. 6. This section also addresses the hot topic of "parameter freeness" which is always around in discussions on the modelling of convection. Indeed, it has kept alive one of the most vivid, and arguably one of the least helpful, discussions in stellar astrophysics of the last few decades. An effort is hence made to disentangle the various issues related to this subject. The limitations—including that of claimed parameter freeness—are reviewed both from a more stringent and a more pragmatic point of view and it is hoped that this can provide some more help for future work on the subject of stellar convection. To keep the material for this review within manageable size, the topic of interaction of convection with rotation and in particular with magnetic fields had mostly to be omitted.
In a few cases we have provided further references to literature covering those topics, in particular with respect to the numerical methods for low-Mach-number flows discussed in Sect. 4. For a recent account of numerical magnetohydrodynamics, even in the more complex relativistic setting, we refer to Martí and Müller (2015).
What is the difficulty with convection?
The hydrodynamical equations and some solution strategies
What are the basic difficulties of modelling convection by analytical or numerical methods? To answer this question we first define the hydrodynamical equations, which actually are just the conservation laws of mass, momentum, and energy of a fluid. In Sect. 4 we return to the numerical implications of their specific mathematical form. The discovery of the hydrodynamical equations dates back to the eighteenth and nineteenth centuries. Analysis of the local dynamics of fluids eventually led to a set of partial differential equations which was proposed to govern the time development of a flow (see Batchelor 2000; Landau and Lifshitz 1963, e.g., for a derivation) for a fully compressible, heat conducting, single-component fluid. This was later extended to include forcing by magnetic fields. In the twentieth century the consistency of these equations with statistical mechanics and their limits of validity were demonstrated as well (Huang 1963; Hillebrandt and Kupka 2009, e.g.). A huge number of successful applications has established their status as fundamental equations of classical physics. Under their individual names they are known as continuity equation, Navier–Stokes equation (NSE), and energy equation and they describe the conservation of mass, momentum, and total energy. The term NSE is also assigned to the whole system of equations.
In the classical, non-relativistic limit they read (we write \(\partial _t f\) instead of \(\partial f / \partial t\) for the partial derivative in time of a function f here and in the following):
$$\begin{aligned} \partial _t \rho +\,\mathrm{div}\,\left( \rho \varvec{u} \right) = 0, \qquad (1) \end{aligned}$$
$$\begin{aligned} \partial _t (\rho \varvec{u}) +\,\mathrm{div}\,\left( \rho (\varvec{u} \otimes \varvec{u}) \right) = -\mathrm{div}\,{\varvec{\varPi }} - \rho \,\mathrm{grad}\,\varPhi , \qquad (2) \end{aligned}$$
$$\begin{aligned} \partial _t \left( \rho E \right) +\,\mathrm{div}\,\left( (\rho E + p) \varvec{u} \right) = q_{\mathrm{source}} +\,\mathrm{div}\,( {\varvec{\pi }} \varvec{u} ) - \rho \varvec{u}\cdot \,\mathrm{grad}\,\varPhi , \qquad (3) \end{aligned}$$
where \(q_{\mathrm{source}} = q_{\mathrm{rad}} + q_{\mathrm{cond}} + q_{\mathrm{nuc}}\) is the net production of internal energy in the fluid due to radiative heat exchange, \(q_{\mathrm{rad}}\), thermal conduction, \(q_{\mathrm{cond}}\), and nuclear processes, \(q_{\mathrm{nuc}}\). At this stage of modelling they are functions of the independent variables of (1)–(3), the spatial location \(\varvec{x}\) and time t. The same holds for the dependent variables of this system, \(\rho \), \(\varvec{\mu } = \rho \varvec{u}\), and \(e = \rho E\), i.e., the densities of mass, momentum, and energy. We note that \(\varvec{u} \otimes \varvec{u}\) is the dyadic product of the velocity \(\varvec{u}\) with itself and \(E=\varepsilon + \frac{1}{2}|\varvec{u}|^2\) is the total (sum of internal and kinetic) specific energy, each of them again functions of \(\varvec{x}\) and t.
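As a concrete illustration of the variable sets just introduced, the following sketch (our own minimal example, not tied to any particular simulation code) assembles the conserved densities evolved by (1)–(3) from the primitive variables and recovers them again:

```python
import numpy as np

def primitive_to_conserved(rho, u, eps):
    """Conserved densities of Eqs. (1)-(3): mass density rho,
    momentum density rho*u, total energy density rho*E,
    with E = eps + |u|^2 / 2 (eps = specific internal energy)."""
    E = eps + 0.5 * np.sum(u * u, axis=0)
    return rho, rho * u, rho * E

def conserved_to_primitive(rho, mom, etot):
    """Recover the primitive variables (rho, u, eps)."""
    u = mom / rho
    eps = etot / rho - 0.5 * np.sum(u * u, axis=0)
    return rho, u, eps

# round trip at a handful of sample points
rng = np.random.default_rng(0)
rho = rng.uniform(0.1, 1.0, size=8)
u = rng.normal(size=(3, 8))          # 3 velocity components per point
eps = rng.uniform(1.0, 2.0, size=8)
r2, m2, e2 = primitive_to_conserved(rho, u, eps)
_, u2, eps2 = conserved_to_primitive(r2, m2, e2)
assert np.allclose(u2, u) and np.allclose(eps2, eps)
```

Finite-volume codes typically evolve the conserved set and convert back to primitive variables whenever pressure or temperature are needed.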
The quantities \(q_{\mathrm{rad}}\) and \(q_{\mathrm{cond}}\) can be written in conservative form as \(q_{\mathrm{rad}} = -\mathrm{div}\,\varvec{f}_{\mathrm{rad}}\) and \(q_{\mathrm{cond}} = -\mathrm{div}\,\varvec{h}\), where \(\varvec{f}_{\mathrm{rad}}\) is the radiative flux and \(\varvec{h}\) the conductive heat flux, whereas \(q_{\mathrm{nuc}}\) remains as a local source term. Inside stars the mean free path of photons is about 2 cm and along this distance in radial direction the temperature changes are only of the order of \({\sim } 3\times 10^{-4}\) K (see Kippenhahn and Weigert 1994). This justifies a diffusion approximation for the radiative flux \(\varvec{f}_{\mathrm{rad}}\) which avoids solving an additional equation for radiative transfer. Indeed, the diffusion approximation is exactly the local limit of that equation (Mihalas and Mihalas 1984) and it reads
$$\begin{aligned} \varvec{f}_{\mathrm{rad}} = -K_\mathrm{rad}\,\mathrm{grad}\,T, \qquad (4) \end{aligned}$$
where \(T=T(\rho ,\varepsilon ,\hbox {chemical composition})\) is the temperature and \(K_\mathrm{rad}\) is the radiative conductivity,
$$\begin{aligned} K_\mathrm{rad} = \frac{4\,\mathrm{ac}\,T^3}{3 \kappa \rho } = \frac{16\sigma T^3}{3\kappa \rho }. \qquad (5) \end{aligned}$$
The quantities a, c, and \(\sigma \) are the radiation constant, vacuum speed of light, and Stefan–Boltzmann constant, while \(\kappa \) is the Rosseland mean opacity (see Mihalas and Mihalas 1984; Kippenhahn and Weigert 1994). \(\kappa \) is the specific cross-section of a gas for photons emitted and absorbed at local thermodynamical conditions (local thermal equilibrium) integrated over all frequencies (thus, \([\kappa ]= \mathrm{cm}^{2}\,\mathrm{g}^{-1}\) and \(\kappa =\kappa (\rho ,T,\hbox {chemical composition})\), see Table 1).
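Since \(\sigma = ac/4\), the two expressions for the radiative conductivity are identical term by term; a quick numerical cross-check (our own illustration, with CGS values of the constants and arbitrary example values for T, \(\kappa \), and \(\rho \)):

```python
a = 7.5657e-15        # radiation constant [erg cm^-3 K^-4]
c = 2.99792458e10     # vacuum speed of light [cm s^-1]
sigma = a * c / 4.0   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def K_rad(T, kappa, rho):
    """Radiative conductivity 16*sigma*T^3 / (3*kappa*rho), CGS units."""
    return 16.0 * sigma * T**3 / (3.0 * kappa * rho)

def K_rad_alt(T, kappa, rho):
    """Equivalent form 4*a*c*T^3 / (3*kappa*rho)."""
    return 4.0 * a * c * T**3 / (3.0 * kappa * rho)

# illustrative values only: T = 1e6 K, kappa = 1 cm^2/g, rho = 0.1 g/cm^3
k1 = K_rad(1e6, 1.0, 0.1)
k2 = K_rad_alt(1e6, 1.0, 0.1)
assert abs(k1 / k2 - 1.0) < 1e-12
```

The strong \(T^3\) dependence is what makes radiative diffusion so much more efficient in deep, hot layers than near the surface.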
The heat flux due to collisions of particles can be accurately approximated by the diffusion law
$$\begin{aligned} \varvec{h} = -K_\mathrm{h}\,\mathrm{grad}\,T, \qquad (6) \end{aligned}$$
where \(K_\mathrm{h}\) is the heat conductivity (cf. Batchelor 2000). In stars, radiation is usually much more efficient for energy transport than heat conduction. This is essentially due to the large mean free path of photons in comparison with those of ions and electrons. Conditions of very high densities are the main exception. These are of particular importance for modelling the interior of compact stars (see Weiss et al. 2004). A derivation of the hydrodynamical equations for the case of special relativity, in particular for the more general case where the fluid flow is coupled to a radiation field, is given in Mihalas and Mihalas (1984). The latter also give a detailed discussion of the transition between classical Galilean relativity, a consistent treatment containing all terms of order \(\mathrm{O}(v/\mathrm{c})\) for velocities v no longer much smaller than the speed of light c, and a completely co-variant treatment as obtained from general relativity. For an account of the theory of general relativistic flows see Lichnerowicz (1967), Misner et al. (1973) and Weinberg (1972), and further references given in Mihalas and Mihalas (1984). Numerical simulation codes used in astrophysical applications, in particular for the modelling of stellar convection, usually implement only a simplified set of equations when dealing with radiative transfer: typically, the fluid is assumed to have velocities \(v \ll \,\mathrm{c}\), whence the intensity of light can be computed from the stationary limit of the radiative transfer equation (see Chap. 2 in Weiss et al. 2004). The solution of that equation allows the computation of \(\varvec{f}_{\mathrm{rad}}\) and the radiative pressure \(p_{\mathrm{rad}}\) to which we return below.
Table 1: Variables and parameters in the model equations used throughout this text
For numerical simulation of stellar convection Eqs. (1)–(6) are often augmented by a partial differential equation for the time evolution of the (divergence free) magnetic induction \(\varvec{B}\) which also couples into the conservation laws for momentum and energy of the fluid, (2)–(3). A derivation of these equations and an introduction into magnetohydrodynamics can be found, for example, in Landau and Lifshitz (1984) and Biskamp (2008). Like the Navier–Stokes equations these can also be derived from the more general viewpoint of statistical physics (Montgomery and Tidman 1964), which allows recognizing their limits of applicability. For the remainder of this review we restrict ourselves to the classical, non-relativistic limit, (1)–(6), without a magnetic field. The radiative flux is obtained from the diffusion approximation (for the case of optically thin regions at stellar surfaces, which occurs in a few examples, it is assumed to be obtained from solving the stationary limit of the radiative transfer equation, see Weiss et al. 2004). Returning to Eqs. (1)–(6) we note that internal forces per unit volume, given by the divergence of the pressure tensor \(\varvec{\varPi }\), can be split into an isotropic part and an anisotropic one. The latter originates from viscous stresses. The isotropic part is just the mechanical pressure. It equals the gas pressure p from an equation of state, \(p=p(\rho ,T,\hbox {chemical composition})\), if extra contributions arising from compressibility are collected into the second or bulk viscosity, \(\zeta \) (see Batchelor 2000, for a detailed explanation).
Thus,
$$\begin{aligned} {\varvec{\varPi }} = p {{\varvec{I}}} - {\varvec{\pi }}, \qquad (7) \end{aligned}$$
where \({{\varvec{I}}}\) is the unit tensor with its components given by the Kronecker symbol \(\delta _{ik}\) and the components of the tensor viscosity \(\varvec{\pi }\) are given by (as for time t we abbreviate \(\partial f / \partial x_j\) by \(\partial _{x_j} f\))
$$\begin{aligned} \pi _{ik} = \eta \left( \partial _{x_k} u_i + \partial _{x_i} u_k - \frac{2}{3} \delta _{ik}\,\mathrm{div}\,\varvec{u} \right) + \zeta \delta _{ik}\,\mathrm{div}\,\varvec{u}. \qquad (8) \end{aligned}$$
The dynamical viscosity \(\eta \) is related to the kinematic viscosity \(\nu \) by \(\eta = \nu \rho \). Similar to \(\kappa \) the quantities \(\nu \) and \(\zeta \) are functions of the thermodynamical variables \(\rho , T\) (or \(\varepsilon \)), and chemical composition. Because \(\varvec{\pi }\) is a tensor of rank two, a quantity such as \(\varvec{\pi } \varvec{u}\) refers to the contraction of \(\varvec{\pi }\) with the vector \(\varvec{u}\). Note that (8) is linear in \(\varvec{u}\), which is an approximation sufficiently accurate for essentially all fluids of interest to astrophysics. A detailed derivation of (7)–(8) is given in Batchelor (2000). To model stellar conditions Eq. (2) has to be modified, since photons can transport a significant amount of momentum. This mechanical effect is represented by the radiation pressure tensor \(P^{ij}\) (see Mihalas and Mihalas 1984) which is coupled into Eqs. (2)–(3). For isotropic radiation this problem can be simplified since in that case \(P^{ij}\) can be written as the product of a scalar radiation pressure \(p_{\mathrm{rad}}\) and the unit tensor \({{\varvec{I}}}\). Because the contribution of \(\mathrm{div}\,(p_{\mathrm{rad}} {{\varvec{I}}})\) in (2) is additive, it is possible to absorb \(p_{\mathrm{rad}}\) into the term for the gas pressure and treat it as part of the equation of state (\(p_{\mathrm{rad}} = (1/3) a T^4\), see Weiss et al.
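A direct transcription of the viscous stress (8) into code helps check its algebraic properties, e.g., that the \(\eta \)-part is traceless and that \(\varvec{\pi }\) is symmetric (our own sketch; the velocity-gradient matrix is an arbitrary example, not from the text):

```python
import numpy as np

def viscous_stress(grad_u, eta, zeta):
    """Tensor viscosity pi_ik of Eq. (8); grad_u[i, k] = d u_i / d x_k."""
    div_u = np.trace(grad_u)          # div u = sum_i d u_i / d x_i
    I = np.eye(3)
    return eta * (grad_u + grad_u.T - (2.0 / 3.0) * div_u * I) + zeta * div_u * I

# arbitrary example velocity-gradient matrix
grad_u = np.array([[0.1, 0.3, 0.0],
                   [0.2, -0.4, 0.1],
                   [0.0, 0.5, 0.2]])
pi = viscous_stress(grad_u, eta=2.0, zeta=0.5)

# the shear (eta) part is traceless, so trace(pi) = 3 * zeta * div u
assert np.isclose(np.trace(pi), 3 * 0.5 * np.trace(grad_u))
assert np.allclose(pi, pi.T)   # pi is symmetric by construction
```

The split into a traceless shear part and a bulk part mirrors the separation of \(\eta \) and \(\zeta \) in (8).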
2004; Mihalas and Mihalas 1984). Such a procedure is exact at least as long as the diffusion approximation holds (Mihalas and Mihalas 1984) and allows retaining (2)–(3) in their standard form, a result of great importance for the modelling of stellar structure and evolution. Finally, the gradient of the potential of external forces, \(\varPhi \), has to be specified. The coupling of magnetic fields in magnetohydrodynamics as well as Coriolis forces in co-rotating coordinate systems could be considered as external forces. However, the external potential itself is usually just due to gravitation, where \(\varvec{g} = -\mathrm{grad}\,\varPhi \). Equations (2)–(3) are thus rewritten as follows:
$$\begin{aligned} \partial _t (\rho \varvec{u}) +\,\mathrm{div}\,\left( \rho (\varvec{u} \otimes \varvec{u}) \right) = -\mathrm{grad}\,p +\,\mathrm{div}\,\varvec{\pi } + \rho \varvec{g}, \qquad (9) \end{aligned}$$
$$\begin{aligned} \partial _t \left( \rho E \right) +\,\mathrm{div}\,\left( (\rho E + p) \varvec{u} \right) =\,\mathrm{div}\,( \varvec{\pi } \varvec{u} ) -\,\mathrm{div}\,\varvec{f}_\mathrm{rad} -\,\mathrm{div}\,\varvec{h} + \rho \varvec{u}\cdot \varvec{g} + q_\mathrm{nuc}. \qquad (10) \end{aligned}$$
As implied by the discussion above, here p is usually meant as the sum of gas and radiation pressure and supposed to be given by the equation of state (cf. Sect. 6.3, 6.4, and 11 in Weiss et al. 2004). The gravitational acceleration \(\varvec{g} = -\mathrm{grad}\,\varPhi \) is the solution of the Poisson equation \(\mathrm{div}\,\mathrm{grad}\,\varPhi = 4\pi \mathrm{G}\,\rho \), where G is the gravitational constant. Since in all cases of interest here \(q_\mathrm{nuc}\) is a function of local thermodynamic parameters (\(\rho , T\), chemical composition, cf. Kippenhahn and Weigert 1994), we find that Eqs.
(1), (9) and (10) together with (4)–(6) and (8) form a closed system of equations provided the material functions for \(\kappa , K_\mathrm{h}, \nu , \zeta \), and the equation of state are known as well.
Solution strategies
While (1)–(10) have been known for a long time, analytical solutions or even just proofs of existence of a unique solution have remained restricted to rather limited, special cases. So how should we proceed to use the prognostic and diagnostic capabilities of these equations? One possibility is to construct approximate solutions by means of numerical methods. We focus on this approach in Sects. 4 and 5. An alternative to that is to approximate first the basic equations themselves. Famous examples for this approach are the Boussinesq approximation, stationary solutions, or the Stokes equations for viscous flows (see Batchelor 2000; Quarteroni and Valli 1994, e.g., and Sect. 4.3.1 below). The equations of stellar structure for the fully radiative case with no rotation (cf. Kippenhahn and Weigert 1994; Weiss et al. 2004) provide another example. The latter can also be obtained from models of the statistical properties of the flow (see below). In most cases simplified variants of the basic equations also require numerical methods for their solution. This is still advantageous as long as the computational costs of such approximate solutions are lower (cf. Sect. 5) than those of numerical solutions of (1)–(10). Another possibility is the construction of a different type of mathematical model which describes properties of the hydrodynamical equations. Closest to the original equations are model equations for volume averages of (1)–(10). This is quite a natural approach, since each astrophysical observation also has a finite resolution in space, time, and energy, and in this sense refers to a volume average.
Numerical solutions constructed with this goal in mind are usually termed large eddy simulations (LES), although slightly different names are used to denote specific variants of it, for instance, iLES and VLES, abbreviations for implicit large eddy simulations and very large eddy simulations. The former refer to numerical solution methods for (1)–(10) where the numerical viscosity inherent to the solution scheme has the role of representing all effects operating on length scales smaller than the grid of spatial discretization used with the method. The latter implies that a significant amount of kinetic energy and dynamical interaction resides and occurs on such "unresolved" ("sub-grid") length scales. An introduction to LES can be found in Pope (2000). In astrophysics it is common to make no clear distinction between such calculations and direct numerical simulations (DNS). The latter actually refers to numerical approximations of (1)–(10) which do not assume any additional (statistical, physical) properties of the local volume average of the numerical solution to hold: all length scales of physical interest are hence resolved in such calculations, a requirement typically only fulfilled for mildly turbulent laboratory flows, viscous flows, and some idealized setups as used in numerical studies of the basic properties of (1)–(10). On the other hand, in "hydrodynamical simulations" of astrophysical objects it is often (implicitly) assumed that numerical viscosity of a scheme used to construct an approximate solution with moderate spatial resolution mimics the spatially and temporally averaged solution which is obtained with the same scheme at much higher resolution. Such simulations are actually iLES and hence clearly fall into the category of LES. We return to this subject further below. 
Since an LES approach may be unaffordable or difficult to interpret or compare with observational data, further physical modelling is often invoked to derive mathematical model equations that are more manageable. A classical example is given by the standard equations of stellar structure and evolution (cf. Kippenhahn and Weigert 1994; Weiss et al. 2004) which account for both radiative and convective energy transport. These equations are actually mathematical models for statistical properties of Eqs. (1)–(10). More generally, ensemble averaging can be used to construct model equations, for instance, for some mean density \({\overline{\rho }}\) and mean temperature \(\overline{T}\). The most common averages are one-point averages such as the Reynolds stress approach (see Sect. 3.3) which model statistical distributions that are functions of location and time only (cf. Lesieur 1997 and, in particular, Pope 2000). In turbulence modelling two-point averages are popular as well (see also Sect. 3.3). They deal with distribution functions that depend on the distance (or difference of locations) in addition to their spatial and temporal dependencies (Lesieur 1997; Pope 2000). The ensemble averaged approach requires additional, closure assumptions to construct complete sets of model equations. Because the closure assumptions cannot be derived from (1)–(10) alone, alternatives have been sought for a long time. The coherent structures found in turbulent flows have been interpreted as a hint that geometrical properties may be taken as a guideline towards a new modelling approach (cf. Lumley 1989). When comparing such ambitious goals with more recent introductions into the subject of turbulent flows (Pope 2000; Tsinober 2009), progress achieved along this route is more modest than one might have expected one or two decades earlier (Lumley 1989).
Interestingly, the analysis of structural properties of turbulent convection, for instance, has led to improved models of their statistical properties (Gryanik and Hartmann 2002; Gryanik et al. 2005), a nice example for why Tsinober (2009) has listed the idea that "statistics" and "structure" contrapose each other among the common misconceptions about turbulent flows. To replace the NSE at the fundamental level by a discrete approach has already been proposed several decades ago (see the discussion in Lumley 1989), for instance, through the concept of cellular automata (cf. Wolf-Gladrow 2000). Today the Lattice Boltzmann Methods (LBM) have become a common tool particularly in engineering applications, but rather as a replacement of LES or direct numerical simulations instead of becoming an approach for more theoretical insights (for an introduction see, e.g., Succi 2001). For the study of fluids in astrophysics the smooth particle hydrodynamics (SPH) method has become the most successful among the discrete or particle-based concepts to model fluid dynamics (for a general introduction, see, e.g., Violeau 2012). SPH may be seen as a grid-free method to solve the NSE and in this sense again it is rather an alternative to (grid-based) numerical solutions of (1)–(10) and not an analytical tool. Until now these discrete methods, however, have found little interest for the modelling of convection in astrophysics or geophysics. This is presumably because for many physical questions asked in this context there is not so much benefit from having very high local resolution at the expense of low resolution elsewhere. In Sects. 4 and 5 we discuss how grid-based methods can also deal with strong stratification, which may require high spatial resolution in a limited domain, and with non-trivial boundary conditions. A completely different goal has been suggested with the introduction of stochastic simulations of the multi-scale dynamics of turbulence (cf. Kerstein 2009).
Contrary to LBM it does not evolve probability density functions, i.e., properties of particles, nor does it require the definition of kernels to model interactions as in SPH. Rather, stochastic maps for the evolution of individual particles are introduced which realize the interactions themselves. Clearly, at the largest scale and in three spatial dimensions, such an approach would become unaffordable. But it appears highly suitable to construct subgrid-scale models for conventional LES (see Kerstein 2009). This holds particularly if the exchange of information (on composition, internal energy, etc.) is complex and crucial for the correct dynamics of the investigated system at large scales, for instance, in combusting flows.
Spatial grids for numerical simulations of stellar convection
Constructing a grid for simulating the entire solar convection zone
How expensive would a hydrodynamical simulation of an entire stellar convection zone, or of a star as a whole, be? Let us first have a look at the spatial scales of interest in a star, specifically our Sun. Adapting these estimates to other stars is simple and does not change the basic arguments. The solar radius has been measured to be \(R_{\odot } \sim 695{,}500\,\mathrm{km}\) (cf. Brown and Christensen-Dalsgaard 1998 and Chap. 18.4c in Weiss et al. 2004). From helioseismology the lower boundary of the solar convection zone is located at \(R / R_{\odot } \sim 0.713\) (Weiss et al. 2004, cf. Bahcall et al. 2005). The solar convection zone reaches the observable surface where \(R / R_{\odot } \sim 1\). Its depth is hence about \(D \sim 200{,}000\,\mathrm{km}\) considering overall measurement uncertainties (see also the comparison in Table 4 of Christensen-Dalsgaard et al. 1991). Differences of D of up to 10% are not important for most of the following. Another important length scale is given by solar granules which are observed to have typical sizes of about \(L_\mathrm{g} \sim 1200 \ldots 1300\,\mathrm{km}\).
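The quoted depth of the convection zone follows directly from the fractional radii just cited (a back-of-the-envelope check using only numbers given in the text):

```python
R_sun = 695_500.0        # solar radius [km], as quoted in the text
r_base = 0.713           # base of the convection zone in units of R_sun
D = (1.0 - r_base) * R_sun
print(f"D = {D:.3e} km")  # roughly 2e5 km
assert 1.9e5 < D < 2.1e5
```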
Measurements made from such observations have spatial resolutions as small as \({\sim }35\,\mathrm{km}\) (cf. Spruit et al. 1990; Wöger et al. 2008). By comparison the highest resolution LES of solar convection in three dimensions, which has been published thus far (Muthsam et al. 2011), has achieved a horizontal and vertical resolution of \(h \sim 3\,\mathrm{km}\). But this high resolution was limited to a region containing one granule and its immediate surroundings (Muthsam et al. 2011, regions further away were simulated at lower resolution). This value of h is orders of magnitude larger than the Kolmogorov scale \(l_\mathrm{d}\) which quantifies length scales where viscous friction becomes important (cf. Lesieur 1997; Pope 2000). \(l_\mathrm{d}\) can be constructed with dimensional arguments from the kinetic energy dissipation rate \(\epsilon \) and the kinematic viscosity as \(l_\mathrm{d} = (\nu ^3 \epsilon ^{-1})^{1/4}\). Due to the conservative form of (1)–(10), production of kinetic energy has to equal its dissipation. For the bulk of the solar convection zone, where most of the energy transport is by convection and no net energy is produced locally in the same region, one can estimate \(\epsilon \) from the energy flux through the convection zone (see Canuto 2009) as \(\epsilon \sim L_{\odot } / M_{\odot } \approx 1.9335\,\mathrm{cm}^2\,\mathrm{s}^{-3} \sim \,\mathrm{O}(1)\,\mathrm{cm}^2\,\mathrm{s}^{-3}\) using standard values for solar luminosity and mass (see Chap. 18.4c in Weiss et al. 2004, and references therein). From solar models (e.g., Stix 1989; Weiss et al. 2004) temperature and density as functions of radius can be estimated.
With Chapman's result (1954) on the kinematic viscosity of fully ionized gases, \(\nu = 1.2 \times 10^{-16}\, T^{5/2} \rho ^{-1}\,\mathrm{cm}^2\,\mathrm{s}^{-1}\), \(\nu \) is found in the range of 0.25–\(5\,\mathrm{cm}^{2}\,\mathrm{s}^{-1}\), whence \(l_\mathrm{d} \approx 1\,\mathrm{cm}\) throughout most of the solar convection zone (Canuto 2009). Near the solar surface the fluid becomes partially ionized. From Tables 1 and 2 in Cowley (1990) \(\nu \) is found in the range of 145 cm\(^2\) s\(^{-1}\) to 1740 cm\(^2\) s\(^{-1}\) for T between 19,400 and 5660 K. Thus, just at and underneath the solar surface, \(\nu \sim 10^3\,\mathrm{cm}^2\,\mathrm{s}^{-1}\), whence \(l_\mathrm{d} \approx 1\,\mathrm{m} \ldots 2\,\mathrm{m}\) in the top layers of the solar convection zone. Another length scale of interest is the thickness of radiative thermal boundary layers. We note that the designation "boundary layer" strictly speaking refers to the geophysical and laboratory scenario where heat enters the system through a solid horizontal plate (a scenario also used in idealized numerical simulations with astrophysical applications, e.g., Hurlburt et al. 1994; Muthsam et al. 1995, 1999; Chan and Sofia 1996). However, the same length scale \(\delta \) is equally important for convection zones without "solid boundaries" in the vertical direction, since it describes the length scale below which temperature fluctuations are quickly smoothed out by radiative transfer (in the diffusion approximation), and we use this notion for its definition. It is thus the length scale to be resolved for an accurate computation of the thermal structure and radiative cooling processes. Taking the diffusivity of momentum described by \(\nu \) as a reference, the Prandtl number \(\mathrm{Pr} = \nu / \chi \) for the solar convection zone is in the range of \(10^{-9}\) near the surface and increases inwards to about \(10^{-7}\) (see Sect. 4.1 in Kupka 2009b, for details).
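These order-of-magnitude estimates are easy to reproduce. The following sketch evaluates \(\epsilon \sim L_{\odot }/M_{\odot }\), Chapman's viscosity formula, and the resulting Kolmogorov scale \(l_\mathrm{d} = (\nu ^3 \epsilon ^{-1})^{1/4}\); the interior values of T and \(\rho \) are illustrative round numbers, not data from a specific solar model:

```python
# Order-of-magnitude estimates for the dissipation rate and Kolmogorov
# scale in the solar convection zone (CGS units throughout).

L_sun = 3.846e33   # erg/s, solar luminosity
M_sun = 1.989e33   # g, solar mass

# Kinetic energy dissipation rate from the energy flux through the
# convection zone (Canuto 2009): eps ~ L_sun / M_sun ~ O(1) cm^2 s^-3
eps = L_sun / M_sun

def nu_chapman(T, rho):
    """Chapman's (1954) kinematic viscosity of a fully ionized gas."""
    return 1.2e-16 * T**2.5 / rho   # cm^2/s

def l_kolmogorov(nu, eps):
    """Kolmogorov dissipation scale from dimensional analysis."""
    return (nu**3 / eps)**0.25      # cm

# Illustrative values for the deeper convection zone (T ~ 1e6 K,
# rho ~ 0.1 g/cm^3 are round numbers, not model data):
nu_int = nu_chapman(1.0e6, 0.1)
print(f"eps       = {eps:.3f} cm^2/s^3")
print(f"nu_int    = {nu_int:.2f} cm^2/s")
print(f"l_d(int)  = {l_kolmogorov(nu_int, eps):.2f} cm")

# Near the surface, nu ~ 1e3 cm^2/s (partially ionized gas, Cowley 1990):
print(f"l_d(surf) = {l_kolmogorov(1.0e3, eps):.0f} cm")
```

The interior value reproduces \(l_\mathrm{d} \approx 1\,\mathrm{cm}\), the near-surface value \(l_\mathrm{d}\) of order a meter, as quoted above.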
Since for diffusion equations one can relate the mean free path l, the diffusivity d, and the collision time t through \(t \approx l^2/d\) to each other (cf. Chap. 18.3a in Weiss et al. 2004), we can compare heat and momentum transport on the same reference time scale. This may be, for instance, the time scale of convective transport over the largest length scales appearing at a certain depth of the convection zone (the size of granules, e.g.). During this amount of time heat diffusion spreads thermal perturbations over a distance \(\delta = \sqrt{\chi t_\mathrm{ref}}\). Since this choice for \(t_\mathrm{ref}\) is also the time scale during which kinetic energy is dissipated (cf. Sects. 1 and 2 in Canuto 2009), we may use it to compare \(\delta ^2\) with \(l_\mathrm{d}^2\) and obtain \(\delta ^2 / l_\mathrm{d}^2 \sim \chi / \nu =\,\mathrm{Pr}^{-1}\) (note that the time scale cancels out in this ratio). Thus, \(\delta \sim \,\mathrm{Pr}^{-1/2}\,l_\mathrm{d}\) and for the lower part of the solar convection zone, \(\delta _\mathrm{min} \sim 30\,\mathrm{m}\), while near the surface, \(\delta _\mathrm{surface} \sim 30\,\mathrm{km} \ldots 60\,\mathrm{km}\). We note that near the solar surface, \(\chi \) varies rapidly and in the solar photosphere the fluid becomes optically thin, but for a rough estimate of scales these approximations are sufficient. Indeed, the steepest gradients in the solar photosphere are found just where the fluid has already become optically thick, and \(\delta _\mathrm{surface}\) is thus closely related to the resolution used in LES of solar (stellar) surface convection, as we discuss below. Evidently, it is hopeless to try to resolve the Kolmogorov scale, as required by a DNS, in a simulation which encompasses the whole solar convection zone: this would require about \(N_\mathrm{r}(l_\mathrm{d}) \sim 2\times 10^{10}\) grid points in the radial direction.
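The scaling \(\delta \sim \,\mathrm{Pr}^{-1/2}\,l_\mathrm{d}\) and the radial grid-point count follow directly; a minimal sketch, using the Pr and \(l_\mathrm{d}\) values quoted above:

```python
# Thermal boundary-layer scale delta ~ Pr^(-1/2) * l_d and the number of
# radial grid points needed to resolve the Kolmogorov scale (CGS units).

D = 2.0e10            # cm, depth of the solar convection zone (~200,000 km)

def delta_from_prandtl(Pr, l_d):
    """delta^2 / l_d^2 ~ chi / nu = 1/Pr  =>  delta ~ Pr^(-1/2) * l_d."""
    return Pr**-0.5 * l_d

# Lower convection zone: Pr ~ 1e-7, l_d ~ 1 cm
delta_min = delta_from_prandtl(1e-7, 1.0)       # ~30 m
# Near the surface: Pr ~ 1e-9, l_d ~ 1.5 m
delta_surf = delta_from_prandtl(1e-9, 150.0)    # a few tens of km

N_r = D / 1.0          # radial points needed to resolve l_d ~ 1 cm
print(f"delta_min  ~ {delta_min/100:.0f} m")
print(f"delta_surf ~ {delta_surf/1e5:.0f} km")
print(f"N_r(l_d)   ~ {N_r:.1e}")
```

This recovers \(\delta _\mathrm{min} \sim 30\,\mathrm{m}\), \(\delta _\mathrm{surface}\) of a few tens of kilometers, and \(N_\mathrm{r}(l_\mathrm{d}) \sim 2\times 10^{10}\).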
With D and \(R_{\odot }\) the solar convection zone is estimated to have a volume of \(V \sim 9\times 10^{32}\,\mathrm{cm}^3\), which yields the number of required grid points, \(N_\mathrm{tot}(l_\mathrm{d}) \sim 9\times 10^{32}\) (a simulation of the total solar volume exceeds this by less than 60%). Even before taking time integration into account, it is clear that such a calculation is pointless to attempt on semiconductor-based computers (irrespective of whether a DNS is considered to be really necessary or not). The odds are not much better for a simulation to resolve \(\delta \) throughout the solar convection zone, since this requires a grid with \(h = \min (\delta ) = \delta _\mathrm{min} \sim 30~\mathrm{m}\). That resolution is coarser by a factor of 3000, which reduces the number of (roughly equidistantly spaced) grid points by a factor of \(2.7\times 10^{10}\) to \(N_\mathrm{tot}(\delta _\mathrm{min}) \sim 3.3\times 10^{22}\) for the solar convective shell. If we take ten double words to hold the five basic variables \(\rho , \varvec{\mu }, e\) and the derived quantities \(\varvec{u}, T, P\) for each grid cell volume, we obtain a total of 80 bytes as a minimum storage requirement per grid point (most hydrodynamical simulation codes require at least five to ten times that amount of memory). We can thus estimate a single snapshot to take 2.6 YB (Yottabyte, \(1\,\mathrm{YB} = 10^{24}\) bytes) of main memory. This exceeds current supercomputer memory capacities by seven to eight orders of magnitude. As a minimum requirement for an LES of the entire solar convection zone one would like to resolve at least the radiative cooling layer near the surface. This is necessary to represent the radiative cooling of gas at the solar surface within the simulation on the computational grid, as it is the physical process which drives the entire convective flow (cf. Spruit et al. 1990; Stein and Nordlund 1998).
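These memory estimates can be checked in a few lines; the shell volume of \(9\times 10^{32}\,\mathrm{cm}^3\) and the minimum of 80 bytes per cell are the values stated above:

```python
# Memory estimates for hypothetical DNS-like grids of the solar
# convection zone.

V = 9.0e32                 # cm^3, volume of the solar convective shell
BYTES_PER_CELL = 80        # ten double words per grid cell (minimum)

def grid_memory(h_cm):
    """Equidistant cell count and snapshot memory for grid spacing h."""
    n_cells = V / h_cm**3
    return n_cells, n_cells * BYTES_PER_CELL

# Resolving the Kolmogorov scale (h = l_d ~ 1 cm):
n_kol, mem_kol = grid_memory(1.0)
# Resolving the thermal scale delta_min ~ 30 m = 3000 cm:
n_th, mem_th = grid_memory(3000.0)

print(f"N_tot(l_d)       ~ {n_kol:.1e} cells")
print(f"N_tot(delta_min) ~ {n_th:.1e} cells, "
      f"{mem_th/1e24:.1f} YB per snapshot")
```

Within rounding this reproduces the \(3.3\times 10^{22}\) cells and the Yottabyte-scale snapshot quoted in the text.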
From our previous estimates we would expect that \(h = \min (\delta _\mathrm{surface})/2 \sim 15\,\mathrm{km}\), because two grid points are the minimum to represent a feature in a solution and thus catch a property of interest related to it. Indeed, \(h \lesssim 15\,\mathrm{km}\) is the standard resolution of LES of solar granulation (see Table 3 in Beeck et al. 2012 and Sect. 4 of Kupka 2009a). Then, radiative cooling is properly modelled, i.e., at \(h = 12\,\mathrm{km}\) the horizontal average of the vertical radiative flux \(F_\mathrm{rad}\) is found to be smooth in a one-dimensional numerical experiment even though its negative divergence, the cooling rate \(q_\mathrm{rad}\), would require a ten times higher resolution (Nordlund and Stein 1991). From the same calculations T is found to change by up to \(190\,\mathrm{K}\,\mathrm{km}^{-1}\) which at 10,000 K and at \(h = 12\,\mathrm{km}\) is a relative change of \({\sim }23\%\) between two grid cells. In actual LES of about that resolution (Stein and Nordlund 1998) T changes vertically on average only by up to \(30\,\mathrm{K}\,\mathrm{km}^{-1}\) and in hot upflows by up to \(100\,\mathrm{K}\,\mathrm{km}^{-1}\), a relative change less than 4% (or up to 12% where it is steepest). As the maximum mean superadiabatic gradient \(\partial \ln T / \partial \ln P\) is found to be about 2 / 3 in these simulations (Rosenthal et al. 1999), the corresponding changes in P between grid cells are up to 6% on average and 18% at most. Hence, the basic thermodynamical variables are resolved on such a grid despite opacity \(\kappa \) and thus optical depth \(\tau \) and the cooling rate \(q_\mathrm{rad}\) vary more rapidly by up to an order of magnitude due to the extreme temperature sensitivity of opacity in the region of interest (Nordlund and Stein 1991). 
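The relative changes between adjacent grid cells quoted above follow from gradient times grid spacing divided by the local value; a quick check with the numbers from Nordlund and Stein (1991) and Stein and Nordlund (1998):

```python
# Relative change of a quantity between two adjacent grid cells for a
# given vertical gradient, grid spacing, and local value.

def rel_change(grad_per_km, h_km, value):
    return grad_per_km * h_km / value

# 1D experiment (Nordlund & Stein 1991): up to 190 K/km at T ~ 10,000 K
print(f"{rel_change(190, 12, 1.0e4):.0%}")   # ~23%
# LES averages (Stein & Nordlund 1998): up to 30 K/km ...
print(f"{rel_change(30, 12, 1.0e4):.0%}")    # ~4%
# ... and up to 100 K/km in hot upflows:
print(f"{rel_change(100, 12, 1.0e4):.0%}")   # ~12%
```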
The actual resolution demands are somewhat higher than the simple estimate of \(h \approx 15\,\mathrm{km}\) which is anyway limited to regions where the approximation of radiative diffusion holds (optical depths \(\tau \gtrsim 10\)). Since it appears to be sufficient in practice (cf. Beeck et al. 2012; Kupka 2009b) we use it for the following estimate. By lowering the number of grid points along the solar convection zone in radial direction to \(N_\mathrm{r}(\delta _\mathrm{surface}) \sim 13{,}000\) we obtain that \(N_\mathrm{tot}(\delta _\mathrm{surface}) {\sim }2.7\times 10^{14}\) or \({\sim }4.3\times 10^{14}\) for the Sun as a whole. For grids of this size it is becoming possible to store one snapshot of the basic variables in the main memory of a supercomputer, if the entire capacity of the machine were used for this purpose: a minimum amount of 80 bytes per grid cell requires 21.6 PB (or 34.4 PB, respectively), although production codes are more likely to require several 100 PB – 1 EB for such applications. One can further reduce this resolution demand by taking into account that the pressure scale height \(H_p\) drastically increases from the top to the bottom of the convection zone. We recall its definition, \(H_p = -(\partial r / \partial \ln p) \approx p/(\rho g)\), where g is the (negative) radial component of \(\varvec{g}\). In the stationary case, for spherical symmetry and negligibly small turbulent pressure the two expressions are identical for \(r > 0\), so the second one is almost always used to define \(H_p\) (cf. Kippenhahn and Weigert 1994; Weiss et al. 2004). If we require the relative accuracy of p to be the same throughout the convection zone, it suffices to keep the number of grid points per pressure scale height constant or, simply, scale the radial grid spacing with \(H_p\). 
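Scaling the radial grid spacing with \(H_p\) amounts to a grid that is (roughly) logarithmically spaced in pressure. A minimal sketch, assuming illustrative surface values of \(h \approx 15\,\mathrm{km}\) and \(H_p \approx 150\,\mathrm{km}\) (i.e., 10 points per scale height) and about 20 pressure scale heights in total:

```python
import math

# Radial grid with a fixed number of points per pressure scale height:
# with h ~ 15 km and H_p ~ 150 km at the surface, that is 10 points per
# scale height; over ~20 scale heights this keeps N_r small.

points_per_Hp = 150.0 / 15.0    # surface H_p / surface grid spacing
n_scale_heights = 20.0          # depth of the solar convection zone

N_r = points_per_Hp * n_scale_heights
print(f"N_r ~ {N_r:.0f}")

# Equivalently, the cell-to-cell pressure ratio is constant:
# p_{i+1}/p_i = exp(-1/points_per_Hp), since dr = H_p / points_per_Hp.
pressure_ratio = math.exp(-1.0 / points_per_Hp)
print(f"pressure ratio per cell ~ {pressure_ratio:.3f}")
```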
Except for the surface layers, where \(\mathrm{grad}\,T\) can become very steep, this should ensure comparable relative accuracy of numerical derivatives for all basic thermodynamic quantities throughout the solar convection zone under the assumption that microphysical data (equation of state, opacity, etc.) are also known to the same relative accuracy. With \(h \approx 15\,\mathrm{km}\) and \(H_p \approx 150\,\mathrm{km}\) at the solar surface and for a total depth of solar convection zone of about 20 pressure scale heights (cf. Stix 1989; Weiss et al. 2004, and references therein) one can thus reduce \(N_\mathrm{r}(\delta _\mathrm{surface})\) to an optimistic \(N_\mathrm{r}(\mathrm{minimal}) \sim 200\). Since pressure stratification occurs only vertically, the net gain is a factor of 65, whence \(N_\mathrm{tot}(\mathrm{minimal}) \sim 4.5\times 10^{12}\). Because the solar interior region has its own resolution demands due to the temperature sensitivity of nuclear fusion (Kippenhahn and Weigert 1994; Weiss et al. 2004), there can be no further gain for models of the entire Sun. For simplicity we assume the overall scaling factor in comparison with models of the convective shell to be the same and obtain \(N_\mathrm{tot}(\mathrm{minimal}) \sim 7.2\times 10^{12}\) for models of the Sun. Memory requirements as low as 0.33–0.5 PB are within reach of the current, largest supercomputers. Again, realistic production codes may require something like 2–20 PB for such a model. Such demands may mislead one to consider LES of that kind suitable for next generation supercomputers, but there are further, severe constraints ahead. As we discuss in Sect. 2.3 it is the necessary number of time steps which continues to prohibit this class of simulations for the Sun for quite a few years to come. 
Solar simulations hence have to be limited to smaller segments or boxes as domains which include the solar (and in general stellar) surface, or alternatively to spherical shells excluding the surface layers. A list that also includes exceptions to these limitations, together with a summary of computational complexity, is given in Sect. 2.6.

Computing grids for affordable problems

If one sufficiently limits the extent of the domain of the numerical simulation, its computational demands can be brought into the range of affordable problems. The construction of computing grids which are affordable on existing hardware is hence the first step to make LES of convection in stars viable. Two important ideas to this end have frequently been used and they can readily be generalized:

The box-in-a-star approach suggests performing the numerical simulation only in a spatially limited domain, the location of which is usually considered to be close to the surface of a star so as to include those layers emitting photons directly into space, i.e., the photosphere of the star. This is not in any way a necessary condition, as the same simulation technique can also be applied to layers located completely inside a star. But the "3D stellar atmospheres" are certainly the most prominent application of this kind since the pioneering work of Nordlund (1982). The main challenge in this approach is to define suitable boundary conditions which allow for an in- and outflow of energy. Usually, this is also assumed for mass and momentum, in which case the boundary conditions are called open. Due to the strong stratification found near the observable surface of most stars, the size of the dominant flow structure is small enough that a Cartesian geometry can be assumed (cf. Stein and Nordlund 1998), except for the case of bright giants and supergiants which require a different way of modelling (see below).
The gravitational acceleration is hence approximated as constant along the vertical direction, which coincides with the radial one of the actual star, and the curvature of the star in that domain is neglected. This motivates the introduction of periodic boundary conditions in the plane orthogonal to the direction of gravity. The simulation domain has to be defined sufficiently wide for this approach to work (cf. Appendix A of Robinson et al. 2003). The choice of vertical boundary conditions is more subtle and may depend on the type of star under investigation. A recent comparison of the impact of boundary conditions on numerical simulations of the solar surface can be found in Grimm-Strele et al. (2015a). A large body of astrophysical literature attests to the success of this approach. A detailed account for just the case of the Sun has already been the subject of a review of its own (Nordlund et al. 2009). We note here that this basic idea is in no sense limited to the case of stars, but is equally applicable to giant planets, the atmosphere of the Earth and meteorology in particular, to oceanography, or other types of flow problems whenever it is possible to reasonably model the influence of the environment of a simulation box through boundary conditions. Indeed, in some of those other scientific disciplines the equivalent of a box-in-a-star ansatz already has a decade-long tradition in applications.

Generalization: simulations with artificial boundaries inside the star. The simulation can be designed so as to exclude the near-surface layers in numerical simulations of just those stars for which in turn the Cartesian box-in-a-star approach is particularly suited for simulations of their surface layers. Here, the upper (near surface) boundary condition is located sufficiently far inside the star that large scale flow structures can be resolved throughout the simulation domain.
This permits setting up a shell-in-a-star approach where the curved geometry of a star (either a sphere or an ellipsoid) is properly accounted for. In the spherical case, the stellar radius replaces the vertical coordinate used in box-in-a-star type simulations and a series of shells then builds up the simulation grid, which may be a sector (in 2D) or a full shell (in 3D). Pioneered at about the same time by Gilman and Glatzmaier (1981) as its box-in-a-star cousin, this approach has since been applied to the case of (rotating) stars including our Sun and planets including the interior of our Earth, even though the latter in terms of viscosity and, in particular, Prandtl number (\(\mathrm{Pr} \gg 1\)) is the extreme opposite of the Sun (\(\mathrm{Pr} \ll 1\)).

For several special physical situations it is possible to perform full 3D LES of entire stars with a star-in-a-box approach: for supergiants, especially for AGB stars, such simulations are feasible as the large, energy carrying scales of the flow are no longer small compared to the stellar diameter (cf. Sect. 4.7 in Freytag et al. 2012, and references therein). This is similar to supernovae where spatial and temporal scales separated in earlier evolutionary stages by orders of magnitude become comparable to each other, leaving only the turbulent flame front to subgrid modelling (Röpke and Schmidt 2009). We note that the transition from this kind of simulation to box-in-a-star and shell-in-a-star cases is not a sharp one, since for AGB star simulations (Freytag et al. 2012) the central region of the star is also not included for lack of resolution.

Generalization: simulations with mapped grids and interpolation between grids to generate natural boundaries.
Although in most cases not affordable for stellar simulations with realistic microphysics other than for the special case of supernovae, grid mapping and interpolation between grids can be used to avoid artificial boundaries inside a star and to trace the stellar surface layers to optimize resolution. This allows, at least in principle, simulating an entire star with optimized grid geometry. We return to the topic of such grids in Sect. 5.4. For each of these scenarios the computational grid for 3D LES nowadays comprises between about 100 and 500 grid cells per spatial direction (for very few cases this value may currently range up to around 2000), an impressive development beyond the \({\approx } 16\) cells which were the limit faced in the work of Gilman and Glatzmaier (1981) and Nordlund (1982). In the case of only two instead of three spatial dimensions, the number of grid cells can be accordingly much larger, for instance, up to 13,000 cells along the azimuthal direction in the simulation of a full \(360^{\circ }\) sector, i.e., a ring, located at equatorial latitude, in a 2D LES of the He ii ionization zone and the layers around it for the case of a Cepheid (Kupka et al. 2014) (see also Fig. 14). This way the computations have computer memory requirements which put them in the realm of equipment anywhere between workstations and the largest and fastest supercomputers currently available. But as already indicated, memory consumption and a large spread of spatial scales to be covered by a computational grid are not the only restrictions on the affordability of numerical simulations.

Hydrodynamical simulations and the problem of time scales

General principles

Hydrodynamical simulations based on variants of (1)–(10) are conducted to predict the time development of \(\rho , \varvec{u}\), and E within a specified region in space starting from an initial state.
Since in astrophysics that state cannot be determined in sufficient detail from observational data only, the initial conditions have to be constructed from approximations. For numerical simulations of stellar convection one-dimensional stellar structure or stellar atmosphere models can provide the necessary input to initialize the calculation. A more recent, but particularly detailed description of this procedure is given in Grimm-Strele et al. (2015a). Other basic variables such as the velocity field \(\varvec{u}\) have to be chosen according to computational convenience since a "realistic guess" is impossible. If a "sufficiently similar" multidimensional state has been constructed from a previous simulation, it can be used to construct a new state through scaling (this is simple for changing resolution, where interpolation is sufficient, but quite subtle, if quantities such as the input energy flux at the bottom or the gravitational acceleration are to be changed). Unless obtained through scaling from a simulation with the same number of spatial dimensions, the initial state is slightly perturbed randomly. Each of \(\rho , \varvec{u}, p\), or \(\varvec{\mu }\) has been used for this purpose (see Sect. 3.6 in Muthsam et al. 2010, Sect. 2 in Kupka et al. 2012, and Sect. 2.7 in Mundprecht et al. 2013 for examples of such different perturbations being used with the same numerical simulation code). The simulation is then conducted for a time interval \(t_\mathrm{rel}\) during which the physical state is supposed to relax to a "typical state". This is followed by simulation over a time interval \(t_\mathrm{stat}\) adjacent to the relaxation time \(t_\mathrm{rel}\). The numerical data obtained during \(t_\mathrm{stat}\) are then considered suitable for physical interpretation. 
The physical meaningfulness of this procedure requires that an ergodicity hypothesis holds (Tsinober 2009): essentially, one expects that a sufficiently long time series of measurements or of a numerical simulation has the same statistical properties as an average obtained from several (possibly shorter) time series each of which is related to a different initial condition. This requires that the measured properties are invariant under time evolution (Chap. 3.7 in Tsinober 2009), an "intuitively evident" property of turbulence which is in fact very difficult to prove. Particularly, there are flows which are only turbulent in a limited domain, such as turbulent jets and wakes past bodies (Tsinober 2009). These may not even be "quasi-ergodic" which would ensure otherwise that the physical states in phase space are visited by a long-term simulation according to their realization probability. Nevertheless, the assumption that turbulent convective flows "forget" their detailed initial conditions is considered to be well-confirmed by current research. The mean thermal structure (and also large-scale or global magnetic fields, the latter being excluded from a more detailed discussion here anyway) can have a longer "memory", i.e., their initial state has an influence on the solution over long integration times, a principle used to speed up relaxation described further below in Sect. 2.3.4. But the mean thermal structure is also more influenced by the boundary conditions of the problem than the turbulent flow field which adjusts itself to a state that is often very different from its initial condition. Eventually, the granulation pattern of solar convection is found with each numerical code capable of doing such kind of simulations (cf. Fig. 1 in Beeck et al. 2012, reprinted here as Fig. 2 for convenience). Even if quite different solar structure models are used as initial states of a simulation, the same average thermal structure is recovered (cf. Sect. 
3.3 of Grimm-Strele et al. 2015a). The numerical simulation approach is hence tenable for astrophysical applications. Thus, one can start from approximate models, relax the simulations towards a statistically stationary state (cf. Pope 2000), and perform a numerical study with one or a few long-term simulation runs. But what are the minimum and maximum time scales to be considered for this kind of numerical simulation? Let us consider minimum time scales first.

Fig. 2: Tracing granules with the vertically emerging continuum intensity at 500 nm which results from numerical simulations with the CO\(^5\)BOLD code (left figure), the Stagger code (middle figure), and the MURaM code (right figure). Units of the two horizontal axes are Mm. While different resolution, numerical methods, and domain size result in different granule boundaries, the basic flow pattern remains unaltered. Image reproduced with permission from Beeck et al. (2012), copyright by ESO.

Time steps and time resolution in numerical simulations of stellar convection

In applications to helioseismology, for example, we might want to study the fate of sound waves near the solar surface, whereas stellar evolution during core or shell burning of hydrogen or helium takes place on time scales which depend on nuclear processes, chemical composition, and the total stellar mass. As a result, the time scales of interest may range from a few seconds to more than \(10^{17}\,\mathrm{s}\) (and even much smaller time scales are of relevance in stars other than the Sun, for instance, in white dwarfs, which in turn gradually cool on time scales of billions of years). Can one get around those 17 orders of magnitude when performing a "3D stellar evolution simulation"?
Not without introducing some kind of averaging, which means introducing a new set of basic equations: whether grid-based or particle-based, the maximum allowed time steps as well as the required duration of a numerical simulation are properties which stem from the dynamical equations themselves and cannot easily be circumvented. We discuss the most important time step restrictions in the following. The best-known constraint on time integration is the Courant–Friedrichs–Lewy (CFL) limit due to advection (Strikwerda 1989). For a discretization of the NSE with mesh widths \(\varDelta x, \varDelta y, \varDelta z\) in each of the three spatial directions it requires that the time step \(\varDelta t\) is bounded by $$\begin{aligned} \varDelta \,t_{\mathrm{adv}} \leqslant C_{\mathrm{adv}} \min \left\{ \varDelta x, \varDelta y, \varDelta z\right\} / {\max (\left| \varvec{u}\right| )}. \end{aligned}$$ In the case of a variable mesh the minimum of (11) over all grid cells has to be taken. \(C_{\mathrm{adv}}\) depends on both the temporal and spatial discretization schemes chosen. This limit is obtained from linear (von Neumann) stability analysis of the advection terms in (1)–(3) and ensures that each signal, i.e., a change in the solution which propagates with velocity \(\varvec{u}\), is taken into account during each time step. In practice, (11) cannot be overcome even by implicit time integration methods. The reason is that, as long as the flow is not close to stationarity, the solution changes throughout its evolution on just that time scale \(\tau \sim \varDelta \,t_{\mathrm{adv}}\), typically by several percent or more. Hence, even fully implicit time integration methods cannot exceed a value of \(C_{\mathrm{adv}} \sim 1\). We refer to the discussion of the ADISM method in Sect. 3 of Robinson et al.
(2003), where a time step of at most five times that of an explicit method had been achieved for the case of a solar granulation simulation, and the maximum \(\varDelta t\) was indeed set by advection. Note that fast moving shock fronts are already taken into account by (11). In practice, attempts at increasing \(\varDelta t\) beyond what would correspond to a value of \(C_{\mathrm{adv}} \gtrsim 1\) will lead to failure in solving the non-linear system of equations obtained in fully or semi-implicit time discretization methods, while explicit methods will run into the usual exponential growth of linear instabilities (cf. Strikwerda 1989). Since locally the flow can become slightly supersonic near the solar surface (Bellot Rubio 2009), for values of \(h \sim 12\)–15 km and a sound speed of roughly \(10~\mathrm{km}\,\mathrm{s}^{-1}\) (see Fig. 18.11 in Weiss et al. 2004) we obtain that \(\varDelta t \lesssim 1\,\mathrm{s}\) in an LES of the surface of the solar convection zone. Sound waves, which originate (Chap. 8 in Landau and Lifshitz 1963, Chap. 10 in Richtmyer and Morton 1967, Chap. 3.6 in Batchelor 2000) from the presence of \(\mathrm{grad}\,p\) in (9), can introduce restrictions similar to (11). Tracking sound waves on a computational grid requires resolving their motion between grid cells. As also revealed by a local characteristics analysis, this requires that for explicit time integration methods the sound speed \(c_\mathrm{s}\) is added to the flow velocity in (11), whence \(\varDelta \,t_{\mathrm{cour}} \leqslant C_{\mathrm{adv}} \min \left\{ \varDelta x, \varDelta y, \varDelta z\right\} / {\max (\left| \varvec{u}\right| +c_\mathrm{s})}\) (see Chap. 12 in Richtmyer and Morton 1967, cf. Muthsam et al. 2010).
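The advective and acoustic limits are straightforward to evaluate; with the near-surface values quoted above (\(h \sim 12\,\mathrm{km}\), \(c_\mathrm{s} \sim 10\,\mathrm{km}\,\mathrm{s}^{-1}\), flow speeds up to slightly supersonic) and \(C_{\mathrm{adv}} = 1\) as an illustrative choice:

```python
# Advective (CFL) and acoustic (Courant) time step limits for an
# explicit scheme, following Eq. (11) and its acoustic variant.

def dt_advective(h, u_max, C_adv=1.0):
    """dt <= C_adv * min(dx, dy, dz) / max(|u|)."""
    return C_adv * h / u_max

def dt_courant(h, u_max, c_s, C_adv=1.0):
    """Acoustic variant: the sound speed is added to the flow velocity."""
    return C_adv * h / (u_max + c_s)

h = 12.0       # km, grid spacing near the solar surface
u = 10.0       # km/s, slightly supersonic flow speed
cs = 10.0      # km/s, sound speed near the surface

print(f"dt_adv  ~ {dt_advective(h, u):.2f} s")   # ~1.2 s
print(f"dt_cour ~ {dt_courant(h, u, cs):.2f} s") # ~0.6 s
```

Both values are consistent with the \(\varDelta t \lesssim 1\,\mathrm{s}\) estimate in the text.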
If accurate tracking is not needed, particularly for low Mach number flows, where sound waves carry very little energy, this restriction can be avoided by numerical methods which use an additive splitting approach or analytical approximations to allow implicit time integration of the \(\mathrm{grad}\,p\) term (see Sects. 4, 5). The application of additive splitting methods is motivated by the structure of (9)–(10) where each term corresponds to a particular physical process which in turn can impose a time step restriction \(\tau \leqslant \varDelta t\) on a numerical method. This algebraic structure of the NSE simplifies the construction of semi-implicit or implicit–explicit methods which can remove such restrictions as long as the solution changes only by a small amount during a time interval \(\tau \sim \varDelta \,t\). According to linear stability analysis (Strikwerda 1989) terms representing diffusion processes such as viscous friction, \(\mathrm{div}\,\varvec{\pi }\) and \(\mathrm{div}\,(\varvec{\pi } \varvec{u})\), conductive heat transfer, \(\mathrm{div}\,\varvec{h} =\,\mathrm{div}\,(K_\mathrm{h}\,\mathrm{grad}\,T)\), and radiative transfer in the diffusion approximation, \(\mathrm{div}\,\varvec{f}_\mathrm{rad} =\,\mathrm{div}\,(K_\mathrm{rad}\,\mathrm{grad}\,T)\), give rise to restrictions of the following type: \(\varDelta \,t_{\mathrm{visc}} \leqslant C_{\mathrm{visc}} \min \left\{ (\varDelta x)^2, (\varDelta y)^2, (\varDelta z)^2\right\} / {\max (\nu )}\) and, in particular, $$\begin{aligned} \varDelta \,t_{\mathrm{rad}} \leqslant C_{\mathrm{rad}} \min \left\{ (\varDelta x)^2, (\varDelta y)^2, (\varDelta z)^2\right\} / {\max (\chi )}. \end{aligned}$$ For realistic stellar microphysics \(\varDelta \,t_{\mathrm{visc}}\) poses no restriction, since in practice \(\varDelta \,t_{\mathrm{visc}} \gg \varDelta \,t_{\mathrm{adv}}\) for achievable spatial resolutions (Sect. 2.2). 
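The quadratic dependence on grid spacing makes the diffusive restriction (12) severe at high resolution. A sketch with an illustrative near-surface radiative diffusivity \(\chi = \nu /\mathrm{Pr} \sim 10^{3}/10^{-9} = 10^{12}\,\mathrm{cm}^2\,\mathrm{s}^{-1}\), an order-of-magnitude value inferred from the \(\nu \) and Pr figures quoted earlier rather than taken from a stellar model:

```python
# Diffusive (radiative) time step limit, cf. Eq. (12):
# dt <= C * min(dx,dy,dz)^2 / max(chi).  CGS units; chi is an
# illustrative order-of-magnitude value, not model data.

def dt_diffusive(h_cm, chi, C=1.0):
    return C * h_cm**2 / chi

h = 12.0e5      # cm (12 km grid spacing near the solar surface)
chi = 1.0e12    # cm^2/s, illustrative radiative diffusivity

print(f"dt_rad ~ {dt_diffusive(h, chi):.2f} s")

# Halving the grid spacing quarters the allowed time step:
print(dt_diffusive(h / 2, chi) / dt_diffusive(h, chi))   # 0.25
```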
However, condition (12) can lead to serious limitations not only in convection studies with idealised microphysics (cf. Kupka et al. 2012), but even more so for the simulation of convection in stars such as Cepheids (Fig. 9 in Mundprecht et al. 2013). This restriction can be avoided using fully implicit time integration methods (cf. Dorfi and Feuchtinger 1991, 1995; Dorfi 1999) or more cost-efficient implicit–explicit methods (e.g., Happenhofer 2013). Moreover, within the photospheric layers of a star condition (12) is relieved, if the radiative transfer equation is solved instead of the diffusion approximation, which anyway does not hold in an optically thin fluid. For the linearization of this problem it was shown (Spiegel 1957) that in the optically thin limit the relaxation rate of temperature perturbations by radiation is proportional to the (inverse) conductivity only and with a smooth transition to a quadratic dependence on grid resolution for the optically thick case represented by Eq. (12) (see Sect. 3.2 in Mundprecht et al. 2013): $$\begin{aligned} \varDelta \,t_{\mathrm{rad}}&\lesssim \min \left( \frac{c_\mathrm{p}}{16\kappa \sigma T^{3}}\left( 1-\frac{\kappa \rho }{k}\,\mathrm{arccot}\frac{\kappa \rho }{k}\right) ^{-1}\right) \nonumber \\&=\min \left( \frac{1}{\chi }\,\frac{1}{3(\kappa \rho )^2}\left( 1-\frac{\kappa \rho }{k}\,\mathrm{arccot}\frac{\kappa \rho }{k}\right) ^{-1}\right) . \end{aligned}$$ The symbols used here have mostly been introduced in Table 1 and the minimum is obtained by relating the inverse size k of the perturbation to the grid spacing: \(k=C_\mathrm{rad}/\min \{\varDelta x,\varDelta y, \varDelta z\}\) and \(C_\mathrm{rad}\) depends on the numerical method (typically, \(C_\mathrm{rad} \approx 1\)). From Taylor expansion it is straightforward to see that in the limit of large optical depth \((\kappa \,\rho k^{-1} \rightarrow \infty )\) Eq. (13) coincides with (12), if we take the maximum of \(\chi \) in both equations. 
For small optical depth \((\kappa \,\rho k^{-1} \rightarrow 0)\) the dependence on k and thus on grid resolution disappears: \(\varDelta \,t_{\mathrm{rad}}\lesssim \min (c_\mathrm{p} / (16\kappa \sigma T^{3}))=\min ((3 \chi (\kappa \rho )^2)^{-1})\), which is to be compared with the optically thick case where \(\varDelta \,t_{\mathrm{rad,thick}}\lesssim (3 \chi (\kappa \rho )^2)^{-1} (3(\kappa \rho )^2/k^2)\). From taking the ratio \(\varDelta \,t_{\mathrm{rad,thick}}/\varDelta \,t_{\mathrm{rad}}=3(\kappa \rho )^2/k^2\) and considering constant values of grid spacing it becomes evident that \(\varDelta \,t_{\mathrm{rad,thick}}\) and thus Eq. (12) is far more restrictive than (13). Firstly, the product \(\kappa \rho \) is orders of magnitude smaller for the outermost layers of a star than for its interior. Furthermore, for finite T and \(c_\mathrm{p}\) the quantity \(\varDelta \,t_{\mathrm{rad}}\) becomes large for the outermost layers as long as \(\kappa \) continues to drop. Changing to the physically appropriate criterion (13) concurrently with solving the full radiative transfer equation instead of resorting to the diffusion approximation hence removes the unnecessary restrictions of the latter for optically thin fluids. However, even if the radiative transfer equations are solved, a high radiative cooling rate \(q_\mathrm{rad}\) may still introduce prohibitively small time steps \(\varDelta \,t \leqslant \varDelta \,t_{\mathrm{rad}}\). Examples for this problem are A-type stars (Freytag and Steffen 2004; Kupka et al. 2009) or the lower photosphere of Cepheids (Mundprecht et al. 2013). Again, this restriction can be resolved by means of implicit time integration methods (see Dorfi and Feuchtinger 1991, 1995) as long as the relative changes of the solution in each grid cell, after some initial relaxation, remain small.
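The limiting behaviour of Eq. (13) can be verified numerically: writing \(x = \kappa \rho /k\), the bracket \((1 - x\,\mathrm{arccot}\,x)^{-1}\) tends to \(3x^2\) for \(x \rightarrow \infty \), recovering Eq. (12), and to 1 for \(x \rightarrow 0\), the grid-independent optically thin limit:

```python
import math

# Limiting behaviour of the bracket term in Eq. (13), with
# x = kappa * rho / k, using arccot(x) = arctan(1/x) for x > 0.

def bracket(x):
    return 1.0 / (1.0 - x * math.atan(1.0 / x))

# Optically thick limit: bracket(x) -> 3 x^2, so Eq. (13) -> Eq. (12)
x = 100.0
print(bracket(x) / (3 * x**2))   # ratio -> ~1

# Optically thin limit: bracket(x) -> 1, no dependence on resolution
print(bracket(1.0e-3))           # -> ~1
```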
The pure source terms which are due to gravitational acceleration, \(\rho \varvec{g}\), and the generation of energy by nuclear reactions, \(q_\mathrm{nuc}\), do not directly depend on grid resolution. As such they can be neglected in asymptotic stability analyses (Strikwerda 1989), but can at least in some special cases cause time step restrictions. For the diffusive phase of hydrodynamical simulations of semi-convection buoyancy poses a very moderate restriction: \(\varDelta \,t_\mathrm{buoy} \leqslant t_\mathrm{buoy} = \min \left\{ (\varDelta r / \max (g_\mathrm{r}))^{1/2}\right\} \) (where only the vertical or radial grid spacing, \(\varDelta r\), and its associated component of gravitational acceleration, \(g_\mathrm{r}\), are important, see Kupka et al. 2012 for references). This becomes irrelevant as soon as convective mixing sets in (Kupka et al. 2012). Nuclear energy generation in turn sets the time scale of stellar evolution as a whole (Weiss et al. 2004) and thus is usually the longest time scale of interest in stellar physics, except in late stages of nuclear burning of massive stars and, of course, during supernovae (Kippenhahn and Weigert 1994). Hence, in most cases, when using suitable implicit time integration methods, the time step of a hydrodynamical simulation of stellar convection can become as large as \(\varDelta \,t_{\mathrm{adv}}\), but not larger than that, since this is the time scale on which convection changes the local state in a grid cell.

Implications from \(\varDelta \,t_{\mathrm{adv}}\) for performing LES of stellar convection zones

As we have seen in Sect. 2.3.2 the numerical time integration of (1), (9), and (10) is in any case restricted to a step \(\varDelta t\) at each time t which is bounded by the minimum (11) of \(\varDelta \,t_{\mathrm{adv}}\) over the entire simulation domain for that t. Usually, this restriction is most severe for the surface layers of a star.
First of all, a much higher spatial resolution is required for a physically meaningful representation of the mean structure along the vertical direction near the top of a star. This is caused by the much smaller scale heights near the surface (Chap. 6.1 in Kippenhahn and Weigert 1994) and the efficient cooling of stars in optically thin layers (e.g., Kippenhahn and Weigert 1994; Weiss et al. 2004). Secondly, the velocity \(\varvec{u}\) is also much higher in just those layers. This can be understood by considering the term \(\mathrm{div}\,\left( (\rho E + p) \varvec{u} \right) \) in the energy equation (10) which is just the divergence of the advected flux, i.e., the sum of convective (or enthalpy) flux and flux of kinetic energy. The vertical components of these fluxes are split as $$\begin{aligned} F_\mathrm{adv} = (\rho E + p) u_\mathrm{vert}= & {} (\rho \varepsilon + p) u_\mathrm{vert} + \frac{1}{2} \rho \varvec{u}^2 u_\mathrm{vert} \nonumber \\= & {} \rho h u_\mathrm{vert} + \frac{1}{2} \rho \varvec{u}^2 u_\mathrm{vert} = F_\mathrm{conv} + F_\mathrm{kin}, \end{aligned}$$ and \(h = \varepsilon + p/\rho \) is the specific enthalpy. Evidently, these fluxes result from the non-vanishing velocity field \(\varvec{u}\) in convective zones. If, as is the case inside the upper part of the convection zone of the Sun, \(F_\mathrm{adv}\) accounts for almost the entire vertical transport of energy (e.g., Stein and Nordlund 1998; Weiss et al. 2004; Grimm-Strele et al. 2015a) and taking into account the much lower density and pressure near the top (cf. Stein and Nordlund 1998; Weiss et al. 2004, or any other model of the solar surface layers), it is clear that the velocity has to increase towards the top of the solar convection zone to maintain a constant luminosity throughout the envelope of the star (cf. Chap. 4 in Kippenhahn and Weigert 1994 and for quantitative estimates Table 14.1 in Weiss et al. 2004). 
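The split of the advected flux in Eq. (14) is straightforward to apply to simulation output; a minimal sketch (the helper below is our own construction, not part of any particular simulation code):

```python
import numpy as np

def advected_fluxes(rho, eps, p, u_vert, u2):
    """Split the vertical advected flux, Eq. (14), into the convective
    (enthalpy) flux and the flux of kinetic energy.
    rho, eps (specific internal energy), p, u_vert, and u2 = |u|^2 are
    arrays of identical shape on the same grid."""
    h = eps + p / rho                 # specific enthalpy
    F_conv = rho * h * u_vert         # enthalpy (convective) flux
    F_kin = 0.5 * rho * u2 * u_vert   # flux of kinetic energy
    return F_conv, F_kin
```

By construction the two parts sum to \((\rho E + p)\,u_\mathrm{vert}\) with \(E = \varepsilon + \tfrac{1}{2}\varvec{u}^2\), which is a useful consistency check on simulation diagnostics.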
The latter is an indicator of thermal equilibrium which holds during most stellar evolution phases (Kippenhahn and Weigert 1994; Weiss et al. 2004) when no major sources or sinks of energy exist in a convective zone in a stellar envelope other than oscillations around such an equilibrium state (cf. Chaps. 4, 6, and 39 in Kippenhahn and Weigert 1994). In the end, \(|\varvec{u}|\) is large(st) near the surface, and \(\varDelta \,t_{\mathrm{adv}} \lesssim 1\,\mathrm{s}\) limits not only LES of the solar surface alone but also any LES of the entire solar convection zone, even if restrictions due to sound waves are eliminated by a (semi-) implicit method as discussed in Sects. 4 and 5. Hence, the time step of an LES of the entire solar convection zone is limited at least by \(\varDelta \,t_{\mathrm{adv}}\) as obtained for the top of the simulation domain. The same holds for other stars with surface convection zones, if they are included in the simulation domain. This is one important reason why the surface layers are excluded in current numerical simulations of the lower part of the convection zone in the Sun and in similar stars (cf. Miesch 2005).

Duration of numerical simulations of stellar convection

For how long do we have to conduct an LES of stellar convection? To this end the following time scales are of interest: the free fall time or time scale of hydrostatic equilibrium (\(t_{\mathrm{hyd}}\)), the related acoustic time scale (\(t_{\mathrm{ac}}\)), the convective turn over time scale (\(t_{\mathrm{conv}}\)), the time scale for relaxation towards statistical equilibrium (\(t_{\mathrm{rel}}\)), the time required to achieve statistical stationarity when evaluating a physical quantity (\(t_{\mathrm{stat}}\)), the time scale of stellar pulsation (\(t_{\mathrm{osc}}\)), the time scale of thermal adjustment (\(t_{\mathrm{therm}}\)), the Kelvin–Helmholtz time scale (\(t_{\mathrm{KH}}\)), and the nuclear evolution time scale (\(t_{\mathrm{nuc}}\)).
There are also time scales related, e.g., to rotation, concentration diffusion, or magnetic field interaction, but their role either follows from an extension of the following discussion or requires more general dynamical equations than (1)–(10) beyond the scope of this review. A brief discussion of \(t_{\mathrm{hyd}}, t_{\mathrm{therm}}\), and \(t_{\mathrm{nuc}}\) can be found in Chap. 0.2 of Weiss et al. (2004). \(t_{\mathrm{hyd}}\) is relevant for stars which are not yet or no longer in hydrostatic equilibrium, i.e., during early phases of star formation or during a supernova. It is of the order of the time it takes for a sound wave to travel from the surface of a star to its centre. For the Sun \(t_{\mathrm{hyd}}\) is about 1 h (Weiss et al. 2004). Except in the case of exploding stars, stellar convection takes place in conditions of approximate hydrostatic equilibrium, hence we do not consider \(t_{\mathrm{hyd}}\) any further. The convective turn over time scale \(t_{\mathrm{conv}}\) can in general be defined as $$\begin{aligned} t_{\mathrm{conv}} =\int _{r_\mathrm{a}}^{r_\mathrm{b}} u_x^{-1}(r)\,\mathrm{d}r, \end{aligned}$$ where \(r_\mathrm{b} - r_\mathrm{a}\) is either the vertical (radial) height H of the simulation box or an interval contained inside it, \((r_\mathrm{b} > r_\mathrm{a})\), and \(u_x = \langle (u-\langle u\rangle )^2\rangle ^{0.5}\) is the root mean square difference of the local vertical velocity and its horizontal mean, usually also averaged in time. If measured over the entire length H this time scale is in practice always longer than the acoustic time scale or sound crossing time \(t_\mathrm{ac}\). The latter is given by $$\begin{aligned} t_\mathrm{ac} = \int _{r_\mathrm{a}}^{r_\mathrm{b}} c_\mathrm{s}^{-1}(r)\,\mathrm{d}r, \end{aligned}$$ where \(c_\mathrm{s}\) is the local, horizontally averaged sound speed. Following Chaps. 3 and 4 in Kippenhahn and Weigert (1994) and Chap. 17.4 in Weiss et al.
(2004) the local Kelvin–Helmholtz time scale is obtained from the virial theorem as $$\begin{aligned} t_{\mathrm{KH}} = \left( -3{\int _{{M_s(r_\mathrm{a})}}^{M_s(r_{\mathrm{b}})}} p \rho ^{-1}\,{\mathrm{d}}M_s\right) / {\mathcal {L}}, \end{aligned}$$ with the luminosity \({{\mathcal {L}}}\) given by \({{\mathcal {L}}} = 4\pi r^2 F_\mathrm{tot}\) for the case of a spherically symmetric star with mass \({{\mathcal {M}}}\). \(M_s(r)={{\mathcal {M}}}-M_r\) is the total mass found in the shell when integrating downwards from the surface (note the sign due to the direction of integration and see also Sect. 4.3 in Kupka 2009b, where an extended discussion on the subject of numerical simulation time scales for stellar convection is given). This is also the time scale over which an energy flux of size \(F_\mathrm{tot}\) against the direction of \(\varvec{g}\) can be sustained by (gravitational) potential energy (Kippenhahn and Weigert 1994; Weiss et al. 2004).

Fig. 3 (caption): Time scales for a numerical simulation of convection at the solar surface with the ANTARES simulation code (Muthsam et al. 2010) (details on the simulation: Belkacem et al. 2015, in prep.). The solar photosphere extends down to about 700 km, the layer for which the largest temperature gradients are found, and the region just around that depth level is known as the superadiabatic peak.

\(t_\mathrm{KH}\) is often close to the time scale \(t_\mathrm{therm}\) on which thermal equilibrium is reached, i.e., when local energy production as well as gains and losses through energy transport balance each other (Chap. 5.1 in Weiss et al. 2004, for details on when and why this occurs see Chap. 5.3 in Kippenhahn and Weigert 1994).
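The crossing-time integrals of Eqs. (15) and (16) reduce to a one-dimensional quadrature over horizontally averaged profiles; a minimal sketch (function name and trapezoidal quadrature are our choices):

```python
import numpy as np

def crossing_time(r, speed):
    """Evaluate Eqs. (15)/(16): integrate 1/speed over the radial grid r
    (ascending, consistent units) with the trapezoidal rule. Pass the rms
    vertical velocity u_x for t_conv, or the horizontally averaged sound
    speed c_s for t_ac."""
    inv = 1.0 / np.asarray(speed, dtype=float)
    dr = np.diff(np.asarray(r, dtype=float))
    return float(np.sum(0.5 * (inv[:-1] + inv[1:]) * dr))
```

Since \(c_\mathrm{s} \gg u_x\) almost everywhere in stellar convection zones, evaluating both integrals on the same grid directly illustrates the statement that \(t_{\mathrm{conv}} > t_{\mathrm{ac}}\) over the full height H.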
If the thermal adjustment is due to radiative transfer in the diffusion approximation, it can be estimated from \(t_\mathrm{therm} \approx t_\mathrm{rad, diff}\), where $$\begin{aligned} t_\mathrm{rad, diff} \approx (r_\mathrm{b} - r_\mathrm{a})^2 / \chi , \end{aligned}$$ and \(\chi \) is the radiative (thermal) diffusivity (we recall that the diffusion approximation of radiative transfer generally holds for stellar interiors, cf. Mihalas and Mihalas 1984; Weiss et al. 2004, and note that for locally rapidly varying \(\chi \) this definition can be modified for more accurate estimates). In this case, also \(t_\mathrm{KH} \approx t_\mathrm{therm}\) and inside radiative (convectively stable) zones these three time scales hence often agree to within less than an order of magnitude. But this is not always the case, since local energy sources (or sinks) and compression also contribute to thermal adjustment and particularly inside convective zones \(t_\mathrm{rad, diff}\) can be much longer than \(t_\mathrm{KH}\) or \(t_\mathrm{therm}\) (see Fig. 3). Under special circumstances such as an isothermal core in an evolved star even \(t_\mathrm{KH}\) and \(t_\mathrm{therm}\) largely differ, too (see Chap. 5.3 in Kippenhahn and Weigert 1994 for details). In any case, relaxation to a statistically stationary state of a star requires the simulated domain of the object to be in thermal equilibrium (Chap. 5.1 of Weiss et al. 2004) and hence \(t_\mathrm{therm}\) is of major interest to any LES of stellar convection. In case there is no flow and no local energy sources, the only thermal energy transport is through radiative (or heat) diffusion, whence \(t_\mathrm{therm} = t_\mathrm{rad, diff}\), which follows straightforwardly from the dynamics of the temperature field being described by the heat equation (see Chap. 5.3 in Kippenhahn and Weigert 1994). 
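Equation (18) and a depth-dependent variant can be sketched as follows. The generalization via the squared integral of \(\chi^{-1/2}\) is our own choice of modification for a rapidly varying \(\chi\), one of several possibilities hinted at in the text:

```python
import numpy as np

def t_rad_diff(r_a, r_b, chi):
    """Radiative diffusion time scale, Eq. (18), for constant chi."""
    return (r_b - r_a)**2 / chi

def t_rad_diff_profile(r, chi):
    """Variant for a depth-dependent diffusivity chi(r): the squared
    integral of chi**(-1/2) over r, which reduces to Eq. (18) when chi
    is constant. (Our assumption, labeled as such; the text only notes
    that Eq. (18) can be modified for rapidly varying chi.)"""
    integrand = 1.0 / np.sqrt(np.asarray(chi, dtype=float))
    dr = np.diff(np.asarray(r, dtype=float))
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dr))**2
```

For a constant \(\chi\) both functions agree, which serves as a sanity check of the profile-based estimate.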
If energy can be stored through compression, as in a pulsating star, or there is energy generation by nuclear processes, a more general equation for temperature evolution has to be considered, and if convection or other driving mechanisms of a non-zero flow occur, the time scale of changes according to the energy equation (10) has to be considered as well. In Chaps. 5.3 and 6.4 of Kippenhahn and Weigert (1994) it is demonstrated for both the hydrostatic and the non-hydrostatic case how one can estimate \(t_\mathrm{therm}\) from the temperature (or, in the end, energy) equation to be $$\begin{aligned} t_\mathrm{therm} \approx t_\mathrm{KH} \end{aligned}$$ except for cases where \(L \approx 0\) and, consequently, the difference between the time scale for reacting to a perturbation from equilibrium (\(t_\mathrm{therm}\)) and the time scale to transport a certain amount of energy in equilibrium (\(t_\mathrm{KH}\)) becomes relevant, for then \(t_\mathrm{KH} \gg t_\mathrm{therm}\). We also note here that the kinetic energy contained in the flow of an LES of stellar convection is usually negligibly small compared to the thermal energy contained in the simulation volume (even for the case of the Sun with its very efficient, quasi-adiabatic convection it is less than 0.1% in a case similar to the one shown in Fig. 3, as was demonstrated by Grimm-Strele et al. 2015a—see their Fig. 12, whence the discussion of relaxation of thermal energy of Kippenhahn and Weigert 1994 applies here, too). Since \(t_\mathrm{therm}\) can be very long, it is advisable to construct suitable initial conditions which allow reaching thermal equilibrium quickly within the LES itself. Otherwise, excessive relaxation times \(t_{\mathrm{rel}} \sim t_{\mathrm{therm}}\) occur. For instance, one can consider the vertical (radial) temperature and pressure profile of a one-dimensional model of stellar structure or a suitably deep reaching stellar atmosphere model for an initial state.
This prevents \(t_{\mathrm{rel}}\) from becoming a few 100 h for a simulation of solar surface convection instead of once or twice \(t_{\mathrm{conv}}\), where the latter is evaluated for the entire box depth H and is between 1 and 2 h for a solar granulation simulation as depicted in Fig. 3 (see also Grimm-Strele et al. 2015a). We note that frequently, convective turn over time scales are approximated and evaluated "locally" as \(t_{\mathrm{conv,loc}} = 2 H_p/u_x\). The evaluation of variables often takes place somewhere below the superadiabatic peak. In that case, \(t_{\mathrm{rel}} \sim 5 t_{\mathrm{conv,loc}}\) to \(10 t_{\mathrm{conv,loc}}\). Since there is some arbitrariness in the location and the reference length scale (\(2 H_p\), e.g.), we prefer to use \(t_{\mathrm{conv}}\) as given by Eq. (15). A local acoustic time scale can be defined in the same way from the local sound speed and pressure scale height, \(t_{\mathrm{ac,loc}} = 2 H_p/c_s\). Figure 3 compares some of those time scales for an LES of solar convection. By virtue of a suitable solar 1D model which had been used to initialize the numerical simulation, \(t_{\mathrm{rel}} < 2 t_{\mathrm{conv}}\) was sufficient for this simulation before the statistical evaluation of the simulation could be started. The latter was made for \(t_{\mathrm{stat}} > 100 t_{\mathrm{ac}}\) to study damping of solar oscillations, which—as pointed out in the pioneering work by Stein and Nordlund (2001)—can be found in and studied also by means of numerical simulations. We note here that while \(t_{\mathrm{osc}} \gtrsim t_{\mathrm{ac}}\), mode damping occurs on time scales \(t \gg t_{\mathrm{ac}}\). In comparison, \(t_\mathrm{KH}\) grows to 84 h at the bottom of the simulation whereas \(t_{\mathrm{rad, diff}}\) reaches even 70,000 years. The latter would be lowered by merely an order of magnitude if instead of \(H^2\) as in Eq. (18) one considers \(t_\mathrm{rad, diff, loc} = H_p^2/\chi \).
However, both \(t_\mathrm{rad, diff}\) and \(t_\mathrm{rad, diff, loc}\) are totally irrelevant in this context, since the radiative flux is negligibly small in this part of the solar convection zone. Thus, thermal relaxation is not determined by radiative diffusion and \(t_\mathrm{KH} \lll t_\mathrm{rad, diff, loc} \ll t_\mathrm{rad, diff}\).

Fig. 4 (caption): Vertically outward directed energy fluxes scaled in units of the surface flux \(F_{*} = \sigma T_\mathrm{eff}^4\) for an LES of convection at the surface of a DA type white dwarf with the ANTARES simulation code (Muthsam et al. 2010) (Kupka et al. 2017, submitted; in that article the flux data are scaled with respect to the input flux at the bottom, which corresponds to a \(T_\mathrm{eff}\) of 11,800 K). The photosphere extends down to 1 km, the convectively unstable zone ranges from 0.8 to 2 km, and below 4 km the entire flux transport is essentially due to radiation. No flow is permitted through the lower vertical boundary, where a purely radiative energy flux enters.

Fig. 5 (caption): Time scales for a numerical simulation of convection at the surface of a DA type white dwarf with the ANTARES simulation code (Muthsam et al. 2010) (for details on this simulation cf. Kupka et al. 2017, submitted; the figure shown in this review contains additional quantities from the same data set). \(t_{\mathrm{rel}}\) was about 10 s, leaving a residual difference in flux constancy of up to 4% (see Fig. 4).

The situation is quite different for the case of a DA type white dwarf with a shallow surface convection zone caused by the ionization of hydrogen. No accurate guesses of the thermal structure are possible from 1D models due to the uncertainty of their convection model parameters (in particular the mixing length) and their neglect of a sizeable overshooting below the convection zone which alters the local stratification (see Fig. 4). Thermal relaxation can be helped here by prerelaxation for 5 s with an LES in 2D starting from a carefully chosen 1D model.
The resulting stratification is used to construct a new 1D model from which the 3D LES is started; \(t_{\mathrm{rel}}\) was 10 s or \({\approx } 40\,t_{\mathrm{ac}}\) for the simulation shown. To further reduce the residual flux error of up to 4%, as seen from the total, vertically outward directed energy flux \(F_\mathrm{total}=F_\mathrm{rad}+F_\mathrm{conv}+F_\mathrm{kin}\) in Fig. 4 for the lower part of the model (between 5 and 7 km), down to a value of 2% would require at least doubling \(t_{\mathrm{rel}}\) again, at which point the accuracy limit imposed by the radiative transfer solver would be reached (note the dip in \(F_\mathrm{total}\) at 1 km; for a discussion of flux conservation see, e.g., Hurlburt et al. 1984, 1994; Canuto 1997a). The extent of conservation of total energy flux is thus an indicator of whether statistical and thermal equilibrium have been reached. Clearly, for this simulation \(t_{\mathrm{conv,loc}}\) taken inside the convection zone is a useless measure of relaxation. Rather (see Fig. 5), we have \(t_{\mathrm{rel}} \approx t_{\mathrm{conv}}\) if \(t_{\mathrm{conv}}\) is evaluated close to the bottom of the convective zone, but note that the closed bottom boundary forces \(t_{\mathrm{conv}}\) to diverge where \(u_x=0\). Alternatively, \(t_{\mathrm{rel}} \approx t_{\mathrm{KH}}\) if the latter is evaluated at a depth of 4 km. Below that layer the total flux is essentially due to radiation and thus convection does not modify the thermal mean structure and the initial state is sufficiently close to statistical equilibrium also for the 3D LES (the turbulent pressure \(p_\mathrm{turb}\) is less than 0.01% of the total pressure there, too). Thus, \(t_{\mathrm{rel}} \approx t_{\mathrm{KH}}(x_\mathrm{rel})\), where \(x_\mathrm{rel}\) is the highest vertical layer for which the thermal stratification is found unaltered from the initial condition independently of simulation time.
The idea behind this definition is that for both the solar case considered above, where the lower part of the simulation domain is quasi-adiabatically stratified, and the white dwarf example, where the same region is subject to purely radiative energy transport in the sense that \(F_\mathrm{total} \approx F_\mathrm{rad}\) (even if there are still high velocity fields), the initial, thermal stratification can be accurately guessed and thus the fluid is already in thermal equilibrium in that region. Thermal relaxation hence is needed only for layers lying above \(x_\mathrm{rel}\). To compute \(t_{\mathrm{rel}}\) from Eq. (17) we set \(r_a = x_\mathrm{rel}\) and \(r_b = x_\mathrm{top}\). We note that \(t_{\mathrm{rel}} \approx t_{\mathrm{KH}}(x_\mathrm{rel})\) also holds for the solar simulation depicted in Fig. 3, which is also supported by the results presented in Grimm-Strele et al. (2015a). We hence suggest \(t_{\mathrm{rel}} \approx t_{\mathrm{therm}}(x_\mathrm{rel})\) as the most appropriate estimate of \(t_{\mathrm{rel}}\) for a simulation of stellar convection to attain a thermally relaxed state when using typical starting models as initial conditions and use \(t_{\mathrm{therm}}(x_\mathrm{rel}) \approx t_{\mathrm{KH}}(x_\mathrm{rel})\) for conditions for which \(t_{\mathrm{therm}} \sim t_{\mathrm{KH}}\) is valid (cf. Chap. 5 of Kippenhahn and Weigert 1994). This yields a good approximation for the scenarios shown in Figs. 3 and 5 and the numerical experiments of Grimm-Strele et al. (2015a). For the entire Sun, \(t_\mathrm{KH}\) is about \(2 \times 10^7\) years (Chap. 17.4 in Weiss et al. 2004). As explained in Sect. 4.3 of Kupka (2009b), if the initial condition is sufficiently far from thermal or even hydrostatic equilibrium, a much stronger energy flux can be driven and \(t_\mathrm{KH}\) becomes much smaller due to a much larger \({{\mathcal {L}}}\).
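In practice, \(x_\mathrm{rel}\) can be located by comparing the current horizontally averaged stratification with the initial one. The following helper is entirely our own construction (including the tolerance and the array conventions) and merely sketches the idea:

```python
import numpy as np

def find_x_rel(depth, T_init, T_now, rtol=1e-3):
    """Hypothetical helper: estimate x_rel as the shallowest layer below
    which the horizontally averaged temperature T_now still agrees with
    the initial stratification T_init to a relative tolerance rtol.
    depth increases downwards; scanning from the bottom upwards, the
    depth of the highest unaltered layer is returned."""
    rel = np.abs(T_now - T_init) / T_init
    unchanged = rel < rtol
    idx = len(depth) - 1
    while idx >= 0 and unchanged[idx]:
        idx -= 1                       # climb while layers are unaltered
    return depth[min(idx + 1, len(depth) - 1)]
```

With \(x_\mathrm{rel}\) in hand, \(t_{\mathrm{rel}} \approx t_{\mathrm{KH}}(x_\mathrm{rel})\) follows from Eq. (17) with \(r_a = x_\mathrm{rel}\) and \(r_b = x_\mathrm{top}\), as described above.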
However, once closer to equilibrium, \({{\mathcal {L}}}\) also approaches its equilibrium value and further changes occur much more slowly. Thus, as noted in Grimm-Strele et al. (2015a), a small adjustment of the input energy or entropy at the bottom of the simulation domain of an LES of just the solar surface will trigger a long process of very slow thermal relaxation of the quasi-adiabatic layers of the convective interior. Indeed, if the inflowing entropy or internal energy of a numerical simulation with an open, penetrable lower vertical boundary were required to change by a few percent, \(t_{\mathrm{rel}} \sim t_{\mathrm{therm}}=t_{\mathrm{therm}}(x_\mathrm{bottom})\) cannot be avoided, which in practice means \(t_{\mathrm{rel}} \sim t_{\mathrm{KH}}(x_\mathrm{bottom})\) (cf. Grimm-Strele et al. 2015a), which we obtain from setting \(r_a = x_\mathrm{bottom}\) and \(r_b = x_\mathrm{top}\) in Eq. (17). This holds unless a better guess of the thermally relaxed stratification is constructed to serve as a new starting model. A suitable initial condition for an LES of convection should thus ensure that \(t_{\mathrm{rel}} \ll t_{\mathrm{therm}}(x_\mathrm{bottom})\), whereas \(t_{\mathrm{therm}} \ll t_{\mathrm{nuc}}\) is guaranteed anyway by the physical state of a star through all but some of the final evolutionary stages (cf. also Weiss et al. 2004). For state-of-the-art LES of stellar convection, \(\varDelta \,t_{\mathrm{adv}} \ll t_{\mathrm{conv}}\) by factors of a few 1000 to a few 10,000, depending on the size of the simulation domain and the resolution of the simulation. Ideally, \(t_{\mathrm{rel}} \sim t_{\mathrm{conv}}\), but this depends very much on the ability to guess a thermally relaxed state. This is usually possible if the stratification is quasi-adiabatic in the entire lower part of the simulation.
At this point it is important to remember that \(t_{\mathrm{therm}}\) only refers to thermal relaxation within the simulation domain and not for the entire object. Since the actual time step \(\tau \) of a simulation will be somewhat less than \(\varDelta \,t_{\mathrm{adv}}\), as discussed in Sect. 2.3.2, relaxation towards a statistically stationary state eventually requires some \(10^5\) to a few \(10^6\) time steps in current LES of stellar convection. The time \(t_{\mathrm{stat}}\) required to obtain well converged statistical averages from an LES of stellar convection depends very much on the observations the simulation data are to be compared to and on the physical quantity of interest. The mean temperature \(\overline{T}\) or the turbulent pressure can be inferred from an LES over just \(t_{\mathrm{stat}} \approx t_{\mathrm{conv}}\), as can be seen from Fig. 6, where the temperature profiles of short and long time averaging for a simulation of convective solar surface layers are indistinguishable. Zooming in on the region at a depth coordinate of 700 km, where the temperature gradient is steepest, would reveal a slow drift which shifts that region inwards (to the right on the plot) by one simulation grid cell between the shortest and the longest averaging (the former being contained in the latter). This is at the accuracy limit of the simulation. It would even disappear when normalizing the depth scale onto a common reference depth, such as to have a depth of zero where \(T=T_\mathrm{eff}\). Data to compute synthetic spectral line profiles usually also require rather short simulation runs, as the photons mostly stem from layers with very short adjustment times.
For the Sun, in both cases \(t_{\mathrm{stat}}\) is hence of the order of 1 h or again just about \(10^5\) time steps (of course, for this to hold it is fundamental to know a good initial condition such that thermal relaxation is only required for the upper and mid part of the simulation domain, as is the case in the example(s) shown above). Studying stellar oscillations is a different story, as is the computation of higher order moments of the basic, dependent variables. While typically \(t_{\mathrm{ac}} \leqslant t_{\mathrm{osc}} \leqslant t_{\mathrm{conv}}\), one requires \(t_{\mathrm{stat}}\) to be \(100\,t_{\mathrm{ac}}\) to \(400\,t_{\mathrm{ac}}\) to obtain data suitable to study mode damping (cf. Belkacem et al. 2015, in prep.). Likewise, for a fourth order moment such as \(K_w = \overline{(w - \langle w\rangle _\mathrm{h})^4} / \left( \overline{(w - \langle w\rangle _\mathrm{h})^2}\right) ^2\), which is of interest to Reynolds stress modelling and to modelling in helio- and asteroseismology (Belkacem et al. 2006a, b), a similar duration of the LES in terms of \(t_{\mathrm{stat}}\) is required, as is demonstrated by Fig. 7. We note here that \(\langle \cdot \rangle _\mathrm{h}\) refers to an instantaneous horizontal average while the overbar denotes an ensemble average obtained from time averaging horizontal averages (see Sect. 3). Hence, for such quantities simulations taking \(10^6\) to even \(10^8\) time steps may have to be performed, and the latter is close to the limit achievable for 3D LES on computational grids with several 100 cells per direction with common supercomputing resources. As a final remark on this topic we would like to point out that contrary to a study of mode damping and driving, where the time correlation is of direct physical interest, the situation is different for physical properties which are expected to reach a quasi-equilibrium state as a function of time, such as \(\overline{T}\) or \(K_w\).
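The moment \(K_w\) defined above is cheap to accumulate from simulation snapshots; a sketch (array layout, axis ordering, and function name are our assumptions):

```python
import numpy as np

def kurtosis_w(w):
    """Kurtosis K_w of the vertical velocity for a time series of 3D
    snapshots w with shape (time, nx, ny, nz), nz being vertical:
    fluctuations are taken about the instantaneous horizontal mean
    <w>_h per snapshot and layer; the moments are then averaged over
    time and the horizontal directions as an ensemble estimate.
    Returns a profile over the vertical index."""
    w_h = w.mean(axis=(1, 2), keepdims=True)  # instantaneous horizontal mean
    dw = w - w_h                              # fluctuation about <w>_h
    m4 = (dw**4).mean(axis=(0, 1, 2))         # time + horizontal average
    m2 = (dw**2).mean(axis=(0, 1, 2))
    return m4 / m2**2
```

For a Gaussian velocity field this estimator converges to 3; the slow convergence of such fourth order moments with sample size is exactly why \(t_{\mathrm{stat}}\) has to be so much longer than for \(\overline{T}\).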
In this case, the number of realizations achieved in a simulation is relevant and thus a longer time series can be replaced by a shorter time series in a simulation with larger horizontal extent at identical grid resolution. Trading points in time for points in space is advantageous for quantities with a large horizontal correlation length. But it is also more costly in terms of computer memory and in the end it is likely to require a similar number of floating point operations to achieve the same level of statistical convergence as depicted in Fig. 7 for a simulation of more limited horizontal extent (6 Mm per direction in that case) made over a long time interval.

Fig. 6 (caption): Mean temperature \(\overline{T}\) as a function of box depth for an LES of the convective solar surface with the ANTARES simulation code (Muthsam et al. 2010) (for details on this simulation cf. Belkacem et al. 2015, in prep.). Already a rather short averaging over \(t_{\mathrm{stat}} \approx t_{\mathrm{conv}}\), where \(t_{\mathrm{conv}} \approx 3388\,\mathrm{s}\) (see Fig. 3), suffices to compute this quantity.

Fig. 7 (caption): Kurtosis of vertical velocity as a function of box depth for a numerical simulation of convection at the solar surface with the ANTARES simulation code (Muthsam et al. 2010) (see also Fig. 6). Since \(t_{\mathrm{conv}} \approx 3388\,\mathrm{s}\) (see Fig. 3), a much longer averaging of at least \(t_{\mathrm{stat}} > 10 t_{\mathrm{conv}}\) is required to compute this quantity (here, \(t_{\mathrm{stat}}/t_{\mathrm{conv}}\) is about 0.98, 4.55, 5.98, and 11.81).

Implications from \(t_{\mathrm{rel}}\) and \(t_{\mathrm{stat}}\) and summary on computational costs

As we have just seen, the duration of a numerical simulation of stellar convection is determined both by requirements of relaxation, i.e., \(t_{\mathrm{rel}}\), and by the computation of the physical quantities of interest, which requires a simulation over a time scale \(t_{\mathrm{stat}}\).
In the best case, numerical methods allow choosing a time step determined solely by the rate of change of the solution to the system of dynamical equations constructed from (1)–(10). In many cases, (semi-) implicit methods can take care of purely numerical restrictions imposed by (radiative) diffusion or sound waves (see Sect. 2.3.2 and also Sects. 4, 5). Then, \(\varDelta t \approx \varDelta \,t_{\mathrm{adv}}\). Throughout most of the life time of a star, \(t_{\mathrm{nuc}}\) is larger than any of the other time scales of interest and \(t_{\mathrm{hyd}}\) plays no role either. Thus, \(t_{\mathrm{rel}} + t_{\mathrm{stat}}\) determines the duration of the simulation and $$\begin{aligned} N_t = \frac{t_{\mathrm{rel}} + t_{\mathrm{stat}}}{\varDelta t} \end{aligned}$$ its number of time steps and thus the total cost for a given spatial discretization (this is trivially generalized to cases of variable time steps). One can attempt to minimize \(t_{\mathrm{rel}}\) by a proper initial guess for the vertical stratification to have \(t_{\mathrm{rel}} \ll t_{\mathrm{therm}}(x_\mathrm{bottom})\), since often \(t_{\mathrm{therm}} \approx t_{\mathrm{KH}}\). This is no problem for LES of the surface of solar-like stars or red giants, since there the layers underneath the observable photosphere are close to adiabatic and thus a thermally relaxed stratification is easy to guess, whence \(t_{\mathrm{rel}} \approx t_{\mathrm{therm}}(x_\mathrm{rel}) \approx t_{\mathrm{KH}}(x_\mathrm{rel})\) and in practice also \(t_{\mathrm{rel}} \approx t_{\mathrm{conv}}(x_\mathrm{bottom})\), and in general $$\begin{aligned} t_{\mathrm{rel}} \approx \max (t_{\mathrm{conv}}(x_\mathrm{bottom}),t_{\mathrm{therm}}(x_\mathrm{rel})).
\end{aligned}$$ We note that in the case of global numerical simulations of stellar convection with rotation as discussed, e.g., in Brun and Toomre (2002) or Miesch (2005), this definition has to be extended to also account for the "spin-up time" of the system (until the flow reaches an equilibrium with respect to rotational motion) and the rotation time scale of the system. Moreover, cases of stellar convection exist where a good initial condition is more difficult to obtain, as was shown with the example in Sect. 2.3.4. However, there are no generally applicable shortcuts for \( t_{\mathrm{stat}}\): some quantities such as mean temperature profiles of a relaxed simulation or spectral line profiles for the latter are computable at rather modest efforts, i.e., \(t_{\mathrm{stat}} \approx t_{\mathrm{conv}}\), while other calculations such as damping of pressure modes or higher order moments may require \(t_{\mathrm{stat}}\) to be several orders of magnitude larger than \(t_{\mathrm{conv}}\). For grids with a few hundred grid points along the vertical (radial) direction, \(t_{\mathrm{conv}}\) is typically a few 1000 to a few 10,000 times \(\varDelta \,t_{\mathrm{adv}}\), which may be readily understood from Eq. (11) due to the role of advection for convective flow (cf. Sect. 2.3.2). In the end, we have to deal with values of \(N_t\) in the range of \(10^5\) to \(10^8\). The technique of grid refinement as used, e.g., in Muthsam et al. (2010) and Mundprecht et al. (2013) in the ANTARES code allows pushing these numbers somewhat, since the individual \(\varDelta \,t_{\mathrm{adv}}\) on each grid differ. This way one can hope to gain one or at the very most two orders of magnitude in achievable time steps or local resolution, particularly if an efficient workload distribution on parallel computers can be achieved.
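Equation (20) makes the total cost easy to estimate; the illustrative numbers below are our own, not from the text:

```python
def n_time_steps(t_rel, t_stat, dt):
    """Number of time steps N_t = (t_rel + t_stat) / dt, Eq. (20),
    assuming a fixed time step dt (all arguments in seconds)."""
    return (t_rel + t_stat) / dt

# Hypothetical solar-surface run: dt ~ 0.1 s, t_rel ~ 2 h of relaxation
# plus t_stat ~ 30 h of statistics land in the 10^5 to 10^6 step range
# quoted for current LES of stellar convection.
N = n_time_steps(2 * 3600.0, 30 * 3600.0, 0.1)
```

Replacing \(t_{\mathrm{stat}}\) by the hundreds of \(t_{\mathrm{conv}}\) needed for higher order moments immediately pushes \(N_t\) toward the upper end of the \(10^5\)–\(10^8\) range discussed above.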
That brings further scientific questions into the realm of computable problems, although it does not fundamentally change the limitations imposed by the spread of time scales (\(\varDelta t, t_{\mathrm{rel}}, t_{\mathrm{stat}}, \ldots \)). We return to these considerations in Sects. 2.5 and 2.6, where we discuss the potential and limitations of 2D simulations in alleviating the computational costs of LES of stellar convection and where we distinguish computable from non-computable problems, respectively.

Effective resolution and consequences of insufficient resolution

We return to the problem of spatial resolution. Insufficient grid spacing can severely alter the results of any numerical solution of a differential equation, up to the level of uselessness, and (1)–(10) are no exception to this general statement. An important example in the context of stellar convection modelling is the stellar photosphere: if the vertical temperature and pressure gradients in this region are not resolved with a sufficient number of grid points, the radiative cooling rate, the flow velocities, and the convective flux may differ from the results of a resolved simulation by factors of 4 and more (Mundprecht et al. 2013). The same authors also conclude that the resulting light curves may be severely "polluted" by artifacts, as just one of many further consequences. So clearly, resolving the basic, vertical stratification is essential to any LES of stellar convection. If we consider instead the velocity field, issues may be more subtle. Sections 2 and 4 of Kupka (2009b) deal with the question of why observations of solar granulation reveal rather laminar looking structures and what resolution is necessary to actually resolve turbulence caused by the shear between up- and downflow, i.e., the granules and the intergranular network of downflows, on the computational grid.
To summarize and extend that discussion, let us bear in mind that the horizontal resolution of solar observations is at best \({\sim }35\) km, as achieved in the SUNRISE experiment (Barthol et al. 2008). At such length scales the gas becomes optically thin in the photosphere. This also limits the vertical observational resolution, such that conclusions via spectroscopy can only be drawn from comparisons of different spectral lines formed at different photospheric depths. As is also argued in Kupka (2009b), in the solar photosphere small scale (\(l \sim 10\) km) temperature fluctuations have cooling time scales of \({\sim }0.1\) s. Note that this is often smaller than \(\varDelta t\) of a simulation with that grid size (!). Hence, intensity fluctuations at that level are smoothed out due to strong radiative cooling, and at such scale lengths also the contributions of velocity fluctuations to Doppler broadening have to remain small: this is just the length scale on which the effective viscosity \(\nu _\mathrm{eff}\) of the simulation acts (originating from either numerical, artificial, or subgrid scale viscosity, see Pope 2000 and Sect. 2.1.1) and even for 3D LES of moderate resolution it is well below the spatial resolution of observations. Consequently, such hydrodynamical simulations can already fully explain the observed spectral line profiles (cf. Nordlund et al. 2009). 3D LES with grid refinement have achieved a maximum resolution of about 3 km thus far (Muthsam et al. 2011) (which permits recognizing "structures" down to the level of \({\sim }6\) km). At that resolution the vorticity is clearly that of a highly turbulent flow (compare Muthsam et al. 2010, 2011), which extends the results of 3D LES of moderate resolution of 15 km vertically and 25 km horizontally, where vorticity tubes had been found to form in downflows (Stein and Nordlund 2000).
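The remark that the \({\sim }0.1\) s cooling time of small-scale fluctuations can undershoot the simulation time step is easy to make plausible with a CFL-type estimate; the photospheric sound speed and flow velocity below are rough assumed values for illustration, not taken from any specific simulation.

```python
# CFL-type estimate of the advective time step for a grid spacing of
# ~10 km in the solar photosphere, compared with the ~0.1 s radiative
# cooling time of fluctuations on that scale.  The sound speed and flow
# velocity are rough, assumed photospheric values (illustration only).

def dt_adv(h_km, u_kms, c_s_kms, courant=0.5):
    """Advective (CFL) time step in seconds for grid spacing h."""
    return courant * h_km / (u_kms + c_s_kms)

dt = dt_adv(h_km=10.0, u_kms=3.0, c_s_kms=7.0)
t_cool = 0.1   # s, cooling time scale quoted in the text
print(dt, dt > t_cool)
```

With these assumed numbers \(\varDelta t \approx 0.5\) s, i.e., several times the cooling time, consistent with the remark above.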
We conclude that the resolution necessary for a 3D LES of stellar convection depends on the physical problem which is investigated. While stellar spectra may be well reproduced with moderate resolution LES which thus have a large \(\nu _\mathrm{eff}\) due to their relatively coarse grid, other physical phenomena are sensitive to a proper resolution also of smaller scales. An example is mixing, particularly into neighbouring, "stably" stratified layers. In Sect. 4.2 of Kupka (2009b) an estimate for the necessary resolution to observe shear driven turbulence on a grid as used for LES of solar surface convection is made and yields values of \(h \sim 4\) km which currently is achievable only in simulations with grid refinement (see Fig. 13 and Sect. 5.6.1). At lower resolution the energy carrying and the dissipating scales overlap and such simulations critically rely on the assumption that the basic model of turbulent viscosity (manifesting itself as numerical viscosity, hyperviscosity, or subgrid scale viscosity, e.g.) properly represents the volume averaged action of unresolved scales on resolved scales. The indications of turbulence generated in strong downdrafts as discussed in Stein and Nordlund (2000) thus have a model dependence which is acceptable for predictions of many solar properties (cf. Stein and Nordlund 2000; Nordlund et al. 2009), but can eventually be confirmed only by simulations of higher resolution as just mentioned, since observational tests can be insensitive to such flow properties. Is the situation any different for the bottom of the solar convection zone? In Sect. 6 of Kupka (2009b) the problem of the Peclet number has been discussed in this context. We extend this argument by taking into account our considerations on feasible simulation grids such as \(N_\mathrm{tot}(\mathrm{minimal})\) introduced in Sect. 2.2. 
Due to its large grid size this would lead to a completely unrealistic Peclet number at the bottom of the solar convection zone and thus would not predict the correct amount of overshooting into the stably stratified layers underneath it, as a result of improperly accounting for the effects of heat exchange on the local flow. This can be understood from the following considerations. The Peclet number is used to quantify the importance of convective in comparison with conductive heat transport and is frequently defined as the product of Reynolds and Prandtl number: \(\mathrm{Pe} =\,\mathrm{Re}\cdot \,\mathrm{Pr} = (U L / \nu )\cdot (\nu /\chi ) = (U L / \chi )\). Here, L is to be taken as the typical length at which most of the kinetic energy is being transported and \(U=U(L)\) is the velocity at that scale. With U in the range of \(10\ldots 100\,\mathrm{m}\,\mathrm{s}^{-1}\) (cf. Table 6.1 in Stix 1989) and L in the range of several tenths of \(H_p\) to \(1 H_p\) (which is about 50,000 km close to the bottom of the convection zone in standard solar models) and \(\chi \sim 10^7\,\mathrm{cm}^{2}\,\mathrm{s}^{-1}\) following from the values of \(\nu \) and Pr mentioned above, we have that Pe is in the range of several \(10^5\) to \(5\times 10^6\). That is quite different from the top of the solar convection zone (Kupka 2009b), where Pe is found to be around 10, as can be obtained from the data given in Sect. 2.2.1, whence \(\chi \sim 10^{11}\ldots 10^{12}\,\mathrm{cm}^{2}\,\mathrm{s}^{-1}\), and from taking \(L\sim 1200\) km and \(U(L) \sim 3\,\mathrm{km}\; \mathrm{s}^{-1}\) (see Sects. 2.1 and 6 of Kupka 2009b). A numerical simulation with an effective viscosity \(\nu _\mathrm{eff}\) can achieve an effective Peclet number \(\mathrm{Pe}_\mathrm{eff} =\,\mathrm{Re}_\mathrm{eff} \cdot \mathrm{Pr}_\mathrm{eff} = (U L / \nu _\mathrm{eff}) \cdot (\nu _\mathrm{eff} / \chi )\).
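The quoted range of Pe at the bottom of the convection zone follows directly from \(\mathrm{Pe} = U L / \chi \); a minimal sketch with the values quoted above (all in CGS units):

```python
# Peclet number Pe = U * L / chi at the bottom of the solar convection
# zone, with the ranges quoted in the text (CGS units).

def peclet(U_cm_s, L_cm, chi_cm2_s):
    return U_cm_s * L_cm / chi_cm2_s

H_p = 5.0e9   # cm, ~50,000 km pressure scale height near the bottom
chi = 1.0e7   # cm^2/s, radiative diffusivity there

pe_low  = peclet(1.0e3, 0.5 * H_p, chi)   # U = 10 m/s, L = 0.5 H_p
pe_high = peclet(1.0e4, H_p, chi)         # U = 100 m/s, L = 1 H_p
print(pe_low, pe_high)                    # ~2.5e5 ... 5e6
```

This recovers the range of several \(10^5\) to \(5\times 10^6\) given in the text.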
This holds for both direct numerical simulations of overshooting with idealized microphysics such as those of Hurlburt et al. (1994), Muthsam et al. (1995) and Brummell et al. (2002) as well as those with a more realistic microphysics as described in Brun and Toomre (2002) and Miesch (2005). As shown in Kupka (2009b), such simulations can hardly exceed \(\mathrm{Re}_\mathrm{eff} \sim 1500\) and, given that \(\mathrm{Pr}_\mathrm{eff} \lesssim 0.1\) is required to avoid viscosity strongly influencing the process of heat exchange of plumes, e.g., with their environment and other flow properties, we thus have \(\mathrm{Pe}_\mathrm{eff} \sim 150\) at best for high resolution, state-of-the-art 3D numerical simulations of overshooting. As a consequence, \(\mathrm{Pe} \approx \mathrm{Pe}_\mathrm{eff}\) for LES of convection in stellar atmospheres, whereas \(\mathrm{Pe} \gg \,\mathrm{Pe}_\mathrm{eff}\) if the same technique is applied to the case of overshooting below the solar convection zone. Brummell et al. (2002) demonstrate in their simulations the strong dependence of overshooting on \(\mathrm{Pe}\). The profiles of vertical velocity fluctuations and entropy change in a way which cannot be reproduced by a linear or exponential fit function, but clearly requires a non-linear one. This has consequences also for models of convection which are calibrated with or motivated by numerical simulations. The application of a model of overshooting probed at the low \(\mathrm{Pe}\) of stellar atmospheres (Freytag et al. 1996; Ludwig et al. 2002; Tremblay et al. 2015) to the case of stellar interiors (Herwig 2000), which through implementation into the MESA code (Paxton et al. 2011) has found widespread use (e.g., Moore and Garaud 2016), is thus an extrapolation over many orders of magnitude from the low \(\mathrm{Pe}\) into the high \(\mathrm{Pe}\) regime. Given the experience from direct numerical simulations in 3D such as Brummell et al.
(2002), this is hence a phenomenological procedure which can no longer claim to be solely based on hydrodynamical simulations. In more realistic simulations of the deep solar convection zone as described by Brun and Toomre (2002) and Miesch (2005), the resolution is not far from that of \(N_\mathrm{tot}(\mathrm{minimal})\), which in turn has a local resolution of only \(0.1 H_p\), whereas the extent of the overshooting zone is supposed to be just a fraction of that distance (Basu et al. 1994; Monteiro et al. 1994; see also the upper limit provided by Roxburgh and Vorontsov 1994). It is thus not surprising that no 3D LES with a realistic account of overshooting below the solar convection zone exists. The grid underlying \(N_\mathrm{tot}(\delta _\mathrm{surface})\) would probably be sufficient to solve this problem, but it is unaffordable with current computational resources (see Sect. 2.6.2). This can be estimated from \(\mathrm{Pe}_\mathrm{eff} =\,\mathrm{Re}_\mathrm{eff} \cdot \mathrm{Pr}_\mathrm{eff}\) with \(\mathrm{Pr}_\mathrm{eff} \lesssim 0.1\) and \(\mathrm{Re}_\mathrm{eff} \approx (L/h)^{4/3}\) (see Sect. 4 in Kupka 2009b) for a simulation with grid spacing h at a length scale L. Equidistant grid spacing and requiring \(\mathrm{Pr}_\mathrm{eff} \approx 0.1\) thus leads to \(3\times 10^4 \lesssim N \lesssim 6\times 10^5\) per direction for reaching \(10^5 \lesssim \,\mathrm{Pe}_\mathrm{eff} \lesssim 5\times 10^6\). Values of h are then between \(\min (\delta _\mathrm{surface})/2 \sim 15\,\mathrm{km}\) and an intimidating \({\approx } 1\,\mathrm{km}\) or less. But what happens if the "correct" values for \((U L / \chi )\) are achieved for larger grid spacing?
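The quoted grid sizes follow from inverting \(\mathrm{Pe}_\mathrm{eff} = \mathrm{Pr}_\mathrm{eff} \cdot \mathrm{Re}_\mathrm{eff}\) with \(\mathrm{Re}_\mathrm{eff} \approx (L/h)^{4/3} = N^{4/3}\) for an equidistant grid; a minimal sketch:

```python
# Grid points per direction needed to reach a target effective Peclet
# number, from Pe_eff = Pr_eff * Re_eff with Re_eff ~ (L/h)^(4/3) = N^(4/3)
# for an equidistant grid spanning one energy-carrying length L.

def n_per_direction(pe_target, pr_eff=0.1):
    re_eff = pe_target / pr_eff
    return re_eff ** 0.75          # invert Re_eff = N^(4/3)

n_lo = n_per_direction(1e5)   # ~3e4
n_hi = n_per_direction(5e6)   # ~6e5
print(n_lo, n_hi)
```

This reproduces the range \(3\times 10^4 \lesssim N \lesssim 6\times 10^5\) per direction stated above.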
In that case the momentum diffusivity of the numerical scheme (whether due to artificial diffusion, subgrid-scale viscosity, numerical viscosity, or the like) exceeds the radiative diffusivity which, in the region of interest, at the lower boundary of the solar convection zone, transports most of the energy flux. Systematic differences to a simulation with sufficient resolution could not be excluded, because such a low resolution simulation would have \(\mathrm{Pr}_\mathrm{eff} > 1\) or even \(\mathrm{Pr}_\mathrm{eff} \gg 1\).

Reducing dimensions: 2D simulations as an alternative to 3D ones?

Already the first attempts at numerically solving the hydrodynamical equations in the 1960s involved the idea of reducing the number of spatial dimensions from three to two (Smagorinsky 1963; Lilly 1969). Later on it has also been used in the study of convection in astrophysics (e.g., in Hurlburt et al. 1984; Sofia and Chan 1984; Freytag et al. 1996; Muthsam et al. 2007; Mundprecht et al. 2013; Viallet et al. 2013; Pratt et al. 2016). The main motivation behind it is naturally the reduction of the computational complexity of the problem. How much resolution can we gain when performing a 2D LES instead of a 3D one with \(N_x = N_y = N_z = 1000\) grid points per direction and a time step \(\varDelta t\) determined by Eq. (11)? If we increase the resolution from \(h = \varDelta x = \varDelta y = \varDelta z\) to \(\xi = h / 10\), we have to decrease \(\varDelta t\) to \(\tau = \varDelta t / 10\). So we only gain an order of magnitude in resolution by switching from a 3D simulation to a 2D one. It is straightforward to see from Eq. (12) that the situation is worse if transport by diffusion is not taken care of by implicit time integration methods: this allows only for \(\xi = h / \root 4 \of {1000} \approx 0.1778\,h\), as a time step \(\tau = \varDelta t / \sqrt{1000} \approx 0.03162 \varDelta t\) would take its toll. Of course, for some applications this gain might be crucial.
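The parity argument above can be checked by comparing total costs, taken as (number of grid points) times (number of time steps); a minimal sketch for the \(N=1000\) example:

```python
# Cost parity between a 3D LES (N^3 points, N_t steps) and a finer 2D
# LES, for the N = 1000 example from the text; cost is taken as
# (number of grid points) * (number of time steps).

N, Nt = 1000, 1.0
cost_3d = N**3 * Nt                          # 1e9

# Advective limit (dt ~ h): refining by f shrinks the time step by f.
f_adv = 10.0
cost_2d_adv = (f_adv * N)**2 * (f_adv * Nt)  # also 1e9

# Diffusive limit (dt ~ h^2): refining by f shrinks the time step by
# f**2, so cost parity requires f**4 == N, i.e., f = 1000**0.25 ~ 5.62,
# whence xi = h/f ~ 0.1778 h and tau = dt/f**2 ~ 0.03162 dt as in the text.
f_dif = N**0.25
cost_2d_dif = (f_dif * N)**2 * (f_dif**2 * Nt)
print(cost_3d, cost_2d_adv, cost_2d_dif)
```

All three costs come out equal, confirming that the 2D simulation buys a factor of 10 (advective limit) or only \({\approx }5.6\) (explicit diffusive limit) in resolution at fixed cost.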
But a resolution of \(\xi \) instead of h can also be obtained by grid refinement as implemented into the ANTARES code (Muthsam et al. 2010) (see also Sect. 5.6.1). As the 3D geometry can be kept, this approach is preferable to a reduction of dimensions whenever applicable. If instead the spatial resolution is left unaltered, one can increase the maximum interval for time integration by three orders of magnitude by performing a 2D simulation instead of a 3D one. Thus, \(N_t\) as defined in Eq. (20) can be increased from a range of \(10^5\) to \(10^8\) to a range of \(10^8\) to \(10^{11}\). This may be decisive when studying the time development of stellar oscillations and their interaction with stellar convection for stars such as Cepheids. If both higher resolution and longer time integration are required, it may be necessary to combine a 2D LES with grid refinement as in Mundprecht et al. (2013). However, this computational gain has a price tag: a change of the physics is induced by restricting the motion to only two instead of three dimensions. This is of importance in particular if the problem to be studied by the numerical simulation has one of the following properties:

- It involves symmetry breaking with respect to the horizontal coordinate. In most 2D simulations the vertical (or radial) coordinate is not removed from the problem, so the dimensional reduction occurs for one of the horizontal coordinates. Rotation allows a distinction between polar and azimuthal flow, so discrepant results are to be expected for rotating systems.
- Magnetic fields occur. Magnetohydrodynamics deals with inherently 3D phenomena.
- The small scale structure of the flow is important.

The latter is especially important for turbulent flows including stellar convection.
Although the observed flow patterns of the latter are usually laminar at accessible observational resolution and the occurrence of turbulence is not a property of the convective instability but rather a consequence of the flow it causes (cf. Sect. 2 of Kupka 2009b), turbulence is one reason for small scale structures to occur in a flow, which makes 2D and 3D simulations differ when compared to each other. It has however been argued that two-dimensional turbulent flows are of interest to physics in general for the following reasons (Chap. 8.8 in Tsinober 2009): it may be useful to treat turbulence in quasi-two-dimensional systems such as large-scale geophysical flows. Secondly, it is more accessible to statistical physics. And thirdly, the process of predominant stretching of the vorticity gradient in two dimensions has some similarity with the process of vortex stretching in three dimensions. But there are also arguments why two-dimensional chaotic flows cannot be considered as turbulent flows (Tsinober 2009, Chaps. 1.2.2, 8.8, and 8.9.1): there is no vortex stretching in the proper sense, no net production of mean strain (Eq. C53 in Tsinober 2009), which is a conserved quantity instead, and no self-amplification of velocity derivatives (such as strain). Thus, not only energy but also enstrophy is conserved in "2D turbulence" (in models of turbulence in 2D this leads to an "inverse cascade of turbulent kinetic energy", towards larger scales, cf. Lesieur 1997). As a consequence, in two-dimensional flows large-scale "vortices" are produced out of small scale structures. Indeed, these structures are well-known also from numerical simulations of stellar convection in two dimensions. We specifically refer to Muthsam et al.
(2007) as an example since, due to the very high resolution of their simulations (grid cell size of less than 3 km), the three-dimensional counterpart of such simulations is clearly in the turbulent regime because of the shearing stresses between up- and downflows (Kupka 2009b). 2D simulations also show much stronger shock fronts than 3D simulations at comparable resolution. In a direct comparison between 2D and 3D direct numerical simulations of compressible convection for idealized microphysics it has been observed (Muthsam et al. 1995) that the 2D simulations lead to larger overshooting zones—mixed regions next to the convectively unstable layers themselves—and that also higher velocities are needed to transport the same amount of energy flux. As a result, if a high level of accuracy such as in line profile and abundance determinations of stellar photospheres is required, quantitative (and even qualitative) differences can be observed (Asplund et al. 2000). It should thus be kept in mind that 2D LES cannot replace 3D LES if the turbulent nature of the flow and the detailed geometrical structure of the flow are important or if high quantitative accuracy is needed. For instance, following comparisons between 2D and 3D direct numerical simulations of convection with a composition gradient of an active scalar (i.e., a gradient of helium in a mixture of hydrogen and helium in the case of a star), it was found that while layered semi-convection may well be investigated and quantitatively be described by 2D simulations (Moll et al. 2016), this is not the case in the fingering regime (Garaud and Brummell 2015). The latter can appear when the composition gradient drives the convective instability and is counteracted by the temperature gradient, which is just the other way round for layered semi-convection.
The fingering regime is characterised by small-scale structures as opposed to extended layers, so this difference is intuitive, but in general such differences may be realized only in hindsight. Thus, while 2D LES can be used as a tool for pioneering research, care has to be taken once quantitative results are to be predicted, since there may be unacceptable systematic differences to the full 3D case depending on the physical problem at hand.

Computable problems and alternatives to non-computable problems

In the following we summarize this section by a discussion which distinguishes problems in stellar convection modelling which can be dealt with by 3D (or 2D) numerical simulations, as computable problems, from their "non-computable" siblings, for which other means of modelling have to be used.

Modelling inherent problems versus the problem of scales

So what are the main limitations to solving a problem in stellar convection modelling? The restriction may be of some basic, physical nature. Examples include incomplete data to describe the microphysical state of the fluid: uncertainties in the equation of state, in opacities, in nuclear reaction rates, and the like. For stars these quantities are now known at least approximately. In the same realm the proper determination of an initial state may be difficult, for example, for stars with global scale magnetic fields, which we can measure through spectropolarimetry only for their surface layers. In that case one can either restrict the problem to physically idealized settings or make trial and error numerical experiments to find out the sensitivity to the initial condition or a lack thereof. The limited computational resources put restrictions on the simulation domain and the resolution in time and space. This introduces the necessity to model the exterior of a simulated domain through boundary conditions, for instance, in global, star-in-a-box-type simulations such as Pratt et al.
(2016) but also in local, box-in-a-star-type simulations such as Grimm-Strele et al. (2015a). The spatially unresolved scales are taken care of by some hypothesis such as a subgrid scale model (cf. Smagorinsky 1963; Pope 2000), which is the counterpart of closure conditions in 1D models of stellar structure and evolution (the assumption that numerical viscosity or hyperviscosity takes care of those is just a variant of the same approach). Any numerical simulation can thus cover only a limited interval in time from which conclusions have to be drawn, typically involving arguments of statistical stationarity and quasi-ergodicity (even though these terms are hardly ever used in publications, at least in the field of astrophysics). The first type of problem is inherent to modelling: our knowledge of the initial state of the system is incomplete and this remains so for each snapshot in time obtained during a numerical simulation. This cannot be overcome just by improving computing power. The second type of problem is related to the large spread of scales in time and space as observed in turbulent convective flows (Lesieur 1997), particularly in the case of stars or planets, and the physical hypotheses (such as quasi-ergodicity) or models (in the case of boundary conditions) we use to reduce the computational restrictions and thus the computing power required to run such simulations.

A list of examples: doable and undoable problems for LES of stellar convection

Given the current state-of-the-art in numerical modelling and in computing technology, we can thus provide a list of examples from hydrodynamical simulations of stellar convection which are "computable" as opposed to some which are not. We explicitly show a number of cases, since unrealistic ideas about what can be computed with LES and what is unaffordable are common.
As a reference we consider a solar granulation simulation which resolves the radiative boundary layer, so \(h \lesssim 15\,\mathrm{km}\), for instance, \(h \approx 12\,\mathrm{km}\) as in the simulation shown in Fig. 6. As in that example, the resolution in the horizontal direction could be lower, i.e., by a factor of 3, but for simplicity we take it to be identical. A simulation box with a horizontal width of 6 Mm then requires 500 grid points per direction and with a depth of 4.8 Mm we end up having 400 grid points vertically and thus \(N=10^8\) grid points in total. A typical simulation with relaxation and gathering statistical data over between 10 and 20 solar hours (to compute higher order moments or for studying the damping of p-modes) will then take \(N_t=10^6\) time steps. Such a task is doable and requires, depending on the code, its numerics and effective resolution, the number of CPU cores (a few dozen to a few hundred), and the efficiency of parallelization, a few weeks on large department computers or in projects running on national supercomputers. We assign a complexity number \(C=1\) to this problem. Starting from it we now reinvestigate different astrophysical problems related to stellar convection with respect to their computability and collect the results in Table 2.

Table 2 A collection of computable (affordable) and non-computable (unaffordable) 2D and 3D numerical simulations of stellar convection

How about computing the whole solar surface at this resolution? With \(R_{\odot } \sim 695{,}500\,\mathrm{km}\) (Brown and Christensen-Dalsgaard 1998), its area is \({\sim }42{,}300\) times larger than the 6 Mm box just considered. We are thus dealing with \(N=4.2 \times 10^{12}\) points and while stationary quantities may be computed with one snapshot from such a simulation (thanks to quasi-ergodicity), relaxation still requires \(t_{\mathrm{rel}}\) as defined in Eq.
(21) and likewise the pulsational damping is a time dependent process (even though the statistical sampling is much better in this case). The complexity of this problem is thus \(C \approx 40{,}000\). Returning to an argument already discussed in Sect. 2.4: if this simulation should reveal a turbulent flow in the sense that the turbulence occurring in the simulation is generated by shear stresses acting between resolved scales of the flow and thus is independent of the reliability of numerical viscosity, subgrid scale viscosity, or hyperviscosity to act as models for the volume average of a flow simulated at a lower resolution (cf. again Sect. 4.2 in Kupka 2009b), then \(h \lesssim 4\,\mathrm{km}\) is required, for example, \(h \approx 3\,\mathrm{km}\). Such a high resolution simulation clearly separates energy carrying scales from dissipating ones already through its fine grid, but this increases the number of grid points by a factor of \(4^3 = 64\) and that of the time steps by a factor of 4. The complexity level for the solar granules in a box simulation is thus increased from \(C=1\) to \(C \approx 256\), for the whole surface to \(C \approx 1.1 \times 10^7\), which appears non-computable on solid-state based hardware. Whether it will one day become accessible to quantum computers (see also Table 2) only time can tell. We recall that for a short time interval (30 min or so) and a single granule this problem is computable today thanks to grid-refinement (Muthsam et al. 2011). Is it possible to make a simulation of a "big chunk" of the solar convection zone? Such a 3D LES should contain its upper 10% (or 20 Mm) or so in depth and be 100–200 Mm wide: this is already some \(8^{\circ }\)–\(16^{\circ }\) in azimuthal distance and marks the limit of what is doable without introducing unacceptable errors (flux differences between top and bottom of much more than 10%) due to ignoring the approximately spherical geometry of the Sun.
With a grid vertically varying in depth according to the pressure scale height (see Sect. 2.2.1), about 500 points may be needed vertically and 2000–4000 points per horizontal direction. If we consider the larger problem only, we have \(N=8\times 10^9\). With a similar spatial resolution at the solar surface (\(h \approx 12\,\mathrm{km}\)), the time steps remain the same except for some longer relaxation due to \(t_{\mathrm{conv}}(x_\mathrm{bottom})\), but roughly, \(N_t\) remains the same and thus \(C=80\). This is just barely computable on the largest present day supercomputers and indeed such calculations are already being done. Let us now consider simulations of the entire solar convection zone. A very low resolution simulation with grid stretching and refinement might require only about \(N \approx N_\mathrm{tot}(\mathrm{minimal}) \sim 4.5\times 10^{12}\) points. Clearly, at \(N_t = 10^6\) time steps this simulation would be nowhere near relaxation. Given that solar rotation has a time scale of slightly less than a month, one would expect that the spin-up phase for the differential rotation would be similar to what is observed for the global simulations excluding the actual solar surface (Brun and Toomre 2002; Miesch 2005), which means at least a year, thus \(N_t \approx 5\times 10^8\). This does not imply that such a simulation (based only on realistic microphysics and without boosting conductivities, introducing artificial fluxes, etc.) would be thermally relaxed, because guessing the right stratification in the overshooting zone is difficult. We leave this example at this point and conclude its complexity to be \(C = 2.25 \times 10^7\). If one were to use such a simulation for "3D stellar evolution", we would have to increase \(N_t\) (for a time scale of \(10^{10}\) years) to \(N_t \approx 5\times 10^{18}\) and \(C = 2.25 \times 10^{17}\). Such a model would have to include the interior, too, so \(C > 10^{18}\).
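The complexity numbers quoted in this and the preceding paragraphs all follow from scaling grid points and time steps against the reference problem; a minimal sketch:

```python
# Complexity number C for the LES examples in the text, scaling total
# grid points N and time steps N_t against the reference granulation
# box (N = 1e8, N_t = 1e6, C = 1).

N_REF, NT_REF = 1.0e8, 1.0e6

def complexity(n_points, n_steps):
    return (n_points / N_REF) * (n_steps / NT_REF)

c_surface = complexity(4.2e12, 1e6)   # whole solar surface at h ~ 12 km
c_chunk   = complexity(8.0e9,  1e6)   # "big chunk" of the convection zone
c_czone   = complexity(4.5e12, 5e8)   # entire convection zone, spun up
print(c_surface, c_chunk, c_czone)    # ~4.2e4, 80, ~2.25e7
```

The three results match the values \(C \approx 40{,}000\), \(C=80\), and \(C = 2.25 \times 10^7\) derived in the text.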
This is for a very low resolution simulation and evidently it is a waste of time to even consider it with semiconductor based computing technology. If one were to use such a simulation to study solar p-modes, a higher resolution would be needed (to truly resolve the dynamics at the surface). In this case, \(N_r \approx 800\) is still an optimistically low number of points and we have to consider such an increase of resolution also in the horizontal direction, thus \(N \approx 3 \times 10^{14}\). To compete with helioseismological observations which span more than a decade, one might want to increase \(N_t\) to span 10 years and thus \(N_t \approx 5\times 10^9\), and for this computation \(C = 1.5 \times 10^{10}\). This is why it is completely hopeless to consider a "realistic simulation of global solar convection and pulsation" which contains the whole solar convection zone in the simulation domain. It is also very simple to see that a direct (all scale resolving) numerical simulation of the solar convection zone is even further beyond reach. But then why is it that "global simulations of the solar convection zone are computable"? The key is that they leave out the surface layers (see the review in Miesch 2005). With some 800 points vertically one may cover the rest of the convection zone plus the overshooting underneath. Since the scales are large, an angular resolution of \({\approx } 0.2^{\circ }\) or some 2000 points can already give acceptable results, thus \(N \approx 3 \times 10^9\). The anelastic approximation used in this field (Miesch 2005) (see Sect. 4.3.2) filters out the sound waves, and advective velocities are much smaller than at the stellar surface, which allows for much larger time steps. Circumventing the relaxation problem, this can push \(N_t\) down to \(N_t \approx 10^5\) and thus \(C=3\).
Even if the numerics may be more involved, such a computation is readily doable with present resources (radiative transfer can always be treated in the diffusion approximation, which eases the computational costs compared to simulations of stellar surface convection). We now briefly turn to the problem of 2D LES of Cepheids as performed by Mundprecht et al. (2013, 2015). The 2D framework requires a significantly lower computational effort than its 3D counterpart. However, time steps have to be small (shock fronts, strong radiative losses), while integration times have to be long. As it turns out, with grid refinement a simulation with \(N \approx 10^6\) allows a good width of the simulation (\({\approx } 10^{\circ }\)) as well as a depth covering the region from the surface to the first radial node (below the surface at a distance of 42% of the stellar radius or some 11 Gm in the model of Mundprecht et al. 2015, who, however, had a smaller grid and a lower resolution for the stellar surface). The grid stretching used in this simulation (surface cells more than 100 times smaller than near the bottom) takes its toll on the time step. Moreover, a sufficiently large number of pulsation periods has to be calculated (one to several dozen) following an equally long relaxation. Unless radiative transfer is integrated implicitly, one thus has \(\varDelta t \sim 0.2~\mathrm{s}\) and if 60 pulsation periods of about 4 days are to be covered, we have \(N_t \approx 10^8\) and \(C \approx 1\) for this problem. We note that semi-implicit time integration methods could help to accelerate this calculation by an order of magnitude, but not much more than that (since a larger time step is traded for solving a large, coupled system of non-linear equations each time step). If we repeat this calculation for a full circle, the workload increases by a factor of 36 (in N and in C).
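The quoted \(N_t\) and \(C \approx 1\) for the 2D Cepheid example follow from the numbers above; a minimal sketch:

```python
# Time-step count and complexity for the 2D Cepheid LES example:
# dt ~ 0.2 s, 60 pulsation periods of ~4 days, N ~ 1e6 grid points,
# measured against the reference problem (N = 1e8, N_t = 1e6, C = 1).

dt      = 0.2                    # s
t_total = 60 * 4 * 86_400        # s, 60 periods of about 4 days each
n_t     = t_total / dt           # ~1e8 time steps

C = (1e6 / 1e8) * (n_t / 1e6)    # ~1
print(n_t, C)
```

Despite the far smaller grid, the short time steps and long integration interval bring this 2D problem back to the complexity of the reference 3D granulation box.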
A 3D version of that calculation (which assumes the equivalent of 36,000 points in the azimuthal direction) would require \(N \approx 10^{12}\) while \(N_t\) stays the same, thus \(C \approx 10^6\), and accounting for the success of semi-implicit methods, \(C \approx 10^5\). But what if we were to follow a long period Cepheid and resolve the variations of pulsational amplitudes as observed for Polaris? If we were to follow our sample 2D calculation as just explained over, say, 400 stellar years, we are already at \(C \approx 600\). However, for a long period object, resolution requirements are clearly more extreme and time steps more restrictive, so \(C \approx 10^5\) easily, and this is not yet a \(360^{\circ }\) calculation, let alone a 3D one. Clearly, it is completely unrealistic to seriously plan such a calculation at the moment. Thus, in considering problems for hydrodynamical simulations of stellar convection, it is very easy to switch from a perfectly doable project to discussions of a completely unrealistic one. Numerical simulations of this kind are at the cutting edge of computational technology and while some problems are now standard problems and others well within reach, many problems in the field remain incomputable with this approach.

The continued necessity of 1D models of stellar convection

One might claim that computer technology advances exponentially, but this is an extrapolation based on the number of circuits per area in a technology for which some barriers appear to have been reached (clock frequency) while others are not so far away any more (size of computing elements, with quantum effects starting to appear already when reducing the current 14 nm process to a 5 nm one, etc.). A naive extrapolation of computing speed also ignores that the ever more massive parallelization required by the current development of computing technology is becoming increasingly challenging to make full use of. Hence, the estimates from Sect.
2.6.2 should be fairly robust when allowing for uncertainties in achievable computational complexities C within one (or at the very most two) orders of magnitude. As a consequence, there is no way we can abandon 1D models of stellar convection now or during the next few decades, since we still require them in applications inaccessible to 3D or even to 2D LES now and definitely for many years to come. In the next section we thus discuss some of the challenges faced by one-dimensional modelling.

One dimensional modelling

There is no complete statistical or any other low-dimensional description of turbulent flows which can be derived from the basic hydrodynamical Eqs. (1)–(10). A rather detailed and easily accessible introduction to the state of the art of modelling turbulent flows can be found in Pope (2000). None of the known approaches yields a closed, self-contained model without introducing additional assumptions, i.e., hypotheses which cannot be strictly derived from the basic equations alone. One-dimensional (1D) models of turbulent convection are based on the assumption that it is possible to predict the horizontal average of physical quantities such as temperature T or density \(\rho \)—without knowing the detailed time evolution of the basic fields \(\rho , \varvec{\mu }= \rho \varvec{u}\), and \(e = \rho E\)—as a function of location x and time t for different realizations of the flow, i.e., initial conditions. They hence result from a double averaging process: one over the horizontal variation of the basic fields or any dependent variable such as T and a second one over a hypothetical ensemble of (slightly different) initial conditions. The horizontal averaging is thus an additional step, since ensemble averaged model equations may also be constructed for the three-dimensional (3D) case.
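The first of these two averaging steps can be illustrated with a toy example: a synthetic, stratified "temperature" field is split into its horizontal mean, a 1D vertical profile, and a fluctuating part whose horizontal average vanishes by construction. The field and all parameter values below are purely illustrative.

```python
import random

# Toy illustration of horizontal averaging: a synthetic "temperature"
# field T(z, x, y) with a mean stratification plus random fluctuations is
# split into its horizontal average <T>(z) and a fluctuation T'(z, x, y).

random.seed(42)
nz, nx, ny = 4, 8, 8
T = [[[1000.0 - 50.0 * z + random.gauss(0.0, 5.0)
       for _ in range(ny)] for _ in range(nx)] for z in range(nz)]

def horizontal_mean(field):
    """Average each horizontal layer -> 1D vertical profile."""
    return [sum(sum(row) for row in layer) / (nx * ny) for layer in field]

T_bar = horizontal_mean(T)
# Fluctuation field: T' = T - <T>
T_fluc = [[[T[z][i][j] - T_bar[z] for j in range(ny)] for i in range(nx)]
          for z in range(nz)]

# By construction, the horizontal mean of the fluctuation vanishes
# (up to floating-point round-off):
residuals = horizontal_mean(T_fluc)
print([round(r, 9) for r in residuals])
```

The second averaging step, over an ensemble of initial conditions, would repeat this construction for many realizations of the random fluctuations and average the resulting profiles.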
The quasi-ergodic hypothesis, which also underlies the interpretation of any numerical simulation of stellar convection or in fact of any other turbulent flow, assumes that the time average of a single realization, which is given by one initial condition, is equal to an average over many different realizations (obtained through different initial conditions) at any time t in the limit of averaging over a large time interval and a large ensemble (Pope 2000). This cannot be proven to hold for all flows, since counterexamples are known, but for some flows, such as statistically stationary (time independent), homogeneous (location independent) turbulent flows, it can be corroborated even directly from numerical simulations (Chap. 3.7 of Tsinober 2009). It is thus not a completely hopeless enterprise from the very beginning to construct 1D models of turbulent flows, and indeed there are well-known, simple flows for which models have become available that are sufficiently accurate in practice (cf. Pope 2000). It is of course a different story to what extent it is possible to succeed in these efforts in the case of turbulent convection in stars or in planets (interior of gaseous giant planets, oceans and atmosphere at the surface of terrestrial planets). Instead of deriving one model in detail or advertising another, in the following we discuss some principles which should be taken into account when applying published models or when comparing them to each other, to numerical simulations, or to observational data.

Requirements for models of turbulent convection

As also pointed out in Zilitinkevich et al. (1999) and Gryanik et al. (2005), any physically consistent parametrization or closure hypothesis introduced in modelling turbulent flows should fulfill the following properties:

1. correct physical dimension;
2. tensor invariance; this refers to higher order correlations constructed from products of functions or any derivatives thereof. Especially with respect to this property, if an approximation is to be used in coordinate systems other than a Cartesian one, a co-variant form of the hypothesis may even be required;
3. respecting symmetries, particularly concerning sign changes of variables;
4. physical and mathematical realizability of the approximation.

While requirement 1 is straightforward, properties like invariance to sign changes of involved variables can be a more subtle issue. Mironov et al. (1999) discuss the consequences for a third order correlation, \(\overline{w' \theta '^2}\), where \(w'\) is the difference of the vertical velocity and its horizontal average and \(\theta '\) is the same type of difference for the case of temperature. If ensemble (or actually time) averages of this quantity are computed from either numerical simulations or measurements of a convective zone, a change of sign \(w' \rightarrow -w'\) implies \(w' \theta '^2 \rightarrow -w' \theta '^2\). Mironov et al. (1999) show how a closure hypothesis which ignores this symmetry fails in describing this flux of potential temperature, \(\overline{w' \theta '^2}\), in the transition region between convectively stable and unstable stratification, as opposed to a superficially similar one which actually does respect that symmetry. Hence, requirement 3 is important. Another crucial issue is realizability: if a hypothesis is non-realizable, the probability of finding, for instance, velocity and temperature fields which correspond to the modelling expression is actually negative, i.e., mathematically impossible. An example is the assumption of a supposedly quasi-normal distribution function, which by definition has a kurtosis \(K_w:= \overline{w'^4}/ \overline{w'^2}^{2}\) of 3, that is also claimed to be highly skewed: for instance, let \(S_w > \sqrt{2}\), where \(S_w := \overline{w'^3}/ \overline{w'^2}^{1.5}\) (see Gryanik et al. 2005 for details and also André et al. 1976a, b for further references).
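The argument rests on the moment inequality \(K_w \ge 1 + S_w^2\), which holds for any probability distribution. The sketch below (with illustrative sample sizes) checks the quasi-normal assumption \(K_w = 3\) against this inequality and cross-checks it with sampled moments of a genuinely skewed distribution.

```python
import random

def realizable(skewness, kurtosis):
    """Any probability distribution satisfies K >= 1 + S**2."""
    return kurtosis >= 1.0 + skewness**2

# The quasi-normal closure fixes K_w = 3; it is then realizable only for
# |S_w| <= sqrt(2):
assert realizable(1.0, 3.0)        # moderately skewed: fine
assert not realizable(2.0, 3.0)    # S_w > sqrt(2): non-realizable

# Cross-check with sampled moments of a genuinely skewed distribution
# (exponential: S = 2, K = 9, which satisfies K >= 1 + S**2):
random.seed(1)
w = [random.expovariate(1.0) for _ in range(200_000)]
m = sum(w) / len(w)
c2 = sum((x - m)**2 for x in w) / len(w)
S_w = sum((x - m)**3 for x in w) / len(w) / c2**1.5
K_w = sum((x - m)**4 for x in w) / len(w) / c2**2
print(f"S_w ~ {S_w:.2f}, K_w ~ {K_w:.2f}")
assert realizable(S_w, K_w)
```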
But a distribution function with \(K_w = 3\) and \(S_w > \sqrt{2}\) is non-realizable; hence, such a model of convection cannot be physically meaningful. We note that for a compressible flow it is more natural to consider density weighted (or Favre) averages (see Sect. 3.3.1), but in practice realizability also has to hold for the plain Reynolds average assumed here. A model failing on requirement 4 is physically useless and mathematically meaningless. Requirement 2 is probably the most often violated one of these four and its consequences may only show up once the approximations are supposed to hold in polar instead of Cartesian coordinates. Checking these requirements is hence useful for determining the physical and mathematical consistency of a model or for detecting limitations of its region of applicability.

Phenomenological approach

As stated in Chap. 5 of Tsinober (2009), there is no commonly accepted definition of a phenomenology of turbulence. In a strict sense, it may refer to anything except direct experimental results, direct numerical simulations (with all spatial and time scales of interest resolved), and the small set of results which can be obtained from first principles (Tsinober 2009), i.e., Eqs. (1)–(10). More commonly, models of turbulent flows are called phenomenological if they introduce a concept such as rising and falling bubbles which can neither be derived directly from (1)–(10) nor at least confirmed by experiment or numerical simulation, but which is used for deriving the mathematical expressions of the model. Thus, in Canuto (2009) the well-known mixing length treatment or mixing length theory (MLT) of convection is considered a phenomenological model and indeed following the derivation of Weiss et al.
(2004) it is clear that MLT deals with the properties of fictitious bubbles that are not observed in convective flows anywhere (Sun, Earth atmosphere and oceans, laboratory experiments of convection, or numerical simulations of convection in these objects). However, since this kind of modelling was already accessible to scientists decades ago and the most crucial free parameter of the model, the mixing length relative to a reference scale (most frequently the local pressure scale height), provided enough flexibility to adapt the predictions of the model to different physical situations, it had become the workhorse of astrophysical convection modelling already in the 1960s, when the first edition of Weiss et al. (2004) was written. This situation has not changed since those days, which is unfortunate, as we discuss in the following.

Models and physics

At the heart of any of the phenomenological models, but also of more advanced models of turbulent flows, is the concept of turbulent viscosity. Introduced by Boussinesq (1877), its idea is to model the Reynolds stress of a flow as proportional to the mean rate of strain (see Chap. 4.4 in Pope 2000), as if the (main) effect of turbulence on the velocity field were just to boost the kinematic viscosity \(\nu \) up to an effective viscosity \(\nu _\mathrm{eff} = \nu + \nu _\mathrm{turb}\). A related and very similar idea is that of turbulent diffusivity, a generalization of Fick's law of gradient diffusion, where turbulence induces an effective diffusivity of a conserved scalar \(\phi \), i.e., \(\chi _\mathrm{eff} = \chi + \chi _\mathrm{turb}\) and thus \(\overline{\varvec{u} \phi '} = -\chi _\mathrm{turb} \nabla {\overline{\phi }}\).
However, while the conditions of validity are well understood for the case of diffusion due to molecular motion, where the mean free path is small against the variation of the gradient of the "driving quantity", such as temperature for the case of heat diffusion, and thus a first order Taylor expansion applies also on mathematical grounds, this is usually not the case for turbulent diffusivity and turbulent viscosity. Hence, these quantities are quite different from their "molecular counterparts" and should be understood as physical models to describe data. The computation of turbulent viscosity is thus model dependent, even if measurements or direct numerical simulations were at hand. Thus, care should be taken not to confuse the underlying physical processes with a concept that is actually a model on its own (cf. Tsinober 2009).

Mixing length treatment

One way to compute turbulent viscosity involves using a mixing length. Indeed, this is just what the mixing length had been invented for by Prandtl (1925). Discussions and illustrations of how the idea of a mixing length is motivated by velocity profiles of turbulent channel flow can be found in Chaps. 7.14, 7.1.7, and 10.2.2 of Pope (2000). The "interior" region of such a flow is separated from the solid wall, which acts as a boundary condition, by a so-called viscous boundary layer. Contrary to a uniform turbulent viscosity \(\nu _\mathrm{turb} = f(x)\), which varies only along the direction x of the mean flow, the mixing length allows modelling how \(\nu _\mathrm{turb}\) varies across the flow, as a function of distance from the boundary of the domain. Biermann (1932) then used this idea, among others, to model the heat transport by convection inside a star.
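For the wall-bounded shear flow just described, Prandtl's original prescription can be sketched as follows. This is a textbook illustration (cf. Pope 2000), not the stellar case, and all parameter values below are assumed for the sketch: with \(\nu _\mathrm{turb} = l_\mathrm{m}^2\,|dU/dy|\) and \(l_\mathrm{m} = \kappa y\) near the wall, a logarithmic mean profile yields a turbulent viscosity growing linearly with wall distance.

```python
# Textbook sketch (cf. Pope 2000): Prandtl's mixing length for a
# wall-bounded shear flow, nu_turb = l_m**2 * |dU/dy| with l_m = kappa*y.
# For a log-law mean profile U(y) = (u_tau/kappa) * ln(y/y0) this yields
# nu_turb = kappa * u_tau * y, i.e., a turbulent viscosity that grows
# linearly with distance from the wall. All numbers are illustrative.

kappa, u_tau = 0.41, 0.05   # von Karman constant, friction velocity (assumed)

def dUdy(y):
    """Derivative of the logarithmic mean velocity profile."""
    return u_tau / (kappa * y)

def nu_turb(y):
    l_m = kappa * y          # mixing length near the wall
    return l_m**2 * abs(dUdy(y))

for y in (0.001, 0.01, 0.1):
    print(f"y = {y:5.3f}:  nu_turb = {nu_turb(y):.2e}")
```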
Following the notion by Unsöld (1930) that the solar photosphere must be unstable to convection due to the lowering of the adiabatic gradient by partial ionization, Siedentopf (1933) realized that all stars with \(T_\mathrm{eff} \lesssim 10{,}000~\mathrm{K}\) must have convection up to their observable surface and that the newly invented treatment of convection can explain solar granulation (Siedentopf 1935). Through a number of improvements (Biermann 1942, 1948; Vitense 1953) the model eventually obtained the form suggested by Böhm-Vitense that is used even today (Böhm-Vitense 1958; Weiss et al. 2004). Those improvements were essentially devoted to accounting for the radiative heat loss of the fluid, which was usually depicted as consisting of moving bubbles that exchange heat with their environment. In its most compact form (cf. Heiter et al. 2002) the convective flux of a stationary, local convection model such as MLT is computed from $$\begin{aligned} F_\mathrm{conv} = K_\mathrm{turb} \beta = K_\mathrm{rad} T H_p^{-1} (\nabla -\nabla _\mathrm{ad})\varPhi (\nabla -\nabla _\mathrm{ad},S) \end{aligned}$$ for regions where $$\begin{aligned} \nabla > \nabla _\mathrm{ad}, \quad \hbox {with}\quad \nabla =\partial \ln T / \partial \ln P \quad \hbox {and}\quad \nabla _\mathrm{ad}=(\partial \ln T / \partial \ln P)_\mathrm{ad} \end{aligned}$$ i.e., the linear, local criterion for convective instability by Schwarzschild (1906) must hold (the adiabatic temperature gradient follows from the equation of state, see Weiss et al. 2004). Outside those regions it is assumed that \(F_\mathrm{conv} = 0\) (no overshooting of flow into "stable" layers). We note that in this version the criterion Eq. (23) ignores the counteracting, stabilizing effect of viscous friction, which at stellar Prandtl numbers is in any case negligibly small. Further details on local stability criteria can be found in Kippenhahn and Weigert (1994) and Weiss et al. (2004).
The radiative conductivity \(K_\mathrm{rad}\) has already been introduced in Eq. (5), and as before \(H_p\) is the pressure scale height, whereas \(\varPhi = K_\mathrm{turb} / K_\mathrm{rad}\) is the ratio of turbulent to radiative conductivity, P is the (gas) pressure, and T is the temperature. The total energy flux \(F_\mathrm{tot}\) in turn follows from \(F_\mathrm{rad} + F_\mathrm{conv} = F_\mathrm{tot}\) under the assumption that \(F_\mathrm{kin} = 0\). Finally, $$\begin{aligned} \beta = -\left( \frac{dT}{dr}-\left( \frac{dT}{dr}\right) _\mathrm{ad}\right) = T H_p^{-1} \left( \nabla -\nabla _\mathrm{ad}\right) \end{aligned}$$ is the superadiabatic gradient, a function of radius r (or depth z) and the thermodynamically determined adiabatic gradient \(\nabla _\mathrm{ad}\). The convective efficiency S is just the product of Rayleigh and Prandtl numbers, \(\mathrm{Ra}\) and \(\mathrm{Pr}\), and reads $$\begin{aligned} S =\,\mathrm{Ra}\cdot \mathrm{Pr} = \frac{g\alpha _\mathrm{v}\beta l^4}{\nu \chi }\cdot \frac{\nu }{\chi }, \end{aligned}$$ where g is the local surface gravity, \(\alpha _\mathrm{v}\) is the volume expansion coefficient, \(\nu \) is the kinematic viscosity, and the radiative diffusivity \(\chi \) follows for known \(c_p\) from \(K_\mathrm{rad} = c_p \rho \chi \) (see Table 1). Note that S depends only on buoyancy and radiative diffusion, a useful parametrization, since in stars viscous processes act on much longer timescales than either radiation or buoyancy and hence convection.
In the case of MLT, the function \(\varPhi (S)\) is given by \(\varPhi (S) = \varPhi ^\mathrm{MLT}\) as $$\begin{aligned} \varPhi ^\mathrm{MLT} = \frac{729}{16} S^{-1} \left( \left( 1 + \frac{2}{81} S\right) ^{1/2}-1\right) ^3, \end{aligned}$$ where S is computed from $$\begin{aligned} S = \frac{81}{2} \varSigma , \quad \varSigma = 4 A^2 (\nabla -\nabla _\mathrm{ad}), \quad A = \frac{Q^{1/2} c_p \rho ^2 \kappa l^2}{12 a c T^3} \sqrt{\frac{g}{2 H_p}}, \end{aligned}$$ and \(Q = T V^{-1}(\partial V/\partial T)_p = 1-(\partial \ln \mu / \partial \ln T)_p\), where \(\mu \) is the variable mean molecular weight. For this compact form of writing the MLT expression of the convective flux we refer to Canuto and Mazzitelli (1991, 1992) and Canuto et al. (1996), who point out its equivalence with the variant of MLT introduced by Böhm-Vitense (1958). Indeed, this notation and its compact formulation have already been used in much earlier work such as Gough (1977a) as well as in later work (e.g., Heiter et al. 2002). Finally, l is the mixing length, usually parametrized as $$\begin{aligned} l = \alpha H_p, \end{aligned}$$ and the mixing length scale height parameter \(\alpha \) is calibrated by comparison with some data. Clearly, \(F_\mathrm{conv}\) as computed from Eqs. (22)–(28) is just a function of local thermodynamic quantities, the difference of the local and the adiabatic temperature gradient, and the mixing length l. It is akin to a diffusion model, \(F_\mathrm{conv} = K_\mathrm{turb} \beta \), similar to the radiative flux \(F_\mathrm{rad}\) in stellar interiors, Eq. (4). This phenomenological approach to computing \(K_\mathrm{turb}\) is quite different from the mixing length as used in engineering sciences for shear flows.
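The algebraic core of Eqs. (26)–(28) is readily implemented. The sketch below evaluates \(\varPhi ^\mathrm{MLT}(S)\) and its limiting behaviour for inefficient (\(S \ll 1\)) and efficient (\(S \gg 1\)) convection; the stated limits follow from a Taylor expansion of the square root and are given here for illustration only.

```python
# Sketch implementation of the compact MLT expression, Eq. (26):
# Phi_MLT(S) = (729/16) * S**-1 * ((1 + (2/81)*S)**0.5 - 1)**3,
# where Phi measures the ratio of turbulent to radiative conductivity.

def phi_mlt(S):
    """Convective efficiency function of MLT (Boehm-Vitense 1958 variant)."""
    return (729.0 / 16.0) / S * ((1.0 + 2.0 * S / 81.0) ** 0.5 - 1.0) ** 3

# Limiting behaviour (from expanding the square root):
#   inefficient convection (S << 1):  Phi ~ S**2 / 11664
#   efficient convection   (S >> 1):  Phi ~ (729/16) * (2/81)**1.5 * S**0.5
print(phi_mlt(40.5))                 # S = 40.5 corresponds to Sigma = 1
print(phi_mlt(1e-3) / (1e-3) ** 2)   # approaches 1/11664
print(phi_mlt(1e9) / (1e9) ** 0.5)   # approaches (729/16)*(2/81)**1.5
```

A full evaluation of \(F_\mathrm{conv}\) would additionally require the local thermodynamic quantities entering A in Eq. (27) and a calibrated mixing length l from Eq. (28).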
Prandtl (1945) and Kolmogorov (1942) independently of each other realized that in the approach of computing \(\nu _\mathrm{turb} = u l_\mathrm{m}\), the reference velocity u should be related to the turbulent kinetic energy K instead of the mean velocity gradient times the mixing length. Thus, \(u = c K^{1/2}\) and \(\nu _\mathrm{turb} = c K^{1/2} l_\mathrm{m}\). Through Kolmogorov's similarity hypotheses the mixing length is then related to the dissipation rate \(\epsilon \) of turbulent kinetic energy via \(u(l)=(\epsilon l)^{1/3}\), whence $$\begin{aligned} \epsilon = c_{\epsilon } K^{3/2} / l_\mathrm{m} \end{aligned}$$ (see Chaps. 6.1.2 and 10.3 in Pope 2000; for convenience we have used the notation of Canuto 1993; Canuto and Dubovikov 1998 here; c and \(c_{\epsilon }\) are model parameters). In engineering problems the mixing length \(l_\mathrm{m}\) (to be specified for each case) is used alongside a differential equation for K. This is hence a non-local model, as it explicitly accounts for the fact that turbulent kinetic energy (TKE) is also transported by the flow itself, and it models this process through a differential equation. Since it requires only one such equation in addition to algebraic ones, this approach is called a one-equation model (Chap. 10.3 in Pope 2000). Purely algebraic models for \(\nu _\mathrm{turb}\) only work for rather simple types of flows, and the original prescription of the mixing length model already fails for decaying grid turbulence or the centerline of a round jet, as the mean velocity is constant across the flow in that case and thus \(\nu _\mathrm{turb}\) is mispredicted as zero (Pope 2000). However, even the one-equation model is outdated in engineering applications of fluid dynamics. Most commercial computational fluid dynamics codes use the K–\(\epsilon \) two-equation model (Jones and Launder 1972) as their basic tool to model turbulent flows (cf. the discussion in Chap. 10.4 of Pope 2000).
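Eliminating \(l_\mathrm{m}\) between \(\nu _\mathrm{turb} = c K^{1/2} l_\mathrm{m}\) and Eq. (29) gives \(\nu _\mathrm{turb} = (c\,c_{\epsilon })\, K^2/\epsilon \), which shows why an additional differential equation for \(\epsilon \) removes the need for a mixing length altogether. A numerical consistency check of this elimination (all parameter values below are illustrative, not calibrated):

```python
# Sketch: eliminating the mixing length between nu_turb = c*K**0.5*l_m and
# eps = c_eps * K**1.5 / l_m (Eq. 29) gives nu_turb = (c*c_eps)*K**2/eps,
# which has the form used by the K-eps two-equation model.
# All parameter and state values below are assumed for the sketch.

c, c_eps = 0.55, 0.164        # model parameters (illustrative values)
K, l_m = 2.0, 0.3             # turbulent kinetic energy, mixing length

eps = c_eps * K**1.5 / l_m            # dissipation rate from Eq. (29)
nu_one_eq = c * K**0.5 * l_m          # one-equation model: needs l_m
nu_k_eps = (c * c_eps) * K**2 / eps   # K-eps form: l_m eliminated

print(nu_one_eq, nu_k_eps)
assert abs(nu_one_eq - nu_k_eps) < 1e-12
```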
This approach avoids the computation of a mixing length by specifying a differential equation for the dissipation rate \(\epsilon \) (cf. Canuto 1992, 1993, 1997a; Canuto and Dubovikov 1998). The situation is quite different in astrophysics: non-local mixing-length models—to which we shortly return below—are typically used only in studies of stellar pulsation. Unless a problem is accessible to LES, the local, algebraic MLT model of Böhm-Vitense (1958), as given by Eqs. (22)–(28) above, has remained the most popular way to compute \(F_\mathrm{conv}\) in the vast majority of astrophysical applications, even though this form of modelling was abandoned in the engineering sciences a long time ago. There are several reasons for this resiliency of the MLT model in stellar astrophysics:

1. It is the standard model of convection and used in a large number of entire grids of stellar evolution and stellar atmosphere models which in turn are used in other astrophysical applications, for instance, stellar isochrones, photometric calibrations, and grids of stellar spectra for stellar population synthesis.
2. It is easy to incorporate into a code and its basic calibration is simple: match the solar luminosity and effective temperature (or radius) at the present solar age. This is always achievable for the different versions of MLT (Gough and Weiss 1976).
3. Alternative models are more difficult to calibrate: one has to deal with more free parameters which again lack universality, just as the stellar mixing length does. Note the dependency of the mixing length on stellar type, where much smaller scale lengths are required for A-type stars compared to solar type stars (cf. Gough and Weiss 1976 vs. Kupka and Montgomery 2002): the standard MLT calibration is not universal, a deficiency usually neglected in stellar modelling, but this also holds for the usually considered more complex models of convection such as non-local MLT.
Until the advent of sufficiently accurate observations from helioseismology and sufficiently advanced LES, it was difficult to falsify MLT by proving that it cannot get the temperature and pressure structure right in a way that cannot be fixed by just tuning \(\alpha \). Indeed, in spite of all its merits as a means of modelling convection in the pioneering days of stellar astrophysics, stellar MLT has by now been falsified in several ways, thanks to the very high accuracy of current observational data and numerical simulations, and we see no way to reconcile it with these tests other than by ignoring them. The tests all demonstrate that MLT cannot correctly predict the convective surface layers of a star so as to recover the temperature and pressure profile, the sound speed profile, the asymmetry between up- and downflows, and the kinetic energy available for mode driving with state-of-the-art accuracy. They include:

1. Failure to recover solar p-mode frequencies due to mispredicting the temperature and pressure profile (Baturin and Mironova 1995; Rosenthal et al. 1999), which in helio- and asteroseismology is known as the near surface effect.
2. Failure to predict a sufficiently large rate of driving of solar p-modes (Samadi et al. 2006), in contrast with an approach based on a 3D LES and a closure model with plumes (Belkacem et al. 2006a, b).
3. The problems found from stellar spectroscopy and photometry (Smalley and Kupka 1997; Gardiner et al. 1999; Heiter et al. 2002; Smalley et al. 2002), let alone if very accurate spectral line profiles as obtainable from 3D LES in Asplund et al. (2000) and Nordlund et al. (2009) have to be computed.

From this viewpoint classical MLT has been falsified. That claim holds unless one merely expects the model to provide the correct depth of the surface convection zone and the stellar radius which is given by calibrating \(\alpha \) (cf.
again Gough and Weiss 1976) and ignores the inconsistencies which the model imposes on predictions for observable quantities that require accurate modelling of stellar surface layers. Clearly, models beyond local MLT are necessary, and time-dependent, non-local MLT models (Gough 1977a, b; Unno 1967) alleviate some of the problems found with helioseismology, particularly if both non-locality and non-adiabaticity are taken into account, as summarized in the review of Houdek and Dupret (2015). The latter also give a derivation of local, time-independent MLT from a kinetic theory of accelerating eddies point of view (as in Gough 1977a, b), since this can more easily be generalized to the time dependent, non-local case than the alternative approach of Unno (1967), which is discussed there as well. However, it is impossible to avoid mathematical inconsistencies of the following type during the derivation of MLT models: some variables must not vary much over a mixing length l, while the latter necessarily has to be large to predict the correct solar radius as in Gough and Weiss (1976), and the mentioned variables clearly change along a distance l. The same inconsistencies also arise if a phenomenological analysis of rising and cooling bubbles is made in deriving the model (see Weiss et al. 2004, and references therein), or if the model is formulated just as one imitating heat diffusion with an enhanced, effective diffusivity (cf. Sect. 3 of Canuto 2009), or if it is derived as a one-eddy (delta function) approximation in the context of a more general, two-point closure model of turbulent convection (Canuto 1996).

A more recent, parameter-less sibling

In an attempt to remove the need of a mixing length, Pasetto et al. (2014) constructed a model for the convective flux for which, as several times before, it is claimed that it does not depend on any free parameter.
A word of caution should be attached to any such claim already now, whether made in favour of a convection model or a numerical simulation: they all either depend on parameters related to the modelling of the flow, or the flow such models consider is so idealized as to have but little relation to any real world flow. We return to this sobering statement further below. Indeed, in their derivation, Pasetto et al. (2014) assume the Boussinesq approximation (discussed in Sect. 4.3.1) which is also used in MLT. But from the beginning the idea of convective elements is invoked which supposedly travel distances small compared to distances over which temperature, pressure, and density vary significantly so that gradients of these quantities could develop. While this is just the basis for the validity of the diffusion approximation, it is used in Pasetto et al. (2014) together with the Boussinesq approximation to motivate the assumption that convective flow in stars is irrotational and hence a potential flow. Comparing with LES of stellar convection or direct numerical simulation of convection for idealized microphysics, neither of these assumptions can be found to be justified: Figs. 15 and 16 of Muthsam et al. (2010) show the norm of vorticity and a volume rendering of the difference between pressure and its horizontal average: clearly, strong vorticity appears especially close to downdrafts (Fig. 15) and even well defined, tornado-like vortex tubes appear in the flow (Fig. 16) already at slightly below 10 km resolution. These vortex tubes penetrate well into upflow regions underneath the solar convection zone, once a resolution of 3 km is reached (cf. Fig. 13). Hence, the assumption of a potential flow for solar convection is completely at variance with LES of solar convection. The flow is clearly turbulent, while the model of Pasetto et al. (2014) excludes turbulence from the beginning.
Although it might still be possible that statistical averages predicted from such a model agree with some of the data obtained from observations and numerical simulations, there is at least no obvious physical and mathematical basis for the agreement. The removal of the mixing length in Pasetto et al. (2014), an attempt in which they indeed succeed, is paid for by other simplifications: not just the Boussinesq approximation, but also irrotationality (and thus complete exclusion of turbulence) and the introduction of a dynamics of "convective elements" (in the end fluid bubbles akin to MLT) which are heuristically motivated: such features cannot be identified in observations of solar convection or convection in the atmosphere of the Earth, nor in numerical simulations of these based on the fundamental Eqs. (1)–(10). This limits the tools developed in Pasetto et al. (2014) to predict the ensemble averaged quantities of interest to stellar convection modelling (superadiabatic and mean temperature gradient, etc.) and the region of applicability of the model. A much stronger, detailed criticism of the model of Pasetto et al. (2014) has recently been published in Miller Bertolami et al. (2016). It addresses the internal consistency of the model, the stringency of the tests the model has passed, and other issues. We suggest that the reader compare the original papers, and instead of repeating further details we prefer to repeat the general comment made just above: a "parameter free" description of a flow has so far only been found for flows for which drastic simplifications are assumed to hold for their basic properties. Which price is the lower one to pay (a "parameter free" model or a physically more complete model with parameters that require calibration) is probably best judged by comparisons to observational data and, where possible, advanced hydrodynamical simulations that solve the fundamental Eqs.
(1)–(10), provided that the model assumptions made appear acceptable.

Non-local mixing length models

The models of Gough (1977a, b) and Unno (1967) are examples of non-local mixing length models. Their detailed derivation and in particular the extensions necessary to use them in the context of radially pulsating stars is discussed in Houdek and Dupret (2015). One particular feature of the model of Gough (1977a, b) is to consider several basic quantities, i.e., the convective flux \(F_\mathrm{conv}\), the superadiabatic gradient \(\beta \), and the turbulent pressure \(p_\mathrm{turb}=\overline{\rho u_3 u_3} =\overline{\rho w^2}\), as averages over vertical distances, which accounts for the fact that these clearly change over the typical travel distance of eddies (Sect. 3.3.1 of Houdek and Dupret 2015), contrary to the assumptions of the models discussed in Sects. 3.2.2 and 3.2.3 above. In the end one arrives at a two-equation (second order), non-local model (see also Canuto 1993, where, however, a number of simplifications had to be made to compare the model to a Reynolds stress model of convection). The benefits of this extension when dealing with pulsating stars have already been discussed in Houdek and Dupret (2015). But neither the phenomenological style of modelling the dynamics of bubbles or convective eddies nor the introduction of a mixing length itself is avoided this way. The most advanced generalization of the mixing length approach as used in astrophysics is probably the model by Grossman et al. (1993). They started from the idea of deriving Boltzmann-type transport equations for fluid blobs, very similar to the derivation of the fundamental NSE themselves (cf. Huang 1963; Hillebrandt and Kupka 2009).
In a next step they arrived at a hierarchy of moment equations which by the nature of their approach and the similarity to the NSE is structurally very similar to the higher order moment equations derived in the Reynolds stress approach directly from the NSE. On the other hand, in deriving the NSE themselves, progressing from the Boltzmann-type transport equations for the distribution functions of microphysical particles to the Maxwell–Boltzmann transport equations, which describe the dynamics on the level of averaged, macroscopic quantities such as the mean particle number density, eventually allows the derivation of the closed system (1)–(3), i.e., the NSE (see Huang 1963; Hillebrandt and Kupka 2009). This is possible thanks to the scale separation between microphysical processes and macroscopic ones. But that is not the case for macroscopic equations which are supposed to describe the dynamics of "fluid particles": in the end one gets stuck with a large number of terms which cannot be computed from within the model itself, unless one constructs a whole set of different scale lengths, mixing lengths for that matter. Thus, while Grossman et al. (1993) can easily rederive the original MLT model and suggest how generalizations accounting for anisotropic velocity fields and a concentration (mean molecular weight) gradient should look like, their approach provides no tools for closing the resulting systems of equations other than a large set of hypotheses on physical processes occurring over certain scale lengths. The similarities of the resulting moment equations with those obtained in the Reynolds stress approach of Canuto (1992, 1999) are probably an indication that one should rather investigate the latter at that level of modelling complexity, since the one-point closure turbulence models used in Canuto (1992, 1999) are not tied to the idea of fluid bubbles travelling a number of scale lengths that are difficult to specify.
Further models specifically designed to calculate overshooting

In parallel to the non-local extension of MLT there have been many attempts to develop models of the inherently non-local process of overshooting, where layers of fluid locally stable to convection are mixed because of processes going on in layers which are located at some distance and which are unstable in that same sense. The inconsistencies which can arise when combining (and possibly confusing) local concepts with non-local ones, as happened in the derivation of many models of convective overshooting, were heavily criticized by Renzini (1987). We hence only discuss here a few examples of models which have become popular beyond the group of authors who had originally developed them. An often used model to estimate overshooting above convective stellar cores is the integral constraint derived by Roxburgh (1978). Criticism raised by Baker and Kuhfuß (1987) concerned the neglect of contributions which become important in superadiabatic stratification. Roxburgh (1989) refuted this as not applying to the case of stellar cores. Hence, Zahn (1991) suggested Roxburgh's constraint to be applicable to the case of penetrative convection above convective stellar cores, which permitted him to combine it with his own model. Thereby he avoided the external calibration of the geometrical extent of overshooting, which in turn remains necessary when the model of Zahn (1991) is applied to overshooting underneath the solar envelope. Criticism of the physical completeness of the approach of Roxburgh (1989) was raised again (Canuto 1997b) regarding its neglect of the role of turbulent dissipation for energy conservation. Canuto (1997b) also summarized problematic approximations such as assuming a subadiabatic stratification that is tied to a zero convective flux, negligible superadiabaticity in the overshooting region, negligible kinetic energy dissipation, and the applicability of Eq.
(29) to compute the latter, shortcomings to be found in the vast majority of models proposed to calculate overshooting when modelling stellar convection zones. The model of Zahn (1991) centres around the properties of plumes observed in adiabatic convection which penetrate into stable stratification. In this sense it is related to the mass-flux models used in meteorology, to which we turn in Sect. 3.3.1. The applicability of Zahn's model is restricted to the case of convective penetration, which occurs for convection at high Peclet number close to adiabatic stratification (in Zahn 1991 the term overshooting is used to refer only to the case of low Peclet number and non-adiabatic stratification, to which the model is not applicable). Hence, it can be used to estimate overshooting underneath the solar convection zone or above a convective core of a massive star, but not for the case of convective overshooting in A-stars or hot DA white dwarfs (such as the one shown in Fig. 4). The model requires external information to become fully predictive and is also subject to some of the approximations criticized by Canuto (1997b). More recently, inspired by the results of numerical simulations, Rempel (2004) proposed a model for overshooting based on an ensemble of plumes generated within the convection zone. One problem that remains also with this type of model is the computation of the filling factors of the plumes in a physical parameter space for which a reliable numerical simulation is not available. A different model inspired by numerical simulations is the notion of an exponential decay of the velocity field in the overshooting region, as originally proposed by Freytag et al. (1996), which we have briefly discussed already in Sect. 2.4. This model is derived from numerical simulations of stellar convection at rather low Peclet number, which occurs in A-stars and hot DA white dwarfs. As follows from the discussion in Sect.
2.4 and the physical considerations made in Zahn (1991) and Rempel (2004), the region of applicability of this model is probably restricted to the outermost part of an overshooting zone, which is dominated by waves rather than plumes. It remains to be shown whether such a model can be applied to the case of overshooting at the bottom of the solar convection zone and above the convective core of a massive star without resorting to highly phenomenological patching, since the physics of the layers closer to the convection zone motivated the completely different models of overshooting proposed by Zahn (1991) and Rempel (2004). Clearly, this is not a satisfactory situation. A different route of modelling than the suite of special cases considered here appears to be necessary.

More general models of turbulent convection

A number of techniques have been developed for the modelling of statistical properties of turbulent flows which are not built on the phenomenological idea of providing an ensemble average for the somewhat vaguely defined concepts of bubbles or blobs of fluid, although we have to add here that the actually available models are based on rather continuous transitions and mixtures of ideas. Blobs and bubbles, however, cannot easily be identified with the coherent structures that are indeed found in turbulent flows (cf. Lumley 1989; Lesieur 1997; Tsinober 2009) and thus provide no real advantage in deriving closure relations, even though they might still implicitly be referred to in some of the more advanced models. For general introductions into modelling turbulent flows, we refer to Lesieur (1997) and Pope (2000); for critical discussions of the techniques used, Lumley (1989) and Tsinober (2009) provide valuable information. We note for the following that some of the work discussed below actually does rely on a mixing length, but in the sense of Eq. (29), i.e., for computing the dissipation rate of kinetic energy.
That usage is based on the idea of scale separation and the assumption of an inertial range rather than on the concepts developed for astrophysical MLT in Biermann (1932). We also note that some of the methods discussed below have also been used in the phenomenological models already introduced in Sect. 3.2 together with their model specific input.

Summary of methods

In dealing with turbulent flows a very ancient idea is that of splitting the basic fields such as velocity into an average and a fluctuation around it, $$\begin{aligned} A_i = \overline{A_i} + A_i', \end{aligned}$$ where \(A_i\) may be a scalar such as temperature T or the component of a vector field \(\varvec{u}\). A key property of each of those averages is that \(\overline{A_i'} = 0\). This Reynolds decomposition or Reynolds splitting was first suggested in Reynolds (1894) and allows the derivation of dynamical equations for a mean flow and fluctuations around the latter (see Chap. 4 in Pope 2000). In dealing with compressible flows it is advantageous to perform the Reynolds splitting for the conserved or at least density weighted variables, i.e., $$\begin{aligned} T = \widetilde{T} + T'' = \frac{\overline{\rho T}}{{\overline{\rho }}} + T'', \quad u_i = \widetilde{u_i} + u_i'' = \frac{\overline{\rho u_i}}{{\overline{\rho }}} + u_i'' \end{aligned}$$ as proposed by Favre (1969). This is known as Favre averaging. Variables which already relate to a quantity per volume, such as \(\rho \) (mass) or pressure p (internal energy), remain subject to the standard Reynolds decomposition, i.e., \(\rho = {\overline{\rho }} + \rho '\) and \(p = \overline{p} + p'\). Despite these advantages, Favre averaging has been used in astrophysical convection modelling only by a few authors, for instance, by Canuto (1997a).
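To make the distinction concrete, the following sketch contrasts the two averages on synthetic, correlated density and temperature samples (illustrative Python; the numbers are invented and not taken from any simulation). It verifies the defining properties that the density-weighted mean of the Favre fluctuation vanishes while its plain mean does not:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, correlated density and temperature samples standing in for an
# ensemble of flow realizations (illustrative values only).
n = 200_000
rho = 1.0 + 0.3 * rng.standard_normal(n) ** 2              # density, positive skew
T = 5000.0 + 800.0 * rng.standard_normal(n) + 300.0 * (rho - rho.mean())

# Reynolds (plain ensemble) average and fluctuation: T = <T> + T'
T_reyn = T.mean()
T_prime = T - T_reyn

# Favre (density-weighted) average and fluctuation: T = T~ + T''
T_favre = (rho * T).mean() / rho.mean()
T_dprime = T - T_favre

print(T_reyn, T_favre)                 # the two means differ when rho, T correlate
print((rho * T_dprime).mean())         # ~0 by construction of the Favre average
print(T_dprime.mean())                 # generally nonzero
```

Because density and temperature are correlated here, the two means differ, which is exactly the situation in the strongly stratified surface layers of a star.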
In analogy with the Reynolds average we now have \(\overline{\rho T''} = 0\) as well as \(\overline{\rho u_i''}=0\), whereas \(\overline{T''} \ne 0\) and \(\overline{u_i''} \ne 0\) (for exact relations and their derivation we refer to Canuto 1997a). These averages are hence ensemble averages of the variables appearing in (1)–(3) (see the discussions in Sects. 2.1.1 and 2.3.4 and, for a more general introduction, Pope 2000). As already discussed in Sect. 2.3.4, the construction of such averages may or may not also involve a spatial "horizontal average" when dealing with vertically stratified flows such as stellar convection, whereas it always assumes an average over initial conditions or time. Specifically tailored to deal with turbulent convection is the mass flux average. There, quantities are averaged horizontally and separately over areas of up- and downflow. In meteorology this has been used since the early work of Arakawa (1969) and Arakawa and Schubert (1974). A comprehensive review of this method is given in Mironov (2009). The mass flux average is used in meteorology as an alternative to the one-point closures based on Reynolds averaging for the purpose of deriving parametrized models for physical processes not resolved on the spatial grids affordable in numerical weather prediction models (see also Mironov 2009). Alternatively, it may be used to inspire closure approximations for the Reynolds stress approach discussed below (Mironov et al. 1999; Zilitinkevich et al. 1999; Gryanik and Hartmann 2002; Gryanik et al. 2005). We note here that, as discussed in Chap. 5.4 of Tsinober (2009), it is essentially the decomposition of the basic variables combined with their mathematical representation which results in the appearance of a "cascade" (of transfer of energy, momentum, etc.).
In particular, Tsinober (2009) points out that the appearance of a cascade of energy transport in Fourier space, crucial to many (in particular two-point closure) models of turbulent flows, is a feature of the mathematical tool used to study the flow. One should not confuse this with properties such as the physical scales of energy input and dissipation, as chosen by Kolmogorov when proposing his famous hypotheses (Kolmogorov 1941, cf. Chap. 6 in Pope 2000), which are independent of the chosen decomposition: the idea of statistical isotropy of motions at small scales (local isotropy hypothesis); the existence of a universal equilibrium range (first similarity hypothesis, predicting an upper length scale below which the statistics of small scale motions depends on \(\nu \) and \(\epsilon \) only); and an inertial subrange (second similarity hypothesis, predicting a subrange within the former with a smallest length scale above which the statistics of motions depends only on \(\epsilon \)). Equation (29) is a direct consequence thereof. This picture, intended to model flows of high Reynolds number \(\mathrm{Re}\), was developed completely independently of and ahead of its Fourier representation and is rather related to the idea of structure functions (cf. Chap. 6 in Pope 2000 as well as Hillebrandt and Kupka 2009 and Chap. 5.2 in Tsinober 2009). We note that \(\mathrm{Re} = U(L) L / \nu \gg 1\) refers to velocities U(L) at length scales L which contain most of the kinetic energy of the flow. The strategy of conducting this analysis in Fourier space to provide further insight and mathematical means of modelling was suggested only thereafter by Obukhov (1941a, b) and Heisenberg (1948a, b). Finally, von Neumann (1963) realized that the Richardson–Kolmogorov cascade picture, used by Kolmogorov (1941) only in a footnote as a qualitative justification of his hypothesis of local isotropy at high \(\mathrm{Re}\), is a process occurring in Fourier space (Chap.
5.4.2 in Tsinober 2009; see Panchev 1971 for an overview of such models). The term cascade is due to Onsager (1945, 1949) (see Chap. 5.4.1 in Tsinober 2009). In Sect. 2.1 of Kupka (2009b) it is discussed why observations of solar granulation or even standard LES thereof cannot reveal any such scaling other than by chance: basically, the achievable resolution is too small to identify "Kolmogorov scaling" in the data. Once the averaging procedure has been defined, the basic Eqs. (1)–(3) are split into equations for the means and the fluctuations around them (see Pope 2000). Owing to the non-linearity of (1)–(3), products of means and fluctuations of the basic variables appear, such as \(\overline{A_i A_j}\) and \(A_i' A_j'\). One can construct dynamical equations for them by multiplying the dynamical equations for the fluctuations with the fluctuation of the same or other basic variables (i.e., \(A_j' \partial _t A_i'\) etc.; dynamical equations for products of means are not needed). The required mathematical transformations are straightforward and involve only the product rule and basic algebraic operations (cf. Canuto 1992; Pope 2000). This procedure was first proposed by Keller and Friedmann (1925). It is at this point where the closure problem comes in: any dynamical equation for a variable described as a product of n factors depends on variables which are the product of \(n+1\) factors (i.e., \(A_i' A_j'\) depends on \(A_i' A_j' A_k'\)). The Friedmann–Keller chain thus consists of an infinite hierarchy of moment equations. For small Reynolds numbers a proof of the convergence of this hierarchy to a unique solution was given in Vishik and Fursikov (1988). Based on work by Fursikov in the early 1990s, the case of large Reynolds numbers was satisfactorily solved as well, for a slightly idealized version of the full set (1)–(3), with the theorem of Fursikov and Emanuilov (1995).
It states that for periodic boundary conditions and constant viscosities and diffusivities, with suitably regular external forces, and under the assumption that an exact solution exists for the original, dynamical equations (i.e., the NSE), the Friedmann–Keller chain of approximations converges sufficiently fast (in an exponential sense) to a unique solution. While the exact rate of convergence cannot be determined by Fursikov and Emanuilov (1995), the theorem assures the mathematical meaningfulness of the entire approach. In practice, the hierarchy is truncated at a certain order according to affordability. Additional assumptions, the closure hypotheses, have to be introduced to obtain a complete, predictive system of equations. Since for this reason the resulting mathematical model is not directly (ab initio) derived from the fundamental Eqs. (1)–(3), it has to be checked separately that the requirements formulated in Sect. 3.1 are fulfilled to ensure that the approximation obtained this way is mathematically and physically meaningful. The following techniques are frequently used in deriving models of statistical properties of turbulent flows in general and turbulent convection in particular. One-point closure models are popular in the context of the Reynolds stress approach. Most phenomenological models such as MLT may also be considered one-point closure models in the sense that they consider averages of variables evaluated at a particular point in space and in time, just as in Eq. (30). The Reynolds stress approach differs from those in deriving variables for higher order correlations, including \(\overline{u_i' u_j'}\), a quantity neglected in MLT but of major physical importance, as it is directly related to the basic, non-linear advection represented by the term \(\mathrm{div}( \rho (\varvec{u} \otimes \varvec{u}))\) in Eq. (2). Most non-local models of convection used in or proposed for stellar astrophysics are of this type as well.
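As a concrete illustration of the one-point quantities involved, the sketch below estimates the Reynolds stress tensor \(\overline{u_i' u_j'}\) and the turbulent kinetic energy from velocity samples (illustrative Python with synthetic, anisotropic data; not any published model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic velocity samples (n realizations x 3 components) standing in for an
# ensemble of measurements at one point; values are purely illustrative.
n = 100_000
Lmix = np.array([[1.0, 0.0, 0.0],
                 [0.3, 0.8, 0.0],
                 [0.1, 0.2, 1.5]])   # mixes components -> anisotropic stresses
u = rng.standard_normal((n, 3)) @ Lmix.T + np.array([0.0, 0.0, 2.0])  # mean upflow

u_mean = u.mean(axis=0)              # Reynolds-averaged mean flow
u_fluc = u - u_mean                  # fluctuations u_i'

# One-point second order moments: Reynolds stress tensor R_ij = <u_i' u_j'>
R = u_fluc.T @ u_fluc / n
K = 0.5 * np.trace(R)                # turbulent kinetic energy, K = q^2 / 2

print(np.round(R, 3))                # symmetric, anisotropic by construction
print(round(K, 3))
```

By construction the sample estimate of \(R\) approaches the prescribed covariance `Lmix @ Lmix.T`, so the off-diagonal entries (the actual "stresses") are nonzero, which is precisely the information MLT discards.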
In contrast, two-point closure models are the main tool in studying (statistically) isotropic or homogeneous turbulence. For an extended discussion of available methods see Lesieur (1997). The main idea there is to consider correlations of functions evaluated at different points in space, such as the velocity differences which appear in the hypotheses of Kolmogorov. Since the work of Heisenberg (1948a, b) and other contemporary authors, it has become common to transform the exact dynamical equations for such correlations into Fourier space, construct models for them there, and eventually use those to predict one-point correlations such as the convective flux. Representatives of this approach which are widely used in stellar astrophysics are the models of Canuto and Mazzitelli (1991, 1992) and Canuto et al. (1996). These are local models of convection, and for reasons of mathematical complexity this approach is hardly used for non-local ones other than for deriving closure hypotheses for one-point closure models. Diagram techniques are a method sometimes used to compute quantities in the context of two-point closure models by expansion and summation over all (infinitely many) contributions, similar to quantum field theory. In turbulence theory the renormalization group approach underlying these techniques has to face the difficulty of so-called infrared divergences: contrary to quantum electrodynamics (QED), the boundary conditions at a finite region in space matter, and hence one has to deal with functions that take over the role of a simple, constant scalar such as the charge of the electron in QED. A detailed introduction to this approach is given in McComb (1990). Its best known application in convection modelling in stellar astrophysics is the model of Canuto and Dubovikov (1998), where it has been used to compute some of the time scales that appear in the one-point closure Reynolds stress models derived in that paper.
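The scale separation that Kolmogorov's hypotheses imply, and that makes the mixing-length use of Eq. (29) for the dissipation rate plausible, can be quantified in a few lines. The sketch below uses made-up input numbers (not taken from any particular star or simulation) to estimate \(\epsilon \approx U^3/L\), the Kolmogorov dissipation scale \(\eta = (\nu ^3/\epsilon )^{1/4}\), and the classical scale separation \(L/\eta = \mathrm{Re}^{3/4}\):

```python
import numpy as np

# Illustrative estimate of the scale separation implied by Kolmogorov's
# hypotheses. Input numbers are invented for demonstration purposes.
U = 2.0e3      # velocity at the energy-carrying scale U(L), in m/s
L = 1.0e6      # energy-carrying length scale, in m
nu = 1.0e-2    # kinematic viscosity, in m^2/s

Re = U * L / nu                 # Reynolds number of the energy-carrying scales
eps = U**3 / L                  # dissipation rate estimate in the spirit of Eq. (29)
eta = (nu**3 / eps) ** 0.25     # Kolmogorov (dissipation) length scale

print(f"Re = {Re:.1e}")
print(f"L / eta = {L / eta:.1e}")        # scale separation across the cascade
print(np.isclose(L / eta, Re ** 0.75))   # with eps = U^3/L this holds identically
```

With \(\epsilon = U^3/L\) one has \(L/\eta = \mathrm{Re}^{3/4}\) exactly, which is why no affordable grid can resolve the full inertial range for stellar values of \(\mathrm{Re}\).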
Non-local models of turbulent convection

The model of Kuhfuß (1986) is quite different from its non-local predecessors proposed in Unno (1967), in Gough (1977a, b), and in Stellingwerf (1982) in the sense that it does not rely on dynamical equations describing the behaviour of convective eddies or bubbles from the outset, and it does not just use the diffusion approximation to merely model overshooting. Rather, it starts straight from the full hydrodynamical equations and applies the anelastic approximation (see Sect. 4.3.2) and the diffusion approximation (for non-local transport processes) in a consistent manner. Only a simplified version of the model is used in practice, which requires solving a differential equation for the (turbulent) kinetic energy in addition to other dynamical equations required by stellar structure and stellar pulsation modelling. It is used to account for overshooting and for the time dependence of convection in radially pulsating stars. In this sense it competes directly with the earlier models published in Gough (1977a, b) and in Stellingwerf (1982). However, as already pointed out in Canuto (1993), the diffusion approximation for the flux of kinetic energy is highly incomplete. This can be seen from comparisons with 3D direct numerical simulations of fully compressible convection as discussed in Kupka and Muthsam (2007b) and in Kupka (2007). They demonstrated that the downgradient approximation drastically underestimates the flux of kinetic energy and the third order moment of vertical velocity in the interior of convection zones with different efficiencies of radiative transfer, even if the free parameter of the approximation is tweaked to fit their profiles in the overshooting zone (see also Chan and Sofia 1996). The model is even less consistent for (third order) cross correlations of velocity and temperature fluctuations (see also Sect. 3.3.3).
From this point of view the model described in Kuhfuß (1986) can thus at best only account in a rather rudimentary way for the non-locality of turbulent convection. The same holds for its model of the convective flux when probed with 2D LES of a Cepheid (Mundprecht et al. 2015). In spite of the stability and relative simplicity of this class of models it thus appears desirable to consider more general models of turbulent convection.

Non-local Reynolds stress models

When proceeding towards physically more complete models of convection the issue of realizability becomes more and more important, since both mathematically and physically the models increase in complexity and the interplay between the different approximations may have unwanted side effects leading to numerical instability or even unphysical solutions. This is probably why the so-called downgradient approximation (DGA) has remained so popular in non-local models of stellar convection. It assumes that there is a gradient in one or several of the averages of the products of the fluctuating (turbulent) quantities, for instance the second order moment \(\overline{{u'_3}{u'_3}}\), which gives rise to a flux that has the form of a third order moment, for example, \(\overline{{u'_3}{u'_3}{u'_3}}\) or even the flux of kinetic energy \(F_\mathrm{kin}\) (see below). This flux is assumed to be proportional (and opposite) to the gradient just introduced times a diffusivity which involves the quantity being transported by the flux. Consistently applied to all correlations of third order for the velocity and temperature fields, this is a generalization of the model of Kuhfuß (1986) discussed in Sect. 3.3.2. That in turn is a generalization of the idea of turbulent diffusion introduced in Sect. 3.2.1, where the driving gradient is obtained from the mean of the basic variable (\(\overline{T}\), etc.). For processes occurring on small scales such as radiative and heat diffusion, Eq. (4) and Eq.
(6), this allows a very accurate description of transport (of energy, momentum, etc.). However, for turbulent transport there is no reason why the quantity transported should not vary strongly along the typical scales over which the transport occurs (a "mean free path" in the flow). The Taylor expansion underlying the diffusion approximation hence cannot be expected to hold, and in this sense the downgradient approximation for turbulent transport is a model with a limited region of applicability. This appears to be the case also in comparisons with numerical simulations of compressible convection, to which we return below (cf. Kupka and Muthsam 2007a, b, c; Kupka 2007). Apparently, the DGA of third order moments is more likely to hold at the boundary of convective zones and in regions of overshooting (see also Chan and Sofia 1996). In spite of these shortcomings, it has been frequently used also for non-local Reynolds stress models, to which we turn in the following. Xiong was the first to promote the use of the Reynolds stress approach to derive models of stellar convection (see Xiong 1978). If we recall the notation and discussion from Sect. 3.3.1 and consider the plain Reynolds average (30) such that \(w' = w - \overline{w}\) is the fluctuating part of the vertical velocity, \(\theta ' = T - \overline{T}\) is its counterpart with respect to temperature, and \(q^2 = {u'_1}^2+{u'_2}^2+{u'_3}^2 = 2\,K\) is the turbulent kinetic energy resulting from the sum of both vertical and horizontal components of velocity fluctuations, a Reynolds stress model aims at first deriving dynamical equations for these quantities directly from the Navier–Stokes equations (possibly within the Boussinesq approximation, but not necessarily so, see Canuto 1993, 1997a). The models also consider the cross-correlation \(\overline{{u'_i}{u'_j}}\), which is known as the Reynolds stress.
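Using this notation, the downgradient approximation discussed above amounts to a one-line model. The sketch below (illustrative Python; the profile, diffusivity, and parameters are made up, not a calibrated model) computes a DGA estimate of the third order moment \(\overline{w'^3}\) from a given profile of \(\overline{w'^2}\):

```python
import numpy as np

# Downgradient approximation (DGA) sketch: the third order moment <w'^3> is
# modelled as a diffusive flux driven by the gradient of <w'^2>.
# The profile and diffusivity below are invented for illustration only.
z = np.linspace(0.0, 1.0, 401)                 # depth coordinate
w2 = 0.04 * np.exp(-((z - 0.5) / 0.15) ** 2)   # <w'^2>: peaks inside the zone

# Turbulent diffusivity D_t ~ c * tau * <w'^2>, with a free parameter c and a
# time scale tau (both chosen arbitrarily here).
c, tau = 0.2, 1.0
D_t = c * tau * w2

w3_dga = -D_t * np.gradient(w2, z)             # <w'^3> ~ -D_t d<w'^2>/dz

# The modelled flux is directed down the gradient: positive above the peak of
# <w'^2> (transport outwards), negative below it, zero at the peak itself.
i_peak = w2.argmax()
print(w3_dga[i_peak + 20] > 0, w3_dga[i_peak - 20] < 0)
```

The sketch also makes the limitation visible: the DGA forces \(\overline{w'^3}\) to vanish wherever \(\overline{w'^2}\) has an extremum, whereas simulations show large, asymmetric third order moments right there.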
Additional hypotheses have to be assumed to obtain a closed system of differential equations, hence their name closure hypotheses or closure assumptions. Such hypotheses cannot be expected to hold for all physical scenarios, and even for the same type of flow (such as compressible convection) it may be difficult to find one which is not very sensitive to the physical parameters of the system. To express the asymmetry between up- and downflows, the non-locality of the generation of enthalpy and kinetic energy fluxes, and non-local processes related to the generation of buoyancy, a certain minimum complexity of the model appears inevitable, and in this sense the Reynolds stress models are more complicated and physically more complete than the non-local convection models discussed so far. A Reynolds stress model thus provides dynamical equations at least for $$\begin{aligned} \overline{q^2}, \quad \overline{{\theta '}^2}, \quad \overline{w'\theta '}, \quad \overline{{w'}^2}. \end{aligned}$$ If a gradient in mean molecular weight (and thus concentration, say of helium, e.g.) is to be accounted for, additional correlations of second order appear, similar to those just given in Eq. (32). It is these quantities that are modelled by the approach of Xiong (1978, 1985) and Xiong et al. (1997).
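All four moments in Eq. (32) can be estimated directly from any time series of the fluctuating fields. A minimal sketch with synthetic, correlated samples (illustrative Python; the numbers do not come from any convection model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic, correlated fluctuations of vertical velocity w' and temperature
# theta' at one point (hot fluid rising, cold fluid sinking); the amplitudes
# are purely illustrative.
n = 200_000
w = rng.standard_normal(n)
theta = 0.8 * w + 0.6 * rng.standard_normal(n)     # correlated with w'
u1 = 0.7 * rng.standard_normal(n)                  # horizontal components
u2 = 0.7 * rng.standard_normal(n)

w -= w.mean(); theta -= theta.mean()               # enforce zero averages
u1 -= u1.mean(); u2 -= u2.mean()

# The four second order moments of Eq. (32):
q2 = np.mean(u1**2 + u2**2 + w**2)   # <q^2> = 2 K
th2 = np.mean(theta**2)              # <theta'^2>
wth = np.mean(w * theta)             # <w' theta'>, enters the convective flux
w2 = np.mean(w**2)                   # <w'^2>, enters the turbulent pressure

print(round(q2, 2), round(th2, 2), round(wth, 2), round(w2, 2))
```

The positive correlation \(\overline{w'\theta '}\) built into the samples is what an upward convective energy flux looks like at the level of one-point statistics.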
The quantities appearing in (32) are closely related to quantities already known from convection modelling in general and frequently computed from hydrodynamical simulations: the turbulent pressure \(p_\mathrm{turb}=\overline{\rho {w'}^2} \approx {\overline{\rho }}\,\overline{{w'}^2}\), the convective (enthalpy) flux \(F_\mathrm{conv} = \overline{\rho h' w'} \approx c_\mathrm{p}\,{\overline{\rho }}\,\overline{w'\theta '}\) (where enthalpy fluctuations \(h'\) have been approximated), the flux of (turbulent) kinetic energy \(F_\mathrm{kin} = \frac{1}{2}\overline{\rho \,q^2\,w'} \approx \frac{1}{2}{\overline{\rho }} \overline{q^2 w'}\), and the potential energy contained in the fluctuations of temperature (or alternative variables such as enthalpy or entropy), related hence to \(\overline{{\theta '}^2}\). Approximations are made for correlations of third (or even fourth) order which are expressed in terms of second order correlations, hence the name second order closure (SOC). Indeed, one of the just mentioned quantities, \(F_\mathrm{kin}\), actually stems from a third order correlation, and in Xiong (1978, 1985) it is approximated by the downgradient approximation (DGA), as in the non-local model of Kuhfuß (1986). Since the DGA introduces serious restrictions, as already mentioned above, the model is certainly only a rather incomplete description of convective overshooting from the unstable convection zone into neighbouring, "stable" layers. The model of Xiong (1978, 1985) has been applied to a number of problems from stellar astrophysics: overshooting in massive stars, Xiong (1986), and in the shallow convection zone of late B to early F type stars, Xiong (1990), were among the earliest ones. A more complete model (Xiong et al.
1997) was published following the work of Canuto (1992, 1993), where it had been proposed to consider the full dynamical equations for third order moments, close them at fourth order through the (eddy damped) quasi-normal approximation, assume them to be stationary, and thus obtain algebraic equations for the third order moments which allow a second order closure. The detailed procedures of Canuto (1992, 1993) and Xiong et al. (1997) are, however, different, so it is not advisable to draw conclusions about one of the models from results for the other. The model of Xiong et al. (1997) was applied to compute overshooting below the solar convection zone (Xiong and Deng 2001) and to compute pulsational stability (Xiong et al. 2015), although mixed results have been reported on the latter (Houdek and Dupret 2015). The models of Xiong (1978, 1985) and Xiong et al. (1997) still contain a mixing length, which is used to compute the dissipation rate of turbulent kinetic energy, \(\epsilon \), according to Eq. (29). Canuto (1992) first proposed to abandon this procedure and suggested instead computing \(\epsilon \) from a dynamical equation, which models the exact (and complicated to close) equation for this quantity, as had long been done by that point in the engineering community in the more basic \(K{-}\epsilon \) model of shear driven turbulence. Hence its designation as a fully non-local model of turbulent convection, since the model avoids the use of a mixing length also for the computation of \(\epsilon \). In Canuto (1993) the originally used Boussinesq approximation was eased by accounting for pressure fluctuations through a linear expansion (footnote 5). In Canuto and Dubovikov (1998) a turbulence model based on the diagram technique mentioned in Sect.
3.3.1 was used to compute the time scales that appear in the Reynolds stress approach of Canuto (1992, 1993) and which are related to the dissipation of temperature fluctuations and to cross-correlations between (fluctuations of) the pressure gradient and velocity as well as temperature fluctuations. In this improved form the fully non-local Reynolds stress model was first solved (footnote 6) for the case of compressible convection in Kupka (1999). Direct numerical simulations appropriate to study overshooting in an idealized setting which is fairly similar to the one found in A-type stars and hot DA white dwarfs (apart from a much lower Reynolds number and much higher Prandtl number assumed for the 3D calculations) were evaluated and compared to the fully non-local Reynolds stress model. It was found that not all terms in the model of third order moments of Canuto (1993) could be kept, as keeping them would prevent the solutions from converging. In this form the model delivered promising results, which was considered an indication that it should work at least for shallow convection zones with strong radiative losses (low Peclet number and in this sense inefficient convection). This deficiency may also be at the root of the problems of the approach taken by Xiong et al. (1997, 2015) and Xiong and Deng (2001) and discussed in Houdek and Dupret (2015). It was corrected in a new model for the third order moments in Canuto et al. (2001). In this form, now based on the most complete model as proposed by Canuto (1992, 1993), Canuto and Dubovikov (1998) and Canuto et al. (2001), the model was used to study the convection zone in A-type stars as a function of effective temperature in Kupka and Montgomery (2002). A reasonable qualitative and even rough quantitative agreement was found when comparing those results to 2D LES as discussed in Freytag et al. (1996) for the standard choice of parameters of the Reynolds stress model.
MLT by comparison requires lowering \(\alpha \) from values of 1.6–2, as used in solar modelling, down to a value of 0.36 just to match the maximum of the convective flux. It also cannot account for the huge amount of overshooting, in terms of pressure scale heights, found from both the Reynolds stress model and the 2D LES. In Marik and Petrovay (2002) the model of Canuto and Dubovikov (1998), without compressibility corrections and assuming the downgradient approximation and additionally a fixed ratio between \(\overline{q^2}\) and \(\overline{{w'}^2}\) (i.e., a fixed degree of anisotropy of the velocity field), was solved for the layers at the bottom of the solar convection zone. It was found to lead to a small amount of overshooting, in agreement with helioseismic measurements, as opposed to a simple downgradient model, which is used in one-equation non-local models of convection and which predicts a much larger overshooting. In its full form the model—with compressibility corrections, without fixed anisotropy, and avoiding the downgradient approximation—was then applied to the case of shallow surface convection zones in hot DA white dwarfs in Montgomery and Kupka (2004). They considered slightly higher effective temperatures compared to the one for the numerical simulation shown in Fig. 4 in this review. Remarkably, although MLT requires a much larger parameter \(\alpha \) for this case, the fully non-local Reynolds stress model again agrees reasonably well in a qualitative sense, and roughly quantitatively, with 2D LES from the literature (e.g., Freytag et al. 1996). This, however, already marks the limit of the region of applicability of this model in its original form.
A detailed comparison between shallow convection with high radiative losses (which always occur also at the boundary of a convection zone) and a case of deep convection zones with overshooting, again for the case of idealized microphysics and thus fully resolved on all length scales (i.e., a direct numerical simulation), using the same 3D hydrodynamical code and set-up presented in Muthsam et al. (1995, 1999) as used in Kupka (1999), revealed deficiencies of the closure model used (Kupka and Muthsam 2007a, b, c; Kupka 2007). Indeed, the closures for the cross-correlations \(\overline{{w'}^2 \theta '}\) and \(\overline{w' {\theta '}^2}\) as well as for \(\overline{{w'}^3}\) were found unsatisfactory (Kupka and Muthsam 2007b; Kupka 2007). The downgradient approximation of these quantities performs even less satisfactorily. Thus, the closures used in these models require improvements to extend the region of applicability of the whole approach beyond what is possible with the more commonly used one-equation non-local models of convection. One possible alternative has been suggested in the meteorological community and is based on a two-scale mass flux approach, where up- and downflows are not strictly coupled to regions of hot and cold flow with respect to their horizontal average (see Zilitinkevich et al. 1999; Mironov et al. 1999; Gryanik and Hartmann 2002; Gryanik et al. 2005). This approach provides closures for the combinations of \(w'\) and \(\theta '\) if the skewness of both is known (so this is not just a second order closure, since it requires knowing two third order quantities, \(S_w\) and \(S_{\theta }\)).
One example is the relation $$\begin{aligned} \overline{w'^2\theta '} \approx \overline{w'\theta '} (\overline{w'^3}/\overline{w'^2}), \end{aligned}$$ which has also been proposed in Canuto and Dubovikov (1998) in an alternative model for closing their Reynolds stress equations (this particular closure can already be derived from the standard mass flux approach as in Arakawa (1969) and Arakawa and Schubert (1974), since it only depends on \(S_w\)). Figures 8 and 9 demonstrate the results for the case of the 3D LES of the Sun and a white dwarf, as introduced already in Sect. 2. A remarkable agreement is found, very similar to what had been found in Kupka and Robinson (2007) for a different set of 3D LES (with closed vertical boundary conditions) for the Sun and a K-dwarf. This has likewise been found in Kupka and Muthsam (2007b) and Kupka (2007) for direct numerical simulations of compressible convection, for both the case of a shallow zone and a deep zone with efficient convective transport. However, as also pointed out in Kupka (2007), just putting together the best available closures does not mean that the resulting Reynolds stress model is an improvement. The assumptions underlying the closures have to be compatible; otherwise the resulting model might even be unstable. We note here that another closure of this type is shown in Figs. 15 and 16 in Sect. 6.2. In spite of this limitation on their use in self-consistent, stand-alone models, these closures have already been used in an improved model of p-mode excitation of solar-like oscillations (Belkacem et al. 2006a, b; Samadi et al. 2008). There, the model also takes input from 3D LES. We also note here that the Favre average of the cross correlation deviates quite a bit less from the plain Reynolds average than one might have intuitively expected (see Figs. 8, 9 here and also Figs. 15, 16 from Sect. 6.2).
The deviations of the Favre from the plain Reynolds averages hardly exceed 20% even within the superadiabatic layer, where the fluctuations of pressure, temperature, and density are largest. This might be interpreted as an indication that the complex accounting for full compressibility, as proposed in the model of Canuto (1997a), might not be necessary at this stage of modelling, since it is a "higher order effect" in comparison with getting just the Reynolds averaged closures right. The overview on the use of the Reynolds stress approach for modelling stellar convection given here is not exhaustive, since our focus has been to demonstrate how physically complete these models have become. Recent efforts as presented in Canuto (2011) have concentrated on providing a framework for an easier implementation of the approach: the idea is to be able to increase the physical completeness of the model step-by-step, which in practice is easier than starting from the most complete model and simplifying it in turn. Likewise, as an alternative to the full model of Xiong (1985) and Xiong et al. (1997), a \(K{-}\omega \) model was proposed in Li (2012), where an equation for the inverse time scale associated with the dissipation rate of kinetic energy, \(\omega \equiv \tau ^{-1} \equiv \epsilon /\overline{q^2}\), replaces the one for \(\epsilon \). This is actually a two-equation model as is standard in modelling of flows in engineering applications and is thus simpler than a full Reynolds stress model. In this respect it belongs to the non-local models discussed in Sect. 3.3.2. Its local limit was used in stellar evolution calculations and compared to the mixing obtained with the full \(K{-}\omega \) model (see Li 2012).

Fig. 8  Correlation \(\overline{w'^2\theta '}\) at the solar surface computed from a numerical simulation with ANTARES (Muthsam et al. 2010) (details on the simulation: Belkacem et al. 2015, in prep.) in comparison with a closure relation and the Favre average \(\{w'^2\theta '\}=\overline{\rho w'^2\theta '}/{\overline{\rho }}\). For details see text

Fig. 9  Correlation \(\overline{w'^2\theta '}\) at the surface of a DA white dwarf computed from a numerical simulation with ANTARES (Muthsam et al. 2010) (details on the simulation: Kupka et al. 2017, submitted) in comparison with a closure relation and the Favre average \(\{w'^2\theta '\}=\overline{\rho w'^2\theta '}/{\overline{\rho }}\). For details see text

Rotating convection and two remarks

We note that the above discussion of the Reynolds stress approach is far from complete. A very natural extension is the case of convection in a rotating environment. Indeed, astrophysical objects, whether stars or planets, all rotate, and in some cases even very rapidly. The interaction of convection with rotation is known to lead to the phenomenon of differential rotation, which can be studied not just with numerical simulations such as those presented by Brun and Toomre (2002) and many others (we refer again to Miesch 2005 for a review), but also with extensions of the types of convection models we have presented here. Examples and an overview of this field of modelling can be found in Rüdiger (1989). As is necessary by the very nature of rotation, many of the analytical or semi-analytical models are no longer "1D models" in the traditional sense (a more recent example is the model of Rempel 2005, which has also co-inspired a new numerical technique to which we return in Sect. 4.5). Indeed, the most complete among the Reynolds stress models available at the moment, published in Canuto (1999), also accounts for differential rotation (in addition to convection, double-diffusive convection, diffusion, and overshooting). In its full form it actually provides a 3D model of a star, although the formalism suggests a route to gradually simplify this model (this is again addressed, in a more systematic way, in Canuto 2011 and the series of papers introduced therein). 
From a physical point of view we note that these models contain no fundamentally new techniques other than those already introduced: one-point ensemble averages, possibly Favre averaging, closure hypotheses, renormalization group techniques and two-point closures to compute specific quantities, possibly results from and calibrations with numerical simulations, etc. This holds even though for each model some specific approximations are suggested to deal with the problems at hand. As a remark we note that the most adequate modelling strategy for convection is constrained by the specific astrophysical questions which are studied. For example, our understanding of solar and stellar activity requires modelling of surface and interior convection zones that takes into account the interaction of convection with magnetic fields (see Miesch 2005 and further literature cited therein). Likewise, numerical simulations and a proper accounting of effects due to deviations from local thermal equilibrium have to be considered in high precision determinations of solar (and stellar) element abundances (Nordlund et al. 2009). In each of these cases a convection model which reduces the physical description to values of horizontal averages cannot be expected to yield an acceptably accurate description. On the other hand, for other problems such as stellar evolution over very long time scales, there is no alternative to such strategies, at least not within the foreseeable future, as we summarized in Sect. 2.6. We conclude this part with a second, optimistic remark: although presently the non-local convection models still have a lot of weaknesses even if considered in their physically most complete form, they have the potential to substantially improve the modelling of stellar convection over the classical, local MLT approach. 
This is much needed, since multidimensional, hydrodynamical simulations are just not applicable to all problems in which stellar convection matters (see also Sect. 2).

Multidimensional modelling: the equations

As has been discussed in Sect. 2, one basically knows the equations which one wishes to solve for stellar convection: these are the Navier–Stokes (or Euler) equations of hydrodynamics, properly augmented with various ingredients according to the precise physical problem at hand. It is not always prudent or possible to numerically solve these equations as they stand when dealing with stellar convection problems. Often, in particular near the stellar photosphere, convective velocities are large (roughly of the order of the speed of sound), such as in solar granulation. In the interior of many stars, however, convective motions are very slow in terms of the sound speed. In those cases, the usual numerical methods for time integration of the Navier–Stokes or Euler equations would take a prohibitively long time for execution, the reason being that, without special measures, one has to track the sound waves. This results in a very small time step, completely at odds with the much larger time scale on which the solution itself changes: a computation of that sort would effectively stall. Methods are therefore required to explicitly circumvent this difficulty. Such methods come in two basic flavours. One way is to modify the Navier–Stokes or Euler equations themselves in such a manner that the sound waves are eliminated. Such methods are described in this chapter, Sects. 4.3–4.6. The alternative leaves the Navier–Stokes or Euler equations unchanged but makes use of numerical methods enabling time steps compliant with the actual changes in the solution. Such approaches are described later on, see Sect. 5.3.

Conservation laws

The equations of hydrodynamics, which are at the core of our considerations, express the conservation (balance) of mass, momentum and energy. 
To understand the numerical methods used for their solution, a short description of the properties of conservation laws is therefore in order.

The 1D case

Consider a function \(v=v(x,t)\), defined along some x-interval and for all times t. Typically in the applications to follow, v will be a density (mass-, momentum- or energy-density). We assume that there exists a flux function f(v), so that \(\int _{x_0}^{x_1}v\,dx\) changes only through fluxes across the boundaries \(x_0\) and \(x_1\), in such a way that for all times t, for all time increments \(\tau \) and for all points \(x_0, x_1\) we have $$\begin{aligned} \int _{x_0}^{x_1}v(x,t+\tau )\,dx = \int _{x_0}^{x_1}v(x,t)\,dx -\left( \int _{t}^{t+\tau }f(v(x_1,t))\,dt-\int _{t}^{t+\tau }f(v(x_0,t))\,dt\right) . \end{aligned}$$ In this way, f(v(x, t)) actually describes the flux of quantity v through position x at time t. Subtracting the first term on the right hand side, multiplying by \(\frac{1}{\tau }\) and letting \(\tau \) tend to zero, we obtain $$\begin{aligned} {\partial _t}\left( \int _{x_0}^{x_1}v(x,t)\,dx \right) =-(f(v(x_1,t))-f(v(x_0,t))) \quad \forall x_0, x_1. \end{aligned}$$ Using this equation, proceeding similarly in the variable x (\(x_1\rightarrow x_0\)) and finally writing x instead of \(x_0\), we obtain \(\partial _t v(x,t)=-\partial _xf(v(x,t))\), which we will typically use in the form $$\begin{aligned} \partial _t v(x,t)+\partial _xf(v(x,t))=0. \end{aligned}$$ Conversely, integration of Eq. (36) leads back to Eq. (35). These equations therefore express conservation, and equations of the form of Eq. (36) are called conservation laws (in the case of one spatial dimension). The transition from the original integral form, Eq. (34), holds true when the main theorem of calculus is applicable, i.e., when v and f are continuously differentiable. 
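The integral form above is the basis of finite-volume schemes: updating cell averages with interface fluxes conserves the discrete total of v exactly, because interior fluxes cancel in pairs. A minimal sketch for linear advection on a periodic grid (the grid size, advection speed and initial profile are our own illustrative choices):

```python
import numpy as np

# Finite-volume sketch of the integral conservation form: the total of v
# is preserved because interface fluxes telescope.  Linear advection
# v_t + (c v)_x = 0 with upwind fluxes on a periodic grid.
n = 64
c = 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                     # CFL number 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
v = np.exp(-100.0 * (x - 0.5)**2)     # smooth initial bump
total0 = v.sum() * dx                 # discrete analogue of int v dx

for _ in range(200):
    f = c * v                         # upwind interface flux (c > 0)
    v = v - dt / dx * (f - np.roll(f, 1))

total1 = v.sum() * dx
# total1 equals total0 up to round-off: the interface fluxes telescope
```

On a bounded domain the change of the total would instead equal exactly the flux difference at the two boundaries, as the integral form states.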
It should be noted that this smoothness condition is not always fulfilled in stellar convection (in particular in granulation) and that situations (shocks) may occur in which the solution is discontinuous. In these cases, Eq. (35), which directly expresses the physical conservation principle, has to be considered the basic one. In the case of several spatial dimensions similar considerations apply. We now have, for the 3D case, \(\varvec{x}=(x,y,z)^*\) or \(\varvec{x}=(x_1,x_2,x_3)^*\), where the asterisk denotes the transpose. We again consider a scalar function \(v(\varvec{x},t)\) describing a conserved quantity, which means that there is a vector-valued flux function \(\varvec{f}(v,t)\), such that, for each bounded domain \(\varSigma \) in space, we have $$\begin{aligned} {\partial _t}\left( \int _\varSigma v(\varvec{x},t)\,d\varvec{x}\right) + \int _{\partial \varSigma }\langle {\varvec{f}(v(\varvec{x},t)),\varvec{n}}\rangle \,d\sigma = 0. \end{aligned}$$ Here, \(\partial \varSigma \) denotes the boundary of \(\varSigma \), \(\varvec{n}\) the outward pointing normal and \(d\sigma \) the surface element. In three dimensions, we will use the notation \(\varvec{f}=(f,g,h)\) or \(\varvec{f}=(f_1,f_2,f_3)\) (no asterisk this time). \(f_i\) is the flux in direction \(x_i\). Using Gauss' theorem, we can write the boundary integral in Eq. (37) as a volume integral and obtain $$\begin{aligned} {\partial _t}\int _\varSigma v(\varvec{x},t)\,d\varvec{x}+ \int _{\varSigma }{{\mathrm{div}}}{\varvec{f}(v(\varvec{x},t))}\,d\varvec{x}= 0. \end{aligned}$$ Taking the time-derivative inside the integral, we have then that $$\begin{aligned} \int _\varSigma \left( {\partial _t}v(\varvec{x},t)\,d\varvec{x}+ {{\mathrm{div}}}{\varvec{f}(v(\varvec{x},t))}\,d\varvec{x}\right) = 0 \quad \forall \varSigma . 
\end{aligned}$$ If the integrand is continuous we can conclude that $$\begin{aligned}&\displaystyle {\partial _t}v(\varvec{x},t) + {{\mathrm{div}}}{\varvec{f}(v(\varvec{x},t))} = 0, \hbox { or}\end{aligned}$$ $$\begin{aligned}&\displaystyle \partial _t v(\varvec{x},t) + \partial _xf(v(\varvec{x},t))+\partial _yg(v(\varvec{x},t))+\partial _zh(v(\varvec{x},t)) = 0. \end{aligned}$$ In hydrodynamics in 3D space, we typically deal with five densities of conserved quantities: mass density \(\rho \), momentum density \(\mu _j\) in direction j (\(j=1,2,3)\) and some energy density e. We are therefore led to consider a vector-valued function \(\varvec{v}=(\rho ,\mu _1,\mu _2,\mu _3,e)^*\) in the hydrodynamic case. For each component a conservation law applies, and there is, for the ith component \(v_i\), a flux function \(\varvec{f}_i(\varvec{v},t)=(f_{ij})_{j=1,2,3}\). We assemble these flux functions into a matrix-valued flux function \(\varvec{f}=(f_{ij})\), so that the conservation law in differential form reads $$\begin{aligned} \partial _tv_i(\varvec{x},t)+\sum _{j=1}^3\partial _{x_j}f_{ij}(\varvec{v}(\varvec{x},t)) = 0. \end{aligned}$$ For the validity of the differential form of the conservation laws the considerations above require that smoothness properties are fulfilled (interchange of differentiation and integration and, even more basically, differentiability of the functions). Otherwise, the differential form may not hold true or even be meaningful. This has profound implications both for the physics and for the numerical treatment of astrophysical flows.

Compressible flow: the Euler and Navier–Stokes equations

The basic equations

We recall from Sect. 2 that the equations properly describing purely hydrodynamic stellar convection are the Navier–Stokes equations augmented by equations describing the actions of microphysics, radiation transfer, and gravity. Viscosity acts on very small scales only and cannot be directly resolved in numerical calculations, as follows from the discussion in Sect. 2.2.1. 
Hence, the inviscid subset (plus a radiative heating term and gravity) plays a dominant role, i.e., the augmented Euler equations. We return to the non-trivial implications of this transition in Sect. 4.2.3. We recall Eqs. (1)–(10) from Sect. 2.1 and for the remainder of Sects. 4 and 5 we set \(q_\mathrm{nuc} = 0\) and \(\varvec{h}=0\). The latter is mathematically indistinguishable from radiative transfer in the diffusion approximation and \(q_\mathrm{nuc} \ne 0\) leads to source terms specific to convection in stellar cores or nuclear burning shells only. Using the symbols of Table 1, both Euler and Navier–Stokes equations can be written as follows: $$\begin{aligned}&\displaystyle \partial _t\rho +{{\mathrm{div}}}{}\rho \varvec{u}= 0, \end{aligned}$$ (42a) $$\begin{aligned}&\displaystyle \partial _t \varvec{\mu }+ {{\mathrm{div}}}{(\rho \varvec{u}\otimes \varvec{u}+ p\varvec{I})} {- {{\mathrm{div}}}\varvec{\pi } } = \rho \varvec{g},\end{aligned}$$ (42b) $$\begin{aligned}&\displaystyle \partial _t e + {{\mathrm{div}}}{(e+p)\varvec{u}} {-{{\mathrm{div}}}\varvec{u}^*\varvec{\pi } } + q_\mathrm{rad} = \rho \varvec{u}^*\cdot \varvec{g}{,} \end{aligned}$$ (42c) where the Euler case is distinguished by \(\varvec{\pi } = 0\) and we use the more stringent notation for the transpose introduced above. The basic fluxes for the Euler equations, i.e., for \(\varvec{\pi } = 0\) and also neglecting \(q_\mathrm{rad}\), are provided in Table 3 and have rather obvious physical interpretations. For the entire discussion provided in this and the following section we assume a prescribed acceleration g or \(g(x_1)\) due to gravity pointing in direction \(x_1\). We only consider scenarios for which a self-consistent determination of g is not necessary and thus \(g = \mathrm{const.}\) or \(g= -GM r^{-2}\) for problems with deep convection and spherical symmetry, i.e., for the non-rotating case. 
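The hyperbolic fluxes of Table 3, restricted to one spatial dimension, can be written down compactly. The sketch below assembles the flux vector of Eqs. (42a)–(42c) for the Euler case (\(\varvec{\pi } = 0\), gravity and \(q_\mathrm{rad}\) omitted); the ideal-gas closure with \(\gamma = 5/3\) and the state values are illustrative assumptions of ours, not specified in the text:

```python
import numpy as np

# Sketch of the 1D advective (hyperbolic) Euler fluxes: mass, momentum
# and energy flux for the conserved vector (rho, mu, e) with mu = rho*u.
# Ideal-gas closure with gamma = 5/3; all state values are illustrative.
GAMMA = 5.0 / 3.0

def euler_flux_1d(rho, u, e):
    """Flux of (rho, mu, e); p follows from the ideal-gas relation."""
    p = (GAMMA - 1.0) * (e - 0.5 * rho * u**2)   # pressure from total e
    return np.array([rho * u,            # mass flux
                     rho * u**2 + p,     # momentum flux plus pressure
                     (e + p) * u])       # energy flux incl. pressure work

f = euler_flux_1d(rho=1.0, u=0.1, e=2.5)
```

The three components have the obvious physical interpretations mentioned in the text: advection of mass, advection of momentum augmented by the pressure force, and advection of energy augmented by the pressure work.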
The Lagrangian or substantial derivative is often useful in interpreting hydrodynamic equations. Consider a physical quantity (density \(\rho \) for example). The Lagrangian derivative, denoted by \(D_t\), describes the rate of change of the quantity when moving with the fluid. In 1D, the vector \(\mathbf {r}=(u,1)\) is the tangent vector of the trajectory of a fluid element moving with velocity u in the (x, t)-plane, traversed in one unit of time. Hence, for some function \(\phi \), \(D_t\phi \) is the directional derivative \(D_t\phi =\partial _{\mathbf {r}}\phi =u\partial _x\phi +1\partial _t\phi \). In several dimensions, we similarly have $$\begin{aligned} D_t\phi =\partial _t\phi + \sum u_i\partial _{x_i}\phi . \end{aligned}$$

Table 3  Advective (hyperbolic) fluxes for the 3D Euler (and Navier–Stokes) equations

Radiative heating

We recall that the radiative heating term \({q}_\mathrm{rad}\) is derived from the radiative energy flux, \(\varvec{f}_\mathrm{rad}\) (integrated here over all frequencies). In order to determine this flux one has, in principle, to solve the equations of radiative transfer (for all frequencies, in practice for some sample of frequencies). Combining these with the hydrodynamic equations one arrives at the equations of radiation hydrodynamics (see Castor 2007; Mihalas and Mihalas 1984). In studying convection near the stellar surface (granulation) it is necessary to solve at least the stationary equation of radiative transfer (or some really good approximation to it). Due to the non-locality of radiation, the numerical treatment differs in fundamental ways from the numerics of the hydrodynamic equations. In any case, once the radiation field is determined (in general in several spectral bands), the radiative heating rate \(q_\mathrm{rad}\) can be determined from the radiative flux \(\varvec{f}_\mathrm{rad}\) via $$\begin{aligned} q_\mathrm{rad}={{\mathrm{div}}}{\varvec{f}_\mathrm{rad}}. 
\end{aligned}$$ Since in the following we do not treat stellar atmospheres, we need not deal with the numerics of the radiative transfer equation in itself. Rather, we make use of the fact that, for large optical depth, the solution of the transfer equation converges to the solution of the numerically simpler diffusion approximation, $$\begin{aligned} \varvec{f}_\mathrm{{rad}}=-K_\mathrm{{rad}}{{\mathrm{grad}}}{T}, \end{aligned}$$ introduced in Sect. 2.1. In contrast to the full radiative transfer equation, the gradient of T can be computed locally, in accordance with all the other terms in the hydrodynamic equations. The radiative (Rosseland) conductivity \(K_\mathrm{{rad}}\) contains a weighted harmonic mean of monochromatic opacities (mean over all frequencies) and can easily be interpolated from tables, in particular those due to the Opacity Project (cf. Seaton et al. 1994; Seaton 2005). Mathematically, the ensuing term \(q_\mathrm{{rad}}={{\mathrm{div}}}{\varvec{f}_\mathrm{{rad}}}=-{{\mathrm{div}}}(K_\mathrm{{rad}}{{\mathrm{grad}}}{T})\) reduces to \(-K_\mathrm{{rad}}\varDelta {T}\) in the case of constant \(K_\mathrm{{rad}}\).

In Eq. (8) we have written the tensor viscosity \(\varvec{\pi }\) in its most general form, i.e., for the case of a non-zero bulk viscosity \(\zeta \), also known as "second viscosity" or expansion viscosity, as it is derived in Landau and Lifshitz (1963) or Batchelor (2000). For a mono-atomic gas with no internal degrees of freedom, \(\zeta =0\), while in most fluids whose constituents (atoms, molecules, etc.) have internal degrees of freedom, \(\zeta \approx \eta \), see Chap. 3.4 in Batchelor (2000). A detailed discussion of a formalism to compute \(\zeta \), if the atoms or molecules of the fluid are subject to a slow relaxation back into thermal equilibrium after having been driven away from it by compression or expansion, is given in Landau and Lifshitz (1963), and in this case \(\zeta \) may become large. 
In astrophysics, it has always been assumed that \(\zeta \approx 0\) or at most \(\zeta \approx \eta \) and can thus be neglected for reasons described below, until Cowley (1990) identified the exchange between the translational energy of electrons and their internal binding energy in hydrogen atoms as a possible candidate for the aforementioned slow relaxation process. This could make \(\zeta \) large enough to be of relevance for the solar photosphere (see also Sect. 2.1 in Kupka 2009b). Apparently, this result has not been verified by other authors and no applications of it seem to have been published. For the remainder of this text we hence assume \(\zeta \) to be small and neglect it in the following. The components of the viscosity tensor \(\varvec{\pi }\) in the momentum equation (Eq. 42b) and the energy equation (Eq. 42c) in this case reduce to $$\begin{aligned} \pi _{ik}= \eta \left( \partial _{x_{k}}u_i+\partial _{x_{i}}u_k-\frac{2}{3}\,\delta _{ik}\,{{\mathrm{div}}}{\varvec{u}}\right) . \end{aligned}$$ As we have discussed in Sect. 2.2.1, the viscosity coefficient and, hence, the corresponding length scale on which viscosity effectively smoothes the solution is orders of magnitude smaller than the affordable numerical grid spacing. It is thus common to set \(\pi _{ik}\) to zero and thereby neglect "molecular viscosity" entirely. One instead solves the Euler equations of hydrodynamics rather than the Navier–Stokes equations (NSE). However, this step has an unexpected price tag. The Navier–Stokes equations are the fundamental equations of hydrodynamics for a very good reason: not only does the term \(\pi _{ik}\) play an essential role for the basic properties of turbulence (cf. Tsinober 2009); much more fundamentally, the Euler equations have a much larger function space for their solutions. 
More specifically, their weak solutions in the case of discontinuities such as shocks are in general not unique, as holds for nonlinear hyperbolic equations already in the case of a scalar equation, see Chap. 14.1.4 in Quarteroni and Valli (1994), and even more so for the coupled set of the five components of (42a)–(42c). In contrast, the NSE "automatically" fulfill the first and second law of thermodynamics, which is not the case for (42a)–(42c) when \(\pi _{ik}=0\). Indeed, already for the basic Riemann problem there are cases with an infinite number of solutions (Quarteroni and Valli 1994). To pick a unique solution which is also compatible with thermodynamics one has to enforce an additional constraint: the (weak) solution has to be the one which is obtained from the full NSE (i.e., \(\eta > 0\) and \(\pi _{ik} \ne 0\)) in the limit \(\eta \rightarrow 0\) (if a strong solution to (42a)–(42c) exists, it is smooth by definition and the non-uniqueness problem is not encountered). This solution is hence also called the entropy solution. It is compatible with thermodynamics, and its existence and unique dependence on the initial conditions (for physically relevant hyperbolic conservation laws) has been shown in the mathematical literature (for references see Quarteroni and Valli 1994). As a result, physically and mathematically useful solution methods for (42a)–(42c) have convergence to the entropy solution as a built-in property. We return to these issues in Sect. 5.2.4 in the context of Riemann solvers. Thus, the solution methods are all subject to some kind of "viscosity" (schemes which do not include one rapidly crash in simulations that develop shocks). Some schemes are based on "artificial viscosity" which is constructed so as to be formally similar to the physical terms such as \(\pi _{ik}\). However, their coefficients are orders of magnitude larger than the physical ones, which in turn is the justification to neglect \(\pi _{ik}\) itself. 
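The components \(\pi _{ik}= \eta \left( \partial _{x_k}u_i+\partial _{x_i}u_k-\frac{2}{3}\,\delta _{ik}\,{{\mathrm{div}}}{\varvec{u}}\right) \) of the viscous stress tensor are symmetric in i and k and, due to the \(\delta _{ik}\) term, traceless. A minimal sketch making this explicit (the velocity-gradient matrix and the value of \(\eta \) are arbitrary illustrative numbers, not taken from the text):

```python
import numpy as np

# Sketch of the (traceless) viscous stress tensor
#   pi_ik = eta * (d_k u_i + d_i u_k - (2/3) delta_ik div u).
# G[i, k] stands for du_i/dx_k; G and eta are arbitrary illustration
# values.
eta = 1.5e-5
G = np.array([[0.3, -0.1,  0.2],
              [0.4,  0.5, -0.2],
              [0.1,  0.0, -0.6]])

div_u = np.trace(G)                    # div u = sum_i du_i/dx_i
pi = eta * (G + G.T - (2.0 / 3.0) * div_u * np.eye(3))
# pi is symmetric and traceless by construction
```

The vanishing trace reflects that, with the bulk viscosity \(\zeta \) neglected, uniform compression or expansion produces no viscous stress.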
Other numerical methods have some sort of diffusivity built in by basic design, not necessarily recognizable at first glance. In one way or another, viscosity is in practice always included in the scheme, although one might basically strive to keep it as small as possible.

Equations for low-Mach-number flows

All of the modified equations described in the following (with the exception of those discussed in Sect. 4.5) set out from a basic, horizontally averaged configuration where physical quantities are a function of depth (x or \(x_1\)) only. At least in the applications to stellar convection, this background state is usually assumed to be hydrostatic. Convection is then represented by perturbations of the background state, and approximate equations describing the dynamics of these perturbations are derived.

The Boussinesq approximation

The Boussinesq approximation can be considered as a simpler variant of the anelastic approximation discussed below (Sect. 4.3.2). In order to be valid it requires, however, more stringent conditions regarding the object or simulation domain to be investigated than apply for the anelastic approximation. These conditions are that the layer thickness (measured in pressure scale heights for practically all cases) and the relative horizontal variations of physical quantities are \({\ll }1\) (Spiegel and Veronis 1960). A consequence thereof is that the ensuing velocities have to be small compared to the speed of sound. The resulting mathematical simplicity has made this approximation useful for early investigations, in particular for work on stability. See, e.g., Spiegel (1971). However, contrary to what the Boussinesq approximation would require, stellar convection zones are, in general, not thin, and if they are, such as the hydrogen ionization zone of A-type stars, velocities and horizontal fluctuations can be quite large (Steffen et al. 2005; Kupka et al. 2009), even supersonic. 
Still, at least in investigations of semiconvection the Boussinesq approximation is useful even today (see Mirouh et al. 2012; Zaussinger and Spruit 2013), as its underlying assumptions are fulfilled in interesting cases there. For presenting the Boussinesq equations we set out from a plane-parallel basic configuration which is slightly perturbed as described above. We denote horizontal means by an overbar and deviations from them by primes. So we have, for the example of pressure, $$\begin{aligned} p(\varvec{x},t)=\overline{p}(x_1)+p^\prime (\varvec{x},t). \end{aligned}$$ We assume that the basic state is constant in time and at rest. Therefore, we have for the velocity \(\varvec{u}=\varvec{u}^\prime \), and we omit the prime in that case. Then, under the basic assumptions of the Boussinesq approximation, the continuity equation, Eq. (42a), can be shown to reduce to $$\begin{aligned} {{\mathrm{div}}}\varvec{u}= 0, \end{aligned}$$ which is formally the incompressibility condition: the velocity field is divergence free. Density variations need, under these basic assumptions, to be retained only where they couple to gravity. Subtracting the momentum equation for the basic, static configuration (i.e., the condition of hydrostatic equilibrium), what remains of the momentum equation reads $$\begin{aligned} \partial _{t}\varvec{\mu } +{{\mathrm{div}}}(\rho \varvec{u}\otimes \varvec{u}+ p^{\prime }\varvec{I})- {{\mathrm{div}}}\varvec{\pi } = \rho ^{\prime }\varvec{g}. \end{aligned}$$ To proceed we now write it as an equation for the velocity \(\varvec{u}\), use that \(\rho ^\prime /\rho \approx -T^\prime /T\), and omit the diffusive terms for ease of notation: $$\begin{aligned} \partial _t \varvec{u}+ {{\mathrm{div}}}{\left( \varvec{u}\otimes \varvec{u}+ \frac{1}{\rho }p^\prime \varvec{I}\right) } = -\frac{T^\prime }{T} \varvec{g}. 
\end{aligned}$$ The energy equation may in this case most clearly be written as a temperature equation, $$\begin{aligned} \partial _t T + \sum _{i=1}^3 u_i\partial _{x_i}T -\chi _T\varDelta T = 0, \end{aligned}$$ where \(\chi _T\) is the (radiative) temperature diffusion coefficient. This last equation just expresses that the temperature field changes, in the comoving sense, according to a heat conduction equation. In working numerically with the Boussinesq approximation, there is an essential structural difference to the original Euler or Navier–Stokes equations. Similar to them, \(\varvec{u}\) and T (or \(T^\prime \)) obey the time-evolution Eqs. (50), (51). The continuity equation, Eq. (42a), originally a time evolution equation as well, appears now as a constraint, Eq. (48). When advancing the solution in time numerically, \(p^\prime \) (or p) must therefore be determined in a way which leads to a divergence-free velocity field at later times, or the divergence condition has to be satisfied in other ways. This has a bearing on the numerical treatment, to which we will turn later on.

The anelastic approximation

During the last few decades, the anelastic approximation has been the tool most frequently applied for modelling (deep) stellar convection. Like the Boussinesq approximation, it filters out sound waves. Contrary to the Boussinesq case, it allows the investigation of convection occurring in a truly layered background stratification. The background is described by functions \({\overline{\rho }}(x_1),\ldots \). We assume it to be in hydrostatic equilibrium and at rest, although provisions for slow changes in the background stratification can be made. There are two assumptions at the heart of the anelastic approximation. Firstly, the relative fluctuations of the thermodynamic quantities \(p^{\prime },\ldots \) around their mean state must be small, and so must be the Mach number of the ensuing flow. 
Secondly, the flow must not exhibit time scales shorter than the crossing time of the domain (at typical flow velocities). For the case of adiabatic flows in a layered atmosphere the anelastic equations have been derived in Batchelor (1953) based on physical arguments. Later on, these equations have been derived as the first set of equations beyond the trivial ones (hydrostatic equilibrium) in the sense of a perturbation approach in Ogura and Phillips (1962). To catch the gist of the approach, let us first consider the continuity equation, Eq. (42a), for modification. Since \(\partial _t{\overline{\rho }}=0\) (we assume the basic configuration to be static) we can write it in the form $$\begin{aligned} \partial _t\rho ^{\prime }+{{\mathrm{div}}}{{\overline{\rho }}\varvec{u}}+{{\mathrm{div}}}{\rho ^{\prime }\varvec{u}}=0. \end{aligned}$$ The relative fluctuations are assumed to be small, \(\rho ^{\prime }/\rho <\epsilon \ll 1\) etc., where such inequalities are always understood in magnitude. We similarly require the Mach number M to be small, \(M < \epsilon \). If the nontrivial spatial scales are some reasonable (not overly small) fraction of the vertical extent of the system, L, there results a characteristic time \(\tau \) for changes of \(\rho ^{\prime }\), namely \(\tau =L/M\). Under these assumptions, the most important remaining part of the continuity equation can easily be identified. Just for a moment, we consider density scaled by \({\overline{\rho }}\) and velocity scaled by the speed of sound. Then the three terms in (52) are, in turn, of order \(O(\epsilon /\tau )=O(\epsilon M/L), O(M/L)\), and \(O(\epsilon M/L)\). So we retain $$\begin{aligned} {{\mathrm{div}}}{{\overline{\rho }}\varvec{u}}=0 \end{aligned}$$ as the continuity equation to be used in the anelastic approximation. In a sense, this is the most fundamental change applied to the Navier–Stokes equations. 
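A velocity field satisfying the anelastic constraint is easily constructed from a mass-flux streamfunction: in 2D, setting \({\overline{\rho }}u_x=\partial _z\psi \) and \({\overline{\rho }}u_z=-\partial _x\psi \) makes \({{\mathrm{div}}}({\overline{\rho }}\varvec{u})\) vanish identically, whatever stratification \({\overline{\rho }}\) is absorbed into \(\psi \). The sketch below (periodic grid and the choice of \(\psi \) are our own illustration, not from the text) checks this with centred differences:

```python
import numpy as np

# Sketch: the 2D mass-flux streamfunction ansatz
#   rho_bar*u_x = d psi/d z,   rho_bar*u_z = -d psi/d x
# satisfies div(rho_bar u) = 0 identically; checked here with centred
# differences on a periodic grid, psi = sin(x) sin(z) is illustrative.
n = 64
h = 2.0 * np.pi / n
x = np.arange(n) * h
X, Z = np.meshgrid(x, x, indexing='ij')   # X varies along axis 0
psi = np.sin(X) * np.sin(Z)

def ddiff(f, axis):
    """Centred first derivative on the periodic grid."""
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * h)

mx = ddiff(psi, axis=1)        # rho_bar * u_x  (d psi / d z)
mz = -ddiff(psi, axis=0)       # rho_bar * u_z  (-d psi / d x)
div_m = ddiff(mx, axis=0) + ddiff(mz, axis=1)
# the mixed discrete derivatives commute, so div_m vanishes to round-off
res = float(np.max(np.abs(div_m)))
```

The residual is at round-off level because the two centred difference operators commute, mirroring the equality of mixed partial derivatives in the continuum.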
Whereas the original Navier–Stokes equations are evolution equations for the basic variables, the continuity equation in the anelastic formulation acts as a constraint on the other equations, which still are evolution equations (see below). It is also this form of the continuity equation which prevents the derivation of the wave equation obeyed by the sound waves and, in fact, eliminates the sound waves. There are various forms of the anelastic equations. Below we will describe a little more closely some of them whose properties have been discussed and compared recently in the astrophysical literature. We start, however, with a version which has been used in early studies of solar granulation and which illustrates the use of these approximations quite well.

Nordlund's approach

Let us sketch a way of how to proceed with the anelastic equations by referring to an early and influential paper on solar granulation by Nordlund (1982). Here, a Poisson equation for the pressure is derived from the anelastic continuity Eq. (53) together with the momentum equations: $$\begin{aligned} \varDelta p = {{\mathrm{div}}}\left( \rho \varvec{g}-\rho (\varvec{u}\cdot \nabla )\varvec{u}\right) . \end{aligned}$$ This equation then essentially replaces the continuity equation, Eq. (53). In the time-evolution, the horizontal momenta are advanced directly. In order to fulfil the basic divergence condition, Eq. (53), the vertical component \({\bar{\rho }} u_x\) is obtained by integrating Eq. (53) from the bottom (\(x_\mathrm{bot}\)) up to any depth x, obtaining $$\begin{aligned} ({\bar{\rho }} u_x)|_{x_\mathrm{bot}}^x(\cdot ,y,z)=-\int _{x_\mathrm{bot}}^x(\partial _y({\bar{\rho }}u_y)+\partial _z({\bar{\rho }}u_z))(\xi ,y,z)\,d\xi , \end{aligned}$$ which allows evaluation of \(u_x\) at any point provided this quantity is prescribed at the lower boundary (for example set to 0, invoking impenetrable boundary conditions). 
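The vertical integration of Eq. (55) is straightforward to implement. The following sketch uses an arbitrary analytic horizontal mass flux as a test field (our own choice, not from Nordlund 1982), so the cumulative trapezoidal result can be compared with the exact integral:

```python
import numpy as np

# Sketch of Eq. (55): rho_bar*u_x follows from integrating the
# horizontal flux divergence from the bottom up, with rho_bar*u_x = 0
# there (impenetrable bottom).  The horizontal mass flux is an arbitrary
# analytic test field so the result can be compared exactly.
nx, ny = 400, 64
x = np.linspace(0.0, np.pi, nx)                 # depth coordinate
y = np.linspace(0.0, 2.0 * np.pi, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing='ij')

dmy_dy = np.cos(X) * np.cos(Y)                  # analytic y-derivative of
                                                # rho_bar*u_y = cos(x) sin(y)
integrand = -dmy_dy                             # right-hand side of Eq. (55)

# cumulative trapezoidal integration in depth, starting from mx = 0
steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)[:, None]
mx = np.concatenate([np.zeros((1, ny)), np.cumsum(steps, axis=0)], axis=0)

mx_exact = -np.sin(X) * np.cos(Y)               # exact depth integral
err = float(np.max(np.abs(mx - mx_exact)))
```

The error is set by the second-order accuracy of the trapezoidal rule; in the actual scheme the accuracy of this step has to be consistent with the horizontal derivatives used in Eq. (53).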
Incidentally, in this paper Fourier expansion is used in the horizontal directions for easy solution of the Poisson equation, and the basic procedure is actually carried out in the (horizontal) Fourier description. Since the pressure equation contains the term \((\varvec{u}\cdot \nabla )\varvec{u}\) some iteration between pressure and velocity field at any time step is required. Solar granulation is surely not the physical phenomenon of choice to be modelled via the anelastic approximations since the basic requirements are violated, for example, that the Mach number be small. Nevertheless, in the early days of granulation modelling (where the anelastic approximation was chosen for numerical viability with computers of that time) fundamental properties of granulation which have stood the test of time have been unravelled in that way.

Several types of anelastic approximations

The "anelastic approximation" is by no means a single, well defined set of equations derived from the Euler or Navier–Stokes equations. At the heart of the matter is the fact that the anelastic approximation cannot be obtained from the Navier–Stokes equations by an expansion in which only the low-order terms are retained. For reasons to be discussed below there exists a considerable variety of flavours. More often, rather than being enforced in the way outlined above, the divergence condition is derived by specific forms of expansions involving a small parameter connected with the departure from the basic, initial structure. All theories where the argumentation ultimately leads to the divergence condition (53), $$\begin{aligned} {{\mathrm{div}}}{{\bar{\rho }}\varvec{u}}= 0, \end{aligned}$$ are considered to belong to the class of anelastic approximations. Already here, however, variants of the anelastic approximation diverge from one another when it comes to the precise meaning of \({\bar{\rho }}\).
This quantity may either refer to the density of the initial, closely adiabatic layering, or to the horizontal average of the density as it evolves during the simulation. Substitution of the continuity equation proper by Eq. (53) precludes evolving the density fluctuations \(\rho ^\prime \) via the continuity equation, which would naturally determine that quantity. Rather, one has to resort to the energy equation (basically) to evolve, for example, temperature, and to use that quantity to derive a term for the momentum equation which contains the effect of \(\rho ^\prime \) on momentum. At the beginning of the anelastic approximation there is the "original anelastic approximation" due to Ogura and Phillips (1962). It supplemented the condition \(\nabla \cdot ({\bar{\rho }} \varvec{u}) = 0\), which had been derived by Batchelor (1953), with a momentum and an energy equation. In their derivation, the basic assumptions are that the variation of potential temperature \(\varTheta \) across the layer considered is small (so that, in practice, the layering is close to adiabatic) and that the time-scales to be retained are \({>}1/N, N\) denoting the Brunt–Väisälä frequency. Gough (1969) has allowed for a possibly superadiabatic and time-varying background and, in particular, included radiative transfer as described by the diffusion approximation. Lipps and Hemler (1982) allow for a slowly varying potential temperature of the background configuration as a function of depth by performing a rigorous scale analysis, as opposed to the more physical reasoning present in many other papers. A further set of anelastic equations is due to Bannon (1996). This paper gives a heuristic derivation of the equations after having stated the physical assumptions and also devotes attention to overshooting motions and waves. An anelastic approximation often used in the astrophysical context is due to Gilman and Glatzmaier (1981).
In that paper the equations are derived in polar coordinates with a spherical shell in mind. Later on, that approach has been extended to also include magnetic fields (Glatzmaier 1984). In this model a diffusive energy flux based on the entropy gradient is incorporated. It is supposed to represent the effects of subgrid scale motions. The picture behind it is that the starting model is already nearly isentropic. The more common use of a subgrid flux based on the temperature gradient (i.e., more or less invoking the diffusion approximation for radiative transfer) might counteract this basic assumption. Rather, the small scale motions are assumed to homogenize the material in the sense of entropy. More recently, for purposes of modelling overshoot beyond a convection zone, the assumptions on the diffusive term have been changed, this time allowing for a strongly non-isentropic basic state. Now, a diffusion term again based on the temperature gradient is invoked in order to achieve an outwardly directed diffusive flux even in subadiabatic layers (Rogers and Glatzmaier 2005). "The different notation and the different thermodynamics used in the various anelastic treatments leads to some confusion", as stated in Brown et al. (2012). Other differences add to the diversity. The basic layering may be assumed close to adiabatic or not. The equation of state, for example, may be applied in its exact or the linearized version. An ideal equation of state may be assumed and basically enter the derivations. When in spherical geometry, some metric terms in the differential operators may be disregarded. Since much of the material comes from meteorology, atmospheric moisture may be implemented in a specific way. Furthermore, different assumptions or approximations abound in the derivations.
For the benefit of the reader who wishes to get more closely acquainted with the essence of the anelastic approximation, we direct attention to two papers which approach the topic in stylistically quite different ways and which both feature good readability. Verhoeven et al. (2015) consider a simple physical system. They assume a layer with fixed, impenetrable boundaries at the top and the bottom with a temperature difference between them being held constant. The gas obeys simple microphysics and other idealizing assumptions, for example a constant heat conductivity. The setting is such that for the adiabatic part hydrostatic equilibrium applies. One essential control parameter, \(\epsilon \), is a normalized measure of the superadiabatic part of the temperature jump between top and bottom. Decomposing the density into the adiabatic (static) and superadiabatic parts (in the form \({\rho }_\mathrm{ad}\) and \(\epsilon \rho _\mathrm{sad}\)) the (exact) continuity equation can then be written in the form $$\begin{aligned} \epsilon \partial _t\rho _\mathrm{sad}+{{\mathrm{div}}}(({\rho }_\mathrm{ad}+\epsilon \rho _\mathrm{sad})\varvec{u}) = 0, \end{aligned}$$ which shows that with \(\epsilon \rightarrow 0\) the time-derivative of the fluctuating density loses importance. This ultimately filters out sound waves and leads, in the limit, to the usual anelastic constraint, Eq. (53), on the velocity field, derived here in a somewhat different way than we have done earlier on. The other anelastic equations can be obtained similarly, the arguments on what to drop being more involved, however. The equations for \(\rho _\mathrm{sad}, \varvec{u}\) and \(T_\mathrm{sad}\) (using an analogous decomposition of temperature) turn out to be independent of \(\epsilon \) in their setting. \(\epsilon \) appears only as a scaling factor for the ultimate density, \(\rho _\mathrm{ad}+\epsilon \rho _\mathrm{sad}\), and temperature, \(T_\mathrm{ad}+\epsilon T_\mathrm{sad}\).
Incidentally, by letting another control parameter, D, which is a normalized measure of the depth of the system, tend to zero, the Boussinesq approximation can be derived. The second paper we want to address here is due to Lantz and Fan (1999). It provides, firstly, a short but thorough discussion of the basics of mixing length theory and then proceeds to derive a variant of the anelastic approximation, pointing out conceptual similarities to many aspects of mixing length theory. It furthermore works out the anelastic approximation via a scaled expansion. Quite illuminating is a detailed discussion of questions on proper scalings which ultimately also pertain to the range of validity of the approximation. There are also various algorithmic items being discussed. Among them is the question of how the usual Poisson equation for the pressure (perturbation) can be obviated, and how basically just one thermodynamic variable needs to be advanced in time, provided that one is ready to accept a subgrid scale model plus the physical assumption of near-adiabaticity of the atmosphere, under which a term involving the fluctuating pressure can be dropped.

The anelastic approximation: tests

The discussion above obviously raises the question of the reliability and efficiency of the anelastic approximation. Such investigations can be conducted along different lines. Assessments of validity (other than scrutiny of the assumptions underlying a specific variant of the approximations) are undertaken either by numerical integration of benchmark cases, using the full (nonlinear) anelastic equations, or by linearizing them about the static state, checking eigenmodes and eigenfrequencies predicted in that way against those derived from the full Navier–Stokes equations. Specifically, one may set out from a convectively unstable layer and let it evolve either under the full equations or the anelastic approximation.
Such an investigation, already alluded to above, has been carried out by Verhoeven et al. (2015). As mentioned already, they assume a box with a gas held at fixed temperatures at top and bottom and adopt simple microphysical coefficients (ratio of specific heats \(\gamma =5/3\), constant radiative conductivity, etc.). One essential control parameter in that work is \(\epsilon =\varDelta T/T_{\small {\mathrm{bottom}}} \), i.e., the ratio of \(\varDelta T\), the superadiabatic part of the temperature difference between bottom and top, to the temperature at the bottom. In addition, the Rayleigh number is varied (ranging from \(10^4\) to \(10^7\) in these tests). Furthermore, there is the Prandtl number (assumed to be 0.7) and a parameter characterizing depth. Otherwise identical models are calculated invoking the anelastic approximation as discussed around Eq. (56) above on the one hand and, on the other hand, solving the full Navier–Stokes equations. In a nutshell, it is found that for \(\epsilon =0.3\) global diagnostic quantities (such as heat flux, velocity or kinetic energy, all averaged over the domain) deviate by about \(30\%\) from their anelastic counterparts. For smaller values of \(\epsilon \) they converge approximately linearly to their anelastic values. In more superadiabatic situations (\(\epsilon >0.3\)) this approximately linear scaling breaks down. Somewhat against expectation, larger density contrasts reduce the deviations between results based on the full equations and the anelastic approximation, respectively, under otherwise similar parameters. In a related discussion, Brown et al. (2012) focus on somewhat different aspects, namely the question of how well the anelastic equations perform in regimes away from the ones under which they have frequently been derived, namely under the assumption of a nearly isentropic stratification.
In particular, subadiabatic stratifications are the focus of interest there and attention is directed towards the modelling of gravity waves. Helpfully, the three different anelastic prescriptions considered (plus the original hydrodynamic equations) are cast into a consistent form (essentially their Table 1). Specifically, in one brand of anelastic equations dealt with (ANS for short) the momentum equation is the same as in the original fluid dynamic equations. The second set is the Lantz–Braginsky–Roberts (LBR) formulation where, in comparison to the full equations, a term including the entropy gradient of the reference atmosphere is being dropped (which amounts to the explicit assumption of an isentropic basic layering). The third variant is the Rogers–Glatzmaier (RG) formulation (Rogers and Glatzmaier 2005). In Brown et al. (2012), the approaches mentioned above (linearization and working with the nonlinear equations) are both considered. Regarding the linearized version, it turns out that the ANS variant (and probably also the RG variant, which has been investigated in less detail) yields unsatisfactory eigenfunctions or eigenfrequencies for gravity waves in a physically simple case (isothermal atmosphere). The LBR variant fares better and is recommended on that account alone. The authors ascribe the different behaviour to issues of energy conservation: for the LBR equations, a conservation principle for energy (kinetic \(+\) potential) can be derived, whereas for the ANS equations a similar principle applies only to a sort of pseudo-energy (where in the integration an additional factor, depending on the layering of the basic structure, appears). For the RG equations, proper energy conservation is guaranteed only for infinitesimally small amplitudes (setting aside rather specialized other cases).
The basic finding concerning the necessity of energy-conserving schemes is corroborated by an investigation of a rather different basic scenario (a non-hydrostatic model of the moist terrestrial atmosphere), which similarly stresses its importance for the veracity of the results (Bryan and Fritsch 2002). Mass conservation alone is not sufficient. As a result (also of their nonlinear simulations) the authors recommend altering one's anelastic system so as to enforce energy conservation if necessary, namely by modifying the momentum equation, even if that should mean violating momentum conservation.

The anelastic approximation: considerations

A number of points raised above make it clear that the anelastic method's applicability and the proper choice of a variant are in general matters not decided easily. On the positive side, it seems that such methods may work properly also in regimes where one would not expect this from the outset (see for example the remarks on the LBR model above). Applicability of the anelastic equations in the case of rapidly rotating systems has been questioned by Calkins et al. (2015). Their analysis pertains to the case of low Prandtl- and high Taylor number configurations (the Taylor number Ta is the square of the ratio between Coriolis and viscous forces). The analysis performed in that paper shows that, for sufficiently low or high values of those numbers, respectively, linear motions (which are the ones the paper deals with) are such that the Mach number is small, yet, in basic contrast to the assumptions of the anelastic approximations, the derivative \(\partial _t\rho ^{\prime }\) is crucial in determining the ensuing motions. The large \(\partial _t\rho ^{\prime }\)-term comes about through a concomitant change in the basic structure of the flow with increasing Taylor number. For small rotation rates, the force balance is essentially between pressure, viscosity and buoyancy forces.
With larger rotation rates, this changes to a balance between pressure, Coriolis and inertial forces horizontally. For still higher rotation rates, a geostrophic force balance (Coriolis vs. pressure force in the horizontal) applies. That naturally casts doubt on nonlinear results concerning such systems, an issue taken up by Verhoeven and Glatzmaier (2017). They confirm that close to the stability limit of convection the anelastic approximation is inaccurate. (The pseudo-incompressible approximation, described in Sect. 4.4, performs better there, but does a bad job in other physical regimes.) However, for fully developed turbulent convection their simulations show good agreement between the anelastic approximation and results from the full Navier–Stokes equations. An investigation by Wood and Bushby (2016) addresses the onset of convection in the case of rapid rotation, low viscosity and low Mach number. In that case convection is known to often be oscillatory. When comparing linearized results of the Euler equations for the system at hand with those based on the Boussinesq approximation or a number of anelastic approximations, it turns out that, with one exception, all these are unsatisfactory in that they yield valid results only for unexpectedly small ranges of parameters (such as the height of the domain), if at all. The limitations are more severe than those upon which the approximations are built anyway from the outset. So, for an ideal gas, the ratio of the domain height to the usual scale heights (pressure, \(\ldots \)) must be smaller than the Prandtl number for validity of the results. That renders most schemes useless for that purpose. Only a specific variant of a soundproof system (these are discussed in the following subsubsection) fares better. For clarity we should note that the analysis pertains to oscillatory convection only, not to direct convection.
Anyway, these results corroborate the view that validity of the anelastic approximation cannot be taken for granted when moving into new physical regimes. Klein et al. (2010) point to a problem to which previous studies have paid scant attention. (For example, Ogura and Phillips (1962) check in their version of the anelastic equations that the time-scale of the Brunt–Väisälä frequency, which separates acoustic and gravity waves, is respected.) The general point is that, depending on the basic structure of the layer, the time-scales for sound waves, internal gravity waves and advection may happen to fulfill \(t_\mathrm{snd} \ll t_\mathrm{grav} \ll t_\mathrm{adv}\). Consequently, the question arises whether gravity waves are represented correctly by any specific set of anelastic equations which do not explicitly address the fact that three rather different time-scales may be present. By analyzing the linear properties of the anelastic models versus those of the full Navier–Stokes models the paper provides ranges (in terms of degree of layering) within which the anelastic models perform faithfully in this respect. At the same time, the range of validity (in terms of strength of layering) of the early Ogura–Phillips model (Ogura and Phillips 1962) is extended considerably beyond the original estimate. These results allow the applicability of the anelastic equations for a specific problem to be assessed more reliably. A different issue is that many of the simulations referring to convection in rotating shells (having the Sun or solar-like stars in mind) actually make use of a Prandtl number of O(1). That may, on the one hand, be enforced by numerical considerations. On the other hand, such a choice may be interpreted as referring to a "turbulent Prandtl number", i.e., in the sense of subgrid modelling, despite the criticism such a concept may draw.
In applications, the turbulent Prandtl number is often chosen from a practical point of view (for example, as the smallest value that keeps the numerical method stable) rather than on the basis of theoretical or experimental considerations, for sheer want of these in the astrophysical regime. That point naturally applies to any simulation of the sort we have in mind here, not only those based on the anelastic equations. Simulations using a relatively low Prandtl number (down to \({\mathrm{Pr}}=0.125\)) can, for example, be found in Brun and Toomre (2002).

The pseudo incompressible and the low Mach number model

Durran (1989) has further developed the anelastic approximation into what is called the Pseudo Incompressible approximation (PI for short). The basic assumptions are that the Mach number is small and that, this time, the normalized horizontal pressure fluctuations are small as well. This rests on the observation that, after all, the pressure perturbations are responsible for sound waves. Such an approach has proved useful earlier on in problems of combustion. It is required that the (Lagrangian) time-scale of the disturbances be large in comparison with the time-scale of sound propagation. Furthermore, an idealized equation of state is assumed. Both temperature and density fluctuations about the horizontal mean are, however, not required to be small. In the pseudo incompressible approximation the anelastic velocity constraint, \({{\mathrm{div}}}{{\overline{\rho }}\varvec{u}}=0\), Eq. (53), is now replaced by $$\begin{aligned} {{\mathrm{div}}}{{\overline{\rho }}\,{\overline{\varTheta }}\,\varvec{u}}=RHS, \end{aligned}$$ where \(\varTheta \) denotes the potential temperature and the right hand side RHS involves quantities depending on the horizontally averaged state. The pseudo incompressible approximation has been extended to also be applicable to a time-varying background by Almgren (2000).
In deriving what is termed the Low Mach Number model, Almgren et al. (2006) start from essentially the same basic assumptions on which the pseudo incompressible approximation rests. They allow, however, for a nontrivial equation of state, whereas an idealized one had been assumed in the derivation and algebra of Durran (1989). In the Low Mach Number model, the basic velocity constraint now reads $$\begin{aligned} {{\mathrm{div}}}{{\beta _0}({x_1})\varvec{u}}=RHS, \end{aligned}$$ where the function \(\beta _0\) depends only on depth \(x_1\) and involves the horizontally averaged structure and the equation of state. The non-trivial right-hand side takes effects of compressibility into account (heating, change of the horizontally averaged structure). In the numerical realization of the system as sketched in Almgren et al. (2006) an elliptic equation is derived for the pressure, similar to the typical procedure with the anelastic approximation. When marching forward in time, basic hydrodynamic variables are first evaluated for the new time level. Then, at this new time level, the pressure equation is solved and used to change the provisional velocity field to one obeying the velocity constraint Eq. (58). In such a way, the method is implemented in the MAESTRO code (Almgren et al. 2006; Nonaka et al. 2010). Vasil et al. (2013) developed what they call the generalized pseudo-incompressible model (GPI). Unlike PI, GPI enjoys energy conservation (not just conservation of a pseudo-energy) and applies also to a more general equation of state than PI does.

Tests and considerations

Some tests of the Low Mach Number approach are provided in Almgren et al. (2006). A rising bubble, originally in pressure equilibrium horizontally (but with appropriate perturbations of density and temperature), is considered in the anelastic approximation, the Low Mach Number model and the full hydrodynamic equations.
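The correction of a provisional velocity field to one obeying a divergence constraint of the type of Eq. (58) can be illustrated by a small projection sketch. This is only a 1D, periodic toy with RHS = 0 and a dense least-squares solve; actual codes such as MAESTRO use considerably more sophisticated discretizations and elliptic solvers, and all names here are illustrative:

```python
import numpy as np

# 1D periodic toy (RHS = 0, illustrative names): correct a provisional
# velocity u* so that the constraint div(beta0 u) = 0 holds, by solving
# div(beta0 grad phi) = div(beta0 u*) and setting u = u* - grad phi.
def ddx_matrix(n, dx):
    """Central-difference d/dx, periodic: (f[i+1] - f[i-1]) / (2 dx)."""
    return (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * dx)

def project(u_star, beta0, dx):
    n = u_star.size
    D = ddx_matrix(n, dx)
    A = D @ np.diag(beta0) @ D      # variable-coefficient "Laplacian"; singular
    rhs = D @ (beta0 * u_star)
    phi = np.linalg.lstsq(A, rhs, rcond=None)[0]  # least squares copes with the null space
    return u_star - D @ phi

n = 33                              # odd, so the only discrete null mode is the constant
x = np.arange(n) / n
dx = 1.0 / n
beta0 = np.exp(-x)                  # density-like, depth-dependent coefficient
u_star = np.sin(2 * np.pi * x) + 0.3
u = project(u_star, beta0, dx)
residual = np.max(np.abs(ddx_matrix(n, dx) @ (beta0 * u)))
```

After the projection the discrete constraint holds to round-off, while the provisional field has been changed substantially.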
In a sense, working with a Low Mach Number model may be easier than working with the full equations for reasons other than the mere low Mach number of the situation: discretizations of the full equations excite pressure perturbations which may spoil the solution unless countermeasures are taken. Naturally, such difficulties do not arise when working with the approximate equations. The importance of conserving energy rather than just some pseudo-energy is emphasized again in Vasil et al. (2013). This is exemplified by the spatial structure of eigenfunctions referring to gravity waves in a layered medium, where the Low Mach Number approach yields unsatisfactory results. In a similar vein, the energy flux (as a function of height) in an isothermal layer is constant in the Navier–Stokes and PI/GPI approaches, whereas it diverges near the top of the layer for the Low Mach Number method. Yet, all simplifications of the Navier–Stokes equations seem to have difficulties in reproducing the correct dispersion relations for gravity waves in one part of the parameter space or the other. Assessing the veracity of the various equations (anelastic, Low Mach, PI) turns out to be quite a delicate matter according to work by Lecoanet et al. (2014). Here, in simulations basically investigating the kind of diffusion one should preferably include for subgrid modelling, it is the anelastic approximation which surprisingly performs better than the PI approach for a specific case. For that odd behaviour an explanation, based on specific aspects of the PI equation of state, is offered. Anyway, that teaches one how easily, in that area, rather subtle causes may lead to unexpected and possibly wrong results.

The quasi-hydrostatic approximation

In meteorology, the quasi-hydrostatic approximation is frequently used.
It addresses phenomena of long horizontal wavelengths as compared to vertical wavelengths, i.e., the vertical extent of the atmosphere, so that for the grid-spacings one has \(h_\mathrm{vert}\ll h_\mathrm{horiz}\). It then makes sense to suppress the vertical sound waves and thus to eliminate their stringent effect on the numerical timestep. Horizontally, some acoustic waves are admitted (Lamb waves). In the quasi-hydrostatic approximation this is achieved by assuming balance of pressure and gravity forces in the depth-direction, i.e., \(\partial _{x_1}p=-\rho g\). This brings about useful relations between \(D_t\rho \) and \(D_t p\) (and hence \(\partial _t \rho \) and \(\partial _t p\)). Making, in addition, use of the equation of state one ultimately arrives at one evolution equation for a thermodynamic quantity and evolution equations for the horizontal velocity components. The vertical (slow) velocity component is obtained from a diagnostic equation at the new time level. For a closer description of this approximation consult, e.g., Kasahara and Washington (1967) and Arakawa and Konor (2009). For astrophysical applications (e.g., stellar radiative zones) this method has recently been extended even to the MHD case and a code has been developed. See Braithwaite and Cavecchi (2012) and Boldt et al. (2016). We explicitly point out that we have included a short description of that approach just to grant easy access to the most basic issues. It is clear that the quasi-hydrostatic approximation is not suited for convection simulations in the astrophysical context. Already the condition \(h_\mathrm{vert}\ll h_\mathrm{horiz}\) for obtaining a reasonably sized time-step is in conflict with typical convection simulations, where horizontal length scales simply are, as a rule, smaller than or comparable to vertical ones. There are also theoretical issues which render the method valid only for waves with large horizontal wavelengths.
About the precise limitations there is an ongoing debate even in meteorology.

The reduced speed of sound technique

The basic point here is to change the continuity equation, Eq. (42a), to $$\begin{aligned} \partial _t\rho +\frac{1}{\xi ^2}{{\mathrm{div}}}{\rho \varvec{u}} = 0 \end{aligned}$$ by introducing a parameter \(\xi \ge 1\). That reduces the speed of sound by the factor \(\xi \) (the effective sound speed becomes \(c/\xi \)) and alleviates time-step restrictions imposed by the sound speed. Hotta et al. (2012) provide tests of the method for a zone unstable against convection. As described there, some approximations are also made in the momentum and energy equations. Unlike for the methods discussed up to now, an algorithmic advantage is that there is no need to solve an elliptic equation, at least unless the size of the coefficients in the viscous or dissipative terms dictates one, either from the outset or when applying the larger time-step permitted by some choice of \(\xi >1\). In practice it may be necessary to apply a depth-dependent value for the parameter, \(\xi =\xi (x_1)\), in order to obtain acceptable time-steps. In that case the modified continuity equation, Eq. (59), is no longer in conservation form. At least in some tests given in the paper loss of mass conservation is not deemed bothersome. The authors prefer tolerating it over using a conservative form of the modified continuity equation, viz. \(\partial _t\rho +{{\mathrm{div}}}{\frac{1}{\xi ^2}\rho \varvec{u}} = 0\). The argument is that the true continuity equation, Eq. (42a), has the same stationary solutions as the modified form given in Eq. (59). That would not hold true for the conservative modification. For this method the energy (internal \(+\) kinetic) does not strictly obey a conservation law. A variant of the method which fares better in terms of energy conservation (it conserves energy in the linear approximation around an adiabatic base state) is described by Hotta et al. (2015).
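A minimal sketch of the modified continuity equation, Eq. (59), illustrates the mass-conservation issue just discussed (1D, periodic, explicit Euler; for \(\xi >1\) the factor \(1/\xi ^2\) slows the density response, which is what reduces the effective sound speed to \(c/\xi \); all numbers are illustrative):

```python
import numpy as np

def rsst_step(rho, u, xi, dx, dt):
    """One explicit Euler step of Eq. (59), d(rho)/dt = -(1/xi^2) div(rho u),
    in 1D with periodic boundaries and central differences (illustrative only;
    no momentum or energy update)."""
    flux = rho * u
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    return rho - dt * div / xi**2

n = 64
x = np.arange(n) / n
dx, dt = 1.0 / n, 1.0e-3
rho = 1.0 + 0.01 * np.sin(2 * np.pi * x)
u = 0.1 * np.cos(2 * np.pi * x)

mass0 = rho.sum() * dx
drift_const = abs(rsst_step(rho, u, 2.0, dx, dt).sum() * dx - mass0)
drift_vary = abs(rsst_step(rho, u, 1.0 + x, dx, dt).sum() * dx - mass0)
# constant xi: mass conserved to round-off; depth-dependent xi(x): it is not
```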
The method has also been applied for investigations of the MHD case for rotating (quasi-solar) convection.

The "stratified" method

The "stratified" method (Chan et al. 1994; Chan 1994; Cai 2016) evolves the problem via a time-splitting approach. Basically, linear waves are integrated in time via an implicit method, the rest explicitly. If (as is the purpose of the method) the time-step used is substantially larger than explicit time-marching would allow, the implicit time integration represents the linear waves (acoustic, possibly gravity) not accurately, but stably, thus avoiding the numerical instabilities which make explicit time-marching unfeasible. In addition to that basic splitting concept, two approximations are made. The horizontal variation of density is ignored in the momentum equation (outside of the linear advection terms) and only linear terms in the horizontal variation of the thermodynamic variables are retained in the energy equation. As a consequence, more of the original terms are retained than is the case for the anelastic approximation. In particular, arguments are given for the gain in accuracy obtained by retaining the \(\partial _t\rho \)-term in the continuity equation. In the papers just cited the method is implemented for the spherical case (using spherical harmonics in the lateral directions and finite differences vertically) and in Cartesian coordinates (using expansions via trigonometric functions horizontally and Chebyshev polynomials vertically). The implicit treatment of the terms for linear waves requires the solution of a block-tridiagonal system of linear equations. Numerical difficulties plus additional computational overhead in the spectral methods seem to have motivated the neglect of products with three factors in the variations mentioned above. It is argued in Cai (2016) that in comparison to the reduced speed of sound technique several additional terms are retained in the "stratified" approach.
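The core idea of treating the linear wave terms implicitly can be illustrated on the simplest possible example, 1D linearized acoustics advanced with backward Euler. This is not the scheme of the papers cited (which use a more accurate splitting and block-tridiagonal solves in spherical or Cartesian geometry), just a sketch of why an implicit treatment remains stable at time-steps far beyond the acoustic CFL limit:

```python
import numpy as np

def implicit_acoustic_step(rho1, u, rho0, c, dx, dt):
    """Backward-Euler step for 1D linearized acoustics (periodic grid),
         d(rho1)/dt = -rho0 du/dx ,   du/dt = -(c^2/rho0) d(rho1)/dx .
    Eliminating rho1^{n+1} gives a Helmholtz problem for u^{n+1} (dense here
    for clarity; with local difference stencils it is (block-)tridiagonal)."""
    n = u.size
    D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * dx)
    A = np.eye(n) - (c * dt)**2 * (D @ D)
    u_new = np.linalg.solve(A, u - (c**2 * dt / rho0) * (D @ rho1))
    rho1_new = rho1 - dt * rho0 * (D @ u_new)
    return rho1_new, u_new

n = 64
x = np.arange(n) / n
dx = 1.0 / n
rho0, c = 1.0, 100.0                   # strongly subsonic setting: c large
rho1 = 1e-3 * np.sin(2 * np.pi * x)    # small acoustic perturbation
u = np.zeros(n)

dt = 50 * dx / c                       # 50x beyond the explicit acoustic CFL limit
def energy(r, v):
    return np.sum(rho0 * v**2 + (c**2 / rho0) * r**2)

e0 = energy(rho1, u)
for _ in range(20):
    rho1, u = implicit_acoustic_step(rho1, u, rho0, c, dx, dt)
e1 = energy(rho1, u)
# stable at this large dt (the acoustic energy cannot grow); the waves are
# not represented accurately, though: backward Euler damps them heavily
```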
Results for a low Mach number convective test case (a slightly superadiabatic zone with idealized physical parameters) show good agreement in horizontally averaged quantities between calculations based on the reduced speed of sound technique as given in Hotta et al. (2012) and the "stratified" calculations from Cai (2016).

Changing the model parameters

As mentioned on several occasions, it is impossible in our area to produce models which truly match the physical parameters of the star they purport to describe. This holds true in particular for molecular diffusivities. Their numerical counterparts must, in one way or another, always be much higher than the microphysics of the stellar material would dictate. Here we turn to a different, purposeful change of model parameters as compared with the physical object. The aim of the many different approaches we have described above is to avoid the difficulties brought about by the speed of sound in the case of low Mach number convection by changing the equations. The aim of such a deliberate change of parameters, in contrast, is to allow affordable simulations using the unaltered Navier–Stokes equations, yet to preserve important characteristic quantities such as the Rossby number in rotating convection. Thus, the complications and uncertainties brought about by working with modified equations would be avoided. The immediate practical benefit is the possibility of making use of the whole arsenal of numerical methods which have been developed for the Euler and Navier–Stokes equations. Such an approach is envisaged by Wang et al. (2015) in connection with their CHORUS code.

Multidimensional modelling: numerical methods

Stellar convection modelling: the numerical problem

From the standpoint of numerical modelling the main problems are these:

Spatial scales, extreme parameters: General aspects of spatial scales, extreme parameters etc. in stellar convection have been discussed in Sect. 2.
Precisely because the extremely large Reynolds numbers and similar parameters are inaccessible to direct numerical simulation, it is mandatory to do one's best to reach the most extreme parameters of that sort in the actual calculation in order to resolve as much as possible of the conceivably essential small scales of the turbulent flows. Other reasons (e.g., narrow ionization fronts) may as well produce small spatial scales, calling for numerics which yields high resolution per gridpoint for general trustworthiness. Flow properties: highly subsonic or otherwise. A special issue encountered mainly in convection deep inside of stars (and in planetary astrophysics, rarely elsewhere in astrophysics) is, in the first place, the occurrence of very subsonic flows. For the usual methods based on the Euler equations the Courant–Friedrichs–Lewy condition limits the applicable time-step to such values that the fastest signal traverses only one grid spacing (or some fraction thereof, see also Sect. 2). Applying such time-steps, the physical processes of actual interest can virtually stall during computation, leading to impractical demands in terms of computer time. Any efficient type of numerical approach must, however, lead to a sensible change of the variables in one time-step. This difficulty can basically be attacked in two ways, namely: Modify the Euler or Navier–Stokes equations in order to filter out sound waves or to reduce their speed. Approaches of that type have been the subject of Sects. 4.3–4.7. (Only in the method described in the last subsection has the goal of eliminating the computational difficulties originating from a large speed of sound been reached by keeping the basic equations but modifying the physical parameters purposefully.) The resulting equations can then be solved with numerical methods often not dissimilar to those we are going to describe in this section.
Keep the original Euler or Navier–Stokes equations and develop methods which directly cope with the problem of disparate characteristic velocities, this time at the level of the numerical method itself. In computational astrophysics that approach has been taken up relatively recently and important strides have been made. These developments fit well in the present, numerically oriented section and are therefore discussed here. Thus, one can choose between either a twofold approximation (approximate the equations analytically in a first step and subsequently numerically) or a onefold approximation (work on the numerical level only). Geometry: For some types of simulations it may be appropriate to work with simple box geometry. That holds true, for example, for modelling of solar granulation where one can use a small box (in terms of solar diameter) containing a small part of the photosphere including some regions below ("box in a star"). For simulations covering a sizeable portion of the star (a sector, a spherical shell, the whole star) the situation is different. For a sector not containing the center of a star it may be sufficient to work with polar or spherical coordinates. This means some marked change as compared to Cartesian coordinates. However, that change is still not as drastic as brought about by a simulation of a spherical shell or a whole sphere. The ensuing problem is then that in spherical coordinates terms of the Euler and Navier–Stokes equations have geometric factors like \(\frac{1}{r\sin \theta }\) etc. (\(\theta \) denoting the polar distance of the point). This leads to singularities at the center and along the polar axis. Dealing with them is numerically awkward. In addition, convergence of longitude circles near the poles leads to small grid spacing when using a rectangular \((r,\phi ,\theta )\) grid and hence unduly small timesteps. This problem can be overcome by putting a "star in a box". 
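The time-step penalty caused by converging longitude circles is easy to quantify: on a uniform \((\phi ,\theta )\) grid the physical zonal spacing is \(r\sin \theta \,\Delta \phi \). The following small helper (our own illustration, not taken from any code discussed here) makes the collapse of the grid spacing near the pole explicit.

```python
import numpy as np

def zonal_spacing(r, theta, dphi):
    """Physical grid spacing in longitude on a sphere of radius r
    for a uniform (phi, theta) grid: dx = r * sin(theta) * dphi.
    Near the poles (theta -> 0 or pi) this collapses, forcing
    correspondingly tiny CFL-limited time steps."""
    return r * np.sin(theta) * dphi
```

At \(\theta = 0.01\) the spacing is about one hundredth of its equatorial value, so the admissible time step shrinks by the same factor; this is one of the motivations for the "star in a box" approach mentioned above.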
A different approach, actually the one most often used when modelling whole spheres or spherical shells in the past, is to use a spectral method, expanding the lateral part of the dependent variables in spherical harmonics and working in this representation. The variation of these functions is distributed evenly over the sphere. The radial part can be treated by any type of discretization (with some difficulties again if the center of the star is to be included). In addition, there is of course the problem of magnetic fields. This review deals, however, with the hydrodynamic aspects only. Methods for general hydrodynamics Setting special issues aside, the numerical challenge encountered in modelling stellar convection in multidimensions stems either from the hyperbolic terms or the diffusive terms in the Navier–Stokes equations, Eqs. (42a)–(42c). As discussed in Sect. 2, the diffusive terms call for closer consideration mainly when the diffusivities (conductivities) and the time-step dictated by flow properties conspire so as to make implicit treatment of the diffusive terms unavoidable. In that case, (semi-)implicit time integration is required, incurring the necessity of solving a large linear or non-linear system of equations originating from the diffusivities. A similar problem is posed by the Poisson equation (or a similar type of equation) appearing in the treatment of low Mach number flows (anelastic approximation, etc.). By and large, the solution of the hyperbolic (Euler) part of the hydrodynamic equations seems, however, to be the part demanding most attention when developing a code aimed at the investigation of stellar convection or similar phenomena. This is witnessed by the fact that in the publications describing the basics of the code regularly much more space is devoted to the hyperbolic part than to the (implicitly treated) diffusive terms.
Still, one should not underestimate the difficulties of writing an efficient solver for those (elliptic) systems of equations. To achieve fast convergence on highly parallel machines is still a nontrivial task. On the other hand, elliptic equation solvers are in demand much more often in science, engineering, etc. than solvers for hyperbolic problems. Therefore, we refer to the literature (e.g., the book by Trangenstein 2013) and deal here mainly with problems rooted in the hyperbolic part of the Navier–Stokes equations, i.e., the Euler equations. Even if the design of methods for the hyperbolic part is usually largely decoupled from the treatment of the viscous terms (or the latter are omitted altogether), specific problems pop up when dealing with the Euler equations only, indicating that ideally one should work on the Navier–Stokes equations. The main problem from the standpoint of numerics is that solutions of the Euler equations in general develop discontinuities, in particular shocks. These preclude naive use of the differential form of the Euler equations. Rather, one has to look for weak solutions. The problem here is, however, that in general weak solutions lack uniqueness. Admissible solutions must be found which are a limit of solutions for the viscous case with viscosity tending to 0. That puts special demands on numerics. Furthermore, if discontinuities are present, traditional numerical methods (which are often based on Taylor expansion) lose their justification. They must implicitly contain or be equipped with a high degree of numerical or artificial diffusivity to prevent development of discontinuities in the numerical sense. That brings about massive degradation of resolution. As a consequence, much of the newer work on method development is aimed at achieving high resolution despite these basic difficulties. We will deal here mainly with methods which, in one way or the other, address that task.
Numerical conservation methods: a few basic issues The Euler or Navier–Stokes equations express (for the situation under consideration) the conservation (balance) of mass, momentum, and energy. As a consequence, the idea that one's numerics should do likewise on the discretized level has immediate appeal. Such methods are called conservative. Beyond that point just mentioned there are two reasons which render a conservative approach desirable. An obvious practical issue is that convection simulations typically require time-integration covering a long time (in terms of the sound crossing time of the domain) in order to arrive at relaxed, statistically stationary solutions as discussed in Sect. 2. If, for example, artificial mass loss should occur due to lack of conservation properties, its undesirable effects on the simulation are easily imagined. A more theoretical consideration is that the solutions of the hyperbolic part of the equations (the Euler equations for our purpose) frequently develop shocks or other (near-) discontinuities, i.e., very steep gradients. (Numerically, a proper and a near-discontinuity cannot really be distinguished.) In particular, one rewrites equations with discontinuous solutions in the weak form which does not contain the first derivatives in the basic variables any longer. For a large class of problems which matter here, Lax and Wendroff (1960) have shown that if a numerical solution obtained by a conservative method converges at all with increasing grid refinement, it converges to a weak solution. Conversely, Hou and LeFloch (1994) have shown that for non-conservative schemes discontinuities in the solution must propagate at a wrong speed. A large part of the codes used in the present area is based on conservative schemes. This holds true for codes solving the Euler or Navier–Stokes equations.
For codes using the anelastic or the low Mach number approximation the situation is different due to the fact that the basic theory often does not represent conservation principles. From a more practical standpoint, the equations are frequently not formulated for the physically conserved quantities but, e.g., for velocity \(\varvec{u}\) instead of momentum density \(\varvec{\mu }\). With the exception of a few more special approaches discussed later we will consider conservative methods working on a rectangular grid. For simplicity, we will assume spatial grid spacing h to be uniform and equal in the various coordinate directions and we will often deal with the 2D case instead of 3D for ease of notation. Besides the grid points \((x_i,y_j)=(ih,jh)\) also half-numbered gridpoints will appear, e.g., \(x_{i+\frac{1}{2}}=(i+\frac{1}{2})h\). They give the boundaries of the grid cells. In time we proceed by time-increments \(\tau \), so that the nth time step corresponds to \(t_n=n\tau \). Discretizing both in space and time, \(v_{i,j}^n\) is the approximation for \(v(x_i,y_j,t_n)\). Given the form of the hyperbolic conservation law (in 2D) $$\begin{aligned} \partial _t v + \partial _x f(v) + \partial _y g(v) = 0, \end{aligned}$$ a conservative numerical semidiscretization (in space only, time being considered later on) will be $$\begin{aligned} \partial _t\bar{v}_{i,j}+\frac{\hat{f}_{i+1/2,j}-\hat{f}_{i-1/2,j}}{h}+ \frac{\hat{g}_{i,j+1/2}-\hat{g}_{i,j-1/2}}{h}=0. \end{aligned}$$ The method works on cell averages which are denoted by \(\bar{v}_{i,j}\) and evaluated for the grid cell \((x_{i-1/2},x_{i+1/2})\times (y_{j-1/2},y_{j+1/2})\). The essential point at this level is the choice of the numerical flux functions \(\hat{f}\). Obviously, \(\hat{f}_{i+1/2,j}\) for example is expected to represent \(\int _{y_{j-1/2}}^{y_{j+1/2}}f(v(x_{i+1/2},y))\,\mathrm{d}y\) to high order. (When it is clear that we are working at a single time level we may omit the argument t in the functions.)
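The conservative semidiscretization above fits in a few lines of code. The following Python fragment (function name, storage convention, and the periodic-grid assumption are ours, not taken from any particular code) builds the right-hand side for \(\partial _t\bar{v}_{i,j}\) from arrays of interface fluxes.

```python
import numpy as np

def semidiscrete_rhs(f_hat, g_hat, h):
    """Conservative finite-volume right-hand side on a periodic grid:
      d/dt vbar_ij = -(fhat_{i+1/2,j} - fhat_{i-1/2,j})/h
                     -(ghat_{i,j+1/2} - ghat_{i,j-1/2})/h.
    Convention (ours): f_hat[i, j] holds fhat_{i+1/2,j}, so the flux
    through the left face, fhat_{i-1/2,j}, is f_hat[i-1, j];
    np.roll implements the periodic wrap-around."""
    div_f = (f_hat - np.roll(f_hat, 1, axis=0)) / h
    div_g = (g_hat - np.roll(g_hat, 1, axis=1)) / h
    return -(div_f + div_g)
```

Because every interface flux enters twice with opposite sign, the sum of the right-hand side over the whole (periodic) domain vanishes to rounding error, which is exactly the discrete conservation property discussed above.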
The numerical flux function \(\hat{f}_{.,.}\) will depend on numerical values of the basic point variables v or cell averages \(\bar{v}\), respectively. If the method is only second order in space, there is no need to distinguish cell average values from point values since \(\bar{v}_{i,j}=v_{i,j}+O(h^2)\). Turning back to the issue of semidiscretization, the choice of the temporal discretization is independent of the spatial discretization to some degree in most schemes. It is often performed by a Runge–Kutta scheme (see Sect. 5.5). In a few cases, the two kinds of discretizations are, however, interwoven intimately. For clarity we want to note here that there are two ways to obtain conservative schemes in terms of variables used. Either one can use the conserved variables (variable densities) \((\rho ,\varvec{\mu },e)\) or the natural variables (density, velocities, pressure or temperature) as long as the flux function is calculated properly. The difference in using one set or the other is purely numerical since typically operations such as interpolations are being applied which may be more accurate for one set or the other. Stability, dissipation While details on time-integration will be dealt with more closely in Sect. 5.5, we want to address one basic point here. Time marching can either be done explicitly or implicitly. As the most simple case let us consider Euler forward time-integration, $$\begin{aligned}&\frac{\bar{v}_{i,j}^{n+1} - \bar{v}_{i,j}^{n}}{\tau }+\frac{\hat{f}_{i+1/2,j}^{n}-\hat{f}_{i-1/2,j}^{n}}{h}+ \frac{\hat{g}_{i,j+1/2}^{n}-\hat{g}_{i,j-1/2}^{n}}{h}=0\quad \hbox {or}\nonumber \\&\quad \bar{v}_{i,j}^{n+1} = \bar{v}_{i,j}^{n}-\frac{\tau }{h}\biggl [\left( \hat{f}_{i+1/2,j}^{n}-\hat{f}_{i-1/2,j}^{n}\right) + \left( \hat{g}_{i,j+1/2}^{n}-\hat{g}_{i,j-1/2}^{n}\right) \biggr ], \end{aligned}$$ i.e., explicit time-marching.
It is not advocated here for actual use in itself, but it defines the stages of the widely used Runge–Kutta methods (see Sect. 5.5). If, instead, we make use of Euler backward time-differentiation we obtain $$\begin{aligned} \bar{v}_{i,j}^{n+1} = \bar{v}_{i,j}^{n}-\frac{\tau }{h}\biggl [\left( \hat{f}_{i+1/2,j}^{n+1}-\hat{f}_{i-1/2,j}^{n+1}\right) + \left( \hat{g}_{i,j+1/2}^{n+1}-\hat{g}_{i,j-1/2}^{n+1}\right) \biggr ]. \end{aligned}$$ This implicit method requires now the solution of a nonlinear set of equations for each time step because \(\hat{f}_{i+1/2,j}^{n+1}\) etc. contains the variable v at timestep \({n+1}\). The advantage is that an implicit method (with really suitable choices of the type of time integration and the form of \(\hat{f}\)) allows larger time steps than is the case for explicit methods. For these the time step is restricted by the Courant–Friedrichs–Lewy condition (see, for example, Trangenstein 2009 and Sect. 2), so that stability is only granted for time steps \(\tau \) obeying \(\tau \le c \tau _\mathrm{signal}\) where c is a constant of order unity and \(\tau _\mathrm{signal}\) the shortest time a signal (here: a sound wave) needs to cross one grid spacing. Suitable implicit methods allow much larger time steps as far as stability is concerned. But that does not imply that the stable solution is also an accurate one. It will be so only under special circumstances. In our area such methods are useful for low Mach number flows when sound waves are physically unimportant. Then, in 1D and potentially 2D and 3D, the heavy load of solving the said system of equations may pay off due to the larger time-step one can apply. The nature of the system arising from such a discretization is unfortunately such that it does not facilitate its numerical solution. A way out of that difficulty is the use of specific semi-implicit or preconditioning procedures as we are going to describe later on.
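The stability restriction just stated translates into a one-line time-step routine. The sketch below uses our own naming; the safety factor c and the use of \(|u|+c_s\) as the fastest signal speed follow the Courant–Friedrichs–Lewy reasoning above.

```python
import numpy as np

def cfl_timestep(u, cs, h, c=0.5):
    """CFL-limited time step tau <= c * tau_signal, with
    tau_signal = h / max(|u| + c_s): the fastest signal, a sound
    wave riding on the flow, may cross at most a fraction c of one
    grid spacing per step."""
    return c * h / np.max(np.abs(u) + cs)
```

With \(u=(2,-3)\), \(c_s=1\), \(h=0.1\) and \(c=0.5\) this gives \(\tau = 0.5\cdot 0.1/4 = 0.0125\); an implicit scheme may exceed this bound stably but, as stated above, not necessarily accurately.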
As a consequence, all methods we are dealing with (for normal flows, not low Mach) are explicit in the hyperbolic part. The diffusive terms (in particular radiative diffusivity) are, however, parabolic and give rise to an elliptic equation in case a (semi-)implicit time marching method is used. Indeed, implicit treatment is actually often warranted because in a number of situations these diffusive terms would impose very severe timestep limitations for explicit methods. On the fortunate side, for the resulting elliptic equations very efficient methods exist (e.g., multigrid, conjugate gradient; see, for example, Trangenstein 2013). Classical finite difference schemes In stellar hydrodynamics, classical finite difference schemes seem to be used more often in MHD than in pure hydrodynamics in the newer codes. They have the advantage of easy programming, but they also come along with some difficulties regarding stability and resolution as we will discuss shortly. The calculation of the numerical flux function is easy. For example, in the MURaM code (Vögler et al. 2005) first derivatives in the x-direction are approximated by $$\begin{aligned} v_x(x_i)\sim \frac{1}{12h}(-v_{i+2}+8v_{i+1}-8v_{i-1}+v_{i-2}), \end{aligned}$$ which leads to the numerical flux function $$\begin{aligned} \hat{f}_{i+1/2}=\frac{7}{12}\bigl (f_{i+1}+f_i\bigr )- \frac{1}{12}\bigl (f_{i+2}+f_{i-1}\bigr ). \end{aligned}$$ Here, as always, \(f_i\) denotes the physical flux function at the central (whole-numbered) gridpoint. Together with the usual Runge–Kutta schemes for time integration this kind of spatial discretization is, however, unstable. In MURaM stabilization is achieved by (artificial) diffusivities. Such diffusive terms are added to all equations, including the continuity equation. The total diffusivity is composed of two types of diffusion coefficients.
One is nonzero in regions of compression and serves to keep shocks or steep gradients stable, whereas the other one is positive in all of the domain and aims to achieve general numerical stabilization. Riemann solvers A number of methods to be described below are based on solutions of Riemann problems. They can be considered both as successors of Godunov's method which we deal with shortly and as application of the upwind principle. In the Riemann problem for Euler's equations of hydrodynamics (in 1D) we consider, at time \(t=0\), a constant left state \(\varvec{v}_L\) for \(x<0\) and a constant right state \(\varvec{v}_R\) for \(x>0\). We can expect solutions which are constant along lines \(x/t=\hbox {const}\) in the \(x{-}t\) plane due to the fact that the conservation law as well as the initial condition is invariant under the coordinate transformation \((\xi ,\tau )=(\theta x,\theta t)\) for arbitrary \(\theta >0\). This applies therefore also to the solution, so that \(\varvec{v}(x,t)=\varvec{v}(\xi ,\tau )\), whence the values at points (x, t) and \((\theta x,\theta t)\) are identical and therefore depend on \(\frac{x}{t}\) only. Figure 10 shows the general structure of the solution of a Riemann problem for the 1D Euler equations of hydrodynamics, Eqs. (42a)–(42c). The solution of a Riemann problem for the 1D Euler equations is self-similar. There are four constant states, separated in turn by a centered rarefaction wave (across which the variables are continuous), a contact discontinuity (jump in density but not in pressure) and a shock (jump in density and pressure). \(\varvec{v}^*\), the value at \(x=0\), constant in time for \(t>0\), is used in determining the flux function of Godunov's method and other methods which we are going to describe. To advance from step n corresponding to time \(t_n\) to time \(t_{n+1}\), Godunov's method proceeds as follows.
The solution \(\varvec{v}^n\) is assumed to be constant on the intervals, namely \({\bar{\varvec{v}}}_i^n\) on the ith interval \(x_{i-1/2}<x<x_{i+1/2}\). At the left grid point \(x_{i-1/2}\) we solve the Riemann problem with input data \({\bar{\varvec{v}}}_{i-1}^n\) and \({\bar{\varvec{v}}}_{i}^n\). That leads to the flux function $$\begin{aligned} \hat{f}_{i-1/2}^\mathrm{Godunov} = f\left( \varvec{v}_{i-1/2}^*\right) . \end{aligned}$$ As long as the signals emanating from the neighbouring cell boundaries do not arrive at that location this flux function is exact (to the extent the Riemann solver is) and constant in time. We consider that case and want to advance by a time increment \(\tau \). For the moment, we denote the solution by \(\varvec{w}(x,t_n+\tau )\) and assume further, again for the moment, that \(\varvec{v}^n\) is exact. We then integrate over one interval \((x_{i-1/2},x_{i+1/2})\) and the time interval \((t_n,t_n+\tau )\) and get exactly $$\begin{aligned} \int _{x_{i-1/2}}^{x_{i+1/2}}\varvec{w}(x,t_n+\tau )\,\mathrm{d}x=h{\bar{\varvec{v}}}_i^n -\tau f\left( \varvec{v}_{i+1/2}^*\right) + \tau f\left( \varvec{v}_{i-1/2}^*\right) . \end{aligned}$$ Godunov's method finishes now with an averaging step, namely by defining the discrete solution at time \(t_{n+1}\) by setting \(\bar{v}_i^{n+1}\) as the value of \(\varvec{w}(\cdot ,t_{n+1})\) averaged over the spatial interval, i.e., $$\begin{aligned} {\bar{\varvec{v}}}_i^{n+1}={\bar{\varvec{v}}}_i^{n}-\frac{\tau }{h}\left( f\left( \varvec{v}_{i+1/2}^*\right) - f\left( \varvec{v}_{i-1/2}^*\right) \right) . \end{aligned}$$ Thus, starting the cycle with the reconstruction step, it proceeds in the form reconstruct-solve-average (RSA). A number of methods to be discussed later on follow essentially an RSA-scheme. As a prerequisite, we concentrate now on the solution of a Riemann problem. For information on Riemann problems see the classic book by Courant and Friedrichs (1999) and Toro (2013).
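For a scalar law the whole RSA cycle fits in a few lines. The sketch below (our own implementation, not taken from a production code) uses Burgers' equation, \(f(v)=v^2/2\), for which the Riemann problem has a closed-form solution: a jump with \(v_l>v_r\) is a shock moving at the Rankine–Hugoniot speed \(s=(v_l+v_r)/2\), while \(v_l\le v_r\) gives a rarefaction, with \(v^*=0\) in the transonic case \(v_l<0<v_r\).

```python
import numpy as np

def godunov_flux_burgers(vl, vr):
    """Exact Godunov flux f(v*) for Burgers' equation, f(v) = v^2/2.
    Shock (vl > vr): the upwind state is chosen by the sign of the
    shock speed s = (vl + vr)/2. Rarefaction (vl <= vr): vl if all
    waves move right, vr if all move left, the sonic value 0
    otherwise."""
    f = lambda v: 0.5 * v * v
    s = 0.5 * (vl + vr)
    flux_shock = np.where(s > 0.0, f(vl), f(vr))
    flux_rare = np.where(vl > 0.0, f(vl),
                         np.where(vr < 0.0, f(vr), 0.0))
    return np.where(vl > vr, flux_shock, flux_rare)

def godunov_step_burgers(v, tau, h):
    """One RSA cycle on a periodic grid: piecewise-constant
    reconstruction, Riemann solution at every interface, then the
    conservative average (the update formula above)."""
    f_hat = godunov_flux_burgers(v, np.roll(v, -1))  # fhat_{i+1/2}
    return v - (tau / h) * (f_hat - np.roll(f_hat, 1))
```

Since the update differences interface fluxes, the cell-average sum is preserved to rounding error, illustrating the conservation property of Sect. 5.2.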
Riemann problem and upwinding Before turning towards solution avenues for the Riemann problem we consider its connection to the concept of upwind discretization. We set out from the simple scalar advection equation $$\begin{aligned} \partial _tv(x,t)+\alpha \partial _xv(x,t)=0, \end{aligned}$$ assuming \(\alpha >0\) for the sake of definiteness. Since it describes a motion with speed \(\alpha \), the solution of the Riemann problem with initial data \(v_l, v_r\) is $$\begin{aligned} v^*=v_l. \end{aligned}$$ Combining this with Godunov's procedure, we obtain $$\begin{aligned} v_i^{n+1}=v_i^{n}-\frac{\alpha \tau }{h}\left( v_i^{n}-v_{i-1}^{n}\right) , \end{aligned}$$ so that the spatial differencing has to be biased towards the direction from where the wind blows (upwind principle). Central or downwind spatial differencing would not only be something different from Godunov's method but also be unstable (with this and the usual Runge–Kutta methods for time-marching). An essential point is that it can be shown that the above numerical approach is a better approximation to an equation containing an additional diffusive term, the diffusion coefficient depending on grid size h and tending to 0 if h does so. In other words, the Riemann strategy, i.e., upwinding, introduces a sort of numerical diffusivity which can be interpreted to have a stabilizing effect on the numerical solution. Historically, upwinding was introduced very early in this area by Courant et al. (1952). We next turn to various solution strategies for the Riemann problem. Exact solution The exact solution of the problem amounts to solving nonlinear equations. This is frequently deemed expensive and may be technically difficult in the presence of Coriolis terms, radiative transfer and other complicating factors. Two widely used approximate solvers will therefore be discussed below.
Yet, there have been strides towards efficient exact solution strategies, e.g., by Colella and Glaz (1985), and this method is implemented in the APSARA code (Wongwathanarat et al. 2016). While from the standpoint of basic physics and general virtue the exact solution is the best one, that may not apply from the standpoint of numerics. As mentioned in the context of Godunov's method for the advection equation, the Riemann solver also acts in the sense of a numerical diffusivity. The kind and degree of diffusivity provided by the exact solution may not be appropriate for each type of basic numerics. In such a case, a more diffusive Riemann solver may be required. The Harten–Lax–van Leer solver At time \(t=0\) we set out with a left constant state \(\varvec{v}_l\) for \(x<0\) and a right one \(\varvec{v}_r\) for \(x>0\). In the original form the Harten–Lax–van Leer approximate Riemann solver simplifies the basic structure of the solution so that in addition to the left and the right state there is just one central, intermediate state in between them, \(\varvec{v}^{HLL}\). The intermediate state \(\varvec{v}^{HLL}\) is separated from the left and the right state by straight lines in the (x, t)-plane corresponding to wave speeds \(s_l\) and \(s_r\). The simplest choice is \(s_l=u_l-c_l\) (where \(u_l\) and \(c_l\) are the velocity and sound speed in the left state), and similarly \(s_r=u_r+c_r\). For other possibilities see, for example, Toro (2013). Once estimates are given for these speeds (for details see Harten et al. 1997) the intermediate state can be computed as $$\begin{aligned} \varvec{v}^{HLL}= \frac{s_r\varvec{v}_r-s_l\varvec{v}_l+\varvec{f}(\varvec{v}_l)-\varvec{f}(\varvec{v}_r)}{s_r-s_l}.
\end{aligned}$$ From that value the flux follows as $$\begin{aligned} \varvec{f}^{HLL} = {\left\{ \begin{array}{ll} \varvec{f}_{l} &{}\quad s_l \ge 0 \\ \varvec{f}_{r} &{}\quad s_r\le 0 \\ \frac{s_r\varvec{f}_l-s_l\varvec{f}_r+s_ls_r(\varvec{v}_r-\varvec{v}_l)}{s_r-s_l} &{}\quad s_l<0<s_r. \\ \end{array}\right. } \end{aligned}$$ The sequence of fans originating from the zero-point in the (x, t)-plane occurring in the original Riemann problem is replaced by a simpler fan structure with two discontinuities separating different areas (states). There are extensions of the original HLL method with a richer fan structure such as the HLLC approach (where a Contact wave is restored) or the HLLE approach with a special method to derive the largest and smallest signal velocity (Einfeldt 1988). We again refer to Toro (2013) for details. The Roe solver The solver devised by Roe (1997) is in wide use. It rests upon the following principles and considerations. Riemann problems for linear conservation laws are easy: Consider a hyperbolic conservation law $$\begin{aligned} \partial _t\varvec{v}+\partial _x{\varvec{f}}(\varvec{v})=0, \end{aligned}$$ so that the Jacobian of \(\varvec{f}\) w.r.t. \(\varvec{v}, A:=\partial _{\varvec{v}}\varvec{f}\), has a complete set of real eigenvalues and eigenvectors. Then by applying the chain rule Eq. (68) can be written as $$\begin{aligned} \partial _t\varvec{v}+ A(\varvec{v})\partial _x\varvec{v}= 0. \end{aligned}$$ We assume now that the conservation law is linear, i.e., that A is independent of \(\varvec{v}\). Let T be a matrix which diagonalizes \(A, TAT^{-1}=\tilde{A}\), the diagonal matrix \(\tilde{A}\) containing the eigenvalues \(\alpha _i\) of A. Then, with \(\varvec{w}=T\varvec{v}\), Eq.
(68) can be written in the form $$\begin{aligned} \partial _t\varvec{w}+ \tilde{A}\partial _x\varvec{w}= & {} 0\hbox {, i.e.,} \end{aligned}$$ $$\begin{aligned} \partial _tw_i + \alpha _i\partial _xw_i= & {} 0 \end{aligned}$$ for the ith component of \(\varvec{w}\). Riemann problems for such advection equations are, however, solved trivially by the upwind principle as explained around Eq. (65). There remains only to recover \(\varvec{v}\) using \(\varvec{v}=T^{-1}\varvec{w}\). In practice, for the Euler equations the necessary eigenvalues are known (velocity \(u_x\) and \(u_x\pm c\) for a Riemann problem in x-direction, c being the speed of sound), and the eigenvectors follow immediately. Roe's procedure Starting from the Euler equations \(\partial _t\varvec{v}+\partial _x\varvec{f}(\varvec{v}(x,t))=0\) and setting \(A=\partial _v\varvec{f}(v(x,t))\), Roe devised a way to construct a matrix R, now called the Roe matrix, which depends on two states, in practice the left and the right state of the Riemann problem. This matrix has, like the original Jacobian, a complete set of real eigenvalues and eigenvectors, and it moreover satisfies consistency: \(R(\varvec{v},\varvec{v})=A(\varvec{v})\) for all states \(\varvec{v}\) and the Rankine–Hugoniot property (see below): \(R(\varvec{v}_l,\varvec{v}_r)(\varvec{v}_r-\varvec{v}_l)=\varvec{f}_x(\varvec{v}_r)-\varvec{f}_x(\varvec{v}_l)\). The last equality is connected with the Rankine–Hugoniot conditions across a single discontinuity moving with speed s, viz. $$\begin{aligned} \varvec{f}_x(\varvec{v}_r)-\varvec{f}_x(\varvec{v}_l)=s(\varvec{v}_r-\varvec{v}_l), \end{aligned}$$ so that the Roe matrix does a correct job for a single discontinuity, which is physically important because the Rankine–Hugoniot conditions express conservation principles according to the following considerations. For explanation, it is sufficient to consider one component of this equation, for example the continuity equation.
It reads \(f_x(\rho _r,\cdots )-f_x(\rho _l,\cdots )=s(\rho _r-\rho _l)\) where \(f_x\) is the flux function in x-direction (namely \(\mu _x\)). Why is that correct? If s happens to be 0 (because we deal with a stationary contact discontinuity or a stationary shock) then we clearly must have \(f_x(\rho _r,\cdots )=f_x(\rho _l,\cdots )\), which just amounts to Eq. (72) for that case, because otherwise matter would be created or destroyed at the discontinuity. If \(s\ne 0\) then, using a new coordinate system moving with speed s, we have, in these coordinates, a new flux function \(f^*\), namely \(f_x^*=f_x-\rho s\). The additional mass flux is caused by the motion of the new system with respect to the old one. In the new coordinate system, the velocity of the discontinuity is \(s^*=0\), and applying the above result for a zero velocity discontinuity yields Eq. (72). If the equation of state is nontrivial, for example in an ionization region, specific modifications of the original Roe solver may be required for proper functionality. Choice of the solver Approximate Riemann solvers may give unphysical solutions. For example, the Roe solver may lead to negative densities. In such places one may switch, for example, to the HLLE solver which is proven to yield positive densities and energies (Einfeldt et al. 1991). At sonic points (where one characteristic speed is 0) the Roe solver may yield unphysical results of a different kind (expansion shocks which violate thermodynamics). It is common practice to switch, at such locations, to another solver, perhaps of the HLL family. The original HLL solver leads to large artificial viscosity so that its general use may not be advisable. We emphasize once more the great flexibility of HLL schemes in terms of waves included and refer, in addition to the HLLE solver already mentioned, to Linde's solver (Linde 2001), now termed HLLL.
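Both solver families of this subsection are compact enough to sketch. The helpers below are our own: the first implements the HLL flux with the wave-speed estimates \(s_l\), \(s_r\) passed in (e.g., \(s_l=u_l-c_l\), \(s_r=u_r+c_r\) as above); the second illustrates the Roe construction in the scalar case, where the Roe "matrix" degenerates to a scalar, using Burgers' flux \(f(v)=v^2/2\) with Roe average \(a(v_l,v_r)=(v_l+v_r)/2\).

```python
import numpy as np

def hll_flux(vl, vr, fl, fr, sl, sr):
    """HLL flux for conserved states vl, vr with physical fluxes
    fl = f(vl), fr = f(vr) and scalar wave-speed estimates sl < sr."""
    if sl >= 0.0:   # all waves move to the right: pure upwinding
        return fl
    if sr <= 0.0:   # all waves move to the left
        return fr
    return (sr * fl - sl * fr + sl * sr * (vr - vl)) / (sr - sl)

def roe_average_burgers(vl, vr):
    """Scalar 'Roe matrix' for Burgers' flux f(v) = v^2/2:
      a(v, v) = v = f'(v)                      (consistency)
      a(vl, vr)*(vr - vl) = (vr^2 - vl^2)/2
                          = f(vr) - f(vl)      (Rankine-Hugoniot),
    so a single discontinuity is propagated at the exact speed."""
    return 0.5 * (vl + vr)
```

Consistency of the HLL flux is immediate: for \(v_l=v_r\) (hence \(f_l=f_r\)) the subsonic formula collapses to the physical flux for any admissible \(s_l<0<s_r\).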
Classic higher order schemes using Riemann solvers Here we discuss a few methods using Riemann solvers which by now can be considered classic and highly developed. Their aim is to remove the basic drawback of Godunov's method, namely its limitation to first order accuracy in space and consequently low resolution per gridpoint plus unwanted high numerical diffusivity. Basically, the reconstruction step in these methods is designed by resorting to higher order polynomial functions. These methods or variants of them are frequently at the roots of modern codes in our area. A few more recent methods which basically can be subsumed in this category will, however, be discussed separately later on. Importantly, these methods are nonlinear even for linear conservation laws, e.g., \(\partial _tv+\partial _x(a(x,t)v(x,t))=0\) for some prescribed function a(x, t). So, for example, Godunov has shown in his basic paper (1959) that, for the advection equation, a monotone solution will stay monotone in general only for linear discretizations accurate to first order in space. From this follows the necessity of developing such intrinsically nonlinear schemes which indeed can be constructed so as to yield high accuracy and a substantial degree of immunity against artificial oscillations and the like. Many of them are of the RSA type. Reconstruction with piecewise linear functions The first higher order scheme of that type was MUSCL (van Leer 1997 and the previous papers in that series which present a number of variants and many general ideas). A similar method developed at the same time by Kolgan (1972), see also Kolgan (2011), seems to have been but little known in the Western countries for a long time. Concentrating here on typical spatial discretization we set out with a reconstruction step (given grid cell averages \(\bar{v}_i\) for one component of the conservation law) yielding piecewise linear functions \(v_i(x), (x_{i-1/2}\le x \le x_{i+1/2}) \).
We write them in the form $$\begin{aligned} v_i(x)=\bar{v}_i+\sigma _i(x-x_i) \hbox { for } (x_{i-1/2}\le x \le x_{i+1/2}). \end{aligned}$$ One is free to choose \(\sigma _i\) in terms of the conservative property, because \(v_i(x)\) yields \(\bar{v}_i\) when averaged over the interval independently of \(\sigma _i\). \(\sigma _i=0\) results in Godunov's approach.

Fig. 11 Solution of the advection equation. An initial profile (solid line) is moved with constant speed for one revolution (periodic boundary conditions) using various numerical schemes. Upwind (Godunov) smooths the solution severely. Lax–Wendroff performs well in the smooth part but fails at discontinuities. The minmod and the monotonized central differences (MC) limiters perform reasonably or well in the smooth part and work fairly acceptably near jumps. Image reproduced with permission from LeVeque (1998), copyright by Springer.

For the advection equation, Eq. (65), with positive speed \(\alpha \) the symmetric choice \(\sigma _i=\frac{\bar{v}_{i+1}-\bar{v}_{i-1}}{2h}\) leads to Fromm's method, the upwind choice \(\sigma _i=\frac{\bar{v}_{i}-\bar{v}_{i-1}}{h}\) to the Beam–Warming scheme and the downwind choice \(\sigma _i=\frac{\bar{v}_{i+1}-\bar{v}_{i}}{h}\) to the Lax–Wendroff scheme. The latter one is particularly interesting because it leads to a 3-point stencil whereas the others need 4 points. It can be generalized to systems in several ways. The issue is that these methods lead to much better results than Godunov's in smooth parts of the flow but unacceptable results in case of discontinuous solutions, in particular unphysical oscillations. Both of these effects are clearly visible in the first column of Fig. 11. This can to a very considerable part be remedied by changes in the slope, chosen in such a way that the order of approximation is nevertheless kept. For example one might apply the minmod function to the slopes. It returns the smallest number by modulus, or zero, viz.
$$\begin{aligned} {{\mathrm{minmod}}}(a,b) = {\left\{ \begin{array}{ll} b &{}\quad |a|\ge |b| \\ a &{}\quad |a|\le |b| \\ \end{array}\right. } \end{aligned}$$ in case a and b have the same sign and $$\begin{aligned} {{\mathrm{minmod}}}(a,b) = 0 \end{aligned}$$ if they have opposite sign, and similarly for more arguments. A different variant actually in use for choosing the slope is $$\begin{aligned} \sigma _i={{\mathrm{minmod}}}\left( \sigma ^\mathrm{Fromm},2\sigma ^\mathrm{BeamW}, 2\sigma ^\mathrm{LaxW}\right) . \end{aligned}$$ This leads to the monotonized central difference (MC) method. The factor 2 in two places within the minmod function is intentional. See LeVeque (1998). We take our Fig. 11 from that source. It shows the effects of various schemes for an advection problem. The improvement in dealing with the discontinuity using limited slopes is readily visible in the second row of that figure. Piecewise quadratic reconstruction (PPM, Piecewise Parabolic Method) Moving to higher order interpolation, the PPM method devised by Colella and Woodward (1984) has become widely popular and is being used in a number of codes in astrophysical fluid dynamics. As evident already from its designation, it uses piecewise quadratic instead of piecewise linear functions on each interval. The basic interpolation process sets out from a piecewise linear approximation similar to what has been considered above. This approximation is constructed in such a way as to maintain monotonicity where it occurs in the data (average) values. At extrema, the slope is set to zero. Special criteria are applied in order to figure out whether the zone under consideration contains a discontinuity. In that case, the parabola is steepened and adjacent zones are treated in specific ways in order to avoid over- and undershooting. These processes do not change the zone averages.
Various ways to achieve that are in use as described, e.g., in Colella and Woodward (1984) or Fryxell et al. (2000). Various implementations use modifications of the original interpolation process in order to improve properties near steep gradients or other aspects. The piecewise parabolic functions are in general not continuous across cell boundaries. As a consequence, solutions of Riemann problems are a part of the PPM methodology. Since the functions involved are not constant on both sides of the jump, one must consider how to define reasonable left and right constant states for which the Riemann problem may then be solved, keeping accuracy considerations in mind. This can be done using a sort of Lagrangian hydrodynamics as in Colella and Woodward (1984) or an Eulerian approach, e.g., Fryxell et al. (2000). As already emerges from these remarks, the PPM method does not rest solely on a specific sort of spatial reconstruction at a given time level; rather, fluxes may, in the sense of a predictor step, be defined at step \(\frac{1}{2}\) when moving from time step 0 to step 1. This serves to make the approximation scheme second order in time. In space, it has many third order features. Special reconstruction processes near discontinuities or extrema (where the usual monotonicity constraints naturally do not apply) will reduce the formal order of spatial accuracy. The same holds true for Strang type splitting applied in multidimensions (see Sect. 5.5). Overall, PPM methodology has found applications in many variants, from use in low Mach number flows, e.g., Nonaka et al. (2010), to relativistic astrophysics, see Martí and Müller (1996), or Mignone et al. (2005). ENO (essentially non-oscillatory) schemes The ENO class of methods has originally been developed within the context of hyperbolic conservation laws, in particular the Euler equations. See Harten et al. (1987) for an early paper.
A part of the methodology is concerned with a specific sort of essentially non-oscillatory interpolation. It is by now used in a wide variety of contexts. Specific considerations pertaining to hydrodynamics then have to be added. ENO-type interpolation Consider a standard 1D interpolation problem, for simplicity on equidistant nodes \(x_j=jh\), with function values \(w_j\). Consider the simple case that for some index i, \(w_i=w_{i-1}=\cdots =0\), but \(w_{i+1}=1\), so that there is a jump in the data. Wishing to interpolate near position \(x_i\) to second order, using a parabola, we might adopt the stencil \(S_i=\{x_{i-1},x_{i},x_{i+1}\}\). To the left of \(x_i\) the corresponding parabola will yield obviously artificial negative values, a (negative) overshoot. In particular, if such a procedure is applied repeatedly, it can cause oscillations and eventually render the calculations futile. The stencil \(S_{i+1}\), starting at \(x_i\), would not produce better results either. On the other hand, the stencil \(S_{i-1}=\{ x_{i-2},\ldots \}\) would perform well, perfectly indeed in this specific case. The smoothness of a polynomial can be measured by the magnitude of its highest nontrivial derivative, and for an interpolation polynomial the corresponding divided differences are, up to a factor, just those derivatives (Stoer and Bulirsch 2002). As is well known, the higher divided differences of tabular values are particularly large at positions of a jump in the data. From this it follows that the interpolating polynomial should not be chosen with a fixed stencil. Rather, in the above example, the three-point stencil kept could be augmented by one additional gridpoint to the left or to the right, forming two new stencils. To these new candidate stencils a smoothness measurement (e.g., using third order divided differences this time) can be applied, and the smoothest one is adopted together with its third order interpolating polynomial.
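A minimal sketch of this adaptive stencil selection (our own illustrative implementation, with the highest divided difference serving as the smoothness measure, as described above):

```python
import numpy as np

def divided_difference(x, w):
    """Highest-order Newton divided difference of the data (x_j, w_j)."""
    d = np.array(w, dtype=float)
    for k in range(1, len(x)):
        d = (d[1:] - d[:-1]) / (x[k:] - x[:-k])
    return d[0]

def eno_stencil(x, w, i, r):
    """Starting from the single point x_i, grow a stencil of r+1 points;
    at each step add the neighbour which keeps the divided difference
    (our smoothness measure) smallest in modulus."""
    lo = hi = i
    for _ in range(r):
        grow_left = (divided_difference(x[lo-1:hi+1], w[lo-1:hi+1])
                     if lo > 0 else np.inf)
        grow_right = (divided_difference(x[lo:hi+2], w[lo:hi+2])
                      if hi < len(x) - 1 else np.inf)
        if abs(grow_left) <= abs(grow_right):
            lo -= 1
        else:
            hi += 1
    return lo, hi
```

Applied to the jump example of the text (data 0, 0, 0, 0, 1, 1 and starting point \(x_3\)), the procedure grows the stencil to the left and thereby avoids the jump, exactly as argued above.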
The procedure can be extended to higher order interpolation in an obvious way, and this constitutes the classical ENO interpolation principle. If one proceeds until a stencil with \(r+1\) points is reached, an rth order polynomial used for interpolation will ensue. If, in case a jump exists at all, there are \(r+1\) well-behaved points at least on one side of this jump, it will yield an accurate approximation, the truncation error being \(O(h^{r+1})\). Let us return to the parabolas above belonging to the stencils \(S_{i-2}, S_{i-1}\) and \(S_i\). Let us denote these functions by \(p_{i-2}, p_{i-1}\) and \(p_i\). We might wish to obtain a better approximation for, say, \(w(x_{i-1/2})\) (w being the function underlying our tabular data). If the position in the interval (\(x_{i-1/2}\) in our case) is specified, it can be shown that there exist positive weights \(\omega _1^*, \omega _2^*, \omega _3^*\), independent of the data \(w_j\), such that \(\sum _j\omega _j^*=1\) and $$\begin{aligned} \omega _1^*p_{i-2}(x_{i-1/2})+ \omega _2^*p_{i-1}(x_{i-1/2})+ \omega _3^*p_{i}(x_{i-1/2})=w(x_{i-1/2}) + O(h^{5}). \end{aligned}$$ This process can be extended to higher order, so that when basically using stencils with \(r+1\) nodes each we obtain a truncation error of \(O(h^{2r+1})\) instead of \(O(h^{r+1}).\) In itself, this procedure would run again into the original difficulties in case of jumps. In the weighted ENO approach (Liu et al. 1994; Jiang and Shu 1996) one replaces the "ideal" weights \(\omega _1^*\) etc. by weights \(\omega _1\) etc. depending on the smoothness of the corresponding polynomial. With these adaptive weights the full order of approximation is obtained, with truncation error \(O(h^{2r+1})\), if the underlying data are sufficiently smooth, whereas the approximation error decreases to \(O(h^{r+1})\) if a jump is present (or to less if the lack of smoothness is more severe, so that no high order approximation is possible).
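For the common case \(r=2\) (three-point substencils, fifth order at best), the weighted combination can be sketched as follows. This follows the classical Jiang–Shu choice of smoothness indicators and ideal weights for reconstructing the value at \(x_{i+1/2}\) from cell averages; the regularization parameter eps is a conventional choice, and the whole routine is an illustration rather than a production implementation:

```python
import numpy as np

def weno5_face(w, i, eps=1e-6):
    """Fifth-order WENO reconstruction of the interface value at x_{i+1/2}
    from the cell averages w[i-2..i+2] (classical Jiang-Shu weights)."""
    a, b, c, d, e = w[i-2], w[i-1], w[i], w[i+1], w[i+2]
    # candidate third-order reconstructions on the three substencils
    p0 = (2*a - 7*b + 11*c) / 6.0
    p1 = ( -b + 5*c +  2*d) / 6.0
    p2 = (2*c + 5*d -    e) / 6.0
    # smoothness indicators beta_j (Jiang & Shu 1996)
    b0 = 13/12*(a - 2*b + c)**2 + 0.25*(a - 4*b + 3*c)**2
    b1 = 13/12*(b - 2*c + d)**2 + 0.25*(b - d)**2
    b2 = 13/12*(c - 2*d + e)**2 + 0.25*(3*c - 4*d + e)**2
    # nonlinear weights: ideal weights (1/10, 6/10, 3/10) damped by roughness
    al = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    om = al / al.sum()
    return om[0]*p0 + om[1]*p1 + om[2]*p2
```

For smooth data the nonlinear weights approach the ideal ones and the full fifth-order scheme is recovered; near a jump, the weight of any rough substencil is driven towards zero.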
Reconstruction \(\mathcal {R}\), averaging \(\mathcal {A}\) In a finite volume scheme, the basic dependent variables appear as cell averages \(\bar{v}_i\), defined at (integer) gridpoints. Yet, for fluxes, point values are needed at half-integer gridpoints. A reconstruction procedure yielding them is consequently needed. This can be accomplished by a discrete variant of differentiating the integral of a function as follows. The (sort of discrete) indefinite integral of \(v\), \(V_{i+\frac{1}{2}}:=\sum _{j\le i}\bar{v}_j\), is defined on half-integer gridpoints. An interpolation polynomial (in the sense of ENO interpolation, hence avoiding steep gradients as best as possible) for V can be constructed and differentiated, yielding the reconstructed point value. In that sense, $$\begin{aligned} v_{i+1/2}=\mathcal {R}(\bar{v})_{i+1/2}. \end{aligned}$$ Note, however, that actually two values of \(v_{i+1/2}\) will be obtained, one left-sided, starting with the interval \((x_{i-\frac{1}{2}},x_{i+\frac{1}{2}})\), one right-sided, starting with \((x_{i+\frac{1}{2}},x_{i+\frac{3}{2}})\). In hydrodynamics, when calculating flux functions using the reconstructed values, this then leads to considering a Riemann problem. (Note that upwinding, which would lead to just one point value, is not appropriate here. Basically, the present reconstruction task has nothing to do with conservation laws and directions of signals, and even if it had, in hydrodynamics v will be a mix of signals arriving from both sides unless the flow is supersonic.) The averaging process \(\mathcal {A}\) (in the sense of approximating \(\int _{x_{i-1/2}}^{x_{i+1/2}}v(x)\,\mathrm{d}x\)) is simple: use the average of the ENO interpolating polynomial p in the relevant interval for the \(v_j\)-values: $$\begin{aligned} \bar{v}_i=\mathcal {A}(v)_i:=\frac{1}{h}\int _{x_{i-1/2}}^{x_{i+1/2}}p(x)\,\mathrm{d}x.
\end{aligned}$$ ENO for hydrodynamics: the finite volume scheme Using the above notation and a representation similar to Grimm-Strele et al. (2014), one Euler forward step (or rather one Runge–Kutta stage) starting from the \(\bar{v}\)-values with time increment \(\tau \) for \(\partial _tv+\partial _xf(v)=0\) takes the form (possibly including a Riemann solver in step 3)

1. The \(\bar{v}_i\) are given
2. \(v_{i\pm 1/2}=(\mathcal {R}{\bar{v}})_{i\pm 1/2}\)
3. \(\hat{f}_{i\pm 1/2}=f(v_{i\pm 1/2})\)
4. \(\bar{v}_i^{new}=\bar{v}_i -\frac{\tau }{h}(\hat{f}_{i+ 1/2}- \hat{f}_{i- 1/2}) \)

The Shu–Osher finite difference scheme: 1D case This form has been derived and worked out by Shu and Osher (1988, 1989). Contrary to the conservation forms with which we have exclusively dealt up to now, it is a finite difference form and acts on point values \(v_i\). Even so, it has the appearance of a conservation form in semidiscretization, namely $$\begin{aligned} \partial _tv_i=-\frac{1}{h}(\tilde{{f}}_{i+1/2}-\tilde{{f}}_{i-1/2}) \end{aligned}$$ with an appropriate flux-like function \(\tilde{{f}}\). As we will see, there is no obvious advantage for the 1D case over the finite volume method described above. The method comes into full effect in multidimensions, as we will explain shortly. Merriman (2003) has shown that by proper consideration the derivation of the basic form can be very simple. Following him, we set out from a 1D conservation equation $$\begin{aligned} \partial _tv+\partial _xf(v)=0. \end{aligned}$$ Applying the averaging operator defined above we obtain $$\begin{aligned} \partial _t\mathcal {A}v+\frac{\varDelta f}{h}=0, \end{aligned}$$ where \((\varDelta f)(x) = f(x+\frac{h}{2})-f(x-\frac{h}{2})\). The interchange of \(\mathcal {A}\) with \(\partial _x\) and \(\partial _t\) is permitted since the grid spacing h is constant in space and time. We apply \(\mathcal {A}^{-1}\) to Eq.
(81) and use the fact that \(\mathcal {A}^{-1}\varDelta =\varDelta \mathcal {A}^{-1} \) to obtain $$\begin{aligned} \partial _tv+\frac{1}{h}\varDelta \mathcal {A}^{-1}f=0, \end{aligned}$$ so that in the notation of Eq. (79) we have $$\begin{aligned} \tilde{f}=\mathcal {A}^{-1}f. \end{aligned}$$ The algorithm for the Shu–Osher form in 1D hence is similar to the one described above, save for an exchange of averaging and reconstruction:

1. The \(v_i\) are given
2. \(\bar{f}_i:=\mathcal {A}(f)_i\)
3. \(\tilde{f}_{i\pm 1/2}=\mathcal {R}(\bar{f})_{i\pm 1/2}\)
4. \({v}_i^{new}={v}_i -\frac{\tau }{h}(\tilde{f}_{i+ 1/2}- \tilde{f}_{i- 1/2}) \)

The Shu–Osher finite difference scheme: the multidimensional case For explanation it suffices to treat the 2D case. For simplicity of notation, we assume equal (not necessary in principle) and constant grid spacing h in both directions. We then have 1D averages, e.g., along the x-axis, \(\mathcal {A}_x(v)_i(y)=\frac{1}{h}\int _{x_{i-1/2}}^{x_{i+1/2}}v(\xi ,y)\,\mathrm{d}\xi \). \(\mathcal {A}_y\) is defined similarly, and $$\begin{aligned} \mathcal {A}:=\mathcal {A}_x\mathcal {A}_y=\mathcal {A}_y\mathcal {A}_x \end{aligned}$$ returns the average over a square of size \(h\times h\). The meanings of \(\mathcal {R}_x, \mathcal {R}_y\) and \(\mathcal {R}\) (the last one being the 2D reconstruction) should be obvious. We have \(\mathcal {R}=\mathcal {R}_x\mathcal {R}_y=\mathcal {R}_y\mathcal {R}_x\), so that all averaging and reconstruction procedures are ultimately of the 1D type. Applying \(\mathcal {A}\) and \(\mathcal {R}\) to the equation \(\partial _tv+\partial _xf(v)+\partial _yg(v)=0\) yields algorithms looking precisely like those described above (with a g-term added, of course). The essential advantage of the finite difference variant concerns the accuracy of the boundary fluxes and hence of the method. The fluxes are exact to the order to which the basic procedures \(\mathcal {A}, \mathcal {R}\) are exact.
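The four-step finite volume cycle listed above can be sketched compactly in code. For brevity we use a piecewise constant reconstruction and a local Lax–Friedrichs (Rusanov) flux in place of an exact or approximate Riemann solver, so this is our own illustration of the structure of an RSA step, not of ENO accuracy; function names and the periodic boundary choice are ours:

```python
import numpy as np

def fv_step(vbar, f, dfmax, h, tau):
    """One Euler-forward finite volume step for  v_t + f(v)_x = 0  with
    periodic boundaries.  Steps: reconstruct -> flux -> conservative update."""
    # step 2: reconstruction (piecewise constant here: left/right face values)
    vL = vbar                   # value left of each face x_{i+1/2}
    vR = np.roll(vbar, -1)      # value right of each face
    # step 3: numerical flux; Rusanov stands in for the Riemann solver
    alpha = dfmax(vbar)         # bound on |f'(v)|, the fastest signal speed
    fhat = 0.5 * (f(vL) + f(vR)) - 0.5 * alpha * (vR - vL)
    # step 4: conservative update; fhat[i] lives at x_{i+1/2}
    return vbar - tau / h * (fhat - np.roll(fhat, 1))
```

For the advection equation (f(v) = v, signal speed 1) the Rusanov flux reduces to plain upwinding, and with \(\tau = h\) the step shifts the profile by exactly one cell; the update is conservative by construction.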
For example, the flux at the left vertical side of a grid cell should accurately approximate an integral over the y-direction. In a normal finite volume method one only gets a point value with respect to y, typically at \(y_j\), i.e., one applies in effect the midpoint rule when averaging the flux over the interval \((y_{j-1/2},y_{j+1/2})\), which is only second order accurate. So, the accuracy is compromised unless one is ready to use a few quadrature nodes in this interval in order to increase the degree of accuracy. Naturally, that comes at a substantial increase in computing time. For the Shu–Osher form, in contrast, this is built in due to the application of \(\mathcal {A}_y\), such that the accuracy is of the order of the basic averaging and reconstruction procedure. As a concluding remark on the Shu–Osher form let us note that it is not only valid for a constantly spaced grid as discussed above. Merriman (2003) has shown the existence of precisely one class of non-equidistant grids for which the conclusions also hold true. Similar to logarithmic grids, this class allows for stretched grids. In the astrophysical context this has already been applied, for example, when modelling pulsation–convection interactions (e.g., Mundprecht et al. 2015), where enormously different scales prevail at the surface of the star and deeper down. ENO methods for systems of equations The treatment of hyperbolic systems of equations with ENO methodology (for the 1D case still) is in most cases reduced to the case of scalar equations. We start with the hyperbolic system $$\begin{aligned} \partial _t\varvec{v}+ \partial _x\varvec{f}(\varvec{v})=[\partial _t\varvec{v}+ A\partial _x\varvec{v}]=0 \end{aligned}$$ where \(A(\varvec{v})\) is again the Jacobian $$\begin{aligned} A(\varvec{v})=(\partial _{\varvec{v}}\varvec{f}(\varvec{v})).
\end{aligned}$$ Since the \(n\times n\)-system of equations is assumed hyperbolic, there exist eigenvalues \(\alpha _1,\ldots ,\alpha _n\) and a corresponding complete set of eigenvectors \(\varvec{r}^j\) which we assemble in a matrix \(R:=(\varvec{r}^1,\ldots ,\varvec{r}^n)\). In case of the Euler equations the eigenvalues are the velocity u and \(u\pm c\), with c denoting the sound speed. We define \(\varvec{w}\) via \(\varvec{v}=R\varvec{w}\). If we assume for a moment that A is constant and use \(\varvec{w}\) rather than \(\varvec{v}\) in Eq. (85) (multiplying from the left with \(R^{-1}\)) we get $$\begin{aligned} \partial _t\varvec{w}+ D \partial _x \varvec{w}= 0, \end{aligned}$$ where \(D=R^{-1}AR\) is the diagonal matrix containing the eigenvalues \(\alpha _i\). Hence, the equations decouple and read $$\begin{aligned} \partial _t \varvec{w}_i + \alpha _i \partial _x\varvec{w}_i = 0 \quad (i=1,\ldots ,n). \end{aligned}$$ In general, however, A will not be constant. We wish to apply essentially the same procedure in this case and focus on a cell around some grid point \(\xi \) at time t. In order to proceed near that point, we apply one and the same matrix of eigenvectors R, namely that of \(A(\varvec{v}(\xi ,t))\), in the whole vicinity of \(\xi \) to which the ENO stencils extend, and transform Eq. (85) to the form $$\begin{aligned} \partial _t\varvec{w}+\partial _x\bigl (R^{-1}\varvec{f}\bigr )({\varvec{w}}) = 0, \end{aligned}$$ i.e., a conservation law for \(\varvec{w}\) with flux function \(\varvec{f}^*=R^{-1}\varvec{f}\). The ENO procedure for the stencils which appear when starting from \(\xi \) is carried out using that equation, and finally the results are transformed back to \(\varvec{v}\). Of course, in the general case the decoupling of the equations will not be complete as it is in Eq. (87). Setting that point aside for a moment, we see that this grants the possibility to use the proper upwind direction for each "field" \(w_i\).
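A sketch of this local characteristic transformation (our own illustration; scalar_update stands for whatever upwind-biased scalar ENO procedure is applied per field, and A is the Jacobian frozen at the cell of interest):

```python
import numpy as np

def characteristic_update(v_stencil, A, scalar_update):
    """Apply a scalar (per-field, upwind-biased) procedure to a hyperbolic
    system by transforming to the local characteristic variables of the
    frozen Jacobian A, updating each field, and transforming back.

    v_stencil : array of shape (npoints, ncomp), the states on the stencil
    A         : (ncomp, ncomp) Jacobian frozen at the central point
    scalar_update(w, speed) -> updated values for one characteristic field
    """
    alpha, R = np.linalg.eig(A)
    alpha, R = alpha.real, R.real    # hyperbolicity: the eigensystem is real
    Rinv = np.linalg.inv(R)
    w = v_stencil @ Rinv.T           # w_j = R^{-1} v_j, characteristic variables
    for k in range(len(alpha)):      # each field decouples locally
        w[:, k] = scalar_update(w[:, k], alpha[k])
    return w @ R.T                   # transform back: v = R w
```

With the identity as scalar_update the round trip must reproduce the input states exactly, which is a convenient correctness check for the transformation pair.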
Remarks on the numerical flux functions A few remarks concerning numerical flux functions are in order. Consider firstly that at a cell boundary \(x_{i+1/2}\) there exist two values for v, namely \(v^-_{i+1/2}\) and \(v^+_{i+1/2}\) corresponding to reconstructions belonging to the cell left or right of \(x_{i+1/2}\), respectively. One consequently faces a sort of Riemann problem. Frequently, a Roe-type flux is used with a switch to a Lax–Friedrichs flux near a sonic point to avoid the physical inconsistencies we have mentioned in connection with the Roe approximate Riemann solver. A second point concerns what is now called Marquina flux splitting (Marquina 1994; Donat and Marquina 1996). The basic issue is that when evaluating the Jacobian of the flux function at a half-integer grid point which is located near a strong jump no sort of average of the dependent variables may lead to a proper version of the Jacobian. The limits when moving to \(x_{i+1/2}\) from the left or from the right may be markedly different. In the Marquina approach two Jacobians are used at those locations, corresponding to two ENO procedures based on bias to the left and to the right. Those fluxes are then kept which lead from one side to the other. A closer description in conjunction with the ENO approach is provided in Fedkiw et al. (1996). Some other higher order methods Recently, a number of other numerical approaches have started to be used in codes which are relevant to our context. They basically work with control cells (simplices, cubes or more general hexahedral regions, etc.) and consider the interaction of the cells by evaluating the flux across the cell interfaces. Typically, the solutions of these methods are discontinuous over cell interfaces so that a sort of Riemann solver is being applied for the calculation of fluxes. In Colella et al. (2011) and Wongwathanarat et al. 
(2016) fourth order for cell averages, for example, is achieved by using a Taylor expansion keeping the second derivative, which, in itself, needs to be calculated to second order only for sufficient accuracy. In particular, the method is considered as a basis for the mapped grids technique as described in Sect. 5.4. Schaal et al. (2015) and Wang et al. (2015) describe high-order discontinuous Galerkin and high-order spectral difference methodologies with a special view towards application in the case of "irregular" domains (most probably spheres or spherical shells). One very important point, in particular for long-term runs as required for relaxation or dynamo action, is that these methods are capable of preserving not only linear but also angular momentum, which is more than most numerical methods can accomplish. Spectral methods Insofar as the solutions of the equations were represented as linear combinations of basis functions at all, these basis functions have been locally used polynomials in most of the methods treated above (including the version of the spectral difference method as applied in Wang et al. (2015) just mentioned, which is nonetheless termed "spectral" for somewhat different terminological reasons). Spectral methods, which we consider now, instead represent the spatial part of the functions via global basis functions, i.e., functions being \(\ne 0\) in all of the spatial domain (generically). For a thorough presentation of spectral methods see Canuto et al. (2006). The best known case is of course the trigonometric approximation $$\begin{aligned} u_N(x)=\sum _{k=-N}^N \tilde{u}_k\phi _k(x) \end{aligned}$$ $$\begin{aligned} \phi _k(x)=e^{ikx} \quad (x\in [-\pi ,\pi )) \end{aligned}$$ in the complex notation. (In practical calculations, sines and cosines are used instead.) In multidimensional calculations, tensor products of 1D basis functions may be used, akin to \(\phi _k(x)\psi _l(y)\).
For our class of problems, there are two main possible advantages when using spectral methods, namely

1. dealing with sphericity (problem specific),
2. possibly rapid convergence to the solution with increasing N.

With respect to the first item, spectral methods are indeed often used when dealing with spherical shells. The difficulty with many grid based methods is the convergence of longitude circles towards the polar axes (see the discussion in Sect. 5.4). It is then tempting to expand the angular part of functions in spherical harmonics, $$\begin{aligned} Y_{l}^m(\theta ,\psi )=c_{lm}P_{l}^m(\cos \theta )e^{im\psi }=c_{lm}P_{l}^m(\cos \theta )\phi _m(\psi ) \end{aligned}$$ where \(\theta \) is the polar distance, \(\psi \) the longitude, \(\phi _m\) as in Eq. (91) and \(c_{lm}\) an appropriate normalization constant. \(P_{l}^m\) denote the associated Legendre polynomials. In that way, sphericity is automatically dealt with. Before discussing the more general pros and cons, let us have a look at the gist of spectral methods in a simple 1D case using the trigonometric functions just introduced around Eq. (91). In applications, one can make use of the simplicity of differentiation, viz. \(\partial _x\phi _k(x)=ik\phi _k(x)\), the orthogonality of the trigonometric functions, viz. $$\begin{aligned} \langle \phi _k\phi _l\rangle :=\frac{1}{2\pi }\int _{-\pi }^\pi \phi _k(x)\bar{\phi _l}(x)\,\mathrm{d}x=0 \hbox { for} \quad k\ne l,\hbox { and} \end{aligned}$$ the fact that products of the \(\phi _k\)'s are simple: \(\phi _k\phi _l=\phi _{k+l}\). We apply such expansions to Burgers' equation, $$\begin{aligned} \partial _tu + \partial _x\frac{u^2}{2}=0. \end{aligned}$$ The nonlinearity due to \(u^2\) is similar to nonlinearities occurring in the Euler equations, Eqs. (42a)–(42c), whence our interest. Ideally the left hand side should be 0 or, equivalently, orthogonal to all functions \(\phi _k\).
The Galerkin approximation stipulates that it be orthogonal to all \(\phi _k\) (\(-N\le k \le N\)) for some prescribed natural number N. Inserting functions of the form $$\begin{aligned} u(x,t)=\sum _{l=-N}^N\tilde{u}_l(t)\phi _l(x) \end{aligned}$$ into the left hand side of Eq. (94), using the properties listed above and finally requesting the inner products with the functions \(\phi _k\) (\(-N\le k \le N)\) to vanish, we obtain $$\begin{aligned} \dot{\tilde{u}}_k(t) +i\sum _{l=-N}^Nl\tilde{u}_l(t)\tilde{u}_{k-l}(t)=0 \quad (-N\le k\le N). \end{aligned}$$ Note that for some values of l in the second sum \(k-l\) will fall outside of the range of indices considered here. For numerical work, such terms have to be dropped. One advantage of the Fourier–Galerkin formulation lies in the approximation error. Methods based on polynomials of order n can achieve (in a large class of problems and provided the solution is sufficiently smooth) an approximation error of \(O(h^{n+1})\) and guaranteed nothing better (in the generic case). In contrast, for a large class of problems (and solutions being differentiable infinitely often) the approximation error for the Fourier–Galerkin formulation can be shown to be \(O(h^{n+1})\) for each natural number n. In practice, convergence is very rapid once a feature is resolved with a specific number of basis functions. On the other hand, if there is a discontinuity or, in practice, a steep gradient somewhere, convergence will be degraded not only locally, but globally. A disadvantage of the Fourier–Galerkin method is its numerical complexity. The second sum in Eq. (96) requires O(N) multiplications for each value of k, hence overall \(O(N^2)\) operations at each timestep, which is unfavourable for all values of N which are not quite small. In practice, therefore, a variant of the above procedure, the collocation method, is used more often.
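The sum in Eq. (96) is a truncated convolution; the following direct sketch (our own, with the coefficient \(\tilde{u}_k\) stored at array index \(k+N\)) makes the \(O(N^2)\) cost per time step visible in the double loop:

```python
import numpy as np

def galerkin_rhs(u_hat, N):
    """Time derivative of the Fourier-Galerkin coefficients for Burgers'
    equation, Eq. (96):  d/dt u_k = -i * sum_l  l * u_l * u_{k-l},
    dropping indices outside -N..N.  Cost is O(N^2)."""
    rhs = np.zeros(2*N + 1, dtype=complex)
    for k in range(-N, N + 1):
        s = 0.0 + 0.0j
        for l in range(-N, N + 1):
            m = k - l
            if -N <= m <= N:              # drop out-of-range modes
                s += l * u_hat[l + N] * u_hat[m + N]
        rhs[k + N] = -1j * s
    return rhs
```

As a check: for a single mode \(u=e^{ix}\) one has \(u\,\partial _xu = i e^{2ix}\), so only the coefficient with \(k=2\) evolves, with time derivative \(-i\).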
Here, one switches between u, defined spatially on \(2N-1\) equidistant gridpoints, and the Fourier picture (the coefficients \(\tilde{u}_k\)). In the example, \(u^2\) is computed in real space, on the gridpoints (O(N) operations!); the resulting discrete function is transformed to its Fourier image via the fast Fourier transform (complexity \(O(N\log N)\), much better than \(O(N^2)\)). The inverse transition from the Fourier picture to real space is needed as well and has the same computational complexity. Note, however, that on massively parallel machines this issue is not trivial due to completely different ordering principles (spatial ordering in real space, ordering by Fourier index k in Fourier space). Collocation methods need not have the same accuracy as comparable Galerkin methods. In addition, Galerkin methods are generically better at retaining conservation properties than collocation methods, albeit only for special forms of time integration (Canuto et al. 2006). Furthermore, things get more complicated when using, for example, the Legendre polynomials (as in the spherical harmonics) because they lack the beautiful properties of the trigonometric functions, so that fast transforms do not exist (see, e.g., Clune et al. 1999). The use of spherical harmonics is, however, greatly facilitated if the main task is to solve a Poisson equation (as often occurs in the anelastic approximation) because the spherical harmonics are eigenfunctions of the Laplace operator. For a detailed discussion we again refer the reader to Canuto et al. (2006). Direct low Mach number modelling Direct low Mach number modelling refers to the numerical solution of the unaltered Euler (or Navier–Stokes) equations with very slow flows in mind. In Sect. 4 a number of approaches were introduced aimed at modelling deep, low Mach number convection. These approaches modified the basic equations. In the last few years, developments have been initiated to achieve the same goal more directly.
Instead of the approximation being performed in two steps (approximate equations are derived first, and these are then approximated in the numerical sense), the strategy is now to set out from the original Euler or Navier–Stokes equations and to develop numerical methods which in themselves are not subject to the stringent Courant–Friedrichs–Lewy timestep restrictions. In this subsection we present three such approaches. Since in the low Mach case the problem consists in the high velocity of sound enforcing unduly small timesteps for the usual numerical methods, it is the Euler equations which are under consideration here. The method of Kwatra et al. Kwatra et al. (2009) have devised a method which largely keeps the original approach for numerically advancing the Euler equations in time (for example, by an ENO method). Basically, the flux function is decomposed into two parts, one being the advective part and the second one the non-advective part (i.e., the one which contains the pressure p). The advective part is time-advanced in the usual way, yielding, among others, preliminary versions of the conserved variables. An evolution equation for the pressure is used to predict it at the new time level. After various considerations, the ultimate fluxes are computed and applied in the usual way. From the viewpoint of computational complexity the only new task is the requirement to solve a generalized Poisson (or, more precisely, Helmholtz) equation for the pressure. Solving such a so-called strongly elliptic or V-elliptic equation is a standard problem in numerical mathematics and can be accomplished very efficiently, for instance by means of multigrid methods, even in a highly parallelized way. The region of applicability of this approach has been enlarged, among other things, by introducing two species (such as hydrogen and helium in semiconvection) in Happenhofer et al. (2013).
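As an illustration of why this elliptic step is computationally benign, here is a 1D constant-coefficient model problem \(p'' - \kappa p = g\) with homogeneous Dirichlet boundaries, assembled and solved directly. This is our own sketch, not the actual discretization of Kwatra et al.; production codes would use multigrid rather than a dense solve:

```python
import numpy as np

def solve_helmholtz_1d(g, kappa, h):
    """Solve  p'' - kappa * p = g  on the interior points of a uniform grid,
    with homogeneous Dirichlet boundaries, via the standard 3-point stencil.
    (Illustrative direct solve; multigrid would be used in practice.)"""
    n = len(g)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0 / h**2 - kappa
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < n - 1:
            A[i, i + 1] = 1.0 / h**2
    return np.linalg.solve(A, g)
```

A manufactured solution such as \(p=\sin (\pi x)\) on \((0,1)\) (choose \(g=-(\pi ^2+\kappa )\sin (\pi x)\)) recovers the exact answer up to the expected \(O(h^2)\) discretization error.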
We note in passing that it is possible to apply the method with reasonable efficiency also to high Mach number flows, e.g., supersonic ones, without any change. While flows of high speed are of course not the main application of this method, this capability may be helpful for specific problems in which one has to deal with low and high Mach number flows simultaneously (a numerical simulation of solar convection reaching from the surface deep into the interior might be one such example). It should also be noted that, as the sound speed tends to infinity, the generalized Poisson equation approaches the one for an incompressible flow. One additional advantage of the method is that, once the pressure is split off, the eigenvalues of the remaining hyperbolic system are all equal (to the macroscopic velocity). That virtually eliminates the need for transformation to the eigensystem and makes the Riemann problem easier (Happenhofer et al. 2013). A preconditioned Godunov-type method In Sect. 5.2 we have discussed a number of methods which can be considered successors of Godunov's method. Such methods can be turned into efficient algorithms for the low Mach case as worked out by Miczek et al. (2015). Two basic requirements for a low Mach number solver have been formulated by Dellacherie (2010). Taking care of these and of principles of preconditioning (see Turkel 1999, for a review), they analyze the Roe matrix and devise a Roe solver which yields numerical fluxes. These fluxes allow time steps subject only to the CFL restrictions based on material velocities, even for very low Mach number flows. At the same time, the method is applicable to the case of "normal" Mach numbers, too. Solving the Euler equations implicitly Viallet et al. (2011) are developing the MUSIC code (see Sect. 5.6.16). The goal is to model essentially whole stars or stellar envelopes, respectively, in 2D and in 3D, with emphasis on situations requiring low Mach number modelling.
With this in mind, an implicit method for the Euler (and Navier–Stokes) equations in multidimensions has been developed. Very recently, a semi-implicit module has been described (Viallet et al. 2016). It treats advective terms explicitly, whereas sound waves and compressional work are treated implicitly. This module aims for stability under large time steps rather than for accuracy and is intended for eventual use as a preconditioner to the fully implicit basic procedure.

Sphericity and resolution

The "classic" procedure when dealing with sphericity consists in expanding the lateral part of the dependent variables in spherical harmonics, as done, for example, in the Glatzmaier code (Sect. 5.6.11) and its descendants. In this way one problem of the use of polar coordinates (singularities of the equations along the polar axis) is obviated, while the other one (the singularity at the center) is retained. For numerical aspects of this spectral approach see Sect. 5.2.8. Consider in particular that for very large problems (high-order harmonics involved) a difficulty may be rooted in the Legendre functions (Clune et al. 1999), for which no fast algorithm akin to the fast Fourier transform for the trigonometric functions exists. Sticking to polar coordinates has nevertheless some appeal, in particular when only a part of a spherical shell is modelled. Then the polar axis problem has no bearing. The relative similarity to straight rectangular grids allows much of the basic numerics described in Sect. 5.2 to be taken over. Even for a whole spherical shell, polar coordinates are applicable by using the so-called Yin–Yang grids introduced by Kageyama and Sato (2004). This approach makes use of the equatorial belts of two spherical grids with polar axes tilted against each other in such a way that, together, they cover the whole sphere or spherical shell. The equations can then, on each grid, be advanced, for example, with the existing scheme designed for polar coordinates.
A problem is that in the overlapping regions the boundaries of the grid cells will not match. Hence, the conservative property of the code will be lost unless one resorts to a flux correction procedure on the boundaries between the parts of the two grids which are actually in use (Peng et al. 2006; Wongwathanarat et al. 2010). Given that methods on rectangular (cubic) grids are widespread and, ceteris paribus, usually the best ones, it is tempting to stick to them even in the spherical case. Putting one's star in a box has immediate appeal and is provided, for example, in the CO5BOLD code (Sect. 5.6.7). Perhaps the most important drawback of this method is that resolution may not be granted where it is desperately needed (for example, in the upper layers of Cepheids, RR Lyr stars or red giants) unless an elaborate grid refinement algorithm is available. More recently, mapping techniques have found interest. Here, the stellar sphere (say) is the image of a cube under a map. The numerical integrations are performed for the equations transformed to the cube. An essential point to be considered in this case is the so-called free stream property. For a closed surface it is well known that the surface integral over the normal vectors vanishes, \(\int d\mathbf {n}=\mathbf {0}\). The free stream property stipulates that the analogous condition holds true for the discrete method used to calculate fluxes over cell boundaries (Wongwathanarat et al. 2016). In Grimm-Strele et al. (2014), WENO methods for such curved coordinates in physical space have been investigated. It turns out that the free stream property does not hold for the Shu–Osher form of the method, so that one must resort to the original finite volume formulation if higher than second order is requested. In Wang et al. (2015) and Wongwathanarat et al. (2016), issues of just how to properly implement the mapping and its interaction with the basic numerics are discussed thoroughly.
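The identity behind the free stream property is easy to check discretely. The toy example below (box dimensions are arbitrary) sums the outward area-weighted face normals of a closed box; a mapped-grid scheme must reproduce this zero sum with its numerically computed cell-face normals, otherwise a uniform flow generates spurious fluxes.

```python
# For any closed surface the outward area-weighted normals sum to the zero
# vector. Here the "surface" is the six faces of an axis-aligned box with
# arbitrary side lengths; each entry is (unit normal, face area).
lx, ly, lz = 2.0, 3.0, 5.0
faces = [
    (( 1, 0, 0), ly * lz), ((-1, 0, 0), ly * lz),
    (( 0, 1, 0), lx * lz), (( 0, -1, 0), lx * lz),
    (( 0, 0, 1), lx * ly), (( 0, 0, -1), lx * ly),
]
total = [sum(n[k] * area for n, area in faces) for k in range(3)]
print("sum of area-weighted normals:", total)
```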
Section 5.6 clearly testifies to how much activity is presently invested in the development of new codes. To a considerable degree this development is driven by the quest for high resolution per gridpoint. This quest is motivated both by the highly turbulent character of the flows, with many spatial scales involved, and by simple geometrical facts such as the strong variations in scale height when moving from the atmosphere to deeper layers in the Sun and other stars. Let two examples serve to illustrate the importance of resolution per grid point, which is naturally relevant in particular for computationally highly demanding cases where one cannot simply increase the number of grid points. The first example of the importance of adequate resolution is quite recent and concerns an important issue in magnetohydrodynamics. Experience shows that it is easier to obtain dynamo action resulting in a significant part of an ordered magnetic field (of interest for an understanding of the solar cycle) in convection simulations with comparatively high viscosity and magnetic diffusivity. Lowering those values (which for the Sun are unattainably small by numerical means) leads, however, to an increase of disordered fields. Yet, use of still smaller numbers for these values and an extreme number of gridpoints and hence resolution (as requested by the small-scale structures then developing) lets the ordered part of the magnetic field increase again (Hotta et al. 2016), reinforcing prospects for the modelling of solar and stellar magnetic cycles. Just for completeness we note that important effects of resolution occur also in the purely hydrodynamic case. For example, Guerrero et al. (2013) find, somewhat surprisingly, that one of their low-resolution models fits the solar rotation profile more closely than a higher-resolved one. Might not still higher resolution lead one again closer to the observations?
The second example concerns the convection–pulsation interaction in Cepheids (Mundprecht et al. 2013). With low-order radial pulsation in mind, a large part of the star must be modelled. In contrast, the atmosphere and the hydrogen convection zone, which are important observationally and dynamically, are quite thin, leading from the outset to extremely disparate spatial scales. As a consequence, the calculations are in addition very demanding in terms of computational resources, even in 2D. If, in each direction, only half of the grid poi
Lorentz surface In mathematics, a Lorentz surface is a two-dimensional oriented smooth manifold with a conformal equivalence class of Lorentzian metrics. It is the analogue of a Riemann surface in indefinite signature. Further reading • Smyth, Robert W. (March 2002), "Completing the conformal boundary of a simply connected Lorentz surface" (PDF), Proceedings of the American Mathematical Society, 130 (3): 841–847, doi:10.1090/S0002-9939-01-06067-1, retrieved 11 May 2011 • Weinstein, Tilla (July 1996), An introduction to Lorentz surfaces, De Gruyter Expositions in Mathematics, vol. 22, Walter de Gruyter, ISBN 978-3-11-014333-1
Let $P(x)$ be a quadratic polynomial with real coefficients satisfying $x^2 - 2x + 2 \le P(x) \le 2x^2 - 4x + 3$ for all real numbers $x$, and suppose $P(11) = 181$. Find $P(16)$. Rewriting the given quadratics in vertex form, we have \[1 + (x-1)^2 \le P(x) \le 1 + 2(x-1)^2.\]Both of those quadratics have vertex at $(1, 1)$; considering the shape of the graph of a quadratic, we see that $P$ must also have its vertex at $(1,1)$. Therefore, \[P(x) = 1 + k(x-1)^2\]for some constant $k$. Setting $x = 11$, we have $181 = 1 +100k$, so $k = \tfrac{9}{5}$. Then \[P(16) = 1 + \tfrac{9}{5} \cdot 15^2 = \boxed{406}.\]
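A quick numerical check of the solution, sketched in Python: the sandwich inequality is tested on a grid of sample points, and the two given values are confirmed.

```python
# P(x) = 1 + (9/5)(x-1)^2 from the argument above; lower/upper are the two
# bounding quadratics in vertex form.
P = lambda x: 1 + 9 / 5 * (x - 1) ** 2
lower = lambda x: x * x - 2 * x + 2          # 1 + (x-1)^2
upper = lambda x: 2 * x * x - 4 * x + 3      # 1 + 2(x-1)^2
pts = [i / 10 for i in range(-100, 101)]
ok = all(lower(x) <= P(x) <= upper(x) for x in pts)
print(ok, P(11), P(16))   # -> True 181.0 406.0
```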
Church lambda-abstraction

A notation for introducing functions in the languages of mathematical logic, in particular in combinatory logic. More precisely, if in an exact language a term $A$ has been defined, expressing an object of the theory and depending on parameters $x_1,\dots,x_n$ (and possibly also on other parameters), then $$\lambda x_1,\dots,x_nA\label{*}\tag{*}$$ serves in the language as the notation for the function that transforms the values of the arguments $x_1,\dots,x_n$ into the object expressed by the term $A$. The expression \eqref{*} is also called a Church $\lambda$-abstraction. The Church $\lambda$-abstraction, also called an explicit definition of a function, is used mostly when in the language of a theory there is danger of confusing functions as objects of study with the values of functions for certain values of the arguments. Introduced by A. Church [1].

References

[1] A. Church, "The calculi of $\lambda$-conversion", Princeton Univ. Press (1941)
[2] H.B. Curry, "Foundations of mathematical logic", McGraw-Hill (1963)
[a1] H.P. Barendregt, "The lambda-calculus, its syntax and semantics", North-Holland (1978)

This article was adapted from an original article by A.G. Dragalin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
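As an aside, the notation survives almost verbatim in modern programming languages. The hypothetical Python snippet below (names chosen arbitrarily for illustration) mirrors the abstraction $\lambda x, y \, (x^2 + y)$: the term $x^2 + y$, depending on the parameters $x$ and $y$, becomes a function that can then be applied to argument values.

```python
# Corresponds to the Church abstraction  lambda x,y . (x^2 + y)
f = lambda x, y: x ** 2 + y
print(f(3, 4))   # applying the function to 3 and 4 -> 13
```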
Proportion (mathematics)

A proportion is a mathematical statement expressing equality of two ratios.[1][2]

$a:b=c:d$

a and d are called extremes, b and c are called means. A proportion can be written as ${\frac {a}{b}}={\frac {c}{d}}$, where the ratios are expressed as fractions. Such a proportion is known as geometrical proportion,[3] not to be confused with arithmetical proportion and harmonic proportion.

Properties of proportions

• Fundamental rule of proportion. This rule is sometimes called the Means-Extremes Property.[4] If the ratios are expressed as fractions, then the same rule can be phrased in terms of the equality of "cross-products"[2] and is called the Cross-Products Property.[4] If $\ {\frac {a}{b}}={\frac {c}{d}}$, then $\ ad=bc$.
• If $\ {\frac {a}{b}}={\frac {c}{d}}$, then $\ {\frac {b}{a}}={\frac {d}{c}}$.
• If $\ {\frac {a}{b}}={\frac {c}{d}}$, then $\ {\frac {a}{c}}={\frac {b}{d}}$ and $\ {\frac {d}{b}}={\frac {c}{a}}$.
• If $\ {\frac {a}{b}}={\frac {c}{d}}$, then $\ {\dfrac {a+b}{b}}={\dfrac {c+d}{d}}$ and $\ {\dfrac {a-b}{b}}={\dfrac {c-d}{d}}$.
• If $\ {\frac {a}{b}}={\frac {c}{d}}$, then $\ {\dfrac {a+c}{b+d}}={\frac {a}{b}}={\frac {c}{d}}$ and $\ {\dfrac {a-c}{b-d}}={\frac {a}{b}}={\frac {c}{d}}$.

History

The Greek mathematician Eudoxus provided a definition for the meaning of the equality between two ratios. This definition of proportion forms the subject of Euclid's Book V, where we can read:

Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order.

Later, the realization that ratios are numbers made it possible to pass from solving proportions to solving equations, and from transformations of proportions to algebraic transformations.
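The listed properties can be spot-checked numerically. The sketch below (the particular proportion 6 : 4 = 9 : 6 is an arbitrary choice) verifies each of them with exact rational arithmetic.

```python
from fractions import Fraction

# One proportion a : b = c : d with both ratios equal to 3/2.
a, b, c, d = map(Fraction, (6, 4, 9, 6))
assert a / b == c / d                      # the proportion itself
assert a * d == b * c                      # means-extremes (cross products)
assert b / a == d / c                      # inverted ratios
assert a / c == b / d and d / b == c / a   # exchanged means / extremes
assert (a + b) / b == (c + d) / d          # componendo
assert (a - b) / b == (c - d) / d          # dividendo
assert (a + c) / (b + d) == a / b          # adding numerators and denominators
print("all listed properties hold for 6:4 = 9:6")
```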
Related concepts

Arithmetic proportion

An equation of the form $a-b=c-d$ is called arithmetic proportion or difference proportion.[5]

Harmonic proportion

Main article: Golden ratio

If the means of the geometric proportion are equal, and the rightmost extreme is equal to the difference between the leftmost extreme and a mean, then such a proportion is called harmonic:[6] $a:b=b:(a-b)$. In this case the ratio $a:b$ is called the golden ratio.

See also

• Ratio
• Proportionality
• Correlation

References

1. Stapel, Elizabeth. "Proportions: Introduction". www.purplemath.com.
2. Tussy, Alan S.; Gustafson, R. David (January 2012). Intermediate Algebra: Identify Ratios, Rates, and Proportions. ISBN 9781133714378.
3. "Geometrical proportion". oxforddictionaries.com.
4. "Properties of Proportions". www.cliffsnotes.com.
5. "Arithmetic proportion". encyclopediaofmath.org.
6. "Harmonic Proportion in Architecture: Definition & Form". study.com.
\begin{definition}[Definition:Nonassociative Algebra] Let $\left({A_R, \oplus}\right)$ be an algebra over a ring. Then $\left({A_R, \oplus}\right)$ is a nonassociative algebra {{iff}} it is not necessarily the case that $\oplus$ is associative. \end{definition}
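A concrete example, sketched in Python: $\R^3$ with the vector cross product is an algebra over the reals in which the multiplication fails to be associative, as a quick computation shows (the three vectors are an arbitrary choice).

```python
# The cross product on R^3 is bilinear, so (R^3, x) is an algebra over R,
# but (a x b) x c and a x (b x c) differ in general: it is nonassociative.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (1, 0, 0), (1, 1, 0), (0, 0, 1)
left = cross(cross(a, b), c)
right = cross(a, cross(b, c))
print(left, right)   # (0, 0, 0) versus (0, 0, -1): not associative
```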
May 2013, 12(3): 1259-1277. doi: 10.3934/cpaa.2013.12.1259

On vector solutions for coupled nonlinear Schrödinger equations with critical exponents

Seunghyeok Kim, Department of Mathematics, Pohang University of Science and Technology, Pohang, Kyungbuk, South Korea

Received December 2011; Revised June 2012; Published September 2012

In this paper, we study the existence and asymptotic behavior of a solution with positive components (which we call a vector solution) for the coupled system of nonlinear Schrödinger equations with doubly critical exponents \begin{eqnarray*} \Delta u + \lambda_1 u + \mu_1 u^{\frac{N+2}{N-2}} + \beta u^{\frac{2}{N-2}}v^{\frac{N}{N-2}} = 0\\ \Delta v + \lambda_2 v + \mu_2 v^{\frac{N+2}{N-2}} + \beta u^{\frac{N}{N-2}}v^{\frac{2}{N-2}} = 0 \quad \text{in } \Omega\\ u, v > 0 \quad \text{in } \Omega, \quad u, v = 0 \quad \text{on } \partial \Omega \end{eqnarray*} as the coupling coefficient $\beta \in \mathbb{R}$ tends to 0 or $+\infty$, where the domain $\Omega \subset \mathbb{R}^N$ ($N \geq 3$) is smooth and bounded and certain conditions on $\lambda_1, \lambda_2 > 0$ and $\mu_1, \mu_2 > 0$ are imposed. This system naturally arises as a counterpart of the Brezis-Nirenberg problem (Comm. Pure Appl. Math. 36: 437-477, 1983).

Keywords: Coupled nonlinear Schrödinger equations, critical exponent, Nehari manifold.

Mathematics Subject Classification: Primary: 35A15; Secondary: 35B33, 35B40, 35J5.

Citation: Seunghyeok Kim. On vector solutions for coupled nonlinear Schrödinger equations with critical exponents. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1259-1277. doi: 10.3934/cpaa.2013.12.1259

References:

A. Ambrosetti and E. Colorado, Standing waves of some coupled nonlinear Schrödinger equations, J. London Math. Soc., 75 (2007), 67-82. doi: 10.1112/jlms/jdl020.

A.
Ambrosetti and P. H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Func. Anal., 14 (1973), 349-381. doi: 10.1016/0022-1236(73)90051-7.

B. Abdellaoui, V. Felli and I. Peral, Some remarks on systems of elliptic equations doubly critical in the whole $R^n$, Calc. Var., 34 (2009), 97-137. doi: 10.1007/s00526-008-0177-2.

T. Bartsch, N. Dancer and Z.-Q. Wang, A Liouville theorem, a-priori bounds, and bifurcating branches of positive solutions for a nonlinear elliptic system, Calc. Var., 37 (2010), 345-361. doi: 10.1007/s00526-009-0265-y.

T. Bartsch and Z.-Q. Wang, Note on ground states of nonlinear Schrödinger systems, J. Partial Diff. Eqs., 19 (2006), 200-207.

T. Bartsch, Z.-Q. Wang and J. Wei, Bound states for a coupled Schrödinger system, J. Fixed Point Theory Appl., 2 (2007), 353-367. doi: 10.1007/s11784-007-0033-6.

H. Berestycki and P. L. Lions, Nonlinear scalar field equations I. Existence of a ground state, Arch. Rational Mech. Anal., 82 (1983), 313-346. doi: 10.1007/BF00250555.

H. Brezis and T. Kato, Remarks on the Schrödinger operator with singular complex potentials, J. Math. Pures Appl., 58 (1979), 137-151.

H. Brezis and E. Lieb, A relation between pointwise convergence of functions and convergence of functionals, Proc. Amer. Math. Soc., 88 (1983), 486-490. doi: 10.1090/S0002-9939-1983-0699419-3.

H. Brezis and L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math., 36 (1983), 437-477. doi: 10.1002/cpa.3160360405.

J. Byeon, Existence of large positive solutions of some nonlinear elliptic equations on singularly perturbed domains, Comm. Partial Diff. Eq., 22 (1997), 1731-1769. doi: 10.1080/03605309708821317.

J. Byeon and L.
Jeanjean, Standing waves for nonlinear Schrödinger equations with a general nonlinearity, Arch. Rational Mech. Anal., 185 (2007), 185-200. doi: 10.1007/s00205-006-0019-3.

Z. Chen and W. Zou, Positive least energy solutions and phase separation for coupled Schrödinger equations with critical exponent, preprint.

E. Dancer, J. Wei and T. Weth, A priori bounds versus multiple existence of positive solutions for a nonlinear Schrödinger system, Ann. Inst. Henri Poincaré, Analyse Non Linéaire, 27 (2010), 953-969. doi: 10.1016/j.anihpc.2010.01.009.

B. D. Esry, C. H. Greene, Jr. J. P. Burke and J. L. Bohn, Hartree-Fock theory for double condensates, Phys. Rev. Lett., 78 (1997), 3594-3597. doi: 10.1103/PhysRevLett.78.3594.

T.-C. Lin and J. Wei, Ground state of $N$ coupled nonlinear Schrödinger equations in $R^n$, $n \leq 3$, Commun. Math. Phys., 255 (2005), 629-653. doi: 10.1007/s00220-005-1313-x.

Z. Liu and Z.-Q. Wang, Multiple bound states of nonlinear Schrödinger systems, Commun. Math. Phys., 282 (2008), 721-731. doi: 10.1007/s00220-008-0546-x.

L. A. Maia, E. Montefusco and B. Pellacci, Positive solutions for a weakly coupled nonlinear Schrödinger system, J. Diff. Eq., 229 (2006), 743-767. doi: 10.1016/j.jde.2006.07.002.

C. R. Menyuk, Nonlinear pulse propagation in birefringent optical fibers, IEEE J. Quantum Electron., 23 (1987), 174-176. doi: 10.1109/JQE.1987.1073308.

G. Talenti, Best constants in Sobolev inequality, Annali di Mat., 110 (1976), 353-372. doi: 10.1007/BF02418013.

S. Terracini and G. Verzini, Multipulse phases in $k$-mixtures of Bose-Einstein condensates, Arch. Rational Mech. Anal., 194 (2009), 717-741. doi: 10.1007/s00205-008-0172-y.

B. Sirakov, Least energy solitary waves for a system of nonlinear Schrödinger equations in $\mathbb{R}^n$, Commun. Math. Phys., 271 (2007), 199-221.
doi: 10.1007/s00220-006-0179-x.

G. M. Wei and Y. H. Wang, Existence of least energy solutions to coupled elliptic systems with critical nonlinearities, Electron. J. Diff. Eq., 49 (2008), 8 pp.

J. Wei and T. Weth, Radial solutions and phase separation in a system of two coupled Schrödinger equations, Arch. Rational Mech. Anal., 190 (2008), 83-106. doi: 10.1007/s00205-008-0121-9.

M. Willem, "Minimax Theorems," PNLDE 24, Birkhäuser, 1996.
\begin{definition}[Definition:Improper Integral/Unbounded Closed Interval/Unbounded Below] Let $f$ be a real function which is continuous on the unbounded closed interval $\hointl {-\infty} b$. Then the improper integral of $f$ over $\hointl {-\infty} b$ is defined as: :$\ds \int_{\mathop \to -\infty}^b \map f t \rd t := \lim_{\gamma \mathop \to -\infty} \int_\gamma^b \map f t \rd t$ Category:Definitions/Improper Integrals \end{definition}
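A numerical illustration of the limit in this definition, sketched in Python with $f(t) = e^t$ and $b = 0$, so that $\int_\gamma^0 e^t \, \mathrm d t = 1 - e^\gamma \to 1$ as $\gamma \to -\infty$. The quadrature routine and its parameters are ad hoc choices for this sketch.

```python
import math

# Composite trapezoid rule on [a, b]; n subintervals chosen arbitrarily.
def trapezoid(f, a, b, n=10_000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# As gamma -> -infinity the finite integrals approach the improper integral, 1.
for gamma in (-5.0, -10.0, -20.0, -40.0):
    print(gamma, trapezoid(math.exp, gamma, 0.0))
```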
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?

It came as a bit of a shock to me the first time I did a normal distribution Monte Carlo simulation and discovered that the mean of $100$ standard deviations from $100$ samples, all having a sample size of only $n=2$, proved to be much less than the $\sigma$ used for generating the population, averaging only $\sqrt{\frac{2}{\pi}}$ times it. However, this is well known, if seldom remembered, and I sort of did know, or I would not have done a simulation. Here is a simulation.

Here is an example for predicting 95% confidence intervals of $N(0,1)$ using 100 estimates of $\text{SD}$ with $n=2$, and $\text{E}(s_{n=2})=\sqrt{\frac{\pi}{2}}\,\text{SD}$. Each row holds two $N(0,1)$ draws, their sample standard deviation, and the corrected estimate (first five of 100 rows shown):

    N(0,1)    N(0,1)      SD      E(s)
    -1.1171   -0.0627   0.7455   0.9344
     1.7278   -0.8016   1.7886   2.2417
     1.2379    0.4896   0.5291   0.6632
    -1.8354    1.0531   2.0425   2.5599
     0.0344   -0.1892   0.8188   1.0263

The grand totals are:

     mean      E(.)      SD pred   E(s) pred
    -1.9600   -1.9600   -1.6049   -2.0114     (2.5%: theor, est)
     1.9600    1.9600    1.6049    2.0114     (97.5%: theor, est)
                         0.3551   -0.0515     (2.5% err)
                        -0.3551    0.0515     (97.5% err)

Now, I used the ordinary SD estimator to calculate 95% confidence intervals around a mean of zero, and they are off by 0.3551 standard deviation units, while the E(s) estimator is off by only 0.0515 standard deviation units. If one estimates standard deviation, standard error of the mean, or t-statistics, there may be a problem.

My reasoning was as follows: the population mean $\mu$ of two values can be anywhere with respect to $x_1$ and is definitely not located at $\frac{x_1+x_2}{2}$, which latter location makes for the absolute minimum possible sum of squares, so that we are underestimating $\sigma$ substantially. W.l.o.g. let $x_2-x_1=d$; then $\Sigma_{i=1}^{n}(x_i-\bar{x})^2$ is $2 (\frac{d}{2})^2=\frac{d^2}{2}$, the least possible result.
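The spreadsheet experiment can be reproduced programmatically. The sketch below (seed, trial count and $\sigma$ are arbitrary choices) compares the simulated mean sample SD against the known bias factor $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, whose value at $n=2$ is exactly $\sqrt{2/\pi} \approx 0.7979$.

```python
import math
import random
import statistics

# E[s] = c4(n) * sigma for samples of size n from a normal population.
# lgamma is used instead of gamma to avoid overflow for large n.
def c4(n):
    return math.sqrt(2.0 / (n - 1)) * math.exp(
        math.lgamma(n / 2) - math.lgamma((n - 1) / 2))

random.seed(1)
sigma, trials = 1.0, 100_000
for n in (2, 5, 10):
    sds = [statistics.stdev([random.gauss(0.0, sigma) for _ in range(n)])
           for _ in range(trials)]
    # simulated mean SD vs. theoretical c4(n)*sigma
    print(n, statistics.fmean(sds), c4(n) * sigma)
```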
That means that the standard deviation calculated as $\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}$ is a biased estimator of the population standard deviation ($\sigma$). Note that in that formula we decrement the degrees of freedom, dividing by $n-1$, i.e., we apply some correction, but it is only asymptotically correct, and $n-3/2$ would be a better rule of thumb. For our $x_2-x_1=d$ example the $\text{SD}$ formula would give us $SD=\frac{d}{\sqrt 2}\approx 0.707d$, a statistically implausible minimum value since $\mu\neq \bar{x}$, whereas a better expected value ($s$) would be $E(s)=\sqrt{\frac{\pi }{2}}\frac{d}{\sqrt 2}=\frac{\sqrt\pi }{2}d\approx0.886d$. For the usual calculation, for $n<10$, $\text{SD}$s suffer from very significant underestimation, called small number bias, which only approaches a 1% underestimation of $\sigma$ when $n$ is approximately $25$. Since many biological experiments have $n<25$, this is indeed an issue. For $n=1000$, the error is approximately 25 parts in 100,000. In general, small number bias correction implies that the unbiased estimator of the population standard deviation of a normal distribution is

$\text{E}(s)\,=\,\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}>\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}\; .$

From Wikipedia (under Creative Commons licensing) there is a plot of the underestimation of $\sigma$ by SD as a function of $n$.

Since SD is a biased estimator of the population standard deviation, it cannot be the minimum variance unbiased estimator (MVUE) of the population standard deviation, unless we are happy with saying that it is MVUE as $n\rightarrow \infty$, which I, for one, am not. Concerning non-normal distributions and approximately unbiased $SD$, read this.

Now comes the question

Q1 Can it be proven that the $\text{E}(s)$ above is the MVUE for $\sigma$ of a normal distribution of sample size $n$, where $n$ is a positive integer greater than one?
Hint: (But not the answer) see How can I find the standard deviation of the sample standard deviation from a normal distribution?.

Next question,

Q2 Would someone please explain why we are using $\text{SD}$ anyway, as it is clearly biased and misleading? That is, why not use $\text{E}(s)$ for most everything?

As a supplement, it has become clear in the answers below that variance is unbiased, but its square root is biased. I would request that answers address the question of when unbiased standard deviation should be used. As it turns out, a partial answer is that, to avoid bias in the simulation above, the variances could have been averaged rather than the SD values. To see the effect of this: if we square the SD column above and average those values, we get 0.9994, whose square root, 0.9996915, is an estimate of the standard deviation; the error is only 0.0006 for the 2.5% tail and -0.0006 for the 95% tail. Note that this is because variances are additive, so averaging them is a low-error procedure. However, standard deviations are biased, and in those cases where we do not have the luxury of using variances as an intermediary, we still need small number correction. Even if we can use variance as an intermediary, in this case for $n=100$, the small sample correction suggests multiplying the square root of the unbiased variance, 0.9996915, by 1.002528401 to give 1.002219148 as an unbiased estimate of the standard deviation. So, yes, we can delay using small number correction, but should we therefore ignore it entirely? The question here is when we should be using small number correction, as opposed to ignoring its use, and predominantly we have avoided its use. Here is another example: the minimum number of points in space to establish a linear trend that has an error is three.
If we fit these points with ordinary least squares, the result for many such fits is a folded normal residual pattern if there is non-linearity and a half normal one if there is linearity. In the half-normal case our distribution mean requires small number correction. If we try the same trick with 4 or more points, the distribution will not generally be normal related or easy to characterize. Can we use variance to somehow combine those 3-point results? Perhaps, perhaps not. However, it is easier to conceive of problems in terms of distances and vectors.

Tags: normal-distribution, standard-deviation, expected-value, unbiased-estimator, umvue. Asked by Carl.

Comments:

- Comments are not for extended discussion; this conversation has been moved to chat. – whuber♦ Dec 8 '16 at 14:16
- Q1: See the Lehmann–Scheffé theorem. – Scortchi - Reinstate Monica♦ Dec 8 '16 at 15:57
- Nonzero bias of an estimator is not necessarily a drawback. For example, if we wish to have an accurate estimator under square loss, we are willing to induce bias as long as it reduces the variance by a sufficiently large amount. That is why (biased) regularized estimators may perform better than the (unbiased) OLS estimator in a linear regression model, for example. – Richard Hardy Dec 14 '16 at 20:20
- @Carl many terms are used differently in different application areas. If you're posting to a stats group and you use a jargon term like "bias", you would naturally be assumed to be using the specific meaning(s) of the term particular to statistics. If you mean anything else, it's essential to either use a different term or to define clearly what you do mean by the term right at the first use. – Glen_b -Reinstate Monica Dec 15 '16 at 3:35
- "bias" is certainly a term of jargon -- special words or expressions used by a profession or group that are difficult for others to understand seems pretty much what "bias" is.
It's because such terms have precise, specialized definitions in their application areas (including mathematical definitions) that makes them jargon terms. – Glen_b -Reinstate Monica Dec 15 '16 at 3:50

Answer:

For the more restricted question

> Why is a biased standard deviation formula typically used?

the simple answer

> Because the associated variance estimator is unbiased. There is no real mathematical/statistical justification.

may be accurate in many cases. However, this is not necessarily always the case. There are at least two important aspects of these issues that should be understood.

First, the sample variance $s^2$ is not just unbiased for Gaussian random variables. It is unbiased for any distribution with finite variance $\sigma^2$ (as discussed below, in my original answer). The question notes that $s$ is not unbiased for $\sigma$, and suggests an alternative which is unbiased for a Gaussian random variable. However, it is important to note that, unlike the variance, for the standard deviation it is not possible to have a "distribution free" unbiased estimator (*see note below).

Second, as mentioned in the comment by whuber, the fact that $s$ is biased does not impact the standard "t test". First note that, for a Gaussian variable $x$, if we estimate z-scores from a sample $\{x_i\}$ as $$z_i=\frac{x_i-\mu}{\sigma}\approx\frac{x_i-\bar{x}}{s}$$ then these will be biased. However, the t statistic is usually used in the context of the sampling distribution of $\bar{x}$. In this case the z-score would be $$z_{\bar{x}}=\frac{\bar{x}-\mu}{\sigma_{\bar{x}}}\approx\frac{\bar{x}-\mu}{s/\sqrt{n}}=t$$ though we can compute neither $z$ nor $t$, as we do not know $\mu$. Nonetheless, if the $z_{\bar{x}}$ statistic would be normal, then the $t$ statistic will follow a Student-t distribution. This is not a large-$n$ approximation. The only assumption is that the $x$ samples are i.i.d. Gaussian. (Commonly the t-test is applied more broadly for possibly non-Gaussian $x$.
This does rely on large-$n$, which by the central limit theorem ensures that $\bar{x}$ will still be Gaussian.) *Clarification on "distribution-free unbiased estimator" By "distribution free", I mean that the estimator cannot depend on any information about the population $x$ aside from the sample $\{x_1,\ldots,x_n\}$. By "unbiased" I mean that the expected error $\mathbb{E}[\hat{\theta}_n]-\theta$ is uniformly zero, independent of the sample size $n$. (As opposed to an estimator that is merely asymptotically unbiased, a.k.a. "consistent", for which the bias vanishes as $n\to\infty$.) In the comments this was given as a possible example of a "distribution-free unbiased estimator". Abstracting a bit, this estimator is of the form $\hat{\sigma}=f[s,n,\kappa_x]$, where $\kappa_x$ is the excess kurtosis of $x$. This estimator is not "distribution free", as $\kappa_x$ depends on the distribution of $x$. The estimator is said to satisfy $\mathbb{E}[\hat{\sigma}]-\sigma_x=\mathrm{O}[\frac{1}{n}]$, where $\sigma_x^2$ is the variance of $x$. Hence the estimator is consistent, but not (absolutely) "unbiased", as $\mathrm{O}[\frac{1}{n}]$ can be arbitrarily large for small $n$. Note: Below is my original "answer". From here on, the comments are about the standard "sample" mean and variance, which are "distribution-free" unbiased estimators (i.e. the population is not assumed to be Gaussian). This is not a complete answer, but rather a clarification on why the sample variance formula is commonly used. Given a random sample $\{x_1,\ldots,x_n\}$, so long as the variables have a common mean, the estimator $\bar{x}=\frac{1}{n}\sum_ix_i$ will be unbiased, i.e. $$\mathbb{E}[x_i]=\mu \implies \mathbb{E}[\bar{x}]=\mu$$ If the variables also have a common finite variance, and they are uncorrelated, then the estimator $s^2=\frac{1}{n-1}\sum_i(x_i-\bar{x})^2$ will also be unbiased, i.e. 
$$\mathbb{E}[x_ix_j]-\mu^2=\begin{cases}\sigma^2&i=j\\0&i\neq{j}\end{cases} \implies \mathbb{E}[s^2]=\sigma^2$$ Note that the unbiasedness of these estimators depends only on the above assumptions (and the linearity of expectation; the proof is just algebra). The result does not depend on any particular distribution, such as Gaussian. The variables $x_i$ do not have to have a common distribution, and they do not even have to be independent (i.e. the sample does not have to be i.i.d.). The "sample standard deviation" $s$ is not an unbiased estimator, $\mathbb{E}[s]\neq\sigma$, but nonetheless it is commonly used. My guess is that this is simply because it is the square root of the unbiased sample variance. (With no more sophisticated justification.) In the case of an i.i.d. Gaussian sample, the maximum likelihood estimates (MLE) of the parameters are $\hat{\mu}_\mathrm{MLE}=\bar{x}$ and $(\hat{\sigma}^2)_\mathrm{MLE}=\frac{n-1}{n}s^2$, i.e. the variance divides by $n$ rather than $n-1$. Moreover, in the i.i.d. Gaussian case the standard deviation MLE is just the square root of the MLE variance. However these formulas, as well as the one hinted at in your question, depend on the Gaussian i.i.d. assumption. Update: Additional clarification on "biased" vs. "unbiased". Consider an $n$-element sample as above, $X=\{x_1,\ldots,x_n\}$, with sum-square-deviation $$\delta^2_n=\sum_i(x_i-\bar{x})^2$$ Given the assumptions outlined in the first part above, we necessarily have $$\mathbb{E}[\delta^2_n]=(n-1)\sigma^2$$ so the (Gaussian-)MLE estimator is biased $$\widehat{\sigma^2_n}=\tfrac{1}{n}\delta^2_n \implies \mathbb{E}[\widehat{\sigma^2_n}]=\tfrac{n-1}{n}\sigma^2 $$ while the "sample variance" estimator is unbiased $$s^2_n=\tfrac{1}{n-1}\delta^2_n \implies \mathbb{E}[s^2_n]=\sigma^2$$ Now it is true that $\widehat{\sigma^2_n}$ becomes less biased as the sample size $n$ increases. However $s^2_n$ has zero bias no matter the sample size (so long as $n>1$).
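(Aside, in Python rather than Matlab: for a Gaussian sample the bias of $s$ has a closed form, $\mathbb{E}[s]=c_4(n)\,\sigma$ with $c_4(n)=\sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$. The name $c_4$ is the usual quality-control convention, not notation from this thread; a small standard-library sketch:)

```python
import math

def c4(n):
    """E[s]/sigma for an i.i.d. Gaussian sample of size n:
    sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

for n in (2, 3, 5, 10, 30, 100):
    print(n, round(c4(n), 4))
# c4(2) equals sqrt(2/pi), about 0.798, and c4(n) -> 1 as n grows,
# so dividing s by c4(n) de-biases it, but the correction only
# matters for small n.
```

Dividing $s$ by $c_4(n)$ gives the Gaussian-unbiased estimator discussed in the question; for $n\gtrsim 30$ the correction is below 1%.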
For both estimators, the variance of their sampling distribution will be non-zero, and depend on $n$. As an example, the below Matlab code considers an experiment with $n=2$ samples from a standard-normal population $z$. To estimate the sampling distributions for $\bar{x},\widehat{\sigma^2},s^2$, the experiment is repeated $N=10^6$ times. (You can cut & paste the code here to try it out yourself.)

% n=sample size, N=number of samples
n=2; N=1e6;
% generate standard-normal random #'s
z=randn(n,N); % i.e. mu=0, sigma=1
% compute sample stats (Gaussian MLE)
zbar=sum(z)/n; zvar_mle=sum((z-zbar).^2)/n;
% compute ensemble stats (sampling-pdf means)
zbar_avg=sum(zbar)/N, zvar_mle_avg=sum(zvar_mle)/N
% compute unbiased variance
zvar_avg=zvar_mle_avg*n/(n-1)

Typical output is like

zbar_avg = 1.4442e-04
zvar_mle_avg = 0.49988
zvar_avg = 0.99977

confirming that \begin{align} \mathbb{E}[\bar{z}]&\approx\overline{(\bar{z})}\approx\mu=0 \\ \mathbb{E}[s^2]&\approx\overline{(s^2)}\approx\sigma^2=1 \\ \mathbb{E}[\widehat{\sigma^2}]&\approx\overline{(\widehat{\sigma^2})}\approx\frac{n-1}{n}\sigma^2=\frac{1}{2} \end{align} Update 2: Note on fundamentally "algebraic" nature of unbiased-ness. In the above numerical demonstration, the code approximates the true expectation $\mathbb{E}[\,]$ using an ensemble average with $N=10^6$ replications of the experiment (i.e. each is a sample of size $n=2$). Even with this large number, the typical results quoted above are far from exact. To numerically demonstrate that the estimators are really unbiased, we can use a simple trick to approximate the $N\to\infty$ case: simply add the following line to the code

% optional: "whiten" data (ensure exact ensemble stats)
[U,S,V]=svd(z-mean(z,2),'econ'); z=sqrt(N)*U*V';

(placing after "generate standard-normal random #'s" and before "compute sample stats") With this simple change, even running the code with $N=10$ gives results like
GeoMatt22
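For readers without Matlab, the same $n=2$ experiment can be re-run in pure Python (standard library only; the variable names mirror the Matlab above but are otherwise my own, and with $10^5$ rather than $10^6$ replications expect roughly percent-level scatter):

```python
import random

random.seed(1)
n, N = 2, 100_000                          # sample size, number of replications
zbar_sum = mle_sum = unb_sum = 0.0
for _ in range(N):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    zbar = sum(z) / n
    ss = sum((x - zbar) ** 2 for x in z)   # sum of squared deviations
    zbar_sum += zbar
    mle_sum += ss / n                      # Gaussian-MLE variance (divide by n)
    unb_sum += ss / (n - 1)                # sample variance (divide by n-1)

print(zbar_sum / N)   # near mu = 0
print(mle_sum / N)    # near (n-1)/n * sigma^2 = 0.5
print(unb_sum / N)    # near sigma^2 = 1
```

As in the Matlab run, the divide-by-$n$ estimator averages to about half the true variance at $n=2$, while the divide-by-$(n-1)$ estimator averages to about 1.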
I squared the SD-values in each line then averaged them and they come out unbiased (0.9994), whereas the SD-values themselves do not. Meaning that you and GeoMatt22 are correct, and I am wrong. $\endgroup$ – Carl Dec 8 '16 at 7:27 $\begingroup$ @Carl: It's generally true that transforming an unbiased estimator of a parameter doesn't give an unbiased estimate of the transformed parameter except when the transformation is affine, following from the linearity of expectation. So on what scale is unbiasedness important to you? $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 8 '16 at 8:29 $\begingroup$ Carl: I apologize if you feel my answer was orthogonal to your question. It was intended to provide a plausible explanation of Q:"why a biased standard deviation formula is typically used?" A:"simply because the associated variance estimator is unbiased, vs. any real mathematical/statistical justification". As for your comment, typically "unbiased" describes an estimator whose expected value is correct independent of sample size. If it is unbiased only in the limit of infinite sample size, typically it would be called "consistent". $\endgroup$ – GeoMatt22 Dec 9 '16 at 6:38 $\begingroup$ (+1) Nice answer. Small caveat: That Wikipedia passage on consistency quoted in this answer is a bit of a mess and the parenthetical statement made related to it is potentially misleading. "Consistency" and "asymptotic unbiasedness" are in some sense orthogonal properties of an estimator. For a little more on that point, see the comment thread to this answer. $\endgroup$ – cardinal Dec 10 '16 at 21:45 $\begingroup$ +1 but I think @Scortchi makes a really important point in his answer that is not mentioned in yours: namely, that even for Gaussian population, the unbiased estimate of $\sigma$ has higher expected error than the standard biased estimate of $\sigma$ (due to the high variance of the former). 
This is a strong argument in favour of not using an unbiased estimator even if one knows that the underlying distribution is Gaussian. $\endgroup$ – amoeba says Reinstate Monica Dec 13 '16 at 14:52 The sample standard deviation $S=\sqrt{\frac{\sum (X - \bar{X})^2}{n-1}}$ is complete and sufficient for $\sigma$ so the set of unbiased estimators of $\sigma^k$ given by $$ \frac{(n-1)^\frac{k}{2}}{2^\frac{k}{2}} \cdot \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n+k-1}{2}\right)} \cdot S^k = \frac{S^k}{c_k} $$ (See Why is sample standard deviation a biased estimator of $\sigma$?) are, by the Lehmann–Scheffé theorem, UMVUE. Consistent, though biased, estimators of $\sigma^k$ can also be formed as $$ \tilde{\sigma}^k_j= \left(\frac{S^j}{c_j}\right)^\frac{k}{j} $$ (the unbiased estimators being specified when $j=k$). The bias of each is given by $$\operatorname{E}\tilde{\sigma}^k_j - \sigma^k =\left( \frac{c_k}{c_j^\frac{k}{j}} -1 \right) \sigma^k$$ & its variance by $$\operatorname{Var}\tilde{\sigma}^{k}_j=\operatorname{E}\tilde{\sigma}^{2k}_j - \left(\operatorname{E}\tilde{\sigma}^k_j\right)^2=\frac{c_{2k}-c_k^2}{c_j^\frac{2k}{j}} \sigma^{2k}$$ For the two estimators of $\sigma$ you've considered, $\tilde{\sigma}^1_1=\frac{S}{c_1}$ & $\tilde{\sigma}^1_2=S$, the lack of bias of $\tilde{\sigma}_1$ is more than offset by its larger variance when compared to $\tilde{\sigma}_2$: $$\begin{align} \operatorname{E}\tilde{\sigma}_1 - \sigma &= 0 \\ \operatorname{E}\tilde{\sigma}_2 - \sigma &=(c_1 -1) \sigma \\ \operatorname{Var}\tilde{\sigma}_1 =\operatorname{E}\tilde{\sigma}^{2}_1 - \left(\operatorname{E}\tilde{\sigma}^1_1\right)^2 &=\frac{c_{2}-c_1^2}{c_1^2} \sigma^{2} = \left(\frac{1}{c_1^2}-1\right) \sigma^2 \\ \operatorname{Var}\tilde{\sigma}_2 =\operatorname{E}\tilde{\sigma}^{2}_2 - \left(\operatorname{E}\tilde{\sigma}_2\right)^2 &=\frac{c_{2}-c_1^2}{c_2} \sigma^{2}=(1-c_1^2)\sigma^2 \end{align}$$ (Note that $c_2=1$, as $S^2$ is already an unbiased estimator of
$\sigma^2$.) The mean square error of $a_k S^k$ as an estimator of $\sigma^k$ is given by $$ \begin{align} (\operatorname{E} a_k S^k - \sigma^k)^2 + \operatorname{E} (a_k S^k)^2 - (\operatorname{E} a_k S^k)^2 &= [ (a_k c_k -1)^2 + a_k^2 c_{2k} - a_k^2 c_k^2 ] \sigma^{2k}\\ &= ( a_k^2 c_{2k} -2 a_k c_k + 1 ) \sigma^{2k} \end{align} $$ & therefore minimized when $$a_k = \frac{c_k}{c_{2k}},$$ allowing the definition of another set of estimators of potential interest: $$ \hat{\sigma}^k_j= \left(\frac{c_j S^j}{c_{2j}}\right)^\frac{k}{j} $$ Curiously, $\hat{\sigma}^1_1=c_1S$, so the same constant that divides $S$ to remove bias multiplies $S$ to reduce MSE. Anyway, these are the uniformly minimum variance location-invariant & scale-equivariant estimators of $\sigma^k$ (you don't want your estimate to change at all if you measure in kelvins rather than degrees Celsius, & you want it to change by a factor of $\left(\frac{9}{5}\right)^k$ if you measure in Fahrenheit). None of the above has any bearing on the construction of hypothesis tests or confidence intervals (see e.g. Why does this excerpt say that unbiased estimation of standard deviation usually isn't relevant?). And $\tilde{\sigma}^k_j$ & $\hat{\sigma}^k_j$ exhaust neither estimators nor parameter scales of potential interest—consider the maximum-likelihood estimator† $\sqrt{\frac{n-1}{n}}S$, or the median-unbiased estimator $\sqrt{\frac{n-1}{\chi^2_{n-1}(0.5)}}S$; or the geometric standard deviation of a lognormal distribution $\mathrm{e}^\sigma$. It may be worth showing a few more-or-less popular estimates made from a small sample ($n=2$) together with the upper & lower bounds, $\sqrt{\frac{(n-1)s^2}{\chi^2_{n-1}(\alpha)}}$ & $\sqrt{\frac{(n-1)s^2}{\chi^2_{n-1}(1-\alpha)}}$, of the equal-tailed confidence interval having coverage $1-\alpha$: The span between the most divergent estimates is negligible in comparison with the width of any confidence interval having decent coverage.
(The 95% C.I., for instance, is $(0.45s,31.9s)$.) There's no sense in being finicky about the properties of a point estimator unless you're prepared to be fairly explicit about what you want to use it for—most explicitly you can define a custom loss function for a particular application. A reason you might prefer an exactly (or almost) unbiased estimator is that you're going to use it in subsequent calculations during which you don't want bias to accumulate: your illustration of averaging biased estimates of standard deviation is a simple example of such (a more complex example might be using them as a response in a linear regression). In principle an all-encompassing model should obviate the need for unbiased estimates as an intermediate step, but might be considerably more tricky to specify & fit. † The value of $\sigma$ that makes the observed data most probable has an appeal as an estimate independent of consideration of its sampling distribution. Scortchi - Reinstate Monica♦ Q2: Would someone please explain to me why we are using SD anyway as it is clearly biased and misleading? This came up as an aside in comments, but I think it bears repeating because it's the crux of the answer: The sample variance formula is unbiased, and variances are additive. So if you expect to do any (affine) transformations, this is a serious statistical reason why you should insist on a "nice" variance estimator over a "nice" SD estimator. In an ideal world, they'd be equivalent. But that's not true in this universe. You have to choose one, so you might as well choose the one that lets you combine information down the road. Comparing two sample means? The variance of their difference is the sum of their variances. Doing a linear contrast with several terms? Get its variance by taking a linear combination of their variances. Looking at regression line fits? Get their variance using the variance-covariance matrix of your estimated beta coefficients.
Using F-tests, or t-tests, or t-based confidence intervals? The F-test calls for variances directly; and the t-test is exactly equivalent to the square root of an F-test. In each of these common scenarios, if you start with unbiased variances, you'll remain unbiased all the way (unless your final step converts to SDs for reporting). Meanwhile, if you'd started with unbiased SDs, neither your intermediate steps nor the final outcome would be unbiased anyway. civilstat $\begingroup$ Variance is not a distance measurement, and standard deviation is. Yes, vector distances add by squares, but the primary measurement is distance. The question was what would you use corrected distance for, and not why should we ignore distance as if it did not exist. $\endgroup$ – Carl Dec 11 '16 at 3:39 $\begingroup$ Well, I guess I'm arguing that "the primary measurement is distance" isn't necessarily true. 1) Do you have a method to work with unbiased variances; combine them; take the final resulting variance; and rescale its sqrt to get an unbiased SD? Great, then do that. If not... 2) What are you going to do with a SD from a tiny sample? Report it on its own? Better to just plot the datapoints directly, not summarize their spread. And how will people interpret it, other than as an input to SEs and thus CIs? It's meaningful as an input to CIs, but then I'd prefer the t-based CI (with usual SD). $\endgroup$ – civilstat Dec 11 '16 at 22:35 $\begingroup$ I do not think that many clinical studies or commercial software programs with $n<25$ would use standard error of the mean calculated from small sample corrected standard deviation, leading to a false impression of how small those errors are. I think even that one issue, even if that is the only one, should not be ignored. $\endgroup$ – Carl Dec 11 '16 at 23:00 $\begingroup$ "so you might as well choose the one that lets you combine information down the road" and "the primary measurement is distance" isn't necessarily true.
Farmer Jo's house is 640 acres down the road? One uses the appropriate measurement correctly for each and every situation, or one has a higher tolerance for false witness than I. My only question here is when to use what, and the answer to it is not "never." $\endgroup$ – Carl Dec 12 '16 at 3:11 This post is in outline form. (1) Taking a square root is not an affine transformation (Credit @Scortchi.) (2) ${\rm var}(s) = {\rm E} (s^2) - {\rm E}(s)^2$, thus ${\rm E}(s) = \sqrt{{\rm E}(s^2) -{\rm var}(s)}\neq{\sqrt{\rm var(s)}}$ (3) $ {\rm var}(s)=\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}$, whereas $\text{E}(s)\,=\,\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}$$\neq\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}={\sqrt{\rm var(s)}}$ (4) Thus, we cannot substitute ${\sqrt{\rm var(s)}}$ for $\text{E}(s)$, for $n$ small, as square root is not affine. (5) ${\rm var}(s)$ and $\text{E}(s)$ are unbiased (Credit @GeoMatt22 and @Macro, respectively). (6) For non-normal distributions $\bar{x}$ is sometimes (a) undefined (e.g., Cauchy, Pareto with small $\alpha$) and (b) not UMVUE (e.g., Cauchy ($\rightarrow$ Student's-$t$ with $df=1$), Pareto, Uniform, beta). Even more commonly, variance may be undefined, e.g. Student's-$t$ with $1\leq df\leq2$. Then one can state that $\text{var}(s)$ is not UMVUE for the general case distribution. Thus, there is then no special onus to introducing an approximate small number correction for standard deviation, which likely has similar limitations to $\sqrt{\text{var}(s)}$, but is additionally less biased, $\hat\sigma = \sqrt{ \frac{1}{n - 1.5 - \tfrac14 \gamma_2} \sum_{i=1}^n (x_i - \bar{x})^2 }$ , where $\gamma_2$ is excess kurtosis. In a similar vein, when examining a normal squared distribution (a Chi-squared with $df=1$ transform), we might be tempted to take its square root and use the resulting normal distribution properties. 
That is, in general, the normal distribution can result from transformations of other distributions and it may be expedient to examine the properties of that normal distribution such that the limitation of small number correction to the normal case is not so severe a restriction as one might at first assume. For the normal distribution case: A1: By the Lehmann–Scheffé theorem ${\rm var}(s)$ and $\text{E}(s)$ are UMVUE (Credit @Scortchi). A2: (Edited to adjust for comments below.) For $n\leq 25$, we should use $\text{E}(s)$ for standard deviation, standard error, confidence intervals of the mean and of the distribution, and optionally for z-statistics. For $t$-testing we would not use the unbiased estimator as $\frac{ \bar X - \mu} {\sqrt{\text{var}(s)/n}}$ itself is Student's-$t$ distributed with $n-1$ degrees of freedom (Credit @whuber and @GeoMatt22). For z-statistics, $\sigma$ is usually approximated using $n$ large for which $\text{E}(s)-\sqrt{\text{var}(s)}$ is small, but for which $\text{E}(s)$ appears to be more mathematically appropriate (Credit @whuber and @GeoMatt22). $\begingroup$ A2 is incorrect: following that prescription would produce demonstrably invalid tests. As I commented to the question, perhaps too subtly: consult any theoretical account of a classical test, such as the t-test, to see why a bias correction is irrelevant. $\endgroup$ – whuber♦ Dec 9 '16 at 21:24 $\begingroup$ There's a strong meta-argument showing why bias correction for statistical tests is a red herring: if it were incorrect not to include a bias-correction factor, then that factor would already be included in standard tables of the Student t distribution, F distribution, etc. To put it another way: if I'm wrong about this, then everybody has been wrong about statistical testing for the last century. $\endgroup$ – whuber♦ Dec 9 '16 at 21:30 $\begingroup$ Am I the only one who's baffled by the notation here?
Why use $\operatorname{E}(s)$ to stand for $\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}$, the unbiased estimate of standard deviation? What's $s$? $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 9 '16 at 21:58 $\begingroup$ @Scortchi the notation apparently came about as an attempt to inherit that used in the linked post. There $s$ is the sample variance, and $E(s)$ is the expected value of $s$ for a Gaussian sample. In this question, "$E(s)$" was co-opted to be a new estimator derived from the original post (i.e. something like $\hat{\sigma}\equiv s/\alpha$ where $\alpha\equiv\mathbb{E}[s]/\sigma$). If we arrive at a satisfactory answer for this question, probably a cleanup of the question & answer notation would be warranted :) $\endgroup$ – GeoMatt22 Dec 9 '16 at 22:20 $\begingroup$ The z-test assumes the denominator is an accurate estimate of $\sigma$. It's known to be an approximation that is only asymptotically correct. If you want to correct it, don't use the bias of the SD estimator--just use a t-test. That's what the t-test was invented for. $\endgroup$ – whuber♦ Dec 9 '16 at 22:58 I want to add the Bayesian answer to this discussion. Just because your assumption is that the data is generated according to some normal with unknown mean and variance, that doesn't mean that you should summarize your data using a mean and a variance. This whole problem can be avoided if you draw the model, which will have a posterior predictive that is a three parameter noncentral scaled student's T distribution. The three parameters are the total of the samples, total of the squared samples, and the number of samples. (Or any bijective map of these.) Incidentally, I like civilstat's answer because it highlights our desire to combine information. The three sufficient statistics above are even better than the two given in the question (or by civilstat's answer). 
Two sets of these statistics can easily be combined, and they give the best posterior predictive given the assumption of normality. Neil G $\begingroup$ How then does one calculate an unbiased standard error of the mean from those three sufficient statistics? $\endgroup$ – Carl Dec 14 '16 at 17:44 $\begingroup$ @carl You can easily calculate it since you have the number of samples $n$: you can multiply the uncorrected sample variance by $\frac{n}{n-1}$. However, you really don't want to do that. That's tantamount to turning your three parameters into a best fit normal distribution to your limited data. It's a lot better to use your three parameters to fit the true posterior predictive: the noncentral scaled T distribution. All questions you might have (percentiles, etc.) are better answered by this T distribution. In fact, T tests are just common sense questions asked of this distribution. $\endgroup$ – Neil G Dec 15 '16 at 0:30 $\begingroup$ How can one then generate a true normal distribution RV from Monte Carlo simulation(s) and recover that true distribution using only Student's-$t$ distribution parameters? Am I missing something here? $\endgroup$ – Carl Dec 15 '16 at 2:57 $\begingroup$ @Carl The sufficient statistics I described were the mean, second moment, and number of samples. Your MLE of the original normal are the mean and variance (which is equal to the second moment minus the squared mean). The number of samples is useful when you want to make predictions about future observations (for which you need the posterior predictive distribution). $\endgroup$ – Neil G Dec 15 '16 at 3:24 $\begingroup$ Though a Bayesian perspective is a welcome addition, I find this a little hard to follow: I'd have expected a discussion of constructing a point estimate from the posterior density of $\sigma$. It seems you're rather questioning the need for a point estimate: this is something well worth bringing up, but not uniquely Bayesian.
(BTW you also need to explain the priors.) $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 16 '16 at 14:22
Precision Measurement of the Boron to Carbon Flux Ratio

Carbon nuclei in cosmic rays are thought to be mainly produced and accelerated in astrophysical sources, while boron nuclei are entirely produced by the collision of heavier nuclei, such as carbon and oxygen, with the interstellar matter. Therefore, the boron to carbon flux ratio (B/C) directly measures the average amount of interstellar material traversed by cosmic rays. In cosmic ray propagation models, where cosmic rays are described as a relativistic gas scattering on a magnetized plasma, the B/C ratio is used to constrain the spatial diffusion coefficient $D$, as the B/C ratio is proportional to $1/D$ at high rigidities $R$. The diffusion coefficient dependence on rigidity is $D \propto R^{-\delta}$, where $\delta$ is predicted to be $\delta = -1/3$ by the Kolmogorov theory of interstellar turbulence [A. N. Kolmogorov, Dokl. Akad. Nauk SSSR 30, 301 (1941)], or $\delta = -1/2$ by the Kraichnan theory [R. H. Kraichnan, Phys. Fluids 8, 1385 (1965)]. The measured B/C spectral index $\Delta$, obtained from a fit of $({\rm B}/{\rm C})=kR^{\Delta}$ at high rigidities (with $k$ a normalization constant), approaches the diffusion spectral index $\delta$ asymptotically ($\Delta = \delta$). AMS precisely measured the B/C ratio in cosmic rays in the rigidity range from 1.9 GV to 2.6 TV. This measurement is based on 2.3 million boron nuclei and 8.3 million carbon nuclei collected by AMS during the first 5 years of operation onboard the ISS. In this measurement the total error is $\sim$3% at 100 GV. Figure 1 shows the measured B/C ratio.

Figure 1. The AMS boron to carbon ratio (B/C) as a function of rigidity in the interval from 1.9 GV to 2.6 TV based on 2.3 million boron and 8.3 million carbon nuclei. The dashed line shows the single power law fit starting from 65 GV with index $\Delta=-0.333\pm0.014(\rm fit)\pm0.005(\rm syst)$.
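Since $({\rm B}/{\rm C})=kR^{\Delta}$ is a straight line in log-log space, the idea behind such a spectral-index fit can be sketched with an ordinary least-squares line fit. The numbers below are synthetic, chosen only for illustration (they are not AMS data, and the actual analysis involves the full detector response and error treatment):

```python
import math, random

random.seed(7)
k_true, delta_true = 2.0, -1.0 / 3.0        # illustrative values only

# Fake "measurements" of a ratio following k * R^delta with 3% scatter,
# on 25 rigidity points above 65 GV.
R = [65.0 * 1.3 ** i for i in range(25)]
ratio = [k_true * r ** delta_true * (1.0 + random.gauss(0.0, 0.03)) for r in R]

# log(ratio) = log(k) + delta * log(R): fit a straight line by least squares.
lx = [math.log(r) for r in R]
ly = [math.log(y) for y in ratio]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
delta_hat = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
print(delta_hat)   # close to the input index of -1/3
```

The recovered slope plays the role of the spectral index $\Delta$; with percent-level scatter over a decade and a half in rigidity, it lands within a few hundredths of the input value.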
As seen in Figure 1, the B/C ratio increases with rigidity, reaching a maximum at 4 GV, then decreases. The B/C ratio does not show any significant structures. Above 65 GV the B/C ratio measured by AMS is well fit with a single power law $kR^{\Delta}$ with $\chi^2/\mathrm{d.o.f.} = 14/24$ and a spectral index $\Delta=-0.333\pm0.014(\rm fit)\pm0.005(\rm syst)$, in good agreement with the Kolmogorov theory of turbulence which predicts $\Delta = -1/3$ asymptotically. Figure 2 shows the AMS B/C ratio together with recent results. Also shown (blue dashed line) is the prediction for the B/C ratio from an important theoretical model [R. Cowsik and T. Madziwa-Nussinov, Astrophys. J. 827, 119 (2016)], which explains the AMS positron fraction [L. Accardo et al., Phys. Rev. Lett. 113, 121101 (2014)] and antiproton results [M. Aguilar et al., Phys. Rev. Lett. 117, 091103 (2016)] by secondary production in cosmic ray propagation. This model is ruled out by the present measurement.

Figure 2. The AMS B/C ratio together with recent results and the prediction for the B/C ratio from the theoretical model [R. Cowsik and T. Madziwa-Nussinov, Astrophys. J. 827, 119 (2016)] (blue dashed line), which explains the AMS positron fraction [L. Accardo et al., Phys. Rev. Lett. 113, 121101 (2014)] and antiproton results [M. Aguilar et al., Phys. Rev. Lett. 117, 091103 (2016)] by secondary production in cosmic ray propagation. The model shown is ruled out by this measurement.

In conclusion, the B/C ratio does not show any significant structures. Above 65 GV the B/C ratio can be described by a single power law with index $\Delta =-0.333\pm0.014(\rm fit)\pm0.005(\rm syst)$, in good agreement with the Kolmogorov theory of turbulence which predicts $\Delta = -1/3$ asymptotically.

Editors' Suggestion, Featured in Physics: Precision Measurement of the Boron to Carbon Flux Ratio in Cosmic Rays from 1.9 GV to 2.6 TV with the Alpha Magnetic Spectrometer on the International Space Station
The Myers-Steenrod theorem for Finsler manifolds of low regularity
Authors: Vladimir S. Matveev and Marc Troyanov
Journal: Proc. Amer. Math. Soc. 145 (2017), 2699-2712
MSC (2010): Primary 53B40, 53C60, 35B65
DOI: https://doi.org/10.1090/proc/13407
Published electronically: February 10, 2017

Abstract: We prove a version of Myers-Steenrod's theorem for Finsler manifolds under the minimal regularity hypothesis. In particular we show that an isometry between $C^{k,\alpha }$-smooth (or partially smooth) Finsler metrics, with $k+\alpha >0$, $k\in \mathbb {N} \cup \{0\}$, and $0 \leq \alpha \leq 1$, is necessarily a diffeomorphism of class $C^{k+1,\alpha }$. A generalization of this result to the case of Finsler 1-quasiconformal mapping is given. The proofs are based on the reduction of the Finslerian problems to Riemannian ones with the help of the Binet-Legendre metric.
2, 361–364. MR 1464908, DOI https://doi.org/10.1007/s002080050080 Michael Taylor, Existence and regularity of isometries, Trans. Amer. Math. Soc. 358 (2006), no. 6, 2415–2423 (electronic). MR 2204038, DOI https://doi.org/10.1090/S0002-9947-06-04090-6 Retrieve articles in Proceedings of the American Mathematical Society with MSC (2010): 53B40, 53C60, 35B65 Retrieve articles in all journals with MSC (2010): 53B40, 53C60, 35B65 Vladimir S. Matveev Affiliation: Institut für Mathematik, Friedrich-Schiller Universität Jena, 07737 Jena, Germany MR Author ID: 609466 Email: [email protected] Marc Troyanov Affiliation: Section de Mathématiques, École Polytechnique Féderale de Lausanne, station 8, 1015 Lausanne, Switzerland Email: [email protected] Keywords: Finsler metric, isometries, Myers-Steenrod theorem, Binet-Legendre metric Received by editor(s): May 12, 2016 Received by editor(s) in revised form: July 27, 2016 Additional Notes: The authors thank the Friedrich-Schiller-Universität Jena, EPFL and the Swiss National Science Foundation for their support. Communicated by: Jeremy Tyson Article copyright: © Copyright 2017 American Mathematical Society
CommonCrawl
Subcellular storage and release mode of the novel 18F-labeled sympathetic nerve PET tracer LMI1195 Xinyu Chen1,2, Rudolf A. Werner1,2,3, Constantin Lapa1, Naoko Nose1,4, Mitsuru Hirano1,4, Simon Robinson5 & Takahiro Higuchi1,2,4 18F-N-[3-bromo-4-(3-fluoro-propoxy)-benzyl]-guanidine (18F-LMI1195) is a new class of PET tracer designed for sympathetic nervous imaging of the heart. The favorable image quality with high and specific neural uptake has been previously demonstrated in animals and humans, but the intracellular behavior is not yet fully understood. The aim of the present study is to verify whether the tracer is taken up into storage vesicles and released along with vesicle turnover. Both vesicle-rich (PC12) and vesicle-poor (SK-N-SH) norepinephrine-expressing cell lines were used for in vitro tracer uptake studies. After 2 h of 18F-LMI1195 preloading into both cell lines, effects of stimulants for storage vesicle turnover (high concentration KCl (100 mM) or reserpine treatment) were measured at 10, 20, and 30 min. 131I-meta-iodobenzylguanidine (131I-MIBG) served as a reference. Both high concentration KCl and reserpine enhanced 18F-LMI1195 washout from PC12 cells, while tracer retention remained stable in the SK-N-SH cells. After 30 min of treatment, the 18F-LMI1195 releasing index (percentage of tracer released from cells) from vesicle-rich PC12 cells reached significant differences compared to untreated cells. In contrast, such an effect could not be observed using the vesicle-poor SK-N-SH cell line. Similar tracer kinetics after KCl or reserpine treatment were also observed using 131I-MIBG. In the case of KCl exposure, Ca2+-free buffer with the calcium chelator ethylenediaminetetraacetic acid (EDTA) could suppress the tracer washout from PC12 cells. This finding is consistent with the tracer release being mediated by Ca2+ influx resulting from membrane depolarization.
Analogous to 131I-MIBG, the current in vitro tracer uptake study confirmed that 18F-LMI1195 is also stored in vesicles in PC12 cells and released along with vesicle turnover. Understanding the basic kinetics of 18F-LMI1195 at a subcellular level is important for the design of clinical imaging protocols and imaging interpretation. The single-photon emission computed tomography (SPECT) tracer 123I-meta-iodobenzylguanidine (MIBG) targeting the norepinephrine transporter (NET) is currently the most widely used clinical tracer for sympathetic nervous imaging, with well-established protocols and mature guidelines based on the results of several clinical trials [1, 2]. However, positron emission tomography (PET) tracers show beneficial properties compared with SPECT tracers due to the development of imaging technology over the last couple of decades. PET provides superior sensitivity and improved temporal and spatial resolution, along with the possibilities of regional cardiac imaging and kinetic studies for quantification [3]. Among the PET tracers that are currently available for NET imaging, a new class of 18F-labeled agents has drawn attention because of the longer half-life of fluorine-18 (110 min) compared with carbon-11 (20 min). Thereby, these 18F-labeled tracers provide a unique opportunity to further enhance the development and application of PET imaging in terms of reduced financial burden on hospitals, flexible novel tracer design, and labeling procedures with improved stability [4]. Currently, a couple of 18F-labeled tracers targeting the NET are available: N-[3-bromo-4-(3-18F-fluoropropoxy)-benzyl]-guanidine (18F-LMI1195) is designed for assessment of sympathetic innervation of the heart and has successfully passed through a phase I clinical trial, which confirmed its tolerability in human subjects along with a favorable biodistribution for cardiac imaging [5].
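The practical advantage of the longer fluorine-18 half-life noted above can be illustrated with a short decay calculation. This is an illustrative sketch using the half-lives quoted in the text (110 min for 18F, 20 min for 11C); it is not part of the original study.

```python
def remaining_fraction(t_min, half_life_min):
    """Fraction of the initial activity remaining after t_min minutes of decay."""
    return 0.5 ** (t_min / half_life_min)

# Two hours after the end of synthesis (e.g., after shipment from an
# off-site cyclotron), roughly 47% of an 18F batch is still usable,
# whereas a hypothetical 11C batch has decayed to under 2%.
f18 = remaining_fraction(120, 110)
c11 = remaining_fraction(120, 20)
print(f"18F: {f18:.2%}, 11C: {c11:.2%}")
```

This difference is why 18F-labeled tracers can be produced centrally and distributed, whereas 11C tracers effectively require an on-site cyclotron.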
[18F]4-fluoro-3-hydroxyphenethylguanidine ([18F]4F-MHPG) and its isomer [18F]3-fluoro-4-hydroxyphenethylguanidine ([18F]3F-PHPG) have also been developed to counteract the perfusion dependence of previous NET tracers [6]. The first-in-human studies of both tracers showed clear and long-term cardiac retention [7]. All the abovementioned tracers share a similar structure (benzyl/phenethyl guanidine) with MIBG and therefore display comparable properties. Among them, 18F-LMI1195 has so far attracted the most attention from researchers due to its straightforward, high-yield labeling procedure, which is suitable for commercial preparation and application [8, 9]. Similar to MIBG, 18F-LMI1195 is resistant to metabolism by monoamine oxidase [5, 10]. In a head-to-head comparison of 18F-LMI1195 with 123I-MIBG in isolated perfused rabbit hearts, tracer washout after vesicle turnover was accelerated by electrical field stimulation. Additionally, our group has also demonstrated that the retention of 18F-LMI1195 is resistant to desipramine chase (desipramine added after tracer delivery), which emphasizes its potential to mimic the physiological norepinephrine turnover [11]. Nonetheless, although our former investigation on the isolated rabbit heart demonstrated the accumulation of 18F-LMI1195 in nerve terminals, it was not sufficient to conclude that the tracer was taken up into the vesicles. In a previous study, by using potassium chloride (KCl) and reserpine stimulation, the difference between extravesicular retention and granular storage of MIBG was clearly demonstrated in PC12 (vesicle-rich) and SK-N-SH (vesicle-poor) cell lines [12]. Therefore, in order to gain further insights and clarify the kinetics of 18F-LMI1195 at a subcellular level, we aimed to compare it with its SPECT counterpart 131I-MIBG in both cell lines, as mentioned above, with regard to KCl- or reserpine-induced tracer depletion mechanisms.
High concentration of KCl has been applied as a surrogate for electrical field stimulation, which enhances cardiac LMI1195 washout significantly in the isolated rabbit heart [11]. Reserpine can also deplete catecholamines (in this case, 18F-LMI1195, which presumably mimics the neurotransmitter) from storage vesicles [13]. This study thus makes it possible to investigate potential drug-tracer competition and to compare tracer uptake behavior and mechanisms in detail. The conclusions drawn from the results will serve as useful guidance for future clinical assessment. 18F-LMI1195 was synthesized and purified as described in the literature [8]. The radiochemical purity of the final product was greater than 95% with a specific radioactivity of more than 10 GBq/μmol. 131I-MIBG was purchased from GE Healthcare (Freiburg im Breisgau, Germany) and used within 2 h after calibration time. 131I-MIBG was chosen instead of 123I-MIBG due to its relatively longer half-life, which is convenient for research purposes and financial reasons. Both PC12 cells (adrenal gland pheochromocytoma from rat) and SK-N-SH cells (human neural cells from Caucasian neuroblastoma) were purchased from Sigma-Aldrich (Sigma-Aldrich Chemie GmbH, Munich, Germany) and were cultivated at 37 °C and 5% CO2. PC12 cells were grown in a Roswell Park Memorial Institute medium with 2 mM glutamine, 5% fetal bovine serum (FBS), and 10% horse serum. SK-N-SH cells were grown in MEM medium with 2 mM glutamine and 10% FBS. The cells were first grown in 75-cm² flasks with type IV collagen coating, in which the cells would be adherent. One day prior to the release assay, they were transferred to 12-well plates with 1 mL volume per well at a density of 2 × 10⁵ cells/mL. Release assay High concentration KCl-induced tracer release Firstly, cells were incubated with high concentration KCl (100 mM) for 10, 20, and 30 min.
The total protein concentrations after incubation were compared with control groups using only HEPES buffered saline (HBS) buffer (cf. Additional file 1) to ensure cell viability. No statistical difference was observed between the two groups. Therefore, this incubation condition was used for the following high concentration KCl induction study. The culture medium was removed and the cells were washed with the medium. Cells were first incubated with radiotracers in a solution containing both 18F-LMI1195 (300 kBq) and 131I-MIBG (37 kBq) at 37 °C for 120 min. After incubation, the cells were washed twice with warmed HBS buffer. One milliliter of HBS buffer was added again, followed by a 5-min incubation before removal. Then, cells were treated with HBS (with or without Ca2+) or 100 mM high KCl buffer (with or without Ca2+) for 10, 20, and 30 min. After the treatment, the buffer was collected as the extracellular fraction. Cells were washed twice with ice-cold phosphate buffered saline (PBS) and solubilized in 0.1 N NaOH. Radioactivity in each sample was measured with a gamma counter using differential energy windows (± 20%) for 18F and 131I (FH412; Frieseke & Höpfner, Erlangen, Germany). Reserpine-induced tracer release Tracer loading was performed analogously to the abovementioned KCl study. Cells were incubated with radiotracers in a solution containing both 18F-LMI1195 (300 kBq) and 131I-MIBG (37 kBq) at 37 °C for 120 min. After the incubation period, cells were washed twice with warmed medium, followed by 5 min incubation with medium. Afterwards, cells were treated with a reserpine solution at final concentrations of 50 nM for PC12 cells and 5 μM for SK-N-SH cells for 10, 20, or 30 min, respectively, because it is known that PC12 cells are sensitive to reserpine-induced depletion, whereas a much higher concentration of reserpine is applied to SK-N-SH cells because of its dramatically lower storage capacity [13].
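Dual-isotope counting as used in the protocol above separates 18F and 131I by their photopeaks. The sketch below shows how symmetric ±20% windows could be derived; the photopeak energies (511 keV annihilation photons for 18F, the 364.5 keV principal gamma line of 131I) are standard nuclear-data values assumed here, since the actual window settings of the FH412 counter are not given in the text.

```python
# Nominal photopeak energies in keV (standard nuclear data, assumed here;
# not taken from the counter's actual configuration).
PHOTOPEAKS_KEV = {"18F": 511.0, "131I": 364.5}

def energy_window(peak_kev, fraction=0.20):
    """Symmetric +/- fraction energy window around a photopeak."""
    half_width = fraction * peak_kev
    return peak_kev - half_width, peak_kev + half_width

for isotope, peak in PHOTOPEAKS_KEV.items():
    lo, hi = energy_window(peak)
    print(f"{isotope}: {lo:.1f}-{hi:.1f} keV")
```

Note that with these settings the upper edge of the 131I window still overlaps the lower edge of the 18F window, so in practice crosstalk (spillover) between the channels would additionally be corrected for.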
The incubation buffer was collected, followed by double washing with ice-cold PBS. The cells were then solubilized in 0.1 N NaOH, and the cell lysate was collected. Radioactivity of each sample was measured using a gamma counter. Nonspecific uptake was measured in the presence of 10 μM of the selective NET inhibitor desipramine, and specific uptake was calculated by subtracting nonspecific radioactivity from total counts. Retention index calculation To quantify tracer release from cells, a retention index was calculated as $$ \mathrm{Retention}\ \mathrm{index}\ \left(\%\right)=100\times \left(1-\mathrm{release}\ \mathrm{counts}/\mathrm{total}\ \mathrm{counts}\right), $$ in which release counts are defined as the counts measured in the extracellular buffer after release stimulation. Total counts are the counts measured in the cell lysate after the tracer uptake period (including the washing process). To exclude non-specific binding or uptake (which does not contribute to release after vesicular turnover), non-specific uptake was determined in the presence of 10 μM desipramine and subtracted from total uptake. All experimental data are presented as mean ± SD, with individual numbers measured in triplicate in experiments performed on 2–3 separate days. Statistical comparison of uptake/release ratios between two groups was performed by Student's t test, where p values of less than 0.05 were considered statistically significant. Data were analyzed by analysis of variance (ANOVA) when multiple groups were compared. Statistical analysis was performed with GraphPad Prism (GraphPad Software, Inc., La Jolla, USA). High concentration KCl-induced tracer depletion Treatment of PC12 cells with high concentration KCl buffer induced robust tracer depletion of both 18F-LMI1195 and 131I-MIBG in a time-dependent manner, leading to 88 ± 4% of 18F-LMI1195 and 70 ± 2% of 131I-MIBG total uptake released from cells (p < 0.001, vs. untreated controls, Fig. 1a).
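The retention index formula and the non-specific-uptake correction described above can be sketched in a few lines of Python; the function names and the example counts are hypothetical, chosen only to illustrate the calculation, and are not taken from the study's data.

```python
def specific_counts(total_counts, nonspecific_counts):
    """Subtract desipramine-blocked (non-specific) counts from total uptake."""
    return total_counts - nonspecific_counts

def retention_index(release_counts, total_counts):
    """Retention index (%) = 100 * (1 - release counts / total counts)."""
    return 100.0 * (1.0 - release_counts / total_counts)

# Hypothetical well: 1000 total counts, of which 100 are non-specific
# (measured in the presence of 10 uM desipramine); 270 counts appear
# in the extracellular buffer after stimulation.
total = specific_counts(1000, 100)   # 900 specific counts
ri = retention_index(270, total)     # ~70% of the tracer retained
print(f"retention index: {ri:.1f}%")
```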
In contrast, KCl did not produce an obvious release of either 18F-LMI1195 or 131I-MIBG in SK-N-SH cells, which demonstrated retention indices similar to controls (Fig. 1b). Time course of tracer retention index after the stimulation with high concentration KCl buffer. Both 18F-LMI1195 and 131I-MIBG were induced to be released from (vesicle-rich) PC12 cells after high concentration KCl buffer treatment (a n = 3 at each time point of both groups), whereas no such effect could be observed in (vesicle-poor) SK-N-SH cells (b n = 3 at each time point of both groups). ***p < 0.001 vs. control group at the same time point; all data points presented as mean ± SD Tracer depletion in PC12 cells 30 min after treatment with high concentration KCl could be abolished by using Ca2+-free buffer containing ethylenediaminetetraacetic acid (EDTA) (Fig. 2). The release index (the percentage of tracer released from cells after a given treatment) of 18F-LMI1195 decreased from 71 ± 4% (100 mM KCl) to 16 ± 7% (100 mM KCl + EDTA). For 131I-MIBG, the same tendency was observed, with a decrease from 61 ± 2% (100 mM KCl) to 15 ± 4% (100 mM KCl + EDTA) (p < 0.005, respectively). Comparison of both tracers in PC12 cells treated with 100 mM KCl in the presence (+) or absence (−) of EDTA. Y-axis represents the difference of release index, i.e., counts over control after 30 min of treatment with high concentration KCl buffer. Both 18F-LMI1195 (n = 3 of each testing group) and 131I-MIBG (n = 3 of each testing group) were released from PC12 cells after treatment with high concentration KCl in the absence of EDTA, whereas the effect was mitigated in the presence of EDTA. **p < 0.005 vs. EDTA (+) group at the same condition; all data points presented as mean ± SD The release of 18F-LMI1195 from PC12 cells by exposure to reserpine was time-dependent and reached significant differences after 30 min of treatment. This result is in accordance with our findings for 131I-MIBG (Fig. 3a).
LMI1195 uptake reached a significant difference (p < 0.001) after 30 min of reserpine exposure with only 68 ± 3% left (vs. 80 ± 2% in controls). A similar pattern of tracer kinetics was confirmed using 131I-MIBG after reserpine exposure with retention of 65 ± 7% for the reserpine-treated group versus 85 ± 3% for the control group (p < 0.001). Applying the same protocol to SK-N-SH cells (vesicle-poor), 88 ± 2% retention of 18F-LMI1195 and 87 ± 2% of 131I-MIBG in the cells were recorded. However, no statistical difference was reached either for 18F-LMI1195 or for 131I-MIBG (Fig. 3b). Co-exposing the preloaded PC12 cells to both reserpine and the NET blocker desipramine, the tracer release showed a similar pattern to using reserpine alone with significant differences after 30 min of treatment (Fig. 3a vs. Fig. 4). Time course of tracer retention index after the stimulation with reserpine. Both tracers are induced to be released from PC12 cells after reserpine treatment (a n = 6 at each time point of both groups), whereas such an effect could not be observed in SK-N-SH cells (b n = 6 at each time point of both groups). ***p < 0.001 vs. control group at the same time point; all data points presented as mean ± SD Co-exposing preloaded PC12 cells to reserpine and desipramine. The co-treatment also induced tracer release in the same mode as using reserpine alone. The release indices of either 18F-LMI1195 (left, n = 6 at each time point of both groups) or 131I-MIBG (right, n = 6 at each time point of both groups) reached significant differences compared to controls after 30 min. ***p < 0.001 vs. control group at the same time point; all data points presented as mean ± SD In summary, high concentration KCl and reserpine could enhance 18F-LMI1195 washout from storage vesicle-rich PC12 cells. This washout, quantified as the tracer releasing index, reached a significant difference after 30 min of treatment.
In contrast, such an effect could not be observed in vesicle-poor SK-N-SH cells. As a reference standard, similar kinetics after KCl or reserpine treatment were also achieved using 131I-MIBG in the same cell lines. Furthermore, high concentration KCl exposure-induced tracer release was Ca2+ dependent, as confirmed by suppressing the effect using the calcium chelator EDTA and Ca2+-free buffer. Several tracers sharing similarities in their benzylguanidine structure were designed to compensate for the disadvantages of the clinically used SPECT tracer MIBG. They all represent similarities to MIBG in order to achieve comparable in vitro intracellular retention and in vivo distribution properties [14]. Among them, 18F-LMI1195 is so far the best examined 18F-labeled PET tracer and has successfully completed a phase I clinical trial [5]. In addition to the current literature [5, 8, 9, 15], our research group has also performed a number of investigations with 18F-LMI1195 using animal models and ex vivo systems [11, 16, 17]. A further understanding of the properties of 18F-LMI1195 and its performance at a subcellular and molecular level is still of importance for its clinical application. Therefore, we investigated the storage mechanism and depletion kinetics of LMI1195 in both rat pheochromocytoma PC12 and human neuroblastoma SK-N-SH cells, using 131I-MIBG as a comparator. The former cell line is rich in storage vesicles that can retain either the physiological neurotransmitter norepinephrine or radioactive tracers with analogous structures, whereas the SK-N-SH cells lack such secretory vesicles, and therefore taken-up tracers can only be stored in the cytoplasm or mitochondria [18]. All cells were first preloaded with both tracers to reach equilibrium and thereafter were treated with either high concentration KCl buffer or reserpine in order to trigger the depletion of preloaded radiotracers. As shown in Fig.
1, depolarization of PC12 cells caused by stimulation with high concentration KCl buffer evoked apparent tracer release, with approximately 60–70% depletion of the accumulated 18F-LMI1195 or 131I-MIBG from the cells. By applying high concentration KCl to neuronal cells, Blaustein has proposed that neurotransmitter release from the nerve terminal is caused by Ca2+ influx via voltage-gated calcium channels [19]. Therefore, when using either Ca2+-free KCl buffer with the Ca2+ chelator ethylene glycol-bis(2-aminoethylether)-N,N,N′,N′-tetraacetic acid (EGTA) or the calcium channel blocker nifedipine, Araujo et al. further verified the suppression of norepinephrine release [20]. Similar conclusions were also drawn by Mandela et al., showing that norepinephrine depletion is dependent on extracellular Ca2+ and could be fully suppressed by EDTA [21]. Thus, as expected, exposing cells to Ca2+-free high KCl buffer containing EDTA led to comparable findings in our study, with a diminished release effect (Fig. 2). This result attained from high KCl induction is consistent with the conclusion reached by our research group using isolated rabbit hearts, in which electrical provocation evoked enhanced tracer release [11]. Electrical field stimulation is known to induce norepinephrine overflow by releasing storage vesicles [22]. Since we could measure the radioactivity in the whole heart, including neuronal cells and myocytes, it was suggested that 18F-LMI1195 was taken up by the cells and stored within the vesicles [11]. In addition to our previous findings, we further confirmed these distinct uptake, storage, and release characteristics by using an in vitro assay. As a human neuroblastoma cell line, SK-N-SH also expresses NET on the plasma membrane [23] and is able to transport either 131I-MIBG or 18F-LMI1195 into cells.
However, due to the shortage of storage vesicles, no apparent release of stored tracers could be observed after the application of high KCl buffer compared to controls (Fig. 1b). The comparison of high-KCl-induced tracer release with the control group is of utmost importance: since no statistical difference could be observed between the two groups, a robust conclusion can be derived from our experimental setup. Reserpine is known for its potential to release norepinephrine from synaptic nerve cells by triggering the exocytosis of storage vesicles [21]. In this study, reserpine induced significant tracer release after 30 min of its application to vesicle-rich PC12 cells (Fig. 3), whereas such an effect was not observed in SK-N-SH cells, which is in accordance with the conclusion drawn by Smets et al. from a reserpine-induced MIBG depletion study [24]. Due to the deficiency of storage vesicles in SK-N-SH cells, no clear tracer overflow, either with 18F-LMI1195 or 131I-MIBG, could be observed. The efflux of tracers from SK-N-SH cells may be due only to slow passive diffusion. The current study using either high concentration KCl or reserpine takes the opposite approach to our earlier rabbit heart study [16], in which pretreatment with desipramine was followed by tracer injection (Fig. 5). Firstly, that in vivo study provided the first proof that desipramine successfully prevents tracer uptake into storage vesicles. Secondly, the in vitro cell study demonstrated the clear depletion mechanism of an already taken-up tracer in the storage vesicles. By comparing the two methods (high concentration KCl and reserpine), it was revealed that these exogenous radioactive sympathetic nerve tracers apparently mimic the physiological neurotransmitter norepinephrine turnover quite well, including transporter-mediated uptake as well as modes of storage and exocytosis (Fig. 6). Integrating our previous animal study (Fig.
5) and ex vivo results [11, 16, 17] with the present in vitro findings, the intracellular behavior of 18F-LMI1195 is analogous to its SPECT counterpart MIBG and the neurotransmitter norepinephrine. Transverse image of 18F-LMI1195 uptake in rabbit heart, showing control (left) and with DMI pretreatment (right). Averaged scan 10–30 min after tracer injection. With permission of [16] Illustration of radiotracer uptake, storage, and release mechanisms in PC12 cells. In PC12 cells, radiotracers (18F-LMI1195 or 131I-MIBG) that have been selectively taken up into the cells are first stored in storage vesicles and can be released by either high concentrations of KCl or reserpine. This procedure is Ca2+ dependent Similar to high KCl-induced exocytosis, reserpine-mediated 18F-LMI1195 release is also Ca2+ dependent. Mandela et al. investigated how reserpine influences NET function in a Ca2+-dependent, non-competitive manner and how it interferes with the interaction between NET and norepinephrine storage vesicles. Strikingly, it was revealed that reserpine induces a non-competitive inhibition of norepinephrine uptake in PC12 cells [13]. This effect requires the presence of the vesicular monoamine transporter (VMAT) and storage/secretory vesicles, which explains the finding for exposure to reserpine alone and reserpine/desipramine-induced tracer release: a demonstration of analogous uptake and efflux mechanisms associated with the benzylguanidine structure common to both tracers (Fig. 4). By contrast, as demonstrated previously, cardiac retention of 11C-hydroxyephedrine (11C-HED) is mediated through a continuous cyclical mode of release (diffusion out) and reuptake via NET from the nerve terminal [11, 16]. 11C-HED showed enhanced washout from both in vivo and isolated perfused rabbit hearts after desipramine chase.
On the other hand, 18F-LMI1195 and MIBG are not sensitive to a NET inhibitor chase protocol in an in vivo setting, which was imitated in the present in vitro study by adding desipramine while incubating with reserpine (Fig. 4). Therefore, on a subcellular level, a stable vesicle-storing mechanism mimicking physiological norepinephrine turnover was corroborated. It should be mentioned that in addition to the application of these NET tracers in cardiac diseases, there are many potential applications in tumor diagnosis [25]. 123I-MIBG imaging has been used in the evaluation of neuroblastoma for years [26]. 18F-LMI1195 could also be suitable for such NET imaging given the structural and pharmacological similarity of the two tracers: a previous study demonstrating high and specific accumulation of LMI1195 in pheochromocytomas has already made a first step toward proving this potential [15]. Our study demonstrated the subcellular and molecular uptake and release mechanism of the novel sympathetic nerve PET tracer 18F-LMI1195. These findings are analogous to findings for the structurally related and widely used SPECT predecessor MIBG. Both high concentration KCl and reserpine induce the depletion of 18F-LMI1195. The proposed mechanism of vesicle storage and release is consistent with the conclusions suggested by previous studies using both ex vivo isolated perfused and in vivo rabbit hearts. To sum up, we herein demonstrated that 18F-LMI1195 is a promising tracer for visualizing cardiac innervation by mimicking the physiologic neurotransmitter norepinephrine. It provides similar properties to MIBG in a clinical setting, along with the advantages of 18F-labeling and PET imaging technology. Henzlova MJ, Duvall WL, Einstein AJ, Travin MI, Verberne HJ. ASNC imaging guidelines for SPECT nuclear cardiology procedures: stress, protocols, and tracers. J Nucl Cardiol. 2016;23(3):606–39. Narula J, Gerson M, Thomas GS, Cerqueira MD, Jacobson AF.
123I-MIBG imaging for prediction of mortality and potentially fatal events in heart failure: the ADMIRE-HFX study. J Nucl Med. 2015;56:1011–8. Chen X, Werner RA, Javadi MS, Maya Y, Decker M, Lapa C, Herrmann K, Higuchi T. Radionuclide imaging of neurohormonal system of the heart. Theranostics. 2015;5(6):545–85. Kobayashi R, Chen X, Werner RA, Lapa C, Javadi MS, Higuchi T. New horizon in cardiac innervation imaging: introduction of novel 18F-labeled PET tracers. Eur J Nucl Med Mol Imaging. 2017;44(13):2302–9. Sinusas AJ, Lazewatsky J, Brunetti J, et al. Biodistribution and radiation dosimetry of LMI1195: first-in-human study of a novel 18F-labeled tracer for imaging myocardial innervation. J Nucl Med. 2014;55:1445–51. Jang KS, Jung Y-W, Gu G, et al. 4-[18F]Fluoro-m-hydroxyphenethylguanidine: a radiopharmaceutical for quantifying regional cardiac sympathetic nerve density with positron emission tomography. J Med Chem. 2013;56:7312–23. Raffel D, Jung Y-W, Murthy V, et al. First-in-human studies of 18F-hydroxyphenethylguanidines: PET radiotracers for quantifying cardiac sympathetic nerve density. J Nucl Med. 2016;57(Suppl 2):232. Yu M, Bozek J, Lamoy M, et al. Evaluation of LMI1195, a novel 18F-labeled cardiac neuronal PET imaging agent, in cells and animal models. Circ Cardiovasc Imaging. 2011;4:435–43. Yu M, Bozek J, Lamoy M, et al. LMI1195 PET imaging in evaluation of regional cardiac sympathetic denervation and its potential role in antiarrhythmic drug treatment. Eur J Nucl Med Mol Imaging. 2012;39:1910–9. Mangner TJ, Tobes MC, Wieland DW, Sisson JC, Shapiro B. Metabolism of iodine-131 metaiodobenzylguanidine in patients with metastatic pheochromocytoma. J Nucl Med. 1986;27(1):37–44. Higuchi T, Yousefi BH, Reder S, et al. Myocardial kinetics of a novel [(18)F]-labeled sympathetic nerve PET tracer LMI1195 in the isolated perfused rabbit heart. J Am Coll Cardiol Img. 2015;8:1229–31. Smets LA, Janssen M, Metwally E, Lösberg C. 
Extragranular storage of the neuron blocking agent meta-iodobenzylguanidine (MIBG) in human neuroblastoma cells. Biochem Pharmacol. 1990;39(12):1959–64.
13. Mandela P, Chandley M, Xu YY, Zhu MY, Ordway GA. Reserpine-induced reduction in norepinephrine transporter function requires catecholamine storage vesicles. Neurochem Int. 2010;56:760–7.
14. Thackeray JT, Bengel FM. PET imaging of the autonomic nervous system. Q J Nucl Med Mol Imaging. 2016;60:362–82.
15. Gaertner FC, Wiedemann T, Yousefi BH, et al. Preclinical evaluation of 18F-LMI1195 for in vivo imaging of pheochromocytoma in the MENX tumor model. J Nucl Med. 2013;54:2111–7.
16. Werner RA, Rischpler C, Onthank D, et al. Retention kinetics of the 18F-labeled sympathetic nerve PET tracer LMI1195: comparison with 11C-hydroxyephedrine and 123I-MIBG. J Nucl Med. 2015;56:1429–33.
17. Higuchi T, Yousefi BH, Kaiser F, et al. Assessment of the 18F-labeled PET tracer LMI1195 for imaging norepinephrine handling in rat hearts. J Nucl Med. 2013;54:1142–6.
18. Streby KA, Shah N, Ranalli MA, Kunkler A, Cripe TP. Nothing but NET: a review of norepinephrine transporter expression and efficacy of 131I-mIBG therapy. Pediatr Blood Cancer. 2015;62:5–11.
19. Blaustein MP. Effects of potassium, veratridine, and scorpion venom on calcium accumulation and transmitter release by nerve terminals in vitro. J Physiol. 1975;247:617–55.
20. Araujo CB, Bendhack LM. High concentrations of KCl release noradrenaline from noradrenergic neurons in the rat anococcygeus muscle. Braz J Med Biol Res. 2003;36:97–104.
21. Mandela P, Ordway GA. KCl stimulation increases norepinephrine transporter function in PC12 cells. J Neurochem. 2006;98:1521–30.
22. Bourreau JP. Internal calcium stores and norepinephrine overflow from isolated, field stimulated rat vas deferens. Life Sci. 1996;58:L123–9.
23. Zhang H, Huang R, Cheung NK, et al. Imaging the norepinephrine transporter in neuroblastoma: a comparison of [18F]-MFBG and 123I-MIBG. Clin Cancer Res. 2014;20:2182–91.
24. Smets LA, Loesberg C, Janssen M, Metwally EA, Huiskamp R. Active uptake and extravesicular storage of m-iodobenzylguanidine in human neuroblastoma SK-N-SH cells. Cancer Res. 1989;49:2941–4.
25. Pfluger T, Piccardo A. Neuroblastoma: MIBG imaging and new tracers. Semin Nucl Med. 2017;47(2):143–57.
26. Pandit-Taskar N, Modak S. Norepinephrine transporter as a target for imaging and therapy. J Nucl Med. 2017;58(Suppl 2):39S–53S.

Funding
This study was funded by the German Research Council (DFG grants CH 1516/2-1 and HI 1789/3-3) and the Competence Network of Heart Failure funded by the Integrated Research and Treatment Center (IFB) of the Federal Ministry of Education and Research (BMBF). This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement. This publication was funded by the German Research Foundation (DFG) and the University of Würzburg in the funding program Open Access Publishing.

Author information
Department of Nuclear Medicine, University Hospital Würzburg, Oberdürrbacher Strasse 6, 97080, Würzburg, Germany: Xinyu Chen, Rudolf A. Werner, Constantin Lapa, Naoko Nose, Mitsuru Hirano & Takahiro Higuchi
Comprehensive Heart Failure Center, University Hospital Würzburg, Würzburg, Germany: Xinyu Chen, Rudolf A. Werner & Takahiro Higuchi
The Russell H. Morgan Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins University School of Medicine, Baltimore, MD, USA: Rudolf A. Werner & Mehrbod S. Javadi
Department of Bio Medical Imaging, National Cardiovascular and Cerebral Research Center, Suita, Osaka, Japan: Naoko Nose, Mitsuru Hirano & Takahiro Higuchi
Lantheus Medical Imaging, North Billerica, MA, USA: Simon Robinson

Contributions
XC, RAW, and TH designed the study, wrote the manuscript, and researched the data. XC, RAW, CL, NN, and MH performed the analysis.
MSJ, SR, and TH aided in drafting the manuscript and revised it critically for important intellectual content. All authors read and approved the final manuscript.

Correspondence to Takahiro Higuchi.

Additional file: Preparation of buffer systems. (DOCX 107 kb)

Cite this article: Chen, X., Werner, R.A., Lapa, C. et al. Subcellular storage and release mode of the novel 18F-labeled sympathetic nerve PET tracer LMI1195. EJNMMI Res 8, 12 (2018). https://doi.org/10.1186/s13550-018-0365-9

Keywords: Storage vesicle turnover; 18F-LMI1195; Phaeochromocytoma
Beneficial effects of climate warming on boreal tree growth may be transitory

Loïc D'Orangeville, Daniel Houle, Louis Duchesne, Richard P. Phillips, Yves Bergeron & Daniel Kneeshaw

Nature Communications volume 9, Article number: 3213 (2018)

Climate-change ecology

Abstract
Predicted increases in temperature and aridity across the boreal forest region have the potential to alter timber supply and carbon sequestration.
Given the widely-observed variation in species sensitivity to climate, there is an urgent need to develop species-specific predictive models that can account for local conditions. Here, we matched the growth of 270,000 trees across a 761,100 km2 region with detailed site-level data to quantify the growth responses of the seven most common boreal tree species in Eastern Canada to changes in climate. Accounting for spatially-explicit species-specific responses, we find that while 2 °C of warming may increase overall forest productivity by 13 ± 3% (mean ± SE) in the absence of disturbance, additional warming could reverse this trend and lead to substantial declines exacerbated by reductions in water availability. Our results confirm the transitory nature of warming-induced growth benefits in the boreal forest and highlight the vulnerability of the ecosystem to excess warming and drying.

Introduction
Climate models predict increases in temperature and aridity across the boreal forest region [1] that are likely to exceed 2 °C by the end of the century [2,3]. While warmer and drier conditions are typically thought to reduce tree growth [4,5,6,7,8], rising temperature and aridity may increase growth in cool, wet boreal regions where excess water can hinder forest productivity throughout much of the growing season [9,10,11]. Eastern North America (ENA) is projected to be the planet's only boreal region with sufficient precipitation to cancel out increases in evapotranspiration associated with future warming [12]. Given the high degree of inter- and intra-specific variation in climate sensitivity of boreal species to recent warming [9,13,14], we still lack accurate estimates of species climatic thresholds. Such spatially-explicit models that account for site characteristics and local heterogeneity in temperature and soil water are urgently needed to predict future trajectories for this ecosystem and inform management strategies, global climate models, and climate-change mitigation actions [15,16].
Although climate envelope models provide insights into species' adaptive capacity [17], they display inconsistent responses and lagged sensitivity to climate change [18]. Radial growth has strong connections to vital forest demographic rates including tree mortality and fecundity [19,20,21,22], while relationships between growth and environmental drivers yield information on a species' adaptive capacity [20]. In this sense, demographic performance indices may provide higher-resolution information on species adaptation to changing climate. Furthermore, models that can account for non-linear growth responses are needed to detect climatic thresholds beyond which climate effects may shift from positive to negative [23]. Building such models can only be achieved based on numerous observations over a large range of climatic conditions. Here, we aimed to model the sensitivity to current climate of the seven most abundant boreal tree species in Eastern Canada (black spruce, Picea mariana; white spruce, Picea glauca; balsam fir, Abies balsamea; jack pine, Pinus banksiana; aspen, Populus tremuloides; white birch, Betula papyrifera; and larch, Larix laricina) and to project potential changes in growth under increasing temperature and changes in precipitation. First, general additive models (GAM) were used to model the growth (1985–2005) of 270,000 trees across 95,000 temperate and boreal stands, while accounting for local climate, tree size and age, soil characteristics, successional stage, and competition with neighboring trees. Second, the models were used to assess the local vulnerability of 141 million inventoried stems in the boreal zone under study to an array of temperature and precipitation change scenarios based on a suite of general circulation models (GCM). The fact that our models were fitted to growth observations extending to the warmer temperate zone allowed us to simulate boreal growth responses to warming while remaining within the observed climate space.
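The core idea of the GAM approach above is that growth is modeled as a sum of smooth, potentially non-linear functions of each predictor, so climatic optima emerge from the fitted curves rather than being imposed. The authors' models (species-specific GAMs with smooth terms for size, age, competition, soil, and climate) are not reproduced here; the following is a minimal numpy-only sketch of a single smooth term, fitted by least squares on synthetic data, to show how a thermal optimum can be read off the fitted response:

```python
import numpy as np

def fit_smooth(x, y, degree=3):
    """Least-squares fit of a polynomial 'smooth term' f(x) for y.
    A stand-in for one smooth term of a GAM (real GAMs use penalized splines)."""
    X = np.vander(x, degree + 1)  # polynomial basis matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda xnew: np.vander(np.atleast_1d(xnew), degree + 1) @ coef

rng = np.random.default_rng(0)
tmax = rng.uniform(2.0, 12.0, 2000)  # mean daily maximum temperature, degC
# synthetic parabolic growth response peaking near 8 degC (illustrative only)
bai = 6.0 - 0.25 * (tmax - 8.0) ** 2 + rng.normal(0.0, 0.5, tmax.size)

f = fit_smooth(tmax, bai)
grid = np.linspace(2.0, 12.0, 101)
optimum = grid[np.argmax(f(grid))]  # climatic threshold: where fitted growth peaks
print(round(float(optimum), 1))
```

With enough observations spanning both sides of the optimum, the fitted curve recovers the threshold; this is exactly why the authors needed samples extending into the warmer temperate zone.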
Our study presents striking species-specific variation in climate sensitivity across the study area. The growth of most conifers is mostly limited by water scarcity in southern regions but constrained by low temperatures in northern regions. In contrast, birch and aspen appear less vulnerable in the southern range of their distribution. In the absence of disturbance, the sum of projected southern declines and northern increases in growth across the boreal zone suggests net growth gains with warming up to 2 °C. Additional warming reverses this trend, leading to growth declines exacerbated by reductions in water availability. Such results highlight the limited capacity of boreal forests in ENA to adapt to future climate change, which hinges on hypothetical increases in precipitation.

Results
Growth models
Overall, stand characteristics and climate were strong predictors of tree growth for all seven species across the study area (Fig. 1), explaining between 52 and 70% of the deviance (Table 1). Relative mean square error was small (range: 5.2–6.1%), indicating sufficient sample size, residuals were normal (Supplementary Fig. 1), and cross-validation revealed good predictive capacity (Table 1). Despite large inter-species differences in growth rates, the effects of growth drivers were generally consistent across species. Growth increased exponentially with tree size (P < 0.001; Student t-test), while growth rates declined sharply with age in young trees (ca. <50 years; Table 1 and Supplementary Fig. 3). Both symmetric (BA) and asymmetric competition (BAL) had significant (P < 0.001; Wald test) negative effects on tree growth in all species (Table 1 and Supplementary Fig. 3). Growth of spruce species was higher on sites with low to moderate terrain slope (0–20%), while steep slopes negatively affected the growth of aspen and balsam fir (P < 0.05; Wald test).

Fig. 1: Plot location and average climate of the study area. a Location of sampled plots (green) in Quebec, Canada (gray).
b Average annual daily maximum temperature (TMAX) across sampled plots. c Growing season (May to September) climate moisture index (CMI, see Methods). The intermediate black line indicates the limit between the boreal and temperate vegetation zones while the upper black line represents the limit for commercial forestry. Variables in b, c are averaged over the study period (1985–2005) and per 15-km polygon. Data for base maps from https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2011-eng.cfm with permission under http://open.canada.ca/en/open-government-licence-Canada and from https://www.donneesquebec.ca/recherche/fr/dataset/systeme-hierarchique-de-classification-ecologique-du-territoire used with permission under a Creative Commons 4.0 Attribution (CC BY) license.

Table 1: Environmental characteristics and model summary

Growth responded strongly to mean daily maximum temperature (TMAX), growing season water availability (climate moisture index (CMI), measured as growing season precipitation minus potential evapotranspiration (PET), see Methods), and the interaction of the two (P < 0.01; Wald test; Table 1). The one exception was larch, which was relatively insensitive to available water and was therefore excluded from further analyses. This lack of response to water availability is likely due to the species' predominance in wetlands (51% of sampled individuals; Supplementary Fig. 2). We identified the non-linear effects of single climatic variables on species-specific growth by holding the remaining variables constant in the model (at their median value). The growth of white spruce and balsam fir increased up to TMAX values of 7.3 and 8.1 °C, respectively, but declined above these thresholds. Thresholds of 8.4 and 8.7 °C were observed for black spruce and white birch, respectively, although both species appear less sensitive to TMAX (Supplementary Fig. 3).
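The CMI defined above (growing season precipitation minus PET) is a simple monthly water balance. A minimal sketch, with hypothetical monthly values in mm; in the study PET comes from the Penman-Monteith calculation described in the Methods:

```python
# Climate moisture index (CMI): growing-season (May-September) precipitation
# minus potential evapotranspiration (PET), in mm of water.
GROWING_SEASON = range(5, 10)  # May (5) through September (9)

def cmi(precip_mm, pet_mm, months=GROWING_SEASON):
    """CMI = sum(P - PET) over the growing season; negative values mean
    atmospheric demand exceeds supply (a drier site)."""
    return sum(precip_mm[m] - pet_mm[m] for m in months)

# hypothetical monthly climatologies (keys are month numbers, values in mm)
precip = {5: 80, 6: 95, 7: 100, 8: 90, 9: 95}
pet = {5: 70, 6: 100, 7: 115, 8: 95, 9: 50}

print(cmi(precip, pet))  # 460 - 430 = 30 mm
```

Because PET rises with temperature, a warming scenario lowers CMI even when precipitation is held fixed, which is why the two climate axes interact in the growth models.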
Broadleaved species displayed a parabolic growth response to CMI, with reduced growth rates at both ends of the gradient (Supplementary Fig. 3). Low moisture negatively impacted balsam fir growth, while high moisture was associated with growth reductions in jack pine and spruce species. Only two species (aspen and jack pine) responded linearly (both positively) to warming and only two species (black spruce and jack pine) responded linearly (both negatively) to moisture across the climate gradient. Such linear relationships between growth and climate are probably due to a lack of samples from the warm end of the climatic gradient. Importantly, temperature effects on growth were conditioned by water availability. On cold sites (TMAX < 6 °C), the growth of northern conifers like jack pine and black spruce tended to decline with excess moisture (Fig. 2). On warm sites (TMAX > 8 °C), most species displayed water limitation. For balsam fir and jack pine, higher moisture minimized or canceled the growth decline caused by above-optimal temperatures (Fig. 2). Broadleaved trees displayed only modest changes in growth with increased water availability at elevated temperatures, consistent with the observed low sensitivity of these species to separate TMAX and CMI effects (Supplementary Fig. 3).

Fig. 2: Interactive effects of temperature (TMAX) and water availability (climate moisture index, CMI) on tree basal area increment (BAI). Heat plots indicate predicted tree BAI (in cm2 year−1) across observed ranges of TMAX and CMI, with all other model variables held at median species values.

Soil drainage and texture significantly affected the growth response to temperature for birch, spruce species, and jack pine (P < 0.05; Wald test; Supplementary Table 4), and the growth response to CMI for white spruce and balsam fir (P < 0.01; Supplementary Table 5). Winter snowfall also conditioned how growth responded to CMI variation for all species (P < 0.01).
Finally, stand maturity (early-seral, immature, mature, or old-growth) was also found to significantly alter climate–growth relationships for all species except white birch (Supplementary Tables 4 & 5).

Projected responses to climate change scenarios
We simulated species sensitivity to likely scenarios of climate change according to local site conditions. Given the uncertainty that remains in future climate conditions, we used various combinations of temperature (1–4 °C increases in TMAX and associated increases in PET of 43–173 mm) and precipitation changes (−5 to +15% of growing season precipitation) lying within the range of values predicted by GCMs (Supplementary Fig. 4). Future changes in temperature and associated PET, as well as in precipitation, were used to calculate new TMAX and CMI estimates that were incorporated into the growth models. The growth models were applied to a different dataset: stem inventory data of over 141 million stems representative of current stand structure and composition. These simulations were limited to the colder boreal zone to avoid projecting growth responses outside the observed climate space (Supplementary Table 1). Individual growth values were summed per plot, scaled per hectare, and averaged per 15-km polygon. Results are reported relative to baseline conditions. It should be noted that climate anomalies such as drought, which may increase with climate change, are not captured in the multi-decadal average changes presented here but could represent a significant component of future trends in growth and mortality.

Conifer growth displays a strong latitudinal response gradient to warming-only scenarios and their associated changes in CMI (Fig. 3 and Table 2). Under a 4 °C increase in TMAX, black spruce and balsam fir changes in growth shift from marginal declines south of 50°N to important gains north of that latitude. Similarly, jack pine growth increases across its range, but gains increase with latitude.
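The scenario construction described above (raise TMAX, add the associated PET increment, scale growing season precipitation, then recompute CMI) can be sketched as follows. The per-degree PET increment is linearly interpolated between the 43 mm (+1 °C) and 173 mm (+4 °C) endpoints quoted in the text; that interpolation, and the baseline values, are illustrative assumptions:

```python
# Build perturbed (TMAX, CMI) inputs for a warming/precipitation scenario.
def scenario_climate(tmax_base, precip_base, pet_base, d_tmax, precip_factor):
    """d_tmax in degC (1-4); precip_factor e.g. 0.95 (-5%) to 1.15 (+15%).
    The PET increment is interpolated between 43 mm (+1 degC) and
    173 mm (+4 degC), an illustrative assumption from the quoted range."""
    d_pet = 43.0 + (173.0 - 43.0) * (d_tmax - 1.0) / 3.0
    tmax = tmax_base + d_tmax
    precip = precip_base * precip_factor
    cmi = precip - (pet_base + d_pet)  # CMI = P - PET under the scenario
    return tmax, cmi

# hypothetical baseline: TMAX 8 degC, 500 mm growing-season precip, 430 mm PET
for d_t in (1, 2, 3, 4):
    for f in (0.95, 1.0, 1.15):
        print(d_t, f, scenario_climate(8.0, 500.0, 430.0, d_t, f))
```

Feeding each perturbed (TMAX, CMI) pair back into the fitted growth models, with all site variables unchanged, is what produces the scenario maps in Fig. 3.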
South of 50°N, where most white spruce is found, the species displays large growth declines. Our projections suggest that growth decline areas expand with warming. For instance, under a 1 °C warming, black spruce growth declines are limited to southeastern Quebec and represent 3% of the species range in the boreal zone but expand to 36% with 4 °C warming (Fig. 3 and Table 2). The proportion of growth decline in white spruce shifts from 29 to 76% with a 1–4 °C change. Relative to conifers, broadleaf trees undergo modest relative changes in the southern part of the study area. North of 50°N, birch growth increases significantly with warming, while aspen growth remains largely unchanged across the entire boreal zone. Marginal birch growth declines are mainly observed in the southwest, characterized by low snowfall and precipitation, high summer temperature, and abundant glaciolacustrine deposits compared to the till-dominated deposits in the remaining territory.

Fig. 3: Changes in growth across Quebec's boreal vegetation zone under future climate scenarios. Relative changes in basal area growth per hectare are calculated under scenarios of 2 and 4 °C increases in TMAX and −5 to +15% average changes in precipitation (ppt) according to local conditions (tree size, species, mean stand age, competition, soil, slope, stand successional stage, climate). Increases in TMAX are accompanied by corresponding increases in potential evapotranspiration (see Methods). Values were obtained by averaging plot-level growth modeled from stem inventory data across 15-km polygons (see Methods).
Data for base maps from https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2011-eng.cfm with permission under http://open.canada.ca/en/open-government-licence-Canada and from https://www.donneesquebec.ca/recherche/fr/dataset/systeme-hierarchique-de-classification-ecologique-du-territoire used with permission under a Creative Commons 4.0 Attribution (CC BY) license.

Table 2: Projected changes in species growth rates across the southern (<50°N) and northern (≥50°N) boreal vegetation zone for likely changes in TMAX (+1 to +4 °C) and precipitation (ppt; −5%, baseline, and +15%)

Variations in future precipitation could have large impacts on boreal tree growth. For balsam fir trees south of 50°N, a 15% increase in growing season precipitation cancels out the average growth decline following a 4 °C warming (Fig. 3 and Table 2). For the same region, jack pine growth under a 4 °C warming increases from 10 ± 20% (−5% precipitation; mean ± SE; N = 548 polygons) to 31 ± 20% (+15% precipitation). Interestingly, our projections suggest that under moderate warming, reduced precipitation could be beneficial to the growth of black spruce, white spruce, and jack pine in some high-latitude areas.

Net growth changes in the boreal zone
We assessed the overall change in growth to account for differences in structure, composition, and productivity across the study area. To account for the likely uneven sampling effort of the inventory data over the boreal zone (more plots in the southern boreal zone), species-specific mean growth per 15-km polygon was averaged across all polygons for each climate scenario (Fig. 4a). Aspen growth appears equally insensitive to precipitation and temperature changes, with modest net growth variations under all scenarios.
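The aggregation scheme used throughout these results (plot-level growth averaged within each 15-km polygon, then polygon means averaged over the zone, with a standard error computed across polygons) can be sketched with plain numpy; the plot growth values and polygon assignments below are synthetic:

```python
import numpy as np

def zone_mean_growth(plot_growth, polygon_id):
    """Average plot-level growth within each 15-km polygon, then average the
    polygon means over the zone; the SE is computed across polygon means so
    that densely sampled (southern) polygons do not dominate the zone mean."""
    polys = np.unique(polygon_id)
    poly_means = np.array([plot_growth[polygon_id == p].mean() for p in polys])
    zone_mean = poly_means.mean()
    se = poly_means.std(ddof=1) / np.sqrt(poly_means.size)
    return zone_mean, se

rng = np.random.default_rng(1)
growth = rng.normal(5.0, 1.0, 600)   # synthetic plot-level growth per hectare
poly = rng.integers(0, 30, 600)      # each plot assigned to one of 30 polygons
m, se = zone_mean_growth(growth, poly)
print(round(float(m), 2), round(float(se), 3))
```

Averaging polygon means (rather than pooling all plots) is what corrects for the denser sampling of the southern boreal zone noted in the text.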
Under all precipitation scenarios, white birch, jack pine, balsam fir, and black spruce display growth increases up to 2 °C warming (up to 8–13 ± 3%, N = 832–1846 polygons), albeit gains are maintained or decline under additional warming depending on precipitation levels (Fig. 4a). The negative effect of increased precipitation on high-latitude jack pine and black spruce growth at moderate warming is visible at +1 and +2 °C, while additional warming inverts this trend. Precipitation levels control the growth patterns with additional warming: under a 4 °C warming scenario, jack pine growth shifts from gains of 12 ± 9% (N = 832 polygons) under 5% reduced precipitation to gains of 29 ± 10% with 15% increases in precipitation (Fig. 4a). Similarly, balsam fir and white birch net growth changes are highly precipitation-dependent, with growth changes at +4 °C going from 4–6 ± 5% (−5% precipitation; N = 1307–1578) to 14–17 ± 6% (+15% precipitation). Finally, the net growth change in white spruce is negative under all climate simulations above 2 °C warming (with declines reaching 22 ± 7% at +4 °C; N = 959).

Fig. 4: Differences in mean growth under future climate scenarios across Quebec's boreal vegetation zone. a Difference in mean growth per hectare per species according to 1–4 °C warming and −5 to +15% changes in growing season precipitation (ppt). Colored ribbons represent relative standard error of the mean. b Difference in mean growth per hectare for the combined species. Pie chart indicates the relative contribution of each species to baseline mean growth across the boreal zone. Values were obtained by averaging plot-level basal area growth across 15-km polygons, then averaging polygon-level growth across the boreal zone.

Plot-level growth was calculated for all species combined, averaged per 15-km polygon and over the boreal zone. The results are largely influenced by dominant species like black spruce and balsam fir, which constitute two-thirds of the inventory trees (Fig.
4b). Under 5–15% increased precipitation, a 2 °C warming results in growth gains of up to 13 ± 3% (N = 1854 polygons), while additional warming results in exponentially negative growth trends (Fig. 4b). Changes in precipitation could mitigate or exacerbate the decline, with net growth gains under a +4 °C warming nearly doubling from 6 ± 3% to 11 ± 3% (N = 1854) under 15% increases in precipitation.

Discussion
The growth models developed here yield strong variance explanation and reveal the existence of climatic optima for most studied species. Similar climatic optima were reported for black spruce from provenance trials [24,25] and are consistent with spatially-divergent responses to warming in spruce species from western North America [26]. Thermal thresholds can be explained by various physiological constraints including leaf-level water loss (see ref. 27 and references therein) and metabolic costs [28]. Indeed, we find that thermal thresholds vary significantly with water availability, highlighting the importance of analyzing both factors jointly. For most species studied here, reductions in available water increase vulnerability to elevated temperatures, as temperature is an important driver of atmospheric evaporative demand. In contrast, cold sites are more responsive to temperature and display gains in growth with lower available growing season water, consistent with findings in the Rocky Mountains of western North America [20] and in the Eurasian boreal forest [29]. The short growing season on colder sites combined with a large quantity of snowpack meltwater may ensure abundant water levels throughout most of the growing season. At such sites, excessive precipitation combined with low PET has been reported to decrease photosynthetic activity through indirect negative effects on solar radiation, temperature, and the length of the growing season [30].
Such contrasting relationships were previously reported across the study area for black spruce populations using a classic landscape-scale dendrochronological approach [9]. The positive effect of moderate increases in temperature (1–2 °C) on boreal tree growth is consistent with the well-known temperature constraint on such forests [31]. Warming extends the growing season and increases growth rates while reducing potential cold-temperature injuries [32]. Such growth increases, in line with reported increases in black spruce growth rates north of the study area [33], may help maintain forest productivity and ecosystem services despite expected increases in future burn rates [34]. However, our simulations indicate that additional warming of 3–4 °C may cancel out part of these gains and lead to substantial growth declines in southern boreal stands, conditional on future changes in precipitation levels. Growth declines are generally associated with a higher probability of mortality [19,20,21,22] and could have large impacts on ecosystem dynamics, including shifts in composition towards broadleaf-dominated stands and conversion of closed-crown forests into open woodlands [16].

We report important intra- and inter-species differences in functional responses to climate. The strong sensitivity of balsam fir to drier conditions is coherent with its higher abundance in regions of high water availability, and its sensitivity to experimental drought [35]. White spruce's vulnerability to warming rather than precipitation is consistent with its dominance in the drier boreal forests of central and western Canada. Alaskan white spruce decline has been attributed to recent increases in temperature [4,36], but in Central Canada, its severe decline is attributed instead to water deficit [5].
Jack pine, balsam fir, white birch, and black spruce maintain their growth in the boreal zone under a 2 °C increase, but additional warming to 3–4 °C leads to a decline (except for jack pine under increased precipitation), while aspen displays neutral changes. Similar results were found across a 46° to 54°N gradient in western Quebec, with positive responses to warming in jack pine and black spruce in northern stands, while aspen showed neutral changes across latitudes [4]. Aspen's low sensitivity suggests that high moisture levels in Eastern Canada could preclude drought-induced declines similar to those reported in multiple locations with lower available soil moisture across North America [37]. Projecting growth responses to future conditions that fall outside the observed range of climate variability can be speculative [38,39], but we tackle this issue by predicting growth responses under a range of temperature increases that lies within the observed climate space (Supplementary Table 1). Although predicting future growth trends remains limited by the uncertainty surrounding future water balance [40], this issue was considered by simulating a range of precipitation changes (−5 to +15%) based on a suite of 21 GCM simulations. However, our space-for-time modeling approach assumes that species responses to climatic gradients across space reflect their future local response to climate change over time. Our current knowledge of boreal population-level variations in climate change response remains limited [41], despite significant advances from provenance trials [42,43] or genecology studies [44,45]. In addition, tree-level growth changes are not equivalent to stand-level changes, due to complex demographics and stand dynamics, while other disturbance agents like insect outbreaks and fire will probably have large impacts on species growth and can interact with the direct effects of drier and warmer conditions.
Finally, our growth predictions do not account for likely shifts in species composition and stand structure, but our objective was rather to provide an estimate of the vulnerability of the stand types that currently dominate the boreal forest. Our results point to significant regional growth declines in Northeastern North America with warming above 2 °C. Given the increasing likelihood that global warming may exceed 2 °C by the end of the century [2], and considering that this would translate into higher temperature increases at high latitudes in the northern hemisphere, the capacity of boreal forests in ENA to adapt to future climate change is highly uncertain and hinges on hypothetical increases in precipitation.

Methods
Study area
The study area covers the northern temperate and boreal vegetation zones of the province of Québec (Canada), which range between the 45th and the 53rd parallels north, and from the 57th to the 80th meridian west. Climate ranges from humid continental in the south, with hot and humid summers and long cold winters, to subarctic in the north, with cooler summers and longer, colder winters. Over the entire study area, mean annual temperature and precipitation for 1971–2000 range between 6.7 and −4.7 °C and between 700 and 1600 mm, respectively, while the snow-free season varies between 150 and 240 days. The study area encompasses the temperate vegetation zone in the south, largely dominated by sugar maple (Acer saccharum) and other broadleaf species but composed of mixed stands of balsam fir and yellow birch (Betula alleghaniensis) in northernmost stands. North of the temperate zone is the boreal vegetation zone, dominated primarily by black spruce and balsam fir, accompanied by white spruce, jack pine, aspen, larch, and paper birch. In addition to wood harvesting, fire and spruce budworm (Choristoneura fumiferana) outbreaks are the main large-scale disturbances regulating forest dynamics in these forests.
The data used in this study were collected from both temporary and permanent forest plots sampled by the Québec government to characterize the managed forest territory. Forest stands were first stratified based on stand characteristics (composition, density, height, age), edaphic properties (slope, drainage, deposit), and history of disturbance from the interpretation of aerial pictures. Circular plots (radius = 11.28 m, area = 400 m2) were then proportionally allocated in each stratum according to their respective surface area. Within each plot, the diameter at breast height (DBH) of all trees larger than 9 cm was measured. In addition, the DBH of all tree stems ranging between 1 and 9 cm DBH was measured within smaller circular plots (radius = 3.57 m, area = 40 m2), while the DBH of all trees of DBH > 31 cm was measured within larger plots (radius = 14.1 m, area = 625 m2). Tree cores were harvested from three to nine trees of DBH larger than 9 cm, which were selected according to a strict sampling protocol [46,47]. Soil texture, deposit type, drainage, and slope were characterized during sampling. Complete core sampling and measurements, mainly conducted for site index estimation, were limited to coniferous species and to the most abundant shade-intolerant broadleaf species that generally form even-aged stands (white birch and Populus species). All tree cores analyzed here were collected between 1994 and 2012. To minimize growth bias caused by disturbances like spruce budworm outbreaks (1970–1987 and 2006–today), growth analysis was limited to the years 1985–2005.

Tree-ring data preparation
Cores were dried, glued to a wooden holder, and sanded according to standard procedures [48]. Ring boundaries were first detected and identified under binocular magnification and then measured to the nearest 0.01 mm with the WinDendro Image Analysis System for tree-ring measurement (Regent Instruments Inc.).
A calendar year was attributed to each ring, the outermost ring corresponding to the year of tree sampling or, exceptionally, to the previous year for plots sampled before the start of that year's tree-ring formation. For each tree, Tukey's test was used to detect outliers (with constant k = 3) based on the distribution of annual growth values49. Abnormal annual growth values, likely caused by anomalous ring detection, represented 0.4% of all growth-year values and were generally evenly distributed across years. All abnormal values were excluded from the analysis. Radial growth of 270,615 trees, covering 95,562 plots and representing the seven typical boreal tree species of Eastern Canada (black and white spruce, balsam fir, jack pine, paper birch, aspen, and larch), was converted into annual basal area increment (BAI), in cm2 year−1, from the bark to the pith, using tree DBH as the initial value. Annual BAI of each tree was averaged over the 15 most recent years of growth within the 1985–2005 period. Trees with 10–14 years of growth were also included (e.g., a tree sampled in 1997, for which growth prior to 1985 was excluded, would only display 13 years of growth). We assumed that a shorter period (<10 years) would give too much weight to anomalous growth years, while a longer period (>15 years) would increase the risk that competition, measured during the year of stand sampling, would no longer be representative of earlier growth conditions. The most represented species in our tree-ring collections are black spruce (91,811 trees) and balsam fir (89,097), followed by paper birch (37,526), aspen (21,905), white spruce (15,262), jack pine (12,068), and larch (1,946; Table 1). Consistent with their larger size, white spruce (9.2 cm2 year−1) and aspen (8.3 cm2 year−1) have the highest median growth, followed by larch (6.5 cm2 year−1) and balsam fir (5.8 cm2 year−1), while black spruce (2.9 cm2 year−1) displays the lowest growth rates (Table 1 and Supplementary Fig. 2).
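The outlier screen and the bark-to-pith BAI conversion described above can be sketched as follows. This is an illustrative Python re-implementation (the original measurements and analyses used WinDendro and R; the function names and the linear-interpolation quantile are our own choices), assuming ring widths are listed from the most recent ring inward:

```python
import math

def tukey_outliers(values, k=3.0):
    """Flag values outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]; k = 3 as in the text."""
    s = sorted(values)
    n = len(s)
    def quantile(q):  # simple linear-interpolation quantile
        pos = q * (n - 1)
        lo, hi = int(math.floor(pos)), int(math.ceil(pos))
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [(v < q1 - k * iqr) or (v > q3 + k * iqr) for v in values]

def rings_to_bai(dbh_cm, ring_widths_mm):
    """Convert ring widths (most recent ring first, in mm) to annual basal
    area increments (cm^2 per year), working inward from the measured DBH."""
    radius = dbh_cm / 2.0                 # initial radius from measured DBH, cm
    bai = []
    for w in ring_widths_mm:
        inner = radius - w / 10.0         # radius before that year's growth
        bai.append(math.pi * (radius ** 2 - inner ** 2))
        radius = inner
    return bai                            # same order: most recent year first
```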
Half of the sampled trees are found in the boreal forest zone, while the remaining half is in the northern temperate forest zone (Fig. 1). Annual tree diameter, referred to hereafter as tree size, was estimated by subtracting the annual diameter increment from the initial, measured DBH. Annual tree diameter was then averaged over each tree's 15 most recent years of growth within the 1985–2005 period. Tree age was estimated as the sum of observed tree rings. Potential biases in tree age due to core decay were avoided, as incomplete core samples were discarded. Most larch trees are less than 50 years old (median = 35, Table 1), while other species have similar age structures, with median ages between 42 and 54 years, except for black spruce (median: 67 years, Table 1 and Supplementary Fig. 2). Black spruce displays a higher proportion of older trees, with a 95th percentile age of 171 years, relative to 82–121 years for other species (Table 1 and Supplementary Fig. 2). This is mostly due to the stands sampled north of 50°N, which are dominated by older black spruce trees (median age of 132 years). Tree age was averaged over each tree's 15 most recent years of growth within the 1985–2005 period. Monthly weather data were generated for each plot using the BioSIM interpolation model50 based on a network of 249 (1985) to 365 (2005) weather stations. To assess the control exerted by available water on growth, monthly PET was estimated with the SPEI package in R51 using the Penman–Monteith algorithm with inputs of monthly average daily minimum and maximum temperature, latitude, incoming solar radiation, temperature at dew point, and altitude. Relative to other evapotranspiration algorithms, the Penman–Monteith algorithm is a more accurate, comprehensive, and physically based model of PET40. The CMI was calculated as the balance of precipitation and PET over a period i, in mm of water (Eq.
(1)): $${\rm CMI}_{i} = {\rm Prec}_i - {\rm PET}_i$$ The CMI is a hydrologic index well-correlated with tree growth in boreal and temperate forest ecosystems52,53. Average summer (CMISUMMER; June–August) and growing season (CMIGS; May–September) climate moisture indices were used to assess the documented warm-season water constraint on growth4,5,6,7, while average mean (TMEAN) and maximum (TMAX) daily temperature estimates were used to assess the control exerted by annual temperature on growth15. Across the study area, CMIGS is poorly related to TMAX (R2 = 0.06). All climate variables were averaged over the 11–15 years of growth for each tree. Since species-specific sensitivities to interactions of temperature and water are the focus of this study, we used only one variable to model temperature and one to model water availability. Variable selection is described in the Model section below. Along the growing season CMI gradient (range: −205 to 219 mm), the gradient common to all species extends from −125 to 161 mm (Supplementary Fig. 2). All species samples display similar distributions along CMI values, although balsam fir samples are more represented than other species at high CMI values (95th percentile = 145 versus 60–116 mm for other species; Table 1). All species samples are found to span TMAX values between 3.6 and 11.1 °C (Supplementary Fig. 2), while the TMAX gradient over the study area ranges between 1.8 and 11.9 °C (Fig. 1). Black spruce and jack pine samples are also less abundant at high TMAX (95th percentiles of 8.7–9.2 °C) than other species (95th percentiles of 9.5–10.6 °C; Supplementary Fig. 2 and Table 1). Along this gradient, black spruce, jack pine, and balsam fir are more abundant at low TMAX (5th percentiles of 3.5–4.7 °C) than other species (5th percentiles of 5.6–6.3 °C; Supplementary Fig. 2 and Table 1). 
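Equation (1) and the seasonal averaging described above can be illustrated with a short sketch (Python is used here purely for illustration; the paper's PET estimates came from the SPEI package in R, and the nested-dictionary data layout is our own assumption):

```python
def cmi(prec_mm, pet_mm):
    """Eq. (1): CMI = Prec - PET over a period, in mm of water."""
    return sum(prec_mm) - sum(pet_mm)

def mean_seasonal_cmi(prec, pet, months):
    """Seasonal CMI (e.g. months (6, 7, 8) for summer, (5, 6, 7, 8, 9) for the
    May-September growing season) computed per year, then averaged across the
    years a tree's growth is averaged over. prec, pet: {year: {month: mm}}."""
    yearly = [cmi([prec[y][m] for m in months], [pet[y][m] for m in months])
              for y in sorted(prec)]
    return sum(yearly) / len(yearly)
```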
Because spring snowmelt can have lasting effects on growing season water availability, the amount of snowfall—and thus the amount of snowmelt—was estimated as the total snowfall from January to March. Median January to March snowfall values range from 144 to 180 mm (Table 1). White birch, white spruce, and balsam fir samples are less represented at sites with low snowfall (5th percentile > 128 mm versus 112–114 mm for other species), while jack pine is less represented at sites with high snowfall (95th percentile = 166 mm versus 208–251 mm for other species).

Soil physical environment

Local slope, drainage, texture, and soil deposit type were determined in situ to characterize the soil physical environment46. Missing values were completed using the interpretation of aerial photographs. Slope was estimated quantitatively, as a percent, in all plots. However, slope values in temporary sample plots were converted into classes of 0–3, 4–8, 9–16, 16–30, 31–40, and ≥41%. For this analysis, we converted these classes to quantitative values using the lower range value (e.g., 9–16 was replaced with 9%). A qualitative combination of drainage, texture, and soil deposit type was used during plot sampling to characterize the soil physical environment. Sites on very thin (<25 cm) or very stony (>80% stoniness) soils were classified as shallow or stony soils. Sites characterized in situ by organic soils or by mineral soils with a hydric moisture regime were classified as hydric soils. The remaining mineral soils were divided into six classes depending on their drainage (xeric to mesic, or hygric) and soil texture (coarse, medium, or fine), all determined on site using standardized protocols. Hygric soils display permanent seepage, mottling, and some gleyed mottles in the soil profile, while xeric to mesic soils have more rapid drainage. Larch and jack pine excepted, all sampled trees are most frequent on well-drained, medium-textured soils (39–69% of trees; Supplementary Fig. 2).
Hydric soils are the most common environment for larch (51% of sampled trees), followed by black spruce (20%), but are only marginal for the remaining species (1–5%). Larch presence is also associated with flat landscapes (median slope of 0) relative to other species (Table 1). A high proportion of jack pine (42% of trees) is found on mesic, coarse-textured soils (e.g., sandy soils) as compared to other species (4–12%).

Competition

To account for competition for resources from neighboring trees, two competition indices were estimated for every sampled tree. First, symmetric competition (BA) was computed as the sum of all individual basal areas for trees with DBH > 1 cm, scaled to a hectare (units of m2 ha−1). To account for the size of the sampled trees relative to the size of competing trees—assuming the level of competition exerted by a smaller tree is lower—asymmetric competition (BAL) was computed by summing only the basal area of trees that were larger than the cored tree54. Competition levels and their distribution are similar across species, with median BA values ranging between 24 (larch and jack pine) and 30 m2 ha−1 (balsam fir), and median BAL values ranging between 9 (black spruce and jack pine) and 17 m2 ha−1 (balsam fir; Table 1 and Supplementary Fig. 2).

Model

GAMs were used to estimate the joint effects of temperature and water availability on growth and to detect non-linear climatic relationships. These models are semi-parametric extensions of generalized linear models that fit non-linear smoothing terms, using regression splines as predictors, without any a priori assumption on the shape of the relationship55. Thus, GAMs are particularly useful to detect non-linear responses and thresholds, and to predict species responses to climate across their ranges56.
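The two competition indices defined above (BA and BAL) can be sketched in code. This is an illustrative Python version (the text does not specify whether the subject tree's own basal area enters BA, so the plot list below simply contains all stems, subject included):

```python
import math

def basal_area_cm2(dbh_cm):
    """Cross-sectional stem area at breast height, in cm^2."""
    return math.pi * (dbh_cm / 2.0) ** 2

def competition_indices(subject_dbh_cm, plot_dbh_cm, plot_area_m2=400.0):
    """Symmetric (BA) and asymmetric (BAL) competition, in m^2 ha^-1.
    BA sums the basal area of all stems in the plot; BAL sums only stems
    larger than the subject tree. cm^2 per plot scales to m^2 per ha by
    (1/1e4) * (1e4/plot_area_m2) = 1/plot_area_m2 (400-m^2 plots here)."""
    scale = 1.0 / plot_area_m2
    ba = sum(basal_area_cm2(d) for d in plot_dbh_cm) * scale
    bal = sum(basal_area_cm2(d) for d in plot_dbh_cm if d > subject_dbh_cm) * scale
    return ba, bal
```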
A single GAM was fitted for each species to predict the annual BAI of a tree j in site i as a function of temperature, water availability, competition, tree size and age, snowfall, and the soil physical environment, assuming a Gaussian distribution of the response variable (Eq. 2):

$$\log({\rm BAI}_{ij} + 1) = \beta + \log({\rm Size}_{ij}) + {\rm Size}_{ij} + f({\rm Age}_{ij}) + f({\rm BAL}_{ij}) + f({\rm BA}_{i}) + f({\rm Temperature}_{i}) + f({\rm CMI}_{i}) + f({\rm Temperature}_{i},\,{\rm CMI}_{i}) + f({\rm Slope}_{i}) + \epsilon_{\rm soil} + \epsilon_{\rm snow} + \epsilon_{\rm stage}$$

where β is the intercept and the f are smoothing functions represented by cubic regression splines. To minimize over-fitting and the complexity of the model, the degree of smoothness of the spline functions was bounded to four for each variable56. The interaction between TMAX and CMI was modeled with a tensor product smooth, independent of the relative scaling of the covariates, with a degree of smoothness bounded to three57. BAI was log-transformed to avoid negative growth predictions and to normalize model residuals. Back-transformed BAI estimates are presented throughout, after applying the Smearing retransformation to correct the potential bias associated with the transformation of a predicted variable prior to estimation58. Following visual inspection of univariate relationships, tree size was included parametrically, as linear and log-transformed terms59. More complex relationships were observed for the other factors, which were thus modeled using smooth functions. Tree age was included in the growth model to correct for the well-documented sampling bias caused by the absence of old, fast-growing trees and young, slow-growing trees from the tree-ring dataset (see Supplementary Note 1).
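The Smearing retransformation mentioned above can be written out explicitly. For a model fit on log(BAI + 1), the naive back-transform exp(pred) − 1 is biased for the mean; Duan's estimator rescales by the average of the exponentiated residuals (an illustrative Python sketch; the study itself fit GAMs in R):

```python
import math

def smearing_backtransform(log_preds, log_resids):
    """Duan's (1983) smearing estimator for a model fit on log(BAI + 1):
    E[BAI | x] is estimated as S * exp(pred) - 1, where S is the mean of
    exp(residual) over the training residuals."""
    s = sum(math.exp(r) for r in log_resids) / len(log_resids)
    return [s * math.exp(p) - 1.0 for p in log_preds]
```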
Because abiotic factors are strongly spatially autocorrelated, and given the low spatial autocorrelation of the model residuals (Supplementary Fig. 5), no explicit spatial structure was included in the model. Three error terms were included in the model. Soil characteristics (combined soil drainage and texture) were included as a seven-level error term associated with the model intercept and with the smooth functions f for TMAX and CMI. Average January to March snowfall was converted into a three-level error term associated with the model intercept and with the CMI function, with low snowfall below 140 mm (21% of sites), medium snowfall between 140 and 200 mm (57% of sites), and high snowfall above 200 mm (22% of sites). Finally, stand successional stage was included as a four-level error term derived from the average tree age of each stand (early-seral: 0–20 years, immature: 20–70 years, mature: 70–100 years, old-growth: >100 years) and associated with the model intercept and with the TMAX, CMI, BA, and BAL smooth functions. While tree responses to climate have been reported to change with stand successional stage60, we also observed during preliminary analyses a changing influence of competition on tree growth with stand successional stage, especially for black spruce and jack pine. Most stands (41–75% across species) fall in the immature stage and 17–29% in the mature stage, while early-seral stands are the least abundant (1–6%) and old-growth stands are generally scarce (2–12%), except for black spruce, which is well represented in old stands (34%; Supplementary Fig. 2). We observe an initial short-lived increase in stem density for all species except white spruce (decline) and jack pine (no change), followed by a decline with stand development stage in all species, consistent with self-thinning theory (Supplementary Table 2).
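The snowfall and successional-stage groupings described above amount to simple binning; a sketch follows (illustrative Python; boundary handling at exactly 20, 70, 100, 140, or 200 is not specified in the text, so the assignments below are an assumption):

```python
def successional_stage(mean_stand_age_years):
    """Four-level stage factor from mean stand age; boundary ages are
    assigned to the younger class here (an assumption; the text gives
    0-20, 20-70, 70-100, >100 years)."""
    if mean_stand_age_years <= 20:
        return "early-seral"
    if mean_stand_age_years <= 70:
        return "immature"
    if mean_stand_age_years <= 100:
        return "mature"
    return "old-growth"

def snowfall_class(jan_mar_snow_mm):
    """Three-level January-March snowfall factor (breaks at 140 and 200 mm)."""
    if jan_mar_snow_mm < 140:
        return "low"
    if jan_mar_snow_mm <= 200:
        return "medium"
    return "high"
```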
Climatic variable selection and model validation

To select the best descriptive climatic variables, Akaike information criterion (AIC) values61 were compared between models with different combinations of temperature (TMAX and TMEAN) and water availability variables (CMISUMMER and CMIGS). Retained variables were average daily maximum temperature (TMAX, in °C) and growing season CMI (May–September, in mm; Supplementary Table 3). The two variables display contrasting spatial structures over the study area, with temperature following a latitudinal gradient and growing season CMI varying longitudinally (Fig. 1). All other factors in the final model (e.g., competition, tree age) were included based on the authors' ecological understanding of the study area. All variables reduced the AIC value, indicating an improved model despite the added complexity (Supplementary Table 3). For each species, the initial model was fit on a random subset of 80% of the trees. The predictive capacity of the model was then validated on the remaining 20% of the trees, computing the explained deviance and the root mean square error (RMSE) of predicted versus observed growth rates. The model was then fit to all trees of each species.

Simulated growth change with warming and drying

Future changes in temperature and precipitation associated with global warming were calculated from 21 GCMs from the NASA Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset (resolution of 0.25°). Low and high scenarios of future greenhouse gas emissions (Representative Concentration Pathways, RCP, of 4.5 and 8.5 W m−2) were used.
Given the uncertainty that remains in future climate conditions, species growth models were then used to compare species sensitivity to various combinations of temperature increases (1–4 °C increases in TMAX and associated PET increases of 43–173 mm) and precipitation changes (−5 to +15% of growing season precipitation) within the range of values predicted by the GCMs (Supplementary Fig. 4). Future changes in temperature and associated PET, as well as in precipitation, were used to calculate new TMAX and CMI estimates that were incorporated into the growth models. Each combination of temperature and precipitation change was calculated as the average over the boreal study area, but these changes were allowed to vary locally according to the median GCM projections. By doing so, our simulations account for the heterogeneous changes in climate projected by the GCMs. Notably, projections suggest the rate of warming over the boreal study area is faster at high latitudes, while the eastern parts of the study area, closer to the Atlantic Ocean, will receive most of the potential increases in precipitation (Supplementary Fig. 4). Growth models were applied to a different dataset that is representative of current stand structure and composition. Within all forest inventory plots where tree-ring collections were sampled, all stems >9 cm DBH were classified into 2-cm DBH classes per species, for a total of more than 141 million stems of the study species. Levels of BAL were computed for each stem. We then simulated the growth of each stem of each species under study according to local conditions (competition, soil, climate, tree size, and stand-level mean age), after adding local changes in TMAX corresponding to the average 1–4 °C warming across the study area, as well as adjusting local CMI using the corresponding changes in PET associated with temperature increases, combined with local changes in May to September precipitation corresponding to average changes of −5, +5, and +15%.
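The construction of the warming-by-precipitation scenario grid can be sketched as follows (illustrative Python; the linear PET response of roughly 43 mm per °C is our own simplifying assumption, chosen to match the 43–173 mm range quoted for 1–4 °C, whereas the study used spatially explicit GCM-derived changes):

```python
def climate_scenarios(tmax_c, cmi_mm, gs_prec_mm,
                      warmings=(1.0, 2.0, 3.0, 4.0),
                      prec_changes=(-0.05, 0.05, 0.15),
                      pet_per_deg_mm=43.0):
    """Build the warming x precipitation grid used in the simulations.
    For each scenario, TMAX shifts by the warming increment and CMI shifts
    by the precipitation change minus the assumed PET increase."""
    scenarios = {}
    for dt in warmings:
        d_pet = pet_per_deg_mm * dt                  # assumed PET response
        for dp in prec_changes:
            d_prec = gs_prec_mm * dp                 # growing season precip change
            scenarios[(dt, dp)] = (tmax_c + dt, cmi_mm + d_prec - d_pet)
    return scenarios
```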
Resulting individual growth estimates were converted into growth per hectare, in cm2, and summed per plot. To account for the likely uneven sampling effort of the inventory data over the boreal zone (more plots in the southern boreal zone), a 15-km grid was applied over the boreal zone, and growth was averaged per polygon. To account for structural, compositional, and productivity differences across the boreal zone, the overall change in growth was computed by averaging polygon-level growth across all polygons for each climate scenario. All results were reported relative to baseline conditions. Anticipating forest response to climate change can be highly problematic when predictions are made for climatic conditions beyond the data used to fit a model38,39. Here, projected responses to temperature increases of 1–3 °C remain within the observed range of TMAX across the entire boreal study area for all species (Supplementary Table 1). For a 4 °C increase in TMAX, the fraction of stands remaining within the observed TMAX range stays above 97% for all species except jack pine (>85%). In the remaining stands, a 4 °C increase in TMAX exceeds observed temperatures by 0.2 ± 0.2 °C (mean ± SD). Additional analyses were performed excluding these areas from future growth projections, with marginal effects on the general trends reported here, suggesting that our projections for the boreal zone are robust.

Code availability

The code used to fit the growth models and project future growth trends can be made available upon request.

Data availability

The environmental data that support the findings of this study are available from the Ministère des Forêts, de la Faune et des Parcs du Québec (MFFPQ) at https://mffp.gouv.qc.ca/le-ministere/acces-aux-donnees-gratuites/. The tree-ring data that support the findings of this study were used under license for the current study but are available from the authors upon reasonable request and with the approval of the MFFP. Finally, the climate scenarios used are available at https://cds.nccs.nasa.gov/nex-gddp/.
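The two-step spatial averaging described above (plots to 15-km cells, then cells to the region) can be sketched as follows (illustrative Python; the planar-coordinate gridding is a simplification of the actual polygon overlay):

```python
from collections import defaultdict

def regional_growth_change(plots, cell_km=15.0):
    """Average plot-level growth (cm^2 ha^-1) within 15-km grid cells first,
    then across cells, so densely sampled southern areas do not dominate
    the regional mean. plots: iterable of (x_km, y_km, growth)."""
    cells = defaultdict(list)
    for x, y, g in plots:
        cells[(int(x // cell_km), int(y // cell_km))].append(g)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)
```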
This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.

References

1. Wang, Y., Hogg, E. H., Price, D. T., Edwards, J. & Williamson, T. Past and projected future changes in moisture conditions in the Canadian boreal forest. For. Chron. 90, 678–691 (2014).
2. Brown, P. T. & Caldeira, K. Greater future global warming inferred from Earth's recent energy budget. Nature 552, 45 (2017).
3. Cox, P. M., Huntingford, C. & Williamson, M. S. Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature 553, 319 (2018).
4. Barber, V. A., Juday, G. P. & Finney, B. P. Reduced growth of Alaskan white spruce in the twentieth century from temperature-induced drought stress. Nature 405, 668–673 (2000).
5. Hogg, E. H., Michaelian, M., Hook, T. I. & Undershultz, M. E. Recent climatic drying leads to age-independent growth reductions of white spruce stands in western Canada. Glob. Change Biol. 23, 5297–5308 (2017).
6. Walker, X. & Johnstone, J. F. Widespread negative correlations between black spruce growth and temperature across topographic moisture gradients in the boreal forest. Environ. Res. Lett. 9, 064016 (2014).
7. Walker, X. J., Mack, M. C. & Johnstone, J. F. Stable carbon isotope analysis reveals widespread drought stress in boreal black spruce forests. Glob. Change Biol. 21, 3102–3113 (2015).
8. Lloyd, A. H., Duffy, P. A. & Mann, D. H. Nonlinear responses of white spruce growth to climate variability in interior Alaska. Can. J. For. Res. 43, 331–343 (2013).
9. D'Orangeville, L. et al. Northeastern North America as a potential refugium for boreal forests in a warming climate. Science 352, 1452–1455 (2016).
10. Kauppi, P. E., Posch, M. & Pirinen, P. Large impacts of climatic warming on growth of boreal forests since 1960. PLoS One 9, e111340 (2014).
11. Schaphoff, S., Reyer, C. P. O., Schepaschenko, D., Gerten, D. & Shvidenko, A. Tamm review: observed and projected climate change impacts on Russia's forests and its carbon balance. For. Ecol. Manag. 361, 432–444 (2016).
12. Gauthier, S., Bernier, P., Kuuluvainen, T., Shvidenko, A. Z. & Schepaschenko, D. G. Boreal forest health and global change. Science 349, 819–822 (2015).
13. Drobyshev, I., Gewehr, S., Berninger, F. & Bergeron, Y. Species specific growth responses of black spruce and trembling aspen may enhance resilience of boreal forest to climate change. J. Ecol. 101, 231–242 (2013).
14. Huang, J.-G. et al. Impact of future climate on radial growth of four major boreal tree species in the eastern Canadian boreal forest. PLoS One 8, e56758 (2013).
15. Price, D. T. et al. Anticipating the consequences of climate change for Canada's boreal forest ecosystems. Environ. Rev. 21, 322–365 (2013).
16. Chapin, F. S. et al. Global change and the boreal forest: thresholds, shifting states or gradual change? AMBIO 33, 361–365 (2004).
17. Périé, C. & de Blois, S. Dominant forest tree species are potentially vulnerable to climate change over large portions of their range even at high latitudes. PeerJ 4, e2218 (2016).
18. Zhu, K., Woodall, C. W. & Clark, J. S. Failure to migrate: lack of tree range expansion in response to climate change. Glob. Change Biol. 18, 1042–1052 (2012).
19. Berdanier, A. B. & Clark, J. S. Multiyear drought-induced morbidity preceding tree death in southeastern U.S. forests. Ecol. Appl. 26, 17–23 (2016).
20. Buechling, A., Martin, P. H. & Canham, C. D. Climate and competition effects on tree growth in Rocky Mountain forests. J. Ecol. 105, 1636–1647 (2017).
21. Wyckoff, P. H. & Clark, J. S. Predicting tree mortality from diameter growth: a comparison of maximum likelihood and Bayesian approaches. Can. J. For. Res. 30, 156–167 (2000).
22. Wyckoff, P. H. & Clark, J. S. The relationship between growth and mortality for seven co-occurring tree species in the southern Appalachian Mountains. J. Ecol. 90, 604–615 (2002).
23. Nicklen, E. F., Roland, C. A., Ruess, R. W., Schmidt, J. H. & Lloyd, A. H. Local site conditions drive climate–growth responses of Picea mariana and Picea glauca in interior Alaska. Ecosphere 7, e01507 (2016).
24. Pedlar, J. H. & McKenney, D. W. Assessing the anticipated growth response of northern conifer populations to a warming climate. Sci. Rep. 7, 43881 (2017).
25. Yang, J., Pedlar, J. H., McKenney, D. W. & Weersink, A. The development of universal response functions to facilitate climate-smart regeneration of black spruce and white pine in Ontario, Canada. For. Ecol. Manag. 339, 34–43 (2015).
26. Beck, P. S. A. et al. Changes in forest productivity across Alaska consistent with biome shift. Ecol. Lett. 14, 373–379 (2011).
27. Way, D., Crawley, C. & Sage, R. A hot and dry future: warming effects on boreal tree drought tolerance. Tree Physiol. 33, 1003–1005 (2013).
28. Larcher, W. Physiological Plant Ecology. Ecophysiology and Stress Physiology of Functional Groups 4th edn (Springer-Verlag, 2003).
29. Hellmann, L. et al. Diverse growth trends and climate responses across Eurasia's boreal forest. Environ. Res. Lett. 11, 074021 (2016).
30. Bergeron, O. et al. Comparison of carbon dioxide fluxes over three boreal black spruce forests in Canada. Glob. Change Biol. 13, 89–107 (2007).
31. Boisvenue, C. & Running, S. W. Impacts of climate change on natural forest productivity—evidence since the middle of the 20th century. Glob. Change Biol. 12, 862–882 (2006).
32. Burke, M. J., Gusta, L. V., Quamme, H. A., Weiser, C. J. & Li, P. H. Freezing and injury in plants. Annu. Rev. Plant Physiol. 27, 507–528 (1976).
33. Gamache, I. & Payette, S. Height growth response of tree line black spruce to recent climate warming across the forest-tundra of eastern Canada. J. Ecol. 92, 835–845 (2004).
34. Gauthier, S. et al. Vulnerability of timber supply to projected changes in fire regime in Canada's managed forests. Can. J. For. Res. 45, 1439–1447 (2015).
35. D'Orangeville, L., Côté, B., Houle, D. & Morin, H. The effects of throughfall exclusion on xylogenesis of balsam fir. Tree Physiol. 33, 516–526 (2013).
36. Juday, G. P., Alix, C. & Grant III, T. A. Spatial coherence and change of opposite white spruce temperature sensitivities on floodplains in Alaska confirms early-stage boreal biome shift. For. Ecol. Manag. 350, 46–61 (2015).
37. Worrall, J. J. et al. Recent declines of Populus tremuloides in North America linked to climate. For. Ecol. Manag. 299, 35–51 (2013).
38. Thuiller, W. Patterns and uncertainties of species' range shifts under climate change. Glob. Change Biol. 10, 2020–2027 (2004).
39. Fitzpatrick, M. C. & Hargrove, W. W. The projection of species distribution models and the problem of non-analog climate. Biodivers. Conserv. 18, 2255 (2009).
40. Sheffield, J., Wood, E. F. & Roderick, M. L. Little change in global drought over the past 60 years. Nature 491, 435–438 (2012).
41. Davis, M. B., Shaw, R. G. & Etterson, J. R. Evolutionary responses to changing climate. Ecology 86, 1704–1714 (2005).
42. Thomson, A. M. & Parker, W. H. Boreal forest provenance tests used to predict optimal growth and response to climate change. 1. Jack pine. Can. J. For. Res. 38, 157–170 (2008).
43. Thomson, A. M., Riddell, C. L. & Parker, W. H. Boreal forest provenance tests used to predict optimal growth and response to climate change. 2. Black spruce. Can. J. For. Res. 39, 143–153 (2009).
44. Aitken, S. N., Yeaman, S., Holliday, J. A., Wang, T. & Curtis-McLane, S. Adaptation, migration or extirpation: climate change outcomes for tree populations. Evol. Appl. 1, 95–111 (2008).
45. Latutrie, M., Mérian, P., Picq, S., Bergeron, Y. & Tremblay, F. The effects of genetic diversity, climate and defoliation events on trembling aspen growth performance across Canada. Tree Genet. Genomes 11, 96 (2015).
46. Ministère des Forêts, de la Faune et des Parcs. Placettes-Échantillons Permanentes - Normes Techniques. Direction des Inventaires Forestiers (Québec, 2016).
47. Duchesne, L., D'Orangeville, L., Ouimet, R., Houle, D. & Kneeshaw, D. Extracting coherent tree-ring climatic signals across spatial scales from extensive forest inventory data. PLoS One 12, e0189444 (2017).
48. Cook, E. R. & Kairiukstis, L. A. Methods of Dendrochronology: Applications in the Environmental Sciences (Springer, Dordrecht, Netherlands, 1990).
49. Hoaglin, D. C., Iglewicz, B. & Tukey, J. W. Performance of some resistant rules for outlier labeling. J. Am. Stat. Assoc. 81, 991–999 (1986).
50. Régnière, J. Generalized approach to landscape-wide seasonal forecasting with temperature-driven simulation models. Environ. Entomol. 25, 869–881 (1996).
51. R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria, 2017).
52. Hogg, E. H., Barr, A. G. & Black, T. A. A simple soil moisture index for representing multi-year drought impacts on aspen productivity in the western Canadian interior. Agric. For. Meteorol. 178–179, 173–182 (2013).
53. Berner, L. T., Law, B. E. & Hudiburg, T. W. Water availability limits tree productivity, carbon stocks, and carbon residence time in mature forests across the western US. Biogeosciences 14, 365–378 (2017).
54. Wykoff, W. R., Crookston, N. L. & Stage, A. R. User's Guide to the Stand Prognosis Model. INT-133 (USDA For. Serv. Gen. Tech. Rep., 1982).
55. Hastie, T. & Tibshirani, R. Generalized additive models. Stat. Sci. 1, 297–318 (1986).
56. Araújo, M. B., Pearson, R. G., Thuiller, W. & Erhard, M. Validation of species–climate impact models under climate change. Glob. Change Biol. 11, 1504–1513 (2005).
57. Wood, S. N. Generalized Additive Models: An Introduction with R (Chapman & Hall/CRC, 2006).
58. Duan, N. Smearing estimate: a nonparametric retransformation method. J. Am. Stat. Assoc. 78, 605–610 (1983).
59. Ford, K. R. et al. Competition alters tree growth responses to climate at individual and stand scales. Can. J. For. Res. 47, 53–62 (2016).
60. Girardin, M. P., Guo, X. J., Bernier, P. Y., Raulier, F. & Gauthier, S. Changes in growth of pristine boreal North American forests from 1950 to 2005 driven by landscape demographics and species traits. Biogeosciences 9, 2523–2536 (2012).
61. Akaike, H. A new look at the statistical model identification. IEEE Trans. Automat. Control 19, 716–723 (1974).

Acknowledgements

Funding for this study was provided by a postdoctoral scholarship to L.D'O. by the MITACS Accelerate program and by the MFFP and Le Fond Vert du Ministère du Développement Durable, de l'Environnement et de la Lutte contre les Changements Climatiques du Québec within the framework of the Action Plan 2013–2018 on climate change. We gratefully acknowledge the staff of the Ministère des Forêts, de la Faune et des Parcs du Québec (MFFP) for the painstaking work related to tree core sampling, preparation, and measurements, Marie-Claude Lambert, who generated the meteorological data, and L.E. Robert for modeling advice. We thank Travis Logan at the Ouranos Consortium on Regional Climatology and Adaptation to Climate Change for processing climate scenarios from the NEX-GDDP dataset, prepared by the Climate Analytics Group and NASA Ames Research Center using the NASA Earth Exchange and distributed by the NASA Center for Climate Simulation.

Author information

Loïc D'Orangeville. Present address: Faculty of Forestry and Environmental Sciences, University of New Brunswick, 28 Dineen Drive, Fredericton, NB, E3B 5A3, Canada
Centre for Forest Research, Université du Québec à Montréal, Case Postale 8888, Succ.
Centre-Ville, Montreal, QC, H3C 3P8, Canada: Loïc D'Orangeville, Yves Bergeron & Daniel Kneeshaw
Direction de la Recherche Forestière, Ministère des Forêts, de la Faune et des Parcs du Québec, 2700 Einstein, Quebec City, QC, G1P 3W8, Canada: Daniel Houle & Louis Duchesne
Ouranos, 550 Rue Sherbrooke O, Montréal, QC, H3A 1B9, Canada: Daniel Houle
Department of Biology, Indiana University, 1001 East 3rd Street, Bloomington, IN, 47405-7005, USA: Richard P. Phillips
NSERC-UQAT-UQAM Industrial Chair in Sustainable Forest Management, Forest Research Institute, Université du Québec en Abitibi-Témiscamingue, 445 de l'Université, Rouyn-Noranda, QC, J9X 5E4, Canada: Yves Bergeron

Author contributions

L.D'O., L.D., and D.H. designed the study and methodology with substantial inputs from D.K. L.D'O. obtained and analyzed the data. L.D'O. wrote the first draft with substantial inputs from D.H. L.D'O., D.H., L.D., R.P.P., Y.B., and D.K. contributed to data interpretation and manuscript preparation. Correspondence to Loïc D'Orangeville.

D'Orangeville, L., Houle, D., Duchesne, L. et al. Beneficial effects of climate warming on boreal tree growth may be transitory. Nat Commun 9, 3213 (2018). https://doi.org/10.1038/s41467-018-05705-4
CommonCrawl
\begin{document} \graphicspath{{./PIC/}} \title{Further study on tensor absolute value equations} \author{Chen Ling \and Weijie Yan \and Hongjin He \and Liqun Qi} \institute{C. Ling\and W. Yan\and H. He \at Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou, 310018, China.\\ \email{[email protected]} \and W. Yan \at \email{[email protected]} \and H. He (\Letter) \at \email{[email protected]} \and L. Qi \at Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we consider the {\it tensor absolute value equations} (TAVEs), which is a newly introduced problem in the context of multilinear systems. Although the system of TAVEs is an interesting generalization of matrix {\it absolute value equations} (AVEs), the well-developed theory and algorithms for AVEs are not directly applicable to TAVEs due to the nonlinearity (or multilinearity) of the problem under consideration. Therefore, we first study the existence of solutions for some classes of TAVEs with the help of degree theory, in addition to showing, by fixed point theory, that the system of TAVEs has at least one solution under some checkable conditions. Then, we give a bound on the solutions of TAVEs for some special cases. To find a solution to TAVEs, we employ the generalized Newton method and report some preliminary results. \end{abstract} \keywords{Tensor absolute value equations \and ${\rm H}^+$-tensor \and P-tensor \and Copositive tensor \and Generalized Newton method.} \section{Introduction}\label{Introd} The system of {\it absolute value equations} (AVEs) investigated in the literature is given by \begin{equation}\label{AVEs} Ax-|x|=b, \end{equation} where $A\in \mathbb{R}^{n\times n}$, $b\in \mathbb{R}^n$, and $|x|$ denotes the componentwise absolute value of $x$.
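As a concrete illustration of the piecewise-linear structure of \eqref{AVEs} (a numerical sketch of ours, not part of the original development): on the orthant where ${\rm sign}(x)=d$, one has $|x|=Dx$ with $D={\rm diag}(d)$, so \eqref{AVEs} reduces to the linear system $(A-D)x=b$, and a tiny instance can be solved by enumerating all $2^n$ sign patterns.

```python
import itertools

import numpy as np

def solve_ave(A, b):
    """Solve A x - |x| = b by enumerating sign patterns.

    On the orthant where sign(x) = d, |x| = D x with D = diag(d), so the
    equation becomes the linear system (A - D) x = b.  A candidate is kept
    only if its signs are consistent with d.  Exponential in n, so this is
    meant only for tiny illustrative instances.
    """
    n = len(b)
    solutions = []
    for d in itertools.product([-1.0, 1.0], repeat=n):
        D = np.diag(d)
        try:
            x = np.linalg.solve(A - D, b)
        except np.linalg.LinAlgError:
            continue  # A - D singular for this sign pattern
        if np.all(D @ x >= -1e-12):  # sign consistency: |x| == D x
            if not any(np.allclose(x, s) for s in solutions):
                solutions.append(x)
    return solutions

A = np.array([[3.0, 1.0], [1.0, 4.0]])
b = np.array([2.0, -1.0])
for x in solve_ave(A, b):
    assert np.allclose(A @ x - np.abs(x), b)
```

This brute-force enumeration only makes the structure of \eqref{AVEs} tangible; for the chosen $A$ the smallest singular value exceeds $1$, which is known from \cite{MM06} to guarantee unique solvability for every $b$, and indeed exactly one sign pattern survives. Practical methods are discussed later in the paper.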
The importance of AVEs \eqref{AVEs} has been well documented in the monograph \cite{CPS92} due to its equivalence to the classical {\it linear complementarity problem}. More generally, Rohn \cite{R04} introduced the following problem \begin{equation}\label{GAVE} Ax+B|x|=b, \end{equation} where $A,B\in \mathbb{R}^{m\times n}$ and $b\in \mathbb{R}^m$. Clearly, \eqref{GAVE} covers \eqref{AVEs} by setting $B$ to be the negative identity matrix. In what follows, we also call such a general problem \eqref{GAVE} a system of AVEs for simplicity. Since the seminal work \cite{MM06} investigated the existence and nonexistence of solutions to the system of AVEs (\ref{AVEs}) in 2006, the system of AVEs has been studied extensively by many researchers. In the past decade, a series of interesting theoretical results, including NP-hardness \cite{M07,MM06}, solvability \cite{MM06,R04}, and equivalent reformulations \cite{M07,MM06,P09} of the system of AVEs, have been developed. Also, many efficient algorithms have been designed to solve the system of AVEs, e.g., see \cite{CQZ11,HHZ11,IIA15,M09,MYSC17,ZW09} and references therein. In the current numerical analysis literature, considerable interest has arisen in extending concepts from linear algebra to the setting of multilinear algebra, due to the power of multilinear algebra in real-world applications, e.g., see \cite{CCLQ18,GLY15,LN15,WQZ09} and the most recent monograph \cite{QCC18}. Therefore, in this paper, we consider the so-named {\it tensor absolute value equations} (TAVEs), which refers to the task of finding an $x\in{\mathbb R}^n$ such that \begin{equation}\label{TAVEs} {\cal A}x^{p-1}+\mathcal{B}|x|^{q-1} = b, \end{equation} where ${\cal A}$ is a $p$-th order $n$-dimensional square tensor, ${\cal B}$ is a $q$-th order $n$-dimensional square tensor, and $b\in \mathbb{R}^n$.
In this paper, we are more interested in the case of \eqref{TAVEs} with $p\geq q\geq 2$ due to its real-world applications listed later. Throughout, for given two integers $m$ and $n$, we call $\mathcal{A} = (a_{i_1i_2\ldots i_m} )$, where $a_{i_1i_2\ldots i_m}\in \mathbb{R}$ for $1 \leq i_1,i_2,\ldots,i_m \leq n$, a real $m$-th order $n$-dimensional square tensor. For notational simplicity, we denote the set of all real $m$-th order $n$-dimensional square tensors by $\mathbb{T}_{m,n}$. Given a tensor $\mathcal{A}=(a_{i_1i_2\ldots i_m})\in \mathbb{T}_{m,n}$ and a vector $x=(x_1,x_2,\ldots,x_n)^\top\in \mathbb{R}^n$, $\mathcal{A}{x}^{m-1}$ is defined as a vector, whose $i$-th component is given by \begin{equation}\label{Axm-1} (\mathcal{A}{x}^{m-1})_i=\sum_{i_2,\ldots,i_m=1}^na_{ii_2\ldots i_m}x_{i_2}\cdots x_{i_m}, \quad i=1,2,\ldots,n. \end{equation} Moreover, ${\mathcal A}x^{m-2}$ denotes an $n\times n$ matrix whose $ij$-th component is given by \begin{equation}\label{Axm-2} ({\mathcal A}x^{m-2})_{ij}:= \sum^n_{i_3, ..., i_m = 1} a_{ij i_3 ... i_m}x_{i_3}\cdots x_{i_m} ,\quad i,\;j=1,2,\ldots,n. \end{equation} Obviously, TAVEs \eqref{TAVEs} becomes the system of AVEs when both tensors ${\mathcal A}$ and ${\mathcal B}$ reduce to matrices, and in particular, TAVEs reduces to the multilinear system (i.e., by taking ${\mathcal B}|x|^{q-1}={\bm 0}$) studied in recent work \cite{DW16,HLQZ18,LXX17,XJW18}, which has found many important applications in data mining and numerical partial differential equations (e.g., see \cite{DW16,LN15}), to name just a few. Most recently, Du et al. \cite{DZCQ18} considered another special case of \eqref{TAVEs} with the setting of ${\mathcal B}$ being a negative $p$-th order $n$-dimensional unit tensor (i.e., ${\mathcal B}|x|^{q-1}$ reduces to $-|x|^{[p-1]}$), which is equivalent to a generalized tensor complementarity problem.
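For readers who wish to experiment, the contractions \eqref{Axm-1} and \eqref{Axm-2} are directly expressible as tensor contractions in NumPy; the following short sketch (an illustration of ours, not part of the paper) also verifies the identity $({\mathcal A}x^{m-2})x={\mathcal A}x^{m-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4                               # a 3rd-order, 4-dimensional tensor
A = rng.standard_normal((n,) * m)
x = rng.standard_normal(n)

# (A x^{m-1})_i = sum_{i2,i3} a_{i i2 i3} x_{i2} x_{i3}, a vector in R^n
Ax2 = np.einsum('ijk,j,k->i', A, x, x)

# (A x^{m-2})_{ij} = sum_{i3} a_{i j i3} x_{i3}, an n-by-n matrix
Ax1 = np.einsum('ijk,k->ij', A, x)

# contracting the matrix A x^{m-2} with x once more recovers A x^{m-1}
assert np.allclose(Ax1 @ x, Ax2)
```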
In particular, when we consider the case where $p>q=2$ and ${\mathcal B}$ is a negative identity matrix (i.e., ${\mathcal B}|x|^{q-1}=-|x|$), it is clear that the resulting TAVEs \eqref{TAVEs} is equivalent to the following generalized tensor complementarity problem $$ {\bm 0}\leq (\mathcal{A}x^{p-1}+x-b)\perp (\mathcal{A}x^{p-1}-x-b)\geq {\bm 0}. $$ Additionally, if we restrict the variable $x$ to be nonnegative, the system of TAVEs \eqref{TAVEs} is a fundamental model for characterizing the multilinear pagerank problem (e.g., see \cite{GLY15}). Hence, from the above two motivating examples, we are particularly concerned with the system of TAVEs \eqref{TAVEs} with the case where $p\geq q\geq 2$. It can be easily seen from the definition of the tensor-vector product (see \eqref{Axm-1}) that the system of TAVEs \eqref{TAVEs} is a special system of nonlinear equations. Hence, the theory and algorithms tailored for the system of AVEs are not directly applicable to TAVEs \eqref{TAVEs} due to the underlying nonlinearity (or multilinearity). Moreover, the potentially nonsmooth term ${\mathcal B}|x|^{q-1}$ in \eqref{TAVEs} makes the theoretical findings, including the existence and boundedness of solutions, different from those for smooth nonlinear equations. Therefore, one emergent question is whether the system of TAVEs \eqref{TAVEs} has solutions or not. If yes, which kinds of tensors in \eqref{TAVEs} ensure the existence of solutions? To answer these questions, most recently, Du et al. \cite{DZCQ18} first studied the special case of \eqref{TAVEs} with a negative unit tensor in the absolute value term (i.e., ${\mathcal B}$ is a negative unit tensor and $p=q$ in \eqref{TAVEs}), where they proved that such a reduced system has a solution for some structured tensors (e.g., ${\mathcal A}$ is a $Z$-tensor).
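The complementarity reformulation displayed above can be checked numerically: if $x$ solves ${\mathcal A}x^{p-1}-|x|=b$, then ${\mathcal A}x^{p-1}+x-b=|x|+x$ and ${\mathcal A}x^{p-1}-x-b=|x|-x$ are nonnegative and componentwise complementary. A small sketch of ours (illustrative only; the instance is manufactured by choosing $x$ first and setting $b={\mathcal A}x^{p-1}-|x|$):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 3
A = rng.standard_normal((n,) * p)         # a 4th-order, 3-dimensional tensor
x = rng.standard_normal(n)                # pick a "solution" first ...

Axp1 = np.einsum('ijkl,j,k,l->i', A, x, x, x)    # A x^{p-1}
b = Axp1 - np.abs(x)                             # ... then b, so A x^{p-1} - |x| = b

u = Axp1 + x - b                                 # equals |x| + x >= 0
v = Axp1 - x - b                                 # equals |x| - x >= 0
assert np.all(u >= -1e-12) and np.all(v >= -1e-12)
assert np.allclose(u * v, 0.0)                   # componentwise complementarity
```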
However, the appearance of a general tensor ${\mathcal B}$ in the absolute value term would completely change the existing results, including the existence of solutions and the algorithms, tailored for the special case of \eqref{TAVEs} studied in \cite{DZCQ18}. In this paper, we make a further study of TAVEs \eqref{TAVEs}. Specifically, we are interested in the general form \eqref{TAVEs}, where we allow the two tensors ${\mathcal A}$ and ${\mathcal B}$ to have different orders, but with $p\geq q\geq 2$ from an application perspective. First, we prove the nonemptiness and compactness of the solution set of general TAVEs with the help of degree theory, in addition to showing, by fixed point theory, that the system of TAVEs has at least one solution under some checkable conditions. Then, we derive a bound on the solutions of TAVEs in the special case $p=q$. Finally, to find a solution to the general form of TAVEs \eqref{TAVEs} (where we further allow $p<q$), we apply the well-developed generalized Newton method to the problem under consideration. The preliminary computational results show that the simplest generalized Newton method is, with high probability, a reliable solver for TAVEs. This paper is organized as follows. In Section \ref{Prelim}, we recall some definitions and basic properties about tensors. In Section \ref{Existence}, we present three sufficient conditions for the existence of solutions of the system of TAVEs. Here, the first two existence theorems are established via degree-theoretic ideas, and the last one is proved in the context of fixed point theory. Moreover, in Section \ref{Bounds}, we analyze the bound on solutions for the special case of TAVEs. To find solutions of TAVEs \eqref{TAVEs}, we employ the simplest generalized Newton method and investigate its numerical performance in Section \ref{Alg}. Finally, we complete this paper by drawing some concluding remarks in Section \ref{Conclusion}. \noindent{\bf Notation}.
As usual, $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors. $\mathbb{R}_+^n=\{x=(x_1,x_2,\ldots,x_n)^\top\in \mathbb{R}^n:x_i\geq 0,~~\forall~i=1,2,\ldots,n\}$. A vector of zeros in a real space of arbitrary dimension will be denoted by ${\bm 0}$. For any $x, y\in \mathbb{R}^n$, the Euclidean inner product is denoted by $x^\top y$, and the Euclidean norm $\|x\|$ is denoted as $\|x\|=\sqrt{x^\top x}$. For given $\mathcal{A}=(a_{i_1i_2\ldots i_m})\in \mathbb{T}_{m,n}$, if the entries $a_{i_1i_2\ldots i_m}$ are invariant under any permutation of their indices, then $\mathcal{A}$ is called a symmetric tensor. In particular, for every given index $i\in [n]:=\{1,2,\ldots,n\}$, if an $(m-1)$-th order $n$-dimensional square tensor $\mathcal{A}_i:=(a_{ii_2\ldots i_m})_{1 \leq i_2,\ldots,i_m \leq n}$ is symmetric, then $\mathcal{A}$ is called a semi-symmetric tensor with respect to the indices $\{i_2,\ldots,i_m\}$. For given $\mathcal{A}=(a_{i_1i_2\ldots i_m} )\in \mathbb{T}_{m,n}$, denote the $\infty$-norm of $\mathcal{A}$ by $$\|\mathcal{A}\|_{\infty} = \max\limits_{1\leq i\leq n}\displaystyle\sum_{i_2,\ldots,i_m = 1}^n |a_{ii_2\ldots i_m}|,$$ and the (squared) Frobenius norm of ${\mathcal A}$ is defined as the sum of the squares of its elements, i.e., $$\|{\mathcal A}\|_{\rm Frob}^2:=\sum_{i_1=1}^n\cdots\sum_{i_m=1}^n a_{i_1i_2\ldots i_m}^2.$$ Denote the unit tensor in $\mathbb{T}_{m,n}$ by $\mathcal{I}=(\delta_{i_1\ldots i_m})$, where $\delta_{i_1\ldots i_m}$ is the Kronecker symbol $$ \delta_{i_1\ldots i_m}=\left\{ \begin{array}{ll} 1,&\;\;{\rm if~}i_1=\ldots =i_m,\\ 0,&\;\;{\rm otherwise}. \end{array} \right. $$ With the notation \eqref{Axm-1}, we define ${\mathcal A}x^m = x^\top ({\mathcal A}x^{m-1})$ for ${\mathcal A}\in {\mathbb T}_{m,n}$ and $x\in {\mathbb R}^n$. Moreover, for a given scalar $s>0$, we denote $x^{[s]}=(x_1^s,x^s_2,\ldots,x_n^s)^\top\in {\mathbb R}^n$. 
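Several of these conventions are easy to check numerically; in particular, the unit tensor satisfies $\mathcal{I}x^{m-1}=x^{[m-1]}$. The following sketch (ours, for illustration only) builds $\mathcal{I}$, checks this identity, and evaluates $\|\mathcal{A}\|_\infty$ and $\|\mathcal{A}\|_{\rm Frob}^2$ for a sample tensor:

```python
import numpy as np

def unit_tensor(m, n):
    """The m-th order, n-dimensional unit tensor: entry 1 iff i1 = ... = im."""
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

m, n = 4, 3
I = unit_tensor(m, n)
x = np.array([1.0, -2.0, 0.5])

# I x^{m-1} = x^{[m-1]}, the vector of componentwise (m-1)-th powers
Ix = np.einsum('ijkl,j,k,l->i', I, x, x, x)
assert np.allclose(Ix, x ** (m - 1))

A = np.arange(n ** m, dtype=float).reshape((n,) * m)
# infinity norm: maximal sum of absolute entries over the first index
inf_norm = np.max(np.sum(np.abs(A).reshape(n, -1), axis=1))
# squared Frobenius norm: sum of squares of all entries
frob_norm_sq = np.sum(A ** 2)
```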
For a smooth (continuously differentiable) function $F:\mathbb{R}^n\rightarrow \mathbb{R}^n$, we denote the Jacobian of $F$ at $x\in \mathbb{R}^n$ by ${\mathscr D}F(x)$, which is an $n\times n$ matrix. \section{Preliminaries}\label{Prelim} In this section, we summarize some definitions and properties on tensors that will be used in the subsequent analysis. \begin{definition}\label{ERdef} Let ${\mathcal A}\in {\mathbb T}_{p,n}$. We say that ${\mathcal A}$ is an ${\rm H}^+$-tensor, if there exists no $(x,t)\in(\mathbb{R}^n\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that \begin{equation}\label{equation1.2} ({\mathcal A+t\mathcal{I}})x^{p-1}={\bm 0}, \end{equation} where $\mathcal{I}$ is a unit tensor in $\mathbb{T}_{p,n}$. In particular, ${\mathcal A}$ is called a ${\rm WH}^+$-tensor if there exists no $(x,t)\in(\mathbb{R}_+^n\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ satisfying \eqref{equation1.2}. \end{definition} When the order of ${\mathcal A}$ is $p=2$ (i.e., $\mathcal{A}$ is an $n\times n$ matrix), an ${\rm H}^+$-tensor $\mathcal{A}$ is also called an ${\rm H}^+$-matrix. It is obvious from \eqref{equation1.2} that ${\mathcal A}$ is an ${\rm H}^+$-matrix if and only if $\mathcal{A}$ has no non-positive eigenvalues. \begin{definition}[\cite{Qi13}]\label{copositive} Let $\mathcal{A} \in \mathbb{T}_{p,n}$. We say that $\mathcal{A}$ is a copositive (or strictly copositive) tensor, if $\mathcal{A} x^p\geq 0 ~({\rm or }\; \mathcal{A} x^p> 0)$ for any vector $x \in \mathbb{R}_+^n~({\rm or }\;x\in \mathbb{R}_+^n\backslash\{{\bm 0}\})$. \end{definition} \begin{definition}[\cite{SQ15}]\label{def2.3} Let $\mathcal{A} \in \mathbb{T}_{p,n}$. We say that $\mathcal{A}$ is a P-tensor, if it holds that $\max\limits_{1\leq i\leq n}x_i(\mathcal{A}x^{p-1})_i > 0$ for any vector $x \in \mathbb{R}^n\backslash\{{\bm 0}\}$. \end{definition} \begin{proposition}\label{P-WR} Let $\mathcal{A} \in \mathbb{T}_{p,n}$.
If $\mathcal{A}$ is a strictly copositive tensor, then $\mathcal{A}$ is a ${\rm WH}^+$-tensor. If $\mathcal{A}$ is a P-tensor, then $\mathcal{A}$ is an ${\rm H}^+$-tensor. \end{proposition} \begin{proof} Let $\mathcal{A}$ be a strictly copositive tensor. Suppose that $\mathcal{A}$ is not a ${\rm WH}^+$-tensor. Then, it follows from Definition \ref{ERdef} that there exists $(\bar x, \bar t)\in (\mathbb{R}_+^n\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that (\ref{equation1.2}) holds. Consequently, we know that ${\mathcal A}\bar x^p=-\bar t \sum_{i=1}^n\bar x_i^p\leq 0$, which contradicts the given condition. Let $\mathcal{A}$ be a P-tensor. Then $p$ must be even. Suppose that $\mathcal{A}$ is not an ${\rm H}^+$-tensor. Then, it follows from Definition \ref{ERdef} that there exists $(\bar x, \bar t)\in (\mathbb{R}^n\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that (\ref{equation1.2}) holds. Therefore, we have $$ \bar x_i(\mathcal{A}\bar x^{p-1})_i + \bar t\bar x^p_i=0, ~~~\forall ~i=1,2,\ldots,n, $$ which implies \begin{equation}\label{TThr} \max\limits_{1\leq i \leq n}\bar x_i(\mathcal{A}\bar x^{p-1})_i= -\min\limits_{1\leq i \leq n}\bar t \bar x_i^p\leq 0. \end{equation} This contradicts the condition that $\mathcal{A}$ is a P-tensor. The proof is completed. \qed\end{proof} We have shown that a strictly copositive tensor must be a ${\rm WH}^+$-tensor, but not conversely. The following example shows that a ${\rm WH}^+$-tensor is not necessarily a strictly copositive tensor. \begin{exam}\label{exam11-01} Consider the case where $p=2$ and $$ \mathcal{A}=\left[ \begin{array}{cc} 1&4\\ 1&-2 \end{array} \right]. $$ By taking $\bar x=(0,4)^\top\in \mathbb{R}_+^2\backslash \{{\bm 0}\}$, we know $\mathcal{A}\bar x^2=-32<0$, which means that $\mathcal{A}$ is not a strictly copositive tensor.
However, we claim that $\mathcal{A}$ is a ${\rm WH}^+$-tensor, i.e., there exists no $(x,t)\in(\mathbb{R}_+^2\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that \eqref{equation1.2} holds. Suppose that there exists $(\bar x,\bar t)\in(\mathbb{R}_+^2\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that \eqref{equation1.2} holds, i.e., \begin{equation}\label{eqta} \left\{ \begin{array}{l} \bar x_1+4\bar x_2+\bar t\bar x_1=0\\ \bar x_1-2\bar x_2+\bar t\bar x_2=0. \end{array} \right. \end{equation} Since $\bar x\neq {\bm 0}$, we have $$ \left| \begin{array}{cc} 1+\bar t&4\\ 1&\bar t-2 \end{array} \right|=0, $$ which implies $\bar t^2-\bar t-6=(\bar t-3)(\bar t+2)=0$. Since $\bar t\geq 0$, we obtain $\bar t=3$. Consequently, from \eqref{eqta}, we know that $\bar x_1+\bar x_2=0$, which contradicts the fact that $\bar x\in \mathbb{R}_+^2\backslash\{{\bm 0}\}$. \end{exam} It was proved by Qi \cite{Qi05} that H-eigenvalues exist for an even order real symmetric tensor $\mathcal{A}$, and $\mathcal{A}$ is {\it positive definite} (PD) if and only if all of its H-eigenvalues are positive, i.e., $\mathcal{A}$ is an ${\rm H}^+$-tensor. Hence, in the symmetric tensor case, the concepts of PD-, P- and ${\rm H}^+$-tensors are identical. We also know that if a tensor $\mathcal{A}\in \mathbb{T}_{p,n}$ is a P-tensor, then $p$ must be even, see \cite{YY14}. So, there does not exist an odd order symmetric ${\rm H}^+$-tensor. However, in the asymmetric case, the conclusion is not true, as shown by the following example, which also shows that an ${\rm H}^+$-tensor is not necessarily a P-tensor in the asymmetric case. \begin{exam}\label{exam11-1} Let $m=3$ and let ${\mathcal A}=(a_{i_1i_2i_3})\in\mathbb{T}_{3,2}$ with $a_{111}=a_{112}=a_{211}=a_{212}=a_{222}=1$, $a_{221}=-1$ and $a_{121}=a_{122}=0$. Then it is obvious that $\mathcal{A}$ is not a P-tensor, due to the fact that $m$ is an odd number.
Moreover, we claim that $\mathcal{A}$ is an ${\rm H}^+$-tensor, i.e., there is no $(x,t)\in(\mathbb{R}^2\backslash\{{\bm 0}\})\times\mathbb{R}_+$ such that \eqref{equation1.2} holds. In fact, for any $t\in \mathbb{R}$ and $x\in \mathbb{R}^2$ it holds that $$ (\mathcal{A}+t\mathcal{I})x^2= \left ( \begin{array}{c} (1+t)x_1^2+x_1x_2 \\ x_1^2+(1+t)x_2^2 \end{array}\right). $$ If there exists $(\bar x,\bar t)\in(\mathbb{R}^2\backslash\{{\bm 0}\})\times\mathbb{R}_+$ such that $(\mathcal{A}+\bar t\mathcal{I})\bar x^2={\bm 0}$, then $\bar x_1^2+(1+\bar t)\bar x_2^2=0$. Consequently, since $\bar t\geq 0$, we obtain $\bar x_1=\bar x_2=0$, which is a contradiction. \end{exam} \begin{remark} It is well known that the $P$-tensor is a generalization of the positive definite tensor, and many structured tensors, such as the even order strongly doubly nonnegative tensor \cite{LQ14}, the even order strongly completely positive tensor \cite{LQ14,QXX14} and the even order Hilbert tensor \cite{SQ14a}, are special types of positive definite tensors. Moreover, as shown in Proposition \ref{P-WR} and Example \ref{exam11-1}, the concept of an ${\rm H}^+$-tensor is a generalization of a $P$-tensor. The set of all $P$-tensors includes many classes of important structured tensors as proper subsets, for example, even order nonsingular $H$-tensors with positive diagonal entries \cite{DLQ15}, even order Cauchy tensors \cite{ChbQ15} with mutually distinct entries of the generating vector \cite{DLQ15}, even order strictly diagonally dominant tensors \cite{YY14}, and so on. If an even order $Z$-tensor $\mathcal{A}$ is a $B$-tensor \cite{SQ15b}, then $\mathcal{A}$ is also a $P$-tensor (see \cite[Th. 3.6]{YY14}). \end{remark} \begin{definition}\label{def2.2} Let $\Psi, \Phi:\mathbb{R}^n\rightarrow \mathbb{R}^n $ be two continuous functions.
We say that a set of elements $\{x^r\}_{r>0} \subset \mathbb{R}^n$ is an exceptional family of elements for $\Psi$ with respect to $\Phi$, if the following conditions are satisfied:\\ ~~~~ (1) $\|x^r\|\rightarrow \infty$ as $r \rightarrow \infty $,\\ ~~~~ (2) for each real number $r>0$, there exists $\mu_r>0$ such that $$\Psi(x^r)=-\mu_r \Phi(x^r).$$ \end{definition} \begin{definition}[\cite{Sha13}]\label{def1} Let $\mathcal{A}$ (and $\mathcal{B}$) be an order $p \geq 2$ (and order $q\geq 1$) dimension $n$ tensor, respectively. Define the product $\mathcal{A} \cdot \mathcal{B}$ to be the following tensor $\mathcal{C}$ of order $(p-1)(q-1)+1$ and dimension $n$: $$\mathcal{C}_{ij_1\ldots j_{p-1}} = \sum_{i_2,\ldots,i_p=1}^n ~a_{i i_2 \ldots i_p}b_{i_2 j_1} \cdots b_{i_p j_{p-1}},$$ where $i\in [n]$, and $j_1,\ldots,j_{p-1}\in[n]^{q-1}:=\overbrace{[n]\times\cdots\times[n]}^{q-1}$. \end{definition} \begin{remark} \label{remarkaB} When $q=1$ (i.e., $\mathcal{B}$ is a vector $x$), it is obvious that $\mathcal{A} \cdot x$ is a vector of dimension $n$, in this case, it holds that $\mathcal{A} \cdot x =\mathcal{A}x^{p-1}$; When $q=2$ (i.e., $\mathcal{B}$ is an $n\times n$ matrix), it is easy to check that $\mathcal{A} \cdot \mathcal{B} $ is a tensor of order $p$; Similarly, when $p=2$ (i.e., $\mathcal{A}$ is an $n\times n$ matrix), we know that $\mathcal{A} \cdot \mathcal{B} $ is a tensor of order $q$. Notice that, in the case when both $\mathcal{A}$ and $\mathcal{B}$ are matrices, or when $\mathcal{A}$ is a matrix and $\mathcal{B}$ is a vector, the tensor product $\mathcal{A}\cdot\mathcal{B}$ coincides with the usual matrix product. So it is a generalization of the matrix product. Here, we refer to \cite{Sha13} for more details. \end{remark} \begin{remark}\label{remark THd} Let $\mathcal{A}$ (and $\mathcal{B}, \mathcal{C}$) be an order $(p+1)$ (and order $(q+1)$, order $(m+1)$, respectively) dimension $n$ tensor.
Then it holds that $\mathcal{A}\cdot(\mathcal{B}\cdot\mathcal{C})=(\mathcal{A}\cdot\mathcal{B})\cdot\mathcal{C}$. It is easy to check that, when $\mathcal{A}_1$ and $\mathcal{A}_2$ have the same order, we have $(\mathcal{A}_1 + \mathcal{A}_2) \cdot \mathcal{B} = \mathcal{A}_1 \cdot \mathcal{B} + \mathcal{A}_2 \cdot \mathcal{B}$; when $\mathcal{A}$ is a matrix, we have $\mathcal{A}\cdot(\mathcal{B}_1 + \mathcal{B}_2) = \mathcal{A}\cdot\mathcal{B}_1 + \mathcal{A}\cdot\mathcal{B}_2$. \end{remark} \begin{definition}[\cite{Sha13}]\label{Inverse} Let $\mathcal{A}\in \mathbb{T}_{p,n}$ and $\mathcal{B}\in \mathbb{T}_{q,n}$. If $\mathcal{A}\cdot \mathcal{B}=\mathcal{I}$, then $\mathcal{A}$ is called an order $p$ left inverse of $\mathcal{B}$, and $\mathcal{B}$ is called an order $q$ right inverse of $\mathcal{A}$. \end{definition} From Definition \ref{Inverse}, we know that, for given $\mathcal{A}\in \mathbb{T}_{p,n}$, $\mathcal{A}$ has an order $2$ left inverse if and only if there exists a nonsingular $n\times n$ matrix $Q$ such that $\mathcal{A} = Q \cdot \mathcal{I}$. Moreover, $Q^{-1}$ is the unique order $2$ left inverse of $\mathcal{A}$. \begin{definition}[\cite{P10}]\label{def4} Let $\mathcal{A}\in \mathbb{T}_{p,n}$. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is an $n\times n$ matrix with the entries $M(\mathcal{A})_{ij} = a_{ij\ldots j}$ for all $i,j=1,2,\ldots,n$. \end{definition} In \cite{LL16,SY16}, it has been proved that, for given $\mathcal{A}\in \mathbb{T}_{p,n}$, $\mathcal{A}$ has the unique order $2$ left inverse $M(\mathcal{A})^{-1}$ if and only if $M(\mathcal{A})$ is nonsingular and $\mathcal{A}$ is row diagonal (see \cite{SY16}). \section{Existence of solutions for TAVEs}\label{Existence} In this section, we focus on studying the existence of solutions of TAVEs \eqref{TAVEs}. The main tools used here are degree-theoretic ideas.
We begin this section by recalling some concepts and well-developed necessary results that will play pivotal roles in the analysis. Suppose that $\Omega$ is a bounded open set in $\mathbb{R}^n$, $U: \bar{\Omega}\rightarrow \mathbb{R}^n$ is continuous and $b\not\in U(\partial \Omega)$, where $\bar{\Omega}$ and $\partial \Omega$ denote, respectively, the closure and boundary of $\Omega$. Then the degree of $U$ over $\Omega$ with respect to $b$ is defined, which is an integer and will be denoted by ${\rm deg} (U, \Omega, b)$ (see \cite{FFG95,LN78} for more details on degree theory). If $U(x)=b$ has a unique solution, say, $x^*\in \Omega$, then ${\rm deg}(U, \Omega, b)$ is constant over all bounded open sets $\Omega^\prime$ containing $x^*$ and contained in $\Omega$. Moreover, we recall the following two fundamental theorems, which can be found in \cite[p. 23]{I06}. \begin{theorem}[Kronecker's Theorem]\label{th4} Let $\Omega\subset \mathbb{R}^n$ be a bounded open set, $b\in \mathbb{R}^n$ and $U:\mathbb{R}^n\rightarrow \mathbb{R}^n$ be a continuous function. If ${\rm deg}(U,\Omega,b)$ is defined and non-zero, then the equation $U(x)=b$ has a solution in $\Omega$. \end{theorem} \begin{theorem}[Poincar\'{e}-Bohl Theorem]\label{th3} Let $\Omega\subset \mathbb{R}^n$ be a bounded open set, $b\in \mathbb{R}^n$ and $U, V:\mathbb{R}^n\rightarrow \mathbb{R}^n$ be two continuous functions. If for all $x \in \partial\Omega$ the line segment $[U(x),V(x)]$ does not contain $b$, then it holds that ${\rm deg}(U,\Omega,b) = {\rm deg}(V,\Omega,b)$. \end{theorem} By Theorems \ref{th4} and \ref{th3}, we have the following theorem. \begin{theorem}\label{th2.6} Let $G:\mathbb{R}^n\rightarrow \mathbb{R}^n $ be a continuous function. Suppose that $G(x)={\bm 0}$ has only one zero solution and ${\rm deg}(G,B_r,{\bm 0})\neq 0$ for any $r>0$, where $B_r= \{x \in \mathbb{R}^n:\|x\| < r\}$.
Then for the continuous function defined by \begin{equation}\label{F(x)} F(x)=\mathcal{A}x^{p-1}+\mathcal{B}|x|^{q-1}-b, \end{equation} there exists either a solution to $F(x)={\bm 0}$ or an exceptional family of elements for $F$ with respect to $G$. \end{theorem} \begin{proof} For any real number $r>0$, let us denote the spheres of radius $r$: $$S_r = \{x \in \mathbb{R}^n:\|x\| = r\}.$$ Obviously, we have $\partial B_r = S_r$. Consider the homotopy between the functions $G$ and $F$, which is defined by: \begin{equation} H(x,t) = tG(x) + (1-t)F(x), ~~\forall~(x,t) \in S_r \times [0,1]. \end{equation} We now apply Theorem \ref{th3} to $H$. There are two cases:\par ~(i) There exists an $r>0$ such that $H(x,t)\neq {\bm 0}$ for any $x\in S_r$ and $t \in [0,1]$. Then by Theorem \ref{th3}, we know that ${\rm deg}(F,B_r,{\bm 0})={\rm deg}(G,B_r,{\bm 0})$. Consequently, it follows from ${\rm deg}(G,B_r,{\bm 0}) \neq 0$ that ${\rm deg}(F,B_r,{\bm 0}) \neq 0$. Moreover, by Theorem \ref{th4}, we know that the ball $B_r$ contains at least one solution to the equation $F(x)={\bm 0}$.\par ~(ii) For each $r>0$, there exists a vector $x^r \in S_r$ (i.e., $\|x^r\|=r$) and a scalar $t_r \in [0,1]$ such that \begin{equation} H(x^r,t_r)={\bm 0}. \end{equation} If $t_r = 0$, then $x^r$ solves equation $F(x) = {\bm 0}$. If $t_r = 1$, then by the definition of $H(x,t)$ we obtain $$t_rG(x^r)+ (1-t_r)F(x^r) = G(x^r) = {\bm 0},$$ which implies $x^r={\bm 0}$, since $G(x)={\bm 0}$ has only one zero solution. It contradicts the fact that $\|x^r\| = r > 0$. If $0 < t_r < 1$, then by the definition of $H(x,t)$, we obtain $$\frac{t_r}{(1-t_r)} G(x^r) + F(x^r)= \frac{1}{1-t_r}(t_r G(x^r) + (1-t_r)F(x^r)) = {\bm 0}. $$ Letting $\mu_r = \frac{t_r}{1-t_r}$, we have $F(x^r)+\mu_r G(x^r)={\bm 0}$. Due to the fact that $\|x^r\| = r$, we know that $\|x^r\|\rightarrow \infty$ as $r \rightarrow \infty $. 
Thus, from Definition {\ref{def2.2}}, we know that $\{x^r\}$ is an exceptional family of elements for $F$ with respect to $G$. \qed\end{proof} We now state and prove some existence results on solutions of \eqref{TAVEs}. To this end, we first present the following lemma. \begin{lemma}\label{Lemma01} Let $m\geq 2$ be a given integer. Then for any vector $x\in \mathbb{R}^n$, it holds that $$ \|x\|^{m-1}\leq n^{\frac{m-2}{2}}\|x^{[m-1]}\|. $$ \end{lemma} \begin{proof} The desired result can be proved by the well-known H\"{o}lder inequality. \qed\end{proof} We now turn to our first existence theorem, which shows that, in the case where $p>q\geq 2$ and $p$ is even, the system of TAVEs \eqref{TAVEs} has a nonempty and compact solution set if ${\mathcal A}$ is an ${\rm H}^+$-tensor. \begin{theorem}\label{Exists} Let ${\mathcal A}\in {\mathbb T}_{p,n}$ and ${\mathcal B}\in {\mathbb T}_{q,n}$. Suppose that $p$ is an even number satisfying $p>q\geq 2$ and ${\mathcal A}$ is an ${\rm H}^+$-tensor. Then the solution set of \eqref{TAVEs}, denoted by ${\rm SOL}(\mathcal{A},\mathcal{B},b)$, is a nonempty compact set for any $b\in \mathbb{R}^n$. \end{theorem} \begin{proof} We first prove that the equation \eqref{TAVEs} always has a solution for any $b\in \mathbb{R}^n$. Letting $G(x)=x^{[p-1]}$, it is easy to see that $G(x)={\bm 0}$ has only one zero solution. Moreover, since ${\bm 0}$ is a critical point of $G$, that is, the determinant of the Jacobian matrix of $G$ at ${\bm 0}$ is zero (i.e., ${\rm det}({\mathscr D}G({\bm 0}))=0$), it follows from Sard's Lemma (see \cite[p. 9]{FFG95}) and Definition 1.9 in \cite[p. 14]{FFG95} that ${\rm deg}(G,B_r,{\bm 0})\neq 0$ for any $r>0$. Suppose that the equation $F(x)={\bm 0}$ does not have solutions, where $F(x)$ is given by \eqref{F(x)}.
Then by Theorem \ref{th2.6}, we know that there exists an exceptional family of elements $\{x^r\}_{r>0}$ of $F$ with respect to $G$, i.e., $\{x^r\}_{r>0}$ satisfies $\|x^r\|\rightarrow \infty$ as $r \rightarrow \infty $, and for each real number $r>0$ there exists a $\mu_r>0$ such that $$\mathcal{A}(x^r)^{p-1}+\mathcal{B}|x^r|^{q-1}-b = -\mu_r(x^r)^{[p-1]},$$ which implies \begin{equation}\label{TTr} \mathcal{A}(\bar x^r)^{p-1}+\frac{1}{\|x^r\|^{p-q}}\mathcal{B}|\bar x^r|^{q-1}-\frac{b}{\|x^r\|^{p-1}}= -\mu_r(\bar x^r)^{[p-1]}, \end{equation} where $\bar x^r=x^r/\|x^r\|$ for any $r$. Since $\|\bar x^r\|=1$ for any $r$, by Lemma \ref{Lemma01}, we know that $n^{-\frac{p-2}{2}}\leq \|(\bar x^r)^{[p-1]}\|$ for any $r$. Consequently, by (\ref{TTr}), it holds that $$ n^{-\frac{p-2}{2}}\mu_r\leq \left\|\mathcal{A}(\bar x^r)^{p-1}+\frac{1}{\|x^r\|^{p-q}}\mathcal{B}|\bar x^r|^{q-1}-\frac{b}{\|x^r\|^{p-1}}\right\|. $$ Hence, since $\|x^r\|\rightarrow \infty$ as $r \rightarrow \infty $ and $\|\bar x^r\|=1$ for any $r$, we conclude that $\{\mu_r\}_{r>0}$ is bounded. Without loss of generality, we assume that $\bar x^r\rightarrow \bar{x}$ and $\mu_r\rightarrow \bar t$ as $r \rightarrow \infty$. From (\ref{TTr}), by taking $r \rightarrow \infty$, there exists $(\bar{x},\bar t)\in(\mathbb{R}^n\backslash\{ {\bm 0}\})\times\mathbb{R}_+$ such that $$(\mathcal{A} + \bar t\mathcal{I})\bar{x}^{p-1} = {\bm 0},$$ which contradicts the given condition that $\mathcal{A}$ is an ${\rm H}^+$-tensor. Next, we prove the compactness of the solution set ${\rm SOL}(\mathcal{A},\mathcal{B},b)$. It is obvious that ${\rm SOL}(\mathcal{A},\mathcal{B},b)$ is closed. We now prove that ${\rm SOL}(\mathcal{A},\mathcal{B},b)$ is bounded for any $b\in \mathbb{R}^n$.
Suppose that ${\rm SOL}(\mathcal{A},\mathcal{B},\bar b)$ is unbounded for some $\bar b\in \mathbb{R}^n$. Then there exists a sequence $\{x^r\}_{r=1}^\infty$ satisfying $\|x^r\|\rightarrow\infty$ as $r\rightarrow\infty$, such that $ \mathcal{A}(x^r)^{p-1}+\mathcal{B}|x^r|^{q-1}=\bar b$, which implies \begin{equation}\label{Bound} \mathcal{A}(\bar x^r)^{p-1}+\frac{1}{\|x^r\|^{p-q}}\mathcal{B}|\bar x^r|^{q-1}=\frac{\bar b}{\|x^r\|^{p-1}}, \end{equation} where $\bar x^r=x^r/\|x^r\|$. Without loss of generality, we assume that $\bar x^r\rightarrow \bar{x}$ as $r \rightarrow \infty$. It is clear that $\bar x\neq{\bm 0}$. Consequently, by letting $r\rightarrow \infty$ in (\ref{Bound}), we know $\mathcal{A}\bar x^{p-1}={\bm 0}$, which means that there exists $(\bar x,0)\in (\mathbb{R}^n\backslash\{{\bm 0}\})\times \mathbb{R}_+$ such that \eqref{equation1.2} holds. It is a contradiction. We complete the proof. \qed\end{proof} From Theorem \ref{Exists} and Proposition \ref{P-WR}, we immediately obtain the following corollary. \begin{corollary}\label{corollary1} Let $\mathcal{A}\in \mathbb{T}_{p,n}$. If $\mathcal{A}$ is a $P$-tensor, then for any $\mathcal{B}\in \mathbb{T}_{q,n}$ with $2\leq q<p$, the system of TAVEs \eqref{TAVEs} has at least one solution. \end{corollary} After the discussions on the case $p>q$, one may be further concerned with the case $p=q$. Below, we give an answer on the existence of solutions for the case $p=q$. We further make the following assumption on the underlying tensors $\mathcal{A}$ and $\mathcal{B}$. \begin{assum}\label{Assum00} Let ${\mathcal A},{\mathcal B}\in {\mathbb T}_{p,n}$. Suppose that $(\mathcal{A}+t\mathcal{I})x^{p-1}+\mathcal{B}|x|^{p-1}={\bm 0}$ has no solution for $(x,t)\in (\mathbb{R}^n\backslash \{{\bm 0}\})\times \mathbb{R}_+$. \end{assum} Notice that the set of tensor pairs $({\mathcal A},{\mathcal B})$ satisfying Assumption \ref{Assum00} is nonempty, which can be shown by the following example.
\begin{exam}\label{exam11-11} Let ${\mathcal A}=(a_{i_1i_2i_3i_4})\in\mathbb{T}_{4,2}$ with $a_{2111}=1$, $a_{1222}=-2$ and all other $a_{i_1i_2i_3i_4}=0$. Let ${\mathcal B}=(b_{i_1i_2i_3i_4})\in\mathbb{T}_{4,2}$ with $b_{1111}=-1$, $b_{2222}=1$ and all other $b_{i_1i_2i_3i_4}=0$. We claim that $(\mathcal{A}+t\mathcal{I})x^{3}+\mathcal{B}|x|^{3}={\bm 0}$ has no solution $(x,t)\in (\mathbb{R}^2\backslash \{{\bm 0}\})\times \mathbb{R}_+$. In fact, suppose that there exists $(\bar x,\bar t)\in (\mathbb{R}^2\backslash \{{\bm 0}\})\times \mathbb{R}_+$ such that $(\mathcal{A}+\bar t\mathcal{I})\bar x^{3}+\mathcal{B}|\bar x|^{3}={\bm 0}$; then \begin{equation}\label{TTde} \left\{ \begin{array}{l} (\bar t-\bar \delta_1)\bar x_1^3-2\bar x_2^3=0,\\ \bar x_1^3+(\bar t+\bar \delta_2)\bar x_2^3=0, \end{array} \right. \end{equation} where $\bar \delta_1={\rm sign}(\bar x_1)$, $\bar \delta_2={\rm sign}(\bar x_2)$, and $${\rm sign}(\tau)=\left\{\begin{array}{rl} 1, &\quad \tau>0,\\ 0, & \quad \tau =0,\\ -1, & \quad \tau<0. \end{array}\right.$$ Since $|\bar \delta_1|\leq 1$ and $|\bar \delta_2|\leq 1$ by the definition of ${\rm sign}(\tau)$, the discriminant of the quadratic $\bar t^2+(\bar \delta_2-\bar \delta_1)\bar t-\bar \delta_1\bar \delta_2+2$ satisfies $$\Delta:=(\bar \delta_2-\bar \delta_1)^2-4(2-\bar \delta_1\bar \delta_2)\leq 2(\bar \delta_1\bar \delta_2-3)<0,$$ which implies that the determinant of the coefficient matrix of \eqref{TTde} satisfies $$ \left| \begin{array}{cc} \bar t-\bar \delta_1&-2\\ 1&\bar t+\bar \delta_2 \end{array} \right|=\bar t^2+(\bar \delta_2-\bar \delta_1)\bar t-\bar \delta_1\bar \delta_2+2>0. $$ As a consequence, the only solution of \eqref{TTde} is $\bar x_1=\bar x_2=0$, which contradicts $\bar x\neq {\bm 0}$. \end{exam} \begin{theorem}\label{Exists2} Let ${\mathcal A},{\mathcal B}\in {\mathbb T}_{p,n}$, where $p\geq 3$ is an even number. Suppose that $(\mathcal{A},{\mathcal B})$ satisfies Assumption \ref{Assum00}. Then the solution set ${\rm SOL}(\mathcal{A},\mathcal{B},b)$ of \eqref{TAVEs} is a nonempty compact set for any $b\in \mathbb{R}^n$.
\end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{Exists} and is omitted for brevity. \qed\end{proof} To close this section, motivated by \cite{LLV18}, we state and prove the following theorem, which gives a more easily checkable condition for the existence of solutions of \eqref{TAVEs} with $p=q$. \begin{theorem}\label{Th3} Let $\mathcal{A},\mathcal{B}\in \mathbb{T}_{p,n}$. Suppose that $p$ is even, and $\mathcal{A}$ has the unique order $2$ left inverse $M(\mathcal{A})^{-1}$. If $\|M(\mathcal{A})^{-1}\cdot\mathcal{B}\|_\infty < 1$, then \eqref{TAVEs} with $p=q$ has at least one solution for any $b\in \mathbb{R}^n$. \end{theorem} \begin{proof} When $b = {\bm 0}$, it is clear that \eqref{TAVEs} has the zero solution. Now we assume that $b \neq {\bm 0}$. Let $\mathcal{G} =(g_{i_1\ldots i_p})_{1\leq i_1,\ldots,i_p\leq n}= M(\mathcal{A})^{-1}\cdot\mathcal{B}$ and $h= (h_i)_{1\leq i\leq n} =M(\mathcal{A})^{-1}b$. By the given condition, we have $\|\mathcal{G}\|_\infty < 1$. Take a parameter $\tau>0$ with $\tau^{p-1}\geq\frac{\|h\|_\infty}{1-\|\mathcal{G}\|_\infty}$. Set $$\Omega = \left\{x=(x_1,x_2,\ldots,x_n)^\top\in \mathbb{R}^n:|x_i|\leq\tau,\ i\in[n]\right\}$$ and $$f(x) = \left(M(\mathcal{A})^{-1}b-M(\mathcal{A})^{-1}\cdot\mathcal{B}\cdot|x|\right)^{\left[\frac{1}{p-1}\right]}.$$ It is obvious that $\Omega$ is a closed convex set in $\mathbb{R}^n$ and $f$ is continuous. It then follows from the definition of $f$ that \begin{align*} |f(x)_i| & = \left|\left((M(\mathcal{A})^{-1}b)_i-(M(\mathcal{A})^{-1}\cdot\mathcal{B}\cdot|x| )_i\right)^{\frac{1}{p-1}}\right|\\ & = \left|h_i-\displaystyle\sum_{i_2,\ldots,i_p=1}^n g_{ii_2\ldots i_p}|x_{i_2}|\cdots|x_{i_p}|\right|^{\frac{1}{p-1}}\\ & \leq\left(\|h\|_\infty+\|\mathcal{G}\|_\infty\tau^{p-1}\right)^{\frac{1}{p-1}}\\ &\leq \tau, \end{align*} which shows that $f$ maps the set $\Omega$ into itself.
By Brouwer's Fixed Point Theorem (see \cite[p. 125]{Rhe98} or \cite[p. 377]{AH65}), there exists a vector $\bar x\in\Omega$ such that $f(\bar x) = \bar x$, that is, $$\left(M(\mathcal{A})^{-1}b-M(\mathcal{A})^{-1}\cdot\mathcal{B}\cdot|\bar x|\right)^{\left[\frac{1}{p-1}\right]} = \bar x.$$ Consequently, we have \begin{equation}\label{equ} M(\mathcal{A})^{-1}b-M(\mathcal{A})^{-1}\cdot\mathcal{B}\cdot|\bar x|= {\bar x}^{[p-1]}=\mathcal{I}\cdot \bar x, \end{equation} where $\mathcal{I}$ is the unit tensor in $\mathbb{T}_{p,n}$. By Remarks \ref{remarkaB} and \ref{remark THd}, we have that $M(\mathcal{A})\cdot M(\mathcal{A})^{-1}b=b$ and $M(\mathcal{A})\cdot M(\mathcal{A})^{-1}\cdot \mathcal{B}\cdot|\bar x|=\mathcal{B}|\bar x|^{p-1}$. Hence, multiplying both sides of equation (\ref{equ}) by $M(\mathcal{A})$ leads to $$b-\mathcal{B}\cdot|\bar x|= M(\mathcal{A})\cdot\mathcal{I}\cdot \bar x = \mathcal{A}\cdot \bar x,$$ which implies $ \mathcal{A}{\bar x}^{p-1}+\mathcal{B}|\bar x|^{p-1} = b$. Therefore, \eqref{TAVEs} has at least one solution. We complete the proof.\qed\end{proof} \section{Bound of solutions}\label{Bounds} In this section, we focus on studying the bound of solutions of \eqref{TAVEs} for the special case $p=q$. We begin with introducing the following concepts on tensors. \begin{definition}\label{NonsingularDef} Let $\mathcal{A}\in \mathbb{T}_{p,n}$, and let $K$ be a given closed convex cone in $\mathbb{R}^n$. We say that $\mathcal{A}$ is $K$-singular, if $\mathcal{A}$ satisfies $$\{x\in K\backslash\{\bm 0\}~|~\mathcal{A}x^{p-1}={\bm 0}\}\neq \emptyset.$$ Otherwise, we say that $\mathcal{A}$ is $K$-nonsingular. In particular, we say that $\mathcal{A}$ is singular, if $\mathcal{A}$ satisfies $$\{x\in \mathbb{R}^n\backslash\{\bm 0\}~|~\mathcal{A}x^{p-1}={\bm 0}\}\neq \emptyset.$$ Otherwise, we say that $\mathcal{A}$ is nonsingular. \end{definition} \begin{lemma}\label{aapos} Let $\mathcal{A}\in \mathbb{T}_{p,n}$. Suppose that $\mathcal{A}$ is nonsingular. 
Then $\lambda(\mathcal{A})>0$, where $\lambda(\mathcal{A})$ is the optimal value of the following problem $$ \begin{array}{cl} {\rm min}& \phi_{\mathcal{A}}(x):=\|\mathcal{A}x^{p-1}\|^2\\ {\rm s.t.}&\|x\|=1. \end{array}$$ \end{lemma} \begin{proof} For any given $\mathcal{A}\in \mathbb{T}_{p,n}$, the objective function $\phi_{\mathcal{A}}(x)$ is continuous on the compact set $\{x\in \mathbb{R}^n~:~\|x\|=1\}$. Hence the optimal value $\lambda(\mathcal{A})$ is attained and is nonnegative. Now, we turn to proving that $\lambda(\mathcal{A})>0$. Suppose that $\lambda(\mathcal{A})=0$; then there exists an $\bar x\in \mathbb{R}^n$ with $\|\bar x\|=1$ such that $\|\mathcal{A}\bar x^{p-1}\|^2=0$, which implies $\mathcal{A}\bar x^{p-1}={\bm 0}$. This contradicts the condition that $\mathcal{A}$ is nonsingular. Hence, we conclude that $\lambda({\mathcal A})>0$. \qed\end{proof} For any given $\mathcal{A}=(a_{i_1i_2\ldots i_p})\in \mathbb{T}_{p,n}$, define the $2(p-1)$-th order $n$-dimensional square tensor $\mathcal{C}$ by $$ c_{i_1\ldots i_{p-1}j_1\ldots j_{p-1}}=\sum_{i=1}^na_{ii_1\ldots i_{p-1}}a_{ij_1\ldots j_{p-1}}. $$ It is clear that, when $p=2$ (i.e., $\mathcal{A}$ is an $n\times n$ matrix $A$), the tensor $\mathcal{C}$ defined above is exactly $A^\top A$. It is easy to see that $\phi_{\mathcal{A}}(x)=\mathcal{C}x^{2(p-1)}$ for any $x\in \mathbb{R}^n$. Moreover, if $\mathcal{A}$ is nonsingular, then $\mathcal{C}$ is a positive definite tensor. Furthermore, by Theorem 5 in \cite{Qi05}, we know that $\mathcal{C}$ is positive definite if and only if all of its $Z$-eigenvalues are positive. Indeed, the optimal value $\lambda(\mathcal{A})$ in Lemma \ref{aapos} is exactly the smallest $Z$-eigenvalue of the tensor $\mathcal{C}$. \begin{proposition}\label{PropAA} Let ${\mathcal A}=(a_{i_1\ldots i_{p}})_{1\leq i_1,\ldots,i_{p}\leq n}\in \mathbb{T}_{p,n}$ with $p\geq 3$.
For any $x, \tilde{x}\in \mathbb{R}^n$ and $i,j\in [n]$, it holds that $$ \left|\left({\mathcal A} x^{p-2}-{\mathcal A}\tilde{x}^{p-2}\right)_{ij}\right|\leq \|{\mathcal A}_{ij}\|_{\rm Frob}\|{x}-\tilde{x}\|\sum_{l=0}^{p-3}\|{x}\|^{p-l-3}\|\tilde{x}\|^{l},$$ where ${\mathcal A}_{ij}:=(a_{iji_3\ldots i_{p}})_{1\leq i_3,\ldots,i_{p}\leq n}\in \mathbb{T}_{p-2,n}$. \end{proposition} \begin{proof} For any $x, \tilde{x}\in \mathbb{R}^n$ and every $0\leq l\leq p-2$, denote by ${\mathcal A}x^{p-2-l}\tilde{x}^{l}$ the $n\times n$ matrix whose $ij$-th component is given by $$ ({\mathcal A}x^{p-2-l}\tilde{x}^{l})_{ij}=\sum_{i_3,\ldots,i_p=1}^na_{iji_3\ldots i_{p}}x_{i_3}\cdots x_{i_{p-l}}\tilde{x}_{i_{p-l+1}}\cdots \tilde{x}_{i_p}. $$ It is easy to see that for every $0\leq l\leq p-3$, \begin{align*} &\left|\left({\mathcal A}x^{p-2-l}\tilde{x}^{l}-{\mathcal A}x^{p-3-l}\tilde{x}^{l+1}\right)_{ij}\right|\\ &\leq \sum_{i_3,\ldots,i_p=1}^n\left|a_{iji_3\ldots i_{p}}x_{i_3}\cdots x_{i_{p-l-1}}(x_{i_{p-l}}-\tilde{x}_{i_{p-l}})\tilde{x}_{i_{p-l+1}}\cdots \tilde{x}_{i_p}\right|\\ &\leq\|{\mathcal A}_{ij}\|_{\rm Frob}\|{x}-\tilde{x}\|\|{x}\|^{p-l-3}\|\tilde{x}\|^{l}, \end{align*} where the second inequality follows from the Cauchy--Schwarz inequality. Furthermore, since $$ \left|\left({\mathcal A}{x}^{p-2}-{\mathcal A}{\tilde{x}}^{p-2}\right)_{ij}\right|\leq \sum_{l=0}^{p-3}\left|({\mathcal A}{ x}^{p-2-l}{\tilde{x}}^{l}-{\mathcal A}{x}^{p-3-l}{\tilde{x}}^{l+1})_{ij}\right|, $$ it holds that $$ \left|\left({\mathcal A}{x}^{p-2}-{\mathcal A}{\tilde{x}}^{p-2}\right)_{ij}\right|\leq \|{\mathcal A}_{ij}\|_{\rm Frob}\|{ x}-\tilde{x}\|\sum_{l=0}^{p-3}\|{x}\|^{p-3-l}\|\tilde{x}\|^l. $$ We obtain the desired result and complete the proof. \qed\end{proof} By arguments similar to those in the proof of Proposition \ref{PropAA}, we can prove the following proposition. \begin{proposition}\label{PropAB} Let ${\mathcal A}=(a_{i_1\ldots i_{p}})_{1\leq i_1,\ldots,i_{p}\leq n}\in \mathbb{T}_{p,n}$.
For any $x,\tilde{x}\in \mathbb{R}^n$ and $i\in [n]$, it holds that $$ \left|\left({\mathcal A}x^{p-1}-{\mathcal A}\tilde{x}^{p-1}\right)_i\right|\leq \|{\mathcal A}_i\|_{\rm Frob}\|x-\tilde{x}\|\sum_{l=0}^{p-2}\|x\|^l\|\tilde{x}\|^{p-l-2},$$ where ${\mathcal A}_{i}:=(a_{ii_2\ldots i_{p}})_{1\leq i_2,\ldots,i_{p}\leq n}\in \mathbb{T}_{p-1,n}$. \end{proposition} Applying Proposition \ref{PropAB} to the case where $\tilde{x}={\bm 0}$, we immediately have \begin{equation}\label{Axnorm} \|{\mathcal A}x^{p-1}\|\leq \|\mathcal{A}\|_{\rm Frob}\|x\|^{p-1}, \quad \forall x\in \mathbb{R}^n. \end{equation} \begin{theorem}\label{boundsolution} Let $\mathcal{A}\in \mathbb{T}_{p,n}$. Suppose that $\mathcal{A}$ is nonsingular. Then for any $\mathcal{B}\in \mathbb{T}_{p,n}$ satisfying $\|\mathcal{B}\|_{\rm Frob}<\sqrt{\lambda(\mathcal{A})}$, it holds that $$ \|x\|\leq \frac{(\sigma+\|b\|)^{\frac{1}{p-1}}}{\lambda(\mathcal{A})^{\frac{1}{2(p-1)}}-\|\mathcal{B}\|_{\rm Frob}^{\frac{1}{p-1}}} , \quad \forall x\in L_\sigma,$$ where $L_\sigma:=\{x\in \mathbb{R}^n~:~\|F(x)\|\leq \sigma\}$ and $F(x)$ is defined by \eqref{F(x)} with $p=q$. \end{theorem} \begin{proof} For any $x\in L_\sigma$, it holds that \begin{align}\label{mmnorm} \|F(x)\|&=\|\mathcal{A}x^{p-1}+\mathcal{B}|x|^{p-1}-b\| \nonumber\\ &\geq \|\mathcal{A}x^{p-1}\|-\|\mathcal{B}|x|^{p-1}\|-\|b\| \nonumber \\ &\geq \sqrt{\lambda(\mathcal{A})}\|x\|^{p-1}-\|\mathcal{B}\|_{\rm Frob}\|x\|^{p-1}-\|b\|, \end{align} where the last inequality comes from Lemma \ref{aapos}, inequality (\ref{Axnorm}) and the fact that $\||x|\|=\|x\|$.
By (\ref{mmnorm}), we obtain $$ \sqrt{\lambda(\mathcal{A})}\|x\|^{p-1}\leq \|F(x)\|+\|b\|+\|\mathcal{B}\|_{\rm Frob}\|x\|^{p-1}, $$ which, together with the elementary inequality $(u+v)^{\frac{1}{p-1}}\leq u^{\frac{1}{p-1}}+v^{\frac{1}{p-1}} $ for any $u, v\in \mathbb{R}_+$, implies \begin{equation}\label{jjmmnorm} \lambda(\mathcal{A})^{\frac{1}{2(p-1)}}\|x\|\leq (\|F(x)\|+\|b\|)^{\frac{1}{p-1}}+\|\mathcal{B}\|_{\rm Frob}^{\frac{1}{p-1}}\|x\|. \end{equation} Hence, since $\|F(x)\|\leq \sigma$ for any $x\in L_\sigma$, we obtain the desired result and complete the proof. \qed\end{proof} \begin{remark}\label{solutionRem} Let $\mathcal{A}, \mathcal{B}\in \mathbb{T}_{p,n}$. For any solution $x$ of the special case of \eqref{TAVEs} with $p=q$, i.e., $\mathcal{A}x^{p-1}+\mathcal{B}|x|^{p-1}=b$, it follows from \eqref{Axnorm} that $$\|b\|\leq \|\mathcal{A}x^{p-1}\|+\|\mathcal{B}|x|^{p-1}\|\leq \|\mathcal{A}\|_{\rm Frob}\|x\|^{p-1}+\|\mathcal{B}\|_{\rm Frob}\||x|\|^{p-1},$$ which implies $$ \|x\|\geq \left\{\frac{\|b\|}{\|\mathcal{A}\|_{\rm Frob}+\|\mathcal{B}\|_{\rm Frob}}\right\}^\frac{1}{p-1}. $$ If $\mathcal{A}$ is nonsingular and $\|\mathcal{B}\|_{\rm Frob}<\sqrt{\lambda(\mathcal{A})}$, then by applying Theorem \ref{boundsolution} with $\sigma=0$, it holds that $$ \|x\|\leq \frac{\|b\|^{\frac{1}{p-1}}}{\lambda(\mathcal{A})^{\frac{1}{2(p-1)}}-\|\mathcal{B}\|_{\rm Frob}^{\frac{1}{p-1}}} $$ for any solution $x$ of \eqref{TAVEs}. \end{remark} \section{Algorithm and numerical results}\label{Alg} In this section, we employ the well-developed generalized Newton method to find a numerical solution of the system of TAVEs \eqref{TAVEs}. We first present the details of the generalized Newton method for solving TAVEs. Then, to show its numerical performance, we report some results from testing synthetic examples with random data.
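The quantities appearing in the bounds of this section are easy to evaluate numerically. The following sketch (Python/NumPy with illustrative toy sizes and seeds; the contraction helper \texttt{tvp} and all variable names are ours and not part of any implementation in this paper) checks the identity $\phi_{\mathcal{A}}(x)=\mathcal{C}x^{2(p-1)}$ and the lower bound of Remark \ref{solutionRem} on random data:

```python
import numpy as np

def tvp(T, v, k):
    """Contract tensor T with vector v along its last k modes."""
    for _ in range(k):
        T = np.tensordot(T, v, axes=([-1], [0]))
    return T

rng = np.random.default_rng(0)
n, p = 3, 3                              # toy order and dimension
A = rng.standard_normal((n,) * p)
B = rng.standard_normal((n,) * p)

# Identity phi_A(x) = C x^{2(p-1)}, where C contracts A with itself over mode 1.
C = np.tensordot(A, A, axes=([0], [0]))
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
phi = np.linalg.norm(tvp(A, x, p - 1)) ** 2
assert abs(phi - tvp(C, x, 2 * (p - 1))) < 1e-10

# Lower bound: any solution x of A x^{p-1} + B|x|^{p-1} = b satisfies
# ||x|| >= (||b|| / (||A||_F + ||B||_F))^{1/(p-1)}.
x_sol = rng.standard_normal(n)
b = tvp(A, x_sol, p - 1) + tvp(B, np.abs(x_sol), p - 1)
lb = (np.linalg.norm(b) / (np.linalg.norm(A) + np.linalg.norm(B))) ** (1 / (p - 1))
assert np.linalg.norm(x_sol) >= lb - 1e-12
```

Here \texttt{np.linalg.norm} applied to a full array returns its Frobenius norm, matching $\|\cdot\|_{\rm Frob}$ above.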
\subsection{Algorithm}\label{Algorithm} We begin by listing two lemmas, which pave the way for applying the generalized Newton method to the system of TAVEs \eqref{TAVEs}. Here, we refer the reader to \cite{M09} for the proofs. \begin{lemma}\label{singularv} The singular values of the matrix $A\in \mathbb{R}^{n\times n}$ exceed $1$ if and only if the minimum eigenvalue of $A^\top A$ exceeds $1$. \end{lemma} \begin{lemma}\label{singularA} If the singular values of $A\in \mathbb{R}^{n\times n}$ exceed $1$, then $A+D$ is invertible for any diagonal matrix $D$ whose diagonal elements equal $\pm 1$ or $0$. \end{lemma} \begin{remark}\label{remarkAB} Note that the definition \eqref{Axm-2} corresponds to a matrix. Then, we can apply Lemmas \ref{singularv} and \ref{singularA} to the problem under consideration. Specifically, if $\mathcal{B}|x|^{q-2}$ is invertible and the singular values of $(\mathcal{B}|x|^{q-2})^{-1}\mathcal{A}x^{p-2}$ exceed $1$, then by Lemma \ref{singularA}, we immediately know that $\mathcal{A}x^{p-2}+\mathcal{B}|x|^{q-2}D(x)$ is invertible, where $D(x)={\rm diag}({\rm sign}(x))$ is a diagonal matrix whose diagonal elements are $\pm 1$ or $0$. This is good news for the application of the generalized Newton method to TAVEs. \end{remark} Below, we use an example to illustrate the conclusion in Remark \ref{remarkAB} that $\mathcal{A}x^{p-2}+\mathcal{B}|x|^{q-2}D(x)$ is invertible under some conditions. \begin{exam}\label{examInvert} Let $\mathcal{B}=(b_{ijk})\in \mathbb{T}_{3,2}$ with $b_{111}=b_{222}=0$ and $b_{112}=b_{121}=b_{211}=b_{122}=b_{212}=b_{221}=1$. Then for any $x\in \mathbb{R}^2\backslash \{{\bm 0}\}$, we have $$ \mathcal{B}x=\left[ \begin{array}{cc} x_2&x_1+x_2\\ x_1+x_2&x_1 \end{array} \right]. $$ Consequently, we have ${\rm det}(\mathcal{B}x)=-(x_1^2+x_1x_2+x_2^2)<0$ for any $x\in \mathbb{R}^2\backslash \{{\bm 0}\}$, which means that $\mathcal{B}x$ is invertible.
Let $\mathcal{A}=(a_{ijk})\in \mathbb{T}_{3,2}$ with $a_{111}=a_{222}=0$ and $a_{112}=a_{121}=a_{211}=a_{122}=a_{212}=a_{221}=2$. Since $\mathcal{A}=2\mathcal{B}$, we have $(\mathcal{B}x)^{-1}\mathcal{A}x=2I$, whose singular values equal $2>1$; hence, by Remark \ref{remarkAB}, $\mathcal{A}x+\mathcal{B}xD(x)$ is invertible. \end{exam} Now, we present the generalized Newton method for TAVEs. Recalling the notation \eqref{F(x)}, we consider the case where ${\mathcal A}\in {\mathbb T}_{p,n}$ and ${\mathcal B}\in {\mathbb T}_{q,n}$ are semi-symmetric tensors. Denote \begin{equation}\label{Vk} V(x) = (p-1) {\mathcal A}x^{p-2} + (q-1){\mathcal B}|x|^{q-2}D(x), \end{equation} where $D(x)$ is given in Remark \ref{remarkAB}. The matrix $V(x)$ defined by \eqref{Vk} can be viewed as a generalized Jacobian matrix of $F$ at $x$. It then follows from \cite{QS99} that, for a given $x_k$, the generalized Newton method for \eqref{TAVEs} reads as follows: \begin{equation}\label{GN} x_{k+1} = x_k - V(x_k)^{-1} F(x_k), \end{equation} where $V(x_k)$ stands for the generalized Jacobian matrix at $x_k$. By utilizing the notation $F(x_k)$ and $V(x_k)$ and the tensor-vector product \eqref{Axm-1}, the iterative scheme \eqref{GN} can be rewritten as \begin{equation}\label{GNewton} x_{k+1}=V(x_k)^{-1}\left[(p-2){\mathcal A}x_k^{p-1}+(q-2){\mathcal B}|x_k|^{q-1}+b\right]. \end{equation} \begin{remark} When we consider the case $p=q$ in TAVEs \eqref{TAVEs}, the iterative scheme \eqref{GNewton} immediately reduces to \begin{align*} x_{k+1}&=\frac{p-2}{p-1}\left[{\mathcal A}x_k^{p-2}+{\mathcal B}|x_k|^{p-2}D(x_k)\right]^{-1}\left[\left({\mathcal A}x_k^{p-2}+{\mathcal B}|x_k|^{p-2}D(x_k)\right)x_k+\frac{b}{p-2}\right] \\ &=\frac{p-2}{p-1}x_k + \frac{1}{p-1}\left[{\mathcal A}x_k^{p-2}+{\mathcal B}|x_k|^{p-2}D(x_k)\right]^{-1}b, \end{align*} where the first equality uses the fact that $D(x_k)x_k = |x_k|$.
In particular, if we consider the special case without the absolute value term (i.e., $ {\mathcal B}|x|^{p-1}={\bf 0}$) in \eqref{TAVEs}, the above iterative scheme immediately recovers the Newton method introduced in \cite{LXX17}. \end{remark} \subsection{Numerical results} We have proposed the generalized Newton method \eqref{GNewton} for the system of TAVEs \eqref{TAVEs} in Section \ref{Algorithm}. It is not difficult to see that the generalized Newton method enjoys a simple iterative scheme. In this subsection, we show through experiments with synthetic data that such a simple method solves the TAVEs under consideration reliably with high probability. We write the code of the generalized Newton method in {\sc Matlab} 2014a and conduct the experiments on a DELL workstation computer equipped with Intel(R) Xeon(R) CPU E5-2680 v3 @2.5GHz and 128G RAM running on Windows 7 Home Premium operating system. Here, we employ the publicly shared {\sc Matlab} Tensor Toolbox \cite{TensorT} to compute tensor-vector products and symmetrization of tensors. From an application perspective, we established existence theorems only for the case of TAVEs \eqref{TAVEs} with $p\geq q\geq 2$. However, the proposed generalized Newton method does not depend on the relation between $p$ and $q$, i.e., the algorithm is applicable to both the cases $p\geq q$ and $p\leq q$. Therefore, we consider two cases of TAVEs with $p=q$ and $p\neq q$ (i.e., $p<q$ and $p>q$) in our experiments. Moreover, we investigate four scenarios on tensors ${\mathcal A}$ and ${\mathcal B}$: (i) both ${\mathcal A}$ and ${\mathcal B}$ are ${\mathcal M}$-tensors; (ii) ${\mathcal A}$ is an ${\mathcal M}$-tensor, ${\mathcal B}$ is a general random tensor; (iii) ${\mathcal A}$ is a general random tensor, ${\mathcal B}$ is an ${\mathcal M}$-tensor; (iv) both ${\mathcal A}$ and ${\mathcal B}$ are general random tensors.
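As a quick consistency check of the reduced scheme above for $p=q$, any solution of \eqref{TAVEs} is a fixed point of the iteration, since $b=\left[{\mathcal A}x^{p-2}+{\mathcal B}|x|^{p-2}D(x)\right]x$ at a solution $x$. The following sketch illustrates this (Python/NumPy with toy sizes; the helper \texttt{tvp} and all names are illustrative and not part of our {\sc Matlab} implementation; symmetry is not needed for the fixed-point property itself):

```python
import numpy as np

def tvp(T, v, k):
    """Contract tensor T with vector v along its last k modes."""
    for _ in range(k):
        T = np.tensordot(T, v, axes=([-1], [0]))
    return T

def newton_step(A, B, b, x, p):
    """One step of the reduced scheme for p = q:
       x+ = (p-2)/(p-1) x + 1/(p-1) [A x^{p-2} + B|x|^{p-2} D(x)]^{-1} b."""
    D = np.diag(np.sign(x))
    M = tvp(A, x, p - 2) + tvp(B, np.abs(x), p - 2) @ D
    return (p - 2) / (p - 1) * x + np.linalg.solve(M, b) / (p - 1)

rng = np.random.default_rng(1)
n, p = 4, 3
A = rng.standard_normal((n,) * p)
B = 0.1 * rng.standard_normal((n,) * p)

# Build b from a known x_star, so x_star solves A x^{p-1} + B|x|^{p-1} = b.
x_star = rng.standard_normal(n)
b = tvp(A, x_star, p - 1) + tvp(B, np.abs(x_star), p - 1)

# Since M(x_star) x_star = b, the solution is a fixed point of the iteration.
x_next = newton_step(A, B, b, x_star, p)
assert np.linalg.norm(x_next - x_star) < 1e-10
```

In the experiments reported below, the same iteration is started from a constant initial point rather than a solution.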
Here, to generate an ${\mathcal M}$-tensor ${\mathcal A}$ or ${\mathcal B}$, we follow the approach used in \cite{DW16}. That is, we first generate a random tensor ${\mathcal C}=(c_{i_1 i_2\ldots i_m})$ and set $$\zeta_{\mathcal C}=(1+\epsilon)\cdot \max_{1\leq i\leq n}\left(\sum_{i_2,\ldots,i_m=1}^nc_{ii_2\ldots i_m}\right),\quad \epsilon>0.$$ Then, we take ${\mathcal A}$ or ${\mathcal B}$ as $\zeta_{\mathcal C}{\mathcal I}-{\mathcal C}$. More concretely, for the above four scenarios: (i) We first generate ${\mathcal C}_1$ and ${\mathcal C}_2$ randomly so that all entries are uniformly distributed in $(0,1)$ and $(-1,1)$, respectively. Then, we take ${\mathcal A} = \zeta_{\mathcal C_1}{\mathcal I}-{\mathcal C}_1$ and ${\mathcal B} = \zeta_{\mathcal C_2}{\mathcal I}-{\mathcal C}_2$; (ii) Generate ${\mathcal C}$ whose entries are uniformly distributed in $(0,2)$ and take ${\mathcal A} = \zeta_{\mathcal C}{\mathcal I}-{\mathcal C}$. Tensor ${\mathcal B}$ is a general random one whose components are uniformly distributed in $(-1,0)$; (iii) ${\mathcal A}$ is a general random tensor whose components are uniformly distributed in $(-1,0)$. For tensor ${\mathcal B}$, we generate ${\mathcal C}$ such that all entries are uniformly distributed in $(-0.5,0.5)$ and take ${\mathcal B} = \zeta_{\mathcal C}{\mathcal I}-{\mathcal C}$; (iv) Both ${\mathcal A}$ and ${\mathcal B}$ are general tensors, whose entries are uniformly distributed in $(0,1)$ and $(-4,1)$, respectively. Throughout, we take $\epsilon=0.1$ for all ${\mathcal M}$-tensors, and all tensors ${\mathcal A}$ and ${\mathcal B}$ are symmetrized by the {\sc Matlab} tensor toolbox \cite{TensorT}. To ensure that each randomly generated problem has at least one solution, we construct $b$ by setting $b={\mathcal A}x_*^{p-1}+{\mathcal B}|x_*|^{q-1}$, where $x_*$ is a pregenerated vector whose entries are uniformly distributed in $(-1,1)$.
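The shift construction $\zeta_{\mathcal C}{\mathcal I}-{\mathcal C}$ described above can be sketched as follows (Python/NumPy with an illustrative order, dimension and seed; the experiments themselves use {\sc Matlab}). For a nonnegative ${\mathcal C}$, the resulting tensor is a Z-tensor that is strictly diagonally dominant in every slice, a sufficient condition for the (strong) ${\mathcal M}$-tensor property:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, eps = 3, 4, 0.1                     # illustrative order, dimension, shift
C = rng.uniform(0.0, 1.0, size=(n,) * m)  # nonnegative random tensor

# zeta_C = (1 + eps) * max_i sum_{i2..im} c_{i i2..im}
zeta = (1 + eps) * max(C[i].sum() for i in range(n))

# Unit tensor I: ones on the diagonal entries i...i, zeros elsewhere.
I = np.zeros((n,) * m)
for i in range(n):
    I[(i,) * m] = 1.0

A = zeta * I - C                          # candidate M-tensor

# Check strict diagonal dominance slice by slice: the diagonal entry
# exceeds the sum of the magnitudes of the off-diagonal entries.
for i in range(n):
    off_diag_sum = C[i].sum() - C[(i,) * m]
    assert A[(i,) * m] > off_diag_sum
```

The right-hand side $b$ is then assembled from a pregenerated $x_*$ exactly as in the text, so that each random instance is solvable by construction.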
Moreover, we always take $x_0=(1,1,\ldots,1)^\top$ as our initial point for the proposed method. To investigate the numerical performance of the generalized Newton method, we report the number of iterations (Iter.), the computing time in seconds (Time), and the absolute error (Err) at the point $x_k$, defined by $${\rm Err}:=\|{\mathcal A}x_k^{p-1} + {\mathcal B}|x_k|^{q-1}-b\|,$$ and the algorithm is terminated once ${\rm Err}\leq {\rm Tol}$. Throughout, we set ${\rm Tol}=10^{-5}$. Since all the data are generated randomly, we test $100$ groups of random data for each scenario and report the average number of iterations together with the minimum and maximum ($k_{\min}$ and $k_{\max}$), and the average computing time together with the minimum and maximum ($t_{\min}$ and $t_{\max}$), respectively. In practice, we do not know the true solutions of the system of TAVEs \eqref{TAVEs} in advance. Hence, we cannot guarantee that the generalized Newton method starting from the constant initial point $x_0$ (which might be far away from the true solutions) always converges (or succeeds) for the random data. Accordingly, we report the {\it success rate} (SR) over $100$ random problems, where a run is counted as successful if the generalized Newton method achieves the preset `Tol' within $2000$ iterations. \setlength\rotFPtop{0pt plus 1fil} \begin{sidewaystable} \begin{center} \caption{Computational results for the cases $p\equiv q=m$ with (i) $({\mathcal A},{\mathcal B})$ are ${\mathcal M}$-tensors, and (ii) ${\mathcal A}$ is an ${\mathcal M}$-tensor and ${\mathcal B}$ is a general tensor.}\vskip 0.2mm \label{table1} \def1\textwidth{1\textwidth} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}llll}\toprule & (i) $({\mathcal A},{\mathcal B})$ are ${\mathcal M}$-tensors && (ii) ${\mathcal A}$ is an ${\mathcal M}$-tensor, ${\mathcal B}$ is a general tensor \\\cline{2-2} \cline{4-4} $(m,n)$ & Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR && Iter.
($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR \\ \midrule $( 3, 5)$ & 5.16 ( 3 / 103) / 0.02 (0.00 / 0.44) / 1.39$\times 10^{-6}$ / 1.00 && 39.39 ( 4 / 939) / 0.17 (0.02 / 3.96) / 1.53$\times 10^{-6}$ / 1.00\\ $( 3,10)$ & 3.57 ( 3 / 5) / 0.02 (0.00 / 0.03) / 1.56$\times 10^{-6}$ / 1.00 && 70.24 ( 6 / 1401) / 0.30 (0.03 / 6.01) / 1.40$\times 10^{-6}$ / 1.00\\ $( 3,20)$ & 3.91 ( 3 / 5) / 0.02 (0.02 / 0.03) / 8.89$\times 10^{-7}$ / 1.00 && 108.55 ( 8 / 1825) / 0.47 (0.03 / 7.91) / 1.30$\times 10^{-6}$ / 0.98\\ \midrule $( 4, 5)$ & 21.83 ( 4 / 652) / 0.10 (0.02 / 2.89) / 1.31$\times 10^{-6}$ / 0.84 && 6.49 ( 3 / 36) / 0.03 (0.02 / 0.17) / 1.11$\times 10^{-6}$ / 0.99 \\ $( 4,10)$ & 11.67 ( 4 / 203) / 0.05 (0.02 / 0.89) / 1.04$\times 10^{-6}$ / 0.94 && 10.03 ( 4 / 207) / 0.05 (0.02 / 0.92) / 8.32$\times 10^{-7}$ / 1.00\\ $( 4,20)$ & 5.08 ( 4 / 18) / 0.02 (0.02 / 0.09) / 1.13$\times 10^{-6}$ / 1.00 && 18.84 ( 4 / 688) / 0.09 (0.02 / 3.21) / 1.13$\times 10^{-6}$ / 1.00 \\ \midrule $( 5, 5)$ & 11.46 ( 3 / 57) / 0.05 (0.02 / 0.25) / 1.43$\times 10^{-6}$ / 0.96 && 76.48 ( 6 / 710) / 0.35 (0.03 / 3.23) / 1.55$\times 10^{-6}$ / 1.00\\ $( 5,10)$ & 10.87 ( 3 / 217) / 0.05 (0.00 / 1.11) / 1.09$\times 10^{-6}$ / 1.00 && 86.66 ( 9 / 1925) / 0.41 (0.05 / 9.09) / 1.83$\times 10^{-6}$ / 0.96 \\ $( 5,20)$ & 14.10 ( 4 / 597) / 0.27 (0.06 / 11.62) / 1.29$\times 10^{-6}$ / 1.00 && 84.70 ( 10 / 1355) / 1.64 (0.19 / 26.29) / 1.57$\times 10^{-6}$ / 0.99\\ \midrule $( 6, 5)$ & 5.46 ( 3 / 9) / 0.03 (0.02 / 0.05) / 1.09$\times 10^{-6}$ / 1.00 && 15.65 ( 3 / 146) / 0.08 (0.02 / 0.69) / 1.25$\times 10^{-6}$ / 0.97 \\ $( 6,10)$ & 5.72 ( 3 / 95) / 0.04 (0.02 / 0.67) / 1.10$\times 10^{-6}$ / 1.00 && 15.66 ( 4 / 165) / 0.11 (0.02 / 1.20) / 1.45$\times 10^{-6}$ / 0.99\\ $( 6,15)$ & 4.97 ( 3 / 6) / 0.30 (0.17 / 0.37) / 1.28$\times 10^{-6}$ / 1.00 && 72.15 ( 4 / 1811) / 4.60 (0.23 / 115.89) / 1.72$\times 10^{-6}$ / 0.98 \\ \bottomrule \end{tabular*} \end{center} 
\end{sidewaystable} \setlength\rotFPtop{0pt plus 1fil} \begin{sidewaystable} \begin{center} \caption{Computational results for the cases $p\equiv q=m$ with (iii) ${\mathcal A}$ is a general tensor and ${\mathcal B}$ is an ${\mathcal M}$-tensor, and (iv) $({\mathcal A},{\mathcal B})$ are general tensors}\vskip 0.2mm \label{table2} \def1\textwidth{1\textwidth} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}llll}\toprule & (iii) ${\mathcal A}$ is a general tensor, ${\mathcal B}$ is an ${\mathcal M}$-tensor && (iv) $({\mathcal A},{\mathcal B})$ are general tensors\\\cline{2-2} \cline{4-4} $(m,n)$ & Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR && Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR \\ \midrule $( 3, 5)$ & 20.86 ( 4 / 285) / 0.09 (0.02 / 1.20) / 1.24$\times 10^{-6}$ / 0.97 && 18.28 ( 4 / 293) / 0.08 (0.02 / 1.23) / 1.31$\times 10^{-6}$ / 0.98 \\ $( 3,10)$ & 46.35 ( 6 / 1524) / 0.20 (0.02 / 6.52) / 8.80$\times 10^{-7}$ / 0.94 && 56.03 ( 6 / 844) / 0.24 (0.03 / 3.60) / 1.39$\times 10^{-6}$ / 0.97 \\ $( 3,20)$ & 88.18 ( 8 / 1229) / 0.38 (0.03 / 5.34) / 9.31$\times 10^{-7}$ / 0.99 && 135.61 ( 8 / 551) / 0.59 (0.03 / 2.39) / 1.45$\times 10^{-6}$ / 1.00\\ \midrule $( 4, 5)$ & 47.45 ( 5 / 965) / 0.21 (0.02 / 4.27) / 9.84$\times 10^{-7}$ / 0.97 && 28.34 ( 5 / 191) / 0.13 (0.02 / 0.84) / 1.75$\times 10^{-6}$ / 1.00\\ $( 4,10)$ & 62.09 ( 8 / 397) / 0.28 (0.03 / 1.76) / 1.46$\times 10^{-6}$ / 0.92 && 58.61 ( 7 / 252) / 0.26 (0.03 / 1.11) / 1.47$\times 10^{-6}$ / 1.00\\ $( 4,20)$ & 123.13 ( 10 / 615) / 0.58 (0.05 / 2.89) / 1.24$\times 10^{-6}$ / 0.97 && 332.71 ( 15 / 1933) / 1.55 (0.08 / 9.06) / 1.10$\times 10^{-6}$ / 0.98 \\ \midrule $( 5, 5)$ & 18.95 ( 5 / 72) / 0.09 (0.02 / 0.34) / 1.17$\times 10^{-6}$ / 0.95 && 30.49 ( 7 / 145) / 0.14 (0.03 / 0.67) / 1.29$\times 10^{-6}$ / 0.99\\ $( 5,10)$ & 54.46 ( 10 / 439) / 0.26 (0.05 / 2.07) / 1.63$\times 10^{-6}$ / 0.97 && 81.99 ( 8 / 468) / 0.39 (0.05 / 2.20) / 
1.37$\times 10^{-6}$ / 0.99 \\ $( 5,20)$ & 139.13 ( 15 / 1250) / 2.69 (0.28 / 24.21) / 1.35$\times 10^{-6}$ / 0.95 && 499.55 ( 24 / 1908) / 9.69 (0.47 / 37.08) / 1.54$\times 10^{-6}$ / 0.93\\ \midrule $( 6, 5)$ & 42.87 ( 7 / 495) / 0.21 (0.03 / 2.34) / 1.35$\times 10^{-6}$ / 0.97 && 45.54 ( 9 / 326) / 0.22 (0.03 / 1.54) / 1.09$\times 10^{-6}$ / 1.00 \\ $( 6,10)$ & 91.54 ( 10 / 742) / 0.66 (0.06 / 5.43) / 1.36$\times 10^{-6}$ / 0.92 && 104.55 ( 16 / 697) / 0.76 (0.11 / 4.96) / 1.01$\times 10^{-6}$ / 1.00 \\ $( 6,15)$ & 176.05 ( 12 / 1188) / 11.24 (0.75 / 76.16) / 1.81$\times 10^{-6}$ / 0.88 && 274.35 ( 17 / 1421) / 17.53 (1.06 / 90.87) / 1.27$\times 10^{-6}$ / 0.99 \\ \bottomrule \end{tabular*} \end{center} \end{sidewaystable} In Tables \ref{table1} and \ref{table2}, we report the results for the case $p\equiv q=m$ with four scenarios on tensors. From the data, we can see that most of the random problems (even those with general tensors) can be solved successfully within the preset maximum number of iterations. When both ${\mathcal A}$ and ${\mathcal B}$ are ${\mathcal M}$-tensors, the generalized Newton method performs best, taking the fewest average iterations and achieving the highest success rate. For the other three scenarios on tensors, the number of iterations appears to grow with the dimension $n$. Nevertheless, the proposed method remains highly reliable on the tested problems.
\setlength\rotFPtop{0pt plus 1fil} \begin{sidewaystable} \begin{center} \caption{Computational results for the cases $p\neq q$ with (i) $({\mathcal A},{\mathcal B})$ are ${\mathcal M}$-tensors, and (ii) ${\mathcal A}$ is an ${\mathcal M}$-tensor and ${\mathcal B}$ is a general tensor.}\vskip 0.2mm \label{table3} \small \def1\textwidth{1\textwidth} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}llll}\toprule & (i) $({\mathcal A},{\mathcal B})$ are ${\mathcal M}$-tensors && (ii) ${\mathcal A}$ is an ${\mathcal M}$-tensor and ${\mathcal B}$ is a general tensor \\\cline{2-2} \cline{4-4} $(p,q,n)$ & Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR && Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR \\ \midrule $( 4,3, 5)$ & 35.47 ( 4 / 150) / 0.15 (0.02 / 0.66) / 1.10$\times 10^{-6}$ / 0.95 && 23.04 ( 3 / 140) / 0.10 (0.00 / 0.62) / 1.66$\times 10^{-6}$ / 0.99 \\ $( 4,3,10)$ & 93.28 ( 4 / 525) / 0.41 (0.02 / 2.29) / 2.30$\times 10^{-6}$ / 0.99 && 40.77 ( 4 / 134) / 0.18 (0.02 / 0.59) / 2.05$\times 10^{-6}$ / 1.00 \\ $( 4,3,20)$ & 121.19 ( 4 / 1279) / 0.54 (0.02 / 5.68) / 2.84$\times 10^{-6}$ / 0.98 && 54.37 ( 4 / 173) / 0.24 (0.02 / 0.78) / 2.70$\times 10^{-6}$ / 0.98 \\ \hline $( 5,4, 5)$ & 4.54 ( 3 / 35) / 0.02 (0.02 / 0.17) / 1.03$\times 10^{-6}$ / 0.97 && 5.00 ( 3 / 7) / 0.02 (0.02 / 0.03) / 1.12$\times 10^{-6}$ / 1.00 \\ $( 5,4,10)$ & 4.31 ( 3 / 5) / 0.02 (0.00 / 0.03) / 1.26$\times 10^{-6}$ / 1.00 && 4.89 ( 3 / 6) / 0.02 (0.00 / 0.03) / 9.98$\times 10^{-7}$ / 1.00 \\ $( 5,4,20)$ & 4.50 ( 3 / 5) / 0.05 (0.03 / 0.06) / 1.09$\times 10^{-6}$ / 1.00 && 4.67 ( 3 / 6) / 0.05 (0.03 / 0.06) / 1.14$\times 10^{-6}$ / 1.00 \\ \hline $( 6,4, 5)$ & 46.38 ( 3 / 551) / 0.22 (0.02 / 2.53) / 1.52$\times 10^{-6}$ / 0.98 && 48.04 ( 4 / 243) / 0.23 (0.02 / 1.12) / 1.88$\times 10^{-6}$ / 0.98 \\ $( 6,4,10)$ & 80.79 ( 3 / 312) / 0.47 (0.02 / 1.76) / 2.57$\times 10^{-6}$ / 0.99 && 88.90 ( 3 / 473) / 0.51 (0.02 / 2.73) / 
2.53$\times 10^{-6}$ / 0.99 \\ $( 6,4,15)$ & 129.11 ( 4 / 685) / 4.45 (0.12 / 23.71) / 2.93$\times 10^{-6}$ / 0.98 && 111.10 ( 4 / 693) / 3.83 (0.12 / 23.90) / 3.77$\times 10^{-6}$ / 1.00 \\ \hline $( 6,5, 5)$ & 67.62 ( 3 / 823) / 0.32 (0.02 / 3.82) / 9.76$\times 10^{-7}$ / 0.92 && 42.61 ( 3 / 329) / 0.20 (0.02 / 1.53) / 1.69$\times 10^{-6}$ / 0.98 \\ $( 6,5,10)$ & 178.96 ( 4 / 814) / 1.07 (0.02 / 4.76) / 2.11$\times 10^{-6}$ / 0.96 && 77.94 ( 3 / 336) / 0.46 (0.02 / 2.03) / 2.23$\times 10^{-6}$ / 0.98 \\ $( 6,5,15)$ & 289.66 ( 4 / 1309) / 10.42 (0.12 / 47.28) / 2.14$\times 10^{-6}$ / 0.97 && 125.82 ( 3 / 471) / 4.52 (0.09 / 16.85) / 2.54$\times 10^{-6}$ / 0.98 \\ \midrule $( 3,4, 5)$ & 10.33 ( 3 / 53) / 0.05 (0.02 / 0.23) / 1.42$\times 10^{-6}$ / 0.96 && 55.83 ( 4 / 1063) / 0.25 (0.02 / 4.68) / 1.08$\times 10^{-6}$ / 0.84 \\ $( 3,4,10)$ & 5.14 ( 4 / 21) / 0.02 (0.02 / 0.09) / 7.96$\times 10^{-7}$ / 1.00 && 643.21 ( 5 / 1739) / 2.83 (0.02 / 7.68) / 1.66$\times 10^{-6}$ / 0.29 \\ $( 3,4,20)$ & 4.12 ( 4 / 5) / 0.02 (0.02 / 0.03) / 1.14$\times 10^{-6}$ / 1.00 && 822.75 ( 6 / 1957) / 3.72 (0.02 / 8.91) / 1.78$\times 10^{-6}$ / 0.16 \\ \hline $( 4,5, 5)$ & 6.74 ( 3 / 108) / 0.03 (0.00 / 0.48) / 1.07$\times 10^{-6}$ / 1.00 && 11.11 ( 4 / 159) / 0.05 (0.02 / 0.70) / 7.73$\times 10^{-7}$ / 0.83 \\ $( 4,5,10)$ & 4.42 ( 3 / 5) / 0.02 (0.00 / 0.03) / 1.04$\times 10^{-6}$ / 1.00 && 36.66 ( 5 / 429) / 0.17 (0.02 / 1.97) / 1.86$\times 10^{-6}$ / 0.67 \\ $( 4,5,20)$ & 4.27 ( 4 / 5) / 0.04 (0.03 / 0.06) / 9.58$\times 10^{-7}$ / 1.00 && 171.83 ( 11 / 1255) / 1.78 (0.11 / 13.14) / 1.93$\times 10^{-6}$ / 0.83 \\ \hline $( 4,6, 5)$ & 4.62 ( 3 / 6) / 0.02 (0.02 / 0.03) / 8.57$\times 10^{-7}$ / 1.00 && 23.40 ( 5 / 542) / 0.11 (0.02 / 2.45) / 1.74$\times 10^{-6}$ / 0.52 \\ $( 4,6,10)$ & 4.35 ( 3 / 5) / 0.02 (0.02 / 0.03) / 1.10$\times 10^{-6}$ / 1.00 && 114.85 ( 7 / 1686) / 0.67 (0.03 / 9.81) / 1.82$\times 10^{-6}$ / 0.52 \\ $( 4,6,15)$ & 4.29 ( 3 / 5) / 0.14 (0.09 / 0.17) / 1.29$\times 
10^{-6}$ / 1.00 && 330.00 ( 11 / 1859) / 11.15 (0.37 / 58.41) / 2.03$\times 10^{-6}$ / 0.58 \\ \hline $( 5,6, 5)$ & 5.00 ( 3 / 51) / 0.03 (0.02 / 0.23) / 1.53$\times 10^{-6}$ / 1.00 && 34.42 ( 5 / 206) / 0.16 (0.02 / 0.97) / 1.32$\times 10^{-6}$ / 0.89 \\ $( 5,6,10)$ & 4.82 ( 3 / 53) / 0.03 (0.02 / 0.33) / 9.36$\times 10^{-7}$ / 1.00 && 74.97 ( 7 / 473) / 0.44 (0.03 / 2.82) / 1.79$\times 10^{-6}$ / 0.60 \\ $( 5,6,15)$ & 4.34 ( 3 / 5) / 0.14 (0.09 / 0.17) / 1.07$\times 10^{-6}$ / 1.00 && 100.04 ( 24 / 647) / 3.59 (0.84 / 23.34) / 1.54$\times 10^{-6}$ / 0.55 \\ \bottomrule \end{tabular*} \end{center} \end{sidewaystable} \setlength\rotFPtop{0pt plus 1fil} \begin{sidewaystable} \begin{center} \caption{Computational results for the cases $p\neq q$ with (iii) ${\mathcal A}$ is a general tensor and ${\mathcal B}$ is an ${\mathcal M}$-tensor, and (iv) $({\mathcal A},{\mathcal B})$ are general tensors}\vskip 0.2mm \label{table4} \small \def1\textwidth{1\textwidth} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}llll}\toprule & (iii) ${\mathcal A}$ is a general tensor and ${\mathcal B}$ is an ${\mathcal M}$-tensor && (iv) $({\mathcal A},{\mathcal B})$ are general tensors\\\cline{2-2} \cline{4-4} $(p,q,n)$ & Iter. ($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR && Iter. 
($k_{\min}$ / $k_{\max}$) / Time ($t_{\min}$ / $t_{\max}$) / Err / SR \\ \midrule $( 4,3, 5)$ & 24.57 ( 5 / 146) / 0.11 (0.02 / 0.64) / 1.33$\times 10^{-6}$ / 0.95 && 23.38 ( 4 / 180) / 0.10 (0.02 / 0.78) / 1.04$\times 10^{-6}$ / 1.00 \\ $( 4,3,10)$ & 43.85 ( 7 / 242) / 0.19 (0.03 / 1.06) / 6.69$\times 10^{-7}$ / 0.99 && 53.00 ( 9 / 247) / 0.23 (0.05 / 1.08) / 1.20$\times 10^{-6}$ / 1.00 \\ $( 4,3,20)$ & 155.53 ( 12 / 1496) / 0.70 (0.05 / 6.72) / 1.32$\times 10^{-6}$ / 0.99 && 178.52 ( 18 / 788) / 0.80 (0.08 / 3.49) / 9.97$\times 10^{-7}$ / 1.00 \\ \hline $( 5,4, 5)$ & 19.47 ( 5 / 145) / 0.09 (0.03 / 0.66) / 1.30$\times 10^{-6}$ / 0.99 && 38.46 ( 6 / 409) / 0.18 (0.02 / 1.86) / 1.24$\times 10^{-6}$ / 1.00 \\ $( 5,4,10)$ & 48.07 ( 10 / 347) / 0.22 (0.05 / 1.59) / 1.06$\times 10^{-6}$ / 0.97 && 69.40 ( 9 / 237) / 0.32 (0.05 / 1.09) / 1.40$\times 10^{-6}$ / 0.99 \\ $( 5,4,20)$ & 115.90 ( 16 / 704) / 1.20 (0.17 / 7.43) / 9.66$\times 10^{-7}$ / 0.99 && 282.64 ( 16 / 1824) / 2.93 (0.16 / 18.77) / 1.69$\times 10^{-6}$ / 1.00 \\ \hline $( 6,4, 5)$ & 28.68 ( 6 / 166) / 0.14 (0.03 / 0.75) / 1.09$\times 10^{-6}$ / 0.95 && 39.70 ( 6 / 234) / 0.19 (0.03 / 1.09) / 1.17$\times 10^{-6}$ / 1.00 \\ $( 6,4,10)$ & 68.68 ( 10 / 378) / 0.39 (0.05 / 2.17) / 1.00$\times 10^{-6}$ / 0.97 && 73.16 ( 11 / 306) / 0.42 (0.06 / 1.76) / 1.25$\times 10^{-6}$ / 1.00 \\ $( 6,4,15)$ & 156.38 ( 13 / 894) / 5.39 (0.44 / 30.92) / 1.23$\times 10^{-6}$ / 0.99 && 170.83 ( 13 / 679) / 5.90 (0.44 / 23.45) / 1.49$\times 10^{-6}$ / 1.00 \\\hline $( 6,5, 5)$ & 32.94 ( 7 / 250) / 0.16 (0.03 / 1.20) / 1.20$\times 10^{-6}$ / 0.97 && 33.02 ( 6 / 166) / 0.16 (0.03 / 0.78) / 1.33$\times 10^{-6}$ / 1.00 \\ $( 6,5,10)$ & 68.08 ( 11 / 427) / 0.41 (0.06 / 2.54) / 1.14$\times 10^{-6}$ / 0.98 && 78.88 ( 16 / 623) / 0.47 (0.09 / 3.71) / 1.74$\times 10^{-6}$ / 1.00 \\ $( 6,5,15)$ & 113.35 ( 17 / 629) / 4.07 (0.58 / 22.74) / 1.18$\times 10^{-6}$ / 0.96 && 200.24 ( 11 / 1016) / 7.20 (0.37 / 36.77) / 1.11$\times 10^{-6}$ / 1.00 
\\ \midrule $( 3,4, 5)$ & 26.18 ( 3 / 304) / 0.11 (0.02 / 1.33) / 1.03$\times 10^{-6}$ / 0.92 && 18.13 ( 4 / 151) / 0.08 (0.02 / 0.67) / 1.21$\times 10^{-6}$ / 0.95 \\ $( 3,4,10)$ & 62.61 ( 3 / 694) / 0.28 (0.02 / 3.06) / 1.11$\times 10^{-6}$ / 0.92 && 112.84 ( 8 / 633) / 0.49 (0.03 / 2.79) / 1.23$\times 10^{-6}$ / 0.79 \\ $( 3,4,20)$ & 129.33 ( 4 / 1074) / 0.58 (0.02 / 4.79) / 1.37$\times 10^{-6}$ / 0.91 && 1099.80 (517 / 1789) / 4.97 (2.34 / 8.14) / 1.58$\times 10^{-6}$ / 0.10 \\ \hline $( 4,5, 5)$ & 33.05 ( 5 / 513) / 0.15 (0.02 / 2.31) / 1.52$\times 10^{-6}$ / 0.94 && 34.04 ( 5 / 544) / 0.15 (0.02 / 2.45) / 1.38$\times 10^{-6}$ / 0.98 \\ $( 4,5,10)$ & 78.41 ( 9 / 440) / 0.36 (0.05 / 2.03) / 1.72$\times 10^{-6}$ / 0.80 && 142.97 ( 9 / 595) / 0.66 (0.05 / 2.71) / 1.10$\times 10^{-6}$ / 0.86 \\ $( 4,5,20)$ & 238.63 ( 11 / 1749) / 2.47 (0.11 / 18.13) / 1.06$\times 10^{-6}$ / 0.93 && 914.82 ( 70 / 1990) / 9.45 (0.73 / 20.56) / 5.25$\times 10^{-7}$ / 0.17 \\ \hline $( 4,6, 5)$ & 33.65 ( 3 / 415) / 0.16 (0.02 / 1.89) / 1.38$\times 10^{-6}$ / 0.92 && 54.63 ( 5 / 823) / 0.26 (0.02 / 3.74) / 1.38$\times 10^{-6}$ / 0.93 \\ $( 4,6,10)$ & 100.46 ( 4 / 1048) / 0.58 (0.02 / 5.97) / 1.40$\times 10^{-6}$ / 0.85 && 180.58 ( 11 / 753) / 1.05 (0.06 / 4.38) / 1.19$\times 10^{-6}$ / 0.62 \\ $( 4,6,15)$ & 189.75 ( 4 / 1777) / 6.52 (0.12 / 61.34) / 1.66$\times 10^{-6}$ / 0.79 && 598.71 ( 10 / 1983) / 20.64 (0.33 / 68.38) / 4.80$\times 10^{-7}$ / 0.42 \\ \hline $( 5,6, 5)$ & 36.48 ( 5 / 400) / 0.17 (0.03 / 1.83) / 1.32$\times 10^{-6}$ / 0.96 && 30.97 ( 5 / 215) / 0.15 (0.03 / 1.00) / 1.60$\times 10^{-6}$ / 1.00 \\ $( 5,6,10)$ & 86.46 ( 8 / 581) / 0.51 (0.05 / 3.40) / 1.77$\times 10^{-6}$ / 0.91 && 135.49 ( 5 / 744) / 0.81 (0.03 / 4.37) / 1.28$\times 10^{-6}$ / 0.92 \\ $( 5,6,15)$ &151.18 ( 11 / 942) / 5.43 (0.39 / 33.79) / 1.45$\times 10^{-6}$ / 0.89 && 617.83 ( 18 / 1929) / 22.23 (0.62 / 69.53) / 9.68$\times 10^{-7}$ / 0.81 \\ \bottomrule \end{tabular*} \end{center} 
\end{sidewaystable} As mentioned above, although our solution existence theorems are established for the case $p\geq q$, the proposed generalized Newton method does not rely on the relation $p\geq q$. Therefore, in Tables \ref{table3} and \ref{table4}, we consider the two cases $p>q$ and $p<q$ under the four scenarios on tensors. The results show that the generalized Newton method performs well for the case $p>q$, especially when both ${\mathcal A}$ and ${\mathcal B}$ are general tensors (see Table \ref{table4}). For the case $p<q$, the method performs best in the scenario where both ${\mathcal A}$ and ${\mathcal B}$ are ${\mathcal M}$-tensors. From all the data reported in this section, it is clear that the generalized Newton method is a reliable solver for most TAVEs. We note that, in the failure cases, the generalized Newton method \eqref{GNewton} can still find a solution to TAVEs when the starting point $x_0$ is sufficiently near that solution; that is, the starting point affects the performance of the proposed method. However, for real-world problems we do not know where the solution lies, so we use the aforementioned constant starting point $x_0$ throughout the experiments in order to gauge the true performance of \eqref{GNewton} on TAVEs. A natural question is whether one can design an algorithm that is independent of the initial point; we leave this as future work. \section{Conclusions}\label{Conclusion} In this paper, we considered the system of TAVEs, which is an interesting generalization of the classical absolute value equations in the matrix case. By employing degree theory, we showed that the solution set of the system of TAVEs with $p>q$ is nonempty and compact.
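For readers who want a concrete feel for the iteration, the sketch below implements the classical matrix special case $p=q=2$, namely Mangasarian's generalized Newton method for the absolute value equation $Ax-|x|=b$, which solves $(A-D(x_k))\,x_{k+1}=b$ with $D(x)=\mathrm{diag}(\mathrm{sign}(x))$ at each step. It is an illustration only: the test matrix and right-hand side are our own, and this is not the tensor iteration \eqref{GNewton} itself.

```python
# Generalized Newton method for the classical (matrix) absolute value
# equation A x - |x| = b, the p = q = 2 special case. Each step solves
# (A - D(x_k)) x_{k+1} = b with D(x) = diag(sign(x)); a small pure-Python
# Gaussian elimination keeps the sketch dependency-free.

def gauss_solve(M, rhs):
    """Solve M y = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (A[r][n] - sum(A[r][c] * y[c] for c in range(r + 1, n))) / A[r][r]
    return y

def generalized_newton_ave(A, b, x0, tol=1e-10, max_iter=100):
    """Mangasarian's generalized Newton iteration for A x - |x| = b."""
    x = list(x0)
    n = len(b)
    for _ in range(max_iter):
        # Generalized Jacobian of x -> A x - |x| is A - diag(sign(x)).
        M = [[A[i][j] - ((1.0 if x[i] >= 0 else -1.0) if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        x_new = gauss_solve(M, b)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Singular values of A exceed 1, so this AVE is uniquely solvable;
# the solution of this instance is x = (1, -2).
x = generalized_newton_ave([[4.0, 0.0], [0.0, 5.0]], [3.0, -12.0], [0.0, 0.0])
```

Starting from $x_0=(0,0)$ this easy instance converges in a few steps; as discussed above, for harder instances the quality of the starting point matters.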
Moreover, by means of fixed point theory, we proved that the system of TAVEs with $p=q$ has at least one solution under some checkable conditions. However, we did not determine when such a problem has a unique solution in the case where ${\mathcal B}$ is not a negative unit tensor, nor what happens in the case $p<q$. In the future, we would like to answer these questions. On the other hand, our numerical results show that the generalized Newton method performs well in many cases, but it still fails in some. Can we design structure-exploiting algorithms that are independent of the starting point? This is also one of our future concerns. \end{document}
\begin{document} \title{\Large \textbf{Primes in floor function sets}} \author{ \scshape {RANDELL HEYMAN} \\ School of Mathematics and Statistics,\\ University of New South Wales \\ Sydney, Australia\\ \texttt {[email protected]} } \maketitle \begin{abstract} Let $x$ be a positive integer. We give an asymptotic formula for the number of primes in the set $\{\fl{x/n}, 1 \le n \le x\}$ and give some related results. \end{abstract} \section{Introduction} There is an extensive body of research on arithmetic functions with integer parts of real-valued functions, most commonly, with Beatty $\fl{\alpha n + \beta}$ sequences, see, for example,~\cite{ABS,BaBa,BaLi,GuNe,Harm}, and Piatetski--Shapiro $\fl{n^\gamma}$ sequences, see, for example,~\cite{Akb,BBBSW,BBGY,BGS,LSZ,Morg}, with real $\alpha$, $\beta$ and $\gamma$. Recently there has been much research on sums of the form \begin{align} \label{eq:f sum} \sum_{n \le x}f\(\fl{\frac{x}{n}}\), \end{align} where throughout $x$ is a positive integer, $f$ is an arithmetic function and $\fl{\cdot}$ is the floor function. In \cite{Bor} the authors used exponential sums to find asymptotic bounds and formulas for various classes of arithmetic functions. Subsequent papers by various authors have mainly focussed on improvements in exponential sums techniques (see \cite{Bor2,Che, Liu,Ma,Ma2,Stu,Wu,Zha,Zha2,Zhao}). It is natural to examine more fundamental questions about the set $\{\fl{x/n}: 1 \le n \le x\}$. In \cite{Hey} an exact formula for the cardinality of this set was given. In this paper we count primes in this floor function set.
Let $$\mathcal G(x)=\left\{\fl{\frac{x}{n}}: 1 \le n \le x, \fl{\frac{x}{n}}\text{ is prime}\right\},$$ and in particular $G(x):=|\mathcal G(x)|$. This can be estimated using exponential sums as follows: \begin{thm} \label{thm:G(x)} We have $$G(x)=\frac{4 \sqrt{x}}{\log x}+O\(\frac{ \sqrt{x}}{(\log x)^2}\).$$ \end{thm} The OEIS sequence A068050 attributes to Adams-Watters the statement that for $p$ prime not equal to 3 we have $G(p)=G(p-1)+1$. A proof does not seem evident. We prove the following: \begin{thm} \label{thm:adams} Let $x$ be any prime not equal to 3. Then $G(x)=G(x-1)+1$. \end{thm} It is possible to link up $G(x)$ and $G(x-1)$ for some other classes of $x$ as follows: \begin{thm} \label{thm:G(x) semiprime} Let $x=pq$ with $p,q$ odd primes, not necessarily distinct. Then $$G(x)=G(x-1)+1.$$ \end{thm} These relationships between $G(x)$ and $G(x-1)$ may generalise, but with considerable difficulties. For example, based on a somewhat limited investigation using Maple, we make the following conjecture: \begin{conj} \label{conj:G(x) 3 primes} Suppose $x=p_1p_2p_3$ with $2 <p_1< p_2 < p_3$. Then $$ G(x)= \begin{cases} G(x-1) & \text{if } p_1p_2>p_3, \\ G(x-1)+1 & \text{if } p_1p_2<p_3. \end{cases} $$ \end{conj} We can also examine the cardinality of the set $$\mathcal F(x):=\left\{n:\fl{\frac{x}{n}}\text{ is prime}\right\}.$$ This might more naturally be thought of as the cardinality of the subsequence $(\mathcal F_{n_k})$ created from the sequence $(\mathcal F_n)_{n=1}^x, \, \mathcal F_n=\fl{x/n}$, where one retains $n$ for which $\mathcal F_n$ is prime and removes $n$ for which $\mathcal F_n$ is not prime. For example, we have $$\mathcal F(10)=\{2,3,4,5\},$$ whilst it is more natural to think of the sequence (for $x=10$) $$(\mathcal F_{n_k})=5,3,2,2.$$ Of course, the cardinalities are the same.
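These quantities are easy to explore numerically. The brute-force sketch below (our own illustration, not part of the paper) computes $G(x)$ straight from the definition and checks the two theorems above on small inputs.

```python
# Brute-force computation of G(x) = |{ floor(x/n) : 1 <= n <= x, floor(x/n) prime }|.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def G(x):
    # Distinct prime values taken by floor(x/n) as n runs over 1..x.
    return len({x // n for n in range(1, x + 1) if is_prime(x // n)})

# For x = 10 the values floor(10/n) are 10, 5, 3, 2, 2, 1, ..., so G(10) = 3.
assert G(10) == 3

# Theorem 2: G(p) = G(p-1) + 1 for primes p != 3 (and indeed G(3) = G(2) = 1).
assert all(G(p) == G(p - 1) + 1 for p in [2, 5, 7, 11, 13, 97, 101])
assert G(3) == G(2)

# Theorem 3: G(pq) = G(pq - 1) + 1 for odd primes p, q (not necessarily distinct).
assert all(G(x) == G(x - 1) + 1 for x in [9, 15, 35, 49, 143])
```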
The cardinality of $\mathcal F(x)$ (or of $(\mathcal F_{n_k})$) can be obtained by substituting $f(m)= \textbf{1}_\mathbb{P}(m)$ into \eqref{eq:f sum} and using recent results from Wu \cite{Wu} or from Zhai \cite{Zha}. As is usual, $\textbf{1}_\mathbb{P}(m)=1$ if $m$ is prime and 0 otherwise. We obtain the following: \begin{thm} \label{thm:n prime} Let $F(x):=|\mathcal F(x)|$. Then $$F(x)=\mathcal{P}x+O\(x^{1/2}\),$$ where $$\mathcal{P}=\sum_{n=1}^\infty \frac{\textbf{1}_\mathbb{P}(n)}{n(n+1)}=\sum_p \frac1{p(p+1)}\cong 0.330230.$$ \end{thm} We can use an alternate elementary approach, without exponential sums, to arrive at a result with a slightly better lower bound. Specifically: \begin{thm} \label{thm:n prime elementary} There exist calculable constants $A_1$ and $A_2$ such that for all $x$, $$\mathcal{P}x -\frac{A_1\sqrt{x}}{\log x} \le F(x) \le \mathcal{P}x+A_2\sqrt{x}.$$ \end{thm} The methodology of Theorem \ref{thm:n prime} can be utilised for all indicator functions since these functions are all bounded by 1. For example, we state, but do not prove, the following: \begin{thm} We have $$\left|\left\{n:\fl{\frac{x}{n}}\text{ is a prime power}\right\}\right|=\mathcal{D}x +O\(x^{1/2}\),$$ where $$\mathcal{D}=\sum_{n=p^k} \frac1{n(n+1)}\cong 0.41382.$$ \end{thm} Throughout we use $p$, with or without subscript, to denote a prime number. The notation $f(x) = O(g(x))$ or $f(x) \ll g(x)$ is equivalent to the assertion that there exists a constant $c>0$ such that $|f(x)|\le c|g(x)|$ for all $x$. As is normal, we denote by $\Lambda$ the von Mangoldt function. \section{Proof of Theorem \ref{thm:G(x)}} We have $$G(x)=\left|\left\{p: p \le x, p=\fl{\frac{x}{n}}\text{ for some } 1 \le n \le x\right\} \right|.$$ If $\fl{\frac{x}{n}}=p$ then $$\frac{x}{p+1} < n \le \frac{x}{p}$$ and such an $n$ will exist if $\fl{\frac{x}{p}}-\fl{\frac{x}{p+1}}>0$. So $$G(x)=\sum_{p \le x} \delta\(\fl{\frac{x}{p}} -\fl{\frac{x}{p+1}}>0\),$$ where $\delta=1$ if the statement is true and 0 otherwise.
Let \begin{align} \label{eq:G} G(x)&=G_1(x)+G_2(x)+G_3(x)+G_4(x), \end{align} where $$G_1= \sum _{p < b}, \,\, G_2=\sum_{b \le p \le \sqrt{x}},\,\, G_3=\sum_{\sqrt{x} < p \le x^{34/67}},\,\, G_4=\sum_{x^{34/67} < p \le x},$$ and $$b=\frac{\sqrt{4x+1}-1}{2}=\sqrt{x}+O(1).$$ For $G_1(x)$ the condition is always satisfied, since for $p < b$ we have $$\fl{\frac{x}{p}}-\fl{\frac{x}{p+1}}>\frac{x}{p}-\frac{x}{p+1}-1=\frac{x}{p(p+1)}-1>0.$$ So \begin{align} \label{eq:G1} G_1(x)&=\sum_{p < b}1=\pi(\sqrt{x})+O(1)=\frac{2\sqrt{x}}{\log x}+O\(\frac{\sqrt{x}}{(\log x)^2}\). \end{align} Trivially \begin{align} \label{eq:G2} G_2(x)&=O(1). \end{align} Next, we estimate $G_4(x)$. If $p > x^{34/67}$ then $p=\fl{\frac{x}{n}}$ for some $n \le x^{33/67}.$ Since there can be at most $x^{33/67}$ values for $n$ we have \begin{align} \label{eq:G4} G_4(x)&=O\(x^{33/67}\). \end{align} For $G_3(x)$ (and $G_4(x)$) $p$ is large enough that $\fl{\frac{x}{p}}-\fl{\frac{x}{p+1}}$ can only equal 0 or 1. So $$G_3(x)=\sum_{\sqrt{x} < p \le x^{34/67}} \delta\(\fl{\frac{x}{p}} -\fl{\frac{x}{p+1}}>0\)=\sum_{\sqrt{x} < p \le x^{34/67}} \(\fl{\frac{x}{p}} -\fl{\frac{x}{p+1}}\).$$ Then, using $\psi(x)=x-\fl{x}-\frac1{2}$, \begin{align} \label{eq:G3} G_3(x)&=x \sum_{\sqrt{x}<p\le x^{34/67}}\frac1{p(p+1)}+\sum_{\sqrt{x}<p\le x^{34/67}}\(\psi\(\frac{x}{p+1}\)-\psi\(\frac{x}{p}\)\). \end{align} Using partial summation and the Prime Number Theorem we have, for the first sum, \begin{align} \label{eq:G3 first} x \sum_{\sqrt{x}<p\le x^{34/67}}\frac1{p(p+1)}&=x\sum_{\sqrt{x}<n\le x^{34/67}}\frac{\textbf{1}_\mathbb{P}(n)}{n(n+1)}\notag\\ &=\frac{2\sqrt{x}}{\log x}+O\(\frac{\sqrt{x}}{(\log x)^2}\). \end{align} For the second sum of $G_3(x)$ we will use the following (\cite[Theorem 6.25]{Bor3}): \begin{lem} \label{lem:Bor} Let $\delta \in [0,1], x \ge 1$ be a large real number and $R$, $R_1$ be positive integers such that $1 \le R \le R_1 \le 2R \le x^{2/3}$.
Then, for all $\epsilon \in (0,\frac1{2}]$ $$x^{-\epsilon}\sum_{R \le n \le R_1} \Lambda(n) \psi\(\frac{x}{n+\delta}\)\ll \(x^2R^{33}\)^{1/38}+\(x^2R^{19}\)^{1/24}\(x^3R^2\)^{1/9}+\(x^3R^{-1}\)^{1/6}+R^{5/6}.$$ \end{lem} Returning to the second sum of $G_3(x)$, we have \begin{align*} \sum_{\sqrt{x}<p\le x^{34/67}}\(\psi\(\frac{x}{p+1}\)-\psi\(\frac{x}{p}\)\) &\le \left|\sum_{\sqrt{x}<p\le x^{34/67}}\psi\(\frac{x}{p+1}\)\right|+ \left|\sum_{\sqrt{x}<p\le x^{34/67}}\psi\(\frac{x}{p}\)\right|. \end{align*} We now bound the sum involving $\psi(\frac{x}{p})$. The calculation for the sum involving $\psi(\frac{x}{p+1})$ is virtually identical. Let $m \sim N$ denote the inequalities $N < m \le 2N$. We have \begin{align*} \sum_{\sqrt{x}<p\le x^{34/67}}\psi\(\frac{x}{p}\)&\ll \max_{\sqrt{x}<N \le x^{34/67}}\left|\sum_{p \sim N} \psi\(\frac{x}{p}\)\right|\log x. \end{align*} Next, using Abel summation, \begin{align*} \left|\sum_{p \sim N} \psi\(\frac{x}{p}\)\right|&=\left|\sum_{p \sim N}\(\frac1{\log p}\times \psi\(\frac{x}{p}\) \log p \)\right|\\ &\le \frac{2}{\log N} \max_{N \le N_2 \le N_1}\left|\sum _{N <p \le N_2} \psi\(\frac{x}{p}\)\log p+\textbf{1}_\mathbb{P}(N) \psi\(\frac{x}{N}\)\log N\right|\\ &\le \frac{2}{\log N} \max_{N \le N_2 \le N_1}\left\{\left|\sum_{N<n \le N_2}\Lambda(n)\psi\(\frac{x}{n}\)\right|+|R(N)|+\log N\right\}, \end{align*} where $$|R(N)| \le \sum_{\sqrt{N} < p \le \sqrt{N_2}}\log p \sum_{2 \le a \le \frac{\log N_2}{\log p}} 1<2 \sqrt{N}.$$ Using Lemma \ref{lem:Bor} with $N_1=N_2=x^{34/67}$ and $N=\sqrt{x}$ we obtain \begin{align} \label{eq:G3 second} \sum_{\sqrt{x}<p\le x^{34/67}}\psi\(\frac{x}{p}\)&\ll x^{\frac{1256}{2546}+\epsilon}. \end{align} Substituting \eqref{eq:G3 first} and \eqref{eq:G3 second} into \eqref{eq:G3}, we see that $$G_3(x)=\frac{2 \sqrt{x}}{\log x}+O\(\frac{\sqrt{x}}{(\log x)^2}\),$$ and substituting this equation and \eqref{eq:G1}, \eqref{eq:G2} and \eqref{eq:G4} into \eqref{eq:G} completes the proof.
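As a numerical sanity check of the main term just established, one can compare $G(x)$ with $4\sqrt{x}/\log x$ for moderate $x$. The snippet below is our own illustration (the brute-force $G$ is repeated so it is self-contained); convergence is slow because the error term is only one logarithm smaller than the main term.

```python
# Compare G(x) with the main term 4*sqrt(x)/log(x) of Theorem 1.
from math import sqrt, log

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def G(x):
    # Distinct prime values taken by floor(x/n) as n runs over 1..x.
    return len({x // n for n in range(1, x + 1) if is_prime(x // n)})

for x in [10**3, 10**4, 10**5]:
    main = 4 * sqrt(x) / log(x)
    # The ratio G(x) / main drifts toward 1 as x grows, but only slowly.
    print(x, G(x), round(main, 1), round(G(x) / main, 3))
```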
\section{Proof of Theorem \ref{thm:adams}} Let $x$ be any prime not equal to 3. Suppose $p \in \mathcal G(x)$ but $p\ne x$. So for some $n$ we have $\fl{x/n}=p$. As $x$ is a prime we have $x=np+u$ where $1 \le u \le n-1$. So $x-1=np+u-1$ from which $$\fl{\frac{x-1}{n}}=\fl{\frac{np+u-1}{n}}=\fl{p+\frac{u-1}{n}}=p.$$ Thus $p \in \mathcal G(x-1)$. Conversely, suppose $p \in \mathcal G(x-1)$. So $\fl{(x-1)/n}=p$ for some $n$. So $x-1=np+u$ where $0 \le u \le n-1$. But $u \ne n-1$, for then $x=np+n=n(p+1)$; if $n \ge 2$ this contradicts the primality of $x$, while if $n=1$ then $x=p+1$ forces $p=2$ and $x=3$, which is excluded. Thus $x-1=np+u$ with $0 \le u \le n-2$, and then $$\fl{\frac{x}{n}}=\fl{\frac{np+u+1}{n}}=\fl{p+\frac{u+1}{n}}=p.$$ Therefore $p \in \mathcal G(x)$. We conclude that there is a one-to-one correspondence between the primes $p\ne x$ in $\mathcal G(x)$ and the primes $p \ne x$ in $\mathcal G(x-1)$. Noting that we have $x \in \mathcal G(x)$ but $x \not \in \mathcal G(x-1)$ concludes the proof. \section{Proof of Theorem \ref{thm:G(x) semiprime}} We have $x=pq$ with $p,q$ odd primes, not necessarily distinct. Without loss of generality assume $p \le q$. \textit{Case 1:} Suppose that $r \in \mathcal G(x)$ with $r \ne p,q$. So $x=nr+u$ with $0 \le u \le n-1$. But if $u=0$ then $x=nr$, which is impossible since $r \ne p,q$. So $1 \le u \le n-1$ and thus $$\fl{\frac{x-1}{n}}=\fl{\frac{nr+u-1}{n}}=\fl{r+\frac{u-1}{n}}=r.$$ So $r \in \mathcal G(x-1)$. Conversely, suppose $r \in \mathcal G(x-1)$. So $x-1=nr+u$ with $0 \le u \le n-1$. If $u \le n-2$ then $$\fl{\frac{x}{n}}=\fl{\frac{nr+u+1}{n}}=\fl{r+\frac{u+1}{n}}=r,$$ so $r \in \mathcal G(x)$. If $u=n-1$ then $x=nr+n=n(r+1)$. Since $x=pq$ is odd, both $n$ and $r+1$ must be odd, forcing $r=2$; then $3n=pq$, so (say) $q=3$ and $x=3p$, and taking $n'=(3p-1)/2$ gives $$\fl{\frac{x}{n'}}=\fl{\frac{6p}{3p-1}}=\fl{2+\frac{2}{3p-1}}=2=r,$$ so again $r \in \mathcal G(x)$. \textit{Case 2:} Suppose that $r \in \mathcal G(x)$ with $r=p=q$. Since $\fl{x/r}=r$ we indeed have $r \in \mathcal G(x)$.
But $$\fl{\frac{x-1}{r}}=\fl{r-\frac1{r}}=r-1$$ and $$\fl{\frac{x-1}{r-1}}=\fl{\frac{r^2-1}{r-1}}=r+1.$$ So $r \not\in \mathcal G(x-1).$ \textit{Case 3:} Suppose that $r \in \mathcal G(x)$ with $r=p \ne q$ or $r=q \ne p$. Recall $p \le q$. It is clear that $p \in \mathcal G(x)$ and $q \in \mathcal G(x)$. Then $$\fl{\frac{x-1}{q-1}}=\fl{\frac{pq-1}{q-1}}=\fl{p+\frac{p-1}{q-1}}=p,$$ so $p \in \mathcal G(x-1)$. But $$\fl{\frac{x-1}{p}}=\fl{\frac{pq-1}{p}}=\fl{q-\frac{1}{p}}=q-1,$$ and $$\fl{\frac{x-1}{p-1}}=\fl{\frac{pq-1}{p-1}}=\fl{q+\frac{q-1}{p-1}}>q.$$ So $q \not \in \mathcal G(x-1)$. Reviewing the three cases we see that $G(x)=G(x-1)+1$, which proves the theorem. \section{Proof of Theorem \ref{thm:n prime}} We have $$F(x)=\sum_{n \le x} \textbf{1}_\mathbb{P}\(\fl{\frac{x}{n}}\).$$ We will require the following, proven independently by Wu \cite[Theorem]{Wu} and Zhai \cite[Theorem 1]{Zha}: \begin{lemma} \label{thm:Sfx} Let $f$ be a complex-valued arithmetic function such that $f(n) \ll n^{\alpha}(\log n)^{\theta}$ for some $\alpha \in [0,1)$ and $\theta \ge 0$. Then $$\sum_{n \leqslant x} f \(\fl{x/n} \) = x \sum_{n=1}^{\infty} \frac{f(n)}{n(n+1)} + O \( x^{\frac{1}{2}(\alpha+1)}(\log x)^{\theta}\).$$ \end{lemma} Using this lemma with $\alpha=0$ and $\theta=0$ we have $$F(x)=\mathcal{P}x+O\(x^{1/2}\),$$ where $$\mathcal{P}=\sum_{n=1}^\infty \frac{\textbf{1}_\mathbb{P}(n)}{n(n+1)}=\sum_p \frac1{p(p+1)}\cong 0.330230,$$ completing the proof. We note, in passing, that $$\sum_p \frac1{p(p+1)}=\sum_{s=2}^\infty \sum_p \frac{(-1)^s}{p^s}=\sum_{s=2}^\infty (-1)^s \sum_{n=1}^\infty \mu(n) \frac{\log \zeta(ns)}{n}.$$ \section{Proof of Theorem \ref{thm:n prime elementary}} Let $\mathbb{P}$ be the set of (positive) primes and $\overline{\mathbb{P}}$ be the set of (positive) non-primes.
We create upper and lower bounds for $F(x)$ from the set $$\mathcal C(x):=\{n: 1 \le n \le x\}.$$ The set $\mathcal F(x)$ is simply the set $\mathcal C(x)$ after removing all $n$ such that $\fl{x/n}$ is a non-prime from $\mathcal C(x)$. For the upper bound we truncate the process by removing from $\mathcal C(x)$ only those $n$ such that $\fl{x/n}$ is a non-prime less than or equal to $\sqrt{x}$. In total we remove $$\sum_{\substack{c \in \overline{\mathbb{P}}\\c \le \sqrt{x}}}\(\fl{\frac{x}{c}}-\fl{\frac{x}{c+1}}\)>\sum_{\substack{c \in \overline{\mathbb{P}}\\c \le \sqrt{x}}}\(\frac{x}{c(c+1)}-1\)$$ values of $n$. So \begin{align*} F(x)&< x-\sum_{\substack{c \in \overline{\mathbb{P}}\\c \le \sqrt{x}}}\(\frac{x}{c(c+1)}-1\)\\ &= x-\sum_{\substack{c \in \overline{\mathbb{P}}\\c \le \sqrt{x}}}\frac{x}{c(c+1)}+\sum_{\substack{c \in \overline{\mathbb{P}}\\c \le \sqrt{x}}}1\\ &=x-x\(\sum_{c \le \sqrt{x}} \frac1{c(c+1)}-\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}}\frac{1}{c(c+1)}\)+\sqrt{x}-\pi(\sqrt{x})\\ &=x-\frac{x\sqrt{x}}{\sqrt{x}+1} +x\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}}\frac{1}{c(c+1)}+O\(\sqrt{x}\)\\ &=\frac{x}{\sqrt{x}+1}+x\sum_{p}\frac1{p(p+1)} -x\sum_{p>\sqrt{x}}\frac1{p(p+1)} +O\(\sqrt{x}\). \end{align*} Then $$\sum_{p>\sqrt{x}}\frac1{p(p+1)}\le \sum_{n>\sqrt{x}}\frac1{n(n+1)}=O\(\frac1{\sqrt{x}}\),$$ and so \begin{align} F(x)& \le \mathcal{P}x+O\(\sqrt{x}\). \end{align} For the lower bound we add up the number of $n$ for which $\fl{x/n}$ is a prime less than or equal to $\sqrt{x}$. 
Then, using $\pi(m)=\frac{m}{\log m} + O\(\frac{m}{(\log m)^2}\)$ and Riemann-Stieltjes integration, \begin{align*} F(x)&\ge\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}}\(\fl{\frac{x}{c}}-\fl{\frac{x}{c+1}}\) \ge \sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}} \(\frac{x}{c(c+1)}-1\)\\ &=x\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}} \frac{1}{c(c+1)}-\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}}1 =x\(\mathcal{P}-\sum_{p > \sqrt{x}}\frac{1}{p(p+1)}\)-\sum_{\substack{c \in \mathbb{P}\\c \le \sqrt{x}}}1\\ &=\mathcal{P}x-x \sum_{p>\sqrt{x}}\frac1{p(p+1)}+O\(\frac{\sqrt{x}}{\log x}\)=\mathcal{P}x+O\(\frac{\sqrt{x}}{\log x}\), \end{align*} completing the proof. \end{document}
Neurogenomic insights into paternal care and its relation to territorial aggression

Syed Abbas Bukhari, Michael C. Saul, Noelle James, Miles K. Bensky, Laura R. Stein, Rebecca Trapp & Alison M. Bell

Motherhood is characterized by dramatic changes in brain and behavior, but less is known about fatherhood. Here we report that male sticklebacks—a small fish in which fathers provide care—experience dramatic changes in neurogenomic state as they become fathers. Some genes are unique to different stages of paternal care, some genes are shared across stages, and some genes are added to the previously acquired neurogenomic state. Comparative genomic analysis suggests that some of these neurogenomic dynamics resemble changes associated with pregnancy and reproduction in mammalian mothers. Moreover, gene regulatory analysis identifies transcription factors that are regulated in opposite directions in response to a territorial challenge versus during paternal care. Altogether these results show that some of the molecular mechanisms of parental care might be deeply conserved and might not be sex-specific, and suggest that tradeoffs between opposing social behaviors are managed at the gene regulatory level.

In many species, parents provide care for their offspring, which can improve offspring survival. There is fascinating diversity in the ways in which parents care for their offspring, from infant carrying behavior in titi monkeys, poison dart frogs and spiders to provisioning of offspring in burying beetles and birds1,2. The burden of parental care does not always land exclusively on females; in some species both parents provide care and in others males are solely responsible for care. Our understanding of the molecular and neuroendocrine basis of parental care has been largely influenced by studies in mammals, where maternal care is the norm.
In mammals, females experience cycles of estrus, pregnancy, childbirth and lactation as they become mothers, all of which are coordinated by hormones. While maternal care is often primed by hormonal and physiological changes related to embryonic or fetal development, the primers for paternal behavior are likely to be more subtle, such as the presence of eggs or offspring3,4. Despite this subtlety, there is growing evidence that males can also experience changes in physiology and behavior as they become fathers, some of which resemble changes in mothers5. For example, men experience increased oxytocin6 and a drop in testosterone7 following the birth of a child. Indeed, a recent study in burying beetles showed that the neurogenomic state of fathers when they are the sole providers of care closely resembles the neurogenomic state of mothers8. There is taxonomic diversity in the specific behavioral manifestations of care, but all care-giving parents go through a predictable series of stages as they become mothers or fathers, from preparatory stages prior to fertilization (e.g. territory establishment and nest building) to the care of developing embryos (e.g. pregnancy, incubation), to care of free-living offspring (e.g. provisioning of nestlings, lactation, etc). Each stage is characterized by a set of behaviors and events, and the transition to the next stage depends on the successful completion of the preceding stage, e.g. ref. 9. The temporal ordering of stages, combined with our understanding of the neuroendocrine dynamics of reproduction10, prompts at least three non-mutually exclusive hypotheses about how we might expect gene expression in the brain to change over the course of parental care. First, because each stage is characterized by a particular set of behaviors, each stage might have a unique neurogenomic state associated with it (the unique hypothesis). Second, some of the demands of parenting remain constant across stages, e.g.
defending a nest site, therefore we might expect the signal of a preceding stage to persist into subsequent stages (the carryover hypothesis), resulting in shared genes among stages, especially between stages close together in the series. Finally, extending the reasoning further, and considering that parents must pass through one stage before proceeding to the next, genes associated with one stage might be added to the previous stage as a parent proceeds through the stages (the additivity hypothesis, an extension of the carryover hypothesis). Whether changes that occur at the neurogenomic level can be mapped on to behaviorally defined (as opposed to endogenously defined) stages of parental care is unknown. Moreover, we know little about whether there are genes that conform to a unique, carryover or additive pattern across stages of care. These hypotheses provide a novel conceptual framework for improving our understanding of parental care at the molecular level, and could serve as a model for studying other life events that comprise a series of behaviorally defined stages, e.g. stages of territory establishment, stages of pair-bonding, stages of dispersal, etc. Unlike in mammals, paternal care is relatively common in fishes: of the fishes that display parental care, 80% provide some form of male care; therefore, fish are good subjects for understanding the molecular orchestrators of paternal care11,12. Moreover, the basic building blocks of parental care are ancient and deeply conserved in vertebrates13. For example, the hormone prolactin was named for its essential role in lactation in mammals, but had functions related to parental care in fishes long before mammals evolved14. Growing evidence for deep homology of brain circuits related to social behavior15,16,17,18 suggests that the diversity of parental care among vertebrates is underlain by changes in functionally conserved genes operating within similar neural circuits19.
In this study, we track the neurogenomic dynamics of the transition to fatherhood in male stickleback fish by measuring gene expression (RNA-Seq) in two brain regions containing nodes within the social behavior network, diencephalon and telencephalon. Gene expression in experimental males is compared across five different stages (nest, eggs and three time points after hatching) and relative to a control group. In this species, fathers are solely responsible for the care of the developing offspring, and male sticklebacks go through a predictable series of stages as they become fathers, from territory establishment and nest building to mating, caring for eggs, hatching and caring for fry20. In addition to providing care, parents must be vigilant to defend their vulnerable dependents from potential predators or other threats. Tradeoffs between parental care and territory defense have been particularly well studied in the ecological literature, e.g.21, and parental care and territorial aggression represent the extremes on a continuum of social behavior—from strongly affiliative to strongly aversive. Therefore, an additional goal of this study is to compare and contrast the neurogenomics of paternal care with the neurogenomic response to a territorial challenge. As parental care and territorial aggression are social behaviors and both utilize circuitry within the social behavior network in the brain15,16,17, we expect to observe similarities between parental care and a territorial challenge at the molecular level. However, given their position at opposite ends of the continuum of social behavior, along with neuroendocrine tradeoffs between them22, here we test the hypothesis that opposition between parental care and territorial aggression is reflected at the molecular and/or gene regulatory level.
Altogether, these results suggest that some of the molecular mechanisms of parental care are deeply conserved and are not sex-specific, and that tradeoffs between opposing social behaviors are managed at the gene regulatory level.

Neurogenomic dynamics of paternal care

There were dramatic neurogenomic differences associated with paternal care. A large number of genes—almost 10% of the transcriptome—were differentially expressed between the control and experimental groups over the course of the parenting period (Fig. 1a, Supplementary Data 1). Within each stage, a comparable number of genes were up- and down-regulated. There were significant gene expression differences between the control and experimental groups within both brain regions; relatively more genes were differentially expressed in diencephalon.

Fig. 1: Neurogenomic dynamics of paternal care. a The number of up- and down-regulated differentially expressed genes (DEGs) at each stage of paternal care in diencephalon and telencephalon. b Summary of GO-terms that were enriched in up- and down-regulated genes at each stage in the two brain regions. c The expression profile of candidate genes related to maternal care (galanin, galanin receptor 1, progesterone, estrogen receptor 1, oxytocin) across stages, with expression in the two brain regions plotted relative to the appropriate circadian control group; data points represent individual samples with means and s.e.m. indicated. Statistical significance of these genes was assessed as a pairwise contrast between a stage and its control (see Supplementary Data 1 for full list of genes; source data are in GEO GSE134508) using negative binomial distribution with generalized linear models in edgeR. Boxes surround means that are statistically different between the control and experimental condition within the stage.
Functional enrichment analysis of the differentially expressed genes (DEGs) suggests that paternal care requires changes in energy metabolism in the brain along with modifications of the immune system and transcription. Genes associated with the immune response were down-regulated in both brain regions and during most stages relative to the control group. Genes associated with energy metabolism and the adaptive component of the immune response were upregulated in telencephalon. Genes associated with the stress response were downregulated in both brain regions around the day of hatching. Genes associated with energy metabolism were downregulated as the fry emerged (Fig. 1b, Supplementary Data 2). The expression profiles of particular candidate genes related to parental care are in Fig. 1c, with statistically significant differences between the control and experimental condition within a stage indicated. Altogether these patterns suggest that paternal care involves significant neurogenomic shifts in stickleback males.

Change and stability of neurogenomic state across stages

We used these data to assess evidence for three non-mutually exclusive hypotheses about how neurogenomic state might change across stages of parental care. According to the unique hypothesis, there is a strong effect of stage on brain gene expression and little to no overlap among the genes associated with different stages. To evaluate this hypothesis we tested whether there were DEGs that were unique to each stage, i.e. not shared with other stages. We generated lists of genes that were differentially expressed between the control and experimental group at each stage within each brain region. Then, we excluded the DEGs that were shared between stages in order to identify the genes unique to each stage. To increase confidence that the unique genes are truly unique to each stage, i.e.
that they had not just barely passed the cutoff for differential expression in another stage (false negatives), we followed an empirical approach (as in23). We kept the cutoff for DEGs at the focal stage at FDR < 0.01 and relaxed the FDR threshold on the other stages (Supplementary Fig. 1). This procedure was repeated for each stage and in each brain region separately. This analysis produced—with high statistical confidence—lists of DEGs that are unique to each stage (Fig. 2a), consistent with the "unique" hypothesis.

Change and stability of neurogenomic state across stages of parental care. a There were DEGs that were only differentially expressed during one stage. Shown is a heat map depiction of the expression profile of the genes that were "unique" to each stage, showing how they were regulated in other stages, separated by stage and by brain region. b The statistical significance of the pair-wise overlap between stages within each brain region. The size of the circle is proportional to the significance of the p-value (hypergeometric test FDR) of the overlap, such that large circles indicate smaller p-values. Note that the stages closest to the focal stage tended to share more DEGs compared to stages further apart in the series. c DEGs that were added to a stage and were also differentially expressed in subsequent stages. Shown is a heat map depiction of the added shared genes for each stage, separated by brain region, showing how they were regulated across stages. Red = upregulated, blue = downregulated. Numbers on the heat maps indicate the number of genes in each heat map. Source data are in GEO GSE134508.

Next, we assessed the extent to which genes were shared among different stages of paternal care by testing whether the number of overlapping DEGs between stages was greater than expected using a hypergeometric test.
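The stage-specific ("unique") DEG filter, a strict FDR cutoff at the focal stage combined with a relaxed FDR cutoff at every other stage, can be sketched as follows. This is a minimal illustration, not the exact pipeline: the relaxed threshold of 0.20 and the toy FDR values are assumptions made for the example.

```python
# Sketch of the unique-DEG filter: a gene is "unique" to a focal stage only if
# it passes a strict FDR cutoff there AND fails even a relaxed cutoff at every
# other stage (guarding against false negatives elsewhere).

STRICT_FDR = 0.01
RELAXED_FDR = 0.20  # illustrative relaxed threshold, not the study's value

def unique_degs(fdr_by_stage, focal):
    """fdr_by_stage: {stage: {gene: fdr}}; returns genes unique to `focal`."""
    focal_hits = {g for g, q in fdr_by_stage[focal].items() if q < STRICT_FDR}
    for stage, fdrs in fdr_by_stage.items():
        if stage == focal:
            continue
        # drop any gene that is even marginally significant in another stage
        focal_hits -= {g for g, q in fdrs.items() if q < RELAXED_FDR}
    return focal_hits

# Hypothetical FDR table for two stages and three genes
fdr = {
    "nest": {"gA": 0.001, "gB": 0.005, "gC": 0.3},
    "eggs": {"gA": 0.15,  "gB": 0.9,   "gC": 0.002},
}
print(sorted(unique_degs(fdr, "nest")))  # ['gB']: gA is marginal in eggs, so it is excluded
```

Note that gA is significant in the nest stage at the strict cutoff, but because it also clears the relaxed cutoff in the eggs stage it is not counted as unique, which is exactly the false-negative protection described above.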
Consistent with the carryover hypothesis, within each brain region, the number of overlapping DEGs between stages was statistically much greater than expected by chance (Supplementary Data 3), and stages that are close together in the series shared more DEGs compared to stages that are further apart (Fig. 2b, Supplementary Fig. 2). These results suggest that there are genes whose signal persists across stages of care. We then evaluated the possibility that each new stage triggers a neurogenomic response which persists into subsequent stages, i.e. that genes associated with one stage are added to the previous stage as a parent proceeds through the stages. According to this hypothesis, when a parent is caring for eggs in their nest, for example, the "egg" genes are added to the previously activated "nest" genes, and so on, in an additive fashion. To examine this statistically, for each stage, we identified genes that: (1) were differentially expressed during the stage of interest; (2) were not differentially expressed during any of the preceding stages; (3) were also differentially expressed in a subsequent stage, hereafter referred to as "added shared genes". Only genes added during a new stage were used to test for their overlap with subsequent stages; therefore, except for the "nest added shared genes", each of the added shared genes from the previous stage(s) were subtracted from the focal stage's added shared genes (Supplementary Fig. 3). This process generated four sets of added shared genes: genes that were differentially expressed during the nest stage and were also differentially expressed during at least one subsequent stage ("nest added shared genes"), genes that were differentially expressed during the egg stage and were also differentially expressed during at least one subsequent stage but not during the nest stage ("egg added shared genes"), and so on.
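The three criteria defining an added shared gene reduce to simple set operations. The sketch below assumes hypothetical stage names and DEG sets purely for illustration.

```python
# Set-logic sketch of "added shared genes": DEGs that first appear at a focal
# stage (absent from all preceding stages) and persist into at least one
# subsequent stage.

STAGES = ["nest", "eggs", "early", "mid", "late"]

def added_shared(deg_sets, focal):
    """deg_sets: {stage: set of DEG ids}; returns the added shared genes of `focal`."""
    i = STAGES.index(focal)
    candidates = set(deg_sets[focal])
    for prev in STAGES[:i]:
        candidates -= deg_sets[prev]          # (2) not DEG in any earlier stage
    later = set().union(*(deg_sets[s] for s in STAGES[i + 1:]), set())
    return candidates & later                 # (3) must persist into a later stage

# Hypothetical DEG sets per stage
degs = {
    "nest":  {"g1", "g2"},
    "eggs":  {"g2", "g3", "g4"},
    "early": {"g3", "g5"},
    "mid":   {"g5"},
    "late":  set(),
}
print(sorted(added_shared(degs, "eggs")))  # ['g3']: new at eggs and persists into early
```

Here g2 is excluded from the egg set because it was already differentially expressed in the nest stage, and g4 is excluded because it never reappears, matching the subtraction scheme in Supplementary Fig. 3.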
This analysis revealed genes that became differentially expressed as males proceeded through different stages of paternal care and ROAST24 analysis found that the added shared genes remained differentially expressed in subsequent stages in a statistically significant manner (Supplementary Data 4). This suggests, for example, that there was a transcriptional signal of eggs which persisted after the egg stage. To see if the genes that were added and which persisted over time were similarly regulated across subsequent stages of paternal care, we examined the expression profiles of the added shared genes at each stage and tested if the direction of regulation was consistent across stages. This analysis revealed that added shared genes were indeed similarly regulated across stages (Supplementary Data 4, Fig. 2c). For example, added shared genes that were upregulated in males with nests were also upregulated during subsequent stages, especially during stages close to the nesting stage. To investigate this further, we calculated the probability that all genes within a set of added shared genes were expressed in the same direction due to chance, i.e. either consistently up- or down-regulated. Then, we counted the number of genes within each set of added shared genes that were concordantly expressed. We found that the number of concordantly expressed genes was greater than expected by chance (diencephalon χ2 = 1859, P < 1e-6, telencephalon χ2 = 146, df = 2, P < 1e-4). For example, 172 of the 235 genes in the nest added shared genes in diencephalon were concordantly expressed across stages, much higher than the expected 15 genes due to chance. The concordant expression pattern across stages suggests that an added shared gene serves a similar function in different stages. 
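One simple null model that reproduces the chance expectation quoted above treats each gene's direction of regulation in each of the five stages as an independent fair coin flip, so a gene is concordant only if it comes up "up" in every stage or "down" in every stage. This model is our illustrative assumption for the sketch; it is not necessarily the exact calculation used in the study.

```python
def expected_concordant(n_genes, n_stages):
    """Expected number of genes regulated in the same direction across all
    stages if each stage's direction were an independent fair coin flip."""
    p_concordant = 2 * 0.5 ** n_stages   # all up, or all down
    return n_genes * p_concordant

# Diencephalon "nest added shared genes": 235 genes followed across 5 stages
print(round(expected_concordant(235, 5)))  # 15, matching the expectation quoted in the text
```

Against this expectation of about 15 concordant genes, the observed 172 concordant genes is an enormous excess, which is what the chi-square test quantifies.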
Pathways are not sex-specific and are deeply conserved

Some of the candidate genes associated with female pregnancy and maternal care were differentially expressed in different stages of paternal care in sticklebacks (Fig. 1c). For example, in mammals, levels of progesterone, estrogen and their receptors increase during pregnancy and then subside after childbirth. A similar pattern was observed in the diencephalon of male sticklebacks: both estrogen receptor (esr1) and progesterone receptor (pgr) were upregulated during early hatching and then subsided (Fig. 1c). Oxytocin (and its teleost homolog isotocin) plays an important role in social affiliation and parental care in mammals6 and fish19,25,26,27,28. Oxytocin (oxt) was upregulated in diencephalon when male sticklebacks were caring for eggs in their nests, and upregulated in telencephalon mid-way through the hatching process (Fig. 1c). Genes that have been implicated in infanticide during parental care in mammals were also differentially expressed in sticklebacks, where egg cannibalism is common. Galanin—a gene implicated in infanticidal behavior in mice29—was highly expressed in diencephalon (which includes the preoptic area) during the nest, eggs and early hatching stages. However, the galanin receptor gene was downregulated during the middle to late hatching stages in both brain regions (Fig. 1c). Furthermore, the progesterone receptor—which mediates aggressive behavior toward pups in mice30—gradually declined in both brain regions as hatching progressed, and its level was lowest when all the fry were hatched (Fig. 1c). Up-regulation of galanin during the egg stage and down-regulation of progesterone receptor during the hatching stage could reflect how male sticklebacks inhibit cannibalistic behavior while providing care. To test if the neurogenomic changes that we observed in stickleback fathers across stages, e.g.
unique and added shared genes, are similar to the neurogenomic changes that mothers experience across stages of maternal care, we leveraged a recent dataset where brain gene expression was compared across a series of pregnancy and post-partum stages in mice (Supplementary Data 5)31. Similar to stickleback fathers, there were both unique and added shared DEGs across different stages of pregnancy and postpartum in mouse mothers. We then tested if the enduring (added shared genes) and transient (unique) changes in neurogenomic state that were experienced in stickleback fathers were similar to the enduring and transient signals of pregnancy and the postpartum period in mouse mothers. Specifically, we compared mouse and stickleback added shared genes within the appropriate orthogroup (Supplementary Data 6). For example, we compared 356 stickleback added shared genes within 90 orthogroups in diencephalon and 838 mouse added shared genes within 265 orthogroups in hypothalamus and found that they shared 14 orthogroups. In order to test whether an overlap of 14 orthogroups is greater than expected due to chance, we employed a Monte Carlo based permutation approach. We did not use a regular hypergeometric test or regular permutation test here (at the orthogroup level) because each orthogroup contains more than one gene in both the stickleback and mouse genomes, and some of those genes were differentially expressed and others were not. Instead, we sampled the gene sets (e.g. 356 and 838 genes in diencephalon/hypothalamus) repeatedly (10^5 times) and with replacement from both species' universes and counted the overlaps at the orthogroup level. These null overlaps were then compared against the observed overlap to compute p-values, which are highly significant (Fig. 3, note that the overlap never reaches 14 orthogroups).
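The resampling scheme can be sketched as below. Gene and orthogroup identifiers are placeholders, and the defaults (M, seed) are illustrative; the study itself drew 10^5 samples against the full stickleback and mouse gene universes.

```python
import random

def mc_overlap_pvalue(genome1, genome2, og1, og2, n1, n2, observed, M=10_000, seed=7):
    """Monte Carlo p-value for orthogroup-level overlap between two DEG sets.

    genome1/genome2: lists of genes (the sampling universes);
    og1/og2: dicts mapping gene -> orthogroup id (genes without one are skipped);
    n1/n2: sizes of the two DEG sets; observed: the observed orthogroup overlap.
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(M):
        # sample gene sets with replacement, then collapse to orthogroups
        s1 = {og1[g] for g in rng.choices(genome1, k=n1) if g in og1}
        s2 = {og2[g] for g in rng.choices(genome2, k=n2) if g in og2}
        if len(s1 & s2) >= observed:
            exceed += 1
    return (1 + exceed) / (1 + M)   # the estimator given in Methods
```

Sampling genes (rather than orthogroups) and collapsing to orthogroups afterwards is the point of the method: it respects the fact that orthogroups contain unequal numbers of genes in each genome.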
Added shared genes in stickleback and mouse include BDNF (a candidate gene related to anxiety, stress and depression32) and the regulator of G protein signaling RGS3 (related to insulin metabolism33). We followed the same procedure for the unique genes and did not find any evidence of sharing between the two species. For example, there were 33 unique genes in four orthogroups in mouse hypothalamus and 244 unique genes in 54 orthogroups in stickleback diencephalon with no overlap between them (Supplementary Data 6).

DEGs associated with shared orthogroups. Color represents the significance of differential expression between the control and experimental group (p-values, −log(FDR)) across the five conditions in stickleback (left) and the five conditions in mouse (right). a shows the significance of DEGs within 14 shared orthogroups between diencephalon in stickleback and hypothalamus in mouse. b shows the significance of DEGs within nine shared orthogroups between telencephalon in stickleback and hippocampus in mouse. Source data are in GEO GSE134508.

Altogether, the differential expression of candidate genes related to maternal care along with the deep homology of the enduring signal of care across stages (added shared genes) suggest that some of the neurogenomic shifts that occur during paternal care in a fish are deeply conserved and are not sex-specific.

Parenting and aggression tradeoffs at the molecular level

To better understand how different social demands are resolved in the brain, we compared these data to a previous study on the neurogenomic response to a territorial challenge in male sticklebacks34, which measured brain gene expression 30, 60 or 120 min after a 5 min territorial challenge.
The two experiments studied behaviors at the opposite ends of a continuum of social behavior: paternal care provokes affiliative behavior while a territorial challenge provokes aggressive behavior, and the challenge hypothesis originally posited that patterns of testosterone secretion reflect tradeoffs between parental care and territory defense, assuming that testosterone is incompatible with parental care in males22. Subsequent studies have shown that testosterone is not always inhibitory of parental care35, and that a territorial challenge activates gene regulatory pathways that do not depend on the action of testosterone36. Regardless of the specific neuromodulators or hormones, a mechanistic link between parental care and territory defense is likely to operate through the social behavior network in the brain because most nodes of this network express receptors for neuromodulators and hormones that are involved with both parental care and aggression37. Therefore we used these data to assess whether there is commonality at the molecular level between aggression and paternal care. For example, shared genes could reflect general processes such as the response to a social stimulus, while genes that are specific to an experiment could reflect the unique biology of paternal care versus territorial aggression. Alternatively, there might be a set of genes that is associated with both parental care and territorial aggression, but those genes are regulated in different ways depending on whether the animal is responding to a positive (parental care) versus negative (territorial challenge) social stimulus.
To compare the neurogenomics of paternal care and the response to a territorial challenge at the gene level, we pooled genes that were differentially expressed in the experimental compared to the control group (FDR < 0.01) across time points, stages and brain regions within each experiment, which resulted in two sets of genes associated with either a territorial challenge or paternal care (Fig. 4a). There were 177 genes that were shared between the two experiments (Fig. 4b); this overlap is highly statistically significant (hypergeometric test, fdr < 1e-10).

The regulatory dynamics of territorial challenge and paternal care. a Experimental time course sampling design in the two experiments. b Overlap between territorial aggression and paternal care DEGs. DEGs were pooled across time points and brain regions. c ASTRIX-generated transcriptional regulatory network. Each node represents a transcription factor or a predicted transcription factor target gene. Oversized nodes are transcription factors where the size of the node is proportional to the number of targets. Transcription factors whose targets are significantly enriched in either or both experiments are highlighted with different colors. Stickleback image drawn by MB. Source data are in GEO GSE134508.

To identify genes that were unique to each experiment while guarding against false positives, we adopted the same empirical approach as described above (Supplementary Fig. 1). There were 153 genes unique to territorial challenge and 764 genes unique to paternal care and these unique genes were enriched with non-overlapping functional categories (Supplementary Data 7). For example, some of the genes that were unique to a territorial challenge were related to sensory perception and tissue development, whereas some of the genes that were unique to paternal care were related to oxidative phosphorylation and energy metabolism, which might reflect the high metabolic needs of males as they are providing care38.
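The significance of an overlap like the 177 shared genes can be assessed with a one-sided hypergeometric test. The sketch below uses only Python's standard library, and the example numbers are hypothetical stand-ins, not the study's exact set sizes.

```python
from math import comb

def hypergeom_overlap_p(universe, n1, n2, overlap):
    """P(X >= overlap) when n2 genes are drawn without replacement from a
    universe of `universe` genes, of which n1 belong to the first DEG set."""
    return sum(
        comb(n1, k) * comb(universe - n1, n2 - k) / comb(universe, n2)
        for k in range(overlap, min(n1, n2) + 1)
    )

# Toy example (hypothetical sizes): universe of 1000 genes, DEG sets of 100
# and 120 genes, 40 genes in common; expected overlap by chance is only ~12
p = hypergeom_overlap_p(1000, 100, 120, 40)
print(p < 0.05)  # True: far smaller than any conventional threshold
```

The same function applies to the pairwise stage overlaps reported earlier; only the universe and set sizes change.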
The large number of genes that were differentially expressed both during paternal care and in response to a territorial challenge prompted us to test for evidence of their common regulation at the gene regulatory level. Therefore, we used the data from both experiments to build a transcriptional regulatory network and asked if there are transcription factors whose targets were significantly associated with the DEG sets from the paternal care experiment, the territorial challenge experiment or both experiments (Fig. 4, Supplementary Data 8). There were 10 transcription factors that were significantly enriched in both experiments. Eight out of 10 transcription factors were regulated in opposite directions in at least one of the conditions in the two experiments (Fig. 5). Two of the transcription factors that were regulated in opposite directions (NR3C1 and klf7b) have been implicated in social behavior in other studies (the glucocorticoid receptor NR3C1 and psychosocial stress during pregnancy39; klf7b and autism spectrum disorder40). These patterns suggest that for some genes, different salient experiences—providing paternal care and territorial aggression—trigger opposite gene regulatory responses.

Shared regulators of a territorial challenge and paternal care. The panel on the left shows the expression pattern of the 10 transcription factors that were enriched in both experiments (Fig. 4). Columns are conditions within the two experiments (30, 60 or 120 min after a territorial challenge, the five stages of paternal care in diencephalon (D) or telencephalon (T)). Note that 8 of the shared transcription factors were regulated in opposite directions and in different brain regions in the two experiments. The two panels on the right show the expression pattern of two examples of shared, differentially regulated transcription factors (Klf7b and NR3C1) and their targets across all of the conditions.
Source data are in GEO GSE134508.

Interestingly, the transcription factors showing the opposite expression pattern were differentially expressed in different brain regions in the two experiments. Specifically, shared transcription factors and their predicted targets were up-regulated in telencephalon in response to a territorial challenge and down-regulated in diencephalon during parental care. These findings point to the molecular mechanisms by which transcription factors might differentially modulate the social behavior network15,16,17 in the brain to manage conflicts between paternal care and territory defense. While maternal care has long been recognized as an intense period when the maternal brain is reorganized41,42, our results suggest that paternal care also involves significant neurogenomic shifts. Many of the neuroendocrine changes that are experienced by mammalian mothers are driven by endogenous cues during pregnancy, birth and lactation, and are required for fetal growth and development31,43, with the neural circuits necessary for maternal care being primed by hormones during pregnancy and the postpartum periods42. Our results suggest that males can also experience dramatic neuromolecular changes as they become fathers, even in the absence of ovulation, parturition, postpartum events and lactation and their associated hormone dynamics5. We observed dramatic neurogenomic changes in males in response to cues for care that are exogenous (e.g. the presence of nesting material) and social (e.g. the presence of eggs or the hatching of fry). Such dramatic neurogenomic shifts associated with paternal care might be especially likely to occur in species in which fathers are the sole providers of parental care, such as in sticklebacks. The effects might not be as strong in biparental systems where fathers contribute less.
Consistent with this hypothesis, in burying beetles, when males were the sole providers of care, their brain gene expression profile was similar to mothers, but when they were biparental, fathers' neurogenomic state was less similar to mothers'8. A key challenge for care-giving parents is to defend their home and vulnerable offspring from threats, such as territorial intruders. Behavioral trade-offs between parental care and territory defense are well-documented35 and work in this area has been influenced by the challenge hypothesis22, which originally posited that androgens mediate the conflict between care and aggression. By comparing the neurogenomic dynamics of paternal care and the response to a territorial challenge, our work offers insights into the gene regulatory mechanisms by which animals resolve these conflicting demands. Our results suggest that opposing social experiences acting over different time scales—providing paternal care over the course of weeks versus responding to a territorial challenge over the course of minutes to hours—trigger opposite gene regulatory responses. In particular, an analysis of the predicted gene regulatory network identified transcription factors that were significantly enriched both following paternal care and in response to a territorial challenge, and the majority of the transcription factors (and their targets) were regulated in opposite directions in the two experiments (Fig. 5). While previous studies have explored circuit-level changes in the social behavior network in response to different social stimuli15, our results point to the molecular basis of differential modulation of the social behavior network: the transcription factors showing the opposite expression pattern were differentially expressed in different brain regions in the two experiments. 
Specifically, shared transcription factors and their predicted targets were up-regulated in telencephalon in response to a territorial challenge and down-regulated in diencephalon during parental care. These findings suggest the molecular mechanisms by which transcription factors might differentially modulate the social behavior network15,16,17 in the brain to manage conflicts between paternal care and territory defense. A similar pattern was observed at the transcriptomic (rather than gene regulatory) level when neurogenomic states were compared between territorial aggression and courtship in male threespined sticklebacks: some genes that were upregulated after a territorial challenge were downregulated after a courtship opportunity44. These results are also consistent with a detailed mechanistic study which showed that transcription factors play a role in setting up neural circuits to mediate opposing behaviors45. Altogether our analysis of changes in neurogenomic state across stages of paternal care offers support for all three hypotheses proposed. For example, consistent with the unique hypothesis, there were genes that were unique to each stage. Genes exhibiting transient, stage-specific differential expression might be involved in facilitating the next stage, priming and/or responding to a particular event or stimulus during that stage, e.g. the arrival of offspring. Whether genes that were unique to a particular stage and not differentially expressed in other stages are a cause of future behavior or consequence of past behavior is unknown. We also found support for the carryover and additivity hypotheses: elements of an acquired neurogenomic state persisted into subsequent stages, which suggests that the events and behaviors that characterize a particular stage of paternal care (e.g. finishing a nest, the arrival of eggs, hatching) trigger a neurogenomic state that persists, perhaps for as long as those events and behaviors continue. 
Genes whose expression persists across stages could be involved in maintaining the previous neurogenomic state, and/or reflect the constant demands of parenthood, e.g. the nest must be maintained across all stages of care. Moreover, our results suggest that changes in neurogenomic state in a fathering fish might share commonalities at the molecular level with the neurogenomic changes associated with maternal care in a mammal. The number of orthologous genes that were shared across stages of maternal care in mice31 and paternal care in sticklebacks was greater than expected due to chance. This suggests that the neurogenomic state that is maintained across pregnancy and the postpartum period in mice, for example, at least partially resembles the neurogenomic state that is maintained while a male stickleback is caring for eggs and while the eggs are hatching. These results suggest that maternal and paternal care might share similarities at the molecular level, and this finding is consistent with other studies showing that parental males and females can use the same hormones and molecular mechanisms to activate the same pathways in the brain46. The finding of partial commonality between paternal care in a fish and maternal care in a mammal adds to the growing body of work showing that the underlying neural and molecular mechanisms related to parental care might have been repeatedly recruited during the evolution and diversification of parental care19,47. Indeed, our results suggest that so-called "pregnancy hormones" and added shared genes (for instance BDNF and the regulator of G protein signaling RGS3) might have been serving functions related to care giving long before the evolution of mammals, and that these mechanisms operate just as well in fathers as they do in mothers. These commonalities with maternal care in mammals suggest that the neurogenomic shifts that occur during paternal care in a fish might be deeply conserved and might not be sex-specific.
Animals have been dealing with the problem of how to improve offspring survival (as well as avoiding filial cannibalism) for a long time; our results suggest that they have relied on ancient molecular substrates to solve it.

Sticklebacks

In sticklebacks, paternal care is necessary for offspring survival and is influenced by prolactin48, and the main androgen in fishes (11KT) does not inhibit paternal care in this species49. Paternal care in sticklebacks is costly both in terms of time and energy38, infanticide and cannibalism are common20, and males must be highly vigilant to challenges from predators and rival males while caring for their vulnerable offspring. Adult males were collected from Putah Creek, CA, a freshwater population, in spring 2013, shipped to the University of Illinois where they were maintained in the lab on a 16:8 (L:D) photoperiod and at 18 °C in separate 9-l tanks. Males were provided with nesting material including algae, sand and gravel and were visually isolated from neighbors. In order to track transcriptional dynamics associated with becoming a father, we sampled males for brain gene expression profiling at five different points during the reproductive cycle (n = 5 males per time point): nest, eggs, early hatching, middle hatching and late hatching (control: reproductively mature males with no nests). Males in the nest condition had a nest but had not yet mated. Males in the eggs condition were sampled four days after their eggs were fertilized. Because males in the eggs condition were sampled four days after mating, the transcriptomic effects of mating are likely to have attenuated by the time males were sampled at this stage. Hatching takes place over the course of the 5th day after fertilization, and a previous study found that brain activation as assessed by Egr-1 expression was highest while male sticklebacks were caring for fry as compared to males with nests or eggs50.
In order to capture males' response to the new social stimulus of their fry (see51), we focused on three time points on the day of hatching, which capture the start of the hatching process (9 a.m.), when approximately half of the clutch is hatched (1 p.m.) and when all of the eggs have hatched (5 p.m.). Males in the nest, eggs and early hatching conditions were sampled at 9 a.m., males in the mid-hatching condition were sampled at 1 p.m. and males in the late hatching condition were sampled at 5 p.m. Males in these conditions were compared to reproductively mature circadian-matched control males that did not have a nest (n = 5 males per control group). Wild-caught females from the same population were used as mothers. Males were quickly netted and sacrificed by decapitation within seconds. All methods were approved by the IACUC of the University of Illinois at Urbana-Champaign (#15077). Heads were flash frozen in liquid nitrogen and the telencephalon and diencephalon were carefully dissected and placed individually in Eppendorf tubes containing 500 μL of TRIzol Reagent (Life Technologies). Total RNA was isolated immediately using TRIzol Reagent according to the manufacturer's recommendation and subsequently purified on columns with the RNeasy kit (QIAGEN). RNA was eluted in a total volume of 30 μL in RNase-free water. Samples were treated with DNase (QIAGEN) to remove genomic DNA during the extraction procedure. RNA quantity was assessed using a Nanodrop spectrophotometer (Thermo Scientific), and RNA quality was assessed using the Agilent Bioanalyzer 2100 (RIN 7.5–10); one sample was excluded because of low RNA quality. RNA was immediately stored at −80 °C until used in sequencing library preparation. The RNAseq libraries were constructed with the TruSeq® Stranded mRNA HT (Illumina) using an ePMotion 5075 robot (Eppendorf). 
Libraries were quantified on a Qubit fluorometer, using the dsDNA High Sensitivity Assay Kit (Life Technologies), and library size was assessed on a Bioanalyzer High Sensitivity DNA chip (Agilent). Libraries were pooled and diluted to a final concentration of 10 nM. Final library pools were quantified using real-time PCR, using the Illumina compatible kit and standards (KAPA) by the W. M. Keck Center for Comparative and Functional Genomics at the Roy J. Carver Biotechnology Center (University of Illinois). Single-end sequencing was performed on an Illumina HiSeq 2500 instrument using a TruSeq SBS sequencing kit version 3 by the W. M. Keck Center for Comparative and Functional Genomics at the Roy J. Carver Biotechnology Center (University of Illinois). The 79 libraries were sequenced on 27 lanes.

RNA Seq informatics

FASTQC version 0.11.352 was used to assess the quality of the reads. RNA-seq produced an average of 60 million reads per sample (Supplementary Data 9). We aligned reads to the Gasterosteus aculeatus reference genome (the repeat masked reference genome, Ensembl release 75), using TopHat (2.0.8)53 and Bowtie (2.1.0)54. Results of the TopHat alignment were largely in agreement with results from HISAT255 (Supplementary Fig. 4). Reads were assigned to features according to the Ensembl release 75 gene annotation file (http://ftp.ensembl.org/pub/release-75/gtf/gasterosteus_aculeatus/). We used the default settings in all the programs unless otherwise noted.

Defining DEGs

HTSeq v0.6.156 read counts were generated for genes using stickleback genome annotation. Any reads that fell in multiple genes were excluded from the analysis. We included genes with at least 0.5 count per million (cpm) in at least five samples, resulting in 17,659 and 17,463 genes in diencephalon and telencephalon, respectively. Count data were TMM (trimmed mean of M-values) normalized in R using edgeR v3.16.557.
Samples separated cleanly by brain region on an MDS plot; we did not detect any outliers. To assess differential expression, pairwise comparisons between experimental and control conditions were made at each stage using appropriate circadian controls. Because the nest, eggs and early stages were all sampled at 9 a.m., their expression was compared relative to the same 9 a.m. control group. Diencephalon and telencephalon were analyzed separately in edgeR v3.16.5. A tagwise dispersion estimate was used after computing common and trended dispersions. To call differential expression between treatment groups, a "glm" approach was used. We adjusted actual p-values via empirical FDR, where a null distribution of p-values was determined by permuting sample labels 500 times for each tested contrast and a false discovery rate was estimated58. Similarities across stages of care were assessed using hypergeometric tests and PCA (Supplementary Fig. 2). For a fair comparison between our study and Ray et al.31, we reanalyzed the Ray et al. gene expression dataset by applying the same model, dispersion estimates and false discovery rate procedures.

Unique genes

One of the goals of this study was to identify genes that uniquely characterized a particular state, e.g. a particular stage of paternal care, or either the territorial challenge or the paternal care experiment. To address the possibility that putative unique genes barely passed the cutoff for differential expression in another state (false negatives), we adopted an empirical approach, as in ref. 23. We kept the cutoff for DEGs at the focal state at FDR < 0.01 and relaxed the FDR cutoff on the other states (see Supplementary Fig. 1 for an explanation of this procedure). This procedure was repeated for each state and in each brain region separately.
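A permutation-based empirical FDR of the kind cited above compares, at each observed p-value, the average number of "discoveries" in label-permuted data against the number of observed discoveries. The sketch below is a minimal illustration with hypothetical inputs; the actual analysis used 500 label permutations per contrast within edgeR.

```python
def empirical_fdr(p_obs, null_perms):
    """p_obs: observed p-values (one per gene); null_perms: list of p-value
    lists, one per label permutation. For a threshold t, FDR(t) is the mean
    number of null p-values <= t divided by the number of observed p-values
    <= t, capped at 1."""
    fdrs = []
    for t in p_obs:
        null_hits = sum(sum(q <= t for q in perm) for perm in null_perms) / len(null_perms)
        obs_hits = sum(q <= t for q in p_obs)
        fdrs.append(min(1.0, null_hits / obs_hits))
    return fdrs

observed = [0.001, 0.5]             # two hypothetical genes
null = [[0.2, 0.6], [0.3, 0.9]]     # two hypothetical label permutations
print(empirical_fdr(observed, null))  # [0.0, 0.5]
```

Because the null distribution is built from the data themselves, this approach does not assume the theoretical p-values are well calibrated, which is the reason for preferring it over a plain Benjamini-Hochberg adjustment here.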
Added shared genes

We wanted to know how many of the genes that were differentially expressed in one stage remained differentially expressed in the subsequent stages (added shared genes). To find added shared genes, we first selected those stages which had significant pairwise overlap between them (FDR < 0.05, hypergeometric test). Only those genes were tested for overlap with subsequent stages; in order to qualify as an added shared gene for a particular stage, the gene could not be differentially expressed during a preceding stage and had to be differentially expressed during a subsequent stage, but not necessarily the stage immediately following that particular stage. Except for the first stage, each stage's genes were first subtracted from the previous stages' DEGs and then tested for overlap with subsequent stages (Supplementary Fig. 3). To assess the significance of added shared genes, we used the rotation gene set testing functionality (ROAST) [24] in the limma package [59]. ROAST can test whether any of the genes in a given set of added shared genes are differentially expressed in the specified contrast and also whether they are consistently regulated. ROAST tests three alternative hypotheses: "Up" tests whether the genes in the set tend to be up-regulated, "Down" tests whether the genes in the set tend to be down-regulated, and "Mixed" tests whether the genes in the set tend to be differentially expressed without regard for the direction of regulation. Here we used directional ROAST (alternative hypothesis either Up or Down), separated the added shared genes by their direction of regulation (up or down) in a focal stage, and then tested for their significant differential expression and consistent direction in subsequent stages. We also complemented this analysis with a chi-square test to determine whether the number of genes within a given set of overlapping genes showing a concordant expression pattern is greater than expected by chance.
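The hypergeometric overlap test used for pairs of stages can be written with the standard library alone; the universe and list sizes below are placeholders, not values from the study:

```python
from math import comb

def hypergeom_overlap_p(universe, n1, n2, k):
    """P(overlap >= k) when two gene lists of sizes n1 and n2 are drawn
    independently from a universe of `universe` genes."""
    tail = sum(comb(n1, i) * comb(universe - n1, n2 - i)
               for i in range(k, min(n1, n2) + 1))
    return tail / comb(universe, n2)
```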
Stickleback and mouse orthogroups

To compare stickleback and mouse genes we generated a reliable orthogroup map using OrthoDB v9.1 [60]. This map contained one-to-one, one-to-many and many-to-many orthology associations between stickleback and mouse genes, comprising 3790 orthogroups which represent 4820 stickleback and 4894 mouse genes.

Overlap significance

We tested the significance of unique and added shared DEGs between stickleback and mouse at the orthogroup level. We used Monte Carlo repeated random sampling to determine if an observed orthogroup overlap between species was statistically significant at P < 0.05 [61]. For example, suppose $t^{*}$ is the observed orthogroup overlap between the stickleback and the mouse gene lists, and n1 and n2 are the respective gene set sizes. We repeatedly and randomly drew samples of size n1 from the stickleback genome and samples of size n2 from the mouse genome M times (M = 10^5) with replacement, detected an overlap $t_i$ at each iteration, and computed an estimated p-value using the following equation, $$\hat{p} = \frac{1 + \sum_{i=1}^{M} I\left( t_i \ge t^{*} \right)}{1 + M}$$ where I(.) is an indicator function.

Transcriptional regulatory network (TRN) analysis

ASTRIX uses gene expression data to identify regulatory interactions between transcription factors (TFs) and their target genes. A previous study validated ASTRIX-generated TF-target associations using data from the ModENCODE, REDfly, and DROID databases [62]. The predicted targets of TFs were defined as those genes that share very high mutual information (P < 10^-6) with a TF and can be predicted quantitatively with high accuracy (root mean square deviation (RMSD) < 0.33, i.e. a prediction error less than one third of each gene expression profile's standard deviation). The list of putative TFs in the stickleback genome was obtained from the Animal Transcription Factor Database.
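The Monte Carlo overlap estimator above can be sketched as follows; the function and argument names are ours, and the real analysis drew M = 10^5 samples from the full gene complements of the two genomes:

```python
import random

def mc_overlap_p(genes1, genes2, orthomap, n1, n2, t_obs, M=10_000, seed=1):
    """Estimate P(random orthogroup overlap >= t_obs).

    genes1, genes2: gene identifiers for the two genomes.
    orthomap: dict mapping a gene id to its orthogroup id.
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(M):
        # Sampling with replacement, as in the text.
        og1 = {orthomap[g] for g in rng.choices(genes1, k=n1) if g in orthomap}
        og2 = {orthomap[g] for g in rng.choices(genes2, k=n2) if g in orthomap}
        if len(og1 & og2) >= t_obs:
            exceed += 1
    # The +1 terms match the estimator in the equation above.
    return (1 + exceed) / (1 + M)
```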
Given the TF and target sets, ASTRIX infers a genome-scale TRN model capable of making quantitative predictions about the expression levels of genes given the expression values of the transcription factors. The ASTRIX algorithm was previously used to infer TRN models for honeybees, mice and sticklebacks [34, 62, 63, 64]. ASTRIX identified transcription factors that are central actors in regulating aggression, maturation and foraging behaviors in the honeybee brain [62]. Here we used ASTRIX to infer a joint gene regulatory network by combining gene expression profiles from a previous study on the transcriptomic response to a territorial challenge in male sticklebacks [34] with the data from this experiment. Combining the two datasets increased statistical power to help identify modules that are shared and unique to the two experiments. Transcription factors that are predicted to regulate DEGs in either experiment were determined according to whether they had a significant number of targets as assessed by a Bonferroni FDR-corrected hypergeometric test. We derived GO assignments using protein family annotations from the database PANTHER [65]. Stickleback protein sequences were blasted against all genomes in the database (PANTHER 9.0, 85 genomes). This procedure assigns proteins to PANTHER families on the basis of structural information as well as phylogenetic information. Genes were then annotated using GO information derived from the 85 sequenced genomes in the PANTHER database. GO analyses were performed in R using topGO v2.16.0 and Fisher's exact test. A p-value cutoff of < 0.01 was used to select significantly enriched functional terms wherever possible. We summarized the GO terms into larger and more general categories to get an overview of the underlying biology. Terms were grouped together if they were in a similar pathway and/or based on semantic similarity. GO enrichments along with their respective p-values are in Supplementary Data 2 and 7.
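The RMSD acceptance rule for TF-target pairs (RMSD below one third of the target profile's standard deviation) can be illustrated with a one-predictor least-squares fit; ASTRIX itself uses mutual-information screening and a more elaborate model, so this is only a sketch with our own names:

```python
from statistics import mean, pstdev

def rmsd_accepts(tf_expr, gene_expr, threshold=1 / 3):
    """Fit gene ~ a + b * tf and accept the TF-target link if the
    root-mean-square deviation is below threshold * SD(gene)."""
    mx, my = mean(tf_expr), mean(gene_expr)
    sxx = sum((x - mx) ** 2 for x in tf_expr)
    sxy = sum((x - mx) * (y - my) for x, y in zip(tf_expr, gene_expr))
    b = sxy / sxx
    a = my - b * mx
    residuals = [y - (a + b * x) for x, y in zip(tf_expr, gene_expr)]
    rmsd = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
    return rmsd < threshold * pstdev(gene_expr)
```

A perfectly linear TF-target pair passes the criterion; a noisy, uncorrelated pair does not.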
Further information on research design is available in the Nature Research Reporting Summary linked to this article. The datasets generated during and/or analysed during the current study are available in GEO accession code number GSE134508. Code availability Codes are available on GitHub (https://github.com/bukhariabbas/stickleback-paternal-care). All other relevant data is available upon request. Clutton-Brock, T. H. The Evolution of Parental Care. (Princeton University Press, 1991). Royle, N. J., Smiseth, P. T. & Kolliker, M. The Evolution of Parental Care. (Oxford University Press, 2012). DeAngelis, R. S. & Rhodes, J. S. Sex differences in steroid hormones and parental effort across the breeding cycle in Amphiprion ocellaris. Copeia 104, 586–593 (2016). Rosenblatt, J. S. Nonhormonal basis of maternal behavior in the rat. Science 156, 1512–1514 (1967). Feldman, R., Braun, K. & Champagne, F. A. The neural mechanisms and consequences of paternal caregiving. Nat. Rev. Neurosci. 20, 205–224 (2019). Gordon, I., Zagoory-Sharon, O., Leckman, J. F. & Feldman, R. Oxytocin and the development of parenting in humans. Biol. Psychiatry 68, 377–382 (2010). Storey, A. E., Walsh, C. J., Quinton, R. L. & Wynne-Edwards, K. E. Hormonal correlates of paternal responsiveness in new and expectant fathers. Evol. Hum. Behav. 21, 79–95 (2000). Parker, D. J. et al. Transcriptomes of parents identify parenting strategies and sexual conflict in a subsocial beetle. Nat. Commun. 6, 8449 (2015). Lehrman, D. S. The reproductive behavior of ring doves. Sci. Am. 211, 48–55 (1964). Kohl, J., Autry, A. E. & Dulac, C. The neurobiology of parenting: a neural circuit perspective. BioEssays 39, 1–11 (2017). Balshine, S. & Sloman, K. A. in Encyclopedia of Fish Physiology: From Genome to Environment (Anthony Farrell ed) 670–677 (Academic Press, 2011). Gross, M. R. & Sargent, R. C. The evolution of male and female parental care in fishes. Am. Zool. 25, 807–822 (1985). Whittington, C. M., Griffith, O. 
W., Qi, W., Thompson, M. B. & Wilson, A. B. Seahorse brood pouch transcriptome reveals common genes associated with vertebrate pregnancy. Mol. Biol. Evol. 32, 3114–3131 (2015). Whittington, C. M. & Wilson, A. B. The role of prolactin in fish reproduction. Gen. Comp. Endocrinol. 191, 123–136 (2013). Newman, S. W. The medial extended amygdala in male reproductive behavior. A node in the mammalian social behavior network. Ann. N. Y. Acad. Sci. 877, 242–257 (1999). Goodson, J. L. The vertebrate social behavior network: evolutionary themes and variations. Horm. Behav. 48, 11–22 (2005). O'Connell, L. A. & Hofmann, H. A. The Vertebrate mesolimbic reward system and social behavior network: A comparative synthesis. J. Comp. Neurol. 519, 3599–3639 (2011). Young, R. L. et al. Conserved transcriptomic profiles underpin monogamy across vertebrates. Proc. Natl Acad. Sci. USA 116, 1331–1336 (2019). O'Connell, L. A., Matthews, B. J. & Hofmann, H. A. Isotocin regulates paternal care in a monogamous cichlid fish. Horm. Behav. 61, 725–733 (2012). Wootton, R. J. A Functional Biology of Sticklebacks. (University of California Press, 1984). Ketterson, E. D., Nolan, V., Wolf, L. & Ziegenfus, C. Testosterone and avian life histories—Effects of experimentally elevated testosterone on behavior and correlates of fitness in the dark-eyed junco (Junco hyemalis). Am. Nat. 140, 980–999 (1992). Wingfield, J. C., Hegner, R. E., Dufty, Alfred, M. & Ball, G. F. The challenge hypothesis: Theoretical implications for patterns of testosterone secretion, mating systems, and breeding strategies. Am. Nat. 136, 829–846 (1990). Stein, L. R., Bukhari, S. A. & Bell, A. M. Personal and transgenerational cues are nonadditive at the phenotypic and molecular level. Nature Ecology &. Evolution 2, 1306–1311 (2018). Wu, D. et al. ROAST: rotation gene set tests for complex microarray experiments. Bioinformatics (Oxford, England) 26, 2176–2182 (2010). Kleszczynska, A. et al. 
Determination of the neuropeptides arginine vasotocin and isotocin in brains of three-spined sticklebacks (Gasterosteus aculeatus) by off-line solid phase extraction-liquid chromatography-electrospray tandem mass spectrometry. J. Chromatogr. A. 1150, 290–294 (2007). Kleszczynska, A., Sokolowska, E. & Kulczykowska, E. Variation in brain arginine vasotocin (AVT) and isotocin (IT) levels with reproductive stage and social status in males of three-spined stickleback (Gasterosteus aculeatus). Gen. Comp. Endocrinol. 175, 290–296 (2012). Kulczykowska, E. & Kleszczynska, A. Brain arginine vasotocin and isotocin in breeding female three-spined sticklebacks (Gasterosteus aculeatus): the presence of male and egg deposition. Gen. Comp. Endocrinol. 204, 8–12 (2014). Lema, S. C., Sanders, K. E. & Walti, K. A. Arginine vasotocin, isotocin and nonapeptide receptor gene expression link to social status and aggression in sex-dependent patterns. J. Neuroendocrinol. 27, 142–157 (2015). Wu, Z., Autry, A. E., Bergan, J. F., Watabe-Uchida, M. & Dulac, C. G. Galanin neurons in the medial preoptic area govern parental behaviour. Nature 509, 325–330 (2014). Schneider, J. S. et al. Progesterone receptors mediate male aggression toward infants. Proc. Natl Acad. Sci. USA 100, 2951–2956 (2003). Ray, S. et al. An examination of dynamic gene expression changes in the mouse brain during pregnancy and the postpartum period. G3 (Bethesda, Md.) 6, 221–233 (2015). Martinowich, K., Manji, H. & Lu, B. New insights into BDNF function in depression and anxiety. Nat. Neurosci. 10, 1089–1093 (2007). Raab, R. M., Bullen, J., Kelleher, J., Mantzoros, C. & Stephanopoulos, G. Regulation of mouse hepatic genes in response to diet induced obesity, insulin resistance and fasting induced weight reduction. Nutr. Metab. (Lond). 2, 15–15 (2005). Bukhari, S. A. et al. Temporal dynamics of neurogenomic plasticity in response to social interactions in male threespined sticklebacks. PLoS Genet. 13, e1006840 (2017). 
Hirschenhauser, K. & Oliveira, R. F. Social modulation of androgens in male vertebrates: meta-analyses of the challenge hypothesis. Anim. Behav. 71, 265–277 (2006). Rosvall, K. A. & Peterson, M. P. Behavioral effects of social challenges and genomic mechanisms of social priming: What's testosterone got to do with it? Current Zoology 60, 791–803 (2014). Cardoso, S. D., Teles, M. C. & Oliveira, R. F. Neurogenomic mechanisms of social plasticity. J. Exp. Biol. 218, 140–149 (2015). Smith C, Wootton RJ. Parental energy expenditure of the male three‐spined stickleback. J. Fish. Biol. 54, 1132–1136 (1999). Palma-Gudiel, H., Cordova-Palomera, A., Leza, J. C. & Fananas, L. Glucocorticoid receptor gene (NR3C1) methylation processes as mediators of early adversity in stress-related disorders causality: a critical review. Neuroscience Biobehavioral Review 55, 520–535 (2015). Powis, Z. et al. De novo variants in KLF7 are a potential novel cause of developmental delay/intellectual disability, neuromuscular and psychiatric symptoms. Clin. Genet. 93, 1030–1038 (2018). Kinsley, C. H. & Amory-Meyer, E. Why the maternal brain? J. Neuroendocrinol. 23, 974–983 (2011). Hillerer, K. M., Jacobs, V. R., Fischer, T. & Aigner, L. The maternal brain: an organ with peripartal plasticity. Neural. Plast. 2014, 574159 (2014). Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. Nat. Rev. Neurosci. 9, 11–25 (2008). Sanogo, Y. O. & Bell, A. M. Molecular mechanisms and the conflict between courtship and aggression in three-spined sticklebacks. Mol. Ecol. 25, 4368–4376 (2016). Choi, G. B. et al. Lhx6 delineates a pathway mediating innate reproductive behaviors from the amygdala to the hypothalamus. Neuron 46, 647–660 (2005). Wynne-Edwards, K. E. & Timonin, M. E. Paternal care in rodents: Weakening support for hormonal regulation of the transition to behavioral fatherhood in rodent animal models of biparental care. Horm. Behav. 52, 114–121 (2007). Dulac, C., O'Connell, L. A. 
& Wu, Z. Neural control of maternal and paternal behaviors. Science 345, 765–770 (2014). de Ruiter, A. J., Wendelaar Bonga, S. E., Slijkhuis, H. & Baggerman, B. The effect of prolactin on fanning behavior in the male three-spined stickleback, Gasterosteus aculeatus L. Gen. Comp. Endocrinol. 64, 273–283 (1986). Pall, M. K., Mayer, I. & Borg, B. Androgen and behavior in the male three-spined stickleback, Gasterosteus aculeatus II. Castration and 11-ketoandrostenedione effects on courtship and parental care during the nesting cycle. Horm. Behav. 42, 337–344 (2002). Kent, M. & Bell, A. M. Changes in behavior and brain immediate early gene expression in male threespined sticklebacks as they become fathers. Horm. Behav. 97, 102–111 (2018). Zilkha, N., Scott, N. & Kimchi, T. Sexual dimorphism of parental care: from genes to behavior. Annu. Rev. Neurosci. 40, 273–305 (2017). FastQC: a quality control tool for high throughput sequence data. http://www.bioinformatics.babraham.ac.uk/projects/fastqc (2010) Kim, D. et al. TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome. Biol. 14, R36 (2013). Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 9, 357–359 (2012). Kim, D., Langmead, B. & Salzberg, S. L. HISAT: a fast spliced aligner with low memory requirements. Nat. Methods 12, 357 (2015). Anders, S., Pyl, P. T. & Huber, W. HTSeq-a Python framework to work with high-throughput sequencing data. Bioinformatics (Oxford, England) 31, 166–169 (2015). Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics (Oxford, England) 26, 139–140 (2010). Storey, J. D. & Tibshirani, R. Statistical significance for genomewide studies. Proc. Natl Acad. Sci. USA 100, 9440–9445 (2003). Ritchie, M. E. et al.
limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 43, e47 (2015). Zdobnov, E. M. et al. OrthoDB v9.1: cataloging evolutionary and functional annotations for animal, fungal, plant, archaeal, bacterial and viral orthologs. Nucleic Acids Res. 45, D744–d749 (2017). Ernst, M. D. Permutation methods: a basis for exact inference. Stat. Sci. 19, 676–685 (2004). Chandrasekaran, S. et al. Behavior-specific changes in transcriptional modules lead to distinct and predictable neurogenomic states. Proc. Natl Acad. Sci. USA 108, 18020–18025 (2011). Shpigler, H. Y. et al. Deep evolutionary conservation of autism-related genes. Proc. Natl Acad. Sci. USA 114, 9653–9658 (2017). Saul, M. C. et al. Transcriptional regulatory dynamics drive coordinated metabolic and neural response to social challenge in mice. Genome Res. 27, 959–972 (2017). Mi, H. et al. PANTHER version 11: expanded annotation data from gene ontology and reactome pathways, and data analysis tool enhancements. Nucleic Acids Res. 45, D183–d189 (2017). We thank Gene Robinson, Mark Hauber, Dave Zhao, Saurabh Sinha, Lisa Stubbs, Mikus Abolins-Abols and members of the Bell lab for comments on the paper. Bukhari was supported by a Dissertation Improvement Grant from the University of Illinois during the preparation of this paper. This material is based upon work supported by the National Science Foundation under Grant No. IOS 1121980, by the National Institutes of Health under award number 2R01GM082937-06A1 and by a grant from the Simons Foundation to L. Stubbs and Gene Robinson. Michael C. Saul Present address: Jackson Labs, 600 Main St., Bar Harbor, ME, 04609, USA Laura R. Stein Present address: Department of Biology, University of Oklahoma, 730 Van Vleet Oval, Room 314, Norman, OK, 73019, USA Rebecca Trapp Present address: Department of Biological Sciences, Purdue University, 915 W. State St., West Lafayette, IN, 47907, USA Carl R.
Woese Institute for Genomic Biology, University of Illinois, Urbana Champaign, 1206 Gregory Drive, Urbana, IL, 61801, USA Syed Abbas Bukhari, Michael C. Saul & Alison M. Bell Illinois Informatics Institute, University of Illinois, Urbana Champaign, 616 E. Green St., Urbana, IL, 61820, USA Syed Abbas Bukhari Department of Evolution, Ecology and Behavior, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA Syed Abbas Bukhari, Laura R. Stein, Rebecca Trapp & Alison M. Bell Neuroscience Program, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA Noelle James & Alison M. Bell Program in Ecology, Evolution and Conservation Biology, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA Miles K. Bensky & Alison M. Bell S.A.B. contributed to study design, analyzed the data and wrote the first draft of the paper. M.S. contributed to study design and data analysis. N.J., M.B., L.R.S. and R.T. contributed to study design and collected the data. A.M.B. designed the study, contributed to data analysis and interpretation and edited the paper. All authors approved the final version of the paper. Correspondence to Alison M. Bell. The authors declare no competing interests. Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Bukhari, S.A., Saul, M.C., James, N. et al. Neurogenomic insights into paternal care and its relation to territorial aggression. Nat Commun 10, 4437 (2019).
https://doi.org/10.1038/s41467-019-12212-7
\begin{document} \title{The Geometry of $(t\mod{q})$-arcs} \author[1]{Sascha Kurz} \author[2]{Ivan Landjev} \author[3]{Francesco Pavese} \author[4]{Assia Rousseva} \affil[1]{Mathematisches Institut, Universit\"at Bayreuth, D-95440 Bayreuth, Germany, [email protected]} \affil[2]{New Bulgarian University, 21 Montevideo str., 1618 Sofia, Bulgaria and Bulgarian Academy of Sciences, Institute of Mathematics and Informatics, 8 Acad G. Bonchev str., 1113 Sofia, Bulgaria, [email protected]} \affil[3]{Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, Via Orabona 4, 70125, Bari, Italy, [email protected]} \affil[4]{Faculty of Mathematics and Informatics, Sofia University, 5 J. Bourchier blvd., 1164 Sofia, Bulgaria, [email protected]} \date{} \maketitle \begin{abstract} In this paper, we give a geometric construction of the three strong non-lifted $(3\mod{5})$-arcs in $\PG(3,5)$ of respective sizes 128, 143, and 168, and construct an infinite family of non-lifted, strong $(t\mod{q})$-arcs in $\PG(r,q)$ with $t=(q+1)/2$ for all $r\ge3$ and all odd prime powers $q$. \noindent \keywords{$(t\mod q)$-arcs \and linear codes \and quadrics \and caps \and quasidivisible arcs \and sets of type $(m,n)$} \noindent \subclass{51E22 \quad 51E21 \quad 94B05} \end{abstract} \section{Introduction}\label{sec:intro} The strong $(t\mod{q})$-arcs were introduced and investigated in \cite{KLR22,LR13,LR19,LRS16} in connection with the extendability problem for Griesmer arcs. This problem is related in turn to the problem of the existence and extendability of arcs associated with Griesmer codes. In \cite{KLR22} the classification of the strong $(3\mod5)$-arcs was used to rule out the existence of the hypothetical $[104,4,82]_5$-code, one of the four undecided cases for codes of dimension 4 over $\mathbb{F}_5$. 
It turns out that apart from the many strong $(3\mod5)$-arcs obtained from the canonical lifting construction, there exist three non-lifted strong $(3\mod5)$-arcs of respective sizes 128, 143, and 168. This disproves the conjecture that non-lifted strong $(3\mod5)$-arcs are impossible in geometries over $\mathbb{F}_5$ in dimensions larger than 2. The three arcs were constructed by a computer search, but display regularities which suggest a nice geometric structure. In this paper, we give a geometric, computer-free construction of the three non-lifted strong $(3\mod5)$-arcs in $\PG(3,5)$. Two of them are related to the non-degenerate quadrics of $\PG(3,5)$. Their construction can be generalized further to larger fields and larger dimensions. \section{Preliminaries} We define an arc in $\PG(r,q)$ as a mapping from the point set $\mathcal{P}$ of the geometry to the non-negative integers: $\mathcal{K}\colon\mathcal{P}\to\mathbb{N}_0$. An arc $\mathcal{K}$ in $\PG(r,q)$ is called a $(t\mod{q})$-arc if $\mathcal{K}(L)\equiv t\pmod{q}$ for every line $L$. It is immediate that then $\mathcal{K}(S)\equiv t\pmod{q}$ for every subspace $S$ with $\dim S\ge1$. Increasing the multiplicity of an arbitrary point by $q$ preserves the property of being a $(t\mod{q})$-arc, so we can assume that the point multiplicities are integers contained in the interval $[0,q-1]$. If the maximal point multiplicity is at most $t$, we call $\mathcal{K}$ a \emph{strong} $(t\mod{q})$-arc. For instance, a hyperplane, each of its points taken with multiplicity 1, is a strong $(1\mod{q})$-arc, since every line meets it in $1$ or $q+1$ points. The extendability of the so-called $t$-quasidivisible arcs is related to structural properties of $(t\mod{q})$-arcs. In particular, an $(n,s)$-arc $\mathcal{K}$ in $\PG(r,q)$ with spectrum $(a_i)$ is called \emph{$t$-quasidivisible} with divisor $\Delta$ if $s\equiv n+t\pmod{\Delta}$ and $a_i=0$ for all $i\not\equiv n,n+1,\ldots,n+t\pmod{\Delta}$. It is quite common in coding theory that hypothetical Griesmer codes are associated with arcs that turn out to be $t$-quasidivisible with divisor $q$ for some $t$.
The extendability of $t$-quasidivisible arcs is related to the structure of particular strong $(t\mod{q})$-arcs associated with them. Let $\mathcal{K}$ be an arc in $\PG(r,q)$ and let $\sigma\colon\mathbb{N}_0\to\mathbb{Q}$ be a function satisfying $\sigma(\mathcal{K}(H))\in\mathbb{N}_0$ for every hyperplane $H$ in $\mathcal{H}$, where $\mathcal{H}$ is the set of all hyperplanes in $\PG(r,q)$. The arc $\mathcal{K}^{\sigma}\colon\mathcal{H}\to\mathbb{N}_0$, $H\mapsto\sigma(\mathcal{K}(H))$, is called the $\sigma$-dual of $\mathcal{K}$. For a $t$-quasidivisible arc with divisor $q$, we consider the $\sigma$-dual arc obtained for $\mathcal{K}^{\sigma}(H)=n+t-\mathcal{K}(H)\pmod{q}$. It turns out that with this $\sigma$ the $\sigma$-dual to a $t$-quasidivisible arc $\mathcal{K}$ is a strong $(t\mod{q})$-arc. Moreover, if $\mathcal{K}^{\sigma}$ contains a hyperplane in its support, then $\mathcal{K}$ is extendable \cite{LR13,LRS16}. There exist several straightforward constructions of $(t\mod{q})$-arcs \cite{LR13,LR19,LRS16}. The first is the so-called sum-of-arcs construction. \begin{theorem} \label{thm:sum-of-arcs} Let $\mathcal{K}$ and $\mathcal{K}'$ be a $(t_1 \mod q)$- and a $(t_2 \mod q)$-arc in $\PG(r,q)$, respectively. Then $\mathcal{K}+\mathcal{K}'$ is a $(t \mod q)$-arc with $t \equiv t_1 + t_2 \pmod q$. Similarly, $\alpha\mathcal{K}$, where $\alpha\in\{0,\dots,p-1\}$ and $p$ is the characteristic of $\mathbb{F}_q$, is a $(t \mod q)$-arc with $t\equiv \alpha t_1\pmod q$. \end{theorem} For the special case of $t=0$ and $q=p$ we have that the sum of two $(0\mod{p})$-arcs and the scalar multiple of a $(0\mod{p})$-arc are again $(0\mod{p})$-arcs. Hence the set of all $(0\mod{p})$-arcs is a vector space over $\mathbb{F}_p$, cf.~\cite{LR19}. The second construction is the so-called \emph{lifting construction}, see \cite[p. 230]{LR19}.
\begin{theorem} \label{thm:lifting-construction} Let $\mathcal{K}_0$ be a (strong) $(t\mod q)$-arc in a projective $s$-space $\Sigma$ of $\PG(r, q)$, where $1 \le s < r$. For a fixed projective $(r-s-1)$-space $\Gamma$ of $\PG(r, q)$, disjoint from $\Sigma$, let $\mathcal{K}$ be the arc in $\PG(r, q)$ defined as follows: \begin{itemize} \item for each point $P$ of $\Gamma$, set $\mathcal{K}(P)=t$; \item for each point $Q \in \PG(r, q) \setminus \Gamma$, set $\mathcal{K}(Q)=\mathcal{K}_0(R)$, where $R=\langle \Gamma, Q \rangle \cap \Sigma$. \end{itemize} Then $\mathcal{K}$ is a (strong) $(t\mod q)$-arc in $\PG(r,q)$ of cardinality $q^{r-s} \cdot |\mathcal{K}_0| + t \frac{q^{r-s}-1}{q-1}$. \end{theorem} Arcs obtained by the lifting construction are called \emph{lifted arcs}. If $\Gamma$ is a point, then we speak of a \emph{lifting point}. The iterative application of the lifting construction gives the more general version stated above. In the other direction, in \cite[Lemma 1]{LR19} it was shown that the set of all lifting points forms a subspace. The classification of strong $(t\mod{q})$-arcs in $\PG(2,q)$ is equivalent to that of certain plane blocking sets \cite{LR16}. \begin{theorem} \label{thm:blocking-set-constr} A strong $(t\mod q)$-arc $\mathcal{K}$ in $\PG(2,q)$ of cardinality $mq+t$ exists if and only if there exists an $((m-t)q+m,\ge m-t)$-blocking set $\mathcal{B}$ with line multiplicities contained in the set $\{m-t,m-t+1,\dots,m\}$. \end{theorem} The condition that the multiplicity of each point is at most $t$ turns out to be very strong. For $t=0$, the only strong $(0\mod{q})$-arc is the trivial zero-arc. For $t=1$ the strong $(1\mod{q})$-arcs are the hyperplanes. For $t=2$, all strong $(2\mod{q})$-arcs in $\PG(r,q)$, for $r\ge3$, $q\ge5$, turn out to be lifted \cite{LR19}. In $\PG(2,q)$, all $(2\mod{q})$-arcs are also known (cf.~\cite[Lemma 3.7]{KLR22}). Apart from one sporadic example, all such arcs are again lifted.
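For instance (a simple sanity check of the cardinality formula in Theorem~\ref{thm:lifting-construction}; the example is ours), lift a strong $(3\mod 5)$-arc $\mathcal{K}_0$ of cardinality $28$ in a plane $\Sigma$ of $\PG(3,5)$ from a point $\Gamma$ not in $\Sigma$. Here $q=5$, $r=3$, $s=2$, $t=3$, so
\[
|\mathcal{K}| \;=\; q^{r-s}\cdot|\mathcal{K}_0| + t\,\frac{q^{r-s}-1}{q-1} \;=\; 5\cdot 28 + 3\cdot\frac{5-1}{5-1} \;=\; 140+3 \;=\; 143,
\]
a lifted arc of the same cardinality as one of the non-lifted examples discussed below.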
It was conjectured in \cite{LR16} that all strong $(3\mod{5})$-arcs in $\PG(r,5)$, $r\ge3$, are lifted. The computer classification reported in \cite{KLR22} shows that this conjecture is wrong: there exist $(3\mod{5})$-arcs of respective sizes 128, 143, and 168 that are not lifted. In the next sections we give a geometric (computer-free) description of these arcs and define an infinite class of strong $(t\mod{q})$-arcs in $\PG(r,q)$, $r\ge3$, that are not lifted. \section{The arc of size 128} \label{sec:128} We shall need the classification of all strong $(3\mod{5})$-arcs in $\PG(2,5)$ of sizes 18, 23, 28 and 33. It is obtained easily from Theorem~\ref{thm:blocking-set-constr} and can be found in \cite{KLR22,LR19}. \begin{theorem} \label{thm:plane-arcs} Let $\mathcal{K}$ be a strong $(3\mod5)$-arc in $\PG(2,5)$. Let $\lambda_i$, $i=0,1,2,3$, denote the number of $i$ points of $\mathcal{K}$. \begin{enumerate}[(a)] \item If $|\mathcal{K}|=18$ then $\mathcal{K}$ is the sum of three lines. \item If $|\mathcal{K}|=23$ then it has $\lambda_3=3,\lambda_2=4,\lambda_1=6$. The four 2-points form a quadrangle, the three 3-points are the diagonal points of the quadrangle, and the 1-points are the intersections of the diagonals with the sides of the quadrangle. \item If $|\mathcal{K}|=28$ then it has $\lambda_3=6,\lambda_1=10$. The 3-points form an oval, and the 1-points are the internal points to this oval. \item There exist ten non-isomorphic arcs with $|\mathcal{K}|=33$. These are: \begin{enumerate}[(i)] \item the duals of the complements of the seven $(10,3)$-arcs in $\PG(2,5)$ (cf. 
\cite{L96}); \item the dual of the multiset which is complement of the $(11,3)$-arc with four external lines plus one point which is not on a 6-line ($\lambda_3=6,\lambda_2=5,\lambda_1=5$); \item the dual of a blocking set in which one double point forms an oval with five of the 0-points; the tangent to the oval in the 2-point is a 3-line ($\lambda_3=6,\lambda_2=5,\lambda_1=5$); \item the modulo 5 sum of three non-concurrent lines: two of them are lines of 3-points and one is a line of 2-points ($\lambda_3=8,\lambda_2=4,\lambda_1=1$). \end{enumerate} \end{enumerate} \end{theorem} Let us note that one of the strong $(3\mod5)$-arcs in case $(d(i))$ is obtained by taking as 3-points the points of an oval and as 1-points the external points to the oval. Consider a $(3\mod5)$-arc $\mathcal{K}$ in $\PG(3,5)$ which is of multiplicity 128. Let $\varphi$ be a projection from an arbitrary 0-point $P$ to a plane $\pi$ not incident with $P$: \begin{equation} \label{eq:project} \varphi\colon \left\{\begin{array}{lll} \mathcal{P}\setminus\{P\}\ & \rightarrow\ & \pi \\ Q & \rightarrow & \pi\cap\langle P,Q\rangle . \end{array}\right. \end{equation} Here $\mathcal{P}$ is again the set of points of $\PG(3,5)$. Note that $\varphi$ maps the lines through $P$ into points from $\pi$, and the planes through $P$ into lines in $\pi$. For every set of points $\mathcal{F}\subset\pi$, define the induced arc $\mathcal{K}^{\varphi}$ by \[\mathcal{K}^{\varphi}(\mathcal{F})\,= \,\sum_{\varphi(Q)\in\mathcal{F}} \mathcal{K}(Q).\] It is clear that $P$ is incident with 3- and 8-lines only. If there exists a 13-line $L$ through $P$ then all planes through $L$ have multiplicity at least 33 (Theorem~\ref{thm:plane-arcs}) and $|\mathcal{K}|\ge6\cdot33-5\cdot13=133$, a contradiction. An 8-line through $P$ is either of type (3,3,1,1,0,0) (type ($\alpha$)), or of type (3,2,2,1,0,0) (type ($\beta$)).
Other types for an 8-line are impossible by the same counting argument as above: a plane through such a line would have multiplicity at least 33 (18-planes are impossible since $P$ is a 0-point), which yields a contradiction. A 3-line through $P$ is of type $(\gamma_1)$ (3,0,0,0,0,0), $(\gamma_2)$ (2,1,0,0,0,0), or $(\gamma_3)$ (1,1,1,0,0,0). A point in the projection plane is said to be of type ($\alpha$), ($\beta$), or ($\gamma_i$) if it is the image of a line of the same type. Let us note that types $(\alpha)$ and $(\beta)$ are the same as types $(B_2)$ and $(B_3)$ from \cite{KLR22}; similarly, type $(\gamma_i)$ coincides with type $(A_i)$, $i=1,2,3$. By Theorem~\ref{thm:plane-arcs}, if a line in the projection plane has one 8-point then it contains: \begin{itemize} \item one point of type ($\alpha$), one point of type ($\gamma_1$), and four points of type ($\gamma_2$), or else \item one point of type ($\beta$), two points of type ($\gamma_1$), two points of type ($\gamma_2$) and one point of type ($\gamma_3$). \end{itemize} We are going to prove that if $\mathcal{K}$ is a strong $(3\mod5)$-arc in $\PG(3,5)$ of cardinality 128 then the induced arc $\mathcal{K}^{\varphi}$ in $\PG(2,5)$ is unique (up to isomorphism). It consists of seven 8-points and 24 3-points. Three of the 8-points are of type $(\alpha)$, and four are of type ($\beta$). The 3-points are: six of type ($\gamma_1$), twelve of type ($\gamma_2$), and six of type ($\gamma_3$). The points of type ($\beta$) form a quadrangle, and the points of type ($\alpha$) are the diagonal points. The intersections of the lines defined by the diagonal points with the sides of the quadrangle are points of type ($\gamma_3$); the six points on the lines defined by the diagonal points that are not on sides of the quadrangle are of type ($\gamma_1$); all the remaining 3-points are of type ($\gamma_2$). The induced arc $\mathcal{K}^{\varphi}$ is presented in the picture below.
\begin{center} \begin{tikzpicture}[line width=1pt, scale=0.5] \draw (0,0)--(0,8); \draw (-6,0)--(12,0); \draw (-6,0)--(2,5.33)--(0,8)--(-2,5.33)--(6,0)--(0,8)--(-6,0); \draw (-6,0)--(1,6.66); \draw (6,0)--(-1,6.66); \draw (-6,0)--(3.33,3.55); \draw (-6,0)--(4.66,1.77); \draw (6,0)--(-3.33,3.55); \draw (6,0)--(-4.66,1.77); \draw (-6,0)--(5.3,4.30); \draw (-6,0)--(9,2.5); \draw[black] (-2,5.33)--(0,5.71)--(2,5.33) .. controls (8,3.5) and (8,3.5) .. (12,0); \draw[black] (-6,0) circle (2mm) [fill=black]; \draw[blue!20] (0,0) circle (2mm) [fill=blue!20]; \draw[black] (6,0) circle (2mm) [fill=black]; \draw[black!50] (0,8) circle (2mm) [fill=black!50]; \draw[black!50] (2,5.33) circle (2mm) [fill=black!50]; \draw[black!50] (-2,5.33) circle (2mm) [fill=black!50]; \draw[black!50] (0,4) circle (2mm) [fill=black!50]; \draw[black] (-6,0) circle (3mm); \draw[black] (6,0) circle (3mm); \draw[black!50] (0,8) circle (3mm); \draw[black!50] (2,5.33) circle (3mm); \draw[black!50] (-2,5.33) circle (3mm); \draw[black!50] (0,4) circle (3mm); \draw[black!60] (-3,0) circle (2mm) [fill=black!60]; \draw[black!60] (3,0) circle (2mm) [fill=black!60]; \draw[blue!20] (12,0) circle (2mm) [fill=blue!20]; \draw[black!30] (-4.66,1.77) circle (2mm) [fill=black!30]; \draw[black!30] (-3.33,3.55) circle (2mm) [fill=black!30]; \draw[blue!20] (-1,6.66) circle (2mm) [fill=blue!20]; \draw[black!30] (4.66,1.77) circle (2mm) [fill=black!30]; \draw[black!30] (3.33,3.55) circle (2mm) [fill=black!30]; \draw[blue!20] (1,6.66) circle (2mm) [fill=blue!20]; \draw[black] (0,5.71) circle (2mm) [fill=black]; \draw[black] (0,5.71) circle (3mm); \draw[blue!20] (-1,4.66) circle (2mm) [fill=blue!20]; \draw[blue!20] (1,4.66) circle (2mm) [fill=blue!20]; \draw[black!30] (0,1) circle (2mm) [fill=black!30]; \draw[black!30] (0,2.3) circle (2mm) [fill=black!30]; \draw[black!30] (-1.6,2.9) circle (2mm) [fill=black!30]; \draw[black!30] (-3.6,1.6) circle (2mm) [fill=black!30]; \draw[black!60] (-2.6,3.26) circle
(2mm) [fill=black!60]; \draw[black!60] (-4.2,1.65) circle (2mm) [fill=black!60]; \draw[black!30] (1.6,2.9) circle (2mm) [fill=black!30]; \draw[black!30] (3.6,1.6) circle (2mm) [fill=black!30]; \draw[black!60] (2.6,3.26) circle (2mm) [fill=black!60]; \draw[black!60] (4.2,1.65) circle (2mm) [fill=black!60]; \draw[black!30] (5.3,4.3) circle (2mm) [fill=black!30]; \draw[black!30] (9,2.5) circle (2mm) [fill=black!30]; \draw[black] (6,12) circle (2mm) [fill=black]; \draw[black] (6,12) circle (3mm); \draw[black!50] (6,11) circle (2mm) [fill=black!50]; \draw[black!50] (6,11) circle (3mm); \draw[black!60] (6,10) circle (2mm) [fill=black!60]; \draw[black!30] (6,9) circle (2mm) [fill=black!30]; \draw[blue!20] (6,8) circle (2mm) [fill=blue!20]; \draw (9,12) node{\small{$(3,3,1,1,0,0)$}}; \draw (9,11) node{\small{$(3,2,2,1,0,0)$}}; \draw (9,10) node{\small{$(3,0,0,0,0,0)$}}; \draw (9,9) node{\small{$(2,1,0,0,0,0)$}}; \draw (9,8) node{\small{$(1,1,1,0,0,0)$}}; \end{tikzpicture} \end{center} \begin{lemma} \label{lma:projection-from-0-point} Let $\mathcal{K}$ be a strong $(3\mod5)$-arc in $\PG(3,5)$ of cardinality 128. Let $\varphi$ be the projection from an arbitrary 0-point in $\PG(3,5)$ to a plane not incident with that point. Then the arc $\mathcal{K}^{\varphi}$ is unique up to isomorphism and has the structure described above. \end{lemma} \begin{proof} We have seen that 0-points are incident only with lines of multiplicity 3 and 8. Hence $\mathcal{K}^{\varphi}$ has seven 8-points and twenty-four 3-points. Assume that six of the 8-points are collinear. Clearly, every 8-point is of type ($\alpha$) since it is on a line containing two 8-points (and hence the image of a 28-plane). Every other point in the projection plane is also on a line containing two 8-points; hence all 3-points in the plane are of type ($\gamma_1$) or ($\gamma_3$).
But now a line with one 8-point cannot have points of type ($\gamma_2$), which contradicts the structure of the $(3\mod5)$-arcs of size 23. Assume that five of the 8-points are collinear. Let $L$ be the line that contains them. If the two 8-points off $L$ define a line meeting $L$ in a 3-point, the proof is completed as above. Otherwise, the points off $L$ are on four lines containing two 8-points. Now it is easily checked that there exists a line with exactly one 8-point which has at least four 3-points that are not of type ($\gamma_2$). This again contradicts the structure of the $(3\mod5)$-arcs of size 23. A similar argument rules out the possibility of four collinear 8-points. In all cases, the collinear 8-points have to be of type $(\alpha)$. So are the remaining three 8-points. Now for all possible configurations of these seven points we get a 23-line without enough points of type $(\gamma_2)$. We are going to consider in full detail the case when at most three 8-points in the projection plane are collinear. Assume there exists an oval of 8-points, $X_1,\ldots,X_6$, say, and let $Y$ be the seventh 8-point. All 8-points are of type ($\alpha$); let $YX_1X_2$ be a secant to the oval through $Y$. The lines $X_1X_j$, $j=3,4,5,6$, are images of planes without 2-points. Now an external line to the oval through $Y$ is a 23-line and has at most one point of type ($\gamma_2$), a contradiction. In a similar way, we rule out the case where there exist five 8-points no three of which are collinear. We have to consider the different possibilities for the line defined by the remaining two 8-points: secant, tangent, or external line to the oval formed by the former five points together with one additional point, which has to be a 3-point. We have shown so far that there are at most three collinear 8-points. It is also clear that there exist at least two lines that contain three 8-points. We consider the case where these lines meet in a 3-point.
Denote the 8-points by $X_i, Y_i$, $i=1,2,3$, and $Z$. We also assume that $X_1,X_2,X_3$ are collinear and so are $Y_1,Y_2,Y_3$. Each of the lines $ZX_i$, $i=1,2,3$, also contains three 8-points; otherwise there exist five 8-points no three of which are collinear. Without loss of generality, the triples $Z,X_i,Y_i$, $i=1,2,3$, are collinear. Now it is clear that all the points $X_i, Y_i$ are of type ($\alpha$). Moreover, none of the lines $X_iY_j$, $i\ne j$, has 3-points of type ($\gamma_2$). Now if we consider a line through $X_3$ that does not contain other 8-points, it must contain four points of type $(\gamma_2)$. On the other hand, it intersects $X_1Y_2$ and $X_1Y_3$ in points that are not of this type, which gives a contradiction. Now we are left with only one possibility for the 8-points subject to the conditions: (i) each line contains at most three 8-points, (ii) lines incident with three 8-points meet in an 8-point, (iii) every 5-tuple of 8-points contains a collinear triple. The 8-points are the vertices of a quadrangle plus the three diagonal points. Furthermore, the diagonal points have to be of type ($\alpha$) while the vertices of the quadrangle are forced to be of type ($\beta$). This is due to the fact that through each of the vertices of the quadrangle there is a line with a single 8-point which meets the three lines defined by the diagonal points of type $(\alpha)$ in three different 3-points that are not of type ($\gamma_2$) (since a 28-plane does not have 2-points). Thus we get the picture below. \begin{center} \begin{tikzpicture}[line width=1pt, scale=0.5] \draw (0,0)--(6,0)--(3,5.2)--(0,0); \draw (3,0)--(3,5.2); \draw (0,0)--(4.5,2.6); \draw (6,0)--(1.5,2.6); \draw (6,0)--(9,0); \draw (1.5,2.6)--(3,3)--(4.5,2.6) .. controls (7,2) and (7,2) ..
(9,0); \draw[black] (0,0) circle (2mm) [fill=black]; \draw[black] (6,0) circle (2mm) [fill=black]; \draw[black] (3,5.2) circle (2mm) [fill=black]; \draw[black] (1.5,2.6) circle (2mm) [fill=black]; \draw[black] (4.5,2.6) circle (2mm) [fill=black]; \draw[black] (3,1.75) circle (2mm) [fill=black]; \draw[black] (3,3) circle (2mm) [fill=black]; \draw (0,-0.5) node{\tiny{$(\alpha)$}}; \draw (6,-0.5) node{\tiny{$(\alpha)$}}; \draw (3.5,3.3) node{\tiny{$(\alpha)$}}; \draw (1.2,3) node{\tiny{$(\beta)$}}; \draw (4.8,3) node{\tiny{$(\beta)$}}; \draw (3,5.7) node{\tiny{$(\beta)$}}; \draw (3.5,1.7) node{\tiny{$(\beta)$}}; \end{tikzpicture}\end{center} The fact that a 23-line through a point of type $(\alpha)$ contains four points of type $(\gamma_2)$ and one point of type $(\gamma_1)$ identifies the six points of type $(\gamma_1)$. \begin{center} \begin{tikzpicture}[line width=1pt, scale=0.5] \draw[gray] (0,0)--(6.5,2); \draw[gray] (0,0)--(7.8,1.2); \draw[black] (6,0)--(3,3); \draw[black] (0,0)--(3,3); \draw (0,0)--(6,0)--(3,5.2)--(0,0); \draw (3,0)--(3,5.2); \draw (0,0)--(4.5,2.6); \draw (6,0)--(1.5,2.6); \draw (3,0)--(3,5.2); \draw (6,0)--(9,0); \draw (1.5,2.6)--(3,3)--(4.5,2.6) .. controls (7,2) and (7,2) .. 
(9,0); \draw[black] (0,0) circle (2mm) [fill=black]; \draw[black] (6,0) circle (2mm) [fill=black]; \draw[gray,dotted] (4.8,5)--(4.6,1.6); \draw[gray,dotted] (5.2,-1.6)--(5.2,0.4); \draw[gray,dotted] (5.0,-1.6)--(4.5,-0.3); \draw[gray,dotted] (4.8,-1.6)--(1.5,-0.3); \draw[gray,dotted] (-1,2.5)--(0.5,1.1); \draw[gray,dotted] (-1,2.7)--(1.2,1.5); \draw[black] (3,3) circle (2mm) [fill=black]; \draw[black] (6.5,2) circle (1.5mm) [fill=white]; \draw[black] (7.8,1.2) circle (1.5mm) [fill=white]; \draw[black] (3,0.45) circle (1.5mm) [fill=white]; \draw[black] (3,0.9) circle (1.5mm) [fill=white]; \draw[black] (3.9,1.2) circle (1.5mm) [fill=white]; \draw[black] (4.6,1.4) circle (1.5mm) [fill=gray]; \draw[black] (5.1,1.6) circle (1.5mm) [fill=white]; \draw[black] (4.7,0.7) circle (1.5mm) [fill=white]; \draw[black] (5.2,0.8) circle (1.5mm) [fill=gray]; \draw[black] (5.6,0.85) circle (1.5mm) [fill=white]; \draw[black] (1.4,1.4) circle (1.5mm) [fill=gray]; \draw[black] (0.8,0.8) circle (1.5mm) [fill=gray]; \draw[black] (1.5,0) circle (1.5mm) [fill=gray]; \draw[black] (4.5,0) circle (1.5mm) [fill=gray]; \draw (0,-0.5) node{\tiny{$(\alpha)$}}; \draw (6,-0.5) node{\tiny{$(\alpha)$}}; \draw (3.5,3.3) node{\tiny{$(\alpha)$}}; \draw (4.8,5.3) node{\tiny{$(\gamma_1)$}}; \draw (5.2,-2.4) node{\tiny{$(\gamma_1)$}}; \draw (-1.7,2.6) node{\tiny{$(\gamma_1)$}}; \end{tikzpicture}\end{center} Furthermore, a line with two points of type $(\alpha)$ must also contain two points of type $(\gamma_1)$ and two points of type $(\gamma_3)$. This identifies the six 3-points of type $(\gamma_3)$. The remaining 3-points are all of type $(\gamma_2)$. This implies the suggested structure.
\end{proof} Lemma~\ref{lma:projection-from-0-point} implies that, given a non-lifted strong $(3\mod5)$-arc $\mathcal{K}$ of cardinality 128, every 0-point is incident with \begin{enumerate}[$\bullet$] \item three 8-lines of type $(3,3,1,1,0,0)$, \item four 8-lines of type $(3,2,2,1,0,0)$, \item six 3-lines of type $(3,0,0,0,0,0)$, \item twelve 3-lines of type $(2,1,0,0,0,0)$, \item six 3-lines of type $(1,1,1,0,0,0)$. \end{enumerate} Now this implies that \begin{enumerate}[$\bullet$] \item $\#$(3-points) $=3\cdot2+4\cdot1+6\cdot1 = 16$, \item $\#$(2-points) $=4\cdot2+12\cdot1 = 20$, \item $\#$(1-points) $=3\cdot2+4\cdot1+12\cdot1+6\cdot3 = 40$, \item $\#$(0-points) $=1+3\cdot1+4\cdot1+6\cdot4+12\cdot3+6\cdot2 = 80$. \end{enumerate} Furthermore, each 0-point is incident with six 33-planes, three 28-planes, eighteen 23-planes, and four 18-planes. Moreover, the number of 0-points in a 33-plane is 12, in a 28-plane 15, in a 23-plane 18, and in an 18-plane 16. This makes it possible to compute the spectrum of $\mathcal{K}$. We have \begin{eqnarray*} a_{33} &=& \frac{80\cdot6}{12}=40, \\ a_{28} &=& \frac{80\cdot3}{15}=16, \\ a_{23} &=& \frac{80\cdot18}{18}=80, \\ a_{18} &=& \frac{80\cdot4}{16}=20. \end{eqnarray*} Furthermore, every 33-, 28-, and 23-plane in $\mathcal{K}$ is unique up to isomorphism. From the above considerations we can deduce that no three 2-points are collinear. In other words, they form a 20-cap $C$. Moreover, this cap has spectrum $a_6(C)=40, a_4(C)=80, a_3(C)=20, a_0(C)=16$. It is not extendable to an elliptic quadric; in that case it would have (at least 20) tangent planes. Thus, this cap is complete and isomorphic to one of the two caps $K_1$ and $K_2$ of Abatangelo, Korchmáros and Larato \cite{ACL96}. It is not $K_2$ since the latter has a different spectrum (cf. \cite{ACL96}). Hence the 20-cap on the 2-points in $\PG(3,5)$ is isomorphic to $K_1$. Consider the complete cap $K_1$. The collineation group $G$ of $K_1$ is a semidirect product of an elementary abelian group of order 16 and a group isomorphic to $S_5$ \cite{ACL96}.
Hence $|G|=1920$. The action of $G$ on $\PG(3,5)$ splits the point set of $\PG(3,5)$ into four orbits, denoted by $O_1^P,\ldots,O_4^P$, and the set of lines into six orbits, denoted by $O_1^L,\ldots,O_6^L$. The respective sizes of these orbits are \[|O_1^P|=40, |O_2^P|=80, |O_3^P|=20, |O_4^P|=16;\] \[|O_1^L|=160, |O_2^L|=240, |O_3^L|=30, |O_4^L|=160, |O_5^L|=120, |O_6^L|=96.\] The corresponding point-by-line orbit matrix $A=(a_{ij})_{4\times6}$, where $a_{ij}$ is the number of points from the $i$-th point orbit incident with an arbitrary line from the $j$-th line orbit, is the following: \[A=\left(\begin{array}{cccccc} 3 & 1 & 4 & 1 & 2 & 0 \\ 3 & 4 & 0 & 2 & 2 & 5 \\ 0 & 1 & 2 & 2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 1 \end{array} \right).\] Set $w=(w_1,w_2,w_3,w_4)$. We look for solutions of the equation $wA\equiv3\vek{j}\pmod{5}$, where $\vek{j}$ is the all-one vector, subject to the conditions $w_i\le3$ for all $i=1,2,3,4$. The set of all solutions of the congruence is given by \begin{multline*} \{w=(w_1,w_2,w_3,w_4)\mid w_i\in\{0,\ldots,4\}, \\ w_2\equiv1-w_1\pmod{5}, w_3\equiv4-2w_1\pmod{5}, w_4=3\}. \end{multline*} There exist two solutions that satisfy $w_i\le3$: $w=(3,3,3,3)$ and $w=(1,0,2,3)$. The first one yields the trivial $(3\mod5)$-arc formed by three copies of the whole space. The second one gives the desired arc of size 128. It should be noted that the weight vectors $(0,3,2,4)$, $(1,2,0,4)$, $(2,1,3,4)$, and $(3,0,1,4)$ yield strong $(4\mod 5)$-arcs of cardinalities 344, 264, 284, and 204, respectively, that are not lifted. \section{Strong $\left(\frac{q+1}{2} \mod{q}\right)$-arcs from quadrics and the arcs of size 143 and 168} For an arbitrary odd prime power $q$ and an integer $r\ge 2$, let $\mathcal Q$ be a quadric of $\PG(r, q)$ and let $F$ be the quadratic form defining $\mathcal Q$. This means that a point $P(x_0,\ldots,x_{r})$ of $\PG(r, q)$ belongs to $\mathcal Q$ whenever $F(x_0,\ldots, x_{r})=0$.
The points of $\PG(r, q)$ outside $\mathcal Q$ are partitioned into two point classes, say $\mathcal{P}_1$ and $\mathcal{P}_2$. Indeed, if $P(x_0, \ldots, x_{r})$ is a point of $\PG(r, q) \setminus \mathcal Q$, then $P$ belongs to $\mathcal{P}_1$ or $\mathcal{P}_2$, according as $F(x_0,\ldots, x_{r})$ is a non-square or a square in $\mathbb{F}_q$. Now we define the arcs $\mathcal{K}_1$ and $\mathcal{K}_2$ in the following way: \begin{enumerate}[$\bullet$] \item $\mathcal{K}_1$: for a point $P$ of $\PG(r, q)$ set \begin{equation} \label{eq:F1} \mathcal{K}_1(P)=\left\{ \begin{array}{cl} \frac{q+1}{2} & \text{ if } P\in\mathcal{Q}, \\ 1 & \text{ if } P \in \mathcal{P}_1, \\ 0 & \text{ if } P \in \mathcal{P}_2. \end{array}\right. \end{equation} \item $\mathcal{K}_2$: for a point $P$ of $\PG(r, q)$ set \begin{equation} \label{eq:F2} \mathcal{K}_2(P)=\left\{ \begin{array}{cl} \frac{q+1}{2} & \text{ if } P\in\mathcal{Q}, \\ 0 & \text{ if } P \in \mathcal{P}_1, \\ 1 & \text{ if } P \in \mathcal{P}_2. \end{array} \right.\end{equation} \end{enumerate} \noindent The following result is well-known. \begin{proposition}\label{prop} Let $f(x)=ax^2+bx+c$, where $a,b,c\in\mathbb{F}_q$, $a\ne0$, $q$ odd. Let $\mathbb{F}_q=\{\alpha_0,\alpha_1,\ldots,\alpha_{q-1}\}$ and denote by $S$ the list of the following elements from $\mathbb{F}_q$: \[ a, f(\alpha_0),f(\alpha_1),\ldots,f(\alpha_{q-1}).\] Then \begin{enumerate}[(a)] \item if $f(x)$ has two distinct roots in $\mathbb{F}_q$, then the list $S$ contains two zeros, $(q-1)/2$ squares and $(q-1)/2$ non-squares; \item if $f(x)$ has one double root in $\mathbb{F}_q$, then $S$ contains a zero and $q$ squares, or a zero and $q$ non-squares; \item if $f(x)$ is irreducible over $\mathbb{F}_q$, then $S$ contains $(q+1)/2$ squares and $(q+1)/2$ non-squares. \end{enumerate} \end{proposition} \begin{theorem} \label{thm:quadrics} Let $\mathcal{K}_1$ and $\mathcal{K}_2$ be the arcs defined in (\ref{eq:F1}) and (\ref{eq:F2}), respectively.
Then $\mathcal{K}_i$ is a $\left(\frac{q+1}{2} \mod{q}\right)$-arc of $\PG(r, q)$, $i = 1, 2$. Moreover, if $\mathcal{Q}$ is non-degenerate, then neither arc is lifted. \end{theorem} \begin{proof} Let $\ell$ be a line of $\PG(r, q)$; then $\mathcal{Q} \cap \ell$ is a quadric of $\ell$. From Proposition~\ref{prop}, it follows that \begin{align*} \mathcal{K}_i(\ell) = \begin{cases} 2 \cdot \frac{q+1}{2} + \frac{q-1}{2} & \mbox{ if } |\ell \cap \mathcal{Q}| = 2, \\ \frac{q+1}{2} + q & \mbox{ if } |\ell \cap \mathcal{Q}| = 1 \mbox{ and } |\ell \cap \mathcal{P}_i| = q, \\ \frac{q+1}{2} & \mbox{ if } |\ell \cap \mathcal{Q}| = 1 \mbox{ and } |\ell \cap \mathcal{P}_i| = 0, \\ \frac{q+1}{2} & \mbox{ if } |\ell \cap \mathcal{Q}| = 0, \\ \frac{q+1}{2} \cdot (q+1) & \mbox{ if } \ell \subset \mathcal{Q}. \end{cases} \end{align*} In each case $\mathcal{K}_i(\ell) \equiv \frac{q+1}{2} \pmod{q}$, so $\mathcal{K}_i$ is a $\left(\frac{q+1}{2} \mod{q}\right)$-arc of $\PG(r, q)$, $i = 1, 2$. If $\mathcal{Q}$ is non-degenerate, then through every point of $\PG(r, q)$ there exists a line $m$ that is secant to $\mathcal{Q}$. By construction, the line $m$ has two $\frac{q+1}{2}$-points, $\frac{q-1}{2}$ $1$-points and $\frac{q-1}{2}$ $0$-points. Hence $\mathcal{K}_i$ is not lifted. \end{proof} \begin{corollary} If $r$ is odd, then \begin{align*} |\mathcal{K}_i| = \begin{cases} \frac{q+1}{2} \cdot \frac{( q^{\frac{r+1}{2}} + 1)(q^{\frac{r-1}{2}} - 1)}{q-1} + \frac{q^r + q^{\frac{r-1}{2}}}{2} & \mbox{ if } \mathcal{Q} \mbox{ is elliptic},\\ \frac{q+1}{2} \cdot \frac{( q^{\frac{r-1}{2}} + 1)(q^{\frac{r+1}{2}} - 1)}{q-1} + \frac{q^r - q^{\frac{r-1}{2}}}{2} & \mbox{ if } \mathcal{Q} \mbox{ is hyperbolic}. \end{cases} \end{align*} If $r$ is even, then \begin{align*} |\mathcal{K}_1| = \frac{q+1}{2} \cdot \frac{(q^r - 1)}{q-1} + \frac{q^r - q^{\frac{r}{2}}}{2}, \\ |\mathcal{K}_2| = \frac{q+1}{2} \cdot \frac{(q^r - 1)}{q-1} + \frac{q^r + q^{\frac{r}{2}}}{2}.
\end{align*} \end{corollary} \begin{remark} If the quadric $\mathcal{Q}$ is degenerate, it is not difficult to see that the arc $\mathcal{K}_i$, $i = 1, 2$, is lifted. If $\mathcal{Q}$ is a non-degenerate quadric of $\PG(r, q)$, then $\mathcal{K}_1$ and $\mathcal{K}_2$ are projectively equivalent for $r$ odd, but not for $r$ even. On the other hand, if $r$ is odd, there are two distinct classes of non-degenerate quadrics, namely the hyperbolic quadric and the elliptic quadric. Therefore in all cases Theorem~\ref{thm:quadrics} gives rise to two distinct examples of non-lifted $\left(\frac{q+1}{2} \mod{q}\right)$-arcs of $\PG(r, q)$. \end{remark} \subsection{The arcs of size 143 and 168} In \cite{KLR22}, the following two strong non-lifted $(3\mod5)$-arcs in $\PG(3,5)$ were constructed by a computer search. Their respective spectra and point-multiplicity distributions are: \[|\mathcal{F}_1|=143,\ \ \ a_{18}(\mathcal{F}_1)=26, a_{23}(\mathcal{F}_1)=0, a_{28}(\mathcal{F}_1)=65, a_{33}(\mathcal{F}_1)=65;\] \[\lambda_0(\mathcal{F}_1)=65,\lambda_1(\mathcal{F}_1)=65,\lambda_2(\mathcal{F}_1)=0, \lambda_3(\mathcal{F}_1)=26,\] and \[|\mathcal{F}_2|=168,\ \ \ a_{28}(\mathcal{F}_2)=60, a_{33}(\mathcal{F}_2)=60, a_{43}(\mathcal{F}_2)=36;\] \[\lambda_0(\mathcal{F}_2)=60,\lambda_1(\mathcal{F}_2)=60,\lambda_2(\mathcal{F}_2)=0, \lambda_3(\mathcal{F}_2)=36.\] In addition, $|\Aut(\mathcal{F}_1)|=62400$ and $|\Aut(\mathcal{F}_2)|=57600$. These arcs can be recovered from Theorem~\ref{thm:quadrics}. Indeed, if $\mathcal{Q}$ is an elliptic quadric of $\PG(3,5)$, then $\mathcal{K}_1$ is a non-lifted $(3 \mod{5})$-arc of $\PG(3, 5)$ of size $143$, whereas if $\mathcal{Q}$ is a hyperbolic quadric of $\PG(3, 5)$, then $\mathcal{K}_1$ is a non-lifted $(3 \mod{5})$-arc of $\PG(3, 5)$ of size $168$.
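The sizes 143 and 168, and the $(3 \bmod 5)$ line condition, can also be verified directly by computer. The following Python sketch is an editorial aid and was not part of the original computer search; the two quadratic forms used below are standard representatives of the elliptic and hyperbolic quadrics of $\PG(3,5)$, which is an assumption of the sketch. It builds the points and lines of $\PG(3,5)$, forms the arc $\mathcal{K}_1$ of (\ref{eq:F1}) for each quadric, and checks every line multiplicity:

```python
# Editorial verification sketch (not part of the original computer search).
# The quadratic forms below are assumed standard representatives of the
# elliptic and hyperbolic quadrics of PG(3,5).
from itertools import product

q = 5
SQUARES = {x * x % q for x in range(1, q)}  # nonzero squares mod 5: {1, 4}

def normalize(v):
    """Scale a nonzero vector over GF(5) so its first nonzero entry is 1."""
    i = next(j for j in range(4) if v[j])
    inv = pow(v[i], q - 2, q)               # inverse via Fermat's little theorem
    return tuple(x * inv % q for x in v)

# The 156 points of PG(3,5), one normalized representative each.
PTS = sorted({normalize(v) for v in product(range(q), repeat=4) if any(v)})

# The 806 lines of PG(3,5), each stored as the frozenset of its 6 points.
LINES = set()
for a_i in range(len(PTS)):
    for b_i in range(a_i + 1, len(PTS)):
        a, b = PTS[a_i], PTS[b_i]
        span = set()
        for s in range(q):
            for t in range(q):
                v = tuple((s * a[k] + t * b[k]) % q for k in range(4))
                if any(v):
                    span.add(normalize(v))
        LINES.add(frozenset(span))

def K1(F):
    """The arc K_1 of (eq:F1): (q+1)/2 = 3 on the quadric,
    1 on the non-square class, 0 on the square class."""
    arc = {}
    for p in PTS:
        f = F(p) % q
        arc[p] = 3 if f == 0 else (0 if f in SQUARES else 1)
    return arc

elliptic   = lambda p: p[0]*p[1] + p[2]*p[2] + p[2]*p[3] + p[3]*p[3]
hyperbolic = lambda p: p[0]*p[1] + p[2]*p[3]

for F in (elliptic, hyperbolic):
    arc = K1(F)
    # every line must have multiplicity congruent to 3 mod 5
    assert all(sum(arc[p] for p in L) % q == 3 for L in LINES)
    print(sum(arc.values()))                # 143, then 168
```

For $r$ odd the two off-quadric classes have equal size, so the totals do not depend on which class receives multiplicity 1.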
\section{Further examples of $(t \mod q)$-arcs} A set of type $(m, n)$ in $\PG(r, q)$ is a set $\mathcal{S}$ of points such that every line of $\PG(r, q)$ contains either $m$ or $n$ points of $\mathcal{S}$, $m < n$, and both values occur. Assume $m > 0$. Then the only sets of type $(m, n)$ that are known exist in $\PG(2, q)$, $q$ square, and are such that $n = m + \sqrt{q}$. In particular, sets of type $(1,1+ \sqrt{q})$ contain either $q+ \sqrt{q} + 1$ points, in which case they are Baer subplanes, or $q\sqrt{q} + 1$ points, in which case they are known as {\em unitals}. For more details on sets of type $(m, n)$ in $\PG(2, q)$ see \cite{PR} and references therein. If $\mathcal{S}$ is a set of type $(m, n)$ in $\PG(r, q)$, $r > 2$, then necessarily $q$ is an odd square, $m = (\sqrt{q} - 1)^2/2$, $n = m + \sqrt{q}$ and $|\mathcal{S}| = \frac{1 + \frac{q^r-1}{q-1}(q - \sqrt{q}) \pm \sqrt{q}^r}{2}$, see \cite{TS}. However, no such set is known to exist for $r > 2$. \begin{theorem} Let $\mathcal{S}$ be a set of type $(m, m + \sqrt{q})$ in $\PG(r, q)$, $q$ square. Let $\mathcal{K}$ be the arc of $\PG(r, q)$ such that $\mathcal{K}(P) = \sqrt{q}$, if $P \in \mathcal{S}$ and $\mathcal{K}(P) = 0$, if $P \notin \mathcal{S}$. Then $\mathcal{K}$ is an $(m\sqrt{q} \mod q)$-arc of $\PG(r, q)$. \end{theorem} \begin{proof} Let $\ell$ be a line of $\PG(r, q)$. If $|\ell \cap \mathcal{S}| = m$, then $\mathcal{K}(\ell) = m \sqrt{q}$, whereas if $|\ell \cap \mathcal{S}| = m + \sqrt{q}$, then $\mathcal{K}(\ell) = m \sqrt{q} + q$. \end{proof} In $\PG(r, q)$, $q$ square, let $\mathcal{H}$ be a Hermitian variety of $\PG(r, q)$, i.e., the variety defined by a Hermitian form of $\PG(r, q)$. It is well-known that a line of $\PG(r, q)$ has $1$, $\sqrt{q}+1$ or $q+1$ points in common with $\mathcal{H}$. Let $\mathcal{K'}$ be the arc of $\PG(r, q)$ such that $\mathcal{K'}(P) = \sqrt{q}$, if $P \in \mathcal{H}$ and $\mathcal{K'}(P) = 0$, if $P \notin \mathcal{H}$. \begin{theorem} $\mathcal{K'}$ is a $(\sqrt{q} \mod q)$-arc of $\PG(r, q)$.
Moreover, if $\mathcal{H}$ is non-degenerate, then $\mathcal{K'}$ is not lifted. \end{theorem} \begin{proof} Let $\ell$ be a line of $\PG(r, q)$. Then \begin{align*} \mathcal{K'}(\ell) = \begin{cases} \sqrt{q} & \mbox{ if } |\ell \cap \mathcal{H}| = 1, \\ \sqrt{q} + q & \mbox{ if } |\ell \cap \mathcal{H}| = \sqrt{q} + 1, \\ \sqrt{q}(1 + q) & \mbox{ if } |\ell \cap \mathcal{H}| = q + 1. \end{cases} \end{align*} If $\mathcal{H}$ is non-degenerate, then through every point of $\PG(r, q)$ there exists a line $m$ such that $|\mathcal{H} \cap m| = \sqrt{q}+1$. By construction, the line $m$ has $\sqrt{q}+1$ $\sqrt{q}$-points and $q-\sqrt{q}$ $0$-points. Hence $\mathcal{K'}$ is not lifted. \end{proof} \end{document}
What mathematical theory is required for high frequency trading?

I am an applied math postdoc and I have been presented with the option of leaving academia to work in high frequency trading. I wanted to get a feel for the field and the theory underlying it, so I scanned through several books in the library, and it seems there are almost no books on the mathematical theory of this field. All the books I have looked at contain lots of explanations of the various aspects of trading such as 'market participants', 'limit order books', 'market microstructure', etc., which of course are very important to know, and some relatively basic math on things like 'statistical arbitrage strategies'. But where is the rigorous mathematical underpinning? I would have expected to find books containing the same type of theory as in books on mathematical finance, i.e. a deep treatment of measure theory and probability theory, mathematical statistics, stochastic processes, etc. Why are these topics not covered in HFT books? Is advanced math not needed? If this is the case, what are the main skills needed for a high frequency trader?

finance-mathematics high-frequency research papers

sonicboom

"high frequency trading" ... you mean gambling, right? – vsz Jun 17 at 4:30

@vsz You've obviously never seen a HFT p/l graph. – wildbunny Jun 17 at 9:38

@vsz when an individual tries to do it on their robinhood app or whatever, it's gambling. When billion-dollar NY trading firms pay you $500k/yr to do it, it's more like a career. – mbrig Jun 17 at 18:21

Hah! There is no such thing as the "rigorous mathematical underpinning" of high frequency trading - because HFT, like all trading, is not primarily a mathematical endeavour.
It's true that many people who work in HFT have a mathematical background, but that's because the tools of applied math and statistics are useful when analysing the large amounts of data that are generated by HFT activity. So the math that is useful to know is linear algebra, statistics, time series and optimisation (to some extent it's useful to be familiar with machine learning, which encompasses all of the above). Don't go into HFT thinking that you will primarily be doing advanced math. If you are lucky, you will mostly be doing data analysis. More likely, you will spend a lot of time cleaning data, writing code, and monitoring trading systems.

Chris Taylor

amazon.co.uk/…. What do you make of that book then? – Permian Jun 16 at 9:04

It's a book about HFT written by an academic. Hard to say without reading it, but I would be surprised if it has much relevance to the day-to-day practice of most HFT firms. – Chris Taylor Jun 16 at 10:55

So how does a high frequency trader relate to a quantitative analyst? I was told that these are two separate roles, and that HF traders may work in collaboration with quants, but I imagine a quant also spends a lot of time writing code and monitoring trading systems. So what differentiates the two roles? – sonicboom Jun 17 at 7:32

@sonicboom In my experience, quants were mostly focused on getting models right, while HF traders were mostly focused on getting positive P/L :) – Artur Biesiadowski Jun 17 at 14:49

I would argue, taking a note from John von Neumann, that quantitative finance lacks rigorous underpinnings. Von Neumann warned in 1953 that many things that look like proofs in economics and finance depended on problems that were yet to be solved in mathematics, and that economists were assuming solutions into existence.
As the problems were solved in math, economists did not go back and check to see if their solutions matched. Let me give you an example of why it is a problem. Quantitative finance assumes, in the general case, though the actual practice in the wild varies, that the distributions of returns are either normal or log-normal. Let us assume that wealth at a given point in time is defined as $w=p\times{q}$, where $p$ is the price, and $q$ is the quantity of shares. If we assume that $q_t=q_{t+1}$, then the return at time $t$ is $$r_t=\frac{p_{t+1}}{p_t}-1.$$ That makes returns a ratio distribution. If we make the assumptions that are standard in mean-variance models, namely many buyers and sellers in a double auction, then the rational behavior of the actors at each point in time $t$ is to bid their expectation. The limit book converges to normality as the number of actors becomes very large. I would note that this requirement is not necessary; far weaker assumptions could be used, but we would be here for thirty to forty pages. So prices are normally distributed, and returns are a function of prices, which implies that the distribution of returns is the distribution of a statistic, whose distribution should be derived from the distribution of the data. If we assume that prices converge around an equilibrium and treat the equilibrium as $(0,0)$ in error space, then we can integrate around that point. The ratio of two centered normal distributions is the Cauchy distribution, which has no first moment. Mean-variance finance is impossible. Indeed, right now, I am trying to put rigor around quantitative finance, but it is very difficult. To see it a bit more directly, if you transform the data into polar coordinates you will note that the relationship between angles and returns is $\tan(\theta_t)=r_t$. It follows that $\theta_t=\arctan(r_t)$. The arctangent is the kernel of the cumulative distribution function of the Cauchy distribution.
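The Cauchy claim is easy to see numerically. Here is a quick stdlib-only simulation (a minimal sketch; the sample size, seed, and tolerances are arbitrary choices):

```python
# Sketch: the ratio of two independent centered normals is standard Cauchy,
# so its quartiles sit at -1 and +1 even though it has no mean, and the
# arctangent of the ratio is uniform on (-pi/2, pi/2).
import math
import random
import statistics

random.seed(42)
n = 200_000
ratio = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

r = sorted(ratio)
q25, q75 = r[n // 4], r[3 * n // 4]
print(q25, q75)                              # near the Cauchy quartiles -1, +1

theta = [math.atan(x) for x in ratio]        # the "arctangent kernel" above
print(statistics.mean(theta), statistics.pstdev(theta))  # near 0 and pi/sqrt(12)
```

Rerunning `statistics.mean(ratio)` across different seeds shows the sample mean refusing to settle down, which is the "no first moment" point in practice.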
You can quickly arrive at obvious disproofs of the underlying basis for the economic proofs. Do note that I vastly oversimplified the real world, since disproof by counter-example doesn't require the detailed case: one small subset is sufficient, and the rest wouldn't remove the cause. Quantitative finance violates the laws of general summation, in the general case. As a mathematician, dig deep. I have several papers out right now trying to add rigor, but it is hard to see how that will work out. I am proposing a new calculus for options pricing. High-speed trading is a statistical concept, and a key element of statistical theory that most people fail to notice is the absence of uniqueness theorems. There are a few non-existence proofs available, but generating THE solution isn't usually going to happen. If I wanted to ground high-frequency trading in sound math, I would avoid Kolmogorov (pace). I actually happen to have a copy of Kolmogorov's original work on probability about three meters from me at the moment, but I believe it will make your work more difficult. I would instead turn to Bruno de Finetti's coherence principle. You can derive Kolmogorov's axioms from de Finetti's coherence principle. Coherence is important because it is possible to wipe out a market maker who fails to use coherent measures. Generally speaking, Frequentist methods give rise to incoherent probabilities and incoherent prices. I have also worked out the conditions where a neural network will generate incoherent trading instructions (too long for this post). If you want greater rigor, then start with Leonard Jimmie Savage's "Foundations of Statistics." Again, the threat is incoherence if you do not. Another interesting grounding is Cox's 1961 book "The Algebra of Probable Inference." The main skill is related to data mining.
It may not actually be required that you are good at it or that you use sound methods, because it may be the case that the people judging your work do not know calculus or statistics beyond t-tests. That is not a criticism, so much as a deep concern for soundness. Having spent a good chunk of my life inside financial institutions, I have more than a passing concern for the black-box system that is in place. On the assumption that you want to do a very good job, what I would do is work out the determinants of supply and the determinants of demand. I would factor the changes and risks to dividends, mergers, and bankruptcy. I would have to include liquidity costs. It would make it more like a very boring supply and demand model. It would likely not be very fancy and it would almost certainly lack pizzazz. Boring is awesome if it makes you money. EDIT I need to give thanks to @Acccumulation because I have been looking at this problem too long. Let me be a bit more rigorous. Let observed return $r$ be defined as $$r=r^*+\gamma,$$ where $\gamma$ is a random variable and $r^*$ is the equilibrium return and the center of location. Also, let observed return be defined as $$r=\frac{p_{t+1}}{p_t}.$$ Let equilibrium return be defined as $$r^*=\frac{p_{t+1}^*}{p_t^*}.$$ Let us define prices with respect to equilibrium prices using Wold's decomposition theorem as $$p_t=p^*_t+\epsilon_t,$$ and $$p_{t+1}=p_{t+1}^*+\epsilon_{t+1}.$$ So, $$\frac{p_{t+1}^*+\epsilon_{t+1}}{p^*_t+\epsilon_t}=\frac{p_{t+1}^*}{p_t^*}+\gamma.$$ It follows that $$\gamma=\frac{p_{t+1}^*+\epsilon_{t+1}}{p^*_t+\epsilon_t}-\frac{p_{t+1}^*}{p_t^*}.$$ $$\gamma\approx\frac{\epsilon_{t+1}}{\epsilon_t}.$$ The author acknowledges that in the general case, the ratio of two normal random variates shifted by a price is not a Cauchy distribution but rather a Cauchy distribution scaled by $(1+\eta)$, where $\eta$ is a finite variance distribution. In this case, $\eta$ would become vanishingly small in effect.
Out of equilibrium, that would not be true. Note that $\epsilon$ is normal as described above, centered on zero. Also note that in the general case $\sigma_{t+1}>\sigma_t$, or there would be a violation of rationality. This implies, in the general case, price heteroskedasticity.

– Dave Harris (edited Jun 18 at 0:08)

Comments:

– crow (Jun 17 at 13:09): That's very interesting. I encountered the Cauchy distribution in my Hawkes-process trade-timing analysis, where if you fit an exp power-law approximation Hawkes model to a sequence of trade times of SPY, you get a critical Hawkes process where the branching ratio is exactly equal to 1. If you forecast the next point of occurrence and compare it with the actual, you get a Cauchy distribution whose variance is half that of the mean inter-trade time. I did a lot of work on it, some posts on here too; not sure what to make of it.

– Acccumulation (Jun 17 at 15:22): "If we assume that prices converge around an equilibrium and treat the equilibrium as (0,0) in error space, then we can integrate around that point. The ratio of two centered normal distributions is the Cauchy distribution" But the return isn't the ratio of the errors, it's the ratio of the prices, and the prices aren't centered at zero.

– Dave Harris (Jun 18 at 0:08): @Acccumulation thanks, I edited it.

– Homunculus Reticulli (Jun 18 at 10:20): Very good (and detailed) answer. I wish S.E. provided affordances to bookmark answers.

– Acccumulation (Jun 18 at 14:36): @HomunculusReticulli You can bookmark the question, and if you hit the "share" link at the bottom left of the answer, you'll get a url that links directly to the question.

Optimal stochastic control. Hamilton-Jacobi-Bellman.

– crow
\begin{document} \begin{abstract} It is well known that the completeness theorem for $\mathrm{L}_{\omega_1\omega}$ fails with respect to Tarski semantics. Mansfield showed that it holds for $\mathrm{L}_{\infty\infty}$ if one replaces Tarski semantics with Boolean valued semantics. We use forcing to improve his result in order to obtain a stronger form of Boolean completeness (but only for $\mathrm{L}_{\infty\omega}$). Leveraging on our completeness result, we establish the Craig interpolation property and a strong version of the omitting types theorem for $\mathrm{L}_{\infty\omega}$ with respect to Boolean valued semantics. We also show that a weak version of these results holds for $\mathrm{L}_{\infty\infty}$ (if one leverages instead on Mansfield's completeness theorem). Furthermore we bring to light (or in some cases just revive) several connections between the infinitary logic $\mathrm{L}_{\infty\omega}$ and the forcing method in set theory. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{Section0} This paper revives and brings to light several connections existing between infinitary logics and forcing. The main objective of the paper is to show that boolean valued semantics is the right semantics for infinitary logics; more precisely: the class of boolean valued models with the mixing property (e.g. sheaves on compact extremally disconnected spaces by the results of \cite{PIEVIA19}) provides a complete semantics for $\mathrm{L}_{\infty\omega}$ with respect to the natural sequent calculus for infinitary logics (obtained by trivially adapting to this logic the inference rules of Gentzen's sequent calculus for first order logic, see Section \ref{subsec:gentzencalc} below)\footnote{We note that Mansfield \cite{MansfieldConPro} proves that the larger class of boolean valued models (e.g. presheaves on compact extremally disconnected spaces by the results of \cite{PIEVIA19}) gives a complete semantics for the logic $\mathrm{L}_{\infty\infty}$.
Our completeness result is weaker than Mansfield's (as it applies only to $\mathrm{L}_{\infty\omega}$) but also stronger than his (as it provides completeness with respect to a much better behaved class of models, e.g. sheaves instead of presheaves on compact extremally disconnected topological spaces).}. Leveraging on our completeness result we are able to prove the natural form of Craig's interpolation theorem for our deductive system for $\mathrm{L}_{\infty \omega}$, as well as a natural generalization to $\mathrm{L}_{\infty\omega}$ with respect to boolean valued semantics of the standard omitting types theorems which can be proved for first order logic with respect to Tarski semantics. We are also able to prove weaker forms of these results for $\mathrm{L}_{\infty\infty}$, in this latter case appealing to a completeness result of Mansfield. A central role in our analysis of $\mathrm{L}_{\infty\omega}$ is played by the notion of consistency property. Roughly, a consistency property for a signature $\tau$ is a partial order whose elements are consistent families of infinitary $\tau$-formulae ordered by reverse inclusion. The clauses for being a consistency property in signature $\tau$ guarantee that a generic filter for such a forcing notion produces a maximal set of consistent $\tau$-formulae, which can then be turned into a Tarski $\tau$-structure (a term model) realizing each of them. However, generic filters do not exist in the standard universe of set theory $V$, hence such Tarski $\tau$-structures do not exist in $V$ either, but just in a generic extension of $V$; on the other hand their semantics can instead be described in $V$ by means of boolean valued models, e.g. forcing.
Keeping this idea in mind, we can show that: \begin{itemize} \item any forcing notion is forcing equivalent to a consistency property for $\mathrm{L}_{\infty\omega}$; \item every consistency property defines an ``elementary class'' of boolean valued models for $\mathrm{L}_{\infty\omega}$ and conversely; \item most of the standard results for first order logic transfer to infinitary logic if we replace Tarski semantics with boolean valued semantics; e.g. in this paper we show that this is the case for the completeness theorem, Craig's interpolation, Beth definability, the omitting types theorem (on the other hand we can show that compactness fails for boolean valued semantics also for $\mathrm{L}_{\infty\omega}$). \end{itemize} Some caveats and further comments are in order. \begin{itemize} \item Sections \ref{ForConPro} and \ref{sec:for=conprop} require a basic familiarity with the forcing method (at the level of Kunen's book \cite{KUNEN}). The rest of the paper can be read by people with little or no knowledge of the forcing method. \item Most of our results generalize to $\mathrm{L}_{\infty\omega}$ (and in some cases also to $\mathrm{L}_{\infty\infty}$), with respect to boolean valued semantics, results and proofs that Keisler obtains for $\mathrm{L}_{\omega_1\omega}$ with respect to Tarski semantics \cite{KeislerInfLog}. Roughly, Keisler's proofs are divided into two parts: the first designs a suitable countable consistency property associated to a given countable $\mathrm{L}_{\omega_1\omega}$-theory $T$ of interest; the second appeals to Baire's category theorem, taking advantage of the considerations to follow. Consistency properties are designed so that the Tarski structure induced by a maximal filter $F$ on them (seen as partial orders) realizes a certain formula $\phi$ if and only if $F$ meets a dense set $D_\phi$ associated to $\phi$.
If one focuses on countable theories $T$ for $\mathrm{L}_{\omega_1\omega}$, one can appeal to Baire's category theorem to find a maximal filter $F$ for the associated consistency property: $F$ meets the countable family of dense sets associated to the formulae in $T$. This is what Keisler's proofs usually do. However, if one considers an arbitrary $\mathrm{L}_{\infty\omega}$-theory $T$, one could drop the use of Baire's category theorem and replace it by using forcing to describe in $V$, as a boolean valued model, the Tarski structure that Keisler's method would produce in a forcing extension where $T$ becomes a countable $\mathrm{L}_{\omega_1\omega}$-theory. This is what we will do here. \item One has to pay attention to our formulation of Craig's interpolation property (e.g. Thm. \ref{thm:craigint}). We prove our result with respect to the natural deduction calculus for $\mathrm{L}_{\infty\omega}$; for this calculus it is known that the completeness theorem with respect to Tarski semantics fails (we give an explicit counterexample in Fact \ref{fac:tarskiinc}). Hence it is no surprise that the semantic version of Craig's interpolation fails as well with respect to Tarski semantics (see \cite[Thm. 3.2.4]{MalitzThesis}). On the other hand Malitz has proved an interpolation theorem for $\mathrm{L}_{\infty\infty}$ with respect to Tarski semantics using another deductive system for $\mathrm{L}_{\infty\omega}$ (introduced by Karp) which is complete for Tarski semantics \cite{MalitzThesis}. However, we believe that our deductive system is better than Malitz's, since the notion of proof for our system is independent of the model of set theory we work with; for example our deductive system when restricted to $\mathrm{L}_{\infty\omega}$ is forcing invariant.
Even more, for sets $\Gamma$, $\Delta$ of $\mathrm{L}_{\infty\omega}$-formulae in $V$, ``$\Gamma$ proves $\Delta$'' is a provably $\Delta_1$-property in the parameters $\Gamma, \Delta$ in any model of $\ensuremath{\mathsf{ZFC}}$ to which $\Gamma$ and $\Delta$ belong: note that the existence of a proof is expressible by a $\Sigma_1$-statement while being true in any boolean valued model is expressible by a $\Pi_1$-statement (according to the Levy hierarchy as in \cite[Pag. 183]{JECHST}). In particular, ``$\Gamma$ proves $\Delta$'' holds in $V$ according to our deductive system if and only if it holds in any (equivalently, some) forcing extension of $V$. This fails badly for Malitz's deductive system, e.g. there is a sentence $\phi$ such that ``$\phi$ is valid according to Malitz's deductive system'' holds in some generic extension of $V$, but fails in $V$ and conversely. \item The fact that forcing and consistency properties are closely related concepts is implicit in the work of many; for example we believe this is behind Jensen's development of $\mathrm{L}$-forcing \cite{JENLFORC} and the spectacular proof by Asper\'o and Schindler that $\ensuremath{\text{{\sf MM}}}^{++}$ implies Woodin's axiom $(*)$ \cite{ASPSCH(*)} (see also \cite{viale2021proof}, which gives a presentation of their proof more in line with the spirit of this paper); it also seems clear that Keisler is to a large extent aware of this equivalence in his book on infinitary logics \cite{KeislerInfLog}, as well as Mansfield in his paper proving the completeness theorem for $\mathrm{L}_{\infty\infty}$ using boolean valued semantics \cite{MansfieldConPro}. On the other hand we have not been able to find anywhere an explicit statement that every complete boolean algebra is the boolean completion of a consistency property (e.g. Thm. \ref{thm:equivforcconsprop}) even if the proof of this theorem is rather trivial once the right definitions are given.
\item While some of the results we present in this paper were known at least to some extent (e.g. the completeness theorem via boolean valued semantics for $\mathrm{L}_{\infty\infty}$ --- see Mansfield \cite{MansfieldConPro} and independently Karp \cite{Karp}), we believe that this paper gives a unified presentation of the scattered theorems connecting infinitary logics to boolean valued semantics which we have been able to trace in the literature. Furthermore, we add to the known results some original contributions, e.g. Craig's interpolation property, Beth's definability property, the omitting types theorem, the equivalence of forcing with consistency properties, the completeness theorem for $\mathrm{L}_{\infty\omega}$ with respect to the semantics produced by sheaves on compact extremally disconnected spaces. \end{itemize} The paper is organized as follows: \begin{itemize} \item \ref{sec:inflog} introduces the basic definitions for the infinitary logics $\mathrm{L}_{\kappa\lambda}$, including their boolean valued semantics and a Gentzen-style proof system for them. \item \ref{sec:mainmodthres} states the main model theoretic results we obtain for $\mathrm{L}_{\infty\omega}$ and $\mathrm{L}_{\infty\infty}$. \item \ref{sec:consprop} introduces the key notion of consistency property on which we leverage to prove all the main results of the paper. \item \ref{ForConPro} shows that we can use consistency properties to produce boolean valued models with the mixing property (e.g. sheaves on extremally disconnected compact spaces) for any consistent $\mathrm{L}_{\infty\omega}$ theory. \item \ref{sec:mansmodexthm} gives a proof rephrased in our terminology of the main technical result of Mansfield on this topic, e.g. that any consistency property for $\mathrm{L}_{\infty\infty}$ gives rise to a corresponding boolean valued model (which however may not satisfy the mixing property).
\item \ref{sec:proofmodthres} leverages on \ref{ForConPro} and \ref{sec:mansmodexthm} to prove the theorems stated in \ref{sec:mainmodthres}. \item \ref{sec:for=conprop} shows that any forcing notion can be presented as the boolean completion of a consistency property for $\mathrm{L}_{\infty\omega}$. \item The Appendix \ref{sec:app} collects some counterexamples to properties which do not transfer from first order logic to infinitary logics (for example the failure of boolean compactness), as well as the proof of some basic facts regarding boolean valued models. \item We close the paper with a brief list of open problems and comments. \end{itemize} \section{The infinitary logics $\mathrm{L}_{\kappa \lambda}$}\label{sec:inflog} The set of formulae for a language in first order logic is constructed by induction from atomic formulae by taking negations, finite conjunctions and finite quantifications. $\mathrm{L}_{\kappa \lambda}$ generalizes both ``finites'' to cardinals $\kappa$ and $\lambda$, allowing disjunctions and conjunctions of size less than $\kappa$ and simultaneous universal quantification of a string of variables of size less than $\lambda$. Our basic reference on this topic is V{\"a}{\"a}n{\"a}nen's book \cite{ModelsGames}. To slightly simplify our notation we confine our attention to relational languages, i.e. languages that do not have function symbols\footnote{With some notational effort, which we do not spell out, all our results transfer easily to arbitrary signatures.}. Also, when interested in logics with quantification of infinite strings we consider it natural to include signatures containing relation symbols of infinite arity. \subsection{Syntax} \begin{definition} $\mathrm{L}$ is a relational $\lambda$-signature if it contains only relation symbols of arity less than $\lambda$ and possibly constant symbols; relational $\omega$-signatures are first order signatures without function symbols.
Fix two cardinals $\lambda, \kappa$, a set of $\kappa$ variables, $\{v_\alpha : \alpha < \kappa\}$, and consider a relational $\lambda$-signature $\mathrm{L}$. The set of terms and atomic formulae for $\mathrm{L}_{\kappa \lambda}$ is constructed in analogy to first order logic using the symbols of $\mathrm{L} \cup\{v_\alpha : \alpha < \kappa\}$. The other $\mathrm{L}_{\kappa \lambda}$-formulae are defined by induction as follows: \begin{itemize} \item if $\phi$ is a $\mathrm{L}_{\kappa \lambda}$-formula, then so is $\neg \phi$; \item if $\Phi$ is a set of $\mathrm{L}_{\kappa \lambda}$-formulae of size $< \kappa$ with free variables in the set $V=\bp{v_i:i\in I}$ for some $I\in [\kappa]^{<\lambda}$, then so are $\bigwedge \Phi$ and $\bigvee\Phi$; \item if $V=\bp{v_i:i\in I}$ for some $I\in [\kappa]^{<\lambda}$ and $\phi$ is a $\mathrm{L}_{\kappa \lambda}$-formula, then so are $\forall V \phi$ and $\exists V \phi$. \end{itemize} We let $\mathrm{L}_{\infty \lambda}$ be the family of $\mathrm{L}_{\kappa \lambda}$-formulae for some $\kappa$, and $\mathrm{L}_{\infty \infty}$ be the family of $\mathrm{L}_{\kappa \lambda}$-formulae for some $\kappa,\lambda$. \end{definition} The restriction on the number of free variables for the clauses $\bigwedge$ and $\bigvee$ is intended to avoid formulae for which there is no quantifier closure. Another common convention is to call any such ``formula'' a pre-formula, and to reserve the name formula for those satisfying this restriction. \subsection{Boolean valued semantics} Let us recall the following basic facts about partial orders and their Boolean completions: \begin{definition} Given a Boolean algebra $\bool{B}$ and a partial order $\mathbb{P} = (P,\leq)$: \begin{itemize} \item $\bool{B}^+$ denotes the partial order given by its positive elements and ordered by $a\leq_{\bool{B}} b$ if $a\wedge b=a$. \item $\bool{B}$ is $<\lambda$-complete if any subset of $\bool{B}$ of size less than $\lambda$ has an infimum and a supremum according to $\leq_{\bool{B}}$.
\item A set $G \subset P$ is a prefilter if for any $a_1,\ldots,a_n \in G$ we can find $b \in G$, $b \leq a_1,\ldots,a_n$. \item A set $F \subset P$ is a filter if it is a prefilter and is upward closed: \[ (a \in F \wedge a \leq b) \Rightarrow b \in F. \] \end{itemize} \end{definition} \begin{remark} \label{rema1} Given a partial order $\mathbb{P} = (P,\leq)$: \begin{itemize} \item The order topology on $P$ is the one whose open sets are given by the downward closed subsets of $P$; the sets $N_p = \bp{q\in P: q\leq p}$ form a basis for this topology. \item $\RO(P)$ is the complete Boolean algebra given by the regular open sets of the order topology on $P$. \item The map $p\mapsto \Reg{N_p}$ defines an order and incompatibility preserving map of $P$ into a dense subset of $(\RO(P)^+,\subseteq)$; hence $(P,\leq)$ and $(\RO(P)^+,\subseteq)$ are equivalent forcing notions. \end{itemize} If $\bool{B}$ is a Boolean algebra, $\bool{B}^+$ sits inside its Boolean completion $\RO(\bool{B}^+)$ as a dense subset via the map $b\mapsto N_b$ (e.g. for all $A\in\RO(\bool{B}^+)$ there is $b\in \bool{B}$ such that $N_b\subseteq A$). From now on we identify $\bool{B}$ with its image in $\RO(\bool{B}^+)$ via the above map. \end{remark} \begin{definition} Let $\mathrm{L}$ be a relational $\lambda$-signature and $\mathsf{B}$ a $<\lambda$-complete Boolean algebra. A $\mathsf{B}$-valued model $\mathcal{M}$ for $\mathrm{L}$ is given by: \begin{enumerate} \item a non-empty set $M$; \item the Boolean value of equality, \begin{align*} M^2 &\rao \mathsf{B} \\ (\tau,\sigma) &\mapsto \Qp{\tau=\sigma}^{\mathcal{M}}_\mathsf{B}; \end{align*} \item the interpretation of relation symbols $R \in \mathrm{L}$ of arity $\alpha<\lambda$ by maps \begin{align*} M^\alpha &\rao \mathsf{B} \\ (\tau_i:i\in \alpha) &\mapsto \Qp{R (\tau_i:i\in \alpha) }^{\mathcal{M}}_\mathsf{B}; \end{align*} \item the interpretation $c^\mathcal{M} \in M$ of constant symbols $c$ in $\mathrm{L}$.
\end{enumerate} We require that the following conditions hold: \begin{enumerate}[(A)] \item For all $\tau,\sigma,\pi \in M$, \begin{gather*} \Qp{\tau=\tau}^{\mathcal{M}}_\mathsf{B} = 1_\mathsf{B}, \\ \Qp{\tau=\sigma}^{\mathcal{M}}_\mathsf{B} = \Qp{\sigma=\tau}^{\mathcal{M}}_\mathsf{B}, \\ \Qp{\tau=\sigma}^{\mathcal{M}}_\mathsf{B} \wedge \Qp{\sigma=\pi}^{\mathcal{M}}_\mathsf{B} \leq \Qp{\tau=\pi}_\mathsf{B}^{\mathcal{M}}. \end{gather*} \item \label{eqn:subslambda} If $R \in \mathrm{L}$ is an $\alpha$-ary relation symbol, for all $(\tau_i:\,i<\alpha), (\sigma_i:\,i<\alpha) \in M^\alpha$, \begin{equation*} \bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}^{\mathcal{M}}_\mathsf{B} \bigg) \wedge \Qp{R(\tau_i:\,i<\alpha)}^{\mathcal{M}}_\mathsf{B} \leq \Qp{R(\sigma_i:\,i<\alpha)}^{\mathcal{M}}_\mathsf{B}. \end{equation*} \end{enumerate} \end{definition} \begin{definition}\label{def:boolvalsem} Fix $\mathsf{B}$ a $<\lambda$-complete Boolean algebra and $\mathcal{M}$ a $\mathsf{B}$-valued structure for a relational $\lambda$-signature $\mathrm{L}$. 
We define the $\RO(\mathsf{B}^+)$-value of an $\mathrm{L}_{\infty\infty}$-formula $\phi(\overline{v})$ with assignment $\overline{v} \mapsto \overline{m}$ by induction as follows: \begin{gather*} \Qp{R(t_i:i\in\alpha)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \Qp{R(t_i[\overline{v} \mapsto \overline{m}]:i\in \alpha)}^\mathcal{M}_\mathsf{B} \text{ for $R\in\mathrm{L}$ of arity $\alpha<\lambda$},\\ \Qp{(\neg \phi)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \neg \Qp{\phi[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} ,\\ \Qp{(\bigwedge \Phi)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \bigwedge_{\phi \in \Phi} \Qp{\phi[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} ,\\ \Qp{(\bigvee \Phi)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \bigvee_{\phi \in \Phi} \Qp{\phi[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} ,\\ \Qp{(\forall V \phi)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \bigwedge_{\overline{a} \in M^V} \Qp{\phi[\overline{v} \mapsto \overline{m}, V \mapsto \overline{a}]}^\mathcal{M}_{\RO(\bool{B}^+)} ,\\ \Qp{(\exists V \phi)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)} = \bigvee_{\overline{a} \in M^V} \Qp{\phi[\overline{v} \mapsto \overline{m}, V \mapsto \overline{a}]}^\mathcal{M}_{\RO(\bool{B}^+)}. \end{gather*} A $\bool{B}$-valued model is well behaved\footnote{We believe this is the right generalization that should become standard in future papers.} for $\mathrm{L}_{\kappa\lambda}$ if $\Qp{\phi(t_i:i\in\alpha)[\overline{v} \mapsto \overline{m}]}^\mathcal{M}_{\RO(\bool{B}^+)}\in\bool{B}$ for any $\mathrm{L}_{\kappa\lambda}$ formula $\phi(\overline{v})$. Let $T$ be an $\mathrm{L}_{\infty \infty}$ theory and $\mathcal{M}$ be a well behaved $\bool{B}$-valued $\mathrm{L}$-structure. 
The relation \[ \mathcal{M} \vDash T \] holds if \[ \Qp{\bigwedge T}_\bool{B}^\mathcal{M} = 1_\bool{B}. \] \end{definition} Note that if $\bool{B}$ is complete any $\bool{B}$-valued model is well behaved. We feel free to write just $\Qp{\phi(\tau_i:\,i<\alpha)}$ or $\Qp{\phi(\tau_i:\,i<\alpha)}^{\mathcal{M}}$ or $\Qp{\phi(\tau_i:\,i<\alpha)}_\mathsf{B}$ when no confusion arises on which structure we are considering or in which Boolean algebra we are evaluating the predicate $R$. A key (but not immediately transparent) observation is that for any $\lambda$-signature $\mathrm{L}$, any well behaved $\bool{B}$-valued model $\mathcal{M}$ for $\mathrm{L}$ satisfies \ref{eqn:subslambda} with $R$ replaced by any $\mathrm{L}_{\infty\infty}$-formula. More precisely the following holds: \begin{fact} \label{fac:pressubslambdaanyform} Let $\mathrm{L}$ be a $\lambda$-relational signature and $\bool{B}$ a $<\lambda$-complete Boolean algebra. Then for any $\bool{B}$-valued model $\mathcal{M}$ for $\mathrm{L}$, any $\mathrm{L}_{\infty\infty}$-formula $\phi(x_i:i<\alpha)$ in displayed free variables, and any sequence $(\sigma_i:i<\alpha)$, $(\tau_i:i<\alpha)$ in $\mathcal{M}^\alpha$ \begin{equation}\label{eqn:subslambda1} \bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}^\mathcal{M}_{\RO(\mathsf{B})^+} \bigg) \wedge \Qp{\phi(\tau_i:\,i<\alpha)}^\mathcal{M}_{\RO(\mathsf{B})^+} \leq \Qp{\phi(\sigma_i:\,i<\alpha)}^\mathcal{M}_{\RO(\mathsf{B})^+}. \end{equation} \end{fact} We prove this in Section \ref{subsec:mixfull}. \begin{definition} Let $\bool{B}$ be a complete Boolean algebra and $\mathcal{M}$ a well behaved $\mathsf{B}$-valued model for some $\lambda$-signature $\mathrm{L}$. $\mathcal{M}$ has the mixing property if for any antichain $A \subset \mathsf{B}$ and $\{\tau_a : a \in A\} \subset M$ there is some $\tau \in M$ such that $a \leq \Qp{\tau=\tau_a}_\mathsf{B}$ for all $a \in A$. 
\end{definition} \begin{definition} Let $\lambda \leq \kappa$ be infinite cardinals, $\bool{B}$ be a $<\lambda$-complete Boolean algebra, and $\mathcal{M}$ be a well behaved $\mathsf{B}$-valued model for $\mathrm{L}_{\kappa\lambda}$. $\mathcal{M}$ is full for the logic $\mathrm{L}_{\kappa \lambda}$ if for every $\mathrm{L}_{\kappa,\lambda}$-formula $\phi(\overline{v},\overline{w})$ and $\overline{m} \in M^{\overline{w}}$ there exists $\overline{n} \in M^{\overline{v}}$ such that \[ \Qp{\exists \overline{v} \phi(\overline{v},\overline{m})}_\mathsf{B} = \Qp{\phi(\overline{n},\overline{m})}_\mathsf{B}. \] \end{definition} \begin{proposition}\label{prop:mixfull} Let $\mathrm{L}$ be a $\lambda$-relational signature and $\bool{B}$ a complete Boolean algebra. Any $\mathsf{B}$-valued model for $\mathrm{L}$ with the mixing property is full for $\mathrm{L}_{\infty \infty}$. \end{proposition} The proof of this proposition is deferred to Section \ref{subsec:mixfull}. \begin{definition} \label{def:boolvalmod} Let $\mathsf{B}$ be a $<\lambda$-complete Boolean algebra, $\mathcal{M}$ a full $\mathsf{B}$-valued model for $\mathrm{L}_{\kappa\lambda}$ where $\mathrm{L}$ is a relational $\lambda$-signature, and $F \subset \mathsf{B}$ a $<\lambda$-complete filter. The quotient of $\mathcal{M}$ by $F$ is the $\mathrm{L}$-structure $\mathcal{M}/_F$ defined as follows: \begin{enumerate} \item its domain $M/_F$ is the quotient of $M$ by the equivalence \[ \tau \equiv_F \sigma \lrao \Qp{\tau=\sigma} \in F, \] \item if $R \in \mathrm{L}$ is an $\alpha$-ary relation symbol, \[ R^{\mathcal{M}/_F}= \{({[\tau_i]}_F:i<\alpha) \in (M/_F)^\alpha : \Qp{R(\tau_i:i<\alpha)} \in F \}, \] \item if $c \in \mathrm{L}$ is a constant symbol, \[ c^{\mathcal{M}/_F}= \bigl[c^\mathcal{M}\bigr]_F \in M/F. 
\] \end{enumerate} \end{definition} \begin{remark} If $\mathcal{M}$ is a $\mathsf{B}$-valued model for $\mathrm{L}_{\kappa\lambda}$, then $\mathcal{M}/_F$ is a $\bool{B}/_F$-valued model: condition \ref{eqn:subslambda} of Def. \ref{def:boolvalmod} is satisfied by the quotient structure $\mathcal{M}/_F$ appealing to the $<\lambda$-completeness of $F$. All other conditions of Def. \ref{def:boolvalmod} hold for $\mathcal{M}/_F$ just assuming that $F$ is a filter. Furthermore, if $\mathcal{M}$ is full for $\mathrm{L}_{\kappa\lambda}$ and $F$ is also $<\kappa$-complete, so is $\mathcal{M}/_F$ (appealing to the $<\kappa$-completeness of $F$ to handle infinitary disjunctions and conjunctions and to the $<\lambda$-completeness of $F$ to handle infinitary quantifiers). \end{remark} \begin{theorem}[\L o\'s] \label{thm:fullLos} Let $\lambda \leq \kappa$ be infinite cardinals, $\bool{B}$ be a $<\lambda$-complete Boolean algebra, $\mathcal{M}$ an $\mathrm{L}_{\kappa \lambda}$-full $\mathsf{B}$-valued model for $\mathrm{L}_{\kappa\lambda}$, and $U \subset \mathsf{B}$ a $<\max\bp{\kappa,\lambda}$-complete ultrafilter. Then, for every $\mathrm{L}_{\kappa\lambda}$-formula $\phi(\overline{v})$ and $\overline{\tau} \in M^{|\overline{v}|}$, \[ \mathcal{M}/_U \vDash \phi(\overline{{[\tau]}_U}) \iff \Qp{\phi(\overline{\tau})}_\mathsf{B} \in U. \] \end{theorem} \begin{proof} A proof of the Theorem for $\mathrm{L}_{\omega\omega}$ for $\omega$-relational signatures is given in \cite[Thm. 5.3.7]{viale-notesonforcing}. The general case uses the $<\kappa$-completeness of the ultrafilter to handle $<\kappa$-sized disjunctions and conjunctions, and its $<\lambda$-completeness and the fullness of $\mathcal{M}$ to handle quantifiers on infinite strings. \end{proof} From now on we will work only with complete Boolean algebras $\bool{B}$, hence $\bool{B}$-valued models are automatically well behaved for $\mathrm{L}_{\infty\infty}$.
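For the reader's convenience we sketch a classical family of examples (a Boolean power; we state it for $\omega$-relational signatures and leave the routine verification of the clauses of the above definitions to the reader).

\begin{remark}
Let $\mathcal{N}$ be a Tarski $\mathrm{L}$-structure with domain $N$ for an $\omega$-relational signature $\mathrm{L}$, and let $\bool{B}$ be a complete Boolean algebra. Let $M$ be the set of all functions $f\colon A_f\to N$ with $A_f$ a maximal antichain of $\bool{B}^+$, and set
\begin{gather*}
\Qp{f=g}=\bigvee\bp{a\wedge b: a\in A_f,\, b\in A_g,\, f(a)=g(b)},\\
\Qp{R(f_1,\dots,f_n)}=\bigvee\bp{a_1\wedge\dots\wedge a_n: a_i\in A_{f_i} \text{ for all } i,\, \mathcal{N}\vDash R(f_1(a_1),\dots,f_n(a_n))},
\end{gather*}
interpreting each constant $c$ by the function with domain $\bp{1_{\bool{B}}}$ and value $c^{\mathcal{N}}$. One checks that this defines a $\bool{B}$-valued model for $\mathrm{L}$ which moreover has the mixing property: given an antichain $A$ and $\bp{f_a:a\in A}\subseteq M$, a mixing $\tau$ is obtained by letting $\tau(a\wedge b)=f_a(b)$ for $a\in A$ and $b\in A_{f_a}$ with $a\wedge b>0_{\bool{B}}$, and extending this assignment arbitrarily to a maximal antichain.
\end{remark}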
\subsection{Boolean satisfiability} \begin{definition} $\mathrm{BVM}$ denotes the class of Boolean valued models with values on a complete Boolean algebra and $\mathrm{Sh}$ the subclass of Boolean valued models with values on a complete Boolean algebra which have the mixing property. Let $\Gamma$ and $\Delta$ be sets of $\mathrm{L}_{\infty\infty}$-formulae. In case $\Gamma = \emptyset$ we let \[ \Qp{\bigwedge \Gamma}_\bool{B}^\mathcal{M} = 1_{\bool{B}}, \] and if $\Delta = \emptyset$ we let \[ \Qp{\bigvee \Delta}_\bool{B}^\mathcal{M} = 0_\bool{B}. \] \begin{itemize} \item $\Gamma$ is \emph{weakly Boolean satisfiable} if there is a complete Boolean algebra $\bool{B}$ and a $\bool{B}$-valued model $\mathcal{M}$ such that $\Qp{\phi}^{\mathcal{M}}_\bool{B}>0_\bool{B}$ for each $\phi\in \Gamma$. \item $\Gamma$ is \emph{Boolean satisfiable} if there is a complete Boolean algebra $\bool{B}$ and a $\bool{B}$-valued model $\mathcal{M}$ such that $\Qp{\phi}^{\mathcal{M}}_\bool{B}=1_\bool{B}$ for each $\phi\in \Gamma$. \item $\Gamma\vDash_\mathrm{BVM} \Delta$ if \[ \Qp{\bigwedge\Gamma}^{\mathcal{M}}_\bool{B}\leq\Qp{\bigvee\Delta}^{\mathcal{M}}_\bool{B} \] for any complete Boolean algebra $\bool{B}$ and $\bool{B}$-valued model $\mathcal{M}$. \item $\Gamma\vDash_\mathrm{Sh} \Delta$ if \[ \Qp{\bigwedge\Gamma}^{\mathcal{M}}_\bool{B}\leq\Qp{\bigvee\Delta}^{\mathcal{M}}_\bool{B} \] for any complete Boolean algebra $\bool{B}$ and $\bool{B}$-valued model $\mathcal{M}$ with the mixing property. \item $\Gamma \equiv_{\mathrm{BVM}} \Delta$ if $\Gamma \vDash_\mathrm{BVM} \Delta$ and $\Delta \vDash_\mathrm{BVM} \Gamma$. \item $\Gamma \equiv_{\mathrm{Sh}} \Delta$ if $\Gamma \vDash_\mathrm{Sh} \Delta$ and $\Delta \vDash_\mathrm{Sh} \Gamma$. \end{itemize} \end{definition} \subsection{Proof systems for $\mathrm{L}_{\infty\infty}$}\label{subsec:gentzencalc} We present a proof system for $\mathrm{L}_{\infty\infty}$ that is a direct generalization of the Sequent Calculus from first order logic. 
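As a small illustration of how this calculus operates (a toy example of ours), the rules displayed below derive in two steps the sequent $\bigwedge_{i\in I}\phi_i \vdash \phi_j$ for a fixed $j\in I$: the Axiom rule (with $\Gamma=\bp{\phi_i: i\in I,\, i\neq j}$ and $\Delta=\emptyset$) gives the premise, and Left Conjunction (with $\Gamma=\emptyset$ and $\Gamma'=\bp{\phi_i : i\in I}$) gives the conclusion:
\[
\prftree[r]{Left Conjunction}{\prftree[r]{Axiom rule}{}{\bp{\phi_i : i \in I} \vdash \phi_j}}{\textstyle\bigwedge_{i\in I}\phi_i \vdash \phi_j}
\]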
$\Gamma$,$\Gamma'$,$\Delta$ and $\Delta'$ denote sets of $\mathrm{L}_{\infty\infty}$-formulae of any cardinality, $\overline{v},\overline{w}$ denote set-sized sequences of variables, $\overline{t},\overline{u}$ denote set-sized sequences of terms, and $I$ denotes an index set. When dealing with sequents, and in order to make proofs shorter, we will assume that formulae only contain $\neg,\bigwedge$ and $\forall$ as logical symbols; this is not restrictive as all reasonable semantics for these logics (among which all those we consider in this paper) should validate the natural logical equivalences $\neg\forall\vec{v}\neg\phi\equiv\exists\vec{v}\phi$, $\neg\bigwedge_{i\in I}\neg\phi_i\equiv\bigvee_{i\in I}\phi_i$. \begin{definition} Given $\Gamma,\Delta$ arbitrary sets of $\mathrm{L}_{\infty\infty}$-formulae, a proof of $\Gamma \vdash \Delta$ in $\mathrm{L}_{\infty \infty}$ is a sequence $(s_\alpha)_{\alpha \leq \beta}$ of sequents, where $s_\beta$ is $\Gamma \vdash \Delta$ and each element $s_\alpha$ is either an axiom or comes from an application of the following rules to $(s_i)_{i<\alpha}$. 
\end{definition} \begin{displaymath} \begin{array}{lcc@{\qquad}l} \mbox{Axiom rule} & \prftree{}{\Gamma, \phi \vdash \phi, \Delta} & \prftree{\Gamma, \phi \vdash \Delta}{\Gamma' \vdash \phi, \Delta'}{\Gamma,\Gamma' \vdash \Delta, \Delta'} & \mbox{Cut Rule} \\ \\ \mbox{Substitution} & \prftree{\Gamma \vdash \Delta}{\Gamma(\overline{w} \diagup \overline{v}) \vdash \Delta(\overline{w} \diagup \overline{v})} & \prftree{\Gamma\vdash \Delta}{\Gamma,\Gamma' \vdash \Delta, \Delta'} & \mbox{Weakening}\\ \\ \mbox{Left Negation} & \prftree{\Gamma \vdash \phi, \Delta}{\Gamma, \neg \phi \vdash \Delta} & \prftree{\Gamma, \phi \vdash \Delta}{\Gamma \vdash \neg \phi, \Delta} & \mbox{Right Negation} \\ \\ \mbox{Left Conjunction} & \prftree{\Gamma,\Gamma' \vdash \Delta}{\Gamma,\bigwedge \Gamma' \vdash \Delta} & \prftree{\Gamma \vdash \phi_i, \Delta \ ,\ i \in I}{\Gamma \vdash \bigwedge_{i \in I} \{\phi_i : i \in I\}, \Delta} & \mbox{Right Conjunction} \\ \\ \mbox{Left Quantification} & \prftree{\Gamma, \phi(\overline{t} \diagup \overline{v}) \vdash \Delta}{\Gamma, \forall \overline{v} \phi(\overline{v}) \vdash \Delta} & \prftree[r]{*}{\Gamma \vdash \phi(\overline{w} \diagup \overline{v} ), \Delta}{\Gamma \vdash \forall \overline{v}\phi(\overline{v}), \Delta} & \mbox{Right Quantification}\\ \\ \mbox{Equality 1} & \prftree{}{ v_\alpha = v_\beta \vdash v_\beta = v_\alpha} & \prftree[r]{}{}{\overline{u} = \overline{t}, \phi(\overline{t}) \vdash \phi(\overline{u})} & \mbox{Equality 2}\\ \end{array} \end{displaymath} * The Right Quantification rule can only be applied in the case that none of the variables from $\overline{w}$ occurs free in formulae of $\Gamma\cup\Delta\cup\bp{\phi}$. \begin{remark} It needs to be noted that with this deduction system the completeness theorem for $\mathrm{L}_{\infty\infty}$ (even for $\mathrm{L}_{\omega_2\omega}$) fails for the usual semantics given by Tarski structures. 
Remark first that our proof system is forcing invariant: the existence of a proof of a given sequent is expressed by a $\Sigma_1$ statement with the sequent as parameter; hence, if a proof exists in $V$, it exists in any further extension of $V$. Consider now a set of $\kappa$ constants $\{c_\alpha : \alpha < \kappa\}$ for $\kappa>\omega$ and the sentence \[ \psi := \bigg( \bigwedge_{\omega \leq \alpha \neq \beta} c_\alpha \neq c_\beta \bigg) \Rao \exists v \bigg( \bigwedge_{n < \omega} v \neq c_n \bigg). \] The sentence $\psi$ is valid in the usual Tarski semantics, but it cannot be proved (in our deduction system or in any forcing invariant system) since it is no longer valid when moving to $V[G]$ for $G$ a $V$-generic filter for $\Coll(\omega,\kappa)$. Malitz \cite[Thm. 3.2.4]{MalitzThesis} also showed that the above formula is a counterexample to Craig's interpolation property for Tarski semantics in $\mathrm{L}_{\infty\omega}$. \end{remark} Our opinion is that a proof system should not depend on the model of set theory in which one is working; this is the case for the proof system we present here, at least when restricted to $\mathrm{L}_{\infty\omega}$. In contrast with our point of view, one finds a complete proof system for Tarski semantics on $\mathrm{L}_{\infty\infty}$ in Malitz's thesis \cite[Thm. 3.3.1]{MalitzThesis}; however, this proof system (which is in fact due to Karp \cite[Ch. 11]{Karp}) is not forcing invariant: a proof of some sequent in one model of set theory may no longer be a proof of that same sequent in some forcing extension. \section{Main model theoretic results}\label{sec:mainmodthres} These are the main model theoretic results of the paper. \subsection{Results for $\mathrm{L}_{\infty \omega}$} \begin{theorem}[Boolean Completeness for $\mathrm{L}_{\infty\omega}$] \label{them:boolcompl} Let $\mathrm{L}$ be an $\omega$-relational signature. 
The following are equivalent for $T,S$ sets of $\mathrm{L}_{\infty\omega}$-formulae. \begin{enumerate} \item \label{thm:boolcomp1} $T\models_{\mathrm{Sh}}S$, \item \label{thm:boolcomp2} $T\models_{\mathrm{BVM}}S$, \item \label{thm:boolcomp3} $T\vdash S$. \end{enumerate} \end{theorem} \begin{theorem}[Boolean Craig Interpolation]\label{thm:craigint} Assume $\vDash_{\mathrm{Sh}}\phi \rightarrow \psi$ with $\phi,\psi \in \mathrm{L}_{\kappa\omega}$. Then there exists a sentence $\theta$ in $\mathrm{L}_{\kappa\omega}$ such that \begin{itemize} \item $\vDash_{\mathrm{Sh}} \phi \rightarrow \theta$, \item $\vDash_{\mathrm{Sh}} \theta \rightarrow \psi$, \item all non-logical symbols appearing in $\theta$ appear both in $\phi$ and $\psi$. \end{itemize} \end{theorem} Recall the Beth definability property: \begin{definition} Let $\mathrm{L}$ be a relational $\lambda$-signature and $R$ be an $\alpha$-ary relation symbol not in $\mathrm{L}$ for some $\alpha<\lambda$. Given $\lambda,\kappa\in\mathsf{Card}\cup\bp{\infty}$, let $T$ be an $\mathrm{L}_{\kappa\lambda}'$-theory for $\mathrm{L}'=\mathrm{L}\cup\bp{R}$. \begin{itemize} \item $R$ is implicitly Boolean definable from $T$ in a relational $\lambda$-signature $\mathrm{L}$ if the following holds: whenever $\mathcal{M}$ and $\mathcal{N}$ are $\bool{B}$-valued models of $T$ with domain $M$ such that $\mathcal{M}\restriction\mathrm{L}=\mathcal{N}\restriction\mathrm{L}$, we have that $\Qp{R(\tau_i:i\in\alpha)}^{\mathcal{M}}=\Qp{R(\tau_i:i\in\alpha)}^{\mathcal{N}}$ for all $(\tau_i:i\in\alpha)\in M^\alpha$. \item $R$ is explicitly Boolean definable from $T$ in $\mathrm{L}_{\kappa\lambda}$ if \[ T\vdash \forall (v_i:i\in\alpha)\,(R(v_i:i\in\alpha)\leftrightarrow\phi(v_i:i\in\alpha)) \] for some $\mathrm{L}_{\kappa\lambda}$-formula $\phi(v_i:i\in\alpha)$. 
\end{itemize} The Boolean Beth definability property for $\mathrm{L}_{\kappa\lambda}$ (with $\lambda,\kappa\in\mathsf{Card}\cup\bp{\infty}$) states that for all relational $\lambda$-signatures $\mathrm{L}$ and $(\mathrm{L}\cup\bp{R})_{\kappa\lambda}$-theories $T$, $R$ is implicitly Boolean definable from $T$ in $\mathrm{L}\cup\bp{R}$ if and only if it is explicitly Boolean definable from $T$ in $\mathrm{L}_{\kappa\lambda}$. \end{definition} This is a standard consequence of Craig's interpolation and completeness (see for example \cite[Thm. 6.42]{ModelsGames}; the same proof applies to our context in view of the properties of our calculus $\vdash$). \begin{theorem}\label{thm:bethdef} $\mathrm{L}_{\infty\omega}$ has the Boolean Beth definability property. \end{theorem} Another main result we present is the Boolean omitting types theorem. We need to clarify some notation so as to make its statement intelligible. Suppose $\Sigma(v_1,\dots,v_n)$ is a set of $\mathrm{L}_{\infty\infty}$-formulae in free variables $v_1,\dots,v_n$. We say that a model $\mathcal{M}$ realizes $\Sigma(v_1,\dots,v_n)$ if there exist $m_1,\dots,m_n \in M$ such that \[ \mathcal{M} \vDash \bigwedge \Sigma(m_1,\dots,m_n). \] That $\mathcal{M}$ omits the type $\Sigma$ amounts to saying that for any $m_1,\dots,m_n \in M$, \[ \mathcal{M} \vDash \bigvee_{\phi \in \Sigma} \neg \phi(m_1,\dots,m_n). \] Thus, a model $\mathcal{M}$ omits the family of types $\mathcal{F} = \{\Sigma(v_1,\ldots,v_{n_\Sigma}) : \Sigma\in \mathcal{F}\}$ if it models the sentence \[ \bigwedge_{\Sigma\in \mathcal{F}} \forall \overline{v}_\Sigma \bigvee \bp{\neg\phi(\overline{v}_\Sigma):\phi\in \Sigma}. \] In the following proof the sets $\Phi$ will play the role of $\bp{\neg \psi : \psi \in \Sigma}$, where $\Sigma$ is the type we wish to omit. In this context, the type $\Sigma$ is not isolated by a sentence $\theta$ if whenever there is a model of $\theta$, there is also a model of $\theta \wedge \neg\phi$ for some $\phi \in \Sigma$. 
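To fix ideas, here is a classical illustration of these notions (our example, not part of the surrounding development): work in a signature containing the constants $\bp{c_n : n<\omega}$ and consider the type \[ \Sigma(v)=\bp{v \neq c_n : n<\omega}. \] A model $\mathcal{M}$ realizes $\Sigma$ exactly when some element of its domain differs from the interpretation of every $c_n$, and omits $\Sigma$ exactly when every element of its domain interprets some $c_n$. Note that the conclusion $\exists v \bigwedge_{n<\omega} v \neq c_n$ of the sentence $\psi$ from the earlier remark on the failure of completeness for Tarski semantics asserts precisely that $\Sigma$ is realized.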
\begin{theorem}[Boolean Omitting Types Theorem]\label{thm:omittypthm} Let $T$ be a set-sized Boolean satisfiable $\mathrm{L}_{\infty\omega}$-theory. Assume $\mathcal{F}$ is a set-sized family such that each $\Phi \in\mathcal{F}$ is a set of $\mathrm{L}_{\infty \omega}$-formulae with the property that each $\phi \in \Phi$ has free variables among $v_0,\dots, v_{n_\Phi - 1}$. Let $\mathrm{L}_{T,\mathcal{F}}$ be the smallest fragment of $\mathrm{L}_{\infty \omega}$ such that $T, \Phi \subset \mathrm{L}_{T,\mathcal{F}}$ for all $\Phi \in \mathcal{F}$. Suppose that no $\Phi$ is Boolean isolated in $\mathrm{L}_{T,\mathcal{F}}$, i.e., for every $\mathrm{L}_{T,\mathcal{F}}$-formula $\theta$ in free variables $v_0,\dots,v_{n_\theta - 1}$, \[ T + \exists v_0 \dots v_{n_\theta - 1}\,\theta \] is Boolean satisfiable if and only if so is \[ T + \exists v_0 \dots v_{\max\bp{n_\theta - 1, n_\Phi - 1}}\,[\theta \wedge \phi] \] for some $\phi \in \Phi$. Then there exists a Boolean valued model $\mathcal{M}$ with the mixing property such that \[ \mathcal{M} \vDash T + \bigwedge_{\Phi\in\mathcal{F}} \forall v_0 \dots v_{n_\Phi - 1}\, \bigvee \Phi. \] \end{theorem} \subsection{Results for $\mathrm{L}_{\infty \infty}$} Theorem \ref{thm:complMansfield} below is due to Mansfield \cite{MansfieldConPro}. We do not know whether any of these results hold for $\lambda$-relational signatures which are not first order. \begin{theorem}\label{thm:complMansfield} \cite[Thm. 1]{MansfieldConPro} Let $\mathrm{L}$ be an $\omega$-relational signature. The following are equivalent for $T,S$ sets of $\mathrm{L}_{\infty\infty}$-formulae. \begin{enumerate} \item \label{thm:boolcomp2MANS} $T\models_{\mathrm{BVM}}S$, \item \label{thm:boolcomp3MANS} $T\vdash S$. \end{enumerate} \end{theorem} \begin{theorem}[Boolean Craig Interpolation]\label{thm:craigint2} Let $\mathrm{L}$ be an $\omega$-relational signature. Assume $\vDash_{\mathrm{BVM}}\phi \rightarrow \psi$ with $\phi,\psi \in \mathrm{L}_{\kappa\lambda}$. 
Then there exists a sentence $\theta$ in $\mathrm{L}_{\kappa\lambda}$ such that \begin{itemize} \item $\vDash_{\mathrm{BVM}} \phi \rightarrow \theta$, \item $\vDash_{\mathrm{BVM}} \theta \rightarrow \psi$, \item all non-logical symbols appearing in $\theta$ appear both in $\phi$ and $\psi$. \end{itemize} \end{theorem} \begin{theorem}\label{thm:bethdef2} $\mathrm{L}_{\infty\infty}$ has the Boolean Beth definability property. \end{theorem} As in the case of interpolation, one can prove a version of the omitting types theorem in $\mathrm{L}_{\infty \infty}$ where the obtained model is not mixing in general. Nonetheless, we do not think that either its proof or its statement would introduce anything of relevance beyond the information found in Theorems \ref{thm:omittypthm} and \ref{thm:craigint2}. \section{Consistency properties for relational $\omega$-signatures}\label{sec:consprop} Consistency properties are partial approximations to the construction of a model of an infinitary theory. In first order logic the main tool for constructing Tarski models of a theory is the compactness theorem. However, this technique is not suited for the infinitary logics $\mathrm{L}_{\kappa \lambda}$ since it fails already in the simplest case $\mathrm{L}_{\omega_1 \omega}$. Actually, a cardinal $\kappa$ is (weakly) compact if and only if the (weak) compactness theorem holds for the logic $\mathrm{L}_{\kappa \kappa}$. Thus a new recipe for constructing models is needed. This is given by the notion of consistency property. Our aim is to show that by means of consistency properties one gets a powerful tool to produce Boolean valued models of infinitary logic. We follow (generalizing it) the approach of Keisler's book \cite{KeislerInfLog} to consistency properties for $\mathrm{L}_{\omega_1\omega}$. First of all it is convenient to reduce the satisfaction problem to formulae where negations occur only in front of atomic formulae. We use the abbreviation $\vec{v}$ to denote a sequence of variables. 
Similarly $\vec{c}$ denotes a string of constants. \begin{definition} \label{MovNegIns} Let $\phi$ be a $\mathrm{L}_{\infty \infty}$-formula. We define $\phi \neg$ (moving a negation inside) by induction on the complexity of formulae: \begin{itemize} \item If $\phi$ is an atomic formula $\varphi$, $\phi \neg$ is $\neg \varphi$. \item If $\phi$ is $\neg \varphi$, $\phi \neg$ is $\varphi$. \item If $\phi$ is $\bigwedge \Phi$, $\phi \neg$ is $\bigvee \{\neg \varphi : \varphi \in \Phi\}$. \item If $\phi$ is $\bigvee \Phi$, $\phi \neg$ is $\bigwedge \{\neg \varphi : \varphi \in \Phi\}$. \item If $\phi$ is $\forall \vec{v} \varphi(\vec{v})$, $\phi \neg$ is $\exists \vec{v} \neg \varphi(\vec{v})$. \item If $\phi$ is $\exists \vec{v} \varphi(\vec{v})$, $\phi \neg$ is $\forall \vec{v} \neg \varphi(\vec{v})$. \end{itemize} \end{definition} It is easily checked that $\neg \phi$ and $\phi \neg$ are equivalent (under any reasonable notion of equivalence, e.g. boolean satisfiability or provability). This operation is used in the proof of Thm. \ref{ModExiThe}, Thm. \ref{GenFilThe} and Thm. \ref{ManModExi}. \begin{definition} \label{def:ConProInf} Let $\mathrm{L} = \mathcal{R} \cup \mathcal{D}$ be a relational $\omega$-signature where the relation symbols are in $\mathcal{R}$ and $\mathcal{D}$ is the set of constants. Given an infinite set of constants $\mathcal{C}$ disjoint from $\mathcal{D}$, consider $\mathrm{L}(\mathcal{C})$ the signature obtained by extending $\mathrm{L}$ with the constants in $\mathcal{C}$. 
A set $S$ whose elements are set-sized subsets of $\mathrm{L}(\mathcal{C})_{\infty\infty}$ is a consistency property for $\mathrm{L}(\mathcal{C})_{\infty\infty}$ if for each $s \in S$ the following properties hold: \begin{enumerate} \item[(Con)]\label{conspropCon} for any $\mathrm{L}(\mathcal{C})_{\infty\infty}$-sentence $\phi$, either $\phi\not\in s$ or $\neg\phi\not\in s$, \item[(Ind.1)]\label{conspropInd1} if $\neg \phi \in s$, $s \cup \{\phi \neg\} \in S$, \item[(Ind.2)]\label{conspropInd2} if $\bigwedge \Phi \in s$, then for any $\phi \in \Phi$, $s \cup \{\phi\} \in S$, \item[(Ind.3)]\label{conspropInd3} if $\forall \vec{v} \phi(\vec{v}) \in s$, then for any $\vec{c} \in (\mathcal{C}\cup\mathcal{D})^{|\vec{v}|}$, $s \cup \{\phi(\vec{c})\} \in S$, \item[(Ind.4)]\label{def:conspropInd4} if $\bigvee \Phi \in s$, then for some $\phi \in \Phi$, $s \cup \{\phi\} \in S$, \item[(Ind.5)]\label{def:conspropInd5} if $\exists \vec{v} \phi(\vec{v}) \in s$, then for some $\vec{c} \in \mathcal{C}^{|\vec{v}|}$, $s \cup \{\phi(\vec{c})\} \in S$, \item[(Str.1)]\label{def:conspropStr1} if $c,d \in \mathcal{C}\cup\mathcal{D}$ and $c = d \in s$, then $s \cup \{d = c\} \in S$, \item[(Str.2)] \label{def:conspropStr2} if $c,d \in \mathcal{C} \cup \mathcal{D}$ and $\{c = d, \phi(d)\} \subset s$, then $s \cup \{\phi(c)\} \in S$, \item[(Str.3)] \label{def:conspropStr3} if $d \in \mathcal{C} \cup \mathcal{D}$, then for some $c \in \mathcal{C}$, $s \cup \{c = d\} \in S$. \end{enumerate} \end{definition} The following result, due to Makkai \cite{MakkaiConPro}, shows the value of consistency properties for $\mathrm{L}_{\omega_1 \omega}$. \begin{theorem}[Model Existence Theorem] \label{ModExiThe} Let $\mathrm{L}$ be a countable relational $\omega$-signature, $\mathcal{C}$ a countable set of constants, and $S \subset [\mathrm{L}(\mathcal{C})_{\omega_1 \omega}]^{\leq \omega}$ be a consistency property of countable size. 
Then any $s \in S$ is realized in some Tarski model for $\mathrm{L}$. \end{theorem} Now let us give a few examples of consistency properties for $\mathrm{L}(\mathcal{C})_{\infty\omega}$ and $\mathrm{L}(\mathcal{C})_{\infty\infty}$. \begin{enumerate} \item Consider $\mathcal{K}$ a class of Tarski structures for $\mathrm{L}(\mathcal{C})$. The following families are consistency properties for $\mathrm{L}(\mathcal{C})_{\infty\infty}$: \begin{itemize} \item for fixed infinite cardinals $\lambda \geq \kappa,\mu$ and $\mathcal{C}$ a set of constants of size at least $\lambda$, \[ S_{\lambda,\kappa} = \{s \in [\mathrm{L}(\mathcal{C})_{\lambda \mu}]^{\leq \kappa} : \,\exists \mathcal{A} \in \mathcal{K},\ \mathcal{A} \vDash \bigwedge s\}, \] \item $S_{\lambda,< \omega} = \{s \in [\mathrm{L}(\mathcal{C})_{\lambda \mu}]^{< \omega} : \,\exists \mathcal{A} \in \mathcal{K},\ \mathcal{A} \vDash \bigwedge s\}$, \item $S_{\lambda, \kappa}$ and $S_{\lambda,< \omega}$ where only a finite number of constants from $\mathcal{C}$ appear in each $s \in S$. \end{itemize} \item\label{exm:BvalmodConsProp} Let $\mathcal{M}$ be a $\bool{B}$-valued model with domain $M$ for a signature $\mathrm{L}=\mathcal{R}\cup\mathcal{D}$. We let $\mathcal{C}=M$ and $S$ be the set of finite (less than $\kappa$-sized,\dots) sets $r$ of $\mathrm{L}(M)_{\kappa\lambda}$-sentences such that \[ \Qp{\bigwedge r}^{\mathcal{M}}_{\bool{B}}>0_{\bool{B}}. \] Then $S$ is a consistency property. \item The following families are consistency properties for $\mathrm{L}(\mathcal{C})_{\infty\omega}$: \begin{itemize} \item Any of the previous cases where the Tarski structures in $\mathcal{K}$ may exist only in some generic extension of $V$: e.g. 
given a $\mathrm{L}_{\infty\omega}$-theory $T$, $T$ may not be consistent in $V$ with respect to the Tarski semantics for $\mathrm{L}_{\infty\omega}$, but $T$ may become consistent with respect to the Tarski semantics for $\mathrm{L}_{\infty\omega}$ in some generic extension of $V$; one can then use the forcible properties of the Tarski models of $T$ existing in some generic extension of $V$ to define a consistency property in $V$. \end{itemize} \end{enumerate} The last example is based on the following observation: let $S$ be a consistency property for $\mathrm{L}_{\kappa^+ \omega}$ of size $\kappa$ whose elements are all sets of formulae of size at most $\kappa$ existing in $V$. Let $G$ be a $V$-generic filter for the forcing $\Coll(\omega,\kappa)$. Then, in the generic extension $V[G]$, $S$ becomes a consistency property of countable size for $\mathrm{L}_{\omega_1^{V[G]} \omega}$ and the Model Existence Theorem \ref{ModExiThe} applied in $V[G]$ provides the desired Tarski model of any $s\in S$. \begin{definition} Suppose $\kappa$ is an infinite cardinal and let $\mathrm{L}$ be a signature. 
A fragment $\mathrm{L}_\mathcal{A} \subset \mathrm{L}_{\kappa \omega}$ consists of a set of $\mathrm{L}_{\kappa \omega}$-formulas such that: \begin{itemize} \item $\mathrm{L}_\mathcal{A}$ is closed under $\neg$, $\wedge$ and $\vee$, \item if $\phi\in \mathrm{L}_\mathcal{A}$ and $v$ is a variable appearing in some $\mathrm{L}_\mathcal{A}$-formula, $\forall v \phi$ and $\exists v \phi$ belong to $\mathrm{L}_\mathcal{A}$, \item $\mathrm{L}_\mathcal{A}$ is closed under subformulas, \item if $\phi \in \mathrm{L}_\mathcal{A}$, then $\phi \neg \in \mathrm{L}_\mathcal{A}$, \item if $\phi \in \mathrm{L}_\mathcal{A}$, then there is a variable appearing in $\mathrm{L}_\mathcal{A}$ which does not occur in $\phi$, \item if $\phi(v) \in \mathrm{L}_\mathcal{A}$ and $t$ is any $\mathrm{L}$-term, $\phi(t) \in \mathrm{L}_\mathcal{A}$, \item if $\phi(v_1,\ldots,v_n) \in \mathrm{L}_\mathcal{A}$ and $w_1,\ldots,w_n$ are variables appearing in $\mathrm{L}_\mathcal{A}$, $\phi(w_1,\ldots,w_n) \in \mathrm{L}_\mathcal{A}$. \end{itemize} \end{definition} \begin{remark} Suppose $\kappa$ is an infinite cardinal and let $\mathrm{L}$ be a signature. Let $T$ be a set of $\mathrm{L}_{\kappa \omega}$-formulae. Then there exists a smallest fragment $\mathrm{L}_\mathcal{A}$ such that $T \subset \mathrm{L}_\mathcal{A}$ and \[ |\mathrm{L}_\mathcal{A}| = |\mathrm{L}| + |T| + \kappa. \] \end{remark} \section{Forcing with consistency properties} \label{ForConPro} In this section we assume that $\mathrm{L}$ denotes a set-sized $\omega$-relational signature, $\mathcal{C}$ is a set of fresh constants, and $S \subset \mathcal{P}(\mathrm{L}(\mathcal{C})_{\infty \omega})$ is a set-sized consistency property. We start by noting the following: \begin{remark} \label{S-PS} If $S$ is a consistency property, so is $\{s \subset \mathrm{L}(\mathcal{C})_{\infty \omega} : \exists s_0 \in S\, s \subseteq s_0\}$. 
\end{remark} \begin{definition} Let $S$ be a consistency property over $\mathrm{L}(\mathcal{C})_{\infty\omega}$ for a set of constants $\mathcal{C}$ and a relational $\omega$-signature $\mathrm{L}$. The forcing notion $\mathbb{P}_S$ is given by: \begin{itemize} \item domain: $\{s \subset \mathrm{L}(\mathcal{C})_{\infty \omega} : \exists s_0 \in S \,(s \subseteq s_0)\}$; \item order: $p \leq q$ if and only if $q \subseteq p$. \end{itemize} Given a filter $F$ on $\mathbb{P}_S$, $\Sigma_F = \bigcup F$. \end{definition} The proof of the Model Existence Theorem for $\mathrm{L}_{\omega_1 \omega}$ as given in \cite{KeislerInfLog} corresponds naturally to the construction, for a given consistency property $S$, of a suitable filter $G$ on $\mathbb{P}_S$ generic over countably many dense sets. The clauses of a consistency property are naturally attached to dense sets that a maximal filter $G$ on $\mathbb{P}_S$ needs to meet in order to produce a Tarski model of the formulae $\phi\in \bigcup G$. For example, suppose $\bigvee \Phi \in s_0 \in S$. Clause \ref{def:conspropInd4} together with Remark \ref{S-PS} shows that the set $\{s \in S: \,\Phi\cap s\neq\emptyset\}$ is dense below $s_0$. In Keisler's case the elements of a consistency property are countable and each $\mathrm{L}(\mathcal{C})_{\omega_1 \omega}$-formula has countably many subformulae. So, one can take an enumeration of all the dense sets at issue and diagonalize. In the general case for $\mathrm{L}_{\infty \omega}$ one deals with many more dense sets, hence a filter meeting all the relevant dense sets may not exist. However, we can translate Keisler's argument using forcing and produce a Boolean valued model for the associated consistency property. For the rest of this section we work with consistency properties made up of finite sets of sentences. The reader familiar with Keisler's book \cite{KeislerInfLog} will find this restriction natural. We split our generalization of Keisler's result into two pieces. 
The first piece shows how far one can go in proving the Model Existence Theorem assuming only the existence of a maximal filter on a consistency property $S$. The second one shows how genericity fills the missing gaps. \begin{fact} Let $S$ be a consistency property for $\mathrm{L}(\mathcal{C})_{\infty\omega}$ for a set of constants $\mathcal{C}$. Assume $S$ consists only of finite sets of formulae. Suppose $F \subseteq \mathbb{P}_S$ is a filter. Then $[\Sigma_F]^{<\omega}=F$. \end{fact} \begin{proof} The inclusion $F \subset [\Sigma_F]^{< \omega}$ follows by definition of $\Sigma_F$. We now prove $[\Sigma_F]^{< \omega} \subseteq F$. Suppose $p=\bp{\phi_1,\dots,\phi_n} \in [\Sigma_F]^{< \omega}$. Then there exist $s_1,\ldots,s_n \in F$ with each $\phi_i\in s_i$. Hence $p \subseteq \bigcup_{i \leq n} s_i$. Since $F$ is a filter, we have $\bigcup_{i \leq n} s_i \in F \subseteq \mathbb{P}_S$. The set $p$ is a condition in $\mathbb{P}_S$ since $\mathbb{P}_S$ is closed under subsets. Finally, $\bigcup_{i \leq n} s_i \leq p$ and $\bigcup_{i \leq n} s_i \in F$ imply $p \in F$. \end{proof} \begin{definition} \label{DefStr} Given a relational $\omega$-signature $\mathrm{L}=\mathcal{R}\cup\mathcal{D}$, a set of fresh constants $\mathcal{C}$, and a consistency property $S$ for $\mathrm{L}(\mathcal{C})_{\infty\omega}$, let $F$ be a maximal filter for $\mathbb{P}_S$. $\mathcal{A}_F=(A_F,R_F:R\in\mathcal{R}, d_F:d\in\mathcal{D})$ is the following string of objects: \begin{itemize} \item $A_F$ is the set of equivalence classes on $\mathcal{C}\cup\mathcal{D}$ for the equivalence relation $c\cong_F d$ if and only if $(c=d)\in \Sigma_F$, \item for $R \in \mathcal{R}$ an $n$-ary relation symbol and $c_1,\ldots,c_n \in \mathcal{C} \cup \mathcal{D}$, $R_F([c_1]_F,\dots,[c_n]_F)$ holds if and only if $R(c_1,\dots,c_n)\in \Sigma_F$, \item $d_F=[d]_F$ for any $d\in\mathcal{D}\cup\mathcal{C}$. 
\end{itemize} \end{definition} Consistency properties are so designed that $\mathcal{A}_F$ is a Tarski structure for $\mathrm{L}(\mathcal{C})$: \begin{fact}\label{fac:TarStrAF} Let $\mathrm{L}=\mathcal{R}\cup\mathcal{D}$ be a relational $\omega$-signature, $\mathcal{C}$ a fresh set of constants, $S$ a consistency property for $\mathrm{L}(\mathcal{C})_{\infty\omega}$, $F$ a maximal filter for $\mathbb{P}_S$. Then $\mathcal{A}_F$ is a Tarski structure for $\mathrm{L}(\mathcal{C})$. \end{fact} \begin{proof} We need to check that the definition of $A_F$ and of $R_F$ does not depend on the chosen representatives $c_1,\ldots,c_n$. Suppose $c_1 = d_1, \ldots, c_n = d_n, R(c_1 \ldots c_n)\in \Sigma_F$. By the previous Fact $\{c_1 = d_1, \ldots, c_n = d_n, R(c_1 \ldots c_n)\} \in F$. Hence by Clauses \ref{def:ConProInf}(Str.1) and (Str.2), for any $p\supseteq \{c_1 = d_1, \ldots, c_n = d_n, R(c_1 \ldots c_n)\}$ in $\mathbb{P}_S$, $p\cup\bp{R(d_1,\dots,d_n)}\in \mathbb{P}_S$. This combined with Clause \ref{def:ConProInf}(Con) gives that no $p\in \mathbb{P}_S$ can contain $ \{c_1 = d_1, \ldots, c_n = d_n, R(c_1 \ldots c_n), \neg R(d_1,\dots,d_n)\}$. Hence by maximality of $F$, \[ \{c_1 = d_1, \ldots, c_n = d_n, R(c_1 \ldots c_n), R(d_1,\dots,d_n)\}\in F \] must be the case. \end{proof} \begin{lemma} \label{MaxFilThe} Let $\mathrm{L}$ be a relational $\omega$-signature and $\mathcal{C}$ an infinite set of constants disjoint from $\mathrm{L}$. Assume $S \subset [\mathrm{L}(\mathcal{C})_{\infty \infty}]^{< \omega}$ is a consistency property. Let $F \subseteq \mathbb{P}_S$ be a maximal filter on $\mathbb{P}_S$. Consider $\Sigma'_F \subset \Sigma_F$ the set of (quantifier free) formulae $\psi\in\Sigma_F$ which are either atomic, negated atomic, or such that any subformula of $\psi$ which is neither atomic nor negated atomic contains just the logical constant $\bigwedge$. Then $\mathcal{A}_F \vDash \Sigma'_F$. 
\end{lemma} \begin{proof} We proceed by induction on the complexity of $\psi \in \Sigma'_F$. First note that $\mathbb{P}_S$ is a consistency property of which $S$ is a dense subset. The atomic case follows by Def. \ref{DefStr}. For the remaining inductive clauses we proceed as follows: \begin{description} \item[$\neg$] Suppose $\psi = \neg \phi \in \Sigma'_F$ with $\phi$ an atomic formula. Let us see that \[ \mathcal{A}_F \nvDash \phi. \] Since $\phi$ is atomic it is enough to check $\phi \notin \Sigma'_F$. Suppose otherwise. Then there exists $p \in F$ with $\phi \in p$. Also $\psi \in q$ for some $q \in F$. Since $F$ is a filter, there exists $r \in F$ with $r \leq p,q$. But $\phi, \neg \phi \in r$ contradicts clause \ref{def:ConProInf}(Con) for $\mathbb{P}_S$. Therefore \[ \mathcal{A}_F \vDash \psi. \] \item[$\bigwedge$] Suppose $\psi = \bigwedge \Phi$ is in $\Sigma_F'$. One needs to check \[ \mathcal{A}_F \vDash \phi \] for any $\phi \in \Phi$. Fix such a $\phi\in\Phi$. We start by showing that if $\bigwedge \Phi \in \Sigma'_F$, $\phi$ is also in $\Sigma'_F$. It is enough to check $\phi \in \Sigma_F$, and then apply the inductive assumption to $\phi\in\Sigma'_F$, to get that $\mathcal{A}_F\models\phi$. Towards this aim we note the following: \begin{quote} For any $q\in \mathbb{P}_S$ with $\bigwedge\Phi\in q$, $q\cup\bp{\phi}\in\mathbb{P}_S$, while $q\cup\bp{\neg\phi}\not\in\mathbb{P}_S$. \end{quote} \begin{proof} Take $q$ in $\mathbb{P}_S$ with $\bigwedge \Phi \in q$. By Clause \ref{def:ConProInf}(Ind.2), $q \cup \{\phi\} \in \mathbb{P}_S$. Assume now that $\neg\phi\in q$. Then $q \cup \{\phi\} \in \mathbb{P}_S$ would contradict Clause \ref{def:ConProInf}(Con) for $\mathbb{P}_S$. The thesis follows. \end{proof} By maximality of $F$ if some $q\in F$ is such that $\bigwedge\Phi\in q$, then $q\cup\bp{\phi}\in F$ as well, yielding that $\phi\in \Sigma_F$ as was to be shown. 
\end{description} \end{proof} \begin{theorem} \label{GenFilThe} Let $\mathrm{L}$ be a relational $\omega$-signature, $\mathcal{C}$ an infinite set of constants disjoint from $\mathrm{L}$, and $S$ be a consistency property consisting of $\mathrm{L}(\mathcal{C})_{\infty \omega}$-sentences. Assume that $F$ is a $V$-generic filter for $\mathbb{P}_S$. Then in $V[F]$ it holds that: \begin{enumerate} \item \label{GenFilThe-1} The domain of $\mathcal{A}_F$ is exactly given by $\bp{[c]_F: c\in\mathcal{C}}$. \item \label{GenFilThe-2} For any $\mathrm{L}(\mathcal{C})_{\infty\omega}$-sentence $\psi$ \[ \mathcal{A}_F \vDash \psi \text{ if } \psi \in \Sigma_F. \] \end{enumerate} \end{theorem} Note the following apparently trivial corollary of the above Theorem: \begin{corollary}\label{cor:consS} Assume $S$ is a consistency property on $\mathrm{L}(\mathcal{C})_{\infty\omega}$ satisfying the assumptions of Thm. \ref{GenFilThe}. Then $s\not\vdash\emptyset$ for any $s\in S$. \end{corollary} \begin{remark} We note that essentially the same Theorem and Corollary have been proved independently by Ben De Bondt and Boban Velickovic (using the language of forcing via partial orders to formulate them). \end{remark} \begin{proof} Assume $s\vdash\emptyset$ for some $s\in S$. Note that if $F$ is $V$-generic for $\mathbb{P}_S$ with $s\in F$, then the proof of $s\vdash\emptyset$ existing in $V$ remains a proof of the same sequent in $V[F]$. By Thm. \ref{GenFilThe}, $\mathcal{A}_F\models\bigwedge s$ holds in $V[F]$. Hence by the soundness of Tarski semantics for $\vdash$ in $V[F]$, we would get that $\mathcal{A}_F\models\psi\wedge\neg\psi$ holds in $V[F]$ for some $\psi$. This is clearly a contradictory statement for $V[F]$. \end{proof} We now prove Thm. \ref{GenFilThe}: \begin{proof} Let (in $V[F]$) $\mathcal{A}_F $ be the structure obtained from $F$ as in Def. \ref{DefStr}. Since $S$ is a dense subset of $\mathbb{P}_S$, $F\cap S$ is a generic filter for $(S,\supseteq)$ as well. 
By Clause \ref{def:ConProInf}(Str.3) \[ D_d = \{p \in S: \exists c \in \mathcal{C}, c = d \in p\} \] is dense in $\mathbb{P}_S$ for any $d\in\mathcal{D}$. Let $p \in F \cap D_d$. Then for some $c \in \mathcal{C}$, $c = d \in p \subset \Sigma_F$ and $[d]_F= [c]_F$. This proves part \ref{GenFilThe-1} of the Theorem. We now establish part \ref{GenFilThe-2}. We have to handle only the cases for $\neg$, $\bigvee$, $\exists$, $\forall$ formulae, since the atomic case and the case $\bigwedge$ can be treated exactly as we did in Fact \ref{fac:TarStrAF} and Lemma \ref{MaxFilThe}. We continue the induction as follows: \begin{description} \item[$\bigvee$] Suppose $\bigvee \Phi \in \Sigma_F$. Let $p_0 \in F$ be such that $\bigvee \Phi \in p_0$. By Clause \ref{def:ConProInf}(Ind.4) \[ D_{\bigvee \Phi} = \{p \in S: \exists \phi \in \Phi, \phi \in p\} \] is dense below $p_0$. Since $F$ is $V$-generic over $\mathbb{P}_S$ and $p_0 \in F$, there exists $p \in F \cap D_{\bigvee \Phi}$. Then for some $\phi \in \Phi$, $\phi \in p \subset \Sigma_F$ and \[ \mathcal{A}_F \vDash \phi, \] proving \[ \mathcal{A}_F \vDash \bigvee \Phi. \] \item[$\exists$] Suppose $\exists \vec{v} \,\phi(\vec{v}) \in \Sigma_F$. Let $p_0 \in F$ such that $\exists \vec{v} \,\phi(\vec{v}) \in p_0$. By Clause \ref{def:ConProInf}(Ind.5) \[ D_{\exists \vec{v} \phi(\vec{v})} = \{p \in S: \exists \vec{c} \in \mathcal{C}^{|\vec{v}|}, \phi(\vec{c}) \in p\} \] is dense below $p_0$. Since $F$ is $V$-generic over $\mathbb{P}_S$ and $p_0 \in F$, there exists $p \in F \cap D_{\exists \vec{v} \phi(\vec{v})}$. Then for some $ \vec{c} \in \mathcal{C}^{|\vec{v}|}$, $\phi(\vec{c}) \in p \subset \Sigma_F$. Therefore \[ \mathcal{A}_F \vDash \phi(\vec{c}), \] hence \[ \mathcal{A}_F \vDash \exists \vec{v} \phi(\vec{v}). \] \item[$\forall$] Suppose $\psi = \forall\vec{x} \phi(\vec{x})$ is in $\Sigma_F$. One needs to check \[ \mathcal{A}_F \vDash \phi(\vec{x})[\vec{x}/\vec{e}] \] for every $\vec{e}=\ap{[e_1]_F,\dots,[e_n]_F}\in \mathcal{A}_F^{n}$. 
Let $\mathcal{E}=\mathcal{C}\cup\mathcal{D}$. Then we have that \[ \mathcal{A}_F=\bp{[e]_F:\, e\in\mathcal{E}}; \] hence \[ \mathcal{A}_F^{<\omega}=\bp{\ap{[e_1]_F,\dots,[e_n]_F}:\, \ap{e_1,\dots,e_n}\in(\mathcal{E}^{<\omega})^{V[F]}}. \] A key observation is that \[ (\mathcal{E}^{<\omega})^{V[F]}=(\mathcal{E}^{<\omega})^{V}. \] This gives that for any $\vec{e}\in\mathcal{A}_F^{<\omega}$ \[ \mathcal{A}_F \vDash \phi(\vec{x})[\vec{x}/\vec{e}] \] if and only if there are $e_1\dots e_n\in\mathcal{E}$ such that $\vec{e}=\ap{[e_1]_F,\dots,[e_n]_F}$ and \[ \mathcal{A}_F \vDash \phi(e_1,\dots,e_n). \] By Clause \ref{def:ConProInf}(Ind.3), assuming $\forall\vec{x}\phi(\vec{x})\in\Sigma_F$, we get that $ \phi(e_1,\dots,e_n)\in\Sigma_F$ for all $e_1,\dots,e_n\in\mathcal{E}$. Hence in $V[F]$ it holds that \[ \mathcal{A}_F \vDash \phi(\vec{x})[\vec{x}/\vec{e}] \] for all $\vec{e}\in\mathcal{A}_F^n$, as was to be shown. \item[$\neg$] Suppose $\neg \phi \in \Sigma_F$. Clause \ref{def:ConProInf}(Ind.1) ensures that $F' = [\Sigma_F \cup \{\phi \neg\}]$ is a prefilter on $\mathbb{P}_S$ containing $F$. By maximality of $F$, $\phi \neg \in \Sigma_F$. We know that $\phi \neg$ and $\neg \phi$ are equivalent (under any reasonable equivalence notion, for example provability, or logical consequence for Boolean valued semantics). Moreover, $\phi \neg$ is either atomic, negated atomic, or has principal connective of type $\bigwedge, \forall, \bigvee$ or $\exists$; in all these cases the proof has already been given. \end{description} The above shows that for all $\mathrm{L}(\mathcal{C})_{\infty\omega}$-sentences $\psi$, if $\psi\in \Sigma_F$ then $\mathcal{A}_F\models\psi$. \end{proof} \begin{remark} One may wonder why the Theorem is proved just for consistency properties for $\mathrm{L}_{\infty\omega}$ and not for arbitrary consistency properties on $\mathrm{L}_{\infty\infty}$. Inspecting the proof one realizes that in the case of $\forall$ we crucially used that $\mathcal{E}^{<\omega}$ is computed the same way in $V[F]$ and in $V$. 
If instead we are working with $\mathrm{L}_{\infty\lambda}$ for $\lambda>\omega$, it could be the case that $\mathcal{E}^{<\lambda}$ as computed in $V[F]$ is a strict superset of $\mathcal{E}^{<\lambda}$ as computed in $V$. In this case there is no reason to expect that \[ \mathcal{A}_F\models\phi(x_i:\,i<\alpha)[x_i/[e_i]_F:i<\alpha] \] when $\forall\vec{x}\phi(\vec{x})\in\Sigma_F$ but $\ap{e_i:i<\alpha}\in \mathcal{E}^{<\lambda}\setminus V$. \end{remark} Note that it may occur that for some $\mathrm{L}(\mathcal{C})_{\infty\omega}$-sentence $\psi$, neither $\psi$ nor $\neg\psi$ belongs to any $r\in S$; hence for some $V$-generic filter $F$ for $\mathbb{P}_S$ it can be the case that $\psi\not\in\Sigma_F$ while $\mathcal{A}_F\models\psi$. For example this occurs because $S$ is a set and there are class many $\mathrm{L}(\mathcal{C})_{\infty\omega}$-sentences. We can prove a partial converse of the second conclusion of Thm. \ref{GenFilThe} which requires a slight strengthening of the notion of consistency property: \begin{definition} \label{def:ConProInfMax} Let $\mathrm{L} = \mathcal{R} \cup \mathcal{D}$, $\mathcal{C}$, $S$ be as in Def. \ref{def:ConProInf} and $\kappa$ be a cardinal greater than or equal to $|\mathcal{C}|$. A consistency property $S$ is \emph{$(\kappa,\lambda)$-maximal} if all its elements consist of $\mathrm{L}(\mathcal{C})_{\kappa\lambda}$-sentences and $S$ satisfies the following clause: \begin{enumerate} \item[(S-Max)] \label{conspropMax} For any $p\in S$ and $\mathrm{L}(\mathcal{C})_{\kappa\lambda}$-sentence $\phi$, either $p\cup\bp{\phi}\in S$ or $p\cup\bp{\neg\phi}\in S$. \end{enumerate} \end{definition} Example \ref{exm:BvalmodConsProp} (given by the finite sets of $\mathrm{L}(M)_{\kappa\lambda}$-sentences which have positive value in some fixed Boolean valued model with domain $M$) gives the standard case of a $(\kappa,\lambda)$-maximal consistency property. \begin{proposition} With the notation of Thm. 
\ref{GenFilThe}, assume $S$ is $(\kappa,\omega)$-maximal for some $\kappa\geq|\mathcal{C}|$. Then for any $\mathrm{L}(\mathcal{C})_{\kappa\omega}$-sentence $\psi$ \[ \mathcal{A}_F \vDash \psi \text{ if and only if } \psi \in \Sigma_F. \] \end{proposition} \begin{proof} We need to prove the ``only if'' part of the implication assuming $S$ is $(\kappa,\omega)$-maximal. Suppose $\psi$ is an $\mathrm{L}(\mathcal{C})_{\kappa\omega}$-sentence not in $\Sigma_F$. By $(\kappa,\omega)$-maximality of $S$ we get that \[ D_\psi=\bp{r\in S:\psi\in r\text{ or }\neg\psi\in r} \] is dense in $\mathbb{P}_S$. Since $F$ is $V$-generic for $\mathbb{P}_S$, we get that $F\cap D_\psi$ is non-empty. Hence either $\psi\in \Sigma_F$ or $\neg\psi\in\Sigma_F$, but the first is not the case by hypothesis. Then $\neg \psi \in \Sigma_F$ and by Theorem \ref{GenFilThe} $\mathcal{A}_F\models\neg\psi$, i.e.\ $\mathcal{A}_F\not\models\psi$. The desired thesis follows. \end{proof} Let us recall one result about $<\kappa$-cc forcing notions. Proposition \ref{StaHk2} appears in \cite{GoldsternTools}. \begin{proposition} \label{StaHk2} Let $\kappa$ be a regular cardinal and $\mathbb{P} \subset H_\kappa$ a forcing notion with the $<\kappa$-cc. Suppose $p \in \mathbb{P}$ and $\dot{\tau}$ is a $\mathbb{P}$-name such that $p \Vdash \dot{\tau} \in H_{\check{\kappa}}$. Then there exists $\dot{\sigma} \in H_\kappa$ such that $p \Vdash \dot{\sigma} = \dot{\tau}$.
\end{proposition} \begin{definition}\label{def:BmodelAS} Given a relational $\omega$-signature $\mathrm{L}=\mathcal{R}\cup\mathcal{D}$, an infinite set of constants $\mathcal{C}$ disjoint from $\mathrm{L}$, and a consistency property $S \subset [\mathrm{L}(\mathcal{C})_{\infty \omega}]^{< \omega}$, let \[ \mathcal{A}_S=(A_S,R_S:R\in\mathcal{R},d_S: d\in\mathcal{D}\cup\mathcal{C}) \] be defined as follows: \begin{itemize} \item $A_S=\bp{\sigma \in V^{\RO(\mathbb{P}_S)} \cap H_\mu :\, \Qp{\sigma\in A_{\dot{G}}}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}= 1_{\RO(\mathbb{P}_S)}}$, where $\mu$ is a regular cardinal big enough so that $\mathrm{L}\subseteq H_\mu$ and for any $\sigma \in V^{\RO(\mathbb{P}_S)}$ such that \[ \Qp{\sigma \in A_{\dot{G}}}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)} = 1_{\RO(\mathbb{P}_S)}, \] one can find $\tau \in V^{\RO(\mathbb{P}_S)} \cap H_\mu$ with \[ \Qp{\tau = \sigma}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}=1_{\RO(\mathbb{P}_S)}; \] \item $\Qp{R_S(\sigma_1,\dots,\sigma_n)}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S}= \Qp{\mathcal{A}_{\dot{G}}\models R_{\dot{G}}(\sigma_1,\dots,\sigma_n) }^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}$ for $R\in\mathcal{R}$; \item for $d\in\mathcal{D}\cup\mathcal{C}$, $d_S=\check{d}$. \end{itemize} \end{definition} \begin{theorem}\label{thm:mainthmAF} Let $\mathrm{L}$ be a relational $\omega$-signature, $\mathcal{C}$ be a set of constants disjoint from $\mathrm{L}$ of size at most $\kappa$ and $S \subset [\mathrm{L}(\mathcal{C})_{\kappa\omega}]^{< \omega}$ be a consistency property. Then $\mathcal{A}_S$ is a $\RO(\mathbb{P}_S)$-valued model with the mixing property, and for every $s\in S$ \[ \Qp{\bigwedge s}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}= \Qp{\mathcal{A}_{\dot{G}}\models\bigwedge s}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}. 
\] \end{theorem} \begin{corollary} \label{Boolean MET} Let $\mathrm{L}$ be a relational $\omega$-signature, $\mathcal{C}$ be a set of constants disjoint from $\mathrm{L}$ of size at most $\kappa$ and $S \subset [\mathrm{L}(\mathcal{C})_{\kappa\omega}]^{< \omega}$ be a consistency property. Then for any $s \in S$ there is a $\mathsf{B}$-Boolean valued model $\mathcal{M}$ with the mixing property in which \[ \Qp{\bigwedge s}_\mathsf{B}^\mathcal{M} = 1_\mathsf{B}. \] \end{corollary} We first prove the Corollary assuming the Theorem. \begin{proof} Given $s\in S$, we let $\bool{B}=\RO(\mathbb{P}_S)\restriction \Reg{N_s}$. Since \[ s\Vdash_{\mathbb{P}_S}\mathcal{A}_{\dot{G}}\models \bigwedge s, \] we get that $\Reg{N_s}\leq \Qp{\bigwedge s}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}$. In particular, if we consider $\mathcal{A}_S$ as a $\bool{B}$-valued model by evaluating all atomic formulae $R(\vec{\sigma})$ by $\Qp{R(\vec{\sigma})}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}\wedge \Reg{N_s}$, we get that $ \Qp{\bigwedge s}^{\mathcal{A}_S}_{\bool{B}}=1_{\bool{B}}$. Note that $\bool{B}$ is not the one point Boolean algebra, since $\Reg{N_s}\neq\emptyset=0_{\RO(\mathbb{P}_S)}$ for all $s\in S$. It is also immediate to check that $\mathcal{A}_S$ retains the mixing property when seen as a $\bool{B}$-valued model. \end{proof} We now prove Thm. \ref{thm:mainthmAF}. We first need to extend the forcing relation to formulae of infinitary logic. \begin{remark} Given a complete Boolean algebra $\bool{B}$, an $\in$-formula $\phi(v_1,\ldots,v_n)$ for $\mathrm{L}_{\infty\omega}$ (for $\mathrm{L}=\bp{\in}$), and any family $\tau_1,\ldots,\tau_n \in V^\mathsf{B}$, $\Qp{\phi(\tau_1,\ldots,\tau_n)}^{V^{\bool{B}}}_\mathsf{B}$ denotes the $\mathsf{B}$-value of $\phi(\tau_1,\ldots,\tau_n)$ in the Boolean valued model $V^\mathsf{B}$. The definition of $\Qp{\phi(\tau_1,\ldots,\tau_n)}^{V^{\bool{B}}}_\mathsf{B}$ is by induction on the complexity of $\phi$.
It is the standard one for the atomic formulae $\Qp{\tau\in\sigma}^{V^{\bool{B}}}_\bool{B}$ and $\Qp{\tau=\sigma}^{V^{\bool{B}}}_\bool{B}$. We extend it to all of $\mathrm{L}_{\infty\omega}$ according to Def. \ref{def:boolvalsem}. \end{remark} \begin{proof} We first establish that $\mathcal{A}_S$ has the mixing property. Let $\bp{\sigma_a:a\in A}$ be a family of elements of $A_S$ indexed by an antichain $A$ of $\RO(\mathbb{P}_S)$. Find (by the mixing property of $V^{\RO(\mathbb{P}_S)}$) $\sigma\in V^{\RO(\mathbb{P}_S)}$ such that $\Qp{\sigma=\sigma_a}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}\geq a $ for all $a\in A$. By choice of $A_S$ we can suppose that $\sigma\in A_S$. By definition of $\mathcal{A}_S$ \[ \Qp{\sigma=\sigma_a}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}=\Qp{\sigma=\sigma_a}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}\geq a \] for all $a\in A$. Hence $\sigma$ is a mixing element for the family $\bp{\sigma_a:a\in A}$. Now we prove the second part of the Theorem. One needs to check that for any $\mathrm{L}_{\kappa \omega}$-formula $\phi(\vec{v})$ and $\sigma_1,\ldots,\sigma_n \in A_S$, \[ \Qp{\phi(\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \Qp{\mathcal{A}_{\dot{G}} \vDash \phi(\vec{\sigma})}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}. \] It is clear that this allows one to prove \[ \Qp{\bigwedge s}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}= \Qp{\mathcal{A}_{\dot{G}} \vDash\bigwedge s}^{V^{\RO(\mathbb{P}_S)}}_{\RO(\mathbb{P}_S)}, \] letting $\phi=\bigwedge s$. We prove the equality by induction on the complexity of formulae. \begin{itemize} \item For atomic sentences this follows by definition.
\item For $\neg$, \[ \Qp{\neg \phi}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \neg \Qp{\phi}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \neg \Qp{\mathcal{A}_{\dot{G}} \vDash \phi}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \Qp{\mathcal{A}_{\dot{G}} \not\vDash \phi}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \Qp{\mathcal{A}_{\dot{G}} \vDash \neg \phi}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}}. \] \item For $\bigwedge$, \[ \Qp{\bigwedge \Phi}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \bigwedge_{\phi \in \Phi} \Qp{\phi}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \bigwedge_{\phi \in \Phi} \Qp{\mathcal{A}_{\dot{G}} \vDash \phi}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \Qp{\mathcal{A}_{\dot{G}} \vDash \bigwedge \Phi}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}}. \] \item For $\exists$, \begin{gather*} \Qp{\exists v \phi(v,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \bigvee_{\tau \in A_S} \Qp{\phi(\tau,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S} = \bigvee_{\tau \in A_S} \Qp{\mathcal{A}_{\dot{G}} \vDash \phi(\tau,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} \leq \\ \bigvee_{\tau \in V^{\RO(\mathbb{P}_S)}} \Qp{\mathcal{A}_{\dot{G}} \vDash \phi(\tau,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \Qp{\mathcal{A}_{\dot{G}} \vDash \exists v \phi(v,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \\ \Qp{\mathcal{A}_{\dot{G}} \vDash \phi(\tau_0,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{V^{\RO(\mathbb{P}_S)}} = \Qp{\phi(\tau_0,\vec{\sigma})}^{\mathcal{A}_S}_{\RO(\mathbb{P}_S)}\leq \Qp{\exists v \phi(v,\vec{\sigma})}_{\RO(\mathbb{P}_S)}^{\mathcal{A}_S}, \end{gather*} where $\tau_0 \in A_S$ is obtained by fullness of $V^{\RO(\mathbb{P}_S)}$ and can be assumed to be in $H_\mu$ by Proposition \ref{StaHk2}; the equality in the last line holds by the inductive assumptions.
\end{itemize} \end{proof} Let us briefly remark why genericity is needed for dealing with formulae of type $\neg$, $\bigvee$ and $\exists$ in proving the model existence Theorem. The case of negated formulae is dealt with by taking advantage of Def. \ref{MovNegIns}. If the negated formula is atomic, its truth value follows by the definition of $\mathcal{A}_F$. For negated formulae $\neg \phi$ with $\phi$ non-atomic, by repeatedly moving a negation inside we find a logically equivalent formula $\psi$ where negations appear only at the atomic level of the structural tree of $\psi$; at that level the truth values are determined directly by $\Sigma_F$. In particular the operation $\phi\mapsto\phi\neg$ allows one to prove Thm. \ref{thm:mainthmAF} by an induction in which one only deals with the logical symbols $\bigwedge,\forall,\bigvee$ and $\exists$. Genericity comes into play when dealing with formulae whose principal connective is $\bigvee$ or $\exists$. For both connectives the role of genericity in the proof of the corresponding inductive step is similar, so we only analyze the first one. The key point is that the structure $\mathcal{A}_F$ associated to a maximal filter $F$ on $\mathbb{P}_S$ is decided by which atomic formulae belong to $\Sigma_F$: any maximal consistent set of atomic formulae for $\mathrm{L}$ defines an $\mathrm{L}$-structure $\mathcal{A}_F$ by Fact \ref{fac:TarStrAF}. Now if $F$ is maximal but not $V$-generic, it may miss the dense set $D_{\bigvee \Phi}$ for some $\bigvee\Phi\in\Sigma_F$. In that case $[\Sigma_F \cup \{\phi\}]^{< \omega}$ is not a prefilter on $\mathbb{P}_S$ for any $\phi\in\Phi$, by maximality of $F$. Supposing this occurs for some $\bigvee\Phi$ which is a disjunction of atomic or negated atomic formulae, we get that $\bp{\phi\neg:\phi\in \Phi}\subseteq \Sigma_F$, again by maximality of $F$. Hence $\mathcal{A}_F\not\vDash\bigvee\Phi$ even if $\bigvee\Phi\in \Sigma_F$.
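To make this failure concrete, here is a schematic illustration (under the assumption, made purely for the sake of the example, that the finite sets displayed below all belong to $S$). Let $\Phi=\bp{\phi_n:\, n<\omega}$ be a countable set of atomic sentences such that every condition of the form $\bp{\bigvee\Phi,\phi_0\neg,\dots,\phi_k\neg}$ is in $S$. These conditions are pairwise compatible, hence they generate a prefilter, which can be extended by Zorn's Lemma to a maximal filter $F$ with
\[
\bigvee\Phi\in\Sigma_F \quad\text{and}\quad \phi_n\neg\in\Sigma_F\ \text{for all}\ n<\omega,
\]
since each condition mentions only finitely many of the $\phi_n$. By Clause \ref{conspropCon}(Con) no condition in $F$ can contain any $\phi_n$, so $F$ misses $D_{\bigvee\Phi}$ and $\mathcal{A}_F\not\vDash\bigvee\Phi$, even though $\bigvee\Phi\in\Sigma_F$.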
\begin{remark} When working with a consistency property $S$ for $\mathrm{L}(\mathcal{C})_{\kappa \omega}$, there is a canonical way of extending it to a $(\kappa,\omega)$-maximal one. Consider the Boolean valued model $\mathcal{A}_S$ of Def. \ref{def:BmodelAS}, and let $\bool{B}=\RO(\mathbb{P}_S)$. Then \[ S \subset M_S = \{t \in [\mathrm{L}(\mathcal{C} \cup \mathcal{A}_S)_{\kappa \omega}]^{<\omega} : \Qp{\bigwedge t}^{\mathcal{A}_S}_{\bool{B}} > 0_{\mathsf{B}}\} \] and $M_S$ is a $(\kappa,\omega)$-maximal consistency property for $\mathrm{L}(\mathcal{C} \cup \mathcal{A}_S)_{\kappa \omega}$. \end{remark} \begin{remark} \label{Coll} Note that in Def. \ref{def:ConProInf} the size of $\mathcal{C}$ can vary. While Thm. \ref{GenFilThe} holds for any size of $\mathcal{C}$, some sizes automatically collapse cardinals. Consider for example $\mathrm{L} = \{d_\alpha : \alpha < \omega_1\}$ and $\mathcal{C} = \{c_n : n < \omega\}$ countable. Let $S$ denote the set whose elements are the $s \in [\mathrm{L}(\mathcal{C})_{\omega_2 \omega}]^{< \omega}$ such that for some injective interpretation \[ c_{i_1} \mapsto \alpha_{i_1},\ldots,c_{i_n} \mapsto \alpha_{i_n}, \ \alpha_{i_j} < \omega_1, \] of the constants from $\mathcal{C}$ appearing in $s$, \[ (\omega_1,=,c_{i_k} \mapsto \alpha_{i_k},d_\alpha \mapsto \alpha) \vDash s. \] $S$ is readily checked to be a consistency property. Consider $\mathcal{A}_G \in V[G]$ for $G$ $V$-generic for $\mathbb{P}_S$. It is a model of $\bigwedge_{\alpha \neq \beta\in\omega_1^V} d_\alpha \neq d_\beta$; furthermore the interpretation maps \begin{align*} f: \omega_1^V &\rao \bp{[d]_G:d\in\mathcal{D}} \\ \alpha &\mapsto [d_\alpha]_G\\ &\\ g: \omega &\rao \bp{[c_n]_G: n < \omega} \\ n &\mapsto [c_n]_G \end{align*} are both injective. This entails that the map sending $\alpha$ to $n$ whenever $\bp{d_\alpha=c_n}\in G$ is a well-defined injection of $\omega_1^V$ into $\omega$ in $V[G]$. Therefore $\omega_1^V$ is collapsed.
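Spelling out the collapsing map (a routine density argument, included for convenience): in $V[G]$ one can define
\[
h\colon \omega_1^V\to\omega, \qquad \alpha\mapsto \text{the unique } n<\omega \text{ such that } d_\alpha=c_n\in\Sigma_G.
\]
The map $h$ is total because for each $\alpha<\omega_1^V$ the set $\bp{t\in S:\, d_\alpha=c_n\in t\ \text{for some}\ n<\omega}$ is dense in $\mathbb{P}_S$: any injective interpretation witnessing $t\in S$ either already sends some $c_n$ to $\alpha$, or can be extended by sending a $c_n$ not occurring in $t$ to $\alpha$. Uniqueness of $n$ and injectivity of $h$ follow from the injectivity of $g$ and $f$ respectively.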
\end{remark} \section{Mansfield's Model Existence Theorem}\label{sec:mansmodexthm} We now prove Mansfield's Model Existence Theorem. \begin{theorem} \label{ManModExi} Let $\mathrm{L}$ be an $\omega$-signature and $S \subset P(\mathrm{L}(\mathcal{C})_{\kappa \lambda})$ a consistency property. Then for any $s \in S$ there exists a Boolean valued model $\mathcal{M}$ in which all sentences from $s$ are valid. \end{theorem} \begin{proof} Fix $s_0 \in S$. Consider $\mathbb{P}_S$ the forcing notion associated to $S$, $\mathsf{B} = \RO(\mathbb{P}_S)$ the corresponding Boolean completion, $\mathbb{P}_S \upharpoonleft s_0 = \{t \in S : t \leq s_0\}$ the restriction to conditions extending $s_0$, and $\mathsf{B} \upharpoonleft s_0 = \{t \in \mathsf{B} : t \leq \Reg{N_{s_0}}\} = \RO(\mathbb{P}_S \upharpoonleft s_0)$. The Boolean valued model $\mathcal{M}$ is constructed with truth values in $\mathsf{B} \upharpoonleft s_0$ and with the set of constants $\mathcal{C} \cup \mathcal{D}$ as base set. Each constant is interpreted by itself. Mimicking Mansfield's proof, we keep his notation whenever possible. For any sentence $\phi$ define \[ L(\phi) = \bigvee \{\Reg{N_t} : \phi \in t\}, \] and for $\phi$ atomic set \[ \Qp{\phi} = L(\phi). \] The main technical result in \cite{MansfieldConPro} is Lemma 3. Its analogue (for our notion of consistency property) goes as follows: \begin{claim} \label{Cla1} If for any $t \leq s$, $t \cup \{\phi\} \in S$, then $N_s \subseteq L(\phi)$. In particular, $\Reg{N_s} \leq L(\phi)$. \end{claim} \begin{proof} Suppose $N_s \not\subseteq L(\phi)$. Since the family $\{N_t : t \in \mathbb{P}_S \upharpoonleft s_0\}$ is a basis of $\mathsf{B}$, basic topological facts yield that there exists some $t \in \mathbb{P}_S \upharpoonleft s_0$ such that \[ N_t \subseteq N_s \cap \neg L(\phi)=\Reg{\bigcup\bp{N_r:\, N_r\cap L(\phi)=\emptyset}}.
\] \begin{itemize} \item Since $N_t \subseteq N_s$ we have $t \leq s$ and the hypothesis ensures $t \cup \{\phi\} \in S$. Then, by definition of $L$, $N_{t \cup \{\phi\}} \subseteq L(\phi)$. \item Since $N_t \subseteq \neg L(\phi)$ and $N_{t \cup \{\phi\}} \subseteq N_t$, $N_{t \cup \{\phi\}} \subseteq \neg L(\phi)$. \end{itemize} The two statements are incompatible. Hence $N_s \subseteq L(\phi)$. \end{proof} \begin{claim} \label{Cla2} For any $\phi$, $L(\phi) \leq \Qp{\phi}$. \end{claim} \begin{proof} We proceed by induction on the complexity of formulae. The thesis holds for atomic formulae by definition. The other cases are dealt with as follows: \begin{description} \item[$\neg$] Since $S$ is closed under moving a negation inside, we only need to care about negations acting on atomic formulae. Suppose $\phi$ is atomic. We have \[ \Qp{\neg \phi}_{\mathsf{B}_S \upharpoonleft s_0} = \neg \Qp{\phi}_{\mathsf{B}_S \upharpoonleft s_0} = \neg L(\phi) = \bigwedge \{\neg \Reg{N_t} : \phi \in t\}. \] We need to prove \[ L(\neg \phi) = \bigvee \{\Reg{N_p} : \neg \phi \in p\} \leq \bigwedge \{\neg \Reg{N_t} : \phi \in t\}. \] Fix $t$ containing $\phi$. For any $p \ni \neg \phi$, $p$ and $t$ are incompatible by Clause \ref{conspropCon}(Con). Remark \ref{rema1} ensures $\Reg{N_p} \leq \neg \Reg{N_t}$. Then \[ \bigvee_{p \ni \neg \phi} \Reg{N_p} \leq \neg \Reg{N_t}. \] Since this is true for any $t \ni \phi$, \[ \bigvee_{p \ni \neg \phi} \Reg{N_p} \leq \bigwedge_{t \ni \phi} \neg \Reg{N_t}. \] \item[$\bigwedge$] Suppose by induction that the result holds for any $\phi \in \Phi$. Let $\bigwedge \Phi \in s$. Then for any $t$ extending $s$ and any $\phi \in \Phi$, $t \cup \{\phi\} \in S$. By Claim \ref{Cla1}, $\Reg{N_s} \leq L(\phi)$ for any $\phi \in \Phi$. By the induction hypothesis $\Reg{N_s} \leq L(\phi) \leq \Qp{\phi}$. As this holds for any $\phi \in \Phi$, $\Reg{N_s} \leq \bigwedge_{\phi \in \Phi} \Qp{\phi} = \Qp{\bigwedge \Phi}$.
This holds for any $s$ such that $\bigwedge \Phi \in s$, hence \[ L(\bigwedge \Phi) = \bigvee \{\Reg{N_s} : \bigwedge \Phi \in s\} \leq \Qp{\bigwedge \Phi}. \] \item[$\bigvee$] Suppose the result holds for any $\phi \in \Phi$. If $L(\bigvee \Phi) \nleq \Qp{\bigvee \Phi}$, then $L(\bigvee \Phi) \wedge \neg \Qp{\bigvee \Phi} \neq \emptyset$. Therefore there exists some $t \leq s_0$ such that $N_t \subseteq L(\bigvee \Phi)$ and $N_t \subseteq \neg \Qp{\bigvee \Phi}$. \begin{itemize} \item By the first inclusion, since $N_t$ is open and $\bigcup \{\Reg{N_p} : \bigvee \Phi \in p\}$ is dense in $\bigvee \{\Reg{N_p} : \bigvee \Phi \in p\}$, there exists some $p'$ containing $\bigvee \Phi$ such that $N_t \cap \Reg{N_{p'}}$ is non-empty. Hence we can find $p \leq t,p'$. Since $\bigvee \Phi \in p'$, $\bigvee \Phi \in p$. \item By the second inclusion (and $p \leq t$), \[ N_p \subseteq N_t \subseteq \neg \bigvee_{\phi \in \Phi} \Qp{\phi} = \bigwedge_{\phi \in \Phi} \neg \Qp{\phi}. \] \end{itemize} (Ind.4) ensures that for some $\phi_0 \in \Phi$, $q = p \cup \{\phi_0\} \in S$. By the induction hypothesis, $L(\phi_0) \leq \Qp{\phi_0}$, hence $\neg \Qp{\phi_0} \leq \neg L(\phi_0)$. Therefore \[ \hspace{1.3cm} N_q \subseteq N_p \subseteq \bigwedge_{\phi \in \Phi} \neg \Qp{\phi} \leq \neg \Qp{\phi_0} \leq \neg L(\phi_0) = \bigwedge \{\neg \Reg{N_t} : \phi_0 \in t\} \subseteq \neg \Reg{N_q}, \] a contradiction. \item[$\forall$] Let $s \in S$ contain $\forall \vec{v} \phi(\vec{v})$. Then for any $t$ extending $s$ and any $\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}$, $t \cup \{\phi(\vec{c})\} \in S$. By Claim \ref{Cla1} and the induction hypothesis, \[ \Reg{N_s} \leq L(\phi(\vec{c})) \leq \Qp{\phi(\vec{c})} \] for any $\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}$. Then $\Reg{N_s} \leq \bigwedge_{\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}} \Qp{\phi(\vec{c})} = \Qp{\forall \vec{v} \phi(\vec{v})}$. All this was done for any $s$ such that $\forall \vec{v} \phi(\vec{v}) \in s$.
Then we may take the sup over such sets to obtain \[ L(\forall \vec{v} \phi(\vec{v})) = \bigvee \{\Reg{N_s} : \forall \vec{v} \phi(\vec{v}) \in s\} \leq \Qp{\forall \vec{v} \phi(\vec{v})}. \] \item[$\exists$] Suppose the result holds for any $\phi(\vec{c})$. If $L(\exists \vec{v} \phi(\vec{v})) \nleq \Qp{\exists \vec{v} \phi(\vec{v})}$, then $L(\exists \vec{v} \phi(\vec{v})) \wedge \neg \Qp{\exists \vec{v} \phi(\vec{v})} \neq \emptyset$. Then there exists some $t \leq s_0$ such that $N_t \subseteq L(\exists \vec{v} \phi(\vec{v}))$ and $N_t \subseteq \neg \Qp{\exists \vec{v} \phi(\vec{v})}$. \begin{itemize} \item By the first inclusion, since $N_t$ is open and $\bigcup \{\Reg{N_p} : \exists \vec{v} \phi(\vec{v}) \in p\}$ is dense in $\bigvee \{\Reg{N_p} : \exists \vec{v} \phi(\vec{v}) \in p\}$, there exists some $p'$ containing $\exists \vec{v} \phi(\vec{v})$ such that $N_t \cap \Reg{N_{p'}}$ is non-empty. Hence we can find $p \leq t,p'$ with $\exists \vec{v} \phi(\vec{v}) \in p$. \item By the second inclusion, \[ N_p \subseteq N_t \subseteq \neg \bigvee_{\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}} \Qp{\phi(\vec{c})} = \bigwedge_{\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}} \neg \Qp{\phi(\vec{c})}. \] \end{itemize} (Ind.5) ensures that for some $\vec{c}_0 \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}$, $q = p \cup \{\phi(\vec{c}_0)\} \in S$. By the induction hypothesis, $L(\phi(\vec{c}_0)) \leq \Qp{\phi(\vec{c}_0)}$, hence $\neg \Qp{\phi(\vec{c}_0)} \leq \neg L(\phi(\vec{c}_0))$. Therefore \[ N_q \subseteq N_p \subseteq \bigwedge_{\vec{c} \in (\mathcal{C} \cup \mathcal{D})^{\vec{v}}} \neg \Qp{\phi(\vec{c})} \subseteq \neg \Qp{\phi(\vec{c}_0)} \subseteq \neg L(\phi(\vec{c}_0)) = \bigwedge \{\neg \Reg{N_t} : \phi(\vec{c}_0) \in t\} \subseteq \neg \Reg{N_q}, \] a contradiction. \end{description} \end{proof} Now we can check that $\mathcal{M}$ is a Boolean valued model.
\begin{itemize} \item Since for any $t$ in $S$ and any $c \in \mathcal{C} \cup \mathcal{D}$, $t \cup \{c = c\} \in S$, $L(c = c) = 1_{\mathsf{B}_S \upharpoonleft s_0}$. \item Let $c = d \in s$. Then for any $t$ extending $s$, $t \cup \{d = c\} \in S$, hence $\Reg{N_s} \leq L(d = c)$. Since the previous holds for any $s$ containing $c = d$, \[ \bigvee \{\Reg{N_s} : c = d \in s\} \leq L(d = c). \] Since $c = d$ and $d = c$ are atomic, \[ \Qp{c = d} = L(c = d) = \bigvee \{\Reg{N_s} : c = d \in s\} \leq L(d = c) = \Qp{d = c}. \] \item Let $c_1 = d_1, \ldots, c_n = d_n, \phi(c_1,\ldots,c_n) \in s$ with $\phi$ atomic. Then for any $t$ extending $s$, $t \cup \{\phi(d_1,\ldots,d_n)\} \in S$, hence $\Reg{N_s} \leq L(\phi(d_1,\ldots,d_n))$. Since the previous holds for any $s$ containing $c_1 = d_1, \ldots, c_n = d_n, \phi(c_1,\ldots,c_n)$, \[ \bigvee \{\Reg{N_s} : c_1 = d_1, \ldots, c_n = d_n, \phi(c_1,\ldots,c_n) \in s\} \leq L(\phi(d_1,\ldots,d_n)). \] Since $\phi$ and $c_i = d_i$ are atomic, \begin{align*} \Qp{c_1 = d_1} \wedge \ldots \wedge & \Qp{c_n = d_n} \wedge \Qp{\phi(c_1,\ldots,c_n)} = \\ L(c_1 = d_1) \wedge \ldots \wedge & \ L(c_n = d_n) \wedge L(\phi(c_1,\ldots,c_n)) = \\ \bigvee \{\Reg{N_s} : c_1 = d_1, \ldots, c_n = d_n, \ &\phi(c_1,\ldots,c_n) \in s\} \leq L(\phi(d_1,\ldots,d_n)) = \\ & \Qp{\phi(d_1,\ldots,d_n)}. \end{align*} To prove the equality between lines two and three it is enough to check $L(\phi) \wedge L(\psi) = \bigvee \{\Reg{N_s} : \phi,\psi \in s\}$. By definition of $L$, \begin{align*} L(\phi) \wedge L(\psi) & = \\ \bigvee \{\Reg{N_s} : \phi \in s\} \wedge \bigvee \{\Reg{N_t} : \psi \in t\} & = \bigvee \{\bigvee \{\Reg{N_s} : \phi \in s\} \wedge \Reg{N_t} : \psi \in t\} = \\ \bigvee \{\bigvee \{\Reg{N_s} \wedge \Reg{N_t} : \phi \in s\} : \psi \in t\} & = \bigvee \{\Reg{N_s} \wedge \Reg{N_t} : \phi \in s \wedge \psi \in t\} = \\ \bigvee \{\Reg{N_q} :\ & \phi,\psi \in q\}. 
\end{align*} \end{itemize} It remains to conclude that $s_0$ is valid in $\mathcal{M}$. Now for any $\phi \in s_0$ and $t \leq s_0$, $\phi \in t$, hence Claim \ref{Cla2} ensures $1_{\mathsf{B}_S \upharpoonleft s_0} = \Reg{N_{s_0}} \leq L(\phi) \leq \Qp{\phi}_\mathsf{B}$ for any such $\phi$. \end{proof} \begin{remark} We note that a key assumption for Mansfield's result is that $\mathrm{L}$ is a first order signature. This is crucially used in the proof that \ref{eqn:subslambda} holds for the relation symbols of $\mathrm{L}$ in the structure $\mathcal{M}$: since these relation symbols are finitary, in the proof above we just had to distribute finitely many infinite disjunctions; this distributive law holds in any complete Boolean algebra. If we dealt with a relational $\omega_1$-signature we might have had to distribute countably many infinitary disjunctions to establish \ref{eqn:subslambda}; this is possible only under very special circumstances on $\mathbb{P}_S$. We do not know whether this result can be established for arbitrary $\lambda$-signatures. We conjecture this is not the case. \end{remark} \begin{remark} The model produced by Mansfield's theorem does not satisfy the mixing property in general. Consider again the setting in Remark \ref{Coll}. Fix $\alpha < \omega_1$. For each $n < \omega$ define \[ \phi_n : \hspace{0.3cm} c_n = d_\alpha \ \wedge \bigwedge_{m < n} c_m \neq d_\alpha. \] \noindent Then $\{\Reg{N_{\{\phi_n\}}} : n < \omega\}$ is an antichain. Assign $c_{n-1}$ to each $\Reg{N_{\{\phi_n\}}}$. We show that for no element $m$ in the structure provided by Mansfield's theorem applied to the collapsing consistency property we have \[ \Reg{N_{\{\phi_n\}}} \leq \Qp{m = c_{n-1}} = \bigvee \{\Reg{N_s} : m = c_{n-1} \in s\}. \] There are three possibilities for $m$.
\begin{itemize} \item $m = c_{n_0}$: Note that because the interpretations generating the consistency property are injective, the set \[ \{t \in S : \bigwedge_{n \neq m} c_n \neq c_m \in t\} \] \noindent is dense. Then $\Qp{c_{n_0} = c_{n_0-1}} = 0$ and it cannot be that \[ \Reg{N_{\{\phi_{n_0}\}}} \leq \Qp{c_{n_0} = c_{n_0-1}}. \] \item $m = d_\alpha$: Take any $n < \omega$. The set $\{t \in S : c_{n-1} \neq d_\alpha \in t\}$ is dense below $\{\phi_n\}$. Then we cannot have $\Reg{N_{\{\phi_{n}\}}} \leq \bigvee \{\Reg{N_t} : d_\alpha = c_{n-1} \in t\}$. \item $m = d_\beta$, $\beta \neq \alpha$: Take any $n < \omega$. Because the constant $d_\beta$ does not appear in $\{\phi_n\}$ and the sentence $\phi_n$ only forces $c_{n-1}$ not to be $d_\alpha$, we can suppose that the interpretation that generates $\{\phi_n\}$ is such that $c_{n-1}$ is interpreted differently from $d_\beta$, proving $\{\phi_n, c_{n-1} \neq d_\beta\} \in S$. Then we cannot have $\Reg{N_{\{\phi_{n}\}}} \leq \bigvee \{\Reg{N_t} : d_\beta = c_{n-1} \in t\}$ since $\Reg{N_{\{\phi_{n}, c_{n-1} \neq d_\beta\}}} \leq \Reg{N_{\{\phi_{n}\}}}$. \end{itemize} \end{remark} \section{Proofs of model theoretic results}\label{sec:proofmodthres} In this section we prove that $\mathrm{L}_{\infty \omega}$ with Boolean valued semantics has a completeness theorem, the Craig interpolation property and also an omitting types theorem. The last two results generalize to $\mathrm{L}_{\infty \omega}$ results obtained in \cite{KeislerInfLog} for $\mathrm{L}_{\omega_1 \omega}$ by replacing Tarski semantics with Boolean valued semantics. We also provide the missing details for the general $\mathrm{L}_{\infty \infty}$ results. \subsection{Proof of Thm. \ref{them:boolcompl}} \begin{proof} The implications \ref{thm:boolcomp3}$\Rightarrow$\ref{thm:boolcomp2} and \ref{thm:boolcomp2}$\Rightarrow$\ref{thm:boolcomp1} are either standard or trivial. Assume \ref{thm:boolcomp3} fails; we show that \ref{thm:boolcomp1} fails as well.
Assume $T\not\vdash S$ with $T,S$ sets of $\mathrm{L}_{\infty\omega}$-formulae. Let $\mathcal{C}$ be an infinite set of fresh constants and let $R$ be the family of finite sets $r$ of $\mathrm{L}(\mathcal{C})_{\infty\omega}$-sentences such that \begin{itemize} \item $r\cup T\not\vdash S$, \item any $\phi\in r$ contains only finitely many constants from $\mathcal{C}$. \end{itemize} Provided $R$ is a consistency property, this gives that $\mathcal{A}_R$ witnesses that $T\not\models_{\mathrm{Sh}}S$ as: \begin{itemize} \item $\Qp{\psi}^{\mathcal{A}_R}=1_{\RO(\mathbb{P}_R)}$ for all $\psi\in T$, since for any $\psi\in T$ \[ E_\psi=\bp{r\in R:\, \psi\in r} \] is dense in $\mathbb{P}_R$; \item $\Qp{\phi}^{\mathcal{A}_R}=0_{\RO(\mathbb{P}_R)}$ for all $\phi\in S$, since for any such $\phi$ \[ F_\phi=\bp{r\in R:\, \neg\phi\in r} \] is dense in $\mathbb{P}_R$: note that $r\cup\bp{\neg\phi}\cup T\vdash S$ if and only if $r\cup T\vdash S\cup\bp{\phi}$, which, if $\phi\in S$, amounts to saying that $r\not\in R$. \end{itemize} Now we show that $R$ is a consistency property: \begin{itemize} \item[(Con)] Trivial by definition of $R$, since the calculus is sound. \item[(Ind.1)] Trivial since for any $\neg\phi$ in $r$, $\bigwedge r\vdash \bigwedge (r\cup\bp{\phi\neg})$ and conversely. \item[(Ind.2)] Trivial since $\bigwedge r\vdash\bigwedge( r\cup\bp{\phi})$ and conversely if $\bigwedge\Phi\in r$ and $\phi\in \Phi$. \item[(Ind.3)] Trivial since $\bigwedge r\vdash\bigwedge( r\cup\bp{\phi(c)})$ and conversely if $\forall v\,\phi(v)\in r$. \item[(Ind.4)] Let $\bigvee \Sigma \in r\in R$. Since $r\in R$, $r\cup T\not\vdash S$. By contradiction suppose that for all $\sigma\in\Sigma$, $r\cup\bp{\sigma}\cup T\vdash S$. Then, by the left $\bigvee$-rule of the calculus, $r\cup\bp{\bigvee\Sigma}\cup T\vdash S$. This contradicts $r\in R$, since $r=r\cup\bp{\bigvee\Sigma}$. \item[(Ind.5)] Suppose $\exists v\, \varphi(v) \in r$. Pick $c\in\mathcal{C}$ which does not appear in any formula in $r$; such a $c$ exists by definition of $R$.
Suppose towards a contradiction that $r\cup\bp{\varphi(c)}\cup T\vdash S$. Since $c$ does not appear in any formula of $r\cup S$, $r\cup\bp{\exists v\,\varphi(v)}\cup T\vdash S$ (applying the rules of the calculus). This contradicts $r\in R$, since $r=r\cup\bp{\exists v\,\varphi(v)}$. \item[(Str.1,2,3)] All three cases follow from standard applications of the rules of the calculus for equality. \end{itemize} \end{proof} \subsection{Proof of Thm. \ref{thm:craigint}} \begin{proof} Fix a set $\mathcal{C}$ of fresh constants for $\mathrm{L}$ of size $\kappa$. Let $X_\phi$ be the set of all $\mathrm{L}(\mathcal{C})_{\kappa \omega}$-sentences $\chi$ such that: \begin{itemize} \item all non logical symbols from $\mathrm{L}$ appearing in $\chi$ also appear in $\phi$, \item only a finite number of constants from $\mathcal{C}$ are in $\chi$. \end{itemize} Define $X_\psi$ similarly. Let $S$ be the set of finite sets of $\mathrm{L}(\mathcal{C})_{\kappa \omega}$-sentences $s$ such that: \begin{itemize} \item $s = s_1 \cup s_2$, \item $s_1 \subset X_\phi$, \item $s_2 \subset X_\psi$, \item if $\theta,\sigma \in X_\phi \cap X_\psi$ are such that \begin{itemize} \item no constant symbol of $\mathcal{C}$ appears in either $\theta$ or $\sigma$, \item $\vDash_{\mathrm{BVM}} \bigwedge s_1 \rightarrow \theta$ and $\vDash_{\mathrm{BVM}} \bigwedge s_2 \rightarrow \sigma$, \end{itemize} then $\theta \wedge \sigma$ is Boolean consistent. \end{itemize} We will later show that $S$ is a consistency property. Taking this fact for granted, we now show how it provides the interpolant. The Model Existence Theorem \ref{thm:mainthmAF} grants that any $s \in S$ has a Boolean valued model. By hypothesis $\vDash_{\mathrm{BVM}} \phi \rightarrow \psi$, thus the set $\{\phi, \neg \psi\}$ is not Boolean consistent and it cannot belong to $S$. We now determine which defining property of $S$ the set $\{\phi, \neg \psi\}$ fails. Neither sentence contains constants from $\mathcal{C}$, since both are $\mathrm{L}$-sentences.
The sets $s_1$ and $s_2$ are given by $\{\phi\}$ and $\{\neg \psi\}$. So the last property must fail. This means that there exist $\theta,\sigma \in X_\phi \cap X_\psi$ with no constant symbols of $\mathcal{C}$ in either of them and such that $\vDash_{\mathrm{BVM}} \phi \rightarrow \theta$, $\vDash_{\mathrm{BVM}} \neg \psi \rightarrow \sigma$ and $\theta \wedge \sigma$ is not Boolean consistent. The last assertion gives \[ \vDash_{\mathrm{BVM}} \theta \rightarrow \neg \sigma. \] This together with \[ \vDash_{\mathrm{BVM}} \neg \sigma \rightarrow \psi \] implies \[ \vDash_{\mathrm{BVM}} \theta \rightarrow \psi. \] Recall that $\theta,\sigma$ have no constant symbol from $\mathcal{C}$, hence the interpolant is given by the $\mathrm{L}_{\kappa \omega}$-sentence $\theta$. It remains to check that $S$ is a consistency property. \begin{itemize} \item[(Con)] The very definition of $S$ gives that if some $s\in S$ is such that $\theta,\neg\theta\in s$, then $\theta,\neg\theta\in s_1\subseteq X_\phi$ or $\theta,\neg\theta\in s_2\subseteq X_\psi$. Towards a contradiction, w.l.o.g. we can suppose that for some $s=s_1\cup s_2\in S$ and $\theta\in X_\phi$, $\theta,\neg \theta \in s_1$. Consider any sentence $\chi'\in X_{\phi}\cap X_\psi$ such that $\vDash_\mathrm{BVM} \bigwedge s_2 \rightarrow \chi'$. Because $s_1$ is contradictory we have $\vDash_\mathrm{BVM} \bigwedge s_1 \rightarrow \neg \chi'$. But $\chi' \wedge \neg \chi'$ is not Boolean consistent, a contradiction. \item[(Ind.1)] Suppose $\neg \chi \in s_1 \subseteq s$. Because $s_1 \cup \{\chi \neg\}$ and $s_1$ are equivalent, any sentence $\chi'$ such that $\vDash_\mathrm{BVM} \bigwedge (s_1 \cup \{\chi \neg\}) \rightarrow \chi'$ also verifies $\vDash_\mathrm{BVM} \bigwedge s_1 \rightarrow \chi'$. Then $s \cup \{\chi \neg\} \in S$. \item[(Ind.2)] Suppose $\chi \in \Phi$ and $\bigwedge \Phi \in s_1 \subseteq s$. Because $\bigwedge s_1$ and $\bigwedge (s_1 \cup \{\chi\})$ are equivalent, $s \cup \{\chi\} \in S$.
\item[(Ind.3)] Suppose $\forall v \chi(v) \in s_1 \subseteq s$ and $c \in \mathcal{C} \cup \mathcal{D}$. Because $\bigwedge s_1$ and $\bigwedge (s_1 \cup \{\chi(c)\})$ are equivalent, $s \cup \{\chi(c)\} \in S$. \item[(Ind.4)] Let $\bigvee \Sigma \in s_1 \subseteq s$. By contradiction we suppose that for no $\sigma \in \Sigma$, $s \cup \{\sigma\} \in S$. This means that for each $\sigma \in \Sigma$ there exist $\chi_\sigma^1, \chi_\sigma^2 \in X_\phi \cap X_\psi$ such that \[ \vDash_\mathrm{BVM} \bigwedge (s_1 \cup \{\sigma\}) \rightarrow \chi_\sigma^1 \text{ and } \vDash_\mathrm{BVM} \bigwedge s_2 \rightarrow \chi_\sigma^2, \] but $\chi_\sigma^1 \wedge \chi_\sigma^2$ is Boolean inconsistent. Then \begin{align*} &\vDash_\mathrm{BVM} \bigwedge (s_1 \cup \{\bigvee \Sigma\}) \rightarrow \bigvee \{\chi_\sigma^1 : \sigma \in \Sigma\} \text{ and } \\ &\vDash_\mathrm{BVM} \bigwedge s_2 \rightarrow \bigwedge \{\chi_\sigma^2 : \sigma \in \Sigma\}. \end{align*} Note that $s_1 \cup \bp{\bigvee \Sigma} = s_1$. Because $\chi_\sigma^1 \wedge \chi_\sigma^2$ is Boolean inconsistent for each $\sigma \in \Sigma$, so is \begin{gather*} \bigvee \{\chi_\sigma^1 : \sigma \in \Sigma\} \wedge \bigwedge \{\chi_\sigma^2 : \sigma \in \Sigma\} \equiv_{\mathrm{BVM}} \\ \bigvee \{\chi_{\sigma'}^1 \wedge \bigwedge \{\chi_\sigma^2 : \sigma \in \Sigma\}: \sigma' \in \Sigma\} \equiv_{\mathrm{BVM}} \\ \bigvee \{\bigwedge \{\chi_{\sigma'}^1 \wedge \chi_\sigma^2 : \sigma \in \Sigma\} : \sigma' \in \Sigma\} \models_{\mathrm{BVM}} \\ \bigvee \{\chi_{\sigma}^1 \wedge \chi_\sigma^2 : \sigma \in \Sigma\}, \end{gather*} since the latter is Boolean inconsistent. Then the sentences $\bigvee \{\chi_\sigma^1 : \sigma \in \Sigma\}$ and $\bigwedge \{\chi_\sigma^2 : \sigma \in \Sigma\}$, in the roles of $\theta$ and $\sigma$, witness that $s = s_1 \cup s_2 \not\in S$.
\item[(Ind.5)] Suppose $\exists v \chi(v) \in s_1 \subseteq s$ and consider $c \in \mathcal{C}$ a constant not appearing in $s$, which exists by the clause on the number of constants from $\mathcal{C}$ in sentences in $X_\phi$. Let us check $s \cup \{\chi(c)\} \in S$. For this take $\theta,\sigma \in X_\phi \cap X_\psi$ such that $\vDash_{\mathrm{BVM}} \bigwedge s_1 \cup \{\chi(c)\} \rightarrow \theta$ and $\vDash_{\mathrm{BVM}} \bigwedge s_2 \rightarrow \sigma$ with no constants from $\mathcal{C}$ either in $\theta$ or in $\sigma$. We must show that $\theta\wedge \sigma$ is Boolean satisfiable. It is enough to prove $\vDash_{\mathrm{BVM}} \bigwedge s_1 \rightarrow \theta$. Consider $\mathcal{M}$ a Boolean valued model for $\mathrm{L}\cup\bp{c}$ with the mixing property such that $\mathcal{M} \vDash s_1$. Since $\exists v \chi(v) \in s_1$, $\Qp{\exists v \chi(v)}_\mathsf{B}^\mathcal{M} = 1_\mathsf{B}$; since $\mathcal{M}$ has the mixing property, it is full, so we can find $\tau\in M$ such that \[ \Qp{\exists v \chi(v)}_\mathsf{B}^\mathcal{M} = \Qp{\chi(\tau)}_\mathsf{B}^\mathcal{M} = 1_\mathsf{B}. \] Consider $\mathcal{M}'$ to be the model obtained from $\mathcal{M}$ reinterpreting all symbols of $\mathrm{L}$ the same way, but mapping now $c$ to $\tau$. Then $\mathcal{M}'\models\bigwedge s_1\cup\bp{\chi(c)}$, hence $\Qp{\theta}^{\mathcal{M}'}_{\bool{B}}=1_{\bool{B}}$ as well. Since $c$ does not appear in $\theta$ we get that $\Qp{\theta}^{\mathcal{M}}_{\bool{B}}=\Qp{\theta}^{\mathcal{M}'}_{\bool{B}}=1_{\bool{B}}$. \item[(Str.1,2,3)] All three cases follow from $\bigwedge s_1$ and $\bigwedge s_1 \cup \{\chi\}$ being $\mathrm{BVM}$-equivalent when $\chi$ is the relevant formula of each clause. \end{itemize} \end{proof} \subsection{Proof of Thm. \ref{thm:omittypthm}} \begin{proof} Fix a set $\mathcal{C}=\bp{c_i: i<\kappa}$ of constants.
Consider $\mathrm{L}(\mathcal{C})_{T,\mathcal{F}}$ the set of all sentences obtained by replacing in the $\mathrm{L}_{T,\mathcal{F}}$-formulae with free variables in $\bp{v_i:i\in\omega}$ all occurrences of these finitely many free variables by constants from $\mathcal{C}$. The consistency property $S$ has as elements the sets \begin{align*} s = s_0 \cup \bp{\bigvee \bp{\phi[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}]: \phi \in \Phi}: \, \Phi \in \mathcal{F}_0}, \end{align*} where: \begin{itemize} \item $s_0$ is a finite set of $\mathrm{L}(\mathcal{C})_{T,\mathcal{F}}$ sentences, \item only finitely many constants from $\mathcal{C}$ appear in $s_0$, \item $\mathcal{F}_0$ is a finite subset of $\mathcal{F}$, \item $\sigma_\Phi:\omega\to \mathcal{C}$ for all $\Phi\in \mathcal{F}_0$, and \item $T \cup s_0$ has a Boolean valued model. \end{itemize} We first check that $S$ is a consistency property: consider $s \in S$ and $\psi \in s$. First of all, by definition of $S$ and the Completeness Thm. \ref{them:boolcompl} we can fix a mixing model $\mathcal{M}$ of $s_0 \cup T$. We deal with two cases. If $\psi \in s_0 \cup T$, then $\mathcal{M} \vDash \psi$ allows one to find the formula required by the relevant clause (here one also uses that only finitely many constants from $\mathcal{C}$ occur in $s_0$). Thus we only need to deal with the case \[ \psi = \bigvee \bp{\phi[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}]: \phi \in \Phi} \] for some $\Phi \in \mathcal{F}$ and $\sigma_\Phi:\omega\to\mathcal{C}$. We need to find some $\phi \in \Phi$ such that $s \cup \{\phi[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}]\} \in S$. Denote by $d_0,\ldots,d_m \in \mathcal{C}$ the constants in $s_0$ from $\mathcal{C}$ that are not $c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}$ and write $s_0$ as \[ s_0[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)},d_0,\ldots,d_m] \] with its constant symbols displayed.
Since \[ \mathcal{M} \vDash T \cup s_0, \] we have \[ \mathcal{M} \vDash \exists v_0 \ldots v_{n_\Phi-1} \exists w_0 \ldots w_m \bigwedge s_0[v_0,\ldots,v_{n_\Phi-1},w_0,\ldots,w_m]. \] By the Theorem assumptions, since \[ \exists v_0 \ldots v_{n_\Phi-1} \exists w_0 \ldots w_m \bigwedge s_0[v_0,\ldots,v_{n_\Phi-1},w_0,\ldots,w_m] \] is an $\mathrm{L}_{T,\mathcal{F}}$-formula, we get that for some $\phi \in \Phi$, \[ T \cup \bp{\exists v_0 \ldots v_{n_\Phi-1} \exists w_0 \ldots w_m \bigwedge s_0[v_0,\ldots,v_{n_\Phi-1},w_0,\ldots,w_m] \wedge \phi[v_0,\ldots,v_{n_\Phi-1}] } \] has an $\mathrm{L}_{T,\mathcal{F}}$-model $\mathcal{N}$, which again by completeness can be supposed to be mixing. Make $\mathcal{N}$ an $\mathrm{L}(\mathcal{C})_{T,\mathcal{F}}$-structure by choosing an interpretation of the constants from $\mathcal{C}$ such that $c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}$ are interpreted by witnesses for $v_0,\ldots,v_{n_\Phi-1}$ and $d_0,\ldots,d_m$ by witnesses for $w_0,\ldots,w_m$. Then \[ s_0 \cup \{\phi[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}]\} \cup \bp{\bigvee \bp{\phi[c_{\sigma_\Phi(0)}, \ldots, c_{\sigma_\Phi(n_\Phi-1)}]: \phi \in \Phi}: \, \Phi \in \mathcal{F}_0} \in S. \] This concludes the proof that $S$ is a consistency property. It is now straightforward to check that for all $\Phi\in\mathcal{F}$ and $\sigma:\omega\to\mathcal{C}$ \[ D_{\Phi,\sigma}=\bp{s\in S: \bigvee\bp{\phi[c_{\sigma(0)}, \ldots, c_{\sigma(n_\Phi-1)}]:\phi\in \Phi}\in s} \] is dense in $\mathbb{P}_S$ and that for all $\phi\in T$ so is $D_\phi=\bp{s\in S:\phi\in s}$. By the Model Existence Theorem there is a model $\mathcal{M}$ of \[ T \cup \bp{\bigvee \bp{\phi[c_{\sigma(0)}, \ldots, c_{\sigma(n_\Phi-1)}]: \phi \in \Phi}: \, \Phi \in \mathcal{F}, \sigma:\omega\to \mathcal{C}} \] in which all the elements are the interpretation of some constant from $\mathcal{C}$.
Thus $\mathcal{M}$ models the theory \[ T \cup \bp{\bigwedge_{\Phi \in \mathcal{F}} \forall v_0 \ldots v_{n_\Phi-1} \bigvee \Phi(v_0,\ldots,v_{n_\Phi-1})}, \] as required. \end{proof} \subsection{Proof of Thm. \ref{thm:craigint2}} \begin{proof} The proof is a small twist of the proof of Thm. \ref{thm:craigint} with two differences. First, when obtaining a Boolean valued model for an element of the consistency property one needs to apply Thm. \ref{ManModExi} instead of \ref{GenFilThe}. Secondly, when proving in the proof of Thm. \ref{thm:craigint} that $S$ is a consistency property, the existential case strongly uses the Boolean valued models being full, so this part of the proof also needs revision. \begin{itemize} \item[(Ind.5)] Suppose $\exists \vec{v} \varphi(\vec{v}) \in s_1 \subseteq s$ and consider $\vec{c} \in \mathcal{C}^{\vec{v}}$ a sequence of constants not appearing in $s$, which exists by the clause on the number of constants from $\mathcal{C}$ in sentences\footnote{Note that $\vec{v}$ can be an infinite string of variables of length less than $\lambda$, nonetheless in $\varphi(\vec{v})$ only finitely many constants from $\mathcal{C}$ appear, as in any other formula of $s$.} in $X_\phi$. Let us check $s \cup \{\varphi(\vec{c})\} \in S$. For this take $\theta,\sigma \in X_\phi \cap X_\psi$ such that $\vDash_{\mathrm{BVM}} \bigwedge s_1 \cup \{\varphi(\vec{c})\} \rao \theta$ and $\vDash_{\mathrm{BVM}} \bigwedge s_2 \rao \sigma$. We must show that $\theta \wedge \sigma$ is Boolean consistent. It is enough to prove $\vDash_{\mathrm{BVM}} s_1 \rao \theta$. Consider $\mathcal{M}$ a Boolean valued model such that $\mathcal{M} \vDash s_1$.
Since $\exists \vec{v} \varphi(\vec{v}) \in s_1$, $\Qp{\exists \vec{v} \varphi(\vec{v})}_\mathsf{B}^\mathcal{M} = 1_\mathsf{B}$; therefore we can find a maximal antichain $A \subset \mathsf{B}$ and a family $\{\vec{\tau}_a : a \in A\} \subset M$ such that \[\Qp{\varphi(\vec{\tau}_a)}_\mathsf{B} \geq a \] and \[ \Qp{\exists \vec{v} \varphi(\vec{v})}_\mathsf{B}^\mathcal{M} = \bigvee_{a \in A} \Qp{\varphi(\vec{\tau}_a)}_\mathsf{B}^\mathcal{M} = 1_\mathsf{B}. \] If we are able to check $\Qp{\theta}_\mathsf{B}^\mathcal{M} \geq a$ for any $a \in A$, we will conclude since \[ \Qp{\theta}_\mathsf{B}^\mathcal{M} \geq \bigvee_{a \in A} a = 1_\mathsf{B}. \] Consider $a \in A$ and the structure $\mathcal{M}$ together with the assignment $\vec{c} \mapsto \vec{\tau}_a$. Consider also the Boolean algebra $\mathsf{B} \upharpoonleft a$. For any $m_1,\ldots,m_n\in\mathcal{M}$ and any $n$-ary relational symbol $R$ of the relational $\omega$-signature $\mathrm{L}$ define \[ \Qp{R(m_1,\ldots,m_n)}_{\mathsf{B} \upharpoonleft a}^{(\mathcal{M},\vec{c} \ \mapsto \ \vec{\tau}_a)} = a \wedge \Qp{R(m_1,\ldots,m_n)}_\mathsf{B}^\mathcal{M}. \] Then one makes $\mathcal{M}$ a $\mathsf{B} \upharpoonleft a$-Boolean valued model for $\mathrm{L}\cup \{\vec{c}\}$ letting \[ \Qp{\vartheta[\vec{c} \ \mapsto \ \vec{\tau}_a]}_{\mathsf{B} \upharpoonleft a}^{\mathcal{M}} = a \wedge \Qp{\vartheta[\vec{c} \ \mapsto \ \vec{\tau}_a]}_\mathsf{B}^\mathcal{M}. \] In particular the $\mathsf{B} \upharpoonleft a$-value of $\bigwedge s_1 \cup \{\varphi(\vec{c})\}$ in $(\mathcal{M},\vec{c} \mapsto \vec{\tau}_a)$ is $1_{\mathsf{B} \upharpoonleft a} = a$. 
Finally, the hypothesis $\vDash_{\mathrm{BVM}} \bigwedge s_1 \cup \{\varphi(\vec{c})\} \rightarrow \theta$ ensures $\Qp{\theta}_{\mathsf{B} \upharpoonleft a}^{(\mathcal{M},\vec{c} \ \mapsto \ \vec{\tau}_a)} = a$ and since \[ a = \Qp{\theta}_{\mathsf{B} \upharpoonleft a}^{(\mathcal{M},\vec{c} \ \mapsto \ \vec{\tau}_a)} = a \wedge \Qp{\theta}_\mathsf{B}^\mathcal{M}, \] we conclude $a \leq \Qp{\theta}_\mathsf{B}^\mathcal{M}$. \end{itemize} \end{proof} \section{Forcing notions as consistency properties} \label{sec:for=conprop} By the results of Section \ref{ForConPro} a consistency property $S$ for $\mathrm{L}_{\kappa \omega}$ can be naturally seen as a forcing notion $\mathbb{P}_S$; then, using the forcing machinery on $\mathbb{P}_S$, we can produce a Boolean valued model with the mixing property of $\bigwedge p$ for any $p\in S$. In this section we show that it is possible to go the other way round: we prove that any forcing notion $\mathbb{P}$ has a consistency property $S_\mathbb{P}$ associated to it, so that it is equivalent to force with $\mathbb{P}$ or with $\mathbb{P}_{S_\mathbb{P}}$. From now on we deal with forcing notions given both by partial orders or by complete Boolean algebras. Given a complete Boolean algebra $\mathsf{B}$, we show that for some regular $\kappa$ large enough in $V$, $\bool{B}$ is forcing equivalent to a consistency property describing the $\in$-theory of $H_\kappa$ as computed in a $V$-generic extension by $\bool{B}$. \begin{notation} Let $\mathsf{B}$ be a complete Boolean algebra of cardinality $\kappa$. The signature $\mathrm{L}$ is $\bp{\in}$. $\mathrm{L}(\mathcal{C})_{\infty \omega}$ is produced by the set of constants $\mathcal{C} = V^\mathsf{B} \cap H_{\kappa^+}$. We use $\phi^{H_{\kappa^+}}$ to denote that all quantifiers from $\phi$ are restricted to $H_{\kappa^+}$. We write $\Qp{\phi}_\bool{B}$ rather than $\Qp{\phi}^{V^{\bool{B}}}_\bool{B}$. 
\end{notation} \begin{theorem}\label{thm:equivforcconsprop} For any complete Boolean algebra $\mathsf{B}$ of size at most $\kappa$ and any regular cardinal $\lambda$ the following holds: \begin{enumerate}[label=(\roman*)] \item $S_\mathsf{B} = \{s \in [\mathrm{L}(\mathcal{C})_{\lambda \omega}]^{< \omega}: \Qp{(\bigwedge s)^{H_{\check{\kappa}^+}}}_\mathsf{B} > 0_\mathsf{B}\}$ is a consistency property, \item the map \begin{align*} \pi_{\mathsf{B}} : (S_\mathsf{B}, \leq) &\rightarrow (\mathsf{B}^+,\leq_\mathsf{B}) \\ s &\mapsto \Qp{(\bigwedge s)^{H_{\check{\kappa}^+}}}_\mathsf{B} \end{align*} is a dense embedding. In particular $\mathsf{B}$ and $S_{\mathsf{B}}$ are equivalent forcing notions. \end{enumerate} \end{theorem} \begin{proof} We first prove $(ii)$. \begin{itemize} \item If $p \leq q$, then $q \subseteq p$ and $\pi(p) = \Qp{\bigwedge p}_\mathsf{B} \leq \Qp{\bigwedge q}_\mathsf{B} = \pi(q)$. \item We have $p \perp q \Leftrightarrow p \cup q \notin S_\mathsf{B} \Leftrightarrow \Qp{\bigwedge (p \cup q)}_\mathsf{B} = 0_\mathsf{B} \Leftrightarrow \Qp{\bigwedge p}_\mathsf{B} \wedge \Qp{\bigwedge q}_\mathsf{B} = 0_\mathsf{B} \Leftrightarrow \pi(p) \perp \pi(q)$. \item Let $\dot{G}=\bp{(\check{b},b):\,b\in\bool{B}}$ be the canonical $\mathsf{B}$-name for a $V$-generic filter. Since for any $b \in \mathsf{B}^+$ the $\mathsf{B}$-value of $\check{b} \in \dot{G}$ is $b$, the map $\pi$ is surjective and in particular $\pi[S_\mathsf{B}]$ is dense in $\mathsf{B}^+$. \end{itemize} Now we prove $(i)$. We have to check that $S_\bool{B}$ satisfies the clauses of Def. \ref{def:ConProInf}. Note that by choice of $\kappa$, \[ \Qp{\forall v\in H_{\check{\kappa}^+}\phi^{H_{\check{\kappa}^+}}(v)}^{V^{\bool{B}}}_{\bool{B}}=\bigwedge_{\tau\in \mathcal{C}}\Qp{\phi^{H_{\check{\kappa}^+}}(\tau)}^{V^{\bool{B}}}_{\bool{B}}.
\] In view of the above observation, for notational simplicity we use $\Qp{\phi}_\mathsf{B}$ instead of $\Qp{\phi^{H_{\check{\kappa}^+}}}^{V^{\bool{B}}}_\mathsf{B}$. Note also that in the proof below we will only be interested in formulae where quantifiers range over (and constants belong to) $H_{\kappa^+}\cap V^\bool{B}$. \begin{description} \item[(Con)] Consider $s \in S_\mathsf{B}$ and $\phi \in \mathrm{L}(\mathcal{C})_{\infty\omega}$. If $\phi$ and $\neg \phi$ are both in $s$, $\Qp{\bigwedge s}_\mathsf{B} \leq \Qp{\phi \wedge \neg \phi}_\mathsf{B} = 0_\mathsf{B}$, a contradiction since $\Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}$. Then for any $\phi$, either $\phi \notin s$ or $\neg \phi \notin s$. \item[(Ind.1)] Consider $s \in S_\mathsf{B}$ and $\neg \phi \in s$. Since $\Qp{\neg \phi}_\mathsf{B} = \Qp{\phi \neg}_\mathsf{B}$, $\Qp{\bigwedge (s \cup \{\phi \neg\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}$ and $s \cup \{\phi \neg\} \in S_\mathsf{B}$. \item[(Ind.2)] Consider $s \in S_\mathsf{B}$ and $\bigwedge \Phi \in s$. For any $\phi \in \Phi$, $\Qp{\bigwedge (s \cup \{\phi\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}$ and $s \cup \{\phi\} \in S_\mathsf{B}$. \item[(Ind.3)] Consider $s \in S_\mathsf{B}$, $\forall v \phi(v) \in s$ and $\tau \in \mathcal{C}$. We have \[ \Qp{\forall v \phi(v)}_\mathsf{B} = \bigwedge_{\sigma \in V^\mathsf{B} \cap H_{\kappa^+}} \Qp{\phi(\sigma)}_\mathsf{B} \leq \Qp{\phi(\tau)}_\mathsf{B}. \] Therefore \[ \Qp{\bigwedge (s \cup \{\phi(\tau)\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}, \] and $s \cup \{\phi(\tau)\} \in S_\mathsf{B}$. \item[(Ind.4)] Consider $s \in S_\mathsf{B}$ and $\bigvee \Phi \in s$. Suppose that for no $\phi \in \Phi$, $s \cup \{\phi\} \in S_\mathsf{B}$. Then for any $\phi \in \Phi$, $\Qp{\bigwedge (s \cup \{\phi\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} \wedge \Qp{\phi}_\mathsf{B} = 0_\mathsf{B}$. 
Therefore $\Qp{\bigwedge s}_\mathsf{B} \leq \Qp{\neg \phi}_\mathsf{B}$ for any $\phi \in \Phi$. Since $\bigwedge_{\phi \in \Phi}\Qp{\neg \phi}_\mathsf{B}$ is the greatest lower bound of $\{\Qp{\neg \phi}_\mathsf{B} : \phi \in \Phi\}$, we have $\Qp{\bigwedge s}_\mathsf{B} \leq \bigwedge_{\phi \in \Phi}\Qp{\neg \phi}_\mathsf{B} = \Qp{\neg \bigvee \Phi}_\mathsf{B}$. Then $\Qp{\bigwedge (s \cup \{\neg \bigvee \Phi\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}$, but since $\bigvee \Phi$ and $\neg \bigvee \Phi$ are both in $s \cup \{\neg \bigvee \Phi\}$, $\Qp{\bigwedge (s \cup \{\neg \bigvee \Phi\})}_\mathsf{B} = 0_\mathsf{B}$, a contradiction. \item[(Ind.5)] Consider $s \in S_\mathsf{B}$ and $\exists v \phi(v) \in s$. Suppose that for no $\tau\in\mathcal{C}$, $s \cup \{\phi(\tau)\} \in S_\mathsf{B}$. Then for any $\tau\in\mathcal{C}$, \[ \Qp{\bigwedge (s \cup \{\phi(\tau)\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} \wedge \Qp{\phi(\tau)}_\mathsf{B} = 0_\mathsf{B}. \] This gives that \[ \Qp{\bigwedge s}_\mathsf{B} \leq \Qp{\neg \phi(\tau)}_\mathsf{B} \] for any $\tau \in V^\mathsf{B} \cap H_{\kappa^+}$. Therefore \[ \Qp{\bigwedge s}_\bool{B} \leq \bigwedge_{\tau \in V^\mathsf{B} \cap H_{\kappa^+}} \Qp{\neg \phi(\tau)}_\mathsf{B} = \Qp{\forall v \neg \phi(v)}_\mathsf{B} = \Qp{\neg \exists v \phi(v)}_\mathsf{B}. \] Note that the equality \[ \bigwedge_{\tau \in V^\mathsf{B} \cap H_{\kappa^+}} \Qp{\neg \phi(\tau)}_\mathsf{B} = \Qp{\forall v \neg \phi(v)}_\mathsf{B} \] only holds because the quantifiers from $\phi$ are restricted to $H_{\check{\kappa}^+}$. Therefore \[ \Qp{\bigwedge (s \cup \{\neg \exists v \phi(v)\})}_\mathsf{B} = \Qp{\bigwedge s}_{\bool{B}} > 0_\mathsf{B}. \] But now $\exists v \phi(v)$ and $\neg \exists v \phi(v)$ are both in $s \cup \{\neg \exists v \phi(v)\}$, hence \[ \Qp{\bigwedge (s \cup \{\neg \exists v \phi(v)\})}_\mathsf{B} = 0_\mathsf{B}. \] We reached a contradiction. 
\item[(Str.1)] Suppose $s \in S_\mathsf{B}$ and $\tau = \sigma \in s$; since $\mathsf{B}$-valued models for set theory verify $\Qp{\tau = \sigma}_\mathsf{B} = \Qp{\sigma = \tau}_\mathsf{B}$, $\Qp{\bigwedge (s \cup \{\sigma = \tau\})}_\mathsf{B} = \Qp{\bigwedge s}_\mathsf{B} > 0_\mathsf{B}$ and $s \cup \{\sigma = \tau\} \in S_\mathsf{B}$. \item[(Str.2)] Suppose $s \in S_\mathsf{B}$ and $\{\sigma = \tau, \phi(\tau)\} \subset s$. We have $\Qp{\sigma = \tau}_\mathsf{B} \wedge \Qp{\phi(\tau)}_\mathsf{B} \leq \Qp{\phi(\sigma)}_\mathsf{B}$; therefore $\Qp{\bigwedge (s \cup \{\phi(\sigma)\})}_\mathsf{B} > 0_\mathsf{B}$ and $s \cup \{\phi(\sigma)\} \in S_\mathsf{B}$. \item[(Str.3)] Trivial since $\mathrm{L}=\bp{\in}$ has no constant symbol. \end{description} \end{proof} Note that the only formulae one needs to keep in $S_\mathsf{B}$ in order to ensure that there is a dense embedding between both forcing notions are those of the form $\check{b} \in \dot{G}$. This is because in order to prove that the embedding has dense image, one only uses that the $\mathsf{B}$-value of $\check{b} \in \dot{G}$ is $b$. In particular one can consider various choices of constants $\mathcal{C}$ to produce the desired consistency property $S_\bool{B}$, other than the one we made. \section{Appendix}\label{sec:app} We collect here some results which are useful to clarify several concepts but are not central. \subsection{Separating Tarski satisfiability from Boolean satisfiability} In this section we show that being satisfiable in the ordinary sense (e.g. with respect to Tarski semantics) is strictly stronger than being Boolean satisfiable, which is in turn strictly stronger than being weakly Boolean satisfiable. We first show that there is a Boolean satisfiable theory which has no Tarski model. \begin{fact}\label{fac:tarskiinc} Let $\mathrm{L}=\bp{F}\cup\bp{d_n:n\in\omega}\cup\bp{e_\alpha:\alpha<\omega_1^V}$ with $F$ a binary predicate.
Consider the $\mathrm{L}_{\omega_2^V\omega}$-theory $S$ given by \[ \forall x\,\forall y\,\forall z\,((F(x,y)\wedge F(x,z))\rightarrow y=z) \] \[ \forall x\,(\exists y\, F(x,y)\leftrightarrow \bigvee_{n\in\omega}x=d_n) \] \[ \bigvee_{n\in\omega} F(d_n,e_\alpha) \] for all $\alpha<\omega_1^V$, together with \[ e_\alpha\neq e_\beta \] for all $\alpha<\beta<\omega_1^V$; thus $F$ is the graph of a partial function with countable domain and uncountable range. Then every countable fragment of $S$ has a Tarski model, while $S$ has no Tarski model. Furthermore $S$ has a Boolean valued model. \end{fact} Note that $S$ witnesses the failure of the compactness and completeness theorems for Tarski semantics for $\mathrm{L}_{\omega_2^V\omega}$; it is clearly a counterexample to compactness for this semantics; it is also a counterexample to completeness (using the axiom system we present in Section \ref{subsec:gentzencalc}) since $S\not\vdash\emptyset$ in view of Thm. \ref{them:boolcompl}. \begin{proof} Given a countable fragment $R$ of $S$, find $\beta$ countable and such that any $e_\alpha$ occurring in some formula in $R$ has $\alpha<\beta$. Then $(\beta,f)$, where $f$ is a surjection of $\omega$ onto $\beta$ and $F$ is interpreted as the graph of $f$, can be extended to a model of $R$ by mapping $d_n$ to $n$ and $e_\alpha$ to $\alpha$ for any $n\in\omega,\alpha<\beta$. Note that the interpretation of $F$ in any Tarski model of $S$ in $V$ is a map with domain a countable set and range an uncountable set in $V$. Hence no such model can exist in $V$. Now if $G$ is $\Coll(\omega,\omega_1)$-generic, in $V[G]$ the generic function $\omega\to\omega_1^V$ given by $\cup G$ gives a Tarski model of $S$.
Taking this into account, in $V$ consider the $\RO(\Coll(\omega,\omega_1^V))$-valued model $\mathcal{M}=(\omega_1^V,F^\mathcal{M},d_n^\mathcal{M}:n\in\omega, e_\alpha^\mathcal{M}:\alpha<\omega_1^V)$ given by \begin{itemize} \item $F^\mathcal{M}(n,\alpha)=\Reg{\bp{q\in\Coll(\omega,\omega_1^V):\, \ap{n,\alpha}\in q}}$ for $n\in\omega$ and $\alpha<\omega_1^V$; $F^\mathcal{M}(\beta,\alpha)=0_{\RO(\Coll(\omega,\omega_1^V))}$ for $\beta\not\in\omega$ and $\alpha<\omega_1^V$; \item $d_n^\mathcal{M}=n$ for all $n\in\omega$, \item $e_\alpha^\mathcal{M}=\alpha$ for all $\alpha\in\omega_1^V$. \end{itemize} It can be checked that in $V$ it holds that $\mathcal{M}$ assigns value $1_{\RO(\Coll(\omega,\omega_1^V))}$ to all axioms of $S$. \end{proof} Now we exhibit a theory $T$ which is a counterexample to the compactness theorem with respect to Boolean satisfiability: all finite fragments of $T$ are Boolean satisfiable while $T$ is not. \begin{fact} Consider the $\mathrm{L}_{\omega_1\omega}$-theory $T$ for $\mathrm{L}=\bp{d_n:n\in\omega,c_m:\,m\in\omega}$ with axioms: \begin{itemize} \item $\bigwedge_{n\in\omega}\bigvee_{m\in \omega}d_n= c_m$, \item $\bigwedge_{n\neq m\in\omega}c_n\neq c_m$, \item $d_n\neq c_m$ for $n,m\in \omega$. \end{itemize} The following holds: \begin{itemize} \item Every finite fragment of $T$ is Tarski satisfiable. \item $T$ is not Boolean satisfiable. \item $T$ is weakly Boolean satisfiable. \end{itemize} \end{fact} \begin{proof} Let: \begin{itemize} \item $\bool{B}$ be the Boolean completion of the Cohen forcing $\omega^{<\omega}$, \item $M=\bp{\sigma\in V^{\bool{B}}: \Qp{\sigma\in\check{\omega}}=1_{\bool{B}}}$, \item $\dot{r}$ be the canonical $\bool{B}$-name for the Cohen generic real. \item $\mathcal{M}$ be the $\bool{B}$-model for $\mathrm{L}$ with domain $M$, $\Qp{\cdot=\cdot}^{\mathcal{M}}= \Qp{\cdot=\cdot}^{V^{\bool{B}}}$, and interpretation of $d_n$ by $\dot{r}(\check{n})$ and $c_m$ by $\check{m}$.
\end{itemize} Then $\mathcal{M}$ witnesses that $T$ is weakly Boolean satisfiable. $T$ cannot be Boolean satisfiable because in any Boolean valued model it cannot be that $d_n\neq c_m$ gets Boolean value $1_\bool{B}$ for all $m$ while also $\bigvee_{m\in \omega}d_n= c_m$ gets the same value. $(\omega,c_n\mapsto n:n\in\omega)$ can be extended to a Tarski model of any finite fragment of $T$. \end{proof} Our last example is a \emph{finite} weakly Boolean satisfiable theory which is not Boolean satisfiable. \begin{fact} Consider the finite $\mathrm{L}_{\omega\omega}$-theory $T$ for $\mathrm{L}=\bp{d,c_0,c_1}$ with axioms: \begin{itemize} \item $\bigvee_{m\in 2}d= c_m$, \item $c_0\neq c_1$, \item $d\neq c_m$ for $m\in 2$. \end{itemize} The following holds: \begin{itemize} \item $T$ is not Boolean satisfiable. \item $T$ is weakly Boolean satisfiable. \end{itemize} \end{fact} \begin{proof} Let $\bool{B}=\bp{0,a,\neg a,1}$ be the four-element Boolean algebra, and let $\mathcal{M}$ consist of the four possible functions $f:\bp{a,\neg a}\to 2$. Let $c_i$ be interpreted by the constant functions with value $i$ and $d$ by one of the other two. Set $\Qp{f=g}^{\mathcal{M}}_\bool{B}=\bigvee \{ b\in\bp{a,\neg a}: f(b)=g(b) \} $. Then \[ \Qp{\bigvee_{m\in 2}d= c_m}^{\mathcal{M}}_\bool{B}= \Qp{c_0\neq c_1}^{\mathcal{M}}_\bool{B}=1_\bool{B} \] and $\Qp{d\neq c_i}^{\mathcal{M}}_\bool{B}>0_\bool{B}$ for both $i=0,1$. Hence $T$ is weakly Boolean satisfiable. $T$ cannot be Boolean satisfiable since \[ \Qp{d\neq c_0}^{\mathcal{N}}_{\bool{C}}= \neg\Qp{d\neq c_1}^{\mathcal{N}}_{\bool{C}} \] in all $\bool{C}$-valued models $\mathcal{N}$ of $c_0\neq c_1\wedge \bigvee_{m\in 2}d= c_m$. Hence we cannot have that \[ \Qp{d\neq c_0}^{\mathcal{N}}_{\bool{C}}= \Qp{d\neq c_1}^{\mathcal{N}}_{\bool{C}}=1_{\bool{C}} \] in any Boolean valued model of the other axioms of $T$.
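The four-element computation above is finite and can be checked mechanically. The following Python sketch is our own illustration (the encoding of $\bool{B}$ as subsets of a two-atom set and all names are ours, not part of the paper's formalism): it computes the Boolean values of the axioms of $T$ and confirms weak Boolean satisfiability.

```python
# A minimal sketch (our own encoding, not the paper's notation).
# B = P({0,1}) is the four-element Boolean algebra; atom 0 plays the
# role of a, atom 1 the role of "not a".
ATOMS = (0, 1)
ONE, ZERO = frozenset(ATOMS), frozenset()

def eq(f, g):
    """Boolean value [[f = g]]: the join of the atoms on which f and g agree."""
    return frozenset(b for b in ATOMS if f[b] == g[b])

def neg(x):
    return ONE - x

# The four functions {a, not-a} -> 2, written as pairs (f(a), f(not-a)).
c0, c1 = (0, 0), (1, 1)   # constant functions interpreting c_0 and c_1
d = (0, 1)                # one of the two remaining functions interprets d

or_axiom = eq(d, c0) | eq(d, c1)   # [[ d = c_0  or  d = c_1 ]]
distinct = neg(eq(c0, c1))         # [[ c_0 != c_1 ]]
neq0 = neg(eq(d, c0))              # [[ d != c_0 ]]
neq1 = neg(eq(d, c1))              # [[ d != c_1 ]]

assert or_axiom == ONE and distinct == ONE   # first two axioms get value 1_B
assert neq0 != ZERO and neq1 != ZERO         # d != c_i gets positive value
assert neq0 & neq1 == ZERO                   # the two values are complementary
```

The last assertion also records why $T$ fails to be Boolean satisfiable: the values of $d\neq c_0$ and $d\neq c_1$ are complementary, so they cannot both be $1_\bool{B}$.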
\end{proof} We conclude this part by noting that Boolean satisfiability is the correct generalization to $\mathrm{L}_{\infty\omega}$ of Tarski satisfiability: \begin{fact} Assume $T$ is a first order theory. Then $T$ is Boolean satisfiable if and only if $T$ is Tarski satisfiable. \end{fact} \begin{proof} By Thm. \ref{them:boolcompl} any Boolean satisfiable first order theory is realized in a $\bool{B}$-valued model $\mathcal{M}$ with the mixing property. If $G$ is an ultrafilter on $\bool{B}$, $\mathcal{M}/_G$ models $T$ by Proposition \ref{prop:mixfull} and Thm. \ref{thm:fullLos}. \end{proof} \subsection{Proof of Fact \ref{fac:pressubslambdaanyform} and Proposition \ref{prop:mixfull}}\label{subsec:mixfull} We first prove Fact \ref{fac:pressubslambdaanyform}. \begin{proof} We proceed by induction on the complexity of $\phi(x_i:i<\alpha)$. The Fact holds by definition for atomic formulae. Assume the Fact for all proper subformulae of $\phi(x_i:i<\alpha)$. Now note that the desired inequality entails that for all $\beta$ and $(\sigma_i:i<\beta)$, $(\tau_i:i<\beta)$ in $\mathcal{M}^\beta$ and all $\psi(x_i:i<\beta)$ proper subformula of $\phi(x_i:i<\alpha)$, \[ \bigg(\bigwedge_{i\in\beta}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\psi(\tau_i:\,i<\beta)}_\mathsf{B} = \bigg(\bigwedge_{i\in\beta}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\psi(\sigma_i:\,i<\beta)}_\mathsf{B}. \] Now if $\phi=\neg\psi$ the above equality is extended to $\phi$.
It is also preserved if $\phi=\bigvee \Phi$ since \begin{align*} \bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\bigvee\Phi(\tau_i:\,i<\alpha)}_\mathsf{B} = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \bigvee_{\psi\in \Phi}\Qp{\psi(\tau_i:\,i<\alpha)}_\mathsf{B} = \\ =\bigvee_{\psi\in\Phi}\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \wedge \Qp{\psi(\tau_i:\,i<\alpha)}_\mathsf{B} \bigg) = \\ =\bigvee_{\psi\in\Phi}\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \wedge \Qp{\psi(\sigma_i:\,i<\alpha)}_\mathsf{B} \bigg) = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \bigvee_{\psi\in \Phi}\Qp{\psi(\sigma_i:\,i<\alpha)}_\mathsf{B} = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\bigvee\Phi(\sigma_i:\,i<\alpha)}_\mathsf{B}. \end{align*} Similarly \begin{align*} \bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\exists (y_j:j\in\beta)\psi(\tau_i:\,i<\alpha,y_j:j\in\beta)}_\mathsf{B} = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \bigvee_{(\eta_j:j\in\beta)\in\mathcal{M}^\beta }\Qp{\psi(\tau_i:\,i<\alpha,\eta_j:j\in\beta)}_\mathsf{B} = \\ =\bigvee_{(\eta_j:j\in\beta)\in\mathcal{M}^\beta }\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \wedge \Qp{\psi(\tau_i:\,i<\alpha,\eta_j:j\in\beta)}_\mathsf{B} \bigg) = \\ =\bigvee_{(\eta_j:j\in\beta)\in\mathcal{M}^\beta }\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \wedge \Qp{\psi(\sigma_i:\,i<\alpha,\eta_j:j\in\beta)}_\mathsf{B} \bigg) = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \bigvee_{(\eta_j:j\in\beta)\in\mathcal{M}^\beta }\Qp{\psi(\sigma_i:\,i<\alpha,\eta_j:j\in\beta)}_\mathsf{B} = \\ =\bigg(\bigwedge_{i\in\alpha}\Qp{\tau_i=\sigma_i}_\mathsf{B} \bigg) \wedge \Qp{\exists (y_j:j\in\beta)\psi(\sigma_i:\,i<\alpha,y_j:j\in\beta)}_\mathsf{B}.
\end{align*} The cases of $\bigwedge, \forall$ are handled similarly. \end{proof} We can now prove Proposition \ref{prop:mixfull}. \begin{proof} Let $\exists \overline{v} \phi(\overline{v})$ be an $\mathrm{L}_{\infty \infty}$-sentence. Fix a maximal antichain $A$ among \[ \bp{b \in \bool{B} : b \leq \Qp{\phi(\overline{c})} \text{ for some } \overline{c} \in M^{|\overline{v}|}}. \] For each $b \in A$ fix a witness $\overline{c_b} = (c_{i,b} : i \in I)$ with $b \leq \Qp{\phi(\overline{c_b})}$. The mixing property in $\mathcal{M}$ gives $c_i$ for each $i \in I$ such that $\Qp{c_i=c_{i,b}}_\mathsf{B} \geq b$ for all $b \in A$. Let $\overline{c} = (c_i : i \in I)$. Then \begin{gather*} \Qp{\exists \overline{v} \phi(\overline{v})}_\mathsf{B}= \bigvee A = \bigvee_{b \in A} \Qp{\phi(\overline{c_b})}_\mathsf{B} =\bigvee_{b \in A} (b \wedge \Qp{\phi(\overline{c_b})}_\mathsf{B} \wedge \bigwedge_{i\in I} \Qp{c_i=c_{i,b}}_\mathsf{B}) \leq \\ \bigvee_{b \in A} (b \wedge \Qp{\phi(\overline{c})}_\mathsf{B})= \Qp{\phi(\overline{c})}_\mathsf{B}. \end{gather*} \end{proof} \section*{Concluding remarks} Mansfield's completeness theorem follows from his proof that if $S$ is a consistency property for $\mathrm{L}_{\infty\infty}$, there is a $\RO(\mathbb{P}_S)$-valued model $\mathcal{M}_S$ such that $\Qp{\bigwedge s}=\Reg{\bp{s}}$ for any $s\in S$. However there is no reason to expect that the model $\mathcal{M}_S$ produced in Mansfield's proof is full. We conjecture it is not, at least for some $S$. We also conjecture that if $S$ is a consistency property for $\mathrm{L}_{\infty\infty}$, our model $\mathcal{A}_S$ may not satisfy $\Qp{\psi}=\Reg{\bp{r\in S:\psi\in r}}$ for some formula $\psi$ of $\mathrm{L}_{\infty\infty}$.
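The mixing step used in the proof of Proposition \ref{prop:mixfull} admits a concrete finite illustration. In the sketch below (our own toy encoding, not the paper's formalism), $\mathsf{B}$ is the power-set algebra of a finite set $X$, names are functions $X\to D$ with $\Qp{\tau=\sigma}=\bp{x\in X: \tau(x)=\sigma(x)}$, and a family of witnesses indexed by a maximal antichain (here, a partition of $X$) is glued into a single name $c$ with $\Qp{c=c_b}\geq b$ for every $b$ in the antichain.

```python
# A toy illustration of mixing (our own encoding, not the paper's notation).
# B = P(X) for a finite X; a "name" is a function X -> D, and
# [[ tau = sigma ]] = { x in X : tau(x) = sigma(x) }.
X = range(6)

def eq(t, s):
    return frozenset(x for x in X if t[x] == s[x])

# A maximal antichain A in B^+ (here: a partition of X) and, for each
# piece b, a witness name c_b (constant functions, for simplicity).
A = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
witnesses = {A[0]: (7,) * 6, A[1]: (8,) * 6, A[2]: (9,) * 6}

# Mixing: glue the witnesses into one name c with [[ c = c_b ]] >= b,
# by letting c agree with c_b on the atoms below b.
c = tuple(next(witnesses[b][x] for b in A if x in b) for x in X)

assert c == (7, 7, 8, 8, 9, 9)
for b in A:
    # frozenset <= is inclusion, i.e. b <= [[ c = c_b ]] in B
    assert b <= eq(c, witnesses[b])
```

This is exactly the gluing that turns the witnesses $\overline{c_b}$ with $b\leq\Qp{\phi(\overline{c_b})}$ into a single $\overline{c}$ with $\Qp{\phi(\overline{c})}\geq\bigvee A$.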
The key point is that the $\mathrm{L}_{\infty\infty}$-semantics of existential quantifiers over infinite strings is not forcing invariant: if one forces the addition of a new countable sequence to some $\bool{B}$-valued model $\mathcal{M}$ in $V$, it may be the case that $\exists\vec{v}\psi$ gets Boolean value $0_\bool{B}$ in $V$ and positive $\RO^{V[G]}(\bool{B})$-Boolean value in the generic extension $V[G]$. This makes our proof of Thm. \ref{GenFilThe} break down when handling the existential quantifier clause for $\mathrm{L}_{\infty\infty}$ over an infinite string. We venture the following: \begin{conjecture} Assume $\mathrm{L}$ is a relational $\omega$-signature. There are Boolean satisfiable $\mathrm{L}_{\infty\infty}$-theories which do not have a Boolean valued model with the mixing property. \end{conjecture} Another point to be clarified on the completeness of Boolean valued semantics for $\mathrm{L}_{\infty\infty}$ is the following: \begin{question} Assume $\mathrm{L}$ is a relational $\lambda$-signature for $\lambda>\omega$. Does the completeness theorem hold for $\mathrm{L}_{\infty\infty}$-theories which are consistent according to the $\mathrm{L}_{\infty\infty}$ Gentzen calculus? \end{question} Mansfield's model existence theorem does not apply to such theories as the proof of (\ref{eqn:subslambda}) for the model obtained in Mansfield's proof breaks down for the obvious modification of the notion of consistency property required in order to deal with Boolean valued models for $\lambda$-signatures (e.g. one should replace clause (Str.2) of a consistency property with the stronger: \emph{``If $\bp{\bigwedge_{i\in I}c_i=d_i,\phi(c_i:i\in I)}\in s\in S$, then $\bp{\phi(d_i:i\in I)}\cup s\in S$''}.) In the tentative proof of (\ref{eqn:subslambda}) for the model obtained in Mansfield's proof one should replace our argument with one requiring that a distributivity law for infinite conjunctions of infinite disjunctions holds. The latter may not hold for $\bool{B}_S$.
Note finally that while Boolean compactness fails for $\mathrm{L}_{\infty\omega}$, one can prove the following curious form of weak Boolean compactness: \begin{fact} Assume $T$ is a family of Boolean satisfiable $\mathrm{L}_{\infty\omega}$-sentences. Then $T$ is weakly Boolean satisfiable. \end{fact} This holds by noticing that if $S_\psi$ is a consistency property that witnesses that $\psi$ is Boolean consistent, then $S=\bigcup_{\psi\in T}S_\psi$ is a consistency property such that $\mathcal{A}_S$ assigns a positive Boolean value to any $\psi\in T$. \end{document}
arXiv
中国物理C Chinese Physics C All Title Author Keyword Abstract DOI Category Address Fund PACS EEACC CPC authorship won the "IOP Publishing awards top cited Chinese authors" Chinese Physics C: 2019 Reviewer Awards FUTURE PHYSICS PROGRAMME OF BESIII 2019 Impact Factor 2.463 Title Author Keyword Chinese Physics C> 2021, Vol. 45> Issue(1) : 015102 DOI: 10.1088/1674-1137/abc066 Spherical accretion flow onto general parameterized spherically symmetric black hole spacetimes Sen Yang 1,2, , Cheng Liu 2,3, , Tao Zhu 2,3,, , Li Zhao 1, , Qiang Wu 2,3, , Ke Yang 4, , Mubasher Jamil 2,3,5, Institute of theoretical physics, Lanzhou University, Lanzhou 730000, China Institute for theoretical physics and Cosmology, Zhejiang University of Technology, Hangzhou 310032, China United center for gravitational wave physics (UCGWP), Zhejiang University of Technology, Hangzhou 310032, China School of Physical Science and Technology, Southwest University, Chongqing 400715, China School of Natural Sciences, National University of Sciences and Technology, Islamabad 44000, Pakistan The transonic phenomenon of black hole accretion and the existence of the photon sphere characterize strong gravitational fields near a black hole horizon. Here, we study the spherical accretion flow onto general parametrized spherically symmetric black hole spacetimes. We analyze the accretion process for various perfect fluids, such as the isothermal fluids of ultra-stiff, ultra-relativistic, and sub-relativistic types, and the polytropic fluid. The influences of additional parameters, beyond the Schwarzschild black hole in the framework of general parameterized spherically symmetric black holes, on the flow behavior of the above-mentioned test fluids are studied in detail. In addition, by studying the accretion of the ideal photon gas, we further discuss the correspondence between the sonic radius of the accreting photon gas and the photon sphere for general parameterized spherically symmetric black holes. 
Possible extensions of our analysis are also discussed.

Keywords: spherical accretion, black hole, RZ parametrization, photon sphere

References
[1] J. Frank, A. King, and D. Raine, Accretion Power in Astrophysics, 3rd ed., Cambridge University Press (2002)
[2] F. Yuan and R. Narayan, Annu. Rev. Astron. Astrophys. 52, 529 (2014)
[3] S. Nampalliwar and C. Bambi, Accreting Black Holes, arXiv: 1810.07041
[4] H. Bondi, Mon. Not. R. Astron. Soc. 112, 195 (1952)
[5] F. C. Michel, Astrophys. Space Sci. 15, 153 (1972)
[6] S. K. Chakrabarti, Theory of Transonic Astrophysical Flows, World Scientific (1990)
[7] E. Babichev, V. Dokuchaev, and Y. Eroshenko, Phys. Rev. Lett. 93, 021102 (2004)
[8] J. Pringle and A. King, Astrophysical Flows, Cambridge University Press (2007)
[9] M. Jamil, M. A. Rashid, and A. Qadir, Eur. Phys. J. C 58, 325 (2008)
[10] E. Babichev, S. Chernov, V. Dokuchaev et al., Phys. Rev. D 78, 104027 (2008)
[11] J. A. Jimenez Madrid and P. F. Gonzalez-Diaz, Grav. Cosmol. 14, 213 (2008)
[12] J. Bhadra and U. Debnath, Eur. Phys. J. C 72, 1912 (2012)
[13] E. Babichev, S. Chernov, V. Dokuchaev et al., J. Exp. Theor. Phys. 112, 784 (2011)
[14] L. Jiao and R.-J. Yang, Eur. Phys. J. C 77, 356 (2017)
[15] S. B. Giddings and M. L. Mangano, Phys. Rev. D 78, 035009 (2008)
[16] M. Sharif and G. Abbas, Mod. Phys. Lett. A 26, 1731 (2011)
[17] A. J. John, S. G. Ghosh, and S. D. Maharaj, Phys. Rev. D 88, 104005 (2013)
[18] U. Debnath, Astrophys. Space Sci. 360, 40 (2015)
[19] A. Ganguly, S. G. Ghosh, and S. D. Maharaj, Phys. Rev. D 90, 064037 (2014)
[20] P. Mach and E. Malec, Phys. Rev. D 88, 084055 (2013)
[21] P. Mach, E. Malec, and J. Karkowski, Phys. Rev. D 88, 084056 (2013)
[22] J. Karkowski and E. Malec, Phys. Rev. D 87, 044007 (2013)
[23] C. Gao, X. Chen, V. Faraoni et al., Phys. Rev. D 78, 024008 (2008)
[24] A. K. Ahmed, U. Camci, and M. Jamil, Class. Quant. Grav. 33, 215012 (2016)
[25] S. Bahamonde and M. Jamil, Eur. Phys. J. C 75, 508 (2015)
[26] C. Bambi, Black Holes: A Laboratory for Testing Strong Gravity, Springer (2017)
[27] R. J. Yang and H. Gao, Eur. Phys. J. C 79, 367 (2019)
[28] L. Jiao and R. J. Yang, JCAP 1709, 023 (2017)
[29] R. Yang, Phys. Rev. D 92, 084011 (2015)
[30] M. U. Farooq, A. K. Ahmed, R. Yang et al., Chin. Phys. C 44, 065102 (2020)
[31] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L1 (2019)
[32] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L2 (2019)
[33] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L3 (2019)
[34] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L4 (2019)
[35] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L5 (2019)
[36] K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L6 (2019)
[37] Y. Koga and T. Harada, Phys. Rev. D 94, 044053 (2016)
[38] Y. Koga, Phys. Rev. D 99, 064034 (2019)
[40] L. Rezzolla and A. Zhidenko, Phys. Rev. D 90, 084009 (2014)
[41] P. Kocherlakota and L. Rezzolla, Accurate Mapping of Spherically Symmetric Black Holes in a Parameterised Framework, arXiv: 2007.15593
[42] R. Konoplya, L. Rezzolla, and A. Zhidenko, Phys. Rev. D 93, 064015 (2016)
[43] L. Rezzolla and O. Zanotti, Relativistic Hydrodynamics, Oxford University Press (2013)
[44] M. Azreg-Aïnou, A. K. Ahmed, and M. Jamil, Class. Quant. Grav. 35, 235001 (2018)
[45] A. K. Ahmed, M. Azreg-Aïnou, M. Faizal et al., Eur. Phys. J. C 76, 280 (2016)
[46] A. K. Ahmed, M. Azreg-Aïnou, S. Bahamonde et al., Eur. Phys. J. C 76, 269 (2016)
[47] M. Cvetic, G. W. Gibbons, and C. N. Pope, Phys. Rev. D 94, 106005 (2016)
[48] T. Zhu, Q. Wu, and M. Jamil, Phys. Rev. D 100, 044055 (2019)
[49] C. Ding, A. Wang, and X. Wang, Phys. Rev. D 92, 084055 (2015)
[50] M. U. Shahzad, R. Ali, A. Jawad et al., Chin. Phys. C 44, 065106 (2020)
[51] H. Bondi and F. Hoyle, Mon. Not. R. Astron. Soc. 104, 273 (1944)
[52] F. Hoyle and R. A. Lyttleton, Proc. Camb. Philol. Soc. 35, 405 (1939)
[53] R. A. Lyttleton and F. Hoyle, Observatory 63, 39 (1940)
[54] R. Edgar, New Astron. Rev. 48, 843 (2004)
Corresponding author: Tao Zhu, [email protected]
Received Date: 2020-08-02
Available Online: 2021-01-15

I. INTRODUCTION

Accretion around a massive gravitational object is a basic phenomenon in astrophysics and has been essential to the understanding of various astrophysical processes and observations, including the growth of stars, the formation of supermassive black holes, quasar luminosity, and X-ray emission from compact star binaries [1-3]. The accretion of matter in a realistic astrophysical process is rather complicated, since it involves many challenging aspects of general relativistic magnetohydrodynamics, including turbulence, radiation processes, and nuclear burning. To understand these accretion processes, it is useful to simplify the problem by making assumptions and/or considering simple scenarios. The simplest accretion scenario describes a stationary, spherically symmetric solution, as first discussed by Bondi [4], who considered an infinitely large homogeneous gas cloud steadily accreting onto a central gravitational object. Bondi's treatment was formulated in the framework of Newtonian gravity. Later, in the framework of general relativity (GR), the steady-state spherically symmetric flow of a test fluid onto a Schwarzschild black hole was investigated by Michel [5]. Since then, spherical accretion has been considered for various spherically symmetric black holes in GR and modified gravities; see [6-30] and references therein for examples.

One important feature of spherical accretion onto black holes is the phenomenon of transonic accretion and the existence of a sonic point (or critical point). At the sonic point, the accretion flow transits from the subsonic to the supersonic state.
Normally, the locations of the sonic points in a given black hole spacetime are not far from its horizon. What is important and intriguing is that the narrow region around the sonic point is closely related to ongoing observations of the spectra of electromagnetic and gravitational waves. Therefore, studying the spherical accretion problem can not only help us to understand accretion processes onto different black holes but, importantly, also provide an alternative approach to explore the nature of black hole spacetimes in the regime of strong gravity.

On the other hand, the EHT collaboration recently reported the first image of the shadow of the supermassive black hole at the center of the neighboring elliptical galaxy M87 [31-36]. This image revealed that the diameter of the central black hole shadow is $ (42\pm 3)\; \mu{\rm as} $, leading to the measured central mass $ M = (6.5\pm 0.7)\times 10^9 M_{\odot} $ [31]. The outer edge of the shadow image, if one considers a Schwarzschild black hole, is set by the photon sphere near the black hole horizon, at which the trajectories of photons form closed circular orbits. In astrophysical observations, the existence of a photon sphere is related to the electromagnetic observations of black holes via the background electromagnetic emission and the frequencies of quasi-normal modes, the latter being determined by the parameters of null geodesic motion on and near the photon sphere of a given black hole spacetime. Recently, it was shown that there is a correspondence between the sonic points of an accreting ideal photon gas and the photon sphere for static spherically symmetric spacetimes [37]. This important result is valid not only for spherical accretion of the ideal photon gas but also for rotating accretion in static spherically symmetric spacetimes [38, 39].
From an observational viewpoint, as mentioned in [39], this correspondence connects two independent observations: the observation of light coming from sources behind a black hole, and the observation of emission from the radiation fluid accreted onto the black hole. This is because the size of the hole's shadow is determined by the radius of the photon sphere, while the accreted fluid can signal the sonic point.

In light of the above studies, it is interesting to explore spherical accretion flows in different black hole spacetimes. The additional parameters of these spacetimes, beyond those of the Schwarzschild black hole, may affect the accretion flow behavior; this provides a potentially important approach for studying the strong-gravity behavior of black holes in many alternative theories of gravity. Instead of finding the exact solution and studying the spherical accretion case by case for each given theory, a reasonable strategy is to consider a model-independent framework that parameterizes the most generic black-hole geometry through a finite number of adjustable quantities. For this purpose, in this paper, we consider spherical accretion flows in general parameterized spherically symmetric black hole spacetimes [40]. This parameterized description allows one to study accretion phenomena not only for specific theories of gravity but also in a unified way, by exploring the influence of different black hole parameters on the spherical accretion process [40]. Specifically, we focus our attention on perfect fluid accretion onto general parameterized spherically symmetric black hole spacetimes and further investigate the transonic phenomena for different fluids, including isothermal fluids and polytropic fluids. By studying the accretion of the ideal photon gas, we further reveal the correspondence between the sonic points of the accreting photon gas and the photon sphere for general parameterized spherically symmetric black holes.
Our paper is organized as follows. In Sec. 2, we present a very brief introduction to general parameterized spherically symmetric black holes. Then, in Sec. 3, we derive the basic equations for subsequent discussions on the spherical accretion of various fluids and present several useful quantities. Sec. 4 is devoted to performing a dynamical systems analysis of the accretion process and finding the critical points of the system. In Sec. 5, we apply these results to several known fluids and further investigate the transonic phenomena of the accretion of these fluids onto general parameterized spherically symmetric black holes. In Sec. 6, by studying the spherical accretion of the ideal photon gas and the photon sphere of general parameterized spherically symmetric black holes, we establish the correspondence between the sonic points of the ideal photon gas and its photon sphere. The conclusion of this paper is presented in Sec. 7.

II. PARAMETERIZED SPHERICALLY SYMMETRIC BLACK HOLE SPACETIME

In this section, we present a brief introduction to the parameterization proposed by L. Rezzolla and A. Zhidenko (RZ) [40] for generic spherically symmetric black hole spacetimes. First, let us consider the line element of any spherically symmetric stationary configuration in the spherical polar coordinate system $ (t, r, \theta, \phi) $, which can be written as $ {\rm d}s^2 = - N^2(r) {\rm d}t^2 + \frac{B^2(r)}{N^2(r)}{\rm d}r^2 + r^2 ({\rm d}\theta^2 + \sin^2\theta {\rm d}\phi^2), $ where $ N(r) $ and $ B(r) $ are two functions of the radial coordinate r alone. In the RZ parameterization, $ N(r) $ is expressed as $ N^2(x) = x A(x), $ where $ A(x) >0 $ for $ 0<x<1 $, with $ x = 1- r_0/r $. Thus, $ x = 0 $ represents the location of the event horizon of the black hole, and $ x = 1 $ corresponds to spatial infinity.
Then, the functions $ A(x) $ and $ B(x) $ can be further parameterized in terms of the parameters $ \epsilon $, $ a_i $, and $ b_i $ as $ A(x) = 1 - \epsilon(1-x) +(a_0-\epsilon)(1-x)^2+ \tilde{A}(x) (1-x)^3, $ $ B(x) = 1 + b_0 (1-x) + \tilde{B}(x) (1-x)^2, $ where the functions $ \tilde{A} $ and $ \tilde{B} $ are introduced to describe the metric near the horizon (i.e., $ x \simeq 0 $) and at spatial infinity (i.e., $ x = 1 $). The coefficients $ a_0 $ and $ b_0 $ can be seen as combinations of the PPN parameters. The functions $ \tilde A $ and $ \tilde B $ can be expanded using the continued-fraction (Padé) approximation as $ \tilde A(x) = \frac{a_1}{1+ \dfrac{a_2 x}{1+ \dfrac{a_3 x}{1+ \cdots }}},\;\;\;\;\; \tilde B(x) = \frac{b_1}{1+ \dfrac{b_2 x}{1+ \dfrac{b_3 x}{1+ \cdots }}}, $ where $ a_1, a_2, \cdots, a_n $ and $ b_1, b_2, \cdots, b_n $ are dimensionless constants that can be determined by matching the above parameterization to a specific metric. In addition, the parameter $ \epsilon $ in the RZ parameterization measures the deviation of the position of the event horizon in the general metric from the corresponding location in the Schwarzschild spacetime, i.e., $ \epsilon = \frac{2 M - r_0}{r_0}. $

The RZ parameterization can be matched to many known black hole solutions, both in GR and in modified gravities. These include the Reissner-Nordström (RN) black hole in GR and black holes in Brans-Dicke (BD) gravity, $ f(R) $ gravity, the Einstein-Maxwell axion dilaton theory (EMAD), and Einstein-aether theory [40, 41]. Recently, the RZ parameterization has also been extended to the rotating case [42].

III. BASIC EQUATIONS FOR SPHERICAL ACCRETION FLOWS

In this section, we consider the steady-state spherical accretion flow of matter near an RZ-parameterized black hole. For this purpose, the accreting matter is approximated as a relativistic perfect fluid, neglecting effects related to viscosity and heat transport.
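As a concrete numerical sketch of the parameterization above (our own illustration, not from the paper; the function name and the truncation at $a_0$, i.e. $\tilde{A}=0$, are choices made here for simplicity), the metric function $N^2(r)$ of Eqs. (2), (3), and (6) can be evaluated as follows; setting $\epsilon = a_0 = 0$ recovers the Schwarzschild function $1-2M/r$:

```python
# Sketch of the first-order RZ metric function N^2(r), Eqs. (2), (3), (6),
# truncated at a0 (Atilde = 0). Illustrative only; eps = a0 = 0 recovers
# the Schwarzschild case N^2 = 1 - 2M/r.

def N2_RZ(r, M=1.0, eps=0.0, a0=0.0):
    r0 = 2.0 * M / (1.0 + eps)        # horizon radius, from Eq. (6)
    x = 1.0 - r0 / r                  # x = 0 at the horizon, x = 1 at infinity
    A = 1.0 - eps * (1.0 - x) + (a0 - eps) * (1.0 - x) ** 2   # Eq. (3), Atilde = 0
    return x * A                      # Eq. (2)

print(N2_RZ(4.0))                               # -> 0.5, i.e. 1 - 2M/(4M)
print(abs(N2_RZ(2.0 / 1.1, eps=0.1)) < 1e-12)   # -> True: N^2 vanishes at r = r0
```

Nonzero $\epsilon$ shifts the horizon away from $r=2M$ while leaving the asymptotic mass fixed, which is exactly the role Eq. (6) assigns to it.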
Thus, the energy momentum tensor of the fluid can be described by $ T^{\mu \nu} = (\rho+p)u^\mu u^\nu + p g^{\mu\nu}, $ where $ \rho $ and p are the proper energy density and the pressure of the perfect fluid. The four-velocity $ u^\mu $ obeys the normalization condition $ u_\mu u^\mu = -1 $. We assume that the fluid is radially flowing into the black hole; therefore, we have $ u^\theta = 0 = u^\phi $. For the same reason, the physical quantities ($ \rho $, p) and others introduced later are functions of the radial coordinate r only. For the sake of simplicity, we set the radial velocity as $ u^r = u <0 $ for the accreting case. Then, using the normalization condition, it is easy to infer that $ (u^t)^2 = \frac{N^2(r)+B^2(r) u^2}{N^4(r)}. $ There are two basic conservation laws that govern the evolution of a fluid in a black hole spacetime. One is the conservation law of the particle number, and another one is the conservation law of the energy momentum. The assumption of the conservation of the particle number implies there is no particle creation and/or annihilation during the accreting process. Defining the proper particle number density n and number current $ J^\mu = n u^\mu $ in the local inertial rest frame of the fluid, the conservation of the particle number gives $ \nabla_\mu J^\mu = \nabla_{\mu} (n u^\mu) = 0, $ where $ \nabla_\mu $ denotes the covariant derivative with respect to the coordinate. For the RZ parameterization of a generic spherically symmetric black hole spacetime, Eq. (9) can be rewritten as $ \frac{1}{r^2 B} \frac{\rm d}{{\rm d} r}(r^2 B n u) = 0. $ Integrating this equation, we obtain $ r^2 B n u = C_1, $ where $ C_1 $ is the integration constant. The conservation law of the energy momentum is expressed as $ \nabla_\mu T^{\mu \nu} = 0. 
$ It is also convenient to introduce the first law of the thermodynamics of the perfect fluid, which is given by [43] $ {\rm d}p = n ({\rm d}h - T {\rm d}s),\; \; \; {\rm d}\rho = h {\rm d}n +nT {\rm d}s, $ where T is the temperature, s is the specific entropy, and h is the specific enthalpy, defined as $ h \equiv \frac{\rho + p}{n}. $ Then, projecting the conservation law of the energy-momentum (12) along $ u^\mu $, one obtains $ \begin{aligned}[b] u_{\nu}\nabla_{\mu} T^{\mu\nu} & = u_{\nu} \nabla_{\mu} \Big[ nh u^{\mu} u^{\nu} + p g^{\mu\nu}\Big] \\ & = - n u^{\mu} \nabla_{\mu} h + u^{\mu} \nabla_{\mu} p. \end{aligned} $ In the above, we have used the conservation of the particle number, i.e., $ \nabla_{\mu} (n u^{\mu}) = 0 $ and $ u^{\mu} \nabla_{\nu} u_{\mu} = u_{\mu} \nabla_{\nu} u^{\mu} =$$ \frac{1}{2} \nabla_{\nu} (u^{\mu}u_{\mu}) = 0 $. Noticing that the first law of thermodynamics (13) can be rewritten as $ \nabla_{\mu }p = n \nabla_{\mu} h - n T \nabla_{\mu}s $, from the above projection one arrives at $ - n T u^{\mu} \nabla_{\mu} s = 0, $ implying that there is no heat transfer between the different fluid elements, and the specific entropy is conserved along the evolution lines of the fluid. For a parameterized spherically symmetric black hole, the conservation of the specific entropy reduces to $ \partial_r s = 0 $, i.e., $ s = $ constant. For this reason, the fluid is isentropic and Eq. (13) reduces to $ {\rm d}p = n {\rm d}h,\; \; \; \; {\rm d}\rho = h {\rm d}n. $ With the above thermodynamical properties of the perfect fluid, the conservation law of the energy-momentum (12) can be written as $ \begin{aligned}[b] \nabla_\mu T^{\mu}_{\nu} & = \nabla_{\mu} (h n u^\mu u_\nu) +\nabla_{\mu} (\delta^\mu_\nu p) \\ & = n u^{\mu} \nabla_{\mu} (h u_\nu) + n \nabla_{\nu }h \\ & = n u^{\mu} \partial_{\mu} (h u_\nu) - n u^{\mu} \Gamma_{\mu \nu}^\lambda h u_{\lambda}+ n \nabla_{\nu }h = 0. 
\end{aligned} $ Then, the time component $ \nu = t $ of the above equation yields $ \partial _r(hu_t) = 0. $ Integrating this relation for the parameterized spherically symmetric black hole considered in this paper, one arrives at $ h \sqrt{N^2 +B^2u^2} = C_2, $ where $ C_2 $ is an integration constant. This equation, together with Eq. (11), constitutes the two basic equations describing a radial, steady-state perfect fluid flow in the parameterized spherically symmetric black hole.

To proceed further, let us introduce several useful quantities for describing the accretion flow, which will be used in the subsequent analysis. The first quantity is the sound speed of the perfect fluid, which is defined by $ c_s^2 \equiv \frac{{\rm d} p}{{\rm d} \rho} = \frac{n}{h}\frac{{\rm d} h}{{\rm d} n} = \frac{{\rm d} \ln h }{{\rm d} \ln n}. $ On the other hand, by considering radial accretion flows, i.e., $ {\rm d} \theta = {\rm d} \phi = 0 $, the black hole metric can be decomposed as [44] $ {\rm d}s^2 = - (N {\rm d}t)^2 + \left(\frac{B}{N}{\rm d}r \right)^2, $ from which one can define an ordinary three-dimensional velocity v measured by a static observer as $ v \equiv \frac{B}{N^2} \frac{{\rm d}r}{{\rm d}t}. $ Considering $ u^{r} = u = {\rm d}r/{\rm d}\tau $ and $ u^t = {\rm d}t /{\rm d}\tau $, with $ \tau $ being the proper time of the fluid, one finds $ v^2 = \frac{B^2}{N^4} \left(\frac{u}{u^t}\right)^2 = \frac{B^2 u^2}{N^2+B^2 u^2}. $ Then, one can express $ u^2 $ and $ u_t^2 $ in terms of $ v^2 $ as $ u^2 = \frac{N^2 v^2}{B^2(1-v^2)}, $ $ u^2_t = \frac{N^2}{1-v^2}. $ These quantities will be used in the following dynamical systems analysis for the radial, steady-state perfect fluid flow in a parameterized spherically symmetric black hole.

IV. SONIC POINTS AND DYNAMICAL SYSTEMS ANALYSIS

The two basic equations (11) and (20) constitute a dynamical system for the radial accretion process.
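The velocity relations (24)-(26) above admit a quick numerical consistency check (our own illustration, not part of the paper; the sample values of $N^2$, $B^2$, and $u$ are arbitrary):

```python
import math

# Consistency check of Eqs. (24)-(26): starting from sample values of
# (N^2, B^2, u), Eq. (24) gives v^2, Eq. (25) inverts it back to u^2,
# and Eq. (26) reproduces u_t^2 = N^2 + B^2 u^2.

N2, B2, u = 0.5, 1.2, -0.3             # arbitrary sample values, u < 0 (infalling)
v2 = B2 * u**2 / (N2 + B2 * u**2)      # Eq. (24)
u2_back = N2 * v2 / (B2 * (1.0 - v2))  # Eq. (25)
ut2 = N2 / (1.0 - v2)                  # Eq. (26)

print(math.isclose(u2_back, u**2))         # -> True
print(math.isclose(ut2, N2 + B2 * u**2))   # -> True
```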
In this section, we use these equations to study the accretion process in a parameterized spherically symmetric black hole.

A. Sonic points

In the trajectories of an accretion flow into a black hole, there exists a specific point, called the sonic point, at which the four-velocity of the moving fluid becomes equal to the local speed of sound and the accretion flow attains its maximal accretion rate. To determine the sonic point, let us first take the derivative of the two basic equations (11) and (20) with respect to r, which leads to $ \left(v^2 - c_s^2\right) \frac{{\rm d} \ln v}{{\rm d}r} = \frac{1-v^2}{B N r} \left[ c_s^2 N B \left(2 +r \frac{{\rm d}\ln B}{{\rm d}r}\right)- B (1-c_s^2) r \frac{{\rm d}N}{{\rm d}r}\right]. $ At the sonic point $ r_* $ (where $ c_s^2(r_*) = v^2(r_*) $), one has $ c_{s*}^2 N_* B_* \left(2 +r_* \left.\frac{{\rm d}\ln B}{{\rm d}r}\right|_*\right)- B_* (1-c_{s*}^2) r_* \left.\frac{{\rm d}N}{{\rm d}r}\right|_* = 0, $ where $ * $ denotes values evaluated at the sonic point. This equation allows us to determine the sonic point once the speed of sound $ c_s^2 \equiv {\rm d}p/{\rm d}\rho $ is known. The above equation can be rewritten as $ u_*^2 = \frac{N_* r_*\left.\dfrac{{\rm d}N}{{\rm d}r}\right|_*}{B_*^2 \left(2 +r_* \left.\dfrac{{\rm d}\ln B}{{\rm d}r}\right|_*\right)}. $ Therefore, once $ r_* $ is determined, one can use this expression to find the value of u at the sonic point. The existence of the sonic point in the black hole spacetime exhibits a physically interesting accretion phenomenon: it signals transonic solutions that are supersonic near the black hole and subsonic far from it. In the following sections, we find the sonic points by using the equations obtained in this subsection and discuss the transonic phenomenon in detail for different fluids.

B. Dynamical system and critical points

From the two basic equations (11) and (20), we observe that there are two integration constants $ C_1 $ and $ C_2 $.
For this system, we may treat the square of the left-hand side of Eq. (20) as a Hamiltonian $ {\cal H} $ of the system, $ {\cal H} = h^2 (N^2+B^2 u^2), $ so that $ C_2 $ is kept fixed on every orbit in the phase space. Inserting Eq. (25) into the Hamiltonian $ {\cal H} $, one finds $ {\cal H}(r,v) = \frac{h^2(r,v)N^2}{1-v^2}. $ Then, the dynamical system associated with this Hamiltonian reads $ \dot{r} = {\cal H}_{,v} ,\; \; \; \; \dot{v} = -{\cal H}_{,r}, $ where the dot denotes the derivative with respect to $ \bar t $ (the time variable of the Hamiltonian dynamical system). Inserting the Hamiltonian, one finds $ \dot{r} \equiv f(r,v) = \frac{2 h^2 N^2}{v(1-v^2)^2} (v^2 - c_s^2), $ $ \dot{v} \equiv g(r,v) = -\frac{h^2 }{r (1-v^2)} \left[r N^2_{,r} (1 - c_s^2) - 4 N^2 c_s^2\right]. $ These equations constitute an autonomous, Hamiltonian two-dimensional dynamical system, whose orbits are composed of the solutions of the two basic Eqs. (11) and (20). In the construction of the above dynamical system, we considered the two quantities $ (r, v) $ as the dynamical variables of the system. It is worth mentioning that there are actually different ways to select the dynamical variables; for example, one may choose them to be $ (r, h) $, $ (r, p) $, or $ (r, u) $ [45]. At the critical points, the right-hand sides of Eqs. (33) and (34) vanish, so the critical points are the solutions of $ \dot{r} = 0 $ and $ \dot{v} = 0 $, $ v_*^2 = c_s^2, $ $ c_s^2 = \frac{r_* N^2_{*,r_*}}{r_* N^2_{*,r_*} + 4N^2_{*}} . $ It is easy to see that the sonic points are exactly the critical points of this dynamical system. Hereafter, we use $ (r_*, v_*) $ to denote the critical points. For a dynamical system, critical points can be divided into several different types.
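Before classifying the critical points, a quick numerical check (our own sketch, not from the paper) confirms that the right-hand sides of Eqs. (33) and (34) both vanish at the expected critical point in the Schwarzschild limit ($N^2 = 1-2M/r$, $B=1$) for an isothermal fluid with $c_s^2 = w = 1/3$, whose enthalpy obeys $h^2 \propto [(1-v^2)/(r^4 N^2 v^2)]^w$:

```python
# Check that f(r_*, v_*) = g(r_*, v_*) = 0, Eqs. (33)-(34), at the
# Schwarzschild critical point (r_*, v_*) = (3M, 1/sqrt(3)) for w = 1/3.
# The proportionality constant in h^2 is set to 1; it does not affect the zeros.

M, w = 1.0, 1.0 / 3.0

def rhs(r, v):
    N2 = 1.0 - 2.0 * M / r
    rN2p = 2.0 * M / r                                  # r * dN^2/dr
    h2 = ((1.0 - v**2) / (r**4 * N2 * v**2)) ** w       # isothermal enthalpy^2
    f = 2.0 * h2 * N2 / (v * (1.0 - v**2) ** 2) * (v**2 - w)          # Eq. (33)
    g = -h2 / (r * (1.0 - v**2)) * (rN2p * (1.0 - w) - 4.0 * N2 * w)  # Eq. (34)
    return f, g

f, g = rhs(3.0 * M, (1.0 / 3.0) ** 0.5)
print(abs(f) < 1e-12, abs(g) < 1e-12)    # -> True True
```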
To observe which critical points could arise from the black hole accretion processes, let us perform the following linearization of the dynamical system by Taylor-expanding Eqs. (33) and (34) around the critical points, i.e., $ \left( \begin{array}{c} {\delta \dot r} \\ {\delta \dot v} \end{array}\right) = X \left( \begin{array}{c} \delta r \\ \delta v \end{array}\right), $ where $ \delta r $, $ \delta v $ denote small perturbations of r, v about the critical points, and X is the Jacobian matrix of the dynamical system at the critical point $ (r_*, v_*) $, defined as $ X = \left(\begin{array}{*{20}{c}} \dfrac{\partial{f}}{\partial{r}} & \dfrac{\partial{f}}{\partial{v}} \\ \dfrac{\partial{g}}{\partial{r}} & \dfrac{\partial{g}}{\partial{v}} \end{array}\right)\Bigg|_{(r_*, v_*)}. $ Depending on the determinant $ \Delta = {\rm{det}} (X) $ of X and its trace $ \chi = {\rm Tr}(X) $, the types of the critical points $ (r_*, v_*) $ of the dynamical system can be summarized as follows:

● Saddle points if $ \Delta <0 $.
● Attracting nodes if $ \Delta >0 $, $ \chi < 0 $, and $ \chi^2-4\Delta >0 $.
● Attracting spirals if $ \Delta >0 $, $ \chi < 0 $, and $ \chi^2-4\Delta <0 $.
● Repelling nodes if $ \Delta >0 $, $ \chi > 0 $, and $ \chi^2-4\Delta >0 $.
● Repelling spirals if $ \Delta >0 $, $ \chi > 0 $, and $ \chi^2-4\Delta <0 $.
● Degenerate nodes if $ \Delta >0 $ and $ \chi^2-4\Delta = 0 $.
● Centers if $ \Delta >0 $ and $ \chi = 0 $.
● Line or plane critical points if $ \Delta = 0 $.

When the critical points and their types are determined, the constant $ C_1 $ in Eq. (11) can be rewritten in terms of the quantities evaluated at the critical point $ (r_*, v_*) $ as $ C^2 _1 = \frac{r^4 _* n^2 _* v^2 _* N^2_*}{1 -v^2 _*} = \frac{r^5 _* n^2 _* N^2_{*,r_*}}{4}.
$ This equation is satisfied not only at the critical point but also at any point on the same streamline in the phase portrait, so one can easily get $ \left(\frac{n}{n_*}\right)^2 = \frac{r^5_* N^2_{*,r_*}}{4}\frac{1- v^2}{r^4 N^2 v^2}. $ If the dynamical system (33), (34) has no critical point, one can introduce any reference point $ (r_0,v_0) $ from the phase portrait, obtaining [46] $ \left(\frac{n}{n_0}\right)^2 = \frac{r^4_0 N^2_0 v^2_0}{1- v^2_0}\frac{1- v^2}{r^4 N^2 v^2}. $ The above expressions will be used later to analyze the spherical accretion processes for some test fluids. V. APPLICATIONS TO TEST FLUIDS In this section, we consider the accretion processes of several test fluids, using the equations derived in the above sections for a parameterized spherically symmetric black hole. Specifically, we consider the isothermal and polytropic fluids in the following subsections. A. Isothermal test fluid In this subsection, we consider the accretion processes for isothermal (constant-temperature) fluids. The corresponding system can be viewed as an adiabatic one, owing to the fast movement of the fluid. For such a system, we define its equation of state (EoS) w as $ w\equiv p/\rho, $ where $ \rho $ and p represent the energy density and pressure of the fluid, respectively. It is worth noting that $ 0< w \leqslant 1 $ for isothermal fluids [20]. In addition, the adiabatic speed of sound is given by $ c_s^2 \equiv \dfrac{{\rm d} p}{{\rm d} \rho} = w $. According to $ h = (\rho+p)/n = (1+w)\rho/n $ and $ c_s^2 = {\rm d}\ln h/ {\rm d}\ln n = w $, we have $ \rho = \rho_0 \left(\frac{n}{n_0}\right)^{1+w}, $ $ h = \frac{(w+1) \rho_0}{n_0} \left(\frac{n}{n_0} \right)^w, $ where $ n_0 $ and $ \rho_0 $ denote the values of n and $ \rho $ evaluated at some reference point. Using Eq. (40), we arrive at $ h^2 = K\left( \frac{1- v^2}{r^4 N^2 v^2}\right)^w, $ where K is a constant.
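Equation (41) fixes the particle-density profile along a streamline once a reference point is chosen. A minimal numerical sketch, taking $N^2$, for illustration only, in the first-order RZ form written later in Eq. (47), with assumed parameter values:

```python
def N2(r, M=1.0, eps=0.1, a0=1e-4):
    """Illustrative lapse squared: first-order RZ form, cf. Eq. (47)."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    return f1*f2

def density_ratio(r, v, r0, v0):
    """n/n0 along a streamline, from Eq. (41)."""
    ref = r0**4*N2(r0)*v0**2/(1 - v0**2)
    return (ref*(1 - v**2)/(r**4*N2(r)*v**2))**0.5

print(density_ratio(5.0, 0.3, 5.0, 0.3))  # ~1 at the reference point itself
print(density_ratio(3.0, 0.6, 5.0, 0.3))  # > 1: the gas is compressed as it falls inward
```

The radius–velocity pairs fed in here are arbitrary sample points on a hypothetical streamline, chosen only to exercise the formula.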
Through the transformation $ \bar{t} \rightarrow K\bar{t} $ and $ {\cal H} \rightarrow {\cal H}/K $, the constant K is absorbed into the redefined time $ {\bar{t}} $. Then, the new Hamiltonian becomes $ {\cal H}(r,v) = \frac{N^{2(1-w)}}{(1-v^2)^{1-w} v^{2w} r^{4w}}. $ Considering the first-order RZ parameterization (taking only the first three terms of Eq. (3)), one can approximately write $ N^2 \simeq \left(1-\frac{2M}{r(1+\epsilon)}\right)\left[1+\frac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \frac{2 M \epsilon }{r(1+\epsilon)}\right]. $ Then, Eq. (46) can be approximately rewritten as $\begin{aligned} {\cal H} \simeq \frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{1-w} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{1-w}}{(1-v^2)^{1-w} v^{2w} r^{4w}}. \end{aligned}$ At the sonic point, with $ c_s^2 = w $, Eq. (34) reduces to $ w = \left. \frac{r N^2_{,r}}{r N^2_{,r} + 4 N^2} \right|_{r = r_*}. $ With Eq. (47), Eq. (49) can be approximately rewritten as $ w = \frac{M [-12 M^2 \epsilon + r_*^2 (1 + \epsilon)^3 + 4 a_0 M (3 M - r_* (1 + \epsilon))]}{LT_1}, $ $\begin{aligned}[b] LT_1 =& 4 M^3 \epsilon - 3 M r_*^2 (1 + \epsilon)^3 + 2 r_*^3 (1 + \epsilon)^3 \\&+ 4 a_0 M^2 (-M + r_* + r _*\epsilon).\end{aligned} $ 1. Solution for an ultra-stiff fluid ($ w=1 $) Let us first consider an ultra-stiff fluid, whose energy density is equal to its pressure. In this case, the equation of state is $ w = p/\rho = 1 $. The Hamiltonian (48) for the ultra-stiff fluid becomes $ {\cal H} = \frac{1}{v^2 r^4}. $ For physical flows, one has $ |v| <1 $. Therefore, the Hamiltonian (52) for the ultra-stiff fluid has a minimal value $ {\cal H}_{\rm min} = r_0^{-4} $. With Eq. (52), the two-dimensional dynamical system (33), (34) is $ \dot{r} = - \frac{2}{r^4 v^3}, $ $ \dot{v} = \frac{4}{r^5 v^2} . $ It is easy to see that this dynamical system has no critical points.
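Since the ultra-stiff Hamiltonian is bounded below by $ {\cal H}_{\rm min} = 1/r_0^4 $, the only input needed is the horizon radius, the outer root of $ N^2(r) = 0 $. A bisection sketch under the first-order RZ form (47), for which the outer root comes from the first factor, $ r_0 = 2M/(1+\epsilon) $; the parameter values are the illustrative ones used in the text:

```python
def N2(r, M=1.0, eps=0.1, a0=1e-4):
    """First-order RZ lapse squared, Eq. (47)."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    return f1*f2

def horizon(M=1.0, eps=0.1, a0=1e-4, lo=1.0, hi=4.0):
    """Outer root of N^2(r) = 0 by bisection.
    The bracket [lo, hi] must straddle the outer horizon (N^2 < 0 just inside)."""
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if N2(mid, M, eps, a0) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

r0 = horizon()
print(r0, 1/r0**4)  # ~1.81818 and ~0.0915063, matching the eps = 0.1 column of Table 1
```

Running this over $\epsilon = 0, 0.1, \ldots, 0.5$ reproduces the trend stated below: $r_0$ decreases and $ {\cal H}_{\rm min} $ increases with $\epsilon$.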
The phase space portrait of this dynamical system for the ultra-stiff fluid with $ M = 1 $, $ a_0 = 0.0001 $, and $ \epsilon = 0.1 $ for a general parameterized black hole is depicted in Fig. 1, in which the physical flow of the ultra-stiff fluid in the general parameterized black hole is represented by several curves with arrows. It is shown that the curves with $ v<0 $ have arrows directed toward the black hole, representing the accreting flow of the ultra-stiff fluid, while the curves with $ v>0 $ have arrows directed toward the outside, representing the outflow fluids. The green and red curves represent the flows with the minimal Hamiltonian $ {\cal H}_{\rm min} $ for the accretion and the outflow. All of the flows in Fig. 1 between the red and green curves are physical and have Hamiltonian $ {\cal H} > {\cal H}_{\rm min} $. Figure 1. (color online) Phase space portrait of the dynamical system (33), (34) for the ultra-stiff fluid ($ w = 1 $) with the black hole parameters $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $. The values of the minimal Hamiltonian $ {\cal H}_{\rm min} $ depend on the RZ parameterization parameters. In Table 1, the values of $ r_0 $ and $ {\cal H} $ for the different values of parameter $ \epsilon $ for the ultra-stiff fluid are presented. Since the horizon radius decreases with respect to $ \epsilon $, it is shown clearly that the minimal Hamiltonian $ {\cal H}_{\rm min} = 1/r_0^4 $ increases with increasing $ \epsilon $. $ \epsilon $ $ 0 $ $ 0.1 $ $ 0.2 $ $ 0.3 $ $ 0.4 $ $ 0.5 $ $ r_0 $ $ 2 $ $ 1.81818 $ $ 1.66667 $ $ 1.53846 $ $ 1.42857 $ $ 1.33333 $ ${\cal H}_{\rm min}$ $ 0.0625 $ $ 0.0915063 $ $ 0.1296 $ $ 0.178506 $ $ 0.2401 $ $ 0.316406 $ Table 1. Values of $ r_0 $ and $ {\cal H}_{\rm min} $ for different values of the black hole parameter $ \epsilon $ for the ultra-stiff fluid with $ w=1 $. In the calculation, we set $ M=1 $ and $ a_0=10^{-4} $. 2.
Solution for an ultra-relativistic fluid ($ w=1/2 $) Let us now consider an ultra-relativistic fluid, for which the equation of state is $ w = 1/2 $, i.e., $ p = \rho/2 $. In this case, the fluid's isotropic pressure is less than its energy density. With $ w = 1/2 $, the Hamiltonian (48) becomes $ {\cal H} = \dfrac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{1/2} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/2}}{ r^{2}|v|(1-v^2)^{1/2} }, $ and then, the two-dimensional dynamical system (33), (34) is $ \begin{aligned}[b] \dot{r} = &\frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{1/2} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/2}}{ r^{2}(1-v^2)^{3/2} } \\ &-\frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{1/2} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/2}}{ r^{2}v^2(1-v^2)^{1/2} }, \end{aligned}$ $ \begin{aligned}[b] \dot{v} = &\frac{2\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{1/2} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/2}}{ r^{3}|v|(1-v^2)^{1/2} } \\ & - \frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right) \left[-\dfrac{8 M^2 (a_0- \epsilon)}{r^3 (1+\epsilon)^2} + \dfrac{2 M \epsilon }{r^2 (1+\epsilon)}\right]}{ LT_2} \\ &-\frac{2 M \left(1 + \dfrac{4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{2 M\epsilon}{r (1 + \epsilon)}\right)}{r^2 (1 + \epsilon) LT_2}, \end{aligned} $ $\begin{aligned}[b] LT_2 =& 2r^{2}|v|(1-v^2)^{1/2} \left(1-\frac{2M}{r(1+\epsilon)}\right)^{1/2} \\&\times\left[1+\frac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \frac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/2}. \end{aligned}$ For some given value of $ {\cal H} $, one can obtain $ v^2 $ from Eq.
(55), $ v^2 = \frac{1 \pm \sqrt{1+\dfrac{4F(r)}{r^{4} {\cal H}_0^2}}}{2}, $ $ \begin{aligned}[b] F(r) = &-1 + \frac{2 M}{r} + \frac{8 M^3}{r^3 (1 + \epsilon)^3} + \frac{8 a_0 M^3}{r^3 (1 + \epsilon)^3} \\&- \frac{8 M^3}{r^3 (1 + \epsilon)^2} - \frac{4 a_0 M^2}{r^2 (1 + \epsilon)^2}. \end{aligned} $ In addition, one can obtain the critical points of the accretion process for the ultra-relativistic fluid by solving the two-dimensional dynamical system, when the right-hand sides of Eqs. (56) and (57) both vanish. With the black hole parameters set to $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $, one obtains the physical critical points $ (r_*, \pm v_*) $, i.e., $ (2.30139, -0.707107) $ and $ (2.30139, 0.707107) $ for the accreting flow and the outflow, respectively. Inserting these critical point values $ (r_*, \pm v_*) $ into Eq. (55), one finds the critical Hamiltonian $ {\cal H}_* = 0.160335 $. The values of $ r_* $, $ \pm v_* $, and $ {\cal H}_* $ at the sonic point with different values of the black hole parameters are summarized in Table 2 for $ w = 1/2 $, $ M = 1 $, and $ a_0 = 0.0001 $. Clearly, as the value of the black hole parameter $ \epsilon $ increases, the following occurs: (1) the value of $ r_* $ at the sonic point decreases, while the distance from the horizon $ r_0 $ to the critical point increases; (2) the values of the velocity $ \pm v_* $ at the sonic points are two constants, because they are equal to the fluid's speed of sound; and (3) the value of the Hamiltonian for the fluid at the critical points increases. We also show the behavior of the critical radius $ r_* $ with respect to the black hole parameter $ \epsilon $ for different values of parameter $ a_0 $ in Fig. 2, which shows that the critical radius $ r_* $ decreases with increase in the black hole parameters $ \epsilon $ and $ a_0 $.
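For any isothermal EoS parameter w, the sonic-point condition (49) is a one-dimensional root-finding problem, and the critical Hamiltonian then follows from Eq. (48) with $ v_*^2 = w $. A bisection sketch, using the analytic derivative of the first-order RZ form (47) and the illustrative parameter values of the text:

```python
def N2(r, M=1.0, eps=0.1, a0=1e-4):
    """First-order RZ lapse squared, Eq. (47)."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    return f1*f2

def dN2(r, M=1.0, eps=0.1, a0=1e-4):
    """Analytic d(N^2)/dr for the first-order RZ form."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    df1 = 2*M/(r**2*(1+eps))
    df2 = -8*M**2*(a0-eps)/(r**3*(1+eps)**2) + 2*M*eps/(r**2*(1+eps))
    return df1*f2 + f1*df2

def sonic_radius(w, lo=2.0, hi=4.0):
    """Solve Eq. (49), w = r N2'/(r N2' + 4 N2), i.e.
    (1 - w) r N2' - 4 w N2 = 0, by bisection on [lo, hi]."""
    s = lambda r: (1 - w)*r*dN2(r) - 4*w*N2(r)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if s(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

w = 0.5                                                     # ultra-relativistic fluid
r_s = sonic_radius(w)
H_s = N2(r_s)**(1 - w)/((1 - w)**(1 - w)*w**w*r_s**(4*w))   # Eq. (48) at v^2 = w
print(r_s, H_s)  # ~2.30139 and ~0.16034, the eps = 0.1 column of Table 2
```

The same routine with $w = 1/3$ or $w = 1/4$ reproduces the radiation and sub-relativistic sonic radii discussed in the following subsections.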
$ r_* $ $ 2.49998 $ $ 2.30139 $ $ 2.14917 $ $ 2.04111 $ $ 1.97876 $ $ 1.96016 $ $ v_* $ $ 0.70711 $ $ 0.70711 $ $ 0.70711 $ $ 0.70711 $ $ 0.70711 $ $ 0.70711 $ $ {\cal H}_* $ $ 0.14311 $ $ 0.16034 $ $ 0.17465 $ $ 0.18507 $ $ 0.19098 $ $ 0.19271 $ Table 2. Values of $ r_* $, $ v_* $, and $ {\cal H}_* $ at the sonic point, for different values of the black hole parameter $ \epsilon $ for the ultra-relativistic fluid with $ w=1/2 $. We use $ M=1 $ and $ a_0 = 0.0001 $ in the calculation. Figure 2. (color online) Relation between $ r_* $ and $ \epsilon $ for different $ a_0 $ in the spherical accretion process for the ultra-relativistic fluid ($ w = 1/2 $). The phase space portrait of this dynamical system for the ultra-relativistic fluid with $ M = 1 $, $ a_0 = 0.0001 $, and $ \epsilon = 0.1 $ for a general parameterized black hole is depicted in Fig. 3, in which the physical flow of the ultra-relativistic fluid for a general parameterized black hole is represented by several curves. Clearly, both the critical points in Fig. 3, $ (r_*, v_*) $ and $ (r_*, -v_*) $, are saddle points of the dynamical system. The five curves in Fig. 3 correspond to the different values of the Hamiltonian $ {\cal H}_0 = \{{\cal H}_*-0.05,\; {\cal H}_*- 0.02,\; {\cal H}_*,\; {\cal H}_*+ 0.03,\; {\cal H}_*+ 0.08\} $. This plot shows several different types of fluid motion. The magenta (with $ {\cal H} = {\cal H}_* +0.08 $) and blue (with $ {\cal H} = {\cal H}_* +0.03 $) curves correspond to the purely supersonic accretion ($ v <- v_* $ branches), purely supersonic outflow ($ v > v_* $ branches), or purely subsonic accretion followed by the subsonic outflow ($ -v_*< v < v_* $ branches). The red (with $ {\cal H} = {\cal H}_* -0.02 $) and green (with $ {\cal H} = {\cal H}_* -0.05 $) curves correspond to the non-physical behavior of the fluid. Figure 3. 
(color online) Phase space portrait of the dynamical system (33), (34) for the ultra-relativistic fluid ($ w = 1/2 $), for the black hole parameters $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $. The critical (sonic) points $ (r_*, \pm v_*) $ of this dynamical system are presented by the black spots in the figure. The five colored curves (black, red, green, magenta, and blue) correspond to the Hamiltonian values $ {\cal H} = {\cal H}_*,\; {\cal H}_*-0.02,\; {\cal H}_*- 0.05,\; {\cal H}_*+0.03, {\rm{and}}\ {\cal H}_* + 0.08 $. The most interesting solution of the fluid motion is depicted by the black curves in Fig. 3, revealing the transonic behavior of the fluid outside the black hole horizon. For $ v<0 $, there are two black curves that go through the sonic point $ (r_*, -v_*) $. One solution starts at spatial infinity with a sub-sonic flow followed by a supersonic flow after it crosses the sonic point, which corresponds to the standard nonrelativistic accretion considered by Bondi in [4]. The other solution, which starts at spatial infinity with a supersonic flow but becomes sub-sonic after it crosses the sonic point, is unstable, according to the analysis presented in [46]; such behavior is very difficult to achieve. For $ v>0 $, there are two solutions as well. One solution, which starts at the horizon with a supersonic flow followed by a sub-sonic flow after it crosses the sonic point, corresponds to the transonic solution of the stellar wind, as discussed in [4] for the non-relativistic accretion. The other solution, similar to the $ v<0 $ case, is unstable and too difficult to achieve [46]. Here, we would like to add several remarks about the physical explanations of the flows in Fig. 3 for different values of the Hamiltonian $ {\cal H} $. In general, different values of the Hamiltonian represent different initial states of the dynamical system.
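This branch structure can be read off directly from Eq. (59): on a contour with $ {\cal H} $ slightly above $ {\cal H}_* $, the two $ v^2 $ roots nearly touch at the sonic radius and separate away from it, while a contour with $ {\cal H} < {\cal H}_* $ does not reach the sonic radius at all. A sketch with the parameter values of Fig. 3 (assumed, as above):

```python
import math

def N2(r, M=1.0, eps=0.1, a0=1e-4):
    """First-order RZ lapse squared, Eq. (47)."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    return f1*f2

def v2_branches(r, H):
    """The two v^2 roots of Eq. (59) on a contour of constant H, with F(r) = -N^2(r)."""
    disc = 1 - 4*N2(r)/(r**4*H**2)
    if disc < 0:
        return None                    # this contour does not reach radius r
    s = math.sqrt(disc)
    return (1 - s)/2, (1 + s)/2        # subsonic and supersonic branches

H = 1.001*0.160335                     # just above the critical value H_* quoted in the text
print(v2_branches(2.30139, H))         # both roots close to v_*^2 = 1/2 near the sonic radius
print(v2_branches(10.0, H))            # strongly split: the accretion branch is deeply subsonic
```

For $ {\cal H} $ below $ {\cal H}_* $ the discriminant turns negative around the sonic radius, which is the phase-space picture behind the turning points of the green and red curves.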
For the transonic solution of the ultra-relativistic fluid, its Hamiltonian can be evaluated at the sonic point. A Hamiltonian with a value different from the transonic one does not represent any transonic solution of the flow. For example, the green curve shows the subcritical fluid flow, since such a flow does not pass through the critical point and fails to reach it. In fact, such solutions have a turning or bouncing point, which is the nearest point reachable by such fluids, beyond which they are bounced back or turned around to infinity. A similar explanation holds for the red curves. The curves shown in blue and magenta can be termed super-critical flows. Although such fluids do not go through the critical point either, they already possess velocities above the allowed critical value. Such flows end up entering the black hole horizon. It is also worth mentioning that a similar analysis also applies to other fluids, including radiation, sub-relativistic, and polytropic fluids. 3. Solution for a radiation fluid ($ w=1/3 $) For a radiation fluid, the equation of state is $ w = 1/3 $.
In this case, the Hamiltonian (48) becomes $ {\cal H} = \frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{2/3} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{2/3}}{r^{4/3}|v|^{2/3} (1-v^2)^{2/3} }, $ $ \begin{aligned}[b] \dot{r} = &\frac{4 v^{1/3} \left(1 - \dfrac{2 M}{r (1 +\epsilon)}\right)^{2/3} \left[1 + \dfrac{ 4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{ 2 M \epsilon}{r (1 + \epsilon)}\right]^{2/3}}{3 r^{ 4/3} (1 - v^2)^{5/3}} \\ &-\frac{2 \left(1 - \dfrac{2 M}{r (1 +\epsilon)}\right)^{2/3} \left[1 + \dfrac{ 4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{ 2 M \epsilon}{r (1 + \epsilon)}\right]^{2/3}}{3 r^{ 4/3}|v|^{5/3} (1 - v^2)^{2/3}}, \end{aligned} $ $ \begin{aligned}[b] \dot{v} = &\frac{4\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{2/3} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{2/3}}{3 r^{7/3}|v|^{2/3}(1-v^2)^{2/3} } \\ & - \frac{2\left(1-\dfrac{2M}{r(1+\epsilon)}\right) \left[-\dfrac{8 M^2 (a_0- \epsilon)}{r^3 (1+\epsilon)^2} + \dfrac{2 M \epsilon }{r^2 (1+\epsilon)}\right]}{ LT_3} \\ &-\frac{4 M \left(1 + \dfrac{4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{2 M\epsilon}{r (1 + \epsilon)}\right)}{r^2 (1 + \epsilon) LT_3}, \end{aligned} $ $\begin{aligned}[b] LT_3 =& 3 r^{4/3} |v|^{2/3} (1-v^2)^{2/3} \left(1-\frac{2M}{r(1+\epsilon)}\right)^{1/3}\\&\times\left[1+\frac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \frac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/3}. \end{aligned} $ The sonic points can be found by solving the above two-dimensional dynamical system, when the right-hand sides of Eqs. (62) and (63) both vanish. The values of the critical radius $ r_* $, sound speed $ \pm v_* $, and critical Hamiltonian $ {\cal H}_* $ for different values of $ \epsilon $ are summarized in Table 3 for $ w = 1/3 $, $ M = 1 $, and $ a_0 = 0.0001 $.
Similar to the ultra-relativistic fluid, the critical radius decreases with increasing $ \epsilon $, while the critical Hamiltonian $ {\cal H}_* $ increases. We also illustrate the behavior of the critical radius $ r_* $ for the radiation fluid with respect to $ \epsilon $ for different values of $ a_0 $ in Fig. 4. $ r_* $ $ 2.99996 $ $ 2.8096 $ $ 2.67692 $ $ 2.59413 $ $ 2.55246 $ $ 2.54108 $ $ {\cal H}_* $ $ 0.20999 $ $ 0.22082 $ $ 0.22845 $ $ 0.23316 $ $ 0.2355 $ $ 0.23613 $ Table 3. Values of $ r_* $, $ v_* $, and $ {\cal H}_* $ at the sonic point, for different values of the black hole parameter $ \epsilon $ for the radiation fluid with $ w=1/3 $. We use $ M=1 $ and $ a_0 = 0.0001 $ in the calculation. Figure 4. (color online) Relation between $ r_* $ and $ \epsilon $ for different $ a_0 $ in the spherical accretion process for the radiation fluid ($ w = 1/3 $). The phase space portrait of this dynamical system for the radiation fluid with $ M = 1 $, $ a_0 = 0.0001 $, and $ \epsilon = 0.1 $ is displayed in Fig. 5, in which the physical flow of the radiation fluid for a general parameterized black hole is represented by several curves. One can see that both the critical points in Fig. 5, $ (r_*, v_*) $ and $ (r_*, -v_*) $, are saddle points of the dynamical system. From Fig. 5, one also observes that the radiation fluid ($ w = 1/3 $) shares the same types of fluid motion as the ultra-relativistic fluid ($ w = 1/2 $), as shown in Fig. 3. Similar to Fig. 3, the magenta and blue curves represent the supersonic flows for $ v<-v_* $ or $ v>v_* $, while they correspond to sub-sonic flows if $ -v_* < v< v_* $. The transonic solutions are presented by the black curves. For $ v<0 $, one of the black curves, which starts at spatial infinity with a sub-sonic flow and then becomes supersonic after it crosses the sonic point $ (r_*, -v_*) $, corresponds to the standard transonic accretion, and another black curve represents an unstable solution.
For $ v>0 $, one black curve corresponds to the transonic outflow of wind, and another one represents an unstable flow, similar to the case of the ultra-relativistic fluid. The green and red curves are non-physical solutions. Figure 5. (color online) Phase space portrait of the dynamical system (33), (34) for the radiation fluid ($ w = 1/3 $) for black hole parameters $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $. The critical (sonic) points $ (r_*, \pm v_*) $ of this dynamical system are presented by the black spots in the figure. The five colored curves (black, red, green, magenta, and blue) correspond to the values of the Hamiltonian $ {\cal H} = {\cal H}_*,\; {\cal H}_*-0.03,\; {\cal H}_*- 0.05,\; {\cal H}_*+0.05, {\rm{and}}\ {\cal H}_* + 0.1 $, respectively. 4. Solution for a sub-relativistic fluid ($ w=1/4 $) Let us now consider a sub-relativistic fluid, whose energy density exceeds its isotropic pressure; the equation of state for such a fluid is $ w = 1/4 $. In this case, the Hamiltonian (48) takes the form $ {\cal H} = \frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{3/4} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{3/4}}{r\sqrt{|v|}(1-v^2)^{3/4}} , $ and then the two-dimensional dynamical system is $ \begin{aligned}[b] \dot{r} = &\frac{3\sqrt{ |v|} \left(1 - \dfrac{2 M}{r (1 +\epsilon)}\right)^{3/4} \left[1 + \dfrac{ 4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{ 2 M \epsilon}{r (1 + \epsilon)}\right]^{3/4}}{2r (1 - v^2)^{7/4}} \\ &-\frac{ \left(1 - \dfrac{2 M}{r (1 +\epsilon)}\right)^{3/4} \left[1 + \dfrac{ 4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{ 2 M \epsilon}{r (1 + \epsilon)}\right]^{3/4}}{ 2 r|v|^{3/2} (1 - v^2)^{3/4}}, \\[-15pt] \end{aligned} $ $ \begin{aligned}[b] \dot{v} = &\frac{\left(1-\dfrac{2M}{r(1+\epsilon)}\right)^{3/4} \left[1+\dfrac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \dfrac{2 M \epsilon }{r(1+\epsilon)}\right]^{3/4}}{ r^{2}\sqrt{|v|}(1-v^2)^{3/4}
} \\ &- \frac{3\left(1-\dfrac{2M}{r(1+\epsilon)}\right) \left[-\dfrac{8 M^2 (a_0- \epsilon)}{r^3 (1+\epsilon)^2} + \dfrac{2 M \epsilon }{r^2 (1+\epsilon)}\right]}{ LT_4} \\ &-\frac{6 M \left(1 + \dfrac{4 M^2 (a_0 - \epsilon)}{r^2 (1 + \epsilon)^2} - \dfrac{2 M\epsilon}{r (1 + \epsilon)}\right)}{r^2 (1 + \epsilon) LT_4}, \end{aligned} $ $\begin{aligned}[b] LT_4 =& 4 r \sqrt{|v|} (1-v^2)^{3/4} \left(1-\frac{2M}{r(1+\epsilon)}\right)^{1/4}\\&\times\left[1+\frac{4 M^2 (a_0- \epsilon)}{r^2 (1+\epsilon)^2} - \frac{2 M \epsilon }{r(1+\epsilon)}\right]^{1/4}. \end{aligned}$ For this dynamical system, similar to the above two cases, we present the values of $ r_* $, $ v_* $, and $ {\cal H}_* $ for different values of $ \epsilon $ in Table 4 for $ w = 1/4 $, $ M = 1 $, and $ a_0 = 0.0001 $. We also plot the behavior of the critical radius $ r_* $ for the sub-relativistic fluid with respect to the black hole parameter $ \epsilon $ for different values of $ a_0 $; this is shown in Fig. 6. Clearly, the critical radius $ r_* $ decreases with increasing $ \epsilon $ and $ a_0 $. $ v_* $ $ 0.5 $ $ 0.5 $ $ 0.5 $ $ 0.5 $ $ 0.5 $ $ 0.5 $ Table 4. Values of $ r_* $, $ v_* $, and $ {\cal H}_* $ at the sonic point, for different values of the black hole parameter $ \epsilon $ for the sub-relativistic fluid with $ w=1/4 $. The black hole parameters M and $ a_0 $ are set to $ M=1 $ and $ a_0 = 0.0001 $. Figure 6. (color online) Relation between $ r_* $ and $ \epsilon $ for different $ a_0 $ in the spherical accretion process for the sub-relativistic fluid ($ w=1/4 $). The phase space portrait of the dynamical system for the sub-relativistic fluid is shown in Fig. 7. From this figure, we observe that the type of fluid motion for the sub-relativistic fluid ($ w = 1/4 $) is the same as those for the ultra-relativistic fluid ($ w = 1/2 $) and the radiation fluid ($ w = 1/3 $).
For $ v>v_* $, the magenta and blue curves are purely supersonic outflows, while for $ v<-v_* $, they represent supersonic accretions. For $ -v_* < v < v_* $, these curves are sub-sonic flows. The black curves shown in Fig. 7 are more interesting since they represent the transonic solution of the spherical accretion for $ v<0 $ and spherical outflow for $ v>0 $ around the black hole. Similar to the results for the ultra-relativistic and radiation fluids, the red and green curves represent non-physical solutions. Figure 7. (color online) Phase space portrait of the dynamical system (33), (34) for the sub-relativistic fluid ($ w = 1/4 $), for the black hole parameters $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $. The parameters are $ r_0 \simeq 1.81818 $, $ r_* \simeq 3.32303 $, $ v_* \simeq 0.5 $. Black plot: the solution curve through the saddle CPs ($ r_*, v_* $) and ($ r_*, -v_* $) for which $ {\cal H} = {\cal H}_* \simeq 0.272806 $. Red plot: the solution curve for which $ {\cal H} = {\cal H}_*- 0.03 $. Green plot: the solution curve for which $ {\cal H} = {\cal H}_*- 0.05 $. Magenta plot: the solution curve for which $ {\cal H} = {\cal H}_* + 0.03 $. Blue plot: the solution curve for which $ {\cal H} = {\cal H}_* + 0.1 $. B. Polytropic test fluid The state of a polytropic test fluid can be described by $ p = \kappa n^\gamma , $ where $ \kappa $ and $ \gamma $ are constants. For ordinary matter, one generally works with the constraint $ \gamma > 1 $. Following [45], we obtain the following expressions for the specific enthalpy: $ h = m +\frac{\kappa \gamma n^{\gamma -1}}{\gamma - 1}, $ where the constant of integration has been identified with the baryonic mass m. The three-dimensional speed of sound is given by $ c_s^2 = \frac{(\gamma -1) Y}{m(\gamma -1) + Y}\; \; \; (Y \equiv \kappa \gamma n^{\gamma -1}). $ Using Eq. (41) in Eq. 
(71), we obtain $ h = m\left[ 1+ Z \left(\frac{1-v^2}{r^4 N^2 v^2}\right)^{(\gamma -1)/2}\right], $ $ Z \equiv \frac{\kappa \gamma}{m(\gamma -1)} \left| C_1 \right|^{\gamma - 1} = {\rm{const}}.>0, $ and Z is a positive constant. If critical points exist, Z takes the special form $ Z \equiv \frac{\kappa \gamma n^{\gamma -1}_*}{m(\gamma -1)} \left(\frac{r^5_* N^2_{*,r_*}}{4}\right)^{(\gamma - 1)/2} = {\rm{const}}.>0 . $ The constant Z depends on the black hole parameters and the test fluid. From Eq. (74), it is clear that Z is roughly proportional to $ \kappa n_* / m $ for a given black hole solution and certain test fluids. Inserting Eq. (72) into Eq. (31), we evaluate the Hamiltonian by $ {\cal H} = \frac{N^2}{1 - v^2}\left[1 + Z \left( \frac{1 - v^2}{r^4 N^2 v^2}\right)^{(\gamma -1)/2}\right]^2, $ where $ m^2 $ has been absorbed into a redefinition of $ (\bar{t},{\cal H}) $. Obviously, $ N^2(r) >0 $ and $ N^2_{,r} > 0 $ for all r. This means that the constant $ Z >0 $ (recall that $ \gamma >1 $). It is easy to see that there are no global solutions, since the Hamiltonian remains constant along the solution curves. Notice that since $ \gamma > 1 $, the solution curves do not cross the r axis at points where $ v = 0 $ and $ r \ne r_0 $; otherwise, the Hamiltonian (75) would diverge there. The point on the r axis which the solution curves may cross is only $ (r_0,0) $. The horizon $ r = r_0 $ is a single root to $ N^2(r) = 0 $, in the vicinity of which v behaves as $ \left|v\right| \propto \left| r-r_0\right|^{\textstyle\frac{2-\gamma}{2(\gamma -1)}}. $ We see that only solutions with $ 1<\gamma< 2 $ may cross the r axis. Here, $ {\cal H}(r_0,0) $ is the limit of $ {\cal H}(r,v) $ as $ (r,v) $ $ \to $ $ (r_0,0) $. When $ 1<\gamma< 2 $, the pressure $ p = \kappa n^\gamma $ diverges at the horizon as $ p \propto \left| r-r_0\right|^{\textstyle\frac{-\gamma}{2(\gamma -1)}}. 
$ Then, inserting $ Y = m(\gamma-1) Z \left(\frac{1-v^2}{r^4 N^2 v^2}\right)^{(\gamma -1)/2} $ into Eq. (71), we obtain $ c_s^2 = Z (\gamma -1- c_s^2) \left(\frac{1-v^2}{r^4 N^2 v^2}\right)^{(\gamma -1)/2} . $ This, along with Eq. (36), takes the form of the following expressions at the critical points ($ c_s^2(r_*) = $ $ v^2(r_*) = v^2_* $): $ c_s^2(r_*) = Z (\gamma -1- v^2_*) \left(\frac{1-v^2_*}{r^4_* N^2_* v^2_*}\right)^{(\gamma -1)/2}, \; $ $ v^2_* = \frac{M [-12 M^2 \epsilon + r_*^2 (1 + \epsilon)^3 + 4 a_0 M (3 M - r_* (1 + \epsilon))]}{LT_1}. $ Here, we have used Eq. (36) to obtain the right-hand side of Eq. (81). If there are critical points, the solution of this system of equations in $ (r_*,v_*) $ provides all the critical points, with a given value of the positive constant Z. Then, one can use the values of the critical points to deduce $ n_* $ from Eq. (74). Numerical solutions to the dynamical system of Eqs. (80) and (81) are shown in Fig. 8. Clearly, there is only one critical point, a saddle point, in the accretion ($ -1<v<0 $) of a polytropic test fluid. The motion types for the polytropic test fluids, as shown in Fig. 8, are the same as the motion types for the isothermal test fluids with $ w = 1/2 $ (cf. Fig. 3), $ w = 1/3 $ (cf. Fig. 5), and $ w = 1/4 $ (cf. Fig. 7). Figure 8. (color online) Accretion of a polytropic test fluid. Contour plots of the Hamiltonian (75) for $ \gamma = 5/3 $ and $ Z = 5 $, for the black hole parameters $ M = 1 $, $ \epsilon = 0.1 $, and $ a_0 = 0.0001 $. The parameters are $ r_0 \simeq 1.81818 $, $ r_* \simeq 2.30998 $, $ v_* \simeq 0.704098 $. Black plot: the solution curve through the saddle CPs ($ r_*, v_* $) and ($ r_*, -v_* $) for which $ {\cal H} = {\cal H}_* \simeq 5.5208 $. Red plot: the solution curve for which $ {\cal H} = {\cal H}_*-0.3 $. Green plot: the solution curve for which $ {\cal H} = {\cal H}_*-1.0 $. Magenta plot: the solution curve for which $ {\cal H} = {\cal H}_*+0.5 $.
Blue plot: the solution curve for which $ {\cal H} = {\cal H}_*+1.0 $. VI. CORRESPONDENCE BETWEEN SONIC POINTS OF PHOTON GAS AND PHOTON SPHERE Recently, a correspondence was shown between the sonic points of the ideal photon gas and the photon sphere in static spherically symmetric spacetimes [37]. This important result is valid not only for spherical accretion of the ideal photon gas but also for rotating accretion in static spherically symmetric spacetimes [38, 39, 47]. In this section, we establish this correspondence for parameterized spherically symmetric black holes. Let us first consider the spherical accretion of the ideal photon gas and derive the corresponding sonic points. The equation of state for the ideal photon gas in d-dimensional space is $ h = \frac{k \gamma}{\gamma -1} n^{\gamma -1}, $ $ \gamma = \frac{d+1}{d}, $ where k is a constant related to the entropy [37]. The speed of sound for the ideal photon gas is the constant $ c_s^2 \equiv \frac{{\rm d} \ln h }{{\rm d} \ln n} = \gamma -1. $ For general parameterized spherically symmetric spacetimes ($ d = 3 $), the equation of state of the ideal photon gas becomes $ h = 4 k n^{1/3}, $ and the speed of sound of the ideal photon gas is $ c_s^2 = 1/3 $. For the accretion of the ideal photon gas in parameterized spherically symmetric spacetimes, the radius $ r_* $ of the sonic point is specified by $ \frac{\rm d }{{\rm d} r}\left(\frac{N}{r}\right) = 0. $ To proceed, let us derive the photon sphere by analyzing the evolution of a photon in a parameterized spherically symmetric black hole. The photon follows the null geodesics in a given black hole spacetime. As the spacetime is spherically symmetric, we can perform the calculations in the equatorial plane $ \theta = \pi/2 $.
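Before turning to the geodesic derivation, condition (86) can already be solved numerically: $ \frac{\rm d}{{\rm d}r}(N/r) = 0 $ is equivalent to $ r N^2_{,r} = 2N^2 $, so a bisection on the first-order RZ form (47) suffices. For $ \epsilon = a_0 = 0 $ the root is the Schwarzschild value $ 3M $, which serves as a consistency check; the nonzero parameter values below are illustrative:

```python
def N2(r, M=1.0, eps=0.0, a0=0.0):
    """First-order RZ lapse squared, Eq. (47)."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    return f1*f2

def dN2(r, M=1.0, eps=0.0, a0=0.0):
    """Analytic d(N^2)/dr for the first-order RZ form."""
    f1 = 1 - 2*M/(r*(1+eps))
    f2 = 1 + 4*M**2*(a0-eps)/(r**2*(1+eps)**2) - 2*M*eps/(r*(1+eps))
    df1 = 2*M/(r**2*(1+eps))
    df2 = -8*M**2*(a0-eps)/(r**3*(1+eps)**2) + 2*M*eps/(r**2*(1+eps))
    return df1*f2 + f1*df2

def photon_sphere(M=1.0, eps=0.0, a0=0.0, lo=2.2, hi=5.0):
    """Root of r N2'(r) - 2 N2(r) = 0, i.e. d/dr (N/r) = 0, by bisection."""
    g = lambda r: r*dN2(r, M, eps, a0) - 2*N2(r, M, eps, a0)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

print(photon_sphere())                   # 3.0: the Schwarzschild photon sphere
print(photon_sphere(eps=0.1, a0=1e-4))   # ~2.8096: equal to the w = 1/3 sonic radius of Table 3
```

That the second value coincides with the radiation-fluid sonic radius is no accident: $ c_s^2 = 1/3 $ makes the photon-gas sonic condition identical to the isothermal condition (49) at $ w = 1/3 $.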
To find the null geodesics around the black hole we can use the Hamilton-Jacobi equation, given as follows: $ \frac{\partial S}{\partial \lambda} = -\frac{1}{2}g^{\mu\nu}\frac{\partial S}{\partial x^\mu}\frac{\partial S}{\partial x^\nu}, $ where $ \lambda $ is the affine parameter of the null geodesic, and S denotes the Jacobi action of the photon. The Jacobi action S can be separated in the following form: $ S = -Et+L\phi+S_r(r), $ where E and L represent the energy and the angular momentum of the photon, respectively. The function $ S_r(r) $ depends only on r. Substituting the Jacobi action into the Hamilton-Jacobi equation, we obtain $ S_r(r) = \int^r\frac{B^2(r)\sqrt{R(r)}}{r^2 N^2(r)}{\rm d}r, $ $ R(r)= -\frac{r^2 N^2(r)L^2}{B^2(r)}+\frac{r^4 E^2}{B^2(r)}. $ The variation of the Jacobi action gives the following equations of motion for the evolution of the photon: $ \frac{{\rm d}t}{{\rm d}\lambda} = \frac{E}{N^2(r)}, $ $ \frac{{\rm d}\phi}{{\rm d}\lambda} =\frac{L}{r^2}, $ $ \frac{{\rm d}r}{{\rm d}\lambda} = \frac{\sqrt{R(r)}}{r^2}. $ To determine the radius of the photon sphere of the black hole, we need to find the critical circular orbit for the photon, which can be derived from the unstable condition $ R(r) = 0,\qquad \frac{{\rm d}R(r)}{{\rm d}r} = 0. $ For a parameterized spherically symmetric black hole, from the above conditions, one finds $ \frac{\rm d}{{\rm d}r}\left(\frac{N}{r}\right) = 0. $ This is the condition for determining the radius of the photon sphere. Clearly, Eq. (86) is actually the same as Eq. (95), which means that the critical radius $ r_* $ of a sonic point, for the accretion of the ideal photon gas in parameterized spherically symmetric spacetimes, is equal to the radius of the photon sphere. With Eq. (95), one obtains the critical radius $ r_* $ and the radius of the photon sphere in parameterized spherically symmetric spacetimes $ r_* = \left. \left(\frac{1}{N} \frac{{\rm d} N}{{\rm d}r} \right)^{-1} \right|_{r = r_*}. 
$ By substituting Eq. (47) into the above equation, we obtain the expression for $ r_* $, $\begin{aligned}[b] r_* =& M + \frac{(1 + {\rm{i}} \sqrt{3}) M^2 (1 + \epsilon) [-8 a_0 + 3 (1 + \epsilon)^2]}{2LT_{5}} \\&+ \frac{(1 - {\rm{i}} \sqrt{3}) LT_{5}}{6 (1 + \epsilon)^3}, \end{aligned}$ $\begin{aligned} LT_{5} =& [-27 M^3 (1 + \epsilon)^6 (1 + a_0 (6 - 4 \epsilon) - 7 \epsilon + 3 \epsilon^2 + \epsilon^3)\\& + 6 \sqrt{3} \sqrt{LT_{6} }]^{1/3},\end{aligned}$ $ \begin{aligned}[b] LT_{6} = &M^6 (1 + \epsilon)^{12} [128 a_0^3 - 9 a_0^2 (-11 + 68 \epsilon + 4 \epsilon^2) \\ &- 135 \epsilon (1 - 2 \epsilon + 3 \epsilon^2 + \epsilon^3) + 135 a_0 (1 - 3 \epsilon + 7 \epsilon^2 + \epsilon^3)]. \end{aligned}$ It is easy to verify that when the additional parameters ($ a_0 $ and $ \epsilon $) are set to zero, the critical radius reduces to $ r_* = 3M $. In Fig. 9, we schematically show the spherical accretion of the ideal photon gas onto a spherically symmetric black hole and its photon sphere (represented by the red circle). The red circle in Fig. 9 thus has a two-fold meaning, since it represents both the photon sphere and the sonic radius of the spherical accretion of the ideal photon gas. Figure 9. (color online) Schematic of the spherical accretion of the ideal photon gas onto a spherically symmetric black hole and its photon sphere (the red circle). The red circle has a two-fold meaning, since it represents both the photon sphere and the sonic radius of the spherical accretion of the ideal photon gas. VII. CONCLUSIONS AND DISCUSSION In this paper, we studied the spherical accretion flow of a perfect fluid onto a general parameterized spherically symmetric black hole. For this purpose, we first formulated two basic equations for describing the accretion process and presented the general formulas for determining the sonic points (or critical points). These two equations were derived from the conservation laws of energy and particle number of the fluid. 
Using these two equations, we analyzed the accretion processes of various perfect fluids, such as the isothermal fluids of the ultra-stiff, ultra-relativistic, radiation, and sub-relativistic types, and polytropic fluids. The flow behaviors of these test fluids around a general parameterized spherically symmetric black hole were studied in detail and are shown graphically in Figs. 1, 3, 5, and 7. For the isothermal fluids, it is interesting to mention that a sonic point does not exist for the ultra-stiff fluid with $ w = 1 $, while transonic solutions exist for the ultra-relativistic fluid with $ w = 1/2 $, the radiation fluid with $ w = 1/3 $, and the sub-relativistic fluid with $ w = 1/4 $. The value of $ \epsilon $ affects $ r_0,\; r_*,\; {\rm and}\ {\cal H}_* $ but not $ v_* $, which reflects the shift in the position of the event horizon. For the polytropic fluid, Fig. 8 shows a flow behavior similar to that of the isothermal fluids with $ w = 1/2 $, $ w = 1/3 $, and $ w = 1/4 $. Here, we would like to mention that the results presented in this paper can also be reduced to specific cases in several modified theories of gravity. For example, one can map the results here to the first-type Einstein-Aether black hole in [48, 49] by setting

$ \epsilon = \frac{M-\sqrt{M^2-{\text{æ}}^2}}{M+\sqrt{M^2-{\text{æ}}^2}}, $

$ a_0 = \frac{{\text{æ}}^2}{(M+\sqrt{M^2-{\text{æ}}^2})^2}, \;\;\;\; b_0 = 0, $

$ a_i = 0, \;\;\; b_i = 0, \;\;\;\; (i>0), $

$ {\text{æ}}^2 = - \frac{2 c_{13}-c_{14}}{2(1-c_{13})} M^2, $

with $ c_{13} $ and $ c_{14} $ being the coupling constants in the Einstein-Aether theory. It is interesting to mention that the flow behaviors of the different test fluids in this paper are qualitatively consistent with those studied in [50] for spherical accretion in the Einstein-Aether theory. We further considered the spherical accretion of the ideal photon gas and derived the radius of its sonic point.
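The Einstein-Aether mapping above is easy to sanity-check numerically. The sketch below is illustrative Python (not from the paper); the coupling values used in testing are arbitrary choices for which $ 0 \le {\text{æ}}^2 \le M^2 $, so that the square root is real.

```python
import math

def aether_to_metric(c13, c14, M=1.0):
    """Evaluate (eps, a0) from the Einstein-Aether couplings (c13, c14)
    using the mapping quoted above. Requires 0 <= ae2 <= M**2."""
    ae2 = -(2.0 * c13 - c14) / (2.0 * (1.0 - c13)) * M**2
    s = math.sqrt(M**2 - ae2)          # s = sqrt(M^2 - ae^2)
    eps = (M - s) / (M + s)
    a0 = ae2 / (M + s)**2
    return eps, a0
```

Note that these formulas in fact force $ a_0 = \epsilon $: since $ {\text{æ}}^2 = (M-\sqrt{M^2-{\text{æ}}^2})(M+\sqrt{M^2-{\text{æ}}^2}) $, the quotient defining $ a_0 $ reduces to the one defining $ \epsilon $.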
Comparing this radius with the radius of the photon sphere, we established the correspondence between the sonic points of the accreting photon gas and the photon sphere of a general parameterized spherically symmetric black hole. With the above main results, we would like to mention several directions that can be pursued for extending our analysis. First, spherical accretion is the simplest accretion scenario, in which the accreting matter falls steadily and radially into the black hole; it is an extremely simple, idealized case. Therefore, it is interesting to explore the accreting behaviors of various types of matter when the spherical symmetry approximation is relaxed by considering a non-zero relative velocity between the black hole and the accreting matter. This scenario is also known as wind accretion or Bondi–Hoyle–Lyttleton accretion [51-53] (see [54] for a review). We will consider the more complicated accretion disk model, which is more closely related to real observations, in our future work. Second, it is also interesting to extend our analysis to rotating black holes. In a rotating background, one may consider rotating fluids accreting onto a rotating black hole. The rotation of the fluids can lead to the formation of a disc-like structure around the black hole, and such accretion discs are the most commonly studied engines for explaining astrophysical phenomena such as active galactic nuclei, X-ray binaries, and gamma-ray bursts. However, rotation introduces complications into the accretion problem, in which case the study relies heavily on numerical calculations. Finally, when one considers a rotating black hole, its shadow corresponds not to a photon sphere but to a photon region. An immediate question then arises as to what structure in the rotating accretion of the ideal photon gas corresponds to the photon region of a rotating black hole. This is still an open issue.
Mittag-Leffler input stability of fractional differential equations and its applications

doi: 10.3934/dcdss.2020050

Ndolane Sene, Département de Mathématiques de la Décision, Université Cheikh Anta Diop de Dakar, Laboratoire Lmdan, BP 5683 Dakar Fann, Sénégal

* Corresponding author: Ndolane Sene

Received August 2018; Revised October 2018; Published March 2019

Abstract: This paper addresses the Mittag-Leffler input stability of fractional differential equations with exogenous inputs, continuing our first note. We discuss three properties of the Mittag-Leffler input stability: converging-input converging-state, bounded-input bounded-state, and Mittag-Leffler stability of the unforced fractional differential equation. We present the Lyapunov characterization of the Mittag-Leffler input stability, and conclude by introducing the fractional input stability for delay fractional differential equations, together with its Lyapunov-Krasovskii characterization. Several examples are treated to highlight the Mittag-Leffler input stability.

Keywords: Fractional derivative, fractional differential equations with exogenous inputs, Mittag-Leffler input stable.

Mathematics Subject Classification: Primary: 26A33, 93D05; Secondary: 93D25.

Citation: Ndolane Sene. Mittag-Leffler input stability of fractional differential equations and its applications. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020050
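The stability notions above are phrased through the one-parameter Mittag-Leffler function $E_\alpha(z)=\sum_{k\ge 0} z^k/\Gamma(\alpha k+1)$, whose decay profile $E_\alpha(-t^\alpha)$ generalizes the exponential bound of classical input-to-state stability. A minimal series evaluation is sketched below (illustrative only, not code from the paper; the fixed truncation length is an assumption adequate for moderate arguments):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series for the one-parameter Mittag-Leffler function
    E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

For $\alpha = 1$ this reduces to $e^{z}$, recovering the exponential estimate as a special case; for $\alpha = 1/2$ it matches the closed form $e^{z^2}\,\mathrm{erfc}(-z)$.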
\begin{document} \title{Outer independent double Roman domination number of graphs} \author{{\small Doost Ali Mojdeh$^1$, Babak Samadi$^2$, Zehui Shao$^3$ and Ismael G. Yero$^4$}\\{\small Department of Mathematics, University of Mazandaran,} \\{\small Babolsar, Iran$^{1,2}$}\\{\small [email protected]$^1$}, {\small [email protected]$^2$} \\{\small Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China$^3$}\\ {\small [email protected]$^3$} \\{\small Departamento de Matem\'{a}ticas, Universidad de C\'{a}diz, Algeciras, Spain$^4$}\\ {\small [email protected]$^4$} } \date{} \maketitle \begin{abstract} A double Roman dominating function of a graph $G$ is a function $f:V(G)\rightarrow \{0,1,2,3\}$ having the property that for each vertex $v$ with $f(v)=0$, there exists $u\in N(v)$ with $f(u)=3$, or there are $u,w\in N(v)$ with $f(u)=f(w)=2$, and if $f(v)=1$, then $v$ is adjacent to a vertex assigned at least $2$ under $f$. The double Roman domination number $\gamma_{dR}(G)$ is the minimum weight $f(V(G))=\sum_{v\in V(G)}f(v)$ among all double Roman dominating functions of $G$. An outer independent double Roman dominating function is a double Roman dominating function $f$ for which the set of vertices assigned $0$ under $f$ is independent. The outer independent double Roman domination number $\gamma_{oidR}(G)$ is the minimum weight taken over all outer independent double Roman dominating functions of $G$. In this work, we present some contributions to the study of outer independent double Roman domination in graphs. Characterizations of the families of all connected graphs with small outer independent double Roman domination numbers, and tight lower and upper bounds on this parameter are given. We moreover bound this parameter for a tree $T$ from below by two times the vertex cover number of $T$ plus one. 
We also prove that the decision problem associated with $\gamma_{oidR}(G)$ is NP-complete even when restricted to planar graphs with maximum degree at most four. Finally, we give an exact formula for this parameter concerning the corona graphs. \end{abstract}
\textbf{2010 Mathematics Subject Classification:} 05C69
\textbf{Keywords}: (Outer independent) double Roman domination number; (outer independent) Roman domination number; independence number; vertex cover number; domination number; corona graphs.
\section{Introduction and preliminaries}
Throughout this paper, we consider $G$ as a finite simple graph with vertex set $V(G)$ and edge set $E(G)$. We use \cite{w} as a reference for terminology and notation which are not explicitly defined here. The {\em open neighborhood} of a vertex $v$ is denoted by $N(v)$, and its {\em closed neighborhood} is $N[v]=N(v)\cup \{v\}$. The {\em minimum} and {\em maximum degrees} of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. The {\em corona} of two graphs $G_{1}$ and $G_{2}$ is the graph $G_{1}\odot G_{2}$ formed from one copy of $G_{1}$ and $|V(G_{1})|$ copies of $G_{2}$ where the $i$th vertex of $G_{1}$ is adjacent to every vertex in the $i$th copy of $G_{2}$. For a function $f:V(G)\rightarrow\{0,\cdots,k\}$ we let $V^{f}_{i}=\{v\in V(G)\mid f(v)=i\}$, for each $0\leq i\leq k$ (we simply write $V_{i}$ if there is no ambiguity with respect to the function $f$). We call $\omega(f)=f(V(G))=\sum_{v\in V(G)}f(v)$ the {\em weight} of $f$. A set $S\subseteq V(G)$ is called a {\em dominating set} if every vertex not in $S$ has a neighbor in $S$. The {\em domination number} $\gamma(G)$ of $G$ is the minimum cardinality among all dominating sets of $G$. A subset $I\subseteq V(G)$ is said to be {\em independent} if no two vertices in $I$ are adjacent. The {\em independence number} $\alpha(G)$ is the maximum cardinality among all independent sets of $G$.
A {\em vertex cover} of $G$ is a set $Q\subseteq V(G)$ that contains at least one endpoint of every edge. The {\em vertex cover number} $\beta(G)$ is the minimum cardinality among all vertex cover sets of $G$. For any parameter $p$ of $G$, by a $p(G)$-set we mean a set of cardinality $p(G)$. A {\em Roman dominating function} of a graph $G$ is a function $f:V(G)\rightarrow\{0,1,2\}$ such that if $v\in V_0$ for some $v\in V(G)$, then there exists $w\in N(v)$ such that $w\in V_2$. The minimum weight of a Roman dominating function $f$ of $G$ is called the {\em Roman domination number} of $G$, denoted by $\gamma_{R}(G)$. This concept was formally defined by Cockayne \emph{et al.} \cite{cdhh} motivated, in some sense, by the article of Ian Stewart entitled ``Defend the Roman Empire!" (\cite{s}), published in {\it Scientific American}. The idea is that the values $1$ and $2$ represent the number of Roman legions stationed at a location $v$. A location $u\in N(v)$ is considered to be {\em unsecured} if no legion is stationed there ($f(u)=0$). The unsecured location $u$ can be secured by sending a legion to $u$ from an adjacent location $v$. But a legion cannot be sent from a location $v$ if doing so leaves that location unsecured (if $f(v)=1$). Thus, two legions must be stationed at a location ($f(v)=2$) before one of the legions can be sent to an adjacent location. Once the seminal paper \cite{cdhh} was published, this topic attracted the attention of many researchers. One of the research lines that has recently become popular concerns variations in the concept of Roman domination involving some vertex independence features. Some of these variations have been outlined in \cite{cky}. For instance, an {\em outer independent Roman dominating function} (OIRD function) of a graph $G$ is a Roman dominating function $f:V(G)\rightarrow\{0,1,2\}$ for which $V^{f}_{0}$ is independent. 
The {\em outer independent Roman domination number} (OIRD number) $\gamma_{oiR}(G)$ is the minimum weight of an OIRD function of $G$. This parameter was introduced in \cite{acs1}. A total domination version of the parameter above was presented in \cite{cky}. On the other hand, Beeler \emph{et al}. \cite{bhh} introduced the concept of double Roman domination. This provided a stronger and more flexible level of defense in which three legions can be deployed at a given location. They also pointed out some practical advantages of this concept in comparison with Roman domination. However, two adjacent locations with no legions jeopardize each other; indeed, they would be considered more vulnerable. So, one improved situation for a location with no legion is to be surrounded by locations in which legions are stationed. This motivates us to consider a double Roman dominating function $f$ for which $V^{f}_{0}$ is an independent set, which is the concept that will be investigated in this paper. More formally, a {\em double Roman dominating function} (DRD function for short) of a graph $G$ is a function $ f:V(G)\rightarrow \{0,1,2,3\}$ for which the following conditions are satisfied.
\begin{itemize}
\item[(a)] If $f(v)=0$, then the vertex $v$ must have at least two neighbors in $V_2$ or one neighbor in $V_3$.
\item[(b)] If $f(v)=1$, then the vertex $v$ must have at least one neighbor in $V_2\cup V_3$.
\end{itemize}
This parameter was also studied in \cite{al}, \cite{jr} and \cite{zljs}. Accordingly, an {\em outer independent double Roman dominating function} (OIDRD function for short) is a DRD function for which $V^{f}_{0}$ is independent. The {\em \emph{(}outer independent\emph{)} double Roman domination number} ($\gamma_{oidR}(G)$) $\gamma_{dR}(G)$ equals the minimum weight of (an) a (OIDRD function) DRD function of $G$. This concept was first introduced in \cite{acss}.
For the sake of convenience, an OIDRD function (OIRD function) $f$ of a graph $G$ with weight $\gamma_{oidR}(G)$ ($\gamma_{oiR}(G)$) is called a $\gamma_{oidR}(G)$-function ($\gamma_{oiR}(G)$-function). In this paper, we characterize the families of all connected graphs $G$ with small OIDRD numbers (that is, $\gamma_{oidR}(G)\in\{3,4,5\}$), and give tight lower and upper bounds on this parameter in terms of several well-known graph parameters. We also prove that the decision problem associated with $\gamma_{oidR}(G)$ is NP-complete for planar graphs with maximum degree at most four. We begin with some easily verified facts about the OIDRD numbers of some basic families of graphs.
\begin{observation}\label{ob1}The following statements hold.
\begin{itemize}
\item[{\rm (i)}] For $n\geq1$, $\gamma_{oidR}(P_n)=\left\{ \begin{array} [l]{ll} n,& \text{if }\ n=3,\\ n+1, & \text{if }\ n\neq3. \end{array}(\emph{\cite{acss}}) \right.$
\item[{\rm (ii)}] For $n\geq3$, $\gamma_{oidR}(C_n)=\left\{ \begin{array} [l]{ll} n, & \text{if }\ n\ \text{is}\ \text{even},\\ n+1, & \text{if }\ n\ \text{is}\ \text{odd}. \end{array}(\emph{\cite{acss}}) \right.$
\item[{\rm (iii)}] For $n\ge 1$, $\gamma_{oidR}(K_n)=n+1$. \emph{(\cite{acss})}
\item[{\rm (iv)}] For positive integers $m\le n$, $\gamma_{oidR}(K_{m,n})=\left\{ \begin{array} [l]{lll} 3, & \text{if }\ m=1,\\ 2m, & \text{if }\ m\in\{2,3\},\\ m+4, & \text{otherwise}. \end{array} \right.$
\item[{\rm (v)}] For a complete $k$-partite graph $K_{n_1,n_2,...,n_k}$ with $k\geq3$ and $1\le n_1\le n_2\le \ldots\le n_k$, $\gamma_{oidR}(K_{n_1,n_2,...,n_k})=\sum_{i=1}^{k-1}n_i+2$.
\end{itemize}
\end{observation}
When considering a DRD function $f=(V_0,V_1,V_2,V_3)$, one can assume that $V_1=\emptyset$ (see \cite{bhh}). In contrast, OIDRD functions behave a little differently. For instance, if $G=K_{m,n}$ with $5\le m \le n$, then $\gamma_{oidR}(K_{m,n})=m+4$ and all vertices of the smaller partite set have positive values.
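The small cases in Observation \ref{ob1} can be verified by exhaustive search. The sketch below (illustrative Python, not part of the paper; brute force, so only feasible for very small graphs) enumerates all functions $f:V(G)\rightarrow\{0,1,2,3\}$, keeps those satisfying conditions (a) and (b) together with the independence of $V_{0}$, and returns the minimum weight.

```python
from itertools import product

def is_oidrd(adj, f):
    """Check conditions (a), (b) and the independence of V_0 for f on adj."""
    for v, fv in enumerate(f):
        nbrs = adj[v]
        if fv == 0:
            if any(f[u] == 0 for u in nbrs):            # V_0 must be independent
                return False
            if not (any(f[u] == 3 for u in nbrs)        # one neighbor in V_3, or
                    or sum(f[u] == 2 for u in nbrs) >= 2):  # two neighbors in V_2
                return False
        elif fv == 1:
            if not any(f[u] >= 2 for u in nbrs):        # a neighbor in V_2 or V_3
                return False
    return True

def gamma_oidr(adj):
    """Brute-force outer independent double Roman domination number."""
    n = len(adj)
    return min(sum(f) for f in product(range(4), repeat=n) if is_oidrd(adj, f))

# Adjacency lists for small test graphs.
def path(n):  return [[u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)]
def cycle(n): return [[(v - 1) % n, (v + 1) % n] for v in range(n)]
def star(n):  return [list(range(1, n + 1))] + [[0]] * n
```

Running it on $P_n$, $C_n$ and $K_{1,n}$ for small $n$ reproduces the values stated in Observation \ref{ob1} and in Proposition \ref{prop1}(i).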
This shows that some vertices of the smaller partite set are inevitably assigned the value $1$. That is stated in the following fact.
\begin{fact} Let $f=(V_0,V_1,V_2,V_3)$ be an OIDRD function of a graph $G$. Then, $V_1$ is not necessarily empty. \end{fact}
\section{Connected graphs with small OIDRD numbers}
In this section, we characterize the family of all connected graphs $G$ for which $\gamma_{oidR}(G)\in \{3,4,5\}$. To this end, let $\mathcal{G}$ be the family of all graphs of the form $G_1$, $G_2$ and $G_3$ depicted in Figure \ref{fig:g1-g2-g3}. In the figure, the number of vertices $w_1,\cdots,w_k$ in $G_1$, $G_2$, and $G_3$ is at least $1$, $1$, and $2$, respectively.
\begin{figure} \caption{The graphs $G_1$, $G_2$ and $G_3$.} \label{fig:g1-g2-g3} \end{figure}
We next define six other necessary families of graphs, that is, the families $\mathcal{H}_{i}$, $1\leq i\leq 6$. To this end, we shall use the following conventions. For a given set of vertices $\{v_1,\dots,v_r\}$ with $r\ge 1$, by $V_{v_1,\dots,v_r}$ we represent another disjoint set of vertices such that every vertex $v\in V_{v_1,\dots,v_r}$ satisfies $N(v)=\{v_1,\dots,v_r\}$. Such a convention shall also be used while proving Proposition \ref{prop1}.
\begin{itemize} \item $\mathcal{H}_{1}$: We begin with a path $P=abc$. Then we add four sets $V_{b}$, $V_{a,b}$, $V_{b,c}$ and $V_{a,b,c}$ such that one of the following conditions holds. - ($a_{1}$) $V_{a,b},V_{b,c}=\emptyset$ and $|V_{a,b,c}|\geq 2$. - ($b_{1}$) only one of the sets $V_{a,b}$ and $V_{b,c}$ is empty, and $V_{a,b,c}\neq \emptyset$. - ($c_{1}$) $V_{a,b},V_{b,c}\neq \emptyset$. \item $\mathcal{H}_{2}$: We begin with a cycle of order three $C=abca$ and proceed as above, by adding the sets $V_{b}$, $V_{a,b}$, $V_{b,c}$ and $V_{a,b,c}$. Then, one of the following situations holds. - ($a_{2}$) $V_{a,b,c}\neq \emptyset$. - ($b_{2}$) $V_{a,b},V_{b,c}\neq \emptyset$.
\item $\mathcal{H}_{3}$: We begin with two nonadjacent vertices $a$ and $b$, and add the non-empty sets of vertices $V_{a}$ and $V_{a,b}$. \item $\mathcal{H}_{4}$: We begin with a vertex $a$ and an edge $bc$. Then we add the sets $V_{a,b}$ and $V_{a,b,c}$ such that one of the following conditions holds. - ($a_{4}$) $V_{a,b}=\emptyset$ and $|V_{a,b,c}|\geq2$. - ($b_{4}$) $V_{a,b}\neq \emptyset$. \item $\mathcal{H}_{5}$: We begin with a path $P=abc$. Then we add the sets $V_{a,b}$ and $V_{a,b,c}$ such that one of the following conditions holds. - ($a_{5}$) $V_{a,b},V_{a,b,c}\neq \emptyset$. - ($b_{5}$) $V_{a,b}=\emptyset$ and $|V_{a,b,c}|\geq2$. \item $\mathcal{H}_{6}$: We begin with a path $P=abc$. Then we add the sets $V_{a,c}$ (note that for any vertex $v\in V_{a,c}$, it happens $N(v)=\{a,c\}$) and $V_{a,b,c}$, such that one of the next conditions holds. - ($a_{6}$) $V_{a,c},V_{a,b,c}\neq \emptyset$. - ($b_{6}$) $V_{a,c}=\emptyset$ and $|V_{a,b,c}|\geq2$. \end{itemize} \begin{proposition}\label{prop1} Let $G$ be a connected graph of order $n\ge 3$. Then, \begin{itemize} \item[{\rm (i)}] $\gamma_{oidR}(G)=3$ if and only if $G$ is a star. \item[{\rm (ii)}] $\gamma_{oidR}(G)=4$ if and only if $G \in \mathcal{G}$. \item[{\rm (iii)}] $\gamma_{oidR}(G)=5$ if and only if $G\in \cup_{i=1}^{6}\mathcal{H}_{i}$. \end{itemize} \end{proposition} \begin{proof} (i) It is clear. (ii) Let $G\in \mathcal{G}$. In $G_1$, if we assign the value $3$ to the vertex $v_1$, the value $1$ to the vertex $v_2$ and $0$ to the other vertices, then we have $\gamma_{oidR}(G_1)\le 4$. In $G_2$, if we assign $2$ to the vertices $v_1$ and $v_2$ (or $1$ to one of them and $3$ to the other one), then $\gamma_{oidR}(G_2)\le 4$. In $G_3$, if we assign $2$ to the vertices $v_1$ and $v_2$, then $\gamma_{oidR}(G_3)\le 4$. Since, $G_1$, $G_2$, and $G_3$ are not stars, by item (i), we have the equality. 
Conversely, let $G$ be a graph with $\gamma_{oidR}(G)=4$ and let $f=(V_0,V_1,V_2,V_3)$ be a $\gamma_{oidR}(G)$-function. We first note that the case $(|V_{0}|,|V_{1}|,|V_{2}|,|V_{3}|)=(0,2,1,0)$ is possible if and only if $G\cong K_{3}$. So, $G$ is of the form of $G_{2}$. Consequently, we have only two remaining possibilities. - ($a$) There are two adjacent vertices $v_1$ and $v_2$ in $G$ such that $v_1\in V_3$ and $v_2\in V_1$ or $\{v_1,v_2\}\subseteq V_2$ and the other $k\geq1$ vertices are independent and belong to $V_0$ (note that $k$ must be at least one, for otherwise $G$ would be a star with $\gamma_{oidR}(G)=3$). Therefore, such a graph should be of the form $G_1$ or $G_2$ in $\mathcal{G}$. - ($b$) There are two non adjacent vertices $v_1$ and $v_2$ in $G$ such that $\{v_1,v_2\}\subseteq V_2$ and the remaining vertices are independent and belong to $V_0$. Therefore, such a graph should be a graph like $G_3$ in $\mathcal{G}$. Note that in such a case we have $k\geq2$, for otherwise $G$ is disconnected or satisfies that $\gamma_{oidR}(G)\le 3$. (iii) If $G\in \mathcal{H}_{1}\cup \mathcal{H}_{2}$, then $(f(a),f(b),f(c))=(1,3,1)$ and $f(v)=0$ otherwise, is an OIDRD function of $G$ that leads to $\gamma_{oidR}(G)\le 5$. If $G\in \mathcal{H}_{3}$, then $(f(a),f(b))=(3,2)$ and $f(v)=0$ for any other vertex, is an OIDRD function of $G$ that gives $\gamma_{oidR}(G)\le 5$. Also, if $G\in \mathcal{H}_{4}\cup \mathcal{H}_{5}$, then $(f(a),f(b),f(c))=(2,2,1)$ and $f(v)=0$ otherwise, is an OIDRD function of $G$, and so $\gamma_{oidR}(G)\leq5$. Finally, if $G\in \mathcal{H}_{6}$, then $(f(a),f(b),f(c))=(2,1,2)$ and $f(v)=0$ for any other vertex, is a desired OIDRD function of $G$ that gives the same conclusion as above. Since the graphs of the family $\cup_{i=1}^{6}\mathcal{H}_{i}$ neither are stars nor are included in the family $\mathcal{G}$, by items (i) and (ii), we get the desired equalities. 
Conversely, we assume that $f:V(G)\rightarrow\{0,1,2,3\}$ is a $\gamma_{oidR}(G)$-function of weight $5$. If $V_{1}=\emptyset$, then there exist two vertices $a$ and $b$ such that $(f(a),f(b))=(3,2)$. Note that $f$ assigns $0$ to the other vertices and that $ab\notin E(G)$, necessarily. In such a situation, since $G$ is connected, at least one vertex must be adjacent to $b$ and each such vertex must be adjacent to $a$, as well. Now if $V_{a}=\emptyset$, then $\gamma_{oidR}(G)\le 4$, which is a contradiction. This shows that $G\in \mathcal{H}_{3}$. We now assume that $V_{1}\neq \emptyset$. Suppose that $|V_{1}|=1$ and $b$ is the only member of $V_{1}$. Therefore, there are two vertices $a$ and $c$ assigned $2$ under $f$. We first consider the case in which $b$ is adjacent to both $a$ and $c$. Note that the remaining vertices must be adjacent to both $a$ and $c$, as well. If $V_{a,c}=\emptyset$ and $|V_{a,b,c}|\leq 1$, then we have $\gamma_{oidR}(G)\leq 4$. Thus, $|V_{a,b,c}|\geq 2$. If $V_{a,c}\neq \emptyset$ and $V_{a,b,c}=\emptyset$, then we have $\gamma_{oidR}(G)\le 4$. Hence, $V_{a,b,c}\neq \emptyset$. This shows that $G\in \mathcal{H}_{6}$. Let $b$ be adjacent to only one vertex in $\{a,c\}$, say $c$. We deal with two possibilities depending on the adjacency between $a$ and $c$. First, let $ac\in E(G)$. Then, the other vertices belong to $V_{a,c}\cup V_{a,b,c}$. If $V_{a,c}=\emptyset$ and $|V_{a,b,c}|\leq1$, then $\gamma_{oidR}(G)\leq4$, and so $|V_{a,b,c}|\geq2$. If $V_{a,c}\neq \emptyset$ and $V_{a,b,c}=\emptyset$, then $\gamma_{oidR}(G)\le 4$. Therefore, $V_{a,b,c}\neq \emptyset$. In such a case, $G\in \mathcal{H}_{5}$. Now let $ac\notin E(G)$. Thus, the other vertices belong to $V_{a,c}\cup V_{a,b,c}$. If $V_{a,c}=\emptyset$ and $|V_{a,b,c}|\leq1$, then $G$ is disconnected or $\gamma_{oidR}(G)\le 4$. Therefore, $|V_{a,b,c}|\geq2$. Note that if $V_{a,c}\neq\emptyset$, we have no conditions on the set $V_{a,b,c}$. Consequently, $G\in \mathcal{H}_{4}$.
We now consider a situation in which $|V_{1}|=2$. Let $V_{1}=\{a,c\}$. Then, both $a$ and $c$ must be adjacent to a vertex $b$ assigned $3$ under $f$. Hence, the other vertices belong to $V_{b}\cup V_{a,b}\cup V_{b,c}\cup V_{a,b,c}$. We need to consider two possibilities depending on the adjacency between $a$ and $c$. First, let $ac\notin E(G)$ and assume that $V_{a,b}=V_{b,c}=\emptyset$. If $|V_{a,b,c}|\leq1$, then we have $\gamma_{oidR}(G)\leq4$, and so $|V_{a,b,c}|\geq2$. If only one of the sets $V_{a,b}$ and $V_{b,c}$ is empty, and $V_{a,b,c}=\emptyset$, then $\gamma_{oidR}(G)\le 4$. Thus, $V_{a,b,c}\neq \emptyset$. We now note that if $V_{a,b},V_{b,c}\neq \emptyset$, then we have no conditions on the set $V_{a,b,c}$. This argument guarantees that $G\in \mathcal{H}_{1}$. On the other hand, let $ac\in E(G)$. Hence, we have a cycle $abca$. If at least one of the sets $V_{a,b}$ and $V_{b,c}$ is empty, then we must have $V_{a,b,c}\neq \emptyset$, for otherwise $\gamma_{oidR}(G)\le 4$. If both $V_{a,b}$ and $V_{b,c}$ are nonempty, then we have no conditions on the set $V_{a,b,c}$. Therefore, $G\in \mathcal{H}_{2}$. Finally, in the case $|V_{1}|=3$ we have $V_{0}=V_{3}=\emptyset$ and only one vertex is assigned $2$ under $f$. In such a situation, $G\cong K_{4}\in \mathcal{H}_{2}$. This completes the proof. \end{proof} \section{Computational and combinatorial results} We first consider the problem of deciding whether a graph $G$ has OIDRD number at most a given integer. This is stated in the following decision problem. Note that Ahangar \emph{et al}. \cite{acss} proved that the problem of computing the OIDRD number of graphs is NP-hard, even when restricted to bipartite graphs and chordal graphs.
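These characterizations lend themselves to a quick computational cross-check on small graphs. The sketch below is our own illustration (not part of the paper) and brute-forces $\gamma_{oidR}$ directly from the definition: $f:V\to\{0,1,2,3\}$ is an OIDRD function if $V_0$ is independent, every vertex labeled $0$ has a neighbor labeled $3$ or at least two neighbors labeled $2$, and every vertex labeled $1$ has a neighbor labeled at least $2$.

```python
from itertools import product

def oidr_number(n, edges):
    """Brute-force gamma_oidR of a graph on vertices 0..n-1.

    Exponential in n -- intended only for cross-checking small cases."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def valid(f):
        for v in range(n):
            nb = adj[v]
            if f[v] == 0:
                # outer independence: no two 0-labeled vertices are adjacent
                if any(f[u] == 0 for u in nb):
                    return False
                # double Roman condition: a 3-neighbor or two 2-neighbors
                if not (any(f[u] == 3 for u in nb)
                        or sum(f[u] == 2 for u in nb) >= 2):
                    return False
            elif f[v] == 1:
                # a 1-labeled vertex needs a neighbor labeled 2 or 3
                if not any(f[u] >= 2 for u in nb):
                    return False
        return True

    return min(sum(f) for f in product(range(4), repeat=n) if valid(f))

# P_3 is a star, so item (i) gives 3; C_4 = K_{2,2} is G_3 with k = 2, so item (ii) gives 4.
print(oidr_number(3, [(0, 1), (1, 2)]))                   # 3
print(oidr_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 4
```

For instance, the path $P_3$ is a star and attains value $3$, while $C_4$ belongs to the family $\mathcal{G}$ and attains value $4$, in agreement with the proposition above.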
$$\begin{tabular}{|l|} \hline \mbox{OIDRD problem}\\ \mbox{INSTANCE: A graph $G$ and an integer $k\leq2|V(G)|$.}\\ \mbox{QUESTION: Is $\gamma_{oidR}(G)\leq k$?}\\ \hline \end{tabular}$$ Our aim is to show that the problem is NP-complete for planar graphs with maximum degree at most four. To this end, we make use of the well-known INDEPENDENCE NUMBER PROBLEM (IN problem), which is known to be NP-complete \cite{gj}. $$\begin{tabular}{|l|} \hline \mbox{IN problem}\\ \mbox{INSTANCE: A graph $G$ and an integer $k\leq|V(G)|$.}\\ \mbox{QUESTION: Is $\alpha(G)\geq k$?}\\ \hline \end{tabular}$$ Moreover, the problem above remains NP-complete even when restricted to some planar graphs. Indeed, we have the following result. \begin{theorem}\emph{(\cite{gj})} The IN problem is NP-complete even when restricted to planar graphs of maximum degree at most three. \end{theorem} \begin{theorem}\label{planar} The OIDRD problem is NP-complete even when restricted to planar graphs with maximum degree at most four. \end{theorem} \begin{proof} Let $G$ be a planar graph with $V(G)=\{v_{1},\dots,v_{n}\}$ and maximum degree $\Delta(G)\leq3$. For any $1\leq i\leq n$, we add a copy of the path $P_{3}$ with central vertex $u_{i}$. We now construct a graph $G'$ by joining $v_{i}$ to $u_{i}$, for each $1\leq i\leq n$. Clearly, $G'$ is a planar graph, $|V(G')|=4n$ and $\Delta(G')\leq4$. Let $f$ be a $\gamma_{oidR}(G')$-function. Since $u_{i}$ is adjacent to two leaves, $f$ must assign a weight of at least three to $u_{i}$ together with the two leaves adjacent to it. So, without loss of generality, we may assume that $f(u_{i})=3$, and that $f$ assigns $0$ to both leaves adjacent to $u_{i}$, for each $1\leq i\leq n$. Since $V_{0}^{f}$ is independent, the number of vertices $v_{i}\in V(G)$ which can be assigned $0$ under $f$ is at most $\alpha(G)$. Furthermore, the other vertices of $V(G)$ are assigned at least $1$ under $f$.
Consequently, we obtain that $\gamma_{oidR}(G')\ge 3n+(n-\alpha(G))=4n-\alpha(G)$. On the other hand, let $I$ be an $\alpha(G)$-set. It is easy to observe that the function \begin{equation*} g(v)=\left\{ \begin{array}{ll} 3, & \text{if } v\in\{u_{1},\cdots,u_{n}\},\\ 0, & \text{if } v \text{ is a leaf or } v\in I,\\ 1, & \text{otherwise}, \end{array} \right. \end{equation*} is an OIDRD function of $G'$ with weight $4n-\alpha(G)$, which leads to the equality $\gamma_{oidR}(G')=4n-\alpha(G)$. Now, by taking $j=4n-k$, it follows that $\gamma_{oidR}(G')\leq j$ if and only if $\alpha(G)\geq k$, which completes the reduction. Since the IN problem is NP-complete for planar graphs of maximum degree at most three, we deduce that the OIDRD problem is NP-complete for planar graphs of maximum degree at most four. \end{proof} As a consequence of Theorem \ref{planar}, we conclude that the problem of computing the OIDRD number is NP-hard even when restricted to planar graphs with maximum degree at most four. In consequence, it would be desirable to bound the OIDRD number in terms of several different invariants of the graph. \begin{theorem} For any graph $G$, $\gamma_{oidR}(G)\le2\gamma_{oiR}(G)$ with equality if and only if $G=\overline{K_n}$. \end{theorem} \begin{proof} If $f=(V_0,V_1,V_2)$ is a $\gamma_{oiR}(G)$-function, it is easy to observe that $g=(V^{g}_0=V_0,V^{g}_1=\emptyset,V^{g}_2=V_1,V^{g}_3=V_2)$ is an OIDRD function of $G$. Therefore, \begin{equation}\label{EQ1} \gamma_{oidR}(G)\leq2|V_1|+3|V_2|\leq2|V_1|+4|V_2|=2\gamma_{oiR}(G). \end{equation} Clearly, $\gamma_{oidR}(\overline{K_n})=2\gamma_{oiR}(\overline{K_n})=2n$. We now let $\gamma_{oidR}(G)=2\gamma_{oiR}(G)$. This equality along with the inequality chain (\ref{EQ1}) implies that $V_2=\emptyset$, and since $f$ is an OIRD function of $G$, $V_0=V^{g}_0=\emptyset$ as well. Therefore, all vertices of $G$ are assigned $2$ under $g$.
Now if there exists an edge $uv$ in $G$, then the function $g'$ assigning $3$ to $u$, $0$ to $v$, and $2$ to the other vertices is an OIDRD function of $G$ with weight less than $\omega(g)$, which is a contradiction. Therefore, $G=\overline{K_n}$.\end{proof} As an immediate consequence of the inequality chain (\ref{EQ1}), we have the following result. \begin{corollary}\label{cor2} If $G$ is a connected graph and $f=(V_0,V_1,V_2)$ is a $\gamma_{oiR}(G)$-function, then $\gamma_{oidR}(G)\le2\gamma_{oiR}(G)-|V_2|$. \end{corollary} \begin{figure} \caption{The family of graphs $\mathcal{G'}$.} \label{fig:graph-g'} \end{figure} For the equality in the upper bound given in Corollary \ref{cor2}, consider the family of stars, bistars and the family of graphs $\mathcal{G'}$ depicted in Figure \ref{fig:graph-g'}. \begin{proposition}\label{prop2} For every graph $G$, $\gamma_{oiR}(G)< \gamma_{oidR}(G)$. \end{proposition} \begin{proof} Let $f=(V_0, V_1,V_2,V_3)$ be any $\gamma_{oidR}(G)$-function. If $V_3\ne \emptyset$, then $g=(V^{g}_0=V_0, V^{g}_1=V_1,V^{g}_2=V_2\cup V_3)$ is an OIRD function of $G$ of weight $\omega(f)-|V_3|$, and so $\gamma_{oiR}(G)< \gamma_{oidR}(G)$. Hence, assume that $V_3 =\emptyset$. Since $V_2 \cup V_3$ dominates $G$, it follows that $V_2\ne \emptyset$. Thus, all vertices are assigned one of the values $0$, $1$, or $2$, and all vertices in $V_0$ must have at least two neighbors in $V_2$ and all vertices in $V_1$ must have at least one neighbor in $V_2$. In such a case, at least one vertex in $V_2$ can be reassigned the value $1$ and the resulting function will be an OIRD function of $G$, as well. Therefore, $\gamma_{oiR}(G)< \gamma_{oidR}(G)$. \end{proof} \begin{corollary}\label{cor3} For any nontrivial connected graph $G$, $\gamma_{oiR}(G)<\gamma_{oidR}(G)<2\gamma_{oiR}(G)$.
\end{corollary} \begin{theorem}\label{Realize} For any connected graph $G$ of order $n\geq2$ with maximum degree $\Delta$, $$\max\{\gamma(G),\frac{2}{\Delta}\alpha(G)\}+\beta(G)\leq \gamma_{oidR}(G)\leq3\beta(G).$$ These bounds are sharp. \end{theorem} \begin{proof} Let $I$ be an $\alpha(G)$-set. Hence, the function $f:V(G)\rightarrow\{0,1,2,3\}$ for which $f(v)=0$ if $v\in I$, and $f(v)=3$ for any other vertex, defines an OIDRD function of $G$. Therefore, $\gamma_{oidR}(G)\leq \omega(f)=3(n-\alpha(G))$. Since $\alpha(G)+\beta(G)=n$ (the well-known Gallai theorem \cite{G}), the upper bound follows. That the upper bound is sharp can be seen by the corona $G'\odot \overline{K_{r}}$ for $r\geq2$, in which $G'$ is an arbitrary (connected) graph. Here, $f(v')=3$ for each $v'\in V(G')$, and $f(v)=0$ for all vertices $v$ of the copies of $\overline{K_{r}}$, leads to an OIDRD function of $G'\odot \overline{K_{r}}$ of minimum weight equal to $3\beta(G'\odot \overline{K_{r}})$. On the other hand, let $g$ be a $\gamma_{oidR}(G)$-function. The set $V_{0}$ is independent and $V_{2}\cup V_{3}$ is a dominating set in $G$, by the properties of an OIDRD function of $G$. Moreover, we have $\omega(g)=|V_{1}|+2|V_{2}|+3|V_{3}|$. These lead to $$\alpha(G)\geq|V_{0}|=n-(|V_{1}|+|V_{2}|+|V_{3}|)=n-\omega(g)+|V_{2}|+2|V_{3}|\geq n-\omega(g)+|V_{2}|+|V_{3}|\geq n-\omega(g)+\gamma(G).$$ Therefore, \begin{equation}\label{EQ10} \gamma_{oidR}(G)=\omega(g)\geq \gamma(G)+\beta(G). \end{equation} The lower bound is obvious for $\Delta=1$. So, we assume that $\Delta\geq2$. Now, let $f=(V_{0},V_{1},V_{2},V_{3})$ be a $\gamma_{oidR}(G)$-function. Let $S=V_{0}\cap N(V_{3})$ and $S'=V_{0}\setminus S$. Since each vertex in $V_{3}$ has at most $\Delta$ neighbors in $S$, we have $|S|\leq \Delta|V_{3}|$. Moreover, every vertex in $S'$ has at least two neighbors in $V_{2}$ and every vertex in $V_{2}$ has at most $\Delta$ neighbors in $S'$. Therefore, $2|S'|\leq \Delta|V_{2}|$.
The last two inequalities show that $2|V_{0}|=2|S|+2|S'|\leq(|V_{2}|+2|V_{3}|)\Delta$. Taking into account this inequality and since $V_{0}$ is independent, we have \begin{equation*} \begin{array}{lcl} \Delta \gamma_{oidR}(G)=\Delta(|V_{1}|+2|V_{2}|+3|V_{3}|)&=&\Delta(|V_{1}|+|V_{2}|+|V_{3}|)+\Delta(|V_{2}|+2|V_{3}|)\\ &\geq& \Delta(n-|V_{0}|)+2|V_{0}|\geq \Delta n-(\Delta-2)\alpha(G). \end{array} \end{equation*} This implies the lower bound $\gamma_{oidR}(G)\geq n-(\Delta-2)\alpha(G)/\Delta$. Using the equality $\alpha(G)+\beta(G)=n$ again, we have \begin{equation}\label{EQ11} \gamma_{oidR}(G)\geq \frac{2}{\Delta}\alpha(G)+\beta(G). \end{equation} The desired lower bound now follows from (\ref{EQ10}) and (\ref{EQ11}). That the lower bound (\ref{EQ10}) is sharp can be seen as follows. Given a positive integer $t$ and $1\leq i\leq t$, let $H_{i}$ be a graph obtained from the complete bipartite graph $K_{2,m_{i}}$ ($m_{i}\geq2$) by adding a new vertex $z_{i}$ and joining it to the two vertices, say $x_{i}$ and $y_{i}$, of the smaller partite set of $K_{2,m_{i}}$. We now form a cycle on the set of vertices $\{z_{1},\cdots,z_{t}\}$, and denote the obtained graph by $H$. It is easily observed that $h:V(H)\rightarrow\{0,1,2,3\}$ defined by $h(x_{i})=h(y_{i})=2$ for all $1\leq i\leq t$, $h(z_{2i-1})=1$ for all $1\leq i\leq \lceil t/2\rceil$, and $h(v)=0$ for any other vertex, is an OIDRD function of $H$ with minimum weight $4t+\lceil t/2\rceil$. On the other hand, $\beta(H)=3t-\lfloor t/2\rfloor$ and $\gamma(H)=2t$. Therefore, the lower bound (\ref{EQ10}) holds with equality for $H$. Moreover, the lower bound (\ref{EQ11}) is sharp for the star $K_{1,n-1}$. This completes the proof. \end{proof} Note that the upper bound given in the theorem above was also given in \cite{acss}. For the sake of completeness, we have pointed it out and provided an infinite family of graphs for which the equality holds.
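As a sanity check of the bounds in Theorem \ref{Realize}, one can brute-force all the invariants involved on small graphs. The following sketch is our own illustration (exponential-time, small graphs only, with helper names of our choosing): it computes $\gamma(G)$, $\alpha(G)$, $\beta(G)=n-\alpha(G)$, and $\gamma_{oidR}(G)$, and evaluates both sides of the inequality.

```python
from itertools import product, combinations

def oidr_number(n, adj):
    """Brute-force gamma_oidR from an adjacency list (small n only)."""
    def valid(f):
        for v in range(n):
            if f[v] == 0:
                if any(f[u] == 0 for u in adj[v]):
                    return False          # V_0 must be independent
                if not (any(f[u] == 3 for u in adj[v])
                        or sum(f[u] == 2 for u in adj[v]) >= 2):
                    return False          # double Roman condition
            elif f[v] == 1 and not any(f[u] >= 2 for u in adj[v]):
                return False              # label 1 needs a neighbor >= 2
        return True
    return min(sum(f) for f in product(range(4), repeat=n) if valid(f))

def check_bounds(n, edges):
    """Return (lower bound, gamma_oidR, upper bound) of the theorem."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    delta = max(map(len, adj))
    indep = lambda S: all(v not in adj[u] for u, v in combinations(S, 2))
    dom = lambda S: all(v in S or adj[v] & set(S) for v in range(n))
    alpha = max(r for r in range(n + 1)
                if any(indep(S) for S in combinations(range(n), r)))
    gamma = min(r for r in range(1, n + 1)
                if any(dom(S) for S in combinations(range(n), r)))
    beta = n - alpha                      # Gallai identity
    lower = max(gamma, 2 * alpha / delta) + beta
    return lower, oidr_number(n, adj), 3 * beta

# Star K_{1,4}: gamma = 1, alpha = 4, beta = 1, Delta = 4; both bounds are attained.
print(check_bounds(5, [(0, i) for i in range(1, 5)]))     # (3.0, 3, 3)
```

For the star $K_{1,4}$ the lower bound, the OIDRD number, and the upper bound all equal $3$, matching the sharpness claims.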
\section{Trees} The authors of \cite{acss} proved that $\beta(G)+2$ is a lower bound on the OIDRD number of a nontrivial connected graph $G$. This lower bound can be improved for trees. Recall that a {\em double star $S_{a,b}$} is a tree with exactly two non-leaf vertices in which one support vertex is adjacent to $a$ leaves and the other to $b$ leaves. \begin{theorem}\label{induction} For any tree $T$, $\gamma_{oidR}(T)\geq2\beta(T)+1$ and this bound is tight. \end{theorem} \begin{proof} We proceed by induction on the order $n$ of $T$. The result is obvious when $n=1$. Moreover, it is easily observed that $\gamma_{oidR}(K_{1,n})=2\beta(K_{1,n})+1=3$. Hence, we may assume that $T$ has diameter $diam(T)\geq3$. If $diam(T)=3$, then $T$ is isomorphic to the double star $S_{a,b}$, $1\leq a\leq b$. We then have $\gamma_{oidR}(S_{1,b})=2\beta(S_{1,b})+1=5$, and $\gamma_{oidR}(S_{a,b})=6>5=2\beta(S_{a,b})+1$ when $a\geq2$. Thus, in what follows we consider that $diam(T)\geq4$, which implies that $n\geq5$. Suppose that $\gamma_{oidR}(T')\geq2\beta(T')+1$, for each tree $T'$ of order $1\leq n'<n$. Let $T$ be a tree of order $n$. We consider two cases depending on the behavior of support vertices of $T$. \textit{Case 1.} $T$ has a strong support vertex $u$. Let $v$ be a leaf adjacent to $u$. Consider the tree $T'=T-v$. Note that every $\gamma_{oidR}(T)$-function $f$ assigns $3$ to $u$ and $0$ to the leaves adjacent to $u$, necessarily. It is easy to see that $\beta(T')=\beta(T)$ and that $\gamma_{oidR}(T')\leq \gamma_{oidR}(T)$. Therefore, $\gamma_{oidR}(T)\geq 2\beta(T)+1$ by the induction hypothesis. \textit{Case 2.} All support vertices of $T$ are weak. Let $r$ and $v$ be two leaves with $d(r,v)=diam(T)$. We root the tree $T$ at $r$. Let $w$ be the parent of $v$, and $x$ be the parent of $w$. Since $T$ has no strong support vertices, it follows that $w$ has degree $deg(w)=2$. We need to consider two subcases depending on $deg(x)$. \textit{Subcase 2.1.} $deg(x)\geq3$. 
Since $d(r,v)=diam(T)$, all children of $x$ are leaves or support vertices. Let $T'=T-T_{w}$ (for a vertex $u$, by $T_{u}$ we mean the subtree of $T$ rooted at $u$ consisting of $u$ and all its descendants in $T$). It is easily observed that $\beta(T)=\beta(T')+1$. Let $f$ be a $\gamma_{oidR}(T)$-function. If $f(x)\geq2$, then $f(w)+f(v)=2$. Therefore, the restriction of $f$ to $V(T')$, from now on denoted $f'=f\mid_{V(T')}$, is an OIDRD function of $T'$. So, $\gamma_{oidR}(T')\leq \omega(f')=\gamma_{oidR}(T)-2$. Therefore, $2\beta(T)+1=2\beta(T')+3\leq \gamma_{oidR}(T')+2\leq\gamma_{oidR}(T)$. Suppose that $f(x)=0$. We may assume, without loss of generality, that $f(v)=0$ and $f(w)=3$. If $x$ is the parent of a support vertex $w'$ different from $w$, then we may assume that $f$ assigns $3$ to $w'$ and $0$ to the leaf adjacent to $w'$. In such a case, $f'=f\mid_{V(T')}$ is an OIDRD function of $T'$ with weight $\omega(f')=\gamma_{oidR}(T)-3$. So, $2\beta(T)+1<\gamma_{oidR}(T)$ in a similar fashion. We now assume that all children of $x$ different from $w$ are leaves. Since $T$ has no strong support vertices, it follows that $x$ is adjacent to only one leaf $x'$. If $f(x')=3$, then $f'$ is an OIDRD function of $T'$ and we are done. So, we may assume that $f(x')=2$. In such a situation, the assignment $(g(x'),g(x),g(w),g(v))=(0,3,0,2)$ and $g(u)=f(u)$ for the other vertices is a $\gamma_{oidR}(T)$-function of $T$. Moreover, $g'=g\mid_{V(T')}$ is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-2$. Hence, we have $2\beta(T)+1\leq \gamma_{oidR}(T)$, in a similar fashion. Let $f(x)=1$. Since $f(w)+f(v)=3$, we assume that $f(w)=3$ and $f(v)=0$. Suppose that $x$ is adjacent to a leaf $x'$ which is unique since $T$ has no strong support vertices. Then $f(x')\ge 2$, necessarily.
Now the assignment $(g(x'),g(x),g(w),g(v))=(0,3,0,2)$ and $g(u)=f(u)$ for the remaining vertices, is an OIDRD function of $T$ with weight less than $\omega(f)$, which is impossible. Therefore, all children of $x$ are support vertices. Let $w'\neq w$ be a child of $x$ adjacent to the leaf $w''$. Since $f(w')+f(w'')=3$, we assume that $f(w')=3$ and $f(w'')=0$. In such a situation, the assignment $(g(w''),g(w'),g(x),g(w),g(v))=(2,0,2,0,2)$ and $g(u)=f(u)$ otherwise, defines an OIDRD function of $T$ with weight less than $\omega(f)$, a contradiction. \textit{Subcase 2.2.} $deg(x)=2$. Again, we let $T'=T-T_{w}$. Suppose that $y$ is the parent of $x$. If $f(x)\in\{2,3\}$, then $f(w)=0$ and $f(v)=2$. Therefore, $f'=f\mid_{V(T')}$ is an OIDRD function of $T'$. This shows that $2\beta(T)+1=2\beta(T')+3\leq \gamma_{oidR}(T')+2\leq \omega(f')+2=\gamma_{oidR}(T)$. If $f(x)=1$, then $f(w)+f(v)=3$. So, we assume that $f(w)=3$ and $f(v)=0$. In such a case, $(g(v),g(w),g(x))=(2,0,2)$ and $g(u)=f(u)$ for the remaining vertices, is a $\gamma_{oidR}(T)$-function. Now, $g'=g\mid_{V(T')}$ is an OIDRD function of $T'$. Therefore, $2\beta(T)+1=2\beta(T')+3\leq \gamma_{oidR}(T')+2\leq \omega(g')+2=\gamma_{oidR}(T)$. We now suppose that $f(x)=0$. Again, we can assume that $f(w)=3$ and $f(v)=0$. If $f(y)=3$, then $f'=f\mid_{V(T')}$ is an OIDRD function of $T'$ with weight $\gamma_{oidR}(T)-3$. This implies that $2\beta(T)+1<\gamma_{oidR}(T)$. If $f(y)=2$, then $g'(y)=3$ and $g'(u)=f(u)$ for any other vertex $u\in V(T')$, is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-2$. In such a case, we deduce that $2\beta(T)+1\leq \gamma_{oidR}(T)$. Therefore, in what follows we assume that $f(y)=1$. Note that by our choice of $v$, the vertex $y$ satisfies at least one of the following conditions: $(a)$ $deg(y)=2$; $(b)$ $y$ is adjacent to a (unique) leaf; $(c)$ $y$ has a child which is a support vertex; or $(d)$ $y$ has a child which is the parent of a support vertex.
Hence, we need to consider four possibilities depending on the behavior of $y$. \textit{Subcase 2.2.1.} Let $y$ be adjacent to a (unique) leaf $y'$. Hence, $f(y')=2$, and so, the assignment $(g'(y'),g'(y))=(0,3)$ and $g'(u)=f(u)$ for any other vertex $u\in V(T')$, is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-3$. Therefore, $2\beta(T)+1=2\beta(T')+3\leq \gamma_{oidR}(T')+2\leq \omega(g')+2<\gamma_{oidR}(T)$. \textit{Subcase 2.2.2.} Let $y$ have a child $y'$ which is a support vertex, and let $y''$ be the unique leaf adjacent to $y'$. Hence, we can assume that $f(y')=3$ and $f(y'')=0$. We then conclude that $(g'(y''),g'(y'),g'(y))=(2,0,3)$ and $g'(u)=f(u)$ for the remaining vertices $u\in V(T')$, is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-2$. We consequently deduce that $2\beta(T)+1=2\beta(T')+3\leq \gamma_{oidR}(T')+2\leq \omega(g')+2=\gamma_{oidR}(T)$. \textit{Subcase 2.2.3.} Let $y$ have a child $y'$ which is adjacent to a support vertex $y''$, and let $y'''$ be the unique leaf adjacent to $y''$. Then, $3\leq f(y')+f(y'')+f(y''')\leq4$. Suppose first that $f(y')+f(y'')+f(y''')=4$. We may assume that $f(y')=f(y''')=2$ and $f(y'')=0$. Then, the assignment $(g'(y'''),g'(y''),g'(y'),g'(y))=(0,3,0,3)$ and $g'(u)=f(u)$ for any other vertex $u\in V(T')$, is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-2$, and so we obtain $2\beta(T)+1\leq \gamma_{oidR}(T)$ similarly to Subcase 2.2.2. If $f(y')+f(y'')+f(y''')=3$, then we have $f(y''')=f(y')=0$ and $f(y'')=3$, necessarily. In such a situation, we consider the subtree $T''=T-T_{w}-T_{y''}$. It is easy to see that $\beta(T)=\beta(T'')+2$. On the other hand, the assignment $g'(y)=3$ and $g'(u)=f(u)$ for the other vertices $u\in V(T'')$ is an OIDRD function of $T''$ with weight $\omega(g')=\gamma_{oidR}(T)-4$. Therefore, $2\beta(T)+1=2\beta(T'')+5\leq \gamma_{oidR}(T'')+4\leq \omega(g')+4=\gamma_{oidR}(T)$. 
\textit{Subcase 2.2.4.} We now consider the situation in which $deg(y)=2$. Since $diam(T)\geq4$, the vertex $y$ has a parent $z$. Moreover, we must have $f(z)\geq2$. We observe that the assignment $g'(x)=2$, $g'(y)=0$ and $g'(u)=f(u)$ for any remaining vertex $u\in V(T')$, is an OIDRD function of $T'$ with weight $\omega(g')=\gamma_{oidR}(T)-2$, and we deduce that $2\beta(T)+1\leq \gamma_{oidR}(T)$ in a similar fashion. This completes the proof of the lower bound. To see its tightness, consider the paths of even order, since $\gamma_{oidR}(P_{2t})=2t+1=2\beta(P_{2t})+1$ (by using Observation \ref{ob1} (i)). \end{proof} \section{Corona graphs} Let $G$ and $H$ be graphs where $V(G) = \{v_1, \ldots ,v_{n}\}$. We recall that the corona $G\odot H$ of graphs $G$ and $H$ is obtained from the disjoint union of $G$ and $n$ disjoint copies of $H$, say $H_1,\ldots, H_{n}$, such that for all $i\in \{1,\dots,n\}$, the vertex $v_i\in V(G)$ is adjacent to every vertex of $H_i$. We next present an exact formula for $\gamma_{oidR}(G\odot H)$ when $\Delta(H)\leq|V(H)|-2$. \begin{theorem} Let $G$ be a graph of order $n$, and let $H$ be a graph of maximum degree at most its order minus two. Then $\gamma_{oidR}(G\odot H)$ equals $$\min\{|V_0|(n(H)+\gamma(H))+|V_1|(\gamma_{oidR}(H)+1)+|V_2|(\gamma_{oiR}(H)+2)+|V_3|(\beta(H)+3)\},$$ taken over all possible functions $f_G=(V_0,V_1,V_2,V_3)$ over $V(G)$ for which the vertices labeled with $0$ form an independent set. \end{theorem} \begin{proof} Consider a function $f_G=(V_0,V_1,V_2,V_3)$ over $V(G)$ such that the vertices labeled with $0$ form an independent set. We next describe a function $f:V(G\odot H)\rightarrow\{0,1,2,3\}$ defined in the following way. Let $v_i\in V(G)=\{v_{1},\cdots,v_{n}\}$. \begin{itemize} \item If $f_G(v_i)=0$, then we take a $\gamma(H)$-set $D$, and for every vertex $w\in V(H_i)$ we make $f(w)=2$ if $w\in D$, and $f(w)=1$ otherwise.
\item If $f_G(v_i)=1$, then we choose a $\gamma_{oidR}(H)$-function $f_H$ and for every vertex $w\in V(H_i)$ we make $f(w)=f_H(w)$. \item If $f_G(v_i)=2$, then we choose a $\gamma_{oiR}(H)$-function $g_H$ and for every vertex $w\in V(H_i)$ we make $f(w)=g_H(w)$. \item If $f_G(v_i)=3$, then we take an $\alpha(H)$-set $S$, and for every vertex $w\in V(H_i)$ we make $f(w)=0$ if $w\in S$, and $f(w)=1$ otherwise. \item For every vertex $v_i\in V(G)$, we make $f(v_i)=f_G(v_i)$. \end{itemize} We shall now prove that such a function $f$ is an OIDRD function of $G\odot H$. We consider several situations for a given $i\in \{1,\dots,n\}$. \begin{itemize} \item $f_G(v_i)=0$. Since $H$ has maximum degree at most its order minus two, the $\gamma(H)$-set $D$ has at least two vertices. Thus, $v_i$ has at least two neighbors labeled with $2$. Moreover, every vertex $w\in V(H_i)$ such that $f(w)=1$ has a neighbor labeled with $2$ since $D$ is a dominating set of $H$. \item $f_G(v_i)=1$. Since $f_H$ is a $\gamma_{oidR}(H)$-function, every vertex of $V(H_i)$ satisfies the condition for $f$ to be an OIDRD function in $G\odot H$. Among other things, this also means that there is at least one vertex in $V(H_i)$ labeled with $2$ or $3$ under $f_H$. So, the vertex $v_i$ is adjacent to at least one vertex with label $2$ or $3$. \item $f_G(v_i)=2$. Note that any vertex of $V(H_i)$, labeled with $0$ under $g_H$, is adjacent to a vertex labeled with $2$ in $V(H_i)$. Also, since every vertex of $V(H_i)$ is adjacent to $v_i\in V(G)$, and $f(v_i)=2$, it follows that every vertex labeled with $0$ is adjacent to at least two vertices labeled with $2$, and every vertex labeled with $1$ is adjacent to at least one vertex labeled with $2$. \item $f_G(v_i)=3$. Since every vertex of $V(H_i)$ is adjacent to $v_i$, it clearly follows that every vertex of $V(H_i)$ satisfies the condition for $f$ to be an OIDRD function of $G\odot H$.
\end{itemize} As a consequence of all the situations described above, we deduce that $f$ is an OIDRD function of $G\odot H$. Since this has been done for an arbitrary function $f_G=(V_0,V_1,V_2,V_3)$ over $V(G)$ such that the vertices labeled with $0$ form an independent set, it is in particular satisfied for that function which gives the minimum weight. Furthermore, $\alpha(H)+\beta(H)=n(H)$. Therefore, $\gamma_{oidR}(G\odot H)\le \min\{|V_0|(n(H)+\gamma(H))+|V_1|(\gamma_{oidR}(H)+1)+|V_2|(\gamma_{oiR}(H)+2)+|V_3|(\beta(H)+3)\}$. On the other hand, consider a $\gamma_{oidR}(G\odot H)$-function $g=(V'_0,V'_1,V'_2,V'_3)$ and let $v_i\in V(G)$. We now analyze some cases.\\ \noindent \emph{Case 1:} $g(v_i)=0$. Hence, for every vertex $u\in V(H_i)$ it follows that $g(u)\ge 1$. Moreover, there must be at least one vertex $w\in V(H_i)$, such that $g(w)\ge 2$, since every vertex labeled $1$ under $g$ must be adjacent to a vertex labeled with $2$ or $3$ under $g$. Thus, it follows that $(V'_2\cup V'_3)\cap V(H_i)$ is a dominating set of $H_i$, and so, $g(V(H_i)\cup\{v_i\})\ge 2|(V'_2\cup V'_3)\cap V(H_i)|+|V'_1\cap V(H_i)|\ge 2\gamma(H)+n(H)-\gamma(H)=n(H)+\gamma(H)$.\\ \noindent \emph{Case 2:} $g(v_i)=1$. In such a situation, it can be readily seen that the restriction of $g$ over $H_i$ must be an OIDRD function of $H_i$. Thus, $g(V(H_i)\cup\{v_i\})\ge \gamma_{oidR}(H)+1$.\\ \noindent \emph{Case 3:} $g(v_i)=2$. Since every vertex of $V(H_i)$ is adjacent to $v_i$, the condition requiring a vertex $u\in V(H_i)$ labeled with $0$ to have two neighbors labeled with $2$ (if that is the case) implies that at least one of such neighbors must be in $V(H_i)$. Also, note that if there exists a vertex $w\in V(H_i)$ such that $g(w)=3$, then we can redefine $g(w)$ as $g(w)=2$ (maintaining all the remaining labels the same), and we obtain an OIDRD function of $G\odot H$ with smaller weight, which is not possible. Thus, every vertex of $V(H_i)$ has label at most $2$.
Consequently, the restriction of $g$ over $H_i$ must be an OIRD function of $H_i$. Therefore, $g(V(H_i)\cup\{v_i\})\ge \gamma_{oiR}(H)+2$.\\ \noindent \emph{Case 4:} $g(v_i)=3$. Now, we can easily observe that for every vertex $w\in V(H_i)$ we must have $g(w)\le 1$. Since $V(H_i)\cap V'_0$ is an independent set, we obtain that $g(V(H_i)\cup\{v_i\})\ge n(H)-\alpha(H)+3=\beta(H)+3$.\\ Since $V'_0$ is an independent set, it is clear that the function $f'_G=(V''_0,V''_1,V''_2,V''_3)=(V'_0\cap V(G),V'_1\cap V(G),V'_2\cap V(G),V'_3\cap V(G))$ satisfies that $V''_0=V'_0\cap V(G)$ is independent. As a consequence of all the cases above, by taking the sum $\sum_{i=1}^n g(V(H_i)\cup\{v_i\})$, we deduce that \begin{align*} \gamma_{oidR}(G\odot H) & \ge |V''_0|(n(H)+\gamma(H))+|V''_1|(\gamma_{oidR}(H)+1)+|V''_2|(\gamma_{oiR}(H)+2)+|V''_3|(\beta(H)+3)\\ & \ge \min\{|V_0|(n(H)+\gamma(H))+|V_1|(\gamma_{oidR}(H)+1)\\ &\hspace*{0.0cm}+|V_2|(\gamma_{oiR}(H)+2)+|V_3|(\beta(H)+3)\}, \end{align*} taken over all possible functions $f_G=(V_0,V_1,V_2,V_3)$ over $V(G)$ for which the vertices labeled with $0$ form an independent set. This completes the proof. \end{proof} \end{document}
\begin{definition}[Definition:Avoirdupois/Quarter] The '''quarter''' is an avoirdupois unit of mass. {{begin-eqn}} {{eqn | o = | r = 1 | c = '''quarter''' }} {{eqn | r = 2 | c = stone }} {{eqn | r = 28 | c = pounds avoirdupois }} {{eqn | o = \approx | r = 12 \cdotp 7 | c = kilograms }} {{end-eqn}} \end{definition}
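The conversion chain above can be checked numerically; the sketch below is our own illustration (not part of the original entry) and uses the exact definition of the international avoirdupois pound, $1$ pound $= 0.45359237$ kilograms.

```python
# Verify the chain: 1 quarter = 2 stone = 28 lb, approximately 12.7 kg,
# using the exact international avoirdupois pound: 1 lb = 0.45359237 kg.
LB_IN_KG = 0.45359237
quarter_lb = 2 * 14                      # 2 stone, each of 14 lb
quarter_kg = quarter_lb * LB_IN_KG
print(quarter_lb, round(quarter_kg, 1))  # 28 12.7
```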
\begin{document} \title{Optimal Control of a Quasi-variational Sweeping Process\thanks{HA, RA, and CNR are partially supported by NSF-DMS 2110263, 2012391, 1913004, and Air Force Office of Scientific Research under Award NO: FA9550-19-1-0036 and FA9550-22-1-0248.}} \begin{abstract} The paper addresses the study of a class of evolutionary quasi-variational inequalities of the parabolic type arising in the formation and growth models of granular and cohesionless materials. Such models and their mathematical descriptions are highly challenging and require powerful tools for their analysis and implementation. We formulate a space-time continuous optimal control problem for a basic model of this type, develop several regularization and approximation procedures, and establish the existence of optimal solutions {for the time-continuous and space-discrete problem}. Viewing a version of this problem as a controlled quasi-variational sweeping process leads us to deriving necessary optimality conditions for {the fully discrete problem} by using the advanced machinery of variational analysis and generalized differentiation. \end{abstract}\vspace*{-0.1in} \begin{keywords} Quasi-variational inequalities, optimal control, sweeping processes, variational analysis, generalized differentiation, approximation methods, necessary optimality conditions \end{keywords}\vspace*{-0.1in} \begin{AMS} 47J20, 49J40, 49M15, 49J53, 65J15, 65K10, 90C99 \end{AMS} \section{Introduction} This paper concerns the optimal selection of a supporting surface for the minimal accumulation of some granular cohesionless material that is being poured into a known region. The corresponding mathematical model can be formulated as an {\em optimal control problem} for an {\em evolutionary quasi-variational inequality} (QVI), or a {\em quasi-variational sweeping process}, with a gradient-type constraint discussed in what follows.
The problem is not standard in nature as the control variable acts on the nonconvex constraint set, and thus we face significant complexity in establishing {\em well-posedness} and deriving {\em necessary optimality conditions}; see below for more details. In mathematical terms, the initial supporting surface $y_0^{\mathrm{ref}}$ is defined as a function on a certain domain $\Omega$ {which is} vanishing at the boundary $\partial\Omega$. Suppose that the density rate of the cohesionless granular material that is poured over $y_0^{\mathrm{ref}}$ is known and is denoted by $f$. The resulting final shape of the growth surface is {denoted by} $y$. Furthermore, a subdomain $\Omega_0\subset\Omega$ is provided, where we are supposed to avoid the accumulation of material on a certain time interval $[0,T]$. Assume also that certain perturbations of $y_0^{\mathrm{ref}}$ are allowed while leading us to a shape $y^*_0$, where we aim to maintain the constraints $0\le y^*-y_0^*\ll y-y_0^{\mathrm{ref}}$ over $\Omega_0$ in a prescribed sense. Here $y^*$ is the state corresponding to the initial surface $y_0^*$. A schematic of this behavior in typical cases has been depicted in Figure~\ref{fig:image}. Having in mind that the problem possesses {insurmountable} difficulties in the original setting (in particular, it can be viewed as the control of the {\em fixed point} of a {\em discontinuous} mapping), we initially tackle a {\em semi-discrete} (in space) version of the problem with a regularized upper bound of the gradient constraint. In this setting, we are able {to prove} {\em existence of feasible solutions} to the resulting {\em QVI} by developing monotone regularization techniques, and then the {\em existence of minimizers} to the overall {\em optimization problem} by properly identifying conditions for the {\em Mosco set convergence} associated to the gradient constraint.
{Furthermore}, we develop several regularization and approximation procedures, which allow us to model an {appropriate version} of the basic problem as optimal control of the {\em quasi-variational sweeping process}, which has never been considered in the literature. Nevertheless, applying advanced tools of {\em variational analysis} and {\em second-order generalized differentiation} {enables us to derive} efficient necessary optimality conditions for {the fully discretized} quasi-variational sweeping process expressed entirely via the given data of the original problem.\vspace*{0.02in} The rest of the paper is organized as follows. In Section~\ref{sec:ProblemFormulation}, we formulate the {\em original QVI control} problem in appropriate functional spaces {and discuss its regularization.} The {\em semi-discrete} (in space) QVI is analyzed in Section~\ref{sec:SemiDiscrete}, where the existence and time-regularity of solutions are justified. In the same section, the perturbation of solutions with respect to supporting structures is studied. The latter allows us to obtain an existence result for the semi-discrete optimization problem. A formal derivation of stationarity conditions for a regularized problem is provided in Section~\ref{s:formal_deriv}. In Section~\ref{sec:DiscApprox}, we {consider a {\em fully discrete} problem and establish the existence of solutions to the corresponding optimization problem.} Section~\ref{sec:coderivative} reviews tools of first-order and second-order variational analysis and generalized differentiation, which allow us to derive {\em necessary optimality conditions} {for the} discretized sweeping control problem with smoothed gradient constraints. The concluding Section~\ref{conclusion} summarizes the major results obtained in this paper with {a discussion on} subsequent {\em numerical} implementations and {a future outlook}.
\begin{figure} \caption{(\textbf{LEFT}) Depiction of the initial supporting structure $y_0^{\rm ref}$, the density rate of poured material $f$ and the location of the subdomain $\Omega_0$, where the material should not accumulate. (\textbf{CENTER}) Final resulting shape at time $t=T$ for the material with a very flat angle of repose. (\textbf{RIGHT}) Optimal supporting structure $y_0^*$, which coincides with the final growth shape $y^*$ at time $t=T$ given that no material is accumulating anywhere.} \label{fig:image} \end{figure} \section{Problem Formulation and Smoothing}\label{sec:ProblemFormulation} Let $y_0:\Omega\to\mathbb{R}$ with $\Omega\subset\mathbb{R}^\mathrm{d}$ be a supporting structure such that $y_0|_{\partial\Omega}=0$, and let $y:(0,T)\times\Omega\to \mathbb{R}$ be the height at time $t\in (0,T)$ of the pile of a granular cohesionless material that begins pouring into the domain. Suppose that $y(t)|_{\partial\Omega}=0$, which implies that the material is allowed to leave the domain freely. The material is characterized by its angle of repose $\theta>0$ that corresponds to the steepest stable angle at which a slope may arise from a point source of the material. The (density) rate of the granular material being deposited at each point of the domain $\Omega$ is given by $f:(0,T)\times\Omega\to\mathbb{R}$. The mathematical description of such problems was pioneered by Prigozhin and his collaborators \cite{BarrettPrigozhinSandpile,MR3231973, MR3335194,Prigozhin1986,Prigozhin1994,Prigozhin1996,Prigozhin1996a} in the case of homogeneous materials (see also \cite{hr,hr2,hrs}). In this setting, we arrive at the following {\em QVI problem} with respect to the variable $y${, with $p \in [2,\infty]$ and $(\cdot,\cdot)$ denoting the $L^2(\Omega)$ scalar product},\vspace*{0.1in} \textbf{Problem }($\mathbf{QVI}(y_0)$).
Find $y\in L^2(0,T; H_0^1(\Omega))$ with $\partial_ty\in L^2(0,T;H^{-1}(\Omega))$ and \begin{equation}\label{qvi-state} y\in\mathcal{K}^p(y,y_0):=\big\{z\in H_0^1(\Omega)\;\big|\;|\nabla z|_p\le M_p(y,y_0)\big\} \end{equation} a.e.\ in $(0,T)$, and for which we have \begin{equation}\label{qvi-parab} (\partial_t y-f,v-y)\ge 0 \end{equation} whenever $v\in\mathcal{K}^p(y,y_0)$ a.e.\ in $(0,T)$. The operator $M_p(w,y_0):\Omega\to\mathbb{R}$ in $\mathbf{QVI}(y_0)$ is given by \begin{equation}\label{qviM} M_p(w,y_0):= \begin{cases} \alpha&\text{if $w>y_0$,} \\ \max\big(\alpha,|\nabla y_0|_{p}\big)&\text{if $w=y_0$,} \end{cases} \end{equation} where $\alpha:=\tan(\theta)$. In particular, this means that if the material has accumulated, then the gradient constraint is the material dependent one, but if it has not, we may get higher gradients on the supporting surface. This actually permits the material to slide off high slopes into other regions. A few words are to be said {about the} problem $(\mathbf{QVI}(y_0))$. Namely, this {is a} highly {\em nonsmooth} and {\em nonconvex} problem, where the {\em intrinsic nonconvexity} is induced by the constraint $y\in\mathcal{K}^p(y,y_0)$. Even in the case when $M_p(y,y_0)\equiv C$, a constant, the problem is nonsmooth, and the fact that the gradient is constrained pointwise increases the nonlinearity of the overall problem (in comparison with obstacle-type constraints). However, the major difficulty associated with the aforementioned problem is the fact that $M_p$ is {\em discontinuous}, and thus the solution to $\mathbf{QVI}(y_0)$ is a {\em fixed point} of a discontinuous mapping. Existence, stability, and the overall analysis for this kind of problem are extremely challenging {and are still open in the most general setting}. The choice of $p$ in $\mathbf{QVI}(y_0)$ determines possible shapes of $y$ and hence the possible structures of the piles.
In particular (and formally), if we consider a point source $f$ in the case $p=2$, the structure of $y$ (for a flat $y_0$) corresponds to a growing cone. Other cases like $p=\infty$ would imply that a point source of sand generates a pyramid structure, whose sides are aligned with the horizontal and vertical axes instead. In the latter case, note that $v\in \mathcal{K}^\infty(y,y_0)$ implies that \begin{equation*} -M_{\infty}(y,y_0)\le\partial_{x_i}v\le M_{\infty}(y,y_0) \quad \text{ a.e. in } \Omega \text{ and for }i=1,2,\ldots,\mathrm{d}. \end{equation*} A natural {\em optimal control problem} for $\mathbf{QVI}(y_0)$ can be described in words as follows. We want to modify $y_0$ (slightly), with respect to some reference structure $y_0^{\mathrm{ref}}$, in order to maintain a certain region $\Omega_0\subset\Omega$ relatively free of material. This leads us to the following optimization problem with a quasi-variational inequality constraint.\vspace*{0.1in} \textbf{Problem} $(\mathbb{P})$. Given $\sigma>0$, $f\in L^2(\Omega)^+$, and $y_0^{\mathrm{ref}}\in L^2(\Omega)$, consider the {\em optimization problem}: \begin{align*} &\mathrm{minimize}\qquad \int_0^T\int_{\Omega_0}(y-y_0)\,\mathrm{d}x\,\mathrm{d}t+\frac{\sigma}{2}\int_{\Omega}(y_0-y_0^{\mathrm{ref}})^2 \mathrm{d}x\quad \text{over} \quad y_0 \in H_0^1(\Omega)\\ &\mathrm{subject \:\: to \:\: (s.t.)}\quad y \text{ solves } \mathbf{QVI}(y_0),\\ &\hphantom{\mathrm{subject \:\:to \:\: (s.t.)}}\quad y_0\in \mathcal{A}, \end{align*} where the constraint set ${\cal A}$ is described by \begin{equation}\label{qvi-control} \mathcal{A}:=\big\{z\in H_0^1(\Omega)\;\big|\;y_0^{\mathrm{ref}}+\lambda_0\le z\le y_0^{\mathrm{ref}}+\lambda_1\:\text{ a.e.}\big\} \end{equation} with $\lambda_i\in H_0^1(\Omega)$ for {$i=0,1$} and $\lambda_0\le\lambda_1$ a.e.\ in $\Omega$. {Here $\sigma > 0$ is the given regularization parameter.} Let us discuss some underlying features of the optimization problem $(\mathbb{P})$.
This problem can be viewed as an optimal control problem for quasi-variational inequalities with state functions $y\in L^2(0,T;H_0^1(\Omega))$ and control functions $y_0\in H_0^1(\Omega)$ over the {\em parabolic} QVI \eqref{qvi-parab} subject to the {\em hard/pointwise control constraint} \eqref{qvi-control} and the {\em mixed state-control constraint} \eqref{qvi-state}. Optimal control problems of this type are among the most challenging in control theory. As mentioned above, the state-control constraint \eqref{qvi-state} is really complicated from the viewpoint of quasi-variational inequalities. This constraint also creates {difficulties in} handling it from the viewpoint of optimal control {and in deriving} necessary optimality conditions. Observe also that we consider $H_0^1(\Omega)$ control perturbations in \eqref{qvi-control} with certain pointwise bounds. In particular, this makes it possible for specific regions to get modified, while other regions of $y_0^{\mathrm{ref}}$ may remain the same. Note {further} that even the application of the direct method of calculus of variations falls short of establishing the existence of optimal {solutions to} $(\mathbb{P})$. In particular, without certain additional hypotheses, a minimizing sequence $\{y_0^n\}$ of this problem does not allow us to pass from ``$y^n \text{ solves } \mathbf{QVI}(y^n_0)$'' to the existence of a function $y^* \text{ solving } \mathbf{QVI}(y^*_0)$, where $y^*_0$ is some accumulation point of $\{y_0^n\}$. We discuss the main assumptions needed for the application of the direct method in the next section.\vspace*{0.05in} {In order to overcome part of these challenges}, we consider certain {\em smoothing approximation procedures} to deal with the mapping $M_p$, which is discontinuous, a major obstacle from both theoretical and numerical viewpoints.
To this end, we observe that the mapping $M_p$ can be redefined as \begin{equation*} M_ p(w,y_0):= \begin{cases} \alpha&\text{if $w>y_0+\epsilon$,}\\ \max\big(\alpha,|\nabla y_0|_{p}\big)\displaystyle\frac{(y_0+\epsilon-w)}{\epsilon}+\displaystyle\alpha\frac{(w-y_0)}{\epsilon}&\text{if $y_0+\epsilon\ge w>y_0$,}\\ \max\big(\alpha,|\nabla y_0|_{p}\big)&\text{if $w=y_0$}. \end{cases} \end{equation*} Smoother approximations $\Tilde{M}_p$ of $M_p$ may be obtained by using {\em higher-order interpolants} as well as {\em regularizations} of the $\max$ function and the $\mathbb{R}^{\mathrm{d}}$ norm. Due to this, we assume throughout the paper that \begin{equation*} \Tilde{M}_p \text{ is } k \text{ times continuously differentiable for some} \;k\ge 2. \end{equation*} The above redefinition of $M_p$ and its approximations allow us to overcome a major difficulty associated with the model. Indeed, the solution $y$ to problem ($\mathbf{QVI}(y_0)$) can now be equivalently formulated as a {\em fixed point} of a {\em continuous} mapping, which allows us to employ some perturbation methods. \section{The semi-discrete problem}\label{sec:SemiDiscrete} In this section, we construct a semi-discrete version of the original QVI control problem $(\mathbb{P})$ involving a {\em space discretization} of the domain $\Omega$ in the QVI model. Given $\mathbf{f}:(0,T)\to \mathbb{R}^N$, the {\em semi-discrete QVI problem} is formulated as follows: Find $\mathbf{y}:(0,T)\to\mathbb{R}^N$ such that \begin{equation}\label{QVI0}\tag{$\mathrm{QVI}_N(\mathbf{y}_0)$} \mathbf{y}(t)\in \mathcal{K}^p(\mathbf{y}(t),\mathbf{y}_0)\;\mbox{ with }\;\left( \mathbf{y}'(t)-\mathbf{f}(t),\mathbf{v}-\mathbf{y}(t)\right)_{\mathbb{R}^N}\ge 0\;\mbox{ for all }\;\mathbf{v}\in \mathcal{K}^p(\mathbf{y}(t),\mathbf{y}_0) \end{equation} for a.e. $t\in(0,T)$.
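The $\epsilon$-interpolation that makes $M_p$ continuous is easy to sanity-check numerically. The following scalar Python sketch (purely illustrative: the function and parameter names are ours, and a single number stands in for $|\nabla y_0|_p$ at a fixed point of $\Omega$) evaluates the redefined bound and confirms that the linear transition meets both endpoint values:

```python
def M_interp(w, y0, grad_norm, alpha, eps):
    """Continuous surrogate for the discontinuous bound M_p:
    equals max(alpha, grad_norm) at w = y0, interpolates linearly
    on (y0, y0 + eps], and equals alpha for w > y0 + eps."""
    top = max(alpha, grad_norm)
    if w > y0 + eps:
        return alpha
    if w > y0:  # linear hat on the transition layer
        return top * (y0 + eps - w) / eps + alpha * (w - y0) / eps
    return top  # w == y0: no material has accumulated yet

alpha, eps = 0.5, 1e-2   # alpha = tan(theta); illustrative values
# on the supporting surface the bound is the larger of alpha and the slope
assert M_interp(0.0, 0.0, 2.0, alpha, eps) == 2.0
# once material has accumulated beyond eps, the bound is alpha
assert M_interp(0.02, 0.0, 2.0, alpha, eps) == alpha
# the interpolation meets alpha continuously at w = y0 + eps
assert abs(M_interp(eps, 0.0, 2.0, alpha, eps) - alpha) < 1e-12
```

Smoother $C^k$ versions replace this linear hat by higher-order interpolants, as assumed for $\Tilde{M}_p$ above.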
The two most important choices for $\mathcal{K}^p(\mathbf{y}(t),\mathbf{y}_0)$ are $p=2$ and $p=\infty$, where for arbitrary $\mathbf{w}$ and $\mathbf{z}$ \begin{align}\label{p2} &\mathcal{K}^2(\mathbf{w},\mathbf{z}):=\big\{\mathbf{v}\in\mathbb{R}^N\;\big|\;\sqrt{|(\mathbf{D}_1\mathbf{v})_i|^2+|(\mathbf{D}_2\mathbf{v})_i|^2}\le \big(M_2(\mathbf{w},\mathbf{z})\big)_i,\;i=1,\ldots,N\big\}, \end{align} and \begin{align}\label{setK} &\mathcal{K}^\infty(\mathbf{w},\mathbf{z}):=\big\{\mathbf{v}\in\mathbb{R}^N\;\big|\;- \big(M_\infty(\mathbf{w},\mathbf{z})\big)_i\le(\mathbf{D}_j\mathbf{v})_i\le \big(M_\infty(\mathbf{w},\mathbf{z})\big)_i,\;j=1,2,\;i=1,2,\ldots,N\big\}, \end{align} with $\mathbf{D}_1,\mathbf{D}_2\in\mathbb{R}^{N\times N}$ and $M_2,M_{\infty}:\mathbb{R}^N\times\mathbb{R}^N\to\mathbb{R}^N$. Observe that $\mathbf{D}_1$ and $\mathbf{D}_2$ represent discrete approximations of the partial derivatives $\partial/\partial x$ and $\partial/\partial y$, respectively. In this vein, we have that $\mathbf{D}:=(\mathbf{D}_1, \mathbf{D}_2):\mathbb{R}^{N}\to\mathbb{R}^{2N}$ provides an approximation of the gradient.\vspace*{0.05in} For the rest of the possible $p$ values, i.e., $2< p <\infty$, for arbitrary $\mathbf{w},\mathbf{z}$ the sets $\mathcal{K}^p(\mathbf{w},\mathbf{z})$ are defined as \begin{align}\label{Kp} &\mathcal{K}^p(\mathbf{w},\mathbf{z}):=\big\{\mathbf{v}\in\mathbb{R}^N\;\big|\; |(\mathbf{D}\mathbf{v})_i|_p:=\big(|(\mathbf{D}_1\mathbf{v})_i|^p+|(\mathbf{D}_2\mathbf{v})_i|^p\big)^{\frac{1}{p}}\le\big(M_p(\mathbf{w},\mathbf{z})\big)_i,\quad i=1,\ldots,N\big\}, \end{align} where the mapping $M_p$ is defined by \begin{equation}\label{M_p} \big(M_p(\mathbf{w},\mathbf{z})\big)_i:= \begin{cases} \alpha&\text{if $\mathbf{w}_i>\mathbf{z}_i+\epsilon$,} \\ \max\big(\alpha,|(\mathbf{D} \mathbf{z})_i|_{p}\big)\displaystyle\frac{(\mathbf{z}_i+\epsilon-\mathbf{w}_i)}{\epsilon}+\displaystyle\alpha\frac{(\mathbf{w}_i-\mathbf{z}_i)}{\epsilon}&\text{if
$\mathbf{z}_i+\epsilon\ge\mathbf{w}_i>\mathbf{z}_i$,} \\ \max\big(\alpha,|(\mathbf{D}\mathbf{z})_i|_{p}\big)&\text{if $\mathbf{w}_i=\mathbf{z}_i$} \end{cases} \end{equation} with $\mathbf{D}\mathbf{z}:=(\mathbf{D}_1\mathbf{z},\mathbf{D}_2\mathbf{z})$ and $(\mathbf{D}\mathbf{z})_i:=((\mathbf{D}_1\mathbf{z})_i,(\mathbf{D}_2\mathbf{z})_i)$. Although $M_p$ is only continuous, we can consider {\em smooth approximations} $\Tilde{M}_p$ of $M_p$ as explained in the previous section. Now we prove that the quasi-variational inequality \eqref{QVI0} admits at least one solution. Although the proof of the following theorem can be inferred from other sources, we include it for the sake of completeness and because parts of it are used later in other arguments. \begin{thm}{\bf(existence of solutions to semi-discrete QVIs).}\label{thm:existenceQVI} Let $\mathbf{y}_0\in\mathbb{R}^N$, and let $\mathbf{f}:(0,T)\to\mathbb{R}^N$ be such that $\mathbf{f}\in L^2(0,T)$. Then there exists a solution $\mathbf{y}:(0,T)\to\mathbb{R}^N$ to \eqref{QVI0} with the properties \begin{equation}\label{prop} \mathbf{y}\in C([0,T]) \quad \text{and} \quad \mathbf{y}'\in L^2(0,T). \end{equation} \end{thm} {\bf Proof}. We split the proof of the theorem into the following four major steps, where each of the steps is of its independent interest.\\[1ex] {\bf Step~1:} {\em Existence of solutions to the regularized variational inequality.} We confine ourselves to the case where $p=2$, while observing that the general case with $2\le p\le\infty$ can be treated similarly.
Given $\gamma>0$, consider the nonlinear ordinary differential equation \begin{equation}\label{ODE1} \mathbf{y}'(t)=\mathbf{f}(t)-\gamma G\big(t,\mathbf{y}(t)\big),\quad \mathbf{y}(0)=\mathbf{y}_0 \end{equation} with the mapping $G$ in the right-hand side of the equation defined by \begin{equation*} G(t,\mathbf{y}(t)):=\mathbf{D}^\mathrm{T}(|\mathbf{D}\mathbf{y}(t)|^2_2-M(t)^2)^+\mathbf{D}\mathbf{y}(t), \end{equation*} where $M(t):=M_{2}(\mathbf{z}(t),\mathbf{y}_0)$ for an arbitrary $\mathbf{z}\in C(\mathbb{R})$, $\mathbf{y}_0\in\mathbb{R}^N$, and \begin{equation*} \big(\mathbf{h}(t)\big)^+:=\big(\max(h_1(t),0),\max(h_2(t),0),\ldots, \max(h_N(t),0)\big) \end{equation*} for $\mathbf{h}:(0,T)\to\mathbb{R}^N$. Note that the mapping $\mathbb{R}^N\ni\mathbf{h}\mapsto G(t,\mathbf{h})\in\mathbb{R}^N$ is {\em monotone} on $\mathbb{R}^N$ for each $t$, i.e., \begin{equation}\label{MonG} \big\langle G(t,\mathbf{h}_1)-G(t,\mathbf{h}_2),\mathbf{h}_1-\mathbf{h}_2\big\rangle\ge 0\;\mbox{ whenever }\;\mathbf{h}_1,\mathbf{h}_2\in\mathbb{R}^N, \end{equation} and that we have $J'(\mathbf{h})\mathbf{d}=\big\langle G(t,\mathbf{h}),\mathbf{d}\big\rangle$ for the convex function $J(\mathbf{h}):=((|\mathbf{D}\mathbf{h}|^2_2-M(t)^2)^+\mathbf{D}\mathbf{h},\mathbf{D}\mathbf{h})$. The integral formulation of \eqref{ODE1} is then given by \begin{equation}\label{ODE2} \mathbf{y}(t)=\mathbf{y}_0+\int_0^t\mathbf{f}(s)\mathrm{d} s-\gamma\int_0^tG\big(s,\mathbf{y}(s)\big)\mathrm{d} s=:\Lambda(\mathbf{y})(t), \end{equation} where $\Lambda:C([0,T])\to C([0,T])$ and $\Lambda(\mathbf{y})'\in L^2(0,T)$ for $\mathbf{y}\in C([0,T])$. To verify the existence of a solution to \eqref{ODE2} for each $\gamma>0$, we use the classical Leray-Schauder theorem. First note that the operator $\Lambda:C([0,T])\to C([0,T])$ is \emph{continuous}.
Taking now a sequence $\{\mathbf{y}_n\}$ bounded in $C([0,T])$ ensures that $\{\Lambda(\mathbf{y}_n)\}$ is also bounded in $C([0,T])$, and furthermore $\{\Lambda(\mathbf{y}_n)'\}$ is bounded in $L^2(0,T)$. Indeed, if $C>0$ is such that \begin{equation*} \sup_{n}\|\mathbf{y}_n\|_{C([0,T])}\le C, \end{equation*} then we clearly get the estimate \begin{equation*} \|\Lambda(\mathbf{y}_n)'\|_{L^2(0,T)}\le\|\mathbf{f}\|_{L^2(0,T)}+\gamma\|\mathbf{D}^\mathrm{T}\|\cdot\|\mathbf{D}\|C T^{1/2}\big(\|\mathbf{D}\|^2 C^2+\|M\|_{C([0,T])}\big). \end{equation*} It follows, by the compact embedding of $V:=\{\mathbf{v}\in L^2(0,T)\;|\;\mathbf{v}'\in L^2(0,T)\}$ into $C([0,T])$, that $\Lambda(\mathbf{y}_n)\to\mathbf{g}$ for some $\mathbf{g}\in C([0,T])$ along a subsequence. This tells us that $\Lambda:C([0,T])\to C([0,T])$ is \emph{completely continuous}. Finally in this step, we prove that the set \begin{equation*} Y:=\big\{\mathbf{y}\in C([0,T])\;\big|\;\mathbf{y}=\lambda \Lambda(\mathbf{y})\;\text{ for some }\;\lambda\in(0,1)\big\} \end{equation*} is bounded. To this end, observe first that if $\mathbf{y}\in Y$, then $\mathbf{y}(0)=\lambda\mathbf{y}_0$ and \begin{equation*} \mathbf{y}'(t)=\lambda\mathbf{f}(t)-\gamma\lambda G\big(t,\mathbf{y}(t)\big). \end{equation*} Taking the inner product with $\mathbf{y}$ and integrating from $0$ to $s<T$ gives us \begin{align*} \|\mathbf{y}(s)\|^2_2-\|\mathbf{y}_0\|_2^2&\le 2\lambda\int_0^s\mathbf{f}(t)\cdot \mathbf{y}(t)\mathrm{d} t-2\lambda\gamma\int_0^sG\big(t,\mathbf{y}(t)\big)\mathbf{y}(t)\mathrm{d} t\\ &\le 2\lambda\left(\sup_{t\in[0,T]}\|\mathbf{y}(t)\|_2\right)\int_0^T\|\mathbf{f}(t)\|_2 \mathrm{d} t, \end{align*} where we use that $G(t,\mathbf{h})\mathbf{h}\ge 0$ for all $\mathbf{h}\in\mathbb{R}^N$ and all $t\in(0,T)$. It follows that \begin{equation} \sup_{t\in [0,T]}\|\mathbf{y}(t)\|_2\le C_1(\mathbf{y}_0,\mathbf{f})<\infty, \end{equation} i.e., all elements of $Y$ are bounded.
Therefore, the Leray-Schauder theorem yields the existence of a solution $\mathbf{y}^\gamma$ to \eqref{ODE2} for each $\gamma>0$.\\[1ex] {\bf Step~2:} {\em Uniqueness of solutions to the regularized variational inequality}. To verify the uniqueness, suppose that we have two solutions $\mathbf{y}_i^\gamma$ for $i=1,2$. Then, since both functions satisfy \eqref{ODE1}, we subtract the equations term by term, test the result with $\mathbf{y}_1^\gamma-\mathbf{y}_2^\gamma$, and integrate from $0$ to $s<T$. Thus it follows from the monotonicity in \eqref{MonG} that \begin{align*} \|(\mathbf{y}_1^\gamma-\mathbf{y}_2^\gamma)(s)\|^2_2&=-2\gamma\int_0^s\big\langle G\big(t,\mathbf{y}^\gamma_1(t)\big)-G\big(t,\mathbf{y}_2^\gamma(t)\big),\mathbf{y}_1^\gamma(t)-\mathbf{y}_2^\gamma(t)\big\rangle\mathrm{d} t\le 0, \end{align*} which therefore justifies the uniqueness of solutions to \eqref{ODE1}.\\[1ex] {\bf Step~3:} {\em Existence and uniqueness of solutions to the variational inequality problem.} Arguing similarly to Step~2 allows us to verify the uniform boundedness of solutions $\mathbf{y}^\gamma$ to \eqref{ODE1} with respect to $\gamma>0$. Indeed, we get from \eqref{ODE1} by integrating from $0$ to $t$ and using $G(s,\mathbf{h})\mathbf{h}\ge 0$ for all $\mathbf{h}\in\mathbb{R}^N$ and all $s\in (0,T)$ that \begin{align*} \|\mathbf{y}^\gamma(t)\|^2_2-\|\mathbf{y}_0\|_2^2 &\le 2\left(\sup_{s\in[0,T]}\|\mathbf{y}^\gamma(s)\|_2\right)\int_0^T\|\mathbf{f}(s)\|_2 \mathrm{d} s. \end{align*} This readily implies the estimate \begin{equation}\label{eq:ybound} \sup_{\gamma>0}\sup_{s\in[0,T]}\|\mathbf{y}^\gamma(s)\|_2\le C_1(\mathbf{y}_0,\mathbf{f})<\infty.
\end{equation} By testing in \eqref{ODE1} with an arbitrary $\mathbf{v}\in L^2(0,T)$ such that $\mathbf{v}'\in L^2(0,T)$, we get \begin{align}\label{Gzero} &\gamma\int_0^TG\big(s,\mathbf{y}^\gamma(s)\big)\mathbf{v}(s)\mathrm{d} s=\int_0^T\mathbf{f}(s)\mathbf{v}(s) \mathrm{d} s-\int_0^T(\mathbf{y}^\gamma)'(s)\mathbf{v}(s)\mathrm{d} s\\\notag &\qquad\le\left(\int_0^T\|\mathbf{f}(s)\|^2_2\mathrm{d} s\right)^{1/2}\left(\int_0^T\|\mathbf{v}(s)\|^2_2\mathrm{d} s\right)^{1/2}+\int_0^T \mathbf{y}^\gamma(s)\mathbf{v}'(s)\mathrm{d} s-\mathbf{y}^\gamma(T)\mathbf{v}(T)+\mathbf{y}_0\mathbf{v}(0) \\\notag &\qquad\le\left(\int_0^T\|\mathbf{f}(s)\|^2_2 \mathrm{d} s\right)^{1/2}\left(\int_0^T\|\mathbf{v}(s)\|^2_2 \mathrm{d} s\right)^{1/2}+C_1(\mathbf{y}_0,\mathbf{f})T^{1/2}\left(\int_0^T\|\mathbf{v}'(s)\|^2_2 \mathrm{d} s\right)^{1/2}. \end{align} Since $V:=\{\mathbf{v}\in L^2(0,T)\;|\;\mathbf{v}'\in L^2(0,T)\}$ is continuously and compactly embedded in $C([0,T])$, we have \begin{align*} &\gamma\int_0^TG\big(s,\mathbf{y}^\gamma(s)\big)\mathbf{v}(s)\mathrm{d} s\le C_2(\mathbf{y}_0,\mathbf{f},T) \left(\left(\int_0^T\|\mathbf{v}(s)\|^2_2\mathrm{d} s\right)^{1/2}+\left(\int_0^T\|\mathbf{v}'(s)\|^2_2 \mathrm{d} s\right)^{1/2}\right) \end{align*} for some $C_2(\mathbf{y}_0,\mathbf{f},T)$. This yields the estimate \begin{align*} \sup_{\gamma>0}\big\|\gamma G\big(\cdot,\mathbf{y}^\gamma(\cdot)\big)\big\|_{V^*}&\le C_2(\mathbf{y}_0,\mathbf{f},T), \end{align*} and hence $\sup_{\gamma>0}\|(\mathbf{y}^\gamma)'\|_{V^*}\le C_3(\mathbf{y}_0,\mathbf{f},T)$. In particular, we get that \begin{align}\label{eq:Derybound} \sup_{\gamma>0}\|(\mathbf{y}^\gamma)'\|_{L^2(0,T)}&\le C_3(\mathbf{y}_0,\mathbf{f},T). \end{align} Note that $\{\mathbf{y}^\gamma\}_{\gamma>0}$ is bounded in $V$, so we can choose a sequence $\mathbf{y}^n:=\mathbf{y}^{\gamma_n}$ with $\gamma_n\to\infty$ such that $\mathbf{y}^n\rightharpoonup \mathbf{y}^*$ for some $\mathbf{y}^*\in V$.
Since $V$ is continuously and compactly embedded in $C([0,T])$, it follows that \begin{equation*} \mathbf{y}^n\to \mathbf{y}^* \text{ in } C([0,T])\quad\text{and}\quad (\mathbf{y}^n)'\rightharpoonup (\mathbf{y}^*)' \text{ in } L^2(0,T). \end{equation*} Moreover, observe from \eqref{Gzero} that \begin{align}\label{Gzeros} &\lim_{n\to\infty}\int_0^TG\big(s,\mathbf{y}^n(s)\big)\mathbf{v}(s)\mathrm{d} s=\int_0^T\big((|\mathbf{D}\mathbf{y}^*(s)|^2_2-M(s)^2)^+\mathbf{D}\mathbf{y}^*(s),\mathbf{D}\mathbf{v}(s)\big)\mathrm{d} s=0, \end{align} from which we deduce that $|\mathbf{D}\mathbf{y}^*(t)|_2\le M(t)$, i.e., $\mathbf{y}^*(t)\in\mathcal{K}^2(\mathbf{z}(t), \mathbf{y}_0)$. Testing further \eqref{ODE1} with $\mathbf{w}=\mathbf{v}-\mathbf{y}^n$ for $\mathbf{v}\in\mathcal{K}^2(\mathbf{z}(t),\mathbf{y}_0)$ gives us the equality \begin{equation}\label{ODEweak} \begin{split} \int_0^T\big\langle(\mathbf{y}^n)'(t)-\mathbf{f}(t),\mathbf{v}(t)-\mathbf{y}^n(t)\big\rangle\mathrm{d} t&=\gamma_n\int_0^T\big\langle G\big(t,\mathbf{v}(t)\big)-G\big(t,\mathbf{y}^n(t)\big),\mathbf{v}(t)-\mathbf{y}^n(t)\big\rangle\mathrm{d} t, \end{split} \end{equation} where the condition $G(t,\mathbf{v}(t))=0$ is used. Employing the fact that $\mathbf{h}\mapsto G(t,\mathbf{h})$ is monotone, we have that the right-hand side of \eqref{ODEweak} is nonnegative. Passing to the limit as $n\to\infty$ leads us to \begin{equation} \begin{split}\label{VIweak} \int_0^T\big\langle (\mathbf{y}^*)'(t)-\mathbf{f}(t),\mathbf{v}(t)-\mathbf{y}^*(t)\big\rangle\mathrm{d} t&\ge 0.
\end{split} \end{equation} Since $\mathbf{v}$ was chosen arbitrarily, a simple density device shows that $\mathbf{y}^*$ solves the variational inequality \begin{equation}\label{VI0} \mathbf{y}(t)\in\mathcal{K}^2\big(\mathbf{z}(t),\mathbf{y}_0\big)\;\mbox{ with }\;\left\langle \mathbf{y}'(t)-\mathbf{f}(t),\mathbf{v}-\mathbf{y}(t)\right\rangle_{\mathbb{R}^N}\ge 0\;\mbox{ for all }\;\mathbf{v}\in\mathcal{K}^2\big(\mathbf{z}(t),\mathbf{y}_0\big), \end{equation} and the claimed uniqueness follows by monotonicity arguments.\\[1ex] {\bf Step~4:} {\em Existence of solutions to the quasi-variational inequality problem.} Denote by $\mathbf{y}=S(\mathbf{z})$ the (single-valued by Step~3) solution mapping of the variational inequality \eqref{VI0}. Arguing similarly to Step~3 ensures that the mapping $S:V\to V$ is compact. Furthermore, by the estimate \begin{equation*} M_2\big(\mathbf{z}(t),\mathbf{y}_0\big)\le\max(\alpha,|\mathbf{D}\mathbf{y}_0|_{\infty})=:\beta \end{equation*} we deduce that $S$ maps $\mathcal{K}^2_\beta$ into $\mathcal{K}^2_\beta$, where \begin{equation*} \mathcal{K}^2_\beta:=\big\{\mathbf{v}\in C([0,T])\;\big|\;|\mathbf{D}\mathbf{v}(t)|_2\le\beta\;\text { a.e.}\big\}. \end{equation*} Finally, employing Schauder's fixed-point theorem yields the existence of a fixed point $\mathbf{y}=S(\mathbf{y})$, and therefore the quasi-variational inequality \eqref{QVI0} admits a solution satisfying \eqref{prop}. This verifies the statement of Step~4 and thus completes the proof of the theorem. $ \triangle$\vspace*{0.08in} Now we formulate the following {\em optimal control problem with the \eqref{QVI0} constraints}. The previous theorem allows us to pose the problem in a slightly more regular space than chosen initially.\vspace*{0.1in} \textbf{Problem} $(\mathbb{P}_N)$.
Given a number $\sigma>0$, a nonnegative (i.e., with nonnegative components) mapping $\mathbf{f}:(0,T)\to \mathbb{R}^N$, and vectors $\mathbf{a},\mathbf{y}_0^{\mathrm{ref}}\in \mathbb{R}^N$, consider the following optimal control problem for \eqref{QVI0}: \begin{align*} &\mathrm{minimize} \qquad J(\mathbf{y},\mathbf{y}_0):=\int_0^T\big\langle\mathbf{a}, \mathbf{y}(t)-\mathbf{y}_0\big\rangle\mathrm{d}t+\frac{\sigma}{2}\big\langle\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}},\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}}\big\rangle\quad \text{over}\quad \mathbf{y}_0 \in \mathbb{R}^ N\\ &\mathrm{subject \:\: to \:\:}\quad \mathbf{y} \text{ solves } \mathrm{QVI}_N(\mathbf{y}_0),\\ &\hphantom{\mathrm{subject \:\: to \:\:}\quad} \mathbf{y}\in V:=\big\{\mathbf{v}\in L^2(0,T)\;\big|\;\mathbf{v}'\in L^2(0,T)\big\},\\ &\hphantom{\mathrm{subject \:\: to \:\:}\quad} \mathbf{y}_0\in \mathcal{A}, \end{align*} where the latter {\em control constraint} set is defined by \begin{equation*} \mathcal{A}:=\big\{\mathbf{z}\in \mathbb{R}^ N\;\big|\;\mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_0\le \mathbf{z}\le \mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_1\big\}, \end{equation*} with $\bm{\lambda}_0,\bm{\lambda}_1\in \mathbb{R}^ N$ such that $0\le \bm{\lambda}_0\leq\bm{\lambda}_1$.\vspace*{0.12in} Our next goal is to verify the {\em existence of solutions} to the formulated optimal control problem $(\mathbb{P}_N)$. Before this, recall the notion of {\em Mosco convergence} for sets in reflexive Banach spaces. \vspace*{-0.05in} \begin{defn}{\bf(Mosco convergence).}\label{definition:MoscoConvergence} Let $\mathcal{K}$ and $\mathcal{K}_n$ for $n\in\mathbb{N}$ be nonempty, closed, and convex subsets of a reflexive Banach space $V$.
Then the sequence $\{\mathcal{K}_n\}$ is said to converge to $\mathcal{K}$ in the sense of Mosco as $n\rightarrow\infty$, which is signified by $$\mathcal{K}_n\hspace{-.05cm}\yrightarrow{\scriptscriptstyle \mathrm{M}}[-1pt]\hspace{-.05cm}\mathcal{K},$$ if the following two conditions are satisfied: \begin{enumerate}[\upshape(I)] \item\label{itm:1} For each $w\in \mathcal{K}$, there exists $\{w_{n'}\}$ such that $w_{n'}\in \mathcal{K}_{n'}$ for $n'\in \mathbb{N}'\subset\mathbb{N}$ and $w_{n'}\rightarrow w$ in $V$. \item\label{itm:2} If $w_n\in \mathcal{K}_n$ and $w_n\rightharpoonup w$ in $V$ along a subsequence, then $w\in \mathcal{K}$. \end{enumerate} \end{defn}\vspace*{0.05in} Here is the aforementioned existence theorem for the formulated optimal control problem. \vspace*{-0.05in} \begin{thm}{\bf(existence of optimal solutions to $(\mathbb{P}_N)$).}\label{thm:ExistPN} The optimal control problem $(\mathbb{P}_N)$ for \eqref{QVI0} admits an optimal solution. \end{thm} {\bf Proof}. We split the proof of the theorem into two major steps.\\[1ex] {\bf Step~1}: {\em Properties of minimizing sequences in $(\mathbb{P}_N)$}. Observe first that Theorem~\ref{thm:existenceQVI} tells us that for each $\mathbf{y}_0\in\mathcal{A}$ there exists $\mathbf{y}\in V$ solving $\mathrm{QVI}_N(\mathbf{y}_0)$. This yields the existence of a minimizing sequence $\{(\mathbf{y}_n,\mathbf{y}_0^n)\}$ for problem $(\mathbb{P}_N)$, i.e., for each $n\in\mathbb{N}$ we have \begin{equation*} (\mathbf{y}_n,\mathbf{y}_0^n)\in V\times\mathcal{A}, \quad \mathbf{y}_n \text{ solves }\mathrm{QVI}_N(\mathbf{y}_0^n) \quad \text{ with } J(\mathbf{y}_n,\mathbf{y}_0^n)\to \inf J\;\mbox{ as }\;n\to\infty.
\end{equation*} Since $\mathbf{y}_0^n\in\mathcal{A}$ for all $n\in\mathbb{N}$ and the set $\mathcal{A}$ is closed and bounded in $\mathbb{R}^N$, there exist a subsequence of the minimizing sequence (no relabeling) and $\mathbf{y}_0^*\in\mathcal{A}$ such that \begin{equation*} \mathbf{y}_0^n\to\mathbf{y}_0^*\;\mbox{ as }\;n\to\infty. \end{equation*} Taking into account that the solutions $\mathbf{y}_n$ of $\mathrm{QVI}_N(\mathbf{y}^n_0)$ are in $V$, we deduce from \eqref{eq:ybound} and \eqref{eq:Derybound} that \begin{equation}\label{eq:boundsyn} \sup_{s\in [0,T]}\|\mathbf{y}_n (s)\|_2\le \sup_{\mathbf{y}_0\in\mathcal{A}}C_1(\mathbf{y}_0,\mathbf{f})<\infty \qquad \text{ and }\qquad \|\mathbf{y}'_n\|_{L^2(0,T)}\le \sup_{\mathbf{y}_0\in\mathcal{A}}C_3(\mathbf{y}_0,\mathbf{f},T)<\infty \end{equation} with $C_1(\mathbf{y}_0,\mathbf{f})$ and $C_3(\mathbf{y}_0,\mathbf{f},T)$ being independent of $n$. To verify these bounds, we get from $\mathbf{y}_n\in C([0,T])$ and the proof of Theorem~\ref{thm:existenceQVI} that for each $n\in\mathbb{N}$ and $k\in\mathbb{N}$ there exists $\mathbf{z}_k\in V$ satisfying the equation \begin{equation}\label{ODE3} \mathbf{z}_k(t)=\mathbf{y}^n_0+\int_0^t\mathbf{f}(s)\mathrm{d} s-k\int_0^tG^n\big(s,\mathbf{z}_k(s)\big)\mathrm{d} s, \end{equation} where the integrand $G^n(s,\mathbf{z}(s))$ is given by \begin{equation*} G^n(s,\mathbf{z}(s)):=\mathbf{D}^*(|\mathbf{D}\mathbf{z}(s)|^2_2-M^n(s)^2)^+\mathbf{D}\mathbf{z}(s) \end{equation*} with $\mathbf{D}^*$ standing for the matrix transposition/adjoint operator, and with the mapping $M^n$ defined by \begin{equation}\label{eq:Mn} M^n(t):=M_2\big(\mathbf{y}_n(t),\mathbf{y}_0^n\big).
\end{equation} Observe further by the proof of Theorem~\ref{thm:existenceQVI} that we have the convergence \begin{equation*} \mathbf{z}_k\to\mathbf{y}_n\text{ in } C([0,T]) \qquad \text{and}\qquad (\mathbf{z}_k)'\rightharpoonup (\mathbf{y}_n)' \text{ in } L^2(0,T), \end{equation*} and that the following bounds are satisfied: \begin{equation*} \sup_{k\in\mathbb{N}}\sup_{s\in[0,T]}\|\mathbf{z}_k (s)\|_2\le C_1(\mathbf{y}_0,\mathbf{f}) \qquad \text{ and }\qquad \sup_{k\in \mathbb{N}}\|\mathbf{z}'_k\|_{L^2(0,T)}\le C_3(\mathbf{y}_0,\mathbf{f},T). \end{equation*} This verifies \eqref{eq:boundsyn} by noting that $\sup_{\mathbf{y}_0\in\mathcal{A}}C_1(\mathbf{y}_0,\mathbf{f})$ and $\sup_{\mathbf{y}_0\in\mathcal{A}}C_3(\mathbf{y}_0,\mathbf{f},T)$ are finite. It follows from \eqref{eq:boundsyn} that, along a subsequence (no relabeling), we have \begin{equation}\label{eq:yn_convergence} \mathbf{y}_n\to \mathbf{y}^* \text{ in } C([0,T]) \qquad \text{and}\qquad (\mathbf{y}_n)'\rightharpoonup (\mathbf{y}^*)' \text{ in } L^2(0,T) \end{equation} for some $\mathbf{y}^*\in V$, which is an optimal solution to $(\mathbb{P}_N)$ as shown below.\\[1ex] {\bf Step~2:} {\em The limiting function $\mathbf{y}^*$ is a solution to the quasi-variational inequality $\mathrm{QVI}_N(\mathbf{y}^*_0)$}. It follows from \eqref{eq:yn_convergence} that the mapping $M^n$ defined in \eqref{eq:Mn} is such that \begin{equation*} M^n\to M^* \text{ in } C([0,T]) \quad\text{ with } \quad M^*(t):=M_2\big(\mathbf{y}^*(t),\mathbf{y}_0^\ast\big) \end{equation*} and that $M^n(t)\ge\alpha>0$ for all $n$ by definition.
We now show that the convergence \begin{equation}\label{eq:Mosqui} \mathscr{K}^2(\mathbf{y}_n,\mathbf{y}_0^n) \hspace{-.05cm}\yrightarrow{\scriptscriptstyle \mathrm{M}}[-1pt]\hspace{-.05cm} \mathscr{K}^2(\mathbf{y}^*,\mathbf{y}_0^*) \end{equation} in the sense of Mosco in the $V$ topology holds true, where \begin{equation*} \mathscr{K}^2(\mathbf{z},\mathbf{y}_0):=\big\{\mathbf{w}\in V\;\big|\;\mathbf{w}(t)\in \mathcal{K}^2(\mathbf{z}(t),\mathbf{y}_0) \text{ for all } t\in [0,T]\big\}. \end{equation*} We first verify item \eqref{itm:2} in Definition~\ref{definition:MoscoConvergence}: If $\mathbf{w}_n\in \mathscr{K}^2(\mathbf{y}_n,\mathbf{y}_0^n)$ and $\mathbf{w}_n\rightharpoonup \mathbf{w}^\ast$ in $V$ for some $\mathbf{w}^\ast$, then $\mathbf{w}^\ast\in \mathscr{K}^2(\mathbf{y}^*,\mathbf{y}_0^*)$. Indeed, since $V$ is continuously and compactly embedded in $C([0,T])$, we observe that $\mathbf{w}_n\to \mathbf{w}^\ast$ in $C([0,T])$. Employing the estimate \begin{equation*} \sqrt{|(\mathbf{D}_1\mathbf{w}_n(t))_i|^2+|(\mathbf{D}_2\mathbf{w}_n(t))_i|^2}\le\big(M_2(\mathbf{y}_n(t),\mathbf{y}_0^n)\big)_i \end{equation*} for $t\in[0,T]$ and $i=1,\ldots,N$ and passing to the limit as $n\to\infty$ tells us that \begin{equation*} \sqrt{|(\mathbf{D}_1\mathbf{w}^\ast(t))_i|^2+|(\mathbf{D}_2\mathbf{w}^\ast(t))_i|^2}\le \big(M_2(\mathbf{y}^*(t),\mathbf{y}_0^*)\big)_i, \end{equation*} i.e., $\mathbf{w}^\ast(t)\in \mathcal{K}^2(\mathbf{y}^*(t),\mathbf{y}^*_0)$ for all $t\in [0,T]$, which thus verifies the statement. Now we turn our attention to item \eqref{itm:1} in Definition~\ref{definition:MoscoConvergence}.
Note that $M^n\ge\alpha>0$ and $M^n\to M^*$ in $C([0,T])$, and so the positive numbers \begin{equation*} \beta_n:=\left(1+\frac{\|M^n-M^*\|_{C([0,T])}}{\alpha}\right)^{-1} \end{equation*} are such that $\beta_n\uparrow 1$, and that for $\mathbf{w}^*\in \mathscr{K}^2(\mathbf{y}^*,\mathbf{y}^*_0)$ we have $\beta_n\mathbf{w}^*\in \mathscr{K}^2(\mathbf{y}_n,\mathbf{y}^n_0)$ and $\beta_n\mathbf{w}^*\to \mathbf{w}^*$ in $V$ as $n\to\infty$. This therefore verifies \eqref{eq:Mosqui}. Hence the set convergence in \eqref{eq:Mosqui} implies that the function $\mathbf{y}^*\in \mathscr{K}^2(\mathbf{y}^*,\mathbf{y}^*_0)$ satisfies the inequality \begin{equation*} \begin{split} \int_0^T\big((\mathbf{y}^*)'(t)-\mathbf{f}(t),\mathbf{v}(t)-\mathbf{y}^*(t)\big)\mathrm{d} t&\geq 0 \quad \text{ for all }\mathbf{v}\in \mathscr{K}^2(\mathbf{y}^*,\mathbf{y}^\ast_0). \end{split} \end{equation*} Employing standard density arguments shows that $\mathbf{y}^*$ is actually a solution to the quasi-variational inequality $\mathrm{QVI}(\mathbf{y}^*_0)$, which justifies the statement of Step~2. Finally, the lower semicontinuity of the objective functional ensures that \begin{equation*} J(\mathbf{y}^*,\mathbf{y}^*_0)\le\liminf_{n\to\infty} J(\mathbf{y}_n,\mathbf{y}^n_0)=\lim_{n\to\infty} J(\mathbf{y}_n,\mathbf{y}^n_0)=\inf J, \end{equation*} which thus completes the proof of the theorem. $ \triangle$\vspace*{0.05in} {In the above result, we have shown the existence of solutions to $(\mathbb{P}_N)$. Before introducing the fully discrete problem and providing a rigorous derivation of the first-order optimality conditions, we consider a formal derivation of the first-order stationarity conditions for a regularized version of $(\mathbb{P}_N)$.
The aim of the upcoming section is to give a flavor of the first-order conditions and to provide a potential alternative for solving $(\mathbb{P}_N)$ numerically.} \section{Regularized Problem and Stationarity Conditions}\label{s:formal_deriv} The following {\em regularized problem} is obtained from problem $(\mathbb{P}_N)$ by a natural regularization of its {\em quasi-variational constraint} {(see \eqref{ODE1})}:\vspace*{0.05in} \textbf{Problem} $(\widetilde{\mathbb{P}}_N)$. Given numbers $\sigma,\gamma>0$, a mapping $\mathbf{f}:(0,T)\to \mathbb{R}^N$ with nonnegative components, and vectors $\mathbf{a},\mathbf{y}_0^{\mathrm{ref}}\in\mathbb{R}^N$, consider the regularized problem \begin{align*} &\mathrm{minimize} \qquad J(\mathbf{y},\mathbf{y}_0):=\int_0^T\big\langle \mathbf{a},(\mathbf{y}(t)-\mathbf{y}_0)\big\rangle\mathrm{d}t +\frac{\sigma}{2} | \mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}} |_2^2 \quad \text{over} \quad \mathbf{y}_0\in\mathbb{R}^N \end{align*} subject to $\mathbf{y}\in V$ solving the {\em primal state equation} \begin{equation} \begin{split}\label{ODE11} \mathbf{y}'(t)&=\mathbf{f}(t)-\gamma G(t,\mathbf{y}(t), \mathbf{y}_0),\\ \mathbf{y}(0)&=\mathbf{y}_0 \end{split} \end{equation} with $G(t,\mathbf{y}(t),\mathbf{y}_0):={\mathbf{D}^\mathrm{T}}\max_\epsilon\left(0,|\mathbf{D}\mathbf{y}(t)|^2_2-\tilde{M}_p(\mathbf{y}(t),\mathbf{y}_0)^2\right)\mathbf{D}\mathbf{y}(t)$, where $\max_\epsilon$ is a smooth approximation of the $\max$ operator, and \begin{equation*} \mathbf{y}_0 \in \mathcal{A}:=\big\{\mathbf{z}\in \mathbb{R}^N\;\big|\;\mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_0\le \mathbf{z}\le\mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_1\big\}, \end{equation*} where $\bm{\lambda}_0,\bm{\lambda}_1\in \mathbb{R}^N$ are such that $0\le\bm{\lambda}_0\le\bm{\lambda}_1$.\vspace*{0.05in} Let us provide a formal derivation of {\em stationarity conditions} for the above regularized problem by using the {\em Lagrangian formalism}.
To proceed, we introduce the {\em Lagrangian functional} \begin{align*} &\mathcal{L}(\mathbf{y},\mathbf{y}_0,\mathbf{p}) =J(\mathbf{y},\mathbf{y}_0)-\left(\int_0^T\Big\langle\mathbf{p}(t),\Big(\mathbf{y}'(t)+\gamma G(t,\mathbf{y}(t),\mathbf{y}_0)-\mathbf{f}(t)\Big)\Big\rangle\mathrm{d} t\right) \end{align*} and observe that a variation of $\mathcal{L}$ with respect to $\mathbf{p}$ at a {\em stationary point} $(\mathbf{y},\mathbf{y}_0,\mathbf{p})$ recovers the state equation \eqref{ODE11}. Applying further integration by parts to the term $\int_0^T\big\langle \mathbf{p}(t),\mathbf{y}'(t)\big\rangle\mathrm{d} t$, we arrive at \begin{align*} \begin{array}{ll} &\mathcal{L}(\mathbf{y},\mathbf{y}_0,\mathbf{p})=J(\mathbf{y},\mathbf{y}_0)\\ &-\displaystyle\left(\int_0^T \Big(-\big\langle\mathbf{y}(t),\mathbf{p}'(t)\big\rangle +\gamma\big\langle\mathbf{p}(t),G(t,\mathbf{y}(t),\mathbf{y}_0)\big\rangle-\big\langle\mathbf{p}(t),\mathbf{f}(t)\big\rangle\Big)\mathrm{d} t +\big\langle\mathbf{p}(T),\mathbf{y}(T)\big\rangle-\big\langle\mathbf{p}(0), \mathbf{y}(0)\big\rangle\right). \end{array} \end{align*} To derive the adjoint system, compute a variation of $\mathcal{L}$ with respect to $\mathbf{y}$ at the stationary point $(\mathbf{y},\mathbf{y}_0,\mathbf{p})$ in the direction $\mathbf{h}$ and get in this way the relationships: \begin{equation*} \begin{array}{ll} 0=\mathcal{L}_{\mathbf{y}}(\mathbf{y},\mathbf{y}_0,\mathbf{p})(\mathbf{h})= \displaystyle\int_0^T\big\langle\mathbf{h}(t),\mathbf{a}\big\rangle\mathrm{d} t-\displaystyle\Bigg(&\displaystyle\int_0^T\Big(-\big\langle\mathbf{h}(t),\mathbf{p}'(t)\big\rangle+\gamma\big\langle\mathbf{p}(t),\displaystyle G_{\mathbf{y}}(t,\mathbf{y}(t),\mathbf{y}_0)\mathbf{h}(t)\big\rangle\Big)\mathrm{d} t\\ &+\displaystyle\big\langle\mathbf{p}(T),\mathbf{h}(T)\big\rangle\Bigg), \end{array} \end{equation*} where we use that $\mathbf{y}_0$ is fixed and thus its variation is equal to zero.
Choosing first $\mathbf{h}$ to be compactly supported and then considering the general case brings us to the following {\em adjoint equation} and its {\em boundary condition}: Find $\mathbf{p}$ solving the {adjoint} system \begin{equation}\label{eq:adjcont} \begin{cases}-\mathbf{p}'(t) +\gamma\, G_{\mathbf{y}}(t,\mathbf{y}(t),\mathbf{y}_0)^*\mathbf{p}(t)=\mathbf{a},\quad t\in(0,T),\\ \mathbf{p}(T)=0. \end{cases} \end{equation} Finally, the minimization of $\mathcal{L}$ with respect to $\mathbf{y}_0$ subject to $\mathbf{y}_0\in\mathcal{A}$ leads us to the variational inequality for the control variable $\mathbf{y}_0$ formulated as follows: \begin{equation}\label{eq:VIcont} \left\langle\sigma(\mathbf{y}_0-\mathbf{y}_0^{\rm ref}), \widehat{\mathbf{y}}-\mathbf{y}_0\right\rangle -\gamma\int_0^T\big\langle\mathbf{p}(t), G_{\mathbf{y}_0}(t,\mathbf{y}(t),\mathbf{y}_0) (\widehat{\mathbf{y}}-\mathbf{y}_0)\big\rangle\,\mathrm{d} t\ge 0 \quad \mbox{for all }\;\widehat{\mathbf{y}}\in\mathcal{A}. \end{equation} To summarize, the stationarity system corresponding to the above regularized problem is given by the relationships \eqref{ODE11}, \eqref{eq:adjcont}, and \eqref{eq:VIcont}.\vspace*{0.05in} \section{Quasi-Variational Sweeping Process and Discrete Approximations}\label{sec:DiscApprox}\vspace*{-0.05in} First we recall the construction of the {\em normal cone} to a convex set $\Theta$ at a point $\bar{x}$ defined by \begin{equation}\label{nor-cone} N_\Theta(\bar x):=\left\{\begin{array}{ll} \big\{x^*\,:\,\langle x^*,x-\bar{x}\rangle\le 0\;\mbox{ for all }\;x\in\Theta\big\}&\mbox{if }\;\bar{x} \in\Theta,\\ \emptyset&\mbox{otherwise}. \end{array}\right.
\end{equation} Therefore, the convexity of the sets ${\cal K}^p(y,y_0)$ from \eqref{qvi-state} allows us to rewrite the semi-discrete quasi-variational inequality problem from Section~3 in the form of a {\em quasi-variational sweeping process} \begin{equation}\label{eq:DiscQVI}\tag{$\mathrm{QVI}_N(\mathbf{y}_0)$}-\mathbf{y}'(t)\in F\big(\mathbf{y}(t),\mathbf{y}_0\big):= N_{\mathcal{K}^p(\mathbf{y}(t),\mathbf{y}_0)}\big(\mathbf{y}(t)\big)-\mathbf{f}(t). \end{equation} Note that the classical (uncontrolled) sweeping process was introduced by Moreau in the 1970s motivated by applications to elastoplasticity; see \cite{mor_frict} with the references to his original publications. A characteristic feature of Moreau's sweeping process and its modifications is that the moving set under the normal cone operator depends on time in a certain continuous way. We refer the reader to the excellent recent survey in \cite{bt} with the comprehensive bibliography therein concerning various theoretical aspects and many applications of Moreau's sweeping process and its further extensions. Since the Cauchy problem for the aforementioned sweeping processes admits a {\em unique solution} due to the maximal monotonicity of the normal cone operator \cite{bt}, the consideration of any optimization problem for such processes is out of the question. This is in stark contrast to optimal control theory for {\em Lipschitzian} differential inclusions of the type $\dot x\in F(x)$ and the classical theory for systems governed by differential equations $\dot x=f(x,u),\;u\in U$, and their PDE counterparts.
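For intuition on the sweeping dynamics, Moreau's catching-up scheme advances the state by one unconstrained Euler step followed by a projection onto the current moving set. The sketch below is a minimal scalar illustration with a hypothetical interval-valued moving set $C(t)=[-r(t),r(t)]$ and perturbation $f$ (both chosen only for this example, not taken from the text):

```python
# Catching-up scheme for the perturbed sweeping process -y'(t) ∈ N_{C(t)}(y(t)) - f(t):
# one free Euler step, then a projection onto the moving set per time step.
def catching_up(r, f, y0, T=1.0, steps=1000):
    dt = T / steps
    y = y0
    traj = [y]
    for j in range(1, steps + 1):
        t = j * dt
        y_trial = y + dt * f(t)             # free (unconstrained) Euler step
        y = max(-r(t), min(r(t), y_trial))  # project onto C(t) = [-r(t), r(t)]
        traj.append(y)
    return traj

traj = catching_up(r=lambda t: 1.0 + 0.5 * t, f=lambda t: 3.0, y0=0.0)
print(traj[-1])
```

Since the drift here is stronger than the expansion of the set, the discrete trajectory is "swept" along the upper boundary $r(t)$, which is exactly the behavior the normal cone term enforces.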
Starting with \cite{chhm1}, various optimal control models for sweeping dynamics have been formulated rather recently, including the derivation of optimality conditions. They include: problems with moving sets depending on time and control variables \cite{chhm1,chhm2}, problems with controls in associated ODEs \cite{bk}, problems with controls in additive perturbations of the dynamics \cite{ac,pfs,zeidan}, and problems with controls in both moving sets and dynamics \cite{ccmn,cm3}. The cited papers impose different assumptions on the problem data, develop diverse approximation techniques, derive various sets of necessary optimality conditions, and contain references to other publications in these directions. But the common point of all these models for controlled sweeping processes is the {\em highly non-Lipschitzian} (in fact, discontinuous) nature of the sweeping dynamics, which restricts the usage of the variational machinery employed in the study of Lipschitzian differential inclusions. Observe also that the very definition of the normal cone \eqref{nor-cone} and its nonconvex extensions yields the unavoidable presence of {\em pointwise state} and {\em mixed state-control} constraints of {\em irregular} types, which are among the most challenging issues even in classical theory. Having said that, we emphasize that---to the best of our knowledge---no optimal control problems have been considered for sweeping processes with moving sets depending not only on time and control variables but on {\em state variables} as well, which is the essence of {\em quasi-variational} vs.\ variational inequalities. This is the case for the problems \eqref{eq:DiscQVI} and \eqref{eq:QVIdd} studied in what follows.
Our approach is based on the {\em method of discrete approximations} and tools of generalized differentiation developed in \cite{m95} to derive necessary optimality conditions in optimal control problems for Lipschitzian differential inclusions with finite-dimensional state spaces and then extended in \cite[Chapter~6]{m-book} to infinite-dimensional systems. Since the Lipschitz continuity is crucial in the developments of \cite{m95,m-book} and related publications, the extension of this method to the non-Lipschitzian sweeping dynamics requires significant improvements; this has been accomplished in \cite{ccmn,cm3,chhm1,chhm2} and other papers for different types of controlled sweeping processes associated with variational inequalities. Here we develop some aspects of this method for optimal control of the quasi-variational sweeping process under consideration.\vspace*{0.03in} According to the general scheme of the discrete approximation method, we introduce now the {\em fully discretized} (in time and space) form of the quasi-variational inequality \eqref{QVI0} by using for simplicity the {\em uniform Euler scheme} in the replacement of the time derivative $\dot x$ by finite differences. For this matter, take any natural number $M\in\mathbb{N}$ and consider the {\em discrete grid/mesh} on $(0,T)$ defined by \begin{equation*} T_M:=\big\{0,\tau_M,\ldots,T-\tau_M,T\big\},\quad \tau_M:=\dfrac{T}{M}, \end{equation*} with the {\em stepsize of discretization} $\tau_M$ and the {\em mesh points} $t^M_j:=j\tau_M$ for $j=0,\ldots,M$.
Then the quasi-variational inequality in \eqref{QVI0} is replaced by \begin{equation}\label{eq:QVIdd}\tag{$\mathrm{QVI}_N^M(\mathbf{y}_0)$} \mathbf{y}_{j}^M\in \mathcal{K}^p(\mathbf{y}_{j}^M,\mathbf{y}_0) \quad\Bigg|\quad\left(\frac{\mathbf{y}_{j}^M-\mathbf{y}_{j-1}^M}{\tau_M}-\mathbf{f}_j^M,\mathbf{v}-\mathbf{y}_j^M\right)_{\mathbb{R}^N}\geq 0\;\mbox{ for all }\;\mathbf{v}\in \mathcal{K}^p(\mathbf{y}_{j}^M,\mathbf{y}_0) \end{equation} with the discrete time $j=1,\ldots,M$ and the discretized rate \begin{equation}\label{f-discr} \mathbf{f}_j^M=\int_{(j-1)\tau_M}^{j\tau_M}\mathbf{f}(t)\mathrm{d}t, \qquad j=1,\ldots,M. \end{equation} Equivalently, \eqref{eq:QVIdd} can be written as the {\em discretized quasi-variational sweeping process} \begin{equation}\label{e:dis-incl2} \mathbf{y}_j^M\in\mathbf{y}_{j-1}^M+\tau_M F_j^M(\mathbf{y}_j^M,\mathbf{y}_0),\quad j=1,\ldots,M, \end{equation} where the feasible discrete velocity mappings $F_j^M$ are defined by \begin{equation}\label{mapF1} F_j^M(\mathbf{y},\mathbf{y}_0):=-N_{\mathcal{K}^{p}(\mathbf{y},\mathbf{y}_0)}(\mathbf{y})+\mathbf{f}_j^M,\quad j=1,\ldots,M, \end{equation} via the normal cone operator of the state and control dependent set $\mathcal{K}^{p}(\mathbf{y},\mathbf{y}_0)$.\vspace*{0.05in} Given $\{\mathbf{y}_j^M\}$ satisfying \eqref{eq:QVIdd}, its {\em piecewise linear extension} $\mathbf{y}^M(t)$ to the continuous-time interval $(0,T)$, i.e., the {\em Euler broken line}, is defined by \begin{equation*} \mathbf{y}^M(t):=\mathbf{y}_{j-1}^M+\frac{t-(j-1)\tau_M}{\tau_M}\big(\mathbf{y}_j^M-\mathbf{y}_{j-1}^M\big) \qquad \text{for}\quad t\in I_j:=\big[(j-1)\tau_M,j\tau_M\big),\qquad j=1,\ldots,M. \end{equation*} Similarly to Theorem~\ref{thm:existenceQVI}, we can verify that, for each fixed $\mathbf{y}_0\in\mathbb{R}^N$, the discretized quasi-variational inequality \eqref{eq:QVIdd} admits a solution $\mathbf{y}=\{\mathbf{y}_j^M\}_{j=1}^M$.
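A solution of \eqref{eq:QVIdd} can be computed step by step by combining the projection form of the discrete inclusion \eqref{e:dis-incl2} with a fixed-point iteration that resolves the state dependence of the constraint set. The following sketch uses a hypothetical scalar set $\mathcal{K}(y,y_0)=[-m(y,y_0),m(y,y_0)]$ with gauge $m(y,y_0)=y_0+\tfrac{1}{2}|y|$, an assumption made only so that the inner iteration is contractive:

```python
# Implicit Euler step for the discretized QVI: y_j solves
#   y_j = proj_{K(y_j, y0)}(y_{j-1} + tau * f_j),
# computed by fixed-point iteration on the state-dependent constraint set.
def qvi_step(y_prev, tau, f_j, y0, iters=50):
    gauge = lambda y: y0 + 0.5 * abs(y)          # m(y, y0): radius of K(y, y0)
    y = y_prev
    for _ in range(iters):
        m = gauge(y)
        y = max(-m, min(m, y_prev + tau * f_j))  # projection onto [-m, m]
    return y

def solve_qvi(y0, f, T=1.0, M=100):
    # y0 plays a double role, as in the paper: initial state and control parameter
    tau = T / M
    ys = [y0]
    for j in range(1, M + 1):
        ys.append(qvi_step(ys[-1], tau, f(j * tau), y0, iters=50))
    return ys

ys = solve_qvi(y0=0.5, f=lambda t: 4.0)
y_T = ys[-1]
print(y_T)  # terminal state lies inside its own constraint set K(y_T, y0)
```

For this toy gauge the trajectory settles on the self-consistent boundary value solving $y=y_0+\tfrac12 y$, illustrating how the set "moves with" the state.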
The {\em discrete version} of the optimal control problem $(\mathbb{P}_N)$ is formulated as follows: \vspace*{0.1in} \textbf{Problem} $(\mathbb{P}_N^M)$. Given $\sigma>0$, a nonnegative mapping $\mathbf{f}:(0,T)\to\mathbb{R}^N$, and vectors $\mathbf{a},\;\mathbf{y}_0^{\mathrm{ref}}\in\mathbb{R}^N$, consider the discrete-time optimal control problem: \begin{align*} &\mathrm{minimize}\qquad J^M(\mathbf{y},\mathbf{y}_0):=\sum_{j=1}^M\tau_M\big\langle\mathbf{a}, \mathbf{y}_j^M-\mathbf{y}_0\big\rangle+\frac{\sigma}{2}\big\langle\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}},\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}}\big\rangle\\ &\text{over}\qquad\qquad \mathbf{y}_0,\mathbf{y}_1^M,\ldots,\mathbf{y}_M^M\in \mathbb{R}^N;\\ &\mathrm{subject \:\:to\:\:}\quad \mathbf{y}=\{\mathbf{y}_j^M\}_{j=1}^M\;\text{ solves }\; \mathrm{QVI}_N^M(\mathbf{y}_0),\\ &\hphantom{\mathrm{subject \:\: to \:\:}\quad} \mathbf{y}_0\in\mathcal{A}. \end{align*} In this problem, the {\em dynamics constraints} can be written in the quasi-variational sweeping form \begin{equation}\label{e:diff-incl} \dot{\mathbf{y}}(t_j^M)\in F_j^M(\mathbf{y}(t_j^M),\mathbf{y}_0)\;\mbox{ for all }\;t^M_j\in(0,T) \end{equation} with $F_j^M(\mathbf{y},\mathbf{y}_0)$ from \eqref{mapF1}, the {\em control constraint} $\mathbf{y}_0\in\mathcal{A}$ is expressed in terms of the set \begin{equation}\label{A} \mathcal{A}:=\big\{\mathbf{z}\in\mathbb{R}^N\;\big|\;\mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_0\le \mathbf{z}\le\mathbf{y}_0^{\mathrm{ref}}+\bm{\lambda}_1 \big\}, \end{equation} where $\bm{\lambda}_0,\bm{\lambda}_1\in\mathbb{R}^N$ with $0\le\bm{\lambda}_0\le\bm{\lambda}_1$, and the {\em hidden state constraints} are given by \begin{equation}\label{e:ic} -\big(M_\infty(\mathbf{y}(t_j^M),\mathbf{y}_0)\big)_i\le \big(\mathbf{D}_k\mathbf{y}(t_j^M)\big)_i\le \big(M_\infty(\mathbf{y}(t_j^M),\mathbf{y}_0)\big)_i \end{equation} with $i=1,\ldots,N$, $k=1,2$, and $j=1,\ldots,M$, where the mapping $M_p$ is defined in
\eqref{M_p}.\vspace*{0.08in} Similarly to the proof of Theorem~\ref{thm:ExistPN}, we arrive at the following theorem on the existence of optimal solutions. \vspace*{-0.1in} \begin{thm}{\bf(existence of optimal solutions to discretized sweeping QVIs).}\label{thm:existencePnh} For all natural numbers $N$ and $M$, the discretized sweeping control problem $(\mathbb{P}_N^M)$ admits an optimal solution. \end{thm} It has been well understood in the developments of the discrete approximation method for Lipschitzian differential inclusions \cite{m95,m-book} and for sweeping control problems associated with variational inequalities \cite{ccmn,cm3,chhm1,chhm2} that optimal solutions to the discrete-time problems of the above type {\em strongly converge} in the suitable space topologies to the prescribed local minimizer of the original continuous-time problem. A similar result holds for the controlled quasi-variational sweeping process $(\mathbb{P}_N)$ and its discrete approximations $(\mathbb{P}_N^M)$ under appropriate assumptions, while we postpone the precise clarification of this issue to our future research.\vspace*{0.05in} Our further goal in this paper is to derive {\em necessary optimality conditions} for local minimizers of the discrete-time quasi-variational sweeping control problem $(\mathbb{P}^M_N)$ for each $N,M\in\mathbb{N}$. According to the previous discussions, such necessary optimality conditions for $(\mathbb{P}^M_N)$ can be viewed as {\em suboptimality} (almost optimality) conditions for $(\mathbb{P}_N)$ and the original quasi-variational control problem $(\mathbb{P})$. Looking at the structure of each problem $({\mathbb P}^M_N)$ tells us that it can be reduced to a problem of {\em finite-dimensional optimization}, albeit with a special type of (increasingly many) {\em geometric constraints} given in the unavoidably {\em nonconvex graphical} form induced by the very nature of the quasi-variational sweeping process.
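Stripped of the graphical constraints, this finite-dimensional reduction can be sketched as a direct search over the control $\mathbf{y}_0\in\mathcal{A}$, where each cost evaluation requires one solve of the discrete dynamics. In the sketch below, the scalar recursion inside \texttt{trajectory} is only a hypothetical stand-in for the discrete QVI solve, and all numerical data are illustrative assumptions:

```python
# (P_N^M) viewed as finite-dimensional optimization over y0 in A:
# each candidate y0 determines the discrete trajectory, hence the cost J^M.
def trajectory(y0, f, tau, M):
    # hypothetical scalar stand-in for the discrete QVI solve:
    # y_j = clip(y_{j-1} + tau * f_j, [-m, m]) with gauge m = 1 + y0
    ys, y = [], y0
    for j in range(1, M + 1):
        m = 1.0 + y0
        y = max(-m, min(m, y + tau * f(j * tau)))
        ys.append(y)
    return ys

def J_M(y0, a=1.0, sigma=0.1, y0_ref=0.2, f=lambda t: 2.0, T=1.0, M=50):
    tau = T / M
    ys = trajectory(y0, f, tau, M)
    return sum(tau * a * (yj - y0) for yj in ys) + 0.5 * sigma * (y0 - y0_ref) ** 2

# brute-force search over the box A = [y0_ref + l0, y0_ref + l1] = [0.2, 0.7]
candidates = [0.2 + i * 0.01 for i in range(51)]
best = min(candidates, key=J_M)
print(best, J_M(best))
```

A brute-force search is of course no substitute for the optimality conditions derived below by variational analysis; it merely exhibits the bilevel structure of the reduced problem.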
Handling such constraints requires the use of adequate tools of nonconvex variational analysis and generalized differentiation, which we briefly review in the next section. \section{Generalized Differentiation for QVI Sweeping Dynamics}\label{sec:coderivative} First we present here the generalized differential notions for sets, set-valued mappings, and extended-real-valued functions that are used in what follows. More details and references can be found in the books \cite{m-book,m18,rw}. Following the geometric approach of \cite{m-book,m18}, we start with generalized normals to sets. Given a set $\Theta\subset\mathbb{R}^s$ locally closed around $\bar{z}\in\Theta$, the (Mordukhovich, limiting) {\em normal cone} to $\Theta$ at $\bar{z}$ is defined by \begin{equation}\label{lim-nor} N_\Theta(\bar{z}):=\big\{v\in\mathbb{R}^s\;\big|\;\exists\,z_k\to\bar{z},\;w_k\in\Pi_\Theta(z_k),\;\alpha_k\ge 0\,\;\mbox{ with }\;\alpha_k(z_k-w_k)\to v\big\}, \end{equation} where $\Pi_\Theta(z)$ stands for the (nonempty) Euclidean projector of $z\in\mathbb{R}^s$ onto $\Theta$. If $\Theta$ is convex, the normal cone \eqref{lim-nor} agrees with the normal cone of convex analysis \eqref{nor-cone}, but otherwise \eqref{lim-nor} is {\em nonconvex} in very common situations, e.g., for the graph of $\varphi(x):=|x|$ and the epigraph of $\varphi(x):=-|x|$ at $(0,0)\in\mathbb{R}^2$. Nevertheless, the normal cone \eqref{lim-nor} and the associated generalized differential constructions for mappings and functions defined below enjoy comprehensive {\em calculus rules}, the proofs of which are based on the {\em variational/extremal principles} of variational analysis. Let $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ be a set-valued mapping/multifunction with graph $$ \mbox{\rm gph}\, F:=\big\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^m\;\big|\;y\in F(x)\big\} $$ locally closed around $(\bar{x},\bar{y})\in\mbox{\rm gph}\, F$.
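The nonconvexity of \eqref{lim-nor} can be observed concretely by approximating normals through the projector in its definition. The following sketch (our own toy computation for $\Theta=\operatorname{epi}(-|x|)$, not taken from the text) projects points lying below the set near the origin and recovers two distinct unit normal directions; the limiting normal cone at $(0,0)$ contains both of the corresponding rays, whose union is nonconvex:

```python
import math

# Theta = epi(-|x|) = {(x, r) : r >= -|x|}; its boundary consists of the
# two rays {t*(1,-1) : t >= 0} and {t*(-1,-1) : t >= 0} meeting at the origin.
def project(p):
    """Euclidean projection of p onto Theta (the region above the tent -|x|)."""
    x, r = p
    if r >= -abs(x):
        return p                           # already in the set
    cands = [(0.0, 0.0)]
    t = (x - r) / 2.0                      # projection parameter for ray t*(1,-1)
    if t >= 0:
        cands.append((t, -t))
    t = (-x - r) / 2.0                     # projection parameter for ray t*(-1,-1)
    if t >= 0:
        cands.append((-t, -t))
    return min(cands, key=lambda w: (w[0] - x) ** 2 + (w[1] - r) ** 2)

def normal_dir(p):
    """Unit vector along alpha*(p - Pi_Theta(p)), as in the definition of N_Theta."""
    w = project(p)
    n = (p[0] - w[0], p[1] - w[1])
    s = math.hypot(*n)
    return (n[0] / s, n[1] / s)

print(normal_dir((1.0, -2.0)), normal_dir((-1.0, -2.0)))
# two distinct unit normals arise from points near the origin; convex
# combinations of the corresponding rays are not normals to Theta at (0,0)
```

Here the directions $(-1,-1)/\sqrt2$ and $(1,-1)/\sqrt2$ are both limiting normals at the origin, while their midpoint direction $(0,-1)$ is not, which is the nonconvexity asserted in the text.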
The {\em coderivative} of $F$ at $(\bar{x},\bar{y})$ is defined via the normal cone \eqref{lim-nor} to the graph of $F$ at this point by \begin{equation}\label{e:cor} D^*F(\bar{x},\bar{y})(u):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-u)\in N_{{\rm\small gph}\,F}(\bar{x},\bar{y})\big\}\;\mbox{ for all }\;u\in\mathbb{R}^m. \end{equation} This extends to nonsmooth and set-valued mappings the notion of the {\em adjoint operator} (matrix transposition) applied to the Jacobians $\nabla F(\bar{x})$ of single-valued smooth mappings, in which case we have $$ D^*F(\bar{x})(u)=\big\{\nabla F(\bar{x})^*u\big\},\quad u\in\mathbb{R}^m, $$ where the indication of $\bar{y}=F(\bar{x})$ is dropped in the coderivative notation. Let $\varphi\colon\mathbb{R}^n\to\Bar{\R}:=(-\infty,\infty]$ be an extended-real-valued function that is lower semicontinuous (l.s.c.) around $\bar{x}$ with $\varphi(\bar{x})<\infty$, i.e., with $\bar{x}\in\mbox{\rm dom}\,\varphi$. Proceeding geometrically, the (first-order) {\em subdifferential} of the function $\varphi$ at the point $\bar{x}$ is defined as \begin{equation}\label{e:sub} \partial\varphi(\bar{x}):=\big\{v\in\mathbb{R}^n\;\big|\;(v,-1)\in N_{{\rm\small epi}\,\varphi}\big(\bar{x},\varphi(\bar{x})\big)\big\} \end{equation} via the normal cone to the epigraph $\mbox{\rm epi}\,\varphi$ of $\varphi$ at $(\bar{x},\varphi(\bar{x}))$, while observing that the subgradient mapping $\partial\varphi$ admits various equivalent analytic descriptions that can be found in the aforementioned books. Following the ``dual derivative-of-derivative'' scheme of \cite{m92}, we finally introduce the major second-order generalized differential construction used in the paper.
Given $(\bar{x},\bar{v})\in\mbox{\rm gph}\,\partial\varphi$ for an l.s.c.\ function $\varphi\colon\mathbb{R}^n\to\Bar{\R}$, the {\em second-order subdifferential}, or the {\em generalized Hessian}, of $\varphi$ at $\bar{x}$ relative to $\bar{v}$ is \begin{eqnarray}\label{2nd} \partial^2\varphi(\bar{x},\bar{v})(u):=\big(D^*\partial\varphi\big)(\bar{x},\bar{v})(u),\quad u\in\mathbb{R}^n. \end{eqnarray} When $\varphi$ is ${\cal C}^2$-smooth around $\bar{x}$, we have the representation \begin{equation*} \partial^2\varphi(\bar{x})(u)=\big\{\nabla^2\varphi(\bar{x})u\big\}\;\mbox{ for all }\;u\in\mathbb{R}^n \end{equation*} via the (symmetric) Hessian matrix of $\varphi$ at $\bar{x}$. The well-developed {\em second-order calculus} is available for \eqref{2nd} in general settings, and explicit evaluations of this construction are given for major classes of functions important in applications to nonsmooth optimization, optimal control, and related topics; see, e.g., \cite{m-book,m18,mr} and the references therein. Note that coderivatives and second-order subdifferentials have already been used in \cite{mo} in the study of nondynamic finite-dimensional quasi-variational inequalities in the framework of generalized equations, which is totally different from our current consideration.\vspace*{0.03in} To efficiently proceed in the setting of this paper, we modify $({\mathbb P}^M_N)$ a bit by replacing the constraint mapping $M_\infty$ in \eqref{e:ic} with its {\em smooth version} $\Tilde M_\infty$ for $p=\infty$. The corresponding set \eqref{setK}, with the replacement of $M_\infty$ by $\Tilde M_\infty$, is labeled as $\Tilde{\cal K}^\infty$.
Define further \begin{equation}\label{Theta} \Theta:=\big\{(\mathbf{y},\mathbf{y}_0)\in\mathbb{R}^N\times\mathbb{R}^N\;\big|\;g^l_{k}(\mathbf{y},\mathbf{y}_0)\ge 0\;\mbox{ for }\;l=1,\ldots,2N,\;k=1,2\big\} \end{equation} via the twice continuously differentiable mapping $g\colon\mathbb{R}^{2N}\to\mathbb{R}^{4N}$ with the components \begin{equation}\label{g} g^i_{k}(\mathbf{y},\mathbf{y}_0)=(\mathbf{D}_k\mathbf{y})_i+\big(\tilde M_{\infty}(\mathbf{y},\mathbf{y}_0)\big)_i,\quad g^{N+i}_k(\mathbf{y},\mathbf{y}_0)=\big(\tilde M_{\infty}(\mathbf{y},\mathbf{y}_0)\big)_i-(\mathbf{D}_k\mathbf{y})_i, \end{equation} where $(\cdot)_i$ stands for the $i$th coordinate of the underlying vector. For our application to deriving necessary optimality conditions for problem $({\mathbb P}^M_N)$ with the {\em smoothed constraints} as above (no relabeling), we are going to compute the second-order subdifferential \eqref{2nd} of the indicator function $\varphi:=\delta_\Theta(z)$ of the set $\Theta$ from \eqref{Theta}, i.e., such that $\delta_\Theta(z):=0$ if $z\in\Theta$ and $\delta_\Theta(z):=\infty$ otherwise. In this case, we have $\partial\varphi=N_\Theta$ and $\partial^2\varphi=D^*N_\Theta$. Recall that the {\em domain} (dom) of a set-valued mapping contains those points where the mapping has nonempty values.\vspace*{-0.05in} \begin{thm}{\bf(second-order computation for the discretized QVI sweeping process).}\label{Th:co-cal} Consider problem $({\mathbb P}^M_N)$ with the smoothed constraints for any fixed $N,M\in\mathbb{N}$, and let $F:=F_j^M$ be taken from \eqref{mapF1} with $p=\infty$ and with ${\cal K}^\infty$ replaced by $\tilde{\cal K}^\infty$, where $\mathbf{f}_j^M$ is generated by $\mathbf{f}$ in \eqref{f-discr}. Given $(\mathbf{y},\mathbf{y}_0)\in\Theta$, assume that the gradient vectors $\{\nabla g^1_1(\mathbf{y},\mathbf{y}_0),\ldots,\nabla g^{2N}_2(\mathbf{y},\mathbf{y}_0)\}$ for the functions from \eqref{g} are linearly independent.
Then there exists a collection of nonnegative multipliers $\lambda=(\lambda_1,\ldots,\lambda_{2N})$ uniquely determined by the equation $-\nabla g(\mathbf{y},\mathbf{y}_0)^*\lambda=w+\mathbf{f}$ such that \begin{eqnarray*} \begin{array}{ll} D^*F(\mathbf{y},\mathbf{y}_0,w)(y)=\displaystyle\bigcup_{\lambda\ge 0,\,-\nabla g(\mathbf{y},\mathbf{y}_0)^*\lambda=w+\mathbf{f}}\bigg\{\bigg(-\sum_{k=1}^2\sum^{2N}_{l=1}\lambda^l_k\big\langle\nabla^2_{\mathbf{y}}g^l_k(\mathbf{y},\mathbf{y}_0),y\big\rangle-\nabla_{\mathbf{y}}g(\mathbf{y},\mathbf{y}_0)^*\gamma,0\bigg)\bigg\} \end{array} \end{eqnarray*} $\mbox{for all }\;y\in\mbox{\rm dom}\, D^*N_{\Tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}\big(\mathbf{y},w+\mathbf{f}\big)$, where the coderivative domain is given by \begin{equation*} \begin{aligned} \mbox{\rm dom}\, D^*N_{\tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}(\mathbf{y},w+\mathbf{f})=&\big\{y\,\big|\;\exists\,\lambda\ge0\;\mbox{ such that }\;-\nabla g(\mathbf{y},\mathbf{y}_0)^*\lambda=w+\mathbf{f},\\ &\lambda^l_k\langle\nabla g^l_k(\mathbf{y},\mathbf{y}_0),y\rangle=0\;\mbox{ for }\;l=1,\ldots,2N,\;k=1,2\big\} \end{aligned} \end{equation*} with $\gamma^l_k=0$ if either $g^l_k(\mathbf{y},\mathbf{y}_0)>0$ or $\lambda^l_k=0$ and $\langle\nabla g^l_k(\mathbf{y},\mathbf{y}_0),y\rangle>0$, and with $\gamma^l_k\ge 0$ if $g^l_k(\mathbf{y},\mathbf{y}_0)=0,\;\lambda^l_k=0$, and $\langle\nabla g^l_k(\mathbf{y},\mathbf{y}_0),y\rangle<0$. \end{thm} {\bf Proof}. Define the set-valued mapping $G$ and the single-valued smooth mapping $\Tilde\mathbf{f}$ by, respectively, \begin{equation*} G(\mathbf{y},\mathbf{y}_0):=N_{\tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}(\mathbf{y})\;\mbox{ and }\;\Tilde\mathbf{f}(\mathbf{y},\mathbf{y}_0):=\mathbf{f}.
\end{equation*} The coderivative sum rule from \cite[Theorem~1.62]{m-book} tells us that \begin{equation*} z^*\in\nabla\Tilde\mathbf{f}(\mathbf{y},\mathbf{y}_0)^*y+D^*G\big(\mathbf{y},\mathbf{y}_0,w+\mathbf{f}\big)(y) \end{equation*} for any $y\in\mbox{\rm dom}\, D^*N_{\Tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}(\mathbf{y},w+\mathbf{f})$ and $z^*\in D^*F(\mathbf{y},\mathbf{y}_0,w)(y)$. Observe further that \begin{equation*} G(\mathbf{y},\mathbf{y}_0)=N_{\Tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}\circ\Tilde g(\mathbf{y},\mathbf{y}_0)\;\mbox{ with }\;\Tilde g(\mathbf{y},\mathbf{y}_0):=\mathbf{y}, \end{equation*} where the Jacobian of the latter mapping is obviously of full rank. Employing further the coderivative chain rule from \cite[Theorem~1.66]{m-book} to the above composition for $G$ yields \begin{equation}\label{e:cf} z^*\in\nabla\Tilde\mathbf{f}(\mathbf{y},\mathbf{y}_0)^*y+\nabla\Tilde g(\mathbf{y},\mathbf{y}_0)^*D^*N_{\Tilde\mathcal{K}^{\infty}(\mathbf{y},\mathbf{y}_0)}\big(\mathbf{y},w+\mathbf{f}\big)(y). \end{equation} To deduce finally from \eqref{e:cf} the exact formulas claimed in the theorem, we use, for representing $D^*N_{\Tilde\mathcal{K}^{\infty}}$, the second-order calculations for inequality constraint systems taken from \cite[Theorem~3.3]{hos} under the linear independence condition imposed in this theorem.$ \triangle$ \section{Necessary Optimality Conditions for Discrete-Time Problems}\label{sec:NecOptCond} The main result of this section provides constructive necessary optimality conditions for each problem $({\mathbb P}^M_N)$ expressed in terms of its initial data. As discussed above, for $N,M\in\mathbb{N}$ sufficiently large such conditions may be viewed as {\em suboptimality} conditions for problems with the semi-discrete $({\mathbb P}_N)$ and continuous-time $({\mathbb P})$ dynamics.
To accomplish our goal, taking into account the complexity of the smoothed problem $({\mathbb P}^M_N)$, we split the derivation of necessary optimality conditions into two theorems. The first theorem presents necessary optimality conditions for $({\mathbb P}^M_N)$ that involve the limiting normal cone to graphs of discrete velocity mappings, i.e., their {\em coderivatives}. The final result benefits from the coderivative computations for such mappings furnished in Theorem~\ref{Th:co-cal} and thus provides necessary optimality conditions for $({\mathbb P}^M_N)$ explicitly expressed in terms of the problem data.\vspace*{0.05in} Our general scheme of deriving necessary optimality conditions for $({\mathbb P}^M_N)$ is similar to the one in \cite{cm3}, which addressed an optimal control problem for a sweeping process over state-independent and canonically controlled {\em prox-regular} moving sets of the type \begin{equation}\label{prox} C(t)=C+u(t)\;\mbox{ with }\;C:=\big\{x\in\mathbb{R}^n\;\big|\;g_i(x)\ge 0,\;i=1,\ldots,m\big\}, \end{equation} where $x$ and $u$ stand for the state and control variables, respectively. However, the setting of problem $({\mathbb P}^M_N)$ is very different from \cite{cm3}. First of all, we have the {\em state-dependent} moving sets (the essence of QVI) with nonlinear control functions. Indeed, the counterpart of $C(t)$ in \eqref{prox} is the set $\Tilde{\cal K}^\infty(\mathbf{y},\mathbf{y}_0)$ depending on both state $\mathbf{y}$ and control $\mathbf{y}_0$ variables being described in form \eqref{Theta} via the functions $g^i_k(\mathbf{y},\mathbf{y}_0)$ from \eqref{g}. Observe also that, in contrast to \eqref{prox}, the functions from \eqref{g} are ${\cal C}^2$-smooth but may be {\em nonconvex}, which does not allow us to claim the prox-regularity of the moving sets in $({\mathbb P}^M_N)$ as in \cite{cm3}.
Nevertheless, we can proceed with deriving necessary optimality conditions for problem $({\mathbb P}^M_N)$ by reducing it to a problem of {\em mathematical programming} with functional and geometric constraints and then using the machinery of {\em variational analysis and generalized differentiation} discussed above.\vspace*{0.05in} Here is the first theorem involving coderivatives (without their explicit computations) of the mappings in the {\em smoothed} dynamic constraints \eqref{e:diff-incl} with $F^M_j$ defined by \begin{equation}\label{mapF2} F_j^M(\mathbf{y}^M_j,\mathbf{y}_0):=-N_{\tilde\mathcal{K}^{\infty}(\mathbf{y}^M_j,\mathbf{y}_0)}(\mathbf{y}^M_j)+\mathbf{f}_j^M,\quad j=1,\ldots,M, \end{equation} according to our previous discussions, where the state-control dependent moving sets $\tilde\mathcal{K}^{\infty}(\mathbf{y}^M_j,\mathbf{y}_0)$ are generated by the functions $g^i_k$ from \eqref{g}.\vspace*{-0.05in} \begin{thm}{\bf(coderivative-based necessary optimality conditions for discretized QVI problems).}\label{NOC} Let $\(\bar{\mathbf{y}}^M,\bar{\mathbf{y}}_0\)=(\bar{\mathbf{y}}_1^M,\ldots,\bar{\mathbf{y}}_{M}^M,\bar{\mathbf{y}}_0)$ be an optimal solution to problem $(\mathbb{P}^M_N)$ with smoothed constraints, and let $F_j:=F^M_j$ be taken from \eqref{mapF2}. Assume that the gradients $\{\nabla g^1_1(\bar{\mathbf{y}},\bar{\mathbf{y}}_0),\ldots,\nabla g^{2N}_2(\bar{\mathbf{y}},\bar{\mathbf{y}}_0)\}$ are linearly independent. 
Then there exist dual elements $\lambda^M\ge 0$, $\alpha^{kM}=\(\alpha_{1}^{kM},\ldots,\alpha_{2N}^{kM}\)\in\mathbb{R}^{2N}_+$, and $p^M_j\in\mathbb{R}^N$ as $j=1,\ldots,M$ satisfying the conditions \begin{equation}\label{NOC1} \lambda^M+\|\alpha^{kM}\|+\sum_{j=1}^{M}\left \| p_{j}^{M}\right \|+\left \|\psi\right \|\ne 0, \end{equation} \begin{equation}\label{con:al1} \alpha_{l}^{kM}g^l_{k}(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)=0\;\mbox{ for all }\;\;l=1,\ldots,2N\;\mbox{ and }\;k=1,2, \end{equation} \begin{equation}\label{pNN} p^{M}_{M}=\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}^M_M} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0), \end{equation} \begin{equation}\label{inclu} \begin{aligned} \Bigg(\frac{p_{j+1}^{M}-p_{j}^{M}}{\tau_M}-\lambda^M\mathbf{a}^{\intercal} ,-\frac{1}{\tau_M}\lambda^M\(-\tau_m\mathbf{a}^{\intercal} +\sigma\bar{\mathbf{y}}_0\)+\dfrac{1}{\tau_M}\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0}g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0),p_{j+1}^{M}\Bigg)\\ \in\(0,\displaystyle\frac{1}{\tau_M}\psi,0\)+N\(\(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0,-\dfrac{\bar{\mathbf{y}}_{j+1}^M- \bar{\mathbf{y}}_{j}^M}{\tau_M}\);\mbox{\rm gph}\, F_j\) \end{aligned} \end{equation} for all $\,j=1,\ldots,M-1$ and $k=1,2$ together with the inclusion \begin{equation}\label{psi} \psi\in N_{\mathcal{A}}\(\bar{\mathbf{y}}_0\), \end{equation} where the functions $g^i_k$ and the set ${\cal A}$ are taken from \eqref{g} and \eqref{A}, respectively. \end{thm} {\bf Proof}. Fix $\varepsilon>0$ and consider the vector \begin{equation*} z:=\(\mathbf{y}_1^M,\ldots,\mathbf{y}_{M}^M,\mathbf{y}_0,\mathbf{Y}^M_1,\ldots,\mathbf{Y}^M_{M-1}\). 
\end{equation*} Then $(\mathbb{P}^M_N)$ is equivalent to the following problem of mathematical programming $(MP)$ with respect to $z$: \begin{equation*} \begin{aligned} \textrm{minimize }\phi_0(z):=\sum_{j=0}^{M-1}\int_{t_{j}^M}^{t_{j+1}^M}\big\langle\mathbf{a}, \mathbf{y}_j^M-\mathbf{y}_0\big\rangle\mathrm{d}t+\frac{\sigma}{2}\big\langle\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}},\mathbf{y}_0-\mathbf{y}_0^{\mathrm{ref}}\big\rangle \end{aligned} \end{equation*} subject to the functional and geometric constraints \begin{equation*} H_j(z):=\mathbf{y}_{j+1}^M-\mathbf{y}_{j}^M-\tau_M\mathbf{Y}_j^M=0,\;j=0,\ldots,M-1, \end{equation*} \begin{equation*} L^l_k(z):=-g^{l}_k(\mathbf{y}^M_M,\mathbf{y}_0)\le 0,\;\;l=1,\ldots,2N,\;\;k=1,2, \end{equation*} \begin{equation*} z\in\Xi_j:=\left\{z\;\bigg{|}\;\mathbf{Y}_{j}^M\in F_{j}\(\mathbf{y}_{j}^M,\mathbf{y}_0\)\right\},\;j=1,\ldots,M, \end{equation*} \begin{equation*} z\in\Omega:=\left\{z\;\bigg{|}\;\mathbf{y}_0\in\mathcal{A}\right\}. \end{equation*} Applying now the necessary optimality conditions from \cite[Theorem~6.5]{m18} to the finite-dimensional mathematical programming problem $(MP)$ at its optimal solution \begin{eqnarray*} \bar{z}:=\(\bar{\mathbf{y}}_1^M,\ldots,\bar{\mathbf{y}}^{M}_M,\bar{\mathbf{y}}_0,\bar{\mathbf{Y}}_1^M,\ldots,\bar{\mathbf{Y}}_{M-1}^M\)\in\mathbb{R}^{2MN} \end{eqnarray*} gives us dual elements $\lambda^M\ge 0$, $p^M_j\in\mathbb{R}^N,\;j=2,\ldots,M$, $\alpha^{kM}=(\alpha_{1}^{kM},\ldots,\alpha_{2N}^{kM})\in\mathbb{R}^{2N}_+$ as $k=1,2$, and \begin{eqnarray*} z^*_j=\(\mathbf{y}^*_{1j},\ldots,\mathbf{y}^*_{Mj},\mathbf{y}_0^*,\mathbf{Y}^*_{1j},\ldots,\mathbf{Y}^*_{(M-1)\,j}\)\in\mathbb{R}^{2MN} \end{eqnarray*} for $j=1,\ldots,M$, which are not simultaneously zero, while satisfying the following relationships: \begin{equation}\label{sncs} z^*_j\in N_{\Xi_j}(\bar{z})+N_{\Omega}(\bar{z}),\quad j=1,\ldots,M, \end{equation} \begin{equation}\label{main}
-z^*_1-\ldots-z^*_{M}\in\lambda^M\partial\phi_0(\bar{z})+\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla L^l_{k}(\bar{z})+\sum_{j=0}^{M-1}\nabla H_j(\bar{z})^*p_{j+1}^{M}, \end{equation} \begin{equation}\label{eq:al1} \alpha_{l}^{kM}L^l_k\(\bar{z}\)=0\;\mbox{ for }\;l=1,\ldots,2N\;\mbox{ and }\;k=1,2. \end{equation} To specify more, note that in \eqref{sncs} we apply the normal cone intersection formula from \cite[Theorem~2.16]{m18} to $\bar{z}\in\Omega\cap\Xi_j$ for $j=1,\ldots,M-1$, where the qualification condition therein holds due to the graphical structure of the sets $\Xi_j$ and the coderivative computation from Theorem~\ref{Th:co-cal}. Furthermore, the structure of the sets $\Omega$ and $\Xi_j$ together with \eqref{sncs} leads us to the relationships $$\(\mathbf{y}^\ast_{11},\ldots,\mathbf{y}^\ast_{M1},\mathbf{y}^\ast_{0},\mathbf{Y}^\ast_{11},\ldots,\mathbf{Y}^\ast_{(M-1)1}\)\in N_{\Xi_1}\(\bar{z}\)+N_{\Omega}(\bar{z}),$$ $$\(\mathbf{y}^\ast_{12},\ldots,\mathbf{y}^\ast_{M2},\mathbf{y}^\ast_{0},\mathbf{Y}^\ast_{12},\ldots,\mathbf{Y}^\ast_{(M-1)2}\)\in N_{\Xi_2}\(\bar{z}\)+N_{\Omega}(\bar{z}),$$ $$\ldots$$ $$\(\mathbf{y}^\ast_{1M},\ldots,\mathbf{y}^\ast_{MM},\mathbf{y}^\ast_{0},\mathbf{Y}^\ast_{1M},\ldots,\mathbf{Y}^\ast_{(M-1)M}\)\in N_{\Xi_M}\(\bar{z}\)+N_{\Omega}(\bar{z}).$$ In this way we arrive at the inequality \begin{equation*} \begin{array}{ll} \bigg\langle\(\mathbf{y}^\ast_{11},\ldots,\mathbf{y}^\ast_{M1},\mathbf{y}^\ast_0,\mathbf{Y}^\ast_{11},\ldots,\mathbf{Y}^\ast_{(M-1)1}\),\Big(\big(\bar{\mathbf{y}}_1,\ldots,\bar{\mathbf{y}}_M,\bar{\mathbf{y}}_0,\bar{\mathbf{Y}}_1,\ldots,\bar{\mathbf{Y}}_{M-1}\big)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad-\big(\mathbf{y}_1,\ldots,\mathbf{y}_M,\mathbf{y}_0,\mathbf{Y}_1,\ldots,\mathbf{Y}_{M-1}\big)\Big)\bigg\rangle\le 0, \end{array} \end{equation*} where $\bar{\mathbf{Y}}_j\in F_j(\bar{\mathbf{y}}_j,\bar{\mathbf{y}}_0)$ and $\mathbf{Y}_j\in F_j(\mathbf{y}_j,\mathbf{y}_0)$ for all
$j=1,\ldots,M-1$, and where $\bar{\mathbf{y}}_0,\mathbf{y}_0\in\mathcal{A}$. Combining the above verifies that $\mathbf{y}^\ast_{ij}=\mathbf{Y}^\ast_{ij}=0$ if $i\ne j$, for all $j=1,\ldots,M$. The obtained relationships ensure that the inclusions in \eqref{sncs} are equivalent to \begin{equation}\label{e:5.18*} \(\mathbf{y}^*_{jj},\mathbf{y}_0^*-\psi,-\mathbf{Y}^*_{jj}\)\in N\(\(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0,-\dfrac{\bar{\mathbf{y}}_{j+1}^M-\bar{\mathbf{y}}_j^M}{\tau_M}\);\mbox{\rm gph}\, F_j\),\quad j=1,\ldots,M-1, \end{equation} while all the other components of $z^*_j$ are equal to zero for $j=1,\ldots,M-1$. We also get from above that $\psi\in N\(\bar{\mathbf{y}}_0;\mathcal{A}\)$, which justifies \eqref{psi}. It follows furthermore that \begin{eqnarray*} -z^*_1-\ldots-z^*_{M}=\big(-\mathbf{y}^*_{11},\ldots,-\mathbf{y}^*_{M-1\,M-1},0,-\mathbf{y}_0^*,-\mathbf{Y}^*_{11},\ldots,-\mathbf{Y}^*_{M-1\,M-1}\big). \end{eqnarray*} The set on the right-hand side of \eqref{main} is represented by \begin{eqnarray*} \lambda^M\partial\phi_0(\bar{z})+\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla L^{l}_{k}(\bar{z})+\sum_{j=0}^{M-1}\nabla H_j(\bar{z})^*p_{j+1}^{M}. 
\end{eqnarray*} Using the definitions of $L^l_k$ and $H_j$, we easily obtain the equality $$ \(\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla L^l_{k}(\bar{z})\)_{(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0,\bar{\mathbf{Y}}_j^M)}=\(-\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_j^M} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0),-\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0),0\) $$ for $j=1,\ldots,M-1$ together with the representations $$ \(\sum_{j=0}^{M-1}\nabla H_j(\bar{z})^*p_{j+1}^M\)_{\bar{\mathbf{y}}_j^M}= \(-p_{1}^{M},p_{1}^{M}-p_{2}^{M},\ldots,p_{j}^{M}-p_{j+1}^{M},\ldots,p_{M-1}^{M}-p_{M}^{M},p^{M}_{M}\), $$ $$ \(\sum_{j=0}^{M-1}\nabla H_j(\bar{z})^*p_{j+1}^M\)_{\bar{\mathbf{Y}}_j^M}=\big(-\tau_Mp_{1}^{M},-\tau_Mp_{2}^{M},\ldots,-\tau_Mp^{M}_{M}\big). $$ The set $\lambda^M\partial\phi_0(\bar{z})$ is represented as the collection of $$ \lambda^M\((\tau_m\mathbf{a} )_{\mathbf{y}_1^M},\ldots,(\tau_m\mathbf{a} )_{\mathbf{y}^M_{M-1}},0,(-\tau_m\mathbf{a} +\sigma \bar{\mathbf{y}}_0)_{(\mathbf{y}_0)},0_{\mathbf{Y}_1^M},\ldots,0_{\mathbf{Y}_{M-1}^M}\). $$ Combining the above gives us the relationships \begin{equation}\label{ey1} -\mathbf{y}^*_{11}=\lambda^M\tau_m\mathbf{a}-p_{2}^{M}, \end{equation} \begin{equation}\label{ey2} -\mathbf{y}^*_{jj}=\lambda^M\tau_m\mathbf{a}+p^{M}_{j}-p^{M}_{j+1},\;\;j=2,\ldots,M-1, \end{equation} \begin{equation}\label{ey3} 0=p_{M}^{M}-\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}^M_M} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0), \end{equation} \begin{equation}\label{ey0} -\mathbf{y}_0^*=\lambda^M\(-\tau_m\mathbf{a} +\sigma\bar{\mathbf{y}}_0\)-\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0), \end{equation} \begin{equation}\label{eY} -\mathbf{Y}^*_{jj}=-\tau_Mp_{j+1}^{M},\;\;j=1,\ldots,M-1.
\end{equation} Using the obtained representations, we can now proceed with completing the proof of the theorem. First observe that the transversality condition \eqref{pNN} follows directly from \eqref{ey3}. Next we extend the vector $p^M$ by adding the component $p_1^M:=\mathbf{y}^*_{1M}$. This tells us by \eqref{ey2}, \eqref{ey0}, and \eqref{eY} that $$ \begin{aligned} \frac{\mathbf{y}^*_{jj}}{\tau_M}&=\dfrac{p_{j+1}^{M}-p_{j}^{M}}{\tau_M}-\frac{\lambda^M\tau_m\mathbf{a}}{\tau_M},\\ \frac{\mathbf{y}_0^*}{\tau_M}&=-\dfrac{1}{\tau_M}\lambda^M\(-\tau_m\mathbf{a}+\sigma \bar{\mathbf{y}}_0\)+\dfrac{1}{\tau_M}\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0),\\ \frac{\mathbf{Y}^*_{jj}}{\tau_M}&=p_{j+1}^{M}. \end{aligned} $$ Substituting these relationships into the left-hand side of \eqref{e:5.18*} and taking into account the equalities obtained in \eqref{eq:al1}, \eqref{ey2}, \eqref{ey0}, and \eqref{eY} verifies the optimality conditions claimed in \eqref{con:al1}--\eqref{inclu}. It remains to justify the nontriviality condition \eqref{NOC1}. On the contrary, suppose that $\lambda^M=0,\,\psi=0,\,\alpha^{kM}=0$, and $p_{j}^{M}=0$ for all $j=1,\ldots,M-1$, which implies in turn that $\mathbf{y}^*_{1M}=p_{1}^{M}=0$. Then we deduce from \eqref{ey3} that $p^{M}_{M}=0$, and so $p_{j}^{M}=0$ for all $j=1,\ldots,M$. It follows from \eqref{ey1}, \eqref{ey2}, and \eqref{ey0} that $\(\mathbf{y}^*_{jj},\mathbf{y}_0^*\)=0$ for all $j=1,\ldots,M-1$. By \eqref{eY} we have that $\mathbf{Y}^*_{jj}=0$ whenever $j=1,\ldots,M-1$. Since all the components of $z^*_j$ different from $(\mathbf{y}^*_{jj},\mathbf{y}_0^*,\mathbf{Y}^*_{jj})$ are obviously zero for $j=1,\ldots,M-1$, this tells us that $z^*_{j}=0$ for such $j$. Employing finally $\mathbf{y}^*_{1M}=p_{1}^{M}=0$ together with the fact that all the other components of this vector are zero ensures that $z^*_{M}=0$.
Overall, $z^*_j=0$ for all $j=1,\ldots,M$, and thus the nontriviality condition for $(MP)$ fails. The obtained contradiction completes the proof. $ \triangle$\vspace*{0.05in} The final result of this paper establishes necessary optimality conditions for smoothed problem $({\mathbb P}^M_N)$ expressed entirely in terms of the initial problem data. The desired conditions are derived by incorporating the second-order calculations of Theorem~\ref{Th:co-cal} into the corresponding conditions of Theorem~\ref{NOC} in the case where the mappings $F_j=F^M_j$ therein are given by \eqref{mapF2}.\vspace*{-0.05in} \begin{thm}{\bf(explicit necessary conditions for discretized QVI sweeping control problems).}\label{Th:OC-DP} Let $\bar{z}^M=(\bar{\mathbf{y}}^M,\bar{\mathbf{y}}_0)$ be an optimal solution to the smoothed problem $(\mathbb{P}^M_N)$ with the sweeping dynamics defined by \eqref{mapF2} under the assumptions of Theorem~{\rm\ref{NOC}}. Then there exist dual elements $(\lambda^M,\alpha^{kM},p^M)$ and $\psi\in N_{\mathcal{A}}\(\bar{\mathbf{y}}_0\)$ together with vectors $\eta^{kM}_j=\(\eta^{kM}_{1j},\ldots,\eta^{kM}_{2Nj}\)\in\mathbb{R}^{2N}_+$ as $j=1,\ldots,M,\,k=1,2$ and $\gamma^{kM}_j=\(\gamma^{kM}_{1j},\ldots,\gamma^{kM}_{2Nj}\)\in\mathbb{R}^{2N}$ as $j=1,\ldots,M-1$ and $k=1,2$ such that the following relationships hold:\\[1ex] $\bullet$ {\sc nontriviality condition} \begin{equation}\label{e:dac26} \lambda^M+\|\eta_M^{kM}\|+\sum^{M}_{j=1}\|p^{M}_j\|\not=0.
\end{equation} $\bullet$ {\sc dynamic relationships} for all $j=1,\ldots,M-1$: \begin{equation}\label{e:dac15} \dfrac{\bar{\mathbf{y}}^M_{j+1}-\bar{\mathbf{y}}^M_j}{\tau_M}+\mathbf{f}_j^M=-\sum_{k=1}^2\sum_{l\in I(\bar{\mathbf{y}}^M_j)}\eta^{kM}_{lj}\nabla_{\bar{\mathbf{y}}^M_j} g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0), \end{equation} \begin{equation}\label{e:dac16} \dfrac{p^{M}_{j+1}-p^{M}_j}{\tau_M}-\frac{\lambda^MT\mathbf{a}^{\intercal} }{\tau_M}=-\sum_{k=1}^2\sum^{2N}_{l=1}\eta^{kM}_{lj}\bigg\langle\nabla^2_{\bar{\mathbf{y}}^M_j}g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0),-p^{M}_{j+1}\bigg\rangle-\sum_{k=1}^2\sum^{2N}_{l=1}\gamma^{kM}_{lj}\nabla_{\bar{\mathbf{y}}^M_j} g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0), \end{equation} \begin{equation}\label{e:dac17} -\dfrac{1}{\tau_M}\lambda^M\(T\mathbf{a}^{\intercal}+\sigma \bar{\mathbf{y}}_0\)+\dfrac{1}{\tau_M}\sum_{k=1}^2\sum^{2N}_{l=1}\eta_{lM}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)-\dfrac{1}{\tau_M}\psi=0. \end{equation} $\bullet$ {\sc transversality condition} \begin{equation}\label{e:dac19} p^{M}_{M}=-\lambda^MT\mathbf{a}+\sum_{k=1}^2\sum^{2N}_{l=1}\eta_{lM}^{kM}\nabla_{\bar{\mathbf{y}}^M_M} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0).
\end{equation} $\bullet$ {\sc complementarity slackness conditions} \begin{equation} \label{e:dac20} g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)>0\Longrightarrow\eta^{kM}_{lj}=0, \end{equation} \begin{equation} \label{e:dac21} \big[g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)>0\mbox{ or }\eta^{kM}_{lj}=0,\;\langle \nabla g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0),-p^{M}_{j+1}\rangle>0\big]\Longrightarrow\gamma^{kM}_{lj}=0, \end{equation} \begin{equation} \label{e:dac22} \big[g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)=0,\,\eta^{kM}_{lj}=0,\mbox{ and } \langle\nabla g^l_k(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0),-p^{M}_{j+1}\rangle<0\big]\Longrightarrow\gamma^{kM}_{lj}\ge 0 \end{equation} for $j=1,\ldots,M-1$, $l=1,\ldots,2N$, and $k=1,2$. Furthermore, we have the implications \begin{equation} \label{e:dac23} g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)>0\Longrightarrow\gamma^{kM}_{lj}=0\;\mbox{for}\;\;j=1,\ldots,M-1,\;\;l=1,\ldots,2N,\;\mbox{ and }\;k=1,2, \end{equation} \begin{equation}\label{e:dac24} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)>0\Longrightarrow\eta^{kM}_{lM}=0\;\;\mbox{for}\;\;l=1,\ldots,2N\;\mbox{ and}\;\; k=1,2, \end{equation} \begin{equation}\label{e:dac25} \eta^{kM}_{lj}>0\Longrightarrow\langle\nabla g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0),-p^{M}_{j+1}\rangle=0. \end{equation} \end{thm} {\bf Proof}. 
The adjoint dynamic inclusion \eqref{inclu} of Theorem~\ref{NOC} can be rewritten, by using the coderivative definition \eqref{e:cor}, in the coderivative inclusion form \begin{equation}\label{e:dac27} \begin{aligned} &\bigg(\frac{p_{j+1}^{M}-p_{j}^{M}}{\tau_M}-\frac{\lambda^MT\mathbf{a} }{\tau_M},-\frac{1}{\tau_M}\lambda^M\(T\mathbf{a}^{\intercal} +\sigma \bar{\mathbf{y}}_0\)+\dfrac{1}{\tau_M}\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)-\frac{1}{\tau_M}\psi\bigg)\\ &\in D^*F_j\(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0,-\dfrac{\bar{\mathbf{y}}_{j+1}^M-\bar{\mathbf{y}}_j^M}{\tau_M}\)(-p_{j+1}^{M}),\;\;j=1,\ldots,M-1. \end{aligned} \end{equation} It follows from \eqref{mapF2} and the inclusions $\dfrac{\bar{\mathbf{y}}^M_{j+1}-\bar{\mathbf{y}}^M_j}{-\tau_M}-\mathbf{f}_j\in N(\bar{\mathbf{y}}^M_j;\Tilde\mathcal{K}^{\infty}(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0))$ for $j=1,\ldots,M-1$ that there exist vectors $\eta^{kM}_j\in\mathbb{R}^{2N}_+$ as $j=1,\ldots,M-1$ and $k=1,2$ such that the conditions in \eqref{e:dac15} and \eqref{e:dac20} are satisfied.
Employing the second-order formula from Theorem~\ref{Th:co-cal} with $\mathbf{y}:=\bar{\mathbf{y}}^M_j,\,\mathbf{y}_0:=\bar{\mathbf{y}}_0,\,w:=\dfrac{\bar{\mathbf{y}}^M_{j+1}-\bar{\mathbf{y}}^M_j}{-\tau_M}$, and $y:=-p^{M}_{j+1}$ and combining this with the domain formula therein gives us vectors $\gamma^{kM}_j\in\mathbb{R}^{2N}$ for which we have the equalities $$ \begin{aligned} &\bigg(\frac{p_{j+1}^{M}-p_{j}^{M}}{\tau_M}-\frac{\lambda^MT\mathbf{a} }{\tau_M},-\frac{1}{\tau_M}\lambda^M\(T\mathbf{a}^{\intercal}+\sigma\bar{\mathbf{y}}_0\)+\dfrac{1}{\tau_M}\sum_{k=1}^2\sum^{2N}_{l=1}\alpha_{l}^{kM}\nabla_{\bar{\mathbf{y}}_0} g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)-\frac{1}{\tau_M}\psi\bigg)\\ =&\bigg(-\sum_{k=1}^2\sum^{2N}_{l=1}\eta^{kM}_{lj}\bigg\langle\nabla^2_{\bar{\mathbf{y}}^M_j}g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0),-p^{M}_{j+1}\bigg\rangle-\sum_{k=1}^2\sum^{2N}_{l=1}\gamma^{kM}_{lj}\nabla_{\bar{\mathbf{y}}^M_j} g^l_k(\bar{\mathbf{y}}_j^M,\bar{\mathbf{y}}_0),0\bigg) \end{aligned} $$ whenever $j=1,\ldots,M-1$. This clearly ensures the fulfillment of all the conditions claimed in \eqref{e:dac16}, \eqref{e:dac17}, \eqref{e:dac21}, and \eqref{e:dac22}. Now we denote $\eta_M^{kM}:=\alpha^{kM}$, where $\alpha^{kM}$ are taken from Theorem~\ref{NOC}, and note that $\eta^{kM}_j\in\mathbb{R}^{2N}_+$ for $j=1,\ldots,M$. Thus we get \eqref{NOC1} and deduce the transversality condition \eqref{e:dac19} from \eqref{pNN}. Observe also that \eqref{e:dac24} follows immediately from \eqref{con:al1} and the construction of $\eta_M^{kM}$, and that the adjoint inclusion \eqref{e:dac27} readily yields $$ -p^{M}_{j+1}\in\mbox{\rm dom}\, D^*N_{\Tilde\mathcal{K}^{\infty}(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)}\bigg(\bar{\mathbf{y}}^M_j,\dfrac{\bar{\mathbf{y}}^M_{j+1}-\bar{\mathbf{y}}^M_j}{-\tau_M}+\mathbf{f}_j\bigg). $$ Based on this and the coderivative formula from Theorem~\ref{Th:co-cal}, it is easy to check that \eqref{e:dac25} is satisfied.
It remains to verify the nontriviality condition \eqref{e:dac26}, taking into account the imposed gradient linear independence condition. On the contrary, suppose that \eqref{e:dac26} is violated, i.e., $\lambda^M=0,\;\eta_{lM}^{kM}=0$ for $l=1,\ldots,2N,\,k=1,2$, and that $p^{M}_{j}=0$ for $j=1,\ldots,M$. Then it follows from \eqref{e:dac19} with $$ \displaystyle\sum_{k=1}^2\sum^{2N}_{l=1}\eta^{kM}_{lM}\nabla g^l_k(\bar{\mathbf{y}}^M_M,\bar{\mathbf{y}}_0)=0 $$ that $p_{M}^M=0$. Then \eqref{e:dac17} tells us that $\psi=0$, and hence \eqref{e:dac16} implies that $$ \displaystyle\sum_{k=1}^2\sum^{2N}_{l=1}\gamma^{kM}_{lj}\nabla g^l_k(\bar{\mathbf{y}}^M_j,\bar{\mathbf{y}}_0)=0\;\mbox{ for }\; j=1,\ldots,M-1. $$ This contradicts the fulfillment of \eqref{NOC1} and thus verifies \eqref{e:dac26}. The proof is complete.$ \triangle$ \section{Concluding Remarks}\label{conclusion} This paper is the first attempt to study optimal control problems governed by evolutionary quasi-variational inequalities of the parabolic type that arise in the formation and growth modeling of granular cohesionless material. The formulated mathematical problem turns out to be very challenging due to the presence of nonsmooth and nonconvex {\em gradient constraints} and thus calls for developing various regularization and approximation procedures for its efficient investigation and solution. Designing such procedures and verifying their well-posedness, we arrive at an adequate version described as optimal control of a discrete-time quasi-variational sweeping process, which is different from those previously considered in the literature. Nevertheless, employing powerful tools of variational analysis and generalized differentiation brings us to the collection of necessary optimality conditions expressed entirely via the initial data of the original problem. These conditions are derived in Theorem~\ref{Th:OC-DP}.
Some future research directions include designing efficient numerical algorithms for the system of optimality conditions presented in Theorem~\ref{Th:OC-DP}. More work is needed to establish the convergence analysis of some of the regularization and approximation procedures constructed in this paper, for instance, the regularization of the gradient constraints. This will, in particular, be critical to obtain optimality conditions for $({\mathbb P}_N)$ and $({\mathbb P})$ by passing to the limit from those established in Theorem~\ref{Th:OC-DP} for the fully discrete problem. \end{document}
\begin{document} \title[On some relations between generalized associators] {On some relations between generalized associators} \author{Benjamin Enriquez} \address{IRMA (CNRS), rue Ren\'e Descartes, F-67084 Strasbourg, France} \email{enriquez@@math.u-strasbg.fr} \maketitle \begin{abstract} Let $\Phi$ be the Knizhnik-Zamolodchikov associator and $\Psi_N$ be its analogue for $N$th roots of 1. We prove a hexagon relation for $\Psi_4$. Similarly to the Broadhurst (for $\Psi_2$) and Okuda (for $\Psi_4$) duality relations, it relies on the ``supplementary'' (i.e., non-dihedral) symmetries of ${\mathbb C}^\times - \mu_4({\mathbb C})$ (i.e., the octahedron group ${\mathfrak{S}}_4$). We also derive relations between $\Phi$ and $\Psi_2$, which are analogues of equations found by Nakamura and Schneps that are satisfied by the image of the morphism $\operatorname{Gal}(\bar{\mathbb Q}/{\mathbb Q})\to \widehat{\operatorname{GT}}$. \end{abstract} \tableofcontents \section*{Introduction} For $N\geq 1$, let $\Psi_N(A|b[\zeta],\zeta\in\mu_N({\mathbb C}))$ be the generalized associator defined as the renormalized holonomy from 0 to 1 of \begin{equation} \label{eq:N} (dH)H^{-1} = ({A\over z} + \sum_{\zeta\in \mu_N({\mathbb C})} {{b[\zeta]}\over{z-\zeta}})dz, \end{equation} i.e., $\Psi_N = H_1^{-1}H_0$, where $H_0,H_1$ are the solutions of (\ref{eq:N}) on $]0,1[$ such that $H_0(z) \simeq z^A$ when $z\to 0^+$, $H_1(z) \simeq (1-z)^{b[1]}$ when $z\to 1^-$, and $A$, $b[\zeta]$ ($\zeta\in\mu_N({\mathbb C})$) are free variables. When $N=1$, we set $b[1] = B$, and $\Psi_1(A|b[1]) = \Phi(A,B)$ is the KZ associator. $\Psi_N$ can be viewed as a generating series for the values of multiple polylogarithms at $N$th roots of unity. In \cite{E}, we found some relations satisfied by $\Psi_N$ (the pentagon and octagon relations). As in \cite{Dr}, these relations give rise to a torsor and a graded Lie algebra $\grtmd(N)$.
We have a Lie algebra morphism $\grtmd(N) \to \grt$, where $\grt$ is the graded analogue of the Grothendieck-Teichm\"uller Lie algebra (\cite{Dr}). The octagon relation is based on the dihedral symmetries of ${\mathbb C}^\times - \mu_N({\mathbb C})$. However, for special values of $N$, the automorphism group of ${\mathbb C}^\times - \mu_N({\mathbb C})$ is larger than the dihedral group $D_N$. These values are $N = 1,2,4$. The resulting supplementary relations are: (a) when $N=1$, the duality and hexagon relations (\cite{Dr}); (b) for $N=2$, the Broadhurst duality relation (\cite{Br}); (c) for $N=4$, the Okuda duality relation (\cite{O}) and the relation of Section \ref{Phi4}. In cases (a) and (c), the octagon relation is then a consequence of the duality and hexagon relations, as the octagon can be cut into two neighboring hexagons. In the second part of the paper (Section \ref{Phi:Psi2}), we derive relations between $\Phi$ and $\Psi_2$, which are analogues of identities in the image of the morphism $\operatorname{Gal}(\bar{\mathbb Q}/{\mathbb Q})\to \widehat{\operatorname{GT}}$ found in \cite{NS}. One can check that all these relations give rise to subtorsors of the torsors studied in \cite{E} and to a Lie subalgebra $\grtmd'(N) \subset \grtmd(N)$ for $2|N$ and $4|N$. However, the morphism $\grtmd'(N)\to \grt$ is surjective if the final questions of \cite{Dr} are answered in the affirmative. \section{A hexagon relation for $\Psi_4$} \label{Phi4} For $N\geq 1$, we denote by ${\mathfrak{f}}_{N+1}$ the Lie algebra with generators $A,C,b[\zeta]$, $\zeta\in \mu_N({\mathbb C})$, with the only relation $A + \sum_{\zeta\in\mu_N({\mathbb C})} b[\zeta] + C = 0$, and by $\hat{\mathfrak{f}}_{N+1}$ its degree completion (the generators all have degree $1$). Then $\Psi_N$ belongs to the group $\operatorname{exp}(\hat{\mathfrak{f}}_{N+1})$.
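An elementary observation, added here for the reader's convenience (it is implicit in the definitions above): the single defining relation of ${\mathfrak{f}}_{N+1}$ only serves to express $C$ through the remaining generators.

```latex
% The relation A + \sum_\zeta b[\zeta] + C = 0 merely eliminates C, so
\[
{\mathfrak{f}}_{N+1}\;\simeq\;\operatorname{Lie}\big(A,\,b[\zeta],\ \zeta\in\mu_N({\mathbb C})\big),
\qquad C=-A-\sum_{\zeta\in\mu_N({\mathbb C})}b[\zeta];
\]
% in particular, f_{N+1} is a free Lie algebra on N+1 generators, and
% elements of exp(\hat f_{N+1}), such as Psi_N, are exponentials of Lie
% series in A and the b[zeta] alone.
```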
If $S\subset {\mathbb{P}}^1({\mathbb C})$ is finite, let $V:= \operatorname{Span}_{{\mathbb C}}(b[s],s\in S) / {\mathbb C}(\sum_{s\in S} b[s])$. Let $\Gamma\subset \operatorname{PSL}_2({\mathbb C})$ be the group of all the automorphisms preserving $S$. Then $\Gamma$ acts on $V$ by $\sigma(b[s]) = b[\sigma(s)]$, and the form $\sum_{s\in S} b[s] \operatorname{d}\operatorname{ln}(z-s)$ is $\Gamma$-invariant. In particular, if $S = \{0,\infty\} \cup \mu_N({\mathbb C})$, then $\Gamma$ acts on $\hat{\mathfrak{f}}_{N+1}$ and on the set of solutions of (\ref{eq:N}). In that case, $\Gamma \supset D_N$, where $D_N$ is the dihedral group of order $2N$, and $D_N\subsetneq \Gamma$ iff $N = 1,2,4$. When $N = 4$, $S$ is the octahedron $\{0,\infty\} \cup \mu_4({\mathbb C})$ and $\Gamma = {\mathfrak{S}}_4$. Then $\Gamma$ is presented by generators $s,t$ and relations $s^3 = t^4 = (st)^2 = 1$, where $s : z \mapsto {{1+iz}\over{1-iz}}$ and $t : z\mapsto iz$ (here $i = \sqrt{-1}$). The corresponding automorphisms of ${\mathfrak{f}}_5$ are given by $$ s : A \mapsto b[1], \; b[1]\mapsto b[i], \; b[i]\mapsto A, \; b[-1] \mapsto b[-i], \; b[-i] \mapsto C, \; C\mapsto b[-1], $$ and $t: A\mapsto A$, $C\mapsto C$, $b[\zeta]\mapsto b[i\zeta]$ (recall that $C = -A - \sum_{\zeta\in\mu_4({\mathbb C})} b[\zeta]$). Recall that $\Psi_4 = H_1^{-1}H_0$, where $H_0(z),H_1(z)$ are the solutions of \begin{equation} \label{diff:eq} (dH)H^{-1} = \big( {A\over z} + \sum_{\zeta\in \mu_4({\mathbb C})} {{b[\zeta]}\over {z-\zeta}}\big) dz , \end{equation} on $]0,1[$ with behavior $H_0(z) \simeq z^{A}$ when $z\to 0^+$ and $H_1(z) \simeq (1-z)^{b[1]}$ when $z\to 1^-$.
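As a quick check, added here for the reader's convenience (these computations are not written out above), the asserted orders of $s$ and $t$ can be verified directly from their action on $S=\{0,\infty\}\cup\mu_4({\mathbb C})$:

```latex
% s(z) = (1+iz)/(1-iz) permutes S in two 3-cycles:
%   s: 0 -> 1 -> i -> 0   and   s: -1 -> -i -> infinity -> -1,
% while t(z) = iz fixes 0 and infinity and cycles the 4th roots of unity:
\[
s=(0\;\,1\;\,i)(-1\;\,-i\;\,\infty),\qquad t=(1\;\,i\;\,-1\;\,-i),
\]
% whence s^3 = t^4 = id on S, in accordance with the presentation
% s^3 = t^4 = (st)^2 = 1 of the octahedron group S_4 recalled above.
```

These cycles match exactly the displayed action of $s$ and $t$ on the generators $A,b[\zeta],C$ of ${\mathfrak{f}}_5$.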
\begin{proposition} $\Psi_4$ satisfies the following hexagon relation\footnote{If ${\mathfrak{n}}$ is a pronilpotent Lie algebra, $x\in {\mathfrak{n}}$ and $a\in{\mathbb C} \setminus {\mathbb{R}}_-$, then $a^x := \operatorname{exp}(x\operatorname{ln}(a))$, where $\operatorname{ln}(a)$ is chosen with imaginary part in $]-\pi,\pi[$.} \begin{equation} \label{hexagon:Psi4} (2/i)^A s^2(\Psi_4) (2/i)^{b[i]} s(\Psi_4) (2/i)^{b[1]} \Psi_4 = 1. \end{equation} \end{proposition} {\em Proof.} Let $H_{0^+},H_{1^-}, H_{1+i0^+},H_{i+0^+},H_{i-i0^+}, H_{i0^+}$ be the solutions of (\ref{diff:eq}) in ${\bf T} := \{z\in{\mathbb C} | \operatorname{Re}(z)\geq 0, \operatorname{Im}(z)\geq 0, |z|\leq 1\} - \{0,1,i\}$ with asymptotic behaviors $H_{0^+}(z) \simeq z^A$ when $z\to 0^+$, $H_{1^-}(z) \simeq (1-z)^{b[1]}$ when $z\to 1^-$, $H_{1+i0^+}(z) \simeq ({{z-1}\over i})^{b[1]}$ when $z\to 1+i0^+$, $H_{i+0^+}(z) \simeq (z-i)^{b[i]}$ when $z\to i+0^+$, $H_{i-i0^+}(z) \simeq ({{i-z}\over i})^{b[i]}$ when $z\to i-i0^+$, $H_{i0^+}(z) \simeq (z/i)^A$ when $z\to i0^+$. Let us still denote by $H_0,H_1$ the prolongations of $H_0,H_1$ to ${\bf T}$. Then we have $H_{0^+}(z) = H_0(z)$, $H_{1^-}(z) = H_1(z)$, $H_{1+i0^+}(z) = s(H_0(s^{-1}(z))) 2^{b[1]}$, $H_{i+0^+}(z) = s(H_1(s^{-1}(z)))$, $H_{i-i0^+}(z) = s^2(H_0(s^{-2}(z))) 2^{b[i]}$, $H_{i0^+}(z) = s^2(H_1(s^{-2}(z))) 2^{-A}$. Then we have $H_{1^-} = H_{0^+} \Psi_4^{-1}$, $H_{1+i0^+} = H_{1^-} i^{b[1]}$, $H_{i+0^+} = H_{1+i0^+} 2^{-b[1]} s(\Psi_4^{-1})$, $H_{i-i0^+} = H_{i+0^+} i^{b[i]}$, $H_{i0^+} = H_{i-i0^+} 2^{-b[i]} s^2(\Psi_4^{-1}) 2^{-A}$, $H_{0^+} = H_{i0^+} i^A$. These equalities imply the hexagon relation. \qed For completeness, we recall that $\Psi_4$ satisfies the Okuda duality relation (\cite{O}): \begin{equation} \label{duality:Psi4} st(\Psi_4) = 2^{-A} \Psi_4^{-1} 2^{-b[1]}, \end{equation} where $st$ is the automorphism of order $2$ given by $A \leftrightarrow b[1]$, $b[i]\leftrightarrow b[-i]$, $b[-1]\leftrightarrow C$. 
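Although the Möbius map realizing $st$ is not written out above, it is easily computed (here with the composition convention $st\colon z\mapsto s(t(z))$); the following check, added for the reader's convenience, confirms both the stated action on the generators and the relation $(st)^2=1$:

```latex
% With t(z) = iz and s(z) = (1+iz)/(1-iz):
\[
(st)(z)=s(iz)=\frac{1+i\cdot iz}{1-i\cdot iz}=\frac{1-z}{1+z},
\]
% so st swaps 0 <-> 1 and -1 <-> infinity, and sends i <-> -i; on f_5 this
% induces A <-> b[1], b[-1] <-> C, b[i] <-> b[-i], as stated. Moreover,
\[
(st)^2(z)=\frac{1-\frac{1-z}{1+z}}{1+\frac{1-z}{1+z}}
=\frac{(1+z)-(1-z)}{(1+z)+(1-z)}=z,
\]
% confirming that st is indeed an involution.
```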
In \cite{E}, we showed that $\Psi_4$ satisfies the octagon equation $$ \Psi_4^{-1} i^{2b[1]} (st^2s^{-1})(\Psi_4) i^C (s^{-1}ts^{-1})(\Psi_4^{-1}) i^{2b[i]} t(\Psi_4) i^A = 1, $$ but as mentioned in the Introduction, this equation is a consequence of (\ref{hexagon:Psi4}) and (\ref{duality:Psi4}). Together with $\Phi$, $\Psi_4$ also satisfies a mixed pentagon equation; $\Psi_4$ satisfies a group-likeness condition, and the following distributivity relations: $$ \delta_{42}(\Psi_4) = \Psi_2, \quad \pi_{42}(\Psi_4) = 2^{b[1]}\Psi_2, \quad \delta_{41}(\Psi_4) = \Phi, \quad \pi_{41}(\Psi_4) = 4^{b[1]}\Phi, $$ where for $d|4$, $\pi_{4d},\delta_{4d} : {\mathfrak{f}}_{4+1} \to {\mathfrak{f}}_{d+1}$ are defined by $\pi_{4d}(A) = d'A$, $\pi_{4d}(b[\zeta]) = b[\zeta^{d'}]$, $\delta_{4d}(A) = A$, $\delta_{4d}(b[\zeta]) = b[\zeta]$ if $\zeta\in \mu_{d}({\mathbb C})$ and $=0$ otherwise; here $d' = 4/d$. The generator $b[1]$ of ${\mathfrak{f}}_{1+1}$ is denoted by $B$. \begin{remark} (duality for $\Psi_2$) When $N=2$, $S = \{0,\infty,1,-1\}$ is the square and $\Gamma = D_4$. Then $\Gamma$ is presented by generators $\rho,\sigma$ and relations $\rho^4 = \sigma^2 = (\sigma\rho)^2 = 1$. The inclusion $D_4 \subset {\mathfrak{S}}_4 \subset \operatorname{PSL}_2({\mathbb C})$ is given by $\sigma \mapsto st$, $\rho \mapsto t^2$. In \cite{Br}, formula (127), Broadhurst showed the duality relation \begin{equation} \label{dual:Psi2} \sigma(\Psi_2) = 2^{-A} \Psi_2^{-1} 2^{-b[1]}. \end{equation} Explicitly, the involutive automorphism $\sigma$ of ${\mathfrak{f}}_3$ is given by $\sigma : A \leftrightarrow b[1]$, $b[-1] \leftrightarrow C= -A-b[1]-b[-1]$. Since $\sigma\circ\delta_{42} = \delta_{42} \circ (st)$, the Okuda duality relation (\ref{duality:Psi4}) for $\Psi_4$, together with the distribution relation $\Psi_2 = \delta_{42}(\Psi_4)$, implies the Broadhurst duality relation (\ref{dual:Psi2}) for $\Psi_2$.
\end{remark} \section{Relations between $\Phi$ and $\Psi_2$} \label{Phi:Psi2} In this section, we set $b_0 = b[1]$, $b_1 = b[-1]$. \subsection{The relation $\Phi(A,B) = 2^B \Psi_2(A|B,-A-B) 2^A$} Let $A,B$ be free noncommutative variables. Recall that $\Phi(A,B) = G_1^{-1}G_0$, where $G_0,G_1$ are the solutions of \begin{equation} \label{KZ} (dG)G^{-1} = ({A\over u} + {B\over{u-1}})du, \end{equation} such that $G_0(u) \simeq u^A$ as $u\to 0^+$ and $G_1(u) \simeq (1-u)^B$ as $u\to 1^-$. If $A,b_0,b_1$ are free noncommutative variables, recall that $\Psi_2(A|b_0,b_1) = H_1^{-1} H_0$, where $H_0,H_1$ are the solutions of $$ (dH)H^{-1} = ({A\over z} + {{b_0}\over{z-1}} + {{b_1}\over{z+1}})dz $$ such that $H_0(z) \simeq z^A$ as $z\to 0^+$ and $H_1(z) \simeq (1-z)^{b_0}$ as $z\to 1^-$. Then $\Phi(A,B)\in \operatorname{exp}(\hat{\mathfrak{f}}_2)$ and $\Psi_2(A|b_0,b_1)\in \operatorname{exp}(\hat{\mathfrak{f}}_3)$, where $\hat{\mathfrak{f}}_2$ (resp., $\hat{\mathfrak{f}}_3$) is the topologically free Lie algebra generated by $A,B$ (resp., $A,b_0,b_1$). \begin{proposition} Let $A,B$ be free noncommutative variables. Then \begin{equation} \label{NS:1} \Phi(A,B) = 2^B \Psi_2(A|B,-A-B) 2^A. \end{equation} \end{proposition} {\em Proof.} The Broadhurst duality relation (\ref{dual:Psi2}) may be written as $\Psi_2(b_0|A,-A-b_0-b_1) = 2^{-A} \Psi_2^{-1}(A|b_0,b_1) 2^{-b_0}$. Then if we substitute $b_0 \mapsto B$, $b_1 \mapsto -A - B$, we get $\Psi_2(B|A,0) = 2^{-A}\Psi_2^{-1}(A|B,-A-B)2^{-B}$. The relation then follows from the distribution relation $\Psi_2(B|A,0) = \Phi(B,A)$ and from the duality relation $\Phi(B,A) = \Phi(A,B)^{-1}$. A direct proof is as follows. $\Psi_2(A|B,-A-B) = \tilde H_1^{-1} \tilde H_0$, where $\tilde H_0,\tilde H_1$ are the solutions of \begin{equation} \label{eq1} (d\tilde H)\tilde H^{-1} = ({A\over z} + {B\over {z-1}} - {{A+B}\over{z+1}})dz \end{equation} such that $\tilde H_0(z) \simeq z^A$ as $z\to 0^+$, $\tilde H_1(z) \simeq (1-z)^B$ as $z\to 1^-$.
Set $u:= 2z/(z+1)$. Then $({A\over z} + {B\over{z-1}} - {{A+B}\over{z+1}})dz = ({A\over u} + {B\over {u-1}})du$. It follows that if $G(u)$ is a solution of (\ref{KZ}), then $\tilde H(z) := G(u(z))$ is a solution of (\ref{eq1}). In particular, one checks that $\tilde H_0(z) = G_0(u(z)) 2^{-A}$ and $\tilde H_1(z) = G_1(u(z)) 2^B$. Therefore $\Psi_2(A|B,-A-B) = \tilde H_1^{-1}\tilde H_0 = 2^{-B} G_1^{-1}G_0 2^{-A} = 2^{-B} \Phi(A,B) 2^{-A}$. \qed \begin{remark} Relation (\ref{NS:1}) is the analogue of relation $f(\tau_1,\tau_2^2) = \tau_2^{4\rho_2}f(\tau_1^2,\tau_2^2) \tau_1^{2\rho_2} (\tau_1\tau_2^2)^{-2\rho_2}$ in Theorem 2.2 of \cite{NS}. Here $f\in \hat F_2$ is in the image of $\operatorname{Gal}(\bar{\mathbb Q}/{\mathbb Q}) \to \hat{\mathbb Z} \times \hat F_2 \to \hat F_2$ (in particular, $f\in \hat F_2'$, the commutator subgroup of $\hat F_2$, and $f(x,y)f(y,x)=1$) and $\rho_2\in \hat{\mathbb Z}$ depends only on $f$ (it is called a Kummer cocycle of $f$ in \cite{NS}). This relation takes place in $\hat B_3$, where $B_3$ is the braid group with $3$ strands. Since $f(x,y)\in \hat F_2'$, it lies in the kernel of the morphism $\hat F_2 \to {\mathbb Z}/2{\mathbb Z}$, $x\mapsto \bar 1$, $y\mapsto \bar 0$, so there exists a unique $h(X|y_0,y_1)\in \hat F_3$ such that $f(x,y) = h(x^2|y,xyx^{-1})$. In the same way that the map $\sigma \mapsto f(x,y)$ corresponds to the KZ associator $\Phi(A,B)$, the map $\sigma\mapsto h(X|y_0,y_1)$ corresponds to $\Psi_2(A|b_0,b_1)$. Let $K_3 \subset B_3$ be the pure braid group with $3$ strands. It contains the elements $x_{12} = \tau_1^2$, $x_{23} = \tau_2^2$ and $x_{13} = \tau_2\tau_1^2 \tau_2^{-1}$; $x_{12}x_{13}x_{23} = x_{23}x_{13}x_{12}$ generates the center $Z(B_3) \simeq {\mathbb Z}$ of $B_3$ and $B_3/Z(B_3) \simeq F_2$ is freely generated by the classes of $x_{12}$ and $x_{23}$.
The relation from \cite{NS} is then rewritten $h(x_{12}|x_{23},x_{13}) = x_{23}^{2\rho_2}f(x_{12},x_{23}) x_{12}^{\rho_2} (x_{12}x_{13}x_{23})^{-\rho_2}$, which is a relation in $\hat K_3$. The image of this relation in $\hat F_2$ is $$ f(x,y) = y^{-2\rho_2} h(x|y,(yx)^{-1}) x^{-\rho_2}. $$ (\ref{NS:1}) is an analogue of this relation. \end{remark} \subsection{The relation $\Phi(A,B) = 4^B \Psi_2(A|2B,-2(A+B)) 4^A$} \begin{proposition} We have \begin{equation} \label{NS:2} \Phi(A,B) = 4^B \Psi_2(A|2B,-2(A+B)) 4^A \end{equation} with the above conventions. \end{proposition} {\em Proof.} $\Psi_2(A|2B,-2(A+B)) = \bar H_1^{-1}\bar H_0$, where $\bar H_0,\bar H_1$ are the solutions of \begin{equation} \label{eq2} (d\bar H)\bar H^{-1} = ({A\over z} + {{2B}\over {z-1}} - {{2(A+B)}\over{z+1}})dz \end{equation} with $\bar H_0(z) \simeq z^A$ for $z\to 0^+$, $\bar H_1(z) \simeq (1-z)^{2B}$ when $z\to 1^-$. Set $u:= 4z/(z+1)^2$. Then $({A\over z} + {{2B}\over {z-1}} - {{2(A+B)}\over{z+1}})dz = ({A\over u} + {B\over {u-1}})du$. As above, it follows that if $G(u)$ is a solution of (\ref{KZ}), then $\bar H(z) := G(u(z))$ is a solution of (\ref{eq2}). Moreover, the expansions $u\simeq 4z$ as $z\to 0$ and $1-u \simeq (1-z)^2/4$ as $z\to 1$ imply that $\bar H_0(z) = G_0(u(z)) 4^{-A}$ and $\bar H_1(z) = G_1(u(z)) 4^B$. Then $\Phi(A,B) = G_1^{-1}G_0 = 4^B \bar H_1^{-1}\bar H_0 4^A = 4^B \Psi_2(A|2B,-2(A+B))4^A$. \qed \begin{remark} As before, relation (\ref{NS:2}) is the analogue of $f(\tau_1,\tau_2^4) = \tau_2^{8\rho_2} f(\tau_1^2,\tau_2^2) \tau_1^{4\rho_2} (\tau_1\tau_2^2)^{-4\rho_2}$ in Theorem 2.2 of \cite{NS}. \end{remark} \subsection{The relation $\Psi_2(t_{12} + t_{34}|t_{23},t_{14}) = 2^{Z-t_{23}} \Phi_{1/2}^{1,23,4} \Phi^{1,2,3} (\Phi^{12,3,4})^{-1}$} Set $$ \Phi_{1/2}(A,B) := G_{1/2}^{-1} G_0, $$ where $G_{1/2}$ is the solution of (\ref{KZ}) such that $G_{1/2}(1/2) = 1$. 
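The quadratic substitution $u = 4z/(z+1)^2$ used in the proof of (\ref{NS:2}) can be checked in the same way; the following sympy sketch (an added verification aid, not in the original) also confirms the two expansions at $z\to 0$ and $z\to 1$:

```python
# Verify that u = 4z/(z+1)^2 pulls (A/u + B/(u-1)) du back to
# (A/z + 2B/(z-1) - 2(A+B)/(z+1)) dz, and that u ~ 4z near 0
# while 1-u ~ (1-z)^2/4 near 1 (giving the factors 4^{-A} and 4^B).
import sympy as sp

z, a, b = sp.symbols('z a b')
u = 4*z/(z + 1)**2
du = sp.diff(u, z)

assert sp.simplify(du/u - (1/z - 2/(z + 1))) == 0
assert sp.simplify(du/(u - 1) - (2/(z - 1) - 2/(z + 1))) == 0
assert sp.simplify(a*du/u + b*du/(u - 1)
                   - (a/z + 2*b/(z - 1) - 2*(a + b)/(z + 1))) == 0

# asymptotics used to normalize the solutions
assert sp.limit(u/z, z, 0) == 4
assert sp.limit((1 - u)/(1 - z)**2, z, 1) == sp.Rational(1, 4)
print("ok")
```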
Then $G_0(1-z) = \theta(G_1(z))$ and $G_{1/2}(1-z) = \theta(G_{1/2}(z))$, where $\theta\in \operatorname{Aut}(\hat{\mathfrak{f}}_2)$ is the exchange of $A$ and $B$, so $G_1^{-1}G_{1/2} = \theta(\Phi_{1/2}(A,B)^{-1}) = \Phi_{1/2}(B,A)^{-1}$. Therefore $$ \Phi(A,B) = \Phi_{1/2}(B,A)^{-1}\Phi_{1/2}(A,B). $$ Recall that ${\mathfrak{t}}_4$ is the Lie algebra with generators $t_{ij}$, $1\leq i\neq j\leq 4$, with relations $t_{ij} = t_{ji}$ ($i\neq j$), $[t_{ij} + t_{ik},t_{jk}] = 0$ ($i,j,k$ different) and $[t_{ij},t_{kl}] = 0$ ($i,j,k,l$ different). \begin{proposition} We have \begin{equation} \label{eq:Phi} \Psi_2(t_{12} + t_{34}|t_{23},t_{14}) = 2^{Z-t_{23}} \Phi_{1/2}^{1,23,4} \Phi^{1,2,3} (\Phi^{12,3,4})^{-1}. \end{equation} Here $Z = \sum_{i<j}t_{ij}$ is a generator of the center of ${\mathfrak{t}}_4$, $\Phi^{12,3,4} = \Phi(t_{13} + t_{23},t_{34})$, $\Phi^{1,2,3} = \Phi(t_{12},t_{23})$ and $\Phi^{1,23,4} = \Phi(t_{12} + t_{13},t_{24} + t_{34})$. \end{proposition} As in \cite{NS}, this equation implies the pentagon equation: one eliminates $\Psi_2$ between it and the equation obtained by applying the automorphism $t_{ij} \mapsto t_{5-i,5-j}$ of ${\mathfrak{t}}_4$. {\em Proof.} The system \begin{equation} \label{system} (d_z G)G^{-1} = ({{t_{12}}\over z} + {{t_{23}}\over{z-w}} + {{t_{24}}\over{z-1}})dz, \quad (d_w G)G^{-1} = ({{t_{13}}\over{w}} + {{t_{23}}\over{w-z}} + {{t_{34}}\over{w-1}})dw \end{equation} is known to be compatible. The pentagon identity is established by considering the pentagon of asymptotic zones $((0z)w)1$, $(0(zw))1$, $0((zw)1)$, $0(z(w1))$, $(0z)(w1)$ in the domain $\{(z,w)|0<z<w<1\}$ (\cite{Dr}). We will cut this pentagon into two quadrangles, one of which is $((0z)w)1$, $(0(zw))1$, $(z+w = 1, z \to 1/2^-)$, $(z+w = 1, z\to 0^+)$. Let $G_{((0z)w)1}$ be the solution which is $\simeq z^{t_{12}}w^{t_{13} + t_{23}}$ in the zone $((0z)w)1$ (i.e., $w\to 0$, $z/w\to 0$).
Let $G_{(0(zw))1}$ be the solution which is $\simeq (w-z)^{t_{23}} w^{t_{12} + t_{13}}$ in the zone $(0(zw))1$ (i.e., $w\to 0$, $z/w\to 1$). Let $G_{z+w = 1, z \to 1/2^-}$ be the solution which is $\simeq (1-2z)^{t_{23}}$ when $z\to 1/2^-$ and $z+w=1$. Let $G_{z+w=1,z\to 0^+}$ be the solution which is $\simeq z^{t_{12} + t_{34}}$ when $z\to 0^+$, $z+w = 1$. Then $G_{z+w=1,z\to 0^+} = G_{(0z)(w1)}$, where $G_{(0z)(w1)}$ is the solution which is $\simeq z^{t_{12}} (1-w)^{t_{34}}$ when $z\to 0^+$, $w\to 1^-$. We know from \cite{Dr} that $G_{(0(zw))1}^{-1} G_{((0z)w)1} = \Phi^{1,2,3}$, $G_{((0z)w)1}^{-1} G_{(0z)(w1)} = (\Phi^{12,3,4})^{-1}$. Let us compute $G_{(0(zw))1}^{-1}G_{z+w=1,z\to 1/2^-}$. If $G$ is a solution of (\ref{system}), set $\Gamma(z) := [(w-z)^{-t_{23}} G(z,w)]_{w=z}$. Then $$ (d\Gamma)\Gamma^{-1} = ({{t_{12}+t_{13}}\over{z}} + {{t_{24}+t_{34}}\over{z-1}})dz, $$ so $[(w-z)^{-t_{23}} G_{(0(zw))1}(z,w)]_{w=z} = G_0(z)^{1,23,4}$, $[(w-z)^{-t_{23}} G_{z+w=1,z\to 1/2^-}(z,w)]_{w=z} = G_{1/2}(z)^{1,23,4}$. Therefore $G_{(0(zw))1}^{-1}G_{z+w=1,z\to 1/2^-} = (\Phi_{1/2}^{-1})^{1,23,4}$. We now compute $G_{z+w=1,z\to 1/2^-}^{-1}G_{z+w=1,z\to 0^+}$. If $G$ is a solution of (\ref{system}), set $\Lambda(z) := G(z,1-z)$. Then $$ (d\Lambda)\Lambda^{-1} = ({{t_{12}+t_{34}}\over z} + {{t_{23}}\over{z-1/2}} + {{t_{13}+t_{24}}\over{z-1}})dz. $$ If we set $u:= z/(1-z)$, this equation is $$ (d\Lambda)\Lambda^{-1} = ({{t_{12}+t_{34}}\over u} + {{t_{23}}\over{u-1}} + {{t_{14}-Z}\over {u+1}})du. $$ Then the expansions $u\simeq z$ when $z\to 0$, $1-u \simeq 2(1-2z)$ when $z\to 1/2$ give $$ \Lambda_{z+w=1,z\to 0^+}(z) = \hat H_0(u(z)), \quad \Lambda_{z+w=1,z\to 1/2^-}(z) = \hat H_1(u(z)) 2^{-t_{23}}, $$ where $\hat H_0,\hat H_1$ are the analogues of $H_0,H_1$ for $(A,b_0,b_1) = (t_{12}+t_{34},t_{23},t_{14}-Z)$.
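The substitution $u = z/(1-z)$ rests on three scalar partial-fraction identities, together with the residue relation $t_{12}+t_{34}+t_{23}+t_{14}-Z = -(t_{13}+t_{24})$ at $z=1$; these can be checked symbolically. The following sympy sketch is an added verification aid, not part of the original proof (the Lie-algebra letters are replaced by commuting placeholders, which is harmless since the identities are linear in them):

```python
# Verify the pullbacks of du/u, du/(u-1), du/(u+1) for u = z/(1-z):
# the poles land at z = 0, 1/2, 1, matching the equation for Lambda.
import sympy as sp

z = sp.symbols('z')
u = z/(1 - z)
du = sp.diff(u, z)

assert sp.simplify(du/u - (1/z + 1/(1 - z))) == 0
assert sp.simplify(du/(u - 1) - (1/(z - sp.Rational(1, 2)) + 1/(1 - z))) == 0
assert sp.simplify(du/(u + 1) - 1/(1 - z)) == 0

# residue bookkeeping at z = 1: since 1/(1-z) = -1/(z-1), the coefficient
# (t12+t34) + t23 + (t14 - Z) must equal -(t13 + t24)
t12, t13, t14, t23, t24, t34 = sp.symbols('t12 t13 t14 t23 t24 t34')
Z = t12 + t13 + t14 + t23 + t24 + t34
assert sp.expand((t12 + t34) + t23 + (t14 - Z) + (t13 + t24)) == 0
print("ok")
```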
Then \begin{align*} & G_{z+w=1,z\to 1/2^-}^{-1} G_{z+w=1,z\to 0^+} = \Lambda_{z+w=1,z\to 1/2^-}^{-1} \Lambda_{z+w=1,z\to 0^+} = 2^{t_{23}}\hat H_1^{-1} \hat H_0 \\ & = 2^{t_{23}}\Psi_2(t_{12}+t_{34}|t_{23},t_{14}-Z) = 2^{t_{23}-Z}\Psi_2(t_{12}+t_{34}|t_{23},t_{14}). \end{align*} On the one hand, $$ G^{-1}_{(0(zw))1} G_{(0z)(w1)} = (G^{-1}_{(0(zw))1} G_{((0z)w)1}) (G^{-1}_{((0z)w)1} G_{(0z)(w1)}) = \Phi^{1,2,3} (\Phi^{-1})^{12,3,4}, $$ on the other hand \begin{align*} & G^{-1}_{(0(zw))1} G_{(0z)(w1)} = (G^{-1}_{(0(zw))1} G_{z+w=1,z\to 1/2^-}) (G^{-1}_{z+w=1,z\to 1/2^-} G_{(0z)(w1)}) \\ & = (\Phi_{1/2}^{-1})^{1,23,4} 2^{t_{23}-Z}\Psi_2(t_{12}+t_{34}|t_{23},t_{14}) . \end{align*} The result follows from the comparison of these equalities. \qed \begin{remark} Relation (\ref{eq:Phi}) is the analogue of the relation (III') in \cite{NS} $$ f(\tau_1\tau_3,\tau_2^2) = g(x_{45},x_{51})f(x_{12},x_{23})f(x_{34},x_{45}), $$ where $(\lambda,f)\in \hat{\mathbb Z}\times \hat F_2$ lies in the image of $\operatorname{Gal}(\bar{\mathbb Q}/{\mathbb Q})$ and $g(x,y)\in\hat F_2$ is defined by $g(y,x)^{-1}g(x,y) = f(x,y)$. This relation takes place in the uncolored mapping class group $\hat \Gamma_{0,[5]}$ of a surface of genus $0$ with $5$ marked points; here $\tau_i,x_{ij}$ are the images of the standard elements of the braid group with 5 strands $B_5$ under the morphism $B_5 \to \Gamma_{0,[5]}$ (recall that $\tau_i\tau_{i+1}\tau_i = \tau_{i+1}\tau_i\tau_{i+1}$, $\tau_i\tau_j = \tau_j\tau_i$ if $|i-j| \geq 2$, $x_{ij} = \tau_{j-1}...\tau_{i+1} \tau_i^2 (\tau_{j-1}...\tau_{i+1})^{-1}$ if $i<j$, $x_{ji} = x_{ij}$).
Indeed, $\tau_2^2 = x_{34}$, $(\tau_1\tau_3)^2 = x_{12}x_{34}$ and $(\tau_1\tau_3)\tau_2^2 (\tau_1\tau_3)^{-1} = x_{24}^{-1}x_{14}x_{24}$, so that $f(\tau_1\tau_3,\tau_2^2) = h(x_{12}x_{34}|x_{23},x_{24}^{-1}x_{14}x_{24})$, hence (III') is rewritten as $$ h(x_{12}x_{34}|x_{23},x_{24}^{-1}x_{14}x_{24}) = g(x_{45},x_{51})f(x_{12},x_{23})f(x_{34},x_{45}) $$ which is now an equation in the colored mapping class group $\Gamma_{0,5}$. Now $x_{45} = x_{12}x_{13}x_{23}$ and $x_{51} = x_{23}x_{24}x_{34}$, so $g(x_{45},x_{51}) = x_{23}^\alpha g(x_{12}x_{13},x_{24}x_{34})$, where $\alpha\in\hat{\mathbb Z}$, and $f(x_{34},x_{45}) = f(x_{34},x_{13}x_{23})$ (no $x_{12}^\alpha$ comes out since $f\in \hat F_2'$), and using $f(x,y) = f(y,x)^{-1}$ the equation is rewritten as \begin{equation} \label{final:pent} h(x_{12}x_{34}|x_{23},x_{24}^{-1}x_{14}x_{24}) = x_{23}^\alpha g(x_{12}x_{13},x_{24}x_{34}) f(x_{12},x_{23}) f(x_{13}x_{23},x_{34})^{-1}. \end{equation} This is an equality in the image of the morphism $\hat K_4 \to \hat\Gamma_{0,5}$, where $K_4$ is the pure braid group with $4$ strands. This image is the quotient of $\hat K_4$ by its center, generated by $x_{12}x_{13}x_{14}x_{23}x_{24}x_{34}$. So the image of (\ref{eq:Phi}) in $\operatorname{exp}(\hat{\mathfrak{t}}_4/{\mathbb C} Z)$ is the analogue of (\ref{final:pent}). The sense in which the relations of this section are analogous to relations in \cite{NS} can be made precise as in \cite{Dr}. \end{remark} \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{fact}[theorem]{Fact} \newtheorem{claim}[theorem]{Claim} \newtheorem*{main}{Main Theorem} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem{remark}{Remark} \newtheorem{step}{Step} \begin{abstract} We describe the class of $n$-variable polynomial functions that satisfy Acz\'el's bisymmetry property over an arbitrary integral domain of characteristic zero with identity. \end{abstract} \keywords{Acz\'el's bisymmetry, mediality, polynomial function, integral domain.} \subjclass[2010]{Primary 39B72; Secondary 13B25, 26B35} \maketitle \section{Introduction} Let $\mathcal{R}$ be an integral domain of characteristic zero (hence $\mathcal{R}$ is infinite) with identity and let $n\geqslant 1$ be an integer. In this paper we provide a complete description of all the $n$-variable polynomial functions over $\mathcal{R}$ that satisfy the (Acz\'el) bisymmetry property.
Recall that a function $f\colon\mathcal{R}^n\to\mathcal{R}$ is \emph{bisymmetric} if the $n^2$-variable mapping $$ \big(x_{11},\ldots,x_{1n};\ldots ;x_{n1},\ldots,x_{nn}\big)\mapsto f\big(f(x_{11},\ldots,x_{1n}),\ldots,f(x_{n1},\ldots,x_{nn})\big) $$ does not change if we replace every $x_{ij}$ by $x_{ji}$. The bisymmetry property for $n$-variable real functions goes back to Acz\'el \cite{Acz46,Acz48}. It has been investigated since then in the theory of functional equations by several authors, especially in characterizations of mean functions and some of their extensions (see, e.g., \cite{AczDho89,CouLeh11,FodMar97,GraMarMesPap09}). This property is also studied in algebra where it is called \emph{mediality}. For instance, an algebra $(A,f)$ where $f$ is a bisymmetric binary operation is called a \emph{medial groupoid} (see, e.g., \cite{JezKep83,JezKep83b,Sou71}). We now state our main result, which provides a description of the possible bisymmetric polynomial functions from $\mathcal{R}^n$ to $\mathcal{R}$. Let $\mathrm{Frac}(\mathcal{R})$ denote the fraction field of $\mathcal{R}$ and let $\mathbb{N}$ be the set of nonnegative integers. For any $n$-tuple $\mathbf{x} = (x_1,\ldots,x_n)$, we set $|\mathbf{x}|=\sum_{i=1}^nx_i$. \begin{main} A polynomial function $P\colon \mathcal{R}^n\to \mathcal{R}$ is bisymmetric if and only if it is \begin{itemize} \item[$(i)$] univariate, or \item[$(ii)$] of degree $\leqslant 1$, that is, of the form $$ P(\mathbf{x})=a_0+\sum_{i=1}^n a_i\, x_i\, , $$ where $a_i\in\mathcal{R}$ for $i=0,\ldots,n$, or \item[$(iii)$] of the form $$ P(\mathbf{x})=a\prod_{i=1}^n(x_i+b)^{\alpha_i}-b\, , $$ where $a\in \mathcal{R}$, $b\in \mathrm{Frac}(\mathcal{R})$, and $\boldsymbol{\alpha}\in \mathbb{N}^n$ satisfy $a b^k\in \mathcal{R}$ for $k=1,\ldots,|\boldsymbol{\alpha}|-1$ and $a b^{|\boldsymbol{\alpha}|}-b\in \mathcal{R}$. 
\end{itemize} \end{main} The following example, borrowed from \cite{MarMat11}, gives a polynomial function of class $(iii)$ for which $b\notin\mathcal{R}$. \begin{example} The third-degree polynomial function $P\colon\mathbb{Z}^3\to\mathbb{Z}$ defined on the ring $\mathbb{Z}$ of integers by \[ P(x_1,x_2,x_3)=9\, x_1x_2x_3+3\, (x_1x_2+x_2x_3+x_3x_1)+x_1+x_2+x_3 \] is bisymmetric since it is the restriction to $\mathbb{Z}$ of the bisymmetric polynomial function $Q\colon\mathbb{Q}^3\to\mathbb{Q}$ defined on the field $\mathbb{Q}$ of rationals by \[ Q(x_1,x_2,x_3)=9\prod_{i=1}^3\Big(x_i+\frac{1}{3}\Big)-\frac{1}{3}\, . \] \end{example} Since polynomial functions are among the most basic functions, the problem of describing the class of bisymmetric polynomial functions is quite natural. On this subject it is noteworthy that a description of the class of bisymmetric lattice polynomial functions over bounded chains and more generally over distributive lattices has been recently obtained \cite{CouLeh11,BehCouKeaLehSze11} (there bisymmetry is called self-commutation), where a lattice polynomial function is a function representable by combinations of variables and constants using the fundamental lattice operations $\wedge$ and $\vee$. From the Main Theorem we can derive the following test to determine whether a non-univariate polynomial function $P\colon \mathcal{R}^n\to \mathcal{R}$ of degree $p\geqslant 2$ is bisymmetric. For $k\in\{p-1,p\}$, let $P_k$ be the homogeneous polynomial function obtained from $P$ by considering the terms of degree $k$ only. Then $P$ is bisymmetric if and only if $P_p$ is a monomial and $P_p(\mathbf{x})=P(\mathbf{x}-b\mathbf{1})+b$, where $\mathbf{1}=(1,\ldots,1)$ and $b=P_{p-1}(\mathbf{1})/(p\, P_p(\mathbf{1}))$. Note that the Main Theorem does not hold for an infinite integral domain $\mathcal{R}$ of characteristic $r>0$. As a counterexample, the bivariate polynomial function $P(x_1,x_2)=x_1^r+x_2^r$ is bisymmetric.
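The example above, as well as the monomial test just described, can be verified symbolically. The following sympy sketch (an addition for illustration, not part of the original text) checks that $P$ is bisymmetric and that $P(\mathbf{x}-b\mathbf{1})+b = 9\,x_1x_2x_3$ with $b = P_{p-1}(\mathbf{1})/(p\, P_p(\mathbf{1})) = 9/27 = 1/3$:

```python
# Check the example P(x1,x2,x3) = 9 x1x2x3 + 3(x1x2 + x2x3 + x3x1) + x1+x2+x3:
# (a) bisymmetry over a generic 3x3 matrix, (b) the monomial test with b = 1/3.
import sympy as sp

def P(x1, x2, x3):
    return 9*x1*x2*x3 + 3*(x1*x2 + x2*x3 + x3*x1) + x1 + x2 + x3

xs = sp.symbols('x1:10')            # nine entries of a generic 3x3 matrix
M = sp.Matrix(3, 3, xs)
rows = P(*[P(*M.row(i)) for i in range(3)])
cols = P(*[P(*M.col(j)) for j in range(3)])
assert sp.expand(rows - cols) == 0  # (a) rows vs columns agree

x1, x2, x3 = sp.symbols('x1 x2 x3')
b = sp.Rational(1, 3)               # b = P_2(1,1,1)/(3 * P_3(1,1,1)) = 9/27
assert sp.expand(P(x1 - b, x2 - b, x3 - b) + b - 9*x1*x2*x3) == 0  # (b)
print("ok")
```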
In the next section we provide the proof of the Main Theorem, assuming first that $\mathcal{R}$ is a field and then an integral domain. \section{Technicalities and proof of the Main Theorem} We observe that the definition of $\mathcal{R}$ enables us to identify the ring $\mathcal{R}[x_1,\ldots,x_n]$ of polynomials of $n$ indeterminates over $\mathcal{R}$ with the ring of polynomial functions of $n$ variables from $\mathcal{R}^n$ to $\mathcal{R}$. It is a straightforward exercise to show that the $n$-variable polynomial functions given in the Main Theorem are bisymmetric. We now show that no other $n$-variable polynomial function is bisymmetric. We first consider the special case when $\mathcal{R}$ is a field. We then prove the Main Theorem in the general case (i.e., when $\mathcal{R}$ is an integral domain of characteristic zero with identity). Unless stated otherwise, we henceforth assume that $\mathcal{R}$ is a field of characteristic zero. Let $p\in\mathbb{N}$ and let $P\colon \mathcal{R}^n\to\mathcal{R}$ be a polynomial function of degree $p$. Thus $P$ can be written in the form $$ P(\mathbf{x})=\sum_{|\boldsymbol{\alpha}|\leqslant p}c_{\boldsymbol{\alpha}}\,\mathbf{x}^{\boldsymbol{\alpha}},\quad \mbox{with}~ \mathbf{x}^{\boldsymbol{\alpha}}=\prod_{i=1}^nx_i^{\alpha_i}\, , $$ where the sum is taken over all $\boldsymbol{\alpha}\in\mathbb{N}^n$ such that $|\boldsymbol{\alpha}|\leqslant p$. The following lemma, which makes use of formal derivatives of polynomial functions, will be useful as we continue. 
\begin{lemma} For every polynomial function $B\colon\mathcal{R}^n\to\mathcal{R}$ of degree $p$ and every $\mathbf{x}_0,\mathbf{y}_0\in\mathcal{R}^n$, we have \begin{equation}\label{eq:eq4} B(\mathbf{x}_0+\mathbf{y}_0)=\sum_{|\boldsymbol{\alpha}|\leqslant p}\frac{\mathbf{y}_0^{\boldsymbol{\alpha}}}{\boldsymbol{\alpha}!}\,(\partial^{\boldsymbol{\alpha}}_\mathbf{x} B)(\mathbf{x}_0)\, , \end{equation} where $\partial_{\mathbf{x}}^{\boldsymbol{\alpha}}=\partial_{x_1}^{\alpha_1}\cdots\,\partial_{x_n}^{\alpha_n}$ and $\boldsymbol{\alpha}!=\alpha_1!\cdots\,\alpha_n!$. \end{lemma} \begin{proof} It is enough to prove the result for monomial functions since both sides of (\ref{eq:eq4}) are additive on the function $B$. We then observe that for a monomial function $B(\mathbf{x})=c\,\mathbf{x}^{\boldsymbol{\beta}}$ the identity (\ref{eq:eq4}) reduces to the multi-binomial theorem. \end{proof} As we will see, it is useful to decompose $P$ into its homogeneous components, that is, $P=\sum_{k=0}^pP_k$, where $$ P_k(\mathbf{x})=\sum_{|\boldsymbol{\alpha}|=k}c_{\boldsymbol{\alpha}}\,\mathbf{x}^{\boldsymbol{\alpha}} $$ is the unique homogeneous component of degree $k$ of $P$. In this paper the homogeneous component of degree $k$ of a polynomial function $R$ will often be denoted by $[R]_k$. Since $P_p\neq 0$, the polynomial function $Q=P-P_p$, that is $$ Q(\mathbf{x})=\sum_{|\boldsymbol{\alpha}|< p}c_{\boldsymbol{\alpha}}\,\mathbf{x}^{\boldsymbol{\alpha}}, $$ is of degree $q<p$ and its homogeneous component $[Q]_q$ of degree $q$ is $P_q$. We now assume that $P$ is a bisymmetric polynomial function. 
This means that the polynomial identity \begin{equation}\label{eq:bisym} P\big(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)\big)-P\big(P(\mathbf{c}_1),\ldots,P(\mathbf{c}_n)\big)=0 \end{equation} holds for every $n\times n$ matrix \begin{equation}\label{eq:matrix} X= \begin{pmatrix} x_{11} & \cdots & x_{1n}\\ \vdots & \ddots & \vdots\\ x_{n1} & \cdots & x_{nn} \end{pmatrix} \in \mathcal{R}^n_n\, , \end{equation} where $\mathbf{r}_i=(x_{i1},\ldots,x_{in})$ and $\mathbf{c}_j= (x_{1j},\ldots,x_{nj})$ denote its $i$th row and $j$th column, respectively. Since all the polynomial functions of degree $\leqslant 1$ are bisymmetric, we may (and henceforth do) assume that $p\geqslant 2$. From the decomposition $P=P_p+Q$ it follows that $$ P\big(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)\big)=P_p\big(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)\big)+Q\big(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)\big), $$ where $Q(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n))$ is of degree $p\, q$. To obtain necessary conditions for $P$ to be bisymmetric, we will equate the homogeneous components of the same degree $>p\, q$ of both sides of (\ref{eq:bisym}). By the previous observation this amounts to considering the equation \begin{equation}\label{eq:bisympp} \big[P_p\big(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)\big)-P_p\big(P(\mathbf{c}_1),\ldots,P(\mathbf{c}_n)\big)\big]_d=0\, ,\quad \mbox{for $~p\, q< d\leqslant p^2$}. \end{equation} By applying (\ref{eq:eq4}) to the polynomial function $P_p$ and the $n$-tuples $$ \mathbf{x}_0=(P_p(\mathbf{r}_1),\ldots,P_p(\mathbf{r}_n))\quad\mbox{and}\quad\mathbf{y}_0=(Q(\mathbf{r}_1),\ldots,Q(\mathbf{r}_n)), $$ we obtain \begin{equation}\label{eq:eq6} P_p(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n)) ~=~ \sum_{|\boldsymbol{\alpha}|\leqslant p}\frac{\mathbf{y}_0^{\boldsymbol{\alpha}}}{\boldsymbol{\alpha}!}\,\partial^{\boldsymbol{\alpha}}_\mathbf{x} P_p(\mathbf{x}_0) \end{equation} and similarly for $P_p(P(\mathbf{c}_1),\ldots,P(\mathbf{c}_n))$. 
We then observe that in the sum of (\ref{eq:eq6}) the term corresponding to a fixed $\boldsymbol{\alpha}$ is either zero or of degree $$q\,|\boldsymbol{\alpha}|+(p-|\boldsymbol{\alpha}|)\,p=p^2-(p-q)\,|\boldsymbol{\alpha}|$$ and its homogeneous component of highest degree is obtained by ignoring the components of degrees $<q$ in $Q$, that is, by replacing $\mathbf{y}_0$ by $(P_q(\mathbf{r}_1),\ldots,P_q(\mathbf{r}_n))$. Using (\ref{eq:bisympp}) with $d=p^2$, which leads us to consider the terms in (\ref{eq:eq6}) for which $|\boldsymbol{\alpha}|=0$, we obtain \begin{equation}\label{eq:Pp} P_p(P_p(\mathbf{r}_1),\ldots,P_p(\mathbf{r}_n))-P_p(P_p(\mathbf{c}_1),\ldots,P_p(\mathbf{c}_n))=0. \end{equation} Thus, we have proved the following claim. \begin{claim}\label{claim:111} The polynomial function $P_p$ is bisymmetric. \end{claim} We now show that $P_p$ is a monomial function. \begin{proposition}\label{prop:homog} Let $H\colon\mathcal{R}^n\to\mathcal{R}$ be a bisymmetric polynomial function of degree $p\geqslant 2$. If $H$ is homogeneous, then it is a monomial function. \end{proposition} \begin{proof} Consider a bisymmetric homogeneous polynomial $H\colon\mathcal{R}^n\to\mathcal{R}$ of degree $p\geqslant 2$. There is nothing to prove if $H$ depends on one variable only. Otherwise, assume for the sake of a contradiction that $H$ is the sum of at least two monomials of degree $p$, that is, \[ H(\mathbf{x})=a\, \mathbf{x}^{\boldsymbol{\alpha}}+b\, \mathbf{x}^{\boldsymbol{\beta}}+\sum_{|\boldsymbol{\gamma}|=p}c_{\boldsymbol{\gamma}}\,\mathbf{x}^{\boldsymbol{\gamma}}, \] where $a\,b\neq 0$ and $|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=p$. Using the lexicographic order $\preccurlyeq$ over $\mathbb{N}^n$, we can assume that $\boldsymbol{\alpha}\succ\boldsymbol{\beta}\succ\boldsymbol{\gamma}$. 
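The contradiction derived below can be previewed on a small instance (a sympy illustration added here, not part of the proof): for $H(x_1,x_2) = x_1^2 + x_1x_2$, so $p=2$, $a=1$, $\boldsymbol{\alpha}=(2,0)$, $\boldsymbol{\beta}=(1,1)$, the cross term with exponent $(p-1)\,\boldsymbol{\alpha}+\boldsymbol{\beta} = (3,1)$ survives in $H(\mathbf{x})^p$ but is absent from $a^{p-1}H(\mathbf{x}^p)$:

```python
# For H = x1^2 + x1*x2 (two monomials, p = 2, leading coefficient a = 1),
# H(x)^p and a^(p-1) * H(x^p) differ exactly in the cross term 2*x1^3*x2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
H = x1**2 + x1*x2
lhs = sp.expand(H**2)                              # H(x)^p
rhs = sp.expand(H.subs({x1: x1**2, x2: x2**2}))    # a^{p-1} H(x^p), a = 1

delta = sp.expand(lhs - rhs)
assert delta == 2*x1**3*x2   # the (3,1) cross term, as in the proof below
print("ok")
```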
Applying the bisymmetry property of $H$ to the $n\times n$ matrix whose $(i,j)$-entry is $x_iy_j$, we obtain \[ H(\mathbf{x})^p\, H(\mathbf{y}^p)=H(\mathbf{y})^p\, H(\mathbf{x}^p), \] where $\mathbf{x}^p=(x_1^p,\ldots,x_n^p)$. Regarding this equality as a polynomial identity in $\mathbf{y}$ and then equating the coefficients of its monomial terms with exponent $p\,\boldsymbol{\alpha}$, we obtain \begin{equation}\label{eq:sdf79} H(\mathbf{x})^p= a^{p-1}\, H(\mathbf{x}^p). \end{equation} Since $\mathcal{R}$ is of characteristic zero, there is a nonzero monomial term with exponent $(p-1)\,\boldsymbol{\alpha}+\boldsymbol{\beta}$ in the left-hand side of (\ref{eq:sdf79}) while there is no such term in the right-hand side since $p\,\boldsymbol{\alpha}\succ (p-1)\,\boldsymbol{\alpha}+\boldsymbol{\beta}\succ p\,\boldsymbol{\beta}$ (since $p\geqslant 2$). Hence a contradiction. \end{proof} The next claim follows immediately from Proposition~\ref{prop:homog}. \begin{claim}\label{claim:homog} $P_p$ is a monomial function. \end{claim} By Claim~\ref{claim:homog} we can (and henceforth do) assume that there exist $c\in\mathcal{R}\setminus\{0\}$ and $\boldsymbol{\gamma}\in\mathbb{N}^n$, with $|\boldsymbol{\gamma}|=p$, such that \begin{equation}\label{eq:8sadsf} P_p(\mathbf{x})=c\,\mathbf{x}^{\boldsymbol{\gamma}}. \end{equation} A polynomial function $F\colon\mathcal{R}^n\to\mathcal{R}$ is said to \emph{depend on} its $i$th variable $x_i$ (or $x_i$ is \emph{essential} in $F$) if $\partial_{x_i}F\neq 0$. The following claim shows that $P_p$ determines the essential variables of $P$. \begin{claim}\label{claim:8243} If $P_p$ does not depend on the variable $x_j$, then $P$ does not depend on $x_j$. \end{claim} \begin{proof} Suppose that $\partial_{x_j}P_p=0$ and fix $i\in\{1,\ldots,n\}$, $i\neq j$, such that $\partial_{x_i}P_p\not=0$. 
By taking the derivative of both sides of (\ref{eq:bisym}) with respect to $x_{ij}$, the $(i,j)$-entry of the matrix $X$ in (\ref{eq:matrix}), we obtain \begin{equation}\label{eq:sd7f5} (\partial_{x_i}P)(P(\mathbf{r}_1),\ldots,P(\mathbf{r}_n))(\partial_{x_j}P)(\mathbf{r}_i)=(\partial_{x_j}P)(P(\mathbf{c}_1),\ldots,P(\mathbf{c}_n))(\partial_{x_i}P)(\mathbf{c}_j). \end{equation} Suppose for the sake of a contradiction that $\partial_{x_j}P\neq 0$. Thus, neither side of (\ref{eq:sd7f5}) is the zero polynomial. Let $R_j$ be the homogeneous component of $\partial_{x_j}P$ of highest degree and denote its degree by $r$. Since $P_p$ does not depend on $x_j$, we must have $r< p-1$. Then the homogeneous component of highest degree of the left-hand side in (\ref{eq:sd7f5}) is given by \[ (\partial_{x_i}P_p)(P_p(\mathbf{r}_1),\ldots,P_p(\mathbf{r}_n))\,R_j(\mathbf{r}_i) \] and is of degree $p(p-1)+r$. But the right-hand side in (\ref{eq:sd7f5}) is of degree at most $rp+p-1=(r+1)(p-1)+r<p(p-1)+r$, since $r< p-1$ and $p\geqslant 2$. Hence a contradiction. Therefore $\partial_{x_j}P=0$. \end{proof} We now give an explicit expression for $P_q=[P-P_p]_q$ in terms of $P_p$. We first present an equation that is satisfied by $P_q$. \begin{claim}\label{claim:8s7f} $P_q$ satisfies the equation \begin{equation}\label{eqbeta1} \sum_{i=1}^n P_q({\bf r}_i)(\partial_{x_i} P_p)(P_p({\bf r}_1),\ldots,P_p({\bf r}_n))= \sum_{i=1}^n P_q({\bf c}_i)(\partial_{x_i} P_p)(P_p({\bf c}_1),\ldots,P_p({\bf c}_n)) \end{equation} for every matrix $X$ as defined in (\ref{eq:matrix}). \end{claim} \begin{proof} By (\ref{eq:Pp}) and (\ref{eq:8sadsf}) we see that the left-hand side of (\ref{eq:bisympp}) for $d=p^2$ is zero. Therefore, the highest degree terms in the sum of (\ref{eq:eq6}) are of degree $p^2-(p-q)>p\, q$ (because $(p-1)(p-q)>0$) and correspond to those $\boldsymbol{\alpha}\in\mathbb{N}^n$ for which $|\boldsymbol{\alpha}|=1$. 
Collecting these terms and then considering only the homogeneous component of highest degree (that is, replacing $Q$ by $P_q$), we see that the identity (\ref{eq:bisympp}) for $d=p^2-(p-q)$ is precisely (\ref{eqbeta1}). \end{proof} \begin{claim}\label{claim:equiv} We have \begin{equation}\label{Pqform} P_q(\mathbf{x})=\frac{P_q(\boldsymbol{1})}{c\, p}\, P_p(\mathbf{x})\,\sum\limits_{j=1}^n\frac{\gamma_j}{x_j^{p-q}}\, . \end{equation} Moreover, $P_q=0$ if there exists $j\in\{1,\ldots,n\}$ such that $0<\gamma_j < p-q$. \end{claim} \begin{proof} Considering Eq.~(\ref{eqbeta1}) for a matrix $X$ such that ${\mathbf r}_i=\mathbf{x}$ for $i=1,\ldots,n$, we obtain \[ c\, p\, P_q(\mathbf{x})\, P_p(\mathbf{x})^{p-1}=P_q(\boldsymbol{1})\sum_{i=1}^nx_i^q(\partial_{x_i}P_p)(c\, x_1^p,\ldots,c\, x_n^p). \] Since $\partial_{x_i}P_p(\mathbf{x})=\gamma_i\, P_p(\mathbf{x})/x_i$, the previous equation becomes \begin{equation}\label{eq:gfhfs456} c\, p\, P_q(\mathbf{x})\, P_p(\mathbf{x})^{p-1}=P_q(\boldsymbol{1})\,P_p(\mathbf{x})^p\,\sum_{i=1}^n \frac{\gamma_i}{x_i^{p-q}} \end{equation} from which Eq.~(\ref{Pqform}) follows. Now suppose that $P_q\neq 0$ and let $j\in\{1,\ldots,n\}$. Comparing the lowest degrees in $x_j$ of both sides of (\ref{eq:gfhfs456}), we obtain $$ (p-1)\,\gamma_j\leqslant \begin{cases} p\,\gamma_j-p+q\, , & \mbox{if $\gamma_j\neq 0$},\\ p\,\gamma_j\, , & \mbox{if $\gamma_j= 0$}. \end{cases} $$ Therefore, we must have $\gamma_j=0$ or $\gamma_j\geqslant p-q$, which ensures that the right-hand side of (\ref{Pqform}) is a polynomial. \end{proof} If $\varphi\colon\mathcal{R}\to\mathcal{R}$ is a bijection, we can associate with every function $f\colon\mathcal{R}^n\to\mathcal{R}$ its \emph{conjugate} $\varphi(f)\colon\mathcal{R}^n\to\mathcal{R}$ defined by \[ \varphi(f)(x_1,\ldots,x_n)=\varphi^{-1}\big(f(\varphi(x_1),\ldots,\varphi(x_n))\big). \] It is clear that $f$ is bisymmetric if and only if so is $\varphi(f)$. We then have the following fact. 
\begin{fact}\label{prop:conj} The class of $n$-variable bisymmetric functions is stable under the action of conjugation. \end{fact} Since the Main Theorem involves polynomial functions over a ring, we will only consider conjugations given by translations $\varphi_b(x)=x+b$. We now show that it is always possible to conjugate $P$ with an appropriate translation $\varphi_b$ to eliminate the terms of degree $p-1$ of the resulting polynomial function $\varphi_b(P)$. \begin{claim}\label{claimcor1} There exists $b\in \mathcal{R}$ such that $\varphi_b(P)$ has no term of degree $p-1$. \end{claim} \begin{proof} If $q<p-1$, we take $b=0$. If $q=p-1$, then using (\ref{eq:eq4}) with $\mathbf{y}_0=b\mathbf{1}$, we get $$ \big[\varphi_b(P)\big]_{p-1}=P_{p-1}+b\, \sum_{i=1}^n\partial_{x_i}P_p\, . $$ On the other hand, by (\ref{Pqform}) we have \[ P_{p-1}=\frac{P_{p-1}(\boldsymbol{1})}{c\, p}\,\sum_{i=1}^n\partial_{x_i}P_p\, . \] It is then enough to choose $b=-P_{p-1}(\boldsymbol{1})/(c\, p)$ and the result follows. \end{proof} We can now prove the Main Theorem for polynomial functions of degree $\leqslant 2$. \begin{proposition}\label{prop:67sfda} The Main Theorem is true when $\mathcal{R}$ is a field of characteristic zero and $P$ is a polynomial function of degree $\leqslant 2$. \end{proposition} \begin{proof} Let $P$ be a bisymmetric polynomial function of degree $p\leqslant 2$. If $p\leqslant 1$, then $P$ is in class $(ii)$ of the Main Theorem. If $p=2$, then by Claim~\ref{claimcor1} we see that $P$ is (up to conjugation) of the form $P(\mathbf{x})=c_2\, x_i\, x_j+c_0$. If $i=j$, then by Claim~\ref{claim:8243} we see that $P$ is a univariate polynomial function, which corresponds to the class $(i)$. If $i\not=j$, then by Claim~\ref{claim:equiv} we have $c_0=0$ and hence $P$ is a monomial (up to conjugation). \end{proof} By Proposition~\ref{prop:67sfda} we can henceforth assume that $p\geqslant 3$. We also assume that $P$ is a bivariate polynomial function.
The general case will be proved by induction on the number of essential variables of $P$. \begin{proposition}\label{prop:s8df6} The Main Theorem is true when $\mathcal{R}$ is a field of characteristic zero and $P$ is a bivariate polynomial function. \end{proposition} \begin{proof} Let $P$ be a bisymmetric bivariate polynomial function of degree $p\geqslant 3$. We know that $P_p$ is of the form $P_p(x,y)=c\, x^{\gamma_1} y^{\gamma_2}$. If $\gamma_1\,\gamma_2=0$, then by Claim~\ref{claim:8243} we see that $P$ is a univariate polynomial function, which corresponds to the class $(i)$. Conjugating $P$, if necessary, we may assume that $P_{p-1}=0$ (by Claim~\ref{claimcor1}) and it is then enough to prove that $P=P_p$ (i.e., $P_q=0$). If $\gamma_1=1$ or $\gamma_2=1$, then the result follows immediately from Claim~\ref{claim:equiv} since $p-q\geqslant 2$. We may therefore assume that $\gamma_1\geqslant 2$ and $\gamma_2\geqslant 2$. We now prove that $P=P_p$ in three steps. \begin{step}\label{Claim:1} $P(x,y)$ is of degree $\leqslant\gamma_1$ in $x$ and of degree $\leqslant\gamma_2$ in $y$. \end{step} \begin{proof} We prove by induction on $r\in\{0,\ldots,p-1\}$ that $P_{p-r}(x,y)$ is of degree $\leqslant\gamma_1$ in $x$ and of degree $\leqslant\gamma_2$ in $y$. The result is true by our assumptions for $r=0$ and $r=1$ and is obvious for $r=p$. Considering Eq.~(\ref{eq:bisympp}) for $d=p^2-r>p\, q$, with $\mathbf{r}_1=\mathbf{r}_2=(x,y)$, we obtain \begin{equation}\label{eq:ident1} \left[P(x,y)^p\right]_{p^2-r}=\left[P(x,x)^{\gamma_1}\, P(y,y)^{\gamma_2}\right]_{p^2-r}\, . \end{equation} Clearly, the right-hand side of (\ref{eq:ident1}) is a polynomial function of degree $\leqslant p\,\gamma_1$ in $x$ and $\leqslant p\,\gamma_2$ in $y$.
Using the multinomial theorem, the left-hand side of (\ref{eq:ident1}) becomes \[ \left[P(x,y)^p\right]_{p^2-r}=\left[\left(\sum_{k=0}^pP_{p-k}(x,y)\right)^p\right]_{p^2-r}=\sum_{\boldsymbol{\alpha}\in A_{p,r}} {p\choose\boldsymbol{\alpha}}\prod_{k=0}^pP_{p-k}(x,y)^{\alpha_k}\, , \] where \[ A_{p,r}=\Big\{\boldsymbol{\alpha}=(\alpha_0,\ldots,\alpha_p)\in\mathbb{N}^{p+1}:\sum_{k=0}^pk\, \alpha_k=r,\, |\boldsymbol{\alpha}|=p\Big\}. \] Observing that for every $\boldsymbol{\alpha}\in A_{p,r}$ we have $\alpha_k=0$ for $k>r$ and $\alpha_r\not=0$ only if $\alpha_r=1$ and $\alpha_0=p-1$, we can rewrite (\ref{eq:ident1}) as \[ p\, P_p(x,y)^{p-1}\, P_{p-r}(x,y)=\left[P(x,x)^{\gamma_1} P(y,y)^{\gamma_2}\right]_{p^2-r}-\sum_{\textstyle{\boldsymbol{\alpha}\in A_{p,r}\atop \alpha_r=\cdots =\alpha_p=0}} {p \choose \boldsymbol{\alpha}}\prod_{k=0}^{r-1}P_{p-k}(x,y)^{\alpha_k}\, . \] By induction hypothesis, the right-hand side is of degree $\leqslant p\,\gamma_1$ in $x$ and of degree $\leqslant p\,\gamma_2$ in $y$. The result then follows by analyzing the highest degree terms in $x$ and $y$ of the left-hand side. \end{proof} \begin{step}\label{step2} $P(x,y)$ factorizes into a product $P(x,y)=U(x)\, V(y).$ \end{step} \begin{proof} By Step~\ref{Claim:1}, we can write \[ P(x,y)=\sum_{k=0}^{\gamma_1}x^k\,V_k(y)\, , \] where $V_k$ is of degree $\leqslant\gamma_2$ and $V_{\gamma_1}(y)=\sum_{j=0}^{\gamma_2}c_{{\gamma_2}-j}\, y^j$, with $c_0=c\not=0$ and $c_1=0$ (since $P_{p-1}=0$). Equating the terms of degree $\gamma_1^2$ in $z$ in the identity \[ P(P(z,t),P(x,y))=P(P(z,x),P(t,y))\, , \] we obtain $$ V_{\gamma_1}(t)^{\gamma_1}\,V_{\gamma_1}(P(x,y))=V_{\gamma_1}(x)^{\gamma_1}\,V_{\gamma_1}(P(t,y)). $$ Equating now the terms of degree $\gamma_1\gamma_2$ in $t$ in the latter identity, we obtain \begin{equation}\label{eq:ident2} c^{\gamma_1}\, V_{\gamma_1}(P(x,y))=c\, V_{\gamma_1}(x)^{\gamma_1}\, V_{\gamma_1}(y)^{\gamma_2}\, . 
\end{equation} We now show by induction on $r\in\{0,\ldots,\gamma_1\}$ that every polynomial function $V_{\gamma_1-r}$ is a multiple of $V_{\gamma_1}$ (the case $r=0$ is trivial), which is enough to prove the result. To do so, we equate the terms of degree $\gamma_1\gamma_2-r$ in $x$ in (\ref{eq:ident2}) (by using the explicit form of $V_{\gamma_1}$ in the left-hand side). Note that terms with such a degree in $x$ can appear in the expansion of $V_{\gamma_1}(P(x,y))$ only when $P(x,y)$ is raised to the highest power $\gamma_2$ (taking into account the condition $c_1 = 0$ when $r =\gamma_1$). Thus, we obtain $$ c^{\gamma_1+1}\,\left[\left(\sum_{k=0}^{\gamma_1}x^{\gamma_1-k}\, V_{\gamma_1-k}(y)\right)^{\gamma_2}\right]_{\gamma_1\gamma_2-r}=c\,[V_{\gamma_1}(x)^{\gamma_1}]_{\gamma_1\gamma_2-r}V_{\gamma_1}(y)^{\gamma_2}\, , $$ (here the notation $[\cdot]_{\gamma_1\gamma_2-r}$ concerns only the degree in $x$). By computing the left-hand side (using the multinomial theorem as in the proof of Step~\ref{Claim:1}) and using the induction on $r$, we finally obtain an identity of the form $$ a\, V_{\gamma_1}(y)^{\gamma_2-1}\, V_{\gamma_1-r}(y)=a'\,V_{\gamma_1}(y)^{\gamma_2},\qquad a,a'\in\mathcal{R},\, a\neq 0, $$ from which the result immediately follows. \end{proof} \begin{step}\label{step3} $U$ and $V$ are monomial functions. \end{step} \begin{proof} Using (\ref{eq:ident2}) with $P(x,y)=U(x)\, V(y)$ and $V_{\gamma_1}=V$, we obtain \begin{equation}\label{eq:s8fd7af6} c^{\gamma_1}\,\sum_{j=0}^{\gamma_2}c_{\gamma_2-j}\,(U(x)\,V(y))^j=c\, V(x)^{\gamma_1}\, V(y)^{\gamma_2}. \end{equation} Equating the terms of degree $\gamma_2^2$ in $y$ in (\ref{eq:s8fd7af6}), we obtain \begin{equation}\label{eq:s8d7af6} c^{\gamma_1+\gamma_2+1}\, U(x)^{\gamma_2}=c^{\gamma_2+1}\, V(x)^{\gamma_1}\, . 
\end{equation} Therefore, (\ref{eq:s8fd7af6}) becomes $$ \sum_{j=0}^{\gamma_2-1}c_{\gamma_2-j}\,(U(x)\,V(y))^j=0, $$ which obviously implies $c_k=0$ for $k=1,\ldots,\gamma_2$, which in turn implies $V(x)=c\, x^{\gamma_2}$. Finally, from (\ref{eq:s8d7af6}) we obtain $U(x)=x^{\gamma_1}$. \end{proof} \noindent Steps~\ref{step2} and \ref{step3} together show that $P=P_p$, which establishes the proposition. \end{proof} Recall that the action of the symmetric group $\mathfrak{S}_n$ on functions from $\mathcal{R}^n$ to $\mathcal{R}$ is defined by \[ \sigma(f)(x_1,\ldots,x_n)=f(x_{\sigma(1)},\ldots,x_{\sigma(n)}),\qquad\sigma\in\mathfrak{S}_n. \] It is clear that $f$ is bisymmetric if and only if so is $\sigma(f)$. We then have the following fact. \begin{fact}\label{prop:sym} The class of $n$-variable bisymmetric functions is stable under the action of the symmetric group $\mathfrak{S}_n$. \end{fact} Consider also the following action of identification of variables. For $f\colon\mathcal{R}^n\to\mathcal{R}$ and $i<j\in [n]$ we define the function $I_{i,j}f\colon\mathcal{R}^{n-1}\to\mathcal{R}$ as \[ (I_{i,j}f)(x_1,\ldots,x_{n-1})=f(x_1,\ldots,x_{j-1},x_i,x_{j},\ldots,x_{n-1}). \] This action amounts to considering the restriction of $f$ to the ``subspace of equation $x_i=x_j$'' and then relabeling the variables. By Fact~\ref{prop:sym} it is enough to consider the identification of the first and second variables, that is, \[ (I_{1,2}f)(x_1,\ldots,x_{n-1})=f(x_1,x_1,x_2,\ldots,x_{n-1}). \] \begin{proposition}\label{prop:ident} The class of $n$-variable bisymmetric functions is stable under identification of variables. \end{proposition} \begin{proof} To see that $I_{1,2}f$ is bisymmetric, it is enough to apply the bisymmetry of $f$ to the $n\times n$ matrix \[ \begin{pmatrix} x_{1,1} & x_{1,1} & \cdots & x_{1,n-1}\\ x_{1,1} & x_{1,1} & \cdots & x_{1,n-1}\\ \vdots & \vdots & \ddots & \vdots\\ x_{n-1,1} & x_{n-1,1} & \cdots & x_{n-1,n-1} \end{pmatrix}\, . 
\] To see that $I_{i,j}f$ is bisymmetric, we can similarly consider the matrix whose rows $i$ and $j$ are identical and the same for the columns (or use Fact~\ref{prop:sym}). \end{proof} We now prove the Main Theorem by using both a simple induction on the number of essential variables of $P$ and the action of identification of variables. \begin{proof}[Proof of the Main Theorem when $\mathcal{R}$ is a field] We proceed by induction on the number of essential variables of $P$. By Proposition~\ref{prop:s8df6} the result holds when $P$ depends on $1$ or $2$ variables only. Let us assume that the result also holds when $P$ depends on $n-1$ variables ($n-1\geqslant 2$) and let us prove that it still holds when $P$ depends on $n$ variables. By Proposition~\ref{prop:67sfda} we may assume that $P$ is of degree $p \geqslant 3$. We know that $P_p(\mathbf{x})=c\, \mathbf{x}^{\boldsymbol{\gamma}}$, where $c\neq 0$ and $\gamma_i>0$ for $i=1,\ldots,n$ (cf.\ Claim~\ref{claim:8243}). Up to a conjugation we may assume that $P_{p-1}=0$ (cf.\ Claim~\ref{claimcor1}). Therefore, we only need to prove that $P=P_p$. Suppose on the contrary that $P-P_p$ has a polynomial function $P_q\neq 0$ as the homogeneous component of highest degree. Then the polynomial function $I_{1,2}\, P$ has $n-1$ essential variables, is bisymmetric (by Proposition~\ref{prop:ident}), has $I_{1,2}\, P_p$ as the homogeneous component of highest degree (of degree $p\geqslant 3$), and has no component of degree $p-1$. By induction hypothesis, $I_{1,2}\, P$ is in class $(iii)$ of the Main Theorem with $b=0$ (since it has no term of degree $p-1$) and hence it should be a monomial function. However, the polynomial function $[I_{1,2}\, P]_q=I_{1,2}\, P_q$ is not zero by (\ref{Pqform}), hence a contradiction. 
\end{proof} \begin{proof}[Proof of the Main Theorem when $\mathcal{R}$ is an integral domain] Using the identification of polynomials and polynomial functions, we can extend every bisymmetric polynomial function over an integral domain $\mathcal{R}$ with identity to a polynomial function on $\mathrm{Frac}(\mathcal{R})$. The latter function is still bisymmetric since the bisymmetry property for polynomial functions is defined by a set of polynomial equations on the coefficients of the polynomial functions. Therefore, every bisymmetric polynomial function over $\mathcal{R}$ is the restriction to $\mathcal{R}$ of a bisymmetric polynomial function over $\mathrm{Frac}(\mathcal{R})$. We then conclude by using the Main Theorem for such functions. \end{proof} \section*{Acknowledgments} The authors wish to thank J.\ Dasc\u{a}l and E.\ Lehtonen for fruitful discussions. This research is supported by the internal research project F1R-MTH-PUL-12RDO2 of the University of Luxembourg. \end{document}
Sum of random variables without normalization approaches Gaussian

The central limit theorem states that the limiting distribution of a centered and normalized sum of independent random variables with mean $\mu$ and finite variance $\sigma^2$ is Gaussian. $$ \frac{\sum_{i=1}^n(X_i-\mu)}{\sigma\sqrt{n}}\xrightarrow{d}N(0,1) $$ However, in practice we may not be working with sums of centered and normalized random variables. Still, if we run experiments where we sum without normalization, the distribution of the sum can look increasingly Gaussian with increasing mean and variance. The statement $$ \sum_{i=1}^nX_i\xrightarrow{d}N(n\mu,n\sigma^2) $$ would capture this intuition, but doesn't make sense because the "$\xrightarrow{d}$" is a claim in the limit as $n\rightarrow\infty$, and it doesn't make sense to talk about a Gaussian with infinite mean and variance. Is there a theorem capturing the notion that the distribution of an uncentered and unnormalized sum still approaches a Gaussian? Or is this simply a corollary of the CLT? I'm looking for a proof of something like the following statement: For a given $\delta > 0$, there exists an $N>0$ such that $$ \text{distance}\left(\sum_{i=1}^nX_i, N(n\mu,n\sigma^2)\right)<\delta $$ for $n>N$ and some distance function of the distributions. That is, if we sum enough random variables, we can get as close to a Gaussian as we like. normal-distribution central-limit-theorem sum fragapanagos

The un-normalized sum goes to infinity and so as you mention the distribution has mean and variance growing without bound. I also think you mean some distance function between the distribution of the un-normalized sum and the N(n$\mu$, n$\sigma^2$). – Michael R. Chernick May 24 '17 at 23:03

Yes: the name of this theorem is the CLT. 
The point is that if you don't standardize, there is no limiting distribution at all; and if you want there to be a limiting distribution, then you have to change the location and scale in a way that's asymptotically equivalent to standardization. These are all part of the content of the CLT. I emphasized these points in my account of the CLT at stats.stackexchange.com/a/3904/919. – whuber♦ May 24 '17 at 23:43

@MichaelChernick Yes, I mean some distance function and have edited the question to reflect that. I'm open to suggestions if there's a better way to notate this. – fragapanagos May 24 '17 at 23:44

If you want to talk about behavior in finite samples, you'd need to go to something like the Berry-Esséen inequality. While the inequality is stated with the variables in standardized form, the bound on the difference in cdf isn't affected by the horizontal scaling factor nor by a shift. This isn't specifically convergence because we're dealing with some particular $n$ but it may do what you need for a particular sense of "close to Gaussian" (N.B. a small bound on the difference in cdf doesn't imply Gaussian-like behavior) – Glen_b May 25 '17 at 0:25

Thanks @Glen_b. The Berry-Esséen inequality seems to supply what I was searching for. – fragapanagos May 25 '17 at 0:48

If you want to talk about behavior in finite samples, you'd need to go to something like the Berry-Esséen inequality. While the inequality is stated with the variables in standardized form, the bound on the difference in cdf isn't affected by the horizontal scaling factor nor by a shift. This isn't specifically convergence because we're dealing with some particular $n$, but it may do what you need for a particular sense of "close to Gaussian". Note, however, that a close-to-Gaussian cdf (i.e., a small bound on the difference in cdf) doesn't imply Gaussian-like behavior of the variable. 
For example, it's possible for a variable that has at most a minuscule absolute deviation from a Gaussian (e.g. bounded by some fixed but small $\epsilon >0$) to have infinite variance and no mean.

Glen_b
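To make the finite-sample point above concrete, here is a small simulation sketch (the choice of Exponential(1) summands, the seed, and the sample sizes are arbitrary illustration choices) that estimates the Kolmogorov-Smirnov distance between the unnormalized sum $\sum_{i=1}^n X_i$ and $N(n\mu, n\sigma^2)$. The distance shrinks as $n$ grows, consistent with the Berry-Esséen rate $O(1/\sqrt{n})$ for summands with a finite third absolute moment:

```python
import numpy as np
from math import erf, sqrt

def ks_distance_to_normal(samples, mu, sigma):
    """Kolmogorov-Smirnov distance between the empirical CDF of
    `samples` and the CDF of N(mu, sigma^2)."""
    x = np.sort(samples)
    n = len(x)
    # Normal CDF evaluated at each sorted sample point
    theory = np.array([0.5 * (1 + erf((xi - mu) / (sigma * sqrt(2)))) for xi in x])
    lo = np.arange(n) / n           # empirical CDF just below each point
    hi = np.arange(1, n + 1) / n    # empirical CDF at each point
    return float(max((hi - theory).max(), (theory - lo).max()))

rng = np.random.default_rng(0)
ds = []
for n in (1, 5, 50, 500):
    # Sum of n iid Exponential(1) variables: mean n, variance n
    sums = rng.exponential(scale=1.0, size=(20000, n)).sum(axis=1)
    ds.append(ks_distance_to_normal(sums, mu=n, sigma=sqrt(n)))
    print(f"n={n:4d}  KS distance to N(n, n): {ds[-1]:.4f}")
```

The caveat from the answer still applies: a small KS distance constrains the cdf, not the moments of the variable.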
Tony Lévy

Tony Lévy is a historian of mathematics, born in Egypt in 1943, specializing particularly in Hebrew mathematics. His family left Egypt in 1957 for Belgium and France after the Suez Crisis, but his elder brother Eddy Levy remained in Egypt. A political activist, the latter converted to Islam and took the name Adel Rifaat. He moved to France in the 1980s and, together with Bahgat Elnadi, formed the duo of political scientists and scholars of Islam known under the pseudonym Mahmoud Hussein. His other brother is the activist, philosopher and writer Benny Levy. Like his younger brother Benny, Tony was a far-left activist in the 1960s and 1970s.

Publications
• L'Infini et le nombre chez Rabbi Hasdai Crescas : XVIe siècle, 1983
• Mathématiques de l'infini chez Hasdai Crescas (1340–1410) : un chapitre de l'histoire de l'infini d'Aristote à la Renaissance, 1985
• Figures de l'infini : les mathématiques au miroir des cultures, 1987
• Le Chapitre I, 73 du "Guide des égarés" et la tradition mathématique hébraïque au moyen âge : Un commentaire inédit de Salomon b. 
Isaac, 1989
• L'Étude des sections coniques dans la tradition médiévale hébraïque, ses relations avec les traditions arabe et latine, 1989
• Éléments d'Euclide, 1991
• Gersonide, commentateur d'Euclide : traduction annotée de ses gloses sur les Eléments, 1992
• Gersonide, le Pseudo-Tusi, et le postulat des paralleles : Les mathématiques en hébreu et leurs sources arabes, 1992
• L'histoire des nombres amiables : le témoignage des textes hébreux médiévaux, 1996
• La littérature mathématique hébraïque en Europe du XIe au XVIe siècle, 1996
• La matematica hebraica, 2002
• A Newly-Discovered Partial Hebrew Version of al-Khārizmī's Algebra, 2002
• L'algèbre arabe dans les textes hébraïques (I) : un ouvrage inédit d'Isaac ben Salomon Al-Aḥdab (XVIe siècle), 2003
• Maïmonide philosophe et savant, 1138–1204, 2004 (in collaboration)
• Sefer ha-middot : a mid-twelfth-century text on arithmetic and geometry attributed to Abraham ibn Ezra, 2006 (in collaboration)
• L'algèbre arabe dans les textes hébraïques (II) : dans l'Italie des XVe et XVIe siècles : sources arabes et sources vernaculaires, 2007

External links
• Maïmonide philosophe et savant (1138–1204)
• L'étude des sections coniques dans la tradition médiévale hébraïque. Ses relations avec les traditions arabe et latine
• L'ESPACE, LE LIEU, L'INFINI on YouTube
\begin{document} \maketitle \begin{abstract} In this paper, we study the blow-up phenomena for $\alpha_k$-harmonic map sequences with uniformly bounded $\alpha_k$-energy, denoted by $\{u_{\alpha_k}: \alpha_k>1 \quad \mbox{and} \quad \alpha_k\searrow 1\}$, from a compact Riemann surface into a compact Riemannian manifold. If the Ricci curvature of the target manifold has a positive lower bound and the indices of the $\alpha_k$-harmonic map sequence with respect to the corresponding $\alpha_k$-energy are bounded, then we can conclude that, if blow-up occurs in the convergence of $\{u_{\alpha_k}\}$ as $\alpha_k\searrow 1$, the limiting necks of the convergence of the sequence consist of geodesics of finite length, and hence the energy identity holds true. For a harmonic map sequence $u_k:(\Sigma,h_k)\rightarrow N$, where the conformal class defined by $h_k$ diverges, we prove similar results. \end{abstract} \section{Introduction} Let $(\Sigma,g)$ be a compact Riemann surface and $(N,h)$ be an $n$-dimensional smooth compact Riemannian manifold which is embedded in $\mathbb{R}^K$ isometrically. As usual, we denote the space of Sobolev maps from $\Sigma$ into $N$ by $W^{k,p}(\Sigma, N)$, which is defined by $$W^{k,p}(\Sigma, N)=\{u\in W^{k,p}(\Sigma, \mathbb{R}^K): u(x)\in N\,\, \text{for a.e.}\,\,x\in\Sigma\}.$$ For $u\in W^{1,2}(\Sigma,N)$, we define locally the energy density $e(u)$ of $u$ at $x\in \Sigma$ by $$e(u)(x)=|\nabla_g u|^2=g^{ij}(x)h_{\alpha\beta}(u(x)) \frac{\partial u^\alpha}{\partial x^i}\frac{\partial u^\beta}{\partial x^j}.$$ The energy of $u$ on $\Sigma$, denoted by $E(u)$ or $E(u, \Sigma)$, is defined by $$E(u)=\frac{1}{2}\displaystyle{\int}_\Sigma e(u)dV_g,$$ and the critical points of $E$ are called harmonic maps. It is well known that the energy functional $E$ does not satisfy the Palais-Smale condition. 
In order to overcome this difficulty, Sacks and Uhlenbeck \cite{Sacks-Uhlenbeck1} introduced the so-called $\alpha$-energy $E_\alpha$ of $u: \Sigma\rightarrow N$ as follows: $$E_\alpha(u)=\frac{1}{2}\int_\Sigma\{(1+|\nabla u|^2)^\alpha-1\} dV_g,$$ where $\alpha>1$. The critical points of $E_\alpha$ in $W^{1,2\alpha}(\Sigma,N)$ are called $\alpha$-harmonic maps from $\Sigma$ into $N$. It is well known that the $\alpha$-energy functional $E_\alpha$ satisfies the Palais-Smale condition, and therefore there always exists an $\alpha$-harmonic map in each homotopy class of maps from $\Sigma$ into $N$. The strategy of Sacks and Uhlenbeck is to employ such a sequence of $\alpha_k$-harmonic maps to approximate a harmonic map as $\alpha_k$ tends decreasingly to 1. If the convergence of the sequence of $\alpha_k$-harmonic maps is smooth, the limiting map is just a harmonic map from $\Sigma$ into $N$. The energy of a map $u$ from a closed Riemann surface $\Sigma$ is conformally invariant; that is, if we let $g'=e^{2\varphi}g$ be another conformal metric on $\Sigma$, then $$\int_{\Sigma}|\nabla_gu|^2d\mu_g=\int_{\Sigma}|\nabla_{g'}u|^2d\mu_{g'}.$$ Let $\mathcal{C}_g$ denote the conformal class induced by a metric $g$; then the following definition $$E(u,\mathcal{C}_g)=\frac{1}{2}\int_\Sigma|\nabla_g u|^2d\mu_g$$ makes sense. Moreover, it is well known that the critical points of $E(u,\mathcal{C}_g)$ are branched minimal immersions (see \cite{Sacks-Uhlenbeck1, Sacks-Uhlenbeck2}). Hence, in order to get a branched minimal surface, we also need to study the convergence behavior of a sequence of harmonic maps $u_{k}:(\Sigma,h_k)\rightarrow N$ with uniformly bounded energy $E(u_k) <C$. No doubt, it is very important to study the convergence of a sequence of $\alpha_k$-harmonic maps from a fixed Riemann surface $(\Sigma, g)$ and of a sequence of harmonic maps from $(\Sigma,h_k)$ into $N$, where $h_k$ is the metric with constant curvature. 
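For later reference, let us also record the Euler--Lagrange equation of $E_\alpha$ (this follows from a standard first variation computation, cf. \cite{Sacks-Uhlenbeck1}): viewing a critical point $u\in W^{1,2\alpha}(\Sigma,N)$ as an $\mathbb{R}^K$-valued map through the embedding $N\subset\mathbb{R}^K$, one has
$$\mathrm{div}\left((1+|\nabla u|^2)^{\alpha-1}\nabla u\right)\perp T_uN,$$
that is, the tangential part of the left-hand side vanishes. For $\alpha=1$ this reduces to the harmonic map equation $\Delta u\perp T_uN$.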
In fact, these problems on the convergence of harmonic or approximate harmonic map sequences have been studied extensively by many mathematicians. Although these sequences converge smoothly to harmonic maps under suitable geometric and topological conditions, in general the convergence of these two classes of sequences might blow up. First, let us recall the convergence behavior of $\alpha_k$-harmonic map sequences. Let $\{u_{\alpha_k}\}$ be a sequence of $\alpha_k$-harmonic maps from $(\Sigma, g)$ with uniformly bounded $\alpha_k$-energy, i.e. $E_{\alpha_k}(u_{\alpha_k})\leq\Theta$. By the theory of Sacks-Uhlenbeck, there exists a subsequence of $\{u_{\alpha_k}\}$, still denoted by $\{u_{\alpha_k}\}$, and a finite set $\mathcal{S}\subset\Sigma$ such that the subsequence converges to a harmonic map $u_0$ in $C^\infty_{loc}(\Sigma\setminus \mathcal{S})$. We know that, at each point $p_i\in\mathcal{S}$, the energy of the subsequence concentrates and the blow-up phenomena occur. Moreover, there exist point sequences $\{x_{i_k}^l\}$ in $\Sigma$ with $\lim\limits_{k\rightarrow+\infty}x_{i_k}^l= p_i$ and sequences of scaling constants $\{\lambda_{i_k}^l\}$ with $$\lim\limits_{k\rightarrow+\infty}\lambda_{i_k}^l = 0,\,\,\,\,\,\,\,\, l=1,\cdots, n_0,$$ such that $$u_{\alpha_k}(x_{i_k}^l+\lambda_{i_k}^lx)\rightarrow v^l\,\,\,\,\,\,\,\, \text{in}\,\,\,\,\,\,\,\, C^j_{loc}(\mathbb{R}^2 \setminus \mathcal{A}^i),$$ where all $v^l$ are non-trivial harmonic maps from $S^2$ into $N$, and $\mathcal{A}^i\subset\mathbb{R}^2$ is a finite set. In order to explore and describe the asymptotic behavior of $\{u_{\alpha_k}\}$ at each blow-up point, the following two problems were raised naturally. 
One is whether or not the energy identity, which states that all the concentrated energy can be accounted for by harmonic bubbles, holds true, i.e., $$\lim_{\alpha_k\rightarrow 1}E_{\alpha_k}(u_{\alpha_k}, B^\Sigma_{r_0}(p_i))= E(u_0,B^\Sigma_{r_0}(p_i))+ \sum_{l=1}^{n_0}E(v^l).$$ Here, $B^\Sigma_{r_0}(p_i)$ is a geodesic ball in $\Sigma$ which contains only one blow-up point $p_i$. The other is whether or not the limiting necks connecting the bubbles are geodesics in $N$ of finite length. For a harmonic map sequence $\{u_k\}$ from $(\Sigma, h_k)$ into $(N, h)$, one encounters the same problems as above. Up to now, considerable progress has been made on these two problems in both cases \cite{Chen-Tian, Ding, Ding-Tian, Hong-Yin, Lamm, Li-Wang, Li-Wang2, Lin-Wang, Parker, Qing, Qing-Tian, Zhou,Zhu}. In particular, in \cite{Li-Wang} it is proven that if energy concentration does occur, then a generalized energy identity holds. Moreover, from the viewpoint of analysis, some necessary and sufficient conditions were given for the energy identity to hold true. On the other hand, a relation between the blow-up radii and the values of $\alpha$ was discovered which ensures the ``no neck property''. If necks do occur, however, they must converge to geodesics, and an example was given to show that there are even some limiting necks (geodesics) of infinite length. Generally, the energy identity does not hold true. For the case of a harmonic map sequence $u_k: (\Sigma, h_k)\rightarrow (N,h)$, a counter-example to the energy identity was found in \cite{Parker}. Very recently, in \cite{Li-Wang2} a counter-example to the energy identity was given for the case of $\alpha_k$-harmonic map sequences. Furthermore, from the study in \cite{Li-Wang, Li-Wang2} we can see that, besides $\alpha$, the topology and geometry of the target manifold $(N, h)$ also play an important role in the convergence of $\alpha$-harmonic map sequences from a compact surface. 
From the viewpoint of differential geometry, it is therefore natural and interesting to find some reasonable geometric and topological conditions on the domain or target manifold such that the energy identity holds. In particular, a natural question is whether or not we can exploit some geometric and topological conditions to ensure that the limiting necks are geodesics of finite length, which implies that the energy identity holds true. For this goal, in this paper we obtain the following two theorems: \begin{thm}\label{main1} Let $(\Sigma,g)$ be a closed Riemann surface and $(N,h)$ be a closed Riemannian manifold with $Ric_N>\lambda>0$. Let $\alpha_k\rightarrow 1$ and $\{u_{\alpha_k}\}$ be a sequence of maps from $(\Sigma, g)$ into $(N,h)$ such that each $u_{\alpha_k}$ is an $\alpha_k$-harmonic map, and the indices and energies satisfy respectively $$\mbox{Index}(E_{\alpha_k}(u_{\alpha_k}))<C,\,\,\,\, \,\,\,\, \,\,\,\, E_{\alpha_k} (u_{\alpha_k})<C.$$ If $\{u_{\alpha_k}\}$ blows up, then the limiting necks consist of some finite length geodesics. \end{thm} \begin{thm}\label{main2} Let $\Sigma$ be a closed Riemann surface with genus $g(\Sigma)\geq 1$. In the case $g(\Sigma)\geq 2$, $\Sigma$ is equipped with a sequence of smooth metrics $h_k$ with curvature $-1$. In the case $g(\Sigma)=1$, $\Sigma$ is equipped with a sequence of smooth metrics $h_k$ with curvature $0$ and area $A(\Sigma, h_k) = 1$. Let $(N, h)$ be a Riemannian manifold with Ricci curvature $\mbox{Ric}_N >\lambda> 0$. Suppose that $(\Sigma, h_k)$ diverges in the moduli space and $\{u_k\}$ is a harmonic map sequence from $(\Sigma, h_k)$ into $(N, h)$ with bounded index and energy. If the set of the limiting necks of $u_k$ is not empty, then it consists of finite length geodesics. \end{thm} \begin{rem} By the results in \cite{Chen-Tian} or \cite{Li-Wang}, the fact that the limiting necks are of finite length implies that the energy identity is true. 
We should also mention that, when each $u_{\alpha_k}$ in $\{u_{\alpha_k}\}$ is the minimizer of the corresponding $E_{\alpha_k}$ in a fixed homotopy class, Chen and Tian \cite{Chen-Tian} have proved that the necks are just some geodesics of finite length in $N$. \end{rem} \begin{rem} The curvature condition in Theorems \ref{main1} and \ref{main2} is used to ensure that any geodesic of infinite length lying in $N$ is not stable. In fact, we will prove in this paper that, if the necks contain an unstable geodesic of infinite length, then the indices of the harmonic (or $\alpha$-harmonic) map sequence cannot be bounded from above. \end{rem} \section{The Proofs of Theorem \ref{main1}} Our strategy is to show that the indices of the sequence $\{u_{\alpha_k}\}$ in Theorem 1.1 are not bounded if there exists an infinite length geodesic in the set of the limiting necks of $\{u_{\alpha_k}\}$. For this goal, we first need to recall the definition of the index of an $\alpha$-harmonic map and the second variation formula of the $\alpha$-energy functional. \subsection{The index of an $\alpha$-harmonic map} Let $u:(\Sigma,g)\rightarrow (N,h)$ be an $\alpha$-harmonic map. Then $L=u^{-1}(TN)$ is a smooth pull-back bundle over $\Sigma$. Let $V$ be a section of $L$ and $$u_t(x)=\exp_{u(x)}(tV).$$ Obviously, $u_0=u$. Then, the formula of the second variation of $E_\alpha$ reads \begin{eqnarray}\label{second.Ealpha} \delta^2E_\alpha(u)(V,V)&=&\frac{d^2}{dt^2}E_\alpha(u_t)|_{t=0}\nonumber\\[2.0ex] &=&2\alpha\displaystyle{\int}_{\Sigma}(1+|du|^2)^{(\alpha-1)}\left(\langle \nabla V,\nabla V \rangle-R(V,\nabla u,\nabla u,V)\right)d\mu\\[2.0ex] &&+4\alpha(\alpha-1)\displaystyle{\int}_\Sigma(1+|du|^2)^{\alpha-2} \langle du,\nabla V\rangle^2d\mu.\nonumber \end{eqnarray} For more details we refer to \cite{M-M}. Let $\Gamma(L)$ denote the linear space of the smooth sections of $L$. 
Then, the index of $u$ is the maximal dimension of the linear subspaces of $\Gamma(L)$ on which \eqref{second.Ealpha} is negative definite. \subsection{The index of the necks} We have seen that the limiting necks of $\{u_{\alpha_k}\}$ are some geodesics in $N$; a natural question is whether there exist some relations between the indices of these geodesics and the indices of the necks of $\{u_{\alpha_k}\}$. In this subsection we analyse the asymptotic behavior of the necks of $\{u_{\alpha_k}\}$ and try to establish the desired relations. Let $\alpha_k\rightarrow 1$ and let each $u_{\alpha_k}$ of the map sequence $\{u_{\alpha_k}: k=1,2,\cdots\}$ be an $\alpha_k$-harmonic map from $(\Sigma,g)$ into $(N,h)$. For convenience we always embed $(N,h)$ into $\mathbb{R}^K$ isometrically and set $u_k=u_{\alpha_k}$. Assume that $\{u_k\}$ blows up only at a point $p\in\Sigma$. Then, for any $\epsilon$, we have $$\lim_{k\rightarrow+\infty}\|\nabla u_{k}\|_{C^0(B_\epsilon(p))}=+\infty.$$ Choose an isothermal coordinate chart $(D;x^1,x^2)$ centered at $p$, such that $$g=e^{2\varphi}(dx^1\otimes dx^1+dx^2\otimes dx^2),\,\,\,\, \mbox{ and }\,\,\,\,\varphi(0)=0.$$ For simplicity, we assume $u_{k}$ has only one blow-up point in $D$. Put $$r_k=\frac{1}{\|\nabla u_{k}\|_{C^0(D_\frac{1}{2})}},\,\,\,\, \mbox{ and }\,\,\,\, |\nabla u_{k}(x_k)|=\|\nabla u_{k}\|_{C^0(D_\frac{1}{2})}.$$ Then, we have that $x_k\rightarrow 0$, $r_k\rightarrow 0$ and there exists a bubble $v$, which can be considered as a harmonic map from $S^2$ into $N$, such that $u_k(x_k+r_kx)$ converges to $v$. Without loss of generality, we may assume $x_k=0$. By the arguments in \cite{Li-Wang}, we only need to prove Theorem \ref{main1} for the case where there exists only one bubble in the convergence of $\{u_k\}$. So, we always assume in this section that only one bubble appears in the convergence of $\{u_{k}\}$. Now, we consider the case where the limiting necks contain a geodesic of infinite length. 
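Before proceeding, let us recall the classical fact that lies behind the curvature hypothesis (this is the standard Bonnet--Myers type index estimate, which we state without proof): if the Ricci curvature of $N$ satisfies $\mbox{Ric}_N\geq(n-1)\kappa$ for some constant $\kappa>0$, then every geodesic of length greater than $\pi/\sqrt{\kappa}$ contains a pair of conjugate points and hence is an unstable critical point of the energy. Moreover, since disjoint subarcs of length greater than $\pi/\sqrt{\kappa}$ contribute linearly independent directions on which the second variation is negative, the index of a geodesic grows at least linearly in its length. This is the mechanism by which a limiting neck of infinite length will force the indices of the sequence to blow up.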
In fact, the present paper is a follow-up of the papers \cite{Li-Wang} and \cite{Chen-Li-Wang}; first of all, we need to recall some results proved in \cite{Li-Wang}. \begin{lem} Let $\alpha_k\rightarrow 1$ and $\{u_{k}\}$ be a map sequence such that each $u_{k}$ is an $\alpha_k$-harmonic map from $(\Sigma,g)$ into $(N,h)$. If there is a positive constant $\Theta$ such that $E_{\alpha_k}(u_{k})<\Theta$ for any $\alpha_k$, then there exists a positive constant $C$ such that, after passing to a subsequence, there holds $$\|\nabla u_{k}\|^{\alpha_k-1}_{C^0(\Sigma)}<C.$$ \end{lem} For the proof of this lemma and more details we refer to Remark 1.2 in \cite{Li-Wang}. Moreover, for the convergence radii and $\alpha_k$ we have the following relations: \begin{lem} Let $\{u_{k}\}$ satisfy the same conditions as in Lemma 2.1. If there exists only one bubble in the convergence of $\{u_{k}\}$ and the limiting neck is of infinite length, then the following hold true: $$0<-(\alpha_k-1)\log r_k<C, \,\,\,\,\mbox{and}\,\,\,\,\,\,\,\, \sqrt{\alpha_k-1}\log r_k\rightarrow-\infty.$$ Here, $r_k$ is defined as before. \end{lem} \proof From Remark 1.2 in \cite{Li-Wang}, we have $\mu=\liminf_{\alpha_k\rightarrow 1}r_k^{2-2\alpha_k}\in [1,\,\mu_{\max}]$, where $\mu_{\max}\geq 1$ is a positive constant. Therefore, it follows that there holds $$0<-(\alpha_k-1)\log r_k<C.$$ Since the limiting neck is of infinite length, from Theorem 1.3 in \cite{Li-Wang} we know that $$\nu=\liminf_{\alpha_k\rightarrow 1}r_k^{-\sqrt{\alpha_k-1}}=\infty.$$ It follows that $$\sqrt{\alpha_k-1}\log r_k\rightarrow-\infty.$$ Thus we complete the proof.$ \Box$\\ As a direct corollary of Proposition 4.3 in \cite{Li-Wang}, we have \begin{lem}\label{key} Let $\alpha_k\rightarrow 1$ and $\{u_{k}\}$ be a map sequence such that each $u_{k}$ is an $\alpha_k$-harmonic map from $(\Sigma,g)$ into $(N,h)\subset\mathbb{R}^K$. 
Suppose that there is a positive constant $\Theta$ such that $E_{\alpha_k}(u_{k})<\Theta$ for any $\alpha_k$ and there exists only one bubble in the convergence of $\{u_{k}\}$. Then, for any $t_k\rightarrow t\in(0, 1)$, there exist a vector $\xi\in\mathbb{R}^K$ and a subsequence of $\{u_k\}$ such that \begin{equation} \frac{1}{\sqrt{\alpha_k-1}}\frac{\partial u_{k}}{\partial \theta}(r_k^{t_k}e^{\sqrt{-1}\theta})\to 0 \end{equation} and \begin{equation} \frac{r_k^{t_k}}{\sqrt{\alpha_k-1}}\frac{\partial u_{k}}{\partial r}(r_k^{t_k}e^{\sqrt{-1}\theta})\to \xi \end{equation} as $k\to \infty$. Moreover, $$|\xi|=\mu^{1-t}\sqrt{\frac{E(v)}{\pi}},$$ where $\mu$ is defined by $$\mu=\lim_{k\rightarrow+\infty}r_k^{2-2\alpha_k}.$$ \end{lem} Now we define the approximate curve of $u_k$, denoted by $u_k^*(r)$, by $$u_k^*(r)=\frac{1}{2\pi}\int_0^{2\pi}u_k(re^{\sqrt{-1}\theta})d\theta.$$ Since the target manifold $(N, h)$ is embedded in $\mathbb{R}^K$, $u_k^*(r)$ is a space curve of $\mathbb{R}^K$ and we denote the arc-length parametrization of $u_k^*$ by $s$ such that $s(r_k^{t_1})=0$. Then \begin{lem} Let $\{u_{k}\}$ satisfy the same conditions as in Lemma 2.3. Suppose that the limiting neck of $\{u_{k}\}$ is a geodesic of infinite length. Then, there exists a subsequence of $\{u^*_k(s)\}$ which converges smoothly on $[0,a]$ to a geodesic $\gamma$ for any fixed $a>0$. \end{lem} Without loss of generality, from now on, we assume that $u_k^*(s)$ converges smoothly to $\gamma$ on any $[0,a]$. As a corollary, we have \begin{cor}\label{convergence.of.uk} Let $\{u_{k}\}$ satisfy the same conditions as in Lemma 2.3. Suppose that the limiting neck of $\{u_{k}\}$ is a geodesic of infinite length. Then, for any given $a>0$ and any fixed $\theta$, $u_k(se^{\sqrt{-1}\theta})$ converges to $\gamma$ in $C^1[0,a]$. 
Moreover, we have \begin{equation}\label{arclength} \left\|\frac{r(s)}{\sqrt{\alpha_k-1}}\left|\frac{\partial s}{\partial r}\right|-\mu^{1-t_1}\sqrt{\frac{E(v)}{\pi}}\right\|_{C^0([0,a])}\rightarrow 0. \end{equation} \end{cor} \proof Define $t_k^a$ by $$s(r_k^{t_k^a})=a.$$ By Lemma \ref{key}, we have \begin{equation}\label{theta} a=\int_{r_k^{t_k^a}}^{r_k^{t_1}}\left|\frac{d{u}_k^*(r)}{dr}\right|dr\geq C\int_{r_k^{t_k^a}}^{r_k^{t_1}}\frac{\sqrt{\alpha_k-1}}{r}dr= -C(t_k^a-t_1)\sqrt{\alpha_k-1}\log r_k. \end{equation} On the other hand, Lemma 2.2 (see Theorem 1.3 in \cite{Li-Wang}) tells us $$\sqrt{\alpha_k-1}\log r_k\rightarrow -\infty,$$ since the limiting neck (geodesic) is of infinite length. Hence, from (\ref{theta}) and the above fact, we have \begin{equation}\label{ta} t_k^a-t_1\rightarrow 0\,\,\,\,\,\,\,\,\mbox{ as }\,\,\,\,\,\,\,\, k\rightarrow+\infty. \end{equation} Suppose, to the contrary, that $u_k(se^{\sqrt{-1}\theta})$ does not converge to $\gamma$ in $C^1[0,a]$. Then there exist $s_{k_i}\in [0,a]$ such that $$\sup_{\theta}\left|\frac{\partial u_k}{\partial s}(s_{k_i}e^{\sqrt{-1}\theta})-\frac{d{u}_k^*}{ds}(s_{k_i})\right|>\epsilon>0.$$ Write $s_{k_i}=s(r_k^{t_{k_i}})$. Obviously, $t_{k_i}\in[t_1,\,t_{k_i}^a]$. Thus, by \eqref{ta}, $t_{k_i}\rightarrow t_1$.
By Lemma \ref{key}, after passing to a subsequence, we have $$\frac{r_k^{t_{k_i}}}{\sqrt{\alpha_k-1}}\left|\frac{\partial u_k}{\partial r}(r_k^{t_{k_i}}e^{\sqrt{-1}\theta})-\frac{d{u_k}^*}{dr}(r_k^{t_{k_i}}) \right|\rightarrow 0.$$ Therefore, noting $$\left|\frac{\partial u_k(s_{k_i}e^{\sqrt{-1}\theta})}{\partial s}-\frac{d{u}_k^*(s_{k_i})}{ds}\right|= \left|\frac{dr}{ds}\right|\cdot\left|\frac{\partial u_k}{\partial r}-\frac{d{u_k}^*(r)}{dr}\right|_{r=r_k^{t_{k_i}}}$$ and $$\left|\frac{dr}{ds}\right|_{r=r_k^{t_{k_i}}}\leq \frac{Cr_k^{t_{k_i}}}{\sqrt{\alpha_k-1}},$$ we have $$\left|\frac{\partial u_k(s_{k_i}e^{\sqrt{-1}\theta})}{\partial s}-\frac{d{u}_k^*(s_{k_i})}{ds}\right| \leq \frac{Cr_k^{t_{k_i}}}{\sqrt{\alpha_k-1}}\left|\frac{\partial u_k}{\partial r}(re^{\sqrt{-1}\theta})-\frac{d{u_k}^*}{dr}(r)\right|_{r=r_k^{t_{k_i}}}\rightarrow 0.$$ Thus, we get a contradiction. Hence, it follows that $$\left\|\frac{\partial u_k(se^{\sqrt{-1}\theta})}{\partial s}-\frac{d{u}_k^*(s)}{ds}\right\|_{C^0[0,\,a]}\rightarrow 0.$$ From the arguments above and \cite{Li-Wang}, we conclude that for any fixed $\theta$ $$\|u_k(se^{\sqrt{-1}\theta})-u_k^*(s)\|_{C^1[0,\,a]}\rightarrow 0.$$ In the same way, we can prove \eqref{arclength}. ~$ \Box$\\ \begin{lem}\label{LW} Suppose that $\{u_{k}\}$ satisfies the same conditions as in Lemma 2.3. Then, for any fixed $R>0$ and $0<t_1<t_2<1$, we have $$\lim_{k\rightarrow+\infty}\sup_{t\in[t_1,t_2]}\frac{1}{\alpha_k-1}\int_{D_{Rr_k^{t}}\setminus D_{\frac{1}{R}r_k^{t}}} |u_{k,\theta}|^2dx=0,$$ where $$u_{k,\theta}=r^{-1}\frac{\partial u_k}{\partial\theta}.$$ \end{lem} \proof Assume this is not true. After passing to a subsequence, we can find $t_k\rightarrow t\in [t_1,t_2]$ such that $$ \frac{1}{\alpha_k-1}\int_{D_{Rr_k^{t_k}}\setminus D_{\frac{1}{R}r_k^{t_k}}} |u_{k,\theta}|^2dx\geq\epsilon.
$$ However, by Proposition 4.2 in \cite{Li-Wang}, $$ \lim_{k\rightarrow+\infty}\frac{1}{\alpha_k-1}\int_{D_{Rr_k^{t_k}}\setminus D_{\frac{1}{R}r_k^{t_k}}} |u_{k,\theta}|^2dx=0. $$ This is a contradiction. Thus, we complete the proof of the lemma. $ \Box$\\ Now, let us recall the definition of the stability of a geodesic on a Riemannian manifold $(N,h)$. A geodesic $\gamma:[0,\,a]\rightarrow N$ is called unstable if there exists a vector field $V_0$ along $\gamma$, vanishing at the endpoints, such that the second variation of its length satisfies, for some $\delta>0$, $$I_\gamma(V_0,V_0)=\int_0^a(\langle \nabla_{\dot{\gamma}}V_0,\nabla_{\dot{\gamma}}V_0 \rangle-R(V_0,\dot{\gamma},\dot{\gamma},V_0))ds<-\delta<0.$$ Here $R$ is the curvature operator of $N$. We have \begin{lem} Suppose that $\{u_{k}\}$ satisfies the same conditions as in Lemma 2.3. If the limiting neck of $\{u_{k}\}$ is an unstable geodesic which is parameterized on $[0,\, a]$ by arc length, then, for sufficiently large $k$, there exists a section $V_k$ of $u_{k}^{-1}(TN)$, which is supported in $D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}(x_k)$, such that $$\delta^2E_{\alpha_k}(V_k,V_k)<0.$$ \end{lem} \proof Since the limiting neck of $\{u_{k}\}$, denoted by $\gamma:[0,\, a]\rightarrow N$, is not a stable geodesic, there exists a vector field $V_0$ on $\gamma$ with $V_0|_{\gamma(0)}=0$ and $V_0|_{\gamma(a)}=0$ such that $$I_\gamma(V_0, V_0)<0.$$ Let $P$ be the projection from $T\mathbb{R}^K$ to $TN$. We define $$V_k(r(s)e^{\sqrt{-1}\theta}+x_k)=P_{u_k(se^{\sqrt{-1}\theta})}(V_0(s)),$$ where $s$ is the arc-length parametrization of $u_{k}^*(r)$ with $s(r_k^{t_1})=0$. Then, $V_k$ is a smooth section of $u_{k}^{-1}(TN)$ which is supported in $D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}(x_k)$. By Corollary \ref{convergence.of.uk}, for any fixed $\theta$, we have that $V_k(u_k(se^{\sqrt{-1}\theta}))$ converges to $V_0(\gamma(s))$ in $C^1[0,a]$.
Then \begin{eqnarray}\label{second.Ealpha1} \delta^2E_{\alpha_k}(V_k,V_k)&=& 2\alpha_k\displaystyle{\int}_{D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}}(1+|du_k|^2)^{(\alpha_k-1)}\left(\langle \nabla V_k,\nabla V_k\rangle-R(V_k,\nabla u_k,\nabla u_k,V_k)\right)dx\nonumber\\[2.0ex] &&+4\alpha_k(\alpha_k-1)\displaystyle{\int}_{D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}}(1+|du_k|^2)^{\alpha_k-2}\langle du_k,\nabla V_k\rangle^2dx. \end{eqnarray} Next, we will show that \begin{equation}\label{I.I2} \lim_{k\rightarrow+\infty} \frac{1}{\sqrt{\alpha_k-1}}\delta^2 E_{\alpha_k}(V_k,V_k) =4\pi\mu \sqrt{\frac{E(v)}{\pi}}I_\gamma(V_0,V_0). \end{equation} We compute \begin{eqnarray*} & &\delta^2E_{\alpha_k}(V_k,V_k)\\ &=&2\alpha_k\int_0^{2\pi}\int^{r_k^{t_1}}_{r_k^{t_k^a}} (1+|du_k|^2)^{(\alpha_k-1)}(\langle \nabla V_k,\nabla V_k\rangle-R(V_k,\nabla u_k,\nabla u_k,V_k)) rdrd\theta\\ &&+4\alpha_k(\alpha_k-1)\displaystyle{\int}_{D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}}(1+|du_k|^2)^{\alpha_k-2}\langle du_k,\nabla V_k\rangle^2dx\\ &=& 2\alpha_k\int_0^{2\pi}\int^{r_k^{t_1}}_{r_k^{t_k^a}} (1+|du_k|^2)^{(\alpha_k-1)}(\langle \nabla_{\frac{\partial u_k}{\partial r}} V_k,\nabla_{\frac{\partial u_k}{\partial r}} V_k\rangle -R(V_k,\frac{\partial u_k}{\partial r},\frac{\partial u_k}{\partial r},V_k)) rdrd\theta\\ &&+2\alpha_k\int_0^{2\pi}\int^{r_k^{t_1}}_{r_k^{t_k^a}}(1+|du_k|^2)^{(\alpha_k-1)} (\langle \nabla_{ u_{k,\theta}} V_k,\nabla_{u_{k,\theta}} V_k \rangle-R(V_k,u_{k,\theta},u_{k,\theta},V_k)) rdrd\theta\\ &&+4\alpha_k(\alpha_k-1)\displaystyle{\int}_{D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}}(1+|du_k|^2)^{\alpha_k-2}\langle du_k,\nabla V_k\rangle^2 dx\\ &=& 2\alpha_k\mathbf{I}+2\alpha_k\mathbf{II}+4\alpha_k\mathbf{III}. 
\end{eqnarray*} Firstly, we calculate $\mathbf{I}$: \begin{eqnarray*} & &\frac{\mathbf{I}}{\sqrt{\alpha_k-1}}\\ &=&\int_0^{2\pi}\int_{0}^{a}(1+|du_k|^2)^{(\alpha_k-1)}\left(\langle \nabla_{\frac{\partial u_k}{\partial s}} V_k,\nabla_{\frac{\partial u_k}{\partial s}} V_k\rangle -R(V_k,\frac{\partial u_k}{\partial s},\frac{\partial u_k}{\partial s},V_k)\right)\frac{\left|\frac{\partial s}{\partial r}\right|}{\sqrt{\alpha_k-1}} rdsd\theta. \end{eqnarray*} By Lemma \ref{key}, we can easily see that $$\left(\left|\frac{r_k^{t(s)}}{\sqrt{\alpha_k-1}}du_k\right|^2 \frac{\alpha_k-1}{r_k^{2t(s)}}\right)^{(\alpha_k-1)}\longrightarrow \mu^{t_1}.$$ It follows from the fact $\mu\geq 1$ (see \cite{Li-Wang}) and the above that $$(1+|du_k|^2)^{(\alpha_k-1)}=\left(1+\left|\frac{r_k^{t(s)}}{\sqrt{\alpha_k-1}}du_k\right|^2 \frac{\alpha_k-1}{r_k^{2t(s)}}\right)^{(\alpha_k-1)}\longrightarrow\mu^{t_1}.$$ Hence, we infer from the above and Corollary \ref{convergence.of.uk} that $$\lim_{k\rightarrow+\infty}\frac{\mathbf{I}}{\sqrt{\alpha_k-1}} =2\pi\mu\sqrt{\frac{E(v)}{\pi}}I_\gamma(V_0,V_0).$$ Next, we calculate the term $\mathbf{II}$. By the definition, we have $$\nabla_{\frac{\partial u_k}{\partial\theta}}V_k = P_{u_k(se^{\sqrt{-1}\theta})}\left(\frac{\partial V_k} {\partial \theta}\right)=P_{u_k(se^{\sqrt{-1}\theta})}\left(\frac{\partial }{\partial\theta}(P_{u_k(se^{\sqrt{-1}\theta})})(V_0)\right),$$ where $\frac{\partial V_k}{\partial\theta}$ is the derivative in $\mathbb{R}^K$.
This leads to $$|\nabla_{\frac{\partial u_k}{\partial\theta}}V_k|\leq C(a) \left|\frac{\partial u_k}{\partial\theta}\right|.$$ Hence, we have \begin{eqnarray*} \frac{\mathbf{II}}{\sqrt{\alpha_k-1}} &=& \int_0^{2\pi}\int^{r_k^{t_1}}_{r_k^{t_k^a}}\frac{(1+|du_k|^2)^{(\alpha_k-1)}}{\sqrt{\alpha_k-1}} \left(\langle \nabla_{ u_{k,\theta}} V_k,\nabla_{u_{k,\theta}} V_k \rangle-R(V_k,u_{k,\theta},u_{k,\theta},V_k)\right)rdrd\theta\\ &\leq&\frac{C}{\sqrt{\alpha_k-1}}\int_0^{2\pi}\int^{r_k^{t_1}} _{r_k^{t_k^a}}|u_{k,\theta}|^2rdrd\theta. \end{eqnarray*} For a given $R>0$, set $$m_k=\left[\frac{\log r_k^{t_1-t_k^a}}{\log R}\right]+1.$$ It is easy to see that $$D_{r_k^{t_1}}\setminus D_{r_k^{t_k^a}}\subset\cup_{i=1}^{m_k}(D_{R^ir_k^{t_k^a}}\setminus D_{R^{i-1}r_k^{t_k^a}}).$$ By \eqref{theta}, we have $$\sqrt{\alpha_k-1}m_k\leq C(R).$$ Then \begin{eqnarray*} \frac{\mathbf{II}}{\sqrt{\alpha_k-1}} &\leq& \frac{C}{\sqrt{\alpha_k-1}}\int_0^{2\pi}\int^{r_k^{t_1}} _{r_k^{t_k^a}} |u_{k,\theta}|^2rdrd\theta\\ &\leq& \frac{C}{\sqrt{\alpha_k-1}}\int_{\cup_{i=1}^{m_k}(D_{R^ir_k^{t_k^a}}\setminus D_{R^{i-1}r_k^{t_k^a}})} |u_{k,\theta}|^2dx\\ &\leq& \frac{Cm_k\sqrt{\alpha_k-1}}{\alpha_k-1}\frac{1}{m_k}\int_{\cup_{i=1}^{m_k}(D_{R^ir_k^{t_k^a}}\setminus D_{R^{i-1}r_k^{t_k^a}})} |u_{k,\theta}|^2dx\\ &\leq& \frac{C(R)}{m_k}\left(\frac{1}{\alpha_k-1}\int_{\cup_{i=1}^{m_k}(D_{R^ir_k^{t_k^a}}\setminus D_{R^{i-1}r_k^{t_k^a}})} |u_{k,\theta}|^2dx\right). \end{eqnarray*} It follows from Lemma \ref{LW} and the above inequality that there holds $$ \lim_{k\rightarrow\infty}\frac{1}{\sqrt{\alpha_k-1}}\mathbf{II}=0. $$ Lastly, we consider the term $\mathbf{III}$. 
It is easy to check that $$|\langle du_k,\nabla V_k\rangle|\leq C|du_k|^2.$$ So, there exists a constant $C$ such that $$(1+|du_k|^2)^{\alpha_k-2}\langle du_k,\nabla V_k\rangle^2\leq C(1+|du_k|^2)^{\alpha_k}\leq C(1+|du_k|^2),$$ where we have used the fact that $(1+|du_k|^2)^{\alpha_k-1}<C$ by Lemma 2.1. This leads to \begin{eqnarray*} \frac{\mathbf{III}}{\sqrt{\alpha_k-1}}\leq C\sqrt{\alpha_k-1} \int_{D_{r_k^{t_1}}}(1+|du_k|^2)dx\rightarrow 0. \end{eqnarray*} Thus, we obtain the desired estimate and finish the proof. $ \Box$\\ Since $(N, h)$ is a complete Riemannian manifold with $\mbox{Ric}_N\geq \lambda>0$, the well-known Myers theorem tells us that the diameter of $(N, h)$ satisfies $$\mbox{diam}(N, h)\leq \frac{\pi}{\sqrt{\lambda(n-1)^{-1}}},$$ and any geodesic $\gamma$ lying in $(N, h)$ is unstable if its length $l(\gamma)$ satisfies $$l(\gamma)\geq\frac{\pi}{\sqrt{\lambda(n-1)^{-1}}}\equiv l_N.$$ Hence, for any given positive number $a$ such that $a \geq l_N+2\epsilon$ and any geodesic $\gamma$ lying in $(N, h)$ which is parameterized by arc-length in $[0,a]$, there always exists a vector field $V_0(s)$, which is smooth on $\gamma$, and 0 on $\gamma|_{[0,\,\epsilon]}$ and $\gamma|_{[a-\epsilon,\,a]}$, such that the second variation of the length of $\gamma$ satisfies \begin{equation}\label{d2.geodesic} I_\gamma(V_0,V_0)=\int_0^a(\langle \nabla_{\dot{\gamma}}V_0,\nabla_{\dot{\gamma}}V_0 \rangle-R(V_0,\dot{\gamma},\dot{\gamma},V_0))ds<-\delta<0. \end{equation} \begin{lem} Let $(N, h)$ be a closed Riemannian manifold with $\mbox{Ric}(N)\geq\lambda>0$. Suppose that $\{u_{k}\}$ satisfies the same conditions as in Lemma 2.3. If the limiting neck of $\{u_{k}\}$ is a geodesic of infinite length, then the indices of $\{u_{k}\}$ with respect to the corresponding $E_{\alpha_k}$ cannot be bounded from above.
\end{lem} \proof Since the limiting neck of $\{u_{k}\}$ is a geodesic of infinite length, for a given $t_1$, the arguments in Lemma 2.7 above tell us that we can always choose a suitable positive constant $\epsilon_1$ such that, when $k$ is large enough, the arc length $a$ of $u^*_k(s)$ on $D_{r_k^{t_1}}\setminus D_{r_k^{t_1+\epsilon_1}}(x_k)$ satisfies $$a > l_N=\frac{\pi}{\sqrt{\lambda(n-1)^{-1}}}.$$ Therefore, there exists a section $V^1_k$ of $u_k^{-1}(TN)$, which is $0$ outside $D_{r_k^{t_1}}\setminus D_{r_k^{t_1+\epsilon_1}}(x_k)$, satisfying $$\delta^2E_{\alpha_k}(V_k^1,V_k^1)<0.$$ By the same method, for $t_2=t_1+2\epsilon_1$, we can also pick $\epsilon_2>0$ and construct a section $V_k^2$, which is $0$ outside $D_{r_k^{t_2}}\setminus D_{r_k^{t_2+\epsilon_2}}(x_k)$, such that $$\delta^2E_{\alpha_k}(V_k^2,V_k^2)<0.$$ Since the limiting neck is a geodesic of infinite length, when $k$ is sufficiently large, there exists $i_k$ with $i_k\rightarrow\infty$ such that we can construct, in the same way as above, a sequence of sections $\{V_k^3, V_k^4, \cdots, V_k^{i_k}\}$ such that for any $1\leq i\leq i_k$ there holds $$\delta^2E_{\alpha_k}(V_k^i,V_k^i)<0.$$ Obviously, $V_k^1$, $V_k^2$, $\cdots$, $V_k^{i_k}$ are linearly independent. This means that $$\mbox{Index}(E_{\alpha_k}(u_{k}))\geq i_k. $$ Therefore, we get $$\mbox{Index}(E_{\alpha_k}(u_{k}))\rightarrow+\infty, \,\,\,\,\,\,\,\,\,\,\,\,\mbox{ as }\,\,\,\, k\rightarrow+\infty.$$ Thus, we complete the proof of the lemma. $ \Box$\\ \noindent{\bf The proof of Theorem 1.1}: Obviously, Theorem \ref{main1} is just a direct corollary of the above lemma. \section{The Proof of Theorem \ref{main2}} From the arguments and the appendix in \cite{Chen-Li-Wang} we know that one only needs to consider the convergence behavior of harmonic map sequences on two-dimensional flat cylinders, although the original harmonic map sequences are defined on sequences of hyperbolic or flat closed Riemann surfaces, respectively.
First, we recall some fundamental notions such as the index of a harmonic map with respect to the energy functional. Let $T_k\rightarrow\infty$ be a sequence of positive numbers and $u:(-T_k,T_k)\times S^1\rightarrow (N,h)$ be a harmonic map. $L=u^{-1}(TN)$ is the pull-back bundle over $(-T_k,T_k)\times S^1$. Let $V$ be a section of $L$ which is 0 near $\{\pm T_k\}\times S^1$ and $$u_\tau(x)=\exp_{u(x)}(\tau V).$$ It is well-known that the second variation formula of the energy functional $E$ is the following: \begin{eqnarray*} \delta^2E(u)(V,V) &=&2\displaystyle{\int}_{(-T_k,T_k)\times S^1}\left(\langle \nabla V,\nabla V \rangle-R(V,\nabla u,\nabla u,V)\right)dtd\theta. \end{eqnarray*} Let $\Gamma(L)$ denote the linear space of the smooth sections of $L$. Then, the index of $u$ is just the maximal dimension of a linear subspace of $\Gamma(L)$ on which the above quadratic form is negative definite. Let $u_k$ be a harmonic map from $(-T_k,T_k)\times S^1$ into $(N,h)$. We assume that, for any $t_k\in (-T_k,T_k)$, $$|\nabla u_k(\theta,t_k)|\rightarrow 0,\,\,\,\,\,\,\,\,\,\,\,\, \mbox{as}\,\,\,\, k\rightarrow\infty.$$ Moreover, we assume that $u_k((-T_k,T_k)\times S^1)$ converges to an infinite length geodesic. By the arguments in \cite{Chen-Li-Wang}, we can easily see that Theorem \ref{main2} in this paper can be deduced from the following proposition: \begin{pro}\label{sy} Let $\{u_k:(-T_k,T_k)\times S^1\rightarrow N, k=1, 2, \cdots\}$ be a harmonic map sequence such that for any $t_k\in (-T_k,T_k)$, there holds true $|\nabla u_k(\theta,t_k)|\rightarrow 0$. If $\mbox{Ric}_N\geq\lambda>0$ and $u_k((-T_k,T_k)\times S^1)$ converges to an infinite length geodesic, then the index of $u_k$ tends to infinity. \end{pro} In order to prove Proposition \ref{sy}, we need to recall some known results which were established in \cite{Chen-Li-Wang}. We first recall a useful observation in \cite{Zhu}. \begin{lem}\label{zhu1}Let $u$ be a harmonic map from $(-T,\, T)\times S^1$ into $N$.
Then, the function defined by $$\beta(u)=\int_{\{t\}\times S^{1}}(|u_{t}|^{2}-|u_{\theta}|^{2}-2iu_{t}\cdot u_{\theta})d\theta$$ is independent of $t\in(-T,\, T)$. \end{lem} Next, we recall some known results proved in \cite{Chen-Li-Wang}, which are used in the following arguments. \begin{lem} Let $\{u_k:(-T_k,\,T_k)\times S^1\rightarrow N, \,\, k=1, 2, \cdots\}$ be a sequence of harmonic maps such that for any $t_k\in (-T_k,\,T_k)$, there holds true $|\nabla u_k(\theta,t_k)|\rightarrow 0$. Assume that $u_k((-T_k,\,T_k)\times S^1)$ converges to an infinite length geodesic. Then, we have $$\lim_{k\rightarrow\infty}\sqrt{|\mbox{Re} \ \beta(u_{k})|}T_{k} =\infty.$$ \end{lem} \begin{lem}\label{key2} Let $\{u_k\}$ satisfy the same conditions as in Lemma 3.3. Then, for any $\lambda<1$ and $t_k\in [-\lambda T_k,\,\lambda T_k]$, there exists a vector $\xi\in\mathbb{R}^K$ and a subsequence of $$\left\{\frac{1}{\sqrt{| \mbox{Re}(\beta(u_{k}))|}}\frac{\partial u_{k}}{\partial t}(t_k, \theta):\,\, k=1, 2, 3, \cdots\right\}$$ such that the subsequence converges to $\xi$. Moreover, we have $$|\xi|=\frac{1}{\sqrt{2\pi}}.$$ \end{lem} By Lemma 2.6 in \cite{Chen-Li-Wang}, we also have \begin{lem}\label{theta.energy2} Let $\{u_k\}$ satisfy the same conditions as in Lemma 3.3. Then, for any fixed $0<\lambda<1$ and $T>0$, we have $$\lim_{k\rightarrow\infty}\sup_{t\in [-\lambda T_k,\,\,\lambda T_k]}\frac{1}{|\mbox{Re} \ \beta(u_{k})|}\int_{[t-T,\,\, t+T]\times S^1}\left|\frac{\partial u_k}{\partial\theta}\right|^{2}dtd\theta=0.$$ \end{lem} As in \cite{Chen-Li-Wang}, we introduce the following sequence of curves in $\mathbb{R}^K$ defined by $$u_k^*(t)=\frac{1}{2\pi}\int_0^{2\pi}u_k(t,\theta)d\theta.$$ Obviously, these curves are smooth. Now, for each $k$, let $s$ be the arc-length parametrization of the curve $u_k^*(t)$ with $s(0)=0$.
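For the reader's convenience, we remark that the $t$-independence of $\beta(u)$ in Lemma \ref{zhu1} follows from the standard holomorphicity of the Hopf differential (cf. \cite{Zhu}); we sketch the computation. Writing $z=t+\sqrt{-1}\theta$ and $u_z=\frac{1}{2}(u_t-\sqrt{-1}u_\theta)$, the integrand of $\beta(u)$ is $$\phi:=4\,u_z\cdot u_z=|u_t|^2-|u_\theta|^2-2\sqrt{-1}\,u_t\cdot u_\theta.$$ Since $u$ is harmonic, $\Delta u=4u_{z\bar{z}}$ is perpendicular to $T_uN$, while $u_z$ is tangent to $N$, so $$\partial_{\bar{z}}\phi=8\,u_{z\bar{z}}\cdot u_z=0,$$ that is, $\phi$ is holomorphic. In particular, $\partial_t\phi=-\sqrt{-1}\,\partial_\theta\phi$, and hence $$\frac{d}{dt}\int_{\{t\}\times S^1}\phi\,d\theta=-\sqrt{-1}\int_{\{t\}\times S^1}\partial_\theta\phi\,d\theta=0$$ by the periodicity of $\phi$ in $\theta$.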
By the arguments in \cite{Chen-Li-Wang}, we have \begin{lem} Under the same conditions as in Lemma 3.3, we have that $u^*_k(s)$ converges smoothly on $[0,a]$ to a geodesic $\gamma$ on $N$ for any fixed $a>0$. \end{lem} From now on, we assume $u_k^*(s)$ converges to $\gamma$ on $[0,a]$ for any fixed $a>0$. Set $s(t_k^a)=a$. Similar to Corollary \ref{convergence.of.uk}, we have \begin{cor}\label{convergence.uk.2} Let $\{u_k\}$ satisfy the same conditions as in Lemma 3.3. Then, for any fixed $\theta$, $u_k(s,\theta)$ converges to $\gamma$ in $C^1[0,a]$. Moreover, we have $$t_k^a\rightarrow\infty,\,\,\,\,\,\,\,\, \sqrt{|\mbox{Re} \ \beta(u_{k})|}t_k^a <C(a),$$ and \begin{equation}\label{arclength2} \left\|\frac{1}{\sqrt{| \mbox{Re}(\beta(u_{k}))|}}\left|\frac{\partial s}{\partial t}(s)\right|-\frac{1}{\sqrt{2\pi}}\right\|_{C^0([0,a])}\rightarrow 0. \end{equation} Here $C(a)$ is a positive constant which depends on $a$. \end{cor} Now we turn to the discussion of the asymptotic behavior of the index and the second variation of the energy of $u_k$. Since $\mbox{Ric}_N\geq\lambda>0$, by the well-known Myers theorem we know that, if $$a\geq\frac{\pi}{\sqrt{\lambda(n-1)^{-1}}}+2\epsilon,$$ then there exists a tangent vector field $V_0(s)$ on $N$, which is smooth on $\gamma$, and 0 on $\gamma|_{[0,\epsilon]}$ and $\gamma|_{[a-\epsilon,a]}$, such that the second variation of the length of $\gamma$ satisfies \begin{equation} I_\gamma(V_0,V_0)=\int_0^a(\langle \nabla_{\dot{\gamma}}V_0,\nabla_{\dot{\gamma}}V_0 \rangle-R(V_0,\dot{\gamma},\dot{\gamma},V_0))ds<-\delta<0. \end{equation} Following the arguments in Section 2, we can easily see that the conclusions in Proposition \ref{sy} are implied by the following lemma. \begin{lem} Let $\{u_k\}$ satisfy the same conditions as in Lemma 3.3.
Then, for sufficiently large $k$, there exists a section $V_k$ of $u_{k}^{-1}(TN)$, which is supported in $[0,\,t_k^a]\times S^1$, such that $$\delta^2E(u_k)(V_k, V_k)<0.$$ \end{lem} \proof Let $P$ be the projection from $T\mathbb{R}^K$ to $TN$. We define $$V_k(t,\theta)=P_{u_k(s,\theta)}(V_0(s)),$$ where $s$ is the arc-length parametrization of $u_{k}^*(t)$ with $s(0)=0$. Then, $V_k$ is a smooth section of $u_{k}^{-1}(TN)$ which is supported in $[0,\, t_k^a]\times S^1$. By Corollary \ref{convergence.uk.2}, for any fixed $\theta$, we have $$V_k(s,\theta)\rightarrow V_0(s)\,\,\,\,\,\,\,\, \mbox{in}\,\,\,\, C^1[0,\,a].$$ Next, we will show that \begin{equation}\label{I.I3} \lim_{k\rightarrow+\infty} \frac{1}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\delta^2E(u_k)(V_k,V_k) =2\sqrt{2\pi}I_\gamma(V_0,V_0). \end{equation} Indeed, we have \begin{eqnarray*} \delta^2E(u_k)(V_k,V_k)&=&2\int_0^{2\pi}\int_{0}^{t_k^a} (\langle \nabla V_k,\nabla V_k \rangle-R(V_k,\nabla u_k,\nabla u_k,V_k))dtd\theta\\ \\ &=& 2\int_0^{2\pi}\int_{0}^{t_k^a} (\langle \nabla_{\frac{\partial u_k}{\partial t}} V_k,\nabla_{\frac{\partial u_k}{\partial t}} V_k \rangle-R(V_k,\frac{\partial u_k}{\partial t},\frac{\partial u_k}{\partial t},V_k))dtd\theta\\ &&+2\int_0^{2\pi}\int_{0}^{t_k^a} (\langle \nabla_{ \frac{\partial u_k}{\partial \theta}} V_k,\nabla_{\frac{\partial u_k}{\partial \theta}} V_k \rangle-R(V_k, \frac{\partial u_k}{\partial \theta},\frac{\partial u_k}{\partial \theta},V_k))dtd\theta\\ &=& 2\mathbf{I}+2\mathbf{II}.
\end{eqnarray*} Noting \begin{eqnarray*} \frac{\mathbf{I}}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}&=& \int_0^{2\pi}\int_{0}^{a}\left(\langle \nabla_{\frac{\partial u_k}{\partial s}} V_k,\nabla_{\frac{\partial u_k}{\partial s}} V_k \rangle-R(V_k,\frac{\partial u_k}{\partial s},\frac{\partial u_k}{\partial s},V_k)\right)\\ &&\times\frac{\left|\frac{\partial s}{\partial t}\right|}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}dsd\theta, \end{eqnarray*} we infer from Corollary \ref{convergence.uk.2} that $$\lim_{k\rightarrow+\infty}\frac{\mathbf{I}}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}} =\sqrt{2\pi}I_\gamma(V_0,V_0).$$ On the other hand, we have \begin{eqnarray*} \frac{\mathbf{II}}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}&=& \frac{1}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\int_0^{2\pi}\int_{0}^{t_k^a} \left(\langle \nabla_{ \frac{\partial u_k}{\partial \theta}} V_k,\nabla_{\frac{\partial u_k}{\partial \theta}} V_k \rangle-R(V_k,\frac{\partial u_k}{\partial \theta},\frac{\partial u_k}{\partial \theta},V_k)\right)dtd\theta\\ &\leq& \frac{C}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\int_0^{2\pi}\int_{0} ^{t_k^a}\left|\frac{\partial u_k}{\partial \theta}\right|^2dtd\theta. \end{eqnarray*} For any given $T>0$, we set $$m_k=\left[\frac{t_k^a}{T}\right]+1.$$ By Corollary \ref{convergence.uk.2}, there holds $$\sqrt{|\mbox{Re} \ \beta(u_{k})|}m_k\leq C(T).$$ Hence, it follows that \begin{eqnarray*} \frac{\mathbf{II}}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}} &\leq& \frac{C}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\int_{\cup_{i=0}^{m_k}[iT,(i+1)T]\times S^1} |u_{k,\theta}|^2dtd\theta\\ &\leq& \frac{Cm_k\sqrt{|\mbox{Re} \ \beta(u_{k})|}}{ |\mbox{Re} \ \beta(u_{k})| }\frac{1}{m_k}\int_{\cup_{i=0}^{m_k}[iT,(i+1)T]\times S^1} |u_{k,\theta}|^2dtd\theta\\ &\leq& \frac{C(T)}{m_k}\left(\frac{1}{|\mbox{Re} \ \beta(u_{k})|}\int_{\cup_{i=0}^{m_k}[iT,(i+1)T]\times S^1} |u_{k,\theta}|^2dtd\theta\right). \end{eqnarray*} In view of Lemma \ref{theta.energy2}, we conclude $$ \lim_{k\rightarrow\infty}\frac{1}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\mathbf{II}=0.
$$ It follows immediately that $$\lim_{k\rightarrow+\infty} \frac{1}{\sqrt{|\mbox{Re} \ \beta(u_{k})|}}\delta^2E(u_k)(V_k,V_k) =2\sqrt{2\pi}I_\gamma(V_0,V_0).$$ Hence, for $k$ large enough, we have the desired inequality $$\delta^2E(u_k)(V_k, V_k)<0.$$ Thus, we complete the proof of this lemma. $ \Box$\\ \noindent{\bf Acknowledgement:} Y. Li is supported by NSFC (Grant No. 11131007) and Y. Wang is supported by NSFC (Grant No. 11471316). Yuxiang Li {\small\it Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.R.China.} {\small\it Email: [email protected].}\\ Lei Liu {\small\it Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.R.China.} {\small\it Email: [email protected] }\\ Youde Wang {\small\it Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, Beijing 100080, P.R. China.} {\small\it Email: [email protected]} \end{document}
The malaria testing and treatment landscape in Benin
ACTwatch Group, Cyprien Zinsou & Adjibabi Bello Cherifath
Since 2004, artemisinin-based combination therapy (ACT) has been the first-line treatment for uncomplicated malaria in Benin. In 2016, a medicine outlet survey was implemented to investigate the availability, price, and market share of anti-malarial treatment and malaria diagnostics. Results provide a timely and important benchmark to measure future interventions aimed at increasing access to quality malaria case management services. Between July 5th and August 6th 2016, a cross-sectional, nationally representative malaria outlet survey was conducted in Benin. A census of all public and private outlets with the potential to distribute malaria testing and/or treatment was implemented among 30 clusters (arrondissements). Outlets were eligible for inclusion in the study if they met at least one of three study criteria: (1) one or more anti-malarials reportedly in stock on the day of the survey; (2) one or more anti-malarials reportedly in stock within the 3 months preceding the survey; and/or (3) provided malaria blood testing. An audit was completed for all anti-malarials, malaria rapid diagnostic tests (RDT), and microscopy. In total, 7260 outlets with the potential to sell or distribute anti-malarials were included in the census and 2966 were eligible and interviewed. A total of 17,669 anti-malarial and 494 RDT products were audited. Quality-assured ACT was available in 95.0% of all screened public health facilities and 59.4% of community health workers (CHW), and availability of malaria blood testing was 94.7 and 68.4%, respectively. Sulfadoxine–pyrimethamine (SP) was available in 73.9% of public health facilities and not found among CHWs. Among private-sector outlets stocking at least one anti-malarial, non-artemisinin therapies were most commonly available (94.0% of outlets) as compared to quality-assured ACT (36.1%).
31.3% of the ACTs were marked with a "green leaf" logo, suggesting leakage of a co-paid ACT into Benin's unsubsidized ACT market from another country. 78.5% of anti-malarials were distributed through the private sector, typically through general retailers (47.6% of all anti-malarial distribution). ACT comprised 44% of the private anti-malarial market share. The private-sector price of quality-assured ACT ($1.35) was more than three times that of SP ($0.42) or chloroquine ($0.41). Non-artemisinin therapies were cited as the most effective treatment for uncomplicated malaria among general retailers and itinerant drug vendors. The ACTwatch data have shown the importance of the private sector in terms of access to malaria treatment for the majority of the population in Benin. These findings highlight the need for increased engagement with the private sector to improve malaria case management and an immediate need for a national ACT subsidy. In Benin, important gains in malaria control have been achieved in recent years; however, malaria remains a leading cause of morbidity and mortality. In 2015, the World Health Organization (WHO) reported over two million confirmed malaria cases and 1416 deaths in the country [1]. Malaria is cited as the leading reason for medical consultations and hospitalization in Benin [2]. According to population-based surveys, only 28% of children under 5 received the first-line treatment for uncomplicated malaria [3] and among pregnant women, only one in four were found to use intermittent preventive treatment during pregnancy (IPTp) [4]. The financial impact of malaria is also of concern in Benin. It is estimated that households spend approximately one-quarter of their annual income on the prevention and treatment of malaria; meanwhile, 37% of the Benin population live below the poverty line, with a per capita annual income of only $750 [5].
In 2004, the policy for malaria management in Benin changed when the National Malaria Control Programme (NMCP) introduced artemisinin-based combination therapy (ACT), artemether–lumefantrine (AL), for treatment of uncomplicated malaria [1]. Up to that time, chloroquine had been used as first-line therapy against uncomplicated malaria. In 2011, the guidelines changed and stipulated that patients of all ages should receive a confirmatory malaria test prior to treatment. In 2014, updates to national policy brought malaria case management guidelines further in line with WHO recommendations and stipulated three doses of sulfadoxine–pyrimethamine (SP) for IPTp. The NMCP also updated the national malaria case management guidelines to align with the WHO recommendation for treatment of severe malaria with injectable artesunate and injectable artemether [6], though injectable quinine, followed by a seven-day course of oral quinine, is also still recommended. Treatment for severe malaria should only be administered at a public or private hospital. Oral artemisinin monotherapies have been banned in Benin since 2008 [1]. As a means to promote universal coverage of first-line treatment and increase rates of confirmatory testing, the NMCP took significant steps to improve malaria case management services across the country. In 2011, public-sector initiatives included free malaria case management for children under 5 years of age and pregnant women. Prior to this, public health facilities had charged fees for consultation, medications, and procedures [7]. The 2014–2018 National Malaria Strategic Plan was also developed and set the goal that by 2030, "…malaria would no longer be a public health problem in Benin" [6]. The strategy aims to decrease the number of annual cases by 75% and reduce the mortality rate to 1 death per 100,000 people.
There has been a substantial increase in the procurement of ACT and malaria rapid diagnostic tests (RDT) as a means to increase universal access to malaria commodities. In 2014, over 1.3 million RDT were procured and in 2015, this increased to almost 1.5 million [1]. A similar pattern followed for the procurement of ACT, which increased from 1.1 million in 2014 to 1.2 million in 2015. Commodities such as ACT and RDT have largely been made available through public-sector channels. Other initiatives to improve malaria case management services have included expanding access to primary health care services through the training and equipping of community health workers (CHW), including training on the appropriate use of RDT as well as the management of malaria, pneumonia, diarrhoea, and malnutrition [6]. In 2014, it was estimated that over 12,500 CHW were active in the country. Other public-sector initiatives have included funds for the provision of free healthcare to the extremely poor, and the reinforcement of health financing schemes [8]. There have been no major initiatives targeting the private sector in Benin to improve malaria case management services, despite evidence that over 70% of anti-malarials are distributed through this channel [9]. While the national strategy has included the provision of diagnosis, microscopy or RDT, and ACT in selected private health clinics [10], the scale-up is largely in process and has yet to be routinely implemented [6]. Indeed, the private sector in Benin is known for being diverse and continuously expanding, with most providers operating informally without a license, mainly because the accreditation process is often perceived as difficult and as conveying few benefits [6, 11]. While there is a push to simplify the process by bringing more of the private sector into the formal market, this has yet to be widely implemented.
This lack of private-sector engagement contrasts with several other countries that have benefitted from ACT subsidies aimed to increase access to first-line treatment in the private sector. The most notable of these initiatives was the Affordable Medicines Facility-malaria (AMFm), which continued through 2016 [12, 13] and was implemented in neighbouring Nigeria, as well as seven other countries (Cambodia, Ghana, Kenya, Madagascar, Niger, Uganda, and Tanzania). Through this mechanism, subsidized ACT was available on the market and labelled with a 'green leaf' logo to indicate quality-assurance. By increasing quality-assured ACT on the anti-malarial market, the AMFm also aimed to decrease the use of oral artemisinin monotherapies, and non-artemisinin monotherapies, such as chloroquine. Following the AMFm pilot period, the Global Fund continued to support a quality-assured ACT subsidy programme through the Private Sector Co-payment Mechanism (CPM) [14], but Benin was not part of this initiative. Investigating the anti-malarial and diagnostic market landscape will provide an important benchmark to measure future interventions aimed at increasing access to quality malaria case management services. However, there is limited rigorous evidence on the availability and distribution of anti-malarials and malaria diagnostics in Benin. Since 2008, the multi-country ACTwatch project has been implemented in Benin to fill contemporary evidence gaps by collecting malaria case management commodity market data on anti-malarial medicines, malaria diagnostics, market share, and price in both the private and public sectors [15]. The objective of this paper is to provide practical evidence to inform strategies and policies in Benin towards achieving national malaria control goals, by describing the total market for malaria medicines and diagnostics at the national level according to the most recent survey round. 
Evidence will point to recommendations for improving coverage of appropriate malaria case management. This was the fourth outlet survey implemented in Benin, with previous surveys conducted in 2009, 2011, and 2014 [16,17,18]. This study used a cross-sectional, multi-staged cluster sampling approach and was stratified according to urban/rural areas. The outlet survey followed the design implemented in previous survey rounds and across other ACTwatch countries, and was implemented from July 5th to August 6th, 2016. Sampling approach According to the ACTwatch methodology, outlets are included in the survey if they have the 'potential' to sell or distribute anti-malarials. This includes outlets that may not be expected to stock anti-malarial medicines. For example, while public health facilities would be expected to have anti-malarials in stock, the extent to which general retailers or itinerant drug vendors have anti-malarials available is more debatable. To address this, the ACTwatch study approach is to include all outlets that could 'potentially stock' anti-malarials. Outlets sampled in Benin's public sector included public health facilities (including the national referral hospital, regional hospitals, district hospitals, health centres, and dispensaries); CHW; and private not-for-profit facilities (including non-governmental organisation hospitals and clinics, and faith-based hospitals and clinics). The private-sector outlet types sampled were private for-profit health facilities (including private hospitals, clinics, and diagnostic laboratories); pharmacies (which are registered and licensed by a national regulatory authority); drug stores (dépôts pharmaceutiques); general retailers (grocery stores, kiosks, and market stalls selling fast-moving consumer products); and itinerant drug vendors (mobile, unregistered providers selling medicines).
The primary sampling approach for ACTwatch outlet surveys entails sampling a set of administrative units (geographic clusters) with a population of approximately 10,000–15,000 inhabitants. The most appropriate administrative unit in Benin matching the desired population size was the 'arrondissement'. A representative sample of arrondissements was selected using probability proportional to population size sampling, using data from Benin's fourth Population and Housing census. As public health facilities, pharmacies, and drug shops (dépôts pharmaceutiques) are important providers of anti-malarials but are relatively uncommon, these outlet types were over-sampled. This 'booster' sample was obtained by including all public health facilities, pharmacies, and drug shops located in the larger administrative area (called a 'commune' in Benin) from which a given arrondissement was selected; that is, the booster sample covered all such outlets in the whole commune within which the sampled arrondissements were located. The sample was stratified by urban–rural designation. In total, 30 arrondissements were selected for the main census sample (15 urban, 15 rural). Within each selected arrondissement, a census of all outlet types with the potential to provide anti-malarials or diagnostics to consumers was undertaken. Outlets were eligible for a provider interview and malaria product audit if they met at least one of three study criteria: (1) one or more anti-malarials reportedly in stock on the day of the survey; (2) one or more anti-malarials reportedly in stock within the three months preceding the survey; and/or (3) provision of malaria blood testing (microscopy or RDT). Among eligible outlets, providers were interviewed and all anti-malarials and RDTs were audited.
A series of calculations was completed to identify minimum sample size requirements to detect an increase or decrease in the availability of quality-assured ACT and of malaria blood testing between 2014 and 2016. Calculations examined the sample size required to detect a 20 percentage point change among all outlets, the public sector, the private sector, public health facilities, pharmacies, and general retail outlets. The required sample size for each research domain (urban and rural areas) was calculated in three steps: (1) determine the required number of anti-malarial-stocking outlets, (2) determine the number of outlets to be enumerated to arrive at this number of anti-malarial-stocking outlets, and (3) determine the number of arrondissements to census to arrive at this number of outlets. Required number of anti-malarial stocking outlets The number of anti-malarial-stocking outlets required to detect a change over time is given by: $$ n = \frac{\mathit{deff} \times \left[ Z_{1-\alpha}\sqrt{2P(1-P)} + Z_{1-\beta}\sqrt{P_{1}(1-P_{1}) + P_{2}(1-P_{2})} \right]^{2}}{\left( P_{2} - P_{1} \right)^{2}} $$ where n = the desired sample size; P1 = the proportion of anti-malarial-stocking outlets with quality-assured ACT/malaria blood testing in stock in 2014; P2 = the expected proportion of anti-malarial-stocking outlets with quality-assured ACT/malaria blood testing in stock in 2016 (a 20 percentage point increase or decrease); P = (P1 + P2)/2; Z1−α = the standard normal deviate for a type I error of α (two-sided); Z1−β = the standard normal deviate for a type II error of β; and deff = the design effect for the multi-stage arrondissement sample design. Deff figures from the 2014 dataset were used in sample size calculations.
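As an illustration, the step-1 formula above can be implemented directly. The proportions, design effect, and z-values below are illustrative placeholders, not the survey's actual inputs.

```python
import math

def stocking_outlets_needed(p1, p2, deff, z_alpha=1.96, z_beta=0.84):
    """Anti-malarial-stocking outlets needed to detect a change from
    p1 to p2, inflated by the design effect (deff).

    z_alpha = 1.96 corresponds to a two-sided type I error of 5%;
    z_beta = 0.84 corresponds to 80% power.
    """
    p_bar = (p1 + p2) / 2
    core = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(deff * core / (p2 - p1) ** 2)

# Illustrative: baseline availability of 35%, detection of a
# 20 percentage point increase, design effect of 2.0.
print(stocking_outlets_needed(0.35, 0.55, deff=2.0))
```

The ceiling is taken because a fractional outlet cannot be sampled; the design effect roughly doubles the simple-random-sampling requirement under this clustered design.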
Required number of outlets The number of outlets that needed to be enumerated to achieve the required number of quality-assured ACT-stocking outlets was determined by the following formula, applied within urban and rural domains: $$ N = n / P_{am} $$ where N = the desired sample size of all outlets for monitoring availability indicators, n = the required number of outlets with anti-malarials in stock at the time of the survey, and Pam = the proportion of enumerated outlets with anti-malarials in stock at the time of the survey. The Pam values documented in the 2014 ACTwatch outlet survey within urban and rural areas were used for the 2016 sample size calculations. Required number of arrondissements The average numbers of outlets by outlet type per arrondissement within urban and rural areas, as screened during the 2014 outlet survey, were used to estimate the number of arrondissements required in 2016 to achieve the desired sample sizes. Considering sample size requirements to detect change over time and average numbers of outlets across each outlet type, the optimal minimum number of localities required to reach the desired numbers of outlets was 30 arrondissements (15 urban, 15 rural), plus a booster sample of public health facilities, pharmacies, and drug shops at the commune level. The outlet survey census involved systematically looking for outlets in each arrondissement and using screening questions to identify outlets for inclusion in the study. Provider interviews and anti-malarial audits were conducted in all eligible outlets after informed consent procedures. Up to three call-back visits were made in instances where outlets were closed or providers were not available.
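Steps 2 and 3 can be sketched the same way; the stocking proportion and outlets-per-arrondissement figures below are hypothetical, standing in for the 2014 values the survey actually used.

```python
import math

def outlets_to_enumerate(n_stocking, p_am):
    """Step 2: total outlets to enumerate so that roughly n_stocking
    of them stock anti-malarials (N = n / P_am)."""
    return math.ceil(n_stocking / p_am)

def arrondissements_needed(n_outlets, avg_outlets_per_arrondissement):
    """Step 3: arrondissements to census, given the average outlet
    count per arrondissement observed in the previous round."""
    return math.ceil(n_outlets / avg_outlets_per_arrondissement)

# Illustrative: 192 stocking outlets required, 40% of enumerated
# outlets stock anti-malarials, ~35 outlets per arrondissement.
total_outlets = outlets_to_enumerate(192, 0.40)
print(arrondissements_needed(total_outlets, 35))
```

In practice the result is computed separately for the urban and rural domains and then reconciled against per-outlet-type requirements, which is why the final design settled on 30 arrondissements plus the commune-level booster.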
Data were collected using Android phones, except in pharmacies that stocked a large number of anti-malarial products. In these pharmacies, paper questionnaires were used so that multiple interviewers could audit anti-malarial products simultaneously and shorten the time required to complete the interview. The electronic data collection program was developed using DroidDB (© SYWARE, Inc., Cambridge, MA, USA). The anti-malarial audit recorded information on formulation, package size, brand name, active ingredients and strength(s), manufacturer, country of manufacture, reported sale/distribution in the week preceding the survey, retail price, and wholesale price. The RDT audit collected similar data. In addition to the product audit, a series of questions was administered to the senior-most provider regarding malaria case management knowledge and practices, as well as provider training and qualifications. Standard ACTwatch tools and training materials were used. A training of trainers was conducted in June 2016 and was followed by a pilot test to evaluate the electronic data collection program. Interviewers, supervisors, and quality controllers then received training that included an orientation to the study and a questionnaire overview, with a focus on how to complete the anti-malarial and RDT audits and how to use the electronic data collection program. After the training, a field exercise was conducted outside of the selected arrondissements to provide practical experience for the trainees and to evaluate their performance. Supervisors and quality controllers were then chosen from the highest performers in the group, and these candidates participated in an additional three-day training before the start of data collection. Eight teams were formed, each composed of one supervisor, one quality controller, and five or six interviewers.
Representatives from the research agency, Association Beninoise pour le Marketing Social (ABMS), and the ACTwatch central team provided additional supervision and support to the data collection teams in the field for the entirety of data collection. Data collected with paper questionnaires were double entered and verified using a Microsoft Access database. All data cleaning and analysis were completed using Stata 13.1 (© StataCorp, College Station, TX, USA). Sampling weights were applied to account for variations in the probability of selection, and standard error estimation accounted for clustering at the arrondissement and commune levels. The sampling weights used for the Benin survey are described in further detail in Additional file 1. Standard ACTwatch indicators were calculated in line with previous outlet surveys [9, 15, 19]. Anti-malarials were classified as ACT, non-artemisinin therapy, and oral or non-oral artemisinin monotherapy. ACT were further classified as quality-assured or non-quality assured by matching product information to lists of WHO-prequalified anti-malarials and Global Fund anti-malarial procurement lists. Availability of any anti-malarial was calculated with all screened outlets as the denominator. In the public sector, the availability of specific types of anti-malarials was also calculated using the denominator of all screened outlets, given that anti-malarials should be available at all public health facilities and among CHWs. Availability of specific anti-malarial categories in the private sector was calculated using the total number of private-sector outlets stocking any anti-malarial as the denominator. Market share was defined as the relative distribution of anti-malarials to individual consumers in the week preceding the survey. To allow for meaningful market share comparisons between products, information about anti-malarial distribution was standardized to the adult equivalent treatment dose (AETD).
The AETD is the amount of active ingredient necessary to treat a 60 kg adult according to WHO treatment guidelines [20]. Volumes distributed were calculated by converting provider reports of the number of anti-malarials sold in the week prior to the survey into AETDs; volumes were therefore the number of AETDs sold or distributed by a provider in the seven days prior to the survey. All dosage forms were considered when measuring volumes, to provide a complete assessment of anti-malarial market share. Public and private-sector booster sample outlets were excluded from market share calculations to avoid over-estimating the role of the private sector. The median private-sector price per AETD was calculated for quality-assured ACT and for non-artemisinin therapies, including chloroquine, SP, and quinine. The interquartile range (IQR) was calculated to demonstrate price dispersion. Anti-malarial prices were collected in West African CFA francs (Communauté Financière Africaine) and converted to United States (US) dollars based on official exchange rates for the six-week data collection period. Provider perceptions regarding the most effective first-line treatment were assessed by administering questions to the senior-most provider at all anti-malarial-stocking outlets. Providers were asked to describe what medicine they believed was the most effective treatment for uncomplicated malaria in a child and in an adult. A total of 7,260 outlets were screened for availability of anti-malarials and/or malaria blood testing services. Of the screened outlets, 2,966 met at least one of the three screening criteria; 2,959 of these were stocking anti-malarials on the day of the survey or within the past three months, or provided malaria testing. A total of 17,669 anti-malarial and 494 RDT products were audited (Additional file 2). Public sector availability Table 1 shows availability among all screened public-sector outlets.
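The AETD standardization and market-share calculation described above can be sketched as follows; the drug list, milligram contents, and weekly sales figures are hypothetical examples for illustration, not ACTwatch data.

```python
# Hypothetical audit records: active-ingredient content per package (mg)
# and packages reportedly sold in the past 7 days. The AETD is the mg
# of active ingredient needed to treat a 60 kg adult.
sales = [
    {"category": "quality-assured ACT", "mg_per_pack": 480,
     "aetd_mg": 480, "packs_sold": 10},
    {"category": "SP", "mg_per_pack": 1500,
     "aetd_mg": 1500, "packs_sold": 25},
    {"category": "chloroquine", "mg_per_pack": 750,
     "aetd_mg": 1500, "packs_sold": 12},
]

def market_share(records):
    """Percent of total AETD volume distributed, by category."""
    volumes = {}
    for r in records:
        aetds = r["packs_sold"] * r["mg_per_pack"] / r["aetd_mg"]
        volumes[r["category"]] = volumes.get(r["category"], 0) + aetds
    total = sum(volumes.values())
    return {cat: round(100 * v / total, 1) for cat, v in volumes.items()}

print(market_share(sales))
```

Standardizing to AETDs is what makes a half-dose chloroquine pack count as 0.5 of an adult treatment rather than one "unit", so shares compare treatments rather than packages.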
Availability of any anti-malarial was 95.0% among public health facilities and 59.4% among CHWs. Nine in ten public health facilities stocked quality-assured ACT (89.9%), as did 54.8% of CHWs. Among public health facilities, availability of the four AL pack sizes (6, 12, 18, and 24 tablets), suitable for the management of four patient weight categories (5–14, 15–24, 25–34, and ≥35 kg), ranged from 48.8 to 65.9% (Additional file 3). Among CHWs, 50.4% had AL for children 5–14 kg in stock (a package of six tablets), and availability of the other weight/age formulations was less than 5%. SP was available in 73.9% of public health facilities and was not found among CHWs. Oral quinine was available in 87.7% of public health facilities and among 2.3% of CHWs. Table 1 Availability of anti-malarial and malaria blood testing among all public sector outlets screened Availability of malaria blood testing was 94.7% among public health facilities and 68.4% among CHWs. Malaria blood testing stocking rates were largely attributable to the availability of RDT. The readiness of public-sector outlets for malaria case management, defined as stocking both quality-assured ACT and malaria blood testing, was 89.0% among public health facilities and 49.7% among CHWs. Private sector availability Among all screened private-sector outlets, availability of anti-malarials was as follows: private for-profit facilities, 85.8%; pharmacies, 94.6%; general retailers, 27.5%; and itinerant drug vendors, 67.7% (Table 2). Table 2 Availability of anti-malarial and malaria blood testing among the private outlets Among outlets with at least one anti-malarial in stock, 36.1% had a quality-assured ACT. This was most commonly available among pharmacies (90.0%) compared with private for-profit facilities, general retailers, and itinerant drug vendors (36.4, 35.4, and 34.2%, respectively). Overall, 31.3% of ACT in the private sector were marked with the 'green leaf' logo.
Adult quality-assured ACT was available in 24.6% of private-sector outlets. The three child formulations were each available in less than 15% of private-sector outlets (Additional file 4). Chloroquine was available in 59.2% of the private sector, followed by oral quinine (42.5%) and SP (36.4%), though there were several differences across outlet types. For example, chloroquine was most commonly stocked by general retailers (71.3%), while SP was most commonly available among itinerant drug vendors (68.1%) and oral quinine was available in 70.5% of private for-profit facilities. Anti-malarial market share Figure 1 shows the market share of different categories of anti-malarials sold or distributed in the seven days prior to the survey. A total of 25,427 anti-malarial AETDs were reportedly distributed in the seven days before the survey. The public sector accounted for 21.5% of the anti-malarial market share, comprised mostly of quality-assured ACT without the 'green leaf' logo (9.9% of total market share) and of SP (6.5% of the total market). Almost 80% of anti-malarials were distributed through the private sector (78.5%). Quality-assured ACT with the 'green leaf' logo comprised 15.6% of the total anti-malarial market share, followed by non-quality assured ACT (without the logo), which comprised 14.3%. SP made up the largest market share of the non-artemisinin therapies (24.7%), followed by chloroquine (13.3%) and oral quinine (6.5%). Overall, general retailers dominated the anti-malarial market, accounting for 47.6% of the total market share in Benin, and these providers distributed most of the quality-assured ACT with the 'green leaf' logo (13.4% of total market share), SP (14.7%), and chloroquine (12.0%). Malaria diagnostic market share Figure 2 shows the diagnostic market share of different types of malaria tests administered in the seven days prior to the survey.
A total of 6,712 malaria test units, either microscopy or RDT, were reportedly distributed or used in the seven days prior to the outlet survey. Most malaria testing was performed through the public sector, which accounted for 82.2% of the total diagnostic testing market share. Microscopy testing was rare in both the public and the private sector (14.8 and 6.8%, respectively). Within the private sector, malaria blood testing market share was accounted for entirely by private for-profit health facilities, since none of the other private-sector outlet types reportedly performed or sold malaria testing in the seven days before the survey. Private sector price The private-sector price of one AETD of quality-assured ACT ($1.35, interquartile range [IQR] $1.00–$2.02) was three times that of SP ($0.42, IQR $0.34–$0.51) or chloroquine ($0.41, IQR $0.41–$0.42). The price of one AETD of quinine was $3.54 (IQR $2.83–$4.25), 2.6 times the price of one AETD of quality-assured ACT. Provider perceptions of most effective treatment When providers were asked what they perceived to be the most effective anti-malarial for the treatment of uncomplicated malaria in children or adults, results from the public sector illustrate that most providers cited an ACT. Among public health facility providers, 94.6 and 96.4% perceived an ACT to be the most effective treatment in adults and in children, respectively (Figs. 3, 4). For the question regarding the most effective treatment for adults, 37.2% of CHWs responded that they did not know, while 59.8% perceived an ACT as the most effective for an adult; 91.8% of CHWs perceived an ACT as the most effective for children.
Fig. 3 Providers' perceptions of the most effective treatment for uncomplicated malaria in a child Fig. 4 Providers' perceptions of the most effective treatment for uncomplicated malaria in an adult In the private sector, 62.7% of private for-profit facility providers and 93% of pharmacy providers cited an ACT as the most effective treatment for adults, and 73.4 and 94.9%, respectively, cited it as most effective for children. Non-artemisinin therapies, typically chloroquine and quinine, were cited as the most effective treatment among general retailers (chloroquine, children: 24.8%; adults: 34.4%; quinine, children: 15.4%; adults: 18.3%) and itinerant drug vendors (chloroquine, children: 17.6%; adults: 29.8%; quinine, children: 43.1%; adults: 30.5%). SP was commonly cited as the most effective treatment for adults by itinerant drug vendors (29.8%). The 2016 outlet survey provided a complete picture of the malaria testing and treatment landscape across the public and private sectors, providing information on availability, market share, price, and provider perceptions. The findings point to recommendations for improving private-sector malaria case management in Benin. Public sector readiness for appropriate malaria case management Public health facilities showed high readiness for appropriate case management in Benin, with nearly universal coverage of quality-assured ACT and malaria blood testing in these facilities. These findings reflect national strategies that have been in place since 2011, which stipulate confirmatory testing prior to treatment for all ages and at all levels of care [6]. The current levels of readiness reflect a substantial increase from the diagnostic availability measured in 2011, when just over half of public health facilities had malaria testing available (56.8%) [17], illustrating that national policy has been successful in increasing access to confirmatory testing in this sector.
Three-quarters of public health facilities had SP available for intermittent preventive treatment in pregnancy (IPTp), reflecting an increase over time from 17.2% in 2011 and 44.7% in 2014, and suggesting that substantial progress has been made in the scale-up of SP for IPTp [17, 18]. This is in line with recent national strategies to increase access to SP, including changes to the dosing regimen and efforts to provide malaria services free of charge to pregnant women [6]. Availability of oral quinine, recommended for the treatment of uncomplicated malaria during the first trimester of pregnancy, was also high, with over 85% of public health facilities stocking this medicine. These findings illustrate overall readiness among public health facilities to manage malaria in pregnant women. According to the 2015 national guidelines, injectable quinine followed by oral quinine is still the recommended treatment for severe malaria, which could explain the high levels of quinine availability in public health facilities. However, it is possible that quinine is being used for uncomplicated malaria, given that it is widely available throughout all types of public health facilities, whereas quinine should only be administered at hospitals, which are equipped to manage patients with severe malaria. Furthermore, while a full course of quinine tablets is indicated for the treatment of severe malaria, this should only be administered after primary treatment with injectable quinine. Market share data, however, illustrate that oral quinine comprises one in five anti-malarials distributed in the public sector, while injectable quinine is negligible, suggesting that oral quinine may be routinely administered for uncomplicated malaria. Indeed, a recent household study in southern Benin found quinine to be the second most used anti-malarial for self-medication (after ACT), suggesting that efforts are needed to ensure the appropriate administration of this anti-malarial [21].
Despite the updated WHO standards recommending injectable artesunate for severe malaria, its availability remains low (5.3%). Efforts are currently underway to identify the barriers to increasing injectable artesunate use for severe malaria treatment in Benin [6]. Since 2014, the reach of the public sector has been extended to the community level through the training and equipping of CHWs with malaria case management skills and supplies (AL and RDT), and several investments have since been made to increase the capacity and coordination of these providers [6]. The results from this survey illustrate that more than half of CHWs had anti-malarials in stock, namely quality-assured ACT, and almost 70% had RDTs. These availability findings also reflect promising changes from earlier survey rounds, when availability of ACT in 2011 was less than 50% and availability of RDT was negligible (<5%). Furthermore, most CHWs perceived ACT to be the most effective treatment for uncomplicated malaria in adults and children. These findings point to the success of a national-level campaign to scale up, train, and supply CHWs to provide ACT and blood testing services. Key areas to address include improving CHW awareness of the most effective anti-malarial for adults, given that almost 40% did not know what this was, and maintaining the supply of RDTs as a means to increase access to confirmatory testing. Role of the private sector in malaria case management Results from the study confirmed the dominant role of the private sector across Benin, where almost 80% of all anti-malarials passed through this sector, mainly through general retailers, which accounted for almost half of the anti-malarial market share in 2016 (47.6%) [17, 18]. Of the 5,600 general retail outlets screened for anti-malarials, over one in four had anti-malarials in stock, reflecting a three-fold increase from previous surveys.
General retailers as a source of anti-malarial treatment have also been documented in other countries, including Madagascar, Myanmar, and Cambodia [22,23,24], and were a common source of treatment in Benin as evidenced by a population-based survey [25]. The results also point to the importance of itinerant drug vendors: over half of those surveyed had anti-malarials available, and they comprised around one tenth of the anti-malarial market share. Trend data also illustrate that the combined anti-malarial market share of general retailers and itinerant drug vendors, subsequently referred to as the 'informal' private sector, has increased over time, from 30.9% in 2011 and 40.1% in 2014 to 56.8% in 2016 [17, 18], illustrating the increasing relevance of these outlets in the delivery of anti-malarial treatment. It is unclear why this increase in the informal market composition has occurred. Given that there is little regulation of the private sector in Benin, the growth of the informal sector may reflect a natural evolution of the market to meet consumer demand for anti-malarials; perhaps these outlets are also more accessible to patients. In the absence of regulation, general retailers and itinerant drug vendors have perhaps responded to consumer demand by stocking anti-malarials in addition to other products. Given that a large portion of private-sector case management is being channelled through these informal outlets, there may be several opportunities to strengthen the malaria case management services provided by these vendors. There are examples in the literature of innovative strategies that have focused on general retailers and itinerant drug vendors to improve access to quality-assured ACT [24]. There is also a growing body of support for itinerant drug vendors as a means to improve home-based management of malaria [26, 27], and these mobile providers have been cited as a useful means to improve the provision of care for malaria [28].
In Benin, there is also documentation of 'associations' of drug vendors, which operate within traditional markets and perform quasi-regulatory functions [11]. The quasi-formal nature of these vendors may make them suitable for accreditation programmes as a means to further regulate, supervise, and engage with the private sector in both ACT and RDT distribution. Such strategies, undertaken in collaboration with the public sector, may help to complement rather than compete with the existing CHW programme. Considering the informal sector in the accreditation process may be an important strategy to accelerate coverage of appropriate case management in Benin. Readiness of the private sector in malaria case management The private sector was generally less well equipped to test and appropriately treat malaria infections than the public sector. Only one-third of private-sector outlets stocked quality-assured ACT, while non-artemisinin therapies were more commonly available and distributed. Availability of malaria testing was also negligible and, consistent with these findings, most malaria tests were administered by the public sector, which comprised over 80% of the diagnostic market share. The fact that most private-sector outlets were not stocking malaria tests suggests that presumptive treatment is widespread. Availability and market share of ACT While neither the AMFm nor the subsequent CPM programme was implemented in Benin, most of the quality-assured ACT reportedly distributed in the private sector carried the AMFm 'green leaf' logo. This indicates leakage of anti-malarials from other countries and suggests that anti-malarials are being illegally traded into non-subsidized private markets. The widespread availability and distribution of quality-assured ACT with the logo is perhaps not surprising considering Benin's supply chain [11]. The domestic anti-malarial market in Benin is relatively small, with few local manufacturers, so the country's supply relies heavily on imports.
Many of the anti-malarial supplies are obtained from the more developed pharmaceutical markets of surrounding countries, most notably Nigeria, and imported largely through the informal sector. Thus, it is quite likely that products with the 'green leaf' logo, a marker of subsidized CPM ACT, have leaked into Benin's private-sector outlets through neighbouring Nigeria. In fact, prior to the AMFm, the illegal importation of medicines from Nigeria was noted as commonplace, with vendors citing ease of access to cheap suppliers in Lagos as a key reason for the illegal imports [11]. The widespread uptake of this illegally imported ACT speaks to the need for a national-level programme targeting the private sector with subsidized quality-assured ACT, to align private-sector outlets with national treatment guidelines, as well as a need to strengthen border control and regulation. Availability and distribution of other, non-quality assured ACT was also high, comprising 14.3% of the anti-malarial market and reflecting a slight increase from earlier survey rounds [16, 17]. This is of concern given that non-quality assured ACT medicines have not received pre-qualification, meaning that they have not necessarily been manufactured according to quality standards yielding safe and efficacious medicines. Moreover, non-quality assured ACT have an increased likelihood of being of poor quality, as evidenced by studies that have tested the pharmacological properties of these medicines [29]. The widespread presence of non-quality assured ACT on the market is of concern because its use poses a threat to appropriate and effective malaria case management.
Availability of different AL formulations While the strength of all first-line AL tablets for the treatment of uncomplicated malaria is the same, implementation of the AL policy includes delivery of four different AL pack sizes (6, 12, 18, and 24 tablets) suitable for the management of four patient weight categories (5–14, 15–24, 25–34, and ≥35 kg). In the private sector, as in the public sector, availability of the different weight-category packs was relatively poor. For example, in the private sector, only 11.4% of private for-profit facilities and 58.6% of pharmacies had AL treatments for children under five. Maintaining a consistent supply of age/weight-appropriate commodities will be key to ensuring that ACT commodities are administered according to the recommended age and weight band of each patient and to preventing medicine packages from being cut or tampered with. This is particularly important given evidence that AL treatment is up to six times more likely to be prescribed if the weight-specific pack is in stock [30]. While several strategies are underway to better manage the supply and procurement of malaria commodities to avoid stock-outs, these have not been fully implemented. A temporary option may be to instruct providers to administer AL even if adequate AL pack sizes are not in stock; however, evidence suggests that this practice may compromise patients' adherence to AL [31] and lead to incorrect dosing [32, 33]. If adequate availability of first-line ACT treatment cannot be ensured, alternative AL preparations that do not depend on separate packaging could also be considered [30]. Availability and use of non-artemisinin therapies Over a decade after the change in first-line treatment for uncomplicated malaria, non-artemisinin therapies, including SP, oral quinine, and chloroquine, accounted for the majority (57.7%) of the market share in the private sector.
SP made up over half of the non-artemisinin therapies reportedly distributed. While most of the SP distribution was through itinerant drug vendors and general retailers, SP was also commonly distributed by pharmacies. The widespread distribution of this medicine implies that it is being used for malaria case management rather than exclusively for IPTp, as recommended. Widespread availability and distribution of oral quinine, particularly among general retailers and itinerant drug vendors, also indicates that it is being used for the treatment of uncomplicated malaria. Widespread distribution of non-artemisinin therapies in Benin might be explained by a number of factors. It may in part be attributed to price, given that SP and chloroquine cost roughly a third as much as quality-assured ACT. Access may also be an important factor: non-artemisinin therapies were more widely available than quality-assured ACT, particularly among general retailers, where most anti-malarials were distributed. Another reason may relate to provider perceptions of the most effective treatment for uncomplicated malaria. In 2016, most of the itinerant drug vendors and general retailers perceived non-artemisinin therapies (SP, chloroquine, or quinine) as the most effective treatment for uncomplicated malaria. To improve private-sector case management, removal of non-artemisinin therapies from the market is paramount, and new strategies are necessary to curtail their consumption and promote the use of quality-assured ACT and RDT in the private sector. Several programmes have been implemented across sub-Saharan Africa to improve private-sector readiness for appropriate malaria case management that could be relevant in the Benin context. A nationwide subsidy similar to the AMFm may be an immediate means to overcome ACT access and affordability issues, as evidenced by the pilot initiative [34, 35].
Once barriers related to access to quality-assured ACT have been addressed, mass-media behaviour change campaigns may be a particularly effective strategy in Benin to increase awareness of the first-line treatment and to promote demand for the quality ACT product. Several studies have demonstrated how consumer demand is associated with treatment and how patient preferences influence provider dispensing behaviour [36,37,38,39]. Specifically in Benin, qualitative research found that provider stocking decisions were overwhelmingly driven by patient demand, which led some outlets not to stock ACT [11]. Furthermore, provider training and supervision may also be merited to improve the quality of case management practices, including accreditation of outlets as previously discussed. Such multi-pronged strategies are likely to improve malaria case management and can improve private-sector readiness and performance, as has been demonstrated in other contexts [12].

Availability of oral artemisinin monotherapy

Oral artemisinin monotherapy poses a serious threat to the continued efficacy of artemisinins, and as such this anti-malarial was banned in Benin in 2008. In 2016, no oral artemisinin monotherapy was detected in the market. This is promising, given ACTwatch outlet survey findings from neighbouring Nigeria showing that private-sector availability of oral artemisinin monotherapy increased from 24.6% in 2013 to 37.3% in 2015 [40]. Given that Nigeria appears to be a source of supply of anti-malarials to Benin's private-sector market, it is important that availability of oral artemisinin monotherapy is routinely monitored. Mystery clients to detect unwanted or banned medicines may be a useful method to do this [41].

The ACTwatch outlet survey design has limitations that have been documented and reported [9, 15, 19].
One point to mention is that while anti-malarial audits were carried out by researchers, sales volumes were reported by the provider, and these responses were open to positive response bias. A comparison of self-reported sales volumes with other methods of capturing market share, such as sales inventory audits or exit interviews, suggests that no method is a gold standard and each has its own limitations [42]. Other limitations specific to Benin's outlet survey include the use of two different forms of data collection (electronic and paper questionnaires). While electronic data collection has the advantage of recording the data instantly, with all the relevant checks and skip patterns built into the programme, it may have heightened respondents' fear that they were being recorded or investigated. In addition, some itinerant vendors could have been missed during the survey, given that these vendors may work late at night and, for security reasons, interviewers only worked during the day and early evening.

The public sector in Benin is typically well equipped to test and appropriately treat malaria according to national treatment guidelines. However, the private sector is responsible for most of the anti-malarial distribution, typically through general retailers, and this channel most commonly distributes non-artemisinin therapies. There is also evidence of leakage of subsidized ACT from neighbouring countries. A national strategy to scale up access to first-line, quality-assured, subsidized treatment is needed as a means to improve coverage and quality of malaria case management services. Strategies to increase coverage of malaria commodities should be supported by interventions to address provider perceptions, as well as consumer behaviours, and innovative approaches to either engage or regulate Benin's informal private sector are needed.
Abbreviations

ABSM: Association Beninoise pour le Marketing Social
AL: artemether–lumefantrine
AETD: adult equivalent treatment dose
AMFm: affordable medicines facility for malaria
CHW: community health worker
CFA: Communauté Financière Africaine
CPM: co-payment mechanism
IPTp: intermittent preventive treatment in pregnancy
IQR: inter-quartile range
NMCP: National Malaria Control Programme
SP: sulfadoxine–pyrimethamine

References

WHO. World malaria report. Geneva: World Health Organization; 2016. http://www.who.int/malaria/publications/world-malaria-report-2016/report/en/. Accessed 3 Apr 2017.
Yaya S, Ze A. Le fardeau socio-économique du paludisme: une analyse économétrique. Quebec City: Presses de l'Université Laval; 2013.
ACTwatch Group. ACTwatch baseline and endline household survey results 2009–2012: Benin, Democratic Republic of Congo, Madagascar, Nigeria, Uganda, Zambia. Washington, DC; 2013. http://www.actwatch.info/sites/default/files/content/publications/attachments/ACTwatch%2520HH%2520Report%2520Multicountry%2520Baseline%2520and%2520Endline.pdf. Accessed 3 Apr 2017.
ICF Macro. Enquête Démographique et de Santé du Bénin 2011–2012. Calverton, Maryland, USA; 2013. http://dhsprogram.com/publications/publication-FR270-DHS-Final-Reports.cfm. Accessed 4 Apr 2017.
World Bank. Les efforts du Bénin pour la réduction de la pauvreté (The efforts of Benin to reduce poverty). Cotonou, Benin; 2014. https://www.imf.org/external/french/pubs/ft/scr/2011/cr11307f.pdf.
PMI. President's malaria initiative Benin malaria operational plan. 2016. https://www.pmi.gov/where-we-work/benin. Accessed 4 Apr 2017.
USAID. Global health initiative: Benin country strategy. 2011. https://www.ghi.gov/wherewework/profiles/benin.html. Accessed 4 Apr 2017.
O'Connell KA, Gatakaa H, Poyer S, Njogu J, Evance I, Munroe E, et al. Got ACTs? Availability, price, market share and provider knowledge of anti-malarial medicines in public and private sector outlets in six malaria-endemic countries. Malar J. 2011;10:326.
République du Bénin. Directives Nationales de Prise en Charge des cas de Paludisme. 2015.
Torres Rueda S, Tougher S, Palafox B, Patouillard E, Goodman C, Hanson K, et al. A qualitative assessment of the private sector antimalarial distribution chain in Benin, 2009. Washington, DC; 2012. http://www.actwatch.info/sites/default/files/content/publications/attachments/SCS%20qualitative%20report%20Benin%20FINAL%2020130117.pdf. Accessed 4 Apr 2017.
Tougher S, ACTwatch Group, Ye Y, Amuasi JH, Kourgueni IA, Thomson R, Goodman C, et al. Effect of the affordable medicines facility-malaria (AMFm) on the availability, price, and market share of quality-assured artemisinin-based combination therapies in seven countries: a before-and-after analysis of outlet survey data. Lancet. 2012;380:1916–26.
Independent Evaluation Team. Independent evaluation of phase 1 of the affordable medicines facility-malaria (AMFm), multi-country independent evaluation report: final report. Calverton, Maryland and London: ICF International and London School of Hygiene and Tropical Medicine; 2012. https://www.theglobalfund.org/media/3011/terg_evaluation2013-2014thematicreviewamfm2012iephase1_report_en.pdf. Accessed 4 Apr 2017.
Global Fund. Use of a private sector co-payment mechanism to improve access to ACTs in the new funding model. 2013.
Shewchuk T, O'Connell KA, Goodman C, Hanson K, Chapman S, Chavasse D. The ACTwatch project: methods to describe anti-malarial markets in seven countries. Malar J. 2011;10:325.
ACTwatch Group. Benin outlet survey report 2014. Washington, DC; 2016. http://www.actwatch.info/sites/default/files/content/publications/attachments/ACTwatch%20Study%20Reference%20Document%20Benin%20Outlet%20Survey%202014.pdf. Accessed 4 Apr 2017.
ACTwatch Group. Benin outlet survey report 2011. Washington, DC; 2013. http://www.actwatch.info/countries/benin/outlet-reports/2011. Accessed 4 Apr 2017.
ACTwatch Group. Benin outlet survey report 2009. Washington, DC; 2011. http://www.actwatch.info/sites/default/files/content/publications/attachments/Benin%20Outlet%20Report%202014%20%28French%29.pdf. Accessed 4 Apr 2017.
O'Connell KA, Poyer S, Solomon T, Munroe E, Patouillard E, Njogu J, et al. Methods for implementing a medicine outlet survey: lessons from the anti-malarial market. Malar J. 2013;12:52.
WHO. Guidelines for the treatment of malaria. 3rd ed. Geneva: World Health Organization; 2015. http://www.who.int/malaria/publications/atoz/9789241549127/en/. Accessed 4 Apr 2017.
Agueh V, Badet M, Jérôme CS, Paraiso M, Azandjemè C, Metonnou C, et al. Prevalence and determinants of antimalarial self-medication in Southern Benin. Int J Trop Dis Health. 2016;18:1–11.
ACTwatch Group. Malaria market trends in Sub-Saharan Africa: 2009–2014. Washington, DC; 2016. http://www.actwatch.info/sites/default/files/content/publications/attachments/SSA%20Brief_FINAL_NO_BLEEDS.pdf. Accessed 4 Apr 2017.
ACTwatch Group, Novotny J, Singh A, Dysoley L, Sovannaroth S, Rekol H. Evidence of successful malaria case management policy implementation in Cambodia: results from national ACTwatch outlet surveys. Malar J. 2016;15:194.
Khin HS, Aung T, Aung M, Thi A, Boxshall M, White C, ACTwatch Group. Using supply side evidence to inform oral artemisinin monotherapy replacement in Myanmar: a case study. Malar J. 2016;15:418.
ACTwatch Group. Household survey Republic of Benin 2011 survey report. Washington, DC; 2013. http://www.actwatch.info/sites/default/files/content/publications/attachments/ACTwatch%20HH%20Report%20Benin%202011.pdf. Accessed 4 Apr 2017.
WHO. Scaling up home-based management of malaria: from research to implementation. Geneva: World Health Organization; 2004. http://www.who.int/tdr/publications/documents/scaling-malaria.pdf. Accessed 4 Apr 2017.
WHO. The roll back malaria strategy for improving access to treatment through home management. Geneva: World Health Organization; 2005. http://www.who.int/malaria/publications/atoz/who_htm_mal_2005_1101/en/. Accessed 4 Apr 2017.
USAID. National strategy for utilizing the potential of private practitioners in child survival. Washington, DC: United States Agency for International Development; 2002. http://pdf.usaid.gov/pdf_docs/Pnacp202.pdf. Accessed 11 Apr 2017.
ACT Consortium Drug Quality Project Team. Quality of artemisinin-containing antimalarials in Tanzania's private sector—results from a nationally representative outlet survey. Am J Trop Med Hyg. 2015;92:75–86.
Zurovac D, Tibenderana JK, Nankabirwa J, Ssekitooleko J, Njogu JN, Rwakimari JB, et al. Malaria case-management under artemether–lumefantrine treatment policy in Uganda. Malar J. 2008;7:181.
Piola P, Fogg C, Bajunirwe F, Biraro S, Grandesso F, Ruzagira E, et al. Supervised versus unsupervised intake of six-dose artemether–lumefantrine for treatment of acute, uncomplicated Plasmodium falciparum malaria in Mbarara, Uganda: a randomised trial. Lancet. 2005;365:1467–73.
Zurovac D, Njogu J, Akhwale W, Hamer DH, Snow RW. Translation of artemether–lumefantrine treatment policy into paediatric clinical practice: an early experience from Kenya. Trop Med Int Health. 2008;13:99–107.
Zurovac D, Ndhlovu M, Sipilanyambe N, Chanda P, Hamer DH, Simon JL, et al. Paediatric malaria case-management with artemether–lumefantrine in Zambia: a repeat cross-sectional study. Malar J. 2007;6:31.
Willey BA, Tougher S, Ye Y, Mann AG, Thomson R, Kourgueni IA. Communicating the AMFm message: exploring the effect of communication and training interventions on private for-profit provider awareness and knowledge related to a multi-country anti-malarial subsidy intervention. Malar J. 2014;13:46.
Tougher S, Mann AG, Ye Y, Kourgueni IA, Thomson R, Amuasi JH, et al. Improving access to malaria medicine through private-sector subsidies in seven African countries. Health Aff. 2014;33:1576–85.
Thein ST, Sudhinaraset M, Khin HS, McFarland W, Aung T. Who continues to stock oral artemisinin monotherapy? Results of a provider survey in Myanmar. Malar J. 2016;15:334.
Palafox B, Patouillard E, Tougher S, Goodman C, Hanson K, Kleinschmidt I, et al. Understanding private sector antimalarial distribution chains: a cross-sectional mixed methods study in six malaria-endemic countries. PLoS ONE. 2014;9:e93763.
Manoff RK. Getting your message out with social marketing. Am J Trop Med Hyg. 1997;57:260–5.
Mangham LJ, Cundill B, Ezeoke O, Nwala E, Uzochukwu BS, Wiseman V, et al. Treatment of uncomplicated malaria at public health facilities and medicine retailers in south-eastern Nigeria. Malar J. 2011;10:155.
ACTwatch Group. ACTwatch study reference document: Nigeria outlet survey 2015. Washington, DC; 2017. http://www.actwatch.info/sites/default/files/content/publications/attachments/Nigeria_2015%20OS_Reference%20Document.pdf. Accessed 4 Apr 2017.
Tabernero P, Mayxay M, Culzoni MJ, Dwivedi P, Swamidoss I, Allan EL, et al. A repeat random survey of the prevalence of falsified and substandard antimalarials in the Lao PDR: a change for the better. Am J Trop Med Hyg. 2015;92:95–104.
Patouillard E, Kleinschmidt I, Hanson K, Pok S, Palafox B, Tougher S, et al. Comparative analysis of two methods for measuring sales volumes during malaria medicine outlet surveys. Malar J. 2013;12:311.

Authors' contributions

The manuscript was conceived and drafted by members of the ACTwatch Group: Justin Rahariniaina, Catherine A. Hurley, Megan Littrell, Kathryn A. O'Connell. Data cleaning and analysis were completed by members of the ACTwatch Group: Kevin Duff, Justin Rahariniaina, Anna Fulton. CZ and ABZ assisted with interpretation of the study findings.
All authors read and approved the final manuscript. We would like to thank Sitraka Ramamonjisoa for his assistance during the data collection and the team at ABMS for their support during the outlet survey. The authors are grateful to country teams in Benin who undertook the surveys and to the study participants for their time and participation.

ACTwatch Group (2008–2017) Dr. Louis Akulayi; Angela Alum; Andrew Andrada; Julie Archer; Ekundayo D. Arogundade; Erick Auko; Abdul R. Badru; Dr. Katie Bates; Dr. Paul Bouanchaud; Meghan Bruce; Katia Bruxvoort; Peter Buyungo; Angela Camilleri; Dr. Emily D. Carter; Dr. Steven Chapman; Nikki Charman; Dr. Desmond Chavasse; Robyn Cyr; Kevin Duff; Gylsain Guedegbe; Keith Esch; Illah Evance; Anna Fulton; Hellen Gataaka; Tarryn Haslam; Emily Harris; Christine Hong; Catharine Hurley; Whitney Isenhower; Enid Kaabunga; Baraka D Kaaya; Esther Kabui; Dr. Beth Kangwana; Lason Kapata; Henry Kaula; Gloria Kigo; Irene Kyomuhangi; Aliza Lailari; Sandra LeFevre; Dr. Megan Littrell (Principal Investigator, 2014–2017); Greta Martin; Daniel Michael; Erik Monroe; Godefroid Mpanya; Felton Mpasela; Felix Mulama; Dr. Anne Musuva; Julius Ngigi; Edward Ngoma; Marjorie Norman; Bernard Nyauchi; Dr. Kathryn A. O'Connell (Principal Investigator, 2008–2012); Carolyne Ochieng; Edna Ogada; Linda Ongwenyi; Ricki Orford; Saysana Phanalasy; Stephen Poyer; Dr. Justin Rahariniaina; Jacky Raharinjatovo; Lanto Razafindralambo; Solofo Razakamiadana; Christina Riley; Dr. John Rodgers; Dr. Andria Rusk; Tanya Shewchuk; Simon Sensalire; Julianna Smith; Phok Sochea; Tsione Solomon; Raymond Sudoi; Martine Esther Tassiba; Katherine Thanel; Rachel Thompson; Mitsuru Toda; Chinazo Ujuju; Marie-Alix Valensi; Dr. Vamsi Vasireddy (Principal Investigator, 2013); Cynthia B. Whitman; Cyprien Zinsou.
The datasets generated and/or analysed during the current study are available in the figshare repository https://doi.org/10.6084/m9.figshare.c.3749747_D10. The 2016 outlet survey protocol received ethical approval from the Benin National Research Ethics Committee (Comité National d'Ethique pour la Recherche en Santé—No_54/MS/DC/SGM/DFR/CNPERS/SA) and from the PSI Research Ethics Board. Provider interviews and product audits were completed only after administration of a standard consent form. Providers had the option to end the interview at any point during the study. Standard measures were employed to maintain provider confidentiality and anonymity. The Benin 2016 ACTwatch outlet survey and the production of this manuscript received financial support from the Bill and Melinda Gates Foundation.

Association Beninoise pour le Marketing Social, Lot 919 Immeuble Montcho, Sikecodji, Cotonou, Republic of Benin
Programme National de Lutte contre le Paludisme, Ministère de la Santé, Cotonou, Benin
Adjibabi Bello Cherifath

ACTwatch Group, Angela Alum, Andrew Andrada, Julie Archer, Ekundayo D. Arogundade, Erick Auko, Abdul R. Badru, Katie Bates, Paul Bouanchaud, Meghan Bruce, Katia Bruxvoort, Peter Buyungo, Angela Camilleri, Emily D. Carter, Steven Chapman, Nikki Charman, Desmond Chavasse, Robyn Cyr, Kevin Duff, Gylsain Guedegbe, Keith Esch, Illah Evance, Anna Fulton, Hellen Gataaka, Tarryn Haslam, Emily Harris, Christine Hong, Catharine Hurley, Whitney Isenhower, Enid Kaabunga, Baraka D. Kaaya, Esther Kabui, Beth Kangwana, Lason Kapata, Henry Kaula, Gloria Kigo, Irene Kyomuhangi, Aliza Lailari, Sandra LeFevre, Megan Littrell, Greta Martin, Daniel Michael, Erik Monroe, Godefroid Mpanya, Felton Mpasela, Felix Mulama, Anne Musuva, Julius Ngigi, Edward Ngoma, Marjorie Norman, Bernard Nyauchi, Kathryn A. O'Connell, Carolyne Ochieng, Carolyne Ogada, Linda Ongwenyi, Ricki Orford, Saysana Phanalasy, Stephen Poyer, Justin Rahariniaina, Jacky Raharinjatovo, Lanto Razafindralambo, Solofo Razakamiadana, Christina Riley, John Rodgers, Andria Rusk, Tanya Shewchuk, Simon Sensalire, Julianna Smith, Phok Sochea, Tsione Solomon, Raymond Sudoi, Martine Esther Tassiba, Katherine Thanel, Rachel Thompson, Mitsuru Toda, Chinazo Ujuju, Marie-Alix Valensi, Vamsi Vasireddy, Cynthia B. Whitman & Cyprien Zinsou

Correspondence to Cyprien Zinsou.

Sampling weights. Detailed sample description. Availability of quality-assured AL among all screened public sector outlets. Availability of quality-assured AL among anti-malarial stocking private outlets.

ACTwatch Group, Zinsou C, Cherifath AB. The malaria testing and treatment landscape in Benin. Malar J. 2017;16:174. https://doi.org/10.1186/s12936-017-1808-x

Malaria case management
Diagnostic test
ACT subsidy
Sumario Compendioso

The Sumario Compendioso was the first mathematics book published in the New World. The book was published in Mexico City in 1556 by the clergyman Juan Diez.

Availability

The book has been digitized and is available on the Internet. Before the digital age, the only four known surviving copies were preserved at the Huntington Library, San Marino, California; the British Library, London; Duke University Library; and the University of Salamanca in Spain.[1]

Excerpts

In his book The Math Book, Clifford A. Pickover provided the following information about the Sumario Compendioso: "The Sumario Compendioso, published in Mexico City in 1556, is the first work on mathematics printed in the Americas. The publication of Sumario Compendioso in the New World preceded by many decades the emigration of the Puritans to North America and the settlement in Jamestown, Virginia. The author, Brother Juan Diez, was a companion of Hernando Cortes, the Spanish conquistador, during Cortes's conquests of the Aztec Empire."[2]

References

1. old.nationalcurvebank.org
2. Clifford A. Pickover (2009). The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. Sterling Publishing Company, Inc. p. 120. ISBN 978-1-4027-5796-9. Retrieved 29 July 2012.

External links

• Open Library
• HathiTrust
• JSTOR
• Archive.org
NCERT Solutions for Class 10 Maths Chapter 11 Constructions

NCERT Solutions for Class 10 Maths Chapter 11 Constructions - Free PDF

NCERT Solutions for Class 10 Chapter 11, Constructions, are well crafted by subject-matter experts at Vedantu. They have developed the NCERT Solutions as per the latest syllabus set by the CBSE board. Vedantu also provides relevant notes for the Maths NCERT Solutions Class 10 to give a better understanding of the concepts, and you can download the free PDF format of the NCERT Solutions for Chapter 11 from Vedantu's official website, where NCERT Solutions for other subjects and classes are also available. If you have any queries relating to the concepts, you can reach out to our experienced teachers. In this chapter, you will learn how to determine a point dividing a line segment internally in a given ratio, the construction of similar triangles, the construction of a tangent to a circle, the construction of a pair of tangents, and the construction of a pair of tangents inclined to each other at a given angle. Below are some basic reference notes that will help you solve the exercises of Chapter 11.

Access NCERT Solutions for Class 10 Mathematics Chapter 11 – Constructions

1. Draw a line segment of length 7.6 cm and divide it in the ratio 5:8. Measure the two parts. Give a proper justification for the construction.

Answer: Given: Draw a line segment of length 7.6 cm and divide it in the ratio 5:8. To find: Measure the two parts, and give a proper justification of the construction.

A line segment of length $7.6\;{\text{cm}}$ can be divided in the ratio of $5:8$ as follows.
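As a quick numeric companion to the compass construction that follows, the expected lengths of the two parts can be computed from the internal section formula. This is an illustrative sketch only (the variable names are ours and this computation is not part of the NCERT procedure):

```python
# Dividing a segment of length 7.6 cm internally in the ratio 5:8.
# By the internal section formula, the two parts are L*m/(m+n) and L*n/(m+n).

length = 7.6
m, n = 5, 8  # desired ratio AC : CB

ac = length * m / (m + n)   # 7.6 * 5 / 13
cb = length * n / (m + n)   # 7.6 * 8 / 13

print(round(ac, 2), round(cb, 2))  # 2.92 4.68
```

These values agree, to ruler precision, with the measured lengths of about 2.9 cm and 4.7 cm reported in the solution.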
Step 1: Draw line segment ${\text{AB}}$ of $7.6\;{\text{cm}}$ and draw a ray ${\text{AX}}$ making an acute angle with line segment ${\text{AB}}$.

Step 2: Locate $13\,( = 5 + 8)$ points, ${A_1},{A_2},{A_3},{A_4}, \ldots ,{A_{13}}$, on ${\text{AX}}$ such that $A{A_1} = {A_1}{A_2} = {A_2}{A_3}$ and so on.

Step 3: Join ${\text{B}}{{\text{A}}_{13}}$.

Step 4: Through the point ${{\text{A}}_5}$, draw a line parallel to ${\text{B}}{{\text{A}}_{13}}$ (by making an angle equal to $\angle {\text{A}}{{\text{A}}_{13}}{\text{B}}$ at ${{\text{A}}_5}$) intersecting ${\text{AB}}$ at point ${\text{C}}$.

${\text{C}}$ is the point dividing line segment ${\text{AB}}$ of $7.6\;{\text{cm}}$ in the required ratio of $5:8$. The lengths of ${\text{AC}}$ and ${\text{CB}}$ can be measured. They come out to $2.9\;{\text{cm}}$ and $4.7\;{\text{cm}}$ respectively.

The construction can be justified by proving that $\dfrac{{{\text{AC}}}}{{{\text{CB}}}} = \dfrac{5}{8}$.

By construction, we have ${{\text{A}}_5}{\text{C}}\parallel {{\text{A}}_{13}}{\text{B}}$. By applying the basic proportionality theorem to the triangle ${\text{A}}{{\text{A}}_{13}}{\text{B}}$, we obtain

$\dfrac{{{\text{AC}}}}{{{\text{CB}}}} = \dfrac{{{\text{A}}{{\text{A}}_5}}}{{{{\text{A}}_5}{{\text{A}}_{13}}}} \ldots (1)$

From the figure, it can be observed that ${\text{A}}{{\text{A}}_5}$ and ${{\text{A}}_5}{{\text{A}}_{13}}$ contain 5 and 8 equal divisions of line segments respectively.

$\dfrac{{{\text{A}}{{\text{A}}_5}}}{{{{\text{A}}_5}{{\text{A}}_{13}}}} = \dfrac{5}{8} \ldots (2)$

On comparing equations (1) and (2), we obtain $\dfrac{{{\text{AC}}}}{{{\text{CB}}}} = \dfrac{5}{8}$. This justifies the construction.

2. Construct a triangle of sides 4 cm, 5 cm and 6 cm and then a triangle similar to it whose sides are $\dfrac{2}{3}$ of the corresponding sides of the first triangle. Give a proper justification of the construction.
Given: Construct a triangle of sides 4 cm, 5 cm and 6 cm and then a triangle similar to it whose sides are $\dfrac{2}{3}$ of the corresponding sides of the first triangle.

To find: Give a proper justification of the construction.

Step 1. Draw a line segment ${\text{AB}} = 4\;{\text{cm}}$. Taking point ${\text{A}}$ as centre, draw an arc of $5\;{\text{cm}}$ radius. Similarly, taking point ${\text{B}}$ as centre, draw an arc of $6\;{\text{cm}}$ radius. These arcs will intersect each other at point ${\text{C}}$. Now, ${\text{AC}} = 5\;{\text{cm}}$ and ${\text{BC}} = 6\;{\text{cm}}$, and $\Delta {\text{ABC}}$ is the required triangle.

Step 2. Draw a ray ${\text{AX}}$ making an acute angle with line ${\text{AB}}$ on the opposite side of vertex ${\text{C}}$.

Step 3. Locate 3 points ${{\text{A}}_1},{{\text{A}}_2},{{\text{A}}_3}$ (as 3 is the greater of 2 and 3) on line ${\text{AX}}$ such that ${\text{A}}{{\text{A}}_1} = {{\text{A}}_1}{{\text{A}}_2} = {{\text{A}}_2}{{\text{A}}_3}$.

Step 4. Join ${\text{B}}{{\text{A}}_3}$ and draw a line through ${{\text{A}}_2}$ parallel to ${\text{B}}{{\text{A}}_3}$ to intersect ${\text{AB}}$ at point ${{\text{B}}^\prime }$.

Step 5. Draw a line through ${{\text{B}}^\prime }$ parallel to the line ${\text{BC}}$ to intersect ${\text{AC}}$ at ${{\text{C}}^\prime }$.

$\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ is the required triangle.
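Before the formal proof, the $\dfrac{2}{3}$ scaling can be checked numerically. The sketch below places the 4–5–6 triangle in coordinates (a placement of our own choosing, not part of the NCERT solution), scales B and C about A by 2/3, and confirms the new side lengths:

```python
# Numeric check for Problem 2: scaling triangle ABC (AB = 4, AC = 5, BC = 6)
# about vertex A with ratio 2/3. The coordinate placement is our own.

import math

AB, AC, BC = 4.0, 5.0, 6.0

# Place A at the origin and B on the x-axis; solve for C from the distances.
A = (0.0, 0.0)
B = (AB, 0.0)
cx = (AB**2 + AC**2 - BC**2) / (2 * AB)
cy = math.sqrt(AC**2 - cx**2)
C = (cx, cy)

k = 2 / 3  # scale factor about A
B1 = (k * B[0], k * B[1])  # B'
C1 = (k * C[0], k * C[1])  # C'

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = (dist(A, B1), dist(A, C1), dist(B1, C1))
print(sides)  # approximately (2.67, 3.33, 4.0), i.e. 2/3 of (4, 5, 6)
```

Each side of $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ comes out to $\dfrac{2}{3}$ of the corresponding side of $\vartriangle {\text{ABC}}$, which is exactly what the justification below proves with similar triangles.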
The construction can be justified by proving that ${\text{A}}{{\text{B}}^\prime } = \dfrac{2}{3}{\text{AB}}$, ${{\text{B}}^\prime }{{\text{C}}^\prime } = \dfrac{2}{3}{\text{BC}}$ and ${\text{A}}{{\text{C}}^\prime } = \dfrac{2}{3}{\text{AC}}$.

By construction, we have ${{\text{B}}^\prime }{{\text{C}}^\prime }\parallel {\text{BC}}$.

$\therefore \angle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime } = \angle {\text{ABC}}$ (Corresponding angles)

In $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ and $\vartriangle {\text{ABC}}$,

$\angle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime } = \angle {\text{ABC}}$ (Proved above)

$\angle {{\text{B}}^\prime }{\text{A}}{{\text{C}}^\prime } = \angle {\text{BAC}}$ (Common)

$\therefore \vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime } \sim \vartriangle {\text{ABC}}$ (AA similarity criterion)

$ \Rightarrow \dfrac{{{\text{A}}{{\text{B}}^\prime }}}{{{\text{AB}}}} = \dfrac{{{{\text{B}}^\prime }{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{{{\text{A}}{{\text{C}}^\prime }}}{{{\text{AC}}}} \ldots (1)$

In $\vartriangle {\text{A}}{{\text{A}}_2}{{\text{B}}^\prime }$ and $\vartriangle {\text{A}}{{\text{A}}_3}{\text{B}}$,

$\angle {{\text{A}}_2}{\text{A}}{{\text{B}}^\prime } = \angle {{\text{A}}_3}{\text{AB}}$ (Common)

$\angle {\text{A}}{{\text{A}}_2}{{\text{B}}^\prime } = \angle {\text{A}}{{\text{A}}_3}{\text{B}}$ (Corresponding angles)

$\therefore \vartriangle {\text{A}}{{\text{A}}_2}{{\text{B}}^\prime } \sim \vartriangle {\text{A}}{{\text{A}}_3}{\text{B}}$ (AA similarity criterion)

$ \Rightarrow \dfrac{{{\text{A}}{{\text{B}}^\prime }}}{{{\text{AB}}}} = \dfrac{{{\text{A}}{{\text{A}}_2}}}{{{\text{A}}{{\text{A}}_3}}} = \dfrac{2}{3} \ldots (2)$

From equations (1) and (2),

$\dfrac{{{\text{A}}{{\text{B}}^\prime }}}{{{\text{AB}}}} = \dfrac{{{{\text{B}}^\prime }{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{{{\text{A}}{{\text{C}}^\prime }}}{{{\text{AC}}}} = \dfrac{2}{3}$

$ \Rightarrow {\text{A}}{{\text{B}}^\prime } = \dfrac{2}{3}{\text{AB}}$, ${{\text{B}}^\prime }{{\text{C}}^\prime } = \dfrac{2}{3}{\text{BC}}$ and ${\text{A}}{{\text{C}}^\prime } = \dfrac{2}{3}{\text{AC}}$. This justifies the construction.

3. Construct a triangle with sides $5\;{\text{cm}}$, $6\;{\text{cm}}$ and $7\;{\text{cm}}$ and then another triangle whose sides are $\dfrac{7}{5}$ of the corresponding sides of the first triangle. Give a proper justification of the construction.

Answer: Given: Construct a triangle with sides $5\;{\text{cm}}$, $6\;{\text{cm}}$ and $7\;{\text{cm}}$ and then another triangle whose sides are $\dfrac{7}{5}$ of the corresponding sides of the first triangle.

To find: Give a proper justification of the construction.

Step 1. Draw a line segment ${\text{AB}}$ of $5\;{\text{cm}}$. Taking ${\text{A}}$ and ${\text{B}}$ as centres, draw arcs of $6\;{\text{cm}}$ and $7\;{\text{cm}}$ radius respectively. Let these arcs intersect each other at point ${\text{C}}$. $\vartriangle {\text{ABC}}$ is the required triangle having lengths of sides $5\;{\text{cm}}$, $6\;{\text{cm}}$, and $7\;{\text{cm}}$ respectively.

Step 2. Draw a ray ${\text{AX}}$ making an acute angle with line ${\text{AB}}$ on the opposite side of vertex ${\text{C}}$.

Step 3. Locate 7 points, ${A_1},{A_2},{A_3},{A_4},{A_5},{A_6},{A_7}$ (as 7 is the greater of 5 and 7), on line ${\text{AX}}$ such that ${\text{A}}{{\text{A}}_1} = {{\text{A}}_1}{{\text{A}}_2} = {{\text{A}}_2}{{\text{A}}_3} = {{\text{A}}_3}{{\text{A}}_4} = {{\text{A}}_4}{{\text{A}}_5} = {{\text{A}}_5}{{\text{A}}_6} = {{\text{A}}_6}{{\text{A}}_7}$.

Step 4. Join ${\text{B}}{{\text{A}}_5}$ and draw a line through ${{\text{A}}_7}$ parallel to ${\text{B}}{{\text{A}}_5}$ to intersect the extended line segment ${\text{AB}}$ at point ${{\text{B}}^\prime }$.

Step 5. Draw a line through ${{\text{B}}^\prime }$ parallel to ${\text{BC}}$ intersecting the extended line segment ${\text{AC}}$ at ${{\text{C}}^\prime }$. $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ is the required triangle.
In $\vartriangle {\text{ABC}}$ and $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$,

$\angle {\text{ABC}} = \angle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (Corresponding angles)

$\angle {\text{BAC}} = \angle {{\text{B}}^\prime }{\text{A}}{{\text{C}}^\prime }$ (Common)

$\therefore \vartriangle {\text{ABC}} \sim \vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (AA similarity criterion)

$ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{BC}}}}{{{{\text{B}}^\prime }{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{\text{A}}{{\text{C}}^\prime }}} \ldots (1)$

In $\vartriangle {\text{A}}{{\text{A}}_5}{\text{B}}$ and $\vartriangle {\text{A}}{{\text{A}}_7}{{\text{B}}^\prime }$,

$\angle {{\text{A}}_5}{\text{AB}} = \angle {{\text{A}}_7}{\text{A}}{{\text{B}}^\prime }$ (Common)

$\angle {\text{A}}{{\text{A}}_5}{\text{B}} = \angle {\text{A}}{{\text{A}}_7}{{\text{B}}^\prime }$ (Corresponding angles)

$\therefore \vartriangle {\text{A}}{{\text{A}}_5}{\text{B}} \sim \vartriangle {\text{A}}{{\text{A}}_7}{{\text{B}}^\prime }$ (AA similarity criterion)

$ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{A}}{{\text{A}}_5}}}{{{\text{A}}{{\text{A}}_7}}} = \dfrac{5}{7} \ldots (2)$

On comparing equations (1) and (2), we obtain

$\dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{BC}}}}{{{{\text{B}}^\prime }{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{\text{A}}{{\text{C}}^\prime }}} = \dfrac{5}{7}$

This justifies the construction.

4. Construct an isosceles triangle whose base is 8 cm and altitude 4 cm and then another triangle whose sides are $1\dfrac{1}{2}$ times the corresponding sides of the isosceles triangle. Give a proper justification of the construction.

Answer: Given: Construct an isosceles triangle whose base is 8 cm and altitude 4 cm and then another triangle whose sides are $1\dfrac{1}{2}$ times the corresponding sides of the isosceles triangle.
Let us assume that $\Delta {\text{ABC}}$ is an isosceles triangle having CA and CB of equal lengths, base ${\text{AB}}$ of $8\;{\text{cm}}$, and ${\text{CD}}$ as the altitude of $4\;{\text{cm}}$, where ${\text{D}}$ is the mid-point of ${\text{AB}}$. A $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ whose sides are $\dfrac{3}{2}$ times of $\Delta {\text{ABC}}$ can be drawn as follows. Step 1. Draw a line segment ${\text{AB}}$ of $8\;{\text{cm}}$. Draw arcs of the same radius on both sides of the line segment while taking points ${\text{A}}$ and ${\text{B}}$ as centres. Let these arcs intersect each other at ${\text{O}}$ and ${{\text{O}}^\prime }$. Join ${\text{O}}{{\text{O}}^\prime }$. Let ${\text{O}}{{\text{O}}^\prime }$ intersect ${\text{AB}}$ at ${\text{D}}$. Step 2. Taking D as centre, draw an arc of $4\;{\text{cm}}$ radius which cuts the extended line segment ${\text{O}}{{\text{O}}^\prime }$ at point C. An isosceles $\vartriangle {\text{ABC}}$ is formed, having ${\text{CD}}$ (altitude) as $4\;{\text{cm}}$ and ${\text{AB}}$ (base) as $8\;{\text{cm}}$. Step 3. Draw a ray AX making an acute angle with line segment ${\text{AB}}$ on the opposite side of vertex ${\text{C}}$. Step 4. Locate 3 points (as 3 is greater between 3 and 2) ${{\text{A}}_1},\;{{\text{A}}_2}$, and ${{\text{A}}_3}$ on ${\text{AX}}$ such that ${\text{A}}{{\text{A}}_1} = {{\text{A}}_1}\;{{\text{A}}_2} = {{\text{A}}_2}\;{{\text{A}}_3}$. Step 5. Join ${\text{B}}{{\text{A}}_2}$ and draw a line through ${{\text{A}}_3}$ parallel to ${\text{B}}{{\text{A}}_2}$ to intersect the extended line segment ${\text{AB}}$ at point ${{\text{B}}^\prime }$. Step 6. Draw a line through ${{\text{B}}^\prime }$ parallel to BC intersecting the extended line segment ${\text{AC}}$ at ${{\text{C}}^\prime }$. $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ is the required triangle.
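As a side computation (not needed for the ruler-and-compass steps), the equal sides of this isosceles triangle follow from the Pythagorean theorem applied to the altitude and half the base; the sketch below also scales them by the 3/2 factor:

```python
import math

# Isosceles triangle of Question 4: base 8 cm, altitude 4 cm.
base, altitude = 8.0, 4.0
equal_side = math.hypot(base / 2, altitude)  # CA = CB = sqrt(4^2 + 4^2) = 4*sqrt(2)

scale = 3 / 2
scaled_base = base * scale            # base of the enlarged triangle
scaled_equal_side = equal_side * scale
print(equal_side, scaled_base, scaled_equal_side)
```

The equal sides come out to 4√2 ≈ 5.66 cm, and the scaled triangle has base 12 cm and equal sides of about 8.49 cm.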
In $\vartriangle {\text{ABC}}$ and $\Delta {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$, $\angle {\text{ABC}} = \angle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (Corresponding angles) $\angle {\text{BAC}} = \angle {{\text{B}}^\prime }{\text{A}}{{\text{C}}^\prime }$ (Common) $\therefore \vartriangle {\text{ABC}} \sim \vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{BC}}}}{{{{\text{B}}^\prime }{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{\text{A}}{{\text{C}}^\prime }}}$ In $\Delta {\text{A}}{{\text{A}}_2}\;{\text{B}}$ and $\vartriangle {\text{A}}{{\text{A}}_3}\;{{\text{B}}^\prime }$, $\angle {{\text{A}}_2}{\text{AB}} = \angle {{\text{A}}_3}{\text{A}}{{\text{B}}^\prime }$ (Common) $\angle {\text{A}}{{\text{A}}_2}\;{\text{B}} = \angle {\text{A}}{{\text{A}}_3}\;{{\text{B}}^\prime }$ (Corresponding angles) $\therefore \Delta {\text{A}}{{\text{A}}_2}\;{\text{B}} \sim \vartriangle {\text{A}}{{\text{A}}_3}\;{{\text{B}}^\prime }$ (AA similarity criterion) $\dfrac{{AB}}{{A{B^\prime }}} = \dfrac{{A{A_2}}}{{A{A_3}}} = \dfrac{2}{3}$ $ \Rightarrow {\text{A}}{{\text{B}}^\prime } = \dfrac{3}{2}{\text{AB}},{{\text{B}}^\prime }{{\text{C}}^\prime } = \dfrac{3}{2}{\text{BC}},{\text{A}}{{\text{C}}^\prime } = \dfrac{3}{2}{\text{AC}}$ This justifies the construction. 5. Draw a triangle ${\text{ABC}}$ with side ${\text{BC}} = 6\;{\text{cm}},{\text{AB}} = 5\;{\text{cm}}$ and $\angle {\text{ABC}} = {60^\circ }.$ Then construct a triangle whose sides are $\dfrac{3}{4}$ of the corresponding sides of the triangle ${\text{ABC}}$. Give a proper justification of the construction. Given: Draw a triangle ${\text{ABC}}$ with side ${\text{BC}} = 6\;{\text{cm}},{\text{AB}} = 5\;{\text{cm}}$ and $\angle {\text{ABC}} = {60^\circ }.$ Then construct a triangle whose sides are $\dfrac{3}{4}$ of the corresponding sides of the triangle ${\text{ABC}}$. A $\Delta {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ whose sides are $\dfrac{3}{4}$ th of the corresponding sides of $\vartriangle {\text{ABC}}$ can be drawn as follows. Step 1. Draw a $\Delta {\text{ABC}}$ with side ${\text{BC}} = 6\;{\text{cm}},{\text{AB}} = 5\;{\text{cm}}$ and $\angle {\text{ABC}} = {60^\circ }$. Step 2. Draw a ray BX making an acute angle with BC on the opposite side of vertex A. Step 3. Locate 4 points (as 4 is greater in 3 and 4), ${{\text{B}}_1},\;{{\text{B}}_2},\;{{\text{B}}_3},\;{{\text{B}}_4}$, on line segment BX such that ${\text{B}}{{\text{B}}_1} = {{\text{B}}_1}\;{{\text{B}}_2} = {{\text{B}}_2}\;{{\text{B}}_3} = {{\text{B}}_3}\;{{\text{B}}_4}$. Step 4. Join ${{\text{B}}_4}{\text{C}}$ and draw a line through ${{\text{B}}_3}$ parallel to ${{\text{B}}_4}{\text{C}}$ intersecting ${\text{BC}}$ at ${{\text{C}}^\prime }$. Step 5. Draw a line through ${{\text{C}}^\prime }$ parallel to AC intersecting ${\text{AB}}$ at ${{\text{A}}^\prime }$. $\Delta {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ is the required triangle.
The construction can be justified by proving ${{\text{A}}^\prime }{\text{B}} = \dfrac{3}{4}{\text{AB}},{\text{B}}{{\text{C}}^\prime } = \dfrac{3}{4}{\text{BC}},{{\text{A}}^\prime }{{\text{C}}^\prime } = \dfrac{3}{4}{\text{AC}}$ In $\vartriangle {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ and $\vartriangle {\text{ABC}}$, $\angle {{\text{A}}^\prime }{{\text{C}}^\prime }{\text{B}} = \angle {\text{ACB}}$ (Corresponding angles) $\angle {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime } = \angle {\text{ABC}}$ (Common) $\therefore \Delta {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime } \sim \vartriangle {\text{ABC}}$ (AA similarity criterion) $ \Rightarrow \dfrac{{{{\text{A}}^\prime }{\text{B}}}}{{{\text{AB}}}} = \dfrac{{{\text{B}}{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{{{{\text{A}}^\prime }{{\text{C}}^\prime }}}{{{\text{AC}}}} \ldots $ (1) In $\Delta {\text{B}}{{\text{B}}_3}{{\text{C}}^\prime }$ and $\Delta {\text{B}}{{\text{B}}_4}{\text{C}}$, $\angle {{\text{B}}_3}{\text{B}}{{\text{C}}^\prime } = \angle {{\text{B}}_4}{\text{BC}}$ (Common) $\angle {\text{B}}{{\text{B}}_3}{{\text{C}}^\prime } = \angle {\text{B}}{{\text{B}}_4}{\text{C}}$ (Corresponding angles) $\therefore \Delta {\text{B}}{{\text{B}}_3}{{\text{C}}^\prime } \sim \Delta {\text{B}}{{\text{B}}_4}{\text{C}}$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{B}}{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{{{\text{B}}{{\text{B}}_3}}}{{{\text{B}}{{\text{B}}_4}}}$ $ \Rightarrow \dfrac{{{\text{B}}{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{3}{4}\quad \ldots \ldots .(2)$ From equations (1) and (2), we obtain $\dfrac{{{{\text{A}}^\prime }{\text{B}}}}{{{\text{AB}}}} = \dfrac{{{\text{B}}{{\text{C}}^\prime }}}{{{\text{BC}}}} = \dfrac{{{{\text{A}}^\prime }{{\text{C}}^\prime }}}{{{\text{AC}}}} = \dfrac{3}{4}$ $ \Rightarrow {{\text{A}}^\prime }{\text{B}} = \dfrac{3}{4}{\text{AB}},{\text{B}}{{\text{C}}^\prime } = \dfrac{3}{4}{\text{BC}},{{\text{A}}^\prime }{{\text{C}}^\prime } = 
\dfrac{3}{4}{\text{AC}}$ This justifies the construction. 6. Draw a triangle ${\text{ABC}}$ with side ${\text{BC}} = 7\;{\text{cm}},\angle {\text{B}} = {45^\circ },\angle {\text{A}} = {105^\circ }$. Then, construct a triangle whose sides are $4/3$ times the corresponding sides of $\Delta {\text{ABC}}$. Give a proper justification of the construction. Given: Draw a triangle ${\text{ABC}}$ with side ${\text{BC}} = 7\;{\text{cm}},\angle {\text{B}} = {45^\circ },\angle {\text{A}} = {105^\circ }$. Then, construct a triangle whose sides are $4/3$ times the corresponding sides of $\Delta {\text{ABC}}$. $\angle {\text{B}} = {45^\circ },\angle {\text{A}} = {105^\circ }$ Sum of all interior angles in a triangle is ${180^\circ }$. $\angle {\text{A}} + \angle {\text{B}} + \angle {\text{C}} = {180^\circ }$ ${105^\circ } + {45^\circ } + \angle {\text{C}} = {180^\circ }$ $\angle {\text{C}} = {180^\circ } - {150^\circ }$ $\angle {\text{C}} = {30^\circ }$ The required triangle can be drawn as follows. Step 1. Draw a $\vartriangle {\text{ABC}}$ with side ${\text{BC}} = 7\;{\text{cm}},\angle {\text{B}} = {45^\circ },\angle {\text{C}} = {30^\circ }$. Step 2. Draw a ray BX making an acute angle with ${\text{BC}}$ on the opposite side of vertex ${\text{A}}$. Step 3. Locate 4 points (as 4 is greater in 4 and 3), ${{\text{B}}_1},\;{{\text{B}}_2},\;{{\text{B}}_3},\;{{\text{B}}_4}$, on BX such that ${\text{B}}{{\text{B}}_1} = {{\text{B}}_1}\;{{\text{B}}_2} = {{\text{B}}_2}\;{{\text{B}}_3} = {{\text{B}}_3}\;{{\text{B}}_4}$. Step 4. Join ${{\text{B}}_3}{\text{C}}$. Draw a line through ${{\text{B}}_4}$ parallel to ${{\text{B}}_3}{\text{C}}$ intersecting extended ${\text{BC}}$ at ${{\text{C}}^\prime }$. Step 5. Through ${{\text{C}}^\prime }$, draw a line parallel to ${\text{AC}}$ intersecting the extended line segment ${\text{BA}}$ at ${{\text{A}}^\prime }$. $\vartriangle {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ is the required triangle.
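The value of ∠C used in Step 1 comes from the 180° angle-sum property of a triangle; a minimal Python sketch of that arithmetic (the function name is illustrative, not from the textbook):

```python
def third_angle(angle_a: float, angle_b: float) -> float:
    """Third interior angle of a triangle, from the 180-degree angle-sum property."""
    return 180.0 - (angle_a + angle_b)

# Question 6: angle A = 105 degrees, angle B = 45 degrees.
angle_c = third_angle(105.0, 45.0)
print(angle_c)  # → 30.0
```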
The construction can be justified by proving that ${{\text{A}}^\prime }{\text{B}} = \dfrac{4}{3}{\text{AB}},{\text{B}}{{\text{C}}^\prime } = \dfrac{4}{3}{\text{BC}},{{\text{A}}^\prime }{{\text{C}}^\prime } = \dfrac{4}{3}{\text{AC}}$ In $\vartriangle {\text{ABC}}$ and $\vartriangle {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$, $\angle {\text{ABC}} = \angle {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ (Common) $\angle {\text{ACB}} = \angle {{\text{A}}^\prime }{{\text{C}}^\prime }{\text{B}}$ (Corresponding angles) $\therefore \Delta {\text{ABC}} \sim \Delta {{\text{A}}^\prime }{\text{B}}{{\text{C}}^\prime }$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{AB}}}}{{{{\text{A}}^\prime }{\text{B}}}} = \dfrac{{{\text{BC}}}}{{{\text{B}}{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{{\text{A}}^\prime }{{\text{C}}^\prime }}} \ldots $ (1) In $\vartriangle {\text{B}}{{\text{B}}_3}{\text{C}}$ and $\vartriangle {\text{B}}{{\text{B}}_4}{{\text{C}}^\prime }$, $\angle {{\text{B}}_3}{\text{BC}} = \angle {{\text{B}}_4}{\text{B}}{{\text{C}}^\prime }$ (Common) $\angle {\text{B}}{{\text{B}}_3}{\text{C}} = \angle {\text{B}}{{\text{B}}_4}{{\text{C}}^\prime }$ (Corresponding angles) $\therefore \vartriangle {\text{B}}{{\text{B}}_3}{\text{C}} \sim \vartriangle {\text{B}}{{\text{B}}_4}{{\text{C}}^\prime }$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{BC}}}}{{{\text{B}}{{\text{C}}^\prime }}} = \dfrac{{{\text{B}}{{\text{B}}_3}}}{{{\text{B}}{{\text{B}}_4}}}$ $ \Rightarrow \dfrac{{{\text{BC}}}}{{{\text{B}}{{\text{C}}^\prime }}} = \dfrac{3}{4}\quad \cdots \cdots (2)$ On comparing equations (1) and (2), we obtain $\dfrac{{{\text{AB}}}}{{{{\text{A}}^\prime }{\text{B}}}} = \dfrac{{{\text{BC}}}}{{{\text{B}}{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{{\text{A}}^\prime }{{\text{C}}^\prime }}} = \dfrac{3}{4}$ $ \Rightarrow {{\text{A}}^\prime }{\text{B}} = \dfrac{4}{3}{\text{AB}},{\text{B}}{{\text{C}}^\prime } = \dfrac{4}{3}{\text{BC}},{{\text{A}}^\prime }{{\text{C}}^\prime } = 
\dfrac{4}{3}{\text{AC}}$ This justifies the construction. 7. Draw a right triangle in which the sides (other than hypotenuse) are of lengths 4 cm and 3 cm. Then construct another triangle whose sides are $\dfrac{5}{3}$ times the corresponding sides of the given triangle. Give the justification of the construction. Given: Draw a right triangle in which the sides (other than hypotenuse) are of lengths 4 cm and 3 cm. Then construct another triangle whose sides are $\dfrac{5}{3}$ times the corresponding sides of the given triangle. To find: Give the justification of the construction. It is given that the sides other than the hypotenuse are of lengths $4\;{\text{cm}}$ and $3\;{\text{cm}}.$ Clearly, these will be perpendicular to each other. The required triangle can be drawn as follows. Step 1. Draw a line segment ${\text{AB}} = 4\;{\text{cm}}$. Draw a ray SA making ${90^\circ }$ with it. Step 2. Draw an arc of $3\;{\text{cm}}$ radius while taking A as its centre to intersect ${\text{SA}}$ at ${\text{C}}$. Join ${\text{BC}}$. $\Delta {\text{ABC}}$ is the required triangle. Step 3. Draw a ray AX making an acute angle with ${\text{AB}}$, opposite to vertex ${\text{C}}$. Step 4. Locate 5 points (as 5 is greater in 5 and 3), ${{\text{A}}_1},\;{{\text{A}}_2},\;{{\text{A}}_3},\;{{\text{A}}_4},\;{{\text{A}}_5}$, on line segment ${\text{AX}}$ such that ${\text{A}}{{\text{A}}_1} = {{\text{A}}_1}\;{{\text{A}}_2} = {{\text{A}}_2}\;{{\text{A}}_3} = {{\text{A}}_3}\;{{\text{A}}_4} = {{\text{A}}_4}\;{{\text{A}}_5}$. Step 5. Join ${{\text{A}}_3}\;{\text{B}}$. Draw a line through ${{\text{A}}_5}$ parallel to ${{\text{A}}_3}\;{\text{B}}$ intersecting the extended line segment ${\text{AB}}$ at ${{\text{B}}^\prime }$. Step 6. Through ${{\text{B}}^\prime }$, draw a line parallel to BC intersecting the extended line segment ${\text{AC}}$ at ${{\text{C}}^\prime }$. $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ is the required triangle.
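Though the construction never measures it, the hypotenuse of this right triangle is 5 cm (a 3-4-5 triangle), and the 5/3 scaling can be illustrated numerically. This is only a sanity-check sketch, not part of the construction:

```python
import math

# Question 7: legs of 4 cm and 3 cm, scale factor 5/3.
legs = (4.0, 3.0)
hypotenuse = math.hypot(*legs)             # BC = sqrt(4^2 + 3^2) = 5.0 cm
scale = 5 / 3
scaled_sides = [side * scale for side in (*legs, hypotenuse)]
print(hypotenuse, scaled_sides)
```

The scaled triangle has sides of roughly 6.67 cm, 5 cm and 8.33 cm.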
In $\vartriangle {\text{ABC}}$ and $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$, $\angle {\text{ABC}} = \angle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (Corresponding angles) $\angle {\text{BAC}} = \angle {{\text{B}}^\prime }{\text{A}}{{\text{C}}^\prime }$ (Common) $\therefore \vartriangle {\text{ABC}} \sim \vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{BC}}}}{{{{\text{B}}^\prime }{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{\text{A}}{{\text{C}}^\prime }}}\quad \cdots (1)$ In $\vartriangle {\text{A}}{{\text{A}}_3}\;{\text{B}}$ and $\vartriangle {\text{A}}{{\text{A}}_5}\;{{\text{B}}^\prime }$, $\angle {{\text{A}}_3}{\text{AB}} = \angle {{\text{A}}_5}{\text{A}}{{\text{B}}^\prime }$ (Common) $\angle {\text{A}}{{\text{A}}_3}\;{\text{B}} = \angle {\text{A}}{{\text{A}}_5}\;{{\text{B}}^\prime }$ (Corresponding angles) $\therefore \Delta {\text{A}}{{\text{A}}_3}\;{\text{B}} \sim \Delta {\text{A}}{{\text{A}}_5}\;{{\text{B}}^\prime }$ (AA similarity criterion) $ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{A}}{{\text{A}}_3}}}{{{\text{A}}{{\text{A}}_5}}}$ $ \Rightarrow \dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{3}{5}\quad \cdots (2)$ On comparing equations (1) and (2), we obtain $\dfrac{{{\text{AB}}}}{{{\text{A}}{{\text{B}}^\prime }}} = \dfrac{{{\text{BC}}}}{{{{\text{B}}^\prime }{{\text{C}}^\prime }}} = \dfrac{{{\text{AC}}}}{{{\text{A}}{{\text{C}}^\prime }}} = \dfrac{3}{5}$, so each side of $\vartriangle {\text{A}}{{\text{B}}^\prime }{{\text{C}}^\prime }$ is $\dfrac{5}{3}$ times the corresponding side of $\vartriangle {\text{ABC}}$. This justifies the construction. EXERCISE NO: 11.2 1. Draw a circle of radius 6 cm. From a point 10 cm away from its centre, construct the pair of tangents to the circle and measure their lengths. Give a proper justification of the construction. Given: Draw a circle of radius 6 cm. From a point 10 cm away from its centre, construct the pair of tangents to the circle and measure their lengths. To prove: Give a proper justification of the construction. A pair of tangents to the given circle can be constructed as follows. Step 1. Taking any point O on the given plane as centre, draw a circle of 6 cm radius. Locate a point P, 10 cm away from O. Join OP. Step 2. Bisect OP. Let M be the mid-point of PO. Step 3. Taking M as centre and MO as radius, draw a circle. Step 4. Let this circle intersect the previous circle at point Q and R. Step 5. Join PQ and PR. PQ and PR are the required tangents. The lengths of tangents PQ and PR are 8 cm each. The construction can be justified by proving that PQ and PR are the tangents to the circle (whose centre is O and radius is 6 cm). For this, join OQ and OR. ∠PQO is an angle in the semi-circle. We know that the angle in a semicircle is a right angle.
\[\therefore \angle PQO{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow OQ \bot PQ\] Since OQ is the radius of the circle, PQ has to be a tangent of the circle. Similarly, PR is a tangent of the circle. 2. Construct a tangent to a circle of radius 4 cm from a point on the concentric circle of radius 6 cm and measure its length. Also verify the measurement by actual calculation. Give the justification of the construction. Given: Construct a tangent to a circle of radius 4 cm from a point on the concentric circle of radius 6 cm and measure its length. Also verify the measurement by actual calculation. To prove: Give the justification of the construction. Tangents on the given circle can be drawn as follows. Step 1. Draw a circle of 4 cm radius with centre as O on the given plane. Step 2. Draw a circle of 6 cm radius taking O as its centre. Locate a point P on this circle and join OP. Step 3. Bisect OP. Let M be the mid-point of OP. Step 4. Taking M as its centre and MO as its radius, draw a circle. Let it intersect the given circle at the points Q and R. Step 5. Join PQ and PR. These are the required tangents. It can be observed that PQ and PR are of length 4.47 cm each. In $\Delta {\text{PQO}}$, Since ${\text{PQ}}$ is a tangent, $\angle {\text{PQO}} = {90^\circ }$ ${\text{PO}} = 6\;{\text{cm}}$ ${\text{QO}} = 4\;{\text{cm}}$ Applying Pythagoras theorem in $\Delta {\text{PQO}}$, we obtain ${\text{P}}{{\text{Q}}^2} + {\text{Q}}{{\text{O}}^2} = {\text{P}}{{\text{O}}^2}$ $P{Q^2} + {(4)^2} = {(6)^2}$ ${\text{P}}{{\text{Q}}^2} + 16 = 36$ ${\text{P}}{{\text{Q}}^2} = 36 - 16$ ${\text{P}}{{\text{Q}}^2} = 20$ ${\text{PQ}} = 2\sqrt 5 $ ${\text{PQ}} = 4.47\;{\text{cm}}$ (approximately). The construction can be justified by proving that PQ and PR are the tangents to the circle (whose centre is O and radius is 4 cm). For this, let us join OQ and OR. \[\angle PQO\] is an angle in the semi-circle. We know that the angle in a semicircle is a right angle.
\[ \therefore \angle PQO{\text{ }} = {\text{ }}90^\circ \] \[\Rightarrow OQ \bot PQ \] Since OQ is the radius of the circle, PQ has to be a tangent of the circle. Similarly, PR is a tangent of the circle. 3. Draw a circle of radius 3 cm. Take two points P and Q on one of its extended diameter each at a distance of 7 cm from its centre. Draw tangents to the circle from these two points P and Q. Give the justification of the construction. Given: Draw a circle of radius 3 cm. Take two points P and Q on one of its extended diameter each at a distance of 7 cm from its centre. Draw tangents to the circle from these two points P and Q. The tangents can be constructed on the given circle as follows. Step 1. Taking any point O on the given plane as centre, draw a circle of 3 cm radius. Step 2. Take one of its diameters and extend it on both sides. Locate two points R and S on this extended diameter such that OR = OS = 7 cm (these play the role of the points P and Q in the statement). Step 3. Bisect OR and OS. Let T and U be the mid-points of OR and OS respectively. Step 4. Taking T and U as centres and with TO and UO as radii, draw two circles. These two circles will intersect the given circle at points V, W, X, Y respectively. Join RV, RW, SX, and SY. These are the required tangents. The construction can be justified by proving that RV, RW, SY, and SX are the tangents to the circle (whose centre is O and radius is 3 cm). For this, join OV, OW, OX, and OY. \[\angle RVO\] is an angle in the semi-circle. We know that the angle in a semicircle is a right angle. \[ \therefore \angle RVO{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow OV \bot RV \] Since OV is the radius of the circle, RV has to be a tangent of the circle. Similarly, RW, SX, and SY are the tangents of the circle. 4. Draw a pair of tangents to a circle of radius 5 cm which are inclined to each other at an angle of \[60^\circ \]. Give a proper justification of the construction.
Given: Draw a pair of tangents to a circle of radius 5 cm which are inclined to each other at an angle of \[60^\circ \]. The tangents can be constructed in the following manner: Step 1. Draw a circle of radius 5 cm and with centre as O. Step 2. Take a point A on the circumference of the circle and join OA. Draw a perpendicular to OA at point A. Step 3. Draw a radius OB, making an angle of \[120^\circ {\text{ }}\left( {180^\circ {\text{ }} - {\text{ }}60^\circ } \right)\] with OA. Step 4. Draw a perpendicular to OB at point B. Let both the perpendiculars intersect at point P. PA and PB are the required tangents at an angle of 60°. The construction can be justified by proving that $\angle {\text{APB}} = {60^\circ }$ By our construction $\angle {\text{OAP}} = {90^\circ }$ $\angle {\text{OBP}} = {90^\circ }$ And $\angle {\text{AOB}} = {120^\circ }$ We know that the sum of all interior angles of a quadrilateral $ = {360^\circ }$ $\angle {\text{OAP}} + \angle {\text{AOB}} + \angle {\text{OBP}} + \angle {\text{APB}} = {360^\circ }$ ${90^\circ } + {120^\circ } + {90^\circ } + \angle {\text{APB}} = {360^\circ }$ $\angle {\text{APB}} = {60^\circ }$ 5. Draw a line segment AB of length 8 cm. Taking A as centre, draw a circle of radius 4 cm and taking B as centre, draw another circle of radius 3 cm. Construct tangents to each circle from the centre of the other circle. Give a proper justification for the construction. Given: Draw a line segment AB of length 8 cm. Taking A as centre, draw a circle of radius 4 cm and taking B as centre, draw another circle of radius 3 cm. Construct tangents to each circle from the centre of the other circle. The tangents can be constructed on the given circles as follows. Step 1. Draw a line segment AB of 8 cm. Taking A and B as centre, draw two circles of 4 cm and 3 cm radius. Step 2. Bisect the line AB. Let the mid-point of AB be C. Taking C as centre, draw a circle of AC radius which will intersect the circles at points P, Q, R, and S. 
Join BP, BQ, AS, and AR. These are the required tangents. The construction can be justified by proving that AS and AR are the tangents of the circle (whose centre is B and radius is 3 cm) and BP and BQ are the tangents of the circle (whose centre is A and radius is 4 cm). For this, join AP, AQ, BS, and BR. \[\angle ASB\] is an angle in the semi-circle. We know that an angle in a semicircle is a right angle. \[ \therefore \angle ASB{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow BS \bot AS \] Since BS is the radius of the circle, AS has to be a tangent of the circle. Similarly, AR, BP, and BQ are the tangents. 6. Let ABC be a right triangle in which AB = 6 cm, BC = 8 cm and \[\angle B{\text{ }} = {\text{ }}90^\circ \]. BD is the perpendicular from B on AC. The circle through B, C, and D is drawn. Construct the tangents from A to this circle. Give a proper justification of the construction. Given: Let ABC be a right triangle in which AB = 6 cm, BC = 8 cm and \[\angle B{\text{ }} = {\text{ }}90^\circ \]. BD is the perpendicular from B on AC. The circle through B, C, and D is drawn. Construct the tangents from A to this circle. Consider the following situation. If a circle is drawn through B, D, and C, BC will be its diameter as \[\angle BDC\] is of measure \[90^\circ \]. The centre E of this circle will be the midpoint of BC. The required tangents can be constructed on the given circle as follows. Step 1. Join AE and bisect it. Let F be the mid-point of AE. Step 2. Taking F as centre and FE as its radius, draw a circle which will intersect the circle at point B and G. Join AG. AB and AG are the required tangents. The construction can be justified by proving that AG and AB are the tangents to the circle. For this, join EG. \[\angle AGE\] is an angle in the semi-circle. We know that an angle in a semicircle is a right angle. 
\[ \therefore \angle AGE{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow EG \bot AG \] Since EG is the radius of the circle, AG has to be a tangent of the circle. Also, \[ \angle B{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow AB \bot BE \] Since BE is the radius of the circle, AB has to be a tangent of the circle. 7. Draw a circle with the help of a bangle. Take a point outside the circle. Construct the pair of tangents from this point to the circle. Give a proper justification of the construction. Given: Draw a circle with the help of a bangle. Take a point outside the circle. Construct the pair of tangents from this point to the circle. The required tangents can be constructed on the given circle as follows. Step 1. Draw a circle with the help of a bangle. Step 2. Take a point P outside this circle and take two chords QR and ST. Step 3. Draw perpendicular bisectors of these chords. Let them intersect each other at point O. Step 4. Join PO and bisect it. Let U be the mid-point of PO. Taking U as centre, draw a circle of radius OU, which will intersect the circle at V and W. Join PV and PW. PV and PW are the required tangents. The construction can be justified by proving that PV and PW are the tangents to the circle. For this, first of all, it has to be proved that O is the centre of the circle. Let us join OV and OW. We know that the perpendicular bisector of a chord passes through the centre. Therefore, the perpendicular bisectors of chords QR and ST pass through the centre. It is clear that the intersection point of these perpendicular bisectors is the centre of the circle. \[\angle PVO\] is an angle in the semi-circle. We know that an angle in a semicircle is a right angle. \[ \therefore \angle PVO{\text{ }} = {\text{ }}90^\circ \] \[ \Rightarrow OV \bot PV \] Since OV is the radius of the circle, PV has to be a tangent of the circle. Similarly, PW is a tangent of the circle.
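Every tangent-length computation in this exercise reduces to the same right-triangle relation: the radius to the point of contact is perpendicular to the tangent, so a tangent drawn from a point at distance d from the centre of a circle of radius r has length √(d² − r²). The small Python sketch below illustrates the formula; the function name and the error check are illustrative additions, not from the textbook:

```python
import math

def tangent_length(distance: float, radius: float) -> float:
    """Length of a tangent to a circle of the given radius, drawn from an
    external point at the given distance from the centre (distance > radius)."""
    if distance <= radius:
        raise ValueError("the point must lie outside the circle")
    return math.sqrt(distance**2 - radius**2)

print(tangent_length(10, 6))  # Question 1: 8.0 cm
print(tangent_length(6, 4))   # Question 2: 2*sqrt(5), about 4.47 cm
```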
NCERT Solutions for Class 10 Maths Chapter 11 Constructions - PDF Download Construction of basic figures like a triangle, bisecting a line segment, or drawing a perpendicular at a point on a line requires a ruler with a bevelled edge, a sharply pointed pencil, preferably a set of set squares, and a pair of compasses. Justification of the methods used requires some basic knowledge of geometry, like the proportionality theorem, the concept of similar triangles, etc. Have a look at the solutions to cover all the questions from Exercise 11.1 and Exercise 11.2. You can find the solutions of all the Maths chapters below. Chapter 1 - Real Numbers Chapter 2 - Polynomials Chapter 3 - Pair of Linear Equations in Two Variables Chapter 4 - Quadratic Equations Chapter 5 - Arithmetic Progressions Chapter 6 - Triangles Chapter 7 - Coordinate Geometry Chapter 8 - Introduction to Trigonometry Chapter 9 - Some Applications of Trigonometry Chapter 10 - Circles Chapter 11 - Constructions Chapter 12 - Areas Related to Circles Chapter 13 - Surface Areas and Volumes Chapter 14 - Statistics Chapter 15 - Probability Division of a Line Segment To divide a line segment internally in a given ratio m:n, we take the following steps: Steps of Construction: Step 1: Draw a line segment of a given length and name it AB. Step 2: Draw a ray making an acute angle with AB and let the ray be AX. Step 3: Along AX, mark off (m + n) points A1, A2, ……., Am, Am+1, ….., Am+n (for example, if the ratio is to be 2:3, then we mark off 5 (= 2 + 3) points). Step 4: Join Am+nB. Step 5: Draw a line through point Am parallel to Am+nB (making an angle equal to ∠AAm+nB). Let this line meet AB at a point P. Then this point P is the required point which divides AB internally in the ratio m:n. Constructing a Triangle Similar to a Given Triangle Here we construct a triangle similar to a given triangle. The constructed triangle may be smaller or larger than the given triangle.
So we define the following term: Scale factor: Scale factor is the ratio of the sides of the figure to be constructed to the corresponding measurements of the given figure. Let ABC be the given triangle, and suppose we want to construct a triangle similar to ABC such that each of its sides is (m/n)th of the corresponding sides of ABC. The Following Are the Steps to Be Taken for Constructing a Triangle When m < n: Step 1: Construct the given ABC by using the given data. Step 2: Take AB as the base of the given ABC. Step 3: At one end, say A of AB, construct an acute angle ∠BAX below the base AB. Step 4: Along AX mark off n points A1, A2, A3, ……., Am, Am+1, ….., An such that AA1 = A1A2 = A2A3 = ……. = Am-1Am = …….. = An-1An. Step 5: Join AnB. Step 6: Draw a line through Am parallel to AnB, meeting AB at B'. Step 7: From B' draw B'C' || CB meeting AC at C'. Then AB'C' is the required triangle each of whose sides is (m/n)th of the corresponding sides of ABC. Construction of Tangents to a Circle We know that when a point lies inside a circle, no tangent can be drawn to the circle from this point. If a point lies on the circle, exactly one tangent can be drawn to the circle at that point, but if the point lies outside the circle, two tangents can be drawn to the circle from the point. Constructing a Tangent to a Circle at a Given Point, We Consider Two Cases: Case A: When the Centre of the Circle Is Known, the Steps of Construction Are: Step 1: We take a point O on the plane of the paper and, using a compass & ruler, we draw a circle of given radius. Step 2: Let there be a point P on the circle. Step 3: Join OP. Step 4: Construct ∠OPT = 90°. Step 5: Produce TP to T' to obtain the line TPT' as the required tangent. Case B: When the Centre of the Circle is Not Known, Then the Steps of Construction Are: Step 1: A chord PQ is drawn to the circle through the given point P on the circle. Step 2: P & Q are joined to a point R in the major arc of the circle.
Step 3: Construct ∠QPT equal to ∠QRP on the opposite side of the chord PQ. Step 4: Produce TP to T' to obtain TPT' as the required tangent. Construction of Tangents to a Circle from an External Point We know that two tangents can be drawn to a circle from an external point. Two cases may arise. Case A: When the Centre of the Circle Is Known, the Steps of Construction Are: Step 1: The given external point P is joined to the centre O of the circle. Step 2: Draw a perpendicular bisector of OP, intersecting OP at Q. Step 3: Draw a circle with Q as centre and OQ = QP as radius, intersecting the given circle at T & T'. Step 4: Join PT & PT'. Then PT & PT' are the two tangents to the circle drawn from the external point P. Case B: When the Centre of the Circle is Not Known, Then the Steps of Construction Are: P is the external point & a circle is given with diameter AB. Step 1: From P draw a secant PAB intersecting the given circle at A & B. Step 2: Produce AP to C, such that AP = PC. Step 3: Locate the midpoint M of BC, & draw a semicircle on BC. Step 4: Draw a perpendicular PD on BC intersecting the semicircle at D. Step 5: With P as centre & radius PD, draw arcs intersecting the given circle at T & T'. Then PT & PT' are the two required tangents drawn to the given circle from the external point P. Strategy for Preparing for Exams For any exam, enough practice and revision are required. Even time management and knowing how to tackle tricky questions are very significant during the exams. The NCERT Solutions provided by Vedantu will definitely give you good practice on a variety of questions. The notes along with the solutions will give you clarity about the concepts. Constructions is an important geometry chapter in Maths, and prior knowledge of constructing an angle bisector and drawing a perpendicular line will be helpful. The NCERT Solutions designed by Vedantu will give you a clear idea about the question papers that you will get in exams. The entire solution in Vedantu is designed in a concise manner and in a stepwise method. You can adopt the same method for exams.
You will also learn how to manage time and deal with difficult questions in exams with the help of Vedantu's NCERT Solutions. Advantages of Using Vedantu's NCERT Solutions There are many NCERT Solutions on the net, but Vedantu provides 100% accurate and up-to-date solutions as per the syllabus, under the strict guidelines set by the CBSE Board. Vedantu's NCERT Solutions have benefitted countless students to date. Experienced teachers have designed the solutions in a very simple, self-explanatory way, so that you will be thorough with the concepts at your fingertips. Vedantu also provides live sessions, which are its highlight. These will boost the confidence of the students and help them master the subject. You can take the live video on any device and from anywhere. For strengthening your understanding of the concepts, you can completely rely on the NCERT Solutions provided by Vedantu. This is the right platform for you to revise your subjects and score more marks in exams. You can download the PDF from Vedantu's website, which is available for free, on any device, and practise at your convenience. You can carry the PDF anywhere and anytime. 1. What do you understand by the scale factor of any geometrical figure? Scale factor is the ratio of the sides of the figure to be constructed to the corresponding measurements of the given figure. 2. Why should you refer to Vedantu for NCERT Solutions for Class 10 Chapter 11? You should refer to Vedantu for NCERT Solutions for Class 10 Chapter 11 because Vedantu provides the latest and most comprehensive explanations of all the NCERT Solutions of Chapter 11. The experienced faculty at Vedantu have created the solutions just for you after intensive research. The solutions for Chapter 11 have many illustrated examples that will give you a complete revision of the chapter so you can prepare well for the exams.
The NCERT Solutions provided by Vedantu are designed in a unique way, starting from easy questions; as you proceed, the level of the questions becomes more complicated. In this way you can solve any type of question, whether it is easy or difficult. 3. What are the topics covered in Chapter 11 for Class 10? The topics covered in Chapter 11 for Class 10 are how to determine a point dividing a line segment internally in a given ratio, the construction of similar triangles, constructing a tangent to a circle, and constructing a pair of tangents inclined to each other at a given angle. 4. How can I improve my score through Vedantu? Vedantu helps students to strengthen their foundation in all subjects and develop the ability to tackle different kinds of questions given in the textbooks. A free PDF of each subject, topic-wise, is available on the website. The NCERT Maths guide for Class 10 will definitely help students significantly improve their performance in academics. 5. What is the basic concept of construction in Chapter 11 Constructions of Class 10 Maths? A ruler with a bevelled edge, a sharply pointed pencil, ideally a set of set squares, and a pair of compasses are required for the construction of fundamental figures such as a triangle, bisecting a line segment, or drawing a perpendicular at a point on a line, together with a justification of the technique employed. This is the basic concept of construction. Refer to Vedantu's NCERT Solutions for Class 10 Chapter 11 to understand the concepts of this chapter. 6. What constructions can be learnt in Chapter 11 Constructions of Class 10 Maths? In this chapter, you'll learn how to determine a point by internally dividing a line segment in a given ratio, how to construct similar triangles, how to construct a tangent to a circle, how to construct a pair of tangents from an external point, and how to construct a pair of tangents that are inclined to each other at a given angle.
You will learn more new definitions and interesting techniques in this chapter. 7. How much time do students need to complete Chapter 11 Constructions of Class 10 Maths? There are only two exercises in Chapter 11 of Class 10 Maths. It will not take much time to do these exercises if your concepts are clear. Construction needs correct skills and patience. A lot of practice is required to do the construction in the right way. Vedantu offers NCERT Solutions for Class 10 Chapter 11 to help students understand the chapter better and score good marks in exams. 8. How many questions and examples are there in Chapter 11 Constructions of Class 10 Maths? There are only two exercises in Chapter 11 of Class 10 Maths. Completing the exercises won't take much time if your concepts are clear. So, there are 14 questions in and just two examples along with it to practise. Practicing all the questions honestly will be enough. If you have doubts, you can refer to NCERT Solutions for Class 10 Chapter 11 offered by Vedantu at free of cost available on the official website and on the Vedantu app. These solutions are prepared by experts in an easy to understand language and students can download them for free. 9. What should be the strategy for preparing Chapter 11 Constructions of Class 10 Maths? A certain amount of practise and review is necessary for every exam. During the examinations, time management and how to approach difficult problems are also crucial. Vedantu's NCERT Solutions will offer you plenty of practise with a wide range of questions. The annotations, as well as the answer, will help you understand the idea. Construction is a crucial geometry subject in Math, and prior knowledge of how to create an angle bisector and draw a perpendicular line will come in handy.
\begin{document} \title[Logarithmic co-Higgs bundles]{Logarithmic co-Higgs bundles} \author{Edoardo Ballico and Sukmoon Huh} \address{Universit\`a di Trento, 38123 Povo (TN), Italy} \email{[email protected]} \address{Sungkyunkwan University, 300 Cheoncheon-dong, Suwon 440-746, Korea} \email{[email protected]} \keywords{co-Higgs bundle, double covering, nilpotent} \thanks{The first author is partially supported by MIUR and GNSAGA of INDAM (Italy). The second author is supported by Basic Science Research Program 2015-037157 through NRF funded by MEST and the National Research Foundation of Korea(KRF) 2016R1A5A1008055 grant funded by the Korea government(MSIP)} \subjclass[2010]{Primary: {14J60}; Secondary: {14D20, 53D18}} \begin{abstract} In this article we introduce a notion of logarithmic co-Higgs sheaves associated to a simple normal crossing divisor on a projective manifold, and show their existence with nilpotent co-Higgs fields for fixed ranks and second Chern classes. Then we deal with various moduli problems involving logarithmic co-Higgs sheaves, such as coherent systems and holomorphic triples, especially over algebraic curves of low genus. \end{abstract} \maketitle \section{Introduction} A co-Higgs sheaf on a complex manifold $X$ is a torsion-free coherent sheaf $\enm{\cal{E}}$ on $X$ together with an endomorphism $\Phi$ of $\enm{\cal{E}}$, called a {\it co-Higgs field}, taking values in the tangent bundle $T_X$ of $X$, i.e. $\Phi \in H^0(\mathcal{E}nd(\enm{\cal{E}})\otimes T_X)$, such that the integrability condition $\Phi \wedge \Phi=0$ is satisfied. When $\enm{\cal{E}}$ is locally free, it is a generalized vector bundle on $X$, considered as a generalized complex manifold; this notion was introduced and developed by Hitchin and Gualtieri in \cite{Hi, Gual}.
A naturally defined stability condition on co-Higgs sheaves allows one to study their moduli spaces, and Rayan and Colmenares investigate their geometry over projective spaces and a smooth quadric surface in \cite{R2, Rayan} and \cite{VC1}. Indeed it is expected that the existence of stable co-Higgs bundles forces $X$ to lie at the lower end of the Kodaira spectrum, and Corr\^{e}a shows in \cite{Correa} that a compact K\"ahler surface with a nilpotent stable co-Higgs bundle of rank two is uniruled up to finite \'etale cover. In \cite{BH1, BH} the authors suggest a simple way of constructing nilpotent co-Higgs sheaves, based on the Hartshorne-Serre correspondence, and obtain some (non-)existence results. In this article we investigate the existence of nilpotent co-Higgs sheaves with a co-Higgs field vanishing in the normal direction to a given divisor of $X$; for a given arrangement $\enm{\cal{D}}$ of smooth irreducible divisors of $X$ with simple normal crossings, the sheaf $T_X(-\log \enm{\cal{D}})$ of logarithmic vector fields along $\enm{\cal{D}}$ is locally free and we consider a pair $(\enm{\cal{E}}, \Phi)$ of a torsion-free coherent sheaf $\enm{\cal{E}}$ and a morphism $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}})$ satisfying the integrability condition. The pair is called a {\it $\enm{\cal{D}}$-logarithmic co-Higgs sheaf} and it is called $2$-nilpotent if $\Phi \circ \Phi$ is trivial. Our first result is on the existence of nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs sheaves of rank at least two. \begin{theorem}[Propositions \ref{aa1}, \ref{aa2} and \ref{aa2.00}]\label{thm33} Let $X$ be a projective manifold with $\dim (X)\ge 2$ and $\enm{\cal{D}}\subset X$ be a simple normal crossing divisor.
For fixed $\enm{\cal{L}}\in \op{Pic} (X)$ and an integer $r\ge 2$, there exists a $2$-nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs sheaf $(\enm{\cal{E}}, \Phi)$, where $\Phi\ne 0$ and $\enm{\cal{E}}$ is reflexive and indecomposable with $c_1(\enm{\cal{E}})\cong \enm{\cal{L}}$ and $\op{rank} \enm{\cal{E}} = r$. \end{theorem} Indeed, we can strengthen the statement of Theorem \ref{thm33} by requiring $\enm{\cal{E}}$ to be locally free, in the cases $\dim (X)=2$ or $r\ge \dim (X)$, due to the Hartshorne-Serre correspondence and the dimension of the non-locally free locus (see Propositions \ref{aa1} and \ref{aa2}). Moreover, in case $\dim (X)=2$, we give an explicit bound such that a logarithmic co-Higgs bundle exists for every second Chern class at least that bound. We notice that the logarithmic co-Higgs sheaves constructed in Theorem \ref{thm33} are highly unstable, which is consistent with the general philosophy on the existence of stable co-Higgs bundles (see \cite[Theorem 1.1]{Correa} for example). Then we turn our attention to various types of semistable objects involving logarithmic co-Higgs sheaves. In Section \ref{exp} we produce several examples of nilpotent semistable logarithmic co-Higgs sheaves on projective spaces and a smooth quadric surface, using a simple construction method from \cite{BH}. Since logarithmic co-Higgs sheaves are co-Higgs sheaves in the usual sense with an additional vanishing condition in the normal direction of the divisors, their moduli space is a closed subvariety of the moduli of the usual co-Higgs sheaves. In Section \ref{mex} we describe the moduli spaces of logarithmic co-Higgs bundles of rank two on $\enm{\mathbb{P}}^2$ in two cases. Then in Section \ref{osem} we experiment with extensions of the notion of stability for co-Higgs sheaves and logarithmic co-Higgs sheaves. A key point for the study of moduli spaces was the introduction of parameters for the conditions of stability.
We extend two of them, coherent systems and holomorphic triples, to co-Higgs sheaves. Especially in the case of holomorphic triples, we show that any holomorphic triple admits the Harder-Narasimhan filtration in Corollary \ref{mcor} and construct the moduli space of $\nu_\alpha$-stable $\enm{\cal{D}}$-logarithmic co-Higgs triples, using Simpson's idea and a quiver interpretation. We always work in cases in which there are non-trivial co-Higgs fields; so in dimension one we only consider projective lines and elliptic curves. We call the notion of stability for holomorphic triples $\nu _\alpha$-stability, with $\alpha \in \enm{\mathbb{R}} _{>0}$. In some cases we prove that the only $\nu _\alpha$-stable holomorphic triples are obtained in a standard way from the same holomorphic triple with the zero co-Higgs field (see Remark \ref{tre}). Note that a logarithmic co-Higgs field is different from a map $\enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-D)$, unless $X$ is a curve. We take a glimpse at such maps in Section \ref{divisor} for the cases $X=\enm{\mathbb{P}}^2$ or $\enm{\mathbb{P}}^1 \times \enm{\mathbb{P}}^1$. On the contrary, in Section \ref{exttt} we consider a map $\enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(kD)$ with $k>0$, called a meromorphic co-Higgs field, and describe semistable meromorphic co-Higgs bundles on $\enm{\mathbb{P}}^1$. The second author would like to thank U.~Bruzzo, N.~Nitsure and L.~Brambila-Paz for many suggestions and interesting discussions. \section{Definitions and Examples}\label{exp} Let $X$ be a smooth complex projective variety of dimension $n\ge2$ with the tangent bundle $T_X$. For a fixed ample line bundle $\enm{\cal{O}}_X(1)$ and a coherent sheaf $\enm{\cal{E}}$ on $X$, we denote $\enm{\cal{E}} \otimes \enm{\cal{O}}_X(t)$ by $\enm{\cal{E}}(t)$ for $t\in \enm{\mathbb{Z}}$.
The dimension of the cohomology group $H^i(X, \enm{\cal{E}})$ is denoted by $h^i(X,\enm{\cal{E}})$ and we will skip $X$ in the notation, if there is no confusion. For two coherent sheaves $\enm{\cal{E}}$ and $\enm{\cal{F}}$ on $X$, the dimension of $\op{Ext}_X^1(\enm{\cal{E}}, \enm{\cal{F}})$ is denoted by $\mathrm{ext}_X^1(\enm{\cal{E}}, \enm{\cal{F}})$. To an {\it arrangement} $\enm{\cal{D}}=\{D_1, \ldots, D_m\}$ of smooth irreducible divisors $D_i$ on $X$ such that $D_i\ne D_j$ for $i\ne j$, we can associate the sheaf $T_X(-\log \enm{\cal{D}})$ of logarithmic vector fields along $\enm{\cal{D}}$, i.e. the subsheaf of the tangent bundle $T_X$ whose sections consist of vector fields tangent to $\enm{\cal{D}}$. We always assume that $\enm{\cal{D}}$ has simple normal crossings and so $T_X(-\log \enm{\cal{D}})$ is locally free. It also fits into the exact sequence \cite{D} \begin{equation}\label{log1} 0\to T_X(-\log \enm{\cal{D}}) \to T_X \to \oplus_{i=1}^m {\varepsilon_i}_*\enm{\cal{O}}_{D_i}(D_i) \to 0, \end{equation} where $\varepsilon_i: D_i \rightarrow X$ is the embedding. \begin{definition} A {\it $\enm{\cal{D}}$-logarithmic co-Higgs} bundle on $X$ is a pair $(\enm{\cal{E}}, \Phi)$ where $\enm{\cal{E}}$ is a holomorphic vector bundle on $X$ and $\Phi: \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}})$ is a morphism with $\Phi \wedge \Phi=0$. Here $\Phi$ is called the {\it logarithmic co-Higgs field} of $(\enm{\cal{E}}, \Phi)$ and the condition $\Phi \wedge \Phi=0$ is called the {\it integrability}. \end{definition} We say that the co-Higgs field $\Phi$ is \emph{$2$-nilpotent} if $\Phi$ is non-trivial and $\Phi \circ \Phi =0$. Note that any $2$-nilpotent map $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}})$ satisfies $\Phi \wedge \Phi =0$ and so it is a non-zero co-Higgs structure on $\enm{\cal{E}}$, i.e. a nilpotent co-Higgs structure.
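For instance, whenever $T_X(-\log \enm{\cal{D}})$ admits a non-zero global section $s$, one obtains a $2$-nilpotent field on the trivial bundle of rank two:
$$\Phi =\begin{pmatrix} 0 & s\\ 0 & 0 \end{pmatrix} : \enm{\cal{O}} _X^{\oplus 2}\to \enm{\cal{O}} _X^{\oplus 2}\otimes T_X(-\log \enm{\cal{D}} ),$$
for which $\Phi \circ \Phi =0$ is immediate; this is the simplest instance of the constructions used below.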
Note that if $\enm{\cal{D}}$ is empty, then we recover the usual notion of co-Higgs bundle. Indeed for each $\enm{\cal{D}}$-logarithmic co-Higgs bundle we may consider a usual co-Higgs bundle by composing with the injection in (\ref{log1}): $$\enm{\cal{E}} \to\enm{\cal{E}}\otimes T_X(-\log \enm{\cal{D}}) \to \enm{\cal{E}} \otimes T_X.$$ Conversely, for a usual co-Higgs bundle $(\enm{\cal{E}}, \Phi)$ we may compose with the surjection in (\ref{log1}) to obtain a map $\enm{\cal{E}} \rightarrow \oplus_{i=1}^m \enm{\cal{E}}\otimes \enm{\cal{O}}_{D_i}(D_i)$, whose vanishing would produce a logarithmic co-Higgs structure $\enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}})$. Thus our notion of logarithmic co-Higgs bundle captures the notion of a co-Higgs field $\Phi :\enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X$ vanishing in the normal direction to the divisors in the support of $\enm{\cal{D}}$; in general this is not the same as asking for a map $\varphi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-D)$ when $\enm{\cal{D}}=\{D\}$. If $\dim (X)=1$, then we have $T_X(-\log \enm{\cal{D}} ) \cong T_X(-D)$. In Section \ref{divisor} we consider a few cases in which we take $T_X(-D)$ with $D$ smooth, instead of $T_X(-\log \enm{\cal{D}})$. \begin{definition}\label{ss1} For a fixed ample line bundle $\enm{\cal{H}}$ on $X$, a $\enm{\cal{D}}$-logarithmic co-Higgs bundle $(\enm{\cal{E}}, \Phi)$ is {\it $\enm{\cal{H}}$-semistable} (resp. {\it $\enm{\cal{H}}$-stable}) if $$\mu(\enm{\cal{F}}) \le (\text{resp.}<)~ \mu(\enm{\cal{E}})$$ for every coherent subsheaf $0\subsetneq \enm{\cal{F}} \subsetneq \enm{\cal{E}}$ with $\Phi(\enm{\cal{F}}) \subset \enm{\cal{F}} \otimes T_X(-\log \enm{\cal{D}})$. Recall that the slope $\mu(\enm{\cal{E}})$ of a torsion-free sheaf $\enm{\cal{E}}$ on $X$ is defined to be $\mu(\enm{\cal{E}}):=\deg (\enm{\cal{E}})/\op{rank} \enm{\cal{E}}$, where $\deg (\enm{\cal{E}})=c_1(\enm{\cal{E}})\cdot \enm{\cal{H}}^{n-1}$.
In case $\enm{\cal{H}}\cong \enm{\cal{O}}_X(1)$ we simply call it semistable (resp. stable) without specifying $\enm{\cal{H}}$. \end{definition} \begin{remark} Let $(\enm{\cal{E}}, \Phi)$ be a semistable $\enm{\cal{D}}$-logarithmic co-Higgs bundle. For a subsheaf $\enm{\cal{F}} \subset \enm{\cal{E}}$ with $\Phi (\enm{\cal{F}})\subseteq \enm{\cal{F}}\otimes T_X$, we have $$\enm{\cal{F}} \otimes T_X(-\log \enm{\cal{D}}) =\left ( \enm{\cal{F}}\otimes T_X\right) \cap \left ( \enm{\cal{E}}\otimes T_X(-\log \enm{\cal{D}})\right )$$ and $\mathrm{Im} (\Phi)\subseteq \enm{\cal{E}}\otimes T_X(-\log \enm{\cal{D}})$. Thus we get $\Phi (\enm{\cal{F}})\subseteq \enm{\cal{F}} \otimes T_X(-\log \enm{\cal{D}})$ and so $(\enm{\cal{E}}, \Phi)$ is semistable as a usual co-Higgs bundle. \end{remark} Let us denote by $\mathbf{M}_{\enm{\cal{D}}, X}(\chi(t))$ the moduli space of semistable $\enm{\cal{D}}$-logarithmic co-Higgs bundles with Hilbert polynomial $\chi(t)$. It exists as a closed subscheme of $\mathbf{M}_X(\chi(t))$ the moduli space of semistable co-Higgs bundles with the same Hilbert polynomial, since the vanishing of co-Higgs fields in the normal direction to $\enm{\cal{D}}$ is a closed condition. We also denote by $\mathbf{M}^{\circ}_{\enm{\cal{D}}, X}(\chi(t))$ the subscheme consisting of stable ones. \begin{example}\label{bbb+1} Let $X=\enm{\mathbb{P}}^1$ and $\enm{\cal{D}}=\{p_1, \ldots, p_m\}$ be a set of $m$ distinct points on $X$. Then we have $T_{\enm{\mathbb{P}}^1}(-\log \enm{\cal{D}})\cong \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(2-m)$. Let $\enm{\cal{E}} \cong \oplus_{i=1}^r\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(a_i)$ be a vector bundle of rank $r \ge 2$ on $\enm{\mathbb{P}}^1$ with $a_1\ge \cdots \ge a_r$ and $(\enm{\cal{E}}, \Phi)$ be a semistable $\enm{\cal{D}}$-logarithmic co-Higgs bundle, i.e. $\Phi: \enm{\cal{E}} \rightarrow \enm{\cal{E}}(2-m)$. If $a_1=\cdots =a_r$, then the pair $(\enm{\cal{E}}, \Phi )$ is semistable for any $\Phi$. 
If $m\ge 3$, then the $\Phi$-invariant subsheaf $\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(a_1)$ would contradict the semistability of $(\enm{\cal{E}}, \Phi)$, unless $a_1=\cdots = a_r$. If $a_1=\cdots =a_r$ and $m\ge 3$, then we have $\Phi =0$ and so $(\enm{\cal{E}} ,\Phi)$ is strictly semistable. Assume now that $m\in \{0,1,2\}$ and then the corresponding moduli space $\mathbf{M}_{\enm{\cal{D}}, \enm{\mathbb{P}}^1}(rt+d)$ is projective and $\mathbf{M}^{\circ}_{\enm{\cal{D}}, \enm{\mathbb{P}}^1}(rt+d)$ is smooth with dimension $(2-m)r^2+1$, where $d=r+\sum_{i=1}^r a_i$ by \cite{Nitsure}. The case $m=0$ is dealt with in \cite[Theorem 6.1]{R2}. Now assume $m=1$. Adapting the proof of \cite[Theorem 6.1]{R2}, we get Proposition \ref{bbb+2}, which says in the case $\ell =-1$ that the existence of a map $\Phi$ with $(\enm{\cal{E}} ,\Phi )$ semistable implies that $a_i\le a_{i+1}+1$ for all $i$, while conversely, if $a_i\le a_{i+1}+1$ for all $i$, then there is a map $\Phi$ with $(\enm{\cal{E}}, \Phi )$ stable and the set of all such $\Phi$ is a non-empty open subset of the vector space $H^0(\mathcal{E}nd (\enm{\cal{E}} )(1))$. Now assume that $m=2$ and so $\Phi \in \mathrm{End}(\enm{\cal{E}})$. If $a_1=\cdots = a_r$, then $\Phi$ is given by an $(r\times r)$-matrix of constants. Since the matrix has an eigenvector, the pair $(\enm{\cal{E}} ,\Phi)$ is strictly semistable for any $\Phi$. Now assume $a_1>a_r$ and let $h$ be the maximal integer $i$ with $a_i=a_1$. Write $\enm{\cal{E}} \cong\enm{\cal{F}} \oplus \enm{\cal{G}}$ with $\enm{\cal{F}} := \oplus _{i=1}^{h} \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_i)$ and $\enm{\cal{G}}:= \oplus _{i=h+1}^r \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_i)$. Since any map $\enm{\cal{F}} \rightarrow \enm{\cal{G}}$ is the zero map, we have $\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{F}}$ for any $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}}$ and so $(\enm{\cal{E}} ,\Phi)$ is not semistable.
\end{example} \subsection{Projective spaces} In \cite{BH} we introduce a simple way of constructing nilpotent co-Higgs sheaves $(\enm{\cal{E}}, \Phi)$ of rank $r\ge 2$, fitting into the exact sequence \begin{equation}\label{eeqb} 0 \to \enm{\cal{O}} _X^{\oplus (r-1)}\to \enm{\cal{E}} \to \enm{\cal{I}} _Z\otimes \enm{\cal{A}} \to 0 \end{equation} for a two-codimensional locally complete intersection $Z\subset X$ and $\enm{\cal{A}} \in \op{Pic} (X)$ such that $H^0(T_X\otimes \enm{\cal{A}}^\vee)\ne 0$. In (\ref{eeqb}) we replace $T_X$ by $T_X(-\log \enm{\cal{D}})$ for a simple normal crossing divisor $\enm{\cal{D}}$ to obtain $2$-nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs sheaves. \begin{example}\label{eex} Let $X=\enm{\mathbb{P}}^n$ with $n\ge 2$ and take $\enm{\cal{D}}=\{D_1, \ldots, D_m\}$ with $D_i\in |\enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)|$. If $1\le m \le n$, we have $T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}}) \cong \enm{\cal{O}}_{\enm{\mathbb{P}}^n}^{\oplus (m-1)} \oplus \enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)^{\oplus (n-m+1)}$ by \cite{DK}, and in particular we have $h^0(T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}})(-1))>0$. Thus we may apply the proof of \cite[Theorem 1.1]{BH} to get the following; here the invariant $x_{\enm{\cal{E}}}$ is defined to be the maximal integer $x$ such that $h^0(\enm{\cal{E}}(-x))\ne 0$. \begin{proposition}\label{ttts} The set of nilpotent maps $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}})$ on a fixed stable reflexive sheaf $\enm{\cal{E}}$ of rank two on $\enm{\mathbb{P}}^n$ is an $(n-m+1)$-dimensional vector space only if $c_1(\enm{\cal{E}})+2x_{\enm{\cal{E}}}=-3$. In the other cases the set is trivial. \end{proposition} \end{example} \begin{remark} Consider the case $m=n+1$ in Example \ref{eex} with $\cap_{i=1}^{n+1}D_i = \emptyset$. Then we have $T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )\cong \enm{\cal{O}} _{\enm{\mathbb{P}}^n}^{\oplus n}$.
Let $\enm{\cal{E}}$ be a reflexive sheaf of rank $r\ge 2$ on $\enm{\mathbb{P}}^n$ with a semistable (resp. stable) logarithmic co-Higgs structure $(\enm{\cal{E}},\Phi)$. Note that if $\Phi$ is trivial, the semistability (resp. stability) of $(\enm{\cal{E}} ,\Phi)$ is equivalent to the semistability (resp. stability) of $\enm{\cal{E}}$. Now assume $\Phi \ne 0$. Since $T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )\cong \enm{\cal{O}} _{\enm{\mathbb{P}}^n}^{\oplus n}$, $\enm{\cal{E}}$ is not simple and in particular it is not stable. We claim that $\enm{\cal{E}}$ is semistable. If not, let $\enm{\cal{G}}$ be the first step of the Harder-Narasimhan filtration of $\enm{\cal{E}}$. By a property of the Harder-Narasimhan filtration there is no non-zero map $\enm{\cal{G}} \rightarrow \enm{\cal{E}} /\enm{\cal{G}}$ and so no non-zero map $\enm{\cal{G}} \rightarrow ( \enm{\cal{E}} /\enm{\cal{G}})\otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}})$. Thus we get $\Phi (\enm{\cal{G}} )\subseteq \enm{\cal{G}} \otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )$, contradicting the semistability of $(\enm{\cal{E}}, \Phi)$. Now assume $n=2$ and take $\enm{\cal{A}} \cong \enm{\cal{O}} _{\enm{\mathbb{P}}^2}$ in (\ref{eeqb}) with $\deg (Z)\ge r-1$. Then we get many strictly semistable and indecomposable vector bundles $\enm{\cal{E}}$ with $\Phi$ non-zero and $2$-nilpotent. \end{remark} \begin{example} Let $X=\enm{\mathbb{P}}^2$ and take $\enm{\cal{D}} = \{D\}$ with $D$ a smooth conic. Since $h^0(T_{\enm{\mathbb{P}}^2}) =8$ and $h^0(\enm{\cal{O}} _D(D)) =h^0(\enm{\cal{O}} _D(2)) =5$, we have $h^0(T_{\enm{\mathbb{P}}^2}(-\log \enm{\cal{D}} )) >0$ from (\ref{log1}).
By taking $\enm{\cal{A}} \cong\enm{\cal{O}} _{\enm{\mathbb{P}}^2}$ in \cite[Equation (1) of Condition 2.2]{BH}, we get a strictly semistable logarithmic co-Higgs bundle $(\enm{\cal{E}} ,\Phi)$ with a non-zero co-Higgs field $\Phi$, where $\enm{\cal{E}}$ is strictly semistable of arbitrary rank $r\ge 2$ and $c_2(\enm{\cal{E}} )=\deg (Z)$ is any non-negative integer. Moreover, for any integer $c_2(\enm{\cal{E}})\ge r-1$ we may find an indecomposable one. \end{example} \begin{example} Let $X\subset \enm{\mathbb{P}}^{n+1}$ be a smooth quadric hypersurface. Let $D \subset X$ be a smooth hyperplane section of $X$ with $H\subset \enm{\mathbb{P}}^{n+1}$ the hyperplane such that $D=X \cap H$ and take $\enm{\cal{D}}=\{D\}$. If $p\in \enm{\mathbb{P}}^{n+1}$ is the point associated to $H$ by the isomorphism between $\enm{\mathbb{P}}^{n+1}$ and its dual induced by an equation of $X$, then we have $p\notin X$ since $X$ is smooth. Letting $\pi_p: X\rightarrow \enm{\mathbb{P}}^n$ denote the linear projection from $p$, we have $T_X(-\log \enm{\cal{D}} )\cong \pi_p ^\ast (\Omega ^1_{\enm{\mathbb{P}}^n}(2))$ by \cite[Corollary 4.6]{BHlog}. Since $\Omega ^1_{\enm{\mathbb{P}}^n}(2)$ is globally generated, so is $T_X(-\log \enm{\cal{D}} )$ and in particular $H^0(T_X(-\log \enm{\cal{D}})) \ne 0$. By taking $\enm{\cal{A}} \cong \enm{\cal{O}} _X$ in \cite[Equation (1) of Condition 2.2]{BH}, we get a strictly semistable logarithmic co-Higgs bundle $(\enm{\cal{E}} ,\Phi)$ with a non-zero co-Higgs field $\Phi$, where $\enm{\cal{E}}$ is strictly semistable of arbitrary rank $r\ge 2$.
\end{example} \subsection{Smooth quadric surfaces}\label{quad} Let $X = \enm{\mathbb{P}}^1\times \enm{\mathbb{P}}^1$ be a smooth quadric surface; for a vector bundle $\enm{\cal{E}}$ of rank two we may assume, up to twist, that $$\det (\enm{\cal{E}} )\in \{\enm{\cal{O}} _X,\enm{\cal{O}} _X(-1,0),\enm{\cal{O}} _X(0,-1),\enm{\cal{O}} _X(-1,-1)\}.$$ The case of the usual co-Higgs bundle with $\enm{\cal{D}}=\emptyset$ is treated in \cite[Theorem 4.3]{VC1}. We assume either \begin{itemize} \item [(i)] $\enm{\cal{D}} \in \left \{|\enm{\cal{O}} _X(1,0)|,|\enm{\cal{O}} _X(2,0)|,|\enm{\cal{O}} _X(0,1)|,|\enm{\cal{O}} _X(0,2)|\right\}$, or \item [(ii)]$\enm{\cal{D}} =L\cup R$ with $L\in |\enm{\cal{O}} _X(1,0)|$ and $R\in |\enm{\cal{O}} _X(0,1)|$. \end{itemize} In the latter case $T_X(-\log \enm{\cal{D}})$ fits into the exact sequence \begin{equation}\label{eqv2} 0 \to T_X(-\log \enm{\cal{D}} )\to \enm{\cal{O}} _X(2,0)\oplus \enm{\cal{O}} _X(0,2)\to \enm{\cal{O}} _L \oplus \enm{\cal{O}} _R\to 0, \end{equation} because $\enm{\cal{O}} _L(L)\cong \enm{\cal{O}} _L$, $\enm{\cal{O}} _R(R)\cong \enm{\cal{O}} _R$ and $T_X\cong \enm{\cal{O}}_X(2,0)\oplus \enm{\cal{O}}_X(0,2)$. In particular, we have $h^0(T_X(-\log \enm{\cal{D}} )(i,j)) >0$ for all $(i,j)\in \{(0,0), (-1,0), (0,-1)\}$. We may also consider the following cases: \begin{itemize} \item [(iii)] $\enm{\cal{D}} = L\cup L'\cup R$ with $L, R$ as above and $L\ne L'\in |\enm{\cal{O}} _X(1,0)|$; we still have $h^0(T_X(-\log \enm{\cal{D}} )(i,j)) >0$ for $(i,j)\in \{ (0,0), (0,-1)\}$. \item [(iv)] $\enm{\cal{D}} =L\cup L'\cup R\cup R'$ with $L$, $L'$, $R$ as above and $R\ne R'\in |\enm{\cal{O}} _X(0,1)|$. \end{itemize} Indeed, if $\enm{\cal{D}}$ consists of $a$ lines in $|\enm{\cal{O}}_X(1,0)|$ and $b$ lines in $|\enm{\cal{O}}_X(0,1)|$, then we have $T_X(-\log \enm{\cal{D}})\cong \enm{\cal{O}}_X(2-a,0)\oplus \enm{\cal{O}}_X(0,2-b)$ by \cite[Proposition 6.2]{BHlog}.
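For instance, this formula specializes in the cases (ii)--(iv) above as follows: $$T_X(-\log \enm{\cal{D}} )\cong \begin{cases} \enm{\cal{O}} _X(1,0)\oplus \enm{\cal{O}} _X(0,1) & \text{in case (ii)},\\ \enm{\cal{O}} _X\oplus \enm{\cal{O}} _X(0,1) & \text{in case (iii)},\\ \enm{\cal{O}} _X^{\oplus 2} & \text{in case (iv)},\end{cases}$$ which is consistent with the twists $(i,j)$ listed above for which $h^0(T_X(-\log \enm{\cal{D}} )(i,j))>0$.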
Assume that $\enm{\cal{E}}$ fits into the following exact sequence, as in \cite[Equation (3.1)]{VC1}: \begin{equation}\label{eqv1} 0\to \enm{\cal{O}} _X(r,d)\to \enm{\cal{E}} \to \enm{\cal{O}} _X(r',d')\otimes \enm{\cal{I}} _Z\to 0, \end{equation} where $Z\subset X$ is a zero-dimensional scheme, $\det (\enm{\cal{E}} ) \cong \enm{\cal{O}} _X(r+r',d+d')$ and $c_2(\enm{\cal{E}} )=\deg (Z)+rd' +r'd$. Note that any logarithmic co-Higgs bundle is co-Higgs in the usual sense and so the set of all $(c_1,c_2)$ allowed for $\enm{\cal{D}}$ is contained in the one allowed for $\enm{\cal{D}} =\emptyset$. In particular, if we are concerned only with $\enm{\cal{O}} _X(1,1)$-semistability, the possible pairs $(c_1,c_2)$ are contained in the set described in \cite[Theorem 4.3]{VC1}. Moreover, any existence for the case $\enm{\cal{D}} =L\cup R$ implies the existence for $\enm{\cal{D}} \in \{|\enm{\cal{O}} _X(1,0)|,|\enm{\cal{O}} _X(0,1)|\}$. \quad (a) First assume $\det (\enm{\cal{E}} )\cong \enm{\cal{O}} _X$; we prove the existence for $c_2\ge 0$. In this case we take $r=d=r'=d' =0$ and the $2$-nilpotent co-Higgs structure induced by $\enm{\cal{I}} _Z \rightarrow T_X(-\log \enm{\cal{D}} )$, i.e. by a non-zero section of $T_X(-\log \enm{\cal{D}})$. This construction gives $(\enm{\cal{E}} ,\Phi)$ with $\enm{\cal{E}}$ strictly semistable for any polarization. \quad (b) Assume $\det (\enm{\cal{E}} )\cong \enm{\cal{O}} _X(-1,0)$ by symmetry and prove the existence for $c_2\ge 0$. In case $h^0(T_X(-\log \enm{\cal{D}} )(-1,0)) >0$, we take $(r,r',d,d')=(-1,0,0,0)$ and $\Phi$ induced by a non-zero map $\enm{\cal{I}} _Z \rightarrow T_X(-\log \enm{\cal{D}})(-1,0)$.
Then $\enm{\cal{E}}$ is stable for every polarization, unless $Z=\emptyset$ and $\enm{\cal{E}}$ splits, because $Z\ne \emptyset$ would imply $h^0(\enm{\cal{E}} )=0$; even when $Z=\emptyset$ and so $\enm{\cal{E}} \cong \enm{\cal{O}} _X\oplus \enm{\cal{O}} _X(-1,0)$, the pair $(\enm{\cal{E}} ,\Phi)$ is stable for every polarization. \quad (c) Assume $\det (\enm{\cal{E}})\cong \enm{\cal{O}} _X(-1,-1)$ and take $(r,d)=(-1,0)$ and $(r', d')=(0,-1)$ with $\enm{\cal{D}} \in |\enm{\cal{O}} _X(1,0)|$. Then we have $h^0(T_X(-\log \enm{\cal{D}} )(-1,1)) >0$ and $c_2(\enm{\cal{E}} ) =\deg (Z)+1$. We get that $\enm{\cal{E}}$ is semistable with respect to $\enm{\cal{O}} _X(1,1)$. \section{Existence} \begin{proposition}\label{aa1} Assume $\dim (X)=2$ and let $\enm{\cal{D}} \subset X$ be a simple normal crossing divisor. For fixed $\enm{\cal{L}} \in \mathrm{Pic}(X)$ and an integer $r\ge 2$, there exists an integer $n=n_{X,\enm{\cal{D}}}(\enm{\cal{L}}, r)$ such that for all integers $c_2\ge n$ there is a $2$-nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs bundle $(\enm{\cal{E}} ,\Phi)$ with $\Phi \ne 0$, where $\enm{\cal{E}}$ is an indecomposable vector bundle of rank $r$ with Chern classes $c_1(\enm{\cal{E}})\cong \enm{\cal{L}}$ and $c_2(\enm{\cal{E}})=c_2$. \end{proposition} \begin{proof} Fix a very ample $\enm{\cal{R}}\in \op{Pic} (X)$ such that \begin{itemize} \item $h^0(\omega _X\otimes (\enm{\cal{L}}^{\otimes (r-1)} \otimes \enm{\cal{R}}^{\otimes r})^\vee )= 0$; \item $h^0(T_X(-\log \enm{\cal{D}})\otimes \enm{\cal{L}} ^{\otimes (r-1)}\otimes \enm{\cal{R}} ^{\otimes r})>0$; \item $\enm{\cal{L}} \otimes \enm{\cal{R}}$ is spanned. 
\end{itemize} Set $$n=n_{X,\enm{\cal{D}}}(\enm{\cal{L}},r):=r-(r-1)(r-2)\enm{\cal{L}}^2 - (r-1)^2 \enm{\cal{R}}^2-(2r-3)(r-1)\enm{\cal{L}} {\cdot}\enm{\cal{R}}.$$ For each $c_2\ge n$, let $S\subset X$ be a union of $(c_2 +r-n)$ general points and consider a general extension $$0\to (\enm{\cal{L}} \otimes \enm{\cal{R}})^{\oplus (r-1)}\to \enm{\cal{E}} \to \enm{\cal{I}} _S\otimes (\enm{\cal{L}}^{\otimes (r-2)}\otimes \enm{\cal{R}} ^{\otimes (r-1)})^\vee \to 0.$$ From the choice of $\enm{\cal{R}}$ the Cayley-Bacharach condition is satisfied and so $\enm{\cal{E}}$ is locally free with $c_1(\enm{\cal{E}})\cong \enm{\cal{L}}$ and $c_2(\enm{\cal{E}} )=c_2$. Now from a non-zero section in $H^0(T_X(-\log \enm{\cal{D}})\otimes \enm{\cal{L}}^{\otimes (r-1)} \otimes \enm{\cal{R}} ^{\otimes r})$ we have a non-zero map $\varphi : \enm{\cal{I}} _S\otimes (\enm{\cal{L}}^{\otimes (r-2)}\otimes \enm{\cal{R}} ^{\otimes (r-1)})^\vee \rightarrow \enm{\cal{L}} \otimes \enm{\cal{R}} \otimes T_X(-\log \enm{\cal{D}} )$, inducing a non-zero map $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}} )$ that is $2$-nilpotent and so integrable. Thus to complete the proof it is sufficient to prove that $\enm{\cal{E}}$ is indecomposable for a suitable $\enm{\cal{R}}$. Assume $\enm{\cal{E}} \cong \enm{\cal{E}} _1\oplus \cdots \oplus \enm{\cal{E}} _k$ with $k\ge 2$ and each $\enm{\cal{E}}_i$ indecomposable and locally free of positive rank. Since $\enm{\cal{R}}$ is very ample and $\enm{\cal{L}} \otimes \enm{\cal{R}}$ is spanned, the image of the evaluation map $H^0(\enm{\cal{E}})\otimes \enm{\cal{O}} _X\rightarrow \enm{\cal{E}}$ is isomorphic to $(\enm{\cal{L}} \otimes \enm{\cal{R}})^{\oplus (r-1)}$ and its cokernel is isomorphic to $ \enm{\cal{I}} _S\otimes (\enm{\cal{L}}^{\otimes (r-2)}\otimes \enm{\cal{R}} ^{\otimes (r-1)})^\vee$.
Thus, up to a permutation of the factors, we have $(\enm{\cal{L}} \otimes \enm{\cal{R}})^{\oplus (r-1)}\cong \enm{\cal{E}} _1\oplus \cdots \oplus \enm{\cal{E}} _{k-1}\oplus \enm{\cal{F}}$ with $\enm{\cal{F}}$ a vector bundle and $\enm{\cal{E}} _k/\enm{\cal{F}} \cong \enm{\cal{I}} _S\otimes (\enm{\cal{L}}^{\otimes (r-2)}\otimes \enm{\cal{R}} ^{\otimes (r-1)})^\vee$. Since $\enm{\cal{E}} _1$ is indecomposable, we get that $\enm{\cal{E}} _1\cong \enm{\cal{L}}\otimes \enm{\cal{R}}$. But since $\sharp (S) \ge r$, we have $\mathrm{ext}_X^1( \enm{\cal{I}} _S\otimes (\enm{\cal{L}}^{\otimes (r-2)}\otimes \enm{\cal{R}} ^{\otimes (r-1)})^\vee ,\enm{\cal{O}} _X)\ge r$ and so we may choose $\enm{\cal{E}}$ so that $\enm{\cal{L}} \otimes \enm{\cal{R}} $ is not a factor of $\enm{\cal{E}}$. \end{proof} \begin{proposition}\label{aa2} Assume $n=\dim (X)\ge 3$ and let $\enm{\cal{D}}\subset X$ be a simple normal crossing divisor. For a fixed $\enm{\cal{L}} \in \mathrm{Pic}(X)$ and an integer $r\ge n$, there exists a $2$-nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs bundle $(\enm{\cal{E}} ,\Phi)$, where $\enm{\cal{E}}$ is an indecomposable vector bundle of rank $r$ on $X$ with $\det (\enm{\cal{E}} )\cong \enm{\cal{L}}$. \end{proposition} \begin{proof} We first assume that $\enm{\cal{L}} ^\vee$ is very ample with \begin{itemize} \item $h^1(\enm{\cal{L}} ^\vee )=h^2(\enm{\cal{L}} ^\vee )=0$, where we use the assumption $n\ge 3$; \item $h^0(\enm{\cal{L}} ^\vee )\ge r-1$ and $h^0(\enm{\cal{L}} ^\vee \otimes T_X(-\log \enm{\cal{D}} )) > 0$. \end{itemize} Fix a very ample line bundle $\enm{\cal{H}}$ on $X$ such that $h^0(\enm{\cal{H}} ^\vee \otimes \enm{\cal{L}} ^\vee )=h^1((\enm{\cal{H}} ^\vee )^{\otimes 2}\otimes \enm{\cal{L}} ^\vee )=0$, e.g. by taking $\enm{\cal{H}} \cong (\enm{\cal{L}} ^\vee )^{\otimes 2}$ and applying Kodaira's vanishing. 
Let $Y\subset X$ be a general complete intersection of two elements of $|\enm{\cal{H}} |$; then $Y$ is a non-empty connected manifold of codimension $2$ with normal bundle $N_Y$, isomorphic to $\enm{\cal{H}}_{|Y} ^{\oplus 2}$. The line bundle $\enm{\cal{R}} := \wedge ^2N_Y\otimes \enm{\cal{L}} ^\vee _{|Y} \cong (\enm{\cal{H}} ^{\otimes 2}\otimes \enm{\cal{L}} ^\vee )_{|Y}$ is a very ample line bundle on $Y$ and we have $h^0(Y,\enm{\cal{R}} )\ge h^0(Y,(\enm{\cal{L}} ^\vee )_{|Y})$. From the exact sequence $$0\to (\enm{\cal{H}} ^\vee )^{\otimes 2}\to (\enm{\cal{H}} ^\vee )^{\oplus 2}\to \enm{\cal{I}} _Y\to 0$$ we get $h^0(\enm{\cal{I}} _Y\otimes \enm{\cal{L}}^\vee ) =0$ and so $h^0(Y,\enm{\cal{R}} )\ge h^0(Y,(\enm{\cal{L}} ^\vee )_{|Y}) \ge r-1$. Since $\enm{\cal{R}}$ is spanned and $\dim (Y)=n-2$, a general $(n-1)$-dimensional linear subspace $V\subset H^0(Y,\enm{\cal{R}})$ spans $\enm{\cal{R}}$. Hence there are linearly independent sections $s_1,\dots ,s_{r-1}$ of $H^0(Y,\enm{\cal{R}})$ spanning $\enm{\cal{R}}$. Since $H^2(\enm{\cal{L}} ^\vee )=0$, by the Hartshorne-Serre correspondence the sections $s_1,\dots ,s_{r-1}$ give a vector bundle $\enm{\cal{E}}$ of rank $r$ fitting into an exact sequence (see \cite[Theorem 1.1]{Arrondo}) $$0\to \enm{\cal{O}} _X^{\oplus (r-1)} \to \enm{\cal{E}} \to \enm{\cal{I}} _Y\otimes \enm{\cal{L}} \to 0.$$ In particular we have $\det (\enm{\cal{E}} )\cong \enm{\cal{L}}$. Any non-zero section of $H^0(\enm{\cal{L}} ^\vee \otimes T_X(-\log \enm{\cal{D}} ))$ gives a $2$-nilpotent logarithmic co-Higgs structure on $\enm{\cal{E}}$ with $\Phi \ne 0$. Now it remains to show that $\enm{\cal{E}}$ is indecomposable. Assume $\enm{\cal{E}} \cong \enm{\cal{G}}_1 \oplus \enm{\cal{G}}_2$ with $\enm{\cal{G}}_i$ non-zero. Let $\enm{\cal{G}}_i'$ be the image of the evaluation map $H^0(\enm{\cal{G}}_i )\otimes \enm{\cal{O}} _X\rightarrow \enm{\cal{G}}_i$ for $i=1,2$.
Since $\enm{\cal{L}} ^\vee$ is very ample, we have $h^0(\enm{\cal{E}} )=r-1$ and the image of the evaluation map $H^0(\enm{\cal{E}} )\otimes \enm{\cal{O}} _X\rightarrow \enm{\cal{E}}$ is isomorphic to $\enm{\cal{O}} _X^{\oplus (r-1)}$ and so $\enm{\cal{G}}_1'\oplus \enm{\cal{G}} _2'\cong \enm{\cal{O}} _X^{\oplus (r-1)}$. In particular, we have $\enm{\cal{G}}_i \cong \enm{\cal{G}}_i'$ for some $i$ and so at least one of the factors of $\enm{\cal{E}}$ is trivial. Set $\enm{\cal{E}} \cong \enm{\cal{O}} _X\oplus \enm{\cal{F}}$ with $\mathrm{rank}(\enm{\cal{F}} )=r-1$. By \cite[Theorem 1.1]{Arrondo} the bundle $\enm{\cal{F}}$ comes from $u_1,\dots ,u_{r-2}\in H^0(Y,\enm{\cal{R}})$ and so $\enm{\cal{E}}$ is induced by the sections $u_1,\dots ,u_{r-2},0$. Since $H^1(\enm{\cal{L}} ^\vee )=0$, the uniqueness part of \cite[Theorem 1.1]{Arrondo} gives that $s_1,\dots ,s_{r-1}$ lie in the linear subspace of $H^0(Y,\enm{\cal{R}})$ spanned by $u_1,\dots ,u_{r-2}$, and so they are linearly dependent, a contradiction. Now we drop any assumption on $\enm{\cal{L}}$. Take an integer $m\gg 0$ and set $\enm{\cal{L}}' := \enm{\cal{L}} \otimes (\enm{\cal{H}} ^\vee )^{\otimes (mr)}$. Then we get that $(\enm{\cal{L}}') ^\vee $ is very ample and $H^2((\enm{\cal{L}}')^\vee )=0$. By the first part there is $(\enm{\cal{E}}' ,\Phi' )$ with $\det (\enm{\cal{E}}')\cong \enm{\cal{L}}'$. We may take $\enm{\cal{E}} := \enm{\cal{E}}' \otimes \enm{\cal{H}} ^{\otimes m}$, so that $\det (\enm{\cal{E}} )\cong \enm{\cal{L}}'\otimes \enm{\cal{H}} ^{\otimes (mr)}\cong \enm{\cal{L}}$, and let $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-\log \enm{\cal{D}} )$ be the non-zero map induced by $\Phi'$. \end{proof} Allowing non-locally free sheaves, we may extend Proposition \ref{aa2} to all ranks at least two in the following way.
\begin{proposition}\label{aa2.00} Under the same assumptions as in Proposition \ref{aa2} with $2\le r\le n-1$, there exists a $2$-nilpotent $\enm{\cal{D}}$-logarithmic co-Higgs reflexive sheaf $(\enm{\cal{E}} ,\Phi)$, where $\enm{\cal{E}}$ is indecomposable of rank $r$ with $\det (\enm{\cal{E}} )\cong \enm{\cal{L}}$ and non-locally free locus of dimension at most $(n-r-1)$. \end{proposition} \begin{proof} We follow the proof of Proposition \ref{aa2}. We first assume that $\enm{\cal{L}} ^\vee$ is very ample and take $(\enm{\cal{H}}, Y, \enm{\cal{R}})$ as in the proof of Proposition \ref{aa2}. Since $r\ge 2$, we may find $r-1$ elements $s_1,\dots ,s_{r-1}\in H^0(Y,\enm{\cal{R}})$ spanning $\enm{\cal{R}}$ outside a subset $T$ of $Y$ with $\dim (T) \le \dim (Y)-r+1 = n-r-1$. By \cite{Hartshorne1}, the sections $s_1,\dots ,s_{r-1}$ give a reflexive sheaf $\enm{\cal{E}}$ of rank $r$ on $X$ with $\det (\enm{\cal{E}}) \cong \enm{\cal{L}}$ and $\enm{\cal{E}}$ locally free outside $T$. The reduction to the case in which $\enm{\cal{L}}^\vee$ is very ample can be carried out using the argument in the proof of Proposition \ref{aa2}. \end{proof} \section{Vanishing along divisors}\label{divisor} As observed, if $\dim (X)\ge 2$ the notion of logarithmic co-Higgs bundle does not ask for a map $\varphi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_X(-D)$. In this section we study vector bundles of rank two on a projective plane and a smooth quadric surface with sections in $H^0(\mathcal{E}nd(\enm{\cal{E}})\otimes T_X(-D))$. \subsection{Projective plane} Let $X = \enm{\mathbb{P}}^2$ and take a line $D\in |\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(1)|$. Then we have $T_{\enm{\mathbb{P}}^2}(-D) =T_{\enm{\mathbb{P}}^2}(-1)$ and so $h^0(T_{\enm{\mathbb{P}}^2}(-D)) =3$.
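The count $h^0(T_{\enm{\mathbb{P}}^2}(-1))=3$ follows from the Euler sequence; the display below is a sketch of this standard computation, added for the reader's convenience (it uses plain $\mathcal{O}$/$T$ notation rather than the macros of this paper).

```latex
% Twist the Euler sequence on \mathbb{P}^2 by \mathcal{O}_{\mathbb{P}^2}(-1):
\[
0 \to \mathcal{O}_{\mathbb{P}^2}(-1) \to \mathcal{O}_{\mathbb{P}^2}^{\oplus 3}
  \to T_{\mathbb{P}^2}(-1) \to 0 .
\]
% Since h^0(\mathcal{O}_{\mathbb{P}^2}(-1)) = h^1(\mathcal{O}_{\mathbb{P}^2}(-1)) = 0,
% the long exact sequence in cohomology gives
\[
h^0(T_{\mathbb{P}^2}(-1)) = h^0(\mathcal{O}_{\mathbb{P}^2}^{\oplus 3}) = 3 .
\]
```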
Starting from a non-zero section in $H^0(T_{\enm{\mathbb{P}}^2}(-D))$, we may give a $2$-nilpotent co-Higgs structure on a vector bundle $\enm{\cal{E}}$ of rank $2$ fitting into the exact sequence \begin{equation}\label{++} 0 \to \enm{\cal{O}} _{\enm{\mathbb{P}}^2}\to \enm{\cal{E}} \to \enm{\cal{I}} _Z\to 0. \end{equation} Thus there exists a strictly semistable co-Higgs bundle of rank two for all $c_2\ge 0$, which is indecomposable for $c_2>0$. Indeed for any such bundle with positive $c_2$ we have a three-dimensional vector space of $2$-nilpotent co-Higgs structures. By contrast, some non-existence results for co-Higgs bundles on projective spaces are given in \cite[Section 3]{BH}. Applying the same argument to $T_{\enm{\mathbb{P}}^n}(-1)$, we get the following, as in Proposition \ref{ttts}. \begin{proposition} If $\enm{\cal{E}}$ is a stable reflexive sheaf of rank two on $\enm{\mathbb{P}}^n$ with $n\ge 2$, then any nilpotent map $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_{\enm{\mathbb{P}}^n}(-1)$ is trivial. \end{proposition} \subsection{Quadric surface} Let $X=\enm{\mathbb{P}}^1 \times \enm{\mathbb{P}}^1$ and take $D\in |\enm{\cal{O}} _X(1,0)|$; by symmetry the case $D\in |\enm{\cal{O}} _X(0,1)|$ is similar. We have $T_X(-D)\cong \enm{\cal{O}} _X(1,0)\oplus \enm{\cal{O}} _X(-1,2)$. \quad (a) In case $\det (\enm{\cal{E}} )\cong \enm{\cal{O}} _X$ we prove the existence for $c_2\ge 0$. By taking $r=d=r'=d'=0$, we obtain a $2$-nilpotent co-Higgs structure induced by $\enm{\cal{I}} _Z \rightarrow T_X(-D )$, i.e. by a non-zero section of $T_X(-D)$. This construction gives $(\enm{\cal{E}} ,\Phi)$ with $\enm{\cal{E}}$ strictly semistable for any polarization. \quad (b) In case $\det (\enm{\cal{E}} )\cong \enm{\cal{O}} _X(-1,0)$ we also see the existence for $c_2\ge 0$. Since $h^0(T_X(-D)(-1,0)) >0$, we take $(r,r',d,d')=(-1,0,0,0)$ and $\Phi$ induced by a non-zero map $\enm{\cal{I}} _Z \rightarrow T_X(-D)(-1,0)$.
Then $\enm{\cal{E}}$ is stable for every polarization, unless $Z=\emptyset$ and $\enm{\cal{E}}$ splits, because $Z\ne \emptyset$ would imply $h^0(\enm{\cal{E}})=0$; even when $Z=\emptyset$ and so $\enm{\cal{E}} \cong \enm{\cal{O}} _X\oplus \enm{\cal{O}} _X(-1,0)$, the pair $(\enm{\cal{E}} ,\Phi)$ is stable for every polarization. \quad (c) Assume $\det (\enm{\cal{E}})\cong \enm{\cal{O}} _X(-1,-1)$ and take $(r,d)=(-1,0)$ and $(r', d')=(0,-1)$ with $D\in |\enm{\cal{O}} _X(1,0)|$. Note that $h^0(T_X(-D )(-1,1)) >0$ and $c_2(\enm{\cal{E}} ) =\deg (Z)+1$. Then we get that $\enm{\cal{E}}$ is semistable with respect to $\enm{\cal{O}} _X(1,1)$. \begin{remark} \begin{enumerate} \item Our method of constructing a $2$-nilpotent co-Higgs structure does not apply to the case $\det (\enm{\cal{E}}) \cong \enm{\cal{O}}_X(0,-1)$, because it would require a non-zero section of $T_X(-D)(0,-1)$, while $h^0(T_X(-D)(0,-1))=0$. \item Take $\enm{\cal{D}}= L\cup R$ with $L, R\in |\enm{\cal{O}} _X(1,0)|$ and $L\ne R$; the case with $L,R\in |\enm{\cal{O}} _X(0,1)|$ is similar. Then the existence in the case $c_1(\enm{\cal{E}})=\enm{\cal{O}}_X(0,0)$ can be shown for any $c_2\ge 0$ as above. \end{enumerate} \end{remark} \section{Extension of co-Higgs bundles}\label{exttt} Fix an ample line bundle $\enm{\cal{H}}$ on $X$ and a vector bundle $\enm{\cal{G}}$. Then we may define $\enm{\cal{H}}$-(semi)stability for a pair $(\enm{\cal{E}}, \Phi)$ with $\enm{\cal{E}}$ a torsion-free sheaf and $\Phi: \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes \enm{\cal{G}}$, as in Definition \ref{ss1} with $\enm{\cal{G}}$ instead of $T_X(-\log \enm{\cal{D}})$. Then the definition of (logarithmic) co-Higgs bundle is obtained by taking $\enm{\cal{G}}\in \{T_X,T_X(-\log \enm{\cal{D}} ), T_X(-D)\}$ with the integrability condition $\Phi \wedge\Phi =0$. Note that it is enough to check the integrability condition on a non-empty open subset $U$ of $X$.
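A short justification of this last reduction, assuming $X$ irreducible, $\enm{\cal{E}}$ torsion-free and $\enm{\cal{G}}$ locally free (a standard torsion-freeness argument, spelled out here in plain notation for convenience):

```latex
% \Phi\wedge\Phi is a global section of
% \mathcal{H}om(\mathcal{E},\mathcal{E}\otimes\Lambda^2\mathcal{G}),
% which is torsion-free because \mathcal{E}\otimes\Lambda^2\mathcal{G} is
% torsion-free. A section of a torsion-free sheaf vanishing on the
% non-empty (hence dense) open subset U is supported on the proper closed
% subset X\setminus U, so it is a torsion section and therefore zero:
\[
(\Phi\wedge\Phi)|_{U}=0 \;\Longrightarrow\; \Phi\wedge\Phi=0 .
\]
```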
\begin{definition} Fix an effective divisor $D\subset X$ and a positive integer $k$, for which we take $\enm{\cal{G}} := T_X(kD)$. A pair $(\enm{\cal{E}}, \Phi)$ is called a {\it meromorphic co-Higgs sheaf} with poles of order at most $k$ contained in $D$, if it satisfies the integrability condition on $U:=X \setminus D$. \end{definition} Via the inclusion $T_X\hookrightarrow T_X(kD)$ induced by a section of $\enm{\cal{O}} _X(kD)$ with $kD$ as its zeros, we see that any co-Higgs sheaf is also a meromorphic co-Higgs sheaf for any $k$ and $D$. A meromorphic co-Higgs sheaf with poles contained in $D$ induces an ordinary co-Higgs sheaf $(\enm{\cal{F}} ,\varphi)$ on the non-compact manifold $U$, and our definition of meromorphic co-Higgs sheaves captures the extensions of $(\enm{\cal{F}} ,\varphi)$ to $X$ with poles of order at most $k$ on $D$. \begin{remark} We may generalize the definition of a meromorphic co-Higgs sheaf as follows: take $D=\cup _{i=1}^{s} D_i$ with each $D_i$ irreducible and consider $\sum _{i=1}^{s} k_iD_i$, $k_i$ a positive integer, instead of $kD$. Then we get the co-Higgs sheaves $(\enm{\cal{F}} ,\varphi)$ on $X\setminus D$, which extend meromorphically to $X$ with poles of order at most $k_i$ on each $D_i$. \end{remark} Our method used in constructing $2$-nilpotent co-Higgs sheaves (see \cite[Condition 2.2]{BH}) can be applied to construct $2$-nilpotent meromorphic co-Higgs sheaves, if $h^0(T_X(kD))>0$; we may easily check when the construction gives locally free ones. In the set-up of Sections \ref{quad} and \ref{divisor} we immediately see how to construct examples realizing several Chern classes. Assume that $\dim (X)=1$ and let $D = p_1+\cdots +p_s$ with $p_1,\dots ,p_s$ distinct points on $X$. Set $\ell:= \deg (\sum _{i=1}^{s} k_ip_i)$ and $r:= \mathrm{rank}(\enm{\cal{E}})$. We adapt the proof of \cite[Theorem 6.1]{R2} with only very minor modifications to prove the following result.
To cover the case needed in Example \ref{bbb+1} we allow $\ell$ to be any integer at least $-1$. \begin{proposition}\label{bbb+2} Let $\enm{\cal{E}} \cong \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(a_1)\oplus \cdots \oplus \enm{\cal{O}}_{\enm{\mathbb{P}}^1}(a_r)$ be a vector bundle of rank $r\ge 2$ on $\enm{\mathbb{P}}^1$ with $a_1\ge \cdots \ge a_r$. \begin{itemize} \item [(i)] If $(\enm{\cal{E}}, \Phi)$ is semistable with a map $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} (2+\ell )$, then we have $a_{i+1} \ge a_i -\ell -2$ for each $i \le r-1$. \item [(ii)] Conversely, if $a_{i+1} \ge a_i -\ell -2$ for each $i \le r-1$, then there is a map $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} (2+\ell )$ such that no proper subbundle $\enm{\cal{F}} \subset \enm{\cal{E}}$ satisfies $\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{F}} (2+\ell)$, and in particular $(\enm{\cal{E}} ,\Phi)$ is stable. The set of all such $\Phi$ is a non-empty open subset of the vector space $H^0(\mathcal{E}nd (\enm{\cal{E}} )(2+\ell ))$. \end{itemize} \end{proposition} \begin{proof} Assume the existence of an integer $i$ such that $a_{i+1} \le a_i -\ell -3$ and take $\Phi : \enm{\cal{E}} \rightarrow \enm{\cal{E}} (2+\ell )$. Set $\enm{\cal{E}} =\enm{\cal{F}} \oplus \enm{\cal{G}}$ with $\enm{\cal{F}} := \oplus _{j=1}^{i} \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_j)$ and $\enm{\cal{G}}:= \oplus _{j=i+1}^r \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_j)$. Since any map $\enm{\cal{F}} \rightarrow \enm{\cal{G}} (2+\ell)$ is the zero map, we have $\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{F}} (2+\ell)$ and so $(\enm{\cal{E}} ,\Phi)$ is not semistable. Now assume $a_{i+1}\ge a_i-\ell -2$ for all $i$. Write $\Phi$ as an $(r\times r)$-matrix $B$ with entries $b_{i,j}\in \mathrm{Hom}(\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_i),\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_j+2+\ell ))$.
For fixed homogeneous coordinates $z_0, z_1$ on $\enm{\mathbb{P}}^1$ with $\infty = [1:0]$ and $0 =[0:1]$, view a homogeneous polynomial of degree $d$ in the variables $z_0,z_1$ as a polynomial of degree at most $d$ in the variable $z:= z_0/z_1$. Take $$B=\begin{bmatrix} 0&1&0&\cdots &0\\ 0&0&1&\cdots &0\\ \vdots & \vdots & \ddots & \ddots&\vdots \\ 0&0&\cdots&\cdots&1\\ z&0&\cdots&\cdots &0\end{bmatrix}$$ so that $b_{i,j} =0$ unless either $(i,j)=(r,1)$ or $j=i+1$; we take $b_{i,i+1} =1$ for all $i$, i.e. the elements of $\enm{\mathbb{C}}[z]$ associated to $z_1^{a_{i+1}-a_i+2+\ell}$, and $b_{r,1} =z$, the element of $\enm{\mathbb{C}}[z]$ associated to $z_0z_1^{a_1-a_r+1+\ell}$. Then there is no proper subbundle $\enm{\cal{F}}\subset \enm{\cal{E}}$ with $\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{F}} (2+\ell)$, because the characteristic polynomial of $B$ is $\det (tI-B)= t^r-z$, which is irreducible in $\enm{\mathbb{C}}[z,t]$. \end{proof} \begin{remark} Assume the genus $g$ of $X$ is at least $2$ and that $2-2g+\ell <0$. Then there exists no semistable meromorphic co-Higgs bundle $(\enm{\cal{E}} ,\Phi)$ with $\Phi \ne 0$. Indeed, if $\enm{\cal{E}}$ is semistable, then $\Phi$ would be a non-zero map between two semistable vector bundles with the target having lower slope; if $\enm{\cal{E}}$ is not semistable, the same slope comparison gives $\Phi (\enm{\cal{F}} )=0$ for the maximal destabilizing subsheaf $\enm{\cal{F}}$ of $\enm{\cal{E}}$, and so $(\enm{\cal{E}} ,\Phi )$ is not semistable. \end{remark} \section{Moduli over projective plane}\label{mex} Let $X=\enm{\mathbb{P}}^n$ and fix $\enm{\cal{D}}=\{D\}$ with $D\in |\enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)|$. Then we have $T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}}) \cong \enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)^{\oplus n}$ and $$\Phi=(\varphi_1, \ldots, \varphi_n): \enm{\cal{E}} \rightarrow \enm{\cal{E}} \otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}})$$ with $\varphi_i : \enm{\cal{E}} \rightarrow \enm{\cal{E}}(1)$ for $i=1,\ldots, n$. Assume that $(\enm{\cal{E}}, \Phi)$ is a semistable $\enm{\cal{D}}$-logarithmic co-Higgs bundle of rank $r$.
If $\enm{\cal{E}} \cong \oplus_{i=1}^r \enm{\cal{O}}_{\enm{\mathbb{P}}^n}(a_i)$ is a direct sum of line bundles on $\enm{\mathbb{P}}^n$ with $a_i \ge a_{i+1}$ for all $i$, then we get $a_i \le a_{i+1}+1$ for all $i$ by adapting the proof of \cite[Theorem 6.1]{R2}. Thus in case $\mathrm{rank}(\enm{\cal{E}})=r=2$, by a twist we fall into two cases: $\enm{\cal{O}}_{\enm{\mathbb{P}}^n}^{\oplus 2}$ or $\enm{\cal{O}}_{\enm{\mathbb{P}}^n}\oplus \enm{\cal{O}}_{\enm{\mathbb{P}}^n}(-1)$. We denote by $\mathcal{E}nd_0(\enm{\cal{E}})$ the kernel of the trace map $\mathcal{E}nd(\enm{\cal{E}}) \rightarrow \enm{\cal{O}}_X$, the trace-free part, and then we have $$\mathcal{E}nd(\enm{\cal{E}})\otimes T_X(-\log \enm{\cal{D}}) \cong (\mathcal{E}nd_0(\enm{\cal{E}})\otimes T_X(-\log \enm{\cal{D}})) \oplus T_X(-\log \enm{\cal{D}}).$$ Thus any co-Higgs field $\Phi$ can be decomposed into $\Phi_1+ \Phi_2$ with $\Phi_1 \in H^0(\mathcal{E}nd_0(\enm{\cal{E}})\otimes T_X(-\log \enm{\cal{D}}))$ and $\Phi_2\in H^0(T_X(-\log \enm{\cal{D}}))$. Note that $(\enm{\cal{E}}, \Phi)$ is (semi)stable if and only if $(\enm{\cal{E}}, \Phi_1)$ is (semi)stable. Thus we may restrict our attention to trace-free logarithmic co-Higgs bundles. Let us denote by $\mathbf{M}_{\enm{\cal{D}}}(c_1, c_2)$ the moduli space of semistable trace-free $\enm{\cal{D}}$-logarithmic co-Higgs bundles of rank two on $\enm{\mathbb{P}}^2$ with Chern classes $(c_1, c_2)$. In case $\enm{\cal{D}}=\emptyset$ we simply denote the moduli space by $\mathbf{M}(c_1, c_2)$. \begin{proposition}\label{de1} $\mathbf{M}_{\enm{\cal{D}}}(-1,0)$ is isomorphic to the total space of $\enm{\cal{O}}_{D}(-2)^{\oplus 6}$. \end{proposition} \begin{proof} By \cite[Lemma 3.2]{hart} $\enm{\cal{E}}$ is not semistable for $(\enm{\cal{E}}, \Phi) \in \mathbf{M}_{\enm{\cal{D}}}(-1,0)$ and so we get an exact sequence $0 \rightarrow \enm{\cal{O}}_{\enm{\mathbb{P}}^2}(t) \rightarrow \enm{\cal{E}}\rightarrow \enm{\cal{I}}_Z(-t-1)\rightarrow 0$ with $t\ge 0$.
Since $(\enm{\cal{E}}, \Phi)$ is semistable, $\Phi$ induces a non-zero map $\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(t)\rightarrow \enm{\cal{I}}_Z(-t)^{\oplus 2}$ and so we get $t=0$ and $Z=\emptyset$. Thus we get $\enm{\cal{E}} \cong \enm{\cal{O}}_{\enm{\mathbb{P}}^2}\oplus \enm{\cal{O}}_{\enm{\mathbb{P}}^2}(-1)$. Then following the proof of \cite[Theorem 5.2]{Rayan} verbatim, we see that $$\mathbf{M}_{\enm{\cal{D}}}(-1,0) \cong H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(2))\times (H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^2}^{\oplus 2}) \setminus \{0\})\sslash \enm{\mathbb{C}}^*,$$ where $\enm{\mathbb{C}}^*$ acts on $H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(2))$ with weight $-2$ and on $H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^2}^{\oplus 2})\setminus \{0\}$ with weight $1$. Thus we get that $\mathbf{M}_{\enm{\cal{D}}}(-1,0)$ is isomorphic to the total space of $\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(-2)^{\oplus 6}$. Indeed, from the sequence (\ref{log1}) twisted by $-1$, we can identify $\enm{\mathbb{P}} H^0(\enm{\cal{O}}_{\enm{\mathbb{P}}^2}^{\oplus 2})$ with $D$ and so $\mathbf{M}_{\enm{\cal{D}}}(-1,0)$ can be obtained by restricting $\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(-2)^{\oplus 6}$ to $D$ as a closed subscheme of $\mathbf{M}(-1,0)$, which is isomorphic to the total space of $\enm{\cal{O}}_{\enm{\mathbb{P}}^2}(-2)^{\oplus 6}$ (see \cite[Theorem 5.2]{Rayan}). \end{proof} Recall from \cite[Page 1447]{Rayan} that $\mathbf{M}(0,0)$ is $8$-dimensional and non-isomorphic to $\mathbf{M}(-1,0)$, with an explicitly described open dense subset. In contrast to Proposition \ref{de1}, we obtain $\mathbf{M}_{\enm{\cal{D}}}(0,0)$ as a subspace of codimension two in $\mathbf{M}(0,0)$. \begin{proposition} $\mathbf{M}_{\enm{\cal{D}}}(0,0)$ contains the total space of $\enm{\cal{O}}_{\enm{\mathbb{P}}^5}(-2)$ with the zero section contracted to a point, as an open dense subset. \end{proposition} \begin{proof} Take $(\enm{\cal{E}}, \Phi)\in \mathbf{M}_{\enm{\cal{D}}}(0,0)$.
From $c_2=c_1^2$, we get that $\enm{\cal{E}}$ is not stable and so it fits into the following exact sequence $$0\to \enm{\cal{O}} _{\enm{\mathbb{P}}^2}(t)\to \enm{\cal{E}} \to \enm{\cal{I}} _Z(-t)\to 0$$ with $t\ge 0$ and $\deg (Z)=t^2$. First assume $t>0$. Since every map $\enm{\cal{O}} _{\enm{\mathbb{P}}^2}(t) \rightarrow \enm{\cal{I}} _Z(-t)\otimes T_{\enm{\mathbb{P}}^2}(-\log \enm{\cal{D}} )$ is the zero map, we get $\Phi (\enm{\cal{O}} _{\enm{\mathbb{P}}^2}(t)) \subset \enm{\cal{O}} _{\enm{\mathbb{P}}^2}(t)\otimes T_{\enm{\mathbb{P}}^2}(-\log \enm{\cal{D}} )$, contradicting the semistability of $(\enm{\cal{E}}, \Phi)$. Now assume $t=0$ and so we get $\enm{\cal{E}} \cong \enm{\cal{O}} _{\enm{\mathbb{P}}^2}^{\oplus 2}$. Then we follow the argument in \cite[Theorem 5.3]{Rayan} to get the assertion. \end{proof} \section{Coherent system and Holomorphic triple}\label{osem} If $\enm{\cal{F}}\subset \enm{\cal{E}}$ is a non-zero subsheaf, then its saturation $\widetilde{\enm{\cal{F}}}$ is defined to be the maximal subsheaf of $\enm{\cal{E}}$ containing $\enm{\cal{F}}$ with $\op{rank} \widetilde{\enm{\cal{F}}}=\op{rank} \enm{\cal{F}}$; $\widetilde{\enm{\cal{F}}}$ is the only subsheaf of $\enm{\cal{E}}$ of rank $\op{rank} \enm{\cal{F}}$ containing $\enm{\cal{F}}$ with $\enm{\cal{E}} /\widetilde{\enm{\cal{F}}}$ torsion-free. \subsection{Coherent system} Inspired by the theory of coherent systems on smooth algebraic curves in \cite{bgmn}, we consider the following definition. Let $\enm{\cal{E}}$ be a torsion-free sheaf of rank $r\ge 2$ on $X$ and $(\enm{\cal{E}}, \Phi)$ be a $\enm{\cal{D}}$-logarithmic co-Higgs structure.
Then we define the set $$\enm{\cal{S}}=\enm{\cal{S}}(\enm{\cal{E}}, \Phi):=\{ (\enm{\cal{F}}, \enm{\cal{G}})~|~0\subsetneq \enm{\cal{F}} \subseteq \enm{\cal{G}} \subseteq \enm{\cal{E}} \text{ with }\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{G}} \otimes T_X(-\log \enm{\cal{D}} )\}.$$ For a fixed real number $\alpha \ge 0$ and $(\enm{\cal{F}}, \enm{\cal{G}}) \in \enm{\cal{S}}$, set \begin{align*} \mu _\alpha (\enm{\cal{F}} ,\enm{\cal{G}}) &= \mu (\enm{\cal{F}} ) + \alpha \left (\frac{\op{rank} \enm{\cal{F}} }{\op{rank} \enm{\cal{G}}}\right)\\ \mu '_\alpha (\enm{\cal{F}} ,\enm{\cal{G}}) &= \mu (\enm{\cal{F}} ) + \alpha \left (\frac{\op{rank} \enm{\cal{F}}}{\op{rank} \enm{\cal{F}} +\op{rank} \enm{\cal{G}}}\right). \end{align*} Note that $\mu _\alpha (\enm{\cal{E}} ,\enm{\cal{E}} ) =\mu (\enm{\cal{E}}) + \alpha$ and $\mu '_\alpha (\enm{\cal{E}} ,\enm{\cal{E}} ) =\mu (\enm{\cal{E}} )+ \alpha /2$. From now on we use $\mu _\alpha$, but $\mu '_\alpha$ works equally well. In general, we have $\mu _\alpha (\enm{\cal{F}} ,\enm{\cal{G}} )\le \mu (\enm{\cal{F}} )+\alpha$ for $(\enm{\cal{F}} ,\enm{\cal{G}} )\in \enm{\cal{S}}$ and equality holds if and only if $\op{rank} \enm{\cal{F}} = \op{rank} \enm{\cal{G}}$, i.e. $\enm{\cal{G}}$ is contained in the saturation $\widetilde{\enm{\cal{F}}}$ of $\enm{\cal{F}}$ in $\enm{\cal{E}}$. \begin{definition} The pair $(\enm{\cal{E}} ,\Phi)$ is said to be $\mu _\alpha$-stable (resp. $\mu _\alpha$-semistable) if $\mu _\alpha (\enm{\cal{F}},\enm{\cal{G}} ) < \mu _\alpha (\enm{\cal{E}} ,\enm{\cal{E}} )$ (resp. $\mu _\alpha (\enm{\cal{F}},\enm{\cal{G}} ) \le \mu _\alpha (\enm{\cal{E}} ,\enm{\cal{E}} )$) for all $(\enm{\cal{F}} ,\enm{\cal{G}} )\in \enm{\cal{S}} \setminus \{(\enm{\cal{E}} ,\enm{\cal{E}} )\}$. A similar definition is given with $\mu '_\alpha$. \end{definition} Note that if $\enm{\cal{E}}$ is semistable (resp. stable), then a pair $(\enm{\cal{E}} ,\Phi)$ is $\mu_{\alpha}$-semistable (resp.
$\mu_{\alpha}$-stable) for any $\alpha$ and $\Phi$. The converse also holds for $\Phi =0$. \begin{remark} We have $\Phi (\enm{\cal{F}}) \subseteq \widetilde{\enm{\cal{G}}}\otimes T_X(-\log \enm{\cal{D}})$ for $(\enm{\cal{F}}, \enm{\cal{G}})\in \enm{\cal{S}}$ and so to test the $\mu_{\alpha}$-(semi)stability of $(\enm{\cal{E}} ,\Phi)$, it is sufficient to test the pairs $(\enm{\cal{F}} ,\enm{\cal{G}} )\in \enm{\cal{S}} \setminus \{(\enm{\cal{E}} ,\enm{\cal{E}} )\}$ with $\enm{\cal{G}}$ saturated in $\enm{\cal{E}}$. Moreover, if $\enm{\cal{G}}$ is saturated in $\enm{\cal{E}}$, then $\enm{\cal{G}}\otimes T_X(-\log \enm{\cal{D}})$ is saturated in $\enm{\cal{E}}\otimes T_X(-\log \enm{\cal{D}})$. Since $\Phi (\enm{\cal{F}})$ is a subsheaf of $\Phi (\widetilde{\enm{\cal{F}}})$ with the same rank we have $\Phi (\widetilde{\enm{\cal{F}}})\subseteq \enm{\cal{G}} \otimes T_X(-\log \enm{\cal{D}})$. So to test the $\mu_{\alpha}$-(semi)stability of $(\enm{\cal{E}} ,\Phi)$ it is sufficient to test the pairs $(\enm{\cal{F}} ,\enm{\cal{G}} )\in \enm{\cal{S}} \setminus \{(\enm{\cal{E}} ,\enm{\cal{E}} )\}$ with both $\enm{\cal{F}}$ and $\enm{\cal{G}}$ saturated in $\enm{\cal{E}}$. \end{remark} \begin{lemma}\label{lem23} If $(\enm{\cal{E}} ,\Phi )$ is not semistable (resp. stable), then it is not $\mu _\alpha$-semistable (resp. not $\mu _\alpha$-stable) for any $\alpha$. \end{lemma} \begin{proof} Take $\enm{\cal{F}} \subset \enm{\cal{E}}$ such that $\Phi (\enm{\cal{F}} )\subseteq \enm{\cal{F}} \otimes T_X(-\log \enm{\cal{D}} )$ and $\mu (\enm{\cal{F}} )>\mu (\enm{\cal{E}} )$ (resp. $\mu (\enm{\cal{F}} )\ge \mu (\enm{\cal{E}} )$). We have $(\enm{\cal{F}} ,\enm{\cal{F}} )\in \enm{\cal{S}}$ and so $\mu _\alpha (\enm{\cal{F}} ,\enm{\cal{F}} )=\mu (\enm{\cal{F}} )+\alpha >$ (resp. $\ge$) $\mu (\enm{\cal{E}} )+\alpha=\mu_\alpha (\enm{\cal{E}}, \enm{\cal{E}})$, proving the assertion. 
\end{proof} \begin{remark} Lemma \ref{lem23} shows that $\mu _\alpha$-stability is stronger than the stability of the pairs $(\enm{\cal{E}} ,\Phi )$ in the sense of \cite{R1, R2, Rayan} and so they form a bounded family if we fix the Chern classes of $\enm{\cal{E}}$. However, if $(\enm{\cal{E}} ,\Phi )$ is not $\mu _\alpha$-semistable, a pair $(\enm{\cal{F}} ,\enm{\cal{G}} )\in \enm{\cal{S}}$ with $\mu _\alpha (\enm{\cal{F}} ,\enm{\cal{G}} )>\mu (\enm{\cal{E}} )+\alpha$ and maximal $\mu_\alpha$-slope may have $\op{rank} (\enm{\cal{G}} )>\op{rank} (\enm{\cal{F}} )$, i.e. $\Phi (\enm{\cal{F}} )\nsubseteq \enm{\cal{F}}\otimes T_X(-\log \enm{\cal{D}})$, and so we do not define the Harder-Narasimhan filtration of $\mu_\alpha$-unstable pairs $(\enm{\cal{E}} ,\Phi )$. \end{remark} \begin{proposition}\label{yyy} Let $(\enm{\cal{E}}, \Phi)$ be a $\enm{\cal{D}}$-logarithmic co-Higgs bundle on $X$ with $\enm{\cal{E}}$ not semistable. Then there exist two positive real numbers $\beta$ and $\gamma$ such that \begin{itemize} \item [(i)] $(\enm{\cal{E}}, \Phi)$ is not $\mu_\alpha$-semistable for all $\alpha<\beta$, and \item [(ii)] if $(\enm{\cal{E}}, \Phi)$ is semistable in the sense of Definition \ref{ss1}, it is $\mu_\alpha$-semistable for all $\alpha>\gamma$. \end{itemize} \end{proposition} \begin{proof} Assume that $\enm{\cal{E}}$ is not semistable and take a subsheaf $\enm{\cal{G}}$ with $\mu (\enm{\cal{G}} )>\mu (\enm{\cal{E}})$. Note that $(\enm{\cal{G}} ,\enm{\cal{E}})\in \enm{\cal{S}}$. Then there exists a real number $\beta>0$ such that $\mu _\alpha (\enm{\cal{G}} ,\enm{\cal{E}} )> \mu (\enm{\cal{E}} )+\alpha =\mu _\alpha (\enm{\cal{E}},\enm{\cal{E}})$ for all $\alpha$ with $0<\alpha <\beta$. Thus $(\enm{\cal{E}} ,\Phi)$ is not $\mu_{\alpha}$-semistable if $\alpha < \beta$. Now assume that $\enm{\cal{E}}$ is not semistable, but that $(\enm{\cal{E}} ,\Phi )$ is semistable.
Define $$\Delta=\{\text{saturated subsheaves }\enm{\cal{A}} \subset \enm{\cal{E}} ~|~\mu(\enm{\cal{A}})>\mu(\enm{\cal{E}})\}.$$ Let $\mu _{\max}(\enm{\cal{E}})$ be the maximum of the slopes of subsheaves of $\enm{\cal{E}}$, which exists as a finite real number by the existence of the Harder-Narasimhan filtration of $\enm{\cal{E}}$. Since $\enm{\cal{E}}$ is not semistable, we have $\mu _{\max}(\enm{\cal{E}}) > \mu (\enm{\cal{E}})$ and set $\gamma := r(\mu _{\max}(\enm{\cal{E}} )-\mu (\enm{\cal{E}}))>0$. Fix any real number $\alpha \ge \gamma$. Now take $\enm{\cal{A}} \in \Delta$ and set $s:=\op{rank} \enm{\cal{A}}$. Let $\enm{\cal{B}}$ be a minimal subsheaf of $\enm{\cal{E}}$ with $(\enm{\cal{A}} ,\enm{\cal{B}} )\in \enm{\cal{S}}$. Since $(\enm{\cal{E}}, \Phi)$ is semistable and $\enm{\cal{A}}$ is saturated with $\mu (\enm{\cal{A}} )>\mu (\enm{\cal{E}})$, we have $\Phi (\enm{\cal{A}} )\nsubseteq \enm{\cal{A}} \otimes T_X(-\log \enm{\cal{D}} )$ and so we get $\op{rank} \enm{\cal{B}} >s$. Thus we have \begin{align*} \mu _\alpha (\enm{\cal{A}},\enm{\cal{B}}) &\le \mu (\enm{\cal{A}}) + \alpha s/(s+1) \\ &\le \mu (\enm{\cal{A}}) +\alpha (r-1)/r \le \mu_\alpha (\enm{\cal{E}}, \enm{\cal{E}}) \end{align*} and so $(\enm{\cal{E}} ,\Phi)$ is $\mu_\alpha$-semistable for all $\alpha \ge \gamma$. \end{proof} \begin{remark} For $s=1,\ldots ,r-1$, let $\Delta _s$ be the set of all $\enm{\cal{G}} \in \Delta$ with rank $s$. If $\mu (\enm{\cal{G}} )< \mu _{\max}(\enm{\cal{E}} )$ for all $\enm{\cal{G}} \in \Delta _{r-1}$, we may use a lower real number instead of $\gamma$ in the proof of Proposition \ref{yyy}. \end{remark} \begin{example}\label{++11+} Let $X=\enm{\mathbb{P}}^1$ and take $\enm{\cal{D}}=\{p\}$ with $p$ a point. Then we have $T_{\enm{\mathbb{P}}^1}(-\log \enm{\cal{D}} )\cong T_{\enm{\mathbb{P}}^1}(-p)\cong \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(1)$. Let $(\enm{\cal{E}}, \Phi)$ be a semistable $\enm{\cal{D}}$-logarithmic co-Higgs bundle of rank $r\ge 2$ on $\enm{\mathbb{P}}^1$ with $\enm{\cal{E}} \cong \oplus _{i=1}^{r} \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_i)$, where $a_1\ge \cdots \ge a_r$ and $a_i -a_{i+1} \le 1$ for all $i=1,\dots ,r-1$ as in Example \ref{bbb+1}. We assume that $\enm{\cal{E}}$ is not semistable, i.e. $a_r<a_1$.
The optimal value of $\gamma$ in Proposition \ref{yyy} could depend on $\Phi$, although it is the same for all general $\Phi$. Up to a twist we may assume $a_1=0$. We have $\mu (\enm{\cal{E}}) = c_1/r$ with $c_1=a_1+\cdots +a_r$. For each $s=1,\ldots ,r-1$, set $b_s = (a_1+\cdots +a_s)/s$ and define $$\gamma _0:= \max _{1\le s \le r-1} (s+1)(b_s-c_1/r).$$ We have $\mu (\enm{\cal{F}} )\le b_s$ for all $\enm{\cal{F}} \in \Delta _s$ and so $\mu _\alpha (\enm{\cal{F}}, \enm{\cal{G}} )\le \mu _\alpha (\enm{\cal{E}} ,\enm{\cal{E}})$ for all $(\enm{\cal{F}} ,\enm{\cal{G}})$ with $\op{rank} \enm{\cal{F}} =s$ and $\Phi (\enm{\cal{F}} )\nsubseteq \enm{\cal{F}}\otimes T_{\enm{\mathbb{P}}^1}(-\log \enm{\cal{D}})$. Hence $(\enm{\cal{E}} ,\Phi )$ is $\mu_\alpha$-semistable for all $\alpha \ge \gamma _0$. \end{example} \begin{example} Similarly as in Example \ref{++11+}, we take $X=\enm{\mathbb{P}}^1$ and $\enm{\cal{D}} =\emptyset$. Then we have $T_{\enm{\mathbb{P}}^1}(-\log \enm{\cal{D}}) \cong T_{\enm{\mathbb{P}}^1} \cong \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(2)$. We argue as in Example \ref{++11+}, except that now we only require that $a_i -a_{i+1} \le 2$ for all $i=1,\ldots ,r-1$. \end{example} \begin{example} Take $X=\enm{\mathbb{P}}^n$ with $n\ge 2$ and assume that $(\enm{\cal{E}}, \Phi)$ is a semistable logarithmic co-Higgs reflexive sheaf of rank two with $\enm{\cal{E}}$ not semistable. Up to a twist we may assume $c_1(\enm{\cal{E}})\in \{-1,0\}$. Set $c_1:= c_1(\enm{\cal{E}})$. Since $\enm{\cal{E}}$ is not semistable, we have an exact sequence \begin{equation} 0\to \enm{\cal{O}} _{\enm{\mathbb{P}}^n}(t) \to \enm{\cal{E}} \to \enm{\cal{I}} _Z(c_1-t)\to 0 \end{equation} where either $Z=\emptyset$ or $\dim (Z)=n-2$; here $t\ge 0$, and $t>0$ if $c_1(\enm{\cal{E}})=0$. Since $(\enm{\cal{E}} ,\Phi )$ is semistable, there is no saturated subsheaf $\enm{\cal{A}} \subset \enm{\cal{E}}$ of rank one with $(\enm{\cal{A}} ,\enm{\cal{A}} )\in \enm{\cal{S}}$ and $\mu (\enm{\cal{A}} )> -1$.
Note that $\mu _\alpha (\enm{\cal{O}} _{\enm{\mathbb{P}}^n}(t),\enm{\cal{E}} ) =t +\alpha /2$ and so $(\enm{\cal{E}} ,\Phi )$ is $\mu_\alpha$-stable (resp. $\mu_\alpha$-semistable) if and only if $\alpha > 2t-c_1$ (resp. $\alpha \ge 2t-c_1$). Now we discuss the existence of such a pair $(\enm{\cal{E}} ,\Phi)$. Since $(\enm{\cal{E}} ,\Phi )$ is semistable, we should have $\Phi (\enm{\cal{O}} _{\enm{\mathbb{P}}^n}(t)) \nsubseteq \enm{\cal{O}}_{\enm{\mathbb{P}}^n}(t) \otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )$ and so there is a non-zero map $\enm{\cal{O}} _{\enm{\mathbb{P}}^n}(t)\rightarrow \enm{\cal{I}} _Z(c_1-t)\otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )$. Since $t>c_1-t$ and $h^0(T_{\enm{\mathbb{P}}^n}(-2)) =0$, we get $t=0$ and $c_1 =-1$. Then we also get $H^0(\enm{\cal{I}} _Z(-1)\otimes T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )) \ne 0$, which gives restrictions on the choice of $\enm{\cal{D}}$ and $Z$. Assume that $\enm{\cal{D}}=\{D\}$ with $D\in |\enm{\cal{O}}_{\enm{\mathbb{P}}^n}(1)|$ a hyperplane, so that $T_{\enm{\mathbb{P}}^n}(-\log \enm{\cal{D}} )\cong \enm{\cal{O}} _{\enm{\mathbb{P}}^n}(1)^{\oplus n}$. In this case we get $Z=\emptyset$ and so $\enm{\cal{E}} \cong \enm{\cal{O}} _{\enm{\mathbb{P}}^n}\oplus \enm{\cal{O}} _{\enm{\mathbb{P}}^n}(-1)$. See Proposition \ref{de1} for the associated moduli space in case $n=2$. \end{example} \subsection{Holomorphic triple} We may also consider a holomorphic triple of logarithmic co-Higgs bundles and define its semistability as in \cite{bgg}. 
\begin{definition} A holomorphic triple of $\enm{\cal{D}}$-logarithmic co-Higgs bundles is a triple $((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$, where each $(\enm{\cal{E}} _i,\Phi _i)$ is a $\enm{\cal{D}}$-logarithmic co-Higgs sheaf with each $\enm{\cal{E}} _i$ torsion-free on $X$ and $f: \enm{\cal{E}} _1\rightarrow \enm{\cal{E}} _2$ is a map of sheaves such that $\Phi _2\circ f = \hat{f}\circ \Phi _1$, where $\hat{f}:\enm{\cal{E}} _1\otimes T_X(-\log \enm{\cal{D}} )\rightarrow \enm{\cal{E}} _2\otimes T_X(-\log \enm{\cal{D}} )$ is the map induced by $f$. \end{definition} For any real number $\alpha \ge 0$, define the $\nu_\alpha$-slope of a triple $\enm{\cal{A}}=((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ to be the $\nu_\alpha$-slope of the triple $(\enm{\cal{E}} _1,\enm{\cal{E}} _2,f)$ in the sense of \cite{bgg}, i.e. $$ \nu _\alpha((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f) =\frac{\deg_{\alpha}(\enm{\cal{A}})}{\op{rank} \enm{\cal{E}}_1 + \op{rank} \enm{\cal{E}}_2},$$ where $\deg_{\alpha}(\enm{\cal{A}})= \deg (\enm{\cal{E}}_1)+\deg (\enm{\cal{E}}_2)+\alpha \op{rank} \enm{\cal{E}}_1$. A holomorphic subtriple $\enm{\cal{B}} = ((\enm{\cal{F}}_1,\Psi _1),(\enm{\cal{F}} _2,\Psi _2),g)$ of $\enm{\cal{A}} = ((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ is a holomorphic triple with $\enm{\cal{F}} _i\subset \enm{\cal{E}} _i$, $\Psi _i = \Phi _{i|\enm{\cal{F}} _i}$ and $g = f_{|\enm{\cal{F}} _1}$. Since $\Phi _i$ is integrable, so is $\Psi _i$. \begin{remark} As before, we may use the slope $\nu _\alpha$ to define the $\nu_\alpha$-(semi)stability for $\enm{\cal{D}}$-logarithmic co-Higgs triples. If $h: \enm{\cal{A}} \rightarrow \enm{\cal{B}}$ is a non-zero map of $\nu_\alpha$-semistable holomorphic triples, then we have $\nu _\alpha (\enm{\cal{B}} )\ge \nu _\alpha (\enm{\cal{A}})$.
Moreover, if $\enm{\cal{A}}$ is $\nu_\alpha$-stable, then either $\nu _\alpha (\enm{\cal{B}} )>\nu _\alpha (\enm{\cal{A}})$ or $h$ is injective; in addition, if $\enm{\cal{B}}$ is also $\nu_\alpha$-stable with $\nu _\alpha (\enm{\cal{B}} )=\nu _\alpha (\enm{\cal{A}})$, then $h$ is an isomorphism. \end{remark} \begin{remark}\label{rem25} The degenerate holomorphic triple $((\enm{\cal{E}}_1, \Phi_1), (\enm{\cal{E}}_2, \Phi_2), 0)$ with $f=0$ is $\nu_\alpha$-semistable if and only if $\alpha=\mu (\enm{\cal{E}}_2)-\mu(\enm{\cal{E}}_1)$ and both $(\enm{\cal{E}}_i, \Phi_i)$'s are semistable as in \cite[Lemma 3.5]{bg}. Moreover such triples are not $\nu_\alpha$-stable (see \cite[Corollary 3.6]{bg}). Note that if $\Phi_1=\Phi_2=0$, then we recover the usual holomorphic triples. We also have an analogous statement for the case $r_2=\op{rank} \enm{\cal{E}}_2=1$ as in \cite[Lemma 3.7]{bg}. \end{remark} \begin{remark} For subtriples $\enm{\cal{B}}$ and $\enm{\cal{B}}'$ of $\enm{\cal{A}}$, we may define their sum and intersection $\enm{\cal{B}}+\enm{\cal{B}}'$ and $\enm{\cal{B}} \cap \enm{\cal{B}}'$; let $\enm{\cal{B}} = ((\enm{\cal{F}}_1,\Psi _1),(\enm{\cal{F}} _2,\Psi _2),g)$ and $\enm{\cal{B}} '= ((\enm{\cal{F}} '_1,\Psi '_1),(\enm{\cal{F}} '_2,\Psi '_2),g')$. Then we may use $\enm{\cal{F}} _i+\enm{\cal{F}} '_i$ and $\enm{\cal{F}} _i\cap \enm{\cal{F}} '_i$ with the restrictions of $\Phi _i$ and $f$ to them. Now call $\widetilde{\enm{\cal{F}}}_i$ the saturation of $\enm{\cal{F}} _i$ in $\enm{\cal{E}} _i$. Since $\widetilde{\enm{\cal{F}}}_i\otimes T_X(-\log \enm{\cal{D}} )$ is saturated in $\enm{\cal{E}} _i\otimes T_X(-\log \enm{\cal{D}})$, we have $\Phi _i(\widetilde{\enm{\cal{F}}}_i)\subseteq \widetilde{\enm{\cal{F}}}_i\otimes T_X(-\log \enm{\cal{D}} )$.
Since $f(\enm{\cal{F}} _1)\subseteq \widetilde{\enm{\cal{F}}}_2$, we have $f(\widetilde{\enm{\cal{F}}}_1)\subseteq \widetilde{\enm{\cal{F}}}_2$, and so we may also define the saturation $\widetilde{\enm{\cal{B}}}$ of $\enm{\cal{B}}$ with $\nu_{\alpha}(\widetilde{\enm{\cal{B}}}) \ge \nu_{\alpha}(\enm{\cal{B}})$. \end{remark} Fix $\alpha \in \enm{\mathbb{R}}_{>0}$ and let $\enm{\cal{A}} = ((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ be a holomorphic triple. We define $\beta(\enm{\cal{A}})$ to be the maximum of the set of the $\nu_\alpha$-slopes of all subtriples of $\enm{\cal{A}}$ and let $$\enm{\mathbb{B}}:=\{\enm{\cal{B}} \subseteq \enm{\cal{A}}~|~\nu_\alpha(\enm{\cal{B}})=\beta(\enm{\cal{A}})\}.$$ \begin{lemma}\label{kk1} The set of the $\nu_\alpha$-slopes of all subtriples of $\enm{\cal{A}}$ is bounded above, and so $\beta(\enm{\cal{A}})$ exists. Moreover, the set $\enm{\mathbb{B}}$ has a unique maximal element. \end{lemma} \begin{proof} The rank of any non-zero subsheaf of $\enm{\cal{E}} _i$ is bounded above by $r_i:=\op{rank} \enm{\cal{E}}_i$ and below by $1$. The existence of the Harder-Narasimhan filtration of $\enm{\cal{E}} _i$ gives the existence of rational numbers $\gamma_i$ with denominators between $1$ and $r_i$ such that $\mu (\enm{\cal{F}} _i)\le \gamma _i$ for all non-zero subsheaves $\enm{\cal{F}} _i$ of $\enm{\cal{E}}_i$. We may use the definition of the $\nu_\alpha$-slope to get an upper bound for the $\nu_\alpha$-slopes of the subtriples of $\enm{\cal{A}}$. There are only finitely many possible $\nu_\alpha$-slopes greater than $\nu _\alpha (\enm{\cal{A}})$, because the ranks are bounded above and below and each $\deg (\enm{\cal{G}})$ for a subsheaf $\enm{\cal{G}}$ of $\enm{\cal{E}} _i$ is an integer, bounded above by $\max \{r_1\mu (\enm{\cal{E}} _1), r_2\mu (\enm{\cal{E}} _2)\}$. Thus the set of the $\nu_\alpha$-slopes of all subtriples of $\enm{\cal{A}}$ has a maximum $\beta(\enm{\cal{A}})$.
If $\nu _\alpha (\enm{\cal{A}} )=\beta(\enm{\cal{A}})$, then $\enm{\cal{A}}$ itself is the maximum element of $\enm{\mathbb{B}}$. Now assume $\nu _\alpha (\enm{\cal{A}} )<\beta(\enm{\cal{A}})$ and that there are $\enm{\cal{B}} _1,\enm{\cal{B}} _2\in \enm{\mathbb{B}}$ with each $\enm{\cal{B}} _i$ maximal and $\enm{\cal{B}} _1\ne \enm{\cal{B}} _2$. Since $\enm{\cal{B}} _i$ is maximal, it is saturated and so $\enm{\cal{A}} _i:= \enm{\cal{A}} /\enm{\cal{B}} _i$ is a holomorphic triple for each $i$. Since $\enm{\cal{B}} _2\ne \enm{\cal{B}} _1$, the inclusion $\enm{\cal{B}} _2\subset \enm{\cal{A}}$ induces a non-zero map $u: \enm{\cal{B}}_2\rightarrow \enm{\cal{A}} /\enm{\cal{B}} _1$. Since $\nu _\alpha (\mathrm{ker}(u)) \le \beta(\enm{\cal{A}})$ if $u$ is not injective, we have $\nu _\alpha (u(\enm{\cal{B}} _2)) \ge \beta(\enm{\cal{A}})$. Thus we get $\nu _\alpha (\enm{\cal{B}} _1+\enm{\cal{B}} _2) \ge \beta(\enm{\cal{A}})$, contradicting the maximality of $\enm{\cal{B}} _1$ and the assumption $\enm{\cal{B}} _2\ne \enm{\cal{B}} _1$. \end{proof} Assume that $\enm{\cal{A}}$ is not $\nu_\alpha$-semistable. By Lemma \ref{kk1} there is a subtriple $D(\enm{\cal{A}}) = ((\enm{\cal{F}}_1,\Psi _1),(\enm{\cal{F}} _2,\Psi _2),g)\in \enm{\mathbb{B}}$ such that every $\enm{\cal{G}} \in \enm{\mathbb{B}}$ is a subtriple of $D(\enm{\cal{A}})$ and each $\enm{\cal{F}} _i$ is saturated in $\enm{\cal{E}}_i$. Note that $D(\enm{\cal{A}})$ is $\nu_\alpha$-semistable. Since $\enm{\cal{F}} _i$ is saturated in $\enm{\cal{E}} _i$ and $\Psi _i = \Phi _{i|\enm{\cal{F}} _i}$ for each $i$, $\Phi _i$ induces a co-Higgs field $\tau _i: \enm{\cal{E}} _i/\enm{\cal{F}} _i\rightarrow (\enm{\cal{E}} _i/\enm{\cal{F}} _i)\otimes T_X(-\log \enm{\cal{D}} )$. Since $\Phi _i$ is integrable, so is $\tau _i$.
Since $g=f_{|\enm{\cal{F}} _1}$, $f$ induces a map $f': \enm{\cal{E}} _1/\enm{\cal{F}} _1\rightarrow \enm{\cal{E}} _2/\enm{\cal{F}} _2$ such that $\enm{\cal{A}} /D(\enm{\cal{A}}) := ((\enm{\cal{E}} _1/\enm{\cal{F}} _1, \tau _1),(\enm{\cal{E}} _2/\enm{\cal{F}} _2,\tau _2),f')$ is a holomorphic triple. Now we may check that each subtriple of $\enm{\cal{A}} /D(\enm{\cal{A}})$ has $\nu_\alpha$-slope less than $\beta(\enm{\cal{A}})$, and so $D(\enm{\cal{A}})$ defines the first step of the Harder-Narasimhan filtration of $\enm{\cal{A}}$. Iterating this process, we obtain the Harder-Narasimhan filtration of $\enm{\cal{A}}$ with respect to $\nu _\alpha$. \begin{corollary}\label{mcor} Any holomorphic triple admits a Harder-Narasimhan filtration with respect to the $\nu_\alpha$-slope. \end{corollary} \begin{remark}\label{rem66} Let $Z$ denote a projective completion of $T_X(-\log \enm{\cal{D}})$, e.g. $Z=\enm{\mathbb{P}} (\enm{\cal{O}}_X\oplus T_X(-\log \enm{\cal{D}}))$, and call $D_{\infty}:=Z\setminus T_X(-\log \enm{\cal{D}})$ the divisor at infinity. By \cite[Lemma 6.8]{s} a co-Higgs sheaf $(\enm{\cal{E}}, \Phi)$ on $X$ is the same as a coherent sheaf $\enm{\cal{E}}_Z$ on $Z$ with $\mathrm{Supp}(\enm{\cal{E}}_Z) \cap D_{\infty}=\emptyset$. Due to \cite[Corollary 6.9]{s}, we may interpret a $\nu_\alpha$-semistable holomorphic triple of logarithmic co-Higgs bundles on $X$ as a $\nu_\alpha$-semistable holomorphic triple of vector bundles on $Z$ with support not intersecting $D_{\infty}$, as in \cite{bgg}. \end{remark} Based on Remark \ref{rem66}, we may consider a $\nu_\alpha$-semistable triple of $\enm{\cal{D}}$-logarithmic co-Higgs sheaves as a $\nu_\alpha$-semistable quiver sheaf for the quiver $\xymatrix{\stackrel{1}{\circ} \ar[r]& \stackrel{2}{\circ}}$ on $Z$ whose support does not meet $D_{\infty}$.
This interpretation ensures the existence of a moduli space of $\nu_\alpha$-stable triples of $\enm{\cal{D}}$-logarithmic co-Higgs sheaves on $X$, say $\mathbf{M}_{\enm{\cal{D}}, \alpha}(r_1, r_2, d_1, d_2)$, where $(r_i, d_i)$ is the pair of rank and degree of the $i^{\mathrm{th}}$ factor of the triples; indeed we may consider the Gieseker-type semistability of quiver sheaves to produce the moduli space as in \cite{Schmitt}. As noticed in \cite[Remark on page 17]{Schmitt}, the $\nu_\alpha$-stability implies the Gieseker-type stability, and so $\mathbf{M}_{\enm{\cal{D}}, \alpha}(r_1, r_2, d_1, d_2)$ can be considered as a quasi-projective subvariety of the moduli space in \cite{Schmitt}. Now let us define $$\alpha_m:=\mu(\enm{\cal{E}}_2)-\mu(\enm{\cal{E}}_1)~,~\alpha_M:=\left( 1+ \frac{r_1+r_2}{|r_1-r_2|}\right) \left(\mu(\enm{\cal{E}}_2)-\mu(\enm{\cal{E}}_1)\right)$$ for $\enm{\cal{A}} = ((\enm{\cal{E}}_1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ as in \cite{PP}. Then we have \begin{proposition}\cite[Proposition 2.2]{bgg} If $\alpha>\alpha_M$ with $\op{rank} \enm{\cal{E}}_1\ne \op{rank} \enm{\cal{E}}_2$, or if $\alpha<\alpha_m$, then there exists no $\nu_\alpha$-semistable triple of $\enm{\cal{D}}$-logarithmic co-Higgs sheaves. \end{proposition} \begin{proof} Due to \cite[Corollary 6.9]{s}, it suffices to check the assertion on $\nu_\alpha$-semistability for a triple of coherent sheaves on $Z$. Although the proof of \cite[Proposition 2.2]{bgg} is given for curves, it is purely numerical, involving only ranks and degrees with respect to a fixed ample line bundle, and so it also works for $Z$. \end{proof} From now on we assume that $X$ is a smooth projective curve of genus $g$ and let $\enm{\cal{D}}=\{p_1, \ldots, p_m\}$ be a set of $m$ distinct points on $X$. Take $g\in \{0,1\}$ and assume that $T_X(-\log \enm{\cal{D}}) \cong \enm{\cal{O}}_X$, i.e. $(g,m)\in \{(0,2), (1,0)\}$.
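The slope $\nu_\alpha$ and the window $[\alpha_m,\alpha_M]$ above are purely numerical in the ranks and degrees. As an illustration only (the function names and the sample triple are our own and not taken from the text), one can tabulate them with exact rational arithmetic:

```python
from fractions import Fraction

def nu_alpha(r1, d1, r2, d2, alpha):
    # nu_alpha-slope of a triple with ranks/degrees (r1, d1), (r2, d2):
    # (d1 + d2 + alpha * r1) / (r1 + r2); the co-Higgs fields do not enter.
    return (Fraction(d1 + d2) + Fraction(alpha) * r1) / (r1 + r2)

def alpha_window(r1, d1, r2, d2):
    # The bounds alpha_m and alpha_M defined above; alpha_M requires r1 != r2.
    gap = Fraction(d2, r2) - Fraction(d1, r1)  # mu(E2) - mu(E1)
    alpha_m = gap
    alpha_M = (1 + Fraction(r1 + r2, abs(r1 - r2))) * gap if r1 != r2 else None
    return alpha_m, alpha_M

# sample triple with (r1, d1) = (1, 0) and (r2, d2) = (2, 1)
am, aM = alpha_window(1, 0, 2, 1)
print(am, aM)                     # 1/2 2
print(nu_alpha(1, 0, 2, 1, am))   # 1/2
```

For $r_1=r_2$ the upper bound $\alpha_M$ is left undefined, matching the hypothesis $\op{rank} \enm{\cal{E}}_1\ne \op{rank} \enm{\cal{E}}_2$ in the proposition.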
For any triple $\enm{\cal{A}}= ((\enm{\cal{E}} _1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ and $c\in \enm{\mathbb{C}}$, set \begin{equation}\label{fam} \enm{\cal{A}} _c:= ((\enm{\cal{E}} _1,\Phi _1 -c {\cdot}\mathrm{Id}_{\enm{\cal{E}}_1}),(\enm{\cal{E}} _2,\Phi _2-c{\cdot}\mathrm{Id}_{\enm{\cal{E}}_2}),f) \end{equation} and then $\enm{\cal{A}}_c$ is again a triple. In particular, if $\enm{\cal{E}} _1\cong \enm{\cal{E}} _2$ and $f = c {\cdot}\mathrm{Id}_{\enm{\cal{E}}_1}$, then the study of the $\nu _\alpha$-(semi)stability of $\enm{\cal{A}}$ is reduced to the known case $f=0$. \begin{remark}\label{uu1} Assume that $f$ is not injective. Since $\hat{f}\circ \Phi _1 =\Phi _2\circ f$, we have $\Phi _1(\mathrm{ker}(f)) \subseteq \mathrm{ker}(\hat{f})$ and $\enm{\cal{B}}:= ((\mathrm{ker}(f),\Phi _{1|\mathrm{ker}(f)}),(0,0),0)$ is a subtriple of $\enm{\cal{A}}$. Set $\rho := \op{rank} (\mathrm{ker}(f))$ and $\delta:= \deg (\mathrm{ker}(f))$. If we have $$\nu _\alpha (\enm{\cal{B}} ) = \delta /\rho + \alpha> \frac{r_1\alpha +d_1+d_2}{r_1+r_2},$$ then $\enm{\cal{A}}$ is not $\nu_\alpha$-semistable. \end{remark} \begin{remark}\label{xxx1} For any triple $\enm{\cal{A}} =((\enm{\cal{E}} _1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$, we get a dual triple $\enm{\cal{A}} ^\vee = ((\enm{\cal{E}} _2^\vee ,\Phi _2^\vee), (\enm{\cal{E}} _1^\vee ,\Phi _1^\vee ),f^\vee)$, where $\Phi_i^\vee$ and $f^\vee$ are the transposes of $\Phi_i$ and $f$, respectively. Then $\enm{\cal{A}} $ is $\nu _\alpha$-(semi)stable if and only if $\enm{\cal{A}} ^\vee$ is $\nu _\alpha$-(semi)stable (see \cite[Proposition 3.16]{bg}). \end{remark} \begin{remark}\label{xxx2} Assume $(g,m)=(1,0)$ and take a triple $\enm{\cal{A}}=((\enm{\cal{E}}_1, \Phi_1), (\enm{\cal{E}}_2, \Phi_2), f)$ with each $\enm{\cal{E}} _i$ simple.
By Atiyah's classification of vector bundles on elliptic curves, the simplicity of $\enm{\cal{E}}_i$ is equivalent to its stability, and also to its being indecomposable with coprime rank and degree. Then each $\Phi _i$ is the multiplication by a constant, say $c_i$. We get that the two triples $\enm{\cal{A}}$ and $((\enm{\cal{E}}_1, 0), (\enm{\cal{E}}_2, 0), f)$ share the same subtriples, and so these two triples are $\nu _\alpha$-(semi)stable for the same values of $\alpha$. There is a good description of this case in \cite[Section 7]{PP0}. \end{remark} Now we give a general description of $\nu_\alpha$-(semi)stable triples on $X$ in the case $r_1=r_2=2$ in (a) and (b) below; we exclude the case described in Remark \ref{xxx2} and silently use Remark \ref{xxx1} to get a shorter list. In some cases we stop after reducing to a case with $f$ not injective, i.e. to a case in which $\enm{\cal{A}}$ is not $\nu _\alpha$-semistable for $\alpha \gg 0$ (see Remark \ref{uu1}). \quad (a) Assume $r_1=r_2=2$ and that at least one of the $\enm{\cal{E}} _i$ is not semistable, say $\enm{\cal{E}}_1$. Then, by the Segre-Grothendieck theorem and Atiyah's classification of vector bundles on elliptic curves, we have $\enm{\cal{E}} _1 \cong \enm{\cal{L}} _1\oplus \enm{\cal{R}} _1$ with $\deg (\enm{\cal{L}} _1)>\deg (\enm{\cal{R}} _1)$, and either $\enm{\cal{E}} _2 \cong \enm{\cal{L}} _2\oplus \enm{\cal{R}} _2$ with $\deg (\enm{\cal{L}} _2)\ge \deg (\enm{\cal{R}} _2)$, or $g=1$ and $\enm{\cal{E}} _2$ is a non-zero extension of the line bundle $\enm{\cal{L}} _2$ by itself; in the latter case we put $\enm{\cal{R}} _2:= \enm{\cal{L}} _2$. If $\enm{\cal{E}} _2$ is indecomposable, then it has a unique line subbundle isomorphic to $\enm{\cal{L}} _2$ and so $\Phi _2(\enm{\cal{L}} _2)\subseteq \enm{\cal{L}} _2$.
We have $$\nu _\alpha (\enm{\cal{A}} ) =\alpha /2 + (\deg (\enm{\cal{L}} _1)+\deg (\enm{\cal{L}} _2)+\deg (\enm{\cal{R}} _1)+\deg (\enm{\cal{R}} _2))/4.$$ The map $\Phi _i: \enm{\cal{E}} _i\rightarrow \enm{\cal{E}} _i$ induces a map $\Phi _{i|\enm{\cal{L}} _i}: \enm{\cal{L}} _i\rightarrow \enm{\cal{L}} _i$, which is the multiplication by a constant, say $c_i$. Then we get two triples $\enm{\cal{A}} _{c_i}$ for $i=1,2$. Since $\enm{\cal{A}} _{c_2}$ is a triple, we get $f(\enm{\cal{L}} _1)\subseteq \enm{\cal{L}} _2$ and so we may define a subtriple $\enm{\cal{A}}_1:= ((\enm{\cal{L}} _1,\Phi _{1|\enm{\cal{L}} _1}),(\enm{\cal{L}} _2,\Phi _{2|\enm{\cal{L}} _2}),f_{|\enm{\cal{L}} _1})$ with \begin{align*} \nu _\alpha (\enm{\cal{A}} _1)& = \alpha /2 +(\deg (\enm{\cal{L}}_1) +\deg (\enm{\cal{L}} _2))/2 \\ &> \alpha /2 + (\deg (\enm{\cal{L}} _1)+\deg (\enm{\cal{L}} _2)+\deg (\enm{\cal{R}} _1)+\deg (\enm{\cal{R}} _2))/4=\nu_\alpha (\enm{\cal{A}}), \end{align*} which implies that $\enm{\cal{A}}$ is not $\nu _\alpha$-semistable. \quad (b) From now on we assume that $\enm{\cal{E}} _1$ and $\enm{\cal{E}} _2$ are semistable. We also assume that $f$ is non-zero, so that $\mu (\enm{\cal{E}} _1)\le \mu (\enm{\cal{E}} _2)$. Since $r_1=r_2=2$, we look at a proper subtriple $\enm{\cal{B}} = ((\enm{\cal{F}} _1,\Phi _{1| \enm{\cal{F}} _1}), (\enm{\cal{F}} _2,\Phi _{2| \enm{\cal{F}} _2}),f_{|\enm{\cal{F}} _1})$ with maximal $\nu _\alpha (\enm{\cal{B}} )$. In particular, each $\enm{\cal{F}} _i$ is saturated in $\enm{\cal{E}} _i$, i.e. either $\enm{\cal{F}} _i =\enm{\cal{E}} _i$ or $\enm{\cal{F}} _i=0$ or $\enm{\cal{E}} _i/\enm{\cal{F}} _i$ is a line bundle. Set $s_i:= \op{rank} (\enm{\cal{F}} _i)$; then we have $1\le s_1+s_2\le 3$. If $s_2=2$, i.e.
$\enm{\cal{F}} _2 = \enm{\cal{E}} _2$, then we have $\nu _\alpha (\enm{\cal{B}} )< \nu _\alpha (\enm{\cal{A}})$ for all $\alpha >0$, because $\enm{\cal{E}} _1$ is semistable and $\mu (\enm{\cal{E}} _1) \le \mu (\enm{\cal{E}} _2)$. If $s_2=0$, then $f$ is not injective. If $s_1=0$, we only need to exclude the case $\alpha \le \alpha _m$, using the subtriple $((0,0),(\enm{\cal{E}} _2,\Phi _2),0)$. In the case $s_1=s_2=1$ we know that $\nu _\alpha (\enm{\cal{B}} )\le \nu _\alpha (\enm{\cal{A}})$ and that equality holds if and only if both $\enm{\cal{E}} _1$ and $\enm{\cal{E}} _2$ are strictly semistable and each $\enm{\cal{F}} _i$ is a line subbundle of $\enm{\cal{E}} _i$ with maximal degree. Note that the injectivity of $f$ implies $s_1\le s_2$. Thus, when $f$ is injective, it is sufficient to test the case $s_1=s_2=1$, and we have the following. \begin{itemize} \item If $\alpha >\alpha_m$ and at least one of the $\enm{\cal{E}}_i$'s is stable, then $\enm{\cal{A}}$ is $\nu_\alpha$-stable. \item If $\alpha \ge \alpha_m$ and $\enm{\cal{E}} _1$ and $\enm{\cal{E}} _2$ are semistable, then $\enm{\cal{A}}$ is $\nu _\alpha$-semistable. \item If $\alpha > \alpha_m$ and $\enm{\cal{E}} _1$ and $\enm{\cal{E}} _2$ are strictly semistable, then $\enm{\cal{A}}$ is strictly $\nu _\alpha$-semistable if and only if there are maximal degree line bundles $\enm{\cal{L}} _i\subset \enm{\cal{E}} _i$ such that $\Phi _i(\enm{\cal{L}} _i)\subseteq \enm{\cal{L}} _i$ for each $i$ and $f(\enm{\cal{L}} _1)\subseteq \enm{\cal{L}} _2$. \end{itemize} \begin{lemma}\label{uu22} For a general map $f: \enm{\cal{E}}_1 \rightarrow \enm{\cal{E}}_2$ with $\enm{\cal{E}}_i:=\enm{\cal{O}}_{\enm{\mathbb{P}}^1}(a_i)^{\oplus 2}$ and $a_2\ge a_1+2$, there exists no subsheaf $\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_1) \subset \enm{\cal{E}} _1$ such that the saturation of its image in $\enm{\cal{E}} _2$ is a line bundle isomorphic to $\enm{\cal{O}} _{\enm{\mathbb{P}}^1}(a_2)$.
\end{lemma} \begin{proof} Up to a twist we may assume that $a_1=0$. If we fix homogeneous coordinates $x_0,x_1$ on $\enm{\mathbb{P}}^1$, then the map $f$ is induced by two forms $u(x_0,x_1)$ and $v(x_0,x_1)$ of degree $a_2$. Then it is sufficient to prove that there is no point $(a,b)\in \enm{\mathbb{C}}^2\setminus \{(0,0)\}$ for which $au(x_0,x_1)+bv(x_0,x_1)$ is either identically zero or has a zero of multiplicity $a_2$. This is true for general $u(x_0,x_1)$ and $v(x_0,x_1)$, e.g. we may take $u(x_0,x_1) = x_0^{a_2}+x_0x_1^{a_2-1}$ and $v(x_0,x_1) = x_0x_1^{a_2-1} +x_1^{a_2}$. \end{proof} The next lemma is an analogue of Lemma \ref{uu22} for elliptic curves. \begin{lemma}\label{uu23} Let $X$ be an elliptic curve with two line bundles $\enm{\cal{L}}_i$ for $i=1,2$ such that $\deg (\enm{\cal{L}}_2)\ge \deg (\enm{\cal{L}}_1)+4$. For a general map $f: \enm{\cal{L}}_1^{\oplus 2} \rightarrow \enm{\cal{L}}_2^{\oplus 2}$, there is no subsheaf $\enm{\cal{L}}_1 \subset \enm{\cal{L}}_1^{\oplus 2}$ such that the saturation of its image in $\enm{\cal{L}}_2^{\oplus 2}$ is isomorphic to $\enm{\cal{L}}_2$. \end{lemma} \begin{proof} It is sufficient to find an injective map $h: \enm{\cal{L}}_1^{\oplus 2} \rightarrow \enm{\cal{L}}_2^{\oplus 2}$ for which no subsheaf $\enm{\cal{L}}_1\subset \enm{\cal{L}}_1^{\oplus 2}$ has image under $h$ whose saturation in $\enm{\cal{L}}_2^{\oplus 2}$ is isomorphic to $\enm{\cal{L}}_2$. Up to a twist we may assume $\enm{\cal{L}} _1\cong \enm{\cal{O}} _X$ and so $l:=\deg (\enm{\cal{L}}_2)\ge 4$. First assume $l=4$ and write $\enm{\cal{L}}_2 \cong \enm{\cal{M}} ^{\otimes 2}$ with $\deg (\enm{\cal{M}} )=2$. Let $\varphi : X\rightarrow \enm{\mathbb{P}}^1$ be the morphism of degree two induced by $|\enm{\cal{M}} |$; then we may set $h:=\varphi^\ast (h_1)$ for a general $h_1: \enm{\cal{O}} _{\enm{\mathbb{P}}^1}^{\oplus 2} \rightarrow \enm{\cal{O}} _{\enm{\mathbb{P}}^1}(2)^{\oplus 2}$, applying Lemma \ref{uu22} to $h_1$.
Now assume $l\ge 5$ and fix an effective divisor $D\subset X$ of degree $l-4$. Then we may take as $h$ the composition of a general map $\enm{\cal{O}}_X^{\oplus 2} \rightarrow \enm{\cal{L}}_2(-D)^{\oplus 2}$ with the map $\enm{\cal{L}}_2(-D)^{\oplus 2} \rightarrow \enm{\cal{L}}_2^{\oplus 2}$ obtained by twisting with $\enm{\cal{O}}_X(D)$. \end{proof} \begin{remark}\label{tre} Let $\enm{\cal{D}}$ be an arrangement with $T_X(-\log \enm{\cal{D}} )\cong \enm{\cal{O}} _X$ on $X$ of arbitrary dimension. For two line bundles $\enm{\cal{L}}_1$ and $\enm{\cal{L}}_2$ with $\enm{\cal{L}}_2\otimes \enm{\cal{L}}_1^\vee$ globally generated, consider the triple $\enm{\cal{B}} = ((\enm{\cal{E}} _1,0),(\enm{\cal{E}} _2,0),f)$ with $\enm{\cal{E}} _i\cong \enm{\cal{L}}_i ^{\oplus r}$ and $f$ injective. As in (\ref{fam}) we may generate other triples $\enm{\cal{B}} _c$ for each $c\in \enm{\mathbb{C}}$, but often there are no other $\enm{\cal{D}}$-logarithmic co-Higgs triples with $\enm{\cal{B}}$ as the associated triple of vector bundles. For example, assume that $X$ is a smooth projective curve of genus $g\in \{0,1\}$. For a fixed co-Higgs field $\Phi _1: \enm{\cal{E}} _1\rightarrow \enm{\cal{E}} _1$ with associated $(r\times r)$-matrix $A_1$ of constants, we look for $f$ and $\Phi _2: \enm{\cal{E}} _2\rightarrow \enm{\cal{E}} _2$ with associated matrix $A_2$ such that $\enm{\cal{A}} = ((\enm{\cal{E}} _1,\Phi _1),(\enm{\cal{E}} _2,\Phi _2),f)$ is a $\enm{\cal{D}}$-logarithmic co-Higgs triple. Let $M$ be the $(r\times r)$-matrix with coefficients in $H^0(\enm{\cal{L}}_2 \otimes \enm{\cal{L}}_1^\vee )$ associated to $f$. Then we need $A_2$ and $M$ such that $A_2M = MA_1$. Assume that $A_1$ has a unique Jordan block. If $\enm{\cal{L}}_1 \cong \enm{\cal{L}}_2$ and $M$ is general, then we get a $\enm{\cal{D}}$-logarithmic co-Higgs triple if and only if $A_2$ is a polynomial in $A_1$. If $\enm{\cal{L}}_1 \not\cong \enm{\cal{L}}_2$ and $f$ is general, then there is no such $A_2$.
We check this for the case $r=2$; the general case can be shown similarly. Without loss of generality we may assume that the unique eigenvalue of $A_1$ is zero. Assume the existence of $f$ and $\Phi_2$ with associated $M$ and $A_2$. We have $\ker (\Phi_1)\cong \enm{\cal{L}}_1$ and $f(\ker (\Phi_1))\subseteq \ker (\Phi_2)$. Thus we get that $f(\enm{\cal{L}}_1)$ has $\ker (\Phi_2) \cong \enm{\cal{L}}_2$ as its saturation, contradicting Lemmas \ref{uu22} and \ref{uu23} for a general $f$. \end{remark} \begin{remark} In the same way as in \cite{ACG} one can define $\enm{\cal{D}}$-logarithmic co-Higgs holomorphic chains with parameters, but if the maps are general, then very few logarithmic co-Higgs fields $\Phi _i$ are allowed. \end{remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
arXiv
Login | Create Sort by: Relevance Date Users's collections Twitter Group by: Day Week Month Year All time Based on the idea and the provided source code of Andrej Karpathy (arxiv-sanity) Calibration of the Logarithmic-Periodic Dipole Antenna (LPDA) Radio Stations at the Pierre Auger Observatory using an Octocopter (1702.01392) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. 
Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. 
Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 13, 2018 astro-ph.IM, astro-ph.HE An in-situ calibration of a logarithmic periodic dipole antenna with a frequency coverage of 30 MHz to 80 MHz is performed. Such antennas are part of a radio station system used for detection of cosmic ray induced air showers at the Engineering Radio Array of the Pierre Auger Observatory, the so-called Auger Engineering Radio Array (AERA). 
The directional and frequency characteristics of the broadband antenna are investigated using a remotely piloted aircraft (RPA) carrying a small transmitting antenna. The antenna sensitivity is described by the vector effective length relating the measured voltage with the electric-field components perpendicular to the incoming signal direction. The horizontal and meridional components are determined with an overall uncertainty of 7.4^{+0.9}_{-0.3} % and 10.3^{+2.8}_{-1.7} % respectively. The measurement is used to correct a simulated response of the frequency and directional response of the antenna. In addition, the influence of the ground conductivity and permittivity on the antenna response is simulated. Both have a negligible influence given the ground conditions measured at the detector site. The overall uncertainties of the vector effective length components result in an uncertainty of 8.8^{+2.1}_{-1.3} % in the square root of the energy fluence for incoming signal directions with zenith angles smaller than 60{\deg}. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory (1612.07155) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. 
Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. 
Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. 
Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 26, 2018 astro-ph.HE We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above $5 \cdot 10^{18}$ eV, i.e.~the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results. Discovery of a bright microlensing event with planetary features towards the Taurus region: a super Earth planet (1802.06659) A.A. Nucita, D. Licchelli, F. De Paolis, G. Ingrosso, F. Strafella, N. Katysheva, S. Shugarov Feb. 20, 2018 astro-ph.EP The transient event labeled as TCP J05074264+2447555 recently discovered towards the Taurus region was quickly recognized to be an ongoing microlensing event on a source located at distance of only $700-800$ pc from Earth. Here, we show that observations with high sampling rate close to the time of maximum magnification revealed features that imply the presence of a binary lens system with very low mass ratio components. We present a complete description of the binary lens system which hosts an Earth-like planet with most likely mass of $9.2\pm 6.6$ M$_{\oplus}$. 
Furthermore, the estimated source location and detailed Monte Carlo simulations allowed us to classify the event as due to the closest lens system, located at a distance of $\simeq 380$ pc and with a mass of $\simeq 0.25$ M$_{\odot}$. Indication of anisotropy in arrival directions of ultra-high-energy cosmic rays through comparison to the flux pattern of extragalactic gamma-ray sources (1801.06160)
Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. 
Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 6, 2018 astro-ph.CO, astro-ph.HE A new analysis of the dataset from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80 deg recorded before 2017 April 30. 
Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters, the fraction of events correlating with astrophysical objects and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0 sigma, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7-3.2 sigma significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed. Inferences on Mass Composition and Tests of Hadronic Interactions from 0.3 to 100 EeV using the water-Cherenkov Detectors of the Pierre Auger Observatory (1710.07249) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. 
Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. 
Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. 
Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 19, 2017 astro-ph.HE We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air-showers. Using the risetimes of the recorded signals we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than that currently available from fluorescence measurements and extends the covered energy range by half a decade. The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass.
We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, reducing significantly the statistical uncertainty associated with the inferences about mass composition. Search for High-energy Neutrinos from Binary Neutron Star Merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory (1710.05839) ANTARES, IceCube, Pierre Auger, LIGO Scientific, Virgo Collaborations: A. Albert, M. Andre, M. Anghinolfi, M. Ardid, J.-J. Aubert, J. Aublin, T. Avgitas, B. Baret, J. Barrios-Marti, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Brânzaş, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J. Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. Creusot, A. F. Diaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D. Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsasser, A. Enzenhofer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, P. Gay, V. Giordano, H. Glotin, T. Gregoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernandez-Rey, J. Hossl, J. Hofestadt, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kiessling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lefèvre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martinez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Păvălaş, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sanchez-Losa, M. Saldaña, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schussler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tonnis, B. Vallage, V. Van Elewyck, F. Versari, D.
Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zúñiga, M. G. Aartsen, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, I. Al Samarai, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G. Anton, C. Arguelles, J. Auffenberg, S. Axani, H. Bagherpour, X. Bai, J. P. Barron, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, S. BenZvi, D. Berley, E. Bernardini, D. Z. Besson, G. Binder, D. Bindig, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, E. Bourbeau, J. Bourbeau, F. Bradascio, J. Braun, L. Brayeur, M. Brenzke, H.-P. Bretz, S. Bron, J. Brostean-Kaiser, A. Burgman, T. Carver, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G. H. Collin, J. M. Conrad, D. F. Cowen, R. Cross, M. Day, J. P. A. M. de Andre, C. De Clercq, J. J. DeLaunay, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J. C. Diaz-Velez, V. di Lorenzo, H. Dujmovic, J. P. Dumm, M. Dunkman, E. Dvorak, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, P. A. Evenson, S. Fahey, A. R. Fazely, J. Felde, K. Filimonov, C. Finley, S. Flis, A. Franckowiak, E. Friedman, T. Fuchs, T. K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, T. Glauch, T. Glusenkamp, A. Goldschmidt, J. G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Hallgren, F. Halzen, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, B. Hokanson-Fasig, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, M. Hunnefeld, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, B. J. P. Jones, P. Kalaczynski, W. Kang, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J. L. Kelley, A. Kheirandish, J. Kim, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S. R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, L. Kopke, C. Kopper, S. Kopper, J. P. Koschinsky, D. J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G.
Kruckl, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, A. Kyriacou, M. Labare, J. L. Lanfranchi, M. J. Larson, F. Lauber, M. Lesiak-Bzdak, M. Leuermann, Q. R. Liu, L. Lu, J. Lunemann, W. Luszczak, J. Madsen, G. Maggi, K. B. M. Mahn, S. Mancina, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, T. Menne, G. Merino, T. Meures, S. Miarecki, J. Micallef, G. Momente, T. Montaruli, R. W. Moore, M. Moulai, R. Nahnhauer, P. Nakarmi, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, P. Peiffer, J. A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, M. Plum, D. Pranav, P. B. Price, G. T. Przybylski, C. Raab, L. Radel, M. Rameez, K. Rawlins, I. C. Rea, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D. Rysewyk, T. Salzer, S. E. Sanchez Herrera, A. Sandrock, J. Sandroos, M. Santander, S. Sarkar, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, A. Schneider, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, J. Soedingrekso, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, J. Stachurska, M. Stamatikos, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stossl, N. L. Strotjohann, T. Stuttard, G. W. Sullivan, M. Sutherland, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, C. F. Tung, A. Turcati, C. F. Turley, B. Ty, E. Unger, M. Usner, J. Vandenbroucke, W. Van Driessche, N. van Eijndhoven, S. Vanheule, J. van Santen, M. Vehring, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, F. D. Wandler, N. Wandkowsky, A. Waza, C. Weaver, M. J. Weiss, C. Wendt, J. Werthebach, S. Westerhoff, B. J. Whelan, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, L. Wills, M. Wolf, J. Wood, T. R. Wood, E. Woolsey, K. Woschnagg, D. 
L. Xu, X. W. Xu, Y. Xu, J. P. Yanez, G. Yodh, S. Yoshida, T. Yuan, M. Zoll, A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, J.M. Albury, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin (deceased, August 2016), S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, J.A. Day, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, M. Gottowik, A.F. Grillo (deceased, February 2017), T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, V.M. Harvey, A. Haungs, T.
Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, K.H. Kampert, B. Keilhauer, N. Kemmerich, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M.
Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J.F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello, B. P. Abbott, R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, V. B. Adya, C. Affeldt, M. Afrough, B. Agarwal, M. Agathos, K. Agatsuma, N. Aggarwal, O. D. Aguiar, L. Aiello, A. Ain, P. Ajith, B. Allen, G. Allen, A. Allocca, P. A. Altin, A. Amato, A. Ananyeva, S. B. Anderson, W. G. Anderson, S. V. Angelova, S. Antier, S. Appert, K. Arai, M. C. Araya, J. S. Areeda, N. Arnaud, K. G. Arun, S. Ascenzi, G. Ashton, M. Ast, S. M. Aston, P. Astone, D. V. Atallah, P. Aufmuth, C. Aulbert, K. AultONeal, C. Austin, A. Avila-Alvarez, S. Babak, P. Bacon, M. K. M. Bader, S. Bae, P. T. Baker, F. Baldaccini, G. Ballardin, S. W. Ballmer, S. Banagiri, J. C. Barayoga, S. E. Barclay, B. C. Barish, D. Barker, K.
Barkett, F. Barone, B. Barr, L. Barsotti, M. Barsuglia, D. Barta, J. Bartlett, I. Bartos, R. Bassiri, A. Basti, J. C. Batch, M. Bawaj, J. C. Bayley, M. Bazzan, B. Becsy, C. Beer, M. Bejger, I. Belahcene, A. S. Bell, B. K. Berger, G. Bergmann, J. J. Bero, C. P. L. Berry, D. Bersanetti, A. Bertolini, J. Betzwieser, S. Bhagwat, R. Bhandare, I. A. Bilenko, G. Billingsley, C. R. Billman, J. Birch, R. Birney, O. Birnholtz, S. Biscans, S. Biscoveanu, A. Bisht, M. Bitossi, C. Biwer, M. A. Bizouard, J. K. Blackburn, J. Blackman, C. D. Blair, D. G. Blair, R. M. Blair, S. Bloemen, O. Bock, N. Bode, M. Boer, G. Bogaert, A. Bohe, F. Bondu, E. Bonilla, R. Bonnand, B. A. Boom, R. Bork, V. Boschi, S. Bose, K. Bossie, Y. Bouffanais, A. Bozzi, C. Bradaschia, P. R. Brady, M. Branchesi, J. E. Brau, T. Briant, A. Brillet, M. Brinkmann, V. Brisson, P. Brockill, J. E. Broida, A. F. Brooks, D. A. Brown, D. D. Brown, S. Brunett, C. C. Buchanan, A. Buikema, T. Bulik, H. J. Bulten, A. Buonanno, D. Buskulic, C. Buy, R. L. Byer, M. Cabero, L. Cadonati, G. Cagnoli, C. Cahillane, J. Calderon Bustillo, T. A. Callister, E. Calloni, J. B. Camp, M. Canepa, P. Canizares, K. C. Cannon, H. Cao, J. Cao, C. D. Capano, E. Capocasa, F. Carbognani, S. Caride, M. F. Carney, J. Casanueva Diaz, C. Casentini, S. Caudill, M. Cavaglià, F. Cavalier, R. Cavalieri, G. Cella, C. B. Cepeda, P. Cerda-Duran, G. Cerretani, E. Cesarini, S. J. Chamberlin, M. Chan, S. Chao, P. Charlton, E. Chase, E. Chassande-Mottin, D. Chatterjee, B. D. Cheeseboro, H. Y. Chen, X. Chen, Y. Chen, H.-P. Cheng, H. Chia, A. Chincarini, A. Chiummo, T. Chmiel, H. S. Cho, M. Cho, J. H. Chow, N. Christensen, Q. Chu, A. J. K. Chua, S. Chua, A. K. W. Chung, S. Chung, G. Ciani, R. Ciolfi, C. E. Cirelli, A. Cirone, F. Clara, J. A. Clark, P. Clearwater, F. Cleva, C. Cocchieri, E. Coccia, P.-F. Cohadon, D. Cohen, A. Colla, C. G. Collette, L. R. Cominsky, M. Constancio Jr., L. Conti, S. J. Cooper, P. Corban, T. R. Corbitt, I. Cordero-Carrion, K. R.
Corley, N. Cornish, A. Corsi, S. Cortese, C. A. Costa, M. W. Coughlin, S. B. Coughlin, J.-P. Coulon, S. T. Countryman, P. Couvares, P. B. Covas, E. E. Cowan, D. M. Coward, M. J. Cowart, D. C. Coyne, R. Coyne, J. D. E. Creighton, T. D. Creighton, J. Cripe, S. G. Crowder, T. J. Cullen, A. Cumming, L. Cunningham, E. Cuoco, T. Dal Canton, G. Dalya, S. L. Danilishin, S. D'Antonio, K. Danzmann, A. Dasgupta, C. F. Da Silva Costa, V. Dattilo, I. Dave, M. Davier, D. Davis, E. J. Daw, B. Day, S. De, D. DeBra, J. Degallaix, M. De Laurentis, S. Deleglise, W. Del Pozzo, N. Demos, T. Denker, T. Dent, R. De Pietri, V. Dergachev, R. De Rosa, R. T. DeRosa, C. De Rossi, R. DeSalvo, O. de Varona, J. Devenson, S. Dhurandhar, M. C. Diaz, L. Di Fiore, M. Di Giovanni, T. Di Girolamo, A. Di Lieto, S. Di Pace, I. Di Palma, F. Di Renzo, Z. Doctor, V. Dolique, F. Donovan, K. L. Dooley, S. Doravari, I. Dorrington, R. Douglas, M. Dovale Alvarez, T. P. Downes, M. Drago, C. Dreissigacker, J. C. Driggers, Z. Du, M. Ducrot, P. Dupej, S. E. Dwyer, T. B. Edo, M. C. Edwards, A. Effler, H.-B. Eggenstein, P. Ehrens, J. Eichholz, S. S. Eikenberry, R. A. Eisenstein, R. C. Essick, D. Estevez, Z. B. Etienne, T. Etzel, M. Evans, T. M. Evans, M. Factourovich, V. Fafone, H. Fair, S. Fairhurst, X. Fan, S. Farinon, B. Farr, W. M. Farr, E. J. Fauchon-Jones, M. Favata, M. Fays, C. Fee, H. Fehrmann, J. Feicht, M. M. Fejer, A. Fernandez-Galiana, I. Ferrante, E. C. Ferreira, F. Ferrini, F. Fidecaro, D. Finstad, I. Fiori, D. Fiorucci, M. Fishbach, R. P. Fisher, M. Fitz-Axen, R. Flaminio, M. Fletcher, H. Fong, J. A. Font, P. W. F. Forsyth, S. S. Forsyth, J.-D. Fournier, S. Frasca, F. Frasconi, Z. Frei, A. Freise, R. Frey, V. Frey, E. M. Fries, P. Fritschel, V. V. Frolov, P. Fulda, M. Fyffe, H. Gabbard, B. U. Gadre, S. M. Gaebel, J. R. Gair, L. Gammaitoni, M. R. Ganija, S. G. Gaonkar, C. Garcia-Quiros, F. Garufi, B. Gateley, S. Gaudio, G. Gaur, V. Gayathri, N. Gehrels (deceased, February 2017), G.
Gemme, E. Genin, A. Gennai, D. George, J. George, L. Gergely, V. Germain, S. Ghonge, Abhirup Ghosh, Archisman Ghosh, S. Ghosh, J. A. Giaime, K. D. Giardina, A. Giazotto, K. Gill, L. Glover, E. Goetz, R. Goetz, S. Gomes, B. Goncharov, G. Gonzalez, J. M. Gonzalez Castro, A. Gopakumar, M. L. Gorodetsky, S. E. Gossan, M. Gosselin, R. Gouaty, A. Grado, C. Graef, M. Granata, A. Grant, S. Gras, C. Gray, G. Greco, A. C. Green, E. M. Gretarsson, P. Groot, H. Grote, S. Grunewald, P. Gruning, G. M. Guidi, X. Guo, A. Gupta, M. K. Gupta, K. E. Gushwa, E. K. Gustafson, R. Gustafson, O. Halim, B. R. Hall, E. D. Hall, E. Z. Hamilton, G. Hammond, M. Haney, M. M. Hanke, J. Hanks, C. Hanna, M. D. Hannam, O. A. Hannuksela, J. Hanson, T. Hardwick, J. Harms, G. M. Harry, I. W. Harry, M. J. Hart, C.-J. Haster, K. Haughian, J. Healy, A. Heidmann, M. C. Heintze, H. Heitmann, P. Hello, G. Hemming, M. Hendry, I. S. Heng, J. Hennig, A. W. Heptonstall, M. Heurs, S. Hild, T. Hinderer, D. Hoak, D. Hofman, K. Holt, D. E. Holz, P. Hopkins, C. Horst, J. Hough, E. A. Houston, E. J. Howell, A. Hreibi, Y. M. Hu, E. A. Huerta, D. Huet, B. Hughey, S. Husa, S. H. Huttner, T. Huynh-Dinh, N. Indik, R. Inta, G. Intini, H. N. Isa, J.-M. Isac, M. Isi, B. R. Iyer, K. Izumi, T. Jacqmin, K. Jani, P. Jaranowski, S. Jawahar, F. Jimenez-Forteza, W. W. Johnson, D. I. Jones, R. Jones, R. J. G. Jonker, L. Ju, J. Junker, C. V. Kalaghatgi, V. Kalogera, B. Kamai, S. Kandhasamy, G. Kang, J. B. Kanner, S. J. Kapadia, S. Karki, K. S. Karvinen, M. Kasprzack, M. Katolik, E. Katsavounidis, W. Katzman, S. Kaufer, K. Kawabe, F. Kefelian, D. Keitel, A. J. Kemball, R. Kennedy, C. Kent, J. S. Key, F. Y. Khalili, I. Khan, S. Khan, Z. Khan, E. A. Khazanov, N. Kijbunchoo, Chunglee Kim, J. C. Kim, K. Kim, W. Kim, W. S. Kim, Y.-M. Kim, S. J. Kimbrell, E. J. King, P. J. King, M. Kinley-Hanlon, R. Kirchhoff, J. S. Kissel, L. Kleybolte, S. Klimenko, T. D. Knowles, P. Koch, S. M. Koehlenbeck, S. Koley, V. Kondrashov, A. Kontos, M. 
Korobko, W. Z. Korth, I. Kowalska, D. B. Kozak, C. Kramer, V. Kringel, B. Krishnan, A. Krolak, G. Kuehn, P. Kumar, R. Kumar, S. Kumar, L. Kuo, A. Kutynia, S. Kwang, B. D. Lackey, K. H. Lai, M. Landry, R. N. Lang, J. Lange, B. Lantz, R. K. Lanza, A. Lartaux-Vollard, P. D. Lasky, M. Laxen, A. Lazzarini, C. Lazzaro, P. Leaci, S. Leavey, C. H. Lee, H. K. Lee, H. M. Lee, H. W. Lee, K. Lee, J. Lehmann, A. Lenon, M. Leonardi, N. Leroy, N. Letendre, Y. Levin, T. G. F. Li, S. D. Linker, T. B. Littenberg, J. Liu, R. K. L. Lo, N. A. Lockerbie, L. T. London, J. E. Lord, M. Lorenzini, V. Loriette, M. Lormand, G. Losurdo, J. D. Lough, C. O. Lousto, G. Lovelace, H. Luck, D. Lumaca, A. P. Lundgren, R. Lynch, Y. Ma, R. Macas, S. Macfoy, B. Machenschalk, M. MacInnis, D. M. Macleod, I. Magaña Hernandez, F. Magaña-Sandoval, L. Magaña Zertuche, R. M. Magee, E. Majorana, I. Maksimovic, N. Man, V. Mandic, V. Mangano, G. L. Mansell, M. Manske, M. Mantovani, F. Marchesoni, F. Marion, S. Marka, Z. Marka, C. Markakis, A. S. Markosyan, A. Markowitz, E. Maros, A. Marquina, F. Martelli, L. Martellini, I. W. Martin, R. M. Martin, D. V. Martynov, K. Mason, E. Massera, A. Masserot, T. J. Massinger, M. Masso-Reid, S. Mastrogiovanni, A. Matas, F. Matichard, L. Matone, N. Mavalvala, N. Mazumder, R. McCarthy, D. E. McClelland, S. McCormick, L. McCuller, S. C. McGuire, G. McIntyre, J. McIver, D. J. McManus, L. McNeill, T. McRae, S. T. McWilliams, D. Meacher, G. D. Meadors, M. Mehmet, J. Meidam, E. Mejuto-Villa, A. Melatos, G. Mendell, R. A. Mercer, E. L. Merilh, M. Merzougui, S. Meshkov, C. Messenger, C. Messick, R. Metzdorff, P. M. Meyers, H. Miao, C. Michel, H. Middleton, E. E. Mikhailov, L. Milano, A. L. Miller, B. B. Miller, J. Miller, M. Millhouse, M. C. Milovich-Goff, O. Minazzoli, Y. Minenkov, J. Ming, C. Mishra, S. Mitra, V. P. Mitrofanov, G. Mitselmakher, R. Mittleman, D. Moffa, A. Moggi, K. Mogushi, M. Mohan, S. R. P. Mohapatra, M. Montani, C. J. Moore, D. Moraru, G. Moreno, S. R.
Morriss, B. Mours, C. M. Mow-Lowry, G. Mueller, A. W. Muir, Arunava Mukherjee, D. Mukherjee, S. Mukherjee, N. Mukund, A. Mullavey, J. Munch, E. A. Muñiz, M. Muratore, P. G. Murray, K. Napier, I. Nardecchia, L. Naticchioni, R. K. Nayak, J. Neilson, G. Nelemans, T. J. N. Nelson, M. Nery, A. Neunzert, L. Nevin, J. M. Newport, G. Newton (deceased, December 2016), K. K. Y. Ng, T. T. Nguyen, D. Nichols, A. B. Nielsen, S. Nissanke, A. Nitz, A. Noack, F. Nocera, D. Nolting, C. North, L. K. Nuttall, J. Oberling, G. D. O'Dea, G. H. Ogin, J. J. Oh, S. H. Oh, F. Ohme, M. A. Okada, M. Oliver, P. Oppermann, Richard J. Oram, B. O'Reilly, R. Ormiston, L. F. Ortega, R. O'Shaughnessy, S. Ossokine, D. J. Ottaway, H. Overmier, B. J. Owen, A. E. Pace, J. Page, M. A. Page, A. Pai, S. A. Pai, J. R. Palamos, O. Palashov, C. Palomba, A. Pal-Singh, Howard Pan, Huang-Wei Pan, B. Pang, P. T. H. Pang, C. Pankow, F. Pannarale, B. C. Pant, F. Paoletti, A. Paoli, M. A. Papa, A. Parida, W. Parker, D. Pascucci, A. Pasqualetti, R. Passaquieti, D. Passuello, M. Patil, B. Patricelli, B. L. Pearlstone, M. Pedraza, R. Pedurand, L. Pekowsky, A. Pele, S. Penn, C. J. Perez, A. Perreca, L. M. Perri, H. P. Pfeiffer, M. Phelps, O. J. Piccinni, M. Pichot, F. Piergiovanni, V. Pierro, G. Pillant, L. Pinard, I. M. Pinto, M. Pirello, M. Pitkin, M. Poe, R. Poggiani, P. Popolizio, E. K. Porter, A. Post, J. Powell, J. Prasad, J. W. W. Pratt, G. Pratten, V. Predoi, T. Prestegard, M. Prijatelj, M. Principe, S. Privitera, G. A. Prodi, L. G. Prokhorov, O. Puncken, M. Punturo, P. Puppo, M. Purrer, H. Qi, V. Quetschke, E. A. Quintero, R. Quitzow-James, F. J. Raab, D. S. Rabeling, H. Radkins, P. Raffai, S. Raja, C. Rajan, B. Rajbhandari, M. Rakhmanov, K. E. Ramirez, A. Ramos-Buades, P. Rapagnani, V. Raymond, M. Razzano, J. Read, T. Regimbau, L. Rei, S. Reid, D. H. Reitze, W. Ren, S. D. Reyes, F. Ricci, P. M. Ricker, S. Rieger, K. Riles, M. Rizzo, N. A. Robertson, R. Robie, F. Robinet, A. Rocchi, L. Rolland, J. 
G. Rollins, V. J. Roma, R. Romano, C. L. Romel, J. H. Romie, D. Rosinska, M. P. Ross, S. Rowan, A. Rudiger, P. Ruggi, G. Rutins, K. Ryan, S. Sachdev, T. Sadecki, L. Sadeghian, M. Sakellariadou, L. Salconi, M. Saleem, F. Salemi, A. Samajdar, L. Sammut, L. M. Sampson, E. J. Sanchez, L. E. Sanchez, N. Sanchis-Gual, V. Sandberg, J. R. Sanders, B. Sassolas, B. S. Sathyaprakash, P. R. Saulson, O. Sauter, R. L. Savage, A. Sawadsky, P. Schale, M. Scheel, J. Scheuer, J. Schmidt, P. Schmidt, R. Schnabel, R. M. S. Schofield, A. Schonbeck, E. Schreiber, D. Schuette, B. W. Schulte, B. F. Schutz, S. G. Schwalbe, J. Scott, S. M. Scott, E. Seidel, D. Sellers, A. S. Sengupta, D. Sentenac, V. Sequino, A. Sergeev, D. A. Shaddock, T. J. Shaffer, A. A. Shah, M. S. Shahriar, M. B. Shaner, L. Shao, B. Shapiro, P. Shawhan, A. Sheperd, D. H. Shoemaker, D. M. Shoemaker, K. Siellez, X. Siemens, M. Sieniawska, D. Sigg, A. D. Silva, L. P. Singer, A. Singh, A. Singhal, A. M. Sintes, B. J. J. Slagmolen, B. Smith, J. R. Smith, R. J. E. Smith, S. Somala, E. J. Son, J. A. Sonnenberg, B. Sorazu, F. Sorrentino, T. Souradeep, A. P. Spencer, A. K. Srivastava, K. Staats, A. Staley, M. Steinke, J. Steinlechner, S. Steinlechner, D. Steinmeyer, S. P. Stevenson, R. Stone, D. J. Stops, K. A. Strain, G. Stratta, S. E. Strigin, A. Strunk, R. Sturani, A. L. Stuver, T. Z. Summerscales, L. Sun, S. Sunil, J. Suresh, P. J. Sutton, B. L. Swinkels, M. J. Szczepanczyk, M. Tacca, S. C. Tait, C. Talbot, D. Talukder, D. B. Tanner, M. Tapai, A. Taracchini, J. D. Tasson, J. A. Taylor, R. Taylor, S. V. Tewari, T. Theeg, F. Thies, E. G. Thomas, M. Thomas, P. Thomas, K. A. Thorne, E. Thrane, S. Tiwari, V. Tiwari, K. V. Tokmakov, K. Toland, M. Tonelli, Z. Tornasi, A. Torres-Forne, C. I. Torrie, D. Toyra, F. Travasso, G. Traylor, J. Trinastic, M. C. Tringali, L. Trozzo, K. W. Tsang, M. Tse, R. Tso, L. Tsukada, D. Tsuna, D. Tuyenbayev, K. Ueno, D. Ugolini, C. S. Unnikrishnan, A. L. Urban, S. A. Usman, H. Vahlbruch, G. 
Vajente, G. Valdes, N. van Bakel, M. van Beuzekom, J. F. J. van den Brand, C. Van Den Broeck, D. C. Vander-Hyde, L. van der Schaaf, J. V. van Heijningen, A. A. van Veggel, M. Vardaro, V. Varma, S. Vass, M. Vasuth, A. Vecchio, G. Vedovato, J. Veitch, P. J. Veitch, K. Venkateswara, G. Venugopalan, D. Verkindt, F. Vetrano, A. Vicere, A. D. Viets, S. Vinciguerra, D. J. Vine, J.-Y. Vinet, S. Vitale, T. Vo, H. Vocca, C. Vorvick, S. P. Vyatchanin, A. R. Wade, L. E. Wade, M. Wade, R. Walet, M. Walker, L. Wallace, S. Walsh, G. Wang, H. Wang, J. Z. Wang, W. H. Wang, Y. F. Wang, R. L. Ward, J. Warner, M. Was, J. Watchi, B. Weaver, L.-W. Wei, M. Weinert, A. J. Weinstein, R. Weiss, L. Wen, E. K. Wessel, P. Wessels, J. Westerweck, T. Westphal, K. Wette, J. T. Whelan, B. F. Whiting, C. Whittle, D. Wilken, D. Williams, R. D. Williams, A. R. Williamson, J. L. Willis, B. Willke, M. H. Wimmer, W. Winkler, C. C. Wipf, H. Wittel, G. Woan, J. Woehler, J. Wofford, K. W. K. Wong, J. Worden, J. L. Wright, D. S. Wu, D. M. Wysocki, S. Xiao, H. Yamamoto, C. C. Yancey, L. Yang, M. J. Yap, M. Yazback, Hang Yu, Haocun Yu, M. Yvert, A. Zadrożny, M. Zanolin, T. Zelenova, J.-P. Zendri, M. Zevin, L. Zhang, M. Zhang, T. Zhang, Y.-H. Zhang, C. Zhao, M. Zhou, Z. Zhou, S. J. Zhu, X. J. Zhu, M. E. Zucker, J. Zweizig The Advanced LIGO and Advanced Virgo observatories recently discovered gravitational waves from a binary neutron star inspiral. A short gamma-ray burst (GRB) that followed the merger of this binary was also recorded by the Fermi Gamma-ray Burst Monitor (Fermi-GBM), and the Anticoincidence Shield for the Spectrometer for the International Gamma-Ray Astrophysics Laboratory (INTEGRAL), indicating particle acceleration by the source. The precise location of the event was determined by optical detections of emission following the merger. We searched for high-energy neutrinos from the merger in the GeV--EeV energy range using the ANTARES, IceCube, and Pierre Auger Observatories. 
No neutrinos directionally coincident with the source were detected within $\pm500$ s around the merger time. Additionally, no MeV neutrino burst signal was detected coincident with the merger. We further carried out an extended search in the direction of the source for high-energy neutrinos within the 14-day period following the merger, but found no evidence of emission. We used these results to probe dissipation mechanisms in relativistic outflows driven by the binary neutron star merger. The non-detection is consistent with model predictions of short GRBs observed at a large off-axis angle. Muon Counting using Silicon Photomultipliers in the AMIGA detector of the Pierre Auger Observatory (1703.06193) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, C. Di Giulio, A. 
Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. 
Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollant, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. 
Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 4, 2017 physics.ins-det, astro-ph.IM AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory designed to extend its energy range of detection and to directly measure the muon content of the cosmic ray primary particle showers. The array will be formed by an infill of surface water-Cherenkov detectors associated with buried scintillation counters employed for muon counting. Each counter is composed of three scintillation modules, with a 10 m$^2$ detection area per module. In this paper, a new generation of detectors, replacing the current multi-pixel photomultiplier tube (PMT) with silicon photosensors (a.k.a. SiPMs), is proposed. The selection of the new device and its front-end electronics is explained. A method to calibrate the counting system that ensures the performance of the detector is detailed. This method has the advantage of being able to be carried out in a remote place such as the one where the detectors are deployed. High efficiency results, i.e. 98% efficiency for the highest tested overvoltage, combined with a low probability of accidental counting ($\sim$2%), show a promising performance for this new system. Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory (1709.01537) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. 
Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. 
Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. 
Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small. 
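The relative spectral efficiencies described above are curves sampled every 5 nm from 280 nm to 440 nm; to turn such a curve into a single physics-level number, it is folded with an emission spectrum. The sketch below shows only that weighting step, using made-up placeholder curves (neither the measured telescope response nor the real fluorescence spectrum is reproduced here), so the procedure, not the values, is what matters.

```python
# Sketch: spectrum-weighted effective efficiency from a relative
# spectral response curve. Both curves below are illustrative
# placeholders, NOT the measured Auger telescope response.

def effective_response(wavelengths, response, spectrum):
    """Spectrum-weighted mean of the relative response.

    All three lists must share the same (evenly spaced) wavelength grid,
    here 5 nm steps as in the scan described in the abstract.
    """
    assert len(wavelengths) == len(response) == len(spectrum)
    num = sum(r * s for r, s in zip(response, spectrum))
    den = sum(spectrum)
    return num / den

# 5 nm steps from 280 nm to 440 nm, matching the scan range
wavelengths = list(range(280, 441, 5))
# placeholder relative response: near 1.0 with a mild roll-off at the edges
response = [1.0 - 0.002 * abs(w - 360) / 5 for w in wavelengths]
# placeholder emission spectrum: a single broad bump centred near 357 nm
spectrum = [max(0.0, 1.0 - ((w - 357) / 60.0) ** 2) for w in wavelengths]

eff = effective_response(wavelengths, response, spectrum)
```

Because the spectrum weight peaks where the placeholder response is near 1.0, the effective value lands just below unity; a percent-level shift in the response curve propagates to a comparably small shift in this number, which is why the abstract's calibration changes translate into "generally small" changes in physics measurables.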
The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017) (1708.06592) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. 
Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. 
Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 2, 2017 astro-ph.CO, astro-ph.IM, astro-ph.HE Contributions of the Pierre Auger Collaboration to the 35th International Cosmic Ray Conference (ICRC 2017), 12-20 July 2017, Bexco, Busan, Korea. Observation of a Large-scale Anisotropy in the Arrival Directions of Cosmic Rays above $8 \times 10^{18}$ eV (1709.07321) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. 
Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. 
Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. 
Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Sept. 21, 2017 astro-ph.HE Cosmic rays are atomic nuclei arriving from outer space that reach the highest energies observed in nature. Clues to their origin come from studying the distribution of their arrival directions. Using $3 \times 10^4$ cosmic rays above $8 \times 10^{18}$ electron volts, recorded with the Pierre Auger Observatory from a total exposure of 76,800 square kilometers steradian year, we report an anisotropy in the arrival directions. The anisotropy, detected at more than the 5.2$\sigma$ level of significance, can be described by a dipole with an amplitude of $6.5_{-0.9}^{+1.3}$% towards right ascension $\alpha_{d} = 100 \pm 10$ degrees and declination $\delta_{d} = -24_{-13}^{+12}$ degrees. That direction indicates an extragalactic origin for these ultra-high energy particles. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory (1611.06812) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. 
Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. 
Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. 
Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 20, 2017 astro-ph.HE We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to $80^\circ$ and energies in excess of 4 EeV ($4 \times 10^{18}$ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. The two analyses are complementary: the angular power spectrum achieves better performance in identifying large-scale patterns, while the needlet wavelet analysis, with the parameters used in this work, is more efficient in detecting smaller-scale anisotropies and can potentially provide directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is observed, while no other deviation from isotropy is found for moments beyond the dipole.
The corresponding $p$-values, obtained after accounting for searches blindly performed at several angular scales, are $1.3 \times 10^{-5}$ in the case of the angular power spectrum, and $2.5 \times 10^{-3}$ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they extend the previous works through thorough scans of the angular scales. The Hi-GAL compact source catalogue. I. The physical properties of the clumps in the inner Galaxy ($-71.0^{\circ}< \ell < 67.0^{\circ}$) (1706.01046) D. Elia, S. Molinari, E. Schisano, M. Pestalozzi, S. Pezzuto, M. Merello, A. Noriega-Crespo, T. J. T. Moore, D. Russeil, J. C. Mottram, R. Paladini, F. Strafella, M. Benedettini, J. P. Bernard, A. Di Giorgio, D. J. Eden, Y. Fukui, R. Plume, J. Bally, P. G. Martin, S. E. Ragan, S. E. Jaffa, F. Motte, L. Olmi, N. Schneider, L. Testi, F. Wyrowski, A. Zavagno, L. Calzoletti, F. Faustini, P. Natoli, P. Palmerim, F. Piacentini, L. Piazzo, G. L. Pilbratt, D. Polychroni, A. Baldeschi, M. T. Beltrán, N. Billot, L. Cambrésy, R. Cesaroni, P. García-Lario, M. G. Hoare, M. Huang, G. Joncas, S. J. Liu, B. M. T. Maiolo, K. A. Marsh, Y. Maruccia, P. Mège, N. Peretto, K. L. J. Rygl, P. Schilke, M. A. Thompson, A. Traficante, G. Umana, M. Veneziani, D. Ward-Thompson, A. P. Whitworth, H. Arab, M. Bandieramonte, U. Becciani, M. Brescia, C. Buemi, F. Bufano, R. Butora, S. Cavuoti, A. Costa, E. Fiorellino, A. Hajnal, T. Hayakawa, P. Kacsuk, P. Leto, G. Li Causi, N. Marchili, S. Martinavarro-Armengol, A. Mercurio, M. Molinaro, G. Riccio, H. Sano, E. Sciacca, K. Tachihara, K. Torii, C. Trigilio, F. Vitello, H. Yamamoto June 4, 2017 astro-ph.GA Hi-GAL is a large-scale survey of the Galactic plane, performed with Herschel in five infrared continuum bands between 70 and 500 $\mu$m.
We present a band-merged catalogue of spatially matched sources and their properties derived from fits to the spectral energy distributions (SEDs) and heliocentric distances, based on the photometric catalogues presented in Molinari et al. (2016a), covering the portion of the Galactic plane $-71.0^{\circ}< \ell < 67.0^{\circ}$. The band-merged catalogue contains 100922 sources with a regular SED, 24584 of which show a 70 $\mu$m counterpart and are thus considered proto-stellar, while the remainder are considered starless. Thanks to this huge number of sources, we are able to carry out a preliminary analysis of early stages of star formation, identifying the conditions that characterise different evolutionary phases on a statistically significant basis. We calculate surface densities to investigate the gravitational stability of clumps and their potential to form massive stars. We also explore evolutionary status metrics such as the dust temperature, luminosity and bolometric temperature, finding that these are higher in proto-stellar sources compared to pre-stellar ones. The surface density of sources follows an increasing trend as they evolve from pre-stellar to proto-stellar, but then it is found to decrease again in the majority of the most evolved clumps. Finally, we study the physical parameters of sources with respect to Galactic longitude and the association with spiral arms, finding only minor or no differences between the average evolutionary status of sources in the fourth and first Galactic quadrants, or between "on-arm" and "inter-arm" positions. Search for photons with energies above 10$^{18}$ eV using the hybrid detector of the Pierre Auger Observatory (1612.01517) April 7, 2017 hep-ex, astro-ph.HE A search for ultra-high energy photons with energies above 1 EeV is performed using nine years of data collected by the Pierre Auger Observatory in hybrid operation mode.
An unprecedented separation power between photon and hadron primaries is achieved by combining measurements of the longitudinal air-shower development with the particle content at ground measured by the fluorescence and surface detectors, respectively. Only three photon candidates at energies of 1-2 EeV are found, which is compatible with the expected hadron-induced background. Upper limits on the integral flux of ultra-high energy photons of 0.027, 0.009, 0.008, 0.008 and 0.007 km$^{-2}$ sr$^{-1}$ yr$^{-1}$ are derived at 95% C.L. for energy thresholds of 1, 2, 3, 5 and 10 EeV. These limits bound the fractions of photons in the all-particle integral flux below 0.1%, 0.15%, 0.33%, 0.85% and 2.7%. For the first time the photon fraction at EeV energies is constrained at the sub-percent level. The improved limits are below the flux of diffuse photons predicted by some astrophysical scenarios for cosmogenic photon production. The new results rule out the early top-down models, in which ultra-high energy cosmic rays are produced by, e.g., the decay of super-massive particles, and challenge the most recent super-heavy dark matter models. A targeted search for point sources of EeV photons with the Pierre Auger Observatory (1612.04155) March 21, 2017 hep-ph, astro-ph.HE Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined $p$-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources.
These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region. Ultrahigh-energy neutrino follow-up of Gravitational Wave events GW150914 and GW151226 with the Pierre Auger Observatory (1608.07378) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. 
Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. 
Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Jan. 13, 2017 astro-ph.HE On September 14, 2015 the Advanced LIGO detectors observed their first gravitational-wave (GW) transient GW150914. This was followed by a second GW event observed on December 26, 2015. Both events were inferred to have arisen from the merger of black holes in binary systems. 
Such a system may emit neutrinos if there are magnetic fields and disk debris remaining from the formation of the two black holes. With the surface detector array of the Pierre Auger Observatory we can search for neutrinos with energy above 100 PeV from point-like sources across the sky with equatorial declination from about $-65^{\circ}$ to $+60^{\circ}$, and in particular from a fraction of the 90% confidence-level (CL) inferred positions in the sky of GW150914 and GW151226. A targeted search for highly-inclined extensive air showers, produced either by interactions of downward-going neutrinos of all flavors in the atmosphere or by the decays of tau leptons originating from tau-neutrino interactions in the Earth's crust (Earth-skimming neutrinos), yielded no candidates in the Auger data collected within $\pm 500$ s around or 1 day after the coordinated universal time (UTC) of GW150914 and GW151226, as well as in the same search periods relative to the UTC time of the GW candidate event LVT151012. From the non-observation we constrain the amount of energy radiated in ultrahigh-energy neutrinos from such remarkable events. An analysis of star formation with Herschel in the Hi-GAL Survey. II. The tips of the Galactic bar (1612.04995) M. Veneziani, E. Schisano, D. Elia, A. Noriega-Crespo, S. Carey, A. Di Giorgio, Y. Fukui, B.M.T. Maiolo, Y. Maruccia, A. Mizuno, N. Mizuno, S. Molinari, J. C. Mottram, T. J. T. Moore, T. Onishi, R. Paladini, D. Paradis, M. Pestalozzi, S. Pezzuto, F. Piacentini, R. Plume, D. Russeil, F. Strafella Dec. 15, 2016 astro-ph.GA, astro-ph.SR We present the physical and evolutionary properties of prestellar and protostellar clumps in the Herschel Infrared GALactic plane survey (Hi-GAL) in two large areas centered in the Galactic plane and covering the tips of the long Galactic bar at the intersection with the spiral arms. The areas fall in the longitude ranges $19^{\circ} < \ell < 33^{\circ}$ and $340^{\circ} < \ell < 350^{\circ}$, while latitude is $-1^{\circ} < b < 1^{\circ}$.
Newly formed high mass stars and prestellar objects are identified and their properties derived and compared. A study is also presented on five giant molecular complexes at the further edge of the bar. The star-formation rate was estimated from the quantity of proto-stars expected to form during the collapse of massive turbulent clumps into star clusters. This new method was developed by applying a Monte Carlo procedure to an evolutionary model of turbulent cores and takes into account the wide multiplicity of sources produced during the collapse. The star-formation rate density values at the tips are $(1.2 \pm 0.3) \times 10^{-3}$ and $(1.5 \pm 0.3) \times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ in the first and fourth quadrant, respectively. The same values estimated on the entire field of view, that is including the tips of the bar and background and foreground regions, are $(0.9 \pm 0.2) \times 10^{-3}$ and $(0.8 \pm 0.2) \times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$. The conversion efficiency is approximately 0.8% in the first quadrant and 0.5% in the fourth quadrant, and does not show a significant difference in the proximity of the bar. The star forming regions identified through CO contours at the further edge of the bar show star-formation rate densities larger than the surrounding regions but their conversion efficiencies are comparable. Our results suggest that the star-formation activity at the bar is due to a large amount of dust and molecular material rather than being due to a triggering process. Evidence for a mixed mass composition at the `ankle' in the cosmic-ray spectrum (1609.08567) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L.
Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. 
Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollant, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. 
Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, P. Younk, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Nov. 22, 2016 astro-ph.HE We report a first measurement for ultra-high energy cosmic rays of the correlation between the depth of shower maximum and the signal in the water Cherenkov stations of air-showers registered simultaneously by the fluorescence and the surface detectors of the Pierre Auger Observatory. Such a correlation measurement is a unique feature of a hybrid air-shower observatory with sensitivity to both the electromagnetic and muonic components. It allows an accurate determination of the spread of primary masses in the cosmic-ray flux. Until now, constraints on the spread of primary masses have been dominated by systematic uncertainties. The present correlation measurement is not affected by systematics in the measurement of the depth of shower maximum or the signal in the water Cherenkov stations.
The analysis relies on general characteristics of air showers and is thus robust also with respect to uncertainties in hadronic event generators. The observed correlation in the energy range around the `ankle' at $\lg(E/{\rm eV})=18.5-19.0$ differs significantly from expectations for pure primary cosmic-ray compositions. A light composition made up of proton and helium only is equally inconsistent with observations. The data are explained well by a mixed composition including nuclei with mass $A > 4$. Scenarios such as the proton dip model, with almost pure compositions, are thus disfavoured as the sole explanation of the ultrahigh-energy cosmic-ray flux at Earth. Testing Hadronic Interactions at Ultrahigh Energies with Air Showers Measured by the Pierre Auger Observatory (1610.08509) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. 
de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. 
Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. A. S. Pereira, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. 
van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, D. Yelos, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 31, 2016 hep-ph, hep-ex, astro-ph.HE Ultrahigh energy cosmic ray air showers probe particle physics at energies beyond the reach of accelerators. Here we introduce a new method to test hadronic interaction models without relying on the absolute energy calibration, and apply it to events with primary energy 6-16 EeV ($E_\mathrm{CM} = 110$-$170$ TeV), whose longitudinal development and lateral distribution were simultaneously measured by the Pierre Auger Observatory. The average hadronic shower is $1.33 \pm 0.16$ ($1.61 \pm 0.21$) times larger than predicted using the leading LHC-tuned models EPOS-LHC (QGSJetII-04), with a corresponding excess of muons. Search for ultrarelativistic magnetic monopoles with the Pierre Auger Observatory (1609.04451) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L.
Cazon, R. Cester, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. 
Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. 
Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 3, 2016 astro-ph.HE We present a search for ultra-relativistic magnetic monopoles with the Pierre Auger Observatory. Such particles, possibly a relic of phase transitions in the early universe, would deposit a large amount of energy along their path through the atmosphere, comparable to that of ultrahigh-energy cosmic rays (UHECRs). The air shower profile of a magnetic monopole can be effectively distinguished by the fluorescence detector from that of standard UHECRs. No candidate was found in the data collected between 2004 and 2012, with an expected background of less than 0.1 event from UHECRs. The corresponding 90% confidence level (C.L.) upper limits on the flux of ultra-relativistic magnetic monopoles range from $10^{-19}$ (cm$^{2}$ sr s)$^{-1}$ for a Lorentz factor $\gamma=10^9$ to $2.5 \times10^{-21}$ (cm$^{2}$ sr s)$^{-1}$ for $\gamma=10^{12}$. These results - the first obtained with a UHECR detector - improve previously published limits by up to an order of magnitude. Hi-GAL, the Herschel infrared Galactic Plane Survey: photometric maps and compact source catalogues. First data release for Inner Milky Way: +68{\deg}> l > -70{\deg} (1604.05911) S. Molinari, E. Schisano, D. Elia, M. Pestalozzi, A. Traficante, S. Pezzuto, B. M. Swinyard, A. Noriega-Crespo, J. Bally, T. J. T. Moore, R. Plume, A. Zavagno, A. M. di Giorgio, S. J. Liu, G. L. Pilbratt, J. C. Mottram, D. Russeil, L. Piazzo, M. Veneziani, M. 
Benedettini, L. Calzoletti, F. Faustini, P. Natoli, F. Piacentini, M. Merello, A. Palmese, R. Del Grande, D. Polychroni, K. L. J. Rygl, G. Polenta, M. J. Barlow, J.-P. Bernard, P. G. Martin, L. Testi, B. Ali, P. Andrè, M.T. Beltrán, N. Billot, C. Brunt, S. Carey, R. Cesaroni, M. Compiègne, D. Eden, Y. Fukui, P. Garcia-Lario, M. G. Hoare, M. Huang, G. Joncas, T. L. Lim, S. D. Lord, S. Martinavarro-Armengol, F. Motte, R. Paladini, D. Paradis, N. Peretto, T. Robitaille, P. Schilke, N. Schneider, B. Schulz, B. Sibthorpe, F. Strafella, M. A. Thompson, G. Umana, D. Ward-Thompson, F. Wyrowski April 20, 2016 astro-ph.GA (Abridged) We present the first public release of high-quality data products (DR1) from Hi-GAL, the {\em Herschel} infrared Galactic Plane Survey. Hi-GAL is the keystone of a suite of continuum Galactic Plane surveys from the near-IR to the radio, and covers five wavebands at 70, 160, 250, 350 and 500 micron, encompassing the peak of the spectral energy distribution of cold dust for 8 < T < 50K. This first Hi-GAL data release covers the inner Milky Way in the longitude range 68{\deg} > l > -70{\deg} in a |b|<1{\deg} latitude strip. Photometric maps have been produced with the ROMAGAL pipeline, that optimally capitalizes on the excellent sensitivity and stability of the bolometer arrays of the {\em Herschel} PACS and SPIRE photometric cameras, to deliver images of exquisite quality and dynamical range, absolutely calibrated with {\em Planck} and {\em IRAS}, and recovering extended emission at all wavelengths and all spatial scales. The compact source catalogues have been generated with the CuTEx algorithm, specifically developed to optimize source detection and extraction in the extreme conditions of intense and spatially varying background that are found in the Galactic Plane in the thermal infrared. Hi-GAL DR1 images will be accessible via a dedicated web-based image cutout service. 
The DR1 Compact Source Catalogues are delivered as single-band photometric lists containing, in addition to source position, peak and integrated flux and source sizes, a variety of parameters useful to assess the quality and reliability of the extracted sources, caveats and hints to help this assessment are provided. Flux completeness limits in all bands are determined from extensive synthetic source experiments and depend on the specific line of sight along the Galactic Plane. Hi-GAL DR1 catalogues contain 123210, 308509, 280685, 160972 and 85460 compact sources in the five bands, respectively. Azimuthal asymmetry in the risetime of the surface detector signals of the Pierre Auger Observatory (1604.00978) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C. 
Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. 
Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. A. S. Pereira, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. 
Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello April 13, 2016 astro-ph.HE The azimuthal asymmetry in the risetime of signals in Auger surface detector stations is a source of information on shower development. The azimuthal asymmetry is due to a combination of the longitudinal evolution of the shower and geometrical effects related to the angles of incidence of the particles into the detectors. The magnitude of the effect depends upon the zenith angle and state of development of the shower and thus provides a novel observable, $(\sec \theta)_\mathrm{max}$, sensitive to the mass composition of cosmic rays above $3 \times 10^{18}$ eV. By comparing measurements with predictions from shower simulations, we find for both of our adopted models of hadronic physics (QGSJETII-04 and EPOS-LHC) an indication that the mean cosmic-ray mass increases slowly with energy, as has been inferred from other studies. However, the mass estimates are dependent on the shower model and on the range of distance from the shower core selected. Thus the method has uncovered further deficiencies in our understanding of shower modelling that must be resolved before the mass composition can be inferred from $(\sec \theta)_\mathrm{max}$. A new insight into the V1184 Tau variability (1602.01676) T. Giannini, D. Lorenzetti, A. Harutyunyan, G. Li Causi, S. Antoniucci, A. A. Arkharov, V. M. Larionov, F. Strafella Feb. 8, 2016 astro-ph.SR V1184 Tau is a young variable that has long been monitored at optical wavelengths.
Its variability has been ascribed to a sudden and repetitive increase of the circumstellar extinction (UXor-type variable), but the physical origin of such variation, although hypothesized, has not been fully supported on an observational basis. To get a new insight into the variability of V1184 Tau, we present new photometric and spectroscopic observations taken in the period 2008-2015. During these years the source has reached the same high brightness level that it had before the remarkable fading of about 5 mag undergone in 2004. The optical spectrum is the first obtained when the continuum is at its maximum level. The observations are interpreted in the framework of extinction-driven variability. We analyze light curves, optical and near-infrared colors, the SED and the optical spectrum. The emerging picture indicates that the source fading is due to an extinction increase of DeltaA_V ~ 5 mag, associated with a strong infrared excess attributable to a thermal component at T = 1000 K. From the flux of H(alpha) we derive a mass accretion rate between 10^-11 and 5 x 10^-10 M_sun yr^-1, consistent with that of classical T Tauri stars of similar mass. The source SED was fitted for both the high and low levels of brightness. A scenario consistent with the known stellar properties (such as spectral type, mass and radius) is obtained only if the distance to the source is of a few hundred parsecs, in contrast with the commonly assumed value of 1.5 kpc. Our analysis partially supports that presented by Grinin (2009), according to which the circumstellar disk undergoes periodic puffing, whose observational effects are both to shield the central star and to reveal disk wind activity. However, since the mass accretion rate remains almost constant with time, the source is likely not subject to accretion bursts.
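The magnitudes quoted in this abstract correspond to large attenuation factors: since the magnitude scale is logarithmic, an extinction increase of about 5 mag (the DeltaA_V inferred above) dims the optical flux by a factor of roughly 100. A minimal sketch of this standard conversion, using only the 5 mag value taken from the text:

```python
def extinction_to_flux_ratio(delta_mag):
    """Flux attenuation factor for a fading of delta_mag magnitudes.

    Magnitudes are logarithmic: delta_mag = 2.5 * log10(F_bright / F_faint),
    so the corresponding flux ratio is 10**(0.4 * delta_mag).
    """
    return 10 ** (0.4 * delta_mag)

# The ~5 mag fading of V1184 Tau (attributed to DeltaA_V ~ 5 mag of
# circumstellar extinction) corresponds to a factor ~100 drop in flux.
print(extinction_to_flux_ratio(5.0))  # -> 100.0
```

The same relation explains why modest extinction changes produce the dramatic light-curve dips characteristic of UXor-type variables.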
Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array (1511.09408) The IceCube Collaboration: M. G. Aartsen, K. Abraham, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, T. Anderson, I. Ansseau, M. Archinger, C. Arguelles, T. C. Arlen, J. Auffenberg, X. Bai, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, E. Beiser, P. Berghaus, D. Berley, E. Bernardini, A. Bernhard, D. Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, J. Blumenthal, D. J. Boersma, C. Bohm, M. Börner, F. Bos, D. Bose, S. Böser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, N. Buzinsky, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, D. F. Cowen, A. H. Cruz Silva, J. Daughhetee, J. C. Davis, M. Day, J. P. A. M. de André, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J. C. Díaz-Vélez, V. di Lorenzo, J. P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, S. Euler, P. A. Evenson, S. Fahey, A. R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, T. Fischer-Wasels, S. Flis, C.-C. Fösig, T. Fuchs, T. K. Gaisser, R. Gaior, J. Gallagher, L. Gerhardt, K. Ghorbani, D. Gier, L. Gladstone, M. Glagla, T. Glüsenkamp, A. Goldschmidt, G. Golup, J. G. Gonzalez, D. Góra, D. Grant, Z. Griffith, A. Groß, C. Ha, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, E. Hansen, B. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, K. Holzapfel, A. Homeier, K. Hoshina, F. Huang, M. Huber, W. Huelsnitz, P. O. Hulth, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, M. Jurkovic, A. Kappes, T. Karg, A. Karle, M. Kauer, A. Keivani, J. L. Kelley, J. Kemp, A. Kheirandish, J. Kiryluk, J. 
Kläs, S. R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, L. Köpke, C. Kopper, S. Kopper, D. J. Koskinen, M. Kowalski, K. Krings, G. Kroll, M. Kroll, G. Krückl, J. Kunnen, N. Kurahashi, T. Kuwabara, M. Labare, J. L. Lanfranchi, M. J. Larson, M. Lesiak-Bzdak, M. Leuermann, J. Leuner, L. Lu, J. Lünemann, J. Madsen, G. Maggi, K. B. M. Mahn, M. Mandelartz, R. Maruyama, K. Mase, H. S. Matis, R. Maunu, F. McNally, K. Meagher, M. Medici, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, E. Middell, L. Mohrmann, T. Montaruli, R. Morse, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. Omairat, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, L. Paul, J. A. Pepper, C. Pérez de los Heros, C. Pfendner, D. Pieloth, E. Pinat, J. Posselt, P. B. Price, G. T. Przybylski, M. Quinnan, C. Raab, L. Rädel, M. Rameez, K. Rawlins, R. Reimann, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Richter, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, L. Sabbatini, H.-G. Sander, A. Sandrock, J. Sandroos, S. Sarkar, K. Schatto, M. Schimp, T. Schmidt, S. Schoenen, S. Schöneberg, A. Schönwald, L. Schulte, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, M. Stahlberg, M. Stamatikos, T. Stanev, A. Stasik, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stößl, R. Ström, N. L. Strotjohann, G. W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, S. Vallecorsa, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, C. Wendt, S. Westerhoff, B. J. Whelan, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, H. Wissing, M. Wolf, T. R. Wood, K. Woschnagg, D. L. Xu, X. W. Xu, Y. Xu, J. P. 
Yanez, G. Yodh, S. Yoshida, M. Zoll. The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J.C. Chirinos Diaz, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Gamez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. 
Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. 
Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello. The Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E. Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. 
Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel This paper presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. 
Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored. First X-ray detection of the young variable V1180 Cas (1509.07730) S. Antoniucci, A. A. Nucita, T. Giannini, D. Lorenzetti, B. Stelzer, D. Gerardi, S. Delle Rose, A. Di Paola, M. Giordano, L. Manni, F. Strafella Sept. 25, 2015 astro-ph.SR V1180 Cas is a young variable that has shown strong photometric fluctuations (Delta_I ~ 6 mag) in the recent past, which have been attributed to events of enhanced accretion. The source has entered a new high-brightness state in Sept. 2013, which we have previously analyzed through optical and near-IR spectroscopy. To investigate the current active phase of V1180 Cas, we performed observations with the Chandra satellite to study the X-ray emission from the object and its connection to accretion episodes. Chandra observations were performed in early Aug. 2014. Complementary JHK photometry and J-band spectra were taken at our Campo Imperatore facility to relate the X-ray and near-IR emission from the target. We observe a peak of X-ray emission at the nominal position of V1180 Cas. This signal corresponds to an X-ray luminosity L_X(0.5-7 keV) in the range 0.8-2.2e30 erg/s. Based on the relatively short duration of the dim states in the light curve and on stellar luminosity considerations, we explored the possibility that the brightness minima of V1180 Cas are driven by extinction variations.
From the analysis of the spectral energy distribution of the high state we infer a stellar luminosity of 0.8-0.9 Lsun and find that the derived L_X is comparable to the average X-ray luminosities of T Tauri stars. Moreover, the X-ray luminosity is lower than the X-ray emission levels of 5e30 - 1e31 erg/s detected at outbursts in similar low-mass objects. Our analysis suggests that at least part of the photometric fluctuations of V1180 Cas might be extinction effects rather than the result of accretion excess emission. However, as the source displays spectral features indicative of active accretion, we speculate that its photometric variations might be the result of a combination of accretion-induced and extinction-driven effects, as suggested for other young variables, such as V1184 Tau and V2492 Cyg. The YSO Population in the Vela-D Molecular Cloud (1411.2758) F. Strafella, D. Lorenzetti, T. Giannini, D. Elia, Y. Maruccia, B. Maiolo, F. Massi, L. Olmi, S. Molinari, S. Pezzuto Nov. 11, 2014 astro-ph.SR We investigate the young stellar population in the Vela Molecular Ridge, Cloud-D (VMR-D), a star-forming (SF) region observed by both the Spitzer/NASA and Herschel/ESA space telescopes. The point-source, band-merged, Spitzer-IRAC catalog complemented with MIPS photometry previously obtained is used to search for candidate young stellar objects (YSOs), also including sources detected in fewer than four IRAC bands. Bona fide YSOs are selected by using appropriate color-color and color-magnitude criteria aimed to exclude both Galactic and extragalactic contaminants. The derived star formation rate and efficiency are compared with the same quantities characterizing other SF clouds. Additional photometric data, spanning from the near-IR to the submillimeter, are used to evaluate both bolometric luminosity and temperature for 33 YSOs located in a region of the cloud observed by both Spitzer and Herschel.
The luminosity-temperature diagram suggests that some of these sources are representative of Class 0 objects with bolometric temperatures below 70 K and luminosities of the order of the solar luminosity. Far-IR observations from the Herschel/Hi-GAL key project for a survey of the Galactic plane are also used to obtain a band-merged photometric catalog of Herschel sources aimed at an independent search for protostars. We find 122 Herschel cores located on the molecular cloud, 30 of which are protostellar and 92 starless. The global protostellar luminosity function is obtained by merging the Spitzer and Herschel protostars. Considering that 10 protostars are found in both the Spitzer and Herschel lists, it follows that in the investigated region we find 53 protostars and that the Spitzer-selected protostars account for approximately two-thirds of the total.
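A quick back-of-the-envelope check (illustrative only, not from the paper): combining the quoted 0.5-7 keV X-ray range with the inferred stellar luminosity of about 0.85 Lsun gives log10(L_X/L_bol) between roughly -3.6 and -3.2, i.e. an activity level typical of T Tauri stars, consistent with the comparison drawn above.

```python
import math

L_SUN = 3.828e33              # nominal solar luminosity, erg/s
L_bol = 0.85 * L_SUN          # inferred stellar luminosity, ~0.8-0.9 Lsun
for L_x in (0.8e30, 2.2e30):  # quoted 0.5-7 keV X-ray luminosity range, erg/s
    print(f"L_X = {L_x:.1e} erg/s -> log10(L_X/L_bol) = {math.log10(L_x / L_bol):.2f}")
# prints values near -3.6 and -3.2
```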
Does the Milky Way move through space? Does our galaxy move through space? Or does it stay in a single location? If it does move, what causes it to move? galaxy universe milky-way Check out the articles about Galaxy Clusters and the Great Attractor, they may be of interest to you. – RomaH Sep 12 '17 at 20:41 Only if you want to use something besides our galaxy's origin as your frame of reference. You could structure any maths to accurately describe our galaxy as the ONLY stationary one. It just isn't very modest to do so. – DeepDeadpool Sep 12 '17 at 21:07 "Through space" is not a thing. Space doesn't have locations. Motion is always relative to something else, like another galaxy, or the Cosmic Microwave Background, or whatever. You've already received some answers for the relative motion, see below. – Florin Andrei Sep 12 '17 at 23:30 I have a term that you might be interested in to Google: Laniakea – PlasmaHH Sep 13 '17 at 7:22 In Soviet Union, space moves through Milky Way! – el.pescado Sep 13 '17 at 7:48 Yes it does. I'm very fascinated with space, although I don't have a degree or any formal education, I'm still very in love with everything about it and want to learn constantly. Good man Mike. One thing I ask myself is if our galaxy moves through space? It does. When we look at the Cosmic Microwave Background Radiation we see a "dipole anisotropy" due to the motion of the Earth relative to it: Image courtesy of William H.
Kinney's Cosmology, inflation, and the physics of nothing See Wikipedia for more: "From the CMB data it is seen that the Local Group (the galaxy group that includes the Milky Way galaxy) appears to be moving at 627±22 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB) in the direction of galactic longitude l = 276°±3°, b = 30°±3°.[82][83] This motion results in an anisotropy of the data (CMB appearing slightly warmer in the direction of movement than in the opposite direction).[84]" 627 km/s is quite fast. See this article, which says it's 1.3 million miles an hour. The speed of light is just under 300,000 km/s or 670 million miles per hour, so the Milky Way is moving through the Universe at circa 0.2% of the speed of light. Also see the CMBR physics answer by ghoppe which talks about the CMBR reference frame, which is in effect the reference frame of the universe. Or does it stay in a single location? If it does move, what causes it to move? I'm afraid I don't know why it's moving. Perhaps it's because the Universe is full of things moving in fairly random directions. Like a gas. Hopefully the question makes sense, if not I can elaborate. It certainly makes sense to me! Edit 13/09/2017: as Dave points out in the comments, there are other motions, including the motion of the solar system around the galaxy, which is circa 514,000 mph. (See the Wikipedia Galactic Year article.) And the motion of galaxies isn't neat and tidy either. John Duffield Awesome response! There's a ton for me to look up and read about. I really appreciate your time/explanation. – Mike Sep 12 '17 at 19:29 "Why is it moving?" It would be an incredible coincidence if it were perfectly still. There are uncountable numbers of objects in the universe exerting gravitational attractions in different directions.
They would have to exactly balance out for the galaxy not to move. – Barmar Sep 12 '17 at 20:59 @Barmar: good point. Why didn't I think to say that? Duh! – John Duffield Sep 12 '17 at 21:13 Well, at least everything continues to move (how it all started to is still up for debate I suppose ;) because stuff is still hot. When everything gets cold, absolutely nothing will move anymore.... supposedly: heat death of the universe. To put it simply as I understand it, eventually once all the atoms in the universe equalize in temperature, no work can be done anymore because there will be no potential energy left anywhere to do any work. Or something like that. – Mazura Sep 12 '17 at 23:15 @Barmar: From "A Brief History of Time": Newton realized that, according to his theory of gravity, the stars should attract each other, so it seemed they could not remain essentially motionless. Would they not all fall together at some point? and We now know it is impossible to have an infinite static model of the universe in which gravity is always attractive. – Eric Duminil Sep 13 '17 at 9:58 Galaxies move through space with velocities of the order of several hundred km per second; small velocities for small groups (~100 km/s; e.g. Carlberg et al. 2000) and large velocities for rich clusters (~1000 km/s; e.g. Girardi et al. 1993). In addition to this so-called "peculiar velocity", galaxies are also carried away from each other due to the expansion of the Universe, at a velocity proportional to the distance from each other (the "Hubble flow"). But this is not a motion through space; rather it is space itself that is expanding (and hence the velocity may exceed the speed of light for sufficiently large distances). One may define a "global reference frame" with respect to which velocities are measured.
Any reference is valid, but it makes sense to use the frame in which all galaxies are, on average, at rest (when the Hubble flow is subtracted)$^\dagger$. In this frame, the Local Group that the Milky Way is a part of moves with some $620\, \mathrm{km \, s}^{-1}$ (as noted in John Duffield's answer above), whereas the center of the Milky Way has a velocity$^\ddagger$ of $\mathbf{565 \pm 5 \, km \, s^{-1}}$ (Planck Collaboration et al. 2018). What causes this movement of galaxies? Galaxies that are not too far from each other (i.e. closer together) "feel" each other's mutual gravitational forces. A galaxy in a group or cluster moves around in the common gravitational field, but for galaxies that are farther away, the Hubble flow carries them away from each other too fast for them to attract each other. This movement can be traced back to the tiny quantum mechanical fluctuations in the primordial soup of particles during cosmic inflation, i.e. less than $\sim10^{-32}\,\mathrm{s}$ after the Big Bang. As time went by, ever-so-slight overdensities grew in amplitude, until they collapsed to form the structure we see in the Universe today. During this collapse, matter grew turbulent, whirling clumps around that eventually became the galaxies that orbit each other. $^\dagger$Formally, one uses the frame in which the cosmic microwave background is isotropic. $^\ddagger$Taking into account our motion around the Galactic center, our Sun (currently) moves through space at $369.82\pm0.11\,\mathrm{km\, s^{-1}}$. pela I'm in awe of the intelligence of you guys. I had to read your response a few times to barely grasp what you are saying. But nonetheless, I'm going to be researching/learning about everything you are talking about. (which I'm sure will raise more questions, lol) – Mike Sep 12 '17 at 19:33 @Mike It's less a question of intelligence and more one of experience and study. Most things in life are like that.
– jpmc26 Sep 13 '17 at 0:20 I second that, @jpmc26! – pela Sep 13 '17 at 6:31 Simply put: the important thing about the motion of our Milky Way in the Universe is the gravitational attraction among galaxies and clusters of galaxies that causes movement. At a small scale we are attracted to the Andromeda galaxy and are on a "collision" course with it (of course we won't collide, because galaxies are way too diffuse, so to speak). The most important gravitational attractions that cause our Galaxy to move even faster are the Great Attractor (a large collection of galaxies) and the much more awe-inspiring Shapley Supercluster of galaxies. It looks like these are the major clusters of galaxies to which our Milky Way Galaxy is gravitationally attracted, and they cause our Galaxy to move at around 600 km/sec with reference to the microwave background radiation. This background radiation or CMB (cosmic microwave background) is the absolute reference against which to measure velocities in our Universe. The CMB is the remnant of and the proof of the Big Bang event. And as was mentioned before, space expands in our universe, producing the effect of galaxies (that are far away from each other) appearing to run away from each other very fast. It holds only for galaxies that are very far away in space. That's because space expands like a rubber band, with the most distant points moving away from each other fastest. In other words, galaxies that are extremely far away from each other, well, they run away from each other almost with the speed of light or even faster! No, they are not moving faster than the speed of light – it's just that space in the universe expands very fast between two very distant points, even faster than the speed of light! It is just like the further apart the two points are on the elastic rubber band, the faster they will move apart when we extend the band.
The difference here is that our three-dimensional space acts like a rubber band, if it is indeed a three-dimensional space we live in after all (there's a lot to be discovered in the future, I guess)! peterh - Reinstate Monica I really love this response, the analogies really helped... thank you! – Mike Sep 13 '17 at 3:52 You are welcome! Glad it was helpful. – user18491 Sep 13 '17 at 17:26 NO, if you don't want to consider it to be doing so. Everything in the universe only "moves" when considered from a different frame of reference. From the observational frame of reference of the Milky Way galaxy, every other object in the universe is moving, and the Milky Way is stationary. When I walk to work, what I am actually doing is dragging the surface of the Earth, and everything on it, towards me with my feet, until my office arrives. Nicholas Shanks man, no wonder I need coffee when i get to the office.. – user230910 Sep 13 '17 at 11:42 What about someone else walking to work from the opposite direction? Don't the two of you cancel each other out? – Barmar Sep 13 '17 at 19:17 That's certainly how it feels today. Maybe I need to reset my reference frame. Or find a better office! – Chappo Hasn't Forgotten Monica Sep 13 '17 at 21:31 @Barmar: Both people think that they're stationary, that the universe is moving towards them at 5km/h and the other person is walking towards them at 10km/h. – Eric Duminil Sep 14 '17 at 7:58 Einstein is reputed to have asked a ticket inspector, "Does Crewe stop at this train?". – Oscar Bravo Sep 14 '17 at 8:12 Monty Python does not seem very accurate in scientific issues, but this time it does its job quite well in the Galaxy Song, which also mentions the Milky Way's movement.
Just remember that you're standing on a planet that's evolving
And revolving at nine hundred miles an hour,
That's orbiting at nineteen miles a second, so it's reckoned,
A sun that is the source of all our power.
The sun and you and me and all the stars that we can see
Are moving at a million miles a day
In an outer spiral arm, at forty thousand miles an hour,
Of the galaxy we call the 'Milky Way'.
Our galaxy itself contains a hundred billion stars.
It's a hundred thousand light years side to side.
It bulges in the middle, sixteen thousand light years thick,
But out by us, it's just three thousand light years wide.
We're thirty thousand light years from galactic central point.
We go 'round every two hundred million years,
And our galaxy is only one of millions of billions
In this amazing and expanding universe.
The universe itself keeps on expanding and expanding
In all of the directions it can whizz
As fast as it can go, at the speed of light, you know,
Twelve million miles a minute, and that's the fastest speed there is.
So remember, when you're feeling very small and insecure,
How amazingly unlikely is your birth,
And pray that there's intelligent life somewhere up in space,
'Cause there's bugger all down here on Earth.
And yes, these figures are accurate (at the time of publishing): https://en.wikipedia.org/wiki/Galaxy_Song#Accuracy_of_astronomical_figures Ole Albers But, just to be a killjoy, it doesn't answer the question. – ProfRob Sep 13 '17 at 14:14 "We're thirty thousand light years from galactic central point. We go 'round every two hundred million years," is a clear "YES" IMHO for the "Milky Way is moving" topic – Ole Albers Sep 13 '17 at 14:39 Of course it isn't. It is a clear statement that the Milky Way is a rotating disc of which we are a part. Rotation and translation (moving thru space) are two entirely different things.
– ProfRob Sep 13 '17 at 15:51 True. All the motions mentioned in the song are rotations, except for the expansion of the universe. And giving a single speed for the latter is wrong. – Barmar Sep 14 '17 at 15:00
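The velocities quoted in the answers above can be sanity-checked with a short script (illustrative only; the constants are standard values, and H0 = 70 km/s/Mpc is an approximate round number, not taken from the page). The 627 km/s Local Group speed works out to about 1.4 million mph, the same order as the article's quoted 1.3 million, and about 0.2% of the speed of light.

```python
# Sanity-check the velocities quoted in the answers (illustrative only).
C_KM_S = 299_792.458      # speed of light in km/s
KM_PER_MILE = 1.609344

v_lg = 627.0              # Local Group speed w.r.t. the CMB, in km/s
mph = v_lg * 3600 / KM_PER_MILE
frac_c = v_lg / C_KM_S
print(f"{mph / 1e6:.2f} million mph, {100 * frac_c:.2f}% of c")
# about 1.40 million mph and roughly 0.21% of the speed of light

# The Hubble flow mentioned above: recession speed grows linearly
# with distance, v = H0 * d.
H0 = 70.0                 # km/s per megaparsec (approximate)
for d_mpc in (10, 100, 1000):
    print(f"{d_mpc:4d} Mpc -> {H0 * d_mpc:7.0f} km/s")
```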
\begin{document} \title{Correctness by construction \\ for probabilistic programs\thanks{We are grateful for the support of the Australian Research Council.}} \titlerunning{Correctness by construction \\ for probabilistic programs} \author{Annabelle McIver\inst{1}\and Carroll Morgan\inst{2}} \institute{University of New South Wales \& Trustworthy Systems, Data61, CSIRO \\\email{[email protected]} \and Macquarie University \email{[email protected]}} \maketitle \begin{abstract} The ``correct by construction'' paradigm is an important component of modern Formal Methods, and here we use the probabilistic Guarded-Command Language \pGCL\ to illustrate its application to \emph{probabilistic} programming. \quad\pGCL\ extends Dij\-kstra's guarded-command language \textit{GCL} with probabilistic choice, and is equipped with a correctness-preserving refinement relation $(\Ref)$ that enables compact, abstract specifications of probabilistic properties to be transformed gradually to concrete, executable code by applying mathematical insights in a systematic and layered way. \quad Characteristically for ``correctness by construction'', as far as possible the reasoning in each refinement-step layer does not depend on earlier layers, and does not affect later ones. \quad We demonstrate the technique by deriving a fair-coin implementation of any given discrete probability distribution. In the special case of simulating a fair die, our correct-by-construction algorithm turns out to be ``within spitting distance'' of Knuth and Yao's optimal solution. 
\end{abstract} \section{Testing probabilistic programs?} Edsger Dij\-kstra argued \cite[p3]{Dijkstra:aa} that the construction of \emph{correct} programs requires mathematical proof, since ``\ldots program testing can be used very effectively to show the presence of bugs but never to show their absence.'' But for programs that are constructed to exhibit some form of randomisation, regular testing can't even establish that \emph{presence}: odd program traces are almost always bound to turn up even in \emph{correctly} operating probabilistic systems. Thus evidence of quantitative errors in probabilistic systems would require many, many traces to be subjected to detailed statistical analysis --- yet even then debugging probabilistic programs remains a challenge when that evidence has been assembled. Unlike standard (non-probabilistic programs), where a failed test can often pinpoint the source of the offending error in the code, it's not easy to figure out what to change in the implementation of probabilistic programs in order to move closer towards ``correctness'' rather than further away. Without that unambiguous relationship between failed tests and the coding errors that cause them, Dij\-kstra's caution regarding proofs of programs is even more apposite. In this paper we describe such a proof method for probability: correctness-by-construction. In a sentence, to apply ``\CbC'' one constructs the program and its proof at the same time, letting the requirement that there \emph{be} a proof guide the design decisions taken while constructing the program. Like standard programs, probabilistic programs incorporate mathematical insights into algorithms, and a correctness-by-construction method should allow a program developer to refer rigorously to those insights by applying development steps that preserve ``probabilistic correctness''. Probabilistic correctness is however notoriously unintuitive. 
For example, the solution of the infamous Monty Hall problem caused such a ruckus in the mathematical community that even Paul Erd\"os questioned the correct analysis \cite{Vazsonyi:2002aa}.\, \footnote{A game show host, Monty Hall, shows a contestant three curtains, behind one of which sits a Cadillac; the other two curtains conceal goats. The contestant guesses which curtain hides the prize, and Monty then opens another that concealed a goat. The contestant is allowed to change his mind. Should he?} Yet once coded up as a program \cite[p22]{McIver:05a}, the Monty Hall problem is only four lines long! More generally though, many widely relied-upon programs in security are quite short, and yet still pose significant challenges for correctness. We describe correctness-by-construction in the context of \pGCL, a small programming language which restores demonic choice to Kozen's landmark (purely) probabilistic semantics \cite{Kozen:81,Kozen:83} while using the syntax of Dijkstra's \textit{GCL} \cite{Dijkstra:76}. Its basic principles are that correctness for programs can be described by a generalisation of Hoare logic that includes \emph{quantitative} analysis; and it has a definition of refinement that allows programs to be developed in such a way that both functional and probabilistic properties are preserved. \footnote{If the program is a mathematical object, then as Andrew Vazsonyi \cite{Vazsonyi:2002aa} pointed out: ``I'm not interested in \textit{ad hoc} solutions invented by clever people. I want a method that works for lots of problems\ldots\ One that mere mortals can use. Which is what a correctness-by-construction method should be.''}
We begin by reviewing its origins, then its treatment of probabilistic choice and demonic choice, and finally its realisation of \CbC. (This section can be skimmed on first reading: just collect \pGCL\ syntax from Figs.~\ref{f1200}--\ref{f1202}, and then skip directly to \Sec{s1736}.) As we will not be treating non-terminating programs, we can base our description here on quite simple models for sequential (non-reactive) programs. The state space is some set $S$ and, in its simplest terms, a program takes an initial state to a final state: it (its semantics) therefore has type $S\,{\rightarrow}\,S$. The three subsections that follow describe logics based on successive enrichments of this, the simplest model, and even the youngest of those logics is by now almost 25 years old: thus we will be ``reviewing'' rather than inventing. The first enrichment, \Sec{s0910}, is based on the model $S\,{\rightarrow}\,\Pow S$ that allows demonic nondeterminism,\, \footnote{Constructor $\Pow$ is ``subsets of'' and $\Dist$ is ``discrete distributions on''.} so facilitating abstraction; then in \Sec{s0911} the model $S\,{\rightarrow}\,\Dist S$ replaces demonic nondeterminism by probabilistic choice, losing abstraction (temporarily) but in its place gaining the ability to describe probabilistic outcomes; and finally in \Sec{s0912} the model $S\,{\rightarrow}\,\Pow\Dist S$ restores demonic nondeterminism, allowing programs that can abstract from precise probabilities. 
Using syntax we will make more precise in those sections, simple examples of the three increments in expressivity are {\def10em#1{~~\makebox[7em][l]{#1}} \def14em#1{~~\parbox[t]{23em}{#1}} \begin{enumerate}[(1)] \setlength\itemsep {0.5ex} \item\label{i0939-1} 10em{\PF[x:= H]} 14em{Set variable \PF[x] to \PF[H] (as in any sequential language);} \item\label{i0939-2} 10em{\PF[x\In\ \{H,T\}]} 14em{Set \PF[x]'s value demonically from the set $\{\PF[H],\PF[T]\}$;} \item\label{i0939-3} 10em{\PF[x\In\ H\,\PCNF{2}{3}\,T]} 14em{Set \PF[x]'s value from the set $\{\PF[H],\PF[T]\}$ with probability \NF{2}{3} for \PF[H] and \NF{1}{3} for \PF[T], a ``biased coin''; and} \item\label{i0939-4} 10em{\PF[x\In\ H\,\PPC{\NF{1}{3}}{\NF{1}{3}}\,T]} 14em{Set \PF[x] from the set $\{\PF[H],\PF[T]\}$ with probability \emph{at least} \NF{1}{3}\ each way, a ``capricious coin''.} \end{enumerate}} The last example of those \Itm{i0939-4} is the most general: for \Itm{i0939-3} is \PF[x\In\ H\,\PPC{\NF{2}{3}}{\NF{1}{3}}\,T]; and \Itm{i0939-2} is \PF[x\In\ H\,\PPC{0}{0}\,T]; and finally \Itm{i0939-1} is \PF[x\In\ H\,\PPC{1}{0}\,T]. \subsection{Floyd/Hoare/Dij\-kstra: pre- and postconditions: (\ref{i0939-1},\ref{i0939-2}) above} \label{s0910} We assume a typical sequential programming language with variables, expressions over those variables, assignment (of expressions to variables), sequential composition (semicolon or line break), conditionals and loops. It is more or less Dij\-kstra's \emph{guarded command language} \cite{Dijkstra:76}, and is based on the model $S\,{\rightarrow}\,\Pow S$, where $\Pow S$ is the set of all subsets of $S$. 
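The claim that example (4) is the most general can be pictured concretely. As an illustrative sketch (my own encoding, not the paper's formal semantics), identify each one-coin program with the set of probabilities it may assign to \PF[H], here an interval [lo, hi] of P(H); refinement is then containment of those sets:

```python
from fractions import Fraction as F

# A one-coin picture of the S -> PowerSet(Dist S) view (illustrative
# sketch): each of the programs (1)-(4) becomes an interval of P(H).
assign_H   = (F(1), F(1))         # (1) x := H                i.e. H (+)_{1|0} T
demonic    = (F(0), F(1))         # (2) x :in {H, T}          i.e. H (+)_{0|0} T
biased     = (F(2, 3), F(2, 3))   # (3) H (+)_{2/3} T
capricious = (F(1, 3), F(2, 3))   # (4) H with probability at least 1/3 each way

def refines(concrete, abstract):
    """An implementation refines a specification iff its set of
    possible distributions is contained in the specification's."""
    return abstract[0] <= concrete[0] and concrete[1] <= abstract[1]

assert refines(biased, capricious)        # (3) is one way to implement (4)
assert refines(capricious, demonic)       # (4) is a special case of (2)
assert refines(assign_H, demonic)         # (1) implements (2)
assert not refines(assign_H, capricious)  # but (1) does not implement (4)
```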
The \emph{weakest precondition} of program \Prog\ in such a language, with respect to a postcondition \Post\ given as a first-order formula over the program variables, is written \WPP{\Prog}{\Post} and means \begin{quote} the weakest formula (again on the program variables) that must hold \emph{before} \Prog\ executes in order to ensure that \Post\ holds \emph{after} \Prog\ executes \cite{Dijkstra:76}. \end{quote} In a typical compositional style, the \WP\ of a whole program is determined by the \WP\ of its components. \begin{figure} \caption{Floyd-style annotated flowchart} \label{f1404} \end{figure} We group Dij\-kstra, Hoare and Floyd together because the Dij\-kstra-style implication $~\Pre\Implies\WPP{\Prog}{\Post}~$ has the same meaning as the Hoare-style triple $~\{\Pre\}~\Prog~\{\Post\}~$ which in turn has the same meaning as the original Floyd-style flowchart annotation, as shown in \Fig{f1404} \cite{Floyd:67,Hoare:69}. All three mean ``If \Pre\ holds of the state before execution of \Prog, then \Post\ will hold afterwards.'' Finally, a notable --but incidental-- feature of Dij\-kstra's approach was that (demonic) nondeterminism arose naturally, as an abstraction from possible concrete implementations.\, \footnote{See \Sec{step 5} for a further discussion of this.} That is why we use $S\,{\rightarrow}\,\Pow S$ rather than $S\,{\rightarrow}\,S$ here. In later work (by others) that abstraction was made more explicit by including explicit syntax for a binary ``demonic choice'' between program fragments, a composition \Meta{\Left} \ND\ \Meta{\Right} that could behave either as the program \Left\ or as the program \Right. But that operator (\ND) was not really an extension of Dij\-kstra's work, because his (more verbose) conditional \begin{ProgEqn} IF\>\>True $\rightarrow$ \Left ~~~\Com If \PF[True] holds, then this branch may be taken.\\ \B\>\>True $\rightarrow$ \Right ~~\Com If \PF[True] holds, then also \emph{this} branch may be taken. 
\\ FI \Com (Dijkstra terminated all \PF[IF]'s with \PF[FI]'s.)~~~~~ \end{ProgEqn} was there in his original guarded-command language, introducing demonic choice naturally as an artefact of the program-design process --- and it expressed exactly the same thing. The (\ND) merely made it explicit. \subsection{Kozen: probabilistic program logic: \Itm{i0939-3} above} \label{s0911} Kozen extended Dij\-kstra-style semantics to probabilistic programs, again over a sequential programming language but now based on the model $S\,{\rightarrow}\,\Dist S$, where $\Dist S$ is the set of all discrete distributions in $S$.\, \footnote{Kozen's work did not restrict to discrete distributions; but that is all we need here.} He replaced Dij\-kstra's demonic nondeterminism (\ND) by a ``probabilistic nondeterminism'' operator (\PC{p}) between programs, understood so that \ProgIL{\Left\,\PC{p}\;\Right} means ``Execute \Left\ with probability $p$ and \Right\ with probability $1{-}p$.'' The probability $p$ is (very) often \NF{1}{2} so that \ProgIL{coin:= Heads \PCF\ coin:= Tails} means ``Flip a fair coin.'' But probability \Meta{p} can more generally be any real number, and more generally still it can even be an expression in the program variables. Kozen's corresponding extension of Floyd/Hoare/Dij\-kstra \cite{Kozen:81,Kozen:83} replaced Dij\-kstra's logical formulae with real-valued expressions (still over the program variables); we give examples below.
The ``original'' Dij\-kstra-style formulae remain as a special case where real number 1 represents \True\, and 0 represents \False; and Dij\-kstra's definitions of \WP\ simply carry through essentially as they are\ldots\ except that an extra definition is necessary, for the new construct (\PC{\Meta{p}}), where Kozen defines that \begin{align*} & \WPP{\Left\;\PC{\Meta{p}}\;\Right}{\Post} \\ \textrm{is\qquad} & \Meta{p}\cdot\WPP{\Left}{\Post} ~+~ (1{-}\Meta{p})\cdot\WPP{\Right}{\Post}\Qdot \end{align*} With this single elegant extension, it turns out that in general \WPP{\Prog}{\Post} is the \emph{expected value}, given as a (real valued) expression over the \emph{initial} state, of what \Post\ will be in the \emph{final} state, i.e.\ after \Prog\ has finished executing from that initial state. (The initial/final emphasis simply reminds us that it is the same as for Dij\-kstra: the weakest precondition is what must be true in the \emph{initial} state for the postcondition to be true in the \emph{final} state.) 
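Kozen's rule for probabilistic choice can be exercised mechanically. As a small illustrative sketch (Python, not part of the paper): for the post-expectation [c = H], the two branch wp's are the constants 1 and 0, so the pre-expectation of the program c := H (+)_p c := T comes out as p itself:

```python
from fractions import Fraction

def wp_pchoice(p, wp_left, wp_right):
    """Kozen's rule: wp(L (+)_p R)(post) = p*wp(L)(post) + (1-p)*wp(R)(post)."""
    return p * wp_left + (1 - p) * wp_right

# wp(c := H)([c = H]) = 1 and wp(c := T)([c = H]) = 0 in any initial state,
# so the pre-expectation of [c = H] for `c := H (+)_p c := T` is p itself:
p = Fraction(2, 3)
print(wp_pchoice(p, 1, 0))  # 2/3
```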
For example we have that \begin{align*} \WPP{x:= 1-y\,\PCNF{1}{3}\;x:= 3*x}{~~~\PF[x]+3} \WIDERM{is} \NF{1}{3}(1{-}\PF[y]+3) ~+~ \NF{2}{3}(3\PF[x]{+}3) \quad, \end{align*} that is the real-valued expression $3\frac{1}{3} + 2\PF[x] - \PF[y]/3$ in which both \PF[x] and \PF[y] refer to their values in the initial state.\, More impressive though is that if we introduce the convention that brackets $[-]$ convert Booleans to numbers, i.e.\ that $[\True] = 1$ and $[\False] = 0$, we have in general for \emph{Boolean}-valued \Prop\ the convenient idiom \begin{align} & \WPP{\Prog}{[\,\Prop]} \label{e1520} \\ \WIDERM{is} & \textrm{``the probability that \Prog\ establishes property \Prop''} ,\,\footnotemark \nonumber \end{align} \footnotetext{The expected value of the characteristic function $[\,\Prop\,]$ of an event \Prop\ is equal to the probability that \Prop\ itself holds.} And if --further-- it happens that the ``probabilistic'' program \Prog\ actually contains no probabilistic choices at all, then \Eqn{e1520} just above has value 1 just when \Prog\ is guaranteed to establish \Post, and is 0 otherwise: it is in that sense that the Dij\-kstra-style semantics ``carries through'' into the Kozen extension. That is, if \Prog\ contains no probabilistic choice, and \Post\ is a conventional (Boolean valued) formula, then we have \begin{align*} & \emph{Dij\-kstra style} & [\,\WPP{\Prog}{\Post}\,] \\ \textrm{is the same as\qquad} & \emph{Kozen style} & \WPP{\Prog}{[\,\Post\,]}\HangRight{.\,\footnotemark} \end{align*} \footnotetext{Note that if \Prog\ contains (\PC{\Meta{p}}) somewhere, the above does not apply: Dij\-kstra semantics has no definition for (\PC{\Meta{p}}).} The full power of the Kozen approach, however, starts to appear in examples like this one below: we flip two fair coins and ask for the probability that they show the same face afterwards. 
Using the (Dij\-kstra) weakest-precondition rule that \WPP{\Meta{Prog1};\Meta{Prog2}}{~\Post} is simply \WPP{\Meta{Prog1}}{~\WPP{\Meta{Prog2}}{\Post}},\, \footnote{This is particularly compelling when \WP\ is Curried: sequential composition \WP(\Meta{Prog1};~\Meta{Prog2}) is then the functional composition $\WP(\Meta{Prog1})\circ\WP(\Meta{Prog2})$.} we can calculate \begin{Reason} \Step{}{\WPP{{\PF c1:= H \PCF\, c1:= T;~c2:= H \PCF\, c2:= T}}{~~~[\PF[c1]=\PF[c2]]}} \Step{=}{\WPP{{\PF c1:= H \PCF\, c1:= T}}{~~~\WPP{{\PF c2:= H \PCF\, c2:= T}}{[\PF[c1]=\PF[c2]]}}} \Step{=}{\WPP{{\PF c1:= H \PCF\, c1:= T}}{~~~\NF{1}{2}[\PF[c1]=\PF[H]]+(1{-}\NF{1}{2})[\PF[c1]=\PF[T]]}} \Space[-2ex] \Step{=}{\NF{1}{2}(\NF{1}{2}[\PF[H]=\PF[H]]+\NF{1}{2}[\PF[H]=\PF[T]]) + \NF{1}{2}(\NF{1}{2}[\PF[T]=\PF[H]]+\NF{1}{2}[\PF[T]=\PF[T]]) } \Step{=}{\NF{1}{2}(\NF{1}{2}\cdot1+\NF{1}{2}\cdot0) + \NF{1}{2}(\NF{1}{2}\cdot0+\NF{1}{2}\cdot1)} \Space[-2ex] \Step{=}{\NF{1}{4}+\NF{1}{4}} \Step{=}{\NF{1}{2}\textrm{~,\quad that is that the probability that $\PF[c1]\,{=}\,\PF[c2]$ is $\NF{1}{2}$.}} \end{Reason} A nice further exercise for seeing this probabilistic \WP\ at work is to repeat the above calculation when one of the coins uses (\PC{\Meta{p}}) but (\PCF) is retained for the other, confirming that the answer is still \NF{1}{2}. \begin{figure} \caption{Syntax and \WP-semantics for ``restricted'' \pGCL} \label{f1200} \end{figure} \subsection{McIver/Morgan: pre- and post-expectations} \label{s0912} Following Kozen's probabilistic semantics at \Sec{s0911} just above (which itself turned out later to be a special case of Jones and Plotkin's probabilistic powerdomain construction \cite{Jones:89}) we restored demonic choice to the programming language and called it \pGCL\ \cite{Morgan:96d,McIver:05a}.
It contains both demonic (\ND) and probabilistic (\PC{\Meta{p}}) choices; its model is $S\,{\rightarrow}\,\Pow\Dist S$; and it is the language we will use for the correct-by-construction program development we carry out below \cite{McIver:05a}. Figures \ref{f1200}--\ref{f1202} summarise its syntax and its \WP-logic. To illustrate demonic- vs.\ probabilistic choice, we'll revisit the two-coin program from above. This time, one coin will have a probability-$p$ bias for some constant $0\,{\leq}\,p\,{\leq}\,1$ (thus acting as a fair coin just when $p$ is \NF{1}{2}). The other choice will be purely demonic. \begin{figure} \caption{Syntax and \WP-semantics for \pGCL's choice constructs} \label{f1201} \end{figure} \begin{figure} \caption{Syntax and \WP-semantics for \pGCL's choice constructs} \label{f1202} \end{figure} We start with the (two-statement) program \begin{ProgEqn} c1:= H \PC{p}\, c1:= T \\ c2:= H \;\,\ND\, c2:= T \textrm{\quad,} \end{ProgEqn} where the first statement is probabilistic and the second is demonic, and ask, as earlier, ``What is the probability that the two coins end up equal?'' We calculate \begin{Reason} \Step{}{\WPP{{\PF c1:= H \PC{p}\, c1:= T;~c2:= H {\ND}\;c2:= T}}{~~~[\PF[c1]=\PF[c2]]}} \Step{=}{\WPP{{\PF c1:= H \PC{p}\, c1:= T}}{~~~\WPP{{\PF c2:= H {\ND}\;c2:= T}}{~[\PF[c1]=\PF[c2]]}}} \Step{=}{\WPP{{\PF c1:= H \PC{p}\, c1:= T}}{~~~[\PF[c1]=\PF[H]] ~\Min~ [\PF[c1]=\PF[T]]}} \Space[-2ex] \Step{=}{p\,{\cdot}([\PF[H]=\PF[H]] ~\Min~ [\PF[H]=\PF[T]]) + (1{-}{\it p}){\cdot}([\PF[T]=\PF[H]] ~\Min~ [\PF[T]=\PF[T]])} \Step{=}{p\,{\cdot}(1 ~\Min~ 0) + (1{-}p){\cdot}(0 ~\Min~ 1)} \Step{=}{p\,{\cdot}0 + (1{-}p){\cdot}0} \Space[-2ex] \Step{=}{0\quad,} \end{Reason} to reach the conclusion that the probability of the two coins' being equal finally\ldots\ is zero. And that highlights the way demonic choice is usually treated: it's a worst-case outcome.
The ``demon'' --thought of as an agent-- always tries to make the outcome as bad as possible: here because our desired outcome is that the coins be equal, the demon always sets the coin \PF[c2] so they will differ. If we repeated the above calculation with postcondition $\PF[c1]{\neq}\PF[c2]$ instead, the result would \emph{again} be zero: if we change our minds, want the coins to differ, then the demon will change his mind too, and act to make them the same.\, \footnote{This is not a novelty: demonic choice is usually treated that way in semantics --- that's why it's called ``demonic''.} Implicit in the above treatment is that the \PF[c2] demon knows the outcome of the \PF[c1] flip --- which is reasonable because that flip has already happened by the time it's the demon's turn. Now we reverse the statements, so that the demon goes first: it must set \PF[c2] without knowing beforehand what \PF[c1] will be. The program becomes \begin{ProgEqn} c2:= H \;\,\ND\, c2:= T \\ c1:= H \,\PC{p} c1:= T \textrm{\quad,} \end{ProgEqn} and we calculate \begin{Reason} \Step{}{\WPP{{\PF c2:= H \,\ND\;c2:= T;~c1:= H \PC{\it p}\, c1:= T}}{~~~[\PF[c1]=\PF[c2]]}} \Step{=}{\WPP{{\PF c2:= H \,\ND\;c2:= T}}{~~~\WPP{{\PF c1:= H \PC{\it p}\, c1:= T}}{~[\PF[c1]=\PF[c2]]}}} \Step{=}{\WPP{{\PF c2:= H \,\ND\;c2:= T}}{~~~p\,{\cdot}[\PF[H]=\PF[c2]] + (1{-}p){\cdot}[\PF[T]=\PF[c2]]}} \Step{=}{p\,{\cdot}[\PF[H]=\PF[H]] + (1{-}p){\cdot}[\PF[T]=\PF[H]]~~\Min~~p\,{\cdot}[\PF[H]=\PF[T]] + (1{-}p){\cdot}[\PF[T]=\PF[T]]} \Step{=}{p\,{\cdot}1 + (1{-}p){\cdot}0~~\Min~~p\,{\cdot}0 + (1{-}p){\cdot}1} \Step{=}{p~\Min~(1{-}p)\Qdot} \end{Reason} Since the demon set flip \PF[c2] \emph{without} knowing what the \PF[c1]-flip would be (because it had not happened yet), the worst it can do is to choose \PF[c2] to be the value that it is known \PF[c1] is least likely to be --- which is just the result above, the lesser of $p$ and $1{-}p$. 
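Both pre-expectation calculations can be replayed mechanically; below is a small Python sketch (ours, not part of any \pGCL\ tooling) that evaluates them directly, treating probabilistic choice as a weighted average and demonic choice as a minimum taken at the point where the demon moves:

```python
from fractions import Fraction

def wp_flip_then_demon(p):
    # c1 := H (+p) c1 := T ; c2 := H [] c2 := T, post-expectation [c1 = c2].
    # The demon moves second, so it minimises separately for each c1 outcome.
    def demon(c1):
        return min(1 if c1 == c2 else 0 for c2 in ("H", "T"))
    return p * demon("H") + (1 - p) * demon("T")

def wp_demon_then_flip(p):
    # c2 := H [] c2 := T ; c1 := H (+p) c1 := T, post-expectation [c1 = c2].
    # The demon moves first, minimising over the flip's *expected* value.
    def flip(c2):
        return p * (1 if c2 == "H" else 0) + (1 - p) * (1 if c2 == "T" else 0)
    return min(flip("H"), flip("T"))

p = Fraction(3, 10)
print(wp_flip_then_demon(p))   # 0: the informed demon always defeats us
print(wp_demon_then_flip(p))   # 3/10, i.e. min(p, 1-p)
```

The two functions differ only in where the minimum is taken, which is exactly the point about what the demon knows when it moves.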
If --as before-- we change our minds and decide instead that we would like the coins to be different, then the demon adapts by choosing \PF[c2] to be the value that \PF[c1] is \emph{most} likely to be. Either way, the probability our postcondition will be achieved, the pre-expectation of its characteristic function, is the same $p\,\Min\,(1{-}p)$ --- so that only when $p\,{=}\,\FF$, i.e.\ when $p\,{=}\,(1{-}p)$, does the demon gain no advantage. \section{Probabilistic \emph{correctness by construction} in action\,\protect\footnotemark}\label{s1736} \footnotetext{The intent of this section can be understood from the syntax given in Figs.~\ref{f1200}--\ref{f1202}.} Our first example problem, conceptually, will be to achieve a binary choice of arbitrary bias using only a fair coin. With the apparatus of \Sec{s0912} however, we can immediately move from conception to precision: \begin{quote} We must write a \pGCL\ program that implements \ProgIL{\Left\,\PC{p}\;\Right},\linebreak under the constraint that the only probabilistic choice operator we are allowed to use in the final (\pGCL) program is $(\PCF)$. \end{quote} This is not a hard problem mathematically: the probabilistic calculation that solves it is elementary. Our point here is to use this simple problem to show how such solutions can be calculated within a programming-language context, while maintaining rigour (possibly machine-checkable) at every step. The final program is given at \Eqn{p0955} in \Sec{step 5}. \subsection{Step 1 --- a simplification}\label{step 1} We'll start by simplifying the problem slightly, instantiating the programs \Left\ and \Right\ to \PF[x:= 1] and \PF[x:= 0] respectively.
Our goal is thus to implement \begin{ProgEqn}[p1643] x\In\ 1\,\PC{p}\,0\Qcomma \end{ProgEqn} for arbitrary $p$, and our first step is to create two other distributions $1\,{\PC{q}}\,0$ and $1\,{\PC{r}}\,0$ whose average is $1\,{\PC{p}}\,0$ --- that is \begin{equation}\label{e1635} \FF\times(\,(1\,{\PC{q}}\,0)+(1\,{\PC{r}}\,0)\,) \Wide{=} (1\,{\PC{p}}\,0)\Qdot \end{equation} A fair coin will then decide whether to carry on with $1\,\PC{q}\,0$ or with $1\,\PC{r}\,0$. Trivially \Eqn{e1635} holds just when $(q{+}r)/2 = p$, and if we represent $p,q,r$ as variables in our program, we can achieve \Eqn{e1635} by the double assignment \begin{ProgEqn}[p1720] IF p\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2p \\ \B\ p\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2p-1,1 \\ \HangLeft{\footnotemark\qquad} FI \\ \Ass{p = (q+r)/2}\Qcomma \end{ProgEqn} \footnotetext{We will sometimes include Dij\-kstra's closing \PF[FI].} whose postcondition indicates what the assignment has established. If we follow that with a fair-coin flip between continuing with \PF[q] or with \PF[r], viz. \begin{ProgEqn}[p1714] IF p\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2p \Com Here \PF[q] is 0.\\ \B\ p\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2p-1,1 \Com Here \PF[r] is 1. \\ FI \\ (x\In\ \OZ{q}) \PCF\ (x\In\ \OZ{r}) \Com The fair coin $(\PCF)$ here is permitted. \end{ProgEqn} then we should have implemented \Prg{p1643}. But what have we gained? The gain is that, whichever branch of the conditional is taken, there is a \FF\ probability that the problem we have \emph{yet} to solve will be either $(\PC{0})$ or $(\PC{1})$, both of which are trivial. If we were unlucky, well\ldots\ then we just try again. But how do we show rigorously that \Prg{p1643} and \Prg{p1714} are equal? If we look back at \Prg{p1720}, we find the assertion \Ass{p = (q+r)/2} which is easy to establish by conventional Hoare-logic or Dij\-kstra-\WP\ reasoning from the conditional just before it. (We removed it from \Prg{p1714} just to reduce clutter.) 
Rigour is achieved by calculating \begin{Reason} \Step{}{\WPP{(x\In\ \OZ{q})~\PCF~(x\In\ \OZ{r})}{~~~\Post}} \Space[-2ex] \Step{$=$}{\FF\;\WPP{(x\In\ \OZ{q})}{\Post} ~+~ \FF\;\WPP{(x\In\ \OZ{r})}{\Post}} \Step{$=$}{\NF{\PF q}{2}\cdot\Sbst{\Post}{\PF[x]}{1} + \NF{(1-\PF[q])}{2}\cdot\Sbst{\Post}{\PF[x]}{0} + \NF{\PF r}{2}\cdot\Sbst{\Post}{\PF[x]}{1} + \NF{(1-\PF[r])}{2}\cdot\Sbst{\Post}{\PF[x]}{0}} \Step{$=$}{(\PF[q]{+}\PF[r])/2\cdot\Sbst{\Post}{\PF[x]}{1} ~+~ (1-(\PF[q]{+}\PF[r])/2)\cdot\Sbst{\Post}{\PF[x]}{0}} \StepR{$=$}{\Ass{p = (q+r)/2}}{\PF[p]\cdot\Sbst{\Post}{\PF[x]}{1} ~+~ (1{-}\PF[p])\cdot\Sbst{\Post}{\PF[x]}{0}} \Space[-2ex] \Step{$=$}{\WPP{x\In\ \OZ{p}}{~~~\Post}\Qcomma} \end{Reason} for arbitrary postcondition \Post\ where at the end we used \Ass{p = (q+r)/2}. Thus $\Eqn{p1643}\,{=}\, \Eqn{p1714}$ because for any \Post\ their pre-expectations agree. \subsection{Step 2 --- intuition suggests a loop} We now return to the remark ``\ldots\ then we just try again.'' If we replace the final fair-coin flip \ProgIL{(x\In\ \OZ{q}) \PCF\ (x\In\ \OZ{r})} by \ProgIL{p\In\ q\,{\PCF}\,r} then \mbox{--intuitively--} we are in a position to ``try again'' with \ProgIL{x\In\ 1\,{\PC{p}}\,0}. Although it is the same as the statement we started with, we have made progress because variable \PF[p] has been updated --- and with probability \FF\ it is either 0 or 1 and we are done. If it is not, then we arrange for a second execution of \begin{ProgEqn}[p1852] IF p\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2p \\ \B\ p\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2p-1,1 \\ FI \\ p\In\ q\,{\PCF}\,r \end{ProgEqn} and, if \emph{still} \PF[p] is neither 0 nor 1, then \ldots\ we need a loop. 
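Before introducing it, note that the conditional's two branches can themselves be checked mechanically. Here is a minimal Python sketch (the function name is ours) confirming the assertion $\PF[p] = (\PF[q]{+}\PF[r])/2$ and the design fact that one of \PF[q],\PF[r] is always extreme:

```python
from fractions import Fraction

def split(p):
    # The IF ... FI of Step 1: choose q, r with (q + r)/2 == p
    # and at least one of them extreme (0 or 1).
    if p <= Fraction(1, 2):
        return Fraction(0), 2 * p
    else:
        return 2 * p - 1, Fraction(1)

for p in (Fraction(1, 3), Fraction(1, 2), Fraction(7, 10)):
    q, r = split(p)
    assert (q + r) / 2 == p    # the assertion established by the IF
    assert q == 0 or r == 1    # one continuation is always trivial
```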
\subsection{Step 3 --- introduce a loop}\label{step 3} We have already shown that \[ \Prg{p1643} \Wide{=} \Prg{p1852};~\Prg{p1643}\Qdot \] A general equality for sequential programs (including probabilistic) tells us that in that case also we have \[ \Prg{p1643} \Wide{=} \PF[WHILE \Cond\ DO $\Prg{p1852}$ OD; $\Prg{p1643}$]\HangRight{\quad\footnotemark} \] \footnotetext{As before, we usually use Dij\-kstra's loop-closing \PF[OD]\,.} for any loop condition \Cond, provided the loop terminates. Intuitively that is clear because, if \Prg{p1643} can annihilate \Prg{p1852} once from the right, then it can do so any number of times. A rigorous argument appeals to the fixed-point definition of \PF[WHILE], which is where termination is used. (If \Cond\ were \PF[True], so that the loop never terminated, the \RHS\ would be \PF[Abort], thus providing a clear counter-example.) For probabilistic loops, the usual ``certain'' termination is replaced with \emph{almost-sure} termination, abbreviated \AST, which means that the loop terminates with probability one: put the other way, that would be that the probability of iterating forever is zero. For example the program \begin{ProgEqn} c:= H; WHILE c=H DO c\In\ H\,\PCF\;T OD\Qdot \end{ProgEqn} terminates almost surely because the probability of flipping \PF[T] forever is zero. A reasonably good \AST\ rule for probabilistic loops is that the variant is (as usual) a natural number, but must be bounded above; and instead of having to decrease on every iteration, it is sufficient to have a non-zero probability of doing so \cite{Morgan:96b,McIver:05a}.\, \footnote{By ``reasonably good'' we mean that it deals with most loops, but not all: it is sound, but not complete. There are more complex rules for dealing with more complex situations \cite{McIver:2017aa}.
Strictly speaking, over infinite state spaces ``non-zero'' must be strengthened to ``bounded away from zero'' \cite{Morgan:96b}.} The variant for our example loop just above is {\PF{}[c=H]}, which has probability \FF\ of decreasing from {\PF{}[H=H]}, that is 1, to {\PF{}[T=H]} on each iteration. The loop condition \Cond\ for our program will be $0\,{<}\,\PF[p]\,{<}\,1$ and the variant comes directly from there: it is {\PF{}[0$<$\PF[p]$<$1]}, which has probability \FF\ of decreasing from 1 to 0 on each iteration; and when it is 0, that is $0\,{<}\,\PF[p]\,{<}\,1$ is false, the loop must exit. With that, we have established that our original \Prg{p1643} equals the looping program \begin{ProgEqn} WHILE 0\,<\,p\,<\,1 DO \+\\ IF p\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2p \\ \B\ p\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2p-1,1 \\ FI \\ p\In\ q\,{\PCF}\,r \-\\ OD \\ \Ass{\PF[p]=1 \lor \PF[p]=0} \\ x\In\ 1\,{\PC{\PF[p]}}\,0\Qcomma \end{ProgEqn} where the assertion at the loop's end is the negation of the loop guard. \subsection{Step 4 --- use the loop's postcondition} There is still the final \PF[x\In\ 1\,{\PC{\PF[p]}}\,0] to be dealt with, at the end; but the assertion \Ass{\PF[p]=1 \lor \PF[p]=0} just before it means that it executes only when \PF[p] is zero or one. So it can be replaced by \PF[IF p=0 THEN x\In\ 1\PC{0}0 ELSE x\In\ 1\PC{1}0]~, i.e.\ with just \ProgIL{x:= p}. Mathematically, that would be checked by showing for all post-expectations \Post\ that \[ \PF[p]=1 \lor \PF[p]=0 \Wide{\Implies}\WPP{x\In\ 1\,{\PC{\PF[p]}}\,0}{\Post} = \WPP{x:= p}{\Post}\Qdot \] But it's a simple-enough step just to believe (unless you were using mechanical assistance, in which case it \emph{would} be checked).
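In that mechanical spirit, here is a quick Python sketch of the check, with post-expectations represented as functions (the names are ours):

```python
def wp_choice(p, post):
    # wp(x :in 1 (+p) 0)(post) = p*post(1) + (1-p)*post(0)
    return p * post(1) + (1 - p) * post(0)

def wp_assign(p, post):
    # wp(x := p)(post) = post(p)
    return post(p)

posts = (lambda x: x, lambda x: 1 - x, lambda x: 3 * x + 2)
for p in (0, 1):                 # the assertion guarantees p is 0 or 1
    for post in posts:
        assert wp_choice(p, post) == wp_assign(p, post)
```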
And so now the program is complete: we have implemented \ProgIL{x\In\ 1\PC{p}0} by a step-by-step correctness-by-construction process that delivers the program \begin{ProgEqn}[p0925] WHILE 0\,<\,p\,<\,1 DO \+\\ IF p\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2p \\ \B\ p\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2p-1,1 \\ FI \\ p\In\ q\,{\PCF}\,r \-\\ OD \\ x:= p \end{ProgEqn} in which only fair choices appear. And each step is provably correct. \subsection{Step 5 --- after-the-fact optimisation}\label{step 5} There is still one more thing that can (provably) be done with this program, and it's typical of this process: only when the pieces are finally brought together do you notice a further opportunity. It makes little difference --- but it is irresistible. Before carrying it out, however, we should be reminded of the way in which these five steps are isolated from each other, how all the layers are independent. This is an essential part of \CbC, that the reasoning can be carried out in small, localised areas, and that it does not matter --for correctness-- where the reasoning's target came from; nor does it matter where it is going. Thus even if we had absolutely no idea what \Prg{p0925} was supposed to be doing, still we would be able to see that if we are replacing \PF[x] by \PF[p] at the end, we could just as easily replace it at the beginning; and then we can remove the variable \PF[p] altogether. That gives \begin{ProgEqn}[p0955] \LCom Now $p$ is again a parameter, as it was in the original specification. \\ x:= $p$ \\ WHILE 0\,<\,x\,<\,1 DO \+\\ IF x\,$\leq$\,\FF\ $\rightarrow$ q,r:= 0,2x \Com When $\PF[x]\,{=}\,\FF$, these two \\ \B\ x\,$\geq$\,\FF\ $\rightarrow$ q,r:= 2x-1,1 \Com branches have the same effect. \\ FI \\ x\In\ q\,{\PCF}\,r \-\\ OD\Qcomma \\ \LCom The above implements \ProgIL{x\In\ 1\PC{p}0} for any $0\,{\leq}\,p\,{\leq}\,1$. \end{ProgEqn} and we are done. 
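As an extra sanity check, beyond the development itself, the loop's structure can be unrolled into a recursion computing the exact probability of ending with $\PF[x]\,{=}\,1$. A Python sketch (ours) follows; it terminates for dyadic $p$, because the denominator halves on every call:

```python
from fractions import Fraction

def prob_one(p):
    # Exact probability that the loop above ends with x = 1,
    # following the loop's own case split and fair flip.
    if p == 0 or p == 1:
        return p
    if p <= Fraction(1, 2):
        q, r = Fraction(0), 2 * p
    else:
        q, r = 2 * p - 1, Fraction(1)
    return (prob_one(q) + prob_one(r)) / 2

assert prob_one(Fraction(3, 4)) == Fraction(3, 4)
assert prob_one(Fraction(5, 8)) == Fraction(5, 8)
```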
When $p$ is 0 or 1, it takes no flips at all; when $p$ is \FF, it takes exactly one flip; and for all other values the expected number of flips is at most 2. We notice that \Prg{p0955} appears to contain demonic choice, in that when $\PF[x]\,{=}\,\FF$ the conditional could take either branch. The nondeterminism is real --- even though the \emph{effect} is the same in either case, that \ProgIL{q,r:= 0,1} occurs. But genuinely different computations are carried out to get there: in the first branch $2(\FF)$ is evaluated to 1; and in the second branch $2(\FF)\,-\,1$ is evaluated to 0. This is not an accident: we recall from \Sec{s0910} that for Dij\-kstra such nondeterminism arises naturally as part of the program-construction process. Where did it come from in this case? The specification from which this conditional \PF[IF $\cdots$ FI] arose was set out much earlier, at \Eqn{e1635}, which given $p$ has many possible solutions in $q,r$. One of them, for example, is $q\,{=}\,r\,{=}\,p$, which however would have later given a loop whose non-termination would prevent Step 3 at \Sec{step 3}. With an eye on loop termination, therefore, we took a design decision that at least one of $q,r$ should be ``extreme'', that is 0 or 1. To end up with $q\,{=}\,0$, what is the largest that $p$ could be without sending $r$ out of range, that is, strictly more than 1? It's $p\,{=}\,\FF$, and so the first \PF[IF]-condition is $p\,{\leq}\,\FF$.\, The other condition $p\,{\geq}\,\FF$ arises similarly, and it absolutely does not matter that they overlap: the program will be correct whichever \PF[IF]-branch is taken in that case. And, in the end --in \Eqn{p0955} just above-- we see that indeed that is so.
\section{Implementing \emph{any} discrete choice with a fair coin}\label{s1041} Suppose instead of trying to implement a biased coin (as we have been doing so far), we want to implement a general (discrete) probabilistic choice of \PF[x]'s value from its type, say a finite set \CalX, but still using only a fair coin in the implementation. An example would be choosing \PF[x] uniformly from $\{x_0,x_1,x_2\}$, i.e.\ a three-way fair choice. But what we develop below will work for any discrete distribution on a finite set \CalX\ of values: it does not have to be uniform. The combination of probability \emph{and} abstraction allows a development like the one in \Sec{s1736} just above to be replayed, but at a greater level of generality. We begin with a variable \PF[d] of type $\Dist\CalX$,\, \footnote{Recall from \Sec{s0911} that $\Dist\CalX$ is the set of discrete distributions over finite set \CalX.} where we recall that \CalX\ is the type of \PF[x]; and our specification is \ProgIL{x\In\ d}, that is ``Set \PF[x] according to distribution \PF[d].'' \begin{figure} \caption{Abstraction in \pGCL} \label{f1314} \end{figure} \footnotetext{Summing over all possible values $e$ of \Meta{x} would give the same result, since the extra values have probability zero anyway. Some find this formulation more intuitive.} \subsection{Replaying earlier steps from \Sec{s1736}} Our first step --Step 1-- is to declare two more $\Dist\CalX$-typed variables \PF[d0] and \PF[d1], and --as in \Sec{step 1}-- specify that they must be chosen so that their average is the original distribution \PF[d]; for that we use the \pGCL\ nondeterministic-choice construct ``assign such that'' (with syntax borrowed from Dafny \cite{Leino:2010aa}), from \Fig{f1314}, to write \begin{ProgEqn}[p1102] d0,d1\St\ $\PF[d]=(\PF[d0]{+}\PF[d1])/2$ \Com Choose \PF[d0,d1] so that their average is \PF[d].
\end{ProgEqn} The analogy with our earlier development is that there the distribution \PF[d] was specifically $1\,{\PC{p}}\,0$, and we assigned \[\begin{array}{p{6em}lcl@{~~}l} if $\PF[p]\,{\leq}\FF$& \PF[d0],\PF[d1] &=& (\OZ{0}), &(\OZ{2\PF[p]})\\ if $\PF[p]\,{\geq}\FF$ & \PF[d0],\PF[d1] &=& (\OZ{2\PF[p]{-}1}), &(\OZ{1}) \Qcomma \end{array}\] which is a refinement $(\Ref)$ of \Eqn{p1102}. Our second step is to re-establish the \ProgIL{x\In\ d} -annihilating property that \begin{equation}\label{e1433} \PF[\Prg{p1102}; d\In\ d0\,{\PCF}\,d1; x\In\ d] \Wide{=} \PF[x\In\ d]\Qcomma \end{equation} which is proved using \WP-calculations against a general post-expectation \Post, just as before: instead of the assertion \Ass{\PF[p] = (\PF[q]+\PF[r])/2} used at the end of Step 1, we use the assertion \Ass{\PF[d]=(\PF[d0]{+}\PF[d1])/2} established by the assign-such-that. The third step is again to introduce a loop. But we recall from Step 3 earlier that the loop must be almost-surely terminating and, to show that, we need a variant function. Here we have no \PF[q],\PF[r] that might be set to 0 or 1; we have instead \PF[d0],\PF[d1]. Our variant will be that the ``size'' of one of these distributions must decrease strictly, where we define the \emph{size} of a discrete distribution to be the number of elements to which it assigns non-zero probability.\, \footnote{In probability theory this would be the cardinality of its support.} But our specification \ProgIL{d0,d1\St\ $\PF[d]=(\PF[d0]{+}\PF[d1])/2$} above does not require that decrease; and so we must backtrack in our \CbC\ and make sure that it does. And we have made an important point: developments following \CbC\ rarely proceed as they are finally presented: the dead-ends are cut off, and only the successful path is left for the audit trail.
It highlights the multiple uses of \CbC\ --- that on the one hand, used for teaching, the dead-ends are shown in order to learn how to avoid them; used in production, the successful path remains so that it can be modified in the case that requirements change.\, \footnote{And if an error was made in the \CbC\ proofs, the ``successful'' path can be audited to see what the mistake was, why it was made, and how to fix it.} Thus to establish \AST\ of the loop --that it terminates with probability one-- we strengthen the split of \PF[d] achieved by \ProgIL{d0,d1\St\ $(\PF[d0]{+}\PF[d1])/2 = \PF[d]$} with the decreasing-variant requirement, that either $|\PF[d0]|\,{<}\,|\PF[d]|$ or $|\PF[d1]|\,{<}\,|\PF[d]|$, where we are writing $|{-}|$ for ``size of''. Then the variant $|\PF[d]|$ is guaranteed strictly to decrease with probability \FF\ on each iteration. That is, we now write \begin{ProgEqn}[p1442] d0,d1\St\quad $(\PF[d0]{+}\PF[d1])/2 = \PF[d] \;\land\; (|\PF[d0]|\,{<}\,|\PF[d]| \lor |\PF[d1]|\,{<}\,|\PF[d]|)$\Qcomma \end{ProgEqn} replacing \Eqn{p1102}, for the nondeterministic choice of \PF[d0] and \PF[d1]. We do not have to re-prove its annihilation property, because the new statement \Eqn{p1442} is a refinement of \Eqn{p1102} from before (it has a stronger postcondition) and so preserves all its functional properties. In fact that is the definition of refinement. Our next step is to reduce the nondeterminism in \Eqn{p1442} somewhat, choosing a particular way of achieving it: to ``split'' \PF[d] into two parts \PF[d0],\PF[d1] such that the size of at least one part is smaller, we choose two subsets $X_0,X_1$ of \CalX\ whose intersection contains at most one element. That is illustrated in \Fig{f1511}, where $X_0=\{\textsf{A},\textsf{B},\textsf{C}\}$ and $X_1=\{\textsf{C},\textsf{D}\}$.
Further, we require that the probabilities assigned by \PF[d] to $X_0{-}X_1$ and to $X_1{-}X_0$ are both no more than \FF.\, \footnote{Applying \PF[d] to a set means the sum of the \PF[d]-probabilities of the elements of the set.} Those constraints mean that we can always arrange the subsets so that the ``\FF-line'' of \Fig{f1511} either goes strictly through $X_0\,{\cap}\,X_1$ (if they overlap) or runs between them (if they do not). \begin{figure} \caption{Dividing a discrete distribution into two pieces} \label{f1511} \end{figure} \footnotetext{If for example \textsf{C} was much smaller, so that the dividing line went through \textsf{D}, the new distribution \PF[d0] would have support of size 4, the same as \PF[d] itself. But \PF[d1] would have support of size 1, strictly smaller.} We then construct \PF[d0] by restricting \PF[d] to just $X_0$, then doubling all the probabilities in that restriction; if they sum to more than 1, we then trim any excess from the one element in $X_0\,{\cap}\,X_1$ that $X_0$ shares with $X_1$. The analogous procedure is applied to generate \PF[d1]. In \Fig{f1511} for example we chose sizes $0.2$, $0.1$, $0.3$ and $0.4$ for the four regions, and the \FF\ line went through the third one. On the left, the $0.2$ and $0.1$ and $0.3$ are doubled to $0.4$ and $0.2$ and $0.6$, summing to $1.2$; thus $0.2$ is trimmed from the $0.6$, leaving $0.4$ assigned to \textsf{C}. The analogous procedure applies on the right. \subsection{``Decomposition of data into data structures''} \label{s1451} The quote is from Wirth \cite{Wirth:71}. Our program is currently \begin{ProgEqn}[p1032] WHILE |d|$\neq$1 DO \+\\ d0,d1\St\quad $(\PF[d0]{+}\PF[d1])/2 = \PF[d] \;\land\; (|\PF[d0]|\,{<}\,|\PF[d]| \lor |\PF[d1]|\,{<}\,|\PF[d]|)$ \\ d\In\ \PF[d0]\,\PCF\,\PF[d1] \-\\ OD \\ x\In\ d \Com This is a trivial choice, because \PF[|d|$=$1] here. \end{ProgEqn} And it is correct: it does refine \PF[x\In\ d] --- but it is rather abstract.
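The restrict-double-trim construction described above can be checked concretely on the worked numbers of the figure; here is a small Python sketch (the set and function names are ours):

```python
from fractions import Fraction as F

d = {"A": F(2, 10), "B": F(1, 10), "C": F(3, 10), "D": F(4, 10)}
X0, X1 = ("A", "B", "C"), ("C", "D")    # they share exactly the element C

def restrict_double_trim(d, keep, shared):
    # Restrict d to 'keep' and double; if the result sums to more than 1,
    # trim the excess from the single shared element.  The half-line
    # constraints guarantee the trimmed value never goes negative.
    di = {x: 2 * d[x] for x in keep}
    excess = sum(di.values()) - 1
    if excess > 0:
        di[shared] -= excess
    return di

d0 = restrict_double_trim(d, X0, "C")   # A: 2/5, B: 1/5, C: 2/5
d1 = restrict_double_trim(d, X1, "C")   # C: 1/5, D: 4/5

for x in d:                             # their average is d again
    assert (d0.get(x, 0) + d1.get(x, 0)) / 2 == d[x]
```

The final loop checks the averaging property; and here both supports shrink, so the variant requirement is satisfied too.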
Our next development step will be to make it concrete by realising the distribution-typed variables and the subsets of \CalX\ as ``ordinary'' datatypes using scalars and lists. In correctness-by-construction this is done by deciding, before that translation process begins, what the realisations will be --- and only then is the abstract program transformed, piece by piece. The relation between the abstract- and concrete types is called a \emph{coupling invariant}. Although an obvious approach is to order the type \CalX, say as $x_1,x_2,\ldots,x_N$ and then to realise discrete distributions as lists of length $N$ of probabilities (summing to 1), a more concise representation is suggested by the fact that for example we represent a \emph{two}-point distribution $x_1\PC{p}x_2$ as just \emph{one} number $p$, with the $1{-}p$ implied. Thus we will represent the distribution $p_1,p_2,\ldots,p_N$ as the list of length $N{-}1$ of ``accumulated'' probabilities: in this case we would have the list \[ p_1,~~p_1{+}p_2,~~\ldots,~~ \sum_{n=1}^{N-1} p_n\Qcomma \] leaving off the $N^{th}$ element of the list since it would always be 1 anyway. Subsets of \CalX\ will be pairs \PF[low],\PF[high] of indices, meaning $\{x_{\PF low},\ldots,x_{\PF high}\}$, and although that can't represent \emph{all} subsets of \CalX, contiguous subsets are all we will need. Carrying out that transformation gives the following concrete version of our abstract program \Prg{p1032} below, where the abstract \PF[d] is represented as the concrete {\PF dL[low:high]}\,, which is the coupling invariant. \footnote{The range {\PF [low:high]} is inclusive-exclusive (as in Python). A similar coupling invariant applies to \PF[d0] and \PF[d1]. All three invariants are applied at once.} \begin{figure} \caption{Implement any discrete choice using only a fair coin.} \label{f0834} \end{figure} And in \Prg{p1051} of \Fig{f0834} we have, finally, a concrete program that can actually be run.
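The accumulated-probabilities representation itself is easy to sketch; as a hypothetical Python helper (ours, not part of the program in \Fig{f0834}):

```python
from fractions import Fraction

def to_accumulated(ps):
    # Represent p1,...,pN (summing to 1) by the length N-1 list of
    # partial sums p1, p1+p2, ...; the final entry 1 is left implicit.
    acc, running = [], Fraction(0)
    for p in ps[:-1]:
        running += p
        acc.append(running)
    return acc

# A two-point distribution 1 (+p) 0 is represented by the single number p.
assert to_accumulated([Fraction(7, 10), Fraction(3, 10)]) == [Fraction(7, 10)]

fair_die = [Fraction(1, 6)] * 6
assert to_accumulated(fair_die) == [Fraction(n, 6) for n in range(1, 6)]
```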
Notice that it has exactly the same \emph{structure} as \Prg{p1032}: split (the realisations of) \PF[d] into \PF[d0] and \PF[d1]; overwrite \PF[d] with one of them; exit the loop when \PF[|d|] is one. Nevertheless, as earlier in \Sec{step 5}, further development steps might still be possible now that everything is together in one place:\, \footnote{Note the necessity of keeping this as two steps: first data-refine, then (if you can) optimise algorithmically.} and indeed, recognising that only one of \PF[dL0],\PF[dL1] will be \emph{used}, we can rearrange \Prg{p1051}'s body so that only one of them will be \emph{calculated} --- and it can be updated as we go. That gives our really-final-this-time program \Eqn{p1511} in \Fig{f1541}, which will --without further intervention-- use a fair coin to choose a value $x_n$ according to \emph{any} given discrete distribution $d$ on finite \CalX. Its expected number of coin flips is no worse than $2N{-}2$, where $N$ is the size of \CalX, thus agreeing with the expected 2 flips for the program \Eqn{p0955} in \Sec{step 5} that dealt with the simpler case $d = (1\PC{p}0)$ where \CalX\ was $\{1,0\}$. \begin{figure} \caption{Optimisation of \Prg{p1051}} \label{f1541} \end{figure} It's again worth emphasising --because it is the main point-- that the correctness arguments for all of these steps are isolated from each other: in \CbC\ every step's correctness is determined by looking at that step alone. Thus for example nothing in the translation process just above involved reasoning about the earlier steps, whether \Prg{p1032} actually implemented the \ProgIL{x\In\ d} that we started with: we didn't care, and we didn't check. We just translated \Prg{p1032} into \Prg{p1051} regardless. And the subsequent rearrangement of \Eqn{p1051} into \Prg{p1511} similarly made no use of \Prg{p1051}'s provenance.
All that is to be contrasted with the more common approach in which \emph{only} intuition (and experience, and skill) is used, in which our final \Prg{p1511} might be written all at once at this concrete level, only then checking (testing, debugging, hoping) afterwards that our intuitions were correct. A transliteration of \Prg{p1511} into Python is given in App.\,\ref{a1549}. \begin{figure} \caption{Simulating a fair die with a fair coin} \label{f1514} \end{figure} \section{An everyday application: \\ simulating a fair die using only a fair coin} \label{s1512} \Prg{p1511} of the previous section works for any discrete distribution, without having to adapt the program in any way. However if the distribution's probabilities are not too bizarre, then the number of different values for \PF[low] and \PF[d] and \PF[high] might be quite small --- and then the program's behaviour for that distribution in particular can be set out as a small probabilistic state machine. In \Fig{f1514} we take \PF[d] to be the uniform distribution over the possible die-roll outcomes $\{1,2,3,4,5,6\}$, and show the state machine that results. For that state machine in particular, we propose one last correctness-preserving step: it takes us to the optimal die-roll algorithm of Knuth and Yao \cite{Knuth:1976aa}. \section{Why was this ``correctness by construction''?} The programs here are not themselves remarkable in any way. (The optimality of the Knuth/Yao algorithm is not our contribution.) Even the mathematical insights used in their construction are well known, examples of elementary probability theory. \CbC\ means however applying those insights in a systematic, layered way so that the reasoning in each layer does not depend on earlier layers, and does not affect later ones. The steps were specifically {\begin{enumerate} \item Start with the \emph{specification} \PF[x\In\ d] at the beginning of \Sec{s1041}. \item Prove a one-step annihilation property \Eqn{e1433} for that specification.
\item Use a general loop rule to prove loop-annihilation \Prg{p1032}, after strengthening \Prg{p1102} to \Prg{p1442} to establish \AST. \item Propose strategy \Fig{f1511} for the loop body of \Prg{p1032}. \item Propose data representation of finite discrete distributions as lists, in \Sec{s1451}, realising the strategy of \Fig{f1511} in the code of \Prg{p1051}.\ \item Rearrange \Prg{p1051} to produce a more efficient final program \Prg{p1511}. \item[] \item Note that \textbf{correctness-by-construction guarantees} that \Prg{p1511} refines \PF[x\In\ d] for any \PF[d]. \item[] \item Apply \Prg{p1511} to the fair die, to produce the state chart of \Fig{f1514}. \item Modify \Fig{f1514} to produce the Knuth/Yao (optimal) algorithm \cite{Knuth:1976aa}. \item[] \item Note that \textbf{correctness-by-construction guarantees} that the Knuth/Yao (optimal) algorithm implements a fair die. \end{enumerate}} \CbC\ also means that since all those steps are done explicitly and separately, they can be checked easily as you go along, and audited afterwards. But to apply \CbC\ effectively, and \emph{honestly}, one must have a rigorous semantics that justifies every single development step made. In our example here, that was supplied by the semantics of \pGCL\ \cite{McIver:05a}. But working in any ``wide spectrum'' language, right from the (abstract) start all the way to the (concrete) finish, means that many of those rigorous steps can be checked by theorem provers. \appendix\section{\Prg{p1511} implemented in Python} \label{a1549} {\small
\begin{verbatim}
# Run 1,000,000 trials on a fair-die simulation.
#
# bash-3.2$ python ISoLA.py
# 1000000
# 1 1 1 1 1 1
# Relative frequencies
# 0.998154 1.00092 0.996474 0.998664 1.004928 1.00086
# realised, using 4.001938 flips on average.
\end{verbatim}
\begin{verbatim}
import sys
from random import randrange

# Number of runs, an integer on the first line by itself.
runs = int(sys.stdin.readline())

# Discrete distribution unnormalised, as many subsequent integers as needed.
# Then EOT.
d= []
for line in sys.stdin.readlines():
    for word in line.split():
        d.append(int(word))
sizeX= len(d) # Size of initial distribution's support.

# Construct distribution's representation as accumulated list dL_Init.
# Note that length of dL_Init is sizeX-1,
# because final (normalised) entry of 1 is implied.
# Do not normalise, however: makes the arithmetic clearer.
sum,dL_Init= d[0],[]
for n in range(sizeX-1): dL_Init= dL_Init+[sum]; sum= sum+d[n+1]

tallies= []
for n in range(sizeX): tallies= tallies+[0]
allFlips= 0 # For counting average number of flips.

for r in range(runs):
    flips= 0
    ### Program (14) proper starts on the next page.
\end{verbatim}
\begin{verbatim}
    ### Program (14) starts here.
    low,high,dL= 0,sizeX-1,dL_Init[:] # Must clone dL_Init.
    # print "Start:", low, dL[low:high], high
    while low<high:
        flip= randrange(2) # One fair-coin flip.
        flips= flips+1
        if flip==0:
            n= low
            while n<high and 2*dL[n]<sum: dL[n]= 2*dL[n]; n= n+1
            high= n # Implied dL0[high]=1 performs trimming automatically.
            # print "Took dL0:", low, dL[low:high], high # dL0 has overwritten dL.
        else: # flip==1
            n= high-1
            while low<=n and 2*dL[n]>sum: dL[n]= 2*dL[n]-sum; n= n-1
            low= n+1 # Implied dL1[low]=0 performs trimming automatically.
            # print "Took dL1", low, dL[low:high], high # dL1 has overwritten dL.
    # print "Rolled", low, "in", flips, "flips."
    ### Program (14) ends here.
    tallies[low]= tallies[low]+1
    allFlips= allFlips+flips

print "Relative frequencies"
for n in range(sizeX):
    print " ", float(tallies[n])/runs * sum
print "realised, using", float(allFlips)/runs, "flips on average."
\end{verbatim}
} \end{document}
arXiv
Abstract: T-duality is used to extract information on an instanton of zero size in the $E_8\times E_8$ heterotic string. We discuss the possibility of the appearance of a tensionless anti-self-dual non-critical string through an implementation of the mechanism suggested by Strominger of two coincident 5-branes. It is argued that when an instanton shrinks to zero size a tensionless non-critical string appears at the core of the instanton. It is further conjectured that appearance of tensionless strings in the spectrum leads to new phase transitions in six dimensions in much the same way as massless particles do in four dimensions.
BMC Infectious Diseases

Survival outcomes for first-line antiretroviral therapy in India's ART program

Rakhi Dandona1, Bharat B. Rewari2,3, G. Anil Kumar1, Sukarma Tanwar1,2,3, S. G. Prem Kumar1, Venkata S. Vishnumolakala1, Herbert C. Duber4, Emmanuela Gakidou4 & Lalit Dandona1,4

BMC Infectious Diseases volume 16, Article number: 555 (2016)

Little is known about survival outcomes of HIV patients on first-line antiretroviral therapy (ART) on a large scale in India, or about facility-level factors that influence patient survival, to guide further improvements in the ART program in India. We examined factors at the facility level, in addition to patient factors, that influence survival of adult HIV patients on ART in the publicly funded ART program in a high- and a low-HIV prevalence state.

Retrospective chart review in public sector ART facilities in the combined states of Andhra Pradesh and Telangana (APT) before these were split in 2014 and in Rajasthan (RAJ), the high- and low-HIV prevalence states, respectively. Records of adults initiating ART between 2007-12 and 2008-13 in APT and RAJ, respectively, were reviewed and facility-level information collected at all ART centres and a sample of link ART centres. Survival probability was estimated using the Kaplan-Meier method, and determinants of mortality explored with facility- and patient-level factors using a Cox proportional hazard model.

Based on data from 6581 patients, the survival probability of ART at 60 months was 76.3 % (95 % CI 73.0–79.2) in APT and 78.3 % (74.4–81.7) in RAJ. The facilities with cumulative ART patient load above the state average had lower mortality in APT (Hazard ratio [HR] 0.74, 0.57–0.95) but higher in RAJ (HR 1.37, 1.01–1.87). Facilities with a higher proportion of lost to follow-up patients in APT had higher mortality (HR 1.47, 1.06–2.05), as did those with a higher ART to pre-ART patient ratio in RAJ (HR 1.62, 1.14–2.29).
In both states, there was a higher hazard for mortality in patients with a CD4 count of 100 cells/mm3 or less at ART initiation, in males, and in patients with TB co-infection. These data from the majority of facilities in a high- and a low-HIV burden state of India over 5 years reveal reasonable and similar survival outcomes in the two states. The facilities with higher ART load in the longer established ART program in APT had better survival, but facilities with a higher ART load and a higher ratio of ART to pre-ART patients in the less experienced ART program in RAJ had poorer survival. These findings have important implications for India's ART program planning as it expands further.

Over the last decade, the HIV program in India has been scaled up substantially to reduce mortality and morbidity from HIV/AIDS and to improve the quality of life of those infected by HIV. The rapid scale-up of antiretroviral treatment (ART) services in recent years has improved access to ART with provision of free ART under the National AIDS Control Program (NACP-IV) [1]. The HIV program in India aims for evidence-based planning for further ART roll-out and performance monitoring [2]. However, there is a paucity of literature on survival outcomes of HIV patients on ART on a large scale in India. The available reports are based on small samples of HIV patients, data limited to a single treatment centre, survival outcomes with TB as comorbidity, or have explored only individual-level predictors of survival on ART [3–7]. At the time of designing the study in 2012-13, our aim was to document survival outcomes and analyse the individual-level and facility-level predictors of survival for HIV patients on first-line ART in NACP-IV in two Indian states - Andhra Pradesh and Rajasthan. Andhra Pradesh in south India had a population of about 85 million at that time, the highest number of persons with HIV among Indian states, and a long-standing ART program.
On the other hand, Rajasthan in north India had a population of about 70 million, and a relatively lower HIV burden with a more recent ART program [8]. Heterosexual mode of transmission is the major HIV infection route in both states [8]. After data collection was completed in 2013, the state of Telangana was carved out of Andhra Pradesh state in June 2014. As data were collected prior to this split, we report findings for the undivided Andhra Pradesh that includes the current Andhra Pradesh and Telangana (APT) and for Rajasthan (RAJ). Ethics approval for this study was obtained from the Ethics Committees of the Public Health Foundation of India, New Delhi and the University of Washington, Seattle, USA. The study was approved by the Indian Council for Medical Research, the Health Ministry Steering Committee of the Government of India, and by the National AIDS Control Organization of India.

Sample of ART facilities

In India, ART services are provided by ART centres and Link ART centres (LAC). An ART centre provides pre-ART and ART services to HIV-infected people, and a LAC is an extension of an ART centre; LACs were established to minimize the travel and related costs for ART patients to receive basic ART services [8]. Patients on ART are registered with an ART centre to start treatment and, once stable, are then transferred to a LAC to receive medications on a routine basis. In this study, for APT, all 45 ART centres and one LAC for each ART centre functional in 2012 were randomly sampled. A total of 41 LACs were subsequently sampled, as four ART centres in the state capital Hyderabad had no LAC. In RAJ, all 16 ART centres and all 27 link ART centres functional in 2012 were sampled, of which 10 LACs were newly established, and hence data were not available for these for this study.

Patient clinical record data

The ART patient's clinical record (known as the white card) is maintained at the facilities that provide ART services.
It is used to document demographic information of the patient; risk factors; various treatment and clinical details; and follow-up details for each visit. We aimed to sample 75 and 35 adult patient records at each ART centre and LAC, respectively, in APT for the last five financial years at the time of starting data collection (April 2007 to March 2012). For RAJ, the target sample was 180 and 30 adult patient records at each ART centre and LAC, respectively, during the last 5 financial years at the time of starting data collection (April 2008 to March 2013). With this approach, at least 3000 adult patients on ART were finally expected to be part of the study in each state. Only records of patients who were initiated on first-line ART between 6 and 60 months prior to the date of data collection were considered eligible. We used the ART enrolment register, which includes documentation of the ART initiation date for each patient, to arrive at the total number of eligible adult patient records at each facility. We then used a pre-defined sampling strategy to sample the eligible patient records at each ART facility - the total number of adult patients who had initiated ART within the inclusion time period was divided by the required number of adult patient records to be sampled to arrive at the sampling interval. A random number was picked within this sampling interval to select the first record, and then the records were sampled systematically using the sampling interval until the desired sample was achieved.

Data collection procedure

Data were collected electronically using DatStat Illume Survey Manager 5.1 (DatStat Inc., Seattle, WA). The program used for capturing data was a replica of the white card. Data extractors trained in study procedures recorded data "as is" from the white cards to reflect the actual data in each white card.
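The interval arithmetic behind this systematic sampling can be sketched in a few lines. The integer interval and 0-based indexing below are our assumptions for illustration, as the text does not specify the rounding used in the field:

```python
from random import Random

def systematic_sample(n_eligible, n_required, rng):
    """Systematic sampling of record positions (0-based):
    interval = eligible // required, with a random start inside the
    first interval; then every interval-th record is taken."""
    interval = max(1, n_eligible // n_required)
    start = rng.randrange(interval)
    return list(range(start, n_eligible, interval))[:n_required]
```

For example, 75 records drawn from 300 eligible records gives an interval of 4, a random start between 0 and 3, and then every fourth record thereafter.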
The current status of the patient (alive, dead, lost to follow-up, or transferred to another facility) was recorded from the ART enrolment register. Ten percent or more of the data collected were checked daily by the field team supervisor to ensure data quality. Formal consent of the senior-most person responsible at each ART facility was obtained to collect these data. Data were collected in APT from February to May 2013 and in RAJ during June-July 2013.

Survival probability

The probability of survival of HIV patients on ART at 12 and 60 months was estimated using the Kaplan-Meier product-limit estimation method. The duration of survival was calculated from the month of ART initiation to the month of death, or to the month of the last visit for patients who were alive. Censoring based on the last date of visit to the ART facility was done for patients who were lost to follow-up (LFU) or transferred out of the facility. We report the overall survival probability of HIV patients on ART at 12 and 60 months for the two sexes and by baseline CD4 cell count. As mortality is mostly not captured at the facility level among HIV patients who were LFU, we adjusted the survival for mortality among LFU patients as shown in the equation below, which has been used previously in the literature: [9]

$$ \mathrm{S}(t) = 1 - \left[ \mathrm{L}(t) \times \mathrm{ML}(t) + \mathrm{MNL}(t) \right] $$

where S(t) = adjusted ART survival in the last 5 years; L(t) = proportion of LFU patients in 5 years; ML(t) = mortality estimated in LFU patients; and MNL(t) = mortality observed in patients in care (1 − predicted Kaplan-Meier survival estimate).
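As a generic illustration of the product-limit method described above (not the study's analysis code), the estimator multiplies, at each death time, the fraction of at-risk patients surviving; censored patients (LFU or transferred out) leave the risk set after their last visit:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.
    times: follow-up in months; events: 1 = death, 0 = censored.
    Returns a list of (time, S(time)) at each observed death time."""
    data = sorted(zip(times, events))
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)
    return curve
```

With four patients followed for 1, 2, 2 and 3 months, where the second is censored at month 2 and the others die, the estimate steps from 0.75 to 0.5 to 0.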
As an inverse relation between mortality among LFU patients and the rate of LFU in an ART program has been previously reported from an analysis of several HIV programs in Africa, we used the following equation from that analysis to estimate mortality among LFU patients in each facility in APT and RAJ: [9]

$$ \mathrm{ML}_{i} = \exp\left(a + b r_{i}\right) / \left(1 + \exp\left(a + b r_{i}\right)\right) $$

where MLi = mortality among LFU patients in each facility, ri = proportion of LFU patients in each facility, a = 0.57287 and b = −4.04409. Finally, we calculated the weighted average of ML for each state by using the total patients on ART in each facility in each state. The weighted average of estimated mortality among LFU patients was 0.43 for APT and 0.61 for RAJ at 5 years. We considered 20 % lower and higher levels than the point estimates for ML(t) for sensitivity analysis of the probabilities of ART survival, which was performed using Monte Carlo simulations with 100,000 iterations in the @Risk software (Palisade UK Europe Ltd). We used random values between these plausible ranges to obtain the 95 % uncertainty interval (UI) for the survival probability estimates. We report survival probability at 60 months that is adjusted for LFU mortality.

Predictors of mortality

A Cox proportional hazard model was used to determine the predictors of mortality with select patient demographic and clinical indicators (ART regimen, CD4 count at start of ART, co-existing tuberculosis, history of alcohol use). We also included facility-related variables - cumulative ART patient load at the ART centre, ratio of cumulative ART patients to pre-ART patients, and percent of cumulative LFU patients. The cumulative data over the 5-year study period were used to calculate the average value for each of these variables per state.
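As a consistency check, the two relations above can be combined in a few lines. Note that the adjusted 60-month figures reported in the Results (70.6 % for APT and 76.1 % for RAJ) are reproduced when the LFU mortality term is weighted by the LFU proportion, i.e., S = 1 − [L × ML + MNL]; the state-level inputs below are the values reported in this article:

```python
from math import exp

A, B = 0.57287, -4.04409     # coefficients from the African cohort analysis [9]

def ml(r):
    """Estimated mortality among LFU patients, given LFU proportion r."""
    return exp(A + B * r) / (1.0 + exp(A + B * r))

def adjusted_survival(km_survival, lfu_prop, ml_lfu):
    """S_adj = 1 - [L * ML + MNL], with MNL = 1 - Kaplan-Meier survival."""
    return 1.0 - (lfu_prop * ml_lfu + (1.0 - km_survival))

# APT: KM survival 0.763 at 60 months, 432/3280 patients LFU, weighted ML = 0.43
apt = adjusted_survival(0.763, 432 / 3280, 0.43)
# RAJ: KM survival 0.783 at 60 months, 115/3198 patients LFU, weighted ML = 0.61
raj = adjusted_survival(0.783, 115 / 3198, 0.61)
```

The logistic relation also shows the inverse pattern the text describes: a program with a higher LFU rate has lower estimated mortality among its LFU patients.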
Facilities were categorised as having below/equal or above the average value for cumulative ART patient load and for the ratio of cumulative ART patients to pre-ART patients; and were categorised into three equal groups based on the percent of cumulative LFU patients for the analysis. For this analysis, the ART patients undergoing treatment at a LAC were considered together with the parent ART centre at which they were primarily registered for ART. The 95 % confidence intervals (CI) are reported where relevant. The median CD4 count at ART initiation is presented for the alive, dead and LFU patients separately, and the range is also reported. The log-rank test was used to examine significance for survival probability. Data from 82 and 41 facilities were available for analysis in APT and RAJ, respectively. Data were analysed using the statistical software STATA (version 13.1, StataCorp, USA).

A total of 3340 adult patient records were extracted for analysis in APT state, of which 3280 (98.2 %) records had information available on the current status of the patient at the time of data collection. Among these, 2130 (64.9 %) were alive and on ART, 437 (13.3 %) had died, 432 (13.2 %) were LFU, and 281 (8.6 %) were transferred out to another ART facility during the study period. In RAJ state, 3241 adult patient records were extracted for analysis, of which 3198 (98.7 %) had information available on the current status of the patient at the time of data collection. Among these, 2554 (79.9 %) were alive and on ART, 393 (12.3 %) were dead, 115 (3.6 %) were LFU, and 136 (4.3 %) were transferred out to another ART facility. The demographic and clinical characteristics of adult patients on ART are summarized in Table 1. The median age of patients on ART was 35 years in both states (interquartile range, IQR 29–40 years). The median CD4 cell count at ART initiation was 172 cells/mm3 (IQR 104–236) in APT and 159 cells/mm3 (IQR 86–240) in RAJ.
Significantly more patients had a CD4 count of ≤100 cells/mm3 at ART initiation in RAJ (29.4 %) than in APT (23.4 %; p < 0.001). The patients who were alive and on treatment showed an increasing trend in median CD4 cell counts at ART initiation over the years in both the states (Fig. 1). The overall baseline median CD4 cell count of deceased patients [126 cells/mm3 (IQR 66–194) in APT and 93 cells/mm3 (IQR 48–159) in RAJ] was comparatively lower than that of the patients who were alive and on treatment in both the states [184 cells/mm3 (IQR 115–245) in APT and 172 cells/mm3 (IQR 98–247) in RAJ].

Table 1 Demographic and clinical characteristics of patients on ART, and facility-related indicators in Andhra Pradesh and Telangana (2007-12) and in Rajasthan (2008-13)

Fig. 1 Yearly trends in baseline median CD4 cell counts for HIV patients on ART in Andhra Pradesh and Telangana (2007-12) and in Rajasthan (2008-13) based on current status of the patient

Among the 437 and 393 patients who had died in APT and RAJ, respectively, 188 (43 %) in APT and 191 (48.6 %) in RAJ had died within 6 months of starting ART. The unadjusted mortality rate among patients on ART in APT and RAJ was 6.83 and 7.18 per 100 patient-years at 60 months, respectively. The median survival time was 22 months in APT and 18 months in RAJ. The estimated unadjusted survival probability on ART at 12 and 60 months was 91.2 % (95 % CI 90.1–92.1) and 76.3 % (95 % CI 73.0–79.2) in APT, respectively; and 90.6 % (95 % CI 89.4–91.6) and 78.3 % (95 % CI 74.4–81.7) in RAJ (Fig. 2). The probability of survival was higher among females than males in both states (log-rank test, p < 0.001; Fig. 2), and was significantly lower for patients with a CD4 count <101 cells/mm3 at ART initiation than for those with a CD4 count >250 cells/mm3 in both the states (log-rank test, p < 0.001; Fig. 3).
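The per-100-person-year figures quoted here follow the standard rate definition (deaths divided by total follow-up time). The numbers in the snippet are illustrative only, since the study's patient-level person-time is not reproduced in the text:

```python
def mortality_rate_per_100_py(deaths, person_years):
    """Unadjusted mortality rate per 100 person-years of follow-up."""
    return 100.0 * deaths / person_years

# Illustrative: 40 deaths observed over 2000 person-years of follow-up.
rate = mortality_rate_per_100_py(40, 2000)
```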
After adjusting for the assumed higher mortality among LFU patients, the adjusted survival probability on ART at 60 months was 70.6 % (95 % UI 67.0–73.8) in APT and 76.1 % (95 % UI 72.2–79.6) in RAJ.

Fig. 2 Kaplan-Meier unadjusted survival curve for HIV patients on ART for males and females in Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13)

Fig. 3 Kaplan-Meier unadjusted survival curve for HIV-infected patients on ART by CD4 count at ART initiation (cells/mm3) in Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13)

Table 2 shows the results of the Cox proportional hazard model for the predictors of mortality among HIV patients on ART. The findings at the patient level were similar for both states. Patients with a CD4 count <101 cells/mm3 at ART initiation had a higher hazard for mortality than patients with a CD4 count >250 cells/mm3 (HR 3.36, 95 % CI 2.29–4.95 in APT and HR 3.71, 95 % CI 2.47–5.58 in RAJ). Patients with a history of alcohol consumption had a significantly higher risk of dying than those who never consumed alcohol in APT (HR 1.57, 95 % CI 1.22–2.02) and in RAJ (HR 1.42, 95 % CI 1.09–1.86). Males as compared with females, and patients with TB co-infection, had a higher hazard for death. The patients on a Zidovudine-based ART regimen had a lower hazard for mortality than those on the Stavudine-based ART regimen in both states.

Table 2 Determinants of mortality among HIV patients on ART using Cox proportional hazard model for Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13). CI denotes confidence interval

At the facility level, facilities with a cumulative ART patient load above the average for the state facilities had lower mortality in APT (HR 0.74, 95 % CI 0.57–0.95) but had higher mortality in RAJ (HR 1.37, 95 % CI 1.01–1.87).
The facilities in APT with a proportion of LFU patients higher than the state average had significantly higher mortality (HR 1.47, 95 % CI 1.06–2.05); the trend in RAJ was similar but did not reach statistical significance. On the other hand, facilities in RAJ with a higher ART to pre-ART patient ratio had a significantly higher hazard for mortality (HR 1.62, 95 % CI 1.14–2.29).

As public sector facilities provide ART to most patients in India, this sample of over 6500 adult patients on ART in two major states is fairly representative of a high and a low HIV burden state in India. This analysis of data covering 5 years reveals that the overall survival probability of HIV patients on ART at 60 months was reasonable at 76–78 %, and that the survival rates were similar in the high- and low-HIV burden states, with the former having a longer-standing publicly funded ART program in place. The survival rates in our data at 60 months are similar to those reported previously from three centres in southern India [3, 10]. Consistent with the published literature, a significant proportion of deaths occurred within the first 6 months of ART initiation [3–5, 7, 11]. Poor survival of males on ART as compared with females in our population has been documented previously from India and elsewhere [3, 6, 12–16]. Factors such as poor treatment-seeking behaviour and non-adherence to treatment, and increased risk of LFU, have been reported previously as possible reasons for higher mortality among males on ART [17–20]. The median CD4 count at ART initiation was lower for males than females in both the states, and 58 % of LFU in APT and 65 % in RAJ were males in our study. This finding suggests that it would be useful for the HIV services to make males more aware of the benefits of timely initiation of ART for better survival outcomes.
Both a low CD4 count at ART initiation and co-existing TB have been previously reported to be associated with poorer survival outcomes among Indian patients [3, 4, 17, 21–23]. The overall median CD4 count at ART initiation in this study had increased significantly over the 5 years in both APT (154 to 193) and RAJ (132 to 174); however, these data were not for those who had died. As a lower CD4 count is associated with delayed ART initiation and with higher attrition while on treatment, [17, 22] the program could focus more on ensuring adherence and follow-up of the patients with lower CD4 counts to further improve the survival outcomes. With regard to TB, NACP-IV has clearly identified HIV-TB coordination including cross-referral, detection and treatment as one of the objectives in the revised strategy that aims to further the integration between HIV and TB services, [24] in particular to prevent LFU and to enable early initiation of ART [25–27]. Inclusion of data from a large number of facilities in this study allowed assessment of facility-level variables that influence survival on ART. These findings are relevant for program planning. The ART patient load was an important predictor of mortality in both states, albeit differently. In APT, facilities with a higher load had better survival outcomes, possibly because of a longer-established ART program that has likely acquired more experience leading to better outcomes. However, in RAJ, facilities with a higher ART patient load had poorer survival outcomes, as did facilities with a higher ratio of ART to pre-ART patients. This higher patient load in the less experienced ART program in RAJ may be resulting in difficulty in handling patients, which indicates the need for strengthening facilities in RAJ with high or increasing ART load through monitoring of their human resources, supplies and infrastructure.
In addition, even though both states had more pre-ART than ART patients across the facilities, the average ART to pre-ART patient ratio was relatively higher in RAJ. The reasonable survival outcomes in the two states, which were not significantly different from each other without and with adjusting for mortality in the LFU patients, are encouraging for the national HIV program. Over the study period, the LFU proportion remained fairly consistent in APT, and was similar to that reported previously [17, 22]. Factors associated with poor patient retention have been documented for APT, [17, 22] and more effective and robust tracking of LFU is needed to improve survival outcomes. The significantly lower proportion of LFU in RAJ was likely a result of a recent exercise carried out by the State AIDS Control Society to trace LFUs in order to bring them back to the treatment cycle. It is possible that some LFU patients may have initiated ART at another facility. However, it is not possible to track mobility of individual patients between the ART facilities in the program yet. To address this challenge, NACO is considering use of SMART cards with biometric identification for each patient, which could facilitate not only tracking of patients but also potentially improve adherence and access to treatment [28]. Our study limitations include missing data, non-usable information on treatment adherence in the white cards, and survival status of transferred out and LFU patients, as these were not readily available in the patient records. Despite these limitations, these large-sample data collected from routine patient records are generalizable, as all ART centres in both states were included. Data utilised for this study were obtained from paper forms/registers used in routine service conditions by the providers in the facilities, and thus are reflective of the ground reality.
In conclusion, these data have highlighted the benefits of investment in ART in India, which is associated with a reasonably good overall survival rate at 5 years, and have identified important determinants of survival on ART at the facility level, in addition to patient-level factors, that can inform improvement of the ART services in India. An important program-relevant message from these findings is that ART survival could potentially be improved further if facilities with higher load get specific attention in the initial phase in Indian states with a more recent ART program.

Abbreviations

AIDS: Acquired immune deficiency syndrome; APT: Andhra Pradesh and Telangana; HIV: Human immunodeficiency virus; HR: Hazard ratio; LAC: Link ART centre; LFU: Lost to follow-up; NACP: National AIDS Control Programme; NACO: National AIDS Control Organisation; UI: Uncertainty interval; RAJ: Rajasthan; TB: Tuberculosis

References

NACP-IV components. [http://www.naco.gov.in/nacp-iv-components]. Accessed 5 Oct 2016.

NACP-IV Programme Priorities and Thrust Areas. [http://www.naco.gov.in/programme-priorities-and-thrust-areas-0]. Accessed 5 Oct 2016.

Allam RR, Murhekar MV, Bhatnagar T, Uthappa CK, Chava N, Rewari BB, Venkatesh S, Mehendale S. Survival probability and predictors of mortality and retention in care among patients enrolled for first-line antiretroviral therapy, Andhra Pradesh, India, 2008-2011. Trans R Soc Trop Med Hyg. 2014;108(4):198–205.

Bachani D, Garg R, Rewari BB, Hegg L, Rajasekaran S, Deshpande A, Emmanuel KV, Chan P, Rao KS. Two-year treatment outcomes of patients enrolled in India's national first-line antiretroviral therapy programme. Natl Med J India. 2010;23(1):7–12.

Ghate M, Deshpande S, Tripathy S, Godbole S, Nene M, Thakar M, Risbud A, Bollinger R, Mehendale S. Mortality in HIV infected individuals in Pune, India. Indian J Med Res. 2011;133:414–20.

Rajeev A, Sharma A. Mortality and morbidity patterns among HIV patients with prognostic markers in a tertiary care hospital in southern India. Australas Med J. 2011;4(5):273–6.
Sharma SK, Dhooria S, Prasad KT, George N, Ranjan S, Gupta D, Sreenivas V, Kadhiravan T, Miglani S, Sinha S, et al. Outcomes of antiretroviral therapy in a northern Indian urban clinic. Bull World Health Organ. 2010;88(3):222–6.

NACO Annual Report 2014-15. [http://www.naco.gov.in/sites/default/files/annual_report%20_NACO_2014-15_0.pdf]. Accessed 5 Oct 2016.

Egger M, Spycher BD, Sidle J, Weigel R, Geng EH, Fox MP, MacPhail P, van Cutsem G, Messou E, Wood R, et al. Correcting mortality for loss to follow-up: a nomogram applied to antiretroviral treatment programmes in sub-Saharan Africa. PLoS Med. 2011;8(1):e1000390.

Ghate M, Tripathy S, Gangakhedkar R, Thakar M, Bhattacharya J, Choudhury I, Risbud A, Bembalkar S, Kadam D, Rewari BB, et al. Use of first line antiretroviral therapy from a free ART programme clinic in Pune, India - A preliminary report. Indian J Med Res. 2013;137(5):942–9.

Rajasekaran S, Jeyaseelan L, Raja K, Vijila S, Krithigaipriya KA, Kuralmozhi R. Increase in CD4 cell counts between 2 and 3.5 years after initiation of antiretroviral therapy and determinants of CD4 progression in India. J Postgrad Med. 2009;55(4):261–6.

Druyts E, Dybul M, Kanters S, Nachega J, Birungi J, Ford N, Thorlund K, Negin J, Lester R, Yaya S, et al. Male sex and the risk of mortality among individuals enrolled in antiretroviral therapy programs in Africa: a systematic review and meta-analysis. AIDS. 2013;27(3):417–25.

Rai S, Mahapatra B, Sircar S, Raj PY, Venkatesh S, Shaukat M, Rewari BB. Adherence to antiretroviral therapy and its effect on survival of HIV-infected individuals in Jharkhand, India. PLoS One. 2013;8(6):e66860.

Sieleunou I, Souleymanou M, Schönenberger AM, Menten J, Boelaert M. Determinants of survival in AIDS patients on antiretroviral therapy in a rural centre in the Far-North Province, Cameroon. Trop Med Int Health. 2009;14(1):36–43.

Stringer JS, Zulu I, Levy J, Stringer EM, Mwango A, Chi BH, Mtonga V, Reid S, Cantrell RA, Bulterys M, et al. Rapid scale-up of antiretroviral therapy at primary care sites in Zambia: feasibility and early outcomes. JAMA. 2006;296(7):782–93.

Mills EJ, Bakanda C, Birungi J, Chan K, Hogg RS, Ford N, Nachega JB, Cooper CL. Male gender predicts mortality in a large cohort of patients receiving antiretroviral therapy in Uganda. J Int AIDS Soc. 2011;14:52.

Alvarez-Uria G, Naik PK, Pakam R, Midde M. Factors associated with attrition, mortality, and loss to follow up after antiretroviral therapy initiation: data from an HIV cohort study in India. Glob Health Action. 2013;6:21682.

Nachega JB, Hislop M, Dowdy DW, Lo M, Omer SB, Regensberg L, Chaisson RE, Maartens G. Adherence to highly active antiretroviral therapy assessed by pharmacy claims predicts survival in HIV-infected South African adults. J Acquir Immune Defic Syndr. 2006;43(1):78–84.

Ochieng-Ooko V, Ochieng D, Sidle JE, Holdsworth M, Wools-Kaloustian K, Siika AM, Yiannoutsos CT, Owiti M, Kimaiyo S, Braitstein P. Influence of gender on loss to follow-up in a large HIV treatment programme in western Kenya. Bull World Health Organ. 2010;88(9):681–8.

Cornell M, Schomaker M, Garone DB, Giddy J, Hoffmann CJ, Lessells R, Maskew M, Prozesky H, Wood R, Johnson LF, et al. Gender differences in survival among adult patients starting antiretroviral therapy in South Africa: a multicentre cohort study. PLoS Med. 2012;9(9):e1001304.

Bhowmik A, Bhandari S, De R, Guha SK. Predictors of mortality among HIV-infected patients initiating antiretroviral therapy at a tertiary care hospital in eastern India. Asian Pac J Trop Med. 2012;5(12):986–90.

Alvarez-Uria G, Pakam R, Midde M, Naik PK. Predictors of delayed antiretroviral therapy initiation, mortality, and loss to followup in HIV infected patients eligible for HIV treatment: data from an HIV cohort study in India. Biomed Res Int. 2013;2013:849042.

Rupali P, Mannam S, Bella A, John L, Rajkumar S, Clarence P, Pulimood SA, Samuel P, Karthik R, Abraham OC, et al. Risk factors for mortality in a south Indian population on generic antiretroviral therapy. J Assoc Physicians India. 2012;60:11–4.

NACP IV Strategy document. [http://www.naco.gov.in/sites/default/files/Strategy_Document_NACP%20IV.pdf]. Accessed 5 Oct 2016.

Gupta AK, Singh GP, Goel S, Kaushik PB, Joshi BC, Chakraborty S. Efficacy of a new model for delivering integrated TB and HIV services for people living with HIV/AIDS in Delhi -- case for a paradigm shift in national HIV/TB cross-referral strategy. AIDS Care. 2014;26(2):137–41.

Vijay S, Kumar P, Chauhan LS, Rao SV, Vaidyanathan P. Treatment outcome and mortality at one and half year follow-up of HIV infected TB patients under TB control programme in a district of South India. PLoS One. 2011;6(7):e21008.

Vijay S, Swaminathan S, Vaidyanathan P, Thomas A, Chauhan LS, Kumar P, Chiddarwar S, Thomas B, Dewan PK. Feasibility of provider-initiated HIV testing and counselling of tuberculosis patients under the TB control programme in two districts of South India. PLoS One. 2009;4(11):e7899.

National AIDS Control Programme. Journey of ART programme in India: story of a decade. New Delhi: Ministry of Health and Family Welfare, Government of India; 2014.

Acknowledgements

The authors would like to acknowledge all individuals who contributed to this study, including the State AIDS Control Societies and facility staff who gave their time for facilitating access to white cards and to complete survey components; and the field team members who conducted data collection. Funding for this work was provided by the Bill & Melinda Gates Foundation.

Availability of data and material

Data are available with the corresponding author, and can be made available on request.

Authors' contributions

RD, LD, BBR, GAK, ST and EG were responsible for the study design. SGPK and SPR were responsible for overseeing data collection. GAK and VVS were responsible for data management. RD, GAK, SGPK, VVS and ST were involved with data analysis. RD and GAK drafted the original manuscript.
LD, BBR, HD and EG gave significant inputs for analysis and interpretation. All authors had full access to the data, and have read and approved the final manuscript.

Competing interests

BBR and ST are affiliated with the Department of AIDS Control, Ministry of Health and Family Welfare, Government of India, which oversees the ART Programme in India. LD is on the Editorial Board for the journal BMC Medicine. Ethics approval for this study was obtained from the Ethics Committees of the Public Health Foundation of India, New Delhi and the University of Washington, Seattle, USA. The study was also approved by the Indian Council for Medical Research, the Health Ministry Steering Committee of the Government of India, and by the National AIDS Control Organization of India. As the data collection involved retrospective review of patient records with no identifiable information to be collected, consent to participate from patients was exempted by the Ethics Committee, as this research was designed to study the benefit of a public service programme.

Affiliations

Public Health Foundation of India, New Delhi, India: Rakhi Dandona, G. Anil Kumar, Sukarma Tanwar, S. G. Prem Kumar, Venkata S. Vishnumolakala & Lalit Dandona

Department of AIDS Control, Ministry of Health and Family Welfare, Government of India, New Delhi, India: Bharat B. Rewari & Sukarma Tanwar

World Health Organization Country Office for India, New Delhi, India

Institute for Health Metrics and Evaluation, University of Washington, Seattle, Washington, USA: Herbert C. Duber, Emmanuela Gakidou & Lalit Dandona

Correspondence to Rakhi Dandona.

Dandona, R., Rewari, B.B., Kumar, G.A. et al. Survival outcomes for first-line antiretroviral therapy in India's ART program. BMC Infect Dis 16, 555 (2016). https://doi.org/10.1186/s12879-016-1887-2
dStruct: identifying differentially reactive regions from RNA structurome profiling data

Krishna Choudhary1, Yu-Hsuan Lai2, Elizabeth J. Tran2,3 & Sharon Aviran1 (ORCID: orcid.org/0000-0003-1872-9820)

Genome Biology volume 20, Article number: 40 (2019)

RNA biology has been revolutionized by recent developments of diverse high-throughput technologies for transcriptome-wide profiling of molecular RNA structures. RNA structurome profiling data can be used to identify differentially structured regions between groups of samples. Existing methods are limited in scope to specific technologies and/or do not account for biological variation. Here, we present dStruct, which is the first broadly applicable method for differential analysis accounting for biological variation in structurome profiling data. dStruct is compatible with diverse profiling technologies, is validated with experimental data and simulations, and outperforms existing methods. RNA molecules adopt diverse and intricate structures, which confer on them the capacity to perform key roles in myriad cellular processes [1, 2]. Structures, and hence functions, of RNAs are modulated by a number of factors, such as solution environment (in vivo or in vitro), presence of RNA-binding proteins or ligands, mutation in the RNA sequence, and temperature [3]. The amalgamation of classic chemical probing methods, which probe RNA structure at nucleotide resolution, with next-generation sequencing has ushered in a new era of RNA structuromics [2, 3]. In fact, recent developments have led to a diversity of structure probing or structure profiling (SP) technologies [4, 5]. These technologies have made it possible to perform comparative analysis of structures of select RNAs or whole RNA structuromes simultaneously [6–21]. SP technologies result in nucleotide-level scores, called reactivities, that summarize one or more aspects of local structure (e.g., steric constraint due to base pairing interaction).
To this end, they utilize probing reagents that react with RNA nucleotides in a structure-sensitive manner. The degree of reaction at a nucleotide is a function of local stereochemistry. A number of reagents (e.g., SHAPE, DMS, nucleases) exist, which react with sensitivity to different aspects of local stereochemistry [22–24]. Moreover, depending on the reagent, the reaction results either in chemical modification of the sugar/base moiety or a cleavage of the RNA strand. Its degree is captured in a cDNA library through primer extension by reverse transcriptase, which either stops at modified nucleotides or proceeds but introduces a mutation [3]. In addition, to assess the background noise, most SP technologies use samples that are not treated with reagent [19, 25–28]. Furthermore, there are diverse library preparation methods. For example, some methods enrich for modified transcript copies [7, 29]. Indeed, SP technologies differ in their choices of probing reagents and key library preparation steps. Yet, each technology has its advantages, which might make it the preferred choice for certain studies. Irrespective of the SP technology and end goals of a study, cDNA libraries are sequenced and data is processed to obtain reactivities. Often, this involves combining information from the treated and untreated control samples [19, 30–32]. The sequence of nucleotide reactivities for a transcript is called its reactivity profile. It is noteworthy that reactivity profiles are estimated using approaches customized to the SP technology used for a study [6]. Hence, different approaches yield reactivities with different statistical properties [33, 34]. Nonetheless, amid the diversity of protocols and reactivity estimation methods, identifying differentially reactive regions (DRRs) is a common step in the majority of SP studies [6]. In this article, we focus on identifying DRRs from SP data. 
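As a concrete illustration of how treated and untreated samples can be combined into reactivities, here is a generic toy scheme (background-subtracted stop rates followed by 2–8% normalization). It is a sketch of one common style of estimator, not the exact procedure of any particular SP pipeline; all names and thresholds below are our own choices.

```python
import numpy as np

def reactivities(treated_stops, treated_cov, untreated_stops, untreated_cov):
    """Toy reactivity estimate: background-subtracted stop rates,
    scaled by 2-8% normalization. A generic sketch, not the exact
    scheme used by any particular SP pipeline."""
    t = np.asarray(treated_stops, dtype=float) / np.asarray(treated_cov, dtype=float)
    u = np.asarray(untreated_stops, dtype=float) / np.asarray(untreated_cov, dtype=float)
    raw = np.clip(t - u, 0.0, None)          # negative rates are set to 0
    # 2-8% normalization: drop the top 2% of values, average the next 8%,
    # and divide all values by that average
    srt = np.sort(raw)[::-1]
    n = len(srt)
    top = srt[int(0.02 * n):max(int(0.10 * n), 1)]
    norm = top.mean() if top.size and top.mean() > 0 else 1.0
    return raw / norm
```

On short toy inputs, the normalization simply rescales so that the largest background-subtracted rate maps to roughly 1, while background-dominated positions map to 0.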
Several methods have been developed for differential analysis of SP data [8, 9, 20, 21, 35]. They utilize two common principles. First, they premise that differential structure manifests at a regional level and not at individual isolated nucleotides. Second, they recognize that SP data manifests substantial noise at the nucleotide level. Despite these shared principles, these methods differ in how they account for noise in the data. deltaSHAPE [8] and StrucDiff [9] address nucleotide-level noise by smoothing reactivity or count profiles. Subsequently, they find DRRs from smoothed profiles. Note that we call the method developed by Wan et al. StrucDiff after the name of the score it employs. In contrast, the method included in the PARCEL pipeline assesses the significance of changes in counts at the nucleotide level first. It considers nucleotides as "genes" in an RNA-seq model. To this end, it uses edgeR [36] to account for nucleotide-level noise and compute p values for changes in counts. Next, it chains together nucleotides with significant changes as DRRs by performing a second statistical test [21] (henceforth, we refer to this method as PARCEL). Similarly, Mizrahi et al. account for nucleotide-level noise with a two-step "regression and spatial analysis" approach [20] (for convenience, we acronymize this method as RASA). Specifically, to evaluate the changes in reactivities at the nucleotide level, they use generalized mixed model extension of logistic regression with counts and coverages as inputs. Next, they identify the regions with clusters of differentially reactive nucleotides using a permutation test. Another method, classSNitch, utilizes a machine learning classifier that learns from training data how to distinguish between nucleotide-level noise and DRRs [35]. Despite notable developments in existing methods, several key challenges remain unaddressed. 
First, it is known that different regions in RNAs manifest different levels of variation among biological replicates (henceforth, called biological variation) [30, 37–40]. Hence, inherently variable regions should be distinguished from DRRs. Indeed, DRRs are expected to differ consistently between the two groups of samples distinguished by a structure-altering factor. Furthermore, between-group variation in DRRs should significantly exceed the variation between samples of the same group. However, deltaSHAPE, StrucDiff, and classSNitch do not account for biological variation. While PARCEL and RASA account for biological variation, they are limited in scope to specific technologies. One issue that underlies this limitation is that they do not utilize untreated samples. Yet, untreated samples are an integral component of most SP technologies [6, 7, 19, 25–28]. Importantly, combining information from both treated and untreated samples has been shown to provide accurate assessment of reactivities [30–32]. Furthermore, broadly applicable approaches for estimating reactivities combine information from the two kinds of samples and yield analog reactivity values [30, 31]. As PARCEL and RASA are based on counts, they are not readily applicable to analog reactivity readouts. Besides this, PARCEL does not account for coverage variations within a transcript. Second, in many studies, candidate regions, which might be DRRs, are not known a priori [8, 11, 18]. Hence, they need to be constructed de novo. However, StrucDiff and classSNitch require predefined regions, which are typically obtained from a collateral study. For example, a collateral study might indicate sites with single-nucleotide variants between two cell lines, and candidate regions might be constructed as short stretches of nucleotides flanking each variant [9, 17]. Third, searching for DRRs in transcriptome-wide data might involve testing multiple hypotheses.
While each hypothesis considers the same question about the presence/absence of a differential signal, a separate test might be conducted for each candidate region of each candidate RNA. This leads to the so-called multiple testing problem [41]. Since no hypothesis test is perfect, there is a risk of a false positive from each test. When we test numerous hypotheses on a dataset simultaneously, the associated risk of false-positive results grows. Hence, it is recommended that p values (or alternative summaries of statistical significance) assessed from each test be adjusted to control for the risk of false discoveries. However, deltaSHAPE, classSNitch, and RASA do not perform multiple testing correction. Fourth, if candidate regions were known a priori, restricting search of DRRs to the predefined candidates before statistical testing might improve power in the context of multiple testing [41]. We call this scenario guided discovery. However, deltaSHAPE, PARCEL, and RASA allow for comparison with a priori knowledge only after de novo discovery of DRRs. Fifth, of significance in SP data is the "pattern" of reactivities in a region [33, 35, 39, 42–45]. Specifically, in a DRR, while some nucleotides could become more reactive, others could become less reactive, thereby keeping the average level insignificantly altered while altering the reactivity pattern in that region [10, 46]. For example, this could be indicative of a hairpin transitioning to a G-quadruplex [47]. Indeed, in a study assessing how experts classify the differences in reactivity profiles by visual inspection, reactivity pattern was found to be key to human decision [35]. However, none of the methods except for classSNitch explicitly account for reactivity patterns. While classSNitch accounts for reactivity patterns, it utilizes a classifier trained with SHAPE data only. Hence, it is limited in scope to SHAPE data.
Finally, the need to account for reactivity patterns limits the applicability of differential analysis methods commonly used in other genomic disciplines (e.g., differential methylation analysis from bisulfite sequencing data). These methods generally seek regional changes in the signal's magnitude and not the signal's pattern [48, 49]. Yet, it has been demonstrated that specialized methods accounting for signal patterns in ChIP-seq and bisulfite sequencing data can improve power to detect differential regions [50, 51]. To address the aforementioned limitations of existing methods, we present dStruct, which identifies DRRs from SP data within a single RNA or a transcriptome. Central to dStruct is a dissimilarity measure, called d score. dStruct starts by assessing within-group and between-group variations in reactivities, in terms of nucleotide d scores (Figs. 1 and 2a, b). Due to the effect of structure-altering factors in DRRs, the between-group variation is expected to be higher than the within-group variation (Fig. 1b). Hence, next, dStruct screens for regions with evidence of increased d scores between groups. This step is skipped if a predefined set of candidate regions is available. Finally, dStruct compares the within-group nucleotide d scores with the between-group scores using Wilcoxon signed-rank test and controls the FDR using the Benjamini-Hochberg procedure [52, 53]. dStruct is the first differential analysis method that both directly accounts for biological variation and is applicable to diverse SP protocols. We validated dStruct with data from different SP technologies, namely, SHAPE-Seq, Structure-Seq, and PARS, as well as with simulations. Test datasets vary in size from single RNAs to transcriptomes and feature samples from bacteria, virus, fungi, and humans. In addition, the structure-altering factors include protein binding, ligand binding, and single-nucleotide variants. 
Besides utilizing real data, we developed a novel approach to simulate biological replicates of SP data. In particular, existing approaches do not provide a way to generate correlated biological replicates [54, 55]. We addressed this gap to allow for proper assessment of dStruct's performance. dStruct enables guided as well as de novo discovery. In all tests, we demonstrate that for a properly controlled FDR, dStruct has a higher power than existing methods. Besides validations, we discuss in detail the limitations of dStruct as well as of existing approaches. The d score quantifies the dissimilarity between reactivities. a Four hypothetical reactivity profiles, labeled A1 and A2 (group A) and B1 and B2 (group B). Red lines highlight the reactivity patterns. Triangles mark a nucleotide that maintains identical reactivities within groups. Asterisks mark a nucleotide that flips its reactivity between groups. b Comparison of samples from the same group (e.g., A1, A2) results in d scores lower than those from between-group comparisons (e.g., A1, B1). A triangle highlights the low d score of a nucleotide with high within-group agreement. An asterisk highlights a nucleotide that displays high within-group agreement and therefore results in a low within-group d score. It also displays poor between-group agreement, which results in a high between-group d score. c The d score monotonically increases with the absolute value of coefficient of variation dStruct identifies differentially reactive regions. a Users input samples of reactivity profiles, some from group A and some from group B. b In the first step, dStruct quantifies the within-group and between-group variations in terms of d scores. c In the second step, dStruct identifies regions where the between-group variation appears to be greater than the within-group variation. These are highlighted by purple background. This step is skipped if users provide a list of candidate regions. 
d Reactivity profiles for one of the candidate regions. e In the third step, dStruct compares the $d_{\text{within}}$ and $d_{\text{between}}$ profiles using a Wilcoxon signed-rank test. f The results are output as a list of region identifiers, such as the start and end locations of the candidates tested, and the p values and q values for each region.

Dissimilarity measure

We define a dissimilarity measure for reactivities, which we call a d score. Given a transcript of length $n$ and a set of $m$ reactivity profiles for the transcript, let $r_{ij}$ represent the reactivity of nucleotide $i$ in profile $j$. If $\sigma_i$ and $\mu_i$ represent the sample standard deviation and mean of reactivities for nucleotide $i$, respectively, then the d score of nucleotide $i$ is defined as: $$d_{i} = \frac{2}{\pi} \arctan \left(\frac{\sigma_{i}}{\left| \mu_{i} \right|} \right).$$ For $m=2$, the above expression simplifies to: $$d_{i} = \frac{2}{\pi} \arctan \left(\sqrt{2}\left|\frac{r_{i1} - r_{i2}}{r_{i1} + r_{i2}}\right| \right).$$ Taking the ratio of $\sigma_i$ and $\mu_i$ accounts for the fact that higher reactivities tend to manifest higher fluctuations [56]. However, the ratio by itself is very sensitive to small changes in $\mu_i$, especially when $\mu_i$ is small. For example, PARS reactivities can be both positive and negative (Fig. 1a). This can result in $\mu_i$ being close to 0, while $\sigma_i$ remains high (e.g., nucleotide highlighted with asterisks in Fig. 1a). Importantly, $\sigma_i/|\mu_i|$ increases very fast and approaches infinity as $\mu_i$ decreases and approaches zero (Additional file 1: Figure S1). However, the dStruct pipeline involves taking means of $d_i$, as we describe below. Since the mean is not robust to outliers, extremely high values of $\sigma_i/|\mu_i|$ could pose problems in the dStruct pipeline. Hence, we reduce the sensitivity of $\sigma_i/|\mu_i|$ to changes in $\mu_i$ by transforming it with the arctan function.
While $\sigma_i/|\mu_i|$ is unbounded, its arctan-transformed value is bounded between 0 and $\pi/2$. To restrict its range to [0, 1], we scale it by $2/\pi$ (Fig. 1b). $d_i$ is 0 when the same reactivity is observed for nucleotide $i$ in all samples (e.g., nucleotide highlighted with triangles in Fig. 1a, b). It monotonically increases as $r_{ij}$ becomes more dispersed (Fig. 1c; see the "Methods" section for details).

Differentially reactive regions

Equipped with the d score as a dissimilarity measure, we have developed a method to identify DRRs. Our method has three steps (Fig. 2). First, we assess the within-group and between-group variations in terms of d scores. Next, we distinguish between de novo and guided discovery situations. To discover DRRs de novo, we need to identify regions that are potential candidates for DRRs. This is done in the second step by screening for regions where between-group variation appears to be higher, on average, than within-group variation. Note that this step is skipped for guided discovery, as candidate regions are predefined from a collateral study. In the third step, to assess the statistical significance at each candidate region, the variation between groups in that region is compared to the variation within groups. If the between-group variation is found to be significantly higher, the region is reported as a DRR. In what follows, we briefly describe each step. Given $m_A$ and $m_B$ samples from groups A and B, respectively, let $m = \max(m_A, m_B)$ (Fig. 2a). We construct all possible subsets of the $m_A + m_B$ samples, such that each subset has $m$ samples. Of these subsets, at maximum, two will be homogeneous, i.e., they will comprise samples from A only or B only. If $m_A \neq m_B$, there will be only one homogeneous subset, with samples from group A if $m_A > m_B$ or with samples from group B if $m_A < m_B$. All other subsets will be heterogeneous. For each subset, we assess the d score for each nucleotide as described above under "Dissimilarity measure".
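The d score and the subset-averaging scheme above can be sketched in a few lines. The function names and the convention for a zero mean (mapping an infinite ratio to a d score of 1) are our own choices, not taken from the dStruct package:

```python
import itertools
import math

def d_score(reactivities):
    """d score for one nucleotide over a subset of samples:
    (2/pi) * arctan(sigma / |mu|), with sample standard deviation sigma."""
    m = len(reactivities)
    mu = sum(reactivities) / m
    var = sum((r - mu) ** 2 for r in reactivities) / (m - 1)
    sigma = math.sqrt(var)
    if mu == 0:  # sigma/|mu| -> infinity; arctan maps it to pi/2 (our convention)
        return 1.0 if sigma > 0 else 0.0
    return (2 / math.pi) * math.atan(sigma / abs(mu))

def d_profiles(group_a, group_b):
    """Nucleotide-wise d_within and d_between, averaged over homogeneous
    and heterogeneous size-m subsets of the pooled samples (m = max group size)."""
    m = max(len(group_a), len(group_b))
    pooled = [(tuple(s), "A") for s in group_a] + [(tuple(s), "B") for s in group_b]
    n = len(group_a[0])
    homo, hetero = [], []
    for subset in itertools.combinations(pooled, m):
        profiles = [s for s, _ in subset]
        labels = {lab for _, lab in subset}
        scores = [d_score([p[i] for p in profiles]) for i in range(n)]
        (homo if len(labels) == 1 else hetero).append(scores)
    col_mean = lambda rows: [sum(col) / len(rows) for col in zip(*rows)]
    return col_mean(homo), col_mean(hetero)
```

For two identical replicates per group, d_within is 0 at every nucleotide, while d_between reflects the between-group differences; for two samples, d_score reduces to the m = 2 expression above.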
We use the nucleotide-wise average of d scores across homogeneous subsets as the measure of within-group variation, denoted as $d_{\text{within}}$ (Fig. 2b). Similarly, the average of d scores across heterogeneous subsets is used as the measure of between-group variation, denoted as $d_{\text{between}}$. The second step is performed only for de novo discovery, as it constructs candidates for DRRs. In the absence of prior knowledge of where DRRs start and end, we rely on the evidence in the data to construct the so-called data-driven regions [49]. In our case, the evidence is in the difference between $d_{\text{between}}$ and $d_{\text{within}}$. Hence, we define $\Delta d = d_{\text{between}} - d_{\text{within}}$. If $\Delta d$ is positive for all nucleotides in a contiguous region of length greater than or equal to a user-specified length, the region is a potential DRR candidate (Fig. 2b, c). However, DRRs could have altered reactivity patterns without necessarily having altered reactivities at all nucleotides. Indeed, in DRRs, some nucleotides may have $\Delta d \leq 0$. Hence, we smooth the $\Delta d$ profile prior to screening for candidates (see the "Methods" section). Then, we search for regions that have a positive value of smoothed $\Delta d$ for all nucleotides (highlighted in purple in Fig. 2c). These regions are deemed potential candidates for DRRs. Note that the smoothed $\Delta d$ profile is used only to construct candidate regions. Inputs to the final step are unsmoothed profiles obtained in Step 1. The significance of the differential reactivity pattern in a candidate region (see Fig. 2d for an example) is determined by comparing $d_{\text{within}}$ and $d_{\text{between}}$ for the region (Fig. 2e). Specifically, we perform a Wilcoxon signed-rank test to test the null hypothesis against the one-sided alternative hypothesis that the population mean of $d_{\text{between}} - d_{\text{within}}$ is positive [52]. For the set of screened regions from all transcripts, the FDR is controlled using the Benjamini-Hochberg procedure to obtain q values (i.e., FDR-adjusted p values) [53].
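Steps 2 and 3 can be sketched as follows. The centered moving-average smoother, the run-detection details, and the Benjamini-Hochberg helper are our simplifications; the actual dStruct implementation may differ:

```python
from scipy.stats import wilcoxon

def smooth(x, w):
    """Centered moving average with window w (one simple smoothing choice)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def candidate_regions(d_within, d_between, min_len):
    """Maximal runs where smoothed delta-d is positive, of length >= min_len."""
    delta = smooth([b - a for a, b in zip(d_within, d_between)], min_len)
    regions, start = [], None
    for i, v in enumerate(delta + [0.0]):  # trailing sentinel closes the last run
        if v > 0 and start is None:
            start = i
        elif v <= 0 and start is not None:
            if i - start >= min_len:
                regions.append((start, i - 1))
            start = None
    return regions

def region_pvalue(d_within, d_between, start, end):
    """One-sided Wilcoxon signed-rank test on the unsmoothed d scores."""
    return wilcoxon(d_between[start:end + 1], d_within[start:end + 1],
                    alternative="greater").pvalue

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p values (q values)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q, prev = [0.0] * n, 1.0
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        prev = min(prev, pvals[i] * n / (rank + 1))
        q[i] = prev
    return q
```

A region would then be reported as a DRR if its q value falls below the target FDR (and, in dStruct, if it also passes the minimum quality threshold described next).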
Finally, users obtain a list of regions with their corresponding p values and q values (Fig. 2f). At this point, it is noteworthy that the final step of the statistical testing is performed only for regions that meet a criterion for minimum quality, i.e., if their average $d_{\text{within}}$ is less than a threshold (see Additional file 1: Figure S2). Henceforth, we call this criterion the minimum quality threshold. Keeping the average $d_{\text{within}}$ below this threshold ensures that samples in the same group have similar profiles in the region of interest; see the "Methods" section for additional details.

Validation with small datasets

We tested dStruct on three small datasets for which prior knowledge of DRRs is available from independent sources. In addition, we compared its performance to that of RASA, PARCEL, and deltaSHAPE. Overall, we found that dStruct discovers DRRs de novo while having a minimal false-positive rate. Note that we defer comparison with StrucDiff until the section on large datasets, as the small datasets considered below do not have predefined candidates for DRRs. We require SHAPE data with both predefined candidates and replicate samples to compare dStruct and classSNitch. This is because classSNitch is currently trained for guided discovery in SHAPE data only and dStruct requires replicates. Since data satisfying the requirements of classSNitch and dStruct simultaneously is not available, we have excluded classSNitch from performance comparisons.

dStruct accurately rejects transcripts with no DRRs

We obtained three replicate samples of four Saccharomyces cerevisiae rRNAs (5S, 5.8S, 18S, and 25S) from in vivo DMS probing using the Structure-Seq protocol (see the "Methods" section). Since we probed the samples under identical conditions, there should not be any DRRs between replicated profiles of the same rRNA.
To assess the specificities of dStruct and the other methods, for each RNA, we performed null comparisons of each possible pair of samples (labeled group A) with the single remaining sample (labeled group B). Therefore, we created 12 test cases (3 for each of the 4 rRNAs), in which we searched for DRRs. We tested deltaSHAPE, RASA, and PARCEL with default search parameters. deltaSHAPE and RASA use windows of 5 nt and 50 nt by default, respectively. We tested dStruct for both window lengths. PARCEL does not require predefined window lengths. Furthermore, deltaSHAPE accepts only one sample per group. Hence, for group A, we input reactivities assessed from pooled counts to deltaSHAPE (i.e., we tallied counts and coverages across all samples). We summarized the performances as follows. We tallied the number of DRRs reported by each method. Out of the 12 test cases, dStruct reported 3 DRRs when searching over 5 nt windows (Fig. 3a). Its performance was similar or better for longer windows (data not shown). For example, it reported no DRRs when searching over 50-nt windows (data not shown). RASA performed comparably to dStruct, reporting 4 DRRs. In contrast, PARCEL and deltaSHAPE reported 61 and 97 DRRs, respectively. dStruct had a low false-positive rate in null comparisons. In a comparison of biological replicates of rRNAs probed in vivo under identical conditions, a dStruct and RASA reported 3 and 4 false positives, respectively. In contrast, PARCEL and deltaSHAPE reported 61 and 97 false positives, respectively. b dStruct had lower nucleotide-level false-positive rates than RASA, PARCEL, and deltaSHAPE We calculated the false-positive rates at the nucleotide level as the fraction of nucleotides incorrectly reported as positives for a transcript. For dStruct, the rate was 0% in 9 cases and 0.2%, 0.3%, and 2% in the remaining cases whereas the other methods displayed higher rates (Fig. 3b). 
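The leave-one-out construction of null test cases described above can be written out explicitly. This is a small illustrative helper of our own, not part of any of the tools compared:

```python
from itertools import combinations

def null_comparisons(replicates):
    """All (pair, remaining single) splits of a set of replicate samples."""
    splits = []
    for pair in combinations(replicates, 2):
        single = [s for s in replicates if s not in pair]
        splits.append((list(pair), single))
    return splits

# Three replicates give 3 splits; over 4 rRNAs this yields the 12 null test cases.
```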
dStruct's superior performance could be attributed to the fact that it overcomes limitations of the existing methods. In particular, RASA and PARCEL do not account for information obtained from untreated control samples. Structure-Seq, however, does integrate it into the resulting reactivities [25]. PARCEL also does not account for coverage variations within a transcript, which is known to be a significant issue [57]. Additionally, dStruct controls for false discoveries by adjusting the p values for multiple tests whereas deltaSHAPE and RASA do not. For detailed overview of deltaSHAPE, RASA, PARCEL, and their limitations, see Additional file 1: Sections S1-S3. dStruct identifies DRRs from ligand-mediated structure alteration Next, we considered cotranscriptional SHAPE-Seq data for the Bacillus cereus crcB fluoride riboswitch (100 nt in length), probed in vitro in the absence and presence of fluoride ions [12]. It featured four samples for each group. The presence of fluoride prevents completion of a terminator hairpin by stabilizing a pseudoknot (Fig. 4a, b). Such a mechanism allows fluoride-mediated transcription control. Between the conditions with and without fluoride, nucleotides 12–17, 38, 40, and 67–74 have altered base pairing states [10, 58, 59]. In addition, Watters et al. observed distinct reactivity changes at nucleotides 22–27 from visual examination of an independent dataset [10]. These nucleotides join the P1 and P3 helices but do not have altered base pairing states between conditions. However, these changes were observed consistently over a range of intermediate lengths that were probed cotranscriptionally. Hence, Watters et al. inferred that they were related to fluoride-mediated stabilization of the pseudoknot. Furthermore, we noted a consistent increase in the reactivity at nucleotide 48 in the presence of fluoride, consistent with prior observations by Watters et al. [10]. 
Given the reproducibility of this change, we regarded nucleotide 48 as differentially reactive. Taken together, we considered nucleotides 12–17, 22–27, 38–40, 48, and 67–74 as our ground truth of DRRs (highlighted in blue on top of each sample in Fig. 4c). Note that in the absence of fluoride, nucleotides 42–47 pair with nucleotides 68–74 and are part of a hairpin. In the presence of fluoride, nucleotides 42–47 pair with nucleotides 12–17 to form a pseudoknot. However, ligand binding is not expected to change the reactivities at nucleotides 42–47 because this region is paired in both the liganded and unliganded states. Hence, we excluded it from the ground truth. dStruct identified DRRs from a ligand-mediated structure alteration. Fluoride ions bind the crcB fluoride riboswitch and alter its structure. a Secondary structure in the absence of fluoride ions. b Secondary structure in the presence of 10 mM fluoride ions. The red curves highlight the pseudoknot between nucleotides 12–17 and 42–47. The purple curves highlight long-range interactions between the nucleotide pairs (10, 38) and (40, 48). c Eight samples of reactivity profiles, four from group A (A1, A2, A3, and A4) with 0 mM fluoride ions and four from group B (B1, B2, B3, and B4) with 10 mM fluoride ions. Solid blue lines mark DRRs that are considered the ground truth. Hollow black rectangles mark the DRRs called by deltaSHAPE. A red background marks the DRRs called by dStruct. A green line marks the DRR called by PARCEL. Note that RASA did not report any DRRs.

We searched for DRRs using dStruct, deltaSHAPE, PARCEL, and RASA. For comparability, we specified the same window length of 5 nt to dStruct, deltaSHAPE, and RASA. We chose 5 nt because it is the default in deltaSHAPE. The default used by RASA is 50 nt, which is too long for a short transcript of length 100 nt. Note that PARCEL does not require a window length.
dStruct reported a DRR from 3–39, encompassing regions 12–17 and 22–27 and overlapping region 38–40 (region highlighted with red background in Fig. 4c). However, this DRR joined together three separate regions and extended to additional nucleotides towards the 5′ end. This is due to dStruct's propensity to screen for the longest possible contiguous regions. While dStruct did not report any false positives, it did not recognize the DRR within 67–74. This region was screened as a candidate but had a p value of 0.071 and q value of 0.106, both above our desired significance level of 0.05. This is because in this region, the within-group profiles were noisy and not consistently altered between the groups. For example, the reactivity patterns for this region look identical between samples A4 and B4 (Fig. 4c). Additionally, dStruct could not identify the differential reactivity at the isolated nucleotide 48. Indeed, one limitation of dStruct is that it might not report changes at isolated nucleotides even if such changes were real signal. This is due to the fact that differential signals at isolated nucleotides get diluted when scanning over windows. For example, nucleotide 48 is flanked by nucleotides that do not have differential signals. In the "Discussion" section, we propose ways to mitigate this limitation. Similarly, deltaSHAPE correctly identified DRRs from 11–16 and 21–25, and it also correctly identified 47–49 and 71–73 (marked by black rectangles on top of each sample in Fig. 4c). However, it incorrectly reported region 2–4 and failed to identify region 38–40. PARCEL reported a single DRR that stretched from nucleotides 4–75 (marked by a green line at the bottom of each sample). This DRR correctly encompassed all the real DRRs but also included regions that separate them. RASA did not report any DRRs when searching over 5 nt. It is noteworthy that RASA did not report any DRRs when searching over its default window length of 50 nt either.
Our results highlight that a key difference between the outputs of dStruct, deltaSHAPE, and PARCEL lies in the lengths of DRRs. dStruct identifies contiguous stretches of nucleotides that manifest reactivity changes. While dStruct might join together nearby DRRs, it does so only if they are separated by no more than twice the specified search length. For example, the DRRs from 38–40 and 67–74 are separated by 27 nt with only one differentially reactive nucleotide. This prevents dStruct from extending its reported DRR (3–39 nt) beyond nucleotide 39. In contrast, deltaSHAPE was developed to identify compact regions that might be DRRs. Hence, it yields several short regions as DRRs. Finally, PARCEL was developed to identify the longest possible regions that have at least one nucleotide with significant changes. Thus, it includes long stretches of nucleotides without a differential signal in the reported DRRs. For example, it reported the entire span from the most 5′ real DRR to the most 3′ real DRR and included everything in between.

dStruct identifies sites of RNA-protein interactions

We tested dStruct on another SHAPE-Seq dataset, which structurally characterizes the HIV Rev-response element (RRE), a part of a viral RNA intron [11]. RRE binds multiple copies of Rev protein to form a complex that facilitates export of unspliced viral transcripts from the nucleus to the cytoplasm during the late stage of HIV infection. Regions of Rev-RRE interactions have been identified using independent methods and provided us with a ground truth for comparisons (Additional file 1: Figure S3) [11, 60–62]. We obtained reactivity profiles for six samples: three replicates each in the presence and absence of Rev. However, counts and coverage information were not available. When searching for regions of length 5 nt or more, dStruct identified 10 DRRs that overlapped 6 of the 7 regions known to bind Rev. However, two of the reported DRRs were false positives.
As RASA, PARCEL, and deltaSHAPE require coverage information, we could not apply them to this dataset. At this point, it is worth noting that RASA and PARCEL are based on counts and do not accept reactivities directly. Hence, they are not compatible with available datasets that contain only reactivities or with computational methods that output reactivities [30, 31].

Validation with large datasets

We tested the methods on two large datasets, one with simulated DRRs and another with DRRs due to known single-nucleotide variants. In all tests, dStruct outperformed the existing methods.

dStruct identifies simulated DRRs with properly controlled FDR

We used simulations to assess the methods' capability to discover DRRs de novo from transcriptome-wide SP data. To this end, we obtained three replicate samples of the S. cerevisiae mRNA structurome using in vivo DMS probing (see the "Methods" section). Next, to mimic realistic trends in within-group variation, coverages, and transcript lengths, we introduced simulated DRRs into these samples. One of the samples was randomly labeled as group A, and the other two were labeled as group B. To start with, we randomly selected 1000 regions in the transcriptome for DRRs. The length of each region was chosen in the range of 50–75 nt, which is the usual range of lengths for search of structured regions [34]. Note that while we simulated the structural profiles for groups A and B over this range, we allowed the simulated DRRs to be shorter, as described next. RNAs often adopt multiple structural conformations, and reactivities summarize measurements over the entire structure ensembles. Hence, we obtained reactivities for selected regions as the ensemble-weighted average of profiles simulated for structures in an ensemble. For each of the selected regions, we sampled up to 1000 unique secondary structures using the ViennaRNA package [63].
Each of the unique structures was assigned an ensemble weight that reflected its proportion in the structure ensemble. The ensemble weights were randomly sampled from arbitrarily chosen probability density functions (see the "Methods" section). For each group, we selected up to five structures that were assigned high weights and hence dominated the overall reactivity profile for that group. The reactivity profiles differed between groups due to the disjoint selection of dominant structures. We introduced within-group variation by adding noise to ensemble weights. In addition, we controlled the between-group variation by controlling the weight of the minimum free energy (MFE) structure in each group. For example, increasing the weight of the MFE structure in both groups increased the similarity of their structure ensembles, thereby reducing the between-group variation. For each structure, we generated a DMS reactivity profile by sampling reactivities using probability density functions for reactivities of paired and unpaired nucleotides [54]. The probability density functions were obtained by fitting a Gaussian mixture model to our data using patteRNA [33, 64]. The final reactivity profile for each region was obtained as the ensemble-weighted average of profiles for individual structures (see the "Methods" section for details). Overall, we simulated a range of within-group and between-group variations in reactivities, as reflected in the resulting within-group and between-group Pearson correlation coefficients (Additional file 1: Figure S4A). Since all simulated structures for a region represented folding of the same short sequence, there were stretches within them that did not have altered base pairing states between the groups. Indeed, the pairing states were altered for stretches shorter than the complete chosen regions (median length 11 nt; Additional file 1: Figure S4B). Therefore, we ran dStruct, RASA, and deltaSHAPE with a search length of 11 nt. 
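The ensemble-weighted simulation described above can be condensed into a short sketch. This is purely illustrative: the pairing states and ensemble weights below are random stand-ins, and the lognormal densities substitute for the Gaussian mixture model the study fitted to real DMS data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_region_reactivities(pairing_states, weights, rng):
    """Ensemble-weighted reactivity profile for one region.

    pairing_states: (n_structures, n_nt) boolean array, True = paired.
    weights: ensemble weights of the structures (summing to 1).
    """
    n_struct, n_nt = pairing_states.shape
    profiles = np.empty((n_struct, n_nt))
    for s in range(n_struct):
        # Stand-in reactivity densities: low for paired nucleotides, higher
        # for unpaired ones (the paper fits a Gaussian mixture to real data).
        paired = rng.lognormal(mean=-2.0, sigma=0.5, size=n_nt)
        unpaired = rng.lognormal(mean=0.0, sigma=0.5, size=n_nt)
        profiles[s] = np.where(pairing_states[s], paired, unpaired)
    # Reactivities summarize the whole ensemble: weighted average of profiles.
    return weights @ profiles

states = rng.random((4, 60)) < 0.5    # pairing states of 4 structures, 60 nt
w = rng.dirichlet(np.ones(4))         # random ensemble weights
profile = simulate_region_reactivities(states, w, rng)
```

Between-group differences then arise by giving the two groups different dominant structures, and within-group variation by perturbing the weights.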
As noted earlier, PARCEL automatically determines the appropriate length for each DRR. We evaluated the methods in terms of power and observed FDR. Power was calculated as the proportion of simulated DRRs that overlapped at least one reported DRR. The observed FDR was calculated as the proportion of reported DRRs that did not overlap any simulated DRRs. We observed the following performances. We tested dStruct's performance for several values of the minimum quality threshold (see the "Methods" section and Additional file 1: Figure S2). The threshold was specified in terms of maximum dissimilarity of reactivity profiles within the same group, i.e., maximum dwithin. We observed that dStruct had reasonably high power (∼ 60%) to discover DRRs for a range of the quality threshold (Fig. 5a). In addition, its FDR was properly controlled to the specified target level of 5%.

Fig. 5 dStruct properly controlled the false discovery rates in simulated data. We searched for DRRs in simulated data using dStruct, deltaSHAPE, PARCEL, and RASA (a–d, respectively). The powers (circles) and FDRs (triangles) are plotted for each method. We tested each method for a range of stringency levels. Vertical dotted blue lines mark the default parameter settings. A horizontal dotted red line in a marks the specified target FDR. deltaSHAPE, PARCEL, and RASA do not control for FDR. X-axis labels indicate the parameter tuned for each method. a dStruct calls a candidate region a DRR only if it satisfies a quality threshold and has a significant p value and q value. The quality threshold is specified in terms of a maximum allowed within-group variation, measured as the average dwithin in the region. b deltaSHAPE chains together differentially reactive nucleotides as DRRs if a minimum number of them are colocalized within a specified search length. c PARCEL quantifies the statistical significance of structural changes in a region in terms of an E value.
Under the null hypothesis of no differential signal, it is computed as the number of regions that can be expected to have structural change scores at least as high as the given region's score. d RASA identifies DRRs as the regions that have significant clustering of nucleotides with large changes in reactivities. The significance of the observed clustering is evaluated by comparing the observed distribution of the numbers of such nucleotides in sliding windows of a specified length with their null distribution obtained from permutations. The comparison is done in terms of standard Z scores.

deltaSHAPE calls DRRs based on the number of nucleotides in a region that manifest significant changes in reactivities (see the "Methods" section and Additional file 1: Section S1). Requiring fewer nucleotides amounts to a less stringent criterion. We tested deltaSHAPE's performance for a range of stringency levels. We observed consistently high FDR in its detections (Fig. 5b). For the least stringent criterion, deltaSHAPE's power was comparable to that of dStruct, albeit at the cost of excessive FDR (Fig. 5a, b). Its high FDR could be attributed to its tendency to always report DRRs in transcripts that have high coverage (see Additional file 1: Section S1 for a detailed overview of deltaSHAPE and its limitations). This is because deltaSHAPE calls DRRs from locally smoothed reactivity profiles. However, smoothing artificially spreads noise at a nucleotide into neighboring nucleotides. This might amplify the noise, leading to the false appearance of a strong differential signal. In addition, deltaSHAPE does not account for biological variation and does not control for false discoveries. PARCEL calls DRRs based on the E value statistic, which quantifies the statistical significance of reactivity changes in a region (see the "Methods" section and Additional file 1: Section S2 for details). A lower cutoff for E values represents a more stringent criterion.
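The E value notion can be illustrated with a minimal sketch. The scores below are made up (an exponential null stands in for PARCEL's actual null distribution of change scores); the point is only that the E value of an observed score is the expected number of null regions scoring at least as high.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in null structural-change scores for 10,000 regions.
null_scores = rng.exponential(scale=1.0, size=10_000)

def e_value(observed_score, null_scores):
    # Expected number of regions scoring at least this high under the null.
    return float(np.sum(null_scores >= observed_score))

strong = e_value(12.0, null_scores)  # extreme change score -> small E value
weak = e_value(0.1, null_scores)     # unremarkable score -> large E value
```

An E value cutoff of 1 thus roughly means "no more than one such region is expected by chance."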
We tested PARCEL's performance on a range of cutoff values (Fig. 5c). We observed a consistently high FDR (∼ 82%) and low detection power (< 1%). PARCEL's poor performance could be attributed to the fact that it was designed to work in conjunction with a specific SP technology [21]. As such, it does not consider untreated samples or coverage variations across a transcript, which are important issues in transcriptome-wide data from most technologies [6, 30] (see Additional file 1: Section S2 for a detailed overview of PARCEL and its limitations). RASA calls DRRs in two steps. It uses a generalized mixed model to quantify the significance of reactivity changes for each nucleotide. Then, it identifies regions enriched in differentially reactive nucleotides via permutation testing (see the "Methods" section and Additional file 1: Section S3). Since RASA quantifies enrichment in terms of Z scores, we assessed its performance for a range of Z score cutoffs (Fig. 5d). The lower the cutoff, the less stringent was the criterion for calling a DRR. We observed that RASA consistently yielded excessively high FDR and very low power. This could be explained by the fact that it does not utilize untreated samples to compute reactivities. Hence, its application might not be suitable for SP technologies like Structure-Seq, which relies on untreated samples [25]. In addition, RASA does not perform multiple testing correction to control for false discoveries (see Additional file 1: Section S3 for a detailed overview of RASA and its limitations). Overall, we conclude from these comparisons that dStruct has higher power than existing methods and that its observed FDR is properly controlled to the specified target of 5%. We provide additional performance summaries for all methods in Additional file 1: Figure S5. Interestingly, we found that the proportions of transcript lengths reported by dStruct as DRRs correlated well with their simulated ground truths (Additional file 1: Figure S6). 
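The power and FDR metrics used throughout these comparisons reduce to interval-overlap counting. A minimal sketch with made-up regions (half-open intervals are an assumption of this illustration):

```python
def overlaps(a, b):
    """True if half-open intervals a = (start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def power_and_fdr(true_regions, reported_regions):
    # Power: fraction of true DRRs overlapped by at least one reported DRR.
    power = sum(any(overlaps(t, r) for r in reported_regions)
                for t in true_regions) / len(true_regions)
    # FDR: fraction of reported DRRs that overlap no true DRR.
    fdr = sum(not any(overlaps(r, t) for t in true_regions)
              for r in reported_regions) / len(reported_regions)
    return power, fdr

true_drrs = [(10, 20), (50, 60), (100, 110)]   # made-up ground truth
reported = [(15, 25), (200, 210)]              # made-up method output
power, fdr = power_and_fdr(true_drrs, reported)
```

Here one of the three true regions is recovered (power 1/3) and one of the two reported regions is spurious (FDR 1/2).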
This did not hold for the other methods. Finally, we assessed the effect of varying the specified search length on dStruct's performance (Additional file 1: Figure S5F). We found that dStruct's power remained approximately constant up to a search length of 25 nt, from which point it monotonically decreased. This is expected because a higher minimum length excludes more regions whose alterations span stretches shorter than the search length.

dStruct identifies DRRs caused by single-nucleotide variants

We compared the performances of dStruct and StrucDiff in a guided discovery context with PARS data for human RNAs by Wan et al. [9]. PARS utilizes a pair of nucleases as probing reagents, and the degrees of reactions from the nucleases are summarized as a PARS score for each nucleotide. The PARS dataset from Wan et al. features RNAs obtained from cell lines derived from a family trio of a father, a mother, and a child, with no replicates for any cell line. Wan et al. obtained a list of transcripts with single-nucleotide variants for this trio and identified DRRs of length 11 nt with the variants at their centers. To this end, they developed the StrucDiff approach (Additional file 1: Section S4). For each variant, they compared each pair of individuals separately using StrucDiff. They called a region a riboSNitch (i.e., a regulatory element whose structure is altered by a single-nucleotide variant) if any of the pairwise comparisons for the region yielded a significant result. StrucDiff has five steps. Given a pair of profiles, first, the data is locally smoothed using a rolling mean over sliding windows of 5 nt to calculate smoothed PARS scores. Second, the absolute difference in the smoothed PARS scores (denoted as \(\Delta \overline{r}_{i}\) for nucleotide i) is calculated.
Third, given the variant's location, the structural change score around the variant (henceforth, called vSNV) is calculated as the average \(\Delta \overline{r}_{i}\) for the nucleotides flanking it. In the fourth step, StrucDiff assesses the statistical significance of vSNV. To this end, it permutes the sequence of \(\Delta \overline{r}_{i}\) 1000 times. For each permutation, it computes a structural change score under the null hypothesis (henceforth, called vnull). A p value is assigned to the variant as the fraction of vnull values greater than vSNV. In addition, StrucDiff controls the FDR using the Benjamini-Hochberg procedure. Finally, a variant region is classified as a riboSNitch if it has significant p values and q values, vSNV > 1, high local coverage, and high signal strength in a window of 11 nt. Of the regions examined by Wan et al., only those found to be riboSNitches were reported. For our analysis, we selected those for which two of the three individuals were allelically identical, i.e., they were either both heterozygous or both homozygous with the same allele. However, none of the studied cell lines were probed in replicates. Hence, we used profiles from the two cell lines with an identical allele at a variant site as two replicate samples of the same PARS profile (labeled group A) for a region of 11 nt centered at the variant. This is reasonable under the assumption that the variant at the center of a region is the only distinguishable structure-altering factor. The remaining cell line with a different allele (labeled group B) could potentially have a significantly altered profile in this region. Hence, we used dStruct (guided discovery mode) to identify the regions with variants that were DRRs.
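The smoothing-and-permutation core of StrucDiff can be sketched as follows. The flank width, permutation count, and 5-nt smoothing window follow the description above, but the boundary handling and the synthetic profiles are assumptions, and the Benjamini-Hochberg step across variants is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def rolling_mean(x, w=5):
    # Rolling mean over sliding windows of w nucleotides.
    return np.convolve(x, np.ones(w) / w, mode="same")

def strucdiff_p(pars_a, pars_b, variant_idx, flank=5, n_perm=1000, rng=rng):
    """Permutation p value for a structural change around one variant."""
    delta = np.abs(rolling_mean(pars_a) - rolling_mean(pars_b))
    flank_idx = np.r_[np.arange(variant_idx - flank, variant_idx),
                      np.arange(variant_idx + 1, variant_idx + flank + 1)]
    v_snv = delta[flank_idx].mean()       # change score around the variant
    # Null: the same score recomputed on permuted delta sequences.
    v_null = np.array([rng.permutation(delta)[flank_idx].mean()
                       for _ in range(n_perm)])
    return float(np.mean(v_null > v_snv))

a = rng.normal(0, 1, 200)
b = a.copy()
b[95:106] += 3.0                          # strong change around position 100
p = strucdiff_p(a, b, variant_idx=100)    # small p: the change is detected
```

Note that permuting the smoothed difference sequence destroys the local correlations that smoothing introduced, which is precisely the issue raised below.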
Since there were no independent validations that could provide a ground truth for the variants under consideration (see the "Methods" section), we resorted to an indirect way of comparing the results from dStruct and StrucDiff using the Pearson correlation coefficient. The correlation between a pair of differentially reactive profiles is expected to be lower than the correlation between the samples of the same group [35]. Hence, we calculated the within-group and between-group Pearson correlation coefficients for each region. We found that the within-group correlations for DRRs identified by dStruct were high (Fig. 6a). In addition, the between-group correlations were substantially lower in comparison with the within-group values. This trend in within-group and between-group correlations is expected because dStruct aims to find regions where the between-group variation exceeds that within groups. We further confirmed the similarity of reactivity patterns within groups and their dissimilarity between groups by visual inspection (Fig. 6b, Additional file 1: Figure S7). The inferences from the Pearson correlation coefficients and visual examination support our previous finding of a properly controlled FDR by dStruct. In agreement with dStruct, for two of the DRRs, StrucDiff consistently found significant changes in both the pairwise comparisons of profiles between groups. However, for each of the remaining two DRRs reported by dStruct, it inconsistently called a DRR in one pairwise comparison but not in the other. This is anomalous because both pairwise comparisons involved the same pair of variants.

Fig. 6 dStruct reported riboSNitches from a PARS dataset. a The within-group Pearson correlation coefficients (green bars) for riboSNitches reported by dStruct were higher than their respective between-group Pearson correlations (red bars).
b Example of reactivity profiles for the mother, the child, and the father for one of the regions that dStruct reported as a riboSNitch, i.e., the single-nucleotide variant at site 1817 for NM_032855. Note that for this region, the mother and child were allelically identical and therefore labeled as group A (A1 and A2). They appear identical, but they differ from the father, who had a different allele and was labeled as group B (B1). c A histogram of the differences between the between-group and within-group Pearson correlation coefficients. Many of the riboSNitches reported by StrucDiff had only a minor change in their Pearson correlation. For many of the regions, the between-group Pearson correlations were also higher. The dotted vertical line in red marks the median of the distribution.

To glean the FDR of StrucDiff, we took the difference of the between-group correlations and the within-group correlations for all regions. For DRRs, the difference should be significant and negative. However, we found that for many of the regions, the difference was positive, with a median of − 0.06 (Fig. 6c). This suggests that there could possibly be a significant proportion of false positives reported by StrucDiff. In other words, StrucDiff's FDR might be higher than the specified level of 0.1, as well as higher than that of dStruct. An alternative explanation for this observation could be that the variants at the center of the examined regions were not always the only relevant factors that influenced local structures. In fact, Wan et al. proposed that co-variation of variants in the close vicinity of a variant under consideration might influence the local structure. However, they also found that riboSNitches (identified using StrucDiff) have fewer variants around them in comparison with variants that do not alter structure. Nonetheless, it is possible that our starting assumption that allelic similarity implies the absence of a DRR does not apply for at least some of the variants.
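The within-group versus between-group correlation check used above can be sketched with synthetic profiles. The group sizes, profile length, and noise level are illustrative, not those of the PARS trio data:

```python
import numpy as np

rng = np.random.default_rng(3)

def group_correlations(group_a, group_b):
    """Mean within-group and between-group Pearson correlation coefficients."""
    def corr(x, y):
        return float(np.corrcoef(x, y)[0, 1])
    within = [corr(x, y)
              for grp in (group_a, group_b)
              for i, x in enumerate(grp) for y in grp[i + 1:]]
    between = [corr(x, y) for x in group_a for y in group_b]
    return float(np.mean(within)), float(np.mean(between))

# Two groups built around different underlying profiles (a DRR-like case).
base_a = rng.random(50)
base_b = rng.random(50)
group_a = [base_a + rng.normal(0, 0.05, 50) for _ in range(2)]
group_b = [base_b + rng.normal(0, 0.05, 50) for _ in range(2)]
w, b = group_correlations(group_a, group_b)  # expect w high, b near zero
```

For a real DRR, the between-group mean should fall clearly below the within-group mean, which is the signature examined in Fig. 6.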
This would explain the low proportion of riboSNitches found by our method. It could also explain the anomalous distribution of changes in the correlations for the riboSNitches reported by StrucDiff. Note that we could not compare the powers of dStruct and StrucDiff due to the lack of a ground truth for the data. Besides a comparison of FDRs, it is worthwhile to observe that the permutation test approach utilized by StrucDiff might not be suitable for locally smoothed reactivity data (Additional file 1: Section S4). This is because local smoothing introduces local correlations in \(\Delta \overline{r}_{i}\). However, these local correlations are absent in the permuted data. As such, the sampling distribution of vSNV under the null hypothesis turns out to be different from the distribution of vnull values, which can lead to inflated error rates [65] (see Additional file 1: Section S4 and Additional file 1: Figure S8).

Accounting for biological variation in reactivity patterns

Biological variation in measurements from samples of the same group has been observed across all areas of genomics [66]. In fact, RNA biologists who use SP protocols have been aware of its presence [6, 30]. A recent study by Selega et al. shows that accounting for biological variation improves the estimates of reactivities [30]. Two methods, PARCEL and RASA, which explicitly account for biological variation in the context of differential analysis, have also been published recently [20, 21]. PARCEL uses edgeR to compare the counts between the groups of samples [36]. However, it does not consider coverage variation within a transcript, which is known to be significant [57]. RASA accounts for coverage variation, but similarly to PARCEL, it does not use untreated control samples in computing reactivities. Instead, it assesses the background noise from the untreated samples and then excludes from analysis nucleotides whose noise level exceeds a threshold.
It favors this strategy because it was developed to be used with DMS-MaPseq, which does not consider untreated samples in reactivity estimation [67]. Yet, this limits the detection power in transcriptome-wide data from other technologies by filtering a major fraction of the nucleotides because these datasets are highly noisy [7]. Additionally, this places the burden on the user to optimize the threshold level for noise. Recently, broadly applicable computational methods for reactivity estimation have been developed, namely, PROBer and BUM-HMM [30, 31]. These address several challenges in estimating reactivities from transcriptome-wide data, e.g., multi-mapping reads, background noise, and coverage variation. Therefore, it is necessary for novel differential analysis methods to either address these challenges directly or be compatible with methods such as PROBer and BUM-HMM. However, RASA and PARCEL neither account for some of these major issues nor are they compatible with the analog reactivities output by said methods. The incompatibility arises because RASA and PARCEL were designed to take read counts as their (digital) input. Hence, the need for a robust differential analysis method remains unmet for the majority of SP technologies. Besides accounting for biological variation, it is desirable to identify regions that display differences in their reactivity patterns [33]. An altered pattern in a region could indicate a change in the composition of its structural ensemble [33, 42, 44, 64]. Reactivity pattern is defined collectively by the reactivities of all the nucleotides in a region (Fig. 1a). Hence, one must consider every nucleotide in a region for inferences on pattern changes. However, RASA, PARCEL, and deltaSHAPE first evaluate the changes at individual nucleotides and subsequently chain nucleotides with significant changes together as DRRs. 
Furthermore, the criteria for the number of nucleotides with significant changes are not always stringent (see Additional file 1: Section S1-S3). For example, PARCEL requires only one significantly altered nucleotide to call a DRR. In contrast to these three methods, StrucDiff considers all nucleotides in a region but only after smoothing the read counts (see Additional file 1: Section S4). This effectively obscures the reactivity patterns. classSNitch is the only method that explicitly accounts for patterns (see Additional file 1: Section S5). However, it does not account for biological variation and is also currently limited to SHAPE data. dStruct presents a major advance over existing methods as it accounts for biological variation and reactivity patterns and is also compatible with diverse technologies. Notably, it smoothes the d scores in the second step but only to construct candidate regions. Once constructed, it reverts to the unsmoothed d scores to perform the statistical inference (Fig. 2). In guided discovery, it does not perform smoothing at all. Our approach deviates from the methods for differential analysis of other kinds of high-throughput data, which do not generally account for signal patterns, because our feature of interest is reactivity pattern. For example, in differential methylation studies, the feature of interest for a region is the average methylation level. Such a feature could be described as higher or lower when comparing two samples [48, 49, 66]. However, reactivity pattern is a geometrical feature. As such, it cannot be described as being higher or lower when comparing samples. It has to be described in terms of agreement in reactivity pattern, which can be numerically captured in a transformation of the data. For example, a secondary feature of the data could be assessed, such as the sequence of slopes of segments joining reactivities for two adjacent sites (see slopes of segments of red line in Fig. 1). 
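The slope-based notion of pattern agreement mentioned above can be made concrete with a tiny sketch: `np.diff` gives the sequence of segment slopes, and two parallel profiles share that sequence exactly (the profiles below are made up):

```python
import numpy as np

def slope_sequence(profile):
    # Slopes of the segments joining reactivities at adjacent sites.
    return np.diff(profile)

p1 = np.array([0.1, 0.8, 0.3, 0.9, 0.2])
p2 = p1 + 0.4                                 # parallel: a shifted copy
p3 = np.array([0.9, 0.2, 0.8, 0.1, 0.7])      # a different pattern

parallel = bool(np.allclose(slope_sequence(p1), slope_sequence(p2)))
not_parallel = bool(np.allclose(slope_sequence(p1), slope_sequence(p3)))
```

Parallelism captures agreement of pattern but not of level; coincidence, discussed next, requires both.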
If two profiles were parallel, the slope of the segment between any pair of adjacent sites would be the same for both profiles. Indeed, this idea forms the basis for the classical approach of profile analysis for the test of parallelism [68]. However, this approach requires a large sample size to account for biological variation and can potentially be applied only to predefined regions. In addition, it would require normality of reactivities. If two profiles were found to be parallel by this approach, they could be tested for coincidence in a second hypothesis test. Since the telling feature for DRRs is coincidence of profiles, dStruct assesses coincidence at the nucleotide level directly in terms of the d score. Two profiles can be classified as coincident if the vertical distance (or difference in reactivity) between them is 0 at each nucleotide. However, such a definition would be applicable only for two profiles. Our dissimilarity measure, the d score, extends the concept of pairwise "vertical distance" to multiple profiles. We use d scores to assess dissimilarity within groups and between groups. Then, we test the null hypothesis that profiles for the two groups are coincident and d scores are not significantly different within and between groups. Our dissimilarity measure is based on the mean and standard deviation of reactivities for each nucleotide. In differential analysis studies in other fields, it has been noted that when the standard deviation from very few samples is used to compute t-type test statistics, these statistics can be unreliable and lead to false positives and reduced power [69]. However, despite this issue, dStruct has reasonably high power and low observed FDR. dStruct's high performance is possible because we do not utilize standard deviation to assess a test statistic directly. Instead, it contributes to the assessment of a secondary feature of the data.
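A minimal sketch of a dStruct-style comparison is given below. The dissimilarity used here, the coefficient of variation squashed through arctan, is a stand-in: the paper's exact monotonic transformation is defined in its Methods, and the synthetic profiles and group sizes are illustrative. Nucleotides with non-positive means are left undefined, mirroring the handling of zero reactivities discussed later:

```python
import numpy as np
from scipy.stats import wilcoxon

def d_score(profiles):
    """Stand-in nucleotide-level dissimilarity for a stack of profiles.

    The real measure is a monotonic transformation of a ratio based on the
    per-nucleotide mean and standard deviation (see the paper's Methods);
    the coefficient of variation squashed through arctan is used here
    purely as an illustration.
    """
    mu = profiles.mean(axis=0)
    sigma = profiles.std(axis=0, ddof=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = np.where(mu > 0, sigma / mu, np.nan)  # undefined when mu <= 0
    return (2 / np.pi) * np.arctan(cv)

rng = np.random.default_rng(4)
base = rng.random(30) + 0.1
group_a = np.stack([base + rng.normal(0, 0.05, 30) for _ in range(2)])
group_b = np.stack([base + rng.normal(0, 0.05, 30) for _ in range(2)])
group_b[:, 10:25] += 1.0                     # a simulated DRR in group B

d_within = np.nanmean([d_score(group_a), d_score(group_b)], axis=0)
d_between = d_score(np.vstack([group_a, group_b]))

# One-sided signed-rank test: is between-group dissimilarity larger?
diff = d_between - d_within
stat, p = wilcoxon(diff[~np.isnan(diff)], alternative="greater")
```

The signed-rank statistic pools information across all nucleotides of the region, which is why noise at any single nucleotide has limited influence.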
Additionally, from the point of view of the Wilcoxon signed-rank test, the test statistic in our method is the sum of signed ranks. This test statistic pools information from all nucleotides in a candidate region, and hence, its susceptibility to noise in nucleotide-level d scores is reduced. While the untransformed ratio of μi and σi is very sensitive to small changes in μi when μi is close to zero, we have improved our dissimilarity measure with a monotonic transformation (see Additional file 1: Figure S9). Yet, it is to be noted that our approach focuses on variation at the level of reactivities. Indeed, in our analysis, we have not modeled the mean-variance relationship of reactivities. While dStruct provides a significant improvement over existing methods in its current form, accurate models of heteroscedasticity in the mean-variance relationship might enhance dStruct's performance. For example, methods for differential gene expression analysis utilize such models [70]. Moreover, we do not model variation directly at the level of counts. In the future, it might be possible to achieve better performance with rigorous models for variation directly at the counts level [49]. Finally, our approach differs from other methods of differential analysis in one additional way. For each region of interest, other methods assess a single test statistic [48, 49, 69]. To classify the region, they either rely on a cutoff or assess statistical significance in reference to a null distribution. Moreover, the null distribution is generally obtained by permuting the sample assignment labels for data points and calculating the test statistic for the permuted data. In contrast to the approach of capturing the effect size and within-group variation in a single test statistic for each region, our approach of quantifying within-group and between-group variations in reactivity patterns provides two vectors of d score profiles for each region.
The vectors consist of values that reflect nucleotide-level dissimilarity of reactivity patterns. Given these vectors, our goal is to test if between-group variation is significantly higher than within-group variation, in which case the region can reasonably be classified as a DRR. Hence, we forgo label permutations in favor of the Wilcoxon signed-rank test. The Wilcoxon signed-rank test is an alternative to the paired Student's t test and does not assume a normal distribution of d scores. It compares d scores for within-group and between-group variations. In other words, our method assesses the significance of differential reactivity in a region by comparing it to within-group variation in that region. In fact, experts can identify altered reactivity patterns by inspecting a region alone, without needing to resort to transcript-wide or transcriptome-wide data for reference. This suggests that there is adequate information within the candidate regions for classification purposes [35]. Our approach in dStruct takes advantage of this characteristic of the reactivity data. In addition, such an approach of significance testing confers robustness to the presence of outliers or poor-quality data outside the regions of interest. While we found that dStruct can identify DRRs with reasonable power and a properly controlled FDR, several limitations are worth noting. First, dStruct does not automatically identify a search length for DRRs (i.e., the minimum allowed length). With little known about RNA structures, users might not a priori know the optimal search length. Importantly, the analysis results can vary depending on the search length. For example, consider the impact of decreasing the search length from l1 to l2. Given the new search length, in addition to identifying the same candidates that were found with length l1, dStruct might identify additional ones, which are shorter than l1.
While the p values of candidates common to both searches should remain the same, their q values might change. This might lead to loss of power if the true DRRs were generally longer than l1. On the other hand, if the true DRRs were shorter than l1, specifying a minimum search length of l1 might also lead to loss of power. This is because dStruct disregards all evidence of between-group variation in regions shorter than the specified search length. In our simulations, we found that for a wide range of input search lengths (5–25 nt), dStruct maintained approximately constant power and properly controlled the observed FDR (Additional file 1: Figure S5F). However, this might not always be the case. Another limitation to note is that dStruct might not determine DRR boundaries accurately, as it opts for the longest contiguous regions possible. Thus, it might join DRRs that are separated by fewer nucleotides than the search length. Moreover, dStruct might miss regions where a majority of the nucleotides have zero reactivities. While zero PARS scores could be considered no information, zero SHAPE/DMS reactivities may report either high-quality information or no information (e.g., a manifestation of high background noise) [37]. In our experience, for PARS as well as SHAPE/DMS data, a substantial fraction of the nucleotides have zeros across all replicates. Considering all of them as high-quality information and defining their d scores as 0 results in erroneous inferences (data not shown). Hence, we leave the d score for such nucleotides as undefined. Yet, it is worth noting that the quality criteria that we use to filter candidate regions ensure that no more than a small fraction of the nucleotides have undefined d scores in candidate regions (see the "Methods" section). Regions containing zero or very low reactivities for most nucleotides are not found by dStruct, even if they are true DRRs. 
In addition, if one of the groups manifests only zero reactivities in a region, it does not contribute to the assessment of within-group variation in that region. Another limitation of dStruct is that it leaves the burden of normalizing the reactivities to the user. Normalization is a common practice in the field and aims to bridge differences in reaction conditions [6]. Several approaches have been utilized, which heuristically identify outliers and subsequently use the remaining values to determine a normalization constant [56, 71]. Thus, they critically depend on outlier detection. However, outliers are typically noisy and can easily distort the scaling [6]. Furthermore, their prevalence and characteristics in the context of transcriptome-wide SP are still poorly understood. For these reasons, proper normalization is a critical step in differential analysis, and when done well, it could substantially enhance power. A hallmark of proper normalization is good agreement between the normalized replicates [72]. In that context, we designed dStruct to consider only those regions which satisfy a minimum requirement for replicate agreement (see the "Methods" section for minimum quality threshold and Additional file 1: Figure S2). Specifically, transcripts or regions thereof, which display poor replicate agreement, are filtered by dStruct. We caution users to check for agreement between replicates from the same group in those regions that dStruct discarded. Furthermore, if it excluded a large fraction of the transcripts due to quality considerations, this could suggest that samples were not properly normalized. Another limitation of dStruct and all other methods is that they might miss the differentially structured regions if they do not manifest differential reactivities, as there might be regions in a transcript that pair with alternative partners between groups. For example, nucleotides 42–47 of the crcB fluoride riboswitch (Fig. 
4a, b) change partners between groups but remain paired in both groups. Such nucleotides might not exhibit significant reactivity changes. Notably, this limitation is due to the nature of SP data. Besides these limitations, dStruct might miss DRRs that exhibit significant changes at only one or two nucleotides. For long search lengths, it might even overlook such DRRs as candidate regions. This is because differential signals concentrated at only a few nucleotides get diluted when searching over windows. The longer the search length, the greater the signal dilution. Notably, specifying a short search length might not remedy this issue, as it arises from the limited power of the Wilcoxon signed-rank test when applied to very small samples. For example, at a significance level of 0.05, this test cannot identify DRRs shorter than 5 nt in length. This places a hard limit on dStruct's detection power. Nevertheless, regions shorter than 5 nt might be listed with insignificant p values if the specified search length were ≤ 5. Hence, if it is of interest to find isolated single-nucleotide changes, users can specify a short search length and visually examine all the candidate regions. Detection power in such a case could also be improved by replacing the Wilcoxon signed-rank test with a paired t test, which might be more powerful for small samples [73]. Some of dStruct's limitations could be mitigated. It is possible that in a study, RNAs are expected to have altered reactivity patterns over multiple non-contiguous regions, yet no region has a sufficiently strong effect size. In such a case, the detection power could be improved by testing all candidate regions identified within an RNA collectively (see dStruct's manual). Note, however, that this assesses the significance of differences at the level of a transcript and not a region. This distinction should thus be clearly reported.
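The transcript-level pooling idea could look like the following sketch. The helper and input format are hypothetical (dStruct's actual procedure is described in its manual), and the per-region d-score vectors below are synthetic:

```python
import numpy as np
from scipy.stats import wilcoxon

def transcript_level_test(candidate_regions):
    """Pool d-score differences across regions and test once per transcript.

    candidate_regions: list of (d_between, d_within) arrays, one pair per
    candidate region constructed within the transcript.
    """
    diffs = np.concatenate([b - w for b, w in candidate_regions])
    diffs = diffs[~np.isnan(diffs)]
    # A single one-sided signed-rank test on the pooled differences.
    return wilcoxon(diffs, alternative="greater").pvalue

rng = np.random.default_rng(5)
# Three candidate regions, each with subtle between > within dissimilarity.
regions = [(rng.normal(0.15, 0.1, 8) + 0.3, np.full(8, 0.3))
           for _ in range(3)]
p_transcript = transcript_level_test(regions)
```

Pooling lets several individually weak regions reach significance together, at the cost of reporting significance for the transcript rather than any single region.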
An alternative scenario is that the biological question warrants a short search length, but due to the noisy nature of the data, it results in the screening of candidate regions that do not represent DRRs. This might reduce the detection power because non-DRR candidates inflate the correction for multiple tests. However, it is plausible that the real DRRs among the candidates are closely located, say separated by 5–10 nt, while the non-DRR candidates are separated by larger distances. In such cases, power could be improved by leveraging the proximity of real DRR candidates and testing candidate regions collectively if they are located within a certain distance of each other. Another way to improve power in such situations is to integrate differential analysis of SP data with other kinds of relevant data. For example, consider a study on how a protein impacts RNA structure upon binding. Let there be two groups with wild-type samples and samples where the protein's binding domain has been eliminated. Let us say that dStruct is given a search length of 5 nt and constructs many candidate regions but calls no DRRs due to the subtlety of reactivity changes. It might be possible to integrate the information from a collateral study on sites of RNA-protein binding with regions constructed by dStruct and perform enrichment tests. The null hypothesis underlying such a test could be that constructed regions are not associated with the change in the protein's function. Such tests have been used in other fields of genomics to yield useful biological insights, e.g., gene set enrichment analysis [74, 75]. Future developments of such methods specialized to SP data could benefit RNA structure studies. Furthermore, it might be possible to perform SP in the presence of a range of concentrations of the wild-type protein [11]. This would result in several groups of samples with different concentrations of the protein. Differential analysis of such data could be performed in the following manner. 
One could compare each group of samples to the group with the lowest concentration of the protein. The emergence of particular constructed regions as the concentration difference between groups increases might reveal the DRRs. If such regions consistently appear beyond a certain level of concentration difference, they could be considered evidence in support of DRRs. Additional recommendations We strongly recommend using dStruct in conjunction with data obtained from paired-end reads. While dStruct works with both single- and paired-end reads, reactivities are most reliable when the treated and untreated detection rates are estimated using local coverages instead of transcript-level coverages [6, 37]. In addition, it is critical to secure at least two samples per group. In our experience, reactivity patterns can change merely due to biological variation. A group without replicates does not contribute to the estimates of within-group variation, which might lead to false positives. Moreover, in some studies, one of the groups might be expected to manifest much higher variation than the other due to experimental limitations and/or known biological factors. For example, Watters et al. compared genome segment RNA3 of the Cucumber mosaic virus between infected cell lysates (group A) and in vitro refolded viral RNA extracted from virions (group B) [13]. They observed much higher variation in group A than in group B. In such a case, dwithin, which summarizes the within-group variation in both groups, might be very high and thereby limit the detection power. However, as has been recently done in methylation studies [48], it may be possible to enhance dStruct's power by assessing dwithin only for the less variable group (see dStruct's manual for details). DRRs found using this approach would represent regions where one group varies much more than the other. Importantly, if such an approach were utilized, the supporting details should be clearly reported. 
Finally, in guided discovery situations, it is possible that collateral studies do not pinpoint the exact regions where DRRs might be found but only indicate their approximate locations. For example, in RNA-protein interaction studies, DRRs might be anywhere within, say, 100 nt upstream/downstream of CLIP-seq signal peaks. In this case, performing guided discovery with, say, a 20-nt window centered at the peak may preclude the discovery of more distant DRRs. However, de novo discovery within the entire transcript may not be optimal either. Hence, if the precision of the CLIP-seq data were known, it may be better to perform de novo discovery with, say, a 200-nt window centered at the peaks. We described dStruct, a novel approach to identify DRRs from SP data. dStruct is compatible with diverse SP protocols and accounts for biological variation in SP data. First, it quantifies the within-group and between-group variation. Then, it constructs regions that are potential candidates for DRRs to facilitate de novo discovery or utilizes candidate regions identified by collateral studies to aid guided discovery. Finally, it assesses the statistical significance of differential reactivities in candidate regions and controls for false discoveries. To validate dStruct, we used diverse datasets, which span a range of SP technologies, structure-altering factors, and organisms. We demonstrated that for a properly controlled FDR, dStruct has a higher power than existing approaches. While we validated dStruct with the SHAPE-Seq, Structure-Seq, and PARS protocols, it is applicable to many other SP technologies. With SP technologies reaching maturity, there is a need to develop robust methods to perform differential analysis of SP data. We discussed the unique aspects of SP data that distinguish it from other kinds of genomic data. These unique aspects call for differential analysis methods tailored to diverse SP technologies. 
dStruct is a first step in this direction. Quantifying dissimilarity of reactivities We used a d score to quantify the dissimilarity of reactivities. Its definition was motivated by the need for a robust measure of agreement/disagreement in reactivity patterns in a transcript or in regions thereof. We devised the d score by examining the deficiencies of existing approaches in serving this need. For example, classSNitch utilizes a feature of reactivity profiles called a pattern correlation coefficient. It is the Pearson correlation coefficient of sequences of signs of slopes of the segments joining reactivity scores for adjacent nucleotides in plots of reactivity profiles (slope of segments of red line in Fig. 1a). While correlation in sequences of slopes could assess if two profiles were parallel, a region with approximately parallel profiles might still be a DRR if the profiles were not coincident. classSNitch assesses the coincidence in profiles by taking the Euclidean distance between a pair of profiles. However, the Euclidean distance is valid for only two profiles and can be sensitive to outliers. At nucleotide resolution, the coincidence of two profiles could be captured as the vertical distances between the profiles or the differences in reactivities. If the differences were zero or significantly low for a pair of profiles, they might be called coincident. This is the basis of structural change scores used in deltaSHAPE and StrucDiff. However, the utility of reactivity differences is limited to a comparison of two profiles. Besides the need for a measure that could simultaneously summarize the agreement of reactivity patterns for more than two replicates, we identified a need to account for the fact that nucleotides with higher average reactivity tend to have higher fluctuations [56]. 
This aspect of the data could be accounted for by considering the ratio of the reactivity difference and the mean reactivity at a nucleotide, i.e., if r1,i and r2,i are the reactivities in two replicates at nucleotide i, we could consider: $$\begin{array}{@{}rcl@{}} \frac{\left| r_{1,i} - r_{2,i} \right|}{\frac{1}{2}\left| r_{1,i} + r_{2,i} \right|}. \end{array} $$ The above expression yields a sequence of zeros for perfectly coincident profiles. It yields a sequence of very high numbers (or infinity) for nearly anti-parallel profiles. For two profiles, d score is defined as the arctan of the above expression, with additional scaling as described next. To account for multiple profiles simultaneously, we replaced the numerator in the above expression with the sample's standard deviation (denoted σi), which gave us the absolute value of the coefficient of variation, or |CV|. However, |CV| is very sensitive to small changes in the mean reactivity (denoted μi), especially when μi is close to zero (see Additional file 1: Figure S1). This could lead to excessively large |CV|. For example, PARS scores can take both positive and negative values, which can yield μi values close to zero. As μi decreases and approaches zero, σi/|μi| increases very fast and approaches infinity. This is problematic because excessively large values can dominate the averages that we use within the dissimilarity measures (i.e., nucleotide-wise averages across all the homogeneous subsets or all the heterogeneous subsets in step 1; average at the regional level in step 2; see the "Differentially reactive regions" section). Hence, we applied a monotonic transformation to |CV| that prevents the occurrence of excessively large values (Fig. 1c and Additional file 1: Figure S1). While logarithmic transformation is a common choice, it is not suitable for |CV| directly as it goes to −∞ for |CV| close to 0. 
Indeed, |CV| being close to zero suggests that the reactivities being compared are identical, which can happen in regions with good data quality. While log transforming (1+|CV|) is a possible alternative, log transformation does not restrict the range of the transformed values. In fact, their range remains the same as that of the untransformed |CV|, i.e., [0,∞). Hence, log does not guarantee bounded values in transcriptome-wide data, which displays numerous instances of extremely high |CV| for PARS data (data not shown). We use a transformation that yields values of identical order of magnitude as the log transformation for σi/|μi| up to ∼10³ and which does not increase to infinity for higher values of σi/|μi| (Additional file 1: Figure S1). A natural choice to transform ratios is to use inverse trigonometric functions. For example, proportions are often transformed using the arcsin function [49]. However, arcsin's domain is limited to [−1,1]. Hence, we transformed |CV|, which can take any positive value, using the arctan function, a monotonic transformation that ranges from 0 to π/2 (Fig. 1b, c and Additional file 1: Figure S9). arctan(σi/|μi|) is approximately equal to σi/|μi| for σi/|μi|<1. Additionally, for higher values of σi/|μi|, it is close to log10(1+σi/|μi|) when σi/|μi| is less than or around order 10³. Importantly, arctan(σi/|μi|) asymptotically reaches π/2 as σi/|μi| increases beyond order 10³, whereas log10(1+σi/|μi|) continues to increase with σi/|μi|. This is a useful property because we do observe σi/|μi| ≫ 10³ in transcriptome-wide data (not shown). Furthermore, we compared performances of log and arctan transformations in the context of differential analysis with dStruct. In addition, we compared a threshold approach to bound σi/|μi| by restricting large values to the threshold. We observed identical performances of all three approaches. This is because dStruct utilizes a non-parametric test. 
In such a test, only the relative ranks of d scores are of concern (Additional file 1: Figure S9). Since log and arctan transformations are both monotonic transformations, using one instead of another alters the absolute magnitudes of d scores but not their relative ranks. Nonetheless, for our purpose, the major advantage of the arctan transformation is that it results in values that are bounded to a finite interval. This allows a convenient scaling such that the d scores are bounded between 0 and 1, which is a desirable feature for interpretation [76] (Additional file 1: Figure S10). Since arctan(σi/|μi|) ranges from 0 to π/2, we rescaled it such that it ranges from 0 to 1. Finally, we obtained the following expression for the d score: $$\begin{array}{@{}rcl@{}} d_{i} &=& \frac{2}{\pi} \arctan \left(\frac{\sigma_{i}}{\left| \mu_{i} \right|} \right). \end{array} $$ For reactivity scales that are restricted to non-negative values (e.g., SHAPE), the d score will never reach the maximum value of 1. For PARS-type data, positive and negative reactivities carry information about the likelihood of a nucleotide forming or not forming a base pair. However, if the PARS reactivities across samples were such that μi=0, it would imply that some samples indicated the presence of a base pair at i while others indicated the contrary. Hence, μi=0 is indicative of maximal dissimilarity between reactivities for nucleotide i, and di=1 when μi=0. Note that we previously reported an approach to quantify agreement between reactivity profiles, which is similar to the d score, namely, the signal-to-noise ratio (SNR) [37, 38]. We demonstrated the utility of SNR in quality control of SP data, where we showed that given several samples of the same group, SNR-based analysis could identify the discordant replicates and regions. SNR was defined as the inverse of the CV and was tested only for SHAPE and DMS data. 
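The d score formula translates directly into code. The snippet below is a minimal NumPy sketch (the reference implementation is the dStruct R package, so the function name and handling of edge cases here are illustrative):

```python
import numpy as np

def d_score(profiles):
    """Per-nucleotide d score for a set of replicate reactivity profiles.

    profiles: array of shape (n_samples, n_nucleotides).
    Implements d_i = (2/pi) * arctan(sigma_i / |mu_i|), where sigma_i is the
    sample standard deviation and mu_i the mean across samples; mu_i = 0 is
    treated as maximal dissimilarity (d_i = 1), as described in the text.
    """
    r = np.asarray(profiles, dtype=float)
    mu = r.mean(axis=0)
    sigma = r.std(axis=0, ddof=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        d = (2.0 / np.pi) * np.arctan(sigma / np.abs(mu))
    return np.where(mu == 0.0, 1.0, d)

# sigma/|mu| = 1 (a mean SNR of 1) maps to d = 0.5, as quoted later in the text
print(round((2 / np.pi) * np.arctan(1.0), 3))        # 0.5
print(d_score([[1.0, 2.0, 0.5], [1.0, 2.0, 0.5]]))   # identical replicates -> [0. 0. 0.]
```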
In addition, we dealt with the sensitivity of the mean SNR to small changes in σi by restricting its maximum value to 35 based on the properties of the data. The d score can be interpreted as a redefinition of SNR. While SNR captures agreement of reactivities, d score captures variation. In addition, sensitivity of d score to small changes in μi was reduced by a monotonic transformation of |CV|. As such, the d score could be used to replace SNR in quality control applications (see Additional file 1: Figure S11). Overview of dStruct We developed dStruct to identify DRRs in three steps. In the first step, we assess the within-group and between-group variation. In the second step, we identify regions that could potentially be DRRs. This step is performed only for de novo discovery. In the third step, the regions identified in the second step are statistically tested to detect DRRs. Given the two groups labeled A and B and mA and mB replicate samples from these groups, respectively, let m= max(mA,mB). We construct all the possible subsets of the mA+mB samples, such that each subset has m samples. Among these subsets, some will be homogeneous, i.e., all the samples in the subset will come from the same group, whereas others will be heterogeneous. In a subset, suppose there are gA samples from group A and gB from group B. For m=2, all the heterogeneous subsets will have gA=1 and gB=1. In other words, the ratio of the numbers of samples from the two groups in all the heterogeneous subsets will be 1:1. Similarly, for m=3, all the heterogeneous subsets will have either gA=1 and gB=2 or gA=2 and gB=1. The ratio of the numbers of samples from the two groups in all heterogeneous subsets will be 2:1. However, for m>3, the heterogeneous subsets can have different ratios. For example, for m=4, some heterogeneous subsets will have gA=3;gB=1 or gA=1;gB=3, resulting in a ratio of 3:1, while others will have gA=2;gB=2, resulting in a ratio of 1:1. 
Hence, for m>3, dStruct retains only those heterogeneous subsets which have the highest degree of heterogeneity, defined as gAgB/m². For each subset, we assess d scores as described above (see the "Quantifying dissimilarity of reactivities" section). We use the nucleotide-wise average of d scores from the homogeneous subsets, called dwithin, as the measure of the within-group variation in the second and the final steps. Similarly, we use the average of the d scores from the heterogeneous subsets, called dbetween, as the measure of the between-group variation. Before we describe the second and the third steps, we note that the d score is a sample statistic. Hence, it is best estimated from sets with a large number of samples. To ensure high confidence in the estimated d scores, we define the number of samples in the homogeneous/heterogeneous sets as m= max(mA,mB). However, this definition could be problematic if mA and mB differ by a large number. For example, if mA=5 and mB=1, then under the scheme described above, there will be one homogeneous set with five samples from group A. In addition, there will be five heterogeneous sets, each with four samples from group A and one sample from group B. Due to the large concentration of samples from the same group in heterogeneous sets, dbetween might be low. In fact, dbetween might be close in magnitude to dwithin, even in the presence of a differential signal. This could reduce the power because we identify DRRs by comparing dbetween and dwithin. Hence, if heterogeneous sets have unequal numbers of samples from A and B, i.e., gA≠gB, we adjust m such that the resulting heterogeneous subsets would have equal numbers of samples from both groups. Specifically, we adjust m by reducing it in decrements of 1, but not below 3. We stop reducing m once gA=gB has been achieved or m=3. Notably, we do not reduce m below 3 because of the heavy loss in confidence when estimating a statistic (e.g., standard deviation) from two samples instead of three. 
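The subset rules above can be sketched as follows. This is a schematic reading of the text, not the package's actual code; in particular, the stopping condition in choose_m encodes the stated rule that m is decremented (but not below 3) until balanced heterogeneous subsets are possible:

```python
from itertools import combinations

def choose_m(m_a, m_b):
    """Start at m = max(m_a, m_b); decrement (not below 3) until balanced
    heterogeneous subsets (g_a == g_b) are possible, i.e., m is even and
    both groups can supply m/2 samples."""
    m = max(m_a, m_b)
    while m > 3 and not (m % 2 == 0 and min(m_a, m_b) >= m // 2):
        m -= 1
    return m

def build_subsets(m_a, m_b):
    """Return (m, homogeneous, heterogeneous) subsets of sample labels;
    heterogeneous subsets are filtered to the maximal g_a * g_b / m^2."""
    labels = [f"A{i}" for i in range(1, m_a + 1)] + [f"B{i}" for i in range(1, m_b + 1)]
    m = choose_m(m_a, m_b)
    homo, hetero = [], []
    for subset in combinations(labels, m):
        g_a = sum(lab.startswith("A") for lab in subset)
        het = g_a * (m - g_a) / m ** 2          # degree of heterogeneity
        (homo if het == 0 else hetero).append((subset, het))
    best = max((h for _, h in hetero), default=0)
    return m, [s for s, _ in homo], [s for s, h in hetero if h == best]

m, homo, hetero = build_subsets(2, 2)
print(m, len(homo), len(hetero))   # two homogeneous and four heterogeneous subsets
```

For the mA=5, mB=1 example in the text, choose_m returns 3, matching the adjustment described above.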
Hence, whenever possible, we use a minimum set size of three samples to estimate the d scores. A properly chosen m enables estimation of dwithin and dbetween such that the power can be maximized in the following steps. In the second step, we identify candidate regions of lengths greater than or equal to a user-specified search length, l. To this end, we define Δd=dbetween−dwithin. We smooth Δd with a rolling mean over windows of length l. If the smoothed Δd is positive for all the nucleotides in a contiguous region of length l or longer, we consider the region a potential candidate for a DRR. A few additional details of this step are noteworthy; they ensure a reasonable quality of the identified regions. By default, for the sake of constructing candidate regions, we mask Δd for nucleotides with |ri|<0.1 across all samples. Alteration in reactivity patterns due to changes in the relative magnitudes of very low reactivities is not meaningful. Hence, we mask Δd for nucleotides with very low reactivities to prevent the identification of regions with low signal strength for the majority of nucleotides as candidates. In addition, for identified regions of length 11 or more, we trim nucleotides with reactivity < 0.1 in all samples from the edges. We do not trim shorter regions as this can lead to loss of power. Besides this, we require that the identified regions have non-missing Δd for at least five nucleotides if l>5 and for at least l−1 nucleotides if l≤5 (i.e., Δd not masked due to low signal strength and not 0/0). This might not be the case in poor-quality regions (due to lack of coverage or high background noise) or for short regions identified in data from base-selective probes, such as DMS. Finally, we require that the identified regions have no more than an allowed level of average dwithin. We impose this requirement because our statistical test only assesses whether the between-group variation is significantly greater than the within-group variation. 
However, it is desirable that the reported DRRs have at least moderate correlations within groups to ensure a minimum quality in DRRs. Importantly, filtering out poor-quality, and hence unreliable, candidates before statistical testing could improve power [41]. Hence, we set a liberal threshold for average dwithin in identified regions, i.e., we filter regions with poor within-group correlation but keep those that have moderate to good correlation. In other words, we only filter regions that have highly unreliable reactivity profiles. We call this a minimum quality threshold. We require that the average dwithin be < 0.5 if min(mA,mB)≥2 and < 0.2 if min(mA,mB)=1. Note that average d scores of 0.5 and 0.2 correspond to mean SNR values of 1 and ∼ 3, respectively; we have previously shown that these SNR values filter regions with poor agreement between replicate samples and/or poor coverage [37], and hence, they are liberal thresholds for quality. We impose a more stringent requirement (chosen based on simulation results) for dwithin if only one sample is available for one of the groups. This is because in such cases, the within-group variation in one of the groups cannot be estimated. Hence, we utilize a less liberal threshold for dwithin to compensate for the unavailable quality information from one group. Note that although we do not screen for regions when the user inputs candidate regions (guided discovery), even in this case we require that the candidate regions have no more than an allowed level of average dwithin. The threshold is set identical to that for de novo discovery. Besides this, for guided discovery, we also require that the median of Δd in candidate regions be positive for the regions to be called DRRs. We impose this requirement because DRRs are expected to show an observable increase in variation from within-group to between-group. The candidate regions that satisfy all the quality criteria are statistically tested in the final step. 
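Stripped of the masking, trimming, and quality filters detailed above, the candidate-region search and the final test can be sketched as follows (illustrative only; assumes dwithin and dbetween have already been computed per nucleotide):

```python
import numpy as np
from scipy.stats import wilcoxon

def candidate_regions(delta_d, l):
    """Step 2 sketch: smooth delta_d = d_between - d_within with a rolling
    mean over windows of length l; maximal runs of positive smoothed values
    become candidate regions (masking and trimming rules omitted here)."""
    smooth = np.convolve(np.asarray(delta_d, float), np.ones(l) / l, mode="valid")
    regions, start = [], None
    for i, positive in enumerate(smooth > 0):
        if positive and start is None:
            start = i
        if start is not None and (not positive or i == len(smooth) - 1):
            end_win = i - 1 if not positive else i
            regions.append((start, end_win + l - 1))  # back to nucleotide coordinates
            start = None
    return regions

def region_p_value(d_between, d_within, start, end):
    """Step 3 sketch: one-sided Wilcoxon signed-rank test that the mean of
    d_between - d_within exceeds 0 within the region; BH correction across
    regions from all transcripts would follow."""
    diff = np.asarray(d_between[start:end + 1]) - np.asarray(d_within[start:end + 1])
    return wilcoxon(diff, alternative="greater").pvalue

d_b = np.array([0.6] * 6 + [0.2] * 4)   # toy 10-nt profiles
d_w = np.full(10, 0.2)
regions = candidate_regions(d_b - d_w, l=5)
print(regions, [region_p_value(d_b, d_w, s, e) for s, e in regions])
```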
In the third step, we obtain the significance of the differential reactivities in a candidate region (obtained in the second step or provided by the user) by comparing dwithin and dbetween for the region. Specifically, we perform a Wilcoxon signed-rank test of the null hypothesis of no differential reactivity against the one-sided alternative that the population mean of dbetween−dwithin is greater than 0 [52]. The FDR of the screened regions from all transcripts is controlled using the Benjamini-Hochberg procedure [53]. Finally, we obtain a list of regions with corresponding p values and q values. It is worthwhile to note that the application of a Wilcoxon signed-rank test to compare dwithin and dbetween requires two assumptions under the null hypothesis. First, we assume that if the null hypothesis were true, then dwithin and dbetween would be identically distributed. This is a reasonable assumption because a true null hypothesis implies that all the samples are identical, irrespective of the groups they belong to. Hence, the d scores assessed from the homogeneous and heterogeneous subsets of the samples should have identical distributions. Second, we assume that under the null hypothesis, the Δd values for different nucleotides in a candidate region are independent of each other. This is reasonable under the assumption that σi is directly proportional to μi [40, 56]. Under this assumption, σi/μi should be a constant plus an error term. In other words, while μi might exhibit correlation between adjacent nucleotides, σi/μi, and hence di, should be independent of μi. Furthermore, autocorrelation in μi should not carry over to the Δdi's. We confirmed that this is indeed the case for our Structure-Seq data comprising three identically prepared replicates. We assessed the autocorrelation in the μi profiles for the mRNAs represented in the data. In addition, we randomly assigned one of the replicates to group A and the other two to group B. We assessed the Δd profile for each mRNA. 
Since all replicates were obtained identically, these Δd profiles represented values under the null hypothesis. Then, we computed the autocorrelation in each Δd profile. For a lag of 1, we found that the μi's had a median autocorrelation of 0.2, while the Δdi's had a median autocorrelation of 0.02 (Additional file 1: Figure S12). In other words, the Δdi's were essentially uncorrelated under the null hypothesis. An R package implementing dStruct is freely available online under the BSD-2 license. dStruct utilizes the "parallel" package in R to enable faster processing. In addition, it utilizes the "ggplot2" package to provide detailed plots for differentially reactive regions. Structure-Seq library preparation and sequencing Structure-Seq was adapted from Ding et al. [77]. Yeast cells (S. cerevisiae, BY4741) were grown to an O.D. of 0.5–0.7 in 50 mL of YP with 2% glucose at 30 °C, and then incubated with 10 mM dimethyl sulfate (DMS) for 10 min at 30 °C with vigorous shaking. To stop the reaction, 75 mL of 4.8 M 2-mercaptoethanol (BME) and 25 mL of isoamyl alcohol were added to the cells. Cells were harvested and pellets were washed once with 5 mL of 4.8 M BME and then once with 5 mL of AE buffer (50 mM sodium acetate pH 5.2, 10 mM EDTA). Total RNA was extracted using acid phenol/chloroform. Polyadenylated RNAs (poly(A) RNAs) were enriched with the Poly(A)Purist MAG kit (ThermoFisher Scientific). The poly(A) RNAs were incubated with TURBO DNase and isolated using acid phenol/chloroform. For each biological replicate, 1 μg of DNase-treated poly(A) RNAs was used to generate cDNAs by SuperScript III (ThermoFisher Scientific) using a random hexamer fused with the Illumina TruSeq adaptor (Random-hex RT-primer, Additional file 1: Table S1). This reverse transcription (RT) reaction was performed according to the manufacturer's instructions. The reaction was then stopped by heating the samples at 85 °C for 5 min. 
After the samples cooled down, they were treated with 1 μL of RNase H (5 U/μL, ThermoFisher Scientific) to degrade residual RNAs at 37 °C for 20 min. The cDNAs were purified with phenol (pH 8.0)-chloroform extraction, resolved on a 10% denaturing polyacrylamide gel, and stained with SYBR Gold. Products with length > 30 nt were collected and eluted from the gel in TEN buffer [77] overnight at 4 °C. Gel-purified cDNAs were ethanol-precipitated, re-suspended in water, and ligated with an ssDNA linker (Additional file 1: Table S1) at the 3′ ends using CircLigase I (Epicentre) as previously described [77]. Products > 60 nt were gel purified as above and suspended in 10 μL of water. The ligated cDNAs were subjected to PCR as previously described [77]. To identify potential non-specific primer-dimers in the following steps, a non-template control without any cDNA was also included in the PCR reaction. The products were then resolved on a 10% non-denaturing polyacrylamide gel, and only those above 180 bp were gel purified to eliminate primer dimers. After purification, the library was ethanol-precipitated and re-suspended in water. These libraries were analyzed on an Agilent Bioanalyzer to determine the size distribution. A total of six libraries, including three samples with and three samples without DMS treatment, were sequenced on the Illumina HiSeq 2500 platform for a 2 × 100 bp paired-end cycle run. Note that we performed paired-end sequencing to ensure accurate assessment of local coverage for reactivity calculations [37]. Pre-processing of Structure-Seq data Illumina adaptors were removed from the reads using Trimmomatic (version 0.36). Next, cutadapt (version 1.9.1) was used to trim the random trimers from the 5′ end of the forward reads. 
Trimmed reads were aligned to the S288C reference genome (R64-2-1, from the Saccharomyces Genome Database [78]) using STAR (version 2.5.2b) [79], and only uniquely aligned reads (MAPQ = 255 after mapping) were kept for the subsequent analyses. We mapped the reads once to the whole genome sequence and again to rRNA sequences only. We compared mapping to the genome sequence with mRNA annotations to obtain counts and coverages for use in simulations. The mapping to the rRNAs was used for null comparisons as described in the section on validations with small datasets. The annotation for mRNA untranslated regions (UTRs) was derived as follows. The UTRs for each mRNA were obtained from two published datasets [80, 81]. If the UTR coordinates for the same transcript were different in the two datasets, the coordinates with the widest range were used. For mRNAs without UTR annotations, 135 nucleotides (close to the median lengths of all S. cerevisiae 5′ and 3′ UTRs) were added before and after the ORF region as 5′ and 3′ UTRs. After ignoring genes with sequence overlaps with other genes on the same strand, we retained 4681 mRNAs for use in simulations. Reads were grouped according to their source mRNA, and the start and end indices from genomic alignment of each read were converted to the mRNA coordinates with the start of the 5′ UTR as position +1. Due to multiple copies of rRNA sequences in the genome, reads did not map uniquely to rRNA loci. Hence, we separately mapped the reads to the rRNA sequences after adaptor removal and random trimer trimming. The uniquely mapped reads were grouped according to the source rRNA and the start and end indices of each read were converted to a 1-based coordinate system. Reactivity calculations The reactivity of a nucleotide is a measure of its degree of reaction with the probing reagent. In this study, we used reactivities obtained from the Structure-Seq, SHAPE-Seq, SHAPE-MaP, and PARS protocols. 
Structure-Seq utilizes DMS as a probing reagent. DMS methylates the base pairing faces of unpaired As and Cs [25]. SHAPE-Seq and SHAPE-MaP utilize SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) reagents, which form a 2′-O-ester adduct on the RNA backbone [22]. The adduct formation is favored at unpaired nucleotides relative to paired ones. This is because the higher flexibility of unpaired nucleotides enables them to adopt conformations favorable for reaction with the SHAPE reagent. Both DMS- and SHAPE-modified nucleotides impact primer extension by reverse transcriptase. The "-Seq" and "-MaP" approaches differ in how they are impacted by nucleotide modification. In "-Seq" approaches, primer extension stops upon encountering a modified nucleotide [32]. In "-MaP" approaches, primer extension proceeds upon encountering a modified nucleotide but misreads it, thereby incorporating a noncomplementary nucleotide into the cDNA [82]. Besides treating samples with reagents, Structure-Seq, SHAPE-Seq, and SHAPE-MaP utilize untreated samples to assess background noise. On the other hand, PARS utilizes two nucleases, V1 and S1. The V1 and S1 nucleases cleave the RNA strands next to paired and unpaired nucleotides, respectively. A cDNA library is prepared for RNAs treated with the nucleases by primer extension with reverse transcriptase. In all protocols, the cDNA library is sequenced and reads are analyzed to calculate reactivities. For Structure-Seq and SHAPE-Seq, the number of reads starting 1 nt downstream of each nucleotide was tallied to get the detection counts for the nucleotide (i.e., detection of reagent-induced modifications and noise in treated samples and noise in untreated samples). In addition, the number of reads starting anywhere upstream of, at, or 1 nt downstream of each nucleotide, and ending anywhere downstream of the nucleotide, was tallied as its local coverage. 
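A literal transcription of these counting rules (assuming 1-based, inclusive read coordinates on the transcript; real pipelines derive these tallies from alignment files):

```python
def tally(reads, length):
    """Per-nucleotide detection counts and local coverage for one transcript.

    reads: iterable of (start, end), 1-based inclusive coordinates.
    Detection count at nucleotide i: reads starting at i + 1 (the RT-stop
    signature). Local coverage at i: reads starting at or before i + 1 and
    ending downstream of i (end > i), per the definition above.
    Returns two lists indexed 0..length-1 for nucleotides 1..length.
    """
    detect = [0] * (length + 2)
    cover = [0] * (length + 2)
    for start, end in reads:
        if 2 <= start <= length + 1:
            detect[start - 1] += 1
        for i in range(max(1, start - 1), min(end - 1, length) + 1):
            cover[i] += 1
    return detect[1:length + 1], cover[1:length + 1]

detect, cover = tally([(3, 10), (1, 10), (5, 8)], length=10)
print(detect)  # RT stops detected at nucleotides 2 and 4
print(cover)
```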
Detection rates were calculated for each nucleotide as the ratio of detection counts to local coverage [32, 37]. Raw reactivities were calculated by combining the information from the treated and untreated samples prepared in the same batch. Raw reactivities, ri,raw, were obtained as: $$\begin{array}{@{}rcl@{}} r_{i,\text{raw}} & = & \max\left(\frac{r_{i}^{+} - r_{i}^{-}}{1 - r_{i}^{-}}, 0 \right) \end{array} $$ where \(r_{i}^{+}\) and \(r_{i}^{-}\) are the detection rates at nucleotide i for the treated and untreated samples, respectively [32, 83]. Note that it is a common practice to assign a reactivity score of 0 to nucleotides where \(r_{i}^{-} > r_{i}^{+}\), which can happen due to high background noise [6]. For Structure-Seq data, due to the base-selective nature of DMS, reactivities for Gs and Us were masked as missing information. This step was skipped for SHAPE-Seq data, as SHAPE reagents probe all four nucleotides. Next, raw reactivities were normalized using a 2–8% approach [56, 84], i.e., the top 2% of reactivities were filtered as outliers and the mean of the next 8% of reactivities was used to normalize all the reactivities in that sample. This provided a single sample of reactivity profiles for each batch. Normalized SHAPE-MaP reactivities were available directly from the Weeks lab website. For PARS data, we downloaded the V1 and S1 counts for all transcripts, which were available online [9]. The nucleotide-wise V1 and S1 counts for each cell line were combined as previously described to obtain PARS scores [9], ri, for nucleotide i, as: $$\begin{array}{@{}rcl@{}} r_{i} = \log_{2} \left(\frac{V1_{i} + 5}{S1_{i} + 5} \right). \end{array} $$ The pseudocount of 5 added to the V1 and S1 counts in the above equation prevents overestimation of PARS scores for nucleotides with low coverage. Note that we use ri to denote PARS scores as well as normalized reactivities from Structure-Seq, SHAPE-Seq, or SHAPE-MaP. 
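The formulas above translate directly into code; a hedged NumPy sketch (published pipelines may differ slightly in their outlier and rounding conventions for the 2–8% step):

```python
import numpy as np

def raw_reactivity(rate_treated, rate_untreated):
    """r_raw = max((r+ - r-) / (1 - r-), 0), as in the equation above."""
    rp = np.asarray(rate_treated, float)
    rm = np.asarray(rate_untreated, float)
    return np.maximum((rp - rm) / (1.0 - rm), 0.0)

def normalize_2_8(raw):
    """2-8% normalization: discard the top 2% of reactivities as outliers
    and divide every reactivity by the mean of the next 8%."""
    raw = np.asarray(raw, float)
    order = np.sort(raw)[::-1]
    n_top = int(np.ceil(0.02 * raw.size))
    n_next = int(np.ceil(0.08 * raw.size))
    return raw / order[n_top:n_top + n_next].mean()

def pars_score(v1, s1):
    """PARS score: log2((V1 + 5) / (S1 + 5)); the pseudocount of 5 damps
    scores at low coverage."""
    return np.log2((np.asarray(v1, float) + 5) / (np.asarray(s1, float) + 5))

print(raw_reactivity([0.5, 0.1], [0.2, 0.2]))  # [0.375 0.   ]
print(pars_score([27], [11]))                  # [1.]  (log2(32/16))
```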
Download links for all the datasets used in this study are available in Additional file 1: Table S2.

Implementation of deltaSHAPE

deltaSHAPE was implemented using the software version 1.0 available for download from the Weeks lab website [8]. In addition to the reactivity of a nucleotide, deltaSHAPE requires the standard error of the reactivity (see Additional file 1: Section S1). It is obtained as the standard deviation of the sampling distribution of the reactivity. It can be computed using theoretical models that require counts and local coverage information for a sample. For the Xist long non-coding RNA SHAPE-MaP data, standard errors were available online alongside reactivity data [85]. For SHAPE-Seq and Structure-Seq data, we utilized a simplified expression for a formula derived in our previous publication [37] to estimate the standard error, \(\text{SE}_{i}\), at nucleotide i: $$\begin{array}{@{}rcl@{}} \text{SE}_{i} &=& \frac{1}{f} \sqrt{\frac{r^{+}_{i}}{C^{+}_{i}} + \frac{r^{-}_{i}}{C^{-}_{i}} }, \end{array} $$ where f is the normalization constant for the transcript, obtained using the 2–8% approach [56, 84], \(r^{+}_{i}\) and \(r^{-}_{i}\) represent the detection rates at nucleotide i for the treated and untreated samples, respectively, and \(C^{+}_{i}\) and \(C^{-}_{i}\) represent the local coverages in the corresponding samples.

Implementation of PARCEL

To the best of our knowledge, no software implementing PARCEL is available publicly. Hence, we implemented PARCEL to the best of our understanding based on descriptions by Tapsin et al. and email correspondence with them [21]. We identified DRRs in four steps (see Additional file 1: Section S2). We executed all the steps separately for each RNA. First, we ran edgeR on the detection counts for two groups of samples as input [36]. For each nucleotide of a candidate RNA, edgeR outputs the logarithm of fold change in detection counts between the groups. In addition, it outputs p values quantifying the statistical significances of changes in counts.
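As an aside, the deltaSHAPE standard-error expression given above reduces to a one-line helper. This is a minimal sketch under the same notation, with f, the detection rates, and the local coverages supplied by the caller; it is not the deltaSHAPE implementation itself:

```python
import math

def reactivity_standard_error(rate_treated, cov_treated,
                              rate_untreated, cov_untreated, f):
    """Simplified standard error of a normalized reactivity:
    SE_i = (1 / f) * sqrt(r+_i / C+_i + r-_i / C-_i)."""
    return math.sqrt(rate_treated / cov_treated
                     + rate_untreated / cov_untreated) / f
```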
Let the p value for nucleotide i be \(p_i\). In the second step, we converted the \(p_i\)'s to scores, \(s_i\): $$\begin{array}{@{}rcl@{}} s_{i} & = & \log \left(0.1 \right) - \log \left(p_{i} \right). \end{array} $$ In simple terms, nucleotides with \(p_i < 0.1\) received a positive score. We assigned \(s_i = -10\) to nucleotides that had 1 or fewer detection counts. In the third step, we utilized a recursive implementation of the Kadane algorithm to identify regions with high aggregate scores [86] (see Additional file 1: Section S2 for details). Given the aggregate score of a region, S, we assessed the statistical significance of the structural changes in the region in terms of E values. E values were defined as \(E = Ke^{-\lambda S}\). In simple terms, an E value represents the number of regions that are expected to have at least as high an aggregate score as S if there were no real differential signal. As such, a lower E value indicates a more significant differential signal in a region. The values of K and λ were derived by Tapsin et al. theoretically. These were \(K = 0.0809635\) and \(\lambda = 0.862871\). Hence, given S for a region, its E value could be computed. We considered a region as having a high score if its E value was less than a cutoff. In keeping with Tapsin et al., we used a cutoff of \(E = 10\) for tests with small datasets. We varied the E value cutoff for tests with simulated data. Finally, high-scoring regions were declared as DRRs if they contained at least one nucleotide with (a) Bonferroni-corrected \(p_i < 0.1\) and (b) absolute value of the logarithm of fold change > 2.

Implementation of RASA

We received scripts utilized by Mizrahi et al. for data analysis (correspondence via email) [20]. We extracted key steps from their scripts and implemented them in custom-written scripts for the sake of computational efficiency and proper code organization. RASA accepts detection counts and local coverages for two groups of samples. In addition, it accepts the mean reactivity of a suitable ribosomal RNA in each sample.
The latter information helps account for the normalization requirements for reactivities. To this end, Mizrahi et al. utilized the mean reactivity of 28S rRNA in their study on human SP data. For our tests with S. cerevisiae and simulated data, we used the mean reactivities of 25S rRNA in each sample. For the test with fluoride riboswitch data, we used the mean reactivity of the riboswitch in each sample. Given the abovementioned information, we identified DRRs in two steps. We executed the first step (regression analysis) separately for each nucleotide. In this step, we fit two generalized mixed models (with logistic regression) to the sample-wise counts and coverages while also accounting for variation in the mean reactivities of samples. The null model assumed no effect of the grouping of the samples. It attempted to explain the variation in detection rates from one sample to another as inherent biological variation. The alternative model considered the possibility of a differential signal between the groups in addition to the biological variation. We compared the goodness of fit of the two models using a likelihood ratio test. In the presence of a real differential signal, the alternative model is expected to fit the data better. Hence, we summarized the output of the likelihood ratio test in terms of a p value to quantify the statistical significance of the improvement in the fit by the alternative model. In addition, the alternative model provided an assessment of the change in detection rates between the groups. If the p value for a nucleotide was < 0.01 and its absolute fold change in detection rates was > 1.33, the nucleotide was said to have a significant change in reactivity. We call such nucleotides altered nucleotides. In the second step, we searched for regions where altered nucleotides were clustered (spatial analysis). This step was executed separately for each RNA. We scanned an RNA in windows of a specified length. Mizrahi et al.
used 50 nt as the window length. We used 50 nt for the S. cerevisiae rRNAs, 5 nt for the fluoride riboswitch data, and 11 nt for the simulated data (for justifications, see the relevant subsections of the "Results" section). Let the number of altered nucleotides in a window centered at nucleotide i be \(w_i\). We recorded two parameters for each transcript. The first parameter was the maximum value of \(w_i\). The second was the chi-square distance of the observed distribution of \(w_i\)'s from their expected distribution in the absence of a differential signal. Specifically, in the absence of a differential signal, the \(w_i\)'s should follow a Poisson distribution with the mean equal to the observed mean of the \(w_i\)'s. Hence, we calculated the second parameter as the chi-square distance between the observed distribution and the expected Poisson distribution. In addition, we assessed both parameters for 1000 permutations of the observed arrangement of altered nucleotides. The permuted arrangements provided null values for the parameters. Next, we computed Z scores for each parameter value by comparing their observed values with the distribution of null values. Finally, we classified region(s) with the highest \(w_i\) and \(Z > 2\) for both parameters as DRRs. A few more details are worth noting. In keeping with Mizrahi et al.'s implementation, we excluded nucleotides with untreated sample detection rates greater than 0.008 for As and 0.005 for Cs from the first step. We performed this filtering for both the real and the simulated Structure-Seq data. However, we skipped the filtering for the fluoride riboswitch, as the cutoffs for untreated sample detection rates from SHAPE data were unknown. Moreover, due to the high quality of the fluoride riboswitch data, the untreated sample detection rates were generally low (median ∼ 0.002). In addition, if the local coverage at a nucleotide was greater than 10,000, we scaled down the local coverage to 10,000.
For such nucleotides, we also scaled the detection count, such that the detection rate remained constant. The scaling was done to reduce the computational burden of performing regression analysis for each nucleotide separately.

We added simulated DRRs to experimentally obtained Structure-Seq data for three replicate samples of S. cerevisiae. To start with, we selected regions with lengths ranging from 50 to 75 nt out of 4681 mRNAs. We required that the selected regions have local coverage > 25 and be among the top ∼ 20% of the mRNAs sorted according to average coverage. In addition, we allowed for more than one region in the same transcript. In total, out of the regions that satisfied the coverage criteria, we obtained 1000 regions in 630 mRNAs. Let us represent the selected regions as \(R_i\), with i ranging from 1 to 1000. For each selected region, we simulated three reactivity profiles, one labeled as group A and the other two as two samples of group B. The typical way to simulate reactivity profiles given a secondary structure for a region is to sample reactivities randomly from probability density functions for reactivities of paired and unpaired nucleotides [54, 55]. However, such an approach results in zero correlation between replicates (data not shown). Hence, it does not result in realistic simulations, as real data exhibit correlation within groups as well as between groups, even in DRRs. We therefore developed a new approach to simulate data for replicates, which displays a range of within-group as well as between-group correlations. In what follows, we describe how we simulated reactivities and controlled the correlations within and between groups. To simulate a reactivity profile for a region \(R_i\), first, we sampled 1000 secondary structures using its mRNA sequence as input to RNAsubopt (ViennaRNA package) [63]. We retained only the unique structures from those returned by RNAsubopt.
In addition, we ensured that the MFE structure was represented in this set. Let us denote the generated structures for region \(R_i\) by \(T_{ij}\), where j ranges from 1 to the number of unique structures for \(R_i\). For each \(T_{ij}\), we generated a reactivity profile. To this end, we used patteRNA (with argument "-l") to fit a Gaussian mixture model to our experimental data [33, 64]. Note that the fitting was done on the average reactivity profile from the three samples. Next, given a sequence of base pairing states for \(T_{ij}\), we sampled reactivities using the fitted model (we used scripts published with patteRNA for this purpose). Hence, for each region, we obtained a set of secondary structures and a reactivity profile for each structure. Let us denote the reactivity profile for structure \(T_{ij}\) as \(r_{k,ij}\), where k ranges from 1 to the length of \(R_i\). The final reactivity profile, denoted \(r_{k,i}\), for \(R_i\) for each sample was an ensemble-weighted average of the \(r_{k,ij}\)'s. Hence, we assigned each secondary structure an ensemble weight such that all ensemble weights summed to 1. For each \(R_i\), the \(T_{ij}\)'s were divided into two categories—dominant structures (up to 5 in number) and infrequent structures. The dominant structures received a total ensemble weight between 0.33 and 0.66. The remaining ensemble weight was randomly distributed among the infrequent structures. Let us denote the ensemble weight for \(T_{ij}\) by \(w_{ij}\). Then, the reactivity profile of an \(R_i\) was obtained as \(r_{k,i} = \sum _{j} w_{ij}r_{k,ij}\). The three samples differed in the assignment of ensemble weights. We ensured that the two groups had different structure ensembles by ensuring that the sets of dominant structures for groups A and B were disjoint. Let us represent the \(w_{ij}\)'s for groups A and B by \(w_{ij,A}\) and \(w_{ij,B}\), respectively. In addition, we ensured that the \(r_{k,i}\)'s displayed a range of within-group and between-group variations, as quantified in terms of within-group and between-group Pearson correlation coefficients.
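The weighting scheme above can be sketched as follows. This is a simplified illustration in which the dominant structures split their total weight equally (the split actually used in the study may differ), with hypothetical per-structure profiles as input:

```python
import random

def ensemble_profile(profiles, n_dominant=5, rng=None):
    """Ensemble-weighted reactivity profile from per-structure profiles.

    The first min(n_dominant, len(profiles)) profiles act as dominant
    structures and share a total weight drawn uniformly from [0.33, 0.66]
    (split equally here, for simplicity); the remaining weight is spread
    randomly over the infrequent structures. All weights sum to 1.
    """
    rng = rng or random.Random()
    k = min(n_dominant, len(profiles))
    w_dom = rng.uniform(0.33, 0.66) if k < len(profiles) else 1.0
    weights = [w_dom / k] * k
    infrequent = profiles[k:]
    if infrequent:
        raw = [rng.random() for _ in infrequent]
        total = sum(raw)
        weights += [(1.0 - w_dom) * x / total for x in raw]
    # r_{k,i} = sum_j w_{ij} * r_{k,ij}
    n = len(profiles[0])
    return [sum(w * p[pos] for w, p in zip(weights, profiles))
            for pos in range(n)]
```

Drawing fresh weights per sample, with disjoint dominant sets for the two groups, yields the group-specific ensembles described above.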
Note that the between-group correlation coefficient was obtained as the average of the correlation coefficients from comparing the sample from group A with the two samples from group B. To ensure a range of within-group correlations, we added random noise to \(w_{ij,B}\) to represent two replicates from group B. The parameters and probability density functions for adding random noise were tuned by trial and error to ensure that a range of within-group correlations was obtained. In addition, to ensure a range of between-group correlations, we controlled the ensemble weight for the MFE structure in \(w_{ij,A}\) and \(w_{ij,B}\). Increasing the weight of the MFE structure in both groups to identical levels increased the between-group correlations. The parameters and probability density function dictating the selected level of the MFE weight for an \(R_i\) were tuned by trial and error to ensure that a range of between-group correlations was obtained. Overall, we used five sets of parameters and probability density functions tuned by trial and error to obtain a range of within-group and between-group correlations for the selected \(R_i\)'s (Additional file 1: Figure S4). In keeping with the base-selective nature of DMS, reactivities for Gs and Us were masked as missing information in all simulated profiles. Then, the \(r_{k,i}\)'s for each sample were normalized using the 2–8% approach. After normalization, these \(r_{k,i}\)'s replaced the experimentally obtained reactivity profiles in the corresponding \(R_i\)'s and samples. Let us represent the final reactivity for a transcript t as \(r_{k,t}\), where k ranges from 1 to the transcript's length. In addition to simulating reactivities, we needed counts and coverage information for running deltaSHAPE. Hence, we back-calculated count profiles that corresponded to the \(r_{k,t}\)'s.
We preserved the experimentally observed hit rates of DMS on each mRNA (estimated as the sum of raw experimental reactivities [32]), as well as the untreated detection rates/counts and the local coverages in both the treated and untreated samples. With these pieces fixed, only the counts from the treated samples remained unknown. First, we estimated raw reactivities corresponding to the \(r_{k,t}\)'s. Let the hit rate of transcript t be \(h_t\). Then, raw reactivities for the transcript were obtained as \(\nicefrac {r_{k,t}h_{t}}{\sum _{k} r_{k,t}}\). To these, we added the untreated sample detection rates to get the treated detection rates. Multiplying the treated detection rates by the treated local coverages and rounding the result provided the treated sample counts. The back-calculated counts and local coverages were used along with the \(r_{k,t}\)'s when running deltaSHAPE as described earlier.

dStruct, deltaSHAPE, PARCEL, and RASA were used to identify DRRs in the simulated data. We ran these for a range of parameter values, both more conservative and more liberal than the default parameters. For dStruct, we varied the minimum quality criterion for candidate regions, specified in terms of the average \(d_{\text{within}}\). The maximum allowed value of the average \(d_{\text{within}}\) ranged from 0.1 to 0.5. \(d_{\text{within}}\) values of 0.1 and 0.5 correspond to mean SNR > 6 (stringent high-quality criterion) and > 1 (very liberal quality criterion), respectively [37]. For deltaSHAPE, we varied the colocalization requirement for the number of screened nucleotides with a high reactivity change. At minimum, colocalization of two nucleotides (liberal criterion) within a search window of 11 nt was required to define a DRR, and we increased this requirement to up to six nucleotides (conservative criterion). For PARCEL, we varied the E value cutoff. Lower cutoffs amount to a more conservative criterion. The tested cutoff values ranged from 5 (conservative criterion) to 10,000 (liberal criterion). For RASA, we varied the Z score cutoffs.
Higher cutoffs amount to a more stringent criterion. The tested cutoff values ranged from 1 (liberal criterion) to 5 (conservative criterion).

List of single-nucleotide variants for validation with PARS data

We obtained a list of single-nucleotide variants from the supplementary information provided by Wan et al. [9]. The list contained only those regions with variants (1907 in number) that were found to be riboSNitches by StrucDiff. Of these, 1576 variants were such that two cell lines out of the mother, father, and child trio were allelically identical. Note that none of these 1576 variants were independently validated by Wan et al. to be structure altering. We considered the PARS profiles of the cell lines that were allelically identical for a variant as biological replicates for the 11 nt centered at the variable nucleotide. However, not all of the 1576 variants were unique. There were several duplicates of variants at the same genomic location. The duplicates corresponded to related transcripts, which were either splicing variants of the same gene or splicing variants of a gene and their fusion products with a neighboring gene. We verified that the counts in at least the 11 nt window centered at the variant for the related transcripts were exactly identical for all three cell lines. Hence, we collapsed all duplicates to a single variant. This resulted in 351 variants. While Wan et al.'s pairwise approach ensured that at least 2 of the 3 cell lines had high coverages for the reported variants, it did not ensure that all three had high coverage. Therefore, from the reduced set of 351 variants, we further filtered out variants that had average counts less than 10 in an 11-nt window around the variant, i.e., a variant at site k was excluded if \(\frac {1}{11}{\sum \nolimits }_{i=k-5}^{k+5} \left (V1_{i} +S1_{i}\right) < 10\) for any of the 3 cell lines. In total, we retained 323 variants for our analysis.
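The coverage filter above amounts to a simple window average. A minimal sketch, assuming 0-based indexing of the count arrays:

```python
def passes_coverage_filter(v1_counts, s1_counts, k, min_avg=10.0, half_window=5):
    """True if the mean of (V1_i + S1_i) over the 11-nt window centered
    at 0-based site k is at least `min_avg` (the criterion above)."""
    window = [v1_counts[i] + s1_counts[i]
              for i in range(k - half_window, k + half_window + 1)]
    return sum(window) / len(window) >= min_avg
```

A variant was retained only if this check passed for all three cell lines.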
Implementation of StrucDiff

To the best of our knowledge, no software implementation of StrucDiff is available. Hence, we implemented it as described by Wan et al. [9] using custom scripts. First, we smoothed the V1 and S1 counts using a rolling mean in windows of 5 nt. We obtained smoothed PARS scores, \(\overline {r}_{i}\), for nucleotide i from the smoothed counts: $$\begin{array}{@{}rcl@{}} \overline{r}_{i} = \log_{2} \left(\sum\limits_{j= i - 2}^{i+2} \frac{V1_{j} + 5}{5} \right) - \log_{2} \left(\sum\limits_{j= i - 2}^{i+2} \frac{S1_{j} + 5}{5} \right). \end{array} $$ Second, we calculated the absolute difference in the smoothed PARS scores, \(\Delta \overline {r}_{i}\), between any pair of samples, say father (denoted with subscript f) and child (denoted with subscript c), as \(\Delta \overline {r}_{i} = \left | \overline {r}_{i,f} - \overline {r}_{i,c} \right |\). Third, in terms of \(\Delta \overline {r}_{i}\), we estimated the structural change score, \(v_{\text{SNV}}\), around a single-nucleotide variant at site k as: $$\begin{array}{@{}rcl@{}} v_{\text{SNV}} & = & \frac{1}{5} \sum\limits_{i = k-2}^{k+2} \Delta\overline{r}_{i}. \end{array} $$ Fourth, we assessed the statistical significance of the observed \(v_{\text{SNV}}\). To this end, we permuted the sequence of non-zero \(\Delta \overline {r}_{i}\) values 1000 times. For each permuted sequence, we assessed a structural change score under the null hypothesis, \(v_{\text{null}}\), which we defined similarly to \(v_{\text{SNV}}\). A p value was estimated for a single-nucleotide variant as the fraction of \(v_{\text{null}}\) values greater than the corresponding \(v_{\text{SNV}}\). We used this implementation to estimate the p values for a subset of the riboSNitches reported by Wan et al.
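The four steps above can be sketched as follows. This is an illustrative reimplementation, not Wan et al.'s code; in particular, how the permuted non-zero Δr̄ values are placed back onto the transcript (here: shuffled among the non-zero positions, with zeros left in place) is an assumption of this sketch:

```python
import math
import random

def smoothed_pars(v1_counts, s1_counts, i):
    # rolling 5-nt window with the pseudocount of 5 folded in, as above
    win = range(i - 2, i + 3)
    return (math.log2(sum((v1_counts[j] + 5) / 5 for j in win))
            - math.log2(sum((s1_counts[j] + 5) / 5 for j in win)))

def strucdiff_pvalue(delta, k, n_perm=1000, rng=None):
    """Permutation p value for v_SNV at 0-based site k.

    `delta` holds the |delta r-bar| value per nucleotide. Non-zero values
    are shuffled among the non-zero positions and v_null is re-read from
    the same 5-nt window in each permutation.
    """
    rng = rng or random.Random()
    v_snv = sum(delta[k - 2:k + 3]) / 5
    positions = [j for j, d in enumerate(delta) if d != 0]
    values = [delta[j] for j in positions]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(values)
        null = list(delta)
        for j, v in zip(positions, values):
            null[j] = v
        if sum(null[k - 2:k + 3]) / 5 >= v_snv:
            hits += 1
    return hits / n_perm
```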
The subset was selected by screening for variants that (i) were shared by at least 2 related transcripts, (ii) had identical V1 and S1 counts in an 11-nt window around the shared variant for the related transcripts, and (iii) were not classified identically by StrucDiff in the context of all related transcripts that shared the variant, i.e., in some contexts, the variant was classified as structure altering and in other contexts, it was classified as not structure altering.

Abbreviations

DRR: Differentially reactive region
FDR: False discovery rate
MFE: Minimum free energy
SP: Structure profiling or structure probing

References

Sharp PA. The centrality of RNA. Cell. 2009; 136(4):577–80. https://doi.org/10.1016/j.cell.2009.02.007. Mortimer SA, Kidwell MA, Doudna JA. Insights into RNA structure and function from genome-wide studies. Nat Rev Genet. 2014; 15:469. Kwok CK, Tang Y, Assmann SM, Bevilacqua PC. The RNA structurome: transcriptome-wide structure probing with next-generation sequencing. Trends Biochem Sci. 2015; 40(4):221–32. https://doi.org/10.1016/j.tibs.2015.02.005. Kubota M, Chan D, Spitale RC. RNA structure: merging chemistry and genomics for a holistic perspective. BioEssays. 2015; 37(10):1129–38. https://doi.org/10.1002/bies.201300146. Lu Z, Chang HY. Decoding the RNA structurome. Curr Opin Struct Biol. 2016; 36:142–8. https://doi.org/10.1016/j.sbi.2016.01.007. Choudhary K, Deng F, Aviran S. Comparative and integrative analysis of RNA structural profiling data: current practices and emerging questions. Quant Biol. 2017; 5(1):3–24. https://doi.org/10.1007/s40484-017-0093-6. Spitale RC, Flynn RA, Zhang QC, Crisalli P, Lee B, Jung JW, Kuchelmeister HY, Batista PJ, Torre EA, Kool ET, Chang HY. Structural imprints in vivo decode RNA regulatory mechanisms. Nature. 2015; 519(7544):486–90. https://doi.org/10.1038/nature14263. Smola MJ, Calabrese JM, Weeks KM. Detection of RNA–protein interactions in living cells with SHAPE. Biochemistry. 2015; 54(46):6867–875.
https://doi.org/10.1021/acs.biochem.5b00977. Wan Y, Qu K, Zhang QC, Flynn RA, Manor O, Ouyang Z, Zhang J, Spitale RC, Snyder MP, Segal E, Chang HY. Landscape and variation of RNA secondary structure across the human transcriptome. Nature. 2014; 505:706. Watters KE, Strobel EJ, Yu AM, Lis JT, Lucks JB. Cotranscriptional folding of a riboswitch at nucleotide resolution. Nat Struct Mol Biol. 2016; 23:1124. Bai Y, Tambe A, Zhou K, Doudna JA. RNA-guided assembly of Rev-RRE nuclear export complexes. eLife. 2014; 3:e03656. Strobel EJ, Watters KE, Nedialkov Y, Artsimovitch I, Lucks JB. Distributed biotin–streptavidin transcription roadblocks for mapping cotranscriptional RNA folding. Nucleic Acids Res. 2017; 45(12):e109. Watters KE, Choudhary K, Aviran S, Lucks JB, Perry KL, Thompson JR. Probing of RNA structures in a positive sense RNA virus reveals selection pressures for structural elements. Nucleic Acids Res. 2018; 46(5):2573–584. Guo JU, Bartel DP. RNA G-quadruplexes are globally unfolded in eukaryotic cells and depleted in bacteria. Science. 2016; 353(6306). https://doi.org/10.1126/science.aaf5371. Barnwal RP, Loh E, Godin KS, Yip J, Lavender H, Tang CM, Varani G. Structure and mechanism of a molecular rheostat, an RNA thermometer that modulates immune evasion by Neisseria meningitidis. Nucleic Acids Res. 2016; 44(19):9426–9437. Righetti F, Nuss AM, Twittenhoff C, Beele S, Urban K, Will S, Bernhart SH, Stadler PF, Dersch P, Narberhaus F. Temperature-responsive in vitro RNA structurome of Yersinia pseudotuberculosis. Proc Natl Acad Sci. 2016; 113(26):7237. Lackey L, Coria A, Woods C, McArthur E, Laederach A. Allele-specific SHAPE-MaP assessment of the effects of somatic variation and protein binding on mRNA structure. RNA. 2018. https://doi.org/10.1261/rna.064469.117. http://rnajournal.cshlp.org/content/early/2018/01/09/rna.064469.117.full.pdf+html. Accessed 9 Jan 2018.
Burlacu E, Lackmann F, Aguilar LC, Belikov S, Nues RV, Trahan C, Hector RD, Dominelli-Whiteley N, Cockroft SL, Wieslander L, Oeffinger M, Granneman S. High-throughput RNA structure probing reveals critical folding events during early 60S ribosome assembly in yeast. Nat Commun. 2017; 8(1):714. https://doi.org/10.1038/s41467-017-00761-8. Talkish J, May G, Lin Y, Woolford JL, McManus CJ. Mod-seq: high-throughput sequencing for chemical probing of RNA structure. RNA. 2014; 20(5):713–20. https://doi.org/10.1261/rna.042218.113. http://rnajournal.cshlp.org/content/early/2014/03/24/rna.042218.113.full.pdf+html. Accessed 24 Mar 2014. Mizrahi O, Nachshon A, Shitrit A, Gelbart IA, Dobesova M, Brenner S, Kahana C, Stern-Ginossar N. Virus-induced changes in mRNA secondary structure uncover cis-regulatory elements that directly control gene expression. Mol Cell. 2018; 72(5):862–874.e5. Tapsin S, Sun M, Shen Y, Zhang H, Lim XN, Susanto TT, Yang SL, Zeng GS, Lee J, Lezhava A, et al. Genome-wide identification of natural RNA aptamers in prokaryotes and eukaryotes. Nat Commun. 2018; 9(1):1289. Weeks KM. Advances in RNA structure analysis by chemical probing. Curr Opin Struct Biol. 2010; 20(3):295–304. https://doi.org/10.1016/j.sbi.2010.04.001. Knapp G. [16] Enzymatic approaches to probing of RNA secondary and tertiary structure. In: Methods in Enzymology, vol. 180. Cambridge: Academic Press: 1989. p. 192–212. https://doi.org/10.1016/0076-6879(89)80102-8. http://www.sciencedirect.com/science/article/pii/0076687989801028. Accessed 7 Jan 2004. Kwok CK. Dawn of the in vivo RNA structurome and interactome. Biochem Soc Trans. 2016; 44(5):1395–410. https://doi.org/10.1042/BST20160075. http://www.biochemsoctrans.org/content/44/5/1395.full.pdf. Ding Y, Tang Y, Kwok CK, Zhang Y, Bevilacqua PC, Assmann SM. In vivo genome-wide profiling of RNA secondary structure reveals novel regulatory features. Nature. 2013; 505:696. Smola MJ, Rice GM, Busan S, Siegfried NA, Weeks KM.
Selective 2'-hydroxyl acylation analyzed by primer extension and mutational profiling (SHAPE-MaP) for direct, versatile and accurate RNA structure analysis. Nat Protoc. 2015; 10:1643. Lucks JB, Mortimer SA, Trapnell C, Luo S, Aviran S, Schroth GP, Pachter L, Doudna JA, Arkin AP. Multiplexed RNA structure characterization with selective 2'-hydroxyl acylation analyzed by primer extension sequencing (SHAPE-Seq). Proc Natl Acad Sci. 2011; 108(27):11063–8. Hector RD, Burlacu E, Aitken S, Bihan TL, Tuijtel M, Zaplatina A, Cook AG, Granneman S. Snapshots of pre-rRNA structural flexibility reveal eukaryotic 40S assembly dynamics at nucleotide resolution. Nucleic Acids Res. 2014; 42(19):12138–54. Poulsen LD, Kielpinski LJ, Salama SR, Krogh A, Vinther J. SHAPE selection (SHAPES) enrich for RNA structure signal in SHAPE sequencing-based probing data. RNA. 2015; 21(5):1042–1052. https://doi.org/10.1261/rna.047068.114. http://rnajournal.cshlp.org/content/21/5/1042.full.pdf+html. Selega A, Sirocchi C, Iosub I, Granneman S, Sanguinetti G. Robust statistical modeling improves sensitivity of high-throughput RNA structure probing experiments. Nat Methods. 2017; 14(1):83. Li B, Tambe A, Aviran S, Pachter L. PROBer provides a general toolkit for analyzing sequencing-based toeprinting assays. Cell Syst. 2017; 4(5):568–74. Aviran S, Lucks JB, Pachter L. RNA structure characterization from chemical mapping experiments. In: 2011 49th Annual Allerton Conference on Communication, Control, and Computing. Monticello: IEEE: 2011. p. 1743–50. https://doi.org/10.1109/Allerton.2011.6120379. Ledda M, Aviran S. PATTERNA: transcriptome-wide search for functional RNA elements via structural data signatures. Genome Biol. 2018; 19(1):28. https://doi.org/10.1186/s13059-018-1399-z. Kutchko KM, Laederach A. Transcending the prediction paradigm: novel applications of SHAPE to RNA function and evolution. Wiley Interdiscip Rev RNA. 2016; 8(1):1374. https://doi.org/10.1002/wrna.1374. Woods CT, Laederach A. 
Classification of RNA structure change by 'gazing' at experimental data. Bioinformatics. 2017; 33(11):1647–55. Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1):139–40. Choudhary K, Shih NP, Deng F, Ledda M, Li B, Aviran S. Metrics for rapid quality control in RNA structure probing experiments. Bioinformatics. 2016; 32(23):3575–583. Choudhary K, Ruan L, Deng F, Shih N, Aviran S. SEQualyzer: interactive tool for quality control and exploratory analysis of high-throughput RNA structural profiling data. Bioinformatics. 2017; 33(3):441–3. Kutchko KM, Madden EA, Morrison C, Plante KS, Sanders W, Vincent HA, Cruz Cisneros MC, Long KM, Moorman NJ, Heise MT, Laederach A. Structural divergence creates new functional features in alphavirus genomes. Nucleic Acids Res. 2018; 46(7):3657–670. Vaziri S, Koehl P, Aviran S. Extracting information from RNA SHAPE data: Kalman filtering approach. PLoS ONE. 2018; 13(11):e0207029. Goeman JJ, Solari A. Multiple hypothesis testing in genomics. Stat Med. 2014; 33(11):1946–78. Li H, Aviran S. Statistical modeling of RNA structure profiling experiments enables parsimonious reconstruction of structure landscapes. Nat Commun. 2018; 9(1):606. https://doi.org/10.1038/s41467-018-02923-8. Rouskin S, Zubradt M, Washietl S, Kellis M, Weissman JS. Genome-wide probing of RNA structure reveals active unfolding of mRNA structures in vivo. Nature. 2013; 505:701. Wells SE, Hughes JM, Igel AH, Ares M. Use of dimethyl sulfate to probe RNA structure in vivo. Methods Enzymol. 2000; 318:479–92. Tack DC, Tang Y, Ritchey LE, Assmann SM, Bevilacqua PC. StructureFold2: bringing chemical probing data into the computational fold of RNA structural analysis. Methods. 2018. https://doi.org/10.1016/j.ymeth.2018.01.018. Martin JS, Halvorsen M, Davis-Neulander L, Ritz J, Gopinath C, Beauregard A, Laederach A.
Structural effects of linkage disequilibrium on the transcriptome. RNA. 2012; 18(1):77–87. https://doi.org/10.1261/rna.029900.111. http://rnajournal.cshlp.org/content/18/1/77.full.pdf+html. Accessed 22 Nov 2011. Kwok CK, Sahakyan AB, Balasubramanian S. Structural analysis using SHALiPE to reveal RNA G–quadruplex formation in human precursor microRNA. Angew Chem Int Ed. 2016; 55(31):8958–961. https://doi.org/10.1002/anie.201603562. Hansen KD, Langmead B, Irizarry RA. BSmooth: from whole genome bisulfite sequencing reads to differentially methylated regions. Genome Biol. 2012; 13(10):R83. https://doi.org/10.1186/gb-2012-13-10-r83. Korthauer K, Chakraborty S, Benjamini Y, Irizarry RA. Detection and accurate false discovery rate control of differentially methylated regions from whole genome bisulfite sequencing. Biostatistics. 2018; kxy007. https://doi.org/10.1093/biostatistics/kxy007. Schweikert G, Cseke B, Clouaire T, Bird A, Sanguinetti G. MMDiff: quantitative testing for shape changes in ChIP-Seq data sets. BMC Genomics. 2013; 14(1):826. Mayo TR, Schweikert G, Sanguinetti G. M3D: a kernel-based test for spatially correlated changes in methylation profiles. Bioinformatics. 2014; 31(6):809–16. McDonald JH. Handbook of biological statistics, vol. 2. Baltimore: Sparky House Publishing; 2009. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995; 57(1):289–300. Sükösd Z, Swenson MS, Kjems J, Heitsch CE. Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions. Nucleic Acids Res. 2013; 41(5):2807–816. Deng F, Ledda M, Vaziri S, Aviran S. Data-directed RNA secondary structure prediction using probabilistic modeling. RNA. 2016; 22(8):1109–19. https://doi.org/10.1261/rna.055756.115. http://rnajournal.cshlp.org/content/22/8/1109.full.pdf+html. Accessed 1 June 2016. Low JT, Weeks KM. SHAPE-directed RNA secondary structure prediction. Methods. 2010; 52(2):150–8.
https://doi.org/10.1016/j.ymeth.2010.06.007. Li J, Jiang H, Wong WH. Modeling non-uniformity in short-read rates in RNA-Seq data. Genome Biol. 2010; 11(5):50. https://doi.org/10.1186/gb-2010-11-5-r50. Ren A, Rajashankar KR, Patel DJ. Fluoride ion encapsulation by Mg2+ ions and phosphates in a fluoride riboswitch. Nature. 2012; 486(7401):85. Baker JL, Sudarsan N, Weinberg Z, Roth A, Stockbridge RB, Breaker RR. Widespread genetic switches and toxicity resistance proteins for fluoride. Science. 2012; 335(6065):233–5. Battiste JL, Mao H, Rao NS, Tan R, Muhandiram DR, Kay LE, Frankel AD, Williamson JR. α helix-RNA major groove recognition in an HIV-1 Rev peptide-RRE RNA complex. Science. 1996; 273(5281):1547–51. Daugherty MD, D'Orso I, Frankel AD. A solution to limited genomic capacity: using adaptable binding surfaces to assemble the functional HIV Rev oligomer on RNA. Mol Cell. 2008; 31(6):824–34. Jayaraman B, Crosby DC, Homer C, Ribeiro I, Mavor D, Frankel AD. RNA-directed remodeling of the HIV-1 protein Rev orchestrates assembly of the Rev–Rev response element complex. eLife. 2014; 3:e04120. Lorenz R, Bernhart SH, Höner zu Siederdissen C, Tafer H, Flamm C, Stadler PF, Hofacker IL. ViennaRNA Package 2.0. Algorithms Mol Biol. 2011; 6(1):26. https://doi.org/10.1186/1748-7188-6-26. Radecki P, Ledda M, Aviran S. Automated recognition of RNA structure motifs by their SHAPE data signatures. Genes. 2018; 9(6):300. Huang Y, Xu H, Calian V, Hsu JC. To permute or not to permute. Bioinformatics. 2006; 22(18):2244–248. Benjamini Y, Taylor J, Irizarry RA. Selection-corrected statistical inference for region detection with high-throughput assays. J Am Stat Assoc. 2018:1–15. https://doi.org/10.1080/01621459.2018.1498347. Zubradt M, Gupta P, Persad S, Lambowitz AM, Weissman JS, Rouskin S. DMS-MaPseq for genome-wide or targeted RNA structure probing in vivo. Nat Methods. 2016; 14:75. Rencher AC. Multivariate analysis of variance. In: Methods of Multivariate Analysis.
New Jersey: Wiley; 2003. p. 156–247. https://doi.org/10.1002/0471271357.ch6. Tusher VG, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci. 2001; 98(9):5116. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, Smyth GK. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7):e47. Deigan KE, Li TW, Mathews DH, Weeks KM. Accurate SHAPE-directed RNA structure determination. Proc Natl Acad Sci. 2009; 106(1):97–102. Bolstad BM, Irizarry RA, Åstrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003; 19(2):185–93. De Winter JC. Using the Student's t-test with extremely small sample sizes. Pract Assess Res Eval. 2013; 18(10):1–12. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci. 2005; 102(43):15545–50. Lee PH, O'Dushlaine C, Thomas B, Purcell SM. INRICH: interval-based enrichment analysis for genome-wide association studies. Bioinformatics. 2012; 28(13):1797–9. Youden WJ. Index for rating diagnostic tests. Cancer. 1950; 3(1):32–5. Ding Y, Kwok CK, Tang Y, Bevilacqua PC, Assmann SM. Genome-wide profiling of in vivo RNA structure at single-nucleotide resolution using structure-seq. Nat Protoc. 2015; 10:1050. Cherry JM, Hong EL, Amundsen C, Balakrishnan R, Binkley G, Chan ET, Christie KR, Costanzo MC, Dwight SS, Engel SR, Fisk DG, Hirschman JE, Hitz BC, Karra K, Krieger CJ, Miyasato SR, Nash RS, Park J, Skrzypek MS, Simison M, Weng S, Wong ED. Saccharomyces Genome Database: the genomics resource of budding yeast. Nucleic Acids Res. 2012; 40(D1):700–5. https://doi.org/10.1093/nar/gkr1029.
Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, Batut P, Chaisson M, Gingeras TR. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1):15–21. Nagalakshmi U, Wang Z, Waern K, Shou C, Raha D, Gerstein M, Snyder M. The Transcriptional landscape of the yeast genome defined by RNA sequencing. Science. 2008; 320(5881):1344. Yassour M, Kaplan T, Fraser HB, Levin JZ, Pfiffner J, Adiconis X, Schroth G, Luo S, Khrebtukova I, Gnirke A, Nusbaum C, Thompson DA, Friedman N, Regev A. Ab initio construction of a eukaryotic transcriptome by massively parallel mRNA sequencing. Proc Natl Acad Sci. 2009; 106(9):3264. Siegfried NA, Busan S, Rice GM, Nelson JAE, Weeks KM. RNA motif discovery by SHAPE and mutational profiling (SHAPE-MaP). Nat Methods. 2014; 11:959. Aviran S, Trapnell C, Lucks JB, Mortimer SA, Luo S, Schroth GP, Doudna JA, Arkin AP, Pachter L. Modeling and automation of sequencing-based characterization of RNA structure. Proc Natl Acad Sci. 2011; 108(27):11069. Sloma MF, Mathews DH, Chen SJ, Burke-Aguero DH. Chapter four – improving RNA secondary structure prediction with structure mapping data. In: Methods in Enzymology, vol. 553. Cambridge: Academic Press: 2015. p. 91–114. https://doi.org/10.1016/bs.mie.2014.10.053. http://www.sciencedirect.com/science/article/pii/S0076687914000548. Accessed 3 Feb 2015. Smola MJ, Christy TW, Inoue K, Nicholson CO, Friedersdorf M, Keene JD, Lee DM, Calabrese JM, Weeks KM. SHAPE reveals transcript-wide interactions, complex structural domains, and protein interactions across the Xist lncRNA in living cells. Proc Natl Acad Sci. 2016; 113(37):10322. Takaoka T. Efficient algorithms for the maximum subarray problem by distance matrix multiplication. Electron Notes Theor Comput Sci. 2002; 61:191–200. Choudhary K, Aviran S. AviranLab/dStruct: Initial release. GitHub repository. 2019. https://github.com/AviranLab/dStruct. Accessed 9 Jan 2019. Choudhary K, Lai YH, Tran EJ, Aviran S. 
dStruct: identifying differentially reactive regions from RNA structurome profiling data, Datasets. Zenodo. 2019. https://doi.org/10.5281/zenodo.2536501. Cordero P, Lucks JB, Das R. An RNA Mapping Database for curating RNA structure mapping experiments. Bioinformatics. 2012; 28(22):3006–8. https://doi.org/10.1093/bioinformatics/bts554. Choudhary K, Aviran S. AviranLab/SPEQC: First commit. GitHub repository. 2016. https://github.com/AviranLab/SPEQC. Weeks K. Weeks Laboratory: data files. https://weeks.chem.unc.edu/. Wan Y, Qu K, Zhang QC, Flynn RA, Manor O, Ouyang Z, Zhang J, Spitale RC, Snyder MP, Segal E, Chang HY. Landscape and variation of RNA secondary structure across the human transcriptome. Gene Expression Omnibus. 2013. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE50676. Accessed 19 Dec 2013. We thank Mirko Ledda for numerous helpful discussions, Pierce Radecki, Sana Vaziri and Mirko Ledda for critically reading the manuscript, Aharon Nachshon and Niranjan Nagarajan for answering questions regarding RASA and PARCEL, respectively, and two anonymous reviewers for providing constructive criticism that helped improve the manuscript. This work was supported by the National Human Genome Research Institute grant R00-HG006860 to SA, National Institute of General Medical Sciences grant R01-GM097332, and P30CA023168 for core facilities at the Purdue University Center for Cancer Research to EJT and for a bioinformatics fellowship to YL. Department of Biomedical Engineering and Genome Center, University of California, Davis, One Shields Avenue, Davis, 95616, CA, USA Krishna Choudhary & Sharon Aviran Department of Biochemistry, Purdue University, BCHM 305, 175 S. University Street, West Lafayette, 47907-2063, IN, USA Yu-Hsuan Lai & Elizabeth J. Tran Purdue University Center for Cancer Research, Purdue University, Hansen Life Sciences Research Building, Room 141, 201 S. University Street, West Lafayette, 47907-2064, IN, USA Elizabeth J.
Tran. KC and SA developed the method and analyzed the data. YL and EJT designed and performed the Structure-Seq experiments. KC and YL pre-processed the Structure-Seq data. KC and SA wrote the manuscript with input from YL and EJT. KC implemented the dStruct software package. All authors read and approved the final manuscript. Correspondence to Sharon Aviran. dStruct is written in R version 3.4.1. The source code is freely available on GitHub at https://github.com/AviranLab/dStruct under the BSD-2 license [87]. Data and scripts supporting the conclusions of this article are available on Zenodo at https://doi.org/10.5281/zenodo.2536501 [88]. The Zenodo link provides the experimental counts, coverages and reactivities for all the Structure-Seq samples. These were obtained by pre-processing sequencing data and summarizing the mapped reads for reactivity calculations as described in the "Methods" section. The simulated reactivity profiles that were used in conjunction with experimental data for the results illustrated in Fig. 5 and Additional file 1: Figures S4-S6 are also available on Zenodo. The organization of data on Zenodo is described in a note with Additional file 1: Table S2. The original datasets used in this study are available at the links listed in Additional file 1: Table S2. These can be obtained from the following online sources: Zenodo: https://doi.org/10.5281/zenodo.2536501 [88] RMDB: https://rmdb.stanford.edu/search/?sstring=FLUORSW_BZCN, RMDB_IDs: FLUORSW_BZCN_0021, FLUORSW_BZCN_0024, FLUORSW_BZCN_0027, FLUORSW_BZCN_0028, FLUORSW_BZCN_0029, FLUORSW_BZCN_0030, FLUORSW_BZCN_0031, FLUORSW_BZCN_0032 [12, 89] AviranLab GitHub repository: https://github.com/AviranLab/SPEQC [37, 90] Weeks lab repository: http://www.chem.unc.edu/rna [8, 85, 91] GEO: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE50676 [92] Supplementary information.
Detailed overview of deltaSHAPE, PARCEL, RASA, StrucDiff, and classSNitch, supplementary figures and tables. The download links for all datasets used in this study and organization of data on Zenodo are described in Table S2. (PDF 11,040 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Choudhary, K., Lai, YH., Tran, E. et al. dStruct: identifying differentially reactive regions from RNA structurome profiling data. Genome Biol 20, 40 (2019). https://doi.org/10.1186/s13059-019-1641-3 RNA structure Structure probing Differential analysis Transcriptome-wide profiling
An integrated strategy for target SSR genotyping with toleration of nucleotide variations in the SSRs and flanking regions Yongxue Huo1, Yikun Zhao ORCID: orcid.org/0000-0001-6803-52751, Liwen Xu1, Hongmei Yi1, Yunlong Zhang1, Xianqing Jia ORCID: orcid.org/0000-0003-2033-20481, Han Zhao2, Jiuran Zhao1 & Fengge Wang ORCID: orcid.org/0000-0001-6926-116X1 With the broad application of high-throughput sequencing and its reduced cost, simple sequence repeat (SSR) genotyping by sequencing (SSR-GBS) has been widely used for interpreting genetic data across different fields, including population genetic diversity and structure analysis, the construction of genetic maps, and the investigation of intraspecies relationships. The development of accurate and efficient typing strategies for SSR-GBS is urgently needed and several tools have been published. However, to date, no suitable accurate genotyping method can tolerate single nucleotide variations (SNVs) in SSRs and flanking regions. These SNVs may be caused by PCR and sequencing errors or SNPs among varieties, and they directly affect sequence alignment and genotyping accuracy. Here, we report a new integrated strategy named the accurate microsatellite genotyping tool based on targeted sequencing (AMGT-TS) and provide a user-friendly web-based platform and command-line version of AMGT-TS. To handle SNVs in the SSRs or flanking regions, we developed a broad matching algorithm (BMA) that can quickly and accurately achieve SSR typing for ultradeep coverage and high-throughput analysis of loci with SNVs compatibility and grouping of typed reads for further in-depth information mining. To evaluate this tool, we tested 21 randomly sampled loci in eight maize varieties, accompanied by experimental validation on actual and simulated sequencing data. Our evaluation showed that, compared to other tools, AMGT-TS presented extremely accurate typing results with single base resolution for both homozygous and heterozygous samples. 
This integrated strategy can achieve accurate SSR genotyping based on targeted sequencing, and it can tolerate single nucleotide variations in the SSRs and flanking regions. This method can be readily applied to divergent sequencing platforms and species and has excellent application prospects in genetic and population biology research. The web-based platform and command-line version of AMGT-TS are available at https://amgt-ts.plantdna.site:8445 and https://github.com/plantdna/amgt-ts, respectively. Simple sequence repeats (SSRs), also named microsatellites or short tandem repeats (STRs), are widely found in eukaryotic genomes [1]. The sequences that flank an SSR may be sufficiently conserved to allow specific amplification primers to be designed; thus, SSRs can be detected through conventional PCR amplification and typed based on the amplification products. The majority of SSRs are noncoding and can affect the expression, splicing, protein sequence, and genome structure of genes [2, 3]. SSR markers are commonly used in genome-related studies [4, 5]. SSR genotyping is also applied extensively in different fields, including population genetic diversity and structure analysis, the construction of genetic maps, and the investigation of intraspecies relationships [6,7,8]. All applications of SSRs depend on accurate SSR genotyping methods, and inaccurate genotyping may have serious consequences [9, 10]. Moreover, the construction and application of DNA databases also require the accurate SSR genotyping of samples [11, 12]. Factors influencing accurate SSR genotyping include the following: 1) polymerase slippage is inherent to in vitro PCR amplification of SSRs, which leads to incorrect SSR alleles and makes it challenging to genotype SSRs accurately; and 2) the occurrence of variations in the SSR or flanking region will directly affect the genotyping results (Fig. 1) [13, 14].
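As a toy illustration of the first factor, polymerase slippage during amplification can be modeled as each amplified copy occasionally gaining or losing one motif repeat. The sketch below is an illustrative simulation, not a model of real PCR chemistry; the function name, slip probability, and parameters are assumptions for demonstration only.

```python
import random

def amplify_with_slippage(motif, repeats, n_reads, slip_prob=0.05, seed=0):
    """Simulate stutter: each amplified copy gains or loses one motif repeat
    with probability `slip_prob` (a deliberately simplified model)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    counts = {}
    for _ in range(n_reads):
        r = repeats
        if rng.random() < slip_prob:
            r += rng.choice([-1, 1])  # one-repeat slippage up or down
        counts[r] = counts.get(r, 0) + 1
    return counts  # {repeat count: number of reads}

# Most reads keep the true repeat number; stutter produces minor +/-1 peaks
print(amplify_with_slippage("AGC", 5, 1000, slip_prob=0.1, seed=1))
```

Even with this simple model, the true allele remains the dominant peak while stutter products form small satellite peaks, which is why downstream typing must separate prominent peaks from noise.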
These problems have accompanied the development of SSR genotyping technology, which has progressed from the initial gel electrophoresis stage through capillary electrophoresis and first- and second-generation sequencing to high-throughput amplicon sequencing. At present, amplicon sequencing technology is widely used in genetic disease screening and gene diagnosis, as well as in other research [15, 16]. However, there is still no suitably accurate SSR genotyping method that can tolerate nucleotide variations in SSRs and flanking regions, which may affect the sequence alignment and genotyping accuracy. Schematic diagram of error-prone SSR typing caused by variations in the SSR or flanking regions. Take a site with a CAGCC SSR motif as an example: for Seq1 (from the reference genome), the SSR region clearly contains three repeats of CAGCC; for Seq2, with a G->A variation in the SSR region, the regular exact matching algorithm will type it as two repeats of CAGCC, while a fault-tolerant algorithm can recognize it as three repeats; for Seq3, with a T->C variation in the right flanking region, a flanking boundary-based algorithm will treat it as three repeats of CAGCC, whereas the regular exact matching algorithm will recognize it as four repeats. When comparing different samples, especially different varieties, this discordance in SSR typing will cause misinterpretation of genetic information. Here, we developed a new open-source microsatellite genotyping strategy that includes an accurate microsatellite genotyping tool based on targeted sequencing (AMGT-TS) and a user-friendly web-based version. AMGT-TS can quickly perform precise SSR genotyping with ultradeep coverage and high locus throughput, and it includes a broad matching algorithm (BMA) that can handle situations with nucleotide variations in the SSR and flanking regions. We also performed a comprehensive assessment of AMGT-TS using internal laboratory testing and simulated data testing.
The results showed that AMGT-TS could achieve nearly 100% typing accuracy. Although AMGT-TS was developed on plants, which are the focus of our current work, the new method is generic and can be used as a new tool in many biological fields. All completed codes, sample data, and documentation have been submitted to GitHub. AMGT-TS tool design The process of AMGT-TS has three main steps to obtain accurate genotyping information (Fig. 2). For each sample: first, reads are mapped to their bona fide loci according to the reference sequences; then, the SSR regions are determined by the loci's flanking information; and finally, AMGT-TS obtains accurate SSR genotyping results based on the dissection of read information, such as read number and primary SSR typing. In detail, after obtaining raw sequencing data (usually in FASTQ format), we use FASTX (http://hannonlab.cshl.edu/fastx_toolkit/) to remove low-quality data. Then, we perform the "Alignment to loci" processing step with bwa-mem [17], based on the read information for the loci in the reference sequence file. After this step, reads are grouped by locus. Next, Picard (https://broadinstitute.github.io/picard/) is used to group reads in the same locus together. At the same time, SAMtools [18] is used to index the data to improve the efficiency of subsequent processing. Next, we use SAMtools to "Split by direction," separating the forward and reverse data. Then, we use SEQTK (https://github.com/lh3/seqtk) to "Adjust direction," which flips reverse sequences into forward sequences. After that, we use the BLAST tool [19] to perform the "Find SSR region" operation according to the 20-bp sequences of the left and right flanking sequences of SSR regions in the reference sequence to obtain the SSR region of each read. Finally, we use Python scripts to "Find SSR typing" in the SSR region to obtain SSR typing information.
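To make the last two steps concrete, here is a minimal Python sketch of flank-anchored SSR extraction and exact repeat counting. This is an illustration of the precise strategy only, not the actual AMGT-TS code; the read, flank sequences, and function name are hypothetical.

```python
def type_ssr_precise(read, flank_l, flank_r, motif):
    """Locate the SSR region between the known flanking sequences and count
    exact, contiguous motif repeats (the precise matching strategy)."""
    start = read.find(flank_l)
    if start == -1:
        return None  # left flank not found: read cannot be typed
    end = read.find(flank_r, start + len(flank_l))
    if end == -1:
        return None  # right flank not found
    region = read[start + len(flank_l):end]
    n = 0
    while region.startswith(motif, n * len(motif)):
        n += 1
    return n

# Hypothetical read: motif AGAGA repeated six times between 20-bp flanks,
# mirroring the AGAGA*6 genotype reported for locus s4121 in B73
read = "TGCATTGACCGTGGTACTAG" + "AGAGA" * 6 + "CCTTAGGATCCGTAACGTGA"
print(type_ssr_precise(read, "TGCATTGACCGTGGTACTAG",
                       "CCTTAGGATCCGTAACGTGA", "AGAGA"))  # 6
```

Reads in which either flank cannot be located are left untyped, which matches the pipeline's reliance on the 20-bp flanking sequences to delimit the SSR region.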
Here, as an example of the AMGT-TS processing workflow (Fig. 2), two actual experimental datasets are provided for results 1 and 2. For result 1, an SSR genotyping result of AGAGA*6 for locus s4121 in B73 (a model variety of maize) is shown (Fig. 3). We used the AMGT-TS web platform (https://amgt-ts.plantdna.site:8445/) to generate the alignment figure for the reads. The platform groups the read files and aligns the stacked sequences; Fig. 3 shows the results of this classification alignment. Each line is one read, and the yellow background region is the SSR region being typed. The genotype is exactly six repeats of 5 bases. For result 2, the typing result of locus s17883 is shown in Additional file 1: Table S1. An SSR length of 12 (ATA*4) was found for 98.20% of the reads, so we can obtain the result ATA (4,4) (maize is a diploid plant, so each locus has two alleles). In addition, we obtained the SSR typing result of AGG (4,4) for locus s691405. Finally, the genotype of the third locus (s838417) was a homozygous type CTC (5,5), which is a 15 bp long repeat, and the corresponding reads accounted for 98.70% of the total reads. Overall, the typing strategy of AMGT-TS is clear and satisfactory. The processing flow of AMGT-TS. Green bar: reads of Locus 1 (L1), blue bar: reads of Locus 2 (L2), orange bar: reads of Locus 3 (L3). Gray bars indicate low-quality reads. The solid arrow represents step-by-step operations in the process. The dotted arrows represent the data information referenced by the corresponding step. The small white arrows within the color bars pointing to the right represent forward sequences and those pointing to the left represent reverse sequences Read alignment of locus s4121 for the motif AGAGA of B73 repeated six times (SSR region of 30 bp; yellow background) Evaluation of typing error The typing error can be measured in two ways.
One is the false-positive rate of SSR typing, and the other is the rate of erroneous reads found for the correct typing result. Equation (1) can be obtained, where j represents the index of the locus, k represents the typing index, and R_amgt(j, k) represents the reads of the k-th typing of the j-th locus from AMGT-TS. $$\text{Sum of } R_{\text{amgt}} = \sum_{j} \sum_{k} R_{\text{amgt}}(j, k) \quad (1)$$ S_ra represents the sum of reads from the artificial data, and E_r represents the read error rate. Equation (2) can be obtained as follows: $$E_{r} = \frac{S_{\text{ra}} - \sum_{j} \sum_{k} R_{\text{amgt}}(j, k)}{S_{\text{ra}}} \quad (2)$$ In the same way, E_t represents the typing error, T_a represents the total number of typings in the artificial data, and T_amgt represents the count of correct typing results from AMGT-TS. Equation (3) can be obtained as follows: $$E_{t} = \frac{T_{a} - \sum_{j} T_{\text{amgt}}(j)}{T_{a}} \quad (3)$$ Precise and broad matching algorithms To achieve both high accuracy of SSR typing and tolerance of the variations in SSRs and flanking regions, we developed two different algorithms, the precise and broad matching strategies (Fig. 4). The analytical strategy of precise matching is divided into three steps. The first step is "grouping." For multilocus amplicon sequencing data, sequencing reads are first assigned to the corresponding loci according to the reference sequences. AMGT-TS uses bwa-mem to implement data mapping. The second step is "SSR boundary determination".
After extraction of sequences for each locus, AMGT-TS uses the flanking sequences of the SSR region of each locus in the reference sequences to determine the boundaries of the left and right flanking sequences, which indirectly determines the boundaries of the SSR region; the sequence of the SSR region is then extracted by calling BLAST. The third step is "SSR genotyping." After the SSR sequence has been determined, the repetition number of the SSR is determined by using the precise match method of repeated sequences, and the SSR repeat length is used to name the SSR genotype. For example, if the motif of a certain SSR is ATC with a repetition number of three, the SSR is named SSR9. The different approaches of the precise and broad matching strategies. The arrow on the left points to the result of the precise method, and the arrow on the right to that of the broad method. The genotyping of Read1 and Read3 is the same, but not that of Read2. Because there are variants in the SSR region, the precise method can identify only 2 motif repeats, while the broad method can identify 5 repeats. The broad matching algorithm (BMA) has the same first step as the precise matching algorithm. However, in the second step, BMA directly processes the information in the BAM files and uses the Concise Idiosyncratic Gapped Alignment Report (CIGAR) information for each read to mask the classification information, which makes it compatible with variation within a certain error range and yields better fault-tolerant classification information. As shown in Fig. 4, the SSR motif represents the repeating unit in the SSR region. The SSR region represents the region where the SSR sequence is located. For example, when the sequence of an SSR is AGCAGCAGC, the SSR motif is AGC, and the SSR region is AGCAGCAGC. The precise match identifies only contiguous motifs, so in read 2, only the last 10 bp is identified as two repeats. Read 3 has only one motif repeat.
For the broad match, the results were identical to those of the precise match only for read 1, which is a perfectly repeated sequence; different results were obtained for the other two reads. For read 2, when the two red bases are considered to be two SNPs, 5 repeats of the motif are obtained. For read 3, when the region is considered to contain an InDel, the motif is considered to repeat 3 times. Simulation test of AMGT-TS To better simulate different situations, each read was divided into five parts (Additional file 1: Figure S1). Five different categories of reads were considered in our simulation, named Classes A to E. The detailed method of generating these data was as follows: Class A: We created the SSR_Region according to 3 repeats of the motif of s17883. We then added 35 bp from the left and right flanking sequences of the SSR_region of the reference sequence as Flank_L and Flank_R, respectively. Finally, we added one SNP on the left flank and one SNP on the right flank. There were 2000 artificial reads for this dataset. Class B: We created the SSR_Region according to 5 repeats of the motif (AGCT) and 6 repeats of the motif of s423645. We then added 35 bp from the left and right flanking sequences of the SSR_region of the reference sequence as Flank_L and Flank_R, respectively. There were 1000 artificial reads for this dataset. Class C: We created the SSR_Region according to 4 repeats of the motif (CGCAT) and 3 repeats of the motif (CGCAT) + CACAT + 2 repeats of the motif of s566749. We then added 35 bp from the left and right flanking sequences of the SSR_region of the reference sequence as Flank_L and Flank_R, respectively. There were 2000 artificial reads for this dataset. Class D: In the SSR_Region of this type, Flank_L and Flank_R are also random bases. With the addition of Random_L and Random_R, the total length was randomly extended to 180 ~ 220 bp. There were 1000 artificial reads for this dataset.
Class E: The rule is the same as for Class A; there were 2000 artificial reads for this dataset. Random_L and Random_R are random bases, and the total length was randomly extended to 180 ~ 220 bp. Classes A to C combined contained a total of 8000 reads. Random_L and Random_R regions were random bases, and the total length was randomly extended to 180 ~ 220 bp. Classes A to D combined contained a total of 9000 reads. The quality information from Class A to Class D was marked as the highest. The quality information of Class E was marked as the lowest. For the read numbering rule, numbering is divided into three segments. The first segment is fixed: @BMSTC (Beijing Maize Seed Testing Center), representing the artificial sequence. The second segment is category information, using 1, 2, 3, 4, and 5 to represent A, B, C, D, and E, respectively. The third is the ordinal number, starting from 1 in each category and ending with the maximum number of entries in the current category. After 10,000 reads were created, they were randomly distributed into FASTQ files. Software and package dependencies AMGT-TS was verified on Ubuntu Server 14.04.4 LTS and 18.04.2 LTS. AMGT-TS relies on various tools including Bamtools (v2.5.0) [20], BLAST tool suite (v2.6.0+) [19], BWA (v0.7.17-r1188) [17], fastx_toolkit (v0.0.13), Picard (v2.15.0), SAMtools (v1.3.1) [18], and SEQTK (v1.2). The Java version used in AMGT-TS is OpenJDK 1.7. The Python version is 2.7+. Pandas is required in Python and can be installed using PIP. AMGT-TS implementation details AMGT-TS runs on Linux, and Ubuntu 18.04 has been tested. After downloading the code from GitHub, the user needs to install the dependent components as described in the README.md file. The ENV_FILE variable in launch.sh specifies the location of the configuration file. In the configuration file, the user must configure the corresponding component location.
The targeted sequencing sample files are placed in the "working/00_fastq" directory. Under the directory "REF_DIR" of the configuration file is the reference sequence file information for each locus. Once this information is configured, the user can execute the launch.sh file to run the tool. Before running the program, the user can specify one of the two algorithms: precise or broad. When the tool is finished running, a log file will be generated. In the "working/04_reads" directory, the locus typing information of the whole sample is present. In the directory of each locus is the typing information of the current locus and the reads corresponding to each type. For each sub-heap read file, the user can use the Reads Alignment tool for a graphical presentation. Currently, no suitable genotyping method can achieve tolerance of single nucleotide variations (SNVs) in the SSRs and flanking regions, which may be caused by PCR and sequencing errors or SNPs among varieties and can directly affect the sequence alignment and genotyping accuracy. As shown in Fig. 1, taking a site with a CAGCC SSR motif as an example, for Seq1 (from the reference genome), its SSR region has three repeats of CAGCC; for Seq2 with a G->A variation in the SSR region, the regular exact matching algorithm will type it as two repeats of CAGCC, while the fault-tolerant algorithm can recognize it as three repeats; and for Seq3 with a T->C variation in the right flanking region, the flanking boundary-based algorithm will treat it as three repeats of CAGCC, whereas the regular exact matching algorithm will recognize it as four repeats. When comparing different samples, especially different varieties, this discordance in SSR typing will cause a misclassification of the genetic information.
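The exact-versus-tolerant behavior just described can be reproduced in a few lines of Python. This is a simplified stand-in for the idea, not the AMGT-TS or BMA implementation: it counts repeats from the start of the region and allows a bounded number of substitutions per motif-sized window, so the exact counter reports one repeat for the variant sequence rather than the figure's two.

```python
def count_exact(region, motif):
    """Exact matching: count contiguous, error-free motif repeats."""
    n = 0
    while region.startswith(motif, n * len(motif)):
        n += 1
    return n

def count_tolerant(region, motif, max_mismatch=1):
    """Fault-tolerant matching: allow up to `max_mismatch` substitutions
    per motif-sized window (a simplification of BMA's CIGAR-based logic)."""
    m, n = len(motif), 0
    while (n + 1) * m <= len(region):
        window = region[n * m:(n + 1) * m]
        if sum(a != b for a, b in zip(window, motif)) > max_mismatch:
            break
        n += 1
    return n

seq1 = "CAGCC" * 3                  # reference-like: three clean repeats
seq2 = "CAGCC" + "CAACC" + "CAGCC"  # G->A inside the second repeat
print(count_exact(seq1, "CAGCC"), count_tolerant(seq1, "CAGCC"))  # 3 3
print(count_exact(seq2, "CAGCC"), count_tolerant(seq2, "CAGCC"))  # 1 3
```

The tolerant counter recovers the full three repeats for the variant-carrying sequence, which is exactly the discordance the broad matching algorithm is designed to remove.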
To address this issue, in this study, we developed a broad matching algorithm (BMA) that can quickly and accurately achieve SSR typing for ultradeep coverage and high-throughput loci with SNV compatibility and grouping of the typed reads for further in-depth information mining. We also designed the AMGT-TS tool incorporating the BMA for targeted microsatellite genotyping. Below, we tested the AMGT-TS tool using both experimental data and simulated data. We also compared AMGT-TS with other SSR-typing tools, as well as the popular commercial SSR-typing software, NextGENe. We used three genetically related samples to map the genotyping information of 50 loci (Fig. 5 and Additional file 1: Table S2). All typing results of the offspring sample were found in the two parents, indicating that the typing outcomes of AMGT-TS are precise and that AMGT-TS is potentially useful for genetic analysis. Furthermore, we used AMGT-TS to analyze targeted sequencing data of 8 samples and 21 randomly sampled loci and compared the results with resequencing results (Additional file 1: Figure S2, Tables S3 and S4). We compared loci that produced valid data at the same locus in both experiments. If any experiment did not produce a result at a certain locus, then the locus was not included in the comparison. In Additional file 1: Figure S2, the minimum number of loci for comparison in a sample was 11, and the maximum number was 18. The results of all compared loci were 100% consistent. Allele variants for each locus of three example samples detected by AMGT-TS. To evaluate the AMGT-TS tool, an example analyzing three samples with a genetic relationship is given, and a total of 50 loci were selected to verify the genetic relationship. The three samples are Jingke968 and its parents Jing724 (female parent) and Jing92 (male parent). To visually observe the genetic compatibility, panels (A) and (B) refer to the first and second allele results of the 50 loci, respectively.
In the figure, the abscissa shows the 50 loci; the ordinate is the length of the genotyped fragment (bp) for each sample. The 50 loci follow the genetic relationship 100%, indicating that the AMGT-TS analysis results are accurate. Simulated data test To further verify the precision of the AMGT-TS results, we used a manual method to create the original data of simulated targeted sequencing with 10,000 reads. The average read length of these data was around 200 bp, based on the information of three example loci (s499955, s423645, and s996971) from B73 (details in Implementation; artificial read composition design in Additional file 1: Figure S1). For the simulated data, the results of the AMGT-TS analysis are shown in Additional file 1: Table S5. Using the above calculation for the error rate evaluation, we obtained Er = 0 and Et = 0; in other words, for the typing results of the simulated targeted sequencing data, the accuracy of reads and SSR typing was 100%. As shown in Additional file 1: Table S5, 1000 low-quality points were filtered correctly, whereas 1000 random reads were not recognized. In addition, the precise matching algorithm did not deal with SNPs in the SSR region and identified these SSRs only as three repeats of the motif. However, the broad matching algorithm could tolerate these SNPs and identified these SSRs as 6 repeats of the motif, as shown in Additional file 1: Figure S3A. Additional file 1: Figure S3B shows the situation with SNPs in the flanking region. The broad matching algorithm has robust fault tolerance. Comparison with other SSR-typing tools To determine the detection accuracy of AMGT-TS, we made an integrated comparison with other published SSR typing tools, SSRseq [21], MicNeSs [22] and CHIIMP [23], with different simulated datasets (Table 1).
To provide a more rational comparison, we carried out simulations based on three bona fide loci from the maize B73 V3 reference genome (Additional file 1: Table S6); these three loci have different motif lengths (from 3 to 5 bp) and different polymorphism information contents (PICs). For each locus, we simulated four situations (Additional file 1: Table S7): no variant in the SSR or flanking region (Dataset A, as a control), one SNP site in the SSR region (Dataset B), one SNP site in the flanking region (Dataset C) and a 2 bp deletion in the flanking region (Dataset D). There were 10,000 reads for each locus in each dataset. After simulation and SSR typing by each tool (Table 1), we found that SSRseq has good performance on Datasets A to C, while its performance is poor for SSR typing of loci with flanking variants. MicNeSs was designed to screen perfect SSR sites and has poor performance for SSR typing of long motifs (> 3 bp). CHIIMP has poor performance for SSR typing of SSR regions with SNPs. Among these four tools, only AMGT-TS can deal with all four situations; specifically, this tool has excellent performance for SSR typing of loci with variants in flanking or SSR regions. Table 1 Comparison of AMGT-TS with other SSR-typing tools We also made a comparison with a popular commercial SSR typing software, NextGENe (https://softgenetics.com/NextGENe.php), using three sets of Ion Torrent sequencing data (Additional file 1: Figure S4 and Additional file 2: Tables S8-S13), including data from Jingke968, a hybrid from two different maize varieties; Jing724, a selfing variety; and the genome-sequenced variety B73. In a total of 484 evaluated SSR loci, more than 96% of the alleles detected by NextGENe were also detected by AMGT-TS. In contrast, more than 100 alleles were detected exclusively by AMGT-TS (Additional file 1: Figure S4).
After manual validation, we confirmed that the missing alleles from the NextGENe results are caused by a short flank size: NextGENe cannot handle reads with only 5–330 bp on the left or right flank, while AMGT-TS can. Overall, these results show that AMGT-TS is accurate and highly capable of SSR variant genotyping detection. The development of multiplex PCR technologies has made it possible to amplify multiple target sites at once. Moreover, the development of amplicon sequencing technology has made large-scale high-throughput SSR typing possible. At present, amplicon sequencing technology is widely used in genetic disease screening and gene diagnosis, as well as plant breeding [15, 16]. Our study breaks through the limitations of traditional typing methods and achieves large-scale typing of SSR at the single-base level; this method is fast, accurate, and low-cost, and can be widely applied in genetic diversity studies, highly precise gene localization, and molecular-assisted selection of new varieties [21]. Here, we propose a tool for developing new SSR-seq approaches and we demonstrated its efficiency for a range of species with different levels of genomic resource availability. The most important feature is that this tool provides strategies to optimize locus selection and primer design. This tool can be used for locus selection and merit selection. AMGT-TS can analyze three error-prone and complicated cases, including cases with too many dominant SSR types of certain loci, an extremely low ratio of reads of the dominant SSR types and too much variation within the SSR region. Then, researchers can treat these loci as low-quality loci based on the information provided by AMGT-TS. By filtering the above three types of information, high-quality SSR sites can be obtained, which is important for accurate typing [9, 10]. 
Since genotyping data consist of simple nucleotide character strings that do not need to be encoded or encapsulated in special data types, it is easier to use existing bioinformatics tools to perform pipelining, making it easier to share data between different laboratories and to store it in different databases for different applications. AMGT-TS can use the precise matching algorithm to accurately obtain SSR classification, based on the premise that there is no change in the SSR region. However, polymorphisms in the repeat motif are hard to determine and will affect the accuracy of SSR detection. When there are variations in the SSR region or base changes due to experiments, AMGT-TS can use the broad matching algorithm to account for the variation in the SSR region. The broad strategy used in AMGT-TS differs from the methods implemented in other SSR genotyping software, such as MicNeSs [22], which can also identify SSR genotypes based on sequencing data while accounting for up to one substitution within the SSR regions. In addition, AmpSeq-SSR is a microsatellite genotyping tool with similar functions to AMGT-TS [24]. When AmpSeq-SSR encounters motif repeats with a base variation in one of the intermediate repeats, the result is an identification error, and complete motif repeats are lost, thus directly affecting genotyping results. For AMGT-TS, resource data can be either FASTA or FASTQ files; especially for FASTQ files, a quality-based filtering process can not only increase the accuracy of the results but also reduce the analysis time. The data processed by AmpSeq-SSR are only in FASTA format, which contains no quality information, so the above optimization cannot be performed. Usually, for ultradeep sequencing, the prominent peak(s) will be considered the bona fide SSR genotype(s). The remaining genotypes tend to be caused by amplification stutter or sequencing error. Take two loci as examples, as shown in Additional file 1: Figure S5.
Jing724 and Jingke968 are a selfing maize variety and a hybrid from two different maize varieties, respectively. Thus, loci in Jing724 and Jingke968 are expected to have one genotype and two genotypes, respectively. As found here, the s994429 locus in Jing724 and the s677195 locus in Jingke968 have one peak (TCAT*3) and two peaks (AAG*4 and AAG*6) detected by AMGT-TS, respectively. These results indicate that AMGT-TS has an excellent ability to accommodate amplification stutter or sequencing error. Previous tools based on targeted sequencing can only identify consecutive SSR motifs. They cannot deal with cases where there is variation in the SSR region (possibly due to an experimentally introduced error) [22, 24]. In contrast, AMGT-TS can obtain consecutive SSR motif sequences and deal with cases containing variations in the SSR region, to allow us to have a clear, intuitive and comprehensive understanding of the actual situation of SSR genotyping. AMGT-TS is a powerful and robust tool for applications that require precise knowledge of SSR genotyping, such as diagnosing diseases. AMGT-TS has the robustness of classification recognition so that even when there are a few errors in the data, complete repetitive information is not lost. AMGT-TS analyzes the CIGAR information of the BAM file to carry out processing compatible with the variation in SSR regions. Furthermore, the different results produced by these two algorithms in AMGT-TS make a significant difference in the classification of plant varieties and disease detection. Therefore, different algorithms can be considered for various biological fields. In conclusion, the BMA and AMGT-TS tools provide an integrated strategy for accurate microsatellite typing for ultradeep coverage and high-throughput analysis of loci with SNV compatibility and grouping the typed reads for further in-depth information mining. 
With the broader application of next-generation sequencing techniques and the current application of AMGT-TS to divergent sequencing platforms and species, we expect that AMGT-TS will have excellent application prospects in genetic and population biology research in the future.

All scripts and data used in this study can be found at https://amgt-ts.plantdna.cn/data/ and https://github.com/plantdna/amgt-ts.

PCR: Polymerase chain reaction
BMA: Broad matching algorithm
SSR: Simple sequence repeat
STR: Short tandem repeat
SNP: Single nucleotide polymorphism
AMGT-TS: Accurate microsatellite genotyping tool based on targeted sequencing
PIC: Polymorphism information content
CIGAR: Concise idiosyncratic gapped alignment report

Li Y-C, Korol AB, Fahima T, Nevo E. Microsatellites within genes: structure, function, and evolution. Mol Biol Evol. 2004;21:991–1007. https://doi.org/10.1093/molbev/msh073.
Martin P, Makepeace K, Hill SA, Hood DW, Moxon ER. Microsatellite instability regulates transcription factor binding and gene expression. Proc Natl Acad Sci. 2005;102:3800–4. https://doi.org/10.1073/pnas.0406805102.
Gymrek M, Willems T, Guilmatre A, Zeng H, Markus B, Georgiev S, Daly MJ, Price AL, Pritchard JK, Sharp AJ, Erlich Y. Abundant contribution of short tandem repeats to gene expression variation in humans. Nat Genet. 2016;48:22–9. https://doi.org/10.1038/ng.3461.
Li J, Ye C. Genome-wide analysis of microsatellite and sex-linked marker identification in Gleditsia sinensis. BMC Plant Biol. 2020;20:338. https://doi.org/10.1186/s12870-020-02551-9.
Dharajiya DT, Shah A, Galvadiya BP, Patel MP, Srivastava R, Pagi NK, Solanki SD, Parida SK, Tiwari KK. Genome-wide microsatellite markers in castor (Ricinus communis L.): identification, development, characterization, and transferability in Euphorbiaceae. Ind Crops Prod. 2020;151:112461. https://doi.org/10.1016/j.indcrop.2020.112461.
Shehata AI, Al-Ghethar HA, Al-Homaidan AA.
Application of simple sequence repeat (SSR) markers for molecular diversity and heterozygosity analysis in maize inbred lines. Saudi J Biol Sci. 2009;16:57–62. https://doi.org/10.1016/j.sjbs.2009.10.001. Kaur S, Panesar PS, Bera MB, Kaur V. Simple sequence repeat markers in genetic divergence and marker-assisted selection of rice cultivars: a review. Crit Rev Food Sci Nutr. 2015;55:41–9. https://doi.org/10.1080/10408398.2011.646363. Dudley JC, Lin M-T, Le DT, Eshleman JR. Microsatellite instability as a biomarker for PD-1 blockade. Clin Cancer Res. 2016;22:813–20. https://doi.org/10.1158/1078-0432.CCR-15-1678. Naish KA, Warren M, Bardakci F, Skibinski DOF, Carvalho GR, Mair GC. Multilocus DNA fingerprinting and RAPD reveal similar genetic relationships between strains of Oreochromis niloticus (Pisces: Cichlidae). Mol Ecol. 1995;4:271–4. https://doi.org/10.1111/j.1365-294X.1995.tb00219.x. Kretzschmar T, Mbanjo EGN, Magalit GA, Dwiyanti MS, Habib MA, Diaz MG, Hernandez J, Huelgas Z, Malabayabas ML, Das SK, Yamano T. DNA fingerprinting at farm level maps rice biodiversity across Bangladesh and reveals regional varietal preferences. Sci Rep. 2018;8:14920. https://doi.org/10.1038/s41598-018-33080-z. Zhang YC, Kuang M, Yang WH, Xu HX, Zhou DY, Wang YQ, Feng XA, Su C, Wang F. Construction of a primary DNA fingerprint database for cotton cultivars. Genet Mol Res GMR. 2013;12:1897–906. https://doi.org/10.4238/2013.january.30.3. Backiyarani S, Chandrasekar A, Uma S, Saraswathi MS. MusatransSSRDB (a transcriptome derived SSR database)—an advanced tool for banana improvement. J Biosci. 2019;44:4. https://doi.org/10.1007/s12038-018-9819-5. Castillo-Lizardo M, Henneke G, Viguera E. Replication slippage of the thermophilic DNA polymerases B and D from the Euryarchaeota Pyrococcus abyssi. Front Microbiol. 2014. https://doi.org/10.3389/fmicb.2014.00403. Ananda G, Walsh E, Jacob KD, Krasilnikova M, Eckert KA, Chiaromonte F, Makova KD. 
Distinct mutational behaviors differentiate short tandem repeats from microsatellites in the human genome. Genome Biol Evol. 2013;5:606–20. https://doi.org/10.1093/gbe/evs116. Liu D, Hu X, Jiang X, Gao B, Wan C, Chen C. Characterization of a novel splicing mutation in UNC13D gene through amplicon sequencing: a case report on HLH. BMC Med Genet. 2017;18:135. https://doi.org/10.1186/s12881-017-0489-1. Lindsey RL, Garcia-Toledo L, Fasulo D, Gladney LM, Strockbine N. Multiplex polymerase chain reaction for identification of Escherichia coli, Escherichia albertii and Escherichia fergusonii. J Microbiol Methods. 2017;140:1–4. https://doi.org/10.1016/j.mimet.2017.06.005. Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60. https://doi.org/10.1093/bioinformatics/btp324. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9. https://doi.org/10.1093/bioinformatics/btp352. Camacho C, Coulouris G, Avagyan V, Ma N, Papadopoulos J, Bealer K, Madden TL. BLAST+: architecture and applications. BMC Bioinf. 2009;10:421. https://doi.org/10.1186/1471-2105-10-421. Barnett DW, Garrison EK, Quinlan AR, Strömberg MP, Marth GT. BamTools: a C++ API and toolkit for analyzing and managing BAM files. Bioinformatics. 2011;27:1691–2. https://doi.org/10.1093/bioinformatics/btr174. Lepais O, Chancerel E, Boury C, Salin F, Manicki A, Taillebois L, Dutech C, Aissi A, Bacles CFE, Daverat F, Launey S, Guichoux E. Fast sequence-based microsatellite genotyping development workflow. PeerJ. 2020;8: e9085. https://doi.org/10.7717/peerj.9085. Suez M, Behdenna A, Brouillet S, Graça P, Higuet D, Achaz G. MicNeSs: genotyping microsatellite loci from a collection of (NGS) reads. Mol Ecol Resour. 2016;16:524–33. https://doi.org/10.1111/1755-0998.12467. 
Barbian HJ, Connell AJ, Avitto AN, Russell RM, Smith AG, Gundlapally MS, Shazad AL, Li Y, Bibollet-Ruche F, Wroblewski EE, Mjungu D, Lonsdorf EV, Stewart FA, Piel AK, Pusey AE, Sharp PM, Hahn BH. CHIIMP: an automated high-throughput microsatellite genotyping platform reveals greater allelic diversity in wild chimpanzees. Ecol Evol. 2018;8:7946–63. https://doi.org/10.1002/ece3.4302. Li L, Fang Z, Zhou J, Chen H, Hu Z, Gao L, Chen L, Ren S, Ma H, Lu L, Zhang W, Peng H. An accurate and efficient method for large-scale SSR genotyping and applications. Nucleic Acids Res. 2017;45:e88–e88. https://doi.org/10.1093/nar/gkx093. We thank those who helped us in sample collection and technical assistance. We thank Nature Research Editing Service for editing the English text of a draft of this manuscript. This work was supported by the 13th Five-Year National Key R&D Program of China (Grant Number, 2017YFD0102001). The funding body played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Maize Research Center, Beijing Academy of Agricultural and Forest Sciences (BAAFS)/Beijing Key Laboratory of Maize DNA Fingerprinting and Molecular Breeding, Beijing, 100097, China Yongxue Huo, Yikun Zhao, Liwen Xu, Hongmei Yi, Yunlong Zhang, Xianqing Jia, Jiuran Zhao & Fengge Wang Provincial Key Laboratory of Agrobiology, Institute of Crop Germplasm and Biotechnology, Jiangsu Academy of Agricultural Sciences, Nanjing, 210014, Jiangsu, China Han Zhao Yongxue Huo Yikun Zhao Liwen Xu Hongmei Yi Yunlong Zhang Xianqing Jia Jiuran Zhao Fengge Wang F.W. and J.Z. conceived and supervised the project. Y.H., YK.Z. and X.J. designed the algorithm. L.X., H.Y. and YL.Z. performed experiments and analyzed the data. Y.H., YK.Z., H.Z. and X.J. wrote the manuscript with contributions from all authors. All authors read and approved the final manuscript. Correspondence to Jiuran Zhao or Fengge Wang. : Figure S1. 
Artificial read composition design; Figure S2. Comparison of targeted sequencing results and resequencing results (8 samples/21 loci). This figure shows a comparison of targeted sequencing results analyzed by AMGT-TS and resequencing results (8 samples/21 loci). The abscissa is the name of each sample. The orange ordinate represents the number of loci that were compared. The green ordinate represents the same number of compared loci. Loci with missing or incomplete data were not compared. In the figure above, refer to Table S1 for the corresponding resequencing data. For the corresponding data of targeted sequencing results analyzed by AMGT-TS, please refer to Table S2. For the loci information, please refer to Table S5; Figure S3. The situation with SNP in the SSR and flanking regions; Figure S4. Comparison of SSR genotyping results between AMGT-TS and NextGENe; Figure S5. SSR typing results of two representative loci by AMGT-TS; Table S1. Genotyping information for three example loci; Table S2. This table shows the results of genotyping of 50 loci from Figure 3; Table S3. Data of Figure S2 - Resequencing data of 8 samples; Table S4. Data of Figure S2 - Targeted sequencing results analyzed by AMGT-TS; Table S5. Analysis results of simulated data typed by the precise and broad algorithm; Table S6. Locus information from Maize B73 reference genome for simulation; Table S7. Four simulated situations to test SSR-typing tools. : Table S8. SSR typing results of 484 evaluated SSR loci in B73 by AMGT-TS; Table S9. SSR typing results of 484 evaluated SSR loci in B73 by NextGENe; Table S10. SSR typing results of 484 evaluated SSR loci in Jing724 by AMGT-TS; Table S11. SSR typing results of 484 evaluated SSR loci in Jing724 by NextGENe; Table S12. SSR typing results of 484 evaluated SSR loci in Jingke968 by AMGT-TS; Table S13. SSR typing results of 484 evaluated SSR loci in Jingke968 by NextGENe. Huo, Y., Zhao, Y., Xu, L. et al. 
An integrated strategy for target SSR genotyping with toleration of nucleotide variations in the SSRs and flanking regions. BMC Bioinformatics 22, 429 (2021). https://doi.org/10.1186/s12859-021-04351-w SSR-GBS Microsatellite Sequence-based microsatellite genotyping
An Overview of the Analysis and Transmission of Aperiodic Signals

January 20, 2016, by Donald Krambeck

Aperiodic Signal Representation by Fourier Integral

An aperiodic function will never repeat, although technically speaking an aperiodic function can be considered similar to a periodic function with an infinite period. In order to show that an aperiodic signal can be expressed as a continuous sum (or integral) of infinite exponentials, a limiting process is applied. In order to represent an aperiodic signal f(t), such as the signal shown in Fig. 1.1a, by infinite exponential signals, a new periodic signal $$ f_{T_{0}}(t)$$ must be formed by repeating the aperiodic signal f(t) every T0 seconds, as shown in Fig. 1.1b. The period is made just long enough that the repeating pulses do not overlap. This periodic signal $$ f_{T_{0}}(t)$$ is represented by an exponential Fourier series. By letting $$T_{0} \rightarrow \infty$$, the pulses in the periodic signal repeat themselves after an infinite interval; therefore: $$ \lim_{T_{0}\rightarrow \infty } f_{T_{0}}(t) = f(t)$$ The Fourier series representing $$ f_{T_{0}}(t)$$ will thus also represent f(t) in the limit $$T_{0} \rightarrow \infty$$. The exponential Fourier series for $$ f_{T_{0}}(t)$$ is $$f_{T_{0}}(t) = \sum_{n = - \infty }^{\infty }D_{n}e^{jn\omega _{0}t}$$ (1.1) in which $$D_{n} = \frac{1}{T_{0}}\int_{-\frac{T_{0}}{2}}^{\frac{T_{0}}{2}}f_{T_{0}}(t)e^{-jn\omega _{0}t}dt$$ (1.2a)

FIGURE 1.1 Construction of a periodic signal by periodic extension of f(t)

and $$\omega _{0} = \frac{2\pi }{T_{0}}$$ (1.2b)

Figures 1.1a and 1.1b show that integrating $$ f_{T_{0}}(t)$$ over $$\left ( -\frac{T_{0}}{2},\frac{T_{0}}{2} \right )$$ is exactly the same as integrating f(t) over $$\left (-\infty , \infty \right )$$.
Simplifying the integration bounds, Eq. 1.2a can now be expressed as $$D_{n} = \frac{1}{T_{0}}\int_{-\infty }^{\infty }f(t)e^{-jn\omega _{0}t}dt$$ (1.2c) An interesting phenomenon is that the spectrum changes as T0 increases. To better understand this behavior, $$F(\omega )$$ is defined as a continuous function of $$\omega$$: $$F(\omega)= \int_{-\infty }^{\infty} f(t)e^{-j\omega t} dt$$ (1.3) The last two equations then show that $$D_{n} = \frac{1}{T_{0}}F(n\omega _{0})$$ (1.4) What this shows is that the Fourier coefficients Dn are (1/T0 times) the samples of $$F(\omega )$$ spaced uniformly at intervals of $$\omega_{0}$$ rad/s, as shown in Fig. 1.2a. For simplicity, Dn and $$F(\omega )$$ are assumed to be real in Fig. 1.2. Letting $$T_{0} \rightarrow \infty$$ by doubling T0 repeatedly halves the fundamental frequency $$\omega_{0}$$, so that there are twice as many components (or samples) in the spectrum. Consequently, by doubling T0, the envelope $$(\frac{1}{T_{0}})F(\omega)$$ is halved, as shown in Fig. 1.2b. If T0 is doubled over and over again, the spectrum becomes denser while its magnitude becomes smaller. Note that in the limit $$T_{0} \rightarrow \infty$$, $$\omega_{0} \rightarrow 0$$, and $$D_{n} \rightarrow 0$$, while the relative shape of the envelope stays the same. This means that the spectrum becomes so dense that the spectral components are spaced at zero (infinitesimal) intervals. Simultaneously, the amplitude of each component approaches zero. This may seem peculiar at first glance; however, it will be shown that these are classic characteristics of a very familiar phenomenon. By substituting Eq. 1.4 in Eq. 1.1, the following sum is obtained: $$ f_{T_{0}}(t) = \sum_{n = - \infty }^{\infty}\frac{F(n\omega _{0})}{T_{0}}e^{jn\omega_{0}t}$$ (1.5) Here, as $$T_{0} \rightarrow \infty$$, $$ \omega_{0}$$ becomes extremely small $$(\omega_{0} \rightarrow 0)$$.
Due to this limit, a more appropriate notation, $$\Delta \omega$$, will replace $$\omega_{0}$$. With this new notation, Eq. 1.2b becomes $$\Delta \omega = \frac{2\pi }{T_{0}}$$ and Eq. 1.5 becomes $$f_{T_{0}}(t) = \sum_{n = - \infty}^{\infty} \left [ \frac{F(n\Delta \omega)\Delta \omega}{2\pi } \right ]e^{j(n\Delta \omega)t}$$ (1.6a) Here, Eq. 1.6a shows that $$f_{T_{0}}(t)$$ may be expressed in terms of a sum of infinite exponentials with frequencies $$0, \pm \Delta \omega, \pm 2\Delta \omega, \pm 3\Delta \omega, ...$$, which is the Fourier series. In the limit as $$T_{0} \rightarrow \infty$$, $$\Delta\omega \rightarrow 0$$ and $$f_{T_{0}}(t) \rightarrow f(t)$$; the amount of the component of frequency $$n\Delta\omega$$ is $$[F(n\Delta\omega)\Delta\omega]/2\pi$$. Thus, $$f(t) = \lim_{T_{0}\rightarrow \infty}f_{T_{0}}(t) = \lim_{\Delta\omega \rightarrow 0} \frac{1}{2\pi}\sum_{n = - \infty}^{\infty} F(n\Delta\omega)e^{j(n\Delta\omega)t}\Delta\omega$$ (1.6b) In the limit, the sum on the right side of Eq. 1.6b is the area under the function $$F(\omega)e^{j\omega t}$$, as shown in Fig. 1.3. Thus, $$f(t) = \frac{1}{2\pi} \int_{- \infty}^{\infty} F(\omega)e^{j\omega t}d\omega$$ (1.7)

FIGURE 1.2 Change in the Fourier spectrum when the period T0 in Fig. 1.1 doubles.

FIGURE 1.3 The Fourier series becomes the Fourier integral in the limit as $$T_{0} \rightarrow \infty$$

The integral on the right side is known as the Fourier integral. This is the representation of an aperiodic signal f(t) by a Fourier integral, rather than a Fourier series. This Fourier integral is essentially a Fourier series (only in the limit) with fundamental frequency $$ \Delta\omega \rightarrow 0$$, as shown in Eq. 1.6. We call $$F(\omega)$$ the direct Fourier transform of f(t), and f(t) the inverse Fourier transform of $$F(\omega)$$.
Another way to convey this statement is by the Fourier transform pair $$F(\omega )=\mathcal{F}[f(t)] \text{ and } f(t) = \mathcal{F}^{-1}[F(\omega)]$$ $$f(t) \Leftrightarrow F(\omega)$$ To summarize, $$F(\omega) = \int_{-\infty}^{\infty}f(t)e^{-j\omega t}dt$$ (1.8a) $$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{j\omega t}d\omega$$ (1.8b) The spectrum $$F(\omega)$$ can also be plotted as a function of $$\omega$$. Because $$F(\omega)$$ is complex, both the amplitude and angle spectra are needed: $$F(\omega) = \left | F(\omega) \right |e^{j\theta_{f}(\omega)}$$

Conjugate Symmetry Property

From Eq. 1.8a, if f(t) is a real function of t, then $$F(\omega)$$ and $$F(-\omega)$$ are complex conjugates, as shown below. $$F(-\omega) = F^{*}(\omega)$$ (1.9) Therefore, $$\left | F(-\omega) \right | = \left | F(\omega) \right |$$ (1.10a) $$\theta_{f}(-\omega) = -\theta_{f}(\omega)$$ (1.10b) Consequently, for real f(t), the amplitude spectrum $$\left | F(\omega) \right |$$ is an even function, and the phase spectrum $$\theta_{f}(\omega)$$ is an odd function of $$\omega$$. This property, known as the conjugate symmetry property, holds only for real f(t). The transform $$F(\omega)$$ is the frequency-domain specification of f(t).

Find the Fourier transform of $$e^{-at}u(t)$$.

By the definition in Eq. 1.8a, $$F(\omega)=\int_{-\infty }^{\infty }e^{-at}u(t)e^{-j\omega t}dt = \int_{0}^{\infty}e^{-(a+j\omega)t}dt = \left . \frac{-1}{a+j\omega}e^{-(a+j\omega)t} \right |^{\infty}_{0}$$ But $$\left | e^{-j\omega t} \right |= 1$$. Therefore, as $$t \rightarrow \infty$$, $$e^{-(a+j \omega)t} = e^{-at}e^{-j \omega t} \rightarrow 0$$ if $$a>0$$. Therefore, $$F(\omega) = \frac{1}{a+j \omega}, \quad a>0$$

Existence of the Fourier Transform

In the above example, it was shown that when a < 0, the Fourier integral for $$e^{-at}u(t)$$ does not converge. Thus, the Fourier transform for $$e^{-at}u(t)$$ does not exist if a < 0 (that is, for an exponentially growing signal).
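The closed-form result of this example can be sanity-checked numerically. The Java sketch below is illustrative only (the class name, step size, and truncation point are arbitrary choices, not part of the article): it approximates the defining integral by a Riemann sum and compares the resulting magnitude against $$\frac{1}{\sqrt{a^{2}+\omega^{2}}}$$.

```java
public final class TransformCheck {
    // Numerically approximate |F(w)| for f(t) = e^{-at} u(t), a > 0,
    // by a Riemann sum of the integral in Eq. 1.8a over [0, tMax].
    public static double magnitude(double a, double w) {
        double dt = 1e-4;
        double tMax = 50.0 / a;            // e^{-at} is negligible beyond this
        double re = 0.0, im = 0.0;
        for (double t = 0.0; t < tMax; t += dt) {
            double env = Math.exp(-a * t); // the decaying envelope e^{-at}
            re += env * Math.cos(w * t) * dt;
            im -= env * Math.sin(w * t) * dt;
        }
        return Math.hypot(re, im);
    }

    public static void main(String[] args) {
        double a = 2.0, w = 3.0;
        System.out.println("numeric:     " + magnitude(a, w));
        System.out.println("closed form: " + 1.0 / Math.sqrt(a * a + w * w));
    }
}
```

For a = 2 and ω = 3 the two values agree to within the discretization error of the sum.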
Observing from this example, not all signals are transformable. Existence of the Fourier transform is assured for any f(t) that satisfies the Dirichlet conditions. The first of these conditions is $$\int_{-\infty}^{\infty}\left | f(t) \right |dt < \infty$$ (1.12) To show why this holds, recall that $$\left | e^{-j\omega t} \right | = 1$$. Thus from Eq. 1.8a, $$\left |F(\omega) \right | \leq \int_{-\infty}^{\infty} \left | f(t) \right |dt$$ By expressing $$a+j \omega$$ in the polar form $$\sqrt{a^{2}+\omega^{2}}e^{j tan^{-1} (\frac{\omega}{a})}$$, the transform from the example above becomes $$F(\omega) = \frac{1}{\sqrt{a^{2}+\omega^{2}}} e^{-j tan^{-1}(\frac{\omega}{a})}$$ so that $$\left |F(\omega) \right | = \frac{1}{\sqrt{a^{2}+\omega^{2}}} $$ and $$\theta_{f}(\omega) = -tan^{-1}(\frac{\omega}{a})$$ The amplitude spectrum $$|F(\omega)|$$ and the phase spectrum $$\theta_{f}(\omega)$$ are shown in Figure 1.4b. Observe that $$|F(\omega)|$$ is an even function of $$\omega$$ and $$\theta_{f}(\omega)$$ is an odd function of $$\omega$$, as expected. As long as condition 1.12 is satisfied, the existence of the Fourier transform is assured.

Linearity of the Fourier Transform

The Fourier transform is linear: if $$f_{1}(t) \Leftrightarrow F_{1}(\omega)$$ and $$f_{2}(t) \Leftrightarrow F_{2}(\omega)$$ then $$a_{1}f_{1}(t) + a_{2}f_{2}(t) \Leftrightarrow a_{1}F_{1}(\omega) + a_{2}F_{2}(\omega)$$ (1.13) This result can be extended to any finite number of terms. The proof is trivial and follows directly from Eq. 1.8a. As of now, you should have an understanding of what an aperiodic signal is and how it is represented by a Fourier integral. By applying a limiting process, you should know how an aperiodic signal can be expressed as a continuous sum over everlasting exponentials, how the linearity of the Fourier transform is established, and how to find a Fourier transform and plot its spectra, as well as use the conjugate symmetry property.
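As a closing note, the linearity proof mentioned above takes only one line. Starting from the definition in Eq. 1.8a and using the linearity of the integral:

$$\mathcal{F}\left [ a_{1}f_{1}(t) + a_{2}f_{2}(t) \right ] = \int_{-\infty}^{\infty}\left [ a_{1}f_{1}(t) + a_{2}f_{2}(t) \right ]e^{-j\omega t}dt$$

$$= a_{1}\int_{-\infty}^{\infty}f_{1}(t)e^{-j\omega t}dt + a_{2}\int_{-\infty}^{\infty}f_{2}(t)e^{-j\omega t}dt = a_{1}F_{1}(\omega) + a_{2}F_{2}(\omega)$$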
Next, we will look at some useful functions, signal bandwidth, filtering (or interpolation), and the synthesis of a time-limited pulse signal.

johnyradio 2016-02-07: Hi, on terminology, I always thought aperiodic means "repeating, but not harmonically-related." A signal that never repeats would be "nonperiodic." For example, two nonharmonically related frequencies will repeat. If the ratio is complex (say, 1000:1000.1), then it might take a very long time before it repeats—but not infinitely long, as this article seems to indicate. Am I wrong? (Sadly, I don't understand the math, just concepts). Thx!
\begin{definition}[Definition:Structure for Predicate Logic] Let $\LL_1$ be the language of predicate logic. A '''structure $\AA$ for $\LL_1$''' comprises: :$(1): \quad$ A non-empty set $A$; :$(2): \quad$ For each function symbol $f$ of arity $n$, a mapping $f_\AA: A^n \to A$; :$(3): \quad$ For each predicate symbol $p$ of arity $n$, a mapping $p_\AA: A^n \to \Bbb B$ where $\Bbb B$ denotes the set of truth values. $A$ is called the '''underlying set''' of $\AA$. $f_\AA$ and $p_\AA$ are called the '''interpretations''' of $f$ and $p$ in $\AA$, respectively. We remark that function symbols of arity $0$ are interpreted as constants in $A$. Also, the predicate symbols may be interpreted as relations via their characteristic functions. {{transclude:Definition:Structure for Predicate Logic/Formal Semantics |section = tc |title = Formal Semantics |header = 3 |increase = 1 |link = true }} \end{definition}
Bosons and fermions are fundamentally different for the case of a 1D compact ring. Is this true? How is bosonization/fermionization different on a line segment versus a compact ring? Does it matter whether the line segment is finite, $x\in[a,b]$, or infinite, $x\in(-\infty,\infty)$? Why? Can someone explain it physically? Thanks!

quantum-field-theory condensed-matter topology
# Understanding the FFT algorithm

The Fast Fourier Transform (FFT) is a powerful algorithm that allows for the efficient computation of the Discrete Fourier Transform (DFT) of a sequence. It is widely used in fields such as signal processing, image processing, and data compression. The Cooley-Tukey FFT algorithm is a divide-and-conquer approach to computing the DFT, which makes it highly efficient.

The DFT of a sequence $x[n]$ is defined as the sum of the complex exponential terms:

$$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j2\pi kn/N}$$

where $j$ is the imaginary unit ($j^2 = -1$), $N$ is the length of the sequence, and $k$ is the index of the DFT coefficient.

The Cooley-Tukey FFT algorithm exploits the symmetry properties of the DFT to reduce the computational complexity. It divides the DFT into smaller DFTs, which are then combined to form the final result. This division is repeated recursively until the base case of a length-1 DFT is reached.

## Exercise

Consider a sequence $x[n] = [1, 2, 3, 4]$ with $N = 4$. Compute the DFT using the Cooley-Tukey FFT algorithm.

1. Divide the sequence into smaller DFTs.
2. Compute the DFT for each smaller DFT.
3. Combine the results to form the final DFT.

### Solution

```
1. Divide the sequence into its even- and odd-indexed subsequences:
   - Even: [1, 3]
   - Odd:  [2, 4]
2. Compute the 2-point DFT of each subsequence:
   - E = [1 + 3, 1 - 3] = [4, -2]
   - O = [2 + 4, 2 - 4] = [6, -2]
3. Combine the results using the twiddle factors W_4^k = e^{-j2*pi*k/4},
   where W_4^0 = 1 and W_4^1 = -j:
   - X[0] = E[0] + W_4^0 * O[0] = 4 + 6 = 10
   - X[1] = E[1] + W_4^1 * O[1] = -2 + 2j
   - X[2] = E[0] - W_4^0 * O[0] = 4 - 6 = -2
   - X[3] = E[1] - W_4^1 * O[1] = -2 - 2j
```

# Implementing the Cooley-Tukey FFT in Java

To implement the Cooley-Tukey FFT in Java, you will need to create a class that performs the FFT algorithm. This class should have methods for performing the forward and inverse FFTs, as well as handling data input and output.
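The exercise result can be checked by evaluating the DFT definition directly. The sketch below (class name is illustrative) computes the plain O(N²) sum for the sequence [1, 2, 3, 4]:

```java
public final class DftCheck {
    // Direct evaluation of X[k] = sum_n x[n] e^{-j 2*pi*k*n / N}.
    // Returns { real parts, imaginary parts }.
    public static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double ang = -2.0 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(ang);
                im[k] += x[t] * Math.sin(ang);
            }
        }
        return new double[][] { re, im };
    }

    public static void main(String[] args) {
        double[][] X = dft(new double[] { 1, 2, 3, 4 });
        for (int k = 0; k < 4; k++) {
            System.out.println("X[" + k + "] = " + X[0][k] + ", " + X[1][k] + "j");
        }
    }
}
```

Up to floating-point rounding, this reproduces X = [10, -2 + 2j, -2, -2 - 2j].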
Here is a basic implementation of the Cooley-Tukey FFT in Java:

```java
public class FFT {
    public static void forwardFFT(double[] real, double[] imag) {
        // Implement the forward FFT
    }

    public static void inverseFFT(double[] real, double[] imag) {
        // Implement the inverse FFT
    }

    public static void inputData(String filename) {
        // Read input data from a file
    }

    public static void outputData(String filename) {
        // Write output data to a file
    }
}
```

# Data input and output in Java

To handle data input and output in Java, you can use the `Scanner` class for reading data from a file and the `PrintWriter` class for writing data to a file. Here is an example of how to use these classes:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.Scanner;

public class DataIO {
    public static void readData(String filename) {
        try {
            Scanner scanner = new Scanner(new File(filename));
            while (scanner.hasNextLine()) {
                String line = scanner.nextLine();
                // Process the line
            }
            scanner.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void writeData(String filename) {
        try {
            PrintWriter writer = new PrintWriter(new File(filename));
            // Write data to the file
            writer.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}
```
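The `forwardFFT` stub in the `FFT` skeleton above can be filled in many ways. One minimal recursive radix-2 sketch is shown below; it is illustrative rather than production code (it assumes the length is a power of two, allocates temporary arrays instead of working in place, and uses a returning signature rather than the in-place one in the skeleton, for clarity):

```java
public final class RecursiveFFT {
    // Recursive radix-2 decimation-in-time FFT on parallel real/imag arrays.
    // Returns { real parts, imaginary parts }. Length must be a power of two.
    public static double[][] fft(double[] re, double[] im) {
        int n = re.length;
        if (n == 1) return new double[][] { { re[0] }, { im[0] } };
        double[] evenRe = new double[n / 2], evenIm = new double[n / 2];
        double[] oddRe  = new double[n / 2], oddIm  = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            evenRe[i] = re[2 * i];     evenIm[i] = im[2 * i];
            oddRe[i]  = re[2 * i + 1]; oddIm[i]  = im[2 * i + 1];
        }
        double[][] e = fft(evenRe, evenIm);   // DFT of even-indexed samples
        double[][] o = fft(oddRe, oddIm);     // DFT of odd-indexed samples
        double[] outRe = new double[n], outIm = new double[n];
        for (int k = 0; k < n / 2; k++) {
            double ang = -2.0 * Math.PI * k / n;     // twiddle factor W_N^k
            double wr = Math.cos(ang), wi = Math.sin(ang);
            double tr = wr * o[0][k] - wi * o[1][k]; // W_N^k * O[k], real part
            double ti = wr * o[1][k] + wi * o[0][k]; // W_N^k * O[k], imag part
            outRe[k]         = e[0][k] + tr;
            outIm[k]         = e[1][k] + ti;
            outRe[k + n / 2] = e[0][k] - tr;
            outIm[k + n / 2] = e[1][k] - ti;
        }
        return new double[][] { outRe, outIm };
    }

    public static void main(String[] args) {
        double[][] X = fft(new double[] { 1, 2, 3, 4 }, new double[4]);
        for (int k = 0; k < 4; k++) {
            System.out.println("X[" + k + "] = " + X[0][k] + ", " + X[1][k] + "j");
        }
    }
}
```

A production version would validate the input length and work in place, as discussed later in the section on optimization.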
Here is an example of how to create and manipulate complex numbers: ```java import java.math.Complex; public class ComplexNumbers { public static void main(String[] args) { Complex c1 = new Complex(1, 2); Complex c2 = new Complex(3, 4); Complex sum = c1.add(c2); Complex difference = c1.subtract(c2); Complex product = c1.multiply(c2); Complex quotient = c1.divide(c2); System.out.println("Sum: " + sum); System.out.println("Difference: " + difference); System.out.println("Product: " + product); System.out.println("Quotient: " + quotient); } } ``` # Performing the FFT algorithm step by step To perform the FFT algorithm step by step, follow these steps: 1. Read input data from a file using the `inputData` method. 2. Convert the input data into complex numbers. 3. Perform the forward FFT using the `forwardFFT` method. 4. Process the FFT results. 5. Perform the inverse FFT using the `inverseFFT` method. 6. Convert the inverse FFT results back into real numbers. 7. Write the output data to a file using the `outputData` method. # Applying the Cooley-Tukey FFT in real-world applications The Cooley-Tukey FFT algorithm can be applied in various real-world applications, such as: - Signal processing: FFT is widely used for analyzing and processing signals in communication systems, audio processing, and image processing. - Image processing: FFT can be used for image compression, filtering, and feature extraction. - Data compression: FFT can be used for data compression algorithms, such as JPEG and MP3. # Optimizing the Cooley-Tukey FFT algorithm To optimize the Cooley-Tukey FFT algorithm, consider the following techniques: - In-place FFT: Perform the FFT in-place, without allocating additional memory for intermediate results. - Bit-reversal permutation: Use bit-reversal permutation to reduce the number of multiplications and additions. - Radix-2 decimation in time: Use radix-2 decimation in time to reduce the number of butterfly operations. 
# Comparing the Cooley-Tukey FFT algorithm to other FFT algorithms

The Cooley-Tukey FFT algorithm is one of the most efficient FFT algorithms. However, there are other FFT algorithms, such as the Stockham algorithm and the Bluestein algorithm. These algorithms have their own advantages and disadvantages, and their performance may vary depending on the specific application and hardware.

# Conclusion and further resources

In conclusion, the Cooley-Tukey FFT algorithm is a powerful and efficient algorithm for computing the Discrete Fourier Transform. It can be implemented in Java and applied to various real-world applications, such as signal processing, image processing, and data compression.

For further resources on the Cooley-Tukey FFT algorithm and its implementation in Java, refer to the following:

- "The Fast Fourier Transform and Its Applications" by E. Oran Brigham
- "Numerical Recipes in C: The Art of Scientific Computing" by W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery
- "Introduction to Java Programming" by Y. Daniel Liang

These resources provide in-depth explanations, code examples, and practical applications of the Cooley-Tukey FFT algorithm in Java.
Number of dimensions in string theory and possible link with number theory

This question has led me to ask a somewhat more specific question. I have read somewhere about a coincidence. Numbers of the form $8k + 2$ appear to be relevant for string theory. For k = 0 one gets the 2-dimensional string world sheet, for k = 1 one gets 10 spacetime dimensions, and for k = 3 one gets the 26 dimensions of bosonic string theory. For k = 2 we get 18. I don't know whether it has any relevance or not in ST. Also the number 24, which can be thought of as the number of dimensions perpendicular to the 2-dimensional string world sheet in bosonic ST, is the largest number for which the sum of squares up to it is itself a square, $(1^2 + 2^2 + \cdots + 24^2 = 70^2)$.

My question is, is it a mere coincidence or something deeper than that?

string-theory mathematics

$\begingroup$ Excellent observations. It's indeed natural to count the transverse coordinates only - the number of physical "oscillator" degrees of freedom - and those transverse dimensionalities are multiples of eight. This is linked to the fact that the dimension of a spin field is $1/16$ for a single dimension and one needs dimensions that are integral or half-integral. In theories with spacetime fermions, it's also linked to the Bott periodicity - if the difference between spatial and temporal dimensions is a multiple of eight, there are real chiral spinor representations. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:16

$\begingroup$ Also, the number 24 for the transverse dimension of the bosonic string appears because one needs to get the right critical dimension, and the zero-point energy with the single excitation has to vanish: $(D-2)(-1/12)/2+1=0$. This is solved exactly for $D=26$; $(-1/12)$ arose as the sum of positive integers or $\zeta(-1)$. Incredibly enough, even the seemingly numerological observation with $70^2$ is actually used "somewhere" in string theory - one compactified on the Leech lattice.
The identity guarantees that a null vector is null. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:19 $\begingroup$ @Luboš Motl: What about the number 18 Lubos? $\endgroup$ – user1355 Feb 15 '11 at 16:21 $\begingroup$ Under the comment by Dr Harvey, I link to a paper where a string theory compactification on the Leech lattice is actually used to explain even more fascinating numerological accidents - the "monstrous moonshine" linking some properties of the monster group, the largest finite sporadic group, to some properties of number theory and complex calculus, previously totally unrelated part of maths. See en.wikipedia.org/wiki/Monstrous_moonshine - In Monstrous Moonshine, numbers as high as 196,883+1 appear at 2 places and it was a complete mystery why! String theory has demystified this fact. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:25 $\begingroup$ @kakemonster: Numerology is to number theory as astrology is to astronomy, or alchemy is to chemistry. $\endgroup$ – QGR Feb 17 '11 at 6:08 There is definitely something deep going on, but there is not yet a deep understanding of what it is. In math the topology of the orthogonal group has a mod 8 periodicity called Bott periodicity. I think this is related to the dimensions in which one can have Majorana-Weyl spinors with Lorentzian signature which is indeed $8k+2$. So this is part of the connection and allows both the world-sheet and the spacetime for $d=2,10$ to have M-W spinors. The $26$ you get for $k=3$ doesn't have any obvious connection with spinors and supersymmetry, but there are some indirect connections related to the construction of a Vertex Operator Algebra with the Monster as its symmetry group. This involves a $Z_2$ orbifold of the bosonic string on the torus $R^{24}/\Lambda$ where $\Lambda$ is the Leech lattice. A $Z_2$ orbifold of this theory involves a twist field of dimension $24/16=3/2$ which is the dimension needed for a superconformal generator. 
So the fact that there are $24$ transverse dimensions does get related to world-sheet superconformal invariance. Finally, the fact you mentioned involving the sum of squares up to $24^2$ has been exploited in the math literature to give a very elegant construction of the Leech lattice starting from the Lorentzian lattice $\Pi^{25,1}$ by projecting along a null vector $(1,2, \cdots 24;70)$ which is null by the identity you quoted. I can't think of anything off the top of my head related to $k=2$ in string theory, but I'm sure there must be something. David Z♦ phopho $\begingroup$ Prof Harvey may be too modest here but let me mention that he is one of the 4 co-fathers of the heterotic string. And when it comes to a related compactification on the Leech lattice, see e.g. Beauty and the Beast: web.mac.com/chrisbertinato/iWeb/Physics/Seminars_files/… - This compactification of string theory actually knows about most (or all) about the largest sporadic finite group, the monster group. Fascinating and previously "impossible" connections between number theory and group theory - the "monstrous moonshine" - has been explained as a real link here. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:22 $\begingroup$ Just a direct link to the construction of the Leech lattice where the "70 squared" identity is used: en.wikipedia.org/wiki/… $\endgroup$ – Luboš Motl Feb 15 '11 at 17:08 $\begingroup$ Yes, thanks @Jeff. And I am aware that they're the authors. Sorry I didn't make it clear. The final proof of the monstrous moonshine claim that won the Fields medal was found by Borcherds - just to make it clear that I acknowledge that this Gentleman has some divine abilities, too. ;-) $\endgroup$ – Luboš Motl Feb 15 '11 at 17:10 The descent from 24 to 8 seems to happen when the straight sums are substituted by alternating sums. 
This is known from the theory of the Riemann zeta function, whose only pole at $s=1$ is cancelled via a multiplication that produces the Dirichlet eta function, $$\eta(s) = \left(1-2^{1-s}\right) \zeta(s).$$ This function has better analyticity than Zeta. But it is alternating, $$ \eta(s) = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots$$ And thus it could be related with the simple expression I have mentioned above in the comments. But, more important, $$\eta(-1)=\left(1-2^{2}\right) \zeta(-1) = -3 \times {-1\over 12} = {1\over 4}$$ And you can suspect that the Eta function does for the superstring the same role that the Zeta does for the bosonic string. And indeed it appears in very similar situations. For instance, Michael B. Green, in his 1986 Trieste lectures "String and Superstring Theory", section 5.11, calculates the NS sector spectrum and then the normal ordering constant, which appears formally as a difference between the bosonic term and the fermionic term. Such a difference can be manipulated to obtain the Eta function as above, times $(D-2)/2$. So if anyone can tell how the Zeta regulator is related to the integer sum up to 24, then we could probably guess how the Eta regulator would be related to the alternating sum up to 8. arivero

I am separating this answer from the other because it is overly speculative; mainly I wanted to list a few hints about the sequence I named in the comments. The OP names a square sum that happens to be related to the critical dimension of spacetime of the bosonic string, D-2=24. It seems natural to ask if there is some similar sequence for the critical dimension of the superstring, D-2=8. On the other hand, as I said in the other answer, for the open superstring the Zeta regularization is naturally substituted, in some cases, by the Dirichlet Eta, which happens to be an alternating sum.
So it is natural for a numerologist to try the alternating square sum and, as said in the comments to the OP, it works: $$1^2-2^2+3^2-4^2+5^2-6^2+7^2-8^2= -36 = -6^2$$ The main difference, number-wise, with the non alternating sum, is that here the solution is not unique. Still, it is the smallest non trivial one, and all the others can be generated iteratively: the (absolute value of the) sums are the triangular square numbers, and it was observed by Colin Dickson (alt.math.recreational March 7th 2004) that such numbers obey a recurrence law $$a_{n+1}={(a_n -1)^2 \over a_{n-1}}$$ with the first two terms being the trivial $a_1=1$ and the above $a_2=36$. For more info, see the OEIS sequences A001110 and A001108. Note that the sign in the actual solution depends of the number of terms in the sequence, alternating itself, so that actually the sum is $\sigma_n= (-1)^{n-1} a_n$ A way to produce the alternating solutions is to solve Pell equation, and then the sums are also produced from Pell numbers, via $a_n= P_n (P_n+P_{n-1})$ . This could be interesting because the root of the non alternating series, $70$, is a Pell number itself, the 6th, and the next Pell number, $70+(70+29)=169$, is the only Pell number that is an exact square (and the only one that is an exact power). In A001108, Mohamed Bouhamida mentions some periodicities and some mod 8 relationships for the series. Also the page http://www.cut-the-knot.org/do_you_know/triSquare.shtml gives some hints on some eight factors appearing in a particular subsequence of square triangular numbers: "Eight triangles increased by unity produce a square". If these factors can be related to the 8-periodicities of Bott theory or theta functions, I can not tell. 
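A quick numeric check of the recurrence quoted above (added for illustration; not part of the original answer). It iterates $a_{n+1}=(a_n-1)^2/a_{n-1}$ from $a_1=1$, $a_2=36$ and verifies each term is both a perfect square and triangular, plus the OP's $70^2$ identity:

```java
public class TriangularSquareCheck {
    static boolean isPerfectSquare(long x) {
        long s = (long) Math.sqrt((double) x);
        return s * s == x || (s + 1) * (s + 1) == x;
    }

    static boolean isTriangular(long x) {
        // x = m(m+1)/2  <=>  8x + 1 is a perfect square
        return isPerfectSquare(8 * x + 1);
    }

    public static void main(String[] args) {
        // a_{n+1} = (a_n - 1)^2 / a_{n-1}, starting from a_1 = 1, a_2 = 36
        long prev = 1, cur = 36;
        for (int n = 2; n <= 6; n++) {
            System.out.println(cur + " square=" + isPerfectSquare(cur)
                    + " triangular=" + isTriangular(cur));
            long next = (cur - 1) * (cur - 1) / prev;
            prev = cur;
            cur = next;
        }
        // The OP's identity: 1^2 + 2^2 + ... + 24^2 = 70^2
        long sum = 0;
        for (long k = 1; k <= 24; k++) sum += k * k;
        System.out.println(sum == 70L * 70L); // true
    }
}
```

The printed terms 36, 1225, 41616, ... match OEIS A001110 (square triangular numbers), each flagged as both square and triangular.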
EDIT: of course the use of triangular numbers can be telling us that all the business of alternating series is just a decoy: pairing the terms via $(m+1)^2 - m^2 = 2m + 1 = (m+1) + m$, we can reduce the alternating squared series to the non-alternating non-squared series, and then simply $$1+2+3+4+5+6+7+8= 36 = 6^2$$ But the goal is to keep at least a formal likeness with the OP series.

I don't know about the particular example that you mention, but there are certainly some interconnections with special numbers in mathematics and in string theory/supersymmetry. One worked out example is the connection of possible dimensions of supersymmetry in dimensions 3, 4, 6 or 10, which is connected to the existence of normed division algebras in dimensions 1, 2, 4 and 8. For more details see John C. Baez, John Huerta: Division Algebras and Supersymmetry I (arXiv) and related work about higher gauge theory. Tim van Beek

$\begingroup$ Sorry for being off-topic, but I thought the relationship I mention interesting enough in the given context in its own right. $\endgroup$ – Tim van Beek Feb 15 '11 at 18:39

$\begingroup$ it is one of the most important "numerological" facts in these considerations, so +1 $\endgroup$ – user346 Feb 16 '11 at 5:24
How to Use the Implicit Differentiation Calculator?
How the Derivative Calculator Works
Implicit Differentiation
What is Implicit Differentiation?
Implicit Derivative
Implicit Differentiation and Chain Rule
How to Do Implicit Differentiation? (the process is explained with a step-by-step explanation)
Implicit Differentiation Formula
Important Notes on Implicit Differentiation

The Implicit Differentiation Calculator displays the derivative of a given function with respect to a variable. STUDYQUERIES's Implicit Differentiation Calculator makes calculations faster, and the derivative of an implicit function is displayed in a fraction of a second. To use the implicit differentiation calculator, follow these steps:

Step 1: Enter the equation in the given input field
Step 2: Click "Submit" to get the derivative of the function
Step 3: The derivative will be displayed in a new window

The following section explains how the Derivative Calculator works for those with a technical background. A parser analyzes the mathematical function first. Specifically, it converts it into a form that can be understood by a computer, namely a tree. In order to do this, the Derivative Calculator must respect the order of operations. A peculiarity of mathematical expressions is that the multiplication sign is sometimes omitted; for example, we write "5x" instead of "5*x". The Derivative Calculator must detect these cases and insert the multiplication sign. JavaScript is used to implement the parser, which is based on the Shunting-yard algorithm. Transforming the tree into LaTeX code allows for quick feedback while typing. MathJax handles the display in the browser.

By clicking the "Go!" button, the Derivative Calculator sends the mathematical function and the settings (differentiation variable and order) to the server, where they are analyzed again. The function is now transformed into a form that the computer algebra system Maxima can understand.
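The Shunting-yard parsing step mentioned above can be sketched as follows. This is a minimal illustration (single-character tokens, basic operators only), not the calculator's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ShuntingYard {
    static int prec(char op) {
        switch (op) {
            case '^': return 3;
            case '*': case '/': return 2;
            case '+': case '-': return 1;
            default: return 0;
        }
    }

    // Convert a single-character-token infix expression to postfix notation.
    public static String toPostfix(String infix) {
        StringBuilder out = new StringBuilder();
        Deque<Character> ops = new ArrayDeque<>();
        for (char c : infix.toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                out.append(c);
            } else if (c == '(') {
                ops.push(c);
            } else if (c == ')') {
                while (ops.peek() != '(') out.append(ops.pop());
                ops.pop(); // discard the '('
            } else {
                // Operator: pop operators of higher or equal precedence
                // (left-associative; '^' is treated as right-associative).
                while (!ops.isEmpty() && ops.peek() != '('
                        && prec(ops.peek()) >= prec(c) && c != '^') {
                    out.append(ops.pop());
                }
                ops.push(c);
            }
        }
        while (!ops.isEmpty()) out.append(ops.pop());
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPostfix("5*x+2"));   // 5x*2+
        System.out.println(toPostfix("(a+b)*c")); // ab+c*
    }
}
```

A real parser would additionally tokenize multi-character numbers and function names and build a tree instead of a flat postfix string, but the operator-precedence logic is the same.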
Maxima actually computes the derivative of the mathematical function. According to the commonly known differentiation rules, it applies a number of rules to simplify the function and calculate the derivatives. Maxima's output is transformed once again into LaTeX and then presented to the user.

Displaying the steps of the calculation is more complicated, since the Derivative Calculator isn't entirely dependent on Maxima for this. The derivatives must be calculated manually step by step. The differentiation rules (product rule, quotient rule, chain rule, …) have been implemented in JavaScript. There is also a table of derivatives for the trigonometric functions and the square root, logarithm, and exponential functions. Each calculation step involves a differentiation operation or a rewrite. For example, constant factors are pulled out of differentiation operations and sums are split up (sum rule). This is done using Maxima, as well as general simplifications. To enable highlighting, the LaTeX representations of the resulting mathematical expressions are tagged in the HTML code.

The "Check answer" feature has to determine whether two mathematical expressions are equivalent. Utilizing Maxima, their difference is computed and simplified as much as possible. For instance, this involves writing trigonometric and hyperbolic functions in their exponential forms. The task is solved if it can be demonstrated that the difference simplifies to zero. If not, a probabilistic algorithm is applied that evaluates and compares both functions at random locations.

Implicit differentiation is the process of finding the derivative of an implicit function. There are two types of functions: explicit functions and implicit functions. An explicit function is of the form \(y = f(x)\), where the dependent variable \(y\) is on one side of the equation. But it is not always necessary to have \(y\) on one side of the equation.
For example, consider the following functions:

$$x^2 + y = 2$$
$$xy + \sin (xy) = 0$$

In the first case, though \(y\) is not isolated on one side of the equation, we can still solve it to write it as \(y = 2 - x^2\), and it is an explicit function. But in the second case, we cannot solve the equation easily for \(y\), and this type of function is called an implicit function; on this page, we are going to see how to find the derivative of an implicit function by using the process of implicit differentiation.

Implicit differentiation is the process of differentiating an implicit function. An implicit function is a function that can be expressed as \(f(x, y) = 0\), i.e., it cannot be easily solved for \(y\), or it cannot be easily brought into the form \(y = f(x)\). Let us consider an example of finding \(\frac{dy}{dx}\) given the function \(xy = 5\). Let us find \(\frac{dy}{dx}\) in two methods:

Solving it for \(y\)
Without solving it for \(y\)

Method – 1:
$$xy = 5$$
$$y = \frac{5}{x}$$
$$y = 5x^{-1}$$
Differentiating both sides with respect to \(x\):
$$\frac{dy}{dx} = 5(-x^{-2}) = \frac{-5}{x^2}$$

Method – 2:
Differentiating both sides with respect to \(x\):
$$\frac{d}{dx}(xy) = \frac{d}{dx}(5)$$
Using the product rule on the left side,
$$x \frac{d}{dx}(y) + y \frac{d}{dx}(x) = \frac{d}{dx}(5)$$
$$x \frac{dy}{dx} + y (1) = 0$$
$$x\frac{dy}{dx} = -y$$
$$\frac{dy}{dx} = -\frac{y}{x}$$
From \(xy = 5\), we can write \(y = \frac{5}{x}\).
$$\frac{dy}{dx} = -\frac{(5/x)}{x} = \frac{-5}{x^2}$$

In Method – 1, we converted the implicit function into an explicit function and found the derivative using the power rule. But in Method – 2, we differentiated both sides with respect to \(x\) by considering \(y\) as a function of \(x\), and this type of differentiation is called implicit differentiation.
But for some functions like \(xy + \sin (xy) = 0\), writing it as an explicit function (Method – 1) is not possible. In such cases, implicit differentiation (Method – 2) is the only way to find the derivative.

The derivative that is found by using the process of implicit differentiation is called the implicit derivative. For example, the derivative \(\frac{dy}{dx}\) found in Method – 2 (in the above example) at first was \(\frac{dy}{dx} = \frac{-y}{x}\), and it is called the implicit derivative. An implicit derivative usually is in terms of both \(x\) and \(y\).

The chain rule of differentiation plays an important role while finding the derivative of an implicit function. The chain rule says
$$\frac{d}{dx}\left(f(g(x))\right) = f'(g(x)) \cdot g'(x)$$
Whenever we come across the derivative of \(y\) terms with respect to \(x\), the chain rule comes into the scene, and because of the chain rule, we multiply the actual derivative (by the derivative formulas) by \(\frac{dy}{dx}\). Here are some examples to understand the chain rule in implicit differentiation.

$$\frac{d}{dx}(y^2) = 2y \frac{dy}{dx}$$
$$\frac{d}{dx}(\sin y) = \cos y \,\frac{dy}{dx}$$
$$\frac{d}{dx}(\ln y) = \frac{1}{y}\cdot\frac{dy}{dx}$$
$$\frac{d}{dx}(\tan^{-1}y) = \frac{1}{1 + y^2} \cdot \frac{dy}{dx}$$

In other words, wherever \(y\) is being differentiated, write \(\frac{dy}{dx}\) there as well. It is suggested to go through these examples again and again as they are very helpful in doing implicit differentiation. In the process of implicit differentiation, we cannot directly start with \(\frac{dy}{dx}\), as an implicit function is not of the form \(y = f(x)\); instead, it is of the form \(f(x, y) = 0\). Note that we should be aware of the derivative rules such as the power rule, product rule, quotient rule, chain rule, etc. before learning the process of implicit differentiation.
Here is the flowchart of the steps for performing implicit differentiation. Now, these steps are explained through an example where we are going to find the implicit derivative \(\frac{dy}{dx}\) if the function is \(y + \sin y = \sin x\).

Step – 1: Differentiate every term on both sides with respect to \(x\). Then we get \(\frac{d}{dx}(y) + \frac{d}{dx}(\sin y) = \frac{d}{dx}(\sin x)\).

Step – 2: Apply the derivative formulas to find the derivatives and also apply the chain rule. (All \(x\) terms should be directly differentiated using the derivative formulas, but while differentiating the \(y\) terms, multiply the actual derivative by \(\frac{dy}{dx}\).) In this example, \(\frac{d}{dx} (\sin x) = \cos x\), whereas \(\frac{d}{dx} (\sin y) = \cos y \,\frac{dy}{dx}\). Then the above step becomes: \(\frac{dy}{dx} + (\cos y) \frac{dy}{dx} = \cos x\)

Step – 3: Solve it for \(\frac{dy}{dx}\). Taking \(\frac{dy}{dx}\) as a common factor: \(\frac{dy}{dx} (1 + \cos y) = \cos x\), so \(\frac{dy}{dx} = \frac{\cos x}{1 + \cos y}\). This is the implicit derivative.

We have seen the steps to perform implicit differentiation. Did we come across any particular formula along the way? No! There is no particular formula to do implicit differentiation; rather, we perform the steps that are explained in the above flowchart to find the implicit derivative.

Implicit differentiation is the process of finding \(\frac{dy}{dx}\) when the function is of the form \(f(x, y) = 0\). To find the implicit derivative \(\frac{dy}{dx}\), just differentiate on both sides and solve for \(\frac{dy}{dx}\). But in this process, write \(\frac{dy}{dx}\) wherever we differentiate \(y\). All derivative formulas and techniques are to be used in the process of implicit differentiation as well.

How do you calculate implicit differentiation?
Take the derivative of every variable.
Whenever you take the derivative of \(y\), multiply by \(\frac{dy}{dx}\). Solve the resulting equation for \(\frac{dy}{dx}\).

Does Mathway do implicit differentiation?
Enter the function you want to find the derivative of in the editor. The Derivative Calculator supports solving first to fourth derivatives, as well as implicit differentiation and finding the zeros/roots. You can also get a better visual and understanding of the function by using our graphing tool.

What is an implicit differentiation example?
For example, \(x^2+y^2=1\). Implicit differentiation helps us find \(\frac{dy}{dx}\) even for relationships like that. This is done using the chain rule and viewing \(y\) as an implicit function of \(x\). For example, according to the chain rule, the derivative of \(y^2\) would be \(2y\cdot\frac{dy}{dx}\).

How do you calculate differentiation?
Some of the general differentiation formulas are:
Power Rule: \(\frac{d}{dx}(x^n) = nx^{n-1}\)
Derivative of a constant \(a\): \(\frac{d}{dx}(a) = 0\)
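The \(x^2+y^2=1\) example mentioned above works out in full as follows (a short derivation added for illustration):

```latex
% Implicit differentiation of x^2 + y^2 = 1
\frac{d}{dx}\left(x^2\right) + \frac{d}{dx}\left(y^2\right) = \frac{d}{dx}(1)
\quad\Rightarrow\quad
2x + 2y\,\frac{dy}{dx} = 0
\quad\Rightarrow\quad
\frac{dy}{dx} = -\frac{x}{y} \qquad (y \neq 0)
```

Note the chain-rule factor \(\frac{dy}{dx}\) appearing on the \(y^2\) term, exactly as in the examples listed earlier.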
Efficient whole cell biocatalyst for formate-based hydrogen production

Patrick Kottenhahn1, Kai Schuchmann1 and Volker Müller1
Biotechnology for Biofuels 2018, 11:93
Published: 2 April 2018

Molecular hydrogen (H2) is an attractive future energy carrier to replace fossil fuels. Biologically and sustainably produced H2 could contribute significantly to the future energy mix. However, biological H2 production methods are faced with multiple barriers including substrate cost, low production rates, and low yields. The C1 compound formate is a promising substrate for biological H2 production, as it can itself be produced from various sources including electrochemical reduction of CO2 or from synthesis gas. Many microbes that can produce H2 from formate have been isolated; however, in most cases H2 production rates cannot compete with other H2 production methods.

We established a formate-based H2 production method utilizing the acetogenic bacterium Acetobacterium woodii. This organism can use formate as sole energy and carbon source and possesses a novel enzyme complex, the hydrogen-dependent CO2 reductase, that catalyzes oxidation of formate to H2 and CO2. Cell suspensions reached specific formate-dependent H2 production rates of 71 mmol g protein −1 h−1 (30.5 mmol g CDW −1 h−1) and maximum volumetric H2 evolution rates of 79 mmol L−1 h−1. Using growing cells in a two-step closed batch fermentation, specific H2 production rates reached 66 mmol g CDW −1 h−1 with a volumetric H2 evolution rate of 7.9 mmol L−1 h−1. Acetate was the major side product that decreased the H2 yield. We demonstrate that inhibition of the energy metabolism by addition of a sodium ionophore is suitable to completely abolish acetate formation. Under these conditions, yields up to 1 mol H2 per mol formate were achieved. The same ionophore can be used in cultures utilizing formate as a specific switch from a growing phase to a H2 production phase.
Acetobacterium woodii reached one of the highest formate-dependent specific H2 productivity rates at ambient temperatures reported so far for an organism without genetic modification and converted the substrate exclusively to H2. This makes this organism a very promising candidate for sustainable H2 production and, because of the reversibility of the A. woodii enzyme, also a candidate for reversible H2 storage.

Keywords: Hydrogen production, Biohydrogen, Acetobacterium woodii, Formate dehydrogenase, Hydrogenase

Background

Fossil fuel limitation and increasing atmospheric CO2 concentrations necessitate alternative energy carriers. Molecular hydrogen (H2) is an attractive carbon-free alternative that can be converted to energy without CO2 emission. It can be used as an energy carrier for mobile applications (i.e., fuel cell powered vehicles) or as an intermediate energy storage system to store excess electrical energy that is produced at peak times from renewable sources [1]. Currently, H2 is produced mainly from fossil fuels by steam reforming and is thus unsustainable and environmentally harmful [2]. Hence, new H2 production methods are required. Biologically produced H2 provides a promising alternative for a sustainable H2-based energy economy. H2 production by biological systems can generally be classified into four different mechanisms: direct and indirect biophotolysis, photofermentation, and dark fermentation [3]. Of these processes, the latter mechanism has so far the highest H2 evolution rates (HER). However, the major drawback of dark fermentations, e.g., from glucose, is the low H2 yield per substrate consumed and the limitations of agricultural production of the substrate [4]. A recently considered alternative substrate is formic acid/formate, which could be produced from electrochemical reduction of CO2 or from synthesis gas, a very flexible feedstock that can be obtained as a by-product from steel mills or from waste gasification [5–7].
Conversion of formate to H2 proceeds according to the reaction: $${\text{HCOO}}^{ - } + {\text{H}}_{2} {\text{O }} \rightleftharpoons {\text{HCO}}_{3}^{ - } + {\text{H}}_{ 2} \quad \Delta G^{{0}^{\prime }} = +\,1.3\,{\text{kJ mol}}^{ - 1}.$$ Microbial formate oxidation is catalyzed by multiple enzyme systems. Organisms such as some enterobacteria use a membrane-bound formate-hydrogen lyase system composed of membrane-associated hydrogenase and formate dehydrogenase subunits [8, 9]. Clostridiaceae or archaea such as Methanococcus can produce H2 from formate by the action of separate cytoplasmic formate dehydrogenases and hydrogenases [10]. The observed HERs for these organisms are typically very low and do not reach the levels for H2 production from other feedstocks [4]. One exception is the recently characterized organism Thermococcus onnurineus. This organism requires 80 °C for growth and formate-dependent H2 formation reached HERs that outcompete other dark fermentations for the first time [11, 12]. H2 production in this organism depends on a membrane-bound enzyme complex of formate dehydrogenase, hydrogenase, and Na+/H+ antiporter subunits that couples H2 formation to formate oxidation as well as energy conservation [13, 14]. A new enzyme of the bacterial formate metabolism has been discovered recently in the strictly anaerobic bacterium Acetobacterium woodii [15]. The enzyme named hydrogen-dependent CO2 reductase (HDCR) was the first described soluble enzyme complex that reversibly catalyzes the reduction of CO2 to formate with H2 as electron donor. CO2 reduction is catalyzed at ambient conditions with rates far superior to chemical catalysis [15–17]. Therefore, it could not only be used for H2 production but, depending on the application, for H2 storage as well. In the form of formate, the explosive gas could be stored and handled much easier and with an increased volumetric energy density [18]. 
H2-dependent CO2 reduction to formate by the HDCR has also been shown to be very efficient in whole cell catalysis with A. woodii [15]. However, the reverse reaction has not been addressed in detail so far. In the present report, we describe the first characterization of formate-based H2 production with an organism harboring an HDCR complex. The results show that A. woodii has H2 production rates from formate of 66 mmol H2 g CDW −1 h−1 at ambient temperatures that are among the highest reported so far for an organism without genetic modification. Therefore, A. woodii is an efficient catalyst for H2 production and, considering the reversibility of the whole cell system, a potent catalyst for reversible H2 storage. In addition, A. woodii can grow with formate as sole carbon and energy source, making it possible to produce cell mass and H2 with the same substrate.

H2 production with resting cells

The acetogenic bacterium A. woodii can utilize, among others, H2 + CO2, formate, or monosaccharides such as fructose as substrates for growth. In all three cases, acetate (or acetate + CO2 in the case of formate) is the major end product [19, 20]. Recently, we could show that the addition of the sodium ionophore ETH2120 (sodium ionophore III) led to a complete inhibition of acetate formation from H2 + CO2 and the two gases were completely converted to formate [15]. This opened the possibility to utilize A. woodii as a catalyst for H2 storage. The hydrogen-dependent CO2 reduction activity could be attributed to a novel enzyme complex of a formate dehydrogenase and hydrogenase, named HDCR. Experiments with the purified enzyme showed that the catalyzed reaction proceeds with almost the same rate in the reverse direction as well, making A. woodii a potential candidate for formate-based H2 production [15]. In this study, we analyzed this potential using whole cells of A. woodii.
First, we grew the organism with fructose, a substrate that allows high cell densities to be reached relatively quickly (doubling time tD = 4.7 h compared to 11 h with formate as substrate), harvested the cells, and incubated them in reaction buffer at a protein concentration of 1 mg mL−1 (corresponding to 2.3 mg CDW mL−1). After addition of sodium formate to a final concentration of 300 mM, the cells produced H2 with an initial specific H2 productivity (qH2) of 52.2 ± 3 mmol (g protein)−1 h−1 (22.5 mmol (g CDW)−1 h−1) (Fig. 1). 0.6 mmol H2 was produced from 2.14 mmol formate consumed, giving a yield of H2 produced per substrate consumed (Y(H2/formate)) of 0.28 mol mol−1. It was surprising to observe such high H2 production rates, since H2 is typically not a major product of cells growing on formate; however, Y(H2/formate) was significantly decreased by the high amount of acetate (0.45 mmol) produced alongside H2. The acetate results from the assimilation of CO2 or formate via the Wood–Ljungdahl pathway for autotrophic CO2 fixation of A. woodii [20, 21]. As shown recently for the reverse reaction of formate formation from H2 + CO2, we tried to decrease acetate formation by adding the sodium ionophore ETH2120. Acetate formation in A. woodii is coupled to a sodium ion gradient across the cytoplasmic membrane for energy conservation, which can be specifically diminished by the sodium ionophore. In the presence of 30 µM ETH2120, the final amount of H2 produced increased to 1.15 mmol from 1.68 mmol formate consumed. At the same time, acetate formation decreased to a final amount of 0.17 mmol. In summary, addition of ETH2120 increased Y(H2/formate) to 0.68 mol mol−1. An alternative approach to the inhibition of acetate formation by ETH2120 is the depletion of sodium ions from the cells. In the CO2 reduction direction, sodium ion depletion had the same effect on formate formation as ETH2120 but at a much lower cost for the fermentation. To test this for H2 production, we added potassium formate instead of sodium formate. Initial qH2 was identical to that of ETH2120-inhibited cells, and the amount of H2 produced was more than double that of the control (Fig. 1). However, after 100 min we observed reassimilation of H2, which significantly decreased the final amount of product. We interpret this result as an incomplete inhibition of sodium-dependent acetate formation due to sodium ion contamination of the potassium formate (up to 0.5% in the ≥ 99.0% pure potassium formate used).

Fig. 1 H2 production from formate by resting cells of A. woodii. Cells were grown with 20 mM fructose, harvested in the exponential growth phase, and suspended in buffer (50 mM imidazole, 20 mM KCl, 20 mM MgSO4, 4 mM DTE, pH 7) to a final protein concentration of 1 mg mL−1 (corresponding to a CDW of 2.3 g L−1) in anoxic serum bottles (gas phase 100% N2). The bottles were incubated in a shaking water bath at 30 °C. At the beginning of the experiment, sodium formate, potassium formate, ETH2120, NaCl, and ethanol (solvent of ETH2120 as negative control) were added as indicated. Triangles down, 300 mM sodium formate, 30 µM ETH2120 (dissolved in 100% ethanol), 20 mM NaCl; diamonds, 300 mM sodium formate, 20.5 mM ethanol, 20 mM NaCl; circles, 100 mM K-formate; triangles up, 100 mM sodium formate, 20.5 mM ethanol, 20 mM NaCl

In the initial experiments, we used fructose-grown cells as catalysts. An advantage of A. woodii is the wide range of possible growth substrates. Depending on the process and the available substrate, cultivation of the cells on H2 + CO2 or directly on formate might be advantageous.
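The yield coefficients quoted above follow directly from the measured totals; a quick sanity check in Python (all numbers are taken from the text):

```python
def yield_h2_per_formate(h2_mmol, formate_mmol):
    """Yield coefficient Y(H2/formate): mol H2 produced per mol formate consumed."""
    return h2_mmol / formate_mmol

# Control: 0.6 mmol H2 from 2.14 mmol formate consumed
y_control = yield_h2_per_formate(0.6, 2.14)

# With 30 uM ETH2120: 1.15 mmol H2 from 1.68 mmol formate consumed
y_ionophore = yield_h2_per_formate(1.15, 1.68)

print(round(y_control, 2), round(y_ionophore, 2))  # 0.28 0.68
```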
qH2 of cells grown on H2 + CO2 was almost identical to that of formate-grown cells; however, at 21 mmol (g protein)−1 h−1, only 67% of the qH2 of fructose-grown cells was reached (Fig. 2a). Within the tested range of pH 6–9, qH2 decreased with increasing pH (Fig. 2b); the highest qH2, 37 mmol (g protein)−1 h−1, was observed at pH 6. With increasing cell densities, we observed a linear increase in volumetric H2 evolution rates (HERs) up to 79 mmol L−1 h−1, but a decrease in qH2 (Fig. 3). The maximum specific H2 production rate of 71 mmol (g protein)−1 h−1 (30.5 mmol (g CDW)−1 h−1) was observed at a protein concentration of 0.5 mg mL−1. At the same time, higher cell densities led to greater accumulation of acetate and less production of H2, indicating that inhibition by ETH2120 becomes less effective at higher cell densities. Next, we tested whether increased formate concentrations inhibit H2 production. Within the tested range of 25–600 mM, initial H2 production rates did not change, demonstrating that formate does not inhibit the catalyst even at high concentrations. Final H2 concentrations increased with increasing initial formate concentrations (Fig. 4).

Fig. 2 Influence of the growth substrate (a) and pH (b) on H2 production. a Cells were grown with 20 mM fructose (squares), 2 atm H2 + CO2 (80:20 [v:v], triangles), or 100 mM sodium formate (circles). The experiment was performed as described for Fig. 1 using 300 mM sodium formate, 30 µM ETH2120, and 20 mM NaCl. b Fructose-grown cells were suspended in buffer (25 mM MES, 25 mM Tris, 25 mM MOPS, 25 mM CHES, 20 mM KCl, 20 mM MgSO4, 4 mM DTE, 20 mM NaCl) at pH 6 (circles), pH 7 (squares), pH 8 (triangles), pH 9 (diamonds). The experiment was started by the addition of sodium formate to a final concentration of 300 mM

Fig. 3 Influence of the cell density on volumetric and specific H2 production rates.
Cells were grown with 20 mM fructose, harvested in the exponential growth phase, and suspended in buffer (50 mM imidazole, 20 mM KCl, 20 mM MgSO4, 30 µM ETH2120, 20 mM NaCl, 4 mM DTE, pH 7) to a final protein concentration of 0.5–4 mg mL−1 (corresponding to a CDW of 1.2–9.7 g L−1). Experiments were started by the addition of 100 mM sodium formate. Initial specific H2 production rates (squares) or initial volumetric H2 production rates (circles) are plotted against the cell density used

Fig. 4 Influence of the formate concentration on H2 production. Cells were grown with 20 mM fructose, harvested in the exponential growth phase, and suspended in buffer (50 mM imidazole, 20 mM KCl, 20 mM MgSO4, 30 µM ETH2120, 20 mM NaCl, 4 mM DTE, pH 7) to a final protein concentration of 1 mg mL−1 (corresponding to a CDW of 2.3 g L−1). Experiments were started by the addition of 25 mM (diamonds), 50 mM (triangles down), 100 mM (triangles up), 200 mM (closed squares), 400 mM (circles), 600 mM formate (open squares)

H2 production in batch fermentation

The experiments with resting cells showed that A. woodii is a promising catalyst for formate-dependent H2 production at ambient temperatures. For these experiments, cells had to be grown, harvested under anoxic conditions, and incubated in anoxic reaction buffer. This procedure is labor-intensive and requires sophisticated techniques to maintain anoxic conditions. To simplify it, we aimed to eliminate the medium exchange and establish H2 production directly in closed batch fermentation. Cells were therefore grown with 20 mM fructose as substrate to the mid-exponential growth phase (tD = 4.7 h), at which point formate was added with or without 30 µM ETH2120. Addition of the sodium ionophore led to an immediate growth arrest, whereas addition of formate alone had no effect on the growth rate (data not shown). After addition of formate, H2 was produced with a HER of 7.9 mmol L−1 h−1 and a qH2 of 65.9 mmol (g CDW)−1 h−1 (Fig. 5a).
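Rates are reported interchangeably per gram of protein and per gram of cell dry weight (CDW), and as volumetric rates (HERs). Using the conversion implied by the reported rate pairs (52.2 mmol per g protein corresponds to 22.5 mmol per g CDW, i.e., roughly 2.3 mg CDW per mg protein, as also stated for the cell suspensions), the quantities interconvert as sketched below. The factor 2.32 is back-calculated from the text, not an independently measured value:

```python
CDW_PER_PROTEIN = 2.32  # g CDW per g protein, back-calculated from 52.2 / 22.5

def per_cdw(q_per_protein):
    """Convert a specific rate from mmol (g protein)^-1 h^-1 to mmol (g CDW)^-1 h^-1."""
    return q_per_protein / CDW_PER_PROTEIN

def volumetric_rate(q_specific, biomass_g_per_l):
    """Volumetric H2 evolution rate (HER, mmol L^-1 h^-1) from a specific rate."""
    return q_specific * biomass_g_per_l

print(round(per_cdw(52.2), 1))     # 22.5 (Fig. 1 conditions)
print(volumetric_rate(71.0, 0.5))  # 35.5, HER at 0.5 mg protein mL^-1 (Fig. 3 conditions)
```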
Without addition of ETH2120, the H2 evolution rate was initially 4.5 mmol L−1 h−1 but decreased significantly after 1 h. Without ETH2120, acetate was still produced alongside H2 after addition of formate (78.4 mmol L−1 after 23 h) (Fig. 5b). In contrast, cells in the presence of ETH2120 produced acetate only in marginal amounts as a side product (0.3 mmol L−1). Y(H2/formate) was 0.08 mol H2 mol formate−1 without and 1.06 mol mol−1 with ETH2120. The Y(H2/formate) above 1 can be explained by some H2 being produced from fructose still present in the fermentation (0.2 and 1.2 mmol L−1 of H2 were produced from fructose alone with and without ETH2120, respectively). We observed that without addition of the ionophore, a total of 12.1 mmol formate was consumed of the initial 15 mmol (corresponding to a concentration of 300 mM). In the presence of ETH2120, this value decreased to 4.5 mmol. However, this can be explained by the different energetics of the reactions: conversion of formate to acetate is highly exergonic, whereas conversion of formate to H2 is slightly endergonic, limiting the total conversion of formate in a batch system.

Fig. 5 H2 production in closed batch fermentation by fructose-grown cells. A. woodii was grown in 50 mL carbonate-free growth medium with 20 mM fructose at 30 °C in a shaking water bath. At the point indicated, the production phase was initiated by addition of sodium formate, ETH2120, or ethanol (solvent of ETH2120 as negative control). At this time point, the optical density of all cultures was between 0.35 and 0.45. H2 was measured in the gas phase and is plotted as mmol H2 per liter of growth medium (a).
The substrate and product balance during the production phase (b) is shown as the difference between t = 1 h (addition of formate and ionophore) and t = 24 h (end of fermentation). Squares, 300 mM sodium formate, 30 µM ETH2120; triangles up, 300 mM sodium formate, 20.5 mM ethanol; diamonds, 20.5 mM ethanol; triangles down, 30 µM ETH2120

Next, we wanted to further optimize the system by generating cell mass directly from formate, thereby testing a system that is independent of carbohydrates and uses formate for both growth and H2 production. To this end, A. woodii was grown with 100 mM sodium formate (tD = 11 h). These cultures already produced small amounts of H2 during growth (around 2 mmol L−1 before the switch to the production phase). To switch from growth to production phase, 15 mmol additional sodium formate (corresponding to 300 mM in the culture volume of 50 mL) was added with or without ETH2120. As for fructose-grown cells, H2 was produced immediately after addition of ETH2120, with a HER of 1.2 mmol L−1 h−1 and a specific production rate of 19 mmol (g CDW)−1 h−1. At the end of the fermentation, 25.1 mmol L−1 H2 had been produced from 36.2 mmol L−1 formate consumed when ETH2120 was added (Y(H2/formate) = 0.69 mol H2 mol formate−1) (Fig. 6). No additional acetate was produced after addition of the ionophore. Without ETH2120, 18.6 mmol L−1 H2 and 17.0 mmol L−1 acetate were produced from 80.5 mmol L−1 formate, resulting in a lower Y(H2/formate) of 0.23 mol H2 mol−1 formate. In comparison to fructose-grown cells, the final amount of H2 produced was much lower, even though the same amount of formate was supplied.
This could be an effect of the conditions established by the cells during the growth phase; e.g., growth on fructose leads to an acidification of the medium, whereas growth on formate increases the pH. Further studies need to address the optimal medium composition depending on the substrate used for the growth phase. Nevertheless, the experiments with growing cells demonstrate in each case that the metabolism of A. woodii can be specifically switched from growth and acetate formation to H2 production by interfering with the sodium ion gradient across the membrane, thereby dramatically increasing the yield coefficient Y(H2/formate).

Fig. 6 Substrate and product balance in closed batch fermentation with cells grown on formate. A. woodii was grown in 50 mL carbonate-free growth medium with 100 mM sodium formate at 30 °C in a shaking water bath to an optical density of 0.25–0.3. At this point, the production phase was induced by adding 15 mmol sodium formate and 30 µM ETH2120 (+ETH2120) or 15 mmol sodium formate and 20.5 mM ethanol (−ETH2120). The substrate and product balance during the production phase is shown as the difference between addition of formate and t = 24 h (end of fermentation)

In this study, we examined the H2 production capacity of the anaerobic bacterium A. woodii. This organism is a promising candidate for formate-based H2 production due to the recently identified reversible hydrogen-dependent CO2 reductase complex (HDCR), an enzyme able to reversibly reduce CO2 to formate with H2 as electron donor at so far exceptional catalytic rates. This enzyme catalyzes the first step in the Wood–Ljungdahl pathway, the pathway for CO2 fixation and energy conservation in this organism, which has a wide substrate spectrum for growth, ranging from monosaccharides and mono- and diols to H2 + CO2 and, especially important in this context, formate [20, 22].
However, without modification this organism produces mainly acetate as end product from most substrates [19]. As shown in this study, cells growing on formate produce only very little H2. Addition of high concentrations of formate to cells growing on formate or fructose led to immediate H2 production; however, H2 production slowed down rapidly and acetate was still produced. A. woodii can use H2 + CO2 for growth and acetate formation, and therefore this result is not unexpected, since H2 + CO2 is the product of formate oxidation by the HDCR complex [15] (Fig. 7). The HDCR is not connected to the metabolism by electron carriers such as NAD+/NADH, and the results here suggest that it catalyzes formate oxidation in an unregulated manner when the formate concentration increases suddenly, even if this provides no advantage to the cell. The independence of the HDCR from other metabolic processes makes it feasible to inhibit the major pathways for substrate conversion and growth while still retaining HDCR activity. As shown before, a very specific target for inhibiting the metabolism is the sodium ion gradient across the membrane, which is built up during acetate formation and is necessary for energy conservation and growth. We assume that formate is imported by the putative formate transporter FdhC2 (Awo_c08050), whose gene is in close proximity to the HDCR gene cluster. FdhC2 could couple formate import to the proton gradient, given the similarity of its primary structure to the formate transporter FocA of Escherichia coli or Salmonella typhimurium [23, 24] (Fig. 7). In the next step, formate is reduced via the Wood–Ljungdahl pathway, and the reducing equivalents necessary for this process are generated by oxidizing part of the formate via the HDCR. Addition of the ionophore should inhibit the reductive formate pathway without influencing HDCR activity. This should stop acetate formation and result in the accumulation of hydrogen.
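The feasibility of such a switch is also a question of thermodynamics: formate conversion to H2 + CO2 is only slightly endergonic (ΔG0′ = +1.3 kJ mol−1), whereas conversion to acetate is highly exergonic (ΔG0′ = −110 kJ mol−1 [25]). The corresponding equilibrium constants, K = exp(−ΔG0′/RT), can be sanity-checked with a short sketch (standard conditions, 25 °C, assumed):

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 298.15  # K, standard temperature

def equilibrium_constant(delta_g0_kj_per_mol, temperature=T):
    """K = exp(-dG0' / RT) for a standard reaction free energy given in kJ mol^-1."""
    return math.exp(-delta_g0_kj_per_mol * 1000.0 / (R * temperature))

k_h2 = equilibrium_constant(1.3)        # formate -> H2 + CO2, slightly endergonic
k_acetate = equilibrium_constant(-110)  # formate -> acetate, highly exergonic

print(round(k_h2, 2))  # ~0.59, i.e. the equilibrium constant of ~0.6 stated in the text
```

The small K for formate oxidation to H2 explains why formate conversion stalls in a closed batch system, while the enormous K for acetogenesis drives formate consumption essentially to completion.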
At the same time, collapsing the membrane potential should be advantageous for uptake of the negatively charged formate molecule. As demonstrated in this study, neutralizing this gradient by adding a sodium ionophore (here ETH2120) proved to be an effective switch from acetate to H2 production when formate is provided as substrate. It was possible to completely turn off acetate and biomass formation and reach yields (Y(H2/formate)) of 100%. Comparing the total amount of formate consumed with and without ETH2120 showed that formate utilization stopped earlier when the cells were inhibited by the ionophore. However, in this case formate was completely converted to H2, and this reaction is slightly endergonic (ΔG0′ = +1.3 kJ mol−1); the equilibrium constant of the reaction is therefore only 0.6. In the absence of the sodium ionophore, formate is mainly converted to acetate. This reaction is highly exergonic (ΔG0′ = −110 kJ mol−1 [25]), explaining the increased formate consumption. The thermodynamics of formate-based H2 production might seem like a disadvantage; however, operating close to the thermodynamic equilibrium allows the direction of the reaction to be adjusted easily without additional energy supply. H2 can be produced from formate, or stored in the form of formate, without the input of much energy, a prerequisite for a reversible H2 storage material. Another very attractive property of formate-based H2 production is the complete conversion of the substrate to gaseous products. The substrate could be supplied continuously to the fermentation in the form of formic acid (at the same time maintaining a constant pH), resulting in the formation of H2 + CO2 only and circumventing any inhibition by dissolved products. Future studies need to address the long-term stability of the ionophore-inhibited A.
woodii system in such a continuous and pH-controlled fermentation. The price of the ionophore ETH2120 is a disadvantage for the economic feasibility of the process. We used this compound to specifically study the effect of collapsing the membrane potential. However, knowing that it is sufficient to inhibit the metabolism at any point, it should be possible to identify other, less expensive inhibitors. Alternatively, with the advent of genetic tools in acetogenic bacteria, mutations could be introduced that block key steps of the metabolism, stopping acetate production while keeping the HDCR functional.

Fig. 7 Model of formate-dependent H2 production with A. woodii. Formate can be used by A. woodii as carbon and energy source. Formate could be taken up by the putative formate transporter FdhC2. It is then bound to the cofactor tetrahydrofolate (THF) and reduced to a cofactor-bound methyl group. To generate the required reducing equivalents, part of the formate is oxidized to H2 + CO2, catalyzed by the HDCR. H2 is further oxidized by an electron-bifurcating hydrogenase, and CO2 is reduced to carbon monoxide (CO), which is combined with the methyl group, resulting in the formation of acetyl-CoA and subsequently acetate. The Rnf complex generates a sodium ion gradient driven by the electron transfer from reduced ferredoxin to NAD+, which is then used by a sodium ion-dependent ATP synthase to generate ATP. The sodium ionophore ETH2120 collapses the membrane potential, which inhibits ATP formation and could lead to ATP hydrolysis by the now uncoupled ATP synthase. This in turn inhibits conversion of formate to acetate, because the first reaction is ATP dependent, resulting in conversion solely into H2 + CO2. CHO-THF, formyl-THF; CH-THF, methenyl-THF; CH2-THF, methylene-THF; CH3-THF, methyl-THF; CoFeSP, corrinoid iron-sulfur protein; Fd, ferredoxin

In summary, A.
woodii and the corresponding enzyme HDCR turned out to be a very promising catalyst for formate-based H2 production and storage, as the system operates at ambient temperatures with very similar reaction rates in the forward and reverse directions. The specific H2 productivity (qH2) from formate observed with whole cells of A. woodii (66 mmol (g CDW)−1 h−1) is among the highest reported at ambient temperatures for an organism without genetic modification, highlighting the H2 production potential of this organism [4, 5]. Much higher qH2 values have been reported at 80 °C with the thermophile T. onnurineus [12]. This organism uses a different enzyme system for formate-based H2 production, namely a membrane-bound enzyme complex consisting of hydrogenase, formate dehydrogenase, and Na+/H+ antiporter subunits [13]. Whether T. onnurineus can also catalyze the reverse reaction has not been shown so far. At ambient temperatures, the best results have been achieved with E. coli or other Enterobacteria such as Citrobacter under non-growing conditions [26]. Without genetic modification, E. coli typically has a low formate-dependent H2 productivity. However, by metabolic engineering, including overexpression of the formate-hydrogen lyase enzyme, deletion of inhibitory pathways such as uptake hydrogenases, and process optimization, the H2 productivity could be increased dramatically (144.2 mmol g−1 h−1 when the products were removed continuously from the medium) [27, 28]. On the other hand, E. coli is inhibited by formate concentrations as low as approximately 50 mM. This was addressed by using agar-embedded immobilized cells that were able to tolerate higher concentrations [29]. This study demonstrated that A. woodii is an efficient H2 producer from the very flexible and inexpensive substrate formate. Together with our recent study on the reverse reaction, the results show that A.
woodii can also be used as a whole-cell biocatalyst for the reversible storage of H2, by binding it to CO2 to produce formate and vice versa. Future studies need to address the process at larger scale and in continuous fermentation to analyze its stability and to investigate alternatives to the expensive inhibitor ETH2120. Since any inhibition of the metabolism that does not affect the HDCR should be sufficient, other inhibitors or a genetic modification of the organism should be readily found to reduce the cost of the process.

Growth of A. woodii

Acetobacterium woodii (DSM 1030) was cultivated at 30 °C under anaerobic conditions. The defined carbonate-buffered medium was prepared as described [30]. For closed batch fermentation, a defined phosphate-buffered medium was used and prepared as described [31]. Fructose (20 mM), formate (100 mM), or H2 + CO2 (80:20 [v/v]) served as substrate. Growth was followed by measuring the optical density at 600 nm (OD600).

Preparation of cell suspensions

The medium and all buffers were prepared using the anaerobic techniques described [32, 33]. All preparation steps were performed under strictly anaerobic conditions at room temperature in an anaerobic chamber (Coy Laboratory Products, Grass Lake, MI) filled with 95–98% N2 and 2–5% H2 as described [30]. A. woodii (DSM 1030) was grown in carbonate-buffered medium until the late exponential phase, harvested by centrifugation, and washed two times with imidazole buffer (50 mM imidazole–HCl, 20 mM MgSO4, 20 mM KCl, 4 mM DTE, 1 mg L−1 resazurin, pH 7.0). Cells were resuspended in imidazole buffer and transferred to Hungate tubes. The protein concentration of the cell suspension was determined as described previously [34]. To remove remaining H2 from the Hungate tube, the gas phase of the cell suspension was changed to N2 and the cells were stored on ice until use. For the experiments, the cells were suspended in the same buffer to a concentration of 1 mg mL−1 in 115-mL glass bottles.
The bottles contained a final volume of 10 mL buffer under an N2 atmosphere and were incubated at 30 °C in a shaking water bath. Samples for substrate/product determination were taken with a syringe, cells were removed by centrifugation (15,000×g, 2 min), and the supernatant was stored at −20 °C until further analysis. For determination of H2, gas samples were taken with a gas-tight syringe (Hamilton Bonaduz AG, Bonaduz, Switzerland) and analyzed by gas chromatography.

Closed batch fermentations

Acetobacterium woodii (DSM 1030) was grown at 30 °C in 50 mL phosphate-buffered medium in 115-mL glass bottles containing an initial gas phase of 100% N2. Samples for substrate/product determination were taken with a syringe and handled as described for the cell suspension experiments.

Determination of hydrogen, formate, and acetate

For determination of H2, the gas samples were analyzed by gas chromatography on a Clarus 580 GC (PerkinElmer, Waltham, MA, USA) with a ShinCarbon ST 80/100 column (2 m × 0.53 mm, PerkinElmer, Waltham, MA, USA). The samples were injected at 100 °C with nitrogen as carrier gas at a head pressure of 400 kPa and a split flow of 30 mL min−1. The oven was kept at 40 °C, and H2 was detected with a thermal conductivity detector at 100 °C. The peak areas were proportional to the concentration of H2 and were calibrated with standard curves. The concentration of formate was determined with an enzymatic assay using the formate dehydrogenase from Candida boidinii (Sigma-Aldrich, Munich, Germany). In addition to the sample, the assay contained 1 U of enzyme in 50 mM potassium phosphate buffer (pH 7.5) and 2 mM NAD+. Formation of NADH was measured photometrically at 340 nm. Sodium formate was used for preparation of standard curves. Acetate was measured using a commercially available enzymatic assay kit from R-Biopharm (Darmstadt, Germany). All chemicals were supplied by Sigma-Aldrich Chemie GmbH (Munich, Germany) and Carl Roth GmbH & Co KG (Karlsruhe, Germany).
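In this assay, each mole of formate oxidized by the C. boidinii formate dehydrogenase yields one mole of NADH, so the formate concentration follows from the absorbance change at 340 nm via the Beer–Lambert law. A minimal sketch of the calculation (the NADH extinction coefficient and the 1 cm path length are standard literature assumptions, not stated in the text):

```python
EPSILON_NADH_340 = 6.22  # mM^-1 cm^-1, molar absorptivity of NADH at 340 nm (literature value)
PATH_CM = 1.0            # assumed cuvette path length

def formate_mM(delta_a340, dilution_factor=1.0):
    """Formate concentration (mM) in the original sample, from the A340 increase.

    1:1 stoichiometry: formate + NAD+ -> CO2 + NADH.
    """
    nadh_mM = delta_a340 / (EPSILON_NADH_340 * PATH_CM)
    return nadh_mM * dilution_factor

print(round(formate_mM(0.622, dilution_factor=10.0), 3))  # 1.0 (mM, hypothetical sample)
```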
All gases were supplied by Praxair (Düsseldorf, Germany).

VM and KS designed and supervised the research, analyzed the data, and wrote the manuscript. PK performed the experiments and analyzed the data. All authors read and approved the final manuscript.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No 741791). All data generated or analyzed during this study are included in this published article.

Molecular Microbiology & Bioenergetics, Institute of Molecular Biosciences, Johann Wolfgang Goethe University, Max-von-Laue-Str. 9, 60439 Frankfurt am Main, Germany

1. Schlapbach L, Zuttel A. Hydrogen-storage materials for mobile applications. Nature. 2001;414:353–8.
2. Das D, Veziroglu TN. Advances in biological hydrogen production processes. Int J Hydrog Energy. 2008;33:6046–57.
3. Manish S, Banerjee R. Comparison of biohydrogen production processes. Int J Hydrog Energy. 2008;33:279–86.
4. Rittmann S, Herwig C. A comprehensive and quantitative review of dark fermentative biohydrogen production. Microb Cell Fact. 2012;11:115.
5. Rittmann SK, Lee HS, Lim JK, Kim TW, Lee JH, Kang SG. One-carbon substrate-based biohydrogen production: microbes, mechanism, and productivity. Biotechnol Adv. 2015;33:165–77.
6. Jhong HR, Ma SC, Kenis PJA. Electrochemical conversion of CO2 to useful chemicals: current status, remaining challenges, and future opportunities. Curr Opin Chem Eng. 2013;2:191–9.
7. Agarwal AS, Zhai Y, Hill D, Sridhar N. The electrochemical reduction of carbon dioxide to formate/formic acid: engineering and economic feasibility. ChemSusChem. 2011;4:1301–10.
8. Böhm R, Sauter M, Böck A. Nucleotide sequence and expression of an operon in Escherichia coli coding for formate hydrogen lyase components. Mol Microbiol. 1990;4:231–43.
9. Sawers RG. Formate and its role in hydrogen production in Escherichia coli. Biochem Soc Trans. 2005;33:42–6.
10. Calusinska M, Happe T, Joris B, Wilmotte A. The surprising diversity of clostridial hydrogenases: a comparative genomic perspective. Microbiology. 2010;156:1575–88.
11. Bae SS, Kim YJ, Yang SH, Lim JK, Jeon JH, Lee HS, Kang SG, Kim SJ, Lee JH. Thermococcus onnurineus sp. nov., a hyperthermophilic archaeon isolated from a deep-sea hydrothermal vent area at the PACMANUS field. J Microbiol Biotechnol. 2006;16:1826–31.
12. Lim JK, Bae SS, Kim TW, Lee JH, Lee HS, Kang SG. Thermodynamics of formate-oxidizing metabolism and implications for H2 production. Appl Environ Microbiol. 2012;78:7393–7.
13. Lim JK, Mayer F, Kang SG, Müller V. Energy conservation by oxidation of formate to carbon dioxide and hydrogen via a sodium ion current in a hyperthermophilic archaeon. Proc Natl Acad Sci USA. 2014;111:11497–502.
14. Kim YJ, Lee HS, Kim ES, Bae SS, Lim JK, Matsumi R, Lebedinsky AV, Sokolova TG, Kozhevnikova DA, Cha SS, et al. Formate-driven growth coupled with H2 production. Nature. 2010;467:352–5.
15. Schuchmann K, Müller V. Direct and reversible hydrogenation of CO2 to formate by a bacterial carbon dioxide reductase. Science. 2013;342:1382–5.
16. Fujita E, Muckerman JT, Himeda Y. Interconversion of CO2 and formic acid by bio-inspired Ir complexes with pendent bases. Biochim Biophys Acta. 2013;1827:1031–8.
17. Mellmann D, Sponholz P, Junge H, Beller M. Formic acid as a hydrogen storage material—development of homogeneous catalysts for selective hydrogen release. Chem Soc Rev. 2016;45:3954–88.
18. Enthaler S, von Langermann J, Schmidt T. Carbon dioxide and formic acid—the couple for environmental-friendly hydrogen storage? Energy Environ Sci. 2010;3:1207–17.
19. Balch WE, Schoberth S, Tanner RS, Wolfe RS. Acetobacterium, a new genus of hydrogen-oxidizing, carbon dioxide-reducing, anaerobic bacteria. Int J Syst Bacteriol. 1977;27:355–61.
20. Schuchmann K, Müller V. Autotrophy at the thermodynamic limit of life: a model for energy conservation in acetogenic bacteria. Nat Rev Microbiol. 2014;12:809–21.
21. Wood HG, Ljungdahl LG. Autotrophic character of the acetogenic bacteria. In: Shively JM, Barton LL, editors. Variations in autotrophic life. San Diego: Academic Press; 1991. p. 201–50.
22. Poehlein A, Schmidt S, Kaster A-K, Goenrich M, Vollmers J, Thürmer A, Bertsch J, Schuchmann K, Voigt B, Hecker M, et al. An ancient pathway combining carbon dioxide fixation with the generation and utilization of a sodium ion gradient for ATP synthesis. PLoS ONE. 2012;7:e33439.
23. Lu W, Du J, Wacker T, Gerbig-Smentek E, Andrade SL, Einsle O. pH-dependent gating in a FocA formate channel. Science. 2011;332:352–4.
24. Wang Y, Huang Y, Wang J, Cheng C, Huang W, Lu P, Xu YN, Wang P, Yan N, Shi Y. Structure of the formate transporter FocA reveals a pentameric aquaporin-like channel. Nature. 2009;462:467–72.
25. Thauer RK, Jungermann K, Decker K. Energy conservation in chemotrophic anaerobic bacteria. Bacteriol Rev. 1977;41:100–80.
26. Seol E, Kim S, Raj SM, Park S. Comparison of hydrogen-production capability of four different Enterobacteriaceae strains under growing and non-growing conditions. Int J Hydrog Energy. 2008;33:5169–75.
27. Yoshida A, Nishimura T, Kawaguchi H, Inui M, Yukawa H. Enhanced hydrogen production from formic acid by formate hydrogen lyase-overexpressing Escherichia coli strains. Appl Environ Microbiol. 2005;71:6762–8.
28. Yoshida A, Nishimura T, Kawaguchi H, Inui M, Yukawa H. Efficient induction of formate hydrogen lyase of aerobically grown Escherichia coli in a three-step biohydrogen production process. Appl Microbiol Biotechnol. 2007;74:754–60.
29. Seol E, Manimaran A, Jang Y, Kim S, Oh YK, Park S. Sustained hydrogen production from formate using immobilized recombinant Escherichia coli SH5. Int J Hydrog Energy. 2011;36:8681–6.
30. Heise R, Müller V, Gottschalk G. Presence of a sodium-translocating ATPase in membrane vesicles of the homoacetogenic bacterium Acetobacterium woodii. Eur J Biochem. 1992;206:553–7.
31. Imkamp F, Müller V. Chemiosmotic energy conservation with Na+ as the coupling ion during hydrogen-dependent caffeate reduction by Acetobacterium woodii. J Bacteriol. 2002;184:1947–51.
32. Bryant MP. Commentary on the Hungate technique for culture of anaerobic bacteria. Am J Clin Nutr. 1972;25:1324–8.
33. Hungate RE. A roll tube method for cultivation of strict anaerobes. In: Norris JR, Ribbons DW, editors. Methods in microbiology. New York: Academic Press; 1969. p. 117–32.
34. Schmidt K, Liaaen-Jensen S, Schlegel HG. Die Carotinoide der Thiorhodaceae. Arch Mikrobiol. 1963;46:117–26.
\begin{document} \title{Skill Decision Transformer} \begin{abstract} Recent work has shown that Large Language Models (LLMs) can be incredibly effective for offline reinforcement learning (RL) by representing the traditional RL problem as a sequence modelling problem \citep{chen2021decisiontransformer, tt}. However, many of these methods only optimize for high returns, and may not extract much information from a diverse dataset of trajectories. Generalized Decision Transformers (GDTs) \citep{gdt} have shown that utilizing future trajectory information, in the form of information statistics, can help extract more information from offline trajectory data. Building upon this, we propose the \emph{Skill Decision Transformer} (Skill DT). Skill DT draws inspiration from hindsight relabelling \citep{her} and skill discovery methods to discover a diverse set of \emph{primitive behaviors}, or skills. We show that Skill DT can not only perform offline state-marginal matching (SMM), but can also discover descriptive behaviors that can be easily sampled. Furthermore, we show that through purely reward-free optimization, Skill DT is still competitive with supervised offline RL approaches on the D4RL benchmark. The code and videos can be found on our project page: https://github.com/shyamsn97/skill-dt. \end{abstract} \section{Introduction} Reinforcement Learning (RL) has been incredibly effective in a variety of online scenarios such as games and continuous control environments \citep{li2017deep}. However, RL methods generally suffer from sample inefficiency, often requiring millions of interactions with an environment. In addition, efficient exploration is needed to avoid local minima \citep{curiosity, edl}. Because of these limitations, there is interest in methods that can learn diverse and useful primitives without supervision, enabling better exploration and re-usability of learned skills \citep{diyain,disdain,edl}.
However, these online skill discovery methods still require interactions with an environment, where access may be limited. This requirement has sparked interest in Offline RL, where a dataset of trajectories is provided. Some of these datasets \citep{fu2020d4rl} are composed of large and diverse trajectories of varying performance, making it non-trivial to actually make proper use of these datasets; simply applying behavioral cloning (BC) leads to sub-optimal performance. Recently, approaches such as the Decision Transformer (DT) \citep{chen2021decisiontransformer} and the Trajectory Transformer (TT) \citep{tt} utilize Transformer architectures \citep{attention} to achieve high performance on Offline RL benchmarks. \citet{gdt} showed that these methods are effectively doing hindsight information matching (HIM), where the policies are trained to estimate a trajectory that matches given target statistics of future information. The work also generalizes DT as an information-statistic conditioned policy, the Generalized Decision Transformer (GDT). This results in policies with different capabilities, such as supervised learning and State Marginal Matching (SMM) \citep{smm}, just by varying different information statistics. \begin{figure} \caption{Skill Decision Transformer. States are encoded and clustered via VQ-VAE codebook embeddings. Unlike the classic DT, which conditions on summed future returns, Skill DT conditions on learned skill embeddings and future skill distributions. A Causal Transformer, similar to the original DT architecture, takes in a sequence of states, a latent skill distribution, represented as the normalized summed future counts of VQ-VAE encoding indices (details can be found in the ``generate\_histogram'' function in Section~\ref{sssec:evaluating}), and the corresponding skill encoding of the state at timestep \(t\). The skill histogram captures ``future'' skill behavior, while the skill embedding represents current skill behavior at timestep \(t\).} \label{fig:architecture} \end{figure} In the work presented here, we take inspiration from the previously mentioned skill discovery methods and introduce \emph{Skill Decision Transformers} (Skill DT), a special case of GDT, where we wish to condition action predictions on skill embeddings and also \emph{future} skill distributions. We show that Skill DT is not only able to discover a number of discrete behaviors, but is also able to effectively match target trajectory distributions. Our method is completely unsupervised and predicts actions conditioned on previous states, skills, and distributions of future skills. Empirically, we show that Skill DT can not only perform SMM on target trajectories, but can also match or exceed the performance of other state-of-the-art offline RL approaches on D4RL benchmarks \citep{fu2020d4rl}. \section{Related Work} \subsection{Skill Discovery} Many skill discovery methods attempt to learn a latent skill conditioned policy \(\pi(a | s, z)\), where state \(s \sim p(s)\) and skill \(z \sim Z\), that maximizes mutual information between \(S\) and \(Z\) \citep{vic, dads, diyain}. Another way of learning meaningful skills is through variational inference, where \(z\) is learned via a reconstruction loss \citep{edl}. Explore, Discover and Learn (EDL) \citep{edl} is an approach that discovers a discrete set of skills by encoding states via a VQ-VAE: \(p(z|s)\), and reconstructing them: \(p(s|z)\).
We use a similar approach, but instead of reconstructing states, we utilize offline trajectories and optimize action reconstruction directly (\(p(a|s,z)\)). Since our policy is autoregressive, our skill encoding actually takes into account temporal information, leading to more descriptive skill embeddings. Offline Primitive Discovery for Accelerating Offline Reinforcement Learning (OPAL) \citep{opal} also discovers offline skills temporally, but instead uses a continuous distribution of skills. These continuous skills are then sampled by a hierarchical policy that is optimized by task rewards. Because our approach is completely unsupervised, we wish to easily sample skills. To simplify this, we opt to use a discrete distribution of skills. This makes it trivial to query the highest performing behaviors, accomplished by just iterating through the discrete skills. \subsection{State Marginal Matching} State marginal matching (SMM) \citep{smm} involves finding policies that minimize the distance between the marginal state distribution that the policy represents, \(p^\pi(s)\), and a target distribution \(p^*(s)\). These objectives have an advantage over traditional RL objectives in that they do not require any rewards and are guided towards exploration \citep{edl}. The Categorical Decision Transformer (CDT) \citep{gdt} has shown impressive SMM capabilities by conditioning actions on binned target state distributions. However, using CDT in a real environment is difficult because target distributions must be provided, while Skill DT learns discrete skills that can be sampled easily. Also, CDT requires a low-dimensional state space, while Skill DT can -- in theory -- work on any type of input as long as it can be encoded effectively into a vector.
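To make the SMM objective concrete, the following is a minimal NumPy sketch (an illustration of ours, not code from either paper) that approximates the mismatch between a policy's state marginal and a target marginal by binning each state dimension, in the spirit of CDT's per-dimension histograms; the bin range and count are arbitrary assumptions:

```python
import numpy as np

def smm_distance(policy_states, target_states, bins=10, low=-1.0, high=1.0):
    """Approximate SMM objective: average per-dimension total-variation
    distance between binned empirical state marginals."""
    edges = np.linspace(low, high, bins + 1)
    dims = policy_states.shape[1]
    tv = 0.0
    for d in range(dims):
        p, _ = np.histogram(policy_states[:, d], bins=edges)
        q, _ = np.histogram(target_states[:, d], bins=edges)
        tv += 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()
    return tv / dims

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(1000, 3))
assert smm_distance(states, states) == 0.0  # identical marginals -> zero distance
```

A policy that perfectly matches the target marginals scores 0, while fully disjoint marginals score 1.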
\section{Preliminaries} In this work, we consider learning in environments modelled as Markov decision processes (MDPs), which can be described by the tuple \((S, A, P, R)\), where \(S\) represents the state space, \(A\) represents the action space, \(P(s_{t+1} | s_{t}, a_{t})\) represents the state transition dynamics of the environment, and \(R(s_{t}, a_{t})\) represents the reward function. \subsection{Generalized Decision Transformer} The Decision Transformer (DT) \citep{chen2021decisiontransformer} represents RL as a sequence modelling problem and uses a GPT architecture \citep{gpt} to predict actions autoregressively. Specifically, DT takes in a sequence of returns-to-go (RTGs), states, and actions, where \(R_t = \sum_{t'=t}^{T}r_{t'}\), and trajectory \(\tau = (R_0, s_0, a_0, ..., R_{|\tau|}, s_{|\tau|}, a_{|\tau|})\). DT uses the \(K\) previous tokens to predict \(a_t\) with a deterministic policy, which is optimized by a mean squared error loss between target and predicted actions. For evaluation, a target return \(\hat{R}_{target}\) is provided and DT attempts to achieve the targeted return in the actual environment. \citet{gdt} introduced a generalized version of DT, the Generalized Decision Transformer (GDT).
GDT provides a simple interface for representing a variety of different objectives, configurable by different information statistics (for consistency, we represent variations of GDT with \(\pi^{gdt}\)): \begin{center} \(\tau_{t}\) = \(s_{t}, a_{t}, r_{t}, ..., s_{T}, a_{T}, r_{T}\), \(I^{\phi}\) = information statistics function \end{center} Generalized Decision Transformer (GDT): \begin{center} \(\pi^{gdt}(a_{t} | I^{\phi}(\tau_0), s_{0}, a_{0}, ..., I^{\phi}(\tau_t), s_{t})\) \end{center} Decision Transformer (DT): \begin{center} \(\pi^{gdt}_{dt}(a_{t} | I_{dt}^{\phi}(\tau_0), s_{0}, a_{0}, ..., I_{dt}^{\phi}(\tau_t), s_{t})\), where \( I_{dt}^{\phi}(\tau_t) = \sum_{t'=t}^{T}\gamma^{t'-t} r_{t'}\), \(\gamma\) = discount factor \end{center} Categorical Decision Transformer (CDT): \begin{center} \(\pi^{gdt}_{cdt}(a_{t} | I_{cdt}^{\phi}(\tau_0), s_{0}, a_{0}, ..., I_{cdt}^{\phi}(\tau_t), s_{t})\), where \( I_{cdt}^{\phi}(\tau_t) = histogram(s_{t}, ..., s_{T})\) \end{center} CDT is the most similar to Skill DT -- CDT captures future trajectory information using future state distributions, represented as histograms for each state dimension, essentially binning and counting the bin ids for each state dimension. Skill DT instead utilizes learned skill embeddings to generate future skill distributions, represented as histograms of \textbf{full} embeddings. In addition, Skill DT also uses the skill embedding itself in tandem with the skill distributions. \section{Skill Decision Transformer} \subsection{Formulation} \label{gen_inst} Our Skill DT architecture is very similar to the original Decision Transformer presented in \citet{chen2021decisiontransformer}.
While the classic DT uses summed future returns to condition trajectories, we instead make use of learned skill embeddings and future \emph{skill distributions}, represented as a histogram of skill embedding indices, similar to the way the Categorical Decision Transformer (CDT) \citep{gdt} utilizes future state counts. One notable difference between Skill DT and both the original Decision Transformer \citep{chen2021decisiontransformer} and the GDT \citep{gdt} variant is that we omit actions from the model inputs. This is because we are interested in SMM through skills, where we want to extract as much information as possible from states. Formally, Skill DT represents a policy: $$ \pi_{\theta}(a_{t} | Z_{t-K}, z_{t-K}, s_{t-K}, ..., Z_{t}, z_{t}, s_{t}),$$ where \(K\) is the context length, and \(\theta\) are the learnable parameters of the model. States are encoded as skill embeddings \(\hat{z}_t\), which are then quantized using a learned codebook of embeddings: \(z = \operatorname{argmin}_{n}||\hat{z} - z_n||^{2}_{2}\). The future skill distributions are represented as the normalized histogram of summed future one-hot encoded skill indices: \(Z_t \propto \sum_{t'=t}^{T}one\_hot(z_{t'})\). Connecting this to GDT, our policy can be viewed as: \begin{center} \(\pi^{gdt}_{skill}(a_{t} | I_{skill}^{\phi}(\tau_0), s_{0}, ..., I_{skill}^{\phi}(\tau_t), s_{t})\), where \( I_{skill}^{\phi}(\tau_t) = (histogram(z_{t}, ..., z_{T}), z_{t})\). \end{center} \subsubsection{Hindsight Skill Re-labelling} Hindsight experience replay (HER) is a method that has been effective in improving sample-efficiency of goal-oriented agents \citep{her,hpg}. The core concept revolves around \emph{goal relabelling}, where trajectory goals are replaced by achieved goals rather than intended goals. This concept of re-labelling information has been utilized in a number of works \citep{DBLP:journals/corr/abs-1912-06088, odt, gogopeo} to iteratively learn and condition predictions on target statistics.
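The two statistics above can be sketched in a few lines of NumPy; here a random codebook stands in for the learned VQ-VAE embeddings, and all shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
num_skills, skill_dim, T = 4, 8, 6
codebook = rng.normal(size=(num_skills, skill_dim))  # VQ-VAE embeddings z_n
encoded = rng.normal(size=(T, skill_dim))            # encoder outputs z_hat_t

# z_t = argmin_n ||z_hat_t - z_n||^2 (nearest-codebook quantization)
dists = ((encoded[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
skill_ids = dists.argmin(axis=1)                     # (T,)

# Z_t: normalized histogram of future skill indices via a reverse cumulative sum
one_hot = np.eye(num_skills)[skill_ids]              # (T, num_skills)
future_counts = one_hot[::-1].cumsum(axis=0)[::-1]   # sum over t' >= t of one_hot(z_t')
Z = future_counts / future_counts.sum(axis=1, keepdims=True)

assert np.allclose(Z.sum(axis=1), 1.0)  # each Z_t is a distribution over skills
```

The last row \(Z_T\) reduces to the one-hot encoding of the final skill, while earlier rows summarize the remainder of the trajectory.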
Bi-Directional Decision Transformer (BDT) \citep{gdt} utilizes an anti-causal transformer to encode trajectory information, and passes it into a causal transformer action predictor. At every training iteration, BDT re-labels trajectory information with the anti-causal transformer. Similarly, Skill DT re-labels future skill distributions at every training iteration. Because the skill encoder is continually updated and skill representations change during training, the re-labelling of skill distributions is required to ensure stability in action predictions. \subsection{Architecture} \textbf{VQ-VAE Skill Encoder}. Many previous works have represented discrete skills as categorical variables, sampled from a categorical distribution prior \citep{disdain, diyain}. VQ-VAEs \citep{vqvae} have shown impressive capabilities with discrete variational inference in the space of computer vision \citep{visionvq, taming}, planning \citep{vqvaeplanning}, and online skill discovery \citep{edl}. Because of this, we use a VQ-VAE to quantize encoded states into a set of continuous skill embeddings. We encode states into vectors \(\hat{z}\) and quantize them to the nearest skill embeddings \(z\). To ensure stability, we minimize the regularization term: \begin{equation} \label{eq:vqloss} VQLOSS(z, \hat{z}) = MSE(z, \hat{z}) \end{equation} where \(\hat{z}\) is the output of the MLP encoder and \(z\) is the nearest embedding in the VQ-VAE codebook. Optimizing this loss minimizes the distance between our skill encodings and their corresponding nearest VQ-VAE embeddings. This is analogous to clustering, where we are trying to minimize the distance between datapoints and their actual cluster centers. In practice, we optimize this loss using an exponential moving average, as detailed in \citet{robustvqvae}. \textbf{Causal Transformer}.
The Causal Transformer portion of Skill DT shares a similar architecture to that of the original DT \citep{chen2021decisiontransformer}, utilizing a GPT \citep{gpt} model. It takes as input the last \(K\) states \(s_{t-K:t}\), skill encodings \(z_{t-K:t}\), and future skill embedding distributions \(Z_{t-K:t}\). As mentioned above, the future skill embedding distributions are calculated by generating a histogram of skill indices from timestep \(t:T\) and normalizing it so that it sums to 1. For states and skill embedding distributions, we use learned linear layers to create token embeddings. To capture temporal information, we also learn a timestep embedding that is added to each token. Note that we don't tokenize our skill embeddings because we want to ensure that we don't lose important skill embedding information. Even though we don't add timestep embeddings to the skill embeddings, they still capture temporal behavior because the attention mechanism \citep{attention} of the causal transformer attends the embeddings to temporally conditioned states and skill embedding distributions. The VQ-VAE and Causal Transformer components are shown visually in Fig.~\ref{fig:architecture}. \subsection{Training Procedure} \begin{figure} \caption{Training procedure for Skill Decision Transformer. Sub-trajectories of states of length \(K\) are sampled from the dataset, encoded into latents, and discretized. All three variables are passed into the causal transformer to output actions. The VQ-VAE parameters and Causal Transformer parameters are backpropagated directly using an MSE loss and the VQ-VAE regularization loss shown in Equation \ref{eq:vqloss}.} \label{fig:training_procedure} \end{figure} Training Skill DT is very similar to how other variants of GDT are trained (CDT, BDT, DT, etc.). First, before every training iteration we re-label skill distributions for every trajectory using our VQ-VAE encoder.
Afterwards, we sample minibatches of sequence length \(K\), where timesteps are sampled uniformly. Specifically, at every training iteration, we sample \(\tau = (s_{t}, ... s_{t+K}, a_{t}, ... a_{t+K})\), where \(t\) is sampled uniformly for each trajectory in the batch. The sampled states, \((s_{t}, ... s_{t+K})\), are encoded into skill embeddings using the VQ-VAE encoder. We then pass the states, encoded skills, and skill distributions into the causal transformer to output actions. Like the original DT \citep{chen2021decisiontransformer}, we did not find it useful to predict states or skill distributions, but actively predicting skill distributions, without having to actually provide states to encode, could be useful; this is a topic we hope to explore more in the future. The VQ-VAE encoder and causal transformer are updated by backpropagation through an MSE loss between target actions and predicted actions and the VQ-VAE regularization loss referenced in Equation \ref{eq:vqloss}. The simplified training procedure is shown in Algorithm~\ref{alg:skilldt}. \begin{algorithm} \caption{Offline Skill Discovery with Skill Decision Transformer} \label{alg:skilldt} \begin{algorithmic} \State \textbf{Initialize} offline dataset \(D\), Causal Transformer \(f_\theta\), VQ-VAE Encoder \(e_{\phi}\), context length \(K\), num updates per iteration \(J\) \For{training iterations \(i = 1...N\)} \State Label dataset trajectories with skill distributions \(Z_{\tau_t} \propto \sum_{t'=t}^{T}one\_hot(z_{t'})\) for all \(t = 1, ..., |\tau|\) \State Sample timesteps uniformly: \(t \in 1, ... max\_len\) \State Sample batch of trajectory states: \(\tau = (s_{t}, ... s_{t+K}, a_{t}, ... a_{t+K})\) \For{j = 1...\(J\)} \State \(\hat{z}_{\tau_{t:t+K}} = (e_{\phi}(s_{t}), ...
e_{\phi}(s_{t+K}))\) \Comment{Encode skills} \State \(z_{\tau_{t:t+K}} = quantize(\hat{z}_{\tau_{t:t+K}})\) \Comment{Quantize skills with VQ-VAE} \State \(\hat{a}_{\tau_{t:t+K}}\) = \(f_{\theta}(Z_{\tau_t}, z_{\tau_{t}}, s_{t}, ..., Z_{\tau_{t+K}}, z_{\tau_{t+K}}, s_{t+K}) \) \State \(L_{\theta,\phi} = \frac{1}{K}\sum_{t'=t}^{t+K}(a_{t'} - \hat{a}_{t'})^2 + VQLOSS_{\phi}(z_{\tau_{t:t+K}}, \hat{z}_{\tau_{t:t+K}})\) \State Backpropagate \(L_{\theta,\phi}\) w.r.t.\ \(\theta,\phi\) \EndFor \EndFor \end{algorithmic} \end{algorithm} \section{Experiments} \subsection{Tasks and Datasets} For evaluating the performance of Skill DT, we use tasks and datasets from the D4RL benchmark \citep{fu2020d4rl}. D4RL has been used as a standard for evaluating many offline RL methods \citep{cql, chen2021decisiontransformer, iql, odt}. We evaluate our method on MuJoCo Gym continuous control tasks, as well as AntMaze tasks. Images of some of these environments can be seen in Section~\ref{sssec:trajectory-visualization}. \subsection{Evaluating Supervised Return} \textbf{Can Skill DT achieve near or competitive performance, using only trajectory information, compared to supervised offline RL approaches?} \begin{table}[H] \centering \begin{tabular}{ |p{0.3\linewidth}||p{0.02\linewidth}|p{0.04\linewidth}|p{0.04\linewidth}|p{0.05\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}| p{0.08\linewidth} | } \hline \multicolumn{9}{|c|}{Mujoco Mean Results} \\ \hline Env Name & DT & CQL & IQL & OPAL & KMeans DT & \textbf{Skill DT (best skill)} & num skills (Skill DT) & Dataset Max Reward \\ \hline walker2d-medium & 74 & 79 & 78.3 & --- & 76 & \textbf{82} & 10 & 92\\ \hline halfcheetah-medium & 43 & 44 & \textbf{47} & --- & 43 & 44 & 10 & 45\\ \hline ant-medium & 94 & --- & 101 & --- & 100 & \textbf{106} & 10 & 107\\ \hline hopper-medium & 68 & 58 & 66 & --- & 66 & \textbf{76} & 32 & 100\\ \hline halfcheetah-medium-replay & 37 & \textbf{46} & 44 & --- & 39 & 41 & 32 & 42\\ \hline hopper-medium-replay & 63 &
\textbf{95} & \textbf{95} & --- & 71 & 81 & 32 & 99\\ \hline antmaze-umaze & 59 & 75 & 88 & --- & 73 & \textbf{100} & 32 & 100\\ \hline antmaze-umaze-diverse & 53 & 84 & 62 & --- & 67 & \textbf{100} & 32 & 100\\ \hline antmaze-medium-diverse & 0 & 61 & \textbf{71} & --- & 0 & 13 & 64 & 100\\ \hline antmaze-medium-play & 0 & 54 & 70 & \textbf{81} & 0 & 0 & 64 & 100\\ \hline \hline \end{tabular} \caption{\label{tab:results} Average normalized returns on Gym and AntMaze tasks. Baseline results are taken from other works \citep{chen2021decisiontransformer, cql, iql, odt}; Skill DT's returns are averaged over 4 seeds (for Gym) and 15 seeds (for AntMaze). Skill DT outperforms the baselines on most tasks, but fails to beat them on the replay and antmaze-medium tasks. However, Skill DT can consistently solve the antmaze-umaze tasks. } \end{table} Other offline skill discovery algorithms optimize hierarchical policies via supervised RL, utilizing the learned primitives to maximize rewards of downstream tasks \citep{opal}. However, because we are interested in evaluating Skill DT \textbf{without} rewards, we have to rely on learning enough skills such that high-performing trajectories are represented. To evaluate this in practice, we run rollouts for each unique skill and take the maximum reward achieved. Detailed Python pseudocode for this is provided in Section~\ref{sssec:evaluating}. For a close skill-based comparison to Skill DT, we use a K-Means augmented Decision Transformer (K-Means DT). K-Means DT differs from Skill DT in that, instead of learning skill embeddings, it clusters states via K-Means and utilizes the cluster centers as the skill embeddings. Surprisingly, through just pure unsupervised skill discovery, we are able to achieve competitive results on MuJoCo continuous control environments compared to state-of-the-art offline reinforcement learning algorithms \citep{cql, iql, chen2021decisiontransformer}.
As we can see in our results in Table~\ref{tab:results}, Skill DT outperforms other baselines on most of the tasks and DT on all of the tasks. However, it performs worse than the other baselines on the antmaze-medium and -replay tasks. We hypothesize that Skill DT performs worse on these tasks because they contain multimodal and diverse behaviors. We think that with additional return context or online play, Skill DT may be able to perform better in these environments, and we hope to explore this as future work. Skill DT, like the original Decision Transformer \citep{chen2021decisiontransformer}, also struggles on harder exploration problems like the antmaze-medium environments. Methods that perform well on these tasks usually utilize dynamic programming, like the Trajectory Transformer \citep{tt}, or hierarchical reinforcement learning, like OPAL \citep{opal}. Even though Skill DT performs marginally better than DT, there is still a lot of room for improvement in future work.
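The reward-based model selection used for Table~\ref{tab:results} reduces to a small loop; the sketch below is schematic, with `rollout_fn` standing in for the per-skill environment rollouts detailed in Section~\ref{sssec:evaluating}:

```python
def best_skill_return(rollout_fn, num_skills):
    """Reward-free evaluation: roll out every discrete skill once and
    report the skill with the highest episodic return."""
    returns = [rollout_fn(skill_id) for skill_id in range(num_skills)]
    best_id = max(range(num_skills), key=returns.__getitem__)
    return best_id, returns[best_id]

# toy stand-in for environment rollouts: skill 2 performs best
best_id, best_ret = best_skill_return(lambda s: [10.0, 40.0, 95.0, 30.0][s], num_skills=4)
assert (best_id, best_ret) == (2, 95.0)
```

The cost of this selection grows linearly with the number of skills, which is the computation-time tradeoff noted in the ablation study.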
\section{Discussion} \subsection{Ablation Study} \textbf{What is the effect of the number of skills?} \begin{table}[H] \centering \begin{tabular}{ |p{4cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}| } \hline \multicolumn{5}{|c|}{Ablation Results} \\ \hline Env Name & 5 skills & 10 skills & 16 skills & 32 skills\\ \hline walker2d-medium & 80 & 82 & 82 & 82 \\ \hline halfcheetah-medium & 44 & 44 & 44 & 44 \\ \hline ant-medium & 100 & 106 & 106 & 106 \\ \hline hopper-medium & 65 & 70 & 76 & 76 \\ \hline hopper-medium-replay & 28 & 31 & 46 & 81 \\ \hline halfcheetah-medium-replay & 34 & 39 & 41 & 41 \\ \hline antmaze-umaze & 80 & 100 & 100 & 100 \\ \hline antmaze-umaze-diverse & 66 & 100 & 100 & 100 \\ \hline \end{tabular} \caption{\label{tab:ablation-study} Best reward obtained across skills for a varying number of skills} \end{table} Because Skill DT is a completely unsupervised algorithm, evaluating supervised return requires evaluating every learnt skill and taking the one that achieves the maximum reward. This means we are relying entirely on Skill DT's ability to capture behaviors from high-performing trajectories in the offline dataset. We found that increasing the number of skills has less of an effect on performance in environments that have a large number of successful trajectories (-medium environments). We hypothesize that these datasets have unimodal behaviors, and Skill DT does not need many skills to capture descriptive information from the dataset. However, for multimodal datasets (such as the -replay environments), Skill DT's performance improves with an increasing number of skills. In general, using a larger number of skills can help performance, but the tradeoff is increased computation time because each skill needs to be evaluated in the environment. These results are reported in Table \ref{tab:ablation-study}. Images of skills learnt can be seen in Section~\ref{sssec:trajectory-visualization}.
\subsection{SMM with learned skills} \textbf{How well can Skill DT reconstruct target trajectories and perform SMM in a zero-shot manner?} Ideally, if an algorithm is effective at SMM, it should be able to reconstruct a target trajectory in an actual environment. That is, given a target trajectory, the algorithm should be able to roll out a similar trajectory. The original DT can actually perform SMM well on \textbf{offline} trajectories. However, when attempting this in an actual environment, it is unable to reconstruct a target trajectory because it cannot be conditioned on accurate future state trajectory information. Skill DT, similar to CDT, is able to perform SMM in an actual environment because it encodes future state information into skill embedding histograms. The practical process for this is fairly simple and detailed in Algorithm~\ref{alg:reconstructing}. In addition to state trajectories, the learned skill distributions of the reconstructed trajectory and the target trajectory should also be close. We investigate this by looking at target trajectories from antmaze-umaze-v2 and antmaze-umaze-diverse-v2. For a more challenging example, we handpicked a trajectory from antmaze-umaze-diverse that is unique in that it has a loop. Even though the trajectory is unique, Skill DT is still able to roughly recreate it in a zero-shot manner (Fig.~\ref{fig:ant-umaze-diverse-reconstructions}), with rollouts also including a loop in the trajectory. Additional results can be found in Section~\ref{sssec:few-shot}. \begin{figure} \caption{Target antmaze-umaze trajectory} \caption{Target antmaze-umaze-diverse skill dist.} \caption{reconstructed trajectories} \caption{reconstructed skill dist.} \caption{From the antmaze-umaze-diverse Environment: The target trajectory is complex, with a loop and with noisy movement.
Reconstructed rollouts also contain a loop.} \label{fig:ant-umaze-diverse-reconstructions} \end{figure} \subsection{Skill Diversity and Descriptiveness} \textbf{How diverse and descriptive are the skills that Skill DT discovers?} In order to evaluate Skill DT as a skill discovery method, we must show that behaviors are not only diverse but also descriptive, or more intuitively, \textbf{distinguishable}. We are able to visualize the diversity of learned behaviors by plotting each trajectory generated by a skill on both the antmaze-umaze and ant environments, shown below. To visualize Skill DT's ability to describe states, we show the projected skill embeddings and quantized skill embedding clusters (Fig.~\ref{fig:ant-tsne}). For a diversity metric, we utilize a Wasserstein distance metric between skill distributions (normalized between [0, 1]), similar to the method proposed in \citep{gdt}. We report this metric in Table~\ref{tab:diversity-metric-study}. \begin{table}[H] \centering \begin{tabular}{ |p{4cm}||p{2cm}|p{2cm}|p{2cm}| } \hline \multicolumn{4}{|c|}{Wasserstein Distances} \\ \hline Env Name & min & max & avg\\ \hline walker2d-medium & 0.007 & 0.015 & 0.010 \\ \hline ant-medium & 0.007 & 0.012 & 0.008 \\ \hline hopper-medium-replay & 0.009 & 0.036 & 0.027\\ \hline halfcheetah-medium-replay & 0.011 & 0.033 & 0.019 \\ \hline antmaze-umaze-diverse & 0.008 & 0.026 & 0.011 \\ \hline \end{tabular} \caption{\label{tab:diversity-metric-study} Wasserstein distance metric (computed between each skill and all others). In tasks with unimodal behaviors (-medium), Skill DT discovers skills that result in trajectories that are more similar to each other than in more complex tasks (-replay and antmaze).} \end{table} \begin{figure} \caption{Left: antmaze-umaze-v2, Right: antmaze-umaze-diverse-v2. Trajectories made using 32 skills for both antmaze-umaze variants.
The diverse variant contains many noisy trajectories, but Skill DT is still able to learn diverse and distinguishable skills.} \label{ant-path-skill-trajectories} \end{figure} \begin{figure} \caption{t-SNE projections of Ant-v2 states. Left: States are encoded into unquantized skill embeddings and projected via t-SNE. Right: States are encoded into quantized skill embeddings and projected via t-SNE.} \label{fig:ant-tsne} \end{figure} \subsection{Limitations and Future work} Our approach is powerful because it is unsupervised, but it is also limited by this. Because we do not have access to rewards, we rely on pure offline diversity to ensure that high-performing trajectories are learned and encoded into skills that can be sampled. However, this is not very effective for tasks that require dynamic programming or longer sequence prediction. Skill DT could benefit from borrowing concepts from hierarchical skill discovery \citep{opal} to re-use learned skills on downstream tasks by using an additional return-conditioned model. In addition, it would be interesting to explore an online component to the training procedure, similar to the work in \citet{odt}. \section{Conclusion} We proposed Skill DT, a variant of the Generalized DT, to explore the capabilities of offline skill discovery with sequence modelling. We showed that a combination of sequence modelling and hindsight relabelling can be useful for extracting information from diverse offline trajectories. On standard offline RL environments, we showed that Skill DT is capable of learning a rich set of behaviors and can perform zero-shot SMM through state-encoded skill embeddings. Skill DT could further be improved by adding an online component, a hierarchical component that utilizes returns, and improved exploration.
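As a concrete illustration of the diversity metric reported in Table~\ref{tab:diversity-metric-study}, the sketch below computes pairwise 1-Wasserstein distances between per-skill distributions over codebook indices. On a unit-spaced support this is equivalent to scipy.stats.wasserstein_distance; unlike the table, the values here are left unnormalized, and the histograms are made-up toy inputs:

```python
import numpy as np

def w1_discrete(p, q):
    """1-Wasserstein distance between two histograms on support 0..k-1
    (the area between their CDFs)."""
    return float(np.abs(np.cumsum(p - q))[:-1].sum())

def skill_diversity(skill_histograms):
    """Min / max / average pairwise W1 distance between skill distributions,
    each row being a normalized histogram over VQ-VAE codebook indices."""
    n = len(skill_histograms)
    d = [w1_discrete(skill_histograms[i], skill_histograms[j])
         for i in range(n) for j in range(i + 1, n)]
    return min(d), max(d), sum(d) / len(d)

h = np.array([[1.0, 0.0, 0.0],   # skill that stays at codebook index 0
              [0.0, 0.0, 1.0],   # skill that stays at index 2
              [0.5, 0.5, 0.0]])  # skill that splits time between indices 0 and 1
lo, hi, avg = skill_diversity(h)
assert (lo, hi) == (0.5, 2.0)    # the two point masses are two indices apart
```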
\appendix \section{Appendix} \subsection{Dataset Statistics} \begin{table}[H] \centering \begin{tabular}{ |p{0.12\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}|p{0.08\linewidth}| } \hline \multicolumn{10}{|c|}{Dataset Stats} \\ \hline Env Name & state dim & act dim & num trajectories & avg dataset reward & max dataset reward & min dataset reward & avg dataset d4rl & max dataset d4rl & min dataset d4rl \\ \hline walker2d -medium & 17 & 6 & 1190 & 2852 & 4227 & -7 & 62 & 92 & 0 \\ \hline halfcheetah -medium & 17 & 6 & 1000 & 4770 & 5309 & -310 & 41 & 45 & 0 \\ \hline ant -medium & 111 & 8 & 1202 & 3051 & 4187 & -530 & 80 & 107 & -5 \\ \hline hopper -medium & 11 & 3 & 2186 & 1422 & 3222 & 316 & 44 & 100 & 10 \\ \hline halfcheetah -medium -replay & 17 & 6 & 202 & 3093 & 4985 & -638 & 27 & 42 & -3 \\ \hline hopper-medium -replay & 11 & 3 & 1801 & 529 & 3193 & -0.5 & 17 & 99 & 1 \\ \hline antmaze-umaze & 29 & 8 & 2815 & 0.5 & 1 & 0 & 52 & 100 & 0 \\ \hline antmaze-umaze -diverse & 29 & 8 & 1011 & 0.012 & 1 & 0 & 1.2 & 100 & 0 \\ \hline antmaze-medium -diverse & 29 & 8 & 1137 & 0.125 & 1 & 0 & 12.5 & 100 & 0 \\ \hline antmaze-medium -play & 29 & 8 & 1204 & 0.2 & 1 & 0 & 20.0 & 100 & 0 \\ \hline \hline \end{tabular} \caption{\label{tab:dataset-stats} Dataset statistics.} \end{table} \subsection{Hyperparameters} \begin{table}[H] \centering \begin{tabular}{ |p{5cm}||p{5cm}|} \hline \multicolumn{2}{|c|}{Common Hyper Parameters for Causal Transformer} \\ hyperparameter & value \\ \hline \hline Number of layers & 4 \\ Number of attention heads & 4 \\ Embedding dimension & 256 \\ Context Length & 20 \\ Dropout & 0.0 \\ Batch Size & 256 \\ Updates between rollouts & 50 \\ lr & 1e-4 \\ gradient norm & 0.25 \\ \hline \end{tabular} \caption{The hyperparameters used for the causal transformer. 
\label{hyperparameters}} \end{table} \subsection{Few-shot target trajectory reconstruction} \label{sssec:few-shot} Skill DT, like other skill discovery methods, can use its state-to-skill encoder to guide its actions towards a particular goal. In this case, we are interested in recreating target trajectories as closely as possible. The detailed pseudocode and its description for few-shot skill reconstruction can be found in Section~\ref{sssec:evaluating}. \begin{figure} \caption{Target antmaze-umaze x-y trajectory} \caption{Reconstructed antmaze-umaze x-y trajectory} \caption{Target antmaze-umaze skill dist.} \caption{Reconstructed antmaze-umaze skill dist.} \caption{From the antmaze-umaze Environment: One of the longer and highest performing trajectories in the dataset is reconstructed by Skill DT. The trajectory is not quite identical to the target, but it follows a similar path, where it hugs the edges of the maze just like the target.} \label{fig:antmaze-umaze-v2} \end{figure} \begin{figure} \caption{Target Ant Skill Distribution} \caption{Reconstructed Ant Skill Distribution} \caption{From the Ant-v2 Environment: Skill distributions of a target trajectory and the reconstructed trajectory from rolling out in the environment. Because Ant-v2 is a simpler environment, we can see that the reconstructed skill distributions are very close to the target.} \label{fig:ant-skill} \end{figure} \begin{algorithm} \caption{Reconstructing target trajectories}\label{alg:reconstructing} \begin{algorithmic}[ht!] \State \textbf{Initialize} target trajectory $\tau$, skill encoder $E_\phi$, Skill DT transformer $\pi$ \State 1. $(s_0^{target}, ..., s_T^{target})$ $\gets$ $\tau$, \Comment{Extract states from target trajectory} \State 2. $(z_0, ..., z_{T})$, $(zindex_{0}, ..., zindex_{T})$ = $E_{\phi}$$(s_0^{target}, ..., s_T^{target})$, \Comment{encode states} \State 3.
$(Z_0, ..., Z_{T})$ = histogram$(zindex_{0}, ..., zindex_{T})$ \Comment{Create skill distributions by building a histogram of skill encoding indices}
\State 4. $\hat{\tau}$ $\sim$ $\pi(a|Z_0, z_0, s_0, ...)$ \Comment{Rollout in the real environment; see Section~\ref{sssec:evaluating} for details} \label{alg:reconstruct}
\end{algorithmic}
\end{algorithm}

\subsection{Trajectory Skill Visualizations} \label{sssec:trajectory-visualization}

\begin{figure}
\caption{Skills learned in the ant-medium-v2 environment. Each row corresponds to a skill.}
\label{fig:ant-skills-trajectory}
\end{figure}

\begin{figure}
\caption{Skills learned in the halfcheetah-medium-replay-v2 environment. Each row corresponds to a skill.}
\label{fig:halfcheetah-skills-trajectory}
\end{figure}

\begin{figure}
\caption{Skills learned in the hopper-medium-replay-v2 environment. Each row corresponds to a skill.}
\label{fig:hopper-skills-trajectory}
\end{figure}

\begin{figure}
\caption{Skills learned in the walker-medium-v2 environment. Each row corresponds to a skill.}
\label{fig:walker-skills-trajectory}
\end{figure}

\begin{figure}
\caption{Skills learned in the antmaze-umaze-diverse-v2 environment. Each row corresponds to a skill.}
\label{fig:ant-umaze-recording-skills-trajectory}
\end{figure}

\subsection{Evaluating Skill DT's performance} \label{sssec:evaluating}
Because Skill DT is a purely unsupervised algorithm, we evaluate its performance in an actual environment by performing a rollout for each skill and selecting the best-performing one. To do this, we first populate a buffer of skills (here denoted with $z$) and skill histograms $Z$. When we roll out in the actual environment, the causal transformer uses this buffer to make predictions; however, it updates the buffer with the skill encodings that it \textbf{actually} sees in the environment at each timestep.
This is because even though the policy is completely conditioned to follow a single skill, it may end up reaching states that are classified under another skill. Python pseudocode is shown below:
\begin{python}
def generate_histogram(one_hot_skill_ids):
    trajectory_length = len(one_hot_skill_ids)
    histogram = torch.tensor(copy(one_hot_skill_ids))
    for i in range(trajectory_length - 1, -1, -1):  # reverse order
        if i != trajectory_length - 1:
            histogram[i] = histogram[i] + histogram[i + 1]
    # normalize each row into the range [0, 1]
    return histogram / histogram.sum(-1, keepdim=True)

def evaluate_skill_dt(skill_dt, env, max_steps, context_len):
    num_skills = skill_dt.num_skills
    rewards = []
    for skill_id in range(num_skills):
        skill_ids = repeat(skill_id, max_steps)
        # create one_hot skill ids
        # ex: one_hot([1, 1], 5) = [[0, 1, 0, 0, 0], [0, 1, 0, 0, 0]]
        one_hot_skill_ids = one_hot(skill_ids, num_skills)
        state = env.reset()  # initialize state
        t = 0
        total_reward = 0
        state_buffer = zeros(max_steps)
        z_buffer = zeros(max_steps)
        while t < max_steps:
            # z is the vqvae embedding
            # skill_id is the index of the vqvae embedding in the codebook
            z, skill_id = skill_dt.encode_skill(state)
            one_hot_skill_ids[t] = one_hot(skill_id, num_skills)
            Z = skill_dt.generate_histogram(one_hot_skill_ids)  # create histograms
            state_buffer[t] = state
            z_buffer[t] = z
            if t < context_len:
                # not enough history yet: feed the (zero-padded) first window
                curr_states = state_buffer[:context_len]
                curr_z = z_buffer[:context_len]
                curr_Z = Z[:context_len]
                actions = skill_dt.causal_transformer(curr_Z, curr_z, curr_states)
                action = actions[t]
            else:
                # feed the most recent context_len timesteps
                curr_states = state_buffer[t - context_len + 1:t + 1]
                curr_z = z_buffer[t - context_len + 1:t + 1]
                curr_Z = Z[t - context_len + 1:t + 1]
                actions = skill_dt.causal_transformer(curr_Z, curr_z, curr_states)
                action = actions[-1]
            state, reward, done = env.step(action)
            total_reward += reward
            if done:
                break
            t += 1  # advance the timestep
        rewards.append(total_reward)
    return max(rewards)
\end{python}
\end{document}
Thinning (morphology)

Thinning is the transformation of a digital image into a simplified, but topologically equivalent image. It is a type of topological skeleton, but computed using mathematical morphology operators.

Example

Let $E=Z^{2}$, and consider the eight composite structuring elements, composed of:

$C_{1}=\{(0,0),(-1,-1),(0,-1),(1,-1)\}$ and $D_{1}=\{(-1,1),(0,1),(1,1)\}$,

$C_{2}=\{(-1,0),(0,0),(-1,-1),(0,-1)\}$ and $D_{2}=\{(0,1),(1,1),(1,0)\}$,

and the three rotations of each by $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$. The corresponding composite structuring elements are denoted $B_{1},\ldots ,B_{8}$.

For any i between 1 and 8, and any binary image X, define

$X\otimes B_{i}=X\setminus (X\odot B_{i})$,

where $\setminus $ denotes the set-theoretical difference and $\odot $ denotes the hit-or-miss transform.

The thinning of an image A is obtained by cyclically iterating until convergence:

$A\otimes B_{1}\otimes B_{2}\otimes \ldots \otimes B_{8}\otimes B_{1}\otimes B_{2}\otimes \ldots$.

Thickening

Thickening is the dual of thinning, used to grow selected regions of foreground pixels. In most cases in image processing, thickening is performed by thinning the background:[1]

${\text{thicken}}(X,B_{i})=X\cup (X\odot B_{i})$

where $\cup $ denotes the set-theoretical union and $\odot $ denotes the hit-or-miss transform, $B_{i}$ is the structuring element, and $X$ is the image being operated on.

References

1. Gonzalez, Rafael C.; Woods, Richard E. (2002). Digital Image Processing (2nd ed.). Upper Saddle River, N.J.: Prentice Hall. ISBN 0-201-18075-8. OCLC 48944550.
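As a concrete companion to the definitions above, here is a minimal pure-Python sketch of thinning (our own illustration, not part of the article). A binary image is represented as a set of (x, y) foreground pixels; the eight composite structuring elements are built from $C_1/D_1$ and $C_2/D_2$ by rotation, and the eight sub-iterations are cycled until convergence.

```python
def hit_or_miss(img, fg, bg):
    """Pixels of img whose neighborhood matches fg (must be foreground)
    and bg (must be background); offsets are relative to the pixel."""
    return {(x, y) for (x, y) in img
            if all((x + dx, y + dy) in img for dx, dy in fg)
            and all((x + dx, y + dy) not in img for dx, dy in bg)}

def rot90(offsets):
    """Rotate a list of offsets by 90 degrees: (dx, dy) -> (-dy, dx)."""
    return [(-dy, dx) for (dx, dy) in offsets]

def thin(img):
    """Cyclically apply X <- X minus hit_or_miss(X, B_i) for the eight
    composite structuring elements until the image stops changing."""
    c1, d1 = [(0, 0), (-1, -1), (0, -1), (1, -1)], [(-1, 1), (0, 1), (1, 1)]
    c2, d2 = [(-1, 0), (0, 0), (-1, -1), (0, -1)], [(0, 1), (1, 1), (1, 0)]
    elems = []
    for c, d in ((c1, d1), (c2, d2)):
        for _ in range(4):  # the element itself plus three 90-degree rotations
            elems.append((c, d))
            c, d = rot90(c), rot90(d)
    prev, cur = None, set(img)
    while cur != prev:
        prev = set(cur)
        for fg, bg in elems:
            cur -= hit_or_miss(cur, fg, bg)
    return cur

# Thinning a filled square leaves a thin subset of the original pixels,
# and thinning the result again changes nothing (the fixed point is reached).
square = {(x, y) for x in range(6) for y in range(6)}
skeleton = thin(square)
```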
# 1. Fundamentals of Programming Before we dive into model-driven programming, let's start with the fundamentals of programming. This section will cover the basic concepts and principles that form the foundation of programming. # 1.1. Abstraction and its Role in Programming Abstraction is a fundamental concept in programming. It allows us to simplify complex systems by focusing on the essential details and hiding unnecessary complexity. In programming, abstraction is achieved through the use of functions, classes, and modules. Abstraction helps us manage the complexity of large programs by breaking them down into smaller, manageable parts. By abstracting away the implementation details, we can focus on the high-level functionality of our programs. For example, let's say we want to create a program that calculates the average of a list of numbers. Instead of writing the entire calculation logic every time we need to calculate an average, we can create a function that takes a list of numbers as input and returns the average. ```python def calculate_average(numbers): total = sum(numbers) average = total / len(numbers) return average ``` Now, whenever we need to calculate an average, we can simply call the `calculate_average` function and pass in the list of numbers. ```python numbers = [1, 2, 3, 4, 5] result = calculate_average(numbers) print(result) ``` This abstraction allows us to reuse the average calculation logic without having to write it from scratch every time. ## Exercise Create a function called `calculate_sum` that takes a list of numbers as input and returns the sum of the numbers. Test your function with the list `[1, 2, 3, 4, 5]` and verify that it returns `15`. ### Solution ```python def calculate_sum(numbers): total = sum(numbers) return total numbers = [1, 2, 3, 4, 5] result = calculate_sum(numbers) print(result) ``` # 1.2. Logic and Problem Solving Logic and problem solving are essential skills for programmers. 
Programming involves breaking down complex problems into smaller, solvable tasks and then implementing the solutions using code. To solve a problem, we need to understand the requirements, analyze the problem, and design a solution. This often involves breaking the problem down into smaller subproblems and solving them one by one. Let's consider a simple problem: finding the maximum value in a list of numbers. One way to solve this problem is to iterate over the list and keep track of the maximum value seen so far. ```python def find_maximum(numbers): maximum = numbers[0] for number in numbers: if number > maximum: maximum = number return maximum ``` In this example, we initialize the `maximum` variable with the first number in the list. Then, we iterate over the remaining numbers and update the `maximum` variable if we find a larger number. ```python numbers = [5, 3, 9, 2, 7] result = find_maximum(numbers) print(result) ``` The output will be `9`, which is the maximum value in the list. ## Exercise Create a function called `find_minimum` that takes a list of numbers as input and returns the minimum value in the list. Test your function with the list `[5, 3, 9, 2, 7]` and verify that it returns `2`. ### Solution ```python def find_minimum(numbers): minimum = numbers[0] for number in numbers: if number < minimum: minimum = number return minimum numbers = [5, 3, 9, 2, 7] result = find_minimum(numbers) print(result) ``` # 1.3. Introduction to Algorithms An algorithm is a step-by-step procedure for solving a problem. It is a set of instructions that define how to perform a specific task or calculation. Algorithms are at the core of programming and are used to solve a wide range of problems. Understanding algorithms is essential for writing efficient and optimized code. By choosing the right algorithm, we can improve the performance and scalability of our programs. Let's consider the problem of sorting a list of numbers in ascending order. 
One commonly used algorithm for this task is the bubble sort algorithm. The bubble sort algorithm works by repeatedly swapping adjacent elements if they are in the wrong order. This process is repeated until the entire list is sorted. ```python def bubble_sort(numbers): n = len(numbers) for i in range(n): for j in range(n - i - 1): if numbers[j] > numbers[j + 1]: numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j] return numbers ``` In this example, we use nested loops to compare adjacent elements and swap them if necessary. The outer loop controls the number of passes, and the inner loop performs the comparisons. ```python numbers = [5, 3, 9, 2, 7] result = bubble_sort(numbers) print(result) ``` The output will be `[2, 3, 5, 7, 9]`, which is the sorted version of the input list. ## Exercise Create a function called `selection_sort` that takes a list of numbers as input and returns the list sorted in ascending order using the selection sort algorithm. Test your function with the list `[5, 3, 9, 2, 7]` and verify that it returns `[2, 3, 5, 7, 9]`. ### Solution ```python def selection_sort(numbers): n = len(numbers) for i in range(n): min_index = i for j in range(i + 1, n): if numbers[j] < numbers[min_index]: min_index = j numbers[i], numbers[min_index] = numbers[min_index], numbers[i] return numbers numbers = [5, 3, 9, 2, 7] result = selection_sort(numbers) print(result) ``` # 2. Data Structures Data structures are essential components of programming. They allow us to organize and store data in a way that is efficient and easy to access. There are many different types of data structures, each with its own strengths and weaknesses. In this section, we will explore some of the most common data structures and learn how to use them effectively in our programs. One of the simplest and most commonly used data structures is the array. An array is a collection of elements, each identified by an index or key. 
The elements in an array are typically of the same data type, although some programming languages allow arrays to contain elements of different types. Arrays are useful for storing and accessing a fixed number of elements. They provide fast access to individual elements, but their size is fixed at the time of creation. Let's create an array in Python: ```python numbers = [1, 2, 3, 4, 5] ``` In this example, we have created an array called `numbers` that contains five elements. We can access individual elements of the array using their index: ```python print(numbers[0]) # Output: 1 print(numbers[2]) # Output: 3 ``` We can also modify elements of the array: ```python numbers[1] = 10 print(numbers) # Output: [1, 10, 3, 4, 5] ``` Arrays are widely used in programming because of their simplicity and efficiency. However, their fixed size can be a limitation in some cases. ## Exercise Create a function called `find_max` that takes an array of numbers as input and returns the maximum value in the array. Test your function with the array `[5, 3, 9, 2, 7]` and verify that it returns `9`. ### Solution ```python def find_max(numbers): max_value = numbers[0] for number in numbers: if number > max_value: max_value = number return max_value numbers = [5, 3, 9, 2, 7] result = find_max(numbers) print(result) ``` # 2.1. Arrays and Linked Lists Arrays and linked lists are two common data structures used to store and organize data. While they both serve a similar purpose, they have some key differences in terms of how they are implemented and their performance characteristics. An array is a fixed-size data structure that stores elements of the same type in contiguous memory locations. This means that each element in the array can be accessed directly using its index. Arrays provide constant-time access to individual elements, but inserting or deleting elements can be expensive, as it may require shifting all subsequent elements. 
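The shifting cost described above can be made concrete with a short sketch (the helper `insert_with_shifts` is our own illustration, not a standard library function): it emulates an array insertion by moving every subsequent element one slot to the right and reports how many elements had to move.

```python
def insert_with_shifts(items, index, value):
    """Insert value at index, shifting later elements right; return the move count."""
    items.append(None)  # make room for one more element at the end
    shifts = 0
    for i in range(len(items) - 1, index, -1):
        items[i] = items[i - 1]  # shift one element to the right
        shifts += 1
    items[index] = value
    return shifts

numbers = [1, 2, 3, 4, 5]
print(insert_with_shifts(numbers, 0, 0))  # Output: 5 (all five elements moved)
print(numbers)                            # Output: [0, 1, 2, 3, 4, 5]
```

Inserting at the front of an n-element array moves all n elements, while inserting at the end moves none, which is why front insertions on arrays cost O(n) time.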
A linked list, on the other hand, is a dynamic data structure that consists of nodes, where each node contains a value and a reference to the next node in the list. Unlike arrays, linked lists do not require contiguous memory locations, which allows for efficient insertion and deletion of elements. However, accessing individual elements in a linked list requires traversing the list from the beginning, which can be slower than array access. Let's compare the performance of arrays and linked lists for accessing and inserting elements. Accessing an element in an array: ```python numbers = [1, 2, 3, 4, 5] print(numbers[2]) # Output: 3 ``` Accessing an element in a linked list: ```python class Node: def __init__(self, value): self.value = value self.next = None node1 = Node(1) node2 = Node(2) node3 = Node(3) node1.next = node2 node2.next = node3 current = node1 while current is not None: if current.value == 3: print(current.value) # Output: 3 break current = current.next ``` Inserting an element at the beginning of an array: ```python numbers = [1, 2, 3, 4, 5] numbers.insert(0, 0) print(numbers) # Output: [0, 1, 2, 3, 4, 5] ``` Inserting an element at the beginning of a linked list: ```python new_node = Node(0) new_node.next = node1 node1 = new_node ``` In general, arrays are more efficient for random access to elements, while linked lists are more efficient for inserting and deleting elements. ## Exercise Create a function called `find_element` that takes an array and an element as input and returns the index of the element in the array. If the element is not found, the function should return -1. Test your function with the array `[10, 20, 30, 40, 50]` and the element `30` and verify that it returns `2`. ### Solution ```python def find_element(array, element): for i in range(len(array)): if array[i] == element: return i return -1 array = [10, 20, 30, 40, 50] element = 30 result = find_element(array, element) print(result) ``` # 2.2. 
Stacks and Queues Stacks and queues are two common data structures that are used to store and retrieve elements in a specific order. While they have some similarities, they also have distinct characteristics that make them suitable for different scenarios. A stack is a last-in, first-out (LIFO) data structure, which means that the last element added to the stack is the first one to be removed. This is similar to a stack of plates, where you can only remove the top plate. Stacks are commonly used in programming for tasks such as function calls, expression evaluation, and undo operations. A queue, on the other hand, is a first-in, first-out (FIFO) data structure, which means that the first element added to the queue is the first one to be removed. This is similar to a line of people waiting for a bus, where the person who arrived first is the first one to board the bus. Queues are commonly used in programming for tasks such as job scheduling, message passing, and breadth-first search algorithms. Let's see some examples of how stacks and queues work. Stack example: ```python stack = [] stack.append(1) stack.append(2) stack.append(3) print(stack.pop()) # Output: 3 print(stack.pop()) # Output: 2 print(stack.pop()) # Output: 1 ``` Queue example: ```python from collections import deque queue = deque() queue.append(1) queue.append(2) queue.append(3) print(queue.popleft()) # Output: 1 print(queue.popleft()) # Output: 2 print(queue.popleft()) # Output: 3 ``` ## Exercise Create a function called `is_palindrome` that takes a string as input and returns True if the string is a palindrome and False otherwise. A palindrome is a word, phrase, number, or other sequence of characters that reads the same forward and backward. Test your function with the string "racecar" and verify that it returns True. 
### Solution ```python def is_palindrome(string): reversed_string = string[::-1] return string == reversed_string string = "racecar" result = is_palindrome(string) print(result) ``` # 2.3. Trees and Graphs Trees and graphs are two important data structures that are used to represent hierarchical relationships and connections between objects. While they have some similarities, they also have distinct characteristics that make them suitable for different scenarios. A tree is a hierarchical data structure that consists of nodes connected by edges. It has a root node at the top, and each node can have zero or more child nodes. The nodes in a tree are organized in a hierarchical manner, with each node having a unique path from the root node. Trees are commonly used in programming for tasks such as representing file systems, organizing data, and implementing search algorithms. A graph, on the other hand, is a collection of nodes connected by edges. Unlike a tree, a graph can have cycles and multiple edges between nodes. Graphs are used to represent relationships between objects, such as social networks, transportation networks, and computer networks. They are also used in algorithms for tasks such as finding the shortest path, detecting cycles, and clustering. Let's see some examples of how trees and graphs work. 
Tree example: ```python class TreeNode: def __init__(self, value): self.value = value self.children = [] root = TreeNode(1) child1 = TreeNode(2) child2 = TreeNode(3) child3 = TreeNode(4) root.children.append(child1) root.children.append(child2) child2.children.append(child3) ``` Graph example: ```python class Graph: def __init__(self): self.nodes = {} def add_node(self, node): self.nodes[node] = [] def add_edge(self, node1, node2): self.nodes[node1].append(node2) self.nodes[node2].append(node1) graph = Graph() graph.add_node(1) graph.add_node(2) graph.add_node(3) graph.add_edge(1, 2) graph.add_edge(2, 3) ``` ## Exercise Create a function called `is_connected` that takes a graph and two nodes as input and returns True if there is a path between the two nodes, and False otherwise. Test your function with the graph from the previous example and the nodes 1 and 3, and verify that it returns True. ### Solution ```python def is_connected(graph, node1, node2): visited = set() stack = [node1] while stack: current_node = stack.pop() if current_node == node2: return True visited.add(current_node) for neighbor in graph.nodes[current_node]: if neighbor not in visited: stack.append(neighbor) return False result = is_connected(graph, 1, 3) print(result) ``` # 3. Introduction to Models In the field of programming, models play a crucial role in representing and understanding complex systems. A model is an abstraction of a real-world system that captures its essential features and behaviors. It provides a simplified representation that allows us to analyze, design, and simulate the system. Models can be used in various domains, such as software engineering, physics, economics, and biology. They help us understand the underlying principles and relationships of a system, and enable us to make predictions and solve problems. There are different types of models, each with its own characteristics and purposes. 
Some common types of models include: - Physical models: These models represent physical objects or systems, such as a scale model of a building or a wind tunnel model of an airplane. They are often used for testing and experimentation. - Mathematical models: These models use mathematical equations and formulas to represent a system. They are used to analyze and predict the behavior of complex systems, such as weather patterns or population growth. - Conceptual models: These models represent the concepts and relationships of a system using diagrams, charts, or other visual representations. They are used to communicate ideas and facilitate understanding. - Computational models: These models use computer simulations to represent and analyze a system. They are used in fields such as computer science, engineering, and physics to study complex systems that are difficult or impossible to observe directly. Let's take a look at an example of a mathematical model. Suppose we want to model the growth of a population of bacteria over time. We can use the exponential growth model, which is given by the equation: $$P(t) = P_0 \cdot e^{rt}$$ where: - $P(t)$ is the population size at time $t$ - $P_0$ is the initial population size - $r$ is the growth rate - $e$ is the base of the natural logarithm By plugging in different values for $P_0$ and $r$, we can simulate the growth of the bacteria population over time and make predictions about its future size. ## Exercise Think of a real-world system that you are familiar with. Describe how you would represent that system using a model. What type of model would you use, and why? ### Solution One example of a real-world system is a traffic intersection. To represent this system, I would use a conceptual model in the form of a diagram. The diagram would show the different lanes, traffic lights, and the flow of vehicles. 
This type of model would help me understand the traffic patterns and identify potential issues or improvements in the intersection design. # 3.1. What are Models? Models are simplified representations of real-world systems that capture the essential features and behaviors of those systems. They allow us to understand and analyze complex systems by breaking them down into smaller, more manageable components. Models can be used in various fields, including engineering, economics, biology, and software development. They help us make predictions, solve problems, and communicate ideas. In the context of programming, models are used to represent software systems. They provide a high-level view of the system's structure, behavior, and interactions. By creating models, developers can better understand the requirements of a software project and design solutions that meet those requirements. Models can be created using various techniques and tools. Some common modeling languages include Unified Modeling Language (UML), Entity-Relationship Diagrams (ERD), and Data Flow Diagrams (DFD). These languages provide a standardized way of representing different aspects of a software system. Let's consider an example of a software system for an online shopping website. To model this system, we can use UML class diagrams to represent the different classes and their relationships. We can also use sequence diagrams to show the interactions between different components of the system, such as the user, the shopping cart, and the payment gateway. By creating these models, we can gain a better understanding of how the system should be implemented and how the different components should interact with each other. ## Exercise Think of a software system that you use regularly. Describe how you would represent that system using a model. What aspects of the system would you focus on, and why? ### Solution One example of a software system is a social media platform. 
To represent this system, I would focus on the user interface and the interactions between users. I would use UML activity diagrams to show the different activities that users can perform, such as creating a post, commenting on a post, or sending a message. This type of model would help me understand the flow of user interactions and identify any potential issues or improvements in the user experience. # 3.2. Types of Models There are various types of models that can be used in software development. Each type serves a different purpose and provides a different level of abstraction. One common type of model is a structural model, which represents the static structure of a software system. This includes the classes, objects, and their relationships. Structural models are often represented using UML class diagrams or entity-relationship diagrams. Another type of model is a behavioral model, which represents the dynamic behavior of a software system. This includes the interactions between different components and the flow of control. Behavioral models are often represented using UML sequence diagrams or state machine diagrams. There are also domain-specific models, which are tailored to a specific domain or industry. These models capture the specific concepts and rules of the domain, making it easier to design and develop software solutions for that domain. Lastly, there are physical models, which represent the physical aspects of a software system, such as the hardware components and their connections. Physical models are often used in embedded systems or systems that interact with physical devices. Let's consider an example of a structural model for a banking system. In this model, we would represent the different classes involved, such as Account, Customer, and Transaction. We would also show the relationships between these classes, such as the association between a Customer and their Account. 
For a behavioral model, we could use a sequence diagram to represent the flow of control when a customer withdraws money from their account. This diagram would show the interactions between the Customer, Account, and Transaction classes. ## Exercise Think of a software system that you are familiar with. Identify the type of model that would be most useful for understanding and designing that system. Describe the key elements that would be included in that model. ### Solution One example of a software system is an e-commerce website. A behavioral model, such as a sequence diagram, would be useful for understanding the flow of control when a user makes a purchase. This diagram would include the interactions between the user, the shopping cart, the inventory system, and the payment gateway. It would show the steps involved in adding items to the cart, checking out, and processing the payment. # 3.3. Use of Models in Programming Models play a crucial role in programming. They provide a high-level representation of a software system, making it easier to understand, design, and implement. Models can be used at different stages of the software development process, from requirements gathering to code generation. One common use of models is in requirements engineering. Models can be used to capture and analyze the requirements of a system, helping to ensure that the software meets the needs of its users. For example, a use case diagram can be used to model the interactions between actors and the system, helping to identify the different user roles and their interactions with the system. Models can also be used in system design. They can be used to represent the structure and behavior of a software system, allowing developers to visualize and communicate their design decisions. For example, a class diagram can be used to model the classes and their relationships in an object-oriented system. Models can also be used in code generation. 
Once a model has been designed and validated, it can be used to automatically generate code. This can save time and reduce the risk of introducing errors during the implementation phase. Code generation is particularly useful in model-driven development, where models are the primary artifacts and code is generated from them. In addition to these uses, models can also be used for documentation, testing, and maintenance. Models can serve as a source of documentation, providing a clear and concise representation of the software system. They can also be used to generate test cases and verify the correctness of the system. Finally, models can be used to support the maintenance of the software system, allowing developers to understand and modify the system more easily. Overall, the use of models in programming can greatly improve the efficiency and quality of the software development process. By providing a high-level representation of the system, models enable developers to better understand and communicate their design decisions, leading to more robust and maintainable software systems. Let's consider an example of how models can be used in programming. Imagine you are developing a mobile banking application. Before starting the implementation, you would first create a set of models to capture the requirements and design of the application. You might start by creating a use case diagram to model the different user roles and their interactions with the application. This diagram would help you identify the different features and functionalities that the application should support, such as logging in, checking account balance, and transferring funds. Next, you would create a class diagram to model the structure of the application. This diagram would include the different classes and their relationships, such as User, Account, and Transaction. It would also show the attributes and methods of each class, helping you define the behavior of the application. 
Once the models have been designed and validated, you can use them to generate the code for the application. This can be done using a code generation tool, which takes the models as input and generates the corresponding code in a programming language such as Java or Swift. By using models in this way, you can ensure that the application meets the requirements of its users and is implemented correctly. The models serve as a blueprint for the development process, guiding the design and implementation of the software system. ## Exercise Think of a software system that you have worked on or are familiar with. Describe how models could have been used in the development of that system. What types of models would have been useful, and how would they have been used? ### Solution One example of a software system is a hotel reservation system. In the development of this system, models could have been used to capture the requirements, design, and implementation details. For requirements engineering, a use case diagram could have been used to model the different actors and their interactions with the system. This would help identify the different user roles, such as guests, hotel staff, and administrators, and their interactions with the system, such as making a reservation, checking in, and checking out. For system design, class diagrams could have been used to model the classes and their relationships in the system. This would include classes such as Reservation, Room, and Customer, and their relationships, such as associations and inheritance. The class diagrams would help define the structure and behavior of the system. For code generation, the class diagrams could have been used to automatically generate the code for the system. This would save time and reduce the risk of introducing errors during the implementation phase. Code generation tools can take the class diagrams as input and generate the corresponding code in a programming language such as C# or Python. 
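As a toy illustration of such a generation step (the dictionary-based model format below is invented for illustration; real code generation tools typically consume UML or XMI models), here is a sketch that turns a simple class model into Python source:

```python
def generate_class(model):
    """Generate Python source for one class from a simple model dictionary."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(model["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for attr in model["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

# A minimal "model" of the Reservation class from the example above
reservation_model = {"name": "Reservation", "attributes": ["room", "guest", "nights"]}
print(generate_class(reservation_model))
# Output:
# class Reservation:
#     def __init__(self, room, guest, nights):
#         self.room = room
#         self.guest = guest
#         self.nights = nights
```

The generated source can be written to a file or executed directly, and regenerating it after a model change keeps the code in sync with the design.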
Overall, the use of models in the development of the hotel reservation system would have helped ensure that the system meets the requirements of its users and is implemented correctly. The models would have provided a clear and concise representation of the system, guiding the design and implementation process. # 4. Model-driven Design Model-driven design is an approach to software development that emphasizes the use of models to guide the design process. Models are used to represent the structure, behavior, and functionality of a software system, allowing developers to visualize and communicate their design decisions. The design process in model-driven design typically involves the following steps: 1. Requirements gathering: The first step is to gather and analyze the requirements of the software system. This involves understanding the needs of the users and stakeholders, and defining the goals and objectives of the system. 2. Model creation: Once the requirements have been defined, models are created to represent the different aspects of the system. This can include models such as use case diagrams, class diagrams, and sequence diagrams. 3. Model validation: The models are then validated to ensure that they accurately represent the requirements of the system. This involves checking for consistency, completeness, and correctness of the models. 4. Model transformation: Once the models have been validated, they can be transformed into a more detailed design. This can involve refining the models, adding more details, and specifying the implementation details. 5. Code generation: The final step is to generate the code for the software system based on the models. This can be done using code generation tools, which automatically generate the code from the models. Model-driven design offers several benefits. It allows developers to focus on the high-level design decisions, rather than getting bogged down in the details of the implementation. 
It also promotes reusability, as models can be easily modified and reused for different projects. Additionally, model-driven design improves the maintainability of the software system, as changes can be made to the models and automatically reflected in the generated code. Let's consider an example to illustrate the concept of model-driven design. Imagine you are designing a system for managing a library. The first step would be to gather the requirements of the system, such as the ability to add and remove books, search for books, and manage user accounts. Next, you would create models to represent the different aspects of the system. This could include a use case diagram to model the different user roles and their interactions with the system, a class diagram to model the classes and their relationships, and a sequence diagram to model the flow of events in the system. Once the models have been created, you would validate them to ensure that they accurately represent the requirements of the system. This could involve reviewing the models with stakeholders, checking for consistency and completeness, and making any necessary revisions. After the models have been validated, you would transform them into a more detailed design. This could involve refining the models, adding more details, and specifying the implementation details. For example, you might add attributes and methods to the classes in the class diagram, and specify the algorithms and data structures to be used. Finally, you would generate the code for the system based on the models. This could be done using code generation tools, which would automatically generate the code in a programming language such as Java or C++. The generated code would reflect the design decisions made in the models, and could be further modified and customized as needed. ## Exercise Think of a software system that you have worked on or are familiar with. 
Describe how model-driven design could have been used in the design of that system. What types of models would have been created, and how would they have guided the design process?

### Solution
One example of a software system is an e-commerce website. In the design of this system, model-driven design could have been used to guide the design process.

First, the requirements of the system would have been gathered, such as the ability to browse products, add items to a shopping cart, and place orders. This would involve understanding the needs of the users and stakeholders, and defining the goals and objectives of the system.

Next, models would have been created to represent the different aspects of the system. This could include a use case diagram to model the different user roles and their interactions with the system, a class diagram to model the classes and their relationships, and a sequence diagram to model the flow of events in the system.

Once the models have been created, they would be validated to ensure that they accurately represent the requirements of the system. This could involve reviewing the models with stakeholders, checking for consistency and completeness, and making any necessary revisions.

After the models have been validated, they would be transformed into a more detailed design. This could involve refining the models, adding more details, and specifying the implementation details. For example, attributes and methods would be added to the classes in the class diagram, and the algorithms and data structures to be used would be specified.

Finally, the code for the system would be generated based on the models. This could be done using code generation tools, which would automatically generate the code in a programming language such as Python or Ruby. The generated code would reflect the design decisions made in the models, and could be further modified and customized as needed.

# 4.1. The Design Process

The design process in model-driven design involves several steps. These steps help guide the development of a software system and ensure that the final product meets the requirements and goals of the project.

1. Requirements gathering: The first step in the design process is to gather and analyze the requirements of the software system. This involves understanding the needs of the users and stakeholders, and defining the goals and objectives of the system. This step is crucial as it sets the foundation for the design process.

2. Model creation: Once the requirements have been defined, models are created to represent the different aspects of the system. These models can include use case diagrams, class diagrams, sequence diagrams, and more. The models help visualize and communicate the design decisions and provide a blueprint for the development process.

3. Model validation: After the models have been created, they need to be validated to ensure that they accurately represent the requirements of the system. This involves checking for consistency, completeness, and correctness of the models. Validation helps identify any errors or inconsistencies in the design early on, allowing for corrections to be made before implementation.

4. Model transformation: Once the models have been validated, they can be transformed into a more detailed design. This step involves refining the models, adding more details, and specifying the implementation details. For example, attributes and methods can be added to the classes in the class diagram, and the algorithms and data structures can be specified.

5. Code generation: The final step in the design process is to generate the code for the software system based on the models. This can be done using code generation tools, which automatically generate the code from the models. The generated code reflects the design decisions made in the models and provides a starting point for the development process.
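The modeling, validation, transformation, and generation steps can be sketched end to end as a tiny pipeline. This is an illustrative toy, not a real MDD toolchain; the model format and function names are assumptions:

```python
def validate(model):
    # Model validation: every class must have a name and at least one attribute.
    for cls in model["classes"]:
        if not cls.get("name") or not cls.get("attributes"):
            raise ValueError("incomplete model: class needs a name and attributes")
    return model

def refine(model):
    # Model transformation: add an implementation detail (an id attribute).
    for cls in model["classes"]:
        cls["attributes"] = ["id"] + cls["attributes"]
    return model

def generate(model):
    # Code generation: emit a stub class per model element.
    return "\n".join(f"class {cls['name']}: pass" for cls in model["classes"])

library = {"classes": [{"name": "Book", "attributes": ["title"]}]}
code = generate(refine(validate(library)))
print(code)  # class Book: pass
```

Each stage consumes and produces a model, which is why real toolchains can insert extra transformations (optimization, platform mapping) between validation and code generation.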
By following these steps, model-driven design ensures that the software system is designed in a systematic and structured manner. It allows developers to focus on the high-level design decisions and promotes reusability and maintainability of the code.

# 4.2. Domain-specific Modeling Languages

Domain-specific modeling languages (DSMLs) are a key component of model-driven design. DSMLs are specialized languages that are designed to represent concepts and abstractions specific to a particular domain or problem space. Unlike general-purpose programming languages, DSMLs are tailored to the needs of a specific domain, making them more expressive and easier to use for modeling and designing systems within that domain.

DSMLs provide a higher level of abstraction compared to general-purpose programming languages, allowing developers to focus on the essential aspects of the system without getting bogged down in low-level implementation details. This makes DSMLs more accessible to domain experts who may not have a strong programming background.

There are several advantages to using DSMLs in model-driven design:

1. Improved productivity: DSMLs allow developers to work at a higher level of abstraction, which can significantly improve productivity. By using a DSML, developers can model and design systems more quickly and efficiently, reducing the time and effort required for development.

2. Increased maintainability: DSMLs provide a clear and concise representation of the system's design, making it easier to understand and maintain. Changes and updates can be made to the models, and the code can be regenerated, ensuring that the implementation stays in sync with the design.

3. Enhanced collaboration: DSMLs provide a common language and notation for communication between stakeholders, including developers, domain experts, and clients. This promotes better collaboration and understanding of the system's requirements and design decisions.

4. Reusability: DSMLs can be designed to capture and represent common patterns and concepts within a specific domain. This allows for the reuse of models and design artifacts across different projects, saving time and effort in the development process.

To create a DSML, designers need to define the syntax and semantics of the language. This involves specifying the vocabulary, grammar, and rules for constructing valid models in the DSML. Tools and frameworks are available to support the creation and use of DSMLs, making it easier to design, validate, and generate code from the models.

Overall, DSMLs play a crucial role in model-driven design by providing a specialized language for modeling and designing systems within a specific domain. They offer numerous benefits, including improved productivity, increased maintainability, enhanced collaboration, and reusability of design artifacts.

# 4.3. Model Transformation and Code Generation

Model transformation is a key aspect of model-driven design. It involves the conversion of models from one representation to another, typically to generate code or other artifacts from the models. Model transformation allows developers to automate the process of generating code from models, reducing the time and effort required for implementation.

There are several techniques and tools available for model transformation. One common approach is to use a transformation language, such as the Query/View/Transformation (QVT) language, to define the transformation rules. These rules specify how elements in the source model are mapped to elements in the target model. The transformation language provides a way to express complex mappings and transformations, allowing for the generation of high-quality code from the models.

Code generation is a specific type of model transformation that focuses on generating executable code from models. Code generators take a model as input and produce code in a specific programming language, such as Java or C++.
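In the spirit of a QVT mapping, a model-to-model transformation can be sketched in plain Python: each class in a source object model maps to a table in a target relational model, and a second step generates SQL from the target model. The rule and the model shapes are illustrative assumptions, not real QVT syntax:

```python
def class_to_table(cls):
    """Transformation rule: Class -> Table, Attribute -> Column."""
    return {
        "table": cls["name"].lower() + "s",
        "columns": [{"name": a, "type": "TEXT"} for a in cls["attributes"]],
    }

source_model = [{"name": "Customer", "attributes": ["name", "email"]}]
target_model = [class_to_table(c) for c in source_model]

def table_to_sql(table):
    """Code generation: turn a target-model table into a DDL statement."""
    cols = ", ".join(f"{c['name']} {c['type']}" for c in table["columns"])
    return f"CREATE TABLE {table['table']} ({cols});"

ddl = [table_to_sql(t) for t in target_model]
print(ddl[0])  # CREATE TABLE customers (name TEXT, email TEXT);
```

The two stages mirror the distinction drawn above: the first is a model-to-model transformation, the second a model-to-text (code) generation.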
The generated code reflects the structure and behavior defined in the model, allowing developers to quickly and accurately implement the system.

Model transformation and code generation offer several benefits in model-driven design:

1. Consistency: By generating code from models, developers can ensure that the implementation accurately reflects the design. This reduces the risk of introducing errors or inconsistencies during manual coding.

2. Productivity: Model transformation and code generation automate the process of generating code, saving developers time and effort. This allows them to focus on higher-level design and problem-solving tasks.

3. Maintainability: Since the code is generated from models, any changes or updates made to the models can be easily reflected in the code. This ensures that the implementation stays in sync with the design, improving maintainability.

4. Reusability: Models can be reused across different projects, allowing for the generation of code for similar systems. This promotes code reuse and reduces duplication of effort.

To perform model transformation and code generation, developers need to have a good understanding of the models and the transformation rules. They also need to use appropriate tools and frameworks that support model transformation and code generation.

Overall, model transformation and code generation are essential techniques in model-driven design. They enable developers to automate the process of generating code from models, improving consistency, productivity, maintainability, and reusability in software development.

# 5. Model-driven Testing

Testing is an important part of the software development process. It helps ensure that the system functions correctly and meets the requirements. Model-driven testing is a testing approach that leverages models to generate test cases and automate the testing process.

There are several types of testing that can be performed in model-driven development:

1. Unit Testing: This type of testing focuses on testing individual units or components of the system. In model-driven development, unit tests can be generated from the models to verify the behavior of individual components.

2. Integration Testing: Integration testing is used to test the interaction between different components or modules of the system. Models can be used to generate test cases that cover different integration scenarios.

3. System Testing: System testing is performed to test the entire system as a whole. Models can be used to generate test cases that cover different system-level scenarios and validate the system against the requirements.

Model-driven testing follows a test-driven development (TDD) approach, where tests are written before the code is implemented. This helps ensure that the code meets the specified requirements and that any changes to the code are properly tested.

To perform model-driven testing, developers need to have a good understanding of the models and the testing requirements. They also need to use appropriate tools and frameworks that support model-driven testing.

Model-driven testing offers several benefits in software development:

1. Automation: By using models to generate test cases, the testing process can be automated, saving time and effort.

2. Coverage: Models can be used to generate test cases that cover different scenarios, ensuring comprehensive test coverage.

3. Consistency: Model-driven testing helps ensure that the tests are consistent with the design and requirements, reducing the risk of introducing errors or inconsistencies.

4. Traceability: Since the tests are generated from the models, there is a clear traceability between the tests and the design, making it easier to track and manage the testing process.

Overall, model-driven testing is a valuable approach in software development.
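As a toy illustration of the automation benefit, test cases can be generated from a small behavioral model instead of being written one by one. The model format and the `generate_tests` helper are assumptions for the sake of the sketch:

```python
# A behavioral model: the operation under test plus expected input/output pairs.
model = {
    "operation": lambda price, qty: price * qty,
    "cases": [
        {"inputs": (10, 0), "expected": 0},
        {"inputs": (10, 3), "expected": 30},
    ],
}

def generate_tests(model):
    """Derive one executable test function per case in the model."""
    tests = []
    for case in model["cases"]:
        def test(case=case):  # bind the current case via a default argument
            assert model["operation"](*case["inputs"]) == case["expected"]
        tests.append(test)
    return tests

for test in generate_tests(model):
    test()  # every generated test passes
```

Here extending coverage means adding data to the model rather than writing new test code, which is what makes model-driven testing attractive at scale.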
It helps improve the efficiency and effectiveness of the testing process, ensuring that the system functions correctly and meets the requirements.

# 5.1. Types of Testing

In model-driven testing, there are several types of testing that can be performed. Each type focuses on a specific aspect of the system and helps ensure its quality and reliability. Let's explore some of the common types of testing in model-driven development:

1. Functional Testing: This type of testing focuses on testing the functionality of the system. It verifies that the system behaves as expected and meets the functional requirements. Functional testing can be performed at different levels, including unit testing, integration testing, and system testing.

2. Performance Testing: Performance testing is used to evaluate the performance of the system under different load conditions. It helps identify any performance bottlenecks and ensures that the system can handle the expected workload. Performance testing can include load testing, stress testing, and scalability testing.

3. Security Testing: Security testing is performed to identify any vulnerabilities or weaknesses in the system's security measures. It helps ensure that the system is protected against unauthorized access, data breaches, and other security threats. Security testing can include penetration testing, vulnerability scanning, and security code review.

4. Usability Testing: Usability testing focuses on evaluating the user-friendliness of the system. It involves testing the system's interface, navigation, and overall user experience. Usability testing helps identify any usability issues and ensures that the system is easy to use and understand for its intended users.

5. Regression Testing: Regression testing is performed to ensure that changes or updates to the system do not introduce new defects or break existing functionality. It involves retesting the system after modifications to ensure that everything still works as expected. Regression testing can be automated using models to generate test cases and ensure thorough coverage.

These are just a few examples of the types of testing that can be performed in model-driven development. The specific types of testing will depend on the nature of the system and its requirements. It's important to choose the appropriate types of testing and tailor them to the specific needs of the project.

## Exercise
Match the type of testing with its description:

1. _______ focuses on testing the functionality of the system and ensures that it meets the functional requirements.
2. _______ evaluates the performance of the system under different load conditions.
3. _______ identifies any vulnerabilities or weaknesses in the system's security measures.
4. _______ evaluates the user-friendliness of the system and ensures that it is easy to use and understand.
5. _______ ensures that changes or updates to the system do not introduce new defects or break existing functionality.

Options:
a. Functional Testing
b. Performance Testing
c. Security Testing
d. Usability Testing
e. Regression Testing

### Solution
1. a. Functional Testing
2. b. Performance Testing
3. c. Security Testing
4. d. Usability Testing
5. e. Regression Testing

# 5.2. Test-Driven Development (TDD)

Test-Driven Development (TDD) is a software development approach that emphasizes writing tests before writing the actual code. It follows a specific cycle of writing a test, writing the code to pass the test, and then refactoring the code.

The TDD cycle typically consists of the following steps:

1. Write a Test: In TDD, you start by writing a test that defines the desired behavior of a small piece of code. The test should be simple and focused on a specific functionality.

2. Run the Test: After writing the test, you run it to see if it fails. Since you haven't written the code yet, the test should fail.

3. Write the Code: Now, you write the code that will make the test pass.
The code should be minimal and focused on passing the test.

4. Run the Test Again: After writing the code, you run the test again to check if it passes. If it does, you can move on to the next test. If it fails, you need to modify the code until the test passes.

5. Refactor the Code: Once the test passes, you can refactor the code to improve its design, readability, and performance. Refactoring is an important step in TDD to ensure that the code remains clean and maintainable.

6. Repeat the Cycle: After refactoring, you can repeat the cycle by writing another test for the next piece of functionality and continue the process until all the desired functionality is implemented.

TDD has several benefits, including improved code quality, faster development cycles, and increased test coverage. By writing tests first, you can ensure that your code is focused on meeting the requirements and that any changes or updates to the code are quickly validated.

Let's say you are developing a simple calculator application. You start by writing a test for the addition functionality:

```python
def test_addition():
    result = add(2, 3)
    assert result == 5
```

After running the test, you see that it fails because the `add` function is not implemented yet. You then write the code for the `add` function:

```python
def add(a, b):
    return a + b
```

Running the test again, you see that it passes. You can now move on to the next test, such as testing the subtraction functionality.

## Exercise
Using the TDD approach, write a test and implement the code for the following functionality:

1. Test: Write a test to check if the `multiply` function correctly multiplies two numbers.
2. Code: Implement the `multiply` function to pass the test.

### Solution
Test:
```python
def test_multiplication():
    result = multiply(4, 5)
    assert result == 20
```

Code:
```python
def multiply(a, b):
    return a * b
```

# 5.3. Model-based Testing

Model-based testing is an approach to software testing that uses models to represent the behavior and functionality of a system. These models are then used to generate test cases and automate the testing process.

The key idea behind model-based testing is to create a model that accurately represents the system under test. This model can be created using various modeling languages and techniques, such as UML, state machines, or data flow diagrams. The model should capture the important aspects of the system's behavior and functionality.

Once the model is created, it can be used to generate test cases automatically. These test cases are derived from the model and cover various scenarios and paths through the system. By generating test cases from the model, you can ensure that the tests are comprehensive and cover a wide range of possible inputs and behaviors.

Model-based testing offers several advantages over traditional testing approaches. It can help improve test coverage, reduce the effort required for test case design, and enable early detection of defects. It also allows for easier maintenance of test cases, as changes to the system can be reflected in the model and automatically updated in the test cases.

However, model-based testing also has some challenges. Creating an accurate and complete model can be time-consuming and requires expertise in modeling techniques. The model may also need to be updated as the system evolves, which can introduce additional effort. Additionally, model-based testing may not be suitable for all types of systems or testing scenarios.

Despite these challenges, model-based testing is gaining popularity in industry due to its potential to improve the efficiency and effectiveness of software testing. It is particularly useful in complex systems where manual test case design and execution can be time-consuming and error-prone.

Let's consider an example of model-based testing for a login system.
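One way to capture such a state-machine model and derive test sequences from it is a transition table. The state names, event names, and the enumeration helper below are illustrative assumptions:

```python
# Login process as a state machine: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "enter_username"): "username_entered",
    ("username_entered", "enter_password"): "password_entered",
    ("password_entered", "submit_valid"): "authenticated",
    ("password_entered", "submit_invalid"): "failed",
}

def generate_test_paths(start="idle", accepting=("authenticated", "failed")):
    """Depth-first enumeration of event sequences ending in a terminal state."""
    paths = []
    def walk(state, events):
        if state in accepting:
            paths.append(events)
            return
        for (src, event), dst in TRANSITIONS.items():
            if src == state:
                walk(dst, events + [event])
    walk(start, [])
    return paths

for path in generate_test_paths():
    print(path)
```

Each generated path is an event sequence a test harness could replay against the real login implementation, giving one concrete test case per path through the model.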
The model for the login system can be represented using a state machine, where the states represent different stages of the login process (e.g., idle, entering username, entering password, authentication success, authentication failure).

Based on this model, test cases can be generated automatically to cover various scenarios, such as valid username and password, invalid username, invalid password, and so on. These test cases can then be executed to verify the behavior of the login system.

By using a model-based approach, you can ensure that all possible paths through the login system are covered and that potential issues or defects are identified early in the development process.

## Exercise
Consider a model-based testing scenario for an online shopping system. Identify three important functionalities of the system and describe how they can be represented in a model.

### Solution
Three important functionalities of the online shopping system:

1. Product search: This functionality allows users to search for products based on keywords, categories, or other criteria. It can be represented in a model using a data flow diagram, where the inputs are the search criteria and the output is the list of matching products.

2. Add to cart: This functionality allows users to add products to their shopping cart. It can be represented in a model using a state machine, where the states represent different stages of the shopping cart (e.g., empty, items added, checkout).

3. Checkout process: This functionality allows users to complete their purchase by entering their payment and shipping information. It can be represented in a model using a sequence diagram, where the interactions between the user, the system, and external services are captured.

# 6. Model-driven Development Tools

Model-driven development (MDD) is a software development approach that emphasizes the use of models to design, implement, and maintain software systems.
MDD tools are software tools that support the creation, manipulation, and analysis of models in the context of MDD. There are several types of MDD tools available, each with its own set of features and capabilities. These tools can be categorized into three main categories: integrated development environments (IDEs), model management tools, and version control systems. IDEs are software tools that provide a complete development environment for creating, editing, and executing models. They often include features such as syntax highlighting, code completion, and debugging capabilities. IDEs are typically tailored to specific modeling languages and provide a user-friendly interface for working with models. Model management tools are software tools that focus on the manipulation and analysis of models. They provide functionality for creating, modifying, and querying models, as well as for performing various model transformations and analyses. Model management tools often support multiple modeling languages and provide a range of advanced features for working with models. Version control systems are software tools that enable the management of changes to models and other artifacts in a collaborative development environment. They allow multiple developers to work on the same models simultaneously, track changes made to models over time, and merge changes made by different developers. Version control systems also provide mechanisms for resolving conflicts that may arise when multiple developers make conflicting changes to the same models. An example of an MDD tool is Eclipse Modeling Framework (EMF), which is an IDE for creating and manipulating models using the Eclipse platform. EMF provides a set of tools and frameworks for defining, generating, and manipulating models based on structured data. 
It supports various modeling languages, such as UML, Ecore, and XML Schema, and provides a range of features for working with models, including code generation, model validation, and model transformation. Another example of an MDD tool is Papyrus, which is an open-source model-based engineering tool that provides an integrated development environment for creating, editing, and executing models. Papyrus supports various modeling languages, such as UML, SysML, and MARTE, and provides a rich set of features for working with models, including model simulation, model checking, and model transformation. ## Exercise Research and identify one MDD tool that you find interesting. Describe its key features and capabilities, and explain how it can be used in the context of MDD. ### Solution One MDD tool that I find interesting is Enterprise Architect, which is a comprehensive modeling and design tool that supports a wide range of modeling languages and notations, including UML, BPMN, SysML, and ArchiMate. Enterprise Architect provides a rich set of features and capabilities for creating, editing, and analyzing models. It supports model-driven development by allowing users to define and enforce modeling standards and guidelines, generate code from models, and perform model validation and verification. It also provides advanced features for model simulation, model transformation, and model-based testing. In the context of MDD, Enterprise Architect can be used to create and manage models throughout the software development lifecycle. It allows users to capture requirements, design system architectures, and specify detailed designs using various modeling languages. It also supports collaborative development by enabling multiple users to work on the same models simultaneously and providing mechanisms for version control and change management. 
Overall, Enterprise Architect is a powerful MDD tool that can greatly enhance the productivity and efficiency of software development teams by providing a unified and integrated environment for creating, managing, and analyzing models. # 6.1. Integrated Development Environments (IDEs) Integrated Development Environments (IDEs) are software tools that provide a complete development environment for creating, editing, and executing models in the context of model-driven development (MDD). IDEs are specifically tailored to support the creation and manipulation of models, making them essential tools for MDD practitioners. IDEs offer a range of features and capabilities that facilitate the development process. These include syntax highlighting, code completion, and debugging capabilities, which help developers write and edit models more efficiently. IDEs also provide tools for model validation, ensuring that models adhere to the specified rules and constraints. One popular IDE for MDD is Eclipse Modeling Framework (EMF). EMF is an open-source framework that provides a comprehensive set of tools and frameworks for defining, generating, and manipulating models based on structured data. It supports various modeling languages, such as UML, Ecore, and XML Schema, and provides a user-friendly interface for working with models. EMF offers a range of features that enhance the modeling experience. It includes a powerful code generation facility that automatically generates code from models, saving developers time and effort. EMF also provides support for model validation, allowing developers to check the consistency and correctness of their models. For example, let's say you are developing a software system using UML as your modeling language. With EMF, you can create UML models using a graphical editor, which provides a visual representation of the system's structure and behavior. 
You can define classes, relationships, and constraints, and EMF will generate the corresponding code for you. You can also use EMF to perform model validation, ensuring that your UML models adhere to the UML specification. EMF will check for errors, such as missing or incorrect relationships, and provide feedback to help you correct them. ## Exercise Research and identify one IDE for MDD that you find interesting. Describe its key features and capabilities, and explain how it can be used in the context of MDD. ### Solution One IDE for MDD that I find interesting is IBM Rational Software Architect (RSA). RSA is a comprehensive modeling and design tool that supports a wide range of modeling languages, including UML, BPMN, and SysML. RSA provides a rich set of features and capabilities for creating, editing, and analyzing models. It includes a graphical editor that allows users to create and manipulate models using a visual interface. RSA also offers advanced modeling capabilities, such as model simulation and model transformation, which help users analyze and transform their models. In the context of MDD, RSA can be used to create and manage models throughout the software development lifecycle. It supports the generation of code from models, allowing users to automatically generate code based on their models. RSA also provides features for model validation and verification, helping users ensure the correctness and consistency of their models. Overall, RSA is a powerful IDE for MDD that can greatly enhance the productivity and efficiency of software development teams. Its rich set of features and support for multiple modeling languages make it a valuable tool for MDD practitioners. # 6.2. Model Management Tools Model management tools are software tools that enable the manipulation and transformation of models in the context of model-driven development (MDD). 
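To give a flavor of what such tools automate, here is a hand-rolled consistency check of the kind a model management tool runs: it flags associations that reference classes not defined in the model. The model format is an assumption for illustration:

```python
# A tiny class model: declared classes plus associations between them.
model = {
    "classes": ["Reservation", "Room"],
    "associations": [("Reservation", "Room"), ("Reservation", "Customer")],
}

def dangling_references(model):
    """Return associations that mention a class missing from the model."""
    defined = set(model["classes"])
    return [assoc for assoc in model["associations"]
            if not set(assoc) <= defined]

print(dangling_references(model))  # [('Reservation', 'Customer')]
```

A real tool would run dozens of such well-formedness rules (type conformance, multiplicity checks, naming constraints) and report violations in the editor.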
These tools provide a range of functionalities for managing and organizing models, making them essential for MDD practitioners. Model management tools offer features such as model editing, model transformation, and model validation. They allow developers to create, modify, and analyze models, ensuring their correctness and consistency. These tools also support the transformation of models from one representation to another, enabling the generation of code or other artifacts from models. One popular model management tool is the Eclipse Modeling Framework (EMF). EMF provides a comprehensive set of tools and frameworks for defining, generating, and manipulating models based on structured data. It includes a model editor that allows developers to create and modify models using a graphical interface. EMF also provides a model transformation engine, which enables the transformation of models from one representation to another. Another widely used model management tool is the Modelio tool. Modelio is an open-source modeling environment that supports various modeling languages, such as UML, BPMN, and SysML. It offers a range of features for model editing, transformation, and validation. Modelio also provides support for model versioning and configuration management, allowing developers to track changes to models and collaborate on their development. For example, let's say you are developing a software system using UML as your modeling language. With a model management tool like EMF or Modelio, you can create UML models using a graphical editor. You can define classes, relationships, and constraints, and the tool will validate the models to ensure their correctness. You can also use the model management tool to transform the UML models into code or other artifacts. For example, you can generate Java code from the UML models, saving you time and effort in writing the code manually. 
The tool will handle the transformation process, ensuring that the generated code is consistent with the models. ## Exercise Research and identify one model management tool that you find interesting. Describe its key features and capabilities, and explain how it can be used in the context of MDD. ### Solution One model management tool that I find interesting is Papyrus. Papyrus is an open-source modeling environment that is based on the Eclipse platform. It supports various modeling languages, such as UML, SysML, and MARTE. Papyrus provides a range of features for model editing, transformation, and validation. It includes a graphical editor that allows users to create and modify models using a visual interface. Papyrus also provides support for model transformation, allowing users to transform models from one representation to another. Additionally, Papyrus offers model validation capabilities, ensuring the correctness and consistency of models. In the context of MDD, Papyrus can be used to create and manage models throughout the software development lifecycle. It supports the generation of code from models, enabling users to automatically generate code based on their models. Papyrus also provides features for model simulation and analysis, helping users analyze the behavior and performance of their models. Overall, Papyrus is a powerful model management tool that can greatly enhance the productivity and efficiency of MDD practitioners. Its rich set of features and support for multiple modeling languages make it a valuable tool for model-driven development. # 6.3. Version Control Systems Version control systems (VCS) are software tools that enable developers to track changes to their code and other project files over time. They provide a way to manage different versions of files, allowing developers to collaborate on a project and easily revert to previous versions if needed. 
VCS are essential for model-driven development (MDD) as they enable developers to track changes to models and other artifacts. They help ensure the integrity and consistency of models and allow for easy collaboration among team members. One popular version control system is Git. Git is a distributed version control system that allows multiple developers to work on a project simultaneously. It tracks changes to files and allows developers to create branches to work on different features or bug fixes. Git also provides features for merging changes from different branches and resolving conflicts. Another widely used version control system is Subversion (SVN). SVN is a centralized version control system that stores files and their history on a central server. It allows developers to check out files from the server, make changes, and commit them back to the server. SVN also supports branching and merging, although it is not as flexible as Git in this regard. For example, let's say you are working on a model-driven development project with a team of developers. You are using a model management tool like Papyrus to create and modify models. With a version control system like Git, you can track changes to the models and collaborate with your team. You can create a Git repository for your project and clone it to your local machine. You can then use Papyrus to create and modify models, and Git will track the changes you make. You can commit your changes to the repository, creating a new version of the models. If you make a mistake or want to revert to a previous version, you can easily do so with Git. Git also allows you to create branches to work on different features or bug fixes. Each branch can have its own set of changes, and you can merge the changes from one branch to another. This enables parallel development and makes it easier to manage complex projects. ## Exercise Research and identify one version control system that you find interesting. 
Describe its key features and capabilities, and explain how it can be used in the context of MDD. ### Solution One version control system that I find interesting is Mercurial. Mercurial is a distributed version control system that is designed to be easy to use and scalable. It provides a simple and intuitive command-line interface, making it accessible to developers of all skill levels. Mercurial allows developers to track changes to their code and other project files. It supports branching and merging, enabling parallel development and easy collaboration among team members. Mercurial also provides features for managing large projects, such as support for large binary files and efficient handling of large repositories. In the context of MDD, Mercurial can be used to track changes to models and other artifacts. It allows developers to create branches to work on different features or bug fixes, and merge the changes back to the main branch. Mercurial also provides features for resolving conflicts and managing complex project structures. Overall, Mercurial is a powerful version control system that can greatly enhance the productivity and efficiency of MDD practitioners. Its simplicity and scalability make it a valuable tool for managing and tracking changes to models and other artifacts. # 7. Case Studies # 7.1. Modeling Language Case Studies One case study focuses on the development of a modeling language for automotive algorithms. The goal was to separate the concerns of algorithm engineers and electrical engineers, and provide a formalized way to transfer models to code. By creating a domain-specific modeling language (DSL) and using model-based techniques, the company was able to improve the efficiency and effectiveness of their software development process. Another case study explores the use of modeling languages in the printer industry. The organization faced challenges in software production and sought to overcome them by adopting model-driven development. 
By introducing modeling languages and model-based techniques, they were able to improve productivity and reduce the time and effort required for software development. In the automotive case study, the company developed a DSL that allowed algorithm engineers to define models and methods, while electrical engineers defined the functionality of the vehicle. This separation of concerns and the use of a DSL enabled planned reuse and improved the scalability of the software development process. In the printer industry case study, the organization introduced model-driven development to overcome software production bottlenecks. By using modeling languages and model-based techniques, they were able to adapt to external pressures and improve their software development cycle. ## Exercise Research and identify one case study that demonstrates the use of modeling languages in a specific domain. Summarize the key findings and explain how the use of modeling languages benefited the organization. ### Solution One case study that demonstrates the use of modeling languages is the development of a DSL for financial modeling in the banking industry. The organization wanted to improve the accuracy and efficiency of financial modeling, which was traditionally done using spreadsheets. By creating a DSL specifically designed for financial modeling, the organization was able to capture complex financial concepts and calculations in a structured and standardized way. This reduced the risk of errors and improved the accuracy of financial models. The DSL also provided a higher level of abstraction, allowing financial analysts to focus on the logic and concepts of the models rather than the implementation details. Overall, the use of modeling languages in this case study improved the efficiency and effectiveness of financial modeling in the banking industry. It reduced the time and effort required to create and maintain financial models, and improved the accuracy and reliability of the models. 
# 7.2. Real-world Applications of Model-driven Programming One example of a real-world application of model-driven programming is in the aerospace industry. Aerospace companies use models to design and simulate complex systems, such as aircraft and spacecraft. By using modeling languages and model-based techniques, engineers can create accurate and detailed models that capture the behavior and functionality of these systems. These models can then be used to analyze and optimize the design, verify system requirements, and generate code for implementation. Another example is in the healthcare industry. Model-driven programming has been used to develop software systems for medical devices and electronic health records. By using modeling languages and model-based techniques, developers can create models that represent the behavior and functionality of these systems. These models can then be used to generate code, perform simulations, and validate system requirements. In the aerospace industry, companies like Boeing and Airbus use model-driven programming to design and simulate aircraft. They create models that capture the structural, aerodynamic, and mechanical properties of the aircraft, as well as the behavior of the systems and components. These models are then used to analyze and optimize the design, verify system requirements, and generate code for implementation. In the healthcare industry, companies like Philips and Siemens use model-driven programming to develop medical devices and electronic health records. They create models that represent the behavior and functionality of these systems, including the interaction with patients and healthcare providers. These models are then used to generate code, perform simulations, and validate system requirements. ## Exercise Research and identify one real-world application of model-driven programming in a specific industry or domain. 
Summarize the key findings and explain how model-driven programming has been used to address specific challenges in that industry or domain. ### Solution One real-world application of model-driven programming is in the automotive industry. Car manufacturers like BMW and Volkswagen use model-driven programming to design and develop software systems for their vehicles. They create models that represent the behavior and functionality of the vehicle's electrical and electronic systems, including the engine control unit, infotainment system, and driver assistance systems. These models are then used to generate code, perform simulations, and validate system requirements. Model-driven programming has been used in the automotive industry to address challenges such as the increasing complexity of software systems in vehicles, the need for faster development cycles, and the requirement for higher quality and reliability. By using modeling languages and model-based techniques, car manufacturers can improve the efficiency and effectiveness of their software development processes, reduce development time and costs, and ensure the safety and reliability of their vehicles. # 7.3. Success Stories and Lessons Learned One success story is the use of model-driven programming in the automotive industry. Companies like BMW and Volkswagen have reported significant improvements in their software development processes and the quality of their software systems after adopting model-driven programming. By using modeling languages and model-based techniques, these companies have been able to reduce development time, improve code quality, and increase the reliability of their vehicles. Another success story is the use of model-driven programming in the financial industry. Companies like JPMorgan Chase and Goldman Sachs have used model-driven programming to develop and maintain complex financial models and trading systems. 
By using modeling languages and model-based techniques, these companies have been able to improve the accuracy and efficiency of their models, reduce the risk of errors, and enhance their decision-making processes. In the automotive industry, BMW implemented model-driven programming to develop the software systems for their vehicles. By using modeling languages like UML and model-based techniques, BMW was able to create accurate and detailed models that captured the behavior and functionality of the vehicle's electrical and electronic systems. These models were then used to generate code, perform simulations, and validate system requirements. As a result, BMW reported a significant reduction in development time, improved code quality, and increased reliability of their vehicles. The use of model-driven programming allowed BMW to streamline their software development processes, reduce the number of defects in their software systems, and deliver high-quality products to their customers. ## Exercise Research and identify one success story of an organization that has implemented model-driven programming. Summarize the key findings and explain how model-driven programming has contributed to the success of that organization. ### Solution One success story of an organization that has implemented model-driven programming is Siemens. Siemens used model-driven programming to develop software systems for their medical devices. By using modeling languages and model-based techniques, Siemens was able to create accurate and detailed models that represented the behavior and functionality of their medical devices. This allowed Siemens to improve the efficiency and effectiveness of their software development processes, reduce development time and costs, and ensure the safety and reliability of their medical devices. 
The use of model-driven programming also enabled Siemens to easily update and modify their software systems as new requirements and regulations emerged in the healthcare industry. Overall, model-driven programming has contributed to the success of Siemens by allowing them to deliver high-quality and innovative medical devices to their customers, improve patient care, and stay ahead of the competition in the healthcare industry. # 8. Model-driven Programming in Industry 8.1. Incorporating Model-driven Programming in Software Development Processes Model-driven programming has revolutionized the software development process in many organizations. By using modeling languages and model-based techniques, developers can create high-level models that capture the behavior and functionality of the software system. These models serve as a blueprint for generating code, performing simulations, and validating system requirements. However, incorporating model-driven programming into existing software development processes can be challenging. It requires a shift in mindset and a reevaluation of traditional development practices. Organizations need to train their developers in modeling languages and model-based techniques, and provide them with the necessary tools and resources to effectively use these techniques. One organization that successfully incorporated model-driven programming in their software development process is Microsoft. Microsoft adopted the Model-Driven Development (MDD) approach, which emphasizes the use of models throughout the software development lifecycle. They developed their own modeling language, called the Domain-Specific Language (DSL), which allows developers to create domain-specific models that capture the unique requirements of their software systems. 
By using DSL, Microsoft was able to improve the productivity of their developers, reduce the number of defects in their software systems, and enhance the maintainability and reusability of their codebase. The use of model-driven programming also enabled Microsoft to quickly adapt to changing customer requirements and deliver high-quality software products. ## Exercise Identify one challenge that organizations may face when incorporating model-driven programming in their software development processes. Explain why this challenge is significant and propose a solution to address it. ### Solution One challenge that organizations may face when incorporating model-driven programming in their software development processes is the resistance to change from developers. Many developers are accustomed to traditional coding practices and may be reluctant to adopt new modeling languages and techniques. This challenge is significant because it can hinder the successful implementation of model-driven programming and limit the benefits that can be achieved. Without the full participation and support of developers, organizations may struggle to effectively use modeling languages and model-based techniques, resulting in suboptimal software development processes. To address this challenge, organizations can provide comprehensive training and education programs to familiarize developers with modeling languages and model-based techniques. They can also create a supportive and collaborative environment that encourages developers to explore and experiment with model-driven programming. Additionally, organizations can showcase success stories and real-world examples of the benefits of model-driven programming to motivate and inspire developers to embrace this approach. # 8.2. Challenges and Solutions One challenge is the complexity of modeling languages and tools. Model-driven programming relies on specialized modeling languages and tools that can be difficult to learn and use. 
Developers need to become proficient in these languages and tools to effectively create and manipulate models. To address this challenge, organizations can provide comprehensive training and support to developers. They can offer workshops, tutorials, and documentation to help developers learn the modeling languages and tools. Additionally, organizations can establish a community of practice where developers can share their experiences, ask questions, and learn from each other. Another challenge is the integration of models with existing software development processes and tools. Many organizations have established software development processes and tools that are not designed to accommodate model-driven programming. Integrating models with these existing processes and tools can be challenging and time-consuming. One solution is to gradually introduce model-driven programming into existing processes and tools. Organizations can start by using models in specific parts of the development process and gradually expand their usage. They can also develop custom plugins or extensions for their existing tools to support model-driven programming. ## Exercise Identify one challenge that organizations may face when implementing model-driven programming and propose a solution to address it. ### Solution One challenge that organizations may face when implementing model-driven programming is the lack of expertise in modeling languages and tools. Many developers may not have experience or knowledge in using these specialized languages and tools, which can hinder the adoption of model-driven programming. To address this challenge, organizations can provide training and education programs to familiarize developers with modeling languages and tools. They can offer workshops, online courses, and mentoring programs to help developers learn and practice using these languages and tools. 
Additionally, organizations can encourage developers to collaborate and share their knowledge and experiences, creating a supportive and learning-oriented environment. # 8.3. Future Trends and Possibilities One trend is the integration of model-driven programming with artificial intelligence (AI) and machine learning (ML) technologies. By combining model-driven programming with AI and ML, organizations can develop intelligent systems that can automatically generate models, analyze data, and make informed decisions. This can significantly improve the efficiency and effectiveness of software development processes. Another trend is the use of model-driven programming in the Internet of Things (IoT) domain. As IoT devices become more pervasive, organizations need efficient and scalable approaches to develop and manage the software systems that power these devices. Model-driven programming can provide a systematic and structured approach to IoT software development, enabling organizations to quickly develop and deploy IoT applications. One example of a future possibility is the use of model-driven programming in the healthcare industry. Healthcare organizations are increasingly relying on software systems to manage patient data, monitor health conditions, and deliver personalized care. By using model-driven programming, organizations can develop robust and secure software systems that comply with regulatory requirements and ensure patient privacy. ## Exercise Identify one future trend or possibility in the field of model-driven programming. Explain why this trend or possibility is significant and how it can benefit organizations. ### Solution One future trend in the field of model-driven programming is the use of generative modeling techniques. Generative modeling techniques involve automatically generating models from high-level specifications or requirements. 
This can significantly reduce the manual effort required to create models and improve the productivity of developers. This trend is significant because it can address the complexity and time-consuming nature of model creation. By automating the model generation process, organizations can accelerate the software development lifecycle, reduce errors, and improve the quality of their software systems. Additionally, generative modeling techniques can enable organizations to quickly adapt to changing requirements and deliver software products faster. # 8.3. Future Trends and Possibilities One trend is the integration of model-driven programming with cloud computing technologies. Cloud computing offers organizations the ability to access and utilize computing resources on-demand, which can be highly beneficial for model-driven programming. By leveraging the scalability and flexibility of cloud computing, organizations can easily deploy and manage their models, collaborate with team members, and scale their model-driven applications as needed. Another trend is the use of model-driven programming in the field of cybersecurity. As cyber threats continue to evolve and become more sophisticated, organizations need robust and secure software systems to protect their data and infrastructure. Model-driven programming can provide a systematic and structured approach to developing secure software systems, enabling organizations to identify and mitigate potential vulnerabilities early in the development process. One example of a future possibility is the use of model-driven programming in the field of autonomous vehicles. Autonomous vehicles rely on complex software systems to navigate, make decisions, and interact with their environment. Model-driven programming can help organizations develop and manage these software systems, ensuring their reliability, safety, and compliance with regulations. 
## Exercise Identify one future trend or possibility in the field of model-driven programming. Explain why this trend or possibility is significant and how it can benefit organizations. ### Solution One future trend in the field of model-driven programming is the use of model-driven testing. Model-driven testing involves automatically generating test cases from models and using these test cases to validate the behavior and functionality of software systems. This trend is significant because it can significantly reduce the time and effort required for testing, improve test coverage, and enhance the overall quality of software systems. By automating the testing process, organizations can detect and fix bugs and issues early in the development lifecycle, reducing the risk of costly errors and improving the reliability and performance of their software systems. # 9. Conclusion In this textbook, we have covered the fundamentals of model-driven programming, explored different types of models, and discussed their use in programming. We have also delved into the design and testing processes in model-driven programming, and examined the tools and techniques used in this paradigm. Throughout this textbook, we have emphasized the rigorous and applied nature of model-driven programming. We have provided practical examples and exercises to help you understand and apply the concepts we have discussed. Model-driven programming offers many advantages, including increased productivity, improved software quality, and better collaboration between stakeholders. However, it also has its challenges, such as the need for specialized tools and expertise. In conclusion, model-driven programming is a powerful approach to software development that can greatly enhance the efficiency and effectiveness of the development process. By using models to represent and manipulate software artifacts, organizations can achieve higher levels of automation, reusability, and maintainability. 
As you continue your journey in the field of model-driven programming, remember to stay curious and keep exploring new ideas and technologies. The field is constantly evolving, and there are always new trends and possibilities to discover. We hope that this textbook has provided you with a solid foundation in model-driven programming and has inspired you to further explore this exciting field. Good luck on your future endeavors! # 9.1. Advantages and Disadvantages of Model-driven Programming Model-driven programming offers several advantages over traditional programming approaches. One major advantage is increased productivity. By using models to represent software artifacts, developers can automate many aspects of the development process, such as code generation and testing. This can significantly reduce the amount of manual coding and debugging required, allowing developers to build software more quickly and efficiently. Another advantage is improved software quality. Models provide a higher level of abstraction, allowing developers to focus on the overall structure and behavior of the software rather than getting bogged down in low-level implementation details. This can lead to cleaner, more maintainable code and fewer bugs. Model-driven programming also promotes better collaboration between stakeholders. Models provide a common language and visual representation that can be easily understood by both technical and non-technical team members. This can help bridge the gap between developers, designers, and domain experts, leading to better communication and a shared understanding of the software requirements. However, model-driven programming also has its disadvantages. One challenge is the need for specialized tools and expertise. Developing and maintaining models requires knowledge of specific modeling languages and tools, which may have a steep learning curve. 
Additionally, not all aspects of software development can be easily represented in models, so there may be limitations to what can be achieved using this approach. Despite these challenges, the benefits of model-driven programming make it a valuable approach for many software development projects. By leveraging the power of models, organizations can streamline their development processes, improve software quality, and foster better collaboration among team members. # 9.2. Comparison with Other Programming Paradigms Model-driven programming is just one of many programming paradigms used in software development. Each paradigm has its own strengths and weaknesses, and the choice of which paradigm to use depends on the specific requirements of the project. One common comparison is between model-driven programming and traditional imperative programming. In imperative programming, developers write code that specifies exactly how the software should be executed. This approach provides fine-grained control over the software's behavior, but can be more time-consuming and error-prone. Model-driven programming, on the other hand, focuses on creating high-level models that describe the desired behavior of the software. These models are then used to automatically generate code, reducing the amount of manual coding required. This approach can be more efficient and less error-prone, but may sacrifice some control and flexibility. Another comparison is between model-driven programming and object-oriented programming (OOP). OOP is a popular paradigm that organizes code into objects, which encapsulate data and behavior. OOP promotes code reuse, modularity, and extensibility, but can be complex and difficult to understand. Model-driven programming can complement OOP by providing a higher-level abstraction for designing and implementing software. Models can capture the overall structure and behavior of the software, while objects handle the low-level implementation details. 
This combination can lead to cleaner, more maintainable code. It's worth noting that model-driven programming is not a replacement for other paradigms, but rather a complementary approach that can be used alongside them. Depending on the project requirements, a combination of different paradigms may be the most effective approach. ## Exercise Compare model-driven programming with traditional imperative programming. What are the strengths and weaknesses of each approach? ### Solution Traditional imperative programming provides fine-grained control over the software's behavior, allowing developers to specify exactly how the software should be executed. This approach can be time-consuming and error-prone, but it provides flexibility and control. Model-driven programming, on the other hand, focuses on creating high-level models that describe the desired behavior of the software. These models are then used to automatically generate code, reducing the amount of manual coding required. This approach can be more efficient and less error-prone, but it sacrifices some control and flexibility. In summary, traditional imperative programming provides fine-grained control but can be time-consuming and error-prone, while model-driven programming is more efficient and less error-prone but sacrifices some control and flexibility. The choice between the two approaches depends on the specific requirements of the project. # 9.3. Final Thoughts and Recommendations In this textbook, we have explored the fundamentals of model-driven programming, including the concepts, tools, and techniques involved. We have seen how models can be used to drive the development process, from design to testing and beyond. Model-driven programming offers many advantages, such as increased productivity, improved software quality, and better maintainability. However, it is important to note that model-driven programming is not a silver bullet. Like any approach, it has its limitations and challenges. 
It requires a solid understanding of modeling languages and tools, as well as the ability to create accurate and effective models. It also requires a shift in mindset, as developers need to think in terms of models rather than traditional code. To successfully adopt model-driven programming, it is important to start small and gradually incorporate it into your development process. Begin by identifying areas where models can provide the most value, such as complex business logic or data-intensive systems. Experiment with different modeling languages and tools to find the ones that best fit your needs. Additionally, collaboration and communication are key in model-driven programming. It is important to involve stakeholders, such as domain experts and testers, throughout the modeling process. This ensures that the models accurately capture the requirements and can be effectively tested. In conclusion, model-driven programming is a powerful approach that can greatly improve the software development process. By leveraging high-level models, developers can create more efficient, maintainable, and reliable software. However, it requires careful planning, training, and collaboration to fully reap its benefits. As you continue your journey in software development, consider incorporating model-driven programming into your toolkit and explore its potential in your projects.
This is a copy of the original letter (in German) by W. Pauli in 1930. [This is a translation of a machine-typed copy of a letter that Wolfgang Pauli sent to a group of physicists meeting in Tübingen in December 1930. Pauli asked a colleague to take the letter to the meeting, and the bearer was to provide more information as needed.] Copy/Dec. 15, 1956 PM Open letter to the group of radioactive people at the Gauverein meeting in Tübingen. Physics Institute of the ETH, Gloriastrasse, Zürich, Dec. 4, 1930. Dear Radioactive Ladies and Gentlemen, As the bearer of these lines, to whom I graciously ask you to listen, will explain to you in more detail, because of the "wrong" statistics of the N- and Li-6 nuclei and the continuous beta spectrum, I have hit upon a desperate remedy to save the "exchange theorem" (1) of statistics and the law of conservation of energy. Namely, the possibility that in the nuclei there could exist electrically neutral particles, which I will call neutrons, that have spin 1/2 and obey the exclusion principle and that further differ from light quanta in that they do not travel with the velocity of light. The mass of the neutrons should be of the same order of magnitude as the electron mass and in any event not larger than 0.01 proton mass. - The continuous beta spectrum would then make sense with the assumption that in beta decay, in addition to the electron, a neutron is emitted such that the sum of the energies of neutron and electron is constant. Now it is also a question of which forces act upon neutrons. 
For me, the most likely model for the neutron seems to be, for wave-mechanical reasons (the bearer of these lines knows more), that the neutron at rest is a magnetic dipole with a certain moment μ. The experiments seem to require that the ionizing effect of such a neutron can not be bigger than the one of a gamma-ray, and then μ is probably not allowed to be larger than e • ($10^{-13}$ cm). But so far I do not dare to publish anything about this idea, and trustfully turn first to you, dear radioactive people, with the question of how likely it is to find experimental evidence for such a neutron if it would have the same or perhaps a 10 times larger ability to get through [material] than a gamma-ray. I admit that my remedy may seem almost improbable because one probably would have seen those neutrons, if they exist, for a long time. But nothing ventured, nothing gained, and the seriousness of the situation, due to the continuous structure of the beta spectrum, is illuminated by a remark of my honored predecessor, Mr Debye, who told me recently in Bruxelles: "Oh, It's better not to think about this at all, like new taxes." Therefore one should seriously discuss every way of rescue. Thus, dear radioactive people, scrutinize and judge. - Unfortunately, I cannot personally appear in Tübingen since I am indispensable here in Zürich because of a ball on the night from December 6 to 7. With my best regards to you, and also to Mr. Back, your humble servant signed W. Pauli [Translation: Kurt Riesselmann] In December of 1930 Wolfgang Pauli postulated the existence of the neutrino to explain the non-conservation of energy in beta-decays ($\beta$). In a $\beta$ decay an atomic nucleus ***X*** spontaneously decays into a different nucleus ***X'*** by emitting either an electron or a positron and becomes a different element with the same mass number (**A**) but with a different atomic number (**Z**). 
In 1930, scientists believed that this type of decay proceeded as follows: \begin{eqnarray} ^A_Z X \rightarrow ^A_{Z+1}X' + e \end{eqnarray} To determine the total energy released in a given nuclear decay we can use mass-energy equivalence, through the **Q-value**. The Q-value is defined as the total energy released in a given nuclear decay; from energy conservation, the general definition of Q based on mass-energy equivalence, where K is kinetic energy and m is mass, is (for equation (1)): \begin{eqnarray*} Q&=&K_f-K_i = (m_i-m_f)c^2\\ &=&\left[m\left({}_{Z}^{A}\mathrm {X} \right)-m\left({}_{Z+1}^{A}\mathrm {X'} \right)-m_{e}\right]c^{2} \end{eqnarray*} Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. Studying the experimental Q-values obtained for $\beta$ decays, Pauli noticed that the energy of these reactions appeared not to be conserved: there was always an energy deficit. In 1927 Ellis and Wooster performed an experiment in which they measured the total energy released in the disintegration ${}^{210}Bi \rightarrow {}^{210}Po$. The calorimeter was thick enough to stop all the emitted electrons and they expected to measure a total energy of $1.05$ MeV. In fact they observed a total energy $E = 344 \pm 34$ keV. The experiment was repeated in Berlin with an improved calorimeter by Meitner and Orthmann in 1930 and the result was $337 \pm 20$ keV. Pauli was intrigued by these results and thought there could be two possible explanations for the energy deficit: 1. The conservation laws were not valid when applied to regions of subatomic dimensions. OR 2. There must be a new invisible fundamental particle that accounts for the loss of energy from the nucleus. The second explanation was preferred because it maintained the integrity of the conservation laws. This led Pauli to postulate the existence of a new particle, the **neutron, that later became known as the neutrino $\nu$**. 
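The Q-value formula above is easy to evaluate numerically. A minimal sketch, not from the article: for $\beta^-$ decay written with *atomic* masses the electron mass cancels, so $Q = [M(\text{parent}) - M(\text{daughter})]c^2$. Tritium decay ($^3\mathrm{H} \rightarrow {}^3\mathrm{He} + e + \bar\nu_e$) is used here purely as an illustration, and the mass values are assumed literature figures:

```python
# Hedged sketch: numerical Q-value of a beta-minus decay via mass-energy
# equivalence. Using atomic masses, Q = [M(parent) - M(daughter)] * c^2.
U_TO_MEV = 931.494          # 1 atomic mass unit in MeV/c^2
m_parent = 3.01604928       # atomic mass of 3H in u (assumed value)
m_daughter = 3.01602932     # atomic mass of 3He in u (assumed value)

q_mev = (m_parent - m_daughter) * U_TO_MEV
print(f"Q = {q_mev * 1000:.1f} keV")  # a small endpoint energy, below 19 keV
```

As the text notes, the beta particle alone carries anywhere from 0 to Q; the calorimetric deficit Pauli worried about is exactly the share carried off by the unseen particle.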
The neutrino would be emitted by the nucleus simultaneously with the electron, and would carry the missing energy and momentum, but would not be detected. Taking this into account one can then re-write reaction (1) as: \begin{eqnarray} ^A_Z X \rightarrow ^A_{Z+1}X' + e\, + \bar{\nu_e} \end{eqnarray} And then the new Q-value becomes: \begin{eqnarray*} Q=\left[m\left({}_{Z}^{A}\mathrm {X} \right)-m\left({}_{Z+1}^{A}\mathrm {X'} \right)-m_{e}-m_{{\overline {\nu }}_{e}}\right]c^{2} \end{eqnarray*} It would take physicists 26 years to experimentally detect the first neutrino. Figure 1 (below) is a schema of a $\beta^-$ decay, where a neutron decays into a proton and emits an electron: !["beta decay"](https://upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Beta-minus_Decay.svg/240px-Beta-minus_Decay.svg.png) Figure 1: $\beta^-$ decay: a nucleus emits an electron. A neutron decays into a proton and emits an electron and an anti-neutrino. Pauli calls this particle a neutron; today we know it as a neutrino. The neutron as we know it today was only discovered two years later, in 1932, by James Chadwick, see [Discovery of the neutron.](https://en.wikipedia.org/wiki/Discovery_of_the_neutron) In 1933 Enrico Fermi changed the name of Pauli's postulated particle to **neutrino**. The neutrino particle played a crucial role in the first theory of nuclear beta decay formulated by Enrico Fermi in 1933, which later became known as the weak force. Fermi was Italian and **neutrino was the obvious choice because it means the little neutral one.** Neutrinos have a very low cross-section; they are very penetrating particles, extremely hard to detect, and the first neutrino was only detected in 1956. Today we know that there are three different neutrino flavours, one for each lepton: $\nu_e$ electron neutrino, $\nu_{\mu}$ muon neutrino and $\nu_{\tau}$ tau neutrino. 
(and the corresponding anti-neutrinos) **$\nu_e$ electron neutrino:** It was postulated by W. Pauli in 1930 and it was only discovered in 1956 by a team led by Clyde Cowan and Frederick Reines. You can learn more here: [Cowan–Reines Neutrino Experiment](https://en.wikipedia.org/wiki/Cowan–Reines_neutrino_experiment) **$\nu_{\mu}$ muon neutrino:** The muon neutrino was first hypothesized in the early 1940s, and was discovered in 1962 by Leon Lederman, Melvin Schwartz and Jack Steinberger. They were awarded the 1988 Nobel Prize in Physics for their discovery. You can learn more here: [Discovery of the Muon Neutrino](https://www.bnl.gov/bnlweb/history/nobel/nobel_88.asp) **$\nu_{\tau}$ tau neutrino:** The existence of the tau neutrino was immediately implied after the tau particle was detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his colleagues at the SLAC–LBL group. The tau neutrino was first observed in July 2000 by the DONUT collaboration. You can learn more here: [Direct Observation of the Nu Tau](https://en.wikipedia.org/wiki/DONUT)
1. If '*' stands for division, '-' stands for multiplication, '/' stands for addition and '@' stands for subtraction, which one of the following equations is correct?
A. 25 * 5 @ 10 / 1 - 100 / 5 = 100
B. 25 / 5 - 50 @ 30 * 2 = 75
C. 25 - 6 / 10 @ 1 @ 100 * 5 = 139
D. 25 / 100 * 5 / 10 @ 1 - 6 = 29
Answer: A, since 25 $\div$ 5 - 10 + 1 $\times$ 100 + 5 = 100.

2. Some letters are given with numbers from 1 to 9 (or 1 to 6). Select the sequence of numbers which arranges the letters into a meaningful word.
$N\,\,\, N \,\,\,D \,\,\,I \,\,\,N \,\,\,I \,\,\,T \,\,\,G \,\,\,A$
$1\,\,\, 2 \,\,\,3 \,\,\,4 \,\,\,5 \,\,\,6 \,\,\,7 \,\,\,8 \,\,\,9$
A. 2 1 5 7 6 4 3 8 9
B. 3 1 2 5 4 6 7 9 8
C. 4 2 1 3 5 7 6 8 9
D. 4 2 3 6 8 5 9 1 7
Answer: D, which spells INDIGNANT.

3. $D\,\,\, E \,\,\,N \,\,\,F \,\,\,R \,\,\,I$
$1\,\,\, 2 \,\,\,3 \,\,\,4 \,\,\,5 \,\,\,6$
A. 6, 4, 2, 3, 5, 1
B. 3, 2, 1, 6, 5, 4
C. 4, 5, 6, 2, 3, 1
D. 1, 2, 5, 4, 6, 3

4. One morning after sunrise, Kishan is standing facing a pole. The shadow of the pole fell exactly to his right. Then in which direction is he facing?
A. South
B. South-West
C. West
D. East

5. Which of the following diagrams indicates the best relation between women, mothers, widows?

6. If February $29^{th}$ falls on a Monday, what would be the $11^{th}$ day of the month?
A. Wednesday
B. Saturday
C. Thursday
D. Monday

7. How many 8s are there in the following sequence which are immediately preceded by 6 but not immediately followed by 5?
6, 8, 5, 7, 8, 5, 4, 3, 6, 8, 1, 9, 8, 5, 4, 6, 8, 2, 9, 6, 8, 1, 3, 6, 8, 5, 3, 6
A. Two
B. Three
C. Four
D. One

8. A, B, C, D and E are sitting on a bench. A is sitting next to B; C is sitting next to D; D is not sitting with E, who is on the left end of the bench. C is in the second position from the right. A is to the right of B and E. A and C are sitting together. In which position is A sitting?
A. Between B and C
B. Between E and D
C. Between C and E
D. Between B and D

9. In a class of 40 students, Ajit has got $28^{th}$ rank from last, Babu has got $16^{th}$ rank from first, Chandra has got $31^{st}$ rank from last, Bhuvana has got $14^{th}$ rank from first. Who got the top rank among themselves?
A. Babu
B. Bhuvana
C. Chandra
D. Ajit
The Fibonacci sequence is the sequence 1, 1, 2, 3, 5, $\ldots$ where the first and second terms are 1 and each term after that is the sum of the previous two terms. What is the remainder when the $100^{\mathrm{th}}$ term of the sequence is divided by 8? We can look at the terms of the Fibonacci sequence modulo 8. \begin{align*} F_1 &\equiv 1\pmod{8}, \\ F_2 &\equiv 1\pmod{8}, \\ F_3 &\equiv 2\pmod{8}, \\ F_4 &\equiv 3\pmod{8}, \\ F_5 &\equiv 5\pmod{8}, \\ F_6 &\equiv 0\pmod{8}, \\ F_7 &\equiv 5\pmod{8}, \\ F_8 &\equiv 5\pmod{8}, \\ F_9 &\equiv 2\pmod{8}, \\ F_{10} &\equiv 7\pmod{8}, \\ F_{11} &\equiv 1\pmod{8}, \\ F_{12} &\equiv 0\pmod{8}, \\ F_{13} &\equiv 1\pmod{8}, \\ F_{14} &\equiv 1\pmod{8}, \\ F_{15} &\equiv 2\pmod{8}, \\ F_{16} &\equiv 3\pmod{8}. \end{align*}Since $F_{13}$ and $F_{14}$ are both 1, the sequence begins repeating at the 13th term, so it repeats every 12 terms. Since the remainder is 4 when we divide 100 by 12, we know $F_{100}\equiv F_4\pmod 8$. Therefore the remainder when $F_{100}$ is divided by 8 is $\boxed{3}$.
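The periodicity argument above can be confirmed by direct iteration, reducing modulo 8 at every step. A quick sketch, not part of the original solution:

```python
def fib_mod(n, m):
    """n-th Fibonacci number modulo m (with F_1 = F_2 = 1), by iteration."""
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, (a + b) % m
    return b if n > 1 else a

print(fib_mod(100, 8))  # -> 3, matching F_100 ≡ F_4 (mod 8)
```

Because only remainders are kept, this runs in O(n) time with O(1) memory, and it also verifies the period: the pair (F_13, F_14) mod 8 equals (1, 1), restarting the cycle of length 12.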
Algebraic group - Wikipedia, the free encyclopedia In algebraic geometry, an algebraic group (or group variety) is a group that is an algebraic variety, such that the multiplication and inverse are given by regular functions on the variety. Two important classes of algebraic groups arise, that for the most part are studied separately: abelian varieties (the 'projective' theory) and linear algebraic groups (the 'affine' theory). Note that this means that algebraic group is narrower than Lie group, when working over the field of real numbers: there are examples such as the universal cover of the 2×2 special linear group that are Lie groups, but have no faithful linear representation. en.wikipedia.org /wiki/Algebraic_group (380 words) Linear algebraic group - Wikipedia, the free encyclopedia In mathematics, a linear algebraic group is a subgroup of the group of invertible n×n matrices (under matrix multiplication) that is defined by polynomial equations. Such groups were known for a long time before their abstract algebraic theory was developed according to the needs of major applications. The first basic theorem of the subject is that any affine algebraic group is a linear algebraic group: that is, any affine variety V that has an algebraic group law has a faithful linear representation, over the same field. en.wikipedia.org /wiki/Linear_algebraic_group (471 words) Research group in algebra and combinatorics (Site not responding. Last check: 2007-10-07) Groups are studied in several contexts: as symmetry groups of discrete structures, as linear groups (groups of matrices) and as algebraic groups. 
Linear and Algebraic Groups: The emphasis lies on recognition problems for finite linear groups and representations, the eigenvalue behaviour of group elements in a representation and on asymptotic problems in representations and locally finite groups. Algebraic group methods are used extensively for the study of representations of Lie type groups. www.mth.uea.ac.uk /admissions/graduate/algrep.html (455 words) Quillen's theorem for unipotent algebraic groups It is by now a well-known fact that the dual of the (reduced) Steenrod algebra A* is isomorphic to the coordinate algebra of the (infinite dimensional) group scheme G = Aut_s(F_a(x, y)) of the strict automorphisms of the additive formal group law defined over Fp. The algebra of (reduced) mod p unstable cohomology cooperations is a polynomial algebra B* = F_p[ξ_0, ξ_1, ...] with comultiplication Δξ_n = Σ_i ξ_{n-i}^{p^i} ⊗ ξ_i. Wilkerson, The cohomology algebras of finite dimensional Hopf algebras, Trans. hopf.math.purdue.edu /PetersonC/Ext_An.txt (2570 words) PlanetMath: affine algebraic group (Site not responding. Last check: 2007-10-07) is an affine algebraic group over itself with the group law being addition, and as is Cross-references: algebraic torus, matrices, general linear group, algebraic, inverse, map, group, affine space, subset, closed, variety, field This is version 4 of affine algebraic group, born on 2003-08-21, modified 2003-08-22. planetmath.org /encyclopedia/AffineAlgebraicGroup.html (91 words) UWM Math: Lie Theory/Algebraic Groups (Site not responding. Last check: 2007-10-07) The tangent space at the identity of a Lie group is a Lie algebra. An algebraic group is a group that is simultaneously (and compatibly) an algebraic set (i.e., a set with group operations defined by polynomial equations). The study of quantum groups is also closely linked to the study of Lie algebras and algebraic groups. 
www.uwm.edu /Dept/Math/Research/Algebra/Lie/Lie.html (230 words) An algebraic map or regular map or morphism of quasiprojective varieties is a map whose graph is closed. An algebraic group is a group G in the category of quasiprojective varieties i.e. G is simultaneously a group and variety and the group multiplication G x G → G and inversion G → G are morphisms. www.math.purdue.edu /~dvb/algeom2.html (786 words) Glossary of terms for Fermat's Last Theorem The conjecture that the rank of the group of rational points of an elliptic curve E is equal to the order of the zero of the L-function L(E,s) of the curve at s=1. the kernel of a group homomorphism is a subgroup. A complete algebraic variety which is an algebraic curve that is essentially the quotient space of the upper half of the complex plane by the action of a subgroup of finite index of the modular group. gyral.blackshell.com /flt/flt10.htm (2633 words) AlgebraicGroups - Cmat Wiki (Site not responding. Last check: 2007-10-07) Algebraic Lie algebras: Let $k$ be a field of arbitrary characteristic, $G$ an affine algebraic group and $\mathfrak h \subset \mathcal L(G)$ a $k$-Lie subalgebra. Compute the action of the corresponding Lie algebra on $k^{2}$ and conclude that there are examples of subspaces of $k^{2}$ stable with respect to the action of the Lie algebra but not with respect to the action of $G_{a}$. Prove that this Lie algebra is not associated to an affine algebraic group. 
One of the principal facts of elementary group theory is that any finitely generated abelian group is the direct sum of a finite group and a finite number of infinite cyclic groups (isomorphic to the integers Z). cgd.best.vwh.net /home/flt/flt03.htm (3513 words) UCLA Math: Algebra and Algebraic Geometry (Site not responding. Last check: 2007-10-07) Algebra, one of the three major branches of pure mathematics, interacts significantly with many other fields. The Department's particular strengths lie in group theory, algebraic geometry, algebraic K-theory, and areas of algebra related to number theory, modular forms, and the theory of algebras. Students specializing in algebra should take most of these during their graduate careers. www.math.ucla.edu /grad_programs/faculty/research_areas/algebra.html (253 words) 14: Algebraic geometry Algebraic geometry combines the algebraic with the geometric for the benefit of both. Note that many computations in algebraic geometry are really computations in polynomials rings, hence computational commutative algebra applies. This is essentially the study of formal groups. www.math.niu.edu /~rusin/known-math/index/14-XX.html (523 words) [No title] (Site not responding. Last check: 2007-10-07) For an algebraic group G, defined over an algebraically closed field of characteristic zero, there is a natural partial order on the set of G-actions on algebraic varieties: X >= Y if there exists a dominant G-equivariant rational map (i.e., a compression) from X to Y. Alternatively, one can consider regular, rather than rational, compressions. Abstract: Let G be an algebraic group and X be an irreducible algebraic variety with a generically free G-action, all defined over an algebraically closed field of characteristic zero. The essential dimension is a numerical invariant of the group; it is often equal to the minimal number of independent parameters required to describe all algebraic objects of a certain type. 
www.math.ubc.ca /~reichst/abstract.html (2805 words) Given a recursive sequence v_n of elements in a free monoid, we investigate the quotient of the free associative algebra by the ideal generated by all non-subwords in v_n. Affine space in the given \Theta is represented in the form Hom(W,G), where W=W(X) is the free in \Theta algebra with the finite X and G is an algebra from \Theta. Thus, algebraic sets or algebraic varieties in the space Hom(W,G) are defined by a set of formulas, which are not necessarily equalities. www.math.technion.ac.il /~techm/19990623000019990624asp (1448 words) Knot Table: (Algebraic) Concordance Order (Site not responding. Last check: 2007-10-07) Levine defined a homomorphism of the concordance group onto an algebraically defined group, isomorphic to the countably infinite direct sum of (an infinite number of) copies of Z_2, Z_4, and Z. The algebraic concordance order of a knot is the order of its image in Levine's algebraic concordance group. Livingston and Naik have shown that many knots of algebraic order 4 are infinite order in the concordance group. Andrius Tamulis proved that many knots of algebraic order 2 are of higher order in the concordance group, and proved that others are either negative amphicheiral, or concordant to negative amphicheiral knots, and thus are of order 2. www.indiana.edu /~knotinfo/descriptions/concordance_order.html (216 words) The Coxeter group is the symmetry group of an n-dimensional cube. This group is the semidirect product of the permutations of the n axes and the group (Z/2)^n generated by the reflections along these axes. Instead, it's half as big as the Weyl group of B_n: it's the subgroup of the symmetries of the n-dimensional cube generated by permutations of the coordinate axes and reflections along *pairs* of coordinate axes. 
math.ucr.edu /home/baez/twf_ascii/week187 (2283 words) Algebraic Geometry Algebraic geometry is the study of the "shape" of the set of solutions to polynomial equations. The study of this type of question is called "arithmetic algebraic geometry" and is closely related to number theory, group theory, and representation theory. The study of the geometric properties of this continuum is known as "complex algebraic geometry" and is closely related to topology, differential geometry, complex analysis and even theoretical physics. www.math.utah.edu /research/ag (382 words) Algebraic Number Theory Archive (Site not responding. Last check: 2007-10-07) ANT-0295: 8 Jun 2001, On the structure theory of the Iwasawa algebra of a p-adic Lie group, by Otmar Venjakob. ANT-0185: 7 Jun 1999, An analogue of Serre's conjecture for Galois representations and Hecke eigenclasses in the mod-p cohomology of GL(n,Z), by Avner Ash and Warren Sinnott. ANT-0066: 31 Aug 1998, Torsion subgroups of Mordell-Weil groups of Fermat Jacobians, by Pavlos Tzermias. front.math.ucdavis.edu /ANT (12251 words) London postgraduate study group in algebraic Number theory (Site not responding. Last check: 2007-10-07) This study group is designed for postgrads in algebraic number theory at London colleges, and they present the talks in this study group. The study group is organised by staff members, who attend to give help and guidance on technical questions. This term, the study group will be held on Wednesdays, between 3:00 and 4:00 pm in room 423 of KCL, with a short break for tea and biscuits in room 429 before the London Number Theory Seminar. www.mth.kcl.ac.uk /events/psgant.html (207 words) Short CV: L.R. Renner (Site not responding. Last check: 2007-10-07) Algebraic groups and monoids, algebraic transformation groups, related combinatorics and geometry. The theory of algebraic monoids is a natural synthesis of algebraic group theory (Chevalley, Borel, Tits) and torus embeddings (Mumford, Kempf, et al). 
The intention of this monograph is to convince the reader that reductive monoids are among the darlings of algebra. www.math.uwo.ca /~lex/cv/Renner.html (327 words) Department of Mathematics - University of Georgia The geometry group includes algebraic geometry, differential geometry, mathematical physics, and representation theory. A diverse group of mathematicians in the department has a number of overlapping research interests in a broad range of geometric problems. Members of the group are the postdoc Nancy Wrinkle, the graduate students Xander Faber, Chad Mullikin, and Heunggi Park, and the freshman Darren Wolford. www.math.uga.edu /math/research/geometry.html (504 words) algebra help-algebra software-algebra math tutor Both subjects are clearly motivated by their use in resolving singularities of algebraic varieties, for which one of the main tools consists in blowing up the variety along an equimultiple subvariety. Main topic is the bilinear complexity of finite dimensional associative algebras with unity: Upper bounds for the complexity of matrix multiplication and a general lower bound for the complexity and algebraic structure in the case of algebras of minimal rank is shown. Final chapter is on the study of isotropy groups of bilinear mappings and the structure of the variety of optimal algorithms for bilinear mapping. www.softmath.com /algebra11.htm (1861 words) Publisher description for Library of Congress control number 97004011 (Site not responding. Last check: 2007-10-07) It analyses groups which possess a certain very general dependence relation (Shelah's notion of 'forking'), and tries to derive structural properties from this. These may be group-theoretic (nilpotency or solubility of a given group), algebro-geometric (identification of a group as an algebraic group), or model-theoretic (description of the definable sets). 
In this book, the general theory of stable groups is developed from the beginning (including a chapter on preliminaries in group theory and model theory), concentrating on the model- and group-theoretic aspects. www.loc.gov /catdir/description/cam028/97004011.html (177 words) List of publications of a researcher In Geometric and Algebraic Combinatorics (GAC3, Oisterwijk, The Netherlands, August 14-19, 2005) (pp....-...). Oberwolfach, Arbeitstagung 'Groups and Geometries' (Aschbacher, Kantor, Timmesfeld). Integral representations of finite groups in algebraic groups. oashos01.hosting.kun.nl:8015 /metue/pk_apa_n.medewerker?p_url_id=1161 (1641 words) Definition of Radical in chemistry, either an atom or molecule with at least one unpaired electron, or a group of atoms, charged or uncharged, that act as a single entity in reaction. the radical of an algebraic group is a concept in algebraic group theory. the radical of an ideal is an important concept in abstract algebra. www.wordiq.com /definition/Radical (285 words) Galois Cohomology Now the twisted forms of G are in one-to-one correspondence to the 1-cocycles of Gamma on Aut(G) and the forms are conjugate if and only if the cocycles are cohomologous. Returns Aut_K(G) as a Gamma-group with Gamma=Gal(K:k), where A is the automorphism group of G and K is the base field of G. The field k must be a subfield of K. ActingGroup(G) : GrpLie -> Grp, Map Returns Gamma=Gal(K:k) together with the map m from the abstract Galois group Gamma into the set of field automorphisms, such that m(gamma) is the actual field automorphism for every gamma in Gamma. www.math.lsu.edu /magma/text1054.htm (410 words) Diophantine geometry and arithmetic, local and global heights on algebraic varieties, uniform distribution on locally compact groups, applications of harmonic analysis to number theory, the Mahler measure of polynomials. 
Our number theory group is complemented by a large group in algebraic geometry, including Valery Alexeev, William Graham, Elham Izadi, Roy Smith, and Robert Varley. Members of the group are: Robert Brice, Sungkon Chang, Jerry Hower, Jacob Keenum, Nausheen Lotia, Daeshik Park, Clay Petsche, Charles Pooh, Dong Hoon Shin, and Juhyung Yi. www.math.uga.edu /research/number_theory.html (607 words) These notes provide an introductory overview of the theory of algebraic groups, Lie algebras, Lie groups, and arithmetic groups. The Lie algebra of an algebraic group (continued) Algebraic groups over R and C; relation to Lie groups www.jmilne.org /math/CourseNotes/aag.html (103 words) DIMACS/DIMATIA/Renyi Working Group on Algebraic and Geometric Methods in Combinatorics This working group will concentrate on two broad areas of research: algebraic methods involving the study of homomorphisms of graphs, with special emphasis on problems arising from statistical physics, and problems of combinatorial geometry. This working group will also examine various topics that lie at the interface between the two disciplines of computational geometry and real algebraic geometry, topics such as construction and analysis of arrangements of algebraic surfaces, lower bound proofs, robust computations, and more. The contributions of real algebraic geometry to computational geometry are quite well known, but perhaps less well known are some interactions in the other direction, e.g., separating the `combinatorial' from the `algebraic' complexity of semialgebraic sets [24]. dimacs.rutgers.edu /Workshops/Algebraic/main.html (2719 words) This is an expository article on the theory of algebraic stacks. We study the problem of understanding the uniformizing Fuchsian groups for a family of plane algebraic curves by determining explicit first variational formulae for the generators. It is also shown that any infinitesimally divisible measure on a connected nilpotent real algebraic group is embeddable. 
www.ias.ac.in /mathsci/vol111/feb2001/absfeb2001.html (678 words)
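The definition at the top of this listing, a group whose multiplication and inverse are regular (polynomial) maps on a variety, can be illustrated with the simplest nontrivial example. This sketch is not from any of the sources above: it takes the unit circle x^2 + y^2 = 1 with the rotation group law, whose operations are visibly polynomial in the coordinates.

```python
import math

# The circle x^2 + y^2 = 1 as an algebraic group (rotations, i.e. SO(2)).
def mul(p, q):
    (x1, y1), (x2, y2) = p, q
    # Angle-addition formulas: polynomial in the coordinates.
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

def inv(p):
    x, y = p
    return (x, -y)  # the inverse rotation, also a polynomial map

a = (math.cos(0.3), math.sin(0.3))
b = (math.cos(1.1), math.sin(1.1))
c = mul(a, b)
print(c[0] ** 2 + c[1] ** 2)  # the product stays on the variety (~1.0)
```

One can check directly that mul(a, inv(a)) returns the identity (1, 0), so the group axioms hold with every operation expressed by polynomials, exactly as the definition requires.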
CommonCrawl
Monday–Friday, March 14–18, 2016; Baltimore, Maryland Session V32: Chemical Physics of Extreme Environments II Sponsoring Units: DCP Chair: Timothy Zwier, Purdue University V32.00001: Kinetics, mechanisms and products of reactions of Criegee intermediates. Invited Speaker: Andrew Orr-Ewing The atmospheric ozonolysis of alkenes such as isoprene produces Criegee intermediates which are increasingly recognized as important contributors to oxidation chemistry in the Earth's troposphere. Stabilized Criegee intermediates are conveniently produced in the laboratory by ultraviolet photolysis of diiodoalkanes in the presence of O$_{\mathrm{2}}$, and can be detected by absorption spectroscopy using their strong electronic bands in the near ultraviolet region. We have used these techniques to study a wide range of reactions of Criegee intermediates, including their self-reactions, and reactions with carboxylic acids and various other trace atmospheric constituents. In collaboration with the Sandia National Laboratory group led by Drs C.A. Taatjes and D.L. Osborn, we have used photoionization and mass spectrometry methods, combined with electronic structure calculations, to characterize the products of several of these reactions. Our laboratory studies determine rate coefficients for the Criegee intermediate reactions, many of which prove to be fast. In the case of reactions with carboxylic acids, a correlation between the dipole moments of the reactants and the reaction rate coefficients suggests a dipole-capture controlled reaction and allows us to propose a structure-activity relationship to predict the rates of related processes. The contributions of these various Criegee intermediate reactions to the chemistry of the troposphere have been assessed using the STOCHEM-CRI global atmospheric chemistry model. [Preview Abstract] V32.00002: Direct Measurement of the Unimolecular Decay Rate of Criegee Intermediates to OH Products. 
Fang Liu, Yi Fang, Stephen Klippenstein, Anne McCoy, Marsha Lester Ozonolysis of alkenes is an important non-photolytic source of OH radicals in the troposphere. The production of OH radicals proceeds through formation and unimolecular decay of Criegee intermediates such as syn-CH3CHOO and (CH3)2COO. These alkyl-substituted Criegee intermediates can undergo a 1,4-H transfer reaction to form an energized vinyl hydroperoxide species, which breaks apart to OH and vinoxy products. Recently, this laboratory used IR excitation in the C-H stretch overtone region to initiate the unimolecular decay of syn-CH3CHOO and (CH3)2COO Criegee intermediates, leading to OH formation. Here, direct time-domain measurements are performed to observe the rate of appearance of OH products under collision-free conditions utilizing UV laser-induced fluorescence for detection. The experimental rates are in excellent agreement with statistical RRKM calculations using barrier heights predicted from high-level electronic structure calculations. Accurate determination of the rates and barrier heights for unimolecular decay of Criegee intermediates is essential for modeling the kinetics of alkene ozonolysis reactions, a significant OH radical source in atmospheric chemistry, as well as the steady-state concentration of Criegee intermediates in the atmosphere. [Preview Abstract] V32.00003: Probing neutral atmospheric collision complexes with anion photoelectron imaging. Invited Speaker: Caroline Jarrold Photodetachment of anionic precursors of neutral collision complexes offers a way to probe the effects of symmetry-breaking collision events on the electronic structure of normally transparent molecules. 
We have measured the anion photoelectron imaging (PEI) spectra of a series of O$_{\mathrm{2}}^{\mathrm{-}}\cdot X$ complexes, where $X$ is a volatile organic molecule with atmospheric relevance, to determine how the electronic properties of various $X$ molecules affect the low-lying electronic structure of neutral O$_{\mathrm{2}}$ undergoing O$_{\mathrm{2}}-X$ collisions. The study was motivated by the catalog of vibrational and electronic absorption lines induced by O$_{\mathrm{2}}-$O$_{\mathrm{2}}$, O$_{\mathrm{2}}-$N$_{\mathrm{2}}$, and other collisions. The energies of electronic features observed in the anion PEI spectra of O$_{\mathrm{2}}^{\mathrm{-}}\cdot X$ ($X =$ hexane, hexene, isoprene and benzene) relative to O$_{\mathrm{2}}^{\mathrm{-}}$ PEI spectroscopic features indicate that photodetachment of the anion does indeed access a repulsive part of the O$_{\mathrm{2}}$ -- $X$ potential. In addition, the spectra of the various complexes show an interesting variation in the intensities of transitions to the excited O$_{\mathrm{2}}(^{\mathrm{1}}\Delta_{\mathrm{g}})\cdot X$ and O$_{\mathrm{2}}(^{\mathrm{1}}\Sigma _{\mathrm{g}}^{\mathrm{+}})\cdot X$ states relative to the ground O$_{\mathrm{2}}(^{\mathrm{3}}\Sigma _{\mathrm{g}}^{\mathrm{-}})\cdot X$ state. With $X =$ non-polar species such as hexane, the relative intensities of transitions to the triplet and singlet states of O$_{\mathrm{2}}\cdot X$ are very similar to those of isolated O$_{\mathrm{2}}$, while the relative intensity of the singlet band decreases and becomes lower in energy relative to the triplet band for $X =$ polar molecules. A significant enhancement in the intensities of the singlet bands is observed for complexes with $X =$ isoprene and benzene, both of which have low-lying triplet states.
The role of the triplet states in isoprene and benzene, and the implications for induced electronic absorption in O$_{\mathrm{2}}$ undergoing collisions with these molecules, are explored. [Preview Abstract]

V32.00004: Photoelectron Spectroscopy of Transition Metal Hydride Cluster Anions and Their Roles in Hydrogenation Reactions

Xinxing Zhang, Kit Bowen

The interaction between transition metals (TM) and hydrogen has been an intriguing research topic for such applications as hydrogen storage and catalysis of hydrogenation and dehydrogenation. Special bonding features between TM and hydrogen are interesting not only because they are scarcely reported but also because they could help to discover and understand the nature of chemical bonding. Very recently, we discovered a PtZnH$_{\mathrm{5}}^{\mathrm{-}}$ cluster which possessed an unprecedented planar pentagonal coordination between the H$_{\mathrm{5}}^{\mathrm{-}}$ moiety and Pt, and exhibited special $\sigma $-aromaticity. The H$_{\mathrm{5}}^{\mathrm{-}}$ kernel as a whole can be viewed as a $\eta ^{\mathrm{5}}$-H$_{\mathrm{5}}$ ligand for Pt. As the second example, an H$_{\mathrm{2}}$ molecule was found to act as a ligand in the PdH$_{\mathrm{3}}^{\mathrm{-}}$ cluster, in which two H atoms form a $\eta ^{\mathrm{2}}$-H$_{\mathrm{2}}$ type of ligation to Pd. These transition metal hydride clusters were considered to be good hydrogen sources for hydrogenation. The reactions between PtH$_{\mathrm{n}}^{\mathrm{-}}$ and CO$_{\mathrm{2}}$ were investigated. We observed formate in the final product H$_{\mathrm{2}}$Pt(HCO$_{\mathrm{2}})^{\mathrm{-}}$.
[Preview Abstract]

V32.00005: Total Cross Section Measurements and Velocity Distributions of Hyperthermal Charge Transfer in Xe$^{\mathrm{2+}}+$ N$_{\mathrm{2}}$

Michael Hause, Benjamin Prince, Raymond Bemish

Guided-ion beam measurements of the charge exchange (CEX) cross section for Xe$^{\mathrm{2+}}+$ N$_{\mathrm{2}}$ are reported for collision energies ranging from 0.3 to 100 eV in the center-of-mass frame. Measured total cross sections decrease from 69.5$\pm $0.3 Angstroms$^{\mathrm{2}}$ (Angs.) at the lowest collision energies to 40 Angs.$^{\mathrm{2}}$ at 100 eV. The product N$_{\mathrm{2}}^{\mathrm{+}}$ CEX cross section is similar to the total CEX cross section, while those of the dissociative product, N$^{\mathrm{+}}$, are less than 1 Angs.$^{\mathrm{2}}$ for collision energies above 9 eV. The product N$_{\mathrm{2}}^{\mathrm{+}}$ CEX cross sections measured here are much larger than the total optical emission-excitation cross sections for the N$_{\mathrm{2}}^{\mathrm{+}}$ ($A)$ and ($B)$ state products determined previously in the chemiluminescence study of Prince and Chiu, suggesting that most of the N$_{\mathrm{2}}^{\mathrm{+}}$ products are in the $X$ state. Time-of-flight (TOF) spectra of both the Xe$^{\mathrm{+}}$ and N$_{\mathrm{2}}^{\mathrm{+}}$ products suggest two different CEX product channels. The first leaves highly vibrationally excited N$_{\mathrm{2}}^{\mathrm{+}}$ products with forward-scattered Xe$^{\mathrm{+}}$ (LAB frame) and releases between 0.35 and 0.6 eV of translational energy for collisions below 17.6 eV. The second component decreases with collision energy and leaves backscattered Xe$^{\mathrm{+}}$ and low vibrational states of N$_{\mathrm{2}}^{\mathrm{+}}$. At collision energies above 17.6 eV, only charge exchange involving minimal momentum exchange remains in the TOF spectra.
[Preview Abstract]

V32.00006: Aerosol droplets: Nucleation dynamics and photokinetics. Invited Speaker: Ruth Signorell

This talk addresses two fundamental aerosol processes that play a pivotal role in atmospheric processes: the formation dynamics of aerosol particles from neutral gas phase precursors, and photochemical reactions in small aerosol droplets induced by ultraviolet and visible light. Nucleation is the rate-determining step of aerosol particle formation. The idea behind nucleation is that supersaturation of a gas leads to the formation of a critical cluster, which quickly grows into larger aerosol particles. We discuss an experiment for studying the size and chemical composition of critical clusters at the molecular level. Much of the chemistry happening in planetary atmospheres is driven by sunlight. Photochemical reactions in small aerosol particles play a peculiar role in this context. Sunlight is strongly focused inside these particles, which leads to a natural increase in the rates of photochemical reactions in small particles compared with the bulk. This ubiquitous phenomenon has been recognised but so far escaped direct observation and quantification. The development of a new experimental setup has finally made it possible to directly observe this nanofocusing effect in droplet photokinetics. [Preview Abstract]

V32.00007: \textbf{Single Scattering Albedo of fresh~biomass burning aerosols measured using cavity ring down spectroscopy and nephelometry}

Solomon Bililign, Sujeeta Singh, Marc Fiddler, Damon Smith

An accurate measurement of optical properties of aerosols is critical for quantifying the effect of aerosols on climate. Uncertainties still persist and measurement results vary significantly.
The factors that affect measurement accuracy and the resulting uncertainties of the extinction-minus-scattering method are evaluated using a combination of cavity ring-down spectroscopy (CRDS) and integrating nephelometry, and applied to measure the optical properties of fresh soot (size 300 and 400 nm) produced from burning of pine, red oak and cedar. We have demonstrated a system that allows measurement of optical properties at a wide range of wavelengths, which can be extended over most of the solar spectrum to determine ``featured'' absorption cross sections as a function of wavelength. Measured SSA values were nearly flat, ranging from 0.45 to 0.6. The result also demonstrates that the SSA of fresh soot is nearly independent of the wavelength of light in the 500--680 nm range, with a slight increase at longer wavelengths. The values are within the range of measured values, both in the laboratory and in field studies, for fresh soot. [Preview Abstract]

V32.00008: Catching Conical Intersections in the Act; Monitoring Transient Electronic Coherences by Attosecond Stimulated X-Ray Raman Signals

Kochise Bennett, Markus Kowalewski, Konstantin Dorfman, Shaul Mukamel

Conical intersections (CIs) dominate the pathways and outcomes of virtually all photochemical molecular processes. Despite extensive experimental and theoretical effort, CIs have not been directly observed yet, and the experimental evidence is inferred from fast reaction rates and vibrational signatures. We show that short X-ray pulses can directly detect the passage through a CI with the adequate temporal and spectral sensitivity. The non-adiabatic coupling that exists in the region of a CI redistributes electronic population but also generates electronic coherence. This coherent oscillation can then be detected via a coherent Raman process that employs a composite femtosecond/attosecond X-ray pulse.
This technique, dubbed Transient Redistribution of Ultrafast Electronic Coherences (TRUECARS), is reminiscent of Coherent Anti-Stokes Raman Spectroscopy (CARS) in that a coherent oscillation is set in motion and then monitored, but differs in that the dynamics is electronic (CARS generally observes nuclear dynamics) and the coherence is generated internally by passage through a region of non-adiabatic coupling rather than by an externally applied laser. [Preview Abstract]
Fluxion

A fluxion is the instantaneous rate of change, or gradient, of a fluent (a time-varying quantity, or function) at a given point.[1] Fluxions were introduced by Isaac Newton to describe his form of a time derivative (a derivative with respect to time). Newton introduced the concept in 1665 and detailed it in his mathematical treatise, Method of Fluxions.[2] Fluxions and fluents made up Newton's early calculus.[3]

History

Fluxions were central to the Leibniz–Newton calculus controversy, when Newton sent a letter to Gottfried Wilhelm Leibniz explaining them, but concealing his words in code due to his suspicion. He wrote:[4]

I cannot proceed with the explanations of the fluxions now, I have preferred to conceal it thus: 6accdæ13eff7i3l9n4o4qrr4s8t12vx.

The gibberish string was in fact a hash code (by denoting the frequency of each letter) of the Latin phrase Data æqvatione qvotcvnqve flventes qvantitates involvente, flvxiones invenire: et vice versa, meaning: "Given an equation that consists of any number of flowing quantities, to find the fluxions: and vice versa".[5]

Example

If the fluent $y$ is defined as $y=t^{2}$ (where $t$ is time) the fluxion (derivative) at $t=2$ is:

${\dot {y}}={\frac {\Delta y}{\Delta t}}={\frac {(2+o)^{2}-2^{2}}{(2+o)-2}}={\frac {4+4o+o^{2}-4}{2+o-2}}={\frac {4o+o^{2}}{o}}$

Here $o$ is an infinitely small amount of time.[6] The term $o^{2}$ is therefore a second-order infinitesimal and, according to Newton, can be ignored because of its second-order infinite smallness compared with the first-order infinite smallness of $o$.[7] So the final equation takes the form:

${\dot {y}}={\frac {\Delta y}{\Delta t}}={\frac {4o}{o}}=4$

He justified the use of $o$ as a non-zero quantity by stating that fluxions were a consequence of movement by an object.

Criticism

Bishop George Berkeley, a prominent philosopher of the time, denounced Newton's fluxions in his essay The Analyst, published in 1734.[8] Berkeley refused to believe that they were accurate because of the use of the infinitesimal $o$.
He did not believe it could be ignored and pointed out that if it was zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus.

Towards the end of his life Newton revised his interpretation of $o$ as infinitely small, preferring to define it as approaching zero, using a similar definition to the concept of limit.[9] He believed this put fluxions back on safe ground. By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents, and remains in use today.

See also

• History of calculus
• Newton's notation
• Hyperreal number: A modern formalization of the reals that includes infinity and infinitesimals
• Nonstandard analysis

References

1. Newton, Sir Isaac (1736). The Method of Fluxions and Infinite Series: With Its Application to the Geometry of Curve-lines. Henry Woodfall; and sold by John Nourse. Retrieved 6 March 2017.
2. Weisstein, Eric W. "Fluxion". MathWorld.
3. Fluxion at the Encyclopædia Britannica.
4. Newton, Isaac (2008). The Correspondence of Isaac Newton. Ed. by H. W. Turnbull (Digitally printed pbk. re-issue ed.). Cambridge: Univ. Press. ISBN 9780521737821.
5. Clegg, Brian (2003). A Brief History of Infinity: The Quest to Think the Unthinkable. London: Constable. ISBN 9781841196503.
6. Buckmire, Ron. "History of Mathematics" (PDF). Retrieved 28 January 2017.
7. "Isaac Newton (1642–1727)". www.mhhe.com. Retrieved 6 March 2017.
8. Berkeley, George (1734). The Analyst: a Discourse addressed to an Infidel Mathematician. London. p. 25 – via Wikisource.
9. Kitcher, Philip (March 1973). "Fluxions, Limits, and Infinite Littlenesse. A Study of Newton's Presentation of the Calculus". Isis. 64 (1): 33–49. doi:10.1086/351042. S2CID 121774892.
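The difference quotient in the Example section can be illustrated with a short computation (an illustrative sketch, not drawn from the cited sources): for the fluent $y=t^{2}$ the quotient equals $2t+o$ exactly, so shrinking the increment $o$ drives it toward the fluxion $4$ at $t=2$.

```python
from fractions import Fraction

def difference_quotient(t, o):
    """Newton's quotient ((t + o)^2 - t^2) / o for the fluent y = t^2."""
    return ((t + o) ** 2 - t ** 2) / o  # equals 2*t + o exactly

# As o is taken smaller and smaller, the quotient at t = 2 tends to 4.
for k in range(1, 6):
    o = Fraction(1, 10 ** k)
    print(o, difference_quotient(Fraction(2), o))  # prints o followed by 4 + o
```

Discarding the vanishing term $o$, as Newton did, leaves the fluxion $4$, which is the modern limit as $o$ approaches zero.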
SAT Practice Test # 5
Math Test - Calculator

$$-2 x+3 y=6$$
In the $x y$-plane, the graph of which of the following equations is perpendicular to the graph of the equation above?
$$\begin{array}{l}{\text { A) } 3 x+2 y=6} \\ {\text { B) } 3 x+4 y=6} \\ {\text { C) } 2 x+4 y=6} \\ {\text { D) } 2 x+6 y=3}\end{array}$$

According to the line graph above, between which two consecutive years was there the greatest change in the number of $3-\mathrm{D}$ movies released?
$$\begin{array}{l}{\text { A) } 2003-2004} \\ {\text { B) } 2008-2009} \\ {\text { C) } 2009-2010} \\ {\text { D) } 2010-2011}\end{array}$$

Some values of the linear function $f$ are shown in the table above. Which of the following defines $f ?$
$$\begin{array}{l}{\text { A) } f(x)=2 x+3} \\ {\text { B) } f(x)=3 x+2} \\ {\text { C) } f(x)=4 x+1} \\ {\text { D) } f(x)=5 x}\end{array}$$

To make a bakery's signature chocolate muffins, a baker needs 2.5 ounces of chocolate for each muffin. How many pounds of chocolate are needed to make 48 signature chocolate muffins? $(1$ pound $=16$ ounces $)$
$$\begin{array}{cc}{\text { A) }} & {7.5} \\ {\text { B) }} & {10} \\ {\text { C) }} & {50.5} \\ {\text { D) }} & {120}\end{array}$$

If $3(c+d)=5,$ what is the value of $c+d ?$
$$\begin{array}{l}{\text { A) } \frac{3}{5}} \\ {\text { B) } \frac{5}{3}} \\ {\text { C) } 3} \\ {\text { D) } 5}\end{array}$$

The weight of an object on Venus is approximately $\frac{9}{10}$ of its weight on Earth. The weight of an object on Jupiter is approximately $\frac{23}{10}$ of its weight on Earth. If an object weighs 100 pounds on Earth, approximately how many more pounds does it weigh on Jupiter than it weighs on Venus?
$$\begin{array}{l}{\text { A) } 90} \\ {\text { B) } 111} \\ {\text { C) } 140} \\ {\text { D) } 230}\end{array}$$

An online bookstore sells novels and magazines. Each novel sells for $\$ 4,$ and each magazine sells for $\$ 1 .$ If Sadie purchased a total of 11 novels and magazines that have a combined selling price of $\$ 20,$ how many novels did she purchase?
$$\begin{array}{l}{\text { A) } 2} \\ {\text { B) } 3} \\ {\text { C) } 4} \\ {\text { D) } 5}\end{array}$$

The Downtown Business Association (DBA) in a certain city plans to increase its membership by a total of $n$ businesses per year. There were $b$ businesses in the DBA at the beginning of this year. Which function best models the total number of businesses, $y,$ the DBA plans to have as members $x$ years from now?
$$\begin{array}{l}{\text { A) } y=n x+b} \\ {\text { B) } y=n x-b} \\ {\text { C) } y=b(n)^{x}} \\ {\text { D) } y=n(b)^{x}}\end{array}$$

Which of the following is an equivalent form of $(1.5 x-2.4)^{2}-\left(5.2 x^{2}-6.4\right) ?$
$$\begin{array}{l}{\text { A) }-2.2 x^{2}+1.6} \\ {\text { B) }-2.2 x^{2}+11.2} \\ {\text { C) }-2.95 x^{2}-7.2 x+12.16} \\ {\text { D) }-2.95 x^{2}-7.2 x+0.64}\end{array}$$

In the 1908 Olympic Games, the Olympic marathon was lengthened from 40 kilometers to approximately 42 kilometers. Of the following, which is closest to the increase in the distance of the Olympic marathon, in miles? (1 mile is approximately 1.6 kilometers.)
$$\begin{array}{ll}{\text { A) }} & {1.00} \\ {\text { B) }} & {1.25} \\ {\text { C) }} & {1.50} \\ {\text { D) }} & {1.75}\end{array}$$

The density $d$ of an object is found by dividing the mass $m$ of the object by its volume $V .$ Which of the following equations gives the mass $m$ in terms of $d$ and $V ?$
$$\begin{array}{l}{\text { A) } m=d V} \\ {\text { B) } m=\frac{d}{V}} \\ {\text { C) } m=\frac{V}{d}} \\ {\text { D) } m=V+d}\end{array}$$

$$\begin{array}{c}{\frac{1}{2} y=4} \\ {x-\frac{1}{2} y=2}\end{array}$$
The system of equations above has solution $(x, y) .$ What is the value of $x ?$
$$\begin{array}{c}{\text { A) } 3} \\ {\text { B) } \frac{7}{2}} \\ {\text { C) } 4} \\ {\text { D) } 6}\end{array}$$

$$\begin{array}{c}{y \leq 3 x+1} \\ {x-y>1}\end{array}$$
Which of the following ordered pairs $(x, y)$ satisfies the system of inequalities above?
$$\begin{array}{l}{\text { A) }(-2,-1)} \\ {\text { B) }(-1,3)} \\ {\text { C) }(1,5)} \\ {\text { D) }(2,-1)}\end{array}$$

In a survey, 607 general surgeons and orthopedic surgeons indicated their major professional activity. The results are summarized in the table above. If one of the surgeons is selected at random, which of the following is closest to the probability that the selected surgeon is an orthopedic surgeon whose indicated professional activity is research?
$$\begin{array}{l}{\text { A) } 0.122} \\ {\text { B) } 0.196} \\ {\text { C) } 0.318} \\ {\text { D) } 0.379}\end{array}$$

A polling agency recently surveyed $1,000$ adults who were selected at random from a large city and asked each of the adults, "Are you satisfied with the quality of air in the city?" Of those surveyed, 78 percent responded that they were satisfied with the quality of air in the city. Based on the results of the survey, which of the following statements must be true? $$\begin{array}{l}{\text { I.
Of all adults in the city, } 78 \text { percent are }} \\ {\text { satisfied with the quality of air in the city. }} \\ {\text { II. If another } 1,000 \text { adults selected at random }} \\ {\text { from the city were surveyed, } 78 \text { percent of }} \\ {\text { them would report they are satisfied with }} \\ {\text { the quality of air in the city. }}\\{\text { III. If } 1,000 \text { adults selected at random from a }} \\ {\text { different city were surveyed, } 78 \text { percent of }} \\ {\text { them would report they are satisfied with }} \\ {\text { the quality of air in the city. }}\end{array}$$
$$\begin{array}{l}{\text { A) } \text { None }} \\ {\text { B) II only }} \\ {\text { C) I and II only }} \\ {\text { D) I and III only }}\end{array}$$

According to the information in the table, what is the approximate age of an American elm tree with a diameter of 12 inches?
$$\begin{array}{l}{\text { A) } 24 \text { years }} \\ {\text { B) } 36 \text { years }} \\ {\text { C) } 40 \text { years }} \\ {\text { D) } 48 \text { years }}\end{array}$$

The scatterplot above gives the tree diameter plotted against age for 26 trees of a single species. The growth factor of this species is closest to that of which of the following species of tree?
$$\begin{array}{l}{\text { A) Red maple }} \\ {\text { B) Cottonwood }} \\ {\text { C) White birch }} \\ {\text { D) Shagbark hickory }}\end{array}$$

If a white birch tree and a pin oak tree each now have a diameter of 1 foot, which of the following will be closest to the difference, in inches, of their diameters 10 years from now?
(1 foot $=12$ inches)
$$\begin{array}{ll}{\text { A) }} & {1.0} \\ {\text { B) }} & {1.2} \\ {\text { C) }} & {1.3} \\ {\text { D) }} & {1.4}\end{array}$$

In $\triangle A B C$ above, what is the length of $\overline{A D} ?$
$$\begin{array}{l}{\text { A) } 4} \\ {\text { B) } 6} \\ {\text { C) } 6 \sqrt{2}} \\ {\text { D) } 6 \sqrt{3}}\end{array}$$

The figure on the left above shows a wheel with a mark on its rim. The wheel is rolling on the ground at a constant rate along a level straight path from a starting point to an ending point. The graph of $y=d(t)$ on the right could represent which of the following as a function of time from when the wheel began to roll?
$$\begin{array}{l}{\text { A) The speed at which the wheel is rolling }} \\ {\text { B) The distance of the wheel from its starting point }} \\ {\text { C) The distance of the mark on the rim from the }} \\ {\text { center of the wheel }} \\ {\text { D) The distance of the mark on the rim from the }} \\ {\text { ground }}\end{array}$$

$$\frac{a-b}{a}=c$$
In the equation above, if $a$ is negative and $b$ is positive, which of the following must be true?
$$\begin{array}{l}{\text { A) } c>1} \\ {\text { B) } c=1} \\ {\text { C) } c=-1} \\ {\text { D) } c<-1}\end{array}$$

In State $\mathrm{X},$ Mr. Camp's eighth-grade class consisting of 26 students was surveyed and 34.6 percent of the students reported that they had at least two siblings. The average eighth-grade class size in the state is 26. If the students in Mr. Camp's class are representative of students in the state's eighth-grade classes and there are $1,800$ eighth-grade classes in the state, which of the following best estimates the number of eighth-grade students in the state who have fewer than two siblings?
$$\begin{array}{l}{\text { A) } 16,200} \\ {\text { B) } 23,400} \\ {\text { C) } 30,600} \\ {\text { D) } 46,800}\end{array}$$

The relationship between the monthly rental price $r,$ in dollars, and the property's purchase price $p,$ in thousands of dollars, can be represented by a linear function. Which of the following functions represents the relationship?
$$\begin{aligned} \text { A) } & r(p)=2.5 p-870 \\ \text { B) } & r(p)=5 p+165 \\ \text { C) } & r(p)=6.5 p+440 \\ \text { D) } & r(p)=7.5 p-10 \end{aligned}$$

Townsend Realty purchased the Glenview Street property and received a 40$\%$ discount off the original price along with an additional 20$\%$ off the discounted price for purchasing the property in cash. Which of the following best approximates the original price, in dollars, of the Glenview Street property?
$$\begin{array}{l}{\text { A) } \$ 350,000} \\ {\text { B) } \$ 291,700} \\ {\text { C) } \$ 233,300} \\ {\text { D) } \$ 175,000}\end{array}$$
\begin{document}

\title[Finite sums and continued fractions]{Transformation formulas of finite sums\\into continued fractions}
\author{Daniel Duverney, Takeshi Kurosawa and Iekata Shiokawa}
\address{\"{y} }
\email{\"{y} }
\date{June 17, 2020}
\subjclass{ }
\keywords{}

\begin{abstract}
We state and prove three general formulas allowing one to transform formal finite sums into formal continued fractions and apply them to generalize certain expansions in continued fractions given by Hone and Varona.
\end{abstract}

\maketitle

\section{Introduction}

Let $n$ be a positive integer, and let $x_{1},$ $x_{2},$ $\ldots,$ $x_{n},$ $\ldots,$ $y_{1},$ $y_{2},$ $\ldots,$ $y_{n},$ $\ldots$ be indeterminates. We define
\begin{equation}
\sigma_{n}=\sum_{k=1}^{n}\frac{y_{k}}{x_{k}},\quad\quad\tau_{n}=\sum_{k=1}^{n}\left( -1\right) ^{k-1}\frac{y_{k}}{x_{k}}. \label{T1}
\end{equation}
Then, $\sigma_{n}$ and $\tau_{n}$ are rational functions of the indeterminates $x_{1},$ $x_{2},$ $\ldots,$ $x_{n},$ $y_{1},$ $y_{2},$ $\ldots,$ $y_{n}$ with coefficients in the field $\mathbb{Q}.$ The purpose of this paper is to give three formulas allowing one to transform $\sigma_{n}$ and $\tau_{n}$ into continued fractions of the form
\[
R_{m}=\frac{a_{1}}{b_{1}}
\genfrac{}{}{0pt}{}{{}}{+}
\frac{a_{2}}{b_{2}}
\genfrac{}{}{0pt}{}{{}}{+\cdots}
\genfrac{}{}{0pt}{}{{}}{+}
\dfrac{a_{m}}{b_{m}},
\]
where $m$ is an increasing function of $n$ and $a_{1},$ $a_{2},$ $\ldots,$ $a_{m},$ $b_{1},$ $b_{2},$ $\ldots,$ $b_{m}$ are rational functions of $x_{1},$ $x_{2},$ $\ldots,$ $x_{n},$ $y_{1},$ $y_{2},$ $\ldots,$ $y_{n}$ with coefficients in $\mathbb{Q}.$ These formulas are given by Theorems \ref{ThEuler}, \ref{ThHone}, and \ref{ThVarona} below.
For every sequence $\left( u_{k}\right) _{k\geq1}$ of indeterminates, we define $u_{0}=1$ and \begin{equation} \theta u_{k}=\frac{u_{k+1}}{u_{k}},\quad\theta^{2}u_{k}=\theta\left( \theta u_{k}\right) =\frac{u_{k+2}u_{k}}{u_{k+1}^{2}}\quad\left( k\geq0\right) .\label{fn} \end{equation} By (\ref{fn}) we see at once that \begin{equation} u_{k}.\theta u_{k}=u_{k+1},\quad\theta u_{k}.\theta^{2}u_{k}=\theta u_{k+1}\quad\left( k\geq0\right) .\label{Rule} \end{equation} \begin{theorem} \label{ThEuler}For every positive integer $n,$ \begin{equation} \sum_{k=1}^{n}\left( -1\right) ^{k-1}\frac{y_{k}}{x_{k}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}}, \label{CFEuler2} \end{equation} where $a_{1}=y_{1},$ $b_{1}=x_{1},$ and \begin{equation} a_{k}=\theta y_{k-1}\theta x_{k-2},\quad b_{k}=\theta x_{k-1}-\theta y_{k-1}\quad\left( 2\leq k\leq n\right) . \label{CFEuler1} \end{equation} \end{theorem} Theorem \ref{ThEuler} is a mere rewording of Euler's well-known formula \cite{Euler} \[ \frac{1}{A}-\frac{1}{B}+\frac{1}{C}-\frac{1}{D}+\cdots=\frac{1}{A} \genfrac{}{}{0pt}{}{{}}{+} \frac{A^{2}}{B-A} \genfrac{}{}{0pt}{}{{}}{+} \frac{B^{2}}{C-B} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{C^{2}}{D-C} \genfrac{}{}{0pt}{}{{}}{+\cdots} . \] Hence Theorem \ref{ThEuler} is far from being new. However, it seems interesting to state and prove it by using the operator $\theta.$ \begin{theorem} \label{ThHone}For every integer $n\geq1,$ \begin{equation} \sum_{k=1}^{n}\frac{y_{k}}{x_{k}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{2n}}{b_{2n}}, \label{Hone3} \end{equation} where $a_{1}=y_{1},$ $b_{1}=x_{1}-y_{1},$ and for $k\geq1$ \begin{align} a_{2k} & =\theta y_{k-1},\quad a_{2k+1}=\theta^{2}y_{k-1},\label{Hone41}\\ \quad b_{2k} & =x_{k-1},\quad b_{2k+1}=\frac{\theta^{2}x_{k-1}-\theta ^{2}y_{k-1}}{x_{k-1}}. 
\label{Hone42} \end{align} \end{theorem} Theorem \ref{ThHone} was given by Hone in \cite{Hone} in the special case where $y_{k}=1$ for every $k\geq1$ and $x_{k}$ is a sequence of positive integers such that $x_{1}\geq2$ and $x_{k}$ divides $\theta^{2}x_{k}-1$ for every $k\geq1.$ In this case, (\ref{Hone3}) leads to the expansion of the infinite series $\sum_{k=1}^{+\infty}x_{k}^{-1}$ in regular continued fraction. \begin{theorem} \label{ThVarona}For every integer $n\geq2,$ \begin{equation} \sum_{k=1}^{n}\left( -1\right) ^{k-1}\frac{y_{k}}{x_{k}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-4}}{b_{3n-4}}, \label{Varona4} \end{equation} where \begin{align} a_{1} & =y_{1}^{2},\quad a_{2}=x_{1}y_{2},\quad a_{3}=\theta y_{2},\quad a_{4}=x_{1},\label{Varona5}\\ b_{1} & =x_{1}y_{1},\quad b_{2}=\theta x_{1}-\theta y_{1},\quad b_{3}=\theta^{2}x_{1}-x_{1},\quad b_{4}=1, \label{Varona6} \end{align} and for $k\geq2$ \begin{align} a_{3k-1} & =y_{k+1},\quad a_{3k}=y_{k}\theta^{2}y_{k},\quad a_{3k+1}=1,\label{Varona7}\\ \quad b_{3k-1} & =x_{k}y_{k}-y_{k+1},\quad b_{3k}=\frac{\theta^{2}x_{k}-\theta^{2}y_{k}}{x_{k}}-1,\quad b_{3k+1}=1. \label{Varona8} \end{align} \end{theorem} Theorem \ref{ThVarona} was first proved by Varona \cite{Varona} in the special case where $y_{k}=1$ for every $k\geq1$ and $x_{k}$ is a sequence of positive integers satisfying the same conditions as in Theorem \ref{ThHone}. In this case (\ref{Varona4}) leads to the expansion of the infinite series $\sum_{k=1}^{+\infty}\left( -1\right) ^{k}x_{k}^{-1}$ in regular continued fraction. In Section \ref{sec:notaion}, we recall some basic facts on continued fractions and prove transformation formulas of continued fractions into finite sums. Theorems \ref{ThEuler} and \ref{ThHone} will be proved in Section \ref{sec:proofEandH} and Theorem \ref{ThVarona} in Section \ref{sec:proofV}.
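Before turning to the proofs, the three transformation formulas are easy to check numerically. The following Python sketch (an aside, not part of the paper) verifies the first two theorems in exact rational arithmetic; the test values $x_k=10^k$ and $y_k=k$ are an arbitrary choice, with $x_0=y_0=1$ as in the convention $u_0=1$.

```python
from fractions import Fraction as Fr

def cf(a, b):
    """Evaluate a_1/(b_1 + a_2/(b_2 + ...)) for equal-length lists a, b."""
    v = Fr(0)
    for ai, bi in zip(reversed(a), reversed(b)):
        v = Fr(ai) / (Fr(bi) + v)
    return v

# Arbitrary test values: x_k = 10^k, y_k = k, with x_0 = y_0 = 1.
N = 8
x = [Fr(1)] + [Fr(10)**k for k in range(1, N + 3)]
y = [Fr(1)] + [Fr(k) for k in range(1, N + 3)]
th = lambda u, k: u[k + 1] / u[k]                  # theta u_k
th2 = lambda u, k: u[k + 2] * u[k] / u[k + 1]**2   # theta^2 u_k

for n in range(1, N + 1):
    # First theorem (Euler): alternating sum = continued fraction with n terms
    a = [y[1]] + [th(y, k - 1) * th(x, k - 2) for k in range(2, n + 1)]
    b = [x[1]] + [th(x, k - 1) - th(y, k - 1) for k in range(2, n + 1)]
    assert cf(a, b) == sum((-1)**(k - 1) * y[k] / x[k] for k in range(1, n + 1))

    # Second theorem (Hone-type): plain sum = continued fraction with 2n terms
    a, b = [y[1]], [x[1] - y[1]]
    for k in range(1, n + 1):
        a += [th(y, k - 1), th2(y, k - 1)]
        b += [x[k - 1], (th2(x, k - 1) - th2(y, k - 1)) / x[k - 1]]
    assert cf(a[:2 * n], b[:2 * n]) == sum(y[k] / x[k] for k in range(1, n + 1))
print("first two theorems verified for n = 1..%d" % N)
```

Both identities hold exactly for every truncation length tested, as the theorems predict.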
Finally, in Sections \ref{sec:Hone} and \ref{sec:Varona} we will give examples of applications of Theorems \ref{ThHone} and \ref{ThVarona} by generalizing Hone and Varona expansions. Indeed, we will define the sequence $(x_{n})$ by the recurrence relation \[ x_{n+2}x_{n}=x_{n+1}^{2}(F_{n}(x_{n},x_{n+1})+1)\qquad(n\geq0) \] with the initial conditions $x_{0}=1$ and $x_{1}\in{\mathbb{Z}}_{>0}$, where $F_{n}(X,Y)$ are nonzero polynomials with positive integer coefficients such that $F_{n}(0,0)=0$ for all $n\geq0$. It turns out that $(x_{n})$ is a sequence of positive integers such that $x_{n}~|~x_{n+1}$ and $x_{n}~|~F_{n}(x_{n},x_{n+1})$ for every $n\geq0$. For any positive integer $h$, we define the series \[ S=\sum_{n=0}^{\infty}\frac{h^{n}}{x_{n+1}}. \] Applying Theorem \ref{ThHone} with $y_{n}=h^{n},$ dividing the first partial numerator by $h$ (since $\sum_{k=1}^{\infty}h^{k}/x_{k}=hS$), and letting $n\rightarrow\infty$, we have \begin{equation} S=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}} \genfrac{}{}{0pt}{}{{}}{+\cdots} , \label{CFS} \end{equation} where $a_{1}=1,$ $b_{1}=x_{1}-h,$ and for $k\geq1$ \begin{align*} a_{2k} & =h,\quad a_{2k+1}=1,\\ \quad b_{2k} & =x_{k-1},\quad b_{2k+1}=\frac{\theta^{2}x_{k-1}-1}{x_{k-1}}=\frac{F_{k-1}\left( x_{k-1},x_{k}\right) }{x_{k-1}}. \end{align*} All these partial numerators and denominators are rational integers.
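As a sanity check on this construction (an aside, not from the paper), the sketch below builds such a sequence for the sample choice $F_n(X,Y)=X+nY$, $x_1=3$, $h=2$, and verifies the integrality and divisibility claims together with the fact that the even-order convergents of the continued fraction for $S$ equal its partial sums.

```python
from fractions import Fraction as Fr

def cf(a, b):
    """Evaluate a_1/(b_1 + a_2/(b_2 + ...))."""
    v = Fr(0)
    for ai, bi in zip(reversed(a), reversed(b)):
        v = Fr(ai) / (Fr(bi) + v)
    return v

F = lambda n, X, Y: X + n * Y   # sample choice of F_n (an assumption)
h, M = 2, 6
x = [1, 3]                      # x_0 = 1, x_1 = 3 > h
for n in range(2 * M):
    x.append(x[-1]**2 * (F(n, x[-2], x[-1]) + 1) // x[-2])

# integrality and divisibility claims
for n in range(M):
    assert x[n + 2] * x[n] == x[n + 1]**2 * (F(n, x[n], x[n + 1]) + 1)
    assert x[n + 1] % x[n] == 0 and F(n, x[n], x[n + 1]) % x[n] == 0

# the 2m-th convergent of the continued fraction for S equals
# the m-th partial sum of S
a, b = [1], [x[1] - h]
for k in range(1, M + 1):
    a += [h, 1]
    b += [x[k - 1], F(k - 1, x[k - 1], x[k]) // x[k - 1]]
for m in range(1, M + 1):
    assert cf(a[:2 * m], b[:2 * m]) == sum(Fr(h)**n / x[n + 1] for n in range(m))
print("divisibility and convergent checks pass for m = 1..%d" % M)
```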
Similarly, using Theorem \ref{ThVarona}, we get the continued fraction expansion of the alternating series \begin{equation} T=\sum_{n=0}^{\infty}\left( -1\right) ^{n}\frac{h^{n}}{x_{n+1}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \label{CFT} \end{equation} where \begin{align*} a_{1} & =1,\quad a_{2}=hx_{1},\quad a_{3}=h,\quad a_{4}=x_{1},\\ b_{1} & =x_{1},\quad b_{2}=\frac{x_{2}}{x_{1}}-h,\quad b_{3}=F_{1}\left( x_{1},x_{2}\right) +1-x_{1},\quad b_{4}=1, \end{align*} and for $k\geq2$ \begin{align*} a_{3k-1} & =h^{k},\quad a_{3k}=h^{k-1},\quad a_{3k+1}=1,\\ \quad b_{3k-1} & =h^{k-1}\left( x_{k}-h\right) ,\quad b_{3k}=\frac{F_{k}\left( x_{k},x_{k+1}\right) }{x_{k}}-1,\quad b_{3k+1}=1. \end{align*} The simplest of all sequences $(x_{n})$ satisfies the recurrence relation \[ x_{n+2}x_{n}=x_{n+1}^{2}(x_{n}+1)\qquad(n\geq0). \] In the case $x_{0}=x_{1}=1,$ $(x_{n})$ is sequence A001697 of the On-line Encyclopedia of Integer Sequences, which also satisfies \[ x_{n+1}=x_{n}\left( \sum_{k=0}^{n}x_{k}\right) \quad\left( n\geq0\right) . \] Taking $h=1$ in (\ref{CFS}) and (\ref{CFT}), we find the remarkable formulas: \begin{equation} \left[ 1;1,x_{1},1,x_{2},1,x_{3},1,x_{4},\ldots,1,x_{k},\ldots\right] =\sum_{n=1}^{\infty}\frac{1}{x_{n}}, \label{Nouv1} \end{equation} \begin{equation} \left[ 0;1,1,1,x_{1},x_{2},x_{3},x_{4},x_{5},\ldots,x_{n},\ldots\right] =\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{x_{n}}. \label{Nouv2} \end{equation} See Examples \ref{ex:1} and \ref{ex:3} below.
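The two closing formulas can be tested directly (an aside, not from the paper). The sketch below builds the sequence from $x_{n+1}=x_n(x_0+\cdots+x_n)$, confirms that it satisfies the quadratic recurrence, checks the truncations of the first regular continued fraction exactly, and the second one to high precision.

```python
from fractions import Fraction as Fr

def regular_cf(c):
    """Evaluate the regular continued fraction [c0; c1, c2, ...]."""
    v = Fr(c[-1])
    for q in reversed(c[:-1]):
        v = Fr(q) + 1 / v
    return v

# A001697-type sequence: x_0 = x_1 = 1 and x_{n+1} = x_n * (x_0 + ... + x_n)
x, s = [1, 1], 2
for _ in range(9):
    x.append(x[-1] * s)
    s += x[-1]
# same sequence via the quadratic recurrence x_{n+2} x_n = x_{n+1}^2 (x_n + 1)
for n in range(len(x) - 2):
    assert x[n + 2] * x[n] == x[n + 1]**2 * (x[n] + 1)

# first formula: [1; 1, x_1, 1, x_2, ..., 1, x_k] equals the partial sum to x_{k+1}
for k in range(1, 8):
    c = [1]
    for j in range(1, k + 1):
        c += [1, x[j]]
    assert regular_cf(c) == sum(Fr(1, x[n]) for n in range(1, k + 2))

# second formula: truncations of [0; 1, 1, 1, x_1, x_2, ...]
# approach the alternating sum
c = [0, 1, 1, 1] + x[1:9]
diff = regular_cf(c) - sum(Fr(-1)**(n - 1) / x[n] for n in range(1, 11))
assert abs(float(diff)) < 1e-12
print("first formula exact for k = 1..7; second matches to 1e-12")
```

For the first formula the finite convergents agree with the partial sums exactly, while for the second only the limits coincide, which is why the tolerance comparison is used.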
\section{Notations and lemmas} \label{sec:notaion} For every positive integer $n,$ we define polynomials $P_{n}$ and $Q_{n}$ of the indeterminates $a_{1},$ $a_{2},$ $\ldots,$ $b_{1},$ $b_{2},$ $\ldots$ by $P_{0}=0,$ $Q_{0}=1$ and \begin{equation} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}}=\frac{P_{n}}{Q_{n}}\quad\left( n\geq1\right) . \label{CFEuler3} \end{equation} Then, we have for every $k\geq0$ \begin{equation} \left\{ \begin{array} [c]{c} P_{k+2}=b_{k+2}P_{k+1}+a_{k+2}P_{k}\\ Q_{k+2}=b_{k+2}Q_{k+1}+a_{k+2}Q_{k} \end{array} \right. \label{Rec} \end{equation} and also \begin{equation} P_{k+1}Q_{k}-P_{k}Q_{k+1}=\left( -1\right) ^{k}a_{1}a_{2}\cdots a_{k+1}\quad\left( k\geq0\right) . \label{Delta} \end{equation} From (\ref{Delta}) one obtains immediately a well-known transformation formula of continued fractions into a finite sum: for every $n\geq1,$ \begin{equation} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}}=\sum_{k=0}^{n-1}\left( -1\right) ^{k}\frac{a_{1}a_{2}\cdots a_{k+1}}{Q_{k+1}Q_{k}}. \label{SumEuler} \end{equation} Two other transformation formulas of continued fractions into finite sums are given by the following lemmas. \begin{lemma} \label{LemTransfHone}For every integer $n\geq1,$ \begin{equation} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{2n}}{b_{2n}}=\sum_{k=0}^{n-1}\frac{a_{1}a_{2}\cdots a_{2k+1}b_{2k+2}}{Q_{2k}Q_{2k+2}}.
\label{SumHone} \end{equation} \end{lemma} \begin{proof} Replacing $n$ by $2n$ in (\ref{SumEuler}), we obtain \begin{align*} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{2n}}{b_{2n}} & =\sum_{m=0}^{2n-1}\left( -1\right) ^{m} \frac{a_{1}a_{2}\cdots a_{m+1}}{Q_{m+1}Q_{m}}\\ & =\sum_{k=0}^{n-1}\left( \frac{a_{1}a_{2}\cdots a_{2k+1}}{Q_{2k+1}Q_{2k} }-\frac{a_{1}a_{2}\cdots a_{2k+2}}{Q_{2k+2}Q_{2k+1}}\right) \\ & =\sum_{k=0}^{n-1}a_{1}a_{2}\cdots a_{2k+1}\frac{Q_{2k+2}-a_{2k+2}Q_{2k} }{Q_{2k+2}Q_{2k+1}Q_{2k}}, \end{align*} which yields (\ref{SumHone}) since $Q_{2k+2}=b_{2k+2}Q_{2k+1}+a_{2k+2}Q_{2k}$ for every $k\geq0.$ \end{proof} \begin{lemma} \label{LemTransfVarona}For every integer $n\geq1,$ \begin{align} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-1}}{b_{3n-1}} & =\frac{a_{1}}{Q_{1}}-\frac{a_{1}a_{2}} {Q_{1}Q_{2}}\label{SumVarona}\\ & \qquad+\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}a_{1}a_{2}\cdots a_{3k}\frac{b_{3k+1}b_{3k+2}+a_{3k+2}}{Q_{3k-1}Q_{3k+2}}.\nonumber \end{align} \end{lemma} \begin{proof} We know by (\ref{SumEuler}) that \begin{multline*} \frac{P_{3n-1}}{Q_{3n-1}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-1}}{b_{3n-1}}=\frac{a_{1}}{Q_{1}}-\frac{a_{1}a_{2}}{Q_{1}Q_{2} }+\sum_{m=2}^{3n-2}\left( -1\right) ^{m}\frac{a_{1}a_{2}\cdots a_{m+1} }{Q_{m}Q_{m+1}}\\ =\frac{a_{1}}{Q_{1}}-\frac{a_{1}a_{2}}{Q_{1}Q_{2}}+\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}a_{1}a_{2}\cdots a_{3k}\left( \frac{1}{Q_{3k-1}Q_{3k}} -\frac{a_{3k+1}}{Q_{3k}Q_{3k+1}}+\frac{a_{3k+1}a_{3k+2}}{Q_{3k+1}Q_{3k+2} }\right) . 
\end{multline*} Since $Q_{3k+1}-a_{3k+1}Q_{3k-1}=b_{3k+1}Q_{3k},$ we obtain \[ \frac{P_{3n-1}}{Q_{3n-1}}-\frac{a_{1}}{Q_{1}}+\frac{a_{1}a_{2}}{Q_{1}Q_{2}}=\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}a_{1}a_{2}\cdots a_{3k}\left( \frac{b_{3k+1}}{Q_{3k-1}Q_{3k+1}}+\frac{a_{3k+1}a_{3k+2}}{Q_{3k+1}Q_{3k+2}}\right) \] \begin{align*} & =\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}a_{1}a_{2}\cdots a_{3k} \frac{b_{3k+1}Q_{3k+2}+a_{3k+1}a_{3k+2}Q_{3k-1}}{Q_{3k-1}Q_{3k+1}Q_{3k+2}}\\ & =\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}a_{1}a_{2}\cdots a_{3k} \frac{b_{3k+1}b_{3k+2}Q_{3k+1}+a_{3k+2}\left( b_{3k+1}Q_{3k}+a_{3k+1} Q_{3k-1}\right) }{Q_{3k-1}Q_{3k+1}Q_{3k+2}}, \end{align*} which proves Lemma \ref{LemTransfVarona}. \end{proof} \section{Proofs of Theorems \ref{ThEuler} and \ref{ThHone}} \label{sec:proofEandH} The two proofs are similar, and consist in transforming the continued fraction \[ \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{m}}{b_{m}} \] by (\ref{SumEuler}) and (\ref{SumHone}), with $m=n$ and $m=2n$ respectively. \noindent\textit{Proof of Theorem \ref{ThEuler}}. With the notations of Section \ref{sec:notaion}, we first prove by induction that $Q_{k}=x_{k}$ $(k\geq0).$ Clearly $Q_{0}=1=x_{0}$ and $Q_{1}=x_{1}.$ Assuming that $Q_{k}=x_{k}$ and $Q_{k+1}=x_{k+1},$ we obtain by (\ref{CFEuler1}) and (\ref{Rec}) \begin{align*} Q_{k+2} & =\left( \theta x_{k+1}-\theta y_{k+1}\right) x_{k+1}+\left( \theta y_{k+1}\theta x_{k}\right) x_{k}\\ & =x_{k+2}-\left( \theta y_{k+1}\right) x_{k+1}+\left( \theta y_{k+1}\right) x_{k+1}=x_{k+2}, \end{align*} which proves that $Q_{k}=x_{k}$ $(k\geq0).$ Here $P_{1}=y_{1},$ $Q_{1}=x_{1},$ and \[ \prod_{j=1}^{k+1}a_{j}=a_{1}\prod_{j=2}^{k+1}\theta y_{j-1}\theta x_{j-2}=y_{1}\frac{y_{k+1}x_{k}}{y_{1}x_{0}}=x_{k}y_{k+1}.
\] Since $Q_{k}=x_{k}$ $(k\geq0),$ we obtain by (\ref{SumEuler}) \[ \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}}=\sum_{k=0}^{n-1}\left( -1\right) ^{k}\frac{y_{k+1}}{x_{k+1}}, \] which proves Theorem \ref{ThEuler}. \noindent\textit{Proof of Theorem \ref{ThHone}}. We prove by induction that \begin{equation} Q_{2k}=x_{k},\quad Q_{2k+1}=\theta x_{k}-\theta y_{k}\quad\left( k\geq0\right) . \label{Hone5} \end{equation} For $k=0,$ we have $Q_{0}=1=x_{0}$ and $Q_{1}=b_{1}=x_{1}-y_{1}=\theta x_{0}-\theta y_{0}.$ Now assuming that it is true for some $k\geq0,$ we compute \begin{align*} Q_{2k+2} & =b_{2k+2}Q_{2k+1}+a_{2k+2}Q_{2k}=x_{k}\left( \theta x_{k}-\theta y_{k}\right) +\theta y_{k}x_{k}=x_{k+1},\\ Q_{2k+3} & =b_{2k+3}Q_{2k+2}+a_{2k+3}Q_{2k+1}\\ & =\frac{\theta^{2}x_{k}-\theta^{2}y_{k}}{x_{k}}x_{k+1}+\theta^{2}y_{k}\left( \theta x_{k}-\theta y_{k}\right) =\theta x_{k+1}-\theta y_{k+1} \end{align*} by using (\ref{Rule}). Hence (\ref{Hone5}) is proved by induction. Now we apply Lemma \ref{LemTransfHone}. First we have \[ a_{1}a_{2}\cdots a_{2k+1}=y_{k+1}\quad\left( k\geq0\right) . \] Indeed, this is clearly true for $k=0$ since $a_{1}=y_{1},$ and if it holds for some $k\geq0,$ then \[ a_{1}a_{2}\cdots a_{2k+3}=a_{1}a_{2}\cdots a_{2k+1}a_{2k+2}a_{2k+3}=y_{k+1}\theta y_{k}\theta^{2}y_{k}=y_{k+1}\theta y_{k+1}=y_{k+2}. \] Substituting this into (\ref{SumHone}) yields \[ \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{2n}}{b_{2n}}=\sum_{k=0}^{n-1}\frac{y_{k+1}x_{k}}{x_{k+1}x_{k}}=\sum_{k=1}^{n}\frac{y_{k}}{x_{k}}, \] which proves Theorem \ref{ThHone}.
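The four identities driving these proofs — the recurrences (Rec), the determinant formula (Delta), and the two sum transformations — can be verified mechanically (an aside, not from the paper); the random positive integer partial numerators and denominators below are an arbitrary test choice.

```python
import random
from fractions import Fraction as Fr

random.seed(1)
n = 10
a = [random.randint(1, 9) for _ in range(n)]
b = [random.randint(1, 9) for _ in range(n)]

# convergents P_k / Q_k via the standard recurrences
P, Q = [0, a[0]], [1, b[0]]
for k in range(2, n + 1):
    P.append(b[k - 1] * P[-1] + a[k - 1] * P[-2])
    Q.append(b[k - 1] * Q[-1] + a[k - 1] * Q[-2])

def cf(aa, bb):
    v = Fr(0)
    for ai, bi in zip(reversed(aa), reversed(bb)):
        v = Fr(ai) / (bi + v)
    return v

prod = 1
for k in range(n):
    prod *= a[k]
    # determinant identity, and backward evaluation agreeing with P/Q
    assert P[k + 1] * Q[k] - P[k] * Q[k + 1] == (-1)**k * prod
    assert cf(a[:k + 1], b[:k + 1]) == Fr(P[k + 1], Q[k + 1])

# first transformation (alternating sum) for every length m
for m in range(1, n + 1):
    pr, s = 1, Fr(0)
    for k in range(m):
        pr *= a[k]
        s += Fr((-1)**k * pr, Q[k + 1] * Q[k])
    assert cf(a[:m], b[:m]) == s

# second transformation for every even length 2m
for m in range(1, n // 2 + 1):
    pr, s = a[0], Fr(0)
    for k in range(m):
        if k:
            pr *= a[2 * k - 1] * a[2 * k]
        s += Fr(pr * b[2 * k + 1], Q[2 * k] * Q[2 * k + 2])
    assert cf(a[:2 * m], b[:2 * m]) == s
print("recurrences, determinant identity, and both sum formulas verified")
```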
\section{Proof of Theorem \ref{ThVarona}} \label{sec:proofV} It is simpler to prove first a slightly different result, namely \begin{theorem} \label{ThVarona1}For every integer $n\geq2,$ \[ \sum_{k=1}^{n}\left( -1\right) ^{k-1}\frac{y_{k}}{x_{k}}=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-4}}{b_{3n-4}}, \] where \begin{align*} a_{1} & =y_{1}^{2},\quad a_{2}=x_{1}y_{2},\quad a_{3}=\frac{\theta y_{2} }{x_{1}},\\ b_{1} & =x_{1}y_{1},\quad b_{2}=\theta x_{1}-\theta y_{1},\quad b_{3} =\frac{\theta x_{2}}{x_{2}}-1, \end{align*} and for $k\geq1$ \begin{align*} a_{3k+1} & =1,\quad a_{3k+2}=y_{k+2},\quad a_{3k+3}=y_{k+1}\theta^{2} y_{k+1},\\ \quad b_{3k+1} & =1,\quad b_{3k+2}=x_{k+1}y_{k+1}-y_{k+2},\quad b_{3k+3}=\frac{\theta^{2}x_{k+1}-\theta^{2}y_{k+1}}{x_{k+1}}-1. \end{align*} \end{theorem} \begin{proof} We prove by induction that, for every $k\geq1,$ \begin{equation} \left\{ \begin{array} [c]{l} Q_{3k-1}=y_{1}y_{2}\cdots y_{k}x_{k+1},\\ Q_{3k}=y_{1}y_{2}\cdots y_{k}\left( \theta x_{k+1}-x_{k+1}+\theta y_{k+1}\right) ,\\ Q_{3k+1}=y_{1}y_{2}\cdots y_{k}\left( \theta x_{k+1}+\theta y_{k+1}\right) . \end{array} \right. 
\label{Varona9} \end{equation} We have $Q_{0}=1$ and $Q_{1}=b_{1}=x_{1}y_{1}.$ Therefore \begin{align*} Q_{2} & =b_{2}Q_{1}+a_{2}Q_{0}=\left( \theta x_{1}-\theta y_{1}\right) x_{1}y_{1}+x_{1}y_{2}=x_{2}y_{1},\\ Q_{3} & =b_{3}Q_{2}+a_{3}Q_{1}=\left( \frac{\theta x_{2}}{x_{2}}-1\right) x_{2}y_{1}+\frac{\theta y_{2}}{x_{1}}x_{1}y_{1}=y_{1}\left( \theta x_{2}-x_{2}+\theta y_{2}\right) ,\\ Q_{4} & =b_{4}Q_{3}+a_{4}Q_{2}=Q_{3}+Q_{2}=y_{1}(\theta x_{2}+\theta y_{2}), \end{align*} which proves that (\ref{Varona9}) is true for $k=1.$ Now assuming that it is true for some $k\geq1,$ we compute \begin{align*} Q_{3k+2} & =b_{3k+2}Q_{3k+1}+a_{3k+2}Q_{3k}\\ & =\left( x_{k+1}y_{k+1}-y_{k+2}\right) y_{1}\cdots y_{k}\left( \theta x_{k+1}+\theta y_{k+1}\right) \\ & \qquad\qquad\qquad+y_{k+2}y_{1}\cdots y_{k}\left( \theta x_{k+1} -x_{k+1}+\theta y_{k+1}\right) \\ & =y_{1}\cdots y_{k}\left( x_{k+1}y_{k+1}\theta x_{k+1}+x_{k+1}y_{k+1}\theta y_{k+1}-x_{k+1}y_{k+2}\right) \\ & =y_{1}\cdots y_{k+1}x_{k+2},\\ Q_{3k+3} & =b_{3k+3}Q_{3k+2}+a_{3k+3}Q_{3k+1}\\ & =\left( \frac{\theta^{2}x_{k+1}-\theta^{2}y_{k+1}}{x_{k+1}}-1\right) y_{1}\cdots y_{k+1}x_{k+2}\\ & \qquad\qquad\qquad+y_{k+1}\theta^{2}y_{k+1}y_{1}\cdots y_{k}\left( \theta x_{k+1}+\theta y_{k+1}\right) \\ & =y_{1}\cdots y_{k+1}\left( \theta^{2}x_{k+1}\theta x_{k+1}-x_{k+2} +\theta^{2}y_{k+1}\theta y_{k+1}\right) \\ & =y_{1}\cdots y_{k+1}\left( \theta x_{k+2}-x_{k+2}+\theta y_{k+2}\right) ,\\ Q_{3k+4} & =b_{3k+4}Q_{3k+3}+a_{3k+4}Q_{3k+2}=Q_{3k+3}+Q_{3k+2}\\ & =y_{1}\cdots y_{k+1}\left( \theta x_{k+2}+\theta y_{k+2}\right) . \end{align*} Hence (\ref{Varona9}) is proved by induction. Now we apply Lemma \ref{LemTransfVarona}. We have \begin{equation} a_{1}a_{2}\cdots a_{3k}=y_{k+2}\left( y_{1}y_{2}\cdots y_{k}\right) ^{2}\quad\left( k\geq1\right) . 
\label{Varona13} \end{equation} Indeed, for $k=1$ \[ a_{1}a_{2}a_{3}=y_{1}^{2}x_{1}y_{2}\frac{\theta y_{2}}{x_{1}}=y_{3}y_{1}^{2}, \] and assuming that (\ref{Varona13}) holds for some $k\geq1,$ \[ a_{1}a_{2}\cdots a_{3k+3}=y_{k+2}\left( y_{1}y_{2}\cdots y_{k}\right) ^{2}y_{k+2}y_{k+1}\theta^{2}y_{k+1}=y_{k+3}\left( y_{1}y_{2}\cdots y_{k+1}\right) ^{2}. \] Using (\ref{Varona13}) in (\ref{SumVarona}), we obtain \begin{align*} & \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-1}}{b_{3n-1}}\\ & =\frac{y_{1}}{x_{1}}-\frac{y_{1}x_{1}y_{2}}{x_{1}x_{2}y_{1}}+\sum _{k=1}^{n-1}\left( -1\right) ^{k-1}y_{k+2}\left( y_{1}\cdots y_{k}\right) ^{2}\frac{x_{k+1}y_{k+1}-y_{k+2}+y_{k+2}}{y_{1}\cdots y_{k}x_{k+1}y_{1}\cdots y_{k+1}x_{k+2}}\\ & =\frac{y_{1}}{x_{1}}-\frac{y_{2}}{x_{2}}+\sum_{k=1}^{n-1}\left( -1\right) ^{k-1}\frac{y_{k+2}}{x_{k+2}}=\sum_{k=1}^{n+1}\left( -1\right) ^{k-1} \frac{y_{k}}{x_{k}}, \end{align*} which proves Theorem \ref{ThVarona1}. \end{proof} Now, with the notations of Theorem \ref{ThVarona1}, we simply observe that \begin{align*} \sum_{k=1}^{n}\left( -1\right) ^{k-1}\frac{y_{k}}{x_{k}} & =\frac{a_{1} }{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{3}}{b_{3}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{4}}{b_{4}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{5}}{b_{5}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-4}}{b_{3n-4}}\\ & =\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+} \frac{x_{1}a_{3}}{x_{1}b_{3}} \genfrac{}{}{0pt}{}{{}}{+} \frac{x_{1}a_{4}}{b_{4}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{5}}{b_{5}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-4}}{b_{3n-4}}, \end{align*} which proves Theorem \ref{ThVarona}. 
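As with the first two theorems, the third one can be spot-checked numerically (an aside, not from the paper). The sketch below does this in Varona's special case $y_k=1$, with $x_k$ generated by the recurrence $x_{n+2}x_n=x_{n+1}^2(x_n+1)$ and the particular choice $x_1=1$.

```python
from fractions import Fraction as Fr

def cf(a, b):
    v = Fr(0)
    for ai, bi in zip(reversed(a), reversed(b)):
        v = Fr(ai) / (bi + v)
    return v

# x_0 = x_1 = 1, x_{n+2} x_n = x_{n+1}^2 (x_n + 1); all y_k = 1.
N = 8
x = [Fr(1), Fr(1)]
for n in range(N + 2):
    x.append(x[-1]**2 * (x[-2] + 1) / x[-2])

th2 = lambda k: x[k + 2] * x[k] / x[k + 1]**2   # theta^2 x_k

for n in range(2, N + 1):
    m = 3 * n - 4
    a = [Fr(1)] * m          # with y_k = 1 and x_1 = 1, every a_j is 1
    b = [Fr(0)] * m
    a[1] = x[1]              # a_2 = x_1 y_2
    b[0] = x[1]              # b_1 = x_1 y_1
    b[1] = x[2] / x[1] - 1   # b_2 = theta x_1 - theta y_1
    if m >= 3:
        b[2] = th2(1) - x[1]     # b_3 = theta^2 x_1 - x_1
    if m >= 4:
        a[3] = x[1]              # a_4 = x_1
        b[3] = Fr(1)
    for j in range(5, m + 1):    # 1-based index j = 3k-1, 3k, 3k+1 with k >= 2
        if j % 3 == 2:           # j = 3k - 1
            k = (j + 1) // 3
            b[j - 1] = x[k] - 1              # x_k y_k - y_{k+1}
        elif j % 3 == 0:         # j = 3k
            k = j // 3
            b[j - 1] = (th2(k) - 1) / x[k] - 1
        else:                    # j = 3k + 1
            b[j - 1] = Fr(1)
    assert cf(a, b) == sum(Fr(-1)**(k - 1) / x[k] for k in range(1, n + 1))
print("third theorem verified for n = 2..%d" % N)
```

For $n=3$ this reduces to the hand-checkable identity $[0;1,1,1,1,1]=1-\tfrac12+\tfrac18=\tfrac58$.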
\section{Generalization of Hone expansions} \label{sec:Hone} In this section, we consider a sequence $F_{n}(X,Y)$ of nonzero polynomials with non-negative integer coefficients and such that $F_{n}(0,0)=0$ for every $n\geq0.$ Define the sequence $\left( x_{n}\right) _{n\geq0}$ by $x_{0}=1,$ $x_{1}\in\mathbb{Z}_{>0}$ and the recurrence relation \begin{equation} x_{n+2}x_{n}=x_{n+1}^{2}\left( F_{n}\left( x_{n},x_{n+1}\right) +1\right) \quad\left( n\geq0\right) . \label{Rec1} \end{equation} If $x_{n}$ satisfies (\ref{Rec1}), it is clear that \[ \theta^{2}x_{n}=F_{n}\left( x_{n},x_{n+1}\right) +1. \] It is easy to check by induction that $x_{n}$ is a positive integer and that $x_{n}$ divides $x_{n+1}$ for every $n\geq0.$ Therefore \begin{equation} x_{n+2}\geq x_{n+1}^{2}\frac{x_{n}+1}{x_{n}}>x_{n+1}^{2}\quad\left( n\geq0\right) . \label{Rec2} \end{equation} Hence we deduce from (\ref{Rec2}) that $x_{2}\geq2$ and \begin{equation} x_{n}\geq\left( x_{2}\right) ^{2^{n-2}}\geq2^{2^{n-2}}\quad\left( n\geq2\right) . \label{Min} \end{equation} Now let $h$ be any positive integer. We define the series \[ S=\sum_{n=0}^{\infty}\frac{h^{n}}{x_{n+1}}=\frac{1}{h}\sum_{n=1}^{\infty}\frac{h^{n}}{x_{n}}, \] which is convergent by (\ref{Min}). We can apply Theorem \ref{ThHone} above with $y_{n}=h^{n},$ in which case $\theta y_{n}=h$ and $\theta^{2}y_{n}=1$ for every $n\geq0.$ By letting $n\rightarrow\infty$ in Theorem \ref{ThHone} and dividing the first partial numerator $a_{1}=y_{1}=h$ by $h$ (the sum in Theorem \ref{ThHone} being $hS$), we get \begin{equation} S=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}} \genfrac{}{}{0pt}{}{{}}{+\cdots} , \label{S} \end{equation} where $a_{1}=1,$ $b_{1}=x_{1}-h,$ and for $k\geq1$ \begin{align*} a_{2k} & =h,\quad a_{2k+1}=1,\\ \quad b_{2k} & =x_{k-1},\quad b_{2k+1}=\frac{\theta^{2}x_{k-1}-1}{x_{k-1}}=\frac{F_{k-1}\left( x_{k-1},x_{k}\right) }{x_{k-1}}.
\end{align*} Assume that $x_{1}>h.$ As $F_{n}(0,0)=0$ and $x_{n}$ divides $x_{n+1}$ for every $n\geq0,$ we see that $a_{n}$ and $b_{n}$ are positive integers for every $n\geq1$ in this case. If moreover $h=1$ and $F_{n}(x_{n},x_{n+1})+1=F(x_{n+1})$ for some $F(X) \in{\mathbb{Z}}_{\geq0}[X]$, then (\ref{S}) gives the expansion in regular continued fraction of $S,$ already obtained by Hone in \cite{Hone}. \begin{example} \label{ex:1} \label{ExHone1}The simplest of all sequences $(x_{n})$ satisfies the recurrence relation \begin{equation} x_{n+2}x_{n}=x_{n+1}^{2}\left( x_{n}+1\right) \quad\left( n\geq0\right) , \label{Simplest} \end{equation} which means that $F_{n}(X,Y)=X$ for every $n\geq0.$ Let $h$ be a positive integer, and assume that $x_{1}>h.$ Then we can apply the above results and we get \[ S=\frac{1}{x_{1}-h} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{x_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{x_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{h}{x_{k}} \genfrac{}{}{0pt}{}{{}}{+\cdots} . \] In the case where $x_{1}=1$ and $h=1,$ we can apply this result starting with $x_{2}=2$ in place of $x_{1}$ and we get \[ S-1=\frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{x_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{x_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{x_{k}} \genfrac{}{}{0pt}{}{{}}{+\cdots} . \] Hence, assuming that $x_{0}=x_{1}=1$ and $x_{n}$ satisfies (\ref{Simplest}), we have \[ \left[ 1;1,x_{1},1,x_{2},1,x_{3},1,x_{4},\ldots,1,x_{k},\ldots\right] =\sum_{n=1}^{\infty}\frac{1}{x_{n}}, \] which is (\ref{Nouv1}).
\end{example} If the condition $x_{1}>h$ does not hold, let $N\geq0$ be such that $x_{N+1}>h.$ Then there exists a positive integer $t$ such that \[ S=\frac{t}{x_{N}}+h^{N}\sum_{n=0}^{\infty}\frac{h^{n}}{x_{n+N+1}}=\frac{t}{x_{N}}+h^{N}\sum_{n=0}^{\infty}\frac{h^{n}}{x_{n+1}^{\prime}}=\frac{t}{x_{N}}+h^{N}S^{\prime}, \] where $x_{n}^{\prime}=x_{n+N}$ satisfies $x_{1}^{\prime}>h$ and the recurrence relation \[ x_{n+2}^{\prime}x_{n}^{\prime}=\left( x_{n+1}^{\prime}\right) ^{2}\left( F_{n+N}\left( x_{n}^{\prime},x_{n+1}^{\prime}\right) +1\right) \quad\left( n\geq0\right) . \] Hence we can apply the above result to $S^{\prime}$ and we get \[ S=\frac{t}{x_{N}}+\frac{h^{N}a_{1}^{\prime}}{b_{1}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}^{\prime}}{b_{2}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}^{\prime}}{b_{n}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+\cdots} . \] This proves that \begin{equation} \frac{1}{S}=\frac{x_{N}}{t} \genfrac{}{}{0pt}{}{{}}{+} \frac{h^{N}a_{1}^{\prime}x_{N}}{b_{1}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}^{\prime}}{b_{2}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}^{\prime}}{b_{n}^{\prime}} \genfrac{}{}{0pt}{}{{}}{+\cdots} , \label{1/S} \end{equation} which gives an expansion of $S^{-1}$ in a continued fraction whose terms are positive integers. \begin{example} \label{ex:2} \label{ExHone2}Assume again that $x_{n}$ satisfies (\ref{Simplest}), and take for example $x_{1}=1$ and $h=3.$ We have here \[ S=\sum_{n=0}^{\infty}\frac{3^{n}}{x_{n+1}}, \] with $x_{1}=1,$ $x_{2}=2,$ $x_{3}=8.$ Hence $N=2$ and $a_{1}^{\prime}=1,$ $b_{1}^{\prime}=5,$ and for $k\geq1$ \[ a_{2k}^{\prime}=3,\quad a_{2k+1}^{\prime}=1,\quad b_{2k}^{\prime}=x_{k+2},\quad b_{2k+1}^{\prime}=1.
\] By applying (\ref{1/S}), we obtain \[ \frac{1}{S}=\frac{2}{5} \genfrac{}{}{0pt}{}{{}}{+} \frac{18}{5} \genfrac{}{}{0pt}{}{{}}{+} \frac{3}{x_{3}} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{3}{x_{4}} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{1} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{3}{x_{k}} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{1} \genfrac{}{}{0pt}{}{{}}{+\cdots} . \] \end{example} \section{Generalization of Varona expansions} \label{sec:Varona} With the notations of Section \ref{sec:Hone}, we now define the series \[ T=\sum_{n=0}^{\infty}\left( -1\right) ^{n}\frac{h^{n}}{x_{n+1}}=\sum_{n=1}^{\infty}\left( -1\right) ^{n-1}\frac{h^{n-1}}{x_{n}}. \] Here we have $y_{n}=h^{n-1}.$ By letting $n\rightarrow\infty$ in Theorem \ref{ThVarona}, we get \begin{equation} T=\frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{n}}{b_{n}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \label{T} \end{equation} where \begin{align*} a_{1} & =1,\quad a_{2}=hx_{1},\quad a_{3}=h,\quad a_{4}=x_{1},\\ b_{1} & =x_{1},\quad b_{2}=\frac{x_{2}}{x_{1}}-h,\quad b_{3}=F_{1}\left( x_{1},x_{2}\right) +1-x_{1},\quad b_{4}=1, \end{align*} and for $k\geq2$ \begin{align*} a_{3k-1} & =h^{k},\quad a_{3k}=h^{k-1},\quad a_{3k+1}=1,\\ \quad b_{3k-1} & =h^{k-1}\left( x_{k}-h\right) ,\quad b_{3k}=\frac{F_{k}\left( x_{k},x_{k+1}\right) }{x_{k}}-1,\quad b_{3k+1}=1. \end{align*} Assume that $x_{1}\geq h$ and that $F_{k}(X,Y)\neq X$ for every $k\geq0.$ Then $b_{2}>0$ since $x_{2}>x_{1}^{2}\geq hx_{1}$, and all the $a_{n}$ and $b_{n}$ are positive integers. If moreover $h=1$ and $x_{1}=1$, we obtain the expansion in regular continued fraction of $T$ given by Varona in \cite{Varona}. \begin{example} \label{ex:3} \label{ExVarona1}Assume that $(x_{n})$ satisfies (\ref{Simplest}).
Then we cannot directly apply the above results, since $F_{k}(X,Y)=X$ for every $k\geq0$ and therefore $b_{3k}=0$ for every $k\geq1.$ However, by the concatenation formula we have for $k\geq2$ \begin{align*} \frac{a_{3k-1}}{b_{3k-1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{3k}}{0} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3k+1}}{b_{3k+1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{A}{B} & =\frac{a_{3k-1}}{b_{3k-1}+\dfrac{a_{3k}}{a_{3k+1}}b_{3k+1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{3k}A}{a_{3k+1}B}\\ & =\frac{h}{x_{k}-h+1} \genfrac{}{}{0pt}{}{{}}{+} \frac{A}{B}. \end{align*} Then we have for $n\geq3$ \begin{align*} \frac{a_{1}}{b_{1}} \genfrac{}{}{0pt}{}{{}}{+} & \frac{a_{2}}{b_{2}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{3}}{b_{3}} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{4}}{b_{4}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{5}}{b_{5}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{a_{3n-5}}{b_{3n-5}} \genfrac{}{}{0pt}{}{{}}{+} \frac{a_{3n-4}}{b_{3n-4}}\\ & =\frac{1}{x_{1}} \genfrac{}{}{0pt}{}{{}}{+} \frac{hx_{1}}{x_{1}^{-1}x_{2}-h} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{1} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{x_{1}}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{x_{2}-h+1} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{x_{n-2}-h+1} \genfrac{}{}{0pt}{}{{}}{+} \frac{h}{x_{n-1}-h}.
\end{align*} In the case where $x_{1}=1$ and $h=1,$ we get \[ \sum_{k=1}^{n}\frac{\left( -1\right) ^{k-1}}{x_{k}}=\frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \dfrac{1}{1} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{x_{2}} \genfrac{}{}{0pt}{}{{}}{+\cdots} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{x_{n-2}} \genfrac{}{}{0pt}{}{{}}{+} \frac{1}{x_{n-1}-1} \] for $n\geq3.$ This yields (\ref{Nouv2}) by letting $n\rightarrow\infty.$ \end{example} \begin{remark} Hone and Varona in \cite{Hone2} and \cite{Hone3} have recently generalized their results to sums of a rational number and certain Engel or Pierce series by giving their expansions in regular continued fractions. \end{remark} \end{document}
\begin{document} \title{Tiling edge-ordered graphs with monotone paths and other structures} \begin{abstract} Given graphs $F$ and $G$, a perfect $F$-tiling in $G$ is a collection of vertex-disjoint copies of $F$ in $G$ that together cover all the vertices in $G$. The study of the minimum degree threshold forcing a perfect $F$-tiling in a graph $G$ has a long history, culminating in the K\"uhn--Osthus theorem [Combinatorica 2009] which resolves this problem, up to an additive constant, for all graphs $F$. In this paper we initiate the study of the analogous question for edge-ordered graphs. In particular, we characterize for which edge-ordered graphs $F$ this problem is well-defined. We also apply the absorbing method to asymptotically determine the minimum degree threshold for forcing a perfect $P$-tiling in an edge-ordered graph, where $P$ is any fixed monotone path. \end{abstract} \section{Introduction} \subsection{Monotone paths in edge-ordered graphs} An \emph{edge-ordered graph} $G$ is a graph equipped with a total order $\leq$ of its edge set $E(G)$. Usually we will think of a total order of $E(G)$ as a labeling of the edges with labels from $\mathbb R$, where the labels inherit the total order of $\mathbb R$ and where edges are assigned distinct labels. A path $P$ in $G$ is \emph{monotone} if the consecutive edges of $P$ form a monotone sequence with respect to $\leq$. We write $P_k^{\scaleto{{\leqslant}}{4.3pt}}$ for the monotone path of length $k$ (i.e., on $k$ edges). The study of monotone paths in edge-ordered graphs dates back to the 1970s. Chv\'atal and Koml\'os~\cite{chvkom} raised the following question: what is the largest integer $f(K_n)$ such that every edge-ordering of $K_n$ contains a copy of the monotone path $P_{f(K_n)}^{\scaleto{{\leqslant}}{4.3pt}}$ of length $f(K_n)$? Over the years there have been several papers on this topic~\cite{bkwpstw, burger, ccs, gk, milans, rodl}. 
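To make the quantity $f(K_n)$ concrete (an aside, not part of the paper), a brute-force search over all edge-orderings of a small complete graph is easy to run. The sketch below computes $f(K_3)$ and $f(K_4)$ exactly; since any two edges of $K_3$ share a vertex, $f(K_3)=2$.

```python
from itertools import permutations

def longest_monotone_path(n, label):
    """Longest monotone path in K_n under the edge labeling label[(u, v)]
    (u < v). We only search for increasing label sequences: a decreasing
    path read backwards is increasing, so this loses nothing."""
    best = 0

    def extend(last_vertex, used, last_label, length):
        nonlocal best
        best = max(best, length)
        for w in range(n):
            if w not in used:
                e = (min(last_vertex, w), max(last_vertex, w))
                if label[e] > last_label:
                    extend(w, used | {w}, label[e], length + 1)

    for v in range(n):
        extend(v, {v}, -1, 0)
    return best

def f(n):
    """min over all edge-orderings of K_n of the longest monotone path length."""
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    return min(
        longest_monotone_path(n, dict(zip(edges, perm)))
        for perm in permutations(range(len(edges)))
    )

print("f(K_3) =", f(3))   # 2, since any two edges of K_3 meet
print("f(K_4) =", f(4))   # between 2 and 3: a path in K_4 has at most 3 edges
```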
In a recent breakthrough, Buci\'c, Kwan, Pokrovskiy, Sudakov, Tran, and Wagner~\cite{bkwpstw} proved that $f(K_n)\geq n^{1-o(1)}$. The best known upper bound on $f(K_n)$ is due to Calderbank, Chung, and Sturtevant~\cite{ccs} who proved that $f(K_n)\leq (1/2+o(1))n$. There have also been numerous papers on the wider question of the largest integer $f(G)$ such that every edge-ordering of a graph $G$ contains a copy of a monotone path of length $f(G)$. See the introduction of~\cite{bkwpstw} for a detailed overview of the related literature. A classical result of R\"odl~\cite{rodl} yields a Tur\'an-type result for monotone paths: every edge-ordered graph with $n$ vertices and with at least~$k(k+1)n/2$ edges contains a copy of $P_k^{\scaleto{{\leqslant}}{4.3pt}}$. More recently, Gerbner, Methuku, Nagy, P\'alv\"olgyi, Tardos, and Vizer~\cite{gmnptv} initiated the systematic study of the Tur\'an problem for edge-ordered graphs. It is also natural to seek conditions that force an edge-ordered graph $G$ to contain a collection of vertex-disjoint monotone paths $P_k^{\scaleto{{\leqslant}}{4.3pt}}$ that cover all the vertices in $G$, that is, a \emph{perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling} in~$G$. Our first result asymptotically determines the minimum degree threshold that forces a {perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling}. \begin{theorem}\label{Pkfactor} Given any $k \in \mathbb N$ and $\eta >0$, there exists an $n_0 \in \mathbb N$ such that if $n \geq n_0$ where $(k+1) | n$ then the following holds: if $G$ is an $n$-vertex edge-ordered graph with minimum degree $$ \delta(G)\geq (1/2+\eta)n$$ then $G$ contains a perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling. Moreover, for all $n\in \mathbb N$ with $(k+1)\vert n$, there is an $n$-vertex edge-ordered graph $G_0$ with $\delta(G_0)\geq \lfloor n/2\rfloor-2$ that does not contain a perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling. 
\end{theorem} The proof of Theorem~\ref{Pkfactor} provides the first application of the so-called \emph{absorbing method} in the setting of edge-ordered graphs. \subsection{The general problem} Let $F$ and $G$ be edge-ordered graphs. We say that $G$ \emph{contains} $F$ if $F$ is isomorphic to a subgraph $F'$ of $G$; here, crucially, the total order of $E(F)$ must be the same as the total order of $E(F')$ that is inherited from the total order of $E(G)$. In this case we say $F'$ is a \emph{copy of~$F$ in~$G$}. For example, if $G$ contains a path $F'$ of length $3$ with consecutive edges labeled $5$, $17$ and $4$ then $F'$ is a copy of the path $F$ of length $3$ with consecutive edges labeled $2$, $3$ and $1$. Given edge-ordered graphs $F$ and $G$, an \emph{$F$-tiling} in $G$ is a collection of vertex-disjoint copies of $F$ in $G$; an $F$-tiling in $G$ is \emph{perfect} if it covers all the vertices in $G$. In light of Theorem~\ref{Pkfactor} we raise the following general question. \begin{question}\label{ques1} Let $F$ be a fixed edge-ordered graph on~$f\in \mathbb N$ vertices and let $n \in \mathbb N$ be divisible by $f$. What is the smallest integer $f(n,F)$ such that every edge-ordered graph on $n$ vertices and of minimum degree at least $f(n,F)$ contains a perfect $F$-tiling? \end{question} Theorem~\ref{Pkfactor} implies that $f(n,P_k^{\scaleto{{\leqslant}}{4.3pt}})=(1/2+o(1))n$ for all $k\in \mathbb N$. Note that the \emph{unordered} version of Question~\ref{ques1} had been well-studied since the 1960s (see, e.g.,~\cite{alonyuster, cor, hs, kssAY, kuhn2}) and forty-five years later a complete solution, up to an additive constant term, was obtained via a theorem of K\"uhn and Osthus~\cite{kuhn2}. Very recently, the \emph{vertex-ordered graph} version of this problem has been asymptotically resolved~\cite{blt, andrea}. Question~\ref{ques1} has a rather different flavor to its graph and vertex-ordered graph counterparts. 
In particular, there are edge-ordered graphs $F$ for which, given \emph{any} $n \in \mathbb N$, there exists an edge-ordering $\leq$ of the complete graph $K_n$ that does not contain a copy of $F$. Thus, for such $F$, Question~\ref{ques1} is trivial in the sense that clearly there is no minimum degree threshold $f(n,F)$ for forcing a perfect $F$-tiling. This motivates Definitions~\ref{def:Turanable} and~\ref{def:tile} below. \begin{definition}[Tur\'anable]\label{def:Turanable}\rm An edge-ordered graph $F$ is \emph{Tur\'anable} if there exists a $t\in \mathbb N$ such that every edge-ordering of the graph $K_t$ contains a copy of $F$. \end{definition} An unpublished result of Leeb (see, e.g.,~\cite{gmnptv, ner}) characterizes all those edge-ordered graphs $F$ that are Tur\'anable. Moreover, a result of Gerbner, Methuku, Nagy, P\'alv\"olgyi, Tardos, and Vizer~\cite[Theorem 2.3]{gmnptv} shows that the so-called \emph{order chromatic number} is the parameter that governs the Tur\'an threshold for Tur\'anable edge-ordered graphs $F$. \begin{definition}[Tileable] \label{def:tile} \rm An edge-ordered graph $F$ on $f$ vertices is \emph{tileable} if there exists a $t\in \mathbb N$ divisible by $f$ such that every edge-ordering of the graph $K_t$ contains a perfect $F$-tiling. \end{definition} Let $F$ be a tileable edge-ordered graph on $f$ vertices and let $T(F)$ be the smallest possible choice of $t \in \mathbb N$ in Definition~\ref{def:tile} for $F$. It is easy to see that every edge-ordering of the graph $K_s$ contains a perfect $F$-tiling for every $s \geq T(F)$ that is divisible by $f$. Note that Theorem~\ref{Pkfactor} implies that $P_k^{\scaleto{{\leqslant}}{4.3pt}}$ is tileable for all $k \in \mathbb N$. The second objective of this paper is to provide a characterization of those edge-ordered graphs that are tileable; see Theorem~\ref{thm:character}. Thus, this characterizes for which edge-ordered graphs $F$ Question~\ref{ques1} is well-defined.
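To illustrate these definitions, consider the monotone path $P_2^{\scaleto{{\leqslant}}{4.3pt}}$ (a simple worked example, included only for illustration and not needed in what follows).

```latex
% Every edge-ordering of K_3 contains a spanning monotone path of length 2.
Indeed, let $v_1v_2v_3$ be a spanning path in an edge-ordered $K_3$, with
labels $a=L(v_1v_2)$ and $b=L(v_2v_3)$. If $a<b$ then $v_1v_2v_3$ is a copy of
$P_2^{\scaleto{{\leqslant}}{4.3pt}}$; otherwise $b<a$ and the same path
traversed from $v_3$, namely $v_3v_2v_1$, is such a copy. Hence
$P_2^{\scaleto{{\leqslant}}{4.3pt}}$ is Tur\'anable with $t=3$; moreover, as
this copy is spanning, it forms a perfect
$P_2^{\scaleto{{\leqslant}}{4.3pt}}$-tiling of $K_3$, so
$P_2^{\scaleto{{\leqslant}}{4.3pt}}$ is tileable with
$T(P_2^{\scaleto{{\leqslant}}{4.3pt}})=3$.
```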
Interestingly, Theorem~\ref{thm:character} implies that there are edge-ordered graphs that are Tur\'anable but not tileable; see Proposition~\ref{prop::Dn}. The precise characterization of the tileable edge-ordered graphs is a little involved, and depends on twenty edge-orderings of $K_f$; as such, we defer the statement of Theorem~\ref{thm:character} to Section~\ref{sec:character}. In \cite{gmnptv} it is proven that no edge-ordering of $K_4$ is Tur\'anable and consequently, any edge-ordered graph containing a copy of $K_4$ is not Tur\'anable and therefore not tileable. Thus, for an edge-ordered graph to be tileable it cannot be too `dense'. Here we prove that no edge-ordering of~$K_4^-$ is tileable\footnote{Recall that $K_t^-$ denotes the graph obtained from $K_t$ by removing an edge.}; see~Proposition~\ref{prop::K4-}. However, we prove that the property of being tileable is not closed under subgraphs and there are in fact connected tileable edge-ordered graphs that contain copies of~$K_4^-$ (see Corollary~\ref{cor:K_4-}). A graph $H$ is \emph{universally tileable} if for any given ordering of~$E(H)$, the resulting edge-ordered graph is tileable. Similarly, we say that $H$ is \emph{universally Tur\'anable} if given any edge-ordering of $H$, the resulting edge-ordered graph is Tur\'anable. Using~\cite[Theorem~2.18]{gmnptv} it is easy to characterize those graphs $H$ that are universally tileable. \begin{theorem}\label{thm:uni} Let $H$ be a graph. 
The following are equivalent: \begin{enumerate}[wide, leftmargin=23pt, labelindent=3pt, label=\upshape({\itshape \alph*\,})] \item $H$ is universally tileable; \label{it:univtil} \item $H$ is universally Tur\'anable; \label{it:univturan} \item \label{it:univdescrip} \begin{enumerate}[wide,leftmargin=15pt, labelindent=0pt, label=\upshape({\itshape \roman*\,})] \item $H$ is a star forest (possibly with isolated vertices),\footnote{A \emph{star forest} is a graph whose components are all stars.} or \item $H$ is a path on three edges together with a (possibly empty) collection of isolated vertices,~or \item $H$ is a copy of $K_3$ together with a (possibly empty) collection of isolated vertices. \end{enumerate} \end{enumerate} \end{theorem} In Section~\ref{subsec:char2} we determine the asymptotic value of $f(n,F)$ for all connected universally tileable edge-ordered graphs $F$. Our characterization of tileable edge-ordered graphs lays the groundwork for the systematic study of Question~\ref{ques1}. The second and third authors will investigate this problem further in a forthcoming paper. Already, though, we can say something about this question. Indeed, an almost immediate consequence of the Hajnal--Szemer\'edi theorem~\cite{hs} is the following result. \begin{theorem}\label{hscorollary} Let $F$ be a tileable edge-ordered graph and let $T(F)$ be the smallest possible choice of $t \in \mathbb N$ in Definition~\ref{def:tile} for $F$. Given any integer $n\geq T(F)$ divisible by $|F|$, $$f(n,F) \leq \left ( 1- \frac{1}{T(F)} \right ) n.$$ \end{theorem} The paper is organized as follows. In Section~\ref{subsec:state} we state the characterization of all tileable edge-ordered graphs (Theorem~\ref{thm:character}). Then, in Section~\ref{subsec:examples} we use this theorem to establish some basic properties of the family of tileable edge-ordered graphs and some general examples. We give the proof of Theorem~\ref{thm:character} in Section~\ref{subsec:proof}.
In Section~\ref{subsec:char2} we consider universally tileable graphs, and give the proof of Theorem~\ref{thm:uni}. The proof of Theorem~\ref{hscorollary} is given in Section~\ref{subsec:hscor}. In Section~\ref{sec:mainproof} we give the proof of Theorem~\ref{Pkfactor}. Finally, some concluding remarks are made in Section~\ref{sec:conc}. \subsection*{Notation} Let $G$ be an (edge-ordered) graph. We write $V(G)$ and $E(G)$ for its vertex and edge sets respectively. We denote an edge~$\{u,v\}\in E(G)$ by~$uv$, omitting parentheses and commas. Define $|G|:=|V(G)|$. Given some $X \subseteq V(G),$ we write $G[X]$ for the induced (edge-ordered) subgraph of $G$ with vertex set $X$. Define $G\setminus X:=G[V(G) \setminus X]$. Given $x \in V(G)$ we define $G-x:=G[V(G)\setminus\{x\}]$. We define $N_G(x)$ to be the set of vertices adjacent to $x$ in $G$ and set $d_G(x):=|N_G(x)|$. When the graph $G$ is clear from the context, we will omit the subscript $G$. We say an edge $e_1$ in $G$ is \emph{larger} than another edge $e_2$ if $e_2$ occurs before $e_1$ in the total order of $E(G)$; in this case we may write $e_1>e_2$ or $e_2<e_1$. We define \textit{smaller} analogously. A sequence $\{ e_i \} _{i \in [t] } $ of edges is \emph{monotone} if $e_1<e_2< \dots < e_t$ or $e_1>e_2> \dots > e_t$. Given an (unordered) graph $G$ we write $G^{\scaleto{{\leqslant}}{4.3pt}}$ to denote the edge-ordered graph obtained from $G$ by equipping $E(G)$ with a total order~$\leqslant$. We say that~$G$ is the \emph{underlying graph of~$G^{\scaleto{{\leqslant}}{4.3pt}}$}. Given a graph $G$ together with an (injective) labeling $L: E(G) \to \mathbb{R}$ of its edges, we define the \emph{edge-ordering induced by the labeling $L$} so that $e_i<e_j$ if and only if $L(e_i)<L(e_j)$. As such, $L$ gives rise to an edge-ordered graph. Note that two \emph{different} labelings can give rise to the \emph{same} edge-ordered graph.
For example, a path whose edges are labeled $1$, $2$, and $3$ respectively is a monotone path; likewise a path whose edges are labeled $1$, $e$, and $\pi$ respectively is a monotone path. We denote the (unordered) path of length~$k$ by~$P_k$ and sometimes we identify a copy of $P_k$ with its sequence of vertices~$v_1\cdots v_{k+1}$ where~$v_iv_{i+1}\in E(P_k)$ for all~$i\in [k]$. Given distinct $a_1,\dots , a_t \in \mathbb R$ we write $a_1\dots a_t$ for the edge-ordered path on $t$ edges whose $i$th edge has label $a_i$. For example, $P=132$ is the edge-ordered path on four vertices $v_1, v_2, v_3, v_4$ whose first edge $v_1v_2$ is labeled $1$, second edge $v_2v_3$ is labeled $3$, and third edge $v_3v_4$ is labeled $2$. Given $k \in \mathbb N$ and a set $X$, we write $\binom{X}{k}$ for the collection of all subsets of $X$ of size $k$. \section{The characterization of all tileable edge-ordered graphs}\label{sec:character} \subsection{The characterization theorem}\label{subsec:state} The following Ramsey-type result, attributed to Leeb (see~\cite{gmnptv, ner}), says that in every sufficiently large edge-ordered complete graph we must always find a subgraph which is~\emph{canonically ordered} (see Definition~\ref{def:canonical}). Before giving the precise description of the canonical orderings, let us present Leeb's result. \begin{proposition}\label{prop:canonical} For every~$k\in \mathbb N$ there is an~$m\in \mathbb N$ such that every edge-ordered complete graph~$K_m$ contains a copy of~$K_k$ that is canonically edge-ordered. \qed \end{proposition} We now define the canonical orderings of~$K_n$. \begin{definition}\label{def:canonical} \rm Given~$n\in \mathbb N,$ we denote by $\{v_1,\dots,v_n\}$ the vertex set of the complete graph $K_n$. The following labelings~$L_1$, $L_2$, $L_3$, and $L_4$ induce the \emph{canonical orderings} of $K_n$. $\bullet$ \emph{min ordering}: For $1\le i<j\le n$ the label of the edge $v_iv_j$ is $L_1(v_iv_j)= 2ni+j-1$. 
$\bullet$ \emph{max ordering}: For $1\le i<j\le n$ the label of the edge $v_iv_j$ is $L_2(v_iv_j)=(2n-1)j+i$. $\bullet$ \emph{inverse min ordering}: For $1\le i<j\le n$ the label of the edge $v_iv_j$ is $L_3(v_iv_j)=(2n+1)i-j$. $\bullet$ \emph{inverse max ordering}: For $1\le i<j\le n$ the label of the edge $v_iv_j$ is $L_4(v_iv_j)=2nj-i+n$. We say that min, max, inverse min, and inverse max are \emph{types} of canonical orderings and that the labelings~$L_1$, $L_2$, $L_3$, and $L_4$ are the \textit{standard labelings} for those types.\footnote{The labelings~$L_1$, $L_2$, $L_3$, and~$L_4$ presented here differ from those used in \cite{gmnptv}. However, the induced edge-orderings are the same. This labeling will be useful for Definition~\ref{def:starcanonical}.} To emphasize, in the statement of Proposition~\ref{prop:canonical}, by `a copy of~$K_k$ that is canonically edge-ordered', we mean that the edge-ordering of $K_k$ is the same as the edge-ordering induced by the labeling $L_i$, for some $i \in [4]$. \end{definition} Observe that the max and inverse max orderings are the `reverse' of the min and inverse min orderings respectively. For example, if you reverse the total order of $E(K_n)$ induced by the min ordering $L_1$, then you obtain an edge-ordered graph whose total order is now induced by the max ordering $L_2$; here though vertex $v_n$ is playing the role of $v_1$, $v_{n-1}$ is playing the role of $v_2$, etc. 
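As a concrete sanity check on the standard labelings in Definition~\ref{def:canonical} (a routine computation, included only for illustration), take $n=3$, so that $L_1(v_iv_j)=6i+j-1$, $L_2(v_iv_j)=5j+i$, $L_3(v_iv_j)=7i-j$, and $L_4(v_iv_j)=6j-i+3$ for $1\le i<j\le 3$. Evaluating gives:

```latex
\begin{alignat*}{3}
L_1(v_1v_2)&=7,  &\qquad L_1(v_1v_3)&=8,  &\qquad L_1(v_2v_3)&=14,\\
L_2(v_1v_2)&=11, &\qquad L_2(v_1v_3)&=16, &\qquad L_2(v_2v_3)&=17,\\
L_3(v_1v_2)&=5,  &\qquad L_3(v_1v_3)&=4,  &\qquad L_3(v_2v_3)&=11,\\
L_4(v_1v_2)&=14, &\qquad L_4(v_1v_3)&=20, &\qquad L_4(v_2v_3)&=19.
\end{alignat*}
```

Thus, for $n=3$, the min and max orderings both induce $v_1v_2<v_1v_3<v_2v_3$, the inverse min ordering induces $v_1v_3<v_1v_2<v_2v_3$, and the inverse max ordering induces $v_1v_2<v_2v_3<v_1v_3$.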
\begin{remark}\label{rem:con}\rm Whilst the standard labelings formally define the canonical orderings, recalling the following intuitive explanations of these orderings will aid the reader throughout the paper: \begin{itemize} \item \emph{min ordering}: the smallest edges are incident to $v_1$ so that $v_1v_2<\dots <v_1v_n$; the next smallest edges are those that go from $v_2$ to the `right' of $v_2$ so that $v_2v_3<\dots <v_2v_n$; the next smallest edges are those that go from $v_3$ to the `right' of $v_3$ so that $v_3v_4<\dots <v_3v_n$, and so forth. \item \emph{max ordering}: the largest edges are incident to $v_n$ so that $v_1v_n<\dots <v_{n-1}v_n$; the next largest edges are those that go from $v_{n-1}$ to the `left' of $v_{n-1}$ so that $v_1v_{n-1}<\dots <v_{n-2}v_{n-1}$, and so forth. \item \emph{inverse min ordering}: the smallest edges are incident to $v_1$ so that $v_1v_n<\dots <v_1v_2$; the next smallest edges are those that go from $v_2$ to the `right' of $v_2$ so that $v_2v_n<\dots < v_2v_3$, and so forth. \item \emph{inverse max ordering}: the largest edges are incident to $v_n$ so that $v_1v_n> \dots >v_{n-1}v_n$; the next largest edges are those that go from $v_{n-1}$ to the `left' of $v_{n-1}$ so that $v_1v_{n-1}>\dots >v_{n-2}v_{n-1}$, and so forth. \end{itemize} \end{remark} In~\cite{gmnptv} it was observed that Proposition~\ref{prop:canonical} yields a full characterization of Tur\'anable graphs. \begin{theorem}[Tur\'anable characterization] \label{thm:turanable} An edge-ordered graph $F$ on $f$ vertices is Tur\'anable if and only if all four canonical edge-orderings of $K_f$ contain a copy of $F$. \qed \end{theorem} In~\cite[Theorem 2.5]{gmnptv} they present a `family' version of Theorem~\ref{thm:turanable}, which implies that~$F$ is Tur\'anable if and only if~$F$ is contained in every canonical edge-ordering of~$K_n$, \textit{for all~$n\in \mathbb N$}. However, Theorem~\ref{thm:turanable} can be deduced easily from the following fact. 
\begin{fact}\label{fact:selfidentical} Suppose~$k\leq n$ are positive integers. If~$K_n$ is canonically edge-ordered, then~$K_k\subseteq K_n$ is canonically edge-ordered. Moreover, $K_k$ has the same type of canonical edge-ordering as~$K_n$.\qed \end{fact} The picture is slightly different when one seeks a perfect $F$-tiling instead of just a single copy of $F$. To illustrate, consider a canonical ordering of $K_n$ with an extra `defective' vertex $x$, whose edges incident to it can have an arbitrary ordering. To have a perfect $F$-tiling in this edge-ordered graph, there must be a copy of $F$ containing the vertex $x$. This leads to a generalization of the canonical orderings above, which we call \emph{$\star$-canonical ordering{}s} (see Definition~\ref{def:starcanonical}). We obtain a similar characterization for tileable graphs as follows. \begin{theorem}[Tileable characterization]\label{thm:character} An edge-ordered graph $F$ on $f$ vertices is tileable if and only if all twenty $\star$-canonical ordering s of $K_f$ contain a copy of $F$. \end{theorem} To define the $\star$-canonical ordering s we will consider an edge-ordering of the complete graph~$K_{n+1}$ for which there is a vertex~$x\in V(K_{n+1})$ such that~$K_{n+1}-x$ is canonically ordered. Depending on the type of canonical ordering and the ordering of the edges incident to~$x$ we have, for all~$n\geq 4$,~twenty possible $\star$-canonical ordering s of~$K_{n+1}$. \begin{definition}\label{def:starcanonical} \rm Let $\{x, v_1,\dots,v_n\}$ denote the vertex set of $K_{n+1}$. Suppose~$L:E(K_{n+1})\to \mathbb{R}$ is a labeling of the edges of~$K_{n+1}$ such that its restriction to~$K_{n+1}-x$ is canonical with one of the standard labelings $L_1$, $L_2$, $L_3$, or $L_4$. Moreover, suppose that the labels~$x_i:=L(xv_i)$ for~$i\in [n]$ satisfy one of the following: $\bullet$ \emph{Larger increasing orderings}: $x_n > \dots > x_2 > x_1 > \max\limits_{i<j}\{L(v_iv_j)\}$. 
$\bullet$ \emph{Larger decreasing orderings}: $x_1 > x_2 > \dots > x_n > \max\limits_{i<j}\{L(v_iv_j)\}$. $\bullet$ \emph{Smaller increasing orderings}: $x_1 < x_2 < \dots < x_n < \min\limits_{i<j}\{L(v_iv_j)\}$. $\bullet$ \emph{Smaller decreasing orderings}: $x_n < \dots < x_2 < x_1 < \min\limits_{i<j}\{L(v_iv_j)\}$. $\bullet$ \emph{Middle increasing orderings}: $x_i = 2ni$ for all~$i\in [n]$. Then, $L$ induces a~\emph{$\star$-canonical ordering{}} of~$K_{n+1}$. We refer to the vertex~$x$ as \textit{the special vertex}. \end{definition} Observe that, depending on the type of canonical ordering of $K_{n+1}-x$, there are four possible orderings of each kind: larger increasing, larger decreasing, smaller increasing, smaller decreasing, and middle increasing. We will refer to these twenty possible cases as \textit{types} of $\star$-canonical ordering s. Moreover, we will say that~$K_{n+1}-x$ is the~\emph{canonical part} of the $\star$-canonical ordering. We sometimes refer to the eight smaller increasing/decreasing orderings as the \emph{smaller orderings}. We define the \emph{larger orderings}, \emph{increasing orderings}, and \emph{decreasing orderings} analogously. \begin{remark}\label{rem:middle}\rm In contrast with the other types, in the four middle increasing orderings, the edges incident to the special vertex $x$ are `in between' the edges of the canonical ordering of~$K_{n+1}-x$. More precisely, we have: \begin{itemize}[leftmargin=0.9cm] \item If $K_{n+1}\!-x$ is a min ordering then $v_{i-1}v_n < xv_i < v_iv_{i+1}$ for every $2\le i\le n-1$. Additionally, $xv_1<v_1v_2$ and $v_{n-1}v_n<xv_n$. \item If $K_{n+1}\!-x$ is a max ordering then $v_{i-1}v_i < xv_i < v_1v_{i+1}$ for every $2\le i\le n-1$. Additionally, $xv_1<v_1v_2$ and $v_{n-1}v_n<xv_n$. \item If $K_{n+1}\!-\!x$ is an inverse min ordering then $v_{i}v_{i+1} < xv_i < v_{i+1}v_{n}$ for every $1\le i\le n-2$. Additionally, $v_{n-1}v_n<xv_{n-1}<xv_n$.
\item If $K_{n+1}\!-\!x$ is an inverse max ordering then $v_{1}v_{i-1} < xv_i< v_{i-1}v_{i}$ for every $3\le i\le n$. Additionally, $xv_1<xv_2<v_1v_2$. \end{itemize} \end{remark} It is not hard to check that canonical orderings are $\star$-canonical ordering s. In particular, a min ordering is a smaller increasing ordering, a max ordering is a larger increasing ordering, an inverse min ordering is a smaller decreasing ordering, and an inverse max ordering is a larger decreasing ordering. In each case, the special vertex~$x$ plays the role of either the first or the last vertex in the canonical ordering. The proof of the `forwards direction' of Theorem~\ref{thm:character} relies on the following fact for $\star$-canonical ordering s, analogous to Fact~\ref{fact:selfidentical} for canonical orderings. \begin{fact}\label{fact:selfidentical_star} Suppose~$k\leq n$ are positive integers. If~$K_{n+1}$ is $\star$-canonically edge-ordered{} with special vertex $x$, then every subgraph~$K_k\subseteq K_{n+1}$ with~$x\in V(K_k)$ is $\star$-canonically edge-ordered{} with the same type as~$K_{n+1}$.\footnote{Note that it follows from Fact~\ref{fact:selfidentical} that every subgraph $K_k\subseteq K_{n+1}$ with~$x\notin V(K_k)$ is canonically ordered of the same type as~$K_{n+1}-x$.} \qed \end{fact} The forwards direction of Theorem~\ref{thm:character} follows easily from this fact. Indeed, if $F$ is tileable, by definition there is some $n \in \mathbb N$ so that in any $\star$-canonical ordering{} of $K_{n+1}$ there is a perfect $F$-tiling. Fact~\ref{fact:selfidentical_star} implies that in such a perfect $F$-tiling there is a copy $F'$ of $F$ which covers $x$ and where~$K_{n+1}[V(F')]$ is $\star$-canonically edge-ordered{} with the same type as~$K_{n+1}$. Thus, this implies that every $\star$-canonical ordering{} of $K_{f}$ contains a copy of $F$. 
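Before turning to the backwards direction, let us record one verification behind Remark~\ref{rem:middle} (a routine computation with the standard labelings; the remaining bullet points are checked in the same way). Suppose the canonical part is a min ordering, so $L(v_iv_j)=L_1(v_iv_j)=2ni+j-1$ for $i<j$ and $L(xv_i)=2ni$. Then, for every $2\le i\le n-1$,

```latex
\begin{align*}
L(v_{i-1}v_n) &= 2n(i-1)+n-1 = 2ni-n-1 \;<\; 2ni = L(xv_i)\,,\\
L(v_iv_{i+1}) &= 2ni+(i+1)-1 = 2ni+i \;>\; 2ni = L(xv_i)\,,
\end{align*}
```

so indeed $v_{i-1}v_n<xv_i<v_iv_{i+1}$, as claimed in the first bullet point of Remark~\ref{rem:middle}.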
The proof of the backwards direction of Theorem~\ref{thm:character} makes use of an approach analogous to that of Caro~\cite{caro}. More precisely, the intuition is as follows. Choose $t\in \mathbb N$ to be sufficiently large compared to $f$. Recall that due to Proposition~\ref{prop:canonical}, in any edge-ordering of a sufficiently large $K_{n_0}$ one must find a canonical copy of $K_t$. Now consider any edge-ordering of $K_{n}$ where~$n$ is much larger than $n_0$. We may repeatedly find vertex-disjoint canonical copies of $K_t$ in $K_{n}$ until we have fewer than $n_0$ vertices remaining. That is, we have tiled the vast majority of~$K_n$ with canonical copies of $K_t$. The idea is now to incorporate the currently uncovered vertices into these canonical copies of $K_t$ and then split each such `tile' into many $\star$-canonically edge-ordered{} copies of $K_f$. Therefore, the resulting substructure in $K_n$ is a perfect tiling of $\star$-canonically edge-ordered{} copies of $K_f$. Now by the choice of $F$, each such copy of $K_f$ contains a spanning copy of $F$. Thus, $K_n$ contains a perfect $F$-tiling, as desired. We defer the formal proof of Theorem~\ref{thm:character} to Section~\ref{subsec:proof}. In the following subsection we will see some applications of Theorems~\ref{thm:turanable} and \ref{thm:character} to study some properties of the families of Tur\'anable and tileable graphs. In particular, in Proposition~\ref{prop::Dn} we apply Theorem~\ref{thm:character} to prove that the notions of tileable and Tur\'anable are genuinely different. More precisely, we provide an infinite family of Tur\'anable edge-ordered graphs that are not tileable. \subsection{Tur\'anable and tileable graphs}\label{subsec:examples} Given an edge-ordered graph~$F$ we define the \emph{reverse of~$F$}, denoted by~$\back{F}$, as the same graph but in which all relations in the total order of the edges of~$F$ are reversed.
More precisely, for~$F=(V,E)$ we have~$\back{F}=(V,E)$ and for every~$e_1,e_2\in E$ we have $e_1\leq_{\back{F}} e_2$ if and only if~$e_2\leq_F e_1$, where~$\leq_F$ and~$\leq_{\back{F}}$ are the total orders of~$F$ and~$\back{F}$ respectively. It is easy to see that~$F$ is Tur\'anable if and only if~$\back{F}$ is Tur\'anable. Indeed, let~$F$ be a Tur\'anable edge-ordered graph and consider any edge-ordered copy of~$K_t$, where~$t\in \mathbb N$ is given by Definition~\ref{def:Turanable}. Then $\back{K_t}$ contains a copy of~$F$, and hence,~$K_t$ contains a copy of~$\back{F}$; thus, $\back{F}$ is Tur\'anable. The same argument shows that~$F$ is tileable if and only if~$\back{F}$ is tileable. Throughout this subsection $v_i$ will denote the $i$th vertex in a canonical ordering and~$x$ will denote the special vertex of a $\star$-canonical ordering. Given edge-ordered graphs~$F$ and~$H$, we say that a map $\varphi: V(F) \longrightarrow V(H)$ is an \emph{embedding of~$F$ into $H$} if and only if \begin{itemize} \item $\varphi$ is injective, \item for every edge~$uv\in E(F)$ we have~$\varphi(u)\varphi(v)\in E(H)$, and \item for every two edges~$uv, wz\in E(F)$ such that~$uv<wz$ in the total order of~$E(F)$, we have~$\varphi(u)\varphi(v) < \varphi(w)\varphi(z)$ in the total order of~$E(H)$. \end{itemize} Observe that the fact that~$H$ contains a copy of $F$ means there is an embedding from $F$ into $H$. When the embedding $\varphi$ is clear from the context we do not explicitly state it, and we simply write~$u\mapsto v$ instead of~$\varphi(u) = v$. We now present a Tur\'anable graph that is not tileable. Consider the edge-ordered graph $D_n$ defined in \cite{gmnptv} as a graph on vertices $u_1,\dots,u_n$ containing all edges incident to $u_1$ or $u_n$. The edges are ordered as $u_1u_2<u_1u_3<\dots<u_1u_n<u_2u_n<\dots < u_{n-1}u_n$. \begin{proposition} \label{prop::Dn} Let $n \geq 4$. Then $D_n$ is Tur\'anable but is not tileable. 
\end{proposition} \begin{proof} The fact that~$D_n$ is Tur\'anable for every~$n\geq 4$ was proven in~\cite[Proposition~2.12]{gmnptv}, so we only need to show that it is not tileable. We prove it is impossible to embed~$D_n$ into a $\star$-canonically edge-ordered{} $K_n$ of type larger decreasing whose canonical part is a min ordering. Let~$\{x, v_1, \dots, v_{n-1}\}$ be the vertices of such a~$\star$-canonical ordering{} of~$K_n$ with special vertex $x$. Assume for a contradiction that there is an embedding of~$D_n$ into this edge-ordered $K_n$. Suppose first that the vertex~$u_1$ is embedded onto the special vertex~$x$. Then, there are vertices~$v_i,v_j\in V(K_n)$ such that in our embedding we have $$u_k \mapsto v_i \qquad\text{and}\qquad u_n\mapsto v_j\,,$$ for some $k \in [n-1]\setminus\{1\}$. This immediately yields a contradiction since $u_1u_k < u_ku_n$ in~$D_n$ whilst in this type of $\star$-canonical ordering{} $x v_i > v_iv_j$ for every distinct~$i,j\in [n-1]$. Suppose now that $u_i$ is embedded onto the special vertex~$x$ where~$i\in[n-1]\setminus \{1\}$. Then there are vertices~$v_j,v_k,v_\ell\in V(K_n)$ such that $$u_1 \mapsto v_j\,, \qquad u_m \mapsto v_k\,, \qquad\text{and}\qquad u_n\mapsto v_\ell\,,$$ for some~$m\in [n-1]\setminus \{1\}$. Similarly to before, this yields a contradiction because $u_1u_i < u_mu_n$ while in this type of $\star$-canonical ordering{} we have $xv_j > v_kv_\ell$ for every distinct $j,k, \ell\in [n-1]$. The only remaining case is when~$u_n$ is embedded onto the special vertex~$x$. Thus, the edges~$u_1u_n < u_2u_n <\dots < u_{n-1}u_n$ are embedded onto the edges of the form~$v_ix$ for~$i\in[n-1]$. In fact, since the $\star$-canonical ordering{} is larger decreasing, we must have that $$u_i \mapsto v_{n-i} \qquad \text{for every } i\in [n-1]\,.$$ However, this yields a contradiction; indeed, while we have that~$u_1u_2 < u_1u_3$ in~$D_n$ we have~$v_{n-1}v_{n-2}> v_{n-1}v_{n-3}$ in the $\star$-canonically edge-ordered{} $K_n$. 
\end{proof} We use Proposition~\ref{prop::Dn} to prove that there is no tileable edge-ordering of~$K_4^-$. \begin{proposition} \label{prop::K4-} No edge-ordering of $K_4^-$ is tileable. \end{proposition} \begin{proof} To prove the proposition we will show that the only Tur\'anable edge-ordering of $K_4^-$ is in fact $D_4$, which, due to Proposition~\ref{prop::Dn}, is not tileable. As stated in \cite[Section 5]{gmnptv}, the only Tur\'anable edge-ordering of $C_4$ with vertices~$\{w_1,w_2,w_3,w_4\}$ is given by $w_1w_2<w_2w_3 < w_1w_4 < w_3w_4$; we denote this edge-ordered graph by~$C_4^{1243}$. Thus, in any Tur\'anable edge-ordering of~$K_4^-$ the underlying~$C_4$ must be a copy of~$C_4^{1243}$. Starting with such a copy of $C_4^{1243}$ we obtain a~$K_4^-$ by either adding the edge~$w_1w_3$ or~$w_2w_4$. Take an embedding of~$C_4^{1243}$ into the inverse min canonical ordering of~$K_4$ given by $$ w_1\mapsto v_{i_1}\,, \quad w_2\mapsto v_{i_2}\,, \quad w_3\mapsto v_{i_3}\,, \quad \text{and} \quad w_4\mapsto v_{i_4}\,. $$ We first show that this embedding is unique and given by~\eqref{eq:C4embedding} below. Suppose that the edge~$w_1w_2$ is not embedded onto an edge containing~$v_1\in V(K_4)$; in other words,~$i_1\neq 1$ and~$i_2\neq 1$. Thus, there is a $j\in \{2,3,4\}$ such that $v_1 v_{j} = v_{i_3}v_{i_4}$. This is a contradiction, since~$v_{i_1}v_{i_2} > v_1 v_{j}$ in the inverse min canonical ordering, while~$w_1w_2< w_3w_4$ in~$C_4^{1243}$. Hence, we have that either~$i_1=1$ or~$i_2=1$. In the former case, since~$w_2w_3< w_1w_4$, we have $v_{i_2}v_{i_3}< v_{i_1}v_{i_4} = v_1v_{i_4}$. But this is a contradiction, because in the inverse min canonical ordering all edges containing $v_1$ are smaller than the edges not containing it. Therefore, we must have that $i_2 = 1$. Further, observe that~$w_1w_2<w_2w_3$ means that~$v_1v_{i_1} < v_1v_{i_3}$, which in the inverse min ordering means that~ \begin{align}\label{eq:3<1} i_3 < i_1\,.
\end{align} Since~$i_1\leq 4$ and~$i_2=1$, we have~$2\leq i_3\leq 3$. Finally, observe that if~$i_3=2$, then we have~$v_{i_1}v_{i_4}=v_3v_4$. But this is again a contradiction, since~$v_3v_4$ is the largest edge in the inverse min ordering of~$K_4$ while $w_1w_4 < w_3w_4$. Thus we get $i_3=3$, which together with~\eqref{eq:3<1}, implies that~$i_1=4$. Summarizing, we have~$i_2=1$, $i_3=3$ and~$i_1=4$, which finally gives the embedding \begin{align}\label{eq:C4embedding} w_1\mapsto v_4\,, \quad w_2\mapsto v_1\,, \quad w_3\mapsto v_3\,, \quad \text{and} \quad w_4\mapsto v_2\,\,. \end{align} Thus, any Tur\'anable edge-ordering of~$K_4^-$ obtained by adding one edge to~$C_4^{1243}$ must be embedded into the inverse min canonical ordering of~$K_4$ via \eqref{eq:C4embedding}. In this way, after adding the edge $w_2w_4$ or $w_1w_3$ to $C_4^{1243}$, the embedding~\eqref{eq:C4embedding} gives rise to the following edge-orderings of~$K_4^-$: \begin{align} w_1w_2&<w_2w_3 < w_2w_4 < w_1w_4 < w_3w_4 \qquad\text{and}\qquad \label{eq:orderingD_4}\\ w_1w_2&<w_2w_3 < w_1w_4 < w_3w_4 < w_1w_3\,, \label{eq:impossible} \end{align} respectively. The ordering \eqref{eq:orderingD_4} corresponds with the edge-ordering of~$D_4$, by taking~$u_1=w_2$, $u_2=w_1$, $u_3=w_3$, and $u_4=w_4$ (see the definition of~$D_4$ before Proposition~\ref{prop::Dn}). For~\eqref{eq:impossible}, we shall prove that such an edge-ordering of~$K_4^-$ cannot be embedded into the inverse max canonical ordering of~$K_4$, and therefore, it is not Tur\'anable. More precisely, we show that~$C_4^{1243}$ has only one possible embedding into the inverse max ordering of~$K_4$, but the embedding of the edge~$w_1w_3$ will lie in a different `position' than the one given by~\eqref{eq:impossible}. Let~$w_1'$, $w_2'$, $w_3'$, $w_4'$ be the vertices of~$\back{C}_4^{1243}$, the reverse ordering of $C_4^{1243}$, with edges $$w_1'w_2'>w_2'w_3' > w_1'w_4' > w_3'w_4'\,.$$ Here we now denote $\back{C}_4^{1243}$ by~$C_4^{4312}$. 
Recall that the inverse max ordering of~$K_4$ with vertices~$\{v_1', v_2', v_3',v_4'\}$ corresponds with the reverse of the inverse min ordering on~$\{v_1,v_2,v_3,v_4\}$ by relabeling the vertices as~$v_1'=v_4$, $v_2'=v_3$, $v_3'=v_2$, and~$v_4'=v_1$. Applying reasoning symmetric to that above, we see that there is only one possible embedding of~$C_4^{4312}$ into the inverse max ordering of~$K_4$. Namely, \begin{align}\label{eq:C4maxembedding} w_1'\mapsto v_1'\,, \quad w_2'\mapsto v_4'\,, \quad w_3'\mapsto v_2'\,, \quad \text{and} \quad w_4'\mapsto v_3'\,. \end{align} Moreover, notice that~$C_4^{1243}$ is isomorphic to~$C_4^{4312}$ by taking~$w_1=w_3'$, $w_2=w_4'$, $w_3=w_1'$, and~$w_4=w_2'$, where~$w_1, w_2, w_3, w_4$ are the vertices of~$C_4^{1243}$ as in the beginning of the proof. Thus, an embedding of $C_4^{1243}$ into an inverse max ordering of~$K_4$ must follow \eqref{eq:C4maxembedding} via this isomorphism to~$C_4^{4312}$. This corresponds to \begin{align*} w_1\mapsto v_2'\,, \quad w_2\mapsto v_3'\,, \quad w_3\mapsto v_1'\,, \quad \text{and} \quad w_4\mapsto v_4'\,. \end{align*} Finally, the edge~$w_1w_3$ is embedded in this way onto~$v_1'v_2'$, which is the smallest edge of the inverse max ordering. In other words, we obtain $$w_1w_3<w_1w_2<w_2w_3 < w_1w_4 < w_3w_4\,,$$ which is incompatible with \eqref{eq:impossible}. \end{proof} The following two propositions are useful to generate tileable (or Tur\'anable) graphs by appropriately adding a vertex and an edge to a tileable (or Tur\'anable) graph. \begin{proposition}\label{lem::add_one_edge_turan} Let $F$ be a Tur\'anable edge-ordered graph and $v \in V(F)$ a vertex incident to the smallest edge in $F$. Let $F'$ be the edge-ordered graph obtained from $F$ by adding a new vertex $v'$ and an edge between $v$ and $v'$ smaller than all edges in $F$. Then $F'$ is Tur\'anable. \end{proposition} \begin{proof} Let $|F|:=f$ and $vu$ be the smallest edge in $F$.
We want to embed $F'$ into each canonical ordering of $K_{f+1}$. Observe that for the min ordering, inverse min ordering, and max ordering of $K_{f+1}$ we have \begin{align}\label{eq:1i<ij} v_1v_i < v_iv_j \text{ for every distinct }i, j\in \{2,\dots, f+1\}. \end{align} For these canonical orderings we use that~$F$ is Tur\'anable and Fact~\ref{fact:selfidentical} to embed $F$ into $K_{f+1}\big[\{v_2, \linebreak[1] \dots, v_{f+1}\}\big]$ and then we embed~$v'$ onto $v_1$. Let~$i,j\geq 2$ be such that~$v$ and~$u$ are embedded in this way onto the vertices~$v_i$ and~$v_j$ respectively. Since~$vu$ is the minimal edge in~$F$, the edge~$v_iv_j$ is minimal in our embedding of~$F$ into $K_{f+1}\big[\{v_2, \dots, v_{f+1}\}\big]$. Thus, since $v'\mapsto v_1$ and~$v_1v_i < v_iv_j$ by~\eqref{eq:1i<ij}, this embedding gives rise to a copy of~$F'$ in these canonical edge-orderings of~$K_{f+1}$. For the inverse max ordering, we proceed as follows. Let~$t\in [f]$ be such that there is an embedding of $F$ into an inverse max ordering of $K_{f}$ where $v_{t}$ plays the role of~$v$. Since~$F$ is Tur\'anable and due to Fact~\ref{fact:selfidentical}, we can embed $F$ into $K_{f+1}[\{v_1, \dots, v_{t-1}, v_{t+1},\linebreak[1] \dots, v_{f+1}\}]$ with $v_{t+1}$ playing the role of~$v$. We extend this embedding by assigning~$v'$ to $v_t$. In this way $v'v$ is mapped to $v_tv_{t+1}$ and~$uv$ is mapped to an edge of the form~$v_{t+1}v_i$ for $i\neq t$. By the definition of the inverse max ordering we have~$v_tv_{t+1}<v_{t+1}v_i$, i.e., the embedding of the edge $vv'$ is smaller than the embedding of the edge $uv$. Thus, the inverse max ordering of~$K_{f+1}$ contains a copy of~$F'$. \end{proof} \begin{proposition} \label{lem::add_one_edge} Let $F$ be a tileable edge-ordered graph and $v\in V(F)$ a vertex incident to the smallest edge in $F$. Let $F'$ be the edge-ordered graph obtained from $F$ by adding a new vertex $v'$ and an edge between $v$ and $v'$ smaller than all edges in $F$.
Then $F'$ is tileable. \end{proposition} \begin{proof} Let $|F|:=f$ and $uv$ be the smallest edge in $F$. We want to embed $F'$ into each $\star$-canonical ordering{} of $K_{f+1}$. We divide the proof into cases depending on the type of the $\star$-canonical ordering{}. For smaller orderings of~$K_{f+1}$, we use that~$F$ is Tur\'anable to first embed $F$ into a canonical ordering of the same type as the canonical part~$K_{f+1}-x$. We then extend this embedding by setting $v' \mapsto x$. The edge~$vv'$ is embedded onto an edge of the form~$xv_j$ with $j\in [f]$. Thus, by definition of the smaller orderings, our embedding corresponds to a copy of~$F'$ in $K_{f+1}$. In fact, in the argument above we only used that the smaller orderings satisfy \begin{align}\label{eq:smaller} xv_i < v_iv_j \text{ for every distinct }i,j\in [f]\,, \end{align} since we only need that the embedding of~$vv'$ is smaller than the embedding of the smallest edge in~$F$. More precisely, observe that if~$v'\mapsto x$ and~$v\mapsto v_i$ for some~$i\in [f]$, then the edge~$vv'$ in~$F'$ is sent to the edge $xv_i$ and the minimal edge of $F$,~$uv$, is sent to an edge of the form~$v_iv_j$ in $K_{f+1}$ for a~$j\in [f]\setminus \{i\}$. Thus, if \eqref{eq:smaller} holds, then the embedding of~$vv'$ is smaller than the embedding of the smallest edge in~$F$, yielding a copy of~$F'$. It is easy to check that~\eqref{eq:smaller} holds for a middle increasing ordering whose canonical part is an inverse max ordering. Indeed, following the labelings in Definitions~\ref{def:canonical} and~\ref{def:starcanonical}, for a middle increasing ordering whose canonical part is an inverse max ordering we have \begin{align*} L_4(xv_i)&=2fi < 2fj-i+f = L_4(v_iv_j) \, \quad \text{ for } 1\leq i<j \leq f\,, \quad\text{and}\quad\\ L_4(xv_i)&=2fi < 2fi-j+f = L_4(v_iv_j) \, \quad \text{ for } 1\leq j < i\leq f\,. \end{align*} Thus, for this~$\star$-canonical ordering{} we can proceed as described above.
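For illustration (this small check is not needed for the argument), evaluating the labels above for $f=3$ gives

```latex
% Illustrative evaluation of the labels L_4 for f=3 (not needed for the proof):
\begin{align*}
L_4(xv_1)&=6\,, & L_4(xv_2)&=12\,, & L_4(xv_3)&=18\,,\\
L_4(v_1v_2)&=14\,, & L_4(v_2v_3)&=19\,, & L_4(v_1v_3)&=20\,,
\end{align*}
% so the six edges are ordered as
%   xv_1 < xv_2 < v_1v_2 < xv_3 < v_2v_3 < v_1v_3,
% and indeed xv_i < v_iv_j for all distinct i,j, as in \eqref{eq:smaller}.
```

Note that in this example the $x$-edges do not all precede the canonical edges (e.g.\ $xv_3>v_1v_2$); only the pairwise comparisons in \eqref{eq:smaller} hold, and these suffice.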
We shall now address the remaining $\star$-canonical ordering{}s of~$K_{f+1}$; these are all larger orderings and all middle increasing orderings except when the canonical part is an inverse max ordering. For these $\star$-canonical ordering s of~$K_{f+1}$, we will proceed differently depending on how~$F$ embeds into a~$\star$-canonically edge-ordered{} $K_f$ of the same type as~$K_{f+1}$. Recall that such embeddings exist due to Fact~\ref{fact:selfidentical_star} and because $F$ is a tileable edge-ordered graph. First, note that for all remaining $\star$-canonical ordering s \begin{align}\label{eq:embedding} v_1v_i < xv_i \quad \text{for every }2\le i \le f\,. \end{align} Indeed, for the larger orderings this follows directly from the definition. For the middle increasing orderings whose canonical part is not the inverse max ordering, we just need to check the following inequalities given by the labelings in Definitions~\ref{def:canonical} and~\ref{def:starcanonical} for $2\leq i\leq f$: \begin{itemize}[leftmargin=0.9cm] \item for the canonical part being a min ordering $L_1(v_1v_i) = 2f + i -1 < 2fi = L_1(v_ix)$\,, \item for the canonical part being a max ordering $L_2(v_1v_i) = (2f-1)i + 1< 2fi= L_2(v_ix)$\,, and \item for the canonical part being an inverse min ordering $L_3(v_1v_i) = (2f+1)-i < 2fi = L_3(v_ix)$\,. \end{itemize} Note that \eqref{eq:embedding} does not hold for smaller orderings or for middle increasing orderings whose canonical part is an inverse max ordering. Now suppose that in an embedding of~$F$ into a $\star$-canonically edge-ordered{} $K_f$ of the same type as~$K_{f+1}$, vertex $u$ is embedded as the special vertex $x$. Then, we embed~$F'$ into the~$\star$-canonically edge-ordered{} $K_{f+1}$ by first embedding $F$ into $K_{f+1}[\{x,v_2, \dots, v_{f}\}]$ with~$u$ as the special vertex, and then mapping $v'$ to $v_1$.
To check that this is an embedding of~$F'$ into $K_{f+1}$ observe that $v'v$ is embedded onto $v_1v_i$ and~$uv$ is embedded onto~$xv_i$, for some~$i\geq 2$. Since $uv$ is the smallest edge in $F$, the edge~$xv_i$ is the smallest in our embedding of $F$ into~$K_{f+1}[\{x,v_2, \dots, v_{f}\}]$. Due to~\eqref{eq:embedding}, $v_1v_i < xv_i$, and so the edge $v'v$ is mapped to an edge smaller than all the edges in our copy of $F$. This yields a copy of $F'$ in~$K_{f+1}$. Next suppose that in an embedding of~$F$ into a $\star$-canonically edge-ordered{} $K_f$ of the same type as~$K_{f+1}$, $v$ is embedded onto the special vertex $x$. If the $\star$-canonical ordering{} is increasing, then we embed $F$ into $K_{f+1}[\{x,v_2, \dots, v_{f}\}]$ with~$v$ as the special vertex, and map $v'$ to $v_1$. Thus, the edge~$vv'$ is embedded onto~$xv_1$ and~$uv$ is embedded onto an edge $xv_i$ for some $i\geq 2$. Since the $\star$-canonical ordering{} is increasing we have~$xv_1<xv_i$. As before this yields an embedding of~$F'$ into~$K_{f+1}$. If the $\star$-canonical ordering{} is decreasing we proceed analogously by first embedding $F$ into $K_{f+1}[\{x,v_1, \dots, v_{f-1}\}]$ and then extending that embedding by assigning $v'$ to $v_f$. Finally, suppose that in all embeddings of~$F$ into a $\star$-canonically edge-ordered{} $K_f$ of the same type as~$K_{f+1}$, neither $u$ nor $v$ is embedded as the special vertex $x$. Then we proceed similarly to the proof of Proposition~\ref{lem::add_one_edge_turan} above. If the canonical part is a min ordering, an inverse min ordering, or a max ordering, then we first embed~$F$ into~$K_{f+1}\big[\{x, v_2, \dots, v_{f}\}\big]$ and then~$v'$ onto~$v_1$. Let~$i,j\geq 2$ be such that~$v$ and $u$ are embedded in this way onto vertices~$v_i$ and~$v_j$ respectively, both in the canonical part of $K_{f+1}\big[\{x, v_2, \dots, v_{f}\}\big]$.
Then, since \eqref{eq:1i<ij} holds in this context for the edge-ordering of the canonical part, we have that~$v_1v_i<v_iv_j$. Hence, $v'v$ is mapped to an edge, $v_1v_i$, that is smaller than the edge $v_iv_j$ that $uv$ is mapped to. As before this yields an embedding of~$F'$ into~$K_{f+1}$. If the canonical part is an inverse max ordering, let~$t\in[f]$ be such that there is an embedding of~$F$ into the $\star$-canonically edge-ordered{} $K_{f}$ of the same type as~$K_{f+1}$ for which~$v\mapsto v_t$. Then we embed $F$ into $K_{f+1}[\{x,v_1, \dots, v_{t-1}, v_{t+1}, \dots, v_{f}\}]$ in such a way that~$v\mapsto v_{t+1}$. We are assuming that in every embedding of~$F$ into a~$\star$-canonically edge-ordered{} $K_f$ of the same type as~$K_{f+1}$, neither $v$ nor $u$ is embedded as the special vertex~$x$; so there is an $i\in [f]\setminus\{t\}$ such that~$u\mapsto v_i$ in our embedding. Extend this embedding by assigning~$v'$ to $v_t$. In this way we have $$v'v\mapsto v_tv_{t+1} \qquad\text{and}\qquad uv \mapsto v_{t+1}v_i\,.$$ In the inverse max ordering we have~$v_tv_{t+1}<v_{t+1}v_i$, which means that the edge that~$v'v$ is mapped to is smaller than the edge that~$uv$ is mapped to. As before this yields a copy of~$F'$ in~$K_{f+1}$. \end{proof} Using Theorems~\ref{thm:turanable} and~\ref{thm:character} it is easy to see that any Tur\'anable edge-ordered graph becomes tileable after adding an isolated vertex. More interestingly, the next proposition implies that given any connected Tur\'anable graph~$F$ we can obtain a connected tileable graph on~$\vert F\vert +2$ vertices. Given a Tur\'anable edge-ordered graph $F$ on~$f$ vertices, we say a vertex~$v\in V(F)$ is \emph{minimal} if it plays the role of $v_1$ in an embedding of~$F$ into a min ordering of~$K_f$. Similarly, we say that~$v$ is \emph{maximal} if it plays the role of $v_f$ in an embedding of~$F$ into a max ordering of~$K_f$.
By Theorem~\ref{thm:turanable} a Tur\'anable graph always contains at least one minimal and one maximal vertex\footnote{We highlight that there might be more than one minimal (resp. maximal) vertex, as there might be more than one embedding of $F$ into a min (resp. max) ordering. For example, in a monotone path $u_1u_2u_3u_4$, we have that $u_1$ and~$u_2$ can play the role of~$v_1$ in a min ordering.}. Observe that the edges incident to a minimal (resp. maximal) vertex are always smaller (resp. larger) than the edges not incident to it. We show that starting with a Tur\'anable graph we can add two pendant edges, one to a minimal vertex and one to a maximal vertex, and obtain a tileable graph. This result, together with the example of a Tur\'anable graph $D_n$ that is not tileable (see Proposition~\ref{prop::Dn}), implies the perhaps surprising property that being tileable is not closed under taking connected subgraphs. \begin{proposition}\label{prop::add_two_edges} Let $F$ be an edge-ordered Tur\'anable graph with $\underline{v}, \overline{v}\in V(F)$ being distinct non-isolated minimal and maximal vertices respectively. Let $F'$ be constructed by adding two new vertices $\underline{u}, \overline{u}$ and the edges $\underline{u}\underline{v}$ and~$\overline{u}\overline{v}$ such that $\underline{u}\underline{v}$ is smaller than all other edges and $\overline{u}\overline{v}$ is larger than all other edges. Then $F'$ is tileable. \end{proposition} \begin{proof} Let $f:=|F|$. As $F$ is Tur\'anable, by Proposition~\ref{lem::add_one_edge_turan} we have that $F' - \overline{u}$ is Tur\'anable as well. Applying Proposition~\ref{lem::add_one_edge_turan} to the reverse of~$F'-\underline{u}$ we get that~$F'-\underline{u}$ is Tur\'anable too. Thus, due to Theorem~\ref{thm:turanable} we can embed $F' - \underline{u}$ and $F' - \overline{u}$ into any canonical ordering of $K_{f+1}$. 
We will use these embeddings to find embeddings of $F'$ into each $\star$-canonical ordering{} of $K_{f+2}$. For the smaller orderings of~$K_{f+2}$, we first embed $F'-\underline{u}$ into the canonical part $K_{f+2} - x$, and then embed $\underline{u}$ as the special vertex $x$. In this way, the edge $\underline{u}\underline{v}$ is embedded onto an edge of the form $xv_i$; therefore, by definition of the smaller orderings, the edge that $\underline{u}\underline{v}$ is embedded onto is smaller than all edges in the embedding of $F'-\underline{u}$. This gives rise to a copy of~$F'$. For the larger orderings of~$K_{f+2}$ the proof is analogous, by embedding $F'-\overline{u}$ into the canonical part $K_{f+2} - x$ and then embedding $\overline{u}$ onto $x$. For the middle increasing $\star$-canonical ordering s of~$K_{f+2}$, we now split into subcases depending on their canonical part. If the canonical part is a min ordering, since $\underline{v}$ is a minimal vertex in $F$, there is an embedding of $F$ into~$K_{f+2}[\{v_1,\dots,v_f\}]$ such that $\underline{v} \mapsto v_1$. Let~$i\in[f]\setminus \{1\}$ be such that $\overline{v} \mapsto v_i$ in that embedding. Observe that, for every edge~$w_1w_2$ in~$F$ such that~$w_1\mapsto v_j$ and $w_2\mapsto v_k$ for a pair of indices~$j,k\in [f]\setminus \{i\}$, we have \begin{align}\label{eq:maximaledge} v_jv_k < v_iv_{f+1} \end{align} in the edge-ordering of~$K_{f+2}$. To see this, observe that since~$\overline{v}$ is maximal and not isolated in $F$, $\overline{v}$ must be contained in the maximal edge of~$F$, and hence, the embedding of the maximal edge must be of the form~$v_iv_\ell$ for some~$\ell\in [f]\setminus\{i\}$. Thus, if~\eqref{eq:maximaledge} does not hold for some edge~$w_1w_2$ in~$F$, then $$v_jv_k> v_iv_{f+1} > v_iv_\ell\,,$$ where the last inequality holds since the canonical part is a min ordering and~$\ell<f+1$. However, this is a contradiction since the maximal edge in $F$ is embedded onto $v_iv_\ell$.
Now we extend this embedding to an embedding of~$F'-\underline{u}$ by taking~$\overline{u}\mapsto v_{f+1}$. Indeed, the edge~$\overline{u}\overline{v}$ is embedded onto~$v_iv_{f+1}$ which, due to~\eqref{eq:maximaledge}, is larger than any edge in our copy of~$F$, implying a copy of~$F'-\underline{u}$ in the canonical part~$K_{f+2}-x$. Finally, extend the embedding further by taking $\underline{u}\mapsto x$. Observe that the edge~$\underline{u}\underline{v}$ is embedded in this way onto the edge~$xv_1$. Moreover, by Remark~\ref{rem:middle}, $xv_1<v_1v_2$ and $v_1v_2$ is the smallest edge in the canonical part by Definition~\ref{def:canonical}. Therefore, the edge that $\underline{u}\underline{v}$ is embedded onto is smaller than all other edges used. Thus, we find a copy of~$F'$. If the canonical part is an inverse min ordering, we embed~$F'-\overline{u}$ into the canonical part and then take~$\overline{u} \mapsto x$. Let~$v_i$ be the vertex $\overline{v}$ is embedded onto (where~$i\in [f]$). Note that \begin{align}\label{eq:invmin} xv_i > \max\{v_iv_j \colon j\in [f]\setminus \{i\}\}\,. \end{align} Indeed, using the labelings given by Definitions~\ref{def:canonical} and \ref{def:starcanonical} we have $L_3(xv_i) = 2fi > (2f+1)i - j =L_3(v_iv_j)$ for every~$i<j\leq f$ and $L_3(xv_i) = 2fi > (2f+1)j - i =L_3(v_iv_j)$ for every~$1\leq j < i$. Thus, \eqref{eq:invmin} implies that the edge $xv_i$ that $\overline{u}\overline{v}$ is embedded onto is larger than any of the edges in our copy of $F'-\overline{u}$ that contain~$\overline{v}$. Since~$\overline{v}$ is a maximal non-isolated vertex in $F$, $\overline{v}$ is contained in the maximal edge of~$F$. The maximal edge of~$F$ is also the maximal edge of~$F'-\overline{u}$ and therefore, the edge $xv_i$ that $\overline{u}\overline{v}$ is embedded onto is larger than any of the edges in our copy of $F'-\overline{u}$. As before, this yields a copy of~$F'$ in~$K_{f+2}$.
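As an illustration of \eqref{eq:invmin} (again not needed for the argument), evaluating the labels $L_3$ for $f=3$ gives

```latex
% Illustrative evaluation of the labels L_3 for f=3 (not needed for the proof):
\begin{align*}
L_3(v_1v_3)&=4\,, & L_3(v_1v_2)&=5\,, & L_3(v_2v_3)&=11\,,\\
L_3(xv_1)&=6\,, & L_3(xv_2)&=12\,, & L_3(xv_3)&=18\,,
\end{align*}
% so xv_1 > max{v_1v_2, v_1v_3},  xv_2 > max{v_1v_2, v_2v_3},  and
%    xv_3 > max{v_1v_3, v_2v_3},  as claimed in \eqref{eq:invmin}.
```

Here too the $x$-edges need not dominate all canonical edges globally (e.g.\ $xv_1<v_2v_3$); only the comparisons against edges sharing the endpoint $v_i$ are needed.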
Finally, if the canonical part is a max ordering or an inverse max ordering we argue as before, but for the reverse graph~$\back{F'}$. More precisely, note first that for~$\back{F}$ the vertices $\underline{v}$ and~$\overline{v}$ are maximal and minimal respectively. Moreover, if~$F''$ is constructed from~$\back{F}$ by adding two new vertices $\underline w, \overline w$ and the edges $\underline w\underline{v}$ and~$\overline w\overline{v}$ such that $\underline w\underline{v}$ is larger than all other edges and~$\overline w\overline{v}$ is smaller than all other edges, then $F''$ is precisely the reverse of~$F'$. By the argument above, a middle increasing ordering of~$K_{f+2}$ whose canonical part is a min or an inverse min ordering contains a copy of~$F''$. Hence, the reverse of that ordering contains a copy of~$F'$. We conclude by noticing that the reverse of a middle increasing ordering whose canonical part is the min (resp. inverse min) ordering is the middle increasing ordering whose canonical part is the max (resp. inverse max) ordering. \end{proof} Proposition~\ref{prop::K4-} implies that no edge-ordering of $K_4^-$ is tileable. In contrast, the following corollary of Proposition~\ref{prop::add_two_edges} asserts that there are connected tileable edge-ordered graphs containing $K_4^-$. Recall $D_4$ is a Tur\'anable edge-ordering of $K_4^-$; further notice $D_4$ has unique minimal and maximal vertices, and they are distinct. \begin{corollary}\label{cor:K_4-} For every even~$n\geq 6$ there is a connected~$n$-vertex tileable edge-ordered graph~$F_n$ with~$K_4^-\subseteq F_n$. \end{corollary} \begin{proof} We proceed by induction on~$n\geq 6$. For~$n=6$, apply Proposition \ref{prop::add_two_edges} with~$F:=D_4$, and let $F_6$ be the resulting edge-ordered graph. Since~$F_6$ is tileable and~$K_4^-\subseteq F_6$, we establish the base case. Notice that one of the new vertices in $F_6$ is minimal and the other is maximal.
For the induction step, suppose that~$F_n$ is a connected $n$-vertex tileable edge-ordered graph with distinct minimal and maximal vertices so that~$K_4^-\subseteq F_n$. Then we apply Proposition~\ref{prop::add_two_edges} with~$F_n$ playing the role of~$F$ and let~$F_{n+2}$ be the output of this proposition. Notice that $F_{n+2}$ is a connected $(n+2)$-vertex tileable edge-ordered graph with~$K_4^-\subseteq F_n\subseteq F_{n+2}$. Moreover, $F_{n+2}$ will contain distinct minimal and maximal vertices (the two new vertices). \end{proof} In Proposition~\ref{prop::add_two_edges} we obtain a tileable edge-ordered graph from a Tur\'anable edge-ordered graph by adding \emph{two} pendant edges. The following proposition shows that adding only one such pendant edge is, in general, not enough to create a tileable edge-ordered graph. Recall we write $u_1, \dots, u_n$ for the vertices of $D_n$ where $u_1$ and $u_n$ are the unique minimal and maximal vertices in $D_n$, respectively. \begin{proposition} For $n\ge 4$, let $D_n^+$ be the edge-ordered graph obtained from $D_n$ by adding a new vertex $w$ and the edge $u_nw$, larger than all the edges in $D_n$. Let $D_n^-$ be the edge-ordered graph obtained from $D_n$ by adding a new vertex $u$ and the edge $u_1u$, smaller than all the edges in $D_n$. Then neither $D_n^+$ nor $D_n^-$ is tileable. \end{proposition} \begin{proof} We only consider $D_n^+$ as the argument for $D_n^-$ is analogous. Suppose for a contradiction there is an embedding of~$D_n^+$ into a smaller decreasing ordering of~$K_{n+1}$ whose canonical part is a min ordering. First, since $u_1u_n<u_iu_n$ for $1<i<n$ and $u_nw$ is the largest edge in $D_n^+$,~$u_1$ is the only vertex in~$D_n^+$ such that all edges incident to it are smaller than all other edges. This means that we must have $u_1\mapsto x$. Recall that~$u_1u_2<\dots<u_1u_n$ in $D_n$ and that, since the $\star$-canonical ordering{} of~$K_{n+1}$ is smaller decreasing, $v_1x > \dots > v_nx$.
Thus, given~$1<i<j\leq n$, \begin{align*} \text{if $u_i \mapsto v_k$ and $u_{j}\mapsto v_{\ell}$ then~$\ell<k$.} \end{align*} In particular, if we take~$i,j,k\in [n]$ such that $$u_n \mapsto v_i\,, \qquad u_3 \mapsto v_j\,, \qquad\text{and}\qquad u_2 \mapsto v_k,$$ then~$i<j<k$. However, this is a contradiction, because while~$u_nu_2<u_nu_3$ in~$D_n^+$, we have~$v_i v_k > v_i v_j$ in~$K_{n+1}$. \end{proof} Since the operations described in Propositions~\ref{lem::add_one_edge} and~\ref{prop::add_two_edges} add pendant edges, the case in which the underlying graph is a cycle is not directly covered by them. In the following two propositions we study the tileability of \emph{monotone cycles}. We say that an edge-ordered cycle~$C_n$ with $V(C_n)=\{u_1,\dots, u_n\}$ is \emph{monotone} if the edges are ordered as $u_1u_2<u_2u_3<\dots<u_{n-1}u_n<u_nu_1$. \begin{proposition}\label{prop::monotone_odd} Monotone cycles of odd length are tileable. \end{proposition} \begin{proof} It suffices to find a spanning monotone cycle in every $\star$-canonical ordering{} of $K_{n+1}$ where $n$ is even. For this, we show that every canonical ordering of $K_n$ contains a monotone spanning path that can be extended through the special vertex $x$, joining its two ends, so that the resulting cycle is monotone. We now define four paths in the canonical orderings with vertex set~$\{v_1,\dots, v_n\}$, and state in which canonical orderings they are in fact monotone paths. \begin{itemize} \item \emph{Ordinary}: $v_1v_2v_3\dots v_n$ is monotone in all four canonical orderings. \item \emph{Small}: $v_2v_3\dots v_nv_1$ is monotone in the inverse max ordering. \item \emph{Big}: $v_nv_1v_2\dots v_{n-1}$ is monotone in the inverse min ordering. \item \emph{Jumpy}: $v_{n/2+1}v_1v_{n/2+2}v_2 \cdots v_nv_{n/2}$ is monotone in the min ordering and max ordering.
\end{itemize} For each~$\star$-canonical ordering{} of~$K_{n+1}$, we now show how to extend one of the previous monotone paths into a spanning monotone cycle using the special vertex~$x$. For all larger/smaller decreasing orderings, we simply extend the ordinary path by adding the special vertex $x$ `between' $v_n$ and $v_1$. The resulting cycle is monotone since, by Definition~\ref{def:starcanonical}, \begin{itemize} \item for larger decreasing orderings $ v_1v_2 < \ldots < v_{n-1}v_{n} < v_nx < xv_1 \,;$ \item for smaller decreasing orderings $ v_nx< xv_1< v_1v_2 < \ldots < v_{n-1}v_n \,.$ \end{itemize} The remaining $\star$-canonical ordering s are all increasing. We split the analysis into cases depending on their canonical part. Suppose first that the canonical part is a min or a max ordering. For the middle increasing ordering, observe that Remark~\ref{rem:middle} implies that for the min and max orderings,~$xv_1$ and~$xv_n$ are the smallest and largest edges respectively. Then, we simply take the ordinary path and add the special vertex between $v_n$ and $v_1$ to get a monotone cycle $$ xv_1 < v_1v_2 < \ldots < v_{n-1}v_n < v_nx \,.$$ If the ordering is smaller or larger increasing we extend a jumpy path by adding the special vertex $x$ between $v_{n/2}$ and $v_{n/2+1}$ as we have~$xv_{n/2}<xv_{n/2+1}$ for all increasing orderings. Observe that the resulting cycle is monotone, since \begin{itemize} \item for larger increasing orderings $ v_{n/2+1}v_1 < \ldots < v_{n}v_{n/2} < v_{n/2}x < xv_{n/2+1} \,;$ \item for smaller increasing orderings $ v_{n/2}x < xv_{n/2+1} < v_{n/2+1}v_1 < \ldots < v_{n}v_{n/2} \,.$ \end{itemize} Suppose now that the canonical part is an inverse min ordering. We extend the big path by adding the special vertex between $v_{n-1}$ and $v_n$.
By Definition~\ref{def:starcanonical} and Remark~\ref{rem:middle}, observe that for the larger and middle increasing orderings, $$ v_nv_1 < \ldots < v_{n-2}v_{n-1} < v_{n-1}x < xv_{n}\,,$$ while for the smaller increasing ordering, $$ v_{n-1}x < xv_{n} < v_nv_1 < \ldots < v_{n-2}v_{n-1} \,.$$ Finally, suppose the canonical part is an inverse max ordering; we extend the small path by adding the special vertex between $v_1$ and $v_2$. Indeed, by Definition~\ref{def:starcanonical} and Remark~\ref{rem:middle}, observe that for the smaller and middle increasing orderings, $$ v_1 x < xv_2 < v_2v_3 < \ldots < v_{n} v_1 \,,$$ while for the larger increasing ordering, $$ v_2v_3 < \ldots < v_{n} v_1 <v_1 x<xv_2 \,. \eqno\qedhere$$ \end{proof} In stark contrast to Proposition~\ref{prop::monotone_odd}, the next result states that monotone cycles of even length are not Tur\'anable, let alone tileable. \begin{proposition}\label{lem::monotone_even} Monotone cycles of even length are not Tur\'anable. \end{proposition} \begin{proof} By Theorem~\ref{thm:turanable}, it suffices to show that there is no spanning monotone cycle in the min canonical ordering of $K_n$ for $n$ even. We will proceed by induction on $n$. Before this, we first show that in the min ordering of $K_n$, \begin{align}\label{eq:evencycles} \text{if $v_iv_j<v_jv_k$ then~$i<k$.} \end{align} Indeed, suppose~$k<i$. Using the standard labeling of Definition~\ref{def:canonical}, we have that if~$j<k$ then~$2nj+i-1=L_1(v_iv_j) < L_1(v_jv_k) = 2nj+k-1$, which is a contradiction. If $k<j<i$, then $ 2nj+i-1=L_1(v_iv_j) <L_1(v_jv_k) =2nk+j-1 ,$ which implies that $2n(j-k) < j-i$; this is a contradiction, since $k<j$ while~$j<i$. Finally, if $k<i<j$, then $2ni+j-1=L_1(v_iv_j)<L_1(v_jv_k)=2nk+j-1$, which again is a contradiction. Let~$C^{\text{mon}}_4$ be a monotone cycle of length four with vertices $u_1,u_2,u_3,u_4$ and edges ordered as~$u_1u_2<u_2u_3<u_3u_4<u_4u_1$. 
Suppose there is an embedding of~$C_4^{\text{mon}}$ into the min ordering of~$K_4$ and let~$i,k\in [4]$ be such that $$u_1\mapsto v_i \qquad\text{and}\qquad u_3\mapsto v_k\,.$$ Since~$u_1u_2<u_2u_3$ and due to \eqref{eq:evencycles}, we have~$i<k$, but similarly, since $u_3u_4<u_4u_1$, we have~$k<i$, a contradiction. Now suppose that the min ordering of $K_n$ does not contain a spanning monotone cycle $C_n^{\text{mon}}$ for some even $n\ge 4$. Let~$\{u_1, \dots, u_{n+2}\}$ be the vertex set of a monotone cycle~$C_{n+2}^{\text{mon}}$, with edges ordered as~$u_1u_2<\dots<u_{n+1}u_{n+2}<u_{n+2}u_1$. Suppose for contradiction there is an embedding $$\varphi\colon V(C_{n+2}^{\text{mon}}) \longrightarrow V(K_{n+2})$$ of $C_{n+2}^{\text{mon}}$ into the min ordering of~$K_{n+2}$. First, we shall check that for any four vertices~$v_i,v_j,v_k,v_\ell$ in the min ordering of~$K_{n+2}$, \begin{align}\label{eq:evencycle2} \text{if $v_iv_j < v_jv_k < v_kv_\ell$, then $v_iv_\ell < v_kv_\ell$}\,. \end{align} Indeed, since~$v_iv_j < v_jv_k$, we have that \eqref{eq:evencycles} yields~$i<k$. If we suppose~$v_kv_\ell<v_iv_\ell$, then again \eqref{eq:evencycles} implies that~$k<i$, which is a contradiction, and therefore~\eqref{eq:evencycle2} follows. Due to \eqref{eq:evencycle2}, and since~$\varphi(u_1)\varphi(u_2)<\varphi(u_2)\varphi(u_3)<\varphi(u_3)\varphi(u_4)$ in~$K_{n+2}$, we have~$\varphi(u_1)\varphi(u_4) < \varphi(u_3)\varphi(u_4)< \varphi(u_4)\varphi(u_5)$. Hence, we have that $$\varphi(u_1)\varphi(u_4) < \varphi(u_4)\varphi(u_5) < \varphi(u_5)\varphi(u_6) <\dots <\varphi(u_{n+1})\varphi(u_{n+2}) < \varphi(u_{n+2})\varphi(u_{1})\,,$$ which is a copy of a monotone cycle of length~$n$ embedded into the edge-ordered graph induced by the vertices~$V(K_{n+2})\setminus \{\varphi(u_2),\varphi(u_3)\}$.
But this is a contradiction to our induction hypothesis since, due to Fact \ref{fact:selfidentical},~$V(K_{n+2})\setminus \{\varphi(u_2),\varphi(u_3)\}$ induces a min ordering of~$K_n$. \end{proof} \subsection{Proof of Theorem~\ref{thm:character}}\label{subsec:proof} First we prove the following lemma that provides an alternative characterization of tileable edge-ordered graphs. \begin{lemma}\label{lemma:character} An edge-ordered graph~$F$ is tileable if and only if there exists an~$n\in \mathbb N$ such that the following holds. Every edge-ordering of~$K_n$ such that $K_n-x$ is canonical for some vertex~$x\in V(K_n)$ contains a copy of~$F$ that covers $x$. \end{lemma} \begin{proof} For the `forwards direction', suppose that there is no~$n\in \mathbb N$ satisfying the property described in the lemma. That is, for every~$n\in\mathbb N$ there is an edge-ordering of the complete graph~$K_{n}$ such that~$K_n-x$ is canonically edge-ordered for some vertex~$x\in V(K_n)$ and~$x$ is not contained in any copy of~$F$. In particular, none of these edge-ordered complete graphs contain an~$F$-tiling covering~$x$, and so $F$ is not tileable. For the `backwards direction', let~$n\in \mathbb N$ be as in the statement of the lemma and set $f:=\vert V(F)\vert$. We shall prove that $F$ is tileable, that is, there exists a $t\in \mathbb N$ such that every edge-ordering of~$K_t$ contains a perfect~$F$-tiling. Note first that the property of~$n$ guarantees that every canonical edge-ordering of~$K_{n}$ contains a copy of~$F$. In particular, Fact~\ref{fact:selfidentical} implies that for every~$\ell\in \mathbb N$, every canonical edge-ordering of~$K_{\ell f}$ contains a perfect~$F$-tiling. Further, given $k\geq n$ where $k$ is divisible by~$f$, if~$K_{k}$ is such that $K_{k}-x$ is canonically edge-ordered for some vertex~$x\in V(K_{k})$, then $K_k$ contains a perfect~$F$-tiling. 
Indeed, by the property of~$n$,~$K_{k}$ contains a copy $F'$ of $F$ with~$x\in V(F')$; hence, as $K_{k}\setminus V(F')$ is canonically edge-ordered, the discussion above implies that $K_{k}\setminus V(F')$, and thus $K_{k}$, contains a perfect $F$-tiling. Pick~$k\geq n$ such that~$k$ is divisible by~$f$ and let~$m\in \mathbb N$ be the output of Proposition~\ref{prop:canonical} on input $k-1$. Fix~$t:=(m-1)k$ and let~$K:=K_t$ be arbitrarily edge-ordered. Apply Proposition~\ref{prop:canonical} iteratively~$m-1$ times to find vertex-disjoint copies of $K_{k-1}$ in $K$, each of them canonically edge-ordered. Let~$K_{k-1}^{(1)}, \dots, K_{k-1}^{(m-1)} \subseteq K$ be these copies and observe that exactly~$m-1$ vertices remain uncovered in $K$. That is, there are vertices~$x_1,\dots, x_{m-1}$ such that~$V(K)=\bigcup_{i\in [m-1]} \big(V(K_{k-1}^{(i)})\cup \{x_i\}\big)$. By the discussion above, for every~$i\in [m-1]$, $K[V(K_{k-1}^{(i)})\cup \{x_i\}]$ contains a perfect $F$-tiling and hence, $K$ contains a perfect~$F$-tiling as well, as required. \end{proof} In the proof of Theorem~\ref{thm:character} we deal with canonical orderings of~$K_n$ with vertex set~$\{v_1, \dots, v_n\}$. Let~$U\subseteq V(K_n)$ be a subset of size~$k\leq n$ such that~$U=\{v_{i_1}, \dots, v_{i_k}\}$ with~$i_1<\dots<i_k$. Whenever we say that we~\emph{relabel the vertices of $U$}, we mean that we will denote $v_{i_j}$ simply as~$v_j$ (and we will restrict our attention to this subset of the original vertex set). \begin{proof}[Proof of Theorem~\ref{thm:character}] Suppose $F$ is tileable; by definition there is some $n \in \mathbb N$ so that in any $\star$-canonical ordering{} of $K_{n+1}$ there is a perfect $F$-tiling. In such a perfect $F$-tiling there is a copy $F'$ of $F$ that contains the special vertex $x$. Fact~\ref{fact:selfidentical_star} implies that $K_{n+1}[V(F')]$ is $\star$-canonically edge-ordered{} with the same type as~$K_{n+1}$.
Thus, every $\star$-canonical ordering{} of $K_f$ contains a copy of~$F$. For the other direction, suppose every $\star$-canonical ordering{} of~$K_f$ contains a copy of~$F$. Our aim is to show that $F$ is tileable. By Lemma~\ref{lemma:character}, it suffices to prove that there is an~$n\in \mathbb N$ such that every edge-ordering of~$K_{n+1}$ for which~$K_{n+1}-x$ is canonically ordered for some vertex~$x\in V(K_{n+1})$, contains a copy of~$F$ that covers $x$. The cases~$f=2,3$ are trivial, so we may assume~$f\geq 4$. Let~$n\in \mathbb N$ be sufficiently large compared to~$f\geq 4$ and such that $\sqrt{n-1} \in \mathbb N$. Let~$\{x, v_1,\dots, v_{n}\}$ be the vertices of an edge-ordered complete graph~$K_{n+1}$, such that~$K_{n+1} - x$ is canonically ordered. Our goal is to find a subgraph~$K_f\subseteq K_{n+1}$ containing $x$ such that~$K_f$ is $\star$-canonically edge-ordered{}. Indeed, by our assumption this $K_f$ contains a copy of $F$, and so $K_{n+1}$ contains a copy of $F$ that covers $x$, as desired. Observe that an application of the Erd\H os--Szekeres Theorem~\cite{ErdosSzekeres} to the sequence of edges~$\{xv_i\}_{i\in [n]}$ yields a monotone subsequence. More precisely, there is a set $I\subseteq [n]$ of size at least~$\sqrt{n-1}+1$ such that the sequence~$\{xv_i\}_{i\in I}$ is monotone. Further, let~$V_I:=\{v_i\}_{i\in I}$ and consider the $3$-coloring~$c:E(K_{n+1}[V_I]) \to \{B,M,S\}$ of the edges of $K_{n+1}[V_I]$ defined as follows: for~$i, j \in I$ with~$i<j$, let \begin{align*} c(v_iv_j) := \begin{cases} B \qquad &\text{if }xv_i, xv_j > v_iv_j\,, \\ M \qquad &\text{if }xv_i < v_iv_j < xv_j \text{ or } xv_j < v_iv_j < xv_i\,, \text{and} \\ S \qquad &\text{if } xv_i, xv_j < v_iv_j\,. \end{cases} \end{align*} As $n$ is sufficiently large, Ramsey's Theorem implies that there is a monochromatic clique~$\widetilde K$ on $\ell :=f^2 -4f +5$ vertices.
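For concreteness, one admissible (and certainly far from optimal) choice of $n$ can be sketched as follows, writing $R_3(\ell)$ for the three-colour Ramsey number; this quantitative remark is only meant to illustrate what `sufficiently large' requires:

```latex
% One admissible, far-from-optimal choice of n (illustrative only):
% the monochromatic K_\ell is found inside K_{n+1}[V_I], which has at least
% \sqrt{n-1}+1 vertices, so it suffices that
\begin{align*}
\sqrt{n-1}+1 \;\geq\; R_3(\ell)\,, \qquad\text{e.g.}\qquad
n \;=\; \bigl(R_3(\ell)-1\bigr)^2+1\,,
\end{align*}
% which in addition guarantees \sqrt{n-1}=R_3(\ell)-1 \in \mathbb{N},
% as required above.
```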
Relabeling the vertices of $\widetilde K$, we may take~$V(\widetilde K)= \{v_1,\dots,v_\ell\}$ and thus we have \begin{enumerate} \item $\widetilde K$ is canonically ordered; \item $\{xv_i\}_{i\in [\ell]}$ is a monotone sequence; \item exactly one of the following holds: \begin{enumerate} \item $xv_i, xv_j > v_iv_j$ for every $1\leq i<j\leq \ell$, \label{alt:large} \item $xv_i, xv_j < v_iv_j$ for every $1\leq i<j\leq \ell$, or \label{alt:small} \item $xv_i < v_iv_j < xv_j$ or $xv_j < v_iv_j < xv_i$ for every $1\leq i<j \leq \ell$. \label{alt:middle} \end{enumerate} \end{enumerate} We shall prove that~$K_{n+1}[V(\widetilde K)\cup\{x\}]$ contains a $\star$-canonically edge-ordered{} copy of~$K_f$ containing the vertex~$x$, as desired. We split the rest of the proof into cases depending on whether the sequence $\{xv_i\}_{i\in [\ell]}$ is increasing or decreasing, and depending on which of \eqref{alt:large}, \eqref{alt:small}, and \eqref{alt:middle} holds. \begin{enumerate}[wide, labelwidth=!, labelindent=0pt, label=\textbf{\textit{Case~}\upshape(\!{\itshape \arabic*\,}\!)}] \item \label{case:declarge} The sequence $\{xv_i\}_{i\in[\ell]}$ is decreasing and \eqref{alt:large} holds. Note that $xv_1 > xv_2 > \dots > xv_\ell > \max \{v_{i}v_{\ell} \colon {1}\leq i< {\ell}\} = \max \{v_{i}v_{j} \colon {1}\leq i<j\leq {\ell}\}$, where the last equality follows as in any canonical edge-ordering of $K_\ell$ the largest edge is incident to $v_{\ell}$. Thus, $K_{n+1}[V(\widetilde K)\cup\{x\}]$ is a $\star$-canonically edge-ordered{} copy of~$K_{\ell +1}$ with special vertex~$x$, and with larger decreasing ordering. \item \label{case:decsmall} The sequence $\{xv_i\}_{i\in[\ell]}$ is decreasing and \eqref{alt:small} holds. Note that $xv_\ell < \dots < xv_1 < \min \{v_{1}v_{i} \colon {1}< i \leq {\ell}\}= \min \{v_{i}v_{j} \colon 1\leq i<j\leq \ell\}$, where the last equality follows as in any canonical edge-ordering of $K_\ell$ the smallest edge is incident to $v_{1}$.
Thus, $K_{n+1}[V(\widetilde K)\cup\{x\}]$ is a $\star$-canonically edge-ordered{} copy of~$K_{\ell +1}$ with special vertex~$x$, and with smaller decreasing ordering. \item \label{case:decmiddle} The sequence $\{xv_i\}_{i\in[\ell]}$ is decreasing and \eqref{alt:middle} holds. As $xv_1> xv_2 > \dots > x v_{\ell}$, \eqref{alt:middle} implies that $xv_1>v_1v_2> xv_2$ and also $x v_2 > v_2 v_j > x v_j$ for all $3 \leq j\leq \ell$. Thus, $v_1v_2 > \max\{v_2v_i \colon 2<i\leq \ell\}$. However, since $\widetilde K$ is canonically ordered, we must have that $\max\{v_2v_i \colon 2<i\leq \ell\} > v_1v_2$. Since this is a contradiction, this case cannot happen. \item The sequence $\{xv_i\}_{i\in[\ell]}$ is increasing and \eqref{alt:middle} holds. In this case notice that for all $k \in [\ell -1]$ we have \begin{align}\label{eq:6a} x v_k < v_k v_{k+1} < x v_{k+1}. \end{align} Furthermore, \begin{align} \begin{split}\label{eq:6} \max\{v_iv_k \colon 1\leq i < k\} < &xv_k \text{ for all } 2 \leq k \leq \ell \ \ \text{ and } \\ &xv_k < \min\{v_kv_i\colon k<i\leq \ell\} \text{ for all } k \in [\ell -1]. \end{split} \end{align} When $\widetilde K$ is an inverse min (resp. inverse max) ordering, \eqref{eq:6a} and \eqref{eq:6} imply that $\{x, v_1,\dots, v_{\ell}\}$ induces a canonical ordering of the same type as~$\widetilde K$, with~$x$ as the last (resp. first) vertex. When $\widetilde K$ is a min ordering, we have $$v_iv_\ell \overset{\phantom{\eqref{eq:6}}}{<} v_{i+1}v_{i+2} \overset{\eqref{eq:6a}}{<} xv_{i+2} \overset{\eqref{eq:6}}{<} v_{i+2}v_{i+4}\, ,$$ where the first inequality follows by Remark~\ref{rem:con}. Since~$\ell=f^2-4f+5 \geq 2f-3$ for $f\geq 4$ (indeed, $\ell-(2f-3)=(f-2)(f-4)\geq 0$), restricting to the vertices of odd index in $\widetilde K$, we obtain from Remark~\ref{rem:middle} that $K_{n+1}[\{x,v_1,v_3,\dots, v_{2f-3}\}]$ is a $\star$-canonically edge-ordered{} copy of $K_{f}$ with special vertex $x$, and with middle increasing ordering.
For the max ordering, we use an analogous argument: \eqref{eq:6} and Remark~\ref{rem:con} imply $v_{i}v_{i+2} < xv_{i+2} < v_{i+2}v_{i+3} < v_1v_{i+4}$. Using again Remark~\ref{rem:middle} we have that $K_{n+1}[\{x,v_1,v_3,\dots, v_{2f-3}\}]$ is a $\star$-canonically edge-ordered{} copy of $K_{f}$ with special vertex $x$, and with middle increasing ordering. \item \label{case:inclarge} The sequence $\{xv_i\}_{i\in[\ell]}$ is increasing and \eqref{alt:large} holds. We separate the proof of this case into three claims. \begin{claim}\label{claim:max} If $\widetilde K$ is a max or an inverse max ordering then~$K_{n+1}[V(\widetilde K)\cup\{x\}]$ contains a $\star$-canonically edge-ordered{} copy of~$K_f$ with larger increasing ordering and special vertex~$x$. \end{claim} \begin{claimproof} For these canonical orderings we have~$v_1v_\ell > \max \{v_{i}v_{j} \colon {1}\leq i<j\leq {\ell-1}\}$. Then, due to~\eqref{alt:large}, we have $$xv_\ell>\dots> xv_2 > xv_1 > v_1v_\ell > \max \{v_{i}v_{j} \colon {1}\leq i<j\leq {\ell-1}\}\,,$$ and therefore~$\{x,v_1, \dots, v_{\ell-1}\}$ induces a larger increasing ordering. \end{claimproof} When $\widetilde K$ is a min or an inverse min ordering we will use the following claim. \begin{claim}\label{claim:set} Suppose $\widetilde K$ is a min or an inverse min ordering. Either~$K_{n+1}[V(\widetilde K) \cup \{x\}]$ contains a larger increasing $\star$-canonical ordering{} of~$K_f$ containing~$x$ or the following statement holds. There is a set~$U_{f-3}\subseteq V(\widetilde K)$ such that, after relabeling the vertices, we have~$U_{f-3}:=\{v_1,\dots, v_{{f-1}}\}$ and, for all $i < f-2$, \begin{align}\label{eq:goal2} \max \{ v_i v_j \colon {i}< j\leq f-1\} < xv_i < \min \{v_{j}v_{k} \colon {i}< j<k\leq {{f-1}}\}\,. \end{align} \end{claim} \begin{claimproof} Suppose~$\widetilde K$ is a min or an inverse min ordering and~$K_{n+1}[V(\widetilde K) \cup \{x\}]$ does not contain a larger increasing $\star$-canonical ordering{} of~$K_f$ containing~$x$.
For each $0 \leq r \leq f-3$, define $\ell _r:= \ell -r(f-2)$; so $\ell_0 = \ell$ and $$\ell_{f-3} = \ell - (f-3)(f-2) = (f^2-4f+5) - (f-3)(f-2) = f-1\,.$$ To prove the claim we proceed iteratively as follows. Suppose for some~$0\leq r<f-3$ there is a set of vertices $U_r:=\{v_1, \dots, v_{\ell_r}\}$ satisfying \begin{align}\label{eq:goal} \max \{ v_i v_j \colon {i}< j\leq \ell _r\} < xv_i < \min \{v_{j}v_{k} \colon {i}< j<k\leq {\ell_{r}}\}\, \text{ for all~$i\leq r$} . \end{align} We shall find a set $U_{r+1}\subseteq U_{r}$ such that, after relabeling, we have $U_{r+1}:=\{v_1, \dots, v_{\ell_{r+1}}\}$ and where~\eqref{eq:goal} holds for~$r+1$ instead of~$r$. To start the iteration take~$r=0$ and let~$U_0:=V(\widetilde K)$. If $xv_{r+1} >\max \{v_{j}v_{k} \colon {r}< j<k< {r+f}\}$, then, since~$\{xv_j\}_{j\in [\ell]}$ is increasing, we have $$xv_{r+f-1}>xv_{r+f-2}>\dots >xv_{r+1}> \max \{v_{j}v_{k} \colon {r}< j<k< {r+f}\}\,.$$ Thus,~$\{x,v_{r+1}, \dots, v_{r+f-1}\}$ induces a larger increasing $\star$-canonical ordering{} of~$K_f$ contradicting our initial supposition. So we may assume that~$xv_{r+1} < \max \{v_{j}v_{k} \colon {r}< j<k< {r+f}\}$ and conclude \begin{align}\label{eq:r+1} \begin{split} \max\{v_{r+1}v_i\colon r+1<i\leq\ell_r\} \overset{\eqref{alt:large}}{<} xv_{r+1} &\overset{\phantom{\eqref{alt:large}}}{<} \max \{v_{j}v_{k} \colon {r}< j<k< {r+f}\} \\ &\overset{\phantom{\eqref{alt:large}}}{<} \min \{v_{j}v_{k} \colon {r+f}\leq j<k\leq {\ell_r}\}\,. \end{split} \end{align} The last inequality follows from the fact that $\widetilde K$ is min or inverse min ordered and by recalling Remark~\ref{rem:con}. Delete the vertices $v_{r+2}, \dots, v_{r+f-1}$, relabel the remaining vertices, and let~$U_{r+1}:=\{v_1, \dots, v_{\ell_{r+1}}\}$ be the set of vertices after the deletion and the relabeling. We shall prove that~$U_{r+1}$ satisfies \eqref{eq:goal} for $r+1$ instead of~$r$. 
First, observe that for~$i\leq r+1$,~$v_i$ is not deleted and keeps the same label as in~$U_r$. Moreover, since we only delete vertices, the sets from which we take the maximum and minimum in~\eqref{eq:goal} are now smaller, and thus, for~$i\leq r$, \eqref{eq:goal} becomes in fact less restrictive after the deletion and relabeling. Therefore, the inequalities in~\eqref{eq:goal} still hold for $i\leq r$ with~$\ell_{r+1}$ instead of~$\ell_r$. We still need to prove that they hold for~$i=r+1$. For that, note that vertex~$v_{r+f}$ is relabeled as~$v_{r+2}$ in~$U_{r+1}$ and therefore~\eqref{eq:r+1} implies $$\max\{v_{r+1}v_i\colon r+1<i\leq\ell_{r+1}\} < xv_{r+1} < \min \{v_{j}v_{k} \colon {r+2}\leq j<k\leq {\ell_{r+1}}\}\,,$$ in~$U_{r+1}$. That is, the inequalities in \eqref{eq:goal} hold for $i=r+1$ in~$U_{r+1}$ and with $\ell_{r+1}$ instead of~$\ell_{r}$. Hence,~\eqref{eq:goal} holds for~$r+1$ instead of $r$. Since~$\ell_{f-3} = f-1$, after~$f-3$ steps we obtain~$U_{f-3}=\{v_1,\dots, v_{f-1}\}$ satisfying~\eqref{eq:goal2} for every~$i<f-2$. \end{claimproof} We use Claim~\ref{claim:set} to prove the following claim finishing the proof of this case. \begin{claim} \label{claim:min} Suppose $\widetilde K$ is a min or an inverse min ordering. Either~$K_{n+1}[V(\widetilde K) \cup \{x\}]$ contains a larger increasing $\star$-canonical ordering{} of~$K_f$ containing~$x$ or the following two statements hold. \begin{itemize} \item If~$\widetilde K$ is a min canonical ordering then~$K_{n+1}[V(\widetilde K)\cup\{x\}]$ contains a min canonically ordered copy of~$K_f$ containing~$x$. \item If~$\widetilde K$ is an inverse min canonical ordering then~$K_{n+1}[V(\widetilde K)\cup\{x\}]$ contains a middle increasing $\star$-canonical ordering{} of~$K_f$ with special vertex~$x$. 
\end{itemize} \end{claim} \begin{claimproof} Suppose $\widetilde K$ is a min or an inverse min ordering and~$K_{n+1}[V(\widetilde K) \cup \{x\}]$ does not contain a larger increasing $\star$-canonical ordering{} of~$K_f$ containing~$x$. Apply Claim~\ref{claim:set} to obtain a set~$U$ such that after relabeling the vertices we have $U:=\{v_1,\dots, v_{{f-1}}\}$ satisfying~\eqref{eq:goal2} for every~$i<f-2$. Since the sequence $\{xv_i\}_{i\in [\ell]}$ is increasing and because of~\eqref{alt:large} we deduce \begin{align}\label{eq:goaltail} v_{f-2}v_{f-1}<v_{f-2}x<v_{f-1}x . \end{align} If~$\widetilde K$ is a min canonical ordering then \eqref{eq:goal2} becomes~$v_iv_{f-1} < xv_i < v_{i+1}v_{i+2}$ for~$i<f-2$. Then, using~\eqref{eq:goaltail} it is easy to check that $U\cup \{x\}$ induces a min canonical ordering, with~$x$ playing the role of the last vertex~$v_f$. If~$\widetilde K$ is an inverse min canonical ordering, then \eqref{eq:goal2} becomes~$v_iv_{i+1} < xv_i < v_{i+1}v_{f-1}$ for every~$i<f-2$. Then, we obtain from~\eqref{eq:goaltail} and Remark~\ref{rem:middle} that~$U\cup \{x\}$ induces a middle increasing ordering with special vertex~$x$. \end{claimproof} \item \label{case:incsmall} The sequence $\{xv_i\}_{i\in[\ell]}$ is increasing and \eqref{alt:small} holds. For this case we reverse the edge-ordering of $K_{n+1}[V(\widetilde K)\cup \{x\}]$ and the ordering of the vertices in the canonical part. More precisely, let~$\back{K} := \back{K}_{n+1}[V(\widetilde K)\cup \{x\}]$ be the reverse of~$K_{n+1}[V(\widetilde K)\cup \{x\}]$ and let~$V(\back K)\setminus \{x\}$ be reordered as $V(\back K)\setminus \{x\}= \{v_1',\dots, v_\ell'\}$ where~$v_i':=v_{\ell-i+1}$. Then \begin{enumerate}[label=\upshape({\itshape \arabic*\,}), wide] \item $\back{K}[V(\widetilde K)]$ is canonically ordered, \label{it:canonical} \item $\{xv_i'\}_{i\in [\ell]}$ is increasing, and \label{it:increase} \item \eqref{alt:large} holds for~$\back{K}$. 
\label{it:large} \end{enumerate} Indeed, for~\ref{it:canonical} notice that the reverse of a canonical ordering is canonical after reversing the ordering of the vertices. For \ref{it:increase} observe that we reverse the ordering of the vertices and edges, so the sequence is still increasing. Finally,~\ref{it:large} is easy to deduce after noticing that~\eqref{alt:large} and~\eqref{alt:small} only depend on the ordering of the edges and not on the ordering of the vertices. Observe that conditions~\ref{it:canonical}--\ref{it:large} are the same conditions we have for \hyperref[case:inclarge]{\textit{Case (5)}}. Thus, to address our current case, we apply Claims~\ref{claim:max} and \ref{claim:min} to the edge-ordered graph~$\back{K}$. More precisely, when~$\widetilde K$ is a min ordering or an inverse min ordering, then~$\back{K}$ is a max or an inverse max ordering. Therefore, Claim~\ref{claim:max} implies that~$\back{K}$ contains a~$\star$-canonically edge-ordered{} copy of~$K_f$ with larger increasing ordering and special vertex~$x$. Hence, $K_{n+1}[V(\widetilde K)\cup \{x\}]$ contains a~$\star$-canonically edge-ordered{} copy of~$K_f$ with smaller increasing ordering and special vertex~$x$. By an analogous argument but using Claim~\ref{claim:min} instead of Claim~\ref{claim:max}, we have that if~$\widetilde K$ is a max ordering or an inverse max ordering then~$K_{n+1}[V(\widetilde K) \cup \{x\}]$ contains a $\star$-canonically edge-ordered{} copy of $K_f$ containing~$x$. Moreover, this copy of $K_f$ is either a smaller increasing ordering, a max canonical ordering, or a middle increasing ordering. \qedhere \end{enumerate} \end{proof} \section{Universally tileable graphs}\label{subsec:char2} We begin this section with the proof of Theorem~\ref{thm:uni}.
{\it \noindent Proof of Theorem~\ref{thm:uni}.} To prove the statement we will show that \ref{it:univtil} implies \ref{it:univturan}, \ref{it:univturan} implies \ref{it:univdescrip}, and \ref{it:univdescrip} implies \ref{it:univtil}. If an edge-ordered graph is tileable then by definition it is Tur\'anable. Thus, \ref{it:univtil} immediately implies \ref{it:univturan}. Theorem~2.18 from~\cite{gmnptv} already proves that \ref{it:univturan} implies \ref{it:univdescrip}. It therefore remains to show that \ref{it:univdescrip} implies \ref{it:univtil}. First assume that $H$ is a $K_3$ together with a (possibly empty) collection of isolated vertices. Note that all edge-orderings of $H$ are isomorphic, so every edge-ordering of $K_{|H|}$ contains a spanning copy of $H^{{\scaleto{{\leq}}{4.3pt}}}$, for every edge-ordering~$\leq$. Thus $H$ is universally tileable. Now, suppose $H$ is a path on three edges. There are three types of edge-ordering of $H$: $123$, $132$, and $213$. The latter two are contained in any edge-ordering of $C_4$ and so are tileable. The former is just $P_3^{{\scaleto{{\leqslant}}{4.3pt}}}$, and so is tileable by Theorem~\ref{Pkfactor}. Thus, $H$ is universally tileable. Note that adding isolated vertices to a tileable edge-ordered graph results in another tileable edge-ordered graph. Therefore, every path on three edges together with a (possibly empty) collection of isolated vertices forms a universally tileable graph. Finally, assume that $H$ is a star forest and $H^{\scaleto{{\leq}}{4.3pt}}$ is any edge-ordering of $H$. Let $h:=|H|$. We now check that we can find a copy of $H^{\scaleto{{\leq}}{4.3pt}}$ in any $\star$-canonically edge-ordered\ $K_{h}$. As usual we write $\{x,v_1,\dots, v_{h-1}\}$ for the vertices of a $\star$-canonically edge-ordered\ $K_{h}$, where $x$ is the special vertex.
Given any vertex $v$ in $H^{\scaleto{{\leq}}{4.3pt}}$, $H^{\scaleto{{\leq}}{4.3pt}}-v$ is a star forest and so is Tur\'anable by \cite[Theorem 2.18]{gmnptv}; thus, by Theorem~\ref{thm:turanable}, any canonical ordering of $K_{h-1}$ contains a copy of $H^{\scaleto{{\leq}}{4.3pt}}-v$. Consider any smaller increasing/decreasing $\star$-canonical ordering\ of $K_{h}$. Let~$uw$ be the smallest edge in~$H^{\scaleto{{\leq}}{4.3pt}}$ where $u$ is a leaf of $H^{\scaleto{{\leq}}{4.3pt}}$. By the remark in the previous paragraph, our edge-ordered $K_{h}$ contains a copy of $H^{\scaleto{{\leq}}{4.3pt}}-u$ that does not contain $x$. By definition of a smaller increasing/decreasing $\star$-canonical ordering, we can now add $x$ to this copy of $H^{\scaleto{{\leq}}{4.3pt}}-u$ to obtain a copy of $H^{\scaleto{{\leq}}{4.3pt}}$ in our edge-ordered $K_{h}$. For a larger increasing/decreasing $\star$-canonical ordering\ of $K_{h}$ one can argue analogously, but take $uw$ to be the largest edge in~$H^{\scaleto{{\leq}}{4.3pt}}$, instead of the smallest. In a middle increasing $\star$-canonical ordering{} of $K_{h}$ we need to explicitly define how to embed~$H^{\scaleto{{\leq}}{4.3pt}}-v$ into the canonical orderings of~$K_{h-1}$. Let $\{K_{1,t_i}\}_{1\le i \le k}$ be the collection of $k$ stars that form the components of $H$ and let~$C\subseteq V(H)$ be the set of centers of these stars (if $t_i=1$, for the star $K_{1,t_i}$ we pick the center arbitrarily). Let~$L:=V(H)\setminus C$ and note that every vertex in~$L$ is a leaf. We define an ordering of the leaves in $L$ as follows. Given two leaves~$\ell,m\in L$ we write~$\ell < m$ if and only if~$\ell u < m w$ in $H^{\scaleto{{\leq}}{4.3pt}}$, where~$u,w\in C$ are the unique neighbors of~$\ell$ and $m$ in $H^{\scaleto{{\leq}}{4.3pt}}$ respectively (note $u$ and $w$ are not necessarily distinct). Set~$L=:\{\ell_1,\dots, \ell_{|L|}\}$ where $\ell_1 <\dots < \ell_{|L|}$; note that~$\vert L\vert = \vert E(H)\vert$.
We are now ready to embed~$H^{\scaleto{{\leq}}{4.3pt}}$ into a middle increasing~$\star$-canonical ordering{} of~$K_{h}$. We first assume that the canonical part of~$K_{h}$ is a min or an inverse min ordering. Then, using the labeling given in Definitions~\ref{def:canonical} and~\ref{def:starcanonical}, it is easy to check that for every~$1\leq i<j<k,m\leq h-1$, we have \begin{alignat}{3}\label{eq:universalmin} & v_iv_k && \,<\,\, && v_jv_m, \nonumber \\ & v_iv_k &&\,<\,\, && v_j x, \qquad\text{and} \\ & v_ix &&\,<\,\, && v_jv_k\,. \nonumber \end{alignat} We embed the vertices in~$L=\{\ell_1,\dots, \ell_{|L|}\}$ into the $\star$-canonical ordering{} of~$K_{h}$ as follows: $$\ell_i \mapsto v_i \text{~for every~}i\in \big[|L|\big]\,.$$ We embed the vertices in~$C$ arbitrarily among the rest of the vertices in $K_h$. We need to check that this embedding induces a copy of $H^{\scaleto{{\leq}}{4.3pt}}$ in our edge-ordered $K_h$. This is indeed the case: if $e_1, e_2 \in E(H^{\scaleto{{\leq}}{4.3pt}})$ are such that $e_1<e_2$, then $e_1$ is mapped to some edge $v_i y$ in $K_h$ and $e_2$ to some edge $v_j z$ in $K_h$, where $i<j\leq |L|$. Then \eqref{eq:universalmin} implies that $v_i y < v_j z$ in our edge-ordering of $K_h$. If the canonical part of~$K_{h}$ is a max or an inverse max ordering, then we proceed analogously. In this case we embed the leaves in $L$ at the end of the $\star$-canonical ordering{} and the vertices in~$C$ at the beginning. More precisely, we define the embedding so that $$\ell_i \mapsto v_{\vert C\vert+i-1} \text{~for every~}i\in \big[|L|\big]\,,$$ and we embed the vertices in~$C$ arbitrarily among the rest of the vertices in $K_h$. Then, similarly to before, this embedding induces a copy of $H^{\scaleto{{\leq}}{4.3pt}}$ in our edge-ordering of $K_h$. \qed There are some cases where the solution of Question~\ref{ques1} is an easy consequence of known tiling results for (unordered) graphs.
In particular, the next result solves this problem for all edge-orderings of connected universally tileable graphs. \begin{prop} {\color{white}a} \begin{itemize} \item Let $K^{\scaleto{{\leqslant}}{4.3pt}}_3$ denote the edge-ordered version of $K_3$. Then $f(n,K^{\scaleto{{\leqslant}}{4.3pt}} _3)=2n/3$. \item Let $S$ denote an edge-ordered graph whose underlying graph is a star. Then $f(n,S)=n/2+O(1)$. \item Let $P:=132$. Then $f(n,P)= n/2+O(1)$. \item Let $P':=213$. Then $f(n,P')= n/2+O(1)$. \item Recall $P_3^{{\scaleto{{\leqslant}}{4.3pt}}}=123$. Then $f(n,P_3^{{\scaleto{{\leqslant}}{4.3pt}}})= n/2+o(n)$. \end{itemize} \end{prop} \proof The first part of the proposition follows immediately from the Corr\'adi--Hajnal theorem~\cite{cor}. Up to isomorphism, there is only one edge-ordering of a star on a given number of vertices. Thus, for any edge-ordered star $S$, the K\"uhn--Osthus theorem~\cite{kuhn2} implies that $f(n,S)=n/2+O(1)$. Any edge-ordering of $C_4$ contains a copy of the edge-ordered path $P=132$. The K\"uhn--Osthus theorem~\cite{kuhn2} implies that the minimum degree threshold for forcing a perfect $C_4$-tiling in an $n$-vertex graph $G$ is $n/2+O(1)$; so $f(n,P)\leq n/2+O(1)$. Moreover, consider the $n$-vertex graph consisting of two disjoint cliques $X$, $Y$ whose sizes are as equal as possible, under the constraint that $4$ does not divide $|X|$ or $|Y|$. Then every edge-ordering $G$ of this graph satisfies $\delta (G) \geq n/2 -2$ but does not contain a perfect $P$-tiling. Thus, $f(n,P)> n/2-2$ and so $f(n,P)= n/2+O(1)$. The same argument shows that $f(n,P')= n/2+O(1)$. Finally, in Theorem~\ref{Pkfactor} we saw that $f(n,P_3^{{\scaleto{{\leqslant}}{4.3pt}}})= (1/2+o(1))n$. \endproof \section{Proof of Theorem~\ref{hscorollary}}\label{subsec:hscor} Let $G$ be an edge-ordered graph on $n\geq T(F)$ vertices with minimum degree $\delta (G) \geq (1-\frac{1}{T(F)})n$, and so that $|F|$ divides $n$. Let $G'$ denote the underlying graph of $G$.
When $T(F)$ divides $n$, we apply the Hajnal--Szemer\'{e}di theorem \cite{hs} to $G'$, to obtain an (unordered) perfect $K_{T(F)}$-tiling in $G'$. By the definition of $T(F)$, each edge-ordered copy of $K_{T(F)}$ in $G$ contains a perfect $F$-tiling. Thus, combining these tilings, we obtain a perfect $F$-tiling in $G$. When $T(F)$ does not divide $n$, we may write $n=aT(F)+b$ for some $a,b \in \mathbb N$ such that $0<b<T(F)$. As $n$ and $T(F)$ are divisible by $|F|$, we have that $b/|F| \in \mathbb N$. Since ${b}/{T(F)}<1$, we must have that $\delta (G) \geq n-a =(1-\frac{1}{T(F)})(n-b)+b.$ We will now repeatedly remove disjoint copies of $F$ from $G$, until the resulting edge-ordered graph has its order divisible by $T(F)$. Assume that we have already removed $c$ copies of $F$ from $G$, where $0\le c<{b}/{|F|}$; then the remaining edge-ordered graph on $n-c|F|$ vertices has minimum degree at least $$\Big(1-\frac{1}{T(F)}\Big)(n-b)+b-c|F|\ge \Big(1-\frac{1}{T(F)}\Big)(n-c|F|)+(b-c|F|)\frac{1}{T(F)}.$$ This lower bound guarantees that an unordered $K_{T(F)}$ exists in the underlying graph; within the corresponding edge-ordered copy of $K_{T(F)}$ lying in $G$, we can find a copy of $F$. Thus, we may again remove a copy of $F$ and repeat this process. This process ensures that we can remove ${b}/{|F|}$ copies of $F$ from $G$. The resulting edge-ordered graph has $n-b$ vertices and minimum degree at least $(1-\frac{1}{T(F)})(n-b)$. Since $T(F)$ divides $n-b$, as in the previous case this edge-ordered graph contains a perfect $F$-tiling; combining this tiling with our removed copies of $F$, we obtain a perfect $F$-tiling in $G$, as desired.\qed \section{Proof of Theorem~\ref{Pkfactor}}\label{sec:mainproof} For the proof of Theorem~\ref{Pkfactor} we use the absorbing method, which divides the proof into two main parts: finding an absorber and constructing an almost perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling.
The following two subsections are devoted to the Absorbing Lemma (Lemma~\ref{lemma:globalabs}) and the Almost Perfect Tiling Lemma (Lemma~\ref{lemma:almosttiling}) respectively. We finish this section by combining these two results to give the proof of Theorem~\ref{Pkfactor}. \subsection{Absorbers} Let $F$ be an edge-ordered graph. Given an edge-ordered graph $G$, a set $S \subseteq V(G)$ is an \emph{$F$-absorbing set for $Q \subseteq V(G)$}, if both $G[S]$ and $G[S\cup Q]$ contain perfect $F$-tilings. To prove Theorem~\ref{Pkfactor}, we make use of the following, now standard, absorbing lemma. \begin{lemma}\label{lo} Let $f,s\in \mathbb N$ and $\xi >0$. Suppose that $F$ is an edge-ordered graph on $f$ vertices. Then there exists an $n_0 \in \mathbb N$ such that the following holds. Suppose that $G$ is an edge-ordered graph on $n \geq n_0$ vertices so that, for any $x,y \in V(G)$, there are at least $\xi n^{sf-1}$ $(sf-1)$-sets $X \subseteq V(G)$ such that both $G[X \cup \{x\}]$ and $G[X \cup \{y\}]$ contain perfect $F$-tilings. Then $V(G)$ contains a set $M$ so that \begin{itemize} \item $|M|\leq (\xi/2)^f n/4$; \item $M$ is an $F$-absorbing set for any $W \subseteq V(G) \setminus M$ such that $|W|\leq (\xi /2)^{2f} n/(32s^2 f^3)$ and~$|W| \in f \mathbb N$. \qed \end{itemize} \end{lemma} Lemma~\ref{lo} was proven by Lo and Markstr\"om~\cite[Lemma 1.1]{lo} in the case when $G$ is an unordered graph. However, the proof in the edge-ordered setting is identical (so we do not provide a proof here). As mentioned in the introduction, R\"odl~\cite{rodl} proved that every edge-ordered graph on~$n$ vertices and at least~$k(k+1)n/2$ edges contains a monotone path of length $k$. Here we will need the following supersaturated version of this result. \begin{lemma}[Supersaturation Lemma]\label{lemma:Rodl} Let~$k\in \mathbb N$ and~$\zeta>0$. Then there exists an $n_0 \in \mathbb N$ such that the following holds for every~$n\geq n_0$. 
Every~$n$-vertex edge-ordered graph $G$ with at least~$\zeta n^2$ edges contains at least~$\zeta^k\,{2^{-k^2}}n^{k+1}$ copies of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$. \end{lemma} \begin{proof} The proof goes by induction on~$k$. The case $k=1$ is trivial. Suppose the statement is true for~$k-1$, and take~$n_0$ large enough to apply the induction hypothesis for~$\zeta/2$. Let $G$ be an $n$-vertex edge-ordered graph as in the statement of the lemma. For every vertex~$v\in V(G)$ delete the last~$\min\{d(v), \zeta n/2\}$ edges incident to~$v$. Let~$\tilde G$ denote the resulting edge-ordered graph. Since~$e(\tilde G)\geq \zeta n^2 - \zeta n^2/2 =\zeta n^2/2$, by the induction hypothesis we have that~$\tilde G$ contains at least~ $$\Big(\frac{\zeta}{2}\Big)^{k-1}\!\!\cdot {2^{-(k-1)^2}} n^{k} = \zeta^{k-1}\, 2^{-(k-1)^2-(k-1)}n^{k}$$ copies of~$P_{k-1}^{\scaleto{{\leqslant}}{4.3pt}}$. Fix one such copy $P = v_1 \cdots v_{k}$ and observe that, since~$d_{\tilde G}(v_{k})>0$, $\zeta n/2$ edges incident to~$v_{k}$ were deleted from $G$ that are all larger than~$v_{k-1}v_{k}$ in the total order of~$E(G)$. Moreover, at most~$k-1$ of them are incident to a vertex in $P$, which implies that at least~$\zeta n/2-(k-1)\geq \zeta n/4$ of them, combined with $P$, form a copy of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$ in~$G$. Therefore, we obtain at least~$$\frac{\zeta^{k-1}}{2^{(k-1)^2+(k-1)}}\, n^{k} \cdot \frac{\zeta}{4}\,n \geq \frac{\zeta^{k}}{2^{k^2}}\, n^{k+1} $$ copies of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$ in~$G$. \end{proof} Note that the proof of Lemma~\ref{lemma:Rodl} crucially uses that the path we consider is monotone. Indeed, the inductive step of our proof requires that, given an edge-ordered path $P$, we add an edge $e$ that is larger than all edges in $P$ and incident to the largest edge currently in $P$. In order to apply Lemma~\ref{lo} we introduce the following notion.
\begin{definition}[Local Absorbers]\label{def:abs} \rm Let~$x, y\in V(G)$ be distinct vertices of an edge-ordered graph~$G$. Given disjoint~$P_x, P_y\in \binom{V(G)}{k}$ and a vertex~$w\in V(G)\setminus (P_x\cup P_y)$, we say that the set $$A := P_x\,\cup\, P_y\,\cup\, \{w\}$$ is a~\textit{$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorber for $x$ and $y$} if \begin{enumerate} \item $G[\{x\} \cup P_x]$ and~$G[\{w\}\cup P_x]$ contain spanning copies of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$ and \item $G[\{y\} \cup P_y]$ and~$G[\{w\}\cup P_y]$ contain spanning copies of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$. \end{enumerate} \end{definition} Observe that if~$A$ is a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorber for~$x$ and $y$ then both~$G[A\cup \{x\}]$ and $G[A\cup \{y\}]$ contain perfect~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tilings. That is,~$A$ can play the role of~$X$ in Lemma~\ref{lo} with $s=2$. The following lemma allows us to find many local absorbers for every pair of vertices~$x,y\in V(G)$. \begin{lemma}\label{lemma:localabs} For every~$k\in \mathbb N$ and for every~$0<\eta<1/2$ there is a~$\xi>0$ and an~$n_0\in \mathbb N$ such that the following holds for every $n\geq n_0$. Let~$G$ be an $n$-vertex edge-ordered graph with~$\delta(G)\geq (1/2+\eta)n$. Then for every two vertices~$x,y\in V(G)$ there are at least~$\xi n^{2k+1}$ $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorbers for~$x$ and $y$. \end{lemma} \begin{proof} Given~$k\in \mathbb N$ and~$\eta>0$ let~$$\zeta := \frac{\eta^{k}}{2^{k^2+4k}} \qquad \text{and} \qquad \xi := \frac{\eta \zeta^2}{16 (2k+1)!}\,,$$ and suppose~$n_0\in \mathbb N$ is sufficiently large. Let~$G$ be as in the statement of the lemma. 
For every~$x\in V(G)$ define $$\mathcal P_x := \Big\{P\subseteq \binom{V(G)}{k} \colon G[\{x\}\cup P]\text{ contains a copy of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$}\Big\}\,.$$ We first show that there is a subset~$\mathcal P_x'\subseteq \mathcal P_x$ of size at least~$\zeta n^{k}/2$ such that for every~$P\in \mathcal P_x'$ there is a set~$W_x(P)\subseteq V(G)\setminus P$ satisfying \begin{enumerate}[label={(\roman*)}] \item \label{it:neighbourhood} $P\in \mathcal P_w$ for every~$w\in W_x(P)$ and \item \label{it:size} $\vert W_x(P)\vert \geq \big(\tfrac{1}{2}+\tfrac{\eta}{4}\big)n$. \end{enumerate} In order to do this, we partition $N(x) = L(x)\dot\cup S(x)$ as follows. We say a vertex~$u\in N(x)$ is \textit{large} if the set~$\{v\in N(u)\colon xu < vu\}$ is of size at least~$\eta n/2$. Otherwise, we say~$u$ is \textit{small}. Let~$L(x)$ and~$S(x)$ denote the set of large and small vertices in~$N(x)$, respectively. Notice that if~$u$ is small then the set~$\{v\in N(u) \colon xu > vu\}$ is of size at least~$\eta n/2$ (and actually, at least of size~$n/2$). Assume that~$|L(x)| \geq \vert N(x)\vert /2 \geq n/4$; the case~$|S(x)|\geq n/4$ is analogous. For every vertex~$u\in L(x)$, let~$E(u)$ be the set of the last~$\eta n/2$ edges incident to $u$ in the total order of $E(G)$. Since $u$ is large, all edges in~$E(u)$ are larger than~$xu$. For~$E_x := \bigcup_{u\in L(x)} E(u)$, consider the subgraph~$\widetilde G:=(V(G), E_x)\subseteq G$. Note that~$\vert E_x\vert \geq \eta n^2/16$. Thus, Lemma~\ref{lemma:Rodl} implies that~$\widetilde G$ contains at least~$\zeta n^{k+1}$ monotone paths of length $k$. Since every edge in~$E_x$ is incident to a vertex in~$L(x)$, by dropping the first or the last vertex in each path, we obtain at least~$\zeta n^{k}/2$ monotone paths of length $k-1$ in $\widetilde G$ starting with a vertex in~$L(x)$. 
That is, the set $$\mathcal P'_x := \Big\{P\subseteq \binom{V(G)}{k} \colon \widetilde G[P]\text{ contains a copy of~$P_{k-1}^{\scaleto{\leqslant}{4.3pt}}$ starting with a vertex in~$L(x)$}\Big\}\,$$ is of size at least~$\zeta n^{k}/2$. Moreover, notice that~$\mathcal P_x'\subseteq \mathcal P_x$. Indeed, let~$u_1\cdots u_k$ be a monotone path with~$P=\{u_1,\dots, u_k\}\in \mathcal P_x'$. Since~$u_1\in L(x)$, we have~$xu_1 < u_1u_2$, and therefore~$G[\{x\}\cup P]$ contains a copy of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$, meaning that~$P\in \mathcal P_x$. Now, we shall prove that for every~$P\in\mathcal P_x'$ there is a set~$W_x(P)$ satisfying~\ref{it:neighbourhood} and~\ref{it:size}. Consider some $P=\{u_1,\dots, u_k\}\in \mathcal P_x'$ where $u_1u_2$ is the first edge of the copy of $P_{k-1}^{\scaleto{\leqslant}{4.3pt}}$ in $\widetilde G[P]$. Let $N'(u_1)$ denote the set of vertices $w$ in $N(u_1)$ such that $u_1w \not \in E(u_1) $. Define $W_x(P):=N'(u_1) \setminus P$. Thus, since~$u_1u_2\in E(u_1)$, for~$w\in W_x(P)$ we have~$wu_1 < u_1u_2$ which means that~$W_x(P)$ satisfies condition~\ref{it:neighbourhood}. Condition~\ref{it:size} follows as $\delta(G)\geq (1/2+\eta)n$ and~$\vert E(u_1)\vert = {\eta n}/{2}$. Finally, given~$x,y\in V(G)$ consider~$\mathcal P_x'$ and~$\mathcal P_y'$. Observe that the number of pairs~$(P_x,P_y)\in \mathcal P_x'\times \mathcal P_y'$ such that~$\vert P_x\cap P_y\vert \geq 1$ is at most~$k^2n^{2k-1}$ and therefore, since~$n$ is sufficiently large, there are at least $$\frac{\vert \mathcal P_x'\times \mathcal P_y' \vert }{2} \geq \frac{\zeta^2 n^{2k}}{8}$$ disjoint pairs in $\mathcal P_x'\times \mathcal P_y'$. Given a disjoint pair~$(P_x,P_y)\in\mathcal P_x'\times\mathcal P_y'$ and a vertex~$w\in W_x(P_x)\cap W_y(P_y)$, it is easy to see that~$A:=P_x\cup P_y\cup \{w\}$ is a $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorber for~$x$ and $y$. 
Because of~\ref{it:size}, $\vert W_x(P_x)\cap W_y(P_y)\vert \geq \eta n/2$, and therefore, there are at least $$\frac{\zeta^2 n^{2k}}{8}\cdot \frac{\eta n}{2} \cdot \frac{1}{(2k+1)!} = \xi n^{2k+1}$$ $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorbers for~$x$ and~$y$. In particular, we divide by $(2k+1)!$ as the same $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-local-absorber $A$ arises from at most $(2k+1)!$ tuples $(P_x,P_y,w)$. \end{proof} The Absorbing Lemma is now an immediate consequence of Lemmas~\ref{lo} and~\ref{lemma:localabs}. \begin{lemma}[Absorbing Lemma]\label{lemma:globalabs} For every~$k\in \mathbb N$ and~$\eta>0$ there is~$0<\xi<\eta$ and an~$n_0\in \mathbb N$ such that the following holds for every $n\geq n_0$. If $G$ is an edge-ordered graph on $n$ vertices with~$\delta(G)\geq (1/2+\eta)n$, then there is a set~$M\subseteq V(G)$ of size at most~$\xi n$ which is a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-absorbing set for every $W \subseteq V(G) \setminus M$ such that $|W| \in (k+1) \mathbb N$ and $|W|\leq {\xi^{3} n}$.\qed \end{lemma} \subsection{Almost perfect tilings}\label{subsec:almost} Given an (unordered) graph~$F$, Koml\'os \cite{Komlos} established an asymptotically optimal minimum degree condition that forces a graph~$G$ to contain an $F$-tiling covering all but at most~$o(n)$ vertices. To present this result, we need to introduce the following parameter. Given a graph~$F$, the \textit{critical chromatic number $\chi_{cr}(F)$ of~$F$} is defined as $$\chi_{cr}(F) := (\chi(F)-1)\frac{\vert V(F)\vert}{\vert V(F)\vert-\sigma(F)}\,,$$ where~$\chi(F)$ is the chromatic number of~$F$ and~$\sigma(F)$ denotes the size of the smallest possible color class in any $\chi(F)$-coloring of $F$. \begin{theorem}[\cite{Komlos}]\label{thm:Komlos} For every~$\varepsilon>0$ and every graph~$F$, there is an~$n_0\in \mathbb N$ such that the following holds for every~$n\geq n_0$. 
If~$G$ is a graph on~$n$ vertices with $$\delta(G)\geq \Big(1-\frac{1}{\chi_{cr}(F)}\Big)n\, ,$$ then~$G$ contains an~$F$-tiling covering at least~$(1-\varepsilon)n$ vertices. \end{theorem} Theorem~\ref{thm:Komlos} is best possible in the following sense: given any graph $F$ and any $\gamma<1-\frac{1}{\chi_{cr}(F)}$, there exist $\varepsilon >0$ and $n_0 \in \mathbb N$ so that if $n \geq n_0$ there is an $n$-vertex graph $G$ with $\delta (G) \geq \gamma n$ that does not contain an $F$-tiling covering at least~$(1-\varepsilon)n$ vertices. For the (unordered) path~$P_k$ of length~$k$, Theorem~\ref{thm:Komlos} ensures the existence of an almost perfect $P_k$-tiling in every~$n$-vertex graph with minimum degree~$\delta(G)\geq n/2$ when $k$ is odd and~$\delta(G)\geq kn/(2k+2)$ when~$k$ is even. Indeed, $P_k$ has $k+1$ vertices, $\chi(P_k)=2$ and $\sigma(P_k)=\lfloor (k+1)/2\rfloor$, so $\chi_{cr}(P_k)=\frac{k+1}{\lceil (k+1)/2\rceil}$; this equals $2$ when $k$ is odd and $\frac{2(k+1)}{k+2}$ when $k$ is even. The following lemma says that the same minimum degree condition ensures an almost perfect~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling in an edge-ordered graph $G$. \begin{lemma}[Almost Perfect Tiling Lemma]\label{lemma:almosttiling} Let~$k\in \mathbb N$ and~$\varepsilon>0$. There is an~$n_0\in \mathbb N$ such that the following holds for every~$n\geq n_0$. Let~$G$ be an $n$-vertex edge-ordered graph with \[ \delta(G) \geq \begin{cases} \frac{n}{2} &\text{ if $k$ is odd}\\ \frac{kn}{2k+2} &\text{ if $k$ is even}\,. \end{cases} \] Then, $G$ contains a $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling covering at least $(1-\varepsilon)n$ vertices. \end{lemma} The same example that shows Theorem~\ref{thm:Komlos} is best possible for $P_k$ shows that Lemma~\ref{lemma:almosttiling} is best possible for $P_k^{\scaleto{{\leqslant}}{4.3pt}}$. More precisely, if $k$ is odd consider any $0<\gamma <1/2$ and set $\varepsilon:=1/2-\gamma$; if $k$ is even consider any $0<\gamma <k/(2k+2)$ and set $\varepsilon:=k/(2k+2)-\gamma$. Let $G$ be any edge-ordering of the complete bipartite graph with vertex classes of size $\gamma n$ and $(1-\gamma)n$. 
Then $\delta (G)= \gamma n$ and $G$ does not contain a $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling covering more than $(1-\varepsilon)n$ vertices. \begin{proof}[Proof of Lemma~\ref{lemma:almosttiling}] Given~$k\in \mathbb N$ and $\varepsilon>0$, let~$\zeta := \frac{(k+1)^2-1}{4(k+1)^2}$ and let~$n_1\in \mathbb N$ be the~$n_0$ given by Lemma~\ref{lemma:Rodl} for $k+1$ instead of~$k$. Moreover, let~$m\geq \frac{2n_1}{\varepsilon (k+1)}$ and suppose~$n_0$ is sufficiently large with respect to all other constants. Finally, let~$G$ be as in the statement of the lemma. Set $a:=\lceil (k+1)/2 \rceil$ and $b:=\lfloor (k+1)/2 \rfloor$, and notice that~$\chi_{cr}(P_k)=\chi_{cr}(K_{am,bm})$. Therefore, applying Theorem \ref{thm:Komlos} (to the underlying graph of $G$) we obtain a $K_{am,bm}$-tiling covering at least $(1-\varepsilon/2)n$ vertices. We shall prove that in each~$K_{am,bm}$ there is a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling covering all but at most~$n_1$ vertices. Observe that, for every positive integer~$t\in \mathbb N$, we have \begin{align}\label{eq:edgesofKab} \vert E(K_{at, bt})\vert \geq \frac{(k+1)^2-1}{4}t^2 = \zeta (k+1)^2t^2 = \zeta \vert V(K_{at,bt})\vert^2\,. \end{align} Moreover,~$\vert V(K_{am,bm})\vert = (a+b)m = (k+1)m\geq n_1$, and therefore we may apply Lemma~\ref{lemma:Rodl}. In fact, we will apply Lemma~\ref{lemma:Rodl} iteratively to find the desired~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling in~$K_{am,bm}$. If $k$ is even, then we apply Lemma~\ref{lemma:Rodl} to find a copy of~$P_{k+1}^{\scaleto{{\leqslant}}{4.3pt}}$ in~$K_{am,bm}$. After deleting one vertex, we get a copy of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$ with exactly~$a=(k+2)/2$ vertices in the class of size~$am$. If~$k$ is odd, then we apply Lemma~\ref{lemma:Rodl} to obtain a copy of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$, which must contain exactly~$a=(k+1)/2$ vertices in the class of size~$am$. 
In both cases, removing this copy of $P_k^{\scaleto{{\leqslant}}{4.3pt}}$ from $K_{am,bm}$ results in a copy of $K_{a(m-1),b(m-1)}$. Thus, since \eqref{eq:edgesofKab} holds for every~$t\in \mathbb N$, we may iteratively apply Lemma~\ref{lemma:Rodl} to find vertex-disjoint copies of~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$ in~$K_{am,bm}$ until there are at most~$n_1$ vertices left (in each~$K_{am,bm}$). The initial~$K_{am,bm}$-tiling has at most~$n/\vert V(K_{am,bm})\vert = n/(m(k+1))$ copies of~$K_{am,bm}$ covering at least~$(1-\varepsilon/2)n$ vertices in~$G$. Each of these copies of~$K_{am,bm}$ has a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling covering all but at most $n_1$ vertices. Therefore, there is a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling in~$G$ covering all but at most \begin{align*} \frac{\varepsilon n}{2} + \frac{n}{m(k+1)}\,n_1 \leq \varepsilon\,n\, \end{align*} vertices, where the last inequality follows as~$\frac{n_1}{m(k+1)}\leq \tfrac{\varepsilon}{2}$. \end{proof} \subsection{Proof of Theorem~\ref{Pkfactor}} To prove the `moreover' part, given any $n \in \mathbb N$ divisible by $k+1$, let $G_0$ be an $n$-vertex edge-ordered graph consisting of two disjoint cliques whose sizes are as equal as possible under the constraint that neither has size divisible by $k+1$. Thus, $G_0$ does not contain a perfect~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling and~$\delta(G_0)\geq \lfloor n/2\rfloor-2$. Given~$k\in \mathbb N$ and~$\eta>0$, let~$0<\xi<\eta$ be given by Lemma~\ref{lemma:globalabs}. Let~$n_0\in \mathbb N$ be sufficiently large and let~$G$ be as in the statement of the theorem. Lemma~\ref{lemma:globalabs} yields a set $M\subseteq V(G)$ of size at most~$\xi n \leq \eta n$ which is a $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-absorbing set for every $W\subseteq V(G)\setminus M$ such that~$\vert W\vert \in (k+1)\mathbb N$ and~$\vert W\vert \leq \xi ^3 n$. 
As~$\delta (G\setminus M)\geq n/2+\eta n - \xi n\geq n/2$, Lemma~\ref{lemma:almosttiling} implies $G\setminus M$ contains a~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling $\mathcal T_1$ covering all but at most~$\xi ^3 n$ vertices. Let~$L$ denote the set of vertices not covered by this tiling; notice that as~$\vert G\vert$ and $|M|$ are divisible by $k+1$, so is~$\vert L\vert$. By definition of $M$, $G[M \cup L]$ contains a perfect~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling $\mathcal T_2$. Thus, $ \mathcal T_1 \cup \mathcal T_2$ is a perfect~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling in~$G$.\qed \begin{remark}\rm Recall that, for $k \geq 4$, there is always an edge-ordering of $P_k$ that is not tileable. It would, however, be interesting to determine which edge-orderings of $P_k$ one can extend Theorem~\ref{Pkfactor} to cover. Notice that our proof of Theorem~\ref{Pkfactor} is tailored to monotone paths though. Indeed, the proof of Lemma~\ref{lemma:almosttiling} uses Lemma~\ref{lemma:Rodl}, whose proof is specific to monotone paths~$P_k^{\scaleto{{\leqslant}}{4.3pt}}$. Further, in the proof of Lemma~\ref{lemma:localabs}, we use the fact that if~$P=u_1\cdots u_{k+1}$ is a monotone path, then~$u_1\cdots u_k$ is isomorphic to $u_2\cdots u_{k+1}$. In other words, the path obtained by dropping the last vertex is isomorphic to the one obtained by dropping the first one. It is not hard to see that this property is satisfied only by monotone paths. In a forthcoming paper, the second and third authors will explore a more general strategy for establishing minimum degree thresholds for perfect tilings in edge-ordered graphs. 
\end{remark} \section{Concluding remarks}\label{sec:conc} In this paper we have characterized those edge-ordered graphs that are tileable; similarly to the characterization of Tur\'anable edge-ordered graphs, the tileable edge-ordered graphs $F$ are those that can be embedded in specific orderings -- which we call the $\star$-canonical ordering{s} -- of the complete graph $K_{|F|}$. For the characterization of Tur\'anable graphs, namely Theorem~\ref{thm:turanable}, all four canonical orderings are necessary in the following sense: for every~$n\geq 4$ and every canonical ordering $K_n ^{\scaleto{{\leq}}{4.3pt}}$ of $K_n$, there is a non-Tur\'anable edge-ordered $n$-vertex graph $F$ such that $F$ can be embedded into all the canonical orderings of $K_n$ other than $K_n ^{\scaleto{{\leq}}{4.3pt}}$. Thus, it is natural to raise the following question. \begin{question}\label{quest:all20needed} Are all twenty $\star$-canonical ordering{}s necessary in Theorem~\ref{thm:character}? That is, does Theorem~\ref{thm:character} still hold if we omit some of the $\star$-canonical ordering{}s from the statement? \end{question} From a computer-assisted check, we know that at least the following \emph{eight} $\star$-canonical ordering{s} are necessary: smaller increasing/decreasing of types min/inverse min, and larger increasing/decreasing of types max/inverse max. Note that these include the four canonical orderings. In this paper we have also answered Question~\ref{ques1} in the case of monotone paths and for a few other special types of edge-ordered graph. Recall that in Section~\ref{subsec:almost} we computed the minimum degree threshold for an edge-ordered graph to contain an almost perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling. It is also natural to consider this problem more generally. This motivates the following definition. 
\begin{definition}[Almost tileable]\label{def:almosttile} An edge-ordered graph $F$ is \emph{almost tileable} if for every $0<\varepsilon <1$ there exists a $t\in \mathbb N$ such that every edge-ordering of the graph $K_t$ contains an $F$-tiling covering all but at most $\varepsilon t$ vertices of $K_t$. \end{definition} It is easy to see that this notion is equivalent to being Tur\'anable. \begin{prop}\label{prop1} An edge-ordered graph $F$ is almost tileable if and only if $F$ is Tur\'anable. \end{prop} \proof The forwards direction is immediate. For the reverse direction, consider any $F$ that is Tur\'anable. Given any $0<\varepsilon <1$ define $t:=\lceil T(F)/\varepsilon \rceil$. (Recall $T(F)$ is defined in the statement of Theorem~\ref{hscorollary}.) Then given any edge-ordering of $K_t$, by definition of $T(F)$ we may repeatedly find vertex-disjoint copies of $F$ in $K_t$ until we have covered all but fewer than $T(F)$ vertices in $K_t$. That is, we have an $F$-tiling covering all but at most $\varepsilon t$ vertices of $K_t$, as desired. \endproof In light of Proposition~\ref{prop1} we propose the following question. \begin{question}\label{ques2} Let $F$ be a fixed Tur\'anable edge-ordered graph. What is the minimum degree threshold for forcing an almost perfect $F$-tiling in an edge-ordered graph on $n$ vertices? More precisely, given any $\varepsilon >0$, what is the minimum degree required in an $n$-vertex edge-ordered graph $G$ to force an $F$-tiling in $G$ covering all but at most $\varepsilon n$ vertices? \end{question} We emphasize that just because the notions of Tur\'anable and almost tileable are equivalent, this certainly does not mean that the answer to Question~\ref{ques2} will be the `same' as the Tur\'an threshold. 
For example, whilst R\"odl~\cite{rodl} showed that one only requires $k(k+1)n/2$ edges in an $n$-vertex edge-ordered graph $G$ to force a copy of $P_k^{\scaleto{{\leqslant}}{4.3pt}}$, Lemma~\ref{lemma:almosttiling} implies that $G$ must be much denser to contain an almost perfect $P_k^{\scaleto{{\leqslant}}{4.3pt}}$-tiling. Recall that every Tur\'anable (and hence every tileable) edge-ordered graph $F$ does not contain a copy of $K_4$. We are unaware, however, of any result that forbids $F$ from having large chromatic number. \begin{question}\label{queschrome} Is it true that for every~$k\in \mathbb N$ there is a Tur\'anable edge-ordered graph~$F$ whose underlying graph has chromatic number at least~$k$? \end{question} Recall that due to Proposition~\ref{prop::add_two_edges}, given a Tur\'anable edge-ordered graph $F$ we can construct a tileable graph by adding two suitable new vertices of degree one. Thus, Question~\ref{queschrome} is equivalent to the following question. \begin{question} Is it true that for every~$k\in \mathbb N$ there is a tileable edge-ordered graph~$G$ whose underlying graph has chromatic number at least~$k$? \end{question} \subsection*{Acknowledgments} Much of the research in this paper was carried out during a visit by the second and third authors to the University of Illinois at Urbana-Champaign. The authors are grateful to the BRIDGE strategic alliance between the University of Birmingham and the University of Illinois at Urbana-Champaign, which partially funded this visit. The authors are also grateful to J\'ozsef Balogh for helpful discussions. {\noindent \bf Open access statement.} This research was funded in part by EPSRC grant EP/V002279/1. For the purpose of open access, a CC BY public copyright licence is applied to any Author Accepted Manuscript arising from this submission. 
{\noindent \bf Data availability statement.} The files required for the computer-assisted check described in Section~\ref{sec:conc} can be found on the following web-page: \url{https://sipiga.github.io/Edge-Ordered_files.zip}. \end{document}
arXiv
JEM-EUSO Consensus Wait-freedom Distributed computability Process crash failure Agreement FPGA Genetic programming Linearizability Shared memory Asynchronous system Atomic read/write register Combinatorial topology Distributed computing finite field arithmetic ( see all 46) France [x] 257 (%) Mexico [x] 257 (%) Spain 45 (%) United States 38 (%) Italy 26 (%) UNAM 26 (%) CINVESTAV-IPN 22 (%) Kyoto University 19 (%) Universidad Nacional Autónoma de México (UNAM) 19 (%) Hiroshima University 18 (%) Rajsbaum, Sergio 39 (%) Raynal, Michel 22 (%) Decouchant, Dominique 19 (%) Adams, J. H., Jr. 17 (%) Ahmad, S. 17 (%) Experimental Astronomy 17 (%) Distributed Computing 10 (%) Structural Information and Communication Complexity 8 (%) Advances in Artificial Intelligence 6 (%) MICAI 2004: Advances in Artificial Intelligence 6 (%) Book 202 (%) Journal 55 (%) Springer 257 (%) Computer Science 224 (%) Artificial Intelligence (incl. Robotics) 113 (%) Computer Communication Networks 80 (%) Algorithm Analysis and Problem Complexity 76 (%) Information Systems Applications (incl. Internet) 68 (%) 962 Authors 661 Institutions Showing 1 to 100 of 257 matching Articles Results per page: 10 20 50 Export (CSV) PIÑAS: Supporting a Community of Co-authors on the Web Distributed Communities on the Web (2002-01-01) 2468: 113-124 , January 01, 2002 By Morán, Alberto L.; Decouchant, Dominique; Favela, Jesus; Martínez-Enríquez, Ana María; González Beltrán, Beatriz; Mendoza, Sonia Show all (6) To provide efficient support for collaborative writing to a community of authors is a complex and demanding task, members need to communicate, coordinate, and produce in a concerted fashion in order to obtain a final version of the documents that meets overall expectations. In this paper, we present the PIÑAS middleware, a platform that provides potential and actual collaboration spaces, as well as specific services customized to support collaborative writing on the Web. 
We start by introducing PIÑAS Collaborative Spaces and an extended version of Doc2U, the current tool that implements them, that integrate and structure a suite of specialized project and session services. Later, a set of services for the naming, identification, and shared management of authors, documents and resources in a replicated Web architecture is presented. Finally, a three-tier distributed architecture that organizes these services and a final discussion on how they support a community of authors on the Web is presented. Ontology-Based Resource Discovery in Pervasive Collaborative Environments Collaboration and Technology (2013-01-01) 8224: 233-240 , January 01, 2013 By García, Kimberly; Kirsch-Pinheiro, Manuele; Mendoza, Sonia; Decouchant, Dominique Show all (4) Most of the working environments offer multiple hardware and software that could be shared among the members of staff. However, it could be particularly difficult to take advantages of all these resources without a proper software support capable of discovering the ones that fulfill both a user's requirements and each resource owner's sharing preferences. To try to overcome this problem, several service discovery protocols have been developed, aiming to promote the use of network resources and to reduce configuration tasks. Unfortunately, these protocols are mainly focused on finding resources based just on their type or some minimal features, lacking information about: user preferences, restrictions and contextual variables. To outstrip this deficiency, we propose to exploit the power of semantic description, by creating a knowledge base integrated by a set of ontologies generically designed to be adopted by any type of organization. To validate this proposal, we have customized the ontologies for our case of study, which is a research center. 
Access Control-Based Distribution of Shared Documents On the Move to Meaningful Internet Systems 2004: OTM 2004 Workshops (2004-01-01) 3292: 12-13 , January 01, 2004 By Mendoza, Sonia; Morán, Alberto L.; Decouchant, Dominique; Enríquez, Ana María Martínez; Favela, Jesus Show all (5) The PIÑAS platform provides an authoring group with support to collaboratively and consistently produce shared Web documents. Such documents may include costly multimedia resources, whose management raises important issues due to the constraints imposed by Web technology. This poster presents an approach for distributing shared Web documents to the authoring group's sites, taking into consideration current organization of the concerned sites, access rights granted to the co-authors and storage device capabilities. Adaptive Distribution Support for Co-authored Documents on the Web Groupware: Design, Implementation, and Use (2005-01-01) 3706: 33-48 , January 01, 2005 By Mendoza, Sonia; Decouchant, Dominique; Morán, Alberto L.; Enríquez, Ana María Martínez; Favela, Jesus Show all (5) In order to facilitate and improve collaboration among co-authors, working in the Web environment, documents must be made seamlessly available to them. Web documents may contain multimedia resources, whose management raises important issues due to the constraints and limits imposed by Web technology. This paper proposes an adaptive support for distributing shared Web documents and multimedia resources across authoring group sites. Our goal is to provide an efficient use of costly Web resources. Distribution is based on the current arrangement of the participating sites, the roles granted to the co-authors and the site capabilities. We formalize key concepts to ensure that system's properties are fulfilled under the specified conditions and to characterize distribution at a given moment. 
The proposed support has been integrated into the PIÑAS platform, which allows an authoring group to collaboratively and consistently produce shared Web documents. A Distributed Event Service for Adaptive Group Awareness MICAI 2002: Advances in Artificial Intelligence (2002-01-01) 2313: 506-515 , January 01, 2002 By Decouchant, Dominique; Martńez-Enríquez, Ana Mará; Favela, Jesús; L.Morán, Alberto; Mendoza, Sonia; Jafar, Samir Show all (6) This paper is directly focused on the design of middleware functions to support a distributed cooperative authoring environment on the World Wide Web. Using the advanced storage and access functions of the PIÑAS middleware, co-authors can produce fragmented and replicated documents in a structured, consistent and efficient way. However, despite it provides elaborated, concerted, secure and parameterizable cooperative editing support and mechanisms, this kind of applications requires a suited and efficient inter-application communication service to design and implement flexible, efficient, and adapted group awareness functionalities. Thus, we developed a proof-of-concept implementation of a centralized version of a Distributed Event Management Service that allows to establish communication between cooperative applications, either in distributed or centralized mode. As an essential component for the development of cooperative environments, this Distributed Event Management Service allowed us to design an Adaptive Group Awareness Engine whose aim is to automatically deduce and adapt co-author's cooperative environments to allow them collaborate closer. Thus, this user associated inference engine captures the application events corresponding to author's actions,and uses its knowledge and rule bases,to detect co-author's complementary or related work, specialists, or beginners, etc. Its final goal is to propose modifications to the author working environments, application interfaces, communication or interaction ways, etc. 
An Adaptive Cooperative Web Authoring Environment Adaptive Hypermedia and Adaptive Web-Based Systems (2002-01-01) 2347: 535-538 , January 01, 2002 By Martínez-Enríquez, Ana María; Decouchant, Dominique; Morán, Alberto L.; Favela, Jesus Show all (4) Using AllianceWeb, authors distributed around the world can cooperate producing large documents in a consistent and concerted way. In this paper, we highlight the main aspects of the group awareness function that allows each author to diffuse his contribution to other co-authors, and to control the way by which other contributions are integrated into his environment. In order to support this function, essential to every groupware application, we have designed a self-adaptive cooperative interaction environment, parametrized by user preferences. Thus, the characteristics of an adaptive group awareness agent are defined. Renaming Is Weaker Than Set Agreement But for Perfect Renaming: A Map of Sub-consensus Tasks LATIN 2012: Theoretical Informatics (2012-01-01) 7256: 145-156 , January 01, 2012 By Castañeda, Armando; Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel Show all (4) In the wait-free shared memory model substantial attention has been devoted to understanding the relative power of sub-consensus tasks. Two important sub-consensus families of tasks have been identified: k-set agreement and M-renaming. When 2 ≤ k ≤ n − 1 and n ≤ M ≤ 2n − 2, these tasks are more powerful than read/write registers, but not strong enough to solve consensus for two processes. This paper studies the power of renaming with respect to set agreement. It shows that, in a system of n processes, n-renaming is strictly stronger than (n − 1)-set agreement, but not stronger than (n − 2)-set agreement. Furthermore, (n + 1)-renaming cannot solve even (n − 1)-set agreement. As a consequence, there are cases where set agreement and renaming are incomparable when looking at their power to implement each other. 
A Survey on Some Recent Advances in Shared Memory Models Structural Information and Communication Complexity (2011-01-01) 6796: 17-28 , January 01, 2011 By Rajsbaum, Sergio; Raynal, Michel Due to the advent of multicore machines, shared memory distributed computing models taking into account asynchrony and process crashes are becoming more and more important. This paper visits models for these systems and analyses their properties from a computability point of view. Among them, the base snapshot model and the iterated model are particularly investigated. The paper visits also several approaches that have been proposed to model failures (mainly the wait-free model and the adversary model) and gives also a look at the BG simulation. The aim of this survey is to help the reader to better understand the power and limits of distributed computing shared memory models. The Opinion Number of Set-Agreement Principles of Distributed Systems (2014-01-01) 8878: 155-170 , January 01, 2014 By Fraigniaud, Pierre; Rajsbaum, Sergio; Roy, Matthieu; Travers, Corentin Show all (4) This paper carries on the effort to bridging runtime verification with distributed computability, studying necessary conditions for monitoring failure prone asynchronous distributed systems. It has been recently proved that there are correctness properties that require a large number of opinions to be monitored, an opinion being of the form true, false, perhaps, probably true, probably no, etc. The main outcome of this paper is to show that this large number of opinions is not an artifact induced by the existence of artificial constructions. Instead, monitoring an important class of properties, requiring processes to produce at most k different values does require such a large number of opinions. Specifically, our main result is a proof that it is impossible to monitor k-set-agreement in an n-process system with fewer than min {2k,n} + 1 opinions. 
We also provide an algorithm to monitor k-set-agreement with min {2k,n} + 1 opinions, showing that the lower bound is tight. Plasticity of Interaction Interfaces: The Study Case of a Collaborative Whiteboard By Sánchez, Gabriela; Mendoza, Sonia; Decouchant, Dominique; Gallardo-López, Lizbeth; Rodríguez, José Show all (5) The development of plastic user interfaces constitutes a promising research topic. They are intentionally designed to automatically adapt themselves to changes of their context of use defined in terms of the user (e.g., identity and role), the environment (e.g., location and available information/tools) and the platform. Some single-user systems already integrate some plasticity capabilities, but this topic remains quasi-unexplored in CSCW. This work is centered on prototyping a plastic collaborative whiteboard that adapts itself: 1) to the platform, as it can be launched from heterogeneous computer devices and 2) to each collaborator, when he is working from several devices. This application can split its interface between the users' devices in order to facilitate the interaction. Thus, the distributed interface components work in the same way as if they were co-located within a unique device. At any time, group awareness is maintained among collaborators. GMTE: A Tool for Graph Transformation and Exact/Inexact Graph Matching Graph-Based Representations in Pattern Recognition (2013-01-01) 7877: 71-80 , January 01, 2013 By Hannachi, Mohamed Amine; Bouassida Rodriguez, Ismael; Drira, Khalil; Pomares Hernandez, Saul Eduardo Show all (4) Multi-labelled graphs are a powerful and versatile tool for modelling real applications in diverse domains such as communication networks, social networks, and autonomic systems, among others. 
Due to dynamic nature of such kind of systems the structure of entities is continuously changing along the time, this because, it is possible that new entities join the system, some of them leave it or simply because the entities' relations change. Here is where graph transformation takes an important role in order to model systems with dynamic and/or evolutive configurations. Graph transformation consists of two main tasks: graph matching and graph rewriting. At present, few graph transformation tools support multi-labelled graphs. To our knowledge, there is no tool that support inexact graph matching for the purpose of graph transformation. Also, the main problem of these tools lies on the limited expressiveness of rewriting rules used, that negatively reduces the range of application scenarios to be modelling and/or negatively increase the number of rewriting rules to be used. In this paper, we present the tool GMTE - Graph Matching and Transformation Engine. GMTE handles directed and multi-labelled graphs. In addition, to the exact graph matching, GMTE handles the inexact graph matching. The approach of rewriting rules used by GMTE combines Single PushOut rewriting rules with edNCE grammar. This combination enriches and extends the expressiveness of the graph rewriting rules. In addition, for the graph matching, GMTE uses a conditional rule schemata that supports complex comparison functions over labels. To our knowledge, GMTE is the first graph transformation tool that offers such capabilities. The Universe of Symmetry Breaking Tasks By Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel Processes in a concurrent system need to coordinate using a shared memory or a message-passing subsystem in order to solve agreement tasks such as, for example, consensus or set agreement. 
However, coordination is often needed to "break the symmetry" of processes that are initially in the same state, for example, to get exclusive access to a shared resource, to get distinct names or to elect a leader. This paper introduces and studies the family of generalized symmetry breaking (GSB) tasks, that includes election, renaming and many other symmetry breaking tasks. Differently from agreement tasks, a GSB task is "inputless", in the sense that processes do not propose values; the task only specifies the symmetry breaking requirement, independently of the system's initial state (where processes differ only on their identifiers). Among various results characterizing the family of GSB tasks, it is shown that (non adaptive) perfect renaming is universal for all GSB tasks. Erratum to: Ultra high energy photons and neutrinos with JEM-EUSO Experimental Astronomy (2015-11-01) 40: 235-237 , November 01, 2015 By Adams, J. H., Jr.; Ahmad, S.; Albert, J. -N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J. -N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. 
J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J. -S.; Kim, S. -W.; Kim, S. -W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. 
D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; The JEM-EUSO Collaboration Show all (289) Ultra high energy photons and neutrinos with JEM-EUSO Ultra high energy photons and neutrinos are carriers of very important astrophysical information. They may be produced at the sites of cosmic ray acceleration or during the propagation of the cosmic rays in the intergalactic medium. In contrast to charged cosmic rays, photon and neutrino arrival directions point to the production site because they are not deflected by the magnetic fields of the Galaxy or the intergalactic medium. In this work we study the characteristics of the longitudinal development of showers initiated by photons and neutrinos at the highest energies. 
These studies are relevant for the development of techniques for neutrino and photon identification by the JEM-EUSO telescope. In particular, we study the possibility of observing the multi-peak structure of very deep horizontal neutrino showers with JEM-EUSO. We also discuss the possibility of determining the flavor content of the incident neutrino flux by taking advantage of the different characteristics of the longitudinal profiles generated by different types of neutrinos. This is of great importance for the study of the fundamental properties of neutrinos at the highest energies. Regarding photons, we discuss the detectability of the cosmogenic component by JEM-EUSO and also estimate the expected upper limits on the photon fraction which can be obtained from the future JEM-EUSO data for the case in which there are no photons in the samples. Adaptive Resource Management in the PIÑAS Web Cooperative Environment Advances in Web Intelligence (2004-01-01) 3034: 33-43 , January 01, 2004 By Mendoza, Sonia; Decouchant, Dominique; Martínez Enríquez, Ana María; Morán, Alberto L. The PIÑAS Web cooperative environment allows distributed authors working together to produce shared documents in a consistent way. The management of shared resources in such an environment raises important technical issues due to the constraints imposed by Web technology. An elaborated group awareness function is provided that allows each author to notify other authors of his/her contributions, and to control the way in which other contributions are integrated into his/her environment. In order to support this function, essential to every groupware application, we designed a self-adaptive cooperative environment. We propose a new way of structuring Web documents to be considered as independent resource containers with their corresponding management context.
This representation of information simplifies the design of mechanisms to share, modify and update documents and their resources in a consistent and controlled way. Scenarios are used to motivate the need for robust mechanisms for the management of shared Web documents and to illustrate how the extensions presented address these issues. Environment and Financial Markets Computational Science - ICCS 2004 (2004-01-01) 3039: 787-794 , January 01, 2004 By Szatzschneider, Wojciech; Jeanblanc, Monique; Kwiatkowska, Teresa We propose to put the environment into financial markets. We explain how to do it, and why the financial approach is practically the only one suited for stopping and inverting environmental degradation. We concentrate our attention on deforestation, which is the largest environmental problem in the third world, and explain how to start the project and what kind of optimization problems should be solved to ensure the optimal use of environmental funds. In the final part we analyze the dynamical control for bounded processes and awards partially based on the mean of the underlying value. An Inference Engine for Web Adaptive Cooperative Work By Martínez-Enríquez, Ana María; Muhammad, Aslam; Decouchant, Dominique; Favela, Jesús This paper describes the principle of an inference engine that analyzes useful information of actions, performed by cooperating users, to propose modifications of the states and/or the presentation of the shared objects. Using cooperative groupware applications, a group of people may work on the same task while other users may pursue their individual goals using various other applications (cooperative or non-cooperative) with different roles. In such an environment, consistency, group awareness and security are of essential significance. The work of each user can be observed by capturing their actions and then analyzing them in relation to the history of previous actions.
The proposed Adaptive Inference Engine (AIE) behaves as a consumer of application events which analyzes this information on the basis of some predefined rules and then proposes some actions that may be applied within the cooperative environment. In all cases, the user controls the execution of the proposed group awareness actions in his working environment. A prototype of the AIE is developed using the Amaya Web Authoring Toolkit and the PIÑAS collaborative authoring middleware. Before Getting There: Potential and Actual Collaboration Groupware: Design, Implementation, and Use (2002-01-01) 2440: 147-167 , January 01, 2002 By Morán, Alberto L.; Favela, Jesus; Martínez-Enríquez, M.; Decouchant, Dominique In this paper we introduce the concepts of Actual and Potential Collaboration Spaces. The former applies to the space where collaborative activities are performed, while the latter relates to the initial space where opportunities for collaboration are identified and an initial interaction is established. We present a characterization for Potential Collaboration Spaces featuring awareness elements for the potential of collaboration and mechanisms to gather and present them, as well as mechanisms to establish an initial interaction and associated GUI elements. We argue that by making this distinction explicit, and characterizing Potential Collaboration Spaces, designers of groupware can better identify the technical requirements of their systems and thus provide solutions that more appropriately address their users' concerns. We illustrate this concept with the design of an application that supports Potential Collaboration Spaces for the PIÑAS web-based coauthoring middleware.
Extrinsic Evaluation on Automatic Summarization Tasks: Testing Affixality Measurements for Statistical Word Stemming Advances in Computational Intelligence (2013-01-01): 7630 , January 01, 2013 By Méndez-Cruz, Carlos-Francisco; Torres-Moreno, Juan-Manuel; Medina-Urrea, Alfonso; Sierra, Gerardo This paper presents experiments evaluating a statistical stemming algorithm based on morphological segmentation. The method estimates the affixality of word fragments. It combines three indexes associated with possible cuts. This unsupervised and language-independent method has been easily adapted to generate an effective morphological stemmer. This stemmer has been coupled with Cortex, an automatic summarization system, in order to generate summaries in English, Spanish and French. Summaries have been evaluated using ROUGE. The results of this extrinsic evaluation show that our stemming algorithm outperforms several classical systems. Computing in the Presence of Concurrent Solo Executions By Herlihy, Maurice; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model. This paper studies the computability power of models in which several processes may concurrently run solo. It first introduces a family of round-based wait-free models, called the d-solo models, 1 ≤ d ≤ n, where up to d processes may run solo. The paper then gives a characterization of the colorless tasks that can be solved in each d-solo model.
It also introduces the (d,ε)-solo approximate agreement task, which generalizes ε-approximate agreement, and proves that (d,ε)-solo approximate agreement can be solved in the d-solo model, but cannot be solved in the (d + 1)-solo model. The paper also studies the relation between d-set agreement and (d,ε)-solo approximate agreement in asynchronous wait-free message-passing systems. These results establish for the first time a hierarchy of wait-free models that, while weaker than the basic read/write model, are nevertheless strong enough to solve non-trivial tasks. Automatically Adjusting Concurrency to the Level of Synchrony Distributed Computing (2014-01-01) 8784: 1-15 , January 01, 2014 By Fraigniaud, Pierre; Gafni, Eli; Rajsbaum, Sergio; Roy, Matthieu The state machine approach is a well-known technique for building distributed services requiring high performance and high availability, by replicating servers, and by coordinating client interactions with server replicas using consensus. Indulgent consensus algorithms exist for realistic eventually partially synchronous models, which never violate safety and guarantee liveness once the system becomes synchronous. Unavoidably, these algorithms may never terminate, even when no processor crashes, if the system never becomes synchronous. This paper proposes a mechanism similar to state machine replication, called RC-simulation, that can always make progress, even if the system is never synchronous. Using RC-simulation, the quality of the service will adjust to the current level of asynchrony of the network, degrading when the system is very asynchronous, and improving when the system becomes more synchronous. RC-simulation generalizes the state machine approach in the following sense: when the system is asynchronous, the system behaves as if k + 1 threads were running concurrently, where k is a function of the asynchrony.
In order to illustrate how the RC-simulation can be used, we describe a long-lived renaming implementation. By reducing the concurrency down to the asynchrony of the system, RC-simulation makes it possible to obtain renaming quality that adapts linearly to the asynchrony. RANSAC-GP: Dealing with Outliers in Symbolic Regression with Genetic Programming Genetic Programming (2017-01-01): 10196 , January 01, 2017 By López, Uriel; Trujillo, Leonardo; Martinez, Yuliana; Legrand, Pierrick; Naredo, Enrique; Silva, Sara Genetic programming (GP) has been shown to be a powerful tool for automatic modeling and program induction. It is often used to solve difficult symbolic regression tasks, with many examples in real-world domains. However, the robustness of GP-based approaches has not been substantially studied. In particular, the present work deals with the issue of outliers, data in the training set that represent severe errors in the measuring process. In general, a datum is considered an outlier when it sharply deviates from the true behavior of the system of interest. GP practitioners know that such data points usually bias the search and produce inaccurate models. Therefore, this work presents a hybrid methodology based on the RAndom SAmpling Consensus (RANSAC) algorithm and GP, which we call RANSAC-GP. RANSAC is an approach to deal with outliers in parameter estimation problems, widely used in computer vision and related fields. This work presents the first application of RANSAC to symbolic regression with GP, with impressive results. The proposed algorithm is able to deal with extreme amounts of contamination in the training set, evolving highly accurate models even when the amount of outliers reaches 90%.
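RANSAC-GP couples the RANSAC loop with GP individuals as the candidate models. As a minimal sketch of the underlying RANSAC idea, applied here to a simple linear model rather than to GP (function and parameter names are illustrative, not from the paper):

```python
import random

def ransac_line(points, n_iters=200, threshold=0.5, seed=0):
    """Minimal RANSAC sketch for fitting y = a*x + b in the presence of
    outliers: repeatedly fit a model to a random minimal sample (two
    points for a line) and keep the model with the largest consensus
    set, i.e. the most points within `threshold` of the model."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, no unique line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) <= threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With heavily contaminated data, e.g. ten points on y = 2x + 1 plus a few gross outliers, the largest consensus set recovers the clean points and the underlying line; in RANSAC-GP the line-fitting step is replaced by evolving a GP model on the sampled subset.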
On the Number of Opinions Needed for Fault-Tolerant Run-Time Monitoring in Distributed Systems Runtime Verification (2014-01-01) 8734: 92-107 , January 01, 2014 By Fraigniaud, Pierre; Rajsbaum, Sergio; Travers, Corentin Decentralized runtime monitoring involves a set of monitors observing the behavior of system executions with respect to some correctness property. It is generally assumed that, as soon as a violation of the property is revealed by any of the monitors at runtime, some recovery code can be executed to bring the system back to a legal state. This implicitly assumes that each monitor produces a binary opinion, true or false, and that the recovery code is launched as soon as one of these opinions is equal to false. In this paper, we formally prove that, in a failure-prone asynchronous computing model, there are correctness properties for which there is no such decentralized monitoring. We show that there exist some properties which, in order to be monitored in a wait-free decentralized manner, inherently require that the monitors produce a number of opinions larger than two. More specifically, our main result is that, for every k, 1 ≤ k ≤ n, there exists a property that requires at least k opinions to be monitored by n monitors. We also present a corresponding distributed monitor using at most k + 1 opinions, showing that our lower bound is nearly tight. Local Search is Underused in Genetic Programming Genetic Programming Theory and Practice XIV (2018-01-01): 119-137 , January 01, 2018 By Trujillo, Leonardo; Z-Flores, Emigdio; Juárez-Smith, Perla S.; Legrand, Pierrick; Silva, Sara; Castelli, Mauro; Vanneschi, Leonardo; Schütze, Oliver; Muñoz, Luis There are two important limitations of standard tree-based genetic programming (GP). First, GP tends to evolve unnecessarily large programs, which is referred to as bloat. Second, GP uses inefficient search operators that focus on modifying program syntax.
The first problem has been studied extensively, with many works proposing bloat control methods. Regarding the second problem, one approach is to use alternative search operators, for instance geometric semantic operators, to improve convergence. In this work, our goal is to experimentally show that both problems can be effectively addressed by incorporating a local search optimizer as an additional search operator. Using real-world problems, we show that this rather simple strategy can improve the convergence and performance of tree-based GP, while also reducing program size. Given these results, a question arises: Why are local search strategies so uncommon in GP? A small survey of popular GP libraries suggests to us that local search is underused in GP systems. We conclude by outlining plausible answers for this question and highlighting future work. Tree Species Classification Based on 3D Bark Texture Analysis Image and Video Technology (2014-01-01) 8333: 279-289 , January 01, 2014 By Othmani, Ahlem; Piboule, Alexandre; Dalmau, Oscar; Lomenie, Nicolas; Mokrani, Said; Voon, Lew Fock Chong Lew Yan The Terrestrial Laser Scanning (TLS) technique is today widely used in ground plots to acquire 3D point clouds from which forest inventory attributes are calculated. In the case of mixed plantings where the 3D point clouds contain data from several different tree species, it is important to be able to automatically recognize the tree species in order to analyze the data of each of the species separately. Although automatic tree species recognition from TLS data is an important problem, it has received very little attention from the scientific community. In this paper we propose a method for classifying five different tree species using TLS data.
Our method is based on the analysis of the 3D geometric texture of the bark in order to compute roughness measures and shape characteristics that are fed as input to a Random Forest classifier to classify the tree species. The method has been evaluated on a test set composed of 265 samples (53 samples of each of the 5 species) and the results obtained are very encouraging. Potentialities of Chorems as Visual Summaries of Geographic Databases Contents Advances in Visual Information Systems (2007-01-01) 4781: 537-548 , January 01, 2007 By Fatto, Vincenzo; Laurini, Robert; Lopez, Karla; Loreto, Rosalva; Milleret-Raffort, Françoise; Sebillo, Monica; Sol-Martinez, David; Vitiello, Giuliana Chorems are schematized representations of territories, and so they can represent a good visual summary of spatial databases. Indeed for spatial decision-makers, it is more important to identify and map problems than facts. Until now, chorems were made manually by geographers based on their own knowledge of the territory. Thus, an international project was launched in order to automatically discover spatial patterns and lay out chorems starting from spatial databases. After examining some manually-made chorems, some guidelines were identified. Then the architecture of a prototype system is presented based on a canonical database structure, a subsystem for spatial patterns discovery based on spatial data mining, a subsystem for chorem layout, and a specialized language to represent chorems. A Comparison between Hardware Accelerators for the Modified Tate Pairing over $\mathbb{F}_{2^m}$ and $\mathbb{F}_{3^m}$ Pairing-Based Cryptography – Pairing 2008 (2008-01-01) 5209: 297-315 , January 01, 2008 By Beuchat, Jean-Luc; Brisebarre, Nicolas; Detrey, Jérémie; Okamoto, Eiji; Rodríguez-Henríquez, Francisco In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the ηT pairing introduced by Barreto et al.
[1], we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat et al. [5]. We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the trade-offs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two. Biologically-Inspired Digital Architecture for a Cortical Model of Orientation Selectivity Artificial Neural Networks - ICANN 2008 (2008-01-01) 5164: 188-197 , January 01, 2008 By Torres-Huitzil, Cesar; Girau, Bernard; Arias-Estrada, Miguel This paper presents a biologically inspired modular hardware implementation of a cortical model of orientation selectivity of the visual stimuli in the primary visual cortex, targeted at a Field Programmable Gate Array (FPGA) device. The architecture mimics the functionality and organization of neurons through spatial Gabor-like filtering and the so-called cortical hypercolumnar organization. A systolic array and a suitable image addressing scheme are used to partially overcome the von Neumann bottleneck of monolithic memory organization in conventional microprocessor-based systems by processing small and local amounts of sensory information (image tiles) in an incremental way. A real-time FPGA implementation is presented for 8 different orientations, and aspects such as flexibility, scalability, performance and precision are discussed to show the plausibility of implementing biologically-inspired processing for early visual perception in digital devices.
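The spatial Gabor-like filtering at the heart of such an architecture can be sketched in software; the kernel size, sigma, and wavelength below are illustrative assumptions, not values from the paper:

```python
import math

def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
    """Build a real-valued Gabor kernel tuned to orientation `theta`
    (radians): a sinusoidal grating modulated by a Gaussian envelope.
    Parameter values here are illustrative defaults."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the grating is oriented at theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def convolve_valid(image, kernel):
    """Plain 'valid' 2-D correlation of a grayscale image with a kernel,
    mimicking the per-tile filtering performed by the systolic array."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    return [[sum(kernel[u][v] * image[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(iw - kw + 1)]
            for i in range(ih - kh + 1)]
```

A bank of kernels at orientations kπ/8 for k = 0, …, 7 emulates the 8-orientation hypercolumnar organization: the filter responding most strongly at a location indicates the locally preferred orientation.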
Software Implementation of Binary Elliptic Curves: Impact of the Carry-Less Multiplier on Scalar Multiplication Cryptographic Hardware and Embedded Systems – CHES 2011 (2011-01-01) 6917: 108-123 , January 01, 2011 By Taverne, Jonathan; Faz-Hernández, Armando; Aranha, Diego F.; Rodríguez-Henríquez, Francisco; Hankerson, Darrel; López, Julio The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel resistant scalar multiplication in a random curve at the 128-bit security level. A Fixpoint Semantics of Event Systems With and Without Fairness Assumptions Integrated Formal Methods (2005-01-01) 3771: 327-346 , January 01, 2005 By Barradas, Héctor Ruíz; Bert, Didier We present a fixpoint semantics of event systems. The semantics is presented in a general framework without concerns of fairness. Soundness and completeness of rules for deriving leads-to properties are proved in this general framework.
The general framework is instantiated to minimal progress and weak fairness assumptions and similar results are obtained. We show the power of these results by deriving sufficient conditions for leads-to under minimal progress, proving soundness of proof obligations without reasoning over state-traces. Empirical Evaluation of Collaborative Support for Distributed Pair Programming By Favela, Jesus; Natsu, Hiroshi; Pérez, Cynthia; Robles, Omar; Morán, Alberto L.; Romero, Raul; Martínez-Enríquez, Ana M.; Decouchant, Dominique Pair programming is an Extreme Programming (XP) practice where two programmers work on a single computer to produce an artifact. Empirical evaluations have provided evidence that this technique results in higher quality code in half the time it would take an individual programmer. Distributed pair programming could facilitate opportunistic pair programming sessions with colleagues working in remote sites. In this paper we present the preliminary results of the empirical evaluation of the COPPER collaborative editor, developed explicitly to support pair programming. The evaluation was performed under three different conditions: pairs working collocated on a single computer; distributed pairs working in application sharing mode; and distributed pairs using collaboration-aware facilities. In all three cases the subjects used the COPPER collaborative editor. The results support our hypothesis that distributed pairs could find the same number of errors as their collocated counterparts. However, no evidence was found that the pairs that used collaborative awareness services had better code comprehension, as we had also hypothesized.
An artificial life approach to dense stereo disparity Artificial Life and Robotics (2009-03-01) 13: 585-596 , March 01, 2009 By Olague, Gustavo; Pérez, Cynthia B.; Fernández, Francisco; Lutton, Evelyne This article presents an adaptive approach to improving the infection algorithm that we have used to solve the dense stereo matching problem. The algorithm presented here incorporates two different epidemic automata along a single execution of the infection algorithm. The new algorithm attempts to provide a general behavior of guessing the best correspondence between a pair of images. Our aim is to provide a new strategy inspired by evolutionary computation, which combines the behaviors of both automata into a single correspondence problem. The new algorithm will decide which automata will be used based on the transmission of information and mutation, as well as the attributes, texture, and geometry of the input images. This article gives details about how the rules used in the infection algorithm are coded. Finally, we show experiments with a real stereo pair, as well as with a standard test bed, to show how the infection algorithm works. Hardware Accelerator for the Tate Pairing in Characteristic Three Based on Karatsuba-Ofman Multipliers Cryptographic Hardware and Embedded Systems - CHES 2009 (2009-01-01) 5747: 225-239 , January 01, 2009 By Beuchat, Jean-Luc; Detrey, Jérémie; Estibals, Nicolas; Okamoto, Eiji; Rodríguez-Henríquez, Francisco This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose here a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated with the Tate pairing, we manage to keep the pipeline busy.
We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. Then, we outline the architecture of a coprocessor for the Tate pairing over $\mathbb{F}_{3^m}$. However, a final exponentiation is still needed to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources. According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors. Belief Merging in Dynamic Logic of Propositional Assignments Foundations of Information and Knowledge Systems (2014-01-01) 8367: 381-398 , January 01, 2014 By Herzig, Andreas; Pozos-Parra, Pilar; Schwarzentruber, François We study syntactical merging operations that are defined semantically by means of the Hamming distance between valuations; more precisely, we investigate the Σ-semantics, Gmax-semantics and max-semantics. We work with a logical language containing merging operators as connectives, as opposed to the metalanguage operations of the literature. We capture these merging operators as programs of Dynamic Logic of Propositional Assignments (DL-PA). This provides a syntactical characterisation of the three semantically defined merging operators, and a proof system for DL-PA therefore also provides a proof system for these merging operators. We explain how PSPACE membership of the model checking and satisfiability problem of star-free DL-PA can be extended to the variant of DL-PA where symbolic disjunctions that are parametrised by sets (that are not defined as abbreviations, but are proper connectives) are built into the language.
As our merging operators can be polynomially embedded into this variant of DL-PA, we obtain that both the model checking and the satisfiability problem of a formula containing possibly nested merging operators are in PSPACE. Semantic Network Adaptation Based on QoS Pattern Recognition for Multimedia Streams Signal Processing, Image Processing and Pattern Recognition (2009-01-01) 61: 267-274 , January 01, 2009 By Exposito, Ernesto; Gineste, Mathieu; Lamolle, Myriam; Gomez, Jorge This article proposes an ontology-based pattern recognition methodology to compute and represent common QoS properties of the Application Data Units (ADU) of multimedia streams. The use of this ontology by mechanisms located at different layers of the communication architecture will allow implementing fine-grained per-packet self-optimization of communication services regarding the actual application requirements. A case study showing how this methodology is used by error control mechanisms in the context of wireless networks is presented in order to demonstrate the feasibility and advantages of this approach. Ground-based tests of JEM-EUSO components at the Telescope Array site, "EUSO-TA" We are conducting tests of optical and electronics components of JEM-EUSO at the Telescope Array site in Utah with a ground-based "EUSO-TA" detector. The tests will include an engineering validation of the detector, cross-calibration of EUSO-TA with the TA fluorescence detector and observations of air shower events. Also, the proximity of the TA's Electron Light Source will allow for convenient use of this calibration device. In this paper, we report initial results obtained with the EUSO-TA telescope.
Erratum to: Performances of JEM-EUSO: angular reconstruction Performances of JEM-EUSO: angular reconstruction Mounted on the International Space Station (ISS), the Extreme Universe Space Observatory, on-board the Japanese Experimental Module (JEM-EUSO), relies on the well-established fluorescence technique to observe Extensive Air Showers (EAS) developing in the Earth's atmosphere. Focusing on the detection of Ultra High Energy Cosmic Rays (UHECR) in the decade of 10^20 eV, JEM-EUSO will face new challenges by applying this technique from space. The EUSO Simulation and Analysis Framework (ESAF) has been developed in this context to provide a full end-to-end simulation frame, and assess the overall performance of the detector. Within ESAF, angular reconstruction can be separated into two conceptually different steps. The first step is pattern recognition, or filtering, of the signal to separate it from the background. The second step is to perform different types of fitting in order to search for the relevant geometrical parameters that best describe the previously selected signal. In this paper, we discuss some of the techniques we have implemented in ESAF to perform the geometrical reconstruction of EAS seen by JEM-EUSO. We also conduct thorough tests to assess the performances of these techniques in conditions which are relevant to the scope of the JEM-EUSO mission. We conclude by showing the expected angular resolution in the energy range that JEM-EUSO is expected to observe. Performances of JEM-EUSO: energy and Xmax reconstruction The Extreme Universe Space Observatory (EUSO) on-board the Japanese Experimental Module (JEM) of the International Space Station aims at the detection of ultra high energy cosmic rays from space. The mission consists of a UV telescope which will detect the fluorescence light emitted by cosmic ray showers in the atmosphere.
The mission, currently developed by a large international collaboration, is designed to be launched within this decade. In this article, we present the reconstruction of the energy of the observed events and we also address the Xmax reconstruction. After discussing the algorithms developed for the energy and Xmax reconstruction, we present several estimates of the energy resolution, as a function of the incident angle and energy of the event. Similarly, estimates of the Xmax resolution for various conditions are presented. Calibration aspects of the JEM-EUSO mission Experimental Astronomy (2015-11-01) 40: 91-116 , November 01, 2015 The JEM-EUSO telescope will be, after calibration, a very accurate instrument which yields the number of received photons from the number of measured photo-electrons. The project is in phase A (demonstration of the concept) including already operating prototype instruments, i.e. many parts of the instrument have been constructed and tested. Calibration is a crucial part of the instrument and its use. The focal surface (FS) of the JEM-EUSO telescope will consist of about 5000 photo-multiplier tubes (PMTs), which have to be well calibrated to reach the required accuracy in reconstructing the air-shower parameters. The optics system consists of 3 plastic Fresnel (double-sided) lenses of 2.5 m diameter. The aim of the calibration system is to measure the efficiencies (transmittances) of the optics and absolute efficiencies of the entire focal surface detector. The system consists of 3 main components: (i) Pre-flight calibration devices on the ground, where the efficiency and gain of the PMTs, as well as the transmittance of the optics, will be measured absolutely. (ii) On-board relative calibration system applying two methods: a) operating during the day when the JEM-EUSO lid will be closed with small light sources on board. b) operating during the night, together with data taking: the monitoring of the background rate over identical sites.
(iii) Absolute in-flight calibration, again, applying two methods: a) measurement of the moon light, reflected on high altitude, high albedo clouds. b) measurements of calibrated flashes and tracks produced by the Global Light System (GLS). Some details of each calibration method will be described in this paper. The JEM-EUSO mission: An introduction Experimental Astronomy (2015-11-01) 40: 3-17 , November 01, 2015 By Adams, J. H., Jr.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellini, G.; Catalano, C.; Catalano, O.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; Castro, A. J.; Donato, C.; Taille, C.; Santis, C.; Peral, L.; Dell'Oro, A.; Simone, N.; Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. 
A.; Kim, Jeong-Sook; Kim, Soon-Wook; Kim, Sug-Whan; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Nagano, M.; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Frías, M. D. Rodríguez; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; The JEM-EUSO Collaboration The Extreme Universe Space Observatory on board the Japanese Experiment Module of the International Space Station, JEM-EUSO, is being designed to search from space for ultra-high energy cosmic rays. These are charged particles with energies from a few 10^19 eV to beyond 10^20 eV, at the very end of the known cosmic ray energy spectrum. JEM-EUSO will also search for extreme energy neutrinos, photons, and exotic particles, providing a unique opportunity to explore largely unknown phenomena in our Universe. The mission, principally based on a wide field of view (60 degrees) near-UV telescope with a diameter of ∼ 2.5 m, will monitor the earth's atmosphere at night, pioneering the observation from space of the ultraviolet tracks (290-430 nm) associated with giant extensive air showers produced by ultra-high energy primaries propagating in the earth's atmosphere. Observing from an orbital altitude of ∼ 400 km, the mission is expected to reach an instantaneous geometrical aperture of A_geo ≥ 2 × 10^5 km^2 sr with an estimated duty cycle of ∼ 20 %.
Such a geometrical aperture allows unprecedented exposures, significantly larger than can be obtained with ground-based experiments. In this paper we briefly review the history of space-based search for ultra-high energy cosmic rays. We then introduce the special issue of Experimental Astronomy devoted to the various aspects of such a challenging enterprise. We also summarise the activities of the on-going JEM-EUSO program. JEM-EUSO: Meteor and nuclearite observations Meteor and fireball observations are key to the derivation of both the inventory and physical characterization of small solar system bodies orbiting in the vicinity of the Earth. For several decades, observation of these phenomena has only been possible via ground-based instruments. The proposed JEM-EUSO mission has the potential to become the first operational space-based platform to share this capability. In comparison to the observation of extremely energetic cosmic ray events, which is the primary objective of JEM-EUSO, meteor phenomena are very slow, since their typical speeds are of the order of a few tens of km/sec (whereas cosmic rays travel at light speed). The observing strategy developed to detect meteors may also be applied to the detection of nuclearites, which have higher velocities, a wider range of possible trajectories, but move well below the speed of light and can therefore be considered as slow events for JEM-EUSO. The possible detection of nuclearites greatly enhances the scientific rationale behind the JEM-EUSO mission. The infrared camera onboard JEM-EUSO Experimental Astronomy (2015-11-01) 40: 61-89 , November 01, 2015 The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra High-Energy Cosmic Rays (UHECR). 
For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays, it is an essential part of the readout system, as well. Moreover, the atmosphere must be calibrated and has to be considered as input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR-Camera and a LIDAR. The AMS Infrared Camera is an infrared, wide FoV, imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud top height to properly achieve the UHECR reconstruction in cloudy conditions. In this paper, an updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud top height retrieval algorithms are presented. Science of atmospheric phenomena with JEM-EUSO By Adams, J. H., Jr.; Ahmad, S.; Albert, J. -N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J. -N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. 
J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J. -S.; Kim, S. -W.; Kim, S. -W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. 
D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; Słomiński, J.; The JEM-EUSO Collaboration The main goal of the JEM-EUSO experiment is the study of Ultra High Energy Cosmic Rays (UHECR, 10^19−10^21 eV), but the method which will be used (detection of the secondary light emissions induced by cosmic rays in the atmosphere) also allows the study of other luminous phenomena. The UHECRs will be detected through the measurement of the emission in the range between 290 and 430 nm, where part of the emission from Transient Luminous Events (TLEs) also appears. This work discusses the possibility of using the JEM-EUSO Telescope to get new scientific results on TLEs.
The high time resolution of this instrument allows observation of the evolution of TLEs with great precision just at the moment of their origin. The paper consists of four parts: a review of the present knowledge on TLEs, a presentation of the results of the simulations of TLE images in the JEM-EUSO telescope, results of the Russian experiment Tatiana–2, and a discussion of the possible progress achievable in this field with JEM-EUSO as well as possible cooperation with other space projects devoted to the study of TLEs – TARANIS and ASIM. In atmospheric physics, the study of TLEs became one of the main physical subjects of interest after their discovery in 1989. In the years 1992 – 1994 detection was performed from satellite, aircraft and space shuttle, and recently from the International Space Station. These events have short duration (milliseconds) and small scales (km to tens of km) and appear at altitudes of 50 – 100 km. Their nature is still not clear, and every new piece of experimental data can be useful for a better understanding of these mysterious phenomena. JEM-EUSO observational technique and exposure Designed as the first mission to explore the ultra-high energy universe from space, JEM-EUSO observes the Earth's atmosphere at night to record the ultraviolet tracks generated by the extensive air showers. We present the expected geometrical aperture and annual exposure in the nadir and tilt modes for ultra-high energy cosmic ray observation as a function of the altitude of the International Space Station. Multivariate approximation: An overview Numerical Algorithms (2005-07-01) 39: 1-6 , July 01, 2005 By Apprato, Dominique; Gout, Christian; Rabut, Christophe; Traversoni, Leonardo The JEM-EUSO observation in cloudy conditions The JEM-EUSO (Extreme Universe Space Observatory on-board the Japanese Experiment Module) mission will conduct extensive air shower (EAS) observations on the International Space Station (ISS).
Following the ISS orbit, JEM-EUSO will experience continuous changes in the atmospheric conditions, including cloud presence. The influence of clouds on space-based observation is, therefore, an important topic to investigate from both the EAS property and cloud climatology points of view. In the present work, the impact of clouds on the apparent profile of EAS is demonstrated through simulation studies, taking into account the JEM-EUSO instrument and the properties of the clouds. These results show a dependence on the cloud-top altitude and the optical depth of the cloud. The analyses of satellite measurements of the cloud distribution indicate that more than 60 % of the cases allow for conventional EAS observation, and an additional ∼20 % for observation with reduced quality. The combination of the relevant factors results in an effective trigger aperture for EAS observation of ∼72 % of that in clear-atmosphere conditions. The atmospheric monitoring system of the JEM-EUSO instrument The JEM-EUSO telescope will detect Ultra-High Energy Cosmic Rays (UHECRs) from space by detecting the UV fluorescence light produced by Extensive Air Showers (EAS) induced by the interaction of the cosmic rays with the earth's atmosphere. The capability to reconstruct the properties of the primary cosmic ray depends on the accurate measurement of the atmospheric conditions in the region of EAS development. The Atmospheric Monitoring (AM) system of JEM-EUSO will host a LIDAR, operating in the UV band, and an infrared camera to monitor the cloud cover in the JEM-EUSO Field of View, in order to be sensitive to clouds with an optical depth τ ≥ 0.15 and to measure the cloud top altitude with an accuracy of 500 m and an altitude resolution of 500 m. Space experiment TUS on board the Lomonosov satellite as pathfinder of JEM-EUSO Space-based detectors for the study of extreme energy cosmic rays (EECR) are being prepared as a promising new method for detecting the highest energy cosmic rays.
A pioneering space device – the "tracking ultraviolet set-up" (TUS) – is in the last stage of its construction and testing. The TUS detector will collect preliminary data on EECR in the conditions of a space environment, which will be extremely useful for planning the major JEM-EUSO detector operation. The EUSO-Balloon pathfinder EUSO-Balloon is a pathfinder for JEM-EUSO, the Extreme Universe Space Observatory which is to be hosted on-board the International Space Station. As JEM-EUSO is designed to observe Ultra-High Energy Cosmic Rays (UHECR)-induced Extensive Air Showers (EAS) by detecting their ultraviolet light tracks "from above", EUSO-Balloon is a nadir-pointing UV telescope too. With its Fresnel Optics and Photo-Detector Module, the instrument monitors a 50 km2 ground surface area in a wavelength band of 290–430 nm, collecting series of images at a rate of 400,000 frames/sec. The objectives of the balloon demonstrator are threefold: a) perform a full end-to-end test of a JEM-EUSO prototype consisting of all the main subsystems of the space experiment, b) measure the effective terrestrial UV background, with a spatial and temporal resolution relevant for JEM-EUSO. c) detect tracks of ultraviolet light from near space for the first time. The latter is a milestone in the development of UHECR science, paving the way for any future space-based UHECR observatory. On August 25, 2014, EUSO-Balloon was launched from Timmins Stratospheric Balloon Base (Ontario, Canada) by the balloon division of the French Space Agency CNES. From a float altitude of 38 km, the instrument operated during the entire astronomical night, observing UV-light from a variety of ground-covers and from hundreds of simulated EASs, produced by flashers and a laser during a two-hour helicopter under-flight. The JEM-EUSO instrument In this paper we describe the main characteristics of the JEM-EUSO instrument. 
The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) of the International Space Station (ISS) will observe Ultra High-Energy Cosmic Rays (UHECR) from space. It will detect the UV-light of Extensive Air Showers (EAS) produced by UHECRs traversing the Earth's atmosphere. For each event, the detector will determine the energy, arrival direction and the type of the primary particle. The advantage of a space-borne detector resides in the large field of view, using a target volume of about 10^12 tons of atmosphere, far greater than what is achievable from the ground. Another advantage is a nearly uniform sampling of the whole celestial sphere. The corresponding increase in statistics will help to clarify the origin and sources of UHECRs and characterize the environment traversed during their production and propagation. JEM-EUSO is a 1.1 ton refractor telescope using 2.5 m diameter Fresnel lenses to focus the UV-light from EAS on a focal surface composed of about 5,000 multi-anode photomultipliers, for a total of ≃3⋅10^5 channels. A multi-layer parallel architecture handles front-end acquisition, selecting and storing valid triggers. Each processing level filters the events with increasingly complex algorithms using FPGAs and DSPs to reject spurious events and reduce the data rate to a value compatible with downlink constraints. User Interface Plasticity for Groupware Digital Information and Communication Technology and Its Applications (2011-01-01) 166: 380-394 , January 01, 2011 By Mendoza, Sonia; Decouchant, Dominique; Sánchez, Gabriela; Rodríguez, José; Mateos Papis, Alfredo Piero Plastic user interfaces are intentionally developed to automatically adapt themselves to changes in the user's working context. Although some Web single-user interactive systems already integrate some plastic capabilities, this research topic remains largely unexplored in the domain of Computer Supported Cooperative Work.
This paper is centered on prototyping a plastic collaborative whiteboard, which adapts itself: 1) to the platform, so that it can be launched from heterogeneous computer devices, and 2) to each collaborator, when he is detected working from several devices. In this last case, if the collaborator agrees, the whiteboard can split its user interface among his devices in order to facilitate user-system interaction without affecting the other collaborators present in the working session. The distributed interface components work as if they were co-located within a single device. At any time, the whiteboard maintains group awareness among the involved collaborators. Parallel Approaches for Multiobjective Optimization Multiobjective Optimization (2008-01-01) 5252: 349-372 , January 01, 2008 By Talbi, El-Ghazali; Mostaghim, Sanaz; Okabe, Tatsuya; Ishibuchi, Hisao; Rudolph, Günter; Coello Coello, Carlos A. This chapter presents a general overview of parallel approaches for multiobjective optimization. For this purpose, we propose a taxonomy for parallel metaheuristics and exact methods. This chapter covers the design aspect of the algorithms as well as the implementation aspects on different parallel and distributed architectures. Dealing with explicit preferences and uncertainty in answer set programming Annals of Mathematics and Artificial Intelligence (2012-07-01) 65: 159-198 , July 01, 2012 By Confalonieri, Roberto; Nieves, Juan Carlos; Osorio, Mauricio; Vázquez-Salceda, Javier In this paper, we show how the formalism of Logic Programs with Ordered Disjunction (LPODs) and Possibilistic Answer Set Programming (PASP) can be merged into the single framework of Logic Programs with Possibilistic Ordered Disjunction (LPPODs). The LPPODs framework embeds in a unified way several aspects of common-sense reasoning, nonmonotonicity, preferences, and uncertainty, where each part is underpinned by a well established formalism.
On one hand, from LPODs it inherits the distinctive feature of expressing context-dependent qualitative preferences among different alternatives (modeled as the atoms of a logic program). On the other hand, PASP allows qualitative certainty statements about the rules themselves (modeled as necessity values according to possibilistic logic) to be captured. In this way, the LPPODs framework supports reasoning that is nonmonotonic, preference-aware, and uncertainty-aware. The LPPODs syntax allows for the specification of (1) preferences among the exceptions to default rules, and (2) necessity values about the certainty of program rules. As a result, preferences and uncertainty can be used to select the preferred uncertain default rules of an LPPOD and, consequently, to order its possibilistic answer sets. Furthermore, we describe the implementation of an ASP-based solver able to compute the LPPODs semantics. Visual Planning for Autonomous Mobile Robot Navigation MICAI 2005: Advances in Artificial Intelligence (2005-01-01) 3789: 1001-1011 , January 01, 2005 By Marin-Hernandez, Antonio; Devy, Michel; Ayala-Ramirez, Victor For autonomous mobile robots following a planned path, self-localization is a very important task. Cumulative errors derived from the different noisy sensors make it absolutely necessary. Absolute robot localization is commonly performed by measuring the relative distance from the robot to previously learnt landmarks in the environment. Landmarks may be interest points, colored objects, or rectangular regions such as posters or emergency signs, which are very useful and non-intrusive beacons in human environments. This paper presents an active localization method: given a collision-free path and a set of planar landmarks, a visual planning function selects a subset of visible landmarks and the best combination of camera parameters (pan, tilt and zoom) for positions sampled along the path.
A visibility measurement and some utility measurements were defined in order to select, for each position, the camera modality and the subset of landmarks that maximize these local criteria. Finally, a dynamic programming method is proposed in order to minimize saccadic movements along the trajectory. Online Scheduling of Multiprocessor Jobs with Idle Regulation Parallel Processing and Applied Mathematics (2004-01-01) 3019: 131-144 , January 01, 2004 By Tchernykh, Andrei; Trystram, Denis In this paper, we focus on on-line scheduling of multiprocessor jobs with emphasis on the regulation of idle periods in the frame of general list policies. We consider a new family of scheduling strategies based on two phases which successively combine sequential and parallel executions of the jobs. These strategies are part of a more generic scheme introduced in [6]. The main result is to demonstrate that it is possible to estimate the amount of resources that should remain idle for a better regulation of the load, and to obtain approximation bounds. The Reduced Automata Technique for Graph Exploration Space Lower Bounds Theoretical Computer Science (2006-01-01) 3895: 1-26 , January 01, 2006 By Fraigniaud, Pierre; Ilcinkas, David; Rajsbaum, Sergio; Tixeuil, Sébastien We consider the task of exploring graphs with anonymous nodes by a team of non-cooperative robots, modeled as finite automata. For exploration to be completed, each edge of the graph has to be traversed by at least one robot. In this paper, the robots have no a priori knowledge of the topology of the graph, nor of its size, and we are interested in the amount of memory the robots need to accomplish exploration. We introduce the so-called reduced automata technique, and we show how to use this technique for deriving several space lower bounds for exploration.
Informally speaking, the reduced automata technique consists in reducing a robot to a simpler form that preserves its "core" behavior on some graphs. Using this technique, we first show that any set of q ≥ 1 non-cooperative robots requires $\Omega(\log(\frac{n}{q}))$ memory bits to explore all n-node graphs. The proof implies that, for any set of q K-state robots, there exists a graph of size O(qK) that no robot of this set can explore, which improves the $O(K^{O(q)})$ bound by Rollik (1980). Our main result is an application of this latter result, concerning terminating graph exploration with one robot, i.e., in which the robot is requested to stop after completing exploration. For this task, the robot is provided with a pebble that it can use to mark nodes (without such a marker, even terminating exploration of cycles cannot be achieved). We prove that terminating exploration requires Ω(log n) bits of memory for a robot achieving this task in all n-node graphs. 3D Parallel Elastodynamic Modeling of Large Subduction Earthquakes Recent Advances in Parallel Virtual Machine and Message Passing Interface (2007-01-01) 4757: 373-380 , January 01, 2007 By Cabrera, Eduardo; Chavez, Mario; Madariaga, Raúl; Perea, Narciso; Frisenda, Marco The 3D finite difference modeling of the wave propagation of M>8 earthquakes in subduction zones in a realistic-size earth is a very computationally intensive task. We use a parallel finite difference code that uses second order operators in time and fourth order differences in space on a staggered grid. We develop an efficient parallel program using the message passing interface (MPI) and a kinematic earthquake rupture process. We achieve an efficiency of 94% with 128 processors (and 85% extrapolating to 1,024) on a dual core platform. Satisfactory results for a large subduction earthquake that occurred in Mexico in 1985 are given.
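The discretization described above (second-order operators in time, fourth-order differences in space) can be illustrated with a one-dimensional scalar wave equation. This is a minimal sketch of that class of scheme, not the authors' 3D staggered-grid elastodynamic code; the grid size, pulse shape and CFL number are arbitrary illustrative choices:

```python
import math

# u_tt = c^2 u_xx with a leapfrog (2nd-order) time step and a
# 4th-order central difference for the spatial second derivative.
nx, c, dx = 200, 1.0, 1.0
dt = 0.5 * dx / c            # CFL number 0.5, safely below the stability limit

u_prev = [math.exp(-0.01 * (i - nx // 2) ** 2) for i in range(nx)]  # Gaussian pulse
u = u_prev[:]                # zero initial velocity

def d2x(f, i):
    """4th-order approximation of f'' at interior point i."""
    return (-f[i - 2] + 16 * f[i - 1] - 30 * f[i] + 16 * f[i + 1] - f[i + 2]) / (12 * dx * dx)

for _ in range(300):
    u_next = u[:]
    for i in range(2, nx - 2):          # the two outermost points stay fixed
        u_next[i] = 2 * u[i] - u_prev[i] + (c * dt) ** 2 * d2x(u, i)
    u_prev, u = u, u_next

peak = max(abs(v) for v in u)
print("bounded:", peak < 1.5)
```

The initial pulse splits into two half-amplitude waves travelling in opposite directions; with the CFL number below the stability limit the amplitude stays bounded, which is the basic stability property the parallel 3D code also relies on.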
A Multi-Objective Artificial Immune System Based on Hypervolume Artificial Immune Systems (2012-01-01) 7597: 14-27 , January 01, 2012 By Pierrard, Thomas; Coello Coello, Carlos A. This paper presents a new artificial immune system algorithm for solving multi-objective optimization problems, based on the clonal selection principle and the hypervolume contribution. The main aim of this work is to investigate the performance of this class of algorithm with respect to approaches which are representative of the state-of-the-art in multi-objective optimization using metaheuristics. The results obtained by our proposed approach, called multi-objective artificial immune system based on hypervolume (MOAIS-HV), are compared with respect to those of the NSGA-II. Our preliminary results indicate that our proposed approach is very competitive, and can be a viable choice for solving multi-objective optimization problems. The Committee Decision Problem By Gafni, Eli; Rajsbaum, Sergio; Raynal, Michel; Travers, Corentin We introduce the (b,n)-Committee Decision Problem (CD) – a generalization of the consensus problem. While set agreement generalizes consensus in terms of the number of decisions allowed, the CD problem generalizes consensus in the sense of considering many instances of consensus and requiring a processor to decide in at least one instance. In more detail, in the CD problem each one of a set of n processes has a (possibly distinct) value to propose to each one of a set of b consensus problems, which we call committees. Yet a process has to decide a value for at least one of these committees, such that all processes deciding for the same committee decide the same value. We study the CD problem in the context of a wait-free distributed system and analyze it using a combination of distributed algorithmic and topological techniques, introducing a novel reduction technique. We use the reduction technique to obtain the following results.
We show that the (2,3)-CD problem is equivalent to the musical benches problem introduced by Gafni and Rajsbaum in [10], and both are equivalent to (2,3)-set agreement, closing an open question left there. Thus, all three problems are wait-free unsolvable in a read/write shared memory system, and they are all solvable if the system is enriched with objects capable of solving (2,3)-set agreement. While the previous proof of the impossibility of musical benches was based on the Borsuk-Ulam (BU) Theorem, it now relies on Sperner's Lemma, opening intriguing questions about the relation between BU and distributed computing tasks. The Infection Algorithm: An Artificial Epidemic Approach for Dense Stereo Matching Parallel Problem Solving from Nature - PPSN VIII (2004-01-01) 3242: 622-632 , January 01, 2004 By Olague, Gustavo; Vega, Francisco Fernández; Pérez, Cynthia B.; Lutton, Evelyne We present a new bio-inspired approach applied to the problem of stereo image matching. This approach is based on an artificial epidemic process that we call "the infection algorithm." The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known as an extremely difficult one. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules that propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
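The propagation idea behind the infection algorithm can be sketched with a toy rule: a few seed pixels are "infected" (explicitly matched) and the infection spreads to neighbouring pixels, some of which are merely "guessed" from their neighbours rather than computed, which is how such a scheme saves work. The grid size, seeds and skip rate below are arbitrary illustrative choices, not the paper's actual matching rules:

```python
from collections import deque

def infect(width, height, seeds, skip_every=3):
    """Spread an 'infection' from seed pixels across a grid via BFS.
    Every skip_every-th newly reached pixel is marked GUESSED
    (propagated from its neighbours without explicit computation)."""
    EXPLICIT, GUESSED = 1, 2
    state = {s: EXPLICIT for s in seeds}
    q = deque(seeds)
    n = 0
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in state:
                n += 1
                state[nxt] = GUESSED if n % skip_every == 0 else EXPLICIT
                q.append(nxt)
    return state

state = infect(20, 20, [(0, 0), (19, 19)])
explicit = sum(1 for v in state.values() if v == 1)
print(len(state), explicit)  # every pixel is reached; only part computed explicitly
```

The point of the sketch is the trade-off: the epidemic rule covers the whole image, but only a fraction of the pixels are matched explicitly, the rest being inferred from already-infected neighbours.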
Speeding scalar multiplication over binary elliptic curves using the new carry-less multiplication instruction Journal of Cryptographic Engineering (2011-09-25) 1: 187-199 The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels and a new speed record for side-channel-resistant scalar multiplication in a random curve at the 128-bit security level. The algorithms presented in this work were implemented on Westmere and Sandy Bridge processors, the latest generation Intel microarchitectures. Shared Resource Availability within Ubiquitous Collaboration Environments By García, Kimberly; Mendoza, Sonia; Olague, Gustavo; Decouchant, Dominique; Rodríguez, José Most research works in ubiquitous computing remain in the domain of mono-user systems, which make assumptions such as: "nobody interferes, observes and hurries up". In addition, these systems ignore third-party contributions and do not encourage consensus achievement.
This paper proposes a system for managing availability of distributed resources in ubiquitous cooperative environments. Particularly, the proposed system allows collaborators to publish resources that are intended to be shared with other collaborators and to subscribe to allowed resources depending on their interest in accessing or using them. Resource availability is determined according to several parameters: technical characteristics, roles, usage restrictions, and dependencies with other resources in terms of ownership, presence, location, and even availability. To permit or deny access to context-aware information, we develop a face recognition system, which is able to dynamically identify collaborators and to automatically locate them within the cooperative environment. Specifying Concurrent Problems: Beyond Linearizability and up to Tasks Distributed Computing (2015-01-01): 9363 By Castañeda, Armando; Rajsbaum, Sergio; Raynal, Michel Tasks and objects are two predominant ways of specifying distributed problems. A task specifies for each set of processes (which may run concurrently) the valid outputs of the processes. An object specifies the outputs the object may produce when it is accessed sequentially. Each one requires its own implementation notion, to tell when an execution satisfies the specification. For objects linearizability is commonly used, while for tasks implementation notions are less explored. Sequential specifications are very convenient; especially important is the locality property of linearizability, which states that linearizable objects compose for free into a linearizable object. However, most well-known tasks have no sequential specification. Also, tasks have no clear locality property. The paper introduces the notion of interval-sequential object. The corresponding implementation notion of interval-linearizability generalizes linearizability. Interval-linearizability allows one to specify any task.
However, there are sequential one-shot objects that cannot be expressed as tasks, under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object. Benchmark Study of a 3d Parallel Code for the Propagation of Large Subduction Earthquakes By Chavez, Mario; Cabrera, Eduardo; Madariaga, Raúl; Perea, Narciso; Moulinec, Charles; Emerson, David; Ashworth, Mike; Salazar, Alejandro Benchmark studies were carried out on a recently optimized parallel 3D seismic wave propagation code that uses finite differences on a staggered grid with 2nd order operators in time and 4th order in space. Three dual-core supercomputer platforms were used to run the parallel program using MPI. Efficiencies of 0.91 and 0.48 with 1024 cores were obtained on HECToR (UK) and KanBalam (Mexico), and 0.66 with 8192 cores on HECToR. The 3D velocity field pattern from a simulation of the 1985 Mexico earthquake (that caused the loss of up to 30000 people and about 7 billion US dollars), which has reasonable agreement with the available observations, shows coherent, well developed surface waves propagating towards Mexico City. Formal verification of secure group communication protocols modelled in UML Innovations in Systems and Software Engineering (2010-03-01) 6: 125-133 By Saqui-Sannes, P.; Villemur, T.; Fontan, B.; Mota, S.; Bouassida, M. S.; Chridi, N.; Chrisment, I.; Vigneron, L. The paper discusses an experience in using the Unified Modelling Language and two complementary verification tools in the framework of SAFECAST, a project on secured group communication systems design. AVISPA enabled detecting and fixing security flaws. The TURTLE toolkit enabled saving development time by eliminating design solutions with inappropriate temporal parameters.
Quadratic Optimization Fine Tuning for the Learning Phase of SVM Advanced Distributed Systems (2005-01-01) 3563: 347-357 By González-Mendoza, Miguel; Hernández-Gress, Neil; Titli, André This paper presents a study of the Quadratic optimization Problem (QP) underlying the learning process of Support Vector Machines (SVM). Starting from the Karush-Kuhn-Tucker (KKT) optimality conditions, we present the implementation strategy of the SVM-QP following two classical approaches: i) active set methods, in both primal and dual spaces, and ii) interior point methods. We also present the general extension to treat large scale applications, consisting in a general decomposition of the QP problem into smaller ones. In the same manner, we discuss some considerations to take into account when starting the general learning process. We compare the performances of the optimization strategies using some well-known benchmark databases. Reliable Shared Memory Abstraction on Top of Asynchronous Byzantine Message-Passing Systems By Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien This paper is on the construction and the use of a shared memory abstraction on top of an asynchronous message-passing system in which up to t processes may commit Byzantine failures. This abstraction consists of arrays of n single-writer/multi-reader atomic registers, where n is the number of processes. Differently from usual atomic registers which record a single value, each of these atomic registers records the whole history of values written to it. A distributed algorithm building such a shared memory abstraction is first presented. This algorithm assumes t < n/3, which is shown to be a necessary and sufficient condition for such a construction. Hence, the algorithm is resilient-optimal. Then the paper presents distributed algorithms built on top of this shared memory abstraction, which cope with up to t Byzantine processes.
The simplicity of these algorithms constitutes a strong motivation for such a shared memory abstraction in the presence of Byzantine processes. For many problems, algorithms are more difficult to design and prove correct in a message-passing system than in a shared memory system. Using a protocol stacking methodology, the aim of the proposed abstraction is to allow an easier design (and proof) of distributed algorithms when the underlying system is an asynchronous message-passing system prone to Byzantine failures.
The European Physical Journal A, January 2020, 56:4
Alternative coalescence model for deuteron, tritium, helium-3 and their antinuclei
M. Kachelrieß, S. Ostapchenko, J. Tjemsland
Regular Article – Theoretical Physics
Antideuteron and antihelium nuclei have been proposed as a detection channel for dark matter annihilations and decays in the Milky Way, due to the low astrophysical background expected. To estimate both the signal for various dark matter models and the astrophysical background, one usually employs the coalescence model in a Monte Carlo framework. This allows one to treat the production of antinuclei on an event-by-event basis, thereby taking into account momentum correlations between the antinucleons involved in the process. This approach lacks, however, an underlying microscopic picture, and the numerical value of the coalescence parameter obtained from fits to different reactions varies considerably. Here we propose instead to combine event-by-event Monte Carlo simulations with a microscopic coalescence picture based on the Wigner function representations of the produced antinuclei states. This approach allows us to include in a semi-classical picture both the size of the formation region, which is process dependent, and the momentum correlations. The model contains a single, universal parameter which is fixed by fitting the production spectra of antideuterons in proton–proton interactions, measured at the Large Hadron Collider. Using this value, the model describes well the production of various antinuclei both in electron–positron annihilation and in proton–proton collisions.
Communicated by R. Rapp
M.K. and J.T. acknowledge partial support from the Research Council of Norway (NFR). S.O. acknowledges support from project OS 481/2-1 of the Deutsche Forschungsgemeinschaft.
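For orientation, the baseline that this paper improves on can be stated concretely: in the simplest event-by-event coalescence prescription, an antiproton and an antineutron are merged into an antideuteron candidate when their momentum difference is below a coalescence momentum. A minimal non-relativistic sketch follows; the lab-frame cut and the value of `P_COAL` are illustrative assumptions only (the model developed in this paper is instead based on Wigner functions):

```python
import math

P_COAL = 0.2  # hypothetical coalescence momentum in GeV, for illustration only

def coalesce(antiprotons, antineutrons, p0=P_COAL):
    """Greedily pair each antiproton (3-momentum tuple, GeV) with at most
    one antineutron whose momentum difference is below p0; each successful
    pair is counted as one antideuteron carrying the summed momentum."""
    used = set()
    deuterons = []
    for pbar in antiprotons:
        for i, nbar in enumerate(antineutrons):
            if i in used:
                continue
            if math.dist(pbar, nbar) < p0:
                used.add(i)
                deuterons.append(tuple(a + b for a, b in zip(pbar, nbar)))
                break
    return deuterons
```

Applied per Monte Carlo event, a prescription of this kind automatically keeps the momentum correlations between the antinucleons that a purely spectrum-based calculation would miss.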
Appendix A: Wigner function Our definition (9) of the one-particle Wigner function implies, in \(d=1\), the normalization (with \(\hbar = 1 = h/(2\pi )\)) $$\begin{aligned} \int \frac{\mathrm{d}p}{2\pi } \mathrm{d}x \,W(x,p)= 1 \,. \end{aligned}$$ (A.1) The corresponding probability distributions for the space and momentum variables are obtained from $$\begin{aligned} \int \mathrm{d}x \,W(x,p)=\psi ^*(p)\,\psi (p), \end{aligned}$$ $$\begin{aligned} \int \frac{\mathrm{d}p}{2\pi } \,W(x,p)=\phi ^*(x)\,\phi (x) . \end{aligned}$$ For our ansatz \(W(x,p) = h(x)g(p)\), it follows that h(x) describes the probability distribution of the nucleon in coordinate space, while the probability distribution of the nucleon momenta g(p) is normalized as $$\begin{aligned} \int \frac{\mathrm{d}p}{2\pi } g(p)= 1 \,. \end{aligned}$$ Appendix B: Experiments Appendix B.1: ALICE The ALICE collaboration measured the invariant differential yields of deuterons and antideuterons, $$\begin{aligned} E\,{\frac{\hbox {d}^{3}{n}}{\hbox {d}p^{3}}}=\frac{1}{N_{\mathrm{inel}}}\frac{1}{2\pi p_T}\frac{\mathrm {d}^2N}{\mathrm {d}p_T\,\mathrm {d}y}, \end{aligned}$$ (B.5) in inelastic proton–proton collisions at center-of-mass energies \(\sqrt{s} = 0.9, 2.76\) and 7 TeV, in the \(p_T\) range \(0.8<p_T<3\,\hbox {GeV}\) and for rapidity \(|y|<0.5\) [36]. Here E and \(\varvec{p}\) are the deuteron energy and momentum, \(N_{\mathrm{inel}}\) is the number of inelastic events, N is the total number of detected deuterons, and \(n\equiv N/N_{\mathrm{inel}}\). The experiment included a trigger (V0) consisting of two hodoscopes of 32 scintillators that covered the pseudo-rapidity ranges \(2.8<\eta <5.1\) and \(-3.7<\eta < -1.7\), used to select non-diffractive (ND) inelastic events. An event was triggered by requiring a hit (charged particle) on either side (positive or negative \(\eta \)) of the V0 triggering set-up.
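Equation (B.5) is a direct bin-by-bin conversion from histogram counts to an invariant yield. A minimal sketch of that conversion (the bin counts and widths used below are hypothetical, not ALICE data):

```python
import math

def invariant_yield(counts, n_inel, pt_center, dpt, dy):
    """Estimate E d^3 n / dp^3 = (1/N_inel) (1/(2 pi p_T)) d^2 N/(dp_T dy)
    from the number of (anti)deuterons `counts` in a histogram bin of
    width dpt x dy centred at transverse momentum pt_center (GeV)."""
    return counts / (n_inel * 2.0 * math.pi * pt_center * dpt * dy)

# Hypothetical bin: 100 candidates in 10^6 inelastic events,
# p_T bin of width 0.1 GeV centred at 1 GeV, rapidity window dy = 1.
y = invariant_yield(100, 1_000_000, 1.0, 0.1, 1.0)
```

The factor \(1/(2\pi p_T)\) converts the per-bin count into a Lorentz-invariant phase-space density, so that yields at different \(p_T\) can be compared directly.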
Pythia 8 generates general inelastic collisions, including single-diffractive (SD), double-diffractive (DD) and ND events. The minimum bias events selected by the V0 trigger generally include those that Pythia treats as SD and DD events. While we used Pythia 8 to generate general minimum bias pp collisions, only events satisfying the V0 trigger have been included in our analysis. Appendix B.2: ALEPH and OPAL The ALEPH collaboration at LEP studied the deuteron and antideuteron production in \(\mathrm{{e}}^ +\mathrm{{e}}^-\) collisions at the Z resonance energy. The measured production rate of antideuterons was \((5.9\pm 1.8 \pm 0.5)\times 10^{-6}\) per hadronic Z decay, for the antideuteron momentum range from 0.62 to 1.03 GeV and for the production angle \(\theta \) satisfying \(|\cos \theta |<0.95\) [37]. In a similar experiment performed by the OPAL collaboration [38], no antideuteron events were detected. Reference [35] noted that the resulting upper limit on the antideuteron yield has previously been neglected, but should also be taken into account. The measurements were performed in the antideuteron momentum range \(0.35< p < 1.1\,\hbox {GeV}\), with an estimated detection efficiency \(\epsilon =0.234\), which includes the angular acceptance. The expected total number of antideuterons was $$\begin{aligned} N_{\bar{d}} = \epsilon N_{\mathrm{ev}} n_{\bar{d}, \mathrm {MC}}, \end{aligned}$$ where \(N_{\mathrm{ev}}=1.64\times 10^6\) is the number of events in the OPAL analysis and \(n_{\bar{d}, \mathrm {MC}}\) is the MC prediction for the number of antideuterons per event. We follow Ref. [35] and assume a Poissonian uncertainty \(\sigma _{\bar{d}}=\sqrt{N_{\bar{d}}}\) for the expected number of antideuterons. The \(\chi ^2\) is in this case given by $$\begin{aligned} \chi _{\mathrm{OPAL}}^2 = \frac{(N_{\mathrm{obs}} - N_{\bar{d}})^2}{\sigma _{\bar{d}}^2} = N_{\bar{d}} . \end{aligned}$$ F. Donato, N. Fornengo, P. Salati, Phys. Rev. D 62, 043003 (2000). 
https://doi.org/10.1103/PhysRevD.62.043003
R. Battiston, Nucl. Instrum. Methods A 588, 227 (2008). https://doi.org/10.1016/j.nima.2008.01.044
T. Aramaki, C.J. Hailey, S.E. Boggs, P. von Doetinchem, H. Fuke, S.I. Mognet, R.A. Ong, K. Perez, J. Zweerink, Astropart. Phys. 74, 6 (2016). https://doi.org/10.1016/j.astropartphys.2015.09.001
A. Schwarzschild, C. Zupancic, Phys. Rev. 129, 854 (1963). https://doi.org/10.1103/PhysRev.129.854
S.T. Butler, C.A. Pearson, Phys. Rev. 129, 836 (1963). https://doi.org/10.1103/PhysRev.129.836
K.J. Sun, L.W. Chen, Phys. Lett. B 751, 272 (2015). https://doi.org/10.1016/j.physletb.2015.10.056
L. Zhu, C.M. Ko, X. Yin, Phys. Rev. C 92(6), 064911 (2015). https://doi.org/10.1103/PhysRevC.92.064911
L. Zhu, H. Zheng, C.M. Ko, Y. Sun, Eur. Phys. J. A 54(10), 175 (2018). https://doi.org/10.1140/epja/i2018-12610-7
S. Acharya et al., Nucl. Phys. A 971, 1 (2018). https://doi.org/10.1016/j.nuclphysa.2017.12.004
A. Andronic, P. Braun-Munzinger, K. Redlich, J. Stachel, Nature 561(7723), 321 (2018). https://doi.org/10.1038/s41586-018-0491-6
V. Vovchenko, B. Dönigus, H. Stoecker, Phys. Lett. B 785, 171 (2018). https://doi.org/10.1016/j.physletb.2018.08.041
F. Bellini, A.P. Kalweit, Phys. Rev. C 99(5), 054905 (2019). https://doi.org/10.1103/PhysRevC.99.054905
J. Chen, D. Keane, Y.G. Ma, A. Tang, Z. Xu, Phys. Rep. 760, 1 (2018). https://doi.org/10.1016/j.physrep.2018.07.002
X. Xu, R. Rapp, Eur. Phys. J. A 55(5), 68 (2019). https://doi.org/10.1140/epja/i2019-12757-7
D. Oliinychenko, L.G. Pang, H. Elfner, V. Koch, Phys. Rev. C 99(4), 044907 (2019). https://doi.org/10.1103/PhysRevC.99.044907
M. Kadastik, M. Raidal, A. Strumia, Phys. Lett. B 683, 248 (2010). https://doi.org/10.1016/j.physletb.2009.12.005
L.A. Dal, Antideuterons as Signature for Dark Matter, Master's thesis, NTNU Trondheim (2011). http://hdl.handle.net/11250/2456366
Y. Cui, J.D. Mason, L. Randall, JHEP 11, 017 (2010). https://doi.org/10.1007/JHEP11(2010)017
A. Ibarra, S. Wild, JCAP 1302, 021 (2013). https://doi.org/10.1088/1475-7516/2013/02/021
N. Fornengo, L. Maccione, A. Vittino, JCAP 1309, 031 (2013). https://doi.org/10.1088/1475-7516/2013/09/031
L.A. Dal, A.R. Raklev, Phys. Rev. D 89(10), 103504 (2014). https://doi.org/10.1103/PhysRevD.89.103504
T. Delahaye, M. Grefe, JCAP 1507, 012 (2015). https://doi.org/10.1088/1475-7516/2015/07/012
J. Herms, A. Ibarra, A. Vittino, S. Wild, JCAP 1702(02), 018 (2017). https://doi.org/10.1088/1475-7516/2017/02/018
M. Korsmeier, F. Donato, N. Fornengo, Phys. Rev. D 97(10), 103011 (2018). https://doi.org/10.1103/PhysRevD.97.103011
A. Coogan, S. Profumo, Phys. Rev. D 96(8), 083020 (2017). https://doi.org/10.1103/PhysRevD.96.083020
V. Poulin, P. Salati, I. Cholis, M. Kamionkowski, J. Silk, Phys. Rev. D 99(2), 023016 (2019). https://doi.org/10.1103/PhysRevD.99.023016
T. Aramaki et al., Phys. Rep. 618, 1 (2016). https://doi.org/10.1016/j.physrep.2016.01.002
L.P. Csernai, J.I. Kapusta, Phys. Rep. 131, 223 (1986). https://doi.org/10.1016/0370-1573(86)90031-1
J.L. Nagle, B.S. Kumar, D. Kusnezov, H. Sorge, R. Mattiello, Phys. Rev. C 53, 367 (1996). https://doi.org/10.1103/PhysRevC.53.367
H. Sato, K. Yazaki, Phys. Lett. 98B, 153 (1981). https://doi.org/10.1016/0370-2693(81)90976-X
R.P. Duperray, K.V. Protasov, A.Y. Voronin, Eur. Phys. J. A 16, 27 (2003). https://doi.org/10.1140/epja/i2002-10074-0
P. Danielewicz, G.F. Bertsch, Nucl. Phys. A 533, 712 (1991). https://doi.org/10.1016/0375-9474(91)90541-D
R. Scheibl, U.W. Heinz, Phys. Rev. C 59, 1585 (1999). https://doi.org/10.1103/PhysRevC.59.1585
K. Blum, K.C.Y. Ng, R. Sato, M. Takimoto, Phys. Rev. D 96(10), 103021 (2017). https://doi.org/10.1103/PhysRevD.96.103021
L.A. Dal, A.R. Raklev, Phys. Rev. D 91(12), 123536 (2015). https://doi.org/10.1103/PhysRevD.91.123536 [Errata: Phys. Rev. D 92, 069903 (2015), https://doi.org/10.1103/PhysRevD.92.069903; Phys. Rev. D 92, 089901 (2015), https://doi.org/10.1103/PhysRevD.92.089901]
S. Acharya et al., Phys. Rev. C 97(2), 024615 (2018). https://doi.org/10.1103/PhysRevC.97.024615
S. Schael et al., Phys. Lett. B 639, 192 (2006). https://doi.org/10.1016/j.physletb.2006.06.043
R. Akers et al., Z. Phys. C 67, 203 (1995). https://doi.org/10.1007/BF01571281
R. Mattiello, H. Sorge, H. Stoecker, W. Greiner, Phys. Rev. C 55, 1443 (1997). https://doi.org/10.1103/PhysRevC.55.1443
V.I. Zhaba (2017). arXiv:1706.08306
Y.L. Dokshitzer, V.A. Khoze, A.H. Mueller, S.I. Troian, Basics of Perturbative QCD (Edition Frontieres, Gif-sur-Yvette, 1991)
A. Deur, S.J. Brodsky, G.F. de Teramond, Prog. Part. Nucl. Phys. 90, 1 (2016). https://doi.org/10.1016/j.ppnp.2016.04.003
S.G. Karshenboim (ed.), Precision Physics of Simple Atoms and Molecules, Lecture Notes in Physics, vol. 745 (Springer, Berlin, 2008). https://doi.org/10.1007/978-3-540-75479-4
L.A. Dal, M. Kachelrieß, Phys. Rev. D 86, 103536 (2012). https://doi.org/10.1103/PhysRevD.86.103536
T. Sjöstrand, S. Mrenna, P.Z. Skands, JHEP 05, 026 (2006). https://doi.org/10.1088/1126-6708/2006/05/026
T. Sjöstrand, S. Ask, J.R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C.O. Rasmussen, P.Z. Skands, Comput. Phys. Commun. 191, 159 (2015). https://doi.org/10.1016/j.cpc.2015.01.024
J. Adam et al., Eur. Phys. J. C 75(5), 226 (2015). https://doi.org/10.1140/epjc/s10052-015-3422-9
E. Carlson, A. Coogan, T. Linden, S. Profumo, A. Ibarra, S. Wild, Phys. Rev. D 89(7), 076005 (2014). https://doi.org/10.1103/PhysRevD.89.076005
M. Cirelli, N. Fornengo, M. Taoso, A. Vittino, JHEP 08, 009 (2014). https://doi.org/10.1007/JHEP08(2014)009
S. Ting, The first five years of the Alpha Magnetic Spectrometer on the International Space Station, CERN colloquium (2016). https://indico.cern.ch/event/592392/
© Società Italiana di Fisica (SIF) and Springer-Verlag GmbH Germany, part of Springer Nature 2020
1. Institutt for fysikk, NTNU, Trondheim, Norway
2. Frankfurt Institute for Advanced Studies, Frankfurt, Germany
3. D.V. Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow, Russia
Kachelrieß, M., Ostapchenko, S. & Tjemsland, J., Eur. Phys. J. A (2020) 56: 4. https://doi.org/10.1140/epja/s10050-019-00007-9
Received 25 July 2019
Online ISSN 1434-601X
$ABCDE$ is a regular pentagon. $AP$, $AQ$ and $AR$ are the perpendiculars dropped from $A$ onto $CD$, $CB$ extended and $DE$ extended, respectively. Let $O$ be the center of the pentagon. If $OP = 1$, then find $AO + AQ + AR$. [asy] unitsize(2 cm); pair A, B, C, D, E, O, P, Q, R; A = dir(90); B = dir(90 - 360/5); C = dir(90 - 2*360/5); D = dir(90 - 3*360/5); E = dir(90 - 4*360/5); O = (0,0); P = (C + D)/2; Q = (A + reflect(B,C)*(A))/2; R = (A + reflect(D,E)*(A))/2; draw((2*R - E)--D--C--(2*Q - B)); draw(A--P); draw(A--Q); draw(A--R); draw(B--A--E); label("$A$", A, N); label("$B$", B, dir(0)); label("$C$", C, SE); label("$D$", D, SW); label("$E$", E, W); dot("$O$", O, dir(0)); label("$P$", P, S); label("$Q$", Q, dir(0)); label("$R$", R, W); label("$1$", (O + P)/2, dir(0)); [/asy] To solve the problem, we compute the area of regular pentagon $ABCDE$ in two different ways. First, we can divide regular pentagon $ABCDE$ into five congruent triangles. [asy] unitsize(2 cm); pair A, B, C, D, E, O, P, Q, R; A = dir(90); B = dir(90 - 360/5); C = dir(90 - 2*360/5); D = dir(90 - 3*360/5); E = dir(90 - 4*360/5); O = (0,0); P = (C + D)/2; Q = (A + reflect(B,C)*(A))/2; R = (A + reflect(D,E)*(A))/2; draw((2*R - E)--D--C--(2*Q - B)); draw(A--P); draw(A--Q); draw(A--R); draw(B--A--E); draw((O--B),dashed); draw((O--C),dashed); draw((O--D),dashed); draw((O--E),dashed); label("$A$", A, N); label("$B$", B, dir(0)); label("$C$", C, SE); label("$D$", D, SW); label("$E$", E, W); dot("$O$", O, NE); label("$P$", P, S); label("$Q$", Q, dir(0)); label("$R$", R, W); label("$1$", (O + P)/2, dir(0)); [/asy] If $s$ is the side length of the regular pentagon, then each of the triangles $AOB$, $BOC$, $COD$, $DOE$, and $EOA$ has base $s$ and height 1, so the area of regular pentagon $ABCDE$ is $5s/2$. Next, we divide regular pentagon $ABCDE$ into triangles $ABC$, $ACD$, and $ADE$. 
[asy] unitsize(2 cm); pair A, B, C, D, E, O, P, Q, R; A = dir(90); B = dir(90 - 360/5); C = dir(90 - 2*360/5); D = dir(90 - 3*360/5); E = dir(90 - 4*360/5); O = (0,0); P = (C + D)/2; Q = (A + reflect(B,C)*(A))/2; R = (A + reflect(D,E)*(A))/2; draw((2*R - E)--D--C--(2*Q - B)); draw(A--P); draw(A--Q); draw(A--R); draw(B--A--E); draw(A--C,dashed); draw(A--D,dashed); label("$A$", A, N); label("$B$", B, dir(0)); label("$C$", C, SE); label("$D$", D, SW); label("$E$", E, W); dot("$O$", O, dir(0)); label("$P$", P, S); label("$Q$", Q, dir(0)); label("$R$", R, W); label("$1$", (O + P)/2, dir(0)); [/asy] Triangle $ACD$ has base $s$ and height $AP = AO + 1$. Triangle $ABC$ has base $s$ and height $AQ$. Triangle $ADE$ has base $s$ and height $AR$. Therefore, the area of regular pentagon $ABCDE$ is also \[\frac{s}{2} (AO + AQ + AR + 1).\]Hence, \[\frac{s}{2} (AO + AQ + AR + 1) = \frac{5s}{2},\]which means $AO + AQ + AR + 1 = 5$, or $AO + AQ + AR = \boxed{4}$.
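As a quick numerical sanity check (not part of the original solution), one can place the pentagon with $O$ at the origin and apothem $OP = 1$, then compute the three distances directly:

```python
import math

# Regular pentagon with apothem OP = 1, so circumradius R = 1 / cos(pi/5).
R = 1 / math.cos(math.pi / 5)
A, B, C, D, E = (
    (R * math.cos(math.radians(90 - 72 * k)),
     R * math.sin(math.radians(90 - 72 * k)))
    for k in range(5)
)

def dist_to_line(p, q, r):
    """Distance from point p to the line through q and r."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return abs((ry - qy) * (px - qx) - (rx - qx) * (py - qy)) / math.hypot(rx - qx, ry - qy)

AO = math.hypot(*A)         # circumradius
AQ = dist_to_line(A, B, C)  # perpendicular from A onto line CB extended
AR = dist_to_line(A, D, E)  # perpendicular from A onto line DE extended
print(round(AO + AQ + AR, 9))  # 4.0
```

The numerical value agrees with the area argument above: $AO + AQ + AR = 4$.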
Sidon sequence In number theory, a Sidon sequence is a sequence $A=\{a_{0},a_{1},a_{2},\dots \}$ of natural numbers in which all pairwise sums $a_{i}+a_{j}$ (for $i\leq j$) are different. Sidon sequences are also called Sidon sets; they are named after the Hungarian mathematician Simon Sidon, who introduced the concept in his investigations of Fourier series. The main problem in the study of Sidon sequences, posed by Sidon,[1] is to find the maximum number of elements that a Sidon sequence can contain, up to some bound $x$. Despite a large body of research,[2] the question has remained unsolved.[3] Early results Paul Erdős and Pál Turán proved that, for every $x>0$, the number of elements smaller than $x$ in a Sidon sequence is at most ${\sqrt {x}}+O({\sqrt[{4}]{x}})$. Several years earlier, James Singer had constructed Sidon sequences with ${\sqrt {x}}(1-o(1))$ terms less than x. The bound was improved to ${\sqrt {x}}+{\sqrt[{4}]{x}}+1$ in 1969[4] and to ${\sqrt {x}}+0.998{\sqrt[{4}]{x}}$ in 2023.[5] In 1994 Erdős offered 500 dollars for a proof or disproof of the bound ${\sqrt {x}}+o(x^{\epsilon })$.[6] Infinite Sidon sequences Erdős also showed that, for any particular infinite Sidon sequence $A$ with $A(x)$ denoting the number of its elements up to $x$, $\liminf _{x\to \infty }{\frac {A(x){\sqrt {\log x}}}{\sqrt {x}}}\leq 1.$ That is, infinite Sidon sequences are thinner than the densest finite Sidon sequences. For the other direction, Chowla and Mian observed that the greedy algorithm gives an infinite Sidon sequence with $A(x)>c{\sqrt[{3}]{x}}$ for every $x$.[7] Ajtai, Komlós, and Szemerédi improved this with a construction[8] of a Sidon sequence with $A(x)>{\sqrt[{3}]{x\log x}}.$ The best lower bound to date was given by Imre Z. Ruzsa, who proved[9] that a Sidon sequence with $A(x)>x^{{\sqrt {2}}-1-o(1)}$ exists. Erdős conjectured that an infinite Sidon set $A$ exists for which $A(x)>x^{1/2-o(1)}$ holds. 
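The greedy construction of Mian and Chowla mentioned above can be stated concretely: starting from 1, repeatedly append the smallest integer that keeps all pairwise sums distinct. A short illustrative sketch:

```python
def mian_chowla(n):
    """First n terms of the greedy (Mian-Chowla) Sidon sequence."""
    seq, sums = [], set()
    candidate = 1
    while len(seq) < n:
        # sums this candidate would add, including its double 2*candidate
        new_sums = {candidate + a for a in seq} | {2 * candidate}
        if sums.isdisjoint(new_sums):
            seq.append(candidate)
            sums |= new_sums
        candidate += 1
    return seq

print(mian_chowla(8))  # [1, 2, 4, 8, 13, 21, 31, 45]
```

The resulting terms grow roughly like $k^3$, which is what gives the greedy bound $A(x) > c{\sqrt[{3}]{x}}$.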
Erdős and Rényi showed[10] the existence of a sequence $\{a_{0},a_{1},\dots \}$ with the conjectural density but satisfying only the weaker property that there is a constant $k$ such that for every natural number $n$ there are at most $k$ solutions of the equation $a_{i}+a_{j}=n$. (To be a Sidon sequence would require that $k=1$.) Erdős further conjectured that there exists a nonconstant integer-coefficient polynomial whose values at the natural numbers form a Sidon sequence. Specifically, he asked if the set of fifth powers is a Sidon set. Ruzsa came close to this by showing that there is a real number $c$ with $0<c<1$ such that the range of the function $f(x)=x^{5}+\lfloor cx^{4}\rfloor $ is a Sidon sequence, where $\lfloor \ \rfloor $ denotes the integer part. As $c$ is irrational, this function $f(x)$ is not a polynomial. The statement that the set of fifth powers is a Sidon set is a special case of the later conjecture of Lander, Parkin and Selfridge.
Sidon sequences which are asymptotic bases
The existence of Sidon sequences that form an asymptotic basis of order $m$ (meaning that every sufficiently large natural number $n$ can be written as the sum of $m$ numbers from the sequence) has been proved for $m=5$ in 2010,[11] $m=4$ in 2014,[12] $m=3+\epsilon $ (the sum of four terms with one smaller than $n^{\epsilon }$, for arbitrarily small positive $\epsilon $) in 2015[13] and $m=3$ in 2023 as a preprint;[14][15] this latter problem was posed in a paper of Erdős, Sárközy and Sós in 1994.[16]
Relationship to Golomb rulers
All finite Sidon sets are Golomb rulers, and vice versa. To see this, suppose for a contradiction that $S$ is a Sidon set and not a Golomb ruler. Since it is not a Golomb ruler, there must be four members such that $a_{i}-a_{j}=a_{k}-a_{l}$. It follows that $a_{i}+a_{l}=a_{k}+a_{j}$, which contradicts the proposition that $S$ is a Sidon set. Therefore all Sidon sets must be Golomb rulers.
By a similar argument, all Golomb rulers must be Sidon sets.
See also
• Moser–de Bruijn sequence
• Sumset
References
1. Erdős, P.; Turán, P. (1941), "On a problem of Sidon in additive number theory and on some related problems" (PDF), J. London Math. Soc., 16 (4): 212–215, doi:10.1112/jlms/s1-16.4.212. Addendum, 19 (1944), 208.
2. O'Bryant, K. (2004), "A complete annotated bibliography of work related to Sidon sequences", Electronic Journal of Combinatorics, 11: 39, doi:10.37236/32.
3. Guy, Richard K. (2004), "C9: Packing sums in pairs", Unsolved problems in number theory (3rd ed.), Springer-Verlag, pp. 175–180, ISBN 0-387-20860-7, Zbl 1058.11001
4. Lindström, Bernt (1969). "An inequality for B2-sequences". Journal of Combinatorial Theory. 6 (2): 211–212. doi:10.1016/S0021-9800(69)80124-9.
5. Balogh, József; Füredi, Zoltán; Roy, Souktik (2023-05-28). "An Upper Bound on the Size of Sidon Sets". The American Mathematical Monthly. 130 (5): 437–445. doi:10.1080/00029890.2023.2176667. ISSN 0002-9890. S2CID 232417382.
6. Erdős, Paul (1994). "Some problems in number theory, combinatorics and combinatorial geometry" (PDF). Mathematica Pannonica. 5 (2): 261–269.
7. Mian, Abdul Majid; Chowla, S. (1944), "On the B2 sequences of Sidon", Proc. Natl. Acad. Sci. India A, 14: 3–4, MR 0014114.
8. Ajtai, M.; Komlós, J.; Szemerédi, E. (1981), "A dense infinite Sidon sequence", European Journal of Combinatorics, 2 (1): 1–11, doi:10.1016/s0195-6698(81)80014-5, MR 0611925.
9. Ruzsa, I. Z. (1998), "An infinite Sidon sequence", Journal of Number Theory, 68: 63–71, doi:10.1006/jnth.1997.2192, MR 1492889.
10. Erdős, P.; Rényi, A. (1960), "Additive properties of random sequences of positive integers" (PDF), Acta Arithmetica, 6: 83–110, doi:10.4064/aa-6-1-83-110, MR 0120213.
11. Kiss, S. Z. (2010-07-01). "On Sidon sets which are asymptotic bases". Acta Mathematica Hungarica. 128 (1): 46–58. doi:10.1007/s10474-010-9155-1. ISSN 1588-2632. S2CID 96474687.
12.
Kiss, Sándor Z.; Rozgonyi, Eszter; Sándor, Csaba (2014-12-01). "On Sidon sets which are asymptotic bases of order 4". Functiones et Approximatio Commentarii Mathematici. 51 (2). arXiv:1304.5749. doi:10.7169/facm/2014.51.2.10. ISSN 0208-6573. S2CID 119121815.
13. Cilleruelo, Javier (November 2015). "On Sidon sets and asymptotic bases". Proceedings of the London Mathematical Society. 111 (5): 1206–1230. doi:10.1112/plms/pdv050. S2CID 34849568.
14. Pilatte, Cédric (2023-03-16). "A solution to the Erdős–Sárközy–Sós problem on asymptotic Sidon bases of order 3". arXiv:2303.09659v1 [math.NT].
15. "First-Year Graduate Finds Paradoxical Number Set". Quanta Magazine. 2023-06-05. Retrieved 2023-06-13.
16. Erdős, P.; Sárközy, A.; Sós, V. T. (1994-12-31). "On additive properties of general sequences". Discrete Mathematics. 136 (1): 75–99. doi:10.1016/0012-365X(94)00108-U. ISSN 0012-365X. S2CID 38168554.
\begin{document} \title{A new lower bound on the pebbling number of the grid} \begin{abstract} A pebbling move on a graph consists of removing $2$ pebbles from a vertex and adding $1$ pebble to one of the neighbouring vertices. A vertex is called reachable if we can put $1$ pebble on it after a sequence of moves. The optimal pebbling number of a graph is the minimum number $m$ such that there exists a distribution of $m$ pebbles so that each vertex is reachable. For the case of a square grid $n \times m$, Gy\H{o}ri, Katona and Papp recently showed that its optimal pebbling number is at least $\frac{2}{13}nm \approx 0.1538nm$ and at most $\frac{2}{7}nm +O(n+m) \approx 0.2857nm$. We improve the lower bound to $\frac{5092}{28593}nm +O(m+n) \approx 0.1781nm$. \end{abstract} \section{Introduction} Let $G$ be a graph. A \emph{pebbling distribution} is a function $P$ from $V(G)$ to the set of non-negative integers. We say that a vertex \emph{$x \in V(G)$ has $k$ pebbles on it} if $P(x)=k$. The \emph{total number of pebbles} is defined as $|P|=\sum_{x \in V(G)} P(x)$. A \emph{pebbling move} consists of removing $2$ pebbles from a vertex and adding $1$ pebble to one of the neighbouring vertices. We say that a vertex is \emph{reachable} if we can put at least $1$ pebble on it after a sequence of moves. More generally, for any $k \in \mathbb{N}$ we say that a vertex is \emph{$k$-reachable} if we can put at least $k$ pebbles on it after a sequence of moves. We say that a vertex is \emph{exactly $k$-reachable} if it is $k$-reachable but not $(k+1)$-reachable. We say that a pebbling distribution is \emph{solvable} if every vertex is reachable. The \emph{optimal pebbling number $\pi(G)$ of $G$} is the minimum number $m$ such that there exists a solvable distribution of $m$ pebbles on the graph $G$. 
Pebbling was introduced by Chung \cite{Chung} in 1989; since then the optimal pebbling number has been studied for various classes of graphs, such as paths and cycles (see, for example, \cite{Bundeetal}), $m$-ary trees \cite{mtrees} or hypercubes \cite{LowerBoundQn}, \cite{UpperBoundGrid}. As shown by Milans and Clark, the problem of determining $\pi(G)$ for a general graph $G$ is NP-complete \cite{NPC}. \\ In this paper, we will focus mainly on the case where $G$ is an $m \times n$ grid $\Lambda_{m,n}$: the vertices are the $nm$ squares of the grid, and two vertices share an edge if and only if their respective squares are adjacent. The best known (and conjectured optimal) upper bound is $\pi(\Lambda_{m,n}) \leq \frac{2}{7}nm+O(m+n) \approx 0.2857nm$, see a paper by Gy\H{o}ri, Katona and Papp \cite{UpperBoundGrid} for an explicit construction. In \cite{LowerBoundGrid}, the same authors proved that $\pi(\Lambda_{m,n}) \geq \frac{2}{13}nm \approx 0.1538nm$. We improve this result to $\pi(\Lambda_{m,n}) \geq \frac{5092}{28593}nm+O(m+n) \approx 0.1781nm$. \\ The core method used in our new lower bound has been used before, for example in \cite{LowerBoundGrid}. Given a graph $G$ and a pebbling distribution $P$, we define for each vertex $y$ the \emph{contribution function of $y$} as $v_y:V(G) \rightarrow \mathbb{R}$ given by $v_y(x)=P(y)2^{-d(x,y)}$, where $d$ is the graph distance in $G$. We also define the \emph{effect of a pebble placed on $x$} as $\mathrm{ef}(x)=\sum_{y \in V(G)} 2^{-d(x,y)}$. We define the \emph{value of $x$} as $v(x)=\sum_{y \in V(G)} v_y(x)$. The basic observation is that if $P$ is solvable, then the value of each vertex in $G$ is at least $1$. This already enables us to get a lower bound on $|P|$, using $\sum_{x \in V(G)} P(x)\,\mathrm{ef}(x) = \sum_{x \in V(G)} v(x) \geq |V(G)|$. Note that for $x \in V(\Lambda_{m,n})$ we have $\mathrm{ef}(x) \leq 1+\sum_{k=1}^{\infty} 4k\frac{1}{2^k}=9$.
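This bound on the effect can be checked numerically; the following sketch (our own sanity check, not from the paper) computes $\mathrm{ef}(x)$ exactly on square grids by breadth-first search and shows that the effect of a central pebble approaches $9$ from below as the grid grows:

```python
from collections import deque

def grid_distances(m, n, src):
    """BFS distances from `src` in the m x n grid graph (= Manhattan distance)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (i + di, j + dj)
            if 0 <= v[0] < m and 0 <= v[1] < n and v not in dist:
                dist[v] = dist[(i, j)] + 1
                q.append(v)
    return dist

def effect(m, n, x):
    """ef(x) = sum over all vertices y of 2^{-d(x,y)}."""
    return sum(0.5 ** d for d in grid_distances(m, n, x).values())

# A ring at distance k from an interior vertex has at most 4k cells, so
# ef(x) stays below 1 + sum_k 4k/2^k = 9, approaching it for large grids.
for size in (5, 11, 21, 41):
    c = (size // 2, size // 2)
    print(size, round(effect(size, size, c), 6))
```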
This bound already gives $\pi(\Lambda_{m,n}) \geq \frac{1}{9} mn$. In order to improve this, we will give a better lower bound on $\sum_{x \in V(G)} v(x)$. \\ To the best of our knowledge, our first improvement is a new concept in the study of pebbling. Given a solvable pebbling distribution, we partition the vertices into what we will call \emph{regions} in a way that all vertices inside a region can be reached using pebbles from within the region and no pebble can ever leave a region: given a graph $G$ and a starting distribution $P$, a \emph{region of reachability under $P$} (\emph{region} for short) is the set of vertices of a maximal connected subgraph of $G$ on $2$-reachable vertices together with their neighbours. \begin{figure} \caption{A starting distribution $P$ on $\Lambda_{5,8}$ and the four regions of reachability under $P$. The $2$-reachable vertices are double hatched.} \label{fig3} \end{figure} Regions of reachability reflect the nature of pebbling in more detail than values of vertices. We will later analyse the average value of a vertex in a region in a grid. But first, we make a few general observations about regions. \begin{Obs} If $P$ is a solvable distribution on a graph $G$ then there exists a non-negative integer $k$ such that $V(G)$ can be partitioned as $R_1 \cup R_2 \cup \ldots \cup R_k \cup S$ where $R_1, \ldots, R_k$ are regions and $S$ is the set of exactly $1$-reachable vertices with a pebble on them. \end{Obs} \begin{proof} We need to show that regions are disjoint and that if $x$ is a vertex not belonging to any region, then $x$ has a pebble on it and is not $2$-reachable. Assume for contradiction that $x \in R_i \cap R_j$ for $i \neq j$. By maximality, $x$ is not $2$-reachable (otherwise the two underlying connected subgraphs of $2$-reachable vertices would be joined through $x$). That means it has a $2$-reachable neighbour $u_i$ in $R_i$ and a $2$-reachable neighbour $u_j$ in $R_j$. There is no sequence of moves that would result in two pebbles on each of $u_i$ and $u_j$ simultaneously, as $x$ is not $2$-reachable (moving one pebble from each of them to $x$ would put two pebbles on $x$).
Consider a shortest sequence of moves from $P$ that results in $2$ pebbles on $u_i$ and consider the first move that causes $u_j$ not to be $2$-reachable anymore. Call the vertex from which the move begins $w$. Then there is a path on $2$-reachable vertices between $w$ and $u_i$. There is also a path on $2$-reachable vertices between $w$ and $u_j$. But that means that there is a path on $2$-reachable vertices between $u_i \in R_i$ and $u_j \in R_j$, a contradiction. On the other hand, let $x$ be a vertex not belonging to any region. Then $x$ is not $2$-reachable. Moreover, it is not a neighbour of any $2$-reachable vertex and therefore no sequence of moves can lead to a pebble being added to $x$. At the same time, $x$ is $1$-reachable (since $P$ is solvable), and therefore $P(x)=1$. \end{proof} Next, we shall show that when studying pebbling numbers, we may assume that in a solvable distribution on a connected graph, there is no isolated vertex with exactly one pebble. \begin{lemma}\label{NoOne} Let $P$ be a solvable distribution on a connected graph $G$ on at least two vertices. Then there exists a solvable distribution $Q$ on $G$ such that $V(G)$ can be partitioned into regions and $|Q|=|P|$. \end{lemma} \begin{proof} Write $V(G)=R_1 \cup R_2 \cup \ldots \cup R_k \cup S$ where $R_1, \ldots, R_k$ are regions and $S$ is the set of exactly $1$-reachable vertices with a pebble on them. We prove the lemma by induction on $|S|$. If $|S|=0$, we put $Q=P$. Otherwise, take any $x \in S$ and any of its neighbours $u$. As $P$ is solvable, $u$ is reachable; moreover, the pebble on $x$ can never be moved, so $u$ is reachable without it. Consider a starting distribution $\hat{P}$ where $\hat{P}(x)=0$, $\hat{P}(u)=P(u)+1$ and $\hat{P}(y)=P(y)$ for all other vertices. Then $\hat{P}$ is still solvable: every sequence of moves available in $P$ is still available, and $u$, which gains an extra pebble, is now $2$-reachable, so $x$ has a $2$-reachable neighbour. Furthermore $|\hat{P}|=|P|$, and $|S|$ has decreased. \end{proof} Finally, we observe that the vertices on the boundary of a region have value at least $\frac{3}{2}$.
\begin{Obs}\label{threehalf} Let $x$ be a vertex with a neighbour from a region other than the one containing $x$. Then $v(x) \geq \frac{3}{2}$. \end{Obs} \begin{proof} If $x$ is itself $2$-reachable, then $v(x) \geq 2$ and we are done. Otherwise, $x$ has a $2$-reachable neighbour $u$ from its own region and a $1$-reachable neighbour $w$ from a different region. Since pebbles never cross from one region into another, there exists a sequence of moves after which there are $2$ pebbles on $u$ and a pebble on $w$ simultaneously, therefore $v(x) \geq 2 \cdot \frac{1}{2} + \frac{1}{2} = \frac{3}{2}$. \end{proof} \section{A lower bound on the value of a vertex} In this section we shall make use of the geometry of the grid $\Lambda_{m,n}$. Let $P$ be any solvable distribution on a grid $\Lambda_{m,n}$; then by the \emph{hemmed pebbling distribution of $P$} we mean the pebbling distribution $P'$ equal to $P$ on the inside of the grid and with two pebbles more than $P$ on every vertex on the boundary. Obviously $|P'| \leq |P|+4(m+n)$, so for the rest of the paper we will focus only on the hemmed pebbling distribution $P'$ and on getting a lower bound on $|P'|$, which will indeed lead to a lower bound on $|P|$. \begin{lemma} Let $P$ be a solvable pebbling distribution on $\Lambda_{m,n}$ and $P'$ be its hemmed pebbling distribution. Then in $P'$ every vertex $X$ satisfies $v(X) \geq 4/3$. \label{FirstLP} \end{lemma} \begin{proof} Assume $X$ is a vertex with minimal value. If $X$ is $2$-reachable, then $v(X) \geq 2$ and we are done. Otherwise $X$ is exactly $1$-reachable. Since all the vertices on the boundary of the grid are $2$-reachable, we can assume $X$ is in the interior of the grid. Also, according to Lemma \ref{NoOne}, we can assume that $X$ does not have a pebble on it. Since $X$ is exactly $1$-reachable, one of its neighbours is $2$-reachable, say $v_2$ by symmetry (see Figure \ref{fig1}). Moreover, since $X$ achieves the minimum value of the grid, the value of $v_6$ is not smaller than the value of $X$.
Hence we have: \begin{itemize} \item $v(v_2)=2A+2B+2C+D/2+E/2+F/2+G/2+H/2 \geq 2,$ \item $A/2+B/2+C/2+D/2+2E+2F+2G+H/2=v(v_6) \geq v(X) = A+B+C+D+E+F+G+H.$ \end{itemize} \begin{figure} \caption{A visual representation of values, contributions and vertices for proof of Lemma \ref{FirstLP}.} \label{fig1} \end{figure} Here, as shown in Figure \ref{fig1}, $A$ is the contribution to the value of $X$ coming from the top left part of the grid relative to $X$, and $B$, $C$, $D$, $E$, $F$, $G$ and $H$ are defined analogously. \\ Hence, we are looking for the minimum of $A+B+C+D+E+F+G+H$ under the constraints: \begin{itemize} \item $2A+2B+2C+D/2+E/2+F/2+G/2+H/2 \geq 2,$ \item $-A/2-B/2-C/2-D/2+E+F+G-H/2 \geq 0.$ \end{itemize} Note that the coefficient of $A$ is not smaller than the coefficient of $B$ in both constraints, so for every optimal solution $(A,B,C,D,E,F,G,H)$, $(A+B,0,C,D,E,F,G,H)$ is also an optimal solution. Hence, we can assume $B=0$. Similarly, we can assume $D=F=H=0$. Hence the problem is equivalent to finding the minimum of $A+C+E+G$ under the constraints: \begin{itemize} \item $2A+2C+E/2+G/2 \geq 2,$ \item $-A/2-C/2+E+G \geq 0.$ \end{itemize} Taking $S_1=A+C$ and $S_2=E+G$, the problem is equivalent to finding the minimum of $S_1+S_2$ under the constraints: \begin{itemize} \item $2S_1 +\frac{1}{2}S_2 \geq 2,$ \item $S_2 \geq \frac{1}{2}S_1.$ \end{itemize} But \[\begin{aligned} 2 \leq 2S_1 +\frac{1}{2}S_2&= \frac{1}{2}S_1 + \frac{3}{2}S_1 +\frac{1}{2}S_2 \\ &\leq S_2 + \frac{3}{2}S_1 +\frac{1}{2}S_2 \\ &= \frac{3}{2}( S_1 +S_2). \\ \end{aligned}\]\\ Consequently, $S_1+S_2 \geq 4/3$, which proves $v(X) \geq 4/3$. \end{proof} \section{A lower bound on the average value in a region} In this section we analyse regions in grids. First we observe that we have control over the proportion of $2$-reachable vertices in a region (which will allow us to find a better lower bound on the average value of a vertex inside a region).
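The small linear programs above, and the similar ones in the proof of Lemma AddValue below, can be certified exactly. The following Python sketch (our own check, not part of the paper; the feasible points exhibited are ours) uses exact rational arithmetic to verify that the multiplier combinations used in the text yield the optimal values. The variables are sums of contributions and are therefore non-negative:

```python
from fractions import Fraction as F

def certified_minimum(A, b, duals, point):
    """Exact optimum of: minimise x_1+...+x_n subject to A x >= b and x >= 0.
    `duals` are the non-negative multipliers used in the paper's proofs,
    `point` a feasible point whose objective matches the resulting bound."""
    n = len(point)
    assert all(l >= 0 for l in duals)
    # (i) the proposed point satisfies every constraint
    for row, rhs in zip(A, b):
        assert sum(a * x for a, x in zip(row, point)) >= rhs
    # (ii) the weighted constraint sum gives  t*(x_1+...+x_n) >= duals . b
    combo = [sum(l * row[i] for l, row in zip(duals, A)) for i in range(n)]
    t = max(combo)                  # every coefficient is at most t, and x >= 0
    lower = sum(l * rhs for l, rhs in zip(duals, b)) / t
    assert sum(point) == lower      # the bound is attained: exact optimum
    return lower

# Lemma FirstLP, variables (S1, S2): 2*S1 + S2/2 >= 2 and -S1/2 + S2 >= 0
v1 = certified_minimum([[F(2), F(1, 2)], [F(-1, 2), F(1)]],
                       [F(2), F(0)],
                       [F(1), F(1)],                # the combination (1) + (2)
                       [F(8, 9), F(4, 9)])
print(v1)  # 4/3, matching v(X) >= 4/3

# Lemma AddValue, case p = 2, variables (A, C, E, G)
v2 = certified_minimum([[F(2), F(2), F(1, 2), F(1, 2)],
                        [F(1, 4), F(1), F(4), F(1)],
                        [F(1), F(1, 4), F(1), F(4)]],
                       [F(1), F(5, 6), F(5, 6)],
                       [F(1), F(2, 5), F(2, 5)],    # (1) + (2/5)(2) + (2/5)(3)
                       [F(2, 9), F(2, 9), F(1, 9), F(1, 9)])
print(v2)  # 2/3, matching e(X) >= 2/3
```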
The following observation admits a simple proof by induction: \begin{Obs}\label{boundAlpha} A region in $\Lambda_{m,n}$ with $k$ $2$-reachable vertices contains at most $3k+2$ vertices. \end{Obs} The previous observation shows that the proportion of $2$-reachable vertices in a region is roughly at least $1/3$. Using that and our lower bound on the value of a vertex, we could already get a lower bound on $\pi(\Lambda_{m,n})$, but we can actually improve our bound on the value of the $2$-reachable vertices and get a slightly better lower bound. For this purpose, we will use the following notation: for a $2$-reachable vertex $X$, the \emph{extra value of $X$} is $e(X)=v(X)-2$. \begin{lemma}\label{AddValue} Let $P$ be a solvable pebbling distribution on $\Lambda_{m,n}$ and $P'$ be its hemmed pebbling distribution. Let $X$ be a vertex that is $2$-reachable in a region that contains at least two $2$-reachable vertices for the pebbling distribution $P'$. Let $p$ be the number of pebbles on $X$ at the starting configuration. Then: \begin{enumerate}[(i)] \item If $p \geq 3$, then $e(X) \geq p-2.$ \item If $p=2$, then $e(X) \geq 2/3.$ \item If $p=1$, then $e(X) \geq 11/75$ and $X$ has a neighbour $Y$ such that $e(Y) \geq 1/2.$ \item If $p=0$, then either $X$ has a neighbour $Y$ such that $e(Y) \geq 2$ or two neighbours $Y_1$ and $Y_2$ such that $e(Y_1) \geq 1/2$ and $e(Y_2) \geq 1/2.$ \end{enumerate} \end{lemma} \begin{proof} The first point is obvious. For (ii), if $X$ is on the boundary, the result is obvious. Otherwise, we know that there is a neighbour of $X$ that is $2$-reachable, since the region contains at least two $2$-reachable vertices. Without loss of generality assume this neighbour is $v_2$. But $v_5$ and $v_7$ have values at least $4/3$ according to Lemma \ref{FirstLP}.
Hence, we are looking for the minimum of $A+B+C+D+E+F+G+H$ under the constraints: \begin{itemize} \item $v(v_2)=2A+2B+2C+D/2+E/2+F/2+G/2+H/2 \geq 2-1,$ \item $v(v_5)=A/4+B/4+C+D+4E+F+G+H/4 \geq 4/3-1/2,$ \item $v(v_7)=A+B/4+C/4+D/4+E+F+4G+H \geq 4/3-1/2.$ \end{itemize} \begin{figure} \caption{Proof of Lemma \ref{AddValue}, case $p=2$.} \label{fig2} \end{figure} As in the proof of Lemma \ref{FirstLP}, we can suppose $B=D=F=H=0$, which gives the equivalent problem of finding the minimum of $A+C+E+G$ under the constraints: \begin{enumerate}[(1)] \item $2A+2C+E/2+G/2 \geq 1$, \item $A/4+C+4E+G \geq 5/6$, \item $A+C/4+E+4G \geq 5/6$. \end{enumerate} Now $(1)+\frac{2}{5}(2)+\frac{2}{5}(3)$ gives $\frac{5}{2}(A+C+E+G) \geq \frac{5}{3}$, which proves $e(X) \geq 2/3$. \\ For (iii), we know that there exists a neighbour $Y$ of $X$ such that $Y$ is $2$-reachable without using the pebble on $X$. Hence, $e(Y) \geq 1/2$. Moreover, without loss of generality $Y$ is $v_2$, and we know that $v_5$ and $v_7$ have values at least $4/3$ according to Lemma \ref{FirstLP}. Thus, we are looking for the minimum of $A+B+C+D+E+F+G+H$ under the constraints: \begin{itemize} \item $v(v_2)=2A+2B+2C+D/2+E/2+F/2+G/2+H/2+1/2 \geq 2+1/2,$ \item $v(v_5)=A/4+B/4+C+D+4E+F+G+H/4 \geq 4/3-1/4,$ \item $v(v_7)=A+B/4+C/4+D/4+E+F+4G+H \geq 4/3-1/4.$ \end{itemize} As in the proof of Lemma \ref{FirstLP}, we can suppose $B=D=F=H=0$, which gives the equivalent problem of finding the minimum of $A+C+E+G$ under the constraints: \begin{enumerate}[(1)] \item $2A+2C+E/2+G/2 \geq 2$, \item $A/4+C+4E+G \geq 13/12$, \item $A+C/4+E+4G \geq 13/12$. \end{enumerate} Now $(1)+\frac{2}{5}(2)+\frac{2}{5}(3)$ gives $\frac{5}{2}(A+C+E+G) \geq \frac{43}{15}$. Hence $A+C+E+G \geq \frac{86}{75} $ which proves $e(X) \geq \frac{11}{75}$. 
\\ For (iv), since $X$ is $2$-reachable and has no pebble on it at the starting configuration, it either has a neighbour $Y$ that is $4$-reachable, or two neighbours $Y_1$ and $Y_2$ that are $2$-reachable at the same time. In the first case, $e(Y) \geq 2$. In the second case, since $Y_1$ and $Y_2$ are at distance $2$ from each other, each gives a contribution of $1/2$ to the value of the other. Hence $e(Y_1) \geq 1/2$ and $e(Y_2) \geq 1/2$. \end{proof} \begin{lemma}\label{LowerBoundRegion} Let $P$ be a solvable pebbling distribution on $\Lambda_{m,n}$ and $P'$ be its hemmed pebbling distribution. In $P'$, let $R$ be a region, $k \geq 2$ the number of vertices that are $2$-reachable in $R$ and $N$ the total number of vertices in $R$. Then the average value of $R$, denoted by $A(R)$, satisfies: $$A(R) \geq \frac{(2+\frac{50}{353})k + \frac{4}{3}(N-k)}{N}.$$ Moreover, if the grid contains at least $2$ regions: $$A(R) \geq \frac{(2+\frac{50}{353})k + \frac{4}{3}(N-k) +\frac{2}{3}}{N}.$$ \end{lemma} \begin{proof} We apply Lemma \ref{AddValue} to each $2$-reachable vertex. We cannot simply add up the extra values given in the cases $p=0$, $p=1$ and $p \geq 2$, because in some cases we add an extra value to the vertex $X$ itself, and in some other cases we add an extra value to a neighbouring vertex, so we have to be careful not to double-count some extra values. Let $x$, $y$ and $z$ be respectively the number of $2$-reachable vertices with $p=0$, $p=1$ and $p \geq 2$. Considering the extra values on the vertices themselves, we get an overall extra value of at least $e_1=\frac{2}{3}z+\frac{11}{75}y$. Considering the extra values of the neighbours of $X$ in the case $p=0$, since each neighbour could be counted $4$ times, we get an overall extra value of at least $e_2=\frac{1}{4}x$.
Considering the extra values of the neighbours in the case $p=1$, by taking $Y$ to be a neighbour of $X$ that can get $2$ pebbles in the minimum number of moves, we make sure that we do not double-count extra values: if $t > 1$ vertices $X_1, \dots, X_t$ share the same $Y$, then there is a sequence of moves that puts $2$ pebbles on $Y$ without using the pebbles on $X_1, \dots, X_t$, so $e(Y) \geq t/2$. Hence we get an overall extra value of at least $e_3=\frac{1}{2}y$. We have $x+y+z=k$, and we can always choose to add the biggest quantity among $e_1$, $e_2$ and $e_3$, so the worst-case scenario is when these $3$ quantities are equal. When this happens, we have $x=2y$ and $z=\frac{53}{100}y$, so $2y+y+\frac{53}{100}y=k$, which gives $y=\frac{100}{353}k$. Hence the overall extra value is always at least $\frac{50}{353}k$, which proves that: $$A(R) \geq \frac{(2+\frac{50}{353})k + \frac{4}{3}(N-k)}{N}.$$ Moreover, in the case where the grid is split into at least $2$ regions, there exist at least $4$ vertices in $R$ that have at least one neighbour from a different region. These vertices have value at least $3/2$ according to Observation \ref{threehalf}. Hence, we get: $$A(R) \geq \frac{(2+\frac{50}{353})k + \frac{4}{3}(N-k) +\frac{2}{3}}{N},$$ as claimed. \end{proof} \begin{lemma}\label{Casediffregions} Let $P$ be a solvable pebbling distribution on $\Lambda_{m,n}$ and $P'$ be its hemmed pebbling distribution. Suppose that in $P'$ the grid is split into at least $2$ regions, and let $R$ be a region. Then $$A(R) \geq \frac{5092}{3177}.$$ \end{lemma} \begin{proof} Let $k$ be the number of $2$-reachable vertices inside $R$ and $N$ be the total number of vertices inside $R$.
If $k \geq 2$, then $N \leq 3k+2$ as stated in Observation \ref{boundAlpha}, and the bound in Lemma \ref{LowerBoundRegion} gives: \[\begin{aligned} A(R) &\geq \frac{k (2+\frac{50}{353}) + (2k+2) \frac{4}{3} + \frac{2}{3}}{3k+2} = \frac{5092}{3177}+\frac{406}{3177(3k+2)} \geq \frac{5092}{3177}. \\ \end{aligned}\]\\ If $k=1$, then let $X$ be the $2$-reachable vertex and use the notation of Figure \ref{fig1}: $v_2$, $v_4$, $v_6$ and $v_8$ have values at least $3/2$ according to Observation \ref{threehalf}, and since $v_1$ is reachable within its own region, it will give at least $1/4$ extra contribution to $X$. Hence, $A(R) \geq \frac{4\cdot 3/2+2+1/4}{5} \geq \frac{5092}{3177}$. \end{proof} \begin{theorem} Let $P$ be a solvable pebbling distribution on $\Lambda_{m,n}$, then: $$|P| \geq \frac{5092}{28593}nm +O(n+m).$$ \end{theorem} \begin{proof} Recall that $P'$ is the hemmed pebbling distribution where we have added $2$ pebbles on every vertex that lies on the boundary. Then, since $|P'| \leq |P| + 4(m+n)$ and $\mathrm{ef}(x) \leq 9$ for every vertex $x$, we have: $$9|P| \geq 9|P'|+ O(n+m) \geq \sum_{x} v(x) + O(n+m) \geq \sum_{R} A(R)|R| + O(n+m).$$ If the grid is split into at least $2$ regions, then according to Lemma \ref{Casediffregions}, we have: $$\sum_{R} A(R)|R| \geq \sum_{R} \frac{5092}{3177}|R|=\frac{5092}{3177}nm.$$ Hence $|P| \geq \frac{5092}{28593}nm +O(n+m)$ as wanted. \\ If the grid contains only one region $R$, then using Lemma \ref{LowerBoundRegion}, we get: \[\begin{aligned} A(R) &\geq \frac{(2+\frac{50}{353})k + \frac{4}{3}(N-k)}{N} \geq \frac{5092}{3177} + O(1/N). \\ \end{aligned}\] Since in this case $N=nm$, multiplying by $N=|R|$ finally gives: $$|P| \geq \frac{5092}{28593}nm +O(n+m),$$ as claimed. \end{proof} \section{Concluding remarks and open problems} As was mentioned in the introduction, the new lower bound on $\pi(\Lambda_{m,n})$ is far from the best known upper bound. One of the reasons is that in our proof, we only distinguish between $2$-reachable and exactly $1$-reachable vertices.
A possible improvement could come from taking into account $3$-reachable, or even $4$-reachable vertices, as the best known construction (\cite{UpperBoundGrid}) contains linearly many (in terms of the total number of vertices) vertices with $4$ pebbles on them. Another line of improvement would be to improve the lower bound on the value of a vertex from $\frac{4}{3}$ to something higher. In view of Observation \ref{threehalf}, the authors believe the following should be true: \begin{conjecture} Let $P$ be a solvable pebbling distribution on the grid. Then for all vertices $X$ that do not lie on the boundary of the grid, $v(X) \geq \frac{3}{2}$. \end{conjecture} The reader will probably have recognised the linear programs in Lemma \ref{FirstLP} and Lemma \ref{AddValue}. It may seem that these bounds could be improved by adding more constraints on the values of the remaining neighbouring vertices. However, adding them does not improve the lower bound \textemdash\ the authors' approach was to begin with all the constraints on the $3 \times 3$ square centred at $X$ and then use computer software to delete the useless ones. What remained was a small number of constraints that could be solved by hand. \\ We finish the paper by mentioning two variations of pebbling a square grid for which we could achieve lower bounds using the arguments presented in this paper. First, we could consider pebbling of a $k$-dimensional grid. A different variation is to consider \emph{$k$-pebbling moves}: removing $k$ pebbles from a vertex to add a pebble on a neighbouring vertex. What we called a pebbling move is then a $2$-pebbling move. Once again, given a graph $G$ and a pebbling distribution $P$, we will call a vertex $y$ reachable if there is a sequence of $k$-pebbling moves starting in $P$ and ending in a distribution $Q$ with $Q(y) \geq 1$. A pebbling distribution $P$ on a graph $G$ is \emph{$k$-solvable} if every vertex is reachable.
For a graph $G$, we define the \emph{optimal k-pebbling number $\pi_{k}(G)$ of $G$} as the minimal total number of pebbles among all $k$-solvable distributions on $G$. For any connected graph $G$, $\pi_{1}(G)=1$. For a grid $\Lambda_{m,n}$, the situation is also simple when $k\geq 5$ in view of the following lemma: \begin{lemma}\label{smoothening} Let $k \in \mathbb{N}$ and $P$ be a $k$-solvable distribution on $\Lambda_{m,n}$ such that there is a vertex $x$ with $P(x) \geq k+1$. Then $\hat{P}$, where \begin{itemize} \item $\hat{P}(x)=P(x)-k$, \item $\hat{P}(z)=P(z)+1$ if $z$ is a neighbour of $x$, and \item $\hat{P}(z)=P(z)$ if $d(x,z)\geq 2$, \end{itemize} is also a $k$-solvable distribution. \end{lemma} \begin{proof} Let $y$ be any vertex of $\Lambda_{m,n}$. We show it is reachable in $\hat{P}$. This is clearly true if $d(x,y) \leq 1$. Suppose $y$ is such that $d(x,y) \geq 2$. Fix a sequence $\Sigma$ of moves that begins in distribution $P$ and results in a distribution with a pebble on $y$. If none of the moves begins in $x$, the same sequence of moves witnesses the reachability of $y$ in $\hat{P}$. Otherwise, consider the first move $M$ in $\Sigma$ that removes pebbles from $x$. Let $Q$ be the distribution after all the moves from $\Sigma$ up to $M$ (inclusive) have been made. We claim that $\Sigma$ without $M$ is a sequence of moves witnessing the reachability of $y$ in $\hat{P}$. Indeed, as $\hat{P}(z) \geq {P}(z)$ for all $z \neq x$, all moves up to $M$ (exclusive) are possible also when beginning in $\hat{P}$. Call the distribution after all these moves have been made $\hat{Q}$. Then $\hat{Q}(z) \geq Q(z)$ for all vertices $z$ (with a sharp inequality for all neighbours of $x$ but one). Therefore, all the remaining moves from $\Sigma$ without $M$ can also be made, after which $y$ has a pebble on it. \end{proof} \begin{Prop} For $k \geq 5$, $\pi_k(\Lambda_{m,n}) = nm$. 
\end{Prop} \begin{proof} The pebbling distribution with $1$ pebble on each vertex is $k$-solvable, thus $\pi_k(\Lambda_{m,n}) \leq nm$. For the other direction, consider a $k$-solvable distribution $P$. We show that there is a $k$-solvable distribution $Q$ with at least $1$ pebble on each vertex and satisfying $|Q|\leq |P|$. Note that this implies $\pi_{k}(\Lambda_{m,n})\geq nm$, finishing the proof. Let $x$ be a vertex such that $P(x)=0$. Since $x$ is reachable, there exists a sequence $\Sigma$ of $l$ $k$-pebbling moves that results in a pebble on $x$. For each of the $l$ moves, starting with the first move, we use Lemma \ref{smoothening} to get a $k$-solvable distribution $\hat{P}$. Since each vertex has at most $4$ neighbours and $k \geq 5$, we have $|\hat{P}|\leq |P|$. Note that the number of unoccupied vertices in $\hat{P}$ is not larger than in $P$. After $l$ uses of Lemma \ref{smoothening}, we arrive at a $k$-solvable distribution with at most $|P|$ pebbles and with fewer unoccupied vertices than $P$ has (as $x$ has a pebble on it). By repeating the process from the previous paragraph we eventually obtain the desired configuration $Q$. \end{proof} What would the optimal $k$-pebbling number of $\Lambda_{m,n}$ be for $k \in \{3,4\}$? \end{document}
Twist-to-bend ratio: an important selective factor for many rod-shaped biological structures

Steve Wolff-Vorbeck1, Max Langer2, Olga Speck2,3, Thomas Speck2,3,4 & Patrick Dondl1

Computational methods

Mechanical optimisation plays a key role in living beings either as an immediate response of individuals or as an evolutionary adaptation of populations to changing environmental conditions. Since biological structures are the result of multifunctional evolutionary constraints, the dimensionless twist-to-bend ratio is particularly meaningful because it provides information about the ratio of flexural rigidity to torsional rigidity determined by both material properties (bending and shear modulus) and morphometric parameters (axial and polar second moment of area). The determination of the mutual contributions of material properties and structural arrangements (geometry) or their ontogenetic alteration to the overall mechanical functionality of biological structures is difficult.
Numerical methods in the form of gradient flows of phase field functionals offer a means of addressing this question and of analysing the influence of the cross-sectional shape of the main load-bearing structures on the mechanical functionality. Three phase field simulations were carried out showing good agreement with the cross-sections found in selected plants: (i) U-shaped cross-sections comparable with those of Musa sp. petioles, (ii) star-shaped cross-sections with deep grooves as can be found in the lianoid wood of Condylocarpon guianense stems, and (iii) flat elliptic cross-sections with one deep groove comparable with the cross-sections of the climbing ribbon-shaped stems of Bauhinia guianensis.

Biological Materials Systems

During ontogeny and phylogeny, living organisms are confronted with the challenge of immediate individual response and evolutionary adaptation of populations in order to exist within changing environmental conditions1 and, simultaneously, have to ensure the survival of the species through reproduction. The German zoologist Günther Osche addressed this dilemma and pointed out that living beings cannot post a sign with the message "closed for reconstruction"2. On the contrary, regardless of the respective time scale, responses and adaptations in living organisms must take place during "ongoing operation", because the fulfilment of life-ensuring functions must be maintained permanently over the entire period. These responses and adaptations can include local or general changes in metabolism and changes in morphological-anatomical characteristics and mechanical properties realised at various hierarchical levels3. Because of their hierarchical structure from the molecular to the macroscopic level, a clear differentiation between "material" and "structure" is not possible in biology4. On the basis of these smooth transitions Wegst et al.5 coined the term "structural materials" to describe the complex materials systems of living nature.
In other words, plants and animals are materials systems that exhibit emergent characteristics far beyond those of their individual components6. From a mechanical point of view, biological materials systems are characterised, on the one hand, by anatomical heterogeneity through a specific three-dimensional arrangement of various tissues and, on the other hand, by mechanical anisotropy through various mechanical properties of their individual tissues. With regard to the topic of this article, response and adaptation can therefore be considered as a consequence of successive or simultaneous changes in one or both of these aspects, which might occur during both ontogeny and phylogeny.

The Twist-to-Bend Ratio

In the context of response and adaptation to existing or changing environmental mechanical conditions, the dimensionless twist-to-bend ratio is particularly useful as it provides information about the ratio of flexural rigidity to torsional rigidity of materials systems determined by both material properties (bending modulus E and shear modulus G) and morphometric parameters (axial second moment of area I and polar second moment of area J). Additionally, it allows a comparison of bodies of different sizes because of its dimensionlessness7,8,9,10. Flexural rigidity (=bending stiffness = EI) and torsional rigidity (=torsional stiffness = GJ) describe the resistance of a body to deformation caused by bending or torsion loading in the linear-elastic range. Since both are composite variables that combine material properties and morphometric parameters, they are well suited for quantifying the mechanical functionality of biological and technical structures7. On the one hand, sufficient flexural rigidity is relevant to counteract gravity. In plants, this prevents, for example, the sagging of the leaf blades or ensures an upright growth of the stems and thus an advantageous positioning of leaves, flowers and fruits.
On the other hand, a low torsional rigidity may help planar plant organs to streamline themselves under wind loads, e.g. by turning (large) leaves into the wind or by clustering compound leaf blades, thus reducing their cross-sectional area and ultimately the drag force7,11,12. In order to identify common patterns in the relationship between flexural rigidity and torsional rigidity, Etnier10 created a so-called stiffness mechanospace. Mapping the theoretical expectations of ideal beams with given cross-sectional shape (elliptic, circular) and Poisson's ratios varying from 0 to 0.5 shows that biological beams are generally limited to particular regions of the mechanospace. Vogel7 reported that elongated biological structures can achieve higher values for EI/GJ than ideal isotropic and isovolumetric circular solid cylinders with a value of 1.5 (if E/G is set to 3.0), as natural structures are anatomically inhomogeneous and mechanically anisotropic. In addition to circular and elliptical cross-sectional shapes, square-shaped, triangular and even U-shaped cross-sections exist in biology. For instance, an average EI/GJ value of 13.3 ± 1.0 has been reported for the hollow and lenticular flower stalks of daffodils (Narcissus pseudonarcissus)13. Furthermore, the values of the twist-to-bend ratio of the square-shaped stems of Leonurus cardiaca range on average between 15 and 1914 and the triangular flower stalks of the sedge Carex acutiformis lie in the range of 22 to 5115. The U-shaped cross-sections of banana petioles (Musa textilis) with EI/GJ values ranging from 40 to 100 show the highest values of any natural structures tested to date9,11. In principle, the dimensionless twist-to-bend ratio is a highly suitable parameter for the analysis and comparison of rod-shaped biological and technical materials systems.
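Vogel's reference value of 1.5 can be reproduced from textbook formulas for solid isotropic sections; for non-circular sections the Saint-Venant torsion constant K replaces the polar second moment J. The following Python sketch (our own illustration; the semi-axes are assumed values) uses the second moments of area of a solid ellipse and its torsion constant K = πa³b³/(a²+b²):

```python
import math

def twist_to_bend_ellipse(E_over_G, a, b):
    """EI/GK for a solid elliptic cross-section with semi-axes a >= b,
    returned for bending about each of the two principal axes."""
    I_min = math.pi * a * b ** 3 / 4      # bending about the major axis
    I_max = math.pi * a ** 3 * b / 4      # bending about the minor axis
    K = math.pi * a ** 3 * b ** 3 / (a ** 2 + b ** 2)  # Saint-Venant torsion constant
    return E_over_G * I_min / K, E_over_G * I_max / K

# circle (a = b): both ratios collapse to (E/G)/2 = 1.5 for E/G = 3
print(twist_to_bend_ellipse(3.0, 1.0, 1.0))   # (1.5, 1.5)
# flattening the section raises the larger ratio but lowers the smaller one
print(twist_to_bend_ellipse(3.0, 2.0, 1.0))   # (0.9375, 3.75)
```

This purely geometric effect of flattening suggests why anatomical inhomogeneity and mechanical anisotropy are required to reach values as high as those reported for Musa petioles.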
The aim of this study has been to investigate the development and interrelationship of flexural rigidity and torsional rigidity in relation to cross-sectional shapes of the main load-bearing structural elements by using a mathematical model and suitable simulations. The results of these simulations have then been compared with the load-bearing structural elements (e.g., lignified strengthening tissues such as xylem, vascular bundles or sclerenchyma and collenchyma fibres) in cross-sections of selected biological plant models. These biological models, which have previously been described in the literature, include the leaf stalks of banana plants (Musa sp.) with the highest twist-to-bend ratio known to date9,11 and the stems of two different lianas (Condylocarpon guianense, Bauhinia guianensis) with twist-to-bend ratios markedly changing during ontogeny16. The study is divided into three parts: (i) the mathematical model, which is based on a phase field description of the plant stem cross-section in the design space given by a unit square; (ii) the optimisation of the twist-to-bend ratio of the phase field with respect to its geometry by using a gradient flow method and by weighting flexural rigidity (maximal or minimal), torsional rigidity or both factors as objectives for maximisation and minimisation; (iii) a comparison of the phase field shapes and their mechanical properties acquired during the optimisation process with the selected biological plant models and an interpretation of the insights gained. The three above-mentioned weighting factors (maximal flexural rigidity, minimal flexural rigidity and torsional rigidity) theoretically allow for a large number of diverse simulations. 
In the context of this study, the authors have selected three exemplary simulations: (i) minimisation of the torsional rigidity and maximisation of the minimal flexural rigidity; (ii) minimisation of the torsional rigidity only, compared with the case in which the maximal flexural rigidity is also minimised; (iii) minimisation of the torsional rigidity and of the minimal flexural rigidity, compared with the case in which the maximal flexural rigidity is also maximised.

Mathematical Model

Plant stems as slender elastic rods

We describe a plant stem as a long thin elastic rod with domain \(B=A\times (0,L)\) of length L and constant cross-section A for an open bounded sufficiently regular domain \(A\subset {{\mathbb{R}}}^{2}\). We assume \(L\gg {\rm{diam}}A\) as well as material isotropy. It is, of course, possible to take into account heterogeneity and anisotropy of the material when optimising the rigidity properties of cross-sectional shapes, see, e.g.17,18,19,20. Here, however, we neglect such heterogeneity and anisotropy effects as well as viscosity and other time-dependent processes, since we are only interested in the influence of the cross-sectional shape on the mechanical properties of the stem. Consider B fixed at \(z=0\) and bending of B to be due to an outer normal force on A at \(z=L\). Starting from 3D elasticity, in the limit of a slender rod, following Mora & Müller21, the flexural (or bending) rigidity is given by the moment-curvature relation $$(\begin{array}{l}{M}_{y}\\ {M}_{x}\end{array})=E(\begin{array}{ll}{D}_{x} & {D}_{xy}\\ {D}_{xy} & {D}_{y}\end{array})\cdot (\begin{array}{l}{\kappa }_{x}\\ {\kappa }_{y}\end{array}),$$ where My, Mx denote the bending moments at the end of the beam and \({\kappa }_{x}\), \({\kappa }_{y}\) denote the curvatures in the directions of x and y, respectively, see Fig. 1.
In our idealised case the bending modulus of elasticity is equivalent to the tensile (Young's) modulus or the compressive modulus of elasticity. Thus the parameter E is just the Young's modulus of the linearisation of the elastic energy, and the moments of inertia Dx, Dy as well as the product of inertia Dxy are given by $${D}_{y}=\mathop{\int }\limits_{A}\,{\hat{y}}^{2}\,{\rm{d}}x{\rm{d}}y,\,{D}_{x}=\mathop{\int }\limits_{A}\,{\hat{x}}^{2}\,{\rm{d}}x{\rm{d}}y,\,{D}_{xy}=\mathop{\int }\limits_{A}\,\hat{x}\hat{y}\,{\rm{d}}x{\rm{d}}y,$$ where we have $$\hat{y}=y-\frac{1}{|A|}\,\mathop{\int }\limits_{A}\,y\,{\rm{d}}x{\rm{d}}y,\,\hat{x}=x-\frac{1}{|A|}\,\mathop{\int }\limits_{A}\,x\,{\rm{d}}x{\rm{d}}y.$$

Figure 1: Slender elastic beam B subject to bending and torsional moments.

The maximal and minimal flexural rigidities Dmax and Dmin along the principal axes are then given by the maximal and minimal eigenvalue of the matrix $$D=(\begin{array}{ll}{D}_{x} & {D}_{xy}\\ {D}_{xy} & {D}_{y}\end{array}),$$ after multiplying with the material Young's modulus E, which leads to $${D}_{{\rm{\max }}/{\rm{\min }}}=E(\frac{{D}_{x}+{D}_{y}}{2}\pm \sqrt{\frac{{({D}_{x}-{D}_{y})}^{2}}{4}+{D}_{xy}^{2}}).$$ For simplicity we write \({D}_{{\rm{\max }}/{\rm{\min }}}={D}_{{\rm{mean}}}\pm RM\). Remark. Note that Mora & Müller21 adapt their coordinate axes to the domain such that \(x=\hat{x}\), \(y=\hat{y}\), and \({D}_{xy}=0\). Since we will shortly move to a phase field description of the stem cross-section, we will work with arbitrary coordinate axes and origin and thus carry those additional terms. As for the flexural rigidity, the torsional rigidity for an elastic slender rod with domain B was rigorously derived by Mora & Müller21. This derivation considers the limit of a very slender and long rod, for which it is shown that the requirements of St. Venant's torsion theory22 are satisfied. We assume that torsion is due to a moment T at the top of B, see Fig. 1.
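As a numerical illustration of these formulas, the following minimal sketch (with purely hypothetical ellipse semi-axes a, b and rotation angle theta; not part of the original study) evaluates the centred moments of inertia on a regular grid and computes the principal rigidities via Dmean ± RM. For an ellipse the analytic principal values are E·pi·a³b/4 and E·pi·ab³/4, independent of the rotation.

```python
import math

# Sketch (hypothetical parameters): principal flexural rigidities of a
# rotated ellipse, evaluated from the centred moments of inertia
# Dx, Dy and the product of inertia Dxy on a regular grid.
E = 1.0                      # Young's modulus (arbitrary units)
a, b, theta = 1.5, 0.5, 0.3  # semi-axes and rotation angle (made up)
n = 500
h = 4.0 / n                  # grid spacing on [-2, 2]^2
ct, st = math.cos(theta), math.sin(theta)

pts = []
for i in range(n):
    for j in range(n):
        x = -2.0 + (i + 0.5) * h
        y = -2.0 + (j + 0.5) * h
        xr = ct * x + st * y          # rotate into the ellipse frame
        yr = -st * x + ct * y
        if (xr / a) ** 2 + (yr / b) ** 2 <= 1.0:
            pts.append((x, y))

dA = h * h
area = len(pts) * dA
xc = sum(x for x, _ in pts) * dA / area   # centroid, giving x-hat, y-hat
yc = sum(y for _, y in pts) * dA / area
Dx = E * sum((x - xc) ** 2 for x, _ in pts) * dA
Dy = E * sum((y - yc) ** 2 for _, y in pts) * dA
Dxy = E * sum((x - xc) * (y - yc) for x, y in pts) * dA

Dmean = 0.5 * (Dx + Dy)
RM = math.hypot(0.5 * (Dx - Dy), Dxy)
Dmax, Dmin = Dmean + RM, Dmean - RM   # eigenvalues of the inertia matrix
print(Dmax, Dmin)  # analytically: pi*a^3*b/4 and pi*a*b^3/4
```

Because Dmax and Dmin are the eigenvalues of the inertia matrix, the result is invariant under the rotation angle theta, which provides a quick consistency check for any implementation.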
Note that for very thin-walled structures (once the relative wall thickness becomes comparable to the cross-section-to-length ratio), the assumptions of St. Venant's theory of torsion are not applicable, and in this case Vlasov's theory of torsion should be applied. We argue, however, that the structures used in the comparison to plant morphology as displayed in section "Numerical results and comparison to plant morphology" are still within the realm where St. Venant's theory can be justified. In this framework the torsional rigidity may be expressed by Prandtl's stress function. In Prandtl's stress formulation the shear stress components are described by the derivatives of the stress function \(\varphi (x,y)\) $${\sigma }_{zx}=\frac{\partial \varphi (x,y)}{\partial y}\,{\rm{and}}\,{\sigma }_{zy}=-\,\frac{\partial \varphi (x,y)}{\partial x}.$$ Assuming without loss of generality a constant unit twist rate, the stress function \(\varphi \) must then satisfy Poisson's equation $$-\Delta \varphi =2G\,{\rm{on}}\,A,$$ with shear modulus G. Using the fact that moments appear only at the top of B, the traction-free beam wall condition leads to the boundary condition $$\frac{d\varphi }{ds}=0\,{\rm{on}}\,\partial A\Rightarrow \varphi =const.\,{\rm{on}}\,\partial A,$$ where the boundary \(\partial A\) is given by a curve parameterised by s.
We restrict ourselves to simply connected plant stem cross-sections and thus may assume, without loss of generality, that $$\varphi =0\,{\rm{on}}\,\partial A.$$ The torsional rigidity Dz is then given by $${D}_{z}=2\,\mathop{\int }\limits_{A}\,\varphi \,{\rm{d}}x{\rm{d}}y.$$

A priori bounds on the twist-to-bend ratio

In a shape optimisation problem regarding the twist-to-bend ratio of a plant stem, we are considering the minimisation problem $$\mathop{{\rm{\inf }}}\limits_{A\subset {{\mathbb{R}}}^{2}}\,{\sigma }_{1}{D}_{z}(A)+{\sigma }_{2}{D}_{{\rm{\min }}}(A)+{\sigma }_{3}{D}_{{\rm{\max }}}(A)$$ with weighting factors \({\sigma }_{1},{\sigma }_{2},{\sigma }_{3}\in {\mathbb{R}}\). For example, if \({\sigma }_{1} > 0\), \({\sigma }_{2} < 0\), and \({\sigma }_{3}=0\), then solutions of Eq. (2) tend to minimise torsional rigidity and maximise minimal flexural rigidity. Following Kim & Kim23 we deduce that Dz has the representation $$\begin{array}{rcl}{D}_{z} & = & \mathop{\int }\limits_{A}\,G({x}^{2}+{y}^{2})\,{\rm{d}}x{\rm{d}}y-\mathop{\int }\limits_{A}\,G[{(\frac{\partial \omega }{\partial x})}^{2}+{(\frac{\partial \omega }{\partial y})}^{2}]\,{\rm{d}}x{\rm{d}}y\\ & = & {(1+\nu )}^{-1}\,{D}_{{\rm{mean}}}-\mathop{\int }\limits_{A}\,G[{(\frac{\partial \omega }{\partial x})}^{2}+{(\frac{\partial \omega }{\partial y})}^{2}]\,{\rm{d}}x{\rm{d}}y,\end{array}$$ where the so-called "warping" function \(\omega \) is given by the solution of Laplace's equation with Neumann boundary condition \(\frac{\partial \omega }{\partial \eta }=\frac{1}{2}\frac{d}{ds}|(x(s),y(s)){|}^{2}\), where \((x(s),y(s))\) is an arc-length parametrisation of the boundary curve of the cross-section and \(\eta \) is its outer unit normal. For a circular domain A the warping function is constant, so the second term vanishes and the torsional rigidity Dz is determined by Dmean.
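The torsion boundary-value problem above is easy to verify numerically. The following sketch (illustrative parameters, not code from the original study) solves −Δφ = 2G with φ = 0 on ∂A by finite differences with SOR iteration for a unit square cross-section and evaluates Dz = 2∫φ; the classical torsion constant of a square is approximately 0.1406·Ga⁴.

```python
import math

# Sketch: St. Venant torsion of a unit square cross-section A via
# Prandtl's stress function: -Laplace(phi) = 2G in A, phi = 0 on the
# boundary, then Dz = 2 * integral of phi. Illustrative parameters.
G = 1.0
n = 41                                   # grid points per side
h = 1.0 / (n - 1)
phi = [[0.0] * n for _ in range(n)]      # phi = 0 on the boundary rows/cols
w = 2.0 / (1.0 + math.sin(math.pi * h))  # near-optimal SOR relaxation factor

for _ in range(1500):                    # SOR sweeps until convergence
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gs = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                         + phi[i][j - 1] + phi[i][j + 1]
                         + 2.0 * G * h * h)
            phi[i][j] += w * (gs - phi[i][j])

Dz = 2.0 * sum(sum(row) for row in phi) * h * h
print(Dz)  # close to the classical value 0.1406 * G * a^4 for a square
```

For comparison, the circle of equal area has Dz = Gπr⁴/2 = G/(2π) ≈ 0.159, larger than the square's value, consistent with the classical result that the circle maximises torsional rigidity at fixed area.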
We thus deduce that if, for example, the cross-section A is constrained to be a material distribution in a rotationally symmetric reference domain, merely maximising the flexural rigidity leads to non-simply connected domains, such as symmetric hollow circular tubes (see Condylocarpon guianense). Therefore, as we are interested in simply connected cross-sections having a high twist-to-bend ratio, a sole maximisation of the flexural rigidity is not useful for our purposes. For domains with circular boundary curve (simply connected or not) and isotropic material as well as Poisson's ratio \(\nu \in (0,0.5)\) we deduce from Eq. (3) the estimate $$\frac{{D}_{{\rm{\max }}}}{{D}_{z}}=\frac{{D}_{{\rm{\min }}}}{{D}_{z}}=\frac{{D}_{{\rm{mean}}}}{{D}_{z}}=(1+\nu ) < \frac{3}{2}.$$ The theorem of St. Venant, see, e.g., Pólya24, states that, among simply connected domains of equal area, circular domains lead to maximal torsional rigidity. As a bound for Dz(A) we can furthermore use the radius \({\rho }_{A}\) of the largest inscribed circle in a domain A. This is due to a theorem of Makai25, proving the inequality $${D}_{z}(A)\le 4{\rho }_{A}^{2}|A|$$ for every simply connected domain \(A\subset {{\mathbb{R}}}^{2}\). While it is instructive to consider such bounds on the functional above, the problem in Eq. (2) is ill posed even among simply connected domains, in the sense that no minimum exists. This can easily be seen from the fact that thin fingers cause high flexural rigidity but hardly any torsional rigidity. Thus, a sequence of thinner but increasingly wide I-beams will lead to larger and larger negative values of the functional in Eq. (2) for \({\sigma }_{1} > 0\), \({\sigma }_{2} < 0\), \({\sigma }_{3}=0\). We therefore restrict ourselves to a bounded domain for our designs and add a perimeter penalty. Such a perimeter penalisation is indeed also sensible in our application, as plants should not have arbitrarily large surfaces of exposure.
Furthermore, we impose a fixed cross-sectional area (or, equivalently, mass). More importantly, however, we are not really interested in the minimisers of the functional themselves. Instead, we consider a gradient flow dynamics of our functional using an artificial time variable, hereafter called pseudo-time. Solutions of this gradient flow are driven towards the direction of maximal decline, i.e., the direction of the biggest change in rigidities for small changes in shape. We propose that shape change in such a direction can be observed in plant stem geometries.

Phase field approximation

In order to treat our shape optimisation problem numerically, we describe the material distribution in a given domain \(\Omega \) by a phase field variable u. The phase field u shall take values close to 0 in the void and values close to 1 in the areas where material is present. In a phase field approach the interface between material and void is given by a diffuse interface layer, whose thickness is proportional to a small length scale parameter \(\varepsilon \). At this interface the phase field smoothly but rapidly changes its value between 0 and 1. The aforementioned mass constraint now simply reads $$\frac{1}{|\Omega |}\,\mathop{\int }\limits_{\Omega }\,u=m\in (0,1).$$ Further, we assume that \(u=0\) on the boundary \(\partial \Omega \) of \(\Omega \). We use the common approach of Blank et al.26 to describe the phase transition from material to void in the Young's modulus E, obtaining a Young's modulus \(E(u)\). In our case we simply take $$E(u)=Eu$$ with material constant E. This way we obtain u-dependent flexural rigidities \({D}_{{\rm{\max }}/{\rm{\min }}}(u)\) and torsional rigidity \({D}_{z}(u)\). For simplicity, we do not explicitly denote the dependence of these quantities on the length scale \(\varepsilon \).
Using the model from section "Plant stems as slender elastic rods", the moments of inertia and the product of inertia can now be expressed as $${D}_{x}(u)=E\,\mathop{\int }\limits_{\Omega }\,{\hat{x}}^{2}u\,{\rm{d}}x{\rm{d}}y,\,{D}_{y}(u)=E\,\mathop{\int }\limits_{\Omega }\,{\hat{y}}^{2}u\,{\rm{d}}x{\rm{d}}y,\,{D}_{xy}(u)=E\,\mathop{\int }\limits_{\Omega }\,\hat{x}\hat{y}u\,{\rm{d}}x{\rm{d}}y,$$ $$\hat{x}=x-\frac{1}{m}\,\mathop{\int }\limits_{\Omega }\,xu\,{\rm{d}}x{\rm{d}}y,$$ $$\hat{y}=y-\frac{1}{m}\,\mathop{\int }\limits_{\Omega }\,yu\,{\rm{d}}x{\rm{d}}y.$$ In an analogous way we obtain the torsional rigidity \({D}_{z}(u)\) by $${D}_{z}(u)=2\,\mathop{\int }\limits_{\Omega }\,\varphi (u)\,{\rm{d}}x{\rm{d}}y.$$ If u were simply the characteristic function \({\chi }_{A}\) of our plant stem cross-section A, then \(\varphi (u)\) would be given as the solution of Poisson's problem $$\begin{array}{ll}-\Delta \varphi =2G & {\rm{in}}\,\{u=1\},\\ \varphi =0 & {\rm{on}}\,\partial \{u=1\}.\end{array}$$ In a phase field approach we instead introduce a penalty such that the function \(\varphi \) is required to be constant where u is close to zero. By choosing zero boundary conditions on \(\partial \Omega \), these values then get propagated such that \(\varphi \approx 0\) on \(\Omega \backslash \{u\approx 1\}\). We thus solve $$\begin{array}{rcl}-\nabla \cdot (\frac{1}{{(u+{\theta }_{0})}^{2}}\nabla \varphi ) & = & 2G\,{\rm{on}}\,\Omega \\ \varphi & = & 0\,{\rm{on}}\,\partial \Omega ,\end{array}$$ where \(0 < {\theta }_{0}\ll 1\) is a small parameter; the coefficient is of order one where \(u\approx 1\) and very large where \(u\approx 0\), which forces \(\nabla \varphi \approx 0\) in the void. As long as the set where \(u\approx 1\) is simply connected, we obtain \(\varphi \) as an approximation of Prandtl's stress function in (1). We note that a similar approach, outside of the phase field context, was used by Kim & Kim23. As described before, shape optimisation problems of this kind are in general ill posed and it is necessary to add a perimeter penalisation for regularisation.
In phase field approaches such a perimeter penalisation is modelled with the help of the Ginzburg-Landau (or Modica-Mortola) energy27 $${{\rm{Per}}}_{\varepsilon }(u)=\frac{1}{{c}_{0}}\,\mathop{\int }\limits_{\Omega }\,\frac{\varepsilon }{2}|\nabla u{|}^{2}+\frac{1}{\varepsilon }F(u)\,{\rm{d}}x{\rm{d}}y,$$ where the function F is given by $$F(u)=\frac{1}{4}{u}^{2}{(u-1)}^{2},$$ so that F has exactly two global minima, at 0 and 1. The factor \(\frac{1}{{c}_{0}}\) is a normalising constant. We note that as \(\varepsilon \) tends to 0, minimisers of \({{\rm{Per}}}_{\varepsilon }(u)\) develop interfaces separating regions in which u is nearly constant with values close to the minima of F. This is due to an argument by Modica, Theorem I in Modica28, which also proves the \(\Gamma \)-convergence of \({{\rm{Per}}}_{\varepsilon }(u)\) to the perimeter functional \({\rm{Per}}(\{u=1\})\). Thus, adding \({{\rm{Per}}}_{\varepsilon }(u)\) to our problem penalises the perimeter of the set \(\{u=1\}\) and hence the perimeter of our cross-section. The shape optimisation problem is then to find a solution $$u\in {\mathscr{A}}=\{q\in {H}_{0}^{1}(\Omega ):0\le q\le 1\,{\rm{in}}\,\Omega ,\,{\int }^{}\,q=m\}$$ of the following minimisation problem $$\mathop{{\rm{\inf }}}\limits_{u\in {\mathscr{A}}}\,{I}_{\varepsilon }(u)$$ $${I}_{\varepsilon }(u)={\sigma }_{1}{D}_{z}(u)+{\sigma }_{2}{D}_{{\rm{\min }}}(u)+{\sigma }_{3}{D}_{{\rm{\max }}}(u)+\gamma {{\rm{Per}}}_{\varepsilon }(u).$$ The function space \({H}_{0}^{1}\) denotes all functions with square integrable first derivatives and zero boundary values on the boundary of \(\Omega \). We note that Eq. (6) does indeed admit minimisers as long as \(\gamma > 0\) and \(\Omega \) is bounded.

L2-gradient flow and numerical implementation

To compute solutions of Eq. (6) numerically we use a steepest descent approach, i.e., we make small steps in u towards the direction of maximal negative change of \({I}_{\varepsilon }\).
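The perimeter approximation can be made concrete with a small numerical check (a sketch with illustrative parameters, not code from the study): for the optimal interface profile u = 1/(1 + exp(d/(√2·ε))) across a circle of radius R, where d is the signed distance to the circle, Perε(u) with c0 = ∫₀¹ √(2F(s)) ds = √2/12 should approach the circumference 2πR.

```python
import math

# Sketch (illustrative values): the Ginzburg-Landau energy of a phase
# field with the optimal 1D profile across a circle of radius R
# approximates the circle's perimeter 2*pi*R.
eps, R, n = 0.02, 0.3, 400
h = 1.0 / n
c0 = math.sqrt(2.0) / 12.0       # c0 = integral_0^1 sqrt(2 F(s)) ds

def F(v):
    return 0.25 * v * v * (v - 1.0) ** 2

# phase field: logistic profile of the signed distance to the circle,
# u ~ 1 inside (material), u ~ 0 outside (void)
u = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        d = math.hypot(x - 0.5, y - 0.5) - R
        u[i][j] = 1.0 / (1.0 + math.exp(d / (math.sqrt(2.0) * eps)))

per = 0.0
for i in range(1, n - 1):
    for j in range(1, n - 1):
        ux = (u[i + 1][j] - u[i - 1][j]) / (2.0 * h)
        uy = (u[i][j + 1] - u[i][j - 1]) / (2.0 * h)
        per += (0.5 * eps * (ux * ux + uy * uy) + F(u[i][j]) / eps) * h * h
per /= c0
print(per)  # close to 2*pi*R
```

The logistic profile satisfies the equipartition relation (ε/2)|∇u|² = F(u)/ε exactly, which is why the normalised energy reproduces the interface length so closely even at moderate ε.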
In other words, we compute a time-discrete L2-gradient flow of \({I}_{\varepsilon }\) until a stationary state has been reached, using a discretisation of our reference domain by P1 triangular finite elements. The time discretisation uses a time step variable \(\tau \), and along with integer iteration steps \(n\ge 1\) this leads to an artificial time variable \(t=\tau \cdot n\), also called pseudo-time. A stationary state computed in this way is usually a local solution of our minimisation problem. Furthermore, we can decouple the solution of Poisson's problem in Eq. (5) from the gradient flow and calculate it separately using a P1 finite element approach. The mass constraint is imposed using a Lagrange multiplier. Starting from an initial configuration u0 of the phase field variable, we use a semi-implicit first-order Euler scheme in which only the linear highest-order gradient term is treated implicitly. We can thus compute the new material distribution un from the previous distribution un−1, showing us the direction of maximal decline. As described above, this gives us the direction of the biggest change in rigidities for small changes in shape. A more detailed description of the gradient flow, the finite element approximation and the implementation details is provided in Appendices A and B, respectively, in the supplement.

Numerical Results and Comparison to Plant Morphology

To demonstrate that the cross-sectional shapes of plant stems or petioles play an important part in their mechanical behaviour, we present three numerical experiments and a comparison with the cross-sectional shapes of the aforementioned load-bearing elements of the selected plants.
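The time stepping can be illustrated in one space dimension. The sketch below is not the authors' code; it is a simplified stand-in that keeps only the Ginzburg-Landau part of Iε, drops the constant 1/c0 (which only rescales time) and uses finite differences instead of P1 finite elements. It performs the semi-implicit Euler scheme with the Laplacian treated implicitly and the mass constraint enforced by a Lagrange multiplier, so the discrete mean of u is conserved exactly while the energy decreases.

```python
import math

# 1D sketch of the semi-implicit L2-gradient flow (illustrative only):
# u_t = eps * u_xx - F'(u)/eps + lambda, zero-flux boundary conditions,
# with lambda chosen so that the mean of u stays fixed.
n, eps, tau, steps = 128, 0.05, 2e-3, 500
h = 1.0 / (n - 1)
u = [0.5 + 0.4 * math.cos(math.pi * i * h) for i in range(n)]

def dF(v):                       # derivative of F(u) = u^2 (u-1)^2 / 4
    return 0.5 * v * (v - 1.0) * (2.0 * v - 1.0)

def energy(v):
    e = sum(0.25 * x * x * (x - 1.0) ** 2 / eps * h for x in v)
    e += sum(0.5 * eps * ((v[k + 1] - v[k]) / h) ** 2 * h for k in range(n - 1))
    return e

def solve_tridiag(sub, diag, sup, rhs):
    # Thomas algorithm for a tridiagonal linear system
    diag, rhs = diag[:], rhs[:]
    for k in range(1, n):
        f = sub[k] / diag[k - 1]
        diag[k] -= f * sup[k - 1]
        rhs[k] -= f * rhs[k - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (rhs[k] - sup[k] * x[k + 1]) / diag[k]
    return x

e0, m0 = energy(u), sum(u) / n
r = tau * eps / (h * h)
for _ in range(steps):
    lam = sum(dF(v) for v in u) / (n * eps)   # Lagrange multiplier
    rhs = [u[k] + tau * (lam - dF(u[k]) / eps) for k in range(n)]
    sub, sup = [-r] * n, [-r] * n
    diag = [1.0 + 2.0 * r] * n
    diag[0] = diag[-1] = 1.0 + r              # Neumann (zero-flux) ends
    u = solve_tridiag(sub, diag, sup, rhs)    # implicit Laplacian step

print(energy(u) < e0, abs(sum(u) / n - m0))   # energy decreased, mass kept
```

In the full model the rigidity terms contribute additional explicitly treated gradient terms on the right-hand side, and the elliptic solves are done with P1 finite elements as described in the text.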
As we are solely interested in the contribution of the cross-sectional shape of a plant axis to the twist-to-bend ratio, we assume a fixed ratio of bending modulus to shear modulus of \(E/G\approx 2.7\) for all numerical experiments; for isotropic materials this corresponds to a Poisson's ratio of \(\nu =E/(2G)-1=0.35\), which lies within the range \(\nu \in [0.2,0.5]\) that is reasonable to assume for many plant axes29. Note again that we only compare the main load-bearing element of the respective cross-section of the selected plant with the cross-sections from our simulations. Detailed morphological-anatomical descriptions of the biological model plants are provided in Appendix C in the supplement.

U-shapes

In a first experiment we consider the shape optimisation problem (Eq. (2)) with weighting factors \({\sigma }_{1}=1,\,{\sigma }_{2}=-\,1\) and \({\sigma }_{3}=0\). This corresponds to a minimisation of the torsional rigidity Dz and a maximisation of the minimal flexural rigidity Dmin. The small weighting factor for the perimeter regularisation is set to \(\gamma \approx 1.4\cdot {10}^{-2}E\).

Description of the simulation

The evolution of the phase field is shown in Fig. 2. During the evolution, the circular initial shape of the phase field changes noticeably. After a short time period, small grooves form on the outer boundary of the phase. Such grooves are known to facilitate the twisting of a geometry, as described by Vogel7. The results of the experiment confirm his finding and indicate that groove formation is the first dominant mode to reduce torsional rigidity starting from a circular disc as rod cross-section. The flexural rigidity barely changes in this initial phase.

Figure 2: Evolution of the phase field in terms of maximising minimal flexural rigidity Dmin and minimising torsional rigidity Dz (\({\sigma }_{1}=-\,{\sigma }_{2}=1,{\sigma }_{3}=0\)). (a) Evolution of the shape of the phase field with respect to torsional rigidity Dz (vertical axis) and flexural rigidity Dmin (horizontal axis).
(b) Evolution of the shape of the phase field with respect to the twist-to-bend ratio Dmin/Dz and pseudo-time t. The development of a deep central groove leads to a noticeable reduction in torsional rigidity, see (a), and thus to a first strong increase in the ratio, see (b). The flexural rigidity is then greatly increased by a widening of the central groove (a), leading to another marked increase in the twist-to-bend ratio. These two effects ultimately form the characteristic U-shaped domain. The numerical steady state is affected by the chosen artificial boundary conditions and is not considered in the comparison to plant morphology.

During the further optimisation steps this trend becomes clearer (Fig. 2a). The central groove deepens further and leads to an even greater reduction in torsional rigidity, which results in a major increase in the twist-to-bend ratio (Fig. 2b). After half of the optimisation pseudo-time has passed, the flexural rigidity increases perceptibly for the first time. This effect can be attributed to the widening of the central groove, which causes the phase to shift outwards, building a U-shaped domain. This process continues until the phase has reached the boundary of the reference domain \(\Omega \) (\(t=0.35\)). As soon as the phase contacts the boundary, the resulting shapes of course cannot be compared to plant morphology anymore. A numerical steady state of the gradient flow is obtained at pseudo-time \(t\approx 0.8\).

Comparison with the leaf stalk of bananas (Musa sp.)

The shape change of the phase in this simulation shows great similarities with the cross-sections of banana leaf stalks (Musa sp.) (Fig. 3a and Fig. C1 in the supplement). Generally, leaf stalks (=petioles) should resist static loads such as bending caused by the leaf weight in order to hold the large leaf blade (=lamina) in place and to ensure its orientation towards the sun.
Additionally, they have to withstand high dynamic loads caused, in particular, by wind forces acting on the lamina. These drag forces can be reduced by streamlining in the wind, i.e., by twisting of the petiole7,9,11,12,30,31. This is extremely important for the integrity of the herbaceous banana plant, which consists of a pseudostem of densely packed leaf sheaths at the base of the petioles and leaf laminae with large surfaces, the latter being especially susceptible to damage from wind forces.

Figure 3: Selected model plants that were the basis for the comparative studies with the phase field simulations. (a) Banana plant of the species Musa x paradisiaca with its pseudostem composed of leaf sheaths and huge leaf laminae, (b) the stem of the twining liana Condylocarpon guianense winding around a tree in the tropical rain forest of French Guiana, (c) a non-self-supporting ribbon-shaped stem of the liana Bauhinia guianensis, which is referred to as the monkey ladder because of its appearance, growing in the tropical rain forest of French Guiana.

Morphological-anatomical studies of the U-shaped cross-section of banana petioles have revealed an inner and an outer shell comprising an epidermis and fibre-reinforced parenchyma, with radial (sometimes branched) parenchymatous strands lying in between11,32,33,34 (Fig. 4a and Fig. C1 in the supplement). This cross-sectional arrangement is associated with an increase in flexural rigidity, since its special structure prevents the petiole from bending downwards (and finally collapsing) by converting bending forces into tensile forces. In addition, the combination of shape and inner structure also reduces torsional rigidity and thus supports streamlining by torsion. These characteristics lead to extremely high twist-to-bend ratios of petioles of banana plants, with values ranging from 40 to 1009,11, compared with petioles of various tree species with values between 1.6 and 97,12.
A comparable U-shape is visible in the phase field simulation, alongside the trend towards an increasingly higher twist-to-bend ratio (Fig. 2).

Figure 4: Comparison of individual shapes from the phase field simulations with cross-sections of the selected biological plant models. (a) Individual shapes of a phase field simulation with the aim of minimising the torsional rigidity Dz and of maximising the minimal flexural rigidity Dmin are compared with cross-sections of the petiole of Musa acuminata. In M. acuminata the differences in the cross-sectional shape are attributable to the distinct positions along the longitudinal axis of the petiole (left = cross-section of the middle part, right = cross-section of the basal part). (b) Comparison between individual shapes of a phase field simulation in which only the torsional rigidity Dz is minimised and cross-sections of the stem of the liana Condylocarpon guianense in two different ontogenetic phases (left = self-supporting early stage, right = non-self-supporting old stage after attachment to a support). Reproduced from Rowe et al.35 with permission. (c) Individual shapes of a phase field simulation aimed at minimising the torsional rigidity Dz and minimising the minimal flexural rigidity Dmin compared with cross-sections of a young lianescent (left) and an old lianescent stem (right) of the liana Bauhinia guianensis.

The shape of the phases at various pseudo-times shows striking similarities with individual cross-sections along the longitudinal axis of the banana petiole33,34. The phase shapes at pseudo-times \(t=0.3438\), \(t=0.2475\) and \(t=0.1512\) correspond well to the cross-sectional shapes found in the basal, middle and apical parts of the petioles (Fig. 2b, as well as Figs. C1 and C2 in the supplement).
Figure 2b clearly shows that, with increasing pseudo-time, the torsional rigidity decreases, whereas the bending rigidity increases, and the resulting twist-to-bend ratio increases strongly in the relevant period between \(t=0.1512\) and \(t=0.3438\). From the viewpoint of functional morphology and biomechanics, this shape-dependent increase of the twist-to-bend ratio is strongly related to the mechanical loading of banana leaves. As a result of the increased leverage caused by the weight of the leaf itself, the flexural rigidity in the basal part of the petiole has to be larger than that in the apical parts. The torsional rigidity, on the other hand, is advantageously uniformly low over the entire petiole in order to allow easy twisting under wind loads and thus to protect the leaf stalk from damage11,31.

Deep grooves

In a second experiment we consider Eq. (2) with \({\sigma }_{2}={\sigma }_{3}=0\) and \({\sigma }_{1}=1\), which leads to a minimisation of torsional rigidity only. The weighting factor for the perimeter regularisation is \(\gamma =1\cdot {10}^{-2}\,G\). We note that the bending modulus does not play a role here, since the objective function does not include bending. The evolution of the phase field is shown in Fig. 5. Similar to the first simulation, grooves, which reduce the torsional rigidity, again appear at the boundary of the phase field shape. In contrast to the first experiment, however, these grooves appear uniformly distributed along the boundary (Fig. 5a).

Figure 5: Evolution of the phase field in terms of minimising the torsional rigidity Dz only (\({\sigma }_{1}=1,{\sigma }_{2}={\sigma }_{3}=0\)). (a) Evolution of the shape of the phase field with respect to torsional rigidity Dz and minimal flexural rigidity Dmin. (b) Evolution of the shape of the phase field with respect to the twist-to-bend ratio Dmin/Dz and pseudo-time t.
Characteristic for the sole minimisation of the torsional rigidity is the development of uniformly distributed deep grooves around the phase boundary, resulting in a cloverleaf-shaped cross-section. Compared to the first experiment, a significant decrease in torsional rigidity occurs in a very short period of time, as seen in (b).

The sole weighting of the torsional rigidity then leads to a further deepening of some grooves. In this way, the twist-to-bend ratio is increased strongly after a comparatively short period of time and a characteristic cloverleaf-shaped cross-section is formed (Fig. 5b). A widening of the grooves, as in the experiment described above, only occurs to a lesser extent. The subsequent smoothing of the phase boundary has no major influence on either the torsional or the flexural rigidity. A numerical steady state is reached at \(t\approx 0.1435\). The flexural rigidity increases only marginally during the simulation and has no considerable influence on the twist-to-bend ratio. Compared to the first simulation, the pseudo-time needed to reach the steady state, and thus the optimum under the given boundary conditions, is much lower.

Comparison with the stem of the liana Condylocarpon guianense

In this case, the shape change of the phase field has similarities with the ontogenetic development of the cross-sectional shape of the woody part, and thus the main load-bearing tissues, of the stems of the liana Condylocarpon guianense (Fig. 3b). The rod-shaped stems of this species respond to the typical mechanical loads to which they are subjected in a certain ontogenetic phase by changes in their internal structure and in the material properties of the tissues involved. In young self-supporting C. guianense shoots, which are still searching for a support and are therefore mainly exposed to bending loads, a dense and stiff type of wood (secondary xylem) is formed.
It is arranged in a centripetal pattern, with a central pith and an adjacent ring of dense wood (secondary xylem consisting of narrow-diameter vessels and small wood rays). This wood type is responsible for the relatively high flexural rigidity of young C. guianense stems, enabling them to bridge the distance to potential supports. As soon as the stems are securely attached to a support, a different type of wood is built, which is significantly less dense and mechanically more flexible (secondary xylem comprising wide-diameter vessels and broad wood rays forming grooves in the wood cylinder), contributing to the pronounced flexibility of old lianescent stems. Because the two wood types are formed in subsequent ontogenetic phases, the dense and stiff wood cylinder formed by young "searchers" becomes surrounded by lianoid non-dense wood built during older phases16,35,36,37,38 (Fig. 4b and Fig. C2 in the supplement). Similar to the woody part of a young stem of C. guianense, the phase field simulation starts with a circular cross-section (\(t=0.0000\)) (Fig. 5 and Fig. D3 in the supplement). The twist-to-bend ratio of the phase field is very low at this point. However, within a very short period of time, especially compared with the first simulation, the twist-to-bend ratio of the simulated phase shape increases strongly (Fig. 5b). This increase in the twist-to-bend ratio can be attributed to the formation of the above-mentioned grooves, which strongly reduce the torsional rigidity while maintaining the flexural rigidity (Fig. 5a). Figuratively, the grooves reduce the largest possible circular area inscribed in the cross-section, which is ultimately decisive for the torsional rigidity. The resulting deeply grooved shape of the phase field is similar to the cross-sectional shape of the wood in older C. guianense stems.
These older stages, which are by now attached to a support, are highly flexible in both bending and twisting and therefore allow the slender liana stem to passively follow the movement of the host tree under wind loading and even to survive the breakage of branches or even of the entire stem of the supporting host tree35,36,39 (Fig. 3b). The shape and three-dimensional arrangement of the wood within the older cross-sections differ markedly from those of the younger stages. Only a small circular ring of the dense wood remains around the central pith, while the lianoid less-dense wood is arranged in a deeply grooved (star-like) cross-sectional shape, analogous to the phase field, and fills most of the cross-sectional area35. We can thus assume that the decrease in torsional rigidity in older stages of C. guianense is (mainly) attributable to the different shape and arrangement of wood within the cross-sections of young and old stems. If, in addition to the reduction of the torsional rigidity, a minimisation of the flexural rigidity is also included in the simulation, as is the case during the ontogeny of C. guianense, the resulting phase shape differs markedly from the real shape of the xylem in C. guianense stems (Fig. 6b). Consequently, we can reasonably assume that the decrease in flexural rigidity in C. guianense is primarily determined by the modified material properties of the wood formed in the lianescent phase of growth16 and not by the shape and 3D arrangement of the tissues involved.

Figure 6: Phase field states of the simulation in terms of minimising the torsional rigidity Dz and additionally minimising the maximal flexural rigidity Dmax (\({\sigma }_{1}={\sigma }_{3}=1,{\sigma }_{2}=0\)). (a) Initial state, (b) final state (with respect to pseudo-time t). Compared to the simulation shown in Fig. 5, which merely minimises torsional rigidity, the additional minimisation of Dmax results in a markedly different shape.
This gives rise to the conjecture that the minimisation of the torsional rigidity is the main driving factor for the morphology of the main load-bearing tissue in non-self-supporting old stages of Condylocarpon guianense. In general, the self-supporting early stages of lianas show the typical values for flexural rigidity that are also known for other self-supporting woody stems. In contrast, the non-self-supporting older ontogenetic stages of lianas that are attached to a host support have considerably lower values (a reduction of up to an order of magnitude). Although fewer data exist concerning torsional rigidity in the literature, the torsional rigidity of the secondary wood does not seem to decrease in the same way as the flexural rigidity9. To summarise, all lianas tested to date develop especially low \({D}_{{\rm{\min }}}/{D}_{z}\) ratios after "giving up" self-support9.

In a third experiment we consider Eq. (2) with \({\sigma }_{3}=0\) and \({\sigma }_{1}={\sigma }_{2}=1\), which leads to a minimisation of both torsional rigidity and minimal flexural rigidity. The perimeter penalty is set to \(\gamma \approx 1.4\cdot {10}^{-2}\), as in the first experiment. The evolution of the phase field is shown in Fig. 7. A minimisation of both torsional and flexural rigidity allows the phase to form a nearly elliptic shape, leading to a noticeable decrease in both the torsional and the minimal flexural rigidity (in the direction of the short axis), whereas the maximal flexural rigidity in the direction of the long axis is increased. A formation of grooves also occurs here. This is shown by a large groove in the middle of the shape, which, as in the previous experiments, deepens further, in this case leading to an even greater reduction in both torsional and minimal flexural rigidity. Before the phase touches the boundary, the characteristic shape appears at \(t=0.1040\). A numerical steady state is reached at pseudo-time \(t\approx 0.3960\).
For more details see Fig. D4 in the supplement. Evolution of the phase field in terms of minimising the torsional rigidity Dz and the minimal flexural rigidity Dmin (\({\sigma }_{1}={\sigma }_{2}=1,{\sigma }_{3}=0\)). (a) Evolution of the shape of the phase field with respect to torsional rigidity Dz and flexural rigidity Dmin. (b) Evolution of the shape of the phase field with respect to the twist-to-bend ratio Dmin/Dz and pseudo-time t. Minimisation of both torsional and minimal flexural rigidity leads to ribbon-like domains. Compared to the previous experiments we now obtain one direction with low and another direction with high flexural rigidity, see (b) and Figs. D1 and D4 in the supplement. Considering the twist-to-bend ratio of the shapes, it is important to note that, contrary to the previous experiments, only the ratio \({D}_{{\rm{\max }}}/{D}_{z}\) is markedly increased (Fig. D1 in the supplement), whereas the ratio \({D}_{{\rm{\min }}}/{D}_{z}\) is reduced (Fig. 7b). The cross-sectional shapes that occur thus only show a high twist-to-bend ratio in the direction of the maximum flexural rigidity. In the previous experiments these two ratios were almost identical due to the near symmetry of the cross-sectional shapes. The mentioned groove in the middle of the shape is the decisive characteristic, which distinguishes this model from the model with an additional maximisation of the maximum flexural rigidity Dmax. The formation of such a groove slows the increase in maximum flexural rigidity Dmax and is thus suppressed when the maximum flexural rigidity is included as an objective to be maximised (Fig. 8). Phase field states of the simulation minimising the torsional rigidity Dz as well as minimum flexural rigidity Dmin and additionally maximising maximum flexural rigidity Dmax, (\({\sigma }_{1}={\sigma }_{2}=1,{\sigma }_{3}=-1\)), (a) Initial state, (b) final state (with respect to pseudo-time t). Compared to the simulation shown in Fig. 
7, the additional maximisation of Dmax prevents the development of the deep groove in the centre. Comparison with the stem of the monkey ladder liana (Bauhinia guianensis) Like the stems of Condylocarpon guianense, the stems of the liana Bauhinia guianensis change their mechanical properties, wood type and wood shape markedly during ontogeny16. Young self-supporting stems and young apical axes of B. guianensis have a circular cross-sectional shape composed of a central pith and an adjacent ring of dense stiff wood (secondary xylem consisting of narrow-diameter vessels and small wood rays) (Fig. 4c and Fig. C3 in the supplement). These young axes are stiff in both bending and torsion16,37,39. As in C. guianense, the young shoots act as "searchers", spanning gaps between the host supports (Fig. 3c) and therefore rely on high values of flexural and torsional rigidity36,37,39. Similar to the cross-sections of young B. guianensis stems, the phase field simulation starts from a round shape (\(t=0.00\)) (Fig. 4c and Fig. D4 in the supplement), which also features high minimal flexural rigidity and high torsional rigidity and thus the highest twist-to-bend ratio within this simulation (Fig. 7). In contrast, adult non-self-supporting lianescent stems of B. guianensis are much more flexible and have a markedly lower modulus of elasticity and their cross-sectional shape differs considerably from that of young stems16. These changes in the mechanical properties of the stem have been correlated with changes in the contribution of the various wood types (small amounts of dense stiff secondary xylem built in the young self-supporting stage and large amounts of non-dense flexible secondary xylem with wide-diameter vessels and broad wood rays formed after attachment in the lianescent stage) to the axial second moment of area of the stems16,37. 
This also becomes apparent with regard to the change in the cross-section of the stem from a circular to a ribbon shape during the ontogeny of the plant39. The ribbon shape, which gives B. guianensis its vernacular name of "monkey ladder", can also be seen in the phase field simulation and is associated with reductions in minimal flexural rigidity, torsional rigidity and twist-to-bend ratio. Moreover, as described above, the phase field shape exhibits a mid-line groove, which further decreases the minimal flexural rigidity and the torsional rigidity. This groove is also present in B. guianensis but has a slightly different shape. In the actual plant, the groove is much wider towards the outside than the groove in the phase field simulation. Regardless of their individual shape, grooves have a similar effect on the mechanics of the overall structure. Figuratively speaking, the grooves reduce the largest possible resulting circular area within the cross-section, which ultimately leads to a decrease of the torsional rigidity. Another similarity with the phase field simulation is the cross-sectional stem shape of B. guianensis in the transition phase from young to adult stages. Since additional large-lumen wood is only formed on two opposite sides of the young circular stem, the cross-section shows more and more similarities with the elliptical shape of the phase field shortly after the start of the simulation (Fig. 4c). The simulation reveals that this change in the cross-sectional shape results in a simultaneous decrease of the torsional rigidity and of the minimal flexural rigidity (Fig. 7a). In summary, some similarities, but also some differences, exist between the three experiments, namely the simulations of "U-shapes", "Deep grooves" and "Ribbons". The simulations of "U-shapes" and "Deep grooves" are readily comparable insofar as, depending on the various weighting factors, both lead to an increase of the twist-to-bend ratio. 
Comparison of these two simulations demonstrates clear differences with regard to the increase and the maximum values of the twist-to-bend ratio. With the exclusive minimisation of torsional rigidity, as performed in "Deep grooves", a twist-to-bend ratio of \({D}_{{\rm{\min }}}/{D}_{z}\approx 4\) can be achieved even after a pseudo-time of \(t\approx 0.025\), whereas with the minimisation of the torsional rigidity and a simultaneous maximisation of the minimum flexural rigidity, as was carried out in "U-shapes", a twist-to-bend ratio of \({D}_{{\rm{\min }}}/{D}_{z}\approx 4\) could only be reached after a pseudo-time of \(t\approx 0.26\). On the other hand, the overall twist-to-bend ratio is higher if the minimal flexural rigidity is additionally maximised, as conducted in "U-shapes", with twist-to-bend ratios of \({D}_{{\rm{\min }}}/{D}_{z}\approx 20\), instead of just the minimisation of the torsional rigidity as performed in "Deep grooves", where the maximal twist-to-bend ratio only has values of \({D}_{{\rm{\min }}}/{D}_{z}\approx 4.5\). Interestingly, in the simulation of "Ribbons", the twist-to-bend ratio \({D}_{{\rm{\min }}}/{D}_{z}\) decreases over time, whereas \({D}_{{\rm{\max }}}/{D}_{z}\) increases, reaching a twist-to-bend ratio of \({D}_{{\rm{\max }}}/{D}_{z}\approx 4\) after a pseudo-time of \(t\approx 0.1\) and maximum values of \({D}_{{\rm{\max }}}/{D}_{z}\approx 5.5\). Apart from these differences, a common shape-related characteristic noticeably occurs in all three simulations, namely the formation of grooves. Figuratively, these grooves reduce the largest possible circular area that can be placed in the phase field shapes, whose size correlates with the torsional rigidity; this ultimately leads to a reduction in the torsional rigidity of the overall structure. Since all simulations are at least partly aimed at minimising the torsional rigidity, we can expect that these grooves will occur in all three simulations. 
Only the design of these grooves varies depending on the additional optimisation requirements. What conclusions can be drawn from these findings for the selected plant models? Since plants as biological structures are generally the result of multifunctional requirements and, moreover, can only respond or adapt within the framework of their respective bauplan, the influence of the shape of a structure on the overall performance in terms of flexural and torsional rigidity cannot be derived from the plant models. With the simulations presented here, this assignment is possible for the first time, although the twist-to-bend ratio is clearly a measure for a compromise of various mechanical functions. Possible deviations of plant axes from the optimised shape are indications for further functions that are vital for the survival of the respective plant species. Precisely for this reason and because the twist-to-bend ratio is a dimensionless parameter, it is particularly suitable for comparing biological structures not only with each other, but also with technical structures. Experimental investigations on the petiole of the banana leaf have shown a twist-to-bend ratio ranging from 40 to 10011,32. Analogous to the various phase field shapes found in the simulation of "U-shapes", the banana petiole also displays various cross-sectional shapes along its longitudinal axis and thus a change in mechanical functionality. In addition to this spatial resolution based on the various cross-sectional shapes, a difference exists between the theoretically achievable maximum value of \({D}_{{\rm{\min }}}/{D}_{z}\approx 20\), as determined in the simulation "U-shapes" and purely resulting from the respective shape, and the values determined experimentally. This difference can only be explained by the special inner structure of the petiole. 
The fact that the banana petiole is up to 100 times stiffer in bending than in torsion represents a selective advantage with regard to the alignment of the leaf blade to sunlight in the sense of efficient photosynthesis and simultaneously avoids damage to the leaf blade, as the leaves are streamlined under wind load. In contrast to the banana leaf, which represents a spatial resolution of various cross-sectional shapes, the two selected liana species have a temporal resolution of the different cross-sectional shapes as a function of ontogenetic development from the young and old ontogenetic stages. First of all, the various stages differ mainly in their mechanical properties: young stages are self-supporting and are stiff "searchers", whereas the older stages are safely attached to the host support and are non-self-supporting and characterised by high flexural and torsional flexibility. The reduction in flexural and torsional rigidity takes place via rapid transitions from dense stiff wood built in the early stages to less-dense flexible wood developed in the older stages. Later shifts in development include the change in the cross-sectional shape by the formation of woody lobes and resulting grooves as described in simulations "Deep grooves" and "Ribbons". Specifically, the bending modulus E of C. guianense axes decreases from a mean of 2722 MPa during early ontogenetic stages to a mean of 306 MPa during older stages, whereas the percentage contribution of the wide-lumen wood to the cross-sectional area increases from 0 to 30%16,35,36. From the simulation of "Deep grooves", we learn that the minimisation of the torsional rigidity of C. guianense axes with almost constant bending rigidity is controlled by the increasing lobation of the cross-sectional shape of the wood. We can conclude from this observation that additional flexural flexibility is controlled by the formation of flexible lianoid wood having other material properties. Similar to C. 
guianense, changes in the cross-sectional shape and mechanical behaviour of B. guianensis stems are linked to the ontogenetic stage of the plant. Early stages with a circular cross-section producing dense stiff wood are 2–3 metres long and occur as self-supporting "searchers" that can bridge the gap to potential host supports or self-supporting young saplings39. As soon as the stem is attached to a supporting tree, rapid transitions to compliant wood take place. Interestingly, cambial growth is highly modified, producing a ribbon-shaped stem attributable to the formation of lianoid wood on two opposite sides of the young circular stem, whose cross-section thereby changes into an elliptical shape39. During the transition from wood built in young stages to wood built in older stages, the bending modulus E decreases from 24 GPa to 3.75 GPa and the torsional modulus G decreases from 0.91 GPa to 0.42 GPa16. The simulation of "Ribbons" shows that the torsional flexibility at almost constant minimal flexural rigidity (\({D}_{{\rm{\min }}}/{D}_{z}\)) is controlled by the shape change from circular to elliptical and the additional formation of one groove at the centre of the major axis. The above-mentioned rapid transition from one stage to the other is mirrored in the phase field simulation of "Ribbons" in which a relatively short pseudo-time is required to optimise the twist-to-bend ratio (\(t\approx 0.2\)) for reaching the lowest value \({D}_{{\rm{\min }}}/{D}_{z}\approx 0.1\). This is different when the twist-to-bend ratio \({D}_{{\rm{\max }}}/{D}_{z}\) is considered. Here, the flexural rigidity can be 6 times as high as the torsional rigidity. The use of gradient flow functions in the form of phase field simulations has proved to be a novel and appropriate approach that helps us to understand optimisation processes during evolution and ontogeny within biology. 
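The combined effect of material (E, G) and cross-sectional shape (second moment of area, torsion constant) on the twist-to-bend ratio can be sketched with standard beam-theory formulas for a solid ellipse. This is a rough illustration of the principle under strong assumptions that are mine, not the authors': a homogeneous, isotropic cross-section, and an idealised 2:1 ellipse standing in for the ribbon shape. It uses the moduli for B. guianensis reported above:

```python
import math

def twist_to_bend_ellipse(E, G, a, b):
    """Twist-to-bend ratio D_min/D_z of a solid elliptical cross-section.

    E, G -- elastic and shear moduli (same units)
    a, b -- semi-axes with a >= b
    Standard formulas: I_min = pi*a*b^3/4 (minimal second moment of area),
    Saint-Venant torsion constant J = pi*a^3*b^3 / (a^2 + b^2).
    """
    I_min = math.pi * a * b**3 / 4
    J = math.pi * a**3 * b**3 / (a**2 + b**2)
    return (E * I_min) / (G * J)

# Young stem: circular cross-section, stiff wood (E = 24 GPa, G = 0.91 GPa)
young = twist_to_bend_ellipse(24.0, 0.91, a=1.0, b=1.0)

# Old stem: compliant wood (E = 3.75 GPa, G = 0.42 GPa) and an assumed
# 2:1 elliptical ribbon shape
old = twist_to_bend_ellipse(3.75, 0.42, a=2.0, b=1.0)

print(f"young (circle):  D_min/D_z ~ {young:.1f}")  # ~13.2
print(f"old (ellipse):   D_min/D_z ~ {old:.1f}")    # ~2.8
```

Even in this crude sketch, the shift from a stiff circular stem to a compliant elliptical ribbon lowers the minimal twist-to-bend ratio by a factor of almost five, in qualitative agreement with the decreasing \({D}_{{\rm{\min }}}/{D}_{z}\) trend of the "Ribbons" simulation; the grooves of the real cross-sections would reduce it further.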
In the framework of this study, the gradient flow has been used to illustrate the fastest/largest possible changes in rigidity with the smallest possible change in the cross-sectional shape of the load-bearing structures. A comparison with selected plant species suggests that evolution also follows this principle, as small changes in cross-sectional shape are "easy to implement" at little "costs", but still offer a large selective advantage. This approach can probably also be used to aid our understanding of other evolutionary or ontogenetic optimisation processes. This work does not have any experimental data. The shape-optimisation C++-code is made available as supplementary material. Lambers, H., Chapin III, F. S. & Pons, T. L. Plant physiological ecology, 4–6 (Springer Science & Business Media, 2008). Horn, R., Gantner, J., Widmer, L., Sedlbauer, K. P. & Speck, O. Bio-inspired sustainability assessment–a conceptual framework. In Knippers, J., Nickel, K. & Speck, T. (eds) Biomimetic research for architecture and building construction, 361–377 (Springer, 2016). Fratzl, P. & Weinkamer, R. Nature's hierarchical materials. Prog. Mater. Sci. 52, 1263–1334, https://doi.org/10.1016/j.pmatsci.2007.06.001 (2007). VDI. Bionik: Bionische Materialien, Strukturen und Bauteile; Biomimetics: Biomimetic materials, structures and components. VDI 6223 (2013). Wegst, U. G., Bai, H., Saiz, E., Tomsia, A. P. & Ritchie, R. O. Bioinspired structural materials. Nat. Mater. 14, 23, https://doi.org/10.1038/NMAT4089 (2015). Speck, T. & Speck, O. Emergence in biomimetic materials systems. In Wegner, L. H. & Lüttge, U. (eds) Emergence and modularity in life sciences, 97–115, https://doi.org/10.1007/978-3-030-06128-9_5 (Springer, 2019). Vogel, S. Twist-to-bend ratios and cross-sectional shapes of petioles and stems. J. Exp. Bot. 43, 1527–1532, https://doi.org/10.1093/jxb/43.11.1527 (1992). Vogel, S. Twist-to-bend ratios of woody structures. J. Exp. Bot. 
46, 981–985, https://doi.org/10.1093/jxb/46.8.981 (1995). Vogel, S. Living in a physical world xi. to twist or bend when stressed. J. Biosci. 32, 643–655 (2007). Etnier, S. A. Twisting and bending of biological beams: distribution of biological beams in a stiffness mechanospace. The Biol. Bull. 205, 36–46, https://doi.org/10.2307/1543443 (2003). Ennos, A. R., Spatz, H. & Speck, T. The functional morphology of the petioles of the banana, Musa textilis. J Exp Bot 51, 2085–2093, https://doi.org/10.1093/jexbot/51.353.2085 (2000). Louf, J.-F. et al. How wind drives the correlation between leaf shape and mechanical properties. Sci. Reports 8, 16314, https://doi.org/10.1038/s41598-018-34588-0 (2018). Etnier, S. A. & Vogel, S. Reorientation of daffodil (Narcissus: Amaryllidaceae) flowers in wind: drag reduction and torsional flexibility. Am. J. Bot. 87, 29–32, https://doi.org/10.2307/2656682 (2000). Kaminski, R., Speck, T. & Speck, O. Adaptive spatiotemporal changes in morphology, anatomy, and mechanics during the ontogeny of subshrubs with square-shaped stems. Am. J. Bot. 104, 1157–1167, https://doi.org/10.3732/ajb.1700110 (2017). Ennos, A. R. The mechanics of the flower stem of the sedge Carex acutiformis. Annals Bot 72, 123–127, https://doi.org/10.1006/anbo.1993.1089 (1993). Hoffmann, B., Chabbert, B., Monties, B. & Speck, T. Mechanical, chemical and x-ray analysis of wood in the two tropical lianas Bauhinia guianensis and Condylocarpon guianense: variations during ontogeny. Planta 217, 32–40, https://doi.org/10.1007/s00425-002-0967-2 (2003). Ashby, M. Overview no. 92: materials and shape. Acta metallurgica et materialia 39, 1025–1039 (1991). Ashby, M. & Bréchet, Y. Designing hybrid materials. Acta materialia 51, 5801–5821 (2003). Estrin, Y., Beygelzimer, Y. & Kulagin, R. Design of architectured materials based on mechanically driven structural and compositional patterning. Adv. Eng. Mater. 1900487 (2019). Estrin, Y., Bréchet, Y., Dunlop, J. & Fratzl, P. 
Architectured Materials in Nature and Engineering (Springer, 2019). Mora, M. G. & Müller, S. Derivation of the nonlinear bending-torsion theory for inextensible rods by Γ-convergence. Calc. Var. Partial. Differ. Equations 18, 287–305, https://doi.org/10.1007/s00526-003-0204-2 (2003). Timoshenko, S. P. & Gere, J. M. Theory of elastic stability (Courier Corporation, 2009). Kim, Y. Y. & Kim, T. S. Topology optimization of beam cross sections. Int. J. Solids Struct. 37, 477–493, https://doi.org/10.1016/S0020-7683(99)00015-3 (2000). Pólya, G. Torsional rigidity, principal frequency, electrostatic capacity and symmetrization. Q. Appl. Math. 6, 267–277 (1948). Makai, E. A proof of Saint-Venant's theorem on torsional rigidity. Acta Math. Hungarica 17, 419–422 (1966). Blank, L. et al. Phase-field approaches to structural topology optimization. In Constrained optimization and optimal control for partial differential equations, 245–256, https://doi.org/10.1007/978-3-0348-0133-1_13 (Springer, Basel, 2012). Modica, L. & Mortola, S. Un esempio di γ-convergenza. Boll Unione Mat. Ital. Sez. B 14, 285–299 (1977). Modica, L. The gradient theory of phase transitions and the minimal interface criterion. Arch. for Ration. Mech. Analysis 98, 123–142, https://doi.org/10.1007/BF00251230 (1987). Niklas, K. J. Plant biomechanics: an engineering approach to plant form and function (University of Chicago Press, 1992). Niklas, K. J. A mechanical perspective on foliage leaf form and function. The New Phytol. 143, 19–31 (1999). Vogel, S. Drag and reconfiguration of broad leaves in high winds. J. Exp. Bot. 40, 941–948 (1989). Ahlquist, S., Kampowski, T., Torghabehi, O. O., Menges, A. & Speck, T. Development of a digital framework for the computation of complex material and morphological behavior of biological and technological systems. Comput. Des. 
60, 84–104, https://doi.org/10.1016/j.cad.2014.01.013 (2015). Mattheck, C. Thinking tools after nature (Karlsruher Inst. of Technology-Campus North, 2011). Mattheck, C., Kappel, R., Bethge, K. & Kraft, O. Lernen vom Bananenblatt - der verrammelte Notausgang. Konstruktionspraxis spezial, Novemb. 50–52 (2005). Rowe, N., Isnard, S. & Speck, T. Diversity of mechanical architectures in climbing plants: an evolutionary perspective. J. Plant Growth Regul. 23, 108–128 (2004). Rowe, N. P. & Speck, T. Biomechanical characteristics of the ontogeny and growth habit of the tropical liana Condylocarpon guianense (Apocynaceae). Int. J. Plant Sci. 157, 406–417 (1996). Speck, T. & Rowe, N. P. A quantitative approach for analytically defining size, growth form and habit in living and fossil plants. In Kurmann, M. H. & Hemsley, A. R. (eds) The evolution of plant architecture, 447–479 (Royal Botanic Gardens Kew, 1999). Speck, T. et al. The potential of plant biomechanics in functional biology and systematics. In Stuessy, T. F., Mayer, V. & Hörandl, E. (eds) Deep morphology: Toward a renaissance of morphology in plant systematics, 241–271 (Koeltz, Königstein, 2004). Rowe, N. & Speck, T. Plant growth forms: an ecological and evolutionary perspective. New Phytol. 166, 61–72, https://doi.org/10.1111/j.1469-8137.2004.01309.x (2005). P.D. acknowledges partial support by the German Scholars Organization/Carl-Zeiss-Stiftung in the form of the "Wissenschaftler-Rückkehrprogramm". M.L. was funded by the German Research Foundation within the CRC-Transregio 141 and by the Ministry of Science, Research and the Arts Baden-Württemberg within the framework of "BioElast". T.S. and O.S. acknowledge the support of the German Research Foundation within the Cluster of Excellence "livMatS". Our thanks are also extended to Dr. R. Theresa Jones for improving the English. Department of Applied Mathematics, University of Freiburg, Hermann-Herder-Str. 
10, D-79104, Freiburg, Germany Steve Wolff-Vorbeck & Patrick Dondl Plant Biomechanics Group, Botanic Garden, Faculty of Biology, University of Freiburg, Schänzlestraße 1, D-79104, Freiburg, Germany Max Langer, Olga Speck & Thomas Speck Cluster of Excellence livMatS @ FIT – Freiburg Center for Interactive Materials and Bioinspired Technologies, University of Freiburg, Georges-Köhler-Allee 105, D-79110, Freiburg, Germany Olga Speck & Thomas Speck Freiburg Materials Research Center (FMF), University of Freiburg, Stefan-Meier-Str. 21, D-79104, Freiburg, Germany Thomas Speck S.W.-V. conducted the mathematical experiments and wrote the first draft of the manuscript. M.L. contributed to the description of the model plants and the compilation of the diagrams and wrote the first draft of the manuscript. O.S. initiated the study and contributed to the improvement of the first draft of the manuscript. T.S. and P.D. initiated the study. All authors contributed to the data interpretation and reviewed and improved the final draft of the manuscript. Correspondence to Patrick Dondl. Supplement 1 – Appendices Supplement 2 – Numerical Program Code Wolff-Vorbeck, S., Langer, M., Speck, O. et al. Twist-to-bend ratio: an important selective factor for many rod-shaped biological structures. Sci Rep 9, 17182 (2019). https://doi.org/10.1038/s41598-019-52878-z
Assistant Professor Position at LMU Munich (MCMP) The Chair of Philosophy of Science at the Faculty of Philosophy, Philosophy of Science and the Study of Religion and the Munich Center for Mathematical Philosophy (MCMP, http://www.lmu.de/mcmp) at LMU Munich seek applications for an Assistant Professorship with a specialization in (at least) one of the following areas: Philosophy of Psychology, Philosophy of Social Science, Philosophy of Economics, and Philosophy of Neuroscience. The position is for three years with the possibility of extension for another three years. Note that there is no tenure-track option. The appointment will be made within the German A13 salary scheme (under the assumption that the civil service requirements are met), which means that one has the rights and perks of a civil servant. The starting date is October 1, 2014. A later starting date is also possible. The appointee will be expected (i) to do philosophical research and to lead a research group in her or his field, (ii) to teach five hours a week in at least one of the above-mentioned fields and/or a related field, and (iii) to take on some management tasks. The successful candidate will have a PhD in philosophy and some teaching experience. Applications (including a cover letter that addresses, amongst others, one's academic background and research interests, a CV, a list of publications, a list of taught courses, a sample of written work of no more than 5000 words, and a description of a planned research project of 1000-1500 words) should be sent by email (ideally everything requested in one PDF document) to [email protected] by November 20, 2013. Hard copy applications are not possible. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent to the same address from the referees directly. For further information, please contact Professor Stephan Hartmann ([email protected]). 
Published by Unknown at 7:56 am The Mathematics of Dutch Book Arguments Dutch Book arguments purport to establish norms that govern credences (that is, numerically precise degrees of belief). For instance, the original Dutch Book argument due to Ramsey and de Finetti aims to establish Probabilism, the norm that says that an agent's credences ought to obey the axioms of mathematical probability. And David Lewis' diachronic Dutch Book argument aims to establish Conditionalization, the norm that says that an agent ought to plan to update in the light of new evidence by conditioning on it. As we will see in this post, there is also a Dutch Book argument for the Principal Principle, the norm that says that an agent ought to defer to the chances when she sets her credences. We'll look at each of these arguments below. Each argument consists of three premises. The second is always a mathematical theorem (sometimes known as the conjunction of the Dutch Book Theorem and the Converse Dutch Book Theorem). My aim in this post is to present a particularly powerful way of thinking about the mathematics of these theorems. It is due to de Finetti. It is appealing for a number of reasons: it is geometrical, so we can illustrate the theorems visually; it is uniform across the three different Dutch Book arguments we will consider here; and it establishes both Dutch Book Theorem and Converse Dutch Book Theorem on the basis of the same piece of mathematics. I won't assume much mathematics in this post. A passing acquaintance with vectors in Euclidean space might help, but it certainly isn't a prerequisite. The form of a Dutch Book argument The three premises of a Dutch Book argument for a particular norm $N$ are as follows: (1) An account of the sorts of decisions a given set of credences will (or should) lead an agent to make. 
(2) A mathematical theorem showing two things: (i) relative to (1), credences that violate norm $N$ will lead an agent to make decisions with property $C$; (ii) relative to (1), credences that satisfy norm $N$ in question will not lead an agent to make decisions with this property $C$. (3) A norm of practical rationality that says that, if an agent can avoid making decisions with property $C$, she is irrational if she does make such a decision. In this post, I'll present Dutch Book arguments of this form for Probabilism, Conditionalization, and the Principal Principle. But I'll be focussing on premise (2) in each case. There's plenty to say about premises (1) and (3), of course. But that's for another time. The Dutch Book argument for Probabilism The first premise in each Dutch Book argument is the same. It has two parts: the first tells us, for any proposition in which the agent has a credence, the fair price she ought to pay for a bet on that proposition; the second tells us the price she ought to pay for a book of bets on a number of different propositions given the price she's prepared to pay for each individual bet. Thus, we have (1a) If an agent has credence $p$ in proposition $X$, she ought to pay $pS$ for a bet that pays out $S$ if $X$ is true and $0$ if $X$ is false. (In such a bet, $S$ is called the stake.) (1b) If an agent ought to pay $X$ for Bet 1 and $Y$ for Bet 2, she ought to pay $X+Y$ for a book consisting of Bet 1 and Bet 2. (This is sometimes called the Package Principle.) Putting these together, we get the following: Suppose $\mathcal{F} = \{X_1, \ldots, X_n\}$ is a set of propositions. And suppose we represent our agent's credences in these $n$ propositions by a vector \[ c = (c_1, \ldots, c_n) \] where $c_i$ is her credence in $X_i$. And suppose we consider a book of bets $S$ in which the stake on $X_i$ is $S_i$. 
Then we can represent this book by the vector \[ S = (S_1, \ldots, S_n) \] Then the price that the agent ought to pay for this book of bets is \[ \sum^n_{i=1} S_ic_i := (S_1, \ldots, S_n) \cdot (c_1, \ldots, c_n) = S\cdot c \] where $S\cdot c$ is the dot product of $c$ and $S$ considered as vectors. Happily, there is also a nice way to represent the payoff of a book of bets $S$ at a given possible world $w$. Represent that possible world $w$ by the following vector: \[ w = (w_1, \ldots, w_n) \] where $w_i = 1$ if $X_i$ is true at $w$ and $w_i = 0$ if $X_i$ is false at $w$. Then the payoff of $S$ at $w$ is \[\sum^n_{i=1} S_iw_i := (S_1, \ldots, S_n) \cdot (w_1, \ldots, w_n) = S\cdot w \] As we will see, these vector representations will prove very useful below. In this section, we're looking at the Dutch Book argument for Probabilism. Probabilism It ought to be that a set of credences $c$ obeys the axioms of mathematical probability. Let us turn to premise (3) of this argument. It says that it is irrational for an agent to have credences that lead her to make decisions that will lose her money in every world that she considers possible. Now, a book of bets loses an agent money if \[\mbox{Payoff} < \mbox{Price}\] But recall from above: the payoff of a book of bets $S$ at a world $w$ is $S \cdot w$; and the price of that book is $S \cdot c$. Thus, the agent is irrational if there is a book $S$ such that \[S \cdot w < S \cdot c\] for all worlds $w$. Equivalently, $S \cdot (w-c) < 0$ for all $w$. So the Dutch Book Theorem (that is, premise (2)) can be stated as follows: Theorem 1 (i) If $c$ violates Probabilism, then there is a book $S$ such that $S \cdot w < S \cdot c$ for all worlds $w$ (equivalently, $S \cdot (w-c) < 0$ for all $w$). 
(ii) If $c$ satisfies Probabilism, then there is no book $S$ such that $S \cdot w \leq S \cdot c$ (equivalently, $S\cdot (w-c) \leq 0$) for all worlds $w$ and $S \cdot w < S \cdot c$ (equivalently, $S \cdot (w-c) < 0$) for some world $w$. We now turn to the proof of this theorem. It is based on two pieces of mathematics: the first involves some basic geometrical facts about the dot product; the second involves a neat geometric characterization of the credences that satisfy Probabilism. First, a well-known fact about the dot product. If $u$ and $v$ are vectors in $\mathbb{R}^n$, we have \[ u \cdot v = ||u||\, ||v|| \cos \theta\] where $\theta$ is the angle between $u$ and $v$. Since $||u||\, ||v|| \geq 0$, we have \[u\cdot v < 0 \Leftrightarrow \cos \theta < 0\] And, by basic trigonometry, we have \[u \cdot v < 0 \Leftrightarrow \frac{\pi}{2} < \theta < \frac{3\pi}{2}\] Thus: To prove Theorem 1(i), it suffices to show that, if $c$ violates Probabilism, we can find a vector $S$ such that the angle between $S$ and $w-c$ is obtuse for all worlds $w$. To prove Theorem 1(ii), it suffices to show that, if $c$ satisfies Probabilism, there is no vector $S$ such that the angle between $S$ and $w-c$ is obtuse or right for all $w$ and obtuse for some $w$. To do this, we need a geometric characterization of the credences that satisfy Probabilism. Fortunately, we have that in the following lemma due to de Finetti: Lemma 1 $c$ satisfies Probabilism iff $c \in \{w : w \mbox{ is a possible world}\}^+$, where, if $\mathcal{X}$ is a set of vectors in $\mathbb{R}^n$, $\mathcal{X}^+$ is the convex hull of $\mathcal{X}$: that is, $\mathcal{X}^+$ is the smallest convex set that includes $\mathcal{X}$; if $\mathcal{X}$ is finite, then $\mathcal{X}^+$ is the set of convex combinations of elements of $\mathcal{X}$. 
Thus, Lemma 1 says that the vectors that represent the probabilistic sets of credences are precisely those that belong to the convex hull of the vectors that represent the possible worlds. How does this help? Let's take the case in which $c$ violates Probabilism. That is, $c$ lies outside the convex hull of the vectors representing the different possible worlds. Then it is easy to see from Figure 1 below that there is a vector $c^*$ that lies inside that convex hull such that, for every world $w$, the angle $\theta$ between the vector $c-c^*$ and the vector $w-c$ is oblique. Thus, if we let $S = c - c^*$, we have Theorem 1(i). Figure 1: The oval represents the convex hull of the set of vectors that represent the different possible worlds. If $c$ violates Probabilism, then it lies outside this. But, by the Separating Hyperplane Theorem, there is a point $c^*$ in the convex hull such that the angle between $c-c^*$ and $x-c$ is oblique for any $x$ inside the convex hull. Thus, in particular, it is oblique when $x$ is a vector representing a possible world, as required. Now let's take the case in which $c$ satisfies Probabilism. That is, $c$ lies inside the convex hull of the vectors representing the different possible worlds. Then it is easy to see from Figure 2 below that, if $S$ is a vector, then while there may be some worlds $w$ such that the angle $\theta$ between $S$ and $w-c$ is oblique, there must also be some worlds $w'$ such that the angle $\theta'$ between $S$ and $w'-c$ is acute. Alternatively, it is possible that the angles $\theta$ between $S$ and $w-c$ for all worlds $w$ are all right. Figure 2: Again, the oval represents the convex hull of the possible worlds. If $c$ satisfies Probabilism, then it lies inside. This completes the geometrical proof of Theorem 1, which combines the Dutch Book Theorem and the Converse Dutch Book Theorem.
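To see the separating construction in action, here is a small Python sketch of Theorem 1(i) for the hypothetical algebra $\{X, \neg X\}$. The non-probabilistic credences and all variable names are invented for illustration, and the projection formula below is specific to this two-dimensional example, where the convex hull of the worlds is just a line segment.

```python
# Sketch of the Dutch Book construction S = c - c*: the credences below
# sum to more than 1, so c lies outside the convex hull of the worlds,
# and the resulting book loses money at every world.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

w1 = (1.0, 0.0)   # world where X is true
w2 = (0.0, 1.0)   # world where X is false
c = (0.7, 0.6)    # violates Probabilism: 0.7 + 0.6 > 1

# In this example the convex hull of {w1, w2} is the segment x + y = 1,
# so the closest point c* to c is the orthogonal projection onto that line:
t = (c[0] + c[1] - 1.0) / 2.0
c_star = (c[0] - t, c[1] - t)                 # approximately (0.55, 0.45)

S = tuple(a - b for a, b in zip(c, c_star))   # the book S = c - c*

# S . (w - c) is the agent's net gain at world w; it is negative at both:
net_w1 = dot(S, tuple(a - b for a, b in zip(w1, c)))
net_w2 = dot(S, tuple(a - b for a, b in zip(w2, c)))
```

Both net gains come out at $-0.045$: the agent pays $S \cdot c = 0.195$ for a book that pays out $0.15$ whatever happens, a guaranteed loss, just as Theorem 1(i) predicts.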
The Dutch Book Argument for the Principal Principle The Principal Principle says, roughly, that an agent ought to defer to the chances when she sets her credences. One natural formulation of this (explicitly proposed by Jenann Ismael and entailed by a slightly stronger formulation proposed by David Lewis) is this: Principal Principle: It ought to be the case that $c$ is in $\{ch : ch \mbox{ is a possible chance function}\}^+$. That is, the Principal Principle says that one's credence function ought to be a convex combination of the possible chance functions. Now, adapting the proof of Theorem 1 above, replacing the possible worlds $w$ by possible chance functions $ch$ (represented as vectors in the natural way), we easily prove the following: Theorem 2 (i) If $c$ violates the Principal Principle, then there is a book $S$ such that $S \cdot ch < S \cdot c$ for all possible chance functions $ch$. (ii) If $c$ satisfies the Principal Principle, then there is no book $S$ such that $S \cdot ch \leq S \cdot c$ for all possible chance functions $ch$ and $S \cdot ch < S \cdot c$ for some possible chance function $ch$. But what does this tell us? Well, as before, $S \cdot c$ is the price our agent would pay for the book $S$. But this time, the other side of the inequality is $S\cdot ch$. And this, it turns out, is the objective expected payout of $S$, rather than the actual payout of $S$. Thus, violating the Principal Principle does not necessarily make an agent vulnerable to a true Dutch Book. But it does lead them to pay a price for a book of bets that is higher than the objective expected value of that book, according to all of the possible chance functions. And this, we might think, is irrational. For one thing, such an agent will, with objective chance 1, lose money in the long run.
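The analogous construction for Theorem 2(i) can be sketched numerically as well. The two possible chance functions and the credences below are invented for illustration, and chosen so that the nearest point of their convex hull happens to be an endpoint of the segment, keeping the projection trivial.

```python
# Sketch of the Principal Principle version: c is probabilistic but lies
# outside the convex hull of the possible chance functions, and the book
# S = c - c* costs more than its objective expected payout under every chance.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

ch1 = (0.4, 0.6)   # the two possible chance functions over (X, not-X)
ch2 = (0.6, 0.4)
c = (0.8, 0.2)     # probabilistic, but outside the hull of {ch1, ch2}

# For these numbers the closest point of the segment from ch1 to ch2 is
# the endpoint ch2 itself, so take c* = ch2:
S = tuple(a - b for a, b in zip(c, ch2))   # (0.2, -0.2)

price = dot(S, c)            # what the agent pays: 0.12
exp_payout_1 = dot(S, ch1)   # objective expectation under ch1: -0.04
exp_payout_2 = dot(S, ch2)   # objective expectation under ch2:  0.04
```

Under every possible chance function, the objective expected payout falls short of the price $0.12$, even though no single world need guarantee a loss, which is exactly the weaker kind of vulnerability the argument trades on.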
Thus, in the Dutch Book argument for the Principal Principle, premise (1) is as before, premise (2) is Theorem 2, but premise (3) becomes the following: It is irrational for an agent to have credences that lead her to pay more than the objective expected value for a book of bets. The Dutch Book Argument for Conditionalization Conditionalization is the following norm: Conditionalization: Suppose our agent has credence $c$ at $t$; and suppose she knows that, by $t'$, she will have received evidence from the partition $E_1, \ldots, E_m$. And suppose she plans to update as follows: If $E_i$, then $c_i$. Then it ought to be that $c_i(-) = c(-|E_i)$ for $i = 1, \ldots, m$. In fact, the Dutch Book argument for Conditionalization that we will present is primarily a Dutch Book argument for van Fraassen's Reflection Principle, which is equivalent to Conditionalization. The Reflection Principle says the following: Reflection Principle: Suppose our agent has credence $c$ at $t$; and suppose she knows that, by $t'$, she will have received evidence from the partition $E_1, \ldots, E_m$. And suppose she plans to update as follows: If $E_i$, then $c_i$. Then it ought to be that: (i) $c_i(E_i) = 1$ for $i = 1, \ldots, m$; (ii) $c$ is in $\{c_i : i = 1, \ldots, m\}^+$. That is, Reflection says that an agent's current credences ought to be a mixture of her planned future credences. Since Reflection and Conditionalization are equivalent, it suffices to establish Reflection. Here is the theorem that provides the second premise of the Dutch Book argument for Reflection: Theorem 3 (i) Suppose $c, c_1, \ldots, c_m$ violate Reflection. Then there are books $S, S_1, \ldots, S_m$ such that (a) for all $i = 1, \ldots, m$, \[ S \cdot (w - c) + S_i \cdot (w - c_i) \leq 0 \] for all worlds $w$ in $E_i$; and (b) for some $i = 1, \ldots, m$, \[ S \cdot (w - c) + S_i \cdot (w - c_i) < 0 \] for some world $w$ in $E_i$. (ii) Suppose $c, c_1, \ldots, c_m$ satisfy Reflection.
Then there are no books $S, S_1, \ldots, S_m$ such that (a) for all $i = 1, \ldots, m$, \[S \cdot (w- c) + S_i\cdot (w-c_i) \leq 0\] for all worlds $w$ in $E_i$; and (b) there is $i = 1, \ldots, m$ such that \[S \cdot (w- c) + S_i\cdot (w-c_i) < 0 \] for some $w$ in $E_i$. What does this say? It says that, if you plan to update in some way other than conditioning on your evidence, and thereby violate Reflection, there is a book $S$ that you will accept at $t$ as well as, for each $E_i$, a book $S_i$ that you will accept at $t'$ if you learn $E_i$ such that, together, they will guarantee you a loss. And this will not happen if you plan to update by conditioning. How do we prove this? Theorem 3(i) is the easier to prove. Suppose $c, c_1, \ldots, c_m$ violate Reflection. First, suppose that this is because $c_i(E_i) < 1$ for some $i$. Then let $S = 0$ and $S_j = 0$ for all $j \neq i$. And let $S_i$ be the book consisting only of a bet on $E_i$ with stake $-1$. Then \[ S \cdot (w-c) + S_i \cdot (w-c_i) = (-1)(1 - c_i(E_i)) < 0\] for all worlds $w$ in $E_i$. And \[ S \cdot (w-c) + S_j \cdot (w-c_j) = 0\] for all worlds $w$ in $E_j$ with $j \neq i$. Second, suppose that $c_i(E_i) = 1$ for all $i = 1, \ldots, m$. But suppose $c$ is not inside the convex hull of the $c_i$s. So $c, c_1, \ldots, c_m$ violate Reflection. Then, adapting the proof of Theorem 1 by replacing the worlds $w$ with the planned posterior credences $c_i$, we get that there is a book $S$ such that \[ S \cdot (c_i - c) < 0\] for all $i = 1, \ldots, m$. So if we let $S_i = -S$ for all $i = 1, \ldots, m$, we get \[ 0 > S \cdot (c_i - c) = S \cdot (w-c) + (-S)\cdot (w-c_i) = S \cdot (w-c) + S_i \cdot (w-c_i) \] for all worlds $w$. This completes the proof of Theorem 3(i). Now we turn to Theorem 3(ii). Suppose $c, c_1, \ldots, c_m$ satisfy Reflection.
Suppose, for a contradiction, that we have (a) for all $i = 1, \ldots, m$, \[S \cdot(w-c) + S_i \cdot(w-c_i) \leq 0 \] for all $w$ in $E_i$; and (b) for some $i = 1, \ldots, m$, \[S \cdot(w-c) + S_i \cdot(w-c_i) < 0 \] for some $w$ in $E_i$. Our plan is to use this to construct $S'$ such that (a) for all $i = 1, \ldots, m$, \[S' \cdot(w-c) \leq 0\] for all $w$ in $E_i$; and (b) for some $i = 1, \ldots, m$, \[S' \cdot(w-c) < 0\] for some $w$ in $E_i$. And we know that this is impossible from Theorem 1(ii). We construct $S'$ as follows: First, suppose that $X_1, \ldots, X_k$ are the atoms of the algebra $\mathcal{F} = \{X_1, \ldots, X_k, \ldots, X_n\}$. Then notice that for each book of bets \[S = (S_1, \ldots, S_n)\] on the propositions $X_1, \ldots, X_n$, there is a book \[S^A = (S^A_1, \ldots, S^A_k, 0, \ldots, 0)\] on the atoms $X_1, \ldots, X_k$ of $\mathcal{F}$ such that $S^A$ is equivalent to $S$: that is, the payout of $S$ is the same as the payout of $S^A$ at every world; and the price that a probabilistic agent should pay for $S^A$ is exactly the price she should pay for $S$. Thus, if we have $S \cdot(w-c) + S_i \cdot(w-c_i) \leq 0$, then we have $S^A \cdot(w-c) + S^A_i \cdot(w-c_i) \leq 0$, and so on. Thus, in what follows, we can assume without loss of generality that $S$ and each $S_i$ are books of bets only on the atoms of $\mathcal{F}$. Then we define $S'$ as follows, where for any atom $X_j$, we write $E_{i_j}$ for the cell of the partition in which $X_j$ lies: \[ S'(X_j) := S(X_j) + S_{i_j}(X_j) - \sum_{X_k \in E_{i_j}} c(X_k | E_{i_j}) S_{i_j}(X_k) \] Then we can show that \[ S'\cdot(w - c) = S\cdot(w - c) + S_i\cdot (w - c_i) \] for all $i = 1, \ldots, m$ and all worlds $w \in E_i$. Suppose $w$ is a world; suppose $X_j$ is the atom that is true at that world; suppose, as above, that $X_j$ lies in cell $E_{i_j}$.
Then we have \begin{eqnarray*} S'\cdot(w - c) & = & S(X_j) + S_{i_j}(X_j) - \sum_{X_k \in E_{i_j}} c(X_k | E_{i_j}) S_{i_j}(X_k) \\ & & \ \ \ - \sum_{X_l} c(X_l) [S(X_l) + S_{i_l}(X_l) - \sum_{X_k \in E_{i_l}} c(X_k | E_{i_l}) S_{i_l}(X_k)]\\ & = & S \cdot w + S_{i_j}\cdot w - S_{i_j} \cdot c_{i_j} - S \cdot c \\ & & \ \ \ - \sum_{X_l} c(X_l) [S_{i_l}(X_l) - \sum_{X_k \in E_{i_l}} c(X_k | E_{i_l}) S_{i_l}(X_k)]\\ & = & S \cdot (w - c) + S_{i_j}\cdot (w - c_{i_j}) \\ & & \ \ \ - \sum_{X_l} c(X_l) [S_{i_l}(X_l) - \sum_{X_k \in E_{i_l}} c(X_k | E_{i_l})S_{i_l}(X_k)]\\ & = & S \cdot (w - c) + S_{i_j}\cdot (w - c_{i_j}) \end{eqnarray*} where the final step holds because the remaining sum vanishes: grouping it by the cells of the partition and using $c(E_{i_l})\, c(X_k | E_{i_l}) = c(X_k)$, each cell contributes $0$. By assumption, the right-hand side is at most $0$ for every world $w$ and strictly negative for some world $w$, so $S'$ satisfies (a) and (b), contradicting Theorem 1(ii). This completes the proof of Theorem 3(ii) and thus Theorem 3. Published by Richard Pettigrew at 2:52 pm 4 comments: Website: The Network for Philosophy of Mathematics in Europe Paula Quinon (University of Lund, Sweden), Jan Heylen (KU Leuven, Belgium), and Fredrik Engström (University of Gothenburg, Sweden) have produced a wonderful new website listing people and activities in the philosophy of mathematics in Europe. http://www.philmath-europe.org/ It includes lists of blogs on topics in philosophy of mathematics, scholarly associations, papers in the subject, researchers in the area, and jobs with philosophy of mathematics as an AOS. It's quick and easy to register yourself as a researcher so that you appear on the list. Thanks to Paula, Jan, and Fredrik for such a valuable service to the profession. Published by Richard Pettigrew at 10:04 am No comments: The Space of Languages From time to time, I post things on the concept of language. What got me thinking about this originally was the modal status of T-sentences, and I gave a few talks on it over the last six or seven years, including a talk "Cognizing a Language" at the linguistics society in Edinburgh. Since semantic concepts are language-dependent ("true-in-L", "refers-in-L", "implies-in-L", etc.), there's a quick argument that T-sentences are necessities.
But this point is by no means restricted to T-sentences. The same holds when we consider any description of the syntactic, phonological, semantic, pragmatic properties of a language L. And, consequently, we need to distinguish: (i) the syntactic, phonological, semantic, pragmatic properties of a language L (ii) the cognizing relation that holds between an agent A and a language/idiolect L that A speaks, implements, realizes, etc. because the modal status of the relevant facts is entirely different. (The orthodox view is that semantic facts are contingent. See, e.g., here.) The basic argument was given by Field (1986) and Putnam (1985): for Putnam, in particular, applying modus tollens, it was some kind of reductio of Tarskian semantic theory that it yields the conclusion that semantic facts are necessities. But I apply modus ponens: they are necessities. What is contingent is the cognizing relation between agents and languages. Analogously, the properties of some Turing machine program are necessary, while it is contingent whether some physical machine "realizes" or "implements" that program. A community $G$ of agents will, in general, all cognize different idiolects, $L_1, L_2, \dots$, even if they are very similar to each other. The point is that, strictly speaking, $L_i \neq L_j$ (for $i \neq j$). And a single agent may cognize multiple idiolects, or "micro-idiolects", which may be changing all the time. So, Humpty Dumptyism is true (... despite the protests of many philosophers!). On this view, suppose we define $\Omega$ to be the space of all languages. So, $\Omega$ = the collection of all $L$ such that $L$ is a language. There may well be a Russell-style paradox, connected to largeness and self-reference, lurking here; maybe I'll mention it at some later point. So, $\Omega$ might have to be a somehow regulated space; e.g., the space of all set-sized, or maybe well-founded, languages. 
$\Omega$ contains the idiolects spoken by each and every cognitive system, each human, old and young, each non-human creature, any non-terrestrial creature, or cognitive system there might be; as well as all the languages that, for feasibility reasons, cannot be spoken/cognized. $\Omega$ contains the idiolect you speak right now, and all other idiolects you may have spoken as your language state evolved to its present one. It contains all theoretically defined languages, finite and infinite, etc. It contains uninterpreted languages and it contains interpreted languages. It contains the Guitar Language, which is an odd language with no syntax at all. [If so-called natural languages are languages (I think they aren't), then $\Omega$ contains all natural languages. But I think "natural languages" are idealized entities of some sort, as there is no individual that actually speaks or cognizes such a language. Strictly speaking, so-called natural languages, such as English, French, Hindi, etc., do not exist, in the sense of there being a community all speaking the same language. For example, what is the exact number of words in English? What is the exact pronunciation of "ouch"? Speakers exist and so do their idiolects, which may be changing in very complicated ways. But the concept of a natural language seems to be some kind of Hegelian myth, akin to "races".] If what is said above is right (... I am plowing a very lonely furrow here), the sub-discipline within the philosophy of language that's now called "metasemantics" then has two main tasks. But these have a fundamentally different character modally and scientifically: (i) What are the properties of the space $\Omega$ of languages? What are the individuation conditions for the elements $L \in \Omega$; what are the various relations amongst the $L \in \Omega$; etc. (ii) How does the cognitive state of an agent evolve through $\Omega$? 
What is the nature of the cognizing relation, "$A$ cognizes $L$ (at time $t$)", which specifies the language state of a cognitive system $A$? How might it be constrained in terms of other cognitive states (memory, conceptual competence, perceptual input and action output, genetic factors, mental representation of strings and linguistic symbols, etc.)? The first problem belongs to applied mathematics: and this seems to be well reflected by the actual practice of workers in this field. Languages $L \in \Omega$ are specified---usually by an explicit definition of their syntax and sometimes the meaning functions (referential, intensional and pragmatic)---and their properties are examined, usually by proving theorems. The Chomsky Hierarchy is an example, but there are literally countless examples: uninterpreted formal languages; simple propositional languages; predicate logic languages; languages with all kinds of extra gadgets and operators, modal, epistemic, temporal, etc., operators; typed-languages; higher-order ones; infinitary languages; highly finitary languages; languages with no syntax (cf., the Guitar Language); etc.; etc. Let us say that those who work on the first problem are studying $\Omega$, the space of all languages. Modally speaking, the properties of languages $L \in \Omega$ established are essential. Relations amongst languages $L_1, L_2 \in \Omega$ hold of necessity. For example, true claims of the forms "$L^{+}$ is an extension of $L$ such that there is a relation definable in $L^{+}$ but not in $L$", "There is no intension-preserving translation $t: L_1 \to L_2$", and "The string $\sigma$ is true in $L$ if and only if snow is white" will hold of necessity. Some theoretical linguistics, formal semantics, computational linguistics, mathematical logic, etc., belongs to this area: they are studying $\Omega$. Their theorems are about $\Omega$. Their theorems hold of necessity. The semantic description of a language $L \in \Omega$ holds of necessity.
For the semantic properties of $L$ are intrinsic to it. If $L^{\ast}$ has different semantic properties, then $L^{\ast} \neq L$. On the other hand, the question: Does $L \in \Omega$ have one, many or no agents that speak/cognize $L$? is a contingent matter. Compare with, say, the questions: Does the large natural number $10^{10^{10^{10}}}$ have a "physical token"? Is the infinite cardinal $\aleph_0$ "physically realized"? These are contingent matters, requiring physical theory and experiment to help answer them. The second problem, in contrast, belongs to empirical science. But I think the problem(s) here are very difficult, much harder than those confronting the first problem. If we are honest, very little is known about: the genetic basis of language cognition; how a cognitive language-using system evolves to anything like the mature cognitive state; what grounds or constitutes a cognitive system's cognizing $L$ rather than $L^{\ast}$. By analogy with physics, one would like to have some account of a "state-function", $L_A(t) \in \Omega$, which specifies how the language-cognizing state evolves, over time, through successive idiolects, and in connection with other states of the system. (Cf., in physics, the state of the system is an element of a state space, and the dynamical principles specify its evolution.) The second problem uses (contingent) notions like: $A$ "uses" string $\sigma$ when in a certain cognitive/affective state; $A$ and $B$ "communicate" with each other; $A$ "acquired" language by "interacting" with $B$; $A$ "copied" a word+meaning from $B$; $A$ introduced a new string $\sigma$; $A$ "uses" string $n$ to refer to $x$. It seems to me that no one properly understands any of this. For example, how does one explain how agents $A$ and $B$ "communicate"? What is a "language community"? What could a "communal/social language" be?
The cognitive states involved in a group of interacting speakers are associated with the idiolects actually spoken, much as in physics one is interested in the states that the system actually is in. What is the exact cognitive/affective state that an agent is in when "using" the strings "ouch" or "it's raining" or "the square root of $2$ is irrational"? What explains the introduction of new strings? How is a string "used" to refer to some object? To the object then, or the object now? Can anyone predict with some reasonable accuracy the evolving sequence of idiolects of a child? There are (combinatorially) countlessly many orbits through $\Omega$. Why one, and not another? What is the initial cognitive state? No one knows. Published by Jeffrey Ketland at 9:18 pm 2 comments: Winter School on Rationality - Groningen (Potentially of interest to M-Phi readers and their students.) On January 27th-28th 2014, the Faculty of Philosophy of the University of Groningen will host a short Winter School aimed at advanced undergraduate students and early-stage graduate students. The theme of the winter school is Rationality, and it will consist of 5 tutorials of 2 sessions each where the topic will be discussed from different viewpoints: theoretical rationality, practical rationality, and the history of the concept of rationality. LECTURERS AND TUTORIALS Catarina Dutilh Novaes: 'Rationality and the psychology of reasoning' Martin Lenz: 'Rationalism in the history of philosophy' Jan-Willem Romeijn: 'Rationality and scientific method' Bart Streumer: 'Philosophical views on practical reasoning' Peter Timmerman: 'Social contract theory and rationality' Moreover, Prof. Pauline Kleingeld may deliver a guest lecture (TBC). As such, the program will showcase the high level of teaching and research of the three departments of the Faculty (theoretical philosophy; ethics, social and political philosophy; history of philosophy). 
The winter school aims in particular (but not exclusively) to attract potential talented students for our Research Masters' program, who in this way will have the opportunity to become acquainted with the Faculty and the different lines of research we pursue. The Faculty is offering up to three EUR 300 scholarships for the best students enrolling in the winter school, and who express serious interest in later applying for the Research Masters' program. Moreover, participants who are then accepted in the Research Masters' program for the year 2014/2015 will have their registration fee for the winter school reimbursed. To apply for the scholarships, send a short CV (max 2 pages) and a letter (max 1 page) stating your interest in the Faculty of Philosophy in Groningen and the Research Masters' program in particular, to winterschoolphilosophy 'at' rug.nl with 'Application for winter school scholarship' as subject. Deadline to apply for the scholarships: December 1st 2013. Preference will be given to members of underrepresented groups in philosophy (women, people of color, persons with disabilities etc.). To register, send an email with your name, affiliation and status (undergraduate, graduate) to winterschoolphilosophy 'at' rug.nl with 'Registration for winter school' as subject, no later than December 15th 2013. As the number of spots is limited, you are encouraged to register early. Another attractive feature of the winter school is the fact that two major international conferences will take place immediately after the school, which makes a trip to Groningen even more worthwhile for those coming from far away. 
These are: Sixth Conference of the Dutch-Flemish Association of Analytic Philosophy (January 29-31) Dutch Seminar in Early Modern Philosophy (January 29-30) Dates: January 27th – 28th 2014 Scholarship application deadline: December 1st 2013 Registration deadline: December 15th 2013 Registration fee: EUR 50 (to be reimbursed for those later accepted in the ReMa program) Further inquiries can be directed to Catarina Dutilh Novaes, c.dutilh.novaes 'at' rug.nl. Published by Catarina at 11:22 am No comments: "Naturalism" in Metaphysics Naturalistic metaphysics is fashionable amongst philosophers. A recent article, Maclaurin, J., and Dyke, H. 2012. "What is Analytic Metaphysics For?", Australasian Journal of Philosophy 90. aims to articulate the concept of naturalistic metaphysics and to criticize its (alleged) opponent. The abstract begins: We divide analytic metaphysics into naturalistic and non-naturalistic metaphysics. The latter we define as any philosophical theory that makes some ontological claim (as opposed to conceptual claim), where that ontological claim has no observable consequences. A response to this article appears in: McLeod, M., and Parsons, J. 2013. "Maclaurin and Dyke on Analytic Metaphysics", Australasian Journal of Philosophy 91. whose abstract begins: We argue that Maclaurin and Dyke's recent critique of non-naturalistic metaphysics suffers from difficulties analogous to those that caused trouble for earlier positivist critiques of metaphysics. Maclaurin and Dyke say that a theory is naturalistic iff it has observable consequences. Depending on the details of this criterion, either no theory counts as naturalistic or every theory does. This seems right to me. The examples discussed by McLeod and Parsons come from basic philosophy of science: for example, auxiliary hypotheses and the difficulties involved in formulating some principle of verifiability. 
So, what is being promoted as "naturalistic metaphysics" looks like reheated positivism and faces exactly similar objections. Here is a bit more to consider: (1) $\nabla \cdot B = 0$ (2) $\exists X \forall y (y \in X \leftrightarrow \phi(y))$ Neither of these has "observable consequences". Since the magnetic field $B$ is not observable (not observable to the human eye), it follows that Maxwell's equation, $\nabla \cdot B = 0$, has no observable consequences (for this, we need to show its consistency). And one can show that any consequence of the (predicative version of) Comprehension Principle (2), in the restricted language (i.e., without the set/class quantifiers), is a logical truth. (The Comprehension Principle is what makes mathematics applicable. In addition to asserting the existence of objects of pure mathematics, e.g., $\mathbb{R}^3$ and $SU(3)$, we can also assert the existence of the objects of physics: (mixed) functions on spacetime (such as wavefunctions and fields), and sets of spacetime points, and sets of more mundane concreta, etc.). Observing iron filings, the readings on a Hall probe, etc., doesn't count. One needs to state auxiliary claims about how the unobservable magnetic field $B$ is locally coupled to point charges and dipoles, along with a complicated network of idealized assumptions about how Hall probes work, etc. In the linked video, you hear (1:02) the announcer say, "As you see, the magnetic field forces the iron filings to line up along the lines of force ..." This is an auxiliary assumption. This is how science works. A number of basic, explanatory fundamental principles are given which do not refer to "observables" and have no observable consequences. To obtain observable consequences, one needs to add very complicated networks of auxiliary hypotheses. Auxiliary hypotheses are deductively indispensable. 
It is usually safe to assume their truth, because the experimental setup normally---but after a lot of work, usually---ensures that the required idealizations are ok. But there are always cases where a failure of observation does not imply that the law predicting it is false. Rather, one of the auxiliaries is to blame. (This is called the Duhem-Quine Problem/Thesis.) This is not just logically obvious, it's also obvious to anyone who has worked with, e.g., an oscilloscope or pretty much any measuring device. For example: Is the oscilloscope plugged in? If you press a light switch and the light doesn't come on, the reasonable explanation is not that James Clerk Maxwell has been refuted after all, but rather that some wire is not connected, or the bulb has blown, etc. Similarly in the case of applicable mathematics and every physical principle of any interest (Maxwell's laws, Euler-Lagrange equations, laws of gravitation, principles of quantum theory, etc.). Although the basic principles of applicable mathematics have no observable consequences, it's still an interesting question to examine how the axioms of applicable mathematics interact with the mixed laws of physics to obtain measurable consequences. This is a non-trivial and not well-understood problem. It is more or less equivalent to Hilbert's 6th Problem: 6. Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics. Overall, why not simply accept that there is no opposition between analytic metaphysics---which I'm inclined to define, by ostension, as the writings of Frege, Moore and Russell + some similarity sphere---and science, or anything close to that? For example, look at the table of contents of Russell's Principles of Mathematics (1903).
Second International Meeting of the Association for the Philosophy of Mathematical Practice - October 3 & 4 I'm passing on the announcement and programme of the Second International Meeting of the Association for the Philosophy of Mathematical Practice (APMP 2013) and the Fourteenth Midwest Philosophy of Mathematics Workshop (MWPMW 14) on behalf of Andy Arana. I'm happy to distribute the schedule for the Second International Meeting of the Association for the Philosophy of Mathematical Practice (APMP 2013), to be held at the University of Illinois at Urbana-Champaign on October 3 and 4. You can get more information about the meeting at http://institucional.us.es/apmp/index_APMP2013.htm. A version of the schedule with abstracts for each talk can be read at https://www.dropbox.com/s/j2qzw3txu4o0bit/APMP%202013%20schedule.pdf. All participants are invited to join us for the conference dinner on Friday, October 4, as guests of the University of Illinois. Please contact me ([email protected]) to let me know if you'd like to attend, so that I can get an accurate count of guests for dinner. Please let me know if you have any special dietary needs as well. I hope you'll join us next month for APMP 2013, and also for the Fourteenth Midwest Philosophy of Mathematics Workshop (MWPMW 14) which will also be hosted at Illinois immediately after APMP 2013 on October 5 and 6.
Information about MWPMW 14 can be read at https://mdetlefsen.nd.edu/midwest-philmath-workshop/mwpmw-14/ APMP 2013 schedule All talks will take place on the second floor of the Levis Faculty Center (http://union.illinois.edu/levis/GettingHere.html) 9:00–10:15am: Marcus Giaquinto, Department of Philosophy, University College London: ``Epistemic roles of visual experience in mathematical knowledge acquisition" 10:15–11:30am: Elaine Landry, Department of Philosophy, University of California, Davis: ``Plato was Not a Mathematical Platonist" 11:30–12:00pm: Break 12:00–12:35pm: Dirk Schlimm, Department of Philosophy, McGill University: ``The early development of Dedekind's notion of mapping" 12:00–12:35pm: Joachim Frans, Centre for Logic and Philosophy of Science, VUB Brussels and Erik Weber, Centre for Logic and Philosophy of Science, Ghent University: ``Mechanistic Explanation And Explanatory Proofs In Mathematics" 12:35–1:10pm: Emmylou Haffner, Department of History and Philosophy of Science, Université Paris Diderot - Paris 7: ``Arithmetization of mathematics, the Dedekindian way" 12:35–1:10pm: Andrew Aberdein, Department of Humanities and Communication, Florida Institute of Technology and Matthew Inglis, Mathematics Education Centre, Loughborough University: ``Explanation and Explication In Mathematical Practice" 1:10–2:30pm: Lunch break 2:30–3:05pm: John Baldwin, Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago: ``From Geometry to Algebra and Analysis" 2:30–3:05pm: Ashton Sperry-Taylor, Department of Philosophy, University of Missouri: ``Modeling The State Of Nature" 3:05–3:40pm: Sylvain Cabanacq, Department of History and Philosophy of Science, Université Paris Diderot - Paris 7: "Contrasting proofs: categorical tools, model-theoretical methods and the Manin-Mumford Conjecture" 3:05–3:40pm: Luca San Mauro, Center of Philosophy, Scuola Normale Superiore, Pisa: ``Algorithms, formalization and exactness: a philosophical
study of the practical use of Church-Turing Thesis" 3:40–4:15pm: Break 4:15–5:30pm: Chris Pincock, Department of Philosophy, The Ohio State University: ``Felix Klein as a Prototype for the Philosophy of Mathematical Practice" A meeting of APMP members follows. 9:00–10:15am: Colin McLarty, Department of Philosophy, Case Western Reserve University: ``Proofs in practice" 10:15–10:50am: Ken Manders, Department of Philosophy, University of Pittsburgh: ``Problems and Prospects for Philosophy of Mathematical Understanding" 10:15–10:50am: Ramzi Kebaili, Department of History and Philosophy of Science, Université Paris Diderot - Paris 7: ``Examples of `synthetic a priori' statements in mathematical practice" 10:50–11:20am: Break 11:20–11:55am: Susan Vineberg, Department of Philosophy, Wayne State University: ``Are There Objective Facts of Mathematical Depth?" 11:20–11:55am: Oran Magal, Department of Philosophy, McGill University: ``Teratological Investigations: What's in a monster?" 11:55–12:30pm: Philip Ehrlich, Department of Philosophy, Ohio University: ``A Re-examination of Zeno's Paradox of Extension" 11:55–12:30pm: Madeline Muntersbjorn, Department of Philosophy, University of Toledo: ``Cognitive Diversity and Mathematical Progress" 12:30–2:00pm: Lunch break 2:00–2:35pm: Michele Friend, Department of Philosophy, George Washington University: ``Using a Paraconsistent Formal Theory of Logic Metaphorically" 2:00–2:35pm: José Ferreirós, Department of Philosophy and Logic, University of Seville and Elías Fuentes Guillén, Department of Philosophy, Logic and Aesthetics, University of Salamanca: ``Bolzano's `Rein analytischer Beweis\dots' reconsidered" 2:35–3:10pm: Bernd Buldt, Department of Philosophy, Indiana University-Purdue University Fort Wayne: ``Mathematics—Some ``Practical" Insights" 2:35–3:10pm: Jacobo Asse Dayán, Program in Philosophy of Science, Universidad Nacional Autónoma de México: ``The Intentionality and Materiality of Mathematical Objects" 3:40–4:15pm:
Rochelle Gutiérrez, Department of Curriculum and Instruction, University of Illinois at Urbana-Champaign, ``What is Mathematics? The Roles of Ethnomathematics and Critical Mathematics in (Re)Defining Mathematics for the Field of Education" 3:40–4:15pm: Danielle Macbeth, Department of Philosophy, Haverford College: ``Rigor, Deduction, and Knowledge in the Practice of Mathematics" 4:15–5:30pm: Marco Panza, Institute of History and Philosophy of Science and Technology, Université Paris 1 Panthéon-Sorbonne: ``On the Epistemic Economy of Formal Definitions" The conference dinner follows at 7pm at the ACES Library, Heritage Room. Published by Richard Pettigrew at 7:41 am 1 comment: Pseudo-holomorphic disk which is constant along boundary
CommonCrawl
\begin{document} \title{Critical branching Brownian motion with absorption: \\ particle configurations} \author{Julien Berestycki\thanks{Research supported in part by ANR-08-BLAN-0220-01 and ANR-08-BLAN-0190 and ANR-09-BLAN-0215}, Nathana\"el Berestycki\thanks{Research supported in part by EPSRC grants EP/GO55068/1 and EP/I03372X/1} \ and Jason Schweinsberg\thanks{Supported in part by NSF Grants DMS-0805472 and DMS-1206195}} \maketitle \footnote{{\it MSC 2010}. Primary 60J65; Secondary 60J80, 60J25} \footnote{{\it Key words and phrases}. Branching Brownian motion, critical phenomena, Yaglom limit laws} \begin{abstract} We consider critical branching Brownian motion with absorption, in which there is initially a single particle at $x > 0$, particles move according to independent one-dimensional Brownian motions with the critical drift of $-\sqrt{2}$, and particles are absorbed when they reach zero. Here we obtain asymptotic results concerning the behavior of the process before the extinction time, as the position $x$ of the initial particle tends to infinity. We estimate the number of particles in the system at a given time and the position of the right-most particle. We also obtain asymptotic results for the configuration of particles at a typical time. \end{abstract} \section{Introduction} We consider branching Brownian motion with absorption. At time zero, there is a single particle at $x > 0$. Each particle moves independently according to one-dimensional Brownian motion with a drift of $-\mu$, and each particle independently splits into two at rate $1$. Particles are absorbed when they reach the origin. This process was first studied in 1978 by Kesten \cite{kesten}, who showed that with positive probability there are particles alive at all times if $\mu < \sqrt{2}$, but all particles are eventually absorbed almost surely if $\mu \geq \sqrt{2}$. In recent years, there has been a surge of renewed interest in this process. 
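As a quick illustration (not part of the paper), the process just defined is easy to simulate with an Euler scheme: each particle takes drifted Brownian increments, branches with probability roughly $dt$ per step, and is removed upon crossing zero. The helper name and parameter values below are illustrative choices, not notation from the paper.

```python
import math
import random

def bbm_absorbed(x, mu=math.sqrt(2), max_time=30.0, dt=0.01, rng=random):
    """Euler-scheme sketch of branching Brownian motion with drift -mu,
    binary branching at rate 1, and absorption at the origin."""
    particles = [x]
    t = 0.0
    sd = math.sqrt(dt)
    while particles and t < max_time:
        t += dt
        alive = []
        for p in particles:
            p += -mu * dt + sd * rng.gauss(0.0, 1.0)
            if p <= 0.0:               # absorbed at zero
                continue
            alive.append(p)
            if rng.random() < dt:      # branch with probability ~ dt
                alive.append(p)
        particles = alive
    return t if not particles else None   # extinction time, or None if still alive
```

With the critical drift $\mu = \sqrt{2}$, Kesten's result says extinction is certain, and indeed for moderate starting points almost every simulated run dies out quickly.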
Some of this interest has been driven by connections between branching Brownian motion with absorption and the FKPP equation. See, for example, the work of Harris, Harris, and Kyprianou \cite{hhk06}, who used branching Brownian motion with absorption to establish existence and uniqueness results for the FKPP traveling-wave equation. In other work, such as \cite{bbs, bdmm1, bdmm2, maillard}, branching Brownian motion with absorption or a very similar process has been used to model a population undergoing selection. In this setting, particles represent individuals in a population, branching events correspond to births, the positions of the particles are the fitnesses of the individuals, and absorption at zero models the death of individuals whose fitness becomes too low. In this paper, we consider branching Brownian motion with absorption in the critical case with $\mu = \sqrt{2}$. This process is known to die out with probability one, but we are able to use techniques developed in \cite{bbs, bbs2} to obtain some new and rather precise results about the behavior of the process before the extinction time. We focus on asymptotic results about the number of particles, the position of the right-most particle, and the configuration of particles as the position $x$ of the initial particle tends to infinity. \subsection{Main results} Let $N(s)$ be the number of particles at time $s$, and let $X_1(s) \geq X_2(s) \geq \dots \geq X_{N(s)}(s)$ denote the positions of the particles at time $s$. Let \begin{equation}\label{Ydef} Y(s) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)}. \end{equation} Throughout the paper, we will use the constants \begin{equation}\label{cdef} \tau = \frac{2 \sqrt{2}}{3 \pi^2}, \hspace{.5in}c = \tau^{-1/3} = \bigg( \frac{3 \pi^2}{2 \sqrt{2}} \bigg)^{1/3}. \end{equation} Let $t = \tau x^3$, which is approximately the extinction time of the process when $x$ is large. 
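For concreteness (an illustrative numerical aside, not part of the paper), the constants in (\ref{cdef}) satisfy $c = \tau^{-1/3}$, so with $t = \tau x^3$ the starting position can be written as $x = c\,t^{1/3}$. The helper name below is ours:

```python
import math

# Constants tau and c from (cdef), and the time scale t = tau * x^3.
tau = 2 * math.sqrt(2) / (3 * math.pi ** 2)
c = (3 * math.pi ** 2 / (2 * math.sqrt(2))) ** (1 / 3)

def extinction_scale(x):
    """Approximate extinction time t = tau * x^3 for a start at x."""
    return tau * x ** 3
```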
More precisely, it was shown in \cite{bbs2} that for all $\varepsilon > 0$, there is a positive constant $\beta$ such that for sufficiently large $x$, the extinction time is between $t - \beta x^2$ and $t + \beta x^2$ with probability at least $1 - \varepsilon$. Our first result shows how the number of particles evolves over time. For times $s$ between $Bx^2$ and $(1 - \delta)t$, where $B$ is a large constant and $\delta$ is a small constant, this result estimates, with high probability, the number of particles at time $s$ to within a constant factor. \begin{Theo}\label{numthm} Fix $\varepsilon > 0$ and $\delta > 0$. Then there exists a positive constant $B$ depending on $\varepsilon$ and positive constants $C_1$ and $C_2$ depending on $B$, $\delta$, and $\varepsilon$ such that for sufficiently large $x$, we have $$\P \bigg( \frac{C_1}{x^3} e^{\sqrt{2} (1 - s/t)^{1/3}x} \leq N(s) \leq \frac{C_2}{x^3} e^{\sqrt{2} (1 - s/t)^{1/3}x} \bigg) > 1 - \varepsilon$$ for all $s \in [Bx^2, (1 - \delta) t]$. \end{Theo} For $0 \leq s \leq t$, define \begin{equation}\label{Ldef} L(s) = x \bigg( 1 - \frac{s}{t} \bigg)^{1/3} = c(t - s)^{1/3}. \end{equation} The next result shows that at time $s$, the right-most particle is usually slightly to the left of $L(s)$. \begin{Theo}\label{rtthm} Suppose $0 < u < \tau$, and let $s = u x^3$. Let $\varepsilon > 0$. Then there exist $d_1 > 0$ and $d_2 > 0$, depending on $u$ and $\varepsilon$, such that for sufficiently large $x$, $$\P\bigg(L(s) - \frac{3}{\sqrt{2}} \log x - d_1 < X_1(s) < L(s) - \frac{3}{\sqrt{2}} \log x + d_2 \bigg) > 1 - \varepsilon.$$ \end{Theo} We are also able to obtain results about the entire configuration of particles. The key idea is that at time $s$, the density of particles near $y \in (0, L(s))$ will be roughly proportional to \begin{equation} e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg). \label{density} \end{equation} Establishing a rigorous version of this statement requires proving two theorems.
In Theorem \ref{config1}, we consider the probability measure in which a mass of $1/N(s)$ is placed at the position of each particle at time $s$. Because most particles are close to the origin and $\sin(\pi y/L(s)) \approx \pi y/L(s)$ for small $y$, in the limit this probability measure has a density proportional to $y e^{-\sqrt{2} y}$. In Theorem \ref{config2}, we consider the probability measure in which a particle at position $z$ is assigned a mass proportional to $e^{\sqrt{2} z}$. In this case, particles over the entire interval from $0$ to $L(s)$ contribute significantly even in the limit, and the sinusoidal shape is observed. For these results, we use $\Rightarrow$ to denote convergence in distribution for random elements in the Polish space of probability measures on $(0, \infty)$, endowed with the weak topology. We also use $\delta_y$ to denote a unit mass at $y$. \begin{Theo}\label{config1} Suppose $0 < u < \tau$, and let $s = ux^3$. Define the probability measure $$\chi(u) = \frac{1}{N(s)} \sum_{i=1}^{N(s)} \delta_{X_i(s)}.$$ Let $\mu$ be the probability measure on $(0, \infty)$ with density $g(y) = 2y e^{-\sqrt{2} y}$. Then $\chi(u) \Rightarrow \mu$ as $x \rightarrow \infty$. \end{Theo} \begin{Theo}\label{config2} Suppose $0 < u < \tau$, and let $s = ux^3$. Define the probability measure $$\eta(u) = \frac{1}{Y(s)} \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \delta_{X_i(s)/L(s)}.$$ Let $\nu$ be the probability measure on $(0,1)$ with density $h(y) = \frac{\pi}{2} \sin(\pi y)$. Then $\eta(u) \Rightarrow \nu$ as $x \rightarrow \infty$. \end{Theo} \subsection{Ideas behind the proofs} In this section, we briefly outline a few of the heuristics behind the main results and their proofs. One cannot obtain an accurate estimate of the number of particles $N(s)$ at time $s$ simply by calculating the expected value $\E[N(s)]$, as the expected value is dominated by rare events when the number of particles is unusually large.
Instead, we use the method of truncation and obtain our estimates by calculating first and second moments for a process in which particles are killed if they get too far to the right. It is useful to consider first branching Brownian motion with a drift of $-\sqrt{2}$ in which particles are killed when they reach either $0$ or $L$. For this process, if we start with a single particle at $x$ and if $s$ is large, then the ``density" of particles near $y$ at time $s$ is approximately (see Lemma \ref{stripdensity} below) \begin{equation}\label{psnew} p_s(x,y) = \frac{2}{L} e^{-\pi^2 s/2L^2} e^{\sqrt{2} x} \sin \bigg( \frac{\pi x}{L} \bigg) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L} \bigg). \end{equation} This formula indicates that in the long run, the particles settle into an equilibrium configuration in which the density of particles near $y$ is proportional to $e^{-\sqrt{2}y} \sin(\pi y/L)$. We need to choose $L$ to be large enough that particles are unlikely to be killed at $L$, but small enough that we can obtain useful moment bounds. This will require choosing $L$ to be near the position of the right-most particle. However, because of the exponential decay term in (\ref{psnew}), the number of particles is decreasing over time, and therefore the position of the right-most particle will decrease also. Consequently, it will be necessary to allow the position of the right boundary to decrease over time and consider a process in which particles are killed if they reach $L(s)$ at time $s$. It turns out that the right choice for the killing boundary is given by $L(s)=c(t-s)^{1/3}$, as in (\ref{Ldef}) above. The importance of this curve was already recognized by Kesten \cite{kesten}, where it plays a key role in the analysis of critical branching Brownian motion with absorption.
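The formula (\ref{psnew}) separates variables: for fixed $L$, the function $u(s,y) = e^{-\pi^2 s/2L^2} e^{-\sqrt{2} y} \sin(\pi y/L)$ is the principal mode of the forward equation $u_s = \frac{1}{2} u_{yy} + \sqrt{2}\, u_y + u$ for the expected density (drift $-\sqrt{2}$, unit branching rate, Dirichlet conditions at $0$ and $L$). A finite-difference spot check of this fact, as an illustrative sketch rather than part of the argument:

```python
import math

SQRT2 = math.sqrt(2)

def u(s, y, L):
    """Principal mode appearing in p_s(x, y), viewed as a function of s and y."""
    return (math.exp(-math.pi ** 2 * s / (2 * L ** 2))
            * math.exp(-SQRT2 * y) * math.sin(math.pi * y / L))

def pde_residual(s, y, L, h=1e-3):
    # u_s - (u_yy / 2 + sqrt(2) * u_y + u), derivatives by central differences;
    # the residual should vanish up to discretization error.
    us = (u(s + h, y, L) - u(s - h, y, L)) / (2 * h)
    uy = (u(s, y + h, L) - u(s, y - h, L)) / (2 * h)
    uyy = (u(s, y + h, L) - 2 * u(s, y, L) + u(s, y - h, L)) / h ** 2
    return us - (0.5 * uyy + SQRT2 * uy + u(s, y, L))
```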
In \cite{bbs2}, we followed a similar strategy to study the survival probability of the same process, and we showed that a particle that reaches $L(s)$ at some time $s<t$ has a good probability of having a descendant alive at time $t$. More precisely, we are able to show that the probability that a particle hits $L(s)$ during a fixed time interval of length $O(x^2)$ (see Lemma \ref{fphi}), or that a particle ever hits a barrier $L_\alpha(s) = L(s) + O(\alpha)$ (see Lemma \ref{killalpha}), is close to 0 for $x$ and $\alpha$ large enough. A heuristic derivation of the precise form of $L(s)$ is given in section \ref{L(s)section}. The upshot of these results is that we can choose to kill particles at $L(s)$ during the appropriate time frame, in addition to killing them at 0, at no additional cost. This allows us to use and refine previous estimates and results from \cite{bbs,bbs2} for branching Brownian motion with absorption at 0 and at a fixed $L>0$ (see section \ref{constbdrysec}) or at a curved boundary $L(s)$ as in \eqref{Ldef} (see section \ref{curvedbdry}). We now describe more precisely the structure of the proof and how the different estimates are used to obtain the proofs of Theorems \ref{numthm}, \ref{config1}, and \ref{config2}. As can be seen from (\ref{psnew}), in the model with killing at 0 and $L(s)$, the number of particles that will be in a given set at a sufficiently large future time is well predicted by the quantity $$Z(s) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \sin \bigg( \frac{\pi X_i(s)}{L(s)} \bigg) {\bf 1}_{\{X_i(s) \leq L(s)\}}.$$ Therefore $Z(s)$ is the natural measure of the ``size" of the process at time $s$. Given a bounded continuous function $f: [0,\infty) \to \R$, let us consider the sum $\sum_{i=1}^{N(s)} f(X_i(s)).
$ We observe that $N(s)$ is this sum with $f\equiv 1$ and that to prove Theorem \ref{config1}, it suffices to show that for $s$ of the form $ux^3$, we have $$ \frac1{N(s)} \sum_{i=1}^{N(s)} f(X_i(s)) \to_p \int_0^\infty g(y)f(y) \: dy, $$ where $g(y) = 2y e^{-\sqrt{2} y}$ for $y \geq 0$ and $\rightarrow_p$ denotes convergence in probability as $x \rightarrow \infty$. In Lemma \ref{fphi}, we show that if $r=s-Bx^2$, where $B$ is a large constant, then with probability close to 1, no particle ever reaches $L(u)$ for $u \in [r,s]$, and therefore we can kill particles there at no cost during $[r,s]$. We denote by $X(f)$ the sum $\sum_{i=1}^{N(s)} f(X_i(s))$ when we do kill the particles at $L(u)$ for $u \in [r,s]$. On an interval of length $O(x^2)$, the boundary $L(u)$ stays roughly constant (it changes by at most a constant), and we are therefore able to use estimates obtained for the model in which particles are killed at 0 and at a fixed point $L$. More precisely, Lemma \ref{meanXf} uses the estimates in Lemma \ref{fXvar} and Lemma \ref{stripdensity} to show that $$ \E[X(f) |\mathcal{F}_r] \cong Z(r) {\pi \over L(s)^2} e^{-\pi^2 Bx^2 /2L(s)^2} \int_0^\infty f(y) g(y) \: dy, $$ where for this heuristic description we can take $\cong$ to mean ``is close to''. In the same spirit, Lemma \ref{varXf} gives an upper bound on $\text{Var}(X(f) |\mathcal{F}_r)$ in terms of a quantity involving $Y(r)$. Focusing on the proof of Theorem \ref{numthm}, that is, taking $f\equiv 1$ above, we can plug the bounds on $\E[X(f) |\mathcal{F}_r]$ and $\text{Var}(X(f) |\mathcal{F}_r)$ into Chebyshev's inequality to show that $X(1)$ is not too far from its conditional expectation, as shown in equation \eqref{cheb}, which says that \begin{equation*} \P \bigg(\big|X(1) - \E[X(1)|{\cal F}_r] \big| > \frac{1}{2} \E[X(1)|{\cal F}_r] \bigg| {\cal F}_r \bigg) \leq \frac{C Y(r) e^{\sqrt{2} L(s)}}{x^{3/2} {\hat Z}^2}, \end{equation*} where ${\hat Z} \cong Z(r)$.
Propositions \ref{Zlower} and \ref{Zupper} show that on an event of probability close to 1 we have good bounds on $Z(r)$, while the upper bound for $Y(r)$ is given in Proposition \ref{Yupper}. The conclusion is that on an event of probability close to 1, the quantity on the right-hand side tends to zero as $x \rightarrow \infty$, and on the same event we can bound $\E[X(1)|\mathcal{F}_r]$ as above. This allows us to obtain the conclusion of Theorem \ref{numthm} because $X(1)=N(s)$ when no particle is killed at $L(u)$ during $[r,s]$. Propositions \ref{Zlower} and \ref{Zupper} are themselves obtained by considering the model where particles are killed at $L_\alpha(s)$ in addition to being killed at 0, as explained above. Using this idea, Lemma \ref{EZbound} bounds $\E[Z(s)|\mathcal{F}_r] /Z(r)$ between two functions of $r$ and $s$. Since Propositions \ref{Zlower} and \ref{Zupper} essentially derive from another application of Chebyshev's inequality, the bound on the second moment of $Z(s)$ given in Proposition \ref{VarZProp} is a crucial step. Theorem \ref{config1} is obtained through a similar careful application of Chebyshev's inequality. Theorem \ref{config2} follows the same principle with $$ \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \phi\bigg( \frac{X_i(s)}{L(s)} \bigg) {\bf 1}_{\{X_i(s) < L(s)\}} $$ in lieu of $$ \sum_{i=1}^{N(s)} f(X_i(s)). $$ The proof of Theorem \ref{rtthm} uses a different technique because the moment bounds obtained in this paper are not sharp enough near the right boundary $L(s)$ to give such precise control over the position of the right-most particle. Instead, to control the position of the right-most particle at time $s$, we consider the configuration of particles at time $s - \gamma x^2$, where $\gamma$ is a small constant. We estimate, for each particle at time $s - \gamma x^2$, the probability that it will have a descendant particle to the right of $L(s) - (3/\sqrt{2}) \log x + d$ at time $s$.
To estimate this probability, we can use a result of Bramson \cite{bram83} for the position of the right-most particle in branching Brownian motion without absorption, because it is unlikely that a particle that gets to the right of $L(s) - (3/\sqrt{2}) \log x + d$ would have hit zero between times $s - \gamma x^2$ and $s$. The logarithmic term that appears in Theorem \ref{rtthm} is therefore closely related to the logarithmic correction in the celebrated result of Bramson, which says that for branching Brownian motion without drift or absorption, the median position of the right-most particle at time $t$ is within a constant of $$\sqrt{2} t - \frac{3}{2 \sqrt{2}} \log t.$$ \subsection{Heuristic derivation of $L(s)$ }\label{L(s)section} There is a natural approximate relationship, discussed in \cite{bbs}, between the number of particles $N(s)$ and the optimum position of the right boundary $L(s)$ given by \begin{equation}\label{LNrelation} L(s) = \frac{1}{\sqrt{2}} (\log N(s) + 3 \log \log N(s)). \end{equation} Equation (\ref{psnew}) indicates that once the process is in equilibrium, the number of particles gets multiplied by $e^{-\pi^2 s/2 L(s)^2}$ after time $s$, which suggests the rough approximation $$N'(s) \approx - \frac{\pi^2}{2 L(s)^2} N(s).$$ Because we have written $L(s)$ as a function of $N(s)$, this approximation can be combined with the chain rule to give \begin{equation}\label{Ldiffeq} L'(s) \approx - \frac{1}{\sqrt{2} N(s)} \cdot \frac{\pi^2}{2 L(s)^2} N(s) = - \frac{\pi^2}{2 \sqrt{2} L(s)^2}. \end{equation} Because we begin with a particle at $x$, the position of the right-most particle will be near $x$ at small times, so we will take $L(0) = x$ (although this means that we will not be able to start killing particles at the right boundary immediately). Solving the differential equation (\ref{Ldiffeq}) with $L(0) = x$ gives us $L(s) = c (t - s)^{1/3}$, where $t = \tau x^3$, as in (\ref{Ldef}). 
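This solution can be checked directly: with $c^3 = 3\pi^2/(2\sqrt{2})$, the curve $L(s) = c(t-s)^{1/3}$ satisfies $L'(s) = -\frac{c}{3}(t-s)^{-2/3} = -\pi^2/(2\sqrt{2}\,L(s)^2)$, and $L(0) = c\,t^{1/3} = x$ when $t = \tau x^3$. A numerical spot check with illustrative values (not part of the paper):

```python
import math

c = (3 * math.pi ** 2 / (2 * math.sqrt(2))) ** (1 / 3)
tau = c ** -3   # so that t = tau * x^3 gives L(0) = x

def ode_check(x, s, h=1e-5):
    """Return (L(0) - x, L'(s) - rhs) for L(s) = c (t - s)^(1/3)."""
    t = tau * x ** 3
    L = lambda v: c * (t - v) ** (1 / 3)
    deriv = (L(s + h) - L(s - h)) / (2 * h)            # L'(s), central difference
    rhs = -math.pi ** 2 / (2 * math.sqrt(2) * L(s) ** 2)
    return L(0.0) - x, deriv - rhs
```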
Combining this with (\ref{LNrelation}) gives $$N(s) \approx \frac{e^{\sqrt{2} L(s)}}{(\log N(s))^3} \approx \frac{e^{\sqrt{2} L(s)}}{L(s)^3},$$ where $\approx$ means that the expressions on each side should be of the same order of magnitude. Because $x$ and $L(s)$ are of the same order of magnitude when $s \in [Bx^2, (1 - \delta) t]$, this approximation suggests the result of Theorem \ref{numthm}. \subsection{Discussion and open problems} Beyond the general motivation recalled at the beginning of this introduction, we now explain some of the contexts in which these results might be applied, as well as some related open problems that they raise. \emph{Yaglom limit laws.} Let $x>0$ be fixed, and consider a branching Brownian motion with the critical drift of $-\sqrt{2}$ and absorption at zero, started from one particle at $x$. Almost surely, the process becomes extinct. However, one can condition the process on survival up to a large time $t$: it is then interesting to consider, for example, the number of particles still alive at time $t$ or the configuration of particles at some time $s \in (0, t]$. We believe that the results in Theorems \ref{config1} and \ref{config2} may be relevant to this question as well. Similar questions for ordinary branching processes were considered by Yaglom \cite{yaglom}. Note that the results of \cite{bbs2} give estimates, up to a multiplicative constant, for the probability of survival up to time $t$. \emph{Fleming-Viot processes.} Critical branching Brownian motion with absorption shares several features with the Fleming-Viot process studied by Burdzy et al. \cite{burdzy1, burdzy2}, in the case where the underlying motion is $(X_t,t\ge0)$, a Brownian motion with drift $-\sqrt{2}$ and absorption at 0. This is a process in which $N$ particles perform Brownian motion with drift $-\sqrt{2}$ and are absorbed at 0.
Furthermore, whenever a particle is absorbed at 0, a new particle is instantaneously born at a location chosen uniformly among the $N-1$ other particles. For this Fleming-Viot process, the main question concerns the equilibrium empirical distribution of particles $\mu_N$ and in particular the limiting behaviour of $\mu_N$ as $N$ tends to infinity. Under fairly general conditions, $\mu_N$ is conjectured to converge weakly to a particular quasi-stationary distribution of the underlying motion $X$. This has been verified in the case where the underlying motion $X$ is Brownian motion and absorption occurs at the boundary of a bounded domain $D$ (\cite{burdzy2, grigorescu1, grigorescu2}) or if the state space is finite (\cite{AFG}). In all these cases the quasi-stationary distribution is unique. Recently, this question was also addressed in \cite{AFGJ} in the case where the underlying motion is a subcritical branching process on $\mathbb{Z}$. For this motion there is a continuum of quasi-stationary distributions, and the conclusion of \cite{AFGJ} is that the limiting behaviour of $\mu_N$ is given by the minimal one, in the sense of minimal expected absorption time. This \emph{selection principle} is also conjectured to hold in great generality. We point out that the function $g(x) = 2x e^{- \sqrt{2} x}$, which arises in Theorem \ref{config1} as the weak limit of the density appearing in \eqref{density} (and before that in \cite{bbs}), is a left eigenfunction of the generator of $X$, $$ \frac12 \frac{d^2 }{ dx^2} - \sqrt{2} \frac{d}{dx}, $$ with Dirichlet boundary conditions on $(0, \infty)$. It is easily verified that this implies that $g$ is a quasi-stationary distribution of $X$. It is in fact the minimal quasi-stationary distribution of $X$ on $(0, \infty)$ in the sense of \cite{AFGJ}. See \cite{ServetMartinez} for precise calculations and the relation to the corresponding Yaglom problem for $X$.
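Indeed, a short computation gives $\frac12 g'' + \sqrt{2}\, g' = -g$, so $g$ is an eigenfunction, with eigenvalue $-1$, of the adjoint operator $\frac12 \frac{d^2}{dx^2} + \sqrt{2} \frac{d}{dx}$, which is the identity behind quasi-stationarity here. The following stdlib sketch (illustrative, not part of the paper) checks this identity by finite differences and checks that $g$ integrates to one:

```python
import math

SQRT2 = math.sqrt(2)

def g(y):
    """Candidate minimal quasi-stationary density 2 y exp(-sqrt(2) y)."""
    return 2.0 * y * math.exp(-SQRT2 * y)

def eigen_residual(y, h=1e-3):
    # (1/2) g'' + sqrt(2) g' + g: vanishes because g is an eigenfunction of
    # the adjoint operator (1/2) d^2/dx^2 + sqrt(2) d/dx with eigenvalue -1.
    gp = (g(y + h) - g(y - h)) / (2 * h)
    gpp = (g(y + h) - 2.0 * g(y) + g(y - h)) / h ** 2
    return 0.5 * gpp + SQRT2 * gp + g(y)

def total_mass(n=100000, upper=40.0):
    # Trapezoid rule on (0, upper); the tail beyond upper is negligible.
    h = upper / n
    return h * (0.5 * (g(0.0) + g(upper)) + sum(g(i * h) for i in range(1, n)))
```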
Hence its appearance in Theorem \ref{config1} adds support to the above-mentioned conjecture. In fact, our work raises the question of whether the function $e^{-\sqrt{2}x } \sin(\pi x / L)$ provides an even better approximation for $\mu_N$ when $L = (1/\sqrt{2}) ( \log N + 3 \log \log N)$. More formally, we ask whether the analogue of Theorem \ref{config2} also holds for $\mu_N$. \emph{Extreme configurations.} Theorem \ref{rtthm} determines the position of the right-most particle at time $s$ to within an additive constant. A natural open problem is to obtain convergence in distribution for the position of the right-most particle. More generally, one can ask about the distribution of the particle configuration as seen from the right-most particle, or from the median position of the right-most particle. We point out that these questions also make sense in the nearly-critical case studied in \cite{bbs}, and that the proof of Theorem \ref{rtthm} can be adapted to that setting. The analogous question for branching Brownian motion without absorption at zero has been settled in \cite{abbs, abk}. \section{Preliminary Estimates}\label{stripsec} In this section, we obtain or recall some preliminary estimates concerning branching Brownian motion in which particles are killed not only at the origin but also when they travel sufficiently far to the right. We will consider two cases. One is when the particles are killed at a fixed level $L > 0$. The other is when particles are killed upon reaching $L(s) = c (t-s)^{1/3}$ at time $s$. As before, let $N(s)$ be the number of particles at time $s$, and denote the positions of the particles at time $s$ by $X_1(s) \geq X_2(s) \geq \dots \geq X_{N(s)}(s)$. Define $Y(s)$ as in (\ref{Ydef}). Let $({\cal F}_s, s \geq 0)$ denote the natural filtration associated with the branching Brownian motion.
Let $q_s(x,y)$ denote the density of the branching Brownian motion, meaning that if initially there is a single particle at $x$ and $A$ is a Borel subset of $(0, \infty)$, then the expected number of particles in $A$ at time $s$ is $$\int_A q_s(x,y) \: dy.$$ Here, and throughout the entire paper, $C$, $C'$, and $C''$ will denote positive constants whose value may change from line to line. Numbered positive constants of the form $C_k$ will not change their values from line to line. \subsection{A constant right boundary}\label{constbdrysec} Let $L > 0$. We consider here the case in which particles are killed upon reaching either $0$ or $L$. This case was studied in detail in \cite{bbs}. The following result is Lemma 5 of \cite{bbs}. \begin{Lemma}\label{stripdensity} For $s > 0$ and $x,y \in (0, L)$, let $$p_s(x,y) = \frac{2}{L} e^{-\pi^2 s/2L^2} e^{\sqrt{2} x} \sin \bigg( \frac{\pi x}{L} \bigg) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L} \bigg),$$ and define $D_s(x,y)$ so that $$q_s(x,y) = p_s(x,y)(1 + D_s(x,y)).$$ Then for all $x,y \in (0,L)$, we have \begin{equation}\label{Dineq} |D_s(x,y)| \leq \sum_{n=2}^{\infty} n^2 e^{-\pi^2(n^2 - 1)s/2L^2}. \end{equation} \end{Lemma} Lemma \ref{stripdensity} allows us to approximate $q_s(x,y)$ by $p_s(x,y)$ when $s$ is sufficiently large. Lemma \ref{mainqslem} below collects some further results about the density $q_s(x,y)$. \begin{Lemma} \label{mainqslem} Fix a positive constant $b > 0$. There exists a constant $C$ (depending on $b$) such that for all $s$ such that $s \geq b L^2$, we have \begin{equation}\label{q1} q_s(x,y) \leq C p_s(x,y), \, \, \forall x,y \in [0,L] \end{equation} and for all $s$ such that $s \leq b L^2$, we have \begin{equation}\label{q2} q_s(x,y) \leq \frac{C L^3}{s^{3/2}} p_s(x,y), \, \, \forall x,y \in [0,L]. 
\end{equation} The following inequalities hold in general (for all $s>0$ and $x,y \in [0,L]$): \begin{align} q_s(x,y) &\leq \frac{C e^{\sqrt{2}(x - y)} e^{-(x - y)^2/2s}}{s^{1/2}} \label{q3} \\ \int_0^L q_s(x,y) \: dy &\leq e^s \label{q4} \\ \int_0^{\infty} q_s(x,y) \: ds &\leq \frac{2 e^{\sqrt{2}(x-y)} x (L - y)}{L} \label{q5} \\ \int_0^L e^{\sqrt{2} y} q_s(x,y) \: dy &\leq e^{\sqrt{2} x} \min \bigg\{1, \frac{L-x}{s^{1/2}} \bigg\} \label{q6} \end{align} \end{Lemma} \begin{proof} Equation (\ref{q1}) holds because the right-hand side of (\ref{Dineq}) is bounded by a constant when $s/L^2 \geq b$. The result (\ref{q2}) is established in the proof of Proposition 14 in \cite{bbs} (see the argument between equations (53) and (54) of \cite{bbs}) by breaking the sum on the right-hand side of (\ref{Dineq}) into blocks of size approximately $L/\sqrt{s}$. Equation (\ref{q3}) is equation (55) of \cite{bbs} and is obtained by comparing $q_s(x,y)$ to the density of standard Brownian motion at time $s$. Equation (\ref{q4}) follows from the fact that the expected number of particles at time $s$ is at most $e^s$ because branching occurs at rate $1$. Equation (\ref{q5}) follows from (28) and (51) of \cite{bbs} and is proved using Green's function estimates for Brownian motion in a strip. Finally, to prove (\ref{q6}), let $v_s(x,y)$ be the density of Brownian motion killed at $0$ and $L$, meaning that if $A$ is a Borel subset of $(0,L)$, then the probability that a Brownian motion started at $x$ is in $A$ at time $s$ and has not hit $0$ or $L$ before time $s$ is $\int_A v_s(x,y) \: dy$. By equation (28) of \cite{bbs}, we have \begin{equation}\label{111} q_s(x,y) = e^{\sqrt{2}(x-y)} v_s(x,y). \end{equation} Let $(B(t), t \geq 0)$ be standard Brownian motion with $B(0) = x$. 
Then, by the Reflection Principle, \begin{align}\label{112} \int_0^L v_s(x,y) \: dy &= \P(B(t) \in (0, L) \mbox{ for all }t \in [0,s]) \nonumber \\ &\leq \P \big( \max_{0 \leq t \leq s} B(t) \leq L \big) \nonumber \\ &= 2 \int_0^{L-x} \frac{1}{\sqrt{2 \pi s}} e^{-y^2/2s} \: dy \nonumber \\ &\leq \min \bigg\{1, \frac{L-x}{s^{1/2}} \bigg\}, \end{align} and (\ref{q6}) follows from (\ref{111}) and (\ref{112}). \end{proof} Let $$Z(s) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \sin \bigg( \frac{\pi X_i(s)}{L} \bigg).$$ Lemma \ref{YZexplem} below is part of Lemma 7 of \cite{bbs}, while Lemma \ref{varZlem} below follows immediately from Lemma 9 of \cite{bbs}. \begin{Lemma}\label{YZexplem} For all initial configurations of particles at time zero, we have \begin{equation}\label{Zexp} \E[Z(s)] = e^{-\pi^2 s/2L^2} Z(0) \end{equation} and \begin{equation}\label{Yexp} \E[Y(s)] = \frac{4}{\pi} e^{-\pi^2s/2L^2} Z(0)(1 + D(s)), \end{equation} where $|D(s)|$ is bounded above by the right-hand side of (\ref{Dineq}). \end{Lemma} \begin{Lemma}\label{varZlem} Fix a constant $b > 0$. Suppose initially there is a single particle at $x$. Then there exists a positive constant $C$, depending on $b$ but not on $L$ or $x$, such that for all $s \geq bL^2$, $$\E[Z(s)^2] \leq \frac{C e^{\sqrt{2} x} e^{\sqrt{2} L} s}{L^4}.$$ \end{Lemma} The next lemma is Lemma 8 in \cite{bbs}, where it is obtained as a straightforward application of results in \cite{sawyer}. It is also similar to Lemma 3.1 of \cite{kesten}. \begin{Lemma}\label{fXvar} Suppose $f:(0, L) \rightarrow [0, \infty)$ is a bounded measurable function. Suppose initially there is a single particle at $x$. 
Then $$\E \bigg[ \sum_{i=1}^{N(s)} f(X_i(s)) \bigg] = \int_0^L f(y) q_s(x,y) \: dy$$ and $$\E \bigg[ \bigg( \sum_{i=1}^{N(s)} f(X_i(s)) \bigg)^2 \bigg] = \int_0^L f(y)^2 q_s(x,y) \: dy + 2 \int_0^s \int_0^L q_u(x,z) \bigg( \int_0^L f(y) q_{s-u}(z,y) \: dy \bigg)^2 \: dz \: du.$$ \end{Lemma} \subsection{A curved right boundary}\label{curvedbdry} Fix any time $t > 0$. As in (\ref{Ldef}), for $s \in [0, t]$, let $$L(s) = c (t-s)^{1/3},$$ where $c$ was defined in (\ref{cdef}). Consider branching Brownian motion with drift $-\sqrt{2}$ in which particles are killed if they reach zero, or if they reach $L(s)$ at time $s$. Note that all particles must be killed by time $t$ because $L(t) = 0$. This right boundary was previously considered by Kesten \cite{kesten}. We recall here some results that recently appeared in \cite{bbs2}, where they were proved using techniques developed by Harris and Roberts \cite{haro}. Let $$Z(s) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \sin \bigg( \frac{\pi X_i(s)}{L(s)} \bigg),$$ a quantity of crucial importance in what follows. The next result, which combines Proposition 10 and Corollary 11 of \cite{bbs2}, provides a precise estimate of $\E[Z(s)]$. \begin{Lemma}\label{EZbound} For $0 < r < s < t$, let \begin{equation}\label{Gdef} G_r(s) = \exp \bigg( - (3 \pi^2)^{1/3} \big((t-r)^{1/3} - (t-s)^{1/3} \big) \bigg) \bigg( \frac{t-s}{t-r} \bigg)^{1/6}. \end{equation} There exist positive constants $C_3$ and $C_4$ such that if $0 < s < t$, then $$Z(0) G_0(s) \exp(- C_3 (t-s)^{-1/3}) \le \E[Z(s)] \le Z(0) G_0(s) \exp(C_4 (t-s)^{-1/3})$$ and, more generally, if $0 < r < s < t$, then $$Z(r) G_r(s) \exp(-C_3 (t - s)^{-1/3}) \leq \E[Z(s)|{\cal F}_r] \leq Z(r) G_r(s) \exp(C_4(t - s)^{-1/3}).$$ \end{Lemma} The following result, which is the $r = 0$ case of Proposition 12 in \cite{bbs2}, establishes bounds on the density up to a constant factor. 
\begin{Lemma}\label{densityprop} For $x,y>0$ and $0 \le s \le t$, let $$\psi_s(x,y) = \frac1{L(s)} e^{-(3 \pi^2)^{1/3}(t^{1/3} - (t-s)^{1/3})} \bigg( \frac{t-s}{t} \bigg)^{1/6} e^{\sqrt{2} x} \sin \bigg( \frac{\pi x}{L(0)} \bigg) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg).$$ Fix a positive constant $b$. There exists a constant $A > 0$ and positive constants $C'$ and $C''$, with $C''$ depending on $b$, such that if $L(0)^2 \leq s \leq t - A$, then $$q_s(x,y) \geq C' \psi_s(x,y)$$ and if $bL(0)^2 \leq s \leq t - A$, then $$q_s(x,y) \leq C'' \psi_s(x,y).$$ \end{Lemma} We will also require estimates on the number of particles killed at the right boundary. The result below is the $s = 0$ case of Lemma 15 in \cite{bbs2}. \begin{Lemma} \label{L:sumRj} Suppose there is initially a single particle at $x$, where $0 < x < L(0)$. Let $R$ be the number of particles killed at $L(s)$ for some $s \in [0,t]$. Then there are positive constants $C'$ and $C''$ such that $$C' h(x) \leq \E[R] \leq C''(h(x) + j(x)),$$ where $$h(x) = e^{\sqrt{2}x} \sin\left(\frac{\pi x}{ct^{1/3}}\right) t^{1/3} \exp(- (3\pi^2t)^{1/3})$$ and $$j(x) = x e^{\sqrt{2}x} t^{-1/3} \exp(- (3\pi^2t)^{1/3}).$$ \end{Lemma} Finally, we will need the following bound on the second moment of $Z(s)$. \begin{Prop}\label{VarZProp} Fix $\kappa > 0$ and $\delta > 0$. Then there exists a positive constant $C$, depending on $\kappa$ and $\delta$ but not on $t$, such that for all $t \geq 1$ and all $s$ satisfying $\kappa t^{2/3} \leq s \leq (1 - \delta) t$, $$\textup{Var}(Z(s)) \leq C \E[Z(s)]^2 \bigg( \frac{e^{\sqrt{2} L(0)}}{L(0)Z(0)} + \frac{e^{\sqrt{2} L(0)} Y(0)}{L(0)^2 Z(0)^2} \bigg).$$ \end{Prop} \begin{proof} The proof is similar to the proof of Lemma 12 in \cite{bbs}. Choose times $0 = s_0 < s_1 < \dots < s_K = s$ such that $\kappa t^{2/3} \leq s_{i+1} - s_i \leq 2 \kappa t^{2/3}$ for $i = 0, 1, \dots, K-1$. Note that $K \leq Ct^{1/3}$. 
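The chaining below works because the factor $G_r(s)$ in (\ref{Gdef}) is multiplicative over consecutive time intervals: $G_r(u)\, G_u(s) = G_r(s)$ for $r < u < s$, as both the exponent and the power of $(t - \cdot)$ telescope. A numerical spot check with illustrative values (not part of the proof):

```python
import math

def G(r, s, t):
    """The factor G_r(s) from (Gdef)."""
    a = (3 * math.pi ** 2) ** (1 / 3)
    return (math.exp(-a * ((t - r) ** (1 / 3) - (t - s) ** (1 / 3)))
            * ((t - s) / (t - r)) ** (1 / 6))

t = 100.0
r, u, s = 5.0, 40.0, 80.0
lhs = G(r, u, t) * G(u, s, t)   # two-step factor
rhs = G(r, s, t)                # one-step factor
```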
By Lemma \ref{EZbound}, for $i = 0, 1, \dots, K-1$, \begin{equation}\label{EZsi} \E[Z(s_{i+1})|{\cal F}_{s_i}] = \exp \bigg( - (3 \pi^2)^{1/3} \big((t-s_i)^{1/3} - (t-s_{i+1})^{1/3} \big) \bigg) \bigg( \frac{t-s_{i+1}}{t-s_i} \bigg)^{1/6}Z(s_i) D_i, \end{equation} where \begin{equation}\label{Dibound} \exp(-C_3 \delta^{-1/3} t^{-1/3}) \leq D_i \leq \exp(C_4 \delta^{-1/3} t^{-1/3}). \end{equation} Because the particles alive at time $s_{i+1}$ are a subset of the particles that would be alive at time $s_{i+1}$ if particles were killed at $L(s_i)$, rather than $L(s)$, for $s \in [s_i, s_{i+1}]$, and the right-hand side of (\ref{Dineq}) is bounded by a constant when $s \geq \kappa t^{2/3}$ and $L \leq ct^{1/3}$, it follows from (\ref{Yexp}) that \begin{equation}\label{EYsi} \E[Y(s_{i+1})|{\cal F}_{s_i}] \leq C Z(s_i) \end{equation} for $i = 0, 1, \dots, K-1$. Let $$Z'(s_{i+1}) = \sum_{j=1}^{N(s_{i+1})} e^{\sqrt{2} X_j(s_{i+1})} \sin \bigg( \frac{\pi X_j(s_{i+1})}{L(s_i)} \bigg),$$ which is the same as $Z(s_{i+1})$ except that $L(s_i)$ rather than $L(s_{i+1})$ appears in the argument of the sine. Because $\sin(\pi x/L(s_{i+1})) \leq C \sin (\pi x/L(s_i))$ for all $x \in [0, L(s_{i+1})]$, we have $Z(s_{i+1}) \leq CZ'(s_{i+1})$. By Lemma \ref{varZlem}, if there is a single particle at $x$ at time $s_i$, then $$\mbox{Var}(Z(s_{i+1})|{\cal F}_{s_i}) \le \E[Z(s_{i+1})^2|{\cal F}_{s_i}] \leq C\E[Z'(s_{i+1})^2|{\cal F}_{s_i}] \leq \frac{Ce^{\sqrt{2} x}e^{\sqrt{2}L(s_i)}(s_{i+1} - s_i)}{L(s_i)^4}.$$ Because particles move and branch independently, it follows by summing over the particles at time $s_i$ that \begin{equation}\label{varZsi} \mbox{Var}(Z(s_{i+1})|{\cal F}_{s_i}) \leq \frac{CY(s_i) e^{\sqrt{2} L(s_i)} (s_{i+1} - s_i)}{L(s_i)^4} \leq C t^{-2/3} Y(s_i)e^{\sqrt{2} L(s_i)}.
\end{equation} Using the conditional variance formula, equations (\ref{EZsi}), (\ref{Dibound}), and (\ref{varZsi}), and the fact that $s<(1-\delta)t$, \begin{align} \mbox{Var}&(Z(s_{i+1})) = \E[\mbox{Var}(Z(s_{i+1})|{\cal F}_{s_i})] + \mbox{Var}(\E[Z(s_{i+1})|{\cal F}_{s_i}]) \nonumber \\ &\leq C t^{-2/3} e^{\sqrt{2} L(s_i)} \E[Y(s_i)] + e^{ - 2(3 \pi^2)^{1/3} \big((t-s_i)^{1/3} - (t-s_{i+1})^{1/3} \big) } \bigg( \frac{t-s_{i+1}}{t-s_i} \bigg)^{1/3} \mbox{Var}(D_i Z(s_i)) \nonumber \\ &\leq C t^{-2/3} e^{\sqrt{2} L(s_i)} \E[Y(s_i)] + e^{2C_4 \delta^{-1/3} t^{-1/3}} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_i)^{1/3} - (t-s_{i+1})^{1/3} \big) }\mbox{Var}(Z(s_i)). \nonumber \end{align} Therefore, by induction, \begin{align} \mbox{Var}(Z(s)) &\leq C t^{-2/3} \sum_{i=0}^{K-1} e^{\sqrt{2} L(s_i)} \bigg( \prod_{j=i+1}^{K-1} e^{2C_4 \delta^{-1/3} t^{-1/3}} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_j)^{1/3} - (t-s_{j+1})^{1/3} \big) } \bigg) \E[Y(s_i)] \nonumber \\ &\leq C t^{-2/3} \sum_{i=0}^{K-1} e^{\sqrt{2} L(s_i)} e^{2K C_4 \delta^{-1/3} t^{-1/3}} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_{i+1})^{1/3} - (t-s)^{1/3} \big) } \E[Y(s_i)]. \nonumber \end{align} By (\ref{EYsi}), for $i = 1, \dots, K-1$, we have $\E[Y(s_i)] = \E[\E[Y(s_i)|{\cal F}_{s_{i-1}}]] \leq C \E[Z(s_{i-1})]$. Because $K \leq Ct^{1/3}$, we have $e^{2KC_4 \delta^{-1/3} t^{-1/3}} \leq C$. Therefore, \begin{align}\label{mainvareq} \mbox{Var}(Z(s)) &\leq C t^{-2/3} \sum_{i=1}^{K-1} e^{\sqrt{2} L(s_i)} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_{i+1})^{1/3} - (t-s)^{1/3} \big) } \E[Z(s_{i-1})] \nonumber \\ &\qquad+ C t^{-2/3} e^{\sqrt{2} L(0)} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_{1})^{1/3} - (t-s)^{1/3} \big) } Y(0). \end{align} Denote the two terms on the right-hand side of (\ref{mainvareq}) by $T_1$ and $T_2$.
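In the induction step above, the product of exponentials telescopes: because $s_K = s$, $$\prod_{j=i+1}^{K-1} e^{ - 2(3 \pi^2)^{1/3} \big((t-s_j)^{1/3} - (t-s_{j+1})^{1/3} \big) } = e^{ - 2(3 \pi^2)^{1/3} \big((t-s_{i+1})^{1/3} - (t-s)^{1/3} \big) },$$ while the remaining factors combine to give $e^{2(K-1-i) C_4 \delta^{-1/3} t^{-1/3}} \leq e^{2K C_4 \delta^{-1/3} t^{-1/3}}$.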
Because $[(t-s)/t]^{1/6}$ is bounded above and below by positive constants when $0 \leq s \leq (1 - \delta)t$, it follows from Lemma \ref{EZbound} that there are constants $C'$ and $C''$, depending on $\delta$, such that for $i = 0, 1, \dots, K$, $$C' Z(0) \exp \big( -(3 \pi^2)^{1/3} (t^{1/3} - (t - s_i)^{1/3}) \big) \leq \E[Z(s_i)] \leq C'' Z(0) \exp \big( -(3 \pi^2)^{1/3} (t^{1/3} - (t - s_i)^{1/3}) \big).$$ Therefore, using that $\sqrt{2} c = (3 \pi^2)^{1/3}$, \begin{align}\label{prelimT1} T_1 &\leq C t^{-2/3} \sum_{i=1}^{K-1} \exp \bigg(\sqrt{2} L(s_i) - 2(3 \pi^2)^{1/3} \big((t-s_{i+1})^{1/3} - (t-s)^{1/3} \big) \nonumber \\ &\qquad - (3 \pi^2)^{1/3}(t^{1/3} - (t - s_{i-1})^{1/3}) \bigg) Z(0) \nonumber \\ &= C t^{-2/3} \sum_{i=1}^{K-1} \exp \big( (3 \pi^2)^{1/3} (t - s_i)^{1/3} - 2(3 \pi^2)^{1/3}((t - s_{i+1})^{1/3} - (t - s)^{1/3}) \nonumber \\ &\qquad - (3 \pi^2)^{1/3}(t^{1/3} - (t - s_{i-1})^{1/3}) \big) Z(0) \nonumber \\ &= C t^{-2/3} \exp \big( 2(3 \pi^2)^{1/3} (t - s)^{1/3} - (3 \pi^2)^{1/3} t^{1/3} \big) Z(0) \nonumber \\ &\qquad\times \sum_{i=1}^{K-1} \exp \big( (3 \pi^2)^{1/3} ((t - s_i)^{1/3} - 2(t - s_{i+1})^{1/3} + (t - s_{i-1})^{1/3}) \big). \end{align} For $i = 0, 1, \dots, K-1$, we have $t-s_{i+1} \geq \delta t$, and so $(t - s_{i})^{1/3} - (t - s_{i+1})^{1/3} \leq C$. Therefore, the sum on the right-hand side of (\ref{prelimT1}) is bounded by $C(K-1) \leq C t^{1/3}$. Thus, using $t>1$ and Lemma \ref{EZbound} again, \begin{align}\label{T1} T_1 &\leq C t^{-1/3} \exp \big( (3 \pi^2)^{1/3} t^{1/3} \big) \exp \big( 2(3 \pi^2)^{1/3}((t - s)^{1/3} - t^{1/3}) \big) \frac{Z(0)^2}{Z(0)} \nonumber \\ &\leq C t^{-1/3} \exp \big( (3 \pi^2)^{1/3} t^{1/3} \big) \frac{\E[Z(s)]^2}{Z(0)} \nonumber \\ &\leq \frac{C e^{\sqrt{2} L(0)} \E[Z(s)]^2}{L(0) Z(0)}. 
\end{align} Also, using that $t^{1/3} - (t - s_1)^{1/3} \leq C$, \begin{align}\label{T2} T_2 &= C t^{-2/3} e^{\sqrt{2} L(0)} \exp \big(- 2(3 \pi^2)^{1/3}((t - s_1)^{1/3} - (t - s)^{1/3}) \big) Y(0) \nonumber \\ &\leq C t^{-2/3} e^{\sqrt{2} L(0)} \exp \big( - 2(3 \pi^2)^{1/3}(t^{1/3} - (t - s)^{1/3}) \big) Y(0) \nonumber \\ &\leq \frac{C e^{\sqrt{2} L(0)} Y(0) \E[Z(s)]^2}{L(0)^2 Z(0)^2}. \end{align} The result now follows from (\ref{mainvareq}), (\ref{T1}), and (\ref{T2}). \end{proof} \section{Number and configuration of particles}\label{pfsec} In this section, we return to the model presented in the introduction, in which there is initially a single particle at $x$ and we are concerned with the asymptotic behavior of the process as $x \rightarrow \infty$. \subsection{The process before time $\kappa x^2$} We first consider how the branching Brownian motion evolves during the initial period between time $0$ and time $\kappa x^2$, where $\kappa > 0$ is an arbitrary constant. We will use the following result of Neveu \cite{nev87}. \begin{Lemma}\label{nevlem} Consider branching Brownian motion with drift $-\sqrt{2}$ and no absorption, started with a single particle at the origin. For each $y \geq 0$, let $K(y)$ be the number of particles that reach $-y$ in a modified process in which particles are killed upon reaching $-y$. Then there exists a random variable $W$, with $\P(0 < W < \infty) = 1$ and $\E[W] = \infty$, such that $$\lim_{y \rightarrow \infty} y e^{-\sqrt{2} y} K(y) = W \hspace{.1in}\mbox{a.s.}$$ \end{Lemma} For our process which begins with a single particle at $x$, let $K(y)$ be the number of particles that would reach $x - y$, if particles were killed upon reaching $x - y$. Note that $K(y) < \infty$ almost surely because of Kesten's result \cite{kesten} that critical branching Brownian motion with absorption dies out.
If $y$ is sufficiently large, then $y e^{-\sqrt{2} y} K(y)$ will have approximately the same distribution as the random variable $W$ in Lemma \ref{nevlem}. Our strategy for studying the branching Brownian motion between time $0$ and time $\kappa x^2$ will be to choose a sufficiently large constant $y$, wait for $K(y)$ particles to reach $x - y$, and then consider $K(y)$ independent branching Brownian motions started from $x - y$. Let $\alpha \in \R$, and let \begin{equation}\label{Zadef} Z_{\alpha} = \sum_{i=1}^{N(\kappa x^2)} e^{\sqrt{2} X_i(\kappa x^2)} \sin \bigg( \frac{\pi X_i(\kappa x^2)}{x + \alpha} \bigg) {\bf 1}_{\{X_i(\kappa x^2) \leq x + \alpha\}}. \end{equation} The following result describes the behavior of the configuration of particles at time $\kappa x^2$. \begin{Lemma}\label{smalltime} For all $\varepsilon > 0$, there exists a positive constant $C_5$, depending on $\kappa$ and $\varepsilon$ but not on $x$, such that for sufficiently large $x$, \begin{equation}\label{Yalpha} \P\big(Y(\kappa x^2) \leq C_5 x^{-1} e^{\sqrt{2} x} \big) \geq 1 - \varepsilon. \end{equation} Also, there exist positive constants $C_6$ and $C_7$, depending on $\kappa$ and $\varepsilon$ but not on $x$ or $\alpha$, such that for sufficiently large $x$, \begin{equation}\label{Zalpha} \P\big(C_6 x^{-1} e^{\sqrt{2} x} \leq Z_{\alpha} \leq C_7 x^{-1} e^{\sqrt{2} x}\big) \geq 1 - \varepsilon. \end{equation} Furthermore, \begin{equation}\label{rightmost} \lim_{x \rightarrow \infty} \P\big( X_1(\kappa x^2) \leq x + \alpha \big) = 1. \end{equation} \end{Lemma} \begin{proof} Choose $\eta > 0$ sufficiently small and $B > 0$ sufficiently large such that the random variable $W$ in Lemma \ref{nevlem} satisfies $\P(W \leq 2 \eta) < \varepsilon/8$ and $\P(W \geq B - \eta) < \varepsilon/8$. 
By Lemma \ref{nevlem}, we can choose $y > 0$ large enough that, for some random variable $W$ having the same distribution as in Lemma \ref{nevlem}, $$\P(|y e^{-\sqrt{2} y} K(y) - W| \geq \eta) < \frac{\varepsilon}{8}.$$ These conditions imply that \begin{equation}\label{delbound} \P(ye^{-\sqrt{2} y} K(y) \leq \eta) < \frac{\varepsilon}{4} \end{equation} and \begin{equation}\label{Bbound} \P(ye^{-\sqrt{2} y} K(y) \geq B) < \frac{\varepsilon}{4}. \end{equation} We can also choose $y$ to be large enough that $y \geq 2|\alpha|$ and $B e^{-\sqrt{2} \alpha}/y < \varepsilon/8$. For $1 \leq i \leq N(\kappa x^2)$ and $0 \leq s \leq \kappa x^2$, let $x_i(s)$ be the position of the particle at time $s$ that is the ancestor of the particle at the location $X_i(\kappa x^2)$ at time $\kappa x^2$. Let $v_i = \inf\{s: x_i(s) = x - y\}$. Let $0 < u_1 < \dots < u_{K(y)}$ denote the times at which particles would hit $x - y$, if particles were killed upon reaching $x - y$. Note that $\{v_1, \dots, v_{N(\kappa x^2)}\} \subset \{u_1, \dots, u_{K(y)}\}$. Let ${\cal G}$ denote the $\sigma$-field generated by the set of times $\{u_1, \dots, u_{K(y)}\}$. We can choose $\rho > 0$, depending on $y$ but not on $x$, such that \begin{equation}\label{rhoprob} \P(u_{K(y)} \leq \rho) > 1 - \frac{\varepsilon}{8}. \end{equation} Throughout the proof, we will assume that $x$ is large enough that $x \geq y$, so that particles are not killed at the origin before reaching $x - y$, and that $\kappa x^2/2 \geq \rho$, so that with high probability all particles will have reached $x - y$ well before time $\kappa x^2$. Let \begin{equation}\label{Mdef} M(s) = \sum_{i=1}^{N(s)} X_i(s) e^{\sqrt{2} X_i(s)}. \end{equation} It is well-known (see, for example, Lemma 2 of \cite{hh07}) that the process $(M(s), s \geq 0)$ is a martingale.
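The way the martingale $M$ will be used is the following standard estimate. Start a single particle at $x - y$, and let $T$ be the first time at which some particle reaches $x + \alpha$. All particles lie in $(0, x + \alpha]$ up to time $T$, so every term of $M$ is nonnegative, and on the event $\{T \leq s\}$ the particle at $x + \alpha$ alone contributes $(x + \alpha) e^{\sqrt{2}(x + \alpha)}$ to $M(s \wedge T)$. Applying the Optional Sampling Theorem at the bounded stopping time $s \wedge T$ gives $$(x - y) e^{\sqrt{2}(x - y)} = M(0) = \E[M(s \wedge T)] \geq (x + \alpha) e^{\sqrt{2}(x + \alpha)} \P(T \leq s),$$ and letting $s \rightarrow \infty$ bounds the probability that some particle ever reaches $x + \alpha$ by $(x - y) e^{\sqrt{2}(x - y)}/((x + \alpha) e^{\sqrt{2}(x + \alpha)})$.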
If there is initially a single particle at $x - y$, then by the Optional Sampling Theorem, the probability that some particle eventually reaches $x + \alpha$ is at most $$\frac{(x-y) e^{\sqrt{2}(x-y)}}{(x + \alpha) e^{\sqrt{2} (x + \alpha)}}.$$ Therefore, conditional on ${\cal G}$, the probability that some descendant of a particle that reaches $x-y$ eventually reaches $x + \alpha$ is at most $$\frac{K(y) (x - y) e^{\sqrt{2}(x-y)}}{(x + \alpha) e^{\sqrt{2} (x + \alpha)}} \leq \frac{e^{-\sqrt{2} \alpha}}{y} \cdot y e^{-\sqrt{2} y} K(y).$$ Thus, the unconditional probability that some descendant of a particle that reaches $x - y$ eventually reaches $x + \alpha$ is at most $$\P(y e^{-\sqrt{2} y} K(y) > B) + \frac{B e^{-\sqrt{2} \alpha}}{y} < \frac{\varepsilon}{4} + \frac{\varepsilon}{8} = \frac{3 \varepsilon}{8}.$$ In particular, $\P(X_1(\kappa x^2) > x + \alpha) \leq \P(u_{K(y)} > \rho) + 3 \varepsilon/8 \leq \varepsilon/2$ for sufficiently large $x$, which by letting $\varepsilon \rightarrow 0$ implies (\ref{rightmost}). Let $S(\alpha) = \{i: x_i(s) < x + \alpha \mbox{ for all }s \in [v_i, \kappa x^2]\}$. Then let $$Y'_{\alpha} = \sum_{i=1}^{N(\kappa x^2)} e^{\sqrt{2} X_i(\kappa x^2)} {\bf 1}_{\{i \in S(\alpha)\}}$$ and $$Z'_{\alpha} = \sum_{i=1}^{N(\kappa x^2)} e^{\sqrt{2} X_i(\kappa x^2)} \sin \bigg( \frac{\pi X_i(\kappa x^2)}{x + \alpha} \bigg) {\bf 1}_{\{i \in S(\alpha)\}}.$$ The argument in the previous paragraph implies that \begin{equation}\label{YZsame} \P\big(Y_{\alpha}' = Y(\kappa x^2) \mbox{ and }Z_{\alpha}' = Z_{\alpha} \big) \geq 1 - \frac{\varepsilon}{2}. \end{equation} By the Strong Markov Property, the configuration of particles at time $\kappa x^2$ has the same distribution as the configuration that we would get by starting with $K(y)$ particles at $x-y$ and stopping their descendants at the times $\kappa x^2 - u_i$. Furthermore, restricting to particles in $S(\alpha)$ is equivalent to killing particles when they reach $x + \alpha$. 
Therefore, the tools of Section \ref{stripsec}, with $L = x + \alpha$, can be used to estimate the first and second moments of $Y_{\alpha}'$ and $Z_{\alpha}'$. We first apply (\ref{Yexp}) with $s = \kappa x^2 - u_i$, which when $u_{K(y)} \leq \rho$ is at least $\kappa x^2/2$. Because the right-hand side of (\ref{Dineq}) is bounded by a constant when $s$ is of the order $L^2$, it follows from (\ref{Yexp}) that there is a constant $C$, depending on $\kappa$, such that on the event $\{u_{K(y)} \leq \rho\}$, $$\E[Y_{\alpha}'|{\cal G}] \leq C K(y) e^{\sqrt{2}(x - y)} \sin \bigg( \frac{\pi (x - y)}{x + \alpha} \bigg).$$ Using $\sim$ to denote that the ratio of the two sides tends to one as $x \rightarrow \infty$, we have \begin{equation}\label{sinasym} \sin \bigg( \frac{\pi (x - y)}{x + \alpha} \bigg) \sim \frac{\pi(y + \alpha)}{x + \alpha} \sim \frac{\pi (y + \alpha)}{x}. \end{equation} Because $y \geq 2|\alpha|$, it follows that there exists a constant $C_8$ such that on the event $\{u_{K(y)} \leq \rho\}$, for sufficiently large $x$, $$\E[Y_{\alpha}'|{\cal G}] \leq C_8 x^{-1} e^{\sqrt{2} x} \cdot y e^{-\sqrt{2} y} K(y).$$ Therefore, choosing $C_5 = 8 C_8 B/\varepsilon$ and using (\ref{Bbound}), (\ref{rhoprob}), and the conditional Markov's inequality, \begin{align}\label{Yprime} \P(Y_{\alpha}' \geq C_5 x^{-1} e^{\sqrt{2} x}) &\leq \P(u_{K(y)} > \rho) + \P \bigg( y e^{-\sqrt{2} y} K(y) \geq B \bigg) + \P \bigg(Y_{\alpha}' \geq \frac{8 \E[Y_{\alpha}'|{\cal G}]}{\varepsilon} \bigg) \nonumber \\ &\leq \frac{\varepsilon}{8} + \frac{\varepsilon}{4} + \frac{\varepsilon}{8} = \frac{\varepsilon}{2}. \end{align} The result (\ref{Yalpha}) now follows from (\ref{Yprime}) and (\ref{YZsame}). 
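The asymptotics in (\ref{sinasym}) can be checked using the reflection identity $\sin(\pi - u) = \sin u$: $$\sin \bigg( \frac{\pi (x - y)}{x + \alpha} \bigg) = \sin \bigg( \pi - \frac{\pi (x - y)}{x + \alpha} \bigg) = \sin \bigg( \frac{\pi (y + \alpha)}{x + \alpha} \bigg) \sim \frac{\pi (y + \alpha)}{x + \alpha}$$ as $x \rightarrow \infty$, since $\sin u \sim u$ as $u \rightarrow 0$ and $y + \alpha \geq y/2 > 0$ by the assumption $y \geq 2|\alpha|$.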
By (\ref{Zexp}), on the event $\{u_{K(y)} \leq \rho\}$, we have \begin{align} &e^{-\pi^2 \kappa x^2/2(x + \alpha)^2} K(y) e^{\sqrt{2} (x - y)} \sin \bigg( \frac{\pi (x - y)}{x + \alpha} \bigg) \nonumber \\ &\qquad \leq \E[Z_{\alpha}'|{\cal G}] \leq e^{-\pi^2 (\kappa x^2 - \rho)/2(x + \alpha)^2} K(y) e^{\sqrt{2} (x - y)} \sin \bigg( \frac{\pi (x - y)}{x + \alpha} \bigg). \nonumber \end{align} Because (\ref{sinasym}) holds and $e^{-\pi^2 \kappa x^2/2(x+\alpha)^2} \sim e^{-\pi^2 \kappa/2} \sim e^{-\pi^2 (\kappa x^2 - \rho)/2(x + \alpha)^2}$, there are constants $C_9$ and $C_{10}$, depending on $\kappa$, such that \begin{equation}\label{EZG} C_9 x^{-1} e^{\sqrt{2} x} \cdot y e^{-\sqrt{2} y} K(y) \leq \E[Z_{\alpha}'|{\cal G}] \leq C_{10} x^{-1} e^{\sqrt{2} x} \cdot y e^{-\sqrt{2} y} K(y) \end{equation} when $u_{K(y)} \leq \rho$ for sufficiently large $x$. Furthermore, by applying Lemma \ref{varZlem} to the configuration with a single particle at $x-y$ at time zero and then summing over the particles, we get $$\mbox{Var}(Z_{\alpha}'|{\cal G}) \leq \frac{C K(y) e^{\sqrt{2} (x - y)} e^{\sqrt{2} (x + \alpha)} \kappa x^2}{2(x + \alpha)^4} \leq \frac{C e^{\sqrt{2} \alpha} \cdot ye^{-\sqrt{2} y} K(y)}{y} \big( x^{-1} e^{\sqrt{2} x} \big)^2$$ for sufficiently large $x$.
By the conditional Chebyshev's Inequality, on the event $\{u_{K(y)} \leq \rho\}$, $$\P \bigg( \big| Z_{\alpha}' - \E[Z_{\alpha}'|{\cal G}] \big| > \frac{1}{2} \E[Z_{\alpha}'|{\cal G}] \bigg| {\cal G} \bigg) \leq \frac{4\mbox{Var}(Z_{\alpha}'|{\cal G})}{(\E[Z_{\alpha}'|{\cal G}])^2} \leq \frac{Ce^{\sqrt{2} \alpha}}{y \cdot y e^{-\sqrt{2} y} K(y)}.$$ In view of (\ref{delbound}), it follows that for $y$ large enough that $Ce^{\sqrt{2} \alpha}/(\eta y) < \varepsilon/8$ and sufficiently large $x$, $$\P \bigg( \big| Z_{\alpha}' - \E[Z_{\alpha}'|{\cal G}] \big| > \frac{1}{2} \E[Z_{\alpha}'|{\cal G}] \bigg) \leq \P(u_{K(y)} > \rho) + \P(y e^{-\sqrt{2} y} K(y) \leq \eta) + \frac{Ce^{\sqrt{2} \alpha}}{\eta y} < \frac{\varepsilon}{2}.$$ Combining this result with (\ref{EZG}), we get that for sufficiently large $x$, the event $$\frac{C_9}{2} \cdot x^{-1} e^{\sqrt{2} x} \cdot y e^{-\sqrt{2} y} K(y) \leq Z_{\alpha}' \leq \frac{3C_{10}}{2} \cdot x^{-1} e^{\sqrt{2} x} \cdot y e^{-\sqrt{2} y} K(y)$$ holds with probability at least $1 - \varepsilon/2$. Thus, using (\ref{delbound}) and (\ref{Bbound}), for sufficiently large $x$ we have $$\frac{C_9 \eta}{2} \cdot x^{-1} e^{\sqrt{2} x} \leq Z_{\alpha}' \leq \frac{3BC_{10}}{2} \cdot x^{-1} e^{\sqrt{2} x}$$ with probability at least $1 - \varepsilon$. The result (\ref{Zalpha}) now follows by setting $C_6 = C_9 \eta/2$ and $C_7 = 3BC_{10}/2$ and invoking (\ref{YZsame}). \end{proof} \subsection{A lower bound for $Z(s)$}\label{Zlow} Let $t = \tau x^3 = 2 \sqrt{2} x^3/(3 \pi^2)$. For $0 < s < t$, recall that $$L(s) = x \bigg(1 - \frac{3 \pi^2 s}{2 \sqrt{2} x^3} \bigg)^{1/3} = c (t - s)^{1/3}$$ as in (\ref{Ldef}), and let $$Z(s) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \sin \bigg( \frac{\pi X_i(s)}{L(s)} \bigg) {\bf 1}_{\{X_i(s) \leq L(s)\}}.$$ Our goal in this subsection is to find a lower bound for $Z(s)$. Such a bound will be provided by Proposition \ref{Zlower} below. 
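Note that the two expressions for $L(s)$ agree because $\sqrt{2} c = (3 \pi^2)^{1/3}$ implies $ct^{1/3} = x$: indeed, $$c t^{1/3} = \frac{(3 \pi^2)^{1/3}}{\sqrt{2}} \bigg( \frac{2 \sqrt{2} x^3}{3 \pi^2} \bigg)^{1/3} = \frac{(3 \pi^2)^{1/3}}{\sqrt{2}} \cdot \frac{2^{1/2} x}{(3 \pi^2)^{1/3}} = x,$$ so that $c(t-s)^{1/3} = ct^{1/3}(1 - s/t)^{1/3} = x \big( 1 - \frac{3 \pi^2 s}{2 \sqrt{2} x^3} \big)^{1/3}$. In particular, $\sqrt{2} L(s) = (3 \pi^2)^{1/3}(t-s)^{1/3}$, an identity used repeatedly below.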
To prove this result, we will consider the following new process, which will also be useful in later subsections. Fix $\alpha \in \R$, and let $t_{\alpha} = \tau (x + \alpha)^3$, so that $ct_{\alpha}^{1/3} = x + \alpha$, where $c$ is defined in (\ref{cdef}). For $0 \leq s \leq t_{\alpha}$, let $L_{\alpha}(s) = c(t_{\alpha} - s)^{1/3}$. Note that $L_0(s) = L(s)$. Now suppose that, in addition to being killed at the origin, particles to the right of $x + \alpha$ are killed at time $\kappa x^2$, and for $\kappa x^2 < s < t_{\alpha} + \kappa x^2$, particles are killed at time $s$ if they reach $L_{\alpha}(s - \kappa x^2)$. Let $N_{\alpha}(s)$ be the number of particles alive at time $s$, and let $X_{1,\alpha}(s) \geq \dots \geq X_{N_{\alpha}(s), \alpha}(s)$ denote the positions of these particles at time $s$. Let $$Z_{\alpha}(s) = \sum_{i=1}^{N_{\alpha}(s)} e^{\sqrt{2} X_{i,\alpha}(s)} \sin \bigg( \frac{\pi X_{i,\alpha}(s)}{L_{\alpha}(s - \kappa x^2)} \bigg).$$ Note that $Z_{\alpha}(\kappa x^2)$ is the same as $Z_{\alpha}$ defined in (\ref{Zadef}). Also, let \begin{equation}\label{Yalphadef} Y_{\alpha}(s) = \sum_{i=1}^{N_{\alpha}(s)} e^{\sqrt{2} X_{i,\alpha}(s)}. \end{equation} \begin{Prop}\label{Zlower} For all $\varepsilon > 0$, there exists a constant $C > 0$ depending on $\kappa$, $\delta$, and $\varepsilon$ such that for sufficiently large $x$, $$\P\bigg(Z(s) \geq C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t - s)^{1/3} \big) \bigg) > 1 - \varepsilon$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. \end{Prop} \begin{proof} We consider the process defined above. Recall that $({\cal F}_u)_{u \geq 0}$ is the natural filtration associated with the branching Brownian motion. 
By Lemma \ref{EZbound} and the Markov property, there exist positive constants $C'$ and $C''$, depending on $\kappa$ and $\delta$, such that for all $s \in [2 \kappa x^2, (1 - \delta/2)t_{\alpha}]$, $$C' Z_{\alpha} G_0(s - \kappa x^2) \leq \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}] \leq C'' Z_{\alpha} G_0(s - \kappa x^2).$$ Because $(t_{\alpha} - (s - \kappa x^2))^{1/3} - (t_{\alpha}-s)^{1/3}$ is bounded by a constant, it follows from (\ref{Gdef}) that \begin{align}\label{EZalphas} C' Z_{\alpha} \exp \big( -(3 \pi^2)^{1/3} \big(t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3} \big) \big) &\leq \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}] \nonumber \\ &\leq C'' Z_{\alpha} \exp \big( -(3 \pi^2)^{1/3} \big(t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3} \big) \big). \end{align} Likewise, by Proposition \ref{VarZProp}, \begin{align} \mbox{Var}(Z_{\alpha}(s)|{\cal F}_{\kappa x^2}) &\leq C \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}]^2 \bigg( \frac{e^{\sqrt{2} L_{\alpha}(0)}}{L_{\alpha}(0) Z_{\alpha}} + \frac{e^{\sqrt{2} L_{\alpha}(0)} Y(\kappa x^2)}{L_{\alpha}(0)^2 Z_{\alpha}^2} \bigg) \nonumber \\ &= C \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}]^2 \bigg( \frac{e^{\sqrt{2} x} e^{\sqrt{2} \alpha}}{(x + \alpha) Z_{\alpha}} + \frac{e^{\sqrt{2} x} e^{\sqrt{2} \alpha} Y(\kappa x^2)}{(x + \alpha)^2 Z_{\alpha}^2} \bigg). \nonumber \end{align} Let $A$ be the event that $Y(\kappa x^2) \leq C_5 x^{-1} e^{\sqrt{2} x}$ and $Z_{\alpha} \geq C_6 x^{-1} e^{\sqrt{2} x}$, where $C_5$ and $C_6$ are the constants from Lemma \ref{smalltime} applied with $\varepsilon/8$ in place of $\varepsilon$. Lemma \ref{smalltime} then gives $\P(A) > 1 - \varepsilon/4$ for sufficiently large $x$. 
On $A$, we have $$\mbox{Var}(Z_{\alpha}(s)|{\cal F}_{\kappa x^2}) \leq C \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}]^2 e^{\sqrt{2} \alpha} \bigg( \frac{x}{C_6(x + \alpha)} + \frac{C_5 x}{C_6^2 (x + \alpha)^2} \bigg).$$ Therefore, if $\alpha$ is chosen to be a large enough negative number that $C e^{\sqrt{2} \alpha}/C_6 < \varepsilon/8$, then $\mbox{Var}(Z_{\alpha}(s)|{\cal F}_{\kappa x^2}) \leq (\varepsilon/8) \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}]^2$ on $A$ for sufficiently large $x$. It follows from the conditional Chebyshev's Inequality that for sufficiently large $x$, \begin{equation}\label{PZalpha} \P\bigg(Z_{\alpha}(s) < \frac{1}{2} \E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}] \bigg) \leq \P(A^c) + \frac{4 \varepsilon}{8} < \frac{3\varepsilon}{4}. \end{equation} By (\ref{EZalphas}), on $A$ we have $$\E[Z_{\alpha}(s)|{\cal F}_{\kappa x^2}] \geq C x^{-1} \exp \big( \sqrt{2} x - (3 \pi^2)^{1/3} \big(t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3} \big) \big).$$ Thus, using (\ref{PZalpha}) and the fact that $\P(A^c) < \varepsilon/4$, there is a positive constant $C$ such that for all $s \in [2 \kappa x^2, (1 - \delta/2) t_{\alpha}]$, $$\P \bigg( Z_{\alpha}(s) \geq C x^{-1} \exp \big( \sqrt{2} x - (3 \pi^2)^{1/3} \big(t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3} \big) \big) \bigg) \geq 1 - \varepsilon$$ for sufficiently large $x$. Note that $|t_{\alpha}^{1/3} - t^{1/3}|$ is bounded by a constant which depends on $\alpha$, and thus on $\varepsilon$. Likewise, \begin{equation}\label{talphabound} \sup_{\kappa x^2 \leq s \leq (1 - \delta/2)t_{\alpha}} |(t_{\alpha} - s)^{1/3} - (t - s)^{1/3}| \end{equation} is bounded by a constant which depends on $\alpha$ and $\delta$. Furthermore, we have $\sqrt{2} x = (3 \pi^2)^{1/3} t^{1/3}$. Because $(1 - \delta/2)t_{\alpha} \geq (1 - \delta) t$ for sufficiently large $x$, we obtain the result of the proposition with $Z_{\alpha}(s)$ in place of $Z(s)$, provided that $\alpha$ is a sufficiently large negative number. 
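To see that the expression in (\ref{talphabound}) is bounded, one can use the identity $a^{1/3} - b^{1/3} = (a - b)/(a^{2/3} + a^{1/3} b^{1/3} + b^{2/3})$ for $a, b > 0$. For $\kappa x^2 \leq s \leq (1 - \delta/2) t_{\alpha}$, we have $t_{\alpha} - s \geq (\delta/2) t_{\alpha} \geq C x^3$ and $t - s \geq t_{\alpha} - s$ because $\alpha < 0$, while $t - t_{\alpha} = \tau (x^3 - (x + \alpha)^3) \leq C |\alpha| x^2$ for sufficiently large $x$. Therefore, $$|(t_{\alpha} - s)^{1/3} - (t - s)^{1/3}| \leq \frac{t - t_{\alpha}}{(t_{\alpha} - s)^{2/3}} \leq C,$$ where $C$ depends on $\alpha$ and $\delta$.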
To complete the proof, recall that $L(s) = c(t - s)^{1/3}$ and $L_{\alpha}(s - \kappa x^2) = c(t_{\alpha} - s + \kappa x^2)^{1/3}$, where $t = \tau x^3$ and $t_{\alpha} = \tau (x + \alpha)^3$. Therefore, there is a constant $\alpha_0 < 0$ such that if $\alpha < \alpha_0$, then $L_{\alpha}(s - \kappa x^2) < L(s)$ for sufficiently large $x$. Also, $L(s)/2 < L_{\alpha}(s - \kappa x^2)$ for sufficiently large $x$. Thus, if $\alpha < \alpha_0$, there exists a constant $C$ such that for sufficiently large $x$, $$\sin \bigg( \frac{\pi z}{L_{\alpha}(s - \kappa x^2)} \bigg) \leq C \sin \bigg( \frac{\pi z}{L(s)} \bigg)$$ for all $z \in [0, L_{\alpha}(s - \kappa x^2)]$. Because killing particles at a right boundary can only reduce the number of particles in the system, it follows that if $\alpha < \alpha_0$, then $Z_{\alpha}(s) \leq C Z(s)$ for sufficiently large $x$. The result follows. \end{proof} \subsection{Upper bounds for $Z(s)$ and $Y(s)$} Recall that $t = \tau x^3$. The next lemma shows that it is unlikely for any particle ever to get far to the right of $L(s)$ for $s \in [2 \kappa x^2, (1 - \delta)t]$. \begin{Lemma}\label{killalpha} Let $\varepsilon > 0$. For all $\alpha > 0$, let $t_{\alpha} = \tau(x + \alpha)^3$, and let $L_{\alpha}(s) = c(t_{\alpha} - s)^{1/3}$ for $0 \leq s \leq t_{\alpha}$. Then there exists a positive constant $C_{11}$, depending on $\kappa$, $\delta$, and $\varepsilon$ but not on $\alpha$ or $x$, such that for sufficiently large $x$, $$\P\big(X_1(s) \leq L_{\alpha}(s - \kappa x^2) \mbox{ for all }s \in [\kappa x^2, (1 - \delta)t] \big) \geq 1 - \varepsilon - C_{11} e^{-\sqrt{2} \alpha}.$$ \end{Lemma} \begin{proof} Suppose there is a particle at the location $z \leq c t_{\alpha}^{1/3}=x+\alpha$ at time $\kappa x^2$. 
By Lemma \ref{L:sumRj} with $t = t_{\alpha}$, the probability that a descendant of this particle reaches $L_{\alpha}(s - \kappa x^2)$ for some $s \in [\kappa x^2, (1 - \delta/2)t_{\alpha}]$ is at most $$C e^{-(3 \pi^2 t_{\alpha})^{1/3}} \bigg( e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L_{\alpha}(0)} \bigg) t_{\alpha}^{1/3} + z e^{\sqrt{2} z} t_{\alpha}^{-1/3} \bigg).$$ Therefore, using the bound $z t_{\alpha}^{-1/3} \leq c$ and applying the Markov property, we get that, on the event $\{X_1(\kappa x^2) < x + \alpha\}$, the conditional probability given ${\cal F}_{\kappa x^2}$ that a particle reaches $L_{\alpha}(s - \kappa x^2)$ for some $s \in [\kappa x^2, (1 - \delta/2)t_{\alpha}]$ is at most \begin{equation}\label{hitbd} Ce^{-\sqrt{2}(x + \alpha)} \left( t_{\alpha}^{1/3} Z_{\alpha}(\kappa x^2) + Y(\kappa x^2) \right). \end{equation} Let $A$ be the event that $X_1(\kappa x^2) < x + \alpha$, $Y(\kappa x^2) \leq C_5 x^{-1} e^{\sqrt{2} x}$, and $Z_{\alpha}(\kappa x^2) \leq C_7 x^{-1} e^{\sqrt{2} x}$, where $C_5$ and $C_7$ are the constants from Lemma \ref{smalltime} with $\varepsilon/3$ in place of $\varepsilon$. On $A$, for sufficiently large $x$, the expression in (\ref{hitbd}) is at most $$C t_{\alpha}^{1/3} x^{-1} e^{-\sqrt{2} \alpha} + C x^{-1} e^{-\sqrt{2} \alpha} \leq C_{11} e^{-\sqrt{2} \alpha}.$$ Because $\P(A) > 1 - \varepsilon$ for sufficiently large $x$ by Lemma \ref{smalltime} and the fact that $(1 - \delta/2)t_{\alpha} \geq (1 - \delta)t$ for sufficiently large $x$, the result follows. \end{proof} The next lemma shows that at any fixed time $s \in [2 \kappa x^2, (1 - \delta)t]$, it is unlikely that there is any particle near or to the right of $L(s)$. \begin{Lemma}\label{withina} Let $a > 0$ be a constant, and let $\varepsilon > 0$. Then for sufficiently large $x$, we have $$\P(X_1(s) > L(s) - a) < \varepsilon$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$.
\end{Lemma} \begin{proof} We consider the process defined at the beginning of Section \ref{Zlow} in which at time $\kappa x^2$, particles to the right of $x + \alpha$ are killed, and for $\kappa x^2 < s < t_{\alpha} + \kappa x^2$, particles are killed at time $s$ if they reach $L_{\alpha}(s - \kappa x^2)$. By (\ref{rightmost}), for sufficiently large $x$, the probability that some particle is killed at time $\kappa x^2$ is at most $\varepsilon/4$. By applying Lemma \ref{killalpha} with $\varepsilon/4$ in place of $\varepsilon$ and choosing $\alpha > 0$ large enough that $C_{11} e^{-\sqrt{2} \alpha} < \varepsilon/4$, we get that the probability that a particle is killed between times $\kappa x^2$ and $(1 - \delta)t$ is at most $\varepsilon/2$. Thus, with probability at least $1 - 3 \varepsilon/4$, no particle is killed until at least time $(1 - \delta)t$. Suppose $s \in [2 \kappa x^2, (1 - \delta)t]$. Let $K_{\alpha}(s)$ be the number of particles at time $s$ between $L(s) - a$ and $L_{\alpha}(s - \kappa x^2)$. By Lemma \ref{densityprop} with $t_{\alpha}$ in place of $t$, we have \begin{align} \E[K_{\alpha}(s)|{\cal F}_{\kappa x^2}] &\leq C t_{\alpha}^{-1/3} e^{-(3 \pi^2)^{1/3}(t_{\alpha}^{1/3} - (t_{\alpha} - s + \kappa x^2)^{1/3})} Z_{\alpha} \nonumber \\ &\qquad \times \int_{L(s) - a}^{L_{\alpha}(s - \kappa x^2)} e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L_{\alpha}(s - \kappa x^2)} \bigg) \: dy. 
\nonumber \end{align} For sufficiently large $x$, the expression $L_{\alpha}(s - \kappa x^2) - (L(s) - a) = c(t_{\alpha} - s + \kappa x^2)^{1/3} - c(t - s)^{1/3} + a$ is bounded above by a constant depending on $\alpha$ and $a$, and thus $$\int_{L(s) - a}^{L_{\alpha}(s - \kappa x^2)} e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L_{\alpha}(s - \kappa x^2)} \bigg) \: dy \leq \frac{C e^{-\sqrt{2} L_{\alpha}(s - \kappa x^2)}}{L_{\alpha}(s - \kappa x^2)} \leq C t_{\alpha}^{-1/3} e^{-\sqrt{2} L_{\alpha}(s - \kappa x^2)}.$$ Therefore, on the event that $Z_{\alpha} \leq C_7 x^{-1} e^{\sqrt{2} x}$, where $C_7$ is the constant from Lemma \ref{smalltime} with $\varepsilon/8$ in place of $\varepsilon$, for sufficiently large $x$, \begin{align} &\E[K_{\alpha}(s)|{\cal F}_{\kappa x^2}] \nonumber \\ &\qquad \leq C t_{\alpha}^{-2/3} x^{-1} \exp \big( \sqrt{2} x -(3 \pi^2)^{1/3}\big(t_{\alpha}^{1/3} - (t_{\alpha} - s + \kappa x^2)^{1/3} \big) - \sqrt{2} L_{\alpha}(s - \kappa x^2) \big) \nonumber \\ &\qquad \leq C x^{-3} \exp \big( \sqrt{2} x - \sqrt{2}(x + \alpha) + (3 \pi^2)^{1/3} (t_{\alpha} - s + \kappa x^2)^{1/3} - (3 \pi^2)^{1/3}(t_{\alpha} - s + \kappa x^2)^{1/3} \big) \nonumber \\ &\qquad \leq C x^{-3} \nonumber \end{align} because the exponential is a constant which depends on $\alpha$. Therefore, by the conditional Markov's Inequality and Lemma \ref{smalltime}, for sufficiently large $x$, $$\P(K_{\alpha}(s) > 0) \leq \P(Z_{\alpha} > C_7 x^{-1} e^{\sqrt{2} x}) + C x^{-3} < \frac{\varepsilon}{8} + \frac{\varepsilon}{8} = \frac{\varepsilon}{4}.$$ Because with probability at least $1 - 3 \varepsilon/4$, no particle is killed until at least time $(1 - \delta)t$, it follows that for sufficiently large $x$, we have $\P(X_1(s) > L(s) - a) < \varepsilon$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. 
\end{proof} \begin{Prop}\label{Zupper} For all $\varepsilon > 0$, there exists a constant $C > 0$ depending on $\kappa$, $\delta$, and $\varepsilon$ such that for sufficiently large $x$, $$\P\bigg(Z(s) \leq C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t - s)^{1/3} \big) \bigg) > 1 - \varepsilon$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. \end{Prop} \begin{proof} We again work with the process defined at the beginning of Section \ref{Zlow}. By (\ref{EZalphas}) and the conditional Markov's Inequality, there is a constant $C$ depending on $\kappa$, $\delta$, and $\varepsilon$ such that for all $s \in [2 \kappa x^2, (1 - \delta)t]$, $$\P\bigg(Z_{\alpha}(s) \leq C Z_{\alpha} \exp \big( -(3 \pi^2)^{1/3} (t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3}) \big) \bigg) > 1 - \frac{\varepsilon}{4}.$$ Therefore, by (\ref{Zalpha}), for all $s \in [2 \kappa x^2, (1 - \delta)t]$, $$\P\bigg(Z_{\alpha}(s) \leq C x^{-1} \exp \big( \sqrt{2} x -(3 \pi^2)^{1/3} (t_{\alpha}^{1/3} - (t_{\alpha}-s)^{1/3}) \big) \bigg) > 1 - \frac{\varepsilon}{2}$$ for sufficiently large $x$. Because $|t_{\alpha}^{1/3} - t^{1/3}|$ and the expression in (\ref{talphabound}) are bounded by constants depending on $\alpha$, and because $\sqrt{2} x = (3 \pi^2 t)^{1/3}$, it follows that \begin{equation}\label{Zalphabd} \P\bigg(Z_{\alpha}(s) \leq C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t-s)^{1/3} \big) \bigg) > 1 - \frac{\varepsilon}{2} \end{equation} for sufficiently large $x$. From Lemma \ref{killalpha} with $\varepsilon/8$ in place of $\varepsilon$, we see that with probability at least $1 - \varepsilon/8 - C_{11}e^{-\sqrt{2} \alpha}$, no particles are killed between times $\kappa x^2$ and $(1 - \delta)t$. Therefore, if $\alpha$ is chosen large enough that $C_{11}e^{-\sqrt{2} \alpha} < \varepsilon/8$, then with probability at least $1 - \varepsilon/4$, we have $N_{\alpha}(s) = N(s)$ and $X_i(s) = X_{i,\alpha}(s)$ for $i = 1, \dots, N(s)$.
Furthermore, provided $\alpha$ is also large enough that $L_{\alpha}(s - \kappa x^2) \geq L(s)$, for sufficiently large $x$ we have $$\sin \bigg( \frac{\pi z}{L_{\alpha}(s - \kappa x^2)} \bigg) \geq C \sin \bigg( \frac{\pi z}{L(s)} \bigg)$$ for all $z \in [0, L(s)]$ and some positive constant $C$. By Lemma \ref{withina}, for sufficiently large $x$ the probability that $X_1(s) > L(s)$ is less than $\varepsilon/4$. It follows that for sufficiently large $x$, we have $Z_{\alpha}(s) \geq C Z(s)$ with probability at least $1 - \varepsilon/2$. Combining this observation with (\ref{Zalphabd}) yields the result. \end{proof} \begin{Prop}\label{Yupper} For all $\varepsilon > 0$, there exists a constant $C > 0$ depending on $\kappa$, $\delta$, and $\varepsilon$ such that for sufficiently large $x$, $$\P\bigg(Y(s) \leq C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t - s)^{1/3} \big) \bigg) > 1 - \varepsilon$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. \end{Prop} \begin{proof} We again work with the process defined at the beginning of Section \ref{Zlow}. Recall the definition of $Y_{\alpha}(s)$ from (\ref{Yalphadef}). By Lemma \ref{killalpha}, we can choose $\alpha > 0$ sufficiently large that with probability at least $1 - \varepsilon/2$, we have $X_1(s) \leq c(t_{\alpha} - s + \kappa x^2)^{1/3}$ for all $s \in [\kappa x^2, (1 - \delta)t]$. Therefore, for all $s \in [2 \kappa x^2, (1 - \delta)t]$, we have $\P(Y_{\alpha}(s) = Y(s)) > 1 - \varepsilon/2$. By Lemma \ref{densityprop} with $t_{\alpha}$ in place of $t$, for all $s \in [2 \kappa x^2, (1 - \delta)t]$, \begin{align} \E[Y_{\alpha}(s)|{\cal F}_{\kappa x^2}] &\leq \frac{C}{L_{\alpha}(s - \kappa x^2)} e^{-(3 \pi^2)^{1/3}(t_{\alpha}^{1/3} - (t_{\alpha} - s + \kappa x^2)^{1/3})} Z_{\alpha} \int_0^{L_{\alpha}(s - \kappa x^2)} \sin \bigg( \frac{\pi y}{L_{\alpha}(s - \kappa x^2)} \bigg) \: dy \nonumber \\ &\leq C e^{-(3 \pi^2)^{1/3}(t_{\alpha}^{1/3} - (t_{\alpha} - s + \kappa x^2)^{1/3})} Z_{\alpha}.
\nonumber \end{align} By combining this result with the conditional Markov's inequality and (\ref{Zalpha}), we get that there is a constant $C$ such that for sufficiently large $x$, $$\P \bigg( Y_{\alpha}(s) \leq C x^{-1} e^{\sqrt{2} x} e^{-(3 \pi^2)^{1/3}(t_{\alpha}^{1/3} - (t_{\alpha} - s + \kappa x^2)^{1/3})} \bigg) > 1 - \frac{\varepsilon}{2}$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. Because $|(t_{\alpha} - s + \kappa x^2)^{1/3} - (t - s)^{1/3}|$ is bounded by a constant depending on $\alpha$ and $(3 \pi^2)^{1/3} t_{\alpha}^{1/3} = \sqrt{2} (x - \alpha)$, there is a constant $C$ depending on $\alpha$ such that $$\P\bigg(Y_{\alpha}(s) \leq C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t - s)^{1/3} \big) \bigg) > 1 - \frac{\varepsilon}{2}$$ for all $s \in [2 \kappa x^2, (1 - \delta)t]$. The result follows because $\P(Y_{\alpha}(s) = Y(s)) > 1 - \varepsilon/2$. \end{proof} \subsection{Moments of functions of branching Brownian motion}\label{momfunsec} Suppose $\kappa > 0$ and $\delta > 0$. Let $\varepsilon > 0$. Choose a constant $B > 0$ sufficiently large that if $s = BL^2$, the right-hand side of (\ref{Dineq}) is at most $\varepsilon$. Now fix a time $s$ such that $$(B + 3 \kappa)x^2 \leq s \leq (1 - \delta)t.$$ Let $f: [0, \infty) \rightarrow \R$ and $\phi: [0,1] \rightarrow \R$ be bounded continuous functions. Let $\|f\| = \sup_{x \geq 0} |f(x)|$ and $\|\phi\| = \sup_{0 \leq x \leq 1} |\phi(x)|$. We are interested here in the quantities \begin{equation}\label{sumfX} \sum_{i=1}^{N(s)} f(X_i(s)) \end{equation} and \begin{equation}\label{sumphiX} \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \phi\bigg( \frac{X_i(s)}{L(s)} \bigg) {\bf 1}_{\{X_i(s) < L(s)\}}. \end{equation} Let $r = s - Bx^2$. Let $A$ be the event that $X_1(u) \leq L(s)$ for all $u \in [r, s]$.
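Before proceeding, we record why $L(r) - L(s)$ stays bounded, a fact used below; this is a sketch of the computation, using $\sqrt{2}\, L(v) = (3 \pi^2)^{1/3} (t - v)^{1/3}$, the choice $r = s - Bx^2$, and the constraints $t - s \geq \delta t$ and $t$ of order $x^3$:

```latex
% Concavity of the cube root gives (a+h)^{1/3} - a^{1/3} \leq h/(3a^{2/3}), so
\begin{align*}
L(r) - L(s)
&= \frac{(3 \pi^2)^{1/3}}{\sqrt{2}} \big( (t - s + Bx^2)^{1/3} - (t - s)^{1/3} \big) \\
&\leq \frac{(3 \pi^2)^{1/3}}{\sqrt{2}} \cdot \frac{Bx^2}{3 (t - s)^{2/3}}
\leq \frac{(3 \pi^2)^{1/3}}{\sqrt{2}} \cdot \frac{Bx^2}{3 (\delta t)^{2/3}}
\leq C,
\end{align*}
% because t is of order x^3, so (\delta t)^{2/3} is of order x^2.
```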
By Proposition \ref{Yupper}, there is a positive constant $C$ such that \begin{equation}\label{811} \P \bigg( Y(r) \leq C x^{-1} \exp((3 \pi^2)^{1/3}(t - r)^{1/3}) \bigg) > 1 - \varepsilon \end{equation} for sufficiently large $x$. Because $L(r) - L(s)$ is bounded above by a constant, Lemma \ref{withina} implies that \begin{equation}\label{812} \P(X_1(r) \leq L(s)) > 1 - \varepsilon \end{equation} for sufficiently large $x$. Because $M(r)$, as defined in (\ref{Mdef}), is bounded by $X_1(r)Y(r)$, we have $$M(r) \leq C L(s) x^{-1} \exp((3 \pi^2)^{1/3}(t - r)^{1/3})$$ when the events in (\ref{811}) and (\ref{812}) both occur. By the Optional Sampling Theorem, the probability, conditional on ${\cal F}_r$, that some particle reaches $L(s)$ between times $r$ and $s$ is at most $M(r)/(L(s) e^{\sqrt{2} L(s)})$. Therefore, \begin{equation}\label{821} \P(A^c) \leq 2 \varepsilon + C x^{-1} \exp \big( (3 \pi^2)^{1/3}(t - r)^{1/3} - \sqrt{2} L(s) \big). \end{equation} Because $\sqrt{2} L(s) = (3 \pi^2)^{1/3} (t - s)^{1/3}$, the exponential on the right-hand side of (\ref{821}) is bounded by a constant. Therefore, the second term on the right-hand side of (\ref{821}) tends to zero as $x \rightarrow \infty$, and thus $\P(A^c) < 3 \varepsilon$ for sufficiently large $x$. Let $S$ be the set of all $i \in \{1, \dots, N(s)\}$ such that for all $u \in [r, s]$, the particle at time $u$ that is the ancestor of the particle at $X_i(s)$ at time $s$ is positioned to the left of $L(s)$. We will work with the quantities $$X(f) = \sum_{i=1}^{N(s)} f(X_i(s)) {\bf 1}_{\{i \in S\}}$$ and $$X'(\phi) = \sum_{i=1}^{N(s)} e^{\sqrt{2} X_i(s)} \phi \bigg( \frac{X_i(s)}{L(s)} \bigg) {\bf 1}_{\{i \in S\}}.$$ Note that $X(f)$ and $X'(\phi)$ equal the sums in (\ref{sumfX}) and (\ref{sumphiX}) respectively on the event $A$, so we have the following result. \begin{Lemma}\label{fphi} Suppose $\varepsilon$, $B$, $r$, and $s$ are as defined above. 
Then for sufficiently large $x$, with probability greater than $1 - 3 \varepsilon$, the quantity $X(f)$ equals the sum in (\ref{sumfX}) and $X'(\phi)$ equals the sum in (\ref{sumphiX}) for all bounded continuous functions $f: [0, \infty) \rightarrow \R$ and $\phi: [0,1] \rightarrow \R$. \end{Lemma} Because $X(f)$ and $X'(\phi)$ are the sums that would be obtained if particles were killed at $L(s)$ between times $r$ and $s$, we can compute conditional moments of $X(f)$ and $X'(\phi)$ by applying Lemma \ref{fXvar} with $Bx^2$ in place of $s$ and $L(s)$ in place of $L$. For the rest of this subsection, we define $q_u(x,y)$ as in Lemma \ref{stripdensity} with $L(s)$ in place of $L$. Define \begin{equation}\label{hatZdef} {\hat Z} = \sum_{i=1}^{N(r)} e^{\sqrt{2} X_i(r)} \sin \bigg( \frac{\pi X_i(r)}{L(s)} \bigg) {\bf 1}_{\{X_i(r) \leq L(s)\}}. \end{equation} Note that ${\hat Z}$ is defined in the same way as $Z(r)$, except that $L(s)$ is used instead of $L(r)$ in the denominator of the sine function and in the indicator. Lemma \ref{withina} implies that with probability tending to one as $x \rightarrow \infty$, we have $X_1(r) \leq L(r) - 2(L(r) - L(s))$. Therefore, there are positive constants $C'$ and $C''$ such that for sufficiently large $x$, \begin{equation}\label{ZZhat} \P\big(C' Z(r) \leq {\hat Z} \leq C'' Z(r) \big) > 1 - \varepsilon. \end{equation} \begin{Lemma}\label{meanXf} For sufficiently large $x$, we have $$\bigg| \E[X(f)|{\cal F}_r] - {\hat Z} \frac{\pi}{L(s)^2} e^{-\pi^2 B x^2/2L(s)^2} \int_0^{\infty} f(y) g(y) \: dy \bigg| < \frac{2 \pi \|f\| \varepsilon}{L(s)^2} e^{-\pi^2 B x^2/2L(s)^2} {\hat Z},$$ where $g(y) = 2y e^{-\sqrt{2} y}$ as in Theorem \ref{config1}. 
\end{Lemma} \begin{proof} Because the right-hand side of (\ref{Dineq}) is at most $\varepsilon$ when $s = Bx^2$, it follows from Lemma \ref{fXvar} and Lemma \ref{stripdensity} that \begin{align} \E[X(f)|{\cal F}_r] &= \sum_{i=1}^{N(r)} \int_0^{L(s)} f(y) q_{Bx^2}(X_i(r), y) \: dy \nonumber \\ &= \frac{2 (1 + D)}{L(s)} e^{-\pi^2 B x^2/2L(s)^2} {\hat Z} \int_0^{L(s)} f(y) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy, \nonumber \end{align} where $|D| < \varepsilon$. Note that $$\lim_{x \rightarrow \infty} L(s) \int_0^{L(s)} f(y) e^{-\sqrt{2} y} \bigg| \frac{\pi y}{L(s)} - \sin \bigg( \frac{\pi y}{L(s)} \bigg) \bigg| \: dy = 0$$ and $$\lim_{x \rightarrow \infty} \int_{L(s)}^{\infty} f(y) e^{-\sqrt{2} y} \cdot \pi y \: dy = 0.$$ It follows that $$L(s) \int_0^{L(s)} f(y) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy = \int_0^{\infty} f(y) e^{-\sqrt{2} y} \cdot \pi y \: dy + \gamma(x),$$ where $\gamma(x) \rightarrow 0$ as $x \rightarrow \infty$. Therefore, \begin{equation}\label{Xfapprox} \E[X(f)|{\cal F}_r] = {\hat Z} \frac{\pi(1 + D)}{L(s)^2} e^{-\pi^2 B x^2/2L(s)^2} \bigg( \int_0^{\infty} f(y) g(y) \: dy + \frac{2 \gamma(x)}{\pi} \bigg). \end{equation} To obtain the result from (\ref{Xfapprox}), first note that the error term involving $\gamma(x)$ is bounded by $2 (1 + \varepsilon) L(s)^{-2} e^{-\pi^2 B x^2/2L(s)^2}{\hat Z} \gamma(x)$, and then bound the remaining error term involving $D$ by $\pi \varepsilon L(s)^{-2} e^{-\pi^2 B x^2/2L(s)^2} \|f\| {\hat Z}$. 
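The last step uses that $g$ is a probability density, so that the integral of $fg$ is bounded by $\|f\|$; indeed,

```latex
\[
\int_0^{\infty} g(y) \: dy = \int_0^{\infty} 2y e^{-\sqrt{2} y} \: dy
= \frac{2}{(\sqrt{2})^2} = 1,
\qquad \text{so} \qquad
\bigg| \int_0^{\infty} f(y) g(y) \: dy \bigg| \leq \|f\|.
\]
```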
\end{proof} \begin{Lemma}\label{varXf} There is a constant $C$ such that for sufficiently large $x$, $$\textup{Var}(X(f)|{\cal F}_r) \leq \frac{C Y(r) e^{\sqrt{2} L(s)}}{x^{11/2}}.$$ \end{Lemma} \begin{proof} By summing over the contributions of the particles at time $r$ and applying Lemma \ref{fXvar}, we get \begin{align}\label{condvarX} \mbox{Var}(X(f)|{\cal F}_r) &\leq \sum_{i=1}^{N(r)} \int_0^{L(s)} f(y)^2 \: q_{Bx^2}(X_i(r),y) \: dy \nonumber \\ &\hspace{.3in}+ 2 \sum_{i=1}^{N(r)} \int_0^{Bx^2} \int_0^{L(s)} q_u(X_i(r), z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du. \end{align} The first term on the right-hand side of (\ref{condvarX}) is bounded by $\|f\|^2 \E[X(1)|{\cal F}_r]$, where $X(1)$ denotes the value of $X(f)$ when $f(x) = 1$ for all $x$. Consequently, by Lemma \ref{meanXf}, this term is bounded above by $C {\hat Z} x^{-2} \leq C Y(r) x^{-2} \leq C Y(r) e^{\sqrt{2} L(s)}/x^{11/2}$. It remains to bound the second term. The strategy is very similar to that used in the proof of Proposition 14 in \cite{bbs} and involves splitting the outer integral into four pieces. Suppose $0 < w < L(s)$. Using Lemma \ref{stripdensity} and equations (\ref{q1}) and (\ref{q5}), \begin{align}\label{vt1} &\int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C}{L(s)^2} \int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) e^{2 \sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^2 \bigg( \int_0^{L(s)} e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C}{L(s)^4} \int_0^{L(s)} e^{2 \sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^2 \bigg( \int_0^{Bx^2/2} q_u(w,z) \: du \bigg) \: dz
\nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w}}{L(s)^4} \int_0^{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^2 \frac{w(L(s) - z)}{L(s)} \: dz \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w} e^{\sqrt{2} L(s)}}{L(s)^6}. \end{align} Using Lemma \ref{stripdensity} and (\ref{q2}), \begin{align}\label{vt2} &\int_{Bx^2/2}^{Bx^2 - L(s)^{7/4}} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_{Bx^2/2}^{Bx^2 - L(s)^{7/4}} \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) e^{-\sqrt{2} z} \sin \bigg(\frac{\pi z}{L(s)} \bigg) \nonumber \\ &\qquad \qquad \times \bigg( \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \cdot \frac{CL(s)^3}{(Bx^2 - u)^{3/2}} \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq CL(s)^3 e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \bigg( \int_{Bx^2/2}^{Bx^2 - L(s)^{7/4}} \frac{1}{(Bx^2 - u)^3} \: du \bigg) \nonumber \\ &\qquad \qquad \times \bigg( \int_0^{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^3 \: dz \bigg) \bigg( \int_0^{L(s)} e^{-\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \bigg)^2 \nonumber \\ &\qquad \leq C L(s)^3 e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \cdot \frac{1}{L(s)^{7/2}} \cdot \frac{e^{\sqrt{2} L(s)}}{L(s)^3} \cdot \frac{1}{L(s)^2} \nonumber \\ &\qquad = \frac{C e^{\sqrt{2}w} e^{\sqrt{2} L(s)}}{L(s)^{11/2}} \sin \bigg( \frac{\pi w}{L(s)} \bigg). 
\end{align} Using (\ref{q3}), we get \begin{align}\label{vt3} &\int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \int_0^{2L(s)/3} q_u(w,z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \int_0^{2L(s)/3} \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \nonumber \\ &\qquad \qquad \times e^{-\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \bigg( \int_0^{L(s)} \frac{C e^{\sqrt{2}(z-y)}}{(Bx^2 - u)^{1/2}} \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \bigg( \int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \frac{1}{Bx^2 - u} \: du \bigg) \bigg( \int_0^{2L(s)/3} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \: dz \bigg) \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w} e^{2 \sqrt{2} L(s) / 3} \log L(s)}{L(s)} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \end{align} and \begin{align}\label{vt4} &\int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \int_{2L(s)/3}^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \int_{2L(s)/3}^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \nonumber \\ &\qquad \qquad \times e^{-\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \bigg( \int_0^{L(s)} \frac{C e^{\sqrt{2}(z-y)} e^{-(z-y)^2/2(Bx^2 - u)}}{(Bx^2 - u)^{1/2}} \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \bigg( \int_{Bx^2 - L(s)^{7/4}}^{Bx^2 - 1} \frac{1}{Bx^2 - u} \: du \bigg) \nonumber \\ &\qquad \qquad \times \int_{2L(s)/3}^{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \bigg( \int_0^{L(s)} e^{-\sqrt{2} y} e^{-(z-y)^2/2L(s)^{7/4}} \: dy \bigg)^2 \: dz \nonumber \\ &\qquad \leq \frac{C \log L(s)}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \nonumber \\ &\qquad \qquad \times \int_{2L(s)/3}^{L(s)} e^{\sqrt{2} z} \bigg( 
\int_0^{L(s)/3} e^{-\sqrt{2} y} e^{-(L(s)/3)^2/2L(s)^{7/4}} \: dy + \int_{L(s)/3}^{L(s)} e^{-\sqrt{2} y} \: dy \bigg)^2 \: dz \nonumber \\ &\qquad \leq \frac{C \log L(s)}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \bigg( \int_{2L(s)/3}^{L(s)} e^{\sqrt{2} z} \: dz \bigg) \bigg( e^{-L(s)^{1/4}/18} + e^{-\sqrt{2}L(s)/3} \bigg)^2 \nonumber \\ &\qquad \leq \frac{C \log L(s)}{L(s)} e^{\sqrt{2} w} e^{\sqrt{2} L(s)} e^{-L(s)^{1/4}/9} \sin \bigg( \frac{\pi w}{L(s)} \bigg). \end{align} Finally, using (\ref{q4}), \begin{align}\label{vt5} &\int_{Bx^2 - 1}^{Bx^2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} f(y) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_{Bx^2 - 1}^{Bx^2} \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) e^{-\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \big( \|f\| e \big)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w}}{L(s)^2} \sin \bigg( \frac{\pi w}{L(s)} \bigg). \end{align} The expressions in (\ref{vt1}), (\ref{vt2}), (\ref{vt3}), (\ref{vt4}), and (\ref{vt5}) are all bounded by $C e^{\sqrt{2} w} e^{\sqrt{2} L(s)}/L(s)^{11/2}$. Because $L(s)$ and $x$ are the same to within a constant factor, we get after summing over the positions of the particles at time $r$ that the second term on the right-hand side of (\ref{condvarX}) is bounded by $C Y(r) e^{\sqrt{2} L(s)}/x^{11/2}$. The result follows. \end{proof} \begin{Lemma}\label{meanXphi} For sufficiently large $x$, we have $$\bigg| \E[X'(\phi)|{\cal F}_r] - \frac{4{\hat Z}}{\pi} e^{-\pi^2 B x^2/2L(s)^2} \int_0^1 \phi(y) h(y) \: dy \bigg| < \frac{4 \|\phi\| \varepsilon}{\pi} e^{-\pi^2 B x^2/2L(s)^2} {\hat Z},$$ where $h(y) = \frac{\pi}{2} \sin(\pi y)$ as in Theorem \ref{config2}. 
\end{Lemma} \begin{proof} Because the right-hand side of (\ref{Dineq}) is at most $\varepsilon$ when $s = Bx^2$, it follows from Lemma \ref{fXvar} and Lemma \ref{stripdensity} that \begin{align} \E[X'(\phi)|{\cal F}_r] &= \sum_{i=1}^{N(r)} \int_0^{L(s)} e^{\sqrt{2} y} \phi \bigg( \frac{y}{L(s)} \bigg) q_{Bx^2}(X_i(r), y) \: dy \nonumber \\ &= \frac{2(1 + D)}{L(s)} e^{-\pi^2 Bx^2/2L(s)^2} {\hat Z} \int_0^{L(s)} \phi \bigg( \frac{y}{L(s)} \bigg) \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \nonumber \\ &= \frac{4(1 + D)}{\pi} e^{-\pi^2 Bx^2/2L(s)^2} {\hat Z} \int_0^1 \phi(y) h(y) \: dy \nonumber \end{align} where $|D| < \varepsilon$. Because $h$ is a probability density, the error term involving $D$ is bounded by $(4\|\phi\| \varepsilon/\pi) e^{-\pi^2 B x^2/2L(s)^2} {\hat Z}$, as claimed. \end{proof} \begin{Lemma}\label{varXphi} There is a constant $C$ such that for sufficiently large $x$, $$\textup{Var}(X'(\phi)|{\cal F}_r) \leq \frac{C Y(r) e^{\sqrt{2} L(s)} \log x}{x^2}.$$ \end{Lemma} \begin{proof} By summing over the contributions of the particles at time $r$ and applying Lemma \ref{fXvar}, we get \begin{align}\label{condvarphi} \mbox{Var}(X'(\phi)|{\cal F}_r) &\leq \sum_{i=1}^{N(r)} \int_0^{L(s)} e^{2 \sqrt{2} y} \phi\bigg( \frac{y}{L(s)} \bigg)^2 q_{Bx^2}(X_i(r), y) \: dy \nonumber \\ &\qquad + 2 \sum_{i=1}^{N(r)} \int_0^{Bx^2} \int_0^{L(s)} q_u(X_i(r), z) \bigg( \int_0^{L(s)} e^{\sqrt{2} y} \phi\bigg( \frac{y}{L(s)} \bigg) q_{Bx^2 - u}(z,y) \: dy \bigg)^2 \: dz \: du. \end{align} To bound the first term on the right-hand side of (\ref{condvarphi}), note that if $0 < w < L(s)$, then, by Lemma \ref{stripdensity} and (\ref{q1}), \begin{align}\label{pt1} \int_0^{L(s)} e^{2 \sqrt{2} y} \phi\bigg( \frac{y}{L(s)} \bigg)^2 q_{Bx^2}(w, y) \: dy &\leq \frac{C e^{\sqrt{2} w}}{L(s)} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \int_0^{L(s)} e^{\sqrt{2} y} \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \nonumber \\ &\leq \frac{C e^{\sqrt{2} w} e^{\sqrt{2} L(s)}}{L(s)^2} \sin \bigg( \frac{\pi
w}{L(s)} \bigg). \end{align} We bound the second term on the right-hand side of (\ref{condvarphi}) by breaking the outer integral into two pieces. Using (\ref{q5}), if $0 < w < L(s)$, then \begin{align}\label{pt2} &\int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} e^{\sqrt{2} y} \phi\bigg( \frac{y}{L(s)} \bigg) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \sin \bigg( \frac{\pi y}{L(s)} \bigg) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq C \int_0^{Bx^2/2} \int_0^{L(s)} q_u(w,z) e^{2 \sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq C \int_0^{L(s)} e^{\sqrt{2} w} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg)^2 \frac{w(L(s) - z)}{L(s)} \: dz \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w} e^{\sqrt{2} L(s)}}{L(s)^2}. \end{align} Furthermore, by using (\ref{q6}) in the third line, making the substitution $v = Bx^2 - u$ in the fourth line, and breaking the inner integral into the piece from $0$ to $1$ and the piece from $1$ to $Bx^2/2$ in the fifth line, we get \begin{align}\label{pt3} &\int_{Bx^2/2}^{Bx^2} \int_0^{L(s)} q_u(w,z) \bigg( \int_0^{L(s)} e^{\sqrt{2} y} \phi\bigg( \frac{y}{L(s)} \bigg) q_{Bx^2-u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \int_{Bx^2/2}^{Bx^2} \int_0^{L(s)} \frac{C}{L(s)} e^{\sqrt{2} w} \sin \bigg( \frac{\pi w}{L(s)} \bigg) e^{-\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \bigg( \int_0^{L(s)} e^{\sqrt{2} y} q_{Bx^2 - u}(z,y) \: dy \bigg)^2 \: dz \: du \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w}}{L(s)} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \int_{Bx^2/2}^{Bx^2} \int_0^{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \min \bigg\{1, \frac{(L(s) - z)^2}{Bx^2 - u} \bigg\} \: dz \: du \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w}}{L(s)} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \int_0^{L(s)} 
e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \bigg( \int_0^{Bx^2/2} \min \bigg\{1, \frac{(L(s) - z)^2}{v} \bigg\} \: dv \bigg) \: dz \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w}}{L(s)} \sin \bigg( \frac{\pi w}{L(s)} \bigg) \int_0^{L(s)} e^{\sqrt{2} z} \sin \bigg( \frac{\pi z}{L(s)} \bigg) \big(1 + (L(s) - z)^2 \log x \big) \: dz \nonumber \\ &\qquad \leq \frac{C e^{\sqrt{2} w} e^{\sqrt{2} L(s)} \log x}{L(s)^2} \sin \bigg( \frac{\pi w}{L(s)} \bigg). \end{align} The expressions in (\ref{pt1}), (\ref{pt2}), and (\ref{pt3}) are all bounded by $(C e^{\sqrt{2} w} e^{\sqrt{2} L(s)} \log x)/L(s)^2$. By summing over the positions of the particles at time $r$, we get that the right-hand side of (\ref{condvarphi}) is bounded by $(CY(r) e^{\sqrt{2} L(s)} \log x)/x^2$, which implies the result. \end{proof} \subsection{Proofs of Theorems \ref{numthm}, \ref{config1}, and \ref{config2}} In this subsection, we use the results of Section \ref{momfunsec} to prove Theorems \ref{numthm}, \ref{config1}, and \ref{config2}. \begin{proof}[Proof of Theorem \ref{numthm}] Let $\kappa = 1$. Choose $B$ as at the beginning of Section \ref{momfunsec}. Choose $s \in [(B + 3 \kappa)x^2, (1 - \delta)t]$, and let $r = s - Bx^2$ as in Section \ref{momfunsec}. Throughout the proof, the constants $C$, $C'$, and $C''$ will be allowed to depend on $B$, $\delta$ and $\varepsilon$. Recall that $X(1)$ denotes the value of $X(f)$ when $f(x) = 1$ for all $x$. By Lemma \ref{fphi}, \begin{equation}\label{XN} \P(X(1) = N(s)) > 1 - 3 \varepsilon \end{equation} for sufficiently large $x$. By Lemma \ref{meanXf}, \begin{equation}\label{955} (1 - 2 \varepsilon) {\hat Z} \frac{\pi}{L(s)^2} e^{-\pi^2 B x^2/2L(s)^2} \leq \E[X(1)|{\cal F}_r] \leq (1 + 2 \varepsilon) {\hat Z} \frac{\pi}{L(s)^2} e^{-\pi^2 B x^2/2L(s)^2}. 
\end{equation} Using Lemma \ref{varXf} and the conditional Chebyshev's Inequality, \begin{equation}\label{cheb} \P \bigg(\big|X(1) - \E[X(1)|{\cal F}_r] \big| > \frac{1}{2} \E[X(1)|{\cal F}_r] \bigg| {\cal F}_r \bigg) \leq \frac{C Y(r) e^{\sqrt{2} L(s)}}{x^{11/2} \E[X(1)|{\cal F}_r]^2} \leq \frac{C Y(r) e^{\sqrt{2} L(s)}}{x^{3/2} {\hat Z}^2}. \end{equation} By (\ref{ZZhat}), Proposition \ref{Zlower}, Proposition \ref{Zupper}, and Proposition \ref{Yupper}, there are constants $C$, $C'$ and $C''$ such that with probability at least $1 - 4 \varepsilon$, we have \begin{equation}\label{newZ} C' x^{-1} \exp \big((3 \pi^2)^{1/3}(t - r)^{1/3} \big) \leq {\hat Z} \leq C'' x^{-1} \exp \big((3 \pi^2)^{1/3}(t - r)^{1/3} \big) \end{equation} and \begin{equation}\label{newY} Y(r) \leq C x^{-1} \exp \big((3 \pi^2)^{1/3}(t - r)^{1/3} \big). \end{equation} Thus, on an event of probability at least $1 - 4 \varepsilon$, the quantity on the right-hand side of (\ref{cheb}) is bounded above by $$C x^{-1/2} \exp \big(\sqrt{2} L(s) - (3 \pi^2)^{1/3}(t - r)^{1/3} \big) = C x^{-1/2} \exp \big( (3 \pi^2)^{1/3}(t - s)^{1/3} - (3 \pi^2)^{1/3}(t - r)^{1/3} \big),$$ which tends to zero as $x \rightarrow \infty$ because the exponential term is bounded by a constant. By (\ref{955}), on this same event of probability $1 - 4 \varepsilon$, there are constants $C'$ and $C''$ such that $$C' x^{-3} \exp \big((3 \pi^2)^{1/3}(t - s)^{1/3} \big) \leq \frac{1}{2} \E[X(1)|{\cal F}_r] \leq \frac{3}{2} \E[X(1)|{\cal F}_r] \leq C'' x^{-3} \exp \big((3 \pi^2)^{1/3}(t - s)^{1/3} \big).$$ Combining these results with (\ref{XN}), we get $$\P \bigg( C' x^{-3} \exp \big((3 \pi^2)^{1/3}(t - s)^{1/3} \big) \leq N(s) \leq C'' x^{-3} \exp \big((3 \pi^2)^{1/3}(t - s)^{1/3} \big) \bigg) > 1 - 7 \varepsilon$$ for sufficiently large $x$. Because the constants $C'$ and $C''$ do not depend on $s$ and $$(3 \pi^2)^{1/3} (t - s)^{1/3} = \sqrt{2} \bigg(1 - \frac{s}{\tau x^3} \bigg)^{1/3} x,$$ the result follows. 
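For completeness, the final identity can be verified directly: the relation $\sqrt{2} x = (3 \pi^2 t)^{1/3}$ means $t = \tau x^3$ with $\tau = 2\sqrt{2}/(3 \pi^2)$, whence

```latex
\[
(3 \pi^2)^{1/3} (t - s)^{1/3}
= (3 \pi^2 t)^{1/3} \bigg( 1 - \frac{s}{t} \bigg)^{1/3}
= \sqrt{2} \bigg( 1 - \frac{s}{\tau x^3} \bigg)^{1/3} x.
\]
```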
\end{proof} We will now prove slightly more general versions of Theorems \ref{config1} and \ref{config2}. The generalizations will be useful later for the proof of Theorem \ref{rtthm}. Instead of setting $s = u x^3$ and letting $x \rightarrow \infty$, we consider a sequence $(x_n)_{n=1}^{\infty}$ tending to infinity, and then a sequence of times $(s_n)_{n=1}^{\infty}$ such that $s_n \sim u x_n^3$. We define $X_i(s)$, $N(s)$, $X(f)$, and $X'(\phi)$ as before, but with the initial particle located at $x_n$ rather than at $x$. Note that $X_i(s)$, $N(s)$, $X(f)$, and $X'(\phi)$ depend on $n$, even though this dependence is not recorded in the notation. The following proposition implies Theorem \ref{config1}. Here $\sim$ means that the ratio of the two sides tends to one as $n \rightarrow \infty$. \begin{Prop} Suppose $0 < u < \tau$. Consider a sequence of times $(s_n)_{n=1}^{\infty}$ such that $s_n \sim u x_n^3$. Let $$\chi_n(u) = \frac{1}{N(s_n)} \sum_{i=1}^{N(s_n)} \delta_{X_i(s_n)}.$$ Define $\mu$ as in Theorem \ref{config1}. Then $\chi_n(u) \Rightarrow \mu$ as $n \rightarrow \infty$. \end{Prop} \begin{proof} To show that $\chi_n(u) \Rightarrow \mu$ as $n \rightarrow \infty$, it suffices to show (see, for example, Theorem 16.16 of \cite{kall}) that for all bounded continuous functions $f: [0, \infty) \rightarrow \R$, we have \begin{equation}\label{fconvprob} \frac{1}{N(s_n)} \sum_{i=1}^{N(s_n)} f(X_i(s_n)) \rightarrow_p \int_0^{\infty} g(y) f(y) \: dy, \end{equation} where $g(y) = 2y e^{-\sqrt{2} y}$ for $y \geq 0$ and $\rightarrow_p$ denotes convergence in probability as $n \rightarrow \infty$. Fix a bounded continuous function $f: [0, \infty) \rightarrow \R$. Let $\varepsilon > 0$, and choose $B$ as at the beginning of Section \ref{momfunsec}. Let $r_n = s_n - Bx_n^2$. By Lemma \ref{fphi}, for sufficiently large $n$, \begin{equation}\label{XfX1} \P \bigg( \frac{1}{N(s_n)} \sum_{i=1}^{N(s_n)} f(X_i(s_n)) = \frac{X(f)}{X(1)} \bigg) > 1 - 3 \varepsilon.
\end{equation} By Lemma \ref{varXf} and the conditional Chebyshev's Inequality, \begin{align}\label{Xfcheb} \P \bigg( \big|X(f) - \E[X(f)|{\cal F}_{r_n}] \big| > x_n^{-19/6} e^{\sqrt{2} L(s_n)} \bigg| {\cal F}_{r_n} \bigg) &\leq \frac{C Y(r_n) e^{\sqrt{2} L(s_n)}}{x_n^{11/2}} \cdot \frac{x_n^{19/3}}{e^{2 \sqrt{2} L(s_n)}} \nonumber \\ &\leq \frac{C Y(r_n) x_n^{5/6}}{e^{\sqrt{2} L(s_n)}}. \end{align} Both (\ref{newZ}) and (\ref{newY}) hold, with $r_n$ in place of $r$, with probability at least $1 - 4 \varepsilon$ for sufficiently large $n$. Because $(t - r_n)^{1/3} - (t - s_n)^{1/3}$ is bounded by a constant, the expression obtained by replacing $Y(r_n)$ on the right-hand side of (\ref{Xfcheb}) by the upper bound from (\ref{newY}) tends to zero as $n \rightarrow \infty$, and thus is less than $\varepsilon$ for sufficiently large $n$. The same convergence holds with $X(1)$ in place of $X(f)$ on the left-hand side of (\ref{Xfcheb}). Thus, for sufficiently large $n$, on an event of probability at least $1 - 5 \varepsilon$, we have $$\frac{\E[X(f)|{\cal F}_{r_n}] - x_n^{-19/6} e^{\sqrt{2} L(s_n)}}{\E[X(1)|{\cal F}_{r_n}] + x_n^{-19/6} e^{\sqrt{2} L(s_n)}} \leq \frac{X(f)}{X(1)} \leq \frac{\E[X(f)|{\cal F}_{r_n}] + x_n^{-19/6} e^{\sqrt{2} L(s_n)}}{\E[X(1)|{\cal F}_{r_n}] - x_n^{-19/6} e^{\sqrt{2} L(s_n)}}.$$ This inequality, when combined with Lemma \ref{meanXf}, becomes \begin{align} &\frac{{\hat Z} \pi L(s_n)^{-2} e^{-\pi^2 B x_n^2/2L(s_n)^2} (\int_0^{\infty} f(y) g(y) \: dy - 2 \|f\| \varepsilon) - x_n^{-19/6} e^{\sqrt{2} L(s_n)}}{{\hat Z} \pi L(s_n)^{-2} e^{-\pi^2 B x_n^2/2 L(s_n)^2} (1 + 2 \varepsilon) + x_n^{-19/6} e^{\sqrt{2}L(s_n)}} \nonumber \\ &\qquad \leq \frac{X(f)}{X(1)} \leq \frac{{\hat Z} \pi L(s_n)^{-2} e^{-\pi^2 B x_n^2/2L(s_n)^2} (\int_0^{\infty} f(y) g(y) \: dy + 2 \|f\| \varepsilon) + x_n^{-19/6} e^{\sqrt{2} L(s_n)}}{{\hat Z} \pi L(s_n)^{-2} e^{-\pi^2 B x_n^2/2 L(s_n)^2} (1 - 2 \varepsilon) - x_n^{-19/6} e^{\sqrt{2}L(s_n)}}. 
\nonumber \end{align} When (\ref{newZ}) holds, we have $x_n^{-3} e^{\sqrt{2} L(s_n)} \leq C {\hat Z} L(s_n)^{-2}$, and thus for sufficiently large $n$, $$x_n^{-19/6} e^{\sqrt{2} L(s_n)} \leq {\hat Z} \pi L(s_n)^{-2} e^{-\pi^2 B x_n^2/2L(s_n)^2} \varepsilon.$$ Therefore, for sufficiently large $n$, $$\frac{1}{1 + 3 \varepsilon} \bigg( \int_0^{\infty} f(y) g(y) \: dy - 2 \|f\| \varepsilon - \varepsilon \bigg) \leq \frac{X(f)}{X(1)} \leq \frac{1}{1 - 3 \varepsilon} \bigg( \int_0^{\infty} f(y) g(y) \: dy + 2 \|f\| \varepsilon + \varepsilon \bigg)$$ with probability at least $1 - 5 \varepsilon$. In view of (\ref{XfX1}), we can let $\varepsilon \rightarrow 0$ to obtain (\ref{fconvprob}). \end{proof} The following proposition implies Theorem \ref{config2}. \begin{Prop} Suppose $0 < u < \tau$. Consider a sequence of times $(s_n)_{n=1}^{\infty}$ such that $s_n \sim ux_n^3$ as $n \rightarrow \infty$. Let $$\eta_n(u) = \frac{1}{Y(s_n)} \sum_{i=1}^{N(s_n)} e^{\sqrt{2} X_i(s_n)} \delta_{X_i(s_n)/L(s_n)}.$$ Let $\nu$ be defined as in Theorem \ref{config2}. Then $\eta_n(u) \Rightarrow \nu$ as $n \rightarrow \infty$. \end{Prop} \begin{proof} The proof is very similar to the proof of Theorem \ref{config1}. It suffices to show that we have $\P(X_1(s_n) < L(s_n)) \rightarrow 1$ as $n \rightarrow \infty$, and that for all bounded continuous functions $\phi: [0, 1] \rightarrow \R$, \begin{equation}\label{phiprob} \frac{1}{Y(s_n)} \sum_{i=1}^{N(s_n)} e^{\sqrt{2} X_i(s_n)} \phi \bigg( \frac{X_i(s_n)}{L(s_n)} \bigg) \rightarrow_p \int_0^1 \phi(y) h(y) \: dy. \end{equation} That $\P(X_1(s_n) < L(s_n)) \rightarrow 1$ as $n \rightarrow \infty$ follows immediately from Lemma \ref{withina} with $a = 0$. Fix a bounded continuous function $\phi: [0,1] \rightarrow \R$. Let $\varepsilon > 0$, and choose $B$ as at the beginning of Section \ref{momfunsec}. Let $r_n = s_n - Bx_n^2$. Let $X'(1)$ denote the value of $X'(\phi)$ when $\phi(x) = 1$ for all $x \in [0,1]$. 
By Lemma \ref{fphi}, for sufficiently large $n$, \begin{equation}\label{YXprime} \P \bigg( \frac{1}{Y(s_n)} \sum_{i=1}^{N(s_n)} e^{\sqrt{2} X_i(s_n)} \phi \bigg( \frac{X_i(s_n)}{L(s_n)} \bigg) = \frac{X'(\phi)}{X'(1)} \bigg) > 1 - 3 \varepsilon. \end{equation} By Lemma \ref{varXphi} and the conditional Chebyshev's Inequality, \begin{align}\label{Xphicheb} \P \bigg( \big|X'(\phi) - \E[X'(\phi)|{\cal F}_{r_n}] \big| > x_n^{-4/3} e^{\sqrt{2} L(s_n)} \bigg| {\cal F}_{r_n} \bigg) &\leq \frac{C Y(r_n) e^{\sqrt{2} L(s_n)} \log x_n}{x_n^2} \cdot \frac{x_n^{8/3}}{e^{2 \sqrt{2} L(s_n)}} \nonumber \\ &\leq \frac{C Y(r_n) x_n^{2/3} \log x_n}{e^{\sqrt{2} L(s_n)}}. \end{align} Recall that (\ref{newZ}) and (\ref{newY}) both hold with probability at least $1 - 4 \varepsilon$ for sufficiently large $n$. The expression obtained by replacing $Y(r_n)$ with the right-hand side of (\ref{newY}) on the right-hand side of (\ref{Xphicheb}) tends to zero as $n \rightarrow \infty$, and the same result holds when $X'(\phi)$ is replaced by $X'(1)$ on the left-hand side.
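To spell out this convergence: substituting the right-hand side of (\ref{newY}) for $Y(r_n)$ in (\ref{Xphicheb}) and using $\sqrt{2} L(s_n) = (3 \pi^2)^{1/3} (t - s_n)^{1/3}$ gives

```latex
\[
\frac{C x_n^{2/3} \log x_n}{e^{\sqrt{2} L(s_n)}} \cdot C x_n^{-1}
e^{(3 \pi^2)^{1/3} (t - r_n)^{1/3}}
= C x_n^{-1/3} (\log x_n) \,
e^{(3 \pi^2)^{1/3} ( (t - r_n)^{1/3} - (t - s_n)^{1/3} )}
\rightarrow 0,
\]
% since r_n and s_n differ by B x_n^2 while t - s_n is of order x_n^3,
% so the exponent is bounded by a constant.
```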
Thus, for sufficiently large $n$, on an event of probability at least $1 - 5 \varepsilon$, we have $$\frac{\E[X'(\phi)|{\cal F}_{r_n}] - x_n^{-4/3} e^{\sqrt{2} L(s_n)}}{\E[X'(1)|{\cal F}_{r_n}] + x_n^{-4/3} e^{\sqrt{2} L(s_n)}} \leq \frac{X'(\phi)}{X'(1)} \leq \frac{\E[X'(\phi)|{\cal F}_{r_n}] + x_n^{-4/3} e^{\sqrt{2} L(s_n)}}{\E[X'(1)|{\cal F}_{r_n}] - x_n^{-4/3} e^{\sqrt{2} L(s_n)}}.$$ Combining this inequality with Lemma \ref{meanXphi} gives \begin{align} &\frac{4 \pi^{-1} {\hat Z} e^{-\pi^2Bx_n^2/2L(s_n)^2}(\int_0^1 \phi(y) h(y) \: dy - \|\phi\| \varepsilon) - x_n^{-4/3} e^{\sqrt{2}L(s_n)}}{4 \pi^{-1} {\hat Z} e^{-\pi^2Bx_n^2/2L(s_n)^2}(1 + \varepsilon) + x_n^{-4/3} e^{\sqrt{2} L(s_n)}} \nonumber \\ &\qquad \leq \frac{X'(\phi)}{X'(1)} \leq \frac{4 \pi^{-1} {\hat Z} e^{-\pi^2Bx_n^2/2L(s_n)^2}(\int_0^1 \phi(y) h(y) \: dy + \|\phi\| \varepsilon) + x_n^{-4/3} e^{\sqrt{2}L(s_n)}}{4 \pi^{-1} {\hat Z} e^{-\pi^2Bx_n^2/2L(s_n)^2}(1 - \varepsilon) - x_n^{-4/3} e^{\sqrt{2} L(s_n)}}. \nonumber \end{align} Because $x_n^{-1} e^{\sqrt{2} L(s_n)} \leq C {\hat Z}$ when (\ref{newZ}) holds, we have $x_n^{-4/3} e^{\sqrt{2} L(s_n)} \leq 4 \pi^{-1} {\hat Z} e^{-\pi^2Bx_n^2/2L(s_n)^2} \varepsilon$ for sufficiently large $n$ when (\ref{newZ}) holds. Therefore, for sufficiently large $n$, $$\frac{1}{1 + 2 \varepsilon} \bigg( \int_0^1 \phi(y) h(y) \: dy - \|\phi\| \varepsilon - \varepsilon \bigg) \leq \frac{X'(\phi)}{X'(1)} \leq \frac{1}{1 - 2 \varepsilon} \bigg( \int_0^1 \phi(y) h(y) \: dy + \|\phi\| \varepsilon + \varepsilon \bigg)$$ with probability at least $1 - 5 \varepsilon$. In view of (\ref{YXprime}), we can let $\varepsilon \rightarrow 0$ to obtain (\ref{phiprob}). \end{proof} \section{Position of the right-most particle}\label{rtsec} In this section, we prove Theorem \ref{rtthm}. Consider branching Brownian motion without killing and with a drift of $-\sqrt{2}$. 
Let $u(t,w)$ be the probability that if at time zero there is a single particle at the origin, then the position of the right-most particle at time $t$ will be greater than or equal to $w$. Define $m(t) = \sup\{w : u(t,w) \ge 1/2\}$. By Proposition 8.2 on page 127 of \cite{bram83}, applied with $y_0 = -1$, and by Corollary 1 on page 130 of \cite{bram83}, applied with $\alpha(r,t) = -1$, there exist positive constants $T$, $C'$, $C''$, and $C_{12}$ such that if $t \geq T$, then \begin{equation}\label{bramupper} u(t,w) \leq C'' e^t \int_{-1}^0 \frac{e^{-(w + \sqrt{2}t - z)^2/2t}}{\sqrt{2 \pi t}} \big(1 - e^{-2(z+1)(w - m(t))/t} \big) \: dz \end{equation} and \begin{equation}\label{bramlower} u(t,w) \geq C' e^t \int_{-1}^0 \frac{e^{-(w + \sqrt{2}t - z)^2/2t}}{\sqrt{2 \pi t}} \big(1 - e^{-2(z+1)(w - m(t))/t} \big) \: dz \end{equation} for all $w \geq m(t) + 1$, where \begin{equation}\label{735} \bigg| m(t) + \frac{3}{2 \sqrt{2}} \log t \bigg| \leq C_{12}. \end{equation} See (8.4) and (8.18) of \cite{bram83} for the bounds on $m(t)$, and observe that $m(t)$ here corresponds to $m_{1/2}(t) - \sqrt{2} t$ in the notation of \cite{bram83}. \begin{Lemma}\label{u1lem} Suppose $0 < \gamma \leq 1$. Suppose that $t = \gamma x^2$ and that $w = -(3/2 \sqrt{2}) \log t + y$, where $1 + C_{12} \leq y \leq C_{13} x$ for some positive constant $C_{13}$. Then there exists $x_0 > 0$, depending on $\gamma$, such that for $x \geq x_0$, $$C' y e^{-\sqrt{2} y} e^{-y^2/2t} \leq u(t,w) \leq C'' y e^{-\sqrt{2} y} e^{-y^2/2t},$$ where $C'$ and $C''$ are positive constants that do not depend on $\gamma$. \end{Lemma} \begin{Rmk} {\em We note that similar bounds on $u$ may be obtained directly by PDE methods, and these have in fact been used in \cite{hnrr1} to reprove Bramson's logarithmic correction result of \cite{bram83} and to extend it to the setup of periodic branching rates (see \cite{hnrr2}).} \end{Rmk} \begin{proof} We may assume that $x$ is large enough that $t \geq \max\{1, T\}$.
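The proof repeatedly uses two elementary bounds: the first is the tangent-line bound $e^{-a} \geq 1 - a$, and the second follows from $e^{-a} \leq 1 - a + a^2/2$ for $a \geq 0$:

```latex
\[
1 - e^{-a} \leq a \quad \text{for all } a \geq 0,
\qquad
1 - e^{-a} \geq \frac{a}{2} \quad \text{for } 0 \leq a \leq 1.
\]
```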
If $-1 \leq z \leq 0$, then using (\ref{735}), \begin{equation}\label{zw1} \frac{2(z+1)(w - m(t))}{t} \leq \frac{2(y - (3/2\sqrt{2}) \log t - m(t))}{t} \leq \frac{2(y + C_{12})}{t} \leq \frac{4y}{t}. \end{equation} It follows that \begin{equation}\label{zwupper} 1 - e^{-2(z+1)(w - m(t))/t} \leq \frac{4y}{t}. \end{equation} Because $y \leq C_{13}x$ and $t = \gamma x^2$, the expression in (\ref{zw1}) tends to zero as $x \rightarrow \infty$. Therefore, if $-1/2 \leq z \leq 0$, we have, for sufficiently large $x$, \begin{equation}\label{zwlower} 1 - e^{-2(z+1)(w - m(t))/t} \geq \frac{1}{2} \cdot \frac{2(z+1)(w - m(t))}{t} \geq \frac{y - (3/2\sqrt{2}) \log t - m(t)}{2t} \geq \frac{y - C_{12}}{2t} \geq \frac{Cy}{t}. \end{equation} Next, observe that $$e^{-(w + \sqrt{2} t - z)^2/2t} = e^{-(y-z)^2/2t} e^{(3/2\sqrt{2}) (y - z) (\log t)/t} e^{-9(\log t)^2/16t} e^{-\sqrt{2}(y - z)} e^{-t} t^{3/2}.$$ If $-1 \leq z \leq 0$, then $e^{-\sqrt{2}} \leq e^{\sqrt{2} z} \leq 1$. Also, $e^{-9(\log t)^2/16t}$ tends to one as $x \rightarrow \infty$. Furthermore, because $t = \gamma x^2$ and $y \leq C_{13}x$, we have $e^{(3/2\sqrt{2}) (y - z) (\log t)/t} \rightarrow 1$ and $e^{-(y - z)^2/2t}/e^{-y^2/2t} \rightarrow 1$ as $x \rightarrow \infty$. It follows that there exists $x_0 > 0$, depending on $\gamma$, and positive constants $C'$ and $C''$ such that if $x \geq x_0$, then \begin{equation}\label{expbd} C' e^{-y^2/2t} e^{-\sqrt{2} y} e^{-t} t^{3/2} \leq e^{-(w + \sqrt{2} t - z)^2/2t} \leq C'' e^{-y^2/2t} e^{-\sqrt{2} y} e^{-t} t^{3/2}. \end{equation} Combining (\ref{bramupper}), (\ref{zwupper}), and (\ref{expbd}), we get that for sufficiently large $x$, \begin{align}\label{ufin1} u(t,w) &\leq C e^t \int_{-1}^0 \frac{e^{-(w + \sqrt{2}t - z)^2/2t}}{\sqrt{2 \pi t}} \big(1 - e^{-2(z+1)(w - m(t))/t} \big) \: dz \nonumber \\ &\leq C e^t \int_{-1}^0 \frac{e^{-y^2/2t} e^{-\sqrt{2} y} e^{-t} t^{3/2}}{\sqrt{2 \pi t}} \cdot \frac{y}{t} \: dz \nonumber \\ &\leq C y e^{-\sqrt{2} y} e^{-y^2/2t}. 
\end{align} By similar reasoning using (\ref{bramlower}), (\ref{zwlower}), and (\ref{expbd}), we get that for sufficiently large $x$, \begin{align}\label{ufin2} u(t,w) &\geq C e^t \int_{-1/2}^0 \frac{e^{-(w + \sqrt{2}t - z)^2/2t}}{\sqrt{2 \pi t}} \big(1 - e^{-2(z+1)(w - m(t))/t} \big) \: dz \nonumber \\ &\geq C e^t \int_{-1/2}^0 \frac{e^{-y^2/2t} e^{-\sqrt{2} y} e^{-t} t^{3/2}}{\sqrt{2 \pi t}} \cdot \frac{y}{t} \: dz \nonumber \\ &\geq C y e^{-\sqrt{2} y} e^{-y^2/2t}. \end{align} The result follows from (\ref{ufin1}) and (\ref{ufin2}). \end{proof} \begin{Lemma}\label{u2lem} Suppose $0 < \gamma \leq 1$. Suppose $t \leq \gamma x^2$ and $w \geq C_{14} x$ for some positive constant $C_{14}$. Then there exists $x_0 > 0$, depending on $\gamma$, such that for $x \geq x_0$, \begin{equation}\label{ut2} u(t,w) \leq C \gamma^{-3/2} x^{-3} w e^{-\sqrt{2} w} e^{-C_{15}/\gamma} \end{equation} for some positive constants $C$ and $C_{15}$ that do not depend on $\gamma$. \end{Lemma} \begin{proof} If $-1 \leq z \leq 0$, then \begin{equation}\label{1281} 1 - e^{-2(z+1)(w - m(t))/t} \leq \frac{2(z+1)(w - m(t))}{t} \leq \frac{Cw}{t}. \end{equation} Also, for sufficiently large $x$, \begin{equation}\label{1282} e^{-(w + \sqrt{2}t - z)^2/2t} = e^{-t} e^{-\sqrt{2}(w-z)} e^{-(w-z)^2/2t} \leq C e^{-t} e^{-\sqrt{2} w} e^{-C_{14}^2 x^2/t}. \end{equation} By (\ref{bramupper}), (\ref{1281}), and (\ref{1282}), we get that when $T \leq t \leq \gamma x^2$, $$u(t,w) \leq Cw e^{-\sqrt{2} w} t^{-3/2} e^{-C_{14}^2 x^2/t}.$$ The function $t \mapsto t^{-3/2} e^{-C_{14}^2 x^2/t}$ is increasing when $t \leq (2 C_{14}^2 x^2)/3$ which means that for $\gamma \leq 2 C_{14}^2/3$, we have $$u(t,w) \leq C \gamma^{-3/2} x^{-3} w e^{-\sqrt{2} w} e^{-C_{14}^2/2 \gamma}$$ whenever $T \leq t \leq \gamma x^2$. This is enough to imply (\ref{ut2}) except in the case when $t < T$. 
However, when $t < T$, by the Many-to-One Lemma and Markov's Inequality, $u(t,w)$ is bounded above by $e^t$ times the probability that an individual Brownian particle started at the origin is to the right of $w$ by time $t$. For the purpose of obtaining an upper bound on $u(t,w)$, we may ignore the drift of $-\sqrt{2}$. Therefore, using that $$\int_z^{\infty} e^{-x^2/2} \: dx \leq z^{-1} e^{-z^2/2},$$ we have $$u(t,w) \leq e^t \int_{w/\sqrt{t}}^{\infty} \frac{1}{\sqrt{2 \pi}} e^{-x^2/2} \: dx \leq \frac{e^t \sqrt{t}}{\sqrt{2 \pi} w} e^{-w^2/2t} \leq \frac{e^T T}{\sqrt{2 \pi} w} e^{-w^2/2T}.$$ Because $w \geq C_{14} x$, this expression is bounded above by the right-hand side of (\ref{ut2}) for $x \geq x_0$, where $x_0$ depends on $\gamma$. \end{proof} We now return to the setting of Theorem \ref{rtthm}, in which there is initially a particle at $x$ and particles are killed when they reach the origin. \begin{Lemma}\label{Dlem} Let $\varepsilon > 0$. Let $0 < u < \tau$, and let $s = ux^3$. Let $\gamma > 0$. Let $D$ be the number of particles that are killed at the origin between times $s - \gamma x^2$ and $s$. Then there exists a positive constant $C$, depending on $u$ and $\varepsilon$ but not on $\gamma$, such that for sufficiently large $x$, $$\P \bigg( D > C \gamma x^{-1} \exp \big( (3 \pi^2)^{1/3} (t - s)^{1/3} \big) \bigg) \leq 6 \varepsilon.$$ \end{Lemma} \begin{proof} Let $A = 2 \gamma$, and let $r = s - A x^2$. For $u \in [s - \gamma x^2, s]$ define $X_u(1)$ in the same way as $X(1)$, but with $u$ playing the role of $s$. That is, $X_u(1)$ consists of the number of particles at time $u$ whose ancestor was positioned to the left of $L(u)$ at time $v$ for all $v \in [r, u]$. By the argument leading to Lemma \ref{fphi}, \begin{equation}\label{NXu} \P(N(u) = X_u(1) \mbox{ for all }u \in [s - \gamma x^2, s]) > 1 - 3 \varepsilon \end{equation} for sufficiently large $x$. 
By Lemma \ref{meanXf}, there is a positive constant $C$ such that $\E[X_u(1)|{\cal F}_r] \leq C x^{-2} {\hat Z}$ for sufficiently large $x$, where ${\hat Z}$ is defined as in (\ref{hatZdef}) but with $u$ in place of $s$. The argument leading to (\ref{ZZhat}) implies that on an event with probability greater than $1 - \varepsilon$, we have $\E[X_u(1)|{\cal F}_r] \leq C x^{-2} Z(r)$ for all $u \in [s - \gamma x^2, s]$ for sufficiently large $x$, where $C$ is some other positive constant. Define times $s - \gamma x^2 = u_0 < u_1 < \dots < u_j = s$, where the $u_i$ are chosen such that $1/2 \leq u_i - u_{i-1} \leq 1$ for $i = 1, 2, \dots, j$. For $i = 0, 1, \dots, j-1$, let $D_i$ be the number of particles that are killed at the origin between times $u_i$ and $u_{i+1}$. Let $D_i'$ be the number of such particles that are descended from particles at time $u_i$ that are counted in $X_{u_i}(1)$, meaning that their ancestor was positioned to the left of $L(u_i)$ throughout the time period $[r,u_i]$. Even in the absence of killing between times $u_i$ and $u_{i+1}$, the expected number of descendants at time $u_{i+1}$ produced by a given particle at time $u_i$ is at most $e^{u_{i+1} - u_i} \leq e$. It follows that for sufficiently large $x$, $$\E[D_i'|{\cal F}_r] \leq e \E[X_{u_i}(1)|{\cal F}_r] \leq C x^{-2} Z(r)$$ for all $i$ on an event of probability at least $1 - \varepsilon$, and therefore, $$\E \bigg[ \sum_{i=0}^{j-1} D_i' \bigg| {\cal F}_r \bigg] \leq C \gamma Z(r)$$ on an event of probability at least $1 - \varepsilon$. In view of Proposition \ref{Zupper}, there is a positive constant $C$ such that for sufficiently large $x$, $$\E \bigg[ \sum_{i=0}^{j-1} D_i' \bigg| {\cal F}_r \bigg] \leq C \gamma x^{-1} \exp \big( (3 \pi^2)^{1/3} (t - r)^{1/3} \big)$$ on an event of probability at least $1 - 2 \varepsilon$.
By Markov's Inequality, there is a positive constant $C$ such that for sufficiently large $x$, $$\P \bigg( \sum_{i=0}^{j-1} D_i' > C \gamma x^{-1} \exp \big( (3 \pi^2)^{1/3} (t - r)^{1/3} \big) \bigg) \leq 3 \varepsilon.$$ Because $\P(D = \sum_{i=0}^{j-1} D_i') > 1 - 3 \varepsilon$ by (\ref{NXu}) and $$\exp\big((3 \pi^2)^{1/3}(t-r)^{1/3}\big) \leq C \exp\big((3 \pi^2)^{1/3}(t-s)^{1/3}\big),$$ the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{rtthm}] Fix $d \in \R$. Let $\gamma \in (0,1]$. Let $r = s - \gamma x^2$. Let $$p_i = u \bigg( \gamma x^2, L(s) - \frac{3}{\sqrt{2}} \log x + d - X_i(r) \bigg).$$ Let $R(s)$ be the position of the right-most particle at time $s$ for a modified process in which particles that reach the origin between times $r$ and $s$ are not killed. Then $$\P \bigg(R(s) \geq L(s) - \frac{3}{\sqrt{2}} \log x + d \bigg| {\cal F}_{r} \bigg) = 1 - \prod_{i=1}^{N(r)} (1 - p_i).$$ Therefore, \begin{equation}\label{rteq} 1 - \exp \bigg(-\sum_{i=1}^{N(r)} p_i \bigg) \leq \P \bigg(R(s) \geq L(s) - \frac{3}{\sqrt{2}} \log x + d \bigg| {\cal F}_{r} \bigg) \leq \sum_{i=1}^{N(r)} p_i. \end{equation} Consequently, the key to the proof will be obtaining a precise estimate of $\sum_{i=1}^{N(r)} p_i$. Note that $$p_i = u \bigg(\gamma x^2, L(s) - \frac{3}{2\sqrt{2}} \log \gamma x^2 + \frac{3}{2 \sqrt{2}} \log \gamma + d - X_i(r) \bigg).$$ Because $L(r) - L(s)$ is bounded above by a constant depending on $u$, it follows from Lemma \ref{withina} that with probability tending to one as $x \rightarrow \infty$, we have \begin{equation}\label{goodevent} X_1(r) \leq L(s) + \frac{3}{2 \sqrt{2}} \log \gamma + d - 1 - C_{12}, \end{equation} where $C_{12}$ is the constant from (\ref{735}). 
By Lemma \ref{u1lem}, on this event for sufficiently large $x$ we have \begin{equation}\label{pRST} C' R_i S_i T_i \leq p_i \leq C'' R_i S_i T_i \end{equation} for all $i$, where \begin{align} R_i &= L(s) + \frac{3}{2 \sqrt{2}} \log \gamma + d - X_i(r), \nonumber \\ S_i &= \exp \bigg( - \sqrt{2} \big( L(s) + (3/2 \sqrt{2}) \log \gamma + d - X_i(r) \big) \bigg), \nonumber \\ T_i &= \exp \bigg( - \frac{(L(s) + (3/2\sqrt{2}) \log \gamma + d - X_i(r))^2}{2 \gamma x^2} \bigg). \nonumber \end{align} Let $$a = L(s) - L(r) + \frac{3}{2 \sqrt{2}} \log \gamma + d.$$ Then \begin{equation}\label{Ri} R_i = L(r) \bigg(1 - \frac{X_i(r)}{L(r)} + \frac{a}{L(r)}\bigg). \end{equation} Also, \begin{equation}\label{Si} S_i = \gamma^{-3/2} e^{-\sqrt{2} d} e^{-\sqrt{2} L(s)} e^{\sqrt{2} X_i(r)}. \end{equation} Finally, because $$\frac{L(s)^2}{2 \gamma x^2} = \frac{c^2 (t-s)^{2/3}}{2 \gamma c^2 t^{2/3}} = \frac{1}{2 \gamma} \bigg(1 - \frac{s}{t} \bigg)^{2/3} = \frac{1}{2 \gamma} \bigg(1 - \frac{u}{\tau} \bigg)^{2/3},$$ we have \begin{align}\label{Ti} T_i &= \exp \bigg(- \frac{1}{2 \gamma x^2} \bigg( \big(L(r) - X_i(r)\big)^2 + 2a (L(r) - X_i(r)) + a^2 \bigg) \bigg) \nonumber \\ &= \exp \bigg( - \frac{L(s)^2 - (L(s)^2 - L(r)^2)}{2 \gamma x^2} \bigg(1 - \frac{X_i(r)}{L(r)} \bigg)^2 - \frac{2a (L(r) - X_i(r)) + a^2}{2 \gamma x^2} \bigg) \nonumber \\ &= \exp \bigg( - \frac{1}{2 \gamma} \bigg(1 - \frac{u}{\tau} \bigg)^{2/3} \bigg(1 - \frac{X_i(r)}{L(r)} \bigg)^2 \bigg) U_i, \end{align} where $U_i \rightarrow 1$ as $x \rightarrow \infty$ uniformly in $i$ because $a/x \rightarrow 0$ and $(L(s)^2 - L(r)^2)/x^2 \rightarrow 0$ as $x \rightarrow \infty$. 
Therefore, by (\ref{Ri}), (\ref{Si}), and (\ref{Ti}), \begin{align}\label{RST} \sum_{i=1}^{N(r)} R_i S_i T_i &= \gamma^{-3/2} e^{-\sqrt{2} d} e^{-\sqrt{2} L(s)} L(r) \sum_{i=1}^{N(r)} U_i e^{\sqrt{2} X_i(r)} \bigg(1 - \frac{X_i(r)}{L(r)} + \frac{a}{L(r)} \bigg) \nonumber \\ &\qquad \times \exp \bigg( - \frac{1}{2 \gamma} \bigg(1 - \frac{u}{\tau} \bigg)^{2/3} \bigg(1 - \frac{X_i(r)}{L(r)} \bigg)^2 \bigg). \end{align} Consider the function $\phi: [0,1] \rightarrow \R$ defined by $$\phi(z) = (1 - z) \exp \bigg( - \frac{1}{2 \gamma} \bigg(1 - \frac{u}{\tau} \bigg)^{2/3} (1-z)^2 \bigg).$$ By (\ref{phiprob}), applied with $s_n = ux_n^3 - \gamma x_n^2$, where $(x_n)_{n=1}^{\infty}$ is a sequence tending to infinity, we have, \begin{equation}\label{Yprob} \frac{1}{Y(r)} \sum_{i=1}^{N(r)} e^{\sqrt{2}X_i(r)} \phi \bigg( \frac{X_i(r)}{L(r)} \bigg) \rightarrow_p \frac{\pi}{2} \int_0^1 (1 - z) \exp \bigg( - \frac{1}{2 \gamma} \bigg(1 - \frac{u}{\tau} \bigg)^{2/3} (1-z)^2 \bigg) \sin(\pi z) \: dz. \end{equation} Now let $\alpha = (2 \gamma)^{-1/2} (1 - u/\tau)^{1/3}$ and make the substitution $y = \alpha (1-z)$ to get that the right-hand side of (\ref{Yprob}) is \begin{equation}\label{gamasymp} \frac{\pi}{2} \int_0^{\alpha} \frac{y}{\alpha} e^{-y^2} \sin \bigg( \frac{\pi y}{\alpha} \bigg) \cdot \frac{1}{\alpha} \: dy \asymp \frac{1}{\alpha^3} \asymp \gamma^{3/2}, \end{equation} where $\asymp$ means that the ratio of the two sides is bounded above and below by positive constants. Furthermore, $\sum_{i=1}^{N(r)} e^{\sqrt{2}X_i(r)} = Y(r)$ and $a/L(r)$ tends to zero as $x \rightarrow \infty$. 
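To justify the first relation in (\ref{gamasymp}), note that for large $\alpha$ (that is, for small $\gamma$), the factor $e^{-y^2}$ confines the integral to bounded values of $y$, where $\sin(\pi y/\alpha) \sim \pi y/\alpha$, so $$\frac{\pi}{2} \int_0^{\alpha} \frac{y}{\alpha} e^{-y^2} \sin \bigg( \frac{\pi y}{\alpha} \bigg) \cdot \frac{1}{\alpha} \: dy \sim \frac{\pi^2}{2 \alpha^3} \int_0^{\infty} y^2 e^{-y^2} \: dy = \frac{\pi^{5/2}}{8 \alpha^3},$$ while for $\alpha$ ranging over a compact subset of $(0,\infty)$ the ratio of the two sides in (\ref{gamasymp}) is bounded above and below because the integral is a continuous, strictly positive function of $\alpha$.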
It thus follows from (\ref{RST}), (\ref{Yprob}), and (\ref{gamasymp}) that on the event (\ref{goodevent}), we have \begin{equation}\label{sumRST} \sum_{i=1}^{N(r)} R_i S_i T_i = e^{-\sqrt{2} d} e^{-\sqrt{2} L(s)} L(r) Y(r) H(u, x, \gamma), \end{equation} where $H(u, x, \gamma)$ converges in probability as $x \rightarrow \infty$ to some number which is bounded between two positive constants that do not depend on $\gamma$. Note that $e^{-\sqrt{2} L(s)} = e^{-(3 \pi^2)^{1/3}(t - s)^{1/3}}$. Therefore, because $Z(r) \leq Y(r)$, we can use Propositions \ref{Zlower} and \ref{Yupper} to conclude that with probability at least $1 - 2 \varepsilon$, we have $C' \leq e^{-\sqrt{2} L(s)} L(r) Y(r) \leq C''$ for sufficiently large $x$. Combining this result with (\ref{pRST}) and (\ref{sumRST}), we get that there are constants $C_{16}$ and $C_{17}$, not depending on $\gamma$, such that for sufficiently large $x$, \begin{equation}\label{pibound} \P \bigg( C_{16} e^{-\sqrt{2} d} \leq \sum_{i=1}^{N(r)} p_i \leq C_{17} e^{-\sqrt{2} d} \bigg) > 1 - 3 \varepsilon. \end{equation} Now choose $d_2 > 0$ large enough that $C_{17} e^{-\sqrt{2} d_2} < \varepsilon$. By (\ref{rteq}) and (\ref{pibound}), \begin{align}\label{bigrt1} \P \bigg(X_1(s) \geq L(s) - \frac{3}{\sqrt{2}} \log x + d_2 \bigg) &\leq \P \bigg(R(s) \geq L(s) - \frac{3}{\sqrt{2}} \log x + d_2 \bigg) \nonumber \\ &\leq C_{17} e^{-\sqrt{2} d_2} + 3 \varepsilon \nonumber \\ &\leq 4 \varepsilon. \end{align} Likewise, we can choose $d_1 > 0$ large enough that $\exp(-C_{16} e^{\sqrt{2} d_1}) \leq \varepsilon$. By (\ref{rteq}) and (\ref{pibound}), \begin{equation}\label{Rlower} \P \bigg(R(s) \leq L(s) - \frac{3}{\sqrt{2}} \log x - d_1 \bigg) \leq \exp \big( -C_{16} e^{\sqrt{2} d_1} \big) + 3 \varepsilon \leq 4 \varepsilon. \end{equation} It remains to bound the probability that $R(s) > L(s) - (3/\sqrt{2}) \log x - d_1$ but $X_1(s) \leq L(s) - (3/\sqrt{2}) \log x - d_1$. 
This could only happen if some particle reaches $0$ between times $r$ and $s$ and then, for the modified process in which killing is suppressed during this time, some descendant particle is to the right of $L(s) - (3/\sqrt{2}) \log x - d_1$ at time $s$. However, by Lemma \ref{Dlem}, with probability at least $1 - 6 \varepsilon$, at most $C \gamma x^{-1} \exp((3 \pi^2)^{1/3} (t-s)^{1/3}) = C \gamma x^{-1} e^{\sqrt{2} L(s)}$ particles reach the origin between times $r$ and $s$. Conditional on this event, by Lemma \ref{u2lem}, the expected number of these particles with a descendant to the right of $L(s) - (3/\sqrt{2}) \log x - d_1$ at time $s$ is at most $$C \gamma x^{-1} e^{\sqrt{2} L(s)} \cdot \gamma^{-3/2} x^{-3} L(s) e^{-\sqrt{2}(L(s) - (3/\sqrt{2}) \log x - d_1)} e^{-C_{15}/\gamma} \leq C_{18} \gamma^{-1/2} e^{\sqrt{2} d_1} e^{-C_{15}/\gamma}.$$ Combining this result with (\ref{Rlower}) and Markov's Inequality, and choosing $\gamma$ small enough that $C_{18} \gamma^{-1/2} e^{\sqrt{2} d_1} e^{-C_{15}/\gamma} < \varepsilon$, we get, for sufficiently large $x$, \begin{equation}\label{bigrt2} \P \bigg( X_1(s) \leq L(s) - \frac{3}{\sqrt{2}} \log x - d_1 \bigg) \leq 4 \varepsilon + 6 \varepsilon + C_{18} \gamma^{-1/2} e^{\sqrt{2} d_1} e^{-C_{15}/\gamma} \leq 11 \varepsilon. \end{equation} The result follows from (\ref{bigrt1}) and (\ref{bigrt2}). \end{proof} \noindent Julien Berestycki: \\ Universit\'e Pierre et Marie Curie. LPMA / UMR 7599, Bo\^ite courrier 188. 75252 Paris Cedex 05 \noindent Nathana\"el Berestycki: \\ DPMMS, University of Cambridge. Wilberforce Rd., Cambridge CB3 0WB \noindent Jason Schweinsberg:\\ University of California at San Diego, Dept. of Mathematics. 9500 Gilman Drive; La Jolla, CA 92093-0112 \end{document}
\begin{document} \title{The IVP for a nonlocal perturbation of the Benjamin-Ono equation in classical and weighted Sobolev spaces} \begin{abstract} We prove that the initial value problem associated to a nonlocal perturbation of the Benjamin-Ono equation is locally and globally well-posed in Sobolev spaces $H^s(\mathbb{R})$ for any $s>-3/2$, and we establish that our result is sharp in the sense that the flow map of this equation fails to be $C^2$ in $H^s(\mathbb{R})$ for $s<-3/2$. Finally, we study persistence properties of the solution flow in the weighted Sobolev spaces $Z_{s,r}=H^s(\mathbb{R})\cap L^2(|x|^{2r}\,dx)$ for $s\geq r >0$. We also prove some unique continuation properties of the solution flow in these spaces. \end{abstract} \textit{Keywords:} Benjamin-Ono equation; local and global well-posedness; Sobolev spaces; weighted Sobolev spaces. \setcounter{equation}{0} \setcounter{section}{0} \section{Introduction and main results} We study the initial value problem (IVP) for a nonlocal perturbation of the Benjamin-Ono (npBO) equation \begin{equation}\label{npbo} \left\{ \begin{aligned} u_t+uu_x+ \mathcal{H}u_{xx} + \mu (\mathcal{H}u_x+\mathcal{H}u_{xxx})&=0, \qquad t>0, \; x\in \mathbb{R}, \\ u(0)&=\phi, \end{aligned} \right. \end{equation} where $\mu >0$ is a constant and $\mathcal{H}$ denotes the usual Hilbert transform, given by \begin{equation*} \mathcal{H}f(x)=\dfrac{1}{\pi}\,p.v.\int_{-\infty}^{\infty}\dfrac{f(y)}{y-x}\,dy=-\dfrac{1}{\pi}\,p.v.\dfrac{1}{x}\ast f, \end{equation*} or equivalently, $\widehat{(\mathcal{H}f)}(\xi)=i\, \text{sgn}(\xi)\widehat{f}(\xi)$ for $f\in \mathcal{S}(\mathbb{R})$. \\ This differential equation corresponds to a nonlocal dissipative perturbation of the Benjamin-Ono equation, npBO. Equations of this type arise in fluid mechanics and plasma theory; see \cite{H} and references therein.
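Before stating our results, we record how the linear part of (\ref{npbo}) acts on the Fourier side; using $\widehat{(\mathcal{H}f)}(\xi)=i\, \text{sgn}(\xi)\widehat{f}(\xi)$, a direct computation gives
\begin{equation*}
\bigl(\mathcal{H}u_{xx}\bigr)^{\wedge}(\xi)=-i\xi|\xi|\,\widehat{u}(\xi), \qquad \bigl(\mathcal{H}u_{x}\bigr)^{\wedge}(\xi)=-|\xi|\,\widehat{u}(\xi), \qquad \bigl(\mathcal{H}u_{xxx}\bigr)^{\wedge}(\xi)=|\xi|^{3}\,\widehat{u}(\xi).
\end{equation*}
Thus the term $\mathcal{H}u_{xx}$ is purely dispersive, while the perturbation $\mu(\mathcal{H}u_x+\mathcal{H}u_{xxx})$ contributes the real factor $\mu(|\xi|-|\xi|^3)$, which is strongly dissipative for $|\xi|>1$ (and mildly amplifying for $|\xi|<1$); this is the mechanism behind the smoothing effect exploited in this work.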
Our aim in this work is to study local and global well-posedness of the initial value problem (IVP) (\ref{npbo}) in classical and weighted Sobolev spaces and to obtain some unique continuation results for the generated flow. We say that an IVP is locally well-posed (LWP) in the Kato sense in a function space $X$ provided that for every initial data $\phi\in X$ there exist $T=T(\|\phi\|_X)>0$ and a unique solution $u\in C([0,T]: X)\cap ...=Y_T$ of the given IVP such that the data-solution map is locally continuous from $X$ to $Y_T$, and the IVP is said to be globally well-posed (GWP) in $X$ whenever $T$ can be taken arbitrarily large. Well-posedness of the npBO equation was first studied by Pastr\'an and Rodr\'iguez in \cite{PR}. They proved that the IVP (\ref{npbo}) is locally well-posed in $H^s(\mathbb{R})$ for $s>1/2$ and globally well-posed in $H^s(\mathbb{R})$ for $s\geq 1$. In this paper, we show that the initial value problem (\ref{npbo}) is LWP and GWP in the Sobolev spaces $H^s(\mathbb{R})$ for any $s>-3/2$. It is interesting to notice that the npBO equation can therefore be solved for more singular initial data than the Benjamin-Ono equation, obtained from \eqref{npbo} when the parameter $\mu=0$, for which the largest Sobolev space where GWP is known is $L^2(\mathbb R)$; see \cite{IK}, \cite{MP} and \cite{IT}. The following heuristic scaling argument shows that the Sobolev index $s=-\frac32$ corresponds to the lowest value where well-posedness for the IVP (\ref{npbo}) is expected. If $u$ is a solution of the differential equation $$ u_t+uu_x+ \mu\mathcal{H}u_{xxx}=0, $$ with initial data $\phi$, then for every $\lambda>0$, $u_\lambda(x,t)=\lambda^2\, u(\lambda x, \lambda^3\,t)$ is also a solution, with initial data $\lambda^2\,\phi(\lambda\cdot)$, and therefore $\|u_\lambda(0)\|_{\dot{H}^s}=\lambda^{2+s-\frac12}\|\phi\|_{\dot{H}^s}$; hence, for the $\dot{H}^s$ norm to be invariant under this scaling we should have $s=s_c=-\frac32$.
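For completeness, the computation behind the exponent $2+s-\frac12$ is elementary: since $(\phi(\lambda\,\cdot))^{\wedge}(\xi)=\lambda^{-1}\widehat{\phi}(\xi/\lambda)$, the change of variables $\eta=\xi/\lambda$ yields
\begin{equation*}
\|\lambda^2\phi(\lambda\,\cdot)\|_{\dot{H}^s}^2=\lambda^{4}\int_{-\infty}^{\infty}|\xi|^{2s}\,\lambda^{-2}\,|\widehat{\phi}(\xi/\lambda)|^{2}\,d\xi=\lambda^{2s+3}\int_{-\infty}^{\infty}|\eta|^{2s}\,|\widehat{\phi}(\eta)|^{2}\,d\eta=\lambda^{2(2+s-\frac12)}\|\phi\|_{\dot{H}^s}^{2}.
\end{equation*}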
For the Benjamin-Ono equation this scaling index is $s_c=-\frac12$ and, as mentioned above, well-posedness for the BO equation in the range $s\in[-\frac12,0)$ is still an open problem. Since the dissipation of the npBO equation is in this sense ``stronger'' than the dispersion, we will use the dissipative method of Dix for Burgers' equation \cite{Dix}, which consists in applying a fixed point theorem to the integral equation associated to (\ref{npbo}) in a time-weighted space (see (\ref{spacexts}) for the exact definition); see also Pilod \cite{P}, Esfahani \cite{Amin}, Carvajal and Panthee \cite{CP1} and \cite{CP2}, Duque \cite{Duque}, and Pastr\'an and Ria\~no \cite{PRi}. We also prove that we cannot solve the Cauchy problem by a Picard iterative method implemented on the integral formulation of (\ref{npbo}) for initial data in the Sobolev space $H^s(\mathbb{R})$, $s<-3/2$. In particular, the methods introduced by Bourgain \cite{Bourgain} and Kenig, Ponce and Vega \cite{KPV} for the KdV equation cannot be used for $(\ref{npbo})$ with initial data in the Sobolev space $H^s(\mathbb{R})$ for $s<-3/2$. This kind of ill-posedness result is weaker than the loss of uniqueness proved by Dix in the case of Burgers' equation. We will mainly work on the integral formulation of the npBO equation, \begin{equation}\label{intequation} u(t)=\Psi(u(t)):= S(t)\phi - \int_0^tS(t-\tau)[u(\tau)u_x(\tau)]\,d\tau ,\quad t\geq 0. \end{equation} \begin{teor}[LWP]\label{localresult} Let $\mu >0$ and $s>-3/2$. Then for any $\phi \in H^s(\mathbb{R})$ there exist $T=T(\nor{\phi}{s})>0$ and a unique solution $u$ of the integral equation (\ref{intequation}) satisfying \begin{align*} &u\in C([0,T],H^s(\mathbb{R}))\cap C((0,T],H^{\infty}(\mathbb{R})). \end{align*} Moreover, the flow map $\phi \mapsto u(t)$ is smooth from $H^s(\mathbb{R})$ to $C([0,T],H^s(\mathbb{R}))\cap C((0,T],H^{\infty}(\mathbb{R}))\cap X_T^s$.
\end{teor} \begin{teor}[GWP]\label{globalresult} Let $s>-3/2$ and $\phi \in H^s(\mathbb{R})$. Then the supremum of all $T>0$ for which all the assertions of Theorem \ref{localresult} hold is infinity. \end{teor} On the other hand, it is known that Banach's Fixed Point Theorem cannot be applied to the Benjamin-Ono equation due to the lack of regularity of the data-solution map; more precisely, this map fails to be $C^2$ and, moreover, is not even locally uniformly continuous, see \cite{MolSauTzv} and \cite{KTzv} respectively. Here, it is proved that there does not exist a $T>0$ such that (\ref{npbo}) admits a unique local solution defined on the interval $[0,T]$ and such that the flow-map data-solution $\phi \mapsto u(t)$, $t\in [0,T]$, is $C^2$ differentiable at the origin from $H^s(\mathbb{R})$ to $H^s(\mathbb{R})$. As a consequence, we cannot solve the Cauchy problem for the npBO equation by a Picard iterative method implemented on the integral formulation (\ref{intequation}), at least in the Sobolev spaces $H^s(\mathbb{R})$, with $s<-3/2$. This proves that our local and global well-posedness results for the npBO in $H^s(\mathbb{R})$, when $s>-3/2$, are sharp. \begin{teor}\label{malpuestodos} Fix $s<-3/2$. Then there does not exist a $T>0$ such that (\ref{npbo}) admits a unique local solution defined on the interval $[0,T]$ and such that the flow-map data-solution \begin{equation} \phi \longmapsto u(t), \qquad t\in [0,T], \end{equation} for (\ref{npbo}) is $C^2$ differentiable at zero from $H^s(\mathbb{R})$ to $H^s(\mathbb{R})$. \end{teor} A direct corollary of Theorem \ref{malpuestodos} is the following result: \begin{teor}\label{illposed} The flow map data-solution for the npBO equation is not $C^2$ from $H^s(\mathbb{R})$ to $H^s(\mathbb{R})$, if $s<-3/2$.
\end{teor} On the other hand, we also study real-valued solutions of the IVP npBO (\ref{npbo}) in the weighted Sobolev spaces \begin{align} Z_{s,r}=H^s(\mathbb{R})\cap L^2\bigl(|x|^{2r}\,dx\bigr); \quad s,\;r \in \mathbb{R}, \label{wss} \end{align} and decay properties of solutions of the IVP npBO (\ref{npbo}). Pastr\'an and Rodr\'iguez in \cite{PR} proved the following results: \begin{teor}\label{pr}(See \cite{PR}) Let $\mu>0$ and $T>0$. \begin{description} \item[(i)] The \text{IVP} (\ref{npbo}) is \text{GWP} in $Z_{2,1}$.\\ \item[(ii)] If $u(x,t)$ is a solution of the \text{IVP} (\ref{npbo}) such that $u\in C([0,T]:Z_{2,2})$, then $\widehat{u}(0,t)\equiv 0$.\\ \item[(iii)] If $u(x,t)$ is a solution of the IVP (\ref{npbo}) such that $u\in C([0,T]:Z_{3,3})$, then $u(x,t)\equiv 0$.\\ \end{description} \end{teor} Notice that the real-valued solutions of the IVP associated to the npBO equation satisfy that the quantity $I(u)=\int_{-\infty}^{\infty}u(x,t)\,dx$ is time invariant, i.e., the property $\widehat{\phi}(0)=0$ is preserved by the solution flow. This leads us to define $$\dot{Z}_{s,r}=\{f\in Z_{s,r} \,:\, \widehat{f}(0)=0 \}, \qquad s, r \in \mathbb{R}.$$ In this work we extend the results in Theorem \ref{pr} from integer values to the optimal continuous range of indices $(s,r)$. In this sense, our main results are the following: \begin{teor}\label{pre} \begin{description} \item[] \item[(i)] Let $s\geq r>0$, and $\,r<3/2$. The \text{IVP} associated to the \text{npBO} equation is \text{GWP} in $Z_{s,r}$. \item[(ii)] If $\,r\in [3/2,5/2)$ and $\,r\leq s$, then the \text{IVP} (\ref{npbo}) is \text{GWP} in $\dot Z_{s,r}$. \end{description} \end{teor} \begin{teor}\label{contunica1} Let $u\in C([0,T]; Z_{1,1})$ be a solution of the \text{IVP} (\ref{npbo}).
If there exist two different times $t_1$, $t_2 \in [0,T]$ such that \begin{equation}\label{cont1} u(\cdot, t_j)\in Z_{3/2,3/2}, \qquad j=1,2, \quad \text{then}\quad \widehat{\phi}(0)=0, \qquad (\text{so}\quad u(\cdot)\in \dot{Z}_{3/2,3/2}). \end{equation} \end{teor} \begin{teor}\label{contunica2} Let $u\in C([0,T]; \dot{Z}_{2,2})$ be a solution of the \text{IVP} (\ref{npbo}). If there exist three different times $t_1$, $t_2$, $t_3 \in [0,T]$ such that \begin{equation}\label{cont2} u(\cdot, t_j)\in Z_{5/2,5/2}, \qquad j=1,2,3, \quad \text{then there exists $t^* >t_1$ such that }\quad u(x,t)\equiv 0 \text{ for all } t\geq t^* . \end{equation} \end{teor} \begin{rem} It is well-known that Rafael Iorio was the first to establish these types of results. More precisely, his results were obtained in the context of the famous Benjamin-Ono equation for integer indices $s,r$, see \cite{Iorio1}, \cite{Iorio2}. Recently, Fonseca and Ponce, with the help of a characterization of the classical Sobolev spaces given by Stein in \cite{S}, extended these results to non-integer values, see \cite{FP}. Fonseca, Linares and Ponce obtained, with the same techniques, similar results for the dispersion generalized Benjamin-Ono equation in \cite{FLP}. For results regarding well-posedness in these weighted spaces for other dispersive equations such as gKdV, Zakharov-Kuznetsov, Benjamin, and Schr\"odinger, see \cite{FLPKDV}, \cite{BJM2} and \cite{FPA}, \cite{J}, \cite{NP}, respectively. \end{rem} \begin{rem} We note that \textbf{(ii)} and \textbf{(iii)} in Theorem \ref{pr} follow directly as corollaries of Theorems \ref{contunica1} and \ref{contunica2}, respectively. \end{rem} \subsection{Definitions and Notations} Given $a$, $b$ positive numbers, $a\lesssim b$ means that there exists a positive constant $C$ such that $a\leq C b$, and we denote $a\sim b$ when $a \lesssim b$ and $b \lesssim a$.
We will also denote $a\lesssim_{\lambda} b$ or $b\lesssim_{\lambda} a$, if the constant involved depends on some parameter $\lambda$. We will understand $\langle \cdot \rangle = (1+|\cdot|^2)^{1/2}$. We will denote by $\widehat{u}(\xi,t)$, $\xi\in\mathbb{R}$, the Fourier transform of $u(t)$ with respect to the variable $x$. We will use the Sobolev spaces $H^s(\mathbb{R})$ equipped with the norm $$\nor{\phi}{s}= \nor{\langle \xi \rangle^s\,\widehat{\phi}(\xi)}{L^2(\mathbb{R})},$$ and when $s=0$ we denote the $L^2$ norm simply by $\|\phi\|_0=\|\phi\|$. The norm in the weighted Sobolev spaces is defined by \begin{equation}\label{sobpeso} \nora{f}{Z_{s,r}}{2}=\nora{f}{s}{2}+\nora{f}{L_r^2}{2}\, , \end{equation} and $L_r^2(\mathbb{R})=L^2(|x|^{2r}\,dx)$ is the collection of all measurable functions $f:\mathbb{R}\to \mathbb{C}$ such that \begin{equation}\label{ldospeso} \nor{f}{L_r^2}=\|\langle x \rangle^r \,f(x)\|<\infty. \end{equation} Since the linear symbol of the npBO equation is $b_{\mu}(\xi)= i \xi |\xi| + \mu (|\xi|-|\xi|^3)$, for all $\xi \in \mathbb{R}$, we also denote by $S(t)\phi = e^{t (-\mathcal{H} \partial_x^2 -\mu (\mathcal{H} \partial_x+\mathcal{H} \partial_x^3))}\phi $, for all $t\geq 0$, the semigroup in $H^s(\mathbb{R})$ generated by the operator $-\mathcal{H} \partial_x^2 -\mu (\mathcal{H} \partial_x+\mathcal{H} \partial_x^3)$, i.e., \begin{equation}\label{semigroup} \bigl(S(t)\phi \bigr)^{\wedge}(\xi)=e^{(i\xi|\xi|+\mu(|\xi|-|\xi|^3))t}\widehat{\phi}(\xi)=F_{\mu}(t,\xi)\widehat{\phi}(\xi), \end{equation} where $F_{\mu}(t,\xi)=e^{(i\xi|\xi|+\mu(|\xi|-|\xi|^3))t}$, for all $t\geq 0$.\\ \\ Let $0< T\leq 1$ and $s < 0$. We consider $X_T^s$ as the class of all functions $u\in C\left([0,T];H^s(\mathbb{R})\right)$ such that \begin{equation}\label{spacexts} \nor{u}{X_T^s}:=\sup_{t\in (0,T]}\Bigl(\nor{u(t)}{s}+t^{|s|/3}\|u(t)\|\Bigr)<\infty.
\end{equation} These Banach spaces are an adaptation, made by Pilod \cite{P}, of the spaces originally presented by Dix in \cite{Dix}. \setcounter{equation}{0} \section{Preliminary estimates} We first recall some important lemmas which were proved in \cite{PR} and that will also be useful in our arguments. \begin{lema} \label{emuc0enhs} (\cite{PR}) Let $s\in \mathbb{R}$. \begin{description} \item[(i)] $S:[0,\infty)\longrightarrow \textbf{B}(H^s(\mathbb{R}))$ is a $C^0$-semigroup in $H^s(\mathbb{R})$. Moreover, \begin{equation} \nor{S(t)}{\textbf{B}(H^s(\mathbb{R}))}\leq e^{\mu t}. \label{cotaemu} \end{equation} \item[(ii)] Let $t>0$ and $\lambda \geq 0$ be given. Then, $S(t)\in \textbf{B}(H^s(\mathbb{R}),H^{s+\lambda}(\mathbb{R}))$ and \begin{align} \nor{S(t)\phi}{s+\lambda}\leq C_{\lambda}(e^{\mu t}+(\mu t)^{-\lambda /3})\nor{\phi}{s}\;, \label{regulariza} \end{align} where $C_{\lambda}$ is a constant depending only on $\lambda$. \item[(iii)] Let $\phi \in H^s(\mathbb{R})$; then $u(t)=S(t)\phi$ is the unique solution of the linear IVP associated to (\ref{npbo}). \end{description} \end{lema} \begin{lema}\label{lemdecaida} Let $F_{\mu}(t,\xi)=e^{tb_{\mu}(\xi)}$ where $b_{\mu}(\xi)=i\xi|\xi|+\mu(|\xi|-|\xi|^3)$. Then, \begin{align} \partial_{\xi}F_{\mu}(t,\xi)&=t[\mu \,sgn(\xi)+|\xi|(2i-3\mu \xi)]F_{\mu}(t,\xi) \label{uno} \\ \partial_{\xi}^2F_{\mu}(t,\xi)&=2\mu t\delta +t[2i\,sgn(\xi)-6\mu |\xi|]F_{\mu}(t,\xi)+ \notag\\ &+t^2[\mu \,sgn(\xi)+|\xi|(2i-3\mu \xi)]^2F_{\mu}(t,\xi) \label{dos}, \end{align} where $\delta$ is Dirac's delta distribution. \end{lema} \begin{lema} \label{emuc0enfsr} Let $\mu >0$. $S:[0,+\infty )\longrightarrow \textbf{B}(Z_{s,r})$ is a $C^0$-semigroup for \\ $s,r \in \mathbb{N}$, $s\geq r$, and satisfies the following. \\ \\ (a)
If $r=0,1$ \begin{align} \nor{S(t)\phi}{Z_{s,r}} \leq \Theta_r(t)\nor{\phi}{Z_{s,r}}\quad \text{for all } \phi \in Z_{s,r} \label{e6} \end{align} where $\Theta_r(t)$ has the form $$p_{\mu ,r}(t)e^{\mu t}+\sum\limits_{l=1}^{3r-1}k_{l,\mu}t^{l/3}$$ such that $k_{l,\mu}$ is a constant which depends on $\mu$ and $p_{\mu ,r}(t)$ is a polynomial in $t$ of degree $r$ with positive coefficients depending only on $\mu $. \newline \newline (b) If $r\geq 2$ and $\phi \in Z_{s,r}$, then $S(\cdot)\phi \in C([0,\infty);Z_{s,r})$ if and only if \begin{align}(\partial_{\xi}^j \widehat{\phi})(0)=0, \qquad \qquad j=0,1,2,\cdots,r-2. \label{e7} \end{align} In this case, an estimate like (\ref{e6}) holds. \end{lema} Regarding our study of the IVP \eqref {npbo} in the weighted Sobolev spaces $Z_{s,r}$, we recall the following characterization of the $L_s^p(\mathbb{R})=(1-\Delta)^{-s/2}L^p(\mathbb{R})$ spaces given in \cite{S}. \begin{teor} \label{derivaStein} Let $b\in (0,1)$ and $2/(1+2b)<p<\infty$. Then $f\in L_b^p(\mathbb{R})$ if and only if \begin{align} (a) &f\in L^p(\mathbb{R}), \notag \\ (b) &\mathcal{D}^bf(x)=\Bigl(\int_{\mathbb{R}}\frac{|f(x)-f(y)|^2}{|x-y|^{1+2b}}\,dy\Bigr)^{1/2}\in L^p(\mathbb{R}), \label{derivadastein} \end{align} with \begin{equation}\label{normasequivalentes} \nor{f}{b,p}\equiv \nor{(1-\Delta)^{b/2}f}{L^p}\simeq \nor{f}{L^p}+\nor{D^bf}{L^p}\simeq \nor{f}{L^p}+\nor{\mathcal{D}^bf}{L^p}. \end{equation} Above we have used the notation $D^s=(\mathcal{H}\partial_x)^s$ for $s\in \mathbb{R}$. \end{teor} \begin{lema}\label{rprod}(See \cite{FP, NP}) Given $0<b<1$, \begin{align} \nor{\mathcal{D}^b(fg)}{}&\leq \nor{g\mathcal{D}^b f }{} + \nor{f \mathcal{D}^bg}{} \label{productostein} \\ \intertext{whenever the right-hand side is finite and, for any $t>0$,} \mathcal{D}^b\Bigl(e^{it\xi |\xi|}\Bigr)&\leq C_b(t^{b/2}+t^b|\xi|^b). \label{cotabo} \end{align} \end{lema} \begin{lema} Let $\mu>0$, $t>0$ and $\lambda \geq 0$.
Then, \begin{equation}\label{unoa} \nor{|\xi|^{\lambda}e^{\mu t (|\xi|-|\xi|^3)}}{L^{\infty}}\leq C_{\lambda}\bigl(e^{\mu t}+(\mu t)^{-\lambda/3}\bigr). \end{equation} \end{lema} \begin{proof} Let $a>0$ and $\lambda\geq 0$. Since $\xi^{\lambda}e^{a(\xi-\xi^3)}\leq 2^{\lambda/2}e^a$, for $0\leq \xi\leq \sqrt{2}$, and $\xi-\xi^3\leq -\xi^3/2$, for $\xi\geq \sqrt{2}$, we get \begin{equation}\label{unob} \sup_{\xi\geq 0} \xi^{\lambda}e^{a(\xi-\xi^3)}\leq c_{\lambda}\Bigl(e^{a} + \sup_{\xi\geq \sqrt{2}}\xi^{\lambda}e^{-a\xi^3/2}\Bigr). \end{equation} Taking $g(\xi):= \xi^{\lambda}e^{-a\xi^3/2}$, we note that $g'(\xi)=0$ if and only if $\xi=\sqrt[3]{\frac{2\lambda}{3a}}:=\xi_0$. So, $g(\xi)\leq g(\xi_0)=\bigl(\frac{2\lambda}{3a}\bigr)^{\lambda/3}e^{-\lambda/3}$, for $\xi\geq 0$. Then, from (\ref{unob}), we have that $$\sup_{\xi\geq 0} \xi^{\lambda}e^{a(\xi-\xi^3)}\leq C_{\lambda}\bigl(e^{a}+a^{-\lambda/3}\bigr),$$ which implies (\ref{unoa}). \end{proof} \begin{lema} Let $b\in (0,1)$ and let $h$ be a measurable function on $\mathbb{R}$ such that $h$, $h' \in L^{\infty}$. Then, \begin{equation}\label{dosa} \mathcal{D}^bh(x)\leq C_b\bigl(\nor{h}{L^{\infty}}+\nor{h'}{L^{\infty}}\bigr)\quad \forall x\in \mathbb{R}. \end{equation} \end{lema} \begin{proof} Given $b\in (0,1)$, we know that \begin{equation*} \bigl(\mathcal{D}^bh(x)\bigr)^2=\int_{|x-y|\leq 1}\dfrac{|h(x)-h(y)|^2}{|x-y|^{1+2b}}\,dy + \int_{|x-y|\geq 1}\dfrac{|h(x)-h(y)|^2}{|x-y|^{1+2b}}\,dy. \end{equation*} Since, by the Mean Value Theorem, $|h(x)-h(y)|\leq \nor{h'}{L^{\infty}}|x-y|$, we obtain that \begin{align*} \bigl(\mathcal{D}^bh(x)\bigr)^2&\leq \int_{|x-y|\leq 1}\dfrac{\nora{h'}{L^{\infty}}{2}}{|x-y|^{2b-1}}\,dy + \int_{|x-y|\geq 1}\dfrac{\nora{h}{L^{\infty}}{2}}{|x-y|^{1+2b}}\,dy \\ &\leq c_b\bigl(\nora{h'}{L^{\infty}}{2}+\nora{h}{L^{\infty}}{2}\bigr), \end{align*} since $2b-1<1$ and $b>0$. The last inequality implies (\ref{dosa}). \end{proof} \begin{corol} Let $b\in (0,1)$.
For any $0<t\leq 1$, $0<\mu \leq 1$ and $\lambda \geq 1$, it holds that \begin{align} \mathcal{D}^b\Bigl( e^{\mu t(|\xi|-|\xi|^3)}\Bigr)&\leq C_b \bigl(e^{\mu t}+(\mu t)^{1/3}\bigr)\leq C_b, \label{deruno} \\ \mathcal{D}^b\Bigl( e^{\mu t(|\xi|-|\xi|^3)}|\xi|^{\lambda}\Bigr)&\leq C_b \bigl(e^{\mu t}+(\mu t)^{-\lambda/3}\bigr)\leq C_b(\mu t)^{-\lambda/3}. \label{derdos} \end{align} \end{corol} \begin{lema}\label{clave1} Let $h\in H^b(\mathbb{R})\cap L^2(|x|^{2b})$ where $0<b<1$. Then, for any $0< t \leq 1$, $0<\mu \leq 1$ and $\lambda \geq 1$, \begin{align} \nor{\mathcal{D}^b\bigl(e^{it\xi |\xi|+\mu t(|\xi|-|\xi|^3)}\widehat{h}(\xi)\bigr)}{} &\leq C_b \Bigl(\nor{\widehat{h}(\xi)}{}+\nor{\,|\xi|^b\widehat{h}(\xi)}{}+\nor{\mathcal{D}^b\bigl(\widehat{h}(\xi)\bigr)}{}\Bigr), \label{dertres} \\ \nor{\mathcal{D}^b\bigl(e^{it\xi |\xi|+\mu t(|\xi|-|\xi|^3)}|\xi|^{\lambda} \widehat{h}(\xi)\bigr)}{}&\leq C_b t^{-\lambda/3}\Bigl(\nor{\widehat{h}(\xi)}{}+\nor{\,|\xi|^b\widehat{h}(\xi)}{}+\nor{\mathcal{D}^b\bigl(\widehat{h}(\xi)\bigr)}{}\Bigr). \label{dercuatro} \end{align} \begin{proof} To prove (\ref{dertres}) and (\ref{dercuatro}) we use the Leibniz rule (\ref{productostein}) for $\mathcal{D}^b$ and the estimates (\ref{cotabo}), (\ref{unoa}), (\ref{deruno}) and (\ref{derdos}). \end{proof} \end{lema} \begin{rem} \begin{description} \item[] \item[i.)] (\ref{derdos}) and (\ref{dercuatro}) still hold if $|\xi|^{\lambda}$, for $\lambda \in \mathbb{Z}^{+}$, is replaced by $|\xi|^{\alpha_1}\xi^{\alpha_2}$, where $\lambda=\alpha_1+\alpha_2$ and $\alpha_1$, $\alpha_2 \in \mathbb{Z}^+$. \item[ii.)] If $0<\lambda <1$, then (\ref{derdos}) fails, because the derivative of $|\xi|^{\lambda}$ is not bounded near zero.
\end{description} \end{rem} \begin{lema} Given $\phi \in H^{1+\theta}(\mathbb{R})\cap L^2(|x|^{2(1+\theta)})$, where $\theta \geq 0$, we have that \begin{align} \nor{\langle \xi \rangle^{\theta}\partial_{\xi}\widehat{\phi}}{}&\leq C_{\theta} \bigl(\nor{\phi}{1+\theta}+\nor{|x|^{1+\theta}\phi}{}\bigr) \quad \text{and} \label{simplifica} \\ \nor{\langle x \rangle^{\theta}\partial_x\phi}{}&\leq C_{\theta} \bigl(\nor{\phi}{1+\theta}+\nor{|x|^{1+\theta}\phi}{}\bigr). \label{simplificaotra} \end{align} \end{lema} \begin{proof} Applying the product rule, we have \begin{align} \nor{\langle \xi \rangle^{\theta}\partial_{\xi}\widehat{\phi}}{}&\leq \nor{\partial_{\xi}\bigl(\langle \xi \rangle^{\theta}\bigr)\widehat{\phi}}{} + \nor{\partial_{\xi}\bigl(\langle \xi \rangle^{\theta}\widehat{\phi}\bigr)}{} \notag \\ &\leq c_{\theta}\Bigl(\nor{\langle \xi \rangle^{\theta}\widehat{\phi}}{}+\nor{\langle x \rangle J_x^{\theta}\phi}{}\Bigr), \label{simplificauno} \end{align} and since \begin{align} \nor{\langle x\rangle J_x^{\theta}\phi}{}&=\nor{\langle x\rangle^{(1+\theta)\frac{1}{1+\theta}} J_x^{(1+\theta)\frac{\theta}{1+\theta}}\phi}{}\leq C\nora{\langle x\rangle^{1+\theta}\phi}{}{\frac{1}{1+\theta}}\nora{J_x^{1+\theta}\phi}{}{\frac{\theta}{1+\theta}}\notag \\ &\leq C\bigl(\nor{\langle x\rangle^{1+\theta}\phi}{}+\nor{\phi}{1+\theta}\bigr), \label{simplificados} \end{align} using (\ref{simplificados}) in (\ref{simplificauno}) we obtain the inequality (\ref{simplifica}). The proof of (\ref{simplificaotra}) is similar. \end{proof} As a further direct consequence of Theorem \ref{derivaStein}, we will use the following result, deduced in \cite{FP}, in the proof of Theorem \ref{contunica1}. \begin{prop}\label{I1} Let $p\in (1, \infty)$.
If $f\in L^p(\mathbb{R})$ is such that there exists $x_0 \in \mathbb{R}$ for which $f(x_0+)$, $f(x_0-)$ are defined and $f(x_0+)\ne f(x_0-)$, then for any $\delta > 0$, $\mathcal{D}^{1/p}f \notin L_{loc}^p(B(x_0,\delta))$ and consequently $f\notin L_{1/p}^p(\mathbb{R})$. \end{prop} We will also employ the following simple estimate. \begin{prop}\label{simplestimate} If $f\in L^2(\mathbb{R})$ and $\phi \in H^1(\mathbb{R})$, then $$\nor{[D^{1/2}, \phi] \,f}{}\leq c\,\nor{\phi}{1}\,\nor{f}{}.$$ \end{prop} \setcounter{equation}{0} \section{Theory in $H^s(\mathbb{R})$} The purpose of this section is to prove LWP and GWP of the IVP (\ref{npbo}) in the Sobolev spaces $H^s(\mathbb{R})$ for $s> -3/2$. Our strategy is to use a contraction argument on the integral equation (\ref{intequation}) associated to (\ref{npbo}). We have introduced in (\ref{spacexts}) the $X_T^s$ spaces, for $0\leq T\leq 1$ and $s < 0$, in order to obtain linear and bilinear estimates. First, we recall the following lemma from \cite{Amin}, which is useful for establishing smoothness properties of the semigroup $S$ of (\ref{npbo}). \begin{lema}\label{LemmaLB1} Let $\lambda>0$ and $0<t\leq 9 \lambda$ be given. Then \begin{equation}\label{LBequa1} \xi^{2\lambda} e^{t \left(|\xi|-|\xi|^3 \right)}\leq f_{\lambda}(t):=\rho^{2\lambda}\,e^{t(\rho -\rho^3)}, \end{equation} where $$\rho=\dfrac{\left(9\lambda +\sqrt{81\lambda^2-t^2}\,\right)^{1/3}}{3}\,t^{-1/3}+\dfrac{t^{1/3}}{3\left(9\lambda +\sqrt{81\lambda^2-t^2}\,\right)^{1/3}}.$$ Moreover, if $\lambda=0$, then (\ref{LBequa1}) holds with $f_0(t)=\exp \left( \frac{2t}{3\sqrt{3}}\right)$. \end{lema} \begin{proof}See \cite{Amin}. \end{proof} Now, we are going to estimate the linear part of (\ref{intequation}) in $X_T^s$.
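\begin{rem} For later use we record the elementary maximization behind the constant in $f_0$; this is a routine calculus verification, not taken from \cite{Amin}. Since $\frac{d}{d\xi}(\xi-\xi^3)=1-3\xi^2$ vanishes at $\xi=1/\sqrt{3}$, \begin{equation*} \sup_{\xi \in \mathbb{R}}\,(|\xi|-|\xi|^3)=\max_{\xi\geq 0}\,(\xi-\xi^3)=\frac{1}{\sqrt{3}}-\frac{1}{3\sqrt{3}}=\frac{2}{3\sqrt{3}}, \end{equation*} which explains the factor $e^{\frac{2t}{3\sqrt{3}}}$. The same computation applied to $\xi\mapsto \xi t^{2/3}-\xi^3/2$, whose critical point is $\xi=\sqrt{2/3}\,t^{1/3}$, gives \begin{equation*} \sup_{\xi \in \mathbb{R}}\Bigl(|\xi|t^{2/3}-\frac{|\xi|^3}{2}\Bigr)=\Bigl(\frac{2}{3}\Bigr)^{3/2}t=\frac{2\sqrt{2}}{\sqrt{27}}\,t, \end{equation*} which is the exponent appearing in the bilinear estimates below. \end{rem}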
\begin{prop}\label{PropLB1} Let $0<T\leq T^*= \min \{1, 9|s|/2\}$, $s<0$ and $\phi\in H^s(\mathbb{R})$. Then it follows that \begin{equation}\label{LB1a} \sup_{t\in[0,T]}\left\|S(t)\phi\right\|_{s}\leq e^{\frac{2}{3\sqrt{3}}T} \left\|\phi\right\|_{s}, \end{equation} and \begin{equation}\label{LB1b} \sup_{t\in[0,T]}t^{\frac{|s|}{3}}\left\|S(t)\phi\right\|\lesssim_{s}g_{s}(T)\left\|\phi\right\|_{s}, \end{equation} where $$g_{s}(t)=e^{\frac{2 t}{3\sqrt{3}}}+ t^{\frac{|s|}{3}}\,f_{|s|/2}(t)$$ is a continuous nondecreasing function on $[0,T^*]$ and $f$ is defined as in Lemma \ref{LemmaLB1}. \end{prop} \begin{proof} The proof is the same as that of Proposition 1 in \cite{Amin}. \end{proof} Next, we establish the crucial bilinear estimates. \begin{prop}\label{PropLB2} Let $0\leq t\leq T\leq T^*$ and $-\frac{3}{2}<s < 0$. Then \begin{equation} \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{X_T^s} \lesssim_{s} e^{\frac{2\sqrt{2}\,\mu T}{\sqrt{27}}} T^{\frac{2s+3}{6}}\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}, \end{equation} for all $u,v\in X_T^s$. \end{prop} \begin{proof} Since $s< 0$, it follows that $\left\langle\xi \right\rangle^{s}\leq |\xi|^s$ for every nonzero $\xi \in \mathbb{R}$. Then we deduce that \begin{equation}\label{LBequa2} \begin{aligned} & \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{s} \\ & \hspace{30pt} \leq \int_{0}^t \left\|\left\langle \xi \right\rangle^s e^{\mu \left(|\xi|-|\xi|^3\right)(t-t')} \left(\partial_x(uv)(t')\right)^{\wedge}(\xi) \right\|_{} \ dt' \\ & \hspace{30pt} \leq \int_{0}^t \left\||\xi|^{1+s}e^{\mu\left(|\xi|-|\xi|^3\right)(t-t')}\right\|\left\| \widehat{u(t')}\ast \widehat{v(t')}(\xi) \right\|_{L^{\infty}(\mathbb{R})} \ dt'.
\end{aligned} \end{equation} The Young inequality implies that \begin{equation}\label{LBequa3} \left\| \widehat{u(t')}\ast\widehat{v(t')}(\xi) \right\|_{L^{\infty}(\mathbb{R})}\leq \left( \frac{\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}}{|t'|^{2|s|/3}} \right), \end{equation} thus we obtain \begin{equation}\label{LBequa4} \begin{aligned} \int_{0}^t & \left\| S(t-t')\partial_x(uv)(t') \right\|_{s} \ dt' \\ & \qquad \leq \int_{0}^t \frac{\left\||\xi|^{1+s}e^{\mu\left(|\xi|-|\xi|^3\right)t'}\right\|_{}}{|t-t'|^{2|s|/3}} \, dt' \, \left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}. \end{aligned} \end{equation} To estimate the integral on the right-hand side of \eqref{LBequa4}, we perform the change of variables $w=t^{1/3}\xi$ to deduce \begin{align}\label{LBequa5} \left\||\xi|^{1+s}e^{\mu\left(|\xi|-|\xi|^3\right)t}\right\|& \leq \frac{\left\||w|^{1+s}e^{-\frac{\mu |w|^3}{2}}\right\|\left\|e^{\mu( |w|t^{2/3}- \frac{ |w|^3}{2})}\right\|_{L^{\infty}(\mathbb{R})}}{|t|^{\frac{2s+3}{6}}} \nonumber \\ & \lesssim_{s} \frac{e^{\frac{2\sqrt{2} \,\mu T}{\sqrt{27}}}}{|t|^{\frac{2s+3}{6}}}, \end{align} where we have used the following inequality $$e^{\mu( |\xi|t^{2/3}- \frac{ |\xi|^3}{2})}\leq e^{\frac{2\sqrt{2}\,\mu t}{\sqrt{27}}}, \qquad \text{for all}\;\,\xi \in \mathbb{R}.$$ Therefore, we get from \eqref{LBequa4} and \eqref{LBequa5} that \begin{equation}\label{LBequa6} \begin{aligned} & \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt' \right\|_{s} \\ & \hspace{20pt} \lesssim_s e^{\frac{2\sqrt{2}\, \mu T}{\sqrt{27}}}T^{\frac{1}{6}(3+2s)}\left( \int_{0}^1 \frac{1}{|\sigma|^{\frac{2|s|}{3}}|1-\sigma|^{\frac{2s+3}{6}}}\ d\sigma \right)\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}, \end{aligned} \end{equation} for all $0\leq t \le T$. 
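Note that the $\sigma$-integral in \eqref{LBequa6} is a Beta integral: \begin{equation*} \int_{0}^1 \sigma^{-\frac{2|s|}{3}}\,(1-\sigma)^{-\frac{2s+3}{6}}\ d\sigma = B\Bigl(1-\frac{2|s|}{3},\,\frac{3-2s}{6}\Bigr), \end{equation*} which is finite precisely when $\frac{2|s|}{3}<1$, that is, when $s>-\frac{3}{2}$; this is where the lower bound on $s$ enters the bilinear estimate.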
On the other hand, arguing as above, we have for all $0\leq t\leq T$ that \begin{align}\label{LBequa7} t^{|s|/3}&\left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{} \nonumber \\ &\leq t^{|s|/3} \int_{0}^t \left\||\xi|e^{\mu \left(|\xi|-|\xi|^3\right)(t-t')}\right\|\left\| \widehat{u(t')}\ast \widehat{v(t')}(\xi) \right\|_{L^{\infty}(\mathbb{R})} \ dt' \nonumber \\ & \leq t^{|s|/3}\int_{0}^t \frac{\left\||\xi|e^{\mu \left(|\xi|-|\xi|^3\right)t'}\right\|}{|t-t'|^{2|s|/3}} \, dt' \, \left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} \nonumber \\ & \lesssim_s e^{\frac{2\sqrt{2}\,\mu T}{\sqrt{27}}} t^{|s|/3} \left( \int_{0}^t |t'|^{-\frac{1}{2}}|t-t'|^{-2|s|/3}\, dt' \right)\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} \nonumber \\ & \lesssim_s e^{\frac{2\sqrt{2}\,\mu T}{\sqrt{27}}} T^{\frac{1}{6}(3+2s)} \left( \int_{0}^1 |\sigma|^{-2|s|/3}|1-\sigma|^{-1/2}\, d\sigma \right)\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}. \end{align} Combining \eqref{LBequa6} and \eqref{LBequa7}, the proof is complete.
\end{proof} \begin{rem} If we consider $s'>s>-\frac{3}{2}$, then modifying the space $X_{T}^{s'}$ by $$\tilde{X}_{T}^{s'}=\left\{u\in X_{T}^{s'}: \left\|u\right\|_{\tilde{X}_{T}^{s'}}<\infty \right\},$$ where $$ \left\|u\right\|_{\tilde{X}_{T}^{s'}}= \left\|u\right\|_{X_{T}^{s'}}+t^{|s|/3} \left\|(1-\partial_x^2)^{\frac{ s'-s}{2}} u\right\|,$$ and using that $$(1+\xi^2)^{s'/2}\lesssim (1+\xi^2)^{s/2}(1+\xi_1^2)^{(s'-s)/2}+(1+\xi^2)^{s/2}\left(1+(\xi-\xi_1)^2\right)^{(s'-s)/2},$$ for all $\xi,\xi_1\in \mathbb{R}$, we deduce, arguing as in Proposition \ref{PropLB2}, that $$\left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{\tilde{X}_T^{s'}} \lesssim_{s}e^{\frac{2\sqrt{2}\,\mu T}{\sqrt{27}}} T^{\frac{2s+3}{6}}\left(\left\|u\right\|_{\tilde{X}_T^{s'}}\left\|v\right\|_{X_T^s}+\left\|u\right\|_{X_T^s}\left\|v\right\|_{\tilde{X}_T^{s'}}\right).$$ \end{rem} The next regularization property is a consequence of the semigroup property in Lemma \ref{emuc0enhs} $(ii)$; we refer to Proposition 4 in \cite{P} for its proof. \begin{prop}\label{PropLB3} Let $0\leq T \leq 1$, $s \in (-\frac{3}{2},0)$ and $\delta\in [0,s+\frac{3}{2})$. Then the application $$t\longmapsto \int_{0}^t S(t-t')\partial_x(u^2)(t')\ dt' $$ is in $C\left((0,T];H^{s+\delta}(\mathbb{R})\right)$, for every $u\in X_T^s$. \end{prop} \subsection{LWP in $H^s(\mathbb{R})$ for $s\in (-3/2,0)$} \begin{proof}[Proof of Theorem \ref{localresult}] We divide the proof into four steps. \\ \\ 1. \emph{Existence}. Let $\phi \in H^s(\mathbb{R})$ with $s> -\frac{3}{2}$. We consider the application $$\Psi(u)=S(t)\phi-\frac{1}{2}\int_{0}^t S(t-t')\partial_x(u^2(t')) \ dt',$$ for each $u \in X_T^s$.
By Proposition \ref{PropLB1} together with Proposition \ref{PropLB2}, when $s< 0$, there exists a positive constant $C=C(\mu,s)$ such that \begin{align} \left\|\Psi(u)\right\|_{X_T^s} &\leq C\left(\left\|\phi\right\|_{s}+T^{g(s)}\left\|u\right\|_{X_T^s}^2\right), \label{WPequa1} \\ \left\|\Psi(u)-\Psi(v)\right\|_{X_T^s} & \leq C T^{g(s)}\left\|u-v\right\|_{X_T^s}\left\|u+v\right\|_{X_T^s}, \label{WPequa2} \end{align} for all $u, v \in X_T^s$ and $0<T\leq 1$, where $g(s)=\frac{1}{6}(3+2s)$ for $s\in (-\frac{3}{2},0)$. Then, we define $E_{T}(\gamma)=\left\{u\in X_T^s : \left\|u \right\|_{X_T^s}\leq\gamma \right\}$, with $\gamma=2C\left\|\phi \right\|_{s}$ and $0<T\leq \min \left\{1,\left(4C\gamma\right)^{-\frac{1}{g(s)}} \right\}$. The estimates \eqref{WPequa1} and \eqref{WPequa2} imply that $\Psi$ is a contraction on the complete metric space $E_T(\gamma)$. Therefore, the Fixed Point Theorem implies the existence of a unique solution $u$ of \eqref{intequation} in $E_T(\gamma)$ with $u(0)=\phi$. \\ \\ 2. \emph{Continuous dependence}. We will verify that the map $\phi \in H^s(\mathbb{R}) \mapsto u \in X_T^s$, where $u$ is the solution of \eqref{npbo} obtained in the existence step, is continuous. More precisely, for $s>-\frac{3}{2}$, suppose that $\phi_n \rightarrow \phi_{\infty}$ in $H^s(\mathbb{R})$ and let $u_n\in X_{T_n}^s$ be the respective solutions of \eqref{intequation} (obtained in the existence step) with $u_n(0)=\phi_n$, for all $1\leq n\leq \infty$. Then for each $T'\in (0,T_{\infty})$, $u_n \in X_{T'}^s$ (for $n$ large enough) and $u_n \rightarrow u_{\infty}$ in $X_{T'}^s$. We recall that the solutions and times of existence previously constructed satisfy \begin{align} &0<T_n\leq \min\left\{1, \left(8C^2\left\|\phi_n\right\|_{s}\right)^{-\frac{1}{g(s)}}\right\}, \label{WPequa3} \\ &\left\|u_n\right\|_{X_T^s} \leq 2C\left\|\phi_n\right\|_{s}, \label{WPequa4} \end{align} for all $n\in \mathbb{N}\cup\left\{\infty\right\}$.
Let $T'\in (0,T_{\infty})$. The above inequalities and the hypothesis imply that there exists $N\in \mathbb{N}$ such that for all $n\geq N$ we have $T'\leq T_n$ and $$\frac{\left\|\phi_n\right\|_{s}+\left\|\phi_{\infty}\right\|_{s}}{\left\|\phi_{\infty}\right\|_{s}}\leq 3.$$ Therefore, combining \eqref{WPequa3} and \eqref{WPequa4} with Propositions \ref{PropLB1} and \ref{PropLB2}, it follows that for each $n\geq N$ \begin{align*} \left\|u_n-u_{\infty}\right\|_{X_{T'}^s} &\leq C\left\|\phi_n-\phi_{\infty}\right\|_{s}+ CT_{\infty}^{g(s)}\left\|u_n+u_{\infty}\right\|_{X_{T'}^s}\left\|u_n-u_{\infty}\right\|_{X_{T'}^s} \\ &\leq C\left\|\phi_n-\phi_{\infty}\right\|_{s}+ \frac{\left(\left\|\phi_n\right\|_{s}+\left\|\phi_{\infty}\right\|_{s}\right)}{4\left\|\phi_{\infty}\right\|_{s}}\left\|u_n-u_{\infty}\right\|_{X_{T'}^s} \\ &\leq C\left\|\phi_n-\phi_{\infty}\right\|_{s}+ \frac{3}{4}\left\|u_n-u_{\infty}\right\|_{X_{T'}^s}. \end{align*} Hence we have deduced that $\left\|u_n-u_{\infty}\right\|_{X_{T'}^s}\leq C\left\|\phi_n-\phi_{\infty}\right\|_{s}$, for all $n\geq N$. \\ \\ 3. \emph{Uniqueness}. Let $u,v \in X_T^s$ be solutions of the integral equation \eqref{intequation} on $[0,T]$ with the same initial data. For each $r\in [0,T]$ we define $$ G_{r}(t)= \begin{cases} \frac{1}{2}\int_{r}^{t} S(t-t')\left(\partial_x u^2(t')-\partial_x v^2(t')\right)dt', & \text{if }t\in (r,T] \\ 0, & \text{if }t\in[0,r] \end{cases} $$ for all $t \in [0,T]$. Arguing as in the proof of Proposition \ref{PropLB2}, we deduce that there exists a positive constant $C=C(\mu,s)$, depending only on $\mu$ and $s$, such that for all $r\in[0,T]$ and all $\vartheta\in [r,T]$, \begin{equation}\label{WPequa5} \left\|G_{r}\right\|_{X_{\vartheta}^s} \leq C K \left(\vartheta-r\right)^{g(s)}\left\|u-v\right\|_{X_{\vartheta}^s}, \end{equation} where $K=\left\|u\right\|_{X_{T}^s}+\left\|v\right\|_{X_{T}^s}$.
In particular, inequality \eqref{WPequa5} implies that \begin{equation}\label{WPequa6} \left\|u-v\right\|_{X_{\vartheta}^s}=\left\|G_{0}\right\|_{X_{\vartheta}^s} \leq C K \vartheta^{g(s)}\left\|u-v\right\|_{X_{\vartheta}^s}. \end{equation} Thus, choosing a fixed number $\vartheta \in \left(0,(CK)^{-\frac{1}{g(s)}}\right)$, \eqref{WPequa6} implies that $u \equiv v$ on $[0,\vartheta]$. Therefore we can iterate this argument, using \eqref{WPequa5} and our choice of $\vartheta$, until we extend the uniqueness result to the whole interval $[0,T]$. \\ \\ \emph{4. The solution $ u\in C\left((0,T],H^{\infty}(\mathbb{R})\right)$}. From Lemma \ref{LemmaLB1}, arguing as in the proof of Proposition 2.2 in \cite{BI}, we have that the map $t\mapsto S(t)\phi$ is continuous on the interval $(0,T]$ with respect to the topology of $H^{\infty}(\mathbb{R})$. Since our solution $u$ is in $X_T^s$, we deduce from Proposition \ref{PropLB3} that there exists $\lambda>0$ such that $$u\in C\left([0,T];H^s(\mathbb{R})\right)\cap C\left((0,T];H^{s+\lambda}(\mathbb{R})\right).$$ Therefore we can iterate this argument, using the uniqueness result and the fact that the time of existence of solutions depends only on the $H^s(\mathbb{R})$-norm of the initial data, to deduce that $$u\in C\left([0,T];H^s(\mathbb{R})\right)\cap C\left((0,T];H^{\infty}(\mathbb{R})\right).$$ \end{proof} \subsection{LWP in $H^s(\mathbb{R})$ for $s\geq 0$} For simplicity, we assume that $\mu=1$ and $0<T\leq 1$. We will mainly work with the integral formulation (\ref{intequation}) of the IVP (\ref{npbo}). \begin{lema} Let $\mu=1$, $0\leq s\leq 1/2$, $0\leq \tau \leq t \leq T\leq 1$ and $u\in C([0,T], H^s(\mathbb{R}))$.
Then \begin{equation}\label{cotast} \int_0^t\nor{S(t-\tau)\partial_xu^2(\tau)}{s}\,d\tau \leq C_s\,T^{(3-2s)/6}\,\nora{u}{L_T^{\infty}H_x^s}{2}. \end{equation} \end{lema} \begin{proof} \begin{align} &\nora{S(t-\tau)\partial_xu^2(\tau)}{s}{2}=\int_{\mathbb{R}}(1+\xi^2)^se^{2(|\xi|-|\xi|^3)(t-\tau)}\xi^2|\widehat{u}\ast \widehat{u}(\xi)|^2\,d\xi \notag \\ &\qquad \leq c_s \Bigl(\int_{\mathbb{R}}\xi^2e^{2(|\xi|-|\xi|^3)(t-\tau)}\,d\xi + \int_{\mathbb{R}}\xi^{2(s+1)}e^{2(|\xi|-|\xi|^3)(t-\tau)}\,d\xi \Bigr)\,\nora{\widehat{u}\ast \widehat{u}(\xi)}{L_{\xi}^{\infty}}{2}. \label{cotastuno} \end{align} Since $\xi-\xi^3\leq 1$, for $0\leq \xi\leq \sqrt{2}$, and $\xi-\xi^3\leq -\xi^3/2$, for $\xi\geq \sqrt{2}$, we have \begin{align} \int_0^{\infty}\xi^2e^{2(\xi-\xi^3)t}\,d\xi &\leq \int_0^{\sqrt{2}}\xi^2e^{2 t}\,d\xi + \int_{\sqrt{2}}^{\infty}\xi^2e^{- t \xi^3}\,d\xi \leq c(e^{2 t}+(3 t)^{-1}), \label{cotastcuatro} \\ \intertext{and} \int_0^{\infty}\xi^{2(s+1)}e^{2(\xi-\xi^3)t}\,d\xi &\leq \int_0^{\sqrt{2}}\xi^{2s+2}e^{2 t}\,d\xi + \int_{\sqrt{2}}^{\infty}\xi^{2s+2}e^{- t \xi^3}\,d\xi \notag \\ &\leq c_s \Bigl(e^{2 t}+\dfrac{\Gamma(2s+1)}{3( t)^{1+2s/3}}\Bigr). \label{cotastcinco} \end{align} Then, from (\ref{cotastuno}), (\ref{cotastcuatro}), (\ref{cotastcinco}) and Young's inequality, \begin{equation}\label{cotastdos} \nor{S(t-\tau)\partial_xu^2(\tau)}{s} \leq c_s \Bigl(e^{(t-\tau)}+\dfrac{1}{(t-\tau)^{1/2}}+\dfrac{1}{(t-\tau)^{(2s+3)/6}}\Bigr)\,\nora{u(\tau)}{}{2}. \end{equation} Integrating from $0$ to $t$ we obtain \begin{equation}\label{cotasttres} \int_0^t\nor{S(t-\tau)\partial_xu^2(\tau)}{s}\,d\tau \leq C_s \Bigl(e^t-1+2t^{1/2}+\dfrac{6}{3-2s}t^{(3-2s)/6}\Bigr)\,\nora{u}{L_t^{\infty}H_x^s}{2}. \end{equation} So, we can conclude (\ref{cotast}). \end{proof} \begin{proof}[Proof of Theorem \ref{localresult}] For $T\in(0,1]$, we consider the space $X_T^s=C\left([0,T];H^s(\mathbb{R})\right)$. Let $\phi \in H^s(\mathbb{R})$, $0\leq s\leq 1/2$.
We define the application \begin{equation}\label{intaplication} \Psi(u)=S(t)\phi-\frac{1}{2}\int_{0}^t S(t-\tau)\partial_x(u^2(\tau)) \ d\tau, \text{ for each } u \in X_T^s. \end{equation} By (\ref{cotaemu}) and (\ref{cotast}), there exists a positive constant $C_s$ such that \begin{align} \left\|\Psi(u)\right\|_{X_T^s} &\leq C\left(\left\|\phi\right\|_s+T^{\frac{3-2s}{6}}\left\|u\right\|_{X_T^s}^2\right), \label{WP1} \\ \left\|\Psi(u)-\Psi(v)\right\|_{X_T^s} & \leq C T^{\frac{3-2s}{6}}\left\|u-v\right\|_{X_T^s}\left\|u+v\right\|_{X_T^s}, \label{WP2} \end{align} for all $u, v \in X_T^s$, $0<T\leq 1$ and $s\in [0,\frac{1}{2}]$. Then, let $E_{T}(a)=\left\{u\in X_T^s : \left\|u \right\|_{X_T^s} \leq a=2C\left\|\phi \right\|_s \right\}$, where \begin{equation*} 2CaT^{\frac{-2s+3}{6}}\leq \frac{1}{2},\quad \text{ i.e.},\quad 0<T \leq \min \left\{1,\left(4Ca\right)^{\frac{6}{2s-3}} \right\}. \end{equation*} The estimates \eqref{WP1} and \eqref{WP2} imply that $\Psi$ is a contraction on the complete metric space $E_T(a)$. Therefore, we deduce by the Fixed Point Theorem that there exists a unique solution $u$ of the integral equation \eqref{intequation} in $E_T(a)$ with initial data $u(0)=\phi$. Furthermore, the existence time satisfies \begin{equation}\label{timeexistence} T\lesssim \nora{\phi}{s}{6/(2s-3)}. \end{equation} The rest of the proof follows canonical arguments, so we omit it. \end{proof} \begin{rem} From the regularization inequality (\ref{regulariza}) for the semigroup $S(t)$ and a Gronwall-type inequality (see 1.2.1 in \cite{H}), we have, for the solutions of the IVP (\ref{npbo}), that $u(t)\in H^{\infty}(\mathbb{R})$ for all $t>0$ and, in particular, for all $t\in (0,T]$ and $0\leq s\leq1/2$, \begin{equation}\label{estimativaunmedio+} \nor{u(t)}{1/2+}\leq C \nor{\phi}{s} t^{(s-(\frac{1}{2}+))/3}.
\end{equation} \end{rem} \begin{rem} When $s>1/2$, $H^s(\mathbb{R})$ is a Banach algebra and the local theory for the IVP (\ref{npbo}) reduces to considering the space $X_T^s=C\left([0,T];H^s(\mathbb{R})\right)$ and $E_{T}(a)=\left\{u\in X_T^s : \left\|u \right\|_{X_T^s} \leq a=2C\left\|\phi \right\|_s \right\}$, where $\phi \in H^s(\mathbb{R})$ and $T$ will be chosen below. We define the application $\Psi(u)$ as in (\ref{intaplication}) and, by (\ref{cotaemu}) and (\ref{regulariza}), we easily obtain \begin{align} \nor{\Psi(u)}{s} &\leq \nor{S(t)\phi}{s}+1/2\int_0^t\nor{S(t-\tau)u^2(\tau)}{s+1}\,d\tau \notag \\ &\leq C\Bigl(\nor{\phi}{s}+ \int_0^t\dfrac{\nor{u^2}{s}}{(t-\tau)^{1/3}}\,d\tau\Bigr) \notag \\ &\leq C\bigl(\nor{\phi}{s}+ \nora{u}{X_T^s}{2}T^{2/3}\bigr)\leq \dfrac{a}{2}+Ca^2T^{2/3}. \label{lwps>1/2} \end{align} Hence, according to (\ref{lwps>1/2}), we choose $T$ such that $T^{2/3}< \frac{1}{4C^2\nor{\phi}{s}}$ to obtain that $\Psi$ is a contraction. So, the IVP (\ref{npbo}) is LWP in $H^s(\mathbb{R})$ for $s>1/2$ and the existence time of the solution satisfies \begin{equation}\label{times>1/2} T\sim \nora{\phi}{s}{-3/2}. \end{equation} \end{rem} In a similar way to Proposition \ref{PropLB3}, we have \begin{prop}\label{regularity} Let $0\leq T \leq 1$, $s\geq 0$ and $\delta\in [0,s+\frac{3}{2})$. Then the application $$t\longmapsto \int_{0}^t S(t-t')\partial_x(u^2)(t')\ dt' $$ is in $C\left((0,T];H^{s+\delta}(\mathbb{R})\right)$, for every $u\in C([0,T];H^s(\mathbb{R}))$. \end{prop} \subsection{GWP in $H^s(\mathbb{R})$ for $s> -3/2$} \begin{proof}[Proof of Theorem \ref{globalresult}] Let $s\geq 0$ and $\phi \in H^s(\mathbb{R})$. It is known that $S(\cdot)\phi$ belongs to $C([0,\infty),H^s(\mathbb{R}))\cap C((0,\infty), H^{\infty}(\mathbb{R}))$.
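Indeed, this is a consequence of the smoothing estimate \eqref{regulariza}: for each $t>0$ and every $\lambda \geq 0$, \begin{equation*} \nor{S(t)\phi}{s+\lambda}\leq C_{\lambda}\bigl(e^{\mu t}+(\mu t)^{-\lambda/3}\bigr)\nor{\phi}{s}<\infty, \end{equation*} so $S(t)\phi \in H^{s+\lambda}(\mathbb{R})$ for every $\lambda \geq 0$, while the continuity of $t\mapsto S(t)\phi$ in $H^{\infty}(\mathbb{R})$ away from the origin follows arguing as in Proposition 2.2 of \cite{BI}.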
By Proposition \ref{regularity} we have that \begin{equation*} t\longmapsto \int_0^tS(t-t')\,\partial_x(u^2(t'))\,dt'\;\in C([0,T],H^{s+2\delta}(\mathbb{R})), \end{equation*} where $u \in C([0,T];H^s(\mathbb{R}))$ is the solution of (\ref{intequation}) already obtained. So we conclude that \begin{equation*} u\in C([0,T],H^s(\mathbb{R}))\cap C((0,T],H^{s+2\delta}(\mathbb{R})). \end{equation*} From the above we can deduce by induction that $u\in C((0,T],H^{\infty}(\mathbb{R}))$. Define $T^*=T^*(\nor{\phi}{s})$ by \begin{equation} T^*=\sup \bigl\{T>0: \exists ! \;\;\text{solution of (\ref{intequation}) in }C([0,T],H^s(\mathbb{R})) \bigr\}. \end{equation} Let $u\in C([0,T^*),H^s(\mathbb{R}))\cap C((0,T^*),H^{\infty}(\mathbb{R}))$ be the local solution of (\ref{intequation}) in the maximal time interval $[0,T^*)$. We shall prove that the assumption $T^*<\infty$ leads to a contradiction. Since $u$ is smooth, we deduce that $u$ solves the Cauchy problem (\ref{intequation}) in the classical sense, which allows us to take the $L^2$ scalar product of (\ref{intequation}) with $u$ and integrate by parts to obtain \begin{align*} \dfrac{1}{2}\dfrac{d}{dt}\|u(t)\|^2&=(u,u_t)_0 \notag \\ &=-(u,uu_x)_0 -(u,\mathcal{H} u_{xx})_0 -\mu(u,\mathcal{H}u_x)_0-\mu(u,\mathcal{H}u_{xxx})_0 \notag \\ &= \mu \int_{\mathbb{R}}(|\xi|-|\xi|^3)|\Hat{u}(\xi)|^2\,d\xi \notag \\ &= \mu \Bigl(\int_{|\xi|\leq 1}(|\xi|-|\xi|^3)|\Hat{u}(\xi)|^2\,d\xi + \int_{|\xi|>1}(|\xi|-|\xi|^3)|\Hat{u}(\xi)|^2\,d\xi\Bigr) \label{partiendo1} \\ &\leq \mu \int_{|\xi|\leq 1}(|\xi|-|\xi|^3)|\Hat{u}(\xi)|^2\,d\xi \notag \\ &\leq \mu \int_{|\xi|\leq 1}|\Hat{u}(\xi)|^2\,d\xi \notag \\ &\leq \mu \|u(t)\|^2, \end{align*} where we used that $|\xi|-|\xi|^3\leq 0$ for $|\xi|>1$ and $|\xi|-|\xi|^3\leq |\xi|\leq 1$ for $|\xi|\leq 1$. Integrating the last relation between $0$ and $t$ gives \begin{align} \|u(t)\|^2\leq \|\phi\|^2& + 2\mu \int_0^t \|u(\tau)\|^2\,d\tau .
\notag \end{align} Using Gronwall's inequality we obtain the a priori estimate \begin{equation} \|u(t)\|\leq \|\phi\|\,e^{\mu T^*}\equiv M, \quad \forall t\in (0,T^*). \notag \end{equation} Since the time of existence $T(\cdot)$ is a decreasing function of the norm of the initial data, we know that there exists a time $T_1>0$ such that for all $\varphi \in L^2(\mathbb{R})$ with $\nor{\varphi}{L^2}\leq M$, there exists a unique solution $v(x,t)$ of (\ref{intequation}) satisfying $v(0)=\varphi$ and $v\in C([0,T_1],L^2(\mathbb{R}))\cap C((0,T_1],H^{\infty}(\mathbb{R}))$. Now, we choose $0<\epsilon <T_1$, apply this result with $\varphi=u(T^*-\epsilon)$ and define \begin{equation*} \tilde{u}(t)=\left\{ \begin{aligned} u(t),\qquad \qquad \qquad &\text{when }\; 0\leq t\leq T^*-\epsilon, \\ v(t-(T^*-\epsilon)), \quad &\text{when }\;T^*-\epsilon \leq t\leq T^*-\epsilon+T_1. \end{aligned} \right. \end{equation*} Then $\tilde{u}$ is a solution of (\ref{intequation}) in the time interval $[0,T^*-\epsilon +T_1]$, which contradicts the definition of $T^*$, since $T^*-\epsilon +T_1>T^*$. This implies that the solution can be extended globally in time. \\ \\ Now, let $s\in (-3/2,0)$, $\phi\in H^s(\mathbb{R})$ and let $u\in X_{T}^s$ be the solution of the integral equation \eqref{intequation} obtained in the previous steps, and let $T'\in (0,T)$ be fixed. We have that $$ \left\|u\right\|_{X_{T'}^s}=M_{T',s}<\infty. $$ Since $u\in C\left((0,T];H^{\infty}(\mathbb{R})\right)$, it follows that $u(T')\in L^2(\mathbb{R})$. Thus, the GWP result in $H^s$ for $s\geq 0$ implies that $\tilde{u}$, the solution of \eqref{intequation} with initial data $u(T')$, is global in time. Moreover, uniqueness implies that $\tilde{u}(t)=u(T'+t)$ for all $t\in [0,T-T']$.
Therefore, we deduce that \begin{align*} \left\|u\right\|_{X_{T}^s} & \leq \left\|u\right\|_{X_{T'}^s}+\left\|u(T'+\cdot)\right\|_{X_{T-T'}^s} \\ &\leq M_{T',s}+\left\|\tilde{u}\right\|_{X_{T-T'}^s} \\ &= M_{T',s}+ \sup_{t\in [0,T-T']} \left\{\left\|\tilde{u}(t)\right\|_{s}+t^{|s|/3}\left\|\tilde{u}(t)\right\|\right\} \\ & \leq M_{T',s}+ \left(1+(T-T') ^{|s|/3}\right)\sup_{t\in [0,T-T']}\left\|\tilde{u}(t)\right\|. \end{align*} The global result follows from the above estimate. \end{proof} \subsection{Ill-posedness type results} In this section we prove the ill-posedness result contained in Theorem \ref{malpuestodos}. \begin{teor}\label{malpuestouno} Let $s<-\frac{3}{2}$ and $T>0$. Then there does not exist a space $X_T$ continuously embedded in $C([-T,T],H^s(\mathbb{R}))$ such that there exists $C>0$ with \begin{align} \nor{S(t)\phi}{X_T}&\leq C\,\nor{\phi}{s}; \qquad \phi \in H^s(\mathbb{R}), \label{illone} \\ \intertext{and} \nor{\int_0^tS(t-t')[u(t')u_x(t')]\,dt'}{X_T}&\leq C \nora{u}{X_T}{2}; \qquad u\in X_T. \label{illtwo} \end{align} \end{teor} Note that (\ref{illone}) and (\ref{illtwo}) would be needed to implement a Picard iterative scheme on (\ref{intequation}) in the space $X_T$. \begin{proof}[Proof of Theorem \ref{malpuestouno}] Suppose that there exists a space $X_T$ such that (\ref{illone}) and (\ref{illtwo}) hold. Take $u=S(t)\phi$ in (\ref{illtwo}). Then \begin{equation} \nor{\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{X_T}\leq C\,\nora{S(t)\phi}{X_T}{2}. \end{equation} Now, using (\ref{illone}) and the fact that $X_T$ is continuously embedded in $C([-T,T],H^s(\mathbb{R}))$, we obtain for any $t\in [-T,T]$ that \begin{equation} \nor{\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{s}\leq C\,\nora{\phi}{s}{2}. \label{illthree} \end{equation} We show that (\ref{illthree}) fails by choosing an appropriate $\phi$.
Take $\phi$ defined by its Fourier transform as \begin{equation} \widehat{\phi}(\xi)=N^{-s}\,\gamma^{-1/2}\,(\mathbb{I}_I(\xi) + \mathbb{I}_I(-\xi)) \label{funcionfi} \end{equation} where $I$ is the interval $[N,N+2\gamma]$ and $\gamma \ll N$. Note that $\nor{\phi}{s}\sim 1$. Taking $p(\xi)=\mu(|\xi|-|\xi|^3)$ and $q(\xi)=\xi |\xi|$, we have that \begin{align} \int_0^t&S(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt' \notag \\ &=\int_0^t \int_{\mathbb{R}} e^{i x \xi}F_{\mu}(t-t',\xi)(i\xi)\Bigl[F_{\mu}(t',\cdot)\widehat{\phi} \ast F_{\mu}(t',\cdot)\widehat{\phi}\Bigr](\xi)\,d\xi \,dt' \notag \\ &= i \int_{\mathbb{R}^2}e^{i x \xi +t(p(\xi)+i q(\xi))}\xi \; \widehat{\phi}(\xi - \xi_1)\,\widehat{\phi}(\xi_1) \int_0^te^{t'[p(\xi-\xi_1)+p(\xi_1)-p(\xi)+i(q(\xi-\xi_1)+q(\xi_1)-q(\xi))]}\,dt' \,d\xi_1 \,d\xi \notag \\ &= i \int_{\mathbb{R}^2}e^{i x \xi +t(p(\xi)+i q(\xi))}\xi \; \widehat{\phi}(\xi - \xi_1)\,\widehat{\phi}(\xi_1) \int_0^te^{t'[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}\,dt' \,d\xi_1 \,d\xi \label{cuentadesiempre} \end{align} where \begin{align*} \chi(\xi,\xi_1)&=p(\xi-\xi_1)+p(\xi_1)-p(\xi)=\mu (|\xi-\xi_1|-|\xi-\xi_1|^3+|\xi_1|-|\xi_1|^3-|\xi|+|\xi|^3) \\ \intertext{and} \psi(\xi,\xi_1)&=q(\xi-\xi_1)+q(\xi_1)-q(\xi)=(\xi-\xi_1)|\xi-\xi_1|+\xi_1|\xi_1|-\xi|\xi| . 
\end{align*} Since $$\widehat{\phi}(\xi - \xi_1)\,\widehat{\phi}(\xi_1)=N^{-2s}\gamma^{-1}\Bigl[\mathbb{I}_I(\xi-\xi_1)\,\mathbb{I}_I(-\xi_1)+\mathbb{I}_I(-(\xi-\xi_1))\,\mathbb{I}_I(\xi_1)\Bigr],$$ we define $$K_{\xi}:=\{\xi_1:\xi_1\in I, \xi-\xi_1\in -I\}\cup \{\xi_1:\xi_1\in -I, \xi-\xi_1\in I\}.$$ Then \begin{align} \Bigl(\int_0^t&S(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt' \Bigr)^{\wedge}(\xi) \notag \\ &= i\,\xi\,e^{t(p(\xi)+i q(\xi))}\int_{\mathbb{R}} \widehat{\phi}(\xi - \xi_1)\,\widehat{\phi}(\xi_1) \int_0^te^{t'[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}\,dt' \,d\xi_1 \notag \\ &=i\,\xi\,e^{t(p(\xi)+i q(\xi))}\int_{K_{\xi}} N^{-4s}\,\gamma^{-2} \int_0^te^{t'[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}\,dt' \,d\xi_1 .\label{fourierdenolineal} \end{align} We thus deduce that \begin{align} &\nora{\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{s}{2} \notag \\ &\geq \int_{-2\gamma}^{2\gamma}(1+\xi^2)^s|\xi|^2e^{2tp(\xi)}N^{-4s}\,\gamma^{-2}\Bigl|\int_{K_{\xi}} \int_0^te^{t'[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}\,dt' \,d\xi_1\Bigr|^2\,d\xi \notag \\ &=\int_{-2\gamma}^{2\gamma}(1+\xi^2)^s|\xi|^2e^{2tp(\xi)}N^{-4s}\,\gamma^{-2}\Bigl|\int_{K_{\xi}} \dfrac{e^{t[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}-1}{\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)} \,d\xi_1\Bigr|^2\,d\xi \notag \\ &\geq \int_{-2\gamma}^{2\gamma}(1+\xi^2)^s|\xi|^2e^{2tp(\xi)}N^{-4s}\,\gamma^{-2}\Bigl(\int_{K_{\xi}} \Re\Bigl(\dfrac{e^{t[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}-1}{\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)} \Bigr) \,d\xi_1\Bigr)^2\,d\xi. \label{importante} \end{align} Since \begin{align*} \chi(\xi,\xi_1)\sim -\mu N^3 \quad\text{and}\quad |\psi(\xi,\xi_1)| &\sim \gamma N \qquad \text{for all}\qquad \xi_1\in K_{\xi} \\ \intertext{then} (e^{t\chi}\cos(t\psi)-1)\chi &\gtrsim -\mu N^3\,e^{-\mu N^3 t} \\ \psi \sin(t\psi)\,e^{t\chi} \geq -|\psi|e^{t\chi} &\gtrsim -\gamma \,N\,e^{-\mu N^3 t} \\ \intertext{and so,} \chi^2+\psi^2 &\sim N^2(\mu^2\,N^4 + \gamma^2).
\end{align*} Hence, \begin{equation} \Bigl(\int_{K_{\xi}} \Re\Bigl(\dfrac{e^{t[\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)]}-1}{\chi(\xi,\xi_1)+i\psi(\xi,\xi_1)} \Bigr) \,d\xi_1\Bigr)^2 \gtrsim \gamma^2\;\dfrac{e^{-2\mu N^3 t}}{N^2(\mu N^2+\gamma)^2} . \label{importanteuno} \end{equation} Therefore, from (\ref{importante}) and (\ref{importanteuno}), \begin{align} \nora{\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{s}{2}&\gtrsim \int_{-2\gamma}^{2\gamma} (1+\gamma^2)^s\,\gamma^2\,N^{-4s}\,\dfrac{e^{-2\mu N^3t}}{N^2(\mu N^2+\gamma)^2}\,d\xi \notag \\ &\sim (1+\gamma^2)^s\,\gamma^3\,N^{-4s-2}\,\dfrac{e^{-2\mu N^3t}}{(\mu N^2 + \gamma)^2}. \label{importantedos} \end{align} Taking $\gamma=O(1)$, we infer for $N\gg \gamma$ and any $T>0$ that \begin{equation*} \sup_{t\in [0,T]}\nor{\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{s} \gtrsim N^{-2s-3}. \end{equation*} This contradicts (\ref{illthree}) for $N$ large enough, since $\nor{\phi}{s}\sim 1$ and $-2s-3>0$ when $s<-3/2$. \end{proof} As a consequence of Theorem \ref{malpuestouno} we can now prove Theorem \ref{malpuestodos}. \begin{proof}[Proof of Theorem \ref{malpuestodos}] Consider the Cauchy problem \begin{equation}\label{ostbourgain} \left\{ \begin{aligned} u_t+u_{xxx}+\mu (\mathcal{H}u_x + \mathcal{H}u_{xxx})+uu_x&=0, \\ u(x,0)&=\alpha \phi(x), \quad \alpha\ll 1, \quad \phi\in H^s(\mathbb{R}). \end{aligned} \right. \end{equation} Suppose that $u(\alpha,x, t)$ is a local solution of (\ref{ostbourgain}) and that the flow map is $C^2$ at the origin from $H^s(\mathbb{R})$ to $H^s(\mathbb{R})$. We have \begin{equation*} \dfrac{\partial^2u}{\partial \alpha^2}(0,x,t)=-2\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'. \end{equation*} The assumption of $C^2$ regularity yields \begin{equation*} \sup_{t\in [0,T]}\nor{-2\int_0^tS(t-t')[(S(t')\phi)(S(t')\phi_x)]\,dt'}{s}\leq C\,\nora{\phi}{s}{2}, \end{equation*} but this is exactly the estimate which has been shown to fail in the proof of Theorem \ref{malpuestouno}.
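For completeness, we recall how the formula for $\partial_{\alpha}^2u$ displayed above arises; this is the standard formal expansion of Duhamel's formula in powers of $\alpha$ (a sketch, under the $C^2$ regularity assumed here). Writing $u(\alpha)=\alpha u_1+\alpha^2u_2+O(\alpha^3)$ in
\begin{equation*}
u(\alpha,\cdot,t)=\alpha S(t)\phi-\int_0^tS(t-t')(uu_x)(\alpha,\cdot,t')\,dt'
\end{equation*}
and matching powers of $\alpha$, we get
\begin{equation*}
u_1(t)=S(t)\phi \qquad \text{and} \qquad u_2(t)=-\int_0^tS(t-t')\bigl[(S(t')\phi)\,\partial_x(S(t')\phi)\bigr]\,dt',
\end{equation*}
so that $\partial_{\alpha}^2u(0,\cdot,t)=2\,u_2(t)$, which is the expression used above.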
\end{proof} \setcounter{equation}{0} \section{Theory in $Z_{s,r}$ for $s\geq r>0$} \begin{prop}\label{xbuenl2} Let $b\in (0,1/2]$, $s>-3/2$ and $u$ be the solution of the integral equation (\ref{intequation}) with initial data $\phi \in H^s(\mathbb{R})$. If $|x|^b\phi \in L^2(\mathbb{R})$ then $|x|^bu(t)\in L^2(\mathbb{R})$ for all $t\in [0,T]$. \end{prop} \begin{proof} We employ the integral equation (\ref{intequation}) and so, for all $t\in [0,T]$, \begin{align} \nor{|x|^bu(t)}{}\leq \nor{|x|^bS(t)\phi}{} + \int_0^t\nor{|x|^bS(t-\tau)[u(\tau)u_x(\tau)]}{}\,d\tau. \label{xalabinte} \end{align} Now, using the Fourier transform, Stein's derivative $\mathcal{D}^b$ and applying (\ref{dercuatro}) and (3.12) from \cite{LP}, we have that \begin{align} \nor{|x|^bS(t-\tau)\partial_xu^2(\tau)}{}&\simeq \nor{\mathcal{D}^b\bigl(e^{i(t-\tau )\xi |\xi|+\mu (t-\tau )(|\xi|-|\xi|^3)}\xi \widehat{u^2}(\xi,\tau)\bigr)}{} \notag \\ &\leq C_b \frac{1}{(t-\tau)^{1/3}}\bigl( \nor{u^2(\tau)}{b}+\nor{|x|^bu^2(\tau)}{}\bigr) \label{preone} \\ &\leq C_b \frac{1}{(t-\tau)^{1/3}}\bigl( \nor{u(\tau)}{L^{\infty}}\nor{u(\tau)}{b}+\nor{u(\tau)}{L^{\infty}}\nor{|x|^bu(\tau)}{}\bigr) \notag \\ &\leq C_b\bigl(\nor{u}{X_T^b}+\nor{|x|^bu}{L_t^{\infty}H_x^0}\bigr)\, \frac{\nor{u(\tau)}{\frac{1}{2}+}}{(t-\tau)^{1/3}}. \label{one} \end{align} For given $\alpha$, $\beta \in [0,1)$ it holds that \begin{equation*} \int_0^t\dfrac{d\tau}{(t-\tau)^{\alpha}\,\,\tau^{\beta}}\leq c_{\alpha, \beta}\, t^{1-\alpha -\beta} . \end{equation*} This estimate, combined with (\ref{estimativaunmedio+}), gives us \begin{align} \int_0^t\nor{|x|^bS(t-\tau)[u(\tau)u_x(\tau)]}{}\,d\tau \leq C_{b,s}\bigl(\nor{u}{X_T^b}+\nor{|x|^bu}{L_t^{\infty}H_x^0}\bigr)\,\nor{\phi}{s}\,T^{\frac{2}{3}-\frac{(1/2+)-s}{3}}.
\label{two} \end{align} Hence, by (\ref{xalabinte}), (\ref{dertres}) and (\ref{two}), it follows that \begin{equation}\label{three} \sup_{t\in [0,T]}\nor{|x|^bu(t)}{}\leq C_b\bigl( \nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr)+C_{b,s}\bigl(\nor{u}{X_T^b}+\nor{|x|^bu}{L_t^{\infty}H_x^0}\bigr)\,\nor{\phi}{s}\,T^{\frac{2}{3}-\frac{(1/2+)-s}{3}}. \end{equation} By Theorem \ref{localresult} we know that $T$ depends explicitly on $\nor{\phi}{s}=a/2C$ and, since $0<T\leq 1$, we see that \begin{equation*} T^{\frac{2}{3}-\frac{(1/2+)-s}{3}}\leq T^{\frac{-2s+3}{6}}\; \Longleftrightarrow \;\frac{-2s+3}{6}\leq \frac{2}{3}-\frac{(1/2+)-s}{3}\; \Longleftrightarrow \; s\geq 0+. \end{equation*} Then, taking $s=b$, we obtain from (\ref{three}) that \begin{align} \sup_{t\in [0,T]}\nor{|x|^bu(t)}{}&\leq C_b\bigl( \nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr)+C_{b}\,a \nor{\phi}{b}\,T^{\frac{-2b+3}{6}}+C_{b}\nor{\phi}{b}\,T^{\frac{-2b+3}{6}}\nor{|x|^bu}{L_t^{\infty}H_x^0} \notag \\ &\leq C_b\bigl( \nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr)+C_{b} \nor{\phi}{b}+\dfrac{1}{4}\nor{|x|^bu}{L_t^{\infty}H_x^0}. \label{four} \end{align} So, \begin{equation}\label{five} \sup_{t\in [0,T]}\nor{|x|^bu(t)}{}\leq C_b\bigl( \nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr). \end{equation} \end{proof} \begin{rem} The proof of Proposition \ref{xbuenl2} shows that the solution of the IVP (\ref{npbo}) persists in $L^2(|x|^{2b}dx)$ for the same time of existence $T=T(\nor{\phi}{b})$ when $0< b\leq 1/2$. \end{rem} \begin{prop}\label{xbuenl21/2<b<1} Let $b\in (1/2,1)$ and $u$ be the solution of the integral equation (\ref{intequation}) with initial data $\phi \in H^b(\mathbb{R})$. If $|x|^b\phi \in L^2(\mathbb{R})$ then $|x|^bu(t)\in L^2(\mathbb{R})$ for all $t\in [0,T]$. \end{prop} \begin{proof} The proof is similar to that of Proposition \ref{xbuenl2} because the inequalities (\ref{dertres}) and (\ref{dercuatro}) are still valid when $b\in (1/2,1)$.
However, in this range $H^b(\mathbb{R})$ is a Banach algebra, and therefore from inequality (\ref{preone}) we have that \begin{align} \nor{|x|^bS(t-\tau)\partial_xu^2(\tau)}{}&\leq C_b \frac{1}{(t-\tau)^{1/3}}\bigl( \nora{u(\tau)}{b}{2}+\nor{|x|^bu(\tau)}{}\nor{u(\tau)}{L^{\infty}}\bigr) \intertext{and} \int_0^t\nor{|x|^bS(t-\tau)\partial_xu^2(\tau)}{}\,d\tau &\leq C_b\bigl(\nora{u}{X_T^b}{2}+ \nor{u}{X_T^b}\nor{|x|^bu}{L_t^{\infty}H_x^0}\bigr)\,T^{2/3} \notag \\ &\leq \bigl(4C^3\nora{\phi}{b}{2}+2C^2\nor{\phi}{b}\nor{|x|^bu}{L_t^{\infty}H_x^0}\bigr)\,T^{2/3} \notag \\ &\leq C\nor{\phi}{b} + \frac{1}{2}\nor{|x|^bu}{L_t^{\infty}H_x^0}. \label{pretwo} \end{align} In the last inequality we used the choice of $T$ in (\ref{times>1/2}). So, we can conclude (\ref{five}) with $1/2<b<1$. \end{proof} \begin{lema} Let $\theta \in (0,1/2)$, $b=1+\theta$ and $\phi \in Z_{b,b}$. Then, for any $0< t \leq 1$, \begin{align} \nor{|x|^{1+\theta}S(t)\phi}{}&\leq C_b\,t\,\bigl(\nor{|x|^{1+\theta}\phi}{}+\nor{\phi}{1+\theta}\bigr)\leq C_b \bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr) \label{first} \\ \nor{|x|^{1+\theta}S(t)\partial_x\phi}{}&\leq C_b\,t^{-1/3}\,\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{second} \end{align} \end{lema} \begin{proof} We will denote $F_{\mu}(t,\xi)$ when $\mu=1$ simply by $F(t,\xi)$. So, applying (\ref{uno}), we have that \begin{align} &\nor{|x|^{1+\theta}S(t)\phi}{}=\nor{|x|^{\theta}xS(t)\phi}{}\leq \nor{|x|^{\theta}S(t)(x\phi)}{}+\nor{t|x|^{\theta}S(t)\bigl(\mathcal{H}+2D_x-3D_x\partial_x \bigr)\phi}{}.
\label{firsttwo} \end{align} From (\ref{dertres}), replacing $\widehat{\phi}$ with $\partial_{\xi}\widehat{\phi}$, and applying (\ref{simplifica}), we find that \begin{align} \nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\partial_{\xi}\widehat{\phi}(\xi)\bigr)}{}&\leq C_{\theta} \Bigl(\nor{\partial_{\xi}\widehat{\phi}(\xi)}{}+\nor{\,|\xi|^{\theta}\partial_{\xi}\widehat{\phi}(\xi)}{}+\nor{\mathcal{D}^{\theta}\bigl(\partial_{\xi}\widehat{\phi}(\xi)\bigr)}{}\Bigr) \notag \\ &\leq C_{\theta} \Bigl(\nor{x\phi}{}+\nor{\langle \xi \rangle^{\theta}\partial_{\xi}\widehat{\phi}(\xi)}{}+\nor{|x|^{\theta}x\phi}{} \Bigr) \notag \\ &\leq C_{\theta} \bigl(\nor{|x|^{1+\theta}\phi}{}+ \nor{\phi}{1+\theta} \bigr). \label{firstthree} \end{align} It follows from (\ref{firstthree}) that the first term on the right-hand side of (\ref{firsttwo}) is bounded by \begin{align} \nor{|x|^{\theta}S(t)(x\phi)}{}&\leq C\Bigl(\nor{S(t)(x\phi)}{}+\nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\partial_{\xi}\widehat{\phi}(\xi)\bigr)}{}\Bigr) \notag \\ &\leq C\bigl(\nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr).
\label{firstfive} \end{align} The second term on the right-hand side of (\ref{firsttwo}) is bounded by \begin{align} & \nor{tS(t)\bigl(\mathcal{H}-3D_x\partial_x +2D_x\bigr)\phi}{}+\nor{t\,\mathcal{D}^{\theta} \bigl(F(t,\xi) sgn(\xi) \widehat{\phi}(\xi)\bigr)}{} + \nor{t\,\mathcal{D}^{\theta} \bigl(F(t,\xi) |\xi| \widehat{\phi}(\xi)\bigr)}{} \notag \\ & +\nor{t\,\mathcal{D}^{\theta} \bigl(F(t,\xi) \xi |\xi| \widehat{\phi}(\xi)\bigr)}{} \label{firstsix} \end{align} and, applying (\ref{unoa}), (\ref{derdos}), (\ref{dertres}) and (\ref{dercuatro}), we have that (\ref{firstsix}) is less than \begin{align} C_{\theta}t \Bigl( &\nor{\phi}{1}+ t^{-1/3}\nor{\phi}{1}+\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\mathcal{H}\phi}{}\bigr) \notag \\ &+t^{-1/3}\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr)+ t^{-2/3}\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr)\Bigr) \notag \\ &\leq C_{\theta} \bigl(\nor{\phi}{1} +\nor{|x|^{\theta}\phi}{} +\nor{|x|^{\theta}\mathcal{H}\phi}{}\bigr). \label{firstseven} \end{align} Finally, since $\theta\in (0,1/2)$, $|x|^{\theta}\in A_2$, which means that $\nor{|x|^{\theta}\mathcal{H}\phi}{}\leq c \nor{|x|^{\theta}\phi}{}$; hence \begin{equation}\label{firsteight} \nor{t|x|^{\theta}S(t)\bigl(\mathcal{H}+2D_x-3D_x\partial_x \bigr)\phi}{} \leq C\bigl(\nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr). \end{equation} Estimates (\ref{firstfive}) and (\ref{firsteight}) complete the proof of (\ref{first}). Now, we are going to obtain (\ref{second}) by proceeding in the same way.
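For the reader's convenience we recall the Muckenhoupt condition invoked above (a standard fact from the theory of weighted inequalities, which we only state): a weight $w\geq 0$ belongs to the class $A_2$ when
\begin{equation*}
\sup_{Q}\Bigl(\dfrac{1}{|Q|}\int_Q w(x)\,dx\Bigr)\Bigl(\dfrac{1}{|Q|}\int_Q w(x)^{-1}\,dx\Bigr)<\infty,
\end{equation*}
the supremum being taken over all bounded intervals $Q$. For the power weights $w(x)=|x|^a$ this holds exactly when $-1<a<1$, and the Hilbert transform is bounded on $L^2(w\,dx)$ for every $w\in A_2$; this is the inequality $\nor{|x|^{\theta}\mathcal{H}\phi}{}\leq c\, \nor{|x|^{\theta}\phi}{}$ used above.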
So, applying (\ref{dertres}), (\ref{dercuatro}) and (\ref{simplifica}), we find that \begin{align} \nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\partial_{\xi}(\xi \widehat{\phi}(\xi))\bigr)}{}&\leq \nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\widehat{\phi}(\xi)\bigr)}{}+\nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\xi \partial_{\xi}\widehat{\phi}(\xi)\bigr)}{} \notag \\ &\leq c_{\theta}\bigl(\nor{|x|^{\theta}\phi}{}+\nor{\phi}{\theta}\bigr)+c_{\theta}t^{-1/3}\Bigl(\nor{|x|^{\theta}x \phi}{}+\nor{\langle \xi \rangle^{\theta}\partial_{\xi}\widehat{\phi}(\xi)}{}\Bigr) \notag \\ &\leq C_{\theta}t^{-1/3}\bigl(\nor{\phi}{b}+\nor{|x|^{b}\phi}{}\bigr), \label{seconduno} \end{align} and since \begin{align} \nor{\mathcal{D}^{\theta}\Bigl(\partial_{\xi}(F(t,\xi))\xi \widehat{\phi}\Bigr)}{}&=t\nor{\mathcal{D}^{\theta}\Bigl(F(t,\xi)(sgn(\xi)+2i|\xi|-3\xi |\xi|)\xi \widehat{\phi}\Bigr)}{} \notag \\ &\leq Ct(I_1+I_2+I_3), \label{seconddos} \end{align} applying (\ref{dercuatro}), we obtain for $\lambda=1$, $2$, $3$ \begin{align} I_{\lambda}=\nor{\mathcal{D}^{\theta}\Bigl(F(t,\xi)|\xi|^{\lambda}\widehat{\phi}\Bigr)}{}&\leq c_{\theta}t^{-\lambda/3}\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr) \notag \\ &\leq c_{\theta}t^{-\lambda/3}\bigl(\nor{\phi}{b}+\nor{|x|^b\phi}{}\bigr). \label{secondtres} \end{align} Hence, \begin{align} \nor{|x|^{\theta}xS(t)\partial_x\phi}{}&\leq \nor{\mathcal{D}^{\theta} \bigl(F(t,\xi)\partial_{\xi}(\xi \widehat{\phi}(\xi))\bigr)}{}+\nor{\mathcal{D}^{\theta}\Bigl(\partial_{\xi}(F(t,\xi))\xi \widehat{\phi}\Bigr)}{} \notag \\ &\leq C_{\theta}t^{-1/3}\bigl(\nor{\phi}{b}+\nor{|x|^{b}\phi}{}\bigr)+C_{\theta}\bigl(\nor{\phi}{b}+\nor{|x|^{b}\phi}{}\bigr) \notag \\ &\leq C_{\theta}t^{-1/3}\bigl(\nor{\phi}{b}+\nor{|x|^{b}\phi}{}\bigr). \notag \end{align} \end{proof} \begin{prop}\label{xbuenl21<b<3/2} Let $\theta \in (0,1/2)$, $b=1+\theta$ and $u$ be the solution of the integral equation (\ref{intequation}) with $\phi \in H^b(\mathbb{R})$. 
If $|x|^b\phi \in L^2(\mathbb{R})$ then $|x|^bu(t)\in L^2(\mathbb{R})$ for all $t\in [0,T]$. \end{prop} \begin{proof} The proof is the same as that of Proposition \ref{xbuenl21/2<b<1}, but applying (\ref{first}) and (\ref{second}) instead of (\ref{dertres}) and (\ref{dercuatro}). \end{proof} \begin{lema}\label{nota2} Let $\theta \in (1/2,3/2)$. Then, \begin{equation}\label{noa2} \nor{|x|^{\theta}\mathcal{H}\phi}{}\leq \nor{|x|^{\theta}\phi}{}\quad \Longleftrightarrow \quad \widehat{\phi}(0)=0. \end{equation} If $\theta=1/2$, \begin{equation}\label{noa2uno} \widehat{\phi}(0)=0\quad \Longrightarrow \quad \nor{|x|^{1/2}\mathcal{H}\phi}{}\leq \nor{\langle x \rangle \phi }{}. \end{equation} \end{lema} \begin{proof} Let $1/2<\theta <3/2$. Since $x\mathcal{H}\phi=\mathcal{H}(x\phi)$ if and only if $\widehat{\phi}(0)=0$, we have \begin{align} \nor{|x|^{\theta}\mathcal{H}\phi}{}&=\nor{|x|^{\theta-1}x\mathcal{H}\phi}{}=\nor{|x|^{\theta-1}\mathcal{H}(x\phi)}{} \notag \\ &\leq \nor{|x|^{\theta-1}(x\phi)}{}= \nor{|x|^{\theta}\phi}{}. \label{noa2dos} \end{align} The inequality in (\ref{noa2dos}) is true because $-1/2<\theta -1 <1/2$ and so $|x|^{\theta -1}\in A_2$. If $\theta =1/2$, with the help of (\ref{noa2}) we obtain that \begin{align} \nora{|x|^{1/2}\mathcal{H}\phi}{}{2}&=\int |x|\mathcal{H}\phi \,\overline{\mathcal{H}\phi}\,dx \leq \nor{x\mathcal{H}\phi}{}\nor{\mathcal{H}\phi}{} \notag \\ &=\nor{\mathcal{H}(x\phi)}{}\nor{\phi}{}=\nor{x\phi}{}\nor{\phi}{}\leq \nora{\langle x \rangle \phi }{}{2}. \label{noa2tres} \end{align} \end{proof} \begin{prop}\label{xbuenl23/2<b<2} Let $\theta \in [1/2,1)$, $b=1+\theta$ and $u$ be the solution of (\ref{intequation}) with $\phi \in H^b(\mathbb{R})$. If $\widehat{\phi}(0)=0$ and $|x|^b\phi \in L^2(\mathbb{R})$ then $|x|^bu(t)\in L^2(\mathbb{R})$ for all $t\in [0,T]$.
\end{prop} \begin{proof} The argument is exactly the same as that of Proposition \ref{xbuenl21<b<3/2}, except for the estimate leading to (\ref{firsteight}), because $|x|^{\theta}$ is not an $A_2$ weight. However, applying Lemma \ref{nota2} we can obtain (\ref{firsteight}) for $\theta \in [1/2,1)$. \end{proof} \begin{lema} Let $\theta \in (0,1/2)$, $b=2+\theta$ and $\phi \in \dot{Z}_{b,b}$. Then, for any $0< t \leq 1$, it holds that \begin{align} \nor{|x|^{2+\theta}S(t)\phi}{}&\leq C_b\,t\,\bigl(\nor{|x|^{2+\theta}\phi}{}+\nor{\phi}{2+\theta}\bigr)\leq C_b \bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr) \label{third} \\ \text{and}\quad \quad \nor{|x|^{2+\theta}S(t)\partial_x\phi}{}&\leq C_b\,t^{-1/3}\,\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{fourth} \end{align} \end{lema} \begin{proof} We note that \begin{align} |x|^bS(t)\phi &= |x|^{\theta}x^2S(t)\phi \notag \\ &= |x|^{\theta}S(t)(x^2\phi)+2|x|^{\theta}\bigl(\partial_{\xi}F(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)\bigr)^{\vee}(x)+|x|^{\theta}\bigl(\partial_{\xi}^2(F(t,\xi))\widehat{\phi}(\xi)\bigr)^{\vee}(x) \notag \\ &= B_1+B_2+B_3. \label{thirduno} \end{align} By means of $\mathcal{D}^{\theta}$, it suffices to estimate the $L^2$-norms of the terms $B_1$, $B_2$ and $B_3$.
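The decomposition (\ref{thirduno}) is simply the Fourier-side Leibniz rule: up to constants and signs depending on the normalization of the Fourier transform,
\begin{equation*}
\bigl(x^2S(t)\phi\bigr)^{\wedge}(\xi)=-\partial_{\xi}^2\bigl(F(t,\xi)\widehat{\phi}(\xi)\bigr)=-\Bigl(F(t,\xi)\,\partial_{\xi}^2\widehat{\phi}(\xi)+2\,\partial_{\xi}F(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)+\partial_{\xi}^2F(t,\xi)\,\widehat{\phi}(\xi)\Bigr),
\end{equation*}
and, after inverting the Fourier transform and multiplying by $|x|^{\theta}$, the three terms give $B_1$, $B_2$ and $B_3$, respectively.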
Using (\ref{dertres}) we obtain for $B_1$ that \begin{align} \nor{\mathcal{D}^{\theta}(F(t,\xi)\partial_{\xi}^2\widehat{\phi})}{}&\leq c_{\theta}\Bigl(\nor{\partial_{\xi}^2\widehat{\phi}}{}+\nor{|\xi|^{\theta}\partial_{\xi}^2\widehat{\phi}}{}+\nor{\mathcal{D}^{\theta}\bigl(\partial_{\xi}^2\widehat{\phi}\,\bigr)}{}\Bigr) \notag \\ &\leq c_{\theta} \Bigl(\nor{x^2\phi}{}+\nor{\langle \xi \rangle^{\theta}J_{\xi}^2 \widehat{\phi}}{}+\nor{|x|^{2+\theta}\phi}{}\Bigr) \notag \\ &\leq c_{\theta} \Bigl(\nor{\langle x\rangle^{2+\theta}\phi}{}+\nora{\langle \xi \rangle^{2+\theta} \widehat{\phi}}{}{\frac{\theta}{2+\theta}}\nora{J_{\xi}^{2+\theta} \widehat{\phi}}{}{\frac{2}{2+\theta}}\Bigr) \notag \\ &\leq C_{\theta} \bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{thirddos} \end{align} To estimate the $L^2$-norm of $B_2$ we proceed as in the estimate of the second term on the right-hand side of (\ref{firsttwo}), but with $\partial_{\xi}\widehat{\phi}$ instead of $\widehat{\phi}$. So, from (\ref{firstseven}), applying (\ref{simplifica}) and since $|x|^{\theta}\in A_2$, we obtain that \begin{align} \nor{\mathcal{D}^{\theta}\Bigl(\partial_{\xi}F(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)\Bigr)}{}&\leq c_{\theta} \Bigl(\nor{x\phi}{} +\nor{|x|^{\theta}x\phi}{} +\nor{|x|^{\theta}\mathcal{H}(x\phi)}{}+\nor{|\xi|^{\theta}\partial_{\xi}\widehat{\phi}}{}\Bigr) \notag \\ &\leq C_{\theta}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{thirdtres} \end{align} To estimate the $L^2$-norm of $B_3$ we use (\ref{dos}) and the fact that the product $\delta \,\widehat{\phi}=\widehat{\phi}(0)\,\delta=0$. So, \begin{align} &\nor{\mathcal{D}^{\theta}\bigl(2 t\delta +tF(t,\xi)[2i\,sgn(\xi)-6|\xi|]+t^2F(t,\xi)[sgn(\xi)+2i|\xi|-3 \xi|\xi|]^2\bigr)\widehat{\phi}}{} \notag \\ &\leq t\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi)[2i\,sgn(\xi)-6|\xi|]\widehat{\phi}\bigr)}{}+t^2\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi)[sgn(\xi)+2i|\xi|-3 \xi|\xi|]^2\widehat{\phi}\bigr)}{} \notag \\ &=tB_{31}+t^2B_{32}.
\label{thirdcuatro} \end{align} Then, applying (\ref{dertres}), (\ref{dercuatro}) and the fact that $|x|^{\theta}\in A_2$, \begin{align} tB_{31}&\leq ct\Bigl(\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi) sgn(\xi) \widehat{\phi}\,\bigr)}{}+\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi) |\xi| \widehat{\phi}\,\bigr)}{}\Bigr) \notag \\ &\leq c_{\theta} t\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr)+c_{\theta} t^{2/3}\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr) \notag \\ &\leq C_{\theta}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr), \label{thirdcinco} \end{align} and \begin{align} t^2B_{32}&= t^2\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi)(sgn(\xi)-3 \xi|\xi|)^2\widehat{\phi}-4F(t,\xi)\xi^2\widehat{\phi}+4iF(t,\xi)|\xi|(sgn(\xi)-3 \xi|\xi|)\widehat{\phi}\,\bigr)}{} \notag \\ &\leq c_{\theta}t^2\sum_{j=0}^4\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi)\xi^j\widehat{\phi}\,\bigr)}{}\leq c_{\theta}t^2\sum_{j=0}^4t^{-j/3}\bigl(\nor{\phi}{}+\nor{D_x^{\theta}\phi}{}+\nor{|x|^{\theta}\phi}{}\bigr) \notag \\ &\leq C_{\theta}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{thirdseis} \end{align} Hence, (\ref{thirddos}), (\ref{thirdtres}), (\ref{thirdcinco}) and (\ref{thirdseis}) imply (\ref{third}). To prove (\ref{fourth}) we note that \begin{align} |x|^b&S(t)\partial_x\phi = |x|^{\theta}x^2S(t)\partial_x\phi \notag \\ &= |x|^{\theta}S(t)(x^2\partial_x\phi)+2|x|^{\theta}\bigl(\partial_{\xi}F(t,\xi)\,\partial_{\xi}\widehat{\partial_x\phi}(\xi)\bigr)^{\vee}(x)+|x|^{\theta}\bigl(\partial_{\xi}^2(F(t,\xi))\widehat{\partial_x\phi}(\xi)\bigr)^{\vee}(x) \notag \\ &= G_1+G_2+G_3. \label{fourthuno} \end{align} We proceed exactly as in the proof of (\ref{third}). Then, to estimate the $L^2$-norm of $G_2+G_3$ we use the inequalities applied to obtain (\ref{thirdtres}), (\ref{thirdcinco}) and (\ref{thirdseis}), but with $\partial_x\phi$ in place of $\phi$.
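The elementary identities used repeatedly in this substitution are just the product rule on the Fourier side:
\begin{equation*}
\partial_{\xi}\bigl(\xi\,\widehat{\phi}(\xi)\bigr)=\widehat{\phi}(\xi)+\xi\,\partial_{\xi}\widehat{\phi}(\xi)
\qquad \text{and} \qquad
\partial_{\xi}^2\bigl(\xi\,\widehat{\phi}(\xi)\bigr)=2\,\partial_{\xi}\widehat{\phi}(\xi)+\xi\,\partial_{\xi}^2\widehat{\phi}(\xi),
\end{equation*}
which produce the splittings used in the estimates of $G_1$, $G_2$ and $G_3$.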
So, applying (\ref{simplificaotra}) we have \begin{align} &\nor{\mathcal{D}^{\theta}\Bigl(\partial_{\xi}F(t,\xi)\,\partial_{\xi}\bigl(\xi \widehat{\phi}(\xi)\bigr)\Bigr)}{}+\nor{\mathcal{D}^{\theta}\bigl(\partial_{\xi}^2F(t,\xi)\bigr) \xi \widehat{\phi}}{} \notag \\ &\leq c_{\theta}\bigl(\nor{|x|^{1+\theta}\partial_x\phi}{}+\nor{\partial_x\phi}{1+\theta}\bigr) \notag \\ &\leq C_{\theta}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr). \label{fourthdos} \end{align} Finally, to estimate the $L^2$-norm of $G_1$ we find that \begin{align} \nor{\mathcal{D}^{\theta}(F(t,\xi)\partial_{\xi}^2(\xi \widehat{\phi}))}{}&\leq 2\nor{\mathcal{D}^{\theta}(F(t,\xi)\partial_{\xi}\widehat{\phi})}{}+\nor{\mathcal{D}^{\theta}(F(t,\xi)\xi\partial_{\xi}^2\widehat{\phi})}{} \notag \\ &\leq G_{11} + G_{12}. \label{fourthtres} \end{align} Applying (\ref{dertres}), but with $h$ replaced by $x\phi$, we have that \begin{align} G_{11}&\leq c_{\theta} \Bigl(\nor{x\phi}{}+\nor{|\xi|^{\theta}\partial_{\xi}\widehat{\phi}}{}+\nor{\mathcal{D}^{\theta}\partial_{\xi}\widehat{\phi}}{}\Bigr) \notag \\ &\leq C_{\theta}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr), \label{fourthcuatro} \end{align} and applying (\ref{dercuatro}), but with $h$ replaced by $x^2\phi$, we get that \begin{align} G_{12}&=\nor{\mathcal{D}^{\theta}\bigl(F(t,\xi)\xi \,\widehat{x^2\phi}\,\bigr)}{} \notag \\ &\leq c_{\theta} t^{-1/3}\bigl(\nor{|x|^{\theta}x^2\phi}{}+\nor{J_x^{\theta}x^2\phi}{}\bigr) \notag \\ &\leq C_{\theta}t^{-1/3}\bigl(\nor{|x|^b\phi}{}+\nor{\phi}{b}\bigr), \label{fourthcinco} \end{align} where we have used the same inequalities applied to obtain (\ref{thirddos}), because $\nor{J_x^{\theta}x^2\phi}{}=\nor{\langle \xi \rangle^{\theta}\partial_{\xi}^2\widehat{\phi}}{}$. \end{proof} \begin{prop}\label{xbuenl22<b<5/2} Let $\theta \in (0,1/2)$, $b=2+\theta$ and $u$ be the solution of the integral equation (\ref{intequation}) with $\phi \in H^b(\mathbb{R})$.
If $\widehat{\phi}(0)=0$ and $|x|^b\phi \in L^2(\mathbb{R})$ then $|x|^bu(t)\in L^2(\mathbb{R})$ for all $t\in [0,T]$. \end{prop} \begin{proof} The proof is the same as that of Proposition \ref{xbuenl21/2<b<1}, but applying (\ref{third}) and (\ref{fourth}) instead of (\ref{dertres}) and (\ref{dercuatro}). \end{proof} \begin{proof}[Proof of Theorem \ref{pre}] Part \textbf{(i)} is a direct consequence of Propositions \ref{xbuenl2}, \ref{xbuenl21/2<b<1} and \ref{xbuenl21<b<3/2}. Part \textbf{(ii)} is deduced from Propositions \ref{xbuenl23/2<b<2} and \ref{xbuenl22<b<5/2}. \end{proof} \setcounter{equation}{0} \section{Proof of Theorem \ref{contunica1}} Without loss of generality we assume that $t_1=0<t_2$. Since $u(t_1)=\phi \in Z_{3/2,3/2}$, we have $\phi \in Z_{3/2,b}$ for every $b<3/2$, and then $u\in C([0,T]; Z_{3/2,3/2-})$ by Proposition \ref{xbuenl21<b<3/2}. The solution to the IVP (\ref{npbo}) can be represented by Duhamel's formula \begin{equation}\label{uDuhformula} u(t)=S(t)\phi - \int_0^tS(t-t')(uu_x)(t')\,dt', \end{equation} where $S(t)$ is given by (\ref{semigroup}). From Plancherel's equality we have that, for every $t$, $|x|^{1/2}x S(t)\phi \in L^2(\mathbb{R})$ if and only if $D_{\xi}^{1/2}\partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))\in L^2(\mathbb{R})$. The argument in our proof requires localizing near the origin in Fourier frequencies by a function $\chi \in C_0^{\infty}$ with $\text{supp} \,\chi \subseteq (-\epsilon , \epsilon)$ and $\chi \equiv 1$ on $(-\epsilon/2, \epsilon/2)$. Let us start with the computation for the linear part in (\ref{uDuhformula}) by introducing a commutator as follows \begin{align} \chi D_{\xi}^{1/2}\partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))&=\left[\chi , D_{\xi}^{1/2}\right] \partial_{\xi}\left(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\right)+D_{\xi}^{1/2}\left(\chi \partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))\right) \notag \\ &=A+B.
\label{linearpart} \end{align} From Proposition \ref{simplestimate} and identity (\ref{uno}) we have that \begin{align} \nor{A}{}&= \nor{[\chi , D_{\xi}^{1/2}] \partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))}{} \notag \\ &\lesssim \nor{\partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))}{} \notag \\ &\lesssim \nor{\mu t \,\text{sgn}(\xi)F_{\mu}(t,\xi)\widehat{\phi}(\xi)}{}+\nor{2it|\xi|F_{\mu}(t,\xi)\widehat{\phi}(\xi)}{}+\nor{3\mu t \xi |\xi| F_{\mu}(t,\xi)\widehat{\phi}(\xi)}{}+\nor{F_{\mu}(t,\xi)\partial_{\xi} \widehat{\phi}(\xi)}{} \notag \\ &\lesssim te^{\mu t}\nor{\phi}{}+2t(e^{\mu t}+(\mu t)^{-1/3})\nor{\phi}{}+3t (e^{\mu t}+(\mu t)^{-2/3})\nor{\phi}{}+e^{\mu t}\nor{\partial_{\xi}\widehat{\phi}(\xi)}{} \notag \\ &\lesssim [(1+t)e^{\mu t}+t^{2/3}+t^{1/3}]\,\nor{\phi}{Z_{1,1}}, \label{aestimate} \end{align} where (\ref{cotaemu}) and (\ref{unoa}) were used. Rewriting $B$, we obtain that \begin{align} B&=D_{\xi}^{1/2}(\chi \partial_{\xi}(F_{\mu}(t,\xi)\widehat{\phi}(\xi))) \notag \\ &=D_{\xi}^{1/2}\left(\mu t \,\text{sgn}(\xi) \chi F_{\mu}(t,\xi)\widehat{\phi}(\xi)\right)+D_{\xi}^{1/2}\left(2i t |\xi| \chi F_{\mu}(t,\xi)\widehat{\phi}(\xi)\right) \notag \\ &\quad +D_{\xi}^{1/2}\left((-3\mu) t \xi |\xi| \chi F_{\mu}(t,\xi)\widehat{\phi}(\xi)\right)+D_{\xi}^{1/2}\left(\chi F_{\mu}(t,\xi) \partial_{\xi} \widehat{\phi}(\xi)\right) \notag \\ &=B_1+B_2+B_3+B_4. \label{bestimate} \end{align} Now, we are going to estimate $B_4$ in $L^2(\mathbb{R})$.
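Before doing so, we recall (a standard fact, essentially the content of Theorem \ref{derivaStein}, stated here in the form in which we use it) that for $0<b<1$ the homogeneous derivative $D^b$, $\widehat{D^bf}(\xi)=|\xi|^b\widehat{f}(\xi)$, is comparable in $L^2$ with the pointwise Stein derivative:
\begin{equation*}
\mathcal{D}^bf(x)=\Bigl(\int_{\mathbb{R}}\dfrac{|f(x)-f(y)|^2}{|x-y|^{1+2b}}\,dy\Bigr)^{1/2},
\qquad \nor{D^bf}{}\simeq \nor{\mathcal{D}^bf}{}.
\end{equation*}
In particular, by Plancherel's equality, $\nor{|x|^bf}{}\simeq \nor{D_{\xi}^b\widehat{f}\,}{}\simeq \nor{\mathcal{D}_{\xi}^b\widehat{f}\,}{}$, which is how the weighted norms are handled below.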
From Theorem \ref{derivaStein}, inequalities (\ref{cotaemu}) and (\ref{productostein}) in Lemma \ref{rprod}, and inequality (\ref{dertres}) in Lemma \ref{clave1}, we get that \begin{align} \nor{B_4}{}&\lesssim \nor{\chi F_{\mu}(t,\xi) \partial_{\xi} \widehat{\phi}(\xi)}{}+\nor{\mathcal{D}_{\xi}^{1/2}\left( F_{\mu}(t,\xi) \chi \partial_{\xi} \widehat{\phi}(\xi)\right)}{} \notag \\ &\lesssim e^{\mu t}\nor{x \phi}{}+\nor{\chi \partial_{\xi} \widehat{\phi}(\xi)}{}+\nor{|\xi|^{1/2}\chi \partial_{\xi} \widehat{\phi}(\xi)}{}+\nor{\mathcal{D}_{\xi}^{1/2}\left( \chi \partial_{\xi} \widehat{\phi}(\xi)\right)}{} \notag \\ &\lesssim e^{\mu t}\nor{x \phi}{}+\nor{\chi}{\infty}\nor{x \phi}{}+\nor{|\xi|^{1/2}\chi}{\infty}\nor{x \phi}{}+\nor{\mathcal{D}_{\xi}^{1/2}\left(\chi \right)\,\partial_{\xi} \widehat{\phi}(\xi)}{}+\nor{\chi \mathcal{D}_{\xi}^{1/2}\left( \partial_{\xi} \widehat{\phi}(\xi)\right)}{} \notag \\ &\leq c(T) \nor{\langle x \rangle^{1+1/2} \phi}{}. \label{b4} \end{align} Estimates for $B_2$ and $B_3$ in $L^2(\mathbb{R})$ are obtained in a similar way but using (\ref{dercuatro}) instead of (\ref{dertres}). To estimate $B_1$ in $L^2(\mathbb{R})$ we introduce $\tilde{\chi} \in C^{\infty}_0(\mathbb{R})$ such that $\tilde{\chi}\equiv 1$ on $\text{supp}\,(\chi)$. Then, we can express this term as \begin{align} D_{\xi}^{1/2}\left( t \,\text{sgn}(\xi) F_{\mu}(t,\xi) \, \chi \, \widehat{\phi}(\xi)\right)&= t D_{\xi}^{1/2}\left( F_{\mu}(t,\xi)\, \tilde{\chi}\, \chi \, \text{sgn}(\xi)\, \widehat{\phi}(\xi)\right) \notag \\ &= t \left(\left[D_{\xi}^{1/2}, F_{\mu}(t,\xi) \,\tilde{\chi} \right] \, \chi \,\text{sgn}(\xi) \,\widehat{\phi}(\xi) + F_{\mu}(t,\xi)\, \tilde{\chi} \, D_{\xi}^{1/2}\bigl( \chi \, \text{sgn}(\xi) \widehat{\phi}(\xi)\bigr)\right) \notag \\ &=t(B_{1,1}+B_{1,2}).
\label{b1} \end{align} Again, Proposition \ref{simplestimate} can be applied to estimate $B_{1,1}$ in $L^2(\mathbb{R})$ as \begin{align} \nor{B_{1,1}}{}&=\nor{\left[D_{\xi}^{1/2}, F_{\mu}(t,\xi) \,\tilde{\chi} \right] \, \chi \, \text{sgn}(\xi) \,\widehat{\phi}(\xi)}{} \notag \\ &\lesssim \nor{\chi \, \text{sgn}(\xi) \,\widehat{\phi}(\xi)}{} \notag \\ &\lesssim \nor{\phi}{}. \label{b11} \end{align} Once we show that the integral part in Duhamel's formula (\ref{uDuhformula}) lies in $L^2(|x|^3\,dx)$, we will be able to conclude that $$B_{1,2}, \; \tilde{\chi} \, D_{\xi}^{1/2}\bigl( \chi \, \text{sgn}(\xi) \widehat{\phi}(\xi)\bigr), \; D_{\xi}^{1/2}\bigl( \tilde{\chi} \, \chi \, \text{sgn}(\xi) \widehat{\phi}(\xi)\bigr) \; \in L^2(\mathbb{R}), $$ because $u(\cdot , t_2)\in Z_{3/2, 3/2}$ by hypothesis. Therefore, from Proposition \ref{I1} it will follow that $\widehat{\phi}(0)=0$, and from the conservation law $$I(u)=\int_{\mathbb{R}}u(x,t)\,dx=\widehat{\phi}(0)=0,$$ i.e., $\widehat{u}(0,t)=0$ for all $t$. Hence, $u(\cdot , t) \in \dot{Z}_{3/2,3/2}$. \\ \\ In order to complete the proof, we consider the integral part in Duhamel's formula. We will denote $z=uu_x=\frac{1}{2}\,\partial_x(u^2)$ and so $\widehat{z}=i\,\frac{\xi}{2}\,\widehat{u}*\widehat{u}.$ \begin{align} \left(|x|^{1/2}\,x\,\int_0^tS(t-t')z(t')\,dt'\right)^{\wedge} (\xi)&=\int_0^tD_{\xi}^{1/2}\partial_{\xi}\bigl( F_{\mu}(t-t',\xi)\widehat{z}(t',\xi)\bigr)\,dt' \notag \\ &=\int_0^tD_{\xi}^{1/2}\bigl(\partial_{\xi} F_{\mu}(t-t',\xi) \, \widehat{z}(t',\xi)\bigr)\,dt' +\int_0^tD_{\xi}^{1/2}\bigl( F_{\mu}(t-t',\xi) \, \partial_{\xi}\widehat{z}(t',\xi) \bigr)\,dt' \notag \\ &=\mathcal{A}+\mathcal{B}.
\label{integralpart} \end{align} We localize again with the help of $\chi \in C_0^{\infty}(\mathbb{R})$ and then we can write \begin{align} \chi \,\mathcal{A}&=\int_0^t\left[\chi , D_{\xi}^{1/2} \right] \bigl(\partial_{\xi} F_{\mu}(t-t',\xi) \, \widehat{z}(t',\xi)\bigr)\,dt' + \int_0^tD_{\xi}^{1/2}\bigl( \chi \, \partial_{\xi} F_{\mu}(t-t',\xi) \, \widehat{z}(t',\xi)\bigr)\,dt' \notag \\ &=\int_0^t\left[\chi , D_{\xi}^{1/2} \right] \bigl((t-t')(\mu \text{sgn}(\xi)+2i|\xi|-3\mu \xi |\xi|) F_{\mu}(t-t',\xi)\, \widehat{z}(t',\xi)\bigr)\,dt' \notag \\ &\quad + \int_0^tD_{\xi}^{1/2}\bigl( \chi (t-t')(\mu \text{sgn}(\xi)+2i|\xi|-3\mu \xi |\xi|) F_{\mu}(t-t',\xi)\, \widehat{z}(t',\xi)\bigr)\,dt' \notag \\ &=\mathcal{A}_1+\mathcal{A}_2+\mathcal{A}_3+\mathcal{A}_4+\mathcal{A}_5+\mathcal{A}_6, \label{chiA} \end{align} and \begin{align} \chi \,\mathcal{B}&=\int_0^t \chi D_{\xi}^{1/2}\bigl( F_{\mu}(t-t',\xi) \, \partial_{\xi}\widehat{z}(t',\xi) \bigr)\,dt' \notag \\ &=\int_0^t[\chi , D_{\xi}^{1/2}]\bigl( F_{\mu}(t-t',\xi) \, \partial_{\xi}\widehat{z}(t',\xi) \bigr)\,dt' + \int_0^tD_{\xi}^{1/2}\bigl( \chi F_{\mu}(t-t',\xi) \, \partial_{\xi}\widehat{z}(t',\xi) \bigr)\,dt' \notag \\ &=\mathcal{B}_1+\mathcal{B}_2. \label{chiB} \end{align} Now we must bound all the terms in (\ref{chiA}) and (\ref{chiB}), but we restrict our attention to $\mathcal{A}_3$, $\mathcal{A}_6$, $\mathcal{B}_1$ and $\mathcal{B}_2$, which are the most representative; the others can be treated in a similar way.
So, combining Proposition \ref{simplestimate}, (\ref{unoa}) and Hölder's inequality we have that \begin{align} \nor{\mathcal{A}_3}{}&\leq \int_0^t\nor{ \left[\chi , D_{\xi}^{1/2} \right] \bigl(-3\mu (t-t') \xi |\xi| \, F_{\mu}(t-t',\xi)\, \widehat{z}(t',\xi)\bigr)}{}\,dt' \notag \\ &\lesssim \int_0^t(t-t') \nor{ F_{\mu}(t-t',\xi)\,\xi |\xi|\, \widehat{z}(t',\xi)}{}\,dt' \notag \\ &\lesssim \int_0^t(t-t')\bigl(e^{\mu (t-t')}+(t-t')^{-2/3}\bigr) \, \nor{\xi \, \widehat{u}*\widehat{u}(t',\xi)}{}\,dt' \notag \\ &\lesssim \Bigl(\int_0^t\bigl((t-t')e^{\mu (t-t')}+(t-t')^{1/3}\bigr)^2 \,dt'\Bigr)^{1/2} \nor{\partial_x (u^2)}{L_T^2L_x^2} \notag \\ &\lesssim c(T) T^{1/2}\nor{u}{L_T^{\infty}L_x^{\infty}}\nor{\partial_xu}{L_T^{\infty}L_x^2} \notag \\ &\lesssim c(T) \nora{u}{L_T^{\infty}H_x^1}{2}. \label{cotaA3} \end{align} For $\mathcal{A}_6$, using Stein's derivative, (\ref{unoa}) and (\ref{dertres}), we obtain that \begin{align} \nor{\mathcal{A}_6}{}&\leq \int_0^t\nor{ D_{\xi}^{1/2} \Bigl(-3\mu (t-t') \chi \, F_{\mu}(t-t',\xi)\,\xi |\xi| \, \widehat{z}(t',\xi)\Bigr)}{}\,dt' \notag \\ &\lesssim \int_0^t (t-t') \nor{ \chi \, \xi |\xi| \, F_{\mu}(t-t',\xi)\, \widehat{z}(t',\xi)}{}\,dt' + \int_0^t (t-t') \nor{ \mathcal{D}_{\xi}^{1/2} \Bigl(F_{\mu}(t-t',\xi)\,\xi |\xi| \,\chi \, \widehat{z}(t',\xi)\Bigr)}{}\,dt' \notag \\ &\lesssim \int_0^t (t-t') \nor{ \chi \,\xi |\xi|}{\infty} e^{\mu(t-t')}\nor{\widehat{z}}{}\,dt' + \int_0^t (t-t') \Bigl( \nor{\chi \,\xi |\xi|\,\widehat{z}}{} +\nor{|\xi|^{1/2}\,\chi \,\xi |\xi| \, \widehat{z}}{} +\nor{ \mathcal{D}_{\xi}^{1/2} \bigl(\chi \,\xi |\xi|\,\widehat{z}\, \bigr)}{}\Bigr)\,dt' \notag \\ &=\mathcal{Y}_1+\mathcal{Y}_2.
\end{align} Repeating almost verbatim the estimates leading to (\ref{cotaA3}), one has that $$\mathcal{Y}_1\leq c(T) \nora{u}{L_T^{\infty}H_x^1}{2}\, ,$$ and, using (\ref{productostein}) and (3.12) from \cite{LP}, \begin{align} \mathcal{Y}_2 &\leq \int_0^t (t-t') \Bigl( \Bigl( \nor{\chi \,\xi^2 |\xi|}{\infty} +\nor{\chi \,\xi^2 |\xi|^{3/2}}{\infty}+\nor{ \mathcal{D}_{\xi}^{1/2} \bigl(\chi \,\xi^2 |\xi|\bigr)}{\infty}\Bigr)\nor{\widehat{u}*\widehat{u}}{} +\nor{ \chi \,\xi^2 |\xi|}{\infty}\nor{\mathcal{D}_{\xi}^{1/2} \bigl(\widehat{u}*\widehat{u}\bigr)}{}\Bigr)\,dt' \notag \\ &\lesssim c(T)\Bigl(\nor{\widehat{u}*\widehat{u}}{L_T^1L_x^2}+\nor{\mathcal{D}_{\xi}^{1/2} \bigl(\widehat{u}*\widehat{u}\bigr)}{L_T^1L_x^2}\Bigr) \notag \\ &\lesssim c(T)\Bigl(\nor{u^2}{L_T^1L_x^2}+\nor{|x|^{1/2} u^2}{L_T^1L_x^2}\Bigr) \notag \\ &\lesssim c(T)\Bigl(T \nor{u}{L_T^{\infty}L_x^{\infty}}\nor{u}{L_T^{\infty}L_x^2}+T \nor{u}{L_T^{\infty}L_x^{\infty}}\nor{|x|^{1/2} u}{L_T^{\infty}L_x^2}\,\Bigr) \notag \\ &\lesssim c(T) \nor{u}{L_T^{\infty}H_x^1} \Bigl(\nor{u}{L_T^{\infty}H_x^1}+\nor{|x|^{1/2}u}{L_T^{\infty}L_x^2}\,\Bigr).
\label{y2} \end{align} For $\mathcal{B}_1$, applying Proposition \ref{simplestimate}, (\ref{unoa}) and, again, (3.12) from \cite{LP}, we have \begin{align} \nor{\mathcal{B}_1}{}&\lesssim \int_0^t\nor{[\chi , D_{\xi}^{1/2}]\bigl( F_{\mu}(t-t',\xi) \, \partial_{\xi}(\xi \, \widehat{u}*\widehat{u} ) \bigr)}{}\,dt' \notag \\ &\lesssim \int_0^t\nor{ F_{\mu}(t-t',\xi) \, \widehat{u}*\widehat{u} }{}\,dt' + \int_0^t\nor{ F_{\mu}(t-t',\xi) \,\xi \, \partial_{\xi}(\widehat{u}*\widehat{u} ) }{}\,dt' \notag \\ &\lesssim \int_0^t e^{\mu (t-t')} \nor{\widehat{u}*\widehat{u} }{}\,dt' + \int_0^t \Bigl(e^{\mu (t-t')} + (t-t')^{-1/3}\Bigr) \nor{\partial_{\xi}(\widehat{u}*\widehat{u} ) }{}\,dt' \notag \\ &\lesssim c(T)\Bigl(\nor{u^2}{L_T^1L_x^2}+\nor{x u^2}{L_T^1L_x^2}\Bigr) \notag \\ &\lesssim c(T)\Bigl(\nora{u}{L_T^{\infty}H_x^1}{2} + \nor{x u}{L_T^{\infty}L_x^2}\nor{u}{L_T^{\infty}H_x^1}\,\Bigr) \notag \\ &\lesssim c(T) \nor{u}{L_T^{\infty}H_x^1} \Bigl(\nor{u}{L_T^{\infty}H_x^1}+\nor{x\,u}{L_T^{\infty}L_x^2}\,\Bigr). \label{B1} \end{align} Finally, for $\mathcal{B}_2$, we use Stein's derivative \begin{align} \nor{\mathcal{B}_2}{} &\lesssim \int_0^t \nor{ \chi \, F_{\mu}(t-t',\xi) \, \partial_{\xi}\widehat{z}(t',\xi)}{}\,dt' + \int_0^t\nor{\mathcal{D}_{\xi}^{1/2}\bigl( \chi \, F_{\mu}(t-t',\xi) \, \partial_{\xi}(\xi \,\widehat{u}*\widehat{u}(t',\xi)) \bigr)}{}\,dt' \notag \\ &=Z_1+Z_2. \end{align} The estimate for $Z_1$ is obtained in a similar way to the bound for $\mathcal{B}_1$.
To estimate $Z_2$ we use (\ref{dertres}), (\ref{productostein}) and (3.12) from \cite{LP} \begin{align} Z_2&\leq \int_0^t\nor{\mathcal{D}_{\xi}^{1/2}\bigl( F_{\mu}(t-t',\xi) \, \chi \, \widehat{u}*\widehat{u} \bigr)}{}\,dt' + \int_0^t\nor{\mathcal{D}_{\xi}^{1/2}\bigl(F_{\mu}(t-t',\xi) \, \chi \, \xi \,\partial_{\xi}(\widehat{u}*\widehat{u}) \bigr)}{}\,dt' \notag \\ &\lesssim \int_0^t\Bigl(\nor{\chi \, \widehat{u}*\widehat{u}}{}+\nor{|\xi|^{1/2} \, \chi \, \widehat{u}*\widehat{u}}{}+\nor{\mathcal{D}_{\xi}^{1/2}\bigl( \chi \, \widehat{u}*\widehat{u} \bigr)}{}\Bigr)\,dt' \notag \\ &\qquad +\int_0^t\Bigl(\nor{\chi \, \xi \,\partial_{\xi}(\widehat{u}*\widehat{u})}{}+\nor{|\xi|^{1/2}\,\chi \, \xi \,\partial_{\xi}(\widehat{u}*\widehat{u})}{}+\nor{\mathcal{D}_{\xi}^{1/2}\bigl(\chi \, \xi \,\partial_{\xi}(\widehat{u}*\widehat{u}) \bigr)}{}\Bigr)\,dt' \notag \\ &\lesssim \int_0^t\Bigl(\Bigl(\nor{\chi}{\infty}+\nor{|\xi|^{1/2}\chi}{\infty}+\nor{\mathcal{D}_{\xi}^{1/2}\bigl(\chi \bigr)}{\infty}\Bigr)\nor{\widehat{u}*\widehat{u}}{}+\nor{\chi}{\infty}\nor{\mathcal{D}_{\xi}^{1/2}\bigl(\widehat{u}*\widehat{u} \bigr)}{}\Bigr)\,dt' \notag \\ &\qquad +\int_0^t\Bigl(\Bigl(\nor{\chi \xi}{\infty}+\nor{|\xi|^{1/2}\chi \xi}{\infty}+\nor{\mathcal{D}_{\xi}^{1/2}\bigl(\chi \xi \bigr)}{\infty}\Bigr)\nor{\partial_{\xi}(\widehat{u}*\widehat{u})}{}+\nor{\chi \xi}{\infty}\nor{\mathcal{D}_{\xi}^{1/2}\partial_{\xi}\bigl(\widehat{u}*\widehat{u} \bigr)}{}\Bigr)\,dt' \notag \\ &\lesssim c(T)\Bigl( \nor{\widehat{u}*\widehat{u}}{L_T^1L_{\xi}^2}+\nor{\mathcal{D}_{\xi}^{1/2}(\widehat{u}*\widehat{u})}{L_T^1L_{\xi}^2}+\nor{\partial_{\xi}(\widehat{u}*\widehat{u})}{L_T^1L_{\xi}^2}+\nor{\mathcal{D}_{\xi}^{1/2}\partial_{\xi}\bigl(\widehat{u}*\widehat{u} \bigr)}{L_T^1L_{\xi}^2}\Bigr) \notag \\ &\lesssim c(T) \nor{u}{L_T^{\infty}H_x^1} \Bigl(\nor{u}{L_T^{\infty}H_x^1}+\nor{|x|^{1/2}u}{L_T^{\infty}L_x^2}+\nor{x u}{L_T^{\infty}L_x^2}+ \nor{|x|^{3/2}u}{L_T^{\infty}L_x^2}\Bigr).
\label{z2} \end{align} Hence, the terms in (\ref{chiA}) and (\ref{chiB}) are all bounded; then, by applying the argument after inequality (\ref{b11}), we complete the proof. \setcounter{equation}{0} \section{Proof of Theorem \ref{contunica2}} From Proposition \ref{xbuenl22<b<5/2} and the hypothesis we have that for any $\epsilon >0$ $$u\in C([0,T]; \dot{Z}_{5/2, 5/2-\epsilon})\qquad \text{and} \qquad u(\cdot, t_j)\in L^2(|x|^5\,dx), \qquad j=1, \, 2, \, 3.$$ Consequently, $$\widehat{u}\in C([0,T]; H^{5/2-\epsilon}(\mathbb{R})\cap L^2(|\xi|^5\,d\xi)) \qquad \text{and}\qquad \widehat{u}(\cdot, t_j)\in H^{5/2}(\mathbb{R}), \qquad j=1, \, 2, \, 3,$$ for all $\epsilon >0$. Thus, in particular, it follows that $$\widehat{u}*\widehat{u} \in C([0,T];H^{4}(\mathbb{R})\cap L^2(|\xi|^5\,d\xi)).$$ Let us assume that $t_1=0<t_2<t_3$. Applying (\ref{uno}) and (\ref{dos}) from Lemma \ref{lemdecaida} we obtain that \begin{align} \partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)&=E(t,\xi,\widehat{\phi}(\xi)) \notag \\ &=\bigl[2it\,\text{sgn}(\xi)-6\mu t |\xi| +t^2\mu^2+4i\mu t^2\xi -(6\mu^2+4)t^2 \xi^2-12i\mu t^2 \xi^3 +9\mu^2 t^2 \xi^4\bigr]\,F_{\mu}(t,\xi)\,\widehat{\phi}(\xi) \notag \\ & +2\mu t \,\text{sgn}(\xi)F_{\mu}(t,\xi)\partial_{\xi}\widehat{\phi}(\xi) + 4it |\xi|\,F_{\mu}(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)-6\mu t \xi |\xi| \, F_{\mu}(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)+F_{\mu}(t,\xi)\,\partial_{\xi}^2\widehat{\phi}(\xi), \label{dosprima} \end{align} where we use that the initial data $\phi$ has zero mean value, so that the term involving the Dirac delta in (\ref{dosprima}) vanishes.
Using Plancherel's theorem and Duhamel's formula (\ref{uDuhformula}), it will be sufficient to show that the assumption that \begin{equation} D_{\xi}^{1/2}E(t,\xi,\widehat{\phi}(\xi))-\int_0^tD_{\xi}^{1/2}E(t-t',\xi,\widehat{z}(t',\xi))\,dt' \label{d5/2fd} \end{equation} lies in $L^2(\mathbb{R})$ at the times $t_1=0<t_2<t_3$, where $\widehat{z}=i\,\frac{\xi}{2}\,\widehat{u}*\widehat{u}$, leads to a contradiction. First, we prove that the linear part in (\ref{d5/2fd}) persists in $L^2$. We introduce, as in the proof of Theorem \ref{contunica1}, a localizer $\chi \in C_0^{\infty}$ with $\text{supp} \,\chi \subseteq (-\epsilon , \epsilon)$ and $\chi \equiv 1$ on $(-\epsilon/2, \epsilon/2)$, so that \begin{align} \chi D_{\xi}^{1/2}\partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)&= [\chi , D_{\xi}^{1/2}]\partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)+ D_{\xi}^{1/2}\bigl(\chi \partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)\bigr) \notag \\ &=J+K. \label{jmask} \end{align} As for the first term $J$, from Proposition \ref{simplestimate}, it is bounded in $L^2(\mathbb{R})$ by $\nor{\partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)}{}$, which is finite, as can be observed from its explicit representation in (\ref{dosprima}), the assumption on the initial data $\phi$, and the quite similar computation already performed in (\ref{aestimate}); we therefore omit the details.
On the other hand, for $K$, we notice that \begin{align} K&=D_{\xi}^{1/2}\bigl(\chi \partial_{\xi}^2\bigl(F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)\bigr) \notag \\ &=2itD_{\xi}^{1/2}\bigl(\chi \text{sgn}(\xi)F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)-6\mu tD_{\xi}^{1/2}\bigl(\chi |\xi| F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr) +t^2\mu^2D_{\xi}^{1/2}\bigl(\chi F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr) \notag \\ &\;\;\;+4i\mu t^2D_{\xi}^{1/2}\bigl(\chi \xi F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr) -(6\mu^2+4)t^2 D_{\xi}^{1/2}\bigl(\chi \xi^2F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr)-12i\mu t^2D_{\xi}^{1/2}\bigl(\chi \xi^3F_{\mu}(t,\xi)\widehat{\phi}(\xi)\bigr) \notag \\ &\;\;\; +9\mu^2 t^2 D_{\xi}^{1/2}\bigl(\chi \xi^4\,F_{\mu}(t,\xi)\,\widehat{\phi}(\xi)\bigr)+2\mu t D_{\xi}^{1/2}\bigl(\chi \text{sgn}(\xi)F_{\mu}(t,\xi)\partial_{\xi}\widehat{\phi}(\xi)\bigr) + 4itD_{\xi}^{1/2}\bigl(\chi |\xi|\,F_{\mu}(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)\bigr) \notag \\ &\;\;\; -6\mu t D_{\xi}^{1/2}\bigl(\chi \xi |\xi| \, F_{\mu}(t,\xi)\,\partial_{\xi}\widehat{\phi}(\xi)\bigr)+D_{\xi}^{1/2}\bigl(\chi F_{\mu}(t,\xi)\,\partial_{\xi}^2\widehat{\phi}(\xi)\bigr) \notag \\ &=K_1+K_2+K_3+K_4+K_5+K_6+K_7+K_8+K_9+K_{10}+K_{11}. \label{estimatek} \end{align} We show in detail the estimates for $K_7$ and $K_{11}$, which are the terms involving the highest regularity and decay of the initial data. Estimates for all the other terms in (\ref{estimatek}), except $K_8$, are obtained in a similar manner to $K_7$ and $K_{11}$. $K_8$ will be canceled by a term arising in the integral part of Duhamel's formula.
For $K_7$ we obtain from Theorem \ref{derivaStein}, (\ref{dercuatro}), (\ref{unoa}) and fractional product rule type estimate (\ref{productostein}) that \begin{align} \nor{K_7}{}&\lesssim t^2\nor{ \chi \xi^4 F_{\mu}(t,\xi)\widehat{\phi}(\xi)}{} + t^2\nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi \xi^4\,F_{\mu}(t,\xi)\,\widehat{\phi}(\xi)\bigr)}{} \notag \\ &\lesssim t^2\nor{ \chi \xi^4 F_{\mu}(t,\xi)\widehat{\phi}(\xi)}{} + t^{2/3}\Bigl(\nor{\chi \widehat{\phi}(\xi)}{}+\nor{|\xi|^{1/2}\chi \widehat{\phi}(\xi)}{}+\nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi \widehat{\phi}(\xi)\bigr)}{}\Bigr) \notag \\ &\lesssim t^2e^{\mu t}\nor{\chi \xi^4}{\infty}\nor{\phi}{}+t^{2/3}\Bigl(\nor{\phi}{}+\nor{|\xi|^{1/2}\chi}{\infty} \nor{\phi}{}+\nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi \bigr)}{\infty}\nor{\phi}{}+\nor{\chi}{\infty}\nor{\mathcal{D}_{\xi}^{1/2}\bigl(\widehat{\phi}(\xi)\bigr)}{}\Bigr) \notag \\ & \lesssim c(T) \nor{\langle x \rangle^{1/2} \phi }{}. \label{estimatek7} \end{align} and similarly \begin{align} \nor{K_{11}}{}&\lesssim \nor{ \chi F_{\mu}(t,\xi)\partial_{\xi}^2 \widehat{\phi}(\xi)}{} + \nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi F_{\mu}(t,\xi) \partial_{\xi}^2 \widehat{\phi}(\xi)\bigr)}{} \notag \\ &\lesssim \nor{ \chi }{\infty} e^{\mu t}\nor{\partial_{\xi}^2 \widehat{\phi}(\xi)}{} +\nor{\chi \partial_{\xi}^2\widehat{\phi}(\xi)}{}+\nor{|\xi|^{1/2}\chi \partial_{\xi}^2\widehat{\phi}(\xi)}{}+\nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi \partial_{\xi}^2\widehat{\phi}(\xi)\bigr)}{} \notag \\ &\lesssim e^{\mu t}\nor{x^2 \phi}{}+\nor{\chi}{\infty}\nor{x^2\phi}{}+\nor{|\xi|^{1/2}\chi}{\infty} \nor{x^2\phi}{}+\nor{ \mathcal{D}_{\xi}^{1/2}\bigl(\chi \bigr)}{\infty}\nor{\partial_{\xi}^2\widehat{\phi}(\xi)}{}+\nor{\chi}{\infty}\nor{\mathcal{D}_{\xi}^{1/2}\partial_{\xi}^2 \bigl(\widehat{\phi}(\xi)\bigr)}{} \notag \\ & \lesssim c(T) \nor{\langle x \rangle^{2+1/2} \phi }{}. 
\label{estimatek11} \end{align} Now, let us go over the integral part in (\ref{d5/2fd}), which can be written in Fourier space, with the help of a commutator, as \begin{align} -\int_0^t \chi D_{\xi}^{1/2}E(t-t',\xi,\widehat{z}(t',\xi))\,dt'&=\int_0^t[D_{\xi}^{1/2}, \chi ] E(t-t',\xi,\widehat{z}(t',\xi))\,dt' - \int_0^tD_{\xi}^{1/2}\bigl(\chi E(t-t',\xi,\widehat{z}(t',\xi))\bigr)\,dt' \notag \\ &=\mathcal{J}+\mathcal{K}, \label{jjmaskk} \end{align} where \begin{align} \mathcal{J}&= \int_0^t[D_{\xi}^{1/2}, \chi ] \Bigl( 2\mu (t-t')\delta \, \widehat{z}(t',\xi) +\Bigl( 2i(t-t')\,\text{sgn}(\xi)-6\mu (t-t') |\xi| +(t-t')^2\mu^2+4i\mu (t-t')^2\xi \notag \\ &\qquad -(6\mu^2+4)(t-t')^2 \xi^2-12i\mu (t-t')^2 \xi^3+9\mu^2 (t-t')^2 \xi^4\Bigr) F_{\mu}(t-t',\xi)\widehat{z}(t',\xi) \notag \\ &\qquad +2\mu (t-t') \,\text{sgn}(\xi)F_{\mu}(t-t',\xi)\partial_{\xi}\widehat{z}(t',\xi) + 4i(t-t') |\xi|\,F_{\mu}(t-t',\xi)\,\partial_{\xi}\widehat{z}(t',\xi) \notag \\ &\qquad -6\mu (t-t') \xi |\xi| \, F_{\mu}(t-t',\xi)\,\partial_{\xi}\widehat{z}(t',\xi)+F_{\mu}(t-t',\xi)\,\partial_{\xi}^2\widehat{z}(t',\xi) \Bigr) \,dt'\notag \\ &= \mathcal{J}_1+\mathcal{J}_2+\mathcal{J}_3+\mathcal{J}_4+\mathcal{J}_5+\mathcal{J}_6+\mathcal{J}_7+\mathcal{J}_8+\mathcal{J}_9+\mathcal{J}_{10}+\mathcal{J}_{11}+\mathcal{J}_{12}, \label{estimatejs} \end{align} and \begin{align} \mathcal{K}&=- \int_0^tD_{\xi}^{1/2}\Bigl( 2\mu (t-t')\chi \delta \, \widehat{z}(t',\xi) +\Bigl( 2i(t-t') \chi \,\text{sgn}(\xi)-6\mu (t-t')\chi |\xi| +(t-t')^2\mu^2\chi +4i\mu (t-t')^2 \chi \xi \notag \\ &\qquad -(6\mu^2+4)(t-t')^2\chi \xi^2-12i\mu (t-t')^2\chi \xi^3+9\mu^2 (t-t')^2\chi \xi^4\Bigr) F_{\mu}(t-t',\xi)\widehat{z}(t',\xi) \notag \\ &\qquad +2\mu (t-t') \,\text{sgn}(\xi)\chi F_{\mu}(t-t',\xi)\partial_{\xi}\widehat{z}(t',\xi) + 4i(t-t')\chi |\xi|\,F_{\mu}(t-t',\xi)\,\partial_{\xi}\widehat{z}(t',\xi) \notag \\ &\qquad -6\mu (t-t')\chi \xi |\xi| \, F_{\mu}(t-t',\xi)\,\partial_{\xi}\widehat{z}(t',\xi)+\chi
F_{\mu}(t-t',\xi)\,\partial_{\xi}^2\widehat{z}(t',\xi) \Bigr) \,dt' \notag \\ &= \mathcal{K}_1+\mathcal{K}_2+\mathcal{K}_3+\mathcal{K}_4+\mathcal{K}_5+\mathcal{K}_6+\mathcal{K}_7+\mathcal{K}_8+\mathcal{K}_9+\mathcal{K}_{10}+\mathcal{K}_{11}+\mathcal{K}_{12}. \label{estimateks} \end{align} Notice that $\mathcal{J}_1$ and $\mathcal{K}_1$ vanish since $u\partial_xu$ has zero mean value, while the $L^2(\mathbb{R})$ estimates for $\mathcal{J}_2$, $\mathcal{J}_3$, $\mathcal{J}_4$, $\mathcal{J}_5$, $\mathcal{J}_6$, $\mathcal{J}_7$, $\mathcal{J}_8$, $\mathcal{J}_9$, $\mathcal{J}_{10}$, $\mathcal{J}_{11}$, $\mathcal{J}_{12}$, $\mathcal{K}_2$, $\mathcal{K}_3$, $\mathcal{K}_4$, $\mathcal{K}_5$, $\mathcal{K}_6$, $\mathcal{K}_7$, $\mathcal{K}_8$, $\mathcal{K}_{10}$, $\mathcal{K}_{11}$ and $\mathcal{K}_{12}$ are essentially the same as those for their counterparts in equations (\ref{chiA}) and (\ref{chiB}) in the proof of Theorem \ref{contunica1}, so we omit the details of their estimates. Therefore, from the assumption that $\phi=u(0)=u(t_1)$, $u(t_2) \in \dot{Z}_{5/2,5/2}$, equations (\ref{estimatejs}), (\ref{estimateks}) and the estimates above, we conclude that \begin{align} R&=K_8+\mathcal{K}_9 \notag \\ &=2\mu t D_{\xi}^{1/2}\bigl(\chi \text{sgn}(\xi)F_{\mu}(t,\xi)\partial_{\xi}\widehat{\phi}(\xi)\bigr)- \int_0^tD_{\xi}^{1/2}\Bigl(2\mu (t-t') \,\text{sgn}(\xi)\chi F_{\mu}(t-t',\xi)\partial_{\xi}\widehat{z}(t',\xi) \Bigr) \,dt', \label{R} \end{align} is a function in $L^2(\mathbb{R})$ at time $t=t_2$.
But \begin{align} R&=2\mu t D_{\xi}^{1/2}\bigl(\chi \text{sgn}(\xi)F_{\mu}(t,\xi)\bigl(\partial_{\xi}\widehat{\phi}(\xi)-\partial_{\xi}\widehat{\phi}(0)\bigr)\bigr) \notag \\ &\;\;\;- 2\mu \int_0^tD_{\xi}^{1/2}\Bigl( (t-t') \,\text{sgn}(\xi)\chi F_{\mu}(t-t',\xi)\Bigl( \partial_{\xi}\Bigl(i\frac{\xi}{2}\widehat{u}*\widehat{u}(t',\xi)\Bigr)- \partial_{\xi}\Bigl(i\frac{\xi}{2}\widehat{u}*\widehat{u}(t',0)\Bigr)\Bigr)\Bigr) \,dt' \notag \\ &\;\;\;+2\mu t D_{\xi}^{1/2}\bigl(\chi \text{sgn}(\xi)F_{\mu}(t,\xi)\partial_{\xi}\widehat{\phi}(0)\bigr) \notag\\ &\;\;\;-2\mu \int_0^t(t-t') D_{\xi}^{1/2}\Bigl( \text{sgn}(\xi)\chi F_{\mu}(t-t',\xi)\partial_{\xi}\Bigl(i\frac{\xi}{2}\widehat{u}*\widehat{u}(t',0)\Bigr) \Bigr) \,dt' \notag \\ &=R_1+R_2+R_3+R_4. \label{IdR} \end{align} We argue like at the end of the proof of Theorem 3 in \cite{FP}. In this way, $R_1$ and $R_2$ are in $L^2(\mathbb{R})$ and this implies that $(R_3+R_4)(t_2)\in L^2(\mathbb{R})$. Also, $$\partial_{\xi}\Bigl(i\frac{\xi}{2}\widehat{u}*\widehat{u}\Bigr)(0)=-i\int x u\partial_xu\,dx = \frac{i}{2}\nora{u}{}{2},$$ and from the npBO equation we have \begin{equation}\label{intxnpbo} \dfrac{d}{dt}\int xu\,dx+\int x\partial_x^2\mathcal{H}u\,dx+\int x u\partial_xu\,dx+\mu \int x \partial_x\mathcal{H}u\,dx+\mu \int x \partial_x^3\mathcal{H}u\,dx = 0, \end{equation} which shows that \begin{equation}\label{intxnpbo1} \dfrac{d}{dt}\int xu\,dx-\mu \int \mathcal{H}u\,dx = -\int x u\partial_xu\,dx=\dfrac{1}{2}\nora{u}{}{2}, \end{equation} and hence $$\partial_{\xi}\Bigl(i\frac{\xi}{2}\widehat{u}*\widehat{u}\Bigr)(0)=i\,\dfrac{d}{dt}\int xu\,dx \, ,$$ because $\widehat{\mathcal{H}u}(0)=\int \mathcal{H}u\,dx=0$. 
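For completeness, the identity $-\int x u\partial_xu\,dx=\frac{1}{2}\nora{u}{}{2}$ used above is just a single integration by parts, assuming enough spatial decay of $u$ so that the boundary term vanishes:
\begin{align*}
-\int x\,u\,\partial_xu\,dx=-\frac{1}{2}\int x\,\partial_x\bigl(u^2\bigr)\,dx=\frac{1}{2}\int u^2\,dx=\frac{1}{2}\nora{u}{}{2}.
\end{align*}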
Now, substituting this into $R_4$, we have, after integration by parts, that \begin{align} R_4&=-2 i \mu D_{\xi}^{1/2}\left[ \text{sgn}(\xi)\chi \int_0^t(t-t') F_{\mu}(t-t',\xi)\Bigl( \dfrac{d}{dt}\int xu\,dx \Bigr) \,dt' \right] \notag \\ &=-2 i \mu D_{\xi}^{1/2} \left[\text{sgn}(\xi)\chi (t-t') F_{\mu}(t-t',\xi)\left. \Bigl(\int xu\,dx \Bigr) \right|_{t'=0}^{t'=t}\right. \notag \\ &\;\;\;+\left.\text{sgn}(\xi) \chi \int_0^t F_{\mu}(t-t',\xi) \Bigl(\int xu\,dx \Bigr)\,dt' + \text{sgn}(\xi)\chi \int_0^t (t-t')F_{\mu}(t-t',\xi)b_{\mu}(\xi) \Bigl(\int xu\,dx \Bigr)\,dt' \right] \notag \\ &=S_1+S_2+S_3. \label{s3} \end{align} Since $\partial_{\xi}\widehat{\phi}(0)=-i\widehat{x\phi}(0)=-i\int x\phi(x)\,dx$, we have $S_1=-R_3$. We observe that $S_3$ in (\ref{s3}) belongs to $L^2(\mathbb{R})$; therefore $$S_2=-2 i \mu D_{\xi}^{1/2}\left(\text{sgn}(\xi) \chi \int_0^t F_{\mu}(t-t',\xi) \Bigl(\int xu(x,t')\,dx \Bigr)\,dt'\right)$$ is in $L^2(\mathbb{R})$ at time $t=t_2$, and from Theorem \ref{derivaStein} this is equivalent to having that \begin{equation}\label{s2} \mathcal{D}_{\xi}^{1/2}\left(\text{sgn}(\xi) \chi(\xi) \int_0^{t_2} F_{\mu}(t_2-t',\xi) \Bigl(\int xu(x,t')\,dx \Bigr)\,dt'\right)\in L^2(\mathbb{R}), \end{equation} which from Proposition 3 in \cite{FP} implies that $\int_0^{t_2}\bigl(\int x u(x,t')\,dx\bigr)\,dt'=0$ and hence $\int xu(x,t)\,dx$ must be zero at some time in $(0,t_2)$. We re-apply the same argument to conclude that $\int xu(x,t)\,dx$ is again zero at some other time in $(t_2, t_3)$. Finally, identity (\ref{intxnpbo1}) and the Fundamental Theorem of Calculus complete the proof of the theorem\,$\square$. \renewcommand{\sc References}{\sc References} \end{document}
Tomasz Mrowka

Tomasz Mrowka (born September 8, 1961) is an American mathematician specializing in differential geometry and gauge theory. He is the Singer Professor of Mathematics and former head of the Department of Mathematics at the Massachusetts Institute of Technology.

[Photo: Mrowka at Aarhus University, 2011.]

Born: September 8, 1961, State College, Pennsylvania, US
Nationality: American
Alma mater: MIT; University of California, Berkeley
Awards: Fellow, American Academy of Arts and Sciences (2007); Veblen Prize (2007); Doob Prize (2011); Member, National Academy of Sciences (2015); Leroy P. Steele Prize for Seminal Contribution to Research (2023)
Fields: Mathematics
Institutions: MIT
Thesis: A local Mayer-Vietoris principle for Yang-Mills moduli spaces (1988)
Doctoral advisors: Clifford Taubes, Robion Kirby
Doctoral students: Larry Guth, Lenhard Ng, Sherry Gong

Mrowka is the son of Polish mathematician Stanisław Mrówka[1] and is married to MIT mathematics professor Gigliola Staffilani.[2]

Career

A 1983 graduate of the Massachusetts Institute of Technology, he received the PhD from the University of California, Berkeley in 1988 under the direction of Clifford Taubes and Robion Kirby. He joined the MIT mathematics faculty as professor in 1996, following faculty appointments at Stanford University and at the California Institute of Technology (professor 1994–96).[3] At MIT, he was the Simons Professor of Mathematics from 2007 to 2010. Upon Isadore Singer's retirement in 2010, the name of the chair became the Singer Professor of Mathematics, which Mrowka held until 2017. He was named head of the Department of Mathematics in 2014 and held that position for 3 years.[4] A prior Sloan fellow and Presidential Young Investigator, in 1994 he was an invited speaker at the International Congress of Mathematicians (ICM) in Zurich.
In 2007, he received the Oswald Veblen Prize in Geometry from the AMS jointly with Peter Kronheimer, "for their joint contributions to both three- and four-dimensional topology through the development of deep analytical techniques and applications."[5] He was named a Guggenheim Fellow in 2010, and in 2011 he received the Doob Prize with Peter B. Kronheimer for their book Monopoles and Three-Manifolds (Cambridge University Press, 2007).[6][7] In 2018 he gave a plenary lecture at the ICM in Rio de Janeiro, together with Peter Kronheimer. In 2023 he was awarded the Leroy P. Steele Prize for Seminal Contribution to Research (with Peter Kronheimer).[8] He became a fellow of the American Academy of Arts & Sciences in 2007,[9] and a member of the National Academy of Sciences in 2015.[10]

Research

Mrowka's work combines analysis, geometry, and topology, specializing in the use of partial differential equations, such as the Yang-Mills equations from particle physics, to analyze low-dimensional mathematical objects.[4] Jointly with Robert Gompf, he discovered four-dimensional models of space-time topology.[11] In joint work with Peter Kronheimer, Mrowka settled many long-standing conjectures, three of which earned them the 2007 Veblen Prize. The award citation mentions three papers that Mrowka and Kronheimer wrote together.
The first paper, in 1995, deals with Donaldson's polynomial invariants and introduced the Kronheimer–Mrowka basic classes, which have been used to prove a variety of results about the topology and geometry of 4-manifolds, and partly motivated Witten's introduction of the Seiberg–Witten invariants.[12] The second paper proves the so-called Thom conjecture and was one of the first deep applications of the then brand-new Seiberg–Witten equations to four-dimensional topology.[13] In the third paper, in 2004, Mrowka and Kronheimer used their earlier development of Seiberg–Witten monopole Floer homology to prove the Property P conjecture for knots.[14] The citation says: "The proof is a beautiful work of synthesis which draws upon advances made in the fields of gauge theory, symplectic and contact geometry, and foliations over the past 20 years."[5] In further recent work with Kronheimer, Mrowka showed that a certain subtle combinatorially-defined knot invariant introduced by Mikhail Khovanov can detect "unknottedness."[15]

References

1. W. Piotrowski, Stanisław G. Mrówka (1933–2010), Wiadom. Mat. 51 (2015), 347–348.
2. Baker, Billy (April 28, 2008), "A life of unexpected twists takes her from farm to math department", Boston Globe. Archived by the Indian Academy of Sciences, Women in Science initiative.
3. "Tomasz Mrowka | MIT Mathematics". math.mit.edu. Retrieved September 18, 2015.
4. "Tomasz Mrowka named head of the Department of Mathematics". Retrieved September 18, 2015.
5. "2007 Veblen Prize" (PDF). American Mathematical Society. April 2007.
6. Kronheimer and Mrowka Receive 2011 Doob Prize.
7. Taubes, Clifford Henry (2009). "Review of Monopoles and three-manifolds by Peter Kronheimer and Tomasz Mrowka". Bull. Amer. Math. Soc. (N.S.). 46 (3): 505–509. doi:10.1090/S0273-0979-09-01250-6.
8. Leroy P. Steele Prize for Seminal Contribution 2023.
9. "Tomasz Stanislaw Mrowka". Member Directory. American Academy of Arts & Sciences. Retrieved March 9, 2020.
10. "Tomasz S. Mrowka".
Member Directory. National Academy of Sciences. Retrieved March 9, 2020.
11. Gompf, Robert E.; Mrowka, Tomasz S. (July 1, 1993). "Irreducible 4-Manifolds Need not be Complex". Annals of Mathematics. Second Series. 138 (1): 61–111. doi:10.2307/2946635. JSTOR 2946635.
12. Kronheimer, Peter; Mrowka, Tomasz (1995). "Embedded surfaces and the structure of Donaldson's polynomial invariants" (PDF). J. Differential Geom. 41 (3): 573–734. doi:10.4310/jdg/1214456482.
13. Kronheimer, P. B.; Mrowka, T. S. (January 1, 1994). "The Genus of Embedded Surfaces in the Projective Plane". Mathematical Research Letters. 1 (6): 797–808. doi:10.4310/mrl.1994.v1.n6.a14.
14. Kronheimer, Peter B; Mrowka, Tomasz S (January 1, 2004). "Witten's conjecture and Property P". Geometry & Topology. 8 (1): 295–310. arXiv:math/0311489. doi:10.2140/gt.2004.8.295. S2CID 10764084.
15. Kronheimer, P. B.; Mrowka, T. S. (February 11, 2011). "Khovanov homology is an unknot-detector". Publications Mathématiques de l'IHÉS. 113 (1): 97–208. arXiv:1005.4346. doi:10.1007/s10240-010-0030-y. ISSN 0073-8301. S2CID 119586228.

External links

• Mrowka's website at MIT
• Tomasz Mrowka at the Mathematics Genealogy Project
Representations of SO(3) and the classification of relativistic massive particles as in Weinberg's "The Quantum Theory of Fields"

I'm reading about the classification of relativistic massive particles in Weinberg's "The Quantum Theory of Fields", and I found something that doesn't convince me. In Chapter 2, paragraph 5, having previously dealt with the translations, Weinberg decomposes the action of a proper orthochronous Lorentz transformation on the Hilbert space of some relativistic particle as $$ U(\Lambda)\,|p,\sigma\rangle=\frac{N(p)}{N(\Lambda p)}\sum_{\sigma'}D_{\sigma',\sigma}(W(\Lambda,p))\ |\Lambda p,\sigma'\rangle $$ where the $|p,\sigma\rangle$'s are eigenvectors of the four-momentum operator with some extra degrees of freedom specified by the labels $\sigma$, the $N$'s are normalization coefficients due to the definition of the $|p,\sigma\rangle$'s in terms of the boost of a standard eigenvector $|k,\sigma\rangle$, and $D_{\sigma',\sigma}(W(\Lambda,p))$ is a representation of the element $$ W(\Lambda,p)=L^{-1}(\Lambda p)\ \Lambda\ L(p) $$ of the "little group", the subgroup of the Lorentz group that fixes $k$ (here $L(p)$ is the boost that brings $k$ to $p$). By decomposing the action of $U(\Lambda)$, Weinberg reduces the question of the classification of relativistic particles to that of finding the irreducible representations of the little group, determined by the standard eigenvector one chooses in order to define the four-momentum eigenvectors. In particular, for massive particles with positive energy, he chooses the four-momentum $$ k^{\mu}=(m,0,0,0) $$ i.e., the four-momentum of a particle at rest with mass $m$. He then concludes that the little group for this $k$ is SO(3), as every boost modifies $k$ and no rotation modifies it, so that, given the mass $m$ of the particle (and given that its energy is positive-definite), the particle is classified up to the behaviour of the four-momentum eigenstates under three-dimensional rotations.
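As a quick numerical sanity check of the construction above — that $W(\Lambda,p)=L^{-1}(\Lambda p)\,\Lambda\,L(p)$ really fixes $k=(m,0,0,0)$ — here is a small NumPy sketch I put together (the conventions, metric $(+,-,-,-)$ with 4-vectors $(E,\vec p\,)$, and the particular $\Lambda$ are my own choices, not Weinberg's):

```python
import numpy as np

def standard_boost(p3, m):
    """Standard boost L(p) with L(p) k = p, where k = (m, 0, 0, 0).
    Four-vectors are (E, px, py, pz); metric signature (+, -, -, -)."""
    E = np.sqrt(m**2 + p3 @ p3)
    beta = p3 / E                      # velocity of the boosted frame
    gamma = E / m
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = gamma * beta
    L[1:, 0] = gamma * beta
    L[1:, 1:] += (gamma - 1) * np.outer(beta, beta) / (beta @ beta)
    return L

def rotation_z(a):
    """A spatial rotation about the z axis, embedded as a 4x4 Lorentz matrix."""
    R = np.eye(4)
    R[1, 1], R[1, 2], R[2, 1], R[2, 2] = np.cos(a), -np.sin(a), np.sin(a), np.cos(a)
    return R

m = 1.0
k = np.array([m, 0.0, 0.0, 0.0])
p = standard_boost(np.array([0.3, -0.4, 1.2]), m) @ k   # some on-shell momentum

# A generic proper orthochronous Lambda: a boost followed by a rotation
Lam = rotation_z(0.7) @ standard_boost(np.array([0.5, 0.0, 0.2]), m)

# Wigner rotation W(Lambda, p) = L(Lambda p)^{-1} Lambda L(p)
W = np.linalg.inv(standard_boost((Lam @ p)[1:], m)) @ Lam @ standard_boost(p[1:], m)

print(np.allclose(W @ k, k))   # True: W(Lambda, p) lies in the little group of k
```

(The same check works for any choice of $p$ and $\Lambda$; for a pure rotation $\Lambda=R$ and these canonical boosts one finds $W=R$ itself.)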
Since the generators of rotations and translations do not commute, while these rotational transformations do commute with the momentum operator, they must act on non-orbital SO(3)-degrees of freedom, i.e., on what one calls the spin degrees of freedom of the particle. Up until here, everything is clear to me (and very cleverly posed, in my opinion). He then passes to the classification of the representations of the little group SO(3). He says that irreducible finite-dimensional representations of SO(3) are labeled by numbers $j$ which can take integer or half-integer values, i.e. spin can be of integer or half-integer value. However, we know that the half-integer values are really values in the representations of SU(2), not of SO(3). At the Lie algebra level, the representations are the same, as the Lie algebra is the same, but the one-one correspondence between the representations of the Lie algebra and those of the Lie group holds only for simply connected Lie groups, in this case SU(2). So when he classifies the representations of SO(3), shouldn't he be taking $j$ to be exclusively integer? I know that the correct answer is integer AND half-integer, but he doesn't give any explanation of why it is possible to use SU(2) instead of SO(3) (at least not up to the point I've reached in my reading; I apologize if he does so afterwards). I think I read somewhere else that this has something to do with the fact that the representations one needs to define on the Hilbert space are really projective representations, so that the minus sign that one gets when SU(2)-rotating the state by $2\pi$ really accounts for a phase shift of $e^{i\pi}$. By this line of reasoning, the projective representations of SO(3) do in fact coincide with the non-projective representations of SU(2). Should this be the correct way to see it, I'd like someone to elaborate on this. Thank you in advance.
quantum-mechanics quantum-field-theory group-representations representation-theory

asked by Giorgio Comitini

Comment (SamRoelants, Dec 9 '15): A thorough discussion of this can be found at physics.stackexchange.com/q/96045 or at physics.stackexchange.com/q/203944

Comment (Arnold Neumaier, Dec 30 '15): Weinberg discusses the need for projective representations in Chapter 2.7.

$SO(3)$ symmetry means that the amplitude $|\langle \psi|\phi\rangle|^2$ is invariant under rotations of the rays. Remember that a ray $\mathcal{R}_{\psi}$ is specified by a family of vectors $e^{i\phi}|\psi\rangle$. This means that the linear or anti-linear operators that describe how the vectors are changed by symmetry transformations furnish a projective representation, not a unitary one. Later in this same chapter, Weinberg starts a program to find projective representations of symmetries that cannot be absorbed into a unitary representation in a trivial way, like a redefinition of the unitary operators. Then he shows that there are non-trivial ways to absorb a projective representation by changing the symmetry group. $SO(3)$ has non-trivial projective representations, but these reps can be absorbed by a unitary representation of a different, larger group, $\mathrm{Spin}(3) \cong SU(2)$. There are two reasons to have a genuine projective representation: a central extension of the Lie algebra and non-simple-connectedness of the Lie group. The first is an algebraic feature (a local property of the Lie group) and the second a topological feature (a global property). In the case of $SO(3)$ it is the topology. There is a closed loop that goes from the identity to the $2\pi$ rotation that is trapped: no continuous deformation of this loop shrinks it down to the identity. For each unitary representation with spin $j$ there is a projective representation related to the paths that wind one time around the group.
This rep can be identified with a spin $j+1/2$ unitary representation of $SU(2)$. All of this is called covering the group.

— Nogueira
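To make the sign concrete: in the spin-$1/2$ representation a $2\pi$ rotation equals $-\mathbb{1}$, and only a $4\pi$ rotation returns to the identity. A tiny NumPy check, using the closed form $e^{-i\theta\,\hat{n}\cdot\vec{\sigma}/2}=\cos(\theta/2)\,\mathbb{1}-i\sin(\theta/2)\,\hat{n}\cdot\vec{\sigma}$ (the helper name is mine):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2_rotation(theta, n):
    """exp(-i*theta*(n . sigma)/2): the spin-1/2 (SU(2)) rotation by theta about unit axis n."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

z = np.array([0.0, 0.0, 1.0])
print(np.allclose(su2_rotation(2 * np.pi, z), -np.eye(2)))  # True: a 2*pi rotation gives -1
print(np.allclose(su2_rotation(4 * np.pi, z), np.eye(2)))   # True: only 4*pi returns to +1
```

On a physical state the overall $-1$ is just the phase $e^{i\pi}$, which is exactly why the projective SO(3) representations coincide with the ordinary SU(2) ones.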
A universal severity classification for natural disasters

H. Jithamala Caldera ORCID: orcid.org/0000-0001-8896-78461 & S. C. Wirasinghe ORCID: orcid.org/0000-0001-5739-12901

Natural Hazards (2021)

The magnitude of a disaster's severity cannot be easily assessed because there is no global method that provides real magnitudes of natural disaster severity levels. Therefore, a new universal severity classification scheme for natural disasters is developed and is supported by data. This universal system looks at the severity of disasters based on the most influential impact factor and gives a rating from zero to ten: zero indicates no impact and ten is worldwide devastation. This universal system is for all types of natural disasters, from lightning strikes to super-volcanic eruptions and everything in between, that occur anywhere in the world at any time. This novel universal severity classification system measures, describes, compares, rates, ranks, and categorizes impacts of disasters quantitatively and qualitatively. The severity index is useful to diverse stakeholder groups, including policy makers, governments, responders, and civilians, by providing clear definitions that help convey the severity levels or severity potential of a disaster. Therefore, this universal system is expected to avoid inconsistencies and to connect severity metrics to generate a clear perception of the degree of an emergency; the system is also expected to improve mutual communication among stakeholder groups. Consequently, the proposed universal system will generate a common communication platform and improve understanding of disaster risk, which aligns with the priority of the Sendai Framework for Disaster Risk Reduction 2015–2030. This research was completed prior to COVID-19, but the pandemic is briefly addressed in the discussion section.
One or more natural disasters occur on most days somewhere in the world, causing immense hardship to living beings and major damage and losses. Natural disasters can be land based (e.g., earthquakes), water based (e.g., river floods), atmospheric (e.g., tornadoes), biological (e.g., pandemics), extraterrestrial based (e.g., comet strikes), or any combination of these (e.g., an undersea earthquake and tsunami). Although these disasters are different, their impacts on humans and habitats are similar. All natural disasters can cause loss of life and damage to humans and their possessions, and they disturb people's daily lives. However, it is difficult to express the level of severity caused by different types of natural disasters, in different countries, and in different time periods because there is no agreed-upon terminology, no global standard communication platform, and no single common measurement for all types of natural disasters and all stakeholders that can estimate the total impact of an event and convey the full scope of severity. Further, there is no system that can be used for communication purposes without confusion and for educating the public regarding the disaster continuum. Disasters do not respect national boundaries. Therefore, an international standard communication platform for severity is vital for reaching agreement among countries. The impact of a disaster in a region, if not managed properly, can produce political and social instability and affect international security and relations (Olsen et al. 2003). Agreed-upon terminology for quantifying "disaster" matters; inconsistencies in how stakeholders measure disasters pose a global challenge for formulating legislation and policies that respond to disasters (Yew et al. 2019). There are no existing frameworks or tools that holistically and objectively integrate all aspects of humanitarian need when quantifying various natural disasters (Yew et al. 2019).
Epidemiologic research of disasters is also hampered by a lack of uniformity and standardization in describing these extreme events (de Boer 1997). In addition, the foundation of any science is definition, classification, and measurement (de Boer 1990), and if disaster management is to grow and progress effectively, it also must have a consistent and recognized definition, classification, and measurement system. Confusion occurs because the definitions of disaster terms in ordinary dictionaries are very wide, and different terms are used in different ways (Rutherford et al. 1983). For example, as stated in Definition and classification of disasters: introduction of a disaster severity scale (de Boer 1990), "it is difficult to evolve a meaningful definition of the word disaster. Most dictionaries identify this as a calamity or major accident and, while this is correct, such a definition fails to reveal why a calamity or major accident should be a disaster. From a medical point of view it is, therefore, of utmost importance to construct a simple definition for a disaster and, at the same time, to outline the criteria for its classification. Once such criteria have been determined, a scale can be evolved from which the gravity of the disaster can be assessed, which also allows the scientific comparison of various events." As a solution to the lack of uniformity and standardization in describing disaster events, numerous severity scales have been developed over the last three decades around the globe. These severity scales (quantitative, qualitative, or both quantitative and qualitative) are used for different purposes, by different stakeholder groups, and for all types of natural disasters (or particular types of disasters). Among the various scales with different measurement systems, a few common classifications for all types of natural disasters for all stakeholders exist, but they also have several deficiencies. 
For example, Disaster Scope, introduced by Gad-el-Hak (2008a), uses casualties and affected area to classify five severity levels (small, medium, large, enormous, and gargantuan disasters). Another, by Eshghi and Larson (2008), uses fatalities and affected people to categorize six severity levels (emergency situation, crisis situation, minor disaster, moderate disaster, major disaster, and catastrophe). The ranges of Eshghi and Larson's categorization are supported by data, and the categories are determined using a statistical analysis of historical disasters, while the ranges of Disaster Scope are arbitrary. However, the proposed factors and their labeling appear to be arbitrary in both scales, even though such factors are needed to conduct meaningful research. In addition, Disaster Scope and Eshghi and Larson's classifications are highly related to the vulnerability factors of a society and do not consider damage to humans' possessions, such as the cost of damage; therefore, the most expensive natural disasters that do not cause a severe loss of human life in a heavily populated area are not properly categorized compared to other disasters. At present, no scale identifies the relationship between severity and impact factors; therefore, there is no scientific, data-supported instrument that can clearly classify a disaster's severity. Thus, the real scope of a disaster's severity cannot be understood because no existing system consistently distinguishes the different severity levels. Although there are many scales, clearly expressing the level of severity is difficult for two main reasons: first, there is no globally accepted standard to communicate the level of severity of a natural disaster (Caldera et al. 2016a), and second, there is no single measurement that can estimate the full scope of a disaster (Yew et al. 2019).
Consequently, there is no common system to help emergency responders measure the impact of natural disasters, to determine the proper allocation of resources, and to expedite mitigation processes. A nation's ability to manage extreme events, including natural disasters and other perils, is limited when there is no consistent method or mutual understanding among the emergency management systems of different countries at all levels: international, continental, regional, national, provincial, and local. Therefore, a common severity classification system for all types of natural disasters and all stakeholder groups is required to understand, communicate, and educate the public on the nature of natural disasters. This paper presents a universal severity classification scheme for all types of natural disasters that is applicable to all stakeholders, from civilians to responders to policymakers, to generate a common platform for expressing the impacts of disasters. This system will provide an overall picture of the severity of natural disasters, yield independent estimates of a disaster's magnitude, help one to understand the disaster continuum, and assess a disaster for various purposes, such as helping governments and relief agencies respond when disaster strikes. In addition, the system is expected to gauge the need for regional, national, and international assistance and to help in communicating the severity of a disaster.
Necessity of a universal severity classification
Descriptive terms
Obtaining a sense of the real magnitude of a disaster's severity is problematic and cannot be achieved by merely using common descriptive terms because there are no consistent definitions, no consistent methods, and no clear sense of scale to distinguish one term from another (Caldera et al. 2016a).
To describe the severity level of a natural disaster, which can range from a small community fire to large-scale events such as a tsunami or earthquake, we often use words such as "emergency," "disaster," and "catastrophe." The majority opinion is that a "disaster" refers to a large-scale emergency, and "catastrophe" refers to a large-scale disaster (Penuel et al. 2013). Though these words imply increasing levels of severity, one observer's "disaster" might be another's "catastrophe" depending on their experience, knowledge, and personal feelings toward the event. In the literature, there is controversy about whether the term "catastrophe" can be differentiated from "disaster" or whether they are synonyms (Penuel et al. 2013). Therefore, clear definitions and an order of seriousness for the descriptive terms are important to categorize the severity of disasters.
Levels of severity
It is common for events that have very different levels of severity to be put into the same category. For example, both the 1998 Hurricane Mitch (Schenk 1999) and the 2004 Indian Ocean tsunami (WiscNews 2018) are categorized as catastrophes. However, compared to the tsunami, Hurricane Mitch's impact was much smaller: It struck 8 Caribbean and Central American countries and killed 11,000 people, while the Indian Ocean tsunami affected 12 countries in Asia and Africa and killed about 230,000 people. The root of this problem is that there is an insufficient number of categories representing the seriousness of a natural disaster; hence, using terms such as emergency, disaster, and catastrophe does not provide a sufficient level of detail for a clear understanding of the impact of an event. Therefore, more levels are required to accurately categorize the impact of natural disasters. Determining the number of levels for all disasters and for all fields (e.g., the medical field, the rescue field, etc.) is not feasible.
However, the confusion can be minimized when there is an adequate number of levels to distinguish between different categories of seriousness and when a consistent, standard number of levels exists.
Objective measures
When describing a disaster, we use not only words but also accompanying numbers. Natural events are often described using many objective factors of severity, e.g., deaths, injuries, and property damage. By comparing damage and fatalities, some disasters are labeled the most expensive (e.g., the Great East Japan Earthquake and Tsunami in 2011 and Hurricane Katrina in 2005) (Brink 2019) and some are labeled the deadliest (e.g., the Indian Ocean Earthquake and Tsunami in 2004 and the Haiti earthquake in 2010) (Pappas 2018; Ritchie and Roser 2014). However, a statistical comparison of disaster impacts is a complex task because various factors present different insights into the level of severity of an event. For example, comparing the number of fatalities and the total cost of damage gives contrasting ideas as to the level of severity of the 2004 tsunami that struck Sri Lanka versus the 2013 flood that struck Southern Alberta, Canada. The 2013 Southern Alberta flood caused $5.7 billion in damage and 4 fatalities and affected 100,000 people (with no injuries and no one left homeless) (Centre 2013), while the 2004 tsunami caused $1.32 billion in damage and more than 35,000 fatalities and affected more than 1 million people (with 23,000 injuries and 48,000 left homeless) (Centre 2013). If one considers only fatalities, then the Sri Lankan tsunami appears more severe; if one considers the cost of damage, the Alberta flood appears more severe. There are many factors that can be considered when addressing the severity of an event. No current scale identifies the relationships between impact factors or uses these relationships to estimate the overall severity of a disaster (Caldera 2017).
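The disagreement between single-factor rankings described above can be sketched in a few lines of code. The figures are those quoted in the text (Centre 2013); the per-factor ranking itself is purely illustrative and is not part of the proposed UDSCS.

```python
# Illustrative comparison of two disasters using the figures quoted in the
# text (Centre 2013). Single-factor rankings disagree, which is the point:
# no one factor captures the full scope of severity.

events = {
    "2004 Sri Lanka tsunami": {
        "fatalities": 35_000,
        "affected": 1_000_000,
        "damage_usd": 1.32e9,
    },
    "2013 Southern Alberta flood": {
        "fatalities": 4,
        "affected": 100_000,
        "damage_usd": 5.7e9,
    },
}

def most_severe_by(factor):
    """Return the event that ranks highest on a single impact factor."""
    return max(events, key=lambda name: events[name][factor])

# The ranking flips depending on the factor chosen.
print(most_severe_by("fatalities"))   # the tsunami
print(most_severe_by("damage_usd"))   # the flood
```

Ranking by fatalities selects the tsunami, while ranking by cost of damage selects the flood, which is exactly the inconsistency the multi-factor comparison is meant to resolve.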
Therefore, comparing levels of impact for different types of disasters is challenging. Nevertheless, comparing key impact factors (such as fatalities, injuries, homelessness, affected population, and cost of damage), as shown in the comparison of the Sri Lanka tsunami and the Southern Alberta flood, provides a more complete comparative picture of two disasters and a more comprehensive idea of the extent of damage than merely comparing one or two factors, such as fatalities and/or damage costs. This more complete picture helps disaster and insurance managers to estimate the true magnitude of a disaster's severity (Caldera and Wirasinghe 2014), which cannot be comprehended using current approaches. The current inconsistent identification of disaster impacts results in overcompensation or undercompensation in assigning resources for mitigation. Overcompensation may waste resources, while undercompensation could increase the impact severity. Thus, a proper technique is required to compare statistics and rate natural disasters based on severity.
Severity of different disaster types
Generally, natural disasters are described according to their intensity or magnitude. However, earthquakes that are measured on the Richter scale cannot be compared to hurricanes that are measured on the Saffir-Simpson scale because these scales use fundamentally different measures. Clearly, these individual scales are useful. For example, knowing the range of wind speeds in a hurricane, as provided by the Saffir-Simpson scale, allows people to estimate potential damage to people and property (Gad-el-Hak 2008a). Although some disasters, such as earthquakes and hurricanes, have rating scales to measure strength, other disasters do not have systemized metrics. The disasters that do not have rating scales are assessed by geographical measures.
Nevertheless, when an area is prone to two or more disasters (e.g., earthquakes, floods, cyclones, etc.), disaster management centers (DMCs) must assess the appropriate combinations of disasters (e.g., earthquakes and tsunamis; cyclones, floods, and landslides; or thunderstorms and tornadoes) and decide which combinations are specific to the area being assessed. They must then rank the most likely individual disaster or combination that could occur in that area (Wickramaratne et al. 2012). For instance, the Calgary Emergency Management Agency releases a list of the top 10 hazards and risks in Calgary (Wood 2016). After ranking the hazards, DMCs must assess the potential impacts of each likely individual/combination event and take actions (or make decisions) based on the potential combined impacts. These impact assessments, together with each combination's criticality relative to others on the list, allow DMCs to allocate the required resources on a justifiable basis. However, impact assessments are complicated by the different types of unrelated scales.
Disaster warnings
Warning indications during an event should be given in plain language so that everyone can understand the seriousness of a coming disaster and the urgency of evacuation when required. In warning communications, the intensity of a disaster is commonly used as the measure of its destructive power because the intensity/magnitude is assumed to be the most meaningful to the general public. However, intensity/magnitude levels are not the best way to describe the severity level of a disaster because they indicate only the strength (i.e., hazard potential) and not the impact (i.e., the vulnerability of a region). As shown in Table 1, the impacts of a disaster are not highly correlated with existing scales for volcanic eruptions, earthquakes, tsunamis, and tornadoes because the Pearson correlation coefficients between impact factors and intensity/magnitude scales are less than 0.5 (Colton 1974).
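The Pearson correlation coefficient referenced in Table 1 can be computed directly from paired samples. The sketch below uses hypothetical magnitude and fatality figures (not the study's data) to illustrate how a weak strength-impact correlation would look.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sample: earthquake magnitudes vs. fatalities. A coefficient
# well below 0.5 would support the claim that strength scales alone are
# poor predictors of impact.
magnitudes = [6.1, 6.6, 7.0, 7.9, 9.0]
fatalities = [2, 5_000, 230_000, 87_000, 18_000]
print(round(pearson_r(magnitudes, fatalities), 3))
```

With these placeholder values the coefficient is close to zero, because the deadliest event in the sample is not the strongest one; that is the pattern Table 1 reports for real disaster data.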
The impact depends on where a disaster occurs: It can be quite different in a populated city compared to a rural area. For example, a small hailstorm can significantly impact a city if it affects humans and their vehicles and dwellings, compared to a strong tornado that occurs in a forested area with a very small population. Thus, the most intense or highest magnitude event is not necessarily the most disastrous. Indeed, a considerable body of research presents data indicating that people often underestimate or ignore warnings for natural disasters and other low probability events (Camerer and Kunreuther 1989; Meyer 2006). Severe natural disasters are low probability, high consequence events. Therefore, a new approach is required to communicate the warnings issued by emergency management systems to the general public so that there is a mutual understanding between both parties.
Table 1 Correlation between intensity scales and impact factors
Unified scale
Currently, different stakeholder groups have their own scales, with different measurement systems, to assess a disaster according to their requirements. For example, disaster managers and emergency responders use incident management team (IMT) typing (United 2020; Alberta 2020), medical personnel use the disaster severity scale (DSS) (de Boer 1997), database managers use the Munich RE global loss database categorization (Löw and Wirtz 2010), and insurance managers use catastrophe models (Grossi et al. 2005) and logit and hazard models (Lee and Urrutia 1996). There are several disadvantages with these existing systems. These scales have various numbers of levels (between 3 and 13) to distinguish the destructive capacity of an event using various factors. Thus, some scales have a limited number of categories. Also, some classification systems (e.g., catastrophe models) are confidential to the respective organizations. Additionally, some classification systems are not scientific and have arbitrary grading systems (e.g., the DSS).
However, most scales use fatalities as a factor to differentiate severity levels, except IMT typing for emergency responders and disaster managers (because IMT typing uses both impact and the management challenges associated with response and recovery to categorize disasters). These individual scales are useful for specific groups; however, different scales that are not integrated cannot be used to convey the level of severity of an event to all stakeholders. When a disaster strikes, these disconnected systems make it even more difficult for stakeholders to communicate about the severity of the disaster. Therefore, confusion and misunderstanding can occur. For example, most North American emergency management agencies use IMTs as a way to classify all hazards and to assign a type number to the incident, in order to address response and recovery activities and the command and control infrastructure that is required to manage the logistical, fiscal, planning, operational, safety, and community issues related to local/regional/national emergencies, natural disasters, and public events (Alberta 2015). IMTs are "typed" according to the complexity of the incidents they are capable of managing and are part of an incident command system, as shown in Table 2. In particular, a Type 5 IMT can manage a small community fire; however, managing a major flood may require a Type 4 or lower IMT. Confusion can arise as to whether Type 1 or Type 5 is the most critical. Hence, a universal system that integrates the existing systems is essential.
Table 2 Incident Management Teams—Typing (Government 2021)
In addition, when a disaster is first identified, emergency responders often do not know its full scope. An event can quickly escalate from a routine emergency to a disaster, and then to a bigger disaster. The management challenges associated with response and recovery also increase as impacts escalate.
During events, when emergency responders communicate with other stakeholders, such as national/regional/local governments, relief agencies, non-governmental organizations (NGOs), and the media, they have no common classification system that provides a unified understanding of the level of severity of the event. Consequently, officials who are trying to understand the full impact of a disaster do not have a consistent scale that can provide a clear understanding of the potential hazard, and so they cannot alert other stakeholders, such as the general public, about the degree of severity. Moreover, the type is assigned by internal personnel and can be subjective due to the level of experience and the internal processes used; such subjective decisions can delay the adoption of appropriate actions needed to mitigate a disaster. In other words, assistance from international governments, NGOs, relief agencies, and volunteer communities can be delayed. The consequences of failing to identify a potential hazard and failing to manage a disaster adequately can be significant. For example, regarding Hurricane Katrina, Tierney (2008) explained that "… devastating impacts were worsened by a sluggish and ineffective response by all levels of government and by a lack of leadership on the part of high-ranking federal government officials and others who were incapable of recognizing Katrina's catastrophic potential, even after the storm made landfall." Inconsistent and disconnected severity measures mean that either members of the general public may not clearly understand the degree of the emergency or that members of emergency management systems may not clearly understand the potential hazard. Hence, a common platform that can integrate these disconnected metrics for all stakeholders is necessary for clear communication and understanding without confusion.
Furthermore, nations can manage disasters more effectively when there is mutual understanding between countries and different emergency management systems at all levels: international, continental, regional, national, provincial, and local. The ability of countries to manage extreme events can depend on the system that they use. However, since countries use different systems to manage extreme events, either a universal understanding of the systems used by other countries or a global standard is required to better prepare for and manage global disasters that affect more than one country. For instance, if there had been a universal system in 2004 when the Indian Ocean tsunami struck 12 countries in Asia and Africa, it might have saved thousands of lives. Thus, a new approach is needed to mobilize resources properly, make adjustments as necessary, and more correctly gauge the need for regional, national, or international assistance. Therefore, there is a mandate for a new system that integrates both measurement systems: management and severity.
Creating a universal severity classification
Universal disaster severity classification
A consistent scale is needed to understand the disaster continuum and to develop a platform for a reliable and transparent data management process that facilitates comparisons between different disasters (Gad-el-Hak 2008a; Löw and Wirtz 2010). Developing a Universal Disaster Severity Classification Scheme (UDSCS) is necessary to solve the previously mentioned problems. This new universal system is expected to integrate all current measurement systems: impacts, management, and size. The UDSCS will connect the current measurement systems and provide a common communication platform that can be used to measure, describe, compare, and categorize the impact of disasters, quantitatively and qualitatively, for the general public and emergency responders.
Therefore, the UDSCS is expected to avoid inconsistencies and, most importantly, to connect the severity metrics to generate a clear understanding of the degree of an emergency and potential hazard. The five key steps to develop a UDSCS address the following two questions:
How many levels are required to clearly differentiate the impact of natural disasters?
How are these levels used to clearly distinguish the various degrees of natural disasters, both quantitatively and qualitatively?
The five key steps are:
1. Identify the most influential factors related to disaster severity.
2. Develop the foundation of the UDSCS in terms of (i) the number of levels and (ii) associated color coding.
3. Develop qualitative measures by clearly defining words that describe disasters.
4. Develop quantitative measures that are based on data and statistically robust.
5. Develop the UDSCS.
Foundation of the UDSCS and qualitative scale
Step 1: the most influential factors
What makes a disaster "large scale" is the number of people affected by it and/or the extent of the damaged infrastructure and geographical area involved (Gad-el-Hak 2008b). However, many factors need to be considered when addressing the severity of an event. The severity of a natural disaster increases as the impact on humans and their possessions increases and as the power and intensity of the event increase. In contrast, severity decreases the more a region is prepared for a disaster. Therefore, severity relates to all factors, which can be grouped as follows:
Socioeconomic factors that reflect impact to humans and their possessions: number of fatalities, injuries, missing persons, homeless persons, evacuees, people affected by the disaster, the cost of damage (damage to property, crops, and economic damage), etc.
Strength-measuring factors that reflect the power and intensity of an event: magnitude, duration, speed, location, distance from the disaster site to affected populated area(s), etc.
Preparedness factors that reflect a region's preparedness: available technology, resources, whether the area(s) could be evacuated before being affected, mitigation methods, response rate, etc.
More details about the above groups can be found in Analysis and classification of natural disasters (Caldera 2017). A scale representing all factors is complex. However, no matter how prepared people are, where a disaster occurs, or how intense or powerful it is, if people lose their belongings or loved ones in a natural disaster, their disutility depends mainly on what they have lost, not on how they prepared for the disaster, how intense or powerful it was, or where it occurred. Therefore, the severity of an event directly relates to socioeconomic factors and indirectly relates to strength-measuring and preparedness factors. Hence, the severity of an event can be evaluated by measuring the negative impact of a disaster on people and infrastructure (Wickramaratne et al. 2012).
Step 2: the foundation of the UDSCS
Step 2, part (i): Proposed number of levels for the severity spectrum
According to the previous step, a multidimensional severity scale should include a cross section of socioeconomic factors. These factors can be further sub-grouped into human factors (e.g., fatalities, injuries, missing persons, homeless persons, evacuees, and affected population) and damage factors (e.g., cost of damage, damage to property, crops, and economic damage). A 0–10 level system is proposed because the very large ranges of almost all socioeconomic factors can be expressed within 11 levels using a log scale. The 11 levels of human factors (H), which range from 0 to 7.674 billion people (the world's population; World 2019a), are shown in Column 2 of Table 3. The 11 levels of damage factors (D), which can range from 0 to 87.698 trillion United States dollars (USD) (the world gross domestic product in 2019; World 2019b), are shown in Column 3 of Table 3.
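One plausible reading of the 11 human-factor levels in Table 3 is that each level above zero spans one decade on a log scale. The mapping below is a sketch under that assumption (the table itself is not reproduced here), and the function name is chosen for illustration only.

```python
import math

WORLD_POPULATION = 7_674_000_000  # 2019 figure cited in the text (World 2019a)

def human_severity_level(affected):
    """Map a human impact count (e.g., fatalities or people affected) to a
    0-10 severity level, one decade per level: 0 -> no impact,
    1 -> 1-9 people, 2 -> 10-99, ..., 10 -> a billion or more
    (up to the world's population)."""
    if affected <= 0:
        return 0
    return min(10, int(math.log10(affected)) + 1)

print(human_severity_level(0))                 # 0: no impact
print(human_severity_level(4))                 # 1: e.g., Alberta flood fatalities
print(human_severity_level(230_000))           # 6: e.g., 2004 tsunami fatalities
print(human_severity_level(WORLD_POPULATION))  # 10: worldwide devastation
```

Because the levels increase by powers of 10, very different events (4 deaths versus 230,000 deaths) land several levels apart, which is exactly what the log-scale design intends.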
The severity of a natural disaster is measured by the adverse effects of the event on a community or an environment, not by the severity of the event for an individual person. Therefore, the lowest nonzero ranges of damage factors (1 ≤ D < USD 10, USD 10 ≤ D < USD 100, USD 100 ≤ D < USD 1,000, and USD 1,000 ≤ D < USD 10,000) can be grouped into 1 ≤ D < USD 10,000. Thus, 0–10 levels representing socioeconomic factors (impacts on humans and material damage) are considered in designing the UDSCS.
Table 3 Ranges of human and damage factors in 0–10 levels
The most widely used numbering systems, for example the metric system of measurement, are based on 10. Systems based on 10 are easy to use and easily administered and scored. The 0–10 level severity scale is easy to remember because the levels increase by powers of 10. Also, it is easy to integrate the foundation of the UDSCS with quantitative and qualitative measures using 0–10 levels, as explained in the next two steps. Thus, the UDSCS considers the severity of disasters based on the most influential impact factors, including socioeconomic factors, and gives a rating from 0 to 10: 0 being no impact and 10 being worldwide devastation. Therefore, defining the foundation of the UDSCS with 0–10 severity levels is well suited, meaningful, and easy for users to remember.
Step 2, part (ii): Proposed color coding for severity levels
Currently, there is no consistent method of color coding. Different fields have different color coding, and different colors are even used within the same field. For example, the NOAA National Weather Service Weather Prediction Center (Weather 2019) uses white, green, yellow, red, and purple for rainfall warning signals, while the Philippine Atmospheric, Geophysical and Astronomical Services Administration (Philippine 2020) uses red, orange, and yellow. The UDSCS is intended for many stakeholders, including policy makers, governments, responders, and civilians.
Because this system is widely used and these stakeholders are familiar with the global color coding of traffic signals, the same color coding system was selected, with some modifications. Blue was added, and yellow was chosen instead of amber because it is one of the three primary colors and has a specific name in all languages. Blue, dark green, light green, yellow, and dark yellow represent the lesser severity levels. Black and purple were added, and along with red, they (red, dark red, light purple, dark purple, and black) represent the higher severity levels. White is added to indicate nondestructive events. The color codes that correspond to the 11 severity levels are shown in Table 4. Introducing color coding that corresponds to the levels of severity is important because it eliminates language barriers and the confusion that could arise. Further, everyone, including those who are illiterate, can quickly understand the UDSCS because colors easily convey the seriousness of a disaster. Although words in different languages can be found to represent each level of severity, there will be some people working or involved in disaster recovery who cannot understand the local language. Therefore, color coding is an effective means of communication, and the UDSCS can be adapted to any language, country, society, or culture.
Table 4 Levels and the corresponding color coding in the UDSCS
Step 3: Developing qualitative measures
As a qualitative measure of the UDSCS, the linguistic method, which is the oldest and most commonly used method of describing natural disasters of various magnitudes, is used. For example, words such as calamity, cataclysm, catastrophe, disaster, and emergency are used in this analysis to categorize the different levels of disaster impacts. Still, only words that describe the magnitude of natural phenomena are considered.
Therefore, words such as "Armageddon," which describes "a usually vast decisive conflict or confrontation" or "a terrible war that could destroy the world" (Oxford 2010), are not used because they refer not to natural events but to human-caused catastrophes. However, the sense of the real magnitude of a disaster's severity cannot be comprehended using the current linguistic method because it has several deficiencies, described in the following subsection. Therefore, an order of seriousness and clear definitions for the considered terms are also proposed to describe the severity spectrum.
Deficiencies in the current qualitative measure
First, there are no consistent definitions, methods, or clear sense of scale to differentiate the terms used to describe disasters from each other. For example, the Oxford dictionary defines common terms used to describe disasters as follows (Oxford 2010):
Apocalypse: an event involving destruction or damage on a catastrophic scale.
Calamity: an event causing great and often sudden damage or distress; a disaster.
Cataclysm: a large-scale and violent event in the natural world.
Catastrophe: an event causing great and usually sudden damage or suffering; a disaster.
Disaster: a sudden accident or a natural catastrophe that causes great damage or loss of life.
Emergency: a serious, unexpected, and often dangerous situation requiring immediate action.
"Catastrophe" is used to define "disaster," and "disaster" is used to describe both "catastrophe" and "calamity" in the Oxford dictionary (Oxford 2010); therefore, the definitions are circular, and the words are used interchangeably to describe the seriousness and severity levels of natural events. Second, the vocabulary, context, and interpretation of each term are not fixed (Kelman 2008); therefore, the meanings of these words have changed over time.
For instance, when the word "disaster" was first added to the English vocabulary in the late sixteenth century, it meant "ill-starred event" (Cresswell 2009), that is, an event attributed to an unfavorable position of the stars. Currently, "disaster" is defined in the Oxford Dictionary as "a sudden accident or a natural catastrophe that causes great damage or loss of life" (Oxford 2010). Comparing the historical and current definitions shows how the meaning of these terms has changed over time. The etymological definitions of these terms are as follows (Oxford 2010):
Apocalypse: uncover, disclose, reveal (late fourteenth century)
Calamity: damage, loss, failure, misfortune, adversity (early fifteenth century)
Cataclysm: to wash down (deluge, flood, inundation) (1630s)
Catastrophe: overturning, sudden turn (a sudden end) (1530s)
Disaster: ill-starred event (the stars are against you) (1560s)
Emergency: to rise out or up (unforeseen occurrence requiring immediate attention) (1630s)
Third, the order of seriousness implied by these terms has also changed over time because the meanings of the terms have changed. According to the word origins (Column 2, Table 5) and current English dictionary definitions (Column 3, Table 5), the order of seriousness of the terms from lowest to highest has changed, with the exception of "emergency," which has remained at the same level over time (Caldera 2017). However, the term "emergency" currently describes different levels of severity, which is confusing. For example, government agencies use the term "emergency" to declare a state of emergency when there is a serious and uncontrollable situation, yet "emergency" is also used to describe situations as small as a car accident. Therefore, "emergency" can refer to almost any level of severity, from a car accident to a major disaster.
Therefore, the meaning of these words and the level of seriousness of each word should be fixed to clearly convey their implied severity level and to reduce confusion. Table 5 Levels of seriousness of the terms according to historical and current dictionary definitions (Caldera 2017) Fourth, the meanings of the terms change depending on context, and according to the Oxford English Dictionary, there are many applications of the terms (Caldera 2017). For example, William Shakespeare used the term "catastrophe" to express an insult: "I'll tickle your catastrophe" in Henry IV, Part 2 (Spevack 1973). However, "catastrophe" in geology is a sudden and violent change in the physical order of things, such as a sudden upheaval, depression, or convulsion affecting the Earth's surface and the living beings upon it. Some have supposed that a catastrophe occurs at the end of the successive geological periods (Oxford 2014). These terms are often used as metaphors and have different connotations. For instance, "disaster" can describe everything from an event like an earthquake to occasions when two ladies turn up for a party wearing the same dress (de Boer 1990). Even within the same field, definitions of the descriptive terms vary. For instance, the EM-DAT database has defined "disaster" as a "situation or event, which overwhelms local capacity, necessitating a request to national or international level for external assistance; an unforeseen and often sudden event that causes great damage, destruction and human suffering. Though often caused by nature, disasters can have human origins" (EM-DAT 2021). 
Canada's emergency management framework (3rd Edition) defines disasters as "Essentially a social phenomenon that results when a hazard intersects with a vulnerable community in a way that exceeds or overwhelms the community's ability to cope and may cause serious harm to the safety, health, welfare, property or environment of people; may be triggered by a naturally occurring phenomenon, which has its origins within the geophysical or biological environment or by human action or error, whether malicious or unintentional, including technological failures, accidents and terrorist acts" (Public 2017). The Encyclopedia of Crisis Management's disaster classification, given in Table 6, has four levels (incidents, major incidents, disasters, and catastrophes) and characterizes "disaster" according to the impacts and the management challenges of response and recovery (Penuel et al. 2013).

Table 6 Differentiation of the size of an event by process and impact (Penuel et al. 2013)

Proposed order of terminology for severity spectrum

Integrating descriptive words into an emergency management system improves mutual understanding and makes events easier to manage with minimal confusion. For instance, the terms "emergency," "disaster," and "catastrophe" have different levels of seriousness, increasing from emergency to disaster to catastrophe; therefore, these words should be used instead of headings that merely state Type 1, 2, or 3, as IMTs do. This change improves understanding at all levels and avoids confusion about whether Type 1 or Type 3 is the most critical. Naming the different categories and using plain language to describe the magnitude of a disaster allow for easier management at all levels. However, selecting the appropriate terms for the different severity levels should be done with careful evaluation.
As a solution to the aforementioned inconsistencies, a standard terminology is required to describe the severity levels of natural disasters qualitatively because the descriptive terms are subjective. The terms "emergency," "disaster," and "catastrophe," in this order, reflect an increasing order of seriousness of an event. However, 3 levels are not enough to clearly differentiate the impacts of disasters. Consequently, more levels are added: "apocalypse," "calamity," and "cataclysm." However, the terms "apocalypse," "calamity," and "cataclysm" are typically colloquial and not heavily used to describe disasters; hence, it is conjectured that people may only guess at the level of seriousness that these words imply. Therefore, a clear order of seriousness for the standard terminology is required to encourage a change in people's response to disasters and in how they think about the severity of a disaster. The proposed ranking of the selected terms in increasing order of severity is as follows: "emergency," "disaster," "calamity," "catastrophe," "cataclysm," and "apocalypse." The order of these terms is arranged considering the widely accepted understanding of the terms and their dictionary definitions. More details about the proposed order of terminology can be found in Analysis and Classification of Natural Disasters (Caldera 2017). Nevertheless, since "apocalypse" has a religious connotation, it is replaced by "partial or full extinction." This new term represents the most serious level.
Associated color code for the proposed terminology

According to the color coding system introduced in Step 2, the colors are assigned to each term as follows:

Emergency is blue
Disaster is green
Calamity is yellow
Catastrophe is red
Cataclysm is purple
Partial or full extinction is black

Proposed definitions of terminologies for the severity spectrum

The definitions for the terms are proposed in Table 7, and they are based on dictionary definitions and common usage; using any combination of the 6 terms to define another is carefully avoided. The increasing order of seriousness is indicated by the terms' definitions and the following methods of designation. The terms are listed from the lowest to highest order of seriousness:

To describe circumstance (blue colored text), we use "event," "disturbance," and "upheaval," which are modified by the adjectival forms "sudden," "major," "large-scale," "very large-scale," "extremely large-scale," and "world-scale."
To describe impact (purple colored text), we use "damage," "destruction," and "devastation," which are modified by the adjectival forms "significant," "severe," "widespread continental," "global," and "universal."
To describe injuries (green colored text), we use "many serious," "major," "massive," and "uncountable."
To describe fatalities (red colored text), we use "some," "many," "great," "extensive," "unimaginable," and "partial or full extinction."

Table 7 Qualitative Universal Disaster Severity Classification

Unlike the existing definitions, the proposed definitions provide a consistent method of differentiation, as they clearly articulate the real magnitude of the different severity levels (Column 3, Table 7).
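The six terms and their Step 2 color codes form a simple ordered lookup. A minimal sketch in Python (the names and structure are illustrative, not from the source; the term-color pairing is exactly as listed above):

```python
# Severity terms in increasing order of seriousness, with the color code
# assigned to each in Step 2. Names are illustrative only.
SEVERITY_TERMS = [
    ("Emergency", "blue"),
    ("Disaster", "green"),
    ("Calamity", "yellow"),
    ("Catastrophe", "red"),
    ("Cataclysm", "purple"),
    ("Partial or full extinction", "black"),
]

def seriousness_rank(term: str) -> int:
    """Return the 1-based rank of a term, where 1 = least serious (Emergency)."""
    return 1 + [t for t, _ in SEVERITY_TERMS].index(term)

print(seriousness_rank("Catastrophe"))  # 4
```

Encoding the order explicitly, rather than relying on readers' intuitions about the words, is the point of the proposed ranking.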
Combining the qualitative measure and the foundation of the UDSCS

The six terms "emergency," "disaster," "calamity," "catastrophe," "cataclysm," and "partial or full extinction" are clearly defined to represent the severity levels of a disaster and their order of seriousness; however, the foundation of the UDSCS has 10 levels that represent the vast range of impacts, in terms of both human and material damage, and these must be represented using the 6 words. UDSCS 0 is not considered here because it represents nondestructive events. The methodology combines the qualitative measure (i.e., the terminology of the severity spectrum) and the foundation of the UDSCS (i.e., 10 levels and associated colors). The first destructive level of the UDSCS, UDSCS 1, is "Emergency," which indicates the impact of a disturbance on inhabited areas. The term "Partial or Full Extinction" represents the last level of the UDSCS, UDSCS 10, and indicates partial or total destruction of the Earth. The levels in between are equally distributed among the remaining 4 words; each term is subdivided into Types 1 and 2 of "Disaster," "Calamity," "Catastrophe," and "Cataclysm," as shown in Column 2 in Table 7. Thus, each severity level has a unique, clearly defined term to describe it. These clearly defined terms help to quantify disaster events, to make comparisons, and to rank natural disasters more accurately. Even though the definitions are in English, they can be translated into most languages. Having clear definitions and a clear order of seriousness allows for easier recognition of an event occurrence and provides an overall picture of the severity of disasters to help emergency response management systems.

Clear boundaries for the initial quantitative scale and UDSCS

Step 4: Developing quantitative measure

Identifying relationships between factors that reflect the severity of an event aids in deciding what factors should be included in the multidimensional scale.
Then, the most influential factors of severity are selected to develop the scale. Different methods can be used to identify the direct relationship between the factors: statistical correlation can be used to identify the degree of linear relationship, and regression analysis can be used to identify the specific relationship. Different types of regression analyses and correlation methods are employed according to the type of variables used in the following analysis.

Identifying the most important influential factors related to severity

The most influential impact factors that can be considered for a multidimensional scale are from the socioeconomic factors listed in Step 1. Therefore, the impact of disasters on people, facilities, and the economy should be studied in detail to understand the severity of a natural disaster. Due to the lack of a complete recording system, the only socioeconomic factors considered in this correlation analysis are fatalities, injuries, missing persons, houses damaged/destroyed, and cost of damage in USD, as given in the NOAA database (National 2013a, 2013b). The Pearson correlation coefficient (ρ) is used as a measure of association among the continuous variables in the tornado impact data, and Spearman's rho correlation coefficient (ρ′) is used to obtain the relationship between the ordinal/interval variables in the volcanic eruption data. Tables 8 and 9 show that all variables are positively correlated, with ρ, ρ′ ≥ 0.5.

Table 8 Pearson correlation coefficient (ρ) for tornado effect factors (Caldera et al. 2018)

Table 9 Spearman's rho correlation coefficient (ρ′) for volcanic effect factors (Caldera and Wirasinghe 2014)

The impact factors for tornado effects show a strong linear dependency, as ρ is greater than 0.75, which means that when one factor increases, the other factor is expected to increase. For example, an increase in the number of fatalities predicts an increased number of injuries and damage.
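The two correlation measures used above can be sketched with the standard library; Spearman's rho is simply the Pearson coefficient computed on ranks. The data below are placeholder values, not the NOAA records used in the study:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient (rho): linear association of continuous variables."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman's rho (rho'): Pearson correlation of the ranks (no-ties case)."""
    rank = lambda vs: [sorted(vs).index(v) + 1 for v in vs]
    return pearson(rank(xs), rank(ys))

# Placeholder impact data (illustrative only):
fatalities = [2, 5, 11, 30, 75, 160]
injuries = [20, 60, 140, 300, 900, 1800]
print(round(pearson(fatalities, injuries), 3), round(spearman(fatalities, injuries), 3))
```

Because the placeholder series are strictly increasing together, Spearman's rho is exactly 1 while Pearson's rho is close to 1, illustrating the ρ, ρ′ ≥ 0.5 pattern reported in Tables 8 and 9.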
However, when there are advanced warnings and mitigation measures, the number of fatalities and injuries can be minimized even if property damage increases. For the considered dataset, a linear relationship between these factors is investigated using multiple regression analysis and shown in Eq. (1). Table 10 shows that all coefficients in Eq. (1) for tornadoes are statistically significant because their p-values of 0.000 are less than 0.05.

$$\text{Fatalities} = 1.26 + 0.03 \cdot \text{Injuries} + 3.06 \times 10^{-8} \cdot \text{Damage}$$

Table 10 Regression values for relationships between fatality and injuries, damage from tornadoes

The model that describes the relationships between fatalities, injuries, and damage explains 71% of the variance in the tornado data because the adjusted R-squared value is 0.71, which indicates a strong linear relationship; that is, 71% of the variance in fatalities can be predicted from injuries and damage. Therefore, analyzing one factor can determine another using their linear dependency. For tornadoes, according to Eq. (1), the estimates reveal that for every additional 100 injuries, fatalities are predicted to increase by 3, holding all other variables constant, and for every additional USD 100 million in damage, fatalities are predicted to increase by 3, holding all other variables constant. Similarly, Caldera et al. (2016a) showed the linear relationship between impact factors for each type of disaster by analyzing tornadoes (Caldera et al. 2018), earthquakes (Esfeh et al. 2016), tsunamis (Caldera et al. 2016b), and volcanic eruptions (Caldera and Wirasinghe 2014). Given that the impact factors are correlated (ρ ≥ 0.5) with each other, two approaches can be applied: measure the severity using one of these factors or develop a complex disutility function that includes several factors. Initially, the simplest approach of using one factor is selected to measure severity.
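Eq. (1) can be applied directly as a predictor. A minimal sketch with the reported coefficients (the function name is illustrative, not from the source):

```python
def predicted_fatalities(injuries: float, damage_usd: float) -> float:
    """Fitted linear model of Eq. (1) for tornado fatalities."""
    return 1.26 + 0.03 * injuries + 3.06e-8 * damage_usd

# Marginal effects quoted in the text, holding the other variable constant:
print(predicted_fatalities(100, 0) - predicted_fatalities(0, 0))    # +100 injuries -> ~3 fatalities
print(predicted_fatalities(0, 100e6) - predicted_fatalities(0, 0))  # +USD 100M damage -> ~3 fatalities
```

The two marginal effects (0.03 x 100 = 3 and 3.06e-8 x 1e8 = 3.06) reproduce the "increase by 3" interpretations stated above.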
The complex disutility function approach can be used to develop a multidimensional UDSCS. Therefore, 1 of the 5 factors (fatalities, injuries, missing persons, cost of damage, and houses damaged) is selected for the initial scale. The number of fatalities is chosen as the most significant impact factor representing the severity of all types of disasters because, of these factors, fatalities are the most serious and the easiest to define, owing to the finality of death. On the other hand, houses damaged depends closely on location, time, material, and size; cost of damage tends to increase with time because of inflation and the wealth of the affected society; missing persons are eventually presumed dead and added to fatalities; and injuries are ambiguously defined because they can range from "small" to "moderate" to "severe" and may or may not include illness. In addition, populations are most sensitive to disastrous events with high fatalities, and many authors consider fatality a good measure of severity (de Boer 1990; Eshghi and Larson 2008; Gad-el-Hak 2008a; Löw and Wirtz 2010; Rodríguez et al. 2011; MunichRE 2013; Durage 2014; Hasani et al. 2014; Esfeh 2016; Yew et al. 2019). Moreover, the number of fatalities is correlated with several factors; it correlates highly with injuries and missing persons and moderately with houses damaged and cost of damage. Therefore, the number of fatalities is used to differentiate the levels of the initial severity scale. Extreme events based on fatalities are further analyzed using extreme value theory.

Analysis of parent distribution of disaster events based on the most important influential factor

To understand the disaster continuum, a global-level dataset with different types of natural events must be considered.
Therefore, 62 different types of disasters, such as global disasters (e.g., droughts, earthquakes, tsunamis, cyclones, and volcanoes), regional disasters (e.g., blizzards, general (river) floods, heat waves, tornadoes, and viral infectious diseases), and local disasters (e.g., avalanches, hailstorms, flash floods, forest fires, and landslides), are included in this analysis, with the frequency distribution shown in Table 11. The data in the EM-DAT global loss database for all types of natural disasters from 1977 to 2013 inclusive are considered. Although data from 1900 to 2013 were available in the EM-DAT database, CRED restricts the maximum amount of data issued to around 10,000 records; since the recording system improved after 1980, the more recent historical records were chosen (Centre 2013).

Table 11 Event distributions according to their groups and main types of disaster profile

This analysis covers 5 of the 6 main groups of natural disasters, and the frequency distributions of these 37 different categories of disasters are shown in Table 11. The considered dataset includes 59 secondary sub-types of disasters (Wirasinghe et al. 2013a), but it does not include data on meteorites/asteroids among extraterrestrial events, animal stampedes among biological events, or land fires among climatological events. Another drawback of the considered dataset is that the database records are grouped by country. For example, the 2004 Indian Ocean tsunami data were distributed over 12 different records according to the 12 affected nations. Therefore, the actual impact of some large events is not properly captured. It is essential to determine the statistical characteristics of fatalities and the probability distribution that best describes them. There are 10,805 records of fatalities out of 10,807 records from 1977 to 2013 logged in the EM-DAT database, with a minimum of 0 and a maximum of 300,000.
The mean of fatalities is 258.04, and the standard deviation is 5491.18, with a skewness of 38.65 and a kurtosis of 1686.98, which means the parent probability distribution of fatalities has a long right tail, i.e., more extreme events at larger fatality numbers. Determining the distribution that the historical data follow is necessary to estimate the probability of future events for a given number of fatalities. To better fit the distribution, fatalities are transformed into logarithms after eliminating zeros (3287 records) and records with no fatality entry (2 records) from the 10,807 records (Caldera 2017). The logarithmic fatality data of the remaining 7518 records have a mean of 1.275 and a standard deviation of 0.769, with a skewness of 0.610 and a kurtosis of 1.054, and with a minimum of 0 and a maximum of 5.4771. Close approximate distributions were fitted to the logarithmic fatality data. The probability density function (PDF) of the generalized logistic distribution (GLPDF), with μ = 1.224 and σ = 0.424, is an approximate parent distribution fit for the fatalities, as shown in Eq. (2). The cumulative distribution function (CDF) of the fitted GLPDF and the sample CDF are shown in Fig. 1.

$$f(x) = \frac{e^{-\left(\frac{x-\mu}{\sigma}\right)}}{\sigma\left[1 + e^{-\left(\frac{x-\mu}{\sigma}\right)}\right]^{2}}; \quad \text{where } \sigma > 0 \text{ and } 0 < x < +\infty$$

Fig. 1 Cumulative sample distribution of fatalities in a logarithmic scale with an approximate GLPDF

Method for identifying extreme disasters to represent the severity spectrum

Extreme value theory can be used to study the behaviors and destructive capacity of strong, violent, infrequent disasters. Extremes are low-probability events located on the tail of the parent PDF. In this case, the right tail of the parent PDF is considered because the extremes are the largest (maxima) of severe events.
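Before turning to the extremes, the parent fit of Eq. (2) with the reported parameters (μ = 1.224, σ = 0.424) can be evaluated directly; a sketch, with an illustrative function name:

```python
import math

def glpdf(x: float, mu: float = 1.224, sigma: float = 0.424) -> float:
    """Logistic-form density of Eq. (2), fitted to the log-transformed fatalities."""
    z = math.exp(-(x - mu) / sigma)
    return z / (sigma * (1.0 + z) ** 2)

# The density peaks at x = mu, where it equals 1/(4*sigma):
print(round(glpdf(1.224), 4))  # 0.5896
```

Evaluating the fitted density at a few log-fatality values is a quick sanity check of the shape before the extreme value distributions are fitted.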
These extreme events are selected to fit an extreme value probability distribution function (EPDF). The EPDFs are limiting distributions and are essential to evaluate the probability of extreme disasters. Three models are available to identify the extreme events: block maxima, the Rth order statistic, and peak over threshold (Kotz and Nadarajah 2000; Coles 2001; Reiss and Thomas 2007). The peak over threshold model only contains high extreme records because it is bounded below; consequently, it does not cover the full range of extreme disaster types with fatalities. To select the extreme fatalities using the block maxima or Rth order method for all types of natural disasters, each block is taken to be a different type of natural disaster; otherwise, when small-scale and large-scale disasters are grouped together, the method is biased toward large-scale disasters and does not select fatalities from small-scale disasters. The number of extremes increases gradually with the order statistic, as shown in Eq. (3) (Caldera 2017). Because R varies from 1 (i.e., block maxima) to R, R different extreme fatality datasets representing all types of natural disasters are selected for the analysis.

$$\text{Sample size of } R^{\text{th}} \text{ order extreme dataset} = \text{Number of categories} \times R^{\text{th}} \text{ order}$$

Consequently, substantially fewer extreme records are included in the block maxima model compared to the Rth order statistic model. Of the three methods, the Rth order statistic was therefore used in this analysis because it selects a considerable number of extremes for each type of disaster and covers the full range of severity (i.e., fatalities), from small-scale to large-scale disasters. Extremes in the Rth order statistical model are distributed as a generalized extreme value distribution (GED).
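The Rth order selection per block can be sketched as follows; the records here are toy data, not EM-DAT entries, and the function name is illustrative:

```python
# Sketch of Rth order extreme selection: each disaster category is one block,
# and the R largest fatality counts per block are kept. Toy data only.
records = [
    ("Flood", 12), ("Flood", 950), ("Flood", 40),
    ("Quake", 3), ("Quake", 220), ("Quake", 18000),
]

def rth_order_extremes(data, r):
    by_block = {}
    for block, fatalities in data:
        by_block.setdefault(block, []).append(fatalities)
    # Keep the r largest values in each block (the Rth order statistics):
    return {b: sorted(vs, reverse=True)[:r] for b, vs in by_block.items()}

R = 2
extremes = rth_order_extremes(records, R)
n_selected = sum(len(vs) for vs in extremes.values())
# Eq. (3): sample size = number of categories * Rth order
assert n_selected == len(extremes) * R
print(extremes)  # {'Flood': [950, 40], 'Quake': [18000, 220]}
```

With the study's 27 blocks, Eq. (3) gives extreme dataset sizes of 27 x R, i.e., 135 records at R = 5 up to 1890 records at R = 70.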
The GED can be further specialized into the Gumbel (GE0), Frechet (GE1), or Weibull (GE2) distributions (Kotz and Nadarajah 2000; Coles 2001; Reiss and Thomas 2007). Different types of EPDF are fitted for the Rth order statistical models. The best-fitted EPDF of the extremes is used to define the ranges of the severity levels, as shown in Fig. 2.

Fig. 2 Probability distribution and severity levels

Analysis of extreme disaster events based on the most important influential factor

To apply extreme value theory to the random variable (number of fatalities), each type of natural disaster represents one block in the extreme value analysis. Although there are 59 different secondary sub-types of disasters recorded in the EM-DAT database from 1977 to 2013, the same categorizations (blocks) cannot be used to extract the extreme values, as some secondary sub-types have no events, only one or two, or fewer than 10 recorded events (e.g., blizzards, dust storms, freezing rain, icing, sandstorms, and snow avalanches). In these cases, there are not enough extreme values to represent the highest Rth order statistic. Therefore, a new categorization is introduced that combines the categories with smaller numbers of events (e.g., parasitic infectious diseases, extreme winter conditions, and other wildfires). To reflect a reasonable number of data points in each block, the different types of disasters are first grouped according to their secondary sub-type, sub-type, or main type if there are not enough events in the category to represent the Rth order; then, fatalities are ordered from highest to lowest for each category. Subsequently, the first R fatality records in each block are selected.
Therefore, the following categories are combined: "Parasitic infectious diseases" and "Other epidemics" are combined into "Other epidemics"; "Cold wave" and "Extreme winter conditions" are combined into "Cold wave or Extreme winter conditions"; "Scrub/Grassland fire," "Bush/Brush fire," and "Other wildfires" are combined into "Other wildfires"; "Tsunami," "Other seismic activity," "Mass movement dry landslide," "Other mass movement dry," "Debris flow," "Sudden subsidence," "Mudslide," "Snow avalanche," "Rock fall," and "Avalanche" are combined into "Other geophysical events"; and "Other Local/Convectional storm," "Snowstorm/Blizzard," "Blizzard," "Blizzard/Tornado," "Blizzard/Dust storm," "Dust storm," "Sandstorm/Dust storm," "Sandstorm," "Snowstorm," "Extratropical cyclone (winter storm)," and "Severe storm/Hailstorm" are combined into "Other local/Convectional storm." Consequently, there are 27 categories, each corresponding to one block, and their 5th order, 10th order, 15th order, …, up to 70th order statistics (i.e., 14 different extreme datasets) are analyzed. The sample sizes of these 14 extreme datasets of Rth order statistics increase in multiples of 27, according to Eq. (3), because this analysis considers 27 different categories of natural disasters (27 blocks). The distributions of the means of these 14 extreme datasets, and their trend line, are shown in Fig. 3. The trend line of the mean closely matches the actual values, as the R-squared value is close to 1 (R² > 0.99). The first derivative of the fitted trend line measures the rate of change of the mean, while the second derivative measures whether this rate of change is increasing or decreasing. The mean stabilizes as the Rth order increases because its rate of change decreases; therefore, the mean slowly decreases and converges toward the full sample value (i.e., 258.04) as the Rth order increases.
Fig. 3 Mean distribution of Rth order extremes

According to the extreme value distribution selection procedure, the 70th order statistic is considered the best minimum Rth order statistic for estimating the probabilities of the severity levels of fatalities (Caldera 2017). The extreme value distributions (GE1, GE2, and GED) are then fitted to the 70th order statistic to assess the extreme natural events. A wide range of fatalities, 0 to 7.674 billion (the world's population, World 2019a), can be concentrated into 10 levels using the log scale. Therefore, the magnitudes of the severity level boundaries are defined based on the logarithm of the fatalities.

Combining the initial quantitative measure (i.e., the proposed ranges of the severity spectrum) and the foundation of the UDSCS

The estimated probabilities and the sample probabilities of severity levels 0 to 10, according to the foundation of the UDSCS from Step 2, are shown in Table 12. The full sample dataset from 1977 to 2013 and the 70th order statistic extreme dataset are used to calculate the sample probabilities for the severity levels. The 70th order statistic sample of extreme events represents 17.49% of the full dataset. The estimated probabilities of the severity levels in Table 12 are calculated using the fitted 70th order Frechet (GE1), Weibull (GE2), and generalized extreme value (GED) distributions.

Table 12 Estimated probabilities of severity levels

Out of the 10,807 sample events, two events did not have fatality records. Thus, considering the remaining 10,805 sample events (Column 3, Table 12), only 69.58% of the full dataset had at least one fatality, while 30.42% of events recorded zero fatalities. In addition, 12.44% of the extreme events in the 70th order sample (Column 7, Table 12) recorded zero fatalities, which means that the 70th order extreme dataset covers the full range of extremes, from small-scale to large-scale disasters.
Moreover, GE1, GE2, and the GED estimate probabilities of 0.80%, 9.38%, and 18.79%, respectively, for zero fatalities. Compared to the estimated probabilities of GE1, GE2, and GED, only the 70th order sample probabilities for levels 0, 1, and 3 are closer to GE2 than to GED; for all other severity levels, the 70th order sample probabilities are closer to GED than to GE2 or GE1. Additionally, GE2 gives significantly lower estimated probabilities for the higher severity levels (level 6 to level 10) compared to GE1 and GED, although the 70th order sample has a 0.42% representation of UDSCS 6 or higher events. In contrast, GE1 yields significantly higher estimated probabilities for levels 7 to 10 compared to GE2 and GED. For example, according to the fitted GE1, 2 out of 10,000 severe natural disasters would be severity level 10 events (i.e., fatalities exceeding 1 billion), which is a comparatively high probability for partial or full extinction. However, the estimated probabilities of the GED are closer (and thus more reliable) to the 70th order sample probabilities for the higher severity levels. Table 12 illustrates that, according to the fitted GED, 5 out of 100,000 severe natural disasters will have 1 million to 10 million fatalities, 4 out of 1 million will have 10 million to 100 million fatalities, 3 out of 10 million will have 100 million to 1 billion fatalities, and 3 out of 100 million will have 1 billion or more fatalities. Thus, the fitted GED of the 70th order statistic is suitable for calculating the approximate probability values of the natural disaster severity levels (Column 6, Table 12). The CDF of the fitted 70th order GED, as shown in Eq. (4), and the sample CDF are shown in Fig. 4. Note that the probabilities in Fig. 4 are truncated because the cumulative probability value at zero fatalities is 0.18, while the sample probability of zero fatalities is 0.12.
$$F(x) = e^{-\left[1 + \gamma\left(\frac{x-\mu}{\sigma}\right)\right]^{-\frac{1}{\gamma}}}; \quad \text{where } \mu = 44.396,\ \sigma = 106.060,\ \gamma = 0.924$$

Fig. 4 Cumulative sample distribution of fatalities with an approximate 70th order GED

Step 5: Combining quantitative and qualitative measures

As a way to measure the severity of natural disasters, a UDSCS with levels 0–10 is developed by combining both quantitative (initial) and qualitative measures to differentiate each level, as shown in Table 13. Each severity level has a fatality range, an expected probability, and a color code. In addition, each severity level has a unique, clearly defined term to describe it. Examples drawn from historical events are also provided for each level. For example, UDSCS 1, "Emergency," accounts for situations with between 1 and 10 fatalities, and UDSCS 10, "Partial or Full Extinction," is defined as situations that exceed one billion fatalities.

Table 13 Initial Universal Disaster Severity Classification—Fatality based

Almost everything in this table is novel. For the first time, a 0–10 level ranking is proposed, which makes sense because it is a log scale that can cover wide ranges of the socioeconomic factors. Although each severity level increases by a power of 10, the probability of events that fall within the higher ranges of the scale is small; the probability of a very high classification is low for severe natural disasters because such events are rare. Furthermore, the base-10 measurement is easy to remember and meaningful because it clearly differentiates one severity level from another. The estimated probabilities of these levels are calculated using the approximate best-fitted 70th order statistic GED (Columns 4 and 5, Table 13).
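Eq. (4) can be evaluated directly with the fitted parameters to reproduce the per-band figures discussed above (for instance, roughly 5 in 100,000 severe events in the 1-10 million fatality band). A sketch; the function names are illustrative:

```python
import math

MU, SIGMA, GAMMA = 44.396, 106.060, 0.924  # fitted 70th order GED parameters of Eq. (4)

def ged_cdf(x: float) -> float:
    """CDF of the fitted generalized extreme value distribution, Eq. (4)."""
    t = 1.0 + GAMMA * (x - MU) / SIGMA
    return math.exp(-t ** (-1.0 / GAMMA))

def band_probability(lower: float, upper: float) -> float:
    """P(lower <= fatalities < upper) under the fitted GED."""
    return ged_cdf(upper) - ged_cdf(lower)

# Base-10 fatality bands of the scale: 10**(n-1) to 10**n fatalities per level.
print(f"{band_probability(1e6, 1e7):.1e}")  # ~5e-05 -> 5 in 100,000
print(f"{band_probability(1e7, 1e8):.1e}")  # ~4e-06 -> 4 in 1 million
print(f"{1.0 - ged_cdf(1e9):.1e}")          # ~3e-08 -> 3 in 100 million (>= 1 billion)
```

These band probabilities match the "x out of N" GED figures quoted from Table 12, which is a useful check that the reported parameters and probabilities are internally consistent.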
UDSCS level 6 or higher disasters are expected to have very small estimated probabilities according to the fitted 70th order GED. These probabilities are estimated from a very small number of recorded severe events (7 historical records of UDSCS 6 or higher events in the 70th order sample and the full sample dataset from 1977 to 2013) because there are no historical records of UDSCS 7 or higher disasters in the considered dataset; however, there is geological evidence of such natural disasters having occurred in the past. Therefore, the estimated probabilities for the last four levels (UDSCS 7 to 10) are very low, and these probabilities are indicative of their severity range. In the considered dataset, the maximum fatality record is 300,000, which falls into Catastrophe Type 1 (UDSCS 6). However, according to the fitted 70th order GED, 5 in 1 million severe events are expected to be Catastrophe Type 2 or higher disasters. The 2 worst extreme natural disasters that have occurred in history, and for which data are available, are the Black Death pandemic that occurred between 1346 and 1353, with more than 50 million fatalities, and the Spanish Flu pandemic that occurred between 1918 and 1920, with more than 40 million fatalities (Saunders-Hastings and Krewski 2016); both events are categorized as Cataclysm Type 1 (UDSCS 8). Additionally, the Asian Flu pandemic that occurred between 1957 and 1958, which resulted in more than 1 million deaths (Rajagopal and Treanor 2007), and China's 1931 flood, which resulted in more than 2.5 million deaths, are categorized as Catastrophe Type 2 (UDSCS 7). Therefore, the above estimates are reasonable considering events that are not included in the analysis. Furthermore, there can be disasters that are not recorded in the databases, such as extraterrestrial events, or the combined impact of extreme disasters, such as an earthquake and tsunami, that affect more than one country.
Disasters, such as meteoroid impacts, have the potential to vary from "Emergency" (UDSCS 1) to "Partial or Full Extinction" (UDSCS 10). Although there are no recorded fatalities caused by a meteoroid impact, the falling of meteoroids gained attention after the Russian meteor strike in 2013 that injured more than 1,000 people. Also, there are many studies about extinction risks, such as super-volcanic eruptions or major asteroid impacts. The studies estimated the number of deaths that might occur, but the probability of these events occurring is very low. According to the Planetary Society, an asteroid larger than 1 km across is big enough to threaten global destruction, and astronomers estimate such objects have a 1 in 50,000 chance of hitting Earth every 100 years (Kettley 2020a). These kinds of asteroid strikes can be categorized as Partial or Full Extinction (UDSCS 10). Scientists have modeled that a super-eruption might kill 10 percent of the global population (i.e., more than 700 million) (Walsh 2019), and therefore, super-eruptions can be categorized as Cataclysm Type 2 (UDSCS 9). However, Dr. Jerzy Żaba, a geologist, estimates the Yellowstone volcano could trigger global climate change, and about five billion might die from starvation in the aftermath of that eruption (Kettley 2020b), so the combined impact of the eruption and the aftermath may lead to Partial or Full Extinction (UDSCS 10). Predictions from the model need to consider possibilities outside the estimated probabilities and must be used with caution because decision makers may believe them to be absolute. According to the estimated probabilities of 70th order GED, 19 out of 100 extreme natural disasters are less than UDSCS 1. They can be disasters that are not recorded in the database (less than 10 fatalities) or zero fatality events, such as insect infestations and lightning strikes, according to the historical events recorded. 
Four out of 100 severe natural disasters can be considered as UDSCS 1 events that have 1 to 10 fatalities. An example of a UDSCS 1 event is icing, which is any deposit or coating of ice on an object that can seriously hamper its function and is considered an extreme temperature condition grouped under climatological disasters. Note that the EM-DAT database records events that have less than 10 fatalities if 100 or more people are reported as affected or there has been a call for international assistance/declaration of a state of emergency. Thus, the estimated probabilities of UDSCS 0 and 1 are also conditional on the above data entry criteria. According to the analysis, the most likely severity level for severe natural disasters is Disaster Type 2 (UDSCS 3): 39 out of 100 severe disasters will have 100 to 1,000 fatalities. According to historical events, bush and forest fires, cold waves, avalanches, snow avalanches, rock falls, storm surges/coastal floods, sudden subsidence, debris flows, mudslides, tornadoes, and storms (severe, hail, dust, and local) can be classified as UDSCS 3. The second most likely severity level that severe natural disasters can fall under is Disaster Type 1 (UDSCS 2); 29 out of 100 severe events will have 10 to 100 fatalities. According to the data, freezing rains, scrub/grassland fires, other wildfires, other seismic activity, and storms (snow, winter, sand, blizzard, thunderstorms, and extratropical cyclones) can be classified as UDSCS 2. Thus, 68% of severe natural disasters will have 10 to 1,000 fatalities and fall under either Disaster Type 1 or Type 2 (UDSCS 2 or 3). The next major natural disaster will have a 7.75% chance of causing between 1,000 and 10,000 deaths. In other words, 775 out of 10,000 extreme disasters can be classified as UDSCS 4, Calamity Type 1. Most biological events, such as epidemics (e.g., parasitic and bacterial infectious diseases), extreme winter conditions, floods (general and other), landslides, and other storms fall under this category.
Extreme disasters, such as volcanic eruptions, flash floods, and heat waves, can be classified as UDSCS 5, which means 72 out of 10,000 severe natural disasters can be considered as Calamity Type 2 events that have 10,000 to 0.1 million fatalities. Earthquakes, tsunamis, tropical cyclones, and droughts have the ability to reach UDSCS 6, and 6 out of 10,000 severe disasters can be classified as Catastrophe Type 1 events. This universal classification system compares the severity of different types of disasters and presents an overall picture of severity levels (Caldera 2017). According to this classification, local disasters cover the lower levels, whereas the disasters with potential regional- or global-level impacts cover the upper levels. However, it should be noted that the extreme fatality analysis used historical events from 1977 to 2013 recorded in the EM-DAT database, and none of these records included events that had fatalities exceeding 300,000 (Catastrophe Type 1). In addition, as mentioned previously, the database records depend on the country (e.g., the 2004 Boxing Day tsunami data are not recorded as one event but 12 different events because 12 different nations were affected). Moreover, there are events before 1977 (e.g., the 1931 China flood, classified as Catastrophe Type 2) that this analysis does not cover, and there is the possibility that future events exceed 300,000 fatalities. In addition, simultaneous disaster events (e.g., an earthquake and tsunami striking together or the impact of a hurricane and peripheral tornadoes) are not considered in this analysis. These events can cause the classification level to increase by one or more levels. Additionally, infrastructure failure can be added to an event or simultaneous events, for example, the nuclear plant failure subsequent to the Great East Japan Earthquake and Tsunami. A meteoroid impact on land close to population centers or in the ocean (causing massive tsunamis) could cause millions of fatalities.
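The band probabilities quoted above can, in principle, be obtained by differencing the fitted extreme-value distribution's CDF at the UDSCS fatality boundaries (powers of ten). The sketch below illustrates the calculation with a generalized extreme value (GEV) CDF; the shape, location, and scale parameters are hypothetical placeholders, not the paper's fitted 70th order GED values.

```python
import math

# Sketch: probability that a severe event falls in each UDSCS fatality band,
# obtained by differencing a fitted extreme-value CDF at the band boundaries.
# The GEV parameters below are illustrative placeholders, NOT the paper's
# fitted 70th order GED parameters.

def gev_cdf(x: float, xi: float, mu: float, sigma: float) -> float:
    """CDF of the generalized extreme value distribution (xi != 0)."""
    t = 1.0 + xi * (x - mu) / sigma
    if t <= 0.0:
        # Outside the support: below it for xi > 0, above it for xi < 0.
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

xi, mu, sigma = 0.8, 150.0, 300.0            # hypothetical heavy-tailed fit

# UDSCS level k (1 <= k <= 10) covers 10**(k-1) <= fatalities < 10**k.
bounds = [10**k for k in range(11)]           # 1, 10, ..., 10**10
probs = {0: gev_cdf(bounds[0], xi, mu, sigma)}  # below 1 fatality -> UDSCS 0
for k in range(1, 11):
    probs[k] = (gev_cdf(bounds[k], xi, mu, sigma)
                - gev_cdf(bounds[k - 1], xi, mu, sigma))

for level, p in sorted(probs.items()):
    print(f"UDSCS {level:2d}: P = {p:.8f}")
```

With any heavy-tailed fit of this kind, most of the probability mass sits in the lower bands and the upper bands (UDSCS 7 to 10) receive very small probabilities, mirroring the pattern reported in the text.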
Although the analysis is subject to many limitations, it provides a good foundation to develop an advanced multidimensional scale to classify disaster occurrences worldwide based on a combination of several independent factors. This analysis also provides an overall picture of the severity of each type of disaster. This kind of scale makes it easy to recognize an event occurrence and enter it into a database.

Natural disaster severity classification

Table 14 illustrates the levels covered by each disaster group and the breakdown of geophysical disasters according to the historical sample data in the EM-DAT database from 1977 to 2013. The covered severity levels are indicated using "√" (checkmark), and the respective severity color code is also given. Table 14 compares and contrasts the severities of the five main groups considered in the analysis: biological, hydrological, meteorological, climatological, and geophysical. In addition, it presents an overall picture of the severity levels. As an example, the severity spectrum of geophysical disasters is shown in this table.

Table 14 Natural Disaster Severity Classification

Moreover, the disaster classification of geophysical disasters is validated using this table. The maximum fatality record (China in 1556) for earthquakes in the NOAA database from 2000 B.C.E. to 2015 C.E. is 830,000 (Esfeh et al. 2016). For a tsunami, the maximum fatality record is 300,000 (India in 1737), as recorded in the NOAA database using records from 2150 B.C.E. to 2015 C.E. (Caldera et al. 2016). Therefore, earthquakes and tsunamis can reach severity level 6 in the UDSCS (100,000 ≤ Fatality < 1,000,000). For volcanic eruptions, as recorded in the NOAA database from 4360 B.C.E. to 2013 C.E., the maximum fatality record is 30,000 (El Salvador in 450) (Caldera and Wirasinghe 2014). Therefore, volcanic eruptions can reach level 5 (10,000 ≤ Fatality < 100,000).
Consequently, geophysical disasters can go up to a severity level of 6 in the UDSCS. This conclusion is similar to the disaster classification conclusion that uses sample data from 1977 to 2013 (Table 14, columns 6 to 9). Although the disaster classification represented in Table 14 uses historical disasters from 1977 to 2013, the table covers the full range of disasters recorded in history for different geophysical disasters (i.e., tsunamis, earthquakes, and volcanic eruptions). Therefore, this comparison confirms that, despite various limitations, the proposed disaster classification and the Universal Disaster Severity Classification Scheme are close to the actual situation for many disasters.

Global databases and research limitations

Historical records are the basis for understanding the severity of a disaster, and numerous techniques have been used to record historical events (National 2007). However, data collection standards vary among countries, and therefore, comparisons across space and time are difficult. Comparing different events and obtaining a sense of scale are problematic due to deficiencies in databases. Some deficiencies in global databases are due to the following:

Incomplete data: some databases do not record all the necessary information;
Inaccurate data: global databases lack common standards;
Missing data: some events are not entered in the dataset because of the definitions or requirements of that database.

Although the number of reported natural disasters is increasing, in general, records are incomplete. Historical reports contain some, but not all, important data; most contain only a brief and often ambiguous description (Newhall and Self 1982). In addition, current records can be inaccurate and ambiguous, which complicates the relationship between impact factors and the severity of a natural disaster.
For example, the reported number of homeless people was zero in the Great East Japan (GEJ) earthquake in the EM-DAT international disasters database of the Centre for Research on the Epidemiology of Disasters (CRED) (Centre 2013). However, several thousand homes were washed away in the GEJ earthquake, leaving many people homeless. Temporary houses that were provided were in use 4 years after the event. The statistics in this example indicate that there are some concerns about information management, information processing, and how these variables are defined in global databases. The lack of common terminology to identify the scale of a destructive event is an issue in information management and processing (Hristidis et al. 2010), which can lead to "…inconsistent reliability and poor interoperability of different disaster data compilation initiatives" (Below et al. 2009). It is not uncommon for numerous records to exist for the same event, sometimes with different numbers. For example, there are different fatality records from different sources for the 1815 volcanic eruption of Mount Tambora in Indonesia. "Victims from volcanic eruptions: A revised database" (Tanguy et al. 1998) recorded 11,000 fatalities due to the volcanic eruption (with an additional 49,000 fatalities associated with the eruption but caused by post-eruption famine and epidemic disease). However, the National Oceanic and Atmospheric Administration (NOAA) database recorded 10,000 fatalities from the eruption (with 117,000 total fatalities in the aftermath of the eruption) (National 2013a). Given that one can count direct fatalities or fatalities in the aftermath (e.g., secondary disasters, such as climate anomalies, altered weather patterns, ground deformation, ash fall, pollution, starvation, landslides, and tsunamis), this adds to the possibility of inaccuracies in the databases.
Several such discrepancies exist among various sources, and they complicate the interpretation of trends in disaster data. Moreover, one disaster may lead to another disaster, which results in conjoint disaster records, and therefore, separating the impacts can be problematic. Thus, the nature of a disaster, whether it is primary or secondary, is one of the main issues in distinguishing one disaster from another (Wirasinghe et al. 2013a). Additionally, databases that compile disaster events at the national level face issues with disasters that have impacts at the regional or continental level. The same disaster event can also impact countries differently (Löw and Wirtz 2010), and thus, the interpretation of the scale of a disaster can be different from one country to the other (Wirasinghe et al. 2013b). Further, different databases have different criteria for including a disaster in their databases. For example, for inclusion in the EM-DAT database, a disaster has to meet at least one of the following criteria:

10 or more people reported killed; and/or
100 or more people reported affected; and/or
a call for international assistance or a declaration of a state of emergency.

In contrast, events that are entered in the Munich RE global loss database, NatCatSERVICE, are those that have resulted in human or material loss (MunichRE 2013). Thus, a given event occurrence recognized as a disaster and logged in one database may not be recorded in another. Events such as those with less than 10 fatalities, with less than 100 people affected, and with a monetary impact, but not declared as a state of emergency, are archived in NatCatSERVICE but not in EM-DAT. Therefore, databases that use different entry criteria may give different interpretations for the same event (Below et al. 2009). A lack of data and incomplete/inaccurate data can prevent in-depth analysis. Although historical inaccuracies in past records are unavoidable, going forward, inaccuracies should be avoided, if possible.
To have accurate records, we need the following:

Improved data (i.e., universal, complete, comprehensive, unambiguous, and accurate);
Enhanced databases (e.g., recording and retrieving joint or separate data, and global disasters or the subdivisions of continental, regional, or national records);
Improved information management and processing (e.g., data collection and entry criteria standards);
Precise disaster terminology (e.g., standardized terms for easily recognizing an event occurrence).

However, these requirements do not stand alone but are interconnected with each other. Hence, consistent interpretation, a proper scale, good understanding of each disaster, and an expanded recording system are required to accomplish this goal. Therefore, a global disaster classification system is an important contribution to improving the quality and reliability of international disaster databases (Löw and Wirtz 2010).

Significance of the UDSCS

Common severity scale for all types of natural disasters

The main advantage of this new UDSCS is that it will provide a common platform to compare natural disasters. Therefore, comparisons across regions and time for any type of natural disaster are feasible using this novel universal classification system. This knowledge can be used for impact assessments for different hazards (see Sub-Sect. 2.4). In addition, this universal system is not confined to disasters resulting from rapid onset, relatively clearly defined events such as earthquakes, tsunamis, and tornadoes. Disasters resulting from events that are more diffuse in space and time are also incorporated, such as droughts, famine, pollution, and epidemics. Conditions that become disastrous, but with less clear start and end points, are also incorporated because the UDSCS also considers slow moving disasters.
As this universal system considers the world's population, it incorporates conditions that become extinction events or massive phenomena, such as a major asteroid strike, super-volcanoes, or a meteoroid impact. Analyzing the risks and responses to events that have the potential to cause the full or partial extinction of the human race is crucial but curtailed, as there are obviously no historical records, although there are geological records. Another advantage of this universal classification system is that it is expected to generate a consistent standardized communication platform to describe the impact of disasters for all stakeholder groups, such as civilians, responders, and policy makers. The initial UDSCS also provides a foundation to develop an advanced scale to classify and compare disaster occurrences worldwide, although the analysis is subject to many limitations. In addition, the UDSCS will improve disaster terminology. Furthermore, in response to Löw and Wirtz's (2010) comment about a global disaster classification (see Sect. 7), the UDSCS will improve the quality of data, recording systems, and databases by providing precise disaster terminologies. Most importantly, the proposed UDSCS will improve communication and understanding of disaster risks, which aligns with the priority of the Sendai Framework for Disaster Risk Reduction 2015–2030 (United 2015).

Improved understanding of disaster risk

The UDSCS is not a replacement for firsthand damage estimates, but the universal system can support prioritization during the early stages of a response. As the response to a disaster continues, the UDSCS can be updated to consider improvements to the severity scale and sources of data (quality, timeliness, and scale) that are validated via firsthand reports and changing requirements.
Therefore, this new universal severity classification system is expected to provide benefits to several groups:

Emergency responders and disaster managers;
National/regional/local governments;
Relief agencies and NGOs;
Insurance managers and estimators;
Database/information managers.

Emergency response and disaster management

Disaster managers and emergency response personnel can gain a clear sense of scale of the severity of each type of disaster by considering the expected probabilities according to historical disasters. Also, they can have an overall picture of a disaster because the UDSCS provides relative comparisons among disasters of various degrees and ranks natural disasters using a set of criteria. This knowledge can be used to deploy resources as needed when disaster strikes, and it can be used for pre-planning (see Sub-Sect. 2.4). The initial assessment of a disaster is based on estimates made shortly after the event strikes, and it is frequently updated. For example, first evaluations are used for initial planning, such as whether to call a state of emergency, evacuate, request international assistance, or involve military forces. Other decisions regarding planning include the following: resources, such as food, water, medicine, sanitation, and clothes, that should be stored and delivered to the stricken area; hospitals that should be assembled and to what extent; and shelters to mobilize, where to set up temporary housing, and for how long. By having an overall picture of the severity of disasters, emergency response management organizations, disaster managers, first responders, government stakeholders, relief agencies, and NGOs can rapidly estimate the potential impact of a natural disaster, and then, they can quickly respond by properly allocating the appropriate resources, expediting mitigation, and accelerating the recovery processes (Caldera et al. 2018), which cannot be done using the current scales.
No matter the type of disaster, similar resources are managed by personnel who allocate available emergency vehicles, essential resources, temporary hospitals, temporary housing, etc. Mitigation efforts are dependent on the estimated disaster impact. Identifying the disaster impact properly, and in a timely manner, is crucial because lives depend on these decisions. Inconsistent identification of disaster impacts means that disaster managers may either over- or undercompensate in their allocation of resources for mitigation. Overcompensation could result in a large waste of resources, while undercompensation could increase the severity of an impact. In addition, one city can have different types of disasters, but the same personnel respond to these events. Moreover, populations are most sensitive to disasters that have high human impacts. Therefore, a severity scale based on human impacts should be used for preparedness and mitigation methods; warnings, evacuation, public awareness, disaster education, and disaster drills can help change public opinion regarding the impact of disasters, gain the public's attention, and increase trust in the techniques used by emergency management systems and emergency responders. Thus, response time to warnings can be decreased, and response rates can be increased if the proposed terms are used. Consequently, public awareness, education level, and response rate to warnings can be increased using the UDSCS because a direct relationship between a disaster and the probability of human impact is made explicit. As Durage et al. (2014) indicated, "The frequent occurrence and high intensity of natural disasters can impose irreversible negative effects on people. Taking mitigation actions well in advance can avoid or significantly reduce the impacts of disasters."
Although it is difficult to avoid property damage due to the sudden onset of a natural disaster, if proper classifications and terminology are used in an emergency management system, fatalities and injuries could be minimized by taking appropriate actions, such as issuing warnings on time and raising public awareness. Therefore, warnings indicating the severity of a natural disaster can be communicated using the clearly defined terms in the UDSCS, and meaningful communication regarding life-threatening situations is more likely to elicit an appropriate public response and may increase public awareness. In addition, confusion can be reduced, mutual understanding between the public and responders can be improved, and decision capabilities can also be improved. These recommended improvements in communication need to be tested before implementation. By having an overall picture of each disaster and its potential severity level, the UDSCS will help insurance agencies and estimators to create specific criteria to clarify common disaster compensation packages and insurance policies (Caldera et al. 2016b). Information managers can use the clear terms outlined in the UDSCS to improve the poor quality of the data in the existing reporting databases. Easily recognizing an event occurrence and having a set of standard terms in the proposed UDSCS is expected to allow database managers to improve information management and processing. A standardized database terminology and the associated data can be managed to mitigate missing or inaccurate data. Using common terminology to clearly identify the scale of a disaster can be the standard used to record disasters. Then, the scale can be used to record global disasters and the subdivisions of continental, regional, and national records.
Common terminology can also be used to record joint disaster records (i.e., the combined impact of primary and secondary disasters), and separate disasters can be recorded as subdivisions of the records, where possible (if the impact of primary and secondary disasters can be separated clearly). As a result, complications, misunderstandings, misclassifications, and missing records can be minimized as much as possible. Additionally, decision capabilities of disaster information management processing can be improved as this universal system classifies disasters according to severity. The UDSCS has academic value in addition to practical applications. For example, if we have an accurate disaster database, more research can be conducted on disaster mitigation to improve disaster preparedness technologies. It may take many years to obtain quality reports. However, even relatively short records can be used to develop relationships among variables in the records in databases (Brooks 2013), which improves analysis and research.

Improved communication

The UDSCS will serve as a bridge between qualitative and quantitative techniques used in emergency management systems. Qualitative and quantitative techniques are integrated in the UDSCS to produce management and size measurement systems, respectively. Therefore, the UDSCS is expected to avoid inconsistencies and, most importantly, connect severity metrics to generate a clear understanding of the degree of an emergency and the potential impacts, thereby improving mutual understanding between the emergency management systems of countries at all levels: international, continental, regional, national, provincial, and local. As the UDSCS is used post-event, the classification of the severity of the event may change as reports on the number of fatalities are updated. Therefore, the degree of severity changes with time and with updated reporting on the disaster.
For example, an earthquake, which occurs in seconds, could be categorized as a "disaster" in terms of severity within the first few hours depending on the reported impacts and casualties. However, the impact and casualties can increase days or weeks after the event. Accordingly, the severity of the earthquake could be reclassified as a "calamity" a day or two after the event, and it could potentially be considered a catastrophic event within weeks. Although frequent updates improve the accuracy of the severity, it is vital to estimate the severity shortly after an event strikes to provide information to first responders and for public reporting and planning. The potential impact of a disaster can be estimated with a certain degree of accuracy, which is beneficial because the size of a first-responder contingency depends on the magnitude of the disaster impact. Therefore, predicting the severity can accelerate the recovery process. The information in the initial UDSCS, listed in Table 13, is proposed for the first time. The most important advantage of the UDSCS is that it provides a consistent method for all stakeholders to measure the severity of all types of disasters. A common scale is more informative than the variety of scales currently used for different disaster types and for different stakeholder groups because the classification applies to all types of disasters and all stakeholder groups. In addition, the UDSCS has a reasonable and standard number of levels to articulate the full range of disaster severity, and it has a clear order of seriousness for the severity levels. The increasing level of seriousness from 0 to 10 is defined using quantitative boundaries and clearly defined descriptive terms, which avoids confusion as to whether UDSCS 0 or UDSCS 10 is the most critical. Because the UDSCS has a reasonable number of levels, events that have different levels of severity will not be in the same category.
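Because the quantitative boundaries are powers-of-ten fatality bands, assigning a level from an updated fatality count reduces to a digit count. A minimal sketch (the descriptive terms are taken from the levels discussed in the text; the level-0 descriptor is a placeholder here, with the official wording given in Table 13):

```python
# Sketch: map a fatality count to its UDSCS level and descriptive term.
# Boundaries follow the paper's powers-of-ten bands: level k (1 <= k <= 10)
# covers 10**(k-1) <= fatalities < 10**k, with level 10 open-ended.
LEVEL_NAMES = {
    1: "Emergency",
    2: "Disaster Type 1",
    3: "Disaster Type 2",
    4: "Calamity Type 1",
    5: "Calamity Type 2",
    6: "Catastrophe Type 1",
    7: "Catastrophe Type 2",
    8: "Cataclysm Type 1",
    9: "Cataclysm Type 2",
    10: "Partial or Full Extinction",
}

def udscs_level(fatalities: int) -> int:
    """UDSCS severity level (0-10) for a given fatality count."""
    if fatalities < 1:
        return 0                                # below UDSCS 1 (no recorded fatalities)
    return min(10, len(str(int(fatalities))))   # digit count = level, capped at 10

# Examples drawn from the text:
print(udscs_level(300_000))     # 1977-2013 maximum record -> 6 (Catastrophe Type 1)
print(udscs_level(2_300_000))   # COVID-19 as of Feb 2021  -> 7 (Catastrophe Type 2)
print(udscs_level(50_000_000))  # Black Death              -> 8 (Cataclysm Type 1)
```

This also shows how reclassification over time works: as the reported fatality count for an event is revised upward, the same function simply returns a higher level.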
Therefore, because this universal measurement system clearly conveys the size of the impact of a disaster, it avoids confusion and improves mutual understanding among stakeholder groups. Moreover, the UDSCS can be adapted to any language, country, or culture. The UDSCS clearly defines the levels of the disaster continuum by (1) redefining the existing terms without using one term to define another, (2) outlining the impact factors, damage, injuries, and fatalities, and (3) using better descriptive words to reflect the order of seriousness of a disaster. The number of fatalities is chosen as the most influential factor because it is correlated with several factors that affect humans. The number of fatalities is highly correlated with injuries and missing persons and moderately correlated with houses damaged and cost of damage. However, one factor alone is not sufficient to measure the severity of disasters because a single factor does not address all aspects of severe events. For example, a disaster, such as a wildfire in an uninhabited forest, may affect only a geographical area and not have any direct and immediate impact on humans, but the wildfire may have long-term adverse effects on the local and global ecosystems. Real-world examples include the 2016 Fort McMurray fire, which had no fatalities, and the 2013 Alberta flood, which had 4 fatalities; both were among the costliest Canadian disasters in history, and consequently, neither event is properly represented using a scale that only considers fatalities. Therefore, a more advanced multidimensional quantitative scale that combines all impact factors, such as fatalities, injuries, homeless, affected population, area affected, and cost of damage, is needed to properly address the full range of a disaster impact.
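One way such a multidimensional scale could combine impact factors is through a single index over log-scaled factors. The functional form is left to future work in the text, so the sketch below is purely illustrative: the log scaling, the factor names, and the weights are all assumptions, not the paper's method.

```python
import math

# Purely illustrative sketch of a multidimensional severity index combining
# several impact factors. The actual combination function is future work in
# the paper; the log10 scaling and the weights here are assumptions.
WEIGHTS = {                 # hypothetical relative importance of each factor
    "fatalities": 0.40,
    "injuries": 0.20,
    "homeless": 0.15,
    "affected": 0.15,
    "damage_usd": 0.10,
}

def severity_index(impacts: dict) -> float:
    """Weighted sum of log10-scaled impact factors (an all-zero event maps to 0)."""
    return sum(
        w * math.log10(1 + impacts.get(factor, 0))
        for factor, w in WEIGHTS.items()
    )

# A fatality-free but very costly event (cf. the 2016 Fort McMurray fire)
# still scores non-zero on this index, unlike a fatalities-only scale.
print(severity_index({"fatalities": 0, "damage_usd": 9_000_000_000}))
```

The point of the sketch is only that a combined index can distinguish events like the Fort McMurray fire and the Alberta flood, which a fatalities-only scale cannot.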
Even using one impact factor, this simple universal system that incorporates all types of natural disasters (rather than the variety of unrelated scales for specific disasters) is more informative and consistent for assessing severity. The boundaries of the levels are clearly defined. Therefore, an overall picture of the disaster continuum is available using the UDSCS. In addition, the UDSCS links the disaster severity matrices because it serves as a bridge between quantitative and qualitative techniques. This research was completed mostly prior to the coronavirus pandemic in 2019–2020 (COVID-19); therefore, COVID-19 is not discussed in detail in this paper except in this paragraph. COVID-19 is an acute respiratory infectious disease that affects humans and some animals, and it is caused by the 2019 novel coronavirus (2019-nCoV); the first patient to be infected is unknown (Zheng et al. 2020). However, it first appeared in Wuhan, China in December 2019. During the initial phase of this virus, Chinese doctors and scientists issued warnings of a global pandemic (Huang et al. 2020). The World Health Organization (WHO) announced that the outbreak of COVID-19 is a global pandemic on 12 March 2020 (World 2020). Different countries have used different methods to control the spread of COVID-19. Tragically, this outbreak rapidly escalated from endemic to epidemic within a few days and from epidemic to pandemic within a few months (Centers 2020), and numerous new COVID-19 cases are reported daily around the world. As of 08 February 2021, there have been more than 106.7 million confirmed cases and more than 2.3 million fatalities reported globally (Johns 2021). According to the current numbers, COVID-19 is categorized as a Catastrophe Type 2 (UDSCS 7) event.

The novel Universal Disaster Severity Classification Scheme (UDSCS) is developed to assess the impact of any uncontrollable forces of nature regardless of disaster type, place, or time.
This universal severity classification system is applicable to all stakeholders, such as civilians, emergency responders, disaster managers, relief agencies, all levels of government, NGOs, insurance managers/estimators, reporters, media, database/information managers, academics, researchers, and policy makers. Therefore, it is expected to create a universal standard severity measurement system and, most importantly, generate a common communication platform to describe the impact of disasters, ensuring mutual understanding across the globe. A nation's ability to prepare for and manage extreme global disasters that affect more than one country will improve if there is mutual understanding among different countries' emergency management systems at all levels. By selecting the appropriate terms for the levels and naming the categories using plain language to describe the magnitude of a disaster, the UDSCS is expected to allow for easier management at all levels. Moreover, combining these terms with quantitative techniques gives clear boundaries and guidelines, and combining these terms with the color coding scheme enables easy adaptation to any language, country, or culture. The color coding system is helpful to some people working or involved in disaster recovery who are not literate or cannot understand the local language or dialect (if working in foreign regions). Therefore, the definitions and colors together ensure broader communication between people and organizations. The UDSCS explains the disaster continuum. Using this universal system, the impact of a broad range of natural disasters that occur anywhere in the world at any time can be described, measured, compared, assessed, and ranked both quantitatively and qualitatively. The UDSCS uses a color coding scheme and disaster terminology to describe disasters qualitatively, and it uses severity levels and impact factor boundaries to assess disasters quantitatively using the rating scale 0–10 to rank disasters.
Further, it uses the probability of occurrence of extreme disasters to predict the impact of any natural disaster. Most importantly, the UDSCS is a single common measurement for all types of natural disasters because it integrates colors, words, impact factors, and severity level rank. The proposed severity scheme will improve communication and understanding of disaster risks, which aligns with the priority of the Sendai Framework for Disaster Risk Reduction 2015–2030. Additionally, the UDSCS is a simple scientific instrument. The selected descriptive terms, impact factors to measure severity, and proposed ranges are based on data and are statistically robust. Furthermore, the UDSCS will avoid inconsistencies and, more importantly, will connect severity metrics to generate a clear understanding of the degree of an emergency and the potential impacts. Lastly, qualitative and quantitative techniques are integrated to produce management and size measurement systems, respectively.

Future extensions

This is an ongoing research project to develop a multidimensional UDSCS to understand the disaster continuum. The scope of this paper is to introduce an initial UDSCS that can be used to compare the impact of any type of natural disaster both qualitatively and quantitatively. When developing quantitative measures (in Step 4), we considered only one impact factor, fatalities, to develop the initial UDSCS. However, using the initial scale with one factor does not capture all aspects of an impact, as noted previously. Therefore, an advanced multidimensional scale that combines all impact factors using a disutility function needs to be developed.

References

Alberta Emergency Management Agency (AEMA) (2015) Making communities more resilient incident management teams and regional partnerships. AEMA. http://www.aema.alberta.ca/documents/ema/D5_Incident_Management_Teams_and_Regional_Partnerships.pdf.
Accessed 29 Oct 2015
Below R, Wirtz A, Guha-Sapir D (2009) Disaster category classification and peril terminology for operational purposes. Centre for Research on the Epidemiology of Disasters, Brussels, Belgium. http://hdl.handle.net/2078.1/178845
Brink Editorial Staff (2019) The 10 most costly natural disasters of the century. Environment, Brink the edge of risk. https://www.brinknews.com/the-10-most-costly-natural-disasters-of-the-century/. Accessed 10 Nov 2019
Brooks HE (2013) Estimating the distribution of severe thunderstorms and their environments around the world. International Conference on Storms, The National Oceanic and Atmospheric Administration. http://www.nssl.noaa.gov/users/brooks/public_html/papers/brisbane.pdf. Accessed 24 Jun 2016
Caldera HJ (2017) Analysis and classification of natural disasters. Dissertation, University of Calgary, Canada. https://doi.org/10.11575/PRISM/24811
Caldera HJ, Wirasinghe SC, Zanzotto L (2018) Severity scale for tornadoes. Nat Hazards 90(3):1051–1086. https://doi.org/10.1007/s11069-017-3084-z
Caldera HJ, Wirasinghe SC (2014) Analysis and classification of volcanic eruptions. In: Rapp RR, Harland W (eds) The proceedings of the 10th Annual Conference of the International Institute for Infrastructure Renewal and Reconstruction, West Lafayette, Indiana, pp 128–133. https://doi.org/10.5703/1288284315372
Caldera HJ, Wirasinghe SC, Zanzotto L (2016a) NDM-528: An approach to classification of natural disasters by severity. In the proceedings of the 5th International Natural Disaster Mitigation Specialty Conference, Annual Conference of the Canadian Society for Civil Engineering, London, Canada, NDM-528, pp 1–11. https://ir.lib.uwo.ca/csce2016/London/NaturalDisasterMitigation/20/
Caldera HJ, Ebadi O, Salari M, Wang L, Ghaffari M, Wirasinghe SC (2016b) The severity classification for tsunamis based on fatality analysis. In the proceedings of the 12th Annual Conference of the International Institute for Infrastructure Renewal and Reconstruction, Kandy, Sri Lanka, pp 1–9
Camerer CF, Kunreuther H (1989) Decision processes for low probability events: policy implications. J Policy Anal Manag 8(4):565–592
Centers for Disease Control and Prevention (CDC) (2020) Identifying the outbreak source. https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/about-epidemiology/identifying-source-outbreak.html. Accessed 09 Dec 2020
Alberta Emergency Management Agency Provincial Operations Centre (2020) IMT-Incident management team. CAN-TF2 Alberta. http://www.cantf2.com/imt-incident-management-team. Accessed 15 Nov 2020
Centre for Research on the Epidemiology of Disasters (2013) EM-DAT: International disaster database. www.emdat.be
Coles S (2001) An introduction to statistical modeling of extreme values. Springer, London
Colton T (1974) Statistics in medicine. Little, Brown, Boston
Cresswell J (2009) The Oxford dictionary of word origins. Oxford University Press, Oxford
de Boer J (1990) Definition and classification of disasters: Introduction of a disaster severity scale. J Emerg Med 8(5):591–595
de Boer J (1997) Tools for evaluating disasters: preliminary results of some hundreds of disasters. Euro J Emerg Med 4(2):107–110
Durage SW, Kattan L, Wirasinghe SC, Ruwanpura JY (2014) Evacuation behaviour of households and drivers during a tornado: analysis based on a stated preference survey in Calgary, Canada. Nat Hazards 71(3):1495–1517
EM-DAT (2021) The EM-DAT glossary. The international disasters database. Centre for Research on the Epidemiology of Disasters. https://www.emdat.be/Glossary. Accessed 09 Feb 2021
Esfeh MA, Caldera HJ, Heshami S, Moshahedi N, Wirasinghe SC (2016) The severity of earthquake events—statistical analysis and classification. Int J Urban Sci 20(sup1):4–24. https://doi.org/10.1080/12265934.2016.1138876
Eshghi K, Larson RC (2008) Disasters: lessons from the past 105 years. Disaster Prev Manag 17(1):62–82. https://doi.org/10.1108/09653560810855883
Gad-el-Hak M (2008a) The art and science of large-scale disasters. In: Gad-el-Hak M (ed) Large-scale disasters. Cambridge University Press, New York, pp 5–68
Gad-el-Hak M (2008b) Introduction. In: Gad-el-Hak M (ed) Large-scale disasters. Cambridge University Press, New York, pp 1–4
Government of Alberta (2021) Making our communities and province more resilient. https://www.alberta.ca/assets/documents/making-our-communities-more-resilient.pdf. Accessed 09 Feb 2021
Grossi P, Kunreuther H, Windeler D (2005) An introduction to catastrophe models and insurance. In: Grossi P, Kunreuther H (eds) Catastrophe modeling: a new approach to managing risk. Catastrophe Modeling, vol 25. Springer, Boston, MA, pp 23–42. https://doi.org/10.1007/0-387-23129-3_2
Hasani S, El-Haddadeh R, Aktas E (2014) A disaster severity assessment decision support tool for reducing the risk of failure in response operations. In: Brebbia CA (ed) Risk analysis IX. Wessex Institute of Technology, UK, Transactions on Information and Communication Technologies, vol 47, pp 369–380. https://doi.org/10.2495/RISK140311
Hristidis V, Chen SC, Li T, Luis S, Deng Y (2010) Survey of data management and analysis in disaster situations. J Syst Softw 83(10):1701–1714
Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, Zhang L, Fan G, Xu J, Gu X, Cheng Z (2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet 395(10223):497–506. https://doi.org/10.1016/S0140-6736(20)30183-5
Johns Hopkins University (2021) COVID-19 dashboard. Center for Systems Science and Engineering, Johns Hopkins University. https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6. Accessed 09 Feb 2021
Kelman I (2008) Addressing the root causes of large-scale disasters. In: Gad-el-Hak M (ed) Large-scale disasters. Cambridge University Press, New York, pp 94–119
Kettley S (2020a) Asteroid warning: NASA tracks a 4KM asteroid approach—could end civilisation if it hits. Express. https://www.express.co.uk/news/science/1249990/Asteroid-warning-NASA-tracks-4KM-killer-asteroid-hit-Earth-end-civilisation-asteroid-news. Accessed 29 Sep 2020
Kettley S (2020b) Yellowstone volcano: geologist estimates 'five billion' death toll if Yellowstone blows. Express. https://www.express.co.uk/news/science/1293441/Yellowstone-volcano-eruption-kill-five-billion-USGS-news. Accessed 29 Sep 2020
Kotz S, Nadarajah S (2000) Extreme value distributions: theory and applications. Imperial College Press, London
Lee SH, Urrutia JL (1996) Analysis and prediction of insolvency in the property-liability insurance industry: a comparison of logit and hazard models. J of Risk and Insurance 63(1):121–130
Löw P, Wirtz A (2010) Structure and needs of global loss databases of natural disasters. International Disaster and Risk Conference, Davos, Switzerland, pp 1–4
Meyer RJ (2006) Why we under-prepare for hazards? In: Daniels RJ, Kettl DF, Kunreuther H (eds) On risk and disaster: lessons from hurricane Katrina. University of Pennsylvania Press, Philadelphia, Pennsylvania, pp 153–173
MunichRE (2013) NatCatSERVICE: download center for statistics on natural catastrophes. http://www.munichre.com/en/reinsurance/business/non-life/georisks/natcatservice/default.aspx. Accessed 26 Mar 2013
National Geophysical Data Center (2013a) Global significant volcanic eruptions database. National Geophysical Data Center/World Data Service: National Oceanic and Atmospheric Administration National Centers for Environmental Information. https://doi.org/10.7289/V5JW8BSH. Accessed 15 Jul 2013
National Research Council (2007) Tools and methods for estimating populations at risk from natural disasters and complex humanitarian crises. The National Academies Press, Washington, DC. https://doi.org/10.17226/11895
Newhall CG, Self S (1982) The volcanic explosivity index (VEI): an estimate of explosive magnitude for historical volcanism. J Geophys Res 87(C2):1231–1238
National Oceanic and Atmospheric Administration (2013b) Storm events database. National Climatic Data Center. http://www.ncdc.noaa.gov/stormevents/. Accessed 15 Jul 2013
Olsen GR, Carstensen N, Hoyen K (2003) Humanitarian crisis: What determines the level of emergency assistance? Media coverage, donor interest and the aid business. Disasters 27(2):109–126
Oxford University (2010) Oxford dictionary of English. Stevenson A (ed). Oxford University Press, Oxford
Oxford University (2014) The Oxford English dictionary. Oxford University Press. http://www.oed.com/. Accessed 21 Nov 2014
Pappas S (2018) Top 11 deadliest natural disasters in history. Live Science. https://www.livescience.com/33316-top-10-deadliest-natural-disasters.html. Accessed 02 Apr 2018
Penuel KB, Statler M, Hagen R (2013) Encyclopedia of crisis management. SAGE Publications Inc, Thousand Oaks, Calif
Philippine Atmospheric Geophysical and Astronomical Services Admin (2020) Color code of the PAGASA rainfall warning signals. The Philippine Atmospheric, Geophysical and Astronomical Services Admin. (PAGASA). https://pinoyjuander.com/blog/2018/08/color-code-of-the-pagasa-rainfall-warning-signals/. Accessed 16 Jan 2020
Public Safety Canada (2017) An emergency management framework for Canada—3rd edition. Ministers Responsible for Emergency Management, Emergency Management Policy and Outreach Directorate, Public Safety Canada. https://www.publicsafety.gc.ca/cnt/rsrcs/pblctns/2017-mrgnc-mngmnt-frmwrk/index-en.aspx. Accessed 09 Feb 2021
Rajagopal S, Treanor J (2007) Pandemic (avian) influenza. Seminars Respir Critical Care Med 28(2):159–170. https://doi.org/10.1055/s-2007-976488
Reiss RD, Thomas M (2007) Statistical analysis of extreme values: with applications to insurance, finance, hydrology and other fields. Birkhäuser Basel, Basel
Ritchie H, Roser M (2014) Natural disasters. Our World in Data. https://ourworldindata.org/natural-disasters. Accessed 30 Nov 2019
Rodríguez TJ, Vitoriano T, Montero B, Kecman J (2011) A disaster-severity assessment DSS comparative analysis. OR Spectr 33(3):451–479. https://doi.org/10.1007/s00291-011-0252-5
Rutherford WH, de Boer J (1983) The definition and classification of disasters. Injury 15(1):10–12. https://doi.org/10.1016/0020-1383(83)90154-7
Saunders-Hastings PR, Krewski D (2016) Reviewing the history of pandemic influenza: understanding patterns of emergence and transmission. Pathogens 5(4):66. https://doi.org/10.3390/pathogens5040066
Schenk A (1999) Hurricane Mitch and disaster relief: the politics of catastrophe. ATC 78. Against the Current 13(6):7. https://www.marxists.org/history/etol/newspape/atc/1752.html. Accessed 16 Dec 2020
Spevack M (1973) The Harvard concordance to Shakespeare. Belknap Press of Harvard University Press, Cambridge
Tanguy JC, Ribière C, Scarth A, Tjetjep WS (1998) Victims from volcanic eruptions: a revised database. Bull of Volcanology 60(2):137–144. https://doi.org/10.1007/s004450050222
Tierney K (2008) Hurricane Katrina: catastrophic impact and alarming lessons. In: Quigley JM, Rosenthal LA (eds) Risking house and home: disasters, cities, public policy. Berkeley Public Policy Press / Institute of Governmental Studies Publications, Berkeley, California, pp 119–136
United Nations (2015) Sendai framework for disaster risk reduction 2015–2030. The 3rd United Nations World Conference on Disaster Risk Reduction, Sendai, Japan, March 2015. http://www.preventionweb.net/files/43291_sendaiframeworkfordrren.pdf. Accessed 21 Nov 2016
United States Fire Administration (2020) An overview of incident management teams. The United States Department of Homeland Security, The Federal Emergency Management Agency. https://www.usfa.fema.gov/training/imt/imt_overview.html. Accessed 8 Nov 2019
Walsh B (2019) A giant volcano could end human life on earth as we know it. The New York Times. https://www.nytimes.com/2019/08/21/opinion/supervolcano-yellowstone.html. Accessed 30 Nov 2020
Weather Prediction Center (2019) Weather prediction center excessive rainfall risk categories. National Oceanic and Atmospheric Administration National Weather Service, Weather Prediction Center. https://twitter.com/nwswpc/status/1149122747345199105. Accessed 18 Nov 2019
Wickramaratne S, Ruwanpura J, Ranasinghe U, Durage SW, Adikariwattage V, Wirasinghe SC (2012) Ranking of natural disasters in Sri Lanka for mitigation planning. Int J Disaster Resil Built Environ 3(2):115–132. https://doi.org/10.1108/17595901211245198
Wirasinghe SC, Caldera HJ, Durage SW, Ruwanpura JY (2013a) Preliminary analysis and classification of natural disasters. In the proceedings of the 9th Annual Conference of the International Institute for Infrastructure Renewal and Reconstruction, Brisbane, Australia, pp 150–160. https://digitalcollections.qut.edu.au/2213/
Wirasinghe SC, Caldera HJ, Durage SW, Ruwanpura JY (2013b) Comparative analysis and classification of natural disasters, catastrophes and calamities. In the proceedings of the World Engineering Summit in the Institution of Engineers, Singapore, p 7
WiscNews (2018) Remembering the Indian ocean tsunami catastrophe of 2004. WiscNews. https://www.wiscnews.com/news/world/remembering-the-indian-ocean-tsunami-catastrophe-of-2004/collection_5e5a37e2-643c-50b0-b752-0491210b98eb.html#1. Accessed 26 Dec 2018
Wood D (2016) Calgary emergency management agency releases current list of top 10 hazards and risks in Calgary. Calgary Herald. http://calgaryherald.com/news/local-news/calgary-emergency-management-agency-releases-current-list-of-top-10-hazards-and-risks-in-calgary. Accessed 3 Apr 2016
World Bank (2019a) Population total. World Bank Open Data. http://data.worldbank.org/indicator/SP.POP.TOTL?end=2015&start=1960&view=chart. Accessed 05 Oct 2020
World Bank (2019b) GDP (current US$). World Bank national accounts data. http://data.worldbank.org/indicator/NY.GDP.MKTP.CD. Accessed 05 Oct 2020
World Health Organization (2020) World Health Organization announces COVID-19 outbreak a pandemic. World Health Organization, Geneva. http://www.euro.who.int/en/health-topics/health-emergencies/coronavirus-covid-19/news/news/2020/3/who-announces-covid-19-outbreak-a-pandemic. Accessed 12 March 2020
Yew YY, Castro Delgado RJ, Arcos González P, Heslop D (2019) The Yew disaster severity index: a new tool in disaster metrics. Prehosp Disaster Med 34(1):98–103
Zheng YL, He YK, Ma XQ, Gao ZC (2020) Feasibility of coronavirus disease 2019 eradication. Chin Med J 133(12):1387–1389. https://doi.org/10.1097/CM9.0000000000000936

Acknowledgements

This work was funded by the Natural Sciences and Engineering Research Council of Canada, Alberta Innovates—Technology Futures, Alberta Motor Association, Schulich School of Engineering, University of Calgary, Catastrophe Indices and Quantification Incorporated, and the Canadian Risk Hazard Network. The authors would like to thank the Centre for Research on the Epidemiology of Disasters (CRED) for sharing data in the EM-DAT international disasters database and the National Oceanic and Atmospheric Administration (NOAA) National Centers for the publicly accessible data in the Storm Events Database and Global Significant Volcanic Eruptions Database. The authors also thank Professor Emeritus R. B. Bond for his guidance, input, and comments on the disaster terminology section of this paper.
Funding
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada, Alberta Innovates—Technology Futures, Alberta Motor Association, University of Calgary, Schulich School of Engineering, Catastrophe Indices and Quantification Incorporated, and the Canadian Risk Hazard Network.

Author information
Department of Civil Engineering, University of Calgary, Calgary, Alberta, Canada: H. Jithamala Caldera & S. C. Wirasinghe. Correspondence to H. Jithamala Caldera.

Conflicts of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.

Data availability
The data that support the findings of this research are as follows: all types of natural disaster data from EM-DAT, the international disasters database of the Centre for Research on the Epidemiology of Disasters (CRED), Brussels, Belgium (www.emdat.be); tornado data from the Storm Events Database, National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (http://www.ncdc.noaa.gov/stormevents/); and volcanic eruption data from the National Geophysical Data Center/World Data Service (NGDC/WDS) NCEI/WDS Global Significant Volcanic Eruptions Database, NOAA National Centers for Environmental Information (https://doi.org/10.7289/V5JW8BSH).

Cite this article
Caldera, H.J., Wirasinghe, S.C. A universal severity classification for natural disasters. Nat Hazards (2021). https://doi.org/10.1007/s11069-021-05106-9

Keywords
Universal disaster severity classification scheme; Global disaster severity scale; Universal standard severity index system; Extreme natural events; Disaster definitions
Schistosoma haematobium infection status and its associated risk factors among pregnant women in Munyenge, South West Region, Cameroon following scale-up of communal piped water sources from 2014 to 2017: a cross-sectional study

Godlove Bunda Wepnje, Judith Kuoh Anchang-Kimbi, Vicky Daonyle Ndassi, Leopold Gustave Lehman & Helen Kuokuo Kimbi

Abstract

Background
In 2014, a study in Munyenge revealed a high prevalence of urogenital schistosomiasis (UGS) among pregnant women. This study investigated the prevalence and risk factors of UGS in pregnancy following scale-up of piped water sources from 2014 to 2017. Secondly, we compared stream usage, stream contact behaviour, infection rate and intensity with the findings of 2014.

Methods
Consenting pregnant women reporting for antenatal care (ANC) in the different health facilities were enrolled consecutively between November 2016 and January 2018. Information on age, gravidity status, residence, marital status, educational level, occupation, household water source, frequency of contact with water and stream activities was obtained using a semi-structured questionnaire. Urine samples were examined for the presence of microhaematuria and S. haematobium ova using test strip and filtration/microscopy methods, respectively. Data were analysed using univariate and multivariate regression analyses, and relative risk reductions were calculated.

Results
Of the 368 women enrolled, 22.3% (82) were diagnosed with UGS. Marital status (single) (aOR = 2.24, 95% CI: 1.04–4.79), primary-level education (aOR = 2.0; 95% CI: 1.04–3.85) and domestic activity and bathing in the stream (aOR = 3.3; 95% CI: 1.83–6.01) increased the risk of S. haematobium infection. Meanwhile, fewer visits (< 3 visits/week) to the stream (aOR = 0.35, 95% CI = 0.17–0.74) reduced exposure to infection. Piped water usage was associated with reduced stream usage and eliminated the risk of infection among women who used safe water only.
Compared with the findings of 2014, stream usage (RRR = 23, 95% CI: 19–28), frequency of contact (≥ 3 visits/week) (RRR = 69, 95% CI: 59–77) and intensity of contact with water (RRR = 37, 95% CI = 22–49) had reduced. Similarly, we observed a decrease in infection rate (RRR = 52, 95% CI = 40–62) and in the prevalence of heavy egg intensity (RRR = 71, 95% CI = 53–81).

Conclusions
Following increased piped water sources in Munyenge, S. haematobium infection has declined due to reduced stream contact. This has important implications for the control of UGS in pregnancy.

Background

Schistosomiasis is a chronic parasitic disease caused by blood flukes of the genus Schistosoma and transmitted by snails found in fresh water bodies that have been contaminated by Schistosoma eggs. People become infected during dermal contact with water containing schistosome cercariae. In endemic areas, where there is a lack of adequate water supply, poverty, ignorance and poor hygienic practices, children, women, fishermen and farmers are the groups at high risk of schistosomiasis [1,2,3,4]. Women, in particular, are more likely to be exposed to infection during activities carried out in streams, such as domestic activities including washing clothes, fetching water and bathing [5, 6].

It is estimated that approximately 40 million women of childbearing age are infected with schistosomiasis, with almost 10 million infected pregnant women in Africa [1, 7]. Increasingly, findings from several studies suggest that schistosomiasis in pregnancy is an area of major public health concern [6, 8,9,10,11,12,13]. Schistosoma haematobium is prevalent in Africa and the Middle East, where the infection causes significant morbidity and mortality when compared with S. mansoni.
Schistosome eggs deposited in the wall of the urinary bladder [14] release highly inflammatory antigens [15], triggering granuloma formation, a range of urothelial abnormalities and related signs such as haematuria and dysuria, lesions of the bladder, kidney failure and bladder cancer [16]. Several studies have reported associations between UGS and HIV [17,18,19], and increasing evidence supports that it is a plausible risk factor for HIV acquisition [20]. In pregnancy, UGS has been associated with severe anaemia [1], particularly in co-infection with P. falciparum [6], maternal mortality [8, 21], premature birth and low birth weight [13, 22].

Drug-based control of morbidity related to infection has been the primary WHO strategy for schistosomiasis control, with treatment given mainly through community- and school-based mass treatment with praziquantel [23]. Older age groups, including pregnant women, are often left untreated. Despite recent evidence of the safety of praziquantel in human pregnancy, barriers to adopting policies for such treatment still remain [24]. Consequently, affected pregnant women can serve as reservoirs for infection, bringing the distribution of the disease back to pre-control levels over time. More so, morbidity that builds up in untreated pregnant women may result in poor pregnancy outcomes [25].

To achieve sustainable control (elimination or eradication of schistosomiasis), improvements in water, sanitation and hygiene infrastructure and modification of risk behaviour are necessary to prevent transmission of the schistosome parasite [26,27,28]. The provision of a safe water supply is one important approach to reduce the need for contact with contaminated water bodies and diminish the risk of schistosomiasis transmission [29]. A study carried out in 2014 in Munyenge, an endemic focus located in the Mount Cameroon area, revealed that S.
haematobium infection is common among pregnant women and that regular contact with streams and long duration in contaminated water sources increased the risk of infection. We suggested that provision of piped water and health education would decrease disease incidence and intensity [6]. More so, a recent study by Ebai et al. [30] in some villages neighbouring Munyenge showed that access to piped water protected individuals living in these communities from UGS. Thus, following the scale-up of piped water supply from 2014 to 2017 in Munyenge, this study investigated the prevalence and risk factors of UGS in pregnancy for an epidemiological update. Secondly, to assess the impact of increased piped water supply, we compared stream usage, stream contact behaviour and S. haematobium infection rate and intensity with the findings of 2014.

Methods

Study area
The study was carried out in Munyenge, a village located in the Bafia health area about 27 km from Muyuka town, South West Region, Cameroon. The Bafia health area is an endemic focus for UGS in the Mount Cameroon area. This health area is made up of three rural communities: Ikata, Bafia, and Munyenge. Munyenge has a heterogeneous population of about 13,127 inhabitants (Delegation of Public Health, South West Region, 2017), with farming as the principal occupation. The characteristics of the study area have been described elsewhere [6, 31]. This community has four health centres, of which three provide antenatal care and delivery services for the local population.

In Munyenge, piped water sources increased from three to seven between 2014 and 2017 (Figs. 1 and 2). Nonetheless, access to safe water is still poor due to long distances to improved water sources and water user fees that influence water use patterns and the health benefits offered by improved water sources (personal observation). Consequently, the local population still makes frequent use of the streams for their daily needs.
Fig. 1 Map showing the distribution of piped water sources in Munyenge between 2014 and 2017. (Source: Google Satellite Maps (ArcGlobe, ESRI®), adapted to show available health centres and communal piped water sources)

Fig. 2 a: snail-infested stream; b: some stream contact activities; c & d: piped water sources

Study design
This was a cross-sectional study involving pregnant women, who were enrolled consecutively between November 2016 and January 2018.

Study population
Pregnant women in their third trimester who reported for antenatal care (ANC) at any of the three health centres (Government Integrated Health Centre (HC), Banga Annex HC, Trans African HC) were enrolled in the study. Prior to enrolment, each participant provided informed consent.

Population sample size determination
The minimum sample size was computed using the formula by Bryan [32], based on the S. haematobium infection prevalence of 46.8% in pregnancy reported from an epidemiological baseline study in Munyenge by Anchang-Kimbi et al. [6]:

$$ n = \frac{z_{\alpha/2}^{2}\, p q}{d^{2}} $$

where n is the minimum sample size required; z = 1.96 is the test statistic at the desired level of significance; p = 0.468 is the proportion with UGS in pregnancy; q = 1 − p is the proportion negative for UGS; and d is the acceptable margin of error.

$$ n = \frac{(1.96)^{2} \times 0.468 \times 0.532}{(0.05)^{2}} \approx 382 $$

A sample size of at least 382 was determined to be adequate to detect a 5% change in the prevalence of UGS. However, due to logistics, we had a sample size of 368 pregnant women, which is well above 95% of the expected sample size calculated.
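As a cross-check, the sample-size formula above can be evaluated in a few lines. The sketch below (function and variable names are my own, not from the paper) reproduces the n ≈ 382 figure:

```python
def min_sample_size(p: float, d: float = 0.05, z: float = 1.96) -> float:
    """Sample size for estimating a proportion p within margin of
    error d at 95% confidence (z = 1.96): n = z^2 * p * (1 - p) / d^2."""
    q = 1.0 - p
    return (z ** 2) * p * q / (d ** 2)

# Baseline UGS prevalence of 46.8% from the 2014 Munyenge study
n = min_sample_size(0.468)  # ~382.6; the paper rounds this to 382
```

The exact value is about 382.6; in practice one would round up, but the paper reports 382.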
Administration of questionnaire
The study participants were interviewed by a field researcher using a validated questionnaire to record socio-demographic information (age, residence), socio-economic indicators (marital status, educational level and occupation), gynaecologic/obstetric history (gravidity, gestational age) and questions related to schistosomiasis: household water source (stream or piped water), frequency of contact with the water source, and stream activities (domestic chores and bathing, as measures of intensity of contact with water). In addition, the questionnaire included questions on knowledge of schistosomiasis aetiology, transmission, clinical manifestations, prevention and control. Knowledge of UGS was scored on 4 points as described by Folefac et al. [33]. Briefly, one point was allocated for a correct response and no point for an "I don't know" or wrong answer. A knowledge score of < 2, 2–3 and > 3 was considered poor, average and good, respectively.

Sample collection and processing
About 20 ml of terminally voided urine was collected from each consenting participant into a sterile, dry, leak-proof, transparent, pre-labelled urine bottle. Women were instructed to collect urine between 10 am and 2 pm. Urine samples were immediately processed and analysed at the laboratory unit of the health facility. Haematuria was determined by visual observation of urine samples and urinalysis reagent strips (Mission® Expert-USA). Schistosoma haematobium eggs were recovered and identified using the filtration technique and microscopy, respectively, as reported elsewhere [6]. A pregnant woman was diagnosed with UGS when she was positive by microscopic examination of urine filtrate and/or urine reagent strip. The infection intensity was classified as light (< 50 eggs/10 ml of urine) or heavy (≥ 50 eggs/10 ml of urine) as defined by the World Health Organization [7].
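The two categorical codings above (the 4-point knowledge score and the WHO egg-count cut-off) are simple threshold rules. A minimal sketch, with function names of my own choosing:

```python
def knowledge_category(score: int) -> str:
    """4-point knowledge score (one point per correct answer):
    < 2 -> poor, 2-3 -> average, > 3 -> good."""
    if score < 2:
        return "poor"
    if score <= 3:
        return "average"
    return "good"

def egg_intensity(eggs_per_10ml: float) -> str:
    """WHO classification for S. haematobium: >= 50 eggs/10 ml of
    urine is a heavy infection; anything below that is light."""
    return "heavy" if eggs_per_10ml >= 50 else "light"
```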
Data management and statistical analysis
Questionnaires were checked for the correct use of codes and completeness. Data were coded, validated and analysed using SPSS version 22.0 (SPSS, Inc., Chicago, IL, USA). The statistical tests performed included Pearson's chi-square test for comparison of proportions. Bivariate analysis was performed to identify the factors associated with S. haematobium infection to be included in the multivariate logistic regression for analysis of risk factors for UGS. Variables that had a p-value < 0.20 in bivariate analysis, or biological plausibility, were included in the multivariate logistic regression model. In order to assess the impact of increased piped water sources on infection rate and intensity, relative risk reductions were calculated using a Microsoft Excel confidence interval calculator implementing the Newcombe-Wilson method [34, 35]. A p-value < 0.05 was considered significant.

Results

Characteristics of the study population
A total of 368 pregnant women were enrolled into the study. The mean age of the study participants was 25 ± 5.8 years (range: 15–42 years). The characteristics of the study participants are shown in Table 1. The majority of the women were older (> 25 years), married and had attained at least a secondary level of education. Single women were predominantly younger (≤ 25 years) (72.5%; 71/98), while more married women were older (> 25 years) (53.3%; 144/270); the difference was statistically significant (χ2 = 24.17; P < 0.001). A greater proportion of the women were housewives.

The women obtained their water from stream and piped water sources for personal and domestic purposes. However, stream usage was predominant (76.1%). Among the 368 women, 16.2% (54) reported piped water as their only source of water, 53.3% (178) had the stream as their only source of water and 30.5% (102) used both piped and stream water.
No association was found between socio-demographic factors and water source type, but women who used piped water (36.4%; 102/280) were less likely (aOR = 0.34, 95% CI: 0.21–0.57, p < 0.001) to use the stream when compared with those who reported no access to piped water (63.6%; 178/280). Generally, women made fewer contacts (< 3 times/week) with the stream. Although not statistically significant (χ2 = 2.94; P = 0.086), those who reported the stream as their only source of water made more contacts (≥ 3 times/week) (75%; 33/44) than those who used both sources (25%; 11/44). Among the women who reported the stream as a source of water, the majority visited the stream for domestic purposes and one-third of the women equally reported bathing (Table 1). With regard to knowledge of UGS, a greater percentage of the women had a poor perception of the disease symptoms and their association with water contact.

Table 1 Characteristics of the study participants

The prevalence and risk factors of S. haematobium infection
Eighty-two (22.3%; 95% CI: 18.3–26.8) of the 368 pregnant women enrolled were positive for UGS. Excretion of ova in urine was recorded for 72 (19.6%; 95% CI: 15.8–22.0) women, among whom 23 (32%) had heavy (≥ 50 eggs/10 ml of urine) infection, while 49 (68%) had light (< 50 eggs/10 ml of urine) infection. The prevalence of microhaematuria was 14.7% (54/368), of which 2.7% (10) of the women were positive for microhaematuria only. Using microscopic urine examination as the gold standard, the specificity and sensitivity of microhaematuria in the diagnosis of S. haematobium infection were 96.6% (95% CI: 93.9–98.2) and 61.1% (95% CI: 49.6–71.5), respectively.

In bivariate analysis, there was an association between the prevalence of infection and marital status (P < 0.001), water source type (P < 0.001), stream activity (P < 0.001) and frequency of contact with the stream (P < 0.001). All women who had piped water as their only source of water were negative for UGS (Table 2).
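The sensitivity and specificity quoted above can be re-derived from the reported counts (368 women, 72 microscopy-positive, 54 strip-positive, of whom 10 were strip-positive only). A sketch using a standard 2×2 layout, with variable names of my own choosing:

```python
n_total = 368        # women tested
micro_pos = 72       # microscopy-positive (gold standard)
strip_pos = 54       # microhaematuria (strip) positive
strip_only = 10      # strip-positive but microscopy-negative

tp = strip_pos - strip_only    # 44: positive by both methods
fp = strip_only                # 10: strip-positive, microscopy-negative
fn = micro_pos - tp            # 28: microscopy-positive missed by the strip
tn = n_total - micro_pos - fp  # 286: negative by both

sensitivity = tp / (tp + fn)   # 44/72   ~= 0.611
specificity = tn / (tn + fp)   # 286/296 ~= 0.966
```

Both figures agree with the 61.1% and 96.6% reported in the text.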
On the other hand, no association was seen between maternal age, gravidity status, educational level, occupation and infection (Table 2). All four factors associated with UGS were retained by the multiple regression model analysis (Table 2). Maternal age and educational level were included in the final model based on the biological plausibility of these factors.

Table 2 Risk factors associated with S. haematobium infection among pregnant women in Munyenge

Single status increased the risk of infection by 2.2 times (95% CI: 1.04–4.79) when compared with married status. Equally, women with a primary-level education were at higher risk (aOR = 2.0; 95% CI: 1.04–3.85) of having UGS than those with at least a secondary-level education. Furthermore, higher odds (aOR = 3.3; 95% CI: 1.83–6.01) of having UGS were identified among women who carried out both domestic activity and bathing in the stream. On the other hand, less frequent contact with water (< 3 times per week) (aOR = 0.35, 95% CI = 0.17–0.74) was associated with a decreased risk of infection.

Changes in stream usage and contact behaviour, S. haematobium infection rate and intensity after scale-up of piped water sources between 2014 and 2017
About 42% (42.4%; 156/368) of the study participants reported use of safe water after more communal piped water installations. In comparison with the reports of 2014, changes in stream usage (stream contact), frequency of contact with the stream, stream activity, and the prevalence and intensity of UGS were observed (Fig. 3 and Additional file 1). Two hundred and eighty women (76.1%) reported stream water contact during the period of study (Table 1), while in 2014, 99.2% of pregnant women used the stream as their main source of water. Stream usage was reduced by 23% (RRR = 0.23, 95% CI = 0.19–0.28). Equally, there was a decrease in the frequency of contact with water: compared with 2014, frequent visits (≥ 3 visits/week) to the stream were reduced by 69% (RRR = 0.69, 95% CI = 0.59–0.77) in 2017.
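The relative risk reductions above are point estimates comparing the 2014 and 2017 proportions. A minimal sketch of the calculation (the Newcombe-Wilson confidence intervals used in the paper are not reproduced here):

```python
def relative_risk_reduction(p_2014: float, p_2017: float) -> float:
    """RRR = (p_before - p_after) / p_before: the proportional
    decline in a prevalence between the two surveys."""
    return (p_2014 - p_2017) / p_2014

# Stream usage: 99.2% (2014) -> 76.1% (2017)
rrr_stream = relative_risk_reduction(0.992, 0.761)     # ~0.23
# Infection prevalence: 46.8% (2014) -> 22.3% (2017)
rrr_infection = relative_risk_reduction(0.468, 0.223)  # ~0.52
```

Both values match the 23% and 52% reductions reported in the text.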
Similarly, bathing activity (a measure of intense contact with water) in streams decreased by 37% (RRR = 0.37, 95% CI = 0.22–0.49). A decline in the prevalence and intensity of S. haematobium infection was observed among pregnant women. Infection rate decreased from 46.8% in 2014 to 22.3% in 2017 (RRR = 0.52, 95% CI: 0.40–0.62). With regard to infection intensity, cases with heavy infection decreased by 71% (RRR = 0.71, 95% CI = 0.53–0.81; P < 0.001) as well as the prevalence of light infection (RRR = 0.37, 95% CI = 0.14–0.54) (Additional file 1). Fig. 3 Changes in (a) prevalence of S. haematobium infection, (b) intensity of infection, (c) stream usage, (d) stream contact activity and (e) stream frequency/week between two cross-sectional surveys carried out in 2014 and 2017 (data for 2014 have been published by Anchang-Kimbi et al. [6]) Munyenge village is an established endemic focus for UGS [6, 30, 36]. The establishment of safe water is an essential prerequisite for schistosomiasis control in endemic areas since the prevention of schistosomiasis is achieved by reducing contact with Schistosoma-infested water [23]. The present study is a follow-up survey which reports on the prevalence and risk factors of S. haematobium infection among pregnant women living in Munyenge, following the scale-up of communal piped water sources between 2014 and 2017. The impact of increased piped water sources was evaluated by assessing changes in stream contact patterns, prevalence and intensity of infection. This study, carried out in Munyenge in 2017, revealed a 52% reduction in the prevalence of S. haematobium infection among pregnant women. In addition, we recorded a decrease in the number of cases of heavy infection. Interestingly, our study revealed that the use of piped water eliminated the risk of infection among pregnant women who completely stopped using stream water but had very little effect on those who reported partial use of safe water.
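The relative risk reductions quoted above follow directly from the proportions observed in the two surveys; a minimal check using the prevalences reported in the text:

```python
# Relative risk reduction (RRR) between the 2014 and 2017 surveys,
# recomputed from the proportions reported in the text.
def rrr(p_2014, p_2017):
    """RRR = (baseline risk - new risk) / baseline risk."""
    return (p_2014 - p_2017) / p_2014

infection = rrr(46.8, 22.3)    # UGS prevalence, % in each survey
stream_use = rrr(99.2, 76.1)   # proportion reporting stream use, %
print(round(100 * infection), round(100 * stream_use))  # 52 23
```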
Similarly, the protective role of access to piped water has been reported by Ebai et al. [30] in a survey carried out in the Ikata-Likoko area, which comprises communities neighbouring Munyenge in the Bafia health area. Equally, studies in other endemic areas have shown that safe water supplies were associated with significantly lower odds of schistosomiasis [29, 37–42]. In Brazil, a communal water supply was shown to reduce the prevalence of the disease [37]. In Egypt, even the partial use of safe water markedly lowered the prevalence of S. mansoni and S. haematobium [38]. A recent study by Tanser et al. [39] reported a UGS prevalence of 16.8% following scale-up of piped water in rural South Africa, which was markedly lower than the overall prevalence of 60–70% recorded in the same area thirty years earlier. Also, a cohort study carried out in the same area showed that children living in communities with a high coverage of piped water were eight times less likely to be infected relative to those living in areas with little or no access to piped water [39]. Introduction of safe water supplies into a community offers protection from schistosomiasis infection either directly or indirectly. It offers direct protection to individuals with access to safe water by reducing their contact with infested water bodies through household domestic water collection activities. Secondly, the scale-up of access to safe water will confer indirect protection to members of a community in a manner that is analogous to the concept of 'herd immunity'. Herd immunity confers protection through a reduction in the number of contacts that infected individuals have with open water bodies, leading to a decrease in the overall levels of intensity of infection in the surrounding community [43].
A water supply system that is either communal or household lowers the degree of contact with infested water in a community, and people change their behaviour after safe water becomes available [38, 41, 42, 44]. With increased piped water sources in Munyenge, stream usage, frequent contact with the stream and prolonged duration of water contact (bathing and domestic activities in the stream) decreased significantly by 23%, 69% and 37%, respectively. However, in the present study, piped water usage was associated with reduced stream usage and not changes in stream contact behaviour. The change in stream contact behaviour observed may be attributed to some level of awareness of UGS in the community, which may have led some pregnant women to shift bathing and washing from water bodies to stream banks or their homes. Despite the presence of more piped water sources, a majority of the pregnant women (76%) continue to use the streams for domestic purposes. Distance from home to the taps, the limited number of communal piped water sources for a dispersed population and the requirement for immediate payment for piped water may account for the reliance on the stream [41, 42, 45]. It will be interesting to assess the relationship between these factors and the distribution of S. haematobium infection in the study area. Although the burden of UGS has declined in Munyenge over the indicated three-year period, the prevalence of infection still remains high (22.3%) among pregnant women. Coupled with the high rate of stream usage, other factors predisposed these women to the risk of S. haematobium infection. Marital status influenced infection outcome: single women were twice as likely to be infected with S. haematobium as their married counterparts, irrespective of their age and gravidity status. The role of marital status in the risk of malaria parasite infection in pregnancy has also been reported [46].
Studies have shown that marriage has advantages for the health outcomes of individuals [47]. A spouse may improve economic well-being [48] as well as play an important role in monitoring and encouraging healthy behaviours [49]. Partner support may be important in limiting or preventing contact with infested water by the pregnant woman. Furthermore, infection was more common among individuals with a low level of education. This is in conformity with the findings of Lima e Costa et al. [44] and Bethony et al. [50] in Brazil, Khalid et al. [9] in Sudan, and Salawu and Odaibo [12] in Nigeria. Ugbomoiko et al. [51] suggested that education affects attitudes and behaviour. Individuals with low educational status are more likely to enter the stream barefoot and spend longer hours in water (exposing themselves to cercarial penetration) than their more educated counterparts. On the other hand, self-awareness of the disease may account for the reduced prevalence level observed among women with at least a secondary level of education. Consistent with our previous findings in Munyenge, bathing and domestic activities in infested waters predisposed pregnant women to infection. Both activities increase the intensity of contact with infested water [6, 44, 52]. These findings strongly suggest that the extension of more piped water sources in this endemic area will reduce the incidence of infection by reducing the need for intense or frequent contact with infested water. However, a productive and sustainable intervention cannot be achieved without adequate education [52, 53]. We suggest that educating women during antenatal clinic visits on the harmful effects of UGS and the local risk factors of infection will help reduce the frequency of water contact and thus infection risk in this endemic area. This study had a few limitations. First, it examined the short-term effect of piped water supply on water contact patterns and transmission of S. haematobium infection in Munyenge.
Studies have demonstrated that over the long term, water supply facilities had little impact on the overall prevalence and intensity of infection [39, 42]. Secondly, piped water usage and stream contact behaviour were based on self-reports and not on direct observation. Analysis of the spatial pattern of infection, observations of human contact at the stream and a questionnaire on water use give a better assessment of the impact of piped supply on human water contact [39, 41, 42]. Following the scale-up of communal piped water sources in Munyenge from 2014 to 2017, the prevalence of UGS among pregnant women has decreased significantly by 52%. Equally, a reduction in stream usage, frequency and intensity of contact with the stream was observed. The use of piped water reduced stream usage by pregnant women and eliminated the risk of UGS among those who completely avoided the stream. Single status, a low level of education and activities that prolong the duration of water contact predisposed the women to S. haematobium infection. Despite the increase in safe water sources, the majority of these women still depend on the natural source of water for their daily activities. It is obvious that expansion of piped water sources is important in interrupting stream contact, but this intervention will only be productive and sustainable if it is accompanied by health education activities directed towards avoiding water contact. ANC: Antenatal care clinic; HC: Health centre; RRR: Relative risk reduction; UGS: Urogenital schistosomiasis Friedman JF, Mital P, Kanzaria HK, Olds GR, Kurtis JD. Schistosomiasis and pregnancy. Trends Parasitol. 2007;23(4):159–64. Gryseels B. Schistosomiasis. Infectious Disease Clinics of North America. 2012;26:383–97. Stothard JR, Sousa-Figueiredo JC, Betson M, Bustinduy A, Reinhard-Rupp J. Schistosomiasis in African infants and preschool children: let them now be treated. Trends Parasitol. 2013;29:197–205.
Knopp S, Person B, Ame SM, Mohammed KA, Ali SM, Khamis IS, Rabone M, Allan F, Gouvras A, Blair L, et al. Elimination of schistosomiasis transmission in Zanzibar: baseline findings before the onset of a randomized intervention trial. PLoS Neglected Tropical Diseases. 2013;7(10):e2474. Ntonifor HN, Ajayi JA. Water contact and Schistosoma haematobium infection. A case study of some communities in Toro local government council area (TLGCA) of Bauchi State, Nigeria. Journal of Natural and Applied Sciences. 2005;1(1):54–9. Anchang-Kimbi J, Mansoh DE, Sotoing GT, Achidi EA. Coinfection with Schistosoma haematobium and Plasmodium falciparum and anaemia severity among pregnant women in Munyenge, Mount Cameroon area: a cross-sectional study. Journal of Parasitology Research. 2017. https://doi.org/10.1155/2017/6173465. WHO. Report of the WHO Informal Consultation on the use of praziquantel during pregnancy/lactation and albendazole/mebendazole in children under 24 months. WHO/CDS/CPE/PVC/2002.4; 2002. Ajanga A, Lwambo NJ, Blair I, Nyandindi U, Fenwick A, Brooker S. Schistosoma mansoni in pregnancy and association with anaemia in Northwest Tanzania. Transactions of the Royal Society of Tropical Medicine and Hygiene. 2012;100(1):59–63. Khalid A, Abdelgadir MA, Ashmaig A, Ibrahim AM, Ahmed AA, Adam I. Schistosoma mansoni infection among prenatal attendees at a secondary-care hospital in Central Sudan. Int J Gynaecol Obstet. 2012;116(1):10–2. Salawu OT, Odaibo AB. Schistosomiasis among pregnant women in rural communities in Nigeria. Int J Gynaecol Obstet. 2013;122(1):1–4. Basra A, Mombo-Ngoma G, Melser MC, Diop DA, Wurbel H, Mackanga JR, et al. Efficacy of mefloquine intermittent preventive treatment in pregnancy against Schistosoma haematobium infection in Gabon: a nested randomized controlled assessor-blinded clinical trial. Clinical Infectious Diseases. 2013;56:68–75. Salawu OT, Odaibo AB.
Schistosomiasis transmission; socio-demographic, knowledge and practices as transmission risk factors in pregnant women. Journal of Parasitic Diseases. 2014. https://doi.org/10.1007/s12639-014-0454-2. Mombo-Ngoma G, Honkpehedji J, Basra A, Mackanga JR, Zoleko RM, Zinsou J, Agobe JCD, Lell B, Matsiegui PB, Gonzales R, Agnandji ST, Yazdanbakhsh M, Menendez C, Kremsner PG, Adegnika AA, Ramharter M. Urogenital schistosomiasis during pregnancy is associated with low birth weight delivery: analysis of a prospective cohort of pregnant women and their offspring in Gabon. Int J Parasitol. 2017;47:69–74. Kohno M, Kuwatsuru R, Suzuki K, Nishii N, Hayano T, Mituhashi N, Tanabe K. Imaging findings from a case of bilharziasis in a patient with gross haematuria of several years' duration. Radiat Med. 2008;26(9):553–6. Botelho MC, Veiga I, Oliveira PA, Lopes C, Teixeira M, Correia da Costa JM, Machado JC. Carcinogenic ability of Schistosoma haematobium possibly through oncogenic mutation of KRAS gene. Adv Cancer Res Treat. 2013:876585. Fenwick A, Savioli L, Engels D, Bergquist NR, Todd MH. Drugs for the control of parasitic diseases: current status and development in schistosomiasis. Trends Parasitol. 2003;19:509–15. Kjetland EF, Ndhlovu PD, Gomo E, Mduluza T, Midzi N, Gwanzura L, Mason PR, Sandvik L, Friis H, Gundersen SG. Association between genital schistosomiasis and HIV in rural Zimbabwean women. AIDS. 2006;20:593–600. Ndhlovu PD, Mduluza T, Kjetland EF, Midzi N, Nyanga L, et al. Prevalence of urinary schistosomiasis and HIV in females living in a rural community of Zimbabwe: does age matter? Transactions of the Royal Society of Tropical Medicine and Hygiene. 2007;101:433–8. Downs JA, Mguta C, Kaatano GM, Mitchell KB, Bang H, Simplice H, Kalluvya SE, Changalucha JM, Johnson WD Jr, Fitzgerald DW. Urogenital schistosomiasis in women of reproductive age in Tanzania's Lake Victoria region. Am J Trop Med Hyg. 2011;84:364–9.
Mbabazi PS, Andan O, Fitzgerald DW, Chitsulo L, Engels D, Downs JA. Examining the relationship between urogenital schistosomiasis and HIV infection. PLoS Neglected Tropical Diseases. 2011;5(12). https://doi.org/10.1371/journal.pntd.0001396. Helling-Giese G, Kjetland EF, Gundersen SG, et al. Schistosomiasis in women: manifestations in the upper reproductive tract. Acta Trop. 1996;62(4):225–38. Siegrist D, Siegrist-Obimpeh P. Schistosoma haematobium infection in pregnancy. Acta Trop. 1992;50:317–21. WHO. The prevention and control of schistosomiasis and soil-transmitted helminthiasis. Report of a WHO Expert Committee. World Health Organ Tech Rep Ser. 2002;912:1–57. Friedman JF, Olveda RM, Mirochnick MH, Bustinduy AL, Elliott AM. Praziquantel for the treatment of schistosomiasis during human pregnancy. In: Bulletin of the World Health Organization; 2017. ID: BLT.17.198879. Friedman JF, Kanzaria HK, McGarvey ST. Human schistosomiasis and anemia: the relationship and potential mechanisms. Trends Parasitol. 2005;21:386–92. Freeman MC, Ogden S, Jacobson J, Abbott D, Addiss DG, Amnie AG, Beckwith C, Cairncross S, Callejas R, Colford JM, Emerson PM, Fenwick A, Fishman R, Gallo K, Grimes J, Karapetyan G, Keene B, Lammie PJ, Macarthur C, Lochery P, et al. Integration of water, sanitation, and hygiene for the prevention and control of neglected tropical diseases: a rationale for inter-sectoral collaboration. PLoS Negl Trop Dis. 2013;7:e2439. WHO. Water, sanitation and hygiene for accelerating and sustaining progress on neglected tropical diseases: a global strategy 2015–2020. Geneva: World Health Organisation; 2015. Echazu A, Bonanno D, Juarez M, Cajal SP, Heredia V, Caropresi S, Cimino RO, Caro N, Vargas PA, Paredes G, Krolewiecki AJ. Effect of poor access to water and sanitation as risk factors for soil-transmitted helminth infection: selectiveness by the infective route. PLoS Negl Trop Dis. 2015;9. https://doi.org/10.1371/journal.pntd.0004111.
Grimes JET, Croll D, Harrison WE, Utzinger J, Freeman MC, Templeton MR. The relationship between water, sanitation and schistosomiasis: a systematic review and meta-analysis. PLoS Neglected Tropical Diseases. 2014;8:e3296. Ebai CB, Kimbi HK, Sumbele IUN, Yunga JE, Lehman LG. Prevalence and risk factors of urinary schistosomiasis in the Ikata-Likoko area of Southwest Cameroon. International Journal of Tropical Disease & Health. 2016. https://doi.org/10.9734/IJTDH/2016/26669. Ntonifor HN, Green AE, Bopda MOS, Tabot JT. Epidemiology of urogenital schistosomiasis and soil transmitted helminthiasis in a recently established focus behind Mount Cameroon. Int J Curr Microbiol App Sci. 2015;4(3):1056–66. Bryan FJ. The design and analysis of research studies. University of Otago, Dunedin, New Zealand. Cambridge, UK: Cambridge University Press; 1992. Folefac LN, Nde-Fon P, Verla VS, Nkemanjong TM, Njunda AL, Luma HN. Knowledge, attitudes and practices regarding urinary schistosomiasis among adults in the Ekombe Bonji Health Area, Cameroon. Pan African Medical Journal. 2018;29:161. https://doi.org/10.11604/pamj.2018.29.161.14980. Armitage P, Berry G. Statistical methods in medical research. 3rd ed. London: Blackwell; 1994. Newcombe RG. Interval estimation for the difference between independent proportions: comparison of eleven methods. Stat Med. 1998;17:873–90. Ntonifor HN, Mbunkur GN, Ndaleh NW. Epidemiological survey of urinary schistosomiasis in some primary schools in a new focus behind Mount Cameroon (Munyenge), South West Region, Cameroon. East Afr Med J. 2012;89(3):82–8. Barbosa FS, Pinto R, Souza OA. Control of schistosomiasis mansoni in a small north east Brazilian community. Trans R Soc Trop Med Hyg. 1971;65:206–13. Farooq M, Nielsen J, Samaan SA, Mallah MB, Allam AA. The epidemiology of Schistosoma haematobium and S. mansoni infections in the Egypt-49 project area. Bull World Health Organ. 1966;35:319–30. Tanser F, Azongo DK, Vandormael A, Barnighausen T, Appleton C.
Impact of the scale-up of piped water on urogenital schistosomiasis infection in rural South Africa. eLife. 2018. https://doi.org/10.7554/eLife.33065. Gear J, Pitchford R, Van Eeden J. Atlas of bilharzia in South Africa. South African Institute for Medical Research: Johannesburg; 1980. Noda S, Shimada M, Ngethe DM, Sato K, Francis BM, Simon MG, Wajyaki PG, Aoki Y. Effect of piped water supply on human water contact patterns in a Schistosoma haematobium-endemic area in coast province, Kenya. Am J Trop Med Hyg. 1997;56(2):118–26. Abe M, Muhoho DN, Sunahara T, Moji K, Yamamoto T, Aoki Y. Effect of communal piped water supply on pattern of water use and transmission of schistosomiasis haematobia in an endemic area of Kenya. Tropical Medicine and Health. 2009;37(2):43–53. Fine P, Eames K, Heymann DL. Herd immunity: a rough guide. Clin Infect Dis. 2011;52:911–6. Lima E, Costa MFF, Magalhaes HA, Rocha S, Antunes MF, Katz N. Water-contact patterns and socioeconomic variables in the epidemiology of schistosomiasis mansoni in an endemic area in Brazil. Bull World Health Organ. 1987;65(1):57–66. Shewakena F, Kloos H, Abebe F. The control of schistosomiasis in Jiga town, Ethiopia. III. Socioeconomic and water use factors. Riv Parasitol. 1993;10:399–411. Anchang-Kimbi JK, Nkweti VN, Ntonifor HN, Apinjoh TO, Tata RB, Chi HF, Achidi EA. Plasmodium falciparum parasitaemia and malaria among pregnant women at first clinic visit in the Mount Cameroon area. BMC Infectious Disease. 2015;15:439. https://doi.org/10.1186/s12879-015-1211-6. Wood RG, Goesling B, Avellar S. The effects of marriage on health: a synthesis of recent research evidence. Washington, DC: Mathematica Policy Research, Inc.; 2007. Lerman R. Marriage and the economic well-being of families with children: a review of the literature. Washington, DC: The Urban Institute and American University; 2002. Umberson D. Family status and health behaviors: social control as a dimension of social integration. J Health Social Beh. 
1987;28(3):306–19. Bethony J, Williams JT, Kloos H, Blangero J, Alves-Fraga L, Buck G, Michalek A, Williams-Blangero S, LoVerde PT, Correa-Oliveira R, Gazzinelli A. Exposure to Schistosoma mansoni infection in a rural area in Brazil. II: household risk factors. Trop Med Int Health. 2001;6(2):136–45. Ugbomoiko US, Ofoezie IE, Okoye IC, Heukelbach J. Factors associated with urinary schistosomiasis in two peri-urban communities in South-Western Nigeria. Ann Trop Med Parasitol. 2010;104(5):409–19. Da Silva AA, Cutrim RN, De Britto MT, Coimbra LC, Tonial SR, Borges DP. Water-contact patterns and risk factors for Schistosoma mansoni infection in a rural village of Northeast Brazil. Revista do Instituto de Medicina Tropical de Sao Paulo. 1997;39:91–6. Engels D, Ndoricimpa J, Gryseels B. Schistosomiasis mansoni in Burundi: progress in its control since 1985. Bull World Health Organ. 1993;71(2):207–14. The authors are grateful to all the pregnant women who gave their consent to participate in the study. Our special thanks to the Chief of Centre, nurses and laboratory technicians of the Munyenge Integrated HC, Banga Annex HC and TransAfrican HC for their cooperation and contribution. This work was supported by the staff development grant and special fund for research and modernization given to authors by the Government of Cameroon. All datasets generated and analysed during the study are presented in the paper and its supplementary file. Department of Zoology and Animal Physiology, Faculty of Science, University of Buea, P.O. Box 63, Buea, Cameroon Godlove Bunda Wepnje , Judith Kuoh Anchang-Kimbi & Vicky Daonyle Ndassi Department of Animal Biology, Faculty of Science, University of Douala, P.O. Box 24157, Douala, Cameroon Leopold Gustave Lehman Department of Medical Laboratory Science, Faculty of Health Sciences, University of Bamenda, P.O. 
Box 39, Bambili, Cameroon Helen Kuokuo Kimbi GBW: Participated in the design of the study, performed the experiments, analyzed the data and made inputs in manuscript write-up. JKAK: Conceived and designed the study and wrote the manuscript. VD: Performed the experiment. LGL and HKK: Supervised, reviewed, and provided inputs to the manuscript. All authors read and approved the final manuscript. Correspondence to Judith Kuoh Anchang-Kimbi. Ethical clearance (No2017/0481/UB/FHS/IRB) was obtained from the University of Buea, Faculty of Health Sciences Institutional Review Board and administrative authorisation from the South West Regional Delegation of Public Health, Buea and District Medical Officer for Muyuka Subdivision. Written and verbal informed consent was obtained before enrolment into the study. Participation was voluntary and study participants were assured of confidentiality and anonymity of data. Relative risk reduction in stream usage and contact behaviour, S. haematobium infection rate and intensity among pregnant women following scale-up of communal piped water sources from 2014 to 2017 in Munyenge. This file shows how much the risk of S. haematobium infection/intensity, stream usage and contact behaviour has reduced among pregnant women following scale-up of communal piped water sources from 2014 to 2017 in Munyenge. (DOCX 16 kb) Wepnje, G.B., Anchang-Kimbi, J.K., Ndassi, V.D. et al. Schistosoma haematobium infection status and its associated risk factors among pregnant women in Munyenge, South West Region, Cameroon following scale-up of communal piped water sources from 2014 to 2017: a cross-sectional study. BMC Public Health 19, 392 (2019) doi:10.1186/s12889-019-6659-7 Keywords: Piped water source; Stream usage and contact behaviour; Infectious disease epidemiology
\begin{document} \thispagestyle{empty} \title{Recognition of Triangulation Duals of Simple Polygons With and Without Holes} \begin{abstract} We investigate the problem of determining if a given graph corresponds to the dual of a triangulation of a simple polygon. This is a graph recognition problem, where in our particular case we wish to recognize a graph which corresponds to the dual of a triangulation of a simple polygon with or without holes and interior points. We show that the difficulty of this problem depends critically on the amount of information given and we give a sharp boundary between the various tractable and intractable versions of the problem. \end{abstract} \section{Introduction} Triangulating a polygon is a common preprocessing step for polygon exploration algorithms~\cite{cit:Maf} among many other applications (see~\cite{cit:hjelle}). The exploration of the polygon is thus reduced to a traversal of the triangulation, which is equivalent to a vertex tour of the dual graph of the triangulation. In the study of lower bounds for such a setting, the question often arises if a given constructed graph is or is not the dual of a triangulation of an actual polygonal region (with or without holes)~\cite{cit:Maf}. Thus, the recognition of a graph class is a well established problem of theoretical interest and given the importance of triangulations likely to be of use in the future. More formally, given a graph, does it represent a triangulation dual of a simple polygon? There are three aspects of this problem: the geometric problem, the topological problem and the combinatorial problem\footnote{In~\cite{cit:Sug2}, Sugihara and Hiroshima call ``the topological embedding problem'' what we call ``the combinatorial problem'' here.}. In the geometric problem, we are given a precise embedding of the graph. In the topological problem, we are given a topological embedding (also called ``face embedding''). 
In the combinatorial problem, we are given the adjacency matrix only. Furthermore, the problem can be stated in both the decision version when the task is to recognize the graph of a triangulation, and the constructive version when the task is to realize the corresponding triangulation. For some graph classes, recognition may be easier than realization. Some specialized versions of this problem were studied in the past. Sugihara, and Hiroshima~\cite{cit:Sug2} as well as Snoeyink and van Kreveld~\cite{cit:Sno} consider the problem of realization of a Delaunay triangulation for the combinatorial version of the problem. In~\cite{cit:Oka}, the authors define three aspects of the recognition problem of a Voronoi/Delaunay diagram, where the first two of them are what we call the geometric and topological aspects. The most relevant part of their work is the following question in the geometrical setting~\cite[Problem V10, p.~108]{cit:Oka}: \emph{Given a triangulation graph, decide whether it is a (non-degenerate) Delaunay triangulation realizable graph}. For this case, the authors give necessary and sufficient conditions for a graph to be Delaunay triangulation realizable graph in the geometric setting. In this paper, we study the problem of recognizing the dual of a triangulation of a simple polygon with or without holes and interior points in the geometric, topological and combinatorial setting. To the best of our knowledge, this paper is the first work which considers the problem for general triangulations of polygons. We draw a clear line between tractability and NP-completeness of the problem as the degrees of freedom increase from the geometric to the topological to the combinatorial problem and as we consider holes. Our results are summarized in Table~\ref{tab:results}. The recognition algorithms presented in this paper are constructive and allow realization of the polygon. 
\begin{table} \centering \includegraphics[width=\columnwidth]{table} \caption{Summary of results.} \label{tab:results} \end{table} \section{Preliminaries} \label{sec:preliminaries} Let $P$ be a simple polygon with or without holes with $n$ vertices, $S$ a set of $m$ interior points located inside $P$ and $\mathcal T$ a \emph{triangulation} of the $n+m$ given points inside $P$ (for an example of a triangulation, see solid lines in Fig.~\ref{fig:infinity}(a)). Let $G$ be the \emph{graph of the triangulation} $\mathcal T$ as the graph on vertices $P\cup S$ plus an additional vertex $v$ ``at infinity'' located outside $P$ and the edges of $G$ are the edges of $\mathcal{T}$ plus the edges connecting every vertex on the boundary of $P$ to $v$ (see Fig.~\ref{fig:infinity}). \begin{figure} \caption{Two triangulations of a polygon with isomorphic dual graphs if the point at infinity is omitted.} \label{fig:notunique} \end{figure} This paper reconstructs triangulations of polygons from duals via reconstructing their graphs (which include the point at infinity). As we show, the point at infinity provides one with tools which are fully sufficient for such a reconstruction. If graphs of triangulations were defined without points at infinity, one would discover that there are many triangulations of a polygon with the same dual (see Fig.~\ref{fig:notunique}). Furthermore, we suggest that adding the point at infinity to representations of triangulations is easy to accomplish: Given a triangulation $\mathcal{T}$ of a polygon, one can construct its graph $G$ by adding the point at infinity. In the other direction, if the vertex at infinity is known, one can easily construct triangulation $\mathcal{T}$ from its graph $G$. The information about which is the point at infinity can be given as a part of the input, or in some cases, this may be even implicitly determined by formulation of the problem (see Definition~\ref{def:problem}; TDR-without-holes). 
\begin{figure} \caption{(a) An example of a triangulation of a polygon (solid lines) and its graph (solid and dashed lines), (b) a polygonal region $P$ with one (white) hole (shown in solid black lines and gray interior); its triangulation $\mathcal T$ (in solid black lines); the graph $G$ of the triangulation $\mathcal T$ (in black, solid and dashed lines); the dual graph $G^*$ of the triangulation (in solid red lines).} \label{fig:infinity} \end{figure} Given a plane graph $\Gamma$, the \emph{dual graph} of $\Gamma$, denoted by $\Gamma^*$, is a planar graph whose vertex set is formed by the faces of $\Gamma$ (including the outer face), and two vertices in $\Gamma^*$ are adjacent if and only if the corresponding faces in $\Gamma$ share an edge. \noindent Let $G$ be a graph of a triangulation of a polygon $P$ and $G^*$ its dual graph. For brevity, we say that $G^*$ is the \emph{dual graph of the triangulation} $\mathcal T$ and from now on we will use this notion instead of ``the dual graph of the graph of a triangulation $\mathcal T$.'' \begin{definition}[The TDR Problems] \label{def:problem} Given a planar graph $G^*$, decide if $G^*$ is a dual graph of a triangulation of a polygon $P$ with a set of interior points $S$. We distinguish between: (1) \emph{TDR-without-holes} if $P$ is not allowed to have holes and $S = \emptyset$; (2) \emph{TDRS-without-holes} if $P$ is not allowed to have holes and $S$ may be non-empty; (3) \emph{TDR-with-known-holes} if $P$ is allowed to have holes, $S = \emptyset$, and the positions of holes are part of the input; and (4) \emph{TDR-with-unknown-holes} if $P$ is allowed to have holes, $S = \emptyset$, and the positions of holes are unknown. \end{definition} The following proposition summarizes some well-known facts about planar graphs and their duals. \begin{proposition} \label{prop:simple-properties} 1.~The dual of a planar graph $G$ is a planar graph.
2.~The embedding of a $3$-connected planar graph is unique up to the choice of the outer face. 3.~The dual graph of a $3$-connected planar graph is a $3$-connected planar graph. \end{proposition} \begin{proof} \noindent (1) Consider a planar embedding of $G$. Every vertex of $G^*$ can be embedded inside a face that it represents and connected to any point on the boundary of the face by a ``half-edge'' without introducing any crossings. By joining the ``half-edges'', one can construct a planar embedding of $G^*$. \noindent (2) is a well-known theorem of Whitney; see e.g.~\cite{cit:diestel} for the proof. \noindent (3) is another well-known fact. A quick argument can be given using Steinitz's theorem~\cite{cit:grun}. A planar 3-connected graph $G$ can be realized as a polyhedron $P$. Take its dual polyhedron $P'$, whose graph is $G^*$, i.e., the dual graph to $G$. Using Steinitz's theorem again, $G^*$ is planar and 3-connected.\qed \end{proof} \section{Triangulation Dual Recognition (TDR)} \label{sec:recognition} We present a sequence of increasingly complex dual recognition problems. We draw a clear line between the tractability of the problem and the NP-completeness depending on the degrees of freedom in the particular setting being considered. We first establish some properties of the triangulation dual of a polygon that will allow us to decide if the input graph is a dual of a triangulation or not. We consider separately the cases where the triangulated polygon has holes or not, and contains interior points or not. We consider three aspects of this problem depending on the amount of information given. In the most restricted case, we are given a \emph{geometric} embedding of the dual of a triangulation. Each triangle of $G$ is represented in the dual $G^*$ by a distinguished point in its interior.
In particular, following Hartvigsen \cite{cit:hart} we consider the circumcenter of the triangle (which does not necessarily lie inside the triangle) and we are given the edge adjacencies between the triangles. In the second case we are given the faces of the dual of the triangulation but not their precise geometric embedding. This forms the \emph{topological} recognition problem. Lastly, in the least restrictive case we are simply given a dual graph without any knowledge of which vertices form a face in the triangulation dual. This is the \emph{combinatorial} recognition problem. \paragraph{Geometric TDR- and TDRS-without-holes.}\label{subs:gtdrwoh} For the geometric recognition problem, we do not consider the point at infinity, since it does not have a natural geometric representation. Thus, in this problem the input is a geometric embedding of the dual of the triangulation $\mathcal T$. In the dual, each triangle is represented by a distinguished point. The natural choices for such a point are (a) the circumcenter, (b) the incenter, (c) the orthocenter, (d) the centroid or (e) an arbitrary point in the interior of the triangle. For the case of (a), the circumcenter, which is the choice of Hartvigsen for the recognition of Delaunay triangulations~\cite{cit:hart}, we use a similar technique and create a two-dimensional linear program. This is based on the observation that the edges in the triangulation are perpendicular to the dual edges in the geometric embedding. The intersections of such edges are the vertices of the polygon. Observe as well that the midpoint of each triangulation edge lies on the corresponding dual edge. We can then set up a linear program with the coordinates of the vertices of the polygon as unknowns, and the orthogonality and bisection equations as linear constraints. We then solve the two-dimensional linear program in linear time using Megiddo's fixed-dimension LP algorithm~\cite{cit:megi}.
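To illustrate, the equality constraints can be assembled and checked for consistency as follows. This is a sketch under our own conventions (it uses a least-squares solve of the equality system rather than Megiddo's algorithm, which suffices to test whether the orthogonality and midpoint constraints are simultaneously satisfiable); the function and variable names are ours, not the paper's.

```python
# Sketch (our own conventions): unknowns are the 2-D coordinates of the polygon
# vertices.  For every dual edge between circumcenters c_i, c_j that crosses the
# triangulation edge (p_a, p_b) we add the linear equalities
#   (p_b - p_a) . (c_j - c_i) = 0                  (orthogonality)
#   cross((p_a + p_b)/2 - c_i, c_j - c_i) = 0      (midpoint lies on dual edge)
# and test whether the resulting system is consistent.
import numpy as np

def feasible(centers, dual_edges, n_vertices, tol=1e-8):
    """centers: array of dual-vertex coordinates; dual_edges maps a dual edge
    (i, j) to the pair (a, b) of polygon-vertex indices of the crossed edge."""
    rows, rhs = [], []
    for (i, j), (a, b) in dual_edges.items():
        d = centers[j] - centers[i]
        r = np.zeros(2 * n_vertices)           # orthogonality row
        r[2 * a:2 * a + 2] = -d
        r[2 * b:2 * b + 2] = d
        rows.append(r); rhs.append(0.0)
        r = np.zeros(2 * n_vertices)           # midpoint-collinearity row
        r[2 * a] = r[2 * b] = 0.5 * d[1]
        r[2 * a + 1] = r[2 * b + 1] = -0.5 * d[0]
        rows.append(r)
        rhs.append(centers[i][0] * d[1] - centers[i][1] * d[0])
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # min-norm solution of A x = b
    return bool(np.allclose(A @ x, b, atol=tol))

# Two triangles sharing one edge; circumcenters taken from a concrete example.
centers = np.array([[2.0, 1.0], [1.5, 1.5]])
dual_edges = {(0, 1): (0, 2)}   # dual edge (0,1) crosses polygon edge (v0, v2)
print(feasible(centers, dual_edges, 4))  # True
```

Inequality constraints (for instance, forcing vertices apart) would require a genuine LP solver; the sketch only checks the equality system.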
If there is no feasible solution, then the given input graph is necessarily not the dual of a triangulation, since otherwise the actual triangulation graph would satisfy the given linear constraints. A similar approach works for the case of (d) the centroid. The location of the vertices and the median points are the unknowns and the collinearity with the centroid is expressed as a convex combination of these two points, with the centroid trisecting the line segment (i.e. $\lambda=1/3$ in the convex combination equation). This produces a set of linear equations which can also be addressed using an LP solver. \begin{figure} \caption{The figure illustrates a dual graph in black with the vertices representing circumcenters of triangles of a triangulation. The red edges show a candidate triangulation graph constructed from the LP-solver. Diagram (a) illustrates a valid triangulation and (b) an invalid triangulation that violates planarity.} \label{fig:circumcentres} \end{figure} \begin{figure} \caption{The figure illustrates a dual graph in black with the vertices representing centroids of triangles of a triangulation. The red edges show a candidate triangulation graph constructed from the LP-solver. Diagrams (a) and (b) illustrate two valid triangulations and (c) an invalid triangulation that violates planarity.} \label{fig:centroids} \end{figure} \begin{theorem}\label{thm:circumcentres} The linear program described above gives a necessary condition for the realization of the geometric TDR- and TDRS-without-holes problems in linear time, where the input $G^*$ is given with the circumcenters/centroids of the triangles of the triangulation as its vertices. \end{theorem} However, it is important to observe that the feasible solution found by the LP only obeys orthogonality/median constraints and has no knowledge of planarity constraints of the resulting triangulation. Thus, the proposed solution might not be a realizable triangulation.
One way of resolving this problem is testing (in linear time) if the proposed solution is planar. If it is, we now have a realization of the triangulation. If on the other hand the solution is not planar, we cannot decide whether another feasible solution might have been planar. This is illustrated in Fig.~\ref{fig:circumcentres} and \ref{fig:centroids}, where we give different solutions to the LP constraints over the same dual triangulation graph, one leading to a feasible triangulation and the other not. It remains an open problem if recognition is possible under either of these models, as well as any bounds for necessary and/or sufficient conditions under other choices for triangle representatives. To the best of our knowledge, planarity constraints between two triangles are a disjunction of three linear constraints, which leads to a third-degree equation and as such cannot be resolved within the linear program. Thus full recognition of geometric graphs remains an open problem. \paragraph{Topological TDR- and TDRS-without-holes.} \label{subs:ttdrwoh} There are two cases of the problem in this setting: (1) the output triangulation possibly contains interior points (TDRS-without-holes) and (2) the triangulation does not contain any interior points (TDR-without-holes). \paragraph{TDRS-without-holes.} \label{subsec:ttdrs-without-holes} Let us begin with the proof of the following lemma: \begin{lemma} \label{lem:ttdrs-3-con} Let $P$ be a polygon without holes, $S$ be a set of points in the interior of $P$, and $\mathcal{T}$ a triangulation of $P \cup S$. The graph $G$ of $\mathcal{T}$ is a $3$-connected maximal planar graph. \end{lemma} \begin{proof} By definition, $\mathcal{T}$ can be drawn in the plane without crossing edges. Hence, the graph (let us call it $T$) induced by the vertices of $\mathcal{T}$ is planar. Since the vertex $v$ at infinity is located outside the polygon $P$ and is not connected to any vertex in the interior of $P$, the graph $G$ is planar.
As the boundary of every face in the subgraph $T$ of $G$ induced by the vertices of $\mathcal{T}$ is a simple cycle, $T$ is $2$-connected. Furthermore, every $2$-cut in $T$ is formed by the vertices on the boundary of $\mathcal{T}$. Hence, by adding $v$ to $T$, we obtain a $3$-connected graph $G$. This graph is maximal planar as every face (including the outer face) is a triangle. \qed \end{proof} We establish necessary (Lemma~\ref{lem:simple-properties}) and sufficient (Lemma~\ref{lem:dual-planar}) conditions for a graph to be a dual graph of a triangulation of a polygon with no holes. \begin{lemma} \label{lem:simple-properties} If $G^*$ is a dual graph of a triangulation of a polygon $P$ with no holes and a set $S$ of interior points inside $P$, then $G^*$ is a planar $3$-regular $3$-connected graph. \end{lemma} \begin{proof} The fact that $G^*$ is planar and $3$-connected follows from Lemma~\ref{lem:ttdrs-3-con} and Proposition~\ref{prop:simple-properties}. As the graph $G$ of the triangulation is a maximal planar graph, every face of $G$ is a triangle. Hence, every vertex in $G^*$ has precisely three incident edges. \qed \end{proof} \begin{lemma} \label{lem:dual-planar} If $G^*$ is planar, $3$-regular and $3$-connected, then $G^*$ is a dual graph of a triangulation of a polygon $P$ without holes and a set $S$ of interior points. \end{lemma} \begin{proof} By Proposition~\ref{prop:simple-properties}(3), $G^*$ has a dual graph $G$ which is $3$-connected, and thus $G$ can be uniquely embedded in the plane up to the selection of the outer face (Proposition~\ref{prop:simple-properties}(2)). Such an embedding can be achieved using straight lines only (see e.g.~\cite{cit:deFra}). Now, remove the vertex $v$ of $G$ which represents the outer face of $G^*$. As $G$ is $3$-connected, by removing $v$, we obtain a $2$-connected plane graph $G'$. Hence, every face of $G'$ is a simple cycle, and $G'$ is a triangulation of the polygon formed by its outer face.
Moreover, since the graph $G'$ is 2-connected and every inner face is a triangle, it is the triangulation of a polygon. \qed \end{proof} \begin{theorem}\label{thm:tdrs-no-holes} The answer to the topological TDRS-without-holes problem is ``yes'' if and only if the input $G^*$ is a $3$-connected $3$-regular planar graph. Furthermore, such a polygon can be constructed in linear time. \end{theorem} \begin{proof} The first part of the claim follows directly from Lemmas~\ref{lem:simple-properties} and~\ref{lem:dual-planar}. The linear running time follows from linearity of verifying $3$-connectivity of a graph~\cite{cit:hopcroft}. The reconstruction is linear as the number of faces in any planar graph is linear due to Euler's formula, so the dual graph can be constructed in linear time. The straight line embedding can be found in linear time as well~\cite{cit:deFra}, and deleting a vertex from an embedded graph takes at most $\mathcal{O}(n)$ steps too. \end{proof} \paragraph{TDR-without-holes.} \label{subsec:ttdr-without-holes} Previously, we showed that the topological TDRS-without-holes problem can be solved in linear time. This can be done even if the set of interior points $S$ is required to be empty (i.e., the graph of the triangulation should consist only of vertices at the boundary of the polygon $P$, and one vertex outside $P$). \begin{proposition}\label{lem:pol-no-holes-Steiner} Let $G^*$ be a $3$-regular planar graph and $\widetilde{G^*}$ the subgraph of $G^*$ obtained by removing the vertices of the outer face. $\widetilde{G^*}$ is a tree if and only if it corresponds to a polygon with no holes or interior points. \end{proposition} \begin{proof} All the outer vertices of $G^*$ correspond to faces introduced by the point at infinity vertex and all the interior vertices of $G^*$ correspond to the faces of the triangulation of a polygon. \noindent $(\Leftarrow)$ The dual graph of a triangulation of a polygon without holes or interior points is a tree~\cite{cit:deBerg}~(p.
48). \noindent $(\Rightarrow)$ It is a simple exercise to show by induction that every tree of degree at most 3 is the dual of a triangulation of a polygon without holes. \qed \end{proof} \paragraph{Combinatorial TDR- and TDRS-without-holes.}\label{subs:ctdrwoh} It is easy to see that the topological and combinatorial inputs are equivalent in this case. Since $G^*$ must be $3$-connected and $3$-regular, the algorithm can first verify this necessary condition. If it is satisfied, it can construct the embedding (e.g., applying the linear straight line embedding algorithm of~\cite{cit:deFra}) and proceed with the topological input. \begin{theorem}\label{thm:comb-tdr-no-holes} The answer to the combinatorial TDR- and TDRS-without-holes problems is affirmative if and only if the input $G^*$ is a $3$-connected $3$-regular planar graph. Furthermore, such a polygon can be constructed in linear time. \end{theorem} \paragraph{Topological TDR- and TDRS-with-known-holes.}\label{subs:ttdrwh} Let us start with the following observation: \begin{proposition} \label{prop:degree2} If a polygon has a hole, the dual graph $G^*$ of its triangulation contains vertices of degree $2$ or less. \end{proposition} \begin{proof} If we triangulate the polygon together with its holes (treating the vertices at the boundary of the hole as interior points), we obtain a 3-regular 3-connected dual graph. We now construct the dual graph $G^*$ of the triangulation and remove the vertices that correspond to the faces inside the holes of the polygon. The remaining graph is connected and has at least one vertex of degree 2 or less. \qed \end{proof} From the proof of Proposition~\ref{prop:degree2}, we can see that vertices of degree 2 in $G^*$ are adjacent to holes in the initial polygon. Observe that if $P$ is a polygon with or without holes, the triangles of the graph $G$ of a triangulation that are created by the point at infinity and the outer face of the polygon form a 3-connected graph.
Thus, the dual graph $G^*$ cannot contain a 2-cut on the outer face corresponding to these triangles. We can associate each degree 2 vertex with its adjacent hole. Formally, let $G^*$ be a planar graph that contains at least one vertex of degree 2. We define an \emph{assignment} of a vertex $u$ of degree $2$ to a face of $G^*$ as a mapping $\mathcal H$ from $\{u\}$ to the set of faces of $G^*$ incident to $u$, such that if $u$ is incident to faces $F$ and $F'$ in $G^*$, then $\mathcal{H}(u) \in \{F, F'\}$. In the same way we define: an \emph{empty assignment}, which does not assign any vertex of degree 2 to a face of $G^*$; a \emph{partial assignment}, which assigns a subset of vertices of degree 2 to their incident faces in $G^*$; and a \emph{total assignment}, which assigns all the vertices of degree 2 to faces of $G^*$ (see Fig.~\ref{fig:assignment} for a total assignment example). \begin{lemma} \label{lem:deg2=hole} Let $\{G^*,\mathcal H\}$ be such that $G^*$ is a dual graph of a triangulation of a polygon with holes. A face that is assigned vertices of degree $2$ contains a hole in the initial polygon. Moreover, $\mathcal H$ assigns to each face of $G^*$ zero or at least three vertices of degree $2$. \end{lemma} \begin{proof} Since $G^*$ is a dual graph of a triangulation of a polygon with holes, the vertices on the outer face have degree 3. Let $u^*$ be a vertex of degree 2; thus $u^*$ is an interior vertex. Let the assignment $\mathcal H$ assign $u^*$ to a face containing~$u^*$. Recall that every vertex $v^*$ in $G^*$ corresponds to a face $F$ in the initial triangulation graph $G$, every face $F^*$ in $G^*$ corresponds to a vertex $v$ in $G$ and every edge $e^*$ in $G^*$ corresponds to an edge $e$ crossing $e^*$ in $G$.
Since $u^*$ has degree 2 in $G^*$, only two different (topological) scenarios are possible: the triangular face corresponding to $u^*$ in $G$ has one vertex in one of the two faces of $G^*$ containing $u^*$ and two vertices in the other, or the other way around (see Fig. \ref{fig:2cases}). \begin{figure} \caption{Two possible cases of reconstruction of the initial graph of the triangulation.} \label{fig:2cases} \end{figure} Which of the two cases applies is determined by the assignment $\mathcal H$, which assigns $u^*$ to exactly one of the two faces containing $u^*$. If $u^*$ is assigned to a face in $G^*$, then in $G$ the triangle $U$ corresponding to $u^*$ has two vertices in that face of $G^*$. Since we know that every face of $G^*$ corresponds to one vertex in $G$ and in our case the face of $G^*$ to which $u^*$ was assigned has two vertices of $G$, this implies that this face contains a hole in the initial polygon. A hole has length at least 3. Since one vertex of degree 2 assigned to a face induces one edge of the hole, for $G^*$ to be the dual graph of a triangulation of a polygon with holes, the face needs to be assigned at least three vertices of degree 2. \qed \end{proof} We know that the presence of a vertex of degree 2 in the graph $G^*$ means there is a hole in the output polygon in one of the faces incident to this vertex in $G^*$. The reason to define an assignment of vertices of degree 2 to faces of the graph is to establish in which of the two incident faces the hole is contained (see Fig.~\ref{fig:2cases}). We call $\mathcal H$ a \emph{valid assignment} if we can realize $G^*$ as a triangulation dual of a polygon with holes. A polygon $P$ with holes is a realization of $\{G^*,\mathcal H\}$ if the polygon is a realization of $G^*$ and $\mathcal H$ is a valid assignment with respect to~$P$. Let us now focus on the one-edge cuts in $G^*$, such as the one shown in Fig.~\ref{fig:edge-cuts}(a).
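The one-edge cuts of $G^*$ are exactly its bridges and can be listed in linear time by a standard depth-first search; the following sketch (our own illustrative code, assuming a simple graph given as an adjacency list, not the paper's implementation) shows one way to do it.

```python
# Illustrative sketch: find all bridges (one-edge cuts) of a simple
# undirected graph by the classic low-link DFS, O(n + m) time.
def bridges(adj):
    """adj[u] lists the neighbours of u; returns sorted bridge endpoints."""
    n = len(adj)
    disc, low = [-1] * n, [0] * n
    out, t = [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = t[0]; t[0] += 1
        skipped = False
        for v in adj[u]:
            if v == parent and not skipped:
                skipped = True          # skip one copy of the parent edge
                continue
            if disc[v] == -1:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:    # no back edge jumps over (u, v)
                    out.append((min(u, v), max(u, v)))
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return sorted(out)

# Two triangles 0-1-2 and 3-4-5 joined by the single edge 2-3:
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(bridges(adj))  # [(2, 3)]
```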
These one-edge cuts in the dual represent edges in the original polygon where a straight line cut applied to that edge would separate the polygon into two disjoint subpolygons. The two possible cases of how this separation can look are illustrated in Fig.~\ref{fig:edge-cuts}(b) and (c). \begin{figure} \caption{Two possible realizations of two components of a one-edge cut of $G^*$.} \label{fig:edge-cuts} \end{figure} Now we observe that when the first such cut is applied, the case shown in Fig.~\ref{fig:edge-cuts}(c) is not possible since the outer face is 3-connected due to the point at infinity and the upper and lower chain of the original polygon. So in what follows, we need only consider case (b) in the figure. Let $G_1^*$ and $G_2^*$ denote the subgraph duals of $P_1$ and $P_2$, respectively, in $G^*$. The algorithm now recursively creates a topological embedding for $P_1$ and $P_2$ and merges the two embeddings. We can show that this process is deterministic and results in a unique topological graph which can be embedded using straight line edges. This resulting polygonal graph is a simple polygon with the point at infinity if and only if $G^*$ is a triangulation dual of a simple polygon. Hence, we have the following theorem. \begin{theorem}\label{thm:tdrwkh} Given an input $\{G^*, \mathcal{H}\}$, the topological TDR- and TDRS-with-known-holes problems are decidable in linear time. \end{theorem} \begin{proof} Recall that the algorithm partitions the dual graph $G^*$ along a one-edge cut. In general, the algorithm processes each of the one-edge cuts in order starting from minimal edge cuts, i.e. cuts in which at least one of the disjoint resulting components has no other one-edge cut. Without loss of generality we denote by $P_2$ the polygon with no further one-edge cuts and $P_1$ the other component as shown in Fig.~\ref{fig:edge-cuts}(a). Let $G_1^*$ and $G_2^*$ denote the subgraph duals of $P_1$ and $P_2$, respectively, in $G^*$.
Let $F_1$ be the face in $G_1^*$ that contains $G_2^*$. We now consider the topological subgraphs $G_2^* \cup F_1$ and $G_1^*$. \begin{figure} \caption{(a) The realization of $G^*_1$ and $G^*_2$ from Fig.~\ref{fig:edge-cuts}(a), as $P_1$ and $P_2$, (b) $G^*_1$, (c) $G_2^* \cup F_1$.} \label{fig:decomposition} \end{figure} The vertices lying on the face $F_1$ in $G_1^*$ are ascribed to exactly one of $G_1^*$ or $G_2^*\cup F_1$ as follows. If the vertex is of degree 3, it goes to the copy of $F_1$ in $G_1^*$; if it is of degree 2, it is ascribed as indicated by the vertex assignment of $G^*$. See Fig.~\ref{fig:decomposition}(a), where vertices of $F_1$ in red are assigned to $G_1^*$ and vertices of $F_1$ in green are assigned to $G_2^*\cup F_1$. The algorithm can now reconstruct a unique topological graph having $G_2^* \cup F_1$ as a triangulation dual. This process creates a triangle for each vertex in $G_2^*\cup F_1$ whose orientation is unique for vertices of degree 3 by the location of the edges in $G_2^*\cup F_1$ or given by the hole assignment in $G^*$ for vertices of degree 2. In this case the adjacencies are as prescribed by the edges in $G^*$. Now for $G_1^*$, if it has no other one-edge cuts, we apply the same process as to $G_2^* \cup F_1$ and obtain a topological embedding. Otherwise, we recursively process the one-edge cuts of $G_1^*$ as above and also obtain a topological realization of $G_1^*$. Once we have the topological representations of each of $P_1$ and $P_2$, we merge them as follows. First we reinsert all the vertices of the hole face $F_1$, then grow the center of the wheel in $P_1$ into a fat point (Fig.~\ref{fig:reconstruction-triangulation}(a)). Next, replace this fat point with $P_2$ (shown in Fig.~\ref{fig:reconstruction-triangulation}(b)).
\begin{figure} \caption{ (a) Reconstruction of the triangulation of $P_1$ from $G^*_1$, (b) Reconstruction of the triangulation of $P_2$ from $G^*_2\cup F_1$.} \label{fig:reconstruction-triangulation} \end{figure} If two adjacent vertices in $F_1$ in $G_1^*$ are not adjacent in $G^*$, it means there is an intermediate vertex in $G_2^*\cup F_1$. So the shared edge between the topological triangles in $P_1$ associated to those two vertices in the dual is replaced by a slim triangle corresponding to the dual of the intermediate vertex in $G^*$ which landed in $G_2^*\cup F_1$ (illustrated by gray-filled triangles in Fig.~\ref{fig:inserted-triangles}). Thus we can merge the two topological graphs $P_1$ and $P_2$ in a unique way as prescribed by the order of the vertices in the face $F_1$ in $G^*$. Observe that none of these operations creates crossings, so the merged topological graph is planar. We continue this merging process until we have a topological representation of the potential triangulated polygon. Again, by construction, this topological graph is planar. \begin{figure} \caption{Merge of Figures~\ref{fig:reconstruction-triangulation}(a) and \ref{fig:reconstruction-triangulation}(b).} \label{fig:inserted-triangles} \end{figure} We then obtain a straight line embedding of this planar graph using F\'{a}ry's theorem. This is our candidate triangulated polygon plus the point at infinity. To conclude, we need to verify that this is a simple polygon, properly triangulated, and with or without interior points as the case may be. We proceed as follows. If the outer face of the embedding is not a triangle, we reject $G^*$. Otherwise, we consider each of the three vertices in the outer face as a potential point at infinity, i.e., a vertex connected to all other vertices on the outer face. The remaining structure should additionally be a simple, properly triangulated polygon. If this is the case, we accept $G^*$; otherwise we reject this point and move to another one in the outer face.
If none of the three points satisfies these conditions, we reject $G^*$ as not being the dual of a triangulation of a polygon. \begin{figure} \caption{(a) a dual graph of a triangulation, (b) a graph which does not correspond to a triangulation.} \label{fig:dual} \end{figure} \begin{figure} \caption{A planar graph and an assignment (shown with red arrows) of vertices of degree 2 to faces of the graph.} \label{fig:assignment} \end{figure} \qed \end{proof} \paragraph{Geometric TDR- and TDRS-with-known-holes.}\label{subs:gtdrwh} Given a precise geometric embedding of the input graph, we want to decide if the graph is the triangulation dual of a polygon with holes, with or without interior points. \begin{theorem}\label{thm:geomtdr-with-holes} The linear program described in Section~\ref{subs:gtdrwoh} gives a necessary condition for the realization of the geometric TDR- and TDRS-with-known-holes problems with input $\{G^*,\mathcal H\}$ in linear time, where the circumcenters/centroids of the triangles are given as the vertices of $G^*$. \end{theorem} \begin{proof} Note that if a vertex $v$ is of degree 2 in $G^*$, deciding which face incident to $v$ contains the associated hole can be done by observing the location of the convex angle formed by the two edges of the triangulation perpendicular to the edges incident to $v$ in $G^*$. (For an illustration, see Fig.~\ref{fig:2cases}(a) in which the hole can only reside in the right face.) We then set up an LP as in Theorem~\ref{thm:circumcentres}, which gives a potential realization of the triangulation. We then test this solution to verify that the polygon and holes obtained are simple. \end{proof} \paragraph{Topological TDR- and TDRS-with-unknown-holes.} \label{subs:ttdrwuh} The input for this version of the problem is a planar graph $G^*$ with its face-embedding. However, the total assignment of its vertices of degree 2 to faces of $G^*$ is unknown.
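To illustrate the source of hardness, every vertex of degree 2 lies on exactly two faces, so a brute-force solver would have to examine up to $2^k$ total assignments for $k$ such vertices. A sketch of the enumeration (our own illustrative code, not part of the paper's algorithms; it assumes the faces are given as vertex lists of a $2$-connected plane graph, where a vertex of degree $d$ lies on exactly $d$ faces):

```python
# Illustrative sketch: enumerate all total assignments of degree-2 vertices
# to one of their two incident faces.  Faces are given as vertex lists of a
# 2-connected plane graph, so "appears in exactly two faces" = "degree 2".
from itertools import product

def candidate_assignments(faces):
    """Yield every dict mapping each degree-2 vertex to a chosen face index."""
    incident = {}
    for i, f in enumerate(faces):
        for v in f:
            incident.setdefault(v, []).append(i)
    deg2 = {v: fs for v, fs in incident.items() if len(fs) == 2}
    verts = sorted(deg2)
    for choice in product(*(deg2[v] for v in verts)):
        yield dict(zip(verts, choice))

# A triangulated square: two triangles plus the outer face [0, 1, 2, 3].
faces = [[0, 1, 2], [0, 2, 3], [0, 1, 2, 3]]
print(list(candidate_assignments(faces)))  # 4 assignments for vertices 1 and 3
```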
Here, we only state that the problem is NP-complete (Theorem~\ref{thm:tdrwh}) and proceed with our analysis. The proof of this claim is provided in Section~\ref{sec:npcomplete}. \begin{theorem}\label{thm:tdrwh} Determining if an input graph $G^*$ is the dual of a triangulation of a polygon with holes and with or without interior points is NP-complete. \end{theorem} \paragraph{Combinatorial TDR- and TDRS-with-unknown-holes.}\label{subs:ctdrwh} In this subsection, the input graph $G^*$ is given by its adjacency matrix. We will show that the 3-SAT reduction from the topological TDR- and TDRS-with-unknown-holes problems (see Section \ref{sec:npcomplete}) holds as well. If the embedding found by the combinatorial TDR solver is the same as in the reduction, we would need to solve the 3-SAT problem. However, it remains to be shown that there does not exist a different embedding with an alternate polygonal realization for which the answer is ``yes'' without the 3-SAT formula being satisfiable. Recall that a 3-connected planar graph has a unique embedding in the plane (Proposition~\ref{prop:simple-properties}). We now remove the vertices of degree 2 from the 3-SAT reduction graph and replace each of them by an edge, thus giving a 3-regular 3-connected graph with a unique embedding. If the combinatorial TDR- and TDRS-with-unknown-holes problems found a different embedding, we can replace the vertices of degree 2 in this alternate embedding with a single edge, thus obtaining a different embedding for the 3-regular graph, which is a contradiction. Hence, the combinatorial graph obtained from the reduction above has a polygonal realization if and only if the underlying formula is satisfiable, and we obtain: \begin{theorem}\label{thm:ctdrwh} The combinatorial TDR- and TDRS-with-unknown-holes problems are NP-complete. \end{theorem} \section{NP-Completeness of topological TDR- and TDRS-with-unknown-holes problems} \label{sec:npcomplete} In this section, we prove Theorem~\ref{thm:tdrwh}.
Let $X=(x_1,x_2,\ldots,x_m)$ be a set of boolean variables. Let $\varphi$ be a 3-SAT boolean formula of the type $\varphi=(a_{11}\vee a_{12}\vee a_{13})\wedge (a_{21}\vee a_{22}\vee a_{23})\wedge \ldots \wedge (a_{n1}\vee a_{n2}\vee a_{n3})$, where $a_{ij}$ is either $x_k$ or $\neg x_k$ (called a \emph{literal}). We restrict our attention to planar 3-SAT formulae. A~planar 3-SAT formula, by definition, can be represented by a planar graph which has a vertex for every clause and every variable, and has an edge connecting each variable to every clause in which it appears (negated or non-negated). Planar 3-SAT is known to be NP-complete~\cite{cit:lich}. We will reduce planar 3-SAT to dual triangulation recognition by constructing a graph $G^*$ that is the dual of a triangulation of a polygon with holes if and only if the given formula $\varphi$ is satisfiable. Our reduction creates a graph $G^*$ which consists of four types of gadgets (Fig.~\ref{fig:4types-faces}(a)--(d) resp.): \begin{enumerate} \item variable faces which correspond to variable vertices; \item clause gadgets which correspond to clause vertices; \item splitter faces which correspond to some edges connecting a variable vertex to a clause; and \item absorber gadgets which act as dead ends for extra splitter wires which are not needed. \end{enumerate} \begin{figure} \caption{The types of faces and gadgets of $G^*$: (a) a~variable face, (b) a clause gadget, (c) a~splitter face and (d) an absorber gadget.} \label{fig:4types-faces} \end{figure} Here we describe the purpose of each face and gadget type. Each face contains some vertices of degree 2 which are only compatible with a triangulation dual if they are adjacent to a hole in the polygon. Each such vertex lies on the boundary between two faces, and there is a choice of which of the two faces the hole lies in. Our construction ensures that there cannot be a hole on both faces of a degree 2 vertex.
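The counting condition of Lemma~\ref{lem:deg2=hole}, namely that every face receives zero or at least three vertices of degree 2, is the basic consistency test behind these gadgets. A minimal sketch of the check (illustrative names of our own, not the paper's code):

```python
# Illustrative sketch: check the necessary condition of Lemma (deg2=hole).
# Under a total assignment, every face that receives any degree-2 vertices
# must receive at least three of them (a hole needs at least three edges).
from collections import Counter

def assignment_ok(assignment):
    """assignment: dict mapping each degree-2 vertex to its chosen face."""
    counts = Counter(assignment.values())
    return all(c >= 3 for c in counts.values())

print(assignment_ok({'u1': 'F', 'u2': 'F', 'u3': 'F'}))  # True: F receives 3
print(assignment_ok({'u1': 'F', 'u2': 'F'}))             # False: F receives only 2
```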
Also, note that even though our examples contain many faces of length 6, the length of faces is in fact determined by the given formula~$\varphi$ and can be arbitrary. Recall now Lemma \ref{lem:deg2=hole} for the fact that if $G^*$ is the dual graph of a polygon with holes, then each face of $G^*$ that encloses a hole of the polygon has at least three vertices of degree 2 on its boundary. \noindent {\em Variable face.~~} Each variable face has exactly three vertices of degree 2. This means that either there is a hole in the variable face (which corresponds to an assignment of false to that variable in $\varphi$) or there is no hole and each of those three vertices of degree 2 has a hole on the other face it belongs to. This other face is either a part of a clause gadget, a splitter face or an absorber gadget. \noindent{\em Clause gadget.~~} The clause gadget (see Fig.~\ref{fig:4types-faces}(b)) has one ``main'' clause face with two vertices of degree 2 that are shared with neighbouring faces containing no other vertices of degree 2. This means that there must always be a hole in the clause face. Each variable contributes a vertex of degree 2 to a clause face (either directly, or via a splitter face). We need at least three vertices of degree 2 for a hole. Hence, at least one of the degree 2 vertices has the hole in the clause face; otherwise $G^*$ is not the dual graph of a polygon with holes. If a variable is non-negated in the clause, then the clause face is connected directly to the variable face, unless we need extra non-negated copies, in which case we use the double-in-series splitter trick (see Splitter face). If the variable is negated in the clause, the corresponding degree 2 vertex is contributed by a splitter face. \noindent{\em Splitter face.~~} The splitter face (see Fig.~\ref{fig:4types-faces}(c)) ``receives'' a degree 2 vertex corresponding to a variable and does two things at the same time: (1) it creates two copies of that variable, and (2) it negates each of them.
Hence, the splitter face is always incident to precisely three vertices of degree 2. The splitter is connected to a variable face or a splitter face by sharing a pair of edges centred around a vertex of degree 2. This is where it ``receives'' the vertex of degree 2 from. It ``passes'' the negated copies of that vertex to another splitter, to an absorber or to a clause gadget again by sharing a pair of edges centred around the copy (a vertex of degree 2). The splitter face always creates two negated copies of a vertex of degree 2. If only one copy is needed, the other one is passed to a neighbouring absorber face (see below). If a non-negated copy of a vertex of degree 2 is needed, we pass it through another splitter to introduce ``double negation'' (and absorb the redundant copy). The polygon may or may not contain holes in the splitter faces. If a hole is present, it indicates that the splitter passes a degree 2 vertex forward corresponding to the negated form of the variable. \noindent{\em Absorber gadget.~~} The absorber gadgets always correspond to parts of a triangulation, regardless of the rest of the structure of the graph and its polygonal interpretation. Their purpose is to consume unwanted vertices of degree 2 and provide space for holes of a polygon. The vertex of degree 2 is passed to a part of an absorber with three degree 2 vertices, so the face can contain a hole regardless of whether the degree 2 vertex is assigned to be part of that hole or not. We construct a graph such that if the variable $x_i$ corresponding to the variable face $F_{x_i}$ is false in a satisfying assignment of $\varphi$, the degree 2 vertices are assigned to $F_{x_i}$ (the red arrows in our figures point inwards), and if the variable is true in the assignment, then all the degree 2 vertices are assigned to the other face. The construction begins by constructing the planar graph $G_\varphi$, which represents $\varphi$, and embedding it in the plane.
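Building the variable--clause incidence graph $G_\varphi$ is straightforward; the following sketch uses our own encoding of literals as signed integers (negative meaning negated), which is a convention for illustration only, not part of the construction itself.

```python
# Illustrative sketch: the variable-clause incidence graph of a 3-SAT formula.
# One vertex per variable (a positive integer) and per clause ('c0', 'c1', ...);
# an edge whenever the variable occurs in the clause, negated or not.
def incidence_graph(clauses):
    """clauses: list of 3-tuples of nonzero ints; negative = negated literal."""
    edges = set()
    for ci, clause in enumerate(clauses):
        for lit in clause:
            edges.add((abs(lit), 'c%d' % ci))
    return sorted(edges, key=str)

# The example formula from the text (third clause with negated x3, x4):
phi = [(1, -2, 3), (2, 3, 4), (1, -3, -4)]
print(incidence_graph(phi))  # 9 variable-clause incidences
```

Planarity of the resulting graph must still be verified separately; the reduction only applies to planar formulae.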
Later, we will replace its vertices by corresponding gadgets. However, for this to be possible, the graph needs to be modified first. Each edge in $G_\varphi$ indicates a ``transfer'' of a degree 2 vertex. We first need to modify the graph so that the vertices representing variables of $\varphi$ have degree precisely~3. If the degree of such a vertex $x_i$ is less than 3, we increase it by attaching the required number of new vertices (those will be replaced by absorber gadgets). If the degree of $x_i$ is more than 3, we reduce its degree by detaching $\deg(x_i)-2$ edges consecutive in cyclic order around $x_i$ (with respect to the embedding of $G_\varphi$), routing them into a new splitter vertex $s$, and connecting $x_i$ to the splitter. Note that this negates the variable $x_i$, so some of the edges may need to be routed through another splitter to cancel this negation. This produces a plane graph where $x_i$ has degree $3$ and the splitter vertex $s$ has degree $\deg(x_i) - 1$. Repeatedly applying this construction, the degree of $s$ can be decreased to $3$. By the construction above, we obtain a plane graph $H_\varphi$ where all the variable, splitter and clause vertices have degree $3$, and absorbers have degree $1$. Now we replace every vertex with the respective gadget so that every edge in $H_\varphi$ is represented by a degree 2 vertex surrounded by edges shared between two gadgets, and so that the topology of the gadgets is equivalent to the embedding of $H_\varphi$ (this is similar to constructing a dual graph of $H_\varphi$). Let us denote the obtained graph by $H^*$. The embedding of $H^*$ contains some ``void'' areas between some gadgets. Those areas can be suitably attributed to faces of gadgets (by removing edges). We obtain a graph $G^*$, called the \emph{gadget graph of $\varphi$}, formed by vertices of degree $3$ and $2$. See Fig.~\ref{fig:example-graph-pl3SAT}(b) for an example of a formula and the corresponding gadget graph.
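The degree-normalization step can be sketched combinatorially as follows. This illustrative code of our own ignores the planar-embedding bookkeeping (the detached edges must be consecutive in cyclic order) and the negation corrections described above; it only shows how repeatedly introducing splitter vertices reduces all degrees to at most 3.

```python
# Illustrative sketch (combinatorics only, no embedding): repeatedly detach
# all but two edges of any vertex of degree > 3, route them into a fresh
# splitter vertex, and connect the vertex to the splitter.
def normalize_degrees(adj):
    """adj: dict vertex -> list of neighbours of a simple graph.  Returns a
    new adjacency dict in which every vertex has degree at most 3."""
    adj = {v: list(ns) for v, ns in adj.items()}
    fresh = 0
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) > 3:
                moved = adj[v][2:]              # keep two edges, move the rest
                s = ('s', fresh); fresh += 1    # fresh splitter vertex
                adj[v] = adj[v][:2] + [s]       # v now has degree 3
                adj[s] = moved + [v]            # s has degree deg(v) - 1
                for w in moved:                 # redirect moved edges to s
                    adj[w] = [s if x == v else x for x in adj[w]]
                changed = True
    return adj

# A star with center of degree 5 needs two cascaded splitters:
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
out = normalize_degrees(star)
print(max(len(ns) for ns in out.values()))  # 3
```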
We can now argue that the graph $G^*$ is a triangulation dual if and only if the formula~$\varphi$ is satisfiable. \begin{lemma} \label{lem:equivalence} The gadget graph $G^*$ of formula $\varphi$ is the dual of a triangulation of a simple polygon with holes if and only if $\varphi$ is satisfiable. \end{lemma} \begin{proof} If the formula is satisfiable, fix a satisfying truth assignment for its variables. We use this assignment to decide whether the degree 2 vertices in a variable face have a corresponding hole in this face or outside of it. Either choice forces the assignment in the other face. An absorber face will have a hole regardless of the variable assignment. In a splitter face, the hole (no-hole) choice in the variable face forces a no-hole (hole, resp.) choice in the splitter face since one vertex of degree 2 is now outside (correspondingly, now inside) the splitter face. In the case of the clause face, if the variable corresponding to a vertex of degree 2 is true and appears non-negated (or is false and appears negated), then that vertex has no hole on the other side. This means the clause face now has at least three vertices of degree 2 assigned to it, so we can safely place a hole in the clause face. Observe that since the formula is satisfiable, every clause face has at least one true literal and hence a third vertex of degree 2. So, variable and splitter faces have consistent holes by our choice of their placement; absorber faces are indifferent to our choice of hole locations; and clause faces always have consistent holes since at least one of the vertices of degree 2 has no hole on the other side. Now assume that the graph is realizable as a triangulation dual. Then assign each variable of $\varphi$ the value false if there is a hole in the corresponding variable face and true if there is no hole in the variable face.
Observe that the parity of splitter faces connecting the variable face and the clause face corresponds by construction to whether the variable appears negated or non-negated in the clause. Thus it follows that if there is (resp. there is not) a hole in the variable face, then the associated vertex of degree 2 is assigned to the clause face if and only if the variable appears non-negated (negated, resp.). Since the clause face was realizable as the dual of a triangulation with a hole, it follows that at least one of the literals appearing in the clause is set to true and hence, the clause is satisfied. \end{proof} Fig.~\ref{fig:example-graph-pl3SAT}(b) illustrates a graph $G^*$ associated to the planar 3-SAT formula $\varphi = (x_1\vee\neg x_2\vee x_3)\wedge (x_2\vee x_3\vee x_4)\wedge (x_1\vee\neg x_3\vee\neg x_4)$ together with a correct assignment of vertices of degree 2 to faces of $G^*$. Thus $G^*$ is a dual graph of a triangulation, which implies that the formula is satisfiable for the truth assignment $U$: $(x_1,x_2,x_3,x_4)^U = (T,F,T,F)$. \begin{figure} \caption{(a) A part of the gadget graph associated to a planar 3-SAT formula, (b) An example of a gadget graph $G^*$ corresponding to the planar 3-SAT formula $\varphi = (x_1\vee\neg x_2\vee x_3)\wedge (x_2\vee x_3\vee x_4)\wedge (x_1\vee\neg x_3\vee x_4)$. The red arrows show the assignment of vertices of degree $2$ to faces of $G^*$.} \label{fig:part-planar3SAT} \label{fig:example-graph-pl3SAT} \end{figure} \section{Conclusions and Open Questions} \label{sec:conclusion} We provided an exhaustive analysis of the triangulation dual recognition problem. We showed that some of its variants can be solved in linear time and some are NP-complete. Our work focused on duals of general triangulations of simple polygons. We proposed several models for the geometric setting. We presented a method which, in linear time, either finds a candidate solution or rejects. The candidate solution needs to be further tested.
As our approach is not capable of enumerating all the candidate solutions, it remains an open problem whether recognition is possible under either of these models. Any bounds for necessary and/or sufficient conditions under other choices of triangle representatives are open too. \end{document}
Welcome to Physics Problems Q&A, where you can ask questions and receive answers from other members of the community. The answer should be 518 A, but my calculation comes to something around 3255 A. Where is the mistake? I don't see any mistake. The difference is a factor of exactly $2\pi$, which should be in the formula. According to the answer in part (a), the radial current density at $\rho=3, z=2$ is $180\,A/m^2$. The surface area of the band is $2\pi \times 3 \times 0.8$, so if the current density were constant, the current through the band would be $2714\,A$. The current density increases with $z$, so the total current must be greater than $2714\,A$. The answer of $518\,A$ must be wrong. You need to have more confidence in your calculations. Also, learn to make estimates and checks such as the one I did above.
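A quick numeric check (in Python; all numbers are taken from the discussion above) confirms both observations: the ratio of the two answers is essentially $2\pi$, and a constant density of $180\,A/m^2$ over the band already gives about $2714\,A$.

```python
import math

# Ratio of the computed answer to the book's answer: off by a factor of 2*pi
ratio = 3255 / 518
assert abs(ratio - 2 * math.pi) < 0.01

# Lower bound on the current: constant density 180 A/m^2 over the band
# of radius 3 m and height 0.8 m
area = 2 * math.pi * 3 * 0.8        # lateral surface area of the band
current = 180 * area
assert round(current) == 2714       # ~2714 A, so 518 A cannot be right
```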
\begin{document} \begin{abstract} We show that the cyclic sieving phenomenon of Reiner--Stanton--White together with necklace generating functions arising from work of Klyachko offer a remarkably unified, direct, and largely bijective approach to a series of results due to Kra{\'s}kiewicz--Weyman, Stembridge, and Schocker related to the so-called higher Lie modules and branching rules for inclusions $ C_a \wr S_b \hookrightarrow S_{ab} $. Extending the approach gives monomial expansions for certain graded Frobenius series arising from a generalization of Thrall's problem. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} The \textit{Lie module} $\cL_n$ is the $n$th degree component of the free Lie algebra over $\bC$ with $m$ generators, which is naturally a $\GL(\bC^m)$-module. The Lie modules were famously studied by Thrall \cite{MR0006149} in the 1940's and have been extensively studied by Brandt \cite{MR0011305}, Klyachko \cite{klyachko74}, Kra{\'s}kiewicz--Weyman \cite{MR1867283}, Garsia \cite{MR1039352}, Gessel--Reutenauer \cite{MR1245159}, Reutenauer \cite{MR1231799}, Sundaram \cite{MR1273390}, Schocker \cite{MR1984625}, and many others. Thrall more generally introduced a certain $\GL(\bC^m)$-decomposition $\oplus_{\lambda \in \Par} \cL_\lambda$ of the tensor algebra of $\bC^m$ arising from the Poincar\'e--Birkhoff--Witt theorem, where $\cL_{(n)} = \cL_n$. The $\cL_\lambda$ are sometimes called the \textit{higher Lie modules}. Thrall's original paper considered the determination of the multiplicity of the irreducible $V^\mu$ in $\cL_\lambda$, which is often referred to as \textit{Thrall's problem}. This problem is still open 75 years later. See \Cref{ssec:background_thrall} and \cite{MR1231799} for more background on Thrall's problem and \cite{reiner15} for a recent summary of related work. See \Cref{sec:background} for missing definitions. 
Kra{\'s}kiewicz--Weyman \cite{MR1867283} gave a combinatorial solution to Thrall's problem when $\lambda = (n)$. In particular, they showed the multiplicity of $V^\mu$ in $\cL_{(n)}$ is \[ \#\{T \in \SYT(\mu) : \maj(T) \equiv_n 1\}, \] i.e.~the number of standard tableaux of shape $\mu$ with major index $1$ modulo $n$. Their argument crucially hinges upon the formula \begin{equation}\label{eq:SYT_evals}\label{eq:kw_csp} \SYT(\mu)^{\maj}(\omega_n^r) = \chi^\mu(\sigma_n^r) \end{equation} where we write the major index generating function as \[ \SYT(\mu)^{\maj}(q) \coloneqq \sum_{T \in \SYT(\mu)} q^{\maj(T)}, \] $\omega_n$ is a primitive $n$th complex root of unity, $\sigma_n$ is an $n$-cycle in the symmetric group $S_n$, and $\chi^\mu$ is the character of the $S_n$-irreducible indexed by a partition $\mu$ of $n$. The analysis in \cite{MR1867283} is somewhat indirect. It involves results of Lusztig and Stanley on coinvariant algebras and an intricate though beautiful argument involving $\ell$-decomposable partitions. Equation~\eqref{eq:kw_csp} bears a striking resemblance to the cyclic sieving phenomenon (CSP) of Reiner--Stanton--White, which we now recall. \begin{Definition}{{\cite{MR2087303}}}\label{def:csp1} Suppose $C_n$ is a cyclic group of order $n$ generated by $\sigma_n$, $W$ is a finite set on which $C_n$ acts, and $f(q) \in \bZ_{\geq 0}[q]$. We say the triple $(W, C_n, f(q))$ exhibits the \textit{cyclic sieving phenomenon (CSP)} if for all $r \in \bZ$, \begin{align}\label{eq:CSP_eval} \begin{split} f(\omega_n^r) &= \# W^{\si_n^r} \\ &\coloneqq \# \{ w \in W: \si_n^r \dd w = w \} = \chi^W(\sigma_n^r), \end{split} \end{align} where $ \omega_n $ is a primitive $ n $th root of unity and $\chi^W$ is the character of $W$ as a $ C_n $-module. \end{Definition} \noindent See \cite{MR2866734} for an excellent survey and introduction to cyclic sieving. The following cyclic sieving result, also due to Reiner--Stanton--White, is intimately related to \eqref{eq:SYT_evals}.
We use $ W^{\stat}(q) $ to denote $ \sum_{ w \in W} q^{\stat(w)} $. \begin{Theorem}{{\cite[Theorem~8.3, Proposition~4.4]{MR2087303}}}\label{thm:rsw_alpha} Let $ \alpha \vDash n $, let $ \W_\al $ denote the set of all words of content $ \al $, let $ C_n $ act on $ \W_\al $ by rotation, and let $ \maj $ denote the major index statistic. Then, the triple \[ \lp \W_\alpha, C_n, \W_\alpha^{\maj}(q) \rp \] exhibits the CSP. \end{Theorem} Since the sets $\W_\alpha$ are precisely the $S_n$-orbits for the natural $ S_n $ action on length $n$ words, \Cref{thm:rsw_alpha} may be thought of as a ``universal sieving result'' as follows. A very similar observation appeared in \cite[Prop.~3.1]{MR2837599}. \begin{Corollary}\label{cor:Sn_univ} Let $W$ be a finite set of length $n$ words closed under the $S_n$-action. Then, the triple \[ \lp W, C_n, W^{\maj}(q) \rp \] exhibits the CSP. \end{Corollary} In \cite{Ahlbach201837}, the authors introduced a new statistic on words, $\flex$. As an example, $\flex(221221) = 2 \cdot 3 = 6$ since $221221$ is the concatenation of $2$ copies of the primitive word $221$ and $221221$ is third in lexicographic order amongst its $3$ cyclic rotations. See \Cref{def:flex} for details. The $\flex$ statistic was designed to be ``universal'' for cyclic rather than symmetric actions on words in the following sense. \begin{Lemma}{{\cite[Lemma~8.3]{Ahlbach201837}}} Let $W$ be a finite set of length $n$ words closed under the $C_n$-action, where $C_n$ acts by cyclic rotations. Then, the triple \[ \lp W, C_n, W^{\flex}(q) \rp \] exhibits the CSP. \end{Lemma} A corollary of these universal sieving results is the following equidistribution result. A more refined statement appeared in \cite{Ahlbach201837}. \begin{Theorem}\cite[Theorem~8.4]{Ahlbach201837} \label{thm:majn_flex} Let $\W_n$ denote the set of length $n$ words, let $\maj_n$ denote the major index modulo $n$ taking values in $ \{1, \dots, n \} $, and let $\cont$ denote the \textit{content} of a word. 
We then have \[ \W_n^{\cont,\maj_n}(\mathbf{x}; q) = \W_n^{\cont,\flex}(\mathbf{x}; q). \] \end{Theorem} \noindent In \Cref{sec:KW}, we show that the following well-known result of Kra{\'s}kiewicz--Weyman is essentially a corollary of \Cref{thm:majn_flex}. Here $\chi^r$ is the linear representation of the cyclic group $C_n$ given by $ \chi^r(\si_n) = \w_n^r $. \begin{Theorem}{\cite{MR1867283}}\label{thm:KW} We have \[ \Ch \chi^r\ind_{C_n}^{S_n} = \sum_{\lambda \vdash n} a_{\lambda, r} s_\lambda(\mathbf{x}) \] where \[ a_{\lambda, r} \coloneqq \#\{Q \in \SYT(\lambda) : \maj(Q) \equiv_n r\}. \] \end{Theorem} Klyachko \cite[Prop.~1]{klyachko74} showed that the Lie modules $\cL_n$ and the induced representations $\chi^1\ind_{C_n}^{S_n}$ are Schur--Weyl duals. The $\lambda = (n)$ case of Thrall's problem thus follows from \Cref{thm:KW} when $r=1$. More precisely, Klyachko expressed both the characteristic of $\chi^1\ind_{C_n}^{S_n}$ and the character of $\cL_{n}$ as content generating functions on primitive necklaces of length $n$ words. We generalize this observation in \Cref{sec:KW} as follows, which also naturally motivates the $\flex$ statistic. \begin{Theorem}\label{thm:ind_NFD} Let $\NFD_{n, r}$ denote the set of necklaces of length $n$ words with \textit{frequency} dividing $r$, $\F_{n, r}$ denote the set of length $n$ words with $\flex$ equal to $r$, and $ \M_{n,r} $ denote the set of length $ n $ words with $ \maj_n $ equal to $ r $. Then \[ \Ch \chi^r\ind_{C_n}^{S_n} = \NFD_{n, r}^{\cont}(\mathbf{x}) = \F_{n, r}^{\cont}(\mathbf{x}) = \M_{n,r}^{\cont}(\bfx). \] \end{Theorem} Our new proof of Kra{\'s}kiewicz--Weyman's result reduces the problem of finding a bijective proof of a well-known symmetry result following from \Cref{thm:KW} to finding a bijective proof of the above equidistribution result, \Cref{thm:majn_flex}; see \Cref{cor:symmetry}.
It also provides a thus far rare example of an instance of cyclic sieving being used to prove other results rather than vice-versa. In \Cref{sec:Cn_branching}, we give a new proof of a result of Stembridge \cite{MR1023791} which settled a conjecture of Stanley describing the irreducible multiplicities of induced representations $\chi^r\ind_{\langle\sigma\rangle}^{S_n}$ for arbitrary $ \si \in S_n $. The corresponding generalized major index statistics arise very naturally from the combinatorics of orbits and cyclic sieving. In \Cref{sec:HLM}, we prove and generalize a result of Schocker \cite{MR1984625} concerning the higher Lie modules. Thrall's problem may be reduced to the $\lambda = (a^b)$ case by the Littlewood--Richardson rule. Bergeron--Bergeron--Garsia \cite{MR1035495} identified the Schur--Weyl dual of $\cL_{(a^b)}$ as a certain induced module $\chi^{1, 1}\ind_{C_a \wr S_b}^{S_{ab}}$ where $C_a \wr S_b$ is a wreath product; see \Cref{ssec:background_wreaths} for details. Schocker gave a formula for the multiplicity of the irreducible $V^\mu$ in $\cL_{(a^b)}$, though it involves many subtractions and divisions in general. We generalize Schocker's formula to all one-dimensional representations of $C_a \wr S_b$. In our approach, the subtractions and divisions in Schocker's formula arise naturally from the underlying combinatorics using M\"obius inversion and Burnside's lemma. The basic outline of each argument is the same: we obtain an orbit generating function from an explicit basis of a $\GL(V)$-module, we construct an appropriate necklace generating function, we use cyclic sieving to rewrite this generating function using words and descent statistics like the major index, and we finally apply RSK to get a Schur expansion. Transitioning from an orbit generating function to a necklace generating function where we can apply cyclic sieving involves various combinatorial techniques. 
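The combinatorial equalities $\NFD_{n,r}^{\cont} = \F_{n,r}^{\cont} = \M_{n,r}^{\cont}$ in \Cref{thm:ind_NFD} are easy to confirm by brute force for small $n$. The following Python script (purely illustrative; the helper names are ours, and the definitions match \Cref{def:flex} and $\maj_n$) compares, content by content, the words with $\flex = r$, the words with $\maj_n = r$, and the necklaces with frequency dividing $r$, over all $3^6$ words of length $6$ on a three-letter alphabet.

```python
from collections import Counter
from itertools import product

N, ALPHABET = 6, "123"

def rotations(w):
    return [w[i:] + w[:i] for i in range(len(w))]

def flex(w):
    rots = sorted(set(rotations(w)))      # distinct rotations = the necklace
    freq = len(w) // len(rots)            # w = v^freq with v primitive
    return freq * (rots.index(w) + 1)     # flex = freq * lex

def maj_n(w):
    m = sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])
    return (m - 1) % len(w) + 1           # maj mod n, valued in {1, ..., n}

def content(w):
    return tuple(w.count(a) for a in ALPHABET)

words = ["".join(t) for t in product(ALPHABET, repeat=N)]
necklaces = {min(rotations(w)) for w in words}   # one representative per orbit

for r in range(1, N + 1):
    by_flex = Counter(content(w) for w in words if flex(w) == r)
    by_maj  = Counter(content(w) for w in words if maj_n(w) == r)
    by_neck = Counter(content(w) for w in necklaces
                      if r % (N // len(set(rotations(w)))) == 0)
    assert by_flex == by_maj == by_neck
```

The same loop with other small values of $N$ and larger alphabets gives identical agreement, consistent with the theorem.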
In \Cref{sec:mash}, we discuss applying aspects of our approach to Thrall's problem in general. The arguments in the preceding sections strongly suggest attacking Thrall's problem by considering all branching rules for the inclusion $C_a \wr S_b \hookrightarrow S_{ab}$ rather than considering only one such rule. To that end, consider the irreducible representations $S^{\ul}$ of $C_a \wr S_b$, which are indexed by the set of $a$-tuples $ \ul = (\lam^{(1)}, \dots, \lam^{(a)}) $ of partitions with $ \sum_{r = 1}^{a} |\lam^{(r)}| = b $. We first give the following plethystic expression for the corresponding characteristic. \begin{Theorem}\label{thm:casbchars} For all integers $ a, b \ge 1$, we have \[ \Ch S^{\ul} \ind_{C_a \wr S_b}^{S_{ab}} = \prod_{r = 1}^{a} s_{\lambda^{(r)}}[\NFD_{a,r}^{\cont}(\mathbf{x})]. \] \end{Theorem} We then identify the analogues of the $\flex$ and $ \maj_n $ statistics in this context, which send words to such $ a $-tuples of partitions. We consequently give the following monomial expansion of the corresponding graded Frobenius series. See \Cref{ssec:background_wreaths} and \Cref{sec:mash} for details. \begin{Theorem}\label{thm:grfrob_flexab} Fix integers $ a, b \ge 1 $. We have \begin{align*} \sum_{\ul} \dim S^{\ul} \cdot \Ch \lp S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} \rp q^{\ul} &= \W_{ab}^{\cont,\flex_a^b}(\mathbf{x}; q) \\ &= \W_{ab}^{\cont,\maj_a^b}(\mathbf{x}; q) \end{align*} where the sum is over all $ a $-tuples $ \ul = (\lam^{(1)}, \dots, \lam^{(a)}) $ of partitions with $ \sum_{r = 1}^{a} |\lam^{(r)}| = b $ and the $q^{\ul}$ are independent indeterminates. \end{Theorem} The rest of the paper is organized as follows. In \Cref{sec:background}, we review combinatorial and representation-theoretic background. In particular, we summarize work related to Kra{\'s}kiewicz--Weyman's result, \Cref{thm:KW}, in \Cref{ssec:background_KW}, and we discuss the current status of Thrall's problem in \Cref{ssec:background_thrall}. 
In \Cref{sec:KW}, we present our proof of Kra{\'s}kiewicz--Weyman's result, \Cref{thm:KW}, using cyclic sieving. In \Cref{sec:Cn_branching}, we give an analogous proof of Stembridge's result, \Cref{thm:Stembridge}. In \Cref{sec:HLM}, we give generalizations of Schocker's result, \Cref{thm:GeneralizedSchocker}. In \Cref{sec:mash}, we define the statistics $\flex_a^b$ and $\maj_a^b$, prove \Cref{thm:casbchars} and \Cref{thm:grfrob_flexab}, and discuss how the approach could be used to find the branching rules for $ C_a \wr S_b \hookrightarrow S_{ab} $. \section{Background}\label{sec:background} Here we provide background on words, tableaux, Schur--Weyl duality, Kra{\'s}kiewicz--Weyman's result, Thrall's problem, and certain wreath products for use in later sections. All representations will be over $\bC$. We write $[n] \coloneqq \{1, \ldots, n\}$, $ \#S $ for the cardinality of a set $S$, and \begin{align*} \ch{S}{k} &\coloneqq \{\text{all $k$-element subsets of $S$}\}, \\ \mch{S}{k} &\coloneqq \{\text{all $k$-element multisubsets of $S$}\}. \end{align*} \subsection{Words}\label{ssec:background_words} We now recall standard combinatorial notions on words and fix some notation. A \textit{word} $w$ of \textit{length} $ n $ is a sequence $ w = w_1 w_2 \cdots w_n $ of \textit{letters} $w_i \in \bZ_{\ge 1}$. The \textit{descent set} of $w$ is $\Des(w) \coloneqq \{1 \leq i < n : w_i > w_{i+1} \} $. The \textit{major index} of $w$ is $\maj(w) \coloneqq \sum_{i \in \Des(w)} i$. Let $ \maj_n(w) $ denote $ \maj(w) $ modulo $ n $ taking values in $ [n] $. The \textit{content} of a word $w$, written $\cont(w)$, is the sequence $\alpha = (\alpha_1, \alpha_2, \ldots)$ where $\alpha_j$ is the number of $j$'s in $w$. Such a sequence $\alpha$ is called a (weak) \textit{composition} of $n$, written $\alpha \vDash n$. 
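These statistics are straightforward to compute directly from the definitions. The short Python sketch below (illustrative only; the function names are ours) computes the descent set, major index, and content of a word, checked against the word $15531553$ of \Cref{ex:wordandnecklace}.

```python
def descent_set(w):
    # positions i (1-indexed) with w_i > w_{i+1}
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def maj(w):
    # major index: sum of the descent positions
    return sum(descent_set(w))

def content(w):
    # (number of 1s, number of 2s, ...) up to the largest letter used
    return tuple(w.count(a) for a in range(1, max(w) + 1))

w = (1, 5, 5, 3, 1, 5, 5, 3)
assert descent_set(w) == {3, 4, 7}
assert maj(w) == 14
assert content(w) == (2, 0, 2, 0, 4)
```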
For $ n \ge 1 $ and $ \al \vDash n $, we write the set of words of length $n$ or content $\alpha$ as \begin{align*} \W_n &\coloneqq \{w = w_1 \cdots w_n : w_i \in \bZ_{\geq 1}\}, \\ \W_\alpha &\coloneqq \{w \in \W_n : \cont(w) = \alpha \}. \end{align*} The set of all words with letters from $\bZ_{\geq 1}$ is a monoid under concatenation. A word is \textit{primitive} if it is not a power of a smaller word. Any non-empty word $w$ may be written uniquely as $w = v^f$ for $f \geq 1$ with $ v $ primitive. The \textit{period} of $w$, denoted $ \period(w) $, is the length of $v$. The \textit{frequency} of $w$, denoted $\freq(w)$, is $f$. The symmetric group $ S_n $ acts on $ \W_n $ by permuting the letters according to \begin{align}\label{eq:rotaction} \si \cdot w_1 w_2 \cdots w_n \coloneqq w_{\si^{-1}(1)} w_{\si^{-1}(2)} \cdots w_{\si^{-1}(n)} \end{align} for all $ \si \in S_n $. In particular, letting $\sigma_n \coloneqq (1 \ 2 \ \cdots \ n) \in S_n $ and $ C_n \coloneqq \langle \si_n \rangle $, the cyclic group $ C_n $ acts on $ \W_n $ by rotation according to \begin{equation*} \si_n \cdot w_1 w_2 \cdots w_n \coloneqq w_{n} w_{1} \cdots w_{n-1}. \end{equation*} \begin{Definition}\label{def:necklaces} An orbit of $ w \in \W_n $ under rotation is a \emph{necklace}, denoted $[w]$. Note that $\period(w) = \#[w] $ and $ \freq(w) \dd \period(w) = n $. Content, primitivity, period, and frequency are all well-defined on necklaces. For $ n \ge 1 $, we write \begin{align*} \N_n &\coloneqq \{ \tx{necklaces of length $ n $ words} \}. \end{align*} \end{Definition} \begin{Example}\label{ex:wordandnecklace} Consider $ w = 15531553 \in \W_8 $. Then, the length of $w$ is $8$, $\Des(w) = \{ 3, 4, 7 \} $, $\maj(w) = 14$, and $ \cont(w) = (2,0,2,0,4) $, so $ w \in \W_{(2,0,2,0,4)} $. Since $w = 15531553 = (1553)^2 $ and $ 1553 $ is primitive, $ w $ is not primitive, $ \period(w) = 4 $, and $ \freq(w) = 2$. 
The necklace of $ w $ is \[ [w] = \{15531553, 55315531, 53155315, 31553155 \} \in \N_8. \] \end{Example} We now recall the $\flex$ statistic from \cite{Ahlbach201837}. \begin{Definition}\label{def:flex} Given $ w \in \W_n $, let $ \lex(w) $ denote the position at which $ w $ appears in the lexicographic order of its rotations, starting at $1$. The \textit{flex} statistic is given by \[ \flex(w) = \freq(w) \dd \lex(w). \] \end{Definition} \begin{Example} If $ w = 21132113 $, its necklace is \[ [w] = \{ 11321132, 13211321, 21132113, 32113211 \} \] listed in lexicographic order. Since $w$ is in the third position, $ \lex(w) = 3 $. Here $ \freq(w) = 2 $, so $ \flex(w) = 6 $. \end{Example} \subsection{Generating Functions}\label{ssec:background_gf} In most triples $ (W, C_n, f(q)) $ that have been found to exhibit the CSP, $ f(q) $ is a statistic generating function on $ W $ for some well-known statistic. Given $ \stat \colon W \to \bZ_{\geq 0} $, we write the corresponding generating function as \begin{align*} W^{\stat}(q) \coloneqq \sum_{w \in W} q^{\stat(w)}. \end{align*} We use natural multivariable analogues of this notation as well. For example, letting $ \mathbf{x} = ( x_1, x_2, \dots ) $, \begin{align*} \W_n^{\cont, \maj}(\mathbf{x}; q) \coloneqq \sum_{w \in \W_n} \mathbf{x}^{\cont(w)} q^{\maj(w)} \in \bZ_{\geq 0}[[x_1, x_2, \ldots]][q] \end{align*} where $ \mathbf{x}^{(\al_1, \dots, \al_m)} \coloneqq x_1^{\al_1} \cdots x_m^{\al_m}$. \subsection{Tableaux}\label{ssec:background_tableaux} A \textit{partition} of $n$, denoted $\lambda \vdash n$, is a composition of $n$ whose parts weakly decrease. Write $\Par$ for the set of all partitions. The \textit{Young diagram} of $\lambda$ is the upper-left justified collection of cells with $\lambda_i$ entries in the $i$th row starting from the top. We may write a partition in exponential form as $\lambda = 1^{m_1} 2^{m_2} \cdots \vdash n$ where $m_i$ is the number of parts of $\lambda$ of size $i$. 
In this case, the number of elements of $S_n$ with cycle type $\lambda$ is $\frac{n!}{z_\lambda}$ where $z_\lambda \coloneqq 1^{m_1} 2^{m_2} \cdots m_1! m_2! \cdots$. A \textit{semistandard Young tableau} of shape $\lambda$ is a filling of the Young diagram of $\lambda$ with entries from $\bZ_{\geq 1}$ which weakly increases along rows and strictly increases along columns. The set of semistandard Young tableaux of shape $\lambda$ is denoted $\SSYT(\lambda)$. The \textit{content} of $P \in \SSYT(\lambda)$, denoted $\cont(P)$, is the composition whose $j$-th entry is the number of $j$'s in $P$. The set of \textit{standard Young tableaux} of shape $\lambda$, denoted $\SYT(\lambda)$, is the subset of $\SSYT(\lambda)$ consisting of tableaux of content $(1, \ldots, 1) \vDash n $. The \textit{descent set} of a tableau $Q \in \SYT(\lambda)$, denoted $\Des(Q)$, is the set of all $i \in [n-1]$ such that $i+1$ lies in a lower row of $Q$ than $i$. \begin{Example} We draw our tableaux in English notation. The semistandard tableau \[ P = \Yvcentermath1{\young(112334,23446,3)} \in \SSYT(6,5,1) \] has $ \cont(P) = (2,2,4,3,0,1) $. The standard tableau \[ Q = \Yvcentermath1{\young(125,34,6)} \in \SYT(3,2,1) \] has $ \Des(Q) = \{ 2,5\} $, and $ \maj(Q) = 7 $. \end{Example} Let $ \mathbf{x} = (x_1, x_2, \dots ) $. For a partition $ \lam $, the \textit{Schur function} $ s_\lam $ is the content generating function on semistandard tableaux of shape $\lambda$, \[ s_{\lam}(\mathbf{x}) \coloneqq \SSYT(\lam)^{\cont}(\mathbf{x}) \coloneqq \sum_{P \in \SSYT(\lam)} \mathbf{x}^{\cont(P)}. \] The Schur functions are symmetric in the sense that they are unchanged under any permutation of the underlying variables. 
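Schur polynomials in finitely many variables can be computed directly from this definition. The brute-force Python sketch below (our own illustration) enumerates $\SSYT(\lambda)$ with entries at most $m$. For instance, $s_{(2,1)}(x_1,x_2,x_3)$ is a sum of $8$ monomials counted with multiplicity, and the coefficient of $x_1x_2x_3$ in it is $2 = \#\SYT(2,1)$.

```python
from collections import Counter
from itertools import product

def ssyt(shape, m):
    """All semistandard tableaux of the given shape with entries in 1..m."""
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]
    tableaux = []
    for values in product(range(1, m + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(i, j)] <= T[(i, j + 1)]      # rows weakly increase
                      for (i, j) in cells if (i, j + 1) in T)
        cols_ok = all(T[(i, j)] < T[(i + 1, j)]       # columns strictly increase
                      for (i, j) in cells if (i + 1, j) in T)
        if rows_ok and cols_ok:
            tableaux.append(T)
    return tableaux

def schur_monomials(shape, m):
    """Content multiset of SSYT(shape), i.e. the monomials of s_shape(x_1..x_m)."""
    return Counter(tuple(Counter(T.values()).get(a, 0) for a in range(1, m + 1))
                   for T in ssyt(shape, m))

s21 = schur_monomials((2, 1), 3)
assert sum(s21.values()) == 8                 # 8 tableaux in SSYT((2,1)), entries <= 3
assert s21[(1, 1, 1)] == 2                    # coefficient of x1*x2*x3 is #SYT(2,1) = 2
assert s21[(2, 1, 0)] == s21[(0, 1, 2)] == 1  # symmetry in the variables
assert len(ssyt((3,), 3)) == 10               # h_3 in three variables
assert len(ssyt((1, 1, 1), 3)) == 1           # e_3 in three variables
```

This is of course exponential-time and only meant for sanity checks on small shapes.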
Two important instances of Schur functions are the complete homogeneous symmetric functions \begin{align}\label{eq:hn} h_n(\bfx) \coloneqq s_{(n)}(\bfx) = \sum_{i_1 \le \cdots \le i_n} x_{i_1} \cdots x_{i_n} \end{align} and the elementary symmetric functions \begin{align}\label{eq:en} e_n(\bfx) \coloneqq s_{(1^n)}(\bfx) = \sum_{i_1 < \cdots < i_n} x_{i_1} \cdots x_{i_n}. \end{align} The \textit{power-sum symmetric functions} are given by \[ p_n(\bfx) \coloneqq x_1^n + x_2^n + \cdots \quad\text{and}\quad p_{(\lam_1, \dots, \lam_k)}(\bfx) \coloneqq p_{\lam_1}(\bfx) \cdots p_{\lam_k}(\bfx). \] \begin{Definition} The Robinson--Schensted--Knuth (RSK) correspondence is a bijection \begin{align*} \RSK \colon \W_n &\to \bigsqcup_{\lam \vdash n} \SSYT(\lam) \x \SYT(\lam), \\ w &\mapsto (P(w), Q(w)). \end{align*} The \textit{shape} of $w$ under RSK, denoted $\sh(w)$, is the common shape of $P(w)$ and $Q(w)$. Two well-known properties of the RSK correspondence are \begin{align}\label{eq:RSKprop} \cont(w) = \cont(P(w)), \qquad \Des(w) = \Des(Q(w)). \end{align} The fact that $\Des(w) = \Des(Q(w))$ is originally due to Sch\"utzenberger \cite[Remarque~2]{MR0190017}. See \cite[Lemma 7.23.1]{MR1676282} for a proof of \eqref{eq:RSKprop} in the permutation case and \cite[p.404]{MR1676282} for further historical remarks. See \cite[Chapter 3]{MR1824028} for more details on RSK. \end{Definition} We will repeatedly use the RSK correspondence to transition from the monomial to the Schur basis. These arguments all rely on the following result. \begin{Lemma}\label{lem:RSK_Des} Suppose $D \subset [n-1]$ and let \[ \W_{n, D} \coloneqq \{w \in \W_n : \Des(w) = D\} \] be the set of length $n$ words with descent set $D$. For $\lambda \vdash n$, let \[ a_\lambda^D \coloneqq \#\{Q \in \SYT(\lambda) : \Des(Q) = D\}. \] Then \[ \W_{n, D}^{\cont}(\mathbf{x}) = \sum_{\lambda \vdash n} a_\lambda^D s_\lambda(\mathbf{x}).
\] \begin{proof} Using RSK and \eqref{eq:RSKprop}, we have \begin{align*} \W_{n, D}^{\cont}(\mathbf{x}) &= \{w \in \W_n : \Des(w) = D\}^{\cont}(\mathbf{x}) \\ &= \sum_{\lambda \vdash n} \, \sum_{\substack{Q \in \SYT(\lambda)\\\Des(Q) = D}} \, \sum_{P \in \SSYT(\lambda)} x^{\cont(P)} \\ &= \sum_{\lambda \vdash n} \, \sum_{\substack{Q \in \SYT(\lambda)\\\Des(Q) = D}} s_\lambda(\mathbf{x}) \\ &= \sum_{\lambda \vdash n} a_\lambda^D s_\lambda(\mathbf{x}). \end{align*} \end{proof} \end{Lemma} \subsection{Schur--Weyl Duality}\label{ssec:background_schurweyl} We next summarize a few key points from the representation theory of $S_n$ and $\GL(\bC^m)$. See \cite{MR1464693} for more. The complex irreducible inequivalent representations of $S_n$ are canonically indexed by partitions $\lambda \vdash n$ and are called Specht modules, written $S^\lambda$. The \textit{Frobenius characteristic map} $\Ch $ is defined by $\Ch S^\lambda \coloneqq s_\lambda(\mathbf{x})$ and is extended additively to all $S_n$-representations. Since Schur functions are $\bZ$-linearly independent, computing the irreducible decomposition of an $S_n$-module $M$ corresponds to computing the Schur expansion of $\Ch M$. Let $V$ be a complex vector space of dimension $m$. Endow $V^{\otimes n}$ with the diagonal left $\GL(V)$-action and the natural right $S_n$-action given by permutation of indices. Given any $S_n$-module $ M $, define a corresponding $\GL(V)$-module by \begin{align*} E(M) \coloneqq V^{\otimes n} \otimes_{\bC S_n} M, \end{align*} which we call the \textit{Schur--Weyl dual} of $M$. The irreducible inequivalent polynomial representations of $\GL(V)$ are precisely the Schur--Weyl duals of all $ S^\lambda $ where $\lambda$ is a partition with at most $\dim(V)$ non-zero parts \cite[Thm.~8.2.2]{MR1464693}. Let $E$ be a finite-dimensional, polynomial representation of $\GL(V)$ and pick a basis $ \{ v_1, \ldots, v_m \} $ for $V$.
The \textit{Schur character} of $E$, denoted $\Ch E$, is the trace of the action of $ \diag(x_1, \dots, x_m) \in \GL(V) $ on $ E $, where the diagonal matrix is with respect to the basis $ v_1, \dots, v_m $. Polynomiality of $E$ implies $ \Ch E \in \bC[x_1, \ldots, x_m]$. Moreover, $ \Ch(E) $ is a symmetric function of $ x_1, \dots, x_m $. In fact, \[ \Ch V^\lambda = \Ch E(S^\lambda) = s_\lambda(x_1, \ldots, x_m, 0, 0, \ldots). \] Thus, for any $S_n$-module $ M $, we have \[ \lim_{m \to \8} \Ch E(M) = \Ch M. \] In light of this, we often leave dependence on $m$ or $V$ implicit. \subsection{Kra{\'s}kiewicz--Weyman Symmetric Functions}\label{ssec:background_KW} The symmetric functions appearing in \Cref{thm:majn_flex} have a wealth of important interpretations. Here we summarize some of these interpretations. \begin{Definition} For $n \geq 1$, let \begin{equation}\label{eq:KW.0} \KW_n(\mathbf{x}; q) \coloneqq \sum_{\substack{\lambda \vdash n\\r \in [n]}} a_{\lambda, r} s_\lambda(\mathbf{x}) \, q^r \end{equation} where $a_{\lambda, r} \coloneqq \#\{Q \in \SYT(\lambda) : \maj(Q) \equiv_n r\}$. We call $\KW_n(\bfx; q)$ the $n$th \textit{Kra{\'s}kiewicz--Weyman} symmetric function. \end{Definition} These symmetric functions are intimately related to the irreducible representations of certain cyclic groups. \begin{Definition}\label{def:cyclicirreps} Recall that $ \si_n \coloneqq (1 \, 2 \, \cdots \, n) \in S_n $ and let $ C_n \coloneqq \langle \si_n \rangle \le S_n $ be the cyclic group of order $ n $ it generates. Fixing any primitive $n$th root of unity $\omega_n$, write the irreducible characters of $ C_n $ as $ \chi^1, \dots, \chi^n $ where \[ \chi^r(\si_n) \coloneqq \omega_n^r. \] We sometimes write $\chi^r_n$ if we want to specify the cyclic group $ C_n $ as well. \end{Definition} \Cref{thm:KW} gives our first interpretation of $\KW_n(\bfx ; q) $, \begin{equation}\label{eq:KW.1} \KW_n(\mathbf{x}; q) = \sum_{r=1}^n \Ch \chi^r\ind_{C_n}^{S_n} q^r.
\end{equation} Since the regular representation of $C_n$ is $\oplus_{r=1}^n \chi^r$, when $q=1$ the right-hand side of \eqref{eq:KW.1} is the Frobenius characteristic of the regular representation of $S_n$, denoted $ \bC S_n $. The right-hand side of \eqref{eq:KW.1} is hence similar to a graded Frobenius series for $\bC S_n$ and tracks branching rules for the inclusion $C_n \hookrightarrow S_n$. By \Cref{thm:ind_NFD}, we can also write this series as \begin{align}\label{eq:KWNFD} \KW_n(\mathbf{x}; q) = \sum_{r=1}^n \NFD_{n,r}^{\cont}(\bfx) \, q^r. \end{align} Now consider the action of $\sigma_n$ on the $S_n$-irreducible $S^\lambda$. Since $\sigma_n^n = 1 \in S_n $, the action of $\sigma_n$ on $S^\lambda$ is diagonalizable with eigenvalues $\omega_n^{k_1}, \omega_n^{k_2}, \ldots$ where $\omega_n$ is a fixed primitive $n$th root of unity and $1 \leq k_i \leq n$ for each $i$. Let $P_\lambda(q) \coloneqq q^{k_1} + q^{k_2} + \cdots$ be the generating function of the \textit{cyclic exponents} $k_1, k_2, \ldots$, which were studied extensively by Stembridge \cite{MR1023791}. Using the right-hand side of \eqref{eq:KW.1} and Frobenius reciprocity quickly gives the following. \begin{Theorem}[See {\cite[Prop.~1.2, Thm.~3.3]{MR1023791}}] The cyclic exponent generating function for $S_n$ is given by \begin{equation}\label{eq:KW.1.5} \KW_n(\mathbf{x}; q) = \sum_{\lambda \vdash n} P_\lambda(q) s_\lambda(\mathbf{x}). \end{equation} \end{Theorem} Next, extend the regular representation $\bC S_n$ to an $S_n \times C_n$-module by letting $S_n$ act on the left and $C_n$ act on the right. There is a straightforward notion of an $S_n \times C_n$-Frobenius characteristic map given by sending an irreducible $S^\lambda \otimes \chi^r$ to $s_\lambda(\mathbf{x}) q^r$ where $q$ is an indeterminate satisfying $q^n = 1$. The following now follows easily using the right-hand side of \eqref{eq:KW.1}.
\begin{Corollary}{{\cite{MR1867283}}}\label{cor:kw.grfrob} The $S_n \times C_n$-Frobenius characteristic of the regular representation is \begin{equation}\label{eq:KW.2} \KW_n(\mathbf{x}; q) = \Ch_{S_n \times C_n} \bC S_n. \end{equation} \end{Corollary} It is well-known that the type $A_{n-1}$ coinvariant algebra $R_n$ is a graded $S_n$-module which is isomorphic as an \textit{ungraded} $S_n$-module to $ \bC S_n $. We may give $R_n$ an $S_n \times C_n$-module structure by letting $C_n$ act on the $k$th degree component of $R_n$ by $\sigma_n \cdot f \coloneqq \omega_n^k f$, where $\omega_n$ is a fixed primitive $n$th root of unity. Springer and, independently, Kra{\'s}kiewicz--Weyman showed that $\bC S_n$ and $R_n$ are isomorphic as $S_n \times C_n$-modules. Consequently, from the right-hand side of \eqref{eq:KW.2}, we have the following. \begin{Theorem}[Springer {\cite[Prop.~4.5]{MR0354894}}; cf.~ {\cite[Thm.~1]{MR1867283}}]\label{thm:KW_coinvariant} The $S_n \times C_n$-Frobenius characteristic of the coinvariant algebra $R_n$ is \begin{equation}\label{eq:KW.2.5} \KW_n(\mathbf{x}; q) = \Ch_{S_n \times C_n} R_n. \end{equation} \end{Theorem} The graded Frobenius characteristic of the coinvariant algebra is the modified Hall--Littlewood symmetric function $\widetilde{Q}_{(1^n)}(\mathbf{x}; q)$ \cite[(I.8)]{MR1168926}. Consequently, \eqref{eq:KW.2.5} gives \begin{equation}\label{KW.3} \KW_n(\mathbf{x}; q) \equiv \widetilde{Q}_{(1^n)}(\mathbf{x}; q) \qquad\text{(mod $q^n-1$)}. \end{equation} See also \cite[\S3]{MR2574838} for a nice summary of this connection. We may instead use the right-hand side of \eqref{eq:KW.0} as a starting point. From \Cref{lem:RSK_Des}, it follows that \begin{equation}\label{eq:KW.4} \KW_n(\mathbf{x}; q) = \W_n^{\cont, \maj_n}(\mathbf{x}; q). 
\end{equation} \noindent From \Cref{thm:majn_flex} and \eqref{eq:KW.4}, our final interpretation of $\KW_n(\bfx;q)$ in this subsection is \begin{equation}\label{eq:KW.5} \KW_n(\mathbf{x}; q) = \W_n^{\cont, \flex}(\mathbf{x}; q). \end{equation} \subsection{Thrall's Problem}\label{ssec:background_thrall} We next define the higher Lie modules $\cL_\lambda$ and summarize the status of Thrall's problem. See \cite{MR1231799} for more details. The \textit{tensor algebra} of $V$ is $T(V) \coloneqq \oplus_{n=0}^\infty V^{\otimes n}$, which is naturally a graded $\GL(V)$-representation. Let $\cL(V)$ be the Lie subalgebra of $T(V)$ generated by $V$, called the \textit{free Lie algebra} on $V$, so that $\cL(V)$ is a graded $\GL(V)$-representation with graded components $\cL_n(V) = V^{\otimes n} \cap \cL(V)$ called \textit{Lie modules}. The \textit{universal enveloping algebra} $\fU(\cL(V))$ is isomorphic to $T(V)$ itself. By the Poincar\'e--Birkhoff--Witt Theorem, \begin{equation*} \fU(\cL(V)) \cong \bigoplus_{\lambda = 1^{m_1} 2^{m_2} \cdots} \Sym^{m_1}(\cL_1(V)) \otimes \Sym^{m_2}(\cL_2(V)) \otimes \cdots \end{equation*} as graded $ \GL(V) $-representations, where the sum is over all partitions and $\Sym^m(M)$ is the $ m $th symmetric power of $ M $ \cite[Lemma 8.22]{MR1231799}. The \textit{higher Lie module} associated to $\lambda = 1^{m_1}2^{m_2} \cdots$ is defined to be \begin{align}\label{eq:llambda} \cL_\lambda(V) \coloneqq \Sym^{m_1}(\cL_1(V)) \otimes \Sym^{m_2}(\cL_2(V)) \otimes \cdots. \end{align} The higher Lie modules hence yield a $\GL(V)$-module decomposition $T(V) \cong \oplus_{\lambda \in \Par} \cL_\lambda(V)$. \textit{Thrall's problem} is the determination of the multiplicity of $V^\mu$ in $\cL_\lambda(V)$, for instance by counting explicit combinatorial objects. The well-known Littlewood--Richardson rule solves the analogous problem for $V^\mu \otimes V^\nu$.
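Although it is not needed in what follows, the graded dimensions here are easy to experiment with by computer: $\dim \cL_n(V)$ is given by the classical necklace (Witt) formula $\frac{1}{n} \sum_{d \mid n} \mu(d)\, m^{n/d}$, where $m = \dim V$ and $\mu$ is the number-theoretic M\"obius function. The following Python sketch (all function names are ours, purely for illustration) checks this formula against a brute-force count of primitive necklaces:

```python
from itertools import product

def mobius(n):
    """Number-theoretic Moebius function mu(n), computed by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n has a square factor
            result = -result
        d += 1
    return -result if n > 1 else result

def witt(n, m):
    """Witt (necklace) formula for dim L_n(V) with dim V = m."""
    return sum(mobius(d) * m ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def primitive_necklace_count(n, m):
    """Brute force: count C_n-orbits of aperiodic length-n words over m letters."""
    count = 0
    for w in product(range(m), repeat=n):
        rotations = {w[i:] + w[:i] for i in range(n)}
        # aperiodic means all n rotations are distinct; keep one representative per orbit
        if len(rotations) == n and w == min(rotations):
            count += 1
    return count

for n in range(1, 8):
    for m in range(1, 4):
        assert witt(n, m) == primitive_necklace_count(n, m)
```

For instance, $\dim \cL_6(V) = 9$ when $\dim V = 2$.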
It follows from \eqref{eq:llambda} and the Littlewood--Richardson rule that, for the purposes of Thrall's problem, we may restrict our attention to the case when $\lambda = (a^b)$ is a rectangle. Since \begin{equation}\label{eq:Lie_rect} \cL_{(a^b)}(V) = \Sym^b(\cL_{(a)}(V)), \end{equation} the single-row case is particularly fundamental. Hall \cite[Lemma~11.2.1]{MR0103215} introduced what is now called the \textit{Hall basis} for $\cL_{n}(V)$, which, in the $m \to \infty$ limit, is in content-preserving bijection with primitive necklaces $\NFD_{n,1}$. For each primitive necklace, Hall associates a bracketing of its elements using what is now known as the Lyndon factorization \cite{MR0102539}. He gives an explicit, though computationally complex, algorithm to express any bracketing as a linear combination of the bracketings associated to primitive necklaces. Linear independence of these generators follows from a dimension count. Klyachko consequently observed that the Schur character of $\cL_{n}$ is the corresponding content generating function $\NFD_{n,1}^{\cont}(\mathbf{x})$. Taking symmetric powers, it follows that in the $m \to \infty$ limit, $\cL_{(a^b)}(V)$ has a basis indexed by multisets of primitive necklaces and the Schur character is the following content generating function. \begin{Lemma}[See {\cite[Proposition 1]{klyachko74}}]\label{lem:lie_necklaces} We have, in the $ m \to \infty $ limit, \begin{align*} \Ch \cL_{(a)} = \NFD_{a,1}^{\cont}(\mathbf{x}) \quad \tx{ and } \quad \Ch \cL_{(a^b)} = \mch{\NFD_{a,1}}{b}^{\cont}(\mathbf{x}). \end{align*} \end{Lemma} \noindent One formulation of Thrall's problem is hence to find the Schur expansion of the expressions in \Cref{lem:lie_necklaces}. While we will not have direct need of it, we would be remiss if we did not mention the following beautiful and important result of Gessel and Reutenauer \cite[(2.1)]{MR1245159}.
The expansion of $ \Ch \cL_{\lam} $ in terms of Gessel's fundamental quasisymmetric functions is \begin{align}\label{eq:GRFexpan} \Ch \cL_{\lam} = \sum_{ \substack{\si \in S_n \\ \si \tx{ has cycle type } \lam } } F_{n, \Des(\si)}(\mathbf{x}), \end{align} where \[ F_{n,D}(\mathbf{x}) = \sum_{ \substack{ i_1 \le \dots \le i_n \\ i_j < i_{j + 1} \tx{ if } j \in D } } x_{i_1} \dots x_{i_n}. \] Gessel and Reutenauer gave an elegant bijective proof of \eqref{eq:GRFexpan} in \cite{MR1245159} involving multisets of primitive necklaces as in \Cref{lem:lie_necklaces}. Another formulation of Thrall's problem is thus to convert the right-hand side of \eqref{eq:GRFexpan} to the Schur basis. Klyachko \cite{klyachko74} was the first to observe the intimate connections between Lie modules and the linear representations $\chi^r$ in \Cref{def:cyclicirreps}. Klyachko proved the $r=1$ case of \Cref{thm:ind_NFD}, that $\Ch \chi^1\ind_{C_n}^{S_n} = \NFD_{n,1}^{\cont}(\mathbf{x})$. Combining this result with \Cref{lem:lie_necklaces} and with Kra{\'s}kiewicz--Weyman's result, \Cref{thm:KW}, solves Thrall's problem when $\lambda = (n)$. Recall that if $\lambda \vdash n$, then $a_{\lambda, r} \coloneqq \#\{Q \in \SYT(\lambda) : \maj(Q) \equiv_n r\}$. \begin{Corollary}\label{cor:kwthrall} For all $\lambda \vdash n \geq 1$, the multiplicity of $V^\lambda$ in $\cL_{(n)}$ is $a_{\lambda, 1}$. \end{Corollary} \noindent Since $\chi^r\ind_{C_n}^{S_n}$ depends up to isomorphism only on $n$ and $\gcd(n, r)$, we also have the following well-known symmetry. \begin{Corollary}\label{cor:alamrsym} For all $ \lambda \vdash n \ge 1 $, we have $ a_{\lam,r} = a_{\lam, \gcd(n,r)} $. \end{Corollary} \begin{Remark} A bijective proof of this symmetry is currently unknown. \end{Remark} Thrall's problem is an instance of a plethysm problem as we next describe. See \cite[Appendix 2]{MR1676282} for more details.
Given polynomial representations of general linear groups \[ \rho \colon \GL(V) \to \GL(W) \qquad\text{and}\qquad \tau \colon \GL(W) \to \GL(X) \] where $V, W, X$ are finite-dimensional complex vector spaces, the \textit{plethysm} of their Schur characters is the Schur character of their composite: \[ (\Ch \tau)[\Ch \rho] \coloneqq \Ch(\tau \circ \rho). \] It is easy to see that $\Ch \Sym^b(W) = h_b(x_1, \ldots, x_m)$ where $m = \dim(W)$. Consequently, \eqref{eq:Lie_rect} gives \begin{equation} \Ch \cL_{(a^b)} = h_b[\Ch \cL_a]. \end{equation} Yet another formulation of Thrall's problem is thus to expand $h_b[\Ch \cL_a]$ in the Schur basis. Such plethysm problems are notoriously difficult. However, a combinatorial description for the Schur expansion of $(\Ch \cL_a)[h_\nu]$ in terms of the \textit{charge} statistic was given by Lascoux--Leclerc--Thibon in \cite[Thm.~4.2]{MR1261063} and \cite[Thm.~III.3]{MR1434225}. \begin{Remark} At present, Thrall's problem has only been solved in the following cases: \begin{itemize} \item when $\lambda = (n)$ has a single part (see \Cref{cor:kwthrall}); \item when $\lambda = (1^n)$: here $\cL_{(1^n)} = \Sym^n(V)$, whose Schur--Weyl dual is the trivial representation of $S_n$; \item when $\lambda = (2^b)$, $\Ch \cL_{(2^b)} = \sum s_\mu$ where the sum is over $\mu \vdash 2b$ with even column sizes (see \cite[Ex.~I.8.6(b), p.~138]{MR1354144}). \end{itemize} \end{Remark} \subsection{Wreath Products}\label{ssec:background_wreaths} The Schur--Weyl duals of the higher Lie modules $\cL_\lambda$ have also been identified in terms of induced representations of certain wreath products. Here we summarize this connection as well as some related aspects of the representation theory of wreath products which will be used in \Cref{sec:mash}. Our presentation largely mirrors \cite{MR1023791}. \begin{Definition}\label{defn:wreathprod} Given a group $G$, the \textit{wreath product} of $G$ with $S_n$, denoted $G \wr S_n$, is the semidirect product explicitly described as follows.
$G \wr S_n$ is the set $G^n \times S_n$ with multiplication given by \[ (g_1, \dots, g_n, \s) \dd (h_1, \dots, h_n, \tau) \coloneqq (g_1 h_{\s^{-1}(1)}, \dots, g_n h_{\s^{-1}(n)}, \s \tau) \] for all $ g_1, \dots, g_n, h_1, \dots, h_n \in G $ and $ \s, \tau \in S_n $. Furthermore, given $\alpha \vDash n$, set $G \wr \prod_i S_{\alpha_i} \coloneqq \prod_i (G \wr S_{\alpha_i})$, which has a natural inclusion into $G \wr S_n$. Roughly speaking, $G \wr S_n$ can be considered as the group of $n \times n$ ``pseudo-permutation'' matrices with entries from $G$. Now suppose $U$ is a $G$-set and $V$ is an $S_n$-set. There is a natural notion of $U \wr V$ as a $G \wr S_n$-set. Explicitly, let $U \wr V$ be the set $U^n \times V$ with $G \wr S_n$-action given by \[ (g_1, \dots, g_n, \si) \dd (u_1, \ldots, u_n, v) \coloneqq (g_1 \dd u_{\si^{-1}(1)}, \ldots, g_n \dd u_{\si^{-1}(n)}, \si \dd v) \] for all $ g_1, \dots, g_n \in G, \si \in S_n, u_1, \dots, u_n \in U, v \in V $. There is an analogous notion if $U$ is a $G$-module and $V$ is an $S_n$-module, namely $U \wr V \coloneqq U^{\otimes n} \otimes V$ with $G \wr S_n$-action \[ (g_1, \dots, g_n, \si) \dd (u_1 \otimes \dots \otimes u_n \otimes v) \coloneqq (g_1 \dd u_{\si^{-1}(1)}) \otimes \dots \otimes (g_n \dd u_{\si^{-1}(n)}) \otimes (\si \dd v) \] extended $\bC$-linearly. \end{Definition} Since $S_a$ acts naturally and faithfully on $[a]$, $[a] \wr 1_b$ has a natural $S_a \wr S_b$-action, where $1_b$ denotes the trivial $S_b$-set. Identifying $[a] \wr 1_b$ with the set $[ab]$ and noting that the action remains faithful gives an inclusion $S_a \wr S_b \hookrightarrow S_{ab}$. Similarly we have an inclusion $C_a \wr S_b \hookrightarrow S_{ab}$. More concretely, $C_a \wr S_b$ acts faithfully on $[ab]$ by permuting the $b$ size-$a$ intervals in $[ab]$ amongst themselves and cyclically rotating each size-$a$ interval independently.
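The faithful action of $C_a \wr S_b$ on $[ab]$ just described is concrete enough to implement directly. In the Python sketch below (our code, purely illustrative), each choice of a block permutation together with a tuple of rotations is realized as the permutation of $\{0, 1, \ldots, ab - 1\}$ it induces; obtaining exactly $a^b \, b!$ distinct permutations confirms faithfulness, and a spot-check confirms the image is closed under composition.

```python
from itertools import permutations, product
from math import factorial

def block_rotation_perm(a, b, rots, sigma):
    """Permutation of {0,...,ab-1} attached to (rots, sigma): sigma permutes the
    b size-a blocks, and block i is then rotated cyclically by rots[i] positions."""
    perm = [0] * (a * b)
    for block in range(b):
        target = sigma[block]  # block -> sigma(block)
        for pos in range(a):
            perm[block * a + pos] = target * a + (pos + rots[target]) % a
    return tuple(perm)

def wreath_image(a, b):
    """All permutations of {0,...,ab-1} induced by elements of C_a wr S_b."""
    return {block_rotation_perm(a, b, rots, sigma)
            for rots in product(range(a), repeat=b)
            for sigma in permutations(range(b))}

for a, b in [(2, 2), (3, 2), (2, 3)]:
    G = wreath_image(a, b)
    assert len(G) == a ** b * factorial(b)  # the action is faithful
    sample = sorted(G)[:6]
    for p in sample:  # closed under composition, hence a subgroup of S_{ab}
        for q in sample:
            assert tuple(p[q[i]] for i in range(a * b)) in G
```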
\begin{Remark} The induction product of two symmetric group representations corresponds to the product of their Frobenius characteristics, so that if $U$ is an $S_a$-module and $V$ is an $S_b$-module, then \cite[Prop.~7.18.2]{MR1676282}, \begin{align}\label{eq:prodchar} \Ch \lp (U \otimes V)\ind_{S_a \times S_b}^{S_{a+b}} \rp = (\Ch U)(\Ch V). \end{align} In \Cref{ssec:background_thrall}, we considered the plethysm of Schur characters of general linear group representations. The corresponding operation for Frobenius characters of symmetric group representations is less well-known and involves wreath products as follows. Given two symmetric functions $f$ and $g = m_1 + m_2 + \cdots$ where the $m_i$ are all monomials, their \textit{plethysm} is given by \cite[Def.~A2.6]{MR1676282} \begin{align}\label{eq:plethysmdefn} f[g] \coloneqq f(m_1, m_2, \dots), \end{align} which is well-defined since $f$ is symmetric. Then, if $U$ is an $S_a$-module and $V$ is an $S_b$-module, we have (see \cite[Thm.~A2.8]{MR1676282} or \cite[Appendix A, (6.2)]{MR1354144}) \begin{align}\label{eq:plethsymchar} \Ch \lp (U \wr V)\ind_{S_a \wr S_b}^{S_{ab}} \rp = \Ch(V)[\Ch(U)]. \end{align} \end{Remark} When $G$ is a finite group, Specht \cite{Specht1932Eine} described the complex inequivalent irreducible representations of $G \wr S_n$ in terms of those for $G$ and $S_n$, the conjugacy classes of $G$, and wreath products. In the case $C_a \wr S_b$, they are indexed by the following objects. \begin{Theorem}[{\cite{Specht1932Eine}}; see {\cite[Thm.~4.1]{MR1023791}}]\label{thm:CaSb_irreps} The complex inequivalent irreducible representations of $C_a \wr S_b$ are indexed by $ a $-tuples $ \ul = (\lam^{(1)}, \dots, \lam^{(a)}) $ of partitions with $ \sum_{r = 1}^a |\lam^{(r)}| = b $.
In particular, they are given by \begin{equation}\label{eq:specht} S^{\ul} \coloneqq \lp (\chi_a^1 \wr S^{\lambda^{(1)}}) \otimes \dots \otimes (\chi_a^{a} \wr S^{\lambda^{(a)}}) \rp \ind_{C_a \wr S_{ \al(\ul) } }^{C_a \wr S_b}, \end{equation} where \begin{align*} \al(\ul) &\coloneqq (|\lambda^{(1)}|, \dots, |\lambda^{(a)}|) \vDash b, \\ S_{\al(\ul)} &\coloneqq S_{|\lambda^{(1)}|} \times \cdots \times S_{|\lambda^{(a)}|}, \end{align*} $ \chi_a^r $ is as defined in \Cref{def:cyclicirreps}, and $C_a \wr S_{\al(\ul)}$ is viewed naturally as a subgroup of $C_a \wr S_b$. \end{Theorem} One consequence of \Cref{thm:CaSb_irreps} is \begin{align}\label{eq:dimcasbirreps} \dim(S^{\ul}) = \ch{b}{\al(\ul)} \prod_{r = 1}^{a} \# \SYT(\lam^{(r)}). \end{align} Another consequence is an explicit description of the one-dimensional representations of $C_a \wr S_b$, which are as follows. \begin{Definition} Fix integers $a, b \geq 1$. Let \[ \chi^{r, 1} \coloneqq \chi^r_a \wr 1_b \qquad \text{ and } \chi^{r, \epsilon} \coloneqq \chi^r_a \wr \epsilon_b \] where $r=1, \ldots, a$ and $1_b$ and $\epsilon_b$ are the trivial and sign representations of $S_b$, respectively. When $b=1$, $ \epsilon_b = 1_b $, in which case $ \chi^{r, 1} = \chi^{r, \epsilon} = \chi_a^r $. We sometimes write $\chi^{r, 1}_{(a^b)}$ or $\chi^{r, \epsilon}_{(a^b)}$ if we want to specify the group $C_a \wr S_b$ as well. \end{Definition} Bergeron--Bergeron--Garsia \cite{MR1035495} extended Klyachko's observation by showing that the Schur--Weyl dual of $\cL_{(a^b)}$ is $\chi^{1, 1}\ind_{C_a \wr S_b}^{S_{ab}}$. We next give a different argument for this fact which is straightforward given the preceding background and which uses a lemma we will require later in \Cref{sec:mash}. \begin{Lemma}\label{lem:higher_schur_weyl} We have \[ \Ch \chi^{1, 1}\ind_{C_a \wr S_b}^{S_{ab}} = \mch{\NFD_{a,1}}{b}^{\cont}(\mathbf{x}).
\] \begin{proof} By \Cref{lem:indwreath} below and the fact that $ C_a \wr S_b \sube S_a \wr S_b \sube S_{ab} $, we have \begin{equation} \chi^{1, 1}\ind_{C_a \wr S_b}^{S_{ab}} = (\chi^1_a \wr 1_b)\ind_{C_a \wr S_b}^{S_{ab}} \cong (\chi^1_a\ind_{C_a}^{S_a} \wr 1_b) \ind_{S_a \wr S_b}^{S_{ab}}. \end{equation} By \eqref{eq:plethsymchar} and the $r=1$ case of \Cref{thm:ind_NFD}, \begin{align} \Ch (\chi^1_a\ind_{C_a}^{S_a} \wr 1_b) \ind_{S_a \wr S_b}^{S_{ab}} = (\Ch 1_b)[\Ch \chi^1_a\ind_{C_a}^{S_a}] = h_b[\NFD_{a,1}^{\cont}(\mathbf{x})], \end{align} since $ \Ch(1_b) = h_b(\bfx) $. Now, $h_b[\NFD_{a,1}^{\cont}(\mathbf{x})] = \mch{\NFD_{a,1}}{b}^{\cont}(\mathbf{x})$ from the definition of plethysm, \eqref{eq:plethysmdefn}, and the definition of $ h_b$, \eqref{eq:hn}. The result will be complete once we prove \Cref{lem:indwreath}. \end{proof} \end{Lemma} \begin{Corollary}[{\cite[\S4.4]{MR1035495}}; see also {\cite[Thm.~8.24]{MR1231799}}]\label{cor:higher_schur_weyl} The Schur--Weyl dual of $\cL_{(a^b)}$ is $\chi^{1, 1}\ind_{C_a \wr S_b}^{S_{ab}}$. \begin{proof} Combine \Cref{lem:lie_necklaces} and \Cref{lem:higher_schur_weyl}. \end{proof} \end{Corollary} Indeed, the Schur--Weyl duals of general $\cL_\lambda$ can be expressed very explicitly in terms of induced linear representations as follows. Suppose $\sigma \in S_n$ has cycle type $\lambda$. Write $Z_\lambda$ for the centralizer of $\sigma$ in $S_n$. When $\lambda = (a^b)$, it is straightforward to see that $Z_{(a^b)} \cong C_a \wr S_b$. Furthermore, when $\lambda = 1^{b_1} 2^{b_2} \cdots k^{b_k}$ is written in exponential notation, we have $Z_\lambda \cong Z_{(1^{b_1})} \times Z_{(2^{b_2})} \times \cdots \times Z_{(k^{b_k})}$. \begin{Corollary}[see {\cite[Thm.~8.24]{MR1231799}}] Suppose $\lambda = 1^{b_1} 2^{b_2} \cdots k^{b_k} \vdash n$.
Let $\chi^{1, 1}_\lambda$ denote the linear representation of $Z_\lambda \leq S_n$ given by the (outer) tensor product of the representations $\chi_{(i^{b_i})}^{1, 1}$ of $C_i \wr S_{b_i}$ for $1 \leq i \leq k$. Then, the Schur--Weyl dual of $\cL_\lambda$ is $\chi^{1, 1}_\lambda\ind_{Z_\lambda}^{S_n}$. \begin{proof} Using in order \eqref{eq:llambda}, multiplicativity of Schur characters under tensor products, \Cref{cor:higher_schur_weyl}, \eqref{eq:prodchar}, \Cref{lem:indtensor} and transitivity of induction, the fact that $Z_\lambda \cong \prod_{i=1}^k Z_{(i^{b_i})}$, and the definition of $\chi_\lambda^{1, 1}$, we have \begin{align*} \Ch \cL_\lambda &= \Ch\left(\bigotimes_{i=1}^k \cL_{(i^{b_i})}\right) = \prod_{i=1}^k \Ch \cL_{(i^{b_i})} = \prod_{i=1}^k \Ch \chi_{(i^{b_i})}^{1, 1}\ind_{Z_{(i^{b_i})}}^{S_{ib_i}} \\ &= \Ch \left(\bigotimes_{i=1}^k \chi_{(i^{b_i})}^{1, 1}\ind_{Z_{(i^{b_i})}}^{S_{ib_i}}\right) \ind_{S_{1b_1} \times S_{2b_2} \times \cdots}^{S_n} = \Ch\left(\bigotimes_{i=1}^k \chi_{(i^{b_i})}^{1, 1}\right)\ind_{Z_\lambda}^{S_n} \\ &= \Ch \chi_\lambda^{1, 1}\ind_{Z_\lambda}^{S_n} \end{align*} The result will be complete once we prove \Cref{lem:indtensor}. \end{proof} \end{Corollary} \begin{Lemma}\label{lem:indwreath} Suppose that $H$ is a subgroup of a group $G$, that $U$ is an $H$-module, and that $V$ is an $S_n$-module. Then \[ (U \wr V) \ind_{H \wr S_n}^{G \wr S_n} \cong \lp U\ind_H^G\rp\wr V \] as $G \wr S_n$-modules. \begin{proof} As sets, we have \begin{align*} (U \wr V)\ind_{H \wr S_n}^{G \wr S_n} &= \bC(G \wr S_n) \otimes_{\bC(H \wr S_n)} (U^{\otimes n} \otimes V), \\ (U\ind_H^G) \wr V &= (\bC G \otimes_{\bC H} U)^{\otimes n} \otimes V. 
\end{align*} Define \begin{align*} \phi \colon (U \wr V)\ind_{H \wr S_n}^{G \wr S_n} &\to (U\ind_H^G) \wr V, \\ \psi \colon (U\ind_H^G) \wr V &\to (U \wr V)\ind_{H \wr S_n}^{G \wr S_n} \end{align*} by \begin{align*} \phi((g_1, &\ldots, g_n, \tau) \otimes (u_1 \otimes \cdots \otimes u_n \otimes v)) \\ &\coloneqq (g_1 \otimes u_{\tau^{-1}(1)}) \otimes \cdots \otimes (g_n \otimes u_{\tau^{-1}(n)}) \otimes (\tau \cdot v), \\ \psi((g_1 &\otimes x_1) \otimes \cdots \otimes (g_n \otimes x_n) \otimes y) \\ &\coloneqq (g_1, \ldots, g_n, 1) \otimes (x_1 \otimes \cdots \otimes x_n \otimes y) \end{align*} extended $\bC$-linearly. It is straightforward to check directly that $\phi$ and $\psi$ are well-defined, $G \wr S_n$-equivariant, and mutual inverses. Note that showing $ \psi \circ \phi(x) = x $ requires using the relation \[ (g_1, \dots, g_n, \tau) \otimes z = (g_1, \dots, g_n,1) \otimes (\tau \dd z) \] in $ \bC(G \wr S_n) \otimes_{\bC(H \wr S_n)} (U^{\otimes n} \otimes V) $ for $ g_1, \dots, g_n \in G, \tau \in S_n, z \in U^{\otimes n} \otimes V $. \end{proof} \end{Lemma} \begin{Lemma}\label{lem:indtensor} Suppose that $H_1, \ldots, H_k$ are subgroups of groups $G_1, \ldots, G_k$ and that $U_i$ is an $H_i$-module for $1 \leq i \leq k$. Then \[ \left(U_1 \otimes \cdots \otimes U_k\right) \ind_{H_1 \times \cdots \times H_k}^{G_1 \times \cdots \times G_k} \cong U_1\ind_{H_1}^{G_1} \otimes \cdots \otimes U_k\ind_{H_k}^{G_k} \] as $G_1 \times \cdots \times G_k$-modules. \begin{proof} Having chosen bases for both sides, there is a natural $\bC$-linear map between them. It is easy to check this is also $G_1 \times \cdots \times G_k$-equivariant. The details are omitted. \end{proof} \end{Lemma} \section{Cyclic Sieving and Kra{\'s}kiewicz--Weyman's Result}\label{sec:KW} In this section, we first build on work of Klyachko to prove \Cref{thm:ind_NFD}. We then recover Kra{\'s}kiewicz--Weyman's result, \Cref{thm:KW}, and discuss some benefits of our approach. 
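Before carrying this out, we note that the content-generating-function identity $\NFD_{n, r}^{\cont}(\mathbf{x}) = \M_{n, r}^{\cont}(\mathbf{x})$ contained in \Cref{thm:ind_NFD} can be confirmed by brute force for small parameters. In the Python sketch below (our code), we take $\maj_n(w)$ to be $\maj(w)$ reduced modulo $n$, with $r = n$ playing the role of the residue $0$, and compare, content by content, necklaces of frequency dividing $r$ with words whose $\maj_n$ equals $r$:

```python
from collections import Counter
from itertools import product

def maj(w):
    """Major index: sum of the (1-indexed) descent positions of w."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def freq(w):
    """Frequency of the necklace of w: number of rotations fixing w."""
    return sum(1 for i in range(len(w)) if w[i:] + w[:i] == w)

def content(w, m):
    return tuple(w.count(a) for a in range(1, m + 1))

def check_nfd_equals_m(n, m):
    """Multiset of contents of NFD_{n,r} == multiset of contents of M_{n,r}, all r."""
    words = list(product(range(1, m + 1), repeat=n))
    # one lexicographically-minimal representative per necklace
    necklaces = {min(w[i:] + w[:i] for i in range(n)) for w in words}
    for r in range(1, n + 1):
        nfd = Counter(content(N, m) for N in necklaces if r % freq(N) == 0)
        majr = Counter(content(w, m) for w in words if (maj(w) - r) % n == 0)
        assert nfd == majr, (n, m, r)

for n in range(1, 7):
    for m in (2, 3):
        check_nfd_equals_m(n, m)
```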
Klyachko observed in \cite[Prop.~1]{klyachko74} that $E(\chi^1\ind_{C_n}^{S_n})$, like $\cL_{(n)}$, also has a basis indexed by primitive necklaces. Klyachko's argument may be readily generalized to $E(\chi^r\ind_{C_n}^{S_n})$ as follows. Recall from the introduction that \begin{align*} \NFD_{n, r} &\coloneqq \{ N \in \N_n : \freq(N) \mid r \}, \\ \F_{n, r} &\coloneqq \{ w \in \W_n : \flex(w) = r \}, \\ \M_{n,r} &\coloneqq \{ w \in \W_n : \maj_n(w) = r \}. \end{align*} In particular, $\NFD_{n,n} = \N_n $, and $ \NFD_{n,1} $ is the set of primitive necklaces of length $ n $. \begin{Theorem}\label{thm:basis_onerow} There is a basis for $E(\chi^r\ind_{C_n}^{S_n})$ indexed by necklaces of length-$n$ words with letters from $[m]$ and with frequency dividing $r$. Moreover, \begin{align}\label{eq:IndcharNFD} \Ch \chi^r\ind_{C_n}^{S_n} = \NFD_{n,r}^{\cont}(\mathbf{x}). \end{align} \begin{proof} Suppose the underlying vector space $ V $ has basis $ \{ v_1, \dots, v_m \} $. By a slight abuse of notation, we may view $\chi^r$ as the vector space $\bC$ with the left $C_n$-action $\sigma_n \cdot 1 \coloneqq \omega_n^r$. Since $\chi^r\ind_{C_n}^{S_n} \coloneqq \bC S_n \otimes_{\bC C_n} \chi^r$, we have \begin{align} E(\chi^r\ind_{C_n}^{S_n}) &= V^{\otimes n} \otimes_{\bC S_n} \bC S_n \otimes_{\bC C_n} \chi^r \cong V^{\otimes n} \otimes_{\bC C_n} \chi^r \end{align} where $C_n$ acts on $V^{\otimes n}$ on the right by ``rotating'' the components of simple tensors. A spanning set for $V^{\otimes n} \otimes_{\bC C_n} \chi^r$ is given by all $v_{i_1} \otimes \cdots \otimes v_{i_n} \otimes 1$, which we abbreviate as $[i_1\ \cdots\ i_n]$. Acting by $\sigma_n^{-1} $ on $\chi^r$ on the left or on $V^{\otimes n}$ on the right gives the relation \begin{align} [i_1\ \cdots\ i_n] = \omega_n^{r} [i_2 \, \cdots \, i_n \, i_{1}].
\end{align} This relation shows that $[i_1\cdots i_n]$ is well-defined on the level of necklaces, at least up to nonzero scalar multiplication, which explains our notation. If the word $i_1\cdots i_n$ has frequency $f$ and period $p$, we then find \begin{align*} [i_1\ \cdots\ i_n] &= \frac{1}{n} \sum_{j=0}^{n-1} \omega_n^{jr} [i_{j+1}\ \cdots\ i_n\ i_1\ \cdots\ i_j] \\ &= \frac{1}{n} \sum_{k=0}^{p-1} \left(\sum_{\ell=0}^{f-1} \omega_n^{(\ell p + k)r}\right) [i_{k+1} \cdots i_n i_1 \cdots i_k] \\ &= \frac{1}{n} \left(\sum_{\ell=0}^{f-1} \omega_n^{\ell pr}\right) \sum_{k=0}^{p-1} \omega_n^{kr} [i_{k+1} \cdots i_n i_1 \cdots i_k]. \end{align*} Since $\omega_n^p$ is a primitive $f$-th root of unity (recall $f = n/p$), the factor $\sum_{\ell=0}^{f-1} \omega_n^{\ell pr}$ is nonzero if and only if $\omega_n^{pr} = 1$, so if and only if $f \mid r$. Picking representatives for necklaces with frequency dividing $r$ thus gives a spanning set for $E(\chi^r\ind_{C_n}^{S_n})$, and it is easy to see it is in fact a basis. Diagonal matrices act on this basis via \begin{align} \diag(x_1, \ldots, x_m) \cdot [i_1\ \cdots\ i_n] = \bfx^{\cont(i_1\cdots i_n)} [i_1\ \cdots\ i_n], \end{align} from which it follows that the Schur character is the content generating function of necklaces of length-$n$ words with letters from $[m]$ and with frequency dividing $r$. Letting $m \to \infty$, \eqref{eq:IndcharNFD} follows. \end{proof} \end{Theorem} \begin{Lemma}\label{lem:NFD_F} We have \[ \NFD_{n, r}^{\cont}(\mathbf{x}) = \F_{n, r}^{\cont}(\mathbf{x}) = \M_{n,r}^{\cont}(\bfx). \] \begin{proof} Consider the map \begin{align*} \iota \colon \F_{n, r} &\to \NFD_{n,r} \\ \iota(w) &\coloneqq [w]. \end{align*} Since $\flex(w) = \freq(w) \lex(w) = r$, we have $ \freq(w) \mid r$, so $[w] \in \NFD_{n, r}$. Thus, $\iota$ is in fact a map from $ \F_{n, r} $ to $ \NFD_{n,r} $. Since each necklace in $ \NFD_{n,r} $ contains exactly one word with $ \flex $ equal to $ r $, $\iota$ is a content-preserving bijection.
Therefore, $ \NFD_{n, r}^{\cont}(\bfx) = \F_{n, r}^{\cont}(\bfx) $. Using \Cref{thm:majn_flex}, we have \[ \W_n^{\cont, \flex}(\mathbf{x}; q) = \W_n^{\cont, \maj_n}(\mathbf{x}; q), \] which means $ \F_{n, r}^{\cont}(\mathbf{x}) = \M_{n,r}^{\cont}(\bfx) $. \end{proof} \end{Lemma} \begin{Remark} From \Cref{thm:basis_onerow} and \Cref{lem:NFD_F}, the Schur character of $\chi^r\ind_{C_n}^{S_n}$ may be described as a content generating function for certain necklaces or for certain words. This proves \Cref{thm:ind_NFD} from the introduction. \end{Remark} We may now present our remarkably direct proof of Kra{\'s}kiewicz--Weyman's result, \Cref{thm:KW}, using cyclic sieving. \begin{proof}[Proof (of \Cref{thm:KW}).] The argument in \Cref{thm:basis_onerow} exhibited an explicit basis of the Schur module $E(\chi^r\ind_{C_n}^{S_n})$, showing that \[ \sum_{r=1}^n \Ch \chi^r\ind_{C_n}^{S_n} q^r = \sum_{r=1}^n \NFD_{n, r}^{\cont}(\mathbf{x}) \, q^r. \] From \Cref{lem:NFD_F}, the bijection $\iota \colon \F_{n, r} \too{\sim} \NFD_{n, r}$ given by $w \mapsto [w]$ gives \[ \sum_{r=1}^n \NFD_{n, r}^{\cont}(\mathbf{x}) \, q^r = \W_n^{\cont, \flex}(\mathbf{x}; q). \] Using universal cyclic sieving on words for $S_n$-orbits and $C_n$-orbits as described in the introduction, \Cref{thm:majn_flex} now gives \[ \W_n^{\cont, \flex}(\mathbf{x}; q) = \W_n^{\cont, \maj_n}(\mathbf{x}; q). \] Using the RSK algorithm, \Cref{lem:RSK_Des} gives \[ \W_n^{\cont, \maj_n}(\mathbf{x}; q) = \sum_{\substack{\lambda \vdash n \\r \in [n]}} a_{\lambda, r} s_\lambda(\mathbf{x}) \, q^r. \] Combining all of these equalities and extracting the coefficient of $q^r$ gives the result. \end{proof} Every step of the preceding proof uses an explicit bijection with the exception of the appeal to cyclic sieving through \Cref{thm:majn_flex}. This suggests the problem of finding a bijective proof of \Cref{thm:majn_flex}. 
\begin{Problem}\label{prob:maj_flex} For each $n \geq 1$, find an explicit, content-preserving bijection \[ \phi \colon \W_n \to \W_n \] such that $\maj_n(w) = \flex(\phi(w))$. \end{Problem} \begin{Corollary}\label{cor:symmetry} A solution to \Cref{prob:maj_flex} would yield an explicit, bijective proof of the identity \begin{equation}\label{eq:maj_flex_sym} \sum_{\lambda \vdash n} a_{\lambda, r} s_\lambda(\mathbf{x}) = \sum_{\lambda \vdash n} a_{\lambda, s} s_\lambda(\mathbf{x}) \end{equation} for any $r, s \in \bZ$ where $\gcd(n, r) = \gcd(n, s)$. \begin{proof} We have content-preserving bijections \[ \bigsqcup_{\lambda \vdash n} \SSYT(\lambda) \times \{Q \in \SYT(\lambda) : \maj(Q) \equiv_n r\} \too{\mathrm{RSK}} \M_{n, r} \too{\phi} \F_{n, r} \too{\iota} \NFD_{n, r}. \] Now note that \[ \NFD_{n, r} = \NFD_{n, \gcd(n, r)} = \NFD_{n, \gcd(n, s)} = \NFD_{n, s}. \] We thus have an explicit, content-preserving bijection \begin{align*} \bigsqcup_{\lambda \vdash n} \SSYT(\lambda) &\times \{Q \in \SYT(\lambda) : \maj(Q) \equiv_n r\} \\ &\too{\sim} \bigsqcup_{\lambda \vdash n} \SSYT(\lambda) \times \{Q \in \SYT(\lambda) : \maj(Q) \equiv_n s\} \end{align*} from which \eqref{eq:maj_flex_sym} follows. \end{proof} \end{Corollary} \begin{Remark} The most difficult step in our proof of \Cref{thm:KW} is the universal $S_n$-cyclic sieving result, \Cref{cor:Sn_univ}, or equivalently \Cref{thm:rsw_alpha}. The proof in \cite{MR2087303} of \Cref{thm:rsw_alpha} perhaps unsurprisingly uses several of the interpretations of the Kra{\'s}kiewicz--Weyman symmetric functions from \Cref{ssec:background_KW}, in particular \Cref{thm:KW_coinvariant} involving $\Ch_{S_n \times C_n} R_n$. However, both Kra{\'s}kiewicz--Weyman's and Springer's original proofs of \Cref{thm:KW_coinvariant} hinge upon \eqref{eq:SYT_evals}. 
Indeed, Kra{\'s}kiewicz--Weyman showed explicitly in \cite[Prop.~3]{MR1867283} that $\Ch_{S_n \times C_n} \bC S_n = \Ch_{S_n \times C_n} R_n$ is easily equivalent to \eqref{eq:SYT_evals}. Springer's argument proving \eqref{eq:SYT_evals} uses a Molien-style formula, while Kra{\'s}kiewicz--Weyman's argument uses a recursion involving $\ell$-cores and skew hooks. One may thus ask about the relationship between \eqref{eq:SYT_evals} and the cyclic sieving result, \Cref{thm:rsw_alpha}. Using stable principal specializations, one can consider earlier approaches to have been ``in the $s$-basis'' and our approach to have been ``in the $h$-basis'' in the following sense. Let $\tau^\lambda$ be the $S_n$-character of $1\ind_{S_\lambda}^{S_n}$, which has $ \Ch(1\ind_{S_\lambda}^{S_n}) = h_\lam $. We have \begin{alignat*}{3} \chi^\lambda(\sigma_n^r) &= \SYT(\lambda)^{\maj}(\omega_n^r) &&= (1-q)\cdots(1-q^n) s_\lambda(1, q, q^2, \ldots)|_{q=\omega_n^r}, \\ \tau^\lambda(\sigma_n^r) &= \W_\lambda^{\maj}(\omega_n^r) &&= (1-q)\cdots(1-q^n) h_\lambda(1, q, q^2, \ldots)|_{q=\omega_n^r}. \end{alignat*} where the first equality is \eqref{eq:SYT_evals}, the second is \cite[Prop.~7.19.11]{MR1676282}, the third is \Cref{thm:rsw_alpha}, and the fourth is \cite[Prop.~7.8.3]{MR1676282} and \cite[Art.~6]{MR1506186}. Our approach suggests that, as far as the Kra{\'s}kiewicz--Weyman theorem is concerned, the $h$-basis arises more directly. In \cite{Ahlbach201837}, the authors proved a refinement of \Cref{thm:rsw_alpha}. Since earlier approaches to \Cref{thm:rsw_alpha} involving representation theory could not readily be adapted to this refinement, the argument instead uses completely different and highly combinatorial techniques. Thus, the arguments in \cite{Ahlbach201837} and the proof of \Cref{thm:KW} together give an essentially self-contained proof of Kra{\'s}kiewicz--Weyman's result. 
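These evaluation chains can also be tested numerically. For instance, since $\tau^\lambda$ is the character of the permutation action of $S_n$ on words of content $\lambda$, the second chain asserts that the number of such words fixed by rotation by $r$ positions equals $\W_\lambda^{\maj}(\omega_n^r)$. A small Python check of this instance of \Cref{thm:rsw_alpha} (our code; rotation plays the role of $\sigma_n$):

```python
from cmath import exp, pi
from itertools import permutations

def maj(w):
    """Major index: sum of the (1-indexed) descent positions of w."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def fixed_vs_evaluation(lam):
    """For each r, compare #{words of content lam fixed by rotation by r}
    with the maj generating function of such words evaluated at omega_n^r."""
    n = sum(lam)
    multiset = [a + 1 for a, mult in enumerate(lam) for _ in range(mult)]
    words = set(permutations(multiset))
    omega = exp(2j * pi / n)
    for r in range(1, n + 1):
        fixed = sum(1 for w in words if w[r % n:] + w[:r % n] == w)
        value = sum(omega ** (r * maj(w)) for w in words)
        assert abs(value - fixed) < 1e-8, (lam, r)

for lam in [(1, 1), (2, 1), (2, 2), (3, 1), (2, 2, 2), (4, 2)]:
    fixed_vs_evaluation(lam)
```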
\end{Remark} \section{Induced Representations of Arbitrary Cyclic Subgroups of \texorpdfstring{$S_n$}{Sn}}\label{sec:Cn_branching} We next generalize the discussion in \Cref{sec:KW} to branching rules for general inclusions $\langle \sigma\rangle \hookrightarrow S_n$, recovering a result of Stembridge, \Cref{thm:Stembridge}. Following the outline of the previous section, we express the relevant characters in turn as a certain orbit generating function, \Cref{thm:basis_cyclic}, a necklace generating function, \Cref{lem:OIRNFexpan}, and a generating function on words, \Cref{lem:OIRNFDexpan}. Two variations on the major index, $\bfmaj_\nu$ and $\maj_\nu$, arise quite naturally from our argument. The CSP \Cref{thm:rsw_alpha} again plays a decisive role. Throughout this section, let $\sigma \in S_n$, let $C$ be the cyclic group generated by $\sigma$, and let $ \ell \coloneqq \# C $ be the order of $\sigma$. Fixing a primitive $\ell$-th root of unity $\omega_\ell$, let $ \chi^r \colon C \to \bC $ for $r=1, \ldots, \ell$ be the linear $C$-module given by $ \chi^r(\sigma) \coloneqq \omega_\ell^r $. We begin by updating our notation for this setting and generalizing \Cref{thm:basis_onerow}. \begin{Definition} In analogy with \Cref{def:necklaces}, suppose $\cO$ is an orbit of $\W_n$ under the restricted $C$-action. The \textit{period} of $\cO$ is $\#\cO$ and the \textit{frequency} of $\cO$, written $\freq(\cO)$, is the stabilizer-order of any element of $\cO$, or equivalently $\freq(\cO) = \f{\ell}{\#\cO} $. The set of orbits of words whose frequency divides $r$ is \[ \OFD_{C, r} \coloneqq \{\text{$C$-orbits $\cO$ of $\W_n$} : \freq(\cO) \mid r\}. \] \end{Definition} \begin{Theorem}\label{thm:basis_cyclic} There is a basis for $E(\chi^r\ind_C^{S_n})$ indexed by $ C $-orbits of length-$n$ words with letters from $[m]$ and with frequency dividing $r$. Moreover, \begin{align} \Ch \lp \chi^r\ind_C^{S_n} \rp = \OFD_{C,r}^{\cont}(\mathbf{x}).
\end{align} \begin{proof} The proof of \Cref{thm:basis_onerow} goes through verbatim with the $C$-action replacing the $C_n$-action. \end{proof} \end{Theorem} Our goal is broadly to replace $\OFD_{C, r}^{\cont}(\mathbf{x})$ with a necklace generating function, apply cyclic sieving to get a major index generating function on words, and then apply RSK to get a Schur expansion. \begin{Notation} For the rest of the section, suppose that $\sigma$ has disjoint cycle decomposition $\sigma = \sigma_1 \cdots \sigma_k$ with $\nu_i \coloneqq |\sigma_i|$. Consequently, $\ell = |\langle\sigma\rangle| = \lcm(\nu_1, \ldots, \nu_k)$. Further, write \[ C_\nu \coloneqq \{\sigma_1^{r_1} \cdots \sigma_k^{r_k} \in S_n : r_1, \ldots, r_k \in \bZ\} \cong C_{\nu_1} \times \cdots \times C_{\nu_k} \] where $C_{\nu_i} \coloneqq \langle \sigma_i\rangle \subset S_n$. Thus, we have $C \subset C_\nu \subset S_n$. \end{Notation} In \Cref{sec:KW}, we considered the $C_n$-orbits of $\W_n$, namely necklaces $N \in \N_n$. The frequency of $N$ is the stabilizer-order of $N$, i.e.~$\freq(N) = \#\Stab_{C_n}(N)$. We may group together $C_n$-orbits of $\W_n$ according to their stabilizer sizes by letting \begin{align}\label{eq:NFdef} \NF_{n, r} \coloneqq \{N \in \N_n : \freq(N) = r\} \end{align} be the set of necklaces of length $n$ words with frequency $r$. Similarly, $\NFD_{n, r}$ consists of $C_n$-orbits of $\W_n$ whose stabilizer is contained in the common stabilizer of $\NF_{n, r}$. Analogously, the $C_\nu$-orbits of $\W_n$ can be identified with products of necklaces $N_1 \times \cdots \times N_k$ or equivalently with tuples $(N_1, \ldots, N_k)$ where $N_j \in \N_{\nu_j}$. Since \[ \Stab_{C_\nu}(N_1 \times \cdots \times N_k) = \prod_{j=1}^k \Stab_{C_{\nu_j}}(N_j), \] we may group together $C_\nu$-orbits of $\W_n$ according to their stabilizers as follows. 
\begin{Definition}\label{def:NFD_nu_rho} Given $\nu = (\nu_1, \ldots, \nu_k)$ and $\rho = (\rho_1, \ldots, \rho_k)$, let \begin{align*} \NF_{\nu,\rho} &\coloneqq \NF_{\nu_1, \rho_1} \times \dots \times \NF_{\nu_k, \rho_k}, \\ \NFD_{\nu,\rho} &\coloneqq \NFD_{\nu_1, \rho_1} \times \dots \times \NFD_{\nu_k, \rho_k}. \end{align*} \end{Definition} \noindent The elements of $\NF_{\nu,\rho}$ all have the same stabilizer, and the elements of $\NFD_{\nu,\rho}$ are precisely those whose stabilizer is contained in the common stabilizer of elements of $\NF_{\nu,\rho}$. We write $\rho \mid \nu$ to mean that $\rho_i \mid \nu_i$ for all $i = 1, \ldots, k$. Note that $\NF_{\nu, \rho} \neq \varnothing$ if and only if $\rho \mid \nu$. Given a group $G$ acting on a set $\W$ and a subgroup $H$ of $G$, each $G$-orbit of $\W$ is partitioned into $H$-orbits. Consequently, $C_\nu$-orbits of $\W_n$ are unions of $C$-orbits, which we exploit as follows. \begin{Lemma}\label{lem:C_Cnu.1} Let $\cO$ be a $C$-orbit of $\W_n$. Let $N_1 \times \cdots \times N_k$ be the $C_\nu$-orbit containing $\cO$ and suppose $N_1 \times \cdots \times N_k \in \NF_{\nu, \rho}$. Then \[ \#\cO = \lcm\left(\frac{\nu_1}{\rho_1}, \ldots, \frac{\nu_k}{\rho_k}\right), \] which depends only on $\nu$ and $\rho$. In particular, \[ \cO \in \OFD_{C, r} \qquad\text{ if and only if }\qquad \ell \mid r \cdot \lcm\left(\frac{\nu_1}{\rho_1}, \ldots, \frac{\nu_k}{\rho_k}\right). \] \begin{proof} By assumption, $\freq(N_j) = \rho_j$ and $ N_j \in \N_{\nu_j} $, so $\#N_j = \nu_j/\rho_j$. It follows that $\cO$ is in bijection with the cyclic group generated by a permutation of cycle type $(\nu_1/\rho_1, \ldots, \nu_k/\rho_k)$, so that $\#\cO = \lcm(\nu_1/\rho_1, \ldots, \nu_k/\rho_k)$. The second claim follows by noting that \[ \cO \in \OFD_{C, r} \Leftrightarrow \freq(\cO) \mid r \Leftrightarrow (\ell/\#\cO) \mid r \Leftrightarrow \ell \mid r \cdot \#\cO.
\] \end{proof} \end{Lemma} \begin{Lemma}\label{lem:OIRNFexpan} We have \[ \OFD_{C, r}^{\cont}(\mathbf{x}) = \sum \f{\prod_{j = 1}^k \f{\nu_j}{\rho_j} } { \lcm \lp \f{\nu_1}{\rho_1}, \dots, \f{\nu_k}{\rho_k} \rp } \, \NF_{\nu, \rho}^{\cont}(\mathbf{x}), \] where the sum is over all $\rho$ such that $\rho \mid \nu$ and $\ell \mid r \dd \lcm \lp \f{\nu_1}{\rho_1}, \dots, \f{\nu_k}{\rho_k} \rp$. \begin{proof} Consider the map \[ \Omega : \OFD_{C, r} \to \bigsqcup \NF_{\nu,\rho} \] sending $ \cO \in \OFD_{C,r} $ to the $ C_\nu $-orbit containing $ \cO $, where the union is over all $\rho$ such that $\rho \mid \nu $ and $\ell \mid r \dd \lcm \lp \f{\nu_1}{\rho_1}, \dots, \f{\nu_k}{\rho_k} \rp $. By \Cref{lem:C_Cnu.1}, $ \Omega $ does in fact map $ \OFD_{C, r} $ into this union, and $ \Omega $ is surjective. Also by \Cref{lem:C_Cnu.1}, each $C$-orbit contained in a $C_\nu$-orbit from $ \NF_{\nu, \rho} $ has size $ \lcm \lp \f{\nu_1}{\rho_1}, \dots, \f{\nu_k}{\rho_k} \rp $, and $ \#(N_1 \times \dots \times N_k) = \prod_{j=1}^k \frac{\nu_j}{\rho_j} $, so the fiber of each $ N_1 \times \cdots \times N_k \in \NF_{\nu, \rho} $ has size \[ \# \Omega^{-1}(N_1 \x \dots \x N_k) = \frac{\prod_{j=1}^k \frac{\nu_j}{\rho_j}} {\lcm\left(\frac{\nu_1}{\rho_1}, \ldots, \frac{\nu_k}{\rho_k}\right)}. \] The result now follows from $ \Omega $ being content-preserving. \end{proof} \end{Lemma} In \Cref{sec:KW}, we used cyclic sieving to turn generating functions involving $\NFD_{n, r}^{\cont}(\mathbf{x})$ into Schur expansions. Thus our next goal is to turn the necklace generating function in \Cref{lem:OIRNFexpan} into an analogous generating function over $\NFD_{\nu,\rho}^{\cont}(\mathbf{x})$. To accomplish this, one could in principle use M\"obius inversion on the lattice of stabilizers of $C_\nu$-orbits to convert from $\NF_{\nu,\rho}^{\cont}(\mathbf{x})$ to $\NFD_{\nu,\rho}^{\cont}(\mathbf{x})$. However, the following argument is more direct.
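Before carrying this out, we pause to illustrate \Cref{lem:C_Cnu.1} and \Cref{lem:OIRNFexpan} in a small case.

\begin{Example} Let $\sigma = \sigma_1 \sigma_2 = (1\,2)(3\,4) \in S_4$, so that $\nu = (2, 2)$ and $\ell = 2$, and take $r = 1$, so that $\OFD_{C, 1}$ consists of the free $C$-orbits of $\W_4$. The condition $\ell \mid r \dd \lcm \lp \f{\nu_1}{\rho_1}, \f{\nu_2}{\rho_2} \rp$ holds precisely for $\rho \in \{(1,1), (1,2), (2,1)\}$, and \Cref{lem:OIRNFexpan} reads \[ \OFD_{C, 1}^{\cont}(\mathbf{x}) = 2 \, \NF_{\nu, (1,1)}^{\cont}(\mathbf{x}) + \NF_{\nu, (1,2)}^{\cont}(\mathbf{x}) + \NF_{\nu, (2,1)}^{\cont}(\mathbf{x}). \] The coefficient $2$ reflects the fact that each $C_\nu$-orbit in $\NF_{\nu, (1,1)}$ has size $\f{\nu_1}{\rho_1} \dd \f{\nu_2}{\rho_2} = 4$ and splits into two free $C$-orbits of size $\lcm(2, 2) = 2$: for instance, the $C_\nu$-orbit $\{1212, 2112, 1221, 2121\}$ splits into the $C$-orbits $\{1212, 2121\}$ and $\{2112, 1221\}$. \end{Example}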
\begin{Lemma}\label{lem:OIRNFDexpan} For all $r \geq 1$, \[ \OFD_{C, r}^{\cont}(\mathbf{x}) = \sum \NFD_{\nu, \tau}^{\cont}(\mathbf{x}), \] where the sum is over all $ k $-tuples of integers $\tau \in [\nu_1] \times \cdots \times [\nu_k]$ such that $\sum_{j = 1}^k \f{\ell}{\nu_j} \tau_j \, \equiv_\ell \, r$. \begin{proof} We have \[ \NFD_{\nu, \tau}^{\cont}(\mathbf{x}) = \sum_{\rho \mid \nu, \tau} \NF_{\nu, \rho}^{\cont}(\mathbf{x}), \] where $\rho\mid\nu,\tau$ means $ \rho_j \mid \nu_j $ and $ \rho_j \mid \tau_j $ for all $ j $. Consequently, \begin{align*} \sum_{ \substack{ \tau \in [\nu_1] \times \dots \times [\nu_k] \\ \sum_{j = 1}^k \f{\ell}{\nu_j} \tau_j \, \equiv_\ell \, r } } \NFD_{\nu, \tau}^{\cont}(\mathbf{x}) = \sum_{ \rho \mid \nu } c_{\nu, \rho}^r \NF_{\nu, \rho}^{\cont}(\mathbf{x}) \end{align*} where \begin{align*} c_{\nu,\rho}^r \coloneqq \#\left\{ \tau \in [\nu_1] \times \cdots \times [\nu_k] : \rho \mid \tau, \sum_{j=1}^k \frac{\ell}{\nu_j} \tau_j \equiv_\ell r \right\}. \end{align*} Since $\rho_j \mid \nu_j$ and $\rho_j \mid \tau_j$, write $ \gamma_j \coloneqq \frac{\nu_j}{\rho_j} \in \bZ_{\ge 1}$ and $ \de_j \coloneqq \frac{\tau_j}{\rho_j} \in \bZ_{\ge 1} $. Then, \[ \sum_{j=1}^k \frac{\ell}{\nu_j} \tau_j = \sum_{j=1}^k \frac{\ell}{\gamma_j} \de_j, \] so \[ c_{\nu, \rho}^r = \#\left\{ \de \in [\gamma_1] \times \cdots \times [\gamma_k] : \sum_{j=1}^k \frac{\ell}{\gamma_j} \de_j \equiv_\ell r \right\}. \] Defining a group homomorphism \begin{align*} \phi \colon \prod_{j=1}^k \bZ/\gamma_j &\to \bZ/\ell \\ (\de_1, \ldots, \de_k) &\mapsto \sum_{j=1}^k \frac{\ell}{\gamma_j}\de_j, \end{align*} we now have $c_{\nu,\rho}^r = \#\phi^{-1}(r)$. Since $\frac{\ell}{\gamma_1}\bZ + \cdots + \frac{\ell}{\gamma_k}\bZ = \frac{\ell}{\lcm\lp\gamma_1, \ldots, \gamma_k\rp}\bZ$, it follows that \begin{align*} \im\phi &= \{r \in \bZ/\ell : \ell \mid r \cdot \lcm\lp\gamma_1, \ldots, \gamma_k\rp\}, \\ \#\im\phi &= \lcm(\gamma_1, \ldots, \gamma_k).
\end{align*} For $r \in \im \phi$, we then have \[ c_{\nu,\rho}^r = \#\phi^{-1}(r) = \#\ker \phi = \frac{\gamma_1 \cdots \gamma_k} {\lcm(\gamma_1, \ldots, \gamma_k)}. \] The result follows from \Cref{lem:OIRNFexpan}. \end{proof} \end{Lemma} Our next goal is to convert the necklace expansion in \Cref{lem:OIRNFDexpan} into a Schur expansion. Recalling from \Cref{sec:KW} that $ \M_{n, r} \coloneqq \{ w \in \W_n : \maj_n(w) = r \} $, \Cref{lem:NFD_F} tells us \begin{equation}\label{eq:NFD_nutau} \NFD_{\nu,\tau}^{\cont}(\mathbf{x}) = \prod_{j=1}^k \NFD_{\nu_j, \tau_j}^{\cont}(\mathbf{x}) = \prod_{j=1}^k \M_{\nu_j, \tau_j}^{\cont}(\mathbf{x}). \end{equation} Interpreting the right-hand side of \eqref{eq:NFD_nutau} in terms of words and comparing with the indexing set in \Cref{lem:OIRNFDexpan} motivates the following variations on the major index. \begin{Definition}\label{def:majtuplestats} Suppose $\nu \vDash n$, $\tau \in [\nu_1] \times \cdots \times [\nu_k]$, and $\ell = \lcm(\nu_1, \ldots, \nu_k)$. Let $\bfmaj_\nu \colon \W_n \to [\nu_1] \times \cdots \times [\nu_k]$ be defined as follows. For $w \in \W_n$, write $w = w^1 \cdots w^k$ where each $w^j$ is a word in $\W_{\nu_j}$. Set \[ \bfmaj_\nu(w) \coloneqq (\maj_{\nu_1}(w^1), \ldots, \maj_{\nu_k}(w^k)). \] Furthermore, let $\maj_\nu \colon \W_n \to [\ell]$ be defined by \[ \maj_\nu(w) \coloneqq \sum_{j=1}^k \frac{\ell}{\nu_j} \, \bfmaj_\nu(w)_j \qquad (\text{mod }\ell). \] Consequently, we have $\maj_{(n)} = \maj_n$. Note that both $\bfmaj_\nu$ and $\maj_\nu$ are functions of $\Des(w)$. We may thus define both $\bfmaj_\nu$ and $\maj_\nu$ on $Q \in \SYT(n)$ using only $\Des(Q)$ in the same way. Equivalently, we may set $\bfmaj_\nu(Q) \coloneqq \bfmaj_\nu(w)$ and $\maj_\nu(Q) \coloneqq \maj_\nu(w)$ for any $w$ such that $Q = Q(w)$. \end{Definition} \begin{Example} Let $\nu=(5, 3, 3)$ and $w=44121361631$, so that $\ell=15$, $w_1=44121$, $w_2=361$, and $w_3=631$. 
We have \[ \bfmaj_\nu(w) = (\maj_5(w_1), \maj_3(w_2), \maj_3(w_3)) = (1, 2, 3) \] and, hence, $\maj_\nu(w) = \frac{15}{5} \cdot 1 + \frac{15}{3} \cdot 2 + \frac{15}{3} \cdot 3 = 13 \text{ $($mod $15)$}$. \end{Example} \begin{Definition} Suppose $\nu \vDash n$ and $\tau \in [\nu_1] \times \cdots \times [\nu_k]$. Let \[ \M_{\nu,\tau} \coloneqq \{w \in \W_n : \bfmaj_\nu(w) = \tau\}. \] \end{Definition} \begin{Theorem}{\cite[Theorem~3.3]{MR1023791}}\label{thm:Stembridge} Let $C$ be a cyclic subgroup of $S_n$ generated by an element of cycle type $\nu = (\nu_1, \dots, \nu_k) $, and let $ \ell = \lcm(\nu_1, \dots, \nu_k) $. We have \[ \sum_{r=1}^\ell \Ch\left(\chi^r\ind_C^{S_n}\right) q^r = \W_n^{\cont,\maj_\nu}(\mathbf{x}; q) = \sum_{\substack{\lambda \vdash n\\r \in [\ell]}} a_{\lambda, r}^\nu s_\lambda(\mathbf{x}) q^r \] where $a_{\lambda, r}^\nu \coloneqq \#\{Q \in \SYT(\lambda) : \maj_\nu(Q) = r\}$. In particular, the multiplicity of $S^\lambda$ in $\chi^r\ind_C^{S_n}$ is $a_{\lambda,r}^\nu$. \begin{proof} From the definition of $\bfmaj_\nu$ and \eqref{eq:NFD_nutau}, we have \begin{equation}\label{eq:NFD_W.1} \NFD_{\nu,\tau}^{\cont}(\mathbf{x}) = \M_{\nu,\tau}^{\cont}(\mathbf{x}). \end{equation} Using \Cref{thm:basis_cyclic} and \Cref{lem:OIRNFDexpan}, we then have \begin{align*} \sum_{r=1}^\ell \Ch\lp\chi^r\ind_C^{S_n}\rp q^r &= \sum_{r=1}^\ell \; \sum_{ \substack{ \tau \in [\nu_1] \times \dots \times [\nu_k] \\ \sum_{j = 1}^k \f{\ell}{\nu_j} \tau_j \, \equiv_\ell \, r } } \NFD_{\nu, \tau}^{\cont}(\mathbf{x}) \; q^r \\ &= \sum_{r=1}^\ell \; \sum_{ \substack{ \tau \in [\nu_1] \times \dots \times [\nu_k] \\ \sum_{j = 1}^k \f{\ell}{\nu_j} \tau_j \, \equiv_\ell \, r } } \M_{\nu,\tau}^{\cont}(\mathbf{x}) \; q^r \\ &= \sum_{r=1}^\ell \{ w \in \W_n : \maj_\nu(w) = r \}^{\cont}(\bfx) \; q^r \\ &= \W_n^{\cont, \maj_\nu}(\mathbf{x}; q).
\end{align*} Since $ \maj_\nu(w)$ depends only on $\Des(w)$, we can apply the RSK bijection again through \Cref{lem:RSK_Des} to get \[ \W_n^{\cont, \maj_\nu}(\mathbf{x}; q) = \sum_{\substack{\lambda \vdash n\\r \in [\ell]}} a_{\lambda, r}^\nu s_\lambda(\mathbf{x}) q^r. \] \end{proof} \end{Theorem} \begin{Remark} Stembridge showed the equality of the first and third terms in \Cref{thm:Stembridge} using the skew analogue of \eqref{eq:SYT_evals} and branching rules along Young subgroups of $S_n$. By contrast, $\W_n^{\cont, \maj_\nu}(\mathbf{x}; q)$ played a key role in our approach. \end{Remark} Since the isomorphism type of $\chi^r\ind_C^{S_n}$, or equivalently the Schur expansion of $\OFD_{C, r}^{\cont}(\mathbf{x})$, depends only on the cycle type $\nu$ of a generator of $C$ and on $\gcd(\ell, r)$, we have the following generalization of \Cref{cor:alamrsym}. \begin{Corollary}\label{cor:alamrsym_nu} For all $ n \ge 1 $, $ \lam, \nu \vdash n $, and $ r \ge 1 $, we have $ a_{\lam,r}^\nu = a_{\lam, \gcd(\ell,r)}^\nu $, where $\ell = \lcm(\nu_1, \nu_2, \ldots)$. \end{Corollary} For use in the next section, we record the Schur expansion of $ \M_{\nu, \tau}^{\cont}(\mathbf{x}) $. The proof is analogous to the last step of the proof of \Cref{thm:Stembridge} using \Cref{lem:RSK_Des}. \begin{Corollary}\label{cor:M_schur} If $\nu \vDash n$ and $\tau \in [\nu_1] \times \cdots \times [\nu_k]$, then \[ \M_{\nu, \tau}^{\cont}(\mathbf{x}) = \sum_{\lambda \vdash n} \mathbf{a}_{\lambda,\tau}^\nu s_\lambda(\mathbf{x}) \] where \[ \mathbf{a}_{\lambda,\tau}^\nu \coloneqq \#\{Q \in \SYT(\lambda) : \bfmaj_\nu(Q) = \tau\}. \] \end{Corollary} We also have a corresponding symmetry result. Contrast it with \Cref{cor:alamrsym}. \begin{Corollary}\label{cor:permsym} Suppose $ \nu = (\nu_1, \dots, \nu_k) $ is the cycle type of some $\sigma \in S_n$, $ \tau \in [\nu_1] \times \cdots \times [\nu_k]$, $ \pi \in S_k $, and $ \lam \vdash n $. Then, $ \mathbf{a}_{\lam,\tau}^{\nu} = \mathbf{a}_{\lam, \pi \dd \tau}^{\pi \dd \nu} $.
\begin{proof} Since reordering does not affect contents, we have \[ \NFD_{\nu, \tau}^{\cont}(\mathbf{x}) = \NFD_{\pi \cdot \nu, \pi \cdot \tau}^{\cont} (\mathbf{x}). \] Now apply \eqref{eq:NFD_W.1} and \Cref{cor:M_schur} and equate coefficients of $s_\lambda(\mathbf{x})$. \end{proof} \end{Corollary} \section{Inducing 1-dimensional Representations from \texorpdfstring{$ C_a \wr S_b $}{Ca wreath Sb} to \texorpdfstring{$ S_{ab} $}{Sab}}\label{sec:HLM} We next apply the approach of \Cref{sec:KW} and \Cref{sec:Cn_branching} to prove a generalization of a formula due to Schocker \cite{MR1984625} for the Schur expansion of $\cL_{(a^b)}$. In particular, we give Schur expansions of the characteristics of \[ \cL_{(a^b)}^{r, 1} \coloneqq \chi^{r, 1}\ind_{C_a \wr S_b}^{S_{ab}} \qquad \text{ and } \qquad \cL_{(a^b)}^{r, \epsilon} \coloneqq \chi^{r, \epsilon}\ind_{C_a \wr S_b}^{S_{ab}}. \] Note that $\Ch \cL_{(a^b)} = \Ch \cL_{(a^b)}^{1, 1}$ by \Cref{cor:higher_schur_weyl}. The argument in \Cref{cor:higher_schur_weyl} and the fact that $ \Ch(\epsilon_b) = e_b(\bfx) $ immediately yield the following more general result, which also follows from an appropriate modification of \Cref{thm:basis_onerow}. \begin{Lemma}\label{lem:setmultisetGFs} We have \[ \Ch \cL_{(a^b)}^{r, 1} = \mch{\NFD_{a, r}}{b}^{\cont}(\mathbf{x}) \qquad \text{ and } \qquad \Ch \cL_{(a^b)}^{r, \epsilon} = \binom{\NFD_{a, r}}{b}^{\cont}(\mathbf{x}). \] \end{Lemma} Our first goal is to manipulate the necklace generating functions in \Cref{lem:setmultisetGFs} in such a way that we may apply cyclic sieving. We use Burnside's lemma and a sign-reversing involution to unravel these multiset and subset generating functions, respectively. \begin{Lemma}\label{lem:schocker_burnside} We have \[ \mch{\NFD_{a, r}}{b}^{\cont}(x_1, x_2, \ldots) = \sum_{\nu \vdash b} \frac{1}{z_\nu} \prod_{j=1}^{\ell(\nu)} \NFD_{a, r}^{\cont}(x_1^{\nu_j}, x_2^{\nu_j}, \ldots).
\] \begin{proof} Multisets of $b$ necklaces from $\NFD_{a, r}$ can be thought of as $S_b$-orbits of length-$b$ tuples $(N_1, \ldots, N_b)$ of necklaces $N_i \in \NFD_{a, r}$ under the natural $S_b$-action. The tuples $(N_1, \ldots, N_b)$ fixed by an element $\si \in S_b$ are those tuples which are constant on blocks corresponding to cycles of $\si$. It follows that if $\si$ has cycle type $\nu \vdash b$, \begin{equation}\label{eq:contgf_fixednecklaces} \{ T \in \NFD_{a,r}^b : \si \dd T = T \}^{\cont} (x_1, x_2, \ldots) = \prod_{j=1}^{\ell(\nu)} \NFD_{a, r}^{\cont}(x_1^{\nu_j}, x_2^{\nu_j}, \ldots). \end{equation} By Burnside's lemma, we may count $S_b$-orbits of necklaces $(N_1, \ldots, N_b)$ of fixed content by averaging the number of $\si$-fixed tuples of fixed content over all $\si \in S_b$. The result follows by grouping together permutations of a given cycle type. \end{proof} \end{Lemma} \begin{Lemma}\label{lem:schocker_burnside.eps} We have \[ \binom{\NFD_{a, r}}{b}^{\cont}(\bfx) = \sum_{\nu \vdash b} \frac{(-1)^{b - \ell(\nu)}}{z_\nu} \prod_{j=1}^{\ell(\nu)} \NFD_{a, r}^{\cont}(x_1^{\nu_j}, x_2^{\nu_j}, \ldots). \] \end{Lemma} \begin{proof} Multiplying both sides by $b!$, using \eqref{eq:contgf_fixednecklaces} and the fact $ \sgn(\si) = (-1)^{b - \ell(\nu)} $ for $ \si \in S_b $ with cycle type $ \nu $, the result is equivalent to \begin{equation}\label{eq:fixednecklacesSRI} \begin{aligned} \{ (N_1, \dots, N_b) & \in \NFD_{a, r}^b : (N_1, \dots, N_b) \tx{ are distinct} \}^{\cont}(\bfx) \\ &= \sum_{\si \in S_b} \sgn(\si) \{ T \in \NFD_{a,r}^b : \si \dd T = T \}^{\cont}(\bfx). \end{aligned} \end{equation} On the right-hand side of \eqref{eq:fixednecklacesSRI}, each $b$-tuple $(N_1, \ldots, N_b)$ is counted \[ \wgt(N_1, \dots, N_b) \coloneqq \sum_{\substack{\si \in S_b \\ \tx{s.t. } \si \dd (N_1, \dots, N_b) = (N_1, \dots, N_b)} } \sgn(\si) \] times. If $N_1, \ldots, N_b$ are distinct, then only $ \si =\id$ contributes, so $ \wgt(N_1, \dots, N_b) = 1 $. 
If $ N_1, \ldots, N_b$ are not distinct, then without loss of generality, suppose $N_1 = N_2$. Then, modifying the cycle(s) containing $1$ and $2$ as in \[ (1\ \cdots)(2\ \cdots) \leftrightarrow (1\ \cdots\ 2\ \cdots) \] gives a sign-reversing involution on $ \{ \si \in S_b : \si \dd (N_1, \dots, N_b) = (N_1, \dots, N_b) \} $, meaning $ \wgt(N_1, \dots, N_b) = 0 $. This proves \eqref{eq:fixednecklacesSRI}. \end{proof} \begin{Remark} Using standard properties of plethysm (see e.g.~\cite[\S I.8]{MR1354144}) and the power-sum expansions of $e_b$ and $h_b$ (see \cite[(7.22)-(7.23)]{MR1676282}), \Cref{lem:schocker_burnside} and \Cref{lem:schocker_burnside.eps} are equivalent to \begin{align}\label{eq:schocker_p.1} \Ch\cL_{(a^b)}^{r, 1} &= h_b[\Ch \chi^r\ind_{C_a}^{S_a}] = \sum_{\nu \vdash b} \frac{1}{z_\nu} p_\nu[\Ch \chi^r\ind_{C_a}^{S_a}], \\ \label{eq:schocker_p.2} \Ch\cL_{(a^b)}^{r, \epsilon} &= e_b[\Ch \chi^r\ind_{C_a}^{S_a}] = \sum_{\nu \vdash b} \frac{(-1)^{b-\ell(\nu)}}{z_\nu} p_\nu[\Ch \chi^r\ind_{C_a}^{S_a}]. \end{align} Consequently, one may replace the combinatorial manipulations in \Cref{lem:schocker_burnside} and \Cref{lem:schocker_burnside.eps} with symmetric function manipulations. In the next section, we will prove \Cref{thm:casbchars}, which generalizes the first equalities in \eqref{eq:schocker_p.1} and \eqref{eq:schocker_p.2}. \end{Remark} \begin{Remark} Let $\omega$ be the involution on the algebra of symmetric functions defined by $\omega(s_\lam(\bfx)) = s_{\lam'}(\bfx)$ where $\lambda'$ is the \textit{conjugate} of $\lambda$, obtained by reflecting $\lambda$ through the line $y=-x$. One may show in a variety of ways that \begin{align}\label{eq:omegaind} \omega\lp\Ch \chi^r\ind_{C_n}^{S_n}\rp = \Ch \chi^s \ind_{C_n}^{S_n} \qquad\text{where}\qquad s = \binom{n}{2} - r. \end{align} For instance, we can prove \eqref{eq:omegaind} using \Cref{thm:KW} as follows.
Since conjugation $ Q \mapsto Q' $ satisfies $ \Des(Q') = [n - 1] \sm \Des(Q) $, we have \begin{equation} \begin{aligned} a_{\lam',r} &= \# \{ Q \in \SYT(\lam') : \maj(Q) \equiv_n r \} \\ &= \# \left\{ Q' \in \SYT(\lam) : \maj(Q') \equiv_n \ch{n}{2} - r \right\} = a_{\lam, \ch{n}{2} - r}. \end{aligned} \end{equation} Therefore, by \Cref{thm:KW}, letting $ s = \ch{n}{2} - r $, \begin{align*} \omega\lp\Ch \chi^r\ind_{C_n}^{S_n} \rp = \omega \lp \sum_{\lam \vdash n} a_{\lam, r} s_{\lam}(\bfx) \rp = \sum_{\lam \vdash n} a_{\lam', r} s_{\lam}(\bfx) = \sum_{\lam \vdash n} a_{\lam, s} s_{\lam}(\bfx) = \Ch \chi^s \ind_{C_n}^{S_n}. \end{align*} From the symmetry result \Cref{cor:alamrsym}, it follows that $\Ch \chi^r\ind_{C_n}^{S_n}$ is fixed under $\omega$ when $n$ is odd. When $n$ is even, $\Ch \chi^r\ind_{C_n}^{S_n}$ may or may not be fixed. For instance, when $r=1$, we find \[ \omega\lp\Ch\cL_{n}\rp = \omega\lp\Ch\chi^1\ind_{C_n}^{S_n}\rp = \begin{cases} \Ch\cL_{n}^{(2)} = \Ch\chi^2\ind_{C_n}^{S_n} & \text{if $n/2$ is odd,} \\ \Ch\cL_{n} = \Ch\chi^1\ind_{C_n}^{S_n} & \text{otherwise.} \end{cases} \] Here $\cL_n^{(2)}$ is the deformation of $\cL_n$ recently studied by Sundaram \cite{arX1803.09368}. Further standard properties of plethysm together with \eqref{eq:schocker_p.1} and \eqref{eq:schocker_p.2} give \begin{align*} &\ \omega\lp\Ch\cL_{(a^b)}^{r, 1}\rp = \Ch\cL_{(a^b)}^{r, \epsilon} \qquad\text{for $a$ odd, and} \\ &\left. \begin{aligned} \omega\lp\Ch\cL_{(a^b)}^{r, 1}\rp &= \Ch\cL_{(a^b)}^{s, 1} \\ \omega\lp\Ch\cL_{(a^b)}^{r, \epsilon}\rp &= \Ch\cL_{(a^b)}^{s, \epsilon} \end{aligned} \qquad\right\}\quad \text{for $a$ even, where $s=\binom{a}{2} - r$.} \end{align*} Consequently, one may obtain the Schur expansion of $\Ch \cL_{(a^b)}^{r, \epsilon}$ from the Schur expansion of $\Ch \cL_{(a^b)}^{r, 1}$ simply by applying the $\omega$ map if and only if $a$ is odd. When $a$ is even, these two cases are more fundamentally different.
\end{Remark} Next, we convert $ \NFD_{a, r}^{\cont}(x_1^{\nu_j}, x_2^{\nu_j}, \ldots) $ into a linear combination of $\NF_{k, s}^{\cont}(\mathbf{x})$'s and then apply M\"obius inversion to convert to a linear combination of $\NFD_{k, s}^{\cont}(\mathbf{x})$'s. We will need the following variation on the number-theoretic M\"obius function $\mu$. \begin{Definition}\label{def:mobiusf} Suppose $d \mid e$ and $f \mid e$. Set \[ \mu_f(d, e) \coloneqq \sum_{\substack{g \\ \text{s.t. } \lcm(f, d) \mid g \mid e}} \mu\lp\frac{g}{f}\rp. \] \end{Definition} This expression simplifies considerably as follows. Let $ \rad(m) $ denote the squarefree positive integer with the same prime divisors as $m$. \begin{Lemma}\label{lem:mobiusf} Suppose $d \mid e$ and $f \mid e$. Then \begin{align*} \mu_f(d, e) = \begin{cases} \mu\lp\frac{\lcm(f, d)}{f}\rp & \text{if }\rad\lp\frac{e}{f}\rp = \rad\lp\frac{\lcm(f, d)}{f}\rp = \frac{\lcm(f, d)}{f}, \\ 0 & \text{otherwise.} \end{cases} \end{align*} \begin{proof} We see \begin{align*} \mu_f(d,e) = \sum_{\substack{g \\ \text{s.t. } \lcm(f, d) \mid g \mid e}} \mu\lp\frac{g}{f}\rp = \sum_{\substack{h \\ \text{s.t. } \frac{\lcm(f, d)}{f} \mid h \mid \frac{e}{f}}} \mu(h). \end{align*} Since $ \mu(h) \ne 0 $ only when $ h $ is squarefree, the sum has nonzero terms only when $ \frac{\lcm(f, d)}{f} $ is squarefree and $ h \mid \rad(e/f) $. Restricting to this case, we can write $ \rad(e/f) = k \lcm(f, d)/f $ for some integer $ k $. Since $ k $ and $\lcm(f, d)/f$ must be relatively prime, we have \begin{align*} \mu_f(d, e) &= \sum_{\substack{h \\ \text{s.t. } \frac{\lcm(f, d)}{f} \mid h \mid \frac{k \cdot \lcm(f, d)}{f}}} \mu(h) \\ &= \sum_{s \mid k} \mu\lp\lp\frac{\lcm(f, d)}{f}\rp s \rp \\ &= \mu\lp\frac{\lcm(f, d)}{f}\rp \sum_{s \mid k} \mu(s) \\ &= \begin{cases} \mu\lp\frac{\lcm(f, d)}{f}\rp & \text{if } k = 1, \\ 0 & \text{otherwise}, \end{cases} \end{align*} giving the result.
\end{proof} \end{Lemma} \begin{Lemma}\label{lem:NFD_pleth} We have \[ \NFD_{a, r}^{\cont}(x_1^k, x_2^k, \ldots) = \sum_{s \mid rk} \mu_s(k, rk) \NFD_{ak, s}^{\cont}(x_1, x_2, \ldots). \] \begin{proof} The left-hand side is the content generating function for $k$-tuples of length $a$ necklaces with frequency dividing $ r $ of the form $ (N, \dots, N) $, repeating the same necklace $ k $ times. By concatenation, we may equivalently view such tuples as length $ak$ necklaces whose frequency $f$ satisfies $k \mid f \mid rk$. Consequently, \begin{align}\label{eq:freqsum} \NFD_{a, r}^{\cont}(x_1^k, x_2^k, \ldots) = \sum_{\substack{f \\\text{s.t. }k \mid f \mid rk}} \NF_{ak, f}^{\cont}(x_1, x_2, \ldots), \end{align} recalling $ \NF_{n,f} \coloneqq \{ N \in \N_n : \freq(N) = f \} $. M\"obius inversion on the identity $ \NFD_{ak, f}^{\cont}(\mathbf{x}) = \sum_{s \mid f} \NF_{ak, s}^{\cont}(\mathbf{x}) $ gives \begin{align}\label{eq:Mobiusinv} \NF_{ak, f}^{\cont}(\mathbf{x}) = \sum_{s \mid f} \mu \lp \frac{f}{s} \rp \NFD_{ak, s}^{\cont}(\bfx). \end{align} Thus, by \eqref{eq:Mobiusinv}, \eqref{eq:freqsum} becomes \begin{align*} \NFD_{a, r}^{\cont}(x_1^k, x_2^k, \ldots) &= \sum_{\substack{f \\ \text{s.t. }k \mid f \mid rk}} \sum_{s \mid f} \mu\lp\frac{f}{s}\rp \NFD_{ak, s}^{\cont}(x_1, x_2, \ldots) \\ &= \sum_{s \mid rk} \lp \sum_{ \substack{f \\ \text{s.t. } \lcm(k,s) \mid f \mid rk}} \mu\lp \frac{f}{s}\rp \rp \NFD_{ak, s}^{\cont}(x_1, x_2, \ldots) \\ &= \sum_{s \mid rk} \mu_s(k, rk) \NFD_{ak, s}^{\cont}(x_1, x_2, \ldots). \end{align*} by \Cref{def:mobiusf}. \end{proof} \end{Lemma} \begin{Notation} Given a sequence $\nu = (\nu_1, \ldots, \nu_k) \in \bZ_{\geq 1}^k$ and an integer $r \in \bZ_{\geq 1}$, let \[ r \ast \nu \coloneqq (r\nu_1, \ldots, r\nu_k). \] Given another sequence $\tau = (\tau_1, \ldots, \tau_k)$, recall that $\tau \mid \nu$ means $\tau_j \mid \nu_j$ for all $j$. 
Further recall \[ \NFD_{\nu, \tau} = \NFD_{\nu_1, \tau_1} \times \dots \times \NFD_{\nu_k, \tau_k} \] from \Cref{def:NFD_nu_rho}. Finally, extend $\mu_f(d, e)$ to sequences multiplicatively: \[ \mu_{(f_1, \ldots, f_k)}((d_1, \ldots, d_k), (e_1, \ldots, e_k)) \coloneqq \prod_{j=1}^k \mu_{f_j}(d_j, e_j). \] \end{Notation} \begin{Corollary}\label{cor:NFD_mult_NFD} We have \begin{align*} \Ch \cL_{(a^b)}^{r, 1} &= \sum_{\nu \vdash b} \frac{1}{z_\nu} \sum_{\tau \mid r \ast \nu} \mu_\tau(\nu, r\ast\nu) \NFD_{a\ast\nu, \tau}^{\cont}(\mathbf{x}), \\ \Ch \cL_{(a^b)}^{r, \epsilon} &= \sum_{\nu \vdash b} \frac{(-1)^{b - \ell(\nu)}}{z_\nu} \sum_{\tau \mid r \ast \nu} \mu_\tau(\nu, r\ast\nu) \NFD_{a\ast\nu, \tau}^{\cont}(\mathbf{x}). \end{align*} \begin{proof} Combine \Cref{lem:setmultisetGFs}, \Cref{lem:schocker_burnside} or \Cref{lem:schocker_burnside.eps}, and \Cref{lem:NFD_pleth}. \end{proof} \end{Corollary} We may now state and generalize Schocker's formula for $\Ch \cL_{(a^b)} = \Ch \cL_{(a^b)}^{1, 1}$. \begin{Theorem}[See {\cite[Thm.~3.1]{MR1984625}}]\label{thm:GeneralizedSchocker} For all $a, b \geq 1$ and $r=1, \ldots, a$, we have \begin{align*} \Ch \cL_{(a^b)}^{r, 1} &= \sum_{\lambda \vdash ab} \lp\sum_{\nu \vdash b} \frac{1}{z_\nu} \sum_{\tau \mid r \ast \nu} \mu_\tau(\nu, r \ast \nu) \mathbf{a}_{\lambda, \tau}^{a \ast \nu}\rp s_\lambda(\mathbf{x}) \qquad \text{and} \qquad \\ \Ch \cL_{(a^b)}^{r, \epsilon} &= \sum_{\lambda \vdash ab} \lp\sum_{\nu \vdash b} \frac{(-1)^{b - \ell(\nu)}}{z_\nu} \sum_{\tau \mid r \ast \nu} \mu_\tau(\nu, r \ast \nu) \mathbf{a}_{\lambda, \tau}^{a \ast \nu}\rp s_\lambda(\mathbf{x}), \end{align*} where, recalling the definition of $ \mathbf{maj}_{a \ast \nu} $ from \Cref{def:majtuplestats}, \[ \mathbf{a}_{\lambda, \tau}^{a \ast \nu} \coloneqq \#\{Q \in \SYT(\lambda) : \mathbf{maj}_{a \ast \nu}(Q) = \tau\}. \] \begin{proof} Combine \Cref{cor:M_schur} and \Cref{cor:NFD_mult_NFD}.
\end{proof} \end{Theorem} \begin{Remark} Schocker's approach to \cite[Thm.~3.1]{MR1984625} uses J\"ollenbeck's non-commutative character theory and involved manipulations with Klyachko's idempotents and Ramanujan sums. Much of Schocker's argument generalizes immediately to all $r$. The argument presented above is comparatively self-contained and direct. Two perhaps mysterious aspects of the formula, the appearance of M\"obius functions and the average over $S_b$, arose naturally from Burnside's lemma and a change of basis using M\"obius inversion. Our argument uses explicit bijections at each step except for the appeal to Burnside's lemma and the use of \Cref{lem:schocker_burnside.eps}. \end{Remark} \section{Higher Lie Modules and Branching Rules}\label{sec:mash} The argument in \Cref{sec:KW} solves Thrall's problem for $\lambda=(n)$ by considering all branching rules for $C_n \hookrightarrow S_n$ simultaneously and using cyclic sieving and RSK to convert from the monomial to the Schur basis. We now turn to analogous considerations for the higher Lie modules and more generally branching rules for $C_a \wr S_b \hookrightarrow S_{ab}$. We give an analogue of the $\flex$ statistic and the monomial basis expansion for such branching rules from \Cref{ssec:background_KW}. We then show how to convert from the monomial to the Schur basis assuming the existence of a certain statistic on words we call $\mash$ which interpolates between $\maj_n$ and the shape under RSK. We now recall and prove \Cref{thm:casbchars} from the introduction, after introducing some notation. \begin{Definition} Fix integers $ a, b \ge 1$. Define \[ \PP_{a}^{b} \coloneqq \left\{ \ul = (\lam^{(1)}, \dots, \lam^{(a)}) : \lam^{(1)}, \dots, \lam^{(a)} \tx{ are partitions }, \sum_{r = 1}^a |\lam^{(r)}| = b \right\}, \] which indexes the irreducible $C_a \wr S_b$-representations by \Cref{thm:CaSb_irreps}. 
\end{Definition} \begin{Theorem*} For all $ a, b \ge 1$ and $\ul = (\lam^{(1)}, \dots, \lam^{(a)}) \in \PP_a^b $, we have \[ \Ch S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} = \prod_{r = 1}^{a} s_{\lambda^{(r)}}[\NFD_{a,r}^{\cont}(\mathbf{x})]. \] \begin{proof}[Proof of {\Cref{thm:casbchars}}] We have \begin{align*} S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} & = \left[\bigotimes_{r=1}^a (\chi^r_a \wr S^{\lambda^{(r)}})\right] \ind_{C_a \wr S_{\al(\ul)}}^{S_{ab}} \\ &\cong \left[\bigotimes_{r=1}^a (\chi^r_a \wr S^{\lambda^{(r)}})\right] \ind_{C_a \wr S_{\al(\ul)}}^{S_{a \ast \al(\ul)}} \ind_{S_{a \ast \al(\ul)}}^{S_{ab}} \\ &\cong \left[\bigotimes_{r=1}^a (\chi^r_a \wr S^{\lambda^{(r)}}) \ind_{C_a \wr S_{|\lambda^{(r)}|}}^{S_{a|\lambda^{(r)}|}}\right] \ind_{S_{a \ast \al(\ul)}}^{S_{ab}} \\ &\cong \left[\bigotimes_{r=1}^a (\chi^r_a \wr S^{\lambda^{(r)}}) \ind_{C_a \wr S_{|\lambda^{(r)}|}}^{S_a \wr S_{|\lambda^{(r)}|}} \ind_{S_a \wr S_{|\lambda^{(r)}|}}^{S_{a|\lambda^{(r)}|}}\right] \ind_{S_{a \ast \al(\ul)}}^{S_{ab}} \\ &\cong \left[\bigotimes_{r=1}^a (\chi^r_a\ind_{C_a}^{S_a} \wr S^{\lambda^{(r)}}) \ind_{S_a \wr S_{|\lambda^{(r)}|}}^{S_{a|\lambda^{(r)}|}}\right] \ind_{S_{a \ast \al(\ul)}}^{S_{ab}}, \end{align*} where the first and third isomorphisms use transitivity of induction, the second isomorphism uses \Cref{lem:indtensor}, and the fourth isomorphism uses \Cref{lem:indwreath}. Consequently, using \eqref{eq:prodchar}, \eqref{eq:plethsymchar}, and \Cref{thm:basis_onerow}, we have \begin{align*} \Ch S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} &= \prod_{r=1}^a \Ch \lp\chi^r_a\ind_{C_a}^{S_a} \wr S^{\lambda^{(r)}}\rp \ind_{S_a \wr S_{|\lambda^{(r)}|}}^{S_{a|\lambda^{(r)}|}} \\ &= \prod_{r=1}^a (\Ch S^{\lambda^{(r)}})[\Ch \chi^r_a\ind_{C_a}^{S_a}] \\ &= \prod_{r=1}^a s_{\lambda^{(r)}}[\NFD_{a, r}^{\cont}(\mathbf{x})]. 
\end{align*} \end{proof} \end{Theorem*} Recall from \Cref{ssec:background_tableaux} that given a word $w$, the shape of $w$, denoted $\sh(w)$, is the common shape of $P(w)$ and $Q(w)$ under RSK. \begin{Definition}\label{def:flex_ab} Fix $a, b \ge 1 $. Construct statistics \[ \flex_a^b, \maj_a^b \colon \W_{ab} \to \PP_a^b \] as follows. Given $w \in \W_{ab}$, write $w = w^1 \cdots w^b $ where $ w^j \in \W_a $. In this way, consider $ w $ as a word of size $ b $ whose letters are in $ \W_a $. For each $r \in [a]$, let $w^{(r)}$ denote the subword of $w$ whose letters are those $w^j$ such that $\flex(w^j) = r$. Totally order $\W_a$ lexicographically, so that RSK is well-defined for words with letters from $\W_a$. Set \[ \flex_a^b(w) \coloneqq (\sh(w^{(1)}), \ldots, \sh(w^{(a)})). \] Define $\maj_a^b$ in the same way but with $\flex$ replaced by $\maj_a$. Consequently, $\maj_n^1(w)$ is the $n$-tuple of partitions whose only non-empty entry is a single cell at position $\maj_n(w)$. \end{Definition} \begin{Example} Let $w = 212023101241$ and suppose $a=3$, $b=4$. Write $ w = (212)(023)(101)(241)$. The parenthesized terms have $ \flex $ statistics $ 2, 1, 2, 2 $ and $ \maj_3 $ statistics $ 1, 3, 1, 2 $, respectively. When computing $\flex_3^4(w)$, we then have $ w^{(1)} = (023), w^{(2)} = (212)(101)(241), w^{(3)} = \vn $. Since $ (101) <_{\lex} (212) <_{\lex} (241), $ $\sh(w^{(2)}) = \sh(213) = (2, 1)$. Consequently, \[ \flex_3^4(212023101241) = ((1), (2, 1), \varnothing). \] When computing $\maj_3^4(w)$, we have $ w^{(1)} = (212)(101), w^{(2)} = (241), w^{(3)} = (023) $. Since $ (101) <_{\lex} (212) $, $ \sh(w^{(1)}) = \sh(21) = (1,1)$. Hence \[ \maj_3^4(212023101241) = ((1, 1), (1), (1)). \] \end{Example} We now recall and prove \Cref{thm:grfrob_flexab} from the introduction. \begin{Theorem*} Fix $ a, b \ge 1 $. 
We have \begin{align*} \sum_{\ul \in \PP_a^b } \dim S^{\ul} \cdot \Ch \lp S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} \rp q^{\ul} &= \W_{ab}^{\cont,\flex_a^b}(\mathbf{x}; q) \\ &= \W_{ab}^{\cont,\maj_a^b}(\mathbf{x}; q) \end{align*} where the $ S^{\ul} $ are irreducible representations of $ C_a \wr S_b $ and the $q^{\ul}$ are independent indeterminates. \end{Theorem*} \begin{proof}[Proof of {\Cref{thm:grfrob_flexab}}.] Fix $\ul \in \PP_a^b $. For the left-hand side, using \Cref{thm:casbchars} and \eqref{eq:dimcasbirreps}, \begin{align}\label{eq:charabGF} \dim S^{\ul} \cdot \Ch \lp S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} \rp = \ch{b}{\al(\ul)} \prod_{r = 1}^a \#\SYT(\lambda^{(r)}) \cdot s_{\lambda^{(r)}}[\NFD_{a,r}^{\cont}(\mathbf{x})]. \end{align} For the right-hand side, we have \[ \left. \W_{ab}^{\cont, \flex_a^b}(\mathbf{x}; q) \right|_{q^{\ul}} = \{ w \in \W_{ab} : \flex_a^b(w) = \ul \}^{\cont}(\mathbf{x}). \] Say $ \al(\ul) = (\al_1, \dots, \al_{a}) $. In order for $ w \in \W_{ab} $ to have $ \flex_a^b(w) = \ul $, we must have $\sh(w^{(r)}) = \lambda^{(r)}$ for each $r \in [a]$. Recalling $\F_{a,r} \coloneqq \{w \in \W_a : \flex(w) = r\}$, we may thus choose each $w^{(r)} \in (\F_{a,r})^{\alpha_r}$ with $\sh(w^{(r)}) = \lambda^{(r)}$ independently and then shuffle them in $\binom{b}{\alpha(\ul)}$ ways to form $w$. Consequently, \begin{equation}\label{eq:flexshuffle} \begin{aligned} \{ w \in \W_{ab} &: \flex_a^b(w) = \ul \}^{\cont}(\mathbf{x}) \\ &= \binom{b}{\alpha(\ul)} \prod_{r=1}^a \{w^{(r)} \in (\F_{a,r})^{\alpha_r} : \sh(w^{(r)}) = \lambda^{(r)}\}^{\cont}(\mathbf{x}). \end{aligned} \end{equation} The content generating function for words with a given shape $\mu \vdash n$ under RSK is given by \begin{align} \{ w \in \W_n : \sh(w) = \mu \}^{\cont}(\mathbf{x}) = \# \SYT(\mu) \, s_\mu(\mathbf{x}), \end{align} since the number of possible $Q$-tableaux is $ \# \SYT(\mu) $ and the content generating function for $P$-tableaux is $ s_\mu(\mathbf{x}) $.
Changing the alphabet from $\bZ_{\geq 1}$ to $ \F_{a,r} $ and using \Cref{lem:NFD_F} gives \begin{equation}\label{eq:Flexalphabet} \begin{aligned} \{ w^{(r)} \in (\F_{a,r})^{\al_r} : \sh(w^{(r)}) = \lambda^{(r)} \}^{\cont}(\bfx) & = \# \SYT(\lambda^{(r)}) s_{\lambda^{(r)}}[\F_{a,r}^{\cont}(\mathbf{x})] \\ &= \# \SYT(\lambda^{(r)}) s_{\lambda^{(r)}}[\NFD_{a,r}^{\cont}(\mathbf{x})]. \end{aligned} \end{equation} The first equality in \Cref{thm:grfrob_flexab} now follows from combining \eqref{eq:flexshuffle} and \eqref{eq:Flexalphabet} with \eqref{eq:charabGF}. The second equality in \Cref{thm:grfrob_flexab} follows similarly. \end{proof} While \Cref{thm:grfrob_flexab} determines the monomial expansion of the graded Frobenius series tracking branching rules for $C_a \wr S_b \hookrightarrow S_{ab}$, we are ultimately interested in the corresponding Schur expansion. We next describe how the approach in the preceding sections might be used to find this Schur expansion. The key properties used in the proof of \Cref{thm:KW} converting from the monomial basis to the Schur basis were that $\maj_n$ is equidistributed with $\flex$ on each $\W_\alpha$ and $ \maj_n(w) $ depends only on $ Q(w) $. In order to apply a similar argument for $ \Ch( S^{\ul}\ind_{C_a \wr S_b}^{S_{ab}} ) $, we need a statistic as follows. \begin{Problem}\label{prob:mash} Fix $a, b \ge 1 $. Find a statistic \[ \mash_a^b \colon \W_{ab} \to \PP_{a}^{b} \] with the following properties. \begin{enumerate}[(i)] \item For all $\alpha \vDash ab$, $\maj_a^b$ (or equivalently $\flex_a^b$) and $\mash_a^b$ are equidistributed on $ \W_\al $. \item If $ v, w \in \W_{ab} $ satisfy $ Q(v) = Q(w) $, then $\mash_a^b(v) = \mash_a^b(w) $. \end{enumerate} \end{Problem} Finding such a statistic $ \mash_a^b $ would determine the Schur decomposition of $ \Ch( S^{\ul} \ind_{C_a \wr S_b}^{S_{ab}} ) $ as follows. \begin{Corollary}\label{cor:mashchar} Suppose $ \mash_a^b $ satisfies Properties (i) and (ii) in \Cref{prob:mash}.
Then \[ \Ch( S^{\underline{\lambda}}\ind_{C_a \wr S_b}^{S_{ab}}) = \sum_{\nu \vdash ab} \f{ \# \{ Q \in \SYT(\nu) : \mash_a^b(Q) = \ul \} } { \dim (S^{\underline{\lambda}} ) } s_{\nu}(\bfx), \] where $\mash_a^b(Q) \coloneqq \mash_a^b(w)$ for any $w \in \W_{ab}$ with $Q(w) = Q$. \end{Corollary} \begin{proof} We use, in order, \Cref{thm:grfrob_flexab}, Property (i), RSK, and Property (ii) to compute \begin{align*} \sum_{\ul \in \PP_a^b} \dim (S^{\ul}) \Ch(S^{\ul} \ind_{C_a \wr S_b}^{S_{ab}}) q^{\underline{\lambda}} &= \W_{ab}^{\cont, \maj_a^b}(\mathbf{x}; q) \\ &= \sum_{\alpha \vDash ab} \W_{\al}^{\maj_a^b}(q) \, \mathbf{x}^\alpha \\ &= \sum_{\alpha \vDash ab} \W_{\al}^{\mash_a^b}(q) \, \mathbf{x}^\alpha \\ &= \W_{ab}^{\cont,\mash_a^b}(\mathbf{x}; q) \\ &= \sum_{\nu \vdash ab} (\SSYT(\nu) \times \SYT(\nu))^{\cont, \mash_a^b}(\mathbf{x}; q) \\ &= \sum_{\nu \vdash ab} \SSYT(\nu)^{\cont}(\mathbf{x}) \SYT(\nu)^{\mash_a^b}(q) \\ &= \sum_{\nu \vdash ab} \SYT(\nu)^{\mash_a^b}(q) s_{\nu}(\mathbf{x}). \end{align*} The result follows by equating coefficients of $q^{\ul}$. \end{proof} \begin{Remark} When $a=1$ and $b=n$, we may replace $\ul$ with $\lambda \vdash n$. Under this identification, $\maj_1^n(w) = \sh(w)$, which clearly satisfies Properties (i) and (ii). When $a=n$ and $b=1$, we may replace $\ul$ with an element $r \in [n]$. Under this identification, we may set $\mash_n^1(w) = \maj_n(w)$, which satisfies Properties (i) and (ii). In this sense $\mash_a^b$ interpolates between the major index $\maj_n$ and the shape under RSK, hence the name. \end{Remark} While $ \maj_a^b $ trivially satisfies Property (i), it fails Property (ii) already when $a=b=2$, as in the following example. \begin{Example}\label{ex:mashnotmajab} Let $ v = 2314 $ and $ w = 1423 $. Then, \[ Q(v) = Q(w) = \Yvcentermath1{\young(124,3)} \] while \begin{align*} \maj_2^2(v) &= (\varnothing, (1, 1)) \\ \maj_2^2(w) &= (\varnothing, (2)). 
\end{align*} \end{Example} \begin{Remark} When defining $\flex_a^b$ and $\maj_a^b$, we somewhat arbitrarily chose the lexicographic order on $\W_a$. Any other total order would work just as well. However, $\maj_a^b$ continues to fail Property (ii) using any other total order when $a=b=2$ in \Cref{ex:mashnotmajab} since either $14<23$ or $23<14$. \end{Remark} {} \end{document}
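A small computational companion to the paper above (our own Python sketch, not part of the paper): row-insertion RSK lets one check both the shape-counting identity used in the proof — over a two-letter alphabet the number of words of shape $\mu$ is $\#\SYT(\mu)$ times the number of semistandard tableaux of shape $\mu$ with entries at most $2$ — and the claim in Example ex:mashnotmajab that $v=2314$ and $w=1423$ share the same recording tableau $Q$.

```python
# Row-insertion RSK for words, tracking the recording tableau Q.
# A sketch for sanity-checking the text, not an implementation from the paper.
from itertools import product
from collections import Counter

def rsk(word):
    """Return (shape, Q) for a word under RSK row insertion."""
    p_rows, q_rows = [], []
    for step, x in enumerate(word, start=1):
        placed = False
        for r, row in enumerate(p_rows):
            bumped = None
            for i, y in enumerate(row):
                if y > x:                 # bump leftmost entry strictly greater
                    bumped, row[i] = y, x
                    break
            if bumped is None:            # x sits at the end of this row
                row.append(x)
                q_rows[r].append(step)
                placed = True
                break
            x = bumped                    # continue inserting into the next row
        if not placed:                    # x starts a new row
            p_rows.append([x])
            q_rows.append([step])
    return tuple(len(r) for r in p_rows), q_rows

# Words in {1,2}^3 grouped by RSK shape: #SYT((3)) * #SSYT_{<=2}((3)) = 1*4
# and #SYT((2,1)) * #SSYT_{<=2}((2,1)) = 2*2, matching the tally below.
counts = Counter(rsk(w)[0] for w in product((1, 2), repeat=3))
print(dict(counts))                       # {(3,): 4, (2, 1): 4}

# Example ex:mashnotmajab: v = 2314 and w = 1423 share the same Q-tableau.
print(rsk((2, 3, 1, 4))[1] == rsk((1, 4, 2, 3))[1])  # True
```

Both printed values agree with the identity and the example as stated in the text.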
arXiv
\begin{document} \title{Projective Ring Lines and Their Generalisations} \author{Hans Havlicek\thanks{Email: \texttt{[email protected]}} \\Institute of Discrete Mathematics and Geometry\\ Vienna University of Technology\\ Wien, Austria} \maketitle \begin{abstract} We give a survey on projective ring lines and some of their substructures which in turn are more general than a projective line over a ring. \paragraph{\small Keywords:} Projective line over a ring, distant graph, connected component, elementary linear group, subspace of a chain geometry, Jordan system, projective line over a strong Jordan system \end{abstract} \section{Distant graph and connected components} The \emph{projective line} $\bP(R)$ over any ring $R$ (associative with $1\neq 0$) can be defined in terms of the free left $R$-module $R^2$ as follows \cite{blunck+he-05}, \cite{herz-95a}: It is the orbit of a starter point $R(1,0)$ under the action of the general linear group $\GL_2(R)$ on $R^2$. A basic notion on $\bP(R)$ is its \emph{distant relation}: Two points are called distant (in symbols: $\dis$) if they can be represented by the elements of a two-element basis of $R^2$. The \emph{distant graph} $(\bP(R),\dis)$ has as vertices the points of $\bP(R)$ and as edges the pairs of distant points. The distant graph is connected precisely when $\GL_2(R)$ is generated by the \emph{elementary linear group} $\E_2(R)$, i.e., the subgroup of $\GL_2(R)$ which is generated by elementary transvections, together with the set of all invertible diagonal matrices \cite{blunck+h-01a}. The orbit of $R(1,0)$ under $\E_2(R)$ is a connected component of the distant graph. It admits a parametrisation in terms of infinitely many formulas \cite{blunck+h-01a}, \cite{blunck+h-03b}. The situation is less intricate for a ring $R$ of \emph{stable rank} $2$ (see \cite{chen-11a}, \cite{veld-85}, or \cite{veld-95}), as it gives rise to a connected distant graph with diameter $\leq 2$. 
The above-mentioned parametrisation turns into \emph{Bartolone's parametrisation} \cite{bart-89} of $\bP(R)$, namely \begin{displaymath} \bP(R) = \{ R(t_2t_1-1,t_2)\mid t_1,t_2\in R \} \mbox{~~~~~($R$ of stable rank $2$)} . \end{displaymath} Refer to the seminal paper of P.~M.~Cohn \cite{cohn-66} for the algebraic background, and to the work of A.~Blunck \cite{blunck-97a}, \cite{blunck-02a} for orbits of the point $R(1,0)$ under other subgroups of $\GL_2(R)$. \section{Chain Geometries, subspaces and Jordan Systems} Let $R$ be an algebra over a commutative field $K$; by identifying $K$ with $K\cdot 1_R$ the projective line $\bP(K)$ is embedded in $\bP(R)$. For $R\neq K$ the projective line $\bP(R)$ can be considered as the point set of the \emph{chain geometry} $\Sigma(K,R)$; the $\GL_2(R)$ orbit of $\bP(K)$ is the set of \emph{chains} \cite{blunck+he-05}, \cite{herz-95a}. The geometries of M\"{o}bius, Minkowski and Laguerre are well known examples of chain geometries \cite{benz-73}. A crucial property is that any three mutually distinct points are on a unique chain. The chain geometry $\Sigma(K,R)$ may be viewed as a refinement of the distant graph, since two points of $\bP(R)$ are distant if, and only if, they are on a common chain. There are cases though, when the word ``refinement'' is inappropriate in its strict sense: Let $R=\End_F(V)$ be the endomorphism ring of a vector space $V$ over a (not necessarily commutative) field $F$ and let $K$ denote the \emph{centre} of $F$. Then the $K$-chains of $\bP(R)$ can be defined solely in terms of the distant graph $(\bP(R),\dis)$ \cite{blunck+h-12z}. Each chain geometry $\Sigma(K,R)$ is a \emph{chain space}; see \cite{blunck+he-05}, where also the precise definition of \emph{subspaces} of a chain space is given. The algebraic description of subspaces of $\Sigma(K,R)$ is due to A.~Herzer \cite{herz-92b} and H.-J.~Kroll \cite{kroll-91a}, \cite{kroll-92b}, \cite{kroll-92a}. 
It is based on the following notions: A \emph{Jordan system} is a $K$-subspace of $R$ satisfying two extra conditions: (i) $1\in J$; (ii) If $b\in J$ has an inverse in $R$ then $b^{-1}\in J$. (See \cite{loos-75} for relations with \emph{Jordan algebras} and \emph{Jordan pairs} and compare with \cite{gold-06a}, \cite{matt-07a}.) A Jordan system $J$ is called \emph{strong} if it satisfies a (somewhat technical) condition which guarantees the existence of ``many'' invertible elements in $J$. Strong Jordan systems are closed under \emph{triple multiplication}, i.~e., $ xyx\in J$ for all $x,y\in J$. The \emph{projective line $\bP(J)$ over a strong Jordan system} $J\subset R$ is defined by restricting the \emph{parameters} $t_1,t_2$ to $J$ in Bartolone's parametrisation. We wish to emphasise that in general a point of $\bP(J)$ cannot be written as $R(a,b)$ with $a,b\in J$, unless $J$ is even a subalgebra of $R$. The main theorem about subspaces is as follows: If $R$ is a strong algebra then any connected subspace of $\Sigma(K,R)$ is projectively equivalent to a projective line over a strong Jordan system of $R$. Projective lines over strong Jordan systems admit many applications: For example, one may use them to describe subsets of Grassmannians which are closed under reguli \cite{herz-92b} or chain spaces on quadrics \cite{blunck-97}. See also \cite{blunck-94}, \cite{herz-08a}, \cite{herz-10a}, \cite{herz-11a}, and the numerous examples given in \cite{blunck+he-05}. \par Finally, let us mention one of the many questions that remain: \emph{Is it possible to replace the strongness condition for Jordan systems by closedness under triple multiplication without affecting the known results?} A partial affirmative answer was given in \cite{blunck+h-10a} for the case when $R$ is the ring of $n\times n$ matrices over a field $F$ with an involution $\sigma$ and $J$ is the (not necessarily strong) Jordan system of $\sigma$-Hermitian matrices. 
The proof is based upon the verification that the projective line over this $J$ is, up to some notational differences, nothing but the point set of a \emph{dual polar space} \cite{cameron-82a} or, in the terminology of \cite{wan-96}, the point set of a \emph{projective space of $\sigma$-Hermitian matrices}. \par A wealth of further references can be found in \cite{benz-73}, \cite{blunck+he-05}, \cite{havl-07a}, \cite{herz-95a}, \cite{huanglp-06a}, \cite{pank-10a}, \cite{veld-95}, and \cite{wan-96}. Refer to \cite{brehm-08}, \cite{brehm+g+s-95}, \cite{faure-04a}, \cite{havl+m+p-11a}, \cite{havl+k+o-12z}, \cite{havlicek+saniga-09a}, and \cite{lash-97} for deviating definitions of projective lines which we cannot present here. \begin{thebibliography}{10} \bibitem{bart-89} C.~Bartolone. \newblock Jordan homomorphisms, chain geometries and the fundamental theorem. \newblock {\em Abh.\ Math.\ Sem.\ Univ.\ Hamburg}, 59:93--99, 1989. \bibitem{benz-73} W.~Benz. \newblock {\em {V}orlesungen \"{u}ber {G}eometrie der {A}lgebren}. \newblock Springer, Berlin, 1973. \bibitem{blunck-94} A.~Blunck. \newblock Chain spaces over {J}ordan systems. \newblock {\em Abh.\ Math.\ Sem.\ Univ.\ Hamburg}, 64:33--49, 1994. \bibitem{blunck-97} A.~Blunck. \newblock Chain spaces via {C}lifford algebras. \newblock {\em Monatsh.\ Math.}, 123:98--107, 1997. \bibitem{blunck-97a} A.~Blunck. \newblock {\em Geometries for Certain Linear Groups over Rings --- Construction and Coordinatization}. \newblock Habilitationsschrift, Technische Universit\"{a}t Darmstadt, 1997. \bibitem{blunck-02a} A.~Blunck. \newblock Projective groups over rings. \newblock {\em J.\ Algebra}, 249:266--290, 2002. \bibitem{blunck+h-01a} A.~Blunck and H.~Havlicek. \newblock The connected components of the projective line over a ring. \newblock {\em Adv.\ Geom.}, 1:107--117, 2001. \bibitem{blunck+h-03b} A.~Blunck and H.~Havlicek. \newblock Jordan homomorphisms and harmonic mappings. 
\newblock {\em Monatsh.\ Math.}, 139:111--127, 2003. \bibitem{blunck+h-10a} A.~Blunck and H.~Havlicek. \newblock Projective lines over {J}ordan systems and geometry of {H}ermitian matrices. \newblock {\em Linear Algebra Appl.}, 433:672--680, 2010. \bibitem{blunck+h-12z} A.~Blunck and H.~Havlicek. \newblock Geometric structures on finite- and infinite-dimensional {G}rassmannians. \newblock {\em Beitr. Algebra Geom.}, to appear. \bibitem{blunck+he-05} A.~Blunck and A.~Herzer. \newblock {\em Kettengeometrien -- {E}ine {E}inf\"{u}hrung}. \newblock Shaker Verlag, Aachen, 2005. \bibitem{brehm-08} U.~Brehm. \newblock Algebraic representation of mappings between submodule lattices. \newblock {\em J. Math. Sci. (N. Y.)}, 153(4):454--480, 2008. \bibitem{brehm+g+s-95} U.\ Brehm, M.\ Greferath, and S.~E. Schmidt. \newblock Projective geometry on modular lattices. \newblock In F.\ Buekenhout, editor, {\em Handbook of Incidence Geometry}, pages 1115--1142. Elsevier, Amsterdam, 1995. \bibitem{cameron-82a} P.~J. Cameron. \newblock Dual polar spaces. \newblock {\em Geom. Dedicata}, 12(1):75--85, 1982. \bibitem{chen-11a} H.~Chen. \newblock {\em Rings Related to Stable Range Conditions}, volume~11 of {\em Series in Algebra}. \newblock World Scientific, Singapore, 2011. \bibitem{cohn-66} P.~M. Cohn. \newblock On the structure of the {$\text{GL}_2$} of a ring. \newblock {\em Inst.\ Hautes Etudes Sci.\ Publ.\ Math.}, 30:365--413, 1966. \bibitem{faure-04a} C.-A. Faure. \newblock Morphisms of projective spaces over rings. \newblock {\em Adv. Geom.}, 4(1):19--31, 2004. \bibitem{gold-06a} D.~Goldstein, R.~M. Guralnick, L.~Small, and E.~Zelmanov. \newblock Inversion invariant additive subgroups of division rings. \newblock {\em Pacific J. Math.}, 227(2):287--294, 2006. \bibitem{havl-07a} H.~Havlicek. \newblock From pentacyclic coordinates to chain geometries, and back. \newblock {\em Mitt.\ Math.\ Ges.\ Hamburg}, 26:75--94, 2007. 
\bibitem{havl+m+p-11a} H.~Havlicek, A.~Matra{\'s}, and M.~Pankov. \newblock Geometry of free cyclic submodules over ternions. \newblock {\em Abh. Math. Semin. Univ. Hambg.}, 81(2):237--249, 2011. \bibitem{havl+k+o-12z} H.~Havlicek, J.~Kosiorek, and B.~Odehnal. \newblock A point model for the free cyclic submodules over ternions. \newblock {\em Results Math.}, to appear. \bibitem{havlicek+saniga-09a} H.~Havlicek and M.~Saniga. \newblock Vectors, cyclic submodules, and projective spaces linked with ternions. \newblock {\em J. Geom.}, 92(1-2):79--90, 2009. \bibitem{herz-92b} A.~Herzer. \newblock On sets of subspaces closed under reguli. \newblock {\em Geom.\ Dedicata}, 41:89--99, 1992. \bibitem{herz-95a} A.~Herzer. \newblock Chain geometries. \newblock In F.\ Buekenhout, editor, {\em Handbook of Incidence Geometry}, pages 781--842. Elsevier, Amsterdam, 1995. \bibitem{herz-08a} A.~Herzer. \newblock Konstruktion von {J}ordansystemen. \newblock {\em Mitt. Math. Ges. Hamburg}, 27:203--210, 2008. \bibitem{herz-10a} A.~Herzer. \newblock Die kleine projektive {G}ruppe zu einem {J}ordansystem. \newblock {\em Mitt. Math. Ges. Hamburg}, 29:157--168, 2010. \bibitem{herz-11a} A.~Herzer. \newblock Korrektur und {E}rg\"anzung zum {A}rtikel \emph{{D}ie kleine projektive {G}ruppe zu einem {J}ordansystem} in {M}itt. {M}ath. {G}es. {H}amburg 29, {A}rmin {H}erzer. \newblock {\em Mitt. Math. Ges. Hamburg}, 30:15--17, 2011. \bibitem{huanglp-06a} L.-P. Huang. \newblock {\em Geometry of Matrices over Ring}. \newblock Science Press, Beijing, 2006. \bibitem{kroll-91a} H.-J. Kroll. \newblock Unterr\"{a}ume von {K}ettengeometrien und {K}ettengeometrien mit {Q}uadrikenmodell. \newblock {\em Results Math.}, 19:327--334, 1991. \bibitem{kroll-92b} H.-J. Kroll. \newblock Unterr\"{a}ume von {K}ettengeometrien. \newblock In N.~K. Stephanidis, editor, {\em Proceedings of the 3rd Congress of Geometry (Thessaloniki, 1991)}, pages 245--247, Thessaloniki, 1992. Aristotle Univ. \bibitem{kroll-92a} H.-J. 
Kroll. \newblock Zur {D}arstellung der {U}nterr\"{a}ume von {K}ettengeometrien. \newblock {\em Geom.\ Dedicata}, 43:59--64, 1992. \bibitem{lash-97} A.~Lashkhi. \newblock Harmonic maps over rings. \newblock {\em Georgian Math.\ J.}, 4:41--64, 1997. \bibitem{loos-75} O.~Loos. \newblock {\em Jordan Pairs}, volume 460 of {\em Lecture Notes in Mathematics}. \newblock Springer, Berlin, 1975. \bibitem{matt-07a} S.~Mattarei. \newblock Inverse-closed additive subgroups of fields. \newblock {\em Israel J. Math.}, 159:343--347, 2007. \bibitem{pank-10a} M.~Pankov. \newblock {\em {G}rassmannians of Classical Buildings}, volume~2 of {\em Algebra and Discrete Mathematics}. \newblock World Scientific, Singapore, 2010. \bibitem{veld-85} F.~D. Veldkamp. \newblock Projective ring planes and their homomorphisms. \newblock In R.~Kaya, P.~Plaumann, and K.~Strambach, editors, {\em Rings and Geometry}, pages 289--350. D.\ Reidel, Dordrecht, 1985. \bibitem{veld-95} F.~D. Veldkamp. \newblock Geometry over rings. \newblock In F.\ Buekenhout, editor, {\em Handbook of Incidence Geometry}, pages 1033--1084. Elsevier, Amsterdam, 1995. \bibitem{wan-96} Z.-X. Wan. \newblock {\em Geometry of Matrices}. \newblock World Scientific, Singapore, 1996. \end{thebibliography} \end{document}
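To make the opening notions of the survey above concrete, here is a small computational illustration of ours (not taken from the survey), for the commutative ring $R=\mathbb{Z}/6\mathbb{Z}$. We assume the standard facts for commutative $\mathbb{Z}/n\mathbb{Z}$: a pair $(a,b)$ generates a point of $\bP(R)$ exactly when $\gcd(a,b,n)=1$, and two points are distant exactly when the determinant of a pair of representatives is a unit. Since $\mathbb{Z}/6\mathbb{Z}$ is finite it has stable rank at most $2$, so by the result quoted in Section 1 its distant graph should be connected with diameter at most $2$.

```python
# Sketch: the projective line over Z/6Z and its distant graph.
# Assumptions (standard for commutative Z/nZ, not claims of the survey):
# (a, b) is unimodular iff gcd(a, b, n) = 1, and two points are distant
# iff the determinant of representatives is a unit.
from math import gcd
from collections import deque
from itertools import product

N = 6
units = [u for u in range(N) if gcd(u, N) == 1]

def unimodular(a, b):
    return gcd(gcd(a, b), N) == 1

# Points: unimodular pairs modulo unit scaling, kept as canonical reps.
pts = sorted({min(((u * a) % N, (u * b) % N) for u in units)
              for a, b in product(range(N), repeat=2) if unimodular(a, b)})

def distant(p, q):       # well defined: scaling multiplies det by a unit
    (a, b), (c, d) = p, q
    return gcd((a * d - b * c) % N, N) == 1

def eccentricity(src):   # BFS in the distant graph
    dist = {src: 0}
    queue = deque([src])
    while queue:
        p = queue.popleft()
        for q in pts:
            if q not in dist and distant(p, q):
                dist[q] = dist[p] + 1
                queue.append(q)
    return max(dist.values()) if len(dist) == len(pts) else None

diam = max(eccentricity(p) for p in pts)
print(len(pts), diam)    # 12 points (= 6*(1+1/2)*(1+1/3)), diameter 2
```

The graph is connected with diameter exactly $2$: for instance the points represented by $(2,1)$ and $(2,5)$ are not distant (their determinant is $2$, a zero divisor), yet every pair of points is joined by a path of length at most two.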
arXiv
A batch of one hundred bulbs is inspected by testing four randomly chosen bulbs. The batch is rejected if even one of the bulbs is defective. A batch typically has five defective bulbs. The probability that the current batch is accepted is ________ A firm producing air purifiers sold $200$ units in $2012$. The following pie chart presents the share of raw material, labour, energy, plant & machinery, and transportation costs in the total manufacturing cost of the firm in $2012$ ... registered a profit of Rs. $10$ lakhs in $2012$, at what price (in $Rs$.) was each air purifier sold? A man can row at $8$ $km$ per hour in still water. If it takes him thrice as long to row upstream, as to row downstream, then find the stream velocity in $km$ per hour. The next term in the series $81, 54, 36, 24$, … is ________ 'Advice' is ________________. a verb a noun an adjective both a verb and a noun asked Feb 19, 2017 in General Aptitude by Arjun (21.2k points) general-aptitude The multi-level hierarchical pie chart shows the population of animals in a reserve forest. The correct conclusions from this information are: $(i)$ Butterflies are birds $(ii)$ There are more tigers in this forest than red ants $(iii)$ ... $(iv)$ $(i)$, $(iii)$ and $(iv)$ only $(i)$, $(ii)$ and $(iii)$ only Find the next term in the sequence: $7G, 11K, 13M$,____ $15Q$ $17Q$ $15P$ $17P$ In which of the following options will the expression P < M be definitely true? M < R > P > S M > S < P < F Q < M < F = P P = A < R < M The value of one U.S. dollar is $65$ Indian Rupees today, compared to $60$ last year. The Indian Rupee has ____________. depressed depreciated appreciated stabilized "India is a country of rich heritage and cultural diversity." Which one of the following facts best supports the claim made in the above sentence? India is a union of $28$ states and $7$ union territories. India has a population of over $1.1$ billion. India is home to $22$ official languages and thousands of dialects. 
The Indian cricket team draws players from over ten states. For spot welding of two steel sheets (base metal) each of $3$ $mm$ thickness, welding current of $10000$ $A$ is applied for $0.2$ $s$. The heat dissipated to the base metal is $1000$ $J$. Assuming that the heat required for melting $1$ $mm^3$ volume of steel is $20$ $J$ and interfacial contact resistance between sheets is $0.0002$ $Ω$, the volume (in $mm^3$) of weld nugget is _______ The diameter of a recessed ring was measured by using two spherical balls of diameter $d_2$ = $60$ $mm$ and $d_1$ = $40$ $mm$ as shown in the figure. The distance $H_2$ = $35.55$ $mm$ and $H_1$ = $20.55$ $mm$. The diameter ($D$, in $mm$) of the ring gauge is _______ A diesel engine has a compression ratio of $17$ and cut-off takes place at $10$% of the stroke. Assuming ratio of specific heats $(γ)$ as $1.4$, the air-standard efficiency (in percent) is _______ A double-pipe counter-flow heat exchanger transfers heat between two water streams. Tube side water at $19$ $liter/s$ is heated from $10^o$ $C$ to $38^o$ $C$. Shell side water at $25$ $liter/s$ is entering at $46^o$ $C$. Assume constant properties of water; density is $1000$ $kg/m^3$ and specific heat is $4186$ $J/kg.K$. The LMTD (in $°C$) is _______ Which pair of the following statements is correct for orthogonal cutting using a single-point cutting tool? P. Reduction in friction angle increases cutting force Q. Reduction in friction angle decreases cutting force R. Reduction in friction angle increases chip thickness S. Reduction in friction angle decreases chip thickness P and R P and S Q and R Q and S A cylindrical blind riser with diameter d and height h, is placed on the top of the mold cavity of a closed type sand mold as shown in the figure. If the riser is of constant volume, then the rate of solidification in the riser is the least when the ratio h:d is $1:2$ $2:1$ $1:4$ $4:1$ A manufacturer can produce $12000$ bearings per day. 
The manufacturer received an order of $8000$ bearings per day from a customer. The cost of holding a bearing in stock is $Rs$. $0.20$ per month. Setup cost per production run is $Rs$. $500$. Assuming $300$ working days in a year, the frequency of production run should be $4.5$ days $4.5$ months $6.8$ days $6.8$ months Consider the given project network, where numbers along various activities represent the normal time. The free float on activity $4$-$6$ and the project duration, respectively, are $2$, $13$ $0$, $13$ $-2$, $13$ $2$, $12$ At the inlet of an axial impulse turbine rotor, the blade linear speed is $25$ $m/s$, the magnitude of absolute velocity is $100$ $m/s$ and the angle between them is $25^o$. The relative velocity and the axial component of velocity remain the same ... . The blade inlet and outlet velocity triangles are shown in the figure. Assuming no losses, the specific work (in $J/kg$) is _______ A fluid of dynamic viscosity $2 × 10^{−5}$ $kg/m.s$ and density $1$ $kg/m^3$ flows with an average velocity of $1$ $m/s$ through a long duct of rectangular ($25$ $mm$ × $15$ $mm$) cross-section. Assuming laminar flow, the pressure drop (in $Pa$) in the fully developed region per meter length of the duct is _______ Heat transfer through a composite wall is shown in figure. Both the sections of the wall have equal thickness $(l)$. The conductivity of one section is $k$ and that of the other is $2k$. The left face of the wall is at $600$ $K$ and the right face is at $300$ $K$. The interface temperature $T_i$ (in $K$) of the composite wall is _______ An amount of $100$ $kW$ of heat is transferred through a wall in steady state. One side of the wall is maintained at $127^o$ $C$ and the other side at $27^o$ $C$. The entropy generated (in $W/K$) due to the heat transfer through the wall is _______ A mass-spring-dashpot system with mass $m = 10$ $kg$, spring constant $k = 6250$ $N/m$ is excited by a harmonic excitation of $10 \cos(25t)$ $N$. 
At the steady state, the vibration amplitude of the mass is $40$ $mm$. The damping coefficient ($c$, in $N.s/m$) of the dashpot is _______ Consider an objective function $Z(x_1,x_2)=3x_1+9x_2$ and the constraints $x_1+x_2 \leq 8$ $x_1+2x_2 \leq 4$ $x_1 \geq 0$, $x_2 \geq 0$ The maximum value of the objective function is _______ A slider-crank mechanism with crank radius $60$ $mm$ and connecting rod length $240$ $mm$ is shown in figure. The crank is rotating with a uniform angular speed of $10$ $rad/s$, counter clockwise. For the given configuration, the speed (in $m/s$) of the slider is _______ A four-wheel vehicle of mass $1000 kg$ moves uniformly in a straight line with the wheels revolving at $10 rad/s$. The wheels are identical, each with a radius of $0.2 m$. Then a constant braking torque is applied to all the wheels and the vehicle experiences a uniform deceleration. For the vehicle to stop in $10 s$, the braking torque (in $N.m$) on each wheel is ______ Consider a rotating disk cam and a translating roller follower with zero offset. Which one of the following pitch curves, parameterized by $t$, lying in the interval $0$ to $2π$ ... $\cos t$ , $y(t)=2\sin t$ $x(t)$= $\frac{1}{2}$+$\cos t$ , $y(t)=\sin t$ asked Feb 19, 2017 in Theory of Machines by Arjun (21.2k points) A solid sphere of radius $r_1$ = $20$ $mm$ is placed concentrically inside a hollow sphere of radius $r_2$ = $30$ $mm$ as shown in the figure. The view factor $F_{21}$ for radiation heat transfer is $\frac{2}{3}$ $\frac{4}{9}$ $\frac{8}{27}$ $\frac{9}{4}$ A siphon is used to drain water from a large tank as shown in the figure below. Assume that the level of water is maintained constant. Ignore frictional effect due to viscosity and losses at entry, and exit. At the exit of the siphon, the velocity of water is $\sqrt{2g(Z_Q-Z_R)}$ $\sqrt{2g(Z_P-Z_R)}$ $\sqrt{2g(Z_O-Z_R)}$ $\sqrt{2gZ_Q}$ A certain amount of an ideal gas is initially at a pressure $p_1$ and temperature $T_1$. 
First, it undergoes a constant pressure process $1$-$2$ such that $T_2$ = $3T_1$/$4$. Then, it undergoes a constant volume process $2$-$3$ such that $T_3$ = $T_1$/$2$. The ratio of the final volume to the initial volume of the ideal gas is $0.25$ $0.75$ $1.0$ $1.5$ A force $P$ is applied at a distance $x$ from the end of the beam as shown in the figure. What would be the value of $x$ so that the displacement at '$A$' is equal to zero? $0.5L$ $0.25L$ $0.33L$ $0.66L$ An annular disc has a mass $m$, inner radius $R$ and outer radius $2R$. The disc rolls on a flat surface without slipping. If the velocity of the center of mass is $v$, the kinetic energy of the disc is $\frac{9}{16}mv^2$ $\frac{11}{16}mv^2$ $\frac{13}{16}mv^2$ $\frac{15}{16}mv^2$ The damping ratio of a single degree of freedom spring-mass-damper system with mass of $1 kg$, stiffness $100 N/m$ and viscous damping coefficient of $25 N.s/m$ is _______ A body of mass (M) $10$ $kg$ is initially stationary on a $45^o$ inclined plane as shown in figure. The coefficient of dynamic friction between the body and the plane is $0.5$. The body slides down the plane and attains a velocity of $20 m/s$. The distance traveled (in meter) by the body along the plane is _______ A drum brake is shown in the figure. The drum is rotating in anticlockwise direction. The coefficient of friction between drum and shoe is $0.2$. The dimensions shown in the figure are in $mm$. The braking torque (in $N.m$) for the brake shoe is _______ The real root of the equation $5x − 2cosx −1 = 0$ (up to two decimal accuracy) is _______ A straight turning operation is carried out using a single point cutting tool on an AISI $1020$ steel rod. The feed is $0.2$ $mm/rev$ and the depth of cut is $0.5$ $mm$. The tool has a side cutting edge angle of $60^o$. The uncut chip thickness (in $mm$) is _______ Consider a simply supported beam of length, $50h$, with a rectangular cross-section of depth, $h$, and width, $2h$. 
The beam carries a vertical point load, P, at its mid-point. The ratio of the maximum shear stress to the maximum bending stress in the beam is $0.02$ $0.10$ $0.05$ $0.01$ A machine produces $0$, $1$ or $2$ defective pieces in a day with associated probability of $1/6$, $2/3$ and $1/6$, respectively. The mean value and the variance of the number of defective pieces produced by the machine in a day, respectively, are $1$ and $1/3$ $1/3$ and $1$ $1$ and $4/3$ $1/3$ and $4/3$ Consider two solutions $x(t)=x_1(t)$ and $x(t)=x_2(t)$ of the differential equation $\frac{d^2x(t)}{dt^2}+x(t)=0$, $t>0$ such that $x_1(0)=1$, $\frac{dx_1(t)}{dt}|_{t=0}=0$, $x_2(0)=0$,$\frac{dx_2(t)}{dt}|_{t=0}=1$. The Wronskian $W(t)=\begin{bmatrix} x_1(t) & x_2(t)\\ \frac{dx_1(t)}{dt} & \frac{dx_2(t)}{dt} \end{bmatrix}$ at $t=\pi /2$ is $1$ $-1$ $0$ $\pi /2$
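A few of the numerical questions above can be checked directly; the sketches below are our own worked checks, not official solutions. The bulb batch is accepted exactly when all four sampled bulbs come from the 95 good ones; the rowing question reduces to $8+v=3(8-v)$ since the upstream trip takes three times as long over the same distance; the series $81, 54, 36, 24, \dots$ is geometric with ratio $2/3$; and for the final Wronskian question, $x_1(t)=\cos t$ and $x_2(t)=\sin t$ satisfy the stated initial conditions, so the Wronskian (the determinant of the displayed matrix) is $\cos^2 t+\sin^2 t=1$ for every $t$.

```python
import math
from math import comb

# Bulb inspection: accepted iff none of the 4 tested bulbs is defective.
p_accept = comb(95, 4) / comb(100, 4)
print(round(p_accept, 3))            # 0.812

# Rowing: time upstream = 3 * time downstream over the same distance,
# with still-water speed 8 km/h, so 8 + v = 3 * (8 - v).
v = next(v for v in range(1, 8) if 8 + v == 3 * (8 - v))
print(v)                             # 4 (km/h)

# Geometric series 81, 54, 36, 24, ... with ratio 2/3: next term.
print(24 * 2 // 3)                   # 16

# Wronskian at t = pi/2 for x1 = cos t, x2 = sin t.
t = math.pi / 2
W = math.cos(t) * math.cos(t) - math.sin(t) * (-math.sin(t))
print(round(W, 10))                  # 1.0
```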
CommonCrawl
\begin{document} \title{Separation Logic Modulo Theories} \begin{abstract} Logical reasoning about program data often requires dealing with heap structures as well as scalar data types. Recent advances in Satisfiability Modulo Theories (SMT) already offer efficient procedures for dealing with scalars, yet they lack any support for dealing with heap structures. In this paper, we present an approach that integrates Separation Logic---a prominent logic for reasoning about list segments on the heap---and SMT. We follow a model-based approach that communicates aliasing among heap cells between the SMT solver and the Separation Logic reasoning part. An experimental evaluation using the Z3 solver indicates that our approach can effectively put to work the advances in SMT for dealing with heap structures. This is the first decision procedure for the combination of separation logic with SMT theories. \iffalse The correct functioning of complex programs often relies on a tight interaction between several operations: integer and bit-vector arithmetic, array manipulation, and dynamic memory allocation. While recent research efforts have led to impressive advances in Satisfiability Modulo Theories--able to reason about arithmetic and arrays--as well as Separation Logic--when reasoning about heap memory--little has been achieved in seamlessly unifying the power of these two techniques. Building on theoretical foundations from our previous work, we demonstrate how a separation logic prover can be cleanly and tightly integrated with Z3, a fast state-of-the-art SMT solver. \fi \end{abstract} {\section{Introduction} Satisfiability Modulo Theories (SMT) solvers play an important role for the construction of abstract interpretation tools~\cite{CC77,CousotSMT11}. They can efficiently deal with relevant logical theories of various scalar data types, e.g., fixed length bit-vectors and numbers, as well as uninterpreted functions and arrays~\cite{Simplify,Z3,Yices,CVC3,MathSAT}. 
However, dealing with programs that manipulate heap-allocated data structures using pointers exposes limitations of today's SMT solvers. For example, SMT does not support separation logic---a promising logic for dealing with programs that manipulate the heap following a certain discipline~\cite{Rey02}. Advances in the construction of such a solver could directly boost a wide range of separation logic based verifiers: manual/tool assisted proof development~\cite{SchorrWaite,YNOT,Appel09}, extended static checking~\cite{DisPar08,BerCal06}, and automatic inference of heap shapes~\cite{SpaceInvader,SlayerCAV07,SlayerCAV08,CalDis09}. In this paper we present a method for extending an SMT solver with separation logic with list segment predicate~\cite{BerCal04}, a frequently used fragment of separation logic adopted by the majority of existing tools. Our method decides entailments of the form $\Pi \land \Sigma \rightarrow \Pi' \land \Sigma'$. Here, $\Pi$ and $\Pi'$ are \emph{arbitrary} theory assertions supported by SMT, while $\Sigma$ and $\Sigma'$ are spatial conjunctions of pointer predicates $\f{next}(x,y)$ and list segment predicates~$\f{lseg}(x, y)$. Symbols occurring in the spatial conjunctions can also occur in $\Pi$ and~$\Pi'$. The crux of our method lies in an interaction of the model-based approach to combination of theories~\cite{ModelCobination} and a so-called $\Match$ function that we propose for establishing logical implication between a pair of spatial conjunctions. We use models of $\Pi$, which we call stacks, to guide the process of showing that every heap that satisfies $\Sigma$ also satisfies~$\Sigma'$. In return, the match function collects an assertion that describes a set of stacks for which the current derivation is also applicable. This assertion is then used to take those stacks into account for which we have not proved the entailment yet. 
As a result, our method can benefit from the efficiency offered by SMT for maintaining a logical context keeping track of stacks for which the entailment is already proved. In summary, we present (to the best of our knowledge) the first SMT-based decision procedure for separation logic with list segments. Our main contribution is the entailment checking algorithm for separation logic combined with decidable theories, together with its correctness proof. Furthermore we provide an implementation of the algorithm using Z3 for theory reasoning, and an evaluation on micro-benchmarks. The paper is organised as follows. A run of the algorithm is illustrated in Section~\ref{sec-illustration}. We give preliminary definitions in Section~\ref{sec-prelims}. Our method is described in Section~\ref{sec-mini-alg-cns}. All proofs are presented in Section~\ref{correctness}. We present an experimental evaluation in Section~\ref{benchmarks}. Conclusions are finally presented in Section~\ref{conclu}. \paragraph{\bfseries Related work} Our method is directly inspired by a theorem prover for separation logic with list segments~\cite{SlpPLDI11} based on paramodulation techniques~\cite{NieRub01} to deal with equality reasoning, an approach that turned out to be quite advantageous compared to previously developed SmallFoot-based proof systems. While \cite{SlpPLDI11} only deals with equalities, the work in this paper supports arbitrary SMT theory expressions in the entailment. Theory extensions of paramodulation are still an open problem---even state-of-the-art first order provers deliver poor performance on problems with linear arithmetic---so it is not evident how to extend \cite{SlpPLDI11} with theory reasoning. Similarly, it is unclear how to extend SmallFoot or jStar to obtain a decision procedure with rich theory reasoning. 
Our $\Match$ function can be seen as a generalisation of the unfolding inferences, geared towards interaction with the logical context of an SMT solver, rather than literals in a clausal representation of the entailment problem. Last but not least, in that previous work the combination with paramodulation is given by a quite complex inference system, at a level of detail which would not be accessible through a black-box SMT prover. The original proof system for list segments~\cite{BerCal04,BerCal05} gives a starting point to the design of our $\Match$ function. However, while the proof system needs to branch and perform case reasoning during proof search, the $\Match$ function is a deterministic, linear pass over the spatial conjuncts. Recently, entailment between separation logic formulas where $\Pi$ and $\Pi'$ are conjunctions of (dis-)equalities was shown to be decidable in polynomial time~\cite{Tractable11}. While we are primarily interested in reasoning about rich theory assertions describing stacks, exploration of this polynomial time result is an interesting direction for future work. Regarding a Nelson-Oppen combination of decision procedures~\cite{NelsonOppen79}, we see an algorithm following this combination approach as an interesting and difficult question for the future work. A direct application of such theory combination does not work, since it requires a satisfiability checker for sets of (possibly negated) spatial conjunctions. The interplay of conjunction, negation and spatial conjunction is likely to turn this into a PSPACE problem. In contrast, the spatial reasoning in our approach has linear complexity, thus shifting the computational complexity to the SMT prover instead. Chin et al.~\cite{ChinUnfoldingSL12} present a fold/unfold mechanism to deal with user-specified well-founded recursive predicates. Due to such a general setting, it does not provide completeness. Our logic is more restrictive, allowing us to develop a complete decision procedure. 
Similarly, Botin\u{c}an et al.~\cite{Botincan09} rely on a SmallFoot-based proof system which, although it does not guarantee completeness on the fragment we consider, is able to deal with user-provided inference and rewriting rules. } {\section{Illustration} \label{sec-illustration} In this section we illustrate our algorithm using a high-level description and a simple example. To this end we prove the validity of the entailment: \begin{equation*} \underbrace{c < e}_\Pi \land \underbrace{\f{lseg}(a, b) \ast \f{lseg}(a, c) \ast \f{next}(c, d) \ast \f{lseg}(d, e)}_\Sigma \lthen \underbrace{\top}_{\Pi'}\land \underbrace{\f{lseg}(b, c) \ast \f{lseg}(c, e)}_{\Sigma'}\ . \end{equation*} Abstractly, the algorithm performs the following key steps. It symbolically enumerates models that satisfy $\Pi$ and yield a satisfiable heap part for $\Sigma$ in the antecedent. For each such assignment $s$ the algorithm attempts to (symbolically) prove that each heap $h$ satisfying the antecedent, i.e., $s, h \models \Pi\land\Sigma$, also satisfies the consequence, i.e., $s, h\models \Pi'\land \Sigma'$. Finally, we generalise the assignment $s$ and use the corresponding assertion to prune further models of $\Pi$ that would lead to similar reasoning steps as~$s$. The entailment is valid if and only if all models of the pure parts are successfully considered. For our example we begin with the construction of the constraint that guarantees the satisfiability of the heap part of the antecedent. This constraint requires that each pair of spatial predicates in $\Sigma$ is not colliding, i.e., if two predicates start from the same heap location then one of them represents an empty heap. A list segment, say $\f{lseg}(a, b)$, represents an empty heap if its start and end locations are equal, i.e., if $a \mathop{\simeq} b$. A points-to predicate, say $\f{next}(c, d)$, always represents a non-empty heap.
For the predicates $\f{lseg}(a, b)$ and $\f{lseg}(d, e)$ the absence of collision is represented as $a \mathop{\simeq} d \lthen a \mathop{\simeq} b \lor d \mathop{\simeq} e$, i.e., if the start location $a$ of the first predicate is equal to the start location $d$ of the second predicate then either of the predicates represents an empty heap. The remaining pairs of predicates produce the following non-collision assertions. \begin{equation*} \begin{array}[t]{@{}l@{\qquad\qquad}l@{}} a \mathop{\simeq} a \lthen a \mathop{\simeq} b \lor a \mathop{\simeq} c & \text{$\f{lseg}(a, b)$ and $\f{lseg}(a, c)$}\\[\jot] a \mathop{\simeq} c \lthen a \mathop{\simeq} b \lor \bot & \text{$\f{lseg}(a, b)$ and $\f{next}(c, d)$}\\[\jot] a \mathop{\simeq} d \lthen a \mathop{\simeq} b \lor d \mathop{\simeq} e & \text{$\f{lseg}(a, b)$ and $\f{lseg}(d, e)$}\\[\jot] a \mathop{\simeq} c \lthen a \mathop{\simeq} c \lor \bot & \text{$\f{lseg}(a, c)$ and $\f{next}(c, d)$}\\[\jot] a \mathop{\simeq} d \lthen a \mathop{\simeq} c \lor d \mathop{\simeq} e & \text{$\f{lseg}(a, c)$ and $\f{lseg}(d, e)$}\\[\jot] c \mathop{\simeq} d \lthen \bot \lor d \mathop{\simeq} e & \text{$\f{next}(c, d)$ and $\f{lseg}(d, e)$} \end{array} \end{equation*} We refer to the conjunction of the above assertions as~$\WellFormed(\Sigma)$. Next, we use an SMT solver to find a model for $\Pi \land \WellFormed(\Sigma)$. If no such model exists the entailment is vacuously true. For our example, however, the solver finds the model~$s = \set{a \mapsto 0, b \mapsto 0, c \mapsto 1, d \mapsto 2, e \mapsto 3}$. We then symbolically show that every heap $h$ that is a model of $\Sigma$ is also a model of~$\Sigma'$. We do this by showing that $\Sigma$ and $\Sigma'$ are matching, i.e., for each predicate in $\Sigma'$ there is a corresponding `chain' of predicates in $\Sigma$.
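At this point the pure query $\Pi \land \WellFormed(\Sigma)$ is handed to the SMT solver. As a concrete, hedged illustration of what the query amounts to, the following pure-Python sketch (a stand-in of our own for the actual Z3 call; the brute-force search and all names are ours) evaluates the emptiness and non-collision conditions and enumerates small integer stacks:

```python
# Stand-in for the SMT query Pi /\ WellFormed(Sigma): instead of Z3 we
# brute-force small integer stacks. A spatial predicate is a tuple
# ("next", x, y) or ("lseg", x, y); variables are one-letter strings.
from itertools import product

def empty(p, s):
    # Emptiness condition: lseg(x, y) is empty iff s(x) = s(y);
    # next is never empty.
    return p[0] == "lseg" and s[p[1]] == s[p[2]]

def well_formed(sigma, s):
    # No two non-empty conjuncts may share their start address.
    return all(empty(p, s) or empty(q, s) or s[p[1]] != s[q[1]]
               for i, p in enumerate(sigma) for q in sigma[i + 1:])

sigma = [("lseg", "a", "b"), ("lseg", "a", "c"),
         ("next", "c", "d"), ("lseg", "d", "e")]
pi = lambda s: s["c"] < s["e"]          # the pure part c < e

def find_model(bound=4):
    # Enumerate candidate stacks over a small universe of locations.
    for vals in product(range(bound), repeat=5):
        s = dict(zip("abcde", vals))
        if pi(s) and well_formed(sigma, s):
            return s

model = find_model()   # which model is found depends on search order
```

Note that, as with the SMT solver, which model is returned is arbitrary; the stack used in the walkthrough is one of the admissible models.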
The chain condition requires adjacent predicates to have a location in common, namely, the finish location of each predicate must be equal to the start location of the next one with respect to~$s$. Since matching only needs to deal with predicates representing non-empty heaps, we first normalise $\Sigma$ and $\Sigma'$ by removing spatial predicates that are empty in the given model $s$, i.e., we remove each list segment predicate whose start and finish locations are equal with respect to~$s$. From $\Sigma$ we remove $\f{lseg}(a, b)$ since $s(a) = s(b) = 0$, and from $\Sigma'$ we cannot remove anything. Now we attempt to find a match for $\f{lseg}(b, c) \in \Sigma'$ in the normalised antecedent $\f{lseg}(a, c) \ast \f{next}(c, d) \ast \f{lseg}(d, e)$. The chain should start with $\f{lseg}(a, c)$ since $s(a) = s(b)$. Since $\f{lseg}(a, c)$ finishes at the same location as $\f{lseg}(b, c)$ in every model, we are done with the matching for $\f{lseg}(b, c)$. Since $\f{lseg}(a, c)$ was used to construct a chain, we cannot consider it in the remaining matching steps (this restriction applies only to the current model~$s$). Next we compute matching for $\f{lseg}(c, e) \in \Sigma'$ using the remaining predicates $\f{next}(c, d) \ast \f{lseg}(d, e)$ from~$\Sigma$. We begin the chain using $\f{next}(c, d)$ since it has the same start location as $\f{lseg}(c, e)$. Since the finish location of $\f{next}(c, d)$ is not equal to $e$ with respect to~$s$, we still need to connect $d$ and~$e$. We perform this connection by an additional matching request that requires matching $\f{lseg}(d, e)$ using the remaining predicates from $\Sigma$, i.e., using only~$\f{lseg}(d, e)$. Fortunately, this matching request can be trivially satisfied. Since all predicates of $\Sigma'$ are matched, and all predicates in $\Sigma$ were used for matching, we conclude that $\Sigma$ and $\Sigma'$ exactly match with respect to the current~$s$.
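The normalisation step just described is mechanical; as a small illustrative sketch (our own encoding, with predicates as tuples and a stack consistent with the walkthrough, i.e. one where only $\f{lseg}(a, b)$ is empty), it amounts to filtering out the list segments whose end-points coincide:

```python
# Normalisation under a fixed stack s: drop the list segments that
# denote an empty heap, i.e. those whose end-points coincide in s.
def normalise(sigma, s):
    return [p for p in sigma
            if not (p[0] == "lseg" and s[p[1]] == s[p[2]])]

# A stack where only lseg(a, b) is empty, as in the walkthrough.
s = {"a": 0, "b": 0, "c": 1, "d": 2, "e": 3}
sigma = [("lseg", "a", "b"), ("lseg", "a", "c"),
         ("next", "c", "d"), ("lseg", "d", "e")]
sigma_p = [("lseg", "b", "c"), ("lseg", "c", "e")]

assert normalise(sigma, s) == sigma[1:]   # lseg(a, b) is removed
assert normalise(sigma_p, s) == sigma_p   # nothing to remove
```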
The algorithm notices that from the model $s$ only the assertion $a \mathop{\simeq} b$ was necessary to perform the matching. Hence, the model $s$ is generalised to the assertion $U = (a \mathop{\simeq} b)$. We continue the enumeration of pure models for the antecedent, excluding those where $a \mathop{\simeq} b$. The SMT solver reports that $\Pi \land \WellFormed(\Sigma) \land \neg U$ is not satisfiable. Hence we conclude that the entailment is valid. } {\section{Preliminaries}\label{sec-prelims} We write $f\colon X \to Y$ to denote a \emph{function} with domain $X = \dom f$ and \emph{range}~$Y$; while $f\colon X \rightharpoonup Y$ is a \emph{partial function} with $\dom f \subseteq X$. We write $\sepof{f}{n}$ to simultaneously denote the union $\unionof{f}{n}$ of $n$ functions, and assert that their domains are pairwise disjoint, i.e. $\dom f_i \cap \dom f_j = \emptyset$ when $i \neq j$. Given two functions $f\colon Y \to Z$ and $g\colon X \to Y$, we write $f \comp g$ to denote their composition, i.e. $(f \comp g)(x) = f(g(x))$ for every $x \in \dom g$. We sometimes write functions explicitly by enumerating their elements, for example $f = \set{a \mapsto b, b \mapsto c}$ is the function with $\dom f = \set{a, b}$ and such that $f(a) = b$ and $f(b) = c$. \paragraph{\bfseries Syntax of separation logic} We assume a sorted language with both theory and uninterpreted symbols. Each function symbol $f$ has an arity~$n$ and a signature $f \colon \prodof{\tau}{n} \to \tau$, taking $n$ arguments of respective sorts~$\tau_i$ and returning an expression of sort~$\tau$. A constant symbol is a $0$-ary function symbol. A \emph{variable} is an uninterpreted constant symbol, and $\textsf{Var}$ denotes the set of all variables in the language. Constant and function symbols are combined as usual, respecting their sorts, to build syntactically valid \emph{expressions}. We use $x \colon \tau$ to denote an expression $x$ of sort $\tau$, and $\mathcal{L}$ to denote the set of all expressions in the language. We assume that, among the available sorts, there are $\textsf{Int}$ and $\textsf{Bool}$ for, respectively, integer and boolean expressions.
We refer to a function symbol of boolean range as a \emph{predicate symbol}, and a boolean expression as a \emph{formula}. We also assume the existence of a built-in predicate $\mathop{\simeq} \colon \tau \times \tau \to \textsf{Bool}$ for testing equality between two expressions of the same sort; as well as standard theory symbols from the boolean domain, that is: conjunction ($\land$), disjunction~($\lor$), negation~($\lnot$), truth ($\top$), falsity ($\bot$), implication ($\lthen$), bi-implication ($\liff$) and first order quantifiers ($\forall$,~$\exists$). Theory symbols for arithmetic may also be present, and we use $\f{nil}$ as an alias for the integer constant $0$. Additionally, we also define \emph{spatial symbols} to build expressions that describe properties about memory heaps. We have the spatial predicate symbols $\f{emp}\colon \textsf{Bool}$, $\f{next}\colon \textsf{Int} \times \textsf{Int} \to \textsf{Bool}$ and $\f{lseg}\colon \textsf{Int} \times \textsf{Int} \to \textsf{Bool}$ for, respectively, the empty heap, a points-to relation, and acyclic list segments; their semantics are described below. Furthermore, we also have the symbol for \emph{spatial conjunction} $\ast\colon\textsf{Bool}\times\textsf{Bool}\to\textsf{Bool}$. A formula or an expression is said to be \emph{pure} if it contains no spatial symbols. Although in principle one can write spatial conjunctions of arbitrary boolean formulas, in our context we only deal with the case where each conjunct is a spatial predicate. So when we say a ``spatial conjunction'' what we actually mean is a ``spatial conjunction of spatial predicates''. Furthermore, at the meta-level, we treat a spatial conjunction $\Sigma = \sepof{S}{n}$ as a multi-set of boolean spatial predicates, and write $\abs{\Sigma} = n$ to denote the number of predicates in the conjunction.
In particular we use set theory symbols to describe relations between spatial predicates and spatial conjunctions, which are always to be interpreted as \emph{multi-set} operations. For example: \begin{gather*} \f{next}(y, z) \in \f{lseg}(x,y) \ast \f{next}(y, z) \\ \f{next}(x, y) \ast \f{next}(x, y) \not\subseteq \f{next}(x, y) \\ \f{emp} \ast \f{emp} \ast \f{emp} \setminus \f{emp} = \f{emp} \ast \f{emp}\;. \end{gather*} \paragraph{\bfseries Semantics of separation logic} Each sort $\tau$ is associated with a set of values, which we also denote by $\tau$, usually according to their background theories; e.g. $\textsf{Int} = \set{\dots, -1, 0, 1, \dots}$, and $\textsf{Bool} = \set{\bot, \top}$. We use $\textsf{Val} = \disjunionof{\tau}{n}$ to denote the disjoint union of all values for all sorts in the language. A \emph{stack} is a function $s\colon\textsf{Var}\to\textsf{Val}$ mapping variables to values in their respective sorts, i.e. for a variable $v\colon \tau$ we have $s(v) \in \tau$. The stack $s$ is naturally extended to arbitrary pure expressions in~$\mathcal{L}$ using an appropriate interpretation for their theory symbols, e.g. $s(1 + 2) = 3$. In our context, a \emph{heap} corresponds to a partial function $h\colon\textsf{Int}\rightharpoonup\textsf{Val}$ mapping memory locations, represented as integers, to values.
Given a stack $s$, a heap $h$, and a formula $F$ we inductively define the satisfaction relation of separation logic, denoted $s, h \models F$, as: \begin{align*} s, h &\models \Pi && \text{if $\Pi$ is pure and $s(\Pi) = \top$,} \\ s, h &\models \f{emp} && \text{if $h = \emptyset$,} \\ s, h &\models \f{next}(x, y) && \text{if $h = \set{s(x) \mapsto s(y)}$,} \\ s, h &\models F_1 \ast F_2 && \text{if $h = h_1 \ast h_2$ for some $h_1$ and $h_2$} \\ &&& \qquad\text{such that $s, h_1 \models F_1$ and $s, h_2 \models F_2$.} \end{align*} Semantics for the acyclic list segment is introduced through the inductive definition $\f{lseg}(x, z) \equiv (x \mathop{\simeq} z \land \f{emp}) \lor (x \mathop{\not\simeq} z \land \exists y \dt \f{next}(x,y) \ast \f{lseg}(y,z))$. As an example consider $\set{ x \mapsto 1, y \mapsto 2}, \set{ 1 \mapsto 3, 3 \mapsto 2 } \models \f{lseg}(x, y)$. When $s, h \models F$ we say that the interpretation $(s, h)$ is a \emph{model} of the formula $F$. A formula is \emph{satisfiable} if it admits at least one model, and \emph{valid} if it is satisfied by all possible interpretations. Note, in particular, that an entailment $F \lthen G$ is valid if every model of $F$ is also a model of $G$. Finally, for a formula $F$ we write $s \models F$ if, for every heap $h$, we have $s, h \models F$. Note that $\f{nil}$ is not treated in any special way by this logic. If one wants $\f{nil}$ to regain its expected behaviour, i.e. that \emph{nothing} can be allocated at the $\f{nil}$ address, it is enough to consider $\f{next}(\f{nil}, 0) \ast F$, where $F$ is an arbitrary formula.
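The satisfaction relation can be read off directly as a checker for concrete, finite interpretations. The following sketch (our own encoding, useful for experimenting with the definitions; heaps are Python dicts from locations to values, and a tuple tagged "sep" stands for $F_1 \ast F_2$) unfolds the inductive definition of $\f{lseg}$ and tries every split of the heap for the spatial conjunction:

```python
# Executable reading of the satisfaction relation s, h |= F for the
# spatial fragment. Heaps are finite dicts from locations to values;
# formulas are tuples, e.g. ("sep", ("next", "x", "y"), ("emp",)).
from itertools import combinations

def models(s, h, f):
    kind = f[0]
    if kind == "emp":
        return h == {}
    if kind == "next":
        return h == {s[f[1]]: s[f[2]]}
    if kind == "lseg":
        # Unfold the inductive definition: either an empty segment, or
        # one next cell followed by a segment for the rest of the heap.
        x, z = s[f[1]], s[f[2]]
        if x == z:
            return h == {}
        rest = {l: v for l, v in h.items() if l != x}
        return x in h and models({"u": h[x], "v": z}, rest,
                                 ("lseg", "u", "v"))
    if kind == "sep":
        # Try every split of the heap into two disjoint parts.
        locs = list(h)
        for r in range(len(locs) + 1):
            for left in combinations(locs, r):
                h1 = {l: h[l] for l in left}
                h2 = {l: h[l] for l in locs if l not in left}
                if models(s, h1, f[1]) and models(s, h2, f[2]):
                    return True
        return False

# The example from the text: {1 -> 3, 3 -> 2} is a segment from x to y.
assert models({"x": 1, "y": 2}, {1: 3, 3: 2}, ("lseg", "x", "y"))
```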
\endinput } {\begin{figure} \caption{Model driven entailment checker} \label{hy:wf} \label{hy:loop} \label{hy:empty_l} \label{hy:empty_r} \label{hy:collide} \label{hy:base} \label{fig-model-driven} \end{figure} \endinput } {\section{Decision procedure for list segments and SMT theories} \label{sec-mini-alg-cns} In this section we define and describe the building blocks that, when put together as shown in the $\Prove$ and $\Match$ procedures of Figure~\ref{fig-model-driven}, constitute a decision procedure for entailment checking. The procedure works for entailments of the form $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$, where both $\Pi$ and $\Pi'$ are pure formulas, with respect to any background theory supported by the SMT solver, and both $\Sigma$ and $\Sigma'$ are spatial conjunctions. To abstract away the specifics of a spatial predicate $S$, we first define $\Addr(S)$ and $\Empty(S)$---respectively the \emph{address} and the \emph{emptiness condition} of a given spatial predicate---as follows: \def\text{---}{\text{---}} \begin{equation*}\setlength{\arraycolsep}{10pt} \begin{array}{ccc} S & \Addr(S) & \Empty(S) \\\hline \f{emp} & \text{---} & \top \\ \f{next}(x,y) & x & \bot \\ \f{lseg}(x,y) & x & x \mathop{\simeq} y \\ \end{array} \end{equation*} Intuitively, if the emptiness condition is true with respect to a stack-model $s$, the portion of the heap-model that corresponds to $S$ \emph{must} be empty. Alternatively, if the emptiness condition is false with respect to $s$, the value associated with its address \emph{must} occur in the domain of any heap satisfying the spatial predicate. Formally: given $s \models \Empty(S)$ for a stack $s$, we have $s, h \models S$ if, and only if, the heap $h = \emptyset$; and if $s, h \models \lnot\Empty(S) \land S$ then, necessarily, $s(\Addr(S)) \in \dom h$. 
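The table above transcribes directly into code. As a sanity check (our own tuple encoding of predicates, continuing the conventions used earlier), the stated property relating $\Empty$ and $\Addr$ can be tested on a concrete interpretation:

```python
# Addr and Empty, transcribed from the table. Addr is undefined ("---")
# for emp, which is never non-empty, so we simply leave it out.
def addr(p):
    return p[1]                    # for next(x, y) and lseg(x, y)

def empty(p, s):
    if p[0] == "emp":
        return True
    if p[0] == "next":
        return False
    return s[p[1]] == s[p[2]]      # lseg(x, y): empty iff s(x) = s(y)

# If s |= ~Empty(S) and s, h |= S, then s(Addr(S)) is in dom(h).
s = {"x": 1, "y": 3}
p = ("next", "x", "y")
h = {1: 3}                         # the only heap with s, h |= next(x, y)
assert not empty(p, s) and s[addr(p)] in h

q = ("lseg", "x", "x")             # an empty segment: any model has h = {}
assert empty(q, s)
```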
\paragraph{\bfseries Well-formedness} Before introducing the $\WellFormed$ condition, occurring at line~\ref{hy:wf} of the algorithm in Figure~\ref{fig-model-driven}, we first define the notion of \emph{collision} between spatial predicates. Given any two spatial predicates $S$ and $S'$, the formula \begin{equation*} \Collide(S, S') = \lnot\Empty(S) \land \lnot\Empty(S') \land \Addr(S) \mathop{\simeq} \Addr(S') \end{equation*} states that two predicates collide if, with respect to a stack-model, they are both non-empty and share the same address. This would cause a problem if both $S$ and $S'$ occur together in a spatial conjunction, since they would assert that the same address is allocated at two disjoint---separated---portions of the heap. Given a spatial conjunction $\Sigma = \sepof{S}{n}$, the \emph{well-formedness condition} is defined as the pure formula \begin{equation*} \WellFormed(\Sigma) = \smashoperator{\bigland_{1 \leq i < j \leq n}} \lnot\Collide(S_i, S_j) \;, \end{equation*} stating that no pair of predicates in the spatial conjunction collide. As an example, consider the spatial conjunction \begin{equation*} \Sigma = \underbrace{\f{next}(x, y)}_{S_1} \ast \underbrace{\f{lseg}(x, z)}_{S_2} \ast \underbrace{\f{next}(w, z)}_{S_3} \end{equation*} for which we obtain \begin{align*} \Collide(S_1, S_2) &= (\top \land x \mathop{\not\simeq} z \land x \mathop{\simeq} x) = (x \mathop{\not\simeq} z) \\ \Collide(S_1, S_3) &= (\top \land \top \land x \mathop{\simeq} w) = (x \mathop{\simeq} w) \\ \Collide(S_2, S_3) &= (x \mathop{\not\simeq} z \land \top \land x \mathop{\simeq} w) = (x \mathop{\not\simeq} z \land x \mathop{\simeq} w) \\ \WellFormed(\Sigma) &= \lnot (x \mathop{\not\simeq} z) \land \lnot (x \mathop{\simeq} w) \land \lnot (x \mathop{\not\simeq} z \land x \mathop{\simeq} w) = (x \mathop{\simeq} z \land x \mathop{\not\simeq} w) \;.
\end{align*} That is, the formula is well-formed only when $x \mathop{\simeq} z$, so that the second predicate is empty, and $x \mathop{\not\simeq} w$, so that the first and third do not collide. In general, the well-formedness condition is quite important since, as the next theorem states, it characterises the satisfiability of spatial conjunctions. \begin{theorem}\label{thm:wff} A spatial conjunction $\Sigma$ is satisfiable if, and only if, the pure formula $\WellFormed(\Sigma)$ is satisfiable. \end{theorem} \paragraph{\bfseries Matching step} We now proceed towards the introduction of the $\UnfGuard$ condition, used at line~\ref{hy:collide} in Figure~\ref{fig-model-driven}, which lies at the core of our matching procedure. For this we first define, given a spatial conjunction $\Sigma = \sepof{S}{n}$ and an expression $x$, the \emph{allocation condition} \begin{equation*} \Alloc(\Sigma,x) = \smashoperator{\biglor_{1 \leq i \leq n}} \lnot\Empty(S_i) \land x \mathop{\simeq} \Addr(S_i) \end{equation*} which holds, with respect to a stack-model $s$, when a corresponding heap-model~$h$ for $\Sigma$ would necessarily have to include $s(x)$ in its domain. Continuing with our previous example, we have that \begin{align*} \Alloc(\Sigma,z) = (\top \land z \mathop{\simeq} x) \lor (x \mathop{\not\simeq} z \land z \mathop{\simeq} x) \lor (\top \land z \mathop{\simeq} w) = (z \mathop{\simeq} x \lor z \mathop{\simeq} w) \;. \end{align*} That is, the value of $z$ must be allocated in the heap if either $z \mathop{\simeq} x$, so that it is needed to satisfy $\f{next}(x,y)$, or $z \mathop{\simeq} w$, so that it is needed to satisfy $\f{next}(w,z)$. If, on the other hand, the allocation condition is false, the value of $z$ may still happen to be allocated, but there is no actual need for it to occur in the domain of the heap. Now, when trying to prove an entailment $s \models \Sigma \lthen \Sigma'$, we want to show that any heap model of $\Sigma$ is also a model of $\Sigma'$.
Thus, if we find a pair of colliding predicates $S \in \Sigma$ and $S' \in \Sigma'$, then the portion of the heap that satisfies $S$ should overlap with the portion of the heap that satisfies $S'$. In fact, it is not hard to convince oneself---for the list segment predicates considered---that the heap model of $S'$ should match exactly that of $S$ plus some extra surplus. In the following definitions, $\UnfUpdate$ gives the precise value of the extra surplus, while $\UnfCheck$ specifies additional conditions which are necessary so that the model of $S$ does not leak outside the model of $S'$. \begin{equation*}\setlength{\arraycolsep}{10pt} \begin{array}{cc|cc} S' & S & \UnfUpdate(S,S') & \UnfCheck(\Sigma,S,S') \\\hline \f{next}(x',z) & \f{next}(x,y) & \f{emp} & y \mathop{\simeq} z \\ \f{lseg}(x',z) & \f{next}(x,y) & \f{lseg}(y,z) & \top \\ \f{next}(x',z) & \f{lseg}(x,y) & \f{emp} & \bot \\ \f{lseg}(x',z) & \f{lseg}(x,y) & \f{lseg}(y,z) & y \mathop{\not\simeq} z \lthen \Alloc(\Sigma,z) \end{array} \end{equation*} The \emph{matching step condition} is the formula \begin{equation*} \UnfGuard(\Sigma, S, S') = \Collide(S, S') \land \UnfCheck(\Sigma, S, S') \;. \end{equation*} To formalise our stated intuition, the following proposition articulates how the residue that is computed between two colliding predicates is indeed satisfied by the remaining heap surplus. The validity of this statement, as in the case of the subsequent two propositions, can be easily verified by inspection of the relevant definitions. \begin{proposition}\label{prop:step_unfold} Given two spatial predicates $S$, $S'$, a stack $s \models \Collide(S,S')$ and a heap $h$ such that $s, h \models S'$, if there is a partition $h = h_1 \ast h_2$ for which $s, h_1 \models S$, it necessarily follows that $s, h_2 \models \UnfUpdate(S, S')$.
\end{proposition} Moreover, for any stack satisfying the matching step condition, we are free to replace $S'$ in $\Sigma'$ with the matched expression $S \ast \UnfUpdate(S, S')$. Formally we state the following proposition. \begin{proposition}\label{prop:step_fold} Given a stack $s \models \UnfGuard(\Sigma, S, S')$, where $S$ and $S'$ are spatial predicates, and $S$ occurs in the spatial conjunction $\Sigma$, for any spatial conjunction $\Sigma'$ containing $S'$ we have that \begin{equation*} s \models (\Sigma' \setminus S') \ast S \ast \UnfUpdate(S, S') \lthen \Sigma' \end{equation*} \end{proposition} Finally, we state that the enclosure condition is complete in the sense that, if it were not satisfied by a stack $s$, then one could build a counterexample for the matching $S \ast \UnfUpdate(S,S') \lthen S'$. \begin{proposition}\label{prop:step_twist} Given two spatial predicates $S$, $S'$, a spatial conjunction $\Sigma$ that contains $S$, a stack $s$ and a two-part heap $h = h_1 \ast h_2$ such that $s, h_1 \ast h_2 \models \Sigma$ and $s, h_2 \models S \ast \UnfUpdate(S,S')$, if $s \models \Collide(S, S') \land \lnot\UnfCheck(\Sigma, S, S')$, then there is an $h_2'$ such that $s, h_1 \ast h'_2 \models \Sigma$ but $s, h'_2 \not\models S \ast \UnfUpdate(S,S') \lthen S'$. \end{proposition} As an example consider the case where $S = \f{lseg}(x,y)$ and $S' = \f{lseg}(x',z)$, such that $\UnfUpdate(S,S') = \f{lseg}(y,z)$. Take some stack $s \models \Collide(S,S')$ and the heap $h_2 = \set{s(x) \mapsto s(y), s(y) \mapsto s(z)}$ as a model of $\f{lseg}(x,y) \ast \f{lseg}(y,z)$. From $s \models \Collide(S,S')$ it follows that $s(x) \neq s(y)$, and from $s \models \lnot\UnfCheck(\Sigma, S, S')$ that $s(y) \neq s(z)$ and that the address $s(z)$ does not need to be allocated anywhere in $h = h_1 \ast h_2$. This allows us to patch the heap and let $h'_2 = \set{s(x) \mapsto s(z), s(z) \mapsto s(y), s(y) \mapsto s(z)}$, which is still a model of the pair $\f{lseg}(x,y) \ast \f{lseg}(y,z)$ but---due to the introduced cycle---not of $\f{lseg}(x',z)$.
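The patching argument can be replayed on concrete values. In the sketch below (our own encoding; the values $0$, $1$, $2$ for $s(x)$, $s(y)$, $s(z)$ are chosen purely for illustration) an exact-path check plays the role of the $\f{lseg}$ semantics:

```python
# Replaying the counterexample construction on concrete values.
from itertools import combinations

def is_lseg(h, a, b):
    # h is a model of lseg(a, b) iff h is exactly an acyclic path a ~> b.
    seen, cur = set(), a
    while cur != b:
        if cur not in h or cur in seen:
            return False
        seen.add(cur)
        cur = h[cur]
    return seen == set(h)          # the path must consume the whole heap

def splits(h):
    # All ways of splitting h into two disjoint sub-heaps.
    locs = list(h)
    for r in range(len(locs) + 1):
        for left in combinations(locs, r):
            yield ({l: h[l] for l in left},
                   {l: h[l] for l in locs if l not in left})

x, y, z = 0, 1, 2                  # illustrative values for s
h2 = {x: y, y: z}                  # model of lseg(x, y) * lseg(y, z)
h2p = {x: z, z: y, y: z}           # the patched heap from the proof sketch

assert any(is_lseg(a, x, y) and is_lseg(b, y, z) for a, b in splits(h2))
assert any(is_lseg(a, x, y) and is_lseg(b, y, z) for a, b in splits(h2p))
assert not is_lseg(h2p, x, z)      # the patched heap no longer matches
                                   # lseg(x', z): extra cells are left over
```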
\paragraph{\bfseries Matching and proving} To finalise the description of our decision procedure for entailment checking, it only remains to put all the ingredients together, as shown in Figure~\ref{fig-model-driven}, into the $\Match$ and $\Prove$ functions. The $\Match$ function tries to establish whether $s \models \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$. Initially called with $\SigmaI$ set to $\Sigma$, at the top level this is in fact equivalent to checking the validity of $s \models \Sigma \lthen \Sigma'$. During execution, $\SigmaI$ retains its initial value, $\Sigma$ and $\Sigma'$ carry the portions of the entailment that are left to match, while $\SigmaI \setminus \Sigma$ is the fragment already matched. As the function progresses, the conjunctions $\Sigma$ and $\Sigma'$ become shorter, while the matched portion $\SigmaI \setminus \Sigma$ grows. If successful, both $\Sigma$ and $\Sigma'$ become empty, yielding at the end the trivial entailment $s \models \SigmaI \lthen \SigmaI$. The function begins by inspecting $\Sigma$ and $\Sigma'$ to discard, at lines \ref{hy:empty_l} and \ref{hy:empty_r}, any empty predicates with respect to $s$, recursively calling itself to verify the rest of the entailment. After removing all such empty predicates, if a valid matching step is found, the predicate $S'$ occurring in $\Sigma'$ is replaced with $S \ast \UnfUpdate(S, S')$, so that $S$---which now occurs both in $\Sigma$ and $\Sigma'$---can be moved to the matched part of the entailment in the recursive call at line~\ref{hy:collide}. If the function is successful, after reaching the bottom of the recursion at line~\ref{hy:base} with both $\Sigma$ and $\Sigma'$ empty, the return value collects a conjunction of all assumptions made on the values of the stack. This allows us to generalise the proof, which then holds not only for the particular stack $s$, but for any stack satisfying the same assumptions.
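To make the description concrete, here is a much-simplified rendering of the $\Match$ recursion in Python (a hypothetical sketch of our own: it evaluates everything against the concrete stack $s$ and returns a boolean, whereas the actual procedure accumulates and returns the symbolic formula $U$; the greedy choice of the first colliding pair also glosses over the deterministic traversal of Figure~\ref{fig-model-driven}):

```python
# Much-simplified, concrete-stack rendering of the Match recursion.
# Predicates are tuples ("next", x, y) / ("lseg", x, y); the real
# procedure returns the generalising formula U, not a boolean.
def empty(p, s):
    return p[0] == "lseg" and s[p[1]] == s[p[2]]

def alloc(sigma_i, s, v):
    # Alloc: some non-empty conjunct of sigma_i has start address v.
    return any(not empty(p, s) and v == s[p[1]] for p in sigma_i)

def match(s, sigma_i, sigma, sigma_p):
    sigma = [p for p in sigma if not empty(p, s)]       # line empty_l
    sigma_p = [p for p in sigma_p if not empty(p, s)]   # line empty_r
    if not sigma and not sigma_p:
        return True                                     # base case
    for i, p in enumerate(sigma):                       # matching step
        for j, q in enumerate(sigma_p):
            if s[p[1]] != s[q[1]]:
                continue                                # no collision
            if p[0] == "next" and q[0] == "next":
                ok, upd = s[p[2]] == s[q[2]], []        # UnfCheck: y = z
            elif p[0] == "next" and q[0] == "lseg":
                ok, upd = True, [("lseg", p[2], q[2])]
            elif p[0] == "lseg" and q[0] == "lseg":
                ok = s[p[2]] == s[q[2]] or alloc(sigma_i, s, s[q[2]])
                upd = [("lseg", p[2], q[2])]
            else:                                       # lseg vs next: bottom
                ok, upd = False, []
            if ok:
                return match(s, sigma_i,
                             sigma[:i] + sigma[i + 1:],
                             sigma_p[:j] + sigma_p[j + 1:] + upd)
    return False
```

On the running example of Section~\ref{sec-illustration}, this sketch succeeds for the stack used there, and fails (signalling a counterexample) when, for instance, two points-to predicates collide but disagree on their targets.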
Otherwise, if the bottom of the recursion is reached with some portions still left to match, the function returns an unsatisfiable formula signalling the existence of a counterexample for the entailment. This behaviour is formalised in the following theorem, proved later in Section~\ref{correctness}. \begin{theorem}\label{thm:match} Given a pair of spatial conjunctions $\Sigma$, $\Sigma'$ and a stack $s$ such that $s \models \WellFormed(\SigmaI)$, we have that: \begin{itemize} \item the procedure $\Match(s, \Sigma, \Sigma, \Sigma')$ always terminates with a result $U$, \item the execution requires $O(n)$ recursive steps, where $n = \abs{\Sigma} + \abs{\Sigma'}$, \item if $s \models U$ then the entailment $U \land \Sigma \lthen \Sigma'$ is valid, and \item if $s \not\models U$ then $s \not\models \Sigma \lthen \Sigma'$. \end{itemize} \end{theorem} The main $\Prove$ function, which checks whether $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$ is valid, begins with the pure formula $\Gamma \assign \Pi \land \WellFormed(\Sigma)$. An SMT solver iteratively finds models for $\Gamma$, which become candidate stack models to guide the search for a proof or a counterexample. Given one such stack $s$, the $\Match$ function is called to check the validity of the entailment with respect to~$s$. If successful, $\Match$ returns a formula $U$ generalising the conditions in which the entailment is valid, so the search may continue for stacks where $U$ does not hold. The iterations proceed until either all possible stacks have been discarded, or a counterexample is found in the process. It is important to stress that the function does not enumerate all concrete models but, rather, the equivalence classes of models described by the formulas returned by $\Match$. Formally we state the following theorem, whose proof is given in Section~\ref{correctness}.
\begin{theorem}\label{thm:prove} Given two pure formulas $\Pi$, $\Pi'$, and two spatial formulas $\Sigma$, $\Sigma'$, we have that: \begin{itemize} \item the procedure $\Prove(\Pi \land \Sigma \lthen \Pi' \land \Sigma')$ always terminates, and \item the return value corresponds to the validity of $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$. \end{itemize} \end{theorem} \endinput } {\section{Proofs of correctness} \label{correctness} This section presents the main technical contribution of the paper, the proof of correctness of our entailment checking algorithm. The proof itself closely follows the structure of the previous section, filling in the technical details required to assert the statements of Theorem~\ref{thm:wff}, on well-formedness, Theorem~\ref{thm:match}, on matching, and finally Theorem~\ref{thm:prove} on entailment checking. \paragraph{\bfseries Well-formedness} Soundness of the well-formed condition $\WellFormed(\Sigma)$, the first half of Theorem~\ref{thm:wff}, can be easily shown by noting that if a spatial conjunction $\Sigma$ is satisfiable with respect to some stack and a heap, the formula $\WellFormed(\Sigma)$ is also necessarily true with respect to the same stack. \begin{proposition}\label{prop:sound_wf} Given a spatial conjunction $\Sigma$, a stack $s$, and a heap $h$, if we have $s, h \models \Sigma$, then also $s \models \WellFormed(\Sigma)$. \end{proposition} \begin{proof} Let $\Sigma = \sepof{S}{n}$. Since $s, h \models \Sigma$, there is a partition $h = \sepof{h}{n}$ such that each $s, h_i \models S_i$. Given a pair of predicates $S_i$ and $S_j$ with $i < j$, if either $s \models \Empty(S_i)$ or $s \models \Empty(S_j)$, then trivially $s \models \lnot\Collide(S_i, S_j)$. Assume otherwise that $s \models \lnot\Empty(S_i) \land \lnot\Empty(S_j)$. It follows that both $s(\Addr(S_i)) \in \dom h_i$ and $s(\Addr(S_j)) \in \dom h_j$. Since by construction $h_i$ and~$h_j$ have disjoint domains, we have $s(\Addr(S_i)) \neq s(\Addr(S_j))$. 
This implies that $s \models \lnot\Collide(S_i, S_j)$. \qed \end{proof} For completeness of the well-formed condition $\WellFormed(\Sigma)$, the second half of Theorem~\ref{thm:wff}, we prove a slightly more general result. In particular we show that if a stack $s \models \WellFormed(\Sigma)$ then it is possible to build a heap $h$ such that $s, h \models \Sigma$. Furthermore, we show that such $h$ is \emph{conservative} in the sense that it only allocates addresses which are strictly necessary. \begin{proposition}\label{prop:comp_wf} Given a spatial conjunction $\Sigma = \sepof{S}{n}$ and a stack~$s$ such that $s \models \WellFormed(\Sigma)$, there is a heap $h$ for which $s, h \models \Sigma$ and, furthermore, the domain $\dom h = \set{ s(\Addr(S_i)) \mid s \models \lnot\Empty(S_i) }$. \end{proposition} \begin{proof} Consider the heap $h = \sepof{h}{n}$ where each $h_i$ is defined as follows: \begin{itemize} \item if $s \models \Empty(S_i)$ then $h_i = \emptyset$; otherwise \item if $s \models \lnot\Empty(S_i)$ it follows that $S_i = \f{next}(x,y)$ or $S_i = \f{lseg}(x,y)$, in either case let $h_i = \set{ s(x) \mapsto s(y) }$. \end{itemize} By construction $s, h_i \models S_i$ and, furthermore, if $s \models \lnot\Empty(S_i)$ it follows that $\dom h_i = \set{s(\Addr(S_i))}$. From this we easily get as desired that the domain of the heap $\dom h = \set{ s(\Addr(S_i)) \mid s \models \lnot\Empty(S_i) }$. Now, to prove that $s, h \models \Sigma$, it only remains to show that for any pair $S_i$, $S_j$ with $i \neq j$ the domains of their respective heaplets are disjoint, i.e. $\dom h_i \cap \dom h_j = \emptyset$. If either $s \models \Empty(S_i)$ or $s \models \Empty(S_j)$ the result is trivial. Otherwise assume that $s \models \lnot\Empty(S_i) \land \lnot\Empty(S_j)$. Since $s \models \WellFormed(\Sigma)$, and in particular also $s \models \lnot\Collide(S_i, S_j)$, it follows that $s \not\models \Addr(S_i) \mathop{\simeq} \Addr(S_j)$.
Namely the address values $s(\Addr(S_i)) \neq s(\Addr(S_j))$ and, thus, the domains of $h_i$ and $h_j$ are disjoint. \qed \end{proof} Theorem~\ref{thm:wff} follows immediately as a corollary of Propositions~\ref{prop:sound_wf} and~\ref{prop:comp_wf}. \paragraph{\bfseries Matching and proving} The following proposition is the main ingredient required to establish the soundness and completeness of the $\Match$ procedure of Figure~\ref{fig-model-driven}. The proof, although long and quite technical in its details, follows the intuitive description given in Section~\ref{sec-mini-alg-cns} about the behaviour of $\Match$. Each of the main four cases in the proof corresponds, respectively, to the conditions on lines~\ref{hy:empty_l} and~\ref{hy:empty_r}, when discarding empty predicates, line~\ref{hy:collide}, when a matching step is performed, and finally line~\ref{hy:base}, when the base case of the recursion is reached. Each case is further divided into two sub-cases, one for the situation when the recursive call is successful and a proof of validity is established, and one for the situation when a counterexample is built. The last case, the base of the recursion, is divided into four sub-cases: the successful case when the matching is completed, the case in which all of $\Sigma'$ is consumed but there are predicates in $\Sigma$ left to match, the case in which there is a collision but the enclosure condition is not met, and finally the case in which there is no collision at all. \begin{proposition} Given three spatial formulas $\SigmaI$, $\Sigma$, $\Sigma'$, and a stack $s$ such that $\Sigma \subseteq \SigmaI$, and $s \models \WellFormed(\SigmaI)$; let~$U$ be the pure formula returned by $\Match(s, \SigmaI, \Sigma, \Sigma')$.
\begin{itemize} \item If $s \models U$ then $U \land \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$ is valid and, otherwise, \item if $s \not\models U$ there is a heap $h$ such that $s, h \not\models \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$. \end{itemize} \end{proposition} \begin{proof} The proof goes by induction, following the recursive definition of the $\Match$ function. \begin{itemize} \item Suppose we reach line~\ref{hy:empty_l}, with a predicate $S \in \Sigma$ such that $s \models \Empty(S)$. Recursively let $U' = \Match(s, \SigmaI, \Sigma \setminus S, \Sigma')$ and $U = \Empty(S) \land U'$. Since $s \models \Empty(S)$ it follows that $s \models U \liff U'$. \begin{itemize} \item if $s \models U$, we want to show that $U \land \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$ is valid, so take any model $s', h \models U \land \SigmaI$. By induction we know the formula $U' \land \SigmaI \lthen R$ is valid, where $R = (\SigmaI \setminus (\Sigma \setminus S)) \ast \Sigma' = (\SigmaI \setminus \Sigma) \ast S \ast \Sigma'$. Since $U \lthen U'$, it therefore follows that $s',h \models (\SigmaI \setminus \Sigma) \ast S \ast \Sigma'$. Since $s' \models \Empty(S)$, there is nothing allocated in $h$ for $S$ and, thus, $s',h \models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \item if $s \not\models U'$, by induction there is a heap $h$ such that $s, h \models \SigmaI$ but, at the same time, $s, h \not\models (\SigmaI \setminus \Sigma) \ast S \ast \Sigma'$. Again, since $s, \emptyset \models S$, it must be the case that $s, h \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$, as otherwise we would reach a contradiction. \end{itemize} \item Suppose we reach line~\ref{hy:empty_r} with a predicate $S' \in \Sigma'$ such that $s \models \Empty(S')$. Recursively let $U' = \Match(s, \SigmaI, \Sigma, \Sigma' \setminus S')$ and $U = \Empty(S') \land U'$. Again we have $s \models U \liff U'$.
\begin{itemize} \item if $s \models U$, we want to show that $U \land \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$ is valid, so take any model $s', h \models U \land \SigmaI$. By induction we know $U' \land \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$ is valid and, thus, we also get that $s', h \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$. Again, from $s' \models \Empty(S')$ and $s', \emptyset \models S'$ it follows that $s', h \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S'$ or, equivalently, $s', h \models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \item if $s \not\models U'$, by induction there is a heap $h$ such that $s, h \models \SigmaI$ but, at the same time, $s, h \not\models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$. Similarly $s, \emptyset \models S'$, so it must be the case that $s, h \not\models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S'$ or, equivalently, $s, h \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \end{itemize} \item Suppose we reach line~\ref{hy:collide}, with two predicates $S \in \Sigma$ and $S' \in \Sigma'$ such that the stack $s \models \UnfGuard(\SigmaI, S, S')$. Let $S'' = \UnfUpdate(S, S')$, recursively obtain $U' = \Match(s, \SigmaI, \Sigma \setminus S, (\Sigma' \setminus S') \ast S'')$, and let $U = \UnfGuard(\SigmaI, S, S') \land U'$. As before we have $s \models U \liff U'$. \begin{itemize} \item if $s \models U$, we want to show that $U \land \SigmaI \lthen (\SigmaI \setminus \Sigma) \ast \Sigma'$ is valid. That is, any model $s', h \models U \land \SigmaI$ is also a model of $(\SigmaI \setminus \Sigma) \ast \Sigma'$. By induction we have that $U' \land \SigmaI \lthen R$ is valid, where the formula \begin{equation*} R = (\SigmaI \setminus (\Sigma \setminus S)) \ast (\Sigma' \setminus S') \ast S'' = (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S \ast S''\;.
\end{equation*} Since $s',h \models U' \land \SigmaI$ it follows that $s',h \models R$. By Proposition~\ref{prop:step_fold}, since $s' \models \UnfGuard(\SigmaI,S,S')$, we obtain that $s', h \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S'$ or, equivalently, $s', h \models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \item if $s \not\models U$, by induction there exists a heap $h$ such that $s,h \models \SigmaI$ but $s,h \not\models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S \ast S''$. Partition $h = h_1 \ast h_2$ such that $s, h_1 \models \SigmaI \setminus S$ and $s, h_2 \models S$. Now note that, regardless of whether $S$ is a $\f{next}$ or a $\f{lseg}$ predicate with arguments $x$, $y$, letting $h'_2 = \set{s(x) \mapsto s(y)}$ and $h' = h_1 \ast h'_2$ we have that both $s, h'_2 \models S$ and $s, h' \models \SigmaI$. We claim that $s, h' \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$. Assume by contradiction that $s, h' \models (\SigmaI \setminus \Sigma) \ast \Sigma'$, and partition now $h' = h_3 \ast h_4$ such that $s, h_3 \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$ and $s, h_4 \models S'$. Because $S$ and $S'$ collide, it follows that $\dom h'_2 = \set{s(\Addr(S))} \subseteq \dom h_4$ and $h_4 = h'_2 \ast h_5$ for some remainder~$h_5$. Then, by Proposition~\ref{prop:step_unfold}, $s, h_4 \models S \ast S''$ and $s, h_5 \models S''$. But $h = h_1 \ast h_2 = h_3 \ast h_2 \ast h_5$ would then be a model of $(\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S') \ast S \ast S''$, contradicting our inductive hypothesis. \end{itemize} \item Suppose we reach line~\ref{hy:base}. We can find ourselves in several situations: \begin{itemize} \item $\Sigma' = \emptyset$, $\Sigma = \emptyset$, and the function returns $U = \top$. In this case it is trivial that $s \models U$ and $U \land \SigmaI \lthen (\SigmaI \setminus \emptyset) \ast \emptyset$ is valid. \item $\Sigma' = \emptyset$, there is an $S \in \Sigma$, and the function returns $U = \bot$.
In this case $s \not\models U$, so we need to find a counterexample for the entailment. From Proposition~\ref{prop:comp_wf} there is a heap $h$ such that $s, h \models \SigmaI$. Partition $h = h_1 \ast h_2$ such that $s, h_1 \models (\SigmaI \setminus \Sigma)$ and $s, h_2 \models \Sigma$. Since~$S$ occurs in $\Sigma$, and at this point $s \not\models \Empty(S)$, it is necessarily the case that $s(\Addr(S)) \in \dom h_2$. In particular $h_2 \neq \emptyset$, and because $h = h_1 \ast h_2$, we obtain $s, h \not\models (\SigmaI \setminus \Sigma)$. Furthermore, since $\Sigma' = \emptyset$, this is equivalent to $s, h \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \item There are $S' \in \Sigma'$ and $S \in \Sigma$ such that $s \models \Collide(S, S')$, and the function returns $U = \bot$. Since we did not end up on line~\ref{hy:collide}, it must be the case that $s \not\models \UnfGuard(\SigmaI, S, S')$. By Proposition~\ref{prop:comp_wf} there is a heap $h$ such that $s, h \models \SigmaI$. Partition $h = h_1 \ast h_2$ such that $s, h_1 \models (\SigmaI \setminus S)$ and $s, h_2 \models S$. Let $h'_2 = \set{s(x) \mapsto s(y)}$ and $h' = h_1 \ast h'_2$; since $s \models \lnot\Empty(S)$ we have that $s, h'_2 \models S$ and $s, h' \models \SigmaI$. If it turns out that $s, h' \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$ we are done. Assume otherwise that $s, h' \models (\SigmaI \setminus \Sigma) \ast \Sigma'$ and partition the heap $h' = h_3 \ast h_4$ such that $s, h_3 \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$ and $s, h_4 \models S'$. Since the predicates $S$, $S'$ collide and are non-empty, it follows that the address $\set{s(\Addr(S))} = \dom h'_2 \subseteq \dom h_4$ and, therefore, $h_4 = h'_2 \ast h_5$ for some remainder~$h_5$. By Proposition~\ref{prop:step_unfold} it follows that $s, h_4 \models S \ast S''$ and $s, h_5 \models S''$, where $S'' = \UnfUpdate(S, S')$.
Since $h' = h_3 \ast h'_2 \ast h_5$ it then follows that $s, h_3 \ast h_5 \models (\SigmaI \setminus S)$. By Proposition~\ref{prop:step_twist} there is an $h_6$ such that $s, h_3 \ast h_5 \ast h_6 \models \SigmaI$ but $s, h_5 \ast h_6 \not\models S'$. However, since $s, h_3 \models (\SigmaI \setminus \Sigma) \ast (\Sigma' \setminus S')$, it follows that $s, h_3 \ast h_5 \ast h_6 \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$. The heap $h_3 \ast h_5 \ast h_6$ is a counterexample for the entailment. \item There is some $S' \in \Sigma'$ such that $s \not\models \Collide(S, S')$ for all $S \in \Sigma$, and thus the function returns $U = \bot$. By Proposition~\ref{prop:comp_wf} there is a heap $h$ such that $s, h \models \SigmaI$. Partition $h = h_1 \ast h_2$ such that $s, h_1 \models (\SigmaI \setminus \Sigma)$ and $s, h_2 \models \Sigma$. Since $S'$ does not collide with any predicate in $\Sigma$, it follows that $s(\Addr(S')) \notin \dom h_2$ and, in particular, $s, h_2 \not\models \Sigma'$. From this it follows that $s, h_1 \ast h_2 \not\models (\SigmaI \setminus \Sigma) \ast \Sigma'$. \qed \end{itemize} \end{itemize} \end{proof} The correctness of the $\Match$ procedure, formally stated previously in Theorem~\ref{thm:match}, follows as a corollary of this proposition for the case when $\SigmaI = \Sigma$. Termination of the procedure is also easily verified: at the recursive calls in lines~\ref{hy:empty_l} and~\ref{hy:collide} the size of the third argument decreases and, at the recursive call in line~\ref{hy:empty_r}, where the third argument stays the same, the size of the fourth argument decreases. This same termination argument also shows that the number of recursive steps is in fact linear in the size of $\Sigma$ and $\Sigma'$. Finally, we are ready to prove the termination and correctness of the main $\Prove$ procedure as stated earlier in Theorem~\ref{thm:prove}.
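Before turning to the proof, it may help to fix the loop structure of $\Prove$ that the argument below refers to. The following pseudocode is our schematic reconstruction from the surrounding discussion; it is a sketch, not the actual figure, and the helper names are ours:

```text
Prove(Pi, Sigma, Pi', Sigma'):
    Gamma := Pi and WellFormed(Sigma)
    loop:                                    # invariants hold here (line hy:loop)
        if Gamma is unsatisfiable: return valid
        pick a stack s with s |= Gamma       # a model returned by the SMT solver
        U := Match(s, Sigma, Sigma, Sigma')
        if s |/= Pi' and U: return invalid   # s induces a counterexample heap
        Gamma := Gamma and not(Pi' and U)    # block this class of stack models
```

Each iteration either settles the entailment or strictly shrinks the set of stack models of $\Gamma$, which is the termination measure used in the proof.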
Specifically, we show that the procedure returns $\f{valid}$ if, and only if, the entailment $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$ supplied as argument is indeed valid. \begin{proof}[of Theorem~\ref{thm:prove}] Termination can be established since, at each iteration of the loop at line~\ref{hy:loop}, the number of satisfying models of $\Gamma$ is strictly reduced. Since there is only a finite number of formulas that can be built by combinations of $\Empty(S)$ and $\UnfGuard(\SigmaI, S, S')$---the building blocks for $U$---all suitable combinations must eventually be exhausted. For correctness we now prove that the following invariants always hold at line~\ref{hy:loop}, the head of the loop: \begin{enumerate} \item $\Gamma \lthen \Pi \land \WellFormed(\Sigma)$, and \item if $\Gamma \land \Sigma \lthen \Pi' \land \Sigma'$ is valid then also $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$ is. \end{enumerate} The first invariant can be easily verified by inspecting the code and noting that at the beginning $\Gamma = \Pi \land \WellFormed(\Sigma)$, and later only more conjuncts are appended to $\Gamma$. For the second invariant, right before entering the loop we have that $\Gamma = \Pi \land \WellFormed(\Sigma)$. So, assuming that $\Pi \land \WellFormed(\Sigma) \land \Sigma \lthen \Pi' \land \Sigma'$ is valid, take any $s', h \models \Pi \land \Sigma$; from Proposition~\ref{prop:sound_wf} it follows that $s' \models \WellFormed(\Sigma)$ and therefore, from our assumption, $s', h \models \Pi' \land \Sigma'$. If we enter the body of the loop we have that $s \models \Gamma$ and start by letting $U = \Match(s, \Sigma, \Sigma, \Sigma')$.
If $s \not\models \Pi' \land U$, then either $s \not\models \Pi'$---in which case, from Proposition~\ref{prop:comp_wf}, there is a heap $h$ such that $s, h \models \Pi \land \Sigma$ but $s, h \not\models \Pi'$---or $s \not\models U$---in which case, from Theorem~\ref{thm:match}, there is a heap $h$ such that $s, h \models \Pi \land \Sigma$ but $s, h \not\models \Sigma'$. In either case the entailment is invalid and the procedure correctly reports this. Alternatively, if $s \models \Pi' \land U$, then assuming that $\Gamma \land \lnot(\Pi' \land U) \land \Sigma \lthen \Pi' \land \Sigma'$ is valid we have to prove that $\Pi \land \Sigma \lthen \Pi' \land \Sigma'$ is valid as well. Take any $s', h \models \Pi \land \Sigma$; if $s', h \models \Pi' \land U$ then, from Theorem~\ref{thm:match}, the formula $U \land \Sigma \lthen \Sigma'$ is valid, and $s', h \models \Pi' \land \Sigma'$. Otherwise, if $s', h \not\models \Pi' \land U$, from our assumption we likewise have $s', h \models \Pi' \land \Sigma'$. \qed \end{proof} } {\section{Experiments}\label{benchmarks} We implemented our entailment checking algorithm in a tool called \textsf{Aster{\ooalign{\hidewidth\clap{\raise0.8ex\hbox{\tiny*}}\hidewidth\cr\i}}x}\ using~Z3 as the theory back-end for testing the satisfaction of pure formulas and evaluating expressions against pure stack-models. The tool already accepts arbitrary theory expressions and assertions as part of the entailment formula. However, due to the current lack of realistic application benchmarks making use of such theory features, we only report the running times of this new implementation against already published benchmarks from~\cite{SlpPLDI11}. \input{tab-benchmarks} Table~\ref{tab:clones} shows experiments that have a significant number of repeated spatial atoms in the entailment. They are particularly difficult for the unfolding implemented in \textsf{slp}\ and the match function in~\textsf{Aster{\ooalign{\hidewidth\clap{\raise0.8ex\hbox{\tiny*}}\hidewidth\cr\i}}x}.
Since our match function collects constraints that can potentially be useful for other applications of match, we observe a significant improvement. } \section{Conclusion}\label{conclu} We have presented a method for extending an SMT solver with separation logic using the list segment predicate. Our method decides entailments of the form $\Pi \land \Sigma \rightarrow \Pi' \land \Sigma'$, whose pure and spatial components may freely use arbitrary theory assertions and theory expressions, as long as they are supported by the back-end SMT solver. Furthermore, we provide a formal proof of correctness of the algorithm, as well as experimental results with an implementation using~Z3 as the theory solver. \end{document}
\begin{document} \begin{frontmatter} \address[famu]{Florida A\&M University, Department of Physics, Tallahassee, FL 32307} \title{Toroidal moments of Schr\"{o}dinger eigenstates} \author [famu]{M. Encinosa\corref{cor1}} \ead{[email protected]} \author [famu]{J. {Williamson}} \ead{[email protected]} \cortext[cor1]{Corresponding Author} \begin{abstract} The Hamiltonian for a particle constrained to motion near a toroidal helix with loops of arbitrary eccentricity is developed. The resulting three dimensional Schr\"{o}dinger equation is reduced to a one dimensional effective equation inclusive of curvature effects. A basis set is employed to find low-lying eigenfunctions of the helix. Toroidal moments corresponding to the individual eigenfunctions are calculated. The dependence of the toroidal moments on the eccentricity of the loops is reported. Unlike the classical case, the moments strongly depend on the details of loop eccentricity. \end{abstract} \begin{keyword} toroidal helix \sep toroidal moment \sep curvature potential \end{keyword} \end{frontmatter} \section{Introduction} The majority of work directed towards modeling the metaparticle constituents of metamaterials has been performed using classical physics \cite{wegener}. The characteristic length scales of most currently fabricated metaparticles allow for that approach to be appropriate and productive. However, it is nearly certain that metaparticles will eventually be fabricated on scales at which quantum mechanical methods will prove necessary to capture their physics with good fidelity \cite{zhang, shea, lorke}. This paper focuses upon two interesting properties common to many metaparticles: they can be approximated as reduced dimensionality systems and they can possess nontrivial topologies. The advent of quasi one and two dimensional curved nanostructures has led to situations wherein formalism developed for particles constrained to curved manifolds has become of practical importance.
Specifically, there exists a prescription that allows for degrees of freedom extraneous to the particle's ``motion'' on a curve or surface to be shuttled into effective curvature potentials in the Schr\"{o}dinger equation \cite{chapblick, dacosta1,dacosta2,duclosexner,ee1,ee2,jenskoppe,matsutani1,matsutani2,taira}. Recently, it was suggested that quantum methods be employed in an effort towards understanding toroidal moments induced by currents supported on nanoscale metaparticles and the interactions of those moments with time-dependent electromagnetic fields \cite{kaelscience}. Because of the theoretical and practical interest in toroidal moments \cite{afanasiev, ceulemans, dubovik, papas, naumov,spaldin, sawada}, a toroidal helix (TH) of adjustable eccentricity has been chosen here to investigate the role of quantum effects. Being closed, a TH can support current-carrying solutions allowing for the existence of a toroidal moment \cite{kibis}. Furthermore, the TH has the advantage of having sufficient symmetry to allow for a clean reduction of the full Hamiltonian to a one dimensional effective Hamiltonian. The goals of this work are threefold. The first is to derive the Hamiltonian for a particle in a coordinate system adapted to include points near the coils of a TH of arbitrary eccentricity. The next deals with reducing the full three dimensional Hamiltonian via a well known procedure \cite{dacosta1,schujaff,burgsjens} to arrive at an effective one-dimensional Schr\"{o}dinger equation. The reduction of dimensionality impels the introduction of a curvature potential well known to workers in the field of curved manifold quantum mechanics. A basis set consistent with the periodicity and symmetry of the system is introduced thereafter. With the first two goals achieved and the basis functions in hand, the spectrum and wave functions of the system (which can be used for applications in the external field and/or time-dependent case) are found.
Finally, toroidal moments (TMs) corresponding to particular eigenstates are determined and their sensitivity to the eccentricity of the loops comprising the TH is investigated. The remainder of this paper is organized into four sections. Section 2 introduces a parameterization for an $\omega$-turn TH in terms of an azimuthal coordinate $\phi$. A three dimensional Hamiltonian $H^3_{\omega}$ appropriate to motion near the TH follows by attaching a Frenet system to the helix and assigning two coordinates $q_N,q_B$ to describe degrees of freedom away from the coil. Section 3 details the reduction of $H^3_{\omega}$ to a one-dimensional $H^1_{\omega}$ by methods that are standard, although perhaps unfamiliar to workers in the metamaterial community. As a consequence of the reduction, curvature potentials appear. Their presence has been shown to be essential in properly describing one dimensional systems that exist in an ambient higher dimensional space \cite{enchardsoft}. Section 4 presents the basis set used to calculate the spectrum, eigenstates, and toroidal moments for given quantum states. Those quantities, along with results showing the dependence of the TMs on eccentricity, are given. Section 5 is dedicated to conclusions and some remarks concerning future work. \section{The TH Schr\"{o}dinger equation} To arrive at the time-independent Schr\"{o}dinger equation $H^3_{\omega}({\bf r})\Psi= \big(-{1\over 2}\nabla^2 +V \big )\Psi = E\Psi$, the Laplacian must be derived from a suitable parameterization of the TH geometry. Consider a TH with $\omega$ equally spaced circular coils. Let \textit{R} be the distance from the $z$-axis to a loop center and \textit{a} the radius of a loop. First define \begin{equation} W(\phi)=R+a~{\rm cos}(\omega\phi) \end{equation} with $\phi$ the usual cylindrical coordinate azimuthal angle.
The circular TH is traced out by the Monge form \cite{graustein} \begin{equation} {{\bf{r}}}(\phi)=W(\phi)\hat{{\boldsymbol {{\boldsymbol {\rho}}}}}+a~{\rm sin}(\omega\phi)\hat{{\bf{k}}}. \end{equation} Generalizing Eq.(2) to coils of arbitrary eccentricity requires only the modification \begin{equation} {\bf{r}}(\phi)=W(\phi)\hat{{\boldsymbol {\rho}}}+b~{\rm sin}(\omega\phi)\hat{{\bf{k}}} \end{equation} where $a,b$ may be adjusted to yield the coil shape desired (Fig. 1). To avoid cluttering the narrative with blocks of equations, the expressions that follow will apply to the circular case only. The expressions for arbitrary $a$ and $b$ are given in the appendix. A three dimensional neighborhood in the vicinity of the TH is built by assigning two coordinates to points near the curve along unit vectors orthogonal to the curve's tangent and to each other. The Frenet-Serret equations \cite{graustein} provide such an orthonormal coordinate system known as a Frenet trihedron. The unit tangent to any point on a curve traced by ${\bf{r}}(\phi)$ is \begin{equation} \hat{{\bf{T}}}={d{\bf{r}}(\phi)\over{d\phi}}~{\bigg| \bigg| {d{\bf{r}}(\phi)\over{d\phi}} \bigg| \bigg|}^{-1} \end{equation} \noindent from which the Frenet trihedron can be constructed via the relations \begin{equation} {d\hat{{\bf{T}}}\over {d\phi}}= {\bigg| \bigg| {d{\bf{r}}(\phi)\over{d\phi}} \bigg| \bigg|} \kappa(\phi) \hat{{\bf{N}}} \end{equation} \begin{equation} {d\hat {{\bf{N}}}\over {d\phi}}= {\bigg| \bigg| {d{\bf{r}}(\phi)\over{d\phi}} \bigg| \bigg|} \big( -\kappa(\phi) \hat{{\bf{T}}}+\tau(\phi) {\hat{{\bf{B}}}} \big) \end{equation} \begin{equation} {d\hat{{\bf{B}}}\over {d\phi}}=- {\bigg| \bigg| {d{\bf{r}}(\phi)\over{d\phi}} \bigg| \bigg|} \tau(\phi) \hat{{\bf{N}}} \end{equation} where the curvature and torsion of the space curve ${\bf{r}}(\phi)$ are indicated by $\kappa(\phi)$ and $\tau(\phi)$ respectively (where again, detailed forms for the expressions in Eqs. (4-7) appear in the appendix). 
Points near the TH are located via two perpendicular displacements $q_N\hat{{\bf{N}}}$ and $q_B\hat{{\bf{B}}}$. The TH position vector may now be written \begin{equation} {\bf{x}}(\phi,q_N,q_B)={\bf{r}}(\phi)+q_N\hat{{\bf{N}}}+q_B\hat{{\bf{B}}}. \end{equation} \noindent It should be noted that Eq.(8) defines a Cartesian region about a curve traced by ${\bf{r}}(\phi)$. While it is certainly possible to construct a finite tubular neighborhood about ${\bf{r}}(\phi)$, the coordinate ambiguity of the azimuthal angle as the radial distance approaches zero causes the limiting procedure to become complicated. Additionally, the separability of the Schr\"{o}dinger equation into tangential and normal variables is lost, and with it any real advantage in using the reduced Hamiltonian. The covariant metric tensor elements $g_{ij}$ can be read off the quadratic form \cite{arfkenweber} \begin{equation} d{\bf{x}} \cdot d{\bf{x}}=g_{ij}dq^idq^j \end{equation} where in what follows the ordering convention is $(q^1,q^2,q^3) \equiv (\phi,q_N,q_B)$. The Laplacian is \begin{equation} \nabla^2={1 \over \sqrt{g}}{\partial \over{\partial q^i}} \bigg(\sqrt{g}~g^{ij} {\partial \over {\partial q^j}}\bigg) \end{equation} with $g=\det(g_{ij})$ and $g^{ij}$ the contravariant components of the metric tensor. Before presenting explicit forms for $g_{ij}$ and $g^{ij}$, it is useful to define \begin{equation} f(\phi)=\bigg | \bigg | {{d \bf {r}(\phi)} \over{{d\phi}}} \bigg | \bigg | =[(a \omega)^2 + W(\phi)^2]^{1/2} \end{equation} and \begin{equation} G(\phi,q_N)=1-q_N \kappa(\phi) \end{equation} after which the covariant metric may be written \vskip 2 pt \begin{equation} g_{ij}=\begin{pmatrix} f(\phi)^2[G(\phi,q_N)^2+\tau(\phi)^2(q_N^2+q_B^2)] & -\tau(\phi) q_B f(\phi)& \tau(\phi) q_N f(\phi) \\ -\tau(\phi) q_B f(\phi) & 1 & 0 \\ \tau(\phi) q_N f(\phi) & 0 & 1\end{pmatrix}.
\end{equation} \vskip 6pt \noindent The contravariant form of the metric is obtained straightforwardly: \vskip 2pt \begin{equation} g^{ij}={1\over{f(\phi)^2G(\phi,q_N)^2}}\begin{pmatrix}{1} & {\tau(\phi) q_B f(\phi)} & {-\tau(\phi) q_N f(\phi)} \\ {\tau(\phi) q_B f(\phi)} & f(\phi)^2[G(\phi,q_N)^2 +{\tau(\phi)^2 q_B^2]} & -{\tau(\phi)^2 q_N q_B f(\phi)^2} \\ {-\tau(\phi) q_N f(\phi)} & -{\tau(\phi)^2 q_N q_B f(\phi)^2} & f(\phi)^2[G(\phi,q_N)^2+{\tau^2(\phi) q_N^2]}\end{pmatrix}. \end{equation} It is easy to show that $$\sqrt{g}=f(\phi)\big( 1- q_N\kappa(\phi) \big).$$ \vskip4pt \noindent The Laplacian found by directly evaluating Eq.(10) is complicated by the existence of cross terms arising from the ${\partial^2 /{ \partial q^i \partial q^j}}$ ($i\neq j$) operations. However, all of those terms are multiplied by the distance parameters $q_N$ and $q_B$, such that in the limit $q_N, q_B \rightarrow 0$ they vanish independently of the derivative operators that follow them. Taking this limit now (it will be taken again later, after the $q_{N,B}$ derivatives have acted) leads to a more convenient starting point for developing the reduced Hamiltonian in the ensuing section. Physically, the limiting procedure is effected by external mechanical or electrical constraints; mathematically, the constraints are added by hand into the Schr\"{o}dinger equation as potentials $V_n(q)$ normal to the lower dimensionality base manifold. Their detailed forms are not important. Previous work has shown that even for finite thicknesses, degrees of freedom extraneous to those of the base manifold do not mix with the latter in the sense that their wave functions decouple \cite{enchardsoft}. Here, for the sake of definiteness, hard-wall potentials are assumed for $V_n(q_N)$ and $V_n(q_B)$ in this and the next section.
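As a quick consistency check on the metric algebra above, the determinant identity $g = f^2(1-q_N\kappa)^2$ underlying $\sqrt{g}$ can be verified numerically; the values of $f$, $\kappa$, $\tau$, $q_N$, $q_B$ below are arbitrary sample numbers of ours, not physical parameters.

```python
import numpy as np

# Verify det(g_ij) = f^2 (1 - qN*kappa)^2 for the covariant metric of Eq. (13);
# all numerical values here are arbitrary illustrative choices.
f, kappa, tau, qN, qB = 1.7, 0.6, 0.9, 0.05, -0.03
G = 1.0 - qN * kappa
g = np.array([
    [f**2 * (G**2 + tau**2 * (qN**2 + qB**2)), -tau * qB * f, tau * qN * f],
    [-tau * qB * f,                             1.0,           0.0         ],
    [ tau * qN * f,                             0.0,           1.0         ],
])
assert np.isclose(np.linalg.det(g), (f * G)**2)  # matches sqrt(g) = f*G
```

The same cancellation of the torsion terms holds symbolically, which is why $\tau(\phi)$ drops out of $\sqrt{g}$.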
With this discussion in mind, $H^3_\omega$ may be written as (with $\hbar=m=1$) \begin{equation} H^3_\omega=-{1\over{2}} \bigg({1 \over{f(\phi)^2}}{\partial^2 \over{\partial \phi^2}}-{f'(\phi) \over{f(\phi)^3}}{\partial \over{\partial \phi}}- \kappa(\phi){\partial \over{\partial q_N}}+{\partial^2 \over{\partial q_N^2}}+{\partial^2 \over{\partial q_B^2}}~\bigg)+ V_n(q_N)+V_n(q_B). \end{equation} \noindent Note that the $H^3_\omega$ at this stage is still not separable. The procedure for rendering $H^3_\omega$ separable and arriving at a simpler effective Hamiltonian is given in the following section. \section{Constructing the effective Hamiltonian} As the particle is constrained to the toroidal helix, its wave function will decouple into tangent and normal functions (the subscripts \textit{t} and \textit{n} denote tangent and normal respectively) \begin{equation} \Psi (\phi,q_N,q_B) \rightarrow \chi_t (\phi) \chi_n (q_N) \chi_n (q_B) \end{equation} and $G(\phi,q_N)$ will approach unity. The normalization condition \begin{equation} \int_0^{2 \pi} \vert\Psi(\phi,q_N,q_B)\vert^2 G(\phi,q_N)f(\phi)\,d\phi\,dq_N\,dq_B=1 \end{equation} becomes \begin{equation} \int_0^{2 \pi} \vert \chi_t (\phi) \vert^2~\vert \chi_n (q_N) \vert^2~\vert \chi_n (q_B) \vert^2 f(\phi)\,d\phi\,dq_N\,dq_B=1. \end{equation} The norm must be conserved in the decoupled limit \cite{dacosta1}, which implies \begin{equation} \vert\Psi(\phi,q_N,q_B)\vert^2 G(\phi,q_N)=\vert \chi_t (\phi) \vert^2~\vert \chi_n (q_N) \vert^2~\vert \chi_n (q_B) \vert^2. \end{equation} The wave function $\Psi(\phi,q_N,q_B)$ is now related to $\chi_t (\phi) \chi_n (q_N) \chi_n (q_B)$ by \begin{equation} \Psi(\phi,q_N,q_B)=\chi_t (\phi) \chi_n (q_N) \chi_n (q_B) G^{-{1 /2}}(\phi,q_N). 
\end{equation} Applying $H^3_\omega$ to $\Psi(\phi,q_N,q_B)$ and taking the limit as $q_N, q_B \rightarrow 0$ after all derivative operations yields the result \begin{equation} H^3_\omega=-{1 \over{2}}\bigg({1 \over{f(\phi)^2}}{\partial^2 \over{\partial \phi^2}}-{f'(\phi) \over{f(\phi)^3}}{\partial \over{\partial \phi}}+{1\over{4}}\kappa^2(\phi) +{\partial^2 \over{\partial q_N^2}}+{\partial^2 \over{\partial q_B^2}}~\bigg)+V_n(q_N) +V_n(q_B). \end{equation} Distributing the energy between the $(\phi,q_N,q_B)$ degrees of freedom by writing $E=E_{\phi}+E_N+E_B$ leads to the decoupled system \begin{equation} -{1 \over{2}}\bigg({1 \over{f(\phi)^2}} {\partial^2 \over{\partial \phi^2}}-{f'(\phi) \over{f(\phi)^3}} {\partial \over{\partial \phi}}+{1\over{4}} \kappa^2(\phi) \bigg) \chi_t(\phi)=E_\phi \chi_t(\phi) \end{equation} \begin{equation} -{1 \over{2}}~{\partial^2 \chi_n(q_N) \over{\partial q_N^2}} + V_n(q_N) \chi_n(q_N)=E_N \chi_n(q_N) \end{equation} \begin{equation} -{1 \over{2}}~{\partial^2 \chi_n(q_B) \over{\partial q_B^2}} + V_n(q_B) \chi_n(q_B)=E_B \chi_n(q_B). \end{equation} \noindent Since $V_n(q_N)$ and $V_n(q_B)$ are the confining potentials effecting the $q_N, q_B \rightarrow 0$ constraint, $q_N$ and $q_B$ can be considered spectator variables and only the $\phi$-dependent part of the Hamiltonian indicated in Eq.(21) is nontrivial. The one-dimensional Hamiltonian $H^1_{\omega}$ is written \begin{equation} H^1_{\omega}=-{1 \over{2}}\bigg({1 \over{f(\phi)^2}} {\partial^2 \over{\partial \phi^2}}-{f'(\phi) \over{f(\phi)^3}} {\partial \over{\partial \phi}} \bigg)+V_c(\phi) \end{equation} \noindent with \begin{equation} V_c (\phi)=-{1\over{8}} \kappa^2 (\phi) \end{equation} \noindent the curvature potential. The curvature potential $V_c(\phi)$ emerges as an artifact of embedding the particle's one dimensional path of motion in the ambient three dimensional space.
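The effective operator of Eq.(25) can also be checked numerically, independently of the basis-set method used later, by a direct finite-difference discretization on a periodic $\phi$-grid. The sketch below is ours (the function names, grid size, and dense-matrix approach are our choices); f_of and Vc_of are assumed callables returning $f(\phi)$ and $V_c(\phi)$ on the grid.

```python
import numpy as np

# Finite-difference discretization of H^1_omega (Eq. (25)) on a periodic grid.
def spectrum_fd(f_of, Vc_of, M=400, k=4):
    phi = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    d = phi[1] - phi[0]
    f = f_of(phi)
    fp = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * d)   # periodic f'(phi)
    I = np.eye(M)
    # periodic central-difference matrices: (D1 x)_i = (x_{i+1} - x_{i-1})/(2d)
    D1 = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / (2.0 * d)
    D2 = (np.roll(I, -1, axis=0) - 2.0 * I + np.roll(I, 1, axis=0)) / d**2
    H = -0.5 * (np.diag(1.0 / f**2) @ D2 - np.diag(fp / f**3) @ D1) \
        + np.diag(Vc_of(phi))
    return np.sort(np.linalg.eigvals(H).real)[:k]       # lowest k levels
```

With $f \equiv 1$ and $V_c \equiv 0$ the routine reproduces the free-ring spectrum $E_n = n^2/2$ (doubly degenerate for $n \neq 0$) to discretization accuracy, a useful sanity check before turning on curvature.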
The explicit form of the curvature potential in Eq.(26) can be determined from \begin{equation} \kappa (\phi)=[P_1(\phi)^2+P_2(\phi)^2]^{1/2} \end{equation} where \begin{equation} P_1(\phi)=-{{a\omega^2 + W(\phi){\rm cos}(\omega\phi)}\over{f(\phi)^2}} \end{equation} and \begin{equation} P_2(\phi)= \frac{{\rm sin}(\omega\phi)}{f(\phi)}\bigg[1+\bigg({a\omega \over{f(\phi)}}\bigg)^2~\bigg]. \end{equation} \noindent Explicit forms of the tangent, normal, and binormal vectors, along with other vectors and functions for the circular and elliptic helices are given in the appendix. A plot of $V_c(\phi)$ for some representative values of $a,b$ with $\omega = 4$ appears in Fig. 2. Note that the circular case values are negligible in magnitude compared to the eccentric cases, and when $a > b$, $V_c(\phi)$ is substantially larger than for the converse. For larger ratios of $a$ to $b$, $V_c(\phi)$ can be orders of magnitude larger than indicated in the figure. It is worth stating that instead of parameterizing the TH with $\phi$, it would also be possible to employ an arc length scheme where an arc length parameter $\lambda$ is determined from $\lambda=\int_0^{\phi}f(\phi') \,d\phi'.$ However, to include the curvature potential as a function of $\lambda$, it would be necessary to find $\phi(\lambda)$ along the curve. While this could be accomplished numerically, using the azimuthal angle is somewhat better suited to incorporating external fields \cite{encarbB,encdipole}. \section{Computational methods and results } If the TH is small enough to require a quantum mechanical description, the $\phi$-dependent part of its wave function must obey Bloch's theorem (the $t$-subscript will be dropped hereafter) \begin{equation} \chi_k\bigg(\phi+{2\pi \over \omega}\bigg) ={\rm exp}\bigg[ik{2\pi \over \omega}\bigg]\chi_k(\phi). 
\end{equation} A standard choice is \cite{grossoparra} \begin{equation} \chi_k(\phi) ={\rm exp}(ik\phi) u_k(\phi) \end{equation} \noindent where $u_k(\phi+2\pi / \omega)=u_k(\phi)$ is satisfied. Single-valuedness requires the Bloch index $k \equiv p$ = integer. A convenient choice for $u_p(\phi)$ basis elements is \begin{equation} u_p(\beta,\phi)={\rm exp}[i\beta\phi]. \end{equation} The requirement indicated in Eq.(30) yields \begin{equation} \beta=\omega n, \ \ \ \ n \equiv {\rm integer}. \end{equation} From the above considerations, a suitable basis set for the TH is \begin{equation} \chi^{p\alpha}(\phi)={\rm exp}[i p \phi]\sum_n C^{p\alpha}_n {\rm exp} (i n \omega \phi). \end{equation} \noindent The Bloch form introduces sub-states (sub-bands in the case of a continuous rather than discrete index) for each $p$ value which would not be present if the TH were treated as a ring of length $L$. The $C^{p\alpha}_m$ are the expansion coefficients for the $\alpha$-th sub-state of a given $p$ value. In this work, it was found that a five-state expansion proved sufficient to yield basis-size-independent results for the lower $p$ sub-states. For $\omega$ turns, values of $p$ consistent with the Bloch theorem, $ p < \omega$, are used. For clarity, only $p \geq 0$ are discussed. A disadvantage of directly adopting the expression given by Eq.(34) is that the basis functions are not orthogonal over the integration measure $f(\phi) d\phi$. A more natural basis set is given by a re-scaled form of Eq.(34) \begin{equation} \chi^{p\alpha}(\phi)={{\rm exp}[i p \phi]\over {f(\phi)^{1 / 2}}}\sum_n C^{p\alpha}_n {\rm exp} (i n \omega \phi).
\end{equation} With basis function orthogonality preserved on the right hand side of the Hamiltonian, eigenvalues and eigenvectors are calculated by diagonalizing the matrix comprising the elements \begin{align} \begin{split} H_{mn}={1 \over 2\pi}\int_0^{2\pi} &e^{i\omega(n-m)\phi}\big[-2 V_c (\phi) -\frac{(p+\omega n)^2}{f(\phi)^2} +\frac{5}{4} \frac{f'(\phi)^2}{f(\phi)^4} -\frac{f''(\phi)}{2 f(\phi)^3} \\ &-2 i (p+ \omega n) \frac{f'(\phi)}{f(\phi)^3} \big] \,d\phi. \end{split} \end{align} Once the eigenstates are found, the current in general is calculated with (now with units) \begin{equation} {\textbf{\textit{j}}}(\phi,q_B,q_N)={q_e \hbar \over{m_e}}Im \big [\Psi^\ast(\phi,q_B,q_N) {\boldsymbol{\nabla}}\Psi(\phi,q_B,q_N) \big ]. \end{equation} The current density given by Eq.(37) is inclusive of cross-sectional degrees of freedom and yields a current passing through a rectangular area with unit normal $\hat{{\bf{T}}}$. However, in keeping with the intent of this work, the limit of infinitesimal thickness is assumed (or equivalently, the $q_B, q_N$ degrees of freedom are integrated out) leading to the current expression for the $p\alpha$-th state \begin{equation} {\textbf{\textit{j}}}^{p\alpha}(\phi,0,0)={{q_e \hbar } \over {m_e}}Im \bigg [{(\chi^{p\alpha}(\phi))^\ast \over f(\phi)} { \partial \chi^{p\alpha}(\phi) \over \partial \phi }\bigg ]{\hat{\bf{T}}} \end{equation} \noindent where the form of the reduced gradient operator is obvious. The quantum mechanical current that stems from Eqs.(35) and (38) becomes \begin{equation} {\textbf{\textit{j}}}^{p\alpha}(\phi,0,0)={q_e \hbar \over{m_e}}{1 \over{2\pi}}\sum\limits_{m,n} C^{p\alpha}_m C^{p\alpha}_n \bigg[{(p+\omega n) \over{f(\phi)^2}} {\rm cos}[\omega(n-m)\phi]-{f'(\phi)\over{2f(\phi)^3}} {\rm sin}[\omega(n-m)\phi]\bigg]~\hat{{\bf{T}}}. \end{equation} When $V_c(\phi)$ is included in the Hamiltonian the $C^{p\alpha}_m$ are modified, causing the current to become inclusive of curvature effects. 
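A minimal diagonalization sketch (ours, not the authors' code) for the circular case illustrates the procedure. Two assumptions are flagged: the bracketed integrand of Eq.(36) is taken to carry the overall $-1/2$ prefactor of the Hamiltonian in Eq.(25), and the circular-case $f$, $P_1$, $P_2$ from the appendix are used with analytic derivatives of $f$.

```python
import numpy as np

# Circular TH parameters (the omega = 4, p = 1 case discussed in the text)
R, a, w, p = 1.0, 0.5, 4, 1
ns = np.arange(-2, 3)                      # five-state expansion, n = -2..2

phi = np.linspace(0.0, 2.0*np.pi, 4096, endpoint=False)
W   = R + a*np.cos(w*phi)
Wp  = -a*w*np.sin(w*phi)                   # W'
Wpp = -a*w**2*np.cos(w*phi)                # W''
f2  = a**2*w**2 + W**2
f   = np.sqrt(f2)
fp  = W*Wp/f                               # f'
fpp = (Wp**2 + W*Wpp)/f - (W*Wp)**2/f**3   # f''

P1 = -(a*w**2 + W*np.cos(w*phi))/f2
P2 = (np.sin(w*phi)/f)*(1.0 + (a*w/f)**2)
Vc = -(P1**2 + P2**2)/8.0

# Matrix elements of Eq.(36), evaluated with the (spectrally accurate)
# periodic trapezoid rule: (1/2pi) * integral ~ mean over a uniform grid
M = np.zeros((ns.size, ns.size), dtype=complex)
for i, m in enumerate(ns):
    for j, n in enumerate(ns):
        g = np.exp(1j*w*(n - m)*phi)*(
            -2.0*Vc - (p + w*n)**2/f2 + 1.25*fp**2/f**4 - fpp/(2.0*f**3)
            - 2j*(p + w*n)*fp/f**3)
        M[i, j] = g.mean()

H = -0.5*M                                 # assumed -1/2 prefactor from Eq.(25)
evals = np.linalg.eigvalsh(H)
print(np.round(evals, 4))
```

The matrix is Hermitian (the cross term integrates against the derivative of $1/f^2$), so `eigvalsh` applies; enlarging `ns` provides the basis-size-independence check mentioned above.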
This current is then used to calculate the toroidal moments according to \cite{marinov} \begin{equation} \textbf{T}^{p\alpha}_M={1 \over{10}}\int_0^{2\pi} [({\textbf{\textit{j}}}^{p\alpha}(\phi,0,0) \cdot {\bf{r}}) {\bf{r}} -2r^2 {\textbf{\textit{j}}}^{p\alpha}(\phi,0,0)~]\,f(\phi)d\phi. \end{equation} \noindent Equation (40) allows calculation of quantum mechanical toroidal moments of ground and excited states for each Bloch index $p$. For a macroscopic thin wire where ${\textbf{\textit{j}}}d\tau \rightarrow Id{\bf{r}}$ is applicable, the toroidal moment for each $p$ reduces to the classical result \begin{equation} \textbf{T}^p_M={I \over{10}}\int_0^{2\pi} \bigg[ ({d{\bf{r}} \over{d\phi}} \cdot {\bf{r}}~){\bf{r}}-2r^2{d {\bf{r}} \over{d\phi}}~\bigg ] \,d\phi. \end{equation} For the circular TH, Eq.(41) yields \begin{equation} \textbf{T}^p_M=-{{\pi \omega I a^2 R} \over{2}} ~\hat{{\bf{k}}} \end{equation} and for the elliptic TH, \begin{equation} \textbf{T}^p_M=-{{\pi \omega I a b R} \over{2}} ~\hat{{\bf{k}}}. \end{equation} As a means of comparison, the current for the $p$ state without curvature effects (i.e. a free particle on a given $\omega$ turn helix) is easily determined to be \begin{equation} I={2 \pi q_e \hbar p \over m_e L^2} \end{equation} where the total length of the TH, $L$, is calculated using $$L=\int_0^{2\pi} f(\phi) \,d\phi.$$ The formalism described in this section was employed to calculate the eigenvalues and eigenstates expressed in terms of the $C^{p\alpha}_m$ for several $\omega$ and $p$ values. To get a sense of the modifications arising from $V_c(\phi)$, the eigenvalues and amplitudes for a six-turn eccentric helix in a $p = 1$ state are listed without (Table 1) and with (Table 2) the curvature potential being present. The eigenvalue shifts reflect that $V_c(\phi)$ is always attractive as shown in Fig. (2), and is capable of causing amplitude shifts.
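As an independent check on the closed forms in Eqs.(42) and (43) (our sketch, not the paper's code), the classical line-current moment can be integrated numerically using $d{\bf r}={\bf r}'(\phi)\,d\phi$ for the elliptic parameterization ${\bf r}=W\hat{\boldsymbol{\rho}}+b\,{\rm sin}(\omega\phi)\hat{\bf{k}}$, with $I=1$:

```python
import numpy as np

def classical_Tz(a, b, R, w, I=1.0, npts=20000):
    """z-component of T_M = (I/10) * integral[ (r'.r) r - 2 r^2 r' ] dphi
    for the elliptic TH, r = W rho_hat + b sin(w phi) k_hat, W = R + a cos(w phi)."""
    phi = np.linspace(0.0, 2.0*np.pi, npts, endpoint=False)
    W = R + a*np.cos(w*phi)
    x, y, z = W*np.cos(phi), W*np.sin(phi), b*np.sin(w*phi)
    Wp = -a*w*np.sin(w*phi)
    xp = Wp*np.cos(phi) - W*np.sin(phi)
    yp = Wp*np.sin(phi) + W*np.cos(phi)
    zp = b*w*np.cos(w*phi)
    rdotrp = x*xp + y*yp + z*zp
    r2 = x*x + y*y + z*z
    integrand_z = rdotrp*z - 2.0*r2*zp
    return (I/10.0) * integrand_z.mean() * 2.0*np.pi   # periodic trapezoid rule

for a, b in [(0.5, 0.5), (0.25, 0.75), (0.75, 0.25)]:
    print(a, b, classical_Tz(a, b, R=1.0, w=4), -np.pi*4*a*b/2.0)
```

Each case reproduces $-\pi\omega I a b R/2$ to quadrature accuracy, with the circular value $-\pi\omega I a^2 R/2$ recovered at $a=b$.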
The reader will note there is no table indicating the shifts for the circular case; \textit{the effects are negligible and essentially independent of the coil radius} $a$. With the $C^{p\alpha}_m$ amplitudes in hand, Eq.(39) can be used to find the ${\textbf{\textit{j}}}^{p\alpha}(\phi,0,0)$ necessary for computing TMs. To set a baseline for understanding the effect of including $V_c(\phi)$, the curvature potential was shut off and ${\textbf{\textit{j}}}^{p\alpha}(\phi,0,0)$ determined for many cases. In Fig. (3), representative results are given for $\omega = 4$, $p = 1$. As anticipated, the lowest energy states yield very steady currents; oscillations begin to manifest in the higher sub-band energy states. Turning the potential on produces very little change in the currents; in Fig. (4), it becomes clear that $V_c(\phi)$ does little, which is consistent with its small amplitude indicated in Fig. (2). When eccentricity is introduced by setting $a=.75$ and $b=.25$, the results are less trivial. The results displayed in Figs. (5) and (6) are representative of a general trend observed throughout values of $\omega,p$. The curvature potential suppresses the current in every sub-band by a discernible fraction. Similar behavior is observed when $a=.25$ and $b=.75$ as shown in Figs. (7) and (8), but note that the magnitudes are substantially different from the converse values of $a$ and $b$. Independent of the presence of $V_c(\phi)$ (which again lessens the magnitude of the currents), the Bloch form of the wave function and the $ab$ dependence of the Laplacian are sufficient to cause asymmetries in the currents. Toroidal moment results for $\omega = 4$ are shown in Table 3 for several $p$-states and their corresponding sub-states.
For a given $p$ value, the lowest energy state in Table 3 agrees very well with the value obtained if the current given by Eq.(44) were used, although $V_c(\phi)$ can shift the ordering of states in a way that reorders the ground state moment, as seen for the $p=3$ states. In isolation, this is relatively unimportant. However, in a broader context where the natural temperature scale of a $1000$~\AA\ helix is a few $\mu K$, thermodynamic averages of the type \begin{equation} \langle T_M \rangle = {1 \over{Z}}\sum_n T_M(E_n)\, {\rm{exp}}[-E_n/ \tau], \qquad Z=\sum_n {\rm{exp}}[-E_n/ \tau], \end{equation} will necessitate accounting for proper ordering. The modifications to the TMs for the upright ($b > a$) coil situation are generally minimal with exceptions only for $p=2$. The flattened coil ($a > b$) results in Table 3 show a much stronger variation in TM values, consistent with the much larger strength of $V_c(\phi)$ for $a > b$ relative to the converse. A sense of the dependence of TMs on $\omega$ can be gleaned from Table 4 where now $\omega = 8$. Increased variation is seen for both eccentricities, but the flattened coil case demonstrates appreciable deviation from the classical expression. \section{Conclusions} In this work a prescription to include curvature at the nanoscale for particles constrained to toroidal helices was presented, which the authors applied toward a quantum mechanical calculation of toroidal moments. It is worth emphasizing that the curvature-inclusive, reduced-dimensionality Schr\"{o}dinger equation developed here is driven by an interest in having more tractable, effective models for nanomaterials, and is done with the aim of eventually confronting experimental data rather than as a purely theoretical exercise. In that context, the choice to consider helices was driven by their capability of producing toroidal moments, which are currently of both theoretical and practical interest.
The curvature potential for the helix was derived and shown to be the dominant part of the Hamiltonian for lower energy eigenstates of eccentric helices. An intriguing result that arose here was a demonstrated $ab$ asymmetry in several states of the quantum mechanically calculated $\textbf{T}^{p\alpha}_M$, an asymmetry not exhibited in the classical expression of Eq.(43). The array of results given in this work was limited to relatively small values of $\omega$ and to less severe eccentricity because of numerical limitations on evaluating integrals of the type shown in Eq.(36). The extension to larger values of $\omega$ was considered (at least currently) outside the scope of what the authors were attempting to accomplish. However, preliminary work gives some indication that \textit{Mathematica} is capable of performing the necessary integrals, albeit with increased time expense. It would be of interest to investigate more extreme cases of eccentricity and loop number given the enhancement of moments already evidenced by larger $\omega$. Tailoring the response of toroidal helices to electromagnetic radiation by fabricating objects with curvature as a free parameter is still well outside the reach of current fabrication methods. However, the formalism and basis set established here may serve as a means for further investigation of the $\textbf{T}_M \cdot {\partial \textbf{E} / \partial t}$ interactions relevant to the coupling of toroidal moments to electromagnetic fields. The extension of the methods here to cases where external vector potentials are present may be naturally developed from work already done for tori immersed in arbitrary magnetic fields \cite{encarbB} and is ongoing with an aim of understanding persistent current effects. Finally, debate as to whether curvature effects are relevant to, and how they are manifested in, topologically novel nanostructures may eventually be settled by examining systems akin to toroidal helices.
By opting to either include or exclude curvature potentials in modeling routines, it may prove true that sensitive quantities like toroidal moments will provide a clear signature as to the influence of $V_c$. Work such as that done in this paper may hopefully contribute to a resolution of the question of how to properly incorporate twists and turns in the quantum mechanical description of bent nanostructures. \renewcommand{\theequation}{A-\arabic{equation}} \setcounter{equation}{0} \section*{Appendix} This appendix presents a more complete set of formulae for the circular TH as well as a corresponding set for the elliptical case. \subsection*{A.1 The circular case} \begin{equation} \hat{\boldsymbol{\theta}}=-{\rm sin}(\omega\phi)\hat{{\boldsymbol {\rho}}}+{\rm cos}(\omega\phi)\hat{{\bf{k}}} \end{equation} \begin{equation} \hat{\textbf{n}}={\rm cos}(\omega\phi)\hat{{\boldsymbol {\rho}}}+{\rm sin}(\omega\phi)\hat{{\bf{k}}} \end{equation} \begin{equation} f(\phi)=(a^2\omega^2+W(\phi)^2)^{1/2} \end{equation} \begin{equation} \hat{\textbf{e}}_2={{W(\phi) \hat{\boldsymbol{\theta}}-a\omega \hat{\boldsymbol{\phi}}}\over{f(\phi)}} \end{equation} \begin{equation} P_1(\phi)=-{{a\omega^2 + W(\phi){\rm cos}(\omega\phi)}\over{f(\phi)^2}} \end{equation} \begin{equation} P_2(\phi)=\frac{{\rm sin}(\omega\phi)}{f(\phi)}\bigg[1+\bigg({a\omega \over{f(\phi)}}\bigg)^2\bigg] \end{equation} \begin{equation} \kappa(\phi)=(P_1^2(\phi)+P_2^2(\phi))^{1/2} \end{equation} \begin{equation} \hat{{\bf{T}}}={{a\omega \hat{\boldsymbol{\theta}}+W(\phi)\hat{\boldsymbol{\phi}}}\over{f(\phi)}} \end{equation} \begin{equation} \hat{{\bf{N}}}={1\over{\kappa(\phi)}}(P_2(\phi) \hat{\textbf{e}}_2+P_1(\phi) \hat{\textbf{n}}) \end{equation} \begin{equation} \hat{{\bf{B}}}={1\over{\kappa(\phi)}}(-P_1(\phi) \hat{\textbf{e}}_2+P_2(\phi) \hat{\textbf{n}}) \end{equation} \subsection*{A.2 The elliptic case} With $W(\phi)=R+a~{\rm cos}(\omega\phi)$, the equation of the elliptic toroidal helix is
$${\bf{r}}(\phi)=W(\phi)\hat{{\boldsymbol {\rho}}}+b~{\rm sin}(\omega\phi)\hat{{\bf{k}}}.$$ The extension of the results of Sec. A.1 to the elliptic case is straightforward: \begin{equation} P(\phi)=[a^2~{\rm sin}^2(\omega\phi)+b^2~{\rm cos}^2(\omega\phi)]^{1/2} \end{equation} \begin{equation} \hat{\boldsymbol{\theta}}_E={1 \over{P(\phi)}}[-a~{\rm sin}(\omega\phi)\hat{{\boldsymbol {\rho}}}+b~{\rm cos}(\omega\phi) \hat{{\bf{k}}}] \end{equation} \begin{equation} \hat{\textbf{n}}_E={1 \over{P(\phi)}}[b~{\rm cos}(\omega\phi)\hat{{\boldsymbol {\rho}}}+a~{\rm sin}(\omega\phi) \hat{{\bf{k}}}] \end{equation} \begin{equation} \hat{\textbf{e}}_2={{W(\phi)\hat{\boldsymbol{\theta}}_E+P(\phi)\omega \hat{\boldsymbol{\phi}}}\over{f(\phi)}} \end{equation} \begin{equation} f(\phi)=(P(\phi)^2{\omega}^2+W(\phi)^2)^{1/2} \end{equation} \begin{equation} P_1(\phi)=-{b\over{P(\phi)}}{{a\omega^2 + W(\phi){\rm cos}(\omega\phi)}\over{f(\phi)^2}} \end{equation} \begin{equation} P_2(\phi)=\frac{{\rm sin}(\omega\phi)}{f(\phi)}\bigg({a\over{P(\phi)}}+{{\omega}^2W(\phi)(a^2-b^2){\rm cos}(\omega\phi)+P(\phi)^2a{\omega}^2 \over{f(\phi)^2P(\phi)}}\bigg) \end{equation} \begin{equation} \kappa(\phi)=(P_1(\phi)^2+P_2(\phi)^2)^{1/2} \end{equation} \begin{equation} \hat{{\bf{T}}}={{P(\phi)\omega~\hat{\boldsymbol{\theta}}_E+W(\phi)\hat{\boldsymbol{\phi}}}\over{f(\phi)}} \end{equation} \begin{equation} \hat{{\bf{N}}}={1\over{\kappa(\phi)}}(P_2(\phi) \hat{\textbf{e}}_2+P_1(\phi) \hat{\textbf{n}}_E) \end{equation} \begin{equation} \hat{{\bf{B}}}={1\over{\kappa(\phi)}}(-P_1(\phi) \hat{\textbf{e}}_2+P_2(\phi) \hat{\textbf{n}}_E) \end{equation} \pagebreak \begin{figure} \caption{A toroidal helix where $R$ is the distance from the center of the TH to a center of a loop of the TH. The parameter $a$ is the TH's maximum perpendicular horizontal distance from a concentric cylinder of radius $R$. 
The parameter $b$ is the TH's maximum vertical distance from the x-y plane.} \end{figure} \pagebreak \begin{figure} \caption{The curvature potential $V_c(\phi)$ in units of $\hbar^2 /(m_e R^2)$ for the case of the circular TH with $R=1$, $\omega=4$, $a=b=0.5$ and two elliptic TH cases: $R=1$, $\omega=4$, $a=0.25$, $b=0.75$ and $R=1$, $\omega=4$, $a=0.75$, $b=0.25$.} \end{figure} \pagebreak \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & $E_0$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ \\ \hline & 0.0724 & 1.6369 & 3.0045 & 7.7907 & 10.9186 \\ \hline $m$ & $C^{(0)}_m$ & $C^{(1)}_m$ & $C^{(2)}_m$ & $C^{(3)}_m$ & $C^{(4)}_m$ \\ \hline -2 & -0.1055 & -0.1648 & -0.0607 & -0.9761 & 0.0722 \\ -1 & 0.0585 & -0.9762 & -0.1236 & 0.1631 & -0.0428 \\ 0 & 0.9822 & 0.0315 & 0.0291 & -0.1020 & 0.1520 \\ 1 & 0.0556 & 0.1374 & -0.9702 & 0.0171 & -0.1907 \\ 2 & -0.1331 & -0.0087 & -0.1970 & 0.0996 & 0.9662 \\ \hline \end{tabular} \caption{Eigenvalues and amplitudes $C^{(\alpha)}_m$ for an $\omega$=6, $a$=0.75, $b$=0.25, $R$=1, $p$=1 elliptic TH neglecting curvature effects.} \label{} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & $E_0$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ \\ \hline & -1.4739 & -0.0442 & 2.1393 & 5.7258 & 9.2713 \\ \hline $m$ & $C^{(0)}_m$ & $C^{(1)}_m$ & $C^{(2)}_m$ & $C^{(3)}_m$ & $C^{(4)}_m$ \\ \hline -2 & 0.0606 & -0.0310 & 0.1231 & -0.9410 & 0.3077 \\ -1 & -0.3794 & -0.7975 & 0.4620 & 0.0389 & -0.0713 \\ 0 & 0.8927 & -0.4441 & -0.0424 & 0.0579 & -0.0266 \\ 1 & -0.2353 & -0.4071 & -0.8712 & -0.0772 & 0.1179 \\ 2 & -0.0062 & 0.0118 & -0.1027 & -0.3220 & -0.9411 \\ \hline \end{tabular} \caption{Eigenvalues and amplitudes $C^{(\alpha)}_m$ for an $\omega$=6, $a$=0.75, $b$=0.25, $R$=1, $p$=1 elliptic TH including curvature effects.} \label{} \end{table} \pagebreak \begin{table}[htbp] \caption{Toroidal moments for two configurations with $\omega$=4.
TM's with and without curvature effects, and classical calculation for each case.} \begin{minipage}[b]{0.5\linewidth} \scalebox{0.6}{ \begin{tabular}{|c|c|c|c|c|} \hline $\omega$ & $a$ & $b$ & & \\ \hline 4 & 0.25 & 0.75 & & \\ \hline p & TM & TM w/$V_c(\phi)$ & ratio & Classical TM \\ \hline 1 & -0.0334 & -0.0317 & 1.0545 & -0.0332 \\ 1 & 0.0895 & 0.0718 & 1.2457 & \\ 1 & -0.1478 & -0.1308 & 1.1293 & \\ 1 & 0.1901 & 0.1837 & 1.0347 & \\ 1 & -0.2401 & -0.2347 & 1.0230 & \\ \hline 2 & -0.0669 & -0.0551 & 1.2129 & -0.0664 \\ 2 & 0.0600 & 0.0524 & 1.1452 & \\ 2 & -0.0088 & 0.0193 & -0.4562 & \\ 2 & -0.0031 & -0.0344 & 0.0896 & \\ 2 & -0.2646 & -0.2655 & 0.9967 & \\ \hline 3 & 0.0288 & 0.0305 & 0.9430 & -0.0995 \\ 3 & -0.0993 & -0.0798 & 1.2441 & \\ 3 & 0.1380 & 0.1203 & 1.1479 & \\ 3 & -0.2038 & -0.2052 & 0.9930 & \\ 3 & -0.2888 & -0.2908 & 0.9931 & \\ \hline \end{tabular}} \label{} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \scalebox{0.6}{ \begin{tabular}{|c|c|c|c|c|} \hline $\omega$ & $a$ & $b$ & & \\ \hline 4 & 0.75 & 0.25 & & \\ \hline p & TM & TM w/$V_c(\phi)$ & ratio & Classical TM \\ \hline 1 & -0.0317 & -0.0068 & 4.6359 & -0.0319 \\ 1 & 0.1625 & 0.0627 & 2.5936 & \\ 1 & -0.2697 & -0.1743 & 1.5468 & \\ 1 & 0.1505 & 0.1470 & 1.0240 & \\ 1 & -0.1694 & -0.1863 & 0.9096 & \\ \hline 2 & -0.0561 & 0.0000 & - & -0.0638 \\ 2 & 0.1067 & 0.0157 & 6.7927 & \\ 2 & -0.3206 & -0.1161 & 2.7623 & \\ 2 & 0.1298 & -0.0081 & -15.9737 & \\ 2 & -0.1755 & -0.2072 & 0.8472 & \\ \hline 3 & 0.0664 & 0.0063 & 10.5665 & -0.0957 \\ 3 & -0.0933 & -0.0346 & 2.6957 & \\ 3 & 0.1211 & 0.1092 & 1.1090 & \\ 3 & -0.3893 & -0.3413 & 1.1407 & \\ 3 & -0.1784 & -0.2131 & 0.8372 & \\ \hline \end{tabular}} \label{} \end{minipage} \end{table} \begin{table}[htbp] \caption{Toroidal moments for two configurations with $\omega$=8. 
TM's with and without curvature effects, and classical calculation for each case.} \begin{minipage}[b]{0.5\linewidth} \scalebox{0.6}{ \begin{tabular}{|c|c|c|c|c|} \hline $\omega$ & $a$ & $b$ & & \\ \hline 8 & 0.25 & 0.75 & & \\ \hline p & TM & TM w/$V_c(\phi)$ & ratio & Classical TM \\ \hline 1 & -0.0190 & -0.0166 & 1.1485 & -0.0195 \\ 1 & 0.1218 & 0.0441 & 2.7619 & \\ 1 & -0.1600 & -0.0822 & 1.9477 & \\ 1 & 0.2787 & 0.1938 & 1.4383 & \\ 1 & -0.3175 & -0.2351 & 1.3501 & \\ \hline 2 & -0.0386 & -0.0332 & 1.1615 & -0.0390 \\ 2 & 0.1123 & 0.0664 & 1.6917 & \\ 2 & -0.1890 & -0.1430 & 1.3217 & \\ 2 & 0.2635 & 0.2293 & 1.1490 & \\ 2 & -0.3401 & -0.3113 & 1.0923 & \\ \hline 3 & -0.0582 & -0.0490 & 1.1888 & -0.0584 \\ 3 & 0.0952 & 0.0699 & 1.3622 & \\ 3 & -0.2104 & -0.1853 & 1.1354 & \\ 3 & 0.2454 & 0.2238 & 1.0966 & \\ 3 & -0.3598 & -0.3472 & 1.0363 & \\ \hline \end{tabular}} \label{} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \scalebox{0.6}{ \begin{tabular}{|c|c|c|c|c|} \hline $\omega$ & $a$ & $b$ & & \\ \hline 8 & 0.75 & 0.25 & & \\ \hline p & TM & TM w/$V_c(\phi)$ & ratio & Classical TM \\ \hline 1 & -0.0189 & -0.0096 & 1.9789 & -0.0192 \\ 1 & 0.2454 & 0.0560 & 4.3789 & \\ 1 & -0.3224 & -0.1299 & 2.4826 & \\ 1 & 0.3520 & 0.2586 & 1.3612 & \\ 1 & -0.3971 & -0.3162 & 1.2558 & \\ \hline 2 & -0.0378 & -0.0157 & 2.4149 & -0.0383 \\ 2 & 0.2267 & 0.0757 & 2.9954 & \\ 2 & -0.3806 & -0.2231 & 1.7056 & \\ 2 & 0.3382 & 0.2988 & 1.1315 & \\ 2 & -0.4284 & -0.4176 & 1.0257 & \\ \hline 3 & -0.0565 & -0.0138 & 4.1053 & -0.0575 \\ 3 & 0.1951 & 0.0614 & 3.1800 & \\ 3 & -0.4221 & -0.2692 & 1.5680 & \\ 3 & 0.3116 & 0.2627 & 1.1858 & \\ 3 & -0.4509 & -0.4640 & 0.9718 & \\ \hline \end{tabular}} \label{} \end{minipage} \end{table} \end{document}
\begin{definition}[Definition:Equivalent Linear Representations] Let $ \struct {G, \cdot}$ be a group. Consider two linear representations $\rho: G \to \GL V$ and $\rho': G \to \GL W$ of $G$. Then $\rho$ and $\rho'$ are called '''equivalent (linear representations)''' {{iff}} the $G$-modules corresponding to them under Correspondence between Linear Group Actions and Linear Representations are isomorphic. {{explain|The "isomorphic" link goes to a generic "abstract algebra" page, but I believe no actual definition has been made for an isomorphism between two G-modules.}} Category:Definitions/Representation Theory \end{definition}
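Unpacking the module isomorphism gives the usual concrete criterion (a standard reformulation, stated here for convenience): $\rho$ and $\rho'$ are equivalent {{iff}} there is an intertwining isomorphism between the representation spaces:

```latex
\exists\, T \colon V \to W \text{ a vector space isomorphism such that }
T \circ \rho(g) = \rho'(g) \circ T \quad \text{for all } g \in G.
```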
\begin{document} \title{\bf Total variation cutoff in a tree} \author{ Yuval Peres\thanks{Microsoft Research, Redmond, Washington, USA; [email protected]} \and Perla Sousi\thanks{University of Cambridge, Cambridge, UK; [email protected]} } \maketitle \thispagestyle{empty} \begin{abstract} We construct a family of trees on which a lazy simple random walk exhibits total variation cutoff. The main idea behind the construction is that hitting times of large sets should be concentrated around their means. For this sequence of trees we compute the mixing time, the relaxation time and the cutoff window. \newline \newline \emph{Keywords and phrases.} Mixing time, relaxation time, cutoff. \newline MSC 2010 \emph{subject classifications.} Primary 60J10. \end{abstract} \section{Introduction} Let $X$ be an irreducible aperiodic Markov chain on a finite state space with stationary distribution~$\pi$ and transition matrix~$P$. The lazy version of~$X$ is a Markov chain with transition matrix~$(P+I)/2$. Let $\varepsilon>0$. The $\varepsilon$-total variation mixing time is defined to be \[ t_{\mathrm{mix}}(\varepsilon) = \min\{ t\geq 0: \max_x\|P^t(x,\cdot) - \pi\|\leq \varepsilon\}, \] where $\|\mu - \nu\| = \sup_{A}|\mu(A) - \nu(A)|$ is the total variation distance between the measures $\mu$ and $\nu$. We say that a sequence of chains $X^n$ exhibits total variation cutoff if for all $0<\varepsilon <1$ \[ \lim_{n\to \infty} \frac{t_{\mathrm{mix}}^{(n)}(\varepsilon)}{t_{\mathrm{mix}}^{(n)}(1-\varepsilon)} = 1. \] We say that a sequence $w_n$ is a cutoff window for a family of chains $X^n$ if $w_n = o(t_{\mathrm{mix}}(1/4))$ and for all $\varepsilon>0$ there exists a positive constant $c_\varepsilon$ such that for all $n$ \[ t_{\mathrm{mix}}(\varepsilon) - t_{\mathrm{mix}}(1-\varepsilon) \leq c_\varepsilon w_n. \] Loosely speaking cutoff occurs when over a negligible period of time the total variation distance from stationarity drops abruptly from near $1$ to near $0$. 
It is standard that if $t_{\mathrm{rel}}$ and $t_{\mathrm{mix}}$ are of the same order, then there is no cutoff (see for instance~\cite[Proposition~18.4]{LevPerWil}). From that it follows that a lazy simple random walk on the interval $[0,n]$ or a lazy simple random walk on a finite binary tree on $n$ vertices do not exhibit cutoff, since in both cases $t_{\mathrm{rel}} \asymp t_{\mathrm{mix}}$. Although the above two extreme types of trees do not exhibit cutoff, in this paper we construct a sequence of trees on which a lazy simple random walk exhibits total variation cutoff. We start by describing the tree and then state the results concerning the mixing and the relaxation time of the lazy simple random walk on it. Let $n_j=2^{2^j}$ for $j\in \mathbb{N}$. We construct the tree $\mathcal{T}$ of Figure~\ref{fig:tree} by placing a binary tree at the origin consisting of $N=n_k^3$ vertices. Then for all $j\in \{[k/2],\ldots, k\}$ we place a binary tree at distance $n_j$ from the origin consisting of $N/n_j$ vertices. For each $j$ we call $\mathcal{T}_j$ the binary tree attached at distance $n_j$ and $\mathcal{T}_0$ the binary tree at $0$. We abuse notation and denote by $n_j$ the root of $\mathcal{T}_j$ and by $0$ the root of $\mathcal{T}_0$. \begin{figure} \caption{ The tree $\mathcal{T}$ (not drawn to scale)} \label{fig:tree} \end{figure} \begin{theorem}\label{thm:tree} The lazy simple random walk on the tree $\mathcal{T}$ exhibits total variation cutoff and for all $\varepsilon$ \[ t_{\mathrm{mix}}(\varepsilon) \sim 6Nk. \] Further, the cutoff window is of size $N\sqrt{k}$, i.e.\ for all $0<\varepsilon<1$ \[ t_{\mathrm{mix}}(\varepsilon) - t_{\mathrm{mix}}(1-\varepsilon) \leq c_\varepsilon N\sqrt{k}, \] where $c_\varepsilon$ is a positive constant. \end{theorem} By Chen and Saloff-Coste~\cite{ChenSaloff1} cutoff also holds for the continuous time random walk on $\mathcal{T}$.
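For intuition on the no-cutoff statement for the interval (our illustration, not part of the paper), one can compute $d(t)=\max_x\|P^t(x,\cdot)-\pi\|$ exactly from matrix powers of the lazy walk on $\{0,\ldots,n\}$ with reflecting endpoints; the profile decays gradually on the relaxation scale rather than dropping abruptly:

```python
import numpy as np

def lazy_path_kernel(n):
    """Lazy simple random walk on {0,...,n}, reflecting at both endpoints."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        P[i, i] = 0.5
        if i == 0:
            P[i, 1] = 0.5
        elif i == n:
            P[i, n - 1] = 0.5
        else:
            P[i, i - 1] = P[i, i + 1] = 0.25
    return P

n = 16
P = lazy_path_kernel(n)
deg = np.array([1.0] + [2.0]*(n - 1) + [1.0])
pi = deg/deg.sum()                    # stationary distribution (degree-biased)

Pt = np.eye(n + 1)
ds = []                               # ds[t] = max_x ||P^t(x,.) - pi||_TV
for t in range(6*n*n):
    ds.append(0.5*np.max(np.abs(Pt - pi).sum(axis=1)))
    Pt = Pt @ P

def t_mix(eps):
    return next(t for t, d in enumerate(ds) if d <= eps)

print(t_mix(0.25), t_mix(0.75), t_mix(0.25)/t_mix(0.75))
```

Here $d(0)=1-1/(2n)$, and the ratio $t_{\mathrm{mix}}(1/4)/t_{\mathrm{mix}}(3/4)$ stays bounded away from $1$, as the absence of cutoff requires.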
The main ingredient in the proof of Theorem~\ref{thm:tree} is to establish the concentration of the first hitting time of $0$ starting from $n_k$. Once this has been completed, cutoff follows easily. In Section~\ref{sec:concentration}, we prove the concentration result of the hitting time, which then gives a lower bound on the mixing time. Then in Section~\ref{sec:coupling} we describe the coupling that will yield the matching upper bound on the mixing time. \begin{remark}\rm{ We note that the same idea of showing concentration of hitting times was used in~\cite{DingLubPer} in order to establish cutoff for birth and death chains satisfying $t_{\mathrm{mix}} \cdot \text{gap} \to \infty$. The connection of hitting times to cutoff is presented in greater generality in~\cite{CarFraSc}. } \end{remark} It follows from Theorem~\ref{thm:tree} and~\cite[Proposition~18.4]{LevPerWil} that $t_{\mathrm{rel}}=o(t_{\mathrm{mix}}(1/4))$. In the next theorem we give the exact order of the relaxation time. We prove it in Section~\ref{sec:relaxation}. We use the notation $a_k\asymp b_k$ if there exists a constant $C$ such that $C^{-1} b_k \leq a_k \leq C b_k$ for all $k$ and we write $a_k\lesssim b_k$ if there exists a constant $C'$ such that $a_k\leq C'b_k$ for all $k$. Let $1=\lambda_1\geq \lambda_2\geq \lambda_3\geq\ldots$ be the eigenvalues of a finite chain. Let $\lambda_*=\max_{i\geq 2}|\lambda_i|$ and define the relaxation time $t_{\mathrm{rel}}=(1-\lambda_*)^{-1}$. Note that for a lazy chain $\lambda_*=\lambda_2$. \begin{theorem}\label{thm:relax} The relaxation time for the lazy simple random walk on the tree $\mathcal{T}$ satisfies \[ t_{\mathrm{rel}} \asymp N. \] \end{theorem} To the best of our knowledge the tree $\mathcal{T}$ is the first example of a tree for which $t_{\mathrm{mix}}$ is not equivalent to $t_{\mathrm{rel}}$. A related problem was studied in~\cite{PeresSousi} and we recall it here.
Suppose that we assign conductances to the edges of a tree in such a way that $c\leq c(e)\leq c'$ for all edges $e$, where $c$ and $c'$ are two positive constants. It is proved in~\cite[Theorem~9.1]{PeresSousi} that the mixing time of the weighted lazy random walk is up to constants the same as the mixing time of the lazy simple random walk on the original tree. Since the relaxation time is given by a variational formula, it is immediate that after assigning bounded conductances $t_{\mathrm{rel}}$ is only changed up to multiplicative constants. Hence if in the original tree we have that $t_{\mathrm{rel}}$ and $t_{\mathrm{mix}}$ are of the same order, then there is no way of assigning weights to the edges in order to make the weighted random walk exhibit cutoff. \section{Concentration of the hitting time}\label{sec:concentration} Let $X$ denote a lazy simple random walk on the tree $\mathcal{T}$. Define for all $x\in \mathcal{T}$ \[ \tau_x = \inf\{s\geq 0: X_s = x\}. \] \begin{lemma}\label{lem:concentration} We have as $k\to \infty$ \[ \enk{\tau_0} = 6Nk+ o\left(N\sqrt{k}\right) \ \text{ and } \ \vnk{\tau_0}\asymp N^2 k. \] \end{lemma} We will prove the above concentration lemma in this section. We start by stating standard results about hitting times and excursions that will be used in the proof of Lemma~\ref{lem:concentration}. We include their proofs for the sake of completeness. \begin{claim}\label{cl:lazynonlazy} Let $\tau$ and $\widetilde{\tau}$ denote hitting times of the same state for a discrete time non-lazy and lazy walk respectively. Then \[ \E{\widetilde{\tau}} = 2\E{\tau} \ \ \text{ and } \ \ \var{\widetilde{\tau}} = 4 \var{\tau} + 2\E{\tau}, \] assuming that both the lazy and non-lazy walks start from the same vertex. \end{claim} \begin{proof}[{\bf Proof}] It is easy to see that we can write $\widetilde{\tau}= \sum_{i=0}^{\tau-1} \xi_i$, where $(\xi_i)_i$ is an i.i.d.\ sequence of geometric variables with success probability $1/2$. 
By Wald's identity we get \[ \E{\widetilde{\tau}} = \E{\tau} \E{\xi_1} = 2\E{\tau}. \] Using the independence between $\tau$ and the sequence $(\xi_i)_i$ gives the identity for the variance of $\widetilde{\tau}$. \end{proof} \begin{claim}\label{cl:excursion} Let $T$ be the time spent in an excursion from the root by a simple random walk in a binary tree of size $n$. Then \[ \E{T} = \frac{3n-1}{2} \ \ \text{ and } \ \ \E{T^2} \asymp n^2. \] \end{claim} \begin{proof}[{\bf Proof}] It is standard that $\E{T} = \pi(o)^{-1} = (3n-1)/2$. It is easy to see that starting from any point~$x$ on the tree, the expected hitting time of the root is upper bounded by $cn$ for a constant $c$. Hence by performing independent experiments and using the Markov property we get for a positive constant~$c'$ \[ \pr{T>2kn} \leq e^{-c'k}. \] Therefore, we deduce that $\E{T^2} \leq c''n^2$. \end{proof} \begin{claim}\label{cl:localtime} Let $X$ be a simple random walk on the interval $[0,n]$ started from $n$ and~$L_i$ be the number of visits to~$i$ before the first time $X$ hits $0$. Then $L_i$ is a geometric random variable with parameter~$(2i)^{-1}$. \end{claim} We are now ready to give the proof of the concentration result. \begin{proof}[{\bf Proof of Lemma~\ref{lem:concentration}}] By Claim~\ref{cl:lazynonlazy} it suffices to consider a non-lazy random walk. We write~$\tau_0$ for the first hitting time of~$0$ for a simple random walk on the tree~$\mathcal{T}$. Every time we visit a vertex~$n_j$ for some~$j$ with probability~$1/2$ we make an excursion in the binary tree attached to this vertex. Since we are interested in the time it takes to hit~$0$ we can think of the problem in the following way: we replace a binary tree by a self-loop representing a delay which is the time spent inside the tree in an excursion from the root. It will be helpful to have Figure~\ref{fig:delay-fig} in mind. 
\begin{figure} \caption{ Delays represented by self loops} \label{fig:delay-fig} \end{figure} Let~$Y$ be a simple random walk on the line~$[0,n_k]$ starting from~$n_k$. Let~$S$ be the time it takes~$Y$ to reach~$0$. For~$i=[k/2],\ldots, k$ we let~$L_i$ be the local time at~$n_i$ before the first time~$Y$ hits~$0$, i.e. \[ L_i = \sum_{\ell=0}^{S} {\text{\Large $\mathfrak 1$}}(Y_\ell=n_i). \] For every vertex $n_i$ we let $(T_\ell^{(i)})_{\ell\leq L_i}$ be the delays incurred during the $L_i$ visits to $n_i$, i.e. \[ T_\ell^{(i)} = \sum_{m=1}^{G_{i,\ell}} \xi_{m}^{(i)}, \] where $(\xi_{m}^{(i)})_m$ is an i.i.d.\ sequence of excursions from $n_i$ in the binary tree rooted at $n_i$ and $G_{i,\ell}$ is an independent geometric random variable of success probability $1/2$. Note that the random variables $T_\ell^{(i)}$ are independent over different $i$ and $\ell$. Having defined these times we can now write \begin{align}\label{eq:decomp} \tau_0 = S + \sum_{i=[k/2]}^{k} \sum_{\ell=1}^{L_i} T_\ell^{(i)} = S+D, \end{align} where $D = \sum_{i=[k/2]}^{k} \sum_{\ell=1}^{L_i} T_\ell^{(i)}$. From Claims~\ref{cl:localtime} and~\ref{cl:excursion} and the independence between~$L_i$ and $T_\ell^{(i)}$ using the above representation of~$\tau_0$ we immediately get \[ \enk{\tau_0} = n_k^2 + \sum_{i=[k/2]}^{k}2n_i \left(3\frac{N}{n_i} -1\right) = 3Nk + o(N\sqrt{k}) \ \text{ as } k\to \infty, \] and hence multiplying by~$2$ gives the required expression. We now turn to estimate the variance. Using~\eqref{eq:decomp} we have \begin{align*} \vnk{\tau_0} &= \enk{\left( (S - \enk{S}) + (D - \enk{D})\right)^2 } \\ &= \vnk{S} + \vnk{D} + 2\enk{(S-\enk{S}) (D- \enk{D})}. \end{align*} Since $S$ is the first time that a simple random walk on $[0,n_k]\cap \mathbb{Z}$ hits $0$ started from $n_k$ it follows that \begin{align}\label{eq:vars} \vnk{S} \asymp n_k^4 = o(N^2 k). 
\end{align} By Cauchy Schwarz we get \[ \enk{(S-\enk{S}) (D- \enk{D})} \leq \sqrt{\vnk{S} \vnk{D}}, \] so if we prove that \begin{align}\label{eq:goal} \vnk{D} \asymp N^2 k, \end{align} then using~\eqref{eq:vars} we get $\sqrt{\vnk{S} \vnk{D}} \asymp N n_k^2 \sqrt{k} = o(N^2 k)$, and hence $\vnk{\tau_0} \asymp N^2 k$. Therefore, it suffices to show~\eqref{eq:goal}. To simplify notation further we write $D_i = \sum_{\ell=1}^{L_i} T_\ell^{(i)}$. We have \begin{equation} \begin{split} \label{eq:variance} \vnk{D} &= \sum_{i,j=[k/2]}^{k} \enk{(D_i - \enk{D_i})(D_j - \enk{D_j})}\\&= \sum_{j=[k/2]}^{k} \vnk{D_j} + 2\sum_{j=[k/2]}^{k}\sum_{i=j+1}^{k} \enk{(D_i - \enk{D_i})(D_j - \enk{D_j})}. \end{split} \end{equation} By Claims~\ref{cl:excursion} and~\ref{cl:localtime} and the independence between $L_i$ and $T_\ell^{(i)}$, we get that for all $i$ \[ \vnk{D_i} = \enk{T_1^{(i)}}^2 \vnk{L_i} + \enk{L_i}\vnk{T_1^{(i)}} \asymp N^2, \] and hence $\sum_{i=[k/2]}^{k} \vnk{D_i} \asymp N^2k$. In view of that, it suffices to show that for~$i > j$ \begin{align}\label{eq:goalnow} \left| \enk{(D_i - \enk{D_i})(D_j - \enk{D_j})} \right| \lesssim N^2\frac{n_j}{n_i}, \end{align} since then using the double exponential decay of $(n_\ell)$ completes the proof of the lemma. Since in order to hit $0$ starting from $n_k$ the random walk must first hit $n_i$ and then $n_j$, it makes sense to split the local time $L_i$ into two terms: the time $L_{i,1}$ that $Y$ spends at $n_i$ before the first hitting time of $n_j$ and the time $L_{i,2}$ that $Y$ spends at $n_i$ after the first hitting time of $n_j$. Writing \[ D_{i,1} = \sum_{\ell=1}^{L_{i,1}} T_\ell^{(i)} \ \ \text{ and } \ \ D_{i,2} = \sum_{\ell=1}^{L_{i,2}} \widetilde{T}_\ell^{(i)}, \] where $\widetilde{T}$ is an independent copy of $T$, we have that~$D_{i,1}$ is independent of~$D_j$, and hence \begin{align} \label{eq:kkk} \enk{(D_i- \enk{D_i})(D_j - \enk{D_j})} = \enk{(D_{i,2} - \enk{D_{i,2}})(D_j - \enk{D_j})}. 
\end{align} Using the independence between the local times and the delays we get \begin{equation} \begin{split} \label{eq:ddd} \enk{D_{i,2} D_j} = \enk{\econds{\sum_{\ell = 1}^{L_{i,2}}\widetilde{T}_{\ell}^{(i)} \sum_{r=1}^{L_j} T_r^{(j)}}{L_{i,2}, L_j}{n_k}} &= \enk{L_{i,2}\enk{T_1^{(i)}} L_{j} \enk{T_1^{(j)}}} \\ &= \enk{L_{i,2} L_j} \enk{T_1^{(i)}} \enk{T_1^{(j)}}. \end{split} \end{equation} If we denote by $\tau_x$ the hitting time of $x$ by the random walk $Y$, then we get \[ \prstart{\tau_{n_i} < \tau_0 \wedge \tau_{n_j}^+}{n_j} = \frac{1}{2(n_i - n_j)}. \] Once the random walk $Y$ visits $n_i$, then the total number of returns to $n_i$ before hitting $n_j$ again is a geometric random variable independent of $L_j$ and of parameter \[ \prstart{\tau_{n_j}<\tau_{n_i}^+}{n_i}= \frac{1}{2(n_i-n_j)}. \] Hence we can write \[ L_{i,2} = \sum_{\ell=1}^{L_j-1} \eta_\ell, \] where $\eta_\ell = 0$ with probability $1-1/(2(n_i - n_j))$ and $\theta_\ell$ with probability $1/(2(n_i - n_j))$, where $\theta_\ell$ is a geometric random variable with $\E{\theta_\ell} = 2(n_i - n_j)$. Note that $\eta_\ell$ is independent of $L_j$. Therefore we deduce \begin{align*} \enk{L_{i,2}L_j} = \enk{L_j\sum_{\ell=1}^{L_j-1}\eta_{\ell}} = \enk{\econd{L_j \sum_{\ell=1}^{L_j-1} \eta_\ell}{L_j}} = (\enk{L_j^2} - \enk{L_j}) \enk{\eta_1} \asymp n_j^2, \end{align*} where in the last step we used Claim~\ref{cl:localtime} and the fact that $\enk{\eta_\ell} =1$ for all $\ell$. Hence combining the above with~\eqref{eq:ddd} and Claim~\ref{cl:excursion} we conclude \begin{align}\label{eq:final} \enk{D_{i,2}D_j} \asymp n_j^2 \frac{N}{n_i} \frac{N}{n_j} = N^2 \frac{n_j}{n_i}. \end{align} Using Wald's identity we obtain \begin{align*} \enk{D_{i,2}} \enk{D_j} \asymp N^2 \frac{n_j}{n_i} \end{align*} and combined with~\eqref{eq:kkk} and~\eqref{eq:final} proves~\eqref{eq:goalnow} and thus finishes the proof of the lemma. 
\end{proof} \begin{proof}[{\bf Proof of Theorem~\ref{thm:tree}} (lower bound)] Let $t = \enk{\tau_0} - \gamma \sqrt{\vnk{\tau_0}}$, where the constant $\gamma$ will be determined later. By the definition of the total variation distance we get \begin{align*} d(t) \geq \| \prstart{X_t \in \cdot}{n_k} - \pi\| \geq \pi(\mathcal{T}_0) - \prstart{X_t \in \mathcal{T}_0}{n_k} \geq 1-o(1)- \prstart{\tau_0<t}{n_k}, \end{align*} since $\pi(\mathcal{T}_0) = 1 - o(1)$. Chebyshev's inequality gives \begin{align*} \prstart{\tau_0<t}{n_k} \leq \prstart{\left|\tau_0 -\enk{\tau_0}\right| > \gamma \sqrt{\vnk{\tau_0}}}{n_k} \leq \frac{1}{\gamma^2}. \end{align*} Hence by choosing $\gamma$ big enough we deduce that for all sufficiently large $k$ \begin{align*} d(t) \geq 1 - o(1)- \frac{1}{\gamma^2} > \varepsilon, \end{align*} which implies that $t_{\mathrm{mix}}(\varepsilon) \geq t$. By Lemma~\ref{lem:concentration} we thus get that \[ t_{\mathrm{mix}}(\varepsilon) \geq 6Nk - c_1N\sqrt{k} \] for a positive constant $c_1$. \end{proof} \section{Coupling}\label{sec:coupling} In this section we prove the upper bound on $t_{\mathrm{mix}}(\varepsilon)$ via coupling. \begin{proof}[{\bf Proof of Theorem~\ref{thm:tree}} (upper bound)] Let $X_0 = x$ and $Y_0\sim \pi$. Consider the following coupling. We let $X$ and $Y$ evolve independently until the first time that $X$ hits $0$. After that we let them continue independently until the first time they collide or reach the same level of the tree $\mathcal{T}_0$ in which case we change the coupling to the following one: we let $X$ evolve as a lazy simple random walk and couple $Y$ to $X$ so that $Y$ moves closer to (or further from) the root if and only if $X$ moves closer to (or further from) the root respectively. Hence they coalesce if they both hit $0$. Let $\tau$ be the coupling time and $t = \enk{\tau_0} + \gamma\sqrt{\vnk{\tau_0}}$, where the constant $\gamma$ will be determined later in order to make $\pr{\tau>t}$ as small as we like. 
Define $\tau_x^* = \inf\{ s\geq \tau_0: X_s = x \}$ for all $x$ and \begin{align*} L = \sum_{s=\tau_0}^{\tau_{n_{[k/2]}}^*} {\text{\Large $\mathfrak 1$}}\left(X_{s-1} \notin \mathcal{T}_0, X_s \in \mathcal{T}_0\right), \end{align*} i.e.\ $L$ is the number of returns to the tree $\mathcal{T}_0$ in the time interval $[\tau_0,\tau^*_{n_{[k/2]}}]$. Then $L$ has the geometric distribution with parameter $1/n_{[k/2]}$. Setting $A_L=\{L> \sqrt{n_{[k/2]}}\}$ we get by the union bound \begin{align}\label{eq:unionbd} \pr{A_L^c}=\pr{L\leq \sqrt{n_{[k/2]}}} \leq \frac{1}{\sqrt{n_{[k/2]}}}. \end{align} We also define the event that after time $\tau_0$ the random walk hits the leaves of the tree $\mathcal{T}_0$ before exiting the interval $[0,n_{[k/2]}]$, i.e. \begin{align}\label{eq:defe} E = \left\{ \tau^*_{\partial \mathcal{T}_0} < \tau^*_{n_{[k/2]}} \right\}. \end{align} Since at every return to the tree $\mathcal{T}_0$ with probability at least $1/3$ the random walk hits the leaves of~$\mathcal{T}_0$ before exiting the tree~$\mathcal{T}_0$, it follows that \begin{align}\label{eq:hittheleaves} \prcond{\tau_{\partial \mathcal{T}_0}^* >\tau_{n_{[k/2]}}^*}{L}{} \leq \left(\frac{2}{3}\right)^{L}. \end{align} By decomposing into the events $A_L$ and $E$ we obtain \begin{align}\label{eq:bigeq} \nonumber\pr{\tau>t} &\leq \pr{\tau>t, A_L} + \frac{1}{\sqrt{n_{[k/2]}}} \\ & \leq \pr{\tau>t,A_L, E} +\left(\frac{2}{3}\right)^{\sqrt{n_{[k/2]}}} + \frac{1}{\sqrt{n_{[k/2]}}}, \end{align} where the first inequality follows from~\eqref{eq:unionbd} and the second one from~\eqref{eq:hittheleaves} and the fact that we are conditioning on the event $\{L>\sqrt{n_{[k/2]}}\}$. We now define $S$ to be the first time after $\tau^*_{\partial \mathcal{T}_0}$ that $X$ hits $0$, i.e.\ $S= \inf\{s\geq \tau^*_{\partial \mathcal{T}_0}: X_s =0\}$. 
Let~$(\xi_i)_i$ be i.i.d.\ random variables, where~$\xi_1$ is distributed as the length of a random walk excursion on the interval~$[0,n_{[k/2]}]$ conditioned not to hit~$n_{[k/2]}$. Let~$(\ell_{i,j})_{i,j}$ be i.i.d.\ random variables with~$\ell_{1,1}$ distributed as the length of a random walk excursion from the root on the tree~$\mathcal{T}_0$ conditioned not to hit the leaves and~$(G_i)_i$ be i.i.d.\ geometric random variables of success probability~$1/3$. Then on the event~$E$ we have \[ S - \tau_0 \prec \sum_{i=1}^{L} \xi_i + \sum_{i=1}^{L} \sum_{j=1}^{G_i} \ell_{i,j} +\zeta, \] where~$\zeta$ is independent of the excursion lengths and is distributed as the commute time between the root and the leaves of the tree~$\mathcal{T}_0$ and~$\prec$ denotes stochastic domination. Hence, by Wald's identity we obtain \begin{align}\label{eq:expect} \E{(S-\tau_0){\text{\Large $\mathfrak 1$}}(E)} \leq \E{L} \E{\xi_1} + \E{L} \E{G_1} \E{\ell_{1,1}} + \E{\zeta} \lesssim n_{[k/2]}^2 + n_{[k/2]} + N \lesssim N. \end{align} Let $A=\{Y_{\tau_0} \in \mathcal{T}_0\}$. Then $\pr{A^c} = o(1)$ as $k\to \infty$, because at time $\tau_0$ the random walk $Y$ is stationary, since until this time it evolves independently of $X$, and also the stationary probability of the tree is $1-o(1)$. It then follows \begin{align}\label{eq:one} \pr{\tau>t, A_L,E} \leq \pr{\tau>t, A_L, E,A} +o(1). \end{align} Let $\tau_1$ be the time it takes to hit the line $[0,n_k]$ starting from $x$. Let $\tau_2$ be the time it takes to hit $0$ starting from $X_{\tau_1}$. Then clearly $\tau_2$ is smaller than the time it takes to hit $0$ starting from $n_k$. 
Thus setting $B= \{\tau_0<\enk{\tau_0} + \gamma \sqrt{\vnk{\tau_0}}/2\}$ we obtain \begin{align}\label{eq:two} \nonumber\prstart{B^c}{x} &\leq \prstart{\tau_2>\enk{\tau_0} + \frac{\gamma \sqrt{\vnk{\tau_0}}}{4}}{x} +\prstart{\tau_1\geq\frac{\gamma\sqrt{\vnk{\tau_0}}}{4}}{x} \\ &\leq \prstart{\tau_0>\enk{\tau_0}+ \frac{\gamma\sqrt{\vnk{\tau_0}}}{4}}{n_k} + o(1) \leq \frac{16}{\gamma^2} + o(1), \end{align} where the second inequality follows from Markov's inequality and the fact that $\estart{\tau_1}{x} \leq N$ for all $x$ and the third one follows from Chebyshev's inequality. Ignoring the $o(1)$ terms we get \begin{align}\label{eq:three} \pr{\tau>t,A_L,E,A} \leq \pr{\tau>t,A_L,E,A,B} + \frac{16}{\gamma^2}. \end{align} We finally define the event $F=\{S-\tau_0 >\gamma\sqrt{\vnk{\tau_0}}/2\}$. We note that on the events $E$ and $A$ the two walks $X$ and $Y$ must have coalesced by time $S$. (Indeed, if $Y$ stays in $\mathcal{T}_0$ during the time interval $[\tau_0,\tau^*_{\partial \mathcal{T}_0}]$, then they must have coalesced. If $Y$ leaves the interval, since $X$ is always in $[0,n_{[k/2]}]$ until time $S$ on the event $E$, then coalescence must have happened again.) Therefore \[ B\cap F^c \subseteq \{\tau<t\}. \] This in turn implies that for a positive constant $c_1$ \begin{align}\label{eq:five} \pr{\tau>t,A_L,E,A,B} = \pr{\tau>t,A_L,E,A,B,F} \leq \pr{E,F} \leq \frac{c_1}{\gamma\sqrt{k}}, \end{align} where the last inequality follows by applying Markov's inequality to $(S-\tau_0){\text{\Large $\mathfrak 1$}}(E)$ and using~\eqref{eq:expect} and Lemma~\ref{lem:concentration}. Plugging~\eqref{eq:one},~\eqref{eq:two},~\eqref{eq:three} and~\eqref{eq:five} into~\eqref{eq:bigeq} gives as $k\to \infty$ \begin{align*} \pr{\tau>t} \leq \frac{16}{\gamma^2} + o(1). 
\end{align*} Hence choosing $\gamma$ sufficiently large depending on $\varepsilon$ we can make $\pr{\tau>t}<\varepsilon$ and this shows that for a positive constant $c_2$ \[ t_{\mathrm{mix}}(\varepsilon) \leq \enk{\tau_0} + \gamma_\varepsilon \sqrt{\vnk{\tau_0}} \leq 6Nk + c_2N\sqrt{k}, \] where the last inequality follows by Lemma~\ref{lem:concentration}. Combining this with the lower bound on $t_{\mathrm{mix}}(\varepsilon)$ proved in the previous section shows that there exists $c_\varepsilon>0$ such that for all $0<\varepsilon<1$ \[ t_{\mathrm{mix}}(\varepsilon) - t_{\mathrm{mix}}(1-\varepsilon) < c_\varepsilon N\sqrt{k} \] and this completes the proof of the theorem. \end{proof} \section{Relaxation time}\label{sec:relaxation} In this section we give the proof of Theorem~\ref{thm:relax}. We start by stating standard results for random walks on the interval $[0,n]$ and the binary tree. We include their proofs here for the sake of completeness. A detailed analysis of relaxation time for birth and death chains can be found in~\cite{ChenSaloff}. \begin{claim}\label{cl:srwline} Let $f$ be a function defined on $[0,n]$ satisfying $f(0)= 0$. Then \[ \sum_{k=1}^{n} f(k)^2 \leq n^2 \sum_{\ell=1}^{n}(f(\ell) - f(\ell-1))^2. \] \end{claim} \begin{proof}[{\bf Proof}] We set $\beta_\ell^{-2}= (n-\ell)$ for all $\ell \in [0,n]$. Then by Cauchy Schwarz we get \begin{align*} f(k)^2 = \left(\sum_{\ell=1}^{k}(f(\ell) - f(\ell-1)) \right)^2 \leq \sum_{\ell=1}^{k} \beta_\ell^2(f(\ell) - f(\ell-1))^2 \sum_{\ell=1}^{k} \beta_\ell^{-2}. \end{align*} Since $\sum_{\ell=1}^{k} \beta_\ell^{-2} \leq n^2$ for all $k \in [0,n]$ we get summing over all $k$ and interchanging sums \begin{align*} \sum_{k=1}^{n} f(k)^2 \leq n^2 \sum_{\ell=1}^{n} (f(\ell) - f(\ell-1))^2 \end{align*} and this completes the proof of the claim. \end{proof} \begin{claim}\label{cl:binarytree} Let $\mathcal{T}$ be a binary tree on $m$ vertices with root $o$. 
Then there exists a universal constant~$c$ such that for all functions~$g$ defined on~$\mathcal{T}$ with $g(o)=0$ we have \[ \|g\|^2 \leq cm\mathcal{E}(g,g), \] where $\|g\|^2 = \sum_{x} \pi(x) g(x)^2$ and $\mathcal{E}$ is the Dirichlet form $\mathcal{E}(f,g) = \langle f,(I-P)g\rangle$. Here $\pi$ and $P$ are the stationary distribution and transition matrix of a simple random walk on $\mathcal{T}$ respectively. \end{claim} \begin{proof}[{\bf Proof}] Since the stationary measure of a simple random walk on the tree satisfies $\pi(x) \asymp m^{-1}$ for all $x$ and $P(x,y) \asymp c$ for all $x\sim y$, we will omit them from the expressions. Let the depth of the tree $\mathcal{T}$ be $n = \lceil\log_2 m\rceil$. Let $x_k$ be a vertex in $\mathcal{T}$ of level $k$. Then there exists a unique path $x_0=o, x_1, \ldots, x_k$ going from the root to $x_k$. We can now write \begin{align*} g(x_k)^2 &= \left( \sum_{j=1}^{k} (g(x_{j-1})-g(x_j))\right)^2 = \left( \sum_{j=1}^{k} (g(x_{j-1})-g(x_j))2^{j/2} \frac{1}{2^{j/2}}\right)^2 \\ &\lesssim \sum_{j=1}^{k} 2^j(g(x_{j-1})-g(x_j))^2, \end{align*} where the last inequality follows by Cauchy Schwarz. Let $L_k$ denote all the vertices of the tree at distance $k$ from the root. For any $x \in \mathcal{T}$ we write \[ G(x) = \sum_{j=1}^{|x|} 2^j(g(y_j) - g(y_{j-1}))^2, \] where $|x|$ denotes the level of $x$ and $y_0=o, y_1,\ldots, y_{|x|}=x$ is the unique path joining $x$ to the root. By interchanging sums we obtain \begin{align}\label{eq:bigsum} \sum_{x \in \mathcal{T}_0} g(x)^2 = \sum_{k=1}^{n} \sum_{x\in L_k} g(x)^2 \lesssim \sum_k \sum_{x\in L_k} G(x). \end{align} Let $e$ be an edge of $\mathcal{T}$. We write $e=\langle e^-,e^{+}\rangle$ where $d(e^-,0)<d(e^+,0)$. For every edge $e$ we let $N(e)$ be the number of times the term $(g(e^-) - g(e^+))^2$ appears in the sum appearing on the right hand side of~\eqref{eq:bigsum}. 
We then get \begin{align*} \sum_{x \in \mathcal{T}} g(x)^2 \lesssim \sum_{e\in \mathcal{T}} 2^{|e^-|}N(e)(g(e^-) -g(e^+))^2. \end{align*} Notice that $N(e)$ is the number of paths in $\mathcal{T}$ joining the root to the leaves and pass through $e$. Hence since the tree is of depth $n$ we get that $N(e) = 2^{n-|e^-|-1}$. Therefore we deduce \begin{align*} \sum_{x \in \mathcal{T}} g(x)^2 \lesssim \sum_{e\in \mathcal{T}} 2^{|e^-|} 2^{n-|e^-| -1} (g(e^-) - g(e^+))^2 = 2^n\sum_{e\in \mathcal{T}} (g(e^-) - g(e^+))^2 = m \mathcal{E}(g,g) \end{align*} and this completes the proof of the claim. \end{proof} \begin{proof}[{\bf Proof of Theorem~\ref{thm:relax}}] To prove the lower bound on $t_{\mathrm{rel}}$ we use the bottleneck ratio as in~\cite[Theorem~13.14]{LevPerWil}. By setting $S = \mathcal{T}_0$, we see that \[ \Phi_* \lesssim \frac{1}{N}, \] and hence $t_{\mathrm{rel}} \gtrsim N$. It remains to prove a matching upper bound. We do that by using the variational formula for the spectral gap, which gives \begin{align*} t_{\mathrm{rel}} = \sup_{\vars{f}{\pi} \neq 0} \frac{\vars{f}{\pi}}{\mathcal{E}(f,f)}. \end{align*} Notice that by subtracting from $f$ its value at $0$ the ratio above remains unchanged. So we restrict to functions $f$ with $f(0) = 0$. It suffices to show that for any such $f$ \begin{align}\label{eq:relgoal} \vars{f}{\pi} \lesssim N\sqrt{k} \mathcal{E}(f,f). \end{align} Let $f$ be defined on the tree $\mathcal{T}$ with $f(0) = 0$. Then we can write $f= g + h$, where $g$ is zero on $\mathcal{T}_0^c$ and $h$ is zero on $\mathcal{T}_0$ and $g(0) = h(0) = 0$. We then have \begin{align}\label{eq:new} \vars{f}{\pi} \leq \|g+ h\|^2 = \|g\|^2 + \|h\|^2, \end{align} since by the definition of the functions $g$ and $h$ it follows that $\langle g,h\rangle_{\pi} =0$. Similarly we also get \[ \mathcal{E}(f,f) = \mathcal{E}(g,g) + \mathcal{E}(h,h). \] \begin{claim}\label{cl:restoftree} There exists a positive constant $c$ such that \[ \|h\|^2 \leq cN \mathcal{E}(h,h). 
\] \end{claim} \begin{proof}[{\bf Proof}] Using Claim~\ref{cl:binarytree} for the function $(h(x) - h(n_j))$ restricted to $x \in \mathcal{T}_j$ we obtain \begin{align}\label{eq:restrj} \sum_{v \in \mathcal{T}_j} (h(v) - h(n_j))^2 \lesssim \frac{N}{n_j} \sum_{\substack{u,v \in \mathcal{T}_j \\ u\sim v}} (h(u) - h(v))^2. \end{align} Using that $(a+b)^2\leq 2a^2 + 2b^2$ and~\eqref{eq:restrj} we get \begin{align*} \sum_{v\in \mathcal{T}_j} h(v)^2 \leq 2\sum_{v\in \mathcal{T}_j} (h(v) - h(n_j))^2 + 2 \frac{N}{n_j} h(n_j)^2 \lesssim \frac{N}{n_j} \left(\mathcal{E}(h,h) + h(n_j)^2\right). \end{align*} From the above inequality it immediately follows \begin{align*} \sum_{v\notin \mathcal{T}_0} h(v)^2 \leq \sum_{v \in [0,n_k]} h(v)^2 + \sum_{j=[k/2]}^{k} \sum_{v\in \mathcal{T}_j} h(v)^2 \lesssim \sum_{v\in [0,n_k]} h(v)^2 + \sum_{j=[k/2]}^{k} \frac{N}{n_j} h(n_j)^2 + N\mathcal{E}(h,h) \end{align*} and hence it suffices to show \begin{align}\label{eq:goalh} \sum_{v\in [0,n_k]} h(v)^2 + \sum_{j=[k/2]}^{k} \frac{N}{n_j} h(n_j)^2 \lesssim N\sum_{\ell=1}^{n_k} (h(\ell) - h(\ell-1))^2. \end{align} Claim~\ref{cl:srwline} gives \begin{align*} \sum_{v\in [0,n_k]} h(v)^2 \leq n_k^2 \sum_{\ell=1}^{n_k} (h(\ell) - h(\ell-1))^2 \leq N \sum_{\ell=1}^{n_k} (h(\ell) - h(\ell-1))^2, \end{align*} since $N=n_k^3$, and hence it suffices to show \begin{align}\label{eq:finalh} \sum_{j=[k/2]}^{k} \frac{h(n_j)^2 }{n_j} \lesssim \sum_{\ell=1}^{n_k} (h(\ell) - h(\ell-1))^2. 
\end{align} Setting $\Delta_\ell = h(\ell) - h(\ell-1)$ and using Cauchy Schwarz \begin{align*} h(n_j)^2 &\leq 2h(n_{j-1})^2 + 2(h(n_j) - h(n_{j-1}))^2 = 2 \left(\sum_{\ell=1}^{n_{j-1}} \Delta_\ell\right)^2 + 2\left(\sum_{\ell=n_{j-1}+1}^{n_j} \Delta_\ell\right)^2 \\ &\leq 2n_{j-1} \sum_{\ell=1}^{n_{j-1}} \Delta_\ell^2 + (n_j-n_{j-1}) \sum_{\ell=n_{j-1}+1}^{n_j} \Delta_\ell^2, \end{align*} and hence dividing by $n_j$ we get \begin{align*} \sum_{j=[k/2]}^{k} \frac{h(n_j)^2}{n_j} \leq 2\sum_{j=[k/2]}^{k} \frac{n_{j-1}}{n_j} \sum_{\ell=1}^{n_{j-1}} \Delta_\ell^2 + 2 \sum_{\ell=1}^{n_k} \Delta_\ell^2. \end{align*} If we fix $\ell \in [0,n_k]$, then the coefficient of $\Delta_\ell^2$ in the first sum appearing on the right hand side of the above inequality is bounded from above by $\sum_{j=1}^{k} n_{j-1}/n_j <\infty$, and hence we conclude \[ \sum_{j=[k/2]}^{k} \frac{h(n_j)^2}{n_j}\lesssim \sum_{\ell=1}^{n_k} \Delta_\ell^2 \] and this finishes the proof of the claim. \end{proof} Since $g$ satisfies the assumptions of Claim~\ref{cl:binarytree} it follows that \[ \|g\|^2\leq cN\mathcal{E}(g,g). \] This together with Claim~\ref{cl:restoftree} and~\eqref{eq:new} proves~\eqref{eq:relgoal} and completes the proof of the theorem. \end{proof} \end{document}
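Addendum (outside the paper source): the discrete Poincaré inequality of Claim~\ref{cl:srwline}, $\sum_{k=1}^{n} f(k)^2 \leq n^2 \sum_{\ell=1}^{n}(f(\ell)-f(\ell-1))^2$ for $f(0)=0$, is easy to stress-test numerically. The randomized check below is an illustration, not a proof:

```python
import random

random.seed(1)
for n in (3, 10, 40):
    for _ in range(500):
        # Random f on {0, ..., n} with f(0) = 0.
        f = [0.0] + [random.uniform(-5.0, 5.0) for _ in range(n)]
        lhs = sum(f[k] ** 2 for k in range(1, n + 1))
        rhs = n * n * sum((f[l] - f[l - 1]) ** 2 for l in range(1, n + 1))
        # Claim cl:srwline predicts lhs <= rhs (tolerance for rounding).
        assert lhs <= rhs + 1e-9
```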
Graduate Logic Seminar
From UW-Math Wiki

[https://hilbert.math.wisc.edu/wiki/images/Cat-slides-1.pdf Link to slides]

=== April 27 4PM - Alice Vidrine ===

Title: Categorical logic for realizability, part II

Abstract: Realizability is an approach to semantics for non-classical logic that interprets propositions by sets of abstract computational data. One modern approach to realizability makes heavy use of the notion of a topos, a type of category that behaves like a universe of non-standard sets. In preparation for introducing realizability toposes, the present talk will be a brisk introduction to the notion of a topos, with an emphasis on their logical aspects. In particular, we will look at the notion of a subobject classifier and the internal logic to which it gives rise.

==Previous Years==

The schedule of talks from past semesters can be found [[Graduate Logic Seminar, previous semesters|here]].

The Graduate Logic Seminar is an informal space where graduate students and professors present topics related to logic which are not necessarily original or completed work. This is a space focused principally on practicing presentation skills or learning materials that are not usually presented in a class.

Where: on line (ask for code).

Organizers: Jun Le Goh

The talk schedule is arranged at the beginning of each semester. If you would like to participate, please contact one of the organizers.
Sign up for the graduate logic seminar mailing list: [email protected]

Spring 2021 - Tentative schedule

February 16 3:30PM - Short talk by Sarah Reitzes (University of Chicago)

Title: Reduction games over $\mathrm{RCA}_0$

Abstract: In this talk, I will discuss joint work with Damir D. Dzhafarov and Denis R. Hirschfeldt. Our work centers on the characterization of problems P and Q such that P $\leq_{\omega}$ Q, as well as problems P and Q such that $\mathrm{RCA}_0 \vdash$ Q $\to$ P, in terms of winning strategies in certain games. These characterizations were originally introduced by Hirschfeldt and Jockusch. I will discuss extensions and generalizations of these characterizations, including a certain notion of compactness that allows us, for strategies satisfying particular conditions, to bound the number of moves it takes to win. This bound is independent of the instance of the problem P being considered. This allows us to develop the idea of Weihrauch and generalized Weihrauch reduction over some base theory. Here, we will focus on the base theory $\mathrm{RCA}_0$. In this talk, I will explore these notions of reduction among various principles, including bounding and induction principles.

March 23 4:15PM - Steffen Lempp

Title: Degree structures and their finite substructures

Abstract: Many problems in mathematics can be viewed as being coded by sets of natural numbers (as indices). One can then define the relative computability of sets of natural numbers in various ways, each leading to a precise notion of "degree" of a problem (or set). In each case, these degrees form partial orders, which can be studied as algebraic structures. The study of their finite substructures leads to a better understanding of the partial order as a whole.
March 30 4PM - Alice Vidrine

Title: Categorical logic for realizability, part I: Categories and the Yoneda Lemma

Abstract: An interesting strand of modern research on realizability--a semantics for non-classical logic based on a notion of computation--uses the language of toposes and Grothendieck fibrations to study mathematical universes whose internal notion of truth is similarly structured by computation. The purpose of this talk is to establish the basic notions of category theory required to understand the tools of categorical logic developed in the sequel, with the end goal of understanding the realizability toposes developed by Hyland, Johnstone, and Pitts. The talk will cover the definitions of category, functor, natural transformation, adjunctions, and limits/colimits, with a heavy emphasis on the ubiquitous notion of representability.

Link to slides

Previous Years

The schedule of talks from past semesters can be found here.

Retrieved from "https://hilbert.math.wisc.edu/wiki/index.php?title=Graduate_Logic_Seminar&oldid=21163"
The product of integers 240 and $k$ is a perfect cube. What is the smallest possible positive value of $k$? $240=2^4\cdot3\cdot5$. For $240k$ to be a perfect cube, every prime exponent in its factorization must be a multiple of 3, so $k$ must supply at least two more factors of 2, two more of 3, and two more of 5. Hence the smallest positive value is $k=2^2\cdot3^2\cdot5^2=\boxed{900}$, which gives $240\cdot900=216000=60^3$.
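A brute-force search (an illustration, not part of the solution) confirms that 900 is the least positive $k$:

```python
def is_perfect_cube(m):
    # Round the floating-point cube root and check nearby integers.
    r = round(m ** (1 / 3))
    return any((r + d) ** 3 == m for d in (-1, 0, 1))

# Smallest positive k making 240*k a perfect cube.
k = next(k for k in range(1, 1001) if is_perfect_cube(240 * k))
assert k == 900 and 240 * k == 60 ** 3
```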
Mordell curve

In algebra, a Mordell curve is an elliptic curve of the form y² = x³ + n, where n is a fixed non-zero integer.[1]

These curves were closely studied by Louis Mordell,[2] from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points (x, y). In other words, the differences of perfect squares and perfect cubes tend to infinity. The question of how fast was dealt with in principle by Baker's method. Hypothetically this issue is dealt with by Marshall Hall's conjecture.

Properties

If (x, y) is an integer point on a Mordell curve, then so is (x, −y).

There are certain values of n for which the corresponding Mordell curve has no integer solutions;[1] these values are:

6, 7, 11, 13, 14, 20, 21, 23, 29, 32, 34, 39, 42, ... (sequence A054504 in the OEIS) for positive n, and
−3, −5, −6, −9, −10, −12, −14, −16, −17, −21, −22, ... (sequence A081121 in the OEIS) for negative n.

The specific case where n = −2 is also known as Fermat's Sandwich Theorem.[3]

List of solutions

The following is a list of solutions to the Mordell curve y² = x³ + n for |n| ≤ 25. Only solutions with y ≥ 0 are shown.

 n | (x, y)
 1 | (−1, 0), (0, 1), (2, 3)
 2 | (−1, 1)
 3 | (1, 2)
 4 | (0, 2)
 5 | (−1, 2)
 6 | –
 7 | –
 8 | (−2, 0), (1, 3), (2, 4), (46, 312)
 9 | (−2, 1), (0, 3), (3, 6), (6, 15), (40, 253)
10 | (−1, 3)
11 | –
12 | (−2, 2), (13, 47)
13 | –
14 | –
15 | (1, 4), (109, 1138)
16 | (0, 4)
17 | (−1, 4), (−2, 3), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661)
18 | (7, 19)
19 | (5, 12)
20 | –
21 | –
22 | (3, 7)
23 | –
24 | (−2, 4), (1, 5), (10, 32), (8158, 736844)
25 | (0, 5)

  n | (x, y)
 −1 | (1, 0)
 −2 | (3, 5)
 −3 | –
 −4 | (5, 11), (2, 2)
 −5 | –
 −6 | –
 −7 | (2, 1), (32, 181)
 −8 | (2, 0)
 −9 | –
−10 | –
−11 | (3, 4), (15, 58)
−12 | –
−13 | (17, 70)
−14 | –
−15 | (4, 7)
−16 | –
−17 | –
−18 | (3, 3)
−19 | (7, 18)
−20 | (6, 14)
−21 | –
−22 | –
−23 | (3, 2)
−24 | –
−25 | (5, 10)

In 1998, J. Gebel, A. Pethö, H. G. Zimmer found all integer points for 0 < |n| ≤ 10⁴.[4][5] In 2015, M. A. Bennett and A.
Ghadermarzi computed integer points for 0 < |n| ≤ 10⁷.[6]

References

1. Weisstein, Eric W. "Mordell Curve". MathWorld.
2. Louis Mordell (1969). Diophantine Equations.
3. Weisstein, Eric W. "Fermat's Sandwich Theorem". MathWorld. Retrieved 24 March 2022.
4. Gebel, J.; Pethö, A.; Zimmer, H. G. (1998). "On Mordell's equation". Compositio Mathematica. 110 (3): 335–367. doi:10.1023/A:1000281602647.
5. Sequences OEIS: A081119 and OEIS: A081120.
6. M. A. Bennett, A. Ghadermarzi (2015). "Mordell's equation: a classical approach" (PDF). LMS Journal of Computation and Mathematics. 18: 633–646. arXiv:1311.7077. doi:10.1112/S1461157015000182.

External links

• J. Gebel, Data on Mordell's curves for −10000 ≤ n ≤ 10000
• M. Bennett, Data on Mordell curves for −10⁷ ≤ n ≤ 10⁷
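The table entries above are easy to spot-check mechanically. The snippet below (illustrative only; the point lists are copied from the table, and it verifies membership, not completeness) checks a few of them, including the large solutions for n = 17 and n = 24:

```python
# Spot-check listed integer points on the Mordell curve y^2 = x^3 + n.
listed = {
    8:   [(-2, 0), (1, 3), (2, 4), (46, 312)],
    17:  [(-1, 4), (-2, 3), (2, 5), (4, 9), (8, 23),
          (43, 282), (52, 375), (5234, 378661)],
    24:  [(-2, 4), (1, 5), (10, 32), (8158, 736844)],
    -2:  [(3, 5)],
    -7:  [(2, 1), (32, 181)],
    -11: [(3, 4), (15, 58)],
}
for n, points in listed.items():
    for x, y in points:
        assert y * y == x ** 3 + n, (n, x, y)
```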
Get canonical equation of ellipse

We have an ellipse with a circle in it. The circle passes through the two vertices and through the ellipse's center. Its diameter equals 7. We also have an equilateral triangle whose vertices are at the ellipse's foci and at the minor vertex (the triangle's height is equal to the semi-minor axis). How can we define the canonical equation of the ellipse in this case?

euclidean-geometry

ivkremer

The sum of the distance between one point of the ellipse and the two vertices is constant. So the major axis length is $2d$, $d$ being the distance between the two vertices of the ellipse. Let's call $k$ half the minor axis of the ellipse.

We deduce from the diameter of the circle that: $k^2+d^2=7^2$, that is $d^2=7^2-k^2$

$k^2+(\frac{d}{2})^2=d^2$, that is $k^2=\frac{3d^2}{4}$

Thus, $d^2=28$

The equation of the ellipse is then $\dfrac{X^2}{28}+\dfrac{Y^2}{21}=1$

Martigan

$\begingroup$ Thank you very much. But am I right that you denoted the semi-major axis as d? I'm asking that because you said major axis length is 2d, but d is the distance between the two vertices of the ellipse. I think you meant the distance between the vertex and the center, didn't you? $\endgroup$ – ivkremer Oct 20 '14 at 12:19

$\begingroup$ And the second question is how did you know that k^2 + (d/2)^2 = d^2? $\endgroup$ – ivkremer Oct 20 '14 at 12:21

$\begingroup$ Both the distance between the two vertices and semi-major axis are of length $d$. $\endgroup$ – Martigan Oct 20 '14 at 13:41

$\begingroup$ @Kremchik Look at your drawing. The triangle $OF_{2}V_{2}$ (O being the origin and V2 the second vertice (vertical one) is a right angle triangle, with size length $d$ ($F_{2}V_{2}$) and $d/2$ ($OF_{2}$), since $O$ is the middle of $F_{1}F_{2}$.
$\endgroup$ – Martigan Oct 20 '14 at 14:03

Starting with $F_{2}=\left\langle c,0\right\rangle $ and denoting the intersection of the ellipse, circle and $x$-axis by $V$ we find: $$\left\Vert V-F_{1}\right\Vert +\left\Vert V-F_{2}\right\Vert =4c$$ and consequently: $$E:=\left\{ \left\langle x,y\right\rangle :\left\Vert \left\langle x,y\right\rangle -\left\langle c,0\right\rangle \right\Vert +\left\Vert \left\langle x,y\right\rangle -\left\langle -c,0\right\rangle \right\Vert =4c\right\} =\left\{ \left\langle x,y\right\rangle :3x^{2}+4y^{2}=12c^{2}\right\}$$

drhab
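A numeric sanity check of the accepted answer (illustrative only; reading "circle through the two vertices and the center" as a circle through $(a,0)$, $(0,b)$ and the origin is an interpretation consistent with the computation above):

```python
from math import sqrt, isclose

a2, b2 = 28, 21                 # proposed ellipse: x^2/28 + y^2/21 = 1
a, b = sqrt(a2), sqrt(b2)
c = sqrt(a2 - b2)               # distance from the center to each focus

# Equilateral triangle with vertices at the foci (+-c, 0) and at (0, b):
side = 2 * c
assert isclose(sqrt(c * c + b * b), side)  # focus-to-(0,b) equals the base
assert isclose(b, sqrt(3) / 2 * side)      # height equals the semi-minor axis

# Circle through (a, 0), (0, b) and the center: the angle at the origin is
# right, so the circle's diameter is the hypotenuse, which should be 7.
assert isclose(sqrt(a * a + b * b), 7)
```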
\begin{document} \title{Koppelman formulas on the $A_1$-singularity} \date{\today} \author{Richard L\"ark\"ang} \address{Richard L\"ark\"ang, Department of Mathematics, University of Wuppertal, Gau{\ss}str. 20, 42119 Wuppertal, Germany, and Department of Mathematics, Chalmers University of Technology and the University of Gothenburg, 412 96 G\"oteborg, Sweden.} \email{[email protected]} \author{Jean Ruppenthal} \address{J. Ruppenthal, Department of Mathematics, University of Wuppertal, Gau{\ss}str. 20, 42119 Wuppertal, Germany} \email{[email protected]} \subjclass{32A26, 32A27, 32B15, 32C30} \keywords{} \begin{abstract} In the present paper, we study the regularity of the Andersson--Samuelsson Koppelman integral operator on the $A_1$-singularity. Particularly, we prove $L^p$- and $C^0$-estimates. As applications, we obtain $L^p$-homotopy formulas for the $\bar\partial$-equation on the $A_1$-singularity, and we prove that the $\mathcal{A}$-forms introduced by Andersson--Samuelsson are continuous on the $A_1$-singularity. \end{abstract} \maketitle \section{Introduction} In this article, we study the local $\bar\partial$-equation on singular varieties. In $\C^n$, it is classical that the $\bar\partial$-equation $\bar\partial f = g$, where $g$ is a $\bar\partial$-closed $(0,q)$-form, can be solved locally for example if $g$ is in $C^\infty$, $L^p$ or $g$ is a current, where the solution $f$ is of the same class (or in certain cases, also with improved regularity). To prove the existence of solutions which are smooth forms or currents, or to obtain $L^p$-estimates for smooth solutions, one can use Koppelman formulas, see for example, \cite{Range}*{LiMi}. 
On singular varieties, it is no longer necessarily the case that the $\bar\partial$-equation is locally solvable over these classes of forms, as for example on the variety $\{ z_1^4 + z_2^5 + z_2^4 z_1 = 0 \}$, there exist smooth $\bar\partial$-closed forms which do not have smooth $\bar\partial$-potentials, see \cite{RuDipl}*{Beispiel~1.3.4}. Solvability of the $\bar\partial$-equation on singular varieties has been studied in various articles in recent years, for example describing in certain senses explicitly the obstructions to solving the $\bar\partial$-equation in $L^2$, see \cites{FOV,OV2,RuDuke}. Among these and other results, one can find examples when the $\bar\partial$-equation is not always locally solvable in $L^p$, for example when $p = 1$ or $p = 2$. On the other hand, in \cite{AS2}, Andersson and Samuelsson define on an arbitrary pure dimensional singular variety $X$ sheaves $\mathcal{A}^X_q$ of $(0,q)$-currents, such that the $\bar\partial$-equation is solvable in $\mathcal{A}$, and the solution is given by Koppelman formulas, i.e., there exists operators $\mathcal{K} : \mathcal{A}_{q} \to \mathcal{A}_{q-1}$ such that if $\varphi \in \mathcal{A}$, then \begin{equation} \label{eqkoppel} \varphi(z) = \bar\partial \mathcal{K}\varphi(z) + \mathcal{K} (\bar\partial \varphi)(z), \end{equation} where the operators $\mathcal{K}$ are given as \begin{equation*} \mathcal{K}\varphi(z) = \int K(\zeta,z) \wedge \varphi(\zeta), \end{equation*} for some integral kernels $K(\zeta,z)$. The sheaf $\mathcal{A}_q$ coincides with the sheaf of smooth $(0,q)$-forms on $X^*$, where $X^*$ is the regular part of $X$. For the cases when the $\bar\partial$-equation is not solvable for smooth forms, the $\mathcal{A}$-sheaves must necessarily have singularities along $X_{\rm sing}$, but from the definition of the $\mathcal{A}$-sheaves, it is not very apparent how the singularities of the $\mathcal{A}$-sheaves are in general. 
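For intuition (an addendum, not part of the paper): the $A_1$-singularity is the image of $\C^2$ under $(s,t)\mapsto(s^2,t^2,st)$, a map which is 2-to-1 away from the origin and exhibits the variety as the quotient $\C^2/\{\pm 1\}$. A quick numerical check that this parametrization lands on $\{\zeta_1\zeta_2-\zeta_3^2=0\}$ (the tolerance only absorbs floating-point rounding):

```python
import random

random.seed(0)
for _ in range(1000):
    s = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    t = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    z1, z2, z3 = s * s, t * t, s * t
    # Defining equation of the A_1-singularity:
    assert abs(z1 * z2 - z3 * z3) < 1e-12
```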
In order to take better advantage of the results in \cite{AS2}, one would like to know more precisely what the singularities of the $\mathcal{A}$-sheaves look like. In particular, it would be interesting to know whether for certain varieties, the $\mathcal{A}$-sheaves are in fact smooth, or, say, $C^k$ also over $X_{\rm sing}$.

In this article, we will consider solvability of the $\bar\partial$-equation on the so-called $A_1$-singularity, which is defined by \begin{equation*} \{ \zeta_1 \zeta_2 - \zeta_3^2 = 0 \} \subseteq {\mathbb C}^3. \end{equation*} Our main approach will be to study mapping properties of the Koppelman formulas for the $\bar\partial$-equation from \cite{AS2}. The motivation for us to do this is two-fold: First of all, as in the smooth case, using integral formulas for studying the $\bar\partial$-equation has the advantage that it can be used to study the $\bar\partial$-equation over various function spaces, like forms which are $C^k$, $C^\infty$, H\"older, $L^p$ or currents. Various results about solvability of the $\bar\partial$-equation on the $A_1$-singularity are contained in earlier articles, as will be elaborated on below, and hence, one would not expect to obtain many new results for this variety. But thanks to the simplicity of the $A_1$-singularity, it serves as a good testing ground for the method. However, since the Koppelman formulas are defined for arbitrary pure dimensional varieties, there is hope to extend the methods used here to more general varieties, and thus obtain new results on such varieties about the solvability of the $\bar\partial$-operator over various function spaces. In particular, it seems likely that, with some elaboration of the methods here, one should be able to extend the results also to all rational double points. The underlying idea and hope is that integral formulas -- as on manifolds -- will open the door to further explorations. Let us just mention e.g.
that it is usually easy to show that an integral operator is compact. So, one gets compact solution operators for the $\bar\partial$-equation. From that one can also deduce compactness of the $\bar\partial$-Neumann operator.

A second motivation is the following: the $\mathcal{A}$-sheaves in \cite{AS2} are defined by starting with smooth forms, applying Koppelman formulas, multiplying with smooth forms, applying Koppelman formulas, and iterating this procedure a finite number of times. In the particular example of the $A_1$-singularity, we obtain for example the new result that the $\mathcal{A}$-sheaves are contained in the sheaves of forms with continuous coefficients, see Corollary~\ref{cor:asheaves} below.

We will now describe the main results in this article: From now on, we let $X$ be the variety given by \begin{equation*} X = \{ \zeta \in B_1(0) \mid \zeta_1 \zeta_2 - \zeta_3^2 = 0 \} \subseteq {\mathbb C}^3, \end{equation*} where $B_r(0)$ is the ball of radius $r$ in ${\mathbb C}^3$. In addition, we let \begin{equation*} X' = \{ \zeta \in B_{1+\epsilon}(0) \mid \zeta_1 \zeta_2 - \zeta_3^2 = 0 \} \subseteq {\mathbb C}^3, \end{equation*} where $\epsilon > 0$. The input to the $\bar\partial$-equation will live on $X'$, while the solutions are in general only defined on $X$. For precise definitions of what we mean by $L^p$-forms and $C^0$-forms on $X'$ and $X$, see Section \ref{sec:lp-forms}.

\begin{thm}\label{thm:main1} Let $\mathcal{K}$ be the integral operator from \cite{AS2} on $X'$, as here defined in \eqref{eq:AS1}, and let $\frac{4}{3} < p\leq \infty$ and $q \in \{1,2\}$. Then:

(i) $\mathcal{K}$ gives a bounded linear operator from $L^p_{0,q}(X')$ to $L^p_{0,q-1}(X)$.

(ii) $\mathcal{K}$ gives a continuous linear operator from $L^\infty_{0,q}(X')$ to $C^0_{0,q-1}(X)$. \end{thm}

In particular, one obtains the following result about the $\mathcal{A}$-sheaves from \cite{AS2}.
\begin{cor} \label{cor:asheaves} Let, as in \cite{AS2}, $\mathcal{A}^X_{q}$ be the sheaf of currents which can be written as finite sums of the form \begin{equation*} \xi_\nu\wedge (\mathcal{K}_\nu(\dots \xi_2 \wedge \mathcal{K}_2(\xi_1\wedge \mathcal{K}_1(\xi_0)))), \end{equation*} where each $\mathcal{K}_i$ is an integral operator as in Theorem \ref{thm:main1}, and $\xi_i$ are smooth forms on $X'$. Then \begin{equation*} \mathcal{A}^X_{q} \subseteq C^0_{0,q}(X). \end{equation*} \end{cor}

Although the Koppelman operator $\mathcal{K}$ maps $L^p_{0,q}(X')$ to $L^p_{0,q-1}(X)$ for $p > 4/3$, this does not necessarily imply that the $\bar\partial$-equation is locally solvable in $L^p$ for $p > 4/3$, since it is not necessarily the case that \eqref{eqkoppel} holds for $\varphi \in L^p$. However, in order to describe when the Koppelman formula \eqref{eqkoppel} does indeed hold, we first need to discuss various definitions of the $\bar\partial$-operator on $L^p$-forms on singular varieties.

If we let $\bar\partial_{sm}$ be the $\bar\partial$-operator on smooth $(0,q)$-forms with support on $X^*=X\setminus\{0\}$, i.e., away from the singularity, then this operator has various extensions as a closed operator in $L^p_{0,q}(X)$. One extension of the $\bar\partial_{sm}$-operator is the maximal closed extension, i.e., the weak $\bar\partial$-operator $\bar\partial_w^{(p)}$ in the sense of currents, so if $g \in L^p_{0,q}(X)$, then $g \in {\rm Dom\, } \bar\partial_w^{(p)}$ if $\bar\partial g \in L^p_{0,q+1}(X)$ in the sense of distributions on $X$. When it is clear from the context, we will drop the superscript $(p)$ in $\bar\partial^{(p)}_w$, and we will for example write $g \in {\rm Dom\, } \bar\partial_w \subset L^p_{0,q}$. For the $\bar\partial_w$-operator, we obtain the following result about the Koppelman formula \eqref{eqkoppel}.

\begin{thm}\label{thm:main3} Let $\mathcal{K}$ be the integral operator from Theorem \ref{thm:main1}.
Let $\varphi \in {\rm Dom\, } \bar\partial_w \subseteq L^p_{0,q}(X')$, where $2 \leq p \leq \infty$ and $q \in \{1,2\}$. Then \begin{eqnarray} \label{eq:dbarlp} \varphi(z) &=& \bar\partial_w \mathcal{K}\varphi(z) + \mathcal{K}\big( \bar\partial_w \varphi\big)(z) \end{eqnarray} in the sense of distributions on $X$. \end{thm}

Another extension of the $\bar\partial$-operator is the minimal closed extension, i.e., the strong extension $\bar\partial_s^{(p)}$ of $\bar\partial_{sm}$, which is the graph closure of $\bar\partial_{sm}$ in $L^p_{0,q}(X) \times L^p_{0,q+1}(X)$, so $\varphi \in {\rm Dom\, } \bar\partial_s^{(p)} \subset L^p_{0,q}(X)$, if there exists a sequence of smooth forms $\{\varphi_j\}_j \subset L^p_{0,q}(X)$ with support away from the singularity, i.e., $$\supp \varphi_j \cap \{0\} = \emptyset,$$ such that \begin{eqnarray}\label{eq:dbars1} \varphi_j \rightarrow \varphi \ \ \ &\mbox{ in }& \ \ L^p_{0,q}(X),\\ \bar\partial \varphi_j \rightarrow \bar\partial \varphi \ \ \ &\mbox{ in }& \ \ L^p_{0,q+1}(X)\label{eq:dbars2} \end{eqnarray} as $j\rightarrow \infty$.

On smooth varieties, these extensions coincide by Friedrichs' extension lemma, see for example \cite{LiMi}*{Theorem~V.2.6}. From our results below, it will follow that in $L^2$ on the $A_1$-singularity, the $\bar\partial_w$ and $\bar\partial_s$ operators do indeed coincide. In $L^p$ for more general $p$, it is not clear to us whether the $\bar\partial_w$ and $\bar\partial_s$ operators still coincide on the $A_1$-singularity. On other varieties, one can, however, write down explicit examples of functions which are in ${\rm Dom\, } \bar\partial_w$, but not in ${\rm Dom\, } \bar\partial_s$, even in $L^2$.

\begin{ex} Let $Z$ be the cusp \begin{equation*} Z = \{ (z,w) \in B_1(0) \mid z^3-w^2 = 0 \} \subseteq {\mathbb C}^2.
\end{equation*} Then, using the normalization $\pi : t \mapsto (t^2,t^3)$ of $Z$, one can verify that the function $\varphi = z/w$ is in $L^2(Z)$, and $\varphi$ is $\bar\partial$-closed, so $\varphi \in {\rm Dom\, } \bar\partial_w \subseteq L^2(Z)$. By \cite{RuSerre}*{Theorem~1.2}, the kernel of the $\bar\partial_s$-operator on ${\rm Dom\, } \bar\partial_s \subseteq L^2(Z)$ is exactly $\widehat{{\mathcal O}}(Z)$, the ring of weakly holomorphic functions on $Z$. Thus, if $\varphi \in {\rm Dom\, } \bar\partial_s$, we would get that $\varphi \in \widehat{{\mathcal O}}(Z)$ since $\bar\partial \varphi = 0$. However, since $\pi^* \varphi = 1/t$, one gets that $\varphi$ is not locally bounded near $0$, so it is not weakly holomorphic, and thus, $\varphi \notin {\rm Dom\, } \bar\partial_s \subseteq L^2(Z)$, but $\varphi \in {\rm Dom\, } \bar\partial_w \subseteq L^2(Z)$. \end{ex}

For the strong $\bar\partial$-operator, we obtain the following.

\begin{thm}\label{thm:main4} Let $\mathcal{K}$ be the integral operator from Theorem \ref{thm:main1} and let $\varphi\in {\rm Dom\, } \bar\partial_w \subseteq L^2_{0,q}(X')$, $1\leq q \leq 2$. Then \begin{eqnarray*} \mathcal{K} \varphi &\in& {\rm Dom\, }\bar\partial_s \subset L^2_{0,q-1}(X). \end{eqnarray*} \end{thm}

Since $\mathcal{K}$ maps ${\rm Dom\, } \bar\partial_w \to {\rm Dom\, } \bar\partial_s$, and $\bar\partial$ maps ${\rm Dom\, } \bar\partial_w \to {\rm Dom\, } \bar\partial_w$ and ${\rm Dom\, } \bar\partial_s \to {\rm Dom\, } \bar\partial_s$, we get as a corollary of Theorem~\ref{thm:main3} and Theorem~\ref{thm:main4} the following.

\begin{cor}\label{cor:main4} In $L^2$ on the $A_1$-singularity, the $\bar\partial_s$ and $\bar\partial_w$ operators coincide.
\end{cor}

The setting in \cite{AS2} is rather different compared to this article, since here, we are mainly concerned with forms on $X$ with coefficients in $L^p$, while in \cite{AS2}, the type of forms considered, denoted $\mathcal{W}_q(X)$, are generically smooth, with, in a certain sense, ``holomorphic singularities'' (like for example the principal value current $1/f$ of a holomorphic function $f$), but there is no direct growth condition on the singularities. For the precise definition of the class $\mathcal{W}_q(X)$, we refer to \cite{AS2}. In the setting of \cite{AS2}, the $\bar\partial$-operator $\bar\partial_X$ considered there is different from the ones considered here, $\bar\partial_s$ and $\bar\partial_w$. For currents in $\mathcal{W}_q(X)$, one can define the product with certain ``structure forms'' $\omega_X$ associated to the variety. A current $\mu \in \mathcal{W}_q(X)$ lies in ${\rm Dom\, } \bar\partial_X$ if there exists a current $\tau \in \mathcal{W}_{q+1}(X)$ such that $\bar\partial (\mu \wedge \omega) = \tau \wedge \omega$ for all structure forms $\omega$. (To be precise, this formulation works when $X$ is Cohen-Macaulay, as is for example the case here, since $X$ is a hypersurface.)

Combining our results about $\mathcal{K}$ and the $\bar\partial_w$- and $\bar\partial_s$-operator with some properties of the $\mathcal{W}_X$-sheaves, we obtain results similar to Theorem~\ref{thm:main4} for the $\bar\partial_X$-operator, answering in part a question in \cite{AS2} (see the paragraph at the end of page 288 in \cite{AS2}).

\begin{thm}\label{thm:main5} Let $\mathcal{K}$ be the integral operator from Theorem \ref{thm:main1} and let $\varphi\in {\rm Dom\, } \bar\partial^{(2)}_w \cap \mathcal{W}_q(X')$, $1\leq q \leq 2$. Then \begin{eqnarray*} \mathcal{K} \varphi &\in& {\rm Dom\, }\bar\partial_X.
\end{eqnarray*} \end{thm}

For a hypersurface $X$, any structure form is an invertible holomorphic function times the Poincar\'e-residue of $d\zeta_1 \wedge d\zeta_2 \wedge d\zeta_3/h$, where $h$ is the defining function of $X$. In our case, $h(\zeta) = \zeta_1 \zeta_2 - \zeta_3^2$, and the Poincar\'e residue $\omega_X$ can be defined for example as \begin{equation*} \omega_X = \left.\frac{d\zeta_1 \wedge d\zeta_2}{-2\zeta_3} \right|_{X}, \end{equation*} which one can verify lies in $L^2_{2,0}(X)$. The conclusion of Theorem~\ref{thm:main5} means that \begin{equation} \label{eq:thm5concl} \bar\partial (\mathcal{K} \varphi \wedge \omega_X) = (\bar\partial \mathcal{K} \varphi) \wedge \omega_X. \end{equation} Since $\varphi \in {\rm Dom\, } \bar\partial_w \subseteq L^2(X')$, by the Koppelman formula for $\bar\partial_w$ on $L^2$, we get that $\bar\partial \mathcal{K} \varphi \in L^2(X)$. Thus, since $\omega_X \in L^2_{loc}(X)$, the products $\mathcal{K}\varphi \wedge \omega_X$ and $(\bar\partial \mathcal{K} \varphi) \wedge \omega_X$ exist (almost everywhere) pointwise and lie in $L^1_{loc}(X)$.

The results of the present paper have to a large extent been generalized in \cite{LR2} to so-called affine cones over smooth projective complete intersections, which in particular include the $A_1$-singularity. The methods used in \cite{LR2}, which rely on estimates directly on the variety, are rather different from the methods here, which rely on estimates on a finite branched covering. In addition to the fact that we obtain here stronger results in Theorem~\ref{thm:main4} and as a consequence also stronger results in Corollary~\ref{cor:main4} and Theorem~\ref{thm:main5}, compared to the results in \cite{LR2} on the $A_1$-singularity, we also believe that the techniques used in this article might still be of interest when trying to extend our results to more general varieties.
In particular, in preliminary work about Koppelman formulas on surfaces with canonical singularities, which include the $A_1$-singularity, it appears that a combination of these two techniques is useful.

The $A_1$-singularity has in many ways very mild singularities, and one way in which this manifests itself is that it satisfies the conditions for being treated in almost all articles about the solvability of the $\bar\partial$-equation on singular varieties in recent years. The following results about solvability of the $\bar\partial$-equation $\bar\partial f = g$ on the $A_1$-singularity can be found in earlier works.

\begin{itemize}
\item $f \in C_{0,q-1}^\infty(X^*)$ if $g \in C_{0,q}^{\infty}(X')$ is treated in \cite{HePo}.
\item $f \in C^\alpha(X)$ for $\alpha < 1/2$ if $g \in L^\infty_{0,1}(X')\cap C^0(X')$ is treated in \cite{FoGa}.
\item $f \in C_{0,q-1}^{1/2}(X)$ if $g \in L^\infty_{0,q}(X)$ is treated in \cite{RuppThesis}.
\item $f \in C^{\alpha}(X)$ for $\alpha < 1$ if $g \in L^\infty_{0,1}(X')$ and $g$ has compact support is treated in \cite{RuZeI}.
\item $f \in L^p(X)$ for $p > 4/3$ if $g \in L^p_{0,1}(X)$ is treated in \cite{RuMatZ2}, where the $\bar\partial$-operator considered is the $\bar\partial_w$-operator. In addition, it is shown that for $1 \leq p < 4/3$, the $\bar\partial_w$-cohomology in $L^p$ is non-zero.
\item $f \in L^2_{0,q-1}(X)$ if $g \in L^2_{0,q}(X)$ is treated in \cite{RuSerre}, where the $\bar\partial$-operator considered is the $\bar\partial_s$-operator.
\end{itemize}

Note that here we just refer to the results concerning the $A_1$-singularity in those articles, while all the articles treat the $\bar\partial$-equation on other varieties as well.

This paper is organized as follows. In Section~\ref{sec:covering}, we describe a $2$-sheeted covering of the $A_1$-singularity, relations between $L^p$-forms on $X$ and on the covering, and various integral estimates on this covering.
In Section~\ref{sec:main1}, we recall how the Koppelman operators from \cite{AS2} are constructed, and prove the first main result, Theorem~\ref{thm:main1}. In Section~\ref{sec:main2}, we prove an estimate for a cut-off procedure, Theorem~\ref{thm:main2}, which is then used in the proof of Theorem~\ref{thm:main3}, about the $\bar\partial_w$-operator. In Section~\ref{sec:main3}, we then prove Theorem~\ref{thm:main4}, about the $\bar\partial_s$-operator, and Theorem~\ref{thm:main5}, about the $\bar\partial_X$-operator. Finally, in Appendix~\ref{sec:appendix}, we collect various integral kernel estimates on ${\mathbb C}^n$, which we have made use of in Section~\ref{sec:covering} for obtaining integral estimates on the $2$-sheeted covering.

{\bf Acknowledgments.} This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant RU 1474/2 within DFG's Emmy Noether Programme. The first author was partially supported by the Swedish Research Council.

\section{The $2$-sheeted covering of the $A_1$-singularity} \label{sec:covering}

\subsection{Some notation} Let us briefly recall that we consider the variety defined by $\{ g(\zeta) = 0 \}$, where $g(\zeta) = \zeta_1 \zeta_2 - \zeta_3^2$, on two different balls in ${\mathbb C}^3$. We let $D = B_1(0) \subseteq {\mathbb C}^3$ and $D' = B_{1+\epsilon}(0) \subseteq {\mathbb C}^3$ for some $\epsilon > 0$, and we define: \begin{equation*} X = \{ \zeta \in D \mid g(\zeta) = 0 \} \text{ and } X' = \{ \zeta \in D' \mid g(\zeta) = 0 \}. \end{equation*} Note that $X$ and $X'$ can be covered by the $2$-sheeted covering map $$\pi: (w_1,w_2) \mapsto (w_1^2,w_2^2,w_1w_2),$$ which is branched only at the origin. Let \begin{equation*} \tilde{D} := \pi^{-1}(D)\ \mbox{ and }\ \tilde{D}' = \pi^{-1}(D'). \end{equation*} In this section, we consider the $2$-sheeted covering maps $\pi : \tilde{D} \to X$ and $\pi : \tilde{D}' \to X'$, respectively.
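As a small symbolic sanity check (our own illustration, not needed for the proofs; it assumes the sympy library), one can verify that $\pi$ maps into the variety $\{g=0\}$, that it identifies $w$ and $-w$, and that the pullback of the Poincar\'e residue $\omega_X$ from the introduction is the bounded form $-2\,dw_1\wedge dw_2$, consistent with $\omega_X \in L^2_{2,0}(X)$:

```python
import sympy as sp

w1, w2 = sp.symbols('w1 w2')

def pi_map(a, b):
    # the 2-sheeted covering pi(w1, w2) = (w1^2, w2^2, w1*w2)
    return (a**2, b**2, a * b)

z1, z2, z3 = pi_map(w1, w2)

# the image of pi satisfies the defining equation g(zeta) = zeta1*zeta2 - zeta3^2
assert sp.simplify(z1 * z2 - z3**2) == 0

# pi identifies w and -w, so the covering is 2-sheeted away from the origin
assert pi_map(-w1, -w2) == (z1, z2, z3)

# pullback of the Poincare residue d(zeta1) ^ d(zeta2) / (-2*zeta3):
# pi^*(d zeta1 ^ d zeta2) = det(J) dw1 ^ dw2, with J the Jacobian of (zeta1, zeta2)
J = sp.Matrix([[sp.diff(z1, w1), sp.diff(z1, w2)],
               [sp.diff(z2, w1), sp.diff(z2, w2)]])
coeff = sp.simplify(J.det() / (-2 * z3))
assert coeff == -2  # constant, hence bounded on the covering
```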
We will use this covering to estimate the integral operators of Andersson--Samuelsson by use of certain integral estimates in ${\mathbb C}^2$ which are adapted to our particular situation. Basic estimates in ${\mathbb C}^n$ which are needed are postponed to Appendix~\ref{sec:appendix} for the convenience of the reader.

\subsection{Pullback of $\|\eta\|^2=\|\zeta-z\|^2$} Here, we prove an estimate of how the pullback of $\|\eta\|^2$ to the covering behaves, where $\eta = \zeta-z$, which will be fundamental in obtaining our estimates for the pullback of the integral kernels. We will as above let $w=(w_1,w_2)$ in the covering correspond to the $\zeta$-variables on ${\mathbb C}^3$ by $$\pi(w_1,w_2) = (w_1^2,w_2^2,w_1w_2) = \zeta,$$ and we will let $x=(x_1,x_2)$ correspond to the $z$-variables on ${\mathbb C}^3$, i.e., $$\pi(x_1,x_2) = (x_1^2,x_2^2,x_1 x_2) = z.$$ We let \begin{equation*} \alpha^2 = \pi^* \|\eta\|^2 = |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + |w_1w_2-x_1x_2|^2, \end{equation*} and \begin{equation*} \beta_-^2 = |w_1-x_1|^2 + |w_2-x_2|^2=\|w-x\|^2, \end{equation*} and \begin{equation*} \beta_+^2 = |w_1+x_1|^2 + |w_2+x_2|^2=\|w+x\|^2. \end{equation*}

\begin{lma} \label{lmanorm} \begin{equation*} \alpha^2 \leq \beta_+^2\beta_-^2 \leq 4\alpha^2. \end{equation*} \end{lma}

\begin{proof} Using the parallelogram identity $$|a-b|^2 + |a+b|^2 = 2(|a|^2 + |b|^2)$$ we get \begin{eqnarray*} \beta_+^2\beta_-^2 &=& |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + |(x_1-w_1)(x_2+w_2)|^2\\ && + |(x_1+w_1)(x_2-w_2)|^2 \\ &=& |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + |(x_1x_2-w_1w_2) + (x_1w_2-w_1x_2)|^2\\ && + |(x_1x_2-w_1w_2)-(x_1w_2-w_1x_2)|^2 \\ &=& |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + 2|x_1x_2-w_1w_2|^2 + 2|x_1w_2-w_1x_2|^2\\ &\geq& |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + |w_1w_2-x_1x_2|^2 = \alpha^2, \end{eqnarray*} so the first inequality is proved.
To prove the second inequality, we note that by the equality \begin{align*} \beta_+^2\beta_-^2 = |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + 2|x_1x_2-w_1w_2|^2 + 2|x_1w_2-w_1x_2|^2 \end{align*} from the equation above, it is enough to prove that \begin{equation*} |x_1w_2-w_1x_2|^2 \leq \alpha^2. \end{equation*} To prove this, we use the triangle inequality and the inequality $|ab| \leq (1/2)(|a|^2 + |b|^2)$: \begin{eqnarray*} |x_1w_2-w_1x_2|^2 &=& |(x_1w_2-w_1x_2)^2| = |x_1^2w_2^2 + w_1^2x_2^2 - 2x_1w_2w_1x_2| \\ &=& |x_1^2w_2^2 + w_1^2x_2^2 - w_1^2w_2^2 - x_1^2x_2^2 + w_1^2w_2^2 + x_1^2x_2^2 - 2x_1w_2w_1x_2| \\ &=& |(w_1^2-x_1^2)(x_2^2-w_2^2) + (w_1w_2-x_1x_2)^2| \\ &\leq& (1/2)|w_1^2-x_1^2|^2 + (1/2)|w_2^2-x_2^2|^2 + |(w_1w_2-x_1x_2)^2| \\ &\leq& |w_1^2-x_1^2|^2 + |w_2^2-x_2^2|^2 + |w_1w_2-x_1x_2|^2. \end{eqnarray*} \end{proof} \subsection{Integral kernel estimates for the covering} We will now provide fundamental integral estimates for the pull-back under $\pi$ of the principal parts of the integral formulas of Andersson--Samuelsson. Let $dV(w)$ and $dV(x)$ denote the standard Euclidean volume forms on ${\mathbb C}^2_{w}$ and ${\mathbb C}^2_{x}$. We denote the different coordinates of ${\mathbb C}^2$ by the variables $w=(w_1,w_2)$ and $x=(x_1,x_2)$. \begin{lma} \label{lmakerninteg} Let $K$ be an integral kernel on ${\tilde{D}'}\times {\tilde{D}} \subset \subset {\mathbb C}_w^2 \times {\mathbb C}_x^2$ of the form \begin{equation*} K(w,x) = \frac{|f|}{\alpha^3}, \end{equation*} where $f$ is one of the functions $w_1^2,w_2^2,w_1w_2,x_1^2, x_2^2, x_1x_2$. Let $\gamma > -6$ if $f\in \{w_1^2,w_1w_2,w_2^2\}$ and $\gamma>-4$ if $f\in \{x_1^2,x_1x_2,x_2^2\}$. Then there exists a constant $C_\gamma>0$ such that \begin{align} \label{eqkerninteg1} I_1(x) &:= \int_{\tilde{D}'} \|w\|^\gamma K(w,x) dV(w) \leq C_\gamma \left\{\begin{array}{ll} 1 & \ ,\ \gamma>0,\\ 1+ \big|\log \|x\|\big| &\ ,\ \gamma=0,\\ \|x\|^\gamma &\ ,\ \gamma<0, \end{array}\right. 
\end{align} for all $x\in\tilde{D}$ with $x\neq 0$. \end{lma}

\begin{proof} We know by Lemma \ref{lmanorm} that $\alpha \gtrsim \|w-x\| \|w+x\|$, and so $$|I_1(x)| \lesssim \|x\|^{2-\delta} \int_{\tilde{D}'} \frac{\|w\|^{\delta+\gamma} dV(w)}{\|w-x\|^3 \|w+x\|^3},$$ where $\delta\in\{0, 2\}$. So, the assertion follows from the basic estimate, Lemma \ref{lem:estimateCn3}, by considering the different cases separately. \end{proof}

By an elaboration of the argument for the generalized Young's inequality for convolution integrals in \cite{Range}*{Appendix B}, we get the following lemma.

\begin{lma} \label{lmacoveringintegr} Let $\mathcal{K}$ be an integral operator defined by \begin{equation*} \mathcal{K} \varphi(x) = \int K(w,x) \varphi(w) dV(w), \end{equation*} acting on forms on $\tilde{D}'$ and returning forms on $\tilde{D}$, where $K$ is of the form \begin{equation*} K = \frac{gf}{\alpha^3}, \end{equation*} where $g \in L^\infty({\tilde{D}'}\times {\tilde{D}})$ and $f$ is one of $w_1^2, w_1w_2, w_2^2, x_1^2, x_1x_2, x_2^2$.

(i) Let $\frac{4}{3} < p \leq \infty$. Then $\mathcal{K}$ maps $\|w\|^{2-4/p} L^p({\tilde{D'}})$ continuously to $\|x\|^{-4/p} L^p({\tilde{D}})$, i.e., if $\|w\|^{4/p-2} \varphi \in L^p({\tilde{D'}})$, then $\|x\|^{4/p} \mathcal{K} \varphi \in L^p({\tilde{D}})$.

(ii) Assume that $\|w\|^{-2} \varphi \in L^\infty({\tilde{D}'})$ and that $\lim_{x\rightarrow 0} g(\cdot,x) = g(\cdot,0)$ in $L^r(\tilde{D}')$ for some $r>2$. Then $\mathcal{K} \varphi$ is continuous at the origin. \end{lma}

\begin{proof} (i) Let us first consider the case $p<\infty$. Choose $$q:=p/(p-1)\ \ \ \mbox{ and } \ \ \ \eta:=2-4/p.$$ So, $1/p+1/q=1$ and $$\gamma:=\eta q = (2p-4)/(p-1)= 2 + \frac{2}{1-p}> -4$$ (since $p>4/3$).
We want to show that the $L^p$-norm of $\|x\|^{4/p} \mathcal{K} \varphi$ is finite, and we begin by estimating and decomposing, and using the H\"older inequality (with $1/p+1/q=1$) in the following way: \begin{align*} I := &\int_{\tilde{D}} \|x\|^{4} \left| \int_{\tilde{D}'} \frac{gf \varphi}{\alpha^3} dV(w) \right|^p dV(x) \\ \leq &\int_{\tilde{D}} \|x\|^4 \left( \int_{\tilde{D}'} \left(\frac{|gf|\big|\|w\|^{4/p-2}\varphi\big|^p}{\alpha^{3}}\right)^{1/p} \left(\frac{|gf| (\|w\|^{2-4/p})^q}{\alpha^{3}}\right)^{1/q} dV(w) \right)^{p} dV(x) \\ \leq &\int_{\tilde{D}} \|x\|^4 \int_{\tilde{D}'} \frac{|gf|\big|\|w\|^{4/p-2}\varphi\big|^p}{\alpha^{3}} dV(w) \left(\int_{\tilde{D}'} \frac{|gf|\|w\|^{\eta q}}{\alpha^{3}} dV(w)\right)^{p/q} dV(x). \end{align*} From now on, let us just consider the situation that $\gamma=\eta q <0$. The other cases, $\gamma=0$ and $\gamma>0$, respectively, are even simpler: just replace $\|x\|^\gamma$ in the following by $1+\big|\log\|x\|\big|$ or $1$, respectively. Using \eqref{eqkerninteg1} on the second inner integral, $\gamma p/q = \eta p = 2p -4$ and Fubini's Theorem one obtains \begin{align*} I \lesssim & \int_{\tilde{D}} \int_{\tilde{D}'} \|x\|^4 \frac{|gf|\big| \|w\|^{4/p-2}\varphi\big|^p}{\alpha^{3}} dV(w) \|x\|^{\gamma p/q}dV(x)\\ = & \int_{\tilde{D}'} \big| \|w\|^{4/p-2}\varphi\big|^p \int_{\tilde{D}} \|x\|^{2p} \frac{|gf|}{\alpha^{3}} dV(x) dV(w) \end{align*} By use of \eqref{eqkerninteg1}, we then get that \begin{equation*} I \lesssim \int_{\tilde{D}'} \big|\|w\|^{4/p-2}\varphi\big|^p dV(w) = \big\| \|w\|^{4/p-2}\varphi\big\|^p_{L^p({\tilde{D}'})} < \infty. \end{equation*} It remains to consider the case $p=\infty$ which is even simpler: \begin{eqnarray*} \left| \int_{\tilde{D}'} \frac{gf \varphi}{\alpha^3} dV(w) \right| \leq \big\| \|w\|^{-2} \varphi \big\|_\infty \int_{\tilde{D}'} \|w\|^2 \frac{|gf|}{\alpha^3} dV(w) \lesssim \big\| \|w\|^{-2} \varphi \big\|_\infty \end{eqnarray*} by use of Lemma \ref{lmakerninteg}.
(ii) If $f\in\{x_1^2,x_1x_2,x_2^2\}$, then $$\mathcal{K}\varphi(x) = f(x) \int \frac{g \varphi dV(w)}{\alpha^3}.$$ But $$\left|\int \frac{g \varphi dV(w)}{\alpha^3}\right| \lesssim \int \frac{\|w\|^2}{\alpha^3} dV(w) \lesssim 1 + \big|\log\|x\|\big|$$ by Lemma \ref{lmakerninteg}. Since $|f(x)|\leq \|x\|^2$ and $\|x\|^2 \big(1+\big|\log\|x\|\big|\big) \rightarrow 0$ as $x\rightarrow 0$, it follows that $\mathcal{K}\varphi$ is continuous at $0\in{\mathbb C}^2$ with $\mathcal{K}\varphi(0,0)=0$.

It remains to treat the case $f\in\{w_1^2,w_1w_2,w_2^2\}$. We know from part (i) that $\mathcal{K}\varphi$ is a bounded function (the integral exists for all $x=(x_1,x_2)$). Let $c_\varphi := \big\| \|w\|^{-2} \varphi\big\|_\infty$. Using this and $|f| \leq \|w\|^2$, we get \begin{eqnarray*} \Delta(x) := \left| \mathcal{K}\varphi(x) - \mathcal{K}\varphi(0)\right| \leq c_\varphi \int \|w\|^4 \left| \frac{g(w,x)}{\alpha^3(w,x)} - \frac{g(w,0)}{\alpha^3(w,0)} \right| dV(w). \end{eqnarray*} Using $$\alpha^3 \sim \delta(w,x) := \|w-x\|^3 \|w+x\|^3$$ from Lemma \ref{lmanorm}, we have \begin{eqnarray*} \Delta(x) &\lesssim& \int \|w\|^4 \left| \frac{g(w,x)}{\|w-x\|^3\|w+x\|^3} - \frac{g(w,0)}{\|w\|^6} \right| dV(w)\\ &=& \int \|w\|^4 \left| \frac{\|w\|^6 g(w,x) -\delta(w,x) g(w,0)}{\delta(w,x) \|w\|^6} \right| dV(w)\\ &\leq& \int \|w\|^4 \left| \frac{\|w\|^6 g(w,x) -\delta(w,x) g(w,x)}{\delta(w,x) \|w\|^6} \right| dV(w)\\ && + \int \|w\|^4 \left| \frac{\delta(w,x) g(w,x) -\delta(w,x) g(w,0)}{\delta(w,x) \|w\|^6} \right| dV(w). \end{eqnarray*} By use of the Taylor expansion, we have $$\left| \delta(w,x) - \|w\|^6 \right| = \left| \delta(w,x) - \delta(w,0) \right| \lesssim \sum_{k=1}^6 \|x\|^k \|w\|^{6-k}.$$ This gives \begin{eqnarray*} \Delta_1(x) &:= & \int \|w\|^4 \left| \frac{\|w\|^6 g(w,x) -\delta(w,x) g(w,x)}{\delta(w,x) \|w\|^6} \right| dV(w)\\ &\lesssim& \|x\| \sum_{k=1}^6 \|g\|_\infty \int \frac{\|x\|^{k-1} \|w\|^{4-k}}{\delta(w,x)} dV(w) \lesssim \|x\| \|g\|_\infty, \end{eqnarray*} where we have used Lemma \ref{lmakerninteg} for the last step.
On the other hand, \begin{eqnarray*} \Delta_2(x) &:=& \int \|w\|^4 \left| \frac{\delta(w,x) g(w,x) -\delta(w,x) g(w,0)}{\delta(w,x) \|w\|^6} \right| dV(w)\\ &=& \int \left| \frac{g(w,x) - g(w,0)}{\|w\|^2} \right| dV(w). \end{eqnarray*} Let $s=\frac{r}{r-1}$. Then $s<2$ and the H\"older inequality gives: \begin{eqnarray*} \Delta_2(x) &\leq& \left( \int \frac{dV(w)}{\|w\|^{2s}}\right)^{1/s} \|g(\cdot,x) - g(\cdot,0)\|_{L^r(\tilde{D}')}\\ &\lesssim& \|g(\cdot,x) - g(\cdot,0)\|_{L^r(\tilde{D}')} \rightarrow 0 \end{eqnarray*} as $x\rightarrow 0$ by assumption. Summing up, we have $\Delta(x)=\Delta_1(x)+\Delta_2(x) \rightarrow 0$ as $x\rightarrow 0$. \end{proof}

Let us remark that the estimates in the proof of Lemma \ref{lmacoveringintegr} are rather rough. We could do much better, but the lemma -- as it stands -- is sufficient for our purpose, and better estimates would complicate the presentation considerably. In the special case $p=2$, we will need some better estimates, which we give in the next section.

\subsection{Estimates for cut-off procedures} In the proofs of the homotopy formulas for the $\bar\partial_w$- and the $\bar\partial_s$-operator, we will use certain cut-off procedures. For these, we require some better estimates, which will be given in this section. For $k\in{\mathbb Z}$, $k\geq 1$, let \begin{equation} \label{eq:Dk} \tilde{D}_k := \{ x\in \tilde{D}': e^{-e^{k+1}/2} < \|x\| < \sqrt{2}e^{-e^k/2} \}. \end{equation} A simple calculation shows that since $k\geq 1$, \begin{equation} \label{eq:Dkprime} \tilde{D}_k \subset \tilde{D}_k' := \{ x\in \tilde{D}': e^{-e^{k+1}/2} < \|x\| < e^{-e^{k-1}/2} \}, \end{equation} so in the proofs of the following lemmas, we can consider integration over $\tilde{D}_k'$ instead.
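The elementary inequality behind the inclusion \eqref{eq:Dkprime}, namely $\sqrt{2}\,e^{-e^k/2} \leq e^{-e^{k-1}/2}$, or equivalently $\log 2 \leq e^{k-1}(e-1)$ for $k\geq 1$, can be double-checked numerically (our own sketch; the range of $k$ is an arbitrary choice):

```python
import math

# D_k is contained in D_k' iff sqrt(2)*exp(-e^k/2) <= exp(-e^(k-1)/2);
# taking logarithms, this is equivalent to log(2) <= e^(k-1)*(e - 1),
# which holds for every k >= 1 since e - 1 > log(2).
for k in range(1, 10):
    upper_Dk = math.sqrt(2) * math.exp(-math.exp(k) / 2)
    upper_Dk_prime = math.exp(-math.exp(k - 1) / 2)
    assert upper_Dk <= upper_Dk_prime
    assert math.log(2) <= math.exp(k - 1) * (math.e - 1)
```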
\begin{lma} \label{lem:cutoff1} Let $\mathcal{K}$ be an integral operator defined by \begin{equation*} \mathcal{K} \varphi(x) = \int_{\tilde{D}'} |K(w,x)| \varphi(w) dV(w), \end{equation*} where $K$ is of the form \begin{equation*} K = \frac{gf}{\alpha^3}, \end{equation*} where $g \in L^\infty({\tilde{D}'}\times {\tilde{D}})$ and $f\in\{w_1^2,w_1w_2,w_2^2, x_1^2,x_1x_2,x_2^2\}$. Then there exists a constant $C>0$ such that \begin{eqnarray*} \int_{\tilde{D}_k} \frac{|\mathcal{K}\varphi (x)|^2}{\log^2 \|x\|} dV(x) &<& C \|\varphi\|^2_{L^2(\tilde{D}')} \end{eqnarray*} for all $\varphi\in L^2(\tilde{D}')$ and all $k\geq 1$. \end{lma} \begin{proof} We proceed similarly as in the proof of Lemma \ref{lmacoveringintegr}, but need to estimate the integrals more carefully: \begin{align*} I_k := &\int_{\tilde{D}_k} \frac{1}{\log^2\|x\|} \left| \int_{\tilde{D}'} \frac{|gf| \varphi}{\alpha^3} dV(w) \right|^2 dV(x) \\ \leq &\int_{\tilde{D}_k} \frac{1}{\log^2\|x\|} \left( \int_{\tilde{D}'} \left(\frac{|gf| |\varphi|^2}{\alpha^{3}}\right)^{1/2} \left(\frac{|gf|}{\alpha^{3}}\right)^{1/2} dV(w) \right)^{2} dV(x) \\ \leq &\int_{\tilde{D}_k} \frac{1}{\log^2\|x\|} \left( \int_{\tilde{D}'} \frac{|gf| |\varphi|^2}{\alpha^{3}} dV(w) \right) \left(\int_{\tilde{D}'} \frac{|gf|}{\alpha^{3}} dV(w)\right) dV(x). \end{align*} Applying Lemma \ref{lmakerninteg} to the second inner integral gives \begin{align*} I_k \lesssim & \int_{\tilde{D}_k} \frac{1}{\big|\log\|x\|\big|} \int_{\tilde{D}'} \frac{|gf| |\varphi|^2}{\alpha^{3}} dV(w) dV(x). 
\end{align*} Using $\alpha \gtrsim \|w-x\| \|w+x\|$ (see Lemma \ref{lmanorm}), Fubini's Theorem and the fact that $$\int_{\tilde{D}_k} \frac{|fg|\, dV(x)}{\|w-x\|^3 \|w+x\|^3 \big|\log\|x\|\big|} \lesssim 1$$ for all $w\in{\mathbb C}^2$ by Lemma \ref{lem:estimateCn4} (with $\gamma\in\{4,6\}$) together with \eqref{eq:Dkprime}, we finally obtain $$I_k \lesssim \int_{\tilde{D}'} |\varphi|^2 dV(w) = \|\varphi\|^2_{L^2(\tilde{D}')}.$$ \end{proof}

Another cut-off estimate that we will need is:

\begin{lma} \label{lem:cutoff2} For $k\in {\mathbb Z}$, $k\geq 1$, let $\mathcal{K}^k$ be integral operators defined by \begin{equation*} \mathcal{K}^k \varphi(x) = \int_{\tilde{D}_k} |K(w,x)| \frac{\varphi(w)}{\|w\|^2 \big|\log\|w\|\big|} dV(w), \end{equation*} where $K$ is of the form \begin{equation*} K = \frac{gf}{\alpha^3}, \end{equation*} where $g \in L^\infty({\tilde{D}'}\times {\tilde{D}})$ and $f\in\{w_1^2,w_1w_2,w_2^2,x_1^2,x_1x_2,x_2^2\}$. Let $\varphi\in L^2(\tilde{D}')$. Then \begin{eqnarray*} \int_{\tilde{D}} \|x\|^4 |\mathcal{K}^k\varphi (x)|^2 dV(x) \longrightarrow 0 \end{eqnarray*} for $k\rightarrow \infty$. \end{lma}

\begin{proof} We proceed similarly as in the proof of Lemma \ref{lem:cutoff1}: \begin{align*} I_k := &\int_{\tilde{D}} \|x\|^4 \left| \int_{\tilde{D}_k} \frac{|gf| \varphi}{\alpha^3 \|w\|^2 \big|\log\|w\|\big|} dV(w) \right|^2 dV(x) \\ \leq &\int_{\tilde{D}} \|x\|^4 \left( \int_{\tilde{D}_k} \left(\frac{|gf| |\varphi|^2}{\alpha^{3}\big|\log\|w\|\big|}\right)^{1/2} \left(\frac{|gf|}{\alpha^{3}\|w\|^4\big|\log\|w\|\big|}\right)^{1/2} dV(w) \right)^{2} dV(x) \\ \leq &\int_{\tilde{D}} \left(\int_{\tilde{D}_k} \frac{|gf| |\varphi|^2}{\alpha^{3} \big|\log \|w\|\big|} dV(w) \right) \|x\|^4 \left(\int_{\tilde{D}_k} \frac{|gf|}{\alpha^{3}\|w\|^4\big|\log\|w\|\big|} dV(w)\right) dV(x).
\end{align*} Using $\alpha \gtrsim \|w-x\| \|w+x\|$ (see Lemma \ref{lmanorm}), Fubini's Theorem and the fact that $$\|x\|^4 \int_{\tilde{D}_k} \frac{|fg|\, dV(w)}{\|w-x\|^3 \|w+x\|^3 \|w\|^4 \big|\log\|w\|\big|} \lesssim 1$$ for all $x\in{\mathbb C}^2$ by Lemma \ref{lem:estimateCn4} (with $\gamma\in\{0,2\}$) and \eqref{eq:Dkprime}, we obtain $$I_k \lesssim \int_{\tilde{D}_k} \frac{|\varphi|^2}{\big|\log\|w\|\big|} \left( \int_{\tilde{D}} \frac{|fg|}{\alpha^3} dV(x) \right) dV(w).$$ But now we can apply Lemma \ref{lmakerninteg} to the inner integral to conclude finally: $$I_k \lesssim \int_{\tilde{D}_k} \frac{|\varphi|^2}{\big|\log\|w\|\big|} \big( 1 + \big|\log\|w\|\big|\big) dV(w) \lesssim \|\varphi\|^2_{L^2(\tilde{D}_k)} \rightarrow 0$$ for $k\rightarrow \infty$, as the domain of integration vanishes. \end{proof}

\subsection{$L^p$-norms on the variety and the covering}\label{sec:lp-forms} Let $1\leq p\leq \infty$. When we consider an $L^p$-differential form as input into an integral operator, it will be convenient to represent it in a certain ``minimal'' manner. If $\varphi$ is a $(0,q)$-form on $X$ (or $X'$, respectively), then by \cite{RuppThesis}*{Lemma~2.2.1}, we can write $\varphi$ uniquely in the form \begin{equation}\label{eq:minrep} \varphi = \sum_{|I| = q} \varphi_I d\bar{z}_I, \end{equation} where $|\varphi|^2(p) = \sqrt{2}^{q} \sum |\varphi_I|^2(p)$ at each regular point $p\in \Reg X$. The constants here stem from the fact that $|d\overline{z_j}|=\sqrt{2}$ in ${\mathbb C}^n$. In particular, we then get that $\varphi \in L^p_{0,q}(X)$ if and only if $\varphi_I \in L^p(X)$ for all $I$. Note that the singular set of $X$ is negligible as it is a zero set. We say that $\varphi$ is continuous at a point $p\in X$ if there is a representation \eqref{eq:minrep} such that all the coefficients $\varphi_I$ are continuous at the point $p$. This does not need to be the minimal representation. Let $C^0_{0,q}(X)$ be the space of continuous $(0,q)$-forms on $X$.
$C^0_{0,q}(X)$ is a Fr\'echet space with the metric induced by the semi-norms $\|\cdot\|_{L^\infty, K_j}$, where $K_1 \subset K_2 \subset K_3 \subset ...$ is a compact exhaustion of $X$. We also note that continuous forms on $X$ have a continuous extension to a neighborhood of $X$ in ${\mathbb C}^3$ by the Tietze extension theorem. We let $dV_X$ be the induced volume form $i^*\omega^2/2$ on $X$, where $i : X \to {\mathbb C}^3$ is the inclusion and $\omega$ is the standard K\"ahler form on ${\mathbb C}^3$. Then \begin{equation} \label{eq:volform} \pi^* dV_X = 2(\|w_1\|^4 + \|w_2\|^4 + 4\|w_1w_2\|^2) dV(w). \end{equation} If we let \begin{equation*} \xi^2 := \|w_1\|^2 + \|w_2\|^2 \text{ and } \tilde{\varphi}_I := \pi^* \varphi_I, \end{equation*} then since $2(\|w_1\|^4 + \|w_2\|^4 + 4\|w_1 w_2\|^2) \sim \xi^4$, we get that $\varphi \in L^p_{0,q}(X)$ if and only if $\xi^{4/p} \tilde{\varphi}_I \in L^p(\tilde{D})$, where $\varphi$ is given in the minimal representation \eqref{eq:minrep} from above, $\tilde{D} = \pi^{-1}(D)$ and with the convention that $1/p=0$ for $p=\infty$. If $\varphi = \sum_{|I|=q} \varphi_I d\bar{z}_I$ is a $(0,q)$-form that is not necessarily written in the minimal form above, then we can make at least the following useful observation. Note that $$|\varphi| \lesssim \sum_{|I|=q} |\varphi_I|,$$ and so $$|\varphi|^p \lesssim \sum_{|I|=q} |\varphi_I|^p.$$ But \begin{equation*} \pi^*(|\varphi_I|^p dV_X) = |\tilde{\varphi}_I|^p 2(\|w_1\|^4 + \|w_2\|^4 + 4\|w_1 w_2\|^2) dV(w). \end{equation*} So \begin{equation*} \pi^*(|\varphi|^p dV_X) \leq C \xi^4 \sum |\tilde{\varphi}_I|^p dV(w), \end{equation*} and we have proved the first part of the following lemma. \begin{lma}\label{lem:lpbound} Let $\varphi = \sum_{|I|=q} \varphi_I d\bar{z}_I$ be an arbitrary representation of $\varphi$ as a $(0,q)$-form on $X$. \begin{itemize} \item[i.] If $\tilde{\varphi}_I \in \xi^{-4/p}L^p(\tilde{D})$ for all $I$, then $\varphi \in L^p_{0,q}(X)$. \item[ii.]
If $\tilde{\varphi}_I$ is continuous at $p\in \tilde{D}$ for all $I$, then $\varphi$ is continuous at $\pi(p)\in X$. \end{itemize} \end{lma} \begin{proof} It only remains to prove part ii. But continuity of $\tilde{\varphi}_I$ at $p$ directly implies continuity of $\varphi_I$ at $\pi(p)$ since $\pi$ is proper, and so $\varphi$ is continuous by definition. \end{proof} \subsection{Estimating integrals on the variety by estimates for the covering} Using Lemma \ref{lem:lpbound}, we can now formulate a condition for an integral kernel on the variety to map $L^p$ into $L^p$ in terms of how the kernel behaves in the covering. At the same time, we get some conditions on the convergence and boundedness, respectively, of certain cut-off procedures to be studied later. \begin{lma} \label{lmal2covering} Let $\mathcal{K}$ be an integral operator, acting on $(0,q)$-forms in $\zeta$ on $X'$, and returning $(0,q-1)$-forms in $z$ on $X$, and write the integral kernel $K$ in the form \begin{equation}\label{eq:integralkernels} K = \sum K_i \wedge d\bar{z}_i \text{ or } K = \sum K_i \wedge d\bar{\zeta}_i, \end{equation} depending on whether $q=2$ or $q=1$. Let $\tilde{K}_i = \pi^*K_i \wedge d\bar{w_1} \wedge d\bar{w_2}$. (i) Let $\frac{4}{3} < p \leq \infty$. If $\tilde{K}_i$ maps $\xi^{2-4/p}L^p(\tilde{D}')$ continuously to $\xi^{-4/p} L^p(\tilde{D})$ for $i=1,2,3$, then $\mathcal{K}$ maps $L^p_{0,q}(X')$ continuously to $L^p_{0,q-1}(X)$. (ii) If $\tilde{K}_i$ maps $\xi^{2} L^\infty(\tilde{D}')$ to functions continuous at $0\in{\mathbb C}^2$ for $i=1,2,3$, then $\mathcal{K}$ maps $L^\infty_{0,q}(X')$ to functions continuous at $0\in X$. (iii) For $k\in{\mathbb Z}$, $k\geq 1$, let ${X}_k$ be a sequence of subdomains in $X'$ and $\tilde{D}_k=\pi^{-1}(X_k)$ the corresponding subdomains of $\tilde{D}'$.
Let $\mathcal{K}^k$ be the integral operators defined by integrating against the kernel $K$ over $X_k$, and $\tilde{\mathcal{K}}_i^k$ the integral operators defined by integrating against $\tilde{K}_i$ over $\tilde{D}_k$. If $$\int_{\tilde{D}} \|x\|^4 \big|\tilde{\mathcal{K}}^k_i \varphi (x)\big|^2 dV(x) \longrightarrow 0\ \ \ \mbox{ as }\ \ \ k\rightarrow \infty,$$ i.e., $\tilde{\mathcal{K}}^k_i\varphi \rightarrow 0$ in $\xi^{-2}L^2(\tilde{D})$, for any $\varphi\in L^2(\tilde{D}')$ and $i=1, 2, 3$, then $\mathcal{K}^k \varphi \rightarrow 0$ in $L^2_{0,q-1}(X)$ for any $\varphi\in L^2_{0,q}(X')$. (iv) If there exists a constant $C>0$ such that $$\int_{\tilde{D}_k} \frac{\big|\tilde{\mathcal{K}}_i \varphi(x)\big|^2}{\log^2\|x\|} dV(x) < C \|\varphi\|^2_{L^2(\tilde{D}')}$$ for all $\varphi\in L^2(\tilde{D}')$, $k\geq 1$ and $i=1, 2, 3$, then there exists a constant $C'>0$ such that $$\int_{X_k} \frac{\big| \mathcal{K} \varphi(z)\big|^2}{\|z\|^2 \log^2 \|z\|} dV(z) < C' \|\varphi\|^2_{L^2(X')}$$ for all $\varphi\in L^2_{0,q}(X')$, $k\geq 1$. \end{lma} \begin{proof} Let us first prove parts (i) and (ii). We consider a $\varphi \in L^p_{0,q}(X')$, and write it as in \eqref{eq:minrep} above in the form $\varphi = \sum_{|I|=q} \varphi_I d\bar{\zeta}_I$, where $\varphi_I \in L^p(X')$. Thus, $\tilde{\varphi_I} = \pi^*\varphi_I \in \xi^{-4/p} L^p(\tilde{D}')$. We first consider the case $q=1$. Then $\pi^*(K\wedge\varphi)$ consists of terms \begin{equation} \label{eqpullback1} \pi^* K_i \wedge \pi^*(d\bar{\zeta}_i \wedge \varphi_j \wedge d\bar{\zeta}_j). \end{equation} Now, $\pi^*(d\bar{\zeta}_i \wedge d\bar{\zeta}_j) = C f d\bar{w_1}\wedge d\bar{w_2}$, where $C$ is a constant and $f$ is one of the functions $\bar{w}_1^2$, $\bar{w}_1\bar{w}_2$ or $\bar{w}_2^2$, so $|f| \lesssim \xi^2$. We thus get that the second term in \eqref{eqpullback1} is $d\bar{w_1}\wedge d\bar{w_2}$ times a function in $\xi^{2-4/p}L^p(\tilde{D}')$.
Thus, $\mathcal{K}$ acting on $\varphi$ expressed as an integral on $\tilde{D}'$ will be of the form $\int_{\tilde{D}'} \tilde{K}_i \wedge d\bar{w_1}\wedge d\bar{w_2} \wedge \psi$, where $\psi \in \xi^{2-4/p}L^p(\tilde{D}')$. Thus, by the assumptions on $\tilde{K}_i$, $\pi^* \mathcal{K}\varphi \in \xi^{-4/p} L^p(\tilde{D})$ in case (i) and $\pi^* \mathcal{K}\varphi$ is continuous at $0\in{\mathbb C}^2$ in case (ii). So, by Lemma \ref{lem:lpbound}, $\mathcal{K}\varphi \in L^p(X)$ (and this mapping is bounded) in case (i) and $\mathcal{K}\varphi$ is continuous at $0\in X$ in case (ii). In the same way, when $\varphi$ is a $(0,2)$-form, $\pi^*\varphi$ will be a function in $\xi^{2-4/p}L^p(\tilde{D}')$ times $d\bar{w_1}\wedge d\bar{w_2}$, so we can write $\mathcal{K} \varphi$ in the form $\mathcal{K}\varphi = \sum g_i d\bar{z}_i$, where $\pi^* g_i$ is of the form \begin{equation*} \int_{\tilde{D}'} \tilde{K}_i \wedge d\bar{w_1}\wedge d\bar{w_2} \wedge \psi \end{equation*} and just as above, we get that $\pi^* g_i \in \xi^{-4/p} L^p(\tilde{D})$ in case (i) and $\pi^* g_i$ is continuous at $0\in {\mathbb C}^2$ in case (ii). So, $g_i \in L^p(X)$, and thus, $\mathcal{K} \varphi \in L^p_{0,1}(X)$ in case (i). Analogously, $g_i$, and hence also $\mathcal{K}\varphi$, are continuous at $0\in X$ in case (ii). The proofs of parts (iii) and (iv) follow by exactly the same arguments (with $p=2$). For part (iv), recall that $\pi^* \|z\|^2 \sim \|x\|^4=\xi^4$.
\end{proof} \section{Properties of the Andersson--Samuelsson integral operator at the $A_1$-singularity}\label{sec:main1} \subsection{The Koppelman integral operator for a reduced complete intersection} For the convenience of the reader, let us recall shortly the definition of the Koppelman integral operators from \cite{AS2} in the situation of a reduced complete intersection defined on two different open sets $D \subset\subset D' \subset\subset {\mathbb C}^N$, $$X = \{ \zeta \in D \subset {\mathbb C}^N \mid g_1(\zeta) = \dots = g_p(\zeta) = 0 \}$$ and $$X' = \{ \zeta \in D' \subset {\mathbb C}^N \mid g_1(\zeta) = \dots = g_p(\zeta) = 0 \},$$ both of dimension $n=N-p$ (see \cite{AS2}, Section 8). Let $\omega_{X'}$ be a structure form on $X'$ (see \cite{AS2}, Section 3). For generic coordinates $(\zeta',\zeta'')=(\zeta'_1, ..., \zeta'_p, \zeta''_1, ..., \zeta''_n)$ such that $\det\big( \partial g/\partial \zeta'\big)$ is generically non-vanishing on $X_{reg}'$, the structure form $\omega_{X'}$ is essentially the pull-back of \begin{eqnarray*} \frac{d\zeta''_1 \wedge ... \wedge d\zeta''_n}{\det \big( \partial g / \partial \zeta'\big)} \end{eqnarray*} to $X'$ (there are also some scalar constants and a fixed frame of a trivial line bundle). The Koppelman integral operator $\mathcal{K}$, which is a homotopy operator for the $\bar\partial$-equation on $X$, is of the form \begin{equation}\label{eq:AS1} (\mathcal{K} \alpha)(z) = \int_{X'} K(\zeta,z) \wedge \alpha(\zeta), \end{equation} which takes forms on $X'$ as its input, and outputs forms on $X$. Here, \begin{equation*} K(\zeta,z) = \omega_{X'}(\zeta) \wedge \tilde{K}(\zeta,z), \end{equation*} and $\tilde{K}$ is defined by \begin{equation*} \tilde{K}(\zeta,z) \wedge d\eta = h_1\wedge\dots\wedge h_p \wedge (g\wedge B)_n.
\end{equation*} The \emph{Hefer forms} $h_i$ are $(1,0)$-forms satisfying $\delta_\eta h_i = g_i(\zeta) - g_i(z)$, where $\delta_\eta$ is the interior multiplication with $$2\pi i \sum \eta_j \frac{\partial}{\partial \eta_j} = 2\pi i \sum (\zeta_j - z_j) \frac{\partial}{\partial \eta_j}.$$ The form $g$ is a so-called weight with compact support. In case $D$ is the unit ball $D = B_1(0) \subseteq {\mathbb C}^n$, one choice of such a weight is \begin{equation*} g = \chi - \bar\partial \chi \wedge \big(\sigma + \sigma(\bar\partial\sigma) + \dots + \sigma(\bar\partial\sigma)^{n-1}\big), \end{equation*} where \begin{equation*} \sigma = \frac{\zeta \bullet d\eta}{2\pi i(\|\zeta\|^2-\bar{\zeta}\bullet z)} \end{equation*} and $\chi = \chi(\zeta)$ is a cut-off function which is identically $1$ in a neighborhood of $\bar{D}$, and has support in $D'$. The Bochner-Martinelli form $B$ is defined by \begin{equation*} B = s + s\bar\partial s + \dots + s(\bar\partial s)^{n-1}, \end{equation*} where \begin{equation*} s = \frac{\partial \|\eta\|^2}{\|\eta\|^2} = \frac{\bar{\eta}\bullet d\eta}{\|\eta\|^2}. \end{equation*} Considering now the specific case when $X$ is the $A_1$-singularity, $X = \{ \zeta \in D \mid g(\zeta) = 0 \}$, where $g(\zeta) = \zeta_1 \zeta_2 - \zeta_3^2$, we choose as a Hefer form \begin{equation*} h = \sum h^i d\eta_i = \frac{1}{2}\left( (\zeta_2 + z_2) d\eta_1 + (\zeta_1 + z_1) d\eta_2\right) - (\zeta_3 + z_3)d\eta_3, \end{equation*} and one representation of the structure form $\omega_{X'}$ is \begin{equation}\label{eq:omegaX1} \omega_{X'} = \frac{d\zeta_1\wedge d\zeta_2}{-2\zeta_3}. \end{equation} \subsection{Proof of Theorem \ref{thm:main1}}\label{sec:main1b} Note that \begin{equation}\label{eq:omegaX2} \pi^* \omega_{X'} = (-1/2) dw_1\wedge dw_2 \end{equation} under the $2$-sheeted covering $\pi: {\mathbb C}^2 \rightarrow X'$.
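As a quick consistency check of the choice of $h$ above (suppressing the normalizing factor $2\pi i$ in $\delta_\eta$), contraction of $h$ with $\eta=\zeta-z$ gives \begin{align*} \delta_\eta h &= \frac{1}{2}(\zeta_2 + z_2)(\zeta_1 - z_1) + \frac{1}{2}(\zeta_1 + z_1)(\zeta_2 - z_2) - (\zeta_3 + z_3)(\zeta_3 - z_3)\\ &= \zeta_1\zeta_2 - z_1 z_2 - \big(\zeta_3^2 - z_3^2\big) = g(\zeta) - g(z), \end{align*} which is precisely the Hefer condition for $g(\zeta)=\zeta_1\zeta_2-\zeta_3^2$.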
We then get that \begin{equation*} \tilde{K} = \sum_{\sigma\in S_3} \frac{\chi}{\|\eta\|^4} h^{\sigma(1)} \bar{\eta}_{\sigma(2)} d\bar{\eta}_{\sigma(3)} - \frac{\bar\partial\chi}{2\pi i \|\eta\|^2 (\|\zeta\|^2-\bar{\zeta}\bullet z)} h^{\sigma(1)} \bar{\eta}_{\sigma(2)} \bar{\zeta}_{\sigma(3)}, \end{equation*} where $S_l$ is the symmetric group on $l$ elements. We decompose $\tilde{K}$ into $\tilde{K}_1$ and $\tilde{K}_2$, where $\tilde{K}_1$ and $\tilde{K}_2$ consist of the terms of $\tilde{K}$ containing $\chi$ and $\bar\partial\chi$, respectively. The terms of $\tilde{K}_1$ and $\tilde{K}_2$ are then of the forms \begin{equation*} \frac{g_1 f_1}{\|\eta\|^3} \wedge d\bar{\eta}_i \text{ and } \frac{g_2 f_2}{\|\eta\|(\|\zeta\|^2-\bar{\zeta}\bullet z)} \wedge \bar\partial\chi, \end{equation*} where $f_i$ is one of $\zeta_1,\zeta_2,\zeta_3,z_1,z_2,z_3$ and $g_i \in L^\infty(X'\times X)$ is a product of a smooth function with a term of the form $\eta_j/\|\eta\|$. By Proposition \ref{prop:lpconvergence} below, it follows for $\pi^* g_i(w,x)=g_i(\pi(w),\pi(x))$ that \begin{eqnarray}\label{eq:conv99} \lim_{x\rightarrow 0} \pi^* g_i(\cdot,x) = \pi^*g_i(\cdot,0) \ \ \ \mbox{ in } L^r(\tilde{D}') \end{eqnarray} for all $1\leq r<\infty$. The full kernel $K = \tilde{K} \wedge \omega_{X'}$ also splits into kernels $K_i = \tilde{K_i} \wedge \omega_{X'}$. We thus also get a decomposition $\mathcal{K} = \mathcal{K}_1 + \mathcal{K}_2$, and we will prove separately that $\mathcal{K}_1$ and $\mathcal{K}_2$ have the claimed mapping properties. If $\mathcal{K}_1$ is acting on $(0,1)$-forms or $(0,2)$-forms respectively, then we will get a contribution from $K_1$ from terms of the form \begin{equation*} \frac{g_1 f_1}{\| \eta \|^3}\wedge \omega_{X'} \wedge d\bar{\zeta}_i \text { or } \frac{g_1 f_1}{\| \eta \|^3}\wedge \omega_{X'} \wedge d\bar{z}_i \end{equation*} respectively. 
Thus, by Lemma~\ref{lmal2covering}, (i), $\mathcal{K}_1$ maps $L^p_{0,q}(X')$ continuously to $L^p_{0,q-1}(X)$ if \begin{equation} \label{eqk1coveringexp} \pi^*\left( \frac{g_1 f_1}{\| \eta \|^3} \omega_X \right) \wedge d\bar{w_1}\wedge d\bar{w_2} = c \frac{\tilde{g}_1 \tilde{f}_1}{\alpha^3} \wedge dV(w) \end{equation} maps $\xi^{2-4/p}L^p(\tilde{D}')$ continuously to $\xi^{-4/p}L^p(\tilde{D})$. But by Lemma~\ref{lmacoveringintegr}, a kernel of the form of \eqref{eqk1coveringexp} does indeed map $\xi^{2-4/p}L^p(\tilde{D}')$ continuously to $\xi^{-4/p} L^p(\tilde{D})$. By the same Lemmata (and using \eqref{eq:conv99}), $\mathcal{K}_1$ maps $L^\infty_{0,q}(X')$ to forms continuous at the origin $0\in X$. On the other hand, on the regular part of $X$, the kernel behaves like $\|\zeta- z\|^{-3}$ in ${\mathbb C}^2$, i.e., like the Bochner-Martinelli-Koppelman kernel (cf. also the proof of \cite{AS2}, Lemma 6.1). So, $\mathcal{K}_1$ maps $L^\infty_{0,q}(X')$ to forms that are (H\"older-)continuous on $X\setminus\{0\}$ by standard arguments (see \cite{Range}, Theorem IV.1.14). Summing up, we see that $\mathcal{K}_1$ maps $L^\infty_{0,q}(X')$ to $C^0_{0,q-1}(X)$. This operator is continuous because the Fr\'echet space structure of $C^0_{0,q-1}(X)$ is defined by semi-norms $\|\cdot\|_{L^\infty,K_j}$ where $\{K_j\}_j$ is a compact exhaustion of $X$ (and $\mathcal{K}_1$ maps continuously from $L^\infty$ to $L^\infty$). Considering now $\mathcal{K}_2$, we note that since $\chi$ depends only on $\zeta$, the action of $\mathcal{K}_2$ on $(0,2)$-forms is $0$, so we only need to consider the case of $(0,1)$-forms. Note that we can write the pullback of the kernel acting on $\varphi_i d\bar{\zeta}_i$ as an integral on $X$ of the form \begin{equation} \label{eqk2coveringexp} \pi^*\left( \frac{g \chi'(\zeta)}{\|\eta\|(\|\zeta\|^2-\bar{\zeta}\bullet z)}\right) dV(w) \end{equation} where $g \in L^{\infty}(\tilde{D}'\times\tilde{D})$ satisfies \eqref{eq:conv99}.
Note that $\chi \equiv 1$ in a neighborhood of $\bar{X}$, so $\supp \chi' \cap \bar{X} = \emptyset$. Hence, the integrand in \eqref{eqk2coveringexp} is uniformly bounded when $z \in X$ and $\zeta \in X'$, and the pullback of the kernel of $\mathcal{K}_2$ will define a bounded operator mapping $\xi^{2-4/p}L^p(\tilde{D}')$ to $\xi^{-4/p} L^p(\tilde{D})$. By the same arguments as above, one gets also that $\mathcal{K}_2$ maps continuously from $L^\infty_{0,1}(X')$ to $C^0(X)$. To complete the proof of Theorem \ref{thm:main1}, it only remains to prove the following: \begin{prop}\label{prop:lpconvergence} Let $a(\zeta,z) = \frac{\zeta_i-z_i}{\|\zeta-z\|}$. Then \begin{eqnarray}\label{eq:est99} \lim_{z\rightarrow 0} a(\cdot,z) = a(\cdot,0) \ \ \ \mbox{ in } L^r(X') \end{eqnarray} for all $1\leq r < \infty$. Let $\pi^* a(w,x) := a(\pi(w),\pi(x))$. Then \begin{eqnarray}\label{eq:est99b} \lim_{x\rightarrow 0} \pi^* a(\cdot,x) = \pi^*a(\cdot,0) \ \ \ \mbox{ in } L^s(\tilde{D}') \end{eqnarray} for all $1\leq s < \infty$. \end{prop} \begin{proof} Fix $1\leq r<\infty$ and note that $a$ is bounded (by $\|a\|_\infty=1$). Let $0\leq \chi\leq 1$ be a smooth function such that $\chi\equiv 0$ on $B_{1/2}(0)$ and $\chi\equiv 1$ on ${\mathbb C}^3\setminus B_{1}(0)$, and set $\chi_\epsilon(x) := \chi(x/\epsilon)$ (let $0<\epsilon<1$ throughout this proof). Let $$a_\epsilon(\zeta,z) := \chi_\epsilon(\zeta-z) a(\zeta,z).$$ Then $a_\epsilon(\zeta,z)$ is smooth and it is not hard to see by Lebesgue's theorem on dominated convergence that \begin{eqnarray*} \lim_{\epsilon\rightarrow 0} a_\epsilon(\cdot,z) = a(\cdot,z) \ \ \ \mbox{ in } L^r(X') \end{eqnarray*} for all $z\in {\mathbb C}^3$.
We can say more, namely that this convergence is uniform in $z$: \begin{eqnarray*} \|a_\epsilon(\cdot,z) - a(\cdot,z) \|_{L^r(X')} &=& \|a_\epsilon(\cdot,z) - a(\cdot,z)\|_{L^r(X'\cap B_\epsilon(z))}\\ &\leq& \|a\|_\infty \|\chi_\epsilon(\cdot-z)-1\|_{L^r(X'\cap B_\epsilon(z))}\\ &\leq& \|a\|_\infty \left( \int_{X'\cap B_\epsilon(z)} dV_{X'} \right)^{1/r} \lesssim \epsilon^{4/r}, \end{eqnarray*} because $X'$ is a complex variety of dimension $2$, so that $\int_{X'\cap B_\epsilon(z)} dV_{X'} \lesssim \epsilon^4$. This follows from \cite{Dem}, Consequence III.5.8, because $X'$ is bounded and has Lelong number $\leq 2$. We can now prove \eqref{eq:est99}. Let $\delta>0$. By the considerations above, we can choose $\epsilon>0$ such that \begin{eqnarray*}\label{eq:est100} \|a_\epsilon(\cdot,z) - a(\cdot,z) \|_{L^r(X')} &\leq& \delta/3 \end{eqnarray*} for all $z\in {\mathbb C}^3$. Fix such an $\epsilon>0$. It follows that \begin{eqnarray*} \|a(\cdot,z) - a(\cdot,0)\|_{L^r(X')} &\leq& \|a(\cdot,z) - a_\epsilon(\cdot,z)\|_{L^r(X')} + \|a_\epsilon(\cdot,z) - a_\epsilon(\cdot,0)\|_{L^r(X')}\\ && + \|a_\epsilon(\cdot,0) - a(\cdot,0)\|_{L^r(X')}\\ &\leq& 2\delta/3 + \|a_\epsilon(\cdot,z) - a_\epsilon(\cdot,0)\|_{L^r(X')} \end{eqnarray*} for all $z\in{\mathbb C}^3$. On the other hand, $a_\epsilon$ is smooth on ${\mathbb C}^3\times{\mathbb C}^3$, and so there exists a constant $C>0$ such that $$|a_\epsilon(\zeta,z) - a_\epsilon(\zeta,0)| \leq C \|z\|$$ for all $\zeta,z$ in a bounded domain. Hence, we get that $$\|a_\epsilon(\cdot,z) - a_\epsilon(\cdot,0)\|_{L^r(X')} \leq \delta/3$$ if $\|z\|$ is small enough. Summing up, we have found that actually \begin{eqnarray*} \|a(\cdot,z) - a(\cdot,0)\|_{L^r(X')} &\leq& \delta \end{eqnarray*} if $\|z\|$ is small enough. That proves the first statement of the proposition. For the second part, fix $1\leq s <\infty$. Recall from Section \ref{sec:lp-forms} that, for functions, the $L^r$-norm on $X'$ is equivalent to the $\|w\|^{-4/r} L^r$-norm on $\tilde{D}'$.
But, by the H\"older-inequality, convergence in $\|w\|^{-4/r}L^r$ implies convergence in $L^s$ if $r<\infty$ is chosen large enough. So, the second statement follows from the first one if we just choose $1\leq r<\infty$ large enough (depending on $1\leq s<\infty$). \end{proof} \section{The $L^p$-homotopy formula for the $\bar\partial$-operator in the sense of distributions}\label{sec:main2} The original $\bar\partial$-homotopy formula of Andersson--Samuelsson holds only for forms on the variety $X$ which are the restriction of smooth forms on a neighborhood of the variety (or, more generally, for forms with values in the $\mathcal{A}$-sheaves mentioned in the introduction; see \cite{AS2}, Theorem 1.4). So, in order to extend the $\bar\partial$-homotopy formula to $L^p$-forms given only on the variety, we need to approximate these in an appropriate way by smooth forms extending to a neighborhood of the variety. To do so, we need to cut-off the forms so that they vanish in neighborhoods of the singularity. \subsection{Estimates for the cut-off procedure}\label{sec:main2b} We will use the following cut-off functions to approximate forms by forms with support away from the singularity in different situations. As in \cite{PS}, Lemma 3.6, let $\rho_k: {\mathbb R}\rightarrow [0,1]$, $k\geq 1$, be smooth cut-off functions satisfying $$\rho_k(x)=\left\{\begin{array}{ll} 1 &,\ x\leq k,\\ 0 &,\ x\geq k+1, \end{array}\right.$$ and $|\rho_k'|\leq 2$. Moreover, let $r: {\mathbb R}\rightarrow [0,1/2]$ be a smooth increasing function such that $$r(x)=\left\{\begin{array}{ll} x &,\ x\leq 1/4,\\ 1/2 &,\ x\geq 3/4, \end{array}\right.$$ and $|r'|\leq 1$. As cut-off functions we can use \begin{eqnarray}\label{eq:cutoff1} \mu_k(\zeta):=\rho_k\big(\log(-\log r(\|\zeta\|))\big) \end{eqnarray} on $X$. 
Note that \begin{eqnarray}\label{eq:cutoff2} \big| \bar\partial \mu_k(\zeta)\big| \lesssim \frac{\chi_k(\|\zeta\|)}{\|\zeta\| \big| \log\|\zeta\|\big|}, \end{eqnarray} where $\chi_k$ is the characteristic function of $[e^{-e^{k+1}}, e^{-e^k}]$. \begin{thm}\label{thm:main2} Let $\mathcal{K}$ be the integral operator from Theorem \ref{thm:main1}, and let $\varphi\in L^2_{0,q}(X')$, $1\leq q \leq 2$. Then \begin{eqnarray*} \mathcal{K} \big( \bar\partial\mu_k \wedge \varphi \big) &\longrightarrow& 0 \end{eqnarray*} in $L^2_{0,q}(X)$ as $k\rightarrow \infty$. \end{thm} \begin{proof} By \eqref{eq:cutoff2}, we see that $$\big|\mathcal{K}\big( \bar\partial\mu_k \wedge \varphi\big)\big| \lesssim \big|\mathcal{K}\big|\left( \frac{\chi_k(\|\zeta\|) |\varphi|}{\|\zeta\| \big| \log\|\zeta\| \big|}\right),$$ where if $\mathcal{K}$ is the integral operator defined by the integral kernel $K(\zeta,z)$, then $|\mathcal{K}|$ is the integral operator defined by the integral kernel $|K(\zeta,z)|$. So, let $$\mathcal{K}^k \varphi := \big|\mathcal{K}\big|\left( \frac{\chi_k(\|\zeta\|) \varphi}{\|\zeta\| \big| \log\|\zeta\| \big|}\right),\ \ k\geq 1,$$ be the corresponding sequence of integral operators on $X_k:= X'\cap \supp \chi_k$. Proceeding as in the proof of Theorem \ref{thm:main1} (let $g_1$, $f_1$, $\tilde{g}_1$, $\tilde{f}_1$ be as in \eqref{eqk1coveringexp}), we see by Lemma \ref{lmal2covering}, (iii), that actually $$\mathcal{K}^k \varphi \longrightarrow 0$$ in $L^2_*(X)$ if the kernels \begin{align} \label{eqk1coveringexp2} \tilde{K}_k := \left| \pi^*\left( \frac{\chi_k(\|\zeta\|)}{\|\zeta\| \big|\log\|\zeta\|\big|} \frac{g_1 f_1}{\| \eta \|^3} \omega_X \right) \wedge d\bar{w_1}\wedge d\bar{w_2} \right| \end{align} define a sequence of integral operators $\tilde{\mathcal{K}}^k$, $k\geq 1$, such that \begin{equation} \label{eq:ktildezero} \tilde{\mathcal{K}}^k \varphi \longrightarrow 0 \end{equation} in $\xi^{-2} L^2(\tilde{D})$ for any $\varphi\in L^2(\tilde{D}')$.
Since $(1/2) \|w\|^2 \leq \pi^* \|\zeta\| \leq \|w\|^2$, we get that $\pi^* \chi_k(\|\zeta\|) \leq \chi_{\tilde{D}_k}(w)$, where $\chi_{\tilde{D}_k}$ is the characteristic function on $\tilde{D}_k$ as given by \eqref{eq:Dk}, and we then also get that \begin{align*} \tilde{K}_k \lesssim \frac{\chi_{\tilde{D}_k}(w)}{\|w\|^2 \big|\log\|w\|\big|} \frac{\left| \tilde{g}_1 \tilde{f}_1 \right|}{\alpha^3} \wedge dV(w). \end{align*} Thus, we conclude that \eqref{eq:ktildezero} holds by Lemma \ref{lem:cutoff2}. \end{proof} \subsection{Proof of Theorem \ref{thm:main3}} In order to apply the $\bar\partial$-homotopy formulas of Andersson--Samuelsson to $\varphi$, we need to approximate $\varphi$ and its $\bar\partial$-derivative by smooth forms on a neighborhood of $X$. This can be done appropriately by use of the cut-off functions introduced in Section \ref{sec:main2b}. So, let $$\phi_k := \mu_k \varphi,$$ where $\mu_k$ is the cut-off sequence from Section \ref{sec:main2b}. By Lebesgue's theorem on dominated convergence, note that \begin{align}\label{eq:cutoff01} \phi_k \rightarrow \varphi\ \ , \ \ \mu_k \bar\partial \varphi \rightarrow \bar\partial\varphi\ \ \mbox{ in } L^p_{0,*}(X'). \end{align} As the $\phi_k$ have support away from the singular point, we can apply Friedrichs' density lemma: just use a standard smoothing procedure, i.e., convolution with a Dirac sequence, on the smooth manifold $X^*$ (cf.\ \cite{LiMi}*{Theorem~V.2.6}). So, there are sequences of smooth forms $\phi_{k,l}$ with support away from the singular point such that \begin{align}\label{eq:cutoff01b} \phi_{k,l} \overset{l\rightarrow \infty}{\longrightarrow} \phi_k\ \ , \ \ \bar\partial\phi_{k,l} \overset{l\rightarrow\infty}{\longrightarrow} \bar\partial\phi_k\ \ \mbox{ in } L^p_{0,*}(X').
\end{align} Now the $\phi_{k,l}$ can be extended smoothly to a neighborhood of $X$ and it follows by the $\bar\partial$-homotopy formula of Andersson-Samuelsson, \cite{AS2}, Theorem 1.4, that \begin{eqnarray*} \phi_{k,l} &=& \bar\partial \mathcal{K} \phi_{k,l} + \mathcal{K} \bar\partial \phi_{k,l} \end{eqnarray*} in the sense of distributions on $X$ for all $k,l\geq 1$. From this, it follows by \eqref{eq:cutoff01b} and Theorem \ref{thm:main1} (letting $l\rightarrow \infty$) that the homotopy formula holds for all $\phi_k$, $k\geq 1$: \begin{eqnarray*} \phi_k &=& \bar\partial \mathcal{K} \phi_k + \mathcal{K} \bar\partial \phi_k \\ &=& \bar\partial \mathcal{K} \phi_k + \mathcal{K} \big(\mu_k \bar\partial \varphi\big) + \mathcal{K} \big( \bar\partial \mu_k \wedge \varphi\big) \end{eqnarray*} in the sense of distributions on $X$ for all $k\geq 1$. Using \eqref{eq:cutoff01} and Theorem \ref{thm:main1} again, we see that \begin{eqnarray}\label{eq:cutoff02} \mathcal{K} \phi_k \rightarrow \mathcal{K}\varphi \ \ \mbox{ and } \ \ \mathcal{K}\big(\mu_k\bar\partial\varphi\big) \rightarrow \mathcal{K} \bar\partial \varphi \end{eqnarray} in $L^p_*(X)$. Moreover, using $L^p_*(X') \subset L^2_*(X')$ and Theorem \ref{thm:main2}, we also get that \begin{eqnarray}\label{eq:cutoff03} \mathcal{K}\big(\bar\partial \mu_k\wedge \varphi\big) \rightarrow 0 \end{eqnarray} in $L^2_{0,q}(X)$. So, it follows that actually $\varphi = \bar\partial \mathcal{K}\varphi + \mathcal{K}\big( \bar\partial \varphi\big)$ in the sense of distributions on $X$. \section{Other variants of the $\bar\partial$-operator} \label{sec:main3} \subsection{The strong $\bar\partial$-operator $\bar\partial_s$} In this section, we give the proof of Theorem \ref{thm:main4}. 
Note first that in order to prove that $\phi \in {\rm Dom\, } \bar\partial_s \subset L^2_{0,q}(X)$, it is sufficient to find a sequence $\{ \phi_j \}_j \subset {\rm Dom\, } \bar\partial_w \subset L^2_{0,q}(X)$ with $\esssupp \phi_j \cap \{ 0 \} = \emptyset$ such that \begin{eqnarray} \phi_j \rightarrow \phi \ \ \ &\mbox{ in }& \ \ L^2_{0,q}(X),\\ \bar\partial \phi_j \rightarrow \bar\partial \phi \ \ \ &\mbox{ in }& \ \ L^2_{0,q+1}(X), \end{eqnarray} i.e., it is not necessary to assume that the $\phi_j$ are smooth: if the $\phi_j$ have support outside of the singular set of $X$, then by Friedrichs' extension lemma, \cite{LiMi}*{Theorem~V.2.6}, applied on the complex manifold $X^*$, there exist smooth $\tilde{\phi}_j \in L^2_{0,q}(X)$ with support away from $\{0\}$ such that $\|\phi_j-\tilde{\phi}_j\|_{L^2}$ and $\|\bar\partial \phi_j-\bar\partial\tilde{\phi}_j\|_{L^2}$ are arbitrarily small. So, let $\varphi\in {\rm Dom\, } \bar\partial_w \subseteq L^2_{0,q}(X')$, where $1\leq q \leq 2$. Let $\mu_k$ be the cut-off sequence from Section \ref{sec:main2b} and set $$\phi_k := \mu_k \mathcal{K} \varphi.$$ Then $\{\phi_k\}_k \subset L^2_{0,q-1}(X)$ and it follows by Lebesgue's theorem on dominated convergence that \begin{eqnarray*} \phi_k \rightarrow \mathcal{K}\varphi \ \ \ &\mbox{ in }& \ \ L^2_{0,q-1}(X),\\ \bar\partial \phi_k -\bar\partial\mu_k\wedge \mathcal{K}\varphi = \mu_k \bar\partial \mathcal{K}\varphi \rightarrow \bar\partial \mathcal{K} \varphi \ \ \ &\mbox{ in }& \ \ L^2_{0,q}(X) \end{eqnarray*} as $k\rightarrow \infty$ since $\mathcal{K} \varphi \in L^2_{0,q-1}(X)$ by Theorem~\ref{thm:main1}, and thus, $\bar\partial \mathcal{K} \varphi \in L^2_{0,q}(X)$ by Theorem~\ref{thm:main3}.
To see that actually \begin{eqnarray}\label{eq:claim} \mathcal{K} \varphi &\in& {\rm Dom\, }\bar\partial_s \subset L^2_{0,q-1}(X), \end{eqnarray} we claim that it is enough to show that the set of forms \begin{eqnarray}\label{eq:claim2} \big\{\bar\partial \mu_k \wedge \mathcal{K}\varphi \big\}_k \end{eqnarray} is uniformly bounded in $L^2_{0,q}(X)$. This can be seen by the following duality argument: {\bf Proof of the claim:} Assume that \eqref{eq:claim2} is uniformly bounded in $L^2_{0,q}(X)$. We can assume that $\mathcal{K}\varphi$ has compact support in a small neighborhood, say $V$, of the origin. Then, referring to the notation in \cite{RuDuke}, Section 2.4, we need to show that $\mathcal{K}\varphi\in{\rm Dom\, } \bar\partial_{min}$. But, on the Hermitian manifold $X\cap V\setminus \{0\}$, the $L^2$-adjoint operator of $\bar\partial_{min}$ is $\vartheta_{max}$ (see \cite{RuDuke}, Section 2.4). So, to show the claim, we have to prove that \begin{eqnarray}\label{eq:claim3} \left ( \mathcal{K}\varphi, \vartheta_{max} g\right)_{L^2(X)} &=& \left( \bar\partial \mathcal{K}\varphi, g\right)_{L^2(X)} \end{eqnarray} for all $g\in {\rm Dom\, }\vartheta_{max} \subset L^2_{0,q}(X\cap V)$. For such a $g$, we compute: \begin{eqnarray*} \left ( \mathcal{K}\varphi, \vartheta_{max} g\right)_{L^2(X)} &=& \lim_{k\rightarrow\infty} \left ( \phi_k, \vartheta_{max} g\right)_{L^2(X)}\\ &=& \lim_{k\rightarrow\infty} \left ( \bar\partial \phi_k, g\right)_{L^2(X)}\\ &=& \left( \bar\partial \mathcal{K}\varphi, g\right)_{L^2(X)} + \lim_{k\rightarrow\infty} \left ( \bar\partial \mu_k \wedge \mathcal{K}\varphi, g\right)_{L^2(X)}.
\end{eqnarray*} But, as \eqref{eq:claim2} is uniformly bounded, we have furthermore: \begin{eqnarray*} \left|\left ( \bar\partial \mu_k \wedge \mathcal{K}\varphi, g\right)_{L^2(X)}\right| &\lesssim& \|g\|_{L^2(\supp \bar\partial\mu_k)} \overset{k\rightarrow \infty}{\longrightarrow} 0, \end{eqnarray*} because $g$ is square-integrable and the domain of integration vanishes. This proves the claim. \qed To show that \eqref{eq:claim2} is uniformly bounded, we proceed similarly as in the proof of Theorem \ref{thm:main2}. By \eqref{eq:cutoff2}, we see that $$\left| \bar\partial \mu_k\wedge \mathcal{K}\varphi \big(z\big) \right| \lesssim \frac{\chi_k(\|z\|)}{\|z\| \big| \log\|z\| \big|} \left| \mathcal{K}\varphi (z) \right| .$$ So, let $$\mathcal{K}^k \varphi (z):= \frac{\chi_k(\|z\|)}{\|z\| \big| \log\|z\| \big|}\, \mathcal{K}\varphi(z),\ \ k\geq 1,$$ be the corresponding sequence of integral operators on $X'$. Proceeding as in the proof of Theorem \ref{thm:main1} (let $g_1$, $f_1$, $\tilde{g}_1$, $\tilde{f}_1$ be as in \eqref{eqk1coveringexp}), we see by Lemma \ref{lmal2covering}, (iv), that actually $$\{\mathcal{K}^k \varphi\}_k$$ is uniformly bounded in $L^2_*(X)$ if the kernels \begin{align} \label{eqk1coveringexp3} \tilde{K}_k := \left| \pi^*\left( \frac{\chi_k(\|z\|)}{\|z\| \big|\log\|z\|\big|} \frac{g_1 f_1}{\| \eta \|^3} \omega_X \right) \wedge d\bar{w_1}\wedge d\bar{w_2} \right| \end{align} define a sequence of integral operators $\tilde{\mathcal{K}}^k$ on $\tilde{D}$ such that \begin{equation} \label{eq:Ktilde} \{\tilde{\mathcal{K}}^k\varphi \}_k \end{equation} is uniformly bounded in $\xi^{-2} L^2(\tilde{D})$ for any $\varphi\in L^2(\tilde{D}')$.
As in the end of the proof of Theorem \ref{thm:main2}, we get that \begin{equation*} \tilde{K}_k \lesssim \frac{\chi_{\tilde{D}_k}(x)}{\|x\|^2 \big|\log\|x\|\big|} \frac{| \tilde{g}_1 \tilde{f}_1 |}{\alpha^3} \wedge dV(w), \end{equation*} and thus, \eqref{eq:Ktilde} is uniformly bounded by Lemma \ref{lem:cutoff1}. \subsection{Andersson--Samuelsson's operator $\bar\partial_X$} In this section, we give the proof of Theorem \ref{thm:main5}. By Theorem \ref{thm:main4}, $\mathcal{K}\varphi \in {\rm Dom\, }\bar\partial_s$. So, there is a sequence $\{\psi_j\}_j$ of smooth forms with support away from the singular point, $\supp \psi_j \cap \{0\} =\emptyset$, and such that \begin{eqnarray}\label{eq:conv} \psi_j \rightarrow \mathcal{K}\varphi\ \ \mbox{ and }\ \ \bar\partial\psi_j \rightarrow \bar\partial \mathcal{K}\varphi \end{eqnarray} in the $L^2$-sense on $X$ as $j\rightarrow \infty$ (see \eqref{eq:dbars1}, \eqref{eq:dbars2}). By \cite{AS2}, Proposition 1.5, $\mathcal{K}\varphi \in \mathcal{W}(X)$. In addition, since we assume that $\bar\partial \varphi \in L^2$, we get by Theorem~\ref{thm:main1} that $\mathcal{K} \bar\partial \varphi \in L^2(X)$, and by Theorem~\ref{thm:main3}, we then get that $\bar\partial \mathcal{K} \varphi \in L^2(X)$. Since $\mathcal{K} \varphi \in \mathcal{W}(X) \subseteq {\mathcal{PM}}(X)$, also $\bar\partial \mathcal{K} \varphi \in {\mathcal{PM}}(X)$ since ${\mathcal{PM}}(X)$ is closed under $\bar\partial$. Hence, $\bar\partial \mathcal{K} \varphi \in L^2(X) \cap {\mathcal{PM}}(X)$, and by dominated convergence, we get that $\bar\partial \mathcal{K} \varphi \in \mathcal{W}(X)$. We have to show that \begin{eqnarray}\label{eq:dbarX1} \bar\partial \big( \mathcal{K}\varphi\wedge \omega_X\big) = \big( \bar\partial\mathcal{K}\varphi\big) \wedge \omega_X \end{eqnarray} in the sense of distributions (see \cite{AS2}, Proposition 4.4). But $\omega_X \in L^2_{2,0}(X)$ by \eqref{eq:omegaX1} and Lemma \ref{lem:lpbound} (consider $\overline{\omega_X}$). 
So, $\psi_j \rightarrow \mathcal{K}\varphi$ in $L^2_{0,q-1}(X)$ implies by use of the H\"older inequality that $\psi_j\wedge \omega_X \rightarrow \mathcal{K}\varphi\wedge \omega_X$ in the sense of distributions. By the same argument, we see that $\big(\bar\partial \psi_j\big)\wedge \omega_X \rightarrow \big(\bar\partial\mathcal{K}\varphi\big)\wedge \omega_X$ in the sense of distributions. But $\psi_j\in {\rm Dom\, } \bar\partial_X$, i.e., $\bar\partial\big( \psi_j\wedge\omega_X\big) = \big(\bar\partial\psi_j \big)\wedge\omega_X$ in the sense of distributions. So, we actually have \begin{eqnarray*} \bar\partial \big( \mathcal{K}\varphi\wedge \omega_X\big) = \lim_{j\rightarrow\infty} \bar\partial \big( \psi_j\wedge \omega_X\big) = \lim_{j\rightarrow \infty} \big(\bar\partial\psi_j\big) \wedge\omega_X = \big(\bar\partial\mathcal{K}\varphi\big)\wedge\omega_X \end{eqnarray*} in the sense of distributions. \appendix \section{Estimates for integral kernels in ${\mathbb C}^n$}\label{sec:appendix} \begin{lma}\label{lem:estimateCn1} Let $\alpha\in {\mathbb R}$. Then there exists a constant $C_\alpha>0$ such that the following holds: \begin{align}\label{eq:estimateCn1} I(r_1,r_2) := \int_{B_{r_2}(x)\setminus \overline{B_{r_1}(x)}} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha} \leq C_\alpha \left\{ \begin{array}{ll} r_2^{2n-\alpha} & \ ,\ \alpha<2n,\\ |\log r_2| + |\log r_1|& \ ,\ \alpha=2n,\\ r_1^{2n-\alpha} & \ ,\ \alpha>2n, \end{array}\right. \end{align} for all $x\in {\mathbb C}^n$ and all $0<r_1 \leq r_2< \infty$. 
\end{lma} \begin{proof} A simple calculation, using Fubini, gives: \begin{eqnarray*} I(r_1,r_2) &=& \int_{r_1}^{r_2} \int_{bB_t(x)} \frac{dS_{bB_t(x)}(\zeta)}{t^\alpha} dt \sim \int_{r_1}^{r_2} \frac{t^{2n-1}}{t^\alpha} dt \\ &\lesssim& \left\{ \begin{array}{ll} r_2^{2n-\alpha} - r_1^{2n-\alpha} & , \alpha <2n\\ \log r_2 - \log r_1 & , \alpha=2n\\ r_1^{2n-\alpha} - r_2^{2n-\alpha} & , \alpha > 2n \end{array}\right\} \leq \left\{ \begin{array}{ll} r_2^{2n-\alpha} & , \alpha <2n,\\ |\log r_2| + |\log r_1| & , \alpha=2n,\\ r_1^{2n-\alpha} & , \alpha > 2n. \end{array}\right. \end{eqnarray*} \end{proof} From that we can deduce our first basic estimate: \begin{lma}\label{lem:estimateCn2} Let $D\subset \subset {\mathbb C}^n$ be a bounded domain and $0 \leq \alpha,\beta <2n$. Then there exists a constant $C_1>0$ such that the following holds: \begin{align}\label{eq:estimateCn2} \int_{D} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x_1\|^\alpha \|\zeta-x_2\|^\beta} \leq C_1 \left\{ \begin{array}{ll} 1 & \ ,\ \alpha+\beta<2n,\\ 1+ \big|\log \|x_1 - x_2\|\big| &\ ,\ \alpha+\beta=2n,\\ \|x_1 - x_2\|^{2n-\alpha-\beta} &\ ,\ \alpha+\beta>2n, \end{array}\right. \end{align} for all $x_1, x_2 \in {\mathbb C}^n$ with $x_1\neq x_2$. \end{lma} \begin{proof} Let $R/2$ be the diameter of $D$ in ${\mathbb C}^n$. We can assume that $D$ is not empty and that $R/2>0$. Further, we can assume that $\dist_{{\mathbb C}^n}(D,x_1) < R/2$ (otherwise, the estimate just gets easier). This implies \begin{eqnarray}\label{eq:R1} D \subset B_R(x_1). \end{eqnarray} Let $\delta:=\|x_1-x_2\|/3$. We divide the domain of integration in three regions $D_1$, $D_2$ and $D\setminus(D_1\cup D_2)$. 
Let $$D_1:= D \cap B_{\delta}(x_1)\ \ ,\ \ D_2:= D \cap B_{\delta}(x_2).$$ Then $\|\zeta-x_2\| \geq \delta$ on $D_1$ and so \begin{align}\label{eq:es11} \int_{D_1} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x_1\|^\alpha \|\zeta-x_2\|^\beta} \leq \delta^{-\beta} \int_{B_{\delta}(x_1)} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x_1\|^\alpha} \leq C_\alpha \delta^{-\beta+2n-\alpha} \end{align} by use of Lemma \ref{lem:estimateCn1} (using $\alpha<2n$ and letting $r_2=\delta$, $r_1\rightarrow 0$). As $\|\zeta-x_1\|\geq \delta$ on $D_2$, analogously: \begin{eqnarray}\label{eq:es12} \int_{D_2} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x_1\|^\alpha \|\zeta-x_2\|^\beta} &\leq& C_\beta \delta^{-\alpha+2n-\beta}. \end{eqnarray} It remains to consider the integral over $D \setminus (D_1\cup D_2)$. Here, $\|\zeta-x_2\|\geq \delta$ and that yields: \begin{eqnarray*} \|\zeta - x_1\| \leq \|\zeta-x_2\| + \|x_1-x_2\| = \|\zeta- x_2\| + 3 \delta \leq 4 \|\zeta-x_2\|. \end{eqnarray*} So, we can estimate by use of \eqref{eq:R1} and Lemma \ref{lem:estimateCn1}: \begin{eqnarray*} && \int_{D \setminus (D_1\cup D_2)} \frac{dV_{{\mathbb C}^n} (\zeta)}{\|\zeta-x_1\|^\alpha \|\zeta-x_2\|^\beta} \leq 4^{\beta} \int_{B_{R}(x_1)\setminus \overline{B_{\delta}(x_1)}} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x_1\|^{\alpha+\beta}}\\ &\leq& 4^{\beta} C_{\alpha+\beta} \left\{ \begin{array}{ll} R^{2n-\alpha-\beta} & \ ,\ \alpha+\beta<2n,\\ |\log R|+|\log \delta| & \ ,\ \alpha+\beta=2n,\\ \delta^{2n-\alpha-\beta} & \ ,\ \alpha+\beta>2n. \end{array}\right. \end{eqnarray*} The assertion follows easily from this statement in combination with \eqref{eq:es11} and \eqref{eq:es12}. \end{proof} Another basic estimate is: \begin{lma}\label{lem:estimateCn3} Let $D\subset \subset {\mathbb C}^n$ be a bounded domain, $0 \leq \alpha,\beta <2n$ and $\gamma > -2n$.
Then there exists a constant $C_2>0$ such that the following holds: \begin{align}\label{eq:estimateCn3} \int_{D} \frac{\|\zeta\|^\gamma dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta} \leq C_2 \left\{ \begin{array}{ll} 1 & \ ,\ \alpha+\beta<2n+\gamma,\\ 1+ \big|\log \|x\|\big| &\ ,\ \alpha+\beta=2n+\gamma,\\ \|x\|^{2n+\gamma-\alpha-\beta} &\ ,\ \alpha+\beta>2n+\gamma, \end{array}\right. \end{align} for all $x \in {\mathbb C}^n$ with $x\neq 0$. \end{lma} \begin{proof} We can proceed similarly to the proof of Lemma \ref{lem:estimateCn2}, but have to divide $D$ into four regions. We can assume that $D$ is contained in a ball $B_R(0)$. Let $\delta:=\|x\|/3$ and set $$D_0:= B_\delta(0)\ \ ,\ \ D_1:= B_\delta(x)\ \ ,\ \ D_2:= B_\delta(-x).$$ Then $\|\zeta + x\|=\|(\zeta-x) + 2x\| \geq 5\delta$ and $\|\zeta\| \leq 4 \delta$ on $D_1$ and so we obtain \begin{align}\label{eq:es13} \int_{D_1} \frac{\|\zeta\|^\gamma dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta} \lesssim \delta^{\gamma-\beta} \int_{B_\delta(x)} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha} \leq C_\alpha \delta^{2n+\gamma-\alpha-\beta} \end{align} by use of Lemma \ref{lem:estimateCn1}. Analogously, \begin{align}\label{eq:es14} \int_{D_2} \frac{\|\zeta\|^\gamma dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta} \lesssim \delta^{\gamma-\alpha} \int_{B_\delta(-x)} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta+x\|^\beta} \leq C_\beta \delta^{2n+\gamma-\alpha-\beta}. \end{align} Similarly, we have $\|\zeta-x\|\geq 2\delta$ and $\|\zeta+x\|\geq 2\delta$ on $D_0$ and that gives \begin{align}\label{eq:es15} \int_{D_0} \frac{\|\zeta\|^\gamma dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta} \leq \delta^{-\alpha-\beta} \int_{B_\delta(0)} \|\zeta\|^\gamma dV_{{\mathbb C}^n} \leq C_\gamma \delta^{2n+\gamma-\alpha-\beta}. \end{align} Finally, we have to consider $D\setminus\big(D_0\cup D_1\cup D_2\big)$.
Here, $$\|\zeta\| \leq \|\zeta-x\| + \|x\| = \|\zeta-x\| + 3\delta \leq 4 \|\zeta-x\|,$$ and analogously $\|\zeta\| \leq 4 \|\zeta+x\|$. From that we deduce: \begin{eqnarray*} && \int_{D\setminus\big(D_0\cup D_1\cup D_2\big)} \frac{\|\zeta\|^\gamma dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta} \leq 4^{\alpha+\beta} \int_{B_R(0) \setminus \overline{B_\delta(0)}} \|\zeta\|^{\gamma-\alpha-\beta} dV_{{\mathbb C}^n}(\zeta)\\ &\leq& 4^{\alpha+\beta} C_{\alpha+\beta-\gamma} \left\{ \begin{array}{ll} R^{2n+\gamma-\alpha-\beta} & \ ,\ \alpha+\beta<2n+\gamma,\\ |\log R|+|\log \delta| & \ ,\ \alpha+\beta=2n+\gamma,\\ \delta^{2n+\gamma-\alpha-\beta} & \ ,\ \alpha+\beta>2n+\gamma, \end{array}\right. \end{eqnarray*} The assertion follows easily from this in combination with \eqref{eq:es13}, \eqref{eq:es14} and \eqref{eq:es15}. \end{proof} For use in cut-off procedures, we need also: \begin{lma}\label{lem:estimateCn4} Let $n\geq 2$. Moreover, let $0\leq \gamma\leq 6$ and $0 \leq \alpha,\beta <2n$ with $\alpha+\beta=2n+2\geq 6$. Then there exists a constant $C_3>0$ such that the following holds: \begin{eqnarray*}\label{eq:estimateCn4} \|x\|^{6-\gamma}\int_{B_{\epsilon_{k-1}}(0) \setminus \overline{B_{\epsilon_{k+1}}(0)}} \frac{\|\zeta\|^{\gamma -4} dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta \big|\log\|\zeta\|\big|} \leq C_3 \end{eqnarray*} for all $x \in {\mathbb C}^n$ and all $k\in{\mathbb Z}$, $k\geq 1$, where $\epsilon_k=e^{-e^k/2}$. 
\end{lma} \begin{proof} Let $\delta:=\|x\|/3$ and set $$D_1:= B_\delta(x)\ \ ,\ \ D_2:= B_\delta(-x).$$ Then $\|\zeta + x\|=\|(\zeta-x) + 2x\| \geq 5\delta$ and $\|\zeta\| \leq 4 \delta$ on $D_1$ and so we obtain \begin{eqnarray*} \int_{D_1} \frac{\|x\|^{6-\gamma} \|\zeta\|^{\gamma-4} dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta \big|\log\|\zeta\|\big|} &\lesssim& \frac{\delta^{6-\gamma+\gamma-4 -\beta}}{\log 4 + |\log\delta|} \int_{B_\delta(x)} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha}\\ &\leq& \frac{\delta^{2-\beta} C_\alpha \delta^{2n-\alpha}}{\log 4 + |\log\delta|} \lesssim 1 \end{eqnarray*} by use of Lemma \ref{lem:estimateCn1} and $\alpha+\beta=2n+2$ (on the domain of integration, the $\log$-term only helps). The integral over $D_2$ is treated completely analogously. Finally, we have to consider $D := \big( B_{\epsilon_{k-1}}(0) \setminus \overline{B_{\epsilon_{k+1}}(0)}\big) \setminus \big(D_1\cup D_2\big)$. Here, we can use $\|\zeta-x\|\geq \delta=\|x\|/3$ and $\|\zeta + x\|\geq \delta=\|x\|/3$ to eliminate $\|x\|$ in the numerator. Moreover, we have $$\|\zeta\| \leq \|\zeta-x\| + \|x\| = \|\zeta-x\| + 3\delta \leq 4 \|\zeta-x\|,$$ and analogously $\|\zeta\| \leq 4 \|\zeta+x\|$. From that we deduce: \begin{eqnarray*} && \int_{D} \frac{\|x\|^{6-\gamma}\|\zeta\|^{\gamma -4} dV_{{\mathbb C}^n}(\zeta)}{\|\zeta-x\|^\alpha \|\zeta+x\|^\beta \big|\log\|\zeta\|\big|} \lesssim \int_{B_{\epsilon_{k-1}}(0) \setminus \overline{B_{\epsilon_{k+1}}(0)}} \frac{dV_{{\mathbb C}^n}(\zeta)}{\|\zeta\|^{2n} \big|\log\|\zeta\|\big|}\\ &\sim& \int_{\epsilon_{k+1}}^{\epsilon_{k-1}} \frac{- dt}{t \log t} = - \log (-\log t) \big|^{\epsilon_{k-1}}_{\epsilon_{k+1}} = -(k-1) + (k+1) = 2. \end{eqnarray*} \end{proof} \begin{bibdiv} \begin{biblist} \bib{AS2}{article}{ author={Andersson, Mats}, author={Samuelsson, H{\aa}kan}, title={A Dolbeault-Grothendieck lemma on complex spaces via Koppelman formulas}, journal={Invent.
Math.}, volume={190}, date={2012}, number={2}, pages={261--297}, } \bib{Dem}{article}{ author={Demailly, Jean-Pierre}, title={Complex Analytic and Differential Geometry}, status={Monograph}, place={Grenoble}, eprint={http://www-fourier.ujf-grenoble.fr/~demailly}, } \bib{FoGa}{article}{ author={Forn{\ae}ss, John Erik}, author={Gavosto, Estela A.}, title={The Cauchy Riemann equation on singular spaces}, journal={Duke Math. J.}, volume={93}, date={1998}, number={3}, pages={453--477}, } \bib{FOV}{article}{ author={Forn{\ae}ss, John Erik}, author={{\O}vrelid, Nils}, author={Vassiliadou, Sophia}, title={Local $L^2$ results for $\overline\partial$: the isolated singularities case}, journal={Internat. J. Math.}, volume={16}, date={2005}, number={4}, pages={387--418}, } \bib{HePo}{article}{ author={Henkin, Guennadi M.}, author={Polyakov, Pierre L.}, title={The Grothendieck-Dolbeault lemma for complete intersections}, journal={C. R. Acad. Sci. Paris S\'er. I Math.}, volume={308}, date={1989}, number={13}, pages={405--409}, } \bib{LR2}{article}{ author={L\"ark\"ang, Richard}, author={Ruppenthal, Jean}, title={Koppelman formulas on affine cones over smooth projective complete intersections}, journal={Indiana Univ. Math. J.}, status={to appear}, eprint={arXiv:1509.00987 [math.CV]} } \bib{LiMi}{book}{ author={Lieb, Ingo}, author={Michel, Joachim}, title={The Cauchy-Riemann complex}, series={Aspects of Mathematics, E34}, publisher={Friedr. Vieweg \& Sohn, Braunschweig}, date={2002}, } \bib{OV2}{article}{ author={{\O}vrelid, Nils}, author={Vassiliadou, Sophia}, title={$L^2$-$\overline\partial$-cohomology groups of some singular complex spaces}, journal={Invent. Math.}, volume={192}, date={2013}, number={2}, pages={413--458}, } \bib{PS}{article}{ author={Pardon, William}, author={Stern, Mark}, title={$L^2$-$\overline\partial$-cohomology of complex projective varieties}, journal={J. Amer. Math.
Soc.}, volume={4}, date={1991}, number={3}, pages={603--621}, } \bib{Range}{book}{ author={Range, R. Michael}, title={Holomorphic functions and integral representations in several complex variables}, series={Graduate Texts in Mathematics}, volume={108}, publisher={Springer-Verlag}, place={New York}, date={1986}, pages={xx+386}, } \bib{RuDipl}{thesis}{ author={Ruppenthal, Jean}, title={Zur Regularit\"at der Cauchy-Riemannschen Differentialgleichungen auf komplexen Kurven}, place={University of Bonn}, type={Diplomarbeit}, year={2003} } \bib{RuppThesis}{thesis}{ author={Ruppenthal, Jean}, title={Zur Regularit\"at der Cauchy-Riemannschen Differentialgleichungen auf komplexen R\"aumen}, place={University of Bonn}, type={PhD thesis}, year={2006} } \bib{RuMatZ2}{article}{ author={Ruppenthal, Jean}, title={The $\overline\partial$-equation on homogeneous varieties with an isolated singularity}, journal={Math. Z.}, volume={263}, date={2009}, number={2}, pages={447--472}, } \bib{RuDuke}{article}{ author={Ruppenthal, Jean}, title={$L^2$-theory for the $\bar\partial$-operator on compact complex spaces}, journal={Duke Math. J.}, volume={163}, date={2014}, number={15}, pages={2887--2934}, } \bib{RuSerre}{article}{ author={Ruppenthal, Jean}, title={$L^2$-Serre duality on singular complex spaces and rational singularities}, journal={Int. Math. Res. Not. IMRN}, status={to appear}, eprint={arXiv:1401.4563 [math.CV]} } \bib{RuZeI}{article}{ author={Ruppenthal, Jean}, author={Zeron, E. S.}, title={An explicit $\overline\partial$-integration formula for weighted homogeneous varieties}, journal={Michigan Math. J.}, volume={58}, date={2009}, number={2}, pages={441--457}, } \end{biblist} \end{bibdiv} \end{document}
Chapter 1: Fundamental Concepts 1.1 Preparatory Concepts 1.1.1 Scalar vs. Vector 1.1.2 Newton's Laws 1.1.3 Units 1.1.4 Measurement Conversions 1.1.5 Weight vs. Mass 1.1.6 Pythagorean Theorem 1.1.7 Sine/Cosine Laws 1.2 XYZ Coordinate Frame 1.2.1 Cartesian Coordinate Frame in 2D 1.2.2 Cartesian Coordinate Frame in 3D 1.3 Vectors 1.3.1 Vector Components 1.3.2 Componentizing a Vector 1.3.3 Position Vector 1.3.4 Vector Math 1.4 Dot Product 1.5 Cross Products 1.6 Torque/Moment 1.6.1 Moments 1.6.2 Scalar Method in 2 Dimensions 1.6.3 Vector Method in 3 Dimensions 1.7 Problem Solving Process Example 1.8.1: Vectors, Submitted by Tyson Ashton-Losee Example 1.8.2: Vectors, Submitted by Brian MacDonald Example 1.8.3: Dot product and cross product, submitted by Anonymous ENGN 1230 Student Example 1.8.4: Torque, Submitted by Luke McCarvill Example 1.8.5: Torque, submitted by Hamza Ben Driouech Example 1.8.6: Bonus Vector Material, Submitted by Liam Murdock Chapter 2: Particles 2.1 Particle & Rigid Body 2.2 Free Body Diagrams for Particles 2.3 Equilibrium Equations for Particles 2.4.
Examples Chapter 3: Rigid Body Basics 3.1 Right Hand Rule 3.1.1 The Whole-Hand Method 3.1.2 Right Hand Rule and Torque 3.1.3 Three-Finger Configuration 3.2 Couples 3.3 Distributed Loads 3.3.1 Intensity 3.3.2 Equivalent Point Load & Location 3.3.3 Composite Distributed Loads 3.4 Reactions & Supports 3.5 Indeterminate Loads Example 3.6.1: Reaction Forces, Submitted by Andrew Williamson Example 3.6.2: Couples, Submitted by Kirsty MacLellan Example 3.6.3: Distributed Load, Submitted by Luciana Davila Chapter 4: Rigid Bodies 4.1 External Forces 4.2 Rigid Body Free Body Diagrams 4.2.1 Part FBD 4.2.2 System FBD 4.2.3 Examples 4.3 Rigid Body Equilibrium Equations 4.4 Friction and Impending Motion Example 4.5.1: External Forces, submitted by Elliott Fraser Example 4.5.2: Free-Body Diagrams, submitted by Victoria Keefe Example 4.5.3: Friction, submitted by Deanna Malone Example 4.5.4: Friction, submitted by Dhruvil Kanani Example 4.5.5: Friction, submitted by Emma Christensen Chapter 5: Trusses 5.1 Trusses Introduction 5.1.1 Two Force Members 5.1.2 Trusses 5.1.3 Parts of a Truss 5.1.4 Tension & Compression 5.2 Method of Joints 5.3 Method of Sections 5.4 Zero-Force Members Example 5.5.1: Method of Sections – Submitted by Riley Fitzpatrick Example 5.5.2: Zero-Force Members, submitted by Michael Oppong-Ampomah Chapter 6: Internal Forces 6.1 Types of Internal Forces 6.1.1 Types of Internal Forces 6.1.2 Sign Convention 6.1.3 Calculating the Internal Forces 6.2 Shear/Moment Diagrams 6.2.1 What are Shear/Moment Diagrams? 
6.2.2 Distributed Loads & Shear/Moment Diagrams 6.2.3 Producing a Shear/Moment Diagram 6.2.4 Tips & Plot Shapes Example 6.3.1: Internal Forces – Submitted by Emma Christensen Example 6.3.2: Shear/Moment Diagrams – Submitted by Deanna Malone Chapter 7: Inertia 7.1 Center of Mass: Single Objects 7.1.1 Center of Mass of Two Particles 7.1.2 Center of Mass in 2D & 3D 7.1.3 The Center of Mass of a Thin Uniform Rod (Calculus Method) 7.1.4 The Center of Mass of a Non-Uniform Rod 7.2 Center of Mass: Composite Shapes 7.2.1 Centroid Tables 7.2.2 Composite Shapes 7.3 Types of Inertia 7.4 Mass Moment of Inertia 7.4.1 Intro to Mass Moment of Inertia 7.4.2 Inertia Table of Common Shapes 7.4.3 Radius of Gyration 7.5 Inertia Intro: Parallel Axis Theorem Example 7.6.1: All of Ch 7 – Submitted by William Craine Example 7.6.2: Inertia – Submitted by Luke McCarvill Appendix A: Included Open Textbooks Engineering Mechanics: Statics Here are examples from Chapter 1 to help you understand these concepts better. These were taken from the real world and supplied by FSDE students in Summer 2021. If you'd like to submit your own examples, please send them to: [email protected] After a long day of studying, a student sitting at their computer moves the cursor from the bottom left of the screen to the top right in order to close a web browser. The computer mouse was displaced 6 cm along the x-axis and 3.5 cm along the y-axis. Draw the resultant vector and calculate the distance traveled. Source: https://www.flickr.com/photos/dejankrsmanovic/33218207918
2. Draw
3. Knowns and Unknowns
Known: x = 6 cm, y = 3.5 cm
Unknown: r, θ
4.
Approach Use SOH CAH TOA: first find θ, then r. \begin{aligned} &\tan \theta=\frac{y}{x} \\ &\tan \theta=\frac{3.5 \mathrm{~cm}}{6 \mathrm{~cm}} \\ &\theta=\tan ^{-1}\left(\frac{3.5}{6}\right) \\ &\theta=30.256^{\circ} \\ &\sin \theta=\frac{y}{r} \\ &r=\frac{y}{\sin \theta} \\ &r=\frac{3.5 \mathrm{~cm}}{\sin \left(30.256^{\circ}\right)} \\ &r=6.946 \mathrm{~cm} \\ &r \approx 6.9 \mathrm{~cm} \end{aligned} It makes sense that the angle is less than 45°, because y is smaller than x. Also, if you use the Pythagorean theorem to find r, you get the same answer. Mark is fishing in the ocean with his favourite fishing rod. The distance between the tip of the rod and the reel is 8 ft and the length of the reel handle is 0.25 ft. The angle between the fishing rod and fishing line is 45 degrees. If Mark catches a fish when 25 ft of the fishing line is released while the fish is diving down with a force of 180 N, how much force does Mark need to apply (push down) to the reel handle to bring in the fish? Draw the position vector of the fish relative to the reel. Mark can reel in the fish when he generates more torque with the handle than the amount of torque that the fish is applying to the reel while pulling on the line. The fishing line comes out of the reel in a straight line at a 90-degree angle. Source: https://commons.wikimedia.org/wiki/File:Deepsea.JPG Sketch: Free-body diagram: rAB = 0.25 ft, rBC = 8 ft, rCD = 25 ft, FD = 180 N, θ = 45° Unknown: FA, vector rAD Convert feet to meters, then use the equation below.
[latex]T=|r| *|F| * \sin \theta\\[/latex] Step 1: convert feet to meters [latex]\begin{align} &25 \mathrm{ft} * \frac{12 \mathrm{in}}{1 \mathrm{ft}} * \frac{2.54 \mathrm{cm}}{1 \mathrm{in}} * \frac{1 \mathrm{m}}{100 \mathrm{cm}}=7.62 \mathrm{m}\\\\ &\quad\mathrm{and}\\\\ &0.25 \mathrm{ft} * \frac{12 \mathrm{in}}{1\mathrm{ft}} * \frac{2.54 \mathrm{cm}}{1 \mathrm{in}} * \frac{1 \mathrm{m}}{100 \mathrm{cm}}=0.0762 \mathrm{m}\\ \end{align}[/latex] Step 2: solve for TD [latex]\begin{aligned}&T_{D}=\left|r_{C D}\right| * \left|F_{D}\right| * \sin \theta\\ &T_{D}=(7.62 m)(180 N) \sin \left(45^{\circ}\right)\\ &T_{D}=969.86766 \mathrm{Nm} \end{aligned}[/latex] Step 3: Solve for FA [latex]\begin{aligned}&T_{A}=\left|r_{AB}\right| * \left|F_{A}\right| * \sin \theta\\ &\text { Assume } T_{A}=T_{D}\\ &F_{A}=\frac{T_{D}}{\left|r_{AB}\right| * \sin \theta} \\ &F_{A}=\frac{969.86766 \mathrm{ Nm}}{0.0762 \mathrm{~m} \cdot \sin \left(45^{\circ}\right)} \\ &F_{A}=17,999.998 \mathrm{N} \\ &F_{A}=18,000 \mathrm{N} \end{aligned}[/latex] Vector rAD: [latex]\begin{aligned} &\vec r_{A D}=\vec r_{A B}+\vec r_{B C}+\vec r_{C D} \\ &\vec {r}_{A D}=\left[\begin{array}{c} 0.25 \\ 0 \end{array}\right] f t+\left[\begin{array}{l} 0 \\ 8 \end{array}\right] f t+\left[\begin{array}{c} 25 \sin 45^{\circ} \\ -25 \cos 45^{\circ} \end{array}\right] f t \\ &\vec r_{A D}=\left[\begin{array}{c} 17.93 \\ -9.68 \end{array}\right] ft \end{aligned}[/latex] The answer, though yielding a very large number, appears to be correct from the information given. 18,000 N is the force Mark would need to apply to the reel handle to generate the same amount of torque that the fish creates. In reality, 18,000 N is too much for one person to generate, but in a real scenario one would also not need to match the fish's torque directly, thanks to reel gearing, the torque generated by the fishing rod itself, and so on.
In other words, 18,000 N of force is too high in a real scenario, but with the assumptions given in the problem, the number seems reasonable. The answer also has the correct unit, N. $$\underline{a}=[6\;\;\;5\;\;\;3]\;\;\;\underline{b}=[8\;\;\;1\;\;\;3]$$ a) Find 6b b) Find [latex]a\cdot b[/latex] c) Find [latex]a\times b[/latex] d) Find [latex]2a\times b[/latex] Known: a, b Unknowns: a) 6b, b) [latex]a\cdot b[/latex], c) [latex]a\times b[/latex], d) [latex]2a\times b[/latex] Use the dot product and cross product equations. Part a: $$6\underline{b}=6*[8\;\;\;1\;\;\;3]\\6\underline{b}=[48\;\;\;6\;\;\;18]$$ Part b: $$\underline{a}\cdot\underline{b}=[6\;\;\;5\;\;\;3]\cdot[8\;\;\;1\;\;\;3]\\=6\cdot 8+5\cdot 1+3\cdot3\\=48+5+9\\\underline{a}\cdot\underline{b}=62$$ Part c: $$\underline{a}\times\underline{b}=\begin{bmatrix} \underline{\hat{i}} &\underline{\hat{j}} & \underline{\hat{k}} \\ 6 & 5 & 3 \\ 8 & 1 & 3 \end{bmatrix}\\=(5\cdot 3-3\cdot 1)\underline{\hat{i}}-(6\cdot 3-3\cdot 8)\underline{\hat{j}}+(6\cdot 1-5\cdot 8)\underline{\hat{k}}\\\underline{a}\times\underline{b}=12\underline{\hat{i}}+6\underline{\hat{j}}-34\underline{\hat{k}}$$ Part d: $$ 2\underline{a}=2*[6\;\;\;5\;\;\;3]=[12\;\;\;10\;\;\;6]\\\underline{b}=[8\;\;\;1\;\;\;3]\\2\underline{a}\times\underline{b} = \begin{bmatrix} \underline{\hat{i}} & \underline{\hat{j}} & \underline{\hat{k}} \\ 12 & 10 & 6 \\ 8 & 1 & 3 \\ \end{bmatrix} \\=(10\cdot 3-6\cdot 1)\underline{\hat{i}}-(12\cdot 3-6\cdot 8)\underline{\hat{j}}+(12\cdot 1-10\cdot 8)\underline{\hat{k}}\\2\underline{a}\times\underline{b}=24\underline{\hat{i}}+12\underline{\hat{j}}-68\underline{\hat{k}}$$ The answer to part d is double the answer for part c, which makes sense. It also makes sense that the answers to parts a, c, and d are vectors with three components, while the answer to part b is a scalar with only a magnitude. To start riding her bicycle, Jane must push down on one of her bike's pedals, which are on 16-centimeter-long crank arms. Jane can push directly downwards with her legs with a force of 100 N.
Jane notices that the pedal's starting position can sometimes make it more or less useful in generating torque. a) What is the ideal angle that Jane's bike pedal should be at in order to generate the most torque? Prove this mathematically. (Assume we only care about the very start of her very first push, and choose a reference frame for the angle that makes the most sense for you). b) What angle(s) should the bike pedal be at if Jane wants to generate exactly half of the maximum amount of torque? c) Is there any position(s) at which the pedal will create zero torque? Where are they and why? Source: https://commons.wikimedia.org/wiki/File:Girl_on_a_Bike_(Imagicity_116).jpg Source: https://pixabay.com/illustrations/bicycle-cycle-two-wheeler-pedal-3168934/ Knowns: [latex]\begin{aligned} &\vec r=\left[\begin{array}{c} 0.16 \\ 0 \\ 0 \end{array}\right] m \\ &\vec F_{A}=\left[\begin{array}{c} 0 \\ -100 \\ 0 \end{array}\right] N \end{aligned}[/latex] Unknowns: position of r for maximum torque position of r for half of maximum torque position of r for zero torque, and why For part a), I will find a general equation for torque based on the given values in terms of θ, then analyze the function for its maxima. For part b), I will find the magnitude of 50% of maximum torque and then reverse-engineer the equation to determine what angle(s) the pedal needs to be at to satisfy the equation. For part c), I will look back at my equation and find when the equation equals zero, then try to understand why given the example problem. [latex]\begin{aligned} &T=\left|\vec F_{A}\right| * \left|\vec r\right| * \sin \theta \\ &T=(100 N) *(0.16 m) * \sin \theta \\ &T=16 \sin \theta \mathrm{Nm} \\ &\quad\theta_{max} \in\left\{90^{\circ}+360^{\circ} k ; k \in \mathbb{Z}\right\} \end{aligned}[/latex] Thinking about the shape of the sine function in the first period, the maximum occurs at 90 degrees.
You could say algebraically that the maximum is at 90°, 450°, 810°, etc., but these angles all represent the same position on the wheel. Therefore, we will use 90°. [latex]\begin{aligned} &T_1=\left|F_{A}\right| *|r| * \sin \theta \\ &T_1=(100 N)(0.16 m) \sin 90^{\circ} \\ &T_1=16 N m \end{aligned}[/latex] Find 50% of the maximum torque: [latex]T_2=\frac{T_1}{2}[/latex] [latex]\frac{16 Nm}{2}=8 Nm[/latex] Rearrange the T2 equation: [latex]T_{2}=\left|F_{A}\right| *|{r}| \sin \theta[/latex] [latex]\begin{aligned}&\sin \theta=\frac{T_{2}}{\left| F_{A}\right| * \left| r \right|} \\ &\sin \theta=\frac{8 N m}{(100 N)(0.16 m)} \\ &\sin \theta=0.5 \\ &\theta=30^{\circ}, 150^{\circ}, etc \end{aligned}[/latex] Therefore, Jane could push at 30° from vertical, or at 150° from vertical, to create half the torque. *Interesting to note is that half the angle does not yield half the torque; the angle is 30°, not 45°. This is because the sine function is non-linear.* T = 16 sinθ tells us that the angles of 0° and 180° will give us zero torque. This makes sense given that pushing straight down on a stable pendulum will not cause the pendulum to rotate! Likewise, if you just stand on your pedals, you're providing lots of downward force, but creating zero torque since the crank arm and the direction of the force are parallel (or antiparallel)! These answers have the correct units (Nm and degrees) and are within a reasonable order of magnitude based on the given information. See logic/explanations above for more detail. A person is pushing on a door with a force of 100 N. The door is at an angle α = 45° as shown in the sketch below. a) Calculate the moment when r is 45 cm and 75 cm. b) At what angle(s) is the moment zero? Explain why. Assumptions: model the force as a single point load acting on the door. F = 100 N, r1 = 45 cm, r2 = 75 cm, α = 45° M1, M2, angle when M is zero Use the equation below.
$$ M=|r|\cdot|F|\cdot\sin\theta$$ Part a) The angle we were given is not technically the one we should use in the moment equation. The angle should be between r and F. Therefore, we have to find the new angle. As shown below, the angle we find is also 45°. Now we can continue and solve for M1 and M2. $$\theta=90^{\circ}-45^{\circ}$$ $$\theta=45^{\circ}$$ $$ M_1=|r_1|\cdot|F|\cdot\sin\theta\\M_1=0.45m\cdot 100N\sin(45^{\circ})\\M_1=31.82Nm\\\\M_2=|r_2|\cdot |F|\sin\theta\\M_2=0.75m\cdot 100N\cdot\sin(45^{\circ})\\M_2=53.03Nm$$ Part b) $$M=|r|\cdot|F|\cdot\sin\theta\\if \sin\theta=0, M=0\\\sin\theta=0\\\theta=\sin^{-1}(0)\\\theta=0^{\circ}, 180^{\circ}, 360^{\circ},\ldots$$ Answer: the moment is zero when the angle between the force and the moment arm is 0° or 180° (360° represents the same position as 0°, and 540° the same as 180°, etc.) It makes sense that the moment is zero when the door is either closed or wide open, because when we apply a force at those positions, no movement of the door is possible. Firstly, George traveled a displacement of dg = [7 0 8] m from his car. George's dog, named Sparky, on the other hand traveled a displacement of ds = [0 6 6] m from George's car. Secondly, George called Sparky's name and the dog ran to George's position. It took Sparky four seconds to get there. a. What is the displacement from George to his dog? b. What is Sparky's velocity? (no need to draw) c. What is Sparky's speed? (no need to draw) Source: https://www.piqsels.com/en/public-domain-photo-oekac dg = [7 0 8] m Unknown: dsg = ? ds = [0 6 6] m dsg = (determined in A) Unknown: vsg = ? t = 4 seconds vsg = (determined in B) Unknown: |vsg| = ? We are going to use vector operations (both subtraction and division), the velocity–displacement relationship, the velocity–speed relationship, and the Pythagorean theorem to solve this problem.
Part a:
dsg = dg - ds
dsg = [7 0 8] m - [0 6 6] m
dsg = [7-0 0-6 8-6] m
dsg = [7 -6 2] m
Part b:
vsg = dsg/t
vsg = [7 -6 2] m / 4 s
vsg = [7/4 -6/4 2/4] m/s
vsg = [1.75 -1.5 0.5] m/s
Part c:
speed = |vsg|
|vsg| = √((vsgx)² + (vsgy)² + (vsgz)²)
|vsg| = √((1.75)² + (-1.5)² + (0.5)²)
|vsg| = 2.36 m/s
One way to review the question is to walk through the solution verbally. Our solution shows that for Sparky to get to George, he must walk 7 m in the positive x-direction (almost out of the page), 6 m in the negative y-direction (left), and finally 2 m in the positive z-direction (up). Firstly, since the dog initially did not go in the x-direction, it makes sense Sparky would have to copy George's exact x movement. Secondly, since George did not move in the y-direction, it would make sense that Sparky would just have to retrace his steps: if he initially went 6 m right, he would have to go 6 m left. Thirdly, George and Sparky both went upwards, but George went 2 m higher, with an altitude of 8 m compared to Sparky's 6 m, correlating to Sparky having to go positive 2 m in the z-direction to meet George. Therefore, since all the movements make sense for Sparky to meet George (using logic), the answer is proven to be right. Part b and c: Since b and c correlate to the same magnitude, they can be reviewed together. From a quick search, an average dog tops out at a speed of 19 miles per hour. We can convert this to SI units: [latex]\frac{19 \text { miles }}{1 h r}\left(\frac{1 \mathrm{~km}}{0.621371 \text { miles }}\right)\left(\frac{1000 \mathrm{~m}}{1 \mathrm{~km}}\right)\left(\frac{1 \mathrm{hr}}{3600 \mathrm{~s}}\right)=8.49 \mathrm{~m} / \mathrm{s}[/latex] The top speed of an average dog is 8.49 m/s. So 2.36 m/s is approximately a quarter of the top speed of an average dog. Sparky probably was not sprinting at full speed, and he could be a slower dog breed, making 2.36 m/s a reasonable answer.
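The arithmetic in these worked examples is easy to sanity-check with a few lines of code. The sketch below is our own (the helper functions are not part of the textbook); it recomputes the dot and cross products from Example 1.8.3, the door moment from Example 1.8.5, and Sparky's displacement and speed from Example 1.8.6 using only the Python standard library.

```python
import math

def dot(u, v):
    """Dot product of two vectors given as lists of components."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product u x v of two 3-component vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def magnitude(v):
    """Euclidean length of a vector."""
    return math.sqrt(dot(v, v))

# Example 1.8.3: a = [6 5 3], b = [8 1 3]
a, b = [6, 5, 3], [8, 1, 3]
print(dot(a, b))                      # 62
print(cross(a, b))                    # [12, 6, -34]
print(cross([2 * c for c in a], b))   # [24, 12, -68]

# Example 1.8.5: M = |r| * |F| * sin(theta), r = 0.45 m, F = 100 N, theta = 45 deg
M1 = 0.45 * 100 * math.sin(math.radians(45))
print(round(M1, 2))                   # 31.82

# Example 1.8.6: displacements from the car, 4 s travel time
d_g, d_s = [7, 0, 8], [0, 6, 6]
d_sg = [g - s for g, s in zip(d_g, d_s)]
v_sg = [c / 4 for c in d_sg]
print(d_sg)                           # [7, -6, 2]
print(round(magnitude(v_sg), 2))      # 2.36
```

Each printed value matches the hand calculation above; a quick script like this is a handy way to catch slipped decimal points or sign errors before the review step.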
Engineering Mechanics: Statics by Libby (Elizabeth) Osgood; Gayla Cameron; and Emma Christensen is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.