Electromagnetism - Why electric and magnetic fields are manifestations of the same phenomenon [closed]
Maxwell's equations reveal an interdependency between electric and magnetic fields, inasmuch as a time-varying magnetic field generates a circulating electric field and vice versa. Furthermore, the equations predict that, even in the absence of any sources, one can have self-propagating electric and magnetic fields, so-called electromagnetic waves.
However, is it correct to say that although Maxwell's equations show that electric and magnetic fields are interdependent, they do not imply that the two are different aspects of the same underlying physical phenomenon?
Given this, is it then correct to say that it is not until one takes into account special relativity that it becomes clear that electricity and magnetism are different manifestations of the same underlying phenomenon?
Indeed, if one considers a frame of reference in which only an electric (or magnetic) field is observed, then, upon a Lorentz transformation to another frame of reference, it is found that one will observe a combination of electric and magnetic fields. This implies that the two are not independent of one another, since there is no observer-independent manner in which one can separate electric and magnetic fields, hence implying that they are manifestations of the same underlying field - the electromagnetic field?
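For concreteness, the standard field-transformation laws (SI units) for a boost with velocity $\vec{v}$, splitting the fields into components parallel and perpendicular to $\vec{v}$, read
$$\vec{E}'_{\parallel}=\vec{E}_{\parallel},\qquad \vec{B}'_{\parallel}=\vec{B}_{\parallel},$$
$$\vec{E}'_{\perp}=\gamma\left(\vec{E}+\vec{v}\times\vec{B}\right)_{\perp},\qquad \vec{B}'_{\perp}=\gamma\left(\vec{B}-\frac{\vec{v}\times\vec{E}}{c^{2}}\right)_{\perp},$$
so a pure electric (or magnetic) field in one frame is indeed observed as a mixture of both in a boosted frame.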
In essence, my question is: can one deduce purely from Maxwell's equations that the electric and magnetic fields are actually "the same" field, or is this unification not made explicit until one takes special relativity into account?
electromagnetism special-relativity maxwell-equations
closed as unclear what you're asking by ACuriousMind♦, sammy gerbil, user36790, Wolpertinger, knzhou Aug 23 '16 at 22:10
$\begingroup$ @OrangeDog Relativity is not derived from Maxwell's equations. $\endgroup$ – lemon Aug 23 '16 at 10:19
$\begingroup$ they are the same object (and not manifestations of another object), the magnetic field being the electric field as seen in a moving frame $\endgroup$ – user46925 Aug 23 '16 at 10:20
$\begingroup$ @igael Yes, you're right, I was just trying to emphasise the fact that they are unified into a single object, but that historically they were thought of as two distinct objects. Is it correct though, to say that it is through application of special relativity that such a unification is explicitly shown, in the sense that there is no observer-independent way to separate the two and as such they are the same object? $\endgroup$ – user35305 Aug 23 '16 at 10:38
$\begingroup$ not really, it was already the case with Maxwell's equations, which are, on the other hand, compatible with special relativity rather than with classical mechanics $\endgroup$ – user46925 Aug 23 '16 at 10:46
$\begingroup$ perhaps, but as of today, the Maxwell synthesis is sufficient to identify them, leaving deeper interpretations to other theories. Agreeing on the main point ... (surely, the difference comes from the objects: previously forces, now fields) $\endgroup$ – user46925 Aug 23 '16 at 11:02
Electric and magnetic fields are field strengths of a gauge field. If you consider any matter field/particle Lagrangian, its coupling constant is the same for electric and magnetic fields.
For example, classically, in the Lorentz force equation $\vec{F}=q(\vec{E}+\vec{v}\times\vec{B})$, notice that $q$ is the parameter that gives the coupling of the particle to the electric as well as the magnetic field.
This is what we mean when we say the two forces are unified.
Edit: reply to whether the historical approach is the one that uses relativity
I have to disagree. Electric and magnetic fields were realized to be the same once Maxwell wrote his equations and light was found to be an electromagnetic wave by Hertz. It is true that the Maxwell equations are Lorentz covariant, and when you make that covariance explicit you have a single four-potential that gives the electric and magnetic fields. And then you add coupling with relativistic matter Lagrangians. Then it is natural to have a single coupling constant. So it was rather me who was taking the historical approach. Anyhow, it is also modern. When you consider the standard model, the weak interaction and the electromagnetic interaction are given by the same $SU(2)_L$ field multiplet. But there are still two coupling constants, which is why it is not a unification
BoundaryGraviton
$\begingroup$ From a historical perspective though (and also from a pedagogical approach), before introducing the notion of gauge fields and the like, is it correct to say that it is not until one takes into account special relativity that it becomes abundantly clear that electricity and magnetism are aspects of the same field - the electromagnetic field, since there is no observer-independent manner in which one can separate the two? $\endgroup$ – user35305 Aug 23 '16 at 10:45
$\begingroup$ I disagree. See my edit. $\endgroup$ – BoundaryGraviton Aug 23 '16 at 13:58
$\begingroup$ Thanks for the updated answer. The issue I find is that, although Maxwell's equations predict "self-reinforcing" propagating electromagnetic waves, this is really a statement that time-varying electric and magnetic fields can generate one another such that they create an electromagnetic wave propagating through space. However, I don't see how one can conclude that they are the same object from this; it simply implies that they are interdependent... $\endgroup$ – user35305 Aug 23 '16 at 15:42
$\begingroup$ ... Surely it only becomes explicitly evident that they are "the same" once one studies how they transform under Lorentz transformations, in which case it is found that the two "rotate" into one another and hence are really "the same" object? $\endgroup$ – user35305 Aug 23 '16 at 15:43
$\begingroup$ So is the point that, if one takes into account the form of the Lorentz force, it is evident that the coupling to the electric and magnetic fields is identical, implying that they are in fact "the same", a single entity? If they were distinct fields then they would have distinct coupling constants? This, along with the fact that Maxwell's equations describe the dependence between electric and magnetic fields (as well as predicting the existence of electromagnetic waves), reveals the unification of electric and magnetic fields? $\endgroup$ – user35305 Aug 23 '16 at 18:22
\begin{definition}[Definition:Tri-Automorphic Number]
A '''tri-automorphic number''' is a positive integer $n$ such that $3 n^2$ ends in a repetition of $n$.
\end{definition}
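For example, $5$ is tri-automorphic, since $3 \times 5^2 = 75$ ends in $5$; so is $667$, since $3 \times 667^2 = 1\,334\,667$ ends in $667$.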
Abstracts for MPI-Oberseminar
$q$-deformed Whittaker functions and Demazure modules
Sergey Oblezin
ITEP Moscow / MPIM
Whittaker functions are special functions on reductive groups, which arise naturally in the theory of automorphic representations. The talk is devoted to recent results (joint with A. Gerasimov and D. Lebedev) on an explicit construction of a $q$-deformation of Whittaker functions for the group $GL(N,\mathbb{R})$. In the first part of my talk I will introduce two (integral) representations of the $GL(N)$-Whittaker function, using two different limits of Macdonald polynomials. The first representation is a $q$-deformation of the classical Gelfand-Zetlin formula for the character.
André-Quillen cohomology theory of an algebra over an operad
Joan Millès
Following the ideas of Quillen and by means of model category structures, Hinich, Goerss and Hopkins have developed a cohomology theory for (simplicial) algebras over a (simplicial) operad. Thanks to the Koszul duality theory of operads, we describe the cotangent complex to make these theories explicit in the differential graded setting. We recover the known theories, such as Hochschild cohomology theory for associative algebras and Chevalley-Eilenberg cohomology theory for Lie algebras, and we define the new case of homotopy algebras.
The fundamental group of symplectic manifolds with Hamiltonian Lie group actions
Hui Li
U Luxembourg / U Bourgogne / MPIM
Six vertex model and enumerations of alternating-sign matrices
A. V. Razumov
Inst. for High Energy Physics, Protvino/MPI
One more interaction between theoretical physics and mathematics will be discussed. There is a famous combinatorial problem: to enumerate the so-called alternating-sign matrices, which are a generalization of the permutation matrices. This problem was solved by Doron Zeilberger in 1995. A much simpler solution was given by Greg Kuperberg in 1996. It was based on the one-to-one correspondence between the alternating-sign matrices and the states of the statistical six-vertex model.
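For reference, Zeilberger's theorem gives the closed form $A(n) = \prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!}$ for the number of $n \times n$ alternating-sign matrices; a short Python check (our own illustration, not part of the talk):

from math import factorial

def asm_count(n):
    # A(n) = prod_{k=0}^{n-1} (3k+1)! / (n+k)!
    num = den = 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den

print([asm_count(n) for n in range(1, 6)])  # [1, 2, 7, 42, 429]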
Moduli spaces of vector bundles over a Klein surface
IHES/MPI
A compact topological surface S, possibly non-orientable and with non-empty boundary, always admits a Klein surface structure (an atlas whose transition maps are dianalytic). Its complex cover is, by definition, a compact Riemann surface X endowed with an antiholomorphic involution which determines topologically the original surface S. In this talk, we relate dianalytic vector bundles over S and holomorphic vector bundles over X, devoting special attention to the implications this has for moduli spaces of semistable bundles over X.
Layer cake and homotopy representations I: formal geometry approach
Yaël Frégier
U of Luxembourg
Representations up to homotopy of Lie algebras have recently attracted much attention. On the other hand, J. Baez has introduced a way to build a homotopy Lie algebra out of a Lie algebra and an n-cocycle. We show in this work a common framework enabling one to generalize both notions (replacing Lie algebras by homotopy Lie algebras) and to extend them to other types of algebras (commutative and associative). The main tool is the language of homological vector fields on products of formal manifolds. This is joint work with John Baez.
Universal family for subgroups of an algebraic group
Michael Le Barbier
Inst. Fourier / St. Martin d'Hères / MPI
I describe the construction of a moduli space for the connected subgroups of an algebraic group, and of a universal family. I give a quick illustration of the notion of universal families through Grassmann varieties, then discuss in turn the construction of a moduli space and of a universal family, balancing general statements and examples.
Triangle groups, finite simple groups and applications
Shelly Garion
The Hebrew U of Jerusalem/MPI
In this talk we will discuss the following problem: Given a triple of integers (r,s,t), which finite simple groups are quotients of the triangle group T(r,s,t)? This problem has many applications, especially concerning Riemann surfaces and Beauville surfaces. In the talk we will focus on the group theoretical aspects of this problem.
Geometry of Maurer-Cartan Elements on Complex Manifolds
Zhuo Chen
Peking U/MPI
The semi-classical data attached to stacks of algebroids in the sense of Kashiwara and Kontsevich are Maurer-Cartan elements on complex manifolds, which we call extended Poisson structures as they generalize holomorphic Poisson structures. A canonical Lie algebroid is associated to each Maurer-Cartan element. We study the geometry underlying these Maurer-Cartan elements in the light of Lie algebroid theory.
Elliptic curves over imaginary quadratic fields
B.Z. Moroz
About 10 years ago the methods developed by A. Wiles, R. Taylor, and their collaborators led to the proof of the modularity of the elliptic curves defined over the field of rational numbers. In a recent work, Dieulefait, Guerberoff, and Pacetti developed a new method, allowing one to compare two 2-dimensional l-adic Galois representations, and applied their method to prove the modularity of three elliptic curves defined over an imaginary quadratic field.
\begin{document}
\titlerunning{Towards a compact representation of temporal rasters} \authorrunning{Cerdeira-Pena et al.}
\title{Towards a compact representation of temporal rasters
\thanks{
\footnotesize{
Funded in part by European Union's Horizon 2020 research and innovation programme
under the Marie Sklodowska-Curie grant agreement No 690941 (project BIRDS);
by Xunta de Galicia/FEDER-UE [CSI: ED431G/01 and GRC: ED431C 2017/58];
by MINECO-AEI/FEDER-UE [Datos 4.0: TIN2016-78011-C4-1-R; Velocity: TIN2016-77158-C4-3-R; and ETOME-RDFD3: TIN2015-69951-R]; and
by MINECO-CDTI/FEDER-UE [INNTERCONECTA: uForest ITC-20161074]. }
} }
\author{Ana Cerdeira-Pena \inst{1} \and
Guillermo de Bernardo \inst{1,2} \and
Antonio Fari\~na\inst{1} \and \\
Jos\'e R. Param\'a\inst{1} \and
Fernando Silva-Coira\inst{1}
}
\institute{Universidade da Coruña, Fac. Inform\'atica, CITIC, Spain \and Enxenio S.L.
}
\maketitle
\begin{abstract} Considerable research effort has been devoted to efficiently managing spatio-temporal data. However, most works focus on vectorial data, and much less on raster data. This work presents a new representation for raster data that evolve along time, named \texttt{Temporal} $\ensuremath{\mathsf{k^2raster}}\xspace$. It faces the two main issues that arise when dealing with spatio-temporal data: space consumption and query response times. It extends a compact data structure for raster data in order to manage time, and thus it is possible to query it directly in compressed form, instead of following the classical approach that requires a complete decompression before any manipulation. In addition, in the same compressed space, the new data structure includes two indexes: a spatial index and an index on the values of the cells, thus becoming a self-index for raster data.
\end{abstract}
\section{Introduction}
Spatial data can be represented using either a raster or a vector data model \cite{Couclelis92}. Basically, vector models represent the space using points and lines connecting those points. They are used mainly to represent man-made features. Raster models represent the space as a tessellation of disjoint fixed-size tiles (usually squares), each one storing a value. They are traditionally used in engineering, modeling, and representations of real-world elements that are not man-made, such as pollution levels, atmospheric and vapor pressure, temperature, precipitations, wind speed, land elevation, satellite imagery, etc.
Temporal evolution of vectorial data has been extensively studied, with a large number of data structures to index and/or store spatio-temporal data. Examples are the 3DR-tree \cite{Vazirgiannis1998}, HR-tree \cite{NascimentoS98}, the MVR-tree \cite{PapadiasT01}, or PIST \cite{Botea2008}.
In \cite{mennis} the classical Map Algebra of Tomlin for managing raster data is extended to manage raster data with a temporal evolution. The conceptual solution is simple, instead of considering a matrix, it considers a cube, where each slice of the temporal dimension is the raster corresponding to one time instant.
Most real systems capable of managing raster data, like Rasdaman, Grass, or even R, are also capable of managing time series of raster data. These systems, as well as raster representation formats such as NetCDF (standard format of the OGC\footnote{\url{http://www.opengeospatial.org/standards/netcdf}}) and GeoTiff, rely on classic compression methods such as run-length encoding, LZW, or Deflate to reduce the size of the data. The use of these compression methods poses an important drawback when accessing a given datum or portion of the data, since the dataset must be decompressed from the beginning.
Compact data structures \cite{j-ssds-89,Nav16} are capable of storing data in compressed form and enable us to access a given datum without the need for decompressing from the beginning. In most cases, compact data structures are equipped with an index that provides fast access to data. There are several compact data structures designed to store raster data \cite{k2ones,LadraGonzalez17}. In this work, we extend one of those compact data structures, the $\ensuremath{\mathsf{k^2raster}}\xspace$ \cite{LadraGonzalez17}, to support representing time-series of rasters.
\section{Related work} \label{sec:prevwork}
In this section, we first review the $\mathsf{k^2tree}$, a compact data structure that can be used to represent binary matrices. Then, we also present several compact data structures for representing raster data containing integers in the cells. We pay special attention to one of them, the $\ensuremath{\mathsf{k^2raster}}\xspace$, which is the basis of our proposal, the \texttt{Temporal} $\ensuremath{\mathsf{k^2raster}}\xspace$ ($\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$).
\subsubsection{$\mathsf{k^2tree}$:}
The $\mathsf{k^2tree}$ \cite{ktree} was initially designed to represent web graphs, but it also allows representing binary matrices, that is, rasters whose cells contain only a bit value. It is conceptually a non-balanced $k^2$-ary tree built from the binary matrix by recursively dividing it into $k^2$ submatrices of the same size. First, the original matrix is divided into $k^2$ submatrices of size $n^2/k^2$, where $n$$\times$$n$ is the size of the matrix. Each submatrix generates a child of the root whose value is $1$ if it contains at least one $1$, and $0$ otherwise. The subdivision continues recursively for each node representing a submatrix that has at least one $1$, until the submatrix is full of 0s, or until the process reaches the cells of the original matrix (i.e., submatrices of size 1$\times$1).
The $\mathsf{k^2tree}$ is compactly stored using just two bitmaps $T$ and $L$.
\textit{T} stores all the bits of the conceptual $\mathsf{k^2tree}$, except the last level, following a level-wise traversal: first the bit values of the children of the root, then those in the second level, and so on. $L$ stores the last level of the tree.
It is possible to obtain any cell, row, column, or window of the matrix very efficiently, by running $rank$ and $select$ operations \cite{j-ssds-89} over the bitmaps $T$ and $L$.
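To make the construction concrete, the following Python sketch (our own illustration, not the code of \cite{ktree}) builds the bitmaps $T$ and $L$ for a binary matrix whose side $n$ is a power of $k$; since the recursion visits the children of each level in left-to-right order, the bits of each level are emitted exactly in the level-wise order described above:

\begin{verbatim}
def build_k2tree(matrix, k=2):
    # matrix: list of lists of 0/1; len(matrix) is assumed a power of k
    n = len(matrix)
    levels = []  # one list of bits per tree level

    def subdivide(row, col, size, level):
        # returns the bit of this node: 1 iff its submatrix has some 1
        if all(matrix[r][c] == 0
               for r in range(row, row + size)
               for c in range(col, col + size)):
            return 0                 # empty submatrix: no recursion
        if size > 1:                 # internal node: emit k^2 child bits
            step = size // k
            if len(levels) <= level:
                levels.append([])
            for i in range(k):
                for j in range(k):
                    bit = subdivide(row + i * step, col + j * step,
                                    step, level + 1)
                    levels[level].append(bit)
        return 1

    subdivide(0, 0, n, 0)
    T = [b for lvl in levels[:-1] for b in lvl]  # all levels but the last
    L = levels[-1] if levels else []             # last level: matrix cells
    return T, L
\end{verbatim}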
\subsubsection{$\mathsf{k^3tree}$:} The $\mathsf{k^3tree}$ \cite{k2ones} is obtained by simply adding a third dimension to the $\mathsf{k^2tree}$, and thus, it conceptually represents a binary cube. This can be trivially done by using the same space partitioning and representation techniques from the $\mathsf{k^2tree}$, yet applied to cubes rather than to matrices.
Thus, each 1 in the binary cube represents a tuple $\langle x, y,z \rangle$, where $(x,y)$ are the coordinates of the cell of the raster and $z$ is the value stored in that cell.
\subsubsection{$\mathsf{k^2acc}$:} The $\mathsf{k^2acc}$ \cite{k2ones} representation for a raster dataset is composed by as many $\mathsf{k^2trees}$ as different values can be found in the raster. Given $t$ different values in the raster: $v_1 < v_2 < \dots < v_t$, $\mathsf{k^2acc}$ contains $K_1,K_2,\dots,K_t$ $\mathsf{k^2trees}$, where each $K_i$ has a value 1 in those cells whose value is $v \le v_i$.
\subsubsection{2D-1D mapping:} A method is presented in \cite{Pinto17} that uses a space-filling curve to linearize the raster matrix into an array, and a one-dimensional index (for example, a B-tree) over that array to access the data.
\subsubsection{$\ensuremath{\mathsf{k^2raster}}\xspace$:}
$\ensuremath{\mathsf{k^2raster}}\xspace$ has proven to be superior in both space and query time \cite{Pinto17,LadraGonzalez17} to all the other compact data structures for storing rasters. In \cite{LadraGonzalez17}, it was also compared with NetCDF.
It required slightly more space than the compressed version of NetCDF (which uses Deflate), but answered queries noticeably faster.
$\ensuremath{\mathsf{k^2raster}}\xspace$ is based on the same partitioning method as the $\mathsf{k^2tree}$, that is, it recursively subdivides the matrix into $k^2$ submatrices and builds a conceptual tree representing these subdivisions. Now, instead of holding a single bit, each node contains the minimum and maximum values of the corresponding submatrix. The subdivision stops when the minimum and maximum values of the submatrix are equal, or when the process reaches submatrices of size 1$\times$1. Again, the conceptual tree is compactly represented using, in addition to binary bitmaps, efficient encoding schemes for integer sequences.
\begin{figure}
\caption{Example of $\ensuremath{\mathsf{k^2raster}}\xspace$ construction: recursive subdivision of the raster matrix (top), conceptual tree with the minimum and maximum values of each submatrix (centre-top), the same tree with differentially encoded values (centre-bottom), and the final data structures $T$, $Lmax$ and $Lmin$ (bottom).}
\label{fig:k2raster}
\end{figure}
In more detail, let $n$$\times$$n$ be the size of the input matrix. The process begins by obtaining the minimum and maximum values of the matrix. If these values are different, they are stored in the root of the tree, and the matrix is divided into $k^2$ submatrices of size $n^2/k^2$. Each submatrix produces a child node of the root storing its minimum and maximum values. If these values are the same, that node becomes a leaf, and the corresponding submatrix is not subdivided anymore. Otherwise, this procedure continues recursively until the maximum and minimum values are the same, or the process reaches a 1$\times$1 submatrix.
Figure \ref{fig:k2raster} shows an example of the recursive subdivision (top) and how the conceptual tree is built (centre-top), where the minimum and maximum values of each submatrix are stored at each node. The root node corresponds to the original raster matrix, nodes at level 1 correspond to submatrices of size 4$\times$4, and so on. The last level of the tree corresponds to cells of the original matrix. Note, for instance, that all the values of the bottom-right 4$\times$4 submatrix are equal; thus, its minimum and maximum values are equal, and it is not further subdivided. This is the reason why the last child of the root node has no children.
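As an illustration, the following Python sketch (ours, not the authors' implementation) summarizes this construction; it emits $T$, $Lmax$ and $Lmin$ in level-wise order, leaving out the DAC compression described next and, for brevity, also emitting one bit of $T$ per last-level node (which the actual structure omits):

\begin{verbatim}
from collections import deque

def minmax(M, row, col, size):
    vals = [M[r][c] for r in range(row, row + size)
                    for c in range(col, col + size)]
    return max(vals), min(vals)

def build_k2raster(M, k=2):
    n = len(M)  # assumed a power of k
    T, Lmax, Lmin = [], [], []
    root_max, root_min = minmax(M, 0, 0, n)
    queue = deque([(0, 0, n, root_max, root_min)])
    while queue:  # BFS, so the arrays are filled in level-wise order
        row, col, size, pmax, pmin = queue.popleft()
        step = size // k
        for i in range(k):
            for j in range(k):
                mx, mn = minmax(M, row + i * step, col + j * step, step)
                Lmax.append(pmax - mx)       # gap w.r.t. parent's maximum
                if mx == mn:                 # uniform submatrix: leaf
                    T.append(0)
                else:                        # internal node: keep splitting
                    T.append(1)
                    Lmin.append(mn - pmin)   # gap w.r.t. parent's minimum
                    queue.append((row + i * step, col + j * step,
                                  step, mx, mn))
    return root_max, root_min, T, Lmax, Lmin
\end{verbatim}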
The compact representation includes two main parts. The first one represents the topology of the tree ($T$) and the second one stores the maximum/minimum values at the nodes ($Lmin/Lmax$). The topology is represented as in the $\mathsf{k^2tree}$, except that the last level ($L$) is not needed. The maximum/minimum values are differentially encoded with respect to the values stored at the parent node. Again, these values are stored as arrays following the same method of the $\mathsf{k^2tree}$, that is, following the same level-wise order of the conceptual tree. By using differential encoding, the numbers become smaller. \emph{Directly Addressable Codes} (DACs) \cite{BLN13} take advantage of this, and at the same time, provide direct access. The last two steps to create the final representation of the example matrix are also illustrated in Figure \ref{fig:k2raster}.
In the center-bottom and bottom parts, we respectively show the tree with the differences for both the maximum and minimum values, and the data structures that compose the final representation of the $\ensuremath{\mathsf{k^2raster}}\xspace$. Therefore, the original raster matrix is compactly stored using just a bitmap $T$, which represents the tree topology, and a pair of integer arrays ($Lmax$ and $Lmin$), which contain the minimum and maximum values stored at the tree. Note that when the raster matrix contains uniform areas, with large areas of equal or similar values, this information can be stored very compactly using differential encoding and DACs.
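To give an idea of how DACs work, the following sketch (our own rendering of the general scheme, not the implementation of \cite{BLN13}) encodes a sequence of integers with a fixed chunk width $b$: each value is split into $b$-bit chunks, and level $i$ stores the $i$-th chunk of every value that has one, together with a bitmap marking whether more chunks follow; decoding value $i$ then walks the levels, using $rank$ on these bitmaps to locate its chunks (access code omitted):

\begin{verbatim}
def dac_encode(values, b=4):
    levels, bitmaps = [], []
    pending = [(v, True) for v in values]
    while any(flag for _, flag in pending):
        chunks, bits, nxt = [], [], []
        for v, flag in pending:
            if flag:
                chunks.append(v & ((1 << b) - 1))  # lowest b bits
                more = (v >> b) > 0
                bits.append(1 if more else 0)      # 1: more chunks follow
                nxt.append((v >> b, more))
            else:
                nxt.append((0, False))
        levels.append(chunks)
        bitmaps.append(bits)
        pending = nxt
    return levels, bitmaps
\end{verbatim}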
The maximum/minimum values provide an index over the stored values; this technique is usually known as lightweight indexing. It is possible to answer queries while decompressing only the affected areas. Queries can be efficiently computed by navigating the conceptual tree, running $rank$ and $select$ operations on $T$ and, in parallel, accessing the arrays $Lmax$ and $Lmin$.
\section{$\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$: A temporal representation for raster data}
Let $\cal{M}$ be a raster matrix of size $n$$\times$$n$ that evolves along time with a timeline of size $\tau$ time instants. We can define $\cal{M}$$= \langle M_1, M_2, \dots, M_{\tau} \rangle$ as the sequence of raster matrices $M_i$ of size $n$$\times$$n$ for each time instant $i \in [1,\tau]$.
A rather straightforward baseline representation for the temporal raster matrix $\cal{M}$ can be obtained by simply representing each raster matrix $M_i$ in a compact way with a $\ensuremath{\mathsf{k^2raster}}\xspace$. In this section we use a different approach. The idea is to use sampling at regular intervals of size $t_{\delta}$. That is, we represent with a $\ensuremath{\mathsf{k^2raster}}\xspace$ all the raster matrices $M_s, s=1+i~t_{\delta},~i\in [0,(\tau-1) /t_{\delta}]$. We will refer to those $M_i$ rasters as {\em snapshots of $\cal{M}$ at time $i$}. The $t_{\delta} -1$ raster matrices $M_t, t \in [s+1, s+t_{\delta} -1]$ that follow a snapshot $M_s$ are encoded using $M_s$ as a reference. The idea is to create a modified $\ensuremath{\mathsf{k^2raster'}}\xspace$ to represent $M_t$ where, at each step of the construction process, the values in the submatrices are encoded as differences with respect to the corresponding submatrices in $M_s$ rather than as differences with respect to the parent node as usual in a regular $\ensuremath{\mathsf{k^2raster}}\xspace$.
With this modification, we still expect to encode small gaps for the maximum and minimum values in each node of the conceptual tree of $M_t$. Yet, in addition, when a submatrix in $M_t$ is identical to the same submatrix in $M_s$, or when all the values in both submatrices differ only by a unique gap value $\alpha$, we can stop the recursive splitting process and simply have to keep a reference to the corresponding submatrix of $M_s$ and the gap $\alpha$ (when they are identical, we simply set $\alpha=0$). In practice, keeping that reference is rather cheap as we only have to mark, in the conceptual tree of $M_t$, that the subtree rooted at a given node $p$ has the same structure as the one in the conceptual tree of $M_s$. For this purpose, in the final representation of $\ensuremath{\mathsf{k^2raster'}}\xspace$, we include a new bitmap $eqB$, aligned to the zeroes in $T$. That is, if we have $T[i]=0$ (node with no children), we set $eqB[rank_0(T,i)] \leftarrow 1 $,\footnote{From now on, assume $rank_b(B,i)$ returns the number of bits set to $b$ in $B[0,i-1]$, and $rank_b(B,0)=0$. Note that the first index of $T$, $eqB$, $Lmax$, and $Lmin$ is $0$. } and set $Lmax[i] \leftarrow \alpha$. Alternatively, if we have $T[i]=0$, we can set $eqB[rank_0(T,i)] \leftarrow 0 $ and $Lmax[i]\leftarrow \beta$ (where $\beta$ is the gap between the maximum values of both submatrices) to handle the case in which the maximum and minimum values in the corresponding submatrix are identical (as in a regular $\ensuremath{\mathsf{k^2raster}}\xspace$).
The overall construction process of the $\ensuremath{\mathsf{k^2raster'}}\xspace$ for the matrix $M_t$ related to the snapshot $M_s$ can be summarized as follows. At each step of the recursive process, we consider a submatrix of $M_t$ and the related submatrix in $M_s$. Let the corresponding maximum and minimum values of the submatrix of $M_t$ be $max_t$ and $min_t$, and those of $M_s$ be $max_s$ and $min_s$ respectively. Therefore: \begin{itemize}
\item If $max_t$ and $min_t$ are identical (or if we reach a 1$\times$1 submatrix), the recursive process stops. Let $z_t$ be the corresponding position in the final bitmap $T$; we set $T[z_t]\leftarrow 0$, $eqB[rank_0(T,z_t)]\leftarrow 0$, and $Lmax[z_t] \leftarrow (max_t -max_s)$.\footnote{Since in $\ensuremath{\mathsf{k^2raster'}}\xspace$ we have to deal both with positive and negative values, we actually apply a {\em zig-zag} encoding for the gaps $(max_t -max_s)$.}
\item If all the values in $M_s$ and $M_t$ differ only by a unique value $\alpha$ (or if they are identical, hence $\alpha=0$), we set $T[z_t]\leftarrow 0$, $eqB[rank_0(T,z_t)]\leftarrow 1$, and $Lmax[z_t] \leftarrow (max_t -max_s)$.
\item Otherwise, we split the submatrix $M_t$ into $k^2$ parts and continue recursively. We set $T[z_t]\leftarrow 1$, and, as in the regular $\ensuremath{\mathsf{k^2raster}}\xspace$, $Lmax[z_t]\leftarrow (max_t -max_s)$, and $Lmin[rank_1(T,z_t)]\leftarrow (min_t - min_s)$. \end{itemize}
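The following Python sketch (our reading of the three cases above, not the authors' code) encodes one pair of corresponding submatrices of $M_s$ and $M_t$, given as flat lists of cell values; the handling of $Lmin$ at internal nodes is omitted for brevity:

\begin{verbatim}
def zigzag(v):
    # standard zig-zag map: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
    return (v << 1) if v >= 0 else ((-v) << 1) - 1

def encode_step(vals_s, vals_t, T, eqB, Lmax):
    # Returns True when the construction must recurse on the k^2
    # subquadrants of this submatrix, False when it stops here.
    max_s = max(vals_s)
    max_t, min_t = max(vals_t), min(vals_t)
    gaps = {t - s for s, t in zip(vals_s, vals_t)}
    if max_t == min_t:      # uniform M_t submatrix: plain leaf (eqB = 0)
        T.append(0); eqB.append(0)
        Lmax.append(zigzag(max_t - max_s))
        return False
    if len(gaps) == 1:      # M_t = M_s + alpha cell-wise: leaf (eqB = 1)
        T.append(0); eqB.append(1)
        Lmax.append(zigzag(max_t - max_s))  # alpha equals max_t - max_s
        return False
    T.append(1)             # otherwise: internal node, split and recurse
    Lmax.append(zigzag(max_t - max_s))
    return True
\end{verbatim}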
\begin{figure}
\caption{Example of the structures involved in the construction of a $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ over a temporal raster of size 8$\times$8 with $\tau=3$: the snapshot $M_s$ and the differential conceptual trees built for $M_{s+1}$ and $M_{s+2}$.}
\label{fig:tkdosr}
\end{figure}
Figure~\ref{fig:tkdosr} includes an example of the structures involved in the construction of a $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ over a temporal raster of size 8$\times$8, with $\tau=3$. The raster matrix corresponding to the first time instant becomes a {\em snapshot} that is represented exactly as the $\ensuremath{\mathsf{k^2raster}}\xspace$ in Figure~\ref{fig:k2raster}. The remaining raster matrices $M_{s+1}$ and $M_{s+2}$ are represented with two $\ensuremath{\mathsf{k^2raster'}}\xspace$ that are built taking $M_s$ as a reference. We have highlighted some particular nodes in the differential conceptual trees corresponding to $M_{s+1}$ and $M_{s+2}$. {\em (i)} The shaded node labeled $\langle 0 \colon 0 \rangle$ in $M_{s+1}$ indicates that the first 4$\times$4 submatrix of both $M_s$ and $M_{s+1}$ are identical. Therefore, node $\langle 0 \colon 0 \rangle$ has no children, and we set: $T[2]\leftarrow 0$, $eqB[1]\leftarrow 1$, and $Lmax[2]\leftarrow 0$. {\em (ii)} The shaded node labeled $\langle 1 \colon 1 \rangle$ in $M_{s+2}$ illustrates the case in which all the values of a given submatrix are increased by $\alpha \leftarrow 1$. In this case values $\langle 6,6,5,5 \rangle$ in $M_s$ become $\langle 7,7,6,6 \rangle$ in $M_{s+2}$. Again, the recursive traversal stops at that node, and we set: $T[8]\leftarrow 0$, $eqB[3]\leftarrow 1$, and $Lmax[8]\leftarrow 1$ (values are increased by $1$). {\em (iii)} The shaded node labeled $\langle 1 \colon 2 \rangle$ in $M_{s+1}$ corresponds to the node labeled $\langle 3 \colon 2 \rangle$ in $M_s$. In this case, when we add the stored gaps to the maximum and minimum values of the node in $M_s$, we find that the node in $M_{s+1}$ has the same maximum and minimum values (both equal to $4$). Consequently, the recursive process stops again. In this case, we set $T[7]\leftarrow 0$, $eqB[3]\leftarrow 0$, and $Lmax[7]\leftarrow 1$.
\section{Querying temporal raster data} \label{sec:tcsaqueries}
In this section, we show two basic queries over $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$.
\subsubsection*{Obtaining a cell value in a time instant:} This query retrieves the value of a cell $(r,c)$ of the raster at time instant $t$: $v\leftarrow getCellValue(r,c,t)$. For solving this query, there are two cases: if $t$ is represented by a snapshot, then the algorithm to obtain a cell in the regular $\ensuremath{\mathsf{k^2raster}}\xspace$ is used; otherwise, a synchronized top-down traversal of the trees representing that time instant ($M_{t}$) and the closest previous snapshot ($M_{s}$) is required.
Focusing on the second case, the synchronized traversal inspects, at each level, the two nodes corresponding to the submatrix that contains the queried cell. The problem is that, because parts of $M_{t}$ or $M_{s}$ may be uniform, the shapes of the trees representing them can differ. Therefore, it is possible that one of the two traversals reaches a leaf whereas the other does not. In such a case, the traversal that did not reach a leaf continues, but the process must remember the value in the reached leaf, since that is the value that will be added to or subtracted from the value found when the continuing traversal reaches a leaf.
Indeed, we have three cases: (a) the processed submatrix of $M_t$ is uniform, (b) the original submatrix of $M_s$ is uniform and, (c) the processed submatrix after applying the differences with the snapshot has the same value in all cells.
\tolerance 10000 \pretolerance 10000 Algorithm \ref{alg:getCell} shows the pseudocode of this case. To obtain the value stored at cell $(r,c)$ of the raster matrix $M_{t}$, it is invoked as {\bf getCell}$(n,r,c,1,1,Lmax_s[0],Lmax_t[0])$, assuming that the cell at position (0,0) of the raster is that in the upper-left corner.
\tolerance 500 \pretolerance 500
$z_s$ is used to store the current position in the bitmap $T$ of $M_{s}$ ($T_{s}$) during the downward traversal at any given step of the algorithm, similarly, $z_t$ is the position in $T$ of $M_{t}$ ($T_{t}$). When $z_s$ ($z_t$) has a $-1$ value, it means that the traversal reached a leaf and, in $maxval_s$ ($maxval_t$) the algorithm keeps the maximum value stored at that leaf node. Note that, $T_s$, $T_t$, $Lmax_s$, $Lmax_t$, and $k$ are global variables.
\begin{algorithm}[t] \scriptsize \SetAlgoNoLine \caption{{\bf getCell}$(n,r,c,z_s,z_t, maxval_s,maxval_t)$ returns the value at cell $(r,c)$}\label{alg:getCell}
\If{$z_s\neq -1$}{
$z_s \leftarrow (rank_1(T_s,z_s)-1) \cdot k^2+1$ \\
$z_s \leftarrow z_s + \lfloor r/(n/k) \rfloor\cdot k + \lfloor c/(n/k) \rfloor +1$\\
$val_s \leftarrow Lmax_s[z_s-1]$\\
$maxval_s \leftarrow maxval_s-val_s$\\ }
\If{$z_t\neq -1$}{
$z_t \leftarrow (rank_1(T_t,z_t)-1) \cdot k^2+1$ \\
$z_t \leftarrow z_t + \lfloor r/(n/k) \rfloor\cdot k + \lfloor c/(n/k) \rfloor +1$\\
$maxval_t \leftarrow Lmax_t[z_t-1]$\\ }
\If(\tcc*[h]{Both leaves}){$(z_s > |T_s| ~\mathbf{or}~z_s=-1 ~\mathbf{or}~ T_s[z_s]=0)~\mathbf{and}~(z_t > |T_t| ~\mathbf{or}~z_t=-1 ~\mathbf{or}~ T_t[z_t]=0)$}{ \Return $maxval_s+ZigZag\_Decoded(maxval_t)$\\ }
\ElseIf(~\tcc*[h]{Leaf in Snapshot}){$z_s > |T_s| ~\mathbf{or}~z_s=-1 ~\mathbf{or}~ T_s[z_s]=0$} {$z_s \leftarrow -1$\\ \Return {\bf getCell}$(n/k,r\,\textrm{mod}\,(n/k),c\,\textrm{mod}\,(n/k),z_s,z_t, maxval_s,maxval_t)$}
\ElseIf(~\tcc*[h]{Leaf in time instant}){$z_t > |T_t| ~\mathbf{or}~z_t=-1 ~\mathbf{or}~ T_t[z_t]=0$} {\If{$z_t\neq -1 ~\mathbf{and}~ T_t[z_t]=0$}{$eq \leftarrow eqB[rank_0(T_t,z_t)]$\\
\lIf{$eq=1$}
{$z_t\leftarrow -1$ }
\lElse{\Return $maxval_s+ZigZag\_Decoded(maxval_t)$
}
} \Return {\bf getCell}$(n/k,r\,\textrm{mod}\,(n/k),c\,\textrm{mod}\,(n/k),z_s,z_t, maxval_s,maxval_t)$\\} \Else(~\tcc*[h]{Both internal nodes}) { \Return {\bf getCell}$(n/k,r\,\textrm{mod}\,(n/k),c\,\textrm{mod}\,(n/k),z_s,z_t, maxval_s,maxval_t)$\\ }
\end{algorithm}
In lines 1-11, the algorithm obtains the child of the processed node that contains the queried cell, provided that in a previous step, the algorithm did not reach a leaf node (signaled with $z_s$/$z_t$ set to $-1$). In $maxval_s$ ($maxval_t$), the algorithm stores the maximum value stored in that node.
If the condition in line 12 is true, the algorithm has reached a leaf in both trees, and thus the values stored in $maxval_s$ and $maxval_t$ are added/subtracted to obtain the final result. If the condition of line 15 is true, the algorithm reaches a leaf in the snapshot. This is signaled by setting $z_s$ to $-1$ and then a recursive call continues the process.
The {\em If} in line 19 treats the case of reaching a leaf in $M_t$. If the condition of line 20 is true, the algorithm uses bitmap $eqB$ to check if the uniformity is in the original $M_t$ submatrix or if it is in the submatrix resulting from applying the differences between the corresponding submatrix in $M_s$ and $M_t$. A $1$ in $eqB$ implies the latter case, and this is solved by setting $z_t$ to $-1$ and performing a recursive call. A $0$ means that the treated original submatrix of $M_t$ has the same value in all cells, and that value can be obtained adding/subtracting the values stored in $maxval_s$ and $maxval_t$, since the unique value in the submatrix of $M_t$ is encoded as a difference with respect to the maximum value of the same submatrix of $M_s$, and thus the traversal ends.
The last case is that the treated nodes are not leaves, that simply requires a recursive call.
\subsubsection*{Retrieving cells with range of values in a time instant:}
$\langle[r_i,c_i]\rangle \leftarrow getCells(v_b,v_e,r_1,r_2,c_1,c_2,t)$ obtains from the raster of the time instant $t$, the positions of all cells within a region $[r_1,r_2]\times [c_1,c_2]$ containing values in the range $[v_b,v_e]$.
Again, if $t$ is represented with a snapshot, the query is solved with the normal algorithm of the $\ensuremath{\mathsf{k^2raster}}\xspace$. Otherwise, as in the previous query, the search involves a synchronized top-down traversal of both trees. This traversal requires two main changes: (i) it probably has to follow several branches of both trees, since the queried region can overlap the submatrices corresponding to several nodes of the tree; (ii) at each level, the algorithm has to check whether the maximum and minimum values in those submatrices are compatible with the queried range, discarding those that fall outside the range of values sought.
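For intuition, the following sketch (an illustration of ours, not the paper's pseudocode) shows the two pruning tests applied at each visited node; a node is expanded only when both succeed, and when $[min,max]$ falls entirely inside $[v_b,v_e]$ all the cells of the (window-clipped) submatrix qualify, so only their positions need to be enumerated:

\begin{verbatim}
def value_compatible(node_min, node_max, vb, ve):
    # the node may contain cells with values in [vb, ve]
    return node_max >= vb and node_min <= ve

def window_overlaps(r1, r2, c1, c2, row, col, size):
    # the node's size x size submatrix at (row, col) intersects
    # the queried window [r1, r2] x [c1, c2]
    return (row <= r2 and row + size - 1 >= r1 and
            col <= c2 and col + size - 1 >= c1)
\end{verbatim}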
\section{Experimental evaluation} \label{sec:experiments}
In this section we provide experimental results to show how $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ handles a dataset of raster data that evolve along time. We discuss both the space requirements of our representation and its performance at query time.
We used several synthetic and real datasets to test our representation, in order to show its capabilities. All the datasets are obtained from the TerraClimate collection~\cite{TerraClimate}, that contains high-resolution time series for different variables, including temperature, precipitations, wind speed, vapor pressure, etc. All the variables in this collection are taken in monthly snapshots, from 1958 to 2017. Each snapshot is a 4320$\times$8640 grid storing values with $1/24^\circ$ spatial resolution. From this collection we use data from two variables: TMAX (maximum temperature) is used to build two synthetic datasets, and VAP (vapor pressure) is compressed directly using our representation. Variable TMAX is a bad scenario for our approach, since most of the cells change their value between two snapshots. In this kind of dataset, our $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ would not be able to obtain good compression. Hence, we use TMAX to generate two synthetic datasets that simulate a slow, and approximately constant, change rate, between two real snapshots. We took the snapshots for January and February 2017 and built two synthetic datasets called T\_100 and T\_1000, simulating 100 and 1000 intermediate steps between both snapshots; however, note that to make comparisons easier we only take the first 100 time steps in both datasets. We also use a real dataset, VAP, that contains all the monthly snapshots of the variable VAP from 1998 to 2017. Note that, although we choose a rather small number of time instants in our experiments, the performance of our proposal is not affected by this value: it scales linearly in space with the number of time instants, and query times should be unaffected as long as the change rate is similar.
We compared our representation with two baseline implementations. The first, called $\ensuremath{\mathsf{k^2raster}}\xspace$\footnote{https://gitlab.lbd.org.es/fsilva/k2-raster} is a representation that stores just a full snapshot for each time instant, without trying to take advantage of similarities between close time instants. The second baseline implementation, \texttt{NetCDF}\xspace, stores the different raster datasets in NetCDF format, using straightforward algorithms on top of the NetCDF library\footnote{https://www.unidata.ucar.edu/software/netcdf/} (v.4.6.1) to implement the query operations. Note that NetCDF is a classical representation designed mainly to provide compression, through the use of Deflate compression over the data. Therefore, it is not designed to efficiently answer indexed queries.
We tested cell value queries (\textit{getCellValue}) and range queries (\textit{getCells}). We generated sets of 1000 random queries for each query type and configuration: 1000 random cell value queries per dataset, and sets of 1000 random range queries for different spatial window sizes (ranging from 4$\times$4 windows to the whole matrix), and different ranges of values (considering cells with 1 to 4 possible values). To achieve accurate results, when the total query time for a query set was too small, we repeated the full query set a suitable number of times (in practice, 100 or 1000 times) and measured the average time per query.
All tests were run on an Intel (R) Core TM i7-3820 CPU @ 3.60GHz (4 cores) with 10MB of cache and 64GB of RAM, over Ubuntu 12.04.5 LTS with kernel 3.2.0-126 (64 bits). The code is compiled using gcc 4.7 with -O9 optimizations.
\renewcommand{\arraystretch}{1.3}
\begin{table}[th]
\centering
\setlength\tabcolsep{3pt}
{\scriptsize
\begin{tabular}{l|cccccc|c|cccc|}
\cline{2-12}
& \multicolumn{6}{|c|}{\textbf{\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace} (varying $t_\delta$)} & \multirow{2}{*}{\textbf{\ensuremath{\mathsf{k^2raster}}\xspace}} & \multicolumn{4}{|c|}{\textbf{\texttt{NetCDF}\xspace} (varying deflate level)}\\
& \multicolumn{1}{|c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{6}} & \multicolumn{1}{c}{\textbf{8}} & \multicolumn{1}{c}{\textbf{10}} & \multicolumn{1}{c}{\textbf{20}} & \multicolumn{1}{c|}{\textbf{50}} & & \multicolumn{1}{|c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{5}} & \multicolumn{1}{c|}{\textbf{9}}\\
\hline
\multicolumn{1}{|l|}{\texttt{T\_100}} & 398.2 & 407.0 & 429.6 & 456.7 & 584.4 & 820.8 & 769.3 & 14241.3 & 615.3 & 539.5 & 528.0 \\
\hline
\multicolumn{1}{|l|}{\texttt{T\_1000}} & 170.4 & 152.5 & 151.2 & 154.6 & 196.2 & 304.6 & 496.6 & 14241.3 & 435.0 & 344.7 & 323.6 \\
\hline
\end{tabular} }
\caption{Space requirements (in MB) of \ensuremath{\mathsf{T\!-\!k^2raster}}\xspace, \ensuremath{\mathsf{k^2raster}}\xspace and \texttt{NetCDF}\xspace over synthetic datasets.}\label{table:syntheticsizes}
\end{table}
Table~\ref{table:syntheticsizes} displays the space requirements for the datasets T\_100 and T\_1000 in all the representations. We tested our \ensuremath{\mathsf{T\!-\!k^2raster}}\xspace with several sampling intervals $t_\delta$, and also show the results for $\texttt{NetCDF}\xspace$ using several deflate levels, from level 0 (no compression) to level 9. Our representation achieves the best compression results in both datasets, especially in T\_1000, as expected, due to the slower change rate. In T\_100, our approach achieves the best results for $t_\delta = 4$, since as the number of changes increases our differential approach becomes much less efficient. In T\_1000, the best results are also obtained for a relatively small $t_\delta$ (6-8), but our proposal is still smaller than $\ensuremath{\mathsf{k^2raster}}\xspace$ for larger $t_\delta$. $\texttt{NetCDF}\xspace$ is only competitive when compression is applied, otherwise it requires roughly 20 times the space of our representations. In both datasets, $\texttt{NetCDF}\xspace$ with compression enabled becomes smaller than the $\ensuremath{\mathsf{k^2raster}}\xspace$ representation, but $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ is able to obtain even smaller sizes.
\begin{figure}
\caption{Space/time trade-off on T\_100 and T\_1000 datasets for cell value queries.}
\label{fig:syntheticcelltimes}
\end{figure}
Figure~\ref{fig:syntheticcelltimes} shows the space/time trade-off for the datasets T\_100 and T\_1000 in cell value queries. We show the results only for NetCDF with compression enabled (deflate level 2 and 5), and for $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ with a sampling interval of 6 and 50. The $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ is slower than the baseline $\ensuremath{\mathsf{k^2raster}}\xspace$, but is much smaller if a good $t_\delta$ is selected. Note that we use two extreme sampling intervals to show the consistency of query times, since in practice only the best approach in space would be used for a given dataset. In our experiments we work with a fixed $t_\delta$, but straightforward heuristics could be used to obtain a space-efficient $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ without probing for different periods: for instance, the number of nodes in the tree of differences and in the snapshot is known during construction, so a new snapshot can be built whenever the size of the tree of differences increases above a given threshold.
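A possible realization of this heuristic is sketched below (our own illustration; \texttt{tree\_size} and \texttt{diff\_tree\_size} are hypothetical helpers returning the number of nodes of a snapshot tree and of a tree of differences, respectively):

\begin{verbatim}
def place_snapshots(rasters, threshold=0.8):
    # rasters: the sequence M_1..M_tau; returns the instants to encode
    # as snapshots, starting a new one whenever the tree of differences
    # grows beyond a fraction of the current snapshot's size
    snapshots = [0]
    snap_nodes = tree_size(rasters[0])        # hypothetical helper
    for t in range(1, len(rasters)):
        diff_nodes = diff_tree_size(rasters[snapshots[-1]], rasters[t])
        if diff_nodes > threshold * snap_nodes:
            snapshots.append(t)
            snap_nodes = tree_size(rasters[t])
    return snapshots
\end{verbatim}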
\begin{table}[th]
\centering
\setlength\tabcolsep{3pt}
{\scriptsize
\begin{tabular}{rc|rr|r|rr||rr|r|rr|}
\cline{3-12}
& & \multicolumn{5}{|c||}{\texttt{T\_100}} & \multicolumn{5}{|c|}{\texttt{T\_1000}} \\
\cline{3-12}
& & \multicolumn{2}{|c|}{\textbf{\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace}} & \multirow{2}{*}{\textbf{\ensuremath{\mathsf{k^2raster}}\xspace}} & \multicolumn{2}{|c||}{\textbf{\texttt{NetCDF}\xspace}} & \multicolumn{2}{|c|}{\textbf{\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace}} & \multirow{2}{*}{\textbf{\ensuremath{\mathsf{k^2raster}}\xspace}} & \multicolumn{2}{|c|}{\textbf{\texttt{NetCDF}\xspace}}\\
\cline{1-2}
\multicolumn{1}{|r}{\textbf{wnd}} & \textbf{rng} & \multicolumn{1}{|c}{\textbf{6}} & \multicolumn{1}{c|}{\textbf{50}} & & \multicolumn{1}{|c}{\textbf{2}} & \multicolumn{1}{c||}{\textbf{5}} & \multicolumn{1}{|c}{\textbf{6}} & \multicolumn{1}{c|}{\textbf{50}} & & \multicolumn{1}{|c}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{5}}\\
\hline
\multicolumn{1}{|r}{\textbf{16}} & \multicolumn{1}{c|}{\textbf{1}} & 3.6 & 3.8 & 2.8 & 6130 & 10070 & 3.3 & 3.4 & 2.5 & 6160 & 10020\\
\multicolumn{1}{|r}{\phantom{x}} & \multicolumn{1}{c|}{\textbf{4}} & 5.1 & 5.5 & 3.6 & 6240 & 10100 & 3.5 & 3.5 & 2.6 & 6160 & 10100\\
\hline
\multicolumn{1}{|r}{\textbf{256}} & \multicolumn{1}{c|}{\textbf{1}} & 222.9 & 248.1 & 163.9 & 9610 & 15330 & 207.1 & 228.9 & 167.6 & 9370 & 15110 \\
\multicolumn{1}{|r}{\phantom{x}} & \multicolumn{1}{c|}{\textbf{4}} & 429.3 & 489.4 & 301.7 & 9340 & 14790 & 213.4 & 234.3 & 172.7 & 9510 & 15240 \\
\hline
\multicolumn{1}{|r}{\textbf{ALL}} & \multicolumn{1}{c|}{\textbf{1}} & 111450 & 126220 & 78250 & 443830 & 580660 & 79650 & 89380 & 63350 & 436400 & 568730\\
\hline \end{tabular} }
\caption{Range query times over \texttt{T\_100} and \texttt{T\_1000} datasets. Times shown in $\mu$s/query for different spatial windows (wnd) and range of values (rng).}\label{table:syntheticrangetimes}
\end{table}
Table~\ref{table:syntheticrangetimes} shows an extract of the range query times for all the representations in datasets T\_100 and T\_1000. We only include here the results for $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ with a $t_\delta$ of 6 and 50, and for $\texttt{NetCDF}\xspace$ with deflate level 2 and 5, since query times with the other parameters report similar conclusions. We also show the results for some relevant spatial window sizes and ranges of values. In all the cases, $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ is around 50\% slower than $\ensuremath{\mathsf{k^2raster}}\xspace$, due to the need of querying two trees to obtain the results. However, the much smaller space requirements of our representation compensate for this query time overhead, especially in T\_1000. $\texttt{NetCDF}\xspace$, that is not designed for this kind of queries, cannot take advantage of spatial windows or ranges of values, so it is several orders of magnitude slower than the other approaches. The last query set (\texttt{ALL}) involves retrieving all the cells in the raster that have a given value (i.e. the spatial window covers the complete raster). In this context, $\texttt{NetCDF}\xspace$ must traverse and decompress the whole raster, but our representation cannot take advantage of its spatial indexing capabilities, so this provides a fairer comparison. Nevertheless, both $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ and $\ensuremath{\mathsf{k^2raster}}\xspace$ are still several times faster than $\texttt{NetCDF}\xspace$ in this case, and our proposal remains very close in query times to the $\ensuremath{\mathsf{k^2raster}}\xspace$ baseline.
\begin{figure}
\caption{ Results for VAP dataset. Left plot shows space/time tradeoff for cell value queries. Right table shows query times for range queries. Times in $\mu$s/query.}
\label{fig:vapresults}
\end{figure}
Figure~\ref{fig:vapresults} (left) shows the space/time trade-off for the real dataset VAP. Results are similar to those obtained for the previous datasets: our representation, $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$, is a bit slower in cell value queries than $\ensuremath{\mathsf{k^2raster}}\xspace$, but also requires significantly less space. The $\texttt{NetCDF}\xspace$ baseline is much slower, even if it becomes competitive in space when deflate compression is applied.
Finally, Figure~\ref{fig:vapresults} (right) displays the query times for all the alternatives in range queries over the VAP dataset. The $\ensuremath{\mathsf{k^2raster}}\xspace$ is again a bit faster than the $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$, as expected, but the time overhead is within 50\%. $\texttt{NetCDF}\xspace$ is much slower, especially in queries involving small windows, as it has to traverse and decompress a large part of the dataset just to retrieve the values in the window. Note that even if the window covers the complete raster, $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ and $\ensuremath{\mathsf{k^2raster}}\xspace$ achieve significantly better query times.
\section{Conclusions and future work} \label{sec:conclusions}
In this work we introduce a new representation for time-evolving raster data. Our representation, called $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$, is based on a compact data structure for raster data, the $\ensuremath{\mathsf{k^2raster}}\xspace$, that we extend to efficiently manage time series. Our proposal takes advantage of similarities between consecutive snapshots in the series, so it is especially efficient in datasets where few changes occur between consecutive time instants. The $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ provides spatial and temporal indexing capabilities, and is also able to efficiently filter cells by value. Results show that, in datasets where the number of changes is relatively small, our representation can compress the raster and answer queries very efficiently. Even if its space efficiency depends on the dataset change rate, the $\ensuremath{\mathsf{T\!-\!k^2raster}}\xspace$ is a good alternative to compress raster data with high temporal resolution, or slowly-changing datasets, in small space.
As future work, we plan to apply to our representation some improvements that have already been proposed for the $\ensuremath{\mathsf{k^2raster}}\xspace$, such as the use of specific compression techniques in the last level of the tree. We also plan to develop an adaptive construction algorithm, that selects an optimal, or near-optimal, distribution of snapshots to maximize compression.
\end{document}
\begin{document}
\begin{center} \Large \bf Bounded-Velocity Stochastic Control for Dynamic Resource Allocation
\footnote{An abbreviated preliminary form of this paper has appeared in a conference proceedings (without copyright transfer) \cite{GaLuSh+13}.} \end{center}
\author{} \begin{center} {Xuefeng Gao}\,
\footnote{Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong; ([email protected]).},
{Yingdong Lu}\,
\footnote{Mathematical Sciences Department, IBM Research AI~--~Science, Yorktown Heights, NY 10598, USA; ([email protected]).},
{ Mayank Sharma}\,
\footnote{AI and Blockchain Solutions Department, IBM Research, Yorktown Heights, NY 10598, USA; ([email protected]).},
{Mark S.~Squillante}\,
\footnote{Mathematical Sciences Department, IBM Research AI~--~Science, Yorktown Heights, NY 10598, USA; ([email protected]).},
{Joost W.~Bosman}\,
\footnote{Centrum Wiskunde \& Informatica, 1098 XG Amsterdam, The Netherlands; ([email protected])} \end{center}
\begin{abstract} We consider a general class of dynamic resource allocation problems within a stochastic optimal control framework. This class of problems arises in a wide variety of applications, each of which intrinsically involves resources of different types and demand with uncertainty and/or variability. The goal involves dynamically allocating capacity for every resource type in order to serve the uncertain/variable demand, modeled as Brownian motion, and maximize the discounted expected net-benefit over an infinite time horizon based on the rewards and costs associated with the different resource types, subject to flexibility constraints on the rate of change of each type of resource capacity. We derive the optimal control policy within a bounded-velocity stochastic control setting, which includes efficient and easily implementable algorithms for governing the dynamic adjustments to resource allocation capacities over time. Computational experiments investigate various issues of both theoretical and practical interest, quantifying the benefits of our approach over recent alternative optimization approaches. \end{abstract}
\section{Introduction}\label{sec:intro}
A general class of canonical forms of dynamic resource allocation problems arises naturally across a broad spectrum of computer systems, communication networks and business applications. As
their complexities continue to grow, together with ubiquitous advances in technology, new approaches and methods are required to effectively and efficiently solve canonical forms of general dynamic resource allocation problems in such complex system, network and application environments. These environments often consist of different types of resources that are allocated in combination to serve demand whose behavior over time includes diverse types of uncertainty and variability. Each type of resource has a different reward and cost structure that ranges from the best of a set of primary resource allocation options~--~having the highest reward, highest cost and highest net-benefit~--~to the next best primary resource allocation option~--~having the next highest reward, next highest cost and next highest net-benefit~--~and so on down to a secondary resource allocation option~--~having the lowest reward, lowest cost and lowest net-benefit. Each type of resource also has different degrees of flexibility and different cost structures with respect to making changes to the allocation capacity. The resource allocation optimization problem we consider consists of adaptively determining the vector of primary resource capacities and the secondary resource capacity that serve the uncertain/variable demand and that maximize the expected net-benefit over a time horizon of interest based on the foregoing structural properties of the different types of resources.
Motivated by this general class of dynamic resource allocation problems, we take a stochastic control approach that manages future risks associated with resource allocation decisions and uncertain demand, including the reward, cost and flexibility structures of the primary and secondary resource allocation options. Specifically, we consider the underlying fundamental stochastic optimal control problem where the dynamic control policy that allocates the set of primary resource capacities to serve uncertain/variable demand, modeled as Brownian motion, is a vector of absolutely continuous stochastic processes with constraints on its element-wise rates of change with respect to time (``bounded-velocity''), which in turn determines the secondary resource allocation capacity. The ultimate objective is to maximize the expected discounted net-benefit over an infinite horizon based on the structural properties of the different resources types, with the desired outcome rendering an explicit characterization of the optimal dynamic control policy that includes efficient and easily implementable algorithms for governing the dynamic adjustments to resource allocation capacities over time.
\subsection{Motivating applications}\label{sec:examples}
The wide variety of system, network and application domains in which arise the general class of canonical forms of dynamic resource allocation problems of interest in this study include cloud computing and data center environments, computer and communication networks, energy-aware computing and smart power grid environments, and business analytics and optimization, among many others. For example, large-scale cloud computing and data center environments often involve resource allocation over different server options (ranging from the fastest performance and most expensive to the slowest performance and least expensive) and different network bandwidth options (ranging from the highest guaranteed performance at the highest cost to opportunistic options at no cost, such as the Internet); e.g., refer to~\cite{AmCiGu+10,ChHeLi+08,ClFrHa+05,JaMeMo+12,PaSpTa+05}.
In related energy-aware computing environments, the dynamic resource allocation problem concerns the effective and efficient management of the consumption of energy by resources in the face of time-varying uncertain system demand and (especially in large-scale environments) energy prices; e.g., see~\cite{GuJaWi11,LiWiAn+11}. In particular, the system control policy dynamically adjusts the allocation of very high-performance, very high-power servers (best primary resource option), the next highest performance, next highest power servers (next primary resource option), and so on to satisfy the uncertain/variable system demand where any remaining demand is satisfied by low-performance, low-power servers (secondary resource), with the objective of maximizing expected (discounted) net-benefit over time. Here, the rewards (costs) are based on the performance (energy) properties of the type of servers allocated to satisfy demand over time, together with the additional per-unit costs incurred for respectively increasing or decreasing the allocation of the primary resource capacities over time. The resulting optimal dynamic control policy determines the adaptive control of the primary resource capacities at every time, subject to certain constraints and to the demand process.
Another motivating application is based on strategic business provisioning and workforce sourcing in human capital supply chains; refer to, e.g.,~\cite{Wagner-2011}. The demand for product and services offerings is satisfied through resource allocation from among a diversity of available supply options. These sourcing options include internal resources with the highest reward, highest cost and highest net-benefit, business-partner resources with the next highest reward, next highest cost and next highest net-benefit, and so forth down to contractor or crowdsourcing resources with the lowest reward, lowest cost and lowest net-benefit. For each supply option, examples of the rewards can be revenue and quality of service and examples of the costs can be salary and other compensation. The goal of the strategic human capital sourcing problem is to determine the portfolio of various supply options to meet the time-varying and uncertain demand in the sense of maximizing expected (discounted) net-benefit over time, including the costs incurred in hiring, reskilling, promoting and incentivizing to reduce attrition for the primary human capital resources over time. This also involves constraints on the rates of change in the capacities of primary options to accommodate the non-instantaneous adjustment of human capital resources and the limited availability of such resources in the labor market (see, e.g.,~\cite{DaDeSe+87}).
\subsection{Related work}
The research literature contains a great diversity of studies of resource allocation problems, with differing objective functions, control policies, and reward, cost and flexibility structures. A wide variety of approaches and methods have been developed and applied to (approximately) solve this diversity of resource allocation problems including, for example, online algorithms and dynamic programming. It is therefore important to compare and contrast our problem formulation and solution approach with some prominent and closely related alternatives. One classical instance of a dynamic resource allocation problem is the multi-armed bandit problem~\cite{mab2007} where the rewards are associated with tasks and the goal is to determine under uncertainty which tasks the resource should work on, rather than the other way around. Another widely studied problem is the \emph{ski-rental} or \emph{lease-or-buy} problem~\cite{linial1999} where there is demand for a resource, but it is not initially known how long the resource will be required. In each decision epoch, the choice is between two options: either lease the resource for a fee, or purchase the resource for a price much higher than the leasing fee. Our resource allocation problem differs from this situation in that there are multiple types of resources, each with an associated reward and cost per unit time of allocation, and the resources cannot be purchased outright.
From a methodological perspective, the general resource allocation problem we consider in this paper is closely related to the vast literature on stochastic control; refer to, e.g.,~\cite{Pham09,YonZho99}. Of particular relevance is the so-called \emph{bounded velocity follower problem}~\cite{BeShWi80,KarOco02} where the control is an absolutely continuous process with bounded derivative. For example, Bene{\v{s}} et al.~\cite{BeShWi80} consider this {bounded velocity follower problem} with a quadratic running cost objective function, where the authors propose a smooth-fit principle to characterize the optimal policy. In comparison with our study, the paper only considers a single resource, does not consider any costs associated with the actions taken by the control policy, and deals with a smoother objective function. Karatzas and Ocone~\cite{KarOco02} study an alternative bounded velocity follower problem where discretionary stopping is allowed and the objective is to choose a control law and a stopping time that minimizes the expected sum of a running cost and a termination cost. Davis et al.~\cite{DaDeSe+87} consider a control problem where a decision maker determines the rate of investment in capacity expansion to satisfy a Poisson demand process, showing that the optimal control is ``bang-bang'' type (either carry out construction at the maximal speed or take no action) and using computational methods to compute the value function. In contrast, we derive the optimal solution to a stochastic optimal control problem that does not allow discretionary stopping and that seeks to maximize the expected discounted net-benefit over an infinite horizon under a Brownian motion demand process, where we analytically characterize the corresponding value function and explicitly characterize the optimal dynamic control policy.
Our work is also related to, but different from, the problem of \emph{drift control} for Brownian motion (see, e.g., \cite{Ata2005,Matoglu2011,Matoglu2015}) where the controller can, at some cost, shift the drift among a finite set of alternatives, and the problem of \emph{bounded variation control} for diffusion processes (see, e.g., \cite{Weerasinghe2005}) where the available control is an added bounded variation process (but without bounded velocity constraints). Additional related studies include reversible/irreversible investment and capacity planning using a stochastic control approach, for which we refer the interested reader to \cite{abel1996,bentolila1990firing,federico2012smooth,guo2005optimal,guoandtomecek09,MerhiandZervos07,shepp1996hiring} as well as \cite{van2003commissioned} for a survey and a list of references.
From an applications perspective, there is a growing interest and vast literature in the computer system, communication network and operations research communities to address allocation problems involving various types of resources associated with computation, memory, bandwidth and/or energy; refer to, e.g., \cite{CioFar12,Dieker2012,JaMeMo+12,Kasbekar2014,LiWiAn+11,Urgaonkar2010} and the references therein. We limit our discussions here to the works of Lin et al.~\cite{LiWiAn+11} and Ciocan and Farias~\cite{CioFar12} since we will compare our dynamic control policy through computational experiments with the optimization algorithms from both of these studies that are cast within discrete time-interval models. Lin et al.~\cite{LiWiAn+11} study the problem of dynamically adjusting the number of active servers in a data center as a function of demand to minimize operating costs. They consider average demand over small intervals of time, subject to system constraints, and propose an optimal offline algorithm together with an online algorithm that is shown to be within a constant factor worse than the proposed optimal offline policy. Ciocan and Farias~\cite{CioFar12} study model predictive control for a large class of dynamic resource allocation problems with stochastic demand rate processes. They develop
an online algorithm for demand allocation within appropriately selected time intervals that relies on frequent reoptimization using suitably updated demand forecasts. Our work differs from these two and the aforementioned studies in that we take a stochastic control approach for the dynamic resource allocation optimization problem where the control has bounded velocity constraints and associated costs.
\subsection{Our contributions}
The goal of our study is to determine the optimal dynamic risk-hedging strategy for managing the portfolio of primary resource allocation options and the secondary resource allocation option so as to maximize the expected discounted net-benefit over an infinite time horizon based on the structural properties of the different primary and secondary resources. We show this problem to be equivalent to a minimization problem involving a piecewise-linear running cost and a proportional cost for making adjustments to the control policy process. Our solution approach is based on explicitly constructing a twice continuously differentiable (with the exception of at most three points) solution to the corresponding Hamilton-Jacobi-Bellman equation. Our theoretical results also include an explicit characterization of the dynamic control policy, which is of threshold type, and a verification through a martingale argument that this control policy is optimal. In contrast to an optimal static allocation strategy, in which a single primary capacity allocation vector is determined to maximize expected net-benefit over the entire time horizon, our theoretical results establish that the optimal dynamic control policy adjusts its allocation decisions in primary and secondary resources to hedge against the risks of under-allocating primary resource capacities (resulting in lost reward opportunities) and over-allocating primary resource capacities (resulting in incurred cost penalties).
Our study provides important methodological contributions and new theoretical results by deriving the solution of a fundamental stochastic optimal control problem. This stochastic optimal control solution approach highlights the importance of timely and adaptive decision making in the allocation of a mixture of different resource options with distinct features in optimal proportions to satisfy time-varying and uncertain demand. Our study also provides important algorithmic contributions through a new class of online policies for dynamic resource allocation problems arising across a wide variety of application domains. Computational experiments quantify the effectiveness of our optimal online dynamic control algorithm over recent work in the area, including comparisons demonstrating how our optimal online algorithm significantly outperforms the optimal offline algorithm recently proposed within a discrete-time framework in~\cite{LiWiAn+11}, which appears to be related to the optimal online model predictive control algorithm proposed in~\cite{CioFar12} within a different discrete-time stochastic optimization framework. This includes relative improvements up to $90\%$ and $130\%$ in comparison with the optimal offline algorithm considered in~\cite{LiWiAn+11}, and even larger relative improvements of more than $150\%$ and $230\%$ in comparison with the optimal online algorithm in~\cite{CioFar12}.
An abbreviated preliminary form of this paper appeared in conference proceedings (without copyright transfer)~\cite{GaLuSh+13}, in which we presented a subset of our results on the optimal control policy for resource allocation problems with only a single primary resource option and a secondary resource option, together with limited computational experiments demonstrating some of the benefits of our optimal dynamic control policy over the optimal offline algorithm proposed in~\cite{LiWiAn+11}. The current paper significantly extends the preliminary conference paper in four important aspects: first, we present our complete derivation of the optimal control policy for resource allocation problems with single primary and secondary resource options; second, we present a more thorough set of computational experiments that demonstrate the significant benefits of our optimal dynamic control policy over both the optimal offline algorithm in~\cite{LiWiAn+11} and the optimal online algorithm proposed in~\cite{CioFar12}; third, we extend our theoretical results beyond the case of a single primary resource allocation option to a general case with multiple primary resource allocation options and provide computational experiments that compare the single and multiple primary resource allocation models; finally, we provide rigorous proofs of all our main results for both resource allocation models under all possible conditions on the model parameters.
The remainder of this paper is organized as follows. Section~\ref{sec:math} defines our mathematical models of the stochastic resource allocation optimization problem, first for the case of a single primary resource and then for a version of the multiple primary resources case. Our mathematical formulations and main results for the corresponding stochastic control problems are presented in Sections~\ref{sec:main} and \ref{sec:main2}, respectively. A representative sample of numerous computational experiments is discussed in Section~\ref{sec:computational}.
Concluding remarks are provided in Section~\ref{sec:conclusion}, including a discussion of the use of model predictive control and learning to determine the parameters of our optimal dynamic control policy over time. All of our technical proofs are provided in the appendices.
\section{Mathematical Models}\label{sec:math}
We investigate a general class of fundamental resource allocation problems in which a set of \emph{primary} resource allocation options and a \emph{secondary} resource allocation option are available to satisfy demand whose behavior over time includes uncertainty and/or variability. There is an important reward and cost ordering among these resource options where the first primary resource allocation option has the highest net-benefit, followed by the second primary resource allocation option having the next highest net-benefit, and so on, with the (single) secondary resource allocation option having the lowest net-benefit. In addition, the primary resource capacities are somewhat less flexible in the sense that their rates of change at any instant of time are bounded, whereas the secondary resource capacity is more flexible in this regard (as made more precise below). Beyond these differences, the set of primary resources and the secondary resource are capable of serving the demand, and all of this demand needs to be served, i.e., there is no loss of demand.
To elucidate the exposition of our analysis, we first consider the single primary and secondary instance of our general resource allocation model. This instance captures the key aspects of the fundamental trade-offs among the net-benefits of the various resource allocation options together with their associated risks. We then consider our general resource allocation model comprising multiple primary resources and a secondary resource together with an important relationship maintained among the primary resources.
\subsection{System model: Single primary resource}\label{sec:model:Single}
We consider the stochastic optimal control problems underlying our resource allocation model in which uncertain and/or variable demand needs to be served by the primary resource allocation capacity and the secondary resource allocation capacity. A control policy defines at every time $t \in \mathbb{R}_+$ the level of capacity allocation for the primary resource, denoted by $P(t)$, and the level of secondary resource capacity allocation, denoted by $S(t)$, that are used in combination to satisfy the uncertain/variable demand, denoted by $D(t)$.
The demand process $\{D(t): t \ge 0 \}$ is modeled by \begin{eqnarray} dD(t) &=& b dt +\sigma d{W}(t), \label{eqn:demand} \end{eqnarray} where $b \in \mathbb{R}$ is the demand growth/decline rate (which can be extended to a deterministic function of time),
$\sigma>0$ is the demand volatility, and ${W}(t)$ is a one-dimensional standard Brownian motion, whose sample paths are almost surely continuous but nowhere differentiable~\cite{KarShr91}. This model is natural when the demand forecast involves Gaussian noise; see, e.g., \cite{ChoMeyn2010,Yao2010} for similar Brownian demand models. The demand process is served by the combination of primary and secondary resource allocation capacities $P(t) + S(t)$. Given the higher net-benefit structure of the primary resource option, the optimal dynamic control policy seeks to determine an absolutely continuous stochastic process $P(\cdot)$ describing the primary resource allocation capacity to serve the demand $D(\cdot)$ such that any remaining demand is served by the secondary resource allocation capacity $S(\cdot)$.
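For concreteness, a sample path of the demand model \eqref{eqn:demand} can be generated by an Euler discretization of the stochastic differential equation on a uniform time grid. The following minimal Python sketch illustrates this; all parameter values are illustrative assumptions and are not taken from our computational experiments.
\begin{verbatim}
# Minimal sketch: Euler discretization of dD(t) = b dt + sigma dW(t).
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
b, sigma, D0 = 0.1, 1.0, 5.0       # drift, volatility, initial demand
dt, T = 1e-3, 10.0                 # time step and horizon
n = int(T / dt)
dW = np.sqrt(dt) * rng.standard_normal(n)        # Brownian increments
D = D0 + np.concatenate(([0.0], np.cumsum(b * dt + sigma * dW)))
\end{verbatim}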
Let $R_p(t)$ and $C_p(t)$ respectively denote the reward and cost associated with the primary resource allocation capacity $P(t)$ at time $t$. The rewards $R_p(t)$ are linear functions of the primary resource capacity and the demand, whereas the costs $C_p(t)$ are linear functions of the primary resource capacity. We therefore have \begin{equation} R_p(t) \; = \; \mathcal{R}_p\times [P(t) \wedge D(t)]
\qquad\qquad \mbox{and} \qquad\qquad C_p(t) \; = \; \mathcal{C}_p\times P(t) , \quad \label{eq:benefitP:Single} \end{equation} where $x\wedge y := \min\{x, y\}$, $\mathcal{R}_p \geq 0$ captures all per-unit rewards for serving demand with the primary resource capacity, $\mathcal{C}_p \geq 0$ captures all per-unit costs for the primary resource capacity, and $\mathcal{R}_p > \mathcal{C}_p$. Observe that the rewards for the primary resource are linear in $P(t)$ as long as $P(t) \leq D(t)$, otherwise any primary resource capacity exceeding the demand will solely incur costs without rendering rewards. Hence, from a risk hedging perspective, the risks associated with the primary resource allocation position at time $t$, $P(t)$, concern lost reward opportunities whenever $P(t) < D(t)$ on the one hand and concern incurred cost penalties whenever $P(t) > D(t)$ on the other hand.
In addition, any adjustments to the primary resource allocation capacities have associated costs, where we write $\mathcal{I}_p$ and $\mathcal{D}_p$ to denote the per-unit costs of increasing and decreasing the decision process $P(t)$, respectively. Namely, $\mathcal{I}_p$ represents the per-unit cost whenever the allocation of the primary resource capacity is being increased while $\mathcal{D}_p$ represents the per-unit cost whenever the allocation of the primary resource capacity is being decreased.
Since all the remaining demand is served by the secondary resource allocation capacity, we therefore have \begin{equation} S(t) \; = \; [D(t) - P(t)]^+ . \label{eq:secondary:Single} \end{equation} The corresponding reward function $R_s(t)$ and cost function $C_s(t)$ are then given by \begin{equation} R_s(t) \; = \; \mathcal{R}_s\times [D(t) - P(t)]^+
\qquad\qquad \mbox{and} \qquad\qquad C_s(t) \; = \; \mathcal{C}_s\times [D(t) - P(t)]^+ , \quad \label{eq:benefitS:Single} \end{equation} where $x^+ := \max\{x,0\}$, $\mathcal{R}_s \geq 0$ captures all per-unit rewards for serving demand with the secondary resource capacity, $\mathcal{C}_s \geq 0$ captures all per-unit costs for the secondary resource capacity, and $\mathcal{R}_s > \mathcal{C}_s$.
Hence, from a risk hedging perspective, the secondary resource allocation position at time $t$, $S(t)$, is riskless in the sense that rewards and costs are both linear in the resource capacity actually used.
\subsection{System model: Multiple primary resources}\label{sec:model:Multiple}
We next consider the stochastic optimal control problems underlying our resource allocation models in which uncertain and/or variable demand needs to be served by the set of primary resource allocation capacities and the secondary resource allocation capacity. Let $\mathcal{P}$ denote the number of primary resource options. A control policy then defines at every time $t \in \mathbb{R}_+$ the level of capacity allocation for all primary resources, respectively denoted by $P_1(t), P_2(t), \ldots, P_{\mathcal{P}}(t)$, and the level of secondary resource capacity allocation, denoted by $S(t)$, that are used in combination to satisfy the uncertain/variable demand, denoted by $D(t)$.
The demand process $\{D(t): t \ge 0 \}$ continues to be modeled as in \eqref{eqn:demand}. This demand process is served by the combination of primary and secondary resource allocation capacities $P_1(t)+P_2(t)+\ldots+P_{\mathcal{P}}(t)+S(t)$, where $P_i(\cdot)$ has absolutely continuous paths for each $i=1, \ldots, \mathcal{P}$. Given the higher net-benefit structures of the primary resource options, the optimal dynamic control policy seeks to determine at every time $t \in \mathbb{R}_+$ the primary resource allocation capacity vector $\mathbf{P}(t) = [P_1(t), \ldots, P_{\mathcal{P}}(t)]$ to serve the demand $D(t)$ such that any remaining demand is served by the secondary resource allocation capacity $S(t)$.
The corresponding stochastic control problem becomes high-dimensional in the presence of multiple primary resources, and thus it is inherently prohibitive to solve analytically or computationally in general.
We therefore introduce in our system model definition the notion of coordination through a contractual agreement among the multiple primary resources, namely, the model definition includes a contract that fixes the ratio of capacities among the primary resources. From a mathematical perspective, this contract-based system model definition makes it possible to derive explicit solutions for the corresponding stochastic control problem. In particular, by introducing such a contractual agreement among the multiple primary resource options, the dimensionality of the stochastic control problem is reduced, and our mathematical derivations exploit and build on the results we obtain for the single primary resource model.
From an applications perspective, our contract-based model definition is consistent with coordinated contractual agreements that have been adopted in various resource allocation problems in practice, such as strategic sourcing in human capital supply chains (see, e.g., \cite{bulmash2010} and the strategic sourcing example in the introduction) because they capture important business relationships and are easy to implement.
More formally, we introduce a nonnegative vector $w=(w_1, \ldots, w_{\mathcal{P}}) \in \mathbb{R}_+^{\mathcal{P}}$, with $\sum_{i=1}^{\mathcal{P}} w_i =1$, to represent the contract-based fixed distribution of capacities among the multiple primary resources. Then, for each $t \ge 0$, we set \begin{eqnarray} \label{eq:multip} P(t) := \sum_{i=1}^{\mathcal{P}} P_i(t), \qquad\qquad \text{and} \qquad\qquad P_i(t) = w_i P(t), \quad \text{for $i=1, \ldots, \mathcal{P}$}, \end{eqnarray} where $P(t)$ (with a slight abuse of notation) represents the aggregate capacity of primary resources and $w_i$ is set to maintain the initial (agreed upon) percentage $P_i(0)/P(0)$. In other words, for a given contract vector $w$, all $\mathcal{P}$ primary resource allocations must maintain the relationship \eqref{eq:multip} at all times $t$.
For each primary resource type $i=1, \ldots, \mathcal{P}$, let $R_{p,i}(t)$ and $C_{p,i}(t)$ respectively denote the reward and cost associated with the $i$th primary resource allocation capacity at time $t$. When the collection of primary resource allocations exceeds the demand, the rewards $R_{p,i}(t)$ are linear functions of the fraction of the demand served by the $i$th primary resource allocation capacity, namely $w_i \cdot D(t)$, from \eqref{eq:multip}. However, when the demand exceeds the collection of primary resource allocations, then the rewards $R_{p,i}(t)$ are linear functions of the $i$th primary resource allocation capacity, which is less than $w_i \cdot D(t)$ due to \eqref{eq:multip}. The costs $C_{p,i} (t)$ are linear functions of the $i$th primary resource allocation capacity. We therefore have \begin{eqnarray} R_{p,i}(t) \; = \; \mathcal{R}_{p,i}\times [P_i(t) \wedge (w_i \cdot D(t))] & \qquad \mbox{and} \qquad & C_{p,i}(t) \; = \; \mathcal{C}_{p,i}\times P_i(t) , \quad \label{eq:benefitP:Multiple} \end{eqnarray} where $x\wedge y := \min\{x, y\}$, $\mathcal{R}_{p,i} \geq 0$ captures all per-unit rewards for serving demand with the $i$th primary resource capacity, $\mathcal{C}_{p,i} \geq 0$ captures all per-unit costs for the $i$th primary resource capacity, and $\mathcal{R}_{p,i} > \mathcal{C}_{p,i}$. The per-unit reward and the per-unit cost for the aggregate primary resource capacity $P(t)$, under a given contract vector $w$, are then respectively given by \begin{eqnarray} \label{eq:Rpw} \mathcal{R}_{p}(w) = \sum_{i=1}^{\mathcal{P}} w_i \mathcal{R}_{p,i}, \qquad \mathcal{C}_{p}(w) = \sum_{i=1}^{\mathcal{P}} w_i \mathcal{C}_{p,i}. \end{eqnarray}
Observe that the rewards for the $i$th primary resource allocation are linear in $P_i(t)$ as long as $P_i(t) \leq w_i D(t)$, or equivalently $P(t) \leq D(t)$; otherwise the fraction of the $i$th primary resource capacity exceeding $w_i D(t)$ will solely incur costs without rendering rewards. Hence, from a risk hedging perspective, the risks associated with the collection of primary resource allocations at time $t$ concern lost reward opportunities whenever $P(t) < D(t)$ on the one hand and concern incurred cost penalties whenever $P(t) > D(t)$ on the other hand.
Similarly to \eqref{eq:Rpw}, as any adjustments to the primary resource allocation capacities have associated costs, let $\mathcal{I}_{p,i}$ and $\mathcal{D}_{p,i}$ respectively denote the per-unit cost associated with increasing and decreasing the allocation of the $i$th primary resource capacity. The per-unit costs of increasing and decreasing the allocation of the aggregate primary resource capacity are then respectively given by \begin{eqnarray} \label{eq:Ipw} \mathcal{I}_{p}(w) = \sum_{i=1}^{\mathcal{P}} w_i \mathcal{I}_{p,i}, \qquad \mathcal{D}_{p}(w) = \sum_{i=1}^{\mathcal{P}} w_i \mathcal{D}_{p,i} . \end{eqnarray}
Since the optimal dynamic control policy serves all remaining demand with secondary resource allocation capacity, we therefore have \begin{equation} S(t) \; = \; [D(t) - \sum_{i=1}^{\mathcal{P}} P_i(t)]^+ . \label{eq:secondary:Multiple} \end{equation} The corresponding reward function $R_s(t)$ and cost function $C_s(t)$ are then given by \begin{eqnarray} R_s(t) \; = \; \mathcal{R}_s\times [D(t) - \sum_{i=1}^{\mathcal{P}} P_i(t)]^+ & \qquad \mbox{and} \qquad & C_s(t) \; = \; \mathcal{C}_s\times [D(t) - \sum_{i=1}^{\mathcal{P}} P_i(t)]^+ , \quad \label{eq:benefitS:Multiple} \end{eqnarray} where $x^+ := \max\{x,0\}$, $\mathcal{R}_s \geq 0$ captures all per-unit rewards for serving demand with secondary resource capacity, $\mathcal{C}_s \geq 0$ captures all per-unit costs for secondary resource capacity, and $\mathcal{R}_s > \mathcal{C}_s$.
Hence, from a risk hedging perspective, the secondary resource allocation position at time $t$, $S(t)$, is riskless in the sense that rewards and costs are both linear in the resource capacity actually used.
\section{Main Results: Single Primary Resource}\label{sec:main}
In this section we consider our main results on the optimal dynamic control policy for the stochastic optimal control problem when there is a single primary resource.
After providing a formulation of the stochastic control problem and some technical preliminaries, we present our main results under different conditions for the values of $\mathcal{I}_p \geq 0$ and $\mathcal{D}_p \geq 0$, including the special case in which both are zero.
\subsection{Problem formulation}\label{sec:formulation:Single}
The stochastic optimal control problem associated with the system model of Section~\ref{sec:model:Single} allows the dynamic control policy to adjust its allocation positions in primary and secondary resource capacities based on the demand realization observed up to the current time, which we call the risk-hedging position of the dynamic control policy. More formally, in the single primary resource case, the decision process $P(t)$ is adapted to the filtration $\mathcal{F}_t$ generated by $\{D(s): s \le t\}$. Then the objective of the optimal dynamic control policy is to maximize the expected discounted net-benefit over an infinite horizon, where net-benefit at time $t$ consists of the difference between the rewards and costs from the primary resource allocation capacity and the secondary resource allocation capacity, minus the additional costs for adjustments to $P(t)$.
In formulating the corresponding stochastic optimization problem, we impose a pair of additional conditions on the decision process $\{P(t): t \ge 0\}$ based on practical aspects of the diverse application domains motivating our study. The control policy cannot
make unbounded adjustments in the primary resource allocation capacity at any instant in time; i.e., the amount of change in $P(t)$ at time $t$ is restricted (even if only to a very small extent) by various factors. We therefore assume that the rate of change in the primary resource allocation capacity by the control policy is bounded. More precisely, for an absolutely continuous process $P(\cdot)$, there is a pair of finite constants $\theta_\ell < 0$ and $\theta_u > 0$ such that \text{for each $t \ge 0$} \begin{eqnarray} \label{eq:adjbound:Single} \theta_\ell \; \le \; \dot P(t) \; \le \; \theta_u, \end{eqnarray} where $\dot P(t)$ denotes the derivative of the decision variable $P(\cdot)$ with respect to time. On the other hand, the ability of the control policy to make adjustments to the secondary resource capacity in response to changes in the primary resource capacity tends to be more flexible such that \eqref{eq:secondary:Single} holds at all times $t$.
Now we can present the mathematical formulation of our stochastic optimization problem for the case of a single primary resource. Defining \begin{eqnarray*} N_p(t) \; := \; R_p(t) - C_p(t) & \qquad\qquad \mbox{and} \qquad\qquad & N_s(t) \; := \; R_s(t) - C_s(t) , \end{eqnarray*} we seek to determine the optimal dynamic control policy that solves the problem (SC-OPT:S) \begin{eqnarray} \max_{P(\cdot)} \quad && \mathbb{E} \int_0^\infty e^{-\alpha t} \left[ N_p(t) + N_s(t) \right] dt - \mathbb{E} \int_0^\infty e^{-\alpha t} [ \mathcal{I}_p \cdot \indi{\dot P(t)>0} ] dP(t) \nonumber \\ && \qquad\qquad\qquad\qquad\qquad\qquad\quad - \; \mathbb{E} \int_0^\infty e^{-\alpha t} [ \mathcal{D}_p \cdot \indi{\dot P(t)<0} ] d(-P(t)) \qquad \label{opt1:obj:Single} \\ \mbox{s.t.} && -\infty \; < \; \theta_\ell \; \le \; \dot P(t) \; \le \; \theta_u \; < \; \infty , \qquad\qquad \mbox{for $t \ge 0$}, \qquad \label{opt1:st1:Single} \\ && \qquad dD(t) \; = \; b dt +\sigma d{W}(t), \qquad\qquad \mbox{for $t \ge 0$}, \label{opt1:st2:Single} \end{eqnarray} where $\alpha$ is the discount factor and $\indi{A}$ denotes the indicator function returning $1$ if $A$ is true and $0$ otherwise. The control variable is the rate of change in the primary resource capacity by the control policy at every time $t$ subject to the lower and upper bound constraints on $\dot P(t)$ in \eqref{opt1:st1:Single}. Note that the second (third) expectation in \eqref{opt1:obj:Single} causes a decrease with rate $\mathcal{I}_p$ ($\mathcal{D}_p$) in the value of the objective function whenever the control policy increases (decreases) $P(t)$.
\subsection{Preliminaries}\label{sec:main:prelim:Single}
For notational convenience, we define the constants \begin{eqnarray} r_1 \; := \; \frac{b+ \sqrt{b^2 +2 \alpha \sigma^2 }}{\sigma^2} \; > \; 0, \label{eq:r1} & \quad & r_2 \; := \; \frac{b- \sqrt{b^2 +2 \alpha \sigma^2 }}{\sigma^2} \; < \; 0, \label{eq:r2} \\ s_1 \; := \; \frac{b- \theta_u + \sqrt{(b-\theta_u)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; > \; 0, \label{eq:s1} & \quad & s_2 \; := \; \frac{b- \theta_u - \sqrt{(b-\theta_u)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; < \; 0 , \label{eq:s2} \\ t_1 \; := \; \frac{b- \theta_\ell + \sqrt{(b-\theta_\ell)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; > \; 0, \label{eq:t1} & \quad & t_2 \; := \; \frac{b- \theta_\ell - \sqrt{(b-\theta_\ell)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; < \; 0 . \label{eq:t2} \end{eqnarray} These quantities are the roots of the quadratic equation \[ \frac{\sigma^2}{2} y^2 + (\theta -b) y - \alpha =0,\] where the root pairs $(r_1, r_2)$, $(s_1, s_2)$ and $(t_1, t_2)$ correspond to $\theta = 0$, $\theta = \theta_u$ and $\theta = \theta_\ell$, respectively.
Next, we turn to consider the first expectation in the objective function \eqref{opt1:obj:Single} of the stochastic optimization problem (SC-OPT:S), which can be simplified as follows. Define \[ X(t) \; := \; P(t) - D(t) , \qquad\qquad \mathcal{N}_p \; := \; \mathcal{R}_p - \mathcal{C}_p , \qquad\qquad \mathcal{N}_s \; := \; \mathcal{R}_s - \mathcal{C}_s , \] and $x^- := -\min\{x,0\}$. Upon substituting \eqref{eq:benefitP:Single} and \eqref{eq:benefitS:Single} into the first expectation in \eqref{opt1:obj:Single}, and making use of the fact that \begin{eqnarray*} [P(t) \wedge D(t)] &=& D(t) - [ D(t) - P(t) ]^{+} ,
\end{eqnarray*} we obtain \begin{equation} \mathbb{E} \left[ \int_0^\infty e^{-\alpha t} [-\mathcal{C}_p X(t) + (\mathcal{N}_s-\mathcal{R}_p) X(t)^-] dt \right] + \mathcal{N}_p \mathbb{E} \left[ \int_0^\infty e^{-\alpha t} D(t) dt \right] . \label{eq:simplify} \end{equation} Note that the second expectation in \eqref{eq:simplify} represents the expected discounted cumulative demand over the infinite horizon. Since this second summand in \eqref{eq:simplify} does not depend on the control variable $P(t)$, this term plays no role in determining the optimal dynamic control policy.
Together with the above results, we derive the following stochastic optimization problem which is equivalent to the original problem formulation (SC-OPT:S): \begin{eqnarray} \min_{P(\cdot)} \quad && \mathbb{E}_x \bigg[ \int_0^\infty e^{-\alpha t} \Big\{ \left( \mathcal{C}_+ X(t)^+ + \mathcal{C}_- X(t)^{-} \right) dt + \left( \mathcal{I}_p \indi{\dot P(t)>0} - \mathcal{D}_p \indi{\dot P(t)<0} \right) dP(t) \Big\} \bigg] \qquad \label{optHF:obj} \\ \mbox{s.t.} && -\infty \; < \; \theta_\ell \; \le \; \dot P(t) \; \le \; \theta_u \; < \; \infty , \qquad\qquad \mbox{for $t \ge 0$}, \label{optHF:st1} \\ && dX(t) \; = \; dP(t) - b dt - \sigma d{W}(t), \qquad\qquad \mbox{for $t \ge 0$}, \label{optHF:st2} \\ && X(0) \; = \; x , \qquad\qquad \mathcal{C}_+ \; = \; \mathcal{C}_p , \qquad\qquad \mathcal{C}_- \; = \; \mathcal{N}_p - \mathcal{N}_s, \label{optHF:st3} \end{eqnarray} where $\mathbb{E}_x[\cdot]$ denotes expectation with respect to the initial state distribution (i.e., state at time $t=0$) being $x$ with probability one.
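As a sanity check on this reduction, the pointwise identity $N_p(t) + N_s(t) = -\left(\mathcal{C}_+ X(t)^+ + \mathcal{C}_- X(t)^{-}\right) + \mathcal{N}_p D(t)$ can be verified numerically; the following minimal sketch does so on random samples, with illustrative (assumed) per-unit rewards and costs.
\begin{verbatim}
# Sanity check of the pointwise net-benefit reduction, with X = P - D,
# C_plus = C_p and C_minus = N_p - N_s. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
Rp, Cp, Rs, Cs = 1.5, 0.6, 0.8, 0.5
Np, Ns = Rp - Cp, Rs - Cs
for _ in range(10000):
    P, D = rng.uniform(0.0, 10.0, size=2)
    X = P - D
    lhs = Rp * min(P, D) - Cp * P + (Rs - Cs) * max(D - P, 0.0)
    rhs = -(Cp * max(X, 0.0) + (Np - Ns) * max(-X, 0.0)) + Np * D
    assert np.isclose(lhs, rhs)
\end{verbatim}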
We use $V(x)$ to represent the optimal value of the objective function \eqref{optHF:obj}; namely, $V(x)$ is the value function of the corresponding stochastic dynamic program. Given its equivalence with the original optimization problem (SC-OPT:S), the remainder of this section will focus on the stochastic dynamic program formulation in \eqref{optHF:obj}~--~\eqref{optHF:st3}.
Finally, for additional convenience in stating our main results, we further define the following constants \begin{eqnarray*} B_1 := ({\mathcal{C}_+}-\alpha \mathcal{D}_p) (t_2 -r_2), & \qquad &
B_2 := (\mathcal{C}_- - \alpha \mathcal{I}_p) (s_1 -r_2), \qquad\qquad
B_3 := (\mathcal{C}_+ + \mathcal{C}_-) (-r_2), \\ J_1 := ({\mathcal{C}_+}-\alpha \mathcal{D}_p) (r_1 -t_2), & \qquad &
J_2 := (\mathcal{C}_- - \alpha \mathcal{I}_p) (r_1 -s_1), \qquad\qquad
J_3 := (\mathcal{C}_+ + \mathcal{C}_-) r_1, \\ A := (\mathcal{C}_+ + \alpha \mathcal{I}_p) (r_2-r_1), & \qquad &
K := (\mathcal{C}_- + \alpha \mathcal{D}_p)(r_2-r_1) , \end{eqnarray*} where $r_1, r_2, s_1, s_2, t_1, t_2$ are given in \eqref{eq:r1}~--~\eqref{eq:t2} and ${\mathcal{C}_+}, {\mathcal{C}_-}$ are given in \eqref{optHF:st3}.
\subsection{Case 1: $\boldsymbol{\mathcal{D}_p<\mathcal{C}_+/\alpha}$ and $\boldsymbol{\mathcal{I}_p<\mathcal{C}_-/\alpha}$}\label{sec:main:case1}
Let us first briefly explain the conditions of this subsection, which are likely to be the most relevant case in practice. Observe from the objective function~\eqref{optHF:obj} that $\mathcal{C}_+/\alpha$ reflects the discounted overage cost associated with the primary resource capacity and $\mathcal{C}_-/\alpha$ reflects the corresponding discounted shortage cost, recalling that $\alpha$ is the discount rate. In comparison, $\mathcal{D}_p$ represents the cost incurred for decreasing $P(t)$ when in an overage position while $\mathcal{I}_p$ represents the cost incurred for increasing $P(t)$ when in a shortage position.
To elucidate the exposition, we define the following conditions:
{Condition (1a):} \quad $\mathcal{I}_p + \mathcal{D}_p >0$, \quad $0 \; < \; {B_3- B_2} \; < \; {B_1} \quad \mbox{and} \quad \left(\frac{B_3- B_2}{B_1}\right)^{\frac{r_2}{r_1}} \; \ge \; \frac{J_3-J_2}{J_1}$.
{Condition (1b):} \quad $\mathcal{I}_p + \mathcal{D}_p >0$, \quad \mbox{and} \quad $B_3 \le B_2$.
{Condition (1c):} \quad $\mathcal{I}_p = \mathcal{D}_p =0$, \quad \mbox{and} \quad $B_3 -B_2 -B_1 \le 0$.
{Condition (2a):} \quad $\mathcal{I}_p + \mathcal{D}_p >0$, \quad ${B_3-B_2- B_1} \; > \; 0 \qquad \mbox{and} \qquad \left(\frac{B_3-B_1}{B_2}\right)^{\frac{r_2}{r_1}} \; \ge \; \frac{J_3 -J_1}{J_2}$.
{Condition (2b):} \quad $\mathcal{I}_p = \mathcal{D}_p =0$, \quad \mbox{and} \quad $B_3 -B_2 -B_1 \ge 0$.
\noindent We are now ready to state our main result for Case 1.
\begin{theorem}\label{THM:CASE1} Suppose the adjustment costs satisfy $\mathcal{D}_p<\mathcal{C}_+/\alpha$ and $\mathcal{I}_p<\mathcal{C}_-/\alpha$. Then there are two threshold values $L$ and $U$ with $L \le U$ such that the optimal dynamic control policy is given by: For each $t \geq 0$, \begin{align*} \dot P(t)= \left\{\begin{array}{ll} \theta_u, & \qquad \text{if} \quad P(t)-D(t)<L,\\
0, & \qquad \text{if} \quad P(t)-D(t) \in [L, U], \\ \theta_\ell, & \qquad \text{if} \quad P(t)-D(t)>U. \end{array}\right. \end{align*} Moreover, the values of $L$ and $U$ can be characterized by the following three cases. \begin{enumerate}[I.] \item If either Condition (1a), (1b) or (1c) holds, we have $U \ge L \ge 0$ where $L$ and $U$ are uniquely determined by \begin{eqnarray} B_1 e ^{r_1 (L-U)} + J_1 e ^{r_2 (L-U)}+ A &=& 0, \label{eq:xhminusxf} \\ \frac{B_1 r_2}{r_1-r_2} e ^{r_1 (L-U)} + \frac{J_1 r_1}{r_1-r_2} e^{r_2 (L-U)} &=& (r_1+r_2-s_1)(\alpha \mathcal{I}_p+{\mathcal{C}_+}) + (\mathcal{C}_+ + \mathcal{C}_-) s_1 e^{s_2 L}. \qquad \label{eq:xH} \end{eqnarray} \item If either Condition (2a) or (2b) holds, we have $L \; \le \; U \; \le \; 0,$ where $L$ and $U$ are uniquely determined by \begin{eqnarray} B_2 e ^{r_1 (U-L)} + J_2 e^{r_2 (U-L)}+ K &=& 0, \label{eq:xfminusxh} \\ \frac{B_2 r_2}{r_1-r_2} e^{r_1(U-L)} + \frac{J_2 r_1}{r_1-r_2} e^{r_2 (U-L)} &=& (r_1+r_2-t_2)(\alpha \mathcal{D}_p+\mathcal{C}_-) + (\mathcal{C}_+ + \mathcal{C}_- )t_2 e^{t_1 U}. \qquad \label{eq:xF} \end{eqnarray} \item If none of the above conditions hold, we then have $U \; \ge \; 0 \; \ge \; L,$ where $L$ and $U$ are uniquely determined by \begin{eqnarray} B_1 e^{-r_1 U} + B_2 e^{-r_1 L} &=& B_3, \label{eq:xhxf2} \\ J_1 e^{-r_2 U} + J_2 e^{-r_2 L} &=& J_3. \label{eq:xhxf2c} \end{eqnarray} \end{enumerate} \end{theorem}
Theorem~\ref{THM:CASE1} can be interpreted as follows. The optimal dynamic control policy seeks to maintain $X(t)=P(t)-D(t)$ within the risk-hedging interval $[L,U]$ at all times $t$, taking no action (i.e., making no change to $P(t)$) as long as $X(t) \in [L,U]$. Whenever $X(t)$ falls below $L$, the optimal dynamic control policy pushes toward the risk-hedging interval as fast as possible, namely at rate $\theta_u$, thus increasing the primary resource capacity allocation. Similarly, whenever $X(t)$ exceeds $U$, the optimal dynamic control policy pushes toward the risk-hedging interval as fast as possible, namely at rate $\theta_\ell$, thus decreasing the primary resource capacity allocation. In each of the cases \textit{I}, \textit{II} and \textit{III}, the optimal threshold values $L$ and $U$ are uniquely determined by two nonlinear equations.
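To illustrate, the following minimal Python sketch computes $(L, U)$ by solving the case \textit{III} equations \eqref{eq:xhxf2} and \eqref{eq:xhxf2c} numerically. The parameter values are illustrative assumptions; for these values one can check that neither Conditions (1a)~--~(1c) nor Conditions (2a)~--~(2b) hold, so that case \textit{III} applies, and in general the applicable case must be identified first.
\begin{verbatim}
# Minimal sketch (illustrative parameters): solve the case III equations of
# Theorem 1 for the risk-hedging thresholds (L, U).
import numpy as np
from scipy.optimize import fsolve

b, sigma, alpha = 0.1, 1.0, 0.05      # drift, volatility, discount rate
theta_l, theta_u = -1.0, 1.0          # bounded-velocity limits on dP/dt
Cplus, Cminus = 0.6, 0.9              # C_+ = C_p and C_- = N_p - N_s
Ip, Dp = 0.5, 0.5                     # per-unit adjustment costs

def root_pair(theta):   # roots of (sigma^2/2) y^2 + (theta - b) y - alpha = 0
    d = np.sqrt((b - theta)**2 + 2.0 * alpha * sigma**2)
    return ((b - theta) + d) / sigma**2, ((b - theta) - d) / sigma**2

r1, r2 = root_pair(0.0)
s1, _ = root_pair(theta_u)
_, t2 = root_pair(theta_l)
B1 = (Cplus - alpha * Dp) * (t2 - r2); B2 = (Cminus - alpha * Ip) * (s1 - r2)
B3 = (Cplus + Cminus) * (-r2)
J1 = (Cplus - alpha * Dp) * (r1 - t2); J2 = (Cminus - alpha * Ip) * (r1 - s1)
J3 = (Cplus + Cminus) * r1

def equations(z):
    L, U = z
    return [B1 * np.exp(-r1 * U) + B2 * np.exp(-r1 * L) - B3,
            J1 * np.exp(-r2 * U) + J2 * np.exp(-r2 * L) - J3]

L, U = fsolve(equations, x0=[-0.5, 0.5])
print(f"risk-hedging interval: [L, U] = [{L:.4f}, {U:.4f}]")
\end{verbatim}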
\subsubsection{Special Case $\boldsymbol{\mathcal{I}_p=\mathcal{D}_p=0}$}\label{sec:main:special}
In the special case where the dynamic control policy incurs no costs for making adjustments, which may be of particular interest in some application domains, Theorem~\ref{THM:CASE1} has the following reduced form.
\begin{corollary}\label{THM:SPECIAL} Suppose there are no adjustment costs, namely $\mathcal{I}_p=\mathcal{D}_p=0$.
Then there exists a constant $\delta$ such that the optimal dynamic control policy is given by \begin{eqnarray*} \dot P(t)= \left\{\begin{matrix} \theta_u, && \qquad \text{if} \quad P(t)-D(t)<\delta,\\ 0 && \qquad \text{if} \quad P(t)-D(t)= \delta, \\ \theta_\ell, && \qquad \text{if} \quad P(t)-D(t)>\delta. \end{matrix}\right. \end{eqnarray*} Moreover, $\delta$ can be given explicitly by \begin{eqnarray} \label{eq:delta} \delta = \left\{\begin{matrix} \frac{1}{s_2}\ln (\frac{\mathcal{C}_+}{\mathcal{C}_++\mathcal{C}_-} \frac{s_1-t_2}{s_1})>0, && \qquad \text{if} \quad B_1 + B_2 -B_3 >0, \\
0, && \qquad \text{if} \quad B_1 + B_2 -B_3 =0, \\ \frac{1}{t_1}\ln (\frac{\mathcal{C}_-}{\mathcal{C}_++\mathcal{C}_-} \frac{s_1-t_2}{-t_2})<0, && \qquad \text{if} \quad B_1 + B_2 -B_3 <0 . \end{matrix}\right. \qquad \end{eqnarray}
\end{corollary}
The interpretation of this corollary is the same as that for Theorem~\ref{THM:CASE1}, where the risk-hedging interval collapses to a single point $\delta$. Hence, the optimal dynamic control policy seeks to maintain $X(t)=P(t)-D(t)$ at the position $\delta$ at all times $t$, pushing toward this point as fast as possible with rate $\theta_u$ when below and with rate $\theta_\ell$ when above.
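The following minimal simulation sketch implements this bang-bang behavior via an Euler discretization of the state dynamics \eqref{optHF:st2}; the parameter values, and the value of $\delta$ itself, are illustrative assumptions rather than outputs of Corollary~\ref{THM:SPECIAL}.
\begin{verbatim}
# Minimal sketch: Euler simulation of the bang-bang policy, pushing
# X(t) = P(t) - D(t) toward delta at maximal rate. Values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
b, sigma = 0.1, 1.0
theta_l, theta_u, delta = -1.0, 1.0, 0.2
dt, T = 1e-3, 10.0
n = int(T / dt)
X = np.empty(n); X[0] = 0.0
for k in range(n - 1):
    rate = theta_u if X[k] < delta else (theta_l if X[k] > delta else 0.0)
    # dX(t) = dP(t) - b dt - sigma dW(t), with dP(t) = rate * dt
    X[k + 1] = X[k] + (rate - b) * dt - sigma * np.sqrt(dt) * rng.standard_normal()
print("mean |X - delta| over the path:", np.mean(np.abs(X - delta)))
\end{verbatim}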
\subsection{Case 2: $\boldsymbol{\mathcal{D}_p \ge \mathcal{C}_+ / \alpha}$ and $\boldsymbol{\mathcal{I}_p < \mathcal{C}_- / \alpha}$}\label{sec:main:case2}
We next consider the case in which the per-unit cost for decreasing the decision variable $P(t)$ is at least as large as the discounted overage cost associated with this decision variable. Our main result for this case can be expressed as follows.
\begin{theorem}\label{THM:CASE2} Suppose the adjustment costs satisfy $\mathcal{D}_p \ge \mathcal{C}_+ / \alpha$ and $\mathcal{I}_p < \mathcal{C}_- / \alpha$. Then there exists a threshold $L$ such that the optimal policy is given by \begin{eqnarray*} \dot P(t)= \left\{\begin{matrix} \theta_u, && \qquad \text{if} \quad P(t)-D(t)<L,\\ 0, && \qquad \text{if} \quad P(t)-D(t) \ge L. \end{matrix}\right. \end{eqnarray*} Moreover, the value of $L$ can be characterized by \begin{eqnarray} L = \left\{\begin{array}{ll} \frac{1}{s_2} \ln \left[ {\frac{(\alpha \mathcal{I}_p +\mathcal{C}_+)(s_1 -r_2)}{ (\mathcal{C}_+ +\mathcal{C}_-) s_1}}\right]
\ge 0, & \qquad \text{if} \quad B_3 \le B_2, \label{eq:thm2Lpos}\\
\frac{1}{-r_1} \ln \left[{\frac{(\mathcal{C}_++\mathcal{C}_-)( -r_2)}{ (\mathcal{C}_- - \alpha \mathcal{I}_p) (s_1-r_2)}}\right] < 0, & \qquad \text{if} \quad B_3 > B_2 \label{eq:thm2Lneg}. \end{array}\right. \end{eqnarray} \end{theorem}
This result is closely related to Theorem~\ref{THM:CASE1}. One readily checks that in Theorem~\ref{THM:CASE2} when $B_3 \le B_2$, $L \ge 0$ satisfies \begin{eqnarray*}
\frac{r_1}{r_1-r_2} (-A) &=& (r_1+r_2-s_1)(\alpha \mathcal{I}_p+{\mathcal{C}_+}) + (\mathcal{C}_+ + \mathcal{C}_-) s_1 e^{s_2 L}, \end{eqnarray*} which has a structure similar to \eqref{eq:xH}. When $B_3 > B_2$, $L<0$ solves \begin{eqnarray*} B_2 e^{-r_1 L} = B_3, \end{eqnarray*} which is the same as Equation \eqref{eq:xhxf2} if we regard $U=\infty$. Therefore, given the relatively larger cost for decreasing $P(t)$, this theorem essentially provides a one-sided version of Theorem~\ref{THM:CASE1} in which the optimal dynamic control policy seeks to maintain $X(t)=P(t)-D(t)$ at or above the threshold $L$ at all times $t$. Whenever $X(t)$ falls below $L$, the optimal dynamic control policy pushes toward the risk-hedging threshold as fast as possible, namely at rate $\theta_u$. Otherwise, the optimal dynamic control policy takes no action, because taking action to decrease an overage position costs more than the benefits from such an action.
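Since the threshold in Theorem~\ref{THM:CASE2} is available in closed form, it can be evaluated directly, as in the following sketch; the parameter values are illustrative assumptions chosen so that $\mathcal{D}_p \ge \mathcal{C}_+/\alpha$ and $\mathcal{I}_p < \mathcal{C}_-/\alpha$ hold.
\begin{verbatim}
# Minimal sketch (illustrative parameters with Dp >= Cplus/alpha and
# Ip < Cminus/alpha): evaluate the one-sided threshold L of Theorem 2.
import numpy as np

b, sigma, alpha, theta_u = 0.1, 1.0, 0.05, 1.0
Cplus, Cminus, Ip, Dp = 0.2, 0.9, 0.5, 6.0    # here Cplus/alpha = 4 <= Dp

d0 = np.sqrt(b**2 + 2.0 * alpha * sigma**2)
r1, r2 = (b + d0) / sigma**2, (b - d0) / sigma**2
du = np.sqrt((b - theta_u)**2 + 2.0 * alpha * sigma**2)
s1, s2 = (b - theta_u + du) / sigma**2, (b - theta_u - du) / sigma**2
B2 = (Cminus - alpha * Ip) * (s1 - r2)
B3 = (Cplus + Cminus) * (-r2)
if B3 <= B2:
    L = np.log((alpha * Ip + Cplus) * (s1 - r2) / ((Cplus + Cminus) * s1)) / s2
else:
    L = np.log((Cplus + Cminus) * (-r2) / ((Cminus - alpha * Ip) * (s1 - r2))) / (-r1)
print(f"one-sided threshold: L = {L:.4f}")
\end{verbatim}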
\subsection{Case 3: $\boldsymbol{\mathcal{D}_p < \mathcal{C}_+ / \alpha}$ and $\boldsymbol{\mathcal{I}_p \ge \mathcal{C}_- / \alpha}$}\label{sec:main:case3}
We now consider the case in which the per-unit cost for increasing the decision variable $P(t)$ is at least as large as the discounted shortage cost associated with this decision variable. Our main result for this case can be expressed as follows.
\begin{theorem}\label{THM:CASE3} Suppose the adjustment costs satisfy $\mathcal{D}_p < \mathcal{C}_+ / \alpha$ and $\mathcal{I}_p \ge \mathcal{C}_- / \alpha$. Then there exists a threshold $U$ such that the optimal policy is given by \begin{eqnarray*} \dot P(t)= \left\{\begin{matrix} 0, && \qquad \text{if} \quad P(t)-D(t) \le U,\\ \theta_\ell, && \qquad \text{if} \quad P(t)-D(t)>U. \end{matrix}\right. \end{eqnarray*} Moreover, the value of $U$ can be characterized by \begin{eqnarray} \label{eq:thm3U} U= \left\{\begin{array}{ll} \frac{1}{-r_2} \ln \left[ \frac{(\mathcal{C}_+ + \mathcal{C}_-)r_1}{ (\mathcal{C}_+ -\alpha \mathcal{D}_p)(r_1 -t_2)} \right]
\ge 0, & \qquad \text{if} \quad J_1 \le J_3,\\
\frac{1}{t_1} \ln \left[\frac{(\alpha \mathcal{D}_p + \mathcal{C}_-)(r_1-t_2)}{(\mathcal{C}_+ + \mathcal{C}_-) (-t_2)}\right] < 0, & \qquad \text{if} \quad J_1 > J_3. \end{array}\right. \end{eqnarray} \end{theorem}
We also observe the connection of this result to Theorem~\ref{THM:CASE1}. It can be readily verified from Theorem~\ref{THM:CASE3} that when $J_1 \le J_3$, $U \ge 0$ satisfies \begin{eqnarray*} J_1 e^{-r_2 U} = J_3, \end{eqnarray*} and thus is equivalent to \eqref{eq:xhxf2c} if we regard $L = -\infty$. When $J_1 > J_3$, we have $U<0$ solving \begin{eqnarray*} \frac{r_2}{r_1-r_2} (-K) &=& (r_1+r_2-t_2)(\alpha \mathcal{D}_p+\mathcal{C}_-) + (\mathcal{C}_+ + \mathcal{C}_- )t_2 e^{t_1 U}, \end{eqnarray*} which is closely related to \eqref{eq:xfminusxh} and \eqref{eq:xF}. Hence, given the relatively larger cost for increasing $P(t)$, this theorem essentially provides a one-sided version of Theorem~\ref{THM:CASE1} in which the optimal dynamic control policy seeks to maintain $X(t)=P(t)-D(t)$ at or below the threshold $U$ at all times $t$. Whenever $X(t)$ exceeds $U$, the optimal dynamic control policy pushes toward the risk-hedging threshold as fast as possible, namely at rate $\theta_\ell$. Otherwise, the optimal dynamic control policy takes no action, because taking action to increase a shortage position costs more than the benefits from such an action.
\subsection{Case 4: $\boldsymbol{\mathcal{D}_p \ge \mathcal{C}_+ / \alpha}$ and $\boldsymbol{\mathcal{I}_p \ge \mathcal{C}_- / \alpha}$}\label{sec:main:case4}
Lastly, we consider the case in which the per-unit costs for adjusting the decision variable $P(t)$ are at least as large as the corresponding discounted overage and shortage costs associated with this decision variable. We now state our main result for this case.
\begin{theorem}\label{THM:CASE4} Suppose the adjustment costs satisfy $\mathcal{D}_p \ge \mathcal{C}_+ / \alpha$ and $\mathcal{I}_p \ge \mathcal{C}_- / \alpha$. Then the optimal policy consists of taking no action. Specifically, \[P (t) \; \equiv \; P(0), \qquad \text{for all $t$.} \] \end{theorem}
Given the relatively larger costs for adjusting $P(t)$, this theorem essentially consists of the inaction sides of both Theorems~\ref{THM:CASE2} and~\ref{THM:CASE3}. Intuitively, the theorem characterizes the conditions under which the cost of any control policy action exceeds the resulting benefit, namely taking an action to decrease an overage position or increase a shortage position costs more than the benefits from such an action.
\section{Main Results: Multiple Primary Resources with Contract}\label{sec:main2}
In this section we consider our main results on the optimal dynamic control policy for the stochastic optimal control problem when there are multiple primary resources under a contract-based relationship.
After providing a formulation of the stochastic optimal control problem and some technical preliminaries, we present our main results analogous to those in Section~\ref{sec:main}.
\subsection{Problem formulation}\label{sec:formulation:Multiple}
The stochastic optimal control problem associated with the system model of Section~\ref{sec:model:Multiple} allows the dynamic control policy to adjust its allocation positions in primary and secondary resource capacities, while maintaining the contract-based relationship $w$, based on the demand realization observed up to the current time. More formally, the decision processes $P_1(t), \ldots, P_{\mathcal{P}}(t)$ are adapted to the filtration $\mathcal{F}_t$ generated by $\{D(s): s \le t\}$.
Then the objective of the optimal dynamic control policy is to maximize the expected discounted net-benefit over an infinite horizon, subject to the contract-based constraints \eqref{eq:multip}, where net-benefit at time $t$ consists of the difference between rewards and costs from both the set of primary resource allocation capacities and the secondary resource allocation capacity, minus the additional costs for adjustments to $P_1(t), \ldots, P_{\mathcal{P}}(t)$.
Similar to the single primary resource formulation, the control policy cannot make unbounded adjustments in the $i$th primary resource allocation capacity at any instant in time; i.e., the amount of change in $P_i(t)$ at time $t$ is restricted (even if only to a very small extent) by various factors. We therefore assume that the rate of change in the $i$th primary resource allocation capacity by the control policy is bounded. More precisely, there are pairs of finite constants $\theta_{\ell,i} < 0$ and $\theta_{u,i} > 0$ such that \text{for each $t \ge 0$} \begin{eqnarray} \label{eq:adjbound:Multiple} \theta_{\ell,i} \; \le \; \dot P_i(t) \; \le \; \theta_{u,i}, \end{eqnarray} where $\dot P_i(t)$ denotes the derivative of the decision variable $P_i(t)$ with respect to time, for all $i=1, \ldots, \mathcal{P}$. On the other hand, the ability of the control policy to make adjustments to the secondary resource capacity, in response to changes in the primary resource capacities, tends to be more flexible such that \eqref{eq:secondary:Multiple} holds at all times $t$.
Now we can present the mathematical formulation of our stochastic optimization problem for the case of multiple primary resources with a contract-based relationship among these resources. Let us fix a given contract vector $w$. Defining \begin{eqnarray*} N_{p,i}(t) \; := \; R_{p,i}(t) - C_{p,i}(t) & \qquad\qquad \mbox{and} \qquad\qquad & N_s(t) \; := \; R_s(t) - C_s(t) , \end{eqnarray*} we seek to determine the optimal dynamic control policy for the contract vector $w$ that solves the problem (SC-OPT:M) \begin{eqnarray} \max_{P_1(\cdot), \ldots, P_{\mathcal{P}}(\cdot)} \quad && \mathbb{E} \int_0^\infty e^{-\alpha t} \left[ \sum_{i=1}^{\mathcal{P}} N_{p,i}(t) + N_s(t) \right] dt - \sum_{i=1}^{\mathcal{P}} \mathbb{E} \int_0^\infty e^{-\alpha t} [ \mathcal{I}_{p,i} \cdot \indi{\dot P_i(t)>0} ] dP_i(t) \nonumber \\ && \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad - \; \sum_{i=1}^{\mathcal{P}} \mathbb{E} \int_0^\infty e^{-\alpha t} [ \mathcal{D}_{p,i} \cdot \indi{\dot P_i(t)<0} ] d(-P_i(t)) \qquad \label{opt1:obj:Multiple} \\ \mbox{s.t.} && -\infty \; < \; \theta_{\ell,i} \; \le \; \dot P_i(t) \; \le \; \theta_{u,i} \; < \; \infty , \qquad\qquad \forall i = 1, \ldots, \mathcal{P} \quad \mbox{and $t \ge 0$}, \label{opt1:st1:Multiple} \\ && P_i(t) \; = \; w_i P(t), \qquad\qquad\qquad\qquad\qquad\qquad \forall i = 1, \ldots, \mathcal{P} \quad \mbox{and $t \ge 0$}, \label{opt1:st0a:Multiple} \\ && \sum_{i=1}^{\mathcal{P}} w_i =1 , \qquad\qquad\qquad\qquad\qquad\qquad\qquad w=(w_1, \ldots, w_{\mathcal{P}}) \in \mathbb{R}_+^{\mathcal{P}} , \label{opt1:st0b:Multiple} \\ && dD(t) \; = \; b dt +\sigma d{W}(t), \label{opt1:st2:Multiple} \end{eqnarray} where $\alpha$ is the discount factor and $\indi{A}$ again denotes the indicator function associated with event $A$. The control variables are the rates of change in the primary resource capacities by the control policy at every time $t$ subject to the lower and upper bounds on each $\dot P_i(t)$ in \eqref{opt1:st1:Multiple} and the contract-based relationship among the primary resources in \eqref{opt1:st0a:Multiple} and \eqref{opt1:st0b:Multiple}. Note that the second (third) expectation in \eqref{opt1:obj:Multiple} causes a decrease with rate $\mathcal{I}_{p,i}$ ($\mathcal{D}_{p,i}$) in the value of the objective function whenever the control policy increases (decreases) $P_i(t)$.
\subsection{Preliminaries}\label{sec:main:prelim:Multiple}
Consider the first expectation in the objective function \eqref{opt1:obj:Multiple} of the stochastic optimization problem (SC-OPT:M). From \eqref{eq:multip}, \eqref{eq:Rpw} and \eqref{eq:benefitP:Multiple}, we can rewrite this expectation in terms of the aggregate primary resource capacity $P(t)$ for a given contract vector $w$. Upon analogously applying the derivations of Section~\ref{sec:main:prelim:Single}, we can then simplify and reduce this expectation to the single primary resource setting as follows: \begin{equation*} \mathbb{E} \bigg[ \int_0^\infty e^{-\alpha t} \Big\{ \left( \mathcal{C}_+ (w)X(t)^+ + \mathcal{C}_-(w) X(t)^{-} \right) dt \Big\} \bigg] \end{equation*} where $X(t) := P(t) - D(t)$, $\mathcal{C}_+(w) = \mathcal{C}_p(w)$, $\mathcal{C}_-(w) = \mathcal{N}_p(w) - \mathcal{N}_s$, $\mathcal{N}_p(w) := \mathcal{R}_p(w) - \mathcal{C}_p(w)$, and $\mathcal{N}_s := \mathcal{R}_s - \mathcal{C}_s$. Similarly, the second and third expectations in \eqref{opt1:obj:Multiple} can be rewritten with respect to \eqref{eq:Ipw} in terms of the aggregate primary resource capacity $P(t)$ for the given contract vector $w$. Define $\mathcal{N}_{p,i} := \mathcal{R}_{p,i} - \mathcal{C}_{p,i}$, for $i=1,\ldots,\mathcal{P}$.
Next, we deduce from \eqref{eq:multip}, (\ref{eq:adjbound:Multiple}) and the system model definition of Section~\ref{sec:model:Multiple} that the aggregate control process $P(t)$ satisfies the constraint \begin{eqnarray} \label{eq:constraintw} {\tilde \theta}_\ell \; \le \; \dot P(t) \; \le \; {\tilde \theta}_u, \quad \mbox{for each $t \ge 0$}, \end{eqnarray} where \begin{align}\label{eq:tilde-theta} {\tilde \theta}_\ell := \max_{i=1, \ldots, \mathcal{P}} \frac{\theta_{\ell,i}}{w_i} \qquad \mbox{and} \qquad {\tilde \theta}_u:= \min_{i=1, \ldots, \mathcal{P}} \frac{\theta_{u,i}}{w_i}. \end{align} Then, for a given contract vector $w$, we have the following stochastic optimization problem in terms of the aggregate control process $P(t)$ that is equivalent to the original problem formulation (SC-OPT:M): \begin{align} \min_{P(\cdot)} & \quad \mathbb{E}_x \bigg[ \int_0^\infty e^{-\alpha t} \Big\{ \left( \mathcal{C}_+ (w)X(t)^+ + \mathcal{C}_-(w) X(t)^{-} \right) dt + \left( \mathcal{I}_p(w) \indi{\dot P(t)>0} - \mathcal{D}_p(w) \indi{\dot P(t)<0} \right) dP(t) \Big\} \bigg] \qquad \label{optHF:obj_w} \\
\mbox{s.t.} & \quad -\infty \; <{\tilde \theta}_\ell \; \le \; \dot P(t) \; \le \; {\tilde \theta}_u \; < \; \infty , \qquad\qquad \mbox{for $t \ge 0$}, \nonumber \\
& \quad dX(t) \; = \; dP(t) - b dt - \sigma d{W}(t), \qquad\qquad \mbox{for $t \ge 0$}, \nonumber \\ & \quad X(0) \; = \; x , \qquad \mathcal{C}_+ (w) \; = \; \mathcal{C}_p (w) , \qquad \mathcal{C}_- (w) \; = \; \mathcal{N}_p(w) - \mathcal{N}_s , \nonumber \end{align} where $\mathbb{E}_x[\cdot]$ again denotes expectation with respect to the initial state distribution (i.e., state at time $t=0$) being $x$ with probability one.
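For illustration, the aggregation induced by a given contract vector $w$ can be computed directly from \eqref{eq:Rpw}, \eqref{eq:Ipw} and \eqref{eq:tilde-theta}, as in the following minimal sketch; all per-resource parameter values are hypothetical.
\begin{verbatim}
# Minimal sketch (hypothetical per-resource parameters): aggregate per-unit
# rewards/costs and velocity bounds induced by a contract vector w.
import numpy as np

w = np.array([0.5, 0.3, 0.2])                   # contract vector, sums to 1
R_p = np.array([2.0, 1.6, 1.2]); C_p = np.array([0.9, 0.6, 0.4])
I_p = np.array([0.5, 0.4, 0.3]); D_p = np.array([0.5, 0.4, 0.3])
th_l = np.array([-0.4, -0.3, -0.3]); th_u = np.array([0.4, 0.3, 0.3])

Rp_w, Cp_w = w @ R_p, w @ C_p                   # aggregate per-unit reward/cost
Ip_w, Dp_w = w @ I_p, w @ D_p                   # aggregate adjustment costs
tl_w = np.max(th_l / w)                         # aggregate lower velocity bound
tu_w = np.min(th_u / w)                         # aggregate upper velocity bound
print(Rp_w, Cp_w, Ip_w, Dp_w, tl_w, tu_w)
\end{verbatim}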
Once again, we use $V_w(x)$ to represent the optimal value of the objective function in \eqref{optHF:obj_w} for a given contract vector $w$; namely, $V_w(x)$ is the value function of the corresponding stochastic dynamic program. Given its equivalence with the original optimization problem (SC-OPT:M), the remainder of this section will focus on the stochastic dynamic program formulation in \eqref{optHF:obj_w}.
For notational convenience, we define the following constants which represent modifications of some of the constants in Section~\ref{sec:main:prelim:Single} due to the differences in the parameters and the problem setting: \begin{eqnarray*}
{\tilde s}_1 \; := \; \frac{b- {\tilde \theta}_u+ \sqrt{(b-{\tilde \theta}_u)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; > \; 0, \label{eq:s1_w} & \qquad & {\tilde s}_2 \; := \; \frac{b- {\tilde \theta}_u - \sqrt{(b-{\tilde \theta}_u)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; < \; 0 , \label{eq:s2_w} \\ {\tilde t}_1 \; := \; \frac{b- {\tilde \theta}_\ell + \sqrt{(b-{\tilde \theta}_\ell)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; > \; 0, \label{eq:t1_w} & \qquad & {\tilde t}_2 \; := \; \frac{b- {\tilde \theta}_\ell - \sqrt{(b-{\tilde \theta}_\ell)^2 +2 \alpha \sigma^2 }} {\sigma^2} \; < \; 0 . \label{eq:t2_w} \end{eqnarray*} and \begin{eqnarray*}
B_1(w) := ({\mathcal{C}_+}(w)-\alpha \mathcal{D}_p(w)) ({\tilde t}_2 -r_2), & \qquad &
B_2(w) := (\mathcal{C}_-(w) - \alpha \mathcal{I}_p(w)) ({\tilde s}_1 -r_2), \\
B_3(w) := (\mathcal{C}_+(w) + \mathcal{C}_-(w)) (-r_2), & \qquad &
J_1(w) := ({\mathcal{C}_+}(w)-\alpha \mathcal{D}_p(w)) (r_1 -{\tilde t}_2), \\
J_2(w) := (\mathcal{C}_-(w) - \alpha \mathcal{I}_p(w)) (r_1 -{\tilde s}_1), & \qquad &
J_3(w) := (\mathcal{C}_+(w) + \mathcal{C}_-(w)) r_1, \\
A(w) := (\mathcal{C}_+(w) + \alpha \mathcal{I}_p(w)) (r_2-r_1), & \qquad &
K(w) := (\mathcal{C}_-(w) + \alpha \mathcal{D}_p(w))(r_2-r_1) . \end{eqnarray*}
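To aid computation, these modified constants can be evaluated directly; a minimal Python sketch follows, where the helper \texttt{roots} returns the pairs $({\tilde s}_1, {\tilde s}_2)$ and $({\tilde t}_1, {\tilde t}_2)$ for $\theta = {\tilde \theta}_u$ and $\theta = {\tilde \theta}_\ell$, respectively, and $r_1 > 0 > r_2$ are assumed to be available from Section~\ref{sec:main:prelim:Single} (all names are illustrative).
\begin{verbatim}
import numpy as np

def roots(b, theta, alpha, sigma):
    # Roots of (sigma^2/2) m^2 - (b - theta) m - alpha = 0.
    disc = np.sqrt((b - theta) ** 2 + 2.0 * alpha * sigma ** 2)
    return (b - theta + disc) / sigma ** 2, (b - theta - disc) / sigma ** 2

def case_constants(C_plus, C_minus, Ip, Dp, alpha, r1, r2, s1, t2):
    # The constants B_i(w), J_i(w), A(w), K(w) defined above, with
    # s1, t2 the aggregate roots tilde-s_1 and tilde-t_2.
    B1 = (C_plus - alpha * Dp) * (t2 - r2)
    B2 = (C_minus - alpha * Ip) * (s1 - r2)
    B3 = (C_plus + C_minus) * (-r2)
    J1 = (C_plus - alpha * Dp) * (r1 - t2)
    J2 = (C_minus - alpha * Ip) * (r1 - s1)
    J3 = (C_plus + C_minus) * r1
    A = (C_plus + alpha * Ip) * (r2 - r1)
    K = (C_minus + alpha * Dp) * (r2 - r1)
    return B1, B2, B3, J1, J2, J3, A, K
\end{verbatim}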
\subsection{Case 1: $\boldsymbol{\mathcal{D}_{p}(w) < \mathcal{C}_{+}(w) / \alpha}$ and $\boldsymbol{\mathcal{I}_p(w) < \mathcal{C}_-(w) / \alpha}$}\label{sec:main:case1:Multiple}
Let us first briefly explain the conditions of this subsection, for a given contract vector $w \in \Omega$ where \begin{equation*}
\Omega := \{ w = (w_1, \ldots, w_{\mathcal{P}}) \in \mathbb{R}^{\mathcal{P}}: w \ge 0, \sum_{i=1}^{\mathcal{P}} w_i =1\}. \end{equation*} The conditions of this subsection are likely to be the most relevant case in practice. Observe that, if $\mathcal{D}_{p,i} < \mathcal{C}_{p,i} / \alpha$ and $\mathcal{I}_{p,i} < (\mathcal{N}_{p,i} -\mathcal{N}_{s})/ \alpha$ for each primary resource type $i=1, \ldots, \mathcal{P}$, then the conditions $\mathcal{D}_{p}(w) < \mathcal{C}_{+}(w) / \alpha$ and $\mathcal{I}_p(w) < \mathcal{C}_-(w) / \alpha$ hold for any $w \in \Omega$. Further observe from the objective function in \eqref{optHF:obj_w} that $\mathcal{C}_+(w)/\alpha$ reflects the discounted overage cost associated with the aggregate primary resource capacity and $\mathcal{C}_-(w)/\alpha$ reflects the discounted shortage cost associated with the aggregate primary resource capacity, recalling that $\alpha$ is the discount rate. In comparison, $\mathcal{D}_p(w)$ represents the cost incurred for decreasing $P(t)$ when in an overage position while $\mathcal{I}_p(w)$ represents the cost incurred for increasing $P(t)$ when in a shortage position.
The conditions of this subsection represent counterparts of the conditions (1a), (1b), (1c), (2a) and (2b) of Section~\ref{sec:main:case1}. To simplify the exposition, we define:
{Condition (1a'):} \quad $\mathcal{I}_p(w) + \mathcal{D}_p(w) >0$, \quad $0 \; < \; {B_3(w)- B_2(w)} \; < \; {B_1(w)} \quad \mbox{and} \quad \left(\frac{B_3(w)- B_2(w)}{B_1(w)}\right)^{\frac{r_2}{r_1}} \; \ge \; \frac{J_3(w)-J_2(w)}{J_1(w)}$.
{Condition (1b'):} \quad $\mathcal{I}_p(w) + \mathcal{D}_p(w) >0$, \quad \mbox{and} \quad $B_3(w) \le B_2(w)$.
{Condition (1c'):} \quad $\mathcal{I}_p(w) = \mathcal{D}_p(w) =0$, \quad \mbox{and} \quad $B_3(w) -B_2(w) -B_1(w) \le 0$.
{Condition (2a'):} \quad $\mathcal{I}_p(w) + \mathcal{D}_p(w) >0$, \quad ${B_3(w)-B_2(w)- B_1(w)} \; > \; 0 \qquad \mbox{and} \qquad \left(\frac{B_3(w)-B_1(w)}{B_2(w)}\right)^{\frac{r_2}{r_1}} \; \ge \; \frac{J_3(w) -J_1(w)}{J_2(w)}$.
{Condition (2b'):} \quad $\mathcal{I}_p(w) = \mathcal{D}_p(w) =0$, \quad \mbox{and} \quad $B_3(w) -B_2(w) -B_1(w) \ge 0$.
\noindent We are now ready to state our main result for Case 1 under this setting. Recall that ${\tilde \theta}_\ell$ and ${\tilde \theta}_u$ are defined in \eqref{eq:tilde-theta}.
\begin{theorem}\label{THM:CASE1:Multiple} Fix $w \in \Omega$. Suppose the adjustment costs satisfy $\mathcal{D}_p(w)<\mathcal{C}_+ (w)/\alpha$ and $\mathcal{I}_p (w)<\mathcal{C}_- (w)/\alpha$. Then there are two threshold values $L(w)$ and $U(w)$ with $L(w) \le U(w)$ such that the optimal dynamic control policy is given by \begin{align*} \dot P(t)= \left\{\begin{array}{ll} \tilde \theta_u, & \qquad \text{if} \quad P(t)-D(t)<L(w),\\ 0, & \qquad \text{if} \quad P(t)-D(t) \in [L(w), U(w)], \\ \tilde \theta_\ell, & \qquad \text{if} \quad P(t)-D(t)>U(w). \end{array}\right. \end{align*} Moreover, the values of $L(w)$ and $U(w)$ can be characterized by the following three cases. \begin{enumerate}[I.] \item If Condition (1a'), (1b') or (1c') holds, we have $U(w) \ge L(w) \ge 0$ where $L(w)$ and $U(w)$ are uniquely determined by \begin{eqnarray} B_1(w) e ^{r_1 (L(w)-U(w))} + J_1(w) e ^{r_2 (L(w)-U(w))}+ A(w) &=& 0, \label{eq:xhminusxf_w} \\ \frac{B_1(w) r_2}{r_1-r_2} e ^{r_1 (L(w)-U(w))} + \frac{J_1(w) r_1}{r_1-r_2} e^{r_2 (L(w)-U(w))} &=& \nonumber (r_1+r_2-{\tilde s}_1)(\alpha \mathcal{I}_p(w) +{\mathcal{C}_+(w)})\\ && + (\mathcal{C}_+(w) + \mathcal{C}_-(w)) {\tilde s}_1 e^{{\tilde s}_2 L(w)}. \qquad \label{eq:xH_w} \end{eqnarray} \item If Condition (2a') or (2b') holds, we have $L(w) \; \le \; U(w) \; \le \; 0,$ where $L(w)$ and $U(w)$ are uniquely determined by \begin{eqnarray} B_2(w) e ^{r_1 (U(w)-L(w))} + J_2(w) e^{r_2 (U(w)-L(w))}+ K(w) &=& 0, \label{eq:xfminusxh_w} \\ \frac{B_2(w) r_2}{r_1-r_2} e^{r_1(U(w)-L(w))} + \frac{J_2(w) r_1}{r_1-r_2} e^{r_2 (U(w)-L(w))} &=& (r_1+r_2-{\tilde t}_2)(\alpha \mathcal{D}_p(w)+\mathcal{C}_-(w)) \nonumber \\ &&+ (\mathcal{C}_+(w) + \mathcal{C}_-(w) ){\tilde t}_2 e^{{\tilde t}_1 U(w)}. \qquad \label{eq:xF_w} \end{eqnarray} \item If none of the above conditions hold, we then have $U(w) \; \ge \; 0 \; \ge \; L(w),$ where $L(w)$ and $U(w)$ are uniquely determined by \begin{eqnarray} B_1(w) e^{-r_1 U(w)} + B_2(w) e^{-r_1 L(w)} &=& B_3(w), \label{eq:xhxf2_w} \\ J_1(w) e^{-r_2 U(w)} + J_2(w) e^{-r_2 L(w)} &=& J_3(w). \label{eq:xhxf2c_w} \end{eqnarray} \end{enumerate} \end{theorem}
Theorem~\ref{THM:CASE1:Multiple} can be interpreted similarly to Theorem~\ref{THM:CASE1}. Given a fixed contract $w$ among primary resource options, the optimal dynamic control policy seeks to maintain $X(t)$ within the risk-hedging interval $[L(w),U(w)]$ at all times $t$. When outside this risk-hedging interval $[L(w), U(w)]$, the optimal dynamic control policy pushes toward this interval as fast as possible in a synchronized way such that the contract condition (\ref{eq:multip}) is always maintained among the primary resource allocations $P_1(t), \ldots, P_{\mathcal{P}}(t)$; namely, $\dot P_i(t) = w_i \dot P(t)$ for each $i$. The optimal threshold values $L(w)$ and $U(w)$ are uniquely determined by two nonlinear equations for each of the cases $I$, $II$ and $III$.
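As a numerical illustration, the two nonlinear equations in each case can be handed to a standard root finder. The following minimal Python sketch treats Case III, i.e., \eqref{eq:xhxf2_w}--\eqref{eq:xhxf2c_w}; it is one possible implementation rather than the procedure used in our experiments, and the initial guess is an illustrative choice.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def thresholds_case_III(B1, B2, B3, J1, J2, J3, r1, r2):
    # Solve B1 e^{-r1 U} + B2 e^{-r1 L} = B3 and
    #       J1 e^{-r2 U} + J2 e^{-r2 L} = J3 for (L, U), L <= 0 <= U.
    def residual(z):
        L, U = z
        return [B1 * np.exp(-r1 * U) + B2 * np.exp(-r1 * L) - B3,
                J1 * np.exp(-r2 * U) + J2 * np.exp(-r2 * L) - J3]
    L, U = fsolve(residual, x0=[-1.0, 1.0])
    return L, U
\end{verbatim}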
We now explore the dependence of the value function on the contract $w$. For a fixed contract $w \in \Omega$, and $x=P(0)-D(0)$, we write $J_w(x)$ for the optimal value of the objective function \eqref{opt1:obj:Multiple}, i.e., the maximal expected discounted net-benefit over an infinite time horizon when one optimally adjusts the aggregate primary resource option $P(t)=\sum_{i=1}^{\mathcal{P}} P_i(t)$ under the fixed contract $w$ among these primary options, together with the secondary resource option, in order to meet the Brownian demand. Then it is easily seen from Section~\ref{sec:main:prelim:Single} (refer to \eqref{eq:simplify}) that \begin{eqnarray}\label{eq:J-V} J_w(x) = (\mathcal{R}_p (w) - \mathcal{C}_p (w)) \cdot \mathbb{E} \left[ \int_0^\infty e^{-\alpha t} D(t) dt \right] - V_{w}(x), \end{eqnarray} where ${{V}}_w(x)$ is the value function of the stochastic dynamic program \eqref{optHF:obj_w} with parameters given in \eqref{eq:multip}~--~\eqref{eq:benefitS:Multiple}, and $\mathcal{R}_p (w), \mathcal{C}_p (w)$ in \eqref{eq:Rpw} are linear in $w$. A closer examination of the expressions obtained for Theorem~\ref{THM:CASE1} and its proof implies that the value function depends continuously on $w$. Such continuity is a consequence of the continuity of the solution of the corresponding ordinary differential equation with respect to the initial condition and parameters, as well as the ``smooth-fit'' principle.
\begin{theorem} \label{thm:contract} Given any fixed $x \in \mathbb{R}_+$, the optimal threshold values $L(w)$ and $U(w)$ in Theorem~\ref{THM:CASE1:Multiple} are continuous functions of $w \in \Omega$. As a consequence, $V_w(x)$ and $J_w(x)$ are continuous with respect to $w \in \Omega$.
\end{theorem}
This result also suggests a simple scheme for our dynamic resource allocation problem in the presence of multiple primary resource options. Given the initial imbalance $x$ between the aggregate primary resource capacity and the demand, i.e., $x=P(0)-D(0)$, one can first solve offline for an optimal contract $w^*(x)$ given the characteristics of the demand, the reward and the cost associated with each sourcing option. Such an optimal contract exists due to the continuity result in Theorem~\ref{thm:contract}. Next, in order to meet the uncertain and volatile demand over time, one fixes this optimal contract $w^*(x)$ among primary resource options throughout the time horizon and then dynamically adjusts the primary sourcing capacities according to the threshold policy given in Theorem~\ref{THM:CASE1:Multiple}. Note that the capacity of each individual primary resource is aligned according to the contract vector, i.e., the ratio $P_i(t)/P_j(t) \equiv w_i^*(x)/w_j^*(x)$ is fixed for all $t\ge 0$. Any remaining demand is served by the secondary resource option.
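With two primary resources, for example, the offline contract search reduces to a one-dimensional scan over the simplex $\Omega$, which the continuity in Theorem~\ref{thm:contract} justifies. A minimal Python sketch follows, where \texttt{value\_J} is a hypothetical placeholder for a routine evaluating $J_w(x)$ (e.g., via \eqref{eq:J-V} and the thresholds of Theorem~\ref{THM:CASE1:Multiple}).
\begin{verbatim}
import numpy as np

def optimal_contract(value_J, x, n_grid=101):
    # Grid search for w*(x) over Omega = {(w1, 1 - w1): w1 in [0, 1]};
    # continuity of J_w(x) in w justifies this simple scan.
    grid = np.linspace(0.0, 1.0, n_grid)
    values = [value_J(np.array([w1, 1.0 - w1]), x) for w1 in grid]
    w1_star = grid[int(np.argmax(values))]
    return np.array([w1_star, 1.0 - w1_star])
\end{verbatim}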
\subsection{Remaining Cases}
It is easy to see that similar results can be obtained for all remaining possible conditions on the adjustment costs $\mathcal{D}_p(w)$ and $\mathcal{I}_p(w)$. In particular, a corollary to Theorem~\ref{THM:CASE1:Multiple} can be expressed and established analogous to Corollary~\ref{THM:SPECIAL} applied to the aggregate control process $P(t)$. Similarly, the theorems corresponding to Theorems~\ref{THM:CASE2}~--~\ref{THM:CASE4} can be expressed and established analogous to these theorems applied to the aggregate control process $P(t)$.
\section{Computational Experiments}\label{sec:computational}
The foregoing sections establish the explicit optimal dynamic control policy among all admissible nonanticipatory control processes $P(t)$ within a stochastic optimal control setting that solves the stochastic dynamic programs (SC-OPT:S) and (SC-OPT:M). These optimal dynamic control policies render a new class of online algorithms for general dynamic resource allocation problems that arise across a wide variety of application domains. The resulting online algorithms are easily implementable in computer systems and communication networks (among others) at runtime and consist of maintaining $X(t) = P(t)-D(t)$ respectively within the risk-hedging intervals $[L,U]$ and $[L(w),U(w)]$ at all times $t$, where $L$, $U$, $L(w)$ and $U(w)$ are easily obtained from application parameters. In this section, we present a representative sample of computational experiments conducted across a broad spectrum of application environments to investigate various issues of both theoretical and practical interest by comparing our online optimal dynamic control algorithm against alternative optimization approaches from recent work in the research literature.
Through a detailed analysis of real-world trace data~\cite{GaLuSh+13}, we fitted the average daily demand processes for different environments by smooth functions $f^1(t)$ and $f^2(t)$, depicted in Figure~\ref{fig:fx}. This analysis also revealed a wide range of volatility in the demand process over time, as well as from one environment to another. In the remainder of this section, we therefore use the average daily demand patterns $f^1(t)$ and $f^2(t)$ to determine the drift parameter $b$ of the demand process while varying its volatility parameter $\sigma$ (as made more precise below), thus representing a broad spectrum of application environments.
\begin{figure}
\caption{Representative average daily demand patterns $f^1(t)$ (left) and $f^2(t)$ (right).}
\label{fig:fx}
\end{figure}
\subsection{Single Primary Resource} \label{sec:exp:single}
For comparison with our optimal online dynamic control algorithm, we consider two alternative optimization approaches that have recently appeared in the research literature. First, Lin et al.~\cite{LiWiAn+11} propose an optimal offline algorithm that consists of making optimal provisioning decisions in a clairvoyant anticipatory manner based on the {\it{known average demand}} within each slot of a discrete-time model where the slot length is chosen to match the timescale at which a data center can adjust its resource capacity and so that demand activity within a slot is sufficiently nonnegligible in a statistical sense. Applying this particular optimal offline algorithm within our mathematical framework, we partition the daily time horizon into $T$ slots of length $\gamma$ such that
$h_i = (t_{i-1},t_i]$, $\gamma = t_i-t_{i-1}$, $i=1,\ldots,T$, $t_0 := 0$, and we compute the average demand $g_i := \gamma^{-1} \int_{h_{i}}f(t)dt$ within each slot $i$ yielding the average demand vector $(g_1, g_2, \ldots, g_T)$. Define $\Delta(P_i) := P_i - P_{i-1}$, where $P_i$ denotes the primary resource allocation capacity for slot $i$. The optimal solution under this offline algorithm is then obtained by solving the following linear program (LP) for each sample path: \begin{eqnarray} \min_{\Delta(P_1),\ldots,\Delta(P_T)} && \sum_{i=1}^T \; \mathcal{C}_{+} (P_i-g_i)^+ \; + \; \mathcal{C}_{-} (P_i-g_i)^- \; + \;
\mathcal{I}_p (P_i-P_{i-1})^+ \; + \; \mathcal{D}_p (P_i-P_{i-1})^- \label{offline:obj} \\ \mbox{s.t.} && -\infty \; < \; \theta_\ell \; \le \; \Delta(P_i)/\gamma \; \le \; \theta_u \; < \; \infty ,
\qquad \forall i = 1, \ldots, T , \label{offline:st1} \end{eqnarray} where the constraints on the control variables $\Delta(P_i)$ in \eqref{offline:st1} correspond to \eqref{optHF:st1}. We refer to this solution as the offline LP algorithm.
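One way to implement this benchmark is to linearize the positive and negative parts with auxiliary variables and call an off-the-shelf LP solver, as in the minimal Python sketch below (our experiments were implemented in Matlab, as noted later; \texttt{P0} denotes the given initial capacity and all names are illustrative).
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def offline_lp(g, P0, C_plus, C_minus, I_p, D_p, theta_l, theta_u, gamma):
    # Variables (P, o, u, a, d), each of length T, with o >= P - g,
    # u >= g - P, a >= P_i - P_{i-1}, d >= P_{i-1} - P_i, plus the
    # rate constraints theta_l <= (P_i - P_{i-1})/gamma <= theta_u.
    T = len(g)
    nv = 5 * T
    c = np.concatenate([np.zeros(T), C_plus * np.ones(T),
                        C_minus * np.ones(T), I_p * np.ones(T),
                        D_p * np.ones(T)])
    A, b = [], []
    for i in range(T):
        row, prev = np.zeros(nv), np.zeros(nv)
        row[i] = 1.0
        if i > 0:
            prev[i - 1] = 1.0
        rhs0 = P0 if i == 0 else 0.0      # P_0 is a given constant
        r = row.copy(); r[T + i] = -1.0; A.append(r); b.append(g[i])
        r = -row.copy(); r[2 * T + i] = -1.0; A.append(r); b.append(-g[i])
        r = row - prev; r[3 * T + i] = -1.0; A.append(r); b.append(rhs0)
        r = prev - row; r[4 * T + i] = -1.0; A.append(r); b.append(-rhs0)
        A.append(row - prev); b.append(theta_u * gamma + rhs0)
        A.append(prev - row); b.append(-theta_l * gamma - rhs0)
    bounds = [(None, None)] * T + [(0, None)] * (4 * T)
    res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b), bounds=bounds)
    return res.x[:T]   # optimal capacities P_1, ..., P_T
\end{verbatim}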
Second, we consider a related optimal online algorithm proposed by Ciocan and Farias~\cite{CioFar12}. Although they focus on dynamic resource allocations with stochastic demand rate, their allocation scheme based on re-optimization heuristics (Section~3.1 in \cite{CioFar12}) is quite general and can be applied to our setting with stochastic demand process and deterministic demand rate. The main idea of their algorithm is that at each discrete point in time, one uses demand information realized up to that point and assumes the demand rate over the remaining time horizon is unchanged, and then employs the allocation rule that is optimal for such a scenario over the interval of time until the next re-solve. We refer to this as the online CF algorithm.
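A schematic Python rendering of this re-solve loop within our setting is given below; it reflects our simplified reading of the heuristic (the most recently realized demand level serves as the constant forecast for the remaining horizon, and the capacity moves toward it as fast as the rate limits allow) and is not the exact algorithm of \cite{CioFar12}.
\begin{verbatim}
def online_cf(D_obs, P0, theta_l, theta_u, gamma):
    # Schematic re-solve loop; D_obs holds the demand observed at the
    # end of each slot of length gamma.
    P = [P0]
    for D_t in D_obs:
        step = D_t - P[-1]                     # desired adjustment
        step = min(max(step, theta_l * gamma), theta_u * gamma)
        P.append(P[-1] + step)
    return P[1:]
\end{verbatim}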
The sample paths of demand for our computational experiments are generated from a linear diffusion process for the entire time horizon such that the drift of the demand process is obtained as the derivative of $f(t)$ (i.e., $b(t)=df(t)$) and the corresponding volatility term is set to match $\sigma(t)$. Since the volatility pattern $\sigma(t)$ tended to be fairly consistent with respect to time within each daily real-world trace for a specific environment and since the volatility pattern tended to vary considerably from one daily real-world trace to another, our linear diffusion demand process is assumed to be governed by the following model $$dD(t)=b(t)dt+\sigma dW(t),$$ where we vary the volatility term $\sigma$ to investigate different application environments. Each workload then consists of a set of sample paths generated from the Brownian demand process $D(t)$ defined in this manner.
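A minimal Python sketch of this sample-path generation, based on the standard Euler--Maruyama discretization (with illustrative names; \texttt{f} is the fitted smooth demand pattern evaluated on the time grid), is:
\begin{verbatim}
import numpy as np

def demand_paths(f, T, dt, sigma, n_paths, D0, rng=None):
    # Generate sample paths of dD(t) = b(t) dt + sigma dW(t), with
    # b(t) = df(t) obtained by finite differences of the fitted pattern.
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(0.0, T, dt)
    b = np.gradient(f(t), dt)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t)))
    return D0 + np.cumsum(b * dt + sigma * dW, axis=1)
\end{verbatim}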
Given such a demand process, we calibrate our optimal online dynamic control algorithm by first partitioning the average daily demand function $f(t)$ into piecewise linear segments, then correspondingly setting the drift function $b(t)$ of the demand process $D(t)$, and finally computing the threshold values $L$ and $U$ for each per-segment drift and $\sigma$ according to Theorem~\ref{THM:CASE1}. This (fixed) version of our optimal online dynamic control algorithm is applied to every daily sample path of the Brownian demand process $D(t)$ and the time-average value of net-benefit is computed over this set of daily sample paths. For comparison under the same set of Brownian demand process sample paths, we compute the average demand vector $(g_1,\ldots,g_T)$ and the corresponding solution under the offline LP algorithm for each daily sample path by solving the linear program \eqref{offline:obj},\eqref{offline:st1} with respect to $(g_1,\ldots,g_T)$, and then we calculate the time-average value of net-benefit over the set of daily sample paths. The corresponding computational experiments for the CF algorithm are performed within this discrete-time framework and the corresponding time-average net-benefit is computed in a similar manner. All of our computational experiments were implemented in Matlab using, among other functionality, the econometrics toolbox.
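For completeness, applying the threshold policy along one demand path, for a single piecewise-linear segment with fixed thresholds $L$ and $U$, can be sketched in a few lines of Python (names illustrative):
\begin{verbatim}
def run_threshold_policy(D, L, U, theta_l, theta_u, dt, P0):
    # Maintain X = P - D inside [L, U]: push up at rate theta_u when
    # X < L, down at rate theta_l (< 0) when X > U, hold otherwise.
    P, path = P0, []
    for D_t in D:                  # D: demand sampled every dt
        X = P - D_t
        if X < L:
            P += theta_u * dt
        elif X > U:
            P += theta_l * dt
        path.append(P)
    return path
\end{verbatim}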
For our first set of results based on the average daily demand pattern $f^1(t)$ illustrated in the leftmost plot of Figure~\ref{fig:fx}, the base parameter settings are given by $\alpha=0.02$, $\sigma=0.4$, $\theta_{l}=-10$, $\theta_{u}=10$, $\mathcal{C}_{+}=20$, $\mathcal{C}_{-}=2$, $\mathcal{D}_{p}=0.5$, $\mathcal{I}_{p}=0.5$, $f^1_{\mbox{\tiny min}}=2$, $f^1_{\mbox{\tiny max}}=7$, $f^1_{\mbox{\tiny avg}}=4.5$ and $x = X(0) = P(0) - D(0) = 0$, where $f_{\mbox{\tiny min}} := \min_t\{f(t)\}$, $f_{\mbox{\tiny max}} := \max_t\{f(t)\}$ and $f_{\mbox{\tiny avg}} := T^{-1} \int_0^T f(t)dt$. In addition to these base settings, we vary the parameter values one at a time for $\sigma \in [0.01, 1.0]$, $\mathcal{C}_{+} \in [10,40]$, $\mathcal{C}_{-} \in [1,10]$, $f^1_{\mbox{\tiny min}} \in [1,5]$ and $f^1_{\mbox{\tiny max}} \in [4,25]$, in order to investigate the impact and sensitivity of these parameters on the performance of the various optimization algorithms. For each computational experiment under a given set of parameters, we generate $N=10,000$ daily sample paths using a timescale of a couple of seconds and a $\gamma$ setting of five minutes, noting that a wide variety of experiments with different timescale and $\gamma$ settings provided the same performance trends as those presented herein. We then apply our optimal dynamic control policy and the two alternative optimization approaches to this set of $N$ daily sample paths as described above.
Figure~\ref{fig:results1} presents a representative sample of our computational results for the first demand process based on $f^1(t)$. The two leftmost graphs provide performance comparisons of our optimal online dynamic control algorithm against the alternative offline LP and online CF algorithms, respectively, where both comparisons are based on the relative improvements in expected net-benefit under our optimal control policy as a function of $\sigma$; the relative improvement is defined as the difference between the expected net-benefit under our optimal dynamic control policy and that under the alternative optimization approach, divided by the expected net-benefit of the alternative approach. For the purpose of comparison across sets of workloads with very different $f_{\mbox{\tiny avg}}$ values, we plot both of these graphs as a function of the coefficient of variation $\mbox{\textsf{CoV}} = \sigma / f_{\mbox{\tiny avg}}$. The two rightmost graphs provide similar comparisons of relative improvement in expected net-benefit between our optimal dynamic control policy and the two alternative optimization approaches as a function of $\mathcal{C}_{+}$, both with $\sigma$ fixed at $0.4$.
\begin{figure}
\caption{Improvement in expected net-benefit under our optimal dynamic control policy relative to the alternative offline LP (top two plots) and online CF algorithms (bottom two plots) for the first set of workloads based on $f^1(t)$ and for varying values of $\sigma$ and $\mathcal{C}_{+}$.}
\label{fig:results1}
\end{figure}
We first observe from the two leftmost graphs in Figure~\ref{fig:results1} that our optimal online dynamic control algorithm outperforms the two alternative optimization approaches for all $\sigma > 0$. The relative improvements in expected net-benefit grow in an exponential manner with respect to increasing values of $\sigma$ over the range of $\mbox{\textsf{CoV}}$ values $(0,0.22]$ considered, with relative improvements up to $90\%$ in comparison with the offline LP algorithm and more than $150\%$ in comparison with the CF algorithm. Our results illustrate and quantify the fact that, even in discrete-time models with small time slot lengths $\gamma$, nonnegligible volatility plays a critical role in the expected net-benefit of any given resource allocation policy. The significant relative improvements under the optimal online dynamic control algorithm then follow from our stochastic optimal control approach that directly addresses the volatility of the demand process in all primary and secondary resource allocation decisions. This can be clearly seen in the results of Figure~\ref{fig:results1:sp} that illustrate the performance of the three algorithms relative to demand over a representative interval of an individual sample path. Figure~\ref{fig:results1:sp} represents a zoomed-in view of the results over a small segment of the time horizon (60 minutes of the 24 hour time horizon). Although the offline LP algorithm based on \eqref{offline:obj},\eqref{offline:st1} would eventually outperform our optimal online dynamic control algorithm as the time slot length $\gamma$ decreases toward $0$, we note that the choice for $\gamma$ in our computational experiments is considerably smaller than the $10$-minute intervals suggested in~\cite{LiWiAn+11}. Moreover, as discussed in~\cite{CioFar12}, the optimal choice of $\gamma$ is a complex issue in and of itself and it may need to vary over time depending upon the statistical properties of the demand process $D(t)$. A key advantage of our optimal online dynamic control algorithm is that such parameters are not needed.
\begin{figure}
\caption{Performance of all three algorithms over a representative interval of a single sample path.}
\label{fig:results1:sp}
\end{figure}
We next observe from the two rightmost graphs in Figure~\ref{fig:results1} that the relative improvements in expected net-benefit under our optimal online dynamic control algorithm similarly increase with respect to increasing values of $\mathcal{C}_+$, though in a more linear fashion. We also note that very similar trends were observed with respect to varying the value of $\mathcal{C}_{-}$, though the magnitude of the relative improvement in expected net-benefit is smaller. Within the limited range of parameter values considered, our computational experiments suggest that the relative improvements in net-benefit under our optimal dynamic control policy can be more sensitive to $\mathcal{C}_{+}$ than to $\mathcal{C}_{-}$. Recall that $\mathcal{C}_+ = \mathcal{C}_p$ is the cost for the primary resource allocation capacity, whereas $\mathcal{C}_- = \mathcal{N}_p - \mathcal{N}_s$ is the difference in net-benefit between the primary and secondary resource allocation capacities.
We also note that similar trends were observed for changes in the values of $f^1_{\mbox{\tiny min}}$ and $f^1_{\mbox{\tiny max}}$ when the relative improvement results are considered as a function of $\mbox{\textsf{CoV}}$.
Now let us turn to our second set of results based on the average daily demand pattern $f^2(t)$ illustrated in the rightmost plot of Figure~\ref{fig:fx}, where the base parameter settings are given by $\alpha=0.02$, $\sigma=7.0$, $\theta_{l}=-100$, $\theta_{u}=100$, $\mathcal{C}_{+}=20$, $\mathcal{C}_{-}=2$, $\mathcal{D}_{p}=0.5$, $\mathcal{I}_{p}=0.5$, $f^2_{\mbox{\tiny min}}=15$, $f^2_{\mbox{\tiny max}}=90$, $f^2_{\mbox{\tiny avg}}=61$, $x = X(0) = P(0) - D(0) = 0$. In addition to these base settings, we vary the parameter values one at a time for $\sigma \in [0.01, 15]$, $\mathcal{C}_{+} \in [10,40]$, $\mathcal{C}_{-} \in [1,10]$, $f^2_{\mbox{\tiny min}} \in [1,20]$ and $f^2_{\mbox{\tiny max}} \in [9,120]$. Once again, for each experiment comprising a specific workload, we generate $N=10,000$ sample paths using a timescale of a couple of seconds and a $\gamma$ setting of five minutes, noting that a wide variety of experiments with different timescale and $\gamma$ settings provided performance trends that are identical to those presented herein. We then apply our optimal dynamic control policy and the two alternative optimization approaches to this set of $N$ sample paths as described above. Our performance evaluation comparisons are based on the expectation of net-benefit realized under each of the three algorithms, also as described above.
\begin{figure}
\caption{Improvement in expected net-benefit under our optimal dynamic control policy relative to the alternative offline LP (top two plots) and online CF algorithms (bottom two plots) for the second set of workloads based on $f^2(t)$ and for varying values of $\sigma$ and $\mathcal{C}_{+}$.}
\label{fig:results2}
\end{figure}
Figure~\ref{fig:results2} presents a representative sample of our computational results for the demand process based on $f^2(t)$, providing the analogous results that correspond to those in Figure~\ref{fig:results1}. We note that the larger range $[f^2_{\mbox{\tiny min}},f^2_{\mbox{\tiny max}}]$ exhibited in the second average daily demand pattern as well as a higher value of $f^2_{\mbox{\tiny avg}}$ lead to both a higher relative net-benefit for fixed $\sigma$ and a higher sensitivity to changes in $\sigma$, thus improving the gains of our optimal online dynamic control algorithm over the alternative offline LP and online CF algorithms. This relative improvement in expected net-benefit as compared to the set of experiments for the average daily demand pattern $f^1(t)$ can be attributed in part to the sharp drop in average demand from the maximum value of $90$ to a minimum of $15$ within a fairly short time span, thus contributing to an increased effective volatility over and above that represented by $\sigma$. Hence, the fact that the relative improvement exhibited by our optimal online dynamic control algorithm is larger under the average daily demand pattern $f^2(t)$, up to $130\%$ in comparison with the offline LP algorithm and over $230\%$ in comparison with the CF algorithm, can be viewed as an extension of the finding that the relative improvement provided by our optimal online algorithm increases with an increase in $\mbox{\textsf{CoV}}$. A similar gain in performance improvement can be seen in the rightmost two plots when we vary $\mathcal{C}_+$ with a fixed value of $\sigma$.
\subsection{Multiple Primary Resources}
We now turn to investigate the relationship between the single primary resource formulation (SC-OPT:S) and the multiple primary resource formulation (SC-OPT:M). One can envision, under appropriate circumstances and contractual agreements among the multiple primary resource options, that the multiple primary resource allocation model can potentially render greater performance benefits to an organization than the corresponding performance benefits from the single primary resource allocation model. In this section we consider the performance trade-offs between both primary resource allocation models.
To this end, we present a comparison between the single primary resource formulation (SC-OPT:S) and the multiple primary resource formulation (SC-OPT:M) under a two-dimensional contract vector $w$. The particular representative set of results presented here is based on the average daily demand pattern $f^2(t)$ illustrated in the rightmost plot of Figure~\ref{fig:fx}, with the base parameter settings given by $\alpha=0.02$, $\sigma=1$, $\theta_{l,1}=-1000$, $\theta_{l,2}=-1000$, $\theta_{u,1}=1000$, $\theta_{u,2}=1000$, $\mathcal{N}_{p,1} = 10$, $\mathcal{N}_{p,2} = 10$, $\mathcal{C}_{p,1} = 2000$, $\mathcal{C}_{p,2} = 1200$, $\mathcal{N}_s = 1$,
$\mathcal{D}_{p,1}=0.001$, $\mathcal{D}_{p,2}=0.001$, $\mathcal{I}_{p,1}=0.001$, $\mathcal{I}_{p,2}=0.001$, $f^2_{\mbox{\tiny min}}=50$, $f^2_{\mbox{\tiny max}}=125$, $f^2_{\mbox{\tiny avg}}=96.9$ and $x = X(0) = P(0) - D(0) = 0$, where $f_{\mbox{\tiny min}} := \min_t\{f(t)\}$, $f_{\mbox{\tiny max}} := \max_t\{f(t)\}$ and $f_{\mbox{\tiny avg}} := T^{-1} \int_0^T f(t)dt$. In addition to these base settings, we vary the parameter values one at a time for $\sigma \in [1.0, 5.0]$, $\mathcal{N}_{p,1} \in [4,14]$, $\mathcal{N}_{p,2} \in [4,14]$, $\mathcal{C}_{p,1} \in [1700,2300]$, $\mathcal{C}_{p,2} \in [1000,1400]$, $f^2_{\mbox{\tiny min}} \in [40,60]$ and $f^2_{\mbox{\tiny max}} \in [115,135]$, in order to investigate the impact and sensitivity of these parameters on the performance trade-offs between the two formulations. Within this experimental setting, as a representative example, we consider the corresponding single primary resource allocation model with $w_{s}=\begin{bmatrix}1&0\end{bmatrix}$ and consider a corresponding instance of the multiple primary resource allocation model with $w_{m}=\begin{bmatrix}0.7&0.3\end{bmatrix}$. The values of $L(w)$ and $U(w)$ are separately obtained for each of the two primary resource formulations. For every computational experiment under a given set of parameters, we generate $N=10,000$ daily sample paths using a timescale of a couple of seconds and a $\gamma$ setting of five minutes. We note that a wide variety of experiments with different parameter, timescale and $\gamma$ settings were evaluated and shown to exhibit similar performance trends as those presented herein. We then apply our optimal dynamic control policy for each formulation to this set of $N$ daily sample paths as described in Section~\ref{sec:exp:single}.
Figure~\ref{fig:results3} presents a representative sample of our computational results in which the leftmost graph provides comparisons based on the relative improvements in expected net-benefit under our optimal control policy for the two primary resource formulations as a function of the coefficient of variation $\mbox{\textsf{CoV}} = \sigma / f_{\mbox{\tiny avg}}$; analogous to the leftmost graphs in Figures~\ref{fig:results1} and \ref{fig:results2}, the relative improvement is defined here as the difference between the expected net-benefit for the multiple primary resource model and the single primary resource model, divided by the expected net-benefit for the single primary resource model. The rightmost graph provides comparisons based on the relative improvements in expected cumulative discounted costs associated with $\mathcal{C}_{+}(w)$ under our optimal control policy for the two primary resource formulations as a function of $\mbox{\textsf{CoV}}$; the relative improvement is defined here as the difference between the expected cumulative discounted contributions of $\mathcal{C}_{+}(w)$ over the infinite horizon for the single primary resource model (i.e., in \eqref{optHF:obj}) and the multiple primary resource model (i.e., in \eqref{optHF:obj_w}), divided by the expected net-benefit for the single primary resource model. From these results we observe that the gain in expected net-benefit is significant, demonstrating the potential performance benefits to an organization under the multiple primary resource allocation model for such problem instances. These expected net-benefit improvements in the leftmost graph tend to decrease as the coefficient of variation increases, while still remaining significant over the range of $\mbox{\textsf{CoV}}$ values. This decrease in the expected net-benefit gain is primarily due to a similar decreasing trend in the relative difference in the expected cumulative discounted contributions of $\mathcal{C}_{+}(w)$ over the infinite horizon as $\mbox{\textsf{CoV}}$ increases. To help explain this, we note that the values of the second summand in \eqref{eq:simplify} and the expected cumulative discounted contributions of $\mathcal{C}_{+}(w)$ over the infinite horizon are of the same order of magnitude in these experiments, whereas the values of the remaining terms in \eqref{optHF:obj} and \eqref{optHF:obj_w} are orders of magnitude smaller; the similarity of the two graphs is due to the facts that the two higher order magnitude terms dominate the objective function value (expected net-benefit) and the value of the second summand in \eqref{eq:simplify} is the same under both primary resource models. Hence, the trends in the relative expected net-benefit improvements as a function of $\mbox{\textsf{CoV}}$ are directly related to the very similar trends in the relative expected discounted cumulative $\mathcal{C}_{+}(w)$ cost improvements as a function of $\mbox{\textsf{CoV}}$. These trends in turn are primarily due to the role of the risk-hedging interval that widens under each model as a function of the increasing $\mbox{\textsf{CoV}}$ in order to have the optimal dynamic control policy reduce the expected discounted cumulative costs associated with the primary resource(s) over the infinite horizon.
In other words, the optimal dynamic control policy under both models becomes somewhat more conservative due to the greater risks associated with having a primary resource allocation position that is too large and that would otherwise result in larger expected discounted cumulative costs associated with $\mathcal{C}_{+}(w)$. These factors have a somewhat stronger impact on the multiple primary resource model as $\mbox{\textsf{CoV}}$ increases.
\begin{figure}
\caption{Improvements in expected net-benefit and expected cumulative discounted $\mathcal{C}_{+}(w)$ cost under our optimal dynamic control policy for the multiple primary allocation model relative to that for the single primary allocation model based on $f^2(t)$ while increasing the demand uncertainty $\sigma$.}
\label{fig:results3}
\end{figure}
Lastly, it can also be important to consider the role of the margin of rewards and costs when investigating the relationship between the single primary resource formulation (SC-OPT:S) and the multiple primary resource formulation (SC-OPT:M). In many practical applications, the overall rewards and costs are of similar magnitude with reasonably balanced margins, and thus the relationship between the two formulations includes key trade-offs among the net-benefits from both the second summand in \eqref{eq:simplify} and the term involving $\mathcal{C}_{-}$ and $\mathcal{C}_{-}(w)$ in \eqref{optHF:obj} and \eqref{optHF:obj_w} on the one hand, and the relative costs from the terms involving $\mathcal{I}_p$, $\mathcal{D}_p$ and $\mathcal{C}_{+}$ in \eqref{optHF:obj} and involving $\mathcal{I}_p(w)$, $\mathcal{D}_p(w)$ and $\mathcal{C}_{+}(w)$ in \eqref{optHF:obj_w} on the other hand. These trade-offs are indeed reflected in the representative results illustrated in Figure~\ref{fig:results3}. However, in situations where the net-benefits are considerably larger than the relative costs in \eqref{optHF:obj} and \eqref{optHF:obj_w}, the relationship between the single primary resource formulation and the multiple primary resource formulation can depend in large part on the magnitude and ordering between $\mathcal{N}_p$ and $\mathcal{N}_p(w)$; in this case, with $\mathcal{N}_p$ dominating $\mathcal{N}_p(w)$, the single primary resource model can outperform the multiple primary resource model. Analogously, when the relative costs are considerably larger than the net-benefits in \eqref{eq:simplify}, \eqref{optHF:obj} and \eqref{optHF:obj_w}, the relationship between the two formulations can depend in large part on the magnitude and ordering among $\mathcal{C}_{+}$, $\mathcal{I}_p$, $\mathcal{D}_p$, $\mathcal{C}_{+}(w)$, $\mathcal{I}_p(w)$ and $\mathcal{D}_p(w)$; in this case, with, e.g., $\mathcal{C}_{+}$ dominating $\mathcal{C}_{+}(w)$, the multiple primary resource model can outperform the single primary resource model.
\section{Conclusions}\label{sec:conclusion}
In this paper we investigated a general class of dynamic resource allocation problems arising across a broad spectrum of application domains that intrinsically involve different types of resources and uncertain/variable demand. With a goal of maximizing expected net-benefit based on rewards and costs from the different resources, we derived the provably optimal dynamic control policy within a stochastic optimal control setting. Our mathematical analysis includes obtaining simple expressions that govern the dynamic adjustments to resource allocation capacities over time under the optimal control policy. A wide variety of extensive computational experiments demonstrates and quantifies the significant benefits of our optimal dynamic control policy over recently proposed alternative optimization approaches in addressing a general class of resource allocation problems across a diverse range of application domains, including cloud computing and data center environments, computer and communication networks, and human capital supply chains. Moreover, our results strongly suggest that the stochastic optimal control approach taken in this paper can provide an effective means to develop easily-implementable online algorithms for solving stochastic optimization problems. Both single primary resource allocation and multiple primary resource allocation models can be exploited, with the best option depending upon the system environment and model parameters.
Following along the lines of our computational experiments, our algorithm can exploit any consistent seasonal patterns for $b(t)$ and $\sigma(t)$ observed from historical traces in order to predetermine the threshold values $L$ and $U$ or $L(w)$ and $U(w)$. In addition, various approaches such as statistical learning and/or model predictive control (e.g.,~\cite{CioFar12}) can be used to adjust these threshold values in real-time based on identifying and learning any nonnegligible changes in the realized values for $b(t)$ and $\sigma(t)$. Furthermore, this latter approach can be used directly for system/network environments whose demand processes do not exhibit consistent seasonal patterns.
\section*{Acknowledgments.} Xuefeng Gao acknowledges support from Hong Kong RGC ECS Grant 2191081 and CUHK Direct Grants for Research with project codes 4055035 and 4055054. A portion of Mark Squillante's research was sponsored by the US Army Research Laboratory and the UK Ministry of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US Army Research Laboratory, the US Government, the UK Ministry of Defence, or the UK Government. The US and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
\vspace*{1.0in}
\appendix \section{Proofs of Results in Section~\ref{sec:main}} \label{sec:proofs}
In this appendix we collect the proofs of our main results in Section~\ref{sec:main}. We start with a rigorous proof of Theorem~\ref{THM:CASE1} and Corollary~\ref{THM:SPECIAL}, which proceeds in three main steps. First, we express the optimality conditions for the stochastic dynamic program, i.e., the Bellman equation corresponding to \eqref{optHF:obj}~--~\eqref{optHF:st3}. We then derive a solution of the Bellman equation and determine the corresponding candidate value function and dynamic control policy, establishing smoothness and convexity of the candidate value function and uniqueness of the threshold values. Finally, we verify that this dynamic control policy is indeed optimal through a martingale argument. Each of these main steps is presented in turn. Then we present the proofs of our other main results.
\subsection{Proof of Theorem~\ref{THM:CASE1} and Corollary~\ref{THM:SPECIAL}: Step 1}\label{sec:proofs:case1:step1}
From the Bellman principle of optimality, we deduce that the value function $V$ satisfies for each $t \ge 0$ \begin{align} \label{eq:prinofopt} V(x) &= \min_{\theta_\ell \le \dot P(t)\le \theta_u} \mathbb{E}_x \bigg[ \int_0^t e^{-\alpha s} \Big\{\Big( \mathcal{C}_+ X(s)^+ + \mathcal{C}_- X(s)^{-}\Big) ds + ( \mathcal{I}_p \indi{\dot P(s)>0}-\mathcal{D}_p \indi{\dot P(s)<0} ) dP(s) \Big\} \nonumber \\ & \qquad\qquad\qquad\qquad + \; e^{-\alpha t} V(X(t))\bigg] ; \end{align} refer to~\cite[Chapter 4]{YonZho99}. Suppose the value function $V$ is smooth, belonging to the set $C^2$ (i.e., the set of twice continuously differentiable functions) except for a finite number of points, and $V'(x)$ is bounded for any $x$.
Then, based on a standard application of Ito's formula as in~\cite[Chapter 1]{Kryl80}, we derive that the desired Bellman equation for the value function $V$ has the form \begin{equation} \label{eq:bellmaneq} -\alpha V(x) + \frac{1}{2} \sigma^2 V''(x) -b V'(x) + \mathcal{C}_+ x^+ + \mathcal{C}_- x^{-} + \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) = 0, \end{equation} where \begin{equation} \mathcal{L}(\theta, x)= \left\{\begin{matrix} (V'(x)+\mathcal{I}_p) \theta \qquad \text{if} \quad \theta \ge 0 ,\\ (V'(x)-\mathcal{D}_p) \theta \qquad \text{if} \quad \theta<0 . \end{matrix}\right. \label{eq:bellmaneq2} \end{equation}
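Note that the infimum in \eqref{eq:bellmaneq} can be evaluated explicitly: since $\mathcal{L}(\cdot, x)$ is piecewise linear in $\theta$, with slope $V'(x)+\mathcal{I}_p$ on $[0,\theta_u]$ and slope $V'(x)-\mathcal{D}_p$ on $[\theta_\ell, 0)$, and since $V'(x)+\mathcal{I}_p \ge V'(x)-\mathcal{D}_p$, we have \begin{equation*} \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) \; = \; \theta_u \min\{ V'(x)+\mathcal{I}_p, \, 0\} \; + \; \theta_\ell \max\{ V'(x)-\mathcal{D}_p, \, 0\} . \end{equation*} Hence the minimizing adjustment rate is $\theta_u$ when $V'(x) < -\mathcal{I}_p$, $\theta_\ell$ when $V'(x) > \mathcal{D}_p$, and $0$ when $-\mathcal{I}_p \le V'(x) \le \mathcal{D}_p$; this is precisely the ``bang-bang'' structure exploited in Step~2 below.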
\subsection{Proof of Theorem~\ref{THM:CASE1} and Corollary~\ref{THM:SPECIAL}: Step 2}\label{sec:proofs:case1:step2}
Our next goal is to construct a convex function $Y$ that satisfies the Bellman equation \eqref{eq:bellmaneq} and show that the threshold values $L$ and $U$ are uniquely determined by the corresponding pair of nonlinear equations in Theorem~\ref{THM:CASE1}. Suppose a candidate value function $Y(x)$ satisfies \eqref{eq:bellmaneq}. Given \eqref{eq:bellmaneq2}, we expect a ``bang-bang'' type solution based on the signs of $Y'(x)+\mathcal{I}_p$ and $Y'(x)-\mathcal{D}_p$. In particular, we seek to find $L$ and $U$ such that \begin{equation} \label{eq:derofY} Y'(x)= \left\{\begin{array}{ll}
\ge \mathcal{D}_p , & \qquad \text{if} \quad x \ge U ,\\
\in (-\mathcal{I}_p, \mathcal{D}_p) , & \qquad \text{if} \quad L<x < U ,\\ \le -\mathcal{I}_p , & \qquad \text{if} \quad x \le L . \end{array}\right. \end{equation}
Moreover, we require that $Y$ meets smoothly at the points $L$, $0$ and $U$ to order one, and $Y(x)=O(|x|)$ as $|x| \rightarrow \infty$ (i.e., $\lim_{|x| \rightarrow \infty} \frac{Y(x)}{|x|} \le C$ for some $C \ge 0$) so that $Y'$ is bounded.
For each of the three cases in Theorem~\ref{THM:CASE1}, we first solve the Bellman Equation \eqref{eq:bellmaneq} to derive the corresponding pair of equations that $L$ and $U$ satisfy. Then we discuss conditions on model parameters under which the points $L$ and $U$ are located in comparison with $0$. Finally, we show that the function $Y$ we construct has the property \eqref{eq:derofY}.
\subsubsection{Case I: $U \ge L \ge 0$}\label{sec:521}
We proceed to solve the Bellman equation \eqref{eq:bellmaneq} depending on the value of $x$ in relation to $U$, $0$ and $L$. There are four subcases to consider as follows.
\textit{(i).} ~If $x \ge U > 0$, we obtain from \eqref{eq:derofY} that $$Y'(x) \; \ge \; \mathcal{D}_p \qquad \mbox{and} \qquad \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) \; = \; \mathcal{L}(\theta_\ell, x).$$ Then the Bellman equation \eqref{eq:bellmaneq} yields $$ -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) + \mathcal{C}_+ x^+ + \mathcal{C}_- x^{-}+ \mathcal{L}(\theta_\ell, x) \; = \; 0 , $$ or equivalently $$ -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) +\mathcal{C}_+ x + (Y'(x)-\mathcal{D}_p)\theta_\ell \; = \; 0. $$ Solving this second order linear nonhomogeneous differential equation, we obtain \begin{equation*} Y(x) \; = \; \frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_1 e^{t_2x} + \bar l_1 e^{t_1 x}, \end{equation*}
where $t_1>0>t_2$ are given in (\ref{eq:t2}) and $l_1, \bar l_1$ are two constants to be determined. Since $Y(x)=O(|x|)$ when $|x|$ goes to $\infty$, one finds $\bar l_1 =0.$ Thus we derive for $x \ge U$ \begin{equation} \label{eq:caseI:YlargerU} Y(x) \; = \; \frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_1 e^{t_2x}. \end{equation}
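(The linear particular solution can be verified directly: substituting $Y_p(x) = a x + c$ into the differential equation gives $-\alpha(a x + c) - a b + \mathcal{C}_+ x + (a - \mathcal{D}_p)\theta_\ell = 0$, so matching the coefficient of $x$ yields $a = \mathcal{C}_+/\alpha$, and matching the constant terms yields $c = \frac{1}{\alpha}\left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell - b) - \mathcal{D}_p \theta_\ell\right)$, in agreement with \eqref{eq:caseI:YlargerU}; the exponents $t_1, t_2$ are the roots of the characteristic equation $\frac{1}{2}\sigma^2 m^2 - (b - \theta_\ell) m - \alpha = 0$.)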
\textit{(ii).} ~If $0 \le L < x < U$, we have $$-\mathcal{I}_p<Y'(x) \; < \; \mathcal{D}_p, \qquad \mbox{and} \qquad \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) \; = \; 0, $$ and thus we obtain $$ -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) +\mathcal{C}_+ x \; = \; 0. $$ This implies for $0 \le L < x<U$ \begin{equation} \label{eq:caseI:YLU} Y(x) \; = \; \frac{\mathcal{C}_+}{\alpha} x + \frac{-b \mathcal{C}_+}{\alpha^2}+ \lambda_1 e^{r_1 x} + \lambda_2 e^{r_2 x}, \end{equation} where $r_1, r_2$ are given by (\ref{eq:r2}), and $\lambda_1, \lambda_2$ are two generic constants to be determined.
\textit{(iii).} ~If $0 <x \le L$, we find from \eqref{eq:derofY} that $$Y'(x) \; \le \; -\mathcal{I}_p \qquad \mbox{and} \qquad \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) \; = \; \mathcal{L}(\theta_u, x). $$ Then \eqref{eq:bellmaneq} becomes $$ -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) + \mathcal{C}_+ x + (\mathcal{I}_p + Y'(x)) \theta_u \; = \; 0, $$ and thus \begin{equation}\label{eq:caseI:Y0L} Y(x) \; = \; \frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_u-b) + \mathcal{I}_p \theta_u\right) + \tilde \lambda_1 e^{s_1 x} + \tilde \lambda_2 e^{s_2 x}, \end{equation} where $s_1, s_2$ are given by (\ref{eq:s2}), and $\tilde \lambda_1, \tilde \lambda_2$ are two generic constants to be determined.
\textit{(iv).} ~If $x \le 0$, we have $$Y'(x) \; \le \; -\mathcal{I}_p,$$ and we similarly derive that \begin{eqnarray*} -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) - \mathcal{C}_- x + (Y'(x)+\mathcal{I}_p) \theta_u \; = \; 0. \end{eqnarray*}
Since $Y(x)=O(|x|)$ as $|x|\rightarrow \infty$, we deduce that the solution is then given by \begin{equation} \label{eq:caseI:YlessL} Y(x) \; = \; -\frac{\mathcal{C}_-}{\alpha} x + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_u-b) +\mathcal{I}_p \theta_u\right)+ l_2 e^{s_1 x}, \end{equation} where $s_1$ is given by (\ref{eq:s2}), and $l_2$ is a generic constant to be determined.
Now we determine $L$ and $U$ as well as the other six unknown constants $l_1, l_2, \lambda_1, \lambda_2, \tilde \lambda_1, \tilde \lambda_2$. We do so by matching the value and the first-order derivative of $Y$ at the points $U$, $0$ and $L$. This leads to eight nonlinear equations in total as illustrated below. In addition, using such a construction, the function $Y$ will be twice continuously differentiable with the exception of at most three points. Let us first consider such matchings at the point $U$. From \eqref{eq:derofY}, \eqref{eq:caseI:YlargerU} and \eqref{eq:caseI:YLU}, we obtain three equations: \begin{eqnarray*} Y'(U+) & = & \frac{\mathcal{C}_+}{\alpha}+l_1 t_2 e^{t_2 U}=\mathcal{D}_p, \\ Y'(U-) & = & \frac{\mathcal{C}_+}{\alpha}+\lambda_1 r_1 e^{r_1 U}+\lambda_2 r_2 e^{r_2 U}=\mathcal{D}_p,\\ Y(U+) & = & \frac{\mathcal{C}_+}{\alpha} U + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_1 e^{t_2U} \\ &=&Y(U-) = \frac{\mathcal{C}_+}{\alpha} U + \frac{-b \mathcal{C}_+}{\alpha^2}+ \lambda_1 e^{r_1 U} + \lambda_2 e^{r_2 U}. \end{eqnarray*} Upon simplifying, we can express $l_1, \lambda_1, \lambda_2$ in terms of $U$ and the model parameters as follows: \begin{eqnarray} l_1 & = & \frac{1}{t_2} (\mathcal{D}_p - \frac{\mathcal{C}_+}{\alpha}) e^{-t_2 U}, \label{eq:caseI:l1} \\ \lambda_1 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_1 U} \left(1-\frac{r_1}{r_2}\right)^{-1} (-B_1), \label{eq:caseI:lambda1}\\ \lambda_2 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_2 U} \left(1-\frac{r_2}{r_1}\right)^{-1} J_1. \label{eq:caseI:lambda2} \end{eqnarray} Similarly, matching at the point $L$, we obtain another three equations: \begin{eqnarray*} Y'(L+) & = & \frac{\mathcal{C}_+}{\alpha}+ \lambda_1 r_1 e^{r_1 L}+ \lambda_2 r_2 e^{r_2 L}=-\mathcal{I}_p, \\ Y'(L-) & = & \frac{\mathcal{C}_+}{\alpha} + \tilde \lambda_1 s_1 e^{s_1 L} + \tilde \lambda_2 s_2 e^{s_2 L}=-\mathcal{I}_p,\\ Y(L-) &= & \frac{\mathcal{C}_+}{\alpha} L + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_u -b) + \mathcal{I}_p \theta_u\right)+ \tilde \lambda_1 e^{s_1 L} + \tilde \lambda_2 e^{s_2 L} \\ &=&Y(L+ ) = \frac{\mathcal{C}_+}{\alpha} L + \frac{- b \mathcal{C}_+}{\alpha^2}+ \lambda_1 e^{r_1 L} + \lambda_2 e^{r_2 L}. \end{eqnarray*} Upon plugging the expressions for $\lambda_1$ and $\lambda_2$ into $Y'(L+)=-\mathcal{I}_p$, we obtain \eqref{eq:xhminusxf}. On the other hand, it follows from $Y(L+)=Y(L-)$ that \begin{eqnarray} \label{eq:caseI:tildelambda1} \lambda_1 e^{r_1 L} + \lambda_2 e^{r_2 L} \; = \; \frac{\theta_u}{\alpha} \left(\mathcal{I}_p+ \frac{\mathcal{C}_+}{\alpha}\right) + \tilde \lambda_1 e^{s_1 L} + \tilde \lambda_2 e^{s_2 L} . \end{eqnarray} Combining this with $Y'(L-)=-\mathcal{I}_p$, we cancel $\tilde \lambda_1$ and derive \begin{equation}\label{eq:L3} \lambda_1 e^{r_1 L} + \lambda_2 e^{r_2 L} \; = \; \left( \frac{\theta_u}{\alpha}- \frac{1}{s_1}\right) \left(\mathcal{I}_p+ \frac{\mathcal{C}_+}{\alpha}\right) +
\tilde \lambda_2 e^{s_2 L} \left(1- \frac{s_2}{s_1}\right) . \end{equation} Lastly, solving $\tilde \lambda_2$ by matching $Y$ at the point 0 to order one, we obtain two equations: \begin{eqnarray*} Y'(0+)\;=\;\frac{\mathcal{C}_+}{\alpha}+ \tilde \lambda_1 s_1 + \tilde \lambda_2 s_2&=&Y'(0-)\;=\; -\frac{\mathcal{C}_-}{\alpha}+ l_2 s_1, \\ Y(0-)\;=\;\frac{1}{\alpha} \left( -\frac{\mathcal{C}_-}{\alpha} (\theta_u -b) + \mathcal{I}_p \theta_u\right) + l_2 &=& Y(0+)\;=\;\frac{1}{\alpha} \left( \frac{\mathcal{C}_+}{\alpha} (\theta_u -b) + \mathcal{I}_p \theta_u\right) + \tilde \lambda_1 + \tilde \lambda_2 . \end{eqnarray*} Solving these two equations renders \begin{eqnarray} \tilde \lambda_2 \; = \; \frac{\sigma^2}{2 \alpha^2} (\mathcal{C}_+ + \mathcal{C}_-) \frac{{s_1} ^2}{ s_1 -s_2}, &\qquad& \tilde \lambda_1 - l_2 \; = \; \frac{\sigma^2}{2 \alpha^2} (\mathcal{C}_+ + \mathcal{C}_-)( \frac{ -{s_2} ^2}{ s_1 -s_2}). \label{eq:caseI:l2} \end{eqnarray} Upon substituting the expressions for $\lambda_1$, $\lambda_2$ and $\tilde \lambda_2$ into \eqref{eq:L3} and simplifying the resulting expression, we conclude that $L$ and $U$ satisfy \eqref{eq:xH}. Once we determine $L$ and $U$ from \eqref{eq:xhminusxf} and \eqref{eq:xH}, the other unknown constants $l_1, l_2, \lambda_1, \lambda_2, \tilde \lambda_1$ can be derived from \eqref{eq:caseI:l1}-\eqref{eq:caseI:l2} accordingly.
Next, we provide conditions under which \eqref{eq:xhminusxf} and \eqref{eq:xH} have a unique solution $L$ and $U$ that are both non-negative. Define for $y>0$, \begin{eqnarray} h(y)=B_1 y + J_1 y^{\frac{r_2}{r_1}}+ A . \label{eq:func-h} \end{eqnarray} We first consider the case $\mathcal{I}_p+\mathcal{D}_p>0.$ In this case, one can readily verify that \begin{eqnarray} \label{eq:h1} h(1) = B_1 + J_1 + A = {\alpha} (r_1-r_2) {(-\mathcal{I}_p-\mathcal{D}_p) } <0. \end{eqnarray} In addition, $h$ is strictly convex and $\lim_{y \rightarrow 0+} h (y) = \infty$. Thus $h(y)=0$ has a unique solution in $(0,1)$. Since \eqref{eq:xhminusxf} is equivalent to $h(e^{r_1(L-U)}) = 0$, we deduce that $L-U$ is negative and it is uniquely determined by \eqref{eq:xhminusxf}. On the other hand, observe that $L \ge 0$ holds if and only if $e^{s_2 L} \;\le\; 1$ holds since $s_2<0$. Using \eqref{eq:xH}, this condition is equivalent to \begin{eqnarray*} \frac{B_1 r_2}{r_1-r_2} e ^{r_1 (L-U)} + \frac{J_1 r_1}{r_1-r_2} e^{r_2 (L-U)} &\le& (r_1+r_2-s_1)(\alpha \mathcal{I}_p+{\mathcal{C}_+}) + (\mathcal{C}_+ + \mathcal{C}_-) s_1, \end{eqnarray*} which, upon canceling $J_1 e^{r_2 (L-U)}$ using \eqref{eq:xhminusxf}, becomes \begin{eqnarray*}
e ^{r_1 (L-U)} \ge \frac{B_3 -B_2}{B_1}. \end{eqnarray*} Therefore, guaranteeing that \eqref{eq:xhminusxf} and \eqref{eq:xH} have a unique solution with $U>L \ge 0$ is equivalent to showing that $h(y)=0$ has a unique solution in the interval $$\left[\frac{B_3-B_2}{B_1}, 1\right).$$ This is true if either of the following two sets of conditions holds: \begin{eqnarray*} (i) & \frac{B_3-B_2}{B_1} \in (0,1) \quad \text{and} \quad h\left(\frac{B_3-B_2}{B_1}\right) \ge 0;\\ (ii) & B_3 \le B_2. \end{eqnarray*} Note that $\mathcal{I}_p+\mathcal{D}_p>0.$ We then find that these conditions are exactly conditions (1a) and (1b) in Theorem~\ref{THM:CASE1} after applying simple algebraic manipulations. We next consider the case where $\mathcal{I}_p=\mathcal{D}_p=0$. It is clear from \eqref{eq:h1} that $h(1)=0$, and $h'(1)=B_1 +J_1 \frac{r_2}{r_1} = \frac{\mathcal{C}_+(r_1- r_2) t_2}{r_1} <0$. This implies that the unique solution to \eqref{eq:xhminusxf} is $L=U := \delta$. Now we deduce from Equation \eqref{eq:xH} that \[e^{s_2 \delta}= e^{s_2 L} \; = \; \frac{\mathcal{C}_+}{\mathcal{C}_+ +\mathcal{C}_-} \frac{s_1 -t_2}{s_1},\] and $\delta \ge 0$ if and only if $\mathcal{C}_- s_1 + \mathcal{C}_+ t_2 \ge 0$. This condition is equivalent to $B_1 + B_2 -B_3 \ge 0$, which corresponds to condition (1c) in Theorem~\ref{THM:CASE1}. Note that $\delta=0$ if and only if $B_3-B_1-B_2=0$. Thus, as a byproduct, we have obtained the explicit form of $\delta$ in Corollary~\ref{THM:SPECIAL} when $\delta \ge 0$.
Finally, we verify that the candidate value function $Y$ satisfies the required first-order properties in \eqref{eq:derofY}. Since we have constructed the function $Y$ with $Y'(U)=\mathcal{D}_p$ and $Y'(L)=-\mathcal{I}_p$, then to establish \eqref{eq:derofY} it suffices to verify the convexity of the function $Y$. To this end, we first consider $x \ge U$. One readily confirms from \eqref{eq:caseI:YlargerU} and \eqref{eq:caseI:l1} that $$ Y''(x) \; = \; l_1 t_2^2 e^{t_2 x} \; = \; \left(\mathcal{D}_p- \frac{\mathcal{C}_+}{\alpha}\right) t_2 e^{t_2 (x-U)}. $$ Given that $\mathcal{D}_p < \mathcal{C}_+ / \alpha$ and that $t_2<0$ in \eqref{eq:t2}, we can conclude $$Y''(x) \; > \; 0, \qquad \text{for $x \ge U$.}$$ Similarly, when $x \le 0$, we have $$Y''(x) \; = \; l_2 s_1^2 e^{s_1 x}.$$ Hence, to show $Y''(x) \ge 0$, it suffices to show that $l_2 \ge 0$. From \eqref{eq:caseI:l2}, it is equivalent to show \begin{eqnarray} \tilde \lambda_1 \ge \frac{\sigma^2}{2 \alpha^2} (\mathcal{C}_+ + \mathcal{C}_-)( \frac{ -{s_2} ^2}{ s_1 -s_2}). \label{ineq:tildelambda1} \end{eqnarray} Using the relationship between $\tilde \lambda_1$ and $\tilde \lambda_2$ in $Y'(L-)=-\mathcal{I}_p$ and substituting $\tilde \lambda_2$ given in \eqref{eq:caseI:l2}, we infer from \eqref{ineq:tildelambda1} that we simply need to establish \begin{eqnarray*} \frac{\sigma^2}{2 \alpha^2} (\mathcal{C}_+ + \mathcal{C}_-)( \frac{ -{s_2} ^2}{ s_1 -s_2}) s_1 e^{s_1L} + \frac{\sigma^2}{2 \alpha^2} (\mathcal{C}_+ + \mathcal{C}_-)( \frac{ {s_1} ^2}{ s_1 -s_2}) s_2 e^{s_2L} + (\frac{\mathcal{C}_+}{\alpha} + \mathcal{I}_p) \le 0. \end{eqnarray*} Given $\mathcal{I}_p < \frac{\mathcal{C}_-}{\alpha}$, it suffices to show \begin{eqnarray} \label{eq:caseI:case2xpos3} s_2 e^{s_1 L} - s_1 e^{s_2 L} + (s_1 -s_2) \le 0. \end{eqnarray} This is readily proved since the left-hand side of \eqref{eq:caseI:case2xpos3} is 0 when $L=0$, and it is non-increasing on $[0, \infty)$ as a function of $L.$ Thus we have established the convexity of $Y$ for $x \le 0$. Turning to the case $L<x<U$, we can verify $$ Y''(x) \; = \; \lambda_1 r_1^2 e^{r_1x} +\lambda_2 r_2^2 e^{r_2x}. $$ Upon substituting \eqref{eq:caseI:lambda1} and \eqref{eq:caseI:lambda2} for $\lambda_1$ and $\lambda_2$, we obtain when $L<x<U$ \begin{eqnarray*} Y''(x) &=& \left(e^{r_1(x-U)} \frac{r_1^2r_2}{r_1-r_2} B_1 + e^{r_2(x-U)} \frac{r_2^2r_1}{r_1-r_2} J_1 \right) \frac{\sigma^2}{2 \alpha^2} \\ &\ge& \left(\frac{r_1^2r_2}{r_1-r_2} B_1 + \frac{r_2^2r_1}{r_1-r_2} J_1 \right) \frac{\sigma^2}{2 \alpha^2} \\ &=& ( \frac{\mathcal{C}_+} {\alpha} - \mathcal{D}_p) (- t_2) \;\; > \;\; 0. \end{eqnarray*} Finally, for the case $0 <x \le L$, we derive \begin{eqnarray*} Y''(x)&=& \tilde \lambda_1 s_1^2 e^{s_1x} + \tilde \lambda_2 s_2^2 e^{s_2x} \\ & = & \left(-\mathcal{I}_p - \frac{\mathcal{C}_+}{\alpha} - \tilde \lambda_2 s_2 e^{s_2 L}\right) s_1 e^{s_1 (x-L)} + \tilde \lambda_2 s_2^2 e^{s_2x} \end{eqnarray*} where in the second equality we cancel $\tilde \lambda_1$ by using the relationship between $\tilde \lambda_1$ and $\tilde \lambda_2$ in $Y'(L-)=-\mathcal{I}_p$. After substituting $\tilde \lambda_2$ given in \eqref{eq:caseI:l2}, and simplifying the resulting expression, we obtain \begin{align} \label{eq:case3Ycov} Y''(x) = \left[\left(-\mathcal{I}_p - \frac{\mathcal{C}_+}{\alpha}\right) s_1 + \frac{\mathcal{C}_+ + \mathcal{C}_-}{\alpha} s_1 e^{s_2 L} \right] e^{s_1 (x-L)} + \frac{\mathcal{C}_+ + \mathcal{C}_-}{\alpha} \frac{-s_1 s_2}{s_1 -s_2} e^{s_2 L} (e^{s_2 (x-L)} - e^{s_1 (x-L)}) . 
\end{align} Suppose that \begin{equation} \label{eq:case3cov} Y''(L-) = \left(-\mathcal{I}_p - \frac{\mathcal{C}_+}{\alpha}\right) s_1 + \frac{\mathcal{C}_+ + \mathcal{C}_-}{\alpha} s_1 e^{s_2 L} \; > \; 0. \end{equation} Then $Y''(x) \; \ge \; 0$ for all $x \in (0,L]$, as can be seen by distinguishing two cases. If \begin{equation} \label{eq:ineqcase3} \left(-\mathcal{I}_p - \frac{\mathcal{C}_+}{\alpha}\right) s_1 + \frac{\mathcal{C}_+ + \mathcal{C}_-}{\alpha} s_1 e^{s_2 L} \; \ge \; \frac{\mathcal{C}_+ + \mathcal{C}_-}{\alpha} \frac{-s_1 s_2}{s_1 -s_2} e^{s_2 L}, \end{equation} it follows from \eqref{eq:case3Ycov} that $Y''(x) \ge 0$. On the other hand, if \eqref{eq:ineqcase3} does not hold, one can readily verify from \eqref{eq:case3Ycov} and the fact that $s_1>0 > s_2$ that $Y''(x)$ is non-increasing on $(0, L]$. This again implies that $Y''(x) > 0$ for $x\in (0,L]$, due to \eqref{eq:case3cov}. Hence, to show the convexity of the candidate value function $Y$ on $(0, L]$, it remains to establish \eqref{eq:case3cov}. We rely on the fact that $L$ and $U$ satisfy equations \eqref{eq:xhminusxf} and \eqref{eq:xH}. From \eqref{eq:xH}, we deduce that in order for \eqref{eq:case3cov} to hold, it suffices to establish that $${{ \frac{B_1 r_2}{r_1-r_2} e ^{r_1 (L-U)} + \frac{J_1 r_1}{r_1-r_2} e ^{r_2 (L-U)}}} \; > \; (r_1 +r_2)( \alpha \mathcal{I}_p+ {\mathcal{C}_+}),$$ which becomes $$B_1 r_2 e ^{r_1 (L-U)} + J_1 r_1 e ^{r_2 (L-U)} + (r_1 +r_2) A \; > \; 0.$$ Since $L-U$ satisfies \eqref{eq:xhminusxf}, we can equivalently show $$B_1 r_1 e ^{r_1 (L-U) } + J_1 r_2 e ^{r_2 (L-U)} \; < \; 0 .$$ Write $$f (y) \; = \; h(e^{r_1 y}),$$ where the function $h$ is given by (\ref{eq:func-h}). We can deduce from the properties of $h$ that $f$ is strictly convex, $f(0) <0$, and $f'(0)<0$. This immediately implies that $f$ is strictly decreasing on $(-\infty, 0]$ and $$f'(L-U) \; = \; B_1 r_1 e^{r_1 (L-U) } + J_1 r_2 e^{r_2 (L-U)} \; < \; 0.$$ Thus we have established that $Y''$ is nonnegative on $(0, L]$, and the proof of the convexity of $Y$ is complete.
\subsubsection{Case II: $0 \ge U \ge L$}\label{sec:522}
Similar to Case I, we proceed to solve the Bellman equation \eqref{eq:bellmaneq} depending on the value of $x$ in relation to $U$, $0$ and $L$. One readily obtains
\begin{eqnarray} Y(x)= \begin{cases} \frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_3 e^{t_2x}, & x \ge 0,\\ -\frac{\mathcal{C}_-}{\alpha} x + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ \lambda_3 e^{t_1 x} + \lambda_4 e^{t_2 x}, & U < x < 0,\\ - \frac{\mathcal{C}_-}{\alpha} x + \frac{b \mathcal{C}_-}{\alpha^2}+ \tilde \lambda_3 e^{r_1 x} + \tilde \lambda_4 e^{r_2 x}, & L < x < U,\\ -\frac{\mathcal{C}_-}{\alpha} x + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_u-b) +\mathcal{I}_p \theta_u\right)+ l_4 e^{s_1 x}, & x<L, \end{cases} \end{eqnarray} where $t_i, s_i, r_i$ are given in Section~\ref{sec:main:prelim:Single} and $l_3, l_4, \lambda_3, \lambda_4, \tilde \lambda_3, \tilde \lambda_4$ are constants to be determined.
To determine $L$ and $U$ together with the other six unknown constants $l_3, l_4, \lambda_3, \lambda_4, \tilde \lambda_3, \tilde \lambda_4$, we match the value and the first-order derivative of $Y$ at the points $U$, $0$ and $L$, and thus obtain eight nonlinear equations in these eight unknowns, as given below. The first three equations are derived from matching at the point $U$: \begin{align} Y'(U+)&=-\frac{\mathcal{C}_-}{\alpha}+\lambda_3 t_1 e^{t_1 U}+\lambda_4 t_2 e^{t_2 U}=\mathcal{D}_p, \nonumber \\ Y'(U-)&=-\frac{\mathcal{C}_-}{\alpha}+\tilde \lambda_3 r_1 e^{r_1 U}+\tilde \lambda_4 r_2 e^{r_2 U}=\mathcal{D}_p, \nonumber\\ Y(U+)&=-\frac{\mathcal{C}_-}{\alpha} U + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ \lambda_3 e^{t_1 U}+\lambda_4 e^{t_2 U} \nonumber \\ &=Y(U-)=-\frac{\mathcal{C}_-}{\alpha} U + \frac{ b \mathcal{C}_-}{\alpha^2}+ \tilde \lambda_3 e^{r_1 U} + \tilde \lambda_4 e^{r_2 U} . \nonumber \end{align} We also obtain three equations from matching at $L$: \begin{align} Y'(L+) & = - \frac{\mathcal{C}_-}{\alpha}+ \tilde \lambda_3 r_1 e^{r_1 L}+\tilde \lambda_4 r_2 e^{r_2 L}=-\mathcal{I}_p, \nonumber \\ Y'(L-) & = -\frac{\mathcal{C}_-}{\alpha} + l_4 s_1 e^{s_1 L}=-\mathcal{I}_p, \nonumber \\ Y(L-) & = -\frac{\mathcal{C}_-}{\alpha} L + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_u -b) + \mathcal{I}_p \theta_u\right)+ l_4 e^{s_1 L} \nonumber\\ &=Y(L+ ) \; = \; - \frac{\mathcal{C}_-}{\alpha} L + \frac{ b \mathcal{C}_-}{\alpha^2}+ \tilde \lambda_3 e^{r_1 L} + \tilde \lambda_4 e^{r_2 L}. \nonumber \end{align} Two additional equations arise from matching at $0$: \begin{align} Y'(0+) & \; = \; \frac{\mathcal{C}_+}{\alpha}+ l_3 t_2 = Y'(0-)= -\frac{\mathcal{C}_-}{\alpha}+ \lambda_3 t_1 + \lambda_4 t_2 , \nonumber \\ Y(0+) &\; = \; \frac{1}{\alpha} \left( (\theta_\ell -b) \frac{\mathcal{C}_+}{\alpha} -\mathcal{D}_p \theta_\ell\right) + l_3 \nonumber \\ &=Y(0-) \; = \; \frac{1}{\alpha} \left( -(\theta_\ell -b) \frac{\mathcal{C}_-}{\alpha} -\mathcal{D}_p \theta_\ell\right) + \lambda_3 + \lambda_4. \nonumber \end{align} Solving these equations, we conclude that $L$ and $U$ satisfy \eqref{eq:xfminusxh} and \eqref{eq:xF}.
Next, we provide necessary and sufficient conditions under which \eqref{eq:xfminusxh} and \eqref{eq:xF} have a unique solution $L$ and $U$ that are both non-positive. Define for $y>0$, \begin{eqnarray} \bar h(y)=B_2 y + J_2 y^{\frac{r_2}{r_1}}+ K. \label{eq:func-hbar} \end{eqnarray} We first consider the case $\mathcal{I}_p +\mathcal{D}_p>0$. Using an argument similar to that for Case I, one readily checks that guaranteeing that \eqref{eq:xfminusxh} and \eqref{eq:xF} have a unique solution with $ 0 \ge U \ge L$ is equivalent to showing that $\bar h(y)=0$ has a unique solution in the interval $[1, \frac{B_3 -B_1}{B_2}]$. Since $\bar h$ is strictly increasing on $[1, \infty)$ and $\bar h(1) = B_2+J_2 +K <0$, one deduces that this is equivalent to the conditions \begin{eqnarray} \label{ineq:xfnego} \frac{B_3-B_1}{B_2} > 1 \qquad \text{and} \qquad \bar h\left(\frac{B_3-B_1}{B_2}\right) \ge 0. \end{eqnarray} Simple algebraic manipulations confirm that the conditions \eqref{ineq:xfnego} are the same as condition (2a) in Theorem~\ref{THM:CASE1}. We next consider the case $\mathcal{I}_p =\mathcal{D}_p=0$. One infers from \eqref{eq:func-hbar} that $\bar h(1) =0$ and thus we obtain $L=U := \delta$. In addition, we derive from \eqref{eq:xF} that \begin{eqnarray} \label{eq:deltaneg}
e^{t_1 \delta} = e^{t_1 U} = \frac{\mathcal{C}_-}{\mathcal{C}_++\mathcal{C}_-} \frac{s_1-t_2}{-t_2}, \end{eqnarray} and $\delta \le 0$ if and only if $\mathcal{C}_- s_1 + \mathcal{C}_+ t_2 \le 0$. This condition is equivalent to $B_3-B_1 - B_2 \ge 0$, which is condition (2b) in Theorem~\ref{THM:CASE1}. One checks that when $\mathcal{I}_p =\mathcal{D}_p=0$, $\delta =0$ holds if and only if $B_3 - B_1 - B_2 =0$. In addition, we obtain the explicit form for $\delta \le 0$ in Corollary~\ref{THM:SPECIAL} from (\ref{eq:deltaneg}).
Finally, we verify the convexity of the function $Y$, which implies that $Y$ satisfies the required first-order properties in \eqref{eq:derofY}. This can be proved by establishing the non-negativity of the second-order derivative $Y''$ on each of the intervals $(-\infty, L), [L, U], (U, 0],$ and $(0, \infty)$ separately. The proof is similar to that in Case I and is thus omitted here.
\subsubsection{Case III: $U \ge 0 \ge L$}\label{sec:523}
We proceed to solve the Bellman equation \eqref{eq:bellmaneq} depending on the value of $x$ in relation to $U$, $0$ and $L$. We obtain \begin{eqnarray*} Y(x)= \begin{cases} \frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_5 e^{t_2x}, & x > U \ge 0,\\ \frac{\mathcal{C}_+}{\alpha} x + \frac{-b \mathcal{C}_+}{\alpha^2}+
\lambda_5 e^{r_1 x} + \lambda_6 e^{r_2 x}, & 0 \le x<U,\\ - \frac{\mathcal{C}_-}{\alpha} x + \frac{b \mathcal{C}_-}{\alpha^2}+
\tilde \lambda_5 e^{r_1 x} + \tilde \lambda_6 e^{r_2 x}, & L<x < 0,\\ -\frac{\mathcal{C}_-}{\alpha} x + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_u-b) +\mathcal{I}_p \theta_u\right)+ l_6 e^{s_1 x}, &x \le L, \end{cases} \end{eqnarray*} where $l_5, l_6, \lambda_5, \lambda_6, \tilde \lambda_5, \tilde \lambda_6$ are unknown constants to be determined. To find these constants,
we match the value and the first-order derivative of $Y$ at the points $U$, $0$ and $L$. This enables us to establish eight equations for the eight unknowns $L, U, l_5, l_6, \lambda_5, \lambda_6, \tilde \lambda_5, \tilde \lambda_6$. Let us first consider the matching at the point $U$.
One readily verifies that we can express $\lambda_5$ and $\lambda_6$ in terms of $U$ and the model parameters as follows: \begin{eqnarray} \lambda_5 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_1 U} \left(1-\frac{r_1}{r_2}\right)^{-1} (-B_1), \label{eq:caseIII:lambda1}\\ \lambda_6 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_2 U} \left(1-\frac{r_2}{r_1}\right)^{-1} J_1. \label{eq:caseIII:lambda2} \end{eqnarray} Note that $\lambda_5$ and $\lambda_6$ have the same forms as $\lambda_1$ and $\lambda_2$ given in \eqref{eq:caseI:lambda1} and \eqref{eq:caseI:lambda2} in Case I, but their values could differ since the unknown number $U$ may be different in Case I and Case III. Matching at the point $L$, we deduce that \begin{eqnarray} \tilde \lambda_5 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_1 L} \left(1-\frac{r_1}{r_2}\right)^{-1} B_2, \label{eq:caseIII:tlamb1}\\ \tilde \lambda_6 &=& \frac{\sigma^2}{2 \alpha^2} e^{-r_2 L} \left(1-\frac{r_2}{r_1}\right)^{-1} (-J_2). \label{eq:caseIII:tlamb2} \end{eqnarray} Matching $Y$ at the point 0 to order one yields \begin{eqnarray*} Y'(0+) \; = \; \frac{\mathcal{C}_+}{\alpha}+ \lambda_5 r_1 + \lambda_6 r_2 & = & Y'(0-) \; = \; -\frac{\mathcal{C}_-}{\alpha}+ \tilde \lambda_5 r_1 + \tilde \lambda_6 r_2 , \\ Y(0+) \; = \; \frac{-b \mathcal{C}_+}{\alpha^2} + \lambda_5 + \lambda_6 & = & Y(0-) \; = \; \frac{b \mathcal{C}_-}{\alpha^2} + \tilde \lambda_5 + \tilde \lambda_6. \end{eqnarray*} Upon substituting the equations \eqref{eq:caseIII:lambda1}~--~\eqref{eq:caseIII:tlamb2} for $\lambda_5$, $\lambda_6$, $\tilde \lambda_5$, $\tilde \lambda_6$ and simplifying the resulting expressions, we conclude that $L$ and $U$ satisfy the two equations given in \eqref{eq:xhxf2} and \eqref{eq:xhxf2c}.
Next, we provide conditions under which \eqref{eq:xhxf2} and \eqref{eq:xhxf2c} have a unique solution $L$ and $U$ with $L \le 0 \le U$. To this end, we define for $y>0$, \begin{eqnarray} \tilde h(y)=J_1 \left(\frac{B_3 -B_2 y}{B_1} \right)^{\frac{r_2}{r_1}} + J_2 y^{\frac{r_2}{r_1}} -J_3. \label{eq:func-htilde} \end{eqnarray} One checks that the existence of a unique pair $L$ and $U$ with $L \le 0 \le U$ solving \eqref{eq:xhxf2} and \eqref{eq:xhxf2c} is equivalent to $\tilde h (y) = 0$ having a unique solution on the interval $\Big[\max\{1, \frac{B_3 -B_1}{B_2} \}, \frac{B_3}{B_2} \Big)$. This is true if and only if \begin{eqnarray} \label{eq:tildeh-ineq} \tilde h \Big(\max\{1, \frac{B_3 -B_1}{B_2} \} \Big) \le 0, \end{eqnarray} after noting that $\lim_{y \rightarrow \frac{B_3}{B_2}-} \tilde h (y) = \infty$ and that $\tilde h$ is strictly convex. Simplifying \eqref{eq:tildeh-ineq}, we arrive at the following two conditions: \begin{eqnarray*} (i) & B_3 > B_2, \quad B_3 -B_1 -B_2 <0, \quad \mbox{and} \quad \left(\frac{B_3 -B_2}{B_1} \right)^{\frac{r_2}{r_1}} \le \frac{J_3 -J_2}{J_1}, \\ (ii) & B_3 > B_2, \quad B_3 -B_1 -B_2 \ge 0, \quad \mbox{and} \quad \left(\frac{B_3 -B_1}{B_2} \right)^{\frac{r_2}{r_1}} \le \frac{J_3 -J_1}{J_2}. \end{eqnarray*} These conditions cover the complement of the union of Conditions (1a)--(1c) and Conditions (2a)--(2b) in Theorem~\ref{THM:CASE1}.
Finally, we verify that the candidate value function $Y$ satisfies the required first-order properties in \eqref{eq:derofY} by establishing the convexity of the function $Y$. Note that for $x \ge U$, one readily confirms that $$ Y''(x) \;\; = \;\; \left(\mathcal{D}_p- \frac{\mathcal{C}_+}{\alpha}\right) t_2 e^{t_2 (x-U)}> 0, $$ since $\mathcal{D}_p < \mathcal{C}_+ / \alpha$ and $t_2<0$ in \eqref{eq:t2}. Similarly, when $x \le L, $ we have $$ Y''(x) \;\; = \;\; \left(\frac{\mathcal{C}_-}{\alpha}-\mathcal{I}_p\right) s_1 e^{s_1 (x-L)}>0, $$ due to the facts that $\mathcal{I}_p< \mathcal{C}_- / \alpha$ and $s_1>0$ in \eqref{eq:s1}. Turning to the case $0<x<U$, one can readily verify that $$ Y''(x) \;\; = \;\; \lambda_5 r_1^2 e^{r_1x} +\lambda_6 r_2^2 e^{r_2x}. $$ Upon substituting \eqref{eq:caseIII:lambda1} and \eqref{eq:caseIII:lambda2} for $\lambda_5$ and $\lambda_6$, we obtain for $0<x<U$ \begin{eqnarray*} Y''(x) &=& \left(e^{r_1(x-U)} \frac{r_1^2r_2}{r_1-r_2} B_1 + e^{r_2(x-U)} \frac{r_2^2r_1}{r_1-r_2} J_1 \right) \frac{\sigma^2}{2 \alpha^2} \\ &\ge& \left(\frac{r_1^2r_2}{r_1-r_2} B_1 + \frac{r_2^2r_1}{r_1-r_2} J_1 \right) \frac{\sigma^2}{2 \alpha^2}\\ &=& \left( \mathcal{D}_p - \frac{\mathcal{C}_+} {\alpha}\right) t_2 \;\; > \;\; 0. \end{eqnarray*} Similarly, for $L<x \le 0$, we have \begin{eqnarray*} Y''(x) \;\; = \;\; \tilde \lambda_5 r_1^2 e^{r_1x} + \tilde \lambda_6 r_2^2 e^{r_2x} \;\;\ge\;\; \left(\frac{\mathcal{C}_-}{\alpha} - \mathcal{I}_p\right) s_1 \;\; > \;\; 0. \end{eqnarray*} Hence, the convexity of the candidate value function $Y$ has been established.
\subsection{Proof of Theorem~\ref{THM:CASE1} and Corollary~\ref{THM:SPECIAL}: Step 3}\label{sec:proofs:case1:step3}
The final step of our proof of Theorem~\ref{THM:CASE1} consists of verifying that the proposed two-threshold dynamic control policy is optimal and that $Y(x)=V(x)$ for all $x$. We take a martingale approach, where the key idea is to construct a submartingale in order to prove that the candidate value function $Y$ is a lower bound for the stochastic optimization problem \eqref{optHF:obj}~--~\eqref{optHF:st3}. To this end, first consider an admissible process $\{P(t): t \ge 0\}$ adapted to the filtration $\mathcal{F}_t$ generated by $\{D(s): 0 \le s \le t\}$, with $dP(t)=\theta \, dt$, where $\theta \in [\theta_\ell, \theta_u]$. Recalling $X(t)=P(t)-D(t)$, we define for $t \ge 0$ \begin{align*} M(t) &:= e^{-\alpha t} Y(X(t)) + \int_{0}^{t}e^{-\alpha s} (\mathcal{C}_+ X(s)^+ + \mathcal{C}_- X(s)^- + \mathcal{I}_p \theta \indi{\theta \ge 0} - \mathcal{D}_p \theta \indi{\theta<0}) ds, \end{align*} with our goal being to show that $\{M(t): t \ge 0 \}$ is a submartingale.
Since $Y$ is twice continuously differentiable with the exception of at most three points, we can apply It\^o's formula to $e^{-\alpha t} Y(X(t))$ and obtain for any $0 \le t_1 \le t_2$ \begin{align} \label{eq:submat} M(t_2)-M(t_1) = & \int_{t_1}^{t_2} e^{-\alpha s} \Big(-\alpha Y(X(s)) + \frac{1}{2} \sigma^2 Y''(X(s)) + (\theta-b) Y'(X(s))
+ \mathcal{C}_+ X(s)^+ + \mathcal{C}_- X(s)^- \nonumber \\ & \qquad + \; \mathcal{I}_p \theta \indi{\theta \ge 0} - \mathcal{D}_p \theta \indi{\theta<0} \Big) ds -\int_{t_1}^{t_2} e^{-\alpha s}Y'(X(s)) \sigma dW(s). \end{align} We have established in Section~\ref{sec:proofs:case1:step2} that $Y$ satisfies the Bellman equation $$ -\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) + \mathcal{C}_+ x^+ + \mathcal{C}_- x^{-} + \inf_{ \theta_\ell \le \theta \le \theta_u} \mathcal{L}(\theta, x) \; = \; 0, $$
where \begin{equation*} \mathcal{L}(\theta, x) \; = \; \left\{\begin{matrix} (Y'(x)+\mathcal{I}_p) \theta \qquad \text{if} \quad \theta \ge 0 ,\\ (Y'(x)-\mathcal{D}_p) \theta \qquad \text{if} \quad \theta<0 . \end{matrix}\right. \end{equation*}
This implies that, for any given $x$ and any $\theta\in [\theta_\ell, \theta_u]$, \begin{eqnarray*} {-\alpha Y(x) + \frac{1}{2} \sigma^2 Y''(x) -b Y'(x) + \mathcal{C}_+ x^+ + \mathcal{C}_- x^{-}+\mathcal{L}(\theta, x) } \ge 0. \end{eqnarray*} Since $Y'(\cdot)$ is bounded, upon taking the conditional expectation in \eqref{eq:submat} with respect to the filtration $\mathcal{F}_{t_1}$, we deduce for any $t_1 \le t_2$ that
\[\mathbb{E}_x [M(t_2)| \mathcal{F}_{t_1}]-M(t_1) \ge 0.\] That is, $\{M(t): t \ge 0\}$ is a submartingale, and therefore \begin{eqnarray*} \mathbb{E}_x [M(t)] \ge M(0)=Y(x), \qquad \text{for any $t \ge 0$.} \end{eqnarray*} Letting $t$ go to infinity, we conclude that $Y$ is a lower bound for the optimal value of the stochastic optimization problem \eqref{optHF:obj}~--~\eqref{optHF:st3}. On the other hand, under the proposed two-threshold policy the infimum in the Bellman equation is attained at every state, so the above inequalities hold with equality and the lower bound is achieved. Thus $Y(x)=V(x)$ for all $x$, and the dynamic control policy characterized by the two threshold values $L$ and $U$ is indeed optimal. Our proof of Theorem~\ref{THM:CASE1} and Corollary~\ref{THM:SPECIAL} is complete.
\subsection{Proof of Theorem~\ref{THM:CASE2}}\label{sec:proofs:main:case2}
The results follow along similar lines to the rigorous proof of Theorem~\ref{THM:CASE1}. We describe some of the key aspects herein.
The Bellman equation \eqref{eq:bellmaneq} still applies and the parameter settings (i.e., ${\mathcal{D}_p \ge \mathcal{C}_+ / \alpha}$ and ${\mathcal{I}_p < \mathcal{C}_- / \alpha}$) indicate that we need to construct a function $Y(x)$ and find $L$ such that \begin{equation} \label{eq:derofY_case_2} Y'(x)= \left\{\begin{array}{ll} > -\mathcal{I}_p , & \qquad \text{if} \quad x> L ,\\
\le -\mathcal{I}_p , & \qquad \text{if} \quad x \le L . \end{array}\right. \end{equation}
We require $Y(x)=O(|x|)$ as $|x|\rightarrow \infty$, and $Y$ meets smoothly at the points $L$ and $0$ to order one. We consider the two cases $L<0$ and $L\ge 0$ separately.
When $L<0$, solving the Bellman equation for different values of $x$ as in Section~\ref{sec:proofs:case1:step2}, we derive \begin{equation*} Y(x) \;\; = \;\; \left\{\begin{array}{ll}
\frac{\mathcal{C}_+}{\alpha} x + \frac{-b \mathcal{C}_+}{\alpha^2}+
l_7 e^{r_2 x} & \qquad \text{if} \quad x \ge 0 ,\\
- \frac{\mathcal{C}_-}{\alpha} x + \frac{b \mathcal{C}_-}{\alpha^2}+
\lambda_7 e^{r_1 x} + \lambda_8 e^{r_2 x} & \qquad \text{if} \quad L<x<0 ,\\ -\frac{\mathcal{C}_-}{\alpha} x + \frac{1}{\alpha} \left(-\frac{\mathcal{C}_-}{\alpha}(\theta_u-b) +\mathcal{I}_p \theta_u\right)+ l_8 e^{s_1 x} & \qquad \text{if} \quad x \le L , \end{array}\right. \end{equation*} where $l_7, l_8, \lambda_7, \lambda_8$ are four unknown constants. By matching the value and the first-order derivative of the function $Y$ at the points 0 and $L$, and simplifying the resulting expression, we obtain \[B_2 e^{-r_1 L} \; = \; B_3,\] which yields \eqref{eq:thm2Lneg} for the case $L<0$. This also implies that $L<0$ if and only if $B_3>B_2$. Moreover, direct calculations establish the convexity of the function $Y$, which guarantees \eqref{eq:derofY_case_2}.
When $L \ge 0$, we can proceed in a similar fashion to solve the Bellman equation and find \begin{eqnarray*}\label{eq:dxK} (r_2 -s_1)( \alpha \mathcal{I}_p+ {\mathcal{C}_+})+ (\mathcal{C}_+ + \mathcal{C}_- ) s_1 e^{s_2 L}=0. \end{eqnarray*} Thus we obtain \eqref{eq:thm2Lpos} for the case $L \ge 0$. In addition, $L$ is nonnegative if and only if $$(\alpha \mathcal{I}_p+ \mathcal{C}_+) (s_1-r_2 ) \; \le \; (\mathcal{C}_+ +\mathcal{C}_-) s_1,$$ which is equivalent to $B_2 \ge B_3$. In addition, the convexity of the function $Y$ can be readily verified.
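Both branches determine $L$ in closed form, corresponding to \eqref{eq:thm2Lneg} and \eqref{eq:thm2Lpos}. The following minimal sketch makes the dichotomy explicit; all parameter values are hypothetical placeholders rather than quantities from a calibrated model.
\begin{verbatim}
# Closed-form threshold L in the two branches of the proof:
#   L < 0:  B2 * exp(-r1 * L) = B3,
#   L >= 0: exp(s2 * L) = (s1 - r2)(alpha*Ip + Cp) / ((Cp + Cm) * s1).
# All numbers are illustrative placeholders.
import math

B2, B3 = 1.0, 1.4
r1, r2, s1, s2 = 1.0, -0.8, 0.9, -0.7
alpha, Ip, Cp, Cm = 0.5, 0.2, 1.0, 1.2

if B3 > B2:                       # case L < 0
    L = -math.log(B3 / B2) / r1
else:                             # case L >= 0
    L = math.log((s1 - r2) * (alpha * Ip + Cp) / ((Cp + Cm) * s1)) / s2
print("L =", L)
\end{verbatim}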
Finally, a repetition of the arguments in Section~\ref{sec:proofs:case1:step3} guarantees that the function $Y$ we construct is indeed the optimal value function. This completes the proof.
\subsection{Proof of Theorem~\ref{THM:CASE3}}\label{sec:proofs:main:case3}
The results follow along similar lines to the rigorous proof of Theorem~\ref{THM:CASE1}. We describe some of the key aspects herein.
Theorem~\ref{THM:CASE3} is the mirror-image counterpart of Theorem~\ref{THM:CASE2}. Our task is to construct a function $Y(x)$ that satisfies the Bellman equation \eqref{eq:bellmaneq} together with a number $U$ such that \begin{equation*} \label{eq:derofY_case3} Y'(x)= \left\{\begin{array}{ll}
\ge \mathcal{D}_p , & \qquad \text{if} \quad x \ge U ,\\ < \mathcal{D}_p , & \qquad \text{if} \quad x <U . \end{array}\right. \end{equation*}
We require $Y(x)=O(|x|)$ as $|x|\rightarrow \infty$, and $Y$ meets smoothly at the points $0$ and $U$ to order one. We again consider two cases, $U \ge 0$ and $U<0$.
When $U \ge 0$, solving the Bellman equation for different values of $x$ as in Section~\ref{sec:proofs:case1:step2}, we derive \begin{equation*} Y(x) \;\;=\;\; \left\{\begin{array}{ll}\frac{\mathcal{C}_+}{\alpha} x + \frac{1}{\alpha} \left(\frac{\mathcal{C}_+}{\alpha}(\theta_\ell-b) - \mathcal{D}_p \theta_\ell\right)+ l_9 e^{t_2x}, & \qquad \text{if} \quad x \ge U ,\\\frac{\mathcal{C}_+}{\alpha} x + \frac{-b \mathcal{C}_+}{\alpha^2}+
\lambda_9 e^{r_1 x} + \lambda_{10} e^{r_2 x}, & \qquad \text{if} \quad U> x \ge 0 , \\- \frac{\mathcal{C}_-}{\alpha} x + \frac{b \mathcal{C}_-}{\alpha^2}+
l_{10} e^{r_1 x}, & \qquad \text{if} \quad x<0 , \end{array}\right. \end{equation*} where $l_9, l_{10}, \lambda_9,\lambda_{10}$ are four constants to be determined. By matching the value and derivative of the function $Y$ at the points 0 and $U$, we get four equations and further obtain \[J_1 e^{-r_2 U} \; = \; J_3.\] This implies that $U \ge 0$ if and only if $J_3 \ge J_1$. Hence we have derived \eqref{eq:thm3U} for $U \ge 0$.
When $U<0$, proceeding in a similar fashion, we deduce $$ (r_1 -t_2)(\alpha \mathcal{D}_p+\mathcal{C}_-)+ (\mathcal{C}_+ + \mathcal{C}_- )t_2 e^{t_1 U} \; = \; 0, $$
from which we obtain \[ e^{t_1 U} \; = \; \frac{(\alpha \mathcal{D}_p + \mathcal{C}_-)(r_1-t_2)}{(\mathcal{C}_+ + \mathcal{C}_-) (-t_2)}, \] thus rendering \eqref{eq:thm3U} for $U < 0$.
The convexity of the candidate value function $Y(x)$ in both cases can be directly verified and the function $Y$ can be shown to be the optimal value function as in the proof of Section~\ref{sec:proofs:case1:step3}. Therefore we have completed the proof of Theorem~\ref{THM:CASE3}.
\subsection{Proof of Theorem~\ref{THM:CASE4}}\label{sec:proofs:main:case4}
The results follow along similar lines to the rigorous proof of Theorem~\ref{THM:CASE1}. Some of the key aspects are described herein.
We construct a function $Y(x)$ with $-\mathcal{I}_p < Y'(x) \; < \; \mathcal{D}_p$ for any $x$. The Bellman equation \eqref{eq:bellmaneq} together with the linear growth of $Y$ then implies \begin{equation*} Y(x) \;\;=\;\; \left\{\begin{array}{ll} \frac{\mathcal{C}_+}{\alpha} x + \frac{- b \mathcal{C}_+}{\alpha^2}+
l_{11} e^{r_2 x}
& \qquad \text{if} \quad x \ge 0 , \\ \frac{-\mathcal{C}_-}{\alpha} x + \frac{b \mathcal{C}_-}{\alpha^2}+
l_{12} e^{r_1 x}, & \qquad \text{if} \quad x<0 . \end{array}\right. \end{equation*} Matching the value and the first-order derivative of the function $Y$ at the point 0, we obtain \begin{eqnarray*} l_{11} \;=\; \frac{\sigma^2 (\mathcal{C}_+ +\mathcal{C}_-)}{2\alpha^2} \frac{r_1^2}{(r_1 -r_2)} >0 & \qquad \mbox{and} \qquad & l_{12} \;=\; \frac{\sigma^2 (\mathcal{C}_+ +\mathcal{C}_-)}{2\alpha^2} \frac{r_2^2}{(r_1 -r_2)}>0. \end{eqnarray*} It is clear that $Y$ is convex and that $-\mathcal{I}_p \; \le \; -\frac{\mathcal{C}_-}{\alpha}< Y'(x) \; < \; \frac{\mathcal{C}_+}{\alpha} \; \le \; \mathcal{D}_p$ for any $x$. Therefore, the proof is complete after applying the argument in~Section~\ref{sec:proofs:case1:step3}.
\section{Proofs of Results in Section~\ref{sec:main2}} \label{sec:proofs2}
In this appendix we provide proofs of our main results in Section~\ref{sec:main2}. We start with Theorem~\ref{THM:CASE1:Multiple}, leveraging the proofs of Appendix~\ref{sec:proofs}. Then we present the proof of Theorem~\ref{thm:contract}.
\subsection{Proof of Theorem~\ref{THM:CASE1:Multiple}}
The results follow from the arguments establishing Theorem~\ref{THM:CASE1} applied to the aggregate primary resource capacity $P$ according to the given contract vector $w$.
\subsection{Proof of Theorem~\ref{thm:contract}}
We first prove that the optimal threshold values $L(w)$ and $U(w)$, as functions of $w$, are continuous. We start with the first case in Theorem~\ref{THM:CASE1:Multiple} where $U(w) \ge L(w) \ge 0$. Since $U(w)$ and $L(w)$ are uniquely determined by \eqref{eq:xhminusxf_w} and \eqref{eq:xH_w}, it suffices to show that the solutions to these two equations depend continuously on $w$. We first derive the continuity of $L(w)-U(w)$ from \eqref{eq:xhminusxf_w} by considering the following optimization problem parameterized by $w\in \Omega$: \begin{eqnarray}\label{eq:func-hw}
\min_{y} |h(y, w)|=|B_1(w) y + J_1(w) y^{\frac{r_2}{r_1}}+ A(w)| \quad \text{subject to $y \in [0,1]$}, \end{eqnarray} where we recall \begin{eqnarray*} B_1(w) &:=& (\mathcal{C}_+(w)-\alpha \mathcal{D}_p(w)) (\tilde t_2 -r_2), \qquad J_1(w):= (\mathcal{C}_+(w)-\alpha \mathcal{D}_p(w)) (r_1 - \tilde t_2), \\ A(w) &:= &(\mathcal{C}_+(w) + \alpha \mathcal{I}_p(w)) (r_2-r_1). \end{eqnarray*}
For fixed $w$, one readily verifies that $h(1, w) \le 0$, similarly to \eqref{eq:h1}. In conjunction with the facts that $h(y, w) \to \infty$ as $y \to 0+$ and that $h$ is strictly convex in $y$, we deduce that the optimization problem \eqref{eq:func-hw} has a unique minimizer $y^*(w)$ with $|h(y^*(w), w)| = 0$.
We next argue that $y^*(w)$ is continuous at $w$. We apply Proposition 4.4 of Bonnans and Shapiro~\cite{bonnans2000perturbation} and check the four conditions there. Note that the feasible set in the optimization problem \eqref{eq:func-hw} is the fixed closed set $[0,1]$, independent of $w$. It readily follows that conditions (ii) and (iv) in Proposition 4.4 of \cite{bonnans2000perturbation} hold. Moreover, it is clear that the function $|h(y, w)|$ is jointly continuous in its two arguments, so condition (i) holds. Finally, the level set $\{y \in[0,1]: |h(y, w)| \le c\}$ is nonempty for each $c \ge 0$ and it is contained in the compact set $[0,1]$, so condition (iii) also holds.
Thus we deduce from Proposition 4.4 of \cite{bonnans2000perturbation} that the optimal solution $y^*(w)$ is continuous with respect to $w$. This implies the solution $L(w)-U(w)$ to \eqref{eq:xhminusxf_w} is continuous in $w$. Using this result, one infers from \eqref{eq:xH_w} that $L(w)$ also depends continuously on $w$, thus completing the proof of continuity of $L(w)$ and $U(w)$ in the first case of Theorem~\ref{THM:CASE1:Multiple}. For the second and third case of Theorem~\ref{THM:CASE1:Multiple}, we can apply similar arguments and conclude that $L(w)$ and $U(w)$ are both continuous at each $w \in \Omega$.
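Although the argument above is analytic, the continuity of the minimizer is easy to observe numerically. In the sketch below, the dependence of the coefficients on $w$ is a hypothetical placeholder, chosen only so that a root of $h(\cdot, w)$ is bracketed for every $w$ in the range; SciPy's Brent solver is assumed to be available.
\begin{verbatim}
# Observe the continuity of the root y*(w) of h(., w) as w varies.
# The dependence of B1, J1, A on w below is a hypothetical placeholder.
import numpy as np
from scipy.optimize import brentq

r1, r2 = 1.0, -0.8

def ystar(w):
    B1, J1, A = 2.0 + w, 1.5, -4.0 - 0.6 * w
    h = lambda y: B1 * y + J1 * y ** (r2 / r1) + A
    return brentq(h, 1e-6, 1.0)   # h -> +infinity as y -> 0+, h(1, w) < 0

for w in np.linspace(0.0, 1.0, 5):
    print(w, ystar(w))            # y*(w) varies continuously with w
\end{verbatim}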
We are now ready to prove that the value function $V_w(x)$ is continuous with respect to $w \in \Omega$. To see this, we again illustrate it using the first case of Theorem~\ref{THM:CASE1:Multiple}; the other two cases are similar. Since $L(w)$ and $U(w)$ depend continuously on $w$, one can verify from \eqref{eq:caseI:l1}--\eqref{eq:caseI:l2} that the same continuity property also applies to the constants $l_1, l_2, \lambda_1, \lambda_2, \tilde \lambda_1, \tilde \lambda_2$ (which now depend on $w$) that appeared in Section~\ref{sec:521}. Thus we conclude from the explicit form of the value function \eqref{eq:caseI:YlargerU}--\eqref{eq:caseI:YlessL} that $V_w(x)$ is a continuous function of $w$.
Finally, the continuity of $J_w(x)$ with respect to $w$ readily follows from the continuity of $V_w(x)$ and Equation \eqref{eq:J-V}. The proof is therefore complete.
\vspace*{1.0in}
\end{document}
\begin{document}
\title[Spectral enclosure and superconvergence for eigenvalues in gaps]{Spectral enclosure and superconvergence for eigenvalues in gaps} \author[J. Hinchcliffe \& M. Strauss]{James Hinchcliffe$^{\dagger}$ \& Michael Strauss$^{\ddagger}$} \begin{abstract} We consider the problem of how to compute eigenvalues of a self-adjoint operator when a direct application of the Galerkin (finite-section) method is unreliable. The last two decades have seen the development of the so-called quadratic methods for addressing this problem. Recently a new perturbation approach has emerged, the idea being to perturb eigenvalues off the real line and, consequently, away from regions where the Galerkin method fails. We propose a simplified perturbation method which requires no \emph{a priori} information and for which we provide a rigorous convergence analysis. The latter shows that, in general, our approach will significantly outperform the quadratic methods. We also present a new spectral enclosure for operators of the form $A+iB$ where $A$ is self-adjoint and $B$ is self-adjoint and bounded. This enables us to control, very precisely, how eigenvalues are perturbed from the real line. The main results are demonstrated with examples including magnetohydrodynamics, Schr\"odinger and Dirac operators.
\noindent\emph{Keywords:} Spectral enclosure, eigenvalue problem, perturbation of eigenvalues, spectral pollution, Galerkin method, finite-section method, superconvergence.
\noindent\emph{2010 Mathematics Subject Classification:} 47A10, 47A55, 47A58, 47A75.
\noindent$\dagger$ Email: [email protected].
\noindent$\ddagger$ Department of Mathematics, University of Sussex, Falmer Campus, Brighton BN1 9QH, UK. Email: [email protected]. \end{abstract}
\maketitle \thispagestyle{empty}
\section{Introduction}
Computational spectral theory for operators which act on infinite dimensional Hilbert spaces has advanced significantly in recent years. For self-adjoint operators, the introduction of \emph{quadratic} methods has enabled the approximation of those eigenvalues which are not reliably located by a direct application of the Galerkin method. This failure of the Galerkin method is due to \emph{spectral pollution}; see Examples \ref{uber2} and \ref{magneto} and \cite{boff,boff2,bost,dasu,DP,lesh,rapp,me4}. Notable amongst these quadratic techniques are the Davies \& Plum method \cite{DP}, the Zimmermann \& Mertins method \cite{zim}, and the second order relative spectra \cite{bo,bb,bole,bost,dav,lesh,shar,shar2,me2,me3}. For spectral approximation of arbitrary operators see, for example, \cite{chat,ah1,ah2,ah3} and references therein.
The present manuscript is concerned with a technique for self-adjoint operators which is pollution-free and non-quadratic. The idea is to perturb eigenvalues into $\mathbb{C}^+$ and then approximate them with the Galerkin method. This idea was initially proposed for a particular class of differential operators; see \cite{mar1,mar3,mar2}. An abstract version of this approach for bounded self-adjoint operators was formulated in \cite{me}. The latter requires \emph{a priori} information about the location of gaps in the essential spectrum. Our main aims are to remove the requirement of \emph{a priori} information, to present a rigorous convergence analysis, and to demonstrate the effectiveness of our method, including a comparison with the quadratic methods. Along the way, we also prove new spectral enclosure results for any operator of the form $A+iB$ where $A$ is self-adjoint and $B$ is bounded and self-adjoint; we note that any bounded operator can be expressed in this form. We now give a brief outline of our main results.
In Section 3, we consider the spectra of operators of the form $A+iB$. The main result is Theorem \ref{cor1a} where we give our new spectral enclosure results. We will define a region in terms of the spectra of $A$ and $B$, then show that it contains the spectrum of $A+iB$. Corollary \ref{2eigs} shows that the enclosure is, in a sense, sharp.
Section 4 is primarily concerned with the perturbation of an eigenvalue, $\lambda$, of a self-adjoint operator, $A$. We consider $A + iP_n$ where $(P_n)_{n\in\mathbb{N}}$ is a sequence of orthogonal projections. The main results are Theorem \ref{est} and Theorem \ref{QQ} where we prove extremely rapid convergence properties of the eigenspaces and eigenvalues associated to the perturbed eigenvalue.
In Section 5, we present our new perturbation method. The idea is based on applying the Galerkin method to $A+iP_n$ for a fixed $n\in\mathbb{N}$. The preceding results enable us to lift an eigenvalue, $\lambda$, off the real line, away from the essential spectrum, and extremely close to $\lambda+i$ where it can be approximated by a direct application of the Galerkin method. The main results are Theorem \ref{limlem1} and Theorem \ref{eigconv}, where we prove the rapid convergence of Galerkin eigenspaces and eigenvalues.
In Section 6, we apply our method to several operators arising in magnetohydrodynamics, and in non-relativistic and relativistic quantum mechanics. Most of our examples involve calculations using trial spaces belonging to the form domain and not the operator domain. In particular, we use the FEM spaces of piecewise linear trial functions. However, the quadratic methods require trial spaces from the operator domain. In our last example we use the operator domain, which allows a comparison with the quadratic methods.
Let us now fix some notation. Unless stated otherwise, $A$ will denote a semi-bounded (from below) self-adjoint operator acting on a Hilbert space $\mathcal{H}$. The quadratic form, spectrum, resolvent set, discrete spectrum, essential spectrum and spectral measure we denote by $\frak{a}$, $\sigma(A)$, $\rho(A)$, $\sigma_{\mathrm{dis}}(A)$, $\sigma_{\mathrm{ess}}(A)$ and $E$, respectively. For $\Delta\subset\mathbb{R}$ we denote the range of $E(\Delta)$ by $\mathcal{L}(\Delta)$. Associated to the form $\frak{a}$ is the Hilbert space $\mathcal{H}_\frak{a}$ which has inner-product \[ \langle u,v\rangle_{\frak{a}}:=\frak{a}(u,v) - (m-1)\langle u,v\rangle\quad\forall u,v\in{\rm Dom}(\frak{a})\quad\textrm{where}\quad m=\min\sigma(A) \] and norm \begin{equation}\label{anorm} \Vert u\Vert_{\frak{a}} =\big(\frak{a}(u,u) - (m-1)\langle u,u\rangle\big)^{\frac{1}{2}}=\Vert(A-m+1)^{\frac{1}{2}}u\Vert. \end{equation} The gap or distance between two subspaces $\mathcal{M}$ and $\mathcal{N}$ of $\mathcal{H}$ is defined as \[ \hat{\delta}(\mathcal{M},\mathcal{N}) = \max\big[\delta(\mathcal{M},\mathcal{N}),\delta(\mathcal{N},\mathcal{M})\big] \textrm{~where~} \delta(\mathcal{M},\mathcal{N}) = \sup_{u\in\mathcal{M},\Vert u\Vert=1}{\rm dist}(u,\mathcal{N}); \] see \cite[Section IV.2.1]{katopert} for further details. We shall write $\delta_{\frak{a}}$ to indicate the distance between subspaces taken with respect to the norm \eqref{anorm}.
\section{Auxiliary geometric results}
Throughout this section, we assume that $\alpha,\beta,\gamma,\delta\in\mathbb{R}$ with $-\infty<\alpha<\beta<\infty$ and $-\infty<\gamma<\delta<\infty$.
\begin{definition}\label{gamdef} The functions $f,g:[0,1]\to\mathbb{C}$ and the region $\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}$, we define as: \begin{enumerate} \item if $\beta-\alpha\le \delta-\gamma$ \begin{align*} {\rm Re}\; f(t) &= {\rm Re}\; g(t) = \alpha(1-t )+ \beta t,\\ {\rm Im}\; f(t) &= \frac{\gamma+\delta}{2} - \sqrt{\left(\frac{\delta-\gamma}{2}\right)^2+\big({\rm Re}\; f(t) - \alpha\big)\big({\rm Re}\; f(t) - \beta\big)},\\ {\rm Im}\; g(t) &= \frac{\gamma+\delta}{2} + \sqrt{\left(\frac{\delta-\gamma}{2}\right)^2+\big({\rm Re}\; g(t) - \alpha\big)\big({\rm Re}\; g(t) - \beta\big)},\\ \mathcal{U}_{\alpha,\beta}^{\gamma,\delta}&:=\Bigg\{z\in\mathbb{C}:~\alpha<{\rm Re}\; z<\beta,~\text{with either}~\\ &\gamma\le{\rm Im}\; z<{\rm Im}\; f\left(\frac{{\rm Re}\; z-\alpha}{\beta-\alpha}\right) ~\textrm{or}~{\rm Im}\; g\left(\frac{{\rm Re}\; z-\alpha}{\beta-\alpha}\right)<{\rm Im}\; z\le \delta\Bigg\},
\end{align*} \item if $\beta-\alpha>\delta-\gamma$ \begin{align*} {\rm Im}\; f(t) &= {\rm Im}\; g(t) = (1-t)\gamma + t\delta,\\ {\rm Re}\; f(t)&= \frac{\alpha+\beta}{2}-\sqrt{\left(\frac{\beta-\alpha}{2}\right)^2 + \big({\rm Im}\; f(t) - \gamma\big)\big({\rm Im}\; f(t) - \delta\big)},\\ {\rm Re}\; g(t)&= \frac{\alpha+\beta}{2}+\sqrt{\left(\frac{\beta-\alpha}{2}\right)^2 + \big({\rm Im}\; g(t) - \gamma\big)\big({\rm Im}\; g(t) - \delta\big)},\\ \mathcal{U}_{\alpha,\beta}^{\gamma,\delta}&:=\Bigg\{z\in\mathbb{C}:~\gamma\le{\rm Im}\; z\le \delta~\text{and}\\ &\hspace{75pt}{\rm Re}\; f\left(\frac{{\rm Im}\; z-\gamma}{\delta-\gamma}\right)<{\rm Re}\; z<{\rm Re}\; g\left(\frac{{\rm Im}\; z-\gamma}{\delta-\gamma}\right)\Bigg\}. \end{align*} \end{enumerate} We also define \[ \Gamma_{\alpha,\beta}^{\gamma,\delta}:=\big\{z\in\mathbb{C}:\exists t\in[0,1]\textrm{ with }z=f(t)\text{ or }z=g(t)\big\}. \] \end{definition}
The curves and regions defined in Definition \ref{gamdef} are illustrated in Figure 1.
\begin{figure}
\caption{The figures show the curves $f$ and $g$ (which together form $\Gamma_{\alpha,\beta}^{\gamma,\delta}$). The shaded regions are $\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}$. Clockwise from top left: $\alpha=0$, $\beta = 1$, $\gamma=-1$, $\delta=1$; $\alpha=0$, $\beta = 1$, $\gamma=-5/9$, $\delta=5/9$; $\alpha=0$, $\beta = 1$, $\gamma=-1/2$, $\delta=1/2$; $\alpha=0$, $\beta = 1$, $\gamma=-1/4$, $\delta=1/4$.}
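The boundary curves are elementary to evaluate. As a minimal sketch (Python/NumPy assumed available; the parameter values are those of the top-left panel of Figure 1, and the case $\beta-\alpha\le\delta-\gamma$ is shown, the other case being analogous):
\begin{verbatim}
# Evaluate the curves f and g (whose union is the boundary curve Gamma)
# in the case beta - alpha <= delta - gamma.
import numpy as np

alpha, beta, gamma, delta = 0.0, 1.0, -1.0, 1.0
t = np.linspace(0.0, 1.0, 201)

re = alpha * (1 - t) + beta * t                  # Re f = Re g
disc = ((delta - gamma) / 2) ** 2 + (re - alpha) * (re - beta)
f = re + 1j * ((gamma + delta) / 2 - np.sqrt(disc))   # lower arc
g = re + 1j * ((gamma + delta) / 2 + np.sqrt(disc))   # upper arc
\end{verbatim}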
\end{figure} The assertions of the following two lemmata are immediate consequences of the above definition.
\begin{lemma}\label{gamprops} If $\beta-\alpha\le \delta-\gamma$, then \begin{align*} \gamma\le{\rm Im}\; f(t) &\le \frac{\gamma+\delta}{2} - \sqrt{\left(\frac{\delta-\gamma}{2}\right)^2-\left(\frac{\beta-\alpha}{2}\right)^2}\le\frac{\gamma+\delta}{2}\quad\textrm{and}\\ \delta\ge{\rm Im}\; g(t) &\ge \frac{\gamma+\delta}{2} + \sqrt{\left(\frac{\delta-\gamma}{2}\right)^2-\left(\frac{\beta-\alpha}{2}\right)^2}\ge\frac{\gamma+\delta}{2}\quad\forall t\in[0,1]. \end{align*} If $\beta-\alpha> \delta-\gamma$, then \begin{align*} \alpha\le{\rm Re}\; f(t) &\le \frac{\alpha+\beta}{2}-\sqrt{\left(\frac{\beta-\alpha}{2}\right)^2-\left(\frac{\delta-\gamma}{2}\right)^2}<\frac{\alpha+\beta}{2}\quad\textrm{and}\\ \beta\ge{\rm Re}\; g(t) &\ge \frac{\alpha+\beta}{2}+\sqrt{\left(\frac{\beta-\alpha}{2}\right)^2-\left(\frac{\delta-\gamma}{2}\right)^2}>\frac{\alpha+\beta}{2}\quad\forall t\in[0,1]. \end{align*} \end{lemma}
\begin{lemma}\label{olderlem} If $\alpha<{\rm Re}\; z\le\beta$, $\gamma\le{\rm Im}\; z\le \delta$ and $z\notin\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}\cup\Gamma_{\alpha,\beta}^{\gamma,\delta}$, then \[ {\rm Re}\; z - \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha} >\beta. \] If $z\in\Gamma_{\alpha,\beta}^{\gamma,\delta}$ and ${\rm Re}\; z\ne \alpha$, then \[ {\rm Re}\; z - \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha} = \beta. \] If $z\in\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}$, then \[ {\rm Re}\; z - \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha} < \beta. \] \end{lemma}
\begin{lemma}\label{oldlem} If $z\in\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}$, then \[ \beta - {\rm Re}\; z\ge\frac{{\rm dist}(z,\Gamma_{\alpha,\beta}^{\gamma,\delta})^2}{{\rm Re}\; z-\alpha}- \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha}. \] \end{lemma} \begin{proof} Let $z\in\mathcal{U}_{\alpha,\beta}^{\gamma,\delta}$. First, consider the case where $\beta-\alpha\le \delta-\gamma$. By Lemma \ref{gamprops}, we have ${\rm Im}\; z\ne (\gamma+\delta)/2$. We assume that ${\rm Im}\; z>(\gamma+\delta)/2$; the case where ${\rm Im}\; z<(\gamma+\delta)/2$ may be treated similarly. For some $r\ge{\rm dist}(z,\Gamma_{\alpha,\beta}^{\gamma,\delta})>0$ and $t\in[0,1]$, we have \[{\rm Re}\; z + ({\rm Im}\; z-r)i=g(t). \] Then, using Lemma \ref{olderlem}, \[ {\rm Re}\; z - \frac{({\rm Im}\; z-r-\delta)({\rm Im}\; z-r-\gamma)}{{\rm Re}\; z- \alpha} = \beta \] and hence, since ${\rm Im}\; z - r={\rm Im}\; g(t)\ge(\gamma+\delta)/2$ by Lemma \ref{gamprops}, \begin{align*} \beta- {\rm Re}\; z + \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha} &= -\frac{r(\gamma+\delta+ r-2{\rm Im}\; z)}{{\rm Re}\; z-\alpha}\ge\frac{r^2}{{\rm Re}\; z-\alpha}. \end{align*} Now consider the case where $\beta -\alpha>\delta-\gamma$. Assume that ${\rm Re}\; z\ge(\alpha+\beta)/2$; the case where ${\rm Re}\; z<(\alpha+\beta)/2$ may be treated similarly. For some $r\ge{\rm dist}(z,\Gamma_{\alpha,\beta}^{\gamma,\delta})>0$ we have \begin{align*} {\rm Re}\; z + r - \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z+ r- \alpha} =\beta \end{align*} and hence \begin{align*} \beta- {\rm Re}\; z + \frac{({\rm Im}\; z-\delta)({\rm Im}\; z-\gamma)}{{\rm Re}\; z- \alpha}&= r\left(\frac{2{\rm Re}\; z-\alpha-\beta +r}{{\rm Re}\; z-\alpha}\right) \ge\frac{r^2}{{\rm Re}\; z-\alpha}. \end{align*} \end{proof}
\section{The spectra of $A+iB$} In this section we will prove enclosure results for $\sigma(A+iB)$. Unless stated otherwise, $A$ is assumed to be a bounded self-adjoint operator with \[ \min\sigma(A)=:a^-<a^+:=\max\sigma(A). \] We shall always assume that $B$ is a bounded self-adjoint operator with \[ \min\sigma(B)=:b^-<b^+:=\max\sigma(B). \]
\begin{lemma}\label{oldestlem} Let $u\in\mathcal{H}$. Then $\Vert Bu\Vert^2 \le (b^-+b^+)\langle Bu,u\rangle -b^-b^+\Vert u\Vert^2$. \end{lemma} \begin{proof} Let $F$ be the spectral measure associated to $B$. For all $\lambda\in[b^-,b^+]$ we have $(\lambda-b^-)(\lambda-b^+)\le 0$, hence \begin{align*} \Vert Bu\Vert^2 &=
\int_{b^-}^{b^+}\lambda^2~d\langle F_\lambda u,u\rangle\le \int_{b^-}^{b^+}(\lambda b^- + \lambda b^+ - b^-b^+)~d\langle F_\lambda u,u\rangle\\
&= (b^-+b^+)\langle Bu,u\rangle - b^-b^+\Vert u\Vert^2. \end{align*} \end{proof}
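The inequality of Lemma \ref{oldestlem} is easy to test numerically. The following minimal sketch (Python/NumPy assumed available; the matrix and vectors are randomly generated and purely illustrative) checks it for a random Hermitian matrix:
\begin{verbatim}
# Numerical sanity check of the inequality
#   ||B u||^2 <= (b^- + b^+) <Bu, u> - b^- b^+ ||u||^2
# for a random Hermitian matrix B and random vectors u.
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (M + M.conj().T) / 2                 # Hermitian
bm, bp = np.linalg.eigvalsh(B)[[0, -1]]  # b^- and b^+

for _ in range(1000):
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    lhs = np.linalg.norm(B @ u) ** 2
    # <Bu, u> is real since B is Hermitian
    rhs = (bm + bp) * np.vdot(u, B @ u).real - bm * bp * np.vdot(u, u).real
    assert lhs <= rhs + 1e-9
\end{verbatim}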
We will make use of the following spectral enclosure result which is due to Kato; see \cite[Lemma 1]{kat}. Let $u\in{\rm Dom}(A)$ where $A$ may be unbounded, $\Vert u\Vert=1$, $\langle Au,u\rangle=\eta$ and $\Vert(A-\eta)u\Vert=\zeta$, then \begin{align} \xi<\eta\quad\Rightarrow\quad\left(\xi,\eta+\frac{\zeta^2}{\eta-\xi}\right]\cap\sigma(A)\ne\varnothing.\label{kato1}
\end{align}
\void{\begin{remark}\label{rem} If follows, from the proof of Lemma \ref{oldestlem}, that if $\sigma(B)=\{c,d\}$, then \[ \langle Bu,u\rangle=\delta\Vert u\Vert^2\Rightarrow\Vert Bu\Vert^2 = (c\delta+d\delta-cd)\Vert u\Vert^2. \] \end{remark}}
\begin{definition}\label{rsK} For $a,b\in\mathbb{R}$, $a<b$, we set \begin{align*} r_{a,b}&=\max\left\{\frac{b-a}{2},\Vert B\Vert\right\},\\ s_{a,b}&=\Big(r_{a,b}^2+2r_{a,b}\big(4+10\Vert B\Vert\big) + 4\Vert B\Vert^2\Big)(b-a),\\ K_{a,b}&=\max\Big\{r_{a,b}^4,2r_{a,b}^3,s_{a,b}\Big\}. \end{align*} \end{definition}
\begin{theorem}\label{lem1b} Let $(a,b)\subset\rho(A)$ where $A$ may be unbounded from above and/or below. Then \[ \mathcal{U}_{a,b}^{b^-,b^+}\subset\rho(A+iB) \] and \begin{equation}\label{resbd} \Vert(A+iB-z)^{-1}\Vert\le \frac{K_{a,b}}{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4}\quad\forall z\in\mathcal{U}_{a,b}^{b^-,b^+}. \end{equation} \end{theorem} \begin{proof} Let $z\in\mathcal{U}_{a,b}^{b^-,b^+}$. First we note that $A+iB-z$ is a closed operator. Let $u\in{\rm Dom}(A)$ with $\Vert u\Vert = 1$ and $\Vert(A+iB-z)u\Vert = \varepsilon$. Assume that \begin{equation}\label{assumption} \varepsilon<\min\left\{1,\frac{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})}{2}\right\}. \end{equation} Let us show that \eqref{assumption} implies that $a<{\rm Re}\; z -\varepsilon$. We consider the case where $b-a>b^+-b^-$ (the case where $b-a\le b^+-b^-$ may be treated similarly). We have \[ {\rm Im}\; z = (1-t)b^- + tb^+\quad\text{for some }t\in[0,1]. \] Then \[ {\rm Re}\; f\left(\frac{{\rm Im}\; z-b^-}{b^+-b^-}\right) + i{\rm Im}\; z =f\left(\frac{{\rm Im}\; z-b^-}{b^+-b^-}\right)\in\Gamma_{a,b}^{b^-,b^+} \] and \begin{align*} \varepsilon<{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})&\le\left\vert z - f\left(\frac{{\rm Im}\; z-b^-}{b^+-b^-}\right)\right\vert\\ &={\rm Re}\; z - {\rm Re}\; f\left(\frac{{\rm Im}\; z-b^-}{b^+-b^-}\right)\\ &\le {\rm Re}\; z -a. \end{align*} For some $v\in\mathcal{H}$ with $\Vert v\Vert=1$ we have $(A+iB-z)u=\varepsilon v$, therefore \[ \langle(A-{\rm Re}\; z)u,u\rangle + i\langle(B-{\rm Im}\; z)u,u\rangle = \varepsilon\langle v,u\rangle \] and \begin{align} a<{\rm Re}\; z -\varepsilon\le\langle Au,u\rangle &\le {\rm Re}\; z+\varepsilon,\label{thlab0}\\ \langle Bu,u\rangle &= {\rm Im}\; z+\varepsilon{\rm Im}\; \langle v,u\rangle,\label{thlab1}\\ \Vert(A-{\rm Re}\; z)u\Vert&\le\varepsilon + \Vert(B-{\rm Im}\; z)u\Vert.\label{fixed} \end{align} Using Lemma \ref{oldestlem} and \eqref{thlab1}, we obtain \begin{align*} \Vert(B-{\rm Im}\; z)u\Vert^2 &= \Vert Bu\Vert^2 - 2{\rm Im}\; z\langle Bu,u\rangle + ({\rm Im}\; z)^2\\ &\le \langle Bu,u\rangle(b^-+b^+) - b^-b^+- 2{\rm Im}\; z\langle Bu,u\rangle + ({\rm Im}\; z)^2\\ &= -({\rm Im}\; z-b^+)({\rm Im}\; z -b^-) + \varepsilon{\rm Im}\;\langle v,u\rangle(b^++b^- - 2{\rm Im}\; z). \end{align*} Now applying \eqref{kato1} with \[ \xi=a,\quad\eta=\langle Au,u\rangle\quad\text{and}\quad\zeta=\Vert(A-\eta)u\Vert\le\Vert(A-{\rm Re}\; z)u\Vert + \vert{\rm Re}\; z-\eta\vert, \] we obtain \[ \left(a,\langle Au,u\rangle + \frac{(\Vert(A-{\rm Re}\; z)u\Vert + \vert{\rm Re}\; z-\eta\vert)^2}{\langle Au,u\rangle-a}\right]\cap\sigma(A)\ne\varnothing. \] Using \eqref{thlab0} and \eqref{fixed}, we have \[ \left(a,{\rm Re}\; z + \varepsilon + \frac{\big(2\varepsilon + \Vert(B-{\rm Im}\; z)u\Vert\big)^2}{{\rm Re}\; z-\varepsilon- a}\right]\cap\sigma(A)\ne\varnothing. \] Then, using $(a,b)\subset\rho(A)$ and $\vert{\rm Im}\; z\vert\le\Vert B\Vert$, and the assumption that $\varepsilon<1$, \begin{align*} b-{\rm Re}\; z&\le\varepsilon + \frac{4\varepsilon^2 +4\varepsilon\Vert(B-{\rm Im}\; z)u\Vert+ \Vert(B-{\rm Im}\; z)u\Vert^2}{{\rm Re}\; z-\varepsilon- a}\\ &\le\varepsilon + \frac{4\varepsilon +8\Vert B\Vert\varepsilon}{{\rm Re}\; z- a-\varepsilon} - \frac{({\rm Im}\; z-b^+)({\rm Im}\; z -b^-)}{{\rm Re}\; z-\varepsilon- a}\\ &\quad + \frac{\varepsilon\vert b^++b^- - 2{\rm Im}\; z\vert}{{\rm Re}\; z- a-\varepsilon}. 
\end{align*} Combining this estimate with Lemma \ref{oldlem}, we obtain \begin{align*} \frac{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^2}{{\rm Re}\; z-a}&\le \varepsilon + \frac{4\varepsilon +8\Vert B\Vert\varepsilon}{{\rm Re}\; z- a-\varepsilon} - \frac{\varepsilon({\rm Im}\; z-b^+)({\rm Im}\; z -b^-)}{({\rm Re}\; z - a)({\rm Re}\; z- a-\varepsilon)}\\ &\quad+ \frac{\varepsilon(b^+-b^-)}{{\rm Re}\; z- a-\varepsilon}. \end{align*} From \eqref{assumption} we deduce that \[ {\rm Re}\; z- a-\varepsilon\ge {\rm dist}(z,\Gamma_{a,b}^{b^-,b^+}) - \varepsilon >\frac{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})}{2} \] and hence \begin{align*} \frac{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^2}{{\rm Re}\; z-a}\le \left(1+2\frac{4+10\Vert B\Vert}{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})} + \frac{4\Vert B\Vert^2}{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^2}\right)\varepsilon. \end{align*} Then \begin{align*} {\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4\le \Big(r_{a,b}^2+2r_{a,b}\big(4+10\Vert B\Vert\big) + 4\Vert B\Vert^2\Big)(b-a)\varepsilon \end{align*} and therefore \begin{equation}\label{ep1} {\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4\big/s_{a,b} \le \varepsilon. \end{equation} It follows from \eqref{ep1} and assumption \eqref{assumption} that $\text{nul}(A+iB-z)=0$ and that $A+iB-z$ has closed range. Similarly, $\text{nul}(A-iB-\overline{z})=0$ and $A-iB-\overline{z}$ has closed range. Since $\text{def}(A+iB-z)=\text{nul}(A-iB-\overline{z})$, we deduce that $z\in\rho(A+iB)$. Furthermore, combining \eqref{ep1} with assumption \eqref{assumption}, we obtain \begin{align*} \Vert(A+iB-z)^{-1}\Vert &\le \frac{\max\left\{s_{a,b},{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4,2{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^3\right\}}{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4}\\ &\le \frac{K_{a,b}}{{\rm dist}(z,\Gamma_{a,b}^{b^-,b^+})^4}. \end{align*} \end{proof}
\begin{remark}\label{dimensions}Suppose that $a_1<b_1\le a_2< b_2$, \[ (a_1,b_1)\subset\rho(A),\quad(a_2,b_2)\subset\rho(A)\quad\text{and}\quad\min\{b_1-a_1,b_2-a_2\}>b^+-b^-. \] Let $f_1, g_1$ be as in Definition \ref{gamdef} with $\alpha=a_1$, $\beta=b_1$, $\gamma=b^-$ and $\delta=b^+$. Let $f_2,g_2$ be as in Definition \ref{gamdef} with $\alpha=a_2$, $\beta=b_2$, $\gamma=b^-$ and $\delta=b^+$. The curves $g_1$ and $f_2$ enclose a region (see Figure 2). It follows, from Theorem \ref{lem1b}, that the dimension of the spectral subspace associated to $\sigma(A+iB)$ and this enclosed region is the same as the dimension of the spectral subspace associated to $\sigma(A)$ and the interval $[b_1,a_2]$.
\begin{figure}
\caption{The figures show $g_1$, $f_2$ and the shaded region they enclose; on the left $a_1=0$, $b_1 = 4$, $a_2=5$, $b_2=10$, $b^-=-1$ and $b^+=2$; on the right $a_1=0$, $b_1 = 4$, $a_2=4$, $b_2=7.1$, $b^-=-1$ and $b^+=2$.}
\end{figure} \end{remark}
\begin{definition}\label{W} We denote the numerical range by $W(\cdot)$, and define \begin{align*} \mathcal{U}_A&:= \bigcup_{\efrac{a,b\in W(A)}{(a,b)\subset\rho(A)}}\mathcal{U}_{a,b}^{b^-,b^+},\\ \mathcal{X}_A&:=\big\{z\in\mathbb{C}:~{\rm Re}\; z\in \overline{W(A)}\hspace{5pt}\textrm{and}\hspace{5pt}b^-\le{\rm Im}\; z\le b^+\big\}\Big\backslash\mathcal{U}_A,\\ \mathcal{V}_B&:= \big\{z\in\mathbb{C}:{\rm Im}\; z + i{\rm Re}\; z\in\hat{\mathcal{V}}_B\big\}\quad\text{where}\quad\hat{\mathcal{V}}_B:= \bigcup_{\efrac{a,b\in W(B)}{(a,b)\subset\rho(B)}}\mathcal{U}_{a,b}^{a^-,a^+},\\ \mathcal{Y}_B&:=\big\{z\in\mathbb{C}:{\rm Im}\; z\in\overline{W(B)}\hspace{5pt}\textrm{and}\hspace{5pt}a^-\le{\rm Re}\; z\le a^+\big\}\Big\backslash\mathcal{V}_B. \end{align*} \end{definition}
\begin{theorem}\label{cor1a} If $A$ is unbounded from above and/or below, then \begin{equation}\label{unbinc} \sigma(A+iB)\subset\mathcal{X}_A. \end{equation} If $A$ is bounded, then \begin{equation}\label{binc} \sigma(A+iB)\subset\mathcal{X}_A\cap\mathcal{Y}_B. \end{equation} \end{theorem} \begin{proof} The first assertion follows immediately from Theorem \ref{lem1b}. Suppose that $A$ is bounded. Then $\sigma(A+iB)\subset\mathcal{X}_A$ again follows from Theorem \ref{lem1b}. Let $w\notin\mathcal{Y}_B$. Then either \[ w\notin\big\{z\in\mathbb{C}:{\rm Im}\; z\in\overline{W(B)}\hspace{5pt}\textrm{and}\hspace{5pt}a^-\le{\rm Re}\; z\le a^+\big\}\quad\text{or}\quad w\in\mathcal{V}_B. \] Suppose the former is true. Since $\sigma(B+iA)\subset\overline{W(B+iA)}\subset\big\{z\in\mathbb{C}:{\rm Re}\; z\in\overline{W(B)}\hspace{5pt}\textrm{and}\hspace{5pt}a^-\le{\rm Im}\; z\le a^+\big\}$, we obtain \begin{align*} {\rm Im}\; w + i{\rm Re}\; w\in\rho(B+iA)\quad &\Rightarrow\quad{\rm Im}\; w - i{\rm Re}\; w\in\rho(B-iA)\\ &\Rightarrow\quad{\rm Re}\; w + i{\rm Im}\; w\in\rho(A+iB). \end{align*} Suppose instead that $w\in\mathcal{V}_B$, then for some $a,b\in W(B)$, $(a,b)\subset\rho(B)$, we have \begin{align*} {\rm Im}\; w + i{\rm Re}\; w\in\mathcal{U}_{a,b}^{a^-,a^+}\quad&\Rightarrow\quad{\rm Im}\; w + i{\rm Re}\; w\in\rho(B+iA)\\ &\Rightarrow\quad{\rm Re}\; w + i{\rm Im}\; w\in\rho(A+iB). \end{align*} \end{proof}
\begin{remark} Any bounded linear operator, $T\in\mathcal{B}(\mathcal{H})$, may be expressed as \[ T = \underbrace{\left(\frac{T+T^*}{2}\right)}_A +~ i\underbrace{\left(\frac{T-T^*}{2i}\right)}_B \] where $A$ and $B$ are bounded self-adjoint operators. Hence, Theorem \ref{cor1a} provides an enclosure for the spectrum, in terms of the real and imaginary parts, of any bounded linear operator. \end{remark}
\void{ \begin{definition} We denote by $\mathcal{Q}$ the set of all self-adjoint operators, $Q$, with $\sigma(Q)\subset\sigma(A)$, by $\mathcal{S}$ the set of all bounded self-adjoint operators with $cI\le S\le d$, and by $\mathcal{T}$ the set of all self-adjoint operators, $T$, with $\sigma(T)\subset\sigma(B)$. \end{definition}
The following Corollary is a straightforward consequence of Theorem \ref{cor1} and Definition \ref{W}.
\begin{corollary} Let $A$ be possibly unbounded, then \[\bigcup_{Q\in\mathcal{Q}}\bigcup_{S\in\mathcal{S}}\sigma(Q+iS)\subset\mathcal{X}_A.\] If $A$ is bounded, then \[\bigcup_{Q\in\mathcal{Q}}\bigcup_{T\in\mathcal{T}}\sigma(Q+iT)\subset\mathcal{X}_A\cap\mathcal{Z}_B.\] \end{corollary} }
\begin{corollary}\label{2eigs} Let $\sigma(A)=\{a^-,a^+\}$ and $\sigma(B)=\{b^-,b^+\}$, then $\sigma(A+iB)\subset\Gamma_{a^-,a^+}^{b^-,b^+}$. For any $z\in\Gamma_{a^-,a^+}^{b^-,b^+}$, $B$ may be chosen such that $z\in\sigma(A+iB)$. \end{corollary} \begin{proof} The first assertion follows immediately from theorems \ref{lem1b} and \ref{cor1a}. Let $u,v$ be normalised eigenvectors with $Au=a^-u$ and $Av=a^+v$. For $t\in[0,1]$ we define the family of self-adjoint operators \begin{align*} B(t)x &= b^-\langle x,\sqrt{1-t}u + \sqrt{t}v\rangle(\sqrt{1-t}u + \sqrt{t}v)\\ &\quad + b^+\langle x,\sqrt{t}u-\sqrt{1-t}v\rangle(\sqrt{t}u-\sqrt{1-t}v). \end{align*} Evidently, \[ \min\sigma(B(t))=b^-\quad\text{and}\quad\max\sigma(B(t))=b^+\quad\forall t\in[0,1]. \] Furthermore, \begin{align*} Au+iB(0)u &= (a^-+ ib^-)u,\quad Av+iB(0)v= (a^+ + ib^+)v,\\ Au+iB(1)u &= (a^-+ ib^+)u,\quad Av+iB(1)v = (a^++ ib^-)v. \end{align*} Hence, for each $z\in\Gamma_{a^-,a^+}^{b^-,b^+}$ there exists a $t\in[0,1]$ for which $z\in\sigma(A+iB(t))$. \end{proof}
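The construction in the proof is explicit enough to be tried numerically. A minimal sketch follows, in which the operators are realised as $2\times 2$ matrices and the values of $a^-$, $a^+$, $b^-$, $b^+$ are illustrative:
\begin{verbatim}
# Sketch of the family B(t) from the proof: as t runs over [0, 1], the
# eigenvalues of A + iB(t) sweep along the curve Gamma.  The values of
# a^-, a^+, b^-, b^+ below are illustrative.
import numpy as np

am, ap, bm, bp = 0.0, 1.0, -1.0, 1.0
A = np.diag([am, ap])
u = np.array([1.0, 0.0])          # eigenvector for a^-
v = np.array([0.0, 1.0])          # eigenvector for a^+

for t in np.linspace(0.0, 1.0, 5):
    w1 = np.sqrt(1 - t) * u + np.sqrt(t) * v
    w2 = np.sqrt(t) * u - np.sqrt(1 - t) * v
    B = bm * np.outer(w1, w1) + bp * np.outer(w2, w2)
    print(t, np.sort_complex(np.linalg.eigvals(A + 1j * B)))
\end{verbatim}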
\void{ \begin{corollary}\label{rem2} Let $A$ and $B$ be a $2\times 2$ matrices. Then $\sigma(A+iB)$ consists of one eigenvalue only if $b-a=d-c$. If $A+iB$ has only one eigenvalue then it is equal to $(a+b)/2 + i(c+d)/2$. If $b-a\ne d-c$, then $A+iB$ has and one eigenvalue on the curve $f_{a,b}^{c,d}$ and another on the curve $g_{a,b}^{c,d}$. \end{corollary} \begin{proof} For the first and third assertions we assume that $b-a>d-c$, the case where $d-c>b-a$ may be treated similarly. $T(s)=A+isB$. Then $T(0)=A$ and $T(1)=M$. Hence, $\sigma(T(0))=\{a,b\}$. Furthermore, $b-a>sd-sc$ for each $s\in[0,1]$ and therefore the eigenvalues of $T(s)$, which by Corollary \ref{2eigs} must lie on the corresponding curves $f_{a,b}^{sc,sd}$ and $g_{a,b}^{sc,sd}$, cannot coincide for any $s\in[0,1]$. The first and third assertions follow. For the second assertion, suppose that $M$ has a double eigenvalue $\lambda\ne(a+b)/2 + i(c+d)/2$. Without loss of generality we assume that $\lambda$ lies on $f_{a,b}^{sc,sd}$ and therefore not on $g_{a,b}^{sc,sd}$. For any sufficiently small $\varepsilon>0$, \[\sigma(T(1+\varepsilon))\subset f_{a,b}^{sc,sd}\quad\text{where}s=1+\varepsilon\quad\text{and}\quad b-a<sd-sc,\] which is a contradiction. \end{proof}
\begin{example}\label{ex1} Let \[ A = \frac{1}{2}\left( \begin{array}{cc} 2 & 1\\ 1 & 2 \end{array} \right) \quad\text{and}\quad B = \frac{s}{2i}\left( \begin{array}{cc} 0 & 1\\ -1 & 0 \end{array} \right)\quad\text{where}\quad s\in\mathbb{R}. \] Then $a=1/2$, $b=3/2$, $c=-\vert s\vert/2$ and $d=\vert s\vert/2$. Let $\sigma(A+iB)=\{\lambda_1,\lambda_2\}$, then $b-a = d-c$ only when $s=-1$ or $s=1$. It follows, from Corollary \ref{rem2}, that $\lambda_1=\lambda_2$ is permitted only if $s=-1$ or $s=1$. When $s=-1$ \[ A+iB = \left( \begin{array}{cc} 1 & 0\\ 1 & 1 \end{array} \right)\quad\text{and}\quad\sigma(A+iB)=\{1\} \] When $s=1$ \[ A+iB = \left( \begin{array}{cc} 1 & 1\\ 0 & 1 \end{array} \right) \quad\text{and}\quad\sigma(A+iB)=\{1\}. \] \end{example}
\begin{definition}\label{WX} We define \begin{align*} \Gamma_A&:= \left(\bigcup_{\efrac{(\alpha,\beta)\in W(A)}{\alpha,\beta\subset\rho(A)}}\Gamma_{\alpha,\beta}^{c,d}\right)\bigcup \Big(\sigma(A)-ci\Big)\bigcup \Big(\sigma(A)+di\Big),\\ \Gamma_B&:=\big\{z\in\mathbb{C}:{\rm Im}\; z + i{\rm Re}\; z\in\hat{\Gamma}_B\big\}\quad\text{where}\\ \hat{\Gamma}_B&:=\left(\bigcup_{\efrac{(\alpha,\beta)\in W(B)}{\alpha,\beta\subset\rho(B)}}\Gamma_{\alpha,\beta}^{a,b}\right)\bigcup \Big(\sigma(B)-ai\Big)\bigcup \Big(\sigma(B)+bi\Big). \end{align*} \end{definition}
Evidently, $\Gamma_A$ and $\Gamma_B$ are the boundaries of $\mathcal{X}_A$ and $\mathcal{Z}_B$, respectively.
\begin{corollary}\label{cor2} Let $A$ be unbounded and $z\in\Gamma_A$. There exist $Q\in\mathcal{Q}$ and $S\in\mathcal{S}$ such that $z\in\sigma(Q+iS)$. Let $A$ be bounded and $z\in\Gamma_A\cap\Gamma_B$. There exist $Q\in\mathcal{Q}$ and $T\in\mathcal{T}$ with $z\in\sigma(S+iT)$. \end{corollary}
\begin{lemma}\label{aux} Let $\sigma(A)\subset[\alpha,\beta]$ and $\vert z - (\alpha+\beta)/2\vert>(\beta-\alpha)/2$. There exists a constant $c>0$ such that $\vert\langle(A-z)^2u,u\rangle\vert\ge c\Vert u\Vert^2$ for all $u\in\mathcal{H}$. \end{lemma} \begin{proof} We argue similarly to the proof of \cite[Theorem 3.1]{shar}. If $z\in\mathbb{R}$, then we may take $c = \min\{\vert c-\alpha\vert,\vert c-\beta\vert\}$. If ${\rm Im}\; z\ne 0$, then, since $z$ lies outside the disc with centre $(\alpha+\beta)/2$ and radius $(\beta-\alpha)/2$, the closed and bounded set \[ \mathcal{S}:=\{(\lambda-z)^2:\lambda\in\sigma(A)\} \] lies in an open half plane. Hence there exist a $c>0$ and a $\theta\in[0,\pi)$ such that \[ c\le\min\{{\rm Re}\; e^{i\theta}w:w\in\mathcal{S}\}. \] It follows that for some we have \begin{align*} \vert\langle(A-z)^2u,u\rangle\vert &= \int_{\sigma(A)}e^{i\theta}(\lambda-z)^2~d\langle E_\lambda u,u\rangle\ge c\Vert u\Vert^2. \end{align*} \end{proof}
\begin{theorem}\label{thm2} Let $\sigma(A)\subset[\alpha,\beta]$ and $\sigma(B)=\{\nu,\mu\}$, then $\sigma(A+iB)\subset\mathcal{U}_{\alpha,\beta}^{\nu,\mu}\cup\Gamma_{\alpha,\beta}^{\nu,\mu}$. \end{theorem} \begin{proof} Let $z\notin\mathcal{U}_{\alpha,\beta}^{\nu,\mu}\cup\Gamma_{\alpha,\beta}^{\nu,\mu}$ with ${\rm Re}\; z < \alpha\le\beta$ and $\nu\le{\rm Im}\; z\le\mu$. By Lemma \ref{olderlem} \begin{equation}\label{fin1} {\rm Re}\; z - \frac{({\rm Im}\; z-\mu)({\rm Im}\; z-\nu)}{{\rm Re}\; z- \alpha} > \beta. \end{equation} Let $\Vert(A+iB-z)u_n\Vert=\varepsilon_n\to 0$ where $(u_n)_{n\in\mathbb{N}}$ is a sequence of normalised vectors. There exists a sequence of normalised vectors $(v_n)$ with $(A+iB-z)u_n=\varepsilon_n v_n$. Similarly to the proof Theorem \ref{lem1b}, we obtain \begin{align*} &\langle(A-{\rm Re}\; z)u_n,u_n\rangle = \varepsilon_n{\rm Re}\; \langle v_n,u_n\rangle,\quad
\langle(B-{\rm Im}\; z)u_n,u_n\rangle = \varepsilon_n{\rm Im}\; \langle v_n,u_n\rangle,\\ \Vert&(A-{\rm Re}\; z)u_n\Vert^2 = \varepsilon_n^2 + 2\varepsilon_n{\rm Im}\;\langle(B-{\rm Im}\; z)u_n,v_n\rangle + \Vert(B-{\rm Im}\; z)u_n\Vert^2, \end{align*} and, in view of Remark \ref{rem}, \begin{align*} \Vert(B-{\rm Im}\; z)u_n\Vert^2 &= -({\rm Im}\; z-\mu)({\rm Im}\; z -\nu) + \varepsilon_n{\rm Im}\;\langle v_n,u_n\rangle(\mu+\nu - 2{\rm Im}\; z). \end{align*} Set $w=\sqrt{(\mu-{\rm Im}\; z)({\rm Im}\; z -\nu)}$. Then \begin{align*} \langle(A-{\rm Re}\; z- iw)^2u_n,u_n\rangle &= \Vert(A-{\rm Re}\; z)u_n\Vert^2 - 2iw\langle(A-{\rm Re}\; z)u_n,u_n\rangle - w^2 \to 0. \end{align*} We deduce, from Lemma \ref{aux}, that $\vert {\rm Re}\; z+iw - (\alpha+\beta)/2\vert\le(\beta-\alpha)/2$ and hence \begin{equation}\label{fin2} {\rm Re}\; z - \frac{({\rm Im}\; z-\mu)({\rm Im}\; z -\nu)}{{\rm Re}\; z- \alpha} = {\rm Re}\; z + \frac{w^2}{{\rm Re}\; z- \alpha} \le\beta \end{equation} since the left hand side is the real number $>\alpha$ at which the circle passing through $\alpha$ and ${\rm Re}\; z\pm iw$, with real centre, intersects $\mathbb{R}$. Since \eqref{fin2} contradicts \eqref{fin1}, we deduce that $z\in\rho(A+iB)$. The case where ${\rm Re}\; z > \beta$ may be proved by applying the above argument to $-A$. \end{proof} } \begin{example} Let $\sigma(A)=\{-1,0,2\}$ and $\sigma(B)=\{-s,0,s\}$ where $s\in\mathbb{R}$. By Theorem \ref{cor1a} we have $\sigma(A+iB)\subset\mathcal{X}_A\cap\mathcal{Y}_B$. For varying values of $s\in\mathbb{R}$, Figures 3--5 show the region(s) enclosed by $\mathcal{X}_A\cap\mathcal{Y}_B$. Also shown is $\sigma(A+iB)$ for 1000 randomly generated $3\times 3$ matrices $A$ and $B$ where $\sigma(A)=\{-1,0,2\}$ and $\sigma(B)=\{-s,0,s\}$.
\begin{figure}
\caption{With $s=0.25$, $\mathcal{X}_A\cap\mathcal{Y}_B$ consists of 3 disjoint regions which are shown top left. Also shown are the three regions in more detail. The red dots are $\sigma(A+iB)$ for 1000 randomly generated $3\times 3$ matrices $A$ and $B$ where $\sigma(A)=\{-1,0,2\}$ and $\sigma(B)=\{-s,0,s\}$. }
\end{figure} \begin{figure}
\caption{With $s=0.5$, the region $\mathcal{X}_A\cap\mathcal{Y}_B$ now consists of 2 disjoint regions; the first two regions on the top left of Figure 3 have merged into one. The red dots are $\sigma(A+iB)$ for 1000 randomly generated $3\times 3$ matrices $A$ and $B$ where $\sigma(A)=\{-1,0,2\}$ and $\sigma(B)=\{-s,0,s\}$.}
\end{figure} \begin{figure}
\caption{Clockwise from top left $s=1,1.25,4,3$. The red dots are $\sigma(A+iB)$ for 1000 randomly generated $3\times 3$ matrices $A$ and $B$ where $\sigma(A)=\{-1,0,2\}$ and $\sigma(B)=\{-s,0,s\}$.}
\end{figure} \end{example}
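The random experiments shown in Figures 3--5 can be reproduced along the following lines. This is an illustrative sketch (assuming NumPy and Matplotlib; generating matrices with prescribed spectra by random orthogonal conjugation is our choice of construction, and the original computations may differ).
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def random_with_spectrum(spec):
    # conjugate diag(spec) by a random orthogonal matrix
    Q, _ = np.linalg.qr(rng.standard_normal((len(spec), len(spec))))
    return Q @ np.diag(spec) @ Q.T

s, pts = 0.25, []
for _ in range(1000):
    A = random_with_spectrum([-1.0, 0.0, 2.0])
    B = random_with_spectrum([-s, 0.0, s])
    pts.extend(np.linalg.eigvals(A + 1j * B))

pts = np.array(pts)
plt.plot(pts.real, pts.imag, '.', ms=2, color='red')
plt.xlabel('Re'); plt.ylabel('Im'); plt.show()
\end{verbatim}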
\void{ \begin{example} Let $\mathcal{H}=L^2(0,\pi)$, $A=-d^2/dx^2$ with homogeneous Dirichlet boundary conditions, and $-I\le B\le 5I$. Then, $\sigma(A)=\{n^2: n\in\mathbb{N}\}$ and Figure 6 shows the region $\mathcal{X}_A$. \begin{figure}
\caption{On the left are the first three regions enclosed of $\mathcal{X}_A$. On the right are the first nine regions enclosed of $\mathcal{X}_A$.}
\end{figure} \end{example} }
\void{ \section{Galerkin and Quadratic methods} The Galerkin eigenvalues of $A$ with respect to a finite-dimensional trial space $\mathcal{L}\subset{\rm Dom}(\frak{a})$, denoted $\sigma(A,\mathcal{L})$, consists of those $\mu\in\mathbb{C}$ for which $\exists u\in\mathcal{L}\backslash\{0\}$ with \[ \frak{a}(u,v) = \mu\langle u,v\rangle\quad\forall v\in\mathcal{L}. \] Unless stated otherwise, $(\mathcal{L}_n)_{n\in\mathbb{N}}\subset{\rm Dom}(\frak{a})$ will be a sequence of finite-dimensional trial spaces with corresponding sequence of orthogonal projections $(P_n)$. We shall always assume that: \begin{equation}\label{subspaces} \forall u\in{\rm Dom}(\frak{a})\quad\exists u_n\in\mathcal{L}_n:\quad\Vert u-u_n\Vert_{\frak{a}}\to0 \end{equation} where $\Vert\cdot\Vert_{\frak{a}}$ is the norm associated to the Hilbert space $\mathcal{H}_\frak{a}$ with inner-product \[ \langle u,v\rangle_{\frak{a}}:=\frak{a}(u,v) - (m-1)\langle u,v\rangle\quad\forall u,v\in{\rm Dom}(\frak{a})\quad\textrm{where}\quad m=\min\sigma(A). \] The distance from a subspace $\mathcal{M}\subset\mathcal{H}$ to another subspace $\mathcal{N}\subset\mathcal{H}$ is defined as \[ \delta(\mathcal{M},\mathcal{N}) = \sup_{u\in\mathcal{M},\Vert u\Vert=1}{\rm dist}(u,\mathcal{N}), \] the gap between the two subspaces is \[ \hat{\delta}(\mathcal{M},\mathcal{N}) = \max\big\{\delta(\mathcal{M},\mathcal{N}),\delta(\mathcal{N},\mathcal{M})\big\}; \] see \cite[Section IV.2.1]{katopert} for further details. We shall write $\delta_{\frak{a}}$ and $\hat{\delta}_{\frak{a}}$ to indicate the distance and gap between subspaces of $\mathcal{H}_\frak{a}$. For trial spaces satisfying \eqref{subspaces} the Galerkin method is an extremely powerful tool for approximating those eigenvalues which lie below the essential spectrum; see for example \cite{chat}. It is well-known that \begin{equation}\label{galerkin1} \left(\lim_{n\to\infty}\sigma(A,\mathcal{L}_n)\right)\cap\big(-\infty,\min\sigma_{\mathrm{ess}}(A)\big) = \sigma(A)\cap\big(-\infty,\min\sigma_{\mathrm{ess}}(A)\big). \end{equation} Furthermore, for an eigenvalue $\lambda<\min\sigma_{\mathrm{ess}}(A)$ with eigenspace $\mathcal{L}(\{\lambda\})$, we have the \emph{superconvergence} property \begin{equation}\label{galerkin2} {\rm dist}\big(\lambda,\sigma(A,\mathcal{L}_n)\big) = \mathcal{O}\big(\delta_{\frak{a}}(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)^2\big). \end{equation}
In general, the Galerkin method cannot be relied upon for approximating eigenvalues above $\min\sigma_{\mathrm{ess}}(A)$. This is due to a phenomenon known as spectral pollution which is the presence of sequences of Galerkin eigenvalues which converge to points in $\rho(A)$. A typical situation is $\min\sigma_{\mathrm{ess}}(A)\le\alpha<\beta$, $(\alpha,\beta)\cap\sigma_{\mathrm{ess}}(A) = \varnothing$, and \[ \left(\lim_{n\to\infty}\sigma(A,\mathcal{L}_n)\right)\cap(\alpha,\beta) = (\alpha,\beta). \] Hence, any approximation of $\sigma_{\mathrm{dis}}\cap(\alpha,\beta)$ is lost within an increasingly dense \emph{fog} of spurious Galerkin eigenvalues; see examples \ref{uber2} \& \ref{magneto} and \cite{boff,boff2,bost,dasu,DP,lesh,rapp}. Although this means that a direct application of the Galerkin method often fails to identify eigenvalues, in view of \eqref{galerkin1} and \eqref{galerkin2}, there is every reason to suppose that eigenvalues above $\min\sigma_{\mathrm{ess}}(A)$ could, in principle, be approximated with a superconvergent technique using trial spaces satisfying only \eqref{subspaces}. The absence of such a technique has resulted in the development of quadratic methods.
The quadratic methods are so-called because of their reliance on truncations of the square of the operator in question; the Galerkin method relies only on the quadratic form. They have been studied and applied extensively over the last two decades. The quadratic method which has received the most attention is the second order relative spectrum. This is because it can be applied without \'a priori information and it was widely thought to approximate the whole spectrum of an arbitrary self-adjoint operator. The latter has recently been shown to be false; see \cite{shar2}. However, it is known that the method will reliably approximate the discrete spectrum of a self-adjoint operator and part of the discrete spectrum of a normal operator; see \cite{bo} and \cite{me3}, respectively. The appeal of quadratic methods is that they can approximate eigenvalues without interference from spectral pollution, in fact, they can even provide enclosures for eigenvalues. The latter is often regarded as a major selling point of these methods. In practice though, we are more likely to be interested in accuracy and convergence rather than enclosures.
A drawback of quadratic methods is that they require trial spaces to belong to the operator domain. From a computational perspective this can be highly awkward as typically FEM software will not support the operator domain. Particularly inconvenient, is that for a second order differential operator we cannot use the standard FEM space of piecewise linear trial functions. Furthermore, it is straightforward to show that \eqref{subspaces}, with the added condition $\mathcal{L}_n\subset{\rm Dom}(A)$ $\forall n\in\mathbb{N}$, is not sufficient to ensure approximation of $\sigma_{\mathrm{dis}}(A)$. A sufficient condition is \begin{equation}\label{subspaces1} \forall u\in{\rm Dom}(A)\quad\exists u_n\in\mathcal{L}_n:\quad\Vert u-u_n\Vert_A\to0; \end{equation} see \cite[Corollary 3.6]{bost}. With \eqref{subspaces1} satisfied, we have for each $\lambda\in\sigma_{\mathrm{dis}}(A)$ an element $z_n$ belonging to the second order spectrum of $A$ relative to $\mathcal{L}_n$ with \begin{equation}\label{spec2con} \vert\lambda-z_n\vert = \mathcal{O}\big(\delta_A(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)\big)\quad\textrm{and}\quad\vert\lambda-{\rm Re}\; z_n\vert = \mathcal{O}\big(\delta_A(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)^2\big) \end{equation} where $\delta_{A}(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)$ is the distance from the eigenspace $\mathcal{L}(\{\lambda\})$ to the trial space $\mathcal{L}_n$ with respect to the graph norm; see \cite[Section 6]{me3}. That the convergence rate is measured in terms of the graph norm means that the convergence, and therefore the accuracy, of this method can be poor when compared to the superconvergence of the Galerkin method; see Example \ref{diracex} and \cite[Example 3.5 \& 4.3]{bost}. Convergence rates analogous to the right hand side of \eqref{spec2con} are also known for the Davies \& Plum and Zimmermann \& Mertins methods; see \cite[Lemma 2]{bost1}. }
\section{Perturbation of $\sigma_{\mathrm{dis}}(A)$}
In this section we consider $\sigma(A+iP)$ where $P$ is a non-trivial orthogonal projection. Let $a,b\in\rho(A)\cap\mathbb{R}$ with $a<b$, and denote $\Delta=[a,b]$. For the remainder of this manuscript, we assume that \[ \sigma(A)\cap\Delta=\{\lambda_1,\dots,\lambda_d\}\subset\sigma_{\mathrm{dis}}(A)\quad\text{where}\quad d<\infty. \] We are concerned with the perturbation of the eigenvalues $\{\lambda_1,\dots,\lambda_d\}$. By Theorems \ref{lem1b} and \ref{cor1a}, we have \[ \mathcal{U}_{a,\lambda_1}^{0,1}\cup\mathcal{U}_{\lambda_1,\lambda_2}^{0,1}\cup\dots\cup\mathcal{U}_{\lambda_{d-1},\lambda_d}^{0,1}\cup\mathcal{U}_{\lambda_d,b}^{0,1}\subset\rho(A+iP)\quad\text{and}\quad\sigma(A+iP)\subset\mathcal{X}_A. \] However, we shall be interested in the set \begin{equation}\label{theinter} \mathcal{U}_{a,b}^{0,1}\cap\sigma(A+iP). \end{equation} We will show that if $\Vert(I-P)E(\Delta)\Vert$ is sufficiently small, then \eqref{theinter} consists only of elements in a small neighbourhood of $\Gamma_{a,b}^{0,1}$, and of eigenvalues in small neighbourhoods of the $\lambda_j+i$, $1\le j\le d$.
\begin{definition} For $z\in\mathcal{U}_{a,b}^{0,1}$, and $K_{a,b}$ as in Definition \ref{rsK} (with $b^-=0$ and $b^+=1$), we set \begin{align*} d(z) &=\min\left\{\frac{{\rm dist}\Big(z,\Gamma_{a,b}^{0,1}\Big)^4}{K_{a,b}},{\rm dist}\Big(z,\big\{\lambda_1+i,\dots,\lambda_d+i\big\}\Big)\right\}. \end{align*} \void{ For $\varepsilon>0$ we set \begin{align*} \mathcal{X}_\varepsilon:=\big\{z\in\mathcal{U}_\Delta:d(z)> 3\varepsilon\big\}. \end{align*} } \end{definition}
\begin{proposition}\label{thm1} If $z\in\mathcal{U}_{a,b}^{0,1}$ and $d(z)>3\Vert (I-P)E(\Delta)\Vert$, then \begin{equation}\label{resolvent_bound2} z\in\rho(A+iP)\quad\textrm{and}\quad\Vert (A+iP-z)^{-1}\Vert\le \Big(d(z)-3\Vert (I-P)E(\Delta)\Vert\Big)^{-1}. \end{equation} \end{proposition} \begin{proof}
For simplicity, let us denote $E=E(\Delta)$ and $\varepsilon=\Vert (I-P)E(\Delta)\Vert$. We readily deduce that $\Vert(I-E)PE\Vert\le\varepsilon$ and $\Vert EP(I-E)\Vert\le\varepsilon$. With these inequalities and the identity $P = EPE +(I-E)PE + EP(I-E) + (I-E)P(I-E)$, we obtain for any $u\in{\rm Dom}(A)$ \begin{align*} \Vert (A+iP -z)u\Vert & =\Vert(A-z)(I-E)u + (A-z)Eu + iPu\Vert\\ &= \Vert(A-z)(I-E)u + (A-z)Eu\\ &\quad + i\big(EPE+(I-E)PE \\ &\quad + EP(I-E)+ (I-E)P(I-E)\big)u\Vert\\ &\ge \Vert(A-z)(I-E)u + i(I-E)P(I-E)u\\ &\quad +(A-z)Eu+ iEPEu\Vert\\ &\quad - \Vert(I-E)PEu + EP(I-E)u\Vert\\ &\ge\Vert(A-z)(I-E)u + i(I-E)P(I-E)u\\ &\quad+ (A-z)Eu + iEPEu\Vert - 2\varepsilon\Vert u\Vert. \end{align*} The term $(A-z)Eu + iEPEu$ satisfies the estimate \begin{align*} \Vert(A-z)Eu + iEPEu\Vert &= \Vert(A-z+i)Eu + iE(P-I)Eu\Vert\\ &\ge (d(z)-\varepsilon)\Vert Eu\Vert. \end{align*} Next consider the term $(A-z)(I-E)u + i(I-E)P(I-E)u$. We have \[ (A-z)(I-E) + i(I-E)P(I-E) :\mathcal{H}\ominus\mathcal{L}(\Delta) \to\mathcal{H}\ominus\mathcal{L}(\Delta). \] The restriction of $A$ to $\mathcal{H}\ominus\mathcal{L}(\Delta)$ is a self-adjoint operator with no spectrum in the interval $\Delta$. The restriction of $(I-E)P$ to $\mathcal{H}\ominus\mathcal{L}(\Delta)$ is a self-adjoint operator with $0\le(I-E)P\le 1$. Therefore, by Theorem \ref{lem1b}, \begin{align*} \Vert(A-z)(I-E)u + i(I-E)P(I-E)u\Vert &\ge \frac{{\rm dist}(z,\Gamma_{a,b}^{0,1})^4}{K_{a,b}}\Vert(I-E)u\Vert\\ &\ge d(z)\Vert(I-E)u\Vert. \end{align*} The terms $(A-z)Eu+iEPEu$ and $(A-z)(I-E)u+i(I-E)P(I-E)u$ lie in the mutually orthogonal subspaces $\mathcal{L}(\Delta)$ and $\mathcal{H}\ominus\mathcal{L}(\Delta)$, respectively, so combining the three estimates above yields \[ \Vert(A+iP-z)u\Vert\ge(d(z)-\varepsilon)\Vert u\Vert - 2\varepsilon\Vert u\Vert = \big(d(z)-3\varepsilon\big)\Vert u\Vert\quad\forall u\in{\rm Dom}(A). \] Arguing identically with $i$ replaced by $-i$ and $z$ by $\bar{z}$ gives the same lower bound for the adjoint $(A+iP-z)^*=A-iP-\bar{z}$, and \eqref{resolvent_bound2} follows. \end{proof}
\void{ \begin{corollary}\label{cor0} There exist constants $c_r,\varepsilon_r>0$, independent of $P$, such that whenever $\Vert (I-P)E(\Delta)\Vert\le\varepsilon_r$ and $\vert\lambda_j+i-z\vert=r$ for some $1\le j\le d$, we have \begin{equation}\label{resb} z\in\rho(A+iP)\quad\textrm{with}\quad\Vert(A+iP-z)^{-1}\Vert\le \frac{1}{c_r}. \end{equation} \end{corollary} \begin{proof} We may choose any $\varepsilon_r>0$ such that, for each $1\le j\le d$, \[d(z)-3\varepsilon_r>0\quad\textrm{for all}\quad\vert\lambda_j+i-z\vert=r.\] It then follows, from Proposition \ref{thm1}, that $z\in\rho(A+iP)$ and $\Vert(A+iP-z)^{-1}\Vert$ is uniformly bounded for all $z\in\mathcal{U}_\Delta$ with $\vert\lambda_j+i-z\vert=r$. It will therefore suffice to show that $A+iP-z$ is also invertible with uniformly bounded inverse whenever $\vert\lambda_j+i-z\vert=r$ and ${\rm Im}\; z>1$. That $z\in\rho(A+iP)$ follows immediately from \eqref{nr}. Suppose that $(z_n^\pm)$ is sequence converging to $\lambda_j\pm r+i$ with ${\rm Im}\; z_n^\pm>1$ for every $n\in\mathbb{N}$. Let $(S_n)$ be a sequence of orthogonal projections with $\Vert (I-S_n)E(\Delta)\Vert\le\varepsilon_r$ for every $n\in\mathbb{N}$. If there exists a sequence of normalised vectors $(u_n)$ with $(A+iS_n-z_n)u_n\to 0$, then \[ \big(A+iS_n-(\lambda_j\pm r+i)\big)u_n = \big(A+iS_n-z_n^\pm)u_n +\big(z_n^\pm-(\lambda_j\pm r+i)\big)\to 0. \] However, by Corollary \ref{thm1}, we have the bound \[ \big\Vert\big(A+iS_n-(\lambda_j\pm r+i)\big)^{-1}\big\Vert\le\frac{1}{d(\lambda_j\pm r+i)-3\varepsilon_r}\quad\forall n\in\mathbb{N}. \] The existence of a constant $c_r>0$ satisfying \eqref{resb} now follows from the estimate \[ \Vert(A+iP-z)^{-1}\Vert\le\frac{1}{{\rm dist}\big(z,W(A+iP)\big)}\quad\forall z\notin W(A+iP). \] \end{proof} }
\begin{lemma}\label{cor1} Let $(I-P)E(\Delta)=0$. Then \[\mathcal{U}_{a,b}^{0,1}\Big\backslash\{\lambda_1+i,\dots,\lambda_d+i\}\subset\rho(A+iP).\] Moreover, $\lambda_j+i$ is an eigenvalue of $A+iP$ with spectral subspace $\mathcal{L}(\{\lambda_j\})$, and $A+iP-(\lambda_j+i)$ is Fredholm with index zero. \end{lemma} \begin{proof} The first assertion follows immediately from Proposition \ref{thm1}. Let $\lambda\in\{\lambda_1,\dots,\lambda_d\}$. Whenever $u\in\mathcal{L}(\{\lambda\})$ we have $(A+iP)u = (\lambda + i)u$. Further, if $v\in{\rm Dom}(A)$ and $(A+iP)v = (\lambda+i)v$, then $(A-\lambda)v = i(I-P)v$ and therefore \begin{displaymath} \langle(A-\lambda)v,v\rangle = i\langle(I-P)v,v\rangle = i\Vert(I-P)v\Vert^2. \end{displaymath} It follows that $(I-P)v=0$ and $(A-\lambda)v=0$, and hence $v\in\mathcal{L}(\{\lambda\})$. We deduce that $\mathrm{nul}(A+iP-(\lambda+i)) = \mathrm{nul}(A-\lambda)$ where $\mathcal{L}(\{\lambda\})$ is the null space for both operators. Suppose that $\lambda+i$ is not semi-simple. Then there exists a non-zero vector $w\perp\mathcal{L}(\{\lambda\})$ with $(A+iP-\lambda-i)w = u\in\mathcal{L}(\{\lambda\})$. Hence, \begin{displaymath} (A-\lambda-i)w\perp\mathcal{L}(\{\lambda\})\quad\textrm{with}\quad\Vert(A-\lambda-i)w\Vert>\Vert w\Vert, \end{displaymath} and \begin{displaymath} iPw = u - (A-\lambda-i)w\quad\textrm{where}\quad u\perp(A-\lambda-i)w. \end{displaymath} It follows that \[\Vert Pw\Vert^2 = \Vert u\Vert^2 + \Vert(A-\lambda-i)w\Vert^2 > \Vert w\Vert^2,\] which is a contradiction since $\Vert P\Vert =1$.
Next we show that $A+iP-(\lambda+i)$ is Fredholm. The operator $A+iP-(\lambda+i)$ has closed range iff there exists a $\gamma>0$ such that \begin{equation}\label{notclosed1} \Vert(A+iP-(\lambda+i))v\Vert\ge\gamma{\rm dist}(v,\mathcal{L}(\{\lambda\}))\quad\forall v\in{\rm Dom}(A); \end{equation} see \cite[Theorem IV.5.2]{katopert}. We suppose that \eqref{notclosed1} is false. There exist $0\le\gamma_n\to 0$ and $v_n\in{\rm Dom}(A)$ with \[ \Vert(A+iP-(\lambda+i))v_n\Vert<\gamma_n{\rm dist}(v_n,\mathcal{L}(\{\lambda\})),\quad n\in\mathbb{N}. \] Set $\tilde{w}_n=(I-E(\{\lambda\}))v_n$, note that $\tilde{w}_n\ne 0$ for all $n\in\mathbb{N}$, and denote $w_n=\tilde{w}_n/\Vert\tilde{w}_n\Vert$. Using $(I-P)E(\{\lambda\})=0$, we have \begin{align*} \gamma_n&=\frac{\gamma_n{\rm dist}(v_n,\mathcal{L}(\{\lambda\}))}{\Vert\tilde{w}_n\Vert}>\frac{\Vert(A+iP-(\lambda+i))v_n\Vert}{\Vert\tilde{w}_n\Vert} =\Vert(A+iP-(\lambda+i))w_n\Vert, \end{align*} and hence $(A-\lambda)w_n - i(I-P)w_n\to 0$. Since \[ \langle(A-\lambda)w_n,w_n\rangle\in\mathbb{R}\quad\text{and}\quad \langle i(I-P)w_n,w_n\rangle = i\Vert(I-P)w_n\Vert^2, \] we deduce that $(I-P)w_n\to 0$ and therefore also that $(A-\lambda)w_n\to 0$. The latter implies that ${\rm dist}(w_n,\mathcal{L}(\{\lambda\}))\to 0$, which is a contradiction since $w_n\perp\mathcal{L}(\{\lambda\})$ and $\Vert w_n\Vert=1$ for all $n\in\mathbb{N}$. From the contradiction we deduce that $A+iP-(\lambda+i)$ has closed range. Furthermore, \[\Big(A+iP-(\lambda+i)\Big)^* = A-iP-(\lambda-i) \] and arguing as above we deduce that $0$ is an eigenvalue of $A-iP-(\lambda-i)$ with null space $\mathcal{L}(\{\lambda\})$. Hence \[ \text{def}(A+iP-(\lambda+i)) = \mathrm{nul}(A-iP-(\lambda-i)) = \mathrm{nul}(A+iP-(\lambda+i)); \] see \cite[Theorem IV.5.13]{katopert}. Thus $A+iP-(\lambda+i)$ is Fredholm and the result follows. \end{proof} We fix a $\lambda\in\{\lambda_1,\dots,\lambda_d\}$ with $\dim\mathcal{L}(\{\lambda\})=\kappa$, and an $r\in(0,1/2)$ such that \begin{equation}\label{rhyp} \mathbb{D}(\lambda+i,2r)\cap\big(\sigma(A)+i\big)=\{\lambda+i\}\quad\text{and}\quad\mathbb{D}(\lambda+i,2r)\cap\Gamma_{a,b}^{0,1}=\varnothing, \end{equation} where $\mathbb{D}(x,y)$ is the closed disc with centre $x$ and radius $y$.
\begin{lemma}\label{cor2} If $\Vert (I-P)E(\Delta)\Vert$ is sufficiently small, then \[\mathbb{D}(\lambda+i,r)\cap\sigma(A+iP)\ne\varnothing \] and the dimension of the corresponding spectral subspace is equal to $\kappa$. \end{lemma} \begin{proof} Let $u_1,\dots,u_\kappa$ be an orthonormal basis for $\mathcal{L}(\{\lambda\})$. Set $v_j=Pu_j$ and let $v_{\kappa+1},v_{\kappa+2},\dots$ be such that \[ \mathrm{Range}(P) = {\rm span}\{v_1,\dots,v_\kappa,v_{\kappa+1},v_{\kappa+2},\dots\}. \] For $t\in[0,1]$ set $w_j(t)=tu_j + (1-t)v_j$ and let $P_t$ be the orthogonal projection onto ${\rm span}\{w_1(t),\dots,w_\kappa(t),v_{\kappa+1},v_{\kappa+2},\dots\}$. For any normalised $u\in\mathcal{L}(\{\lambda\})$ we have $u=c_1u_1+\dots+c_\kappa u_\kappa$ and \begin{align*} \Vert(I-P_t)u\Vert &\le \Vert c_1u_1+\dots+c_\kappa u_\kappa - c_1w_1(t)-\dots-c_\kappa w_\kappa(t)\Vert\\ &=(1-t)\Vert (I-P)u\Vert. \end{align*} Thus \begin{equation}\label{projfam} \Vert(I-P_t)E(\Delta)\Vert\le(1-t)\Vert(I-P)E(\Delta)\Vert\quad\forall t\in[0,1]. \end{equation} By Lemma \ref{cor1}, we have $\lambda+i\in\sigma(A+iP_1)$ with spectral subspace $\mathcal{L}(\{\lambda\})$. By Proposition \ref{thm1} and \eqref{projfam} we deduce that whenever $\Vert (I-P)E(\Delta)\Vert$ is sufficiently small, the operator $A + iP_t-zI$ is invertible with uniformly bounded inverse for all $\vert z-\lambda-i\vert= r$ and $t\in[0,1]$. Hence, we may define the family of spectral projections \[ Q_t := -\frac{1}{2\pi i}\int_{\vert\lambda+i-\zeta\vert=r}(A+iP_t - \zeta)^{-1}~d\zeta. \] Evidently, $(Q_t)_{t\in[0,1]}$ is a continuous family and therefore \[ \kappa=\dim\big(\mathcal{L}(\{\lambda\})\big)= \mathrm{Rank}(Q_1) = \mathrm{Rank}(Q_t) \quad\forall t\in[0,1]. \] \end{proof}
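The mechanism of Lemma \ref{cor2} is easy to observe in finite dimensions. In the short sketch below (NumPy assumed; the matrices are our own illustrative choices) $\lambda=1$ has multiplicity $\kappa=2$, and the range of $P$ contains a small perturbation of $\mathcal{L}(\Delta)$, so that $\Vert(I-P)E(\Delta)\Vert=\mathcal{O}(\delta)$.
\begin{verbatim}
import numpy as np

A = np.diag([0.0, 1.0, 1.0, 3.0, 4.0])  # lambda = 1, kappa = 2
delta = 1e-3

# range of P: slightly rotated copies of e_2, e_3, plus e_1
V = np.zeros((5, 3))
V[:, 0] = [0, 1, 0, 0, delta]
V[:, 1] = [0, 0, 1, delta, 0]
V[:, 2] = [1, 0, 0, 0, 0]
Q, _ = np.linalg.qr(V)
P = Q @ Q.T                              # orthogonal projection

mu = np.linalg.eigvals(A + 1j * P)
print(mu[np.abs(mu - (1 + 1j)) < 0.4])   # two eigenvalues close to 1+i
\end{verbatim}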
\void{ \subsection{Galerkin Subspaces and Projections}\label{galsub}
Associated to the form $\frak{a}$ is the Hilbert space $\mathcal{H}_\frak{a}$ which has inner-product \[ \langle u,v\rangle_{\frak{a}}:=\frak{a}(u,v) - (m-1)\langle u,v\rangle\quad\forall u,v\in{\rm Dom}(\frak{a})\quad\textrm{where}\quad m=\min\sigma(A) \] and norm \begin{align*} \Vert u\Vert_{\frak{a}} &=\big(\frak{a}(u,u) - (m-1)\langle u,u\rangle\big)^{\frac{1}{2}}=\left(\int_{\mathbb{R}}\lambda-m+1~d\langle E_\lambda u,u\rangle\right)^{\frac{1}{2}}\\ &=\Vert(A-m+1)^{\frac{1}{2}}u\Vert. \end{align*} The gap or distance between two subspaces $\mathcal{M}$ and $\mathcal{N}$ of $\mathcal{H}$, is defined as \[ \hat{\delta}(\mathcal{M},\mathcal{N}) = \max\big[\delta(\mathcal{M},\mathcal{N}),\delta(\mathcal{N},\mathcal{M})\big] \quad\textrm{where}\quad \delta(\mathcal{M},\mathcal{N}) = \sup_{u\in\mathcal{M},\Vert u\Vert=1}{\rm dist}(u,\mathcal{N}); \] see \cite[Section IV.2.1]{katopert} for further details. We shall write $\delta_{\frak{a}}$ and $\hat{\delta}_{\frak{a}}$ to indicate the gap between subspaces of $\mathcal{H}_\frak{a}$. We denote by $P_n$ the orthogonal projection from $\mathcal{H}$ onto the trial space $\mathcal{L}_n$, and we set \[ \varepsilon_n=\delta_{\frak{a}}(\mathcal{L}(\Delta),\mathcal{L}_n),\quad \mathcal{L}_n(\Delta) := \mathrm{Range}\big(E_n(\Delta)\big)\quad\textrm{and}\quad B_n:=E_n(\Delta)P_n. \]
\void{ \begin{lemma}\label{l1} If $A$ is bounded, then $B_nu\longrightarrow E(\Delta)u$ for all $u\in\mathcal{H}$. \end{lemma} \begin{proof} $A_n$ converges strongly to $A$, therefore $E_n((-\infty,a))$ and $E_n((b,\infty))$ converge strongly to $E((-\infty,a))$ and $E_n((b,\infty))$, respectively; see for example \cite[corollaryollary VIII.1.6]{katopert}. Let $u\in\mathcal{H}$, then \begin{align*} B_nu &= P_nu - E_n((-\infty,a))P_nu - E_n((b,\infty))u\longrightarrow E(\Delta)u. \end{align*} \end{proof} }
\begin{lemma}\label{l1b} If $A$ is bounded, then \[\Vert(I-B_n)E(\Delta)\Vert=\mathcal{O}\big(\delta(\mathcal{L}(\Delta),\mathcal{L}_n)\big)\quad\textrm{and}\quad\delta\big(\mathcal{L}(\Delta),\mathcal{L}_n(\Delta)\big)=\mathcal{O}\big(\delta(\mathcal{L}(\Delta),\mathcal{L}_n)\big).\] \end{lemma} \begin{proof} Let $(A-\lambda_j)u = 0$ with $\Vert u\Vert=1$ and $\lambda_j\in\Delta$. Set $u_n=P_nu$, then \begin{align*} \Vert(A_n-\lambda_j)u_n\Vert =\Vert(P_nA-\lambda_j)u_n - P_n(A-\lambda_j) u\Vert\le\Vert A\Vert\delta(\mathcal{L}(\Delta),\mathcal{L}_n), \end{align*} and \begin{align*} \Vert(I-B_n)u_n\Vert^2 &\le\int_{\mathbb{R}\backslash\Delta}\frac{\vert\mu-\lambda_j\vert^2}{{\rm dist}[\lambda_j,\{a,b\}]^2}~d\langle (E_n)_\mu u_n,u_n\rangle\\ &\le\frac{1}{{\rm dist}[\lambda_j,\{a,b\}]^2}\int_{\mathbb{R}}\vert\mu-\lambda_j\vert^2~d\langle (E_n)_\mu u_n,u_n\rangle\\ &\le\frac{\Vert A\Vert^2\delta(\mathcal{L}(\Delta),\mathcal{L}_n)^2}{{\rm dist}(\lambda_j,\{a,b\})^2}. \end{align*} Therefore \begin{align*} \Vert(I- B_n)u\Vert &\le \Vert(I-B_n)u_n\Vert + \Vert(I-B_n)(u-u_n)\Vert\\ &\le\left(\frac{\Vert A\Vert}{{\rm dist}(\lambda_j,\{a,b\})} + 1\right)\delta(\mathcal{L}(\Delta),\mathcal{L}_n), \end{align*} from which both assertions follow. \end{proof}
\void{ \begin{lemma}\label{l2} If $T$ is unbounded, then $Q_nu \stackrel{\frak{t}}{\longrightarrow}E(\Delta)u$ for all $u\in{\rm Dom}(\frak{t})$. \end{lemma} \begin{proof} Let $\psi_{n,1},\dots,\psi_{n,d_n}$ be orthonormal eigenvectors associated to $\sigma(T,\mathcal{L}_n)\cap\Delta$, so that \[\frak{t}(\psi_{n,j},v)=\mu_{n,j}\langle\psi_{n,j},v\rangle\quad\textrm{for all}\quad v\in\mathcal{L}_n\quad\textrm{and where}\quad\mu_{n,j}\in\Delta.\] For each $v\in\mathcal{L}_n$ we set \[(T-m+1)^{\frac{1}{2}}v=:\tilde{v}\in\tilde{\mathcal{L}}_n:=(T-m+1)^{\frac{1}{2}}\mathcal{L}_n,\] hence \begin{equation}\label{es} \langle(T-m+1)^{-1}\tilde{\psi}_{n,j},\tilde{v}\rangle=\frac{1}{\mu_{n,j}-m+1}\langle\tilde{\psi}_{n,j},\tilde{v}\rangle\quad\textrm{for all}\quad\tilde{v}\in\tilde{\mathcal{L}}_n \end{equation} where \[ \frac{1}{\mu_{n,j}-m+1}\in\tilde{\Delta}:=\left[\frac{1}{b-m+1},\frac{1}{a-m+1}\right]. \] Note that $E(\Delta)$ is the spectral projection associated to the self-adjoint operator $(T-m+1)^{-1}$ and the interval $\tilde{\Delta}$. Evidently, the set \[\left\{\frac{\tilde{\psi}_{n,1}}{\sqrt{\mu_{n,1}-m+1}},\dots,\frac{\tilde{\psi}_{n,d_n}}{\sqrt{\mu_{n,d_n}-m+1}}\right\}\] are orthonormal eigenvectors associated to $\sigma((T-m+1)^{-1},\tilde{\mathcal{L}}_n)\cap\tilde{\Delta}$.
Let $u\in{\rm Dom}(\frak{t})$ and $(T-m+1)^{\frac{1}{2}}u=v$. Denote by $\tilde{P}_n$ the orthogonal projection from $\mathcal{H}$ onto $\tilde{\mathcal{L}}_n$. Using Lemma \ref{l1} and \eqref{es}, we have \begin{align*} \Big\Vert\sum_{j=1}^{d_n}\Big\langle(T-m+1)^{-\frac{1}{2}}&\tilde{P}_nv, \psi_{n,j}\Big\rangle\psi_{n,j} - E(\Delta)u\Big\Vert_{\frak{t}}\\ &=\Big\Vert\sum\Big\langle\tilde{P}_nv, (T-m+1)^{-1}\tilde{\psi}_{n,j}\Big\rangle\psi_{n,j} - E(\Delta)u\Big\Vert_{\frak{t}}\\ &=\Big\Vert\sum\frac{\langle\tilde{P}_nv, \tilde{\psi}_{n,j}\rangle\psi_{n,j}}{\mu_{n,j}-m+1} - E(\Delta)u\Big\Vert_{\frak{t}}\\ &=\Big\Vert\sum\frac{\langle v,\tilde{\psi}_{n,j}\rangle\psi_{n,j}}{\mu_{n,j}-m+1} - E(\Delta)(T-m+1)^{-\frac{1}{2}}v\Big\Vert_{\frak{t}}\\ &=\Big\Vert\sum\frac{\langle v,\tilde{\psi}_{n,j}\rangle\tilde{\psi}_{n,j}}{\Vert\tilde{\psi}_{n,j}\Vert^2} - E(\Delta)v\Big\Vert\longrightarrow 0, \end{align*} and \begin{align*} \Vert Q_nu - E(\Delta)u\Vert_{\frak{t}} &\le\left\Vert\sum\Big\langle(T-m+1)^{-\frac{1}{2}}\tilde{P}_nv, \psi_{n,j}\Big\rangle\psi_{n,j} - E(\Delta)u\right\Vert_{\frak{t}}\\ &+\left\Vert\sum\Big\langle(T-m+1)^{-\frac{1}{2}}(I-\tilde{P}_n)v, \psi_{n,j}\Big\rangle\psi_{n,j}\right\Vert_{\frak{t}}. \end{align*} The first term on the right hand side converges to zero, for the second term we have \begin{align*} &\Big\Vert\sum\Big\langle(T-m+1)^{-\frac{1}{2}}(I-\tilde{P}_n)v, \psi_{n,j}\Big\rangle\psi_{n,j}\Big\Vert_{\frak{t}}\\ &\qquad\qquad=\Big\Vert\sum\Big\langle(T-m+1)^{-1}(I-\tilde{P}_n)v, \tilde{\psi}_{n,j}\Big\rangle\tilde{\psi}_{n,j}\Big\Vert\\ &\qquad\qquad=\Big\Vert\sum(\mu_{n,j}-m+1)\bigg\langle(T-m+1)^{-1}(I-\tilde{P}_n)v, \frac{\tilde{\psi}_{n,j}}{\Vert\tilde{\psi}_{n,j}\Vert}\bigg\rangle\frac{\tilde{\psi}_{n,j}}{\Vert\tilde{\psi}_{n,j}\Vert}\Big\Vert\\ &\qquad\qquad\le(b-m+1)\Vert(T-m+1)^{-1}(I-\tilde{P}_n)v\Vert\longrightarrow 0. \end{align*} \end{proof} }
\begin{lemma}\label{l2b} $\delta_{\frak{a}}\big(\mathcal{L}(\Delta),\mathcal{L}_n(\Delta)\big)=\mathcal{O}(\varepsilon_n)$. \end{lemma} \begin{proof} Let $\mathcal{L}_n(\Delta) = {\rm span}\{u_{n,1},\dots,u_{n,d_n}\}$ where the $u_{n,j}$ are orthonormal eigenvectors of $A_n$, so that \[\frak{a}(u_{n,j},v)=\mu_{n,j}\langle u_{n,j},v\rangle\quad\forall v\in\mathcal{L}_n\quad\textrm{where}\quad\mu_{n,j}\in\Delta.\] For each $v\in\mathcal{L}_n$ we set $(T-m+1)^{\frac{1}{2}}v=:\tilde{v}\in\tilde{\mathcal{L}}_n:=(T-m+1)^{\frac{1}{2}}\mathcal{L}_n$, and hence \begin{equation}\label{es} \langle(T-m+1)^{-1}\tilde{u}_{n,j},\tilde{v}\rangle=\frac{1}{\mu_{n,j}-m+1}\langle\tilde{u}_{n,j},\tilde{v}\rangle\quad\forall\tilde{v}\in\tilde{\mathcal{L}}_n \end{equation} where \[ \frac{1}{\mu_{n,j}-m+1}\in\tilde{\Delta}:=\left[\frac{1}{b-m+1},\frac{1}{a-m+1}\right]. \] Evidently, the set \[\left\{\frac{\tilde{u}_{n,1}}{\sqrt{\mu_{n,1}-m+1}},\dots,\frac{\tilde{u}_{n,d_n}}{\sqrt{\mu_{n,d_n}-m+1}}\right\}\] consists of orthonormal eigenvectors associated to $\sigma((T-m+1)^{-1},\tilde{\mathcal{L}}_n)\cap\tilde{\Delta}$. It is straightforward to show that $\delta(\mathcal{L}(\Delta),\tilde{\mathcal{L}}_n)=\mathcal{O}(\varepsilon_n)$. Using Lemma \ref{l1b} we have for any normalised $u$ with $(T-\lambda)u = 0$ and $\lambda\in\Delta$, \begin{align*} \mathcal{O}(\varepsilon_n)&=\delta(\mathcal{L}(\Delta),\tilde{\mathcal{L}}_n)\\ &\ge\left\Vert\sum_{j=1}^{d_n}\left\langle u,\frac{\tilde{u}_{n,j}}{\Vert\tilde{u}_{n,j}\Vert}\right\rangle \frac{\tilde{u}_{n,j}}{\Vert\tilde{u} _{n,j}\Vert} - u\right\Vert\\ &=\left\Vert\sum\frac{\langle u,(A-m+1)^{\frac{1}{2}}u_{n,j}\rangle}{\mu_{n,j}-m+1} (A-m+1)^{\frac{1}{2}}u_{n,j} - u\right\Vert\\ &=\left\Vert(A-m+1)^{\frac{1}{2}}\left(\sum\frac{\sqrt{\lambda-m+1}}{\mu_{n,j}-m+1}\langle u,u_{n,j}\rangle u_{n,j} - \frac{u}{\sqrt{\lambda-m+1}}\right)\right\Vert\\ &=\left\Vert\sum\frac{\sqrt{\lambda-m+1}}{\mu_{n,j}-m+1}\langle u,u_{n,j}\rangle u_{n,j} -\frac{u}{\Vert u\Vert_{\frak{a}}}\right\Vert_{\frak{a}}\\ &\ge {\rm dist}_{\frak{a}}\left(\frac{u}{\Vert u\Vert_{\frak{a}}},\mathcal{L}_n(\Delta)\right). \end{align*} \end{proof}
\void{ \begin{corollary}\label{cor3} There exists a constant $C_1\ge0$ such that \[\sigma(A+iB_n)\cap\mathbb{D}(\lambda_j+i,C_1\varepsilon_n)\ne\varnothing\quad1\le j\le d\quad\textrm{for all sufficiently large}\quad n\in\mathbb{N} \] with the total multiplicity of the intersection equal to the multiplicity of $\lambda_j\in\sigma(T)$. \end{corollary} \begin{proof} Using Lemma \ref{l2b}, there exists a constant $C_0\ge 0$ such that for each $u\in\mathcal{L}(\Delta)$ with $\Vert u\Vert_\frak{a} =1$ there exists a $u_n\in\mathcal{L}_n(\Delta)$ such that $\Vert u - u_n\Vert_{\frak{a}}\le C_0\varepsilon_n$. Since $\Delta = (\alpha,\beta)$ we deduce that $\Vert u\Vert>\alpha-m+1$, and hence that \[ \left\Vert \frac{u}{\Vert u\Vert} -\frac{u_n}{\Vert u\Vert}\right\Vert \le \frac{C_0\varepsilon_n}{\alpha-m+1}. \] Therefore $\Vert(I-B_n)E(\Delta)\Vert = \mathcal{O}(\varepsilon_n)$, and the result follows from Corollary \ref{cor2}. \end{proof}
\begin{remark} An immediate consequence of Lemma \ref{l2b} is $\Vert(I-B_n)E(\Delta)\Vert=\mathcal{O}(\varepsilon_n)$. Hence with this Corollary \ref{cor2} holds with $B=B_n$ for all sufficiently large $n$. However, it would also seem possible that $\Vert(I-B_n)E(\Delta)\Vert$ could converge to zero faster than this estimate, since \end{remark} } }
In view of Lemma \ref{cor2}, it is natural to consider operators of the form $A+iP_n$ where $(P_n)$ is a sequence of orthogonal projections which converges strongly to the identity operator. The range of $P_n$ is denoted $\mathcal{L}_n$. It follows from Proposition \ref{thm1} and Lemma \ref{cor2} that \begin{equation}\label{alimit} \lim_{n\to\infty}\sigma(A+iP_n)\cap\mathcal{U}_{a,b}^{0,1} = \big\{\lambda_1+i,\dots,\lambda_d+i\big\}. \end{equation} We prove below, in Theorem \ref{QQ}, that elements from $\sigma(A+iP_n)$ converge to $\lambda+i$ extremely rapidly. To this end, we denote by $\mathcal{M}_n(\{\lambda+i\})$ the spectral subspace which corresponds to $\sigma(A+iP_n)\cap\mathbb{D}(\lambda+i,r)$. We also set $\varepsilon_n=\delta(\mathcal{L}(\Delta),\mathcal{L}_n)$.
\begin{theorem}\label{est} There exists a constant $c_0>0$ such that \[\hat{\delta}_{\frak{a}}\big(\mathcal{L}(\{\lambda\}),\mathcal{M}_n(\{\lambda+i\})\big)\le c_0\varepsilon_n\quad\text{for all sufficiently large }n.\] \end{theorem} \begin{proof} For simplicity, let us denote $E=E(\Delta)$. Note that \[\sigma(A+iE)=\big(\sigma(A)\backslash\{\lambda_1,\dots,\lambda_d\}\big)\cup\{\lambda_1+i,\dots,\lambda_d+i\}\] and the spectral subspace associated to $\lambda+i$ is $\mathcal{L}(\{\lambda\})$. With $r$ as in \eqref{rhyp}, we have for any $\vert\lambda+i-z\vert=r$, \[ z\in\rho(A+iE)\quad\textrm{with}\quad\Vert(A+iE-z)^{-1}\Vert=\frac{1}{r}. \] It follows easily from Proposition \ref{thm1} that there exist a $c_1>0$ and an $N\in\mathbb{N}$ such that, for all $n\ge N$ and any $\vert\lambda+i-z\vert=r$, we have \[z\in\rho(A+iP_n)\quad\textrm{with}\quad\Vert(A + iP_n-z)^{-1}\Vert\le\frac{1}{c_1}. \] Let $u\in\mathcal{H}$ with $\Vert u\Vert=1$. Then, using the identity \begin{align*} (A+iP_n - z)^{-1} &= (A+iE - z)^{-1}\\ &\quad+(A+iE - z)^{-1}(iE-iP_n)(A+iP_n - z)^{-1} \end{align*} and recalling that $m=\min\sigma(A)$, we obtain \begin{align*} &\Vert(A-m+1)^{\frac{1}{2}}(A+iP_n - z)^{-1}u\Vert\\ &\qquad\le\Vert(A-m+1)^{\frac{1}{2}}(A+iE - z)^{-1}u\Vert\\ &\qquad\quad+\Vert(A-m+1)^{\frac{1}{2}}(A+iE - z)^{-1}(iE-iP_n)(A+iP_n - z)^{-1}u\Vert\\ &\qquad\le\Vert(A-m+1)^{\frac{1}{2}}(A+iE - z)^{-1}\Vert\\ &\qquad\quad+ 2\Vert(A-m+1)^{\frac{1}{2}}(A+iE - z)^{-1}\Vert\Vert(A+iP_n - z)^{-1}u\Vert\\ &\qquad\le \max_{\vert\lambda+i - z\vert=r}\left\{\frac{(2+c_1)\Vert(A-m+1)^{\frac{1}{2}}(A+iE - z)^{-1}\Vert}{c_1}\right\} =:M. \end{align*} Now let $u\in\mathcal{L}(\{\lambda\})$ with $\Vert u\Vert=1$. The above estimate gives \begin{align} &\Vert(A+iE - z)^{-1}u -(A+iP_n - z)^{-1}u\Vert_{\frak{a}}\nonumber\\ &\qquad\qquad\qquad\qquad= \Vert(A+iP_n - z)^{-1}(P_n-E)(A+iE - z)^{-1}u\Vert_{\frak{a}}\nonumber\\ &\qquad\qquad\qquad\qquad= \frac{\Vert(A-m+1)^{\frac{1}{2}}(A+iP_n - z)^{-1}(P_n-I)u\Vert}{r}\nonumber\\ &\qquad\qquad\qquad\qquad\le\frac{M\Vert(I-P_n)E\Vert}{r}\nonumber\\ &\qquad\qquad\qquad\qquad=\frac{M\delta(\mathcal{L}(\Delta),\mathcal{L}_n)}{r}\label{Mr}. \end{align} Set \begin{align*} u_n&:=-\frac{1}{2\pi i}\int_{\vert\lambda+i-\zeta\vert=r}(A + iP_n-\zeta)^{-1}u~d\zeta, \end{align*} then $u_n\in\mathcal{M}_n(\{\lambda+i\})$. Using estimate \eqref{Mr}, \begin{align*} \Bigg\Vert&\frac{u}{\Vert u\Vert_{\frak{a}}}-\frac{u_n}{\Vert u\Vert_{\frak{a}}}\Bigg\Vert_{\frak{a}}\\ &\qquad=\frac{1}{2\pi\Vert u\Vert_{\frak{a}}}\Bigg\Vert\int_{\vert\lambda+i-\zeta\vert=r}(A + iE-\zeta)^{-1}u - (A + iP_n-\zeta)^{-1}u~d\zeta\Bigg\Vert_{\frak{a}}\\ &\qquad\le\frac{1}{2\pi}\int_{\vert\lambda+i-\zeta\vert=r}\Big\Vert(A + iE-\zeta)^{-1}u - (A + iP_n-\zeta)^{-1}u\Big\Vert_{\frak{a}}~\vert d\zeta\vert\\ &\qquad=\mathcal{O}\big(\delta(\mathcal{L}(\Delta),\mathcal{L}_n)\big), \end{align*} and hence \[\delta_{\frak{a}}\big(\mathcal{L}(\{\lambda\}),\mathcal{M}_n(\{\lambda+i\})\big)=\mathcal{O}(\varepsilon_n). \] Furthermore, using Lemma \ref{cor2}, $\dim \mathcal{M}_n(\{\lambda+i\})=\dim \mathcal{L}(\{\lambda\})=\kappa<\infty$ for all sufficiently large $n$, therefore the following formula holds \begin{align*} \delta_{\frak{a}}\big(\mathcal{M}_n(\{\lambda+i\}),\mathcal{L}(\{\lambda\})\big)\le \frac{\delta_{\frak{a}}\big(\mathcal{L}(\{\lambda\}),\mathcal{M}_n(\{\lambda+i\})\big)}{1-\delta_{\frak{a}}\big(\mathcal{L}(\{\lambda\}),\mathcal{M}_n(\{\lambda+i\})\big)}; \end{align*} see \cite[Lemma 213]{kato}. \end{proof}
It follows, from Theorem \ref{est}, that for all sufficiently large $n\in\mathbb{N}$, the operator $A+iP_n$ will have $\kappa$ (repeated) eigenvalues enclosed by the circle $\vert\lambda+i-z\vert=r$. We denote these eigenvalues by $\mu_{n,1},\dots,\mu_{n,\kappa}$.
\begin{theorem}\label{QQ} $\max_{1\le j\le\kappa}\vert\lambda+i-\mu_{n,j}\vert = \mathcal{O}(\varepsilon_n^2)$. \end{theorem} \begin{proof} Let $u_1,\dots,u_\kappa$ be an orthonormal basis for $\mathcal{L}(\{\lambda\})$. Let $Q_n$ be the orthogonal projection from $\mathcal{H}_{\frak{a}}$ onto $\mathcal{M}_n(\{\lambda+i\})$ and set $u_{n,j}=Q_nu_j$ for each $1\le j\le \kappa$. By Theorem \ref{est}, \[ \Vert u_j-u_{n,j}\Vert_{\frak{a}}=\Vert(I-Q_n)u_j\Vert_{\frak{a}} = {\rm dist}_{\frak{a}}\big(u_j,\mathcal{M}_n(\{\lambda+i\})\big) = \mathcal{O}(\varepsilon_n), \] and we may assume that $Q_n$ maps $\mathcal{L}(\{\lambda\})$ one-to-one onto $\mathcal{M}_n(\{\lambda+i\})$.
Consider the $\kappa\times\kappa$ matrices \[ [L_n]_{p,q}=\langle(A+iP_n)u_{n,q},u_{n,p} \rangle\quad\textrm{and}\quad [M_n]_{p,q}=\langle u_{n,q},u_{n,p} \rangle. \] Evidently, $M_n$ converges to the $\kappa\times\kappa$ identity matrix and $\sigma(L_nM_n^{-1})$ is precisely the set $\{\mu_{n,1},\dots,\mu_{n,\kappa}\}$. We have \[ [L_n]_{p,q} = \frak{a}(u_{n,q},u_{n,p}) + i\langle P_n u_{n,q},u_{n,p}\rangle. \] Consider the first term on the right hand side, \begin{align*} \frak{a}(u_{n,q},u_{n,p}) &= \frak{a}((Q_n-I)u_q,u_p) + \frak{a}((Q_n-I)u_q,(Q_n-I)u_p)\\ &\quad+\frak{a}(u_q,(Q_n-I)u_p)+\frak{a}(u_q,u_p)\\ &=\lambda\langle(Q_n-I)u_q,u_p\rangle + \frak{a}((Q_n-I)u_q,(Q_n-I)u_p)\\ &\quad+\lambda\langle u_q,(Q_n-I)u_p\rangle+\lambda\delta_{pq} \end{align*} where \begin{align*} \vert(\lambda-m+1)\langle u_q,(Q_n-I)u_p\rangle\vert&=\vert\frak{a}(u_q,(Q_n-I)u_p)\\ &\quad+(1-m)\langle u_q,(Q_n-I)u_p\rangle\vert\\ &=\vert\langle u_q,(Q_n-I)u_p\rangle_{\frak{a}}\vert\\ &=\vert\langle (Q_n-I)u_q,(Q_n-I)u_p\rangle_{\frak{a}}\vert\\ &\le\Vert(Q_n-I)u_q\Vert_{\frak{a}}\Vert(Q_n-I)u_p\Vert_{\frak{a}}, \end{align*} hence $\frak{a}(u_{n,q},u_{n,p}) = \lambda\delta_{pq} + \mathcal{O}(\varepsilon_n^2)$. Similarly, \begin{align*} \langle P_nu_{n,q},u_{n,p}\rangle&=\langle P_{n}(Q_n-I)u_q,(Q_n-I)u_p\rangle + \langle (Q_n-I)u_q,(P_{n}-I)u_p\rangle\\ &\quad+\langle (Q_n-I)u_q,u_p\rangle+\langle (P_{n}-I)u_q,(Q_n-I)u_p\rangle\\ &\quad+\langle u_q,(Q_n-I)u_p\rangle +\langle(P_{n}-I)u_q,u_p\rangle+\delta_{pq} \end{align*} and \begin{align*} [M_n]_{p,q}&= \langle(Q_n-I)u_{q},(Q_n-I)u_{p} \rangle + \langle(Q_n-I)u_{q},u_{p} \rangle+ \langle u_{q},(Q_n-I)u_{p} \rangle\\ &\quad + \delta_{pq}. \end{align*} Hence \[ i\langle P_nu_{n,q},u_{n,p}\rangle = i\delta_{pq} + \mathcal{O}(\varepsilon_n^2)\quad\textrm{and}\quad[M_n]_{p,q} = \delta_{pq}+\mathcal{O}(\varepsilon_n^2). \] Then \[ [L_n]_{p,q}=(\lambda+i)\delta_{p,q}+\mathcal{O}(\varepsilon_n^2)\quad\textrm{and}\quad [M_n]^{-1}_{p,q} = \delta_{pq}+\mathcal{O}(\varepsilon_n^2), \] and we deduce that $[L_nM_n^{-1}]_{p,q}=(\lambda+i)\delta_{p,q}+\mathcal{O}(\varepsilon_n^2)$. The result follows from the Gershgorin circle theorem. \end{proof}
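The quadratic convergence rate of Theorem \ref{QQ} can be observed directly. In the following minimal sketch (NumPy assumed; the rank-one construction is our own) $A$ has the simple eigenvalue $\lambda=-1$ with known eigenvector, $P_n$ is the coordinate projection onto the first $n$ basis vectors, and the error $\vert\mu_n-(\lambda+i)\vert$ is compared with $\varepsilon_n^2$.
\begin{verbatim}
import numpy as np

N = 40
u = 2.0 ** (-np.arange(N)); u /= np.linalg.norm(u)
A = np.eye(N) - 2.0 * np.outer(u, u)   # sigma(A) = {-1, 1}

for n in (2, 4, 6, 8, 10):
    P = np.zeros((N, N)); P[:n, :n] = np.eye(n)
    eps = np.linalg.norm((np.eye(N) - P) @ u)      # eps_n
    mu = np.linalg.eigvals(A + 1j * P)
    err = np.min(np.abs(mu - (-1 + 1j)))
    print(f"n={n:2d}  eps_n={eps:.2e}  err={err:.2e}  "
          f"err/eps_n^2={err/eps**2:.2f}")
\end{verbatim}
The reported ratios remain bounded as $n$ increases, in agreement with Theorem \ref{QQ}.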
\section{The Perturbation Method}
The perturbation method, for locating $\sigma_{\mathrm{dis}}(A)$, was introduced in \cite{mar1} where it was formulated for Schr\"odinger operators. A more general version was presented in \cite{me} which required \emph{a priori} knowledge about the location of gaps in the essential spectrum. In this section we present a new perturbation method which requires no \emph{a priori} information and converges rapidly to $\sigma_{\mathrm{dis}}(A)$. In fact, our examples suggest that the method will actually capture the whole of $\sigma(A)$.
The idea is to perturb eigenvalues off the real line by adding a perturbation $iP$ where $P$ is a finite-rank orthogonal projection. The results from the previous sections allow us to perturb eigenvalues very precisely. The perturbed eigenvalues and their multiplicities may then be approximated with the Galerkin method without incurring spectral pollution; see \cite[Theorem 2.5 \& Theorem 2.9]{me}. As above, $(P_n)$ denotes a sequence of finite-rank orthogonal projections each with range $\mathcal{L}_n$. We shall assume that \begin{equation}\label{subspaces} \forall u\in{\rm Dom}(\frak{a})\quad\exists u_n\in\mathcal{L}_n:\quad\Vert u-u_n\Vert_{\frak{a}}\to0. \end{equation} This is the usual hypothesis for a sequence of trial spaces when using the Galerkin method. For sufficiently large $n$ we have, by Proposition \ref{thm1}, that \[ \mathcal{U}_{a,b}^{0,1}\cap\sigma(A+iP_n) \] will consist of eigenvalues in a small neighbourhood of $\Gamma_{a,b}^{0,1}$, and, by Theorem \ref{QQ}, of eigenvalues within neighbourhoods of the $\lambda_j+i$ of radius $\mathcal{O}(\varepsilon_n^2)$; recall that \begin{equation}\label{epg} \varepsilon_{n}=\delta(\mathcal{L}(\Delta),\mathcal{L}_{n}). \end{equation} We stress that $\varepsilon_n^2$ is extremely small; indeed, if pollution does not occur and we use the Galerkin method to approximate the eigenvalue $\lambda$, then our approximation will be of the order $\epsilon_n^2$ where \begin{equation}\label{epb} \epsilon_n:=\delta_{\frak{a}}(\mathcal{L}(\Delta),\mathcal{L}_{n}). \end{equation}
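For a finite Hermitian matrix the recipe just described takes only a few lines. The sketch below is purely illustrative (NumPy assumed; the function name \texttt{perturbed\_spectrum} is ours and does not appear in the cited works): it forms $A+iP$ for the orthogonal projection $P$ onto a trial subspace and returns the eigenvalues whose imaginary parts are close to one, the real parts of which are the candidates for elements of $\sigma(A)$.
\begin{verbatim}
import numpy as np

def perturbed_spectrum(A, V, tol=0.2):
    # P projects orthogonally onto the span of the columns of V
    Q, _ = np.linalg.qr(V)
    P = Q @ Q.conj().T
    mu = np.linalg.eigvals(A + 1j * P)
    return mu[np.abs(mu.imag - 1.0) < tol]
\end{verbatim}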
In this section we are concerned with the approximation of the eigenvalues of $A+iP_n$ using the Galerkin method. To this end, for our fixed $\lambda\in\{\lambda_1,\dots,\lambda_d\}$, let us fix an $N\in\mathbb{N}$ such that \[ \dim\mathcal{M}_{n}(\{\lambda+i\})=\dim\mathcal{L}(\{\lambda\})=\kappa\quad\forall n\ge N; \] such an $N$ is assured by Theorem \ref{est}.
Associated to the restriction of the form $\frak{a}$ to the trial space $\mathcal{L}_k$ is a self-adjoint operator acting in the Hilbert space $\mathcal{L}_k$; denote this operator and corresponding spectral measure by $A_k$ and $E_k$, respectively. The Galerkin eigenvalues of $A+iP_n$ with respect to the trial space $\mathcal{L}_k$ are denoted $\sigma(A+iP_n,\mathcal{L}_k)$ and are precisely the eigenvalues of \[ A_k+iP_kP_n:\mathcal{L}_k\to\mathcal{L}_k. \] For our $\lambda\in\Delta$, we denote by $\mathcal{M}_{n,k}(\{\lambda+i\})$ the spectral subspace associated to the operator $A_k + iP_kP_n:\mathcal{L}_k\to\mathcal{L}_k$ and those eigenvalues in a neighbourhood of $\lambda+i$. Then, for a fixed $n\ge N$, we have for all sufficiently large $k$ \begin{equation}\label{eqdims} \dim\mathcal{M}_{n,k}(\{\lambda+i\})=\dim\mathcal{M}_{n}(\{\lambda+i\})=\dim\mathcal{L}(\{\lambda\})=\kappa. \end{equation} We now study the convergence properties of $\mathcal{M}_{n,k}(\{\lambda+i\})$ and associated eigenvalues, where our main convergence results are expressed in terms of $\varepsilon_{k}$ and $\epsilon_{n}$ from \eqref{epg} and \eqref{epb}, respectively. We note that, using Theorem \ref{est}, \begin{equation}\label{dbounds} \delta_{\frak{a}}(\mathcal{M}_n(\{\lambda+i\}),\mathcal{L}_{k})\le\delta_{\frak{a}}(\mathcal{M}_n(\{\lambda+i\}),\mathcal{L}(\{\lambda\})) + \delta_{\frak{a}}(\mathcal{L}(\{\lambda\}),\mathcal{L}_{k})\le c_0\varepsilon_n + \epsilon_k \end{equation} where $c_0>0$ is independent of $n$.
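In coordinates, $\sigma(A+iP_n,\mathcal{L}_k)$ is a generalized eigenvalue problem: if the columns of $X$ form a (not necessarily orthonormal) basis of $\mathcal{L}_k$, then $\mu\in\sigma(A+iP_n,\mathcal{L}_k)$ if and only if $(X^*AX+iX^*P_nX)v=\mu X^*Xv$ for some $v\ne0$. A brief sketch for the bounded (matrix) case, assuming SciPy; for unbounded $A$ the entries of \texttt{K} would instead be values of the form $\frak{a}$ on basis pairs.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigvals

def galerkin_eigs(A, Pn, X):
    K = X.conj().T @ A @ X    # restriction of the form to L_k
    C = X.conj().T @ Pn @ X   # compression of the projection P_n
    M = X.conj().T @ X        # Gram (mass) matrix of the basis
    return eigvals(K + 1j * C, M)
\end{verbatim}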
\begin{lemma}\label{l2d} There exists a constant $c_2>0$, independent of $n\ge N$, such that \[ \max_{\vert\lambda+i-z\vert=r}\Vert(A_k + iP_kP_n - z)^{-1}\Vert\le c_2\quad\textrm{for all sufficiently large } k. \] \end{lemma} \begin{proof} We assume that the assertion is false. Then there exist sequences $(n_p)$ and $(\gamma_p)$ with $\gamma_p\to\infty$ such that, for each fixed $p$, there is a subsequence $(k_q)$ with \[ \max_{\vert\lambda+i-z\vert=r}\Vert(A_{k_q} + iP_{k_q}P_{n_p} - z)^{-1}\Vert> 2\gamma_p\quad\textrm{for all sufficiently large }q. \] Let us fix a $p$. We may assume, without loss of generality, that \[ \max_{\vert\lambda+i-z\vert=r}\Vert(A_{k} + iP_{k}P_{n_p} - z)^{-1}\Vert> 2\gamma_p\quad\textrm{for all sufficiently large }k. \] Let $(z_k)$ be a sequence with $\vert\lambda+i-z_k\vert=r$ and \[ \Vert(A_{k} + iP_{k}P_{n_p} - z_k)^{-1}\Vert> 2\gamma_p\quad\textrm{for all sufficiently large }k. \] The sequence $(z_k)$ has a convergent subsequence; without loss of generality, we assume that $z_k\to z$ where $\vert\lambda+i-z\vert=r$. For all sufficiently large $k$, there exists a normalised vector $u_k$ with \[ \Vert(A_{k} + iP_{k}P_{n_p} - z_k)u_k\Vert<\frac{1}{2\gamma_p}. \] Then \begin{align*} \Vert(A_{k} + iP_{k}P_{n_p} - z)u_k\Vert &\le\Vert(A_{k} + iP_{k}P_{n_p} - z_k)u_k\Vert + \vert z_k-z\vert\\ &<\frac{1}{2\gamma_p} + \vert z_k-z\vert, \end{align*} and therefore \[ \max_{\efrac{v\in\mathcal{L}_{k}}{\Vert v\Vert=1}}\vert\frak{a}(u_k,v) + i\langle P_{n_p}u_k,v\rangle - z\langle u_k,v\rangle\vert <\frac{1}{\gamma_p}\quad\textrm{for all sufficiently large }k. \] The sequence $(P_{n_p}u_k)$ has a convergent subsequence. We assume, without loss of generality, that $iP_{n_p}u_k\to w$. Therefore \begin{align*} \max_{\efrac{v\in\mathcal{L}_{k}}{\Vert v\Vert=1}}\vert\frak{a}(u_k,v) + \langle w,v\rangle - z\langle u_k,v\rangle\vert <\frac{1}{\gamma_p} + \alpha_k\quad\textrm{for some} \quad 0\le\alpha_k\to 0. \end{align*} Denote by $\hat{P}_k$ the orthogonal projection from $\mathcal{H}_{\frak{a}}$ onto $\mathcal{L}_k$. Let $x=-(A-z)^{-1}w$ and set $x_{k}=\hat{P}_{k}x$, then for any $v\in\mathcal{L}_k$ \begin{align*} \frak{a}(x_{k},v) - z\langle x_{k},v\rangle &= \frak{a}(x,v) - z\langle x,v\rangle - \frak{a}((I-\hat{P}_{k})x,v) + z\langle(I-\hat{P}_{k})x,v\rangle\\ &=\frak{a}(x,v) - z\langle x,v\rangle + (z-m+1)\langle(I-\hat{P}_{k})x,v\rangle\\ &=-\langle w,v\rangle + (z-m+1)\langle(I-\hat{P}_{k})x,v\rangle. \end{align*} We deduce that \begin{align*} \max_{\efrac{v\in\mathcal{L}_{k}}{\Vert v\Vert=1}}\vert\frak{a}(u_k-x_{k},v) - z\langle u_k-x_{k},v\rangle\vert <\frac{1}{\gamma_p} + \beta_k\quad\textrm{for some} \quad 0\le\beta_k\to 0, \end{align*} hence \[ \Vert u_k-x_{k}\Vert <\left(\frac{1}{\gamma_p} + \beta_k\right)\Big/{\rm Im}\; z\le \frac{1}{\gamma_p(1-r)} + \frac{\beta_k}{(1-r)} \] and therefore \begin{equation}\label{psib} \Vert x\Vert\leftarrow\Vert x_{k}\Vert > 1 -\frac{1}{\gamma_p(1-r)} - \frac{\beta_k}{(1-r)}\to 1-\frac{1}{\gamma_p(1-r)}. \end{equation} Let $y=(A+iP_{n_p}-z)x= -w - iP_{n_p}(A-z)^{-1}w$. Since $iP_{n_p}u_k\to w$ implies that $w\in\mathcal{L}_{n_p}\subset\mathcal{H}_{\frak{a}}$, we deduce that $y\in\mathcal{H}_{\frak{a}}$ and we set $y_{k}=\hat{P}_{k}y$. Using \eqref{psib} and with $c_1>0$ as in the proof of Theorem \ref{est}, \begin{align*} \vert\frak{a}(x_{k},y_{k}) + i\langle P_{n_p}x_{k},y_{k}\rangle - z\langle x_{k},y_{k}\rangle\vert &\to \Vert(A+iP_{n_p}-z) x\Vert^2\\&\ge c_1^2\left(1-\frac{1}{\gamma_p(1-r)}\right)^2. 
\end{align*} Furthermore, using the estimates above we have \begin{align*} \vert\frak{a}(x_{k},y_{k}) + i\langle &P_{n_p}x_{k},y_{k}\rangle - z\langle x_{k},y_{k}\rangle\vert\\ &=\vert\frak{a}(x_{k}-u_k,y_k) + i\langle P_{n_p}(x_{k}-u_k),y_{k}\rangle - z\langle x_{k}-u_k,y_{k}\rangle\\ &\quad+\frak{a}(u_k,y_{k}) + i\langle P_{n_p}u_k,y_{k}\rangle - z\langle u_k,y_{k}\rangle\vert\\ &\le \vert\frak{a}(x_{k}-u_k,y_k) - z\langle x_{k}-u_k,y_{k}\rangle\vert + \vert\langle P_{n_p}(x_{k}-u_k),y_{k}\rangle\vert\\ &\quad+\vert\frak{a}(u_k,y_{k}) + i\langle P_{n_p}u_k,y_{k}\rangle - z\langle u_k,y_{k}\rangle\vert\\ &<\left(\frac{1}{\gamma_p} + \beta_k\right)\Vert y_{k}\Vert +\left(\frac{1}{\gamma_p(1-r)} + \frac{\beta_k}{(1-r)}\right)\Vert y_{k}\Vert + \frac{1}{\gamma_p}\Vert y_{k}\Vert. \end{align*} Since $y=(A+iP_{n_p}-z)x = -w - iP_{n_p}(A-z)^{-1}w$ where $\Vert w\Vert\le 1$, \[\Vert y_k\Vert\to\Vert y\Vert=\Vert-w - iP_{n_p}(A-z)^{-1}w\Vert\le\Vert w\Vert+\Vert iP_{n_p}(A-z)^{-1}w\Vert\le 1+\frac{1}{1-r}, \] hence \begin{align*} \left(\frac{1}{\gamma_p} + \beta_k\right)\Vert y_{k}\Vert +\left(\frac{1}{\gamma_p(1-r)} + \frac{\beta_k}{1-r}\right)&\Vert y_{k}\Vert + \frac{1}{\gamma_p}\Vert y_{k}\Vert\\ &\to\left(\frac{2}{\gamma_p}+\frac{1}{\gamma_p(1-r)}\right)\Vert y\Vert\\ &\le \left(\frac{2}{\gamma_p}+\frac{1}{\gamma_p(1-r)}\right)\left(1+\frac{1}{1-r}\right). \end{align*} Therefore, we have \begin{align*} c_1^2\left(1-\frac{1}{\gamma_p(1-r)}\right)^2&\le\Vert(A+iP_{n_p}-z)x\Vert^2\\ &\leftarrow \vert\frak{a}(x_{k},y_{k}) + i\langle P_{n_p}x_{k},y_{k}\rangle - z\langle x_{k},y_{k}\rangle\vert\\ &\le\left(\frac{1}{\gamma_p} + \beta_k\right)\Vert y_{k}\Vert +\left(\frac{1}{\gamma_p(1-r)} + \frac{\beta_k}{1-r}\right)\Vert y_{k}\Vert\\ &\quad + \frac{1}{\gamma_p}\Vert y_{k}\Vert\\ &\to\left(\frac{2}{\gamma_p}+\frac{1}{\gamma_p(1-r)}\right)\Vert y\Vert\\ &\le \left(\frac{2}{\gamma_p}+\frac{1}{\gamma_p(1-r)}\right)\left(1+\frac{1}{1-r}\right). \end{align*} Evidently, the left hand side is larger than the right hand side for all sufficiently large $p$. The result follows from the contradiction. \end{proof}
\begin{theorem}\label{limlem1} There exist constants $c_3,c_4>0$, both independent of $n\ge N$, such that \begin{equation}\label{sscon1} \hat{\delta}_{\frak{a}}\big(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{M}_{n,k}(\{\lambda+i\})\big)\le c_3\delta_{\frak{a}}(\mathcal{M}_n(\{\lambda+i\}),\mathcal{L}_{k}) \end{equation} and \begin{equation}\label{sscon2} \hat{\delta}_{\frak{a}}\big(\mathcal{M}_{n,k}(\{\lambda+i\}),\mathcal{L}(\{\lambda\})\big)\le c_4(\varepsilon_n+\epsilon_k) \end{equation} for all sufficiently large $k$. \end{theorem} \begin{proof} First we prove \eqref{sscon1}. Let $u\in\mathcal{M}_{n}(\{\lambda+i\})$ with $\Vert u\Vert=1$. For $\vert\lambda+i-z\vert=r$, we denote \[A_k(z)=A_k+iP_kP_n-z\quad\textrm{and}\quad x(z)=(A + iP_n-z)^{-1}u\in\mathcal{M}_{n}(\{\lambda+i\}). \] Then, with $c_1>0$ as in the proof of Theorem \ref{est}, we have $\Vert x(z)\Vert\le c_1^{-1}$ and therefore \begin{align*} \Vert x(z)\Vert_{\frak{a}}^2 &= \frak{a}[x(z)] -(m-1)\Vert x(z)\Vert^2\\ &= \langle Ax(z),x(z)\rangle -(m-1)\Vert x(z)\Vert^2\\ &= \langle A(A + iP_n-z)^{-1}u,x(z)\rangle -(m-1)\Vert x(z)\Vert^2\\ &= \langle u,x(z)\rangle -\langle(iP_n-z)x(z),x(z)\rangle-(m-1)\Vert x(z)\Vert^2\\ &\le \Vert x(z)\Vert + (2+\vert m\vert+\vert z\vert)\Vert x(z)\Vert^2\\ &\le\frac{1}{c_1} + \frac{2+\vert m\vert+\vert z\vert}{c_1^2}. \end{align*} Hence \begin{equation}\label{sbd} \Vert(A + iP_n-z)^{-1}u\Vert_{\frak{a}} = \Vert x(z)\Vert_{\frak{a}}\le K_1 \end{equation} for constant $K_1>0$ which is independent of $n\ge N$ and $\vert\lambda+i-z\vert=r$. Let $v\in\mathcal{L}_k$ with $\Vert v\Vert=1$. Then \begin{align*} \langle A_k(z)\hat{P}_kx(z)-u,v\rangle &= \frak{a}(\hat{P}_kx(z),v) + i\langle P_{n}\hat{P}_kx(z),v\rangle - z\langle \hat{P}_kx(z),v\rangle - \langle u,v\rangle\\ &=i\langle P_{n}(\hat{P}_k-I)x(z),v\rangle - (z-m+1)\langle (\hat{P}_k-I)x(z),v\rangle. \end{align*} Hence \begin{displaymath} \Vert A_k(z)\hat{P}_kx(z) - P_ku\Vert \le\big(1+\vert(z-m+1)\vert\big)\Vert(\hat{P}_k-I)x(z)\Vert \end{displaymath} then, using Lemma \ref{l2d}, \begin{align*} \Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert&\le c_2\Vert A_k(z)\hat{P}_kx(z) - P_ku\Vert\\ &\le c_2\big(1+\vert(z-m+1)\vert\big)\Vert(\hat{P}_k-I)x(z)\Vert \end{align*} where $c_2$ is independent of $n\ge N$ and $\vert\lambda+i-z\vert=r$. Furthermore, \begin{align*} \Vert A_k(z)^{-1}P_ku - x(z)\Vert_{\frak{a}}&\le\Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert_{\frak{a}} + \Vert(\hat{P}_k-I)x(z)\Vert_{\frak{a}} \end{align*} where \begin{align*} \Vert A_k(z)^{-1}&P_ku - \hat{P}_kx(z)\Vert_{\frak{a}}^2\\ &=(\frak{a}-m)[A_k(z)^{-1}P_ku - \hat{P}_kx(z)] + \Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert^2\\ &=\langle P_ku -A_k(z)\hat{P}_kx(z),A_k(z)^{-1}P_ku - \hat{P}_kx(z)\rangle\\ &\quad-\langle(iP_kP_{n}-z)(A_k(z)^{-1}P_ku - \hat{P}_kx(z)),A_k(z)^{-1}P_ku - \hat{P}_kx(z)\rangle\\ &\quad+ (1-m)\Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert^2\\ &\le\Vert P_ku -A_k(z)\hat{P}_kx(z)\Vert\Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert\\ &\quad+\Vert iP_kP_{n}-z\Vert\Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert^2\\ &\quad+ \vert 1-m\vert\Vert A_k(z)^{-1}P_ku - \hat{P}_kx(z)\Vert^2\\ &\le c_2\big(1+ \vert(z-m+1)\vert\big)^2\Vert(\hat{P}_k-I)x(z)\Vert^2\\ &\quad+ \big(1+\vert z\vert\big)c_2^2\big(1+\vert(z-m+1)\vert\big)^2\Vert(\hat{P}_k-I)x(z)\Vert^2\\ &\quad+\vert 1-m\vert c_2^2\big(1+\vert(z-m+1)\vert\big)^2\Vert(\hat{P}_k-I)x(z)\Vert^2. 
\end{align*} Therefore, \begin{align} \Vert A_k(z)^{-1}P_ku - (A + iP_{n}-z)^{-1}u\Vert_{\frak{a}}&\le K_2\Vert(\hat{P}_k-I)x(z)\Vert_{\frak{a}}\nonumber\\ &\le K_2\Vert x(z)\Vert_{\frak{a}}\delta_{\frak{a}}(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{L}_k)\nonumber\\ &\le K_1K_2\delta_{\frak{a}}(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{L}_k)\label{ifff} \end{align} for constant $K_2>0$ which is independent of $n\ge N$ and $\vert\lambda+i-z\vert=r$. Set \begin{align*} u_k&:=-\frac{1}{2\pi i}\int_{\vert\lambda+i-\zeta\vert=r}A_k(\zeta)^{-1}P_ku~d\zeta, \end{align*} then $u_k\in\mathcal{M}_{n,k}(\{\lambda+i\})$ and \begin{align*} \Bigg\Vert\frac{u}{\Vert u\Vert_{\frak{a}}}&-\frac{u_k}{\Vert u\Vert_{\frak{a}}}\Bigg\Vert_{\frak{a}}=\\ &\frac{1}{2\pi\Vert u\Vert_{\frak{a}}}\Bigg\Vert\int_{\vert\lambda+i-\zeta\vert=r}A_k(\zeta)^{-1}P_ku - (A + iP_n-\zeta)^{-1}u~d\zeta\Bigg\Vert_{\frak{a}}\\ &\le\frac{1}{2\pi\Vert u\Vert_{\frak{a}}}\int_{\vert\lambda+i-\zeta\vert=r}\big\Vert A_k(\zeta)^{-1}P_ku - (A + iP_n-\zeta)^{-1}u\big\Vert_{\frak{a}}~\vert d\zeta\vert. \end{align*} Combining this estimate with \eqref{ifff}, we deduce that for some constant $K_3> 0$ which is independent of $n\ge N$ and $\vert\lambda+i-z\vert=r$, we have \[\delta_{\frak{a}}(\mathcal{M}_n(\{\lambda+i\}),\mathcal{M}_{n,k}(\{\lambda+i\}))\le K_3\delta_{\frak{a}}(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{L}_k). \] Then, by virtue of \eqref{eqdims}, the following formula holds for all sufficiently large $k$, \[ \delta_{\frak{a}}\big(\mathcal{M}_{n,k}(\{\lambda+i\}),\mathcal{M}_n(\{\lambda+i\})\big)\le\frac{\delta_{\frak{a}}\big(\mathcal{M}_n(\{\lambda+i\}),\mathcal{M}_{n,k}(\{\lambda+i\})\big)}{1-\delta_{\frak{a}}\big(\mathcal{M}_n(\{\lambda+i\}),\mathcal{M}_{n,k}(\{\lambda+i\})\big)}. \] The assertion \eqref{sscon1} is proved. Now using \eqref{sscon1}, \eqref{dbounds} and Theorem \ref{est}, we have \begin{align*} \delta_{\frak{a}}\big(\mathcal{M}_{n,k}(\{\lambda+i\}),\mathcal{L}(\{\lambda\})\big)&\le\delta_{\frak{a}}\big(\mathcal{M}_{n,k}(\{\lambda+i\}),\mathcal{M}_n(\{\lambda+i\})\big)\\ &\quad +\delta_{\frak{a}}\big(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{L}(\{\lambda\})\big)\\ &\le c_3\delta_{\frak{a}}\big(\mathcal{M}_{n}(\{\lambda+i\}),\mathcal{L}_k\big)+c_0\varepsilon_n\\ &\le c_0(c_3+1)\varepsilon_n + c_3\epsilon_k. \end{align*}
\end{proof}
Let $\mu_{n,k,1},\dots,\mu_{n,k,\kappa}$ be the repeated eigenvalues of $A_k + iP_kP_n$ which are associated to the subspace $\mathcal{M}_{n,k}(\{\lambda+i\})$.
\begin{theorem}\label{eigconv} There exists a constant $c_5>0$, independent of $n\ge N$, such that \[\max_{1\le j\le \kappa}\vert\mu_{n,k,j}-\lambda-i\vert \le c_5(\varepsilon_n+\epsilon_k)^2 \] for all sufficiently large $k$. \end{theorem} \begin{proof} Let $u_1,\dots,u_\kappa$ be an orthonormal basis for $\mathcal{L}(\{\lambda\})$. Let $R_k$ be the orthogonal projection from $\mathcal{H}_{\frak{a}}$ onto $\mathcal{M}_{n,k}(\{\lambda+i\})$ and set $u_{j,k}=R_ku_j$. Using Theorem \ref{limlem1}, \begin{align*} \Vert u_j-u_{j,k}\Vert_{\frak{a}}&=\Vert(I-R_k)u_j\Vert_{\frak{a}}\\ &={\rm dist}_{\frak{a}}\big(u_j,\mathcal{M}_{n,k}(\{\lambda+i\})\big)\\ &\le \Vert u_j\Vert_{\frak{a}}\hat{\delta}_{\frak{a}}\big(\mathcal{L}(\{\lambda\}),\mathcal{M}_{n,k}(\{\lambda+i\})\big)\\ &\le\Vert u_j\Vert_{\frak{a}}c_4(\varepsilon_n+\epsilon_k)\\ &\le K_4(\varepsilon_n+\epsilon_k) \end{align*} for constant $K_4>0$ which is independent of $n\ge N$. Consider the matrices \[ [L_{n,k}]_{p,q}=\frak{a}(u_{q,k},u_{p,k}) + i\langle P_{n}u_{q,k},u_{p,k}\rangle\quad\textrm{and}\quad [M_{n,k}]_{p,q}=\langle u_{q,k},u_{p,k} \rangle. \] Evidently, $\sigma(L_{n,k}M_{n,k}^{-1})$ is precisely the set $\{\mu_{n,k,1},\dots,\mu_{n,k,\kappa}\}$. \void{Let $Q_n$ be the orthogonal projection from $\mathcal{H}_{\frak{a}}$ onto $\mathcal{M}_{n}(\{\lambda_j+i\})$ and consider the matrix \[ [M_n]_{p,q}=\langle Q_nu_{q},Q_nu_{p} \rangle \] From the proof of Theorem \ref{QQ}, we have $[M_n]_{p.q} = \delta_{p,q} + \mathcal{O}(\tau_n^2)$. Hence, there exists a sequence $(\delta_n)\subset\mathbb{R}$ with $\delta_n\to0$ and \[ \Vert N_{n,k}^{-1} - I_\kappa\Vert \le \delta_n\quad\textrm{for all sufficiently large }k. \] } We have \begin{align*} \frak{a}(u_{q,k},u_{p,k}) &= \frak{a}((R_k-I)u_q,u_p) + \frak{a}((R_k-I)u_q,(R_k-I)u_p)\\ &\quad+\frak{a}(u_q,(R_k-I)u_p)+\frak{a}(u_q,u_p)\\ &=\lambda\langle(R_k-I)u_q,u_p\rangle + \frak{a}((R_k-I)u_q,(R_k-I)u_p)\\ &\quad+\lambda\langle u_q,(R_k-I)u_p\rangle+\lambda\delta_{qp} \end{align*} and \begin{align*} \vert(\lambda-m+1)\langle u_q,(R_k-I)u_p\rangle\vert&=\vert\langle u_q,(R_k-I)u_p\rangle_{\frak{a}}\vert\\ &=\vert\langle(R_k-I)u_q,(R_k-I)u_p\rangle_{\frak{a}}\vert\\ &\le\Vert(R_k-I)u_q\Vert_{\frak{a}}\Vert(R_k-I)u_p\Vert_{\frak{a}}, \end{align*} hence \[ \vert\frak{a}(u_{q,k},u_{p,k}) - \lambda\delta_{qp}\vert \le K_5(\varepsilon_n+\epsilon_k)^2 \] for constant $K_5> 0$ which is independent of $n\ge N$. Similarly, \begin{align*} \langle P_{n}u_{q,k},u_{p,k}\rangle&=\langle P_{n}(R_k-I)u_q,(R_k-I)u_p\rangle + \langle (R_k-I)u_q,(P_{n}-I)u_p\rangle\\ &\quad+\langle (R_k-I)u_q,u_p\rangle+\langle (P_{n}-I)u_q,(R_k-I)u_p\rangle\\ &\quad+\langle u_q,(R_k-I)u_p\rangle +\langle(P_{n}-I)u_q,u_p\rangle+\langle u_q,u_p\rangle, \end{align*} hence \[ \vert i\langle P_{n}u_{q,k},u_{p,k}\rangle - i\delta_{qp}\vert \le K_6(\varepsilon_n+\epsilon_k)^2, \] for constant $K_6> 0$ which is independent of $n\ge N$. Furthermore, \begin{align*} [M_{n,k}]_{p,q}&=\langle(R_k-I)u_{q},(R_k-I)u_{p} \rangle + \langle(R_k-I)u_{q},u_{p} \rangle + \langle u_{q},(R_k-I)u_{p} \rangle\\ &\quad + \delta_{pq}, \end{align*} hence for constants $K_7,K_8> 0$ both independent of $n\ge N$, we have \[ \vert [M_{n,k}]_{pq} - \delta_{pq}\vert \le K_7(\varepsilon_n + \epsilon_k)^2\quad\Rightarrow\quad\vert [M_{n,k}]^{-1}_{pq} - \delta_{pq}\vert \le K_8(\varepsilon_n + \epsilon_k)^2. 
\] Then \[ \vert[L_{n,k}M_{n,k}^{-1}]_{p,q}-(\lambda+i)\delta_{p,q}\vert \le K_{9}(\varepsilon_n+\epsilon_k)^2 \] for constant $K_{9}> 0$ which is independent of $n\ge N$. The result follows from the Gershgorin circle theorem. \end{proof}
\begin{remark} We note that, for any orthogonal projection $P$, all non-real eigenvalues of $A+iP$ can provide information about $\sigma(A)$. Indeed, whenever $(A+iP - z)u=0$ with $u\ne0$, we have \begin{equation}\label{enclosure1} (A- {\rm Re}\; z)u = i{\rm Im}\; zu - iPu,\quad\langle Au,u\rangle = {\rm Re}\; z\Vert u\Vert^2,\quad\Vert Pu\Vert^2={\rm Im}\; z\Vert u\Vert^2 \end{equation} and using the first and third terms from \eqref{enclosure1} yields \begin{equation}\label{enclosure2} \Vert(A-{\rm Re}\; z)u\Vert^2 = ({\rm Im}\; z)^2\Vert u\Vert^2 + (1-2{\rm Im}\; z)\Vert Pu\Vert^2= {\rm Im}\; z(1-{\rm Im}\; z)\Vert u\Vert^2, \end{equation} then \[ \Big[{\rm Re}\; z-\sqrt{{\rm Im}\; z(1-{\rm Im}\; z)},{\rm Re}\; z+\sqrt{{\rm Im}\; z(1-{\rm Im}\; z)}\Big]\cap\sigma(A)\ne\varnothing. \] Further, suppose that $(a',b')\cap\sigma(A)=\{\lambda\}$ and $a'<{\rm Re}\; z<b'$. Then, using \cite[Lemma 1 \& 2]{kat} with the second term in \eqref{enclosure1} and the equality \eqref{enclosure2}, we obtain the enclosure \[ \lambda\in\left({\rm Re}\; z - \frac{{\rm Im}\; z(1-{\rm Im}\; z)}{b'-{\rm Re}\; z},{\rm Re}\; z + \frac{{\rm Im}\; z(1-{\rm Im}\; z)}{{\rm Re}\; z - a'}\right). \]
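As a purely illustrative check of these enclosures (the numbers are hypothetical and not taken from the computations below): suppose $z=2.01+0.1i$ and $(a',b')=(1,3)$. Then ${\rm Im}\; z(1-{\rm Im}\; z)=0.09$, so the first enclosure gives $[2.01-0.3,\,2.01+0.3]\cap\sigma(A)\ne\varnothing$, while the refined enclosure yields the considerably sharper interval \[ \lambda\in\left(2.01-\tfrac{0.09}{0.99},\,2.01+\tfrac{0.09}{1.01}\right)\approx(1.919,\,2.099). \]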
\end{remark}
Let us now verify our main results with an illustrative example.
\begin{example}\label{uber2}With $\mathcal{H}=\big[L^2\big((0,1)\big)\big]^2$ we consider the following block-operator matrix \begin{displaymath} A_0=\small{\left( \begin{array}{cc} -\frac{d^2}{dx^2} & -\frac{d}{dx}\\ ~&~\\ \frac{d}{dx} & 2I \end{array} \right),\textrm{~~}{\rm Dom}(A_0)=\big(H^2\big((0,1)\big)\cap H^1_0\big((0,1)\big)\big)\times H^1\big((0,1)\big).} \end{displaymath} $A_0$ is essentially self-adjoint with closure $A$. We have $\sigma_{\mathrm{ess}}(A)=\{1\}$ (see for example \cite[Example 2.4.11]{Tretter}) while $\sigma_{\mathrm{dis}}(A)$ consists of the simple eigenvalue $2$ with eigenvector $(0,1)^T$, and the two sequences of simple eigenvalues \[ \lambda_k^\pm := \frac{2+k^2\pi^2 \pm\sqrt{(k^2\pi^2 + 2)^2 - 4k^2\pi^2}}{2}. \] The sequence $\lambda_k^-$ lies below, and accumulates at, the essential spectrum. The sequence $\lambda_k^+$ lies above the eigenvalue $2$ and accumulates at $\infty$.
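For instance, for $k=1$ we have $k^2\pi^2\approx 9.8696$, hence \[ \lambda_1^{\pm}=\tfrac{1}{2}\Big(11.8696\pm\sqrt{11.8696^2-4\pi^2}\Big),\qquad \lambda_1^-\approx 0.8997<1,\qquad \lambda_1^+\approx 10.9699, \] consistent with the locations of the two sequences described above.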
Let $\mathcal{L}_h^0$ be the FEM space of piecewise linear functions on $[0,1]$ with a uniform mesh of size $h$ and which satisfy homogeneous Dirichlet boundary conditions. Let $\mathcal{L}_h$ be the space without boundary conditions. First, we apply the Galerkin method directly to $A$ with trial spaces $L_{h}=\mathcal{L}_h^0\oplus\mathcal{L}_h$. We find that spectral pollution occurs in the interval $(1,2)\subset\rho(A)$, which obscures the approximation of the genuine eigenvalue $2$; see the left-hand side of Figure 6.
\begin{figure}
\caption{On the left-hand side, Galerkin method approximation for $\sigma(A)$ from Example \ref{uber2}, spectral pollution in the interval $(1,2)$ obscures the approximation of the genuine eigenvalue $2$. On the right-hand side, the Galerkin method approximation for $\sigma(A+iP_{1/16})$ from Example \ref{uber2}, the curves $f$ and $g$, which together form $\Gamma_{1,\lambda_1^+}^{0,1}$. The region $\mathcal{U}_{1,\lambda_1^+}^{0,1}$ consists of complex numbers which lie to the right of $f$ and to the left of $g$.}
\end{figure} Let $P_{1/16}$ be the orthogonal projection onto the trial space $L_{1/16}$. Since \[(1,\lambda_1^+)\cap\sigma(A)=\{2\}\quad\textrm{with}\quad 2\in\sigma_{\mathrm{dis}}(A) \] and $(0,1)^T\in L_h$ for all $h\in(0,1]$, the hypothesis of Lemma \ref{cor1} is satisfied, hence \[ \sigma(A+iP_{1/16})\cap\mathcal{U}_{1,\lambda_1^+}^{0,1}= \{2+i\}. \] Furthermore, by \cite[Theorem 2.5]{me}, we can approximate the eigenvalue $2+i$, with the Galerkin method, without incurring any spectral pollution, i.e., \[ \bigg(\lim_{h\to 0}\sigma(A+iP_{1/16},L_h)\bigg)\cap\bigg(\mathcal{U}_{1,\lambda_1^+}^{0,1}\big\backslash\mathbb{R}\bigg)=\{2+i\}. \] The right-hand side of Figure 6 shows the Galerkin method approximation of $\sigma(A+iP_{1/16})$ with the trial space $L_{1/1024}$. We see that $2+i\in\sigma(A+iP_{1/16},L_{1/1024})$ and the only elements from \[ \bigg(\sigma(A+iP_{1/16},L_{1/2048})\cap\mathcal{U}_{1,\lambda_1^+}^{0,1}\bigg)\big\backslash\{2+i\} \] are very close to the real line, which is where spectral pollution is still permitted. The perturbation method has demonstrated that the Galerkin eigenvalues in the interval $(1,2)$ are all spurious. Furthermore, the genuine eigenvalue 2 is approximated by the perturbation method without being obscured by pollution.
Next we approximate the eigenvalue $\lambda_1^+$. Applying the Galerkin method directly to $A$, we do not incur spectral pollution near $\lambda_1^+$ and consequently we have the standard superconvergence result: \begin{equation}\label{spcon} {\rm dist}\big(\lambda_1^+,\sigma(A,L_h)\big)=\mathcal{O}(\delta_{\frak{a}}(\mathcal{L}(\{\lambda_1^+\}),\mathcal{L}_h)^2)=\mathcal{O}(h^2). \end{equation} By Theorem \ref{QQ} we have \begin{equation}\label{spspcon} {\rm dist}\big(\lambda_1^++i,\sigma(A+iP_h)\big)=\mathcal{O}(\delta(\mathcal{L}(\{\lambda_1^+\}),\mathcal{L}_h)^2)=\mathcal{O}(h^4). \end{equation} The second column in Table 1 shows the distance of $\lambda_1^+$ to $\sigma(A,L_h)$; the third column shows the distance of $\lambda^+_1+i$ to a Galerkin approximation (with a very refined mesh) of the eigenvalue of $A+iP_h$ which is close to $\lambda_1^++i$. The left-hand side of Figure 7 displays a log-log plot of the data in Table 1, and verifies both \eqref{spcon} and \eqref{spspcon}. \begin{table}[h!]
\begin{tabular}{c|c|c} h & ${\rm dist}\big(\lambda_1^+,\sigma(A,L_h)\big)$ & ${\rm dist}\big(\lambda_1^++i,\sigma(A+iP_h,L_{h\times 2^{-7}})\big)$\\ \hline 1/2 & \hspace{30pt}1.861045647858232\hspace{30pt} & 0.014440864705963\\ 1/4 & 0.458746253205135 & 0.000609676693732\\ 1/8 & 0.113442149493080 & 0.000034835584324\\ 1/16 & 0.028273751580725 & \hspace{3pt}0.000002688221958
\end{tabular}\caption{Approximation of $\lambda_1^+$ from $\sigma(A,L_h)$ and from an approximation of $\sigma(A+ iP_h)$.} \end{table} \begin{figure}
\caption{On the left-hand side, approximation of $\lambda_1^+$ with $\sigma(A,L_h)$ and with an approximation of $\sigma(A+ iP_h)$. The gradients of the blue and red lines are approximately $2$ and $4$, respectively. On the right-hand side, approximation of $\lambda_1^++i$ and $\lambda_1^+$ using the perturbation and Galerkin methods, respectively.}
\label{puper}
\end{figure}
We now compare the approximation of $\lambda_1^+$ by applying the Galerkin method directly to $A$ and to $A+iP_h$. The results are displayed on the right-hand side of Figure 7; we see that the approximation and convergence achieved by the perturbation method are essentially the same as those achieved by the Galerkin method. It is clear, and consistent with Theorem \ref{eigconv}, that we need not be concerned with locking-in poor accuracy with a relatively low dimensional projection $P_{h}$. In fact, it is quite remarkable that the approximation with $\sigma(A+iP_{1/32},L_{1/32\times 2^7})$ is essentially the same as $\sigma(A,L_{1/32\times 2^7})$.
\end{example}
\section{Further examples}
\begin{example}\label{magneto} With $\mathcal{H}=\big[L^2\big((0,1),\rho_0\,dx\big)\big]^3$ we consider the magnetohydrodynamics operator \begin{displaymath} A=\small{\left(\begin{array}{ccc}
-\frac{d}{dx}(\upsilon_a^2 + \upsilon_s^2)\frac{d}{dx} + k^2\upsilon_a^2 & -i(\frac{d}{dx}(\upsilon_a^2 + \upsilon_s^2) -1)k_\perp & -i(\frac{d}{dx}\upsilon_s^2 -1)k_\parallel\\
~&~&~\\
-ik_\perp((\upsilon_a^2 + \upsilon_s^2)\frac{d}{dx} +1) & k^2\upsilon_a^2 + k_\perp^2\upsilon_s^2 & k_\perp k_\parallel\upsilon_s^2\\
~&~&~\\
-ik_\parallel(\upsilon_s^2\frac{d}{dx} +1) & k_\perp k_\parallel\upsilon_s^2 & k_\parallel^2\upsilon_s^2
\end{array}\right)}. \end{displaymath} With $\rho_0=k_\perp=k_\parallel=g=1$, $\upsilon_{a}(x)=\sqrt{7/8 - x/2}$ and $\upsilon_s(x)=\sqrt{1/8+x/2}$, we have \[\sigma_{\mathrm{ess}}(A)=[7/64,1/4]\cup[3/8,7/8]. \] The discrete spectrum contains a sequence of simple eigenvalues which accumulate only at $\infty$. These eigenvalues lie above, and are not close to, the essential spectrum. They are approximated by the Galerkin method, with trial spaces $L_h=\mathcal{L}_{h}^0\oplus\mathcal{L}_h\oplus\mathcal{L}_h$, without incurring spectral pollution. It was shown, using the second order relative spectrum, that there is also an eigenvalue $\lambda_1\approx 0.279$ in the gap in the essential spectrum; see \cite[Example 2.7]{me2}. The top row of Figure 8 shows many Galerkin eigenvalues in the gap in the essential spectrum and many more just above the essential spectrum; we should be suspicious of spectral pollution in these regions. We define \[ \tau(A+iP_{h_0},L_h):=\big\{{\rm Re}\; z + (1-{\rm Im}\; z)i:z\in\sigma(A+iP_{h_0},L_h)\big\} \] and we are therefore interested in those elements from $\tau(A+iP_{h_0},L_h)$ which are close to the real line, i.e., we would prefer our approximate eigenvalues to converge to $\sigma(A)$ rather than $\sigma(A)+i$. The second row of Figure 8 shows $\tau(A+iP_{1/64},L_{1/1024})$; the two bands of essential spectrum are clearly approximated, along with an approximation of $\lambda_1$ in the gap and a second eigenvalue above the essential spectrum. The perturbation method has approximated the essential spectrum, identified the spectral pollution, and approximated two eigenvalues which were obscured by the spectral pollution.
\begin{figure}
\caption{On the top row, we see the Galerkin method approximation for $\sigma(A)$ from Example \ref{magneto}. There are many Galerkin eigenvalues in the gap in the essential spectrum and many more just above the essential spectrum; in these regions we should be suspicious of spectral pollution. The second row shows the perturbation method approximation for $\sigma(A)$ from Example \ref{magneto}; the essential spectrum is approximated, as are two eigenvalues, one in the gap and one just above the essential spectrum. The perturbation method has identified the spectral pollution in the gap and above the essential spectrum.}
\end{figure}
\end{example}
\begin{example}\label{schro1} With $\mathcal{H}=L^2(\mathbb{R})$ we consider the Schr\"odinger operator \[ Au = -u'' + \Big(\cos x - e^{-x^2}\Big)u. \] The essential spectrum of $A$ has a band structure. The first three intervals of essential spectrum are approximately \[ [-0.37849,-0.34767],\quad [0.5948,0.918058]\quad\textrm{and}\quad [1.29317,2.28516]. \] The second order relative spectrum has been applied to this operator, see \cite{bole}, where the following approximate eigenvalues were identified \[\lambda_1\approx -0.40961,\quad\lambda_2\approx 0.37763,\quad\textrm{and}\quad\lambda_3\approx 1.18216.\] We note that $\lambda_1$ is below the essential spectrum, $\lambda_2$ is in the first gap in the essential spectrum, and $\lambda_3$ is in the second gap. We apply the perturbation method with the trial spaces $\mathcal{L}_{(X,Y)}$, where $\mathcal{L}_{(X,Y)}$ is a $Y$-dimensional space of piecewise linear trial functions on the interval $[-X,X]$ which vanish at the boundary, and $P_{(X,Y)}$ is the orthogonal projection onto $\mathcal{L}_{(X,Y)}$. The left-hand side of Figure 9 shows that the perturbation method has clearly identified the first two bands of essential spectrum and the eigenvalues $\lambda_1$ below the essential spectrum, $\lambda_2$ in the first gap, and $\lambda_3$ in the second gap.
\begin{figure}
\caption{The left-hand side shows the perturbation method approximation for $\sigma(A)$ from Example \ref{schro1}. The first two bands of essential spectrum are approximated, as are the eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$. The right-hand side shows the perturbation method approximation for $\sigma(A)$ from Example \ref{schro2}. The first two bands of essential spectrum are approximated, as are the first three eigenvalues in the first gap in the essential spectrum.}
\end{figure}
\end{example}
\begin{example}\label{schro2} With $\mathcal{H}=L^2\big((0,\infty)\big)$ we consider the Schr\"odinger operator \[ Au = -u'' + \left(\sin x - \frac{40}{1+x^2}\right)u,\quad u(0)=0. \] This example has also been considered in \cite{mar2}. The first three bands of essential spectrum are the same as in the previous example. However, this time there are infinitely many eigenvalues in the gaps which accumulate at the lower end point of the bands, with their spacing becoming exponentially small; see \cite{sch}. We apply the perturbation method with the trial spaces $\mathcal{L}_{(X,Y)}$, where $\mathcal{L}_{(X,Y)}$ is a $Y$-dimensional space of piecewise linear trial functions on the interval $[0,X]$ which vanish at the boundary. The operator $P_{(X,Y)}$ is the orthogonal projection onto the trial space $\mathcal{L}_{(X,Y)}$. The right-hand side of Figure 9 shows that the perturbation method has approximated three eigenvalues in the first gap of the essential spectrum. \end{example}
We should stress the ease with which the above calculations are conducted. The perturbation method does not require trial spaces from the operator domain; thus we have been able to use the FEM spaces of piecewise linear trial functions. The quadratic methods cannot be applied with these trial spaces. Our final example lies outside much of the theory so far developed for the perturbation method because the operator concerned is indefinite. However, the numerical results suggest that the perturbation method can be extended to the indefinite case. The second order relative spectrum has been applied to this example and the code made available online; see \cite{bb} and \cite{nlevp}, respectively. We use this code to apply the Galerkin method, the perturbation method, the second order relative spectrum, the Davies \& Plum method and the Zimmermann \& Mertins method. \begin{example}\label{diracex} With $\mathcal{H}=\big[L^2\big((0,\infty)\big)\big]^2$ we consider the Dirac operator \begin{displaymath} A=\left(\begin{array}{cc}
I - \frac{1}{2x}~&~ -\frac{d}{dx}-\frac{1}{x}\\\\
\frac{d}{dx} -\frac{1}{x}~&~-I - \frac{1}{2x}
\end{array}\right). \end{displaymath} We have $\sigma_{\mathrm{ess}}(A)= (-\infty,-1]\cup[1,\infty)$ and the interval $(-1,1)$ contains the eigenvalues \[ \sigma_{\mathrm{dis}}(A)= \left(1+\frac{1}{4(j-1+\sqrt{3/4})^2}\right)^{-1/2}\quad j=1,2,\dots. \] We use trial spaces generated by Hermite functions of odd order; see \cite{bb} for further details. There is no spectral pollution incurred by the Galerkin method in this example, therefore we can also compare the perturbation method with the Galerkin method. The second order relative spectrum is known to converge to the discrete spectrum; see \cite{bo}. From this method we obtain a sequence of complex numbers with $z_n\to\lambda_1$, where $n$ is the dimension of the trial space. The sequence of real parts $({\rm Re}\; z_n)$ we take as our approximation for $\lambda_1$. The Davies \& Plum and Zimmermann \& Mertins methods, which are equivalent, provide a sequence of intervals containing $\lambda_1$; we take the mid-point of these intervals, denoted by $w_n$, as our approximation of $\lambda_1$. Our numerical results suggest the following convergence rates \begin{align*} &{\rm dist}\big(\lambda_1 +i,\sigma(A+iP_{n/2},P_n)\big)=\mathcal{O}(n^{-0.9}),~ {\rm dist}\big(\lambda_1,\sigma(A,P_n)\big)=\mathcal{O}(n^{-0.9}),\\ &\vert\lambda_1 - z_n\vert=\mathcal{O}(n^{-0.2}),~\vert\lambda_1 - {\rm Re}\; z_n\vert=\mathcal{O}(n^{-0.7})\textrm{~and~}\vert\lambda_1 - w_n\vert=\mathcal{O}(n^{-0.2}). \end{align*} Again we see that the performance of the perturbation method is essentially the same as that of the Galerkin method. We also note the relatively poor performance of the quadratic methods. The latter is not entirely surprising, as the known convergence rates for quadratic methods are measured in terms of $\delta_{A}(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)$, i.e., the distance of the eigenspace to the trial space with respect to the graph norm; see \cite[Lemma 2]{bost1} and \cite[Section 6]{me3}.
\end{example} \section{Conclusions and further research} Our theoretical results are, for the most part, focused on the perturbation and approximation of the discrete spectrum. However, the examples indicate that our new perturbation method also captures the essential spectrum. This should be further investigated. For the approximation of eigenvalues, the rapid convergence assured by Theorems \ref{QQ} and \ref{eigconv} means that, in terms of accuracy and convergence, we can expect the perturbation method to significantly outperform the quadratic methods. The fact that the former may be applied with trial spaces from the form domain is another significant advantage. Recently a second pollution-free and non-quadratic technique has emerged; see \cite{me4}. Currently the latter has the disadvantage of requiring a priori information about gaps in the essential spectrum; however, it does have the advantage of a self-adjoint algorithm. In terms of accuracy and convergence, there appears to be little separating these two non-quadratic techniques; see \cite[examples 5.2 \& 5.3]{me4}. Which technique is preferable will likely depend on the particular situation and the availability of a priori information; this should be the subject of further study.
\section{Second order relative spectra and convergence} The method of second order relative spectra has been extensively studied over the past 15 years. Interest in this method has been stimulated by the fact that it provides intervals which intersect the spectrum. With some a priori information the method can also provide enclosures for eigenvalues. The technique is known to converge to the discrete spectrum; see \cite{bo}. It was also thought, by many, to converge to the essential spectrum; however, this has recently been shown to be false in general; see \cite{shar2}. For the discrete spectrum, we briefly examine the quality of the approximation and of the enclosures provided by this method. We also provide a new proof, based on classical spectral approximation theory, of the convergence rate to elements from $\sigma_{\mathrm{dis}}(A)$. \begin{definition} Let $A$ be a self-adjoint operator acting on a Hilbert space $\mathcal{H}$. The second order spectrum of $A$ relative to a subspace $\mathcal{L}\subset{\rm Dom}(A)$, denoted ${\rm Spec}_2(A,\mathcal{L})$, consists of those $z\in\mathbb{C}$ for which there exists a $0\ne u\in\mathcal{L}$ such that \[ \langle(A-z)u,(A-\overline{z})v\rangle=0\quad\forall v\in\mathcal{L}. \] \end{definition} To apply the second order relative spectrum we need trial spaces which belong to the operator domain, rather than the preferred form domain. We must also assemble a matrix with entries of the form $\langle Au_i,Au_j\rangle$, which is also awkward. However, the method does have some nice properties: if $z\in{\rm Spec}_2(A,L)$ then \begin{equation}\label{spec21} \sigma(A)\cap\big[{\rm Re}\; z - \vert{\rm Im}\; z\vert,{\rm Re}\; z+\vert{\rm Im}\; z\vert\big]\ne\varnothing, \end{equation} and if $(a,b)\cap\sigma(A)=\{\lambda\}$ and $a<{\rm Re}\; z<b$, then \begin{equation}\label{spec22} \lambda\in\left[{\rm Re}\; z - \frac{\vert{\rm Im}\; z\vert^2}{b-{\rm Re}\; z},{\rm Re}\; z+\frac{\vert{\rm Im}\; z\vert^2}{{\rm Re}\; z-a}\right]; \end{equation} see \cite[corollaries 3.4 \& 4.2]{shar} and \cite[Remark 2.3]{me2}, respectively. We saw in Example \ref{diracex} that we obtain a sequence $z_n\in{\rm Spec}_2(A,\mathcal{L}_n)$ with $z_n\to\lambda_1$; let us compare the approximation of $\lambda_1$ by ${\rm Re}\; z_n$ to the size of the enclosures \eqref{spec21} and \eqref{spec22}; for the latter we may choose $a=-1$ and $b=\lambda_2$. We find that \begin{align*} \vert\lambda_1-{\rm Re}\; &z_n\vert=\mathcal{O}(n^{-0.7}),\quad2\vert{\rm Im}\; z_n\vert=\mathcal{O}(n^{-0.2}),\\ &\frac{\vert{\rm Im}\; z_n\vert^2}{b-{\rm Re}\; z_n}+\frac{\vert{\rm Im}\; z_n\vert^2}{{\rm Re}\; z_n-a}=\mathcal{O}(n^{-0.4}), \end{align*} which suggests that the enclosures obtained from the second order relative spectrum are very poor when compared to the approximation provided by ${\rm Re}\; z_n$. The latter, in turn, is poor when compared to the approximation provided by the perturbation method; see Example \ref{diracex}. In applications, by using the second order relative spectrum (or any other quadratic method) we can obtain intervals which intersect the spectrum; however, the actual approximate eigenvalues obtained are significantly compromised by using quadratic methods. \subsection{The convergence of ${\rm Spec}_2(A,\mathcal{L}_n)$} Using the well established convergence theory for the Galerkin method we will prove convergence properties for the second order relative spectra; see also \cite[Theorem 4.9, Theorem 6.1 \& Corollary 6.2]{me3}.
Unless stated otherwise we assume that $A$ is a bounded self-adjoint operator. Consider the block matrix \[ T:=\begin{pmatrix} 2A & -A^2 \\ I & 0 \end{pmatrix}:\mathcal{H}\oplus\mathcal{H}\to\mathcal{H}\oplus\mathcal{H}. \] \begin{lemma}{\cite[Lemma 3.1]{me3}} $\sigma(T)=\sigma(A)$. If $\lambda\in\sigma_{\mathrm{dis}}(A)$ has multiplicity $m$, then $\lambda$ is an eigenvalue of $T$ with algebraic multiplicity $2m$, geometric multiplicity $m$ and ascent $2$. \end{lemma} \begin{lemma}{\cite[Lemma 3.2]{me3}} ${\rm Spec}_2(A,\mathcal{L})=\sigma(T,\mathcal{L}\oplus\mathcal{L})$. \end{lemma} Let $(P_n)$ be a sequence of finite-rank orthogonal projections which converge strongly to the identity operator. The range of $P_n$ is denoted $\mathcal{L}_n$. \begin{lemma}{\cite[Theorem 4.4]{me3}; see also \cite[proof of Theorem 1]{bo}}\label{mebo} Let $\lambda\in\sigma_{\mathrm{dis}}(A)$ with ${\rm dist}(\lambda,\sigma(A)\backslash\{\lambda\})>r$. There exists a constant $c_r>0$ and an $N\in\mathbb{N}$, such that \begin{equation}\label{nproof} \Vert P_n(A-z)(A-z)P_nu\Vert\ge c_r\Vert P_nu\Vert \end{equation} for all $u\in\mathcal{H}$, $\vert\lambda-z\vert=r$, and $n\ge N$. \end{lemma} \begin{lemma}\label{nb} Let $\lambda\in\sigma_{\mathrm{dis}}(A)$ with ${\rm dist}(\lambda,\sigma(A)\backslash\{\lambda\})>r$. There exists a constant $d_r>0$ and an $N\in\mathbb{N}$, such that \begin{equation}\label{nproof2} \left\Vert\begin{pmatrix} 2P_nAP_n - zP_n & -P_nA^2P_n \\ P_n & -zP_n \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}\right\Vert\ge d_r\left\Vert\begin{pmatrix} P_nu \\ P_nv \end{pmatrix}\right\Vert \end{equation} for all $u,v\in\mathcal{H}$, $\vert\lambda-z\vert=r$, and $n\ge N$. \end{lemma} \begin{proof} Suppose the assertion is false. Then there exists a subsequence $n_j$, a sequence $(z_{n_j})$ with $\vert\lambda - z_{n_j}\vert=r$, and vectors $u_{n_j},v_{n_j}\in\mathcal{L}_{n_j}$ with $\Vert u_{n_j}\Vert^2+\Vert v_{n_j}\Vert^2=1$, such that \[\left\Vert\begin{pmatrix} 2P_{n_j}A -z_{n_j} & -P_{n_j}A^2\\ I & -z_{n_j} \end{pmatrix}\begin{pmatrix} u_{n_j} \\ v_{n_j} \end{pmatrix}\right\Vert\to 0. \] Without loss of generality we suppress the second subscript. We have \begin{align} 2P_{n}Au_n -z_nu_n -P_{n}A^2v_n &\to 0\\ u_n - z_nv_n &\to 0.\label{two} \end{align} Then for some sequence of reals $0\le s_n\to 0$ and a sequence of normalised vectors $(w_n)$, we have \[ u_n - z_nv_n = s_nw_n. \] Then \[ -P_n(A-z_n)(A-z_n)v_n + s_n(2P_nA-z_n)w_n = 2P_{n}Au_n-z_nu_n -P_{n}A^2v_n\to0. \] Lemma \ref{mebo} implies that $v_n\to 0$. Then \eqref{two} implies that $u_n\to 0$. The result follows from this contradiction. \end{proof} Let us fix a $0\ne \lambda\in\sigma_{\mathrm{dis}}(A)$ (the case where $\lambda=0$ may be treated similarly by introducing a shift) and an $r<{\rm dist}(\lambda,\sigma(A)\backslash\{\lambda\})$ such that the circle $\vert\lambda -z\vert=r$ does not enclose zero. Denote by $\mathcal{M}$ the spectral subspace associated to $\lambda\in\sigma(T)$ and by $\mathcal{M}_n$ the spectral subspace associated to the operator \begin{equation}\label{top} \begin{pmatrix} 2P_{n}A & -P_{n}A^2 \\ I & 0 \end{pmatrix}:\mathcal{H}\oplus\mathcal{H}\to\mathcal{L}_n\oplus\mathcal{L}_n \end{equation} and those eigenvalues enclosed by the circle $\vert\lambda - z\vert=r$. \begin{lemma}{\cite[Theorem 4.6]{me3}}\label{sdim} $\dim(\mathcal{M})=\dim(\mathcal{M}_n)$ for all sufficiently large $n$.
\end{lemma} In view of Lemma \ref{nb} and Lemma \ref{sdim}, the operator \eqref{top} satisfies the definition of \emph{strongly stable convergence} to $T$ in a neighbourhood of $\lambda$; see \cite[Chapter 5]{chat}. The following theorem is now a straightforward consequence of \cite[Theorem 6.11]{chat}. \begin{theorem}\label{conn} Let $z_n\in{\rm Spec}_2(A,\mathcal{L}_n)$ with $z_n\to\lambda$, then \[ \vert z_n-\lambda\vert=\mathcal{O}(\delta(\mathcal{L}(\{\lambda\}),\mathcal{L}_n))\quad\text{and}\quad \vert{\rm Re}\; z_n-\lambda\vert=\mathcal{O}(\delta(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)^2).\] \end{theorem} We now assume that $A$ is an unbounded self-adjoint operator. As above, $(P_n)$ denotes a sequence of finite-rank orthogonal projections each with range $\mathcal{L}_n$. We shall assume that \[ \forall u\in{\rm Dom}(A)\quad\exists u_n\in\mathcal{L}_n:\quad\Vert u-u_n\Vert_{A}\to0. \] The following theorem is now an immediate consequence of Theorem \ref{conn} and \cite[Lemma 2.6]{bost}. \begin{theorem}\label{thmcon} Let $z_n\in{\rm Spec}_2(A,\mathcal{L}_n)$ with $z_n\to\lambda$, then \[\vert z_n-\lambda\vert=\mathcal{O}(\delta_{A}(\mathcal{L}(\{\lambda\}),\mathcal{L}_n))\quad\text{and}\quad \vert{\rm Re}\; z_n-\lambda\vert=\mathcal{O}(\delta_{A}(\mathcal{L}(\{\lambda\}),\mathcal{L}_n)^2).\] \end{theorem} The convergence rates in Theorem \ref{thmcon} are measured in terms of the graph norm, which is why the method converges poorly; the convergence achieved by the Galerkin and perturbation methods is measured in terms of the norm associated to the quadratic form.
\end{document} | arXiv |
Security Informatics
Detecting obfuscated malware using reduced opcode set and optimised runtime trace
Philip O'Kane, Sakir Sezer & Kieran McLaughlin
Security Informatics volume 5, Article number: 2 (2016)
Abstract
The research presented investigates the optimal set of operational codes (opcodes) that create a robust indicator of malicious software (malware) and also determines the program execution duration required for an accurate classification of benign and malicious software. The features extracted from the dataset are opcode density histograms gathered during program execution. The classifier used is a support vector machine, configured to select the features that produce the optimal classification of malware over different program run lengths. The findings demonstrate that malware can be detected using dynamic analysis with relatively few opcodes.
The malware industry has evolved into a well-organised, billion-dollar marketplace operated by well-funded, multi-player syndicates that have invested large sums of money into malicious technologies capable of evading traditional detection systems. To combat these advancements in malware, new detection approaches that mitigate the obfuscation methods employed by malware need to be found. A detection strategy that analyses malicious activity on the host environment at run-time can foil malware attempts to evade detection. The proposed approach is the detection of malware using a support vector machine (SVM) trained on features (opcode density histograms) extracted during program execution. The experiments use feature filtering and feature selection to investigate all of the Intel opcodes recorded during program execution.
While the full spectrum of opcodes is recorded, feature filtering is applied to narrow the search scope of the feature selection algorithm, which is applied across different program run-lengths. This research confirms that malware can be detected during the early phases of execution, possibly prior to any malicious activity.
"System overview" section describes the experimental framework and "Test platform" section details the test platform used to capture the program traces. "Dataset creation" section explains the dataset creation and is followed in "Opcode pre-filter" section with a description of the filtering method used. "Support vector machine" section introduces an SVM and describes the feature selection process. The results and observations are reviewed in "Discussion" section. Finally, "Conclusion" section concludes with a summary of the findings.
This research is an investigation into malware detection using N-gram analysis and is an extension of the work presented in [1]. A summary of the related research is given here to aid the discussion within this paper. Typical analysis approaches involve Control Flow Graphs (CFG), state machines (modelling behaviour), analysing stack operations, taint analysis, API calls and N-gram analysis.
Code obfuscation is a popular weapon used by malware writers to evade detection [2]. Code obfuscation modifies the program code to produce a new version with the same functionality but with different Portable Executable (PE) file contents that are not known by the antivirus scanner. Obfuscation techniques such as packing are used by malware authors as well as legitimate software developers to compress and encrypt the PE. However, a second technique, polymorphism [2], is used by malware. Polymorphic malware uses encryption to change the body of the malware, governed by a decryption key that is changed each time the malware is executed, creating a new permutation of the malware on each new infection. Eskandari et al. [3] propose to use program graph mining techniques for detecting polymorphic malware. However, these works employ sub-graph matching to classify and detect malware. Such API based methods are easily subverted by changing the API call sequence or adding extra API calls that have no effect except to disrupt the call-graph.
Sung et al. [4] proposed anomaly based detection using API call sequences to detect unknown and polymorphic malware, using a Euclidean distance measurement between alignments of different API call sequences. The API sequence alignment approach proposed by Sung is effectively a signature based approach, since it ignores the frequency of the API calls.
Tian et al. [5] explored a method for classifying Trojan malware and demonstrated that function length plays a significant role in classifying malware and that, if combined with other features, it could improve malware classification. Unfortunately, these techniques are easily subverted by the addition of innocuous API calls. Sami et al. [6] also propose a method of detecting malware based on mining API calls statically gathered from the Import Address Tables (IAT) of PE files.
Lakhotia et al. [7] investigated stack operations as a means to detect obfuscated function calls. Their method modelled stack operations based on the push, pop and ret opcodes. However, the approach fails to detect obfuscation when the stack is manipulated using other opcodes.
Bilar [8] demonstrated using static analysis that Windows PE files contain different opcode distributions for obfuscated and non-obfuscated code. Bilar's findings showed that opcodes such as adc, add, inc, ja, and sub could be used to detect malware.
In other research, Bilar [9] used statically generated CFG to show that a difference in program flow control structure exists between benign and malicious programs. Bilar concluded that malware has a simpler program flow structure, less interaction, fewer branches and less functionality than benign software.
More recently, research carried out by Agrawal et al. [10] also demonstrated a difference in the program flow control of malicious and benign software. Agrawal used an abstracted CFG that considered only the external artefacts of the program and used an 'edit distance' to compare the CFGs of programs. The findings show a difference in the flow control structure between benign and malicious programs.
N-gram analysis is the examination of sequences of bytes that can be used to detect malware. Using a machine learning algorithm, Santos et al. [11] demonstrated that N-gram analysis could be used to detect malware.
Santos et al. [12] perform static analysis on PE files to examine the similarity between malware families and the differences between benign and malicious software. Analysis with N-gram (N = 1) showed considerable similarity between families of malware, but no significant difference between benign and malicious software could be established. In a later paper, Santos et al. evaluated several machine learning algorithms [13] and showed that malware detection is possible using opcodes. Anderson et al. [14] combine both static and dynamic features in a multiple kernel learning framework to find a weighted combination of the data sources that produced an effective classification.
Shabtai et al. [15] used static analysis to evaluate the influence of N-gram sizes (N = 1–6) to detect malware using several classifiers and concluded that N = 2 performed best. Moskovitch et al. [16] also used N-gram analysis to investigate malware detection using opcodes and his findings concurred with Shabtai. Song et al. [17] explored the effects of polymorphism and confirmed that signature detection is easily evaded using polymorphism and is potentially on the brink of failure.
Due to the weaknesses of static analysis and the increase in obfuscated malware, it is difficult to ensure that all of the code is thoroughly inspected. With the increasing amount of obfuscated malware being deployed, this research focuses on dynamic analysis (program run-time traces). Other dynamic analysis approaches use API calls to classify malware, but these can easily be obfuscated by malware writers. Therefore, these experiments seek to identify run-time features (below the API calls) that can be used to identify malware. For this reason, the research investigates opcode density histograms obtained during program run-time as a means to identify malware.
System overview
The goal of this research is two-fold: (1) to find a set of opcodes that are good indicators of malware, and (2) to determine how long the program needs to run in order to obtain an accurate classification. Figure 1 shows an overview of the experimental approach; to assist understanding, each stage is labelled with the corresponding section heading used throughout this paper.
'Test Platform': The program samples are executed within the controlled environment to create program run-time traces.
'Dataset Creation': Each program trace is parsed and sliced into 14 different program run-lengths, creating 14 unique datasets defined by the number of opcodes executed.
'Pre-Filtering': A filter is applied to reduce the number of opcodes (features) that the SVM needs to process, thereby reducing the computational overhead during the SVM training phase.
'SVM Model Selection': is a process of selecting hyper-parameters (regularisation and kernel parameters) to achieve good out-of-sample generalisation.
Test platform
A native environment would provide the best test platform, in that it presents the fewest tell-tale signs of a test environment and thereby mitigates any attempt by the malware to detect the analysis environment and exit early. However, other considerations need to be taken into account, such as the ease of running the malware trace analysis.
A virtual platform is selected (QEMU-KVM), as the hypervisor provides isolation of the guest platform (Windows 7 OS test environment) from the underlying host OS and incorporates a backup and recovery tool that simplifies the removal of infected files. In addition to the virtual platform, a debugger is used to record the run-time behaviour of the programs under investigation. A plethora of debugging tools exist, with popular choices for malware analysis being IDA Pro, OllyDbg and WinDbg32 [18].
The OllyDbg debugger is chosen to record the program traces as it utilises the StrongOD plug-in, which conceals the debugger's presence from the malware. When a debugger loads a program, the environment settings are changed, which enables the debugger to control the loaded program. Malware uses techniques to detect debuggers and avoid being analysed. StrongOD mitigates many of the anti-analysis techniques employed by malware; for an in-depth discussion of these techniques see the work in [19, 20].
Dataset creation
Operational codes (opcodes) are machine language instructions, i.e. CPU operations, and are usually represented by assembly language mnemonics.
Before realising the classifier, the raw data is distilled into a set of meaningful features that are used to train the classifier to predict unknown malicious and benign software samples. As discussed in the related work section, the features are constructed from a program trace (p), which is represented as a set of instructions (I), where n is the number of instructions:
$$p = I_1, I_2, \ldots, I_n$$
An instruction consists of an opcode and operands. Opcodes, by themselves, are significant [8]; therefore, only the opcodes are harvested and the operands are discarded as redundant.
The program can, therefore, be defined as a set of ordered opcodes o:
$$p = o_1, o_2, \ldots, o_n$$
Program slicing is used to investigate the effects of different program run lengths. Therefore, os is defined as a set of ordered opcodes within a program execution:
$$os \subseteq p$$
$$os = o_1, o_2, \ldots, o_m$$
where m is the length of the program slice, 1k, 2k, 4k … 8192k opcodes.
The opcode density histograms are constructed using the following steps:
The program traces are created by recording the run-time opcodes that are executed when a program is run;
The opcode densities for each program trace are calculated using the parser described below.
The dataset is created by expressing the features as sets of opcode densities extracted from the runtime traces of Windows PE files. The dataset consists of 300 benign Windows PE files taken from the 'Windows Program Files' directory, and 350 malware files (Windows PE) downloaded from Vxheaven [21]. Datasets are constructed from different program run lengths, creating 14 distinct datasets. These datasets are created by cropping the trace files to lengths based on the number of opcodes (1k opcodes, 2k opcodes, etc.) prior to constructing a density histogram for each cropped trace file. Dataset creation starts by cropping the original traces to 1k opcodes and creating a density histogram; this is repeated for 2k, 4k, 8k, 16k, …, 4096k and 8192k opcodes.
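A minimal Python sketch of this construction is given below (our illustration; the trace file format, one executed mnemonic per line, and the opcode vocabulary are assumptions, not the exact parser used in these experiments):

from collections import Counter

SLICE_LENGTHS = [2**i * 1024 for i in range(14)]   # 1k, 2k, ..., 8192k opcodes

def opcode_density(trace, vocabulary):
    # Density histogram of the trace over a fixed opcode vocabulary.
    counts = Counter(op for op in trace if op in vocabulary)
    total = sum(counts.values()) or 1
    return [counts[op] / total for op in vocabulary]

def build_datasets(trace_files, vocabulary):
    # One dataset per program run length: crop the trace, then build a histogram.
    datasets = {m: [] for m in SLICE_LENGTHS}
    for path in trace_files:
        with open(path) as f:
            # Assumed format: one executed opcode mnemonic per line.
            trace = [line.split()[0].lower() for line in f if line.strip()]
        for m in SLICE_LENGTHS:
            datasets[m].append(opcode_density(trace[:m], vocabulary))
    return datasets

Each program trace therefore contributes one density histogram to each of the 14 datasets.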
Opcode pre-filter
The computational effort associated with N-gram analysis suffers from the 'curse of dimensionality', a term first coined by Bellman in 1961 to describe the exponential increase in computational effort associated with adding extra dimensions to a domain space. Using an SVM to examine all the opcode permutations over the complete opcode range creates a computational problem due to the high number of feature permutations produced.
The increased effort for each additional feature added is calculated using the following Eq. (5)
$$number\;of\;permutations = \frac{n!}{{\left( {n - r} \right)!r!}}$$
where n = total number of features in the dataset; r = number of features within the group of features under consideration.
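As a worked example of this growth (illustrative numbers): examining all groups of r = 10 features drawn from the n = 20 filtered opcodes used later in this paper requires 20!/(10!·10!) = 184,756 separate SVM evaluations for that group size alone, and the count grows rapidly with n, which is why the unfiltered opcode set is impractical to search exhaustively.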
To reduce the computational effort, the area of search is restricted to those features that contain the most information. This is achieved by applying a filtering process that ranks features according to the information that they contain and that is likely to be useful to the SVM [22]. Each feature is assigned an importance value using eigenvectors, thereby ranking the feature's usefulness as a means of classification.
Principal Component Analysis (PCA) is a transformation of the covariance matrix, and it is defined in (6) as per [23]:
$$C_{ij} = \frac{1}{n - 1}\mathop \sum \limits_{m = 1}^{n} \left( {X_{im} - \bar{X}_{i } } \right)\left( {X_{jm} - \bar{X}_{j} } \right)$$
where C = covariance matrix of the PCA transformation; X = dataset value; \(\overline{X}\) = dataset mean; n = number of observations, with m the summation index.
PCA compresses the data by mapping it into subspace (feature space) and creating a set of new variables. These new variables (feature space) that define the original data are called principal components (PCs), and retain all of the original information in the data. The new variables (PCs) are ordered by their contribution (usefulness/eigenvalue) to the total information.
The filter consists of two phases. Firstly, PCA is used to determine the most significant PCs, i.e. the number of PCs that contain 99.5 % of the data variance; PCA determined that 8 PCs embody 99.5 % of the total variance, i.e. n = 8 in Eq. (7). Secondly, the ranking value (R) is used to identify those opcodes that contain significant information (variance) and is calculated by multiplying the significant eigenvector columns by their respective eigenvalues and then summing each row:
$$R_{k} = \mathop \sum \limits_{j = 1}^{n} V_{kj} \cdot d_{j}$$
where R = row sum of the weighted variance; V = eigenvector matrix; d = eigenvalue scalar; n = 8, the number of most significant components representing 99.5 % of the variance within the data.
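A minimal numpy sketch of this two-phase filter (our illustration; variable names are ours, while the 99.5 % threshold and n = 8 follow the text):

import numpy as np

def rank_features(X, variance_kept=0.995):
    # X: samples x features matrix of opcode densities.
    C = np.cov(X, rowvar=False)                    # covariance matrix, Eq. (6)
    eigvals, eigvecs = np.linalg.eigh(C)           # eigh returns ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Phase 1: smallest number of PCs holding 99.5 % of the variance (n = 8 here).
    explained = np.cumsum(eigvals) / np.sum(eigvals)
    n = int(np.searchsorted(explained, variance_kept)) + 1
    # Phase 2: weight each feature row by the retained eigenpairs and sum, Eq. (7).
    R = eigvecs[:, :n] @ eigvals[:n]
    return np.argsort(R)[::-1], R                  # feature indices, highest rank first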
Figure 2 shows the ranking of features using Eq. (7), with the Y axis showing the ranking of the features, the X axis listing the features (opcodes) and the Z axis showing the different program run lengths. It can be seen that the top 20 ranked features vary with the program run length.
Fig. 2 Features ranked by eigenvalues
However, high ranking features such as rep, mov, add, etc. remain consistently high over the different program run lengths and the lowest ranking features such as lea, loopd, etc. remain consistently low over the different program run lengths. Considering the mid-ranking features, it can be seen that significant variations occur with different program run lengths.
Splitting these features into their opcode categories, arithmetic (sub, dec), logic (xor) and flow control (je, jb, jmp, pop, nop and call), suggests that the program structure (flow control) changes with different program run lengths. Therefore, in the following experiment, the filter is run for each program run length to ensure the optimum feature selection.
Support vector machine
SVMs are classifiers that rely heavily on the optimal selection of hyper-parameters. A poor choice of hyper-parameter values can lead to an overly complex hypothesis and, in turn, poor out-of-sample generalisation. The task of searching for optimal hyper-parameters, with respect to the performance measures (validation), is called 'SVM model selection'.
The model selection process is categorised into:
Kernel selection;
Parameter grid search;
Feature selection.
Herbrich et al. [24] demonstrated that, without normalisation, large values can lead to over-fitting and thereby reduce the out-of-sample generalisation. Normalisation can be performed in either the 'input space' or the 'feature space'.
Input Space normalisation is carried out on the input features (x) and is defined as:
$$\bar{x} = \frac{x}{\left\| x \right\|} \in R$$
Feature space normalisation is applied to the kernel rather than to the input vectors. Consider a kernel function K(x, y) which represents a dot-product in the feature space. Normalisation in the feature space requires a new kernel function definition [25]:
$$\bar{k}\left( {x,y} \right) = \frac{{k\left( {x,y} \right)}}{{\sqrt {k\left( {x,x} \right)k\left( {y,y} \right)} }} \in R$$
where R is a unit hypersphere.
Input space normalisation, as defined above, is implemented in the experiments presented in this paper.
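Both options are one-line operations; a sketch (ours) of the two normalisations:

import numpy as np

def normalise_input(X):
    # Input-space normalisation: scale each sample vector to unit length.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.where(norms == 0, 1, norms)

def normalise_kernel(K):
    # Feature-space normalisation of a precomputed Gram matrix K.
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)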
An SVM maximises the precision of the model by mapping the data into a high-dimensional feature space where a hyper-plane separates the new features into their respective classes. This increases the class separation and is illustrated by way of an example: two opcodes, pop and ret, are used as they demonstrate the characteristics of kernel mapping. Figure 3 shows a plot of the pop and ret features and how their mapping into feature space increases class separation.
Fig. 3 Opcode density for pop and ret mapped into feature space
The selection of an appropriate kernel is key to the success of any machine learning algorithm. A linear kernel generally generalises better from the training phase to good test results where the data can be linearly separated. However, as shown in Fig. 5, the data is not linearly separable. Therefore, an RBF kernel (a non-linear decision plane) is used, as it yields a greater accuracy than a linear kernel, as illustrated in Figs. 5 and 6.
The correct adjustment of the RBF kernel parameters significantly affects the SVM's ability to classify correctly, and poorly adjusted parameters can lead to either overfitting or underfitting. There are two parameters, C and λ: C is used to adjust the trade-off between bias and variance errors, and λ determines the width of the decision boundary in feature space.
Two grid searches are performed to find the values of λ and C that produce an optimal SVM configuration. The first search is a coarse grain search, ranging from λ = 1 e−5 to 1 e5 and C = 0–10. This is followed by a fine grain search (increments of 0.1) over a reduced range (λ = ±10, C = 0–3). The optimal performance was established with λ = 1 and C = 0.8.
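A sketch of this two-stage search using scikit-learn (our illustration, with λ playing the role of scikit-learn's gamma parameter; the exact grids are those described above):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def two_stage_search(X, y):
    # Coarse grid: lambda (gamma) from 1e-5 to 1e5, C from near 0 to 10.
    coarse = {"gamma": np.logspace(-5, 5, 11), "C": np.linspace(0.1, 10, 10)}
    gs = GridSearchCV(SVC(kernel="rbf"), coarse, cv=10).fit(X, y)
    g0 = gs.best_params_["gamma"]
    # Fine grid: increments of 0.1 over the reduced range.
    fine = {"gamma": np.arange(max(g0 - 10, 0.1), g0 + 10, 0.1),
            "C": np.arange(0.1, 3.0, 0.1)}
    return GridSearchCV(SVC(kernel="rbf"), fine, cv=10).fit(X, y)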
Before continuing with the experiments, the results need to be placed in context. The measure of malware detection is based on:
Detection accuracy is defined in (10) and is the proportion of correctly classified samples, True Positives (TP) and True Negatives (TN), among all samples.
$${\text{Detection}}\;{\text{Accuracy}} = \frac{TP + TN}{TP + TN + FP + FN}$$
False positive (FP) is when a benign file is mistakenly classified as a malicious file and is defined in (11).
$${\text{False}}\;{\text{Positive}} = \frac{FP}{TP + FP}$$
This is also known as a false alarm and can have a significant impact on malware detection systems. For example, if an antivirus program is configured to delete or quarantine infected files, a false positive can render a system or application unusable.
False negative (FN) is when a malicious file is mistakenly classified as benign and is defined in (12).
$${\text{False}}\;{\text{Negative}} = \frac{FN}{TN + FN}$$
This occurs when an anti-virus security product fails to detect an instance of malware. This can be due to a zero-day attack or malware using obfuscation techniques to evade detection [2]. The impact of this security threat depends on whether the detection method is the last line of defence in the overall malware detection system.
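As a purely illustrative calculation (a hypothetical confusion matrix, not results from this paper): with TP = 330, FN = 20, TN = 270 and FP = 30, detection accuracy = (330 + 270)/650 ≈ 92.3 %, the false positive measure = 30/(330 + 30) ≈ 8.3 %, and the false negative measure = 20/(270 + 20) ≈ 6.9 %.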
False positives present a major problem, in that networks and host machines can be taken out of service by protective actions, taken as a consequence of false alarms, such as quarantining or deleting a critical file. However, this paper focuses on end-point detection, where false negatives present a security threat. Therefore, this research focuses on the minimisation of the FN rate along with the detection accuracy.
In order to address the problem of FN rates, the optimisation function takes the FN rate into account by measuring the distance between the detection accuracy and the FN rate, as described in (13), and steers the search by selecting those features that maximise OPTvalue:
$${\text{OPT}}_{\text{value}} = {\text{Detection Accuracy}} - D \times {\text{FN Rate}}$$
where D is a scalar used to adjust the sensitivity of the FN rate.
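In code, the steering function is a one-liner (our sketch):

def opt_value(detection_accuracy, fn_rate, D=2.0):
    # Eq. (13): reward detection accuracy, penalise false negatives by scalar D.
    return detection_accuracy - D * fn_rate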
The challenge here is to choose a value of D that guides the SVM to select features that lead to the desired behaviour, i.e. maximise the detection accuracy while minimising the FN rate. Setting D = 1 directs the SVM to maximise the distance between the detection accuracy and the FN rate; however, this may not yield the lowest FN rate. Therefore, D has to be greater than 1 to penalise the SVM for selecting non-minimal FN rates. A pilot study is carried out to find the value of D that produces the maximum detection accuracy with a low FN rate. It is not practical to investigate all values of D for all the combinations of opcodes studied in this experiment; therefore, the cost function (13) is evaluated for D = 1, 1.5, 2 and 4. The results are shown in Fig. 4, where the upper part of the graph shows the detection accuracies for D = 1, 1.5, 2 and 4 against the program run lengths, and the lower part of the graph shows the corresponding FN rates. The following observations can be made:
Fig. 4 Evaluation of the scalar D used in the cost function
D = 1 produces a detection accuracy ranging from 72.3 to 90.8 % (average 85.1 %) and a FN rate ranging from 0 to 10.79 % (average 5.4 %);
D = 1.5 produces a detection accuracy ranging from 70.8 to 90.8 % (average 84.4 %) and a FN rate ranging from 0 to 9.25 % (average 4.96 %);
D = 2 produces a detection accuracy ranging from 70.8 to 90.8 % (average 84.4 %) and a FN rate ranging from 0 to 6.18 % (average 2.98 %);
D = 4 produces a detection accuracy ranging from 70.8 to 81.5 % (average 75.1 %) and a FN rate ranging from 0 to 3.1 % (average 0.44 %).
Considering the average results: D = 1 and D = 1.5 yield very similar results, with good detection accuracies of 85.1 and 84.4 % respectively, but both produce a high FN rate of approximately 5 %. D = 4 produces an excellent FN rate of 0.44 %; however, the corresponding detection accuracy is low at 75.1 %. D = 2 yields a compromise between D = 1.5 and D = 4, with a detection accuracy of 84.4 % and an FN rate of 2.98 %.
The results show that a lower value of D achieves a higher detection rate at the expense of the FN rate, while a greater value of D results in a lower FN rate at the cost of the detection rate. D = 2 delivers a low FN rate without overly penalising the detection accuracy and is therefore used in the steering function (13) for the remainder of the experiments carried out in this paper.
The SVM feature search uses Eq. (13) with D = 2 and scans all combinations of opcodes. The search starts with one opcode and examines each of the filtered opcodes, testing for the largest value of (13). Next, the search is repeated, examining all unique combinations of two features, and so forth, until all 20 opcode features are used. Table 1 shows the results, with the maximum optimisation values highlighted.
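A sketch of this exhaustive subset search (ours; 'evaluate' stands in for the tenfold cross-validated SVM run returning detection accuracy and FN rate):

from itertools import combinations

def feature_search(X, y, feature_ids, evaluate, D=2.0):
    # Scan all r-subsets of the 20 filtered opcodes, r = 1..20,
    # keeping the subset that maximises the cost function (13).
    best_subset, best_opt = None, float("-inf")
    for r in range(1, len(feature_ids) + 1):
        for subset in combinations(feature_ids, r):
            acc, fn = evaluate(X[:, list(subset)], y)   # 10-fold cross-validation
            opt = acc - D * fn                          # Eq. (13)
            if opt > best_opt:
                best_subset, best_opt = subset, opt
    return best_subset, best_opt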
Table 1 Program run length versus %optimisation value
Note that the columns '1 to 20' represent the number of opcodes in each test, and the rows '1, 2, 4, 8, …, 8192' represent the program run lengths in k-opcodes. The optimisation value is shown against that number of opcodes and program run length. That is, the first row shows the cost function value (the measure of performance) for a single opcode feature, with the maximum optimisation value for each program run length; the second row shows the cost function values for two opcode features, with the maximum optimisation value for each program run length; and so on. In Table 1, the maximum values are identified with an underscore. It can be seen that a point is reached when adding more features results in a reduction of the maximum value; the assumption made is that over-fitting is occurring. As already mentioned, the search is guided by the performance metric in Eq. (13) and is measured using tenfold cross-validation.
While an optimal detection rate is a vital characteristic of any detection system, FP and FN rates need to be considered. These experiments are aimed at end host detection, and it can be argued that FN rates outweigh the importance of FP rates. Therefore, the aim of our approach is to convict all suspicious files and let further malware analysis determine their true status.
In a final testing phase, bootstrapping is introduced to ensure a robust measure of out-of-sample generalisation performance. The concern is that sample clustering may result, as many of the malware samples belong to the same malware family and often have similar file names. The parser reads files from the directory in alphabetical order when creating the density histograms, which may result in clustering of malware samples that belong to the same family. Therefore, randomly selecting test samples prior to the SVM processing ensures that the validation data is random.
Bootstrapping is implemented in Matlab using the built-in function 'randperm' to randomly split the dataset into training and testing data. The labels are first overwritten with ones to indicate benign training samples and zeros to indicate malicious training samples; the script then randomly reassigns 10 % of the benign and of the malicious files for testing, as shown in the sketch below.
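The paper's script itself is in Matlab and is not reproduced here; the following Python sketch captures the equivalent logic, with numpy's permutation playing the role of randperm (the function and variable names are illustrative, not the paper's):

import numpy as np

rng = np.random.default_rng()

def bootstrap_split(n_benign, n_malicious, test_frac=0.10):
    # Training labels as described above: 1 = benign, 0 = malicious.
    labels = np.concatenate([np.ones(n_benign), np.zeros(n_malicious)])
    # Randomly mark 10 % of each class as held-out test samples.
    test = np.zeros(labels.size, dtype=bool)
    benign_test = rng.permutation(n_benign)[: round(test_frac * n_benign)]
    malicious_test = n_benign + rng.permutation(n_malicious)[: round(test_frac * n_malicious)]
    test[benign_test] = True
    test[malicious_test] = True
    return labels, test

# One bootstrap iteration: train on ~90 % of each class, test on the rest.
labels, test_mask = bootstrap_split(n_benign=300, n_malicious=300)

Repeating this split (200 times, as justified below) gives the distribution of out-of-sample results.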
The premise of bootstrapping is that, in the absence of the true distribution, a conclusion about the distribution can be drawn from the samples obtained. Parke et al. [26] suggest that 200 iterations are sufficient to obtain statistically meaningful mean and standard deviation values.
As previously mentioned, the optimisation value is used to find a set of features that yield the optimum combination of detection accuracy and FN rate (as shown in Table 1). Figure 5 shows the detection accuracy and the FN rates for the different program run lengths derived from the maximum optimisation values (Table 1). The results shown in Fig. 5 are validated using 200 iterations of the bootstrapping method. Figure 5 shows that medium program run lengths produce the best detection accuracy coupled with the lowest FN rates. Good detection rates are also achieved for short program run lengths, but these need to be considered in conjunction with the corresponding FN rates.
Detection accuracy and FN rate versus program run length
While there is no universally defined value that specifies a 'good detection' system, the values obtained in these experiments need to be placed in context. Curtsinger et al. [27] defined 0.003 % FN as an 'extremely low false negative system' and Dahl [28] classified a system with < 5 % FN as having a 'reasonably low' false negative rate. Ye et al. [29] examined several detection methods and found that FN rates varied significantly with different classifiers, such as Naive Bayes with 10.4 % FN; SVM with 1.8 % FN; Decision Tree (J48) with 2.2 % FN; and the Intelligent Malware Detection System (IMDS) with 1.6 % FN.
While our approach fails to satisfy the criterion of an 'extremely low' FN rate, it does meet the criterion for a 'reasonably low' FN rate for program run lengths of 1k and above 8k.
Figure 6 shows the detection accuracies (DR) and the false negative rates (FN) plotted against the number of features used for classification. Figure 6 is constructed by taking an average of the detection accuracies and false negative rates across the program run lengths (as indicated by the maximum optimisation values shown in Table 1) for feature groups (1–20). This shows the relationship between the number of features and the detection accuracy and false negative rates. It can be seen that both the detection accuracy and the false negative rate improve with an increasing number of features (up to 13 features), and degrade and become more inconsistent (greater variance) thereafter.
Detection accuracy and FN versus number of features
It can be seen (Fig. 6) that adding more features does not always improve the results. The performance of both the detection accuracy and the FN rate peaks at 13 features (on average), above which the performance degrades. This degradation is pervasive across all the program run lengths and is likely due to over-fitting caused by too much variance being introduced by the additional features. Again, the smallest variance occurs with 13 features (on average).
The research presented investigated the use of run-time opcode traces to discriminate between malicious and benign software. Table 2 summarizes the results in terms of performance (detection, false negative, and false positive rates) versus program run lengths with the corresponding opcode features.
Table 2 Optimum features for malware detection at selected run lengths (K-opcodes)
The performance rates are listed in the right-hand column (taken from Table 1) and correspond to different program run lengths as indicated in the left-most columns i.e. 1k-opcodes, 2k-opcodes, 4k-opcodes, 8k-opcodes, etc. The central columns list the opcodes used to achieve these results.
Encryption-based malware often uses the xor opcode to perform its encryption and decryption. Table 2 shows that xor frequently appears in the shorter program run lengths. This frequent appearance of xor is expected, as the unpacking/decrypting occurs at the start of a program. An exception is the 4k-opcode run length, where xor is not used to classify benign and malicious software.
Figure 7 presents the opcode categories in terms of their ability to detect malware and is constructed from the information presented in Table 2. The contribution of each category is calculated and then normalised using the total area of all the categories. The results show that the flow control category is the most effective at 59 %, followed by logic and arithmetic at 31 %. This implies that program structure (flow control) is the most significant indicator of benign versus malicious software, followed by the logic and arithmetic components of the program, which concurs with Bilar's findings [8, 9].
Breakdown of malware detection by opcode category
In summary, several observations can be made:
More is not always best; the optimum number of features varies with the program run length, but typically 13 opcodes (on average) yield the best results. As an example, the maximum detection accuracy (83.4 %) for the 1k-opcode program run length is achieved with 14 features. However, adding more features decreases the detection accuracy, which is typical of all the program run lengths.
Table 2 shows that xor is used as an indicator of malware for shorter program run lengths, i.e. 1k-opcodes to 128k-opcodes (excluding 4k-opcodes). This is expected behaviour, as encrypted malware frequently uses xor to perform its decryption, which is normally exercised in the early stages of program execution.
An exception is the absence of xor at the 4k-opcode run length, which is not clearly understood beyond the fact that the machine learning algorithm did not choose it as an optimal feature for this program run length, i.e. other features performed better at this particular run length.
While the FN rates are not ideal, most of the program run lengths (excluding 2 and 4k-opcodes) can be considered to have a 'reasonably low' FN rate (FN < 5 %). The relatively short program run lengths of 2 and 4k-opcodes have high FN rates of 8.47 and 13.49 % respectively. The other program run lengths present good detection rates of 81–89 %, with FN rates between 1.58 and 5.87 %.
The maximum detection accuracy of 86.3 % with the lowest FN rate (1.58 %) is obtained for a program run length of 32k-opcodes. However, a program run length of 1K-opcodes produces a good detection accuracy of 83.4 %, with a respectable FN rate of 4.2 %.
The bottom row (Occur) of Table 2 shows the number of times a particular opcode was selected by the classifier (SVM) as an indicator of malware. For example, opcode add was chosen 13 times out of 14 program run lengths, whereas opcode lods was only chosen once, for the 8k-opcode run length. What is clear is that the opcodes chosen by the SVM change with the program run length. Our observations show that shorter program run lengths rely on 'logic and arithmetic' and 'flow control' opcodes, whereas the longer program run lengths rely more on 'flow control' opcodes. This suggests that detection at longer program run lengths relies on the complexity of the call structure of a program, which is consistent with Bilar's [9] finding that malware has a less complex call structure than non-malicious software.
The experimental work carried out in this research investigated the use of an SVM to detect malware. The features used by the SVM were derived from program traces obtained during program execution. The findings indicate that encrypted malware can be detected using opcodes obtained during program execution. The investigation then established an optimal program run length for malware detection. The dataset was constructed from run-time opcodes, compiled into density histograms, and then filtered prior to SVM analysis. A feature selection cost function was identified and used to steer the SVM for optimal performance. The full spectrum of opcodes was examined for information, and the search for the optimal opcodes was quickly narrowed using an Eigenvector filter.
The findings show that malware detection is possible for very short program run lengths of 1k-opcodes, producing a detection rate of 83.41 % and a FN rate of 4.2 %. Mid-range program run lengths also yield sound detection rates, although their corresponding FN rates deteriorate. The 1k-opcode characteristics provide a basis to detect malware during run-time, potentially before the program can complete its malicious activity, i.e. during the unpacking and deciphering phase.
The research presented provides an alternative malware detection approach that is capable of detecting obfuscated malware and possible zero-day attacks. With a small group of features and a short program run length, a real-world application could be implemented that detects malware with minimal computation, enabling a practical solution for detecting obfuscated malware.
Okane P, Sakir S, McLaughlin K, Im EG (2014) Malware detection: program run length against detection rate. IET Softw 8(1):42–51
O'Kane P, Sezer S, McLaughlin K (2011) Obfuscation: the hidden malware. IEEE Secur Privacy 9(5):41–47
Eskandari M, Hashemi S (2012) A graph mining approach for detecting unknown malwares. J Vis Lang Comput 23(3):154–162
Sung A, Xu J, Chavez P, Mukkamala S, et al (2004) Static analyzer of vicious executables (save). In: Proceedings of the 20th annual computer security applications conference, 2004
Tian R, Batten L, Islam R, et al (2009) An automated classification system based on the strings of trojan and virus families. In: Proceedings of the 4rd international conference on malicious and unwanted software: MALWARE, 2009, pp 23–30
Sami A, Yadegari B, Rahimi H, et al (2010) Malware detection based on mining API calls. In: Proceedings of the 2010 ACM symposium on applied computing, 2010, pp 1020–1025
Lakhotia A, Kumar EU, Venable M (2005) A method for detecting obfuscated calls in malicious binaries. IEEE Trans Softw Eng 31(11):955–968
Bilar D (2007) Opcodes as predictor for malware. Int J Electron Secur Digit Forensics 1(2):156–168
Bilar D (2007) Callgraph properties of executables and generative mechanisms. AI Communications, special issue on Network Analysis in Natural Sciences and Engineering 20(4): 231–243
Agrawal H (2011) Detection of global metamorphic malware variants using control and data flow analysis. WIPO Patent No. 2011119940, 30 September 2011
Santos I, Penya YK, Devesa J, Garcia PG (2009) N-grams-based file signatures for malware detection. S3Lab, Deusto Technological Foundation
Santos I, Brezo F, Nieves J, Penya YK, Sanz B, Laorden C, Bringas PG (2010) Opcode-sequence-based malware detection. In: Proceedings of the 2nd international symposium on engineering secure software and systems (ESSoS), Pisa (Italy), 3–4th February 2010, LNCS 5965, pp 35–43
Santos I, Brezo F, Ugarte-Pedrero X, Bringas PG (2013) Opcode sequences as representation of executables for data-mining-based unknown malware detection. Inf Sci 231:64–82
Anderson B, Storlie C, Lane T (2012, October) Improving malware classification: bridging the static/dynamic gap. In: Proceedings of the 5th ACM workshop on Security and artificial intelligence, pp 3–14. ACM
Shabtai A, Moskovitch R, Feher C, Dolev S, Elovici Y (2012) Detecting unknown malicious code by applying classification techniques on opcode patterns. Secur Inf 1(1):1–22
Moskovitch R, Feher C, Tzachar N, Berger E, Gitelman M, Dolev S, Elovici Y (2008) Unknown malcode detection using opcode representation. In: Proceedings of the 1st European conference on intelligence and security informatics (EuroISI08), 2008, pp 204–215
Song Y, Locasto M, Stavro A (2007) On the infeasibility of modeling polymorphic shellcode. In: ACM CCS, 2007, pp 541–551
Eilam E (2011) Reversing: secrets of reverse engineering. Wiley, New York
Ferrie P (2011) The ultimate anti-debugging reference. http://pferrie.host22.com/papers/antidebug.pdf. Written May 2011, last accessed 11 October 2012
Chen X, Andersen J, Mao ZM, Bailey M, Nazario J (2008) Towards an understanding of anti-virtualization and anti-debugging behavior in modern malware. In: ICDSN proceedings, 2008, pp 177–186
Heaven VX (2013) Malware collection. http://vxheaven.org/vl.php. Last accessed Oct 2013
O'Kane P, Sezer S, McLaughlin K, Im EG (2013) SVM training phase reduction using dataset feature filtering for malware detection. IEEE Trans Inf Forensics Secur 8(3):500–509
Kantardzic M (2011) Data mining: concepts, models, methods, and algorithms. Wiley, London. ISBN 0-471-22852-4
Herbrich R, Graepel T (2002) A PAC-Bayesian margin bound for linear classifiers. IEEE Trans Inf Theory 48(12):3140–3150
Graf ABA, Borer S (2001) Normalization in support vector machines. In: Pattern recognition. Springer, Berlin, Heidelberg, pp 277–282
Parke J, Holford NHG, Charles BG (1999) A procedure for generating bootstrap samples for the validation of nonlinear mixed-effects population models. Comput Methods Programs Biomed 59(1):19–29
Curtsinger C, Livshits B, Zorn B, Seifert C (2011) Zozzle: low-overhead mostly static javascript malware detection. In: Proceedings of the usenix security symposium, Aug 2011
Dahl G, Stokes JW, Deng L, Yu D (2013) Large-scale malware classification using random projections and neural networks. Poster (MLSP-P5.4), May ICASSP 2013, Vancouver Canada, IEEE Signal Processing Society, 2013
Ye Y, Wang D, Li T, Ye D (2007) IMDS: intelligent malware detection system. In: Proceedings of the 13th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 2007
We have read the ICMJE guidelines and can confirm that the authors PO, SS, KM contributed intellectually to the material presented in this manuscript. All authors read and approved the final manuscript.
We the authors of this paper confirm that we do not have any competing financial, professional or personal interests that would influence the performance or presentation of the work described in this manuscript.
Centre for Secure Information Technologies, Queen's University Belfast, Belfast, Northern Ireland, UK
Philip O'Kane, Sakir Sezer & Kieran McLaughlin
Correspondence to Philip O'kane.
O'kane, P., Sezer, S. & McLaughlin, K. Detecting obfuscated malware using reduced opcode set and optimised runtime trace. Secur Inform 5, 2 (2016). https://doi.org/10.1186/s13388-016-0027-2
Metamorphism malware
CyberSecurity research and education
\begin{definition}[Definition:Minimum Value of Real Function/Local/Strict]
Let $f$ be a real function defined on an open interval $\openint a b$.
Let $\xi \in \openint a b$.
$f$ has a '''strict local minimum at $\xi$''' {{iff}}:
:$\exists \openint c d \subseteq \openint a b: \forall x \in \openint c d \setminus \set \xi: \map f x > \map f \xi$
\end{definition}
2 versions of the koyck distributed lag
In econometrics (this material is in a lot of the literature but, IMHO, can most clearly be found in Harvey's text, "Econometric Analysis of Time Series"), there are generally two versions of a specific lagged-dependent-variable model which is usually termed the Koyck distributed lag. (It's a very special case of a distributed lag.)
I write down version 1 and version 2 below.
Version 1 is shown below.
1): $y_t = \rho \times y_{t-1} + \beta \times x_t + \epsilon_t$
where $\epsilon_t = (v_t - \rho v_{t-1}) \sim N(0, \sigma^2) $.
There is also version 2, which is the same equation but with a pure noise error term:

2): $y_t = \rho \times y_{t-1} + \beta \times x_t + \epsilon_t$

where $\epsilon_t \sim N(0, \sigma^2) $.
So, in version 1, the error term is such that $v_t$ is AR(1) and in version 2, the error term is pure noise.
My question is the following: Is there a way to write a version where the error term is MA(1)? I initially figured there must be some kind of symmetry because we have the pure noise case and the AR(1) case, so I figured there must be a version with an MA(1) error term. But now I'm not so sure and actually don't think so. Thanks for any insights.
EDIT BASED ON GOOD QUESTION IN COMMENT:
Hi: $\frac{\epsilon_t}{(1 - \rho L)} = v_t$ where $L$ is the lag operator.
Therefore, $v_{t}$ can be thought of as an exponentially smoothed average of the past white noise error terms $\epsilon_{i}$.
Now, version 1 of the model can be re-written in the following way:
$y_t = \rho \times y_{t-1} + \beta \times x_t + (1-\rho L) v_{t} $
which can be re-written as
$y_{t}(1 - \rho L) = \beta \times x_t + (1 - \rho L) v_{t} $
Then, dividing the whole the equation by $(1 - \rho L)$ results in what I call the long form of version 1 of the model:
$y_{t} = \beta \times \sum_{i=0}^{\infty} \rho^{i} x_{t-i} + v_{t}$
So, the model implies that the response is $\beta$ times an exponentially smoothed average of the past $x_{i}$ plus $v_{t}$. But we can re-write the last term, which results in
$y_{t} = \beta \times \sum_{i=0}^{\infty} \rho^{i} \times x_{t-i} + \sum_{i=0}^\infty \rho^{i} \epsilon_{t-i}$
So, in version 1, the response can be thought of as an exponentially smoothed version of the past $x_{t}$ plus an exponentially smoothed version of the past error terms, namely the $\epsilon_{i}$.
I won't write it out, but, in the long form of version 2, the error term is not an exponentially smoothed average of the past $\epsilon_{i}$. The past error terms are not involved and the error term is just $\epsilon_{t}$.
Thanks for good question.
SECOND EDIT BASED ON GOOD QUESTION IN COMMENT:
When your question made me write it out in the long form, it hit me that an MA(1) structure could just be:
$y_{t} = \beta \times \sum_{i=0}^{\infty} \rho^{i} \times x_{t-i} + \epsilon_t + \rho \epsilon_{t-1} $
This would be an MA(1) error term for the Koyck distributed lag. I don't think it simplifies in any way by writing it in the short form, but it's still a nice way of thinking of a third possibility. In one case, the error term lasts one period; in another case, the error term is an exponentially smoothed average of past errors; and in the last case, the error is a linear combination of the last two errors. Clearly your question totally led to this and I think what you're saying in your comment is the same as here, so thanks much for the insight.
mark leeds
$\begingroup$ Where does $v_t$ come from? $\endgroup$ – caverac Dec 2 '18 at 10:12
$\begingroup$ @caverac: I added some explanation at at the bottom. Thanks for good question. $\endgroup$ – mark leeds Dec 2 '18 at 19:20
$\begingroup$ Thanks for the update. So what you are looking for is something of the form $\epsilon_t = \eta_{t} + \theta \eta_{t-1}$, where $\eta_t \sim N(0, s^2)$, is that correct? $\endgroup$ – caverac Dec 2 '18 at 22:35
$\begingroup$ @caverac: I added another update but I'm pretty certain that we're on the same page as far as what you said in your latest comment. funny how that worked out in terms of your question giving me the answer. thanks a lot.. $\endgroup$ – mark leeds Dec 2 '18 at 23:45
$\begingroup$ I don't think I did much ... but am happy you solved your question $\endgroup$ – caverac Dec 2 '18 at 23:47
As I explained in the edited question, the comment-question made by @caverac made me write the models out in their respective long-form versions. Doing this made it obvious that an MA(1) structure can be imposed; the model is shown below. As far as I can tell, the long form of this model cannot be transformed into a lagged dependent variable model (what I refer to as the short form) as neatly as in the AR(1) and pure noise cases.
Long Form of Koyck Distributed Lag Model With MA(1) error term:
$y_{t} = \beta \times \sum_{i=0}^{\infty} \rho^{i} x_{t-i} + \epsilon_t + \rho \times \epsilon_{t-1}$
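Not part of the original answer, but for anyone who wants to see the three long-form error structures side by side, here is a small numpy simulation ($\rho$, $\beta$ and the sample length are arbitrary illustrative choices, and the infinite lag sums are truncated):

import numpy as np

rng = np.random.default_rng(0)
rho, beta = 0.6, 1.5          # illustrative parameter values
T, burn = 500, 200            # keep T observations after a burn-in
n = T + burn

x = rng.normal(size=n)        # exogenous regressor x_t
eps = rng.normal(size=n)      # white noise epsilon_t

def smooth(z, rho, lags=50):
    # Truncated exponentially weighted sum: sum_i rho^i z_{t-i}
    w = rho ** np.arange(lags)
    return np.convolve(z, w)[: len(z)]

xs = smooth(x, rho)

# Long-form responses under the three error structures discussed above
y_v1  = beta * xs + smooth(eps, rho)              # smoothed past errors
y_v2  = beta * xs + eps                           # error lasts one period
y_ma1 = beta * xs + eps + rho * np.roll(eps, 1)   # last two errors, MA(1)

y_v1, y_v2, y_ma1 = (s[burn:] for s in (y_v1, y_v2, y_ma1))

The burn-in discards both the truncation error in the lag sums and the wrap-around term that np.roll introduces at index 0.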
Serum suPAR and syndecan-4 levels predict severity of community-acquired pneumonia: a prospective, multi-centre study
Qiongzhen Luo1,
Pu Ning1,
Yali Zheng1,
Ying Shang1,
Bing Zhou1 &
Zhancheng Gao1
Critical Care volume 22, Article number: 15 (2018)
The Letter to this article has been published in Critical Care 2019 23:405
Community-acquired pneumonia (CAP) is a major cause of death worldwide and occurs with variable severity. There are few studies focused on the expression of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 in patients with CAP.
A prospective, multi-centre study was conducted between January 2014 and December 2016. A total of 103 patients with severe CAP (SCAP), 149 patients with non-SCAP, and 30 healthy individuals were enrolled. Clinical data were recorded for all enrolled patients. Serum suPAR and syndecan-4 levels were determined by quantitative enzyme-linked immunosorbent assay. The t test and Mann–Whitney U test were used to compare between two groups; one-way analysis of variance and the Kruskal–Wallis test were used to compare multiple groups. Correlations were assessed using Pearson and Spearman tests. Area under the curve (AUCs), optimal threshold values, sensitivity, and specificity were calculated. Survival curves were constructed and compared by log-rank test. Regression analyses assessed the effect of multiple variables on 30-day survival.
suPAR levels increased in all patients with CAP, especially in severe cases. Syndecan-4 levels decreased in patients with CAP, especially in non-survivors. suPAR and syndecan-4 levels were positively and negatively correlated with severity scores, respectively. suPAR exhibited high accuracy in predicting SCAP among patients with CAP with an AUC of 0.835 (p < 0.001). In contrast, syndecan-4 exhibited poor diagnostic value for predicting SCAP (AUC 0.550, p = 0.187). The AUC for predicting mortality in patients with SCAP was 0.772 and 0.744 for suPAR and syndecan-4, respectively; the respective prediction threshold values were 10.22 ng/mL and 6.68 ng/mL. Addition of both suPAR and syndecan-4 to the Pneumonia Severity Index significantly improved their prognostic accuracy, with an AUC of 0.885. Regression analysis showed that suPAR ≥10.22 ng/mL and syndecan-4 ≤ 6.68 ng/mL were reliable independent markers for prediction of 30-day survival.
suPAR exhibits high accuracy for both diagnosis and prognosis of SCAP. Syndecan-4 can reliably predict mortality in patients with SCAP. Addition of both suPAR and syndecan-4 to a clinical scoring method could improve prognostic accuracy.
Trial registration
ClinicalTrials.gov, NCT03093220. Registered on 28 March 2017 (retrospectively registered).
Community-acquired pneumonia (CAP) is a very common type of respiratory infection. Despite the rapid development of new treatments, pneumonia continues to cause a high rate of complications and associated costs, and remains a major international cause of death [1, 2]. Because of the diversity of clinical conditions and the lag in a clear definition of the causative pathogen, the most challenging task for a physician is the risk stratification of patients with CAP and the subsequent administration of individual treatment [3]. CURB-65 and the Pneumonia Severity Index (PSI) are widely recommended and validated scoring methods: CURB-65 is a predictive assessment that is concisely and conveniently implemented in clinical settings [4]. PSI is a sensitive indicator in judging whether patients should be hospitalized [4]. However, both the CURB-65 and PSI scores are neither comprehensive nor exhaustive. CURB-65 assesses very few aspects of disease with low specificity, while PSI relies primarily on age and underlying diseases such that it is inaccurate in young and otherwise healthy patients. In recent years, novel biomarkers, such as soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) [5], proadrenomedullin (pro-ADM) [6], and copeptin [7] have been widely validated in clinical settings. However, their sensitivity and specificity for prediction of pneumonia severity are variable and largely insufficient; thus, there is a need for new biomarkers to provide effective risk stratification and assist in clinical judgement.
Urokinase-type plasminogen activator receptor (uPAR) is a component of the plasminogen activator (PA) system. This system plays an important role in many physiological and pathological processes, including tissue remodelling [8], thrombosis [9], inflammation [10], and tumourigenesis [11]. The soluble form of uPAR (suPAR) can be detected in serum and other organic fluids [12]; its levels are increased in patients with HIV infection, malaria, tuberculosis, and sepsis, suggesting that it could serve as a useful prognostic biomarker [13]. This capability may be useful for prediction of the severity of CAP, but few studies have focused on suPAR levels in patients with CAP.
The syndecan proteins, a family of transmembrane heparan sulphate proteoglycans, bind to various extracellular effectors and regulate many processes, such as tissue homeostasis, inflammation, tumour invasion, and metastasis [14,15,16]. Syndecan-4 is the most well-known member of the family. Studies have demonstrated that levels of syndecan-4 increase in response to bacterial inflammation, and that syndecan-4 possesses an anti-inflammatory function in acute pneumonia [17]. However, data on the relationship between the expression of syndecan-4 and the severity of CAP are rare.
Considering the previous experimental data on suPAR and syndecan-4, we hypothesized that their serum levels might be correlated with the severity and prognosis of CAP. Thus, the aim of this study was to clarify the precise roles of suPAR and syndecan-4 in CAP, and to validate the effectiveness of these proteins as indicators of the severity of CAP and of the risk of death in severe CAP.
This prospective, observational study was conducted during the period of January 2014 through December 2016 among patients hospitalized in Peking University People's Hospital, Tianjin Medical University General Hospital, Wuhan University People's Hospital, and Fujian Provincial Hospital (ClinicalTrials.gov ID, NCT03093220). All patients in this study were diagnosed with CAP.
CAP was defined by the following criteria [18]: (1) a chest radiograph showing either a new patchy infiltrate, leaf or segment consolidation, ground glass opacity, or interstitial change; (2) at least one of the following signs – (a) the presence of cough, sputum production, and dyspnoea; (b) core body temperature >38.0 °C; (c) auscultatory findings of abnormal breath sounds and rales; or (d) peripheral white blood cell counts >10 × 10⁹/L or <4 × 10⁹/L; and (3) symptom onset that began in the community, rather than in a healthcare setting.
Severe CAP (SCAP) was diagnosed by the presence of at least one major criterion, or at least three minor criteria, as follows [19]. Major criteria: (1) requirement for invasive mechanical ventilation and (2) occurrence of septic shock with the need for vasopressors. Minor criteria: (1) respiratory rate ≥30 breaths/min; (2) oxygenation index (PaO2/FiO2) ≤250; (3) presence of multilobar infiltrates; (4) presence of confusion; (5) serum urea nitrogen ≥20 mg/dL; (6) white blood cell count ≤4 × 10⁹/L; (7) blood platelet count <100 × 10⁹/L; (8) core body temperature <36.0 °C; and (9) hypotension requiring aggressive fluid resuscitation.
The exclusion criteria were age <18 years, or the presence of any of the following: pregnancy, immunosuppressive condition, malignant tumour, end-stage renal or liver disease, active tuberculosis, or pulmonary cystic fibrosis.
Sample size calculation
In this study, we set the type I error/significance level (two-sided) at α = 0.05 and the type II error at β = 0.10 to provide 90% power. The corresponding standard normal deviates were Zα = 1.96 and Zβ = 1.282. Assuming the mortality of non-SCAP and SCAP was P0 = 0.05 and P1 = 0.25, respectively [1, 19], the sample size was calculated as follows:
$$R = \frac{P_1}{P_0}; \quad A = P_1 (1 - P_0) + P_0 (1 - P_1); \quad B = (R - 1) P_0 (1 - P_0); \quad K = (A + B)(RA - B) - R (P_1 - P_0)^2$$
$$N'_{\text{non-SCAP}} = \frac{Z_\beta^2 K + Z_\alpha^2 (A + B)^2 + 2 Z_\alpha Z_\beta (A + B) \sqrt{K}}{(P_1 - P_0)^2 (A + B)} = 121$$
$$N'_{\text{SCAP}} = \frac{N'_{\text{non-SCAP}}}{R} = 25$$
The rate of ineligible inclusion was 10–30%. The number of patients with non-SCAP to be recruited was therefore calculated as follows:
$$N_{\text{non-SCAP}} = \frac{N'_{\text{non-SCAP}}}{1 - 30\%} = 173$$
The number of patients with SCAP to be recruited was calculated as follows:
$$N_{\text{SCAP}} = \frac{N'_{\text{SCAP}}}{1 - 30\%} = 36$$
We ultimately recruited 252 patients with CAP, including 103 with SCAP and 149 with non-SCAP. All patients with SCAP were admitted to the intensive care unit (ICU). The screening process is shown in Fig. 1. Thirty healthy people (>18 years old, without any exclusionary diseases) served as a control group. All subjects provided informed consent. This study was approved by the medical ethics committee of Peking University People's Hospital.
Flowchart of the study population. SCAP severe community-acquired pneumonia
Blood sample collection
Clinical data were recorded on all patients; these included whole blood leukocyte count (WBC), blood biochemical assessment, C-reactive protein (CRP), procalcitonin (PCT), blood gas analysis, and chest images. The confusion, urea, respiratory rate, blood pressure, and age ≥65 years old (CURB-65) score [20], PSI [21], and Acute Physiology and Chronic Health Evaluation (APACHE) II [22] were calculated from clinical and laboratory data.
Peripheral venous blood samples were collected within 2 days after admission, in sterile, pro-coagulation tubes and centrifuged immediately; the resulting serum samples were stored at –80 °C until analysis.
Measurement of suPAR and syndecan-4
Serum suPAR levels were measured using quantitative enzyme-linked immunosorbent assay (ELISA) kits (DUP00; R&D Systems, Minneapolis, MN, USA) in duplicate as instructed by the manufacturer. Briefly, test wells containing serum samples, and standard wells containing a gradient concentration of a standard protein, were analysed using a microplate assay. Absorbance at 450 nm was measured using the Multiskan FC (Thermo, Waltham, MA, USA). The detection sensitivity was 33 pg/mL and the intra-assay and inter-assay coefficients of variation were < 8%. Serum syndecan-4 levels were determined using ELISA (JP27188; Immuno-Biological Laboratories, Fujioka, Japan) in duplicate as instructed by the manufacturer. Absorbance at 570 nm was measured using the Multiskan FC. The detection sensitivity was 3.94 pg/mL and the intra-assay and inter-assay coefficients of variation were < 5%. Using the standard curve, the quantities of syndecan-4 and suPAR were calculated using CurvExpert Professional 2.6.3 (Hyams Development, Madison, WI, USA).
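As an aside on the quantification step: ELISA concentrations are conventionally read off a four-parameter logistic (4PL) standard curve. The snippet below is an illustrative Python equivalent of that step (the study used CurvExpert; the standard concentrations and absorbances here are made-up example values):

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection concentration, b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard series (pg/mL) and measured absorbances
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.85, 1.40, 2.05])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 1000.0, 3.0], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # Invert the fitted curve: absorbance -> concentration.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_conc = od_to_conc(0.60, *params)  # concentration for one sample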
Normally distributed continuous variables are expressed as means ± standard error of the mean (SEM), non-normally distributed continuous variables are expressed as median (interquartile range), and categorical variables are expressed as number (percentage). For continuous variables with a normal distribution, the independent Student's t test was utilized to compare two groups. The Mann–Whitney U test was used to compare categorical variables and non-normally distributed variables between two groups. One-way analysis of variance and the Kruskal–Wallis test were used to compare multiple groups. Correlation between variables with a normal distribution was assessed using Pearson's correlation test, while non-normal distributions were assessed using Spearman's rho test. Receiver operating characteristic (ROC) analysis was performed to differentiate patients with SCAP from the overall patients with CAP, and to separate non-survivors from the overall patients with SCAP. Areas under the curve (AUCs), optimal threshold values, sensitivity, and specificity were calculated. Kaplan–Meier methods were used to build 30-day survival curves, and survival rates were compared using the log-rank test. Cox proportional hazards regression analyses were used to analyse the effect of an array of variables on 30-day survival.
A two-sided p value <0.05 was considered statistically significant; confidence intervals (CIs) were set at 95%. Statistical analyses were performed by using GraphPad Prism version 6.01 software (GraphPad Software, La Jolla, CA, USA) and MedCalc statistical software version 15.2.2 (MedCalc Software, Ostend, Belgium).
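For illustration, the core ROC computation for a single serum marker can be reproduced in a few lines of Python. This is a sketch, not the actual analysis pipeline (the study used GraphPad Prism and MedCalc), and it assumes the Youden index as the criterion for the 'optimal threshold':

import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_summary(marker, outcome):
    # outcome: 1 for the event (e.g. SCAP, or death), 0 otherwise.
    # For markers where LOWER values indicate the event (e.g. syndecan-4),
    # pass the negated marker so that higher scores predict the event.
    fpr, tpr, thresholds = roc_curve(outcome, marker)
    best = np.argmax(tpr - fpr)  # Youden index J = sensitivity + specificity - 1
    return {"auc": auc(fpr, tpr),
            "threshold": thresholds[best],
            "sensitivity": tpr[best],
            "specificity": 1.0 - fpr[best]}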
Characteristics of the enrolled patients
From January 2014 to December 2016, 252 patients (165 male, 87 female) were enrolled and divided into two groups (103 with SCAP and 149 with non-SCAP) according to their clinical characteristics. As shown in Table 1, there were no significant differences between the SCAP and non-SCAP groups in sex, past medical history, smoking, antibiotic pre-treatment, and whether a causative pathogen was established. Chest radiographs revealed that 84.47% and 29.13% of patients with SCAP exhibited bilateral lung infection and pleural effusion, respectively; these proportions were significantly higher than those observed in the non-SCAP group (29.53% and 4.70%, respectively; p < 0.001 for both comparisons). Laboratory analyses showed that the SCAP group exhibited a higher WBC and higher neutrophil/lymphocyte ratio (NLR) than detected in the non-SCAP group (p < 0.001 for both comparisons). Furthermore, serum CRP and PCT levels were substantially greater in the SCAP group than in the non-SCAP group (p < 0.001 for both comparisons). The CURB-65, PSI, and APACHE II scores in the SCAP group were 1 (0–2), 94.27 ± 37.42, and 15.18 ± 5.65, respectively. These scores were significantly higher than those of patients in the non-SCAP group (0 (0–1), 57.50 (35.00–72.25), and 8 (6–9), respectively; p < 0.001 for all comparisons). The mortality rate in the SCAP group was 18.45%, while no patients with non-SCAP died in hospital.
Table 1 Clinical characteristics and laboratory findings of the study population
Levels of suPAR and syndecan-4 in each group
As shown in Fig. 2, serum suPAR level in healthy individuals was 1.71 ± 1.00 ng/mL, which was significantly lower than that in the non-SCAP group (2.76 (2.01–4.20) ng/mL, p < 0.001). The SCAP group exhibited the highest level of suPAR, 6.17 (4.37–9.72) ng/mL (compared with the non-SCAP level, p < 0.001). In patients with SCAP, the suPAR level of non-survivors was 13.19 (6.05–18.68) ng/mL, which was notably higher than the suPAR level of survivors (6.09 (4.01–8.63) ng/mL, p < 0.001).
Levels of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 across multiple groups. a, b Levels of suPAR and syndecan-4 in patients with severe community-acquired pneumonia SCAP, patients with non-SCAP, and healthy individuals, respectively. For suPAR, SCAP versus non-SCAP, p < 0.001; non-SCAP versus healthy individuals, p < 0.001. For syndecan-4, SCAP versus healthy individuals, p < 0.001; non-SCAP versus healthy individuals, p < 0.001. c, d Levels of suPAR and syndecan-4 in survivors and non-survivors among patients with SCAP, patients with SCAP who met at least one major criterion (major criteria), and patients with SCAP who met only minor criterion (minor criteria). For suPAR, survivors versus non-survivors, p < 0.001; major criteria versus minor criteria, p = 0.459. For Syndecan-4, survivors versus non-survivors, p = 0.002; major criteria versus minor criteria, p = 0.671. e, f Comparison of suPAR and syndecan-4 in patients with SCAP and non-SCAP for various causative pathogens; p > 0.05 for all comparisons
In contrast, the expression of syndecan-4 was reduced in patients with CAP. The level of syndecan-4 in healthy individuals was 14.30 ± 5.34 ng/mL, whereas in the SCAP and non-SCAP groups the levels were 9.54 ± 5.92 and 10.15 ± 4.37 ng/mL, respectively (p < 0.001 for both comparisons). There was no difference in the levels of syndecan-4 between the SCAP and non-SCAP groups (p = 0.177, data not shown). The expression of syndecan-4 in the non-survivor SCAP group was lower than in the survivor SCAP group (5.81 ± 4.38 and 10.21 (6.20–13.56) ng/mL, respectively, p = 0.002).
In order to avoid possible effects of pre-admission duration of symptoms on the levels of suPAR and syndecan-4, we divided patients into two groups according to the duration of symptoms: > 3 days or ≤ 3 days. The expression of suPAR and syndecan-4 was not significantly different between the two groups in either the SCAP or the non-SCAP group (suPAR, p = 0.549 (SCAP), p = 0.339 (non-SCAP); syndecan-4, p = 0.078 (SCAP), p = 0.635 (non-SCAP), data not shown). Moreover, the expression of suPAR and syndecan-4 was not significantly different between patients with SCAP who met at least one major criterion and those who met only minor criteria (suPAR, p = 0.459; syndecan-4, p = 0.671).
Figure 2e and f summarizes the pathogens detected in patients with CAP who were classified as having bacterial, viral, atypical pathogen (including mycoplasma pneumonia, chlamydial pneumonia, and legionella pneumonia), mixed pathogen, and unknown pathogen infections. The causative agent was detected in 50 patients with SCAP and 65 patients with non-SCAP. There were no differences in the levels of suPAR or syndecan-4 between patients with SCAP and patients with non-SCAP who exhibited different causative pathogen infections (p > 0.05 for all comparisons).
Correlation between levels of suPAR and syndecan-4 and the severity of CAP
We chose the CURB-65, PSI, and APACHE II scoring systems to evaluate the severity of CAP. Using our entire sample of 252 patients with CAP, serum suPAR level was positively correlated with all three scoring systems: CURB-65, PSI, and APACHE II (r = 0.399, r = 0.433, and r = 0.496, respectively; p < 0.001 for all comparisons; Fig. 3). In addition, suPAR levels were also positively correlated with WBC (r = 0.232, p < 0.001), NLR (r = 0.351, p < 0.001), CRP (r = 0.272, p < 0.001), and PCT (r = 0.407, p < 0.001); suPAR levels were not significantly correlated with age (r = 0.102, p = 0.107).
Correlation of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 levels with multiple scoring systems across 252 patients with community-acquired pneumonia (CAP). r is the correlation coefficient. a, c, e Levels of suPAR were significantly positively correlated with the confusion, urea, respiratory rate, blood pressure, and age ≥65 years old (CURB-65) score (r = 0.399, p < 0.001), Pneumonia Severity Index (PSI) (r = 0.433, p < 0.001), and Acute Physiology and Chronic Health Evaluation II (APACHE II) score (r = 0.496, p < 0.001), respectively. b, d, f Levels of syndecan-4 were significantly negatively correlated with CURB-65 score (r = -0.220, p = 0.001), PSI (r = -0.279, p < 0.001), and APACHE II score (r = -0.184, p = 0.003), respectively
Conversely, syndecan-4 levels in patients with CAP were negatively correlated with all three scoring systems: CURB-65 (r = -0.220, p = 0.001), PSI (r = -0.279, p < 0.001), and APACHE II (r = -0.184, p = 0.003) (Fig. 3). Syndecan-4 levels were also negatively correlated with PCT (r = -0.304, p < 0.001), but not with WBC (r = 0.005, p = 0.943), NLR (r = -0.067, p = 0.326), CRP (r = -0.127, p = 0.081), or age (r = 0.023, p = 0.719).
Value of suPAR and syndecan-4 in predicting SCAP in patients with CAP
Figure 4 and Table 2 show that suPAR reliably predicted SCAP in patients with CAP, with an AUC of 0.835 (p < 0.001); further, suPAR prediction capability was second only to the APACHE II score (AUC 0.886, p < 0.001). Using a suPAR threshold value of 4.33 ng/mL for diagnosis of SCAP, the sensitivity and specificity for discriminating SCAP and CAP were 76.70% and 79.19%, respectively. Using an APACHE II score >10 as a threshold for diagnosis, the sensitivity and specificity for discriminating SCAP from CAP were 78.64% and 82.55%, respectively. Syndecan-4 was the least accurate in predicting SCAP, with an AUC of 0.550 (not statistically different, p = 0.187). Detailed results are shown in Fig. 4 and Table 2.
Receiver operating characteristic curve analysis of various parameters to discriminate patients with severe community-acquired pneumonia from patients with community-acquired pneumonia. suPAR soluble urokinase-type plasminogen activator receptor, NLR neutrophil/lymphocyte ratio, WBC whole blood leukocyte count, CRP C-reactive protein, PCT procalcitonin, CURB-65 confusion, urea, respiratory rate, blood pressure, and age ≥65 years old score, PSI Pneumonia Severity Index Score, APACHE II Acute Physiology and Chronic Health Evaluation II score, AUC area under the curve
Table 2 Area under the curve (AUC) and thresholds for predicting SCAP in patients with CAP
Prognostic value of suPAR and syndecan-4 in patients with SCAP
The ability of suPAR and syndecan-4 to predict total mortality in patients with SCAP is summarized in Table 3. Notably, the AUCs for suPAR and the PSI score were 0.772 and 0.787, respectively (p < 0.001 for both comparisons). The optimal threshold to predict death was 10.22 ng/mL of suPAR, with a sensitivity of 68.23% and a specificity of 89.29%. Patients with syndecan-4 concentrations <6.68 ng/mL exhibited a noticeably increased risk of death; this threshold yielded a sensitivity of 73.68% and a specificity of 71.43% for prediction of total mortality. The remaining variables, WBC, NLR, CRP, and PCT, had no prognostic value for mortality prediction in patients with SCAP (all p > 0.05).
Table 3 Area under the curve (AUC) and thresholds for predicting total mortality in patients with SCAP
The combination of suPAR, syndecan-4, and PSI was the most accurate predictor of 30-day mortality, with an AUC of 0.885. The AUC of the combination of suPAR, syndecan-4 and CURB-65, and of suPAR, syndecan-4 and APACHE II score was 0.878 and 0.881, respectively (Fig. 5).
Receiver operating characteristic (ROC) curve analysis of various parameters to predict 30-day mortality in patients with SCAP
Kaplan–Meier curves were used to assess the relationship between suPAR and syndecan-4 levels in the prediction of 30-day mortality in patients with SCAP (Fig. 6). Consistent with the prediction threshold for total mortality, the optimal threshold values for 30-day mortality were also 10.22 ng/mL for suPAR (p < 0.001) and 6.68 ng/mL for syndecan-4 (p < 0.001).
Kaplan–Meier analysis of 30-day mortality in patients with severe community-acquired pneumonia. Analysis was stratified by soluble urokinase-type plasminogen activator receptor (suPAR) (a) and syndecan-4 (b) levels
In univariate Cox proportional hazards regression analysis to determine 30-day survival, suPAR levels ≥10.22 ng/mL and APACHE II scores >14 were associated with significantly higher risk ratios than any other variables. In multivariate Cox proportional hazards regression, only suPAR levels ≥10.22 ng/mL and syndecan-4 levels ≤6.68 ng/mL were strong, independent predictors of 30-day survival. Results are summarized in Table 4.
Table 4 Cox proportional hazards regression analysis of the effects of multiple variables on 30-day survival
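To illustrate the form of this multivariate analysis, the sketch below fits a Cox model to a simulated cohort with the two dichotomised markers (a hypothetical example only: the data are synthetic, and the lifelines package, rather than the software named in the Methods, is used):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 103  # size of the SCAP cohort

# Dichotomised covariates at the thresholds reported above
supar_high = rng.random(n) < 0.3   # suPAR >= 10.22 ng/mL
synd4_low = rng.random(n) < 0.3    # syndecan-4 <= 6.68 ng/mL

# Synthetic survival times whose hazard rises with both risk markers
hazard = 0.005 * np.exp(1.2 * supar_high + 1.0 * synd4_low)
time = rng.exponential(1.0 / hazard)
event = (time <= 30).astype(int)   # death observed within 30 days
time = np.minimum(time, 30.0)      # administrative censoring at day 30

df = pd.DataFrame({"time": time, "event": event,
                   "supar_high": supar_high.astype(int),
                   "synd4_low": synd4_low.astype(int)})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios for the two dichotomised markers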
In this prospective study of 252 patients with CAP, there were five major findings: (1) serum suPAR levels increased and syndecan-4 levels decreased in patients with CAP, especially in non-survivors, but the changes were not correlated with pre-admission duration of symptoms or the identity of the causative pathogen; (2) suPAR levels were positively correlated with severity scores (CURB-65, PSI, and APACHE II), whereas syndecan-4 levels were negatively correlated with these same scores; (3) a suPAR threshold value of 4.33 ng/mL discriminated SCAP from CAP, with 76.70% sensitivity and 79.19% specificity, but syndecan-4 levels did not accurately predict SCAP in patients with CAP; (4) the mortality rate was significantly higher in patients with suPAR and syndecan-4 levels ≥10.22 ng/mL and ≤6.68 ng/mL, respectively, and both serum proteins independently predicted 30-day mortality; and (5) the combination of suPAR and syndecan-4 levels with the clinical severity score significantly improved the accuracy of mortality prediction. Taken together, these results suggest that serum suPAR and syndecan-4 levels can predict disease severity in patients with CAP.
suPAR has been widely studied in a variety of infectious diseases, including sepsis [23], tuberculosis [24], and HIV infection [25]. Savva et al. demonstrated that suPAR is a reliable predictor of severity of sepsis and can independently predict unfavourable outcomes in both ventilator-associated pneumonia and sepsis [26]. The findings of our study are consistent with previous studies [26, 27], in that suPAR levels were significantly elevated in patients with SCAP, especially in non-survivors. These results enhance our understanding of the expression of suPAR in severe infections. Furthermore, we found that suPAR levels are not correlated with specific causative agents in the same risk stratification of CAP. This is consistent with a previous study demonstrating that suPAR has no discriminatory value in patients with bacterial, viral, or parasitic infections [28].
Studies on the expression of syndecan-4 in infection are few. Nikaido et al. found that syndecan-4 levels were significantly increased in a study of 30 patients with acute mild pneumonia, and that this protein provides an anti-inflammatory function in acute pneumonia [29]. In contrast, our results show that syndecan-4 levels are significantly lower in patients with CAP compared with healthy individuals; this reduction was more pronounced in non-survivors. The lower syndecan-4 levels in our study may result from the larger sample size and the increase in CAP severity in our study, compared with the previous study. Syndecan-4 expression was elevated in mild acute pneumonia because of bacterial components that stimulated toll-like receptors 2 and 4 [29]. Importantly, the causative mechanism for lowered syndecan-4 levels in patients with CAP is not yet clear. Further, as in the analysis of suPAR, syndecan-4 levels were not statistically different among patients with CAP due to a variety of causative agents.
We investigated the correlation between suPAR and syndecan-4 levels and a variety of clinical parameters. There was strong correlation between levels of suPAR or syndecan-4 and the clinical scoring systems, suggesting that serum suPAR and syndecan-4 might aid in clinical judgment of the degree of severity. Remarkably, there was broad correlation between suPAR levels and a variety of laboratory measures: WBC, NLR, CRP, and PCT. In contrast, Wittenhagen et al. found that suPAR was not correlated with CRP in pneumococcal bacteraemia [27], but these differences may result from different patient samples. The NLR has been reported to indicate mortality and prognosis in various diseases, including tumour [30], inflammation [31], and heart failure [32]. Since NLR is convenient, easily obtained, and low cost, we included NLR as a clinical reference biomarker in this study.
The diagnostic value of suPAR is reportedly poor and was not shown to be superior to CRP or PCT. The AUC for suPAR to discriminate 197 septic ICU patients from 76 non-septic patients is 0.62 [33]. Kofoed et al. measured plasma suPAR levels in 57 patients with systemic inflammatory response syndrome (SIRS), and reported an AUC of 0.54 for suPAR (and 0.81 and 0.72 for CRP and PCT, respectively) in diagnosing bacterial infection [28]. Savva et al. reported an AUC of 0.758 for suPAR to discriminate severe sepsis or patients with septic shock within a group of 180 patients with sepsis; in the same study, the AUC for PCT was 0.652 [26]. Hoenigl et al. reported that in 132 patients with SIRS, the AUC for suPAR and PCT was 0.726 and 0.744, respectively, to differentiate patients with and without bacteraemia [34]. However, our results indicate that suPAR (AUC 0.835) had good diagnostic value in discriminating SCAP from CAP, which is comparable to APACHE II (AUC 0.886), the "gold standard" criterion for stratifying critically ill patients [22]. Taken together, these studies suggest that suPAR might have better diagnostic value in discriminating between severe and mild cases of infectious disease than in discriminating between infectious and non-infectious diseases.
Many studies focus on the prognostic value of suPAR; the AUC for suPAR in predicting in-hospital mortality ranges from 0.67 to 0.84 [35,36,37]. In our study, the AUC of suPAR to predict mortality was 0.772. Kaplan–Meier curves showed that suPAR ≥10.22 ng/mL was associated with a significantly higher mortality risk; this threshold value is identical to that reported in prior studies [35, 38]. Further, in multivariate Cox proportional hazards regression, suPAR was an independent marker to predict 30-day mortality. This indicates that suPAR may provide a promising prognostic biomarker in SCAP.
The mechanism for increased suPAR levels in severely ill patients remains uncertain. suPAR is expressed on the surface of various cells, including neutrophils, lymphocytes, macrophages, and endotheliocytes. During an inflammatory response, the presence of increased numbers of suPAR-expressing cells and the accelerated cleavage of suPAR might result in high blood levels of suPAR [35]. Furthermore, severely ill patients often present with dysfunctional blood coagulation. Excessive cytokine release activates the coagulation system in multiple ways, and contributes to the formation of a complex reaction that includes coagulation, inflammatory mediators, cytokines, and complement [39]. These processes might also promote expression of suPAR.
There has been no report on the diagnostic and prognostic value of syndecan-4. Our results show that although syndecan-4 did not discriminate SCAP from CAP, it might be used to predict mortality in patients with SCAP, with an AUC of 0.744. Further, multivariate Cox regression analysis demonstrated that the syndecan-4 level was an independent factor related to 30-day mortality. Previous studies reported that syndecan-4–deficient mice exhibit significantly higher bacterial counts, more severe pulmonary inflammation, and higher mortality, compared with wild-type mice [29, 40, 41]. Notably, these studies found that syndecan-4 expression was elevated in macrophages, endothelial cells, and epithelial cells after stimulation with lipopolysaccharide in vitro [40, 41].
Although suPAR and syndecan-4 were both good predictors of 30-day mortality, neither was better than PSI. Remarkably, our results show that the addition of both suPAR and syndecan-4 to a clinical severity scoring method significantly improved its prognostic accuracy. The severity score alone is often insufficient to obtain satisfactory predictive accuracy; biomarkers are therefore thought to allow better stratification of patients. Mid-region proadrenomedullin (MR-proADM) has hitherto been the best single predictor of short-term and long-term mortality. Notably, the AUC of a combination of MR-proADM and PSI for 30-day mortality prediction can reach 0.914 [6].
Our study has certain limitations. Serum suPAR and syndecan-4 levels were measured only at the time of admission; dynamic and follow-up changes (in response to treatment) were not investigated. In addition, suPAR is a non-specific inflammatory marker that has been shown to be increased in diabetes mellitus [42], liver disease [43], and heart failure [44]. Since our study enrolled a number of patients with those comorbidities (as shown in Table 1), pre-admission suPAR levels should ideally have been tested. The effects of changes in suPAR and syndecan-4 expression during the pathogenesis of CAP should be further investigated.
In conclusion, we demonstrated that serum suPAR is elevated, and serum syndecan-4 is reduced, in patients with SCAP. suPAR is able to accurately predict SCAP and mortality in patients with CAP. Further, we revealed that syndecan-4 has no diagnostic value but that it can serve as a prognostic biomarker in patients with SCAP. The combination of both suPAR and syndecan-4 with clinical scores significantly improved their 30-day mortality prediction.
APACHE II: Acute Physiology and Chronic Health Evaluation II
CAP: Community-acquired pneumonia
CRP: C-reactive protein
CURB-65: Confusion, urea, respiratory rate, blood pressure, and age ≥65 years old, scoring system
MR-proADM: Mid-region proadrenomedullin
NLR: Neutrophil/lymphocyte ratio
PA: Plasminogen activator
PCT: Procalcitonin
pro-ADM: Proadrenomedullin
PSI: Pneumonia Severity Index
SCAP: Severe community-acquired pneumonia
suPAR: Soluble urokinase-type plasminogen activator receptor
uPAR: Urokinase-type plasminogen activator receptor
WBC: Whole blood leukocyte count
Prina E, Ranzani OT, Torres A. Community-acquired pneumonia. Lancet. 2015;386:1097–108.
Musher DM, Thorner AR. Community-acquired pneumonia. N Engl J Med. 2014;371:1619–28.
Chalmers JD. Identifying severe community-acquired pneumonia: moving beyond mortality. Thorax. 2015;70:515–6.
Fine MJ, Auble TE, Yealy DM, Hanusa BH, Weissfeld LA, Singer DE, Coley CM, Marrie TJ, Kapoor WN. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336:243–50.
Gibot S, Cravoisy A, Levy B, Bene MC, Faure G, Bollaert PE. Soluble triggering receptor expressed on myeloid cells and the diagnosis of pneumonia. N Engl J Med. 2004;350:451–8.
Bello S, Lasierra AB, Mincholé E, Fandos S, Ruiz MA, Vera E, de Pablo F, Ferrer M, Menendez R, Torres A. Prognostic power of proadrenomedullin in community-acquired pneumonia is independent of aetiology. Eur Respir J. 2012;39:1144–55.
Krüger S, Ewig S, Giersdorf S, Hartmann O, Suttorp N, Welte T; German Competence Network for the Study of Community Acquired Pneumonia (CAPNETZ) Study Group. Cardiovascular and inflammatory biomarkers to predict short- and long-term survival in community-acquired pneumonia: results from the German Competence Network, CAPNETZ. Am J Respir Crit Care Med. 2010;182:1426–34.
Manetti M, Rosa I, Milia AF, Guiducci S, Carmeliet P, Ibba-Manneschi L, Matucci-Cerinic M. Inactivation of urokinase-type plasminogen activator receptor (uPAR) gene induces dermal and pulmonary fibrosis and peripheral microvasculopathy in mice: a new model of experimental scleroderma? Ann Rheum Dis. 2014;73:1700–9.
Kobayashi N, Ueno T, Ohashi K, Yamashita H, Takahashi Y, Sakamoto K, Manabe S, Hara S, Takashima Y, Dan T, Pastan I, Miyata T, Kurihara H, Matsusaka T, Reiser J, Nagata M. Podocyte injury-driven intracapillary plasminogen activator inhibitor type 1 accelerates podocyte loss via uPAR-mediated β1-integrin endocytosis. Am J Physiol Renal Physiol. 2015;308:F614–26.
Genua M, D'Alessio S, Cibella J, Gandelli A, Sala E, Correale C, Spinelli A, Arena V, Malesci A, Rutella S, Ploplis VA, Vetrano S, Danese S. The urokinase plasminogen activator receptor (uPAR) controls macrophage phagocytosis in intestinal inflammation. Gut. 2015;64:589–600.
Mazzieri R, Pietrogrande G, Gerasi L, Gandelli A, Colombo P, Moi D, Brombin C, Ambrosi A, Danese S, Mignatti P, Blasi F, D'Alessio S. Urokinase receptor promotes skin tumor formation by preventing epithelial cell activation of Notch1. Cancer Res. 2015;75:4895–909.
Matzkies LM, Raggam RB, Flick H, Rabensteiner J, Feierl G, Hoenigl M, Prattes J. Prognostic and diagnostic potential of suPAR levels in pleural effusion. J Infect. 2017;75:465–7.
Donadello K, Scolletta S, Covajes C, Vincent JL. suPAR as a prognostic biomarker in sepsis. BMC Med. 2012;10:2.
Choi S, Chung H, Hong H, Kim SY, Kim SE, Seoh JY, Moon CM, Yang EG, Oh ES. Inflammatory hypoxia induces syndecan-2 expression through IL-1β-mediated FOXO3a activation in colonic epithelia. FASEB J. 2017;31:1516–30.
Brauer R, Ge L, Schlesinger SY, Birkland TP, Huang Y, Parimon T, Lee V, McKinney BL, McGuire JK, Parks WC, Chen P. Syndecan-1 attenuates lung injury during influenza infection by potentiating c-met signaling to suppress epithelial apoptosis. Am J Respir Crit Care Med. 2016;194:333–44.
Cassinelli G, Zaffaroni N, Lanzi C. The heparanase/heparan sulfate proteoglycan axis: a potential new therapeutic target in sarcomas. Cancer Lett. 2016;382:245–54.
Santoso A, Kikuchi T, Tode N, Hirano T, Komatsu R, Damayanti T, Motohashi H, Yamamoto M, Kojima T, Uede T, Nukiwa T, Ichinose M. Syndecan 4 mediates Nrf2-dependent expansion of bronchiolar progenitors that protect against lung inflammation. Mol Ther. 2016;24:41–52.
Niederman MS, Mandell LA, Anzueto A, Bass JB, Broughton WA, Campbell GD, Dean N, File T, Fine MJ, Gross PA, Martinez F, Marrie TJ, Plouffe JF, Ramirez J, Sarosi GA, Torres A, Wilson R, Yu VL; American Thoracic Society. Guidelines for the management of adults with community-acquired pneumonia. Diagnosis, assessment of severity, antimicrobial therapy, and prevention. Am J Respir Crit Care Med. 2001;163:1730–54.
Mandell LA, Wunderink RG, Anzueto A, Bartlett JG, Campbell GD, Dean NC, Dowell SF, File TM Jr, Musher DM, Niederman MS, Torres A, Whitney CG; Infectious Diseases Society of America; American Thoracic Society. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007;44 Suppl 2:S27–72.
Capelastegui A, España PP, Quintana JM, Areitio I, Gorordo I, Egurrola M, Bilbao A. Validation of a predictive rule for the management of community-acquired pneumonia. Eur Respir J. 2006;27:151–7.
Spindler C, Ortqvist A. Prognostic score systems and community-acquired bacteraemic pneumococcal pneumonia. Eur Respir J. 2006;28:816–23.
Giamarellos-Bourboulis EJ, Norrby-Teglund A, Mylona V, Savva A, Tsangaris I, Dimopoulou I, Mouktaroudi M, Raftogiannis M, Georgitsi M, Linnér A, Adamis G, Antonopoulou A, Apostolidou E, Chrisofos M, Katsenos C, Koutelidakis I, Kotzampassi K, Koratzanis G, Koupetori M, Kritselis I, Lymberopoulou K, Mandragos K, Marioli A, Sundén-Cullberg J, Mega A, Prekates A, Routsi C, Gogos C, Treutiger CJ, Armaganidis A, Dimopoulos G. Risk assessment in sepsis: a new prognostication rule by APACHE II score and serum soluble urokinase plasminogen activator receptor. Crit Care. 2012;16(4):R149.
Rudolf F, Wagner AJ, Back FM, Gomes VF, Aaby P, Østergaard L, Eugen-Olsen J, Wejse C. Tuberculosis case finding and mortality prediction: added value of the clinical TBscore and biomarker suPAR. Int J Tuberc Lung Dis. 2017;21:67–72.
Rasmussen LJ, Knudsen A, Katzenstein TL, Gerstoft J, Obel N, Jørgensen NR, Kronborg G, Benfield T, Kjaer A, Eugen-Olsen J, Lebech AM. Soluble urokinase plasminogen activator receptor (suPAR) is a novel, independent predictive marker of myocardial infarction in HIV-1-infected patients: a nested case-control study. HIV Med. 2016;17:350–7.
Savva A, Raftogiannis M, Baziaka F, Routsi C, Antonopoulou A, Koutoukas P, Tsaganos T, Kotanidou A, Apostolidou E, Giamarellos-Bourboulis EJ, Dimopoulos G. Soluble urokinase plasminogen activator receptor (suPAR) for assessment of disease severity in ventilator-associated pneumonia and sepsis. J Infect. 2011;63:344–50.
Wittenhagen P, Kronborg G, Weis N, Nielsen H, Obel N, Pedersen SS, Eugen-Olsen J. The plasma level of soluble urokinase receptor is elevated in patients with Streptococcus pneumoniae bacteraemia and predicts mortality. Clin Microbiol Infect. 2004;10:409–15.
Kofoed K, Andersen O, Kronborg G, Tvede M, Petersen J, Eugen-Olsen J, Larsen K. Use of plasma C-reactive protein, procalcitonin, neutrophils, macrophage migration inhibitory factor, soluble urokinase-type plasminogen activator receptor, and soluble triggering receptor expressed on myeloid cells-1 in combination to diagnose infections: a prospective study. Crit Care. 2007;11:R38.
Nikaido T, Tanino Y, Wang X, Sato S, Misa K, Fukuhara N, Sato Y, Fukuhara A, Uematsu M, Suzuki Y, Kojima T, Tanino M, Endo Y, Tsuchiya K, Kawamura I, Frevert CW, Munakata M. Serum syndecan-4 as a possible biomarker in patients with acute pneumonia. J Infect Dis. 2015;212:1500–8.
Derman BA, Macklis JN, Azeem MS, Sayidine S, Basu S, Batus M, Esmail F, Borgia JA, Bonomi P, Fidler MJ. Relationships between longitudinal neutrophil to lymphocyte ratios, body weight changes, and overall survival in patients with non-small cell lung cancer. BMC Cancer. 2017;17:141.
Curbelo J, Luquero Bueno S, Galván-Román JM, Ortega-Gómez M, Rajas O, Fernández-Jiménez G, Vega-Piris L, Rodríguez-Salvanes F, Arnalich B, Díaz A, Costa R, de la Fuente H, Lancho Á, Suárez C, Ancochea J, Aspa J. Inflammation biomarkers in blood as mortality predictors in community-acquired pneumonia admitted patients: Importance of comparison with neutrophil count percentage or neutrophil-lymphocyte ratio. PLoS One. 2017;12, e0173947.
Benites-Zapata VA, Hernandez AV, Nagarajan V, Cauthen CA, Starling RC, Tang WH. Usefulness of neutrophil-to-lymphocyte ratio in risk stratification of patients with advanced heart failure. Am J Cardiol. 2015;115:57–61.
Koch A, Voigt S, Kruschinski C, Sanson E, Dückers H, Horn A, Yagmur E, Zimmermann H, Trautwein C, Tacke F. Circulating soluble urokinase plasminogen activator receptor is stably elevated during the first week of treatment in the intensive care unit and predicts mortality in critically ill patients. Crit Care. 2011;15:R63.
Hoenigl M, Raggam RB, Wagner J, Valentin T, Leitner E, Seeber K, Zollner-Schwetz I, Krammer W, Prüller F, Grisold AJ, Krause R. Diagnostic accuracy of soluble urokinase plasminogen activator receptor (suPAR) for prediction of bacteremia in patients with systemic inflammatory response syndrome. Clin Biochem. 2013;46:225–9.
Suberviola B, Castellanos-Ortega A, Ruiz Ruiz A, Lopez-Hoyos M, Santibañez M. Hospital mortality prognostication in sepsis using the new biomarkers suPAR and proADM in a single determination on ICU admission. Intensive Care Med. 2013;39:1945–52.
Mölkänen T, Ruotsalainen E, Thorball CW, Järvinen A. Elevated soluble urokinase plasminogen activator receptor (suPAR) predicts mortality in Staphylococcus aureus bacteremia. Eur J Clin Microbiol Infect Dis. 2011;30:1417–24.
Huttunen R, Syrjänen J, Vuento R, Hurme M, Huhtala H, Laine J, Pessi T, Aittoniemi J. Plasma level of soluble urokinase-type plasminogen activator receptor as a predictor of disease severity and case fatality in patients with bacteraemia: a prospective cohort study. J Intern Med. 2011;270:32–40.
Jalkanen V, Yang R, Linko R, Huhtala H, Okkonen M, Varpula T, Pettilä V, Tenhunen J, FINNALI Study Group. SuPAR and PAI-1 in critically ill, mechanically ventilated patients. Intensive Care Med. 2013;39:489–96.
Murciano JC, Higazi AA, Cines DB, Muzykantov VR. Soluble urokinase receptor conjugated to carrier red blood cells binds latent pro-urokinase and alters its functional profile. J Control Release. 2009;139:190–6.
Tanino Y, Chang MY, Wang X, Gill SE, Skerrett S, McGuire JK, Sato S, Nikaido T, Kojima T, Munakata M, Mongovin S, Parks WC, Martin TR, Wight TN, Frevert CW. Syndecan-4 regulates early neutrophil migration and pulmonary inflammation in response to lipopolysaccharide. Am J Respir Cell Mol Biol. 2012;47:196–202.
Ishiguro K, Kadomatsu K, Kojima T, Muramatsu H, Iwase M, Yoshikai Y, Yanada M, Yamamoto K, Matsushita T, Nishimura M, Kusugami K, Saito H, Muramatsu T. Syndecan-4 deficiency leads to high mortality of lipopolysaccharide-injected mice. J Biol Chem. 2001;276:47483–8.
Theilade S, Lyngbaek S, Hansen TW, Eugen-Olsen J, Fenger M, Rossing P, Jeppesen JL. Soluble urokinase plasminogen activator receptor levels are elevated and associated with complications in patients with type 1 diabetes. J Intern Med. 2015;277:362–71.
Kirkegaard-Klitbo DM, Langkilde A, Mejer N, Andersen O, Eugen-Olsen J, Benfield T. Soluble urokinase plasminogen activator receptor is a predictor of incident non-AIDS comorbidity and all-cause mortality in human immunodeficiency virus type 1 infection. J Infect Dis. 2017;216:819–23.
Koller L, Stojkovic S, Richter B, Sulzgruber P, Potolidis C, Liebhart F, Mörtl D, Berger R, Goliasch G, Wojta J, Hülsmann M, Niessner A. Soluble urokinase-type plasminogen activator receptor improves risk prediction in patients with chronic heart failure. JACC Heart Fail. 2017;5:268–77.
The authors thank the following hospitals for their efforts and dedication in enrolling study participants: Peking University People's Hospital, Tianjin Medical University General Hospital, Wuhan University People's Hospital, Fujian Provincial Hospital, and The First Affiliated Hospital of Zhengzhou University.
This study was funded by The National Key Research and Development Programme of China (2016YFC0903800).
The datasets generated and analysed during the current study are not publicly available due to health privacy concerns, but are available from the corresponding author on reasonable request.
Department of Respiratory & Critical Care Medicine, Peking University People's Hospital, Beijing, People's Republic of China
Qiongzhen Luo, Pu Ning, Yali Zheng, Ying Shang, Bing Zhou & Zhancheng Gao
The roles of the authors in this study were as follows: ZC-G conceived this study and obtained research funding. P-N and YL-Z were in charge of sample preservation, and reinforced clinical data for the database. Y-S and B-Z provided experimental support. QZ-L conducted biomarker measurements, analysed data, and wrote and edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Zhancheng Gao.
All subjects provided informed consent. This study was approved by the medical ethics committee of Peking University People's Hospital.
Luo, Q., Ning, P., Zheng, Y. et al. Serum suPAR and syndecan-4 levels predict severity of community-acquired pneumonia: a prospective, multi-centre study. Crit Care 22, 15 (2018) doi:10.1186/s13054-018-1943-y
Implications of unprovability of $P\neq NP$
I was reading "Is P Versus NP Formally Independent?", but I got puzzled.
It is widely believed in complexity theory that $\mathsf{P} \neq \mathsf{NP}$. My question is about what happens if this is not provable (say, in $ZFC$). (Let's assume that we only find out that $\mathsf{P} \neq \mathsf{NP}$ is independent of $ZFC$, with no further information about how this is proven.)
What will be the implications of this statement? More specifically,
Assuming that $\mathsf{P}$ captures the efficient algorithms (the Cobham–Edmonds thesis) and $\mathsf{P} \neq \mathsf{NP}$, we prove $\mathsf{NP\text{-}hardness}$ results to show that problems are beyond the present reach of our efficient algorithms. If we prove the separation, $\mathsf{NP\text{-}hardness}$ means that there is no polynomial-time algorithm. But what does an $\mathsf{NP\text{-}hardness}$ result mean if the separation is not provable? What will happen to these results?
efficient algorithms
Does unprovability of the separation mean that we need to change our definition of efficient algorithms?
cc.complexity-theory np-hardness big-picture proofs p-vs-np
Kaveh
$\begingroup$ The first thing you need to ask is: formally independent of what? In mathematical logic, there are many sets of axioms people have considered. The default one is ZFC, or Zermelo-Fraenkel set theory with the Axiom of Choice. What it means to be independent of ZFC is that neither P=NP or P!=NP can be proved from these axioms. $\endgroup$ – Peter Shor Feb 2 '12 at 15:38
$\begingroup$ If you want to know what a proof for a statement of the form "whether X or not is independent of axiomatic system Y" looks like, why don't you just read some examples? The independence of the Axiom of Choice from the Zermelo-Fraenkel set theory is a famous example. I voted to close as not a real question by mistake, but I meant to vote to close as off topic. $\endgroup$ – Tsuyoshi Ito Feb 2 '12 at 16:01
$\begingroup$ Did you read the very good and freely available paper by Scott Aaronson, "Is P Versus NP Formally Independent?" (scottaaronson.com/papers/pnp.pdf)? $\endgroup$ – Marzio De Biasi Feb 2 '12 at 16:02
$\begingroup$ The question "if X is proved independent of ZFC, and we have some theorems of the form X $\rightarrow$ Y, what happens to these theorems?" seems well-posed, and is the question that I believe the OP is asking. The answer would seem to be: in some axiom systems, such as ZFC + X, we then have Y holding, while in ZFC + $\lnot$X we have no information about Y. As such, these conditional theorems would still have some value. In fact, they would have more value in this situation than if $\lnot$X were to be proved to be a theorem. $\endgroup$ – András Salamon Feb 7 '12 at 10:15
$\begingroup$ The ZFC unprovability of P vs NP would probably have a lot more implication for Set Theory than Complexity Theory. $\endgroup$ – David Harris Feb 8 '12 at 20:21
Your question might better be phrased, "How would complexity theory be affected by the discovery of a proof that P = NP is formally independent of some strong axiomatic system?"
It's a little hard to answer this question in the abstract, i.e., in the absence of seeing the details of the proof. As Aaronson mentions in his paper, proving the independence of P = NP would require radically new ideas, not just about complexity theory, but about how to prove independence statements. How can we predict the consequences of a radical breakthrough whose shape we currently can't even guess at?
Still, there are a couple of observations we can make. In the wake of the proof of the independence of the continuum hypothesis from ZFC (and later from ZFC + large cardinals), a sizable number of people have come around to the point of view that the continuum hypothesis is neither true nor false. We could ask whether people will similarly come to the conclusion that P = NP is "neither true nor false" in the wake of an independence proof (for the sake of argument, let's suppose that P = NP is proved independent of ZFC + any large cardinal axiom). My guess is not. Aaronson basically says that he wouldn't. Goedel's 2nd incompleteness theorem hasn't led anyone that I know of to argue that "ZFC is consistent" is neither true nor false. P = NP is essentially an arithmetical statement, and most people have strong intuitions that arithmetical statements—or at least arithmetical statements as simple as "P = NP" is—must be either true or false. An independence proof would just be interpreted as saying that we have no way of determining which of P = NP and P $\ne$ NP is the case.
One can also ask whether people would interpret this state of affairs as telling us that there is something "wrong" with our definitions of P and NP. Perhaps we should then redo the foundations of complexity theory with new definitions that are more tractable to work with? At this point I think we are in the realm of wild and unfruitful speculation, where we're trying to cross bridges that we haven't gotten to and trying to fix things that ain't broke yet. Furthermore, it's not even clear that anything would be "broken" in this scenario. Set theorists are perfectly happy assuming any large cardinal axioms that they find convenient. Similarly, complexity theorists might also, in this hypothetical future world, be perfectly happy assuming any separation axioms that they believe are true, even though they're provably unprovable.
In short, nothing much follows logically from an independence proof of P = NP. The face of complexity theory might change radically in the light of such a fantastic breakthrough, but we'll just have to wait and see what the breakthrough looks like.
Timothy Chow
$\begingroup$ @vzn: Your examples aren't just "arguably" arithmetical; they're unquestionably arithmetical. But I'm not sure what your point is. Take some diophantine equation $E$ with the property that "$E$ has no solutions" is undecidable in ZFC. My point is that everyone I know believes that either $E$ has solutions or it doesn't, and that we just can't prove it one way or the other. Do you believe that there is no fact of the matter about whether $E$ has solutions—that $E$ neither has nor doesn't have solutions? $\endgroup$ – Timothy Chow Feb 7 '12 at 15:13
$\begingroup$ @vzn: I think you completely miss the point. The question is not whether a particular statement is undecidable, but whether it is neither true nor false. The two concepts are entirely distinct. Would you say, for example, that ZFC is neither consistent nor inconsistent? Everyone (else) that I know believes that either ZFC is consistent, or it isn't, even though we may have no way of determining which is the case. $\endgroup$ – Timothy Chow Feb 8 '12 at 1:11
$\begingroup$ "this sounds like religion to me and not mathematics" — Welcome to metamathematics. Perhaps a less objectionable way of saying "X is neither true nor false" is that we have no a priori reason to prefer an axiomatic system in which X is true over an axiomatic system in which X is false. We have an (almost) universally agreed standard model of arithmetic; as a social convention, we accept arithmetic statements that hold in that model as being really, actually true. The same cannot be said for set theory. $\endgroup$ – Jeffε Feb 9 '12 at 13:02
$\begingroup$ See also consc.net/notes/continuum.html and mathoverflow.net/questions/14338/… — Each mathematician's personal mix of formalism, platonism, and intuitionism is essentially a religious conviction. $\endgroup$ – Jeffε Feb 9 '12 at 13:06
$\begingroup$ @vzn: You still miss the point. Even if we grant you your personal religious beliefs, all you're saying is that you wouldn't join Aaronson and the rest of the world in declaring arithmetical sentences to be either true or false. We all agree that there's no way to tell from the form of a statement whether it's undecidable, but that's not the claim. The claim is that almost everyone except you does have strong intuitions that arithmetical statements are either true or false. Just because you don't share that conviction doesn't mean that others don't have it. $\endgroup$ – Timothy Chow Feb 13 '12 at 18:03
This is a valid question, even though perhaps a little unfortunately phrased. The best answer I can give is this reference:
Scott Aaronson: Is P versus NP formally independent. Bulletin of the European Association for Theoretical Computer Science, 2003, vol. 81, pages 109-136.
Abstract: This is a survey about the title question, written for people who (like the author) see logic as forbidding, esoteric, and remote from their usual concerns. Beginning with a crash course on Zermelo Fraenkel set theory, it discusses oracle independence; natural proofs; independence results of Razborov, Raz, DeMillo-Lipton, Sazanov, and others; and obstacles to proving P vs. NP independent of strong logical theories. It ends with some philosophical musings on when one should expect a mathematical question to have a definite answer.
Andrej Bauer
$\begingroup$ Uh, I totally missed the fact that Aaronson's paper was already mentioned in the comments. My apologies. $\endgroup$ – Andrej Bauer Feb 4 '12 at 15:18
As Timothy Chow explains, just knowing that a statement is independent of a theory doesn't say much about the truth/falsity of that statement. Most non-experts confuse formal unprovability in a fixed theory (like $ZFC$) with the impossibility of knowing the answer to the truth/falsity of the statement (or sometimes with the meaninglessness of the statement). Independence and formal unprovability always mean independence/unprovability in a theory. It simply means that the theory can prove neither the statement nor its negation. It doesn't mean that the statement does not have a truth value, and it doesn't mean that we cannot know the truth value of the statement: we might be able to add new reasonable axioms that will make the theory strong enough to prove the statement or its negation. In the end, provability in a theory is a formal abstract concept. It is related to our real-world experience only as a model.
The same applies to the thesis that efficient computation is captured by the complexity class $\mathsf{P}$. See this post.
Now you can ask if it is possible for a formal statement to not have a truth value. In principle, we can affirm $\Sigma_1$ (a.k.a. r.e.) properties and refute $\Pi_1$ (a.k.a. co-r.e.) properties by observation. Any statement more complex than this is not directly observable, i.e., no (finite) observation will allow you to affirm or refute the statement. However, we can look at the observable logical consequences of these statements and try to use them to decide whether a statement is true or false. (For more on finitely observable properties see Samson Abramsky's Ph.D. thesis "Domain Theory and the Logic of Observable Properties", 1987, and Steven Vickers' "Topology via Logic", 1996.)
For most mathematicians, statements of higher logical complexity are also meaningful and have a truth value, but this goes into philosophical issues in mathematics. Almost all mathematicians believe that statements in the arithmetical hierarchy are meaningful and have definite truth values, and in some sense they view the truth values of these statements as more definite than those of statements of higher logical complexity (like CH). The statement $\mathsf{P} \neq \mathsf{NP}$ can be stated as a $\Sigma_2$ statement and is therefore an arithmetical statement. As such, almost all mathematicians would believe that it is meaningful and has a definite truth value. You may want to have a look at this MO question, and search the posts on the FOM mailing list.
Kaveh
Just some rambling thoughts about this. Feel free to criticize.
Let Q = [cannot prove (P = NP) and cannot prove (P ≠ NP)]. Suppose Q for a contradiction. I will also assume that all known discoveries about P vs NP are still viable. In particular, all NP-complete problems are equivalent in the sense that if you can solve one of them in polynomial time, you can solve all others in polynomial time. So let W be an NP-complete problem; W equally represents all problems in NP. Because of Q, one cannot obtain an algorithm A to solve W in polynomial time. Otherwise we would have a proof that P = NP, which contradicts Q (1)(*). Note that all algorithms are computable by definition. So saying that A cannot exist implies that there is no way to compute W in polynomial time. But this contradicts Q (2). We are left with rejecting either (1) or (2). Either case leads to a contradiction. Thus Q is a contradiction, which means that a proof of whether or not (P = NP) must exist.
(*) You might say, "Aha! A might exist, but we just cannot find it". Well, if A existed, we could enumerate through all programs to find A, going from smaller programs to larger programs, starting with the empty program. A must be finite because it is an algorithm, so if A exists, then the enumeration program to find it must terminate.
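A minimal sketch of the enumeration-plus-verification idea from this footnote and the comments below, written in Python. Everything here is a hypothetical stand-in: `programs` is assumed to yield candidate computations that run step by step, and `is_correct` is assumed to be a polynomial-time verifier for the NP-complete problem; nothing about this search is fast.

```python
from itertools import count

def dovetail(programs, is_correct, instance):
    """Interleave steps of ever more candidate programs (dovetailing)
    and return the first output that the verifier accepts.

    Candidates are generators that yield None while still working and
    eventually yield a proposed answer; both `programs` and
    `is_correct` are hypothetical stand-ins.
    """
    active = []
    for n in count():
        active.append(programs(n, instance))  # start the n-th candidate
        for g in list(active):
            try:
                result = next(g)              # one more step of g
            except StopIteration:
                active.remove(g)
                continue
            if result is not None and is_correct(instance, result):
                return result                 # verified in polynomial time
```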
Thomas Eding
$\begingroup$ @Victor: Good point. I imagine that if A exists, then one can simply analyze each enumerated program to see if it indeed solves an NP complete problem in polynomial time. I believe that since one is working with a finite instruction set (given by some universal computer) that A can be identified. But I'm no expert. $\endgroup$ – Thomas Eding Feb 10 '12 at 20:12
$\begingroup$ Also... If A exists, then let N be the size of A. Let T be the set of all program of size <= N. One can simultaneously run W on all A' in T. As each A' terminates, run the output O through a program that checks to see if O solves W. (Note that any so-called 'solution' to an NP complete problem can be verified in polynomial time.) If O is a correct answer, shut off all other computers and return O. Keep in mind that not every A' must terminate because A is one of them and will output a correct O in polynomial time. Thus one does not need to even prove that A solves P=NP. N exists by definition. $\endgroup$ – Thomas Eding Feb 10 '12 at 20:27
$\begingroup$ I fail to see the problem. When A' terminates, check its output O. If O is valid, stop all other A' and return O. O can be verified on the machine that A' terminates on, so you will not get a bunch of queued verifying programs. The only problem I see with this approach is getting a good N. I think it might be enough to say that N is finite (which it must be) to disprove Q. $\endgroup$ – Thomas Eding Feb 10 '12 at 20:46
$\begingroup$ All NP complete are search problems such that proposed solutions can be verified in polynomial time (acquiring such a proposed solution is allowed to be "difficult" though). O is not intended to prove if P=NP or not. O is simply a proposed solution to a particular instance W of an NP complete problem. For example, if I give you a tour for a particular travelling salesman problem, you can check to see if it is a shortest tour in polynomial time. $\endgroup$ – Thomas Eding Feb 10 '12 at 21:20
$\begingroup$ "P = NP is independent of ZFC" is not the same as "we cannot find an algorithm to solve any problem in NP in deterministic polynomial time", as Victor has pointed out. The precise definitions of these classes are rather important when dealing with notions such as independence with respect to a theory. $\endgroup$ – András Salamon Feb 12 '12 at 19:40
In WolframAlpha I was playing around with the values of the sequence defined by the integral and noticed that the values seem to get arbitrarily close to 1. I guess the difficulty is finding a closed expression for the value of the definite integral.
Split the integral in two parts, one in $[0,1]$ and the second one in $[1,+\infty]$. Using the Dominated convergence theorem you should conclude that the first integral converges to $1$ and the second one to $0$. If you need more help let me know.
where the limit follows immediately.
and the limit for $n \to \infty$ goes to $1$, where you can use l'Hospital's rule.
There is one pole inside the contour.
October 2021, 26(10): 5707-5722. doi: 10.3934/dcdsb.2021041
Survival analysis for tumor growth model with stochastic perturbation
Dongxi Li, Ni Zhang, Ming Yan and Yanya Xing
College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
* Corresponding author: Dongxi Li
Received: October 2019; Revised: December 2020; Published: October 2021 (early access: February 2021)
Fund Project: This work was supported by the National Natural Science Foundation of China (Grant No.11571009), and Applied Basic Research Programs of Shanxi Province (Grant No. 201901D111086)
In this paper, we investigate the dynamical behavior of extinction and survival in a tumor growth model with immunization under stochastic perturbation. First, a model describing the growth of cancer cells monitored by immune cells is established. Then, the steady probability distribution of tumor cells for different noise and immune-parameter intensities, and necessary conditions for extinction and different kinds of survival of cancer cells, are obtained by numerical and theoretical methods. It is found that the extinction and survival of cancer cells rely on the state of the immunization and the noise. Finally, stochastic simulations are performed to test the theoretical results. The results of our work are helpful for uncovering the evolution mechanism of tumors and designing effective immunotherapy.
Keywords: tumor growth model, survival, extinction, stochastic perturbation.
Mathematics Subject Classification: Primary: 34F05, 92D25, 60H10.
Citation: Dongxi Li, Ni Zhang, Ming Yan, Yanya Xing. Survival analysis for tumor growth model with stochastic perturbation. Discrete & Continuous Dynamical Systems - B, 2021, 26 (10) : 5707-5722. doi: 10.3934/dcdsb.2021041
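The paper's equations are not reproduced on this page, so as a purely illustrative sketch, here is an Euler–Maruyama simulation of a tumor–immune model of the kind this literature studies: logistic growth with an immune predation term $\beta x/(1+x)$ and multiplicative noise. The drift, the noise form, and the parameter values are assumptions for illustration, not the paper's exact model (4).

```python
import numpy as np

# Assumed Lefever-Horsthemke-type tumor-immune model (illustrative only):
#   dx = [x(1 - theta*x) - beta*x/(1 + x)] dt + sigma * x dW
theta, beta, sigma = 0.25, 3.0, 0.5    # hypothetical parameter values
x0, T, dt = 0.5, 50.0, 1e-3
n_steps = int(T / dt)

rng = np.random.default_rng(1)
x = np.empty(n_steps + 1)
x[0] = x0
for k in range(n_steps):
    drift = x[k] * (1 - theta * x[k]) - beta * x[k] / (1 + x[k])
    dW = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
    x[k + 1] = max(x[k] + drift * dt + sigma * x[k] * dW, 0.0)

print("tumor population at time T:", x[-1])     # near 0 suggests extinction
```

Under these illustrative values (a large immune parameter) the sample path is driven toward zero, qualitatively like the extinction scenarios shown in Figure 4.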
Figure 1. The potential $ U(x) $ as a function of $ x $ for different value $ \beta $ with $ \theta = 0.25 $
Figure 2. The steady probability distribution function of the tumor cell population $ x(t) $ for model (4) with $ \theta = 0.25 $
Figure 3. The valid regions as a function of $ \theta(t) $ and $ \beta(t) $
Figure 4. Solutions of extinction of tumor cells for $ (a):\sigma^2(t) = 0.02+0.004\sin t $, $ \beta(t) = 3+\sin t $; $ (b):\sigma^2(t) = 1+0.8\sin t $, $ \beta(t) = 3+\sin t $; $ (c):\sigma^2(t) = 1+0.8\sin t $, $ \beta(t) = 6+\sin t $, with the initial value $ x_0 = 0.5 $
Figure 5. Solutions of strong persistence in the mean of tumor cells for $ (d):\sigma^2(t) = 0.002+0.001\sin t $, $ \beta(t) = 0.97+0.009\sin t $, with the initial value $ x_0 = 0.5 $
Figure 6. Solutions of strong persistence in the mean of tumor cells for $ (e):\sigma^2(t) = 0.02+0.002\sin t $, $ \beta(t) = 0.8+0.008\sin t $, with the initial value $ x_0 = 0.5 $
Figure 7. Mean time to extinction (MET) as a function of the noise intensity for different values of $ \beta $ at $ \theta = 0.25 $
Linear Programming: More variables or more constraints; which one is better?
This is more of a practical question rather than a math question.
I have an LP which has $n$ variables and $m$ constraints, where $n \ll m$. If I convert this into its dual form, I will have $m$ variables and $n$ constraints.
From a practical point of view, which of these (the primal or the dual) is easier (faster) to solve with commercial packages (say, Gurobi)?
optimization linear-programming
$\begingroup$ More variables in my opinion is not a problem. The more constraints, the more complex the problem becomes; more variables don't seem to add complexity. Here's why: more variables equate to extra dimensions, but more constraints take the space you have and force you to search some weird subspace. For example, if you are working with an LP problem then you need to search the vertices where the constraints meet, and the number of these can be really high even for only a few constraints and only a few variables. $\endgroup$ – Squirtle Jan 20 '14 at 19:43
$\begingroup$ Without knowing how their solver works this is a tough one to answer. Also, why would it matter, you would choose the dual or primal depending on how well your chosen solver works? $\endgroup$ – copper.hat Jan 20 '14 at 19:45
$\begingroup$ @Squirtle : Agreed. One can argue in similar way based on Simplex updates. $\endgroup$ – Daniel Jan 20 '14 at 19:51
Commercial solvers like Gurobi analyze the structure of a problem and decide whether the problem is best suited for solving in primal or dual form.
As a general rule, users should not worry about this. In fact, it can often be counterproductive for a user to attempt to coax the problem into a particular standard form---for example, by splitting free variables, converting inequality constraints into equations, etc. After all, the solver may have made a different choice than you did.
Each solver is different, and we do not necessarily know what internal standard form each uses. So let's consider a prototypical example, a solver called GuroPlex built around the following internal standard form: $$ \begin{array}{ll} \text{minimize} & c^T x \\ \text{subject to} & A x = b \\ & x \geq 0 \end{array} $$ The dual of this model is the inequality constrained form: $$ \begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & A^T y \leq c \end{array} $$ As you probably know, in all but the most degenerate cases, solving the primal problem readily leads to a solution to the dual problem, and vice versa. In effect, GuroPlex solves both problems at the same time.
Now, suppose we've constructed an LP to solve, and it happens to look like the inequality form. Knowing GuroPlex's internal standard form, we could convert it ourselves by creating slack variables $z \geq 0$ and splitting the free variables $y$ into the difference of nonnegatives $y_+-y_-$: $$ \begin{array}{ll} \text{maximize} & b^T ( y_+ - y_- ) \\ \text{subject to} & A^T ( y_+ - y_- ) + z = c \\ & y_+,y_-,z\geq 0 \end{array} $$ The result is a form that fits GuroPlex's standard form perfectly---well, save the maximize/minimize difference, but that is readily fixed by negating the objective. It's also larger than it was before, with over twice as many variables (two for each $y_i$ and one for each inequality).
You could do this, but you shouldn't. Just feed GuroPlex the original problem. GuroPlex will detect that your problem matches the dual of its standard form, and will construct and solve the dual of your problem instead. The solution to your problem will be extracted from the internal Lagrange multipliers. (One could argue that because it is solving both the primal and dual problems simultaneously, there's nothing to "extract". So be it! The point is the same.)
What if your problem is not in either of these above forms? For instance, what if it contains a mixture of equations and inequalities? What if it looks like the primal form, except some of the variables are unconstrained? What if it has lower and upper bounds? It can be difficult for you to decide whether GuroPlex would be more efficient solving the primal or dual form of your problem.
So don't decide. Let the software do that.
And certainly do not split free variables, split equations, or add slack variables. That can make it more difficult for the solver to make the correct determination. In fact, some of the better solvers know that people do these kinds of tricks, because they were taught to do so in school---and may reverse those changes. But no presolver is perfect.
Certainly, every once in awhile, there will be a problem that fools the solver, and manually feeding it the dual or making other model changes improves performance. But in general, you will be best served by leaving your problem in the natural form that it presents itself.
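For a concrete illustration of "let the software decide", here is a small sketch using SciPy's `linprog` as an open-source stand-in for a commercial solver. The data are random and purely illustrative, and reading the dual solution out of the primal's constraint marginals assumes a recent SciPy whose HiGHS interface exposes them:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 8, 3                       # many constraints, few variables
A = rng.uniform(0.1, 1.0, (m, n))
b = rng.uniform(0.5, 1.5, m)
c = rng.uniform(1.0, 2.0, n)

# Primal: minimize c^T x  subject to  A x >= b, x >= 0
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * n, method="highs")

# Dual: maximize b^T y  subject to  A^T y <= c, y >= 0
# (linprog minimizes, so we negate the objective)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * m, method="highs")

print(primal.fun, -dual.fun)      # equal by strong duality, up to tolerance

# The dual solution is also recoverable from the primal's constraint
# marginals (signs follow SciPy's sensitivity convention; the values may
# differ from dual.x if the dual solution is not unique):
print(dual.x, -primal.ineqlin.marginals)
```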
Michael Grant
Halin graph
In graph theory, a Halin graph is a type of planar graph, constructed by connecting the leaves of a tree into a cycle. The tree must have at least four vertices, none of which has exactly two neighbors; it should be drawn in the plane so none of its edges cross (this is called a planar embedding), and the cycle connects the leaves in their clockwise ordering in this embedding. Thus, the cycle forms the outer face of the Halin graph, with the tree inside it.[1]
Halin graphs are named after German mathematician Rudolf Halin, who studied them in 1971.[2] The cubic Halin graphs – the ones in which each vertex touches exactly three edges – had already been studied over a century earlier by Kirkman.[3] Halin graphs are polyhedral graphs, meaning that every Halin graph can be used to form the vertices and edges of a convex polyhedron, and the polyhedra formed from them have been called roofless polyhedra or domes.
Every Halin graph has a Hamiltonian cycle through all its vertices, as well as cycles of almost all lengths up to the number of vertices of the graph. Halin graphs can be recognized in linear time. Because Halin graphs have low treewidth, many computational problems that are hard on other kinds of planar graphs, such as finding Hamiltonian cycles, can be solved quickly on Halin graphs.
Examples
A star is a tree with exactly one internal vertex. Applying the Halin graph construction to a star produces a wheel graph, the graph of a pyramid.[4] The graph of a triangular prism is also a Halin graph: it can be drawn so that one of its rectangular faces is the exterior cycle, and the remaining edges form a tree with four leaves, two interior vertices, and five edges.[5]
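As a small illustration of the construction applied to the star example above, here is a sketch using `networkx`; the library choice is incidental, and taking leaves in DFS preorder is one way (an assumption here) to obtain a leaf ordering consistent with a planar embedding of the tree:

```python
import networkx as nx

def halin_graph(tree, root):
    """Build a Halin graph from a tree with >= 4 vertices and no
    degree-2 vertices, joining its leaves into a cycle.

    Leaves are taken in DFS preorder from the root; for a tree drawn
    without crossings this matches their boundary order.
    """
    assert len(tree) >= 4 and all(d != 2 for _, d in tree.degree())
    leaves = [v for v in nx.dfs_preorder_nodes(tree, root)
              if tree.degree(v) == 1]
    H = tree.copy()
    H.add_edges_from(zip(leaves, leaves[1:] + leaves[:1]))
    return H

# A star with 5 leaves yields a wheel graph (the graph of a pyramid):
star = nx.star_graph(5)            # center 0, leaves 1..5
W = halin_graph(star, root=0)
print(nx.check_planarity(W)[0])    # True: Halin graphs are planar
```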
The Frucht graph, one of the five smallest cubic graphs with no nontrivial graph automorphisms,[6] is also a Halin graph.[7]
Properties
Every Halin graph is 3-connected, meaning that it is not possible to delete two vertices from it and disconnect the remaining vertices. It is edge-minimal 3-connected, meaning that if any one of its edges is removed, the remaining graph will no longer be 3-connected.[1] By Steinitz's theorem, as a 3-connected planar graph, it can be represented as the set of vertices and edges of a convex polyhedron; that is, it is a polyhedral graph. The polyhedron that realizes the graph can be chosen so that the face containing all of the tree leaves is horizontal, and all of the other faces lie above it, with equal slopes.[8] As with every polyhedral graph, Halin graphs have a unique planar embedding, up to the choice of which of its faces is to be the outer face.[1]
Every Halin graph is a Hamiltonian graph, and every edge of the graph belongs to a Hamiltonian cycle. Moreover, any Halin graph remains Hamiltonian after the deletion of any vertex.[9] Because every tree without vertices of degree 2 contains two leaves that share the same parent, every Halin graph contains a triangle. In particular, it is not possible for a Halin graph to be a triangle-free graph nor a bipartite graph.[10]
More strongly, every Halin graph is almost pancyclic, in the sense that it has cycles of all lengths from 3 to n with the possible exception of a single even length. Moreover, any Halin graph remains almost pancyclic if a single edge is contracted, and every Halin graph without interior vertices of degree three is pancyclic.[12]
The incidence chromatic number of a Halin graph G with maximum degree Δ(G) greater than four is Δ(G) + 1.[13] This is the number of colors needed to color all pairs (v,e) where v is a vertex of the graph, and e is an edge incident to v, obeying certain constraints on the coloring. Pairs that share a vertex or that share an edge are not allowed to have the same color. In addition, a pair (v,e) cannot have the same color as another pair that uses the other endpoint of e. For Halin graphs with Δ(G) = 3 or 4, the incidence chromatic number may be as large as 5 or 6 respectively.[14]
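The coloring constraints in the previous paragraph can be stated as a short checker. The following sketch is only a brute-force validity test for a proposed incidence coloring, not an algorithm for finding one; storing edges as frozensets is an implementation convenience:

```python
import itertools
import networkx as nx

def is_valid_incidence_coloring(G, color):
    """Check a proposed incidence coloring of graph G.

    `color` maps each incidence (v, e) -- with v an endpoint of edge e,
    e stored as a frozenset of its endpoints -- to a color. Two
    incidences conflict if they share a vertex, share an edge, or one
    uses the other endpoint of the other's edge.
    """
    incidences = [(v, frozenset(e)) for e in G.edges() for v in e]
    for (v, e), (w, f) in itertools.combinations(incidences, 2):
        conflict = v == w or e == f or w in e or v in f
        if conflict and color[(v, e)] == color[(w, f)]:
            return False
    return True
```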
Computational complexity
It is possible to test whether a given n-vertex graph is a Halin graph in linear time, by finding a planar embedding of the graph (if one exists), and then testing whether there exists a face that has at least n/2 + 1 vertices, all of degree three. If so, there can be at most four such faces, and it is possible to check in linear time for each of them whether the rest of the graph forms a tree with the vertices of this face as its leaves. On the other hand, if no such face exists, then the graph is not Halin.[15] Alternatively, a graph with n vertices and m edges is Halin if and only if it is planar, 3-connected, and has a face whose number of vertices equals the circuit rank m − n + 1 of the graph, all of which can be checked in linear time.[16] Other methods for recognizing Halin graphs in linear time include the application of Courcelle's theorem, or a method based on graph rewriting, neither of which rely on knowing the planar embedding of the graph.[17]
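The linear-time recognition above is intricate; as a simpler structural check, the following sketch verifies the Halin decomposition with respect to a given candidate outer cycle. It is a helper for illustration, not the published linear-time algorithm:

```python
import networkx as nx

def is_halin_for_cycle(G, cycle):
    """Check whether G is a Halin graph with `cycle` (a list of
    vertices in cyclic order) as its outer face: removing the cycle's
    edges must leave a tree with no degree-2 vertices whose leaves
    are exactly the cycle's vertices."""
    T = G.copy()
    cycle_edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    if not all(T.has_edge(u, v) for u, v in cycle_edges):
        return False
    T.remove_edges_from(cycle_edges)
    leaves = {v for v in T if T.degree(v) == 1}
    return (nx.is_tree(T)
            and leaves == set(cycle)
            and all(T.degree(v) != 2 for v in T))
```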
Every Halin graph has treewidth = 3.[18] Therefore, many graph optimization problems that are NP-complete for arbitrary planar graphs, such as finding a maximum independent set, may be solved in linear time on Halin graphs using dynamic programming[19] or Courcelle's theorem, or in some cases (such as the construction of Hamiltonian cycles) by direct algorithms.[17] However, it is NP-complete to find the largest Halin subgraph of a given graph, to test whether there exists a Halin subgraph that includes all vertices of a given graph, or to test whether a given graph is a subgraph of a larger Halin graph.[20]
History
In 1971, Halin introduced the Halin graphs as a class of minimally 3-vertex-connected graphs: for every edge in the graph, the removal of that edge reduces the connectivity of the graph.[2] These graphs gained significance with the discovery that many algorithmic problems that were computationally infeasible for arbitrary planar graphs could be solved efficiently on them.[9][16] This fact was later explained to be a consequence of their low treewidth, and of algorithmic meta-theorems like Courcelle's theorem that provide efficient solutions to these problems on any graph of low treewidth.[18][19]
Prior to Halin's work on these graphs, graph enumeration problems concerning the cubic (or 3-regular) Halin graphs were studied in 1856 by Thomas Kirkman[3] and in 1965 by Hans Rademacher. Rademacher calls these graphs based polyhedra. He defines them as the cubic polyhedral graphs with f faces in which one of the faces has f − 1 sides.[21] The graphs that fit this definition are exactly the cubic Halin graphs.[22]
Inspired by the fact that both Halin graphs and 4-vertex-connected planar graphs contain Hamiltonian cycles, Lovász & Plummer (1974) conjectured that every 4-vertex-connected planar graph contains a spanning Halin subgraph; here "spanning" means that the subgraph includes all vertices of the larger graph. The Lovász–Plummer conjecture remained open until 2015, when a construction for infinitely many counterexamples was published.[23]
The Halin graphs are sometimes also called skirted trees[11] or roofless polyhedra.[9] However, these names are ambiguous. Some authors use the name "skirted trees" to refer to planar graphs formed from trees by connecting the leaves into a cycle, but without requiring that the internal vertices of the tree have degree three or more.[24] And like "based polyhedra", the "roofless polyhedra" name may also refer to the cubic Halin graphs.[22] The convex polyhedra whose graphs are Halin graphs have also been called domes.[25]
References
1. Encyclopaedia of Mathematics, first Supplementary volume, 1988, ISBN 0-7923-4709-9, p. 281, article "Halin Graph", and references therein.
2. Halin, R. (1971), "Studies on minimally n-connected graphs", Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969), London: Academic Press, pp. 129–136, MR 0278980.
3. Kirkman, Th. P. (1856), "On the enumeration of x-edra having triedral summits and an (x − 1)-gonal base", Philosophical Transactions of the Royal Society of London, 146: 399–411, doi:10.1098/rstl.1856.0018, JSTOR 108592.
4. Cornuéjols, Naddef & Pulleyblank (1983): "If T is a star, i.e., a single node v joined to n other nodes, then H is called a wheel and is the simplest type of Halin graph."
5. See Sysło & Proskurowski (1983), Prop. 4.3, p. 254, which identifies the triangular prism as the unique graph with exactly three cycles that can be the outer cycle of a realization as a Halin graph.
6. Bussemaker, F. C.; Cobeljic, S.; Cvetkovic, D. M.; Seidel, J. J. (1976), "Computer investigation of cubic graphs", Eindhoven University of Technology Research Portal, EUT report, Dept. of Mathematics and Computing Science, Eindhoven University of Technology, 76-WSK-01
7. Weisstein, Eric W., "Halin Graph", MathWorld
8. Aichholzer, Oswin; Cheng, Howard; Devadoss, Satyan L.; Hackl, Thomas; Huber, Stefan; Li, Brian; Risteski, Andrej (2012), "What makes a tree a straight skeleton?" (PDF), Proceedings of the 24th Canadian Conference on Computational Geometry (CCCG'12)
9. Cornuéjols, G.; Naddef, D.; Pulleyblank, W. R. (1983), "Halin graphs and the travelling salesman problem", Mathematical Programming, 26 (3): 287–294, doi:10.1007/BF02591867, S2CID 26278382.
10. See the proof of Theorem 10 in Wang, Weifan; Bu, Yuehua; Montassier, Mickaël; Raspaud, André (2012), "On backbone coloring of graphs", Journal of Combinatorial Optimization, 23 (1): 79–93, doi:10.1007/s10878-010-9342-6, MR 2875236, S2CID 26975523: "Since G contains a 3-cycle consisting of one inner vertex and two outer vertices, G is not a bipartite graph."
11. Malkevitch, Joseph (1978), "Cycle lengths in polytopal graphs", Theory and Applications of Graphs (Proc. Internat. Conf., Western Mich. Univ., Kalamazoo, Mich., 1976), Lecture Notes in Mathematics, Berlin: Springer, 642: 364–370, doi:10.1007/BFb0070393, ISBN 978-3-540-08666-6, MR 0491287
12. Skowrońska, Mirosława (1985), "The pancyclicity of Halin graphs and their exterior contractions", in Alspach, Brian R.; Godsil, Christopher D. (eds.), Cycles in Graphs, Annals of Discrete Mathematics, vol. 27, Elsevier Science Publishers B.V., pp. 179–194.
13. Wang, Shu-Dong; Chen, Dong-Ling; Pang, Shan-Chen (2002), "The incidence coloring number of Halin graphs and outerplanar graphs", Discrete Mathematics, 256 (1–2): 397–405, doi:10.1016/S0012-365X(01)00302-8, MR 1927561.
14. Shiu, W. C.; Sun, P. K. (2008), "Invalid proofs on incidence coloring", Discrete Mathematics, 308 (24): 6575–6580, doi:10.1016/j.disc.2007.11.030, MR 2466963.
15. Fomin, Fedor V.; Thilikos, Dimitrios M. (2006), "A 3-approximation for the pathwidth of Halin graphs", Journal of Discrete Algorithms, 4 (4): 499–510, doi:10.1016/j.jda.2005.06.004, MR 2577677.
16. Sysło, Maciej M.; Proskurowski, Andrzej (1983), "On Halin graphs", Graph Theory: Proceedings of a Conference held in Lagów, Poland, February 10–13, 1981, Lecture Notes in Mathematics, vol. 1018, Springer-Verlag, pp. 248–256, doi:10.1007/BFb0071635.
17. Eppstein, David (2016), "Simple recognition of Halin graphs and their generalizations", Journal of Graph Algorithms and Applications, 20 (2): 323–346, arXiv:1502.05334, doi:10.7155/jgaa.00395, S2CID 9525753.
18. Bodlaender, Hans (1988), Planar graphs with bounded treewidth (PDF), Technical Report RUU-CS-88-14, Department of Computer Science, Utrecht University, archived from the original (PDF) on 2004-07-28.
19. Bodlaender, Hans (1988), "Dynamic programming on graphs with bounded treewidth", Proceedings of the 15th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 317, Springer-Verlag, pp. 105–118, doi:10.1007/3-540-19488-6_110, hdl:1874/16258, ISBN 978-3540194880.
20. Horton, S. B.; Parker, R. Gary (1995), "On Halin subgraphs and supergraphs", Discrete Applied Mathematics, 56 (1): 19–35, doi:10.1016/0166-218X(93)E0131-H, MR 1311302.
21. Rademacher, Hans (1965), "On the number of certain types of polyhedra", Illinois Journal of Mathematics, 9 (3): 361–380, doi:10.1215/ijm/1256068140, MR 0179682.
22. Lovász, L.; Plummer, M. D. (1974), "On a family of planar bicritical graphs", Combinatorics (Proc. British Combinatorial Conf., Univ. Coll. Wales, Aberystwyth, 1973), London: Cambridge Univ. Press, pp. 103–107. London Math. Soc. Lecture Note Ser., No. 13, MR 0351915.
23. Chen, Guantao; Enomoto, Hikoe; Ozeki, Kenta; Tsuchiya, Shoichi (2015), "Plane triangulations without a spanning Halin subgraph: counterexamples to the Lovász-Plummer conjecture on Halin graphs", SIAM Journal on Discrete Mathematics, 29 (3): 1423–1426, doi:10.1137/140971610, MR 3376776.
24. Skowrońska, M.; Sysło, M. M. (1987), "Hamiltonian cycles in skirted trees", Proceedings of the International Conference on Combinatorial Analysis and its Applications (Pokrzywna, 1985), Zastos. Mat., 19 (3–4): 599–610 (1988), MR 0951375
25. Demaine, Erik D.; Demaine, Martin L.; Uehara, Ryuhei (2013), "Zipper unfolding of domes and prismoids", Proceedings of the 25th Canadian Conference on Computational Geometry (CCCG 2013), Waterloo, Ontario, Canada, August 8–10, 2013, pp. 43–48.
External links
• Halin graphs, Information System on Graph Class Inclusions.
\begin{document}
\title{Coherent States of the Creation Operator\
from Fully Developed Bose-Einstein Condensates}
\section{Introduction} Coherent states of the annihilation operator are well known in quantum optics. They were first described by E. Schr\"odinger as classically behaving solutions of the Schr\"odinger equation with a harmonic potential~\cite{schrod,nieto}. The importance of coherent states became widely recognized in many branches of physics due to the works of Glauber~\cite{glauber}, Klauder~\cite{klauder} and Sudarshan~\cite{sudar}.
In section~\ref{s:coh} the definition of the coherent states of the annihilation operator is given and the properties of these states are briefly summarized. Section~\ref{s:plas} summarizes the pion-laser model, an exactly solvable multiboson wave-packet model that can be solved both in the very rare gas and in the fully condensed limiting case. If the total available energy and thus the number of pions in the condensate is large enough, applying the annihilation operator to the condensate state repeatedly creates a ladder of holes, and by a suitable superposition of these states a new kind of state can be defined, as described in Section~\ref{s:csc}.
\section{Coherent States\label{s:coh}}
For the harmonic oscillator, the coherent states $\ket{\alpha}$ can be equivalently defined with the help of the displacement operator method, the ladder (annihilation operator) method and the minimum uncertainty method, see ref.~\cite{nieto} for an elegant summary.
The coherent states of the annihilation operator are solutions of the equation \begin{equation}
a \ket{\alpha} = \alpha \ket{\alpha}, \end{equation} where the annihilation and creation operators $a$ and $a^{\dagger}$ satisfy the canonical commutation relations \begin{equation}
[a, a^{\dagger}] = 1. \end{equation} For the harmonic oscillator, the above coherent states are given by the unitary displacement operator $D(\alpha)$ acting on the ground (or vacuum) state $\ket{0}$ as \begin{eqnarray}
D(\alpha) \ket{0} & = & \mbox{\rm e}^{-|\alpha|^2/2}
\sum_{n=0}^{\infty} {\displaystyle\phantom{|} \alpha^n\over\dst \sqrt{n!}}
\ket{n} \, = \, \ket{\alpha},
\label{e:ladder-a}\\
D(\alpha) & = & \exp[\alpha a^{\dagger} - \alpha^* a] \, = \,
\exp[-|\alpha|^2/2 ]
\exp[\alpha a^{\dagger}]
\exp[-\alpha^* a] \end{eqnarray} It is straightforward to show that the coherent states of the harmonic oscillator correspond to minimum uncertainty wave-packets with $(\Delta x)^2(\Delta p)^2 = 1/4$ that retain their Gaussian shape during their time evolution and whose mean position and momentum values follow the oscillations of the classical motion of the harmonic oscillator.
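Indeed, the eigenvalue property can be checked directly from this expansion, using $a \ket{n} = \sqrt{n}\, \ket{n-1}$: \begin{equation}
  a \ket{\alpha} = \mbox{\rm e}^{-|\alpha|^2/2}
  \sum_{n=1}^{\infty} {\alpha^n \over \sqrt{n!}}\, \sqrt{n}\, \ket{n-1}
  = \alpha\, \mbox{\rm e}^{-|\alpha|^2/2}
  \sum_{m=0}^{\infty} {\alpha^m \over \sqrt{m!}}\, \ket{m}
  = \alpha \ket{\alpha} . \end{equation}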
The coordinate space representation of these coherent states is \begin{equation}
\brak{x}{\alpha} = \left[{\displaystyle\phantom{|} m \omega \over\dst \pi}\right]^{1/4}
\exp\left[
- {\displaystyle\phantom{|} m \omega (x - x_0)^2 \over\dst 2} + i p_0 x
\right] \end{equation} where $m$ is the mass of the classical particle in the harmonic oscillator potential, $\omega$ is the frequency of the
oscillator and $x_0$, $p_0$ correspond to the coordinate
and momentum expectation values at the initial time $t_0$.
The complex eigenvalue $\alpha$ of these coherent states is
given by \begin{equation}
\alpha = \sqrt{\displaystyle\phantom{|} m \omega\over\dst 2} x_0
+ i {\displaystyle\phantom{|} 1 \over\dst \sqrt{2 m \omega}} p_0. \end{equation}
\section{Fully condensed limit of the pion laser model \label{s:plas}} \def{\bf{x}}{{\bf{x}}} \def{\bf{p}}{{\bf{p}}} \def{\bf{k}}{{\bf{k}}} \def{\bf{\pi}}{{\bf{\pi}}} \def{\bf{\xi}}{{\bf{\xi}}} \def{\bf{q}}{{\bf{q}}} \def{\bf{r}}{{\bf{r}}} \def\hat{ a}^{\dag} (\bx){\hat{ a}^{\dag} ({\bf{x}})} \def\hat a^{\dag} (\bp){\hat a^{\dag} ({\bf{p}})} \def\hat{ a}^{} (\bx){\hat{ a}^{} ({\bf{x}})} \def\hat a^{} (\bp){\hat a^{} ({\bf{p}})} \def\hat \Psi^{\dag} (\bx){\hat \Psi^{\dag} ({\bf{x}})} \def\hat \Psi^ (\bx){\hat \Psi^ ({\bf{x}})} \def\hat \Psi^{\dag} (\bx){\hat \Psi^{\dag} ({\bf{x}})} \def\hat \Psi^ (\bx){\hat \Psi^ ({\bf{x}})} \def\pdpx#1{\hat \Psi^{\dag}(\mathbf{x}_{#1},\mathbf{\pi}_{#1})\ket{0}} \def\right){\right)} \def\left({\left(}
\def\displaystyle\phantom{|}{\displaystyle\phantom{|}} \def\over\dst{\over\displaystyle\phantom{|}} \def\omega{\omega} \def\mbox{\rm Li}{\mbox{\rm Li}} \def\mbox{\rm gLi}{\mbox{\rm gLi}} \def\epsilon{\epsilon} \def{\bf K}{{\bf K}} \def{\bf \Delta k}{{\bf \Delta k}} \newcommand{$$}{$$} \newcommand{$$}{$$} \renewcommand{\vec}[1]{{\bf #1}}
In high energy heavy ion reactions, hundreds of identical bosons (mostly $\pi$ mesons) can be created. These bosons are described by rather complicated fields and as the density of these bosons is increased multi-particle symmetrization effects are becoming increasingly important. The related possibility of Bose-Einstein condensation and the development of partial coherence was studied recently in a large number of papers. Let us follow refs.~\cite{cstjz} in describing an analytically solved multiparticle wave-packet system, with full symmetrization and the possibility of condensation of wave-packets to the least energetic wave-packet mode.
This solvable model is described by a multiparticle density matrix \begin{equation} \hat \rho \, = \, \sum_{n=0}^{\infty} \, {p}_n \, \hat \rho_n, \end{equation} normalized to one. Here $ \hat \rho_n $ is the density matrix for events with fixed particle number $n$, which is normalized also to one. The probability for such an event is $ p_n $. The multiplicity distribution is described by the set $\left\{ {p}_n\right\}_{n=0}^{\infty}$, also normalized to 1.
The density matrix of a system with a fixed number of boson wave packets has the form \begin{equation} \hat \rho_n \, =\, \int d\alpha_1 ... d\alpha_n \,\,\rho_n(\alpha_1,...,\alpha_n) \,\ket{\alpha_1,...,\alpha_n} \bra{\alpha_1,...,\alpha_n}, \end{equation} where $\ket{\alpha_1,...,\alpha_n}$ denote properly normalized $n$-particle wave-packet boson states.
In the Heisenberg picture, the wave packet creation operator is given as \begin{equation} \alpha_i^{\dag} \, = \, \int {d^3{\bf{p}} \over (\pi \sigma^2 )^{3\over 4} } \
\mbox{\rm e}^{
-{( {\bf{p}}- {\bf{\pi}}_i)^{2}\over 2 \sigma_i^{2}}
- i {\bf{\xi}}_i ({\bf{p}} - {\bf{\pi}}_i) } \ \hat a^{\dag} (\bp) . \label{e:4} \end{equation} The commutator \begin{equation} \left[ \alpha_i, \alpha_j^{\dag} \right] \, = \,
\brak{\alpha_i}{\alpha_j} \end{equation} vanishes only when the wave packets do not overlap.
Here {$\alpha_i \, = \, ({\bf{\xi}}_i, {\bf{\pi}}_i)$} refers to the center of the wave-packets {in coordinate space and in momentum space}. It is assumed that the widths $\sigma_i$ of the wave-packets in momentum space are all equal and that the production times of the wave-packets coincide.
We call attention to the fact that, although one cannot attribute exactly defined values to position and momentum at the same time, one can define the $ {\bf{\pi}}_i, {\bf{\xi}}_i $ parameters precisely.
The $n$ boson states, normalized to unity, are given as \begin{equation} \ket{\ \alpha_1, \ ...\ , \ \alpha_n} \, = \,
{1\over \sqrt{ \displaystyle{\strut
\sum_{\sigma^{(n)} } \prod_{i=1}^n
\brak{\alpha_i}{\alpha_{\sigma_i}}
} } } \ \alpha^{\dag}_n \ ... \ \alpha_1^{\dag} \ket{0}. \label{e:expec2} \end{equation} Here $\sigma^{(n)}$ denotes the set of all the permutations of the indexes $\left\{1, 2, ..., n\right\}$ and the subscript $_{\sigma_i}$ denotes the index that replaces the index $_i$ in a given permutation from $\sigma^{(n)}$. The normalization factor contains a sum of $n!$ terms. Each of these terms contains the $n$ different $\alpha_i$ parameters.
There is one special density matrix for which one can overcome the difficulty related to the non-vanishing overlap of many hundreds of wave-packets, even in an explicit analytical manner. This density matrix is the product of uncorrelated single-particle matrices multiplied by a correlation factor related to stimulated emission of wave-packets \begin{equation} \rho_n(\alpha_1,...,\alpha_n) \, = \, {\displaystyle\phantom{|} 1 \over\dst {\cal N}{(n)}} \left( \prod_{i=1}^n \rho_1(\alpha_i) \right) \, \left(\sum_{\sigma^{(n)}} \prod_{k=1}^n \, \brak{\alpha_k}{\alpha_{\sigma_k}} \right) . \label{e:dtrick} \end{equation} Normalization to one yields ${\cal N}(n)$.
For the sake of simplicity we assume a factorizable Gaussian form for the distribution of the parameters of the single-particle states: \begin{eqnarray} \rho_1(\alpha)& = &\rho_x({\bf{\xi}})\, \rho_p ({\bf{\pi}})\, \delta(t-t_0), \\ \rho_x({\bf{\xi}}) & = &{1 \over (2 \pi R^2)^{3\over 2} }\, \exp(-{\bf{\xi}}^2/(2 R^2) ), \\ \rho_p({\bf{\pi}}) & = &{1 \over (2 \pi m T)^{3\over 2} }\, \exp(-{\bf{\pi}}^2/(2 m T) ). \end{eqnarray} These expressions are given in the frame where we have a non-expanding static source at rest.
A multiplicity distribution when Bose-Einstein effects are switched {off} (denoted by $p_n^{(0)}$), is a {free choice} in the model. We assume independent emission, \begin{equation}
{p}^{(0)}_n \, = \,{n_0 ^n \over n!} \exp(-n_0), \end{equation} so that correlations arise only due to multiparticle Bose-Einstein symmetrization. This completes the specification of the model.
It has been shown in refs. ~\cite{cstjz,pratt} that the above model features a critical density. If the boson source is sufficiently small and cold, the overlap between the various wave-packets can become sufficiently large so that Bose-Einstein condensation starts to develop as soon as $n_0$, the mean multiplicity without symmetrization reaches a critical value $n_c$.
In the highly condensed $ R^2 T << 1 $ and $n_0 >> n_c$ Bose gas limit a kind of lasing behaviour and an optically coherent behaviour is obtained, which is characterized by \cite{cstjz} the disappearance of the bump in the two-particle intensity correlation function: \begin{equation}
C_2({\bf{k}}_1,{\bf{k}}_2) = 1 \end{equation}
Suppose that $n_{f} = E_{tot}/m_{\pi} $ quanta are in the condensed state. It was shown in refs.~\cite{cstjz} that the condensation happens to the wave-packet mode with the minimal energy, i.e. $\alpha = \alpha_0 = (0,0)$. The density matrix of the condensate can be given easily, as the fully developed Bose-Einstein condensate corresponds to the $T \rightarrow 0$ and the $R \rightarrow 0$ limiting case,
when the Gaussian factors in $\rho(\alpha)$ tend to Dirac delta
functions. In this particular limiting case, the density matrix of eq.~\ref{e:dtrick} simplifies as: \begin{equation}
\rho_c = {\displaystyle\phantom{|} 1 \over\dst n_f!}
(\alpha_0^{\dagger})^{n_f}
\ket{0}\bra{0}
(\alpha_0)^{n_f} \end{equation} which means that the fully developed Bose-Einstein condensate corresponds to $n_f$ bosons in the same minimal wave-packet state that is centered at the origin with zero mean momentum, $\alpha_0 = (0,0)$. Note also that \begin{equation}
\rho_c^2 = \rho_c \end{equation} which implies that the fully developed Bose-Einstein condensate is in a pure state.
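Indeed, since the single wave-packet state is normalized, $\left[\alpha_0, \alpha_0^{\dagger}\right] = \brak{\alpha_0}{\alpha_0} = 1$, so that $\bra{0}\, \alpha_0^{n_f} (\alpha_0^{\dagger})^{n_f} \ket{0} = n_f!$ and \begin{equation}
  \rho_c^2 = {\displaystyle\phantom{|} 1 \over\dst (n_f!)^2}\,
  (\alpha_0^{\dagger})^{n_f} \ket{0}\,
  \bra{0} \alpha_0^{n_f} (\alpha_0^{\dagger})^{n_f} \ket{0}\,
  \bra{0} (\alpha_0)^{n_f} = \rho_c . \end{equation}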
An important feature of such a Bose-Einstein condensate of massive quanta is that it becomes impossible to add more than $n_{f}$ pions to the condensate as all the available energy can be used up by the rest mass of these bosons.
\section{Coherent states of creation operators\label{s:csc}} Observe that the fully developed Bose-Einstein condensate (BEC) of the previous section corresponds to filling a single (wave-packet) quantum state with a macroscopic amount of quanta. As the number of quanta in the BEC is macroscopically large, we can first treat this number in the infinitely large limit. Formally, such a quantum state of the condensate can be defined as \begin{equation}
\ket{BEC} =
{\displaystyle\phantom{|} 1\over\dst \sqrt{n_f!}} (\alpha_0^{\dagger})^{n_f} \ket{0},
\qquad (n_f >>> 1). \end{equation} In what follows, it will be irrelevant that the Bose-Einstein condensation happened to a wave-packet mode in the pion-laser model. The relevant essential feature of the fully developed Bose-Einstein condensate is that it contains a macroscopically large number of bosons in the same quantum state created by a certain creation operator $a^{\dagger}$ \begin{equation}
\ket{BEC} =
{\displaystyle\phantom{|} 1\over\dst \sqrt{n_f!}} (a^{\dagger})^{n_f} \ket{0},
\qquad (n_f >>> 1). \end{equation} Due to the macroscopically large number of quanta in the same state, a large number of wave-packets can be taken away from this state. Due to the finite energy constraint, $n_f m = E_{tot}$, it is impossible to add one more particle to the condensate at the prescribed $E_{tot} $ available energy. We thus have \begin{equation}
a^{\dagger} \ket{BEC} = 0 \end{equation} On the other hand, we have \begin{equation}
a^m \ket{BEC} \ne 0 \qquad \mbox{\rm for all}\quad 0 \le m \le n_f. \end{equation} Hence a Bose-Einstein condensate with macroscopically large amount of quanta and with a constraint that the condensate is fully developed, can be considered as a generalized vacuum state of the creation operator, \begin{equation}
\ket{BEC} = \ket{0}_{\dagger} , \end{equation} and generalized hole-states can be defined as removing particles from the condensate: \begin{equation}
\ket{n}_{\dagger} = {\displaystyle\phantom{|} 1\over\dst \sqrt{n!} } a^n \ket{0}_{\dagger} \end{equation} The following calculations and equations are to be done first at $n_f$ kept finite, and then performing the $n_f\rightarrow \infty$ limiting case. This corresponds to the limit of a macroscopically large Bose-Einstein condensate. One obtains: \begin{eqnarray}
\ket{0}_{\dagger} & = & \ket{ n_f} = \ket{n_f \rightarrow \infty},\\
\ket{1}_{\dagger} & = & \ket{n_f -1} \, = \, a \ket{0}_{\dagger},\\
... && \nonumber \\
\ket{j}_{\dagger} & = & \ket{n_f - j} \, = \,
{\displaystyle\phantom{|} a^j \over\dst \sqrt{j!}}\ket{0}_{\dagger} ,\\
... && \nonumber \end{eqnarray}
The above states can be considered as the number states
related to the creation of $n$ holes
in the fully developed Bose-Einstein condensate.
These form a ladder that is built up with the help
of the annihilation operator.
The creation and annihilation operators act on these
states as \begin{eqnarray}
a \ket{n}_{\dagger} & = & \sqrt{n + 1} \ket{n+1}_{\dagger} , \\
a^{\dagger} \ket{n}_{\dagger} & = & \sqrt{n} \ket{n-1}_{\dagger}. \end{eqnarray}
The number operator $N_{\dagger}$ that counts the number of
holes is given as \begin{equation}
N_{\dagger} = a a^{\dagger}. \end{equation}
With other words, the creation and annihilation operators
change role if the ground state for our considerations
is chosen to be the quantum state of
a fully developed Bose-Einstein condensate.
If the number of quanta
in the Bose-Einstein condensate is macroscopically large,
($\lim n_f \rightarrow \infty$),
an infinite ladder can be formed from these states that is not
bounded from below. Hence, the coherent states of the
creation operator can be defined as follows: \begin{equation}
\ket{\alpha}_{\dagger} = \exp(-|\alpha|^2/2)
\sum_{n = 0}^{\infty} {(\alpha^*)^n \over\dst \sqrt{n!}}
\ket{n}_{\dagger} \end{equation}
Note that the above defined coherent state is an eigenstate
of the creation operator, \begin{equation}
a^{\dagger} \ket{\alpha}_{\dagger} = \alpha^* \ket{\alpha}_{\dagger}. \end{equation}
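This follows from the hole-state ladder relation $a^{\dagger} \ket{n}_{\dagger} = \sqrt{n}\, \ket{n-1}_{\dagger}$ by the same term-by-term computation as for the states in eq.~(\ref{e:ladder-a}): \begin{equation}
  a^{\dagger} \ket{\alpha}_{\dagger} = \exp(-|\alpha|^2/2)
  \sum_{n = 1}^{\infty} {(\alpha^*)^n \over\dst \sqrt{n!}}\, \sqrt{n}\, \ket{n-1}_{\dagger}
  = \alpha^* \ket{\alpha}_{\dagger} . \end{equation}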
It is tempting to note that the coherent states of the
creation operator are also expressible as the action of the
displacement operator $D^{\dagger}(\alpha)$
on the fully developed
Bose-Einstein condensate state $\ket{BEC} = \ket{0}_{\dagger}$ as \begin{equation}
\ket{\alpha}_{\dagger} = D^{\dagger}(\alpha) \ket{0}_{\dagger}. \end{equation}
It is straightforward to generalize the results to
different modes characterized with a momentum ${\bf k}$.
In that case, the state $\ket{0}_{{\bf k}+}$ has to
be introduced as a state of fully developed Bose-Einstein
condensate where each boson moves with momentum ${\bf k}$.
Such moving Bose-Einstein condensates~\cite{atom1} are frequently
referred to as atom lasers~\cite{atom2} in atomic physics.
\section{Interpretation and summary}
Coherent states of the annihilation operator correspond
to semiclassical excitations of the vacuum, that follow a
classical equation of motion and keep their shape
minimizing the Heisenberg uncertainty relations all the time.
They correspond to a displaced ground state of the harmonic
oscillator.
What is the physical interpretation of the
new kind of coherent states of the creation
operator described in the present work?
We have shown that these new states correspond
to coherently excited holes in a fully developed
Bose-Einstein condensate. We have found that
displaced Bose-Einstein condensates in a
harmonic oscillator potential can be considered
as generalized coherent states of the creation operator.
\section*{}
\rightline{\tt What we call the beginning is often the end} \rightline{\tt And to make an end is to make a beginning.} \rightline{\tt The end is where we start from.}
\rightline{from {\it Little Gidding} by T. S. Eliot}
Dedicated to the memory of Volodja Gribov.
\eject \section*{Appendix: Constraints and the ladder representation} The introduction of the coherent states of the creation operator essentially relied on the possibility of putting a macroscopically large ($n_f >>> 1$) number of neutral bosons into the same quantum state. The equations describing these states as given in the body of the paper are correct only to a precision of order $1/n_f$. We argue in this section that such a limitation is in fact not specific to the coherent states of the creation operator, but also appears when energy constraints are taken into account in the description of the well-known coherent states of the annihilation operator.
Let us note that the eigenmode $n$ of the harmonic oscillator has an energy of $E_n - E_0 = n \omega$ (after removing the contribution of the ground or vacuum state). Hence in the superposition given by eq.~(\ref{e:ladder-a}) modes with arbitrarily large energy are mixed in (though the weight decreases as a Poisson tail with increasing values of $n$). Although the expectation value of the total energy in the coherent states is finite, the admixture of extremely high energy components can never be perfectly realized in any experiment, since with increasing energy of the included modes new physical phenomena, like particle and antiparticle creation, deviations from the harmonic shape of the oscillator potential, or other non-ideal phenomena, have to occur.
Suppose that $E_{max}$ is the maximal available energy and states with energy larger than $E_{max}$ are not allowed either due to e.g. constraints from energy conservation or due to the break-down of the harmonic approximation to the Hamiltonian after some energy scale. In this case modes are limited to
$ n \le n_{f} = E_{max}/\omega$. In case of photons in electromagnetic fields, $\omega_{\bf k} = |{\bf k}|$, hence for sufficiently soft modes $n_{f}$ can be always made so large that the contribution of states with $n \ge n_{f}$ to the coherent states $\ket{\alpha}$ can be made arbitrarily small. However, for massive bosons like bosonic atoms or mesons created in high energy physics, $\omega_{\bf k}= \sqrt{m^2 + |{\bf k}|^2}$, hence $n_{f} \le E_{max}/m$ for any mode of the field which yields \begin{equation}
\ket{\alpha} \rightarrow \ket{\alpha}_m =
\sum_{n = 0}^{n_{f}}
{\displaystyle\phantom{|} \alpha^n (a^{\dagger})^n \over n!} \ket{0} \equiv
D_{n_{f}}(\alpha) \ket{0} \end{equation} As each mode of a free boson field is approximately a harmonic oscillator mode, this suggests that coherent states of massive bosons with finite energy constraints can only be approximately realized.
\end{document} | arXiv |
Low pass filter algorithm origin
I have been doing some research, because I used this algorithm, which is widespread on the web:
y += a * (x - y)
y = (a * y) + (x - (a * x))
Where x is the input, y is the filter output, and a is alpha, whose value is defined by some underlying math.
This algorithm appears on Wikipedia, in this article, and in many other places.
But none actually shows why this algorithm is a low-pass filter or which formula it actually came from (Wikipedia has no source for it). I'm writing a small academic paper, so I need to know exactly where it came from.
lowpass-filter
DH.
$\begingroup$ i think if there is any "ontological" origin to the low-pass filter, i think it would lie with the concept of the arithmetic mean. we do averaging to "filter out" variations and to leave a general trend. $\endgroup$ – robert bristow-johnson Jun 21 '17 at 16:38
I tried to make my derivation match up exactly with embeddedrelated's. The filter you cited uses an approximation, which further obscures an intuitive understanding of it.
embeddedrelated does cite the frequency transfer function that the IIR filter is based off of: $$ H(s)=\frac{1}{1+s\tau} $$
The transfer function for a simple RC low pass filter is: $$ H(s)_{RC}=\frac{1}{1+sRC} $$ $$ \tau=RC $$
We can derive a difference equation from the RC circuit.
$$ I=C\frac{dV_{out}}{dt} $$ $$ I = \frac{V_{in} - V_{out}}{R} $$ $$ C\frac{dV_{out}}{dt}=\frac{V_{in} - V_{out}}{R} $$ $$ C\frac{V_{out}[n]-V_{out}[n-1]}{t_s}=\frac{V_{in}[n] - V_{out}[n]}{R} $$ $$ V_{in}[n]+\tau f_s V_{out}[n-1]=\tau f_s V_{out}[n] + V_{out}[n] $$ where $t_s = 1/f_s$ is the sample period. I have tested this form: $$ \frac{V_{in}[n]+\tau f_s V_{out}[n-1]}{\tau f_s + 1}=V_{out}[n] $$ When $$ \tau f_s \gg 1 $$ this reduces to the approximate form below, which matches the algorithm in the question with $a = 1/(\tau f_s)$; I have not tested this form: $$ V_{out}[n-1] + \frac{V_{in}[n] - V_{out}[n-1]}{\tau f_s} \approx V_{out}[n] $$
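Here is a quick Python check of the two forms (a sketch of my own; the variable names simply follow the derivation):

def rc_lowpass(x, tau, fs, exact=True):
    k = tau * fs
    y, out = 0.0, []
    for xn in x:
        if exact:
            y = (xn + k * y) / (k + 1.0)   # tested form
        else:
            y = y + (xn - y) / k           # approximation for k >> 1
        out.append(y)
    return out

step = [0.0] * 5 + [1.0] * 95              # unit-step input
print(rc_lowpass(step, tau=1e-3, fs=8000)[-1])   # settles near 1.0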
Robby Wasabi
I've seen it called a recursive moving average filter, an $\alpha$ filter, and a forgetting filter. The $\alpha$ filter is covered in the Wikipedia article on Alpha Beta Filters.
The transfer function was given in one of the comments.
The origin is going to be hard to track down because it is a very simple filter.
$\begingroup$ Yeah, I found out that it would be easier if I find papers and books about the types of filters and not this filter in particular. Those keywords said in your answer and in @Fat32 's comment helped me a lot. Maybe mix everything up so it can be a complete answer? $\endgroup$ – DH. Jun 22 '17 at 18:50
Looking at your first line of C-code:
y += a*(x-y)
It can be converted to an algebraic relation between the samples of the filter output sequence $y[n]$ and the filter input $x[n]$ as:
$$y[n+1]-(1-a)y[n]=ax[n]$$ which is also equivalent to $$y[n]-(1-a)y[n-1]=ax[n-1]$$
And the associated transfer function of this filter is $$H(z) = \frac{a z^{-1}}{1 - (1-a)z^{-1}} = \frac{a z^{-1}}{1 + (a-1)z^{-1}}$$
Now your application will probably require a causal filter (which means only current and past input is available and should be used to produce the current output), in which case the poles of the filter should reside inside the unit circle, i.e., $|z_p| < 1$.
Since this filter has a single pole which is at $z_p=(1-a)$ then we have; $|1-a| < 1$ and hence; $$0 < a <2$$ is the allowed range of real $a$ for which you can have a stable and causal filter.
Now to have a causal and stable lowpass filter you require that the pole is along the positive side of the real line, i.e., $0 < z_p < 1$, which means that we require $0 < a < 1$.
Otherwise when $ 1 < a < 2$ the filter becomes a highpass filter (actually it will be some other form of high-boost or shelving type filter rather than a strict high-pass filter which should block low frequencies, which these filters won't), as the pole will become negative for that range of $a$.
Note also that $a=1$ produces output equal to input shifted by 1 sample: $y[n] = x[n-1]$
Below are a few frequency response plots for different values of valid $a$ range:
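Such plots can be reproduced with a short script. A sketch using scipy (my own, not part of the original answer), reading the numerator $[0, a]$ and denominator $[1, a-1]$ off $H(z)$ above:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz

for a in (0.1, 0.5, 0.9, 1.5):     # 1.5 falls in the "high-boost" range
    w, h = freqz([0.0, a], [1.0, a - 1.0])
    plt.plot(w / np.pi, 20 * np.log10(np.maximum(np.abs(h), 1e-12)),
             label=f"a = {a}")
plt.xlabel("normalized frequency (x pi rad/sample)")
plt.ylabel("|H| (dB)")
plt.legend()
plt.show()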
Fat32
Journal of Geometric Mechanics
March 2009 , Volume 1 , Issue 1
The ubiquity of the symplectic Hamiltonian equations in mechanics
P. Balseiro, M. de León, Juan Carlos Marrero and D. Martín de Diego
2009, 1(1): 1-34. doi: 10.3934/jgm.2009.1.1
In this paper, we derive a "Hamiltonian formalism" for a wide class of mechanical systems, that includes, as particular cases, classical Hamiltonian systems, nonholonomic systems, some classes of servomechanisms... This construction strongly relies on the geometry characterizing the different systems. The main result of this paper is to show how the general construction of the Hamiltonian symplectic formalism in classical mechanics remains essentially unchanged starting from the more general framework of algebroids. Algebroids are, roughly speaking, vector bundles equipped with a bilinear bracket of sections and two vector bundle morphisms (the anchors maps) satisfying a Leibniz-type property. The bilinear bracket is not, in general, skew-symmetric and it does not satisfy, in general, the Jacobi identity. Since skew-symmetry is related with preservation of the Hamiltonian, our Hamiltonian framework also covers some examples of dissipative systems. On the other hand, since the Jacobi identity is related with the preservation of the associated linear Poisson structure, then our formalism also admits a Hamiltonian description for systems which do not preserve this Poisson structure, like nonholonomic systems.
Some examples of interest are considered: gradient extension of dynamical systems, nonholonomic mechanics and generalized nonholonomic mechanics, showing the applicability of our theory and constructing the corresponding Hamiltonian formalism.
P. Balseiro, M. de León, Juan Carlos Marrero, D. Martín de Diego. The ubiquity of the symplectic Hamiltonian equations in mechanics. Journal of Geometric Mechanics, 2009, 1(1): 1-34. doi: 10.3934/jgm.2009.1.1.
$G$-Chaplygin systems with internal symmetries, truncation, and an (almost) symplectic view of Chaplygin's ball
Simon Hochgerner and Luis García-Naranjo
2009, 1(1): 35-53. doi: 10.3934/jgm.2009.1.35
Via compression ([18, 8]) we write the $n$-dimensional Chaplygin sphere system as an almost Hamiltonian system on T*$\SO(n)$ with internal symmetry group $\SO(n-1)$. We show how this symmetry group can be factored out, and pass to the fully reduced system on (a fiber bundle over) T*$S^{n-1}$. This approach yields an explicit description of the reduced system in terms of the geometric data involved. Due to this description we can study Hamiltonizability of the system. It turns out that the homogeneous Chaplygin ball, which is not Hamiltonian at the T*$\SO(n)$-level, is Hamiltonian at the T*$S^{n-1}$-level. Moreover, the $3$-dimensional ball becomes Hamiltonian at the T*$S^{2}$-level after time reparametrization, whereby we re-prove a result of [4, 5] in symplecto-geometric terms. We also study compression followed by reduction of generalized Chaplygin systems.
Simon Hochgerner, Luis García-Naranjo. $G$-Chaplygin systems with internal symmetries, truncation, and an (almost) symplectic view of Chaplygin's ball. Journal of Geometric Mechanics, 2009, 1(1): 35-53. doi: 10.3934/jgm.2009.1.35.
Three-dimensional discrete systems of Hirota-Kimura type and deformed Lie-Poisson algebras
Andrew N. W. Hone and Matteo Petrera
Recently Hirota and Kimura presented a new discretization of the Euler top with several remarkable properties. In particular this discretization shares with the original continuous system the feature that it is an algebraically completely integrable bi-Hamiltonian system in three dimensions. The Hirota-Kimura discretization scheme turns out to be equivalent to an approach to numerical integration of quadratic vector fields that was introduced by Kahan, who applied it to the two-dimensional Lotka-Volterra system.
The Euler top is naturally written in terms of the $\mathfrak{so}(3)$ Lie-Poisson algebra. Here we consider algebraically integrable systems that are associated with pairs of Lie-Poisson algebras in three dimensions, as presented by Gümral and Nutku, and construct birational maps that discretize them according to the scheme of Kahan and Hirota-Kimura. We show that the maps thus obtained are also bi-Hamiltonian, with pairs of compatible Poisson brackets that are one-parameter deformations of the original Lie-Poisson algebras, and hence they are completely integrable. For comparison, we also present analogous discretizations for three bi-Hamiltonian systems that have a transcendental invariant, and finally we analyze all of the maps obtained from the viewpoint of Halburd's Diophantine integrability criterion.
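As a concrete illustration of the scheme described above (a sketch, not taken from the paper; it assumes the conventional factor 1/2 in Kahan's symmetric averaging), the Kahan-Hirota-Kimura step for the Euler top $\dot{x}_1 = a_1 x_2 x_3$ (and cyclic permutations) replaces each product $x_j x_k$ by $(\tilde{x}_j x_k + x_j \tilde{x}_k)/2$, so that the new point solves a linear system and the map is birational:

import numpy as np

def hk_euler_step(x, a, eps):
    a1, a2, a3 = a
    x1, x2, x3 = x
    M = np.array([[0.0,     a1 * x3, a1 * x2],
                  [a2 * x3, 0.0,     a2 * x1],
                  [a3 * x2, a3 * x1, 0.0    ]])
    # Solve (I - (eps/2) M) x_new = x for the updated point x_new.
    return np.linalg.solve(np.eye(3) - 0.5 * eps * M, x)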
Andrew N. W. Hone, Matteo Petrera. Three-dimensional discrete systems of Hirota-Kimura type and deformed Lie-Poisson algebras. Journal of Geometric Mechanics, 2009, 1(1): 55-85. doi: 10.3934/jgm.2009.1.55.
Dirac cotangent bundle reduction
Hiroaki Yoshimura and Jerrold E. Marsden
2009, 1(1): 87-158. doi: 10.3934/jgm.2009.1.87
The authors' recent paper in Reports in Mathematical Physics develops Dirac reduction for cotangent bundles of Lie groups, which is called Lie--Dirac reduction . This procedure simultaneously includes Lagrangian, Hamiltonian, and a variational view of reduction. The goal of the present paper is to generalize Lie--Dirac reduction to the case of a general configuration manifold; we refer to this as Dirac cotangent bundle reduction . This reduction procedure encompasses, in particular, a reduction theory for Hamiltonian as well as implicit Lagrangian systems, including the case of degenerate Lagrangians.
First of all, we establish a reduction theory starting with the Hamilton-Pontryagin variational principle, which enables one to formulate an implicit analogue of the Lagrange-Poincaré equations. To do this, we assume that a Lie group acts freely and properly on a configuration manifold, in which case there is an associated principal bundle and we choose a principal connection. Then, we develop a reduction theory for the canonical Dirac structure on the cotangent bundle to induce a gauged Dirac structure . Second, it is shown that by making use of the gauged Dirac structure, one obtains a reduction procedure for standard implicit Lagrangian systems, which is called Lagrange-Poincaré-Dirac reduction . This procedure naturally induces the horizontal and vertical implicit Lagrange-Poincaré equations , which are consistent with those derived from the reduced Hamilton-Pontryagin principle. Further, we develop the case in which a Hamiltonian is given (perhaps, but not necessarily, coming from a regular Lagrangian); namely, Hamilton-Poincaré-Dirac reduction for the horizontal and vertical Hamilton-Poincaré equations . We illustrate the reduction procedures by an example of a satellite with a rotor.
The present work is done in a way that is consistent with, and may be viewed as a specialization of the larger context of Dirac reduction, which allows for Dirac reduction by stages . This is explored in a paper in preparation by Cendra, Marsden, Ratiu and Yoshimura.
Hiroaki Yoshimura, Jerrold E. Marsden. Dirac cotangent bundle reduction. Journal of Geometric Mechanics, 2009, 1(1): 87-158. doi: 10.3934/jgm.2009.1.87.
The naturally occurring α-tocopherol stereoisomer RRR-α-tocopherol is predominant in the human infant brain
@article{Kuchan2016TheNO,
  title={The naturally occurring $\alpha$-tocopherol stereoisomer RRR-$\alpha$-tocopherol is predominant in the human infant brain},
  author={Matthew J. Kuchan and S{\o}ren Krogh Jensen and Elizabeth J Johnson and Jacqueline C. Lieblein-Boff},
  journal={British Journal of Nutrition},
  year={2016}
}
Abstract α-Tocopherol is the principal source of vitamin E, an essential nutrient that plays a crucial role in maintaining healthy brain function. Infant formula is routinely supplemented with synthetic α-tocopherol, a racaemic mixture of eight stereoisomers with less bioactivity than the natural stereoisomer RRR-α-tocopherol. α-Tocopherol stereoisomer profiles have not been previously reported in the human brain. In the present study, we analysed total α-tocopherol and α-tocopherol…
In this section, we'll look at two things. First, what is a square root (and at the end of the page, what is a root at all)? Second, how can we simplify square roots so that we can express $\sqrt{20}$ in its simpler form, $2 \sqrt{5}.$ That's a skill you'll need as you move along in your math studies.
You can always use a calculator to say that
$$\sqrt{1000} = 31.623,$$
but that's just an approximation. Though it might be good enough for whatever your purpose is, in mathematics, we like to be exact, so we'll want to express $\sqrt{1000}$ as $10 \sqrt{10}.$ We'll see how to do that later.
The radical
The square root sign, $\sqrt{\phantom{000}}$, is called a radical. When written as
$$\sqrt{x},$$
it asks the question, what number, when multiplied by itself (i.e. when squared) gives x? Some roots are easy to figure out. For example, $\sqrt{4} = 2$ because $2^2 = 2 \cdot 2 = 4.$
Every number has two square roots
We need to be careful with that example, however, because every number has precisely two square roots. Notice that while $(2)^2 = 4, \; (-2)^2 = 4$ as well. So when we find the square root of 4, we should write
$$\sqrt{4} = ± 2,$$
using the plus-or-minus sign ( ± ). Here are more easy examples:
$$ \begin{matrix} \sqrt{1} = ± 1 && \sqrt{36} = ± 6 \\[4pt] \sqrt{4} = ± 2 && \sqrt{49} = ± 7 \\[4pt] \sqrt{9} = ± 3 && \sqrt{64} = ± 8 \\[4pt] \sqrt{16} = ± 4 && \sqrt{81} = ± 9 \\[4pt] \sqrt{25} = ± 5 && \sqrt{100} = ± 10 \end{matrix}$$
The square root operation is the inverse of the squaring operation*
We call squaring and the square-root inverse operations because they are, in a sense, opposites. One operation on a certain number undoes the operation of another. For example, we know that
$$\sqrt{16} = ±4$$
But it's also true that
$$(±2)^2 = 4.$$
Now think of one operation inside of the other:
$$ \begin{align} (\sqrt{4})^2 &= (±2)^2 = 4 \; \; \color{#E90F89}{\text{ and}} \\[4pt] \sqrt{4^2} &= \sqrt{16} = ±4 \end{align}$$
In general, we have:
$$(\sqrt{x})^2 = \sqrt{x^2} = x$$
If we successively square, then take the square root, or square a root, it's like we really did nothing at all, because these operations "undo" one another.
*Squaring and rooting are actually functions, not operations. You'll learn more about functions later.
Simplifying square roots
Now we have a bunch of easy square roots like $\sqrt{16}, \; \sqrt{25},$ and so on, but what about something like $\sqrt{12}$ ? We'd like to be able to simplify any square-root expression as much as possible. We'll cover that below, but first we should look into the properties of square roots and we'll need to take a detour to prime numbers.
Properties of square roots
1. Square roots of products
$$\sqrt{x \cdot y} = \sqrt{x}\sqrt{y}$$
Proof: Let $x = a\cdot a$ and $y = b\cdot b$, then
$$ \begin{align} \sqrt{x\cdot y} &= \sqrt{aa \cdot bb} \\[4pt] &= \sqrt{ab \cdot ab} \\[4pt] &= \sqrt{(ab)^2} \\[4pt] &= ab, \; \text{ which} \\[4pt] &= \sqrt{a^2}\sqrt{b^2} = \sqrt{x}\sqrt{y} \end{align}$$
2. Square roots of quotients
$$\sqrt{\frac{x}{y}} = \frac{\sqrt{x}}{\sqrt{y}}$$
This can be proved in a way similar to the product property above.
3. Square roots of sums and differences
$$\sqrt{x ± y} \ne \sqrt{x} ± \sqrt{y}$$
Here's a contradiction:
$$ \begin{align} \sqrt{16 + 9} &\overset{?}{=} \sqrt{16} + \sqrt{9}\\[4pt] \sqrt{25} &\overset{?}{=} 4 + 3 \\[4pt] 5 &\ne 7 \end{align}$$
4. Square root of zero
$$\sqrt{0} = 0$$
5. Square roots of negative numbers
Negative numbers are not in the domain of the square root function, so we won't worry about them here. Later you'll learn how to deal with roots of negative numbers using imaginary numbers (see complex numbers).
A digression: prime numbers
The first thing we need to do is to find a list of small prime numbers that will mostly be our factors in simplifying square roots. To do this, we reproduce the sieve of Eratosthenes. Eratosthenes (c. 276–194 BCE) was head of the Library of Alexandria, a Greek mathematician and philosopher. He is famous for being first to calculate the circumference of Earth using shadows measured at the same time from distant points. He also worked with prime numbers.
The idea of Eratosthenes' "sieve" is to write the first 100 integers, like in the table below. Numbers 2 and 3 are circled because they are prime. That is, they're divisible only by themselves and 1; they have no other factors. (Note that 1 is excluded: by convention it is neither prime nor composite.)
Now all even numbers are divisible by 2, so no even number except for 2 can be prime. We'll cross out the other evens:
On to 3. All numbers except for 3 that are multiples of 3 (6, 9, 12, ...) can't be prime because they have a factor of 3. We'll cross out all of the multiples of 3:
Now all multiples of 4 are also multiples of 2, so we've eliminated those already. Next is 5. Let's cross out all multiples of 5, which lie in the 5th and last columns.
Now crossing out the multiples of 7 gets rid of 49, 77, and 91, and that's actually it. You can look for multiples of other primes like 11, 13 or 17, but they're all covered up already; since $11^2 = 121$ is larger than 100, every number that remains must be prime.
So we're left with 25 prime numbers in the first 100 integers. They are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89 and 97.
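The whole crossing-out procedure is easy to automate. Here is a short Python version of the same sieve (our own illustration, not part of the original lesson):

def primes_up_to(n):
    is_prime = [False, False] + [True] * (n - 1)    # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p): # cross out multiples of p
                is_prime[multiple] = False
    return [k for k in range(2, n + 1) if is_prime[k]]

print(primes_up_to(100))    # the 25 primes listed above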
These prime numbers will be helpful for learning how to reduce roots to their simplest forms. We'll be looking to find all prime factors of the number under the radical.
Interesting point
A theorem called the fundamental theorem of arithmetic says that
any whole number greater than 1 is either prime or has a unique set of factors, all of which are prime.
It's that unique set of prime factors we'll be looking for in simplifying roots below.
Simplifying roots
Now let's simplify some square roots. There are two methods for doing this, but the difference is only cosmetic. Choose the one you're most comfortable with or might have already seen. Let's begin by simplifying $\sqrt{320}.$ Our goal will be to pick 320 apart, bit-by-bit, by finding prime factors. For example, 320 is even, so 2 is obviously a factor: $320 = 2 \times 160.$ The prime number 2 is also a factor of 160: $160 = 2 \times 80,$ and so on.
Method 1: Factor trees
One way to organize this process, with which we aren't done yet, is the factor tree. Here is the factor tree for 320:
Notice that each line contains a prime factor and the remaining factor of the number above. Once we get to the bottom of the tree, all that's left are prime numbers, so we have
$$320 = 2 \times 2 \times 2 \times 2 \times 2 \times 2 \times 5$$
Now notice that there are three pairs of 2 there,
So we can also write
$$320 = 4 \times 4 \times 4 \times 5$$
Using the properties of square roots, we also see that
$$ \begin{align} \sqrt{320} &= \sqrt{4\cdot 4\cdot 4\cdot 5} \\[4pt] &= \sqrt{4} \, \sqrt{4} \, \sqrt{4} \, \sqrt{5} \\[4pt] &= 2 \cdot 2 \cdot 2 \cdot \sqrt{5} \\[4pt] &= 8 \sqrt{5} \end{align}$$
This is the simplest way we can represent $\sqrt{320}$ using integers. It is exact, not a decimal approximation.
Method 2: repeated division
The second method is very similar; it's just a matter of doing repeated divisions by known factors. The same full process for the prime factors of 320 looks like this: 320 ÷ 2 = 160, 160 ÷ 2 = 80, 80 ÷ 2 = 40, 40 ÷ 2 = 20, 20 ÷ 2 = 10, 10 ÷ 2 = 5.
The idea is very similar. We divide 320 by 2 (an obvious prime factor) with the result of 160. Divide by 2 again to get 80. Keep dividing by 2 until we get 5, a prime number, and then we're done. We collect the pairs of prime factors as above and reconstruct our root.
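If you'd like to check your work by computer, here is a small Python version of the repeated-division idea (the function name is ours; it divides out one prime pair at a time, exactly like the process above):

def simplify_sqrt(n):
    """Return (c, r) such that sqrt(n) = c * sqrt(r), with r square-free."""
    c, r = 1, n
    p = 2
    while p * p <= r:
        while r % (p * p) == 0:   # each pair of prime factors p...
            r //= p * p
            c *= p                # ...moves one p outside the radical
        p += 1
    return c, r

print(simplify_sqrt(320))    # (8, 5)   -> 8 * sqrt(5)
print(simplify_sqrt(162))    # (9, 2)   -> 9 * sqrt(2)
print(simplify_sqrt(1000))   # (10, 10) -> 10 * sqrt(10)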
Now let's do a couple more examples using both methods.
Example: $\sqrt{162}$
Our factorization of the root, and the result is
$$ \begin{align} \sqrt{162} &= \sqrt{2 \cdot 3 \cdot 3 \cdot 3 \cdot 3} \\[4pt] &= \sqrt{2 \cdot 9\cdot 9} \\[4pt] &= \sqrt{2} \cdot \sqrt{9} \cdot \sqrt{9} \\[4pt] &= 3\cdot 3 \sqrt{2} = \bf 9 \sqrt{2} \end{align}$$
You might also have noticed that 162 factors to 81 × 2, which would lead to the same result.
Example: $\sqrt{1000}$
$$ \begin{align} \sqrt{1000} &= \sqrt{2 \cdot 2 \cdot 2 \cdot 5 \cdot 5 \cdot 5} \\[4pt] &= \sqrt{2 \cdot 5 \cdot 4 \cdot 25} \\[4pt] &= \sqrt{10} \cdot \sqrt{4} \cdot \sqrt{25} \\[4pt] &= \bf 10 \sqrt{10} \end{align}$$
You can use this problem generator to generate square roots for you to simplify. Note that you won't be able to simplify them all. Some, for example, will be roots of prime numbers.
Practice as much as you can. Get your confidence up. You'll be simplifying roots throughout your studies in math and science. It's an important skill to have.
xaktly.com by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to [email protected]. | CommonCrawl |
Week 3 Progress Report
Read a few papers about the LBM, HPP, and FHP CA.
Lattice Boltzmann Methods
A list of differences between Lattice Boltzmann Methods (LBM) and regular numerical methods is found in Chen and Doolen's paper. The differences include:
The convection operator of the LBM in phase space is linear. If advection is the same as convection, then in CFD (primarily Stam's Stable Fluids method) one needs either an explicit advection scheme (which can be unstable) or, as Stam suggests, the more numerically stable semi-Lagrangian technique based on the method of characteristics. Advection is the term $-(\vec{u} \bullet \nabla) u$ in the Navier-Stokes equations.
The incompressible Navier-Stokes equations can be derived from the nearly incompressible limit of the LBM.
The LBM method utilizes a minimal set of velocities in phase space.
One of the first attempts to use CA to approximate the NSE is called the HPP model after Hardy, Pomeau, and de Pazzis, the authors of a 1973 paper describing the model. The idea of the model is quite simple:
Discretize the domain to a square lattice
At most one particle per direction can occupy a given vertex of the grid,
Particles keep moving in their current direction
If two particles collide head-on, they scatter in the perpendicular direction.
The image below summarizes the gist of the model.
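In code, one update step of this model is short. Here is a minimal Python sketch (my own illustration, not from the 1973 paper); it assumes numpy boolean arrays state[d] holding the occupation of direction d at every site, with periodic boundaries:

import numpy as np

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +x, -x, +y, -y

def hpp_step(state):
    # Collision: exactly two head-on particles (and nothing else at the
    # site) scatter into the perpendicular pair of directions.
    xpair = state[0] & state[1] & ~state[2] & ~state[3]
    ypair = state[2] & state[3] & ~state[0] & ~state[1]
    flip = xpair | ypair                     # the two masks are disjoint
    new = state.copy()
    for d in range(4):
        new[d] ^= flip                       # rotate the colliding pairs
    # Streaming: every particle moves one site along its direction.
    for d, (dx, dy) in enumerate(DIRS):
        new[d] = np.roll(np.roll(new[d], dx, axis=0), dy, axis=1)
    return new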
While such a model is simple and was ideal for 1973 (when parallel computing was the fad), if we consider its limit, it does not approximate the NSE. This is partly because of the inadequate rotational symmetry of the square lattice (having only $\pi/2$ symmetry).
The HPP model is still used to simulate sand mixed with water and other rigid liquids (need to find paper that mentioned this).
In 1986 it was realized that the only two-dimensional lattice that remotely approximates the NSE is the hexagonal lattice. This model was realized by Frisch-Hasslacher-Pomeau and a few months later independently by Wolfram (who references the 1986 article in CA fluids 1). The model was extended to 3 space by FHP in 1987.
The FHP model is similar to the HPP model. Again, at most one particle per direction can occupy any position on the grid. The update rules are a bit more complex, and are described by the following figure (which appears in "Lattice-Gas Cellular Automata and Lattice Boltzmann Models" by Dieter A. Wolf-Gladrow (Springer) and on page 378 of NKS).
An output is selected randomly if multiple outputs can be realized.
Papers Read
Somers, 1993. J.A. Somers, Direct simulation of fluid flow with cellular automata and the lattice-Boltzmann equation. Appl. Sci. Res. 51 (1993), pp. 127–133.
CHEN, S., and G.D. DOOLEN, 1998. Lattice Boltzmann method for fluid flows. Annu. Rev. Fluid Mech, 30, 329–364.
Something on Rule 184 and its relation to traffic flow.
Lectures Watched
Wolfram's NKS lecture at MIT in 2003.
Irving, G., Guendelman, E., Losasso, F., and Fedkiw, R. 2006. Efficient simulation of large bodies of water by coupling two and three dimensional techniques. In ACM SIGGRAPH 2006 Papers (Boston, Massachusetts, July 30 - August 03, 2006). SIGGRAPH '06. ACM, New York, NY, 805–811. DOI= http://doi.acm.org/10.1145/1179352.1141959
Treuille, A., Lewis, A., and Popović, Z. 2006. Model reduction for real-time fluids. In ACM SIGGRAPH 2006 Papers (Boston, Massachusetts, July 30 - August 03, 2006). SIGGRAPH '06. ACM, New York, NY, 826–834. DOI= http://doi.acm.org/10.1145/1179352.1141962
My first attempt to implement the FHP CA did not go anywhere because I had no idea how to generate the hexagonal lattice. Most of the time was thus devoted to figuring out the optimal way to generate and represent the lattice.
Representing the Hexagonal Lattice
While the CA is simple, I wanted to look at how to optimize my grid code as early as possible, since it could become a bottleneck for large lattices.
The first question was how to represent the lattice as an array. A naive way is to think of the two-dimensional array in terms of $3 \times 3$ blocks. Each of these blocks represents a single lattice cell, as shown below.
It should be easy, however, to see that such a representation would waste 3 positions that the model does not care about. An alternative approach is to remove the middle row from the representation and concentrate on the first and last rows. We thus get the following representation
based on the following numbering scheme
A $3 \times 2$ block. If we then rewrite our representation of the FHP model using this scheme, we get
It should be noted that if we want $n^2$ lattice cells ($n$ rows with $n$ columns) then we do not need a $3n \times 2n$ array. This is evident if we stack more lattice cells in this representation:
6 1
51 26
462 351
426 5131
Which represents the orange region found in the following pictorial representation
This $7 \times 2$ array represents three lattice rows with one lattice cell per row. We can generalize the size of the matrix to be
(2*num_of_lattice_rows + 1) x (num_of_lattice_columns + 1)
Note: originally I was representing this as a $2 \times 3$ matrix; because of some complications writing this section, however, I chose to switch to a $3 \times 2$ coding scheme.
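As a small sanity check of the size formula above, a sketch of allocating the packed lattice (the function name and dtype are my choices):

```python
import numpy as np

# Allocate the packed hexagonal lattice: the 3x2 blocks overlap, so the
# array is (2*rows + 1) x (cols + 1) rather than (3*rows) x (2*cols).
# For 3 lattice rows with 1 cell per row this gives the 7 x 2 array above.
def make_lattice(num_of_lattice_rows, num_of_lattice_columns):
    shape = (2 * num_of_lattice_rows + 1, num_of_lattice_columns + 1)
    return np.zeros(shape, dtype=np.uint8)
```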
Ilachinski, A. (2001). Cellular Automata: A Discrete Universe. World Scientific. (Chapter on fluids)
Revised the representation of the lattice. Started coding the FHP model.
fhp.c (does not work)
After talking to Dr. Francis, who explained some of the history behind what I was doing, I decided to go back and implement the simpler HPP model in Python; in the end, however, it did not work.
One question that Dr. Francis and I had was what exactly Wolfram tried on the Connection Machine. After a few hours of research, I found some interesting information. The following is a snippet of an email exchange between me and Dr. Francis on the topic.
This <http://www.stephenwolfram.com/publications/articles/ca/88-cellular/1/text.html> is about the only time Wolfram mentions his work on the Connection Machine in the 1980s. As to what type of method he used, one has to look elsewhere; from <http://www.longnow.org/views/essays/articles/ArtFeynman.php>:
" Cellular automata started getting attention at Thinking Machines when
Stephen Wolfram, who was also spending time at the company, suggested that
we should use such automata not as a model of physics, but as a practical
method of simulating physical systems. Specifically, we could use one
processor to simulate each cell and rules that were chosen to model
something useful, like fluid dynamics. For two-dimensional problems there
was a neat solution to the anisotropy problem since [Frisch, Hasslacher,
Pomeau] had shown that a hexagonal lattice with a simple set of rules
produced isotropic behavior at the macro scale. Wolfram used this method on
the Connection Machine to produce a beautiful movie of a turbulent fluid
flow in two dimensions. Watching the movie got all of us, especially
Feynman, excited about physical simulation. We all started planning
additions to the hardware, such as support of floating point arithmetic that
would make it possible for us to perform and display a variety of
simulations in real time."
The above quote comes partly from a book called "Feynman and Computation," and that is the only reference to CA and fluids (or Wolfram) in that book. A more (auto)biographical account can be found in NKS <http://www.wolframscience.com/nksonline/page-880b-text>:
"I had always thought that cellular automata could be a way to get at
foundational questions in thermodynamics and hydrodynamics. And in mid-1985,
partly in an attempt to find uses for the Connection Machine, I devised a
practical scheme for doing fluid mechanics with cellular automata (see page
378 <http://www.wolframscience.com/nksonline/page-378-text>). Then over the
course of that winter and the following spring I analyzed the scheme and
worked out its correspondence to the traditional continuum approach.
By 1986, however, I felt that I had answered at least the first round of
obvious questions about cellular automata, and it increasingly seemed that
it would not be easier to go further with the computational tools available.
In June 1986 I organized one last conference on cellular automata--then in
August 1986 essentially left the field to begin the development of *
Mathematica*."
He thus used the FHP model that I am trying to get to work right now. Also, the 3D fluid simulation is based on the four-dimensional FCHC (face-centered hypercubic) lattice <http://arxiv.org/abs/chao-dyn/9508001v1>. It should also be mentioned that Wolfram patented not only the hexagonal method for solving fluid dynamics, but the entire lattice gas method as a whole; see patent number 4809202 <http://www.google.com/patents?id=FF4WAAAAEBAJ&dq=U.S.+Patent+Number+4,809,202>.
hpp.py (does not work)
I realized that my interpretation of the model had been incorrect the entire time. After I gave a presentation, received some helpful tips, and learned more, I decided to start from scratch. After a few hours, I was able to correctly implement the HPP model. The implementation, along with the algorithm, is detailed in the HPP document posted on the website.
hpp.c
Looked at Jared Schaber's work and wrote notes. Talked to Jonathan Manton, who described how to implement the FHP model (I wish I had had that talk before, since I had figured out much of what he said the day before). Jonathan also explained the concepts behind CUDA, compiler optimization, and a slew of other things. He also gave me tips on how to optimize my program, while stressing that clarity is the most important objective at the beginning.
In Cellular Automata Supercomputers for Fluid Dynamics Modeling (by Margolus) the author writes the following about implementing CA on conventional architectures (not CAM):
two dimensional planes are processed serially (with a substantial amount of pipelining); a third dimension is achieved by stacking planes and operating on them in parallel.
The question is: how can this be achieved without running into race conditions?
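One standard answer — my note, not from Margolus's paper — is double buffering: compute every cell of the next generation from a read-only copy of the current one, so planes can be processed in parallel without readers ever seeing half-updated data. A minimal sketch:

```python
import numpy as np

# Double buffering: all reads come from `cur`, all writes go to `nxt`,
# so parallel workers never race on the same array.
def run(step, cells, iterations):
    cur, nxt = cells, np.empty_like(cells)
    for _ in range(iterations):
        step(cur, nxt)          # step() must read only cur, write only nxt
        cur, nxt = nxt, cur     # swap buffers for the next generation
    return cur
```

The hpp_step sketch earlier achieves the same effect by writing into a fresh output array each step.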
\begin{document}
\title{\large \bf The Higher Relation Bimodule} \author{Ibrahim Assem, M. Andrea Gatica, Ralf Schiffler}
\date{}
\maketitle
\begin{abstract}
Given a finite dimensional algebra $A$ of finite global dimension, we consider the trivial extension of $A$ by the $A-A$-bimodule $\oplus_{i\ge 2} \mathop{\rm Ext}\nolimits^i_A(DA,A)$, which we call the higher relation bimodule. We first give a recipe allowing one to construct the quiver of this trivial extension in case $A$ is a string algebra and then apply it to prove that, if $A$ is gentle, then the tensor algebra of the higher relation bimodule is gentle.
\end{abstract}
\section{Introduction}
The objective of this paper is to describe a new class of algebras, which we call higher relation extensions. Our motivation comes from the study of cluster-tilted algebras, introduced by Buan, Marsh and Reiten in \cite{BMR}, and in \cite{CCS} for type $\mathbb{A}$. Indeed, it was shown in \cite{ABS} that an algebra $A$ is cluster-tilted if and only if there exists a tilted algebra $C$ such that $A$ is isomorphic to the trivial extension of $C$ by the $C-C$-bimodule $\mathop{\rm Ext}\nolimits^2_C(DC,C)$. Moreover, a recipe for constructing the quiver of this trivial extension was given in \cite[Theorem 2.6]{ABS}. The proof of the latter result rests on the fact that tilted algebras have global dimension 2.
Here, we consider the more general case of an algebra $A$ having an arbitrary finite global dimension and consider its trivial extension by the bimodule $\bigoplus_{i\ge 2} \mathop{\rm Ext}\nolimits^i_A(DA,A)$, which we call the higher relation bimodule. We believe that this class of algebras, which we call higher relation extensions, will be useful in the study of $m$-cluster-tilted algebras (see \cite{FPT} \cite{B}). Our first objective is to describe the ordinary quiver of the higher relation extension of $A$ in the case where $A$ is a string algebra in the sense of Butler and Ringel \cite{BR}. We also assume that the quiver of $A$ is a tree. This is no restriction, because the universal cover of a string algebra is a string tree \cite{G}. Our theorem reads as follows.
\begin{teo}\label{thmA} Let $A=kQ/I$ be a string tree algebra. Then there exist two sequences $(c_\ell),(z_\ell)$ of points of $Q$ such that the arrows in the quiver of the higher relation extension are exactly those of $Q$ plus one additional arrow from each $z_\ell $ to $c_\ell$.
\end{teo}
Our proof is constructive, in the sense that we give an algorithm allowing to construct explicitly the sequences $(c_\ell )$ and $(z_\ell) $ and thus the quiver of the higher relation extension.
We then consider the particular case where $A$ is a gentle algebra. Gentle algebras form an important subclass of the class of string algebras. Part of their importance comes from the fact that this subclass is stable under derived equivalences \cite{SZ}. While, as we show, the higher relation extension algebra of a gentle algebra is monomial but not necessarily gentle, we prove, using our Theorem \ref{thmA}, that the tensor algebra of the higher relation bimodule is gentle.
\begin{teo} \label{thmB} Let $A=kQ/I$ be a gentle algebra, then the tensor algebra of the higher relation bimodule $\bigoplus_{i\ge 2} \mathop{\rm Ext}\nolimits^i_A(DA,A)$ is gentle. \end{teo}
The paper is organised as follows. In section 2, we fix the notation and recall some facts and results about string and gentle algebras. Section 3 is devoted to the computation of projective resolutions and injective coresolutions of uniserial modules over a string algebra. We study the top of the higher extension bimodule in section 4 and we prove Theorem \ref{thmA} in section 5. Sections 6 and 7 are devoted to the case of gentle algebras.
\section{ Preliminaries}
\subsection{ Notation}
Throughout this paper, algebras are basic and connected finite dimensional algebras over an algebraically closed field $k$. Given an algebra $A$, there always exists a (unique) quiver $Q=(Q_0,Q_1)$ and (at least) an isomorphism $A\cong kQ/I$, where $kQ$ is the path algebra of $Q$, and $I$ is an admissible ideal of $kQ$, see, for instance, \cite{ASS}. Such an isomorphism is called a {\bf presentation} of the algebra. Given an algebra $A$, we denote by $\textup{mod}\,A$ the category of finitely generated right $A$-modules, and by $D=\mathop{\rm Hom}\nolimits_k(-,k)$ the standard duality between $\textup{mod}\,A$ and $\textup{mod}\,A^{op}$. For a point $x $ in the quiver $Q$ of $A$, we denote by $P(x),I(x),S(x),e_x$ respectively, the corresponding indecomposable projective module, injective module, simple module and primitive idempotent. We recall that a module $M$ can be equivalently considered as a bound quiver representation $M=(M_i,M_\alpha)_{i\in Q_0,\alpha\in Q_1}$. The projective, or injective, dimension of a module $M$ is denoted by $\mathop{\rm pd}\nolimits M$, or $\mathop{\rm id}\nolimits M$, respectively. The global dimension of $A$ is denoted by $\mathop{\rm gldim}\nolimits A$. For facts about the category $\textup{mod}\, A$, we refer the reader to \cite{ARS} or \cite{ASS}.
\subsection {\bf Trivial extensions}
Let $A$ be an algebra and $M$ an $A-A$-bimodule. The trivial extension of $A$ by $M$ is the algebra $ A \ltimes M$ with underlying $k$-vector space
$$A \oplus M = \{(a,m) | a \in A, \, m \in M \} $$ and the multiplication defined by $$(a,m) . (a',m')= (a.a', am'+ma')$$ for $a,a' \in A$ and $m,m' \in M$.
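In particular, $M$ becomes a square-zero ideal of $A \ltimes M$: the multiplication rule gives
$$(0,m)\,.\,(0,m')= (0.0, \; 0.m'+m.0)=(0,0)$$
for all $m,m' \in M$.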
For instance, an algebra $A$ is cluster-tilted if and only if there exists a tilted algebra $C$ such that $A$ is the trivial extension of $C$ by the so-called relation bimodule $\mathop{\rm Ext}\nolimits^2_C(DC,C)$, see \cite{ABS}.
The ordinary quiver of a trivial extension is computed as follows (see, for instance, \cite{ABS}): let $M$ be an $A-A$ bimodule, then the quiver $Q_{A \ltimes M}$ of $A \ltimes M$ is given by
\begin{itemize} \item[1)] $ \bigr( Q_{A \ltimes M}\bigl)_0= (Q_A)_0$ \item[2)] For $z,c \in (Q_A)_0$, the set of arrows in $Q_{A \ltimes M}$ from $z$ to $c$ equals the set of arrows in $Q_A$ from $z$ to $c$ plus
$$\mathop{\rm dim_k}\nolimits \displaystyle \frac{e_z M e_c}{e_z M (\mathop{\rm rad}\nolimits A) e_c + e_z (\mathop{\rm rad}\nolimits A) M e_c }$$
additional arrows from $z$ to $c$.
The latter arrows are called {\bf new} arrows, while the former are the {\bf old} arrows. \end{itemize}
\subsection {String algebras}
Recall from \cite{BR} (see also \cite{WW}) that an algebra $A$ is called a {\bf string algebra} if there exists a presentation $A=kQ/I$ (called {\bf a string presentation}) such that: \begin{itemize} \item[S1)] $I$ is generated by a set of paths (thus $A$ is monomial). \item [S2)] Each point in $Q$ is the source of at most two arrows and the target of at most two arrows. \item[ S3)] For an arrow $\alpha$, there is at most one arrow $\beta$ and at most one arrow $\gamma$ such that $\alpha \beta \notin I$ and $\gamma \alpha \notin I$. \end{itemize}
Whenever we deal with a string algebra $A$, we always assume that it is given by a string presentation $A=kQ/I$. We assume moreover that the relations (that is, the generators of $I$) are of minimal length.
A reduced walk $\omega$ in $Q$ is called a {\bf string} if it contains no zero relations. To each string $\omega$ in $Q$, we can associate a
so-called {\bf string module} \cite{BR} in the following way. If $\omega $ is the stationary path at $j$, then $M(\omega)=S(j)$. Let $\omega = \omega_1 \omega_2 \cdots \omega_t$ be a string, with each $\omega_i$ an arrow or the inverse of an arrow. For each $i$ such that $0 \leq i \leq t$, let $V_i=k$; and for $1 \leq i \leq t$, let $V_{\omega_i}$ be the identity map sending $x \in V_i$ to $x \in V_{i+1}$ if $\omega_i$ is an arrow and otherwise the identity map sending $x \in V_{i+1}$ to $x \in V_i$. The string module $M(\omega)$ is then defined as follows: for each $j \in Q_0$, $M(\omega)_j$ is the direct sum of the vector spaces $V_i$ such that the source of $\omega_i$ is $j$ if $j$ appears in $\omega$, and otherwise $M(\omega)_j=0$; for each $\alpha \in Q_1$, $M(\omega)_{\alpha}$ is the direct sum of the maps $V_{\omega_i}$ such that $\omega_i=\alpha$ or $\omega_i^{-1}= \alpha$ if $\alpha$ appears in $\omega$, and otherwise $M(\omega)_{\alpha}=0$.
A non-zero path $\omega$ in $Q$ from $a$ to $b$ will sometimes be denoted by $ [a,b] $, whenever there is no ambiguity. Then, the corresponding string module is denoted by $M(\omega)=M([a,b])$.
We also recall that the endomorphism ring of a projective module over a string tree algebra $A$ (a full subcategory of $A$) is also a string tree algebra.
\subsection {Gentle algebras}
Recall from \cite{AS} that a string algebra $A=kQ/I$ is called {\bf gentle} if in addition to $(S1),\, (S2), \, (S3)$, the bound quiver $(Q,I)$ satisfies: \begin{itemize} \item[G1)] For an arrow $\alpha$, there is at most one arrow $\beta$ and at most one arrow $\gamma$ such that $\alpha \beta \in I$ and $\gamma \alpha \in I$.
\item[G2)] $I$ is quadratic (that is, $I$ is generated by paths of length 2). \end{itemize}
For instance, cluster-tilted algebras of types $\mathbb{A}$ and $\tilde{ \mathbb{A}}$ are gentle \cite{ABCP}.
\section{ \large \bf Resolutions of uniserial modules}
In this section, we compute minimal projective resolutions of an injective module, and dually minimal injective coresolutions of a projective module over a string algebra. Throughout, we let $A=kQ/I$ be a string presentation.
\begin{defi} Let $[x_0,y_0]$ be a non-zero path from $x_0$ to $y_0$ in $Q$. We define inductively the right maximal sequence of $[x_0,y_0]$ as follows. This is a finite sequence of non-zero paths $[x_{i_1i_2 \cdots i_t}, y_{i_1i_2 \cdots i_t}]$ with $i_1=0$ and $i_j \in \{ 0,1\}$ such that
\begin{itemize}
\item[1)] Let $[x_0,y_{00}]$,$[x_0,y_{01}]$ be the maximal non-zero paths starting at $x_0$ (where we agree that $ [x_0,y_0]$ is contained in $[x_0,y_{00}]$).
\noindent Then we set $$[x_{00},y_{00}]= [x_0,y_{00}] \backslash [x_0,y_0]$$ and $$[x_{01},y_{01}]= [x_0,y_{01}] \backslash [x_0,y_0] = [x_{0},y_{01}] \backslash \{x_0\}.$$
\item[2)] Inductively, assume that $[x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}}]$ has been defined. Let $[x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}0}]$ and $[x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}1}]$ be the maximal non-zero paths starting at $x_{0i_2 \cdots i_{t-1}}$, where we agree that $[x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}}] $ is contained in $[x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}0}]$.
\noindent Then we set $$[x_{0i_2 \cdots i_{t-1}0}, y_{0i_2 \cdots i_{t-1}0}]= [x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}0}] \backslash [x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}}]$$ and $$[x_{0i_2 \cdots i_{t-1}1}, y_{0i_2 \cdots i_{t-1}1}]= [x_{0i_2 \cdots i_{t-1}}, y_{0i_2 \cdots i_{t-1}1}] \backslash \{ x_{0i_2 \cdots i_{t-1}} \}.$$
\end{itemize}
\end{defi}
The left maximal sequence of a non-zero path is defined dually. However, we do it explicitly for the convenience of the reader.
\begin{defi} Let $[r_0,s_0]$ be a non-zero path from $r_0$ to $s_0$ in $Q$. We define inductively the left maximal sequence of $[r_0,s_0]$ as follows. This is a finite sequence of non-zero paths $[r_{i_1i_2 \cdots i_t}, s_{i_1i_2 \cdots i_t}]$ with $i_1=0$ and $i_j \in \{ 0,1\}$ such that
\begin{itemize} \item[1)] Let $[r_{00},s_0]$,$[r_{01},s_0]$ be the maximal non-zero paths ending at $s_0$, where we agree that $ [r_0,s_0]$ is contained in $[r_{00},s_0]$. Then we set $$[r_{00},s_{00}]= [r_{00},s_0] \backslash [r_0,s_0]$$ and $$[r_{01}, s_{01}]= [r_{01},s_0] \backslash [r_0,s_0] = [r_{01},s_0] \backslash \{s_0\}.$$
\item[2)] Inductively, assume that $[r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}}]$ has been defined. Let $[r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}0}]$ and $[r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}1}]$ be the maximal non-zero paths ending at $s_{0i_2 \cdots i_{t-1}}$, where we agree that $[r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}}] $ is contained in $[r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}0}]$. Then we set $$[r_{0i_2 \cdots i_{t-1}0}, s_{0i_2 \cdots i_{t-1}0}]= [r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}0}] \backslash [r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}}]$$ and $$[r_{0i_2 \cdots i_{t-1}1}, s_{0i_2 \cdots i_{t-1}1}]= [r_{0i_2 \cdots i_{t-1}}, s_{0i_2 \cdots i_{t-1}1}] \backslash \{ s_{0i_2 \cdots i_{t-1}} \}.$$
\end{itemize} \end{defi}
\noindent Note that in both cases, some of the paths above might be empty and in this case, the points considered do not exist.
Our first result follows directly from the above definitions.
\begin{teo} \label{uniserial} Let $A=kQ/I$ be a string algebra.
\begin{itemize} \item[a)] If $[x_0,y_0]$ is a non-zero path in $Q$ and $$ \cdots \rightarrow P_3 \rightarrow P_2 \rightarrow P_1 \rightarrow P_0 \rightarrow M[x_0,y_0] \rightarrow 0$$ is a minimal projective resolution then, for $l \geq 1$,
$$ P_{l-1}= \bigoplus P(x_{i_1i_2 \cdots i_l })$$
\noindent where the direct sum is taken over all $l$-tuples $(0,i_2, \cdots,i_l)$ such that $ i_k \in \{ 0,1\}$ for all $k$ with $2 \leq k \leq l$ and the point $x_{0i_2 \cdots i_l }$ in definition 3.1 exists.
\item[b)] If $[r_0,s_0]$ is a non-zero path in $Q$ and $$ 0 \rightarrow M[r_0,s_0] \rightarrow I_0 \rightarrow I_1 \rightarrow I_2 \rightarrow I_3 \rightarrow \cdots $$ is a minimal injective coresolution then, for $l \geq 1$,
$$ I_{l-1}= \bigoplus I(s_{i_1i_2 \cdots i_l })$$
\noindent where the direct sum is taken over all $l$-tuples $(0,i_2, \cdots,i_l)$ such that $ i_k \in \{ 0,1\}$ for all $k$ with $2 \leq k \leq l$ and the point $s_{0i_2 \cdots i_l }$ in definition 3.2 exists.
\end{itemize} \end{teo}
\begin{Demo} We only prove a), since the proof of b) is dual.
\noindent Clearly, the projective cover of the uniserial module $M[x_0,y_0]$ is $P(x_0)$, whose support consists of the (at most two) maximal non-zero paths $[x_0,y_{00}]$ and $[x_0,y_{01}]$ starting at $x_0$. Then $$ \Omega^1 M[x_0,y_0]= M[x_{00},y_{00}] \oplus M[x_{01},y_{01}]$$
\noindent where $x_{00}$ and $x_{01}$ are defined as above. The rest follows from an easy induction. $
\mbox{$\square$}$
\end{Demo}
\begin{ejem} \label{example 1} Suppose that the string algebra is given by the following bound quiver.
\[\xymatrix@R12pt@C12pt{ \scriptscriptstyle 1 \ar[r] \ar@/^2ex/@{.}[rr]& \scriptscriptstyle 3 \ar[dr] \ar[r] \ar@/^2ex/@{.}[rrr] \ar@/^1ex/@{.}[rdr] \ar@/_4pc/@{.}[dddrrrrrru]&
\scriptscriptstyle 4 \ar[r] &\scriptscriptstyle 5 \ar[r] & \scriptscriptstyle 6\\ \scriptscriptstyle 2 \ar[ur] \ar@/^2ex/@{.}[rr]& & \scriptscriptstyle 7 \ar[r] \ar[dr]& \scriptscriptstyle 8 & & \scriptscriptstyle 11 \ar[r] \ar@/^2ex/@{.}[rr] &
\scriptscriptstyle 12 \ar[r] & \scriptscriptstyle 13\\ & & & \scriptscriptstyle 9
\ar[r]\ar@/^2ex/@{.}[rru] & \scriptscriptstyle 10 \ar[r] \ar[ur]\ar@/^3ex/@{.}[rur] & \scriptscriptstyle 14 \ar[r] &
\scriptscriptstyle 15 \ar[r] & \scriptscriptstyle 16 \ar[r] & \scriptscriptstyle 17 \\
& & & & & & & & &} \]
\noindent Here, and in the sequel, dotted lines indicate relations.
\noindent Considering the path $[x_0,y_0]=[3,9]$, the right maximal sequence is $$ [3,9]; [10,15], [4,5]; [16,17], [11,11]=\{11\}, [6,6]=\{6\};[12,12]=\{12\};[13,13]=\{13\}.$$
\noindent This sequence may be conveniently shown in the following diagram
\[\xymatrix@R12pt@C12pt{ [3,9] \ar@{-}[r] \ar@{-}[dr]& [4,5] \ar@{-}[r] & \{6\} & & &\\ & [10,15] \ar@{-}[r] \ar@{-}[dr]& \{11 \}\ar@{-}[r] & \{12\} \ar@{-}[r] & \{13\} & \\
& & [16,17] & & .&} \]
\noindent The minimal projective resolution of $M[3,9]$ is the following (compare with the above diagram)
$$ \begin{array}{c} 0 \rightarrow P(13) \rightarrow P(12) \rightarrow\\ \\ \rightarrow P(16) \oplus P(11) \oplus P(6) \rightarrow P(10) \oplus P(4) \rightarrow P(3) \rightarrow M[3,9] \rightarrow 0, \end{array}$$
\noindent where the morphisms are induced by the corresponding paths.
\noindent Similarly, taking $[r_0,s_0]=[3,9]$, the left maximal sequence is
\[\xymatrix@R12pt@C12pt{ \{1 \} \ar@{-}[r] & [3,9]&} \]
\noindent from which we deduce the minimal injective coresolution $$ 0 \rightarrow M[3,9] \rightarrow I(9) \rightarrow I(1) \rightarrow 0.$$
\end{ejem}
We are interested in computing resolutions of the indecomposable injective and projective modules. These modules are usually not uniserial, and neither, in general, are their first syzygies or cosyzygies, respectively. In order to apply Theorem \ref{uniserial}, the next lemma shows that we must start from the second syzygy or cosyzygy.
\begin{lema}
\begin{itemize} \item[a)]\label{syzygy} The second syzygy of an indecomposable injective module is the direct sum of at most six uniserial modules. \item[b)]\label{cosyzygy} The second cosyzygy of an indecomposable projective module is the direct sum of at most six uniserial modules. \end{itemize} \end{lema}
\begin{Demo} Let $I(c)$ be an indecomposable injective $A$-module. If $I(c)$ is uniserial, then there is nothing to prove because of Theorem \ref{uniserial}. Otherwise, let $\mathop{\rm top}\nolimits I(c)=S(a_0) \oplus S(a_1)$. Then the projective cover of $I(c)$ is $P(a_0) \oplus P(a_1)$. Let $[a_i, b_i]$ and $[a_i, b'_i]$ be the two maximal non-zero paths starting at $a_i$ (with $i=0,1$), where we agree that $[a_0,c]$ is contained in $[a_0,b'_0]$ and $[a_1,c]$ is contained in $[a_1,b'_1]$. Let $d_i$ be the direct successor of $a_i$ on the path $[a_i,b_i]$ then $$ \Omega^1 I(c)=M[d_0,b_0] \oplus M[d_1,b_1] \oplus M$$ \noindent where $M$ is an indecomposable module, usually non-uniserial, such that $\mathop{\rm top}\nolimits M=S(c)$ and $\mathop{\rm soc}\nolimits M= S(b'_0) \oplus S(b'_1)$. Hence, the projective cover of $\Omega^1 I(c)$ is $P(d_0) \oplus P(d_1) \oplus P(c)$, and $\Omega^2 I(c)$ is the direct sum of at most six uniserial modules obtained as follows.
\noindent Let $[d_i,b_{i0}], [d_i,b_{i1}]$ be the maximal non-zero paths starting in $d_i$ (with $i=0,1$), where we agree that $[d_i,b_i] $ is contained in $[d_i,b_{i0}]$. Then let
$$[d_{i0},b_{i0}]= [d_i,b_{i0}]\backslash [d_i,b_i]$$
$$[d_{i1},b_{i1}]= [d_i,b_{i1}]\backslash [d_i,b_i]= [d_i,b_{i1}] \backslash \{d_i\}.$$
\noindent Let also $[c,c_0]$ and $[c,c_1]$ be the maximal non-zero paths starting at $c$, where we agree that $[c,b'_0]$ is contained in $[c,c_0]$ and $[c,b'_1]$ is contained in $ [c,c_1].$
\noindent We let $$[c'_0,c_0]=[c,c_0] \backslash [c,b'_0]$$ and $$[c'_1,c_1]=[c,c_1] \backslash [c,b'_1].$$
\noindent It is then clear that \begin{eqnarray*} \Omega^2 I(c)& = & M[d_{00},b_{00}] \oplus M[d_{01},b_{01}] \\ & \oplus & M[d_{10},b_{10}] \oplus M[d_{11},b_{11}] \\ & \oplus & M[c'_0,c_0] \oplus M[c'_1,c_1] \end{eqnarray*} which establishes $a)$. Statement $b)$ is dual. $
\mbox{$\square$}$
\end{Demo}
\begin{coro}
\begin{itemize} \item[a)] \label{MinProjResInjective} Let $I(c)$ be an indecomposable injective module such that $\mathop{\rm top}\nolimits (I(c))= S(a_0) \oplus S(a_1)$. Then $I(c)$ has the following minimal projective resolution
{$$\begin{array}{c} \dots \rightarrow \bigoplus_{j; (0, i_2,i_3)} P(x^j_{0 i_2 i_3}) \rightarrow \bigoplus_{j; (0,i_2)} P(x^j_{0 i_2}) \rightarrow \bigoplus_j P(x^j_{0}) \rightarrow\\ \\
\rightarrow
P(d_0)\oplus P(c) \oplus P(d_1) \rightarrow P(a_0) \oplus P(a_1) \rightarrow I(c)\rightarrow 0 \end{array}$$ }
\noindent with the morphisms induced by the paths, where $\{ x_0^j \, | \, 1 \leq j \leq 6 \}= \{d_{00},d_{01},d_{10},d_{11}, c'_{0},c'_1 \}$ and $ i_j \in \{0,1 \}$.
\item[b)] Let $P(z)$ be an indecomposable projective module
such that $\mathop{\rm soc}\nolimits (P(z))= S(w_0) \oplus S(w_1)$. Then $P(z)$ has the following minimal injective coresolution $$ \begin{array}{c} 0 \rightarrow P(z) \rightarrow I(w_0) \oplus I(w_1) \rightarrow I(v_1)\oplus I(z) \oplus I(v_2) \rightarrow \bigoplus_j I(s^j_{0}) \rightarrow \\ \\ \rightarrow \bigoplus_{j; (0,i_2)} I(s^j_{0 i_2}) \rightarrow \bigoplus_{j; (0,i_2,i_3)} I(s^j_{0 i_2 i_3}) \rightarrow
\dots\end{array} $$
\noindent with the morphisms induced by the paths, where $\{ s_0^j \, | \, 1 \leq j \leq 6 \}$ are as above and $ i_j \in \{0,1 \}$.
\end{itemize} \end{coro}
\begin{Demo} This follows from Lemma \ref{syzygy} and Theorem \ref{uniserial}. $
\mbox{$\square$}$
\end{Demo}
\begin{coro} With the above notations \begin{itemize} \item[a)] All the points $x^j_0, x^j_{0i_2}, \cdots, x^j_{0i_2\dots i_l}$ are targets of relations.
\item[b)] All the points $s^j_0, s^j_{0i_2}, \cdots, s^j_{0i_2\dots i_l}$ are sources of relations. \end{itemize} \end{coro} \begin{Demo} This follows from the construction of these points. $
\mbox{$\square$}$ \end{Demo}
\section{ The top of the higher relation bimodule}
\begin{defi} Let $A$ be a finite dimensional algebra of finite global dimension. The $A-A$-bimodule $ \bigl( \bigoplus_{i \geq 2}\mathop{\rm Ext}\nolimits^i_A(DA,A)\bigr) $ with the natural action is called the {\bf higher relation bimodule}.
The trivial extension $$A \ltimes \bigl( \bigoplus_{i \geq 2}\mathop{\rm Ext}\nolimits^i_A(DA,A)\bigr)$$
\noindent of $A$ by its higher relation bimodule is called the {\bf higher relation extension} of $A$.
\end{defi}
If $\mathop{\rm gldim}\nolimits A \leq 2$, then the higher relation extension of $A$ coincides with its relation extension, as defined in \cite{ABS}.
Our objective in this section is to construct the ordinary quiver of the higher relation extension of a string algebra $A$ of finite global dimension.
As mentioned in the introduction, we also assume that the ordinary quiver $Q_A$ of $A$ is a tree.
Let thus $A=kQ/I$ be a string algebra, with $Q$ a tree and $M$ be an $A-A$-bimodule. We have $$\mathop{\rm rad}\nolimits M=M(\mathop{\rm rad}\nolimits A) + (\mathop{\rm rad}\nolimits A)M $$ and then $$\mathop{\rm top}\nolimits M= M/ [M(\mathop{\rm rad}\nolimits A) + (\mathop{\rm rad}\nolimits A)M].$$
If $M= \bigoplus_{i \geq 2}\mathop{\rm Ext}\nolimits^i_A(DA,A)$, then, clearly, $\mathop{\rm top}\nolimits M= \bigoplus_{i \geq 2} \mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^i_A(DA,A)$. In order to describe this top, we start by describing the modules $\mathop{\rm top}\nolimits_A \mathop{\rm Ext}\nolimits^i_A(I(c),A)$ and $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^i_A(DA,P(z))_A$ for all points $c,z \in (Q_A)_0$.
In the following, we use the notation of section 3.
\begin{prop}\label {Ext(I,A)} Let $A=kQ/I$ be a string tree algebra and $l \geq 0$. Then $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(z)) \ne 0$ if and only if one of the following two conditions hold: \begin{itemize} \item[a)] there exists a non-zero path $\omega: z \rightsquigarrow x^j_{i_1i_2 \cdots i_{l+1}}$ not passing through $x^j_{i_1i_2 \cdots i_l}$ and whose compositions with $x^j_{i_1i_2 \cdots i_{l+1}}\rightsquigarrow x^j_{i_1i_2 \cdots i_{l+2}}$ are both zero. \item[b)] $z=x^j_{i_1i_2 \cdots i_l}$ and $x^j_{i_1i_2 \cdots i_{l}0}, \, x^j_{i_1i_2 \cdots i_{l}1}$ both exist. In this case, a non-zero element is induced from the difference of the two paths $x^j_{i_1i_2 \cdots i_{l}}\rightsquigarrow x^j_{i_1i_2 \cdots i_{l}0}$ and $x^j_{i_1i_2 \cdots i_{l}}\rightsquigarrow x^j_{i_1i_2 \cdots i_{l}1}$.
\end{itemize} \end{prop}
\begin{obser} Observe that in case $(b)$, we have the following situation
\[\xymatrix{ & \scriptscriptstyle x^j_{i_1i_2\cdots i_{l-1}} \ar@{~>}^{v}[rr] & & \scriptscriptstyle z=x^j_{i_1i_2\cdots i_{l}} \ar@{~>}^{u_0}[rr] \ar@{~>}^{u_1}[drr] & &
\scriptscriptstyle x^j_{i_1i_2\cdots i_{l}0} \\ & & & & & \scriptscriptstyle x^j_{i_1i_2\cdots i_{l}1} } \]
\noindent where $vu_0,vu_1$ are zero paths. \end{obser}
\begin{Demo} Let $$ \cdots \rightarrow \oplus P(x^j_{i_1 \cdots i_{l+2}}) \stackrel{d_{l+3}}{\to} \oplus P(x^j_{i_1 \cdots i_{l+1}}) \stackrel{d_{l+2}}{\to} \oplus P(x^j_{i_1 \cdots i_{l}}) \rightarrow \cdots \rightarrow P_{c} \rightarrow I(c) \rightarrow 0$$ be a minimal projective resolution of $I(c)$. Recall that the morphisms $d_k$ are induced from the paths in $Q$.
\noindent If condition (a) holds then it follows from the definition of $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(z))$ that $\omega$ induces a non-zero element in $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(z))$.
\noindent If condition (b) holds, then $P_{l+2}=\oplus P(x^j_{i_1 \cdots i_{l+1}})$ has two indecomposable summands $P(x^j_{i_1 \cdots i_l0})$, $P(x^j_{i_1 \cdots i_l1})$, whose images $d(P(x^j_{i_1 \cdots i_l0}))$ and $d(P(x^j_{i_1 \cdots i_l1}))$ lie in the same indecomposable summand $P(x^j_{i_1 \cdots i_l})$ of $P_{l+1}$, together with two non-zero morphisms $\nu_i:P(x^j_{i_1 \cdots i_l i_{l+1}}) \rightarrow P(z)$ ($i_{l+1}=0,1$) such that there exist two morphisms $\gamma_i: P(x^j_{i_1 \cdots i_l}) \rightarrow P(z)$ with $\nu_i=\gamma_i d$.
\[\xymatrix{ P(x^j_{i_1 \cdots i_l0})\oplus P(x^j_{i_1 \cdots i_l1})\ar@<2pt>[r]^-{d}\ar@<-2pt>[d]_{\nu_1}\ar@<2pt>[d]^{\nu_2} & P(x^j_{i_1 \cdots i_l})\ar[dl]^{[\gamma_1,\gamma_2]} \\ P(z) }\]
In this case $[\nu_1 \, \,- \nu_2]^t: P(x^j_{i_1 \cdots i_l0}) \oplus P(x^j_{i_1 \cdots i_l1}) \rightarrow P(z)$ does not factor through $P(x^j_{i_1 \cdots i_l})$ because $\mathop{\rm dim_k}\nolimits \mathop{\rm Hom}\nolimits (P(x^j_{i_1 \cdots i_l0}) \oplus P(x^j_{i_1 \cdots i_l1}),P(z))=2$ while $\mathop{\rm dim_k}\nolimits \mathop{\rm Hom}\nolimits (P(x^j_{i_1 \cdots i_l}),P(z))=1$, since the algebra is a tree algebra. This shows that $\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),P(z)) \ne 0$.
Conversely, suppose that $\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),P(z))$ contains a non-zero element $[f]$. Then $[f]$ is in the class of a morphism $f: \oplus P(x^j_{i_1 \cdots i_{l+1}}) \rightarrow P(z)$ such that $f d_{l+3}=0$. Since $A$ is a tree string algebra, there are at most two indecomposable summands on which $f$ is non-zero, because otherwise there are non-zero paths from $z$ to three points $x^j_{i_1 \cdots i_{l+1}}$ and these induce a full subcategory of type $\mathbb{D}_4$ which contradicts the fact that $A$ is string. Thus we get a morphism $f: P(x^j_{i_1 \cdots i_{l+1}}) \oplus P(x^{j'}_{i'_1 \cdots i'_{l+1}}) \rightarrow P(z)$ which does not factor through $d_{l+2}$. If $z=x^j_{i_1 \cdots i_l}$ then we must have $j=j'$, $i_1=i'_1, \cdots, i_l=i'_{l}$ and $i_{l+1} \ne i'_{l+1}$. Suppose $z \ne x^j_{i_1 \cdots i_l}$. If both non-zero paths $z \rightsquigarrow x^j_{i_1 \cdots i_{l+1}}$, $z \rightsquigarrow x^{j'}_{i'_1 \cdots i'_{l+1}}$ which induce $f$ pass through $x^j_{i_1 \cdots i_l}$ then we have a contradiction to $A$ being string. If one non-zero path $z \rightsquigarrow x^j_{i_1 \cdots i_{l+1}}$ passes through $x^j_{i_1 \cdots i_l}$ then the other satisfies condition $(a)$. Indeed, the composition with
$x^{j'}_{i'_1i'_2 \cdots i'_{l+1}}\rightsquigarrow x^{j'}_{i'_1i'_2 \cdots i'_{l+2}}$ vanishes because our original path corresponds to an element of $\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),P(z))$. Similarly, if $z \rightsquigarrow x^j_{i_1 \cdots i_{l+1}}$ does not pass through neither $x^j_{i_1 \cdots i_{l}}$ nor $x^{j'}_{i'_1 \cdots i'_{l+1}}$ then both paths satisfy condition $(a)$. $
\mbox{$\square$}$ \end{Demo}
The following example illustrates condition (b). \begin{ejem} Let $A$ be given by the quiver
\[\xymatrix@R=10pt{ &&3\\ 1\ar[r] &2\ar[ru]\ar[rd] \\ &&4 }\] bound by $\textup{rad}^2A=0$. Then the minimal projective resolution of $I(1)$ is \[0\to P(3)\oplus P(4) \to P(2) \to P(1)\to I(1)\to 0. \] Let $j_1:P(3)\to P(2) $ and $j_2:P(4)\to P(2)$ be the canonical inclusions, then it is easily seen that the morphism \[ [j_1 \, \,\,\, \, -j_2 ]^t : P(3)\oplus P(4) \to P(2) \] induces a non-zero element of $\mathop{\rm Ext}\nolimits^2_A(I(1),P(2)).$ \end{ejem}
\begin{coro} Assume $A$ is a gentle tree algebra, then $\mathop{\rm Ext}\nolimits_A^{l+2} (I(c),P(z))\ne 0$ if and only if there exists a non-zero path $\omega: z \rightsquigarrow x^j_{i_1i_2 \cdots i_{l+1}}$ not passing through $x^j_{i_1i_2 \cdots i_l}$ and whose compositions with $x^j_{i_1i_2 \cdots i_{l+1}}\rightsquigarrow x^j_{i_1i_2 \cdots i_{l+2}}$ are both zero. \end{coro} \begin{Demo} Indeed, if $A$ is gentle, then condition (b) cannot occur as shown in the remark preceding the proof. $
\mbox{$\square$}$ \end{Demo}
\begin{coro} \label {exists} \begin{itemize} \item[a)] Let $\omega: z \rightsquigarrow x^j_{i_1 i_2 \cdots i_{l+1}}$ be a non-zero path as in Proposition \ref{Ext(I,A)} a). \begin{itemize} \item[a1)] Assume that a point $x^j_{i_1 i_2 \cdots i_{l+2}}$ exists. Then $\omega$ induces an element of the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$ if and only if $z$ is the starting point of a relation of the form $\omega \omega '$, where $\omega': x^j_{i_1 i_2 \cdots i_{l+1}} \rightsquigarrow x^j_{i_1 i_2 \cdots i_{l+2}}$.
\item[a2)] Assume that no point $x^j_{i_1 i_2 \cdots i_{l+2}}$ exists. Then $\omega$ induces an element of the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$ if and only if $z= x^j_{i_1 i_2 \cdots i_{l+1}} $ and $\omega $ is the stationary path in $z$. \end{itemize} \item[b)] In the situation of Proposition \ref{Ext(I,A)} b), the class of the difference of the paths $x_{i_1\ldots i_l}^j\rightsquigarrow x^j_{i_1 \ldots i_l0}$ and $x_{i_1\ldots i_l}^j\rightsquigarrow x^j_{i_1 \ldots i_l1}$ in $\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),P(z))$ lies in the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$ if and only if there are two minimal relations $z\rightsquigarrow x^j_{i_1\ldots i_l0} \rightsquigarrow x^j_{i_1 \ldots i_l 0 i_{l+2}} $ and $ z\rightsquigarrow x^j_{i_1 \ldots i_l 1} \rightsquigarrow x^j_{i_1 \ldots i_l 1 i_{l+2}}$. \end{itemize}\end{coro}
\begin{Demo} \begin{itemize} \item[a1)] The morphism $f:P_{l+2} =\oplus P(x^j_{i_1\ldots i_li_{l+1}}) \rightarrow P(z)$ induced by $\omega$ factors through $P(s)$ where $s$ is the source of a relation ending at $x^j_{i_1 i_2 \cdots i_{l+2}}$ and such that $s$ lies on the path $\omega$. So, $f$ induces an element on the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$ if and only if $s=z$.
\item[a2)] This follows from the fact that the morphism $f:P_{l+2}=\oplus P(x^j_{i_1\ldots i_li_{l+1}}) \rightarrow P(z)$ factors through the identity of $P(x^j_{i_1 i_2 \cdots i_{l+1}})$.
\item[b)] Let $f$ be a representative of the class of the difference of paths $x_{i_1\ldots i_l0}^j\rightsquigarrow x^j_{i_1 \ldots i_l}$ and $x_{i_1\ldots i_l 1}^j\rightsquigarrow x^j_{i_1 \ldots i_l}$ in $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(x^j_{i_1 \ldots i_l}))$. Then $$f=[f_0 \, \, \,f_1 \, \,\,0]:P(x^j_{i_1\ldots i_l0})\oplus P(x^j_{i_1 \ldots i_l 1}) \oplus \overline{P}_{\ell+2}\longrightarrow P(x^j_{i_1 \ldots i_l}).$$ Suppose first that there is no relation $z\rightsquigarrow x^j_{i_1\ldots i_l0} \rightsquigarrow x^j_{i_1 \ldots i_l 0 i_{l+2}} $. Then any relation ending at $ x^j_{i_1 \ldots i_l 0 i_{l+2}} $ must start at a successor $y$ of $z$. Therefore there exists $g:P( x^j_{i_1\ldots i_l0})\to P(y)$ such that $f_0$ factors through $g$, whence $ f=[hg \, \, \, f_1 \, \, \,0]$, for some morphism $h$. So $[f]$ is not in the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$.
Conversely, if we have two minimal relations as in the statement, and $[f]$ is not in the top of $_A\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), A)$, then $[f]=[h][g]$ for some $[g]\in \mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(y))$, which is represented by a morphism \[g:P(x^j_{i_1\ldots i_l0})\oplus P(x^j_{i_1 \ldots i_l 1}) \longrightarrow P(y).\] Then $y$ lies on the non-zero path $z\rightsquigarrow x_{i_1 \ldots i_l 0}^j$ or $z\rightsquigarrow x_{i_1 \ldots i_l 1}^j$ (or both) and $y\ne z$. But then $g\,d_{l+3,0}:P( x_{i_1 \ldots i_l 0 i_{l+2}}^j)\to P(y)$ is non-zero, because it is given by the non-zero path $y\rightsquigarrow x_{i_1 \ldots i_l 0}^j \rightsquigarrow x_{i_1 \ldots i_l 0 i_{l+2}}^j$, and this contradicts the fact that $[g]$ belongs to $\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),P(y))$. $
\mbox{$\square$}$ \end{itemize}
\end{Demo}
We summarise the results in the theorem below.
{ For each point $c$ in a string algebra $A=kQ/I$, we compute the minimal projective resolution of $I(c)$ given in Corollary \ref{MinProjResInjective}. Then for all $l \geq 0$, the $l+2$-nd term in the minimal projective resolution of $I(c)$ is given by $P_{l+2}= \bigoplus_{j, (i_1,i_2, \cdots , i_{l+1}) } P(x^j_{i_1 i_2 \cdots i_{l+1}})$. }
{Whenever the point $x^j_{i_1 i_2 \cdots i_{l+2}}$ exists, let $z^j_{i_1 i_2 \cdots i_{l+2}}$ be the source of the relation ending in $x^j_{i_1 i_2 \cdots i_{l+2}}$ and passing through $x^j_{i_1 i_2 \cdots i_{l+1}}$.}
For each $j \in \{ 1, \cdots, 6\}$ and for each $l \geq 0$, define $$\mathbf{Z}^j_{i_1 i_2 \cdots i_{l+1}} = \left\{\begin{array}{ll} z^j_{i_1 i_2 \cdots i_{l+1} 0} &\textup{if $x^j_{i_1 i_2 \cdots i_{l+1} 0}$ exists;} \\ x^j_{i_1 i_2 \cdots i_{l+1} } &\textup{otherwise,} \end{array}\right.$$ and let \[\zeta_{i_1\ldots i_{l+1}}^j = \left\{\begin{array}{ll} [x^j_{i_1 \cdots i_l}, \mathbf{Z}^j_{i_1 i_2 \cdots i_{l+1}} ] &\textup{if $x^j_{i_1i_2 \cdots i_{l}0}, \, x^j_{i_1i_2 \cdots i_{l}1}$ both exist;} \\ \left[x^j_{i_1 \cdots i_l}, \mathbf{Z}^j_{i_1 i_2 \cdots i_{l+1}}\right] \setminus \{ x^j_{i_1 \cdots i_l} \} & \textup{otherwise.} \end{array}\right. \] Dually, whenever the point $s^j_{i_1 i_2 \cdots i_{l+2}}$ exists, let $c^j_{i_1 i_2 \cdots i_{l+2}}$ be the target of the relation starting in $s^j_{i_1 i_2 \cdots i_{l+2}}$ and passing through $s^j_{i_1 i_2 \cdots i_{l+1}}$. For each $j \in \{ 1, \cdots, 6\}$ and for each $l \geq 0$, define $$\mathbf{C}^j_{i_1 i_2 \cdots i_{l+1}} = \left\{\begin{array}{ll} c^j_{i_1 i_2 \cdots i_{l+1} 0} &\textup{if $s^j_{i_1 i_2 \cdots i_{l+1} 0}$ exists;} \\ s^j_{i_1 i_2 \cdots i_{l+1} } &\textup{otherwise,} \end{array}\right.$$ and let
\[\Theta_{i_1\ldots i_{l+1}}^j = \left\{\begin{array}{ll} [ \mathbf{C}^j_{i_1 i_2 \cdots i_{l+1}}, s^j_{i_1 \cdots i_l}] &\textup{if $s^j_{i_1i_2 \cdots i_{l}0}, \, s^j_{i_1i_2 \cdots i_{l}1}$ both exist;}\\ \left[ \mathbf{C}^j_{i_1 i_2 \cdots i_{l+1}}, s^j_{i_1 \cdots i_l} \right] \setminus \{ s^j_{i_1 \cdots i_l} \} &\textup{otherwise.} \end{array}\right. \]
\begin{teo} \label{left top}Let $A=kQ/I$ be a string tree algebra and
$l \geq 0$. The following are equivalent \begin{itemize} \item[{\rm (a)}] $ \mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(z)) \ne 0$; \item[{\rm (b)}] there exists $j$ such that $z \in \zeta^j_{i_1 i_2 \cdots i_{l+1}}$; \item[{\rm (c)}] there exists $j$ such that $c \in \Theta^j_{i_1 i_2 \cdots i_{l+1}}$. \end{itemize} \end{teo}
\begin{Demo} The equivalence of (a) and (b) follows from Proposition \ref{MinProjResInjective} and from the definition of $\zeta^j_{i_1 i_2 \cdots i_{l+1}}$, using the fact that if both points $x^j_{i_1 i_2 \cdots i_{l+2}}$ exist then we have the following situation in the quiver
\[\xymatrix { \scriptscriptstyle \cdots \ar@/_3ex/@{.}[rrrrrr] & \scriptscriptstyle x^j_{i_1 \cdots i_l} \ar[r] & \scriptscriptstyle \cdots \ar[r] & \scriptscriptstyle z \cdots \ar[r] & \scriptscriptstyle z^j_{l,i_1 i_2 \cdots i_{l+1}0} \ar[r]
\ar@/^2ex/@{.}[rrr]&
\scriptscriptstyle \cdots \bullet \ar@/_1ex/@{.}[rdr] \ar[r] & \scriptscriptstyle {x^j_{i_1 \cdots i_{l+1}}} \cdots \ar[r] \ar[dr]& \scriptscriptstyle x^j_{i_1 \cdots i_{l+1}0} \\ & & & & & & & \scriptscriptstyle x^j_{i_1 \cdots i_{l+1}1}. } \] The equivalence of (a) and (c) follows from the dual argument. $
\mbox{$\square$}$
\end{Demo}
\begin{obser} One can easily compute the top of $_A\mathop{\rm Ext}\nolimits^{l+2}_A(I(c),A)$ using Corollary \ref{exists}. \end{obser}
\section{The quiver of the higher relation extension}
{Knowing how to compute $\mathop{\rm top}\nolimits\, _A \mathop{\rm Ext}\nolimits_A^{i} (I(c), A)$ and $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits_A^{i} (DA, P(z))_A$ allows us to find the new
arrows of the higher relation extension of a string tree algebra $A$ since they are in bijection with a basis of }
\begin{eqnarray*} \mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits_A^i(DA,A)&=& \mathop{\rm Ext}\nolimits_A^i(DA,A)/\mathop{\rm rad}\nolimits (\mathop{\rm Ext}\nolimits_A^i(DA,A))\\ &=& \displaystyle \frac{\mathop{\rm Ext}\nolimits_A^i(DA,A)}{(\mathop{\rm rad}\nolimits A)\mathop{\rm Ext}\nolimits_A^i(DA,A) + \mathop{\rm Ext}\nolimits_A^i(DA,A) (\mathop{\rm rad}\nolimits A)}. \end{eqnarray*}
{ Note that $\mathop{\rm Ext}\nolimits_A^i(DA,A). e_c= \mathop{\rm Ext}\nolimits_A^i(I(c),A)$ is a left $A$-module and that $e_z. \mathop{\rm Ext}\nolimits_A^i(DA,A) = \mathop{\rm Ext}\nolimits_A^i(DA,P(z))$ is a right $A$-module.}
{Given a right (left) $A$-module $M$, we denote by $P_i(M)$ the $i$-th term in a minimal projective resolution of $M$ and by $I_i(M)$ the $i$-th term in a minimal injective coresolution of $M$.}
{If we represent the elements of $\mathop{\rm Ext}\nolimits_A^i(DA,A)$ as classes $[f_{cz}]$ of morphisms $f_{cz}: P_i(I(c)) \rightarrow P(z)$ such that the composition of $f_{cz}$ with the map $P_{i+1}(I(c)) \rightarrow P_i(I(c))$ of the projective resolution is zero, then we are considering the left $A$-module structure of $\mathop{\rm Ext}\nolimits_A^i(DA,A)$. Therefore, $[f_{cz}]$ lies in $(\mathop{\rm rad}\nolimits A) \mathop{\rm Ext}\nolimits_A^i(DA,A)$ if and only if $[f_{cz}] \in \mathop{\rm Ext}\nolimits_A^i(I(c),A)$ lies in the radical of the left $A$-module $\mathop{\rm Ext}\nolimits_A^i(I(c),A)$.}
{In terms of morphisms, $[f_{cz}]$ is in $\mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits_A^i(I(c),A)$ if and only if $f_{cz}$ factors non-trivially through another morphism $f_{cy}: P_i(I(c)) \rightarrow P(y)$ such that the following diagram is commutative }
\[\xymatrix{ P_i(I(c)) \ar[rr]^{f_{cz}} \ar[dr]^{f_{cy}} & & P(z) \\ & P(y)\ar[ur]^h & } \]
{\noindent where the map $h$ is given by the left-multiplication by a path in $Q$ from $z$ to $y$, and the composition of $f_{cy}$ with $P_{i+1}(I(c)) \rightarrow P_i(I(c))$ is zero.}
{Dually, we can represent the elements of $\mathop{\rm Ext}\nolimits_A^i(DA,A)$ as classes $[g_{cz}]$ of morphisms $g_{cz}: I(c) \rightarrow I_i(P(z))$
such that the composition of $g_{cz}$ with the map $I_{i}(I(c)) \rightarrow I_{i+1}(I(c))$ of the injective coresolution is zero. This corresponds to the right $A$-module structure of $\mathop{\rm Ext}\nolimits_A^i(DA,A)$. Therefore, $[g_{cz}]$ lies in $\mathop{\rm Ext}\nolimits_A^i(DA,A)(\mathop{\rm rad}\nolimits A)$ if and only if $[g_{cz}] \in \mathop{\rm Ext}\nolimits_A^i(DA,P(z))$ lies in the radical of the left $A$-module $\mathop{\rm Ext}\nolimits_A^i(DA,P(z))$.}
{ In terms of morphisms, $[g_{cz}]$ is in $\mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits_A^i(DA,P(z))$ if and only if $g_{cz}$ factors non-trivially through another morphism $g_{bz}: I(b) \rightarrow I_i(P(z))$ such that the following diagram is commutative }
\[\xymatrix{ I(c) \ar[rr]^{g_{cz}} \ar[dr]^{h'} & & I_i(P(z)) \\ & I(b)\ar[ur]^{g_{bz}} & } \]
{\noindent where the map $h'$ is given by the right-multiplication
by a path in $Q$ from $b$ to $c$, and the composition of $g_{bz}$ with $I_{i}(P(z)) \rightarrow I_{i+1}(P(z))$ is zero.}
{Moreover there is an isomorphism of vector spaces}
$$ LR: \bigoplus_{c \in Q_0} \mathop{\rm Ext}\nolimits_A^i(I(c),A) \longrightarrow \bigoplus_{z \in Q_0} \mathop{\rm Ext}\nolimits_A^i(DA,P(z))$$
{ \noindent such that $LR([f_{cz}])$ and $f_{cz}$ induce the same class in $\mathop{\rm Ext}\nolimits_A^i(DA,A)$. Thus, $[f_{cz}]$ is in $\mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits_A^i(DA,A)$ if and only if $[f_{cz}] \in \mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits_A^i(I(c),A)$ or $LR([f_{cz}]) \in \mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits_A^i(DA,P(z))$.}
\begin{algo}\label{algo} \ \begin{itemize} \item[ $ \bullet$] Compute $\mathop{\rm top}\nolimits _A\mathop{\rm Ext}\nolimits^i(I(c),A)$ for all $c \in Q_0$ using Theorem \ref{left top} and Corollary \ref{exists}. (For efficiency we can restrict to the points that are the source or the target of a relation because of Corollary 4.6 and Corollary 3.7).
\item [$ \bullet$ ] For each $c,z \in Q_0$, let $\{ \rho_{cz1}, \rho_{cz2}, \cdots \}$ be a basis for $e_z . \mathop{\rm top}\nolimits _A\mathop{\rm Ext}\nolimits^i(I(c),A). $
\item [$ \bullet$] Let $B_0^i= \{ \rho_{czj}: \, \, c,z \in Q_0 , \,c$ the source or target of relations\} be the set that spans the vector space $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^i(DA,A)$.
\item[$ \bullet$] Compute $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^i(DA,P(z))_A$ for each $z$ such that $\rho_{czj} \in B_0^i$ using Theorem \ref{left top} and the dual statements of Corollary \ref{exists}.
\item [$ \bullet$] A basis of $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^i(DA,A)$ is
$$ B^i=B^i_0 \setminus \{\rho_{czj} \in \mathop{\rm rad}\nolimits \mathop{\rm Ext}\nolimits^i(DA,P(z))_A ; \, \, c,z,j \}.$$
\end{itemize} \end{algo}
Each element of $B^i$ has a triple subscript $czj$, and each such element gives rise to exactly one new arrow $z \rightarrow c$ in the quiver of the higher relation extension.
\begin{teo} {Let $A=kQ/I$ be a string tree algebra. Then the algorithm \ref{algo} computes two sequences $(c_l),(z_l)$ of vertices of $Q_A$ such that the arrows in the quiver of the higher relation extension are exactly those of $Q_A$ plus one additional arrow from each $z_l$ to $c_l$. } \end{teo} \begin{Demo} This follows from the discussion preceding the algorithm.
\mbox{$\square$} \end{Demo}
\begin{obser} The vertices $(c_l),(z_l)$ are not necessarily distinct, there may be repetitions. \end{obser}
\begin{ejem} Let $A=kQ/I$ be the string algebra given by the following bound quiver:
\[\xymatrix@R12pt@C12pt{\scriptscriptstyle1\ar@/^2ex/@{.}[rrr]\ar[r]&\scriptscriptstyle2\ar[r]& \scriptscriptstyle3\ar[r]&\scriptscriptstyle4.} \] Then there exists an element $\rho_{2,4}\in e_4. \mathop{\rm top}\nolimits _A\mathop{\rm Ext}\nolimits^2_A(I(2),A)$ which is not in $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits^2_A(DA,P(4))_A . e_2$ and therefore not in $ \mathop{\rm top}\nolimits_A\mathop{\rm Ext}\nolimits^2_A(DA,A)_A$. Thus the quiver of the higher relation extension is \[\xymatrix@R12pt@C12pt{\scriptscriptstyle1\ar[r]&\scriptscriptstyle2\ar[r]& \scriptscriptstyle3\ar[r]&\scriptscriptstyle4.\ar@/_2ex/[lll]} \]
\end{ejem}
\begin{ejem} In this example, the higher relation extension contains an $\mathop{\rm Ext}\nolimits^2$-arrow $5\to 1$ although there is no relation between the points 5 and 1. Let $A=kQ/I$ be the string algebra given by the bound quiver:
\[\xymatrix@R12pt@C12pt{ \scriptscriptstyle 1 \ar[r] \ar@/^2ex/@{.}[rr] & \scriptscriptstyle 2 \ar[r] \ar@/^2ex/@{.}[rr]&
\scriptscriptstyle 3 \ar[r] & \scriptscriptstyle 4 & & & . \\ & \scriptscriptstyle 5 \ar[ru] \ar@/_1ex/@{.}[rur] } \]
\noindent Then the quiver of the higher relation extension of $A$ is the following:
\[\xymatrix@R12pt@C12pt{ \scriptscriptstyle 1 \ar[r] & \scriptscriptstyle 2 \ar[r] &
\scriptscriptstyle 3 \ar[r] & \scriptscriptstyle 4 \ar@/_2ex/[lll] \ar@/^1ex/[dll] & & & .\\ & \scriptscriptstyle 5 \ar[ru] \ar@/^1ex/[ul]} \]
\end{ejem}
\begin{ejem} Let $A$ be the string algebra of Example \ref{example 1}. Then the quiver of the higher relation extension of $A$ is the following:
\[\xymatrix@R12pt@C12pt{ \scriptscriptstyle 1 \ar[r] & \scriptscriptstyle 3 \ar[dr] \ar[r] &
\scriptscriptstyle 4 \ar[r] &\scriptscriptstyle 5 \ar[r] & \scriptscriptstyle 6 \ar@/_2ex/[llll] \ar@/_/[ll]\\ \scriptscriptstyle 2 \ar[ur] & & \scriptscriptstyle 7 \ar[r] \ar[dr]& \scriptscriptstyle 8 \ar@/_/[lll]& & \scriptscriptstyle 11 \ar[r] &
\scriptscriptstyle 12 \ar[r] & \scriptscriptstyle 13 \ar@/_3ex/[lllld]\\ & & & \scriptscriptstyle 9
\ar[r] & \scriptscriptstyle 10 \ar[r] \ar[ur] & \scriptscriptstyle 14 \ar[r] &
\scriptscriptstyle 15 \ar[r] & \scriptscriptstyle 16 \ar@/^7ex/[dllllluu] \ar[r] & \scriptscriptstyle 17 \\
& & & & & & & & &.} \] \end{ejem}
\begin{ejem} This example illustrates the situation in Corollary 4.6 (b). Let $A=kQ/I$ be the string algebra given by the bound quiver:
\[\xymatrix@R12pt@C12pt{\scriptscriptstyle 1\ar@/^2ex/@{.}[rr] \ar@/_2ex/@{.}[drr]\ar[r] &\scriptscriptstyle 2\ar@/^2ex/@{.}[rr] \ar@/_3ex/@{.}[drr]\ar[r] \ar[dr]& \scriptscriptstyle 3\ar[r]&\scriptscriptstyle 4 \\ & & \scriptscriptstyle 5\ar[r]&\scriptscriptstyle 6 & & & .} \]
Then the quiver of the higher relation extension of $A$ is the following:
\[\xymatrix@R12pt@C12pt{\scriptscriptstyle 1 \ar@/_1ex/[r] &\scriptscriptstyle 2 \ar[l]\ar[r] \ar[dr]& \scriptscriptstyle 3\ar[r]&\scriptscriptstyle 4 \ar@/_3ex/[lll]\\ & & \scriptscriptstyle 5\ar[r]&\scriptscriptstyle 6 \ar@/^3ex/[ulll]& & & .} \] Note the existence of a 2-cycle. \end{ejem}
\section{The higher relation bimodule for gentle algebras}
{ Recall that a set of monomial relations $\{\kappa_i\}_{i=1,..,t}$ is called an {\bf overlapping} if the paths $\kappa_i$ and $\kappa_{i+1}$ have a common subpath $ \vartheta$ such that $\kappa_i= \vartheta_i \vartheta$ and $\kappa_{i+1}= \vartheta \vartheta_{i+1}$, for all $i=1,..,t-1$. A
{\bf maximal $t$-overlapping } is an overlapping $\{\kappa_i\}_{i=1,..,t}$ such that there exists no monomial relation $\kappa$ for which the set $\{ \kappa, \kappa_1, \cdots, \kappa_t \}$ or the set $\{ \kappa_1, \cdots, \kappa_t, \kappa \}$ is an overlapping, see \cite{GHZ,Gu}.}
\begin{lema} Let $\kappa= (\kappa_1, \cdots, \kappa_t)$ be the following maximal $t$-overlapping over a gentle algebra $A=kQ/I$:
\[ \xymatrix{ \scriptscriptstyle 1 \ar[r] \ar@/^2ex/@{.}[rr]^{\kappa_1} & \scriptscriptstyle 2 \ar[r] \ar@/^2ex/@{.}[rr]^{\kappa_2}&
\scriptscriptstyle 3 \ar[r] & \scriptscriptstyle 4 \ar[r] & \cdots &\scriptscriptstyle \bullet \ar@/^2ex/@{.}[rr]^{\kappa_{t-1}}
& \scriptscriptstyle t \ar[r] \ar@/^2ex/@{.}[rr]^{\kappa_t}
& \scriptscriptstyle t+1 \ar[r] & \scriptscriptstyle t+2 .} \]
\noindent Then, for the injective $I(1)$ associated to the vertex $1$, the sequence of $x_{i_1i_2\cdots i_t}$ is:
$$x_0=3, \, x_{00}=4, \, x_{000}=5, \, \cdots, x_{i_1i_2\cdots i_t}= x_{00 \cdots 0}= t+2.$$
\end{lema}
\begin{Demo} This follows from the construction of the points $x_{i_1i_2\cdots i_t}$ given in section 3. $
\mbox{$\square$}$
\end{Demo}
\begin{obser} Observe that there may be other points $x_{i_1i_2\cdots i_t}$ where some $i_j \ne 0$. In the Lemma we only consider one branch of the quiver which contains all the points $x_{00\cdots 0}$. \end{obser}
\begin{prop}\label{prop 6.3} For every maximal $t$-overlapping $\kappa= (\kappa_1, \cdots, \kappa_t)$ from $c$ to $z$ there is exactly one new arrow $\alpha(\kappa): z \rightarrow c$ in the higher relation extension which is induced by an element of $\mathop{\rm Ext}\nolimits_A^{t+1}(I(c), P(z))$ and these are the only new arrows in the higher relation extension. Moreover, we have the following relations: \begin{itemize} \item[(a)] $\alpha(\kappa) \alpha_1=0$ and $\alpha_{t+1}\alpha(\kappa)=0$, where $\alpha_1$ and $\alpha_{t+1}$ denote the first and the last arrow of $\kappa$; \item[(b)] $\zeta \rho \zeta'=0$, where $\zeta, \zeta'$ are new arrows and $\rho$ is a path consisting of old arrows. \end{itemize} \end{prop}
\begin{Demo} By Corollary 4.5, $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c), P(z)) \ne 0$ if and only if there is a non-zero path $\omega: z \rightsquigarrow x_{i_1i_2\cdots i_{l+1}} $ not passing through $x_{i_1i_2\cdots i_{l}}$ and such that the compositions with the non-zero paths $\omega'_{i_{l+2}}: x_{i_1i_2\cdots i_{l+1}} \rightsquigarrow x_{i_1i_2\cdots i_{l+2}}$ are both zero if $i_{l+2}$ exists, see figure.
\[\xymatrix{ & \scriptscriptstyle x_{i_1i_2\cdots i_{l}} \ar[rr] \ar@/^2ex/@{.}[rrrr] & & \scriptscriptstyle x_{i_1i_2\cdots i_{l+1}} \ar[rr] & &
\scriptscriptstyle x_{i_1i_2\cdots i_{l+2}} \\ & z \ar@{~>}[urr]& && & & &.} \]
But the previous Lemma implies that $x_{i_1i_2\cdots i_{l}} \rightarrow x_{i_1i_2\cdots i_{l+1}} \rightarrow x_{i_1i_2\cdots i_{l+2}}$ is a relation of length 2, contradicting that $A$ is gentle. Therefore $i_{l+2}$ does not exist, that is, $\mathop{\rm pd}\nolimits I(c)=l+2$. Then we have the situation
\[\xymatrix{ & \ar@/^2ex/@{.}[rrrr] & & \scriptscriptstyle x_{i_1i_2\cdots i_{l-1}} \ar[rr] \ar@/^2ex/@{.}[rrrr] & &
\scriptscriptstyle x_{i_1i_2\cdots i_{l}} \ar[rr]^{\alpha_{t+1}} &&
\scriptscriptstyle x_{i_1i_2\cdots i_{l+1}} \\ & & & z \ar@{~>}[urrrr]} \]
\noindent and $x_{i_1i_2\cdots i_{l+1}}$ is the target of an overlapping $\omega$.
\noindent Thus by Corollary 4.5, $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c),P(z)) \ne 0$ if and only if there is a non-zero path $\omega$ from $z$ to $x_{i_1i_2\cdots i_{l+1}}$ not passing through $x_{i_1i_2\cdots i_{l}}$. Then, by Corollary \ref{exists} a2), $\omega$ induces an element of the top of $_A\mathop{\rm Ext}\nolimits_A^i(I(c),A)$ if and only if $z=x_{i_1i_2\cdots i_{l+1}}$.
\noindent To check whether $\omega: z \rightarrow x_{i_1i_2\cdots i_{l+1}}$ induces an element of $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits_A^{l+2}(DA,A)$, we can apply the algorithm \ref{algo}. Hence, $\omega$ induces an element of the $\mathop{\rm top}\nolimits \mathop{\rm Ext}\nolimits_A^{l+2}(DA,A)$ if and only if $z$ is the starting point of the overlapping $\kappa$.
\noindent The result about the new arrows now follows from the algorithm.
Using the fact that $\mathop{\rm Ext}\nolimits_A^{l+2}(I(c),P(z)) \ne 0$ if and only if there is a non-zero path $\omega$ from $z$ to $x_{i_1i_2\cdots i_{l+1}}$ not passing through $x_{i_1i_2\cdots i_{l}}$ with $z=x_{i_1i_2\cdots i_{l}}$ shows that
$\alpha_{t+1} \alpha(\kappa)=0$. Dually, one proves that $\alpha(\kappa)\alpha_1=0$, and the relations of the form $\zeta\rho\zeta'$ occur since we are dealing with a trivial extension.
$
\mbox{$\square$}$ \end{Demo}
The following example shows that the higher relation extension of a gentle algebra is not necessarily gentle.
\begin{ejem} Let $A$ be given by the bound quiver \[\xymatrix{ 1\ar[r] \ar@{.}@/^10pt/[rr]&2\ar[r]&3&4\ar[l]^\rho\ar[r] \ar@{.}@/^10pt/[rr] &5 \ar[r]&6. } \] Then the higher relation extension coincides with the relation extension and has the quiver \[\xymatrix{ 1\ar[r] &2\ar[r]&3\ar@/_10pt/[ll]_{\zeta'}&4\ar[l]_\rho\ar[r] &5 \ar[r]&6\ar@/_10pt/[ll]_{\zeta} } \] bound by relations of length 2 and the relation $\zeta\rho\zeta'$, which is of length 3. \end{ejem}
\begin{coro} \label{tensor rel} The tensor algebra of the higher relation bimodule has the same quiver as the higher relation extension and has the relations in Proposition \ref{prop 6.3} (a). In particular its relation ideal is quadratic.
\mbox{$\square$} \end{coro}
\section{The tensor algebra of a gentle algebra}
\begin{teo} Let $A$ be a gentle algebra. \begin{itemize} \item[(a)] The tensor algebra $T_A(\bigoplus_{i \geq 2} \mathop{\rm Ext}\nolimits_A^i(DA,A))$ is gentle. \item[(b)] The higher relation extension $A \ltimes (\bigoplus_{i \geq 2} \mathop{\rm Ext}\nolimits_A^i(DA,A))$ is monomial. \end{itemize} \end{teo}
\begin{Demo} Since the universal cover of a gentle algebra is a gentle tree, we may assume that $A$ is a tree. We prove the conditions S1), S2), S3), G1) and G2) of section 2.
\begin{itemize} \item[S2)] At every point there are at most two incoming arrows (dually, at most two outgoing arrows).
Suppose there are three arrows $\alpha, \beta, \gamma$ with target $x$ (see figure)
\[\xymatrix@R10pt { \ar[rd]^{\alpha}& & \\
& \scriptscriptstyle x & \ar[l]^{\gamma} \\ \ar[ru]_{\beta} & & } \]
Then at least one, say $\gamma$, is a new arrow. Hence, $\gamma$ corresponds to an overlapping $\omega=(\omega_1, \omega_2, \cdots)$ with source $x$ and there is no relation involving $\alpha$ or $\beta$ and overlapping with $\omega_1$, that is,\[\xymatrix@R12pt@C12pt{ \scriptscriptstyle \bullet \ar[rd]^{\alpha}& & & \\
& \scriptscriptstyle x \ar[r] \ar@/^2ex/@{.}[rr]^{\omega_1} & \scriptscriptstyle \bullet \ar[r]
& \scriptscriptstyle \bullet \\ \scriptscriptstyle \bullet \ar[ur]_{\beta} & & &} \]
Because $A$ is gentle, at least one of the arrows $\alpha$ and $\beta$ is new. Assume $\alpha $ is old and $\beta$ is new.
Then we have two such overlappings $\omega, \, \omega'$ and no relation involving $\alpha$ and overlapping with $\omega$ or $\omega'$, that is we have the following situation in the bound quiver of $A$. \[\xymatrix { \scriptscriptstyle \bullet \ar[rd]^{\alpha}& & & \\
& \scriptscriptstyle x \ar[r] \ar[d] \ar@/^2ex/@{.}[rr]^{\omega_1} \ar@/^2ex/@{.}[dd]^{\omega'} & \scriptscriptstyle \bullet \ar[r]
& \scriptscriptstyle \bullet \\
& \scriptscriptstyle \bullet \ar[d] & & \\
& \scriptscriptstyle \bullet & &} \] which yields a contradiction.
Finally, if all three arrows $\alpha, \beta, \gamma$ are new, we get three overlappings starting at $x$. Because $A$ is gentle, condition G1) implies that we have three arrows having $x$ as a source, a contradiction.
\item[S1,G2)] Suppose we have a minimal relation involving at least two paths in the sense of \cite{MP}. Then, in the higher relation extension we have at least two paths $c_1, c_2$ starting and ending at the same point with at least one new arrow in each of these paths. Let $c_i= c_{i1} \alpha_i c_{i2}$ where $\alpha_i$ is a new arrow, $i=1,2$.
Assume first that there is exactly one new arrow on each path $c_i$. Then each $\alpha_i$ corresponds to an overlapping $\omega_i$ in $A$ starting at the target of $\alpha_i$ and ending at its source, and this contradicts the assumption that $A$ is a tree.
If $c_i$ contains several new arrows, the same argument as before applies.
This shows that the higher relation extension of $A$ is monomial and hence that the tensor algebra is also monomial, and even has a quadratic relation ideal, because of Corollary \ref{tensor rel}.
\item[S3)] Suppose we have the following subquiver \[\xymatrix@R10pt{ &\scriptscriptstyle \bullet \ar[dr]^\alpha\\ & & \scriptscriptstyle x \ar[r]_\gamma & \scriptscriptstyle \bullet\\ & \scriptscriptstyle \bullet \ar[ur]_\beta &} \] such that $\alpha \gamma$ and $\beta \gamma$ are not in the ideal $I$ of the tensor algebra $T_A(\bigoplus_{i \geq 2} \mathop{\rm Ext}\nolimits_A^i(DA,A))$. Then one of the three arrows is new. First assume that $\gamma$ is a new arrow; then $\gamma$ corresponds to an overlapping $\omega$ ending at $x$, which implies that $\alpha$ or $\beta$ must be a new arrow, say $\beta$, which corresponds to another overlapping $\omega'$ starting at $x$. But then the last arrow of $\omega$ and the first arrow of $\omega'$ are not bound by a relation, and also $\alpha$ is not bound by a relation with the first arrow of $\omega'$. This contradicts $A$ being gentle.
Suppose now that $\alpha$ is a new arrow corresponding to an overlapping $\omega$ starting at $x$. Because of the first case, we may assume that $\gamma$ is not new. Since $\beta$ is not bound by a relation with the first arrow in $\omega$, it must be with $\gamma$, contradicting the assumption $\beta \gamma \notin I$.
\item [G1)] Suppose we have a subquiver \[\xymatrix@R10pt{ &\scriptscriptstyle \bullet \ar[dr]^\alpha\\ & & \scriptscriptstyle x \ar[r]_\gamma & \scriptscriptstyle \bullet\\ & \scriptscriptstyle \bullet \ar[ur]_\beta &} \] such that $\alpha \gamma$ and $\beta \gamma$ are in the relation ideal of the tensor algebra $T_A(\bigoplus_{i \geq 2} \mathop{\rm Ext}\nolimits_A^i(DA,A))$. If $\gamma$ is a new arrow corresponding to an overlapping $\omega_{\gamma}$ ending at $x$, then $\alpha$ or $\beta$ must be new, say $\beta$, corresponding to an overlapping $\omega_{\beta}$ as above, which is bound by no relation with $\alpha$. It follows from our description of the bound quiver that the new arrow $\beta$ is not bound by a relation with $\gamma$, because $\gamma$ is not in the overlapping $\omega_{\beta}$, and this is a contradiction. \mbox{$\square$}
\end{itemize} \end{Demo}
ACKNOWLEDGMENTS: The first author gratefully acknowledges support from the NSERC of Canada, the FQRNT of Qu\'ebec and the Universit\'e de Sherbrooke, the second author gratefully acknowledges support from the NSERC of Canada and the CONICET of Argentina, and the third author gratefully acknowledges support from NSF grant DMS-1001637 and the University of Connecticut.
\small \noindent I. Assem, D\'epartement de math\'ematiques, Universit\'{e} de Sherbrooke, Sherbrooke, Qu\'{e}bec, Canada, J1K 2R1. \\ [email protected]\\ M.A. Gatica, Departamento de Matem\'atica, Universidad Nacional del Sur, Avenida Alem 1253, (8000) Bah\'{\i}a Blanca, Buenos Aires, Argentina. \\ [email protected]\\ R. Schiffler, Department of Mathematics, University of Connecticut, 196 Auditorium Road, Storrs, CT 06269-3009, USA.\\ [email protected]
\end{document} | arXiv |
The Determinants of Brownfields Redevelopment in England
Alberto Longo & Danny Campbell
Environmental and Resource Economics volume 67, pages 261–283 (2017)

Abstract
This paper uses discrete choice models, supported by GIS data, to analyse the National Land Use Database, a register of more than 21,000 English brownfields—previously used sites with or without contamination that are currently unused or underused. Using spatial discrete choice models, including the first application of a spatial probit latent class model with class-specific neighbourhood effects, we find evidence of large local differences in the determinants of brownfields redevelopment in England and that the reuse decisions of adjacent sites affect the reuse of a site. We also find that sites with a history of industrial activities, large sites, and sites that are located in the poorest and bleakest areas of cities and regions of England are more difficult to redevelop. In particular, we find that the probability of reusing a brownfield increases by up to 8.5 % for a site privately owned compared to a site publicly owned and between 15 and 30 % if a site is located in London compared to the North West of England. We suggest that local tailored policies are more suitable than regional or national policies to boost the reuse of brownfield sites.
Introduction
Urban sprawl and the development of greenfield sites that most developed countries are currently facing have recently pushed governments in several countries, including the EU, the US, Russia and China, to reuse previously developed land. The reuse of previously developed land can also improve the attractiveness of an area through the removal of neighbourhood eyesores, the clean-up of contamination if present, the creation of new jobs, new housing, or new commercial or industrial uses of the land, and, by keeping cities compact, can also conserve biodiversity and reduce energy consumption and greenhouse gas emissions (Alberini et al. 2005; Thornton et al. 2007; Williams 2012; Tang and Nathanail 2012; Wernstedt et al. 2013; Otsuka et al. 2013; Linn 2013; Dixon et al. 2006; Dixon and Adams 2008; Schulze Bäing and Wong 2012; Alberini 2007; Haninger et al. 2014).
There are more than 66,000 hectares of previously developed land, or brownfields in England, mostly located in the high-growth areas of greater London, the South East and East, that would be suitable to accommodate more than 200,000 new homes (DCLG 2015). The redevelopment of brownfields in England is, therefore, of major interest to planning agencies and developers.
In this paper we aim to investigate the determinants and barriers of brownfields reuse in England using econometric techniques supported by Geographical Information Systems (GIS) data. Specifically, our study addresses the English brownfields regeneration agenda with three related questions: (i) what (local) characteristics make a site more likely to be regenerated? (ii) has brownfields regeneration mostly occurred in city centres? (iii) should size and location specific policies be suggested to better tackle brownfields reuse?
Using data from the National Land Use Database for more than 21,000 brownfield sites, we explore the site characteristics that make a brownfield more likely to be regenerated. Specifically, we look at how the following variables have had an impact on the reuse of a site: previous use, site size, ownership type, whether the site is located in a city, a metropolis, or in rural areas, geographical location, and other geographically based variables, such as the population density and the index of deprivation of the area where the site is located, and the distance to the city centre. Our analysis aims at providing policy makers with indications of what has limited brownfields regeneration and what has favoured the reuse of previously developed sites in England. Our approach, based on observed data on the revealed preferences of local authorities' brownfields regeneration projects, sheds light on the successes and limitations of brownfields regeneration in England. Results from this analysis can provide guidance on where the government should act to lower barriers to brownfields regeneration.
Our analysis explores the effect of spatial unobserved variables affecting the decision to reuse brownfield sites by first using a spatial probit model that assumes the same spatial effect across all brownfields, and then by relaxing this assumption and allowing for different forms of unobserved spatial effects at local authority level using spatial random effects, spatial random parameters and spatial latent class models.
We find that, despite the apparent success achieved by the national brownfields policy, most brownfields redevelopment has happened in "easy brownfields". More resources, attention and specific policies are needed to redevelop "difficult brownfields", such as large sites, sites that have previously been used for commercial and industrial activities, and sites located in the poorest and bleakest areas of cities and regions of England. Our spatial models find that brownfields reuse decisions are considerably affected by unobserved heterogeneity at local authority level, indicating that reuse decisions should exploit the specific local characteristics of the areas where brownfields are located. This supports the existence of local planning agencies responsible for driving the reuse of brownfields at local authority level, rather than a national regeneration agency overseeing the reuse of brownfields.
The remainder of the paper is structured as follows: section two provides an overview of the brownfields regeneration policy, with a particular focus on England; section three reviews the literature on the barriers and drivers of brownfields redevelopment; section four describes the English dataset of previously developed land; section five presents the economic and econometric models; section six reports the results of the analysis; section seven concludes the paper with a discussion and some policy considerations.
Brownfields Regeneration in England
In England, a brownfield is previously used land—other than for agriculture—which is currently underused or unused (DCLG 2006), due to the presence of one or more factors, which may include contamination, affecting its reuse (English Partnerships 2006). Brownfield land is primarily the result of deindustrialization and suburbanization (Alker et al. 2000; Tang and Nathanail 2012), "it includes both vacant and derelict land and land currently in use with known potential for redevelopment. It excludes land that was previously developed where the remains have blended into the landscape over time" (ODPM 2005, p. 7). English brownfields do not necessarily have a history of contamination.
In England, the reuse of brownfields has been led by four political drivers to provide most new housing on previously developed land: (i) the high population density of England; (ii) the low population density of English cities, compared to average European densities; (iii) the large quantity of under-utilized land within urban areas; and, (iv) the population growth that will require more than two million new dwellings by the end of year 2020 (English Partnerships 2003).
When the "New" Labour Government came to power in 1997, it aimed at revitalising English urban centres through an "urban renaissance" and by building at least 60 % of new houses on brownfields or through the conversion of existing buildings by 2008 (DETR 2000c). In 2003, the Government launched the Sustainable Communities Programme (SCP) (ODPM 2003), which set up the goal of developing a new National Brownfield Strategy through the national regeneration agency, English Partnerships, and the Department for Communities and Local Government (DCLG 2009).
The initial results of the National Brownfields Strategy showed that, in 2005, 73 % of new dwellings were built on brownfields, but only 62 % of land for new housing was previously developed, mainly because urban houses are usually built at higher densities than those on pristine sites (Williams 2012). These considerations led the Coalition Government to implement major changes to brownfield development policy in 2010 by abolishing all existing regional housing and brownfield targets and making local planning authorities responsible for establishing the level and location of housing provision for the local area (Schulze Bäing and Wong 2012).
Information on brownfields is collected in the National Land Use Database (NLUD), which comprises records of parcels of vacant and derelict land and buildings as well as those currently in use with potential for redevelopment. Contamination, unfortunately, is not recorded in the NLUD. Land contamination is dealt with in Part 2A of the Environmental Protection Act (EPA), which came into force in England in 2000 and aims at identifying and regulating the remediation of contaminated land that causes significant harm to human health or the environment or where there is a significant possibility of such harm to happen. Under this regulation, local authorities have to produce strategies through their planning system for inspecting their area for contaminated land and for overseeing the remediation of contamination (Otsuka et al. 2013). In addition, the European Union's Environmental Liability Directive (2004/35/EC), transposed into English law through the Environmental Damage (Prevention and Remediation) Regulations in 2009 (SI 2009/153), requires the operator of a site where the environmental damage takes place to clean up any contamination caused by their activities.
Several studies have looked at the barriers and drivers of brownfields regeneration. Most of this literature originates from the U.S., where a brownfield is any "real property, the expansion, redevelopment, or reuse of which may be complicated by the presence or potential presence of a hazardous substance, pollutant, or contaminant" (Small Business Liability Relief and Brownfields Revitalization Act, 2002) [1]. Although the U.S. definition requires the presence or potential presence of contamination, it excludes heavily contaminated sites, as those sites either have to be listed or proposed to be listed on the National Priorities List, or are remediated under the Toxic Substances Control Act 1976. This means that underused or unused sites that are heavily contaminated, such as Superfund sites, or have no presence and no potential presence of contamination are not classified as brownfields in the U.S., whilst they are in England. The English definition of brownfields is, therefore, much more extensive than the U.S. definition, as it includes previously developed land—with or without any level of contamination—that is currently underused or unused.
Most of the studies investigating the reuse of brownfields have focussed on the U.S. experience, and, in particular, on the effectiveness of voluntary remediation programs (Bartsch and Collaton 1997; Dennison 1998; Eisen 1999; Meyer and Lyons 2000; Wernstedt 2004; Wernstedt et al. 2006a, b, 2013; De Sousa 2003, 2004; De Sousa et al. 2009; Greenberg et al. 2001; Schoenbaum 2002; Page and Berger 2006; Alberini 2007; Schwarz et al. 2009; Guignet and Alberini 2010; Chilton et al. 2009; Blackman et al. 2010; Linn 2013; Haninger et al. 2014).
Alberini (2007) analyses the determinants of participation in voluntary remediation programs in Colorado using a probit model on 432 brownfields. She finds that the main determinants for participation are the size of the site and the surrounding land use. Using a hedonic price model, she further finds that properties with confirmed contamination sell at a 43–56 % discount and that participation in voluntary remediation results in a partial to complete price recovery. Guignet and Alberini (2010) find that participation in voluntary remediation programs in Baltimore, Maryland, is more likely for larger, less capital intensive sites, and industrial sites located in industrial areas, rather than heavily built sites close to residential areas.
Blackman et al. (2010) use multinomial logit models to find that participation in voluntary remediation programs in Oregon attracted both heavily contaminated sites and less contaminated ones.
Page and Berger (2006) discern several differences when comparing brownfields in Texas and in New York. They find that Texas has a higher percentage of sites with prior and current industrial uses than New York, whilst New York brownfields are more likely to be abandoned or vacant at the time they enter the voluntary remediation program. Most of the Texas sites are in urban areas and in central cities. Whilst industrial uses account for most of the properties enrolled in both states' programs, suburban properties are more common in the New York program. They also find that sites that participate in voluntary remediation programs are on average smaller in New York than in Texas.
Schwarz et al. (2009) use California data to compare residential redevelopment for heavily contaminated sites subject to mandatory clean-up, Superfund sites, and less contaminated sites, eligible for voluntary remediation based on a risk based approach. They find less residential redevelopment at voluntary remediation sites compared to mandatory remediation sites, but also that sites with a higher probability of contamination are less likely to be redeveloped residentially, and more likely to be redeveloped industrially. They conclude that voluntary remediation programs based on a risk based approach are not well suited for boosting residential reuse of brownfields.
Linn (2013) uses a hedonic price model to study the effect of liability relief and brownfields redevelopment on the value of nearby properties in Cook County, Illinois, including the city of Chicago. After accounting for unobserved and time-varying variables that may be correlated with a certificate of liability relief, Linn finds that if a brownfield enters the Site Remediation Program in Illinois and is certified, the value of a property 0.25 miles away increases by about 1 %, compared to an otherwise identical property that is not affected by the entry and certification.
Haninger et al. (2014) investigate the effect of the U.S. Environmental Protection Agency Brownfields Program at federal level and find that sites that have been remediated are associated with an increase of about 4.9–11.1 % in surrounding property values.
In England, brownfields regeneration has been studied mostly by planners, geographers and engineers (Adams and Watkins 2002; BURA 2006; Syms 2004; Roberts and Sykes 2000; Urban Task Force 1999, 2005; Diamond and Liddle 2005; Dair and Williams 2006; Harrison and Davies 2002; Dixon et al. 2006; Dixon 2007, 2006; Dixon et al. 2011; Bardos et al. 2000; Adams 2004; Cozens et al. 1999; Adams et al. 2001, 2010; Tang and Nathanail 2012; Otsuka et al. 2013; Williams 2012; Dixon and Adams 2008; Thornton et al. 2007; Schulze Bäing and Wong 2012; the European projects BERI, CLARINET, CABERNET, RESCUE). Most of these studies have used qualitative data and in-depth case studies, whilst studies using quantitative data to derive policy recommendations for brownfields reuse in England are less common.
Dixon et al. (2006) administered a mail survey to 987 commercial and residential property developers underpinned by structured interviews with eleven developers. They find that financial and other incentives given by the government were the main drivers for brownfields development.
Tang and Nathanail (2012), using ANOVA, analyse the NLUD dataset of brownfields in England and find that local authorities with higher percentages of derelict or vacant land are located in deprived areas, i.e. areas that fare badly in terms of income, employment, health, education, housing, crime and living environment. They also find that the increased density of housing on brownfields did not significantly reduce socio-economic deprivation at local authority level [2]. Williams (2012) also examines the NLUD database and finds that derelict and vacant sites are mainly located in the industrial areas of the Midlands and Northern Regions of England, which are areas often affected by contamination that lack infrastructure and are usually not economical to develop. English Partnerships (2008), using GIS analysis of the NLUD dataset, find a strong correlation between brownfields and socioeconomic deprivation and conclude that more than 20 % of brownfields are located inside the 10 % most deprived areas. Analysis of the same dataset by Schulze Bäing (2010) further finds that brownfields reuse in most deprived areas has occurred at levels comparable to least deprived areas only in the particularly buoyant property market conditions in England during the 2005/6 period. Schulze Bäing and Wong (2012) use ANOVA to analyse the effect of economic indicators and deprivation indexes on brownfields regeneration using the NLUD dataset and find that high levels of brownfields reuse have boosted the real estate market for apartments in the most deprived areas and improved socioeconomic indicators in those areas.
The National Land Use Database
This paper uses the data from the National Land Use Database (NLUD), which was created after the Government issued the policy document 'Planning for the Communities for the Future' (ODPM 1998). The NLUD initiative was a partnership project between Communities and Local Government, English Partnerships, the Improvement and Development Agency and Ordnance Survey. The database was created by the need to monitor the supply of brownfields to provide an adequate and strategic supply of land and buildings for housing and other economic activities. Data were provided on a yearly basis by local planning authorities that would collect information, such as geographical location, address, land use and planning attributes, for vacant and derelict sites and other previously developed land and buildings that might have been available for redevelopment in England. The format of the data has changed over the years to keep the database consistent with the changes in the legislation. In addition, in 2007, English Partnerships became part of the Homes and Communities Agency, the National Housing and Regeneration Agency. This makes it difficult to compare database entries across years, and we therefore limit our analysis to the year 2006 dataset.
Five land types are collected within the NLUD: (i) previously developed land that is now vacant; (ii) vacant buildings; (iii) derelict land and buildings; (iv) land or buildings currently in use and allocated in the local plan and/or having planning permission; and, (v) land or buildings currently in use where it is known there is potential for redevelopment (but the sites do not have any plan allocation or planning permission) (NLUD 2000, 2003; ODPM 2006). Each site entry records the address and the British National Grid geographical reference, the previous and current activities (commercial, industrial, housing, or other), the area, the planning status, the proposed use, whether the site is suitable for housing, the most suitable use, an estimate of the housing density, and the ownership type, either public or private. Unfortunately the NLUD does not collect information on contamination at the sites. In fact, the NLUD and the regime for contaminated land for England (DETR 2000a, b) are separate and distinct exercises. The identification and classification of brownfields in the NLUD makes no representation on the likely presence of contamination. Some local authorities volunteer this information in the NLUD, but most do not. Therefore, it would be inappropriate to consider the NLUD as a registry of contaminated sites. Where sites are to be redeveloped, the planning and development control process ensures that any potential risks associated with contamination are properly identified and cleaned up.
As the NLUD sites are geo-referenced, we are able to augment the database with Geographical Information Systems (GIS) data obtained from the Office for National Statistics, Communities and Local Government. This augmentation includes information on: the population density of the wards [3] where the sites are located, whether the site is located in a city, a metropolis or a rural area, the Index of Multiple Deprivation for 2004 [4] for the super output areas [5] where sites are located, and the distances to the central business district.
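As an illustration of this kind of GIS augmentation, the sketch below attaches ward-level and super-output-area-level attributes to geo-referenced sites with point-in-polygon joins. It is a minimal sketch, assuming hypothetical file names and column names (easting, northing, pop_density, imd_score); the actual NLUD and ONS layers use their own schemas, and this is not the authors' processing code.

```python
# A minimal sketch of augmenting geo-referenced brownfield records with
# ward- and super-output-area-level GIS attributes. All file paths and
# column names are hypothetical stand-ins.
import geopandas as gpd
import pandas as pd

# NLUD-style records with British National Grid coordinates (EPSG:27700)
sites = pd.read_csv("nlud_2006.csv")  # hypothetical extract
sites_gdf = gpd.GeoDataFrame(
    sites,
    geometry=gpd.points_from_xy(sites["easting"], sites["northing"]),
    crs="EPSG:27700",
)

# Ward boundaries carrying population density; super output areas carrying IMD
wards = gpd.read_file("wards.shp").to_crs("EPSG:27700")
soas = gpd.read_file("soa_imd2004.shp").to_crs("EPSG:27700")

# Point-in-polygon joins attach the neighbourhood covariates to each site
sites_gdf = gpd.sjoin(sites_gdf, wards[["geometry", "pop_density"]],
                      how="left", predicate="within").drop(columns="index_right")
sites_gdf = gpd.sjoin(sites_gdf, soas[["geometry", "imd_score"]],
                      how="left", predicate="within").drop(columns="index_right")

# Straight-line distance (km) from each site to its nearest city centre
centres = gpd.read_file("city_centres.shp").to_crs("EPSG:27700")
sites_gdf["dist_cbd_km"] = sites_gdf.geometry.apply(
    lambda p: centres.distance(p).min() / 1000.0
)
```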
Modelling the Reuse of Previously Developed Land
Economic theory suggests that a brownfield will be reused if the net present value for a landowner from redeveloping the site is greater than the net present value of leaving it unused. The economic modelling approach adopted in this study further postulates that the regeneration of a brownfield is a function of the site characteristics (e.g. geographical location, size, distance to the central business district, previous activity at the site, housing suitability and ownership) and neighbourhood characteristics (e.g. population density and deprivation score of the area where the site is located). Our hypothesis is that a site will be in use (in_use) if the net benefit to the owner—defined here as profit—is greater than the profit derived from the site if it was unused (unused), including the option value arising from future costs, prices, policies, and the development of other nearby sites (Majd and Pindyck 1987; Wrenn et al. 2012). In accordance with Bockstael (1996), Irwin and Geoghegan (2001), Irwin and Bockstael (2004), Alberini (2007) and Guignet and Alberini (2010), the behavioral model is, therefore: choose in_use over unused if and only if (iff):
$$\begin{aligned} \pi _{in\_use} >\pi _{unused} , \end{aligned}\tag{1}$$
where \(\pi _{in\_use}\) and \(\pi _{unused} \) are the true—but unobservable (i.e. latent)—profits associated with the site when it is in use and when it is unused, respectively. So far we have assumed that variations in development decisions are only due to variations in observable brownfield site characteristics and surrounding neighbourhood features. However, in reality, landowners are heterogeneous, and brownfield sites with the same characteristics and the same neighbourhood features may have different reuse decisions. For example, some landowners may be close to retirement and therefore unwilling to embark on a redevelopment project. Others might be more willing to redevelop a site for housing if they expect that rapid demographic growth will increase the value of their land once redeveloped. These idiosyncrasies will create a distribution of unobservable factors, randomly distributed across the brownfield sites, that will generate optimal reuse decisions conditional upon site characteristics and neighbourhood features. Therefore, an owner will decide to reuse a site if the profit from reusing the site is higher than the profit from not using the site:
$$\begin{aligned}&\displaystyle \pi _{in\_use} >\pi _{unused} ,\\&\displaystyle \quad \alpha _{in\_use} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{in\_use} +\varepsilon _{in\_use} >\alpha _{unused} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{unused} +\varepsilon _{unused} , \end{aligned}\tag{2}$$
where \(\alpha \) is a constant term; \({\varvec{\upbeta }}\) is an unknown vector of parameters for the site and neighbourhood characteristics, \(\mathbf{x}\); and \(\varepsilon \) is an unobservable, stochastic factor of profit, treated as a random component. Due to the presence of this error component, the empirical model is driven by the probability that a site will be in use, i.e.:
$$\begin{aligned} Pr_{in\_use}= & {} Prob\left( {\pi _{in\_use} >\pi _{unused} ,\forall in\_use\ne unused} \right) ,\\ Pr_{in\_use}= & {} Prob\left( {\alpha _{in\_use} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{in\_use} +\varepsilon _{in\_use} >{\varvec{\upbeta }}^{\prime }\mathbf{x}_{unused} +\varepsilon _{unused} ,\forall in\_use\ne unused} \right) ,\\ Pr_{in\_use}= & {} Prob\left( {\varepsilon _{unused} -\varepsilon _{in\_use} <\alpha _{in\_use} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{in\_use} -{\varvec{\upbeta }}^{\prime }\mathbf{x}_{unused} ,\forall in\_use\ne unused} \right) \end{aligned}\tag{3}$$
Assuming that the difference of the error terms in Eq. (3) is normally distributed leads to the ordinary probit model.
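To make the estimation step concrete, the sketch below fits a probit of this form by maximum likelihood. It is a minimal sketch assuming a hypothetical data frame and column names (in_use, ex_indus, imd_score, and so on) that stand in for the NLUD variables described in the text; it is not the authors' estimation code.

```python
# A minimal sketch of the ordinary probit in Eq. (3), estimated by maximum
# likelihood with statsmodels. Data frame and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nlud_2006_augmented.csv")  # hypothetical extract

y = df["in_use"]  # 1 if the site is currently in use, 0 if unused

# Site and neighbourhood characteristics (a subset, for illustration)
X = df[["ex_indus", "ex_commer", "small", "private",
        "house_su", "imd_score", "pop_density"]]
X = sm.add_constant(X)  # the constant alpha in the latent profit index

probit = sm.Probit(y, X).fit()
print(probit.summary())

# Average marginal effects: how each covariate shifts Pr(in_use)
print(probit.get_margeff().summary())
```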
The estimation of a micro-scale spatial model requires a set of spatially articulated variables. The selection and specification of these variables is ideally determined by the factors expected to drive spatial variation in future profits. For example, heterogeneity in profits from reuse is related to landscape features such as land use and zoning requirements, accessibility, property tax rates and other variables that are unobservable to the researchers. In reality, the selection of explanatory variables is constrained by data availability. Augmenting the model with spatial variables, and exploring the effect of unobserved spatial heterogeneity, allows us to test whether location matters (Bell and Irwin 2002).
One may expect that the probability that a site is in use could affect the value of other, nearby properties, and also the decision to redevelop a surrounding brownfield site. To accommodate this possibility, and to account for the fact that omitted variables may be spatially correlated (Bockstael 1996), the probit model can be corrected to deal with spatial autocorrelation (McMillen 1992; Cho and Newman 2005). To capture these potential spatial dependencies, and to explore which model is more useful for deriving policy recommendations in the analysis of brownfields reuse decisions, we use four different models.
We first use a spatial probit model based on the following profit function augmented with additional terms:
$$\begin{aligned} \pi _{in\_use} =\alpha _{in\_use} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{in\_use} +\sum \nolimits _{s=1}^S \rho _s y_{s_{in\_use} } +\varepsilon _{in\_use}, \end{aligned}\tag{4}$$
where \(\rho \) defines a matrix of coefficients describing the influence that the planning decision at site \(s = 1, 2, \ldots , S\) has on the decision for a specific site, \(s^{*}\). \(S\) is the number of sites that may potentially have an influence on the planning decision for the given site \(s^{*}\), and \(y_{s_{in\_use} } \) denotes the outcome at each site \(s = 1, 2, \ldots , S\) (set to unity if in_use, and zero otherwise). \(\rho _s \) takes the form of a negative exponential function:
$$\begin{aligned} \rho _s =\lambda \exp \left( {-\frac{D_s }{\gamma }} \right) , \end{aligned}\tag{5}$$
where \(\lambda \) and \(\gamma \) are estimated parameters, and \(D_s \) is the distance separating the two sites. A positive coefficient estimate for \(\lambda \) implies a positive effect of planning decisions at adjacent sites on site \(s^{*}\); that is, the decision to develop site \(s^{*}\) is similar to the development decisions at sites nearby. The coefficient estimate for \(\gamma \) captures part of the effect of the distance between adjacent sites and site \(s^{*}\): a positive estimate suggests a positive effect that decreases with distance from adjacent sites, whilst a negative estimate would indicate a positive effect that increases with distance from adjacent sites.
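As a rough illustration of how the spatial term in Eqs. (4)–(5) can be assembled, the sketch below computes the distance-decayed sum of neighbouring outcomes for each site. The coordinates, outcomes, parameter values and the 8-km cutoff are invented for illustration; they are not the estimates reported later in the paper.

```python
# A minimal sketch of the spatial term in Eqs. (4)-(5): a distance-decayed
# sum of neighbouring reuse outcomes. All values here are illustrative.
import numpy as np

def spatial_term(coords_km, y, lam, gamma, cutoff_km=8.0):
    """For each site, sum lam*exp(-D/gamma)*y_s over neighbours within cutoff."""
    n = len(y)
    out = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(coords_km - coords_km[i], axis=1)  # distances to all sites
        mask = (d > 0) & (d <= cutoff_km)                     # exclude self, truncate
        out[i] = np.sum(lam * np.exp(-d[mask] / gamma) * y[mask])
    return out

rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(200, 2))   # hypothetical site locations (km)
y = rng.integers(0, 2, size=200)             # hypothetical reuse outcomes
print(spatial_term(coords, y, lam=0.5, gamma=1.0)[:5])
```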
Implicit in this straightforward spatial probit specification is the assumption of homogeneity, which implies that the factors that determine whether or not a site is developed are the same across all sites. Notwithstanding the spatial autocorrelation already captured, it is possible that there may be differences in sites located in different local authorities—due to factors such as different political drivers or legislation and budget constraints that are not explicitly specified in the model. This "unobserved" heterogeneity suggests that observations within a local authority can be correlated by more than just space. Therefore, following the large and growing literature within discrete choice analysis that addresses unobserved heterogeneity (e.g., Campbell et al. 2014; Campbell and Erdem 2015), we capture this additional tier of possible correlation for sites located within the same local authority using a spatial random effects probit model, a spatial random parameters probit model, and a spatial latent class probit model. Indeed, models that account for unobserved heterogeneity have become standard practice in the analysis of discrete choices.
Under the spatial random effects probit model, we test the assumption that the status of sites in a particular local authority is useful information in predicting the status of other sites in the same local authority area and also in other local authority areas. This formulation assumes that the same random effects apply to all observations within the same local authority, but that they differ for observations outside the local authority.
We proceed to examine explicitly the unobserved heterogeneity across sites. This is achieved by partitioning the stochastic component of profit additively into two parts:
$$\begin{aligned} \pi _{in\_use} =\alpha _{in\_use} +{\varvec{\upbeta }}^{\prime }\mathbf{x}_{in\_use} +\sum \limits _{s=1}^S \rho _s y_{s_{in\_use} } +\left[ {{\varvec{\upeta }}_{in\_use} +\varepsilon _{in\_use} } \right] , \end{aligned}\tag{6}$$
where \({\varvec{\upeta }}\) is a vector of random terms. In the following section we present two models which use this form. The first of these models—labelled the spatial random parameters probit model—allows \({\varvec{\upeta }}\) to take values from an infinite set; the second—labelled the spatial latent class probit model—allows \({\varvec{\upeta }}\) to take values from a finite set of distinct values. In both models the values of \({\varvec{\upeta }}\) can either be independent across sites or be the same for all sites within the same local authority. The spatial random parameters probit model and the spatial latent class probit model allow us to explore how brownfield reuse decisions are affected by the distribution of unobserved heterogeneity within local authorities, and therefore to assess whether local planning regeneration agencies might be better suited than a national planning regeneration agency to tackle brownfields reuse. To explain, the spatial random parameters probit model allows the coefficients on the explanatory variables to take on a continuous distribution of values, whereas the spatial latent class probit model restricts the possible values to a finite number of parameters.
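A small numerical sketch may help fix ideas on how class membership works under the latent class specification: prior class shares are reweighted by each local authority's likelihood under the class-specific parameters, via Bayes' rule, to give conditional (posterior) membership probabilities—this is also how site assignments of the kind shown in Fig. 3 below are derived. The likelihood values here are invented for illustration.

```python
# A minimal sketch of conditional (posterior) latent class probabilities:
# prior class shares updated with one local authority's likelihood.
# All numbers are hypothetical, not the paper's estimates.
import numpy as np

priors = np.array([0.51, 0.25, 0.24])  # prior class shares (3 classes)

# Hypothetical likelihood of one authority's observed reuse outcomes
# under each class's parameters (product of site-level probit probabilities)
lik = np.array([1.2e-6, 8.0e-6, 0.4e-6])

posterior = priors * lik / np.sum(priors * lik)
print(posterior)           # posterior membership probabilities
print(posterior.argmax())  # the authority is assigned mostly to class 2 here
```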
Table 1 Descriptive statistics
Table 1 reports the descriptive statistics for the 21,808 brownfields recorded in the NLUD for 2006. Roughly 40 % of sites are currently in use. Industrial, commercial and residential activities are the main previous uses at the sites. The remaining previous uses are: recreational, agricultural, vacant buildings and land, unused and derelict. Local authorities do not know the previous activity at roughly 20 % of sites, most likely because these sites have been unused for a very long time, making it difficult to obtain information on their previous use. The average (median) site is 2.1 (0.43) hectares, and is about 1.6 km (0.87 km) from the closest central business district. Most sites are located in cities: 29.15 % are urban sites and 25.66 % are located in a metropolis. More than 60 % of the sites are privately owned and deemed suitable for housing, one of the most pressing objectives of government planning policies. The average population density in the wards where sites are located is about 24 persons per hectare. Finally, the super output areas where sites are located have an average score of 29.7 on the 2004 Index of Multiple Deprivation (IMD). The Waverley Borough Council, County Surrey in the South East, represents the least deprived super output area, with an IMD score of 1.16, whilst Liverpool in the North West represents the most deprived super output area, with an IMD score of 86.36.
Fig. 1 Brownfield sites in England
Figure 1 shows the English brownfields divided into two groups: sites currently in use, denoted by a (green) plus symbol, and unused sites, represented by a (red) cross symbol. Most brownfields are located in more densely populated areas, such as the capital, London, and other major (industrial) cities: Liverpool, Manchester, Hull, Newcastle upon Tyne, Birmingham, Leeds, Plymouth, Portsmouth, Sheffield, Kirklees, St. Helens, Stoke-on-Trent, Swale, Tunbridge Wells, Walsall, Wirral, and Wolverhampton. The map further shows a higher propensity of sites in use in the wealthy and more densely populated areas of the South and South East, compared to the poorer and less densely populated areas of the northern regions.
The Determinants and Constraints of Brownfields Regeneration
Tables 2 and 3 present the results of our econometric models. Our analysis reflects the fact that sites belong to the same local authority, for a total of 353 local authorities, or groups. The tables also show that the spatial random effects and the spatial random parameters models [6] improve on the results obtained in the spatial probit model, indicating that specifications where the intrinsic correlation among sites within the same local authority is captured outperform specifications where observations are all assumed independent [7]. The log likelihood function and the Akaike Information Criterion suggest that the three spatial models that allow for spatial unobserved heterogeneity at local authority level outperform the spatial probit model.
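For readers unfamiliar with the criterion, the comparison works as follows: AIC = 2k - 2 ln L, with smaller values preferred, where k is the number of estimated parameters and ln L the maximised log likelihood. A minimal sketch with made-up values (not the paper's estimates):

```python
# A minimal sketch of model comparison by AIC = 2k - 2*lnL (smaller is
# better). The log likelihoods and parameter counts are invented.
models = {
    "spatial probit":            {"lnL": -9500.0, "k": 20},
    "spatial random effects":    {"lnL": -9300.0, "k": 21},
    "spatial random parameters": {"lnL": -9100.0, "k": 38},
}
for name, m in models.items():
    aic = 2 * m["k"] - 2 * m["lnL"]
    print(f"{name:28s} AIC = {aic:,.1f}")
```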
Table 2 Results: spatial econometric models
Table 3 Spatial latent class model with 3 latent classes
The model that best explains the data is the spatial random parameters probit model. This model was estimated specifying each of the variables as a normal distribution and using 100 Sobol draws. Focusing on the means attained from the spatial random parameters probit model, we find a number of interesting results. A site is more likely to be regenerated when local authorities do not have clear information on the previous activity at the site (EX_DK is the reference dummy for previous activities at the site). In fact, all the dummy variables for the previous uses at the sites are negative and significant. Among these variables, EX_HOU, EX_AGRIC and EX_UNUSE have smaller coefficients than the other previous uses, suggesting that when a site was previously used for residential or agricultural activities, or was not previously used, it is more likely to be regenerated. This is a first important result that acknowledges the difficulties in developing sites that have been previously used for commercial and/or industrial activities. These sites may be considered more difficult to develop due to the presence of obsolete structures, and problems or fear of contamination. When we consider size, we notice that smaller sites are more likely to be developed, as the coefficient of SMALL is positive and significant, compared to MEDIUM and LARGE size sites (the reference dummy) [8].
Our analysis also aimed to explore to what extent the government's goal of redeveloping sites located within urban cores to limit urban sprawl has been achieved. To address this question we look at the mean coefficients of the two dummy variables CITY and METROPOL, which indicate whether a site is located in a city or in a metropolis, and the two dummies for the distance to the city centre, DIST_50 and DIST_75, with DIST_100 being the reference dummy. These dummy variables measure whether a site is located within the median distance to the central business district, DIST_50, between the median distance and the third quartile distance to the central business district, DIST_75, and between the third quartile and the fourth quartile distance to the central business district, DIST_100. None of the coefficient estimates for CITY, METROPOL, DIST_50, and DIST_75 are found to be significant, indicating that, on average, there has not been any significant difference in the redevelopment of sites in rural versus urban areas.
Being owned by the private sector (PRIVATE) or being suitable for housing (HOUSE_SU) makes a site more likely to be reused, on average. Census characteristics affecting the probability of redeveloping a site are well captured by the Index of Multiple Deprivation, which indicates that the more deprived the area where a site is located, the less likely the site is to be redeveloped. Population density, ceteris paribus, does not seem to exert particular pressure on the redevelopment of brownfields. Finally, the dummy variables for the geographical regions show that sites located in London (the reference dummy), the West Midlands, the South West, the South East and Yorkshire and Humberside are more likely to be regenerated, on average, compared to sites located in other regions.
Turning our attention to the spreads of the random parameters, we generally find significant heterogeneity for all variables. Indeed, in several cases the standard deviations are of a larger magnitude than their respective means—implying that the influence of these variables on brownfields redevelopment is very different across English local authorities.
Also of interest are the results pertaining to the spatial dependency factor. Again focusing on the random parameters model, we first remark that both parameters, lambda and gamma, are estimated with positive signs (as expected)—indicating the existence of similar redevelopment decisions at adjacent sites, and that closer sites have more influence on the development decision of a site. Furthermore, the fact that they are both significant means that accounting for this neighbourhood effect is necessary; not accounting for spatial autocorrelation would lead to biased coefficient estimates. To assess the extent of this spatial effect, the spatial parameters can be used to predict a distance decay curve, as portrayed in Fig. 2. The curve produced from the random parameters model suggests that brownfield redevelopments within a 1-km buffer have a direct impact on redevelopment.
Fig. 2 Distance decay curves for random parameters and latent class models
Even though the spatial random parameters probit model is the model that best fits our data, the policy recommendations arising from its output are difficult to interpret, given the large heterogeneity captured by the spread of the random parameters. For this reason, in the next subsection we investigate the results of a spatial latent class probit model.
Geographical Differences in Brownfields Regeneration
Table 3 reports the results of a spatial latent class probit model estimated with three latent classes. The three classes, representing three groups of local authorities, were chosen according to the model minimising various information criteria [9]. At the bottom of the table, the estimated prior probabilities for each class show that our sites are about 51 % likely to belong to class 1, 25 % to class 2 and 24 % to class 3. Figure 3 reports a graphical representation of the brownfield sites in three classes, which we base on the conditional latent class probabilities. This model allows us to better investigate the relationship between brownfield reuse and the location of the sites. Sites located in the South East, North East, and North West are more likely to belong to class 1. Sites in the South West are more likely to belong to class 3, and sites in class 2 are quite evenly spread across the South and the Centre of England. In London, sites are most likely to belong to class 1 or 2.
Fig. 3 Distribution of brownfield sites according to conditional latent class probabilities
The spatial latent class probit model finds that brownfield sites whose previous activity was unknown are more likely to be redeveloped across all three classes. This suggests that sites that appear not to have been used for a long time, everything else being equal, are more likely to be reused. Making comparisons across classes, we find that large sites are most likely to be redeveloped under class 1, whereas small sites are most likely to be redeveloped under classes 2 and 3. In contrast to classes 1 and 3, being owned by the private sector makes a site significantly less likely to be redeveloped in class 2. In the case of class 3, local authorities with a higher population density have a higher likelihood of brownfields redevelopment. Not surprisingly, given the results portrayed in Fig. 3, we find that the dummy variables for the geographical regions also differ across the classes. Interestingly, we find that the neighbourhood effects also differ across the classes. This is an important result. Whilst the existence of brownfield redevelopment at adjacent sites has a positive effect on all redevelopment, the spatial spillover is not the same in all local authorities. For example, the spatial effect of the development decision of a nearby brownfield site is stronger for sites redeveloped in class 2, but this influence declines much more sharply for these sites compared to sites redeveloped in classes 1 and 3, where the effect of nearby sites is weaker but spreads over a longer distance.
Empirical issues aside, to the best of our knowledge this is the first application to have estimated a spatial probit latent class model with class-specific neighbourhood effects. Therefore, this also represents an important methodological contribution. The size and extent of the spatial effect retrieved from this model (weighted across the three latent classes) is shown in Fig. 2. From this, we can see a much larger buffer zone (perhaps as large as 8 km).
This paper has looked at the determinants and barriers of brownfields redevelopment in England. We assumed that, in England, the development of brownfields—sites that may or may not have been contaminated, which have been previously used and are currently unused or underused—depends on the characteristics of the sites, location, previous use, current use, size, ownership type, socio-economic conditions of the people living in the areas where the brownfields are located, as well as unobserved characteristics of the local authority where brownfields are based, such as political drivers, legislation, planning policies, budgets, and the influence that the development of adjacent brownfields has on a decision of a site to be reused. We control for unobserved heterogeneity at local authority level using random coefficients, random parameters and latent class probit models, and use a correction for spatial autocorrelation to capture the effect that the development decision of surrounding brownfield sites have on the reuse decision of a site.
The results from the econometric models can be used to estimate the effect of a change in the Index of Multiple Deprivation where a brownfield is located, or of a change in the characteristics of brownfields, on the probability that a brownfield site is reused. Using the results from Table 3, and considering a spatial effect of surrounding brownfields within 1 km from a brownfield site, we find that for a large, publicly owned, former industrial site located in London in an area with an Index of Multiple Deprivation equal to 80, reducing deprivation to a level of 20 increases the probability of redeveloping a brownfield site by 6.4 % for a site belonging to class 1, by 12.3 % for a site belonging to class 2, and by 4.5 % for a site belonging to class 3. If we further explore the effect of a change in ownership from public to private, we find that the probability of reusing a site increases by 8.5 % for a site belonging to class 1, decreases by 3.7 % for a site belonging to class 2, and remains virtually unchanged for a site belonging to class 3. We can further explore the difference in probabilities of reuse of a brownfield site in different regions in England. If we consider two identical large former industrial sites, privately owned, located in an area with an Index of Multiple Deprivation equal to 20, one in London and one in the North West, a region with a strong history of past industrial activities, we find that the probability of reusing a brownfield site in London is 15.3, 23.7, and 34.2 % higher than in the North West for sites belonging to class 1, class 2 and class 3 respectively. These results show that there are large differences in the effect that site characteristics have on brownfields reuse, indicating that flexible models, such as spatial latent class and random parameters probit models that account for unobserved heterogeneity at local authority level and spatial autocorrelation, provide invaluable tools for exploring the determinants of brownfields reuse.
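These probability changes follow from evaluating the normal CDF of the fitted latent index at the two covariate settings. A minimal sketch with invented coefficients (not the estimates in Table 3):

```python
# A minimal sketch of computing probability changes from probit estimates:
# Pr(in_use) = Phi(index), evaluated at two covariate settings. All
# coefficient values below are made up for illustration.
from scipy.stats import norm

def pr_reuse(beta0, b_private, private, b_imd, imd, spatial):
    """Probit reuse probability for a simplified two-covariate index."""
    return norm.cdf(beta0 + b_private * private + b_imd * imd + spatial)

# Hypothetical class-specific coefficients and a fixed 1-km spatial term
base      = pr_reuse(beta0=-0.2, b_private=0.25, private=0, b_imd=-0.01, imd=80, spatial=0.3)
lower_imd = pr_reuse(beta0=-0.2, b_private=0.25, private=0, b_imd=-0.01, imd=20, spatial=0.3)
print(f"Effect of reducing IMD 80 -> 20: {lower_imd - base:+.3f}")

private   = pr_reuse(beta0=-0.2, b_private=0.25, private=1, b_imd=-0.01, imd=20, spatial=0.3)
print(f"Effect of private ownership:     {private - lower_imd:+.3f}")
```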
The results highlight that the brownfield community has made some progress in redeveloping previously developed sites, but that some constraints still need to be overcome. The government's goal of building most new houses on brownfields is being achieved, but more resources, attention and specific policies are needed to redevelop "difficult" sites, such as large sites, sites that have previously been used for commercial or industrial activities, and sites located in the poorest and bleakest areas of cities and regions of England. These might also be sites that suffer from the presence or suspected presence of contamination, as contamination is in general more likely to be found at industrial and commercial sites and at larger sites. However, we cannot derive conclusions on the effect of contamination on brownfields redevelopment in England from this study, as the NLUD database does not collect information on contamination. This is unfortunate, as the dataset is otherwise rich in information. It would be desirable for all local authorities to release, in the future, information on contamination and clean-up at their brownfield sites in the NLUD.
It is finally interesting to highlight that the government does not seem to fully appreciate the opportunity cost of not developing publicly owned sites, as public ownership seems to be a constraint on regeneration for most brownfields. It is possible that privately owned sites, which are more likely to be redeveloped, are more valuable than publicly owned sites, and that this encourages private owners to reuse them. Unfortunately, the NLUD does not include information on the value of brownfield sites; this is a second limitation of the dataset that we would urge local authorities to remedy in the future by reporting site values, so that researchers can conduct hedonic studies of brownfields redevelopment in England.
We have also found strong unobserved heterogeneity in reuse decisions of brownfields, captured by the unobserved local authorities' characteristics in the analysis, and a positive effect on reuse decisions from reuse decisions at surrounding brownfield sites. Our results, which show considerable differences in reuse decisions captured by unobserved heterogeneity at local authority level, therefore support the recent direction of the government to make local planning authorities, rather than regional planning authorities, responsible for brownfields regeneration (Schulze Bäing and Wong 2012). We also believe that the recommendations of the Barker Review (2006) to use policy instruments, such as introducing a charge on vacant and derelict brownfield land and a subsidy to help developers bring forward hard-to-remediate brownfield sites, should still be pursued, and we further recommend that a specific set of policy instruments be used to address publicly owned brownfields, which may be less profitable to develop. Finally, we should note that the dataset used in this analysis is about 10 years old. Unfortunately, more recent versions of the NLUD are either not available or do not have the same number of observations. For example, the 2012 dataset has only 8860 observations, about one third of the dataset used in the current study. We would encourage the government to improve the collection of data on previously used land so that it will be possible in the future to conduct longitudinal studies examining the reuse of brownfields.
Notes

1. Federal and state legislation pertinent to U.S. brownfield policy is numerous and diverse. The most important includes the Resource Conservation and Recovery Act (RCRA), the Community Reinvestment Act (CRA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) or Superfund, and the Small Business Liability and Brownfields Revitalization Act. CERCLA, in particular, imposed joint and several liability on all "potentially responsible parties", de facto making anyone with a proven link to a contaminated site potentially liable for the contamination at the site. This has not only led the potentially responsible parties to attempt to implicate one another for damages, but has also greatly reduced the incentives to purchase or reuse a potentially contaminated site. In 2002, the Small Business Liability Relief and Brownfields Revitalization Act (i.e., the "Brownfields Law") was signed as an amendment to CERCLA to provide more financial and technical assistance to brownfield remediation, including cleanup and assessment grants and liability protections, and to stimulate state-led voluntary cleanup programs.
2. Local authorities in England are responsible for providing services, such as education, social services, planning, waste management, trading standards, emergency planning, roads and transportation, housing, environmental health, parks, and markets and fairs, to the population in their area. There are five different types of local authorities in England, divided into single-tier and two-tier authorities. Single-tier authorities are: Metropolitan Authorities, London Boroughs, and Unitary or Shire Authorities. Two-tier authorities comprise a County Council and a District Council. There are about 400 local authorities in England.
Wards are the spatial units used to elect councillors in England. The geographical size of a ward depends on the population density of an area. On average, a ward includes about 5500 people. There are 7669 wards in England.
The Index of Multiple Deprivation 2004 (ODPM 2004) is calculated by the Social Disadvantage Research Centre at the University of Oxford for lower layer super output areas in England. The index is constructed by combining seven transformed domain indexes, using the following weights: Income (22.5 %), Employment (22.5 %), Health Deprivation and Disability (13.5 %), Education, Skills and Training (13.5 %), Barriers to Housing and Services (9.3 %), Crime (9.3 %), Living Environment (9.3 %).
Super output areas are a set of geographical areas developed following the 2001 census, initially to facilitate the calculation of the Index of Multiple Deprivation 2004. The aim is to produce a set of areas of consistent size, suitable for the publication of socioeconomic data related to small areas, each comprising a population of about 1000–3000.
Note that the choice probabilities in the spatial random effects and spatial random parameters models cannot be calculated exactly (because the integrals do not have a closed form). Instead, they were approximated by simulating the log-likelihood with 100 Sobol draws per local authority. To address the correlation among local authorities, the same random draws were used for all sites within the same local authority. For each random draw, the product of the simulated probabilities within each local authority is derived. The average of these products gives the contribution of the local authority to the likelihood function.
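For concreteness, the following is a minimal sketch of this simulation step, assuming a probit kernel with a single normally distributed local-authority random effect; the function and variable names (simulated_la_likelihood, X, y, beta, sigma) are hypothetical and illustrative, not the authors' code.

import numpy as np
from scipy.stats import norm, qmc

def simulated_la_likelihood(X, y, beta, sigma, n_draws=100, seed=0):
    """Simulated likelihood contribution of one local authority.

    X : (n_sites, k) site covariates; y : (n_sites,) binary reuse outcomes;
    beta : (k,) coefficients; sigma : scale of the random effect.
    """
    # Sobol draws on (0, 1), mapped to N(0, 1); the same draws are shared
    # by all sites within the authority, as described above. (scipy warns
    # when n_draws is not a power of 2, but the draws remain valid.)
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random(n_draws)
    z = norm.ppf(u).ravel()                      # (n_draws,)

    xb = X @ beta                                # (n_sites,)
    # Probability of each site's observed outcome, for every draw
    p = norm.cdf(np.add.outer(sigma * z, xb))    # (n_draws, n_sites)
    p_obs = np.where(y[None, :] == 1, p, 1.0 - p)

    # Product over sites within the authority, then average over draws
    return np.mean(np.prod(p_obs, axis=1))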
We also ran an ordinary probit model, but found that the spatial probit model outperformed it. This is also indicated by the estimates of the spatial coefficients lambda and gamma in Table 2.
In specifications not reported here, we explored the use of continuous variables for the size of the brownfield areas and for the distance to the city centre. For example, we used the inverse of the distance (Ihlanfeldt and Taylor 2004) or the logarithm of the distance (Longo and Alberini 2006), but we found the corresponding coefficient estimates not statistically significant.
We tested up to four latent classes. However, we observed that going beyond three classes reduced the significance of parameter estimates, since the additional classes were associated with lower probabilities. We also remark that the sample-likelihood function was much more susceptible to local maxima, and we were less confident of reaching a unique maximum. As in the spatial random effects and spatial random parameters models, we derived the products of predicted probabilities, this time using the class-specific parameters; these were then weighted according to the class membership parameters, as sketched below.
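In symbols (with notation introduced here purely for illustration), the contribution of local authority $a$ to the sample likelihood in a $C$-class model takes the form
\[
L_a = \sum_{c=1}^{C} \pi_c \prod_{i \in a} \Phi(x_i'\beta_c)^{y_i} \bigl[1-\Phi(x_i'\beta_c)\bigr]^{1-y_i}, \qquad \sum_{c=1}^{C} \pi_c = 1,
\]
where $y_i$ is the reuse indicator for site $i$, $\beta_c$ are the class-specific parameters, and $\pi_c$ the class membership probabilities.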
Adams D, Watkins C (2002) Greenfields, brownfields and housing development. Blackwell Science, Oxford
Adams D (2004) The changing regulatory environment for speculative housebuilding and the construction of core competencies for brownfield development. Environ Plan A 36:601–624
Adams D, Disberry A, Hutchison N, Munjoma T (2001) Ownership constraints to brownfield redevelopment. Environ Plan A 33:453–477
Adams D, De Sousa C, Tiesdell S (2010) Brownfield development: a comparison of North American and British approaches. Urban Stud 47(1):75–104
Alberini A (2007) Determinants and effects on property values of participation in voluntary cleanup programs: The case of Colorado. Contemp Econ Policy 25(3):415–432
Alberini A, Longo A, Tonin S, Trombetta F, Turvani M (2005) The role of liability, regulation and economic incentives in brownfield remediation and redevelopment: evidence from surveys of developers. Reg Sci Urban Econ 35(4):327–351
Alker S, Joy V, Roberts P, Smith N (2000) The definition of brownfield. J Environ Plan Manag 43(1):49–69
Bardos RP, Kearney TE, Nathanail CP, Weenk A, Martin ID (2000) Assessing the wider environmental value of remediating land contamination paper for 7th international FZK/TNO conference on contaminated soil "ConSoil"
Barker K (2006) Barker review of land use planning. Final report—recommendations. HM Treasury, London
Blackman A, Darley S, Lyon TP, Wernstedt K (2010) What drives participation in state voluntary cleanup programs? Evidence from Oregon. Land Econ 86(4):785–799
Bartsch C, Collaton E (1997) Brownfields: cleaning and reusing contaminated properties. Praeger, Westport
Bell KP, Irwin EG (2002) Spatially explicit micro-level modelling of land use change at the rural–urban interface. Agric Econ 27(3):217–232
Bockstael NE (1996) Modeling economics and ecology: the importance of a spatial perspective. Am J Agric Econ 78:1168–1180
BURA (2006) Towards a national strategy for regeneration—a BURA steering and development forum report. British Urban Regeneration Association
Campbell D, Erdem S (2015) Position bias in best-worst scaling surveys: a case study on trust in institutions. Am J Agric Econ 97(2):526–545
Campbell D, Hensher DA, Scarpa R (2014) Bounding WTP distributions to reflect the 'actual'consideration set. J Choice Model 11:4–15
Chilton K, Schwarz P, Godwin K (2009) Verifying the social, environmental and economic promise of brownfield programs. Final Report to EPA's Brownfields Training, Research, and Technical assistant Grants and Cooperative Agreements Program, BFRES-04-02. http://www.epa.gov/brownfields/trta_k6/trta_report_2009.pdf
Cho SH, Newman DH (2005) Spatial analysis of rural land development. For Policy Econ 7(5):732–744
Cozens P, Hillier D, Prescott G (1999) The sustainable and the criminogenic: the case of new-build housing projects in Britain. Prop Manag 17(3):252–261
DCLG (2006) Planning policy guidance note 3: housing. Department for Communities and Local Government, London
DCLG (2009) Land use change statistics (England) 2008: provisional estimates. Department for Communities and Local Government, London
DCLG (2015) Building more homes on brownfield land. Department for Communities and Local Government, London
Dennison M (1998) Brownfields redevelopment. Government Institutes, Rockville
De Sousa CA (2003) Turning brownfields into green space in the City of Toronto. Landsc Urban Plan 62(4):181–198
De Sousa CA (2004) The greening of brownfields in American cities. J Environ Plan Manag 47(4):579–600
De Sousa CA, Wu C, Westphal LM (2009) Assessing the effect of publicly assisted brownfield redevelopment on surrounding property values. Econ Dev Q 23:95–110
DETR (2000a) Circular 02/2000, contaminated land: implementation of part IIA of the Environmental Protection Act 1990. Department of Environment, Transport and Regions, London
DETR (2000b) Our town and cities: the future—delivering the urban renaissance. Department of Environment, Transport and Regions, London
DETR (2000c) Planning policy guidance note 3: housing. Department of Environment, Transport and Regions, London
Diamond J, Liddle J (2005) Management of regeneration. Routledge, Oxon
Dair CM, Williams K (2006) Sustainable land reuse: the influence of different stakeholders in achieving sustainable brownfield developments in England. Environ Plan A 38:1345–1366
Dixon T (2006) Integrating sustainability into brownfield regeneration: rhetoric or reality? An analysis of the UK development industry. J Prop Res 23(3):237–267
Dixon T (2007) The property development industry and sustainable urban brownfield regeneration in England: an analysis of case studies in Thames Gateway and Greater Manchester. Urban Stud 44(12):2379–2400
Dixon T, Adams D (2008) Housing supply and brownfield regeneration in a post-barker world: is there enough brownfield land in England and Scotland? Urban Stud 45(1):115–139
Dixon T, Pocock Y, Waters M (2006) An analysis of the UK development industry's role in brownfield regeneration. J Prop Invest Finance 24(6):521–541
Dixon T, Otsuka N, Abe H (2011) Critical success factors in urban brownfield regeneration: an analysis of 'hardcore' sites in Manchester and Osaka during the economic recession (2009–10). Environ Plan A 43(4):961–980
Eisen JB (1999) Brownfields policies for sustainable cities. Duke Environ Law Policy Forum 9(2):187–229
English Partnerships (2003) Towards a national brownfield strategy. English Partnerships—The National Regeneration Agency, London
English Partnerships (2006) The brownfield guide: a practitioners' guide to land reuse in England. English Partnerships, London
English Partnerships (2007) National brownfield strategy: recommendations to government. English Partnerships, London
English Partnerships (2008) Brownfield land and deprivation. http://www.englishpartnerships.co.uk/nlud.htm
Guignet D, Alberini A (2010) Voluntary cleanups and redevelopment potential: lessons from Baltimore, Maryland. Cityscape 12(3):7–36
Greenberg M, Lowrie K, Mayer H, Tyler Miller K, Solitare L (2001) Brownfield redevelopment as a smart growth option in the United States. Environmentalist 21:129–143
Haninger K, Ma L, Christopher T (2014) The value of brownfield remediation. NBER working paper 20296
Harrison C, Davies G (2002) Conserving biodiversity that matters: practitioners' perspectives on brownfield development and urban nature conservation in London. J Environ Manag 65:95–108
Ihlanfeldt KR, Taylor LO (2004) Externality effects of small-scale hazardous waste sites: evidence from urban commercial property markets. J Environ Econ Manag 47(1):117–139
Irwin EG, Bockstael NE (2004) Land use externalities, open space preservation, and urban sprawl. Reg Sci Urban Econ 34(6):705–725
Irwin EG, Geoghegan J (2001) Theory, data, methods: developing spatially explicit economic models of land use change. Agric Ecosyst Environ 85(1):7–24
Linn J (2013) The effect of voluntary brownfields programs on nearby property values: evidence from Illinois. J Urban Econ 78:1–18
Longo A, Alberini A (2006) What are the effects of contamination risks on commercial and industrial properties? Evidence from Baltimore, Maryland. J Environ Plan Manag 49(5):713–737
Majd S, Pindyck R (1987) Time to build, option value, and investment decisions. J Financ Econ 18:7–27
McMillen DP (1992) Probit with spatial autocorrelation. J Reg Sci 32(3):335–348
Meyer PB, Lyons TS (2000) Lessons from private sector brownfield redevelopers. J Am Plan Assoc 66(1):46–57
NLUD (2000) Previously developed land (PDL)—data specification, v2.2, National Land Use Database, London
NLUD (2003) Land use and land cover classification, version 4.4, National Land Use Database, London
ODPM (2003) Sustainable communities: building for the future. Office of the Deputy Prime Minister, London
ODPM (2004) Index of multiple deprivation 2004. Office of the Deputy Prime Minister, London. http://www.communities.gov.uk/index.asp?id=1128439
ODPM (2005) Sustainable communities: homes for all. Office of the Deputy Prime Minister, London
ODPM (2006) National land use database: land use and land cover classification. Office of the Deputy Prime Minister, London
Otsuka N, Dixon T, Abe H (2013) Stock measurement and regeneration policy approaches to 'Hardcore' brownfield sites: England and Japan compared. Land Use Policy 33:36–41
Page GW, Berger RS (2006) Characteristics and land use of contaminated brownfield properties in voluntary cleanup agreement programs. Land Use Policy 23:551–559
Roberts P, Sykes H (eds) (2000) Urban regeneration: a handbook. Sage, London
Schoenbaum M (2002) Environmental contamination, brownfields policy, and economic redevelopment in an industrial area of Baltimore, Maryland. Land Econ 78(1):60–71
Schulze Bäing AS (2010) Target-driven brownfield reuse: a benefit for deprived areas? A spatial analysis of brownfield reuse patterns in England's core city regions. J Urban Regen Renew 3(3):290–300
Schulze Bäing AS, Wong C (2012) Brownfield residential development: what happens to the most deprived neighbourhoods in England? Urban Stud 49:2989–3008
Schwarz PM, Depken CA 2nd, Hanning A, Peterson K (2009) Comparing contaminated property redevelopment for mandatory and voluntary cleanup programs in California. J Environ Manag 90(12):3730
Syms P (2004) Previously developed land: industrial activities and contamination, 2nd edn. Blackwell Publishing, Oxford
Tang YT, Nathanail CP (2012) Sticks and stones: the impact of the definitions of brownfield in policies on socio-economic sustainability. Sustainability 4(5):840–862
Thornton G, Franz M, Edwards D, Pahlen G, Nathanail P (2007) The challenge of sustainability: incentives for brownfield regeneration in Europe. Environ Sci Policy 10:116–134
Urban Task Force (1999) Towards an urban renaissance. Department of the Environment, Transport and the Regions; Thomas Telford Publishing, London
Urban Task Force (2005) Towards a strong urban renaissance. An independent report by members of the Urban Task Force chaired by Lord Rogers of Riverside (The Rogers' Report). www.urbantaskforce.org, November 2005
Wernstedt K (2004) Overview of existing studies on community impacts of land reuse. NCEE working paper 04-06
Wernstedt K, Meyer PB, Alberini A (2006a) Incentives for private residential brownfields development in US urban areas. J Environ Plan Manag 49(1):101–119
Wernstedt K, Meyer PB, Alberini A (2006b) Attracting private investment to contaminated properties: the value of public interventions. J Policy Anal Manag 25(2):347–369
Wernstedt K, Blackman A, Lyon TP, Novak K (2013) Revitalizing underperforming and contaminated land through voluntary action: perspectives from U.S. voluntary cleanup programs. Land Use Policy 31:545–556
Williams K (2012) The quantitative and qualitative impacts of brownfield policies in England. In: Jackson-Elmore C, Hula R (eds) Reclaiming brownfields: a comparative analysis of adaptive reuse of contaminated properties. MSU Press, East Lansing
Wrenn DH, Sam A, Irwin EG (2012) Searching for the urban fringe: exploring spatio-temporal variations in the effect of distance versus local interactions on residential land conversion using a conditionally-parametric discrete-time duration model. Paper presented at the Agricultural & Applied Economics Association's 2012 AAEA Annual Meeting, Seattle, Washington, August 12–14, 2012
Alberto Longo wishes to acknowledge funding from: the UKCRC Centre of Excellence for Public Health Northern Ireland MRC grant number MR/K023241/1; the Financial Aid Programme for Researchers 2014 of BIZKAIA:TALENT "Transportation Policies: Emissions Reductions, Public Health Benefits, and Acceptability"; and the Spanish Ministry of Economy and Competitiveness through Grant ECO2014-52587-R.
Gibson Institute for Land, Food and Environment, Institute for Global Food Security, School of Biological Sciences, UKCRC Centre of Excellence for Public Health, Queen's University Belfast, Belfast, UK
Alberto Longo
Basque Centre for Climate Change (BC3), 48008, Bilbao, Spain
Economics Division, Stirling Management School, University of Stirling, Stirling, UK
Danny Campbell
Correspondence to Alberto Longo.
Longo, A., Campbell, D. The Determinants of Brownfields Redevelopment in England. Environ Resource Econ 67, 261–283 (2017). https://doi.org/10.1007/s10640-015-9985-y
Issue Date: June 2017
Keywords: Revealed preferences · Binary choice model · Spatial autocorrelation · Spatial probit latent class model
Asian aridification linked to the first step of the Eocene-Oligocene climate Transition (EOT) in obliquity-dominated terrestrial records (Xining Basin, China)
G. Q. Xiao, H. A. Abels, Z. Q. Yao, G. Dupont-Nivet, F. J. Hilgen
Climate of the Past (CP) & Discussions (CPD), 2010
Abstract: Asian terrestrial records of the Eocene-Oligocene Transition (EOT) are rare and, when available, often poorly constrained in time, even though they are crucial in understanding the atmospheric impact of this major step in Cenozoic climate deterioration. Here, we present a detailed cyclostratigraphic study of the continuous continental EOT succession deposited between ~35 and 33 Ma in the Xining Basin at the northeastern edge of the Tibetan Plateau. Lithology, supplemented with high-resolution magnetic susceptibility (MS), median grain size (MGS) and color reflectance (a*) records, reveals a prominent ~3.4 m thick basic cyclicity of alternating playa gypsum and dry mudflat red mudstones of latest Eocene age. The magnetostratigraphic age model indicates that this cyclicity was most likely forced by the 41-kyr obliquity cycle driving oscillations of drier and wetter conditions in Asian interior climate from at least 1 million years before the EOT. In addition, our results suggest a duration of ~0.9 Myr for magnetochron C13r, which is in accordance with radiometric dates from continental successions in Wyoming, USA, albeit somewhat shorter than in current time scales. Detailed comparison of the EOT interval in the Tashan section with marine records suggests that the most pronounced lithofacies change in the Xining Basin corresponds to the first of two widely recognized steps in oxygen isotopes across the EOT. This first step precedes the major and second step (i.e. the base of Oi-1) and has recently been reported to be mainly related to atmospheric cooling rather than ice volume growth. Coincidence with lithofacies changes in our Chinese record would suggest that the atmospheric impact of the first step was of global significance, while the major ice volume increase of the second step did not significantly affect Asian interior climate.
Paleoseismology of the Xorxol Segment of the Central Altyn Tagh Fault, Xinjiang, China
Z. Washburn, J. R. Arrowsmith, G. Dupont-Nivet, W. X. Feng
Annals of Geophysics, 2003, DOI: 10.4401/ag-3443
Abstract: Although the Altyn Tagh Fault (ATF) is thought to play a key role in accommodating India-Eurasia convergence, little is known about its earthquake history. Studies of this strike-slip fault are important for interpreting the role of faulting versus distributed deformation in the accommodation of the India-Eurasia collision. In addition, the > 1200 km long fault represents one of the most important and exemplary intracontinental strike-slip faults in the world. We mapped fault trace geometry and interpreted paleoseismic trench exposures to characterize the seismogenic behavior of the ATF. We identified 2 geometric segment boundaries in a 270 km long reach of the central ATF. These boundaries define the westernmost Wuzhunxiao, the Central Pingding, and the easternmost Xorxol (also written as Suekuli or Suo erkuli) segments. In this paper, we present the results from the Camel paleoseismic site along the Xorxol Segment at 91.759°E, 38.919°N. There, evidence for the last two earthquakes is clear, and 14C dates from layers exposed in the excavation bracket their ages. The most recent earthquake occurred between 1456 and 1775 cal A.D. and the penultimate event was between 60 and 980 cal A.D. Combining the Camel interpretations with our published results for the central ATF, we conclude that multiple earthquakes with shorter rupture lengths (≤ 50 km), rather than complete rupture of the Xorxol Segment, better explain the paleoseismic data. We found 2-3 earthquakes in the last 2-3 kyr. When coupled with typical amounts of slip per event (5-10 m), the recurrence times are tentatively consistent with 1-2 cm/yr slip rates. This result favors models that consider the broader distribution of collisional deformation, rather than those with northward motion of India into Asia absorbed along a few faults bounding rigid blocks.
Quantitative genetics of growth traits in the edible snail, Helix aspersa Müller
M Dupont-Nivet, J Mallard, JC Bonnet, JM Blanc
Genetics Selection Evolution, 1997, DOI: 10.1186/1297-9686-29-5-571
Les escargots comestibles de Côte d'Ivoire: effets de quelques plantes, d'aliments concentrés et de la teneur en calcium alimentaire sur la croissance d'Archachatina ventricosa (Gould, 1850) en élevage hors-sol en bâtiment
Otchoumou, A., Dupont-Nivet, M., Dosso, H.
Tropicultura, 2004
Abstract: The Edible Ivorian Snails: Effects of Some Vegetables, Concentrated Diets and Dietary Calcium on the Growth of Archachatina ventricosa (Gould, 1850) in Indoor Rearing. Archachatina ventricosa (Gould) snails with 37.06 g body weight and 6.01 cm shell length were given two vegetable diets, composed of leaves of Lactuca sativa (Apiaceae) and Brassica oleracea (Brassicaceae) for R1 and leaves of Laportea aestuans (Urticaceae) and Phaulopsis falcisepala (Acanthaceae) for R2, and four concentrated diets (RT, R3, R4 and R5) with variable calcium content (0.05%, 0.59%, 6.82%, 12.02%, 14.03% and 16.01%, respectively), in order to determine the calcium content inducing the best growth and the cumulated mortality rate. This optimum calcium content was 16.01%. At higher calcium content, Archachatina ventricosa produced more shell than meat.
Microwave-stimulated Raman adiabatic passage in a Bose-Einstein condensate on an atom chip
Matthieu Dupont-Nivet, Mathias Casiulis, Théo Laudat, Christoph I. Westbrook, Sylvain Schwartz
Physics, 2015, DOI: 10.1103/PhysRevA.91.053420
Abstract: We report the achievement of stimulated Raman adiabatic passage (STIRAP) in the microwave frequency range between internal states of a Bose-Einstein condensate (BEC) magnetically trapped in the vicinity of an atom chip. The STIRAP protocol used in this experiment is robust to external perturbations as it is an adiabatic transfer, and power-efficient as it involves only resonant (or quasi-resonant) processes. Taking into account the effect of losses and collisions in a non-linear Bloch equations model, we show that the maximum transfer efficiency is obtained for non-zero values of the one- and two-photon detunings, which is confirmed quantitatively by our experimental measurements.
The Positive Impact of the Early-Feeding of a Plant-Based Diet on Its Future Acceptance and Utilisation in Rainbow Trout
Inge Geurden, Peter Borchert, Mukundh N. Balasubramanian, Johan W. Schrama, Mathilde Dupont-Nivet, Edwige Quillet, Sadasivam J. Kaushik, Stéphane Panserat, Françoise Médale
Abstract: Sustainable aquaculture, which entails proportional replacement of fish-based feed sources by plant-based ingredients, is impeded by the poor growth response frequently seen in fish fed high levels of plant ingredients. This study explores the potential to improve, by means of early nutritional exposure, the growth of fish fed plant-based feed. Rainbow trout swim-up fry were fed for 3 weeks either a plant-based diet (diet V, V-fish) or a diet containing fishmeal and fish oil as protein and fat source (diet M, M-fish). After this 3-wk nutritional history period, all V- or M-fish received diet M for a 7-month intermediate growth phase. Both groups were then challenged by feeding diet V for 25 days during which voluntary feed intake, growth, and nutrient utilisation were monitored (V-challenge). Three isogenic rainbow trout lines were used for evaluating possible family effects. The results of the V-challenge showed a 42% higher growth rate (P = 0.002) and 30% higher feed intake (P = 0.005) in fish of nutritional history V compared to M (averaged over the three families). Besides the effects on feed intake, V-fish utilized diet V more efficiently than M-fish, as reflected by the on average 18% higher feed efficiency (P = 0.003). We noted a significant family effect for the above parameters (P<0.001), but the nutritional history effect was consistent for all three families (no interaction effect, P>0.05). In summary, our study shows that an early short-term exposure of rainbow trout fry to a plant-based diet improves acceptance and utilization of the same diet when given at later life stages. This positive response is encouraging as a potential strategy to improve the use of plant-based feed in fish, of interest in the field of fish farming and animal nutrition in general. Future work needs to determine the persistency of this positive early feeding effect and the underlying mechanisms.
Symmetric micro-wave potentials for interferometry with thermal atoms on a chip
M. Ammar, M. Dupont-Nivet, L. Huet, J.-P. Pocholle, P. Rosenbusch, I. Bouchoule, C. I. Westbrook, J. Estève, J. Reichel, C. Guerlin, S. Schwartz
Abstract: A trapped atom interferometer involving state-selective adiabatic potentials with two microwave frequencies on a chip is proposed. We show that this configuration provides a way to achieve a high degree of symmetry between the two arms of the interferometer, which is necessary for coherent splitting and recombination of thermal (i.e. non-condensed) atoms. The resulting interferometer holds promise to achieve high contrast and long coherence time, while avoiding the mean-field interaction issues of interferometers based on trapped Bose-Einstein condensates.
Selection for Adaptation to Dietary Shifts: Towards Sustainable Breeding of Carnivorous Fish
Richard Le Boucher, Mathilde Dupont-Nivet, Marc Vandeputte, Thierry Kerneïs, Lionel Goardon, Laurent Labbé, Béatrice Chatain, Marie Josée Bothaire, Laurence Larroquet, Françoise Médale, Edwige Quillet
Abstract: Genetic adaptation to dietary environments is a key process in the evolution of natural populations and is of great interest in animal breeding. In fish farming, the use of fish meal and fish oil has been widely challenged, leading to the rapidly increasing use of plant-based products in feed. However, high substitution rates impair fish health and growth in carnivorous species. We demonstrated that survival rate, mean body weight and biomass can be improved in rainbow trout (Oncorhynchus mykiss) after a single generation of selection for the ability to adapt to a totally plant-based diet (15.1%, 35.3% and 54.4%, respectively). Individual variability in the ability to adapt to major diet changes can be effectively used to promote fish welfare and a more sustainable aquaculture.
Caldero-Keller approach to the denominators of cluster variables
G. Dupont
Abstract: Buan, Marsh and Reiten proved that if a cluster-tilting object $T$ in a cluster category $\mathcal C$ associated to an acyclic quiver $Q$ satisfies certain conditions with respect to the exchange pairs in $\mathcal C$, then the denominator in its reduced form of every cluster variable in the cluster algebra associated to $Q$ has exponents given by the dimension vector of the corresponding module over the endomorphism algebra of $T$. In this paper, we give an alternative proof of this result using the Caldero-Keller approach to acyclic cluster algebras and the work of Palu on cluster characters. | CommonCrawl |
\begin{document}
\maketitle \numberwithin{equation}{section} \begin{abstract} This paper establishes new estimates for linear Schr\"{o}\-din\-ger equations in $\mathbb R^3$ with time-dependent potentials. Some of the results are new even in the time-independent case and all are shown to hold for potentials in scaling-critical, translation-invariant spaces.
The proof of the time-independent results uses a novel method based on an abstract version of Wiener's Theorem.
\end{abstract}
\section{Introduction} \subsection{Overview} Consider the linear Schr\"{o}dinger equation in $\mathbb R^3$ \begin{equation}\label{1.1} i \partial_t Z + \mathcal H Z = F,\ Z(0) \text{ given}, \end{equation} where \begin{equation}\label{eq_3.2} \mathcal H = \mathcal H_0 + V = -\Delta + V, \end{equation} with $V$ real or complex-valued, in the scalar case, and \begin{equation} \mathcal H = \mathcal H_0 + V = \begin{pmatrix} \Delta-\mu & 0 \\ 0 & -\Delta+\mu \end{pmatrix} + \begin{pmatrix} W_1 & W_2 \\ -\overline W_2 & -W_1 \end{pmatrix},\ \mu>0 \label{3.2} \end{equation} in the matrix nonselfadjoint case. $W_1$ is always taken to be real-valued, while $W_2$ may also be complex-valued. We treat all cases in a unified manner.
Let $R_0(\lambda) = (\mathcal H_0 - \lambda)^{-1}$ be the resolvent of the unperturbed Hamiltonian, \begin{equation}\label{eq_3.55}
R_0(\lambda^2)(x, y) = \frac 1 {4\pi} \frac {e^{i \lambda |x-y|}}{|x-y|} \end{equation} in the scalar case (\ref{eq_3.2}) and \begin{equation}\label{eq_3.56}
R_0(\lambda^2+\mu)(x, y) = \frac 1 {4\pi} \begin{pmatrix} -\frac {e^{-\sqrt{\lambda^2+2\mu} |x-y|}}{|x-y|} & 0 \\ 0 & \frac {e^{i \lambda |x-y|}}{|x-y|} \end{pmatrix} \end{equation} in the matrix case (\ref{3.2}). For a multiplicative decomposition $V=V_1 V_2$, such that $V_2 R_0(\lambda) V_1 \in \mathcal L(L^2, L^2)$ (bounded from $L^2$ to itself), the exceptional set $\mathcal E$ is defined as the set of $\lambda \in \mathbb C$ such that $(I + V_2 R_0(\lambda) V_1)^{-1} \not \in \mathcal L(L^2, L^2)$.
Throughout this paper we make the simplifying assumption that $\mathcal E$ is disjoint from the spectrum of $\mathcal H_0$. In $\mathbb R^3$, this assumption holds generically. The opposite situation requires separate treatment.
In proving our estimates for the solution $Z$ of (\ref{1.1}), we use an abstract version of Wiener's Theorem, which also presents independent interest. \begin{theorem}\label{thm_1.6} Let $H$ be a Hilbert space and $K = \mathcal L(H, M_t H)$ be the algebra of bounded operators from $H$ to $M_t H$, where $M_t H$ is the space of $H$-valued Borel measures on $\mathbb R$ of finite mass, see (\ref{3.11}).
If $A \in K$ is invertible then $\widehat A(\lambda)$ is invertible for every $\lambda$. Conversely, assume $\widehat A(\lambda)$ is invertible for each $\lambda$, $A = I + L$, and \begin{equation}
\lim_{\epsilon \to 0} \|L(\cdot + \epsilon) - L\|_K = 0,\qquad \lim_{R \to \infty}\|(1-\chi(t/R)) L(t)\|_K = 0. \end{equation} Then $A$ is invertible in $K$. \end{theorem} \noindent Moreover, if $A$ is supported on $[0, \infty)$, then so is $A^{-1}$, by Lemma~\ref{lemma8}.
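For orientation, when $H = \mathbb C$ Theorem \ref{thm_1.6} recovers a classical form of Wiener's Lemma (a special case recorded here for illustration only; it is not used below). Taking \begin{equation} A = \delta_{t=0} + f(t) \dd t,\ f \in L^1(\mathbb R), \qquad \widehat A(\lambda) = 1 + \widehat f(\lambda), \end{equation} the two conditions on $L = f$ hold automatically, by continuity of translation in $L^1$ and dominated convergence; thus, if $1 + \widehat f(\lambda) \ne 0$ for every $\lambda$, then $A^{-1} = \delta_{t=0} + g(t) \dd t$ for some $g \in L^1(\mathbb R)$.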
Another helpful result is the following: \begin{proposition}\label{prop_21} Let $\mathcal H_0$ be as in (\ref{eq_3.2}) or (\ref{3.2}).
Then \begin{equation}\label{eqn_3.39}
\int_{\mathbb R} \|e^{it \mathcal H_0} f\|_{L^{6, \infty}} \dd t \leq C \|f\|_{L^{6/5, 1}}. \end{equation} \end{proposition} We actually prove the stronger inequality \begin{equation}\label{eq_3.39}
\sum_{n \in \mathbb Z} 2^n \sup_{t \in [2^n, 2^{n+1})} \|e^{it \mathcal H_0} f\|_{L^{6, \infty}} \leq C \|f\|_{L^{6/5, 1}}. \end{equation} This is indeed stronger, since $\int_0^{\infty} \|e^{it \mathcal H_0} f\|_{L^{6, \infty}} \dd t = \sum_{n \in \mathbb Z} \int_{2^n}^{2^{n+1}} \|e^{it \mathcal H_0} f\|_{L^{6, \infty}} \dd t$ is dominated by the left-hand side above, and negative times are handled by the same argument. We use Proposition \ref{prop_21} in the context of Theorem \ref{thm_1.6} --- to show, for specific weights $V_1$ and $V_2$, that $V_2 e^{it \mathcal H_0} V_1$ is an operator-valued measure of finite mass.
Finally, let $L^{3/2, \infty}_0$ be the weak-$L^{3/2}$ closure of the set of bounded compactly supported functions. The following is the main result for time-independent potentials: \begin{theorem}\label{theorem_26} Let $Z$ be a solution of the linear Schr\"{o}\-din\-ger equation (\ref{1.1}) $$ i \partial_t Z + \mathcal H Z = F,\ Z(0) \text{ given}. $$ Assume that $\mathcal H = \mathcal H_0 + V$, $V$ is as in (\ref{eq_3.2}) or (\ref{3.2}), and that no exceptional values of $\mathcal H$ are contained in $\sigma(\mathcal H_0)$. Then Strichartz estimates hold: if $V \in L^{3/2, \infty}_0$ then \begin{equation}\label{1.45}
\|P_c Z\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \Big(\|Z(0)\|_2 + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x}\Big) \end{equation} and if $V \in L^{3/2, 1}$ then \begin{equation}\label{1.46}
\|P_c Z\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|Z(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x}\Big). \end{equation} \end{theorem} Here $P_c$ is the projection on the continuous spectrum of $\mathcal H$.
Equation (\ref{1.1}) is invariant under rescaling: replacing $Z(t, x)$ in this equation by $Z(\alpha^2 t, \alpha x)$, $V(x)$ by $\alpha^2 V(\alpha x)$, and $F(t, x)$ by $F(\alpha^2 t, \alpha x)$, the equation remains valid. It is natural to study (\ref{1.1}) under scaling-invariant norms.
Our results are scaling-invariant in that $V$ appears with scaling-invariant norms and in that the constants in (\ref{eqn_3.39}), (\ref{1.45}), and (\ref{1.46}) remain the same after rescaling.
Using Lorentz spaces instead of the more usual Sobolev spaces is essential in the statements and proofs of (\ref{eqn_3.39}) and (\ref{1.46}). It also allows an improvement in (\ref{1.45}) --- from $L^{3/2}$, proved in \cite{gol}, to weak-$L^{3/2}$, the weakest space of potentials for which Strichartz estimates are known so far. Estimate (\ref{1.46}) is entirely new.
Note that if $V$ is scalar, real-valued, and in $L^{3/2}$, then it is enough to assume that zero is not an eigenvalue or a resonance of $\mathcal H$. This follows from \cite{ionjer}. Proposition \ref{prop28} summarizes the known spectral properties of~$\mathcal H$ if $V \in L^{3/2, \infty}_0$.
Overall, Lorentz spaces provide a convenient unified framework for our methods, some of which are based on real interpolation (see the Appendices).
\begin{observation} For $Z(0)=0$, the dual of (\ref{1.46}) is \begin{equation}
\|P_c Z\|_{L^{\infty}_t L^{6, \infty}_x} \leq C \|F\|_{L^{\infty}_t L^{6/5, 1}_x}. \end{equation} Real interpolation between (\ref{1.45}) and (\ref{1.46}) produces \begin{equation}
\|P_c Z\|_{L^p_t L^{6, p}_x} \leq C \big(\|Z(0)\|_{L^{q, p}} + \|F\|_{L^1_t L^{q, p}_x + L^p_t L^{6/5, p}_x}\big), \end{equation} where $1 \leq p \leq 2$ and $3/(2q) = 1/p + 1/4$. \end{observation}
In particular, this estimate and its dual have a simple form for $p=6/5$: \begin{proposition}\label{cor1.5} For $Z(0)=0$ and $V \in L^{3/2}$, provided zero is not a resonance or eigenvalue for $\mathcal H = \mathcal H_0 + V$, \begin{equation}\label{eqn1.12}
\|P_c Z\|_{L^{6/5}_t L^{6, 6/5}_x} \leq C \|F\|_{L^{6/5}_{t, x}};\ \|P_c Z\|_{L^6_{t, x}} \leq C \|F\|_{L^6_t L^{6/5, 6}_x}. \end{equation} \end{proposition}
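As a quick consistency check (spelled out here for the reader), the first estimate in (\ref{eqn1.12}) is the $p = 6/5$ case of the interpolated bound with $Z(0) = 0$, once the $L^1_t$ component of the forcing is dropped, since \begin{equation} L^p_t L^{6/5, p}_x = L^{6/5}_t L^{6/5, 6/5}_x = L^{6/5}_{t, x} \quad \text{for } p = 6/5, \end{equation} while the second estimate is its dual.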
Other estimates, such as $t^{-3/2}$ decay estimates or wave operator estimates, will be the subject of separate papers, such as \cite{bego} or \cite{bec3}. These results make use of Wiener's Theorem, Theorem \ref{thm_1.6}, in an even more general form.
\subsection{Time-dependent potentials} We turn to time-dependent potentials. The difference is that, in this case, the Fourier transform of a kernel $T(t, s)$ which is not invariant under time translation is no longer a multiplier $\widehat T(\lambda):H \to H$ for each $\lambda$; it is a family of non-local operators instead. Such a generalization was studied by Howland \cite{how}.
We shall not follow this direction here; instead, we look only at perturbations of operators that are invariant under time translation, whose Fourier transform is then a perturbation of a multiplier operator.
Our results in the time-dependent case are, to some extent, independent from those for time-independent potentials. If Strichartz estimates or (\ref{1.46}) hold, no matter how they were obtained, then we can extend them to the case of time-dependent potentials.
Theorem \ref{theorem_26} implies the following easy generalization of itself: \begin{corollary}\label{cor_1.5} Let $Z$ be a solution of \begin{equation}\label{1.12} i \partial_t Z + \mathcal H Z + \tilde V(t, x) P_c Z= F,\ Z(0) \text{ given}.
\end{equation} Let $\mathcal H = \mathcal H_0 + V$ be such that no exceptional values of $\mathcal H$ are contained in $\sigma(\mathcal H_0)$. If $V \in L^{3/2, \infty}_0$ and if $\|\tilde V\|_{L^{\infty}_t L^{3/2, \infty}_x}$ is sufficiently small, then \begin{equation}\label{1.13}
\|P_c Z\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \Big(\|Z(0)\|_2 + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} \Big).
\end{equation} If $V \in L^{3/2, 1}$ and if $\|\tilde V\|_{L^{\infty}_t L^{3/2, 1}_x}$ is sufficiently small, then \begin{equation}\label{1.14}
\|P_c Z\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|Z(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x} \Big). \end{equation} \end{corollary} In other words, we can add a small time-dependent potential term, which depends in an arbitrary manner on both $x$ and $t$.
If instead of $\tilde V(t, x) P_c Z$ we have $\tilde V(t, x) Z$, this term may interact with the point spectrum, so we need to control $P_p Z$ in the same norm: for example, \begin{equation}
\|P_c Z\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|Z(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x} + \|P_p Z\|_{L^1_t L^{6, \infty}_x} \Big). \end{equation}
The situation is different for large time-dependent potentials, which may destroy dispersive estimates. However, we shall study a particular kind of time-dependent potentials, for which dispersive estimates still hold.
Consider the following manifold of unitary transformations $U: \mathbb R^4 \times SO(3) \to \mathcal L(L^2, L^2)$, given by a combination of translations, rotations, and change of complex phase: \begin{equation}\label{1.20} U(D, A, \Omega) = \Omega e^{D \nabla + i A \sigma_3} \end{equation} in the matrix case and \begin{equation}\label{1.21} U(D, A, \Omega) = \Omega e^{D \nabla + i A} \end{equation} in the scalar case. Here $D$ represents the translation, $\Omega$ the rotation, and $A$ the change of complex phase.
Within this manifold, we consider a one-parameter family of transformations $U(t):=U(\pi(t))$, governed by a parameter path $\pi(t)$: \begin{equation}\label{1.26} \pi:\mathbb R \to \mathbb R^4 \times SO(3),\ \pi(t) = (D(t), A(t), \Omega(t)), \end{equation} where $D(t) \in \mathbb R^3$, $A(t) \in \mathbb R$, and $\Omega(t) \in SO(3)$. Thus, \begin{equation} U(t) = \Omega(t) e^{D(t) \nabla + i A(t) \sigma_3},\ \text{respectively } U(t) = \Omega(t) e^{D(t) \nabla + i A(t)}.
\end{equation} In the sequel, we assume that the derivative of $\pi$ is essentially bounded: $\|\pi'\|_{\infty} < \infty$. In particular, $SO(3)$ is a Lie group, endowed with a canonical Riemannian metric; we use it to measure $\|\Omega'\|_{\infty}$.
We plug this family of unitary transformations into the equation (\ref{1.1}) as follows, obtaining a time-dependent potential: \begin{equation}\label{3.157} i \partial_t R(t) + (\mathcal H_0 + U(t)^{-1} V U(t)) R(t) = F(t),\ R(0) \text{ given}. \end{equation} The Hamiltonian at time $t$ is a conjugate of $\mathcal H$: \begin{equation} \mathcal H_0 + U(t)^{-1} V U(t) = U(t)^{-1} \mathcal H U(t) = U(t)^{-1} (\mathcal H_0 + V) U(t), \end{equation} since $\mathcal H_0$ and $U$ commute.
The change of phase $e^{iA}$ is trivial in the scalar case, since it commutes with $V$, but $e^{iA\sigma_3}$ is nontrivial in the matrix case, since $e^{iA\sigma_3} V \ne V e^{iA\sigma_3}$.
If $U$ is of the form (\ref{1.20}), then (\ref{3.157}) becomes \begin{equation}\label{129} i \partial_t R(t) + (\mathcal H_0 + V(t, x)) R(t) = F(t),\ R(0) \text{ given}, \end{equation} where \begin{equation}\label{130} V(t, x) = \begin{pmatrix} W_1(\Omega(t) x- \Omega(t) D(t)) & e^{i A(t)} W_2(\Omega(t) x- \Omega(t) D(t)) \\ -e^{-iA(t)} \overline W_2(\Omega(t) x- \Omega(t) D(t)) & -W_1(\Omega(t) x - \Omega(t)D(t)) \end{pmatrix}. \end{equation} The potential $V(x, t)$ has the same shape at all times $t$, but can rotate or change position and complex phase.
In the scalar case, the equation looks the same, but $V(t, x)$ is given by \begin{equation}\label{131} V(t, x) = V(\Omega(t) x - \Omega(t)D(t)), \end{equation} so $A(t)$ does not even enter the equation.
Assume that $\pi$ is continuous and a.e.\ differentiable. Then, denote $Z(t) = U(t) R(t)$ and rewrite the equation in the new variable $Z$: \begin{equation}\label{3.158} i \partial_t Z(t) - i \partial_t U(t) U(t)^{-1} Z(t) + \mathcal H_0 Z(t) + V Z(t) = U(t) F(t),\ Z(0) = U(0) R(0). \end{equation}
In case $U$ is given by (\ref{1.21}) and $\Omega(t)=I$, (\ref{3.158}) becomes \begin{equation}\label{1.30} i \partial_t Z - i D'(t) \nabla Z + A'(t) \sigma_3 Z + \mathcal H Z = U F,\ Z(0) \text{ given}. \end{equation}
In this formulation the potential is constant, plus the higher-order perturbation $- i D'(t) \nabla + A'(t) \sigma_3$.
More generally, we can also consider a manifold of transformations $U(\pi)$, a one-parameter path $U(t):=U(\pi(t))$ within it, and construct a time-dependent potential like above. The subsequent theorem is formulated abstractly, for such a general situation, but we always keep in mind the concrete cases (\ref{1.20}) and (\ref{1.21}).
The main abstract result in the time-de\-pen\-dent setting is the following:
\begin{theorem}\label{theorem_13} Consider equation (\ref{3.157}), for $\mathcal H = \mathcal H_0 + V$ as in (\ref{eq_3.2}) or (\ref{3.2}) and $V$ in $L^{3/2, \infty}_0$, not necessarily real-valued: $$ i \partial_t R(t) + (\mathcal H_0 + U(t)^{-1} V U(t)) R(t) = F(t),\ R(0) \text{ given}. $$ Let $P_c$ be the continuous spectrum projection of $\mathcal H$ and \begin{equation} P_c(t) = U(t)^{-1} P_c U(t),\ P_p(t) = U(t)^{-1} P_p U(t) = I - P_c(t). \end{equation}
Assume that $U(t)$, $t \in \mathbb R$, is a family of maps determined by an a.e.\ differentiable parameter path $\pi(t)$, with the following properties: \begin{enumerate} \item[P1] $U(t)$ and $U(t)^{-1}$ are uniformly $L^p$-bounded maps, for $1 \leq p \leq\infty$. \item[P2] For every $t \in \mathbb R$, $U(t)$ commutes with $\mathcal H_0$. \item[P3] For some $N$ and for almost every $\tau \in \mathbb R$ \begin{equation}\label{3.163}\begin{aligned}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} \sup_{t-s=\tau} \Big\|\langle x \rangle^{-N} \big(U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} - e^{i(t-s) \mathcal H_0}\big) \langle x \rangle^{-N}\Big\|_{2 \to 2} = 0. \end{aligned}\end{equation}
\item[P4] For any eigenfunction $f$ of $\mathcal H$ or $\mathcal H^*$, $\|\partial_t U(t) U^{-1}(t) f\|_{L^{6/5, 2}} \leq C \|\pi'\|_{\infty}$.
\end{enumerate}
Assume that $\|\pi'\|_{\infty}$ is sufficiently small (in a manner that depends on $V$) and there are no exceptional values of $\mathcal H$ embedded in $\sigma(\mathcal H_0)$. Then, one has \begin{equation}
\|P_c(t) R(t)\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \Big(\|R(0)\|_2 + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} + \|P_p(t) R(t)\|_{L^2_t L^{6, 2}_x}\Big). \end{equation} Further assume that $V \in L^{3/2, 1}$ and P4 holds with $L^{6/5, 1}$.
Then \begin{equation}\label{2.157}
\|P_c(t) R(t)\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|R(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x} + \|P_p(t) R(t)\|_{L^1_t L^{6, \infty}_x}\Big). \end{equation}
\end{theorem}
By Lemma \ref{lem_32}, we show that $U(t)$ given by (\ref{1.20}) or (\ref{1.21}) and (\ref{1.26}) has properties P1--P4, so Theorem \ref{theorem_13} applies. In particular, we obtain \begin{corollary}\label{cor_1.7} Consider a solution of (\ref{129}), where $V(x, t)$ is in the form (\ref{130}) or (\ref{131}).
Assume that $V \in L^{3/2, \infty}_0$, $\mathcal H$ has no exceptional values in $(-\infty, -\mu] \cup [\mu, \infty)$, and that $\|\Omega'\|_{L^{\infty}_t}$, $\|D'\|_{L^{\infty}_t}$, and $\|A'\|_{L^{\infty}_t}$ are sufficiently small, in a manner that depends on $V$. Then \begin{equation}\begin{aligned}
\|P_c(t) R\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \Big(\|R(0)\|_{L^2} + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} + \|P_p(t) R\|_{L^2_t L^{6, 2}_x}\Big). \end{aligned}\end{equation} If $V \in L^{3/2, 1}$, then \begin{equation}
\|P_c(t) R\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|R(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x} + \|P_p(t) R\|_{L^1_t L^{6, \infty}_x}\Big). \end{equation} \end{corollary} We also formulate the analogous conclusion for (\ref{1.30}): \begin{corollary}\label{cor_1.8} Consider a solution of (\ref{1.30}).
Assume that $V \in L^{3/2, \infty}_0$, that $\mathcal H$ has no exceptional values in $(-\infty, -\mu] \cup [\mu, \infty)$, and that $\|D'\|_{L^{\infty}_t}$ and $\|A'\|_{L^{\infty}_t}$ are sufficiently small, in a manner that depends on $V$. Then \begin{equation}\begin{aligned}
\|P_c Z\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \Big(\|Z(0)\|_{L^2} + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} + \|P_p Z\|_{L^2_t L^{6, 2}_x}\Big). \end{aligned}\end{equation} If $V \in L^{3/2, 1}$, then \begin{equation}
\|P_c Z\|_{L^1_t L^{6, \infty}_x} \leq C \Big(\|Z(0)\|_{L^{6/5, 1}} + \|F\|_{L^1_t L^{6/5, 1}_x} + \|P_p Z\|_{L^1_t L^{6, \infty}_x}\Big). \end{equation} \end{corollary} We omit the proofs of Corollary \ref{cor_1.7} and Corollary \ref{cor_1.8}, since they are straightforward applications of Theorem \ref{theorem_13}.
One can find numerous other families satisfying properties P1--P4 of Theorem \ref{theorem_13}. As an example, let $\tilde U(t) = e^{i (\int_0^t \beta(\tau) \dd \tau) |\nabla|^s}$, $0 \leq s < 1/4$. It then suffices to see that $(e^{i|\xi|^s})^{\wedge} \in L^1$ on $\mathbb R^3$.
\subsection{History of the problem}
{\emph{The scalar selfadjoint case.}} The study of decay estimates for Schr\"{o}dinger's equation with a potential has a long history. In the 1970's, Rauch \cite{rau} and Jensen--Kato \cite{jeka} studied the local decay of solutions to (\ref{1.1}) in weighted $L^2$ spaces, under suitable conditions on the potential $V$ and taking into account threshold eigenvalues or resonances. Global decay of solutions was later established by Journ\'{e}--Soffer--Sogge, \cite{jss}, who proved in $\mathbb R^n$ that \begin{equation}
\|e^{it(-\Delta+V)}P_c f\|_{L^{\infty}} \leq C |t|^{-n/2} \|f\|_{L^1},
\end{equation} when $n \geq 3$, zero is neither an eigenvalue, nor a resonance, and, roughly, $|V(x)| \leq C |x|^{-4-n}$ and $\widehat V \in L^1$.
Keel--Tao \cite{tao} obtained endpoint Stric\-hartz estimates for the free Schr\"{o}\-din\-ger and wave equations and introduced a method for obtaining such endpoint inequalities, which can also be used in more general situations.
Based on \cite{tao}, Rodnianski--Schlag \cite{rodsch} proved nonendpoint Strichartz estimates for large potentials with $\langle x \rangle^{-2-\epsilon}$ decay. Yajima obtained dispersive estimates for Schr\"{o}dinger's equation, under rapid decay assumptions on the potential, both directly \cite{yaj1} and by proving the boundedness of wave operators first \cite{yaj2}. His estimates \cite{yaj4} also apply in the presence of threshold eigenvalues or resonances.
Goldberg proved, in \cite{gol2}, dispersive estimates for almost-critical potentials and, in \cite{gol}, Strichartz estimates for $L^{3/2}$ --- thus scaling-critical --- potentials. Burq--Planchon--Stalker--Tahvildar-Zadeh, in \cite{bpst1} and \cite{bpst2}, obtained decay and Stric\-hartz estimates for a class of $L^{3/2, \infty}$ potentials (under specific regularity assumptions).
Lorentz spaces are essential in the statement of (\ref{1.46}): Lebesgue spaces will only lead to decay rates of the form $t^{-s}$ for $s \in \mathbb R$, which are never integrable on $(0, \infty)$ (in particular, $t^{-1}$ is not integrable). Lorentz spaces enable us to replace $t^{-1}$ by an integrable rate of decay, in a scaling-invariant setting.
Estimates (\ref{1.46}) and (\ref{eqn1.12}) are related to the more general ones of Foschi, \cite{foschi}. His result, stated here in a form relevant for comparison, is that if \begin{equation}\label{desc}
\|e^{-it \mathcal H} P_c e^{is \mathcal H^*} P_c^*\|_{\mathcal L(L^1, L^{\infty})} \leq C |t-s|^{-3/2}, \end{equation} then \begin{equation}
\|e^{-it \mathcal H} P_c e^{is \mathcal H^*} P_c^* F\|_{L^q_t L^r_x} \leq C \|F\|_{L^{\tilde q'}_t L^{\tilde r'}_x}. \end{equation} $\tilde q'$ and $\tilde r'$ are dual exponents to $\tilde q$ and $\tilde r$ and both $(q, r)$ and $(\tilde q, \tilde r)$ must satisfy \begin{equation} 1 \leq q, r \leq \infty,\ \frac 1 q < \frac 3 2- \frac 3 r,\ q \leq r, \end{equation} and likewise for $(\tilde q, \tilde r)$; in addition, \begin{equation} \frac 1 q + \frac 1 {\tilde q} = \frac 3 2 \Big(1 - \frac 1 r - \frac 1 {\tilde r}\Big),\ \tilde r < 3r,\ r < 3 \tilde r. \end{equation}
Then, (\ref{1.46}) is in a forbidden region under Foschi's conditions, being given by $(q, r) = (1, 6)$, $(\tilde q, \tilde r) = (\infty, 6)$. (\ref{eqn1.12}) is the allowed endpoint case $(q, r) = (6, 6)$, $(\tilde q, \tilde r) = (6/5, 6)$; however, (\ref{desc}) is not generally true if $V$ is only in $L^{3/2}$, so (\ref{eqn1.12}) does not follow from \cite{foschi} either.
The result of Foschi differs even more from that of the current paper in the nonselfadjoint case, where $\mathcal H \ne \mathcal H^*$, but such differences can be overcome; see, for example, \cite{bec}.
{\emph{The matrix nonselfadjoint case.}} The Hamiltonian (\ref{3.2}) is nonselfadjoint, leading to specific difficulties not present in the selfadjoint case. In particular, one needs to reprove the boundedness of the wave operators, the limiting absorption principle, or even the $L^2$ boundedness of the time evolution. They do not follow as in the selfadjoint case, where, for example, the unitarity of the time evolution immediately implies the $L^2$ boundedness.
In \cite{schlag}, Schlag proved $L^1 \to L^{\infty}$ dispersive estimates for the Schr\"{o}dinger equation with a nonselfadjoint Hamiltonian, as well as non-endpoint Stric\-hartz estimates. Erdo\u{g}an--Schlag \cite{erdsch2} proved $L^2$ bounds for the evolution and a more detailed analysis of decay in the presence of threshold eigenvalues or resonances. In \cite{bec}, endpoint Strichartz estimates in the nonselfadjoint case were obtained following the method of Keel--Tao. More recently, Cuccagna--Mizumachi \cite{cucmiz} derived all of the above from the boundedness of the wave operators, under more stringent conditions on the potential~$V$.
{\emph{Time-dependent estimates.}} Many estimates concerning the linear Schr\"{o}\-din\-ger equation with time-dependent potentials refer to the time-periodic case. These include the work of Goldberg \cite{gol}, of Costin--Lebowitz--Tanveer \cite{clt}, of Galtbayar--Jensen--Yajima \cite{gjy}, of Bourgain \cite{bou2} (for quasi\-periodic potentials), and of Wang \cite{wang}.
Other results in the time-dependent setting belong to Howland \cite{how}, Kitada--Yajima \cite{kiya}, Rodnianski--Schlag \cite{rodsch}, to Bourgain \cite{bou1}, \cite{bou3}, \cite{bou4}, and to Delort; see \cite{delort}.
The current paper's result, Theorem \ref{theorem_13}, allows only for a specific kind of time dependence of the potential: it can change its position, but not its profile. However, this case is important because it exactly describes the motion of a soliton that arises as the solution of a semilinear dispersive equation. The linear estimates are then useful in controlling a solution that is close to a soliton, hence in proving the latter's stability or instability under small perturbations. The same considerations apply to other nonlinear phenomena, such as vortices.
An even more general situation, studied in this paper, is one where the potential can not only move, but also rotate. Estimates in this case are useful in studying the stability of a soliton that is not radially symmetric and is free to rotate.
Concretely, making use of the techniques introduced here, recent papers such as \cite{bec2}, \cite{nakschl}, or \cite{nakschl2} improve upon older results in \cite{bec} or \cite{schlag} --- by allowing perturbations in energy-space or better, without decay --- and upon \cite{cucmiz}, by allowing the soliton and its perturbations to have a translation movement.
\subsection{Paper outline}
{\emph{Time-independent potentials.}} We use Wiener's Theorem in the form of Theorem \ref{thm_1.6}, as follows. Consider a decomposition of the potential $V$ into
\begin{equation} V = V_1 V_2,\ V_1 = |V|^{1/2} \sgn V,\ V_2 = |V|^{1/2}. \label{1.35} \end{equation} In the matrix nonselfadjoint case (\ref{3.2}), an analogous decomposition is \begin{equation} V = V_1 V_2,\ V_1 = \sigma_3 \begin{pmatrix} W_1 & W_2 \\ \overline W_2 & W_1 \end{pmatrix}^{1/2},\ V_2 = \begin{pmatrix} W_1 & W_2 \\ \overline W_2 & W_1 \end{pmatrix}^{1/2},\ \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \label{1.36} \end{equation} By Duhamel's formula, \begin{equation}\begin{aligned} Z(t) &= e^{it\mathcal H} Z(0) - i \int_0^t e^{i(t-s) \mathcal H} F(s) \dd s \\ &= e^{it\mathcal H_0} Z(0) - i \int_0^t e^{i(t-s) \mathcal H_0} F(s) \dd s + i \int_0^t e^{i(t-s)\mathcal H_0} V Z(s) \dd s. \end{aligned}\end{equation} In addition, for any multiplicative decomposition $V = V_1 V_2$ of the potential, \begin{equation} V_2 Z(t) = V_2 \bigg(e^{it\mathcal H_0} Z(0) - i \int_0^t e^{i(t-s) \mathcal H_0} F(s) \dd s \bigg) + i \int_0^t (V_2 e^{i(t-s)\mathcal H_0} V_1) V_2 Z(s) \dd s. \end{equation} Consider the kernel defined by \begin{equation}\label{1.40} (T_{V_2, V_1} F)(t)= \int_0^t (V_2 e^{i(t-s)\mathcal H_0} V_1) F(s) \dd s \end{equation} and its Fourier transform in regard to time \begin{equation} \widehat T_{V_2, V_1}(\lambda) = i V_2 R_0(\lambda) V_1. \end{equation} To apply Theorem \ref{thm_1.6}, we need to establish that the kernel of $T_{V_2, V_1}$ is time integrable. We do this by Proposition \ref{prop_21}, which implies that,
for $H=L^2$ and $V_1$, $V_2 \in L^{3, 2}$ as given by (\ref{1.35}) and (\ref{1.36}), $T_{V_2, V_1} \in K = \mathcal L(H, M_t H)$.
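For the reader's convenience, we record the formal computation behind the formula for $\widehat T_{V_2, V_1}$; the Fourier convention $\widehat k(\lambda) = \int_{\mathbb R} e^{-i\lambda t} k(t) \dd t$ is made explicit here as an assumption, this being only a consistency check: \begin{equation} \widehat T_{V_2, V_1}(\lambda) = \int_0^{\infty} e^{-i\lambda t} V_2 e^{it \mathcal H_0} V_1 \dd t = V_2 \Big( \int_0^{\infty} e^{it (\mathcal H_0 - \lambda)} \dd t \Big) V_1 = i V_2 (\mathcal H_0 - \lambda)^{-1} V_1 = i V_2 R_0(\lambda) V_1, \end{equation} first for $\mathrm{Im}\, \lambda < 0$, where the integral converges absolutely, and then up to $\sigma(\mathcal H_0)$ by the limiting absorption principle.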
Invertibility of $I-iT_{V_2, V_1}$ within $K$, addressed by Theorem \ref{thm_1.6}, is directly related to Strichartz estimates. Indeed, at least formally we can write \begin{equation}\begin{aligned}
V_2 Z(t) &= (I- i T_{V_2, V_1})^{-1} V_2 \bigg(e^{it\mathcal H_0} Z(0) - i \int_0^t e^{i(t-s) \mathcal H_0} F(s) \dd s \bigg). \end{aligned}\end{equation}
If the operator $I - iT_{V_2, V_1}$ can be inverted, then the computation is justified. However, as we shall see in Lemma \ref{lem_32}, this is not the case and we need to introduce a correction, to account for the eigenvalues.
Accordingly, in Lemma \ref{lem_32} we construct $\tilde V_1$ and $\tilde V_2$, which equal $V_1$ and $V_2$ plus exponentially small terms, such that $I - iT_{\tilde V_2, \tilde V_1}$ is invertible, then carry on the demonstration for this operator instead.
{\emph{Time-dependent potentials.}} The results in this case are derived in a perturbative manner from those for time-independent potentials. However, the perturbation is not small in Strichartz or similar spaces, so we use a particular method to account for~it.
Denote \begin{equation}\begin{aligned} \tilde T_{\tilde V_2, \tilde V_1} F(t) &= \int_{-\infty}^t \tilde V_2 e^{i(t-s) {\mathcal H}_0} U(t) U(s)^{-1} \tilde V_1 F(s) \dd s,
\end{aligned}\end{equation} respectively \begin{equation}\begin{aligned} \tilde T_{\tilde V_2, I} F(t) &= \int_{-\infty}^t \tilde V_2 e^{i(t-s) {\mathcal H}_0} U(t) U(s)^{-1} F(s) \dd s. \end{aligned}\end{equation} Following several transformations, we reduce equation (\ref{3.157}) to \begin{equation}\begin{aligned} (I - i \tilde T_{\tilde V_2, \tilde V_1}) \tilde V_2 \tilde Z(t) &= \tilde T_{\tilde V_2, I} (-i \tilde F(s) + \delta_{s=0} \tilde Z(0)). \end{aligned}\end{equation} If we can invert $I - i \tilde T_{\tilde V_2, \tilde V_1}$, this will imply the desired estimates. To do this, we show that \begin{equation}
\lim_{\|\pi'\|_{\infty} \to 0} \|\tilde T_{\tilde V_2, \tilde V_1} - T_{\tilde V_2, \tilde V_1}\| = 0 \end{equation} where $T_{\tilde V_2, \tilde V_1}$ has the form (\ref{1.40}). Then, we use the Strichartz estimates obtained in the time-independent case to show that $I - i T_{\tilde V_2, \tilde V_1}$ is invertible.
Indeed, if $A$ is an invertible operator and $\|A-B\| < 1/\|A^{-1}\|$, then $B$ is also invertible and its inverse is given by \begin{equation} B^{-1} = A^{-1} + A^{-1}(A-B)A^{-1} + \ldots. \end{equation} This is the case here, so $I - i \tilde T_{\tilde V_2, \tilde V_1}$ is invertible.
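To quantify this step: the series converges geometrically because $\|A^{-1}(A-B)\| \leq \|A^{-1}\| \|A-B\| < 1$, and, writing $B = A(I - A^{-1}(A-B))$, \begin{equation} B^{-1} = \sum_{k=0}^{\infty} \big(A^{-1}(A-B)\big)^k A^{-1}, \qquad \|B^{-1}\| \leq \frac{\|A^{-1}\|}{1 - \|A^{-1}\| \|A-B\|}. \end{equation}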
\section{Proof of the results}
\subsection{Wiener's Theorem} Let $H$ be a Hilbert space, $\mathcal L(H, H)$ be the space of bounded linear operators from $H$ to itself, and $M_t H$ be the set of $H$-valued measures of finite mass on the Borel algebra of $\mathbb R$. $M_t H$ is a Banach space, with the norm \begin{equation}\label{3.11}
\|\mu\|_{M_t H} = \sup \bigg\{\sum_{k=1}^n \|\mu(D_k)\|_H \mid D_k \text{ disjoint Borel sets}\bigg\}. \end{equation} Note that the absolute value of $\mu \in M_t H$ given by \begin{equation}
|\mu|(D) = \sup\bigg\{\sum_{k=1}^n \|\mu(D_k)\|_H \mid \bigcup_{k=1}^n D_k = D,\ D_k\text{ disjoint Borel sets}\bigg\}
\end{equation} is a positive measure of finite mass (bounded variation) and $\|\mu(D)\|_H \leq |\mu|(D)$. By the Radon--Nikodym Theorem, $\mu$ is in $M_t H$ if and only if it has a decomposition \begin{equation}
\mu = \mu_{\infty} |\mu|
\end{equation} with $|\mu| \in M_t$ (the space of real-valued measures in $t$ of finite mass) and $\mu_{\infty}~\in L^{\infty}_{d|\mu|(t)} H$.
Furthermore, $\|\mu\|_{M_t H} = \||\mu|\|_{M}$ and the same holds if we replace $H$ by any Banach space.
\begin{definition}\label{def_k} Let $K = \mathcal L(H, M_t H)$ be the algebra (under convolution) of bounded operators from $H$ to $M_t H$. \end{definition}
\begin{observation} Examples of elements of $K$ are: \begin{list}{\labelitemi}{\leftmargin=1em} \item the identity $I$: $I(h) = \delta_{t=0} h$ for any $h \in H$; \item scalar functions or measures, in general: if $f \in L^1_t$ or $M_t$ and $h \in H$, let $f(h) = f(t) h$; \item product form operators $k(t)=f(t) A_0$, where $f \in M_t$, $A_0 \in \mathcal L(H, H)$: for $h \in H$, let $A(t) h = f(t) (A_0 h)$;
\item collections of operators $A(t)$, such that $A(t) \in \mathcal L(H, H)$ for almost all $t$ and $\int \|A(t)\|_{\mathcal L(H, H)} \dd t < \infty$; \item finally, we also give an example that falls under none of the above categories, but is covered by the definition of $K$. Let $H=L^2(\mathbb R)$, $f_0 \in L^2(\mathbb R)$ be fixed, and $A(t) = \big(f(t) f_0(t)\big) f_0$. Then, for any $f \in L^2$ \begin{equation}
\int_{\mathbb R} \|A(t) f\|_2 \dd t \leq \|f_0\|_2 \int_{\mathbb R} |f(t) f_0(t)| \dd t \leq \|f_0\|_2^2 \|f\|_2, \end{equation} so $A \in K$, but $A(t)$ is not bounded on $L^2$ for any $t$, because $L^2 \not \subset L^{\infty}$. \end{list} \end{observation}
The following lemma also serves to clarify Definition \ref{def_k}: \begin{lemma} $K$ takes $M_t H$ into itself by convolution, is a Banach algebra under convolution, and multiplication by bounded continuous functions (and $L^{\infty}$ Borel measurable functions) is bounded on $K$: \begin{equation}
\|f A\|_K \leq \|f\|_{\infty} \|A\|_K. \label{3.8}
\end{equation} Furthermore, by integrating an element $A$ of $K$ over $\mathbb R$ one obtains $\int_{\mathbb R} A \in \mathcal L(H, H)$, with $\|\int_{\mathbb R} A\|_{\mathcal L(H, H)} \leq \|A\|_K$. \end{lemma}
\begin{proof} Boundedness of multiplication by continuous or $L^{\infty}$ functions follows from the decomposition $\mu = \mu_{\infty} |\mu|$ for $\mu \in M_t H$. The last stated property is a trivial consequence of the definition of $M_t H$.
Let $\mu \in M_t H$, $A \in K$. Consider the product measure $\tilde \mu$ first defined on product sets $D_1 \times D_2 \subset \mathbb R \times \mathbb R$ by $\tilde \mu(D_1 \times D_2) = A(\mu(D_2))(D_1)$. This is again a measure of finite mass, $\tilde \mu \in M_{t, s} H$, and \begin{equation}
\|\tilde \mu\|_{M_{t, s} H} \leq \|A\|_K \|\mu\|_{M_t H}. \end{equation} We then naturally define the convolution of an element of $K$ with an element of $M_t H$, by setting $A(\mu)(D)=\tilde \mu(\{(t, s) \mid t+s \in D\})$.
Thus, each $A \in K$ defines a bounded translation-invariant linear map from $M_t H$ to itself: \begin{equation}
\|A(\mu)\|_{M_t H} \leq \|A\|_K \|\mu\|_{M_t H}. \label{3.9} \end{equation} The correspondence is bijective, as any translation-invariant $\tilde A \in \mathcal L(M_t H, M_t H)$ defines an element $A \in K$ by $A(h) = \tilde A(\delta_{t=0}h)$. These operations are indeed inverses of one another.
Associativity follows from Fubini's Theorem. $K$ is a Banach space by definition. The algebra property of the norm is immediate from (\ref{3.9}). \end{proof}
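To illustrate the algebra structure on the simplest class of examples: for product-form kernels, convolution in $K$ reduces to scalar convolution of the measures together with composition of the operators, \begin{equation} \big((f A_0) * (g B_0)\big)(t) = (f * g)(t) \, A_0 B_0, \qquad \|f A_0\|_K \leq \|f\|_{M_t} \|A_0\|_{\mathcal L(H, H)}, \end{equation} so the algebra property of the norm can be checked by hand in this case.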
Note that, due to our choice of a Hilbert space $H$, if $A \in K$ then $A^* \in K$ as well.
Define the Fourier transform of any element in $K$ by \begin{equation} \widehat A(\lambda) = \int_{\mathbb R} e^{-it\lambda} \dd A(t). \end{equation} This is a bounded operator from $H$ to itself.
By dominated convergence, $\widehat A(\lambda)$ is a strongly continuous (in $\lambda$) family of operators for each $A$ and, for each $\lambda$, \begin{equation}
\|\widehat A(\lambda)\|_{H \to H} \leq \|A\|_K. \end{equation} This follows from (\ref{3.8}).
The Fourier transform on $\mathbb R$ of the identity is $\widehat I(\lambda) = I$ for every $\lambda$; $\widehat{A^*} = (\widehat A)^*$. Also, the Fourier transform takes convolution to composition.
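For kernels given by integrable families of operators, the last property is the usual Fubini computation, \begin{equation} \widehat{A * B}(\lambda) = \int_{\mathbb R} e^{-it\lambda} \int_{\mathbb R} A(t-s) B(s) \dd s \dd t = \int_{\mathbb R} e^{-i\tau\lambda} A(\tau) \dd \tau \int_{\mathbb R} e^{-is\lambda} B(s) \dd s = \widehat A(\lambda)\, \widehat B(\lambda), \end{equation} and the general case follows by the same argument applied to operator-valued measures.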
If a kernel $A \in K$ has both a left inverse $B_{\ell}$ and a right inverse $B_r$, they must coincide: $B_{\ell} = B_{\ell} * I = B_{\ell} * (A * B_r) = (B_{\ell} * A) * B_r = I * B_r = B_r$.
Fix a cutoff function $\chi$ supported on a compact set and which equals one on some neighborhood of zero. We specify that the inverse Fourier transform on $\mathbb R$ is \begin{equation} f^{\vee}(t) = \frac 1 {2\pi} \int_{\mathbb R} e^{it\lambda} f(\lambda) \dd \lambda. \end{equation} \begin{theorem}\label{thm7} Let $K$ be the operator algebra of Definition \ref{def_k}. If $A \in K$ is invertible then $\widehat A(\lambda)$ is invertible for every $\lambda$. Conversely, assume $\widehat A(\lambda)$ is invertible for each $\lambda$, $A = I + L$, and \begin{equation}
\lim_{\epsilon \to 0} \|L(\cdot + \epsilon) - L\|_K = 0,\ \lim_{R \to \infty}\|(1-\chi(t/R)) L(t)\|_K =0. \end{equation} Then $A$ is invertible. \end{theorem} \begin{observation} If $A = I + L$ with $L$ in a closed unital subalgebra of $K$ preserved by the operations used in the proof, then $A^{-1}$ also belongs to the same subalgebra. This stability property helps in proving a precise decay rate for the inverse.
For example, if $\|A(t)\| \leq C \langle t \rangle^{-3/2}$, then $\|A^{-1}(t)\| \leq \tilde C \langle t \rangle^{-3/2}$ as well. \end{observation}
We do not use compactness or Fredholm's alternative explicitly. A bounded subset of $L^1_t$ is precompact if and only if its elements are equicontinuous in the mean and decay uniformly at infinity --- conditions analogous to those required above, which are, however, not equivalent to compactness in $L^1_t H$.
The set $K_c$ of equicontinuous operators, that is
\begin{equation} K_c = \{L \in K \mid \lim _{\epsilon \to 0} \|L(\cdot + \epsilon) - L\|_K = 0\} \end{equation} is a closed ideal, is translation invariant, contains the set of those kernels which are strongly measurable and $L^1$ (but $K_c$ is strictly larger), and $I$ is not in it.
Likewise, the set $K_0$ of kernels that decay at infinity, that is
\begin{equation} K_0 = \{L \in K \mid \lim_{R \to \infty}\|\chi_{|t|>R} L(t)\|_K =0\}, \end{equation} is a closed subalgebra. It contains all the examples of elements of $K$ listed above. Note that for operators $A \in K_0$ the Fourier transform is also a norm-continuous family of operators, not only strongly continuous.
The proof will also ensure that, if $I+L$ belongs to $(K_c \cap K_0) \oplus \mathbb C I$ and is invertible in $K$, then its inverse is of the same~form. \begin{proof}[Proof of Theorem \ref{thm7}] If $A$ is invertible, that is $A * A^{-1} = A^{-1} * A = I$, then applying the Fourier transform yields \begin{equation} \widehat A (\lambda) \widehat {A^{-1}}(\lambda) = \widehat {A^{-1}}(\lambda) \widehat A (\lambda) = I \end{equation} for each $\lambda$, so $\widehat A(\lambda)$ is invertible.
Conversely, assume $\widehat A(\lambda)$ is invertible for every $\lambda$. Without loss of generality, we can replace $A$ with $A * A^*$. Then, the theorem's hypotheses are preserved and, in addition, $\widehat A(\lambda)$ is non-negative for every $\lambda$.
A non-negative operator is invertible if and only if it is strictly positive, so $\widehat A(\lambda) >0$ for every $\lambda$.
Fix $\lambda_0 \in \mathbb R$ and let $\chi$ be a Schwartz-class function, such that $\supp \chi \subset [-2, 2]$ and $\chi(\lambda) = 1$ for $\lambda \in [-1, 1]$.
Also let $\eta = \widehat \chi$, $\eta_{\epsilon}(t) = \nobreak \epsilon \eta(\epsilon t) = \widehat{\chi(\epsilon^{-1} \cdot)}$.
For any kernel $B \in K_0$ --- that decays at infinity --- we next show that \begin{equation}\label{deca}
\big\|\big((e^{is\lambda_0}\eta_{\epsilon}(s)) * B\big)(t) - e^{it\lambda_0} \eta_{\epsilon}(t) \widehat B(\lambda_0) \big\|_K \to 0. \end{equation} On one hand, outside a large radius, \begin{equation}
\lim_{R \to \infty} \|(1-\chi(t/R)) B(t)\|_K = 0, \end{equation} while on a fixed ball of radius $R$, as $\epsilon \to 0$, \begin{equation}
\bigg\|\int_{|s|\leq R} \big(\eta_{\epsilon}(t) - \eta_{\epsilon}(t-s)\big) B(s) \dd s\bigg\|_K \leq \|B\|_K \cdot \Big\|\sup_{|s|\leq R}\big(\eta_{\epsilon}(t) - \eta_{\epsilon}(t - s)\big)\Big\|_{L^1_t} \to 0. \end{equation} By taking $R \to \infty$ and integrating separately inside and outside this radius, we evaluate the following expression in $t$: \begin{equation}\begin{aligned}\label{2.18}
&\bigg\|\big((e^{is\lambda_0}\eta_{\epsilon}(s)) * B\big)(t) - e^{it\lambda_0} \eta_{\epsilon}(t) \int_{\mathbb R} e^{-is\lambda_0} B(s) \dd s\bigg\|_K \leq \\
&\leq \bigg\|e^{it\lambda_0} \int \big(\eta_{\epsilon}(t) - \eta_{\epsilon}(t-s)\big) \chi(s/R) B(s) \dd s \bigg\|_K + 2 \|\eta_{\epsilon}\|_1 \big\|(1-\chi(t/R)) B(t)\big\|_K \\
&\leq \bigg\|\int_{|s|\leq 2R} \big(\eta_{\epsilon}(t) - \eta_{\epsilon}(t-s)\big) B(s) \dd s\bigg\|_K + 2 \|\eta_{\epsilon}\|_1 \big\|(1-\chi(t/R)) B(t)\big\|_K \to 0.
\end{aligned}\end{equation} In (\ref{2.18}) we employ the fact that $\|\eta_{\epsilon}\|_1 = \|\eta\|_1$ is independent of $\epsilon$.
Let \begin{equation}\begin{aligned} A_{\epsilon}(t) &= (I - e^{it\lambda_0} \eta_{\epsilon}(t)) + (e^{i t \lambda_0} \eta_{\epsilon}(t)) * A,\\ \tilde A_{\epsilon}(t) &= \big(I - e^{it\lambda_0} \eta_{\epsilon}(t)\big) + e^{it\lambda_0} \eta_{\epsilon}(t) \widehat A(\lambda_0).
\end{aligned}\end{equation} Note that $\widehat {A_{\epsilon}}(\lambda) = \widehat A(\lambda)$ when $|\lambda-\lambda_0| \leq \epsilon$: \begin{equation} \widehat {A_{\epsilon}}(\lambda) = \big(1-\chi(\epsilon^{-1} (\lambda-\lambda_0))\big)I + \chi(\epsilon^{-1} (\lambda-\lambda_0)) \widehat A(\lambda). \end{equation} By virtue of (\ref{deca}), \begin{equation}\label{2.22}
\lim_{\epsilon \to 0} \|A_{\epsilon}- \tilde A_{\epsilon}\|_K = 0. \end{equation}
The Fourier transform of $\tilde A_{\epsilon}$ has the form \begin{equation} \tilde A^{\wedge}_{\epsilon}(\lambda) = \big(1 - \chi(\epsilon^{-1} (\lambda-\lambda_0))\big) I + \chi(\epsilon^{-1} (\lambda-\lambda_0)) \widehat A(\lambda_0). \end{equation} Since $\widehat A(\lambda_0) > 0$ is strictly positive, it is bounded below: $\widehat A(\lambda_0) \geq c I$ for some $c > 0$. Taking $0 \leq \chi \leq 1$, $\tilde A^{\wedge}_{\epsilon}(\lambda)$ is then bounded below by $\min(1, c) I$, so it can be inverted for every $\lambda$, uniformly: \begin{equation}
\|\tilde A^{\wedge}_{\epsilon}(\lambda)^{-1}\|_{\mathcal L(H, H)} \leq \max (2, 2 \|\widehat A(\lambda_0)^{-1}\|_{\mathcal L(H, H)}). \end{equation} We can easily differentiate $\tilde A^{\wedge}_{\epsilon}(\lambda)$ as a function of $\lambda$, taking values in $\mathcal L(H, H)$: \begin{equation} \partial_{\lambda} \tilde A^{\wedge}_{\epsilon} (\lambda) = \epsilon^{-1} \chi'(\epsilon^{-1}\lambda) (\widehat A(\lambda_0) - I). \end{equation} Thus, we can do the same for its inverse: \begin{equation} \partial_{\lambda} (\tilde A^{\wedge}_{\epsilon})^{-1} (\lambda) = - \tilde A^{\wedge}_{\epsilon}(\lambda)^{-1} \epsilon^{-1} \chi'(\epsilon^{-1}\lambda) (\widehat A(\lambda_0) - I) \tilde A^{\wedge}_{\epsilon}(\lambda)^{-1}. \end{equation} Repeating this an arbitrary number of times, we obtain that the inverse is infinitely differentiable in $\lambda$, in a strong sense.
Moreover, $\tilde A^{\wedge}_{\epsilon}(\lambda) - I$ is compactly supported in $\lambda$, so the same is true for the inverse: $\tilde A^{\wedge}_{\epsilon}(\lambda)^{-1} - I$ is compactly supported.
Since $\tilde A^{\wedge}_{\epsilon}(\lambda)^{-1} - I$ is compactly supported and infinitely differentiable, it follows that its Fourier transform is in $L^1$. Therefore, $(\tilde A_{\epsilon})^{-1} \in K$.
The kernel $\tilde A_{\epsilon}$ is obtained from $\tilde A_1$ by the rescaling $t \mapsto \epsilon t$, together with a modulation; both operations preserve the $K$ norm and commute with convolution, so the same relation holds for the inverses and $\|\tilde A_{\epsilon}^{-1}\|_K$ is independent of $\epsilon$. By (\ref{2.22}), it follows that $A_{\epsilon}$ is also invertible for sufficiently small $\epsilon$.
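Concretely, writing $\tilde A_{\epsilon} = I + e^{it\lambda_0} \eta_{\epsilon}(t) (\widehat A(\lambda_0) - I)$, the non-identity part has $\epsilon$-independent size, \begin{equation} \|e^{it\lambda_0} \eta_{\epsilon}(t) (\widehat A(\lambda_0) - I)\|_K \leq \|\eta_{\epsilon}\|_{L^1_t} \|\widehat A(\lambda_0) - I\|_{\mathcal L(H, H)} = \|\eta\|_{L^1_t} \|\widehat A(\lambda_0) - I\|_{\mathcal L(H, H)}, \end{equation} which is the quantitative content of the rescaling remark.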
We have to consider infinity separately. As above, let $\eta_R = R \chi^{\vee}(R \cdot)$ and \begin{equation} A_R = (I - \eta_R) * A + \eta_R,
\end{equation} so that $\widehat A_R(\lambda) = \widehat A(\lambda)$ when $|\lambda|>2R$: \begin{equation} \widehat A_R(\lambda) = (1-\chi(R^{-1} \lambda)) \widehat A(\lambda) + \chi(R^{-1} \lambda) I. \end{equation} Then \begin{equation} A_R - I = (I - \eta_R) * (A - I) = (I - \eta_R) * L. \end{equation} At this step we use the equicontinuity assumption of the hypothesis, namely \begin{equation}
\lim_{\epsilon \to 0}\|L-L(\cdot + \epsilon)\|_K = 0. \end{equation} We write $\eta_R = (\chi_{[-\epsilon, \epsilon]} \eta_R) + (1-\chi_{[-\epsilon, \epsilon]}) \eta_R$, where $\chi_{[-\epsilon, \epsilon]}$ is the characteristic function of $[-\epsilon, \epsilon]$. As $\epsilon \to 0$, \begin{equation}\begin{aligned}
&\|(\chi_{[-\epsilon, \epsilon]} \eta_R) * L - \big(\textstyle\int_{\mathbb R} \chi_{[-\epsilon, \epsilon]}(s) \eta_R(s) \dd s \big) L\|_K \leq \\
&\leq \Big(\int_{\mathbb R} \chi_{[-\epsilon, \epsilon]}(s) |\eta_R(s)| \dd s \Big) \sup_{\delta \leq \epsilon} \|L-L(\cdot + \delta)\|_K \to 0,
\end{aligned}\end{equation} because $\int_{\mathbb R} |\eta_R(s)| \dd s$ is independent of $R$. At the same time, \begin{equation}\begin{aligned}
&\lim_{R \to \infty} \|(1-\chi_{[-\epsilon, \epsilon]}) \eta_R\|_1 = 0, \end{aligned}\end{equation} so we obtain \begin{equation}
\lim_{R \to \infty} \|(I - \eta_R) * L\|_K = 0. \end{equation}
Thus, we can invert $A_R$ for large $R$. It follows that on some neighborhood of infinity the Fourier transform of $A$ equals that of an invertible operator.
Finally, consider a finite open cover of $\mathbb R$ of the form \begin{equation} \mathbb R = D_{\infty} \cup \bigcup_{j=1}^n D_j, \end{equation} where $D_j$ are open sets and $D_{\infty}$ is an open neighborhood of infinity, such that for $1 \leq j \leq n$ and for $j=\infty$ we have $\widehat A^{-1} = \widehat A_j^{-1}$ on the open set $D_j$. Take a smooth partition of unity subordinate to this cover, that is \begin{equation} 1 = \sum_j \chi_j,\ \supp \chi_j \subset D_j, \chi_j \text{ smooth}. \end{equation} Then the following element of $K$ is the inverse of $A$: \begin{equation} A^{-1} = \sum_{j=1}^n \widehat \chi_j * A_j^{-1} + (I - \sum_{j=1}^n \widehat \chi_j) * A_{\infty}^{-1}. \end{equation}
\end{proof}
We are also interested in whether, if $A$ is upper triangular, meaning that $A$ is supported on $[0, \infty)$ in the $t$ variable, the inverse of $A$ has the same property. \begin{lemma}\label{lemma8} Given $A \in K$ upper triangular with $A^{-1} \in K$, $A^{-1}$ is upper triangular if and only if $\widehat A$ can be extended to a weakly analytic family of invertible operators in the lower half-plane, which is continuous up to the boundary, uniformly bounded, and with uniformly bounded inverse. \end{lemma} Intuitively, this lemma shows that convolution operators in $K$ with Volter\-ra-type kernels (i.e.\ no future dependence) have inverses with the same property, if the inverses exist. \begin{proof}
In one direction, assume that both $A^{-1}$ and $A$ are upper triangular. Then, one can construct $\widehat A(\lambda)$ and $\widehat {A^{-1}}(\lambda)$ in the lower half-plane, as their defining integrals converge there. Strong continuity follows by dominated convergence and weak analyticity by means of the Cauchy integral formula. Furthermore, both $\widehat A(\lambda)$ and $\widehat {A^{-1}}(\lambda)$ are bounded, by $\|A\|_K$ and $\|A^{-1}\|_K$ respectively, and they are inverses of one another.
Conversely, let $A_- = \chi_{(-\infty, 0]} A^{-1}$ and $A_+ = \chi_{[0, \infty)} A^{-1}$. Taking the Fourier transform, one has that for each $\lambda$ \begin{equation} \widehat {A_-}(\lambda) + \widehat {A_+}(\lambda) = \widehat {A^{-1}}(\lambda). \end{equation} On the lower half-plane, $\widehat {A^{-1}}(\lambda) = (\widehat A)^{-1}(\lambda)$ is uniformly bounded by hypothesis. Likewise, $\widehat {A_+}(\lambda)$ is bounded as the Fourier transform of an upper triangular operator. Thus, $\widehat {A_-}(\lambda)$ too is bounded on the lower half-plane.
However, $A_-$ is lower triangular, so its Fourier transform is also bounded in the upper half-plane. It follows that $\widehat {A_-}$ extends to an entire operator-valued function, bounded on the whole plane. By Liouville's theorem, then, $\widehat {A_-}$ must be constant, so $A_-$ can only have singular support at zero. Therefore $A^{-1}$ is upper triangular. \end{proof}
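Underlying the half-plane boundedness used repeatedly above is the elementary estimate: if $A_+ \in K$ is supported in $[0, \infty)$ and $\Im \lambda \leq 0$, then $|e^{-it\lambda}| = e^{t \Im \lambda} \leq 1$ on the support, so \begin{equation} \|\widehat {A_+}(\lambda)\|_{\mathcal L(H, H)} \leq \|A_+\|_K, \end{equation} and symmetrically for lower triangular kernels in the upper half-plane.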
If $V \in L^{3/2}$, then we need an analogue of Wiener's Theorem not for $\mathcal L(L^2, L^2) = \widehat {L^{\infty}}$ or for $\mathcal L(L^1, L^1) = \mathcal M$, but for an interpolation space of the two. \begin{theorem}\label{thm6/5} Let $A \in K_{6/5} := (K, \widehat {L^{\infty}}_{\lambda} H)_{[1/3]}$. Assume $\widehat A(\lambda)$ is invertible for each $\lambda$, $A = I + L$, and \begin{equation}
\lim_{\epsilon \to 0} \|L(\cdot + \epsilon) - L\|_{K_{6/5}} = 0,\ \lim_{R \to \infty}\|(1-\chi(t/R)) L(t)\|_{K_{6/5}} =0. \end{equation} Then $A$ is invertible in $K_{6/5}$. \end{theorem} \begin{proof} Note that elements of $K$ give rise to bounded operators on $L^1_t H$, while elements of $\widehat {L^{\infty}}_{\lambda} H$ do so for $L^2_t H$, so by interpolation elements of $K_{6/5}$ are bounded on $L^{6/5}_t H$.
Furthermore, both $L^1_t H$ and $\widehat {L^{\infty}}_{\lambda} H$ are translation-invariant algebras and their elements have Fourier transforms bounded in $\mathcal L(H, H)$; thus, $K_{6/5}$ also possesses the same properties.
Then, the proof is identical to that of Wiener's Theorem, Theorem \ref{thm_1.6}, earlier in this section. \end{proof}
We next apply this abstract theory to the particular case of interest.
\subsection{The free evolution and resolvent in three dimensions} We return to the concrete case (\ref{eq_3.2}) or (\ref{3.2}) of a linear Schr\"{o}dinger equation on $\mathbb R^3$ with scalar or matrix nonselfadjoint potential $V$. For simplicity, the entire subsequent discussion revolves around the case of three spatial dimensions.
In order to apply the abstract Wiener theorem, Theorem \ref{thm7}, it is necessary to exhibit an operator-valued measure of finite mass. Accordingly, we start by proving Proposition \ref{prop_21}.
\begin{proof}[Proof of Proposition \ref{prop_21}] We provide two proofs --- a shorter one based on real interpolation and a longer one, using the atomic decomposition of Lorentz spaces (Lemma \ref{lemma_30}), that exposes the proof machinery underneath.
Without loss of generality, we consider only the interval $[0, \infty)$.
Following the first approach, note that by duality (\ref{eq_3.39}) is equivalent to \begin{equation}\label{eq_3.40}
\sum_{n \in \mathbb Z} 2^n \int_{2^n}^{2^{n+1}} |\langle e^{it\mathcal H_0} f, g(t) \rangle| \dd t \leq C \|f\|_{L^{6/5, 1}} \|g\|_{L^1_t L^{6/5, 1}_x} \end{equation} holding for any $f \in L^{6/5, 1}$ and $g \in L^1_t L^{6/5, 1}_x$.
From the usual dispersive estimate \begin{equation}
\|e^{-it\Delta} f\|_{p'} \leq |t|^{(3/2)(1-2/p)} \|f\|_p \end{equation} we obtain that \begin{equation}
\int_{2^n}^{2^{n+1}} |\langle e^{it\mathcal H_0} f, g(t) \rangle| \dd t \leq 2^{n(3/2-3/p)} \|f\|_p \|g\|_{L^1_t L^p_x}. \end{equation} Restated, this means that the bilinear mapping \begin{equation}\begin{aligned} & T:L^p \times L^1_t L^p_x \to \ell^{\infty}_{3/p-3/2} \text{ (following the notation of Proposition \ref{prop_33})},\\ & T=(T_n)_{n \in \mathbb Z},\ T_n(f, g) = \int_{2^n}^{2^{n+1}} \langle e^{it\mathcal H_0} f, g(t) \rangle \dd t \end{aligned}\end{equation} is bounded for $1 \leq p \leq 2$.
Interpolating between $p=1$ and $p=2$, by using the real interpolation method (Theorem \ref{thm_32}) with $\theta=1/3$ and $q_1=q_2=1$, directly shows that \begin{equation} T: (L^1, L^2)_{1/3, 1} \times (L^1_t L^1_x, L^1_t L^2_x)_{1/3, 1} \to (\ell^{\infty}_{3/2}, \ell^{\infty}_0)_{1/3, 1} \end{equation} is bounded. By Proposition \ref{prop_33}, \begin{equation}\begin{aligned} (L^1, L^2)_{1/3, 1} &= L^{6/5, 1},\\ (L^1_t L^1_x, L^1_t L^2_x)_{1/3, 1} &= L^1_t (L^1_x, L^2_x)_{1/3, 1} = L^1_t L^{6/5, 1}_x,\\ (\ell^{\infty}_{3/2}, \ell^{\infty}_0)_{1/3, 1} &= \ell^1_1. \end{aligned} \end{equation} Hence $T$ is bounded from $L^{6/5, 1} \times L^1_t L^{6/5, 1}_x$ to $\ell^1_1$, which implies (\ref{eq_3.39}).
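For the record, the exponent bookkeeping in this interpolation is \begin{equation} \frac 1 p = \frac{1-\theta}{1} + \frac{\theta}{2} = \frac 2 3 + \frac 1 6 = \frac 5 6, \qquad s = (1-\theta) \cdot \frac 3 2 + \theta \cdot 0 = 1 \qquad (\theta = 1/3), \end{equation} matching the space $L^{6/5, 1}$ and the weight of $\ell^1_1$.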
The alternative approach is based on the atomic decomposition of $L^{6/5, 1}$. By Lemma \ref{lemma_30}, \begin{equation}\begin{aligned} f = \sum_{j \in \mathbb Z} \alpha_j a_j, g(t) = \sum_{k \in \mathbb Z} \beta_k(t) b_k(t), \end{aligned}\end{equation} where $a_j$ and, for each $t$, $b_k(t)$ are atoms with \begin{equation}\begin{aligned}
\mu(\supp(a_j)) &= 2^j,& \esssup |a_j| &= 2^{-5j/6},\\
\mu(\supp(b_k(t))) &= 2^k,& \esssup |b_k(t)| &= 2^{-5k/6} \end{aligned}\end{equation} (here $\mu$ is the Lebesgue measure on $\mathbb R^3$), and the coefficients $\alpha_j$ and $\beta_k(t)$ satisfy \begin{equation}
\sum_{j \in \mathbb Z} |\alpha_j| \leq C \|f\|_{L^{6/5, 1}},\ \sum_{k \in \mathbb Z} |\beta_k(t)| \leq C \|g(t)\|_{L^{6/5, 1}}. \end{equation} Integrating in time and exchanging summation and integration lead to \begin{equation}
\sum_k \int_0^{\infty} |\beta_k(t)| \dd t \leq C \|g\|_{L^1_t L^{6/5, 1}_x}. \end{equation} Since (\ref{eq_3.40}) is bilinear in $f$ and $g$, it suffices to prove it for only one pair of atoms. Fix indices $j_0$ and $k_0 \in \mathbb Z$; the problem reduces to showing that \begin{equation}\label{eq_3.51}
\sum_{n \in \mathbb Z} 2^n \int_{2^n}^{2^{n+1}} \langle e^{it \mathcal H_0} a_{j_0}, \beta_{k_0}(t) b_{k_0}(t) \rangle \dd t \leq C \int_0^{\infty} |b_{k_0}(t)| \dd t. \end{equation} The reason for making an atomic decomposition is that atoms are in $L^1 \cap L^{\infty}$, instead of merely in $L^{6/5, 1}$, enabling us to employ both $L^1$ to $L^{\infty}$ decay and $L^2$ boundedness estimates in the study of their behavior. For each $n$, \begin{equation}\begin{aligned}\label{eq_1.352}
\int_{2^n}^{2^{n+1}} \langle e^{it \mathcal H_0} a_{j_0}, \beta_{k_0}(t) b_{k_0}(t) \rangle \dd t &\leq C 2^{-3n/2} \|a_{j_0}\|_1 \sup_t \|b_{k_0}(t)\|_1 \int_{2^n}^{2^{n+1}} |\beta_{k_0}(t)| \dd t \\
&= C 2^{-3n/2} 2^{j_0/6} 2^{k_0/6} \int_{2^n}^{2^{n+1}} |\beta_{k_0}(t)| \dd t \end{aligned}\end{equation} as a consequence of the $L^1 \to L^{\infty}$ decay estimate. At the same time, by the $L^2$ boundedness of the evolution, \begin{equation} \begin{aligned}\label{eq_1.353}
\int_{2^n}^{2^{n+1}} \langle e^{it \mathcal H_0} a_{j_0}, \beta_{k_0}(t) b_{k_0}(t) \rangle \dd t & \leq C \|a_{j_0}\|_2 \sup_t \|b_{k_0}(t)\|_2 \int_{2^n}^{2^{n+1}} |\beta_{k_0}(t)| \dd t \\
&= C 2^{-j_0/3} 2^{-k_0/3} \int_{2^n}^{2^{n+1}} |\beta_{k_0}(t)| \dd t. \end{aligned} \end{equation} Using the first estimate (\ref{eq_1.352}) for large $n$, namely $n\geq j_0/3 + k_0/3$ (where $2^{-3n/2} 2^{j_0/6} 2^{k_0/6} \leq 2^{-n}$), and the second estimate (\ref{eq_1.353}) for small $n$, $n<j_0/3 + k_0/3$ (where $2^{-j_0/3} 2^{-k_0/3} \leq 2^{-n}$), we always obtain that \begin{equation}
\int_{2^n}^{2^{n+1}} \langle e^{it \mathcal H_0} a_{j_0}, \beta_{k_0}(t) b_{k_0}(t) \rangle \dd t \leq C 2^{-n} \int_{2^n}^{2^{n+1}} |\beta_{k_0}(t)| \dd t. \end{equation} Multiplying by $2^n$ and summing over $n \in \mathbb Z$ we retrieve (\ref{eq_3.51}), which in turn proves (\ref{eq_3.40}). \end{proof}
The resolvent of the unperturbed Hamiltonian, $R_0(\lambda) = (\mathcal H_0 - \lambda)^{-1}$, is given by (\ref{eq_3.55}) in the scalar case (\ref{eq_3.2}) and (\ref{eq_3.56}) in the matrix case (\ref{3.2}). In either case, $R_0(\lambda) = (\mathcal H_0 - \lambda)^{-1}$ is an analytic function, on $\mathbb C \setminus [0, \infty)$ or respectively on $\mathbb C \setminus ((-\infty, -\mu] \cup [\mu, \infty))$. It can be extended to a continuous function in the closed lower half-plane or the closed upper half-plane, but not both at once, due to a jump discontinuity across $\sigma(\mathcal H_0)$.
The resolvent is the Fourier transform of the time evolution. We formally state the known connection between $e^{it\mathcal H_0}$ and the resolvent $R_0 = (\mathcal H_0 - \lambda)^{-1}$. \begin{lemma}\label{lemma_22} Let $\mathcal H_0$ be given by (\ref{eq_3.2}) or (\ref{3.2}). For any $f \in L^{6/5, 1}$ and $\lambda$ in the lower half-plane, the integral \begin{equation} \lim_{\rho \to \infty} \int_0^{\rho} e^{-it\lambda} e^{it \mathcal H_0} f \dd t \label{3.47} \end{equation} converges in the $L^{6, \infty}$ norm and equals $i R_0(\lambda) f$ or $i R_0(\lambda-i0) f$ in case $\lambda \in \mathbb R$.
Furthermore, for real $\lambda$, \begin{equation} \lim_{\rho \to \infty} \int_{-\rho}^{\rho} e^{-it\lambda} e^{it \mathcal H_0} f \dd t = i(R_0(\lambda-i0) - R_0(\lambda+i0)) f, \label{3.39}\end{equation} also in the $L^{6, \infty}$ norm. \end{lemma}
\begin{proof}
Note that (\ref{3.47}) is dominated by (\ref{eqn_3.39}), \begin{equation}
\int_0^{\infty} \|e^{it \mathcal H_0} f\|_{L^{6, \infty}} \dd t, \end{equation} and this ensures its absolute convergence. Next, both (\ref{3.47}), as a consequence of the previous argument, and $i R_0(\lambda-i0)$ are bounded operators from $L^{6/5, 1}$ to $L^{6, \infty}$. To show that they are equal, it suffices to address this issue over a dense set. Observe that \begin{equation} \int_0^{\rho} e^{-it(\lambda-i\epsilon)} e^{it \mathcal H_0} f \dd t = iR_0(\lambda-i\epsilon) (f - e^{-i\rho(\lambda-i\epsilon)} e^{i\rho \mathcal H_0} f). \end{equation} Thus, if $f \in L^2 \cap L^{6/5, 1}$, considering the fact that $e^{it \mathcal H_0}$ is unitary and $R_0(\lambda-i\epsilon)$ is bounded on $L^2$, \begin{equation} \lim_{\rho \to \infty} \int_0^{\rho} e^{-it(\lambda-i\epsilon)} e^{it \mathcal H_0} f \dd t = iR_0(\lambda-i\epsilon) f. \label{3.54} \end{equation} Letting $\epsilon$ go to zero, the left-hand side in (\ref{3.54}) converges, by dominated convergence, to (\ref{3.47}), while the right-hand side (also by dominated convergence, using the explicit form (\ref{eq_3.55})-(\ref{eq_3.56}) of the operator kernels) converges to $iR_0(\lambda-i0) f$. Statement (\ref{3.39}) follows directly. \end{proof}
\subsection{The exceptional set and the resolvent} This section and the next consist mainly of generalizations of known results, following models such as \cite{agmon}, \cite{schlag}, and others cited. The main novelty is that the proofs are carried out in a scaling- and translation-invariant setting.
We explore properties of the perturbed resolvent $R_V$. Important in this context is the \emph{Birman-Schwinger operator}, \begin{equation}\label{2.56} \widehat T_{V_2, V_1} (\lambda) = i V_2 R_0(\lambda) V_1, \end{equation} where $V = V_1 V_2$ and $V_1$, $V_2$ are as in (\ref{1.35}) or (\ref{1.36}).
The Birman-Schwinger operator is uniformly bounded on $L^2$ for $\lambda \in \mathbb C$, including boundary values of $\lambda$ along the real line. \begin{lemma}\label{le27} Take $V \in L^{3/2, \infty}$; then there exists $C$ such that for any $\lambda \in \mathbb C$ \begin{equation}
\|V_2 R_0(\lambda) V_1\|_{\mathcal L(L^2_x, L^2_x)} \leq C < \infty. \end{equation} \end{lemma} \begin{proof} By (\ref{eq_3.55}) or (\ref{eq_3.56}), the convolution kernel of $R_0(\lambda)$ is in $L^{3, \infty}$ with uniformly bounded norm. Likewise, $V_1$ and $V_2$ are in $L^{3, \infty}$. Then \begin{equation}
\|V_2 R_0(\lambda) V_1 f\|_{L^2_x} \leq C \|R_0(\lambda) V_1 f\|_{L^{6/5, 2}_x} \leq C \|V_1 f\|_{L^{6, 2}_x} \leq C \|f\|_{L^2_x}. \end{equation} Indeed, by Proposition \ref{prop_holder}, $L^{6/5, 2} * L^{3, \infty} \mapsto L^{6, 2}$, $L^2 \cdot L^{3, \infty} \mapsto L^{6/5, 2}$, and $L^{6, 2} \cdot L^{3, \infty} \mapsto L^2$. \end{proof}
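For the record, the Lorentz exponents in this chain combine as \begin{equation} 1 + \frac 1 6 = \frac 5 6 + \frac 1 3, \qquad \frac 5 6 = \frac 1 2 + \frac 1 3, \qquad \frac 1 2 = \frac 1 6 + \frac 1 3, \end{equation} for the convolution $L^{6/5, 2} * L^{3, \infty} \mapsto L^{6, 2}$ and the multiplications $L^2 \cdot L^{3, \infty} \mapsto L^{6/5, 2}$ and $L^{6, 2} \cdot L^{3, \infty} \mapsto L^2$, respectively, with the second indices combining according to Proposition \ref{prop_holder}.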
The relation between the Birman-Schwinger operator and the perturbed resolvent $R_V=(\mathcal H_0 + V - \lambda)^{-1}$ is that \begin{equation}\label{eq_3.64} R_V(\lambda) = R_0(\lambda) - R_0(\lambda) V_1 (I + V_2 R_0(\lambda) V_1)^{-1} V_2 R_0(\lambda) \end{equation} and \begin{equation}\label{eq_3.65} (I + V_2 R_0(\lambda) V_1)^{-1} = I - V_2 R_V(\lambda) V_1. \end{equation} Both follow by direct computation from the resolvent identity: \begin{equation} R_V(\lambda) = R_0(\lambda) - R_0(\lambda) V R_V(\lambda) = R_0(\lambda) - R_V(\lambda) V R_0(\lambda). \end{equation}
\begin{definition}\label{def_7} Given $V \in L^{3/2, \infty}_0$, its exceptional set $\mathcal E$ is the set of $\lambda$ in the complex plane for which $I - i\widehat T_{V_2, V_1}(\lambda)$ is not invertible from $L^2$ to itself. \end{definition}
Other choices of $V_1$ and $V_2$ such that $V = V_1 V_2$, $V_1$, $V_2 \in L^{3, \infty}$ lead to the same operator up to conjugation.
Below we summarize a number of observations concerning the exceptional sets of operators in the form (\ref{eq_3.2}) or (\ref{3.2}). \begin{proposition}\label{prop28} Assume $V \in L^{3/2, \infty}_0$ is a potential as in (\ref{eq_3.2}) or (\ref{3.2}) and denote its exceptional set by $\mathcal E$.
\begin{itemize}\item[i)] $\mathcal E$ is bounded and discrete outside $\sigma(\mathcal H_0)$, but can accumulate toward $\sigma(\mathcal H_0)$. $\mathcal E \cap \sigma(\mathcal H_0)$ has null measure (as a subset of $\mathbb R$). Elements of $\mathcal E \setminus \sigma(\mathcal H_0)$ are eigenvalues of $\mathcal H = \mathcal H_0 + V$.
\item[ii)] If $V$ is real and matrix-valued as in (\ref{3.2}), then embedded exceptional values must be eigenvalues, except for the endpoints of $\sigma(\mathcal H_0)$, which need not be eigenvalues. If $V$ is complex matrix-valued as in (\ref{3.2}), there is no restriction on embedded exceptional values.
\item[iii)] If $V$ is complex scalar as in (\ref{eq_3.2}) or complex matrix-valued as in (\ref{3.2}), then $\mathcal E$ is symmetric with respect to the real axis. In case $V$ is real-valued and as in (\ref{3.2}), $\mathcal E$ is symmetric with respect to both the real axis and the origin. \end{itemize} \end{proposition}
\begin{proof} By Lemma \ref{le27}, for $V \in L^{3/2, \infty}$, $V_2 R_0(\lambda) V_1$ is $L^2$-bounded for every~$\lambda$.
The Rollnick class is the set of measurable potentials $V$ whose Rollnick norm \begin{equation}
\|V\|_{\mathcal R} = \int_{(\mathbb R^3)^2} \frac {|V(x)| |V(y)|}{|x-y|^2} \dd x \dd y \end{equation} is finite. The Rollnick class $\mathcal R$ contains $L^{3/2}$. For a potential $V \in \mathcal R$, the Birman-Schwinger operator $\widehat T_{V_2, V_1}(\lambda)$ is Hilbert-Schmidt for every value of $\lambda$ in the lower half-plane up to the boundary.
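Indeed, in the scalar case (\ref{eq_3.55}) the free resolvent kernel has absolute value at most $(4\pi |x-y|)^{-1}$ in the relevant half-plane, so \begin{equation} \|V_2 R_0(\lambda) V_1\|_{HS}^2 \leq \int_{(\mathbb R^3)^2} \frac {|V(x)|\, |V(y)|} {(4\pi)^2 |x-y|^2} \dd x \dd y = \frac{\|V\|_{\mathcal R}}{(4\pi)^2}, \end{equation} and the matrix case (\ref{eq_3.56}) is analogous.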
Since $V \in L^{3/2, \infty}_0$, by Proposition \ref{prop_a3} $V_1$ and $V_2$ are in $L^{3, \infty}_0$, so there exist bounded, compactly supported $V_1^n \to V_1$ and $V_2^n \to V_2$ in $L^{3, \infty}$. The corresponding operators $V_2^n R_0(\lambda) V_1^n$ are Hilbert-Schmidt and converge in operator norm, so $\widehat T_{V_2, V_1}(\lambda)$ is compact whenever $V$ is in $L^{3/2, \infty}_0$.
By the analytic and meromorphic Fredholm theorems (for statements see \cite{reesim}, p.\ 101, and \cite{reesim4}, p.\ 107), the exceptional set $\mathcal E$ is closed, bounded, and consists of at most a discrete set outside $\sigma(\mathcal H_0)$, which may accumulate toward $\sigma(\mathcal H_0)$, and a set of measure zero contained in $\sigma(\mathcal H_0)$.
Assuming that $V \in L^{3/2, \infty}_0$ is real-valued and scalar, the exceptional set resides on the real line. Indeed, if $\lambda$ is exceptional, then by the Fredholm alternative (\cite{reesim1}, p.\ 203) the equation \begin{equation} f = -V_2 R_0(\lambda) V_1 f
\end{equation} must have a solution $f \in L^2$. Then $g = R_0(\lambda) V_1 f$ is in $|\nabla|^{-2} L^{6/5, 2} \subset L^{6, 2}$ and satisfies \begin{equation} g = - R_0(\lambda) V g. \label{3.471} \end{equation}
If $\lambda \in \mathcal E \setminus \sigma(\mathcal H_0)$, the kernel's exponential decay implies that $\lambda$ is an eigenvalue for $\mathcal H$ and that the corresponding eigenvectors must be at least in $\langle \nabla \rangle^{-2} L^{6/5, 2}$; in fact, by Agmon's argument, they have exponential decay.
Furthermore, by applying $\mathcal H_0 -\lambda$ to both sides we obtain \begin{equation} (\mathcal H_0 + V - \lambda) g = 0. \end{equation} In the case of a real scalar potential $V$, $\mathcal H_0+V$ is self-adjoint, so this is a contradiction for $\lambda \not \in \mathbb R$. In general, exceptional values off the real line can indeed occur.
For real-valued $V \in L^{3/2, \infty}_0$ having the matrix form (\ref{3.2}), any embedded exceptional values must be eigenvalues, following the argument of Lemma 4 of Erdogan--Schlag \cite{erdsch2}.
Explicitly, consider $\lambda \in \mathcal E \cap \sigma(\mathcal H_0) \setminus \{\pm \mu\}$; without loss of generality $\lambda>\mu$. It corresponds to a nonzero solution $G \in L^{6, 2}$ of \begin{equation}\label{eq_3.75} G = - R_0(\lambda-i0) V G. \end{equation} We show that $G \in L^2$ and that it is an eigenfunction of $\mathcal H$. Let \begin{equation} G = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix},\ V = \begin{pmatrix} W_1 & W_2 \\ -W_2 & -W_1 \end{pmatrix},\ \mathcal H_0 = \begin{pmatrix} \Delta - \mu & 0 \\ 0 & -\Delta + \mu \end{pmatrix}, \end{equation} where $W_1$ and $W_2$ are real-valued. We expand (\ref{eq_3.75}) accordingly into \begin{equation}\begin{aligned}\label{eq_3.77} g_1 &= (-\Delta + \lambda + \mu)^{-1}(W_1 g_1 + W_2 g_2) \\ g_2 &= (-\Delta - (\lambda - \mu - i0))^{-1}(W_2 g_1 + W_1 g_2). \end{aligned}\end{equation} This implies that $g_1 \in L^1 \cap L^6$ and \begin{equation}\begin{aligned} \langle g_2, W_2 g_1 + W_1 g_2 \rangle &= \langle (-\Delta - (\lambda - \mu - i0))^{-1}(W_2 g_1 + W_1 g_2), (W_2 g_1 + W_1 g_2) \rangle, \\ \langle g_1, W_2 g_2 \rangle &= \langle (-\Delta + \lambda + \mu)^{-1}(W_1 g_1 + W_2 g_2), W_2 g_2 \rangle, \\ \langle g_1, W_1 g_1 \rangle &= \langle (-\Delta + \lambda + \mu)^{-1}(W_1 g_1 + W_2 g_2), W_1 g_1 \rangle. \end{aligned}\end{equation} However, \begin{equation} \langle g_2, W_2 g_1 + W_1 g_2 \rangle = \overline{\langle g_1, \overline W_2 g_2 \rangle} + \langle g_2, W_1 g_2 \rangle. \end{equation} Since $W_2$ is real-valued, it follows that \begin{equation} \langle (-\Delta - (\lambda - \mu - i0))^{-1}(W_2 g_1 + W_1 g_2), (W_2 g_1 + W_1 g_2) \rangle \end{equation} is real-valued. Therefore the Fourier transform vanishes on a sphere: \begin{equation} (W_2 g_1 + W_1 g_2)^{\wedge}(\xi) = 0
\end{equation} for $|\xi|^2 = \lambda-\mu$. We then apply Agmon's bootstrap argument, as follows. By Corollary 13 of \cite{golsch}, if $f \in L^1$ has a Fourier transform that vanishes on the sphere, meaning $\hat f(\xi) = 0$ for every $\xi$ such that $|\xi|^2 = \lambda \ne 0$, then \begin{equation}
\|R_0(\lambda \pm i0) f\|_2 \leq C_{\lambda} \|f\|_1. \end{equation} Interpolating between this and \begin{equation}
\|R_0(\lambda \pm i0) f\|_{4} \leq C_{\lambda} \|f\|_{4/3}, \end{equation} which holds without conditions on $\hat f$, we obtain that for $1 < p < 4/3$ and for $\hat f = 0$ on the sphere of radius $\sqrt \lambda>0$ \begin{equation}
\|R_0(\lambda \pm i0) f\|_{L^{2p/(2-p), 2}} \leq C_{\lambda} \|f\|_{L^{p, 2}}. \end{equation} Thus, starting with the right-hand side of (\ref{eq_3.77}) in $L^{6/5, 2}$, we obtain that $g_2 \in L^{3, 2}$, a gain over $L^{6, 2}$. Iterating twice, we obtain that $g_2 \in L^2$. Therefore $G$ is an $L^2$ eigenvector.
Thus, for a real-valued $V \in L^{3/2, \infty}_0$ having the matrix form (\ref{3.2}), the exceptional set consists only of eigenvalues, potentially together with the endpoints of the continuous spectrum $\pm \mu$.
For a complex potential of the form (\ref{3.2}), neither of the previous arguments holds. Embedded exceptional values can occur and they need not be eigenvalues.
Next, we examine symmetries of the exceptional set $\mathcal E$. If $V$ is real-valued and scalar, we have already characterized $\mathcal E$ as being situated on the real line. If $V$ is scalar, but complex-valued, then consider an exceptional value $\lambda$, for which, due to compactness, there exists $f \in L^2$ such that
\begin{equation} f = -|V|^{1/2} \sgn V R_0(\lambda) |V|^{1/2} f. \end{equation} Then \begin{equation}
(\sgn V \overline f) = -|V|^{1/2} R_0(\overline \lambda) |V|^{1/2} \sgn {\overline V} (\sgn V \overline f), \end{equation} so the adjoint has an exceptional value at $\overline \lambda$. However, $\sigma(\widehat T_{V_2, V_1}(\lambda)) = \sigma(\widehat T_{V_1, V_2}(\lambda)^*)$, so all this proves that the exceptional set $\mathcal E$ is symmetric with respect to the real axis.
If $V$ has the matrix form (\ref{3.2}), then note that $\sigma_1 V \sigma_1 = -\overline V$, $\sigma_3 V \sigma_3 = V^*$, where $\sigma_1$ is the Pauli matrix \begin{equation} \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\ \sigma_1 \sigma_3 = -\sigma_3 \sigma_1. \end{equation} Let $\lambda$ be an exceptional value, for which \begin{equation} f = -\sigma_3 (\sigma_3 V)^{1/2} R_0(\lambda) (\sigma_3 V)^{1/2} f. \end{equation} Here $\sigma_3 V = \begin{pmatrix} W_1 & W_2 \\ \overline W_2 & W_1 \end{pmatrix}$ is a selfadjoint matrix.
Then \begin{equation}\begin{aligned} \overline f &= -\sigma_3 (\sigma_3 \overline V)^{1/2} R_0(\overline \lambda) (\sigma_3 \overline V)^{1/2} \overline f \\ &= -\sigma_3 (\sigma_3 V)^{1/2} \sigma_3 R_0(\overline \lambda) \sigma_3 (\sigma_3 V)^{1/2} \sigma_3 \overline f \\ &= -\sigma_3 (\sigma_3 V)^{1/2} R_0(\overline \lambda) (\sigma_3 V)^{1/2}\sigma_3 \overline f \end{aligned}\end{equation} since $R_0$ commutes with $\sigma_3$, so whenever $\lambda$ is an exceptional value so is $\overline \lambda$.
If $V$ as in (\ref{3.2}) is a real-valued matrix, then by the same methods we obtain that $-\lambda$ is an exceptional value whenever $\lambda$ is an exceptional value. \end{proof}
\subsection{The time evolution and projections}\label{sect_1.35} We begin with a basic lemma, which applies equally in the time-dependent case. \begin{lemma}\label{lemma_24} Assume $V \in L^{\infty}$ and the Hamiltonian is described by (\ref{eq_3.2}) or (\ref{3.2}). Then the equation \begin{equation} i \partial_t Z + \mathcal H Z = F,\ Z(0) \text{ given}, \label{1.358}\end{equation} admits a weak solution $Z$ for $Z(0) \in L^2$, $F \in L^{\infty}_t L^2_x$ and \begin{equation}
\|Z(t)\|_2 \leq C e^{t \|V\|_{\infty}} \|Z(0)\|_2 + \int_0^t e^{(t-s) \|V\|_{\infty}} \|F(s)\|_2 \dd s. \label{1.359}\end{equation} \end{lemma} \begin{proof} We introduce an auxiliary variable and write \begin{equation} i \partial_t Z + \mathcal H_0 Z = F - V Z_1,\ Z(0) \text{ given}.
\end{equation} Over a sufficiently small time interval $[T, T+\epsilon]$, whose size $\epsilon$ only depends on $\|V\|_{\infty}$, the map that associates $Z$ to some given $Z_1$ is a contraction, in a sufficiently large ball in $L^{\infty}_t L^2_x$. The fixed point of this contraction mapping is then a solution to (\ref{1.358}).
This shows that the equation is locally solvable and, by bootstrapping, since the length of the interval is independent of the size of $F$ and of the initial data $Z(T)$, we obtain an exponentially growing global solution. The bound (\ref{1.359}) follows by Gronwall's inequality. \end{proof}
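To make the contraction step explicit: since $e^{it \mathcal H_0}$ is unitary on $L^2$, Duhamel's formula on $[T, T+\epsilon]$ gives, for two choices $Z_1$, $Z_1'$ of the auxiliary variable and the corresponding solutions $Z$, $Z'$, \begin{equation} \|Z - Z'\|_{L^{\infty}_{[T, T+\epsilon]} L^2_x} \leq \epsilon \|V\|_{\infty} \|Z_1 - Z_1'\|_{L^{\infty}_{[T, T+\epsilon]} L^2_x}, \end{equation} so any $\epsilon < \|V\|_{\infty}^{-1}$, for instance $\epsilon = (2 \|V\|_{\infty})^{-1}$, makes the map a contraction.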
For a nonselfadjoint operator such as $\mathcal H$ given by (\ref{3.2}), the projections on various parts of the spectrum need not be selfadjoint operators. The following lemma characterizes such Riesz projections (following \cite{schlag}, where it appeared in a different form). \begin{lemma}\label{pp} Assume $V \in L^{3/2, \infty}_0$.\begin{enumerate}\item[i)] To each element $\zeta$ of the exceptional set of $\mathcal H$ outside of $\sigma(\mathcal H_0)$ there corresponds a family of operators
\begin{equation} P^k_{\zeta} = \frac 1 {2\pi i} \int_{|z-\zeta| = \epsilon} R_V(z) (z-\zeta)^k \dd z. \label{3.111} \end{equation} They have finite rank, $P^k_{\zeta} = 0$ for all $k \geq n$, for some $n$, $P^0_{\zeta}=(P^0_{\zeta})^2$, and more generally $(P_{\zeta}^k)(P_{\zeta}^{\ell}) = P_{\zeta}^{k+\ell}$. \item[ii)] The ranges of $P^k_{\zeta}$ and of their adjoints $(P^k_{\zeta})^*$ are spanned by exponentially decaying functions that also belong to $\langle \nabla \rangle ^{-2} L^{6/5, 2} \subset L^{6, 2}$.\\ Thus, each such projection is bounded from $L^{6/5, 2} + L^{\infty}$ to $L^1 \cap L^{6, 2}$. \end{enumerate} \end{lemma} Functions in $\Ran P^k_{\zeta}$ are called \emph{generalized eigenfunctions} of $\mathcal H$.
We also refer the reader to Hundertmark-Lee, \cite{hule}, who proved the $L^2$ exponential decay of (generalized) eigenfunctions in the gap $-\mu<\Rere \zeta<\mu$ under more general conditions.
\begin{proof} If $R_V(\zeta) = (\mathcal H_0 + V - \zeta)^{-1}$ exists as a bounded operator from $L^{6/5, 2}$ to $L^{6, 2}$, then $\zeta$ is not in the exceptional set and vice-versa, as a consequence of (\ref{eq_3.64}) and (\ref{eq_3.65}).
Form the contour integral, following Schlag \cite{schlag} and Reed--Simon \cite{reesim1},
\begin{equation} P^k_{\zeta} = \frac 1 {2\pi i} \int_{|z-\zeta| = \epsilon} R_V(z) (z-\zeta)^k \dd z. \end{equation} This integral is independent of $\epsilon$ if $\epsilon$ is sufficiently small and $P^k_{\zeta} = 0$ for $k \geq n$. Using the Cauchy integral, it immediately follows that $(P_{\zeta}^k)(P_{\zeta}^{\ell}) = P_{\zeta}^{k+\ell}$. Furthermore, \begin{equation}\label{3.135} \mathcal H P^0_{\zeta} = P^1_{\zeta} + \zeta P^0_{\zeta}. \end{equation}
It is a consequence of Fredholm's theorem that the range of $P^0_{\zeta}$ is finite dimensional; from (\ref{3.111}) it follows that $\Ran(P^0_{\zeta}) \subset L^2 \cap L^{6, 2}$. Also, $\Ran(P^0_{\zeta})$ is the generalized eigenspace of $\mathcal H - \zeta$, meaning \begin{equation} \Ran(P^0_{\zeta}) = \bigcup_{k \geq 0} \Ker((\mathcal H - \zeta)^k). \end{equation} One inclusion follows from (\ref{3.135}) and the fact that $P^k_{\zeta} = 0$ for $k \geq n$. The other inclusion is a consequence of the fact that, if $(\mathcal H - \zeta) f = 0$, then $R_V(z) f = (\zeta - z)^{-1} f$ and, using the definition (\ref{3.111}), $P^0_{\zeta} f = f$. For higher values of $k$ we proceed by induction.
Furthermore, $\Ran(P^0_{\zeta})$ consists of functions in $\langle \nabla \rangle^{-2} L^{6/5, 2}$. If $f$ is a generalized eigenfunction, meaning $f \in L^2 \cap L^{6, 2}$ and $(\mathcal H - \zeta)^n f = 0$, then \begin{equation} (\mathcal H_0 + V) f = \zeta f + g, \end{equation} where $g$ is also a generalized eigenfunction. Assuming by induction that $g \in \langle \nabla \rangle^{-2} L^{6/5, 2}$ (or is zero, to begin with), the same follows for $f$.
Likewise, if $g$ is in $L^1$ or exponentially decaying, we can infer the same about $f$. Indeed, assume that $g \in e^{-\epsilon |x|} L^{6/5, 2}$, for some small $\epsilon$. Note that \begin{equation} f = R_0(\zeta) (-V f + g) = (I + R_0(\zeta) V_2)^{-1}R_0(\zeta) (-V_1 f + g),
\end{equation} where $V = V_1 + V_2$ is now an additive decomposition (the names $V_1$ and $V_2$ are reused only for this argument). We choose $V_1$ and $V_2$ such that $V_1$ has compact support and $V_2$ is small in the $L^{3/2, \infty}$ norm. It follows that $R_0(\zeta) (-V_1 f + g)$ is in $e^{-\epsilon |x|} L^{6, 2}$, while $(I + R_0(\zeta) V_2)^{-1}$, given by an infinite Born series, is a bounded operator on the same space.
Thus, $f \in e^{-\epsilon |x|} L^{6, 2}$ is exponentially decaying. By suitably choosing a finite sequence of epsilons, the conclusion follows for all the generalized eigenfunctions associated to $\zeta$.
The range of $(P^0_{\zeta})^*$ is the generalized eigenspace of $\mathcal H^* - \overline \zeta$, which means that it is also finite-dimensional and spanned by exponentially decaying functions in $\langle \nabla \rangle^{-2} L^{6/5, 2}$. \end{proof}
Throughout the sequel, we assume that there are no exceptional values embedded in $\sigma(\mathcal H_0)$. By Fredholm's analytic theorem, this implies that there are finitely many exceptional values overall.
Then we can define $P_c$, the projection on the continuous spectrum, simply as the identity minus the sum of all projections corresponding to the exceptional values (which coincide, in this case, with the point spectrum): \begin{equation}\label{pc} P_c = I - P_p = I - \sum_{k=1}^n P_{\zeta_k}^0. \end{equation} $P_c$ commutes with $\mathcal H$ and with $e^{it \mathcal H}$, as a direct consequence of the definition and of Lemma \ref{pp}.
In order to characterize $P_c$, we employ the subsequent lemma, which appeared in Schlag \cite{schlag} under more stringent assumptions. \begin{lemma}\label{lemma_31} Consider $V \in L^{3/2, \infty}_0$ and assume that there are no exceptional values of $\mathcal H$ embedded in the spectrum of $\mathcal H_0$. Then for sufficiently large $y$ \begin{equation} \langle f, g \rangle = \frac 1 {2\pi i} \int_{\mathbb R} \langle (R_V(\lambda + iy) - R_V(\lambda - iy)) f, g \rangle \dd \lambda \end{equation} and the integral is absolutely convergent. Furthermore, \begin{equation}\label{2.93} \langle f, g \rangle = \frac 1 {2\pi i} \int_{\sigma(\mathcal H_0)} \langle (R_V(\lambda + i0) - R_V(\lambda - i0)) f, g \rangle \dd \lambda + \sum_{k=1}^n \langle P^0_{\zeta_k} f, g \rangle \end{equation} where $P^0_{\zeta_k}$ are projections corresponding to the finitely many eigenvalues $\zeta_k$.
\end{lemma}
\begin{proof}
Assume at first that $V \in L^{\infty}$ and take $y > \|V\|_{\infty}$. Then \begin{equation} I + V_2 R_0(\lambda \pm iy) V_1
\end{equation} must be invertible. Indeed, $V_1$ and $V_2$ are bounded $L^2$ operators of norm at most $\|V\|^{1/2}_{\infty}$ and \begin{equation}
\|R_0(\lambda \pm iy)\|_{2 \to 2} \leq d(\lambda \pm iy, \sigma(\mathcal H_0))^{-1} \leq 1/y, \end{equation} since $\mathcal H_0$ is selfadjoint with real spectrum. Therefore one can construct the inverse $(I + V_2 R_0(\lambda \pm iy) V_1)^{-1}$ by means of a power series. Thus \begin{equation}\begin{aligned} R_V(\lambda\pm iy) &= R_0(\lambda \pm iy) - R_0(\lambda \pm iy) V R_0(\lambda \pm iy) + \\ &+ R_0(\lambda\pm iy) V R_0(\lambda \pm iy) V_1 (I + V_2 R_0(\lambda\pm iy) V_1)^{-1} V_2 R_0(\lambda \pm iy) \end{aligned}\end{equation} is a bounded $L^2$ operator.
By Lemma \ref{lemma_24} $\chi_{t \geq 0} \langle e^{it \mathcal H} e^{-yt} f, g \rangle$ is an exponentially decaying function of $t$ and its Fourier transform is \begin{equation} \int_0^{\infty} \langle e^{-(y+i\lambda)t} e^{it \mathcal H} f, g \rangle \dd t = i\langle R_V(\lambda-iy) f, g \rangle. \end{equation} Combining this with the analogous result for negative times, we see that \begin{equation}\label{3.20}
(\langle e^{it \mathcal H} e^{-y|t|} f, g \rangle)^{\wedge} = -i \langle (R_V(\lambda+iy)-R_V(\lambda-iy)) f, g \rangle. \end{equation} The right-hand side is absolutely integrable, because \begin{equation}\begin{aligned}\label{3.99} R_V(\lambda) &= R_0(\lambda) - R_0(\lambda) V R_0(\lambda) + R_0(\lambda) V R_0(\lambda) V_1 (I + V_2 R_0(\lambda) V_1)^{-1} V_2 R_0(\lambda) \end{aligned}\end{equation} and \begin{equation}\begin{aligned}\label{3.100}
&\int_{-\infty}^{\infty} |\langle (R_0(\lambda+iy) - R_0(\lambda-iy)) f, g \rangle| \dd \lambda \\ &\leq \int_{-\infty}^{\infty} \frac 1 {2i} (\langle (R_0(\lambda+iy) - R_0(\lambda-iy)) f, f \rangle + \langle (R_0(\lambda+iy) - R_0(\lambda-iy)) g, g \rangle) \dd \lambda \\
&= \pi (\|f\|_2^2 + \|g\|_2^2). \end{aligned}\end{equation} The remaining terms are absolutely integrable due to smoothing estimates: \begin{equation}
\int_{-\infty}^{\infty} \||V|^{1/2} R_0(\lambda \pm iy) f\|_2^2 \dd \lambda \leq C \|f\|_2^2. \end{equation}
By the Fourier inversion formula, (\ref{3.20}) implies \begin{equation}\label{3.103} \frac 1 {2\pi i} \int_{\mathbb R} \langle (R_V(\lambda + iy) - R_V(\lambda - iy)) f, g \rangle \dd \lambda = \langle f, g \rangle. \end{equation} We then shift the integration contour toward the essential spectrum $\sigma(\mathcal H_0)$, leaving behind circular contours around the finitely many (by Fredholm's Theorem) eigenvalues. Each contour integral becomes a corresponding Riesz projection.
What is left is $P_c$, the projection on the continuous spectrum. The integral is still absolutely convergent due to (\ref{3.99}), (\ref{3.100}), and smoothing estimates. (\ref{2.93}) follows.
In the beginning we assumed that $V \in L^{\infty}$. Now consider the general case $V \in L^{3/2, \infty}_0$ and a sequence of approximations by bounded potentials $V^n = V_1^n V_2 ^n \in L^{\infty}$, such that $\|V^n - V\|_{L^{3/2, \infty}} \to 0$ as $n \to \infty$. Let $\mathcal E$ be the exceptional set of $V$. On the set $\{\lambda \mid d(\lambda, \mathcal E) \geq \epsilon\}$, the norm \begin{equation}
\|(I + V_2^n R_0(\lambda) V_1^n)^{-1}\|_{L^2 \to L^2} \end{equation} is uniformly bounded for large $n$. For some sufficiently high $n$, then, $\mathcal E(V^n) \subset \{\lambda \mid d(\lambda, \mathcal E) < \epsilon\}$. If
\begin{equation} y_0 = \sup \{|\Imim \lambda| \mid \lambda \in \mathcal E\}, \end{equation} then for any $y > y_0$ and sufficiently large $n$ \begin{equation}\label{3.104} \frac 1 {2\pi i} \int_{\mathbb R} \langle (R_{V^n}(\lambda + iy) - R_{V^n}(\lambda - iy)) f, g \rangle \dd \lambda = \langle f, g \rangle. \end{equation} Both for $V$ and for $V^n$ the integrals (\ref{3.103}) and (\ref{3.104}) converge absolutely and as $n \to \infty$ (\ref{3.104}) converges to (\ref{3.103}). To see this, subtract the corresponding versions of (\ref{3.99}) from one another and evaluate.
This proves (\ref{2.93}) for potentials $V \in L^{3/2, \infty}_0$, under the spectral assumption concerning the absence of embedded eigenvalues. \end{proof}
By Lemma \ref{lemma_31}, it follows that \begin{equation}\label{2.94} P_c = \chi(\mathcal H) = \frac 1 {2\pi i} \int_{\sigma(\mathcal H_0)} \big(R_V(\lambda+i0) - R_V(\lambda-i0)\big) \dd \lambda. \end{equation} $P_c$ is bounded on $L^2$; moreover, since each projection $P_{\zeta_k}^0$ is bounded from $L^{\infty} + L^{6/5, 2}$ to $L^{6, 2} \cap L^1$, so is $P_p = I-P_c$.
Therefore $P_c$ is bounded on $L^{6/5, q}$, $q \leq 2$, and on $L^{6, q}$, $q\geq 2$, as well as on intermediate spaces.
\subsection{Technical lemmas} In order to apply Wiener's Theorem --- Theorem \ref{thm_1.6} ---, we first need to exhibit an element of $K$. By Proposition \ref{prop_21}, \begin{equation} T_{V_2, V_1}(t) = V_2 e^{it \mathcal H_0} V_1 \end{equation} is such an element. However, another condition is that the Fourier transform $I - i \widehat T_{V_2, V_1}$ should be invertible at every $\lambda \in \mathbb R$. By Lemma \ref{lemma_22}, \begin{equation} I - i \widehat T_{V_2, V_1}(\lambda) = I + V_2 R_0(\lambda) V_1 \end{equation} and this is invertible for each $\lambda$, except for $\lambda \in \mathcal E$. This suffices only if there are no exceptional values; otherwise we need a correction.
By (\ref{eq_3.64}), we see that the problem is that $R_V$ is not uniformly bounded in the lower half-plane in $\mathcal L(L^{6/5, 2}, L^{6, 2})$. The solution will be to replace $R_V$ with $R_V P_c - (\lambda - i \delta)^{-1} P_p$, which is uniformly bounded in the lower half-plane when $\delta>0$.
Our construction involves using, instead of $V_1$ and $V_2$, the following modified versions:
\begin{lemma}\label{lemma32} Consider $V \in L^{3/2, \infty}_0(\mathbb R^3)$ and $\mathcal H = \mathcal H_0 + V$ as in (\ref{eq_3.2}) or (\ref{3.2}) such that $\mathcal H$ has no exceptional values embedded in $\sigma(\mathcal H_0)$. Then, for any $\delta > 0$, there exists a decomposition \begin{equation} V - P_p (\mathcal H - i\delta) = \tilde V_1 \tilde V_2, \end{equation} where $P_p = I - P_c$, such that $\tilde V_1$, $\tilde V_2^* \in \mathcal L(L^2, L^{6/5, 2})$, and the Fourier transform of $I-iT_{\tilde V_2, \tilde V_1}$ is uniformly invertible in the lower half-plane: \begin{equation}\label{2.108}
\sup_{\Imim \lambda \leq 0} \|I - i \widehat T_{\tilde V_2, \tilde V_1}(\lambda)\|_{\mathcal L(L^2, L^2)} < \infty. \end{equation} Here
\begin{equation} (T_{\tilde V_2, \tilde V_1}F)(t) = \int_{-\infty}^t \tilde V_2 e^{i(t-s)\mathcal H_0} \tilde V_1 F(s) \dd s. \end{equation} Furthermore, $\tilde V_1$ and $\tilde V_2^*$ can be approximated in the $\mathcal L(L^2, L^{6/5, 2})$ norm by operators that are bounded from $L^2$ to $\langle x \rangle^{-N} L^2$, for any fixed $N$.
Finally, if $V \in L^{3/2, 1}$ then $L^{6/5, 2}$ can be replaced by $L^{6/5, 1}$. \end{lemma} Note that $\tilde V_1$ and $\tilde V_2$ are the same as $V_1$, respectively $V_2$, up to exponentially small perturbations; however, while $V_1$ and $V_2$ are functions, $\tilde V_1$ and $\tilde V_2$ are operators.
Our interest in them is explained by (\ref{2.108}), which is not guaranteed for $V_1$ and $V_2$ by themselves, as explained above.
\begin{proof}[Proof of Lemma \ref{lemma32}] Consider a potential $V$ such that $\mathcal H$ has no exceptional values embedded in $\sigma(\mathcal H_0)$. Having finite rank, $P_p$ has the form \begin{equation} P_p f = \sum_{k=1}^n \langle f, f_k \rangle g_k, \end{equation} where $f_k$ and $g_k$ belong to $\langle \nabla \rangle^{-2} L^{6/5, 2}$. Take the standard polar decomposition of $P_p$, with respect to $L^2$: \begin{equation} P_p = U A, \end{equation} where $A = (P_p^* P_p)^{1/2} \geq 0$ is a nonnegative $L^2$ operator of finite rank. More specifically, $A$ has the form \begin{equation} A f = \sum_{j, k=1}^n a_{jk} \langle f, f_k \rangle f_j. \end{equation} $A$ maps to the span of $f_k$, while $U$ is a partial $L^2$ isometry defined on the range of $A$. $U$ maps the span of $f_k$ to the span of $g_k$ and can be extended by zero on the orthogonal complement: \begin{equation}\begin{aligned} U f &= \sum_{j, k=1}^n u_{jk} \langle f, f_k \rangle g_j. \end{aligned}\end{equation} From these explicit forms we see that both $U$ and $A$ are bounded operators from $\langle \nabla \rangle^2 L^{6, 2}$, which includes $L^2$, to $\langle \nabla \rangle^{-2} L^{6/5, 2}$.
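As a toy illustration of this construction, in the hypothetical rank-one case $P_p f = \langle f, f_1 \rangle g_1$ one finds \begin{equation} A f = \frac{\|g_1\|_2}{\|f_1\|_2} \langle f, f_1 \rangle f_1, \qquad U f = \frac{\langle f, f_1 \rangle}{\|f_1\|_2 \|g_1\|_2}\, g_1, \end{equation} and one checks directly that $A \geq 0$, $A^2 = P_p^* P_p$, $U A = P_p$, and that $U$ is a partial isometry from the span of $f_1$ to the span of $g_1$.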
Also let $V = V_1 V_2$, where $V_2 \geq 0$ is a nonnegative operator on $L^2$, meaning $\langle f, V_2 f \rangle \geq 0$ for every $f \in \Dom(V_2)$, and $V_1$, $V_2 \in L^{3, \infty}$.
Then, define the bounded $L^2$ operators $G_1 = V_2/(V_2 + A)$ and $G_2 = (\mathcal H - i\delta) P_p/(V_2 + A)$, initially on $\Ran(V_2 + A)$, by setting \begin{equation} G_1((V_2+A)f) = V_2 f,\ G_2((V_2+A)f) = (\mathcal H - i\delta) P_p f \end{equation} and extending them by continuity to $\overline{\Ran(V_2 + A)}$. On the orthogonal complement \begin{equation} \Ran(V_2 + A)^{\perp}=\{f \mid P_p f =0,\ V f = 0\} \end{equation} we simply set $G_1=G_2=0$. We then make the construction
\begin{equation}\begin{aligned} V - P_p (\mathcal H-i\delta) &= \tilde V_1 \tilde V_2, \\ \tilde V_2 &= V_2 + A, \\ \tilde V_1 &= V_1 G_1 - G_2. \end{aligned}\end{equation} Upon inspection, $\tilde V_1$ and $\tilde V_2^*$ are bounded from $L^2$ to $L^{6/5, 2}$.
We next prove that $\tilde V_1$ and $\tilde V_2$ can be approximated by operators in better classes, as claimed. Firstly, consider a family of smooth compactly supported functions $\chi_n$ such that $0 \leq \chi_n \leq 1$ and $\chi_n \to 1$ uniformly on compact sets as $n \to \infty$. Let \begin{equation}\begin{aligned}\label{eq_3.177} F_1^n &= \chi_n V_1 G_1 - \chi_n G_2 \chi_n,\\ F_2^n &= \chi_n V_2 + \chi_n A \chi_n.
\end{aligned}\end{equation}
It is plain that $F_1^n$ and $(F_2^n)^*$ take $L^2$ to $\langle x \rangle^{-N} L^2$. They also approximate $\tilde V_1$ and $\tilde V_2^*$ in $\mathcal L(L^2, L^{6/5, 2})$. Indeed, as $n \to \infty$ \begin{equation}
\|\chi_n V_j - V_j\|_{\mathcal L(L^2, L^{6/5, 2})} \to 0 \end{equation} because $V$ decays at infinity and \begin{equation}\begin{aligned}
\|\chi_n G_2 \chi_n - G_2\|_{\mathcal L(L^2, L^2)} \to 0,\\
\|\chi_n A \chi_n - A\|_{\mathcal L(L^2, L^{6/5, 2})} \to 0, \end{aligned}\end{equation} because $G_2$ and $A$ have finite rank.
Finally, we show that the Fourier transform $I-i\widehat{T}_{\tilde V_2, \tilde V_1}(\lambda)$ is always invertible. Lemma \ref{lemma_22} implies that \begin{equation} I-i\widehat{T}_{\tilde V_2, \tilde V_1}(\lambda) = I + \tilde V_2 R_0(\lambda) \tilde V_1. \end{equation} As we see by (\ref{2.97}), \begin{equation}\label{2.106} (I + \tilde V_2 R_0 \tilde V_1)^{-1} = I - \tilde V_2 (R_V P_c - (\lambda - i\delta)^{-1} P_p) \tilde V_1. \end{equation}
By (\ref{eq_3.64}) and (\ref{eq_3.65}), $R_V(\lambda)$ is bounded from $L^{6/5, 2}$ to $L^{6, 2}$ if and only if $\lambda$ is not an exceptional value. Our assumption that no exceptional values are embedded in $\sigma(\mathcal H_0)$ implies that $R_V(\lambda)$ is uniformly bounded for $\lambda \in \sigma(\mathcal H_0)$; moreover, $R_V$ is uniformly bounded away from the finitely many exceptional values.
Using the representation formula (\ref{2.94}) for $P_c$, for $f$, $g \in L^2$ \begin{equation}\begin{aligned} \langle R_V(\lambda_0) P_c f, g \rangle &= \frac 1 {2 \pi i} \int_{\sigma(\mathcal H_0)} \big\langle R_V(\lambda_0) (R_V(\lambda-i0) - R_V(\lambda+i0)) f, g \big\rangle \dd \lambda \\ &= \frac 1 {2 \pi i} \int_{\sigma(\mathcal H_0)} \Big\langle \frac 1 {\lambda-\lambda_0} (R_V(\lambda-i0) - R_V(\lambda+i0)) f, g \Big\rangle \dd \lambda \end{aligned}\end{equation} and the integral converges absolutely. Here we used the resolvent identity: for all $\lambda_1$, $\lambda_2$ in the resolvent set, $$ R_V(\lambda_1) - R_V(\lambda_2) = (\lambda_1-\lambda_2) R_V(\lambda_1) R_V(\lambda_2). $$ For some fixed $\lambda_1 \not \in \sigma(\mathcal H_0)$, $R_V(\lambda_1)$ is bounded from $L^{6/5, 2}$ to $L^{6, 2}$. Then, for any other value $\lambda_2 \not \in \sigma(\mathcal H_0)$, one has that \begin{equation}\begin{aligned} &\langle R_V(\lambda_1) P_c f, g \rangle - \langle R_V(\lambda_2) P_c f, g \rangle = \\ &= \frac 1 {2 \pi i} \int_{\sigma(\mathcal H_0)} \Big\langle \frac {\lambda_2 - \lambda_1} {(\lambda-\lambda_1)(\lambda-\lambda_2)} (R_V(\lambda-i0) - R_V(\lambda+i0)) f, g \Big\rangle \dd \lambda. \end{aligned}\end{equation} Since the integrand decays like $\lambda^{-2}$, it follows that \begin{equation}\label{lim_ap}
\sup_{\lambda \in \mathbb C} \|R_V(\lambda) P_c\|_{L^{6/5, 2} \to L^{6, 2}} < \infty. \end{equation}
By (\ref{2.106}), $(I + \tilde V_2 R_0(\lambda) \tilde V_1)^{-1}$ is then uniformly bounded in the lower half-plane, for $\delta>0$, and (\ref{2.108}) follows. \end{proof}
As we replace $V_1$ and $V_2$ by $\tilde V_1$, respectively $\tilde V_2$, we use the following identities: \begin{lemma} For $\delta>0$, consider the decomposition of Lemma \ref{lemma32}: \begin{equation} V - P_p (\mathcal H - i\delta) = \tilde V_1 \tilde V_2. \end{equation} Let $\tilde R_V(\lambda) = (P_c \mathcal H + i\delta P_p - \lambda)^{-1}$. Then \begin{equation}\label{2.127} \tilde R_V(\lambda) = R_V(\lambda) P_c - (\lambda - i \delta)^{-1} P_p \end{equation} and the following identities hold: \begin{align} \label{2.97} \tilde R_V(\lambda) &= R_0(\lambda) - R_0(\lambda) \tilde V_1 (I + \tilde V_2 R_0(\lambda) \tilde V_1)^{-1} \tilde V_2 R_0(\lambda), \\ \label{3.65'} (I + \tilde V_2 R_0(\lambda) \tilde V_1)^{-1} &= I - \tilde V_2 \tilde R_V(\lambda) \tilde V_1. \end{align} \end{lemma} \begin{proof} Direct computation shows that \begin{equation}\label{2.109} (P_c \mathcal H + i\delta P_p - \lambda) (R_V(\lambda) P_c - (\lambda - i \delta)^{-1} P_p) = I. \end{equation} We use the fact that $P_c^2 = P_c$, $P_p^2 = P_p$, $P_c P_p = P_p P_c = 0$, and everything commutes in (\ref{2.109}).
Then, note that \begin{equation} P_c \mathcal H + i\delta P_p = \mathcal H_0 + V - P_p (\mathcal H - i\delta) = \mathcal H_0 + \tilde V_1 \tilde V_2. \end{equation} We write the resolvent identity, for this case, as \begin{equation} \tilde R_V(\lambda) = R_0(\lambda) - R_0(\lambda) \tilde V_1 \tilde V_2 \tilde R_V(\lambda) = R_0(\lambda) - \tilde R_V(\lambda) \tilde V_1 \tilde V_2 R_0(\lambda), \end{equation} which implies (\ref{2.97}) and (\ref{3.65'}).
\end{proof}
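The identities (\ref{2.97}) and (\ref{3.65'}) are purely algebraic and can be sanity-checked in finite dimensions; the following sketch uses random matrices as stand-ins for the operators (all names here are hypothetical).
\begin{verbatim}
# Check RW = R0 - R0 V1 (I + V2 R0 V1)^{-1} V2 R0 and
# (I + V2 R0 V1)^{-1} = I - V2 RW V1, where RW = (H0 + V1 V2 - z)^{-1}.
import numpy as np

rng = np.random.default_rng(1)
n, z = 6, 0.3 + 1.0j
H0 = rng.standard_normal((n, n))
V1 = rng.standard_normal((n, n))
V2 = rng.standard_normal((n, n))
I = np.eye(n)
R0 = np.linalg.inv(H0 - z * I)
RW = np.linalg.inv(H0 + V1 @ V2 - z * I)

lhs = R0 - R0 @ V1 @ np.linalg.inv(I + V2 @ R0 @ V1) @ V2 @ R0
assert np.allclose(RW, lhs)
assert np.allclose(np.linalg.inv(I + V2 @ R0 @ V1), I - V2 @ RW @ V1)
\end{verbatim}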
We prefer to study the modified evolution $e^{it \mathcal H P_c - \delta t P_p}$, rather than $e^{it \mathcal H}$ or $e^{it\mathcal H} P_c$. We define this modified evolution simply by means of the equation \begin{equation} i \partial_t Z + \mathcal H P_c Z + i \delta P_p Z = 0,\ Z(0)=Z_0, \end{equation} and take $e^{it \mathcal H P_c - \delta t P_p} Z_0 := Z(t)$.
\begin{lemma}\label{lem214} Assume $\|V\|_{\infty} < \infty$; then \begin{equation} e^{it \mathcal H P_c - \delta t P_p} = e^{it \mathcal H} P_c + e^{-\delta t} P_p.
\end{equation} For $\Imim \lambda < -\|V\|_{\infty}$, \begin{equation}\label{2.135} (e^{it \mathcal H P_c - \delta t P_p})^{\wedge}(\lambda) = i \tilde R_V(\lambda). \end{equation} \end{lemma} \begin{proof} By Lemma \ref{lemma_24}, this modified evolution is $L^2$-bounded when $V \in L^{\infty}$: \begin{equation}\label{2.136}
\|e^{it \mathcal H P_c - \delta t P_p}\|_{\mathcal L(L^2, L^2)} \leq C e^{t \|V\|_{\infty}}. \end{equation} Since $\mathcal H$, $P_c$, and $P_p$ all commute, we can also rewrite it as \begin{equation} e^{it \mathcal H P_c - \delta t P_p} = e^{it \mathcal H P_c - \delta t P_p} P_c + e^{it \mathcal H P_c - \delta t P_p} P_p = e^{it \mathcal H} P_c + e^{-\delta t} P_p. \end{equation}
(\ref{2.136}) allows one to define the Fourier transform for $\Imim \lambda < - \|V\|_{\infty}$. A direct computation then shows (\ref{2.135}). \end{proof}
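The formula of Lemma \ref{lem214} can likewise be verified numerically in finite dimensions, with a self-adjoint matrix and its spectral projections standing in for $\mathcal H$, $P_c$, $P_p$ (a hypothetical toy model, not the actual operators).
\begin{verbatim}
# Check exp(t(iH Pc - d Pp)) = exp(itH) Pc + exp(-d t) Pp for commuting
# spectral projections Pc + Pp = I of a self-adjoint matrix H.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((5, 5)))[0]     # orthonormal eigenbasis
H = Q @ np.diag([1.0, 2.0, 3.0, -1.0, -2.0]) @ Q.T   # self-adjoint
Pp = Q[:, :2] @ Q[:, :2].T                           # 'point spectrum' part
Pc = np.eye(5) - Pp

t, d = 0.7, 0.3
lhs = expm(t * (1j * H @ Pc - d * Pp))
rhs = expm(1j * t * H) @ Pc + np.exp(-d * t) * Pp
assert np.allclose(lhs, rhs)
\end{verbatim}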
\subsection{Time-independent potentials}
\begin{proof}[Proof of Theorem \ref{theorem_26}]
Replace the original equation (\ref{1.1}) by the following: \begin{equation}\label{2.152} i \partial_t Z + \mathcal H P_c Z + i\delta P_p Z = F. \end{equation} This equation is fulfilled by $P_c Z$, $P_c F$, and $P_c Z(0)$ (as $Z$, $F$, and $Z(0)$ respectively).
By Duhamel's formula, since $$ \mathcal H P_c + i \delta P_p = \mathcal H_0 + V - P_p (\mathcal H - i \delta) = \mathcal H_0 + \tilde V_1 \tilde V_2, $$ a solution of (\ref{2.152}) satisfies \begin{equation}\label{2.153} Z(t) = e^{it\mathcal H_0} Z(0) - i \int_0^t e^{i(t-s) \mathcal H_0} F(s) \dd s + i \int_0^t e^{i(t-s)\mathcal H_0} (V - P_p (\mathcal H - i \delta)) Z(s) \dd s. \end{equation}
Assume $V \in L^{\infty}$. Let $F$, $G \in L^{\infty}_t (L^1_x \cap L^2_x)$ have compact support in $t$ and consider the forward time evolution \begin{equation} (T_V F)(t) = \int_{t>s} e^{i(t-s) \mathcal H P_c - \delta (t-s) P_p} F(s) \dd s. \end{equation} By Lemma \ref{lem214},
\begin{equation} \widehat {T_V F}(\lambda) = i \tilde R_V(\lambda) \widehat F(\lambda).
\end{equation} For $y > \|V\|_{\infty}$, both $e^{-yt} (T_V F)(t)$ and $e^{yt} G(t)$ belong to $L^2_{t, x}$. Taking the Fourier transform in $t$, by Plancherel's theorem \begin{equation}\begin{aligned}\label{2.114} \int_{\mathbb R} \langle (T_V F)(t), G(t) \rangle \dd t &= \frac 1 {2\pi} \int_{\mathbb R} \big\langle \big(e^{-yt} (T_V F)(t) \big)^{\wedge}, \big(e^{yt} G(t)\big)^{\wedge} \big\rangle \dd \lambda \\ &= \frac 1 {2 \pi i} \int_{\mathbb R} \big\langle \tilde R_V(\lambda - iy) \widehat F(\lambda-iy), \widehat {G(-t)}(\lambda-iy) \big \rangle \dd \lambda. \end{aligned}\end{equation} Here $\langle \cdot, \cdot \rangle$ is the real dot product.
Following (\ref{2.97}) and (\ref{3.65'}), we express $\tilde R_V$ in (\ref{2.114}) as \begin{equation}\label{3.42}\begin{aligned} \tilde R_V &= R_0 - R_0 \tilde V_1 \tilde V_2 R_0 + R_0 \tilde V_1 \big(\tilde V_2 \tilde R_V \tilde V_1\big) \tilde V_2 R_0. \end{aligned}\end{equation} The first term represents the free Schr\"{o}dinger evolution: \begin{equation} \frac 1 {2 \pi i} \int_{\mathbb R} \big\langle R_0(\lambda - iy) \widehat F(\lambda-iy), \widehat {G(-t)}(\lambda-iy) \big \rangle \dd \lambda. \end{equation} It is bounded by virtue of the endpoint Strichartz estimates of \cite{tao}.
Likewise, by means of Strichartz or smoothing estimates for the free Schr\"{o}dinger equation, one obtains that \begin{equation}\begin{aligned}\label{3.44}
\big\|\tilde V_2 R_0(\lambda-i0) \widehat F(\lambda-i0)\big\|_{L^2_{\lambda, x}} \leq C \|F\|_{L^2_t L^{6/5, 2}_x},\\
\big\|\tilde V_1^* R_0(\lambda+i0) \widehat G(\lambda-i0)\big\|_{L^2_{\lambda, x}} \leq C \|G\|_{L^2_t L^{6/5, 2}_x}. \end{aligned}\end{equation} The same holds for the integral along any other horizontal line, meaning that for every $y \in \mathbb R$, $\tilde V_2 R_0(\lambda+iy) \widehat F(\lambda+iy)$ and $\tilde V_1^* R_0(\lambda + iy) \widehat G(\lambda+iy)$ are in $L^2_{\lambda, x}$. This allows shifting the integration line toward the real axis.
Taking into account the fact that, by (\ref{2.127}) and (\ref{lim_ap}), \begin{equation}
\sup_{\Imim \lambda \leq 0} \|\tilde V_2 \tilde R_V \tilde V_1\|_{\mathcal L(L^2_{t, x}, L^2_{t, x})} < \infty \end{equation} and following (\ref{3.42}) and (\ref{3.44}), we obtain \begin{equation}\begin{aligned}
\int_{\mathbb R} \langle (T_V F)(t), G(t) \rangle \dd t &\leq C \|F\|_{L^2_t L^{6/5, 2}_x} \|G\|_{L^2_t L^{6/5, 2}_x}. \end{aligned}\end{equation} Then, we remove our previous assumption that $F$ and $G$ should have compact support. This establishes the estimate \begin{equation}
\bigg\|\int_{t>s} e^{i(t-s) \mathcal H P_c - \delta(t-s) P_p} F(s) \dd s \bigg\|_{L^2_t L^{6, 2}_x} \leq C \|F\|_{L^2_t L^{6/5, 2}_x}. \end{equation} By taking the $P_c$ projection, we obtain the more usual retarded Strichartz estimate. Then, using Duhamel's formula (\ref{2.153}), we obtain (\ref{1.45}).
Given $V \in L^{3/2, \infty}_0$, we approximate it in the $L^{3/2, \infty}$ norm by $L^{\infty}$ potentials whose exceptional sets are still disjoint from $\sigma(\mathcal H_0)$. Since the conclusion holds for each approximation, with uniform constants, we can pass to the limit, so it also holds for $V$ itself.
Next, we prove (\ref{1.46}). Assume $V \in L^{3/2, 1}$. With $\tilde V_1$ and $\tilde V_2$ as in Lemma \ref{lemma32} (meaning $\tilde V_1$ and $\tilde V_2^*$ are bounded from $L^2$ to $L^{6/5, 1}$), let \begin{equation} (T_{\tilde V_2, \tilde V_1} F)(t) = \int_{-\infty}^t \tilde V_2 e^{i(t-s)\mathcal H_0} \tilde V_1 F(s) \dd s. \end{equation}
We need to establish that $T_{\tilde V_2, \tilde V_1} \in K$, where $K$ is the Wiener algebra of Definition \ref{def_k} for $H = L^2_x$. By Proposition \ref{prop_21} and Minkowski's inequality, \begin{equation} T_{I, I} F(t) = \int_{-\infty}^t e^{i(t-s) \mathcal H_0} F(s) \dd s \end{equation} takes $L^1_t L^{6/5, 1}_x$ to $L^1_t L^{6, \infty}_x$. Therefore $T_{\tilde V_2, \tilde V_1}$
takes $L^1_t L^2_x$ to itself and, for the Hilbert space $H=L^2_x$, belongs to the algebra $K$. Following Lemma \ref{lemma_22}, the Fourier transform of $T_{\tilde V_2, \tilde V_1}$ is given by \begin{equation} \widehat T_{\tilde V_2, \tilde V_1}(\lambda) = i\tilde V_2 R_0(\lambda) \tilde V_1. \end{equation}
By Lemma \ref{lemma32}, $I - i \widehat T_{\tilde V_2, \tilde V_1}(\lambda)$ is invertible for $\Imim \lambda \leq 0$.
Considered as an operator from $L^p$, $p<6/5$, to its dual, $T_{I, I}(t)$ decays at infinity faster than $|t|^{-1}$ in norm: \begin{equation}
\|T_{I, I}(t)\|_{\mathcal L(L^p, L^{p'})} \leq C |t|^{3/2-3/p} \end{equation} and $3/2-3/p < -1$ for $p<6/5$. Thus, if $\tilde V_1$ and $\tilde V_2^*$ were in $\mathcal L(L^2, L^p)$, \begin{equation}
\|\chi_{|t|>R} T_{\tilde V_2, \tilde V_1}\|_K \leq \int_R^{\infty} \|T_{\tilde V_2, \tilde V_1}(t)\|_{\mathcal L(L^2, L^2)} \dd t \leq C R^{5/2-3/p} \to 0 \end{equation} as $R \to \infty$. By approximating $\tilde V_1$ and $\tilde V_2$ in the operator norm, it follows that $T_{\tilde V_2, \tilde V_1}$ belongs to the subalgebra $K_0 \subset K$ of kernels that decay at infinity.
Moreover, consider approximations of $\tilde V_1$ and $\tilde V_2$, call them $F_1^n$ and $F_2^n$, as given by Lemma \ref{lemma32}. Then for each $t$ \begin{equation}
\|(F_2^n e^{-i(t+\epsilon)\Delta} F_1^n - F_2^n e^{-it\Delta} F_1^n)f\|_2 \leq C \epsilon^{1/4} \|e^{-it\Delta} F_1^n f\|_{\dot H^{1/2}_{loc}} \end{equation} and therefore \begin{equation}\begin{aligned}
&\int_{-t}^t \|(F_2^n e^{i(t+\epsilon)\mathcal H_0} F_1^n - F_2^n e^{it\mathcal H_0} F_1^n)f\|_2 \dd t \\
&\leq C \epsilon^{1/4} t^{1/2} \Big(\int_{-t}^t \|e^{-it\Delta} F_1^n f\|_{\dot H^{1/2}_{loc}}^2 \dd t\Big)^{1/2}\\
&\leq C \epsilon^{1/4} t^{1/2} \|f\|_2. \label{1.352} \end{aligned}\end{equation} Since $T_{F_2^n, F_1^n}$ decays at infinity, (\ref{1.352}) implies that $T_{F_2^n, F_1^n}$ is equicontinuous. By passing to the limit, we find that the same holds for $T_{\tilde V_2, \tilde V_1}$.
Therefore $I-iT_{\tilde V_2, \tilde V_1}$ satisfies all the hypotheses of Theorem \ref{thm7}, with respect to the Hilbert space $L^2$ and the algebra $K$ of Definition \ref{def_k}. The inverse $(I-iT_{\tilde V_2, \tilde V_1})^{-1}$ then belongs to $K$ and is supported on $[0, \infty)$ in $t$ by Lemma~\ref{lemma8}.
Let $T_{I, \tilde V_1}$ and $T_{\tilde V_2, I}$ be respectively given by \begin{equation} (T_{I, \tilde V_1}F)(t) = \int_{-\infty}^t e^{i(t-s)\mathcal H_0} \tilde V_1 F(s) \dd s \end{equation} and \begin{equation} (T_{\tilde V_2, I} F)(t) = \int_{-\infty}^t \tilde V_2 e^{i(t-s)\mathcal H_0} F(s) \dd s. \end{equation} Given the decomposition $V - P_p (\mathcal H - i \delta) = \tilde V_1 \tilde V_2$, (\ref{2.153}) becomes \begin{equation}\begin{aligned} \tilde V_2 Z(t) &= \tilde V_2 \int_0^t e^{i(t-s) \mathcal H_0} (-iF(s) + \delta_{s=0} Z(0)) \dd s + i \int_0^t (\tilde V_2 e^{i(t-s)\mathcal H_0} \tilde V_1) \tilde V_2 Z(s) \dd s. \end{aligned}\end{equation} Then \begin{equation}\label{3.143} \big(I - i T_{\tilde V_2, \tilde V_1}\big) \tilde V_2 Z = T_{\tilde V_2, I} (-iF + \delta_{t=0} Z(0)). \end{equation}
Consequently, we can rewrite (\ref{2.153}) as \begin{equation}\begin{aligned} Z &= T_{I, I} (-iF + \delta_{t=0} Z(0)) + T_{I, \tilde V_1} (I-iT_{\tilde V_2, \tilde V_1})^{-1} T_{\tilde V_2, I} (-iF + \delta_{t=0} Z(0)). \label{3.51} \end{aligned}\end{equation}
By Proposition \ref{prop_21}, \begin{equation}
\|T_{\tilde V_2, I}(-iF + \delta_{t=0} Z(0))\|_{L^1_t L^2_x} \leq C (\|F\|_{L^1_t L^{6/5, 1}_x} + \|Z(0)\|_{L^{6/5, 1}_x}). \end{equation} The convolution kernel $(I-iT_{\tilde V_2, \tilde V_1})^{-1}$ then takes $L^1_t L^2_x$ into itself, while the operator $T_{I, \tilde V_1}$ takes $L^1_t L^2_x$ into $L^1_t L^{6, \infty}_x$. We obtain that $P_c Z \in L^1_t L^{6, \infty}_x$, as desired.
\end{proof}
For completeness, we also sketch the proof of Proposition \ref{cor1.5}. \begin{proof}[Proof of Proposition \ref{cor1.5}] If $V \in L^{3/2, 1}$, then (\ref{1.12}) follows directly by interpolation between (\ref{1.45}) and (\ref{1.46}), which are both satisfied. If $V \in L^{3/2}$, then we use Theorem \ref{thm6/5}.
We need to show in this case that $T_{V_2, V_1}$ belongs to $K_{6/5}$. By complex interpolation between $$
\|T_{V_2,V_1} F\|_{L^1_t L^2_x} \leq C \|V_1\|_{L^{3, 2}} \|V_2\|_{L^{3, 2}} \|F\|_{L^1_t L^2_x}, $$ which follows by Proposition \ref{prop_21}, and $$
\|T_{V_2,V_1} F\|_{L^2_{t, x}} \leq C \|V_1\|_{L^{3, \infty}} \|V_2\|_{L^{3, \infty}} \|F\|_{L^2_{t, x}}, $$ which follows by the endpoint Strichartz estimate for $e^{-it\Delta}$, we obtain that $$
\|T_{V_2,V_1} F\|_{L^{6/5}_t L^2_x} \leq C \|V_1\|_{L^{3}} \|V_2\|_{L^{3}} \|F\|_{L^{6/5}_t L^2_x}, $$ so, by definition, $T_{V_2,V_1} \in K_{6/5}$.
Since $T_{\tilde V_2, \tilde V_1} - T_{V_2, V_1}$ has finite rank, a similar interpolation argument shows that $T_{\tilde V_2, \tilde V_1} \in K_{6/5}$.
From this point, the proof follows that of (\ref{1.46}). \end{proof}
\subsection{The time-dependent case}\label{sec3.7} \begin{proof}[Proof of Corollary \ref{cor_1.5}] Introduce an auxiliary function $Z_1 = P_c Z_1$ (supported on the continuous spectrum) and write (\ref{1.12}) in the form \begin{equation}\label{1.12'} i \partial_t Z + \mathcal H Z + \tilde V(t, x) P_c Z_1= F,\ Z(0) \text{ given}. \end{equation} By the Strichartz estimate (\ref{1.45}), \begin{equation}\begin{aligned}\label{2.160}
\|P_c Z\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} &\leq C \Big(\|Z(0)\|_2 + \|F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} + \\
&+\|\tilde V\|_{L^{\infty}_t L^{3/2, \infty}_x} \|P_c Z_1\|_{L^2_t L^{6, 2}_x} \Big). \end{aligned}\end{equation} For two different values of $Z_1$, call them $Z_1^1$ and $Z_1^2$, we subtract the corresponding copies of (\ref{1.12'}) from one another. The contributions of $Z(0)$ and $F$ cancel, so we obtain \begin{equation}\begin{aligned}
\|P_c Z^1 - P_c Z^2\|_{L^{\infty}_t L^2_x \cap L^2_t L^{6, 2}_x} \leq C \|\tilde V\|_{L^{\infty}_t L^{3/2, \infty}_x} \|P_c Z_1^1 - P_c Z_1^2\|_{L^2_t L^{6, 2}_x}.
\end{aligned}\end{equation} Thus, if $\|\tilde V\|_{L^{\infty}_t L^{3/2, \infty}_x}$ is sufficiently small, the map that associates the solution $P_c Z$ to the auxiliary function $P_c Z_1$ is a contraction inside some ball of large radius in $L^2_t L^{6, 2}_x$. The fixed point of this contraction is a solution to (\ref{1.12}) and (\ref{2.160}) then implies (\ref{1.13}).
The proof of (\ref{1.14}) is entirely analogous. \end{proof}
Consider the family of isometries $U$ given by (\ref{1.20}) or (\ref{1.21}).
In the subsequent lemma we encapsulate the properties of $U$ that we actually use in our study of (\ref{3.157}) and (\ref{3.158}).
\begin{lemma}\label{lem_32} Let $U(t)$ be defined by (\ref{1.20}) or (\ref{1.21}). Then $U$ possesses properties P1--P4 listed in Theorem \ref{theorem_13}. \end{lemma}
\begin{proof} Without loss of generality, we only consider the matrix case (\ref{1.20}).
It is straightforward to verify properties P1 and P2: $U(t)$ are isometries, so they preserve any isometry-invariant Banach space. The Laplace operator $\Delta$ (hence $\mathcal H_0$, too) commutes with translations and rotations.
The third property, P3, concerns a comparison between \begin{equation}\begin{aligned} T(t, s) = e^{i(t-s) \mathcal H_0} \end{aligned}\end{equation} and \begin{equation}\begin{aligned} \tilde T(t, s) &= e^{i(t-s) \mathcal H_0} U(t) U(s)^{-1} \\ &= e^{i(t-s) \mathcal H_0} \Omega(t) e^{(D(t) - D(s)) \nabla + i (A(t) - A(s)) \sigma_3} \Omega(s)^{-1}. \end{aligned}\end{equation}
We first study the model operator with $\Omega \equiv I$ and $D \equiv 0$, so we only have to handle the oscillation $A$. Denote this by \begin{equation}\begin{aligned} \tilde T_1(t, s) &= e^{i(t-s) \mathcal H_0} e^{i(A(t) - A(s)) \sigma_3}. \end{aligned}\end{equation} One has \begin{equation} |e^{ia} -1| \leq C \min(1, |a|) \end{equation} and then, by the $L^1$ to $L^{\infty}$ $t^{-3/2}$ dispersive estimate, for large enough $N$ \begin{equation}\label{3.167}\begin{aligned}
\|\langle x \rangle^{-N} (T(t, s) - \tilde T_1(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} &\leq C \|T(t, s) - \tilde T_1(t, s)\|_{1 \to \infty} \\
&\leq C |t-s|^{-3/2} |A(t)-A(s)| \\
&\leq C |t-s|^{-1/2} \|A'\|_{\infty}. \end{aligned}\end{equation}
Next, consider the case when $\Omega \equiv I$, but $D$ and $A$ need not be zero. Let \begin{equation} \tilde T_2(t, s) = e^{i(t-s) \mathcal H_0} e^{(D(t) - D(s)) \nabla + i (A(t) - A(s)) \sigma_3}. \end{equation} We start with the Leibniz-Newton formula \begin{equation} \tilde T_2(t, s) - \tilde T_1(t, s) = \int_s^t D'(\tau) \nabla e^{i(t-s) \mathcal H_0} e^{(D(\tau) - D(s)) \nabla + i (A(t) - A(s)) \sigma_3} \dd \tau, \end{equation} which implies \begin{equation}\begin{aligned}\label{eq2.166}
&\|\langle x \rangle^{-N} (\tilde T_2(t, s) - \tilde T_1(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} \leq \\
&\leq C \|D'\|_{\infty} |t-s| \cdot \sup_{\tau \in [s, t]} \|\langle x \rangle^{-N} \nabla e^{(D(\tau) - D(s)) \nabla} \tilde T_1(t, s) \langle x \rangle^{-N}\|_{2 \to 2} \\
&= C \|D'\|_{\infty} |t-s| \cdot \sup_{\tau \in [s, t]} \|\langle x \rangle^{-N} \nabla e^{(D(\tau) - D(s)) \nabla} T(t, s) \langle x \rangle^{-N}\|_{2 \to 2}. \end{aligned}\end{equation} From the explicit form of the fundamental solution of the free Schr\"{o}dinger equation,
we obtain the following bound for the kernel of $\nabla T(t, s)$: \begin{equation}
|\nabla T(t, s)|(x, y) \leq C \frac {|x-y|}{|t-s|^{5/2}}. \end{equation} Then, \begin{equation}\begin{aligned}\label{2.168}
|\nabla e^{(D(\tau)-D(s)) \nabla} T(t, s)|(x, y) &= |\nabla T(t, s)|(x+D(\tau)-D(s), y) \\
&\leq C \frac {|x| + |y| + |\tau-s| \|D'\|_{\infty}}{|t-s|^{5/2}}. \end{aligned}\end{equation} (\ref{eq2.166}) and (\ref{2.168}) imply that, for sufficiently large $N$, \begin{equation}\label{2.169}
\|\langle x \rangle^{-N} (\tilde T_2(t, s) - \tilde T_1(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} \leq C (|t-s|^{-3/2} \|D'\|_{\infty} + |t-s|^{-1/2} \|D'\|_{\infty}^2). \end{equation}
Finally, if $\Omega \ne I$, let $\tilde D(\tau) = \Omega(s) D(\tau)$; then \begin{equation} \tilde T(t, s) = e^{i(t-s) \mathcal H_0} \Omega(t) \Omega(s)^{-1} e^{(\tilde D(t) - \tilde D(s))\nabla + i(A(t) - A(s))\sigma_3}. \end{equation} We use a concrete description of $\Omega'(t)$: there exists an infinitesimal rotation \begin{equation} \omega(t) = (x \cdot e_1(t)) \partial_{e_2(t)} - (x \cdot e_2(t)) \partial_{e_1(t)}
\end{equation} such that $\partial_t \Omega(t)$ equals $|\Omega'(t)|$ times $\omega(t)$, applied to $\Omega(t)$: \begin{equation}\label{omega}
\partial_t \Omega(t) f = |\Omega'(t)| \omega(t) \Omega(t) f. \end{equation} By the Leibniz-Newton formula again, \begin{equation}\begin{aligned}
\tilde T(t, s) - \tilde T_2(t, s) &= \int_s^t |\Omega'(\tau)| \omega(\tau) e^{i(t-s) \mathcal H_0} \Omega(\tau) \Omega(s)^{-1} \\ &e^{(\tilde D(t) - \tilde D(s))\nabla + i(A(t) - A(s))\sigma_3} \dd \tau. \end{aligned}\end{equation} This implies \begin{equation}\begin{aligned}
&\|\langle x \rangle^{-N} (\tilde T(t, s) - \tilde T_2(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} \leq \\
&\leq C \|\Omega'\|_{\infty} |t-s| \cdot \sup_{\tau \in [s, t]} \sup_{\omega} \|\langle x \rangle^{-N} \omega \Omega(\tau) \Omega(s)^{-1} \tilde T_2(t, s) \langle x \rangle^{-N}\|_{2 \to 2} \\
&= C \|\Omega'\|_{\infty} |t-s| \cdot \sup_{\tau \in [s, t]} \sup_{\omega} \|\langle x \rangle^{-N} \omega \tilde T_2(t, s) \langle x \rangle^{-N}\|_{2 \to 2}, \end{aligned}\end{equation} where the supremum is taken over all infinitesimal rotations $\omega$.
From the explicit form of the fundamental solution, we get that \begin{equation}
\sup_{\omega} |\omega T(t, s)|(x, y) \leq C \frac {|x| |y|}{|t-s|^{5/2}}. \end{equation} Then, \begin{equation}\begin{aligned}
\sup_{\omega} |\omega \tilde T_2(t, s)|(x, y) &= \sup_{\omega} |\omega T(t, s)|(x + \tilde D(\tau) - \tilde D(s), y) \\
&\leq C \frac {(|x| + |D(\tau) - D(s)|) |y|}{|t-s|^{5/2}}. \end{aligned}\end{equation} For sufficiently large $N$, this implies \begin{equation}\begin{aligned}\label{2.178}
&\|\langle x \rangle^{-N} (\tilde T(t, s) - \tilde T_2(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} \leq \\
&\leq C \big(|t-s|^{-3/2} \|\Omega'\|_{\infty} + |t-s|^{-1/2} \|\Omega'\|_{\infty} \|D'\|_{\infty}\big). \end{aligned}\end{equation} By (\ref{3.167}), (\ref{2.169}), and (\ref{2.178}), \begin{equation}\begin{aligned}
&\|\langle x \rangle^{-N} (\tilde T(t, s) - T(t, s)) \langle x \rangle^{-N}\|_{2 \to 2} \leq \\
&\leq C \big(|t-s|^{-3/2} (\|\Omega'\|_{\infty} + \|D'\|_{\infty}) + |t-s|^{-1/2} (\|\Omega'\|_{\infty} \|D'\|_{\infty} + \|D'\|_{\infty}^2 + \|A'\|_{\infty})\big). \end{aligned}\end{equation} This proves property P3.
Finally, P4 follows from the fact that eigenfunctions of $\mathcal H$ are in $\langle \nabla \rangle^{-2} L^{6/5, 2}$, by Lemma \ref{pp}. Indeed, by (\ref{omega}), \begin{equation}
\partial_t U(t) U^{-1}(t) = |\Omega'(t)| \omega(t) + (\Omega(t) D'(t)) \nabla + A'(t) \sigma_3, \end{equation} where $\omega(t)$ is an infinitesimal rotation.
Then, let $f \in \langle \nabla \rangle^{-2} L^{6/5, 2}$ solve, with $\lambda \not \in (-\infty, -\mu] \cup [\mu, \infty)$, \begin{equation} f = R_0(\lambda) V f. \end{equation} It is immediate that \begin{equation}
\|(\Omega(t) D'(t)) \nabla f\|_{L^{6/5, 2}} + \|A'(t) \sigma_3 f\|_{L^{6/5, 2}} \leq C (|D'| + |A'|) \|f\|_{\langle \nabla \rangle^{-2} L^{6/5, 2}}. \end{equation} If $V \in L^{3/2, 1}$, then $f \in \langle \nabla \rangle^{-2} L^{6/5, 1}$, implying that $(\Omega(t) D'(t)) \nabla f + A'(t) \sigma_3 f \in L^{6/5, 1}$ instead.
For the $\Omega(t)$ component of $U(t)$, P4 reduces to knowing that $\omega f \in L^{6/5, 2}$. However, \begin{equation}
|\omega R_0(\lambda)|(x, y) \leq C (1 + |x-y|) |R_0(\lambda)|(x, y).
\end{equation} P4 follows because $Vf \in L^{6/5, 2}$ and $|R_0(\lambda)|(x, y)$ decays exponentially.
If $V \in L^{3/2, 1}$, then $Vf \in L^{6/5, 1}$ implies that $\omega f \in L^{6/5, 1}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem_13}]
To begin with, assume that $V \in L^{\infty}$. This gives the Schr\"{o}dinger evolution a meaning in the class of functions whose $L^2$ norm grows at most exponentially. In the end, one can discard this assumption by an approximation argument.
Write the equation in the form (\ref{1.30}) and let \begin{equation}\begin{aligned} &\tilde Z = P_c Z,\ \tilde F = P_c U(t) F - i[P_c, \partial_t U(t) U(t)^{-1}] (\tilde Z + P_p Z).\\ \end{aligned}\end{equation} The equation becomes \begin{equation}\begin{aligned}\label{2.184} &i \partial_t \tilde Z - i \partial_t U(t) U(t)^{-1} \tilde Z + {\mathcal H} \tilde Z = \tilde F,\ \tilde Z(0) = P_c U(0) R(0) \text{ given}. \end{aligned}\end{equation}
Fixing $\delta>0$, we rewrite (\ref{2.184}) in the equivalent form \begin{equation}\begin{aligned} &i \partial_t \tilde Z - i \partial_t U(t) U(t)^{-1} \tilde Z + (\mathcal H P_c +i \delta P_p) \tilde Z = \tilde F. \end{aligned}\end{equation} Note that \begin{equation} \mathcal H P_c +i \delta P_p = \mathcal H_0 + V - P_p (\mathcal H - i \delta). \end{equation} Lemma \ref{lemma32} provides a decomposition \begin{equation} V - P_p ({\mathcal H} - i\delta) = \tilde V_1 \tilde V_2, \end{equation} where $\tilde V_1$ and $\tilde V_2^*$ are in $\mathcal L(L^2, L^{6/5, 2})$ and can be approximated in this space by operators $F_1^n$ and $F_2^n$, such that $F_1^n$, $(F_2^n)^* \in \mathcal L(L^2, \langle x \rangle^{-N} L^2)$.
Thus, we obtain \begin{equation} i \partial_t \tilde Z - i \partial_t U(t) U(t)^{-1} \tilde Z + (\mathcal H_0 + \tilde V_1 \tilde V_2) \tilde Z = \tilde F. \end{equation}
As a general model, consider a solution of the equation \begin{equation} i \partial_t f - i \partial_t U(t) U(t)^{-1} f + \mathcal H_0 f = F, f(0) \text{ given}. \end{equation} Letting $f = U(t) g$, $F = U(t) G$, the equation becomes \begin{equation} i \partial_t g + \mathcal H_0 g = G, g(0) = U(0)^{-1} f(0), \end{equation} so \begin{equation} g = e^{it \mathcal H_0} g(0) - i \int_{-\infty}^t e^{i(t-s) \mathcal H_0} G(s) \dd s \end{equation} and \begin{equation} f = e^{it \mathcal H_0} U(t) U(0)^{-1} f(0) - i \int_{-\infty}^t e^{i(t-s) \mathcal H_0} U(t) U(s)^{-1} F(s) \dd s. \end{equation} In particular, we obtain that \begin{equation}\label{2.193} \tilde Z = \int_{-\infty}^t e^{i(t-s) \mathcal H_0} U(t) U(s)^{-1} (\delta_{s=0} \tilde Z(0) -i\tilde F(s) + i\tilde V_1 \tilde V_2 \tilde Z) \dd s. \end{equation} Denote \begin{equation}\label{3.144}\begin{aligned} \tilde T_{\tilde V_2, \tilde V_1} F(t) &= \int_{-\infty}^t \tilde V_2 e^{i(t-s) {\mathcal H}_0} U(t) U(s)^{-1} \tilde V_1 F(s) \dd s,
\end{aligned}\end{equation} respectively \begin{equation}\begin{aligned} \tilde T_{\tilde V_2, I} F(t) &= \int_{-\infty}^t \tilde V_2 e^{i(t-s) {\mathcal H}_0} U(t) U(s)^{-1} F(s) \dd s. \end{aligned}\end{equation} Then, rewrite Duhamel's formula (\ref{2.193}) as \begin{equation}\label{3.186}\begin{aligned} (I - i \tilde T_{\tilde V_2, \tilde V_1}) \tilde V_2 \tilde Z(t) &= \tilde T_{\tilde V_2, I} (-i \tilde F(s) + \delta_{s=0} \tilde Z(0)). \end{aligned}\end{equation}
We compare $\tilde T_{\tilde V_2, \tilde V_1}$ with the kernel, invariant under time translation, \begin{equation}\begin{aligned} T_{\tilde V_2, \tilde V_1} F(t) &= \int_{-\infty}^t \tilde V_2 e^{i(t-s) {\mathcal H}_0} \tilde V_1 F(s) \dd s.
\end{aligned}\end{equation} Indeed, $I - i T_{\tilde V_2, \tilde V_1}$ is invertible, see (\ref{2.209}), and we want to prove the same for $\tilde T$, so that we can invert in (\ref{3.186}).
We make the comparison in the following algebra $\tilde K$, whose definition parallels that of $K$, Definition \ref{def_k}. \begin{definition}\label{def_kt} $
\tilde K = \{T(t, s) \mid \sup_s \|T(t, s) f\|_{M_t L^2_x} \leq C \|f\|_2\}.
$ \end{definition}
Here $M_t L^2_x$ is the set of $L^2$-valued measures of finite mass on the Borel algebra of $\mathbb R$. $\tilde K$ contains operators that are not translation-invariant in $t$ and $s$, is naturally endowed with a unit, but is not a $C^*$ algebra.
To carry out the comparison of $T$ and $\tilde T$, let
\begin{equation} f(\pi, \tau) = \sup_{t-s=\tau} \|\langle x \rangle^{-N} (U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} - e^{i(t-s) \mathcal H_0}) \langle x \rangle^{-N}\|_{2 \to 2} \end{equation} and observe that \begin{equation}
\|\langle x \rangle^{-N} (U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} - e^{i(t-s) \mathcal H_0}) \langle x \rangle^{-N}\|_{\tilde K} \leq \int_{\mathbb R} f(\pi, \tau) \dd \tau. \end{equation} By condition P3, for almost every $\tau$ \begin{equation}\begin{aligned}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} f(\pi, \tau) = 0. \end{aligned}\end{equation} On the other hand, by combining the $L^2$ boundedness and the $L^1 \to L^{\infty}$ decay estimates, \begin{equation}\begin{aligned}\label{3.172}
&\|\langle x \rangle^{-N} (U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} - e^{i(t-s) \mathcal H_0}) \langle x \rangle^{-N}\|_{2 \to 2} \leq \\
&\leq \|\langle x \rangle^{-N} U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} \langle x \rangle^{-N}\|_{2 \to 2} + \|\langle x \rangle^{-N} e^{i(t-s) \mathcal H_0} \langle x \rangle^{-N}\|_{2 \to 2}\\
&\leq C \min (1, |t-s|^{-3/2}). \end{aligned}\end{equation} Equivalently, \begin{equation}
|f(\pi, \tau)| \leq C \min (1, |\tau|^{-3/2}), \end{equation} so $f(\pi, \tau)$ is dominated, uniformly in $\pi$, by an integrable function of $\tau$. By dominated convergence, \begin{equation}
\lim_{\|\pi'\|_\infty \to 0} \int_{\mathbb R} f(\pi, \tau) \dd \tau = 0, \end{equation} hence \begin{equation}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} \|\langle x \rangle^{-N} (U(t) U(s)^{-1} e^{i(t-s) \mathcal H_0} - e^{i(t-s) \mathcal H_0}) \langle x \rangle^{-N}\|_{\tilde K} = 0. \end{equation} Therefore, for each approximation $F_1^n$ and $F_2^n$ of $\tilde V_1$ and $\tilde V_2$ \begin{equation}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} \|T_{F_2^n, F_1^n} - \tilde T_{F_2^n, F_1^n}\|_{\tilde K} = 0. \end{equation} Since $F_2^n$ and $F_1^n$ are approximations of $\tilde V_2$ and $\tilde V_1$, by (\ref{1.45}) \begin{equation}\begin{aligned}\label{2.166}
&\lim_{n \to \infty} \|T_{F_2^n, F_1^n} - T_{\tilde V_2, \tilde V_1}\|_{\mathcal L(L^2_{t, x}, L^2_{t, x})} = 0, \\
&\lim_{n \to \infty} \|\tilde T_{F_2^n, F_1^n} - \tilde T_{\tilde V_2, \tilde V_1}\|_{\mathcal L(L^2_{t, x}, L^2_{t, x})} = 0. \end{aligned}\end{equation} When $V \in L^{3/2, 1}$, one can replace $\mathcal L(L^2_{t, x}, L^2_{t, x})$ by $\tilde K$ in (\ref{2.166}), using (\ref{1.46}) instead of (\ref{1.45}).
Therefore, when $V \in L^{3/2, \infty}_0$ \begin{equation}\label{3.189}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} \|T_{\tilde V_2, \tilde V_1} - \tilde T_{\tilde V_2, \tilde V_1}\|_{\mathcal L(L^2_{t, x}, L^2_{t, x})} = 0 \end{equation} and when $V \in L^{3/2, 1}$ \begin{equation}
\lim_{\substack{\|\pi'\|_{\infty} \to 0}} \|T_{\tilde V_2, \tilde V_1} - \tilde T_{\tilde V_2, \tilde V_1}\|_{\tilde K} = 0. \end{equation} However, the operator $I - i T_{\tilde V_2, \tilde V_1}$ is invertible in $\mathcal L(L^2_{t, x}, L^2_{t, x})$. Indeed, its inverse is formally (say, for compactly supported functions in $t$) given by the following formula: \begin{equation}\begin{aligned}\label{2.209} (I - i T_{\tilde V_2, \tilde V_1})^{-1} F(t) &= F(t) - i\int_{-\infty}^t \tilde V_2 e^{i(t-s)\mathcal H P_c - \delta(t-s) P_p} \tilde V_1 F(s) \dd s. \end{aligned}\end{equation} Due to the Strichartz estimates of Theorem \ref{theorem_26}, this formal inverse actually is in $\mathcal L(L^2_{t, x}, L^2_{t, x})$.
For the same reason, when $V \in L^{3/2, 1}$, $I - i T_{\tilde V_2, \tilde V_1}$ is also invertible in the $\tilde K$ algebra of Definition \ref{def_kt}.
By (\ref{3.189}), it follows that when $\|\pi'\|_{\infty}$ is small enough $I - i \tilde T_{\tilde V_2, \tilde V_1}$ is also invertible in $\mathcal L(L^2_{t, x}, L^2_{t, x})$, respectively $\tilde K$.
Using the invertibility of $I - i \tilde T_{\tilde V_2, \tilde V_1}$ in (\ref{3.186}) results in \begin{equation}\begin{aligned}
\|\tilde V_2 \tilde Z\|_{L^2_{t, x}} &\leq C \|T_{\tilde V_2, I}(-i \tilde F + \delta_{s=0} \tilde Z(0))\|_{L^2_{t, x}} \\
&\leq C\big(\|\tilde F\|_{L^1_t L^2_x + L^2_t L^{6/5, 2}_x} + \|\tilde Z(0)\|_{L^2_x}\big), \end{aligned}\end{equation} respectively \begin{equation}\begin{aligned}
\|\tilde V_2 \tilde Z\|_{L^1_t L^2_x} &\leq C \|T_{\tilde V_2, I}(-i \tilde F + \delta_{s=0} \tilde Z(0))\|_{L^1_t L^2_x} \\
&\leq C\big(\|\tilde F\|_{L^1_t L^{6/5, 1}_x} + \|\tilde Z(0)\|_{L^{6/5, 1}_x}\big). \end{aligned}\end{equation} Plugging this back into Duhamel's formula (\ref{2.193}) leads to the desired estimates.
Finally, we account for the right-hand side term $[P_c, \partial_t U(t) U(t)^{-1}] \tilde Z$. Property P4 and the structure of $P_c$ (see (\ref{pc}) and Lemma \ref{pp}) imply \begin{equation}
\|[P_c, \partial_t U(t) U(t)^{-1}] \tilde Z\|_{L^2_t L^{6/5, 2}_x} \leq C \|\pi'\|_{\infty} \|\tilde Z\|_{L^2_t L^{6, 2}_x}. \end{equation} Thus, the commutator term $[P_c, \partial_t U(t) U(t)^{-1}] \tilde Z$ is
controlled by Strichartz inequalities and a fixed point argument in $L^2_t L^{6, 2}_x$, if $\|\pi'\|_{\infty}$ is small.
The same applies to the proof of (\ref{2.157}), where the commutator term is controlled by P4 and a fixed point argument in $L^1_t L^{6, 1}_x$.
\end{proof}
\section*{Acknowledgments} I would like to thank Michael Goldberg, Wilhelm Schlag, and the anonymous referee for their useful comments.
\appendix \section{Atomic decomposition of Lorentz spaces}\label{spaces} Our computations take place in Lebesgue and Sobolev spaces of functions defined on $\mathbb R^{3+1}$. This corresponds to three spatial dimensions and one extra dimension that accounts for time.
We denote the Lebesgue norm of $f$ by $\|f\|_p$. The Sobolev norms of integral order $W^{n, p}$ are then defined by \begin{equation}
\|f\|_{W^{n, p}} = \bigg(\sum_{|\alpha| \leq n} \|\partial^{\alpha} f\|_p^p\bigg)^{1/p} \end{equation} for $1 \leq p < \infty$ and \begin{equation}
\|f\|_{W^{n, \infty}} = \sup_{|\alpha| \leq n} \|\partial^{\alpha} f\|_{\infty}. \end{equation} when $p=\infty$. In addition, we consider Sobolev spaces of fractional order, both homogenous and inhomogenous: \begin{equation}
\|f\|_{W^{s, p}} = \|\langle \nabla \rangle^s f\|_p, \text{ respectively } \|f\|_{\dot W^{s, p}} = \||\nabla|^s f\|_p.
\end{equation} Here $\langle \nabla \rangle^s$ and $|\nabla|^s$ denote Fourier multipliers --- multiplication on the Fourier side by $\langle \xi \rangle^s = (1+|\xi|^2)^{s/2}$ and $|\xi|^{s}$ respectively.
When $p=2$, the alternate notation $H^s = W^{s, 2}$ or $\dot H^s = \dot W^{s, 2}$ is customary.
In addition, we are naturally led to consider Lorentz spaces. Given a measurable function $f$ on a measure space $(X, \mu)$, consider its distribution function
\begin{equation} m(\sigma, f) = \mu(\{x \mid |f(x)|>\sigma\}). \end{equation} \begin{definition} A measurable function $f$ belongs to the Lorentz space $L^{p, q}$ if its decreasing rearrangement \begin{equation}\label{A6} f^*(t) = \inf\{\sigma \mid m(\sigma, f) \leq t\} \end{equation} fulfills \begin{equation}
\|f\|_{L^{p, q}} = \bigg(\int_0^{\infty} (t^{1/p} f^*(t))^q \frac {\dd t} t \bigg)^{1/q} < \infty \label{lorentz} \end{equation} or, respectively, \begin{equation}\label{A7}
\|f\|_{L^{p, \infty}} = \sup_{0 \leq t < \infty} t^{1/p} f^*(t) < \infty \end{equation} when $q=\infty$. \end{definition} We list several important properties of Lorentz spaces. \begin{lemma}\begin{enumerate}\item[i)] $L^{p, p} = L^p$ and $L^{p, \infty}$ is weak-$L^p$. \item[ii)] The dual of $L^{p, q}$ is $L^{p', q'}$, where $1/p + 1/p' = 1$, $1/q + 1/q' = 1$. \item[iii)] If $q_1 \leq q_2$, then $L^{p, q_1} \subset L^{p, q_2}$. \item[iv)] Except when $q=\infty$, the set of bounded compactly supported functions is dense in $L^{p, q}$. \end{enumerate}\end{lemma} For a more complete enumeration, see \cite{bergh}.
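The standard example separating these spaces is $f(x) = |x|^{-d/p}$ on $\mathbb R^d$: its decreasing rearrangement is $f^*(t) = c_d\, t^{-1/p}$, so $t^{1/p} f^*(t)$ is constant. Hence $f \in L^{p, \infty}$, while the integral in (\ref{lorentz}) diverges, so $f \not\in L^{p, q}$ for any $q < \infty$.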
We conclude by proving a lemma concerning the atomic decomposition of Lorentz spaces. In preparation for that, we give the following definition: \begin{definition} The function $a$ is an $L^p$ atom, $1 \leq p < \infty$, if $a$ is measurable, bounded, its support has finite measure, and $a$ is $L^p$ normalized: \begin{equation}
\esssup_x |a(x)| < \infty,\ \mu(\supp a)< \infty,\ (\esssup_x |a(x)|)^p \cdot \mu(\supp a) = 1. \end{equation} \end{definition}
Again, note that $a$ is an atom if and only if $|a|$ is one.
We call a sequence lacunary if the quotient between two successive elements is bounded from below by a number greater than one. \begin{lemma}[Atomic decomposition of $L^{p, q}$]\label{lemma_30} Consider a measure space $(X, \mu)$. A function $f$ belongs to $L^{p, q}(X)$, $1 \leq p$, $q < \infty$, if and only if it is a linear combination of $L^p$ atoms \begin{equation}\label{A.9} f = \sum_{k \in \mathbb Z} \alpha_k a_k,
\end{equation} where the atoms $a_k$ have disjoint supports of lacunary sizes and $(\alpha_k)_k \in \ell^q(\mathbb Z)$. Furthermore, $\|f\|_{L^{p, q}} \sim \|\alpha_k\|_{\ell^q}$ and the sum (\ref{A.9}) converges unconditionally in the $L^{p, q}$ norm. \end{lemma} In the limiting case $L^{p, \infty}$, a similar decomposition exists, but only converges when $f$ is in $L^{p, \infty}_0$ --- the closure within $L^{p, \infty}$ of the set of bounded functions of finite-measure support.
Note that, for $1<p<\infty$, $L^{p', 1}$ is the dual of $L^{p, \infty}_0$ and $L^{p, \infty}$ is the dual of $L^{p', 1}$, but the dual of $L^{p, \infty}$ can be quite complicated.
\begin{proof}
In one direction, assume that $f \in L^{p, q}$; without loss of generality, we may consider $|f|$ in its stead.
Let $f_k=f^*(2^k)$, for $f^*$ as in (\ref{A6}). Since the distribution function is decreasing, by (\ref{lorentz}) one has that \begin{equation}
\bigg(\sum_k (2^{(k+1)/p} f_k)^q \bigg)^{1/q} \geq \|f\|_{L^{p, q}} \geq \bigg(\sum_k (2^{k/p} f_{k+1})^q \bigg)^{1/q} \end{equation} or, equivalently,
\begin{equation} 2^{1/p} \bigg(\sum_k (2^{k/p} f_k)^q \bigg)^{1/q} \geq \|f\|_{L^{p, q}} \geq 2^{-1/p} \bigg(\sum_k (2^{k/p} f_k)^q \bigg)^{1/q}.
\end{equation} Thus, $\big(\sum_k (2^{k/p} f_k)^q \big)^{1/q}$ is comparable to $\|f\|_{L^{p, q}}$.
Let $B_k = |f|^{-1}([f_k, \infty))$. By definition, $\mu(B_k) \leq 2^k$ and $|f(x)| \geq f_k$ on $B_k$. Observe that if $\mu(B_k) > \mu(B_{k-1})$, then $\mu(B_k) > 2^{k-1}$.
For any $k \in \mathbb Z$, let \begin{equation} n(k)=\min\{n \mid \mu(B_n \setminus B_k) \geq 2^k\},\ m(k)=\max\{m \mid \mu(B_k \setminus B_m) \geq 2^{k-1}\}, \end{equation} and set $n(k) = +\infty$ or $m(k) = -\infty$ if the sets are empty. Observe that, for $k \leq \ell < n(k)$, $f_{\ell} = f_k$ and, for $k > \ell \geq m(k)$, $f_{\ell} = f_{k-1}$.
Define recursively the finite or infinite sequence $(k_{\ell})_{\ell \in \mathbb Z}$ by \begin{equation} k_0 = 1,\ k_{\ell+1} = n(k_{\ell})\ \text{for } \ell \geq 0,\ \text{and } k_{\ell-1} = m(k_{\ell})\ \text{for } \ell \leq 0. \end{equation} Then $f_k = f_{k_{\ell}}$ whenever $k_{\ell} \leq k < k_{\ell+1}$. Since \begin{equation} 2^{-1/p} 2^{k_{\ell+1}/p} \leq \sum_{k=k_{\ell}}^{k_{\ell+1}-1} 2^{k/p} \leq \frac {2^{-1/p}}{1-2^{-1/p}} 2^{k_{\ell+1}/p},
\end{equation} it follows that $\big(\sum_{\ell} (2^{k_{\ell+1}/p} f_{k_{\ell}})^q \big)^{1/q}$ is comparable to $\|f\|_{L^{p, q}}$.
Let $A_{\ell} = |f|^{-1}([f_{k_{\ell}}, f_{k_{\ell-1}}))$. Note that $\mu(A_{\ell}) \geq 2^{k_{\ell-1}}$ by the definition of the $k_{\ell}$ sequence and also that $\mu(A_{\ell}) = \mu(B_{k_{\ell}} \setminus B_{k_{\ell-1}}) > 2^{k_{\ell}-1}-2^{k_{\ell-1}}$. We infer that $\mu(A_{\ell}) > 2^{k_{\ell}-2}$. On the other hand, $\mu(A_{\ell}) \leq \mu(B_{k_{\ell}}) \leq 2^{k_{\ell}}$.
Set \begin{equation}\begin{aligned}
\alpha_{\ell} &= \mu(A_{\ell})^{1/p} \esssup\{|f(x)| \mid x \in A_{\ell}\}, \\ a_{\ell} &= (\chi_{A_{\ell}} f)/\alpha_{\ell}. \end{aligned}\end{equation} Note that $a_{\ell}$ is an atom for each $\ell$ and \begin{equation} 2^{k_{\ell}/p} f_{k_{\ell-1}} > \alpha_{\ell} > 2^{(k_{\ell}-2)/p} f_{k_{\ell}}. \end{equation} In particular, \begin{equation}
\sum_{\ell} \alpha_{\ell}^q < \sum_{\ell} (2^{k_{\ell}/p} f_{k_{\ell-1}})^q \leq C \|f\|_{L^{p, q}}^q, \end{equation}
so $\|(\alpha_{\ell})\|_{\ell^q} \leq C \|f\|_{L^{p, q}}$.
In order to establish the converse, consider \begin{equation} f = \sum_{k \in \mathbb Z} \alpha_k a_k, \end{equation} where $a_k$ are $L^p$ atoms of disjoint supports of size $2^k$ and only finitely many of the coefficients $\alpha_k$ are nonzero. One needs to show that \begin{equation}\label{A.17}
\|f\|_{L^{p, q}} \leq C \|\alpha_k\|_{\ell^q}, \end{equation} with a constant that does not depend on the number of terms.
Observe that each atom $a_k$ has $\esssup |a_k(x)| = 2^{-k/p}$. Since the measures of supports of atoms $a_k$, for $k < k_0$, add up to at most $2^{k_0}$, it follows by the definition of the distribution function that
\begin{equation} f^*(2^{k_0}) \leq \esssup_{k \geq k_0,\ x \in X} |\alpha_k a_k(x)| = \sup_{k \geq k_0}2^{-k/p} |\alpha_k|. \end{equation} Then, the integral that appears in the definition (\ref{lorentz}) can be bounded by \begin{equation}\begin{aligned}
\int_0^{\infty} (t^{1/p} f^*(t))^q \frac {\dd t} t &\leq \sum_{k_0 \in \mathbb Z} \big(2^{k_0/p} (\sup_{k \geq k_0}2^{-k/p} |\alpha_k|)\big)^q \\
&\leq \sum_{k \in \mathbb Z} \big(\sum_{k_0 \leq k} 2^{k_0 q/p}\big) 2^{-kq/p} |\alpha_k|^q \\
&= \frac {2^{q/p}}{2^{q/p}-1} \sum_{k \in \mathbb Z} |\alpha_k|^q.
\end{aligned}\end{equation} It follows that $\|f\|_{L^{p, q}}$ is indeed finite and fulfills (\ref{A.17}). The unconditional convergence of $\sum_k |\alpha_k|^q$ implies the unconditional convergence of $\sum \alpha_k a_k$.
This proof of the converse easily generalizes to the case when the atom sizes are only in the order of, not precisely equal to, $2^k$, as well as to the case when there are at most some fixed number of atoms of each dyadic size instead of just one (in particular, to the case of a lacunary sequence). \end{proof}
A characterization of $L^{p, \infty}$ is given by (\ref{A7}) or, equivalently,
\begin{equation} L^{p, \infty} = \{f \mid \sup_t t^p \mu(\{x \mid |f(x)| > t\}) < \infty\}. \end{equation} Next, we characterize $L^{p, \infty}_0$, the closure in $L^{p, \infty}$ of the set of bounded functions with supports of finite measure, in a similar manner. \begin{proposition} \label{prop_a3} For $1 \leq p < \infty$, $L^{p, \infty}_0 \subset L^{p, \infty}$ is characterized by \begin{equation}\label{A.23}\begin{aligned}
L^{p, \infty}_0 = \{f \in L^{p, \infty} \mid \lim_{t \to \infty} t^p \mu(\{x \mid |f(x)| > t\}) = 0,\\
\lim_{t \to 0} t^p \mu(\{x \mid |f(x)| > t\}) = 0\}. \end{aligned}\end{equation} \end{proposition} \begin{proof} If $f \in L^p$, then \begin{equation}
\int_0^{\infty} t^p \dd \mu(\{x \mid |f(x)| > t\}) < \infty, \end{equation} implying (\ref{A.23}). Since $L^p$ is dense in $L^{p, \infty}_0$, the same follows for $L^{p, \infty}_0$.
Conversely, (\ref{A.23}) implies that $f$ can be approximated by
\begin{equation} f_n(x) = f(x) \chi_{\{x \mid 1/n < |f(x)| < n\}}(x) \end{equation} and the $f_n$ are bounded, with supports of finite measure. \end{proof}
\section{Real interpolation}
Our presentation of interpolation follows Bergh--L\"{o}f\-str\"{o}m, \cite{bergh}, and is included only for the sake of completeness. The reader is advised to consult this reference work for a much more detailed exposition.
For any couple of Banach spaces $(A_0, A_1)$, contained within a wider space $X$, their intersection, $A_0 \cap A_1$, and their sum \begin{equation} A_0+A_1 = \{x \in X \mid x = a_0+a_1,\ a_0 \in A_0,\ a_1 \in A_1\} \end{equation} give rise to two potentially new Banach spaces.
Given a couple of Banach spaces $(A_0, A_1)$ as above, define the so-called $K$-functional (a construction due to Peetre) on $A_0+A_1$ by
\begin{equation} K(t, a) = \inf_{a = a_0 + a_1} (\|a_0\|_{A_0} + t \|a_1\|_{A_1}). \end{equation} \begin{definition} For $0 \leq \theta \leq 1$, $1 \leq q \leq \infty$, the interpolation space $(A_0, A_1)_{\theta, q} = A_{\theta, q}$ is the set of elements $f \in A_0 + A_1$ whose norm \begin{equation}
\|f\|_{A_{\theta, q}} = \bigg(\int_0^{\infty} (t^{-\theta} K(t, f))^q \frac {dt} t \bigg)^{1/q} \end{equation} is finite. \end{definition}
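For the model couple $(A_0, A_1) = (L^1, L^{\infty})$ one has the classical formula \begin{equation} K(t, f) = \int_0^t f^*(s)\, \dd s, \end{equation} where $f^*$ is the decreasing rearrangement of $f$; from it one recovers $(L^1, L^{\infty})_{\theta, q} = L^{p, q}$ with $1/p = 1-\theta$, a special case of Proposition \ref{prop_33} below.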
$(A_0, A_1)_{\theta, q}$ is an exact interpolation space of exponent $\theta$ between $A_0$ and $A_1$, meaning that it satisfies the following defining property: \begin{theorem} Let $T$ be a bounded linear mapping between two pairs of Banach spaces $(A_0, A_1)$ and $(B_0, B_1)$, i.e. \begin{equation}
\|T f\|_{B_j} \leq M_j \|f\|_{A_j},\ j = 0, 1. \end{equation} Then \begin{equation}
\|T f\|_{(B_0, B_1)_{\theta, q}} \leq M_0^{1-\theta} M_1^{\theta} \|f\|_{(A_0, A_1)_{\theta, q}}. \end{equation} \end{theorem}
For two couples of Banach spaces, $(A_0^{(1)}, A_1^{(1)})$ and $(A_0^{(2)}, A_1^{(2)})$, \begin{equation} (A_0^{(1)} \times A_0^{(2)}, A_1^{(1)} \times A_1^{(2)})_{\theta, q} = (A_0^{(1)}, A_1^{(1)})_{\theta, q} \times (A_0^{(2)}, A_1^{(2)})_{\theta, q}. \end{equation}
The following multilinear interpolation theorem is due to Lions--Peetre:
\begin{theorem}\label{thm_32} Assume that $T$ is a bilinear mapping from $A_j^{(1)} \times A_j^{(2)}$ to $B_j$ for $j = 0, 1$ and \begin{equation}
\|T(a^{(1)}, a^{(2)})\|_{B_j} \leq M_j \|a^{(1)}\|_{A_j^{(1)}} \|a^{(2)}\|_{A_j^{(2)}}. \end{equation} Then \begin{equation}
\|T(a^{(1)}, a^{(2)})\|_{(B_0, B_1)_{\theta, q}} \leq C \|a^{(1)}\|_{(A_0^{(1)}, A_1^{(1)})_{\theta, q_1}} \|a^{(2)}\|_{(A_0^{(2)}, A_1^{(2)})_{\theta, q_2}}, \end{equation} if $0<\theta<1$, $1/q-1=(1/q_1-1) + (1/q_2-1)$, $1 \leq q \leq \infty$. \end{theorem}
Another bilinear real interpolation theorem is the following (see \cite{bergh}, Section 3.13.5(b)): \begin{theorem}\label{int_keta} Let $A_0$, $A_1$, $B_0$, $B_1$, $C_0$, and $C_1$ be Banach spaces, and assume that the bilinear operator $T$ is bounded as follows: \begin{equation} T : A_0 \times B_0 \mapsto C_0,\ T : A_0 \times B_1 \mapsto C_1,\ T : A_1 \times B_0 \mapsto C_1. \end{equation} If $0 < \theta_0, \theta_1 < \theta < 1$, $1 \leq a, b, r \leq \infty$ and $1 \leq \frac 1 a + \frac 1 b$, $\theta = \theta_0 + \theta_1$, then \begin{equation} T : (A_0, A_1)_{\theta_0, ar} \times (B_0, B_1)_{\theta_1, br} \to (C_0, C_1)_{\theta, r}. \end{equation} \end{theorem}
Below we list the results of real interpolation in some standard situations: \begin{proposition}\label{prop_33} Let $L^p$ be the Lebesgue spaces defined over a measure space $(X, \mu)$ and $L^{p, q}$ be Lorentz spaces over the same. Then \begin{enumerate}
\item[1.] $(L^{p_0, q_0}, L^{p_1, q_1})_{\theta, q} = L^{p, q}$, for $p_0$, $p_1$, $q_0$, $q_1 \in (0, \infty]$, $p_0 \ne p_1$, $1/p = (1-\theta)/p_0 + \theta/p_1$, $0<\theta<1$. \item[2.] For a Banach space $A$, define the weighted spaces of sequences (homogenous and inhomogenous, respectively) \begin{equation}\begin{aligned}
\dot \ell^q_s(A) &= \bigg\{(a_n)_{n \in \mathbb Z} \mid \|(a_n)\|_{\dot \ell^q_s(A)} = \bigg(\sum_{n=-\infty}^{\infty} (2^{ns} \|a_n\|_A)^q\bigg)^{1/q} < \infty\bigg\},\\
\ell^q_s(A) &= \bigg\{(a_n)_{n \geq 0} \mid \|(a_n)\|_{\ell^q_s(A)} = \bigg(\sum_{n=0}^{\infty} (2^{ns} \|a_n\|_A)^q\bigg)^{1/q} < \infty\bigg\} \end{aligned}\end{equation} Then \begin{equation}\begin{aligned} (\dot \ell_{s_0}^{q_0}(A_0), \dot \ell_{s_1}^{q_1}(A_1))_{\theta, q} &= \dot \ell_s^q((A_0, A_1)_{\theta, q}), \\ (\ell_{s_0}^{q_0}(A_0), \ell_{s_1}^{q_1}(A_1))_{\theta, q} &= \ell_s^q((A_0, A_1)_{\theta, q}), \end{aligned}\end{equation} where $0<q_0$, $q_1<\infty$, $s = (1-\theta) s_0 + \theta s_1$, $1/q = (1-\theta)/q_0 + \theta/q_1$, $0<\theta<1$. \item[3.] $(L^{p_0}(A_0), L^{p_1}(A_1))_{\theta, p} = L^p((A_0, A_1)_{\theta, p})$, for $1/p = (1-\theta)/p_0 + \theta/p_1$. \end{enumerate}\end{proposition} Again, the reader is referred to \cite{bergh} for the proofs and for more details.
Finally, real interpolation yields a short proof of sharpened Young's and H\"{o}lder's inequalities that we use throughout the paper: \begin{proposition}\label{prop_holder} Assume that $f \in L^{p_1, q_1}$ and $g \in L^{p_2, q_2}$, $1\leq p_1, q_1, p_2, q_2 \leq \infty$. If $\frac 1 p = \frac 1 {p_1} + \frac 1 {p_2}$, $\frac 1 q = \frac 1 {q_1} + \frac 1 {q_2}$, $1<p_1, p_2<\infty$, then $f\, g \in L^{p, q}$.
If $\frac 1 {\tilde p} = \frac 1 {p_1} + \frac 1 {p_2} - 1$ and $1 <p_1, p_2, \tilde p < \infty$, then $f * g \in L^{\tilde p, q}$. \end{proposition} \begin{proof} Interpolate by Theorem \ref{int_keta} between \begin{equation}
\|f g\|_{L^{\infty}} \leq \|f\|_{L^{\infty}} \|g\|_{L^{\infty}},\ \|f g\|_{L^1} \leq \|f\|_{L^{\infty}} \|g\|_{L^1},\ \text{and } \|f g\|_{L^1} \leq \|f\|_{L^1} \|g\|_{L^{\infty}}, \end{equation} with interpolation exponents $\theta_0=\frac 1 {p_1}$, $\theta_1=\frac 1 {p_2}$, $\theta=\frac 1 p$, $ar = q_1$, $br = q_2$, and $r=q$. We obtain that $f\, g \in L^{p, q}$.
Concerning Young's inequality, interpolate by Theorem \ref{int_keta} between \begin{equation}
\|f * g\|_{L^1} \leq \|f\|_{L^1} \|g\|_{L^1},\ \|f * g\|_{L^{\infty}} \leq \|f\|_{L^{\infty}} \|g\|_{L^1},\ \text{and } \|f * g\|_{L^{\infty}} \leq \|f\|_{L^1} \|g\|_{L^{\infty}}, \end{equation} with interpolation exponents $\theta_0 = 1 - \frac 1 {p_1}$, $\theta_1 = 1 - \frac 1 {p_2}$, $\theta = 2 - \frac 1 {p_1} - \frac 1 {p_2} = 1 - \frac 1 {\tilde p}$, $ar = q_1$, $br = q_2$, and $r=q$. We obtain exactly that $f * g \in L^{\tilde p, q}$. \end{proof}
\end{document} | arXiv |
Conway's big picture
Published May 2, 2009 by lievenlb
Expanding (and partially explaining) the original moonshine observation of McKay and Thompson, John Conway and Simon Norton formulated monstrous moonshine:
To every cyclic subgroup $\langle m \rangle $ of the Monster $\mathbb{M} $ is associated a function
$f_m(\tau)=\frac{1}{q}+a_1q+a_2q^2+\ldots $ with $q=e^{2 \pi i \tau} $, all of whose coefficients $a_i \in \mathbb{Z} $ are character values at $m $ of representations of $\mathbb{M} $. These representations are the homogeneous components of the so-called moonshine module.
Each $f_m $ is a principal modulus for a certain genus zero congruence group commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $. These groups are called the moonshine groups.
Conway and Norton showed that there are exactly 171 different functions $f_m $ and associated two arithmetic subgroups $F(m) \subset E(m) \subset PSL_2(\mathbb{R}) $ to them (in most cases, but not all, these two groups coincide).
Whereas there is an extensive literature on subgroups of the modular group (see for instance the series of posts starting here), most moonshine groups are not contained in the modular group. So, we need a tool to describe them and here's where Conway's big picture comes in very handy.
All moonshine groups are arithmetic groups, that is, they are subgroups $G $ of $PSL_2(\mathbb{R}) $ which are commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $ meaning that the intersection $G \cap \Gamma $ is of finite index in both $G $ and in $\Gamma $. Conway's idea is to view several of these groups as point- or set-wise stabilizer subgroups of finite sets of (projective) commensurable 2-dimensional lattices.
Start with a fixed two dimensional lattice $L_1 = \mathbb{Z} e_1 + \mathbb{Z} e_2 = \langle e_1,e_2 \rangle $ and we want to name all lattices of the form $L = \langle v_1= a e_1+ b e_2, v_2 = c e_1 + d e_2 \rangle $ that are commensurable to $L_1 $. Again this means that the intersection $L \cap L_1 $ is of finite index in both lattices. From this it follows immediately that all coefficients $a,b,c,d $ are rational numbers.
It simplifies matters enormously if we do not look at lattices individually but rather at projective equivalence classes, that is $~L=\langle v_1, v_2 \rangle \sim L' = \langle v'_1,v'_2 \rangle $ if there is a rational number $\lambda \in \mathbb{Q} $ such that $~\lambda v_1 = v'_1, \lambda v_2=v'_2 $. Further, we are of course allowed to choose a different 'basis' for our lattices, that is, $~L = \langle v_1,v_2 \rangle = \langle w_1,w_2 \rangle $ whenever $~(w_1,w_2) = (v_1,v_2).\gamma $ for some $\gamma \in PSL_2(\mathbb{Z}) $.
Using both operations we can get any lattice in a specific form. For example,
$\langle \frac{1}{2}e_1+3e_2,e_1-\frac{1}{3}e_2 \rangle \overset{(1)}{=} \langle 3 e_1+18e_2,6e_1-2e_2 \rangle \overset{(2)}{=} \langle 3 e_1+18 e_2,38 e_2 \rangle \overset{(3)}{=} \langle \frac{3}{38}e_1+\frac{9}{19}e_2,e_2 \rangle $
Here, identities (1) and (3) follow from projective equivalence and identity (2) from a base-change. In general, any lattice $L $ commensurable to the standard lattice $L_1 $ can be rewritten uniquely as $L = \langle Me_1 + \frac{g}{h} e_2,e_2 \rangle $ where $M $ a positive rational number and with $0 \leq \frac{g}{h} < 1 $.
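The reduction to this normal form is a small exact computation; here is a quick Python sketch (names and conventions are mine, not Conway's) using exact rational arithmetic, which reproduces the worked example above:

from fractions import Fraction
from math import gcd

def normal_form(v1, v2):
    # v1, v2 = rational pairs (a, b) encoding a*e1 + b*e2; returns (M, g/h)
    # with L ~ <M e1 + (g/h) e2, e2>, working projectively and up to an
    # integral basis change (signs absorbed by negating a basis vector).
    a, b = map(Fraction, v1); c, d = map(Fraction, v2)
    m = 1
    for x in (a, b, c, d):                 # lcm of all denominators
        m = m * x.denominator // gcd(m, x.denominator)
    a, b, c, d = int(a*m), int(b*m), int(c*m), int(d*m)
    while c != 0:                          # Euclid on the first column
        q = a // c
        a, b, c, d = c, d, a - q*c, b - q*d
    if a < 0: a, b = -a, -b
    if d < 0: d = -d
    return Fraction(a, d), Fraction(b % d, d)

print(normal_form((Fraction(1, 2), 3), (1, Fraction(-1, 3))))
# (Fraction(3, 38), Fraction(9, 19))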
Another major feature is that one can define a symmetric hyper-distance between (equivalence classes of) such lattices. Take $L=\langle Me_1 + \frac{g}{h} e_2,e_2 \rangle $ and $L'=\langle N e_1 + \frac{i}{j} e_2,e_2 \rangle $ and consider the matrix
$D_{LL'} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} N & \frac{i}{j} \\ 0 & 1 \end{bmatrix}^{-1} $ and let $\alpha $ be the smallest positive rational number such that all entries of the matrix $\alpha.D_{LL'} $ are integers, then
$\delta(L,L') = \det(\alpha.D_{LL'}) \in \mathbb{N} $ defines a symmetric hyperdistance which depends only on the equivalence classes of lattices (hyperdistance because the log of it behaves like an ordinary distance).
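Continuing in the same vein, the hyperdistance is easy to compute from two normal forms; another little Python sketch (again my own naming):

from fractions import Fraction
from math import gcd

def hyperdistance(L1, L2):
    # L1 = (M, r), L2 = (N, s): normal forms as above;
    # D = [[M, r],[0,1]] . [[N, s],[0,1]]^{-1} = [[M/N, r - (M/N) s],[0, 1]]
    (M, r), (N, s) = L1, L2
    d11 = Fraction(M) / Fraction(N)
    d12 = Fraction(r) - d11 * Fraction(s)
    p, q = d11.denominator, d12.denominator
    alpha = p * q // gcd(p, q)            # smallest alpha clearing denominators
    return int(alpha * alpha * d11)       # det(alpha D) = alpha^2 M/N

print(hyperdistance((2, 0), (1, 0)))                             # 2
print(hyperdistance((Fraction(1, 2), Fraction(1, 2)), (1, 0)))   # 2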
Conway's big picture is the graph obtained by taking as its vertices the equivalence classes of lattices commensurable with $L_1 $ and with edges connecting any two lattices separated by a prime number hyperdistance. Here's part of the 2-picture, that is, only depicting the edges of hyperdistance 2.
The 2-picture is an infinite 3-valent tree as there are precisely 3 classes of lattices at hyperdistance 2 from any lattice $L = \langle v_1,v_2 \rangle $ namely (the equivalence classes of) $\langle \frac{1}{2}v_1,v_2 \rangle~,~\langle v_1, \frac{1}{2} v_2 \rangle $ and $\langle \frac{1}{2}(v_1+v_2),v_2 \rangle $.
Similarly, for any prime hyperdistance p, the p-picture is an infinite p+1-valent tree and the big picture is the product over all these prime trees. That is, two lattices at square-free hyperdistance $N=p_1p_2\ldots p_k $ are two corners of a k-cell in the big picture!
(Astute readers of this blog (if such people exist…) may observe that Conway's big picture did already appear here prominently, though in disguise. More on this another time).
The big picture presents a simple way to look at arithmetic groups and makes many facts about them visually immediate. For example, the point-stabilizer subgroup of $L_1 $ clearly is the modular group $PSL_2(\mathbb{Z}) $. The point-stabilizer of any other lattice is a certain conjugate of the modular group inside $PSL_2(\mathbb{R}) $. For example, the stabilizer subgroup of the lattice $L_N = \langle Ne_1,e_2 \rangle $ (at hyperdistance N from $L_1 $) is the subgroup
$\big\{ \begin{bmatrix} a & \frac{b}{N} \\ Nc & d \end{bmatrix}~|~\begin{bmatrix} a & b \\ c & d \end{bmatrix} \in PSL_2(\mathbb{Z})~\big\} $
Now the intersection of these two groups is the modular subgroup $\Gamma_0(N) $ (consisting of those modular group elements whose lower left-hand entry is divisible by N). That is, the proper way to look at this arithmetic group is as the joint stabilizer of the two lattices $L_1,L_N $. The picture makes it trivial to compute the index of this subgroup.
Consider the ball $B(L_1,N) $ with center $L_1 $ and hyper-radius N (on the left, the ball with hyper-radius 4). Then, it is easy to show that the modular group acts transitively on the boundary lattices (including the lattice $L_N $), whence the index $[ \Gamma : \Gamma_0(N)] $ is just the number of these boundary lattices. For N=4 the picture shows that there are exactly 6 of them. In general, it follows from our knowledge of all the p-trees that the number of lattices at hyperdistance N from $L_1 $ is equal to $N \prod_{p | N}(1+ \frac{1}{p}) $, in accordance with the well-known index formula for these modular subgroups!
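In code, the boundary count, hence the index formula, is a short loop over the prime divisors (a sketch, with my own function name):

from fractions import Fraction

def index_gamma0(N):
    # [PSL2(Z) : Gamma_0(N)] = N * prod_{p | N} (1 + 1/p),
    # i.e. the number of lattices at hyperdistance N from L_1
    idx, n, p = Fraction(N), N, 2
    while p * p <= n:
        if n % p == 0:
            idx *= Fraction(p + 1, p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                  # leftover prime factor
        idx *= Fraction(n + 1, n)
    return int(idx)

print(index_gamma0(4))   # 6, the six boundary lattices of B(L_1, 4)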
But, there are many other applications of the big picture giving a simple interpretation for the Hecke operators, an elegant proof of the Atkin-Lehner theorem on the normalizer of $\Gamma_0(N) $ (the whimsical source of appearances of the number 24) and of Helling's theorem characterizing maximal arithmetical groups inside $PSL_2(\mathbb{R}) $ as conjugates of the normalizers of $\Gamma_0(N) $ for square-free N.
J.H. Conway's paper "Understanding groups like $\Gamma_0(N) $" containing all this material is a must-read! Unfortunately, I do not know of an online version.
\begin{document}
\begin{abstract} For a class of non-symmetric non-local L{\'e}vy-type operators ${\mathcal L}^{\kappa}$, which include those of the form $$
{\mathcal L}^{\kappa}f(x):= \int_{\mathbb{R}^d}( f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>)\kappa(x,z)J(z)\, dz\,, $$ we prove regularity of the fundamental solution $p^{\kappa}$ to the equation $\partial_t ={\mathcal L}^{\kappa}$. \end{abstract}
\title{Regularity of fundamental solutions for L{\'e}vy-type operators}
\noindent {\bf AMS 2010 Mathematics Subject Classification}: Primary 60J35, 47G20; Secondary 60J75, 47D03.
\noindent {\bf Keywords and phrases:} L\'evy-type operator, H{\"o}lder continuity, gradient estimates, fundamental solution, heat kernel, non-symmetric operator, non-local operator, non-symmetric Markov process, Levi's parametrix method.
\section{Introduction}
L{\'e}vy-type processes are stochastic models that can be used to approximate physical, biological or financial phenomena. The local dynamics of such a process is described by its infinitesimal generator, which is a L{\'e}vy-type operator ${\mathcal L}$. The non-local part of that operator is responsible for, and describes, the jumps of the process. In recent years a lot of effort has been put into understanding purely non-local L{\'e}vy-type operators. At first the case of constant coefficients attracted most of the attention, but the literature concerning non-constant coefficients is growing rapidly, including \cite{MR3652202}, \cite{MR3500272}, \cite{MR3817130}, \cite{GS-2018}, \cite{FK-2017}, \cite{MR3765882}, \cite{BKS-2017}, \cite{KR-2017}, \cite{PJ}, \cite{S-2018}, \cite{MR3294616}, \cite{MR2163294}, \cite{MR3353627}. The parametrix method \cite{zbMATH02644101}, \cite{MR2093219} used in those papers leads to a construction of the fundamental solution of the equation $\partial_t u = {\mathcal L} u$, or the heat kernel of the process that is the unique solution to the martingale problem for ${\mathcal L}$. The subject of the present paper is non-local L{\'e}vy-type operators with H{\"o}lder continuous coefficients considered in \cite{GS-2018} and \cite{S-2018} (see Definition~\ref{def-r:GC_r}). A typical example here is the operator \begin{align*} {\mathcal L} f(x)= \int_{{\R^{d}}} \left( f(z+x)-f(x) -
{\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>\right)\frac{\kappa(x,z)}{|z|^{d+\alpha}}\, dz\,, \end{align*} where $\alpha\in (0,2)$, $\kappa$ is bounded from below and above by positive constants, and $\beta$-H{\"o}lder continuous in the first variable with $\beta \in (0,1)$. The usual result concerning the regularity of the fundamental solution of $\partial_t = {\mathcal L} $ is $\gamma$-H{\"o}lder continuity with $\gamma<\alpha$. We improve that result in a more general setting, taking into account the $\beta$-regularity of the coefficient $\kappa$.
Of particular interest are the existence, estimates and regularity of the gradient of the fundamental solution for L{\'e}vy or L{\'e}vy-type operators \cite{MR2995789}, \cite{MR3413864}, \cite{MR3981133}, \cite{GS-2017}, \cite{MR3544166}, \cite{MR3472835}. In this context our assumptions will imply $\alpha>1/2$. In a recent paper \cite{LSX-2020} this restriction was removed at the expense of additional constraints on the coefficient $\kappa$. As already mentioned, the purpose of the present paper is to cover the wide class of operators and coefficients discussed in \cite{GS-2018} and \cite{S-2018}, in particular those that are not symmetric in the $z$ variable and thus not considered in \cite{LSX-2020}.
Under certain conditions, which require $\alpha>1$, we also prove existence, estimates and regularity of the second derivatives of the fundamental solution. To the best of the author's knowledge, this result is new even for the operator ${\mathcal L}$ above under any assumptions on the coefficient $\kappa$ that is not constant in $x$.
\section{The setting and main results}
Let $d\in{\mathbb N}$ and $\nu:[0,\infty)\to[0,\infty]$ be a non-increasing function satisfying
$$\int_{{\R^{d}}} (1\land |x|^2) \nu(|x|)dx<\infty\,.$$ Consider $J: {\R^{d}} \to [0, \infty]$
such that for some $\gamma_0 \in [1,\infty)$ and
all $x\in {\R^{d}}$, \begin{equation}\label{e:psi1}
\gamma_0^{-1} \nu(|x|)\leqslant J(x) \leqslant \gamma_0 \nu(|x|)\,. \end{equation} Further, suppose that $\kappa(x,z)$ is a Borel function on $\mathbb{R}^d\times {\R^{d}}$ such that \begin{equation}\label{e:intro-kappa} 0<\kappa_0\leqslant \kappa(x,z)\leqslant \kappa_1\, , \end{equation} and for some $\beta\in (0,1)$, \begin{equation}\label{e:intro-kappa-holder}
|\kappa(x,z)-\kappa(y,z)|\leqslant \kappa_2|x-y|^{\beta}\, . \end{equation} For $r>0$ we define $$
h(r):= \int_{{\R^{d}}} \left(1\land \frac{|x|^2}{r^2}\right) \nu(|x|)dx\,,\qquad \quad K(r):=r^{-2} \int_{|x|<r}|x|^2 \nu(|x|)dx\,. $$ The above functions play a prominent role in the paper. Our main assumption is \emph{the weak scaling condition} at the origin: there exist $\alpha_h\in (0,2]$ and $C_h \in [1,\infty)$ such that \begin{equation}\label{eq:intro:wlsc}
h(r)\leqslant C_h\,\lambda^{\alpha_h}\,h(\lambda r)\, ,\quad \lambda\leqslant 1, r\leqslant 1\, . \end{equation} In a similar fashion: there exist $\beta_h\in (0,2]$ and $c_h\in (0,1]$ such that \begin{equation}\label{eq:intro:wusc}
h(r)\geqslant c_h\,\lambda^{\beta_h}\,h(\lambda r)\, ,\quad \lambda\leqslant 1, r\leqslant 1\, . \end{equation} Furthermore, suppose there are (finite) constants $\kappa_3, \kappa_4\geqslant 0$ such that \begin{align}\label{e:intro-kappa-crit}
\sup_{x\in{\R^{d}}}\left| \int_{r\leqslant |z|<1} z\, \kappa(x,z) J(z)dz \right| &\leqslant \kappa_3 rh(r)\,,
\qquad r\in (0,1],\\
\left| \int_{r\leqslant |z|<1} z\, \big[ \kappa(x,z)- \kappa(y,z)\big] J(z)dz \right| &\leqslant \kappa_4 |x-y|^{\beta} rh(r)\,, \qquad r\in (0,1]. \label{e:intro-kappa-crit-H} \end{align}
\begin{defn}\label{def-r:GC_r} We say that $\GC_r$ holds if one of the following sets of assumptions is satisfied, \begin{enumerate} \item[] \begin{enumerate} \item[$\Pa$] \quad \eqref{e:psi1}--\eqref{eq:intro:wlsc} hold and $1< \alpha_h \leqslant 2$; \item[$\Pb$] \quad \eqref{e:psi1}--\eqref{eq:intro:wusc} hold and $0<\alpha_h \leqslant \beta_h <1$; \item[$\Pc$] \quad \eqref{e:psi1}--\eqref{eq:intro:wlsc} hold, $J$ is symmetric and $\kappa(x,z)=\kappa(x,-z)$, $x,z\in{\R^{d}}$; \item[] \item[$\Qa$] \quad \eqref{e:psi1}--\eqref{eq:intro:wlsc} hold, $\alpha_h=1$; \eqref{e:intro-kappa-crit} and \eqref{e:intro-kappa-crit-H} hold; \item[$\Qb$] \quad \eqref{e:psi1}--\eqref{eq:intro:wusc} hold, $0<\alpha_h \leqslant \beta_h <1$ and $1-\alpha_h<\beta \land \alpha_h$; \eqref{e:intro-kappa-crit} and \eqref{e:intro-kappa-crit-H} hold. \end{enumerate} \end{enumerate} \end{defn}
Our aim is to prove regularity of a heat kernel $p^{\kappa}$ of a non-local non-symmetric L{\'e}vy-type operator~${\mathcal L}^{\kappa}$, i.e., of a fundamental solution to the equation $\partial_t u ={\mathcal L}^{\kappa} u$. For each of the cases $\Pa$, $\Qa$, $\Qb$ the operator under consideration is defined as \begin{align}
{\mathcal L}^{\kappa}f(x)&:= \int_{{\R^{d}}}( f(x+z)-f(x)- {\bf 1}_{|z|<1} \left<z,\nabla f(x)\right>)\kappa(x,z)J(z)\, dz \,. \label{e:intro-operator-a1} \end{align} If $\Pb$ holds we consider \begin{align} {\mathcal L}^{\kappa}f(x)&:= \int_{{\R^{d}}}( f(x+z)-f(x))\kappa(x,z)J(z)\, dz\,. \label{e:intro-operator-a2} \end{align} If $\Pc$ holds we discuss \begin{align} {\mathcal L}^{\kappa}f(x)&:= \frac1{2}\int_{{\R^{d}}}( f(x+z)+f(x-z)-2f(x))\kappa(x,z)J(z)\, dz \,.\label{e:intro-operator-a3} \end{align}
It was shown in \cite[Theorem~1.1]{GS-2018} and \cite[Theorem~1.1]{S-2018}
that under $\GC_r$ the function $p^{\kappa}$ exists and is unique within a certain class of functions. In fact, the heat kernel was constructed using Levi's parametrix method, i.e., \begin{align}\label{e:p-kappa} p^{\kappa}(t,x,y)= p^{\mathfrak{K}_y}(t,x,y)+\int_0^t \int_{{\R^{d}}}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dzds\,, \end{align} where $q(t,x,y)$ solves the equation \begin{align*} q(t,x,y)=q_0(t,x,y)+\int_0^t \int_{{\R^{d}}}q_0(t-s,x,z)q(s,z,y)\, dzds\,, \end{align*} and $q_0(t,x,y)=\big({\mathcal L}_x^{{\mathfrak K}_x}-{\mathcal L}_x^{{\mathfrak K}_y}\big) p^{\mathfrak{K}_y}(t,x,y)$. The function $p^{\mathfrak{K}_w}$ is the heat kernel of the L{\'e}vy operator ${\mathcal L}^{\mathfrak{K}_w}$ obtained from the operator ${\mathcal L}^{\kappa}$ by freezing its coefficient: $\mathfrak{K}_w(z)=\kappa(w,z)$. For $t>0$ and $x\in \mathbb{R}^d$ we define {\it the bound function}, \begin{equation}\label{e:intro-rho-def}
\Upsilon_t(x):=\left( [h^{-1}(1/t)]^{-d}\land \frac{tK(|x|)}{|x|^{d}} \right) . \end{equation} We refer the reader to \cite[Theorem~1.2 and~1.4]{GS-2018} and \cite[Theorem~1.2 and~1.4]{S-2018} for a collection of properties of the function $p^{\kappa}$, including estimates, H{\"o}lder continuity, differentiability and gradient estimates. For instance, under $\GC_r$ for all $T>0$, $\gamma \in [0,1] \cap[0,\alpha_h)$, there is $c>0$ such that for all $t\in (0,T]$ and $x,x',y\in {\R^{d}}$, \begin{align*}
\left|p^{\kappa}(t,x,y)-p^{\kappa}(t,x',y)\right| \leqslant c
(|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma} \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big). \end{align*}
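For orientation, consider the model case $\nu(s)=s^{-d-\alpha}$ with $\alpha\in (0,2)$ (a side computation, stated here for illustration only). Integrating in polar coordinates,
\begin{align*}
h(r)=\frac{1}{r^2}\int_{|x|<r} |x|^{2-d-\alpha}\, dx+\int_{|x|\geqslant r}|x|^{-d-\alpha}\, dx = \omega_d \left(\frac{1}{2-\alpha}+\frac{1}{\alpha}\right) r^{-\alpha}\,,
\end{align*}
where $\omega_d$ is the surface measure of the unit sphere in ${\R^{d}}$. Consequently, $h^{-1}(1/t)\asymp t^{1/\alpha}$, $K(r)\asymp r^{-\alpha}$, the conditions \eqref{eq:intro:wlsc} and \eqref{eq:intro:wusc} hold with $\alpha_h=\beta_h=\alpha$, and \eqref{e:intro-rho-def} reduces to the familiar $\alpha$-stable bound $\Upsilon_t(x)\asymp t^{-d/\alpha}\wedge t|x|^{-d-\alpha}$.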
Here are the main results of the present paper. For the meaning of $\sigma_e$ see Definition~\ref{def:parame}.
\begin{theorem}\label{thm-r:1} Assume $\GC_r$. Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
|p^{\kappa}(t,x,y)-p^{\kappa}(t,x',y)| \leqslant c \,(|x-x'|^{r}\land 1) \left[ h^{-1}(1/t)\right]^{-r}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,. \end{align*} \end{theorem}
\begin{theorem}\label{thm-r:2} Assume $\GC_r$ and suppose that $\alpha_h+\beta\land \alpha_h>1$. Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-1)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
|\nabla_{x} p^{\kappa}(t,x,y)-\nabla_{x'} p^{\kappa}(t,x',y)| \leqslant c \,(|x-x'|^{r}\land 1) \left[ h^{-1}(1/t)\right]^{-1-r}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,. \end{align*} \end{theorem}
\begin{theorem}\label{thm-r:3} Assume $\GC_r$ and suppose that $\alpha_h+\beta\land \alpha_h>2$.
Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-2)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
|\nabla_x^2\, p^{\kappa}(t,x,y)|\leqslant c \left[h^{-1}(1/t)\right]^{-2} \Upsilon_t(y-x)\,, \end{align*} \begin{align*}
|\nabla_{x}^2 p^{\kappa}(t,x,y)-\nabla_{x'}^2 p^{\kappa}(t,x',y)| \leqslant c \,(|x-x'|^{r}\land 1) \left[ h^{-1}(1/t)\right]^{-2-r}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,. \end{align*} \end{theorem}
The condition $\alpha_h+\beta\land \alpha_h>1$ equivalently means that there is $\beta_1 \in [0,\beta]\cap [0,\alpha_h)$ such that $\alpha_h+\beta_1>1$, and it may hold only if $\alpha_h>1/2$. Similarly, $\alpha_h+\beta\land \alpha_h>2$ is equivalent to the existence of $\beta_1 \in [0,\beta]\cap [0,\alpha_h)$ such that $\alpha_h+\beta_1>2$, and it requires $\alpha_h>1$. We note that even the existence of the second derivatives of $p^{\kappa}$ in Theorem~\ref{thm-r:3} is a new result.
\begin{defn}\label{def:parame} Following \cite{GS-2018}, in the case $\Pa$, $\Pb$, $\Pc$ we respectively consider the set of parameters $\sigma_1 = (\gamma_0,\kappa_0,\kappa_1,\alpha_h, C_h,h)$, $\sigma_2= (\gamma_0,\kappa_0,\kappa_1,\alpha_h,\beta_h, C_h, c_h,h)$, $\sigma_3= (\gamma_0,\kappa_0,\kappa_1,\alpha_h,C_h,h)$, which we abbreviate to $\sigma$. Similarly, after \cite{S-2018} we put $\sigma=(\gamma_0,\kappa_0,\kappa_1,\kappa_3,\alpha_h, C_h,h)$ under $\Qa$ or $\Qb$. We extend the set of parameters $\sigma$ to $\sigma_e$ by adding constant $\kappa_2$ in the cases $\Pa$, $\Pb$, $\Pc$, and $\kappa_2,\kappa_4$ in the cases $\Qa$, $\Qb$. Abusing the notation we have $\sigma_e=(\sigma,\kappa_2)$ under $\Pa$, $\Pb$, $\Pc$, and $\sigma_e=(\sigma,\kappa_2,\kappa_4)$ under $\Qa$, $\Qb$. \end{defn}
In the whole paper {\bf we assume that $\GC_r$ holds.}
\section{Preliminaries}
We follow the notation of \cite{GS-2018} and \cite{S-2018}. Let \begin{equation}\label{e:phi-y-def} \phi_y(t,x,s):=\int_{{\R^{d}}} p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dz, \quad x \in {\R^{d}}, \, 0< s<t\,. \end{equation} and \begin{equation}\label{e:def-phi-y-2} \phi_y(t,x):=\int_0^t \phi_y(t,x,s)\, ds =\int_0^t \int_{{\R^{d}}}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dzds\,. \end{equation} See \cite[Theorem~3.7]{GS-2018} and \cite[Theorem~4.2]{S-2018} for the definition of $q(t,x,y)$. We introduce the following expression: for $t>0$, $x,y,z\in{\R^{d}}$, \begin{align*}
\mathcal{F}_{2}(t,x,y;z)&:=\Upsilon_t(y-x-z){\bf 1}_{|z|\geqslant h^{-1}(1/t)}+ \left[ \left(\frac{|z|}{h^{-1}(1/t)}\right)\wedge 1\right] \Upsilon_t(y-x). \end{align*}
In what follows functions $\mathfrak{K}$ and $p^{\mathfrak{K}}$ are like in \cite[Section~2]{GS-2018} and \cite[Section~2]{S-2018}. \begin{lemma}\label{lem-r:increments} For every $T>0$ there exists a constant $c=c(d,T,\sigma)$ such that for all $t\in(0,T]$, $x,y,z \in {\R^{d}}$, \begin{align}\label{ineq-r:est_diff_1}
|p^{\mathfrak{K}}(t,x+z,y)-p^{\mathfrak{K}}(t,x,y) | &\leqslant c\, \mathcal{F}_{2}(t,x,y;z),\\ \label{ineq-r:est_diff_grad_1}
|\nabla_{x} p^{\mathfrak{K}}(t,x+z,y)- \nabla_x p^{\mathfrak{K}}(t,x,y) | &\leqslant c \left[h^{-1}(1/t)\right]^{-1} \mathcal{F}_{2}(t,x,y;z),\\ \label{ineq-r:est_diff_grad_2}
|\nabla_{x}^2\, p^{\mathfrak{K}}(t,x+z,y)- \nabla_x^2\, p^{\mathfrak{K}}(t,x,y) | &\leqslant c \left[h^{-1}(1/t)\right]^{-2} \mathcal{F}_{2}(t,x,y;z). \end{align} \end{lemma} \noindent{\bf Proof.} The inequalities follow from \cite[Proposition~2.1]{GS-2018} and \cite[Proposition~2.1]{S-2018}, cf. \cite[Lemma~2.3]{GS-2018}, \cite[Lemma~2.3]{S-2018}. {
$\Box$
}
The inequalities \eqref{ineq-r:est_diff_1}, \eqref{ineq-r:est_diff_grad_1}, \eqref{ineq-r:est_diff_grad_2} can be written equivalently as \begin{align*}
\left|p^{\mathfrak{K}}(t,x',y)-p^{\mathfrak{K}}(t,x,y)\right|&\leqslant c \left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,, \\
\left|\nabla_x p^{\mathfrak{K}}(t,x',y)-\nabla_x p^{\mathfrak{K}}(t,x,y)\right|&\leqslant c
\left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \left[h^{-1}(1/t)\right]^{-1}\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,,\\
\left|\nabla_x^2\, p^{\mathfrak{K}}(t,x',y)-\nabla_x^2\, p^{\mathfrak{K}}(t,x,y)\right|&\leqslant c
\left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \left[h^{-1}(1/t)\right]^{-2}\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big)\,. \end{align*}
\begin{corollary}\label{cor-r:est_diff_grad} For every $T>0$ there exists a constant $c=c(d,T,\sigma)$ such that for all $t\in(0,T]$, $x,x',y \in {\R^{d}}$ and $\gamma\in [0,1]$, \begin{align*}
|p^{\mathfrak{K}}(t,x',y)-p^{\mathfrak{K}}(t,x,y) |
\leqslant c (|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big), \end{align*} \begin{align*}
|\nabla_{x'} p^{\mathfrak{K}}(t,x',y)- \nabla_{x} p^{\mathfrak{K}}(t,x,y) |
\leqslant c (|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma-1}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big), \end{align*} \begin{align*}
|\nabla_{x'}^2\, p^{\mathfrak{K}}(t,x',y)- \nabla_{x}^2\, p^{\mathfrak{K}}(t,x,y) |
\leqslant c (|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma-2}
\big( \Upsilon_t(y-x') + \Upsilon_t(y-x)\big). \end{align*} \end{corollary}
\begin{remark}\label{rem-r:monot_h} We commonly use the monotonicity of the functions $h$ and $h^{-1}$, in particular $[h^{-1}(1/(t-s))]^{\gamma} \leqslant [h^{-1}(1/t)]^{\gamma}\leqslant h^{-1}(1/T)\vee 1$ holds for all $0<s<t\leqslant T$, $\gamma\in [0,1]$. \end{remark}
\begin{remark}\label{rem-r:stretch} Certain technical results of \cite{GS-2018}, e.g., \cite[Lemma~5.3]{GS-2018} with $u=1/t$, in view of~\eqref{eq:intro:wlsc} provide inequalities that hold for $t<1/h(1)$. Using \cite[Remark~5.2]{GS-2018} we may extend those inequalities to hold for $t\in (0,T]$ by increasing the constant $C_h$ to $C_h [h^{-1}(1/T)\vee 1]^2$. \end{remark}
We will also need a slight improvement of \cite[(38)]{GS-2018} and \cite[(41)]{S-2018} concerning the dependence of the constant on the parameter $\gamma>0$. \begin{lemma}\label{lem-r:q_reg_impr} Let $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ and $0<\gamma_1\leqslant \beta_1$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,\beta_1,\gamma_1)$ such that for all $t\in(0,T]$, $x,x',y \in {\R^{d}}$ and $\gamma\in [\gamma_1,\beta_1]$, \begin{align*}
&|q(t,x,y)-q(t,x',y)|\nonumber\\
&\leqslant c \left(|x-x'|^{\beta_1-\gamma}\land 1\right) \left\{\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x-y)+\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x'-y)\right\}\,.
\end{align*} \end{lemma} \noindent{\bf Proof.} This formulation of the H{\"o}lder continuity of $q$ has the same proof as \cite[(38)]{GS-2018}. One only needs to pay attention to explicit constants when applying \cite[Lemma~5.17(c)]{GS-2018}. In particular, the monotonicity of the Beta function is used. See also Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\section{Regularity - part I}\label{sec:part1}
We start with several technical lemmas before we prove the key Proposition~\ref{prop-r:key_0}.
\begin{lemma}\label{lem-r:for_cancel_0} For every $T>0$ there exists a constant $c=c(d,T,\sigma_e)$ such that for all $t\in(0,T]$, $x,x',y,w,w' \in {\R^{d}}$
satisfying $|x-x'|\leqslant h^{-1}(1/t)$, \begin{align*}
\left| p^{\mathfrak{K}_w}(t,x,y)- p^{\mathfrak{K}_w}(t,x',y)
-\left( p^{\mathfrak{K}_{w'}}(t,x,y)- p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|&\\
\leqslant
c \left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|w-w'|^{\beta}\land 1) & \,\Upsilon_{t}(y-x)\,. \end{align*} \end{lemma}
\noindent{\bf Proof.} Let $w_0\in{\R^{d}}$ be fixed. Define $\mathfrak{K}_0(z)=(\kappa_0/(2\kappa_1)) \kappa(w_0,z)$ and $\widehat{\mathfrak{K}}_w (z)=\mathfrak{K}_w(z)- \mathfrak{K}_0(z)$. By the construction of the L{\'e}vy process we have \begin{align}\label{eq-r:przez_k_0-impr} p^{\mathfrak{K}_w}(t,x,y)=\int_{{\R^{d}}} p^{\mathfrak{K}_0}(t,x,\xi) p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)\,d\xi\,. \end{align} By \eqref{eq-r:przez_k_0-impr}, \eqref{ineq-r:est_diff_1} and
\cite[Theorem~2.11]{GS-2018}, \cite[Proposition~3.3]{S-2018}, for $|x-x'|\leqslant h^{-1}(1/t)$ we get \begin{align*}
&\left| p^{\mathfrak{K}_w}(t,x,y)- p^{\mathfrak{K}_w}(t,x',y)
-\left(p^{\mathfrak{K}_{w'}}(t,x,y)-p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|\\
&\qquad= \left| \int_{{\R^{d}}} \left( p^{\mathfrak{K}_0}(t, x,\xi)- p^{\mathfrak{K}_0}(t, x',\xi) \right) \left( p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_{w'}}(t,\xi,y)\right) d\xi \right|\\
& \qquad \leqslant c \int_{{\R^{d}}} \left( \frac{|x-x'|}{h^{-1}(1/t)}\right) \Upsilon_t(\xi-x) (|w-w'|^{\beta}\land 1) \Upsilon_t(y-\xi) d\xi\\
& \qquad\leqslant c \left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|w-w'|^{\beta}\land 1) \Upsilon_{2t}(y-x)\,. \end{align*} We have used \cite[Corollary~5.14 and Lemma~5.6]{GS-2018} for the last inequality. Finally we have $\Upsilon_{2t}(y-x)\leqslant 2\Upsilon_{t}(y-x)$, see Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\begin{lemma}\label{lem-r:cancel_01} Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h)$.
For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$ such that for all $t\in(0,T]$, $x,x' \in {\R^{d}}$ and $r\in[0,r_0]$, \begin{align*}
\int_{t/2}^t \left| \int_{{\R^{d}}}\left( p^{\mathfrak{K}_y}(t-s,x,y)- p^{\mathfrak{K}_y}(t-s,x',y)
\right) dy\right| ds
\leqslant c\, (|x-x'|^{r}\land 1)\, t \left[h^{-1}(1/t)\right]^{-r}. \end{align*} \end{lemma} \noindent{\bf Proof.} We first investigate $$
{\rm I}:= \left| \int_{{\R^{d}}}\left( p^{\mathfrak{K}_y}(t-s,x,y)- p^{\mathfrak{K}_y}(t-s,x',y) \right) dy
\right|. $$
Let $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ be such that $\alpha_h+\beta_1-r_0>0$. For $|x-x'|\geqslant h^{-1}(1/(t-s))$ we add and subtract one, in the form $\int_{{\R^{d}}} p^{\mathfrak{K}_x}(t-s,x,y)\, dy$ and $\int_{{\R^{d}}} p^{\mathfrak{K}_{x'}}(t-s,x',y)\, dy$, as follows \begin{align*}
&\left| \int_{{\R^{d}}} \left( p^{\mathfrak{K}_y}(t-s,x,y)- p^{\mathfrak{K}_x}(t-s,x,y)\right)dy -\int_{{\R^{d}}} \left( p^{\mathfrak{K}_y}(t-s,x',y)- p^{\mathfrak{K}_{x'}}(t-s,x',y)
\right) dy\right|\\
&\leqslant c \!\!\int_{{\R^{d}}} (|y-x|^{\beta_1}\land 1) \Upsilon_{t-s}(y-x)\, dy+
c\!\! \int_{{\R^{d}}} (|y-x'|^{\beta_1}\land 1) \Upsilon_{t-s}(y-x')\, dy \leqslant c \left[ h^{-1}(1/(t-s))\right]^{\beta_1}\!. \end{align*} Here we applied \cite[Theorem~2.11]{GS-2018}, \cite[Proposition~3.3]{S-2018}
and \cite[Lemma~5.17(a)]{GS-2018}. For $|x-x'|\leqslant h^{-1}(1/(t-s))$ we subtract zero, use Lemma~\ref{lem-r:for_cancel_0} and \cite[Lemma~5.17(a)]{GS-2018} to get \begin{align*}
&\left| \int_{{\R^{d}}}\left[ p^{\mathfrak{K}_y}(t-s,x,y)-p^{\mathfrak{K}_y}(t-s,x',y) -\left( p^{\mathfrak{K}_x}(t-s,x,y)-p^{\mathfrak{K}_x}(t-s,x',y) \right) \right] dy
\right|\\ &\leqslant c\!\! \int_{{\R^{d}}}
\left( \frac{|x-x'|}{h^{-1}(1/(t-s))}\right) (|y-x|^{\beta_1}\land 1) \Upsilon_{t-s}(y-x)
dy
\leqslant c \left[ h^{-1}(1/(t-s))\right]^{\beta_1} \left( \frac{|x-x'|}{h^{-1}(1/(t-s))}\right). \end{align*} Thus for all $r\in [0,r_0]$ (see Remark~\ref{rem-r:monot_h}), \begin{align*}
{\rm I} &\leqslant c \left[ h^{-1}(1/(t-s))\right]^{\beta_1} \left( \frac{|x-x'|}{h^{-1}(1/(t-s))} \land 1\right)\\ &\leqslant c \left[ h^{-1}(1/(t-s))\right]^{\beta_1}
\left( \frac{|x-x'|}{h^{-1}(1/(t-s))} \land {\frac{h^{-1}(1/(t-s))}{h^{-1}(1/(t-s))}}\right)^r\\
&\leqslant c \,(|x-x'|^{r}\land 1) \left[ h^{-1}(1/(t-s))\right]^{\beta_1-r}. \end{align*} The result follows from \cite[Lemma~5.15]{GS-2018} after integrating the above inequality in $s$, and using the monotonicity of the Beta function. Note that $(\beta_1-r)/\alpha_h+1\geqslant (\beta_1-r_0)/\alpha_h+1>0$. See again Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\begin{lemma}\label{lem-r:cancel_02} Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$
such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in[0,r_0]$, \begin{align*}
\int_{t/2}^t \int_{{\R^{d}}} \left| p^{\mathfrak{K}_z}(t-s,x,z)-p^{\mathfrak{K}_z}(t-s,x',z) \right| \big| q(s,z,y)-q(s,x,y) \big| \,dzds\\
\leqslant c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-r} \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big). \end{align*} \end{lemma}
\noindent{\bf Proof.} We denote $$
{\rm I}:= \int_{{\R^{d}}} \left| p^{\mathfrak{K}_z}(t-s,x,z)-p^{\mathfrak{K}_z}(t-s,x',z) \right| \big| q(s,z,y)-q(s,x,y) \big| \,dz\,. $$ By \eqref{ineq-r:est_diff_1}
and $|q(s,z,y)-q(s,x,y)|\leqslant |q(s,z,y)-q(s,x',y)|+|q(s,x',y)-q(s,x,y)|$ we get \begin{align*}
{\rm I}&\leqslant c \int_{{\R^{d}}} \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) \Upsilon_{t-s}(x-z) \big| q(s,z,y)-q(s,x,y) \big| dz\\
&\quad + c \int_{{\R^{d}}} \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) \Upsilon_{t-s}(x'-z) \big| q(s,z,y)-q(s,x',y) \big| dz\\
&\quad + c \int_{{\R^{d}}} \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) \Upsilon_{t-s}(x'-z) \big| q(s,x,y)-q(s,x',y) \big| dz\,. \end{align*} Define \begin{align*} {\rm I}_1 &=\int_{{\R^{d}}} (t-s)\err{0}{\beta_1-\gamma}(t-s, x-z) \left\{\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,z-y)+\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,x-y)\right\}dz\,,\\ {\rm I}_2 &=\int_{{\R^{d}}} (t-s)\err{0}{\beta_1-\gamma}(t-s, x'-z) \left\{\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,z-y)+\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,x'-y)\right\}dz\,,\\ {\rm I}_3
&=(|x-x'|^{\beta_1-\gamma}\land 1)\int_{{\R^{d}}} (t-s)\err{0}{0}(t-s, x'-z)\\ & \hspace{0.25\linewidth} \times \left\{\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,x-y)+ \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,x'-y)\right\}dz\,. \end{align*} Now, let $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ be such that $\alpha_h+\beta_1-r_0>0$ and fix $0<\gamma_1<(\alpha_h+\beta_1-r_0)\land \beta_1$. By Lemma~\ref{lem-r:q_reg_impr} there is a constant $c=c(d,T,\sigma_e,\beta_1,\gamma_1)$
such that for all $\gamma \in [\gamma_1,\beta_1]$, $$
{\rm I}\leqslant c \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) \big({\rm I}_1+{\rm I}_2 +{\rm I}_3 \big). $$ In what follows we frequently replace $s\in (t/2,t)$ with $t$ due to the comparability of $h^{-1}(1/s)$ and $h^{-1}(1/t)$, see \cite[Lemma 5.1 and~5.3]{GS-2018} and Remark~\ref{rem-r:stretch}. By \cite[Lemma~5.17(a) and~(b)]{GS-2018} (once with $n_1=n_2=\beta_1$ in \cite[Lemma~5.17(b)]{GS-2018}), \begin{align*} {\rm I}_1 &\leqslant c \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma} \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x-y)\\ &\quad +c \,(t-s)t^{-1}\big(\err{\beta_1}{0}+\err{\gamma}{\beta_1-\gamma}+\err{\gamma}{0}\big)(t,x-y) \\ &\quad + c \left[h^{-1}(1/(t-s))\right]^{\beta_1} \err{\gamma-\beta_1}{0}(t,x-y)\,. \end{align*} We treat ${\rm I}_2$ alike. Further, by \cite[Lemma~5.6]{GS-2018} we have \begin{align*}
{\rm I}_3 \leqslant c \,(|x-x'|^{\beta_1-\gamma}\land 1) \left\{\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x-y)+ \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x'-y)\right\}. \end{align*} Next, since by our assumptions $(\beta_1-\gamma-r)/\alpha_h +1\geqslant (\beta_1-\gamma_1-r_0)/\alpha_h >0$ and $(-r/\alpha_h) +2\geqslant (-r_0/\alpha_h) +2>0$, \cite[Lemma~5.15]{GS-2018} with the monotonicity of the Beta function provides uniformly for all $r\in [0,r_0]$ and $\gamma\in [\gamma_1,\beta_1]$, \begin{align*}
\int_{t/2}^t \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) {\rm I}_1\,ds
&\leqslant c \int_{t/2}^t (|x-x'|^r \land 1) \left[h^{-1}(1/(t-s))\right]^{-r}\, {\rm I}_1\,ds\\
&\leqslant c \,(|x-x'|^r \land 1) \,t \left[ h^{-1}(1/t)\right]^{-r}
\big(\err{\beta_1}{0}+\err{0}{\beta_1}+\err{\gamma}{0}\big)(t,x-y)\,. \end{align*} We deal with the part containing $\rm{I}_2$ in the same way. Finally, for each $r\in[0,r_0]$ we take $\gamma\in [\gamma_1\vee (\beta_1-r),\beta_1]$ to obtain by \cite[Lemma~5.15]{GS-2018} that \begin{align*}
\int_{t/2}^t \left(\frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1 \right) {\rm I}_3\,ds
\leqslant c \int_{t/2}^t (|x-x'|^{r-(\beta_1-\gamma)}\land 1)\left[ h^{-1}(1/(t-s))\right]^{\beta_1-\gamma-r}\, {\rm I}_3\,ds \\
\leqslant c\, (|x-x'|^r \land 1 )\,t \left[ h^{-1}(1/t)\right]^{-r} \left\{\big(\err{\beta_1}{0}+\err{0}{\beta_1}\big)(t,x-y)+ \big(\err{\beta_1}{0}+\err{0}{\beta_1}\big)(t,x'-y)\right\}. \end{align*} This ends the proof, as again by the monotonicity of the Beta function the constant is independent of the choice of $r$ and $\gamma$ above. See also Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\begin{proposition}\label{prop-r:key_0} Let $r_0 \in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
\left| \phi_y(t,x)-\phi_y(t,x')\right| \leqslant c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-r} \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big)\,. \end{align*} \end{proposition}
\noindent{\bf Proof.} For $s\in (0,t/2]$ we have by Corollary~\ref{cor-r:est_diff_grad} and \cite[(37)]{GS-2018}, \cite[(40)]{S-2018} that for all $r\in [0,1]$, \begin{align*}
\left|\phi_y (t,x,s)-\phi_y(t,x',s) \right| \leqslant c \,
(|x-x'|^{r}\land 1) \left[h^{-1}(1/(t-s))\right]^{-r} \int_{{\R^{d}}} (t-s) \big( \err{0}{0}(t-s, x-z)&\\ +\,\, \err{0}{0}(t-s, x'-z)\big) \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,z-y)&\,dz. \end{align*} Here $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ is fixed. Since \cite[Lemma~5.17(b)]{GS-2018} and Remark~\ref{rem-r:monot_h} give \begin{align*} &\int_{{\R^{d}}} (t-s)
\err{0}{0}(t-s, x-z) \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,z-y)\,dz\\ &\leqslant c \left(\left[h^{-1}(1/(t-s))\right]^{\beta_1}+\left[h^{-1}(1/s)\right]^{\beta_1} +(t-s) s^{-1}\left[h^{-1}(1/s)\right]^{\beta_1} \right) \err{0}{0}(t,x-y) + \err{0}{\beta_1}(t,x-y)\\ &\leqslant c \left(\left[h^{-1}(1/t)\right]^{\beta_1} + t s^{-1}\left[h^{-1}(1/s)\right]^{\beta_1} \right) \err{0}{0}(t,x-y) + \err{0}{\beta_1}(t,x-y)\,, \end{align*} and \cite[Lemma~5.3]{GS-2018} gives $\left[h^{-1}(1/(t-s))\right]^{-r}\leqslant c \left[h^{-1}(1/t)\right]^{-r}$, we get by \cite[Lemma~5.15]{GS-2018} that \begin{align*}
\int_{0}^{t/2} &\left|\phi_y (t,x,s)-\phi_y(t,x',s) \right| ds\\ & \leqslant
c \,(|x-x'|^r \land 1) \left[ h^{-1}(1/t) \right]^{-r}
t \left( \big( \err{\beta_1}{0}+ \err{0}{\beta_1}\big)(t,x-y)+ \big( \err{\beta_1}{0}+ \err{0}{\beta_1}\big)(t,x'-y)\right). \end{align*} For the remaining part of the integral we write \begin{align*}
&\int_{t/2}^t \left|\phi_y (t,x,s)-\phi_y(t,x',s) \right| ds\\
&\qquad \leqslant \int_{t/2}^t \int_{{\R^{d}}} \left| p^{\mathfrak{K}_z}(t-s,x,z)-p^{\mathfrak{K}_z}(t-s,x',z) \right| \big| q(s,z,y)-q(s,x,y) \big| \,dzds \\
&\qquad \quad+\,\int_{t/2}^t\left|\int_{{\R^{d}}} \left( p^{\mathfrak{K}_z}(t-s,x,z)-p^{\mathfrak{K}_z}(t-s,x',z) \right) dz \right| \left| q(s,x,y)\right| ds \end{align*}
Since $|q(s,x,y)|\leqslant c \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,x-y) \leqslant c \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(t,x-y)$, the result follows from Lemma~\ref{lem-r:cancel_01} and~\ref{lem-r:cancel_02}, and Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\noindent {\it Proof of Theorem~\ref{thm-r:1}.} The result follows from \eqref{e:p-kappa}, Corollary~\ref{cor-r:est_diff_grad} and Proposition~\ref{prop-r:key_0}. {
$\Box$
}
\section{Regularity - part II}
In this section we assume that $\alpha_h+\beta\land \alpha_h>1$. Such a condition necessitates $\alpha_h>1/2$. The proofs here differ from those in Section~\ref{sec:part1}, see Lemma~\ref{lem-r:cancel_12}.
\begin{lemma}\label{lem-r:for_cancel_1} For every $T>0$ there exists a constant $c=c(d,T,\sigma_e)$ such that for all $t\in(0,T]$, $x,x',y,w,w' \in {\R^{d}}$
satisfying $|x-x'|\leqslant h^{-1}(1/t)$, \begin{align*}
\left|\nabla_x p^{\mathfrak{K}_w}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_w}(t,x',y)
-\left(\nabla_x p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|&\\ \leqslant
c \left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|w-w'|^{\beta}\land 1)\left[ h^{-1}(1/t)\right]^{-1} &\, \Upsilon_{t}(y-x)\,. \end{align*} \end{lemma} \noindent{\bf Proof.} Let $w_0\in{\R^{d}}$ be fixed. Define $\mathfrak{K}_0(z)=(\kappa_0/(2\kappa_1)) \kappa(w_0,z)$ and $\widehat{\mathfrak{K}}_w (z)=\mathfrak{K}_w(z)- \mathfrak{K}_0(z)$. By \eqref{eq-r:przez_k_0-impr}, \eqref{ineq-r:est_diff_grad_1} and
\cite[Theorem~2.11]{GS-2018}, \cite[Proposition~3.3]{S-2018}, for $|x-x'|\leqslant h^{-1}(1/t)$ we get \begin{align*}
&\left|\nabla_x p^{\mathfrak{K}_w}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_w}(t,x',y)
-\left(\nabla_x p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|\\
&\qquad= \left| \int_{{\R^{d}}} \left( \nabla_x p^{\mathfrak{K}_0}(t, x,\xi)-\nabla_{x'} p^{\mathfrak{K}_0}(t, x',\xi) \right) \left( p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_{w'}}(t,\xi,y)\right) d\xi \right|\\
& \qquad \leqslant c \int_{{\R^{d}}} \left[ h^{-1}(1/t)\right]^{-1}\left( \frac{|x-x'|}{h^{-1}(1/t)}\right) \Upsilon_t(\xi-x) (|w-w'|^{\beta}\land 1) \Upsilon_t(y-\xi) d\xi\\
& \qquad\leqslant c \left[ h^{-1}(1/t)\right]^{-1}\left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|w-w'|^{\beta}\land 1) \Upsilon_{2t}(y-x)\,. \end{align*} \cite[Corollary~5.14 and Lemma~5.6]{GS-2018} have also been used in the last inequality. It remains to apply $\Upsilon_{2t}(y-x)\leqslant 2\Upsilon_{t}(y-x)$, see Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\begin{lemma}\label{lem-r:cancel_11} Let $\beta_1\in [0,\beta]\cap [0,\alpha_h)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,\beta_1)$ such that for all $t\in(0,T]$, $x,x' \in {\R^{d}}$, \begin{align*}
\left| \int_{{\R^{d}}}\left( \nabla_x p^{\mathfrak{K}_y}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_y}(t,x',y)
\right) dy \right|
\leqslant c \left[ h^{-1}(1/t)\right]^{-1+\beta_1} \left( \frac{|x-x'|}{h^{-1}(1/t)}\land 1\right). \end{align*} \end{lemma}
\noindent{\bf Proof.} Let ${\rm I}$ be the left hand side of the inequality. For $|x-x'|\geqslant h^{-1}(1/t)$ we conclude from \cite[(29)]{GS-2018} and \cite[Lemma~3.5]{S-2018} that $$ {\rm I}\leqslant c \left[ h^{-1}(1/t)\right]^{-1+\beta_1}\,. $$
For $|x-x'|\leqslant h^{-1}(1/t)$ we subtract zero, use Lemma~\ref{lem-r:for_cancel_1} and \cite[Lemma~5.17(a)]{GS-2018} to get \begin{align*}
&\left| \int_{{\R^{d}}}\left[ \nabla_x p^{\mathfrak{K}_y}(t,x,y)-\nabla_{x'} p^{\mathfrak{K}_y}(t,x',y) -\left( \nabla p^{\mathfrak{K}_x}(t,\cdot,y)(x)-\nabla_{x'} p^{\mathfrak{K}_x}(t,x',y) \right) \right] dy
\right|\\ &\leqslant c\int_{{\R^{d}}}
\left[ h^{-1}(1/t)\right]^{-1}\left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|y-x|^{\beta_1}\land 1) \Upsilon_{t}(y-x)\,
dy \\
&\leqslant c \left[ h^{-1}(1/t)\right]^{-1+\beta_1} \left( \frac{|x-x'|}{h^{-1}(1/t)}\right)\,. \end{align*} {
$\Box$
}
\begin{lemma}\label{lem-r:cancel_12} Let $r_0\in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-1)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e, r_0)$ such that for all $t\in (0, T]$, $x,x' \in {\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
\int_{t/2}^t \left| \int_{{\R^{d}}} \left(\nabla_x p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'} p^{\mathfrak{K}_z}(t-s,x',z) \right) q(s,z,y) \,dz\right| ds & \\
\leqslant
c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-1-r}& \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big). \end{align*} \end{lemma} \noindent{\bf Proof.} Denote \begin{align*}
{\rm I}:= \left| \int_{{\R^{d}}} \left(\nabla_x p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'} p^{\mathfrak{K}_z}(t-s,x',z) \right) q(s,z,y) \,dz\right|. \end{align*} Let $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ be such that $\alpha_h+\beta_1-1-r_0>0$. Fix $0<\gamma < (\alpha_h+\beta_1-1-r_0)\land \beta_1$. We first show that \begin{align}\label{eq-r:raw}
{\rm I}\leqslant c \left( \frac{|x-x'|}{h^{-1}(1/(t-s))}\land 1\right) \Big( J (t,x-y;s)+ J(t,x'-y;s) \Big)\,, \end{align} where \begin{align*} J(t,x-y;s):=\left[ h^{-1}(1/(t-s))\right]^{-1} \Big\{& \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma} \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x-y)\\ & +(t-s)t^{-1} \big(\err{\beta_1}{0}+\err{\gamma}{\beta_1-\gamma} + \err{\gamma}{0}\big)(t,x-y)\\ & +\left[h^{-1}(1/(t-s))\right]^{\beta_1} \big(\err{\beta_1}{0}+\err{0}{\beta_1}+\err{\gamma-\beta_1}{0}\big)(t,x-y) \Big\}. \end{align*} In what follows we frequently replace $s\in (t/2,t)$ with $t$ due to the comparability of $h^{-1}(1/s)$ with $h^{-1}(1/t)$, see \cite[Lemma 5.1 and~5.3]{GS-2018}
and Remark~\ref{rem-r:stretch}. For $|x-x'|\geqslant h^{-1}(1/(t-s))$ we have by \cite[Proposition~2.1, (38), (29) and~(37)]{GS-2018}, \cite[Proposition~2.1, (41), Lemma~3.5 and~(40)]{S-2018}, \begin{align*}
&\left| \int_{{\R^{d}}}\nabla_x p^{\mathfrak{K}_z}(t-s,x,z) q(s,z,y) \,dz\right|\\
&\leqslant \int_{{\R^{d}}} \left| \nabla_x p^{\mathfrak{K}_z}(t-s,x,z) \right| | q(s,z,y)-q(s,x,y)| \,dz
+ \left| \int_{{\R^{d}}}\nabla_x p^{\mathfrak{K}_z}(t-s,x,z)\,dz\right|
|q(s,x,y)|\\ &\leqslant c \left[ h^{-1}(1/(t-s))\right]^{-1} \bigg\{ \int_{{\R^{d}}}(t-s)\err{0}{\beta_1-\gamma}(t-s, x-z) \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,z-y)\,dz \\ &\hspace{0.26\linewidth}+\int_{{\R^{d}}}(t-s)\err{0}{\beta_1-\gamma}(t-s, x-z)\,dz \,\big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(t,x-y)\\ &\hspace{0.26\linewidth} + \left[ h^{-1}(1/(t-s))\right]^{\beta_1} \big(\err{\beta_1}{0}+\err{0}{\beta_1}\big)(t,x-y)\bigg\} =: {\rm R}. \end{align*}
Now, \eqref{eq-r:raw} follows in this case from \cite[Lemma~5.18(a) and (b)]{GS-2018} (once with $n_1=n_2=\beta_1$). For $|x-x'|\leqslant h^{-1}(1/(t-s))$ we have by \eqref{ineq-r:est_diff_grad_1}, Lemma~\ref{lem-r:cancel_11} and \cite[(38), (37)]{GS-2018}, \cite[(41), (40)]{S-2018}, \begin{align*}
{\rm I}&\leqslant \int_{{\R^{d}}} \left| \nabla_x p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'} p^{\mathfrak{K}_z}(t-s,x',z) \right| | q(s,z,y)-q(s,x,y)| \,dz\\
&\quad + \left| \int_{{\R^{d}}} \left(\nabla_x p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'} p^{\mathfrak{K}_z}(t-s,x',z)\right) \,dz \right| |q(s,x,y)| \leqslant c
\left( \frac{|x-x'|}{h^{-1}(1/(t-s))}\right) {\rm R} \,. \end{align*} Here again \eqref{eq-r:raw} follows from \cite[Lemma~5.18(a) and (b)]{GS-2018}. Finally, since by our assumptions $(\beta_1-\gamma-1-r)/\alpha_h+1\geqslant (\beta_1-\gamma-1-r_0)/\alpha_h+1>0$ and $(-1-r)/\alpha_h +2\geqslant (-1-r_0)/\alpha_h +2>0$, the inequality \eqref{eq-r:raw} and \cite[Lemma~5.15]{GS-2018} with the monotonicity of the Beta function give for all $r\in [0,r_0]$, \begin{align*}
&\int_{t/2}^t {\rm I} \,ds\leqslant c \,(|x-x'|^r \land 1) \int_{t/2}^t \left[h^{-1}(1/(t-s))\right]^{-r} \Big( J (t,x-y;s)+ J(t,x'-y;s) \Big)\,ds\\
&\leqslant c \,(|x-x'|^r \land 1)\, t \left[h^{-1}(1/t)\right]^{-1-r}\Big\{\big(\err{\beta_1}{0}+\err{0}{\beta_1}+\err{\gamma}{0}\big)(t,x-y)+ \big(\err{\beta_1}{0}+\err{0}{\beta_1}+\err{\gamma}{0}\big)(t,x'-y)\Big\}. \end{align*} This ends the proof, see Remark~\ref{rem-r:monot_h}. {
$\Box$
}
\begin{proposition}\label{prop-r:key_1} Let $r_0\in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-1)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e, r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
\left|\nabla_x \phi_y(t,x)-\nabla_{x'}\phi_y(t,x')\right| \leqslant c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-1-r} \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big)\,. \end{align*} \end{proposition} \noindent{\bf Proof.} Applying \cite[(43) and~(45)]{GS-2018}, \cite[(48) and~(45)]{S-2018} we have \begin{align*} &\nabla_x \phi_y(t,x)-\nabla_{x'}\phi_y(t,x') =\int_0^t \big( \nabla_x \phi_y (t,x,s)-\nabla_{x'} \phi_y(t,x',s) \big)\,ds\\ &\quad = \int_{0}^t \int_{{\R^{d}}} \left(\nabla_x p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'} p^{\mathfrak{K}_z}(t-s,x',z) \right) q(s,z,y) \,dz ds\,. \end{align*} For $s\in (0,t/2]$ we have by Corollary~\ref{cor-r:est_diff_grad} and \cite[(37)]{GS-2018}, \cite[(40)]{S-2018} that for all $r\in [0,1]$, \begin{align*}
&\left| \nabla_x \phi_y (t,x,s)-\nabla_{x'} \phi_y(t,x',s) \right| \leqslant c \,
(|x-x'|^{r}\land 1) \left[h^{-1}(1/(t-s))\right]^{-1-r}\\ &\qquad \times \int_{{\R^{d}}} (t-s) \big( \err{0}{0}(t-s, x-z) +\,\, \err{0}{0}(t-s, x'-z)\big) \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,z-y)\,dz\,, \end{align*} where $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ is fixed. The rest of this part of the proof is the same as that of Proposition~\ref{prop-r:key_0}, and rests on \cite[Lemma~5.17(b), 5.3 and~5.15]{GS-2018}, integration in $s\in (0,t/2]$ and Remark~\ref{rem-r:monot_h}. For the remaining part of the integral with integration in $s$ over $(t/2,t)$ we apply Lemma~\ref{lem-r:cancel_12}. {
$\Box$
}
\noindent {\it Proof of Theorem~\ref{thm-r:2}.} The result follows from \eqref{e:p-kappa}, Corollary~\ref{cor-r:est_diff_grad} and Proposition~\ref{prop-r:key_1}. {
$\Box$
}
\section{Regularity - part III}
In this section we assume that $\alpha_h+\beta\land \alpha_h>2$. Note that this may only hold if $\alpha_h>1$, which in turn puts us into the case $\Pa$ or $\Pc$. We first prove that the second order derivatives of $p^{\kappa}(t,x,y)$ in $x$ actually exist. Such a result is missing in \cite{GS-2018}; therefore we first need to prepare several technical lemmas.
\begin{lemma}\label{lem-r:for_cancel_2} For every $T>0$ there exists a constant $c=c(d,T,\sigma_e)$ such that for all $t\in(0,T]$, $x,x',y,w,w' \in {\R^{d}}$, \begin{align}\label{ineq-r:2a}
\left|\nabla_x^2\, p^{\mathfrak{K}_w}(t,x,y)
-\nabla_x^2\, p^{\mathfrak{K}_{w'}}(t,x,y)\right| \leqslant
c\, (|w-w'|^{\beta}\land 1)\left[ h^{-1}(1/t)\right]^{-2} \, \Upsilon_{t}(y-x)\,, \end{align}
and if $|x-x'|\leqslant h^{-1}(1/t)$, then \begin{align}\label{ineq-r:2b}
\left|\nabla_x^2\, p^{\mathfrak{K}_w}(t,x,y)-\nabla_{x'}^2\, p^{\mathfrak{K}_w}(t,x',y)
-\left(\nabla_x^2\, p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_{x'}^2\, p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|&\\ \nonumber \leqslant
c \left( \frac{|x-x'|}{h^{-1}(1/t)}\right) (|w-w'|^{\beta}\land 1)\left[ h^{-1}(1/t)\right]^{-2} &\, \Upsilon_{t}(y-x)\,. \end{align} \end{lemma} \noindent{\bf Proof.} By \eqref{eq-r:przez_k_0-impr} (\eqref{ineq-r:est_diff_1} and \eqref{ineq-r:est_diff_grad_1} allow to differentiate under the integral) we have \begin{align*} \nabla_x^2\, p^{\mathfrak{K}_w}(t,x,y) -\nabla_x^2\, p^{\mathfrak{K}_{w'}}(t,x,y)= \int_{{\R^{d}}}\nabla_x^2\, p^{\mathfrak{K}_0}(t, x,\xi) \left(p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_{w'}}(t,\xi,y)\right)d\xi\,, \end{align*} and \begin{align*}
&\left|\nabla_x^2\, p^{\mathfrak{K}_w}(t,x,y)-\nabla_{x'}^2\, p^{\mathfrak{K}_w}(t,x',y)
-\left(\nabla_x^2\, p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_{x'}^2\, p^{\mathfrak{K}_{w'}}(t,x',y)\right)\right|\\
&\qquad= \left| \int_{{\R^{d}}} \left( \nabla_x^2\, p^{\mathfrak{K}_0}(t, x,\xi)-\nabla_{x'}^2\, p^{\mathfrak{K}_0}(t, x',\xi) \right) \left( p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_{w'}}(t,\xi,y)\right) d\xi \right|\,. \end{align*} The inequalities follow from \cite[Proposition~2.1, Theorem~2.11, Corollary~5.14 and Lemma~5.6]{GS-2018} and \eqref{ineq-r:est_diff_grad_2}, cf. proof of Lemma~\ref{lem-r:for_cancel_1}. {
$\Box$
}
\begin{lemma} Let $\beta_1\in [0,\beta]\cap [0,\alpha_h)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e,\beta_1)$ such that for all $t\in(0,T]$, $x,x' \in {\R^{d}}$, \begin{align}\label{ineq-r:2a_int}
\left|\int_{{\R^{d}}}\nabla_x^2\, p^{\mathfrak{K}_y}(t,x,y) \,dy \right|\leqslant c \left[ h^{-1}(1/t)\right]^{-2+\beta_1}\,, \end{align} \begin{align}\label{ineq-r:2b_int}
\left| \int_{{\R^{d}}}\left( \nabla_x^2\, p^{\mathfrak{K}_y}(t,x,y)-\nabla_{x'}^2\, p^{\mathfrak{K}_y}(t,x',y)
\right) dy \right|
\leqslant c \left[ h^{-1}(1/t)\right]^{-2+\beta_1} \left( \frac{|x-x'|}{h^{-1}(1/t)}\land 1\right). \end{align} \end{lemma} \noindent{\bf Proof.} The proof of \eqref{ineq-r:2a_int} is like that of \cite[(29)]{GS-2018}, but it requires the use of \eqref{ineq-r:2a}. For the proof of \eqref{ineq-r:2b_int} we employ
\eqref{ineq-r:2a_int} if $|x-x'|\geqslant h^{-1}(1/t)$, and we use
\eqref{ineq-r:2b} if $|x-x'|\leqslant h^{-1}(1/t)$, cf. proof of Lemma~\ref{lem-r:cancel_11}. {
$\Box$
}
\begin{lemma}\label{lem-r:phi_sec_der_1} For all $0<s<t$, $x,y\in{\R^{d}}$, \begin{align*} \nabla_x^2\, \phi_{y}(t,x,s)=\int_{{\R^{d}}} \nabla_x^2\, p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\,dz\,. \end{align*} \end{lemma} \noindent{\bf Proof.} We have by \cite[(45)]{GS-2018} that $ \nabla_x \phi_{y}(t,x,s)=\int_{{\R^{d}}} \nabla_x p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\,dz $. We obtain the result by applying this equality to the difference quotient $(\nabla_x \phi_{y}(t,x+\varepsilon e_i,s)-\nabla_x \phi_{y}(t,x,s))/\varepsilon$ and using the dominated convergence theorem justified by \eqref{ineq-r:est_diff_grad_1} and \cite[(37), Lemma~5.17(b)]{GS-2018}.
{
$\Box$
}
\begin{proposition}\label{prop-r:phi_sec_der_2} For every $T>0$ there exists a constant $c=c(d,T,\sigma_e)$ such that for all $t\in(0,T]$ and $x,y\in{\R^{d}}$, \begin{align} \nabla_x^2\, \phi_{y}(t,x)&=\int_0^t \int_{{\R^{d}}} \nabla_x^2 p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\,dzds\,, \label{eq-r:sec_der_2} \\ &\nonumber\\
| \nabla_x^2 \,\phi_y(t,x) | & \leqslant c \left[ h^{-1}(1/t)\right]^{-2}\Upsilon_t (x-y)\,.\label{ineq-r:sec_der_2} \end{align} \end{proposition} \noindent{\bf Proof.} We choose $\beta_1\in (0,\beta]\cap (0,\alpha_h)$ such that $\alpha_h +\beta_1-2>0$. Let
$0<|\varepsilon| \leqslant h^{-1}(3/t)$
and $\widetilde{x}=x+\varepsilon\theta e_i$. Based on \cite[Theorem~7.21]{MR924157} and Lemma~\ref{lem-r:phi_sec_der_1} we have \begin{align*}
&{\rm I}_0:=\left| \frac{1}{\varepsilon} \left(\frac{\partial}{\partial x_j} \phi_y(t,x+\varepsilon e_i,s)- \frac{\partial}{\partial x_j}\phi_y(t,x,s)\right)\right|
= \left| \int_0^1 \int_{{\R^{d}}} \frac{\partial^2}{\partial x_i \partial x_j} \, p^{\mathfrak{K}_z}(t-s,\widetilde{x},z) q(s,z,y)\, dzd\theta \right|. \end{align*} For $s\in (0,t/2]$ by \cite[Proposition~2.1, (37), Lemma~5.17(b) and~5.3, Proposition~5.8]{GS-2018} and Remark~\ref{rem-r:monot_h}, \begin{align*} {\rm I}_0&\leqslant c \int_0^1 \int_{{\R^{d}}} (t-s) \err{-2}{0}(t-s, \widetilde{x}-z)
\big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,z-y) \,dzd\theta\\ &\leqslant c \left[h^{-1}(1/(t-s))\right]^{-2} \left(\int_0^1 \err{0}{0}(t, \widetilde{x}-y) \,d\theta \right) \left(1 +\left[h^{-1}(1/s)\right]^{\beta_1}
+ (t-s) s^{-1}\left[h^{-1}(1/s)\right]^{\beta_1}\right) \\ &\leqslant c \left[ h^{-1}(1/t) \right]^{-2} \err{0}{0}(t,x-y) \left(1+ t \,s^{-1}\left[h^{-1}(1/s)\right]^{\beta_1}\right). \end{align*} Next, for $s\in (t/2,t)$ we take $0<\gamma<(\alpha_h+\beta_1-2)\land \beta_1$ and by \cite[Proposition~2.1, (38), (37)]{GS-2018} and \eqref{ineq-r:2a_int} we obtain \begin{align*}
{\rm I}_0& \leqslant \int_0^1 \int_{{\R^{d}}} \left| \frac{\partial^2}{\partial x_i \partial x_j}\, p^{\mathfrak{K}_z}(t-s,\widetilde{x},z)\right| |q(s,z,y)-q(s,\widetilde{x},y)|\, dzd\theta\\
&\quad+ \int_0^1 \left| \int_{{\R^{d}}} \frac{\partial^2}{\partial x_i \partial x_j}\, p^{\mathfrak{K}_z}(t-s,\widetilde{x},z)\,dz\right| |q(s,\widetilde{x},y)| \,d\theta\\ &\leqslant c \int_0^1 \int_{{\R^{d}}} (t-s) \err{-2}{\beta_1-\gamma} (t-s,\widetilde{x}-z) \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,z-y)\,
dzd\theta\\ &\quad+c \int_0^1 \left( \int_{{\R^{d}}} (t-s) \err{-2}{\beta_1-\gamma} (t-s,\widetilde{x}-z)\,dz\right) \big(\err{\gamma}{0}+\err{\gamma-\beta_1}{\beta_1}\big)(s,\widetilde{x}-y)\, d\theta \\ &\quad+c \left[h^{-1}(1/(t-s))\right]^{-2+\beta_1} \int_0^1 \big(\err{0}{\beta_1}+\err{\beta_1}{0}\big)(s,\widetilde{x}-y)\,d\theta =: {\rm I}_1+{\rm I}_2+{\rm I}_3\,. \end{align*} By \cite[Lemma~5.17(a) and~(b), Lemma~5.3, Proposition~5.8]{GS-2018} and Remark~\ref{rem-r:monot_h} we get \begin{align*} {\rm I}_1\leqslant c\left ( \left[h^{-1}(1/(t-s))\right]^{-2+\beta_1-\gamma} \err{\gamma-\beta_1}{0}(t,x-y)+ (t-s) \left[h^{-1}(1/(t-s))\right]^{-2}t^{-1}\err{0}{0}(t,x-y) \right)\!, \end{align*} \begin{align*} {\rm I}_2\leqslant c \left[h^{-1}(1/(t-s))\right]^{-2+\beta_1-\gamma} \err{\gamma-\beta_1}{0}(t,x-y)\,, \qquad {\rm I}_3 \leqslant c \left[h^{-1}(1/(t-s))\right]^{-2+\beta_1}\err{0}{0}(t,x-y)\,. \end{align*} Thus ${\rm I}_0$ is bounded by a function independent of $\varepsilon$, which is integrable in $s$ over $(0,t)$ due to \cite[Lemma~5.15]{GS-2018} and our assumptions that guarantee $(\beta_1-\gamma-2)/\alpha_h+1>0$ and $(-2)/\alpha_h+2>0$. Now, by \cite[(43) and~(45)]{GS-2018} we have \begin{align*}
\frac{1}{\varepsilon} \left(\frac{\partial}{\partial x_j} \phi_y(t,x+\varepsilon e_i)- \frac{\partial}{\partial x_j}\phi_y(t,x)\right) =\int_0^t \frac{1}{\varepsilon} \left(\frac{\partial}{\partial x_j} \phi_y(t,x+\varepsilon e_i,s)- \frac{\partial}{\partial x_j}\phi_y(t,x,s)\right) ds\,, \end{align*} and we use the dominated convergence theorem and Lemma~\ref{lem-r:phi_sec_der_1} to reach \eqref{eq-r:sec_der_2}. The estimate \eqref{ineq-r:sec_der_2} follows from integrating the upper bound of ${\rm I}_0$ and applying \cite[Lemma~5.15]{GS-2018}. {
$\Box$
}
We now concentrate on the regularity of $\nabla_x^2\,\phi_y(t,x)$.
\begin{lemma}\label{lem-r:cancel_22} Let $r_0\in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-2)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e, r_0)$ such that for all $t\in (0, T]$, $x,x' \in {\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
\int_{t/2}^t \left| \int_{{\R^{d}}} \left(\nabla_x^2 p^{\mathfrak{K}_z}(t-s,x,z)-\nabla_{x'}^2 p^{\mathfrak{K}_z}(t-s,x',z) \right) q(s,z,y) \,dz\right| ds & \\
\leqslant
c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-2-r}& \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big). \end{align*} \end{lemma} \noindent{\bf Proof.} The proof goes along the same lines as the proof of Lemma~\ref{lem-r:cancel_12} with $1$ replaced by $2$ in the choice of $\beta_1$ and $\gamma$, $[h^{-1}(1/u)]^{-1}$ replaced by $[h^{-1}(1/u)]^{-2}$, \eqref{ineq-r:est_diff_grad_1} by \eqref{ineq-r:est_diff_grad_2}, \cite[(29)]{GS-2018} by \eqref{ineq-r:2a_int}, and Lemma~\ref{lem-r:cancel_11} by \eqref{ineq-r:2b_int}. Note also that by our assumptions
$(\beta_1-\gamma-2-r)/\alpha_h+1\geqslant (\beta_1-\gamma-2-r_0)/\alpha_h+1>0$ and $(-2-r)/\alpha_h +2\geqslant (-2-r_0)/\alpha_h +2>0$. {
$\Box$
}
\begin{proposition}\label{prop-r:key_2} Let $r_0\in [0,1]\cap [0,\alpha_h+\beta\land \alpha_h-2)$. For every $T>0$ there exists a constant $c=c(d,T,\sigma_e, r_0)$ such that for all $t\in (0,T]$, $x,x',y\in{\R^{d}}$ and $r\in [0,r_0]$, \begin{align*}
\left|\nabla_x^2 \phi_y(t,x)-\nabla_{x'}^2\phi_y(t,x')\right| \leqslant c \left(|x-x'|^r \land 1 \right) \left[ h^{-1}(1/t)\right]^{-2-r} \big( \Upsilon_t(y-x)+ \Upsilon_t(y-x') \big)\,. \end{align*} \end{proposition} \noindent{\bf Proof.} The result follows from \eqref{eq-r:sec_der_2}, Lemma~\ref{lem-r:phi_sec_der_1}, Corollary~\ref{cor-r:est_diff_grad}, \cite[(37), Lemma~5.17(b), 5.3 and~5.15]{GS-2018}, integration in $s\in (0,t/2]$, Remark~\ref{rem-r:monot_h} and Lemma~\ref{lem-r:cancel_22}, cf. proof of Proposition~\ref{prop-r:key_1}.
{
$\Box$
}
\noindent {\it Proof of Theorem~\ref{thm-r:3}.} From \eqref{e:p-kappa} and \eqref{eq-r:sec_der_2} we have the second order differentiability of $p^{\kappa}(t,x,y)$ in the $x$ variable. By \cite[Proposition~2.1]{GS-2018} and \eqref{ineq-r:sec_der_2} we obtain the upper bound. Finally, Corollary~\ref{cor-r:est_diff_grad} and Proposition~\ref{prop-r:key_2} give the regularity. {
$\Box$
}
\end{document}
Debra Boutin
Debra Lynn Boutin is an American mathematician, the Samuel F. Pratt Professor of Mathematics at Hamilton College, where she chairs the mathematics department.[1] Her research involves the symmetries of graphs and distinguishing colorings of graphs.
Education and career
Boutin is a graduate of Chicopee Comprehensive High School in Massachusetts. After high school, Boutin took a ten-year hiatus from higher education, including serving for four years in the United States Navy, working as a secretary, and raising a child. She restarted her education, supported by the G.I. Bill, by studying data processing at Springfield Technical Community College in Massachusetts. Next, Boutin went to Smith College as an Ada Comstock Scholar.[2] She graduated Phi Beta Kappa and summa cum laude in 1991 with a bachelor's degree in mathematics. She completed her Ph.D. in mathematics in 1998 at Cornell University.[3] Her doctoral dissertation, Centralizers of Finite Subgroups of Automorphisms and Outer Automorphisms of Free Groups, was supervised by Karen Vogtmann.[4]
After a one-year visiting position at Trinity College (Connecticut), she joined Hamilton College as an assistant professor in 1999. She was tenured as an associate professor in 2005 and promoted to full professor in 2010.[3]
Recognition
Hamilton College named Boutin as the Samuel F. Pratt Professor of Mathematics in 2019.[3]
References
1. Mathematics faculty, Hamilton College, retrieved 2023-01-09
2. Danko, Jim (2 June 2022), STCC gives: Now a professor, former student inspired to offer support, Springfield Technical Community College
3. Curriculum vitae (PDF), Hamilton College, March 2022, retrieved 2023-01-09
4. Debra Boutin at the Mathematics Genealogy Project
Discrete Frequency
An Analog to Digital Converter (ADC) samples a continuous-time signal to produce discrete-time samples. For a digital signal processor, this signal just resides in memory as a sequence of numbers. Consequently, the knowledge of the sample rate $F_S$ is the key to signal manipulation in the digital domain.
As far as time is concerned, one can easily determine the period or frequency of such a signal stored in memory. For example, the period $T$ in the sinusoid of Figure below is clearly $10$ samples, and the sample time $T_S=1/F_S$ can be employed to find its period in seconds.
For a sample rate of $F_S=10$ Hz, $T_S = 0.1$ seconds, so
\begin{align*}
T &= 10 ~\frac{\text{samples}}{\text{period}}~ \cdot ~ 0.1 ~\frac{\text{seconds}}{\text{sample}} = 1~ \text{second}
\end{align*}
and its frequency $F = 1/T = 1$ Hz.
For a sample rate of $F_S=500$ Hz, $T_S = 0.002$ seconds, so
\begin{align*}
T &= 10 ~\frac{\text{samples}}{\text{period}}~ \cdot ~0.002 ~\frac{\text{seconds}}{\text{sample}} = 0.02~ \text{seconds}
\end{align*}
with frequency $F = 1/T = 50$ Hz.
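As a sanity check, this bookkeeping is a few lines of code; a minimal Python sketch using the numbers above:

```python
Fs = 500.0                    # sample rate in Hz
Ts = 1.0 / Fs                 # sample time: 0.002 seconds
samples_per_period = 10       # counted from the stored sequence
T = samples_per_period * Ts   # period: 0.02 seconds
F = 1.0 / T                   # frequency: 50.0 Hz
print(T, F)
```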
As explained before, the samples of both sinusoids will be stored in memory as a sequence of numbers, with no difference in the discrete domain. Next, we turn our attention to the discrete frequency domain.
Discrete Frequency and the Sample Rate
Remember that the reason we work with discrete-time signals is that the finite memory of a processor can store only a fixed number of time values. Similarly, this finite memory can also store only a fixed number of frequency values and not an infinite range of $F$ values.
For this reason, while we are sampling in the time domain, we also want to sample the frequency domain. Assume that a total of $N$ samples are collected in the time domain at a rate of $F_S = 1/T_S$, thus spanning a time duration of $NT_S$ seconds. Then, the lowest frequency that can be represented by these $N$ samples is that of a sinusoid completing exactly one full cycle during this interval of $NT_S$ seconds. Consequently, its frequency is given by $1/(NT_S)$ Hz and expressed as
\begin{align*}
V_I[n] &= \cos 2\pi \frac{1}{NT_S}t\bigg|_{t=nT_S} = \cos 2\pi \frac{1}{NT_S}nT_S = \cos 2\pi \frac{1}{N}n\\
V_Q[n] &= \sin 2\pi \frac{1}{NT_S}t\bigg|_{t=nT_S} = \sin 2\pi \frac{1}{NT_S}nT_S = \sin 2\pi \frac{1}{N}n
\end{align*}
Hence, $1/N$ is the discrete frequency of this sinusoid whose $Q$ part is drawn in Figure below.
Now consider the equation
\begin{align*}
\frac{1}{NT_S} = \frac{F_S}{N} = F_S\cdot \frac{1}{N}
\end{align*}
and observe the following.
While the actual frequency is $F_S(1/N)$, the discrete frequency is $1/N$.
Frequency resolution, determined by the lowest frequency that can be represented in such a discrete setting, is given by $F_S/N$.
Viewed as $F_S \cdot 1/N$, we can get more discrete frequency samples that are integer multiples of $1/N$.
\begin{align*}
0, \frac{1}{N}, \frac{2}{N}, \cdots
\end{align*}
As a consequence, the frequencies of all such complex sinusoids are integer multiples of the fundamental frequency $1/N$. An example of $Q$ component for $2/N$ is drawn in the background of the above Figure with a dotted line that exhibits $2$ periods within those $N$ samples, indicating $2$ cycles/$N$ samples or $2/N$ cycles/sample.
Extending this concept further, the complex sinusoids whose discrete frequencies are $k/N$ complete $k$ cycles in an interval of $N$ samples (or $k/N$ cycles per sample).
Orthogonality
Let us call the sinusoids with frequencies $1/N$ and $2/N$ $A$ and $B$, respectively. As we now see, they are orthogonal to each other. Orthogonality implies that their correlation computed at shift $0$ is zero. In other words, the sum of their sample-by-sample products is zero. For a real sinusoid,
\begin{equation}\label{eqIntroductionSinOrthogonality}
\sum \limits _{n=0} ^{N-1} \sin 2\pi \frac{1}{N}n \cdot \sin 2\pi \frac{2}{N}n = 0
\end{equation}
This can be verified from the last figure. Observe that $B$ has $2$ periods within $N$ samples. In its first period, the products of its samples are taken with negative samples of $A$. In its second period, the same products are taken with positive samples of $A$ with exactly the same magnitude. In this way, this sum turns out to be zero.
This is evident from circles drawn around two of its samples in Figure above. Verify that the corresponding samples of the sinusoid $A$ have opposite signs. Since the original workhorses of DSP are the complex sinusoids and not the real ones, we now analyze orthogonality in that context.
In the article on correlation, we found that the expression for correlation of two complex signals involves conjugation of the second signal. Therefore, to examine this correlation at shift $0$ (i.e., orthogonality), we need to apply the multiplication rule of complex signals: due to the conjugation of the second signal, the $I$ term of the result is $I\cdot I + Q\cdot Q$ and the $Q$ term of the result is $Q\cdot I - I\cdot Q$. For two complex sinusoids with discrete frequencies $k/N$ and $k'/N$, this correlation at lag $0$ is
\begin{align*}
&\sum _{n=0}^{N-1}\left[\cos 2\pi \frac{k}{N}n\cdot \cos 2\pi \frac{k'}{N}n + \sin 2\pi \frac{k}{N}n\cdot \sin 2\pi \frac{k'}{N}n\right]\\
&\sum _{n=0}^{N-1}\left[\sin 2\pi \frac{k}{N}n \cdot \cos 2\pi \frac{k'}{N}n - \cos 2\pi \frac{k}{N}n \cdot \sin 2\pi \frac{k'}{N}n\right]
\end{align*}
Using the identities $\cos A \cos B + \sin A \sin B = \cos (A-B)$ and $\sin A \cos B - \cos A \sin B = \sin (A-B)$,
\begin{align}\label{eqIntroductionOrthogonality1}
\sum _{n=0}^{N-1}\cos 2\pi \frac{k-k'}{N}n &= \begin{cases}
N & k = k' \\
0 & k \neq k'
\end{cases}\\
\sum _{n=0}^{N-1}\sin 2\pi \frac{k-k'}{N}n &= 0
\end{align}
where we have used the fact that the sum of samples of $\cos(\cdot)$ and $\sin(\cdot)$ above is zero due to an integer number of periods within $N$ samples. Also, $\cos 0 = 1$ when $k=k'$, which adds up to $N$. This concept is illustrated in Figure below.
We have proved the following result.
"All complex sinusoids having frequencies as integer multiples of a fundamental frequency $F_S(1/N)$ are orthogonal to each other."
We now move towards drawing this discrete frequency axis.
Discrete Frequency Axis
With $N$ discrete time domain samples, are there infinitely many complex sinusoids that are orthogonal to each other? The answer is no: we claim that their number is only $N$. To verify, let us explore one such option with a discrete frequency $N/N=1$.
\sin 2\pi \frac{N}{N}n = \sin 2\pi n = 0
Hence, a complex sinusoid with discrete frequency $0$ is the same as the one with discrete frequency $1$. For the next candidate $(N+1)/N$, use $\sin(A+B) = \sin A\cos B +\cos A \sin B$.
\begin{align*}
\sin 2\pi \frac{N+1}{N}n &= \sin 2\pi n \cos 2\pi \frac{1}{N}n + \cos 2\pi n \sin 2\pi \frac{1}{N}n \\
&= \sin 2\pi \frac{1}{N}n
\end{align*}
because $\sin 2\pi n = 0$ and $\cos 2\pi n = 1$. We conclude that there are only $N$ distinct discrete frequencies, with indices running from $0$ to $N-1$ (or $-N/2$ to $N/2-1$ as we will shortly see). Onwards, they just repeat themselves.
Interestingly, we learned that for $N$ samples in discrete time domain, there are $N$ samples in the discrete frequency domain as well! These $N$ samples represent discrete frequencies of complex sinusoids that are integer multiples of one fundamental frequency $1/N$ and hence are all orthogonal to each other.
From the $N$ discrete frequencies found above, we can construct a discrete frequency axis with the following members.
0, \frac{1}{N}, \cdots \cdots, \frac{N-1}{N}
which are exactly the same as
-1, -1+\frac{1}{N},\cdots \cdots, \frac{-1}{N}
due to their periodicity. However, there is a reason we choose to work with an axis centered around $0$. The unique range of continuous frequency $F$ after sampling a signal is $-0.5F_S \le F < +0.5F_S$, or \begin{equation} -0.5 \le \frac{F}{F_S} < +0.5 \label{eqIntroductionDiscreteFreqRange} \end{equation} Since there are $N$ discrete frequencies, $-0.5 \le F/F_S < +0.5$ is divided into $N$ equal intervals (assuming $N$ is even) by sampling at instants \begin{align} -0.5,-0.5+\frac{1}{N},\cdots,-\frac{1}{N},0,\frac{1}{N},\cdots,+0.5-\frac{1}{N} \label{eqIntroductionDiscreteFreqAxis} \end{align} to obtain the discrete frequency axis. Obviously, the discrete frequency resolution is $1/N$. So discrete frequency is basically the spectral content in the baseband sampled at $N$ equally spaced frequency points (the term $+0.5$ is absent because it is the same as $-0.5$ owing to the axis periodicity).
Eq \eqref{eqIntroductionDiscreteFreqAxis} can also be written as
\begin{align}
\frac{-N/2}{N}, \frac{-N/2+1}{N}, \cdots, -\frac{1}{N},0,\frac{1}{N},\cdots, \frac{N/2-1}{N} \label{eqIntroductionDiscreteFreqAxis2}
\end{align}
Consequently, the index $k$ of discrete frequency axis is given by $[-N/2,N/2-1]$, or
\begin{align*}
k &= -\frac{N}{2},-\frac{N}{2}+1,\cdots,-1,0,1,\cdots,\frac{N}{2}-1 \\
\frac{k}{N} &= -0.5,-0.5+\frac{1}{N},\cdots,-\frac{1}{N},0,+\frac{1}{N},\cdots,+0.5-\frac{1}{N}
\end{align*}
This is drawn in Figure above. Comparing the above equation with Eq \eqref{eqIntroductionDiscreteFreqRange}, we arrive at the relation between discrete frequency $k/N$ and continuous frequency $F$ as
\begin{equation}\label{eqIntroductionFreqRelation}
\frac{k}{N} = \frac{F}{F_S}
\end{equation}
The units of discrete frequency $k/N$ from Eq \eqref{eqIntroductionFreqRelation} are
\frac{\text{cycles}}{\text{second}} \div \frac{\text{samples}}{\text{second}} = \frac{\text{cycles}}{\text{sample}}
Eq \eqref{eqIntroductionFreqRelation} is one of the two most fundamental relations in digital signal processing, the other being the sampling theorem. These are the two interfaces between continuous and discrete worlds.
Establishing the relationship between continuous and discrete frequencies gives the frequency domain as a sequence of numbers stored in a processor memory.
When in doubt about continuous and discrete frequency domains, refer to Eq \eqref{eqIntroductionFreqRelation}!
If the discrete frequency $k/N$ and the sample rate $F_S$ are known, the actual continuous frequency can be found from Eq \eqref{eqIntroductionFreqRelation}.
F = F_S \cdot \frac{k}{N}
For example, with $F_S = 3$ kHz and $N = 32$,
k = 0 \quad \rightarrow \quad F &= 3000 \cdot \frac{0}{32} = 0 ~\text{Hz} \\
k = 1 \quad \rightarrow \quad F &= 3000 \cdot \frac{1}{32} = 93.75 ~\text{Hz} \\
k = 2 \quad \rightarrow \quad F &= 3000 \cdot \frac{2}{32} = 187.5 ~\text{Hz}
Each $k$ is therefore called a frequency bin, and the value $N$ determines the number of input samples and the resolution of the discrete frequency domain. Understanding the sampling theorem and Eq \eqref{eqIntroductionFreqRelation} will make the further concepts much easier to grasp.
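A small sketch (my own, assuming numpy) that reproduces the bin-to-Hz conversion above and constructs the same axis with numpy's FFT helpers:

```python
import numpy as np

F_S, N = 3000, 32                  # sample rate (Hz) and number of samples

k = np.arange(-N // 2, N // 2)     # discrete frequency index
F = F_S * k / N                    # continuous frequency of each bin (Hz)
print(F[N // 2 : N // 2 + 3])      # bins k = 0, 1, 2 -> 0, 93.75, 187.5 Hz

# The same axis via numpy's FFT helpers:
F_np = np.fft.fftshift(np.fft.fftfreq(N, d=1 / F_S))
print(np.allclose(F, F_np))        # True
```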
In conclusion, the range of unique discrete frequencies $k/N$ is
\begin{equation}\label{eqIntroductionFreqRangeDiscrete}
-0.5 \le \frac{k}{N} < +0.5
\end{equation}
This set of complex sinusoids is drawn in Figure below highlighting the discrete frequency index $k$. The negative indices of $k$ illustrate the clockwise direction of rotation while the positive indices of $k$ point towards the anticlockwise rotation. Observe that the rate of oscillation in discrete-time signals decreases with $k/N$ going from $-0.5$ to $0$ and increases from $0$ to $+0.5$, remembering that $k/N = -0.5$ is the same as $k/N = +0.5$.
Keep this figure in mind for most applications of signal processing techniques, as these sinusoids represent a complete discrete frequency set. It will help you in surprising ways!
Some people are more comfortable with 2D figures instead of 3D ones. So this set of complex sinusoids with both $I$ and $Q$ components is shown in Figure below for $N = 8$.
Following are a few key observations in this figure.
For $k = 0$, the $I$ sinusoid is $1$ because $\cos 0 = 1$, while $Q$ sinusoid is $0$ because $\sin 0 = 0$.
Just like $k=1$ implies that the $I$ and $Q$ waveforms, $\cos 2\pi (1/N)n$ and $\sin 2\pi (1/N)n$, complete one full cycle during $N = 8$ samples, each complex sinusoid with discrete frequency $k/N$ spans $k$ complete cycles in an interval of $N$ samples.
For negative values of $k$, $I$ sinusoids remain unchanged while $Q$ sinusoids change sign. In the context of complex sinusoids, this follows from the definition of negative frequencies as the ones with clockwise rotation.
Finally, to understand what the units of discrete frequency `cycles/sample' mean, consider for example the case of $k=2$ in this figure. Observe that for $N=8$, we have the discrete frequency equal to $k/N=2/8=1/4$ cycles/sample. Notice that this sinusoid completes a quarter cycle from one sample to the next and hence the frequency of $1/4$ cycles/sample.
Next, we consider an example to highlight the impact of aliasing on a discrete-time signal and why it is not necessary for all continuous frequencies to translate into corresponding discrete frequencies.
Consider a signal
s(t) = 2 \cos(2 \pi 1000 t) + 7 \sin(2\pi 3000 t) + 3 \cos(2 \pi 6000t)
It is clear that continuous frequencies present in this signal are $F_1 = 1$ kHz, $F_2 = 3$ kHz and $F_3 = 6$ kHz. Since the maximum frequency is $6$ kHz, the sampling theorem gives the Nyquist rate as $2\times6$ kHz = $12$ kHz.
Now suppose that this signal is sampled at $F_S = 5$ kHz; the folding frequency is then given by $0.5F_S = 2.5$ kHz. Sampling the signal at equal intervals of $t = nT_S$
\begin{align*}
s[n] &= s(t)|_{t = nT_S} = s\left(\frac{n}{F_S}\right) \\
&= 2 \cos 2 \pi \frac{1}{5} n + 7 \sin 2 \pi \frac{3}{5}n + 3 \cos 2 \pi \frac{6}{5}n \\
&= 2 \cos 2 \pi \frac{1}{5} n + 7 \sin 2 \pi \left(1 - \frac{2}{5}\right)n + 3 \cos 2 \pi \left(1+\frac{1}{5}\right)n
\end{align*}
where the adjustment is done to bring all discrete frequencies within the range $-0.5$ to $0.5$. Next,
\begin{align*}
s[n] &= 2 \cos 2 \pi \frac{1}{5} n + 7 \sin 2 \pi \left(-\frac{2}{5}\right)n + 3 \cos 2 \pi\left(\frac{1}{5}\right)n \\
&= 5 \cos 2 \pi \frac{1}{5} n - 7 \sin 2 \pi \frac{2}{5}n
\end{align*}
If this signal is reconstructed in the continuous-time domain, the frequencies present are $k/N = 1/5$ and $k/N = -2/5$. From Eq \eqref{eqIntroductionFreqRelation}, these appear as continuous frequencies at $F_S\times 1/5 = 1$ kHz and $F_S\times(-2/5) = -2$ kHz. Only $F_1 = 1$ kHz is less than $0.5F_S$ and hence within the aliasing-free range, so it is still there after sampling. The other two are above the folding frequency and hence aliased to $F_2-F_S = 3-5=-2$ kHz and $F_3-F_S=6-5=1$ kHz.
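The same conclusion can be reached numerically; the sketch below (illustrative, with an arbitrarily chosen number of samples) samples $s(t)$ at $5$ kHz and locates the surviving spectral peaks:

```python
import numpy as np

F_S, N = 5000, 500                       # sample at 5 kHz, N samples
t = np.arange(N) / F_S

# s(t) with components at 1 kHz, 3 kHz and 6 kHz
s = (2 * np.cos(2 * np.pi * 1000 * t)
     + 7 * np.sin(2 * np.pi * 3000 * t)
     + 3 * np.cos(2 * np.pi * 6000 * t))

S = np.fft.fftshift(np.fft.fft(s)) / N   # normalized spectrum
F = np.fft.fftshift(np.fft.fftfreq(N, d=1 / F_S))
print(F[np.abs(S) > 0.5])                # [-2000. -1000.  1000.  2000.]
# 3 kHz aliased to -2 kHz and 6 kHz aliased to 1 kHz, as derived above
```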
Two final remarks are in order here.
Dealing with complex numbers through an $IQ$ notation has several advantages. For example, it can avoid the standard complex expressions like $\exp\left(j\theta\right)$ at a cost of additional maths. More importantly, it describes how the mathematical operations are implemented in actual electronic circuits. Also, it reminds us how phase is equally important in signal analysis.
Every time I write an expression involving a discrete frequency, I express it either as $2\pi \frac{k}{N}n$ or $2\pi (k/N)n$, instead of $2\pi kn/N$ that is used in most texts. The former clearly indicates the presence of a discrete frequency $k/N$ analogous to the variable $F$ in $2\pi Ft$, while it is easy to lose this meaning in the latter.
\begin{definition}[Definition:That which produces Medial Whole with Medial Area]
Let $a, b \in \R_{>0}$ be in the forms:
:$a = \dfrac {\rho \lambda^{1/4} } {\sqrt 2} \sqrt {1 + \dfrac k {\sqrt {1 + k^2} } }$
:$b = \dfrac {\rho \lambda^{1/4} } {\sqrt 2} \sqrt {1 - \dfrac k {\sqrt {1 + k^2} } }$
where:
: $\rho$ is a rational number
: $k$ is a rational number whose square root is irrational
: $\lambda$ is a rational number whose square root is irrational.
Then $a - b$ is '''that which produces a medial whole with a medial area'''.
{{:Euclid:Proposition/X/78}}
\end{definition}
Polycon
In geometry, a polycon is a kind of developable roller. It is made of identical pieces of a cone whose apex angle equals the angle of an even sided regular polygon.[1][2] In principle, there are infinitely many polycons, as many as there are even sided regular polygons.[3] Most members of the family have elongated spindle-like shapes. The polycon family generalizes the sphericon. It was discovered by the Israeli inventor David Hirsch in 2017.[1]
Not to be confused with Polygon.
Construction
• Two adjacent edges of an even sided regular polygon are extended till they reach the polygon's axis of symmetry that is furthest from the edges' common vertex.
• By rotating the two resulting line segments around the polygon's axis of symmetry that passes through the common vertex, a right circular cone is created.
• Two planes are passed such that each one of them contains the normal to the polygon at its center point and one of the two distanced vertices of the two edges.
• The cone part that lies between the two planes is replicated ${\frac {n}{2}}-1$ times, where ${n}$ is the number of the polygon's edges. All ${\frac {n}{2}}$ parts are joined at their planar surfaces to create a spindle shaped object. It has ${n}$ curved edges which pass through alternating vertices of the polygon.
• The obtained object is cut in half at its plane of symmetry (the polygon's plane).
• The two identical halves are reunited after being rotated at an offset angle of ${\frac {2\pi }{n}}$.[1]
Edges and vertices
A polycon based on a regular polygon with ${n}$ edges has ${n+2}$ vertices, ${n}$ of which coincide with the polygon's vertices, with the remaining two lying at the extreme ends of the solid. It has ${n}$ edges, each one being half of the conic section created where the cone's surface intersects one of the two cutting planes. On each side of the polygonal cross-section, ${\frac {n}{2}}$ edges of the polycon run (from every second vertex of the polygon) to one of the solid's extreme ends. The edges on one side are offset by an angle of ${\frac {2\pi }{n}}$ from those on the other side. The edges of the sphericon (${n=4}$) are circular. The edges of the hexacon (${n=6}$) are parabolic. All other polycons' edges are hyperbolic.[1]
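These counts are easy to tabulate programmatically; the following short Python sketch (an illustration, not part of the article) collects them together with the edge type and the constant-height property described under Rolling properties below:

```python
def polycon_features(n):
    """Counts and edge type for the polycon built on a regular
    polygon with n edges (n even, n >= 4)."""
    assert n >= 4 and n % 2 == 0
    edge_type = {4: "circular", 6: "parabolic"}.get(n, "hyperbolic")
    return {
        "vertices": n + 2,            # n polygon vertices + 2 extreme ends
        "edges": n,                   # half conic sections
        "edges_per_side": n // 2,
        "edge_type": edge_type,
        "constant_height": (n // 2) % 2 == 1,  # n/2 odd (see below)
    }

print(polycon_features(4))   # the sphericon
print(polycon_features(6))   # the hexacon
```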
The sphericon as a polycon
The sphericon is the first member of the polycon family.[1] It is also a member of the poly-sphericon[4] and the convex hull of the two disc roller (TDR convex hull)[5][1] families. In each of the families, it is constructed differently. As a poly-sphericon, it is constructed by cutting a bicone with an apex angle of ${\frac {\pi }{2}}$ at its plane of symmetry and reuniting the two obtained parts after rotating them at an offset angle of ${\frac {\pi }{2}}$.[4] As a TDR convex hull it is the convex hull of two perpendicular 180° circular sectors joined at their centers.[5] As a polycon, the starting point is a cone created by rotating two adjacent edges of a square around its axis of symmetry that passes through their common vertex. In this specific case there is no need to extend the edges because their ends reach the square's other axis of symmetry. Since, in this specific case, the two cutting planes coincide with the plane of the cone's base, nothing is discarded and the cone remains intact. By creating another identical cone and joining the two cones together using their flat surfaces, a bicone is created. From here the construction continues in the same way described for the construction of the sphericon as a poly-sphericon. The only difference between the sphericon as a poly-sphericon and the sphericon as a polycon is that as a poly-sphericon it has four vertices and as a polycon it is considered to have six. The additional vertices are not noticeable because they are located in the middle of the circular edges, and merge with them completely.[1]
Rolling properties
The surface of each polycon is a single developable face. Thus the entire family has rolling properties that are related to the meander motion of the sphericon, as do some members of the poly-sphericon family. Because the polysphericons' surfaces consist of conical surfaces and various kinds of frustum surfaces (conical and/or cylindrical), their rolling properties change whenever each of the surfaces touches the rolling plane. This is not the case with the polycons. Because each one of them is made of only one kind of conical surface the rolling properties remain uniform throughout the entire rolling motion. The instantaneous motion of the polycon is identical to a cone rolling motion around one of its ${n}$ central vertices. The motion, as a whole, is a combination of these motions with each of the vertices serving in turn as an instant center of rotation around which the solid rotates during ${\frac {1}{n}}$ of the rotation cycle. Once another vertex comes into contact with the rolling surface it becomes the new temporary center of rotation, and the rotation vector flips to the opposite direction. The resulting overall motion is a meander that is linear on average. Each of the two extreme vertices touches the rolling plane, instantaneously, ${\frac {n}{2}}$ times in one rotation cycle. The instantaneous line of contact between the polycon and the surface it is rolling on is a segment of one of the generating lines of a cone, and everywhere along this line the tangent plane to the polycon is the same.[1]
When ${\frac {n}{2}}$ is an odd number this tangent plane is a constant distance from the tangent plane to the generating line on the polycon surface which is instantaneously uppermost. Thus the polycons, for ${\frac {n}{2}}$ odd, are constant height rollers (as is a right circular bicone, a cylinder or a prism with Reuleaux triangle cross-section). Polycons, for ${\frac {n}{2}}$ even, don't possess this feature.[1]
History
The sphericon was first introduced by David Hirsch in 1980[6] in a patent he named 'A Device for Generating a Meander Motion'.[7] The principle, according to which it was constructed, as described in the patent, is consistent with the principle according to which poly-sphericons are constructed. Only more than 25 years later, following Ian Stewart's article about the sphericon in Scientific American, was it realized both by members of the woodturning and mathematical communities that the same construction method could be generalized to a series of axial-symmetric objects that have regular polygon cross sections other than the square. The surfaces of the bodies obtained by this method (not including the sphericon itself) consist of one kind of conic surface, and one, or more, cylindrical or conical frustum surfaces. In 2017 Hirsch began exploring a different method of generalizing the sphericon, one that is based on a single surface without the use of frustum surfaces. The result of this research was the discovery of the polycon family. The new family was first introduced at the 2019 Bridges Conference in Linz, Austria, both at the art works gallery[6] and at the film festival.[8]
References
1. Hirsch, David (2020). "The Polycons: The Sphericon (or Tetracon) has Found its Family". Journal of Mathematics and the Arts. 14 (4): 345–359. arXiv:1901.10677. doi:10.1080/17513472.2020.1711651. S2CID 119152692.
2. "Polycons". h-it.de. Heidelberg Institute for Theoretical Studies.
3. Seaton, K. A. "Platonicons: The Platonic Solids Start Rolling". Tessellations Publishing.
4. "Polysphericons". h-its.org. Heidelberg Institute for Theoretical Studies.
5. Ucke, Christian. "The two-disc-roller — a combination of physics, art and mathematics" (PDF). Ucke.de.
6. "Mathematical Art Galleries". gallery.bridgesmathart.org.
7. David Haran Hirsch (1980): "Patent no. 59720: A device for generating a meander motion; Patent drawings; Patent application form; Patent claims"
8. "Mathematical Art Galleries". gallery.bridgesmathart.org.
Biased graph
In mathematics, a biased graph is a graph with a list of distinguished circles (edge sets of simple cycles), such that if two circles in the list are contained in a theta graph, then the third circle of the theta graph is also in the list. A biased graph is a generalization of the combinatorial essentials of a gain graph and in particular of a signed graph.
Formally, a biased graph Ω is a pair (G, B) where B is a linear class of circles; this by definition is a class of circles that satisfies the theta-graph property mentioned above.
A subgraph or edge set whose circles are all in B (and which contains no half-edges) is called balanced. For instance, a circle belonging to B is balanced and one that does not belong to B is unbalanced.
Biased graphs are interesting mostly because of their matroids, but also because of their connection with multiary quasigroups. See below.
Technical notes
A biased graph may have half-edges (one endpoint) and loose edges (no endpoints). The edges with two endpoints are of two kinds: a link has two distinct endpoints, while a loop has two coinciding endpoints.
Linear classes of circles are a special case of linear subclasses of circuits in a matroid.
Examples
• If every circle belongs to B, and there are no half-edges, Ω is balanced. A balanced biased graph is (for most purposes) essentially the same as an ordinary graph.
• If B is empty, Ω is called contrabalanced. Contrabalanced biased graphs are related to bicircular matroids.
• If B consists of the circles of even length, Ω is called antibalanced and is the biased graph obtained from an all-negative signed graph.
• The linear class B is additive, that is, closed under repeated symmetric difference (when the result is a circle), if and only if B is the class of positive circles of a signed graph.
• Ω may have an underlying graph that is a cycle of length n ≥ 3 with all edges doubled. Call this a biased 2Cn. Such biased graphs in which no digon (circle of length 2) is balanced lead to spikes and swirls (see Matroids, below).
• Some kinds of biased graph are obtained from gain graphs or are generalizations of special kinds of gain graph. The latter include biased expansion graphs, which generalize group expansion graphs.
Minors
A minor of a biased graph Ω = (G, B) is the result of any sequence of taking subgraphs and contracting edge sets. For biased graphs, as for graphs, it suffices to take a subgraph (which may be the whole graph) and then contract an edge set (which may be the empty set).
A subgraph of Ω consists of a subgraph H of the underlying graph G, with balanced circle class consisting of those balanced circles that are in H. The deletion of an edge set S, written Ω − S, is the subgraph with all vertices and all edges except those of S.
Contraction of Ω is relatively complicated. To contract one edge e, the procedure depends on the kind of edge e is. If e is a link, contract it in G. A circle C in the contraction G/e is balanced if either C or $C\cup e$ is a balanced circle of G. If e is a balanced loop or a loose edge, it is simply deleted. If it is an unbalanced loop or a half-edge, it and its vertex v are deleted; each other edge with v as an endpoint loses that endpoint, so a link with v as one endpoint becomes a half-edge at its other endpoint, while a loop or half-edge at v becomes a loose edge.
In the contraction Ω/S by an arbitrary edge set S, the edge set is E − S. (We let G = (V, E).) The vertex set is the class of vertex sets of balanced components of the subgraph (V, S) of Ω. That is, if (V, S) has balanced components with vertex sets V1, ..., Vk, then Ω/S has k vertices V1, ..., Vk . An edge e of Ω, not in S, becomes an edge of Ω/S and each endpoint vi of e in Ω that belongs to some Vi becomes the endpoint Vi of e in Ω/S ; thus, an endpoint of e that is not in a balanced component of (V, S) disappears. An edge with all endpoints in unbalanced components of (V, S) becomes a loose edge in the contraction. An edge with only one endpoint in a balanced component of (V, S) becomes a half-edge. An edge with two endpoints that belong to different balanced components becomes a link, and an edge with two endpoints that belong to the same balanced component becomes a loop.
Matroids
There are two kinds of matroid associated with a biased graph, both of which generalize the cycle matroid of a graph (Zaslavsky, 1991).
The frame matroid
The frame matroid (sometimes called bias matroid) of a biased graph, M(Ω), (Zaslavsky, 1989) has for its ground set the edge set E. An edge set is independent if each component contains either no circles or just one circle, which is unbalanced. (In matroid theory a half-edge acts like an unbalanced loop and a loose edge acts like a balanced loop.) M(Ω) is a frame matroid in the abstract sense, meaning that it is a submatroid of a matroid in which, for at least one basis, the set of lines generated by pairs of basis elements covers the whole matroid. Conversely, every abstract frame matroid is the frame matroid of some biased graph.
The circuits of the matroid are called frame circuits or bias circuits. There are four kinds. One is a balanced circle. Two other kinds are a pair of unbalanced circles together with a connecting simple path, such that the two circles are either disjoint (then the connecting path has one end in common with each circle and is otherwise disjoint from both) or share just a single common vertex (in this case the connecting path is that single vertex). The fourth kind of circuit is a theta graph in which every circle is unbalanced.
The rank of an edge set S is n − b, where n is the number of vertices of G and b is the number of balanced components of S, counting isolated vertices as balanced components.
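As an illustration (a sketch of my own, not from the sources cited here), this rank formula can be computed with networkx; the predicate is_balanced stands in for the bias information, which depends on the chosen linear class B:

```python
import networkx as nx

def frame_rank(G, S, is_balanced):
    """Rank of edge set S in the frame matroid M(Omega): n - b, where b
    counts balanced components of (V, S), isolated vertices included."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)          # keep isolated vertices
    H.add_edges_from(S)
    b = sum(is_balanced(H.subgraph(c)) for c in nx.connected_components(H))
    return G.number_of_nodes() - b

# Example: a contrabalanced biased graph (B empty), where a component is
# balanced exactly when it contains no circle, i.e. it is a tree.
G = nx.cycle_graph(4)
is_balanced = nx.is_forest
print(frame_rank(G, [(0, 1), (1, 2)], is_balanced))  # 4 - 2 = 2
print(frame_rank(G, list(G.edges), is_balanced))     # 4 - 0 = 4
```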
Minors of the frame matroid agree with minors of the biased graph; that is, M(Ω−S) = M(Ω)−S and M(Ω/S) = M(Ω)/S.
Frame matroids generalize the Dowling geometries associated with a group (Dowling, 1973). The frame matroid of a biased 2Cn (see Examples, above) which has no balanced digons is called a swirl. It is important in matroid structure theory.
The lift matroid
The extended lift matroid L0(Ω) has for its ground set the set E0, which is the union of E with an extra point e0. The lift matroid L(Ω) is the extended lift matroid restricted to E. The extra point acts exactly like an unbalanced loop or a half-edge, so we describe only the lift matroid.
An edge set is independent if it contains either no circles or just one circle, which is unbalanced.
A circuit is a balanced circle, a pair of unbalanced circles that are either disjoint or have just a common vertex, or a theta graph whose circles are all unbalanced.
The rank of an edge set S is n − c + ε, where c is the number of components of S, counting isolated vertices, and ε is 0 if S is balanced and 1 if it is not.
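Under the same conventions as the frame-rank sketch above (again my own illustration; is_balanced is the assumed balance predicate), the lift rank reads:

```python
import networkx as nx

def lift_rank(G, S, is_balanced):
    """Rank of S in the lift matroid L(Omega): n - c + eps, where c counts
    all components of (V, S) and eps is 0 iff every component is balanced."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(S)
    comps = list(nx.connected_components(H))
    eps = 0 if all(is_balanced(H.subgraph(c)) for c in comps) else 1
    return G.number_of_nodes() - len(comps) + eps
```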
Minors of the lift and extended lift matroids agree in part with minors of the biased graph. Deletions agree: L(Ω−S) = L(Ω)−S. Contractions agree only for balanced edge sets: L(Ω/S) = L(Ω)/S if S is balanced, but not if it is unbalanced. If S is unbalanced, L(Ω/S) = M(G)/S = M(G/S), where M of a graph denotes the ordinary graphic matroid.
The lift matroid of a 2Cn (see Examples, above) which has no balanced digons is called a spike. Spikes are quite important in matroid structure theory.
Multiary quasigroups
Just as a group expansion of a complete graph Kn encodes the group (see Dowling geometry), its combinatorial analog expanding a simple cycle of length n + 1 encodes an n-ary (multiary) quasigroup. It is possible to prove theorems about multiary quasigroups by means of biased graphs (Zaslavsky, 2012).
References
• T. A. Dowling (1973), A class of geometric lattices based on finite groups. Journal of Combinatorial Theory, Series B, Vol. 14, pp. 61–86.
• Thomas Zaslavsky (1989), Biased graphs. I. Bias, balance, and gains. Journal of Combinatorial Theory, Series B, Vol. 47, pp. 32–52.
• Thomas Zaslavsky (1991), Biased graphs. II. The three matroids. Journal of Combinatorial Theory, Series B, Vol. 51, pp. 46–72.
• Thomas Zaslavsky (1999). A mathematical bibliography of signed and gain graphs and allied areas. 1999 edition: Electronic Journal of Combinatorics, Dynamic Surveys in Combinatorics, #DS8, archived. Current edition: Electronic Journal of Combinatorics, Dynamic Surveys in Combinatorics, #DS8.
• Thomas Zaslavsky (2012), Associativity in multiary quasigroups: the way of biased expansions. Aequationes Mathematicae, Vol. 83, pp. 1–66.
\begin{document}
\title{Self-similar solutions with fat tails for Smoluchowski's coagulation equation with singular kernels}
\author{B. Niethammer \and S. Throm \and J.~J.~L.~Vel\'azquez}
\date{} \maketitle Institute of Applied Mathematics, University of Bonn, Endenicher Allee 60, 53115 Bonn, Germany
E-mail: \texttt{[email protected]}; \texttt{[email protected]}; \texttt{[email protected]}
\begin{abstract}
We show the existence of self-similar solutions with fat tails for Smoluchowski's coagulation equation for homogeneous kernels satisfying $C_1 \left(x^{-a}y^{b}+x^{b}y^{-a}\right)\leq K\left(x,y\right)\leq C_2\left(x^{-a}y^{b}+x^{b}y^{-a}\right)$ with $a>0$ and $b<1$. This covers especially the case of Smoluchowski's classical kernel $K(x,y)=(x^{1/3} + y^{1/3})(x^{-1/3} + y^{-1/3})$.
For the proof of existence we first consider some regularized kernel $K_{\varepsilon}$ for which we construct a sequence of solutions $h_{\varepsilon}$. In a second step we pass to the limit $\varepsilon\to 0$ to obtain a solution for the original kernel $K$. The main difficulty is to establish a uniform lower bound on $h_{\varepsilon}$. The basic idea for this is to consider the time-dependent problem and choosing a special test function that solves the dual problem.
\end{abstract}
\section{Introduction}
\subsection{Smoluchowski's equation and self-similarity}
Smoluchowski's coagulation equation \cite{Smolu16} describes irreversible aggregation of clusters through binary collisions by a mean-field model for the density $f(\xi,t)$ of clusters of mass $\xi$. It is assumed that the rate of coagulation of clusters of size $\xi$ and $\eta$ is given by a rate kernel $K=K(\xi,\eta)$, such that the evolution of $f$ is determined by \begin{equation} \partial_{t}f(\xi,t)=\frac{1}{2}\int_{0}^{\xi}K(\xi-\eta,\eta)f(\xi-\eta ,t)f(\eta,t)\mathrm{d}\eta-f(\xi,t)\int_{0}^{\infty}K(\xi,\eta)f(\eta,t)\mathrm{d}\eta\,. \label{smolu1} \end{equation} Applications in which this model has been used are numerous and include, for example, aerosol physics, polymerization, astrophysics and mathematical biology (see e.g. \cite{Aldous99,Drake72}).
A topic of particular interest in the theory of coagulation is the scaling hypothesis on the long-time behaviour of solutions to \eqref{smolu1}. Indeed, for homogeneous kernels one expects that solutions converge to a uniquely determined self-similar profile.
This issue is, however, only well understood for the solvable kernels $K(x,y)=2$, $K(x,y)=x+y$ and $K(x,y)=xy$. In these cases it is known that \eqref{smolu1} has one fast-decaying self-similar solution with finite mass and a family of so-called fat-tail self-similar solutions with power-law decay. Furthermore, their domains of attraction under the evolution \eqref{smolu1} have been completely characterized in \cite{MePe04}. For non-solvable kernels much less is known, and existing results apply exclusively to the case $\gamma<1$. In \cite{EMR05,FouLau05} existence of self-similar solutions with finite mass has been established for a large range of kernels and some properties of those solutions have been investigated in \cite{CanMisch11,EsMisch06,FouLau06a}. More recently, the first existence results of self-similar solutions with fat tails have been proved, first for the diagonal kernel \cite{NV11a}, then for kernels that are bounded by $C(x^{\gamma}+y^{\gamma})$ for $\gamma\in [0,1)$ \cite{NV12a}. It is the goal of this paper to extend the results in \cite{NV12a} to singular kernels, such as Smoluchowski's classical kernel $K(x,y)= (x^{1/3} + y^{1/3})(x^{-1/3} + y^{-1/3})$. Uniqueness of solutions, with both finite and infinite mass, is still one of the main problems for non-solvable kernels and in most cases an open question. Only recently uniqueness has been shown in the finite mass case for kernels that are in some sense close to the constant kernel \cite{NV14}.
In order to describe our results in more detail, we first derive the equation for self-similar solutions. Such solutions to \eqref{smolu1} for kernels of homogeneity $\gamma<1$ are of the form \begin{equation} f(\xi,t)=\frac{\beta}{t^{\alpha}}g\left(x\right)\,,\qquad \alpha=1+(1{+}\gamma)\beta\,, \qquad x=\frac{\xi}{t^{\beta}}\label{ss1} \end{equation} where the self-similar profile $g$ solves \begin{equation} -\frac{\alpha}{\beta} g- xg^{\prime}(x)=\frac{1}{2}\int_{0} ^{x}K(x-y,y)g(x-y)g(y)\mathrm{d}y-g(x)\int_{0}^{\infty}K(x,y)g(y)\mathrm{d}y\,. \label{ss2} \end{equation} It is known that for some kernels the self-similar profiles are singular at the origin, so that the integrals on the right-hand side are not finite and it is necessary to rewrite the equation in a weaker form. Multiplying the equation by $x$ and rearranging we obtain that a weak self-similar solution $g$ solves \begin{equation} \partial_{x}(x^{2}g(x))=\partial_{x}\left[\int_{0}^{x}\int_{x-y} ^{\infty}yK(y,z)g(z)g(y)\,\mathrm{d}z\,\mathrm{d}y\right]+\left ((1-\gamma)-\frac{1}{\beta}\right)xg(x)\, \label{ss3} \end{equation} in a distributional sense. If one in addition requires that the solution has finite first moment, then this also fixes $\beta=1/(1-\gamma)$ and in this case the second term on the right hand side of \eqref{ss3} vanishes.
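For the convenience of the reader we indicate the elementary scaling computation behind \eqref{ss1} and \eqref{ss2}; this is a routine verification and is not used elsewhere. Inserting the ansatz \eqref{ss1} into \eqref{smolu1} and using the homogeneity of $K$ gives, with $x=\xi t^{-\beta}$,
\begin{align*}
\partial_{t}f(\xi,t)&=-\frac{\beta}{t^{\alpha+1}}\left(\alpha\, g(x)+\beta\, x g'(x)\right),\\
\frac{1}{2}\int_{0}^{\xi}K(\xi-\eta,\eta)f(\xi-\eta,t)f(\eta,t)\,\mathrm{d}\eta&=\frac{\beta^{2}}{t^{2\alpha-(1+\gamma)\beta}}\cdot \frac{1}{2}\int_{0}^{x}K(x-y,y)g(x-y)g(y)\,\mathrm{d}y,
\end{align*}
and similarly for the loss term. Matching the powers of $t$ forces $\alpha+1=2\alpha-(1+\gamma)\beta$, that is $\alpha=1+(1+\gamma)\beta$, and dividing both sides by $\beta^{2}t^{-(\alpha+1)}$ yields \eqref{ss2}.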
For the following it is convenient to go over to the monomer density function $h\left( x,t\right) =xg\left( x,t\right) $ and to introduce the parameter $\rho=\gamma+\frac{1}{\beta}$. Then equation \eqref{ss3} becomes \begin{equation} \partial_{x}\left[ \int_{0}^{x}\int _{x-y}^{\infty}\frac{K\left( y,z\right) }{z}h\left( z\right) h\left( y\right) \,\mathrm{d}z\,\mathrm{d}y\right] -\left[ \partial_{x}\left( xh\right) +\left( \rho-1\right) h\right] \left( x\right) =0\,. \label{A2a} \end{equation} Our approach to find a solution to \eqref{A2a} requires to work with the corresponding evolution equation. Using as new time variable $\log\left( t\right) $ which will be denoted as $t$ from now on, the time dependent version of equation \eqref{A2a} becomes \begin{equation} \partial_{t}h\left( x,t\right) +\partial_{x}\left[ \int_{0}^{x}\int _{x-y}^{\infty}\frac{K\left( y,z\right) }{z}h\left( z,t\right) h\left( y,t\right) \,\mathrm{d}z\,\mathrm{d}y\right] -\left[ \partial_{x}\left( xh\right) +\left( \rho-1\right) h\right] \left( x,t\right) =0\,, \label{A2} \end{equation} with initial data \begin{equation} h(x,0)=h_{0}(x)\,. \label{A3} \end{equation}
\subsection{Assumptions on the kernel and main result}
We now formulate our assumptions on the kernel $K$. We assume that \begin{equation} K\in C^{1}((0,\infty)\times(0,\infty))\,,\qquad K(x,y)=K(y,x)\geq 0\qquad\mbox{ for all }x,y\in(0,\infty)\,, \label{Ass1a} \end{equation} $K$ is homogeneous of degree $\gamma\in(-\infty,1)$, that is \begin{equation} K(\lambda x,\lambda y)=\lambda^{\gamma}K(x,y)\qquad\mbox{ for all }x,y\in (0,\infty)\,, \label{Ass1b} \end{equation} and satisfies the growth condition \begin{equation} C_{1}\left (x^{-a}y^{b}+x^{b}y^{-a}\right)\leq K(x,y)\leq C_{2}\left (x^{-a} y^{b}+x^{b}y^{-a}\right)\,\qquad\mbox{ for all }x,y\in(0,\infty)\,, \label{Ass1} \end{equation} where $a>0$, $b<1$, $\gamma=b-a$, and $C_{1},C_{2}$ are positive constants. Furthermore we assume the following locally uniform bound on the partial derivative: for each interval $\left[d,D\right]\subset \left(0,\infty\right)$ there exists a constant $C_{3}=C_{3}\left(d,D\right)>0$ such that \begin{equation}\label{eq:Ass2}
\abs{\partial_{x}K\left(x,y\right)}\leq C_{3}\left(y^{-a}+y^{b}\right) \quad \text{for all } x\in \left[d,D\right] \text{ and } y\in\left(0,\infty\right). \end{equation}
Let us first discuss what we can expect on the possible decay behaviours of self-similar solutions. If $h(x) \sim C x^{-\rho}$ as $x \to\infty$, then in order for $\int_{1}^{\infty} \frac{K(x,y)}{y} h(y)\,dy < \infty$ we need \begin{equation} \label{Ass1c}\rho>b=\gamma+a \qquad\mbox{ and } \qquad\rho+ a>0\,. \end{equation}
Note that since $\gamma$ can be negative, $-a$ can be larger than $b$. Furthermore we need to assume that $b<1$ since for $b>1$ we could have instantaneous gelation and $b=1$ is a borderline case that can also not be treated with our methods. The same assumption has also been made in related work, where, for example in \cite{CanMisch11}, regularity of self-similar solutions with finite mass has been investigated. In addition it will turn out later that we have to assume $\rho>0$ (see Lemma~\ref{Lem:non:solv:limit}).
Our main result can now be formulated as follows
\begin{theorem} \label{T.main} Let $K$ be a kernel that satisfies assumptions \eqref{Ass1a}-\eqref{eq:Ass2} for some $b \in(-\infty,1)$ and $a>0$. Then for any $\rho \in (\max(-a,b,0),1) = (\max(b,0),1)$ there exists a non-negative measure $h\in \mathcal{M}([0,\infty))$ that solves \eqref{A2a} in the sense of distributions. This solution decays in the expected manner in an averaged sense, i.e. it satisfies $\int_{\left[0,R\right]}h\mathrm{d}x\leq R^{1-\rho}$ for all $R>0$ and for each $\delta>0$ there exists $R_{\delta}>0$ such that \begin{align*}
\left(1-\delta\right)R^{1-\rho}\leq \int_{\left[0,R\right]}h\mathrm{d}x \quad \text{for all } R\geq R_{\delta}, \end{align*} which together implies $\lim_{R\to \infty}\frac{1}{R^{1-\rho}}\int_{\left[0,R\right]}h\mathrm{d}x=1$. \end{theorem}
\begin{remark} One can in fact show that under the assumptions \eqref{Ass1a}-\eqref{eq:Ass2} the measure $h$ has a continuous density and satisfies $h(r) \sim (1-\rho) r^{-\rho}$ as $r \to \infty$. This has been proved in the case of locally bounded kernels in \cite{NV12a} and the proof in the present case proceeds similarly. Furthermore, if $K$ is more regular, one can also establish higher regularity of $h$ in $(0,\infty)$. In order to keep the present paper within a reasonable length we will give the corresponding proofs in a subsequent separate paper. \end{remark}
\subsection{Strategy of the proof}
The proof of Theorem~\ref{T.main} consists of two main parts. In the first one which is contained in Section~\ref{sec:sol:heps} we shift the singularities of the kernel by some $\varepsilon>0$ to get a kernel $K_{\varepsilon}$ that is bounded at the origin. The idea is then to prove Theorem \ref{T.main} with this modified kernel to get a solution $h_{\varepsilon}$. The proof follows the one in \cite{NV12a}, i.e. the existence of a stationary solution to \eqref{A2} is shown by using the following variant of Tikhonov's fixed point theorem.
\begin{theorem}[Theorem~1.2 in \cite{EMR05,GPV04}]\label{T.fixedpoint}
Let $X$ be a Banach space and $(S_t)_{t \geq 0}$ be a continuous semi-group on $X$. Assume that $S_t$ is weakly sequentially continuous for any $t>0$ and that there exists a subset ${\cal Y}$ of $X$ that is nonempty, convex, weakly sequentially compact and invariant under the action of $S_t$. Then there exists $z_0 \in {\cal Y}$ which is stationary under the action of $S_t$. \end{theorem}
As most of the estimates from \cite{NV12a} remain valid for the shifted kernel $K_{\varepsilon}$ we only state the main definitions and results and refer to \cite{NV12a} for the proofs. The only exception is the invariance of some lower bound defining the set $\mathcal{Y}$ from Theorem~\ref{T.fixedpoint}. As this step cannot just be transferred to the present situation we will give the full proof of this. The main idea here is to construct a special test function that solves the dual problem, for which one can derive some lower bounds that are sufficient to obtain the invariance (Section~\ref{Sec:dual:prob}).
In the second part which is contained in Section~\ref{sec:eps:to:zero} we have to remove the shift in $K_{\varepsilon}$, i.e we have to take the limit $\varepsilon\to 0$. The strategy here is similar to what is done in the first part as one of the main difficulties consists in showing a suitable lower bound (uniform in $\varepsilon$) for $h_{\varepsilon}$ (Section~\ref{subsec:iterate}). This will again be done by constructing a suitable test function by solving the dual problem for which we get adequate estimates from below (Section~\ref{subsec:test}). One difficulty then is to show that the functions $h_{\varepsilon}$ obtained before decay sufficiently rapidly at the origin as $\varepsilon\to 0$ (Section~\ref{subsec:exp:decay}). In fact we will get some exponential decay that will be enough to pass to the limit $\varepsilon\to 0$ (Section~\ref{sec:limit:eps}).
The proofs of the existence of the solutions to the dual problems as well as some basic properties and estimates frequently used are contained in the appendix.
\section{Stationary solutions for the kernel $K_{\varepsilon}$}\label{sec:sol:heps}
In this section we let $\varepsilon>0$ be fixed and consider the kernel \[
K_{\varepsilon}(y,z):=K(y+\varepsilon,z+\varepsilon). \] We prove the following Proposition:
\begin{proposition}\label{P.hepsexistence} For any $\rho\in(\max(b,0),1)$ there exists a continuous function $h_{\varepsilon} \colon (0,\infty) \to [0,\infty)$ that is a weak solution to \eqref{A2a} with $K$ replaced by $K_{\varepsilon}$. This solution satisfies \begin{align*}
\int_{0}^r h_{\varepsilon}(x)\,dx&\leq r^{1-\rho} \qquad \mbox{ and } \qquad \lim_{r \to \infty} \frac{\int_0^r h_{\varepsilon}(x)\,dx}{r^{1-\rho}}=1\,. \end{align*} \end{proposition}
\subsection{Plan of the construction for $h_{\varepsilon}$}
The proof of Proposition~\ref{P.hepsexistence} follows closely the proof of Theorem~1.1 in \cite{NV12a}. As the estimates remain in principle the same here we just recall the strategy of the proof and state the main definitions and results while for proofs we refer to \cite{NV12a}. The only modification we have to establish, compared to \cite{NV12a}, is the proof of the invariance of some lower bound that cannot just easily be adapted and we will show this in Section~\ref{S.le}.
The strategy to find a solution to~\eqref{A2a} (with $K$ replaced by $K_{\varepsilon}$) will be to show that the evolution given by \eqref{A2} satisfies the assumptions of Theorem \ref{T.fixedpoint} (notice that it suffices that the respective properties hold in a possibly small time interval $[0,T]$). One key point in the application of this theorem is obviously an appropriate choice of $X$ and ${\cal Y}$. Here we use for $X$ the set of measures on $[0,\infty)$ and as ${\cal Y}$ the set of non-negative measures which satisfy the expected decay behaviour in an averaged sense (cf. Definition \ref{Def:InvSet}). As well-posedness of~\eqref{A2} is not so easy to show for $K_{\varepsilon}$ directly we introduce a regularized problem where we cut the kernel $K_{\varepsilon}$ (in a smooth way) for small and large cluster sizes in the following way: for $\lambda>0$ we consider \begin{equation}\label{eq:A1}
\begin{aligned}
K_{\varepsilon}^{\lambda}\left(x,y\right)&=K_{\varepsilon}\left(x,y\right), &&\text{ if } \lambda \leq \min\left\{x,y\right\} \text{ and } \max\left\{x,y\right\} \leq \frac{1}{\lambda},\\
K_{\varepsilon}^{\lambda}\left(x,y\right)&=0, &&\text{ if } \min\left\{x,y\right\}\leq\frac{\lambda}{2} \text{ and } \max\left\{x,y\right\}\geq \frac{3}{2\lambda},\\
K_{\varepsilon}^{\lambda}&\leq K_{\varepsilon}.
\end{aligned} \end{equation} \begin{remark}
Note that in \cite{NV12a} a slightly different cutoff was used but this does not cause any problem. \end{remark} In the rest of this section we assume that in all equations the kernel $K$ is replaced by $K_{\varepsilon}^{\lambda}$ (or later by $K_{\varepsilon}$ when we take the limit $\lambda\to 0$).
We are now going to prove the well-posedness of~\eqref{A2} for the kernel $K_{\varepsilon}^{\lambda}$ with $\lambda>0$. We will consider the set of non-negative Radon measures that we will denote with some abuse of notation by $h(x)\,\mbox{d}x$ and such that the norm defined in \eqref{eq:S1E3} is finite. We notice that this implies that $h\mbox{d}x$ does not contain a Dirac at the origin. Since, however, $h\mbox{d}x$ might contain Dirac measures away from the origin, we use the convention that integrals such as $\int_a^b h(x)\mbox{d}x$ are always understood in the sense $\int_{[a,b]}h(x)\mbox{d}x$.
\begin{definition}
Given $\rho\in \left( \max\left\{0,b\right\},1 \right)$ with $b$ as in Assumption~\eqref{Ass1}, we will denote as $\mathcal{X}_{\rho}$ the set of measures $h\in \mathcal{M}_{+}\left( \left[0,\infty\right) \right)$ such that \begin{align}\label{eq:S1E3}
\norm{h}:=\sup_{R\geq 0}\frac{\int_{\left[0,R\right]}h\left( x \right)\mathrm{d}x}{R^{1-\rho}}<\infty\,. \end{align}
\end{definition}
We introduce a suitable topology in $\mathcal{X}_{\rho}$. We define the neighbourhoods of $h_{*}\in \mathcal{X}_{\rho}$ by means of the intersections of sets of the form \begin{align}\label{eq:S2E6}
\mathcal{N}_{\phi,\epsilon}:=\left\{ h\in \mathcal{X}_{\rho}\colon \abs{\int_{\left[ 0,\infty \right)}\left( h-h_{*} \right)\phi \mathrm{d}x}<\epsilon \right\}, \quad \phi\in C_{c}\left( \left[ 0, \infty \right) \right), \quad \epsilon>0. \end{align}
We now define the subset $\mathcal{Y}$, for which we will show that it remains invariant under the evolution defined by \eqref{A2}. \begin{definition}\label{Def:InvSet} Given $R_0>0$ and $\delta>0$ we will denote by $\mathcal{Y}$ the family of measures $h\in \mathcal{X}_{\rho}$ satisfying the following inequalities \begin{align} \int_{\left[0,r\right]}h\mathrm{d}x&\leq r^{1-\rho}, \quad \text{for all }r\geq 0 \label{eq:F1}\\ \int_{\left[0,r\right]}h\mathrm{d}x&\geq r^{1-\rho}\left( 1-\frac{R_0^{\delta}}{r^{\delta}} \right)_{+} \quad \text{for all }r> 0. \label{eq:F2} \end{align} \end{definition}
\begin{remark} We will not make the dependence of $\mathcal{Y}$ on the variables $R_0$ and $\delta$ explicit. \end{remark}
We then easily see that
\begin{lemma}\label{Prop:weakcontin} The sets $\mathcal{Y}\subset \mathcal{X}_{\rho}$ defined in Definition~\ref{Def:InvSet} are weakly sequentially compact. \end{lemma}
\subsection{Well-posedness of the evolution equation }\label{S.wp}
We first have to make sure that the evolution equation \eqref{A2}-\eqref{A3} with $K$ replaced by $K_{\varepsilon}^{\lambda}$ as in \eqref{eq:A1} is well-posed. We are going to construct first a mild solution of \eqref{A2}. For that purpose we introduce the rescaling \begin{align}\label{eq:S1E7}
X=x\mathrm{e}^t,\quad h\left( x,t \right)=H\left(X,t \right) \end{align} and get \begin{align}
\partial_t H\left( X,t \right)-\rho H\left(X,t\right)+\partial_{X}\left[ \int_{0}^{X}\int_{X-Y}^{\infty}\frac{K_{\lambda}^{\varepsilon}\left( Y\mathrm{e}^{-t},Z\mathrm{e}^{-t} \right)}{Z}H\left( Z,t \right)H\left( Y,t \right)\mathrm{d}Z\mathrm{d}Y \right]&=0\label{eq:S1E8}\\ H\left( X,0 \right)&=h_{0}\left( X \right).\label{eq:S1E8a} \end{align} Expanding the derivative $\partial_{X}$ we find that \eqref{eq:S1E8} is equivalent to \begin{align}\label{eq:S1E9}
\partial_{t} H\left( X,t \right)+\left( \mathcal{A}\left[H\right]\left( X,t \right) \right)H\left( X,t \right)-\mathcal{Q}\left[ H \right] \left( X,t \right)=0 \end{align} with \begin{align*}
\mathcal{A}\left[ H \right]\left( X,t \right)&:=\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left( X\mathrm{e}^{-t},Y\mathrm{e}^{-t} \right)}{Y}H\left( Y,t \right)\mathrm{d}Y-\rho\\ \mathcal{Q}\left[ H \right] \left( X,t \right)&:= \int_{0}^{X} \frac{K_{\varepsilon}^{\lambda}\left( Y\mathrm{e}^{-t},\left(X-Y\right)\mathrm{e}^{-t}\right)}{X-Y}H\left(X-Y,t\right)H\left(Y,t\right)\mathrm{d}Y\,. \end{align*}
\begin{definition}\label{Def:mild}
We will say that a function $H\in C\left( \left[ 0, T\right], \mathcal{X}_{\rho} \right)$ is a mild solution of equation~\eqref{eq:S1E9} if the following identity holds in the sense of measures \begin{align}\label{eq:S2E4}
H\left( \cdot, t \right)=\T{H}\qquad \text{for } 0\leq t\leq T, \end{align} where \begin{equation}\label{eq:S2E5}
\begin{split}
\T{H}\left( X,t \right)&=\exp\left( -\int_{0}^{t}\mathcal{A}\left[H\right]\left( X,s \right)\mathrm{d}s \right)h_{0}\left( X \right)\\ &\quad +\int_{0}^{t}\exp\left( -\int_{s}^{t}\mathcal{A}\left[H\right]\left( X,\tau \right)\mathrm{d}\tau \right)\mathcal{Q}\left[ H \right]\left( X,s \right)\mathrm{d}s.
\end{split} \end{equation} \end{definition}
For this notion of solutions we have the following existence result that follows by the contraction mapping principle.
\begin{theorem}\label{Thm:calexistence}
Let $K$ satisfy Assumptions~\eqref{Ass1a}-\eqref{Ass1} and let $K_{\varepsilon}^{\lambda}$ be as in \eqref{eq:A1} for $\lambda>0$. Then there exists $T>0$ such that there exists a unique mild solution of \eqref{eq:S1E9} in $\left( 0, T \right)$ in the sense of Definition~\ref{Def:mild}. \end{theorem}
We furthermore introduce weak solutions in the following sense:
\begin{definition}\label{Def:weak}
We say that $h\in C \left( \left[ 0,T \right],\mathcal{X}_{\rho} \right)$ is a weak solution of \eqref{A2}, \eqref{A3} if for any $t\in \left[0,T\right]$ and any test function $\psi\in C_{c}^{1}\left( \left[0,\infty\right)\times \left[0,t\right] \right)$ we have \begin{equation}\label{eq:S1E5}
\begin{split}
&\int_{0}^{\infty}h \left( x,t \right)\psi \left( x,t \right)\mathrm{d}x-\int_{0}^{\infty}h_0 \left( x \right)\psi \left( x,0 \right)\mathrm{d}x-\int_{0}^{t}\left[\int_{0}^{\infty} \partial_s\psi \left( x,s \right)h \left( x,s \right)\mathrm{d}x \right]\mathrm{d}s\\ &+\int_{0}^{t}\left[ \int_{0}^{\infty}\psi \left( x,s \right)\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda} \left( x,z \right)}{z}h \left( z,s \right)\mathrm{d}z h \left( x,s \right)\mathrm{d}x \right]\mathrm{d}s\\ &-\int_{0}^{t}\left[ \int_{0}^{\infty}\psi \left( x,s \right)\int_{0}^{x}\frac{K_{\varepsilon}^{\lambda} \left( y,x-y \right)}{x-y}h \left( x-y,s \right)h \left( y,s \right)\mathrm{d}y\mathrm{d}x \right]\mathrm{d}s\\ &+\int_{0}^{t}\int_{0}^{\infty}xh \left( x,s \right)\partial_x \psi \left( x,s \right)\mathrm{d}x\mathrm{d}s-\left(\rho-1\right)\int_{0}^{t}\int_{0}^{\infty}h \left( x,s \right)\psi \left( x,s \right)\mathrm{d}x\mathrm{d}s=0.
\end{split} \end{equation} \end{definition}
By changing variables one obtains the following lemma.
\begin{lemma}\label{Lem:weakSol}
Suppose $H\in C\left( \left[0,T\right],\mathcal{X}_{\rho} \right)$ is a mild solution of \eqref{A2},\eqref{A3} in the sense of Definition~\ref{Def:mild}. Then the corresponding function $h$, obtained from $H$ via the rescaling \eqref{eq:S1E7}, is a weak solution in the sense of Definition~\ref{Def:weak}. \end{lemma}
\subsection{Weak continuity of the evolution semi-group}\label{S.wc}
We denote the value of the mild solution $h$ of \eqref{A2}-\eqref{A3} obtained in Theorem~\ref{Thm:calexistence} with initial data $h_0$ as \begin{align}\label{eq:S3E8} h\left( x,t \right)=S_{\varepsilon}^{\lambda}\left(t\right)h_0\left(x\right),\quad t>0. \end{align}
Note that $S_{\varepsilon}^{\lambda}\left(t\right)$ defines a mapping from $\mathcal{X}_{\rho}$ to itself. We also define the transformation that brings $h_0\left(x\right)$ to $H\left(X,t\right)$ which solves \eqref{eq:S1E8}, \eqref{eq:S1E8a} and whose existence is given by Theorem~\ref{Thm:calexistence}. We will write \begin{align}\label{eq:S3E8a} H\left(X,t\right)=T_{\varepsilon}^{\lambda}\left(t\right)h_0,\quad t>0. \end{align}
With this notation we can state the following Proposition giving the weak continuity of the evolution semi-group:
\begin{proposition}\label{Prop:ContSemi} The transformation $S_{\varepsilon}^{\lambda}\left(t\right)$ defined by means of \eqref{eq:S3E8} for any $t\in\left[0,T\right]$ is a continuous map from $\mathcal{X}_{\rho}$ into itself if $\mathcal{X}_{\rho}$ is endowed with the topology defined by means of the functionals \eqref{eq:S2E6}. \end{proposition}
\begin{remark} The continuity that we obtain is not uniform in $\lambda$. \end{remark}
The idea to prove this is to use a special test function in the definition of weak solutions. As the transformation \eqref{eq:S1E7} is continuous in the weak topology it suffices to show that $T_{\varepsilon}^{\lambda}$ is continuous. Since it is not linear it is not enough to check continuity at $h_{0}=0$. More precisely fix $t\in\left[0,T\right]$ and consider a test function $\bar{\Psi}\left(X\right)$ with $\bar{\Psi}\in C_{c}\left(\left[0,\infty\right)\right)$. Suppose that we have $H_{1},H_{2}$ such that $T_{\varepsilon}^{\lambda}h_{0,i}=H_{i}\left(\cdot,t\right)$ for $i=1,2$. Using the definition of weak solutions and taking the difference of the corresponding equations we obtain after some manipulations: \begin{equation*}
\begin{split}
\int_{0}^{\infty}&\left( H_1\left(X,t\right)-H_2\left(X,t\right) \right)\Psi\left(X,t\right)\mathrm{d}X-\int_{0}^{\infty}\left( h_{0,1}\left(X\right)-h_{0,2}\left(X\right) \right)\Psi\left(X,0\right)\mathrm{d}X\\
&=\int_{0}^{t}\int_{0}^{\infty}\left( H_1\left(X,s\right)-H_2\left(X,s\right) \right)\left[\partial_{s}\Psi-\mathcal{T}\left[\Psi\right]\right]\left(X,s\right)\mathrm{d}X\mathrm{d}s
\end{split} \end{equation*} Thus choosing the test function $\Psi$ in some suitable function space such that $\partial_{s}\Psi-\mathcal{T}\left[\Psi\right]=0$ and $\Psi\left(\cdot,t\right)=\bar{\Psi}$ the claim follows (for more details see~\cite[Proposition~2.8]{NV12a}).
\subsection{Recovering the upper estimate - Conservation of \eqref{eq:F1}}\label{S.ue}
\begin{proposition}\label{Prop:upper:bound:large} Suppose that $h_0\in\mathcal{X}_{\rho}$ satisfies \eqref{eq:F1}. Let $h\left(x,t\right)$ be as in \eqref{eq:S3E8}. Then $h\left(\cdot, t\right)$ satisfies \eqref{eq:F1} as well. \end{proposition}
This follows in the same way as in \cite[Proposition~3.1]{NV12a} by integrating \eqref{eq:S1E8} and changing variables.
\subsection{Recovering the lower estimate - Conservation of \eqref{eq:F2}} \label{S.le}
The invariance of the lower bound will be shown in several steps. First we choose a special test function in the definition of weak solutions, more precisely the solution of the corresponding dual problem.
Next we derive suitable lower bounds and integral estimates for this function. Finally we show the invariance of \eqref{eq:F2}.
\subsubsection{The dual problem}\label{Sec:dual:prob}
In Definition~\ref{Def:weak} of weak solutions we choose $\psi$ such that \begin{equation}\label{eq:sptest1}
\begin{split}
&\quad -\int_{0}^{t}\int_{\left[0,\infty\right)}\partial_s\psi\left(x,s\right)h\left(x,s\right)\mathrm{d}x\mathrm{d}s\\
&-\int_{0}^{t}\int_{\left[0,\infty\right)}\psi\left(x,s\right)\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left(x,z\right)}{z}h\left(z,s\right)\mathrm{d}z h\left(x,s\right)\mathrm{d}x\mathrm{d}s\\
&-\int_{0}^{t}\int_{\left[0,\infty\right)}\psi\left(x,s\right)\int_{0}^{x}\frac{K_{\varepsilon}^{\lambda}\left(y,x-y\right)}{x-y}h\left(x-y,s\right)h\left(y,s\right)\mathrm{d}y\mathrm{d}x\mathrm{d}s\\
&+\int_{0}^{t}\int_{\left[0,\infty\right)}x h\left(x,s\right)\partial_x \psi\left(x,s\right)\mathrm{d}x\mathrm{d}s - \left(\rho-1\right)\int_{0}^{t}\int_{\left[0,\infty\right)}h\left(x,s\right)\psi\left(x,s\right)\mathrm{d}x\mathrm{d}s\leq0, \end{split} \end{equation} and thus obtain \begin{align}\label{eq:sptest2} \int_{\left[0,\infty\right)}h\left(x,t\right)\psi\left(x,t\right)\mathrm{d}x-\int_{\left[0,\infty\right)}h_{0}\left(x\right)\psi \left(x,0\right)\mathrm{d}x\geq0. \end{align}
After some rearrangement we find that \eqref{eq:sptest1} is satisfied if $\psi$ solves the dual problem \begin{align}\label{eq:sptest3}
\partial_{s}\psi\left(x,s\right)+\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left(x,z\right)}{z}h\left(z,s\right)\left[ \psi\left(x+z,s\right)-\psi\left(x,s\right) \right]\mathrm{d}z-x\partial_{x}\psi\left(x,s\right)-\left(1-\rho\right)\psi\left(x,s\right)\geq 0 \end{align} together with some suitable initial condition $\psi\left(x,t\right)=\psi_{0}\left(x\right)$.
With regard to \eqref{eq:sptest2}, the idea to estimate $\int_{0}^{R}h\left(x,t\right)\mathrm{d}x$ is to take $\psi_{0}$ as a smoothed version of $\chi_{\left(-\infty,R\right]}$ and to estimate $\psi\left(\cdot,0\right)$ from below. Using then that \eqref{eq:F2} holds for $h_{0}$ this will be enough to show that this estimate is conserved under the action of $S_{\varepsilon}^{\lambda}$. Rescaling $X:=x\mathrm{e}^{s-t}$ and $\psi\left(x,s\right)=\mathrm{e}^{-\left(1-\rho\right)\left(t-s\right)}\Phi\left(x\mathrm{e}^{s-t},s\right)$ we find after some elementary computations that equation \eqref{eq:sptest3} together with the initial condition is equivalent to \begin{equation}\label{eq:sptest4}
\begin{split}
\partial_{s}\Phi\left(X,s\right)+\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left(X\mathrm{e}^{t-s},Z\mathrm{e}^{t-s}\right)}{Z}h\left(Z\mathrm{e}^{t-s},s\right)\left[ \Phi\left(X+Z,s\right)-\Phi\left(X,s\right) \right]\mathrm{d}Z&\geq0\\
\Phi\left(X,t\right)&=\psi_{0}\left(X\right).
\end{split} \end{equation} This is (the rescaled version of) the dual problem.
\begin{remark}
Note that for applying Theorem~\ref{T.fixedpoint} we only need the assumptions on a small interval $\left[0,T\right]$. So we may assume in the following that $T<1$ is sufficiently small. \end{remark}
\subsubsection{Construction of a solution to the dual problem}\label{Sec:constr:dual:prop}
In the following we always assume without loss of generality that $\varepsilon\leq 1$ and $R_{0}\geq 1$ (and thus we may also assume $R\geq 1$). For $0<\kappa<1$ we furthermore denote by $\varphi_{\kappa}$ a non-negative, symmetric standard mollifier such that $\supp\varphi_{\kappa}\subset \left[-\kappa,\kappa\right]$.
The idea to construct a solution $\Phi$ to the dual problem is to replace the solution $h$ and the integral kernel $K_{\varepsilon}^{\lambda}$ in \eqref{eq:sptest4} by corresponding power laws (using \eqref{Ass1}) multiplied by a sufficiently large constant and estimating the powers of $X$ using $X\in\left[0,R\right]$. We therefore choose $\Phi$ as a solution to \begin{equation}\label{eq:subsolution:2}
\begin{split} \partial_{s}\Phi\left(X,s\right)&+C_{0}\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}\frac{\tilde{v}_{1}\left(Z\right)}{Z}\left[\Phi\left(X+Z\right)-\Phi\left(X\right)\right]\mathrm{d}Z\\ &+C_{0}\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{\tilde{v}_{2}\left(Z\right)}{Z}\left[\Phi\left(X+Z\right)-\Phi\left(X\right)\right]\mathrm{d}Z=0
\end{split} \end{equation} with initial condition $\Phi\left(X,t\right)=\chi_{\left(-\infty,R-\kappa\right]}\ast^{2}\varphi_{\kappa/2}\left(X\right)$ and $C_{0}>0$ to be fixed later and $\tilde{v}_{i}\left(Z\right):=Z^{-\omega_{i}}$ for $i=1,2$, with $\omega_{1}=\min\left\{\rho-b,\rho\right\}$ and $\omega_{2}=\rho$. Here $\ast^{n}$ denotes the $n$-fold convolution.
\begin{lemma}\label{Lem:ex:Phi:test}
There exists a solution $\Phi$ of \eqref{eq:subsolution:2} in $C^{1}\left(\left[0,t\right],C^{\infty}\left(\mathbb{R}\right)\right)$. \end{lemma}
\begin{proof} This is shown in Proposition~\ref{Prop:ex:dual:sum}. \end{proof} We furthermore define $G\left(X,s\right):=-\partial_{X}\Phi\left(X,s\right)$ and $\tilde{G}$ by $G\left(X,s\right)=\frac{1}{R}\tilde{G}\left(\frac{X}{R}-1+\frac{\kappa}{R},s\right)$. Then $\tilde{G}$ solves \begin{equation}\label{eq:subsolution:3}
\begin{split}
\partial_{s}\tilde{G}\left(\xi,s\right)&+C_{0}\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}\frac{\tilde{v}_{1}\left(R\eta\right)}{\eta}\left[\tilde{G}\left(\xi+\eta\right)-\tilde{G}\left(\xi\right)\right]\mathrm{d}\eta\\
&+C_{0}\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{\tilde{v}_{2}\left(R\eta\right)}{\eta}\left[\tilde{G}\left(\xi+\eta\right)-\tilde{G}\left(\xi\right)\right]\mathrm{d}\eta=0
\end{split} \end{equation} with initial datum $\tilde{G}\left(\cdot,t\right)=\delta\left(\cdot\right)\ast^{2}\varphi_{\frac{\kappa}{2R}}$. We also summarize the following properties for $\Phi$ and $\tilde{G}$ given by Remark~\ref{Rem:properies} and the choice of the initial condition:
\begin{remark}\label{Rem:properties:der}
The function $\Phi\left(\cdot,s\right)$ given by Lemma~\ref{Lem:ex:Phi:test} is non-increasing for all $s\in\left[0,t\right]$ and satisfies:
\begin{equation*}
\begin{split}
0\leq \Phi\left(\cdot,s\right)\leq 1, \quad \supp\Phi\left(\cdot,s\right)\subset \left(-\infty,R\right] \quad \text{for all } s\in \left[0,t\right] \quad \text{and} \quad \Phi\left(X,t\right)=1 \text{ for all } X\in \left(-\infty,R-2\kappa\right].
\end{split}
\end{equation*}
Furthermore $\tilde{G}$ is non-negative and satisfies
\begin{equation*}
\begin{split}
\supp \tilde{G}\left(\cdot,s\right)\subset \left(-\infty,\kappa/R\right]\quad \text{and} \quad \int_{\mathbb{R}}\tilde{G}\left(\xi,s\right)\mathrm{d}\xi=1 \quad \text{for all } s\in\left[0,t\right].
\end{split}
\end{equation*} \end{remark}
The following lemma states the two integral bounds that are the key to proving the invariance of~\eqref{eq:F2}:
\begin{lemma}\label{Lem:subsolution:2:integral:estimate} There exist $\omega,\theta\in\left(0,1\right)$ such that for every $\mu\in\left(0,1\right)$ and $D>0$ we have \begin{enumerate}
\item $\int_{-\infty}^{-D}\tilde{G}\left(\xi,s\right)\mathrm{d}\xi\leq C\left(\frac{\kappa}{R D}\right)^{\mu}+\frac{Ct}{D^{\omega}}R^{-\theta}$,
\item $\int_{-1}^{0}\abs{\xi}\tilde{G}\left(\xi,s\right)\mathrm{d}\xi\leq C\left(\frac{\kappa}{R}\right)^{\mu}+CtR^{-\theta}$. \end{enumerate} \end{lemma}
\begin{proof}[Proof of Lemma~\ref{Lem:subsolution:2:integral:estimate}] From the proof of Proposition~\ref{Prop:ex:dual:sum} we have that $\tilde{G}$ is given as $\tilde{G}=\tilde{G}_{1}\ast\tilde{G}_{2}$ where $\tilde{G}_{1}$ and $\tilde{G}_{2}$ solve
\begin{align*}
\partial_{s}\tilde{G}_{1}\left(\xi,s\right)+C_{0}\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}\frac{\tilde{v}_{1}\left(R\eta\right)}{\eta}\left[\tilde{G}_{1}\left(\xi+\eta\right)-\tilde{G}_{1}\left(\xi\right)\right]\mathrm{d}\eta&=0\\
\partial_{s}\tilde{G}_{2}\left(\xi,s\right)+C_{0}\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{\tilde{v}_{2}\left(R\eta\right)}{\eta}\left[\tilde{G}_{2}\left(\xi+\eta\right)-\tilde{G}_{2}\left(\xi\right)\right]\mathrm{d}\eta&=0
\end{align*} with initial data $\tilde{G}_{1}\left(\cdot,t\right)=\tilde{G}_{2}\left(\cdot,t\right)=\varphi_{\frac{\kappa}{2R}}$. Then one has from Lemma~\ref{Lem:int:est:conolution}: \begin{equation*}
\begin{split}
\int_{-\infty}^{-D}\tilde{G}\left(\xi,s\right)\mathrm{d}\xi\leq \int_{-\infty}^{-D/2}\tilde{G}_{1}\left(\xi,s\right)\mathrm{d}\xi+\int_{-\infty}^{-D/2}\tilde{G}_{2}\left(\xi,s\right)\mathrm{d}\xi
\end{split} \end{equation*} and \begin{equation*}
\begin{split}
\int_{-1}^{0}\abs{\xi}\tilde{G}\left(\xi,s\right)\mathrm{d}\xi&\leq \int_{-1-\frac{\kappa}{2R}}^{\frac{\kappa}{2R}}\abs{\xi}\tilde{G}_{1}\left(\xi,s\right)\mathrm{d}\xi+\int_{-1-\frac{\kappa}{2R}}^{\frac{\kappa}{2R}}\abs{\xi}\tilde{G}_{2}\left(\xi,s\right)\mathrm{d}\xi\\
&\leq \frac{\kappa}{R}+\int_{-2}^{0}\abs{\xi}\tilde{G}_{1}\left(\xi,s\right)\mathrm{d}\xi+\int_{-2}^{0}\abs{\xi}\tilde{G}_{2}\left(\xi,s\right)\mathrm{d}\xi.
\end{split} \end{equation*} From Lemma~\ref{Lem:der:int:est}, applied with \begin{equation*}
\begin{split}
N_{1}\left(\eta\right)= C\left(\varepsilon\right)R^{-\omega_{1}}\eta^{-1-\omega_{1}},\quad N_{2}\left(\eta\right)= C\left(\varepsilon\right)\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]R^{-\omega_{2}}\eta^{-1-\omega_{2}}
\end{split} \end{equation*} we obtain the corresponding estimates for $\tilde{G}_{i}$, $i=1,2$, with exponents $\omega_{i}$ and $\theta_{1}=\omega_{1}$, $\theta_{2}=\min\left\{\omega_{2},\omega_{2}-b\right\}$. Thus the claim follows by setting $\theta:=\min\left\{\omega_{1},\omega_{2},\omega_{2}-b\right\}$ and $\omega:=\min\left\{\omega_{1},\omega_{2}\right\}$, using also $\frac{\kappa}{R}\leq \left(\frac{\kappa}{R}\right)^{\mu}$ (valid for $\kappa\leq 1$ and $R\geq 1$) for the second statement. \end{proof}
\begin{lemma}\label{Lem:subsolution:2}
For sufficiently large $C_{0}$ the function $\Phi$ satisfies \eqref{eq:sptest4}. \end{lemma}
\begin{proof} We have to show that for $X\in\left[0,R\right]$ we have \begin{equation*}
\begin{split}
\partial_{s}\Phi\left(X,s\right)+\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left(X\mathrm{e}^{t-s},Z\mathrm{e}^{t-s}\right)}{Z}\mathrm{e}^{\frac{s}{\beta}}h\left(Z\mathrm{e}^{t-s},s\right)\left[\Phi\left(X+Z,s\right)-\Phi\left(X,s\right)\right]\mathrm{d}Z\geq 0.
\end{split} \end{equation*} By construction of $\Phi$ this is equivalent to \begin{equation*}
\begin{split}
&-C_{0}\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}\frac{\tilde{v}_{1}\left(Z\right)}{Z}\left[\Phi\left(X+Z\right)-\Phi\left(X\right)\right]\mathrm{d}Z\\
&-C_{0}\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{\tilde{v}_{2}\left(Z\right)}{Z}\left[\Phi\left(X+Z\right)-\Phi\left(X\right)\right]\mathrm{d}Z\\
&+\int_{0}^{\infty}\frac{K_{\varepsilon}^{\lambda}\left(X\mathrm{e}^{t-s},Z\mathrm{e}^{t-s}\right)}{Z}\mathrm{e}^{\frac{s}{\beta}}h\left(Z\mathrm{e}^{t-s}\right)\left[\Phi\left(X+Z\right)-\Phi\left(X\right)\right]\mathrm{d}Z\geq 0.
\end{split} \end{equation*} Estimating $K_{\varepsilon}^{\lambda}$ for $X\in\left[0,R\right]$ one has \begin{equation*}
\begin{split}
&\quad \frac{K_{\varepsilon}^{\lambda}\left(X\mathrm{e}^{t-s},Z\mathrm{e}^{t-s}\right)}{Z}\mathrm{e}^{\frac{s}{\beta}}\leq C\frac{\left(X\mathrm{e}^{t-s}+\varepsilon\right)^{-a}\left(Z\mathrm{e}^{t-s}+\varepsilon\right)^{b}+\left(X\mathrm{e}^{t-s}+\varepsilon\right)^{b}\left(Z\mathrm{e}^{t-s}+\varepsilon\right)^{-a}}{Z}\\
&\leq C\varepsilon^{-a}\frac{\left(Z\mathrm{e}^{t-s}+\varepsilon\right)^{b}}{Z}+C\max\left\{\varepsilon^{b},R^{b}\right\}\frac{\left(Z\mathrm{e}^{t-s}+\varepsilon\right)^{-a}}{Z}\\
&\leq C\varepsilon^{-a}\max\left\{1,\varepsilon^{b}\right\}\left(Z^{-1}\chi_{\left[0,1\right]}\left(Z\right)+Z^{b-1}\chi_{\left[1,\infty\right)}\left(Z\right)\right)+C\max\left\{\varepsilon^{b},R^{b}\right\}\varepsilon^{-a}Z^{-1}\\
&\leq C\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]Z^{-1}+C\varepsilon^{-a}\max\left\{1,\varepsilon^{b}\right\}Z^{\max\left\{0,b\right\}-1}.
\end{split} \end{equation*} Defining \begin{equation*} w_{1}\left(Z\right):=\frac{h\left(Z\mathrm{e}^{t-s}\right)}{Z^{1-\max\left\{0,b\right\}}} \quad \text{and} \quad w_{2}\left(Z\right):=\frac{h\left(Z\mathrm{e}^{t-s}\right)}{Z} \end{equation*} and using that $\Phi$ is non-increasing it thus suffices to show \begin{equation*}
\begin{split}
&\quad C_{0}\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}\frac{\tilde{v}_{1}\left(Z\right)}{Z}\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\\
&+C_{0}\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{\tilde{v}_{2}\left(Z\right)}{Z}\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\\
&-C\varepsilon^{-a}\max\left\{1,\varepsilon^{b}\right\}\int_{0}^{\infty}\frac{w_{1}\left(Z\right)}{Z}\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\\
&-C\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}\frac{w_{2}\left(Z\right)}{Z}\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\geq 0.
\end{split} \end{equation*} Defining \begin{equation*}
V_{i}\left(Z\right):=\int_{Z}^{\infty}\frac{\tilde{v}_{i}\left(Y\right)}{Y}\mathrm{d}Y\quad \text{and} \quad W_{i}\left(Z\right):=\int_{Z}^{\infty}\frac{w_{i}\left(Y\right)}{Y}\mathrm{d}Y \end{equation*} we can rewrite this as \begin{equation*}
\begin{split}
&\quad\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}-\partial_{Z}\left(C_{0}V_{1}\left(Z\right)-CW_{1}\left(Z\right)\right)\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\\
&+\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}-\partial_{Z}\left(C_{0}V_{2}\left(Z\right)-CW_{2}\left(Z\right)\right)\left[\Phi\left(X\right)-\Phi\left(X+Z\right)\right]\mathrm{d}Z\geq 0.
\end{split} \end{equation*} Integrating by parts this is equivalent to \begin{equation*}
\begin{split}
&\quad\varepsilon^{-a}\max\left\{\varepsilon^{b},1\right\}\int_{0}^{\infty}-\partial_{Z}\Phi\left(X+Z\right)\left(C_{0}V_{1}\left(Z\right)-CW_{1}\left(Z\right)\right)\mathrm{d}Z\\
&+\varepsilon^{-a}\left[\max\left\{\varepsilon^{b},1\right\}+\max\left\{\varepsilon^{b},R^{b}\right\}\right]\int_{0}^{\infty}-\partial_{Z}\Phi\left(X+Z\right)\left(C_{0}V_{2}\left(Z\right)-CW_{2}\left(Z\right)\right)\mathrm{d}Z\geq 0.
\end{split} \end{equation*} Using that $\Phi$ is non-increasing it thus suffices to show $C_{0}V_{i}\left(Z\right)-CW_{i}\left(Z\right)\geq 0$ for $i=1,2$. To see this note first that $V_{i}$ is explicitly given by $V_{i}\left(Z\right)=\frac{1}{\omega_{i}}Z^{-\omega_{i}}$ for $i=1,2$. Furthermore using Lemma~\ref{Lem:moment:est:stand} one has \begin{equation*}
W_{1}\left(Z\right)\leq CZ^{\max\left\{0,b\right\}-\rho}=CZ^{-\omega_{1}}\quad \text{and} \quad W_{2}\left(Z\right)\leq CZ^{-\omega_{2}}. \end{equation*} Thus choosing $C_{0}$ sufficiently large the claim follows. \end{proof}
Finally, we prove a technical lemma that will be needed below.
\begin{lemma}\label{Lem:Taylor:estimate}
Let $\delta\in\left(0,1\right)$ and $\rho\in\left(\max\left\{0,b\right\},1\right)$. Then
\begin{equation*}
\left(1-\frac{\kappa}{R}+\xi\right)^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\mathrm{e}^{-\delta t}}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\right)\geq \left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)-\abs{\xi-\frac{\kappa}{R}}
\end{equation*}
holds for every $\xi\in\left[\frac{R_{0}}{R}\mathrm{e}^{-t}-1+\frac{\kappa}{R},\frac{\kappa}{R}\right]$. \end{lemma}
\begin{proof}
Let $\frac{R_{0}}{R}\mathrm{e}^{-t}-1+\frac{\kappa}{R}\leq \xi\leq \frac{\kappa}{R}$; then $\left(1-\frac{\kappa}{R}+\xi\right)\in\left[\frac{R_{0}}{R}\mathrm{e}^{-t},1\right]$. Thus for $0\leq \rho<1$ we have $\left(1-\frac{\kappa}{R}+\xi\right)^{1-\rho}\geq \left(1-\abs{\xi-\frac{\kappa}{R}}\right)$ and therefore we can estimate (noting that the second factor is non-negative)
\begin{equation*}
\begin{split}
&\quad \left(1-\frac{\kappa}{R}+\xi\right)^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\mathrm{e}^{-\delta t}}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\right)\geq \left(1-\frac{\kappa}{R}+\xi\right)\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\mathrm{e}^{-\delta t}}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\right)\\
&=1-\frac{\kappa}{R}+\xi-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\frac{1-\frac{\kappa}{R}+\xi}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\geq 1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}-\abs{\xi-\frac{\kappa}{R}}
\end{split}
\end{equation*} as $\frac{1-\frac{\kappa}{R}+\xi}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\leq 1$ for $\xi$ as above and $\delta<1$. \end{proof}
\subsubsection{Invariance of the lower bound}
We are now prepared to finish the proof of the invariance of the lower bound~\eqref{eq:F2}.
\begin{proposition}
For sufficiently small $\delta>0$ and sufficiently large $R_{0}$ (possibly depending on $\delta$), we have
\begin{equation*}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x\geq R^{1-\rho}\left(1-\left(\frac{R_0}{R}\right)^{\delta}\right)_{+}.
\end{equation*}
\end{proposition}
\begin{proof}
From the special choice of the test function $\psi$ we have according to \eqref{eq:sptest2} that
\begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x\geq\int_{0}^{\infty}h\left(x,t\right)\psi\left(x,t\right)\mathrm{d}x\geq\int_{0}^{\infty}h_{0}\left(x\right)\psi\left(x,0\right)\mathrm{d}x=\mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}h_{0}\left(X\mathrm{e}^{t}\right)\Phi\left(X,0\right)\mathrm{e}^{t}\mathrm{d}X.
\end{split}
\end{equation*} Defining $H_{0}\left(X\right):=\int_{0}^{X}h_{0}\left(Y\right)\mathrm{d}Y$ we obtain \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq \mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}h_{0}\left(X\mathrm{e}^{t}\right)\Phi\left(X,0\right)\mathrm{e}^{t}\mathrm{d}X=\mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}\partial_{X}\left(H_{0}\left(X\mathrm{e}^{t}\right)\right)\Phi\left(X,0\right)\mathrm{d}X\\
&=\left.\mathrm{e}^{-\left(1-\rho\right)t}H_{0}\left(X\mathrm{e}^{t}\right)\Phi\left(X,0\right)\right|_{0}^{\infty}+\mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}H_{0}\left(X\mathrm{e}^{t}\right)\left(-\partial_{X}\Phi\left(X,0\right)\right)\mathrm{d}X\\
&= \mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}H_{0}\left(X\mathrm{e}^{t}\right)\left(-\partial_{X}\Phi\left(X,0\right)\right)\mathrm{d}X
\end{split} \end{equation*} where we integrated by parts and used that the boundary terms vanish as $H_{0}\left(0\right)=0$ and the support of $\Phi$ is bounded from the right. Using now that by assumption we have $H_{0}\left(X\mathrm{e}^{t}\right)\geq \left(X\mathrm{e}^{t}\right)^{1-\rho}\left(1-\left(\frac{R_{0}}{X\mathrm{e}^{t}}\right)^{\delta}\right)_{+}$ we obtain \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq \mathrm{e}^{-\left(1-\rho\right)t}\int_{0}^{\infty}\left(X\mathrm{e}^{t}\right)^{1-\rho}\left(1-\left(\frac{R_{0}}{X\mathrm{e}^{t}}\right)^{\delta}\right)_{+}G\left(X,0\right)\mathrm{d}X\\
&=\int_{0}^{\infty}X^{1-\rho}\left(1-\left(\frac{R_{0}}{X\mathrm{e}^{t}}\right)^{\delta}\right)_{+}\frac{1}{R}\tilde{G}\left(\frac{X}{R}-1+\frac{\kappa}{R},0\right)\mathrm{d}X\\
&=R^{1-\rho}\int_{-D}^{\kappa/R}\left(1-\frac{\kappa}{R}+\xi\right)^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\mathrm{e}^{-\delta t}}{\left(1-\frac{\kappa}{R}+\xi\right)^{\delta}}\right)\tilde{G}\left(\xi,0\right)\mathrm{d}\xi
\end{split} \end{equation*} where we used the change of variables $\xi=\frac{X}{R}-1+\frac{\kappa}{R}$ and defined $D:=1-\frac{R_{0}}{R}\mathrm{e}^{-t}-\frac{\kappa}{R}$. Note that for $\kappa\leq \left(1-\mathrm{e}^{-t}\right)$ we have $0\leq D\leq 1$ (because we may assume $R\geq R_{0}\geq 1$). Using Lemma~\ref{Lem:Taylor:estimate} we can estimate the integrand on the right hand side to obtain \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq R^{1-\rho}\int_{-D}^{\kappa/R}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)\tilde{G}\left(\xi,0\right)\mathrm{d}\xi-R^{1-\rho}\int_{-D}^{\kappa/R}\abs{\xi-\frac{\kappa}{R}}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi\\
&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)\left(\int_{-\infty}^{\kappa/R}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi-\int_{-\infty}^{-D}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi\right)\\
&\quad-R^{1-\rho}\left(\int_{-1}^{0}\abs{\xi-\frac{\kappa}{R}}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi+\frac{\kappa}{R}\int_{0}^{\kappa/R}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi\right)
\end{split} \end{equation*} Applying furthermore Remark~\ref{Rem:properties:der} and Lemma~\ref{Lem:subsolution:2:integral:estimate} we get
\begin{equation*} \begin{split} &\quad\int_{0}^{R}h\left(x,t\right)\mathrm{d}x\\
&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)\left(1-C\left(\frac{\kappa}{R D}\right)^{\mu}-\frac{Ct}{D^{\omega}}R^{-\theta}\right)\\
&\quad-R^{1-\rho}\left(\int_{-1}^{0}\abs{\xi}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi+\frac{\kappa}{R}\int_{-1}^{0}\tilde{G}\left(\xi,0\right)\mathrm{d}\xi+\frac{\kappa}{R}\right)\\
&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)\left(1-C\left(\frac{\kappa}{R D}\right)^{\mu}-\frac{Ct}{D^{\omega}}R^{-\theta}\right)-R^{1-\rho}\left(C\left(\frac{\kappa}{R}\right)^{\mu}+CtR^{-\theta}+\frac{2\kappa}{R}\right).
\end{split} \end{equation*} Inserting the definition of $D$ and rearranging we thus obtain \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)-CR^{1-\rho}\frac{\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)}{\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)^{\mu}}\left(\frac{\kappa}{R}\right)^{\mu}\\
&\quad-\frac{Ct}{R^{\theta}}R^{1-\rho}\left(\frac{\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)}{\left(1-\left(\frac{R_{0}}{R}\right)\mathrm{e}^{-t}\right)^{\omega}}+1\right)-CR^{1-\rho}\left(\frac{\kappa}{R}+\left(\frac{\kappa}{R}\right)^{\mu}\right).
\end{split} \end{equation*} Using that for $\delta,\omega\in\left(0,1\right)$ and $R>R_{0}$ we have $\left(1- \left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)\leq 1-\frac{R_{0}}{R}\mathrm{e}^{-t}\leq\left(1-\frac{R_{0}}{R}\mathrm{e}^{-t}\right)^{\omega}$ and thus $\left(\frac{1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}}{\left(1-\frac{R_{0}}{R}\mathrm{e}^{-t}\right)^{\omega}}+1\right)\leq 2$ and $\frac{\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)}{\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)^{\mu}}\leq 1$, we therefore get \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\right)-\frac{Ct}{R^{\theta}}R^{1-\rho}-CR^{1-\rho}\left(\frac{\kappa}{R}+\left(\frac{\kappa}{R}\right)^{\mu}\right).
\end{split} \end{equation*} Since for $\delta t\leq 1$ (note that we assume $\delta, t\leq 1$) we can estimate $1-\left(\frac{R_{0}}{R}\right)^{\delta}\mathrm{e}^{-\delta t}\geq 1-\left(\frac{R_{0}}{R}\right)^{\delta}+\frac{1}{\mathrm{e}}\left(\frac{R_{0}}{R}\right)^{\delta}\delta t$ (using that $1-\mathrm{e}^{-x}\geq \frac{x}{\mathrm{e}}$ for $x\in\left[0,1\right]$, by concavity), we obtain \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\right)+R^{1-\rho}\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\delta t}{\mathrm{e}}-\frac{Ct}{R^{\theta}}R^{1-\rho}-CR^{1-\rho}\left(\frac{\kappa}{R}+\left(\frac{\kappa}{R}\right)^{\mu}\right).
\end{split} \end{equation*} We now choose $\mu=\theta$ and $\kappa<1$ sufficiently small. Then as we assume $R\geq R_{0}\geq 1$ we have $\frac{\kappa}{R}\leq \left(\frac{\kappa}{R}\right)^{\mu}=\left(\frac{\kappa}{R}\right)^{\theta}$. Using this we can further estimate \begin{equation*}
\begin{split}
\int_{0}^{R}h\left(x,t\right)\mathrm{d}x&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\right)+R^{1-\rho}\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\delta t}{\mathrm{e}}-\frac{Ct}{R^{\theta}}R^{1-\rho}-\frac{C\kappa^{\theta}}{R^{\theta}}R^{1-\rho}\\
&\geq R^{1-\rho}\left(1-\left(\frac{R_{0}}{R}\right)^{\delta}\right)+R^{1-\rho}\left(\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\delta t}{\mathrm{e}}-\frac{C}{R^{\theta}}\left(t+\kappa^{\theta}\right)\right).
\end{split} \end{equation*}
Thus it suffices to show $\left(\frac{R_{0}}{R}\right)^{\delta}\frac{\delta t}{\mathrm{e}}-\frac{C}{R^{\theta}}\left(t+\kappa^{\theta}\right)\geq 0$, which is equivalent to $R^{\theta-\delta}\geq C\mathrm{e}\frac{t+\kappa^{\theta}}{R_{0}^{\delta}\delta t}$. This holds if we choose $\kappa$ sufficiently small such that also $\kappa^{\theta}<t$, take $0<\delta<\theta$ and then $R_{0}$ sufficiently large: indeed, then $t+\kappa^{\theta}\leq 2t$ and $R^{\theta-\delta}\geq R_{0}^{\theta-\delta}$, so the condition is satisfied for all $R\geq R_{0}$ as soon as $R_{0}^{\theta}\geq \frac{2C\mathrm{e}}{\delta}$ (note that we only have to prove the claim for $R\geq R_{0}$). \end{proof}
\subsection{Existence of self-similar solutions}\label{S.ex}
\begin{proposition}\label{Prop:existence:stat}
Let $K$ satisfy Assumptions~\eqref{Ass1a}-\eqref{Ass1}. Then for any $\rho\in\left(\max\left\{0,b\right\},1\right)$ there exists a weak stationary solution $h_{\varepsilon}$ to \eqref{A2} (with $K$ replaced by $K_{\varepsilon}$). \end{proposition}
This proposition is proved in the following way: according to Theorem~\ref{T.fixedpoint} we obtain $h_{\varepsilon}^{\lambda}\in\mathcal{Y}$ that is stationary under the action of $S_{\varepsilon}^{\lambda}\left(t\right)$, i.e. $H_{\varepsilon}^{\lambda}$ is a stationary mild solution of \eqref{eq:S1E9}. Then taking a subsequence of $h_{\varepsilon}^{\lambda}$ converging weakly to some $h_{\varepsilon}$ and passing to the limit $\lambda\to 0$ in the equation shows the claim. The last step is quite similar to passing to the limit $\varepsilon\to 0$ in Section~\ref{sec:limit:eps} and thus we do not give details here.
We furthermore have the following result about the regularity and asymptotic behaviour of $h_{\varepsilon}$ completing the proof of Proposition~\ref{P.hepsexistence}:
\begin{proposition}\label{Prop:reg:decay:stat:eps}
The solution $h_{\varepsilon}\in\mathcal{Y}$ from Proposition~\ref{Prop:existence:stat} is continuous on $\left(0,\infty\right)$ and satisfies $h_{\varepsilon}\left(x\right)\sim \left(1-\rho\right)x^{-\rho}$ as $x\to\infty$. \end{proposition}
This can be proved in a similar way as the corresponding result in \cite[Lemma~4.2 \& Lemma~4.3]{NV12a}.
\section{Passing to the limit $\varepsilon \to 0$}\label{sec:eps:to:zero}
\subsection{Strategy of the proof}
Our starting point is that we have a continuous positive function $h_{\varepsilon}$ that is a weak solution to \begin{equation}\label{eq:self:sim:eps}
\partial_x I_{\varepsilon}[h_{\varepsilon}] = \partial_x \left( x h_{\varepsilon}\right) + (\rho-1) h_{\varepsilon}\,, \qquad \mbox{ with } \quad I_{\varepsilon}[h_{\varepsilon}] = \int_0^x \int_{x-y}^{\infty} \frac{K_{\varepsilon}(y,z)}{z} h_{\varepsilon}(y) h_{\varepsilon}(z)\,dz\,dy\,. \end{equation} Furthermore we have the estimates \begin{equation}\label{hepsestimates}
\int_0^rh_{\varepsilon}(x)\,dx \leq r^{1-\rho} \qquad \mbox{ and } \qquad \lim_{r \to \infty} \int_0^r h_{\varepsilon}(x)\,dx / r^{1-\rho} =1\,.
\end{equation} We introduce the following quantities: \begin{equation}\label{lepsdef}
\mu_{\varepsilon}=\int_0^1 h_{\varepsilon}(x) \left(x+\varepsilon\right)^{-a} \,dx \, ,\qquad \lambda_{\varepsilon} =\int_0^1 h_{\varepsilon}(x) \left(x+\varepsilon\right)^b\,dx \,, \qquad L_{\varepsilon}:=\max\left( \lambda_{\varepsilon}^{\frac{1}{1+a}},\mu_{\varepsilon}^{\frac{1}{1-b}}\right)\,. \end{equation} Up to passing to a subsequence we may assume in the following that either $L_{\varepsilon}$ converges or $L_{\varepsilon}\to\infty$ as $\varepsilon\to 0$. Furthermore, as the case $L_{\varepsilon}\to 0$ behaves slightly differently, we use from now on the following notation: we define $L:=L_{\varepsilon}$ if $L_{\varepsilon}\not\to 0$ and $L:=1$ if $L_{\varepsilon}\to 0$; thus (possibly after passing to a further subsequence) we may assume $L>0$. For the following let \[
X=\frac{x}{L} \,, \qquad h_{\varepsilon}(x) = H_{\varepsilon}(X) L^{-\rho}\,. \]
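Note that the bounds in \eqref{hepsestimates} are invariant under this rescaling; indeed, an elementary computation using only \eqref{hepsestimates} and the definition of $H_{\varepsilon}$ gives, for every $R>0$,
\begin{equation*}
\int_{0}^{R}H_{\varepsilon}\left(X\right)\mathrm{d}X=L^{\rho}\int_{0}^{R}h_{\varepsilon}\left(LX\right)\mathrm{d}X=L^{\rho-1}\int_{0}^{LR}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq L^{\rho-1}\left(LR\right)^{1-\rho}=R^{1-\rho},
\end{equation*}
and in the same way $\lim_{R\to\infty}\int_{0}^{R}H_{\varepsilon}\left(X\right)\mathrm{d}X/R^{1-\rho}=1$ for each fixed $\varepsilon$.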
The strategy of the proof is the following: first we derive a uniform lower integral bound for $H_{\varepsilon}$. This will be done by constructing a special test function that provides an estimate from below which suffices to conclude a lower bound via an iteration argument. In order to obtain the same uniform lower bound for $h_{\varepsilon}$ we need to exclude the case $L_{\varepsilon}\to \infty$. For this reason we will show that in this case $H_{\varepsilon}$ converges to some limit $H$ solving a differential equation that has no solution satisfying the growth condition $\int_{0}^{R}H\,\mathrm{d}X\leq R^{1-\rho}$. Note that at this point it is crucial to assume $\rho>0$. Using the lower bound on $h_{\varepsilon}$ we can then show exponential decay of $h_{\varepsilon}$ near zero, which is enough to pass to the limit $\varepsilon\to 0$.
For $0<\kappa<1$ we again denote in the following by $\varphi_{\kappa}$ a non-negative, symmetric standard mollifier with $\supp \varphi_{\kappa}\subset \left[-\kappa,\kappa\right]$.
\subsection{Uniform lower bound for $H_{\varepsilon}$}
In this subsection we will show a uniform lower bound on $H_{\varepsilon}$, i.e. we will prove: \begin{proposition}\label{Prop:Hepslowerbound}
For any $\delta>0$ there exists $R_{\delta}>0$ such that
\begin{equation}\label{Hepslowerbound}
\int_0^R H_{\varepsilon}(X)\mathrm{d}X \geq (1-\delta) R^{1-\rho} \qquad \mbox{ for all } R \geq R_{\delta}.
\end{equation} \end{proposition}
\subsubsection{Construction of a suitable test function}\label{subsec:test}
We start by constructing a special test function. To this end, notice that for $\psi=\psi\left(x,t\right)$ with $\psi\in C^{1}$ and compact support in $\left[0,T\right]\times \left[0,\infty\right)$ we obtain from the equation for $h_{\varepsilon}$: \begin{equation*}
\begin{split}
0&=\int_{0}^{T}\int_{0}^{\infty}\partial_{x}\psi I_{\varepsilon}\left[h_{\varepsilon}\right]\mathrm{d}x\mathrm{d}t-\int_{0}^{T}\int_{0}^{\infty}x\partial_{x}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t+\left(\rho-1\right)\int_{0}^{T}\int_{0}^{\infty}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t\\
&=\int_{0}^{T}\int_{0}^{\infty}\partial_{t}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t+\int_{0}^{\infty}\psi\left(\cdot,0\right)h_{\varepsilon}\mathrm{d}x-\int_{0}^{\infty}\psi\left(\cdot,T\right)h_{\varepsilon}\mathrm{d}x.
\end{split} \end{equation*} Choosing $\psi$ such that \begin{equation}\label{eq:special:test:1}
\begin{split}
\int_{0}^{T}\int_{0}^{\infty}\partial_{x}\psi I_{\varepsilon}\left[h_{\varepsilon}\right]\mathrm{d}x\mathrm{d}t-\int_{0}^{T}\int_{0}^{\infty}x\partial_{x}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t+\left(\rho-1\right)\int_{0}^{T}\int_{0}^{\infty}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t-\int_{0}^{T}\int_{0}^{\infty}\partial_{t}\psi h_{\varepsilon}\mathrm{d}x\mathrm{d}t\geq 0
\end{split} \end{equation} we obtain \begin{equation*}
\begin{split}
\int_{0}^{\infty}\psi\left(\cdot,0\right)h_{\varepsilon}\mathrm{d}x\geq \int_{0}^{\infty}\psi\left(\cdot,T\right)h_{\varepsilon}\mathrm{d}x.
\end{split} \end{equation*} Rewriting \eqref{eq:special:test:1} we obtain \begin{equation}\label{eq:special:test:weak:psi}
\begin{split}
\int_{0}^{T}\int_{0}^{\infty}h_{\varepsilon}\left(x\right)\left\{\int_{0}^{\infty}\frac{K_{\varepsilon}\left(x,y\right)}{y}h_{\varepsilon}\left(y\right)\left[\psi\left(x+y\right)-\psi\left(x\right)\right]\mathrm{d}y-x\partial_{x}\psi\left(x\right)+\left(\rho-1\right)\psi\left(x\right)-\partial_{t}\psi\left(x\right)\right\}\mathrm{d}x\mathrm{d}t\geq 0.
\end{split} \end{equation} Defining $W$ by $\psi\left(x,t\right)=\mathrm{e}^{-\left(1-\rho\right)t}W\left(\xi,t\right)$ with $\xi=\frac{x}{L\mathrm{e}^{t}}$ we can rewrite the term in brackets and obtain that it suffices to construct $W$ such that \begin{equation}\label{eq:special:test:2}
\begin{split}
\partial_{t}W\left(\frac{x}{L\mathrm{e}^{t}},t\right)\leq \int_{0}^{\infty}\frac{K_{\varepsilon}\left(x,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W\left(\frac{x+y}{L\mathrm{e}^{t}},t\right)-W\left(\frac{x}{L\mathrm{e}^{t}},t\right)\right]\mathrm{d}y.
\end{split} \end{equation} For further use we also note that we only need this in weak form, i.e. we need that \begin{equation}\label{eq:special:test:weak}
\begin{split}
\int_{0}^{T}\int_{0}^{\infty}\mathrm{e}^{-\left(1-\rho\right)t}h_{\varepsilon}\left(x\right)\left\{\partial_{t}W\left(\frac{x}{L\mathrm{e}^{t}},t\right)-\int_{0}^{\infty}\frac{K_{\varepsilon}\left(x,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W\left(\frac{x+y}{L\mathrm{e}^{t}},t\right)-W\left(\frac{x}{L\mathrm{e}^{t}},t\right)\right]\mathrm{d}y\right\}\mathrm{d}x\mathrm{d}t\leq 0,
\end{split} \end{equation} provided that we can justify the change from $\psi$ to $W$.
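To make the transition from $\psi$ to $W$ transparent we record the underlying computation (formal at this stage; the rigorous justification for the non-smooth $W$ used below is given afterwards). With $\xi=\frac{x}{L\mathrm{e}^{t}}$ one has $\partial_{t}\xi=-\xi$ and hence
\begin{equation*}
\begin{split}
\partial_{t}\psi\left(x,t\right)&=\mathrm{e}^{-\left(1-\rho\right)t}\left[-\left(1-\rho\right)W\left(\xi,t\right)+\partial_{t}W\left(\xi,t\right)-\xi\partial_{\xi}W\left(\xi,t\right)\right],\\
x\partial_{x}\psi\left(x,t\right)&=\mathrm{e}^{-\left(1-\rho\right)t}\xi\partial_{\xi}W\left(\xi,t\right),
\end{split}
\end{equation*}
so that $-x\partial_{x}\psi+\left(\rho-1\right)\psi-\partial_{t}\psi=-\mathrm{e}^{-\left(1-\rho\right)t}\partial_{t}W\left(\xi,t\right)$. Thus the term in curly brackets in \eqref{eq:special:test:weak:psi} equals
\begin{equation*}
\mathrm{e}^{-\left(1-\rho\right)t}\left\{\int_{0}^{\infty}\frac{K_{\varepsilon}\left(x,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W\left(\frac{x+y}{L\mathrm{e}^{t}},t\right)-W\left(\frac{x}{L\mathrm{e}^{t}},t\right)\right]\mathrm{d}y-\partial_{t}W\left(\frac{x}{L\mathrm{e}^{t}},t\right)\right\},
\end{equation*}
which is non-negative precisely when \eqref{eq:special:test:2} holds.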
We furthermore list here some parameters that are frequently used in the following. For given $\nu\in\left(0,1\right)$ that will be fixed later we define \begin{equation*}
\begin{split}
\beta:=\begin{cases}
b & b\geq 0\\
\nu b & b <0
\end{cases}, \qquad
\omega_{1}:=\min\left\{\rho-b,\rho\right\}, \qquad
\omega_{2}:=\rho,\qquad \tilde{b}:=\max\left\{0,b\right\}.
\end{split} \end{equation*} The idea for constructing the test function $W$ is similar to the approach in Section~\ref{Sec:constr:dual:prop}, i.e. we replace the integral kernel $K_{\varepsilon}$ and $h_{\varepsilon}$ by corresponding power laws. Due to the singular behaviour of $K_{\varepsilon}$ (as $\varepsilon\to 0$) the resulting integral is not defined near the origin, so this region has to be treated separately. We have the following existence result. \begin{lemma}\label{Lem:ex:special:test:1}
For any constant $\tilde{C}>0$ there exists a function $\tilde{W}\in C^{1}\left(\left[0,T\right],C^{\infty}\left(\mathbb{R}\right)\right)$ solving
\begin{equation*}
\begin{split}
&\quad\partial_{t}\tilde{W}\left(\xi,t\right)-\tilde{C}\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left(L^{b}A^{\beta}\left(z+\varepsilon\right)^{-a}+L^{-a}A^{-\nu a}\left(z+\varepsilon\right)^{b}\right)\left[\tilde{W}\left(\xi+\frac{z}{L}\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}z\\
&-\frac{\tilde{C}}{L^{\rho+a-\max\left\{0,b\right\}}}\int_{0}^{\infty}\frac{A^{-\nu a}}{\eta^{1+\omega_{1}}}\left[\tilde{W}\left(\xi+\eta\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}\eta-\frac{\tilde{C}}{L^{\rho-b}}\int_{0}^{\infty}\frac{A^{\beta}}{\eta^{1+\omega_{2}}}\left[\tilde{W}\left(\xi+\eta\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}\eta=0
\end{split}
\end{equation*}
with $\tilde{W}\left(\cdot,0\right)=\chi_{\left(-\infty,A-\kappa\right]}\ast^{3}\varphi_{\kappa/3}\left(\cdot\right)$. \end{lemma}
\begin{proof}
This is shown in Proposition~\ref{Prop:ex:dual:sum}. \end{proof}
\begin{remark}
As shown in the appendix, $\tilde{W}$ is non-increasing, non-negative, bounded by $1$ and has support in $\left(-\infty,A\right]$. \end{remark}
As $K_{\varepsilon}$ may become singular at the origin as $\varepsilon\to 0$, we now define $W$ as the function $W\left(\xi,t\right):=\tilde{W}\left(\xi,t\right)\chi_{\left[A^{\nu},\infty\right)}\left(\xi\right)$, i.e. we cut $\tilde{W}$ off at $\xi=A^{\nu}$ in order to avoid integrating near the origin. Obviously $W$ is not in $C^{1}$ and thus the corresponding $\psi$ is not differentiable either. But as already mentioned it is enough to show that \eqref{eq:special:test:weak} holds, provided we can justify the change from $\psi$ to $W$ (and back). This will be done next: we first show that \eqref{eq:special:test:2} holds for all $\xi\neq A^{\nu}$. Then, by convolution in $\xi$ with $\varphi_{\delta}$, it is possible to change from $\psi$ to $W$ (and back). Finally, taking the limit $\delta\to 0$ shows that \eqref{eq:special:test:weak} holds.
\begin{lemma}\label{Lem:special:test:ineq}
For sufficiently large $\tilde{C}$, inequality \eqref{eq:special:test:2} holds pointwise for all $\xi\neq A^{\nu}$. \end{lemma}
\begin{proof}
From the non-negativity of $W$ the claim follows immediately for $\xi<A^{\nu}$ (where $W$ is identically zero). Thus it suffices to consider $\xi>A^{\nu}$. Using furthermore that $\supp W\subset \left(-\infty,A\right]$ it suffices to consider $\xi\in\left(A^{\nu},A\right]$. As $W$ is non-increasing on $\left(A^{\nu},A\right]$ we can estimate $-\left[W\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W\left(\xi\right)\right]\leq-\left[W\left(\xi+\frac{y}{L}\right)-W\left(\xi\right)\right]$. On the other hand using the estimates on the kernel $K$ we obtain
\begin{equation}\label{eq:special:test:estimate:1}
\begin{split}
&\quad-\int_{0}^{\infty}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W\left(\xi\right)\right]\mathrm{d}y\\
&\leq -C_{2}\int_{0}^{\infty}\frac{\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{-a}\left(y+\varepsilon\right)^{b}+\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{b}\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(y\right)\left[W\left(\xi+\frac{y}{L}\right)-W\left(\xi\right)\right]\mathrm{d}y\\
&\leq -C\int_{0}^{\infty}\frac{L^{-a}A^{-\nu a}\left(y+\varepsilon\right)^{b}+L^{b}A^{\beta}\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(y\right)\left[W\left(\xi+\frac{y}{L}\right)-W\left(\xi\right)\right]\mathrm{d}y\\
&\leq -C\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{y}\left[L^{b}A^{\beta}\left(y+\varepsilon\right)^{-a}+L^{-a}A^{-\nu a}\left(y+\varepsilon\right)^{b}\right]\left[W\left(\xi+\frac{y}{L}\right)-W\left(\xi\right)\right]\mathrm{d}y\\
&\quad -\frac{C}{L^{\rho}}\int_{1/L}^{\infty}\frac{H_{\varepsilon}\left(\eta\right)}{\eta}\left[L^{b}A^{\beta}+L^{-a}A^{-\nu a}\left(L\eta +\varepsilon\right)^{b}\right]\left[W\left(\xi+\eta\right)-W\left(\xi\right)\right]\mathrm{d}\eta\\
&\leq -C\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{y}\left[L^{b}A^{\beta}\left(y+\varepsilon\right)^{-a}+L^{-a}A^{-\nu a}\left(y+\varepsilon\right)^{b}\right]\left[W\left(\xi+\frac{y}{L}\right)-W\left(\xi\right)\right]\mathrm{d}y\\
&\quad -\frac{C A^{\beta}}{L^{\rho-b}}\int_{0}^{\infty}\frac{H_{\varepsilon}\left(\eta\right)}{\eta}\left[W\left(\xi+\eta\right)-W\left(\xi\right)\right]\mathrm{d}\eta-\frac{C A^{-\nu a}}{L^{\rho+a-\max\left\{0,b\right\}}}\int_{0}^{\infty}\frac{H_{\varepsilon}\left(\eta\right)}{\eta^{1-\max\left\{0,b\right\}}}\left[W\left(\xi+\eta\right)-W\left(\xi\right)\right]\mathrm{d}\eta.
\end{split}
\end{equation}
As $\xi>A^{\nu}$ we have by construction
\begin{equation}\label{eq:special:test:estimate:2}
\begin{split}
&\partial_{t}W\left(\xi,t\right)=\partial_{t}\tilde{W}\left(\xi,t\right)=\tilde{C}\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left[L^{b}A^{\beta}\left(z+\varepsilon\right)^{-a}+L^{-a}A^{-\nu a}\left(z+\varepsilon\right)^{b}\right]\left[\tilde{W}\left(\xi+\frac{z}{L}\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}z\\
&\quad+\tilde{C}\frac{A^{\beta}}{L^{\rho-b}}\int_{0}^{\infty}\frac{1}{\eta^{1+\omega_{2}}}\left[\tilde{W}\left(\xi+\eta\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}\eta+\tilde{C}\frac{A^{-\nu a}}{L^{\rho+a-\max\left\{0,b\right\}}}\int_{0}^{\infty}\frac{1}{\eta^{1+\omega_{1}}}\left[\tilde{W}\left(\xi+\eta\right)-\tilde{W}\left(\xi\right)\right]\mathrm{d}\eta.
\end{split}
\end{equation}
Thus in order to show \eqref{eq:special:test:2}, i.e.
\begin{equation*}
\begin{split}
\partial_{t}W\left(\xi,t\right)-\int_{0}^{\infty}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W\left(\xi+\frac{y}{L\mathrm{e}^{t}},t\right)-W\left(\xi,t\right)\right]\mathrm{d}y\leq 0
\end{split}
\end{equation*}
it is sufficient to compare the expressions in \eqref{eq:special:test:estimate:1} and \eqref{eq:special:test:estimate:2} term by term. Proceeding in the same way as in Lemma~\ref{Lem:subsolution:2} the claim follows, noting that due to the rescaling the estimates from Lemma~\ref{Lem:moment:est:stand} also hold for $H_{\varepsilon}$. \end{proof}
\begin{lemma}
The change of variables from $\psi$ to $W$ (and reverse) is justified and inequality~\eqref{eq:special:test:weak} holds. \end{lemma}
\begin{proof}
We define $W_{\delta}:=W\left(\cdot,t\right)\ast \varphi_{\delta}\left(\cdot\right)$. Then $W_{\delta}\left(\cdot,t\right)$ is smooth for all $t$ and the change of variables from the corresponding $\psi_{\delta}$ to $W_{\delta}$ (and reverse) is justified. We next show that we can pass to the limit $\delta\to 0$ in the left hand side of \eqref{eq:special:test:weak}. Then from Lemma~\ref{Lem:special:test:ineq} the claim follows.
As $\partial_{t}W_{\delta}\left(\frac{x}{L\mathrm{e}^{t}},t\right)$ is compactly supported and uniformly bounded (in $\delta$) it suffices to consider
\begin{equation}\label{eq:special:test:weak:2}
\begin{split}
\int_{0}^{\infty}h_{\varepsilon}\left(x\right)\int_{0}^{\infty}\frac{K_{\varepsilon}\left(x,y\right)}{y}h_{\varepsilon}\left(y\right)\left[W_{\delta}\left(\frac{x+y}{L\mathrm{e}^{t}},t\right)-W_{\delta}\left(\frac{x}{L\mathrm{e}^{t}},t\right)\right]\mathrm{d}y\mathrm{d}x.
\end{split}
\end{equation}
Changing variables, interchanging the order of integration and splitting the integral we have to consider
\begin{equation*}
\begin{split}
&\quad\int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{L\mathrm{e}^{t}}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\left[W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)\right]\mathrm{d}\xi\mathrm{d}y\\
&=\int_{0}^{\infty}\int_{0}^{A^{\nu}-r}\left(\cdots\right)\mathrm{d}\xi\mathrm{d}y+\int_{0}^{\infty}\int_{A^{\nu}+r}^{\infty}\left(\cdots\right)\mathrm{d}\xi\mathrm{d}y+\int_{0}^{\infty}\int_{A^{\nu}-r}^{A^{\nu}+r}\left(\cdots\right)\mathrm{d}\xi\mathrm{d}y\\
&=:\left(I\right)+\left(II\right)+\left(III\right),
\end{split}
\end{equation*}
where $r$ is a fixed, sufficiently small constant with $2\delta<r$. As $W$ and $W_{\delta}$ are compactly supported and bounded (uniformly in $\delta$) it is straightforward to pass to the limit $\delta\to 0$ in the $\xi$-integrals (for fixed $y>0$). It thus remains to show that it is also possible to pass to the limit $\delta\to 0$ in the $y$-integral; this will be done using Lebesgue's theorem. We therefore estimate the three integrands separately. First, using that $W$ is non-negative, bounded and has support in $\left[A^{\nu}-\delta,A+\delta\right]$ as well as the estimate for $K$ and Lemma~\ref{Lem:moment:est:stand}, we have
\begin{equation*}
\begin{split}
&\quad\abs{\int_{0}^{A^{\nu}-r}\frac{1}{L\mathrm{e}^{t}}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\left[W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)\right]\mathrm{d}\xi}\\
&\leq \frac{C}{L\mathrm{e}^{t}}\chi_{\left[L\mathrm{e}^{t}\left(r-\delta\right),\infty\right)}\left(y\right)\int_{0}^{A^{\nu}-r}\frac{\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{-a}\left(y+\varepsilon\right)^{b}}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}},t\right)\mathrm{d}\xi\\
&\quad +\frac{C}{L\mathrm{e}^{t}}\chi_{\left[L\mathrm{e}^{t}\left(r-\delta\right),\infty\right)}\left(y\right)\int_{0}^{A^{\nu}-r}\frac{\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{b}\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}},t\right)\mathrm{d}\xi\\
&\leq C\left(\varepsilon, L, A\right)\frac{h_{\varepsilon}\left(y\right)}{y}\chi_{\left[L\mathrm{e}^{t}\frac{r}{2},\infty\right)}\left(y\right)\int_{0}^{A^{\nu}-r}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)\mathrm{d}\xi\leq C\left(\varepsilon, L, A\right) \left(L\mathrm{e}^{t}\left(A^{\nu}-r\right)\right)^{1-\rho}\frac{h_{\varepsilon}\left(y\right)}{y}\chi_{\left[L\mathrm{e}^{t}\frac{r}{2},\infty\right)}\left(y\right).
\end{split}
\end{equation*} As the right hand side is independent of $\delta$ and integrable due to Lemma~\ref{Lem:moment:est:stand} we can pass to the limit $\delta\to 0$ in $(I)$.
To estimate the integrand in $(II)$ note that we can bound $h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)$ for $\xi\in\left[A^{\nu},A+\delta\right]$ uniformly in $t\in\left[0,T\right]$, as $h_{\varepsilon}$ is continuous. Furthermore, as $W$ is differentiable in $\xi$ on $\left[A^{\nu}+r,\infty\right)$ with bounded derivative (for fixed $r>0$, with a bound depending on $\kappa$), we can bound the $L^{\infty}$-norms of $W_{\delta}$ and $\partial_{\xi}W_{\delta}$ by the corresponding expressions for $W$. Thus we obtain
\begin{equation*}
\begin{split}
&\quad\abs{\int_{A^{\nu}+r}^{A+\delta}\frac{1}{L\mathrm{e}^{t}}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\left[W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)\right]\mathrm{d}\xi}\\
&\leq \frac{C}{L\mathrm{e}^{t}}\int_{A^{\nu}+r}^{A+\delta}\frac{\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{-a}\left(y+\varepsilon\right)^{b}+\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{b}\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\\
&\leq C\left(L,A,r,\varepsilon,\kappa \right)\left(\norm{W}_{L^{\infty}}+\norm{\partial_{\xi}W}_{L^{\infty}\left(\left[A^{\nu}+r,\infty\right)\right)}\right)\frac{\left(y+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(y\right)\min\left\{\frac{y}{L\mathrm{e}^{t}},A-A^{\nu}+1-r\right\}.
\end{split}
\end{equation*}
Again the right hand side is independent of $\delta$ and integrable due to Lemma~\ref{Lem:moment:est:stand}. Thus we can also pass to the limit $\delta\to 0$ in the $y$-integral in $(II)$. Before estimating the integrand of $(III)$ we first derive an estimate for the expression $\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi$; namely, for $y\in \left[0,L\mathrm{e}^{t}r\right]$ we have
\begin{equation*}
\begin{split}
&\quad\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\leq \int_{-\delta}^{\delta}\varphi_{\delta}\left(\eta\right)\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W\left(\xi-\eta+\frac{y}{L\mathrm{e}^{t}}\right)-W\left(\xi-\eta\right)} \mathrm{d}\xi\mathrm{d}\eta.
\end{split}
\end{equation*} We thus consider \begin{equation*}
\begin{split}
&\quad \int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W\left(\xi-\eta+\frac{y}{L\mathrm{e}^{t}}\right)-W\left(\xi-\eta\right)} \mathrm{d}\xi\\
&= \int_{A^{\nu}-r}^{A^{\nu}+\eta}W\left(\xi-\eta+\frac{y}{L\mathrm{e}^{t}}\right)\mathrm{d}\xi+\int_{A^{\nu}+\eta}^{A^{\nu}+r}W\left(\xi-\eta\right)-W\left(\xi-\eta+\frac{y}{L\mathrm{e}^{t}}\right)\mathrm{d}\xi\\
&= \int_{A^{\nu}-r+\frac{y}{L\mathrm{e}^{t}}-\eta}^{A^{\nu}+\frac{y}{L\mathrm{e}^{t}}}W\left(\xi\right)\mathrm{d}\xi+\int_{A^{\nu}}^{A^{\nu}+r-\eta}W\left(\xi\right)\mathrm{d}\xi-\int_{A^{\nu}+\frac{y}{L\mathrm{e}^{t}}}^{A^{\nu}+r-\eta}W\left(\xi\right)\mathrm{d}\xi-\int_{A^{\nu}+r-\eta}^{A^{\nu}+r-\eta+\frac{y}{L\mathrm{e}^{t}}}W\left(\xi\right)\mathrm{d}\xi\\
&\leq 2\int_{A^{\nu}}^{A^{\nu}+\frac{y}{L\mathrm{e}^{t}}}W\left(\xi\right)\mathrm{d}\xi-\int_{A^{\nu}+r-\eta}^{A^{\nu}+r-\eta+\frac{y}{L\mathrm{e}^{t}}}W\left(\xi\right)\mathrm{d}\xi\leq 3\norm{W}_{L^{\infty}}\frac{y}{L\mathrm{e}^{t}}.
\end{split} \end{equation*} This shows $\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\leq 3\norm{W}_{L^{\infty}}\frac{y}{L\mathrm{e}^{t}}$ for $y\in \left[0,L\mathrm{e}^{t}r\right]$.
On the other hand for $y>L\mathrm{e}^{t}r$ we have the trivial estimate
\begin{equation*}
\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\leq 2\norm{W_{\delta}}_{L^{\infty}}r\leq 2\norm{W}_{L^{\infty}}r
\end{equation*}
and thus altogether
\begin{equation*}
\int_{A^{\nu}-r}^{A^{\nu}+r}\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\leq C \min\left\{\frac{y}{L\mathrm{e}^{t}},r\right\}.
\end{equation*}
Using this, and again that $h_{\varepsilon}$ is continuous, we can also estimate the integrand in $(III)$:
\begin{equation*}
\begin{split}
&\quad \int_{A^{\nu}-r}^{A^{\nu}+r}\frac{1}{L\mathrm{e}^{t}}\frac{K_{\varepsilon}\left(L\mathrm{e}^{t}\xi,y\right)}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\left[W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)\right]\mathrm{d}\xi\\
&\leq C\left(L\right)\int_{A^{\nu}-r}^{A^{\nu}+r}\frac{\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{-a}\left(y+\varepsilon\right)^{b}+\left(L\mathrm{e}^{t}\xi+\varepsilon\right)^{b}\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)h_{\varepsilon}\left(y\right)\abs{W_{\delta}\left(\xi+\frac{y}{L\mathrm{e}^{t}}\right)-W_{\delta}\left(\xi\right)}\mathrm{d}\xi\\
&\leq C\left(L,A,\varepsilon\right) \sup_{\xi\in \left[A^{\nu}-r,A^{\nu}+r\right]}\abs{h_{\varepsilon}\left(L\mathrm{e}^{t}\xi\right)}\frac{\left(y+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{-a}}{y}h_{\varepsilon}\left(y\right)\min\left\{\frac{y}{L\mathrm{e}^{t}},r\right\}.
\end{split}
\end{equation*}
According to Lemma~\ref{Lem:moment:est:stand} the right hand side is integrable, so we can also bound the integrand in $(III)$ independently of $\delta$ by an integrable function. Hence, by Lebesgue's theorem, we can pass to the limit $\delta\to 0$ in \eqref{eq:special:test:weak:2} and the claim follows using Lemma~\ref{Lem:special:test:ineq}. \end{proof}
In the following we derive an estimate from below on $W$.
\subsubsection{Lower bound on $W$}
\begin{lemma}\label{L.westimate} There exist $\sigma \in (\max\left\{b,\nu\right\},1)$ and $\theta>0$ such that
\begin{equation*}
\begin{split}
1-W(A-A^{\sigma}) &\leq C t A^{-\theta}
\end{split}
\end{equation*}
for sufficiently large $A$.
\end{lemma}
\begin{proof}
From the construction in Section~\ref{Sec:existence:results} we know that $\tilde{W}$ can be written as $\tilde{W}\left(\xi,t\right)=\int_{\xi}^{\infty}G\left(\eta,t\right)\mathrm{d}\eta$ with $G=-\partial_{\xi}\tilde{W}=G_{1,1}\ast G_{1,2}\ast G_{2}$, where $G_{1,1},G_{1,2}$ and $G_{2}$ solve
\begin{equation}\label{eq:subsolepsder}
\begin{split}
\partial_{t}G_{1,1}&=\frac{C A^{-\nu a}}{L^{\rho+a-\max\left\{0,b\right\}}}\int_{0}^{\infty}\frac{1}{\eta^{1+\omega_{1}}}\left[G_{1,1}\left(\xi+\eta\right)-G_{1,1}\left(\xi\right)\right]\mathrm{d}\eta\\
G_{1,1}\left(\cdot,0\right)&=\delta\left(\cdot-A+\kappa\right)\ast\varphi_{\kappa/3}\\
\partial_{t}G_{1,2}&=\frac{CA^{\beta}}{L^{\rho-b}}\int_{0}^{\infty}\frac{1}{\eta^{1+\omega_{2}}}\left[G_{1,2}\left(\xi+\eta\right)-G_{1,2}\left(\xi\right)\right]\mathrm{d}\eta\\
G_{1,2}\left(\cdot,0\right)&=\delta\left(\cdot\right)\ast\varphi_{\kappa/3}\\
\partial_{t}G_{2}&=C\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left[\frac{A^{-\nu a}}{L^{a}}\left(z+\varepsilon\right)^{b}+L^{b}A^{\beta}\left(z+\varepsilon\right)^{-a}\right]\cdot\left[G_{2}\left(\xi+\frac{z}{L}\right)-G_{2}\left(\xi\right)\right]\mathrm{d}z\\
G_{2}\left(\cdot,0\right)&=\delta\left(\cdot\right)\ast\varphi_{\kappa/3}.
\end{split}
\end{equation} Then one has from Lemma~\ref{Lem:der:int:est} for any $\mu\in\left(0,1\right)$: \begin{equation*}
\begin{split}
\int_{-\infty}^{-D}G_{1,2}\left(\xi,t\right)\mathrm{d}\xi\leq C\left(\frac{\kappa}{D}\right)^{\mu}+\frac{C A^{\beta}t}{L^{\rho-b}D^{\omega_{2}}}
\end{split} \end{equation*} and \begin{equation*}
\begin{split}
\int_{-\infty}^{-D+A}G_{1,1}\left(\xi,t\right)\mathrm{d}\xi&=\int_{-\infty}^{\left(A-\kappa\right)-\left(D-\kappa\right)}G_{1,1}\left(\xi,t\right)\mathrm{d}\xi \leq C\left(\frac{\kappa}{D-\kappa}\right)^{\mu}+\frac{C A^{-\nu a}t}{L^{\rho+a-\max\left\{0,b\right\}}\left(D-\kappa\right)^{\omega_{1}}}\\
&\leq C\left(\frac{\kappa}{D}\right)^{\mu}+\frac{C A^{-\nu a}t}{L^{\rho+a-\max\left\{0,b\right\}}D^{\omega_{1}}}.
\end{split} \end{equation*} In the last step we used that for any $\delta\in\left(0,1\right)$ and $D\geq 1$, $\kappa\leq 1/2$ it holds that $\left(D-\kappa\right)^{-\delta}\leq 2^{\delta} D^{-\delta}$. It thus remains to estimate $G_{2}$. This is quite similar to the proof of Lemma~\ref{Lem:der:int:est}, but due to the different behaviour for $L_{\varepsilon}\to 0$ and $L_{\varepsilon}\not \to 0$ we sketch the argument here again. Defining $\tilde{G}_{2}\left(p,t\right):=\int_{\mathbb{R}}G_{2}\left(\xi,t\right)\mathrm{e}^{p\left(\xi-\kappa/3\right)}\mathrm{d}\xi$ and multiplying the equation for $G_{2}$ in \eqref{eq:subsolepsder} by $\mathrm{e}^{p\left(\xi-\kappa/3\right)}$ and integrating one obtains \begin{equation*}
\begin{split}
\partial_{t}\tilde{G}_{2}\left(p,t\right)&=C\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left[\left(z+\varepsilon\right)^{-a}L^{b}A^{\beta}+\left(z+\varepsilon\right)^{b}L^{-a}A^{-\nu a}\right]\cdot \left[\mathrm{e}^{-\frac{pz}{L}}-1\right]\mathrm{d}z \tilde{G}_{2}\left(p,t\right)\\
&=:M\left(p,L\right)\tilde{G}_{2}\left(p,t\right).
\end{split} \end{equation*}
Thus $\tilde{G}_{2}\left(p,t\right)=\int_{\mathbb{R}}\varphi_{\kappa/3}\left(\xi\right)\mathrm{e}^{p\left(\xi-\kappa/3\right)}\mathrm{d}\xi\exp\left(-t\abs{M\left(p,L\right)}\right)$ (note that $M\left(p,L\right)\leq 0$ for $p>0$, since $\mathrm{e}^{-pz/L}\leq 1$) and one can estimate:
\begin{equation*}
\begin{split}
\abs{M\left(p,L\right)}&\leq C\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left[\left(z+\varepsilon\right)^{-a}L^{b}A^{\beta}+\left(z+\varepsilon\right)^{b}L^{-a}A^{-\nu a}\right]\cdot \frac{pz}{L}\mathrm{d}z\\
&=C p\left(L^{b-1}A^{\beta}\mu_{\varepsilon}+L^{-a-1}A^{-\nu a}\lambda_{\varepsilon}\right)\leq Cp\left(A^{\beta}+A^{-\nu a}\right).
\end{split}
\end{equation*} For the last step note that, by our convention, either $L=L_{\varepsilon}$ (in the case $L_{\varepsilon}\not\to 0$), in which case the estimate follows from the definition of $L_{\varepsilon}$, or $L=1$ (in the case $L_{\varepsilon}\to 0$), in which case we may assume without loss of generality that $\varepsilon$ is so small that $L_{\varepsilon}\leq 1$ (and thus by definition also $\lambda_{\varepsilon},\mu_{\varepsilon}\leq 1$). Using this and inserting $p:=\frac{1}{D}$ we obtain in the same way as in the proof of Lemma~\ref{Lem:der:int:est}: \begin{equation*}
\begin{split}
\int_{-\infty}^{-D}G_{2}\left(\xi,t\right)\mathrm{d}\xi\leq C\left(\left(\frac{\kappa}{D}\right)^{\mu}+\frac{t}{D}\left(A^{\beta}+A^{-\nu a}\right)\right).
\end{split} \end{equation*} Using these estimates on $G_{1,1}$, $G_{1,2}$ and $G_{2}$ one obtains from Lemma~\ref{Lem:int:est:conolution} (note also Remark~\ref{Rem:est:conv}): \begin{equation*}
\begin{split}
1-\tilde{W}\left(A-D\right)&=\int_{-\infty}^{A-D}\left(G_{1,1}\ast G_{1,2}\ast G_{2}\right)\mathrm{d}\xi\leq \int_{-\infty}^{A-\frac{D}{4}}G_{1,1}\mathrm{d}\xi+\int_{-\infty}^{-\frac{D}{4}}G_{1,2}\mathrm{d}\xi+\int_{-\infty}^{-\frac{D}{4}}G_{2}\mathrm{d}\xi\\
&\leq C\frac{\kappa^{\mu}}{D^{\mu}}+\frac{C A^{-\nu a}t}{L^{\rho+a-\max\left\{0,b\right\}}D^{\omega_{1}}}+\frac{C A^{\beta}t}{L^{\rho-b}D^{\omega_{2}}}+\frac{C t\left(A^{\beta}+A^{-\nu a}\right)}{D}.
\end{split} \end{equation*} Choosing $D=A^{\sigma}$ (with $A\geq 1$) one has \begin{equation*}
1-\tilde{W}\left(A-A^{\sigma}\right)\leq C\kappa^{\mu}A^{-\mu\sigma}+\frac{Ct}{L^{a+\omega_{1}}}A^{-\nu a-\sigma \omega_{1}}+\frac{Ct}{L^{\rho-b}}A^{\beta -\sigma \omega_{2}}+C t\left(A^{\beta-\sigma}+A^{-\nu a -\sigma}\right). \end{equation*} In the case $L=1$ (i.e. $L_{\varepsilon}\to 0$) it suffices to consider the exponents of $A$: \begin{itemize}
\item $-\mu\sigma<0$, as $\mu,\sigma>0$,
\item $-\nu a-\sigma\omega_{1}<0$, as $\omega_{1},a>0$,
\item $\beta-\sigma\omega_{2}=\beta-\sigma\rho=\begin{cases}
b-\sigma \rho & b\geq 0\\
\nu b-\sigma \rho & b<0
\end{cases}
<0$, independently of the sign of $b$ if we choose $\sigma$ sufficiently close to $1$ and $\sigma>\nu$ (as $b<\rho$).
\item $\beta-\sigma=\begin{cases}
b-\sigma & b\geq 0\\
\nu b-\sigma & b<0
\end{cases}
<0$, independently of the sign of $b$ if we choose $\sigma>b$, as $\nu<1$ and $b<1$ (note that this choice of $\sigma$ does not conflict with the one made before)
\item $-\nu a -\sigma <0$, as $a,\sigma>0$. \end{itemize} Thus, taking $-\theta$ to be the maximum of the (negative) exponents proves the claim in this case.
If $L=L_{\varepsilon}$ (i.e. $L_{\varepsilon}\not\to 0$) one has to consider also the exponents of $L$: \begin{itemize}
\item $a+\omega_{1}>0$, as $a,\omega_{1}>0$,
\item $\rho-b>0$, as by assumption $b<\rho$. \end{itemize} Thus the two terms containing $L=L_{\varepsilon}$ are either bounded (if $L_{\varepsilon}$ is bounded) or converge to zero (if $L_{\varepsilon}\to \infty$), and so in both cases the claim follows with the same $\theta>0$ as above. \end{proof}
\subsubsection{The iteration argument}\label{subsec:iterate} In this section we will show Proposition~\ref{Prop:Hepslowerbound}. We therefore define \begin{equation*}
F_{\varepsilon}\left(X\right):=\int_{0}^{X}H_{\varepsilon}\left(Y\right)\mathrm{d}Y,\quad \text{noting that for }L_{\varepsilon}\to 0 \text{ this reduces to } \quad F_{\varepsilon}\left(x\right)=\int_{0}^{x}h_{\varepsilon}\left(y\right)\mathrm{d}y. \end{equation*}
We first show the following lemma, which will be the key to the proof of Proposition~\ref{Prop:Hepslowerbound}. \begin{lemma}\label{Lem:recursion}
There exists $\theta>0$ such that
\begin{equation*}
\begin{split}
F_{\varepsilon}\left(A\right)&\geq -CA^{\nu\left(1-\rho\right)}+F_{\varepsilon}\left(\left(A-A^{\sigma}\right)\mathrm{e}^{T}\right)\mathrm{e}^{-\left(1-\rho\right)T}\left(1-\frac{C}{A^{\theta}}\right)
\end{split}
\end{equation*}
for $A$ sufficiently large. \end{lemma}
\begin{proof}
From the choice of $\psi$ and $W$ (using also the non-negativity and monotonicity properties of $W$) we obtain
\begin{equation*}
\begin{split}
F_{\varepsilon}\left(A\right)&=\int_{0}^{A}H_{\varepsilon}\left(X\right)\mathrm{d}X\geq\int_{0}^{\infty}W\left(X,0\right)H_{\varepsilon}\left(X\right)\mathrm{d}X\geq \mathrm{e}^{-\left(1-\rho\right)T}\int_{0}^{\infty}H_{\varepsilon}\left(X\right)W\left(\frac{X}{\mathrm{e}^{T}},T\right)\mathrm{d}X\\
&\geq\mathrm{e}^{-\left(1-\rho\right)T}\int_{A^{\nu}}^{\infty}\partial_{X}F_{\varepsilon}\left(X\right)W\left(\frac{X}{\mathrm{e}^{T}},T\right)\mathrm{d}X\\
&=-\mathrm{e}^{-\left(1-\rho\right)T}F_{\varepsilon}\left(A^{\nu}\right)W\left(\frac{A^{\nu}}{\mathrm{e}^{T}},T\right)-\int_{A^{\nu}}^{\infty}\mathrm{e}^{-\left(1-\rho\right)T}\mathrm{e}^{-T}F_{\varepsilon}\left(X\right)\partial_{\xi}W\left(\frac{X}{\mathrm{e}^{T}},T\right)\mathrm{d}X\\
&\geq -CA^{\nu\left(1-\rho\right)}+\mathrm{e}^{-\left(1-\rho\right)T}\int_{A^{\nu}}^{\infty}F_{\varepsilon}\left(X\mathrm{e}^{T}\right)\left(G_{1,1}\ast G_{1,2}\ast G_{2}\right)\left(X,T\right)\mathrm{d}X
\end{split}
\end{equation*}
where we changed variables in the last step and used that $W$ is bounded, that $\partial_{\xi}W=-G_{1,1}\ast G_{1,2}\ast G_{2}$ on $\left(A^{\nu},\infty\right)$ while $\partial_{\xi}W=0$ on $\left(A^{\nu}\mathrm{e}^{-T},A^{\nu}\right)$, as well as $F_{\varepsilon}\left(A^{\nu}\right)\leq A^{\nu\left(1-\rho\right)}$. Noting that for $\sigma>\nu$ we have $A^{\nu}\leq A-A^{\sigma}$ for sufficiently large $A$ and using also the monotonicity of $F_{\varepsilon}$ we can further estimate
\begin{equation*}
\begin{split}
F_{\varepsilon}\left(A\right)&\geq -CA^{\nu\left(1-\rho\right)}+\mathrm{e}^{-\left(1-\rho\right)T}\int_{A-A^{\sigma}}^{\infty}F_{\varepsilon}\left(X\mathrm{e}^{T}\right)\left(G_{1,1}\ast G_{1,2}\ast G_{2}\right)\left(X,T\right)\mathrm{d}X\\
&\geq -CA^{\nu\left(1-\rho\right)}+\mathrm{e}^{-\left(1-\rho\right)T}F_{\varepsilon}\left(\left(A-A^{\sigma}\right)\mathrm{e}^{T}\right)\int_{A-A^{\sigma}}^{\infty}\left(G_{1,1}\ast G_{1,2}\ast G_{2}\right)\left(X,T\right)\mathrm{d}X\\
&=-CA^{\nu\left(1-\rho\right)}+F_{\varepsilon}\left(\left(A-A^{\sigma}\right)\mathrm{e}^{T}\right)\mathrm{e}^{-\left(1-\rho\right)T}W\left(A-A^{\sigma},T\right)\\
&\geq -CA^{\nu\left(1-\rho\right)}+F_{\varepsilon}\left(\left(A-A^{\sigma}\right)\mathrm{e}^{T}\right)\mathrm{e}^{-\left(1-\rho\right)T}\left(1-\frac{C}{A^{\theta}}\right),
\end{split}
\end{equation*} where in the last step Lemma~\ref{L.westimate} was applied. \end{proof}
We are now prepared to prove Proposition~\ref{Prop:Hepslowerbound}. This will be done by an iteration argument, applying Lemma~\ref{Lem:recursion} recursively. \begin{proof}[Proof of Proposition~\ref{Prop:Hepslowerbound}]
Let $\alpha:=\mathrm{e}^{T}>1$. For any $\delta>0$ there exists $R_{\varepsilon,\delta}>0$ such that $F_{\varepsilon}\left(R\right)\geq R^{1-\rho}\left(1-\delta\right)$ for all $R\geq R_{\varepsilon,\delta}$. For $A_{0}>\left(\frac{\alpha}{\alpha-1}\right)^{\frac{1}{1-\sigma}}$ we define a sequence $\left\{A_{k}\right\}_{k\in\mathbb{N}_{0}}$ by $A_{k+1}:=\alpha\left(A_{k}-A_{k}^{\sigma}\right)$. From the choice of $A_{0}$ one obtains that $A_{k}$ is strictly increasing with $A_{k}\to\infty$ as $k\to\infty$ (see the computation after the next display). Furthermore $\alpha A_{k}=A_{k+1}+\alpha A_{k}^{\sigma}$ and thus
\begin{equation*}
A_{k}=\frac{A_{k+1}}{\alpha}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right).
\end{equation*}
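For completeness we verify the monotonicity claimed above: dividing the recursion by $A_{k}$ one finds, since $\sigma<1$,
\begin{equation*}
A_{k+1}>A_{k}\iff \alpha\left(1-A_{k}^{\sigma-1}\right)>1\iff A_{k}^{1-\sigma}>\frac{\alpha}{\alpha-1},
\end{equation*}
which holds for $k=0$ by the choice of $A_{0}$ and then propagates inductively. Moreover, a finite limit $A_{*}$ would have to satisfy $A_{*}^{1-\sigma}=\frac{\alpha}{\alpha-1}<A_{0}^{1-\sigma}$, which is impossible; hence $A_{k}\to\infty$.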
By iteration one obtains for any $N\in\mathbb{N}$:
\begin{equation}\label{eq:reprR0}
A_{0}=\frac{A_{N}}{\alpha^{N}}\prod_{k=0}^{N-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right).
\end{equation}
Applying Lemma~\ref{Lem:recursion} one obtains by induction, for any $N\in\mathbb{N}$ and $0\leq k<N$ (the case $k=N-1$ being precisely Lemma~\ref{Lem:recursion} with $A=A_{N-1}$):
\begin{equation}\label{eq:reprFepsk}
\begin{split}
F_{\varepsilon}\left(A_{k}\right)\geq F_{\varepsilon}\left(A_{N}\right)\alpha^{-\left(N-k\right)\left(1-\rho\right)}\prod_{n=k}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)-C\sum_{m=k}^{N-1}\alpha^{-\left(m-k\right)\left(1-\rho\right)}\left(\prod_{n=k}^{m-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\right)A_{m}^{\nu\left(1-\rho\right)},
\end{split}
\end{equation}
where we use the conventions $\sum_{k=l}^{u}a_{k}=0$ and $\prod_{k=l}^{u}a_{k}=1$ if $u<l$. Thus for $k=0$ one obtains in particular
\begin{equation}\label{eq:lowerbditI0}
\begin{split}
F_{\varepsilon}\left(A_{0}\right)&\geq F_{\varepsilon}\left(A_{N}\right)\alpha^{-N\left(1-\rho\right)}\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)-C\sum_{m=0}^{N-1}\alpha^{-m\left(1-\rho\right)}\left(\prod_{n=0}^{m-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\right)A_{m}^{\nu\left(1-\rho\right)}\\
&=F_{\varepsilon}\left(A_{N}\right)\alpha^{-N\left(1-\rho\right)}\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)-C\sum_{m=0}^{N-1}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}\left(\prod_{n=0}^{m-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\right)\left(\alpha^{-m}A_{m}\right)^{\nu\left(1-\rho\right)}\\
&=:\left(I\right)-\left(II\right).
\end{split}
\end{equation}
We now estimate the two terms separately.
Let $\delta_{*}:=\delta/2$. Choosing $N$ sufficiently large such that $A_{N}\geq R_{\varepsilon,\delta_{*}}$ one has, using also~\eqref{eq:reprR0},
\begin{equation}\label{eq:lowerbditI1}
\begin{split}
\left(I\right)&\geq \left(1-\delta_{*}\right)A_{N}^{1-\rho}\alpha^{-N\left(1-\rho\right)}\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)=\left(1-\delta_{*}\right)\left(\frac{A_{N}}{\alpha^{N}}\right)^{1-\rho}\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\\
&=\left(1-\delta_{*}\right)A_{0}^{1-\rho}\frac{\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)}{\left(\prod_{k=0}^{N-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)\right)^{1-\rho}}.
\end{split}
\end{equation}
Let $0<D_{0}<D$ be parameters to be fixed later and assume $A_{0}>D$. One has $A_{k+1}=\alpha\left(A_{k}-A_{k}^{\sigma}\right)$. Thus using the monotonicity of $A_{k}$
\begin{equation*}
\begin{split}
\frac{A_{k+1}}{A_{k}}=\alpha\left(1-A_{k}^{\sigma-1}\right)>\alpha\left(1-D_{0}^{\sigma-1}\right)=:\beta_{0}>1
\end{split}
\end{equation*}
if we fix $D_{0}$ sufficiently large, which is possible as $\alpha>1$. Using this, one has $A_{k+1}>\beta_{0}A_{k}$ and thus by iteration $A_{k+1}>\beta_{0}^{k+1}A_{0}$.\\
We continue to estimate $(I)$ and consider first $\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)$, where we assume that $D$ is sufficiently large such that $\frac{C}{D^{\theta}}<1$ and thus also $\frac{C}{A_{n}^{\theta}}<1$, by $A_{0}>D$ and the monotonicity of $A_{n}$. Taking the logarithm of the product one has, using the estimate $\log\left(1-x\right)\geq -\frac{x}{1-x}$:
\begin{equation*}
\begin{split}
\sum_{n=0}^{N-1}\log\left(1-\frac{C}{A_{n}^{\theta}}\right)&\geq \sum_{n=0}^{N-1}-\frac{C}{A_{n}^{\theta}}\cdot \frac{1}{1-\frac{C}{A_{n}^{\theta}}}=-C\sum_{n=0}^{N-1}\frac{1}{A_{n}^{\theta}-C}\geq -C\sum_{n=0}^{N-1}\frac{1}{\beta_{0}^{n\theta}A_{0}^{\theta}-C}\\
&\geq -C\sum_{n=0}^{N-1}\frac{1}{\beta_{0}^{\theta n}}\frac{1}{D^{\theta}-C}\geq -C\frac{\beta_{0}^{\theta}}{\beta_{0}^{\theta}-1}\frac{1}{D^{\theta}-C}=:-\frac{C_{\beta}}{D^{\theta}-C}.
\end{split}
\end{equation*}
Thus one obtains using $\exp\left(-x\right)\geq 1-x$:
\begin{equation}\label{eq:lowerbditI2}
\begin{split}
\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\geq \exp\left(-\frac{C_{\beta}}{D^{\theta}-C}\right)\geq 1-\frac{C_{\beta}}{D^{\theta}-C}.
\end{split}
\end{equation}
For $\prod_{k=0}^{N-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)$, taking again the logarithm of the product and using $\log\left(1+x\right)\leq x$ one obtains
\begin{equation*}
\begin{split}
\sum_{k=0}^{N-1}\log\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)&\leq \sum_{k=0}^{N-1}\alpha \frac{A_{k}^{\sigma}}{A_{k+1}}\leq \alpha \sum_{k=0}^{N-1}A_{k}^{\sigma-1}\leq \alpha A_{0}^{\sigma-1}\sum_{k=0}^{N-1}\left(\beta_{0}^{\sigma-1}\right)^{k}\leq \alpha D^{\sigma-1}\sum_{k=0}^{\infty}\left(\beta_{0}^{\sigma-1}\right)^{k}\\
&=:C_{\gamma}D^{\sigma-1}.
\end{split}
\end{equation*}
Thus one can estimate
\begin{equation}\label{eq:lowerbditI3}
\begin{split}
\left(\prod_{k=0}^{N-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)\right)^{1-\rho}&\leq \exp\left[\left(C_{\gamma}D^{\sigma-1}\right)\left(1-\rho\right)\right]=1+\sum_{n=1}^{\infty}\frac{\left(1-\rho\right)^{n}C_{\gamma}^{n}D^{n\left(\sigma-1\right)}}{n!}\\
&\leq 1+D^{\sigma-1}\sum_{n=1}^{\infty}\frac{C_{\gamma}^{n}\left(1-\rho\right)^{n}}{n!}\leq 1+\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}.
\end{split}
\end{equation}
Putting the estimates of \eqref{eq:lowerbditI2} and \eqref{eq:lowerbditI3} together one obtains
\begin{equation}\label{eq:lowerbditI4}
\begin{split}
\frac{\prod_{n=0}^{N-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)}{\left(\prod_{k=0}^{N-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)\right)^{1-\rho}}&\geq \frac{1-\frac{C_{\beta}}{D^{\theta}-C}}{1+\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}}=1-\frac{\frac{C_{\beta}}{D^{\theta}-C}+\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}}{1+\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}}\\
&\geq 1-\frac{C_{\beta}}{D^{\theta}-C}-\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}.
\end{split}
\end{equation}
Together with \eqref{eq:lowerbditI1} this shows
\begin{equation}\label{eq:lowerbditI5}
\begin{split}
(I)\geq \left(1-\delta_{*}\right)A_{0}^{1-\rho}\left(1-\frac{C_{\beta}}{D^{\theta}-C}-\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}\right).
\end{split}
\end{equation}
To estimate $(II)$ we note first that one has $\prod_{n=0}^{m-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\leq 1$ as well as $\prod_{k=0}^{m-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)\geq 1$ for any $m\in \mathbb{N}_{0}$. Using this as well as \eqref{eq:reprR0} for $N=m$ and $m=0,\ldots,N-1$ one obtains
\begin{equation}\label{eq:lowerbditII1}
\begin{split}
(II)&=C\sum_{m=0}^{N-1}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}\left(\prod_{n=0}^{m-1}\left(1-\frac{C}{A_{n}^{\theta}}\right)\right)\left(\alpha^{-m}A_{m}\right)^{\nu\left(1-\rho\right)}\leq C\sum_{m=0}^{N-1}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}\left(\alpha^{-m}A_{m}\right)^{\nu\left(1-\rho\right)}\\
&=C\sum_{m=0}^{N-1}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}\frac{A_{0}^{\nu\left(1-\rho\right)}}{\left(\prod_{k=0}^{m-1}\left(1+\alpha\frac{A_{k}^{\sigma}}{A_{k+1}}\right)\right)^{\nu\left(1-\rho\right)}}\leq C\sum_{m=0}^{N-1}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}A_{0}^{\nu\left(1-\rho\right)}\\
&\leq A_{0}^{\nu\left(1-\rho\right)}C\sum_{m=0}^{\infty}\alpha^{\left(\nu-1\right)\left(1-\rho\right)m}=:C_{\nu}A_{0}^{\nu\left(1-\rho\right)}
\end{split}
\end{equation}
Putting together the estimates \eqref{eq:lowerbditI5}, \eqref{eq:lowerbditII1} and \eqref{eq:lowerbditI0} (note that the geometric series defining $C_{\nu}$ converges since $\nu<1$, $\rho<1$ and $\alpha>1$) yields
\begin{equation}\label{eq:lowerbdit}
\begin{split}
F_{\varepsilon}\left(A_{0}\right)&\geq \left(1-\delta_{*}\right)\left(1-\frac{C_{\beta}}{D^{\theta}-C}-\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}\right)A_{0}^{1-\rho}-C_{\nu}A_{0}^{\nu\left(1-\rho\right)}\\
&\geq A_{0}^{1-\rho}\left(\left(1-\delta_{*}\right)\left(1-\frac{C_{\beta}}{D^{\theta}-C}-\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}\right)-\frac{C_{\nu}}{D^{\left(1-\nu\right)\left(1-\rho\right)}}\right).
\end{split}
\end{equation}
We choose now $D$ sufficiently large such that one has
\begin{itemize}
\item $D\geq \left(3C_{\beta}\frac{2-\delta}{\delta}+C\right)^{1/\theta}$ which is equivalent to $\frac{C_{\beta}}{D^{\theta}-C}\leq \frac{1}{3}\frac{\delta}{2-\delta}$
\item $D\geq \left(3\exp\left(C_{\gamma}\left(1-\rho\right)\right)\frac{2-\delta}{\delta}\right)^{\frac{1}{1-\sigma}}$ which is equivalent to $\frac{\exp\left(C_{\gamma}\left(1-\rho\right)\right)}{D^{1-\sigma}}\leq \frac{1}{3}\frac{\delta}{2-\delta}$
\item $D\geq \left(\frac{3C_{\nu}}{1-\delta/2}\frac{2-\delta}{\delta}\right)^{\frac{1}{\left(1-\nu\right)\left(1-\rho\right)}}$ which is equivalent to $C_{\nu}\frac{1}{D^{\left(1-\nu\right)\left(1-\rho\right)}}\leq \left(1-\delta/2\right)\frac{1}{3}\frac{\delta}{2-\delta}$.
\end{itemize}
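For instance, the first condition is an elementary rearrangement:
\begin{equation*}
D^{\theta}\geq 3C_{\beta}\frac{2-\delta}{\delta}+C\iff D^{\theta}-C\geq 3C_{\beta}\frac{2-\delta}{\delta}\iff \frac{C_{\beta}}{D^{\theta}-C}\leq \frac{1}{3}\frac{\delta}{2-\delta},
\end{equation*}
and the remaining two conditions follow in the same way.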
Inserting these estimates in \eqref{eq:lowerbdit} together with $\delta_{*}=\delta/2$ one gets
\begin{equation*}
\begin{split}
F_{\varepsilon}\left(A_{0}\right)&\geq A_{0}^{1-\rho}\left(\left(1-\frac{\delta}{2}\right)\left(1-\frac{2}{3}\frac{\delta}{2-\delta}\right)-\left(1-\frac{\delta}{2}\right)\frac{1}{3}\frac{\delta}{2-\delta}\right)\\
&=A_{0}^{1-\rho}\left(1-\frac{\delta}{2}\right)\left(1-\frac{\delta}{2-\delta}\right)=\left(1-\delta\right)A_{0}^{1-\rho}.
\end{split}
\end{equation*}
This proves the claim with $R_{\delta}=D$. \end{proof}
\subsection{Excluding $L_{\varepsilon}\to\infty$}
Before we can pass to the limit $\varepsilon\to 0$ we have to exclude the case $L_{\varepsilon}\to \infty$ as $\varepsilon\to 0$ in order to obtain Proposition~\ref{Prop:Hepslowerbound} also for $h_{\varepsilon}$ (instead of $H_{\varepsilon}$). This will be done by a contradiction argument. We furthermore remark that throughout this section we frequently use the bounds \begin{equation}\label{eq:bound:Leps}
\begin{split}
\int_{0}^{1}\left(x+\varepsilon\right)^{-a}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq L_{\varepsilon}^{1-b} \quad \text{and} \quad \int_{0}^{1}\left(x+\varepsilon\right)^{b}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq L_{\varepsilon}^{1+a}
\end{split} \end{equation} due to the definition of $L_{\varepsilon}$ in \eqref{lepsdef}.
\subsubsection{Deriving a limit equation for $H_{\varepsilon}$}
We first show the following lemma, which states the convergence of a certain integral that will occur later.
\begin{lemma}\label{Lem:convergence:Q}
Assume $L_{\varepsilon}\to\infty$ as $\varepsilon\to 0$. Let $Q_{\varepsilon}$ be given by
\begin{equation*}
Q_{\varepsilon}\left(X\right):=\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{L_{\varepsilon}}K_{\varepsilon}\left(y,L_{\varepsilon}X\right)\mathrm{d}y.
\end{equation*}
Then there exists a (continuous) function $Q$ such that $Q_{\varepsilon}\to Q$ locally uniformly up to a subsequence. \end{lemma}
\begin{proof}
By the Arzel\`a--Ascoli theorem it suffices to show that both $Q_{\varepsilon}$ and $Q_{\varepsilon}'$ are uniformly bounded on each fixed interval $\left[d,D\right]\subset \mathbb{R}_{+}$. One has, using \eqref{eq:bound:Leps},
\begin{equation}\label{eq:est:Qeps}
\begin{split}
\abs{Q_{\varepsilon}\left(X\right)}&\leq C_{2}\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{L_{\varepsilon}}\left(\left(L_{\varepsilon}X+\varepsilon\right)^{-a}\left(z+\varepsilon\right)^{b}+\left(L_{\varepsilon}X+\varepsilon\right)^{b}\left(z+\varepsilon\right)^{-a}\right)\mathrm{d}z\\
&\leq \frac{C_{2}}{L_{\varepsilon}}\left(L_{\varepsilon}X+\varepsilon\right)^{-a}\int_{0}^{1}\left(z+\varepsilon\right)^{b}h_{\varepsilon}\left(z\right)\mathrm{d}z+\frac{C_{2}}{L_{\varepsilon}}\left(L_{\varepsilon}X+\varepsilon\right)^{b}\int_{0}^{1}\left(z+\varepsilon\right)^{-a}h_{\varepsilon}\left(z\right)\mathrm{d}z\\
&\leq C_{2}L_{\varepsilon}^{a}\left(L_{\varepsilon}X+\varepsilon\right)^{-a}+C_{2}L_{\varepsilon}^{-b}\left(L_{\varepsilon}X+\varepsilon\right)^{b}\\
&= C_{2}\left(X+\frac{\varepsilon}{L_{\varepsilon}}\right)^{-a}+C_{2}\left(X+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b}
\end{split}
\end{equation}
and the right hand side is clearly locally uniformly bounded under the given assumptions. Rewriting $Q_{\varepsilon}$, using the symmetry and the scaling behaviour of $K_{\varepsilon}$, one obtains
\begin{equation*}
\begin{split}
Q_{\varepsilon}\left(X\right)=\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{L_{\varepsilon}}K_{\varepsilon}\left(L_{\varepsilon}X,z\right)\mathrm{d}z=L_{\varepsilon}^{\gamma}\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{L_{\varepsilon}}K_{\frac{\varepsilon}{L_{\varepsilon}}}\left(X,\frac{z}{L_{\varepsilon}}\right)\mathrm{d}z.
\end{split}
\end{equation*}
Furthermore from \eqref{eq:Ass2} one has
\begin{equation*}
\abs{\partial_{y}K_{\varepsilon}\left(y,z\right)}\leq C\left(\left(z+\varepsilon\right)^{-a}+\left(z+\varepsilon\right)^{b}\right) \quad \text{for all } y\in\left[a,A\right]
\end{equation*}
and hence, similarly as before
\begin{equation*}
\begin{split}
\abs{Q_{\varepsilon}'\left(X\right)}&\leq CL_{\varepsilon}^{\gamma}\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{L_{\varepsilon}}\left[\left(\frac{z+\varepsilon}{L_{\varepsilon}}\right)^{-a}+\left(\frac{z+\varepsilon}{L_{\varepsilon}}\right)^{b}\right]\mathrm{d}z\\
&=CL_{\varepsilon}^{\gamma-1+a}\int_{0}^{1}\left(z+\varepsilon\right)^{-a}h_{\varepsilon}\left(z\right)\mathrm{d}z+CL_{\varepsilon}^{\gamma-1-b}\int_{0}^{1}\left(z+\varepsilon\right)^{b}h_{\varepsilon}\left(z\right)\mathrm{d}z\\
&\leq C
\end{split}
\end{equation*}
where we used also $\gamma=b-a$. \end{proof}
\begin{lemma}
Let $\rho\in\left(\max\left\{0,b\right\},1\right)$ and assume $L_{\varepsilon}\to\infty$ as $\varepsilon\to 0$. Then there exists a measure $H$ such that (up to a subsequence) $H_{\varepsilon}\stackrel{*}{\rightharpoonup}H$ and $H$ satisfies
\begin{equation}\label{eq:H:infty}
\partial_{X}\left(XH\right)+\left(\rho-1\right)H-\partial_{X}\left(Q\left(X\right)H\left(X\right)\right)+\frac{H\left(X\right)Q\left(X\right)}{X}=0
\end{equation}
in the sense of distributions, where $Q\left(X\right)=\lim_{\varepsilon\to 0}\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{L_{\varepsilon}}K_{\varepsilon}\left(y,L_{\varepsilon}X\right)\mathrm{d}y$ is the limit from Lemma~\ref{Lem:convergence:Q} (along the corresponding subsequence). \end{lemma}
\begin{proof}
Transforming the equation
\begin{equation*}
\partial_{x}\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\right)=\partial_{x}\left(xh_{\varepsilon}\left(x\right)\right)+\left(\rho-1\right)h_{\varepsilon}\left(x\right)
\end{equation*}
to the rescaled variables $X=\frac{x}{L_{\varepsilon}}$ one obtains
\begin{equation*}
\frac{1}{L_{\varepsilon}}\partial_{X}\left(\int_{0}^{L_{\varepsilon}X}\int_{L_{\varepsilon}X-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\right)=\frac{1}{L_{\varepsilon}}\partial_{X}\left(L_{\varepsilon}X\frac{H_{\varepsilon}\left(X\right)}{L_{\varepsilon}^{\rho}}\right)+\left(\rho-1\right)\frac{H_{\varepsilon}\left(X\right)}{L_{\varepsilon}^{\rho}}.
\end{equation*}
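For the reader's convenience we record the computation behind the right-hand side: with $X=\frac{x}{L_{\varepsilon}}$ and $H_{\varepsilon}\left(X\right)=L_{\varepsilon}^{\rho}h_{\varepsilon}\left(L_{\varepsilon}X\right)$ (the rescaling implicit in the preceding display) one has $\partial_{x}=\frac{1}{L_{\varepsilon}}\partial_{X}$ and therefore
\begin{equation*}
\partial_{x}\left(xh_{\varepsilon}\left(x\right)\right)=\frac{1}{L_{\varepsilon}}\partial_{X}\left(L_{\varepsilon}X\frac{H_{\varepsilon}\left(X\right)}{L_{\varepsilon}^{\rho}}\right)\quad\text{and}\quad \left(\rho-1\right)h_{\varepsilon}\left(x\right)=\left(\rho-1\right)\frac{H_{\varepsilon}\left(X\right)}{L_{\varepsilon}^{\rho}}.
\end{equation*}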
Testing with $\psi\in C_{c}^{\infty}\left(\mathbb{R}_{+}\right)$ (in the rescaled $X$-variable), splitting the integral and interchanging the order of integration we can rewrite this as
\begin{equation*}
\begin{split}
&\quad\int_{0}^{\infty}\left(X\partial_{X}\psi\left(X\right)-\left(\rho-1\right)\psi\left(X\right)\right)H_{\varepsilon}\left(X\right)\mathrm{d}X\\
&=\frac{1}{L_{\varepsilon}^{\rho-\gamma}}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\frac{K_{\frac{\varepsilon}{L_{\varepsilon}}}\left(Y,Z\right)}{Z}H_{\varepsilon}\left(Y\right)H_{\varepsilon}\left(Z\right)\left[\psi\left(Y+Z\right)-\psi\left(Y\right)\right]\mathrm{d}Z\mathrm{d}Y\\
&\quad + \int_{\frac{1}{L_{\varepsilon}}}^{\infty}\int_{0}^{1}\frac{K_{\varepsilon}\left(L_{\varepsilon}Y,z\right)}{z}h_{\varepsilon}\left(z\right)H_{\varepsilon}\left(Y\right)\left[\psi\left(Y+\frac{z}{L_{\varepsilon}}\right)-\psi\left(Y\right)\right]\mathrm{d}z\mathrm{d}Y\\
&\quad + \int_{0}^{1}\int_{0}^{\infty}\left(\int_{\frac{y}{L_{\varepsilon}}}^{\frac{y}{L_{\varepsilon}}+Z}\partial_{X}\psi\left(X\right)\mathrm{d}X\right)\frac{K_{\varepsilon}\left(y,L_{\varepsilon}Z\right)}{L_{\varepsilon}Z}h_{\varepsilon}\left(y\right)H_{\varepsilon}\left(Z\right)\mathrm{d}Z\mathrm{d}y\\
&=\left(I\right)+\left(II\right)+\left(III\right).
\end{split}
\end{equation*} In the following we assume that $\supp\psi\subset\left[d,D\right]$ with $d>0$ and $D>1$. Furthermore we can assume that $L_{\varepsilon}>1$ is sufficiently large and that $\varepsilon<1$ is sufficiently small (as we are assuming $L_{\varepsilon}\to\infty$ for $\varepsilon\to 0$). We first show that $\left(I\right)\to 0$ as $\varepsilon\to 0$: \begin{equation*}
\begin{aligned}
&\quad\abs{\left(I\right)}\\
&\leq \frac{C_{2}}{L_{\varepsilon}^{\rho-\gamma}}\int_{\frac{1}{L_{\varepsilon}}}^{D}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\frac{\left(Y+\frac{\varepsilon}{L_{\varepsilon}}\right)^{-a}\left(Z+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b}}{Z}H_{\varepsilon}\left(Y\right)H_{\varepsilon}\left(Z\right)\abs{\psi\left(Y+Z\right)-\psi\left(Y\right)}\mathrm{d}Z\mathrm{d}Y\\
&\quad +\frac{C_{2}}{L_{\varepsilon}^{\rho-\gamma}}\int_{\frac{1}{L_{\varepsilon}}}^{D}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\frac{\left(Y+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b}\left(Z+\frac{\varepsilon}{L_{\varepsilon}}\right)^{-a}}{Z}H_{\varepsilon}\left(Y\right)H_{\varepsilon}\left(Z\right)\abs{\psi\left(Y+Z\right)-\psi\left(Y\right)}\mathrm{d}Z\mathrm{d}Y\\
&\leq \frac{2^{\tilde{b}}C}{L_{\varepsilon}^{\rho-\gamma}}\int_{\frac{1}{L_{\varepsilon}}}^{D}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\frac{Y^{-a}Z^{b}+Y^{b}Z^{-a}}{Z}H_{\varepsilon}\left(Y\right)H_{\varepsilon}\left(Z\right)\abs{\psi\left(Y+Z\right)-\psi\left(Y\right)}\mathrm{d}Z\mathrm{d}Y\\
&\leq \frac{C\norm{\psi'}_{L^{\infty}}}{L_{\varepsilon}^{\rho-\gamma-a}}\int_{\frac{1}{L_{\varepsilon}}}^{D}\int_{\frac{1}{L_{\varepsilon}}}^{1}\left(Z^{b}+Y^{b}\right)H_{\varepsilon}\left(Y\right)H_{\varepsilon}\left(Z\right)\mathrm{d}Z\mathrm{d}Y+\frac{C\norm{\psi}_{L^{\infty}}}{L_{\varepsilon}^{\rho-\gamma}}\left[L_{\varepsilon}^{a}\int_{\frac{1}{L_{\varepsilon}}}^{D}H_{\varepsilon}\left(Y\right)\mathrm{d}Y\int_{1}^{\infty}Z^{b-1}H_{\varepsilon}\left(Z\right)\mathrm{d}Z\right.\\
&\quad\left.+\max\left\{L_{\varepsilon}^{-b},D^{b}\right\}\int_{\frac{1}{L_{\varepsilon}}}^{D}H_{\varepsilon}\left(Y\right)\mathrm{d}Y\int_{1}^{\infty}Z^{-a-1}H_{\varepsilon}\left(Z\right)\mathrm{d}Z\right]\\
&\leq CL_{\varepsilon}^{\gamma+a-\rho}\max\left\{1,L_{\varepsilon}^{-b},D^{b}\right\}\int_{\frac{1}{L_{\varepsilon}}}^{D}H_{\varepsilon}\left(Y\right)\mathrm{d}Y\int_{\frac{1}{L_{\varepsilon}}}^{1}H_{\varepsilon}\left(Z\right)\mathrm{d}Z+CL_{\varepsilon}^{\gamma-\rho}\left[L_{\varepsilon}^{a}D^{1-\rho}+\max\left\{L_{\varepsilon}^{-b},D^{b}\right\}D^{1-\rho}\right]\\
&=CD^{1-\rho}L_{\varepsilon}^{b-\rho}\max\left\{D^{b},L_{\varepsilon}^{-b}\right\}+CD^{1-\rho}L_{\varepsilon}^{b-\rho}+CD^{1-\rho}L_{\varepsilon}^{b-a-\rho}\max\left\{L_{\varepsilon}^{-b},D^{b}\right\}\\
&\to 0
\end{aligned} \end{equation*} as $L_{\varepsilon}\to\infty$ (i.e. for $\varepsilon\to 0$ by assumption).
Next we show $\left(II\right)\to \int_{0}^{\infty}\partial_{X}\psi\left(X\right)H\left(X\right)Q\left(X\right)\mathrm{d}X$. As $H_{\varepsilon}$ is a sequence of locally uniformly bounded (non-negative Radon) measures there exists a (non-negative Radon) measure $H$ such that $H_{\varepsilon}\stackrel{*}{\rightharpoonup} H$ in the sense of measures. Using now Taylor's formula for $\psi$ one obtains \begin{equation*}
\psi\left(Y+\frac{z}{L_{\varepsilon}}\right)-\psi\left(Y\right)=\psi'\left(Y\right)\cdot \frac{z}{L_{\varepsilon}}+\frac{1}{L_{\varepsilon}^{2}}\int_{0}^{z}\left(z-t\right)\psi''\left(Y+\frac{t}{L_{\varepsilon}}\right)\mathrm{d}t. \end{equation*} Using this in $(II)$ one gets \begin{equation*}
\begin{split}
\left(II\right)&=\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\int_{0}^{1}\frac{K_{\varepsilon}\left(L_{\varepsilon}Y,z\right)}{z}h_{\varepsilon}\left(z\right)H_{\varepsilon}\left(Y\right)\cdot\frac{z}{L_{\varepsilon}}\psi'\left(Y\right)\mathrm{d}z\mathrm{d}Y\\
&\quad+\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\int_{0}^{1}\frac{K_{\varepsilon}\left(L_{\varepsilon}Y,z\right)}{z}h_{\varepsilon}\left(z\right)H_{\varepsilon}\left(Y\right)\cdot\frac{1}{L_{\varepsilon}^{2}}\int_{0}^{z}\left(z-t\right)\psi''\left(Y+\frac{t}{L_{\varepsilon}}\right)\mathrm{d}t\mathrm{d}z\mathrm{d}Y\\
&=\left(II\right)_{a}+\left(II\right)_{b}
\end{split} \end{equation*} We consider the terms separately, beginning with $\left(II\right)_{b}$ (and assuming $L_{\varepsilon}$ to be sufficiently large, i.e. $\frac{1}{L_{\varepsilon}}<d$): \begin{equation*}
\begin{split}
\abs{\left(II\right)_{b}}&\leq \frac{1}{L_{\varepsilon}^{2}}\int_{\frac{1}{L_{\varepsilon}}}^{\infty}\int_{0}^{1}h_{\varepsilon}\left(z\right)K_{\varepsilon}\left(L_{\varepsilon}Y,z\right)H_{\varepsilon}\left(Y\right)\int_{0}^{z}\abs{\psi''\left(Y+\frac{t}{L_{\varepsilon}}\right)}\mathrm{d}t\mathrm{d}z\mathrm{d}Y\\
&\leq \frac{C\norm{\psi''}_{\infty}}{L_{\varepsilon}^{2}}\int_{d-\frac{1}{L_{\varepsilon}}}^{D}\int_{0}^{1}h_{\varepsilon}\left(z\right)H_{\varepsilon}\left(Y\right)\left[\left(L_{\varepsilon}Y+\varepsilon\right)^{-a}\left(z+\varepsilon\right)^{b}+\left(L_{\varepsilon}Y+\varepsilon\right)^{b}\left(z+\varepsilon\right)^{-a}\right]\mathrm{d}z\mathrm{d}Y\\
&\leq \frac{C D^{1-\rho}}{L_{\varepsilon}^{2}}\left[\left(L_{\varepsilon}d-1+\varepsilon\right)^{-a}L_{\varepsilon}^{1+a}+\max\left\{\left(L_{\varepsilon}D+\varepsilon\right)^{b},\left(L_{\varepsilon}d-1+\varepsilon\right)^{b}\right\}L_{\varepsilon}^{1-b}\right]\\
&=CD^{1-\rho}\left[\frac{\left(d-\frac{1}{L_{\varepsilon}}+\frac{\varepsilon}{L_{\varepsilon}}\right)^{-a}}{L_{\varepsilon}}+\frac{\max\left\{\left(D+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b},\left(d-\frac{1}{L_{\varepsilon}}+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b}\right\}}{L_{\varepsilon}}\right]\\
&\to 0
\end{split} \end{equation*} as $\varepsilon\to 0$ (and $L_{\varepsilon}\to\infty$). On the other hand (using the symmetry of $K_{\varepsilon}$) \begin{equation*}
\begin{split}
\left(II\right)_{a}=\int_{\frac{1}{L_{\varepsilon}}}^{\infty}H_{\varepsilon}\left(Y\right)\psi'\left(Y\right)\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{L_{\varepsilon}}K_{\varepsilon}\left(L_{\varepsilon}Y,z\right)\mathrm{d}z\mathrm{d}Y=\int_{\frac{1}{L_{\varepsilon}}}^{\infty}H_{\varepsilon}\left(Y\right)\psi'\left(Y\right)Q_{\varepsilon}\left(Y\right)\mathrm{d}Y.
\end{split} \end{equation*} Thus one obtains $\left(II\right)_{a}\to \int_{0}^{\infty}H\left(Y\right)Q\left(Y\right)\psi'\left(Y\right)\mathrm{d}Y$ directly from Lemma~\ref{Lem:convergence:Q}. It remains to show that $\left(III\right)\to \int_{0}^{\infty}H\left(Y\right)\frac{Q\left(Y\right)}{Y}\psi\left(Y\right)\mathrm{d}Y$. We first rewrite $(III)$ as \begin{equation*}
\begin{split}
(III)&=\int_{0}^{\infty}\int_{0}^{1}\int_{\frac{y}{L_{\varepsilon}}}^{\frac{y}{L_{\varepsilon}}+Z}\frac{K_{\varepsilon}\left(y,L_{\varepsilon}Z\right)}{L_{\varepsilon}Z}h_{\varepsilon}\left(y\right)H_{\varepsilon}\left(Z\right)\partial_{X}\psi\left(X\right)\mathrm{d}X\mathrm{d}y\mathrm{d}Z\\
&=\int_{0}^{\infty}\int_{0}^{1}\frac{K_{\varepsilon}\left(y,L_{\varepsilon}Z\right)}{L_{\varepsilon}Z}h_{\varepsilon}\left(y\right)H_{\varepsilon}\left(Z\right)\left[\psi\left(Z\right)+\psi\left(Z+\frac{y}{L_{\varepsilon}}\right)-\psi\left(Z\right)-\psi\left(\frac{y}{L_{\varepsilon}}\right)\right]\mathrm{d}y\mathrm{d}Z.
\end{split} \end{equation*} Due to $\supp \psi\subset \left[d,D\right]$ we obtain for $L_{\varepsilon}$ sufficiently large (i.e. $L_{\varepsilon}\geq \frac{2}{d}$) that $\psi\left(\frac{y}{L_{\varepsilon}}\right)=0$ for all $y\in\left[0,1\right]$. Thus using also the definition of $Q_{\varepsilon}$ we can rewrite $(III)$ as \begin{equation*}
\begin{split}
(III)=\int_{0}^{\infty}\psi\left(Z\right)H_{\varepsilon}\left(Z\right)\frac{Q_{\varepsilon}\left(Z\right)}{Z}\mathrm{d}Z+\int_{0}^{\infty}\int_{0}^{1}\frac{K_{\varepsilon}\left(y,L_{\varepsilon}Z\right)}{L_{\varepsilon}Z}h_{\varepsilon}\left(y\right)H_{\varepsilon}\left(Z\right)\left[\psi\left(Z+\frac{y}{L_{\varepsilon}}\right)-\psi\left(Z\right)\right]\mathrm{d}y\mathrm{d}Z\\
=:(III)_{a}+(III)_{b}.
\end{split} \end{equation*}
The integral $(III)_{a}$ converges (up to a subsequence) to $\int_{0}^{\infty}\psi\left(Z\right)H\left(Z\right)\frac{Q\left(Z\right)}{Z}\mathrm{d}Z$ according to Lemma~\ref{Lem:convergence:Q}. It thus remains to show that $(III)_{b}$ converges to zero. To see this note that as $\psi$ is smooth and compactly supported we have for $y\in\left[0,1\right]$: \begin{equation*}
\begin{split}
\abs{\psi\left(Z+\frac{y}{L_{\varepsilon}}\right)-\psi\left(Z\right)}\leq C\left(\psi\right)\frac{y}{L_{\varepsilon}}\chi_{\left[d-\frac{1}{L_{\varepsilon}},\infty\right)}\left(Z\right)\leq\frac{C\left(\psi\right)}{L_{\varepsilon}}\chi_{\left[d-\frac{1}{L_{\varepsilon}},\infty\right)}\left(Z\right).
\end{split} \end{equation*} Using this we can estimate \begin{equation*}
\begin{split}
\abs{\left(III\right)_{b}}\leq \frac{C\left(\psi\right)}{L_{\varepsilon}}\int_{d-1/L_{\varepsilon}}^{\infty}H_{\varepsilon}\left(Z\right)\frac{Q_{\varepsilon}\left(Z\right)}{Z}\mathrm{d}Z.
\end{split} \end{equation*} From the estimates on $Q_{\varepsilon}$ in \eqref{eq:est:Qeps} we obtain that the integral on the right hand side is bounded uniformly in $\varepsilon$ and thus for $L_{\varepsilon}\to\infty$ the right hand side converges to zero, concluding the proof. \end{proof}
\subsubsection{Non-solvability of the limit equation}
\begin{lemma}\label{Lem:non:solv:limit}
For $\rho\in\left(0,1\right)$ there exists no solution $H$ to \eqref{eq:H:infty} satisfying the lower bound \eqref{Hepslowerbound} and $\int_{0}^{R}H\left(X\right)\mathrm{d}X\leq R^{1-\rho}$ for each $R\geq 0$. \end{lemma}
\begin{proof}
Assuming such a solution exists and rewriting \eqref{eq:H:infty} one has
\begin{equation*}
\partial_{X}\left(\left(X-Q\left(X\right)\right)H\left(X\right)\right)=\left(\left(1-\rho\right)-\frac{Q\left(X\right)}{X}\right)H\left(X\right).
\end{equation*}
Defining $F\left(X\right):=\left(X-Q\left(X\right)\right)H\left(X\right)$ this is equivalent to
\begin{equation*}
\partial_{X}F\left(X\right)=\frac{\left(1-\rho\right)-\frac{Q\left(X\right)}{X}}{X-Q\left(X\right)}F\left(X\right)\quad \text{and thus} \quad F\left(X\right)=C\cdot \exp\left(\int_{A}^{X}\frac{\left(1-\rho\right)-\frac{Q\left(Y\right)}{Y}}{Y-Q\left(Y\right)}\mathrm{d}Y\right)
\end{equation*}
for some constant $C$. Considering $Q$ one can assume (up to passing to a subsequence of $\varepsilon$, again denoted by $\varepsilon$) that either $\lambda_{\varepsilon}\geq \mu_{\varepsilon}$ for all $\varepsilon$ or $\mu_{\varepsilon}\geq\lambda_{\varepsilon}$ for all $\varepsilon$; these two cases will be abbreviated as $\lambda\geq \mu$ and $\mu\geq \lambda$, respectively. One can then estimate (using the definition of $\lambda_{\varepsilon}$ and $\mu_{\varepsilon}$):
\begin{equation*}
\begin{split}
0\leq Q\left(X\right)&\leq \lim_{\varepsilon\to 0}C_{2}\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{L_{\varepsilon}}\left(\left(y+\varepsilon\right)^{-a}\left(L_{\varepsilon}X+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{b}\left(L_{\varepsilon}X+\varepsilon\right)^{-a}\right)\mathrm{d}y\\
&\leq \lim_{\varepsilon\to 0}\frac{C_{2}}{L_{\varepsilon}}\left(L_{\varepsilon}^{1-b}\left(L_{\varepsilon}X+\varepsilon\right)^{b}+L_{\varepsilon}^{a+1}\left(L_{\varepsilon}X+\varepsilon\right)^{-a}\right)=\lim_{\varepsilon\to 0}C_{2}\left(\left(X+\frac{\varepsilon}{L_{\varepsilon}}\right)^{b}+\left(X+\frac{\varepsilon}{L_{\varepsilon}}\right)^{-a}\right)\\
&=C_{2}\left(X^{b}+X^{-a}\right).
\end{split}
\end{equation*}
On the other hand
\begin{equation*}
\begin{split}
Q\left(X\right)&\geq \lim_{\varepsilon\to 0}C_{1}\int_{0}^{1}\frac{h_{\varepsilon}\left(y\right)}{L_{\varepsilon}}\left(\left(y+\varepsilon\right)^{-a}\left(L_{\varepsilon}X+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{b}\left(L_{\varepsilon}X+\varepsilon\right)^{-a}\right)\mathrm{d}y\\
&\geq \lim_{\varepsilon\to 0} \frac{C_{1}}{L_{\varepsilon}}\begin{cases}
L_{\varepsilon}^{1-b}\left(L_{\varepsilon}X+\varepsilon\right)^{b} & \mu\geq \lambda\\
L_{\varepsilon}^{1+a}\left(L_{\varepsilon}X+\varepsilon\right)^{-a} & \lambda \geq \mu
\end{cases}\\
&=C_{1}\begin{cases}
X^{b} & \mu\geq\lambda\\
X^{-a} & \lambda\geq\mu.
\end{cases}
\end{split}
\end{equation*}
This shows in particular that for sufficiently large $A$ one has $Q\left(X\right)<X$ for all $X\geq A$ and $F$ is well defined for $X\geq A$. We claim now $C>0$. To see this assume $C\leq 0$. Then $F\leq 0$ and, as just shown, $X-Q\left(X\right)>0$ for $X\geq A$, so that $H\left(X\right)=\frac{F\left(X\right)}{X-Q\left(X\right)}\leq 0$ on $\left[A,\infty\right)$. One then has (using $\int_{0}^{R}H\,\mathrm{d}X\geq \frac{R^{1-\rho}}{2}$ for sufficiently large $R$ due to Proposition~\ref{Prop:Hepslowerbound} and $\int_{0}^{A}H\,\mathrm{d}X\leq A^{1-\rho}$):
\begin{equation*}
\begin{split}
0\geq \int_{A}^{R}H\left(X\right)\mathrm{d}X=\int_{0}^{R}H\left(X\right)\mathrm{d}X-\int_{0}^{A}H\left(X\right)\mathrm{d}X\geq \frac{1}{2}R^{1-\rho}-A^{1-\rho}=R^{1-\rho}\left(\frac{1}{2}-\left(\frac{A}{R}\right)^{1-\rho}\right)>0
\end{split}
\end{equation*}
for sufficiently large $R$ and thus a contradiction. Therefore we have $C>0$.
We choose now $X_{0}<A$ such that $Q\left(X_{0}\right)=X_{0}$ and $Q\left(X\right)<X$ for all $X>X_{0}$ which is possible due to the lower and upper bound for $Q$. We get that $\frac{X-Q\left(X\right)}{X-X_{0}}$ is bounded on $\left[X_{0},\infty\right)$ and thus we have $X-Q\left(X\right)\leq K\left(X-X_{0}\right)$ for some $K>0$. Furthermore as $Q\left(Y\right)\sim Y$ for $Y\to X_{0}$ we obtain
\begin{equation*}
\begin{split}
-\left(1-\rho\right)+\frac{Q\left(Y\right)}{Y}=\rho-1+\frac{Q\left(Y\right)}{Y}\geq \rho-\delta>0
\end{split}
\end{equation*}
on $\left[X_{0},\overline{X}\right]$ for some $\overline{X}>X_{0}$ close to $X_{0}$ and some $\delta\in\left(0,\rho\right)$. We then get
\begin{equation*}
\int_{\overline{X}}^{A}\frac{\left(1-\rho\right)-\frac{Q\left(Y\right)}{Y}}{Y-Q\left(Y\right)}\mathrm{d}Y=:C\left(\overline{X},A\right)<\infty.
\end{equation*}
Using the definition of $F$ we can then write $H$ as
\begin{equation*}
\begin{split}
H&=\frac{C}{X-Q\left(X\right)}\exp\left(-\int_{X}^{A}\frac{\left(1-\rho\right)-\frac{Q\left(Y\right)}{Y}}{Y-Q\left(Y\right)}\mathrm{d}Y\right)\\
&\geq \frac{C}{K\left(X-X_{0}\right)}\exp\left(-\int_{X}^{\overline{X}}\frac{\left(1-\rho\right)-\frac{Q\left(Y\right)}{Y}}{Y-Q\left(Y\right)}\mathrm{d}Y-\int_{\overline{X}}^{A}\frac{\left(1-\rho\right)-\frac{Q\left(Y\right)}{Y}}{Y-Q\left(Y\right)}\mathrm{d}Y\right)\\
&\geq \frac{C}{K\left(X-X_{0}\right)}\exp\left(-C\left(\overline{X},A\right)\right)\exp\left(\left(\rho-\delta\right)\int_{X}^{\overline{X}}\frac{1}{K\left(Y-X_{0}\right)}\mathrm{d}Y\right)\\
&=\frac{C}{K}\exp\left(-C\left(\overline{X},A\right)\right)\left(\overline{X}-X_{0}\right)^{\frac{\rho-\delta}{K}}\frac{1}{\left(X-X_{0}\right)^{1+\frac{\rho-\delta}{K}}}=C\left(A,\overline{X},X_{0},K\right)\frac{1}{\left(X-X_{0}\right)^{1+\alpha}}
\end{split}
\end{equation*}
with $\alpha=\frac{\rho-\delta}{K}>0$, contradicting the local integrability of $H$. \end{proof}
This shows that $L_{\varepsilon}$ has to remain bounded, and thus by scale invariance we obtain from Proposition~\ref{Prop:Hepslowerbound} the corresponding lower bound also for $h_{\varepsilon}$, i.e. we have \begin{proposition}\label{P.hepslowerbound}
For any $\delta>0$ there exists $R_{\delta}>0$ such that
\begin{equation}\label{hepslowerbound}
\int_0^R h_{\varepsilon}(x)\,dx \geq (1-\delta) R^{1-\rho} \qquad \mbox{ for all } R \geq R_{\delta}.
\end{equation} \end{proposition}
\subsection{Exponential decay at the origin}\label{subsec:exp:decay}
We will show in this section that $h_{\varepsilon}$ decays exponentially near zero in an averaged sense, a property that will be crucial when passing to the limit $\varepsilon\to 0$.
\begin{lemma}\label{Lem:expdecay}
There exist constants $C$ and $c$ independent of $\varepsilon$ such that
\begin{equation*}
\begin{split}
\int_{0}^{D}h_{\varepsilon}\left(y\right)\mathrm{d}y\leq CD^{1-\rho}\exp\left(-c \left(D+\varepsilon\right)^{-a}\right)
\end{split}
\end{equation*}
for any $D\in\left(0,1\right]$ and all $\varepsilon>0$. \end{lemma}
\begin{proof}
Let $\delta=\frac{1}{2}$. Then due to Proposition~\ref{P.hepslowerbound} there exists $R_{*}>0$ such that $\int_{0}^{B_2 R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z\geq \frac{\left(B_{2} R_{*}\right)^{1-\rho}}{2}$ for any $B_{2}\geq 1$. On the other hand one has $\int_{0}^{B_{1}R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z\leq \left(B_{1}R_{*}\right)^{1-\rho}$ for any $B_{1}\geq 0$. Thus one has
\begin{equation*}
\begin{split}
\int_{B_{1}R_{*}}^{B_{2}R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z=\int_{0}^{B_{2}R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z-\int_{0}^{B_{1}R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z\geq \frac{\left(B_{2}R_{*}\right)^{1-\rho}}{2}-\left(B_{1}R_{*}\right)^{1-\rho}\geq 1
\end{split}
\end{equation*}
for sufficiently large $B_{2}$ (depending on $B_{1}$; for instance $B_{2}^{1-\rho}\geq 2R_{*}^{\rho-1}+2B_{1}^{1-\rho}$ suffices). Thus one can estimate
\begin{equation*}
\begin{split}
&\quad\int_{0}^{R}\int_{R-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\\
&\geq C_{1}\int_{0}^{R}\int_{B_{1}R_{*}}^{B_{2}R_{*}}\frac{\left(y+\varepsilon\right)^{-a}\left(z+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{b}\left(z+\varepsilon\right)^{-a}}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\\
&\geq \frac{C_{1}}{\left(R+\varepsilon\right)^{a}}\int_{0}^{R}h_{\varepsilon}\left(y\right)\mathrm{d}y\int_{B_{1}R_{*}}^{B_{2}R_{*}}\frac{\left(z+\varepsilon\right)^{b}}{z}h_{\varepsilon}\left(z\right)\mathrm{d}z\\
&\geq \frac{C}{\left(R+\varepsilon\right)^{a}}\int_{0}^{R}h_{\varepsilon}\left(y\right)\mathrm{d}y\left(B_{2}R_{*}\right)^{b-1}\int_{B_{1}R_{*}}^{B_{2}R_{*}}h_{\varepsilon}\left(z\right)\mathrm{d}z\geq \frac{C}{\left(R+\varepsilon\right)^{a}}\left(B_{2}R_{*}\right)^{b-1}\int_{0}^{R}h_{\varepsilon}\left(y\right)\mathrm{d}y.
\end{split}
\end{equation*}
Using this and taking (by an approximation argument) $\chi_{\left(-\infty,R\right]}$, restricted to $\left[0,\infty\right)$, as a test function in the equation $\left(1-\rho\right)h_{\varepsilon}\left(x\right)-\partial_{x}\left(xh_{\varepsilon}\left(x\right)\right)+\partial_{x}I_{\varepsilon}\left[h_{\varepsilon}\right]\left(x\right)=0$ we obtain
\begin{equation*}
\begin{split}
0&=\left(1-\rho\right)\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x-Rh_{\varepsilon}\left(R\right)+\int_{0}^{R}\int_{R-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\\
&\geq \left(1-\rho\right)\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x-Rh_{\varepsilon}\left(R\right)+\frac{C}{\left(R+\varepsilon\right)^{a}}\left(B_{2}R_{*}\right)^{b-1}\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x.
\end{split}
\end{equation*}
Thus one has
\begin{equation*}
\begin{split}
\left(1-\rho\right)\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x+\frac{C\left(B_{2}R_{*}\right)^{b-1}}{\left(R+\varepsilon\right)^{a}}\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq Rh_{\varepsilon}\left(R\right)=R\partial_{R}\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x
\end{split}
\end{equation*}
or equivalently
\begin{equation*}
\begin{split}
\partial_{R}\left(\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x\right)\geq \left(\frac{1-\rho}{R}+\frac{C\left(B_{2}R_{*}\right)^{b-1}}{R\left(R+\varepsilon\right)^{a}}\right)\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x&\geq \left(\frac{1-\rho}{R}+\frac{C\left(B_{2}R_{*}\right)^{b-1}}{\left(R+\varepsilon\right)^{a+1}}\right)\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x,
\end{split}
\end{equation*}
where we used $\frac{1}{R\left(R+\varepsilon\right)^{a}}\geq \frac{1}{\left(R+\varepsilon\right)^{a+1}}$ for $R>0$. Integrating this differential inequality for $\log\int_{0}^{R}h_{\varepsilon}\left(x\right)\mathrm{d}x$ over $\left[D,1\right]$ and using $\int_{0}^{1}h_{\varepsilon}\mathrm{d}x\leq 1$ as well as $\left(1+\varepsilon\right)^{-a}\leq 1$ gives
\begin{equation*}
\begin{split}
\int_{0}^{D}h_{\varepsilon}\left(x\right)\mathrm{d}x
&\leq \exp\left(\frac{C\left(B_{2}R_{*}\right)^{b-1}}{a}\right) D^{1-\rho}\exp\left(-\frac{C\left(B_{2}R_{*}\right)^{b-1}}{a}\left(D+\varepsilon\right)^{-a}\right).
\end{split}
\end{equation*} This is the claimed estimate with $c=\frac{C\left(B_{2}R_{*}\right)^{b-1}}{a}$ and prefactor $\mathrm{e}^{c}$. \end{proof}
\begin{lemma}\label{Lem:moment:est:eps}
For $D\leq 1$ and any $\alpha\in \mathbb{R}$ one has the following estimates:
\begin{equation*}
\begin{split}
\int_{0}^{D}\left(x+\varepsilon\right)^{\alpha}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq CD^{1-\rho}\left(D+\varepsilon\right)^{\alpha}\exp\left(-c\left(D+\varepsilon\right)^{-a}\right) \quad \text{if } \alpha\geq 0\\
\int_{0}^{D}\left(x+\varepsilon\right)^{\alpha}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq \tilde{C}D^{1-\rho}\exp\left(-\frac{c}{2}\left(D+\varepsilon\right)^{-a}\right)\quad \text{if } \alpha<0
\end{split}
\end{equation*} \end{lemma}
\begin{proof} The case $\alpha\geq 0$ follows immediately from Lemma~\ref{Lem:expdecay}. The case $\alpha<0$ follows from Lemma~\ref{Lem:expdecay} using a dyadic decomposition as in Lemma~\ref{Lem:moment:est:stand}; a short sketch is given below. \end{proof}
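For the reader's convenience we sketch the dyadic argument for $\alpha<0$. Splitting $\left(0,D\right]$ into the blocks $\left(2^{-\left(n+1\right)}D,2^{-n}D\right]$ and applying Lemma~\ref{Lem:expdecay} on each block gives
\begin{equation*}
\int_{0}^{D}\left(x+\varepsilon\right)^{\alpha}h_{\varepsilon}\left(x\right)\mathrm{d}x\leq \sum_{n=0}^{\infty}\left(2^{-\left(n+1\right)}D+\varepsilon\right)^{\alpha}C\left(2^{-n}D\right)^{1-\rho}\exp\left(-c\left(2^{-n}D+\varepsilon\right)^{-a}\right).
\end{equation*}
Writing $\exp\left(-ct\right)=\exp\left(-\frac{c}{2}t\right)\exp\left(-\frac{c}{2}t\right)$ with $t=\left(2^{-n}D+\varepsilon\right)^{-a}$, one factor absorbs the negative power $\left(2^{-\left(n+1\right)}D+\varepsilon\right)^{\alpha}$ (using $2^{-\left(n+1\right)}D+\varepsilon\geq\frac{1}{2}\left(2^{-n}D+\varepsilon\right)$ and $\sup_{t>0}t^{\abs{\alpha}/a}\mathrm{e}^{-ct/2}<\infty$), while for the other factor one uses $\left(2^{-n}D+\varepsilon\right)^{-a}\geq\left(D+\varepsilon\right)^{-a}$; summing the remaining geometric series $\sum_{n=0}^{\infty}2^{-n\left(1-\rho\right)}$ yields the stated bound.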
As $\left\{h_{\varepsilon}\right\}_{\varepsilon>0}$ is a locally bounded sequence of non-negative Radon measures one can extract a subsequence (again denoted by $\varepsilon$) such that $h_{\varepsilon}\stackrel{*}{\rightharpoonup} h$ in the sense of measures. As a direct consequence of Lemma~\ref{Lem:moment:est:eps} one obtains: \begin{lemma}\label{Lem:moment:exp}
For $D\leq 1$ and $\alpha\in\mathbb{R}$ one has
\begin{equation*}
\begin{split}
\int_{0}^{D}x^{\alpha}h\left(x\right)\mathrm{d}x&\leq \tilde{C}D^{1-\rho}\exp\left(-\frac{c}{2}D^{-a}\right) \quad\text{if } \alpha<0\\
\int_{0}^{D}x^{\alpha}h\left(x\right)\mathrm{d}x&\leq C D^{1+\alpha-\rho}\exp\left(-cD^{-a}\right)\quad \text{if } \alpha\geq 0.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{Lem:moment:est:eps}. \end{proof}
As a consequence of Lemma~\ref{Lem:moment:exp} together with Lemma~\ref{Lem:moment:est:stand} we obtain
\begin{corollary}\label{Cor:moment:est} For any $\alpha\in\mathbb{R}$ and $D>0$ each limit $h$ satisfies \begin{enumerate}
\item $\int_{0}^{\infty}x^{\alpha}h\left(x\right)\mathrm{d}x<\infty$ if $\alpha<\rho-1$,
\item $\int_{0}^{D}x^{\alpha}h\left(x\right)\mathrm{d}x<C\left(D\right)$. \end{enumerate}
\end{corollary}
\begin{remark}
We obtain corresponding results for $h_{\varepsilon}$ and $h$ with $x^{\alpha}$ replaced by $\left(x+\varepsilon\right)^{\alpha}$. \end{remark}
\subsection{Passing to the limit $\varepsilon\to 0$}\label{sec:limit:eps}
In this section we will finally conclude the proof of Theorem~\ref{T.main} by passing to the limit $\varepsilon\to 0$ in \eqref{eq:self:sim:eps}. Before doing this we first show that $I\left[h\right]$ is locally integrable:
\begin{lemma}
For $h$ as given above one has $I\left[h\right]\in L_{\mathrm{loc}}^{1}\left(\left[0,\infty\right)\right)$. \end{lemma}
\begin{proof}
Let $D>0$. Then one has
\begin{equation*}
\begin{split}
\int_{0}^{D}I\left[h\right]\left(x\right)\mathrm{d}x&=\int_{0}^{D}\int_{0}^{x}\int_{x-y}^{\infty}\frac{K\left(y,z\right)}{z}h\left(y\right)h\left(z\right)\mathrm{d}z\mathrm{d}y\mathrm{d}x\\
&\leq C\int_{0}^{D}\int_{0}^{D}\int_{0}^{\infty}\left(y^{-a}z^{b-1}+y^{b}z^{-a-1}\right)h\left(y\right)h\left(z\right)\mathrm{d}z\mathrm{d}y\mathrm{d}x\\
&\leq C\int_{0}^{D}\int_{0}^{D}\left(y^{-a}+y^{b}\right)h\left(y\right)\mathrm{d}y\mathrm{d}x\leq C\left(D\right)
\end{split}
\end{equation*}
where Corollary~\ref{Cor:moment:est} was used. One similarly gets $\int_{N}I\left[h\right]\mathrm{d}x=0$ for bounded null sets $N\subset \left[0,\infty\right)$. \end{proof}
To show that $h$ is a (weak) self-similar solution it only remains to pass to the limit in the weak form of the equation \begin{equation*}
\begin{split}
\partial_x I_{\varepsilon}[h_{\varepsilon}] = \partial_x \left( x h_{\varepsilon}\right) + (\rho-1) h_{\varepsilon}.
\end{split} \end{equation*} Thus let $\varphi\in C_{c}^{\infty}\left(\left[0,\infty\right)\right)$. Then the weak form reads as \begin{equation*}
\begin{split}
\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\int_{0}^{x}\int_{x-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\mathrm{d}x=\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)xh_{\varepsilon}\left(x\right)\mathrm{d}x+\left(1-\rho\right)\int_{0}^{\infty}\varphi\left(x\right)h_{\varepsilon}\left(x\right)\mathrm{d}x.
\end{split} \end{equation*} One can pass to the limit on the right hand side directly, since $x\partial_{x}\varphi\left(x\right)$ and $\varphi\left(x\right)$ are continuous and compactly supported and $h_{\varepsilon}\stackrel{*}{\rightharpoonup}h$. To prove Theorem~\ref{T.main} it thus remains to show that one can also take the limit on the left hand side of this equation. This will be done in the following proposition.
\begin{proposition}\label{Prop:limit:eps}
For any $\varphi\in C_{c}^{\infty}\left(\left[0,\infty\right)\right)$ one has
\begin{equation*}
\begin{split}
\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\int_{0}^{x}\int_{x-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(z\right)h_{\varepsilon}\left(y\right)\mathrm{d}z\mathrm{d}y\mathrm{d}x\longrightarrow \int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\int_{0}^{x}\int_{x-y}^{\infty}\frac{K\left(y,z\right)}{z}h\left(z\right)h\left(y\right)\mathrm{d}z\mathrm{d}y\mathrm{d}x
\end{split}
\end{equation*} as $\varepsilon\to 0$. \end{proposition}
\begin{proof}
Taking the difference of the two integrals and rewriting one obtains
\begin{equation*}
\begin{split}
&\quad \abs{\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{K\left(y,z\right)}{z}h\left(y\right)h\left(z\right)-\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(y\right)h_{\varepsilon}\left(z\right)\mathrm{d}z\mathrm{d}y\right)\mathrm{d}x}\\
&\leq \abs{\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{K\left(y,z\right)-K_{\varepsilon}\left(y,z\right)}{z}h\left(y\right)h\left(z\right)\mathrm{d}z\mathrm{d}y\right)\mathrm{d}x}\\
&\quad+\abs{\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h\left(y\right)\left(h\left(z\right)-h_{\varepsilon}\left(z\right)\right)\mathrm{d}z\mathrm{d}y\right)\mathrm{d}x}\\
&\quad +\abs{\int_{0}^{\infty}\partial_{x}\varphi\left(x\right)\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{K_{\varepsilon}\left(y,z\right)}{z}h_{\varepsilon}\left(z\right)\left(h\left(y\right)-h_{\varepsilon}\left(y\right)\right)\mathrm{d}z\mathrm{d}y\right)\mathrm{d}x}=:\left(I\right)+\left(II\right)+\left(III\right)
\end{split}
\end{equation*}
We estimate the three terms separately and take $D>0$ such that $\supp \varphi\subset \left[0,D\right]$. Then due to Lebesgue's Theorem (using also Corollary~\ref{Cor:moment:est} and Lemma~\ref{Lem:moment:est:stand}) we obtain
\begin{equation*}
\begin{split}
\left(I\right)&\leq \int_{0}^{\infty}\abs{\partial_{x}\varphi\left(x\right)}\left(\int_{0}^{x}\int_{x-y}^{\infty}\frac{\abs{K\left(y,z\right)-K_{\varepsilon}\left(y,z\right)}}{z}h\left(y\right)h\left(z\right)\mathrm{d}z\mathrm{d}y\right)\mathrm{d}x\to 0\quad \text{as } \varepsilon\to 0.
\end{split}
\end{equation*}
To estimate the other two terms we will need some cutoff functions. Let $M,N\in\mathbb{N}$ and $\zeta_{1}^{N},\zeta_{2}^{N},\xi_{1}^{M},\xi_{2}^{M}\in C^{\infty}\left(\left[0,\infty\right)\right)$ such that \begin{equation*}
\begin{split}
\zeta_{1}^{N}=0 \text{ on } \left[0,\frac{1}{N}\right]\cup\left[N+1,\infty\right),\quad \zeta_{1}^{N}=1 \text{ on } \left[\frac{2}{N},N\right], \quad 0\leq \zeta_{1}^{N}\leq 1, \quad \zeta_{2}^{N}:=1-\zeta_{1}^{N},\\
\xi_{1}^{M}=0 \text{ on } \left[0,\frac{1}{M}\right], \quad \xi_{1}^{M}=1 \text{ on } \left[\frac{2}{M},\infty\right), \quad 0\leq \xi_{1}^{M}\leq 1, \quad \xi_{2}^{M}:=1-\xi_{1}^{M}.
\end{split} \end{equation*} Defining $K_{\varepsilon}^{i,N}\left(y,z\right):=K_{\varepsilon}\left(y,z\right)\cdot \zeta_{i}^{N}\left(z\right)$ for $i=1,2$ one obtains using also Fubini's Theorem: \begin{equation*}
\begin{split}
\left(II\right)
&\leq \abs{\int_{0}^{\infty}\int_{0}^{\infty}\frac{K_{\varepsilon}^{1,N}\left(y,z\right)}{z}h\left(y\right)\left(h\left(z\right)-h_{\varepsilon}\left(z\right)\right)\int_{y}^{y+z}\partial_{x}\varphi\left(x\right)\mathrm{d}x\mathrm{d}y\mathrm{d}z}\\
&\quad +\abs{\int_{0}^{\infty}\int_{0}^{\infty}\frac{K_{\varepsilon}^{2,N}\left(y,z\right)}{z}h\left(y\right)\left(h\left(z\right)-h_{\varepsilon}\left(z\right)\right)\int_{y}^{y+z}\partial_{x}\varphi\left(x\right)\mathrm{d}x\mathrm{d}y\mathrm{d}z}=:\left(II\right)_{a}+\left(II\right)_{b}
\end{split} \end{equation*} We again consider the terms separately and assume without loss of generality that $\varepsilon<1$. Then using Corollary~\ref{Cor:moment:est} and Lemma~\ref{Lem:moment:est:stand} we obtain \begin{equation}\label{eq:weak:strong:0}
\begin{split}
\left(II\right)_{b}
&\leq C\norm{\partial_{x}\varphi}_{\infty}\int_{0}^{\frac{2}{N}}\int_{0}^{D}\left[\left(y+\varepsilon\right)^{-a}\left(z+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{b}\left(z+\varepsilon\right)^{-a}\right]h\left(y\right)\left(h\left(z\right)+h_{\varepsilon}\left(z\right)\right)\mathrm{d}y\mathrm{d}z\\
&\quad +C\norm{\varphi}_{\infty}\int_{N}^{\infty}\int_{0}^{D}\frac{\left(y+\varepsilon\right)^{-a}\left(z+\varepsilon\right)^{b}+\left(y+\varepsilon\right)^{b}\left(z+\varepsilon\right)^{-a}}{z}h\left(y\right)\left(h\left(z\right)+h_{\varepsilon}\left(z\right)\right)\mathrm{d}y\mathrm{d}z\\
&\leq \norm{\partial_{x}\varphi}_{\infty}C\left(D\right)\int_{0}^{\frac{2}{N}}\left(\left(z+\varepsilon\right)^{b}+\left(z+\varepsilon\right)^{-a}\right)\left(h\left(z\right)+h_{\varepsilon}\left(z\right)\right)\mathrm{d}z\\
&\qquad +C\left(D\right)\norm{\varphi}_{\infty}\int_{N}^{\infty}\left(2^{\tilde{b}}z^{b-1}+z^{-a-1}\right)\left(h\left(z\right)+h_{\varepsilon}\left(z\right)\right)\mathrm{d}z\\
&\leq \norm{\partial_{x}\varphi}_{\infty}C\left(D\right)\left[\frac{1}{N^{1-\rho}}\left(\frac{2}{N}+\varepsilon\right)^{\tilde{b}}+\frac{1}{N^{1-\rho}}\right]+C\left(D\right)\norm{\varphi}_{\infty}\left[N^{b-\rho}+N^{-a-\rho}\right]\longrightarrow 0,
\end{split} \end{equation} for $N\to\infty$. Furthermore one has \begin{equation}\label{eq:weak:strong:1}
\begin{split}
\left(II\right)_{a}
&=\abs{\int_{0}^{\infty}\left(h\left(z\right)-h_{\varepsilon}\left(z\right)\right)\psi_{\varepsilon}^{N}\left(z\right)\mathrm{d}z}
\end{split} \end{equation} with $\psi_{\varepsilon}^{N}\left(z\right):=\int_{0}^{D}\frac{K_{\varepsilon}^{1,N}\left(y,z\right)}{z}h\left(y\right)\left[\varphi\left(y+z\right)-\varphi\left(y\right)\right]\mathrm{d}y$. We claim that $\psi_{\varepsilon}^{N}\to \psi^{N}$ strongly in $C\left(\left[0,\infty\right)\right)$ with $\psi^{N}\left(z\right):=\int_{0}^{D}\frac{K\left(y,z\right)}{z}h\left(y\right)\zeta_{1}^{N}\left(z\right)\left[\varphi\left(y+z\right)-\varphi\left(y\right)\right]\mathrm{d}y$. Note that by construction we have $\supp\psi_{\varepsilon}^{N}\subset \left[\frac{1}{N},N+1\right]$ for all $\varepsilon>0$. To show (uniform) convergence we have to use a cutoff also in $y$, i.e. one can estimate \begin{equation*}
\begin{split}
\abs{\psi^{N}\left(z\right)-\psi_{\varepsilon}^{N}\left(z\right)}&\leq \abs{\int_{0}^{D}\frac{K\left(y,z\right)-K_{\varepsilon}\left(y,z\right)}{z}\zeta_{1}^{N}\left(z\right)\xi_{1}^{M}\left(y\right)h\left(y\right)\left[\varphi\left(y+z\right)-\varphi\left(y\right)\right]\mathrm{d}y}\\
&\quad +\abs{\int_{0}^{D}\frac{K\left(y,z\right)-K_{\varepsilon}\left(y,z\right)}{z}\zeta_{1}^{N}\left(z\right)\xi_{2}^{M}\left(y\right)h\left(y\right)\left[\varphi\left(y+z\right)-\varphi\left(y\right)\right]\mathrm{d}y}\\
&=:\left(II\right)_{a,1}+\left(II\right)_{a,2}.
\end{split} \end{equation*} Using similar arguments as in \eqref{eq:weak:strong:0} we get \begin{equation}\label{eq:strong:conv:1:1}
\begin{split}
\left(II\right)_{a,2}
&\leq C\left(N,\varphi\right)\left[\frac{1}{M^{1+\tilde{b}-\rho}}+\frac{1}{M^{1-\rho}}\right]\longrightarrow 0
\end{split} \end{equation} for $M\to \infty$ and $N$ fixed. As $K$ is continuous on $\left[\frac{1}{M},D\right]\times\left[\frac{1}{N},N+1\right]$ for $M,N\in\mathbb{N}$ fixed, one has $K_{\varepsilon}\to K$ uniformly on $\left[\frac{1}{M},D\right]\times\left[\frac{1}{N},N+1\right]$ for $\varepsilon\to 0$. Thus we get $(II)_{a,1}\to 0$ for $\varepsilon\to 0$ (with $M,N$ fixed). Together with \eqref{eq:strong:conv:1:1} this shows that $\psi_{\varepsilon}^{N}\to \psi^{N}$ strongly. Thus one can pass to the limit in \eqref{eq:weak:strong:1} to obtain together with \eqref{eq:weak:strong:0}: $(II)\to 0$ as $\varepsilon\to 0$.
In a similar way we can show that $(III)\to 0$ for $\varepsilon\to 0$.
\end{proof}
\section{Moment estimates}
\begin{lemma}\label{Lem:moment:est:stand}
Let $h\in \mathcal{X}_{\rho}$ and $\alpha\in\mathbb{R}$. Then one has the following estimates
\begin{enumerate}
\item $\int_{0}^{D}x^{\alpha}h\left(x\right)\mathrm{d}x\leq C \norm{h}D^{1-\rho+\alpha}$ for all $D>0$ if $\rho-1<\alpha$,
\item $\int_{D}^{\infty}x^{\alpha}h\left(x\right)\mathrm{d}x\leq C\norm{h}D^{1-\rho+\alpha}$ for all $D>0$ if $\alpha<\rho-1$,
\end{enumerate} where $\norm{h}$ is defined in~\eqref{eq:S1E3}. \end{lemma}
\begin{proof}
~\begin{enumerate}
\item The case $\alpha\geq 0$ is clear by definition of $\mathcal{X}_{\rho}$. For $\alpha\in\left(\rho-1,0\right)$ one has, using a dyadic decomposition, that
\begin{equation*}
\begin{split}
&\quad\int_{0}^{D}x^{\alpha}h\left(x\right)\mathrm{d}x=\sum_{n=0}^{\infty}\int_{2^{-\left(n+1\right)}D}^{2^{-n}D}x^{\alpha}h\left(x\right)\mathrm{d}x\leq \sum_{n=0}^{\infty}2^{-\alpha\left(n+1\right)}D^{\alpha}\int_{2^{-\left(n+1\right)}D}^{2^{-n}D}h\left(x\right)\mathrm{d}x\\
&\leq \norm{h}\sum_{n=0}^{\infty}2^{-\alpha\left(n+1\right)}D^{\alpha}\left(2^{-n}D\right)^{1-\rho}=2^{-\alpha}\norm{h}D^{1+\alpha-\rho}\sum_{n=0}^{\infty}\left(2^{1+\alpha-\rho}\right)^{-n}=C\left(\alpha,\rho\right)\norm{h}D^{1+\alpha-\rho}.
\end{split}
\end{equation*}
\item This follows similarly using again a dyadic decomposition, now over the blocks $\left[2^{n}D,2^{n+1}D\right)$; a short sketch is given after the proof.
\end{enumerate} \end{proof}
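For the reader's convenience we sketch the second item: for $\alpha<\rho-1$ (note $\alpha<0$ automatically, as $\rho<1$),
\begin{equation*}
\int_{D}^{\infty}x^{\alpha}h\left(x\right)\mathrm{d}x=\sum_{n=0}^{\infty}\int_{2^{n}D}^{2^{n+1}D}x^{\alpha}h\left(x\right)\mathrm{d}x\leq \sum_{n=0}^{\infty}\left(2^{n}D\right)^{\alpha}\norm{h}\left(2^{n+1}D\right)^{1-\rho}=2^{1-\rho}\norm{h}D^{1+\alpha-\rho}\sum_{n=0}^{\infty}2^{n\left(1+\alpha-\rho\right)},
\end{equation*}
where the geometric series converges precisely because $1+\alpha-\rho<0$.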
\section{Dual problems}
\subsection{Existence results}\label{Sec:existence:results}
In this section we show the existence of solutions to some dual problems arising in the proof of the lower bounds. Throughout this section we will use the following notation: $\mathcal{M}^{\mathrm{fin}}$ will denote the space of finite measures and $\mathcal{M}^{\mathrm{fin}}_{+}$ the space of non-negative finite measures. Furthermore, $C_{b}^{n}$ denotes the space of bounded $n$-times continuously differentiable functions with bounded derivatives. Let $\omega\in\left(0,1\right)$, $A\in\mathbb{R}$ and consider the equation \begin{equation}\label{eq:standard:dual:problem}
\partial_{t}f\left(x,t\right)-P\int_{0}^{\infty}\frac{1}{y^{1+\omega}}\left[f\left(x+y,t\right)-f\left(x,t\right)\right]\mathrm{d}y=0 \end{equation} together with the initial value $f\left(\cdot,0\right)=\delta\left(\cdot-A\right)$.
\begin{proposition}\label{Prop:ex:stand:dual:distr}
There exists a (weak) solution $f\in C\left(\left[0,T\right],\mathcal{M}^{\mathrm{fin}}_{+}\right)$ of \eqref{eq:standard:dual:problem} with initial value $f_{0}=\delta\left(\cdot-A\right)$. Furthermore this $f$ satisfies $\supp f\left(\cdot,t\right)\subset \left(-\infty,A\right]$ and $\int_{\mathbb{R}}f\left(\cdot,t\right)\mathrm{d}x=1$ for all $t\in\left[0,T\right]$. \end{proposition}
\begin{proof}[Proof (Sketch)]
First we consider the regularized equation
\begin{equation}\label{eq:reg:weak:sol}
\begin{split}
\partial_{t}f\left(x,t\right)&=P\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu}\left[f\left(x+y,t\right)-f\left(x,t\right)\right]\mathrm{d}y\\
f\left(\cdot,0\right)&=\delta\left(\cdot-A\right)
\end{split}
\end{equation} with $\nu>0$. In the second step we will pass to the limit $\nu\to 0$. We can reformulate \eqref{eq:reg:weak:sol} as the following fixed-point problem: \begin{equation}\label{eq:reg:fix}
\begin{split}
f^{\nu}\left(x,t\right)=\delta\left(x-A\right)\mathrm{e}^{-Pt\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu}\mathrm{d}y}+P\int_{0}^{t}\mathrm{e}^{-P\left(t-s\right)\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu}\mathrm{d}y}\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu}f^{\nu}\left(x+y,s\right)\mathrm{d}y\mathrm{d}s.
\end{split} \end{equation} It is straightforward, applying the contraction mapping theorem, to obtain a solution $f^{\nu}\in C\left(\left[0,T\right],\mathcal{M}^{\mathrm{fin}}_{+}\right)$ for any $T>0$. Furthermore, one obtains $\int_{\mathbb{R}}f^{\nu}\left(x,t\right)\mathrm{d}x=1$ for all $t>0$ and $\nu>0$ (by integrating the equation, see below). In addition $f^{\nu}$ satisfies equation \eqref{eq:reg:weak:sol} in weak form, i.e. \begin{equation}\label{eq:reg:weak:form}
\begin{split}
\int_{\mathbb{R}}f^{\nu}\left(x,t\right)\psi\left(x\right)\mathrm{d}x=\psi\left(A\right)+P\int_{0}^{t}\int_{\mathbb{R}}\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu}f^{\nu}\left(x,s\right)\left[\psi\left(x-y\right)-\psi\left(x\right)\right]\mathrm{d}y\mathrm{d}x\mathrm{d}s
\end{split} \end{equation} for all $\psi\in C_{b}\left(\mathbb{R}\right)$. For $0<\tilde{\omega}<\omega$, taking $\psi\left(x\right)=\left(1+x^{2}\right)^{\tilde{\omega}/2}$ and using $\abs{\psi\left(x-y\right)-\psi\left(x\right)}\leq C\min\left\{\abs{y},\abs{y}^{\tilde{\omega}}\right\}$ (since $\psi$ is globally Lipschitz and satisfies $\abs{\psi\left(x-y\right)-\psi\left(x\right)}\leq \abs{y}^{\tilde{\omega}}$) we obtain (by approximation) \begin{equation*}
\begin{split}
\int_{\mathbb{R}}f^{\nu}\left(x,t\right)\psi\left(x\right)\mathrm{d}x&\leq \psi\left(A\right)+P\int_{0}^{t}\int_{\mathbb{R}}\int_{0}^{\infty}\frac{\abs{\psi\left(x-y\right)-\psi\left(x\right)}}{y^{1+\omega}+\nu}f^{\nu}\left(x,s\right)\mathrm{d}y\mathrm{d}x\mathrm{d}s\\
&\leq \psi\left(A\right)+CP\int_{0}^{t}\int_{\mathbb{R}}\int_{0}^{\infty}\frac{\min\left\{y,y^{\tilde{\omega}}\right\}}{y^{1+\omega}}f^{\nu}\left(x,s\right)\mathrm{d}y\mathrm{d}x\mathrm{d}s\leq C\left(T,\omega,\tilde{\omega},A\right),
\end{split} \end{equation*} since $\int_{0}^{\infty}\frac{\min\left\{y,y^{\tilde{\omega}}\right\}}{y^{1+\omega}}\mathrm{d}y<\infty$ for $\omega\in\left(0,1\right)$ and $0<\tilde{\omega}<\omega$, and $\int_{\mathbb{R}}f^{\nu}\left(\cdot,s\right)\mathrm{d}x=1$. Thus $\int_{\mathbb{R}}\abs{x}^{\tilde{\omega}}f^{\nu}\left(x,t\right)\mathrm{d}x\leq \int_{\mathbb{R}}\psi\left(x\right)f^{\nu}\left(x,t\right)\mathrm{d}x$ is uniformly bounded (i.e. independent of $\nu$ and $t\in\left[0,T\right]$).
Using this and the fact that $\left\{f^{\nu}\right\}_{\nu>0}$ is uniformly bounded by $1$, we can extract (by a diagonal argument) a subsequence $\left\{f^{\nu_{n}}\right\}_{n\in\mathbb{N}}$ (denoted in the following as $\left\{f^{n}\right\}_{n\in\mathbb{N}}$) such that $f^{n}\left(\cdot,t_{k}\right)$ converges in the sense of measures to some $f\left(\cdot,t_{k}\right)$ for all $k\in\mathbb{N}$, where $\left\{t_{k}\right\}_{k\in\mathbb{N}}=\left[0,T\right]\cap \mathbb{Q}$.
We next show that $f^{n}$ is equicontinuous in $t$ as a distribution, i.e. from \eqref{eq:reg:weak:form} we obtain for any $\psi\in C_{c}^{1}\left(\mathbb{R}\right)$: \begin{equation*}
\begin{split}
&\quad \abs{\int_{\mathbb{R}}\left(f^{n}\left(x,t\right)-f^{n}\left(x,s\right)\right)\psi\left(x\right)\mathrm{d}x}=\abs{P\int_{s}^{t}\int_{\mathbb{R}}f^{n}\left(x,r\right)\int_{0}^{\infty}\frac{1}{y^{1+\omega}+\nu_{n}}\left[\psi\left(x-y\right)-\psi\left(x\right)\right]\mathrm{d}y\mathrm{d}x\mathrm{d} r}\\
&\leq P\int_{s}^{t}\int_{\mathbb{R}}f^{n}\left(x,r\right)\left[\int_{0}^{1}\frac{\norm{\psi'}_{L^{\infty}}y}{y^{1+\omega}+\nu_{n}}\mathrm{d}y+\int_{1}^{\infty}\frac{2\norm{\psi}_{L^{\infty}}}{y^{1+\omega}+\nu_{n}}\mathrm{d}y\right]\mathrm{d}x\mathrm{d} r\leq C\left(\psi\right)\abs{t-s},
\end{split} \end{equation*} where $C\left(\psi\right)$ is a constant independent of $\nu$ but depending on $\psi$ and $\psi'$. Using the equicontinuity of $f^{n}$ (as a distribution) one can show that $f^{n}$ converges to some limit $f$ (in the sense of distributions) for all $t\in\left[0,T\right]$.
Using furthermore the uniform boundedness of $\int_{\mathbb{R}}\abs{x}^{\tilde{\omega}}f^{n}\left(x,t\right)\mathrm{d}x$ one can show that $f^{n}$ converges already in the sense of measures by approximating and cutting the test function for large values of $\abs{x}$.
Using similar arguments we can also show that the limit $f$ of $f^{n}\rightharpoonup f$ satisfies $f\in C\left(\left[0,T\right],\mathcal{M}^{\mathrm{fin}}_{+}\right)$ and that, taking the limit $n\to \infty$ in \eqref{eq:reg:weak:form}, $f$ satisfies \begin{equation}\label{eq:weak:sol:limit}
\begin{split}
\int_{\mathbb{R}}f\left(x,t\right)\psi\left(x\right)\mathrm{d}x=\psi\left(A\right)+\int_{0}^{t}\int_{\mathbb{R}}\int_{0}^{\infty}\frac{1}{y^{1+\omega}}f\left(x,s\right)\left[\psi\left(x-y\right)-\psi\left(x\right)\right]\mathrm{d}y\mathrm{d}x\mathrm{d}s
\end{split} \end{equation} for each $\psi\in C_{b}^{1}\left(\mathbb{R}\right)$ and all $t\in\left[0,T\right]$.
From the construction of $f$ using the contraction mapping principle we immediately get $\supp f\left(\cdot,t\right)\subset \left(-\infty,A\right]$ for all $t\in\left[0,T\right]$. To see $\int_{\mathbb{R}}f\left(\cdot,t\right)\mathrm{d}x=1$ for all $t\in\left[0,T\right]$ we integrate equation~\eqref{eq:standard:dual:problem} over $\mathbb{R}$ and use Fubini's theorem to obtain $\partial_{t}\int_{\mathbb{R}}f\left(\cdot,t\right)\mathrm{d}x=0$. Together with the initial condition, the claim follows. \end{proof}
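Although the proof is purely analytical, the fixed-point construction is easy to visualise numerically. The following Python fragment is our own illustrative sketch (the truncation of the $y$-integral, the Gaussian mollification of the initial datum and all grid parameters are ad hoc choices, not part of the statement); it evolves the regularised equation with an explicit Euler scheme and monitors the total mass, which should stay close to $1$:
\begin{verbatim}
import numpy as np

omega, nu, A = 0.5, 1e-2, 0.0
h, dt, T = 0.01, 1e-3, 0.5
x = np.arange(-5.0, A + h, h)        # support should stay inside (-inf, A]
f = np.exp(-((x - A) / 0.05) ** 2)   # Gaussian stand-in for delta(. - A)
f /= f.sum() * h                     # normalise the initial mass to 1
y = np.arange(h, 5.0, h)             # truncated kernel grid
K = 1.0 / (y ** (1 + omega) + nu)    # regularised kernel 1/(y^{1+w} + nu)
lam = (K * h).sum()                  # total jump rate (loss term)
for _ in range(int(T / dt)):
    gain = np.zeros_like(f)
    for j in range(len(y)):
        s = j + 1                    # y[j] = (j+1)*h, so shift by j+1 cells
        gain[:-s] += K[j] * f[s:] * h   # mass jumps from x+y down to x
    f += dt * (gain - lam * f)
print("mass after time T:", f.sum() * h)   # ~1, up to truncation error
\end{verbatim}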
\begin{remark}
The analogous result holds true if $f_{0}=-\delta\left(\cdot-A\right)$. \end{remark}
As a direct consequence of Proposition~\ref{Prop:ex:stand:dual:distr} we also obtain smooth solutions for smoothed initial data. Therefore for $\kappa>0$ we denote in the following by $\varphi_{\kappa}$ a non-negative, symmetric standard mollifier with $\supp \varphi_{\kappa}\subset\left[-\kappa, \kappa\right]$.
\begin{proposition}\label{Prop:ex:stand:dual:smooth:meas}
Let $f_{0}:=\delta\left(\cdot-A\right)$. Then there exists a solution $f\in C^{1}\left(\left[0,T\right],C^{\infty}\left(\mathbb{R}\right)\right)$ to \eqref{eq:standard:dual:problem} with initial datum $f_{0}\ast\varphi_{\kappa}=\varphi_{\kappa}\left(\cdot-A\right)$. \end{proposition}
\begin{proof}
This follows directly by convolution in $x$ from Proposition~\ref{Prop:ex:stand:dual:distr}. \end{proof}
\begin{proposition}\label{Prop:ex:stand:dual:smooth:func}
There exists a strong solution $f\in C^{1}\left(\left[0,T\right],C^{\infty}\left(\mathbb{R}\right)\right)$ to \eqref{eq:standard:dual:problem} with initial datum $f_{0}:=\chi_{\left(-\infty,A\right]}\ast \varphi_{\kappa}$. \end{proposition}
\begin{proof}
Let $G$ be the solution given by Proposition~\ref{Prop:ex:stand:dual:smooth:meas} for $G_{0}:=\delta\left(\cdot-A\right)\ast \varphi_{\kappa}$. Then $f\left(x,t\right):=\int_{x}^{\infty}G\left(y,t\right)\mathrm{d}y$ solves \eqref{eq:standard:dual:problem} with the desired initial condition. \end{proof}
In the same way as in the proofs of Proposition~\ref{Prop:ex:stand:dual:smooth:meas} and Proposition~\ref{Prop:ex:stand:dual:smooth:func} we obtain the following existence result:
\begin{proposition}\label{Prop:ex:dual:eps}
Let $\varepsilon>0$ and $L>0$, and let $\lambda_{1},\lambda_{2}>0$ be two constants (possibly depending on further parameters). Then there exists a weak solution $G\in C\left(\left[0,T\right],\mathcal{M}^{\mathrm{fin}}_{+}\right)$ and a strong solution $W\in C\left(\left[0,T\right],C^{\infty}\right)$ of the equation
\begin{equation}\label{eq:dual:eps}
\partial_{t} W\left(\xi,t\right)-\int_{0}^{1}\frac{h_{\varepsilon}\left(z\right)}{z}\left[\lambda_{1}\left(z+\varepsilon\right)^{-a}+\lambda_{2}\left(z+\varepsilon\right)^{b}\right]\left[W\left(\xi+\frac{z}{L},t\right)-W\left(\xi,t\right)\right]\mathrm{d}z=0
\end{equation} together with initial condition $G\left(\cdot,0\right)=\delta\left(\cdot-A\right)$ and $W\left(\cdot,0\right)=\chi_{\left(-\infty,A\right]}\ast\varphi_{\kappa}$. \end{proposition}
\begin{remark}
The measure $G$ has the same properties as the measure $f$ in Proposition~\ref{Prop:ex:stand:dual:distr}. \end{remark}
\begin{remark}
By convolution we also obtain a strong solution $G\in C\left(\left[0,T\right],C^{\infty}\right)$ of \eqref{eq:dual:eps} with initial condition $G\left(\cdot,0\right)=\delta\left(\cdot-A\right)\ast\varphi_{\kappa}$. \end{remark}
For further use we denote the integral kernels occurring in Proposition~\ref{Prop:ex:stand:dual:distr} and Proposition~\ref{Prop:ex:dual:eps} by \begin{equation}\label{eq:kernel:Neps}
N_{\omega}\left(z\right):=z^{-1-\omega}\quad \text{and} \quad N_{\varepsilon}\left(z\right):=\frac{h_{\varepsilon}\left(z\right)}{z}\left[\lambda_{1}\left(z+\varepsilon\right)^{-a}+\lambda_{2}\left(z+\varepsilon\right)^{b}\right]. \end{equation}
\begin{proposition}\label{Prop:ex:dual:sum}
Let $n\in\mathbb{N}$, $R\in \mathbb{R}$ and let $N_{i}\colon \left(0,\infty\right)\to\mathbb{R}_{\geq 0}$, $i=1,\ldots,n$, be either of the form $N_{\omega_{i}}$ for some $\omega_{i}\in \left(0,1\right)$ or of the form $N_{\varepsilon}$ given by \eqref{eq:kernel:Neps} (extended by $0$ to all of $\left(0,\infty\right)$). Let $N:=\sum_{i=1}^{n} N_{i}$. Then there exists a solution $f\in C^{1}\left(\left[0,T\right],C^{\infty}\left(\mathbb{R}\right)\right)$ to the equation
\begin{equation}\label{eq:dual:convolution}
\begin{split}
\partial_{t}f\left(x,t\right)=\int_{0}^{\infty}N\left(z\right)\left[f\left(x+z\right)-f\left(x\right)\right]\mathrm{d}z
\end{split}
\end{equation}
either with initial datum $f_{0}=\chi_{\left(-\infty,R\right]}\ast^{n} \varphi_{\kappa}$ or $f_{0}=\delta\left(\cdot-R\right)\ast^{n}\varphi_{\kappa}$, where $\ast^{n}$ denotes the $n$-fold convolution with $\varphi_{\kappa}$. \end{proposition}
\begin{proof}
It suffices to consider the case $n=2$ (otherwise argue by induction). Then by Proposition~\ref{Prop:ex:stand:dual:smooth:meas} and Proposition~\ref{Prop:ex:stand:dual:smooth:func} there exist solutions $f^{i}$ to equation~\eqref{eq:dual:convolution} with $N$ replaced by $N_{i}$ and initial datum $f^{1}_{0}=\delta\left(\cdot\right)\ast\varphi_{\kappa}$ and $f^{2}_{0}=\chi_{\left(-\infty,R\right]}\ast\varphi_{\kappa}$ (or $f^{2}_{0}=\delta\left(\cdot-R\right)\ast\varphi_{\kappa}$). A straightforward computation shows that the convolution $f:=f^{1}\ast f^{2}$ satisfies~\eqref{eq:dual:convolution} together with the correct initial condition. \end{proof}
\begin{remark}\label{Rem:properies}
Let $G_{\kappa}$ and $f_{\kappa}$ be the solutions given by Proposition~\ref{Prop:ex:dual:sum} with initial conditions $G_{\kappa}\left(\cdot,0\right)=\delta\left(\cdot-A\right)\ast^{n} \varphi_{\kappa}$ and $f_{\kappa}\left(\cdot,0\right)=\chi_{\left(-\infty,A\right]}\ast^{n}\varphi_{\kappa}$. Then from the construction in the proofs of Proposition~\ref{Prop:ex:dual:sum} and Proposition~\ref{Prop:ex:stand:dual:distr} we obtain:
\begin{enumerate}
\item $G_{\kappa}\geq 0$ on $\mathbb{R}$ (in the sense of measures) and $0\leq f_{\kappa}\leq 1$ for all $t\in\left[0,T\right]$,
\item $\supp G_{\kappa}\left(\cdot,t\right), \supp f_{\kappa}\left(\cdot,t\right)\subset \left(-\infty,A+n\kappa\right]$ for all $t\in\left[0,T\right]$,
\item $\int_{\mathbb{R}}G_{\kappa}\left(\cdot,t\right)\mathrm{d}x=1$ for all $t\in\left[0,T\right]$,
\item $f_{\kappa}$ is non-increasing.
\end{enumerate} \end{remark}
\subsection{Integral estimates for subsolutions}
In this section we will always assume that the integral kernel $N$ is given as a sum of kernels of the form $N_{\omega_{i}}$ or $N_{\varepsilon}$, and we prove several integral estimates that will be used frequently.
\begin{lemma}\label{Lem:der:int:est}
Let $\omega\in \left(0,1\right)$ and let $G$ be the solution of
\begin{equation}\label{eq:der:int:est}
\begin{split}
\partial_{t}G\left(x,t\right)&=P\int_{0}^{\infty}N_{\omega}\left(z\right)\left[G\left(x+z\right)-G\left(x\right)\right]\mathrm{d}z\\
G\left(\cdot,0\right)&=\delta\left(\cdot-A\right)\ast\varphi_{\kappa}=\varphi_{\kappa}\left(\cdot-A\right)
\end{split}
\end{equation}
given by Proposition~\ref{Prop:ex:stand:dual:smooth:meas}, where $P$ is a constant. Then for any $\mu \in \left(0,1\right)$ one has
\begin{enumerate}
\item $\int_{-\infty}^{A-D}G\left(x,t\right)\mathrm{d}x\leq C\left(\frac{\kappa}{D}\right)^{\mu}+ C\frac{P t}{D^{\omega}}$ for all $D>0$ and
\item $\int_{A-2}^{A}\abs{x-A}G\left(x,t\right)\mathrm{d}x\leq C_{\mu}\kappa^{\mu}+C_{\omega} Pt$.
\end{enumerate} \end{lemma}
\begin{proof}
By a shift we may assume $A=0$. Let $Z>0$. Then testing equation~\eqref{eq:der:int:est} with $\mathrm{e}^{Z\left(x-\kappa\right)}$ (note that this is possible as $\supp G\subset \left(-\infty,\kappa\right]$) one obtains
\begin{equation*}
\begin{split}
\partial_{t}\int_{\mathbb{R}}G\left(x,t\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x&=P\int_{\mathbb{R}}\int_{0}^{\infty}N_{\omega}\left(y\right)\left[G\left(x+y\right)-G\left(x\right)\right]\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}y\mathrm{d}x\\
&=P\int_{0}^{\infty}N_{\omega}\left(y\right)\left(\mathrm{e}^{-Zy}-1\right)\mathrm{d}y\int_{\mathbb{R}}G\left(x,t\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x=:M_{\omega}\left(Z\right)\int_{\mathbb{R}}G\left(x,t\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x.
\end{split}
\end{equation*}
Furthermore
\begin{equation*}
\begin{split}
\int_{\mathbb{R}}G\left(x,0\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x=\int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x.
\end{split}
\end{equation*}
Thus we obtain $\int_{\mathbb{R}}G\left(x,t\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x=\int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x\exp\left(-t\abs{M_{\omega}\left(Z\right)}\right)$. Estimating $M_{\omega}\left(Z\right)$ we obtain
\begin{equation*}
\begin{split}
\abs{M_{\omega}\left(Z\right)}&\leq P\int_{0}^{\infty}\frac{1-\mathrm{e}^{-Zy}}{y^{1+\omega}}\mathrm{d}y=-\frac{P}{\omega}\int_{0}^{\infty}\left(1-\mathrm{e}^{-Zy}\right)\frac{\partial}{\partial y}\left(y^{-\omega}\right)\mathrm{d}y\\
&=\frac{PZ}{\omega}\int_{0}^{\infty}\frac{\mathrm{e}^{-Zy}}{y^{\omega}}\mathrm{d}y=\frac{PZ^{\omega}}{\omega}\int_{0}^{\infty}y^{-\omega}\mathrm{e}^{-y}\mathrm{d}y=P\frac{\Gamma\left(1-\omega\right)}{\omega}Z^{\omega}\\
&=CPZ^{\omega}.
\end{split}
\end{equation*}
Using that $G=0$ on $\left(\kappa,\infty\right)$ we get
\begin{equation*}
\begin{split}
&\quad\int_{-\infty}^{\kappa}G\left(x,t\right)\left(1-\mathrm{e}^{Z\left(x-\kappa\right)}\right)\mathrm{d}x\\
&=\int_{\mathbb{R}}G\left(x,t\right)\mathrm{d}x-\int_{\mathbb{R}}G\left(x,t\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x=1-\int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x\exp\left(-t\abs{M_{\omega}\left(Z\right)}\right)\\
&\leq \left[\left(1-\int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x\right)+\int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x\abs{M_{\omega}\left(Z\right)}t\right].
\end{split}
\end{equation*}
As $\supp \varphi_{\kappa}\subset\left[-\kappa,\kappa\right]$ we can estimate $\mathrm{e}^{-2Z\kappa}\leq \int_{\mathbb{R}}\varphi_{\kappa}\left(x\right)\mathrm{e}^{Z\left(x-\kappa\right)}\mathrm{d}x\leq 1$. Then choosing $Z=\frac{1}{D}$ and using also the estimate for $M_{\omega}$ we obtain
\begin{equation*}
\begin{split}
&\quad \int_{-\infty}^{-D}G\left(x,t\right)\mathrm{d}x\leq \int_{-\infty}^{-D}G\left(x,t\right)\frac{1-\mathrm{e}^{\frac{x-\kappa}{D}}}{1-\mathrm{e}^{-1-\frac{\kappa}{D}}}\mathrm{d}x\leq \int_{-\infty}^{\kappa}G\left(x,t\right)\frac{1-\mathrm{e}^{\frac{x-\kappa}{D}}}{1-\mathrm{e}^{-1-\frac{\kappa}{D}}}\mathrm{d}x\\
&\leq\frac{1}{1-\mathrm{e}^{-1-\frac{\kappa}{D}}} \left[\left(1-\mathrm{e}^{-\frac{2\kappa}{D}}\right)+CPt\frac{1}{D^{\omega}}\right]\leq C\left(\frac{\kappa}{D}\right)^{\mu}+C\frac{Pt}{D^{\omega}}.
\end{split}
\end{equation*}
To prove the second part we use a dyadic decomposition and the estimate from the first part to obtain
\begin{equation*}
\begin{split}
\int_{-2}^{0}\abs{x}G\left(x,t\right)\mathrm{d}x&=\sum_{n=-1}^{\infty}\int_{-2^{-n}}^{-2^{-\left(n+1\right)}}\abs{x}G\left(x,t\right)\mathrm{d}x\leq C\sum_{n=-1}^{\infty}2^{-n}\left[\left(\kappa\,2^{n+1}\right)^{\mu}+Pt\,2^{\omega\left(n+1\right)}\right]\\
&=C\sum_{n=-1}^{\infty}\left[2^{\mu}\kappa^{\mu}\left(2^{\mu-1}\right)^{n}+2^{\omega}Pt\left(2^{\omega-1}\right)^{n}\right]\leq C_{\mu}\kappa^{\mu}+C_{\omega}Pt.
\end{split}
\end{equation*} \end{proof}
We now consider the situation of Proposition~\ref{Prop:ex:dual:sum}, where the integral kernel is given as a sum of different kernels.
\begin{lemma}\label{Lem:int:est:conolution}
In the situation of Proposition~\ref{Prop:ex:dual:sum} with $n=2$ one has
\begin{enumerate}
\item $\int_{-\infty}^{A-D}G\left(x,t\right)\mathrm{d}x\leq \int_{-\infty}^{A-D/2}G_{1}\left(x,t\right)\mathrm{d}x+\int_{-\infty}^{-D/2}G_{2}\left(x,t\right)\mathrm{d}x$
\item $\int_{A-1}^{A}\abs{x-A}G\left(x,t\right)\mathrm{d}x\leq \int_{A-1-\kappa}^{A+\kappa}\abs{x-A}G_{1}\left(x\right)\mathrm{d}x+\int_{-1-\kappa}^{\kappa}\abs{x}G_{2}\left(x\right)\mathrm{d}x$.
\end{enumerate} \end{lemma}
\begin{proof}
We consider again only the case $A=0$, while the general result follows by shifting. \begin{enumerate}
\item One has
\begin{equation*}
\begin{split}
\int_{-\infty}^{-D}G\left(x,t\right)\mathrm{d}x&=\int_{\mathbb{R}}\int_{\mathbb{R}}\chi_{\left(-\infty,-D\right]}\left(x+y\right)G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&=\int_{\mathbb{R}}\int_{-\infty}^{-D-y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&=\int_{-\frac{D}{2}}^{\infty}\int_{-\infty}^{-D-y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y+\int_{-\infty}^{-\frac{D}{2}}\int_{-\infty}^{-D-y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&\leq \int_{-\infty}^{-\frac{D}{2}}G_{1}\left(x,t\right)\mathrm{d}x\int_{\mathbb{R}}G_{2}\left(y,t\right)\mathrm{d}y+\int_{-\infty}^{-\frac{D}{2}}G_{2}\left(y,t\right)\mathrm{d}y\int_{\mathbb{R}}G_{1}\left(x,t\right)\mathrm{d}x\\
&\leq \int_{-\infty}^{-D/2}G_{1}\left(x,t\right)\mathrm{d}x+\int_{-\infty}^{-D/2}G_{2}\left(x,t\right)\mathrm{d}x
\end{split}
\end{equation*}
where in the last step we used that $G_{i}$ is normalized for $i=1,2$.
\item To prove the second estimate we first rewrite
\begin{equation*}
\begin{split}
&\quad \int_{-1}^{0}\abs{x}G\left(x,t\right)\mathrm{d}x=\int_{\mathbb{R}}\int_{\mathbb{R}}\chi_{\left[-1,0\right]}\left(x+y\right)\abs{x+y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&=\int_{\mathbb{R}}\int_{-1-y}^{-y}\abs{x+y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&=\int_{-\infty}^{\kappa}\int_{-1-y}^{-y}\abs{x+y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y
\end{split}
\end{equation*}
where we used that $G_{2}=0$ on $\left\{y>\kappa\right\}$. Using also $G_{1}=0$ on $\left\{x>\kappa\right\}$ we have furthermore
\begin{equation*}
\begin{split}
\int_{-1}^{0}\abs{x}G\left(x,t\right)\mathrm{d}x\leq \int_{-\infty}^{\kappa}\int_{-1-y}^{\kappa}\abs{x+y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y.
\end{split}
\end{equation*}
Noting that for $y<-1-\kappa$ we have $-1-y>\kappa$, so that the $x$-integral vanishes as $G_{1}=0$ on $\left\{x>\kappa\right\}$, we obtain (also using $-1-\kappa\leq -1-y$ for $y\in \left[-1-\kappa,\kappa\right]$)
\begin{equation*}
\begin{split}
\int_{-1}^{0}\abs{x}G\left(x,t\right)\mathrm{d}x&\leq \int_{-1-\kappa}^{\kappa}\int_{-1-\kappa}^{\kappa}\abs{x+y}G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&\leq \int_{-1-\kappa}^{\kappa}\int_{-1-\kappa}^{\kappa}\left(\abs{x}+\abs{y}\right)G_{1}\left(x,t\right)G_{2}\left(y,t\right)\mathrm{d}x\mathrm{d}y\\
&\leq \int_{-1-\kappa}^{\kappa}\abs{x}G_{1}\left(x,t\right)\mathrm{d}x+\int_{-1-\kappa}^{\kappa}\abs{y}G_{2}\left(y,t\right)\mathrm{d}y,
\end{split}
\end{equation*}
where in the last step we used $\int_{-1-\kappa}^{\kappa}G_{i}\left(x,t\right)\mathrm{d}x\leq \int_{\mathbb{R}}G_{i}\left(x,t\right)\mathrm{d}x=1$.
\end{enumerate} \end{proof}
\begin{remark}\label{Rem:est:conv}
By induction we can prove the corresponding estimates for $n>2$ with $D/2$ replaced by $D/2^{n-1}$ and $\kappa$ replaced by $n\kappa$ (and of course summing over all $G_{i}$, $i=1,\ldots, n$ on the right hand side). \end{remark}
\end{document} | arXiv |
Equivalent Definitions of e
July 30, 2021 / Calculus, NQOTW / Proofs / By Dave Peterson
It is not unusual for mathematicians to define a concept in multiple ways, which can be proved to be equivalent. One definition may lead to a theorem, which another presentation uses as the definition, from which the original definition can be proved as a theorem. Here, in yet another good question from late May, we have two different ways to "define" the number \(e=2.71828…\), a series and a limit, and a student wants to prove directly that they are equivalent. We'll get a proof, then dig in to really understand it.
Proving two definitions are equivalent
Matthew asked:
I'm looking for a rigorous proof that the following definitions of e are equivalent:
e = 1 + 1 + 1/2! + 1/3! + 1/4! + …
e = lim [n→∞] (1 + 1/n)^n
What I've done so far:
I understand that:
lim [n→∞] (1 + 1/n)^n = 1 + 1 + (1 – 1/n) 1/2! + (1 – 1/n)(1 – 2/n) 1/3! + (1 – 1/n)(1 – 2/n)(1 – 3/n) 1/4! + …
I also understand this:
lim [n→∞] (1 – 1/n)(1 – 2/n)(1 – 3/n) = 1 x 1 x 1 = 1
And so if we take a fixed value m (with m as a natural number):
lim [n→∞] 1 + 1 + (1 – 1/n) 1/2! + (1 – 1/n)(1 – 2/n) 1/3! + (1 – 1/n)(1 – 2/n)(1 – 3/n) 1/4! + … + (1 – 1/n)(1 – 2/n)(1 – 3/n)…(1 – (m – 1)/n) 1/m!
= 1 + 1 + 1/2! + 1/3! + 1/4! + … + 1/m!
But I then get stuck on the next step. I only understand the 'equivalence' where the series terminates at a fixed natural number (in this case m). How do I make the transition to proving it for the infinite series, where the number of members of the series approaches infinity, whilst n also approaches infinity within each element of the series?
I'm in particular looking for a rigorous proof, where the two sequences are named t_n and s_n.
So that for every epsilon there exists a number N so that for every n > N then |t_n – s_n| < epsilon.
Really grateful for any help on this!
Matthew knows how to ask a question: He has clearly stated what he wants to do, and shown a good bit of work, including where the difficulty lies. But we'll need to get up to speed to make sure we understand the question, as well as this initial work!
What we need to prove
The two "definitions" are quite different, one a series, the other a limit. Here is how Wikipedia introduces its article on e:
The number e, also known as Euler's number, is a mathematical constant approximately equal to 2.71828, and can be characterized in many ways. It is the base of the natural logarithm. It is the limit of \(\left(1+\frac{1}{n}\right)^n\) as n approaches infinity, an expression that arises in the study of compound interest. It can also be calculated as the sum of the infinite series
$$e=\sum_{n=0}^{\infty }{\frac {1}{n!}}=1+{\frac {1}{1}}+{\frac {1}{1\cdot 2}}+{\frac {1}{1\cdot 2\cdot 3}}+\cdots $$
That is, they take the limit as the actual definition, and present the series as a way to calculate it. Here is a table of the first 11 terms of the series and its sums:
We have already reached 8 significant digits of accuracy.
Here is a table of the first 11 values of the limit:
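 n    (1 + 1/n)^n
 1    2
 2    2.25
 3    2.37037037
 4    2.44140625
 5    2.48832000
 6    2.52162637
 7    2.54649970
 8    2.56578451
 9    2.58117479
 10   2.59374246
 11   2.60419901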
We don't even have one significant digit yet!
In fact, if we make the limit move exponentially faster, by using \(2^n\) in place of n, it is still slow to converge:
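 n = 2^k   (1 + 1/n)^n
 2         2.25
 8         2.56578
 64        2.69734
 512       2.71563
 8192      2.71812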
It takes 8 thousand "steps" to get 4 significant digits! (Of course, in principle we only need to do this calculation once, rather than summing terms, so it is not that inefficient to calculate, apart from the efficiency of using a very large exponent in the first place, and deciding what value to use.)
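If you want to reproduce these numbers yourself, a few lines of Python (a quick afterthought; any language would do) generate both tables:

from math import factorial

for n in range(11):
    T = sum(1/factorial(k) for k in range(n + 1))   # partial sums of the series
    S = (1 + 1/(n + 1))**(n + 1)                    # the limit expression
    print(n, round(T, 8), round(S, 8))

print((1 + 1/2**13)**(2**13))   # about 2.71812 -- four significant digits at n = 8192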
But they do approach the same limit. How can we prove it?
What he's done so far
Let's look at what Matthew did, which he didn't fully explain.
First, he said $$\lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n = 1 + 1 + \left(1 – \frac{1}{n}\right) \frac{1}{2!} + \left(1 – \frac{1}{n}\right)\left(1 – \frac{2}{n}\right) \frac{1}{3!} + \left(1 – \frac{1}{n}\right)\left(1 – \frac{2}{n}\right)\left(1 – \frac{3}{n}\right) \frac{1}{4!} + \cdots$$
Where did that come from? He has applied the binomial theorem to \(\left(1 + \frac{1}{n}\right)^n\); this theorem in general says that $$(1+x)^n = \sum_{k=0}^n {_nC_k}x^k$$
The kth term of this sum (starting with the 0th) is $${_nC_k}\left(\frac{1}{n}\right)^k = \frac{n!}{k!(n-k)!}\frac{1}{n^k} =$$ $$\frac{n(n-1)(n-2)\cdots(n-k+2)(n-k+1)}{n^k}\frac{1}{k!} =$$ $$\frac{n}{n}\cdot\frac{n-1}{n}\cdot\frac{n-2}{n}\cdots\frac{n-k+2}{n}\cdot\frac{n-k+1}{n}\frac{1}{k!} =$$ $$\left(1-\frac{0}{n}\right)\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-2}{n}\right)\left(1-\frac{k-1}{n}\right)\frac{1}{k!}$$
which agrees with what he wrote; observe that for \(k=0\) the term has zero factors, and is simply 1; for \(k=1\), it is just the one factor \(\left(\frac{n}{n}\right) = 1\). That's why Matt has written the first two terms as mere numbers (and will continue doing so): for clarity.
His calling this the limit, however, is premature, because n is still a variable. This is the tricky part.
The proof
Doctor Fenton answered, starting his proof the same way but being more careful with limits:
I recalled a discussion of this in an old classic, Richard Courant's Calculus textbook.
Let Tn denote the n-th partial sum of the series \(\sum_{k=0}^\infty\frac{1}{k!}\) , so \(T_n=\sum_{k=0}^n\frac{1}{k!}\),
and let Sn denote \((1+\frac{1}{n})^n\) .
By the Binomial Theorem,
$$S_n= \sum_{k=0}^n {_nC_k}\frac{1}{n^k}= \sum_{k=0}^n \frac{n!}{k!(n-k)!}\frac{1}{n^k}= \sum_{k=0}^n \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)$$
Clearly, Sn ≤ Tn, since each term in Tn is multiplied by a product of factors, each less than 1, to obtain the corresponding term of Sn.
Also, notice that both sequences are increasing and bounded, so each converges. Let
\(S=\lim_{n\to\infty} S_n\) .
If m < n, then the sum $$\sum_{k=0}^m \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{m-1}{n}\right)\lt S_n$$
since Sn has additional non-negative terms added. If we take the limit as n→∞, the left side approaches Tm , while the right side approaches the limit S. Then we have that Tm ≤ S, so Sm ≤ Tm ≤ S.
Now, taking the limit as m→∞ gives S = T, so the two limits are the same.
There's a typo in that summation, which I can't remove because it figures into the discussion; don't worry if you find it.
Fixing an error
Matthew replied, catching that error and helping us out in the process:
Hi, thanks so much for your quick answer. I think I'm getting closer but there's still some things I'm not getting.
I think when you wrote $$\sum_{k=0}^m \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{m-1}{n}\right)\lt S_n$$
you must have meant: $$\sum_{k=0}^m \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)\lt S_n$$
Or maybe: $$1+1+\sum_{k=2}^m \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)\lt S_n$$
If I expand this last series on the left I get:
(1/2!){1-1/n} +
(1/3!){(1-1/n)(1-2/n)} + … +
(1/m!) {(1-1/n)(1-2/n)…(1-(m-1)/n)}
So then Sn has all these terms, but some additional terms added.
Am I making sense? Sorry if I've got this confused somewhere!
The "or maybe" is really the same summation, with the first two terms stated explicitly as we saw before.
Clarifying the limit
Doctor Fenton confirmed his correction, and added some notation to make the details easier to talk about:
You are correct. I was hurrying to get ready for a meeting and typed m instead of k.
The idea is to take a sum of a fixed number m of terms from Sn, so that the partial sum Sn is larger than the sum of its first m terms.
Call this sum Sn,m, so Sn,m < Sn, and as n→∞, Sn,m→Tm while Sn→S, giving Tm ≤ S. Then Tm is squeezed between Sm and S.
Sorry to put you to so much work over a typo.
That is, we are defining $$S_{n,m} = \sum_{k=0}^m \frac{1}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)$$ and \(S_{n,m}< S_n\) for all n, for any given m, so, taking the limit on each side, \(T_m\le S\). Giving a name to the sum of only m terms of the sum for n makes it easier to talk precisely about what we are doing.
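If you'd like to experiment with \(S_{n,m}\) yourself, here is a direct Python transcription of that definition (a convenience for the reader; the proof does not depend on it):

from math import factorial

def S(n, m):
    total = 0.0
    for k in range(m + 1):
        prod = 1.0
        for j in range(1, k):      # product (1-1/n)(1-2/n)...(1-(k-1)/n)
            prod *= 1 - j/n
        total += prod / factorial(k)
    return total

print(S(4, 3))        # 2.4375, the highlighted entry S_{4,3} below
print(S(10**6, 3))    # ~2.66667 = T_3, illustrating S_{n,m} -> T_m as n grows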
Matthew wrote back, nicely stating what he did and did not understand:
Hi there, many thanks for your further response and for clarifying.
I think I now understand the proof in general terms but I'm not sure I fully understand all the details. In particular I'm not clear exactly how it follows that Tm < S.
We have the following inequalities:
Sn,m < Sn for all values of n (as Sn always includes additional non-negative terms)
Sn < S for all values of n (as Sn is monotonically increasing, approaching the limit S)
Sn,m < Tm for all values of n (as Sn,m contains the same terms as Tm, except each is multiplied by further positive coefficients less than 1)
I'm not sure I can get from these inequalities that Tm < S
But maybe if Sn,m approaches the limit of Tm as n→∞, and Sn,m remains less than Sn without approaching the limit of Sn, then I think it would follow that Tm is less than Sn.
And then I think the rest would follow as m → ∞, as Tm is 'squeezed' between Sm and S. i.e. Sm < Tm < S, and Sm → S as m → ∞.
I'm not sure how I can show that Sn,m does not approach the limit of Sn as m → ∞, although it seems highly intuitive to assume it does not.
Again, hope I'm making sense and not getting mixed up, or if I'm missing something obvious.
It can be helpful to look at some actual numbers. Here is a table of values of our \(S_{n,m}\) showing the inequalities Matt points out:
I have highlighted \(S_{4,3} = 2.43750\); we can see that
\(S_{4,3} = 2.43750 < S_4 = 2.44141\)
\(S_4 = 2.44141 < S = 2.71828\)
\(S_{4,3} = 2.43750 < T_3 = 2.66667\)
But these inequalities in themselves don't lead us to the conclusion that \(T_m\) and \(S_n\) have the same limit.
Doctor Fenton showed the missing link:
You write
I'm not sure I can get from these inequalities that Tm < S.
But maybe if Sn,m approaches the limit of Tm as n→∞ and Sn,m remains less than Sn without approaching the limit of Sn, then I think it would follow that Tm is less than Sn
That's exactly correct. Sn,m is the sum of the first m terms of Sn, and in this part of the argument, m is fixed.
Sn,m = 1 + (1/1!) + (1/2!)(1-1/n) + … + (1/m!)(1-1/n)(1-2/n)⋅⋅⋅(1-(m-1)/n)
Sn = 1 + 1/1! + (1/2!)(1-1/n) + … + (1/n!)(1-1/n)(1-2/n)⋅⋅⋅(1-(n-1)/n) .
Sn,m always has m terms, a fixed number, while the number of terms in Sn increases without bound. As n→∞, each term (1-k/n)→1, so the kth term of Sn,m approaches 1/k!, i.e. lim(n→∞) Sn,m = Tm . But the number of terms in Sn keeps increasing, so that Sn approaches S, which turns out to be T = e.
His table would differ in just one point, taking the limit of each column rather than just an inequality.
The fact that we are holding m fixed and then letting n vary gives us a grip on the limits.
Matthew now worked out the final bit:
Hi – thanks for your encouraging response.
I've been wrestling with this part of the proof:
If we take the limit as n→∞, the left side approaches Tm , while the right side approaches the limit S. Then we have that Tm ≤ S"
I think I've got it now!
If I say an < bn for all n, and say as n→∞ then an→a, and bn→b,
then it follows that a ≤ b.
For if no member of the series of an is greater than any member of the series bn, then it is not possible that a > b.
But it is possible that a = b, as it could be that bn – an→0 as n→∞.
And it is possible that a < b as it could be that (lim n→∞ bn – an) > 0.
Hence a ≤ b.
So replace an with Sn,m and bn with Sn, then given that Sn,m→Tm and Sn→S when n→∞ and Sn,m < Sn for all n it follows that Tm ≤ S.
And the rest follows!
So I think I'm home and dry with this one! (assuming my above reasoning is correct…)
Thanks again for taking the time to read and respond.
Well explained! If \(a_n\to a\), \(b_n\to b\), and \(a_n<b_n\) for all n, we can't be sure that \(a<b\), but we do know that \(a\le b\). (For instance, \(a_n = -1/n < b_n = 1/n\) for every n, yet both converge to 0.)
Doctor Fenton agreed:
That's correct!
So we have the proof. | CommonCrawl |
\begin{document}
\title{Coherent sets for nonautonomous dynamical systems} \begin{abstract} We describe a mathematical formalism and numerical algorithms for identifying and tracking slowly mixing objects in nonautonomous dynamical systems. In the autonomous setting, such objects are variously known as almost-invariant sets, metastable sets, persistent patterns, or strange eigenmodes, and have proved to be important in a variety of applications. In this current work, we explain how to extend existing autonomous approaches to the nonautonomous setting. We call the new time-dependent slowly mixing objects \emph{coherent sets} as they represent regions of phase space that disperse very slowly and remain coherent. The new methods are illustrated via detailed examples in both discrete and continuous time. \end{abstract}
\section{Introduction}
The study of transport and mixing in dynamical systems has received considerable attention in the last two decades; see e.g.\ \cite{Meiss1992,wiggins_92,aref_02,wiggins_05} for discussions of transport phenomena. In particular, the detection of very slowly mixing objects, known variously as almost-invariant sets, metastable sets, persistent patterns, or strange eigenmodes, has found wide application in fields such as fluid dynamics \cite{pikovsky_popovych_03,liu_haller_04,popovych_pikovsky_eckhardt_07}, ocean dynamics \cite{froyland_padberg_england_treguier_07, npg}, astrodynamics \cite{dellnitz_etal_05}, and molecular dynamics \cite{deuflhard_etal_00,schuette_huisinga_deuflhard_01}. A shortcoming of this prior work, based around eigenfunctions of Perron--Frobenius operators (or transfer operators, or evolution operators) is the restriction to autonomous systems or periodically forced systems. In this work, we extend the notions of almost-invariant sets, metastable sets, persistent patterns, and strange eigenmodes to time-dependent
\emph{Lagrangian coherent sets}. These coherent sets form a time parameterised family of sets that approximately follow the flow and disperse very slowly; in other words \textit{they stay coherent}. Coherent sets are the natural nonautonomous analogue to almost-invariant sets.
The standard dynamical systems model of transport assumes that the motion of passive particles is completely determined by either an autonomous or a time-dependent vector field. Traditional approaches to understanding transport are based upon the determination of the location of geometric objects such as invariant manifolds. In the autonomous setting, an invariant manifold of one dimension less than the ambient space will form an impenetrable transport barrier that locally partitions the ambient space. In the periodically-forced setting, primarily in two-dimensional flows, it has been shown that slow mixing in the neighbourhood of invariant manifolds is sometimes controlled by ``lobe dynamics'' \cite{romkedar_etal_1990,romkedar_wiggins_90,wiggins_92}. In the truly non-autonomous, or aperiodically forced setting, finite-time hyperbolic material lines \cite{haller_00} and surfaces \cite{haller_01} have been proposed as generalisations of invariant manifolds that form barriers to mixing. These material lines and surfaces are known as \emph{Lagrangian coherent structures}; see also \cite{shadden_lekien_marsden_05} for an alternative definition. The geometric approach can often be used to find co-dimension 1 sets (coherent structures) that form boundaries of coherent sets.
An alternative to the geometric approach is the ergodic theoretic approach, which attempts to locate almost-invariant sets (or metastable sets) directly, rather than inferring their location indirectly from their boundaries. The basic tool is the Perron--Frobenius operator (or transfer operator). Real eigenvalues of this operator close to 1 correspond to eigenmodes that decay at slow (exponential) rates.
Almost-invariant sets are heuristically determined from the corresponding eigenfunctions $f$ as sets of the form $\{f> c\}$ or $\{f<c\}$ for thresholds $c\in\mathbb{R}$. Such an approach arose in the context of smooth autonomous maps and flows on subsets of $\mathbb{R}^d$ \cite{dellnitz_junge_99, dellnitz_junge_97} about a decade ago. Further theoretical and computational extensions have since been constructed \cite{froyland_dellnitz_03, froyland_05, froyland_08}. A parallel series of work specific to time-symmetric Markov processes and applied to identifying molecular conformations was developed in \cite{schuettehabil,deuflhard_etal_98,deuflhard_etal_00} and surveyed in \cite{schuette_huisinga_deuflhard_01}.
There have been some recent studies of the connections between slow mixing in \emph{periodically} driven fluid flow and eigenfunctions of Perron--Frobenius operators. Liu and Haller \cite{liu_haller_04} observe via simulation a transient ``strange eigenmode'' as predicted by classical Floquet theory. Pikovsky and Popovych \cite{pikovsky_popovych_03,popovych_pikovsky_eckhardt_07}
numerically integrated an advection-diffusion equation to simulate the evolution of a passive scalar, observing that it is the sub-dominant eigenfunction of the Perron--Frobenius operator that describes the most persistent deviation from the unique steady state.
The Perron--Frobenius operator based approach has been successful in a variety of application areas; however, as the key mathematical object is an \emph{eigen}function, there is no simple extension of the method to systems that have \emph{nonperiodic} time dependence\footnote{A relevant analogy to see this is the following. Consider repeated application of a single matrix $A$. The eigenvectors of $A$ provide information on directions of exponential growth/decay specified by the corresponding eigenvalues. Similarly, the eigenvectors of a product of matrices $A_k\cdots A_2 A_1$ describe directions of exponential growth/decay, specified by the eigenvalues of the product, under \emph{repeated} application of this matrix product. However, the directions of exponential growth/decay under a \emph{non-repeating} product $\cdots A_k\cdots A_2 A_1$ cannot in general be found as eigenvectors of some matrix.}
Indeed, Liu and Haller \cite{liu_haller_04} state that: \begin{quote} ``...strange eigenmodes may also be viewed as eigenfunctions of an appropriate Frobenius-Perron operator...This fresh approach offers an alternative view on scalar mixing, but leaves the questions of completeness and general time-dependence open.'' \end{quote} It is this question of general time-dependence that we address in the current work. We extend a standard formalism for random dynamical systems to the level of Perron--Frobenius operators to create a Perron--Frobenius operator framework for general time-dependence. We also state an accompanying numerical algorithm, and demonstrate its effectiveness in identifying strange eigenmodes and coherent sets.
An outline of the paper is as follows. In Section 2 we formalise the notions of nonautonomous systems in both discrete and continuous time. In Section 3 we describe a Galerkin projection method that we will use to produce finite matrix representations of Perron--Frobenius operators. In Section 4 we define the critical constructions for the nonautonomous setting. We show that the nonautonomous analogues of strange eigenmodes are described by the ``Oseledets subspaces'' or ``Lyapunov vectors'' corresponding to compositions of the projected Perron--Frobenius operators. In Section 5 we describe in detail a numerical algorithm to practically compute these slowly decaying modes, and demonstrate that in the continuous time setting, these modes vary continuously in time. Our numerical approach is illustrated firstly in the discrete time setting with an aperiodic composition of interval maps, and secondly in the continuous time setting with an aperiodically forced flow on a cylinder. Section 6 provides some further background on almost-invariant sets and coherent sets and Section 7 describes a new heuristic to extract coherent sets from slowly decaying modes in the nonautonomous setting. This heuristic is then illustrated using the examples from Section 5.
\section{Nonautonomous Dynamical Systems}
We will treat time-dependent dynamical systems on a smooth compact $d$-dimensional manifold $M\subset\mathbb{R}^D$, $D\ge d$, in both discrete and continuous time. In order to keep track of ``time'' we use a probability space $(\Omega,\mathcal{H},\mathbb{P})$, with the passing of time controlled by an ergodic automorphism $\theta:\Omega\circlearrowleft$ preserving $\mathbb{P}$ (i.e.\ $\mathbb{P}=\mathbb{P}\circ \theta^{-t}$ for all $t\ge 0$). We require this somewhat more complicated description of time for technical reasons: to run the ergodic-theoretic arguments in Theorem \ref{thm:main} that guarantee the existence of the nonautonomous analogues of strange eigenmodes. The requirement that $\mathbb{P}$ be an ergodic probability measure rules out obvious choices for $\Omega$ and $\theta$: (i) in discrete time, $\Omega=\mathbb{Z}$ and $\theta^s(t)=t+s$, and (ii) in continuous time, $\Omega=\mathbb{R}$ and $\theta^s(t)=t+s$. In both (i) and (ii), there is no ergodic probability measure on $\Omega$ preserved by $\theta$. In the next two sections, we will introduce suitable examples of $\Omega$ and $\theta$ and describe the nonautonomous systems they generate.
\subsection{Discrete time -- Maps}
In the discrete time setting, we will think of $\Omega\subset (\mathbb{Z})^\mathbb{Z}$, and $\theta$ as a left shift $\sigma$ on $\Omega$ defined by $(\sigma\omega)_i=\omega_{i+1}$, where $\omega=(\ldots,\omega_{-1},\omega_0,\omega_1,\ldots)\in\Omega$. We assume that $\sigma$ is ergodic with respect to $\mathbb{P}$. Let $\mathcal{T}=\{T_{\omega_0}\}_{{\omega_0}\in \mathbb{Z}}$ be a collection of (possibly non-invertible) piecewise differentiable maps on a compact manifold $M$. For brevity, we will sometimes write $T_\omega$ in place of $T_{\omega_0}$. We will define a nonautonomous dynamical system by map compositions of the form $T_{\sigma^{k-1}\omega}\circ\cdots\circ T_{\sigma\omega}\circ T_{\omega}$. Define $$\Phi(k,\omega,x):=\left\{
\begin{array}{ll}
T_{\sigma^{k-1}\omega}\circ\cdots\circ T_{\sigma\omega}\circ T_{\omega}(x), & \hbox{$k>0$;} \\
{\rm Id}, & \hbox{$k=0$;} \\
T^{-1}_{\sigma^{k}\omega}\circ\cdots\circ T^{-1}_{\sigma^{-2}\omega}\circ T^{-1}_{\sigma^{-1}\omega}(x), & \hbox{$k<0$.}
\end{array}
\right. $$ For $k\ge 0$ (resp.\ $k<0$), $\Phi(k,\omega,x)$ represents the forward time (resp.\ backward time) $k$-fold application of the nonautonomous dynamics to the point $x$ initialised at ``time'' $\omega$. Whenever $T_\omega$ is non-invertible, $T^{-1}_\omega(x)$ will represent the finite set of all preimages of $x$. We call $\Phi$ a \emph{map cocycle}.
\begin{definition} \label{invariantmeasurediscrete} Endow $M$ with the Borel $\sigma$-algebra and let $\mu$ be a probability measure on $M$. We call $\mu$ an \emph{invariant measure} if $\mu\circ\Phi(-1,\omega,\cdot)=\mu$ for all $\omega\in\Omega$. \end{definition}
This definition of an invariant measure is stricter than is usual for random or nonautonomous dynamical systems (e.g.\ \cite[Definition 1.4.1]{arnoldbook}). More generally, one may allow sample measures $\mu=\mu_\omega$ and insist that $\mu_{\sigma^{-1}\omega}\circ\Phi(-1,\omega,\cdot)=\mu_{\omega}$ for all $\omega\in\Omega$.
\begin{example}[Aperiodic map cocycle]\label{Discreteeg} We construct a map cocycle $\Phi$ by the composition of maps $T_i$ from a collection $\mathcal{T}$ according to sequences of indices $\omega\in\Omega$. The collection $\mathcal{T}:=\set{T_1,T_2,T_3,T_4}$ consists of expanding maps of the circle $S^1$, which we think of as $[0,1]$ with endpoints identified. The sequence space $\Omega\subset\set{1,2,3,4}^\mathbb{Z}$ is given by $$ \Omega = \set{\omega\in\set{1,2,3,4}^\mathbb{Z}:\forall i\in\mathbb{Z},\, M_{\omega_i\omega_{i+1}}=1}, $$ with adjacency matrix $$ M=\left(\begin{array}{cccc} 1&1&0&0\\ 0&0&1&1\\ 1&1&0&0\\ 0&0&1&1 \end{array}\right). $$ Elements of $\Omega$ correspond to bi-infinite paths in the graph shown in Figure~\ref{fig:subshift}. \begin{figure}
\caption{Graph of the sequence space $\Omega$.}
\label{fig:subshift}
\end{figure} The shift $\sigma:\Omega\to\Omega$ is a subshift of finite type. A Borel $\sigma$-algebra $\mathcal{H}$ is generated by the length-one cylinder sets $C_i=\set{\omega:\omega_0=i}$, $i=1,\ldots,4$, and by giving equal measure to these four cylinder sets, we generate a shift-invariant probability measure $\mathbb{P}$.
The maps of $\mathcal{T}$ are defined in terms of a continuous piecewise-linear map $H_a:S^1\to S^1$, which has almost-invariant sets (see Definition \ref{aidefn}, Section 6) $[0,0.5]$ and $[0.5,1]$ for $a$ close to zero. Define $$ H_a(x) = \left\{\begin{array}{ll} +3x & 0 \leq x < \frac{1}{6} + \frac{1}{2}a, \\ -3x+3a+1& \frac{1}{6} + \frac{1}{2}a \leq x < \frac{1}{3} + \frac{2}{3}a, \\ +3x-a-1 & \frac{1}{3} + \frac{2}{3}a \leq x < \frac{2}{3} + \frac{2}{3}a, \\ -3x+3a+3& \frac{2}{3} + \frac{2}{3}a \leq x < \frac{5}{6} + \frac{1}{2}a, \\ +3x-2 & \frac{5}{6} + \frac{1}{2}a \leq x \leq 1, \end{array}\right. $$ where values are taken modulo $1$. Figure \ref{fig:H0} shows a graph of $H_{0}$. \begin{figure}
\caption{The map $H_{0}$ has invariant sets $[0,0.5]$ and $[0.5,1]$; that is, $H_0^{-1}([0,0.5])=[0,0.5]$ and $H_0^{-1}([0.5,1])=[0.5,1]$.}
\label{fig:H0}
\end{figure} Let $a_i\in\mathbb{R}$, $i=1,\ldots,4$, be close to zero, for example $(a_1,a_2,a_3,a_4)=(\pi,2\sqrt{2}, \sqrt{3},e)/40$. We now construct the map $T_i$ from $H_{a_i}$, for $i=1,\ldots,4$ as follows: \begin{eqnarray*} T_1 &=& H_{a_1}(x) \\ T_2 &=& R\circ H_{a_2}(x) \\ T_3 &=& H_{a_3}\circ R^{-1} \\ T_4 &=& R\circ H_{a_4}\circ R^{-1}, \end{eqnarray*} where $R:S^1\to S^1$ is the rotation $R(x)=x+1/4\mod{1}$; see Figure \ref{fig:4T}. \begin{figure}
\caption{Graphs of $T_i$ for $i=1,\ldots,4$.}
\label{fig:4T}
\end{figure} \end{example}
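For readers who wish to experiment with this cocycle, the following Python sketch (ours; the particular symbol path $\omega$ below is an arbitrary admissible choice) implements $H_{a}$, the maps $T_{1},\ldots,T_{4}$ and the composition $\Phi\left(k,\omega,x\right)$:
\begin{verbatim}
import numpy as np

def H(a, x):                         # the piecewise-linear map H_a on S^1
    if   x < 1/6 + a/2:   y = 3*x
    elif x < 1/3 + 2*a/3: y = -3*x + 3*a + 1
    elif x < 2/3 + 2*a/3: y = 3*x - a - 1
    elif x < 5/6 + a/2:   y = -3*x + 3*a + 3
    else:                 y = 3*x - 2
    return y % 1

a = np.array([np.pi, 2*np.sqrt(2), np.sqrt(3), np.e]) / 40
R    = lambda x: (x + 0.25) % 1      # quarter rotation
Rinv = lambda x: (x - 0.25) % 1
T = [lambda x: H(a[0], x),
     lambda x: R(H(a[1], x)),
     lambda x: H(a[2], Rinv(x)),
     lambda x: R(H(a[3], Rinv(x)))]

M = np.array([[1,1,0,0],[0,0,1,1],[1,1,0,0],[0,0,1,1]])
omega = [0, 1, 2, 0, 1, 3, 2]        # 0-based indices for T_1,...,T_4
assert all(M[i, j] for i, j in zip(omega, omega[1:]))   # admissible path

x = 0.123
for i in omega:                      # Phi(7, omega, x): apply T along omega
    x = T[i](x)
print(x)
\end{verbatim}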
Let $m$ denote normalised Lebesgue measure on $M$. To each map $T_{\omega}$ we associate a Perron--Frobenius operator $\mathcal{P}_{{\omega}}:L^1(M,m)\circlearrowleft$ defined by
$\mathcal{P}_{{\omega}}f=\sum_{y\in T^{-1}_{\omega}x}{f(y)}/{|\det DT_{\omega}(y)|}$. The operator $\mathcal{P}_\omega$ is a linear operator that acts on integrable functions in analogy to the action of $T_\omega$ on points. If $f\in L^1(M,m)$ represents a density function for an ensemble of initial conditions, then $\mathcal{P}_\omega f$ represents the density function of the ensemble after the action of $T_\omega$ has been applied to the ensemble. The map cocycle $\Phi$ naturally generates a \emph{Perron--Frobenius cocycle} $\mathcal{P}^{(k)}_\omega=\mathcal{P}_{{\sigma^{k-1}\omega}}\circ\cdots\circ \mathcal{P}_{{\sigma\omega}}\circ \mathcal{P}_{{\omega}}$. This composition of $k$ Perron--Frobenius operators captures the action on a function $f$ after $k$ iterations of the non-autonomous system.
\subsection{Continuous time -- Flows} Let $F:\Omega\times M\to\mathbb{R}^d$ be a sufficiently regular vector field. More precisely, we suppose that $F$ satisfies the conditions of \cite[Theorem 2.2.2]{arnoldbook}, which will guarantee the existence of a classical solution of the nonautonomous ODE $\dot{x}(t)=F(\theta^t\omega,x(t))$, $t\in\mathbb{R}$.
To be concrete about the probability space $(\Omega,\mathcal{H},\mathbb{P})$ in the continuous time setting, we may set $\Omega=\Xi\subset \mathbb{R}^{d_1}$, $d_1\ge 3$, and consider an autonomous ODE $\dot{z}=g(z)$ on $\Xi$. Denote the flow for this ODE by $\xi:\mathbb{R}\times\Xi\to\Xi$ and suppose that $\xi$ preserves the probability measure $\mathbb{P}$; that is, $\mathbb{P}\circ\xi(-t,\cdot)=\mathbb{P}$ for all $t\in\mathbb{R}$. Thus, the autonomous, aperiodic flow $\xi$ drives the nonautonomous ODE \begin{equation}
\label{ode} \dot{x}(t)=F(\theta^t\omega,x(t))=F(\xi(t,z),x(t)). \end{equation} We think of points $z\in \Xi$ as representing generalised time. We assume that $(\Xi,\xi,\mathbb{P})$ is ergodic in the sense that if $\xi(-t,\tilde{\Xi})=\tilde{\Xi}$ for some $\tilde{\Xi}\subset\Xi$ and for all $t\ge 0$ then $\mathbb{P}(\tilde{\Xi})=0$ or 1.
Denote by $\phi:\mathbb{R}\times\Xi\times M\to M$ the flow for (\ref{ode}). The flow $\phi$ satisfies $\frac{d}{dt}\phi(t,z,x)=F(\xi(t,z),\phi(t,z,x))$.
\begin{definition} \label{invariantmeasurects} Endow $M$ with the Borel $\sigma$-algebra and let $\mu$ be a probability measure on $M$. We call $\mu$ an \emph{invariant measure} if $\mu\circ\phi(-t,z,\cdot)=\mu$ for all $z\in \Xi$ and $t\in \mathbb{R}$. \end{definition}
\begin{remark}
\label{invmeasremark} In Definition \ref{invariantmeasurects} we are insisting that $\mu$ is preserved at all ``time instants''. As in the discrete time setting, more generally one may allow $\mu=\mu_z$ and insist that $\mu_{\xi(-t,z)}\circ\phi(-t,z,\cdot)=\mu_z$. However, as we will soon begin to focus on coherent sets rather than invariant measures, we will restrict the invariant measure to a ``time independent'' measure for clarity of presentation. This is perfectly reasonable for one of the main applications we have in mind, namely, aperiodically driven fluid flow where $\mu\equiv$ Lebesgue, and volume is preserved by the flow at all times. \end{remark}
\begin{example}\label{ex:LZ}
Consider the following nonautonomous system on a cylinder $M=S^1\times [0,\pi]$. Let $\xi:\mathbb{R}\times \mathbb{R}^3\to \mathbb{R}^3$ denote the flow for the driving system generated by the Lorenz system of ODEs (\ref{eq::LZwave2a})--(\ref{eq::LZwave2c}) with standard parameters $\sigma=10$, $\beta=8/3$, $\rho=28$. \begin{eqnarray} \label{eq::LZwave2a}\dot{z_1}&=&\sigma(z_2-z_1)/\tau\\ \label{eq::LZwave2b}\dot{z_2}&=&(\rho z_1-z_2-z_1z_3)/\tau\\ \label{eq::LZwave2c}\dot{z_3}&=&(-\beta z_3+z_1z_2)/\tau. \end{eqnarray} It is well known that this Lorenz flow possesses an SBR measure $\mathbb{P}$~\cite{tucker_99}. Let the time-dependent vector field $F:\mathbb{R}\times S^1\times [0,\pi]\to S^1\times [0,\pi]$ generate our non-autonomous ODE $(\dot{x}(t),\dot{y}(t))=F(\xi(t,z),x(t),y(t))$. Explicitly,
\begin{eqnarray} \label{eq::LZwave1a}\dot{x} &=& c-A\sin(x-\nu z_1(t))\cos(y)\qquad\mod{2\pi}\\ \label{eq::LZwave1b}\dot{y} &=& A\cos(x-\nu z_1(t))\sin(y), \end{eqnarray} with $c=0.5$, $A=1$, $\nu=0.25$. We set initial condition $z(0)=(0,1,1.5)$ and take the $z_1$-coordinate of the Lorenz driving system to represent the generalized time for the vector field $F(\xi(t,z),x(t),y(t))$. We use a scaling factor of $\tau=6.6685$ so that the temporal and spatial variation of $z_1(t)$ is similar to that of the ``actual'' time $t$. Since $F(\xi(t,z),x,y)$ is differentiable and bounded on $M$ for all $t$, classical solutions of the nonautonomous ODE (\ref{eq::LZwave1a})--(\ref{eq::LZwave1b}) exist. The system~(\ref{eq::LZwave2a})--(\ref{eq::LZwave1b}) uniquely generates a random dynamical system (RDS); see \cite[Theorem 2.2.2]{arnoldbook}. In Figure~\ref{fig::LZwave} we show the trajectories of three different initial points.
\begin{figure}
\caption{Trajectory of the time-dependent system~\eqref{eq::LZwave1a}--\eqref{eq::LZwave1b} driven by the Lorenz system at generalized times $\xi(t,z)$ }
\label{fig::LZwave}
\end{figure} \end{example}
We may define a family of Perron--Frobenius operators as
$\mathcal{P}_z^{(t)}f(x)=f(\phi(-t,\xi(t,z),x))\cdot|\det D\phi(-t,\xi(t,z),x)|$ for $t\ge 0$. This family is a semigroup in $t$ as $\mathcal{P}_z^{(t_1+t_2)}f=\mathcal{P}_{\xi(t_1,z)}^{(t_2)}\mathcal{P}_z^{(t_1)}f$.
\section{Galerkin projection and matrix cocycles}
Let $\mathcal{B}_n=\mathop{\rm sp}\{\chi_{B_i}:B_i\in \mathfrak{B}\}$ where $\mathfrak{B}=\{B_1,\ldots,B_n\}$ is a partition of $M$ into connected sets of positive Lebesgue measure. Define a projection $\pi_n:L^1(M,m)\to\mathcal{B}_n$ as \begin{equation}
\label{pineqn} \pi_nf=\sum_{i=1}^n \frac{\int_{B_i} f\,\mathrm{d}m}{m(B_i)}\chi_{B_i}. \end{equation} Following Ulam \cite{ulam}, in the sequel we will consider the finite rank operators $\pi_n\mathcal{P}_\omega^{(1)}:L^1(M,m)\to\mathcal{B}_n$ and $\pi_n\mathcal{P}_z^{(1)}:L^1(M,m)\to\mathcal{B}_n$, and the matrix representations of the restrictions of $\pi_n\mathcal{P}_\omega^{(1)}$ and $\pi_n\mathcal{P}_z^{(1)}$ to $\mathcal{B}_n$. We denote these matrix representations (under multiplication on the right) by $P(\omega)$ and $P(z)$. Extending Lemma 2.3 of \cite{li} in a straightforward way to the nonautonomous setting, one has \begin{equation}
\label{discreteulam} P(\omega)_{ij}=\frac{m(B_j\cap \Phi(-1,\sigma\omega,B_i))}{m(B_i)} \end{equation} and \begin{equation}
\label{continuousulam} P(z)_{ij}=\frac{m(B_j\cap \phi(-1,\xi(1,z),B_i))}{m(B_i)}. \end{equation} In particular, these matrices are numerically accessible.
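In practice we estimate these matrices by Monte Carlo sampling. The sketch below (ours; it assumes a one-dimensional phase space $S^1$ partitioned into $n$ cells of equal Lebesgue measure, so that \eqref{discreteulam} reduces to the fraction of sample points of $B_{j}$ whose image lands in $B_{i}$) builds such an estimate for a given map:
\begin{verbatim}
import numpy as np

def ulam_matrix(T, n=100, samples=500, seed=0):
    """Estimate P with P[i,j] ~ fraction of B_j mapped into B_i,
    so that densities (column vectors) evolve as f -> P @ f."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n, n))
    for j in range(n):
        pts = (j + rng.random(samples)) / n          # uniform sample of B_j
        imgs = np.array([T(p) for p in pts]) % 1.0   # images on S^1
        for i in np.floor(imgs * n).astype(int):
            P[i, j] += 1.0 / samples
    return P

# columns sum to 1, since every point of B_j lands in some cell:
P = ulam_matrix(lambda x: (3*x) % 1)
assert np.allclose(P.sum(axis=0), 1.0)
\end{verbatim}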
\begin{remark} Note we do not concern ourselves at all with the relationship between $\mathcal{P}_{\omega}^{(1)}$ and $\pi_n\mathcal{P}_{\omega}^{(1)}$; this is a subtle relationship and beyond the scope of this work. See \cite{li,garysbr,ding,murraythesis,froylandrandomulam,bkl,murraynonuniform} for work in this direction. \end{remark}
The matrices $P(\omega)$ and $P(z)$ generate matrix cocycles \begin{equation}
\label{discretematrixcocycle} P^{(k)}(\omega):=P(\sigma^{k-1}\omega)\cdots P(\sigma\omega)\cdot P(\omega) \end{equation} and \begin{equation}
\label{ctsmatrixcocycle} P^{(k)}(z):=P(\xi(k-1,z))\cdots P(\xi(1,z))\cdot P(z). \end{equation}
\section{Discretised Oseledets functions and the Multiplicative Ergodic Theorem} \label{METsect}
In periodically driven flows, Liu and Haller \cite{liu_haller_04} and Pikovsky and Popovych \cite{pikovsky_popovych_03}, observed that certain tracer patterns persisted for long times before eventually relaxing to the equilibrium tracer distribution. Pikovsky and Popovych \cite{pikovsky_popovych_03} recognised these patterns as graphs of eigenfunctions of a Perron--Frobenius operator corresponding to an eigenvalue $L<1$. These eigenfunctions decay over time and the closer $L$ is to 1, the slower the decay and the more slowly an initial tracer distribution will relax to equilibrium. We now develop a framework for the considerably more difficult aperiodic setting.
Consider some suitable Banach space $(\mathcal{F},\|\cdot\|)$ of real valued functions; $\mathcal{F}$ is the function class in which we search for slowly decaying functions. Suppose that the norm is chosen so that for each $\omega\in\Omega$ and $k\ge 0$, the operator $\mathcal{P}^{(k)}_\omega$ is Markov;
that is, $\|\mathcal{P}^{(k)}_\omega\|=1$ for all $\omega$ and $k\ge 0$. For $f\in\mathcal{F}$, we calculate the following limit: \begin{equation} \label{lambdaeqn}
\lambda(\omega,f)=\limsup_{k\to\infty}\frac{1}{k}\log\|\mathcal{P}^{(k)}_\omega f\|. \end{equation} We refer to $\lambda(\omega,f)\le 0$ as the \textit{Lyapunov exponent} of $f$. If $f$ decays under the action of the Perron--Frobenius operators at a geometric rate of $r^k$, $0<r<1$, then $\lambda(\omega,f)=\log r$. The closer $r$ is to 1, the slower the decay. The extreme case of $r=1$ (no decay) is exhibited when $f$ is the density of the invariant measure $\mu$ that is common to all maps in our nonautonomous dynamical system. We define the \textit{Lyapunov spectrum} $\Lambda(\mathcal{P},\omega):=\{\lambda(\omega,f):f\in\mathcal{F} \}$. In the aperiodic setting the new mathematical objects that are analogous to strange eigenmodes and persistent patterns will be called \emph{Oseledets functions.} \begin{definition} \label{strangeeigenmodedefn} \emph{Oseledets functions} correspond to $f$ for which (i) $\lambda(\omega,f)$ is near zero and (ii) the value $\lambda(\omega,f)$ is an isolated point in the \textit{Lyapunov spectrum}. \end{definition}
By considering $(\mathcal{F},\|\cdot\|)=(\mathcal{B}_n,\|\cdot\|_1)$, the actions of $\mathcal{P}_\omega^{(k)}$ and $\mathcal{P}^{(k)}_z$ are described by $P^{(k)}(\omega)$ and $P^{(k)}(z)$, respectively. We may replace $\mathcal{P}_\omega^{(k)}$ and $\mathcal{P}^{(k)}_z$ in (\ref{lambdaeqn}) by $P^{(k)}(\omega)$ and $P^{(k)}(z)$, respectively, to obtain
a standard setting where the possible values of $\lambda(\omega,f)$ are the Lyapunov exponents of cocycles of $n\times n$ matrices, and \begin{equation} \label{discretespectrum}
\Lambda(P,\omega):=\left\{\lim_{k\to\infty}\frac{1}{k}\log\|{P}^{(k)}(\omega)
f\|_1: f\in\mathcal{B}_n\right\}\end{equation}
and
\begin{equation} \label{ctsspectrum}
\Lambda(P,z):=\left\{\lim_{k\to\infty}\frac{1}{k}\log\|{P}^{(k)}(z)
f\|_1: f\in\mathcal{B}_n\right\},\end{equation} exist for $\mathbb{P}$ almost-all $\omega\in\Omega$, and consist of at most $n$ isolated points, $\lambda_n<\cdots<\lambda_1=0$. Of particular interest to us is the function $f_2(\omega)$ (or $f_2(z)$)
in $\mathcal{B}_n$, which represents the function that decays at the slowest possible exponential rate, namely $\lambda_{2}$. \begin{remark} In certain settings, this matrix cocycle \emph{exactly} captures all large isolated Lyapunov exponents of the operator cocycle
$\mathcal{P}:(\mathrm{BV},\|\cdot\|_{\mathrm{BV}})\circlearrowleft$. One such setting is a map cocycle formed by composition of piecewise linear expanding maps with a common Markov partition $\mathfrak{B}=\{B_1,\ldots,B_n\}$;
see \cite{froyland_lloyd_quas}. \end{remark}
The following example illustrates the concept of Lyapunov spectrum and Oseledets functions in the familiar autonomous setting. For the remainder of this section, we adopt the discrete time notation of $\sigma$ and $\omega$.
\begin{example}[``Autonomous'' single map] \label{singlemapeg} In \cite{foureggs} individual maps are constructed for which the Perron--Frobenius operator has at least one non-unit isolated eigenvalue when acting on the Banach space
$(BV,\|\cdot\|_{BV})$. A single autonomous map may be regarded as a cocycle over a one-point space $\Omega=\{\omega\}$, and so we may drop the dependence on $\omega$ in notation. Keller \cite{keller_84} shows that for a piecewise expanding map $T$ of the interval $I$, the spectrum of the associated Perron--Frobenius operator $\mathcal{P}$ has an essential spectral radius $\rho_{\rm ess}(\mathcal{P})$ equal to the asymptotic local expansion rate $\sup_{x\in I} \lim_{k\to\infty}
\left|1/\mathrm{D}T^k(x)\right|^{1/k}$, and that there are at most countably many spectral points, each isolated, of modulus greater than $\rho_{\rm ess}(\mathcal{P})$.
In order to have an isolated spectral point, we construct a map of $S^1$ which has an almost-invariant set (see Definition \ref{aidefn}). The relation between almost-invariant sets and isolated eigenvalues was noted in \cite{dellnitz_junge_99}. Consider the partition $\mathfrak{B}=\{B_i: i=1,\ldots,6\}$, where $B_i=((i-1)/6,i/6)$. Given $a\in\mathbb{Z}^6$, any map $T:S^1\to S^1$ defined by \begin{eqnarray}\label{eq:Tformula} T(x) = 3x-(i-1)/2+a_i/6\mod{1}, \quad x \in B_i \end{eqnarray} is Markov with respect to $\mathfrak{B}$. Here we take $a=(0,0,1,4,3,3)$; see Figure \ref{fig:OneMap}. Notice that there is a low transfer of mass between the two intervals $[0,1/2]$ and $[1/2,1]$.
Since $\mathfrak{B}$ is a Markov partition for $T$, the space $\mathcal{B}_6 =\mathop{\rm sp}\{\chi_{B_i}:i=1,\ldots,6\}$ spanned by the characteristic functions is an invariant subspace of $\mathrm{BV}$ for the Perron--Frobenius operator $\mathcal{P}$ of $T$. Thus the action of $\mathcal{P}_\omega = \mathcal{P}$ on $\mathcal{F}=\mathcal{B}_6$ is represented by the matrix \begin{eqnarray}P = P(\omega) = \frac{1}{3} \left(\begin{array}{cccccc} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ \end{array}\right), \end{eqnarray} which has non-zero eigenvalues $L_1=1,L_2=(1+\sqrt{2})/3, L_3=(1-\sqrt{2})/3$. The map $T$ is piecewise affine with constant slope $3$ and so the logarithm of the local expansion rate is $\log (1/3)$.
The eigenvalue $L_2\approx 0.805$ of $P$ thus gives rise to an isolated point $\lambda_2\approx \log 0.805$ in the Lyapunov spectrum $\Lambda(\mathcal{P})$. The corresponding Oseledets function $f_2$ is given by $f_2(x)=\sum_{i=1}^6w_{2,i}\chi_{B_i}(x)$, where $w_2$ is the eigenvector of $P$ corresponding to the eigenvalue $L_2$, see Figure \ref{fig:OneMap}.
\begin{figure}
\caption{Graph of $T$ and Oseledets function $f_2$.}
\label{fig:OneMap}
\end{figure}
Since $|L_3|\approx 0.138<1/3$, this means that $\log L_2$ is the unique isolated Lyapunov exponent in $\Lambda(\mathcal{P})$. Note that the set $\{f_2>0\}$ corresponds to the set $[0,1/2]$. We will discuss this property further in Section \ref{cssection}. \end{example}
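These spectral claims are straightforward to verify numerically; for instance, the following Python fragment (ours) recovers the nonzero eigenvalues of $P$:
\begin{verbatim}
import numpy as np

P = np.array([[1,1,0,1,0,0],
              [1,1,1,0,0,0],
              [1,1,1,0,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,1,1],
              [0,0,0,1,1,1]]) / 3
vals = np.linalg.eigvals(P)
print(np.sort(vals.real)[::-1])
# expected nonzero eigenvalues: 1, (1+sqrt(2))/3 ~ 0.805, (1-sqrt(2))/3 ~ -0.138
\end{verbatim}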
\begin{example}[Periodic map cocycle] \label{periodiceg} We construct a periodic map cocycle from a collection of maps with a common Markov partition. The map cocycle is formed by cyclically composing three maps of $S^1$. Consider the sequence space $\Omega=\{\omega\in\{1,2,3\}^\mathbb{Z}:\forall i\in\mathbb{Z}, M_{\omega_i,\omega_{i+1}}=1\}$ where $M=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\\ \end{array}\right)$. We consider $\mathcal{T}=\{T_j:j=1,2,3\}$, where $T_j$ is given by (\ref{eq:Tformula}) with parameter $a^{(j)}$, where $$ a^{(1)}=(3,2,2,0,5,5),\quad a^{(2)}=(2,1,4,5,4,1),\quad a^{(3)}=(1,3,3,4,0,0), $$ see Figure \ref{fig:PerMaps}. \begin{figure}
\caption{Graphs of $T_1$, $T_2$ and $T_3$. }
\label{fig:PerMaps}
\end{figure} As in Example \ref{singlemapeg} we look for Lyapunov exponents that are strictly greater than the logarithm of the asymptotic \emph{local expansion rate} \begin{eqnarray}
\sup_{x\in I}\lim_{k\to\infty} \left| 1/\mathrm{D}(T_3\circ T_2\circ T_1)^k\right|^{1/(3k)}.
\end{eqnarray} As each map $T_j$ is piecewise affine with constant slope $3$, the logarithm of the local expansion rate is $\log(1/3)$. Note also that $T_1$ approximately maps $[0,1/2]$ to $[1/3,5/6]$, $T_2$ then maps $[1/3,5/6]$ approximately to $[0,1/6]\cup [2/3,1]$, and finally $T_3$ maps $[0,1/6]\cup [2/3,1]$ approximately back to $[0,1/2]$. Each map $T_j$ leaves the space $\mathcal{B}_6$ from Example \ref{singlemapeg} invariant, and thus the Perron--Frobenius operator $\mathcal{P}_j$ of $T_j$ restricted to $\mathcal{B}_6$ has matrix representation $P_j$, where $3P_j$, $j=1,2,3$, are respectively \begin{eqnarray} \left(\begin{array}{cccccc} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ \end{array}\right),
\left(\begin{array}{cccccc} 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ \end{array}\right),
\left(\begin{array}{cccccc} 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ \end{array}\right). \end{eqnarray} The triple product $P^{(3)}(\omega)=P_3P_2P_1$ has non-zero eigenvalues $L_1=1, L_2=(13+ \sqrt{233})/54$ and $L_3=(13-\sqrt{233})/54$. Since $L_2\approx 0.523$, its associated eigenvector $w_2$ satisfies $\lambda(\omega,w_2)= \log \sqrt[3]{L_2}>\log (1/3)$. Since $P^{(3)}(\sigma^k\omega)$, $k=1,2$, are cyclic permutations of the factors of $P^{(3)}(\omega)$, they share the same eigenvalues, and in particular $L_2$. Thus $(1/3)\log L_2$ is an isolated Lyapunov exponent of $\Lambda(\mathcal{P},\omega)$ for each $\omega\in\Omega$. Associated to the eigenvalue $L_2$, the matrices $P^{(3)}(\sigma^k\omega)$, $k=0,1,2$, have corresponding eigenvectors $w_2(\sigma^k\omega)$. The three vectors $w_2(\sigma^k\omega)$, $k=0,1,2$, generate the periodic Oseledets functions $f_2(\sigma^k\omega)=\sum_{i=1}^6 w_{2,i}(\sigma^{k\mod{3}}\omega)\chi_{B_i}$, see Figure \ref{fig:PerCS}. \begin{figure}
\caption{Oseledets functions $f_2(\sigma^k\omega)$ for $k=0,1,2$.}
\label{fig:PerCS}
\end{figure} Note that the sets $\{f_2(\sigma^k\omega)>0\}_{k=0,1,2}$ correspond to the sets $[0,1/2]$, $[1/3,5/6]$, and $[0,1/6]\cup [2/3,1]$, respectively. We will discuss this property further in Section \ref{cssection}.
For another such example see \cite{froyland_lloyd_quas}. See also \cite{froyland_padberg_08} for a detailed example of similar calculations for a periodically driven flow. \end{example}
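The triple product of this example can be checked in the same way; a brief sketch, assuming \texttt{numpy} and the \texttt{ulam\_matrix} helper from the previous sketch:
\begin{verbatim}
P1 = ulam_matrix([3, 2, 2, 0, 5, 5])
P2 = ulam_matrix([2, 1, 4, 5, 4, 1])
P3 = ulam_matrix([1, 3, 3, 4, 0, 0])
L = np.sort(np.linalg.eigvals(P3 @ P2 @ P1).real)[::-1]
print(L[:3])  # 1, (13 + sqrt(233))/54 ~ 0.523, (13 - sqrt(233))/54 ~ -0.042
\end{verbatim}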
In the nonautonomous setting, we can no longer easily construct Oseledets functions as eigenfunctions of a single operator, or eigenvectors of a single matrix. In fact, the Oseledets functions are themselves (aperiodically) time dependent in the nonautonomous setting. Our model of Oseledets functions for nonautonomous systems is, as the name suggests, built around the Multiplicative Ergodic Theorem, see e.g.\ \cite[Chapter 3, \S4]{arnoldbook}. We now state a strengthened version \cite{froyland_lloyd_quas} of the Multiplicative Ergodic Theorem that we require for our current purposes. \begin{theorem}[\cite{froyland_lloyd_quas}] \label{thm:main} Let $\sigma$ be an invertible ergodic measure-preserving transformation of the space $(\Omega,\mathcal{H},\mathbb{P})$. Let $P:\Omega\to M_n(\mathbb{R})$ be a measurable family of matrices satisfying
$$\int \log^+\|P(\omega)\|\,\mathrm{d}\mathbb{P}(\omega)<\infty. $$ Then there exist $\lambda_1>\lambda_2>\cdots>\lambda_\ell\geq -\infty$ and dimensions $m_1,\ldots,m_\ell$, with $m_1+\cdots+m_\ell=n$, and a measurable family of subspaces $W_{j}(\omega)\subseteq \mathbb{R}^n$ such that for almost every $\omega\in\Omega$ the following hold: \begin{enumerate} \item $\dim W_{j}(\omega)=m_j$;
\item $\mathbb{R}^n=\bigoplus_{j=1}^\ell W_{j}(\omega)$;
\item $P(\omega)W_{j}(\omega)\subseteq W_{j}(\sigma\omega)$ (with
equality if $\lambda_j>-\infty$);
\item for all $v\in W_{j}(\omega)\setminus\{0\}$, one has
\begin{equation*}
\lim_{k\to\infty}
(1/k)\log\|P(\sigma^{k-1}\omega)\cdots P(\sigma\omega)\cdot P(\omega) v\|= \lambda_j.
\end{equation*}
\end{enumerate} \end{theorem} The subspaces $W_j(\omega)$ are the general time-dependent analogues of the vectors $w_2$ and $w_2(\sigma^k\omega), k=0,1,2$ of Examples \ref{singlemapeg} and \ref{periodiceg}, respectively. We may explicitly construct a slowest decaying discrete Oseledets function as $f_2(\omega):=\sum_{i=1}^n w_{2,i}(\omega)\chi_{B_i}$, where $w_2(\omega)\in W_2(\omega)$. In the sequel, for brevity we will often call $W_j(\omega)$ a subspace or a function, recognising its dual roles.
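To see item 4 in action, the top exponent of a toy cocycle can be estimated by pushing a generic vector forward and renormalising; a generic vector has a component in $W_1(\omega)$, so the limit returns $\lambda_1$. The following sketch is our own illustration (an i.i.d.\ Gaussian cocycle, not an example from the text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=4)
lam = 0.0
k = 5000
for _ in range(k):
    v = rng.normal(size=(4, 4)) @ v   # one step of the cocycle
    nrm = np.linalg.norm(v)
    lam += np.log(nrm)                # accumulate log growth
    v /= nrm                          # renormalise to avoid overflow
print(lam / k)                        # estimate of the top exponent lambda_1
\end{verbatim}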
\begin{remark} \label{remark4a} We remark that if $\ell\ge 2$, $m_2=1$, and $\lambda_2>-\infty$, the family of vectors $\{f_2(\sigma^k\omega)\}_{k\ge 0}$ is the \emph{unique}\footnote{Assume there is another family $\{w'_2(\sigma^k\omega)\}_{k\ge 0}\neq \{w_2(\sigma^k\omega)\}_{k\ge 0}$ (up to scalar multiples) with these properties. Then $w'_2(\sigma^k\omega)=\sum_{j=2}^\ell \alpha_{k,j} w_j(\sigma^k\omega)$ for some $\alpha_{k,j}$, $j=2,\ldots,\ell$, with $\alpha_{k,2}\neq 0$. WLOG assume $\alpha_{k,2}, \alpha_{k,j'}\neq 0$ for some $2<j'\le \ell$ and all $k\ge 0$, but that $\alpha_{k,j}=0$ for all $j\neq 2,j'$ and all $k\ge 0$. Then $m_2\ge 2$ in Theorem \ref{thm:main}, a contradiction.} (up to scalar multiples) family of vectors in $\mathcal{B}_n$ with the properties that \begin{enumerate} \item $\lim_{k'\to\infty}
(1/k')\log\|P^{(k')}(\sigma^k\omega)f_2(\sigma^k\omega)\|_1=\lambda_2$, $k\ge 0$,
\item $\pi_n\mathcal{P}_{\sigma^k\omega}
f_2(\sigma^k\omega)=\alpha_k
f_2(\sigma^{k+1}\omega)$ for some $\alpha_k\neq 0$, $k\ge 0$. \end{enumerate} \end{remark}
\begin{remark} Theorem \ref{thm:main} strengthens the standard version of the MET for one-sided time with noninvertible matrices (see e.g.\ \cite[Theorem 3.4.1]{arnoldbook}) to obtain the conclusions of the two-sided time MET with invertible matrices (see e.g.\ \cite[Theorem 3.4.11]{arnoldbook}). In \cite[Theorem 3.4.1]{arnoldbook}, the existence of only a \textit{flag} $\mathbb{R}^n=V_1(\omega)\supset\cdots\supset V_\ell(\omega)$ of Oseledets subspaces is guaranteed, while in \cite[Theorem 3.4.11]{arnoldbook}, the existence of a \textit{splitting} $W_1(\omega)\oplus\cdots\oplus W_\ell(\omega)=\mathbb{R}^n$ is guaranteed. Theorem \ref{thm:main} above demonstrates existence of an Oseledets \textit{splitting} for two-sided time with \textit{noninvertible} matrices. This is particularly important for our intended application as the projected Perron--Frobenius operator matrices are non-invertible. Recent further extensions \cite{froyland_lloyd_quas2} prove existence and uniqueness of Oseledets subspaces for cocycles of Lasota--Yorke maps. \end{remark}
\section{Numerical approximation of Oseledets functions}
In the autonomous and periodic settings we have seen in Examples \ref{singlemapeg} and \ref{periodiceg} that the subspaces $W_2(\omega)=\mathop{\rm sp}\{w_2(\omega)\}$ were one-dimensional, and that the vectors $w_2(\omega)$ could be simply determined as eigenvectors of matrices.
For truly nonautonomous systems (those that are aperiodically driven), the Oseledets splittings are difficult to compute. In this section we outline a numerical algorithm to approximate the $W_j(\omega)$ subspaces from Theorem \ref{thm:main}. The algorithm is based on the push-forward limit argument developed in the proof of Theorem \ref{thm:main}. To streamline notation, we describe the discrete time and continuous time settings separately.
\subsection{Discrete time}
We first describe a simple and efficient method to construct the matrix $P(\omega)$ defined in (\ref{discreteulam}).
\begin{algorithm}[Approximation of $P(\sigma^{-k}\omega)_{ij}$, $0\le k\le N$]
\quad \begin{enumerate}
\item Partition the state space $M$ into a collection of connected sets $\{B_1,\ldots,B_n\}$ of small diameter. \item Fix $i$, $j$, and $k$ and create a set of $Q$ test points $x_{j,1},\ldots,x_{j,Q}\in B_j$ that are uniformly distributed over $B_j$. \item For each $q=1,\ldots,Q$ calculate $y_{j,q}=T_{\sigma^{-k}\omega}x_{j,q}$. \item Set \begin{equation}
\label{discreteapproxulam} P(\sigma^{-k}\omega)_{ij}=\frac{\#\{q:y_{j,q}\in B_i\}}{Q} \end{equation} \end{enumerate} \end{algorithm} We now describe how to use the matrices $P(\omega)$ to approximate the subspaces $W_j(\omega)$. An intuitive description of the ideas behind Algorithm \ref{discreteeigenmodealg} immediately follows the algorithm statement.
\begin{algorithm}[Approximation of Oseledets subspaces
$W_j(\omega)$ at $\omega\in\Omega$.]
\label{discreteeigenmodealg} \quad
\begin{enumerate} \item Construct the Ulam matrices $P^{(M)}(\sigma^{-N}\omega)$ and $P^{(N)}(\sigma^{-N}\omega)$ from (\ref{discreteapproxulam}) and (\ref{discretematrixcocycle}) for suitable $M$ and $N$. The number $M$ represents the number of iterates over which one measures the decay, while the number $N$ represents how many iterates the resulting ``initial vectors'' are pushed forward to better approximate elements of the $W_j(\omega)$. \item Form $$ \Psi^{(M)}(\sigma^{-N}\omega):=(P^{(M)}(\sigma^{-N}\omega)^\top P^{(M)}(\sigma^{-N}\omega))^{1/2M} $$ as an approximation to the standard limiting matrix $$ B(\sigma^{-N}\omega):=\lim_{M\to\infty} \left(P^{(M)}(\sigma^{-N}\omega)^\top
P^{(M)}(\sigma^{-N}\omega)\right)^{1/2M} $$ appearing in the Multiplicative Ergodic Theorem (see e.g.\ \cite[Theorem 3.4.1(i)]{arnoldbook}). \item Calculate the orthonormal eigenspace decomposition of $\Psi^{(M)}(\sigma^{-N}\omega)$,
denoted by $U^{(M)}_j(\sigma^{-N}\omega)$, $j=1,\ldots,\ell$. We are particularly interested in low values of $j$, corresponding to large eigenvalues $L_j$. \item Define
$W_j^{(M,N)}(\omega):=P^{(N)}(\sigma^{-N}\omega)U^{(M)}_j(\sigma^{-N}\omega)$
via the push forward under the matrix cocycle. \item $W_j^{(M,N)}(\omega)$ is our numerical approximation to $W_j(\omega)$. \end{enumerate} \end{algorithm} Here is the idea behind the above algorithm. If we choose $M$ large enough, the eigenspace $U^{(M)}_j(\sigma^{-N}\omega)$ should be close to the limiting ($M\to\infty$) eigenspace $U_j(\sigma^{-N}\omega)$. Vectors in the eigenspace $U_j(\sigma^{-N}\omega)$ experience stretching at a rate close to $L_j$. Note that the eigenspace $U^{(M)}_j(\sigma^{-N}\omega)$ is spanned by the $j^{\rm th}$ right singular vector of the matrix $P^{(M)}(\sigma^{-N}\omega)$, which experiences a ``per unit time'' average stretching from time $-N$ to $-N+M$ of $L_j$. Choose some arbitrary $v\in U_j(\sigma^{-N}\omega)$ and write $v=\sum_{j'=j}^\ell w_{j'}$ with $w_{j'}\in W_{j'}(\sigma^{-N}\omega)$. Pushing forward by $P^{(N)}(\sigma^{-N}\omega)$ for large enough $N$ will result in
$\|P^{(N)}(\sigma^{-N}\omega)w_j\|$ dominating
$\|P^{(N)}(\sigma^{-N}\omega)w_{j'}\|$ for $j<j'\le \ell$. Thus, for large $M$ and $N$ we expect $W_j^{(M,N)}(\omega)$ to be close to $W_j(\omega)$.
\begin{remarks} \quad \begin{enumerate} \item Theorem \ref{thm:main} states that $W_j^{(\infty,N)}(\omega)\to W_j(\omega)$ as $N\to\infty$. \item This method may also be used to calculate the Oseledets
subspaces for two-sided linear cocycles, and may be more convenient, especially for large $n$, than
the standard method of intersecting the relevant subspaces of flags
of the forward and backward cocycles.
\end{enumerate} \end{remarks}
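For concreteness, the core of Algorithm \ref{discreteeigenmodealg} fits in a few lines of \texttt{numpy}. The following sketch is our own illustration (the function names and the indexing convention for \texttt{Ps} are not from the original text); it exploits the fact that $\Psi^{(M)}$ shares eigenvectors with $P^{(M)\top}P^{(M)}$, so the matrix root can be skipped.
\begin{verbatim}
import numpy as np

def cocycle_product(Ps):
    # P^{(k)} = P_{k-1} ... P_1 P_0: newest matrix multiplies on the left
    out = np.eye(Ps[0].shape[0])
    for P in Ps:
        out = P @ out
    return out

def oseledets_approx(Ps, M, N, j):
    # Ps[k] ~ P(sigma^{-N+k} omega), k = 0, ..., max(M, N) - 1, so both
    # P^{(M)} and P^{(N)} start at sigma^{-N} omega.
    PM = cocycle_product(Ps[:M])
    # Psi^{(M)} = (PM^T PM)^{1/2M} has the same eigenvectors as PM^T PM,
    # so we diagonalise the latter and skip the matrix root.
    evals, evecs = np.linalg.eigh(PM.T @ PM)  # ascending eigenvalues
    # the approximate L_j, if needed, is evals[-j] ** (1 / (2 * M))
    u = evecs[:, -j]                          # j-th largest eigenvector
    w = cocycle_product(Ps[:N]) @ u           # push forward N steps
    return w / np.linalg.norm(w, 1)
\end{verbatim}
For the interval examples above, \texttt{Ps} would simply hold the $6\times 6$ Ulam matrices of the maps applied along the orbit of $\omega$.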
The numerical approximation of the Oseledets subspaces has been considered by a variety of authors in the context of (usually invertible) nonlinear differentiable dynamical systems, where the linear cocycle is generated by Jacobian matrices concatenated along trajectories of the nonlinear system. Froyland \emph{et al.} \cite{Froyland_Judd_Mees_95} approximate the Oseledets subspaces in invertible two-dimensional systems by multiplying a randomly chosen vector by $P^{(N)}(\sigma^{-N}\omega)$ (pushing forward) or $P^{(-N)}(\sigma^{N}\omega)$ (pulling back, where $P^{(-N)}(\sigma^{N}\omega)=P^{-1}(\omega)\cdots P^{-1}(\sigma^{N-1}\omega)$). Trevisan and Pancotti \cite{trevisan_pancotti_98} calculate eigenvectors of $\Psi^{(M)}(\omega)$ for the three-dimensional Lorenz flow, increasing $M$ until numerical convergence of the eigenvectors is observed. Ershov and Potapov \cite{ershov_potapov_98} use an approach similar to ours, combining eigenvectors of a $\Psi^{(M)}$ with pushing forward under $P^{(N)}$. Ginelli \emph{et al.} \cite{ginelli_etal_07} embed the approach of \cite{Froyland_Judd_Mees_95} in a QR-decomposition methodology to estimate the Oseledets vectors in higher dimensions. In the numerical experiments that follow, we have found our approach to work very well, with fast convergence in terms of both $M$ and $N$.
\subsection{Continuous time}
As our practical computations are necessarily over finite time intervals, from now on, when dealing with continuous time systems, we will compute $P^{(k)}(z)$ as $\pi_n\mathcal{P}_z^{(k)}$ rather than as $P(\xi(k-1,z))\cdots P(\xi(1,z))\cdot P(z)$. If the computation of $\pi_n\mathcal{P}_z^{(k)}$ can be done accurately (this will be discussed further in Section \ref{ctscomputationssect}), then this representation should be closer to $\mathcal{P}_z^{(k)}$ as there are fewer applications of $\pi_n$.
We first describe a simple and efficient method to construct the matrix $P^{(M)}(\xi(-N,z))$, the continuous time analogue of (\ref{discreteapproxulam}). \begin{algorithm}[Approximation of $P^{(M)}(\xi(-N,z))$, $N\ge 0$]
\label{ctsPalg}
\quad \begin{enumerate}
\item Partition the state space $M$ into a collection of connected sets $\{B_1,\ldots,B_n\}$ of small diameter. \item Fix $i$, $j$, and $z$ and create a set of $Q$ test points $x_{j,1},\ldots,x_{j,Q}\in B_j$ that are uniformly distributed over $B_j$. \item For each $q=1,\ldots,Q$ calculate $y_{j,q}=\phi(M,\xi(-N,z),x_{j,q})$. \item Set \begin{equation}
\label{continuousapproxulam} P^{(M)}(\xi(-N,z))_{ij}=\frac{\#\{q:y_{j,q}\in B_i\}}{Q} \end{equation} \end{enumerate} \end{algorithm}
The flow time $M$ should be chosen long enough so that most test points leave their partition set of origin, otherwise at the resolution given by the partition $\{B_1,\ldots,B_n\}$, the matrix $P^{(M)}(\xi(-N,z))$ will be too close to the $n\times n$ identity matrix. If the action of $\phi$ separates nearby points, as is the case for chaotic systems, clearly the longer the flow duration $M$, the greater $Q$ should be in order to maintain a good representation of the images $\phi(M,\xi(-N,z),B_i)$ by the test points.
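A Monte Carlo rendering of Algorithm \ref{ctsPalg} using \texttt{scipy}'s initial value solver is sketched below. This is our own illustration, not the authors' implementation: the uniform box grid and the clipping of endpoints into the grid are simplifying assumptions, and periodic coordinates should be reduced inside the vector field before binning.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def ulam_matrix_flow(field, t0, M, lows, highs, shape, Q=100, seed=0):
    # field(t, x) -> dx/dt; the grid covers the box [lows, highs] with
    # shape[k] cells per axis.  Endpoints are clipped into the grid, so
    # periodic coordinates should be reduced inside `field` beforehand.
    rng = np.random.default_rng(seed)
    lows, highs, shape = map(np.asarray, (lows, highs, shape))
    widths = (highs - lows) / shape
    n = int(np.prod(shape))
    P = np.zeros((n, n))
    for j in range(n):
        lo = lows + np.array(np.unravel_index(j, shape)) * widths
        pts = lo + rng.uniform(size=(Q, len(shape))) * widths
        for x in pts:
            y = solve_ivp(field, (t0, t0 + M), x, rtol=1e-8).y[:, -1]
            cell = np.clip(((y - lows) // widths).astype(int), 0, shape - 1)
            P[np.ravel_multi_index(tuple(cell), shape), j] += 1.0 / Q
    return P
\end{verbatim}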
\begin{algorithm}[Approximation of Oseledets subspaces
$W_j(z)$ at $z\in\Xi$.]
\label{ctseigenmodealg} \quad \begin{enumerate} \item Construct the Ulam matrices $P^{(M)}(\xi(-N,z))$ and $P^{(N)}(\xi(-N,z))$ from (\ref{continuousapproxulam}) for suitable $M$ and $N$. The number $M$ represents the flow duration over which rate of decay is measured, while the number $N$ represents the duration over which the resulting ``initial vectors'' are pushed forward to better approximate elements of the $W_j(z)$. \item Form $$ \Psi^{(M)}({\xi(-N,z)}):=\left(P^{(M)}(\xi(-N,z))^\top P^{(M)}(\xi(-N,z))\right)^{1/2M} $$ as an approximation to the standard limiting matrix $$ B(\xi(-N,z)):=\lim_{M\to\infty} \left(P^{(M)}(\xi(-N,z))^\top P^{(M)}(\xi(-N,z))\right)^{1/2M}$$ appearing in the Multiplicative Ergodic Theorem (see e.g.\ \cite[Theorem 3.4.1(i)]{arnoldbook}). \item Calculate the orthonormal eigenspace decomposition of $\Psi^{(M)}(\xi(-N,z))$,
denoted by $U^{(M)}_{j}(\xi(-N,z))$, $j=1,\ldots,\ell$. We are particularly interested in low values of $j$, corresponding to large eigenvalues $L_j$. \item Define
$W_{j}^{(M,N)}(z):=P^{(N)}(\xi(-N,z))U^{(M)}_{j}(\xi(-N,z))$
via the push forward under the matrix cocycle. \item $W_{j}^{(M,N)}(z)$ is our numerical approximation to $W_{j}(z)$. \end{enumerate} \end{algorithm}
\subsection{Continuity of the Oseledets subspaces in continuous time} When treating continuous time systems, one may ask about the continuity properties of $W_j^{(M,N)}(z)$ in $z$. In the following we suppose that $W^{(M,N)}_2(z)$ is one-dimensional. For large $M$ and $N$, $W^{(M,N)}_2(z)$ will approximate the most dominant Oseledets subspace at time $z$. Suppose that we are interested in how this subspace changes from time $z$ to time $\xi(\delta,z)$ for small $\delta>0$. There are two ways to obtain information at time $\xi(\delta,z)$. Firstly, we can simply push forward $W^{(M,N)}_2(z)$ slightly longer to produce $W^{(M,N+\delta)}_2(\xi(\delta,z))$. Secondly, we can compute $\Psi^{(M)}$ slightly later at time $\xi(\delta,z)$ to produce $W^{(M,N)}_2(\xi(\delta,z))$.
To compare the closeness of $W^{(M,N)}_2(z)$ to $W^{(M,N+\delta)}_2(\xi(\delta,z))$ and $W^{(M,N)}_2(\xi(\delta,z))$, we represent each as a function and make a comparison in the $L^1$ norm. We assume that $U^{(M)}_2(\xi(-N,z))$ is one-dimensional
and define $f_{n,\xi(-N,z),M}=\sum_{i=1}^n (u^{(M)}_2(\xi(-N,z)))_i\chi_{B_i}\in L^1(M,m)$ where $u^{(M)}_2(\xi(-N,z))\in U^{(M)}_2(\xi(-N,z))$ is scaled so that
$\|f_{n,\xi(-N,z),M}\|_1=1$. Let $\hat{f}_{n,z,M,N}=\pi_n\mathcal{P}^{(N)}_{\xi(-N,z)}f_{n,\xi(-N,z),M}$. Note that $\hat{f}_{n,z,M,N}=\sum_{i=1}^n (w^{(M,N)}_2(z))_i\chi_{B_i}$ for some $w^{(M,N)}_2(z)\in W^{(M,N)}_2(z)$.
We firstly compare $\hat{f}_{n,\xi(\delta,z),M,N+\delta}$ and $\hat{f}_{n,z,M,N}$. \begin{proposition}
$\|\hat{f}_{n,\xi(\delta,z),M,N+\delta}-\hat{f}_{n,z,M,N}\|_1\to 0$ as $\delta\to 0$. \end{proposition} \begin{proof} Note that $\hat{f}_{n,\xi(\delta,z),M,N+\delta}=\pi_n\mathcal{P}^{(N+\delta)}_{\xi(-N,z)}f_{n,\xi(-N,z),M}$ while $\hat{f}_{n,z,M,N}=\pi_n\mathcal{P}^{(N)}_{\xi(-N,z)}f_{n,\xi(-N,z),M}$. The proof will follow from the result that
$\mathcal{P}_{z}^{(\tau)}$ is a \textit{continuous} semigroup; that is, $\lim_{\delta\to 0}\|\mathcal{P}_t^{(\delta)}f-f\|_1=0$ for all $t\in \mathbb{R}$, $f\in L^1(M,m)$. \begin{lemma}
\label{basicctylemma} $\|\mathcal{P}_{z}^{(\delta)}f-f\|_1\to 0$ as $\delta\to 0$ for all $z\in \Xi$ and $f\in L^1$. \end{lemma} \begin{proof} The proof runs as a non-autonomous version of the discussion in Remark 7.6.2 \cite{lasota_mackey1}. Note that $\mathcal{P}_{z}^{(\delta)}f(x)=f(\phi(-\delta,\xi(\delta,z),x))\cdot\det D\phi(-\delta,\xi(\delta,z),x)$, where $\phi(-\delta,\xi(\delta,z),\cdot)$ denotes the flow from $\xi(\delta,z)$ in reverse time for duration
$\delta$. For the moment consider continuous $f$. Since $x\mapsto\phi(s,z,x)$ is at least $C^1$ for each $s,z$ (the derivative of $\phi$ with respect to $x$ is continuous with respect to $s$ and $x$ for each fixed $z$) by \cite[Theorem 2.2.2 (iv)]{arnoldbook} and $M$ is compact, $\mathcal{P}_{z}^{(\delta)}f(x)\to f(x)$ uniformly in $x$ as $\delta\to 0$. Thus $\|\mathcal{P}_{z}^{(\delta)}f-f\|_1\to 0$ as $\delta\to 0$. Since the continuous functions are dense in $L^p$, $1\le p<\infty$, as $M$ is compact (see e.g.\ \cite{dunford_schwartz} Lemma IV.8.19), one can approximate any $f\in L^1$ in the $L^1$ norm by a continuous function, and thus the result holds for all $L^1$ functions $f$. \end{proof}
Thus the result follows using Lemma \ref{basicctylemma} and the fact that $\|\pi_n\|_1=1$. \end{proof}
We now compare $\hat{f}_{n,\xi(\delta,z),M,N}$ and $\hat{f}_{n,z,M,N}$. \begin{proposition}
$\|\hat{f}_{n,\xi(\delta,z),M,N}-\hat{f}_{n,z,M,N}\|_1\to 0$ as $\delta\to 0$. \end{proposition} \begin{proof} This result is more difficult to demonstrate as we need to firstly compare $\Psi^{(M)}(\xi(-N,z))$ with $\Psi^{(M)}(\xi(-N+\delta,z))$. To this end, consider \begin{eqnarray*}
\|\pi_n\mathcal{P}_{\xi(-N+\delta,z)}^{(M)}f-\pi_n\mathcal{P}_{\xi(-N,z)}^{(M)}f\|_1&=&\|\pi_n\mathcal{P}_{\xi(-N+\delta,z)}^{(M)}f-\pi_n\mathcal{P}_{\xi(-N,z)}^{(M+\delta)}f+\pi_n\mathcal{P}_{\xi(-N,z)}^{(M+\delta)}f -\pi_n\mathcal{P}_{\xi(-N,z)}^{(M)}f\|_1 \\
&\le&\|\pi_n\|_1\left(\|\mathcal{P}_{\xi(-N+\delta,z)}^{(M)}f-\mathcal{P}_{\xi(-N,z)}^{(M+\delta)}f\|_1+\|\mathcal{P}_{\xi(-N,z)}^{(M+\delta)}f -\mathcal{P}_{\xi(-N,z)}^{(M)}f\|_1\right) \\
&\le&\|\mathcal{P}_{\xi(-N+\delta,z)}^{(M)}({\rm Id}-\mathcal{P}_{\xi(-N,z)}^{(\delta)})f\|_1+\|(\mathcal{P}_{\xi(M-N,z)}^{(\delta)}-{\rm Id})\mathcal{P}_{\xi(-N,z)}^{(M)}f\|_1 \\
&\le&\|({\rm Id}-\mathcal{P}_{\xi(-N,z)}^{(\delta)})f\|_1+\|(\mathcal{P}_{\xi(M-N,z)}^{(\delta)}-{\rm Id})\mathcal{P}_{\xi(-N,z)}^{(M)}f\|_1 \\ \end{eqnarray*} The right hand side converges to zero as $\delta\to 0$ by Lemma \ref{basicctylemma}. This result implies that
$\|P^{(M)}(\xi(-N,z))-P^{(M)}(\xi(-N+\delta,z))\|\to 0$ as $\delta\to 0$ in whatever matrix norm we choose. Thus $\|\Psi^{(M)}(\xi(-N,z))
^{2M}-\Psi^{(M)}(\xi(-N+\delta,z))^{2M}\|=\|P^{(M)}(\xi(-N,z))^\top(P^{(M)}(\xi(-N,z))-P^{(M)}(\xi(-N+\delta,z)))+(P^{(M)}(\xi(-N,z))^\top-P^{(M)}(\xi(-N+\delta,z))^\top)P^{(M)}(\xi(-N+\delta,z))\|\to 0$ as $\delta\to 0$. By standard perturbation results, see e.g.\ \cite[Theorem II.5.1]{kato}, this implies that the eigenspaces $U^{(M)}_2(\xi(-N,z))$ and $U^{(M)}_2(\xi(-N+\delta,z))$ are close for sufficiently small $\delta$.
Thus $f_{n,\xi(-N,z),M}$ and $f_{n,\xi(-N+\delta,z),M}$ are close in $L^1$ norm. Now we need to push both of these forward by $\pi_n\mathcal{P}^{(N)}_{\xi(-N,z)}$. Since $\pi_n$ and $\mathcal{P}^{(N)}_{\xi(-N,z)}$ are $L^1$-contractions, this will not increase the norm of the difference, so $\|\hat{f}_{n,\xi(\delta,z),M,N}-\hat{f}_{n,z,M,N}\|_1$ will also be small. \end{proof}
\subsection{Oseledets functions for a 1D discrete time nonautonomous system} \label{discreteeigenmodesection}
We now examine the Oseledets functions for the system defined in Example \ref{Discreteeg}. We consider the approximation $\pi_{100}\mathcal{P}_{\omega}$ of rank $100$, which we obtain by Galerkin projection. We denote by $P(\omega)\in\mathbb{R}^{100\times 100}$ the Ulam matrix representing the action of $\pi_{100}\mathcal{P}_{\omega}$ on functions $f\in\mathcal{B}_{100}:=\mathop{\rm sp}\{\chi_{[(i-1)/100,i/100)}, i=1,\ldots,100\}$. The matrices $P(\sigma^{-k}\omega), k=-10,\ldots,10$ are constructed by following Algorithm 1 using $Q=100$.
We look for Oseledets functions for a particular aperiodic sequence $\omega$. To generate an aperiodic sequence, let $\tau\in\set{0,1}^\mathbb{N}$ be the binary expansion of $1/\sqrt{3}$. Extend $\tau$ to an element of $\set{0,1}^\mathbb{Z}$ by setting $\tau_i=0$ for all $i\leq 0$. Define $\omega_{i-25}=1+2\tau_i+\tau_{i+1}$ for each $i$. Then $\omega\in\Omega$ and the central $21$ terms of $\omega$ are \begin{equation}
\label{specialomega} \omega = (\ldots, 2, 3, 1, 2, 4, 4, 3, 2, 3, 1, \dot{1}, 2, 3, 2, 4, 3, 1, 2, 3, 1, 1 \ldots), \end{equation} where the dot denotes the zeroth term $\omega_0=1$.
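The driving sequence is easy to regenerate; the following is a quick sketch of the construction just described (double precision supplies roughly $50$ reliable bits of the expansion, ample for the central terms):
\begin{verbatim}
import math

x = 1 / math.sqrt(3)
tau = []                       # tau_1, tau_2, ... (1-indexed in the text)
for _ in range(40):            # double precision: ~50 reliable bits
    x *= 2
    tau.append(int(x))
    x -= int(x)
omega = [1 + 2 * tau[i] + tau[i + 1] for i in range(len(tau) - 1)]
# omega[k] here is omega_{k-24} in the notation above; omega[24] = omega_0
\end{verbatim}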
We calculate the eigenvalues of $(P^{(20)}(\sigma^{-10}\omega)^\top P^{(20)}(\sigma^{-10}\omega))^{1/40}$, where $P^{(20)}(\sigma^{-10}\omega)$ is defined as in (\ref{discretematrixcocycle}), and find the top three to be $$ L_1\approx 1.00, L_2\approx 0.84, L_3\approx 0.46. $$ As in Examples \ref{singlemapeg} and \ref{periodiceg}, the maps $T_i$ are piecewise affine with constant slope three, and so $\rho(\omega)=1/3$. Thus $\log L_2$ and $\log L_3$ may approximate isolated Lyapunov exponents in $\Lambda(\mathcal{P})$.
We follow Algorithm 2 to approximate the second Oseledets subspace $W_2^{(M,N)}(\sigma^k\omega)$ for $k=0,\ldots,5$, using $(M,N)=(20,10)$, see Figure \ref{fig:NonCS}. \begin{figure}
\caption{The Oseledets function approximations $f_2^{(M,N)}(\sigma^k\omega)$ for $M=20, N=10$, and $k=0,\ldots,5$, along with optimal thresholds (shown in dashed green), see Section \ref{1dcssection}.}
\label{fig:NonCS}
\end{figure}
In order to confirm the effectiveness of Algorithm 2 we calculate the $L^1$ distance $\Delta(N)$ between the normalisations of the vectors $w_2^{(2N,N)}(\sigma\omega)$ and $P(\omega) w_2^{(2N,N)}(\omega)$, for $N=2,\ldots,19$ with $M=40$. By property 3 of Theorem \ref{thm:main} this distance should be small if the family $W_2(\omega)$ is well approximated. A logarithmic plot of $\Delta(N)$ against $N$, see Figure \ref{fig:logplot}, shows the fast convergence of $w_2^{(2N,N)}(\omega)$ to an Oseledets subspace. \begin{figure}
\caption{A graph showing $\Delta(N)$ for $N=1,\ldots,19$.}
\label{fig:logplot}
\end{figure}
In Section \ref{1dcssection} we will see how to extract coherent sets from these functions.
\subsection{Oseledets functions in a 2D continuous time nonautonomous system} \label{ctscomputationssect}
We consider the following nonautonomous system on $M=[0,2\pi]\times[0,\pi]$, $t\in\mathbb{R}^+$: \begin{equation}\label{eq::travellingwave} \begin{split} \dot{x} &= c-A\sin(x-\nu t)\cos(y)\qquad\mod{2\pi}\\ \dot{y} &= A\cos(x-\nu t)\sin(y)\\ \end{split} \end{equation} This equation describes a travelling wave in a stationary frame of reference with rigid boundaries at $y=0$ and $y=\pi$, where the normal flow vanishes~\cite{Pierrehumbert91,SamelsonWiggins}. The streamfunction (Hamiltonian) of this system is given by \begin{equation}\label{eq::wavehamiltonian} s(x,y,t)=-cy+A\sin(x-\nu t)\sin(y). \end{equation} We set $c=0.5$, $A=1$, and the phase speed to $\nu=0.25$. The velocity field is $2\pi$-periodic in the $x$-direction, which allows us to study the flow on a cylinder. The velocity fields in a comoving frame for these parameters are shown in Figure~\ref{fig::vectorfield_nodisturb}. The closed recirculation regions adjacent to the walls ($y=0$ and $y=\pi$) move in the positive $x$-direction and are separated from the jet flowing region by the heteroclinic loops of fixed points, which are given below. \par This model can be simplified to an autonomous system with a steady streamfunction in the comoving frame by setting $X=x-\nu t$ and $Y=y$. The steady streamfunction is then given by $S(X,Y)=-(c-\nu)Y+A\sin(X)\sin(Y)$. Let $X_s=\sin^{-1}((c-\nu)/A)$ and $Y_s=\cos^{-1}((c-\nu)/A)$. In the comoving frame, the recirculation region at the wall $Y=0$ contains an elliptic point $q_1=(\pi/2,Y_s)$ and is bounded by the heteroclinic loop of the hyperbolic fixed points $p_1=(X_s,0)$ and $p_2=(\pi-X_s,0)$. Similarly, those elliptic and hyperbolic points at the wall $Y=\pi$ are $q_2=(3\pi/2,\pi-Y_s)$, $p_3=(\pi+X_s,\pi)$, and $p_4=(2\pi-X_s,\pi)$, respectively, see Figure~\ref{fig::vectorfield_nodisturb}. One may observe that there is a continuous family of \emph{invariant} sets
in the comoving frame as
any fixed level set of the streamfunction bounds an {invariant} set. In a stationary frame these elliptic and hyperbolic points (and their heteroclinic loops) are just translated in the $x$-direction. That is, any fixed level set of the time-dependent streamfunction~\eqref{eq::wavehamiltonian} is a (time-dependent) invariant manifold. We note, however, that the recirculation regions are distinguished from the remainder of the cylinder as they are separated from the jet flowing region, which has a different dynamical fate. In the subsequent sections we will perturb this somewhat ``degenerate'' system to destroy the continuum of invariant sets in the comoving frame and produce a small number of almost-invariant sets (see Definition \ref{aidefn}) in the comoving frame (or coherent sets in the stationary frame).
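For later reference, the unperturbed field (\ref{eq::travellingwave}) can be integrated directly; a minimal sketch with the parameter values above (the initial condition is an arbitrary point chosen inside the lower recirculation region, not one from the text):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c, A, nu = 0.5, 1.0, 0.25

def wave(t, xy):
    x, y = xy
    return [c - A * np.sin(x - nu * t) * np.cos(y),
            A * np.cos(x - nu * t) * np.sin(y)]

# a trajectory started inside the lower recirculation region
sol = solve_ivp(wave, (0.0, 50.0), [np.pi / 2, 0.5], max_step=0.1)
xs = np.mod(sol.y[0], 2 * np.pi)   # fold x back onto the cylinder
\end{verbatim}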
\begin{figure}
\caption{Vector fields in the comoving frame for the travelling wave flow~\eqref{eq::wavehamiltonian}, for $A=1.0$ and $c=0.5$. The red dots are the hyperbolic fixed points that are connected by the heteroclinic loops. The blue dots are elliptic points in the centre of recirculation regions.}
\label{fig::vectorfield_nodisturb}
\end{figure}
\subsubsection{A coherent family: Mixing case} \label{5.5.1} We modify the travelling wave model in the previous section to allow mixing in the jet flowing region. We add a perturbation to the system in the following way: \begin{equation}\label{eq::travellingwavemixing} \begin{split} \dot{x} &= c-A(\tilde{z}(t))\sin(x-\nu\tilde{z}(t))\cos(y)+\varepsilon G(g(x,y,\tilde{z}(t)))\sin(\tilde{z}(t)/2)\\ \dot{y} &= A(\tilde{z}(t))\cos(x-\nu \tilde{z}(t))\sin(y).\\ \end{split} \end{equation} Here, $\tilde{z}(t)=6.6685z_1(t)$, where $z_1(t)$ is generated by the Lorenz flow in Example~\ref{ex:LZ} with initial point $z(0)=(0,1,1.5)$, $A(\tilde{z}(t))=1+0.125\sin(\sqrt{5}\tilde{z}(t))$, $G(\psi):=1/{(\psi^2+1)}^2$ and the parameter function $\psi=g(x,y,\tilde{z}(t)):=\sin(x-\nu\tilde{z}(t))\sin(y)+y/2-\pi/4$ vanishes at the level set of the streamfunction of the unperturbed flow at instantaneous time $t=0$, i.e., $s(x,y,0)=\pi/4$, which divides the phase space in half.
We set $\varepsilon=1$ as this value is sufficiently large to ensure no KAM tori remain in the jet regime, but sufficiently small to maintain islands originating from the nested periodic orbits around the elliptic points of the unperturbed system.
We applied Algorithm \ref{ctsPalg} with $n=28800$, $M=80$, $N=40$, $z=(0,1,1.5)$, and Algorithm \ref{ctseigenmodealg} for $z=(0,1,1.5)$ and $z=\xi(10,(0,1,1.5))$. By using a relatively large number of test points per grid box ($Q=400$ test points per box $B_j$) we are able to flow for $M=80$ units of time and still well represent $\phi(80,\xi(-40,z),B_j)$. Figure \ref{fig::chaoticpushforward} shows that the resulting Oseledets functions highlight the remaining islands in the perturbed time-dependent flow. We calculate the eigenvalues of $(P^{(80)}(\xi(-40,z))^\top P^{(80)}(\xi(-40,z)))^{1/2}$, where $P^{(80)}(\xi(-40,z))$ is defined as in (\ref{continuousapproxulam}), and find the top three to be $$ L_1\approx 1.1100, L_2\approx 0.9691, L_3\approx 0.9676. $$
\begin{figure}
\caption{(a) Graph of approximate Oseledets function $W_2^{(80,40)}(z)$ produced by Algorithm \ref{ctseigenmodealg}. (b)-(e) Pushforwards of $W_2^{(80,40)}(z)$ via multiplication by $P^{(\tau)}(z)$ for $\tau=2.5, 5, 7.5, 10$. (f) $W_2^{(80,40)}(\xi(10,z))$ produced independently by Algorithm \ref{ctseigenmodealg}; compare with (e).}
\label{fig::chaoticpushforward}
\end{figure} By part 3 of Theorem \ref{thm:main} (bundle invariance of $W_2(z)$) we should have $P^{(10)}(z)W^{(80,40)}_2(z)\approx W^{(80,40)}_2(\xi(10,z))$. This is demonstrated in Figure \ref{fig::chaoticpushforward} by comparing subplots (e) and (f). In Section \ref{sec:ctsCS} we will see how to extract coherent sets from these Oseledets functions.
\section{Invariant Sets, Almost-Invariant Sets, and Coherent Sets} \label{csdefnsection}
We begin by briefly recounting some of the background relevant to almost-invariant sets. If $\Phi$ (resp.\ $\phi$) is autonomous, then $\Omega$ (resp. $\Xi$) consists of a single point, and we may write $\Phi(-1,\omega,x)=\Phi(-1,x)$ (resp.\ $\phi(-t,z,x)=\phi(-t,x)$). \begin{definition}\label{invdefn} In the autonomous setting, we call $A$ an \emph{invariant set} if $\Phi(-1,A)=A$ (resp. \ $\phi(-t,A)=A$ for all $t\ge 0$). \end{definition}
The following definition generalises invariant sets to \emph{almost-invariant sets}. In the continuous time case we define: \begin{definition} \label{aidefn} Let $\mu$ be preserved by the autonomous flow $\phi$. We will say that a set $A\subset M$ is \emph{$\rho_0$-almost-invariant} over the interval $[0,\tau]$ if \begin{enumerate}
\item \begin{equation} \label{rhomu} \rho_{\mu,s}(A):=\frac{\mu(A\cap\phi(-s,A))}{\mu(A)}\ge \rho_0 \end{equation} for all $s\in [0,\tau]$, \item $A$ is connected. \end{enumerate} \end{definition} If $A\subset M$ is \emph{almost-invariant} over the interval $[0,\tau]$, then for each $s\in[0,\tau]$, the probability (according to $\mu$) of a trajectory leaving $A$ at some time in $[0,s]$, and not returning to $A$ at time $s$ is relatively small.
In the discrete time setting, $\tau=1$, and the obvious changes are made in Definition \ref{aidefn}. By convention we ask that $A$ is connected; if $A$ is not connected, we consider each connected component to be an almost-invariant set for suitable $\rho_0$.
We now begin to discuss the nonautonomous setting. The notion of an invariant set is extended to an \textit{invariant family}. \begin{definition} \label{invariantset}\quad \begin{enumerate} \item \textbf{Discrete time:} We will call a family of sets $\{A_{\sigma^k\omega}\}$, $A_{\sigma^k\omega}\subset M$, $\omega\in\Omega$, $k\in\mathbb{Z}$ an \emph{invariant family} if $\Phi(-k,\omega,A_\omega)=A_{\sigma^{-k}\omega}$ for all $\omega\in\Omega$ and $k\in \mathbb{Z}^+$. \item \textbf{Continuous time:} We will call a family of sets $\{A_{\xi(t,z)}\}$, $A_{\xi(t,z)}\subset M$, $z\in\Xi$, $t\in\mathbb{R}$ an \emph{invariant family} if $\phi(-t,z,A_z)=A_{\xi(-t,z)}$ for all $z\in \Xi$ and $t\in \mathbb{R}^+$. \end{enumerate} \end{definition}
Motivated by a model of fluid flow, we imagine coherent sets as a family of \textit{connected} sets with the property that the set $A_\omega$ is \textit{approximately} mapped onto $A_{\sigma^k\omega}$ by $k$ iterations of the cocycle from ``time'' $\omega$; that is, $\Phi(k,\omega,A_\omega)\approx A_{\sigma^k\omega}$. The definition of coherent sets combines the properties of almost-invariant sets and an invariant family. As we now have a \emph{family} of sets we require one more property beyond those of Definition \ref{aidefn}, in addition to modifying the almost-invariance property. In the continuous time case we define: \begin{definition} \label{csdefn} Let $\mu$ be preserved by a flow $\phi$ and $0\le \rho_0\le 1$. Fix a $z\in\Xi$. We will say that a family $\{A_{\xi(t,z)}\}_{t\ge 0}$, $A_{\xi(t,z)}\subset M$, $t\ge 0$ is \emph{a family of $\rho_0$-coherent sets} over the interval $[0,\tau]$ if: \begin{enumerate} \item \begin{equation} \label{rhomucts} \rho_{\mu}(A_{\xi(t,z)},A_{\xi(t+s,z)}):=\frac{\mu(A_{\xi(t,z)}\cap\phi(-s,\xi(t+s,z),A_{\xi(t+s,z)}))}{\mu(A_{\xi(t,z)})}\ge \rho_0, \end{equation} for all $s\in [0,\tau]$ and $t\ge 0$, \item Each $A_{\xi(t,z)}$, $t\ge 0$ is connected, \item $\mu(A_{\xi(t,z)})=\mu(A_{\xi(t',z)})$ for all $t,t'\ge 0$,
\end{enumerate} \end{definition} In discrete time, we replace (\ref{rhomucts}) with \begin{equation} \label{rhomudiscrete} \rho_{\mu}(A_\omega):=\frac{\mu(A_\omega\cap\Phi(-1,\sigma\omega,A_{\sigma\omega}))}{\mu(A_\omega)}\ge \rho_0, \end{equation} $\tau$ necessarily becomes 1, and we make the obvious changes to the other items in Definition \ref{csdefn}.
We remark that by selecting some $A\subset M$ of positive $\mu$ measure and defining $A_{\xi(t,z)}:=\phi(t,z,A)$, $t\ge 0$, the family $\{A_{\xi(t,z)}\}_{t\ge 0}$, is a family of 1-coherent sets. Such a family is not of much dynamical interest, as there is nothing distinguishing this family from one constructed with another connected subset $A'\subset M$. We are not interested in these constructions of coherent sets, and in practice the numerical algorithm we present in the next section is unlikely to find such sets for chaotic systems.
\section{Coherent sets from Oseledets functions} \label{cssection}
We wish to find a family of sets $\{A_{z}\}$ so that \begin{equation} \label{rhomuctsdefn} \rho_{\mu}(A_{z},A_{\xi(s,z)}):=\frac{\mu(A_{z}\cap\phi(-s,\xi(s,z),A_{\xi(s,z)}))}{\mu(A_{z})} \end{equation} is large for $s\in[0,\tau]$.
We may rewrite the RHS of (\ref{rhomuctsdefn})
as \begin{eqnarray} \label{eqn3}\left(\int \chi_{A_z}\cdot\chi_{\phi(-s,\xi(s,z),A_{\xi(s,z)})}\,\mathrm{d}\mu\right)/\mu(A_z)
&=&\left(\int \mathcal{P}_z^{(s)}\chi_{A_z}\cdot\chi_{A_{\xi(s,z)}} \,\mathrm{d}\mu\right)/\mu(A_z). \end{eqnarray} For (\ref{eqn3}) to be large we require $\mathcal{P}_z^{(s)}\chi_{A_z}\approx \chi_{A_{\xi(s,z)}}$.
Let us now make a connection with the Oseledets functions $f_2(z)=\sum_{i=1}^n w_{2,i}(z)\chi_{B_i}$ where $w_2(z)\in W_2^{(M,N)}(z)$ obtained in Algorithm \ref{ctseigenmodealg}.
In the following discussion, we scale $f_2(z)$ so that
$\|f_2(z)\|_1=1$ for all $z\in\Xi$. To convert the family of Oseledets functions into a family of coherent sets, we modify a heuristic due to \cite{dellnitz_junge_99} that has been successfully used in the autonomous setting. The heuristic is to set $A_z=\{f_2(z)>0\}$, $z\in\Xi$.
We show that $\mathcal{P}_z^{(s)}f^+_2(z)-f^+_2(\xi(s,z))$ is small for moderate $s$ and $\lambda_2$ close to zero; we then heuristically infer that $\mathcal{P}_z^{(s)}\chi_{A_z}\approx \chi_{A_{\xi(s,z)}}$.
\begin{proposition} \label{heurprop}
Let $\lambda_2=\lim_{s\to\infty} (1/s)\log\|\mathcal{P}_z^{(s)}f_2(z)\|<0$ be the second largest Lyapunov exponent from Theorem \ref{thm:main} and $f_2(z)\in W_2(z)$ a corresponding Oseledets function, normalised so that $\|f_2(z)\|_1=1$. Given an $\epsilon>0$ there is an $S\ge 0$ so that $s\ge S$ implies $\|\mathcal{P}_z^{(s)}f^+_2(z)-f^+_2(\xi(s,z))\|_1\le (1-e^{(\lambda_2-\epsilon)s})/2$.
\end{proposition} \begin{proof}
Given $\epsilon>0$ we know that there exists $S\ge 0$ such that for all $s\ge S$ one has
$e^{\lambda_2-\epsilon}\le\|\mathcal{P}_z^{(s)}f_2(z)\|^{1/s}\le 1$. Since $\mathcal{P}_z^{(s)}f_2(z)=(\mathcal{P}_z^{(s)}f_2(z))^+-(\mathcal{P}_z^{(s)}f_2(z))^-$ and $\int \mathcal{P}_z^{(s)}f_2(z)\,\mathrm{d}m=0$, one has
$\|\mathcal{P}_z^{(s)}f_2(z)\|_1=\int (\mathcal{P}_z^{(s)}f_2(z))^++(\mathcal{P}_z^{(s)}f_2(z))^-\,\mathrm{d}m=2\int (\mathcal{P}_z^{(s)}f_2(z))^+\,\mathrm{d}m$. Thus $\int
(\mathcal{P}_z^{(s)}f_2(z))^+\,\mathrm{d}m\ge e^{(\lambda_2-\epsilon)s}/2$. Since $(\mathcal{P}_z^{(s)}f_2(z))^+\le \mathcal{P}_z^{(s)}f^+_2(z)$, one has $\|\mathcal{P}_z^{(s)}f^+_2(z)-(\mathcal{P}_z^{(s)}f_2(z))^+\|=\int \mathcal{P}_z^{(s)}f^+_2(z)-(\mathcal{P}_z^{(s)}f_2(z))^+\,\mathrm{d}m$. As
$\|f_2(z)\|=1$ and $\int f_2(z)\,\mathrm{d}m=0$, we have $\int f^+_2(z)\,\mathrm{d}m=1/2$ and since $\mathcal{P}_z^{(s)}$ preserves integrals, $\int \mathcal{P}_z^{(s)}f^+_2(z)\,\mathrm{d}m=1/2$. Thus, $\int \mathcal{P}_z^{(s)}f^+_2(z)-(\mathcal{P}_z^{(s)}f_2(z))^+\,\mathrm{d}m\le (1-e^{(\lambda_2-\epsilon)s})/2$.
\end{proof} The preceding discussion heuristically addresses item 1.\ of Definition \ref{csdefn}.
Regarding item 2 of Definition \ref{csdefn}, as we are extracting the sets $A_z$ from the Oseledets functions $f_2(z)$, the connectivity of the sets will depend on the regularity of the Oseledets functions. This is a delicate question and relatively little can be said formally at present. In the autonomous case, roughly speaking, one expects smooth eigenfunctions for Perron--Frobenius operators of smooth expanding systems \cite{keller_84,ruelle_82}, and eigendistributions (smooth in expanding directions, distributions in contracting directions) in uniformly hyperbolic settings \cite{bkl}. These properties may carry over to the non-autonomous setting; recent results in the bounded variation setting show they do \cite{froyland_lloyd_quas2}. If a small amount of noise is added by postmultiplying the Perron--Frobenius operator by a smoothing (e.g.\ diffusion) operator, then the Oseledets functions must be smooth. This physical addition of a small amount of noise is one way to guarantee regularity of the Oseledets functions and connectivity of the associated coherent sets.
Finally we note that if $\mu=m$ one has $\int f_2(z)(x)\,\mathrm{d}\mu(x)=0$ and so we must have $\mu(A_z)=1/2$ for all $z\in\Xi$. Thus, item 3.\ of Definition \ref{csdefn} is satisfied by the choice $A_z=\{f_2(z)>0\}$. If $\mu\neq m$, then it may be necessary to further tweak the choice of the $A_z$ to ensure that item 3.\ of Definition \ref{csdefn} is satisfied. This additional tweak is described in Algorithm \ref{csalg}.
\subsection{A numerical algorithm} For a fixed time $z\in\Xi$, we seek to approximate a pair of sets $A_{z}$ and $A_{\xi(\tau,z)}$ for which \begin{equation} \label{rhomuctsdefn2} \rho_{\mu}(A_{z},A_{\xi(\tau,z)}):=\frac{\mu(A_{z}\cap\phi(-\tau,\xi(\tau,z),A_{\xi(\tau,z)}))}{\mu(A_{z})} \end{equation} is maximal. The quantity $\rho_{\mu}(A_{z},A_{\xi(\tau,z)})$ is simply the fraction of $\mu$-measure of $A_z$ that is covered by a pullback of the set $A_{\xi(\tau,z)}$ over a duration of $\tau$. For maximal coherence, we wish to find pairs $A_z$, $A_{\xi(\tau,z)}$ that maximise $\rho_{\mu}(A_{z},A_{\xi(\tau,z)})$.
We present a heuristic to find such a pair of sets based upon the vectors $W^{(M,N)}(z)$ and $W^{(M,N)}(\xi(\tau,z))$ corresponding to some Lyapunov spectral value $\lambda$ close to 0. This heuristic is a modification of heuristics to determine maximal almost-invariant sets, see \cite{froyland_dellnitz_03,froyland_05,froyland_padberg_08}. In the terminology of the prior discussion in \S7, rather than setting $A_z:=\{f(z)>0\}$, we allow $A_z:=\{f(z)>c\}$ or $A_z:=\{f(z)<c\}$ for some $c\in\mathbb{R}$ in the hope of finding $A_z, A_{\xi(\tau,z)}$ with an even greater value of $\rho_{\mu}(A_{z},A_{\xi(\tau,z)})$. This additional flexibility also permits a matching of $\mu(A_z)$ and $\mu(A_{\xi(\tau,z)})$.
\begin{algorithm}[To determine a pair of maximally coherent sets at times $z, \xi(\tau,z)$] \label{csalg} \begin{enumerate} \quad \item Determine $W^{(M,N)}(z)$ and $W^{(M,N)}(\xi(\tau,z))$ for some $\tau>0$ according to Algorithm \ref{ctseigenmodealg}. \item Set $\hat{A}^+_z(c)=\bigcup_{i:(W^{(M,N)}(z))_i>c}B_i$ and $\hat{A}^+_{\xi(\tau,z)}(c)=\bigcup_{i:(W^{(M,N)}(\xi(\tau,z)))_i>c}B_i$, restricting the values of $c$ so that $\mu(\hat{A}^+_z(c)), \mu(\hat{A}^+_{\xi(\tau,z)}(c))\le 1/2$. These are sets constructed from grid boxes whose corresponding entry in the $W^{(M,N)}$ vectors is above a certain value. \item Define $\eta(c)=\mathop{\mathrm{argmin}}_{c'\in\mathbb{R}}
|\mu(\hat{A}^+_z(c))-\mu(\hat{A}^+_{\xi(\tau,z)}(c'))|$. Given a value of $c$, $\eta(c)$ determines the set $\hat{A}^+_{\xi(\tau,z)}(\eta(c))$ that best matches the $\mu$-measure of $\hat{A}^+_z(c)$, as required by item 3 of Definition \ref{csdefn}. \item Set $c^*=\mathop{\mathrm{argmax}}_{c\in \mathbb{R}} \rho_\mu(\hat{A}^+_z(c),\hat{A}^+_{\xi(\tau,z)}(\eta(c)))$. The value of $c^*$ is selected to maximise the coherence. \item Define $A_z:=\hat{A}^+_z(c^*)$ and $A_{\xi(\tau,z)}:=\hat{A}^+_{\xi(\tau,z)}(\eta(c^*))$. \end{enumerate} \end{algorithm}
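Steps 2--4 of Algorithm \ref{csalg} amount to a one-dimensional search over thresholds. The following is our own minimal \texttt{numpy} sketch (all helper names are ours): it assumes $\mu$ is given as box weights and estimates $\rho_\mu$ from the Ulam matrix of the flow over duration $\tau$.
\begin{verbatim}
import numpy as np

def best_threshold(w_z, w_tz, mu, P_tau):
    # w_z, w_tz: Oseledets vectors at times z and xi(tau, z);
    # mu: box weights of the invariant measure; P_tau: Ulam matrix
    # of the flow from z to xi(tau, z) (column j = image of box j).
    def rho(A, B):
        # mu(A intersect phi(-tau, B)) / mu(A): weight each box j in A
        # by the fraction of its mass landing in B after time tau
        into_B = P_tau[np.ix_(B, A)].sum(axis=0)
        return (mu[A] * into_B).sum() / mu[A].sum()

    cs_t = np.unique(w_tz)
    masses_t = np.array([mu[w_tz > c].sum() for c in cs_t])
    best = (-np.inf, None)
    for c in np.unique(w_z):
        A = np.flatnonzero(w_z > c)
        if A.size == 0 or mu[A].sum() > 0.5:
            continue
        # eta(c): threshold on w_tz matching the mu-measure of A
        B = np.flatnonzero(
            w_tz > cs_t[np.argmin(np.abs(masses_t - mu[A].sum()))])
        if B.size:
            r = rho(A, B)
            if r > best[0]:
                best = (r, c, A, B)
    return best   # (rho value, threshold c*, boxes of A_z, boxes of A_tz)
\end{verbatim}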
\begin{remark} \label{pmremark} \begin{enumerate} \quad \item One can repeat Algorithm \ref{csalg}, replacing $\hat{A}^+_z(c)$ and $\hat{A}^+_{\xi(\tau,z)}(c)$ with $\hat{A}^-_z(c)=\bigcup_{i:(W^{(M,N)}(z))_i<c}B_i$ and $\hat{A}^-_{\xi(\tau,z)}(c)=\bigcup_{i:(W^{(M,N)}(\xi(\tau,z)))_i<c}B_i$ respectively. See \cite{froyland_padberg_08} for further details. \item Care should be taken regarding the sign of $W^{(M,N)}(z)$ and $W^{(M,N)}(\xi(\tau,z))$. Visual inspection may be required in order to check that the vectors have the same ``parity''. \end{enumerate} \end{remark}
\subsection{Coherent Sets for a 1D discrete time nonautonomous system} \label{1dcssection}
We return to the map cocycle $\Phi$ and Perron--Frobenius cocycle described in Example \ref{Discreteeg} and identify coherent sets. We use two methods: firstly, inspection of the composition of maps as perturbations of maps with invariant sets, and secondly using the general method of Algorithm 5.
The map cocycle $\Phi$ is defined in terms of a map $H_a$ which has an almost-invariant set, and this gives rise to a family of coherent sets in the following way. Recall the definitions of the maps $T_i$, $i=1,\ldots,4$ and shift space $\Omega$ determined by the adjacency matrix $B$. The maps $T_i$ have the property that if $B_{i,j}=1$, then any inner $R$ factors cancel in $T_j\circ T_i$. More generally, for any $\omega\in\Omega$, we have cancellation of all intermediate $R$ factors: \begin{eqnarray}\label{eq:cancelR} \Phi(k,\omega,\cdot) = R^s\circ H_{a_{(\sigma^{k-1}\omega)_0}}\circ\cdots\circ H_{a_{\omega_0}}\circ R^{-t}, \end{eqnarray} where $s,t\in\set{0,1}$ are given by $$ s(\omega,k) = \left\{\begin{array}{ll} 0, &\omega_{k-1}\, \textrm{odd},\\ 1, &\omega_{k-1}\, \textrm{even}, \end{array}\right. \, \textrm{and}\ \ t(\omega,k) = \left\{\begin{array}{ll} 0, &\omega_{0} \leq 2\\ 1, &\omega_{0} >2. \end{array}\right. $$ For the map $H_0$, the interval $[0,0.5]$ is invariant. Moreover, $[0,0.5]$ is almost-invariant for $H_a$ with $\rho_{\mu}([0,0.5])= 1-2a$. By (\ref{eq:cancelR}), if we set \begin{equation} \label{heurA} \tilde{A}_{\sigma^k\omega}=R^{s(\omega,k)}([0,0.5]),\quad \mbox{ for each } k\in\mathbb{N}, \end{equation}
then $$\rho_\mu(\tilde{A}_{\sigma^k\omega},\tilde{A}_{\sigma^{k+1}\omega})=1-2a_{\omega_k}. $$ Thus $\set{\tilde{A}_{\sigma^k\omega}}_{k\in\mathbb{N}}$ is a family of $\rho_0$-coherent sets with $\rho_0=1-2\max\set{a_1,\ldots,a_4}=0.843$. In the same way, the invariant set $[0.5,1]$ of $H_0$ leads to a family $\set{R^{s(\omega,k)}([0.5,1])}_{k\in\mathbb{N}}$ of $\rho_0$-coherent sets with the same $\rho_0$.
In order to demonstrate the methods of this article, we now show how Algorithm \ref{csalg} can be used. We may use the Oseledets subspaces computed in Section \ref{discreteeigenmodesection} to find a family of coherent sets. First we apply Algorithm \ref{csalg} to find a coherent set for the time step $k=0$ to $k=1$. We calculate $\rho_\mu\left( \hat{A}^+_{\omega}(c), \hat{A}^+_{\sigma\omega}(c) \right)$ as $c$ varies over the elements of the vector $f_2^{(20,10)}(\omega)$; see Figure \ref{fig:NonFirst} (left). The maximum value of $\rho_\mu\left(\hat{A}^+_\omega(c),\hat{A}^+_{\sigma\omega}(\eta(c))\right)$ is $0.890$. The set $A_\omega$ is found to be the interval $[0.11,0.58]$ of length $\mu(A_\omega)=0.47$; see Figure \ref{fig:NonFirst} (right). \begin{figure}
\caption{(left): The function $\rho_\mu\left(\hat{A}^+_\omega(c),\hat{A}^+_{\sigma\omega}(\eta(c))\right)$ takes its maximum on the interval $(-0.352,-0.176)$ and so we take the midpoint $c^*=-0.264$ as the optimal threshold. (right): Taking this optimal threshold (shown in dashed green) for the eigenvector
$f_2^{(20,10)}(\omega)$ identifies the coherent set $A_\omega=[0.11,0.58]$ (shown in dark orange).}
\label{fig:NonFirst}
\end{figure}
We note that the set $A_\omega$ found by Algorithm \ref{csalg} is \emph{not} the same as the $\tilde{A}_\omega$ produced by the intuitive construction (\ref{heurA}). In the latter case, $\tilde{A}_\omega=[0,1/2]$, $\tilde{A}_{\sigma\omega}=[0,1/2]$, and $\rho_\mu(\tilde{A}_\omega,\tilde{A}_{\sigma\omega})=1-2a_{\omega_0}=1-2a_1=1-\pi/20\approx 0.843$, significantly lower than the value of 0.890 found using Algorithm \ref{csalg}.
We may extend Algorithm \ref{csalg} in order to find a sequence of coherent sets $\{A_{\sigma^i\omega}\}_{i=0}^K$. Since we require the measure of a sequence of coherent sets to be constant, we seek to maximize the mean value of $\rho_\mu$ over a given time range as we vary the measure of the sets.
\begin{algorithm}[To determine a sequence of maximally coherent sets over a range of times $\omega,\ldots, \sigma^K\omega$] \label{csseqalg} \begin{enumerate} \quad \item Follow steps 1.-3. of Algorithm \ref{csalg} for each $k=0,\ldots,K-1$ using $\tau=1$ to obtain sets $\hat{A}^+_{\sigma^k\omega}(c)$.
\item Let $c_k(\ell):=\mathop{\mathrm{argmin}}_{c\in\mathbb{R}} \left| \mu(\hat{A}^+_{\sigma^k\omega}(c)) - \ell \right|$. \item Compute $\ell^*:=\mathop{\mathrm{argmax}}_{\ell\in(0,0.5]} \frac{1}{K}\sum_{k=0}^{K-1} \rho_\mu\left( \hat{A}^+_{\sigma^k\omega}(c_k(\ell)), \hat{A}^+_{\sigma^{k+1}\omega}(c_{k+1}(\ell)) \right)$. \item For $k=0,\ldots,K-1$, define $A_{\sigma^k\omega}:= \hat{A}^+_{\sigma^k\omega}(c_k(\ell^*))$. \end{enumerate} \end{algorithm}
To demonstrate Algorithm \ref{csseqalg}, we use the approximate Oseledets functions $f_2(\sigma^k\omega)$, $k=0,\ldots,5$, to find a sequence of six coherent sets $\{A_{\sigma^k\omega}\}_{k=0}^5$ for the map cocycle $\Phi$. Plotting $\frac{1}{6}\sum_{k=0}^5\rho_\mu\left( \hat{A}^\pm_{\sigma^k\omega}(c_k(\ell)), \hat{A}^\pm_{\sigma^{k+1}\omega}(c_{k+1}(\ell)) \right)$ against $\ell$ (see Figure \ref{fig:RhoMean}), we find a unique maximum of $0.891$, which occurs at $\ell^*=0.47$.
\begin{figure}\label{fig:RhoMean}
\end{figure}
Figure \ref{fig:NonCS} shows the graph of $f_2^{(20,10)}(\sigma^k\omega)$ with the threshold $c_k(\ell^*)$ for $k=0,\ldots,5$, and in each case the set $\hat{A}^+_{\sigma^k\omega}(c_k(\ell^*))$ is indicated by shading. Since coherent sets are required to be connected, we must find the interval closest to each $\hat{A}^+_{\sigma^k\omega}(c_k(\ell^*))$. For $k=0,2,3,4,5$ the set $\hat{A}^+_{\sigma^k\omega}(c_k(\ell^*))$ is itself an interval and we set $A_{\sigma^k\omega}=\hat{A}^+_{\sigma^k\omega}(c_k(\ell^*))$. The set $\hat{A}^+_{\sigma\omega}(c_1(\ell^*))$ has two components, $[0.12,0.58]$ and $[0.60,0.61]$, and so we set $A_{\sigma\omega}=[0.12,0.59]$. Table \ref{tab:CoSets} lists the coherent sets $A_{\sigma^k\omega}$ and the values of $\rho_\mu\left( A_{\sigma^k\omega}, A_{\sigma^{k+1}\omega}\right)$ for $k=0,\ldots,5$. \begin{table}[htb]
\centering
\begin{tabular}{|r|cccccc|}
\hline $k$ & 0 & 1& 2& 3& 4& 5 \\ $\omega_k$ & 1 & 2 & 3 & 2 & 4 & 3 \\ $A_{\sigma^k\omega}$ & $[0.11,0.58]$ & $[0.12,0.59]$ & $[0.35,0.82]$ & $[0.07,0.54]$ & $[0.35,0.82]$ & $[0.35,0.82]$ \\ $\rho_k$ & $0.89$ & $0.87$ & $0.87$ & $0.96$ & $0.90$ & $0.87$ \\
\hline
\end{tabular}
\caption{Coherent sets $A_{\sigma^k\omega}$ and the values of $\rho_\mu\left( A_{\sigma^k\omega}, A_{\sigma^{k+1}\omega}\right)$ for $k=0,\ldots,5$.}
\label{tab:CoSets} \end{table}
It is interesting to compare the locations of the coherent sets with their corresponding maps in the mapping cocycle, see Figure \ref{fig:Squares}. \begin{figure}\label{fig:Squares}
\end{figure} As for the previously constructed family $\set{R^{s(\omega,k)}([0.5,1])}_{k\in\mathbb{N}}$, the coherent sets alternate between two positions separated by a rotation of approximately $0.25$. However, the mean value of $\rho_\mu$ is greater for the sequence $A_{\sigma^k\omega}$ constructed from Algorithm \ref{csseqalg} since in each case the coherent set matches up well with local maxima and minima of the preceding map.
\subsection{Coherent Sets in a 2D continuous time nonautonomous system} \label{sec:ctsCS}
We apply Algorithm \ref{csalg} to the Oseledets subspaces $W^{(80,40)}(z)$ and $W^{(80,40)}(\xi(10,z))$ calculated in Section \ref{5.5.1} and displayed in Figure \ref{fig::chaoticpushforward}.
The optimal coherent sets $\hat{A}^+_{z}$ and $\hat{A}^+_{\xi(10,z)}$ are obtained at the threshold values $c^*=0.0043$ and $\eta(c^*)=0.0052$, which gives $\rho_\mu(\hat{A}^+_{z}(c^*),\hat{A}^+_{\xi(10,z)}(\eta(c^*)))=0.9605$, see Figure~\ref{fig::thresholdcurvechaotic3}. \begin{figure}
\caption{Thresholding curves $\rho_\mu(\hat{A}^{-}_{z}(c),\hat{A}^{-}_{\xi(10,z)}(\eta(c)))$ and $\rho_\mu(\hat{A}^{+}_{z}(c),\hat{A}^{+}_{\xi(10,z)}(\eta(c)))$ are plotted in grey and black, respectively. The optimal threshold is marked with a rectangle.}
\label{fig::thresholdcurvechaotic3}
\end{figure} For $\hat{A}^-_{z}$ and $\hat{A}^-_{\xi(10,z)}$ the optimal threshold is at $c^*=-0.0040$ and $\eta(c^*)=-0.0051$ and $\rho_\mu(\hat{A}^-_{z}(c^*),\hat{A}^-_{\xi(10,z)}(\eta(c^*)))=0.9599$. The coherent sets at $\hat{A}^\pm_{z}$ and $\hat{A}^\pm_{\xi(10,z)}$ and the images of sample points in $\hat{A}^\pm_{z}$ are shown in
Figure~\ref{fig::coherentsetwavechaotic}.
\begin{figure}
\caption{[Top left] The coherent sets $A^+_{z}$ (grey) and $A^-_{z}$ (black) and [Top right] $A^+_{\xi(10,z)}$ (grey) and $A^-_{\xi(10,z)}$ (black) are identified by thresholding the Oseledets functions. Overlays of $\phi(10,z,A^+_{z})$ (grey) and $\phi(10,z,A^-_{z})$ (black) are also shown. [Bottom left] Zooms of $A^{+}_{z}$ and $A^-_{z}$. [Bottom right] Overlays of $\phi(10,z,A^{\pm}_z)$ (grey/black dots) on $A^{\pm}_{\xi(10,z)}$ (white), displaying the loss of mass over a duration of 10 time units from $z=(0,1,1.5)$.}
\label{fig::coherentsetwavechaotic}
\end{figure}
In Figure \ref{fig::coherentsetwavechaotic} we note that the grey set $A^+_z$ on the left at time $z=(0,1,1.5)$ flows approximately to the light grey set $A^+_{\xi(10,z)}$ on the right at time $\xi(10,z)$. Similarly for the black sets $A^-_z$ and $A^-_{\xi(10,z)}$. This carrying of the time $z$ coherent sets to the time $\xi(10,z)$ coherent sets by the aperiodic flow is only approximate, as $\rho_\mu(\hat{A}^+_{z}(c^*),\hat{A}^+_{\xi(10,z)}(\eta(c^*)))=0.9605$ and $\rho_\mu(\hat{A}^-_{z}(c^*),\hat{A}^-_{\xi(10,z)}(\eta(c^*)))=0.9599$. Thus, we expect a loss of about 4\% under the advection of the flow. Figure \ref{fig::coherentsetwavechaotic} also zooms onto $A^+_{z}$ and $A^-_{z}$ to demonstrate this loss of mass. To make this loss even more apparent, we continue to flow forward for 50 time units. \begin{figure}
\caption{Trajectories of the perturbed system~(\ref{eq::travellingwavemixing}) for $\varepsilon=1$. The large light grey ($A^+_{\xi(t,z)}$) and black ($A^-_{\xi(t,z)}$) blobs are the coherent sets identified by our approach. The other (medium grey) blobs are chosen nearby the coherent sets to show strong mixing away from the coherent regions.}
\label{fig::wavechaoticmixing}
\end{figure} Figure~\ref{fig::wavechaoticmixing} shows that the coherent sets $A^+_{z}$ (light grey) and $A^-_{z}$ (black) do indeed disperse over time, although at a much slower rate than the arbitrarily chosen (medium grey) sets. The coherent sets $A^+_{z}, A^-_{z}$ are just single elements of a time parameterised family $\{A^+_{\xi(t,z)}, A^-_{\xi(t,z)}\}_{t\ge 0}$ of coherent sets that at any given initial time describe those sets that will disperse most slowly over a duration of 10 time units.
\section{Final Remarks} We have formulated a new mathematical and algorithmic approach for identifying and tracking coherent sets in nonautonomous systems. Our new approach generalises existing successful transfer operator methodologies that have been used in the autonomous setting. Our constructions address the question raised by \cite{liu_haller_04} of how to study strange eigenmodes and persistent patterns observed in forced fluid flows in the general time-dependent situation. Future work will include applying these techniques to detect and track mobile coherent regions in oceanic and atmospheric flows, extending significantly the flow times studied in \cite{froyland_padberg_england_treguier_07,npg} and \cite{SFM}.
\end{document}
\begin{document}
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\bra}[1]{\langle#1|}
\newcommand{\braket}[2]{\langle#1|#2\rangle}
\newcommand{\kb}[2]{|#1\rangle\langle#2|}
\newcommand{\kbs}[3]{|#1\rangle_{#3}\phantom{i}_{#3}\langle#2|}
\newcommand{\kets}[2]{|#1\rangle_{#2}}
\newcommand{\bras}[2]{\phantom{i}_{#2}\langle#1|}
\centerline{\large{\bf Information transfer in leaky atom-cavity systems}}
\vskip 0.2in
\centerline{B. Ghosh\footnote{Email: [email protected]}, A. S. Majumdar\footnote{Email: [email protected]} and N. Nayak\footnote{Email: [email protected]}}
\vskip 0.2in
\centerline{S. N. Bose National Centre for Basic Sciences, Salt Lake, Kolkata 700 098, India} \date{\today}
\vskip 0.5cm \begin{abstract} We consider first a system of two entangled cavities and a single two-level atom passing through one of them. A ``monogamy'' inequality for this tripartite system is quantitatively studied and verified in the presence of cavity leakage. We next consider the simultaneous passage of two-level atoms through both the cavities. Entanglement swapping is observed between the two-cavity and the two-atom systems. Cavity dissipation leads to a quantitative reduction of information transfer while preserving the basic swapping property. \end{abstract}
\section{Introduction}
Quantum entanglement is endowed with certain curious features. Unlike classical correlations, quantum entanglement cannot be freely shared among many quantum systems. It has been observed that a quantum system being entangled with another one limits its possible entanglement with a third system. This has been dubbed the ``monogamous nature of entanglement'', which was first proposed by Bennett\cite{bennett}. If a pair of two-level quantum systems $A$ and $B$ have a perfect quantum correlation, namely, if they are in a maximally entangled state $\Psi^{-}=(\ket{01}-\ket{10})/\sqrt{2}$, then the system $A$ cannot be entangled to a third system $C$. This indicates that there is a limitation in the distribution of entanglement, and several efforts have been devoted to capturing this unique property of ``monogamy of quantum entanglement'' in a quantitative way for tripartite and multipartite systems\cite{brub,ckw,osborne}. Another distinctive property of quantum entanglement for multipartite systems is the possibility of entanglement swapping between two or more pairs of qubits. Using this property, two parties that never interacted in their history can be entangled\cite{pan}. There may indeed exist a deeper connection between the characteristics of ``monogamy'' and entanglement swapping, since the features of the distribution and transfer of quantum information are essentially reflected in both of these properties.
Practical realizations of various features of quantum entanglement are obtained in atom-photon interactions in optical and microwave cavities\cite{raimond}. Recently, some studies have been performed to quantify the entanglement obtained between atoms through atom-photon interactions in cavities\cite{masiak,datta}. An important attribute of real devices is the ubiquitous presence of dissipative effects in them. These have to be monitored in order for the effects of quantum correlations to survive till detection. The consequences of cavity leakage for information transfer in the micromaser have been quantified recently\cite{datta}. It is natural to expect other characteristics of entanglement, such as its ``monogamous'' nature and its exchange or swapping, to be affected by dissipative processes. Atom-photon interactions in cavities are thus a sound arena for quantitative investigations of different aspects of quantum entanglement in realistic situations.
With the above motivation we perform a quantitative study of the monogamy of quantum entanglement and its swapping in dissipative atom-photon interactions in microwave cavities. We focus on a system of two single-mode cavities which are initially empty and subsequently entangled. We then consider the passage of a two-level atom through either or both of them. In the next section we first consider a tripartite pure system (two ideal cavities and one atom) and study the features of ``monogamy'' exhibited between the atom-cavity and the cavity-cavity entanglements. In particular, we demonstrate the applicability of the Coffman-Kundu-Wootters (CKW)\cite{ckw} ``monogamy'' inequality to this system. We next consider a realistic cavity with photon leakage, and repeat the above analysis keeping in mind the recently conjectured validity of the CKW inequality extended for mixed states\cite{osborne}. We find that cavity dissipation could lead to interesting possibilities, such as the enhancement of the entanglement between the atom and the cavity mode that it interacts with, a feature that could be understood by the ``monogamous'' behaviour of entanglement. In section III we consider a four-qubit system (two cavities and two atoms) where our goal is to observe entanglement swapping, or the transfer of entanglement from the initially entangled two cavities to the two atoms. Here again, we first perform the analysis with ideal cavities, and then consider the effects of cavity leakage on entanglement swapping. We present some concluding remarks in section IV.
\section{Monogamy of entanglement in a system of two cavities and a single atom}
\subsection{Pure state of three qubits}
We first consider two ideal cavities which can be maximally entangled by sending a single circular Rydberg atom prepared in the excited state through two identical and initially empty high-Q cavities ($C_1$ and $C_2$)\cite{davidovich}. The initial state of the two-cavity entangled system can be written as \begin{eqnarray} \ket{\Psi}_{C_1C_2}=\frac{1}{\sqrt2}(\ket{0_{1}1_{2}}+\ket{1_{1}0_{2}}), \end{eqnarray} where the indices 1 and 2 refer to the first and second cavity, respectively. In this set-up we consider the passage of a two-level Rydberg atom $A_1$ prepared in the ground state $\ket{g}$ through the cavity $C_1$. We consider resonant interaction between the two-level atom and the cavity mode. The interaction Hamiltonian in the rotating-wave approximation for the atom-cavity system is \begin{eqnarray} H_I=g(\sigma^+ a+\sigma^-a^\dagger), \end{eqnarray} where $a^\dagger$ and $a$ are the usual creation and destruction operators of the radiation field and $\sigma^+(\sigma^-)$ are atomic operators analogous to the Pauli spin raising and lowering operators obeying the commutation relation $[\sigma^+,\sigma^-]=2\sigma_z$, where $\sigma_z=+1/2(-1/2)$ represents the atom in the upper (lower) state. $g$ is the atom-field interaction constant
(or $gt$ the Rabi angle). The dynamics of the atom-photon interaction is governed by the equation \begin{eqnarray} \dot\rho = -i[H_I,\rho] \end{eqnarray} with joint three-party initial ($t=0$) state corresponding to \begin{eqnarray} \ket{\Psi(t=0)}_{C_1C_2A_1}=\frac{1}{\sqrt2}(\ket{0_{1}1_{2}}+\ket{1_{1}0_{2}}) \otimes\ket{g_1} \end{eqnarray} Hence, a two-level atom entering the empty cavity in the upper state ($\ket{e}$) evolves to \begin{eqnarray} \ket{\Psi_e(t)}&=&e^{-iH_{I}t}\ket{e,0}\nonumber\\ &=&\cos(gt)\ket{e,0}+\sin(gt)\ket{g,1} \end{eqnarray} at some time $t$, and similarly, a two-level Rydberg atom entering the one photon cavity in the ground state evolves to \begin{eqnarray} \ket{\Psi_g(t)}&=&e^{-iH_{I}t}\ket{g,1}\nonumber\\ &=&\cos(gt)\ket{g,1}-\sin(gt)\ket{e,0} \end{eqnarray}
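Up to phase conventions which do not affect the concurrences computed below, Eqs. (5) and (6) follow from the observation that $H_I$ couples only the pair of states $\{\ket{e,0},\ket{g,1}\}$, on which it acts as $g\sigma_x$; hence $e^{-iH_{I}t}=\cos(gt)\,{\bf 1}-i\sin(gt)\,\sigma_x$ on this two-dimensional subspace, while the state $\ket{g,0}$ is left unchanged.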
\vskip 0.4cm
\begin{figure}
\caption{A two-level Rydberg atom prepared in the ground state
passing through one of the maximally entangled cavities, $C_1$.}
\label{f1}
\end{figure}
For any interaction time $t$ the evolved state is given by \begin{eqnarray} \ket{\Psi(t)}_{C_1C_2A_1}=&&\frac{1}{\sqrt2}(\ket{0_{1}1_{2}g_{1}}+ \cos{gt}\ket{1_{1}0_{2}g_{1}}\nonumber \\ &&-\sin{gt}\ket{0_{1}0_{2}e_{1}}) \end{eqnarray} \begin{eqnarray} \rho(t)_{C_1C_2A_1}=\kb{\Psi(t)_{C_1C_2A_1}}{\Psi(t)_{C_1C_2A_1}} \end{eqnarray}
The reduced density states of the pairs $C_1C_2$, $C_2A_1$, $C_1A_1$ are given by \begin{eqnarray} \rho(t)_{C_1C_2}&=&\textrm{Tr}_{A_1}(\rho(t)_{C_1C_2A_1}),\nonumber\\ &=&\frac{1}{2}\kb{0_11_2}{0_11_2}+\frac{\cos^2{gt}}{2}\kb{1_10_2}{1_10_2} \nonumber\\ &+&\frac{\sin^2{gt}}{2}\kb{0_10_2}{0_10_2}+\frac{\cos{gt}}{2}\kb{0_11_2}{1_10_2}\nonumber\\ &+&\frac{\cos{gt}}{2}\kb{1_10_2}{0_11_2}. \end{eqnarray} \begin{eqnarray} \rho(t)_{C_2A_1}&=&\textrm{Tr}_{C_1}(\rho(t)_{C_1C_2A_1}),\nonumber\\ &=&\frac{1}{2}\kb{1_2g_1}{1_2g_1}+\frac{\cos^2{gt}}{2}\kb{0_2g_1}{0_2g_1} \nonumber\\ &+&\frac{\sin^2{gt}}{2}\kb{0_2e_1}{0_2e_1}-\frac{\sin{gt}}{2}\kb{1_2g_1}{0_2e_1} \nonumber\\ &-&\frac{\sin{gt}}{2}\kb{0_2e_1}{1_2g_1}. \end{eqnarray} \begin{eqnarray} \rho(t)_{C_1A_1}&=&\textrm{Tr}_{C_2}(\rho(t)_{C_1C_2A_1}),\nonumber\\ &=&\frac{1}{2}\kb{0_1g_1}{0_1g_1}+\frac{\cos^2{gt}}{2}\kb{1_1g_1}{1_1g_1} \nonumber\\ &+&\frac{\sin^2{gt}}{2}\kb{0_1e_1}{0_1e_1}-\frac{\sin{gt}\cos{gt}}{2}\kb{1_1g_1}{0_1e_1}\nonumber\\ &-&\frac{\sin{gt}\cos{gt}}{2}\kb{0_1e_1}{1_1g_1}. \end{eqnarray} We now compute the mixed-state bipartite entanglement measure (Concurrence)\cite{hill} for different pairs. These are given by \begin{eqnarray}
{\it C}(\rho(t)_{C_1C_2})&=&|\cos{gt}|, \\
{\it C}(\rho(t)_{C_2A_1})&=&|\sin{gt}|, \\
{\it C}(\rho(t)_{C_1A_1})&=&|\cos{gt}\sin{gt}| \end{eqnarray} and are plotted in Figure~\ref{f2} for varying Rabi angle, clearly reflecting the monogamous nature of entanglement between the pairs $C_1C_2$ and $C_2A_1$. \vskip 0.2in \begin{figure}
\caption{${\it C}(\rho(t)_{C_1C_2})$ (solid line), ${\it C}(\rho(t)_{C_2A_1})$, (dotted line), ${\it C}(\rho_{C_1A_1})$ (broken line) plotted with respect to the Rabi angle $gt$.}
\label{f2}
\end{figure} \vskip 0.2in
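The above concurrences can be read off from the standard result that, for a two-qubit state of the form $\rho=p_1\kb{01}{01}+p_2\kb{10}{10}+p_3\kb{00}{00}+c\,(\kb{01}{10}+\kb{10}{01})$ with no population in $\ket{11}$, the concurrence is ${\it C}(\rho)=2|c|$. For instance, for $\rho(t)_{C_1C_2}$ above one has $c=\cos{gt}/2$, giving ${\it C}(\rho(t)_{C_1C_2})=|\cos{gt}|$.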
The CKW inequality\cite{ckw} for the tripartite pure state ${\rho(t)}_{C_2C_1A_1}$:\\ ${{\it C}^2_{C_2C_1}}+{{\it C}^2_{C_2A_1}}\le{{\it C}^2_{C_2(C_1A_1)}}$ \\ reduces to $\cos^2{gt}+\sin^2{gt}=1$ in this case.
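Here the right-hand side equals unity for all $t$: for the pure state (7) one has ${\it C}_{C_2(C_1A_1)}=2\sqrt{\textrm{det}\rho_{C_2}}$\cite{ckw}, and tracing Eq. (8) over $C_1$ and $A_1$ gives $\rho_{C_2}=\frac{1}{2}(\kb{0_2}{0_2}+\kb{1_2}{1_2})$, so that ${\it C}^2_{C_2(C_1A_1)}=1$ and the inequality is in fact saturated.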
\subsection{Effects of cavity dissipation on entanglement}
Let us now investigate the above case in the presence of cavity dissipation. Since the lifetime of a two-level Rydberg atom is usually much longer than the atom-cavity interaction time, we can safely neglect atomic dissipation. The dynamics during the flight of the atom is governed by the evolution equation \begin{eqnarray}
\dot\rho=\dot\rho|_{\textrm{atom-field}}+\dot\rho|_{\textrm{field-reservoir}}, \end{eqnarray} where the strength of the couplings are given by the parameters $\kappa$ (the cavity leakage constant) and $g$ (the atom-field interaction constant). At temperature $T=0K$ the average thermal photon number is zero, and hence one has\cite{agrawal} \begin{eqnarray}
\dot\rho|_{\textrm{field-reservoir}}=-\kappa(a^\dagger a \rho-2a\rho a^\dagger+ \rho a^\dagger a). \end{eqnarray} When $g \gg \kappa$, it is possible to make a secular approximation\cite{haroche} while solving the complete evolution equation by combining Eqs.~(3) and (16) in order to get the density elements of $\rho(t)_{C_1C_2A_1}$. We also work under a further approximation (that is justified when the cavity is close to $0K$) that the probability of getting two or more photons inside the cavities is zero, or in other words, a cavity always remains in the two-level state
comprising $\ket{0}$ and $\ket{1}$. The tripartite (mixed) state is then obtained to be \begin{eqnarray} \rho(t)_{C_1C_2A_1}&&=\alpha_1\kb{0_11_2g_1}{0_11_2g_1}\nonumber\\ &&+\alpha_2\kb{1_10_2g_1}{1_10_2g_1}\nonumber\\ &&+\alpha_3\kb{0_10_2e_1}{0_10_2e_1}\nonumber\\ &&+\alpha_4\kb{0_11_2g_1}{1_10_2g_1}\nonumber\\ &&+\alpha_4\kb{1_10_2g_1}{0_11_2g_1}\nonumber\\ &&+\alpha_5\kb{1_10_2g_1}{0_10_2e_1}\nonumber\\ &&-\alpha_5\kb{0_10_2e_1}{1_10_2g_1}\nonumber\\ &&+\alpha_6\kb{0_10_2e_1}{0_11_2g_1}\nonumber\\ &&-\alpha_6\kb{0_11_2g_1}{0_10_2e_1}, \end{eqnarray} where the $\alpha_i$ are given by \begin{eqnarray*} \alpha_1&=&(1-\frac{e^{-\kappa_1 t}}{2})e^{-2\kappa_2 t},\\ \alpha_2&=&(\cos^2{gt})e^{-\kappa_1 t}(1-\frac{e^{-2\kappa_2 t}}{2}),\\ \alpha_3&=&(\sin^2{gt})e^{-\kappa_1 t}(1-\frac{e^{-2\kappa_2 t}}{2}),\\ \alpha_4&=&\frac{(\cos{gt})e^{-\frac{\kappa_1 t}{2}}e^{-\kappa_2 t}}{2},\\ \alpha_5&=&i(\sin{2gt})e^{-\kappa_1 t}(1-\frac{e^{-2\kappa_2 t}}{2}),\\ \alpha_6&=&i(\frac{e^{-\frac{\kappa_1 t}{2}}\sin{gt}}{2}-\frac{\kappa_1e^{-\frac{\kappa_1 t}{2}}\cos{gt}}{4g}+\frac{\kappa_1}{4g})e^{-\kappa_2 t}, \end{eqnarray*} $\kappa_1$ and $\kappa_2$ are the leakage constants for cavity $C_1$ and $C_2$ respectively. The reduced density states of the pairs $C_1C_2$, $C_2A_1$, $C_1A_1$ are thus given by \begin{eqnarray} \rho(t)_{C_1C_2}&=&\textrm{Tr}_{A_1}(\rho_{C_1C_2A_1}),\nonumber\\ &=&\alpha_1\kb{0_11_2}{0_11_2}+\alpha_2\kb{1_10_2}{1_10_2} \nonumber\\ &+&\alpha_3\kb{0_10_2}{0_10_2}+\alpha_4\kb{0_11_2}{1_10_2}\nonumber\\ &+&\alpha_4\kb{1_10_2}{0_11_2}. \end{eqnarray} \begin{eqnarray} \rho(t)_{C_2A_1}&=&\textrm{Tr}_{C_1}(\rho_{C_1C_2A_1}),\nonumber\\ &=&\alpha_1\kb{1_2g_1}{1_2g_1}+\alpha_2\kb{0_2g_1}{0_2g_1} \nonumber\\ &+&\alpha_3\kb{0_2e_1}{0_2e_1}-\alpha_6\kb{1_2g_1}{0_2e_1} \nonumber\\ &+&\alpha_6\kb{0_2e_1}{1_2g_1}. \end{eqnarray} \begin{eqnarray} \rho(t)_{C_1A_1}&=&\textrm{Tr}_{C_2}(\rho_{C_1C_2A_1}),\nonumber\\ &=&\alpha_1\kb{0_1g_1}{0_1g_1}+\alpha_2\kb{1_1g_1}{1_1g_1} \nonumber\\ &+&\alpha_3\kb{0_1e_1}{0_1e_1}+\alpha_5\kb{1_1g_1}{0_1e_1}\nonumber\\ &-&\alpha_5\kb{0_1e_1}{1_1g_1}. \end{eqnarray} and one can obtain the respective concurrences. These, namely, ${\it C}(\rho(t)_{C_1C_2})$, ${\it C}(\rho(t)_{C_1A_1})$, and ${\it C}(\rho(t)_{C_2A_1})$ are plotted with respect to the Rabi angle $gt$ in Figure~\ref{f3}. As expected, dissipation reduces the respective concurrences. However, the ``monogamous'' character, or the `complementarity' between ${\it C}(\rho(t)_{C_1C_2})$ and ${\it C}(\rho(t)_{C_2A_1})$ is maintained even with cavity leakage. \vskip 0.5cm
\begin{figure}
\caption{${\it C}(\rho(t)_{C_1C_2})$ (solid line), ${\it C}(\rho(t)_{C_2A_1})$, (dotted line), ${\it C}(\rho(t)_{C_1A_1})$ (broken line) plotted with respect to the Rabi angle $gt$. $\frac{\kappa_1}{g}=\frac{\kappa_2}{g}=0.1$.}
\label{f3}
\end{figure}
\vskip 0.2in
To verify the CKW inequality for the mixed state $\rho(t)_{C_1C_2A_1}$, one has to average ${\it C}(\rho(t)_{C_2(C_1A_1)})$ over all pure state decompositions\cite{osborne}. We, however, adopt a utilitarian point of view, and for small $\kappa$ take ${\it C}(\rho(t)_{C_2(C_1A_1)})\approx 2\sqrt{\textrm{det}\rho_{C_2}}$. Note that this result holds exactly for a pure state\cite{ckw}. Nevertheless, for a small value of $\kappa$ and for a bipartite photon field, one stays very close to a pure state. In Figure~\ref{f4} we plot the left- and right-hand sides (${{\it C}^2_{C_2C_1}}+{{\it C}^2_{C_2A_1}}$ and ${\it C}^2_{C_2(C_1A_1)}$, respectively) of the corresponding CKW inequality and observe that it always holds under the above approximation.
\vskip 0.3in \begin{figure}\label{f4}
\end{figure} \vskip 0.2in
An interesting feature of the entanglement obtained between the atom $A_1$ and the cavity $C_1$ with which it directly interacts is displayed in Figure~\ref{f5}, where ${\it C}(\rho(t)_{A_1C_1})$ is plotted versus the dissipation parameter $\kappa$. Note that the concurrence increases with increasing cavity loss. This happens because the cavity leakage reduces the initial entanglement between $C_1$ and $C_2$, and hence makes room for the subsequent entanglement between $C_1$ and $A_1$ to form. The dissipative mechanism thus provides a striking confirmation of the ``monogamous'' character of entanglement. The role of the dissipative environment in creating desired forms of entanglement has been revealed earlier in the literature\cite{braun}. The present case can also be viewed as a further example of this kind.
\vskip 0.2in
\begin{figure}
\caption{${\it C}(\rho(t)_{A_1C_1})$ (solid line) for $gt=\pi/4$, ${\it C}(\rho(t)_{A_1C_1})$ (broken line) for $gt=3\pi/4$, ${\it C}(\rho(t)_{A_1C_1})$, (dotted line) for $gt=5\pi/4$
plotted with respect to $\log(\kappa/g)$, where $\kappa/g=\kappa_1/g=\kappa_2/g$.}
\label{f5}
\end{figure} \vskip 0.2in
\section{Entanglement swapping in a system of two cavities and two atoms}
\subsection{Ideal case of four qubits}
In this section we consider a few aspects of entanglement swapping, i.e., the transfer of entanglement from the two-cavity to the two-atom system. Such a scheme can be effected by sending two Rydberg atoms $A_1$, $A_2$ prepared in their ground states $g_1$, $g_2$ through the two maximally entangled cavities $C_1$, $C_2$ respectively. The times of flight of the atoms through the cavities are the same. So at $t=0$, the state of the total system is \begin{eqnarray} \ket{\Psi(t=0)}_{C_1C_2A_1A_2}=\frac{1}{\sqrt2}(\ket{0_{1}1_{2}}+ \ket{1_{1}0_{2}})\otimes\ket{g_1 g_2} \end{eqnarray}
\vskip 0.2in
\begin{figure}
\caption{Two Rydberg atoms $A_1$, $A_2$ prepared in the ground states $g_1$, $g_2$ passing through two maximally entangled cavities $C_1$, $C_2$ respectively.}
\label{f6}
\end{figure}
\vskip 0.2in
For any interaction time $t$ the evolved state is \begin{eqnarray} \ket{\Psi(t)}_{C_1C_2A_1A_2}&&=\frac{1}{\sqrt2}(\cos{gt}\ket{0_1 1_2 g_1 g_2} -\sin{gt}\ket{0_1 0_2 g_1 e_2}\nonumber\\ &&+\cos{gt}\ket{1_1 0_2 g_1 g_2}-\sin{gt}\ket{0_1 0_2 e_1 g_2}) \end{eqnarray} \begin{eqnarray} \rho(t)_{C_1C_2A_1A_2}=\kb{\Psi(t)_{C_1C_2A_1A_2}}{\Psi(t)_{C_1C_2A_1A_2}} \end{eqnarray} The reduced density states of the pairs $C_1C_2$, $A_1A_2$ are given by \begin{eqnarray} \rho(t)_{C_1C_2}&=&\textrm{Tr}_{A_1A_2}(\rho(t)_{C_1C_2A_1A_2}),\nonumber\\ &=&\frac{\cos^2{gt}}{2}\kb{0_11_2}{0_11_2}+ \frac{\cos^2{gt}}{2}\kb{1_10_2}{1_10_2}\nonumber\\ &+&\sin^2{gt}\kb{0_10_2}{0_10_2}+\frac{\cos^2{gt}}{2}\kb{0_11_2}{1_10_2} \nonumber\\ &+&\frac{\cos^2{gt}}{2}\kb{1_10_2}{0_11_2}. \end{eqnarray} \begin{eqnarray} \rho(t)_{A_1A_2}&=&\textrm{Tr}_{C_1C_2}(\rho_{C_1C_2A_1A_2}),\nonumber\\ &=&\cos^2{gt}\kb{g_1g_2}{g_1g_2}+\frac{\sin^2{gt}}{2}\kb{g_1e_2}{g_1e_2} \nonumber\\ &+&\frac{\sin^2{gt}}{2}\kb{e_1g_2}{e_1g_2}+ \frac{\sin^2{gt}}{2}\kb{g_1e_2}{e_1g_2} \nonumber\\ &+&\frac{\sin^2{gt}}{2}\kb{e_1g_2}{g_1e_2}. \end{eqnarray} \begin{eqnarray} {\it C}(\rho(t)_{C_1C_2})&=&\cos^2{gt}, \\ {\it C}(\rho(t)_{A_1A_2})&=&\sin^2{gt}. \end{eqnarray}
\vskip 0.2in \begin{figure}
\caption{${\it C}(\rho(t)_{C_1C_2})$ (solid line), ${\it C}(\rho(t)_{A_1A_2})$, (dotted line) plotted with respect to the Rabi angle $gt$.}
\label{f7}
\end{figure} \vskip 0.2in
The concurrences for the pairs $C_1$-$C_2$ and $A_1$-$A_2$ are plotted in Figure~\ref{f7}. One sees that the entanglement between the two cavities is swapped to the two atoms for interaction times $gt=(2n+1)\pi/2$, ($n=0, 1, 2, \dots$).
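This can also be seen directly from Eq. (22): at $gt=(2n+1)\pi/2$ the state reduces, up to an overall sign, to $\ket{0_10_2}\otimes\frac{1}{\sqrt2}(\ket{g_1e_2}+\ket{e_1g_2})$, i.e., the cavities are left empty and disentangled while the atoms emerge in a maximally entangled state.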
\subsection{Information transfer with cavity dissipation}
Finally, we consider the effect of cavity leakage on the transfer of information from the two-cavity to the two-atom system. Under the secular approximation\cite{haroche} and the approximation of a two-level cavity, one can solve the master equation to obtain the four-party density matrix which can be formally expressed as \begin{eqnarray} \rho(t)_{C_1C_2A_1A_2}&&=\alpha_1\kb{0_11_2g_1g_2}{0_11_2g_1g_2}\nonumber\\ &&+\alpha_2\kb{0_10_2g_1e_2}{0_10_2g_1e_2}\nonumber\\ &&+\alpha_3\kb{1_10_2g_1g_2}{1_10_2g_1g_2}\nonumber\\ &&+\alpha_4\kb{0_10_2e_1g_2}{0_10_2e_1g_2}\nonumber\\ &&+\alpha_5\kb{0_11_2g_1g_2}{1_10_2g_1g_2}\nonumber\\ &&+\alpha_5\kb{1_10_2g_1g_2}{0_11_2g_1g_2}\nonumber\\ &&+\alpha_6\kb{0_10_2g_1e_2}{0_10_2e_1g_2}\nonumber\\ &&+\alpha_6\kb{0_10_2e_1g_2}{0_10_2g_1e_2}\nonumber\\ &&+\dots \end{eqnarray} where the $\alpha_i$ are given by \begin{eqnarray*} \alpha_1&=&(1-\frac{e^{-\kappa_1 t}}{2})e^{-\kappa_2 t}\cos^2{gt},\\ \alpha_2&=&(\sin^2{gt})e^{-\kappa_2 t}(1-\frac{e^{-\kappa_1 t}}{2}),\\ \alpha_3&=&(\cos^2{gt})e^{-\kappa_1 t}(1-\frac{e^{-\kappa_2 t}}{2}),\\ \alpha_4&=&(\sin^2{gt})e^{-\kappa_1 t}(1-\frac{e^{-\kappa_2 t}}{2}),\\ \alpha_5&=&\frac{(\cos{gt})e^{-\kappa_1 t/2}e^{-\kappa_2 t/2}}{2},\\ \alpha_6&=&\frac{(e^{-\kappa_1 t/2}\sin{gt}-\frac{\kappa_1e^{-\kappa_1 t/2}}{2g}+\frac{\kappa_1}{2g})}{2}\times\\ &&(e^{-\kappa_2 t/2}\sin{gt}-\frac{\kappa_2e^{-\kappa_2 t/2}}{2g}+\frac{\kappa_2}{2g}) \end{eqnarray*} Apart from the above eight terms no other term contributes to either of the reduced density states $\rho_{C_1C_2}$ or $\rho_{A_1A_2}$, which are given by \begin{eqnarray} \rho(t)_{C_1C_2}&=&\textrm{Tr}_{A_1A_2}(\rho(t)_{C_1C_2A_1A_2}),\nonumber\\ &=&\alpha_1\kb{0_11_2}{0_11_2}+ \alpha_3\kb{1_10_2}{1_10_2}\nonumber\\ &+&(\alpha_2+\alpha_4)\kb{0_10_2}{0_10_2}+ \alpha_5\kb{0_11_2}{1_10_2}\nonumber\\ &+&\alpha_5\kb{1_10_2}{0_11_2}. \end{eqnarray} \begin{eqnarray} \rho(t)_{A_1A_2}&=&\textrm{Tr}_{C_1C_2}(\rho(t)_{C_1C_2A_1A_2}),\nonumber\\ &=&(\alpha_1+\alpha_3)\kb{g_1g_2}{g_1g_2}+\alpha_2\kb{g_1e_2}{g_1e_2} \nonumber\\ &+&\alpha_4\kb{e_1g_2}{e_1g_2}+\alpha_6\kb{g_1e_2}{e_1g_2} \nonumber\\ &+&\alpha_6\kb{e_1g_2}{g_1e_2}. \end{eqnarray} \vskip 0.2in
\begin{figure}
\caption{${\it C}(\rho(t)_{C_1C_2})$ (solid line), ${\it C}(\rho(t)_{A_1A_2})$, (dotted line) plotted with respect to the Rabi angle $gt$. $\kappa_1/g=\kappa_2/g = 0.1$}
\label{f8}
\end{figure} \vskip 0.2in
Though the concurrences ${\it C}(\rho(t)_{C_1C_2})$ and ${\it C}(\rho(t)_{A_1A_2})$ are reduced by the loss of cavity photons, one sees from Figure~\ref{f8} that perfect swapping is still obtained for $gt=(2n+1)\pi/2$. One of the basic features of information exchange between bipartite systems, represented by entanglement swapping, is thus seen to be preserved for mixed states too.
\section{Conclusions}
In this paper we have considered two important and interesting features of quantum entanglement, viz., ``monogamy'' and entanglement swapping. We have used the set-up of two initially entangled cavities\cite{davidovich} and a single Rydberg atom passing through one of them to study the quantitative manifestation of a ``monogamy'' inequality\cite{ckw} in atom-photon interactions. Unavoidable photon leakage exists in all real cavities used for the practical realization of quantum information transfer. We have investigated the effects of such dissipation on the ``monogamous'' nature of the entanglement between the two cavities, on the one hand, and between the atom and the second cavity, on the other. We have found that the essential ``monogamous'' character is preserved even with cavity dissipation. We have further seen that the entanglement between the atom and the cavity through which it passes increases with larger dissipation, a feature that could be understood by invoking the ``monogamous'' character of entanglement. We have then considered a set-up involving two entangled cavities and two Rydberg atoms. Entanglement swapping from the two cavities to the two atoms, which never interact directly with each other, is observed in this system. Cavity dissipation, of course, reduces the total amount of information exchanged, similar to the results obtained in the context of the single-atom micromaser\cite{datta}. Moreover, here we have verified that the property of swapping is preserved with dissipation. Further studies on different quantitative manifestations of information transfer in the presence of dissipative effects might be useful for the construction of realistic devices implementing various protocols. Practical realization of two-cavity entanglement is in progress at the Ecole Normale Superieure\cite{haroche2}.
\end{document}
Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?
Yannan Hu1,
Frank J. van Lenthe1,
Rasmus Hoffmann1,2,
Karen van Hedel1,3 &
Johan P. Mackenbach1
The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPEs) have drawn increasing attention as a means of evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities.
We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example.
All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure.
Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. Increased use of the methods in social epidemiology will help to build an evidence base to support policy making in the area of health inequalities.
There is overwhelming evidence for the existence of socioeconomic inequalities in health in many countries [1–3]. Improvements in understanding their underlying mechanisms have reached a point where several entry-points have been identified for interventions and policies aimed at reducing health inequalities [2, 4]. The latter has often been made a priority in national and local health policy [2, 5–9]. Yet, the scientific evidence-base for interventions and policies to tackle health inequalities is still very limited, and mostly applies to the proximal determinants of health inequalities such as smoking and working conditions [10–14]. Policies that address the social and economic conditions in which people live probably have the greatest potential to reduce health inequalities, but these are the hardest to evaluate [15].
Randomized controlled trials (RCTs) are regarded as the "gold standard" in the effect evaluation of clinical studies. The limitations of RCTs in evaluating policies in public health, however, have been clearly recognized [16, 17]. For policies aimed at tackling health inequalities, an obvious limitation is that policies to improve material and psychosocial living conditions, access to essential (health care) services, and health-related behaviors often cannot be randomized.
Natural policy experiments (NPEs), defined as "policies that are not under the control of the researchers, but which are amenable to research using the variation in exposure that they generate to analyze their impact" have been advocated as a promising alternative [18, 19]. In NPEs, researchers exploit the fact that often not all (groups of) individuals are exposed to the policy, e.g. because some individuals are purposefully assigned to the policy and others are not, or because the policy is implemented in some geographical units but not in others. For example, a policy to improve housing conditions in neighborhoods might be implemented in neighborhoods where the need to do so is largest, or some cities may decide to implement the policy and others not. Of course, in these cases those in the intervention and control group are likely to differ in many other factors than exposure to the policy, and analytical methods will have to adequately control for confounding in order to allow reliable causal inference.
The application of methods for the evaluation of NPEs, such as difference-in-differences and regression discontinuity, is reasonably well advanced in economics and econometrics. While these methods have also entered the field of public health [20, 21], and have been applied occasionally to study policy impacts on health inequalities [22, 23], there is as yet no general understanding of whether and how each of these methods can be applied to assess the impact of policies on the magnitude of socioeconomic inequalities in health. If they can, however, they can help to extend the evidence-base in this area substantially.
The main aim of this study therefore is to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the impact of policies on health inequalities. In doing so, we will also pay attention to two issues that may complicate assessing the impact of policies on socioeconomic inequalities in health. Firstly, socioeconomic inequalities in health can be measured in different ways. Secondly, policies may reduce health inequalities in different ways.
With regard to the measurement of health inequalities, it is important to distinguish relative and absolute inequalities. Relative inequalities in health are usually measured by taking the ratio of the morbidity or mortality rate in lower socioeconomic groups relative to those in higher socioeconomic groups, e.g. an odds ratio (OR), a rate ratio (RR), or a relative index of inequality [24]. Absolute inequalities in health are usually measured by taking the difference between the morbidity or mortality rates of lower and higher socioeconomic groups, e.g. a simple rate difference or the more complex slope index of inequality [24]. Relative and absolute inequalities both are considered important, although it is sometimes argued that a reduction in absolute inequalities is a more relevant policy outcome than a reduction in relative inequalities, because it is the absolute excess morbidity or mortality in lower socioeconomic groups that ultimately matters most for individuals. Nevertheless, quantitative methods used for the evaluation of policies should be able to measure the impact on both absolute and relative inequalities in health.
With regard to the second issue, there are two ways through which a policy can reduce socioeconomic inequalities in health: (1) the policy has a larger effect on exposed people in the lower socioeconomic group, or (2) more people in the lower socioeconomic group are exposed to it. Clearly, both can also occur simultaneously; raising the tax on tobacco may affect individuals with lower incomes more than those with higher incomes, and given the higher prevalence of smokers in low income groups also affects more smokers in low than high income groups. In fact, changes in aggregated health outcomes collected for a country or region (e.g. mortality rates or the prevalence of self-assessed health) after the introduction of a policy are the result of an effect among the exposed as well as the proportion of exposed persons. For the ultimate goal of assessing whether a reduction in health inequalities in the population has occurred, this distinction is less relevant – one could argue that eventually only the end result counts, that is a change in the magnitude of socioeconomic inequalities in health. Many statistical techniques, however, 'only' provide the effect of the policy among the exposed; they do not take into account the proportion of persons exposed to a policy. In order to be able to quantify the impact of a policy on socioeconomic inequalities in health in a population, an additional step is then needed: the policy effect should be combined with information about the proportion of exposed persons in higher and lower socioeconomic groups.
The structure of this paper is as follows. We first describe a fictitious data example that allowed us to assess the applicability of seven commonly used analytical methods for evaluating NPEs, which we also briefly describe. We then demonstrate the use of these methods for assessing the impacts of policies on the magnitude of health inequalities in our fictitious dataset. Finally, we discuss the advantages and disadvantages of the various methods.
A fictitious data example
We generated a fictitious dataset of 20,000 residents of a city. In this city, half of the residents belonged to the lower educational group, and within each educational group there were 50% males. The health outcome that we used was self-assessed health, dichotomized into either 'poor' or 'good'. The numbers (shown in Table 1) were chosen such that the proportion of persons with poor health before the introduction of the policy was higher among the lower educational group (20%) than among the higher educational group (10%). In order to make gender a confounder, we constructed the data such that women had better health (10% with poor health) than men (20% with poor health). At one point in time, the city council introduced a free medical care service in a number of neighborhoods, most of which were deprived. Thus, relatively more people in the lower educational group were exposed to the policy (50%) as compared to people in the higher educational group (25%). At the same time, more women (75% in the lower educational group and 37.5% in the higher educational group) than men (25% in the lower educational group and 12.5% in the higher educational group) used the free health care within each educational group. Because women had better health before the introduction of the policy and tended to be more exposed to the intervention, gender was a confounder in the association between the policy exposure and self-assessed health.
Table 1 Numbers of residents in a city: a fictitious dataset
We assumed that the effect of the policy was a reduction of the prevalence (or probability) of poor health among the exposed of 30%, regardless of their education level. Moreover, we imposed a naturally occurring recovery from poor to good health: even without the intervention, people in the higher educational group had a 20% chance of reverting to good health and people in the lower educational group a 5% chance. This could be due to spontaneous recovery or to external conditions such as other policies or changes in macroeconomic factors, which were not directly related to the policy introduced. As a result, and for example, the number of men with lower education who had poor health and who were exposed to the policy declined from 333 before the policy was implemented to 221 (333*0.70*0.95) after the policy was implemented (see Table 1). As those with good health were assumed not to change to poor health, the number of men in the lower educational group exposed to the policy with good health became 1029 (917 + (333–221)). Similarly, and as another example, the number of women in the higher educational group unexposed to the policy with good health after the introduction of the policy became 2959 (2917 + 208*0.2). We assumed that health could only change from poor to good, in order to make the fictitious dataset simpler. In reality, health can also deteriorate over time. Allowing health to deteriorate would not change the feasibility of the listed methods or the way they are implemented.
Compared to men, a smaller proportion of women reported poor health before the policy, and more women were exposed to the policy: the proportion of poor health before the policy was 20% (2000/10,000) among men and 10% (1000/10,000) among women, and the proportion of persons exposed to the policy was 56.25% for women (5625/10,000) and only 18.75% for men (1875/10,000). Gender thus was a confounder of the relation between policy exposure and health.
Quantitative methods for the evaluation of natural policy experiments
To identify potentially relevant quantitative methods for the evaluation of NPEs, we started by reviewing the classical econometric literature [20, 25–31]. Seven quantitative methods were identified as potentially suitable for the evaluation of NPEs (Table 2): (1) regression adjustment, (2) propensity score matching, (3) difference-in-differences analysis, (4) fixed effects analysis, (5) instrumental variable analysis, (6) regression discontinuity and (7) interrupted time-series. We will not elaborate upon the general application of these methods – for this we refer the reader to existing textbooks and papers [20, 25, 31, 32]. Nevertheless, a basic understanding of the concepts behind these techniques is important for our purposes.
Table 2 Concepts, limitations and applications of statistical approaches for the evaluation of natural policy experiments
Regression adjustment: Standard multivariate regression techniques allow investigating the effect of a policy by adjusting the association between policy exposure and health outcomes for observed differences between those exposed and unexposed to the policy in the prevalence of confounding factors. Theoretically, if all possible confounders can be controlled for, the estimated policy effect will be unbiased. It is unrealistic to assume, however, that all possible confounders can be measured.
We illustrate this method using data obtained after the policy only (Healtht2), because this method is often applied in situations where data obtained before the policy are not available.
Propensity score matching: Propensity score matching involves estimating the 'propensity' or likelihood that each person or group has of being exposed to the policy, based on a number of known characteristics, and then matching exposed to unexposed individuals based on similar levels of the propensity score. Propensity score matching assumes that for a given propensity score, exposure to the policy is random. It is similar to regression analysis with control for confounding in that it aims to reduce bias due to observed confounding variables. It is different from regression adjustment, because matching yields a parameter for the average impact over the subspace of the distribution of all covariates that are represented among both the treated and the control groups (i.e. only for the space where there is "common support").
We illustrate this method also with data obtained after the policy (Healtht2), because this method is often applied in situations where data before the policy are not available.
Difference-in-differences analysis: Difference-in-differences analysis compares the change in outcome for an exposed group between a moment before and a moment after the implementation of a policy to the change in outcome over the same time period for a non-exposed group. The two groups may have different levels of the outcome before the policy, but as long as any 'naturally occurring' changes over time can be expected to be the same for both, the difference in the change in outcome between the exposed and non-exposed groups will be an unbiased estimate of the policy effect.
In order to illustrate this technique, we had to slightly modify our data example. Thus far, we only used data after the implementation of the policy. For the difference-in-differences analysis, we assumed that the data in our example had been collected in a repeated cross-sectional design.
Fixed effects analysis: Fixed effects analysis compares multiple observations within the same individuals or groups over time, and reveals the average change in the outcome due to the policy. Because each individual or group is compared with itself over time, differences between individuals or groups that remain constant over time – even if unmeasured – are eliminated and cannot confound the results. Numerically, fixed effects analysis produces the same results as adding a dummy variable for each individual or group into standard multivariate regressions.
In order to illustrate the fixed effects analysis, we considered our fictitious dataset to be a longitudinal dataset with repeated measures of self-assessed health before and after the implementation of the policy.
Instrumental variable analysis: Instrumental variable analysis involves identifying a variable predictive of exposure to the policy, which in itself has no direct relationship with the outcome except through its effects on policy exposure or through other variables which have been adjusted in the regression. The technique uses the variation in outcome generated by this 'instrument' to test whether exposure to the policy is related to the outcome.
We illustrate the instrumental variable approach with the cross-sectional data obtained after the policy. We constructed an instrument which is predictive of exposure to the policy and not directly related to health.
Regression discontinuity: Regression discontinuity is a form of analysis that can be used when areas or individuals are assigned to a policy depending on a cut-off point of a continuous measure. The basic idea is that, conditional on the relationship between the assignment variable and the outcome, the exposure to the policy at the cut-off point is as good as random; comparing health outcomes of those just below and just above the cut-off point then provides an estimate of the effect of the policy.
To illustrate the application of regression discontinuity, we created a new dataset. The main reason was the need to create a "threshold", and thereby to introduce a new variable, distinguishing persons who could receive the policy from those who were not eligible for it.
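To illustrate, a minimal local-linear version of such an analysis could look as follows in Stata (a sketch only; the variable names assign, poorhealth and lowedu, the cut-off value of 50 and the bandwidth of 10 are hypothetical, and dedicated user-written commands for regression discontinuity are also available):

* centre the assignment variable at the (hypothetical) cut-off of 50
gen dist = assign - 50
gen treated = (dist >= 0)
* local linear regression within a bandwidth of 10 around the cut-off
reg poorhealth i.treated##c.dist if abs(dist) < 10
* the coefficient on 1.treated estimates the policy effect at the cut-off;
* repeating this for lowedu==1 and lowedu==0 gives the stratified effects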
Interrupted time-series: Where time-series data are available and there is a clear-cut change in policy at a specific point in time, interrupted time-series analysis can be used to estimate the policy effect. Regression analysis is used to detect any sudden change in level of the health outcome (in regression terms: a change of intercept) or a more sustained change in the trend of the health outcome (in regression terms: a change of slope) around the time the policy is implemented. The analysis estimates the policy effect by comparing the health outcomes before and after policy implementation. Interrupted time series is different from a difference-in-differences analysis, because interrupted time series analysis does not need a separate control group. In fact, it uses its own trend before the implementation of the policy as the control group.
To illustrate this method, we generated a time-series dataset which contained 40 years of observations. The quantitative characteristics of the dataset are similar to those used in the other calculation examples.
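A minimal segmented regression sketch of such an analysis in Stata, assuming a yearly series of the prevalence of poor health (poorrate) and a hypothetical policy introduced in year 20, could be:

* level- and trend-change terms for a policy introduced in year 20
gen post = (year >= 20)
gen sincepol = max(year - 20, 0)
reg poorrate year post sincepol
* the coefficient on post captures the sudden change in level,
* that on sincepol the change in trend after the policy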
Statistical assessment of the impact of NPE in terms of socioeconomic inequalities in health
Analytically, assessing to what extent a policy has an effect in lower and higher socioeconomic groups can be done in two ways. The first is to conduct a stratified analysis, using socioeconomic position as a stratification variable, resulting in policy effects for both lower and higher socioeconomic groups. The second is to include an interaction term between the variable for policy exposure and the indicator of socioeconomic position. For the latter, if the confounding effects of other covariates differ between socioeconomic groups, interaction terms between the indicator of socioeconomic position and these covariates also need to be added. If all interactions are included, the policy effects derived from an analysis stratified by socioeconomic position and from an analysis with interaction terms will be the same. For illustrative purposes, we included all the interactions in our analysis so that the results from the interaction terms and the stratified analysis were the same.
Most of the techniques described above require a regression analysis. Whereas a linear regression analysis results in an absolute effect of the policy, a logistic regression analysis results in a relative policy effect. Propensity score matching uses a pair-matched difference in the outcome to quantify the policy effect.
For those techniques resulting in a policy effect among the exposed only (all techniques described above, except interrupted time series), we then need to combine these effects with the proportion of exposed persons in higher and lower socioeconomic groups, in order to calculate the impact of a policy on absolute and relative inequalities in the whole population. Currently, there is no prescribed statistical procedure to do this. Our approach is to calculate the prevalence of people having poor health in each educational group after the policy (an observed prevalence) and the predicted prevalence of people having poor health in the absence of the policy (a predicted prevalence). The latter can be calculated by excluding the coefficient for the policy assignment from the equation, while keeping all other coefficients in the model the same. With the observed and predicted prevalence rates, absolute rate differences and relative rate ratios can be calculated. The differences in the absolute rate differences or the relative rate ratios with and without the policy then show the impact of the policy on the magnitude of health inequalities. Bootstrapping with 1000 replications was used to calculate the confidence intervals around the estimated impact of a policy. All analyses were performed in Stata 13.1.
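For the regression-based methods discussed below, one way of implementing this extra step in Stata is through average predicted probabilities, e.g. (a sketch; policy is the hypothetical exposure variable):

* after fitting the outcome model within one educational group:
margins, at(policy=(0 1))
* the two average predicted prevalences (policy absent/present) give the
* rate difference and rate ratio; wrapping the whole procedure in
* -bootstrap, reps(1000): ...- yields confidence intervals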
Regression adjustment
In a stratified analysis, the effect of the policy can be modeled for those in higher and lower educational groups separately, adjusting for gender as a confounder:
$$ \mathrm{Health}_{it2} = \beta_0 + \beta_1\,\mathrm{policy}_{i} + \beta_2\,\mathrm{gender}_{i} + \mu_{i} $$
where healthit2 is the health of individual i in year t2, β0 is the intercept, β1 and β2 are regression coefficients and μi is the error term.
If we use logistic regression, which is appropriate in situations with a binary health outcome as in our example, the odds ratio for the policy effect can be calculated from β1 and represents the higher or lower odds of having poor health after the policy for those exposed to the policy as compared to those unexposed to the policy. Because gender in this example is the only confounder, and because we were able to measure and adjust for it, the odds ratio can be interpreted as the policy effect. Table 3 shows these policy effects for people in lower and higher educational groups. The policy effect is essentially similar for people in lower (OR = 0.647, 95% CI [0.570, 0.734]) and higher educational groups (OR = 0.679, 95% CI [0.550, 0.839]). Please note that this analysis gives us estimates of relative rather than absolute policy effects. The discrepancy between the estimated odds ratios for the policy effect (0.647 and 0.679) on the one hand and the policy effects that we imposed in the dataset (a reduction of the probability of poor health among the exposed as compared to the unexposed of 30% for people in both higher and lower educational groups) on the other hand is due to the logistic transformation.
Table 3 Policy effects derived from the seven methods based on education-stratified analysis and the inclusion of interaction terms
Regression analysis also allows us to introduce an interaction term between (low) education and exposure to the policy ("low-edu*policy"):
$$ \mathrm{Health}_{it2} = \beta_0 + \beta_1\,\mathrm{policy}_{i} + \beta_2\,\mathrm{gender}_{i} + \beta_3\,\textrm{low-edu}_{i} + \beta_4(\textrm{low-edu}_{i}*\mathrm{policy}_{i}) + \beta_5(\textrm{low-edu}_{i}*\mathrm{gender}_{i}) + \mu_{i} $$
where β0 is the intercept, β1, β2, β3, β4 and β5 are regression coefficients and μi is the error term.
As shown in Table 3, the interaction term between the policy and education (β4) was not statistically significant (0.953, 95% CI [0.745, 1.218]). This indicates that we cannot show that the policy effects for people in lower and higher educational groups are different, which is in line with the findings from the stratified analysis.
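In Stata, the stratified and the interaction specifications above might be fitted as follows (a sketch with hypothetical variable names, poorhealth being the binary outcome):

* stratified by educational group, reporting odds ratios
logit poorhealth i.policy i.gender if lowedu==1, or
logit poorhealth i.policy i.gender if lowedu==0, or
* equivalent single model with all interaction terms included
logit poorhealth i.policy##i.lowedu i.gender##i.lowedu, or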
These results only represent the relative policy effect for people exposed as compared to those unexposed to the policy; they do not take into account the proportion of exposed people in each educational group. To do so, we had to apply an extra step. Using the stratified analyses, we calculated the predicted prevalence of poor health if the policy had not been implemented (please note that we could have also used the analysis with the interaction terms; if all interactions are included this will provide exactly the same results). This was done by leaving out the term for the policy, keeping all other coefficients in the regression equations, and computing the predicted prevalence of poor health. Subsequently, we calculated the rate difference between people in higher and lower educational groups using the observed prevalence (for the situation in which the policy was implemented), and the predicted prevalence (for the situation in which the policy had not been implemented) (Table 4). For example, the rate difference in the situation with the policy effect was 9.14% (16.63–7.49) and was 11.11% without the policy. In a similar way, the rate ratios were also calculated for both situations. The impact of the policy on health inequality could now be measured (1) as the change in absolute inequality (e.g., as the change in the rate difference) or (2) as a change in relative inequality (e.g., as a change in the rate ratio). In our example, the change in the rate difference is 1.97 percentage points (11.11%–9.14%), which means that the policy reduced the absolute inequality between people in lower and higher educational groups by almost 2 percentage points (Table 5). Further, the change in the rate ratio was 12.2% ((2.39–2.22)/(2.39–1)). This means that the policy reduced the relative inequality by more than 12%. We have also calculated the confidence intervals of these estimates (Table 5).
Table 4 Observed and predicted prevalence of poor health, rate difference and rate ratio for low and high educated groups with and without the implementation of the policy, as obtained using the seven methods
Table 5 Summary table of the policy effect on absolute and relative inequalities in health
Propensity score matching
In order to obtain an estimate for the effect of the policy on health inequalities we conducted a stratified analysis, i.e. we applied propensity score matching within the high and low educational groups separately. The first step in the analysis was to calculate the "propensity" of being exposed to the policy. Logistic regression analysis, with being exposed or not as the binary outcome and gender as the predictor, was used to calculate the propensity of being exposed. Individuals with the same propensity who were indeed exposed to the policy could then be matched with individuals with almost the same ("the nearest neighbor") propensity who were not exposed to the policy.
The policy effect was estimated as the average of the differences in the probability of poor health within matched pairs of exposed and unexposed individuals. This produces an absolute measure of the policy effect. Table 3 lists the results obtained from the propensity score matching analysis for people in lower and higher educational groups separately. For people in lower and higher educational groups, the policy reduced the probability of having poor health among exposed individuals by almost 5 percentage points (-0.048) and 2 percentage points (-0.020), respectively. Although we imposed the same relative policy effect regardless of the education level in the data, the absolute effect of the policy was larger for people in the lower educational group than for people in the higher educational group, because the prevalence of poor health before the policy was higher among the lower educational group.
To calculate the absolute decrease of the prevalence of poor health, the effect of the policy for people in lower and higher educational groups should be multiplied by the proportion of persons exposed to the policy in each educational group. Among all people in the lower educational group, regardless of whether or not they were exposed to the policy, the probability of having poor health declined by 2.4 percentage points ((-0.048)*(5000/10,000) = -0.024). Among all people in the higher educational group, the probability of having poor health declined by 0.5 percentage points ((-0.020)*(2500/10,000) = -0.005).
In order to estimate the effect of the policy on the magnitude of health inequalities, we need the rate difference and rate ratio in a scenario with and in a scenario without the policy effect. In a scenario without the policy effect, the predicted prevalence of having poor health for people in lower educational group is the observed prevalence (16.63%) plus the reduction as a result of the policy (2.4%), which is then 19.03%. For people in higher educational group, the prevalence is 7.99% (7.49% +0.5%).
The rate difference in the scenario with the policy was 9.14% (16.63–7.49); in the scenario without the policy it was 11.04% (19.03–7.99). This means that the policy reduced the absolute inequality in poor health by almost 2%. The rate ratio in the scenario with the policy was 2.22 (16.63/7.49); in the scenario without the policy it was 2.38 (19.03/7.99). This means that the policy reduced the relative inequality of poor health by almost 12%.
In propensity score matching, the policy effect is indicated by the average difference between the exposed and the matched unexposed individuals. There is no regression equation in the matching process, and therefore it was considered impossible to use an interaction term in a propensity matching analysis.
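For reference, a nearest-neighbour analysis of this kind could be sketched with Stata 13's built-in teffects command (hypothetical variable names; user-written alternatives such as psmatch2 also exist):

* average effect of the policy on the treated, within each educational group
teffects psmatch (poorhealth) (policy i.gender, logit) if lowedu==1, atet
teffects psmatch (poorhealth) (policy i.gender, logit) if lowedu==0, atet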
Difference-in-differences analysis
In the analysis, we modeled health (measured both before and after implementation of the policy) as a function of exposure to the policy, time, and an interaction between exposure to the policy and time. By allowing levels of health to be different between exposed and unexposed before the policy, the technique accounts for unobservable confounding by time-invariant characteristics that differ in their prevalence between the exposed and unexposed. In our example 'gender' was not controlled for, and therefore acted as an unobservable confounder.
In a stratified analysis, the model to be used for people in lower and higher educational groups separately is:
$$ \mathrm{Health}_{it} = \beta_0 + \beta_1\,\mathrm{policy}_{i} + \beta_2\,\mathrm{year}_{t} + \beta_3(\mathrm{policy}_{i}*\mathrm{year}_{t}) + \mu_{it} $$
where healthit is the health of individual i in year t, β0 is the intercept, β1, β2 and β3 are regression coefficients and μit is the error term.
If we again use logistic regression, the coefficient for the variable "policy" (β1) now measures the relatively higher or lower odds of having poor health for those exposed as compared to those unexposed to the policy before the implementation of the policy (which in our example was driven by the fact that women were in better health before the implementation and more exposed to the policy). The coefficient for the variable "year" (β2) represents the naturally occurring change in health over time among the unexposed. The coefficient for the interaction term "policy*year" (β3) indicates the policy effect, i.e. the difference in change of poor health over time between the unexposed and exposed. Table 3 shows the policy effects for people in lower and higher educational groups. The relative policy effect is essentially similar for people in lower educational group (OR = 0.666, 95% CI [0.574; 0.773]) and for people in higher educational group (OR = 0.687, 95% CI [0.530; 0.890]). This is again in line with what we imposed in the data.
In a difference-in-difference analysis, we can also introduce a three-way interaction term between policy, year and low education:
$$ \mathrm{Health}_{it} = \beta_0 + \beta_1\,\mathrm{policy}_{i} + \beta_2\,\mathrm{year}_{t} + \beta_3(\mathrm{policy}_{i}*\mathrm{year}_{t}) + \beta_4\,\textrm{low-edu}_{i} + \beta_5(\textrm{low-edu}_{i}*\mathrm{policy}_{i}) + \beta_6(\textrm{low-edu}_{i}*\mathrm{year}_{t}) + \beta_7(\textrm{low-edu}_{i}*\mathrm{policy}_{i}*\mathrm{year}_{t}) + \mu_{it} $$
where healthit is the health of individual i in year t, β0 is the intercept, β1 - β7 are regression coefficients and μit is the error term.
The three-way interaction labeled "low-edu*policy*year" (β7) indicates the differential policy effect for people in lower and higher educational groups. As shown in Table 3, this interaction term was not statistically significant (OR = 0.970, 95% CI [0.719; 1.307]). Thus, the policy effect was not significantly different for people in lower and higher educational groups, which corresponds to what we have imposed in the data example.
Using a similar approach as for regression adjustment, and again based on the stratified analyses, we subsequently calculated the predicted prevalence of poor health if the policy had not been implemented. This allowed us to calculate the rate differences between people in lower and higher educational groups based on the predicted prevalence (if the policy had not been implemented) as well as the rate ratios. As shown in Table 4, the policy effect on absolute health inequalities (e.g. the change in the rate differences) was 1.85 percentage points (10.99%–9.14%). This means that the policy reduced the rate difference between people in lower and higher educational groups by almost 2 percentage points. Similarly, we can calculate the policy effect on relative health inequality as the change in the rate ratio, resulting in the finding that the policy reduced the relative excess risk of people in the lower educational group by more than 11%.
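The corresponding Stata specifications might look as follows (a sketch; year is a 0/1 indicator for the periods before and after the policy in the repeated cross-sections):

* difference-in-differences, stratified by educational group
logit poorhealth i.policy##i.year if lowedu==1, or
logit poorhealth i.policy##i.year if lowedu==0, or
* single model with the three-way interaction policy#year#lowedu
logit poorhealth i.policy##i.year##i.lowedu, or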
Fixed effects model
With a binary outcome, one could use logistic regression analysis. However, in fixed effects logistic regression analysis, observations with the same health status in two (or more) periods will be excluded from the analysis; only the within-unit variations over time will be used. Therefore, a large part of the observations in our dataset would be excluded. While logistic regression analysis would produce valid estimates, it would lead to results that cannot be compared to those obtained from the other methods. For reasons of comparability, we used linear probability regressions with fixed effects, which also produce valid estimates. Again, we treated 'gender' as an unobserved confounder.
Linear probability regression was used, in which the coefficient for the policy (β1 in the formula below) represented an absolute change in the probability of having poor health as a result of exposure to the policy. In a stratified analysis, this can be modeled as follows for those in higher and lower educational groups separately:
$$ \mathrm{Health}_{it} = \beta_0 + \beta_1\mathrm{policy}_{it} + \beta_2\mathrm{year}_t + d_i + \mu_{it} $$
where $\mathrm{Health}_{it}$ is the health of individual $i$ in year $t$, $\beta_0$ is the intercept, $\beta_1$ and $\beta_2$ are regression coefficients, $d_i$ is a set of individual dummy variables and $\mu_{it}$ is the error term.
Table 3 shows that the absolute policy effect is larger among people in the lower educational group ($\beta_1$ = -0.044, 95% CI [-0.051; -0.037]) than among people in the higher educational group ($\beta_1$ = -0.016, 95% CI [-0.023; -0.009]).
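The within (demeaning) transformation below is numerically equivalent to including the individual dummy variables $d_i$, and shows on simulated data how the fixed effects estimate stays close to the imposed absolute effect even though the confounder gender is never observed. Again a minimal sketch with invented names and values, not the original analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20_000
base = pd.DataFrame({"id": np.arange(n), "female": rng.integers(0, 2, n)})
panel = pd.concat([base.assign(year=t) for t in (0, 1)], ignore_index=True)
# Exposure in the second period depends on the unobserved confounder `female`
panel["policy"] = ((panel.year == 1) &
                   (rng.random(len(panel)) < 0.2 + 0.5 * panel.female)).astype(int)
p = 0.30 - 0.05 * panel.female - 0.04 * panel.policy
panel["poor_health"] = (rng.random(len(panel)) < p).astype(int)

cols = ["poor_health", "policy", "year"]
within = panel[cols] - panel.groupby("id")[cols].transform("mean")
fe = sm.OLS(within["poor_health"], within[["policy", "year"]]).fit()
print(fe.params["policy"])   # close to -0.04, although `female` was never used
```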
Fixed effects analysis also allows us to introduce an interaction term between (low) education and exposure to the policy:
$$ \mathrm{Health}_{it} = \beta_0 + \beta_1\mathrm{policy}_{it} + \beta_2\mathrm{year}_t + \beta_3\left(\text{low-edu}_{it} \times \mathrm{policy}_{it}\right) + \beta_4\left(\text{low-edu}_{it} \times \mathrm{year}_t\right) + d_i + \mu_{it} $$
where $\mathrm{Health}_{it}$ is the health of individual $i$ in year $t$, $\beta_0$ is the intercept, $\beta_1$–$\beta_4$ are regression coefficients, $d_i$ is a set of individual dummy variables and $\mu_{it}$ is the error term.
As shown in Table 3, the interaction term for low education and policy ("low-edu*policy") is statistically significant ($\beta_3$ = -0.029, 95% CI [-0.039; -0.019]), which indicates that the policy effect is indeed different between people in lower and higher educational groups. The negative sign of the coefficient for the interaction term indicates that the absolute policy effect is larger among people in the lower educational group, as was also found in the stratified analysis.
Again we can use the fitted values to estimate the policy effect on health inequalities. Based on the results in Table 4, we can calculate the policy effect on absolute health inequality as the change in the rate difference: 10.96 - 9.14 = 1.82 percentage points. This means that the policy reduced the rate difference between people in lower and higher educational groups by almost 2 percentage points. Similarly, we can calculate the policy effect on relative health inequality, which results in the finding that the policy reduced the relative excess risk of people in the lower as compared to the higher educational group by more than 12%.
Instrumental variable analysis
Again, we used gender as an unobserved confounder. In a straightforward regression analysis, exposure to the policy would be endogenous (as gender would determine exposure to the policy to some extent, and is now included in the error term), and as a consequence the estimated effect of the policy on health would be biased. We therefore used an instrument: the distance from the house of the respondent to the closest free medical service. For this to be a valid instrument, it should clearly predict exposure to the policy, be related to health via the policy (use of the free medical service) only, and not be related to any unmeasured confounder (information about the construction of the instrumental variable used in our analyses is available upon request).
The instrumental variable analysis was conducted in a two-stage least squares regression. The basic idea of this analysis in our example was to first regress the policy exposure on the instrumental variable in order to capture the variation in policy exposure induced by the instrument, and to subsequently regress the health outcome on the predicted values for policy exposure. The instrumental variable analysis with logistic regression cannot be easily conducted in Stata, and therefore we used probit regression (specifically "ivprobit"). The coefficients from the probit regressions were transformed into marginal effects to make them comparable to those from linear regressions.
While the approach is intuitively easy when stratified by education, it is more complicated for an analysis using the interaction between policy and education. Because exposure to the policy is endogenous, the interaction between education and policy exposure is endogenous as well. This requires an instrument for the interaction term as well; we used the interaction between education and the distance from home to the closest free medical service for this purpose. In the first stage of the two-stage regression, both instruments predict exposure to the policy as well as the interaction between education and exposure to the policy. The predicted values are then used in the second stage, resulting in unbiased effects on health of exposure to the policy and of the interaction between policy exposure and education.
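The two-stage mechanics can be sketched with ordinary least squares in both stages (we used probit via "ivprobit" in the actual analysis; the linear version below only illustrates the idea). Note that manually plugging first-stage predictions into a second regression yields consistent point estimates but incorrect standard errors, which dedicated IV routines correct. All names and values below are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20_000
female = rng.integers(0, 2, n)                 # unobserved confounder
distance = rng.exponential(5.0, n)             # instrument: distance to free service
policy = (0.8 * female - 0.3 * distance + rng.normal(0, 1, n) > -0.5).astype(int)
health = 0.30 - 0.05 * female - 0.04 * policy + rng.normal(0, 0.1, n)

naive = sm.OLS(health, sm.add_constant(policy)).fit()        # biased by `female`
stage1 = sm.OLS(policy, sm.add_constant(distance)).fit()     # first stage
stage2 = sm.OLS(health, sm.add_constant(stage1.fittedvalues)).fit()
print(naive.params[1], stage2.params[1])       # naive vs. 2SLS; true effect is -0.04
```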
Table 3 shows that the absolute policy effect is larger among people in the lower educational group (β = -0.050, 95% CI [-0.063; -0.037]) than among people in the higher educational group (β = -0.020, 95% CI [-0.029; -0.011]). The interaction term for low education and policy was statistically significant (β = -0.036; 95% CI [-0.057; -0.015]), which indicated that the policy effect indeed differed between people in lower and higher educational groups.
As for the other methods, we used the predicted values from the regression analysis (in this case, the second stage of the analysis) to estimate the policy effect on health inequality (Table 4). Using the values in Table 4, we calculated the policy effect on absolute health inequalities as the change in the rate difference: 11.16 - 9.14 = 2.02 percentage points. This means that the policy reduced the rate difference between people in lower and higher educational groups by 2 percentage points. Similarly, we calculated the policy effect on relative health inequalities as the change in the rate ratio and found that the policy reduced the relative excess risk of poor health among people in the lower educational group by almost 13%.
Regression discontinuity
Here we had to create a new dataset with a "threshold" distinguishing persons who could receive the policy from those who were not eligible for it. For this purpose, we created a sharp regression discontinuity design with income as the "threshold" variable: those with a household income of less than 2000 euros per month could receive the free medical care, whereas those with higher incomes were not eligible. We assumed that the sharp threshold of 2000 euros resulted in a 'sharp' regression discontinuity, without changing the effect of income on health. Because people in the lower educational group generally tend to have lower household incomes, more people in the lower educational group were exposed to the policy. The imposed policy effect was still a reduction of the prevalence of poor health by 30%, regardless of education level. The dataset created contained cross-sectional data after the implementation of the policy. Details about the generation of the data for the regression discontinuity are available upon request.
In a stratified analysis, this was modeled as follows for individuals in higher and lower educational groups separately:
$$ \mathrm{Health}_i = \beta_0 + \beta_1\left(\mathrm{income}_i - 2000\right) + \beta_2\mathrm{policy}_i + \mu_i $$
where $\mathrm{Health}_i$ is the health of individual $i$, $\beta_0$ is the intercept, $\beta_1$ and $\beta_2$ are regression coefficients, and $\mu_i$ is the error term.
Individual-level health was still the health outcome. The value of the variable "policy" was 1 if the individual's monthly income was 2000 euros or less. The regression coefficient $\beta_1$ reflects the average effect of income on health. The regression coefficient $\beta_2$ reflects the discontinuity in health caused by the implementation of the policy. The analysis was done among individuals whose monthly income was around 2000 euros (i.e. between 1800 and 2200 euros). Using logistic regression, the odds ratio resulting from the coefficient for the variable "policy" ($\beta_2$) measured the higher or lower odds of poor health among those exposed to the policy compared with those not exposed. Table 3 shows that the relative policy effect is similar for people in the lower educational group (OR = 0.678, 95% CI [0.495; 0.929]) and for people in the higher educational group (OR = 0.687, 95% CI [0.483; 0.977]). This approximates the imposed 30% probability of reversing from poor to good health, regardless of education level.
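A minimal sketch of this analysis on simulated data (the bandwidth, names and parameter values are illustrative only): restrict the sample to the 1800-2200 euro window, regress poor health on centered income and the policy dummy, and exponentiate the policy coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 40_000
income = rng.normal(2000, 400, n)
policy = (income < 2000).astype(int)             # sharp eligibility threshold
logit_p = -0.8 - 0.0004 * (income - 2000) - 0.38 * policy
poor = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)
rd = pd.DataFrame({"poor_health": poor, "income_c": income - 2000, "policy": policy})

band = rd[rd.income_c.abs() <= 200]              # the 1800-2200 euro window
fit = smf.logit("poor_health ~ income_c + policy", data=band).fit(disp=0)
print(np.exp(fit.params["policy"]))              # near exp(-0.38), i.e. about 0.68
```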
Regression discontinuity analysis also allows us to introduce interaction terms:
$$ \begin{array}{l}\mathrm{Health}_i = \beta_0 + \beta_1\left(\mathrm{income}_i - 2000\right) + \beta_2\mathrm{policy}_i + \beta_3\,\text{low-edu}_i + \beta_4\left(\text{low-edu}_i \times \left(\mathrm{income}_i - 2000\right)\right) \\ {}+ \beta_5\left(\text{low-edu}_i \times \mathrm{policy}_i\right) + \mu_i\end{array} $$
where $\mathrm{Health}_i$ is the health of individual $i$, $\beta_0$ is the intercept, $\beta_1$–$\beta_5$ are regression coefficients, and $\mu_i$ is the error term.
As shown in Table 3, the interaction term for low education and policy ("low-edu*policy") is not statistically significant (OR = 0.987, 95% CI [0.615; 1.583]), which indicates that the policy effect does not differ significantly between people in lower and higher educational groups.
Results from the regression discontinuity analysis are also reported graphically (Fig. 1). In Fig. 1, similar discontinuities around the income level of 2000 euros per month were observed among people in lower and higher educational groups, indicating similar immediate policy effects. In our example, although the policy effects were independent of educational level, more people in the lower educational group were exposed to the policy, leading to decreased health inequality. This, however, cannot be seen in the figure.
Fig. 1 Results from regression discontinuity by education
Again we can use the fitted values to estimate the policy effect on health inequalities and follow the same steps to calculate the changes in absolute and relative inequalities as a result of the policy. However, as the analysis was only performed on the observations around the cutoff point of 2000 euros per month (i.e. 1800-2200 euros per month), we could not produce the policy effect on health inequalities for the whole population. This is a characteristic of the regression discontinuity method, and should not be seen as a failure of the example. Given that we generated a different setting for this method and the estimated policy effects only represented the policy effects among a part of the whole population, the calculated policy effects on health inequalities were not comparable to those from the other methods and we therefore did not present them in Tables 4 and 5.
Interrupted time series
Here we generated a time-series dataset which contained 40 years of observations. Because this method (in our example) uses aggregate data, we could consider our health outcome, the prevalence of poor health, to be continuous (as opposed to binary in the other examples). For people in the lower educational group, the prevalence of poor health decreased by around 0.1 percentage points each year before the policy. For people in the higher educational group, the prevalence of poor health decreased by around 0.2 percentage points each year before the policy. The policy was implemented halfway through the period of observation (i.e. year 20). For reasons of simplicity, we assumed that the policy affected the level of health (the intercept) immediately after its implementation. Details about the generation of the data are available upon request.
In a stratified analysis, the model used for individuals in higher and lower educational groups separately was:
$$ \mathrm{Health}_t = \beta_0 + \beta_1\mathrm{year}_t + \beta_2\mathrm{policy}_t + \beta_3\left(\text{year after policy}\right)_t + \mu_t $$
where $\mathrm{Health}_t$ is the prevalence of poor self-assessed health, $\beta_0$ is the intercept, $\beta_1$–$\beta_3$ are regression coefficients, and $\mu_t$ is the error term.
The variable "year" represented the calendar years and ranged from 1 to 40. The variable "policy" was a dummy variable with value 1 if it was larger than 20, and value 0 if it is smaller or equal to 20. The variable "year after policy" was the number of years after the implementation of the policy. In the regression, the coefficient of "year" (β1) indicated the natural trend before the policy. The coefficients for "policy" and "year after policy" represented the change in the intercept and the change in the slope due to the policy.
Figure 2 presents the results of the interrupted time-series analysis, stratified by education. As mentioned above, aggregated data were used, which already incorporated both the real policy effect on the exposed people and the proportion of exposed people in the lower and higher educational groups. Since more people in the lower educational group were exposed, an immediate effect on reducing health inequalities was observed, indicated by a larger drop in the prevalence of poor health in the lower educational group directly after the implementation of the intervention in year 20. As shown in Table 3, the policy immediately reduced the prevalence of poor health for people in the lower educational group by 2.3 percentage points and for people in the higher educational group by 0.5 percentage points.
Fig. 2 Results from interrupted time-series by education
Interrupted time-series analysis also allows us to introduce interaction terms:
$$ \begin{array}{l}\mathrm{Health}_t = \beta_0 + \beta_1\mathrm{year}_t + \beta_2\mathrm{policy}_t + \beta_3\left(\text{year after policy}\right)_t + \beta_4\,\text{low-edu}_t + \beta_5\left(\text{low-edu}_t \times \mathrm{year}_t\right) \\ {}+ \beta_6\left(\text{low-edu}_t \times \mathrm{policy}_t\right) + \beta_7\left(\text{low-edu}_t \times \left(\text{year after policy}\right)_t\right) + \mu_t\end{array} $$
where $\mathrm{Health}_t$ is the prevalence of poor self-assessed health, $\beta_0$ is the intercept, $\beta_1$–$\beta_7$ are regression coefficients, and $\mu_t$ is the error term.
The coefficients for "low-edu*policy" represent the change in the intercept due to the policy. As shown in Table 3, the interaction "low-edu*policy" is statistically significant (coefficient = -0.019), which suggests that the policy effect is larger among lower educational group.
As before, using the values in Table 4 we calculated the policy effect on absolute health inequality as the change in the rate difference: 10.99 - 9.14 = 1.85 percentage points. This means that the policy reduced the rate difference between the lower and higher educational groups by almost 2 percentage points. Similarly, we calculated the policy effect on relative health inequality as the change in the rate ratio, which resulted in the finding that the policy reduced the relative excess risk among the lower educational group by almost 12%.
This study demonstrated that all seven quantitative analytical methods identified can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position. Further, all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure.
In our example, we assessed the effects of the policy in stratified analyses, and modeled them by including an interaction term between policy exposure and education. Apart from our finding that an interaction term could not be included in propensity score matching and appeared to be slightly more complicated in instrumental variable analysis, some differences between the two approaches have to be considered before deciding which approach to use. Stratification by education is intuitively attractive; the method, however, requires additional analyses to statistically test whether the policy effects for higher and lower socioeconomic groups differ. Comparing the overlap in confidence intervals provides some further insight, but is still not a formal test [33]. Further, in our simple example, we only had two levels for our indicator of socioeconomic position, which made stratification easy. Including more levels of socioeconomic position results in smaller strata, with loss of statistical power as a consequence. Moreover, some indicators of socioeconomic position can be measured on a continuous scale, such as the number of years of education or household income. Categorizing continuous values requires making (arbitrary) decisions and results in a loss of information. Analyses using an interaction term allow indicators of socioeconomic position to be continuous variables. The results, however, can sometimes be more complex to interpret. For example, one issue to consider is whether the effect of the policy on health inequalities changes linearly with an increase of one unit of the socioeconomic indicator.
Caution is needed when interpreting the results from the instrumental variable approach. Under certain conditions, the instrumental variable reveals a local average treatment effect [34], namely the intervention effect among individuals whose exposure is changed by the instrument ("compliers"). It is a local parameter since it is specific to the population defined by the instrument [28]. When the treatment effect is the same for everyone (homogeneous treatment effect), all valid instruments identify the same parameter, and the local average treatment effect equals the average treatment effect. In the more realistic case of heterogeneous treatment effects, however, the local average treatment effect normally differs from the average treatment effect. Different instrumental variables, although all valid, will be associated with different local average treatment effect estimates, and the population of corresponding compliers cannot be identified in the data [35]. Thus, when we apply this to health inequalities, for example in stratified analyses, the estimated policy effects are the effects among the corresponding compliers within each socioeconomic group given a set of instruments. The generalization of the conclusion to the whole population or to other populations is normally uncertain. However, when the change of policy is used as the instrument for the exposure, the local average treatment effect might be extremely useful, since it focuses on an important subpopulation whose exposure status is changed by the policy and may provide an informative measure of the impact of the policy [28].
The above-mentioned problem of low external validity also applies to regression discontinuity. Analyses are only performed on the observations around a cutoff point (e.g. 1800-2200 euros per month in our example), and as a result, the method does not produce a policy effect on health inequalities for the whole population.
Persons were either exposed or unexposed in our fictitious example; we did not include the possibility of graded exposure to the policy. Whereas regression adjustment would accommodate a graded exposure relatively easily, for other techniques this may be more complex (although not impossible), such as for propensity score matching [36].
Which method to use depends to a large extent on data availability (e.g. whether cross-sectional or longitudinal data are available) and the nature of the confounders in the analysis (whether observable or not, and whether time-variant or not). The appropriateness of the preferred method further depends on the degree to which its underlying assumptions are met. For example, instrumental variable analysis requires strong assumptions, and violations can lead to biased estimates [21, 26]. Similarly, difference-in-differences analysis is based on the assumption of common trends between higher and lower educational groups in the absence of the policy.
Although the methods described seem quite different, they actually pursue the same aim, which is constructing counterfactual outcomes for exposed units had they not been exposed to the policy [37]. Doing this in a convincing way is a key ingredient of any serious evaluation method [28]. For example, in well-designed randomized controlled trials, the control group is a perfect counterfactual for the exposed group, since the pre-intervention differences have been eliminated by the random assignment of the intervention. In the same way, if the key assumption holds that selection bias disappears after controlling for observed characteristics [26], both regression adjustment and propensity score matching restore randomization to some extent. Similarly, both instrumental variable and regression discontinuity approaches aim at finding exogenous factors which fully or partly determine the assignment to a policy; as such, they mimic randomization to some extent. If, in a difference-in-differences analysis, the trend over time is the same for unexposed and exposed units of analysis, the change in the unexposed unit can potentially be used as the counterfactual. When longitudinal data are available, the fixed effects model uses the exposed unit's own history prior to treatment to construct the counterfactual [20]. Likewise, when time-series data are available, the time trend of the exposed unit before the policy implementation is utilized as the counterfactual.
We constructed our fictitious data in such a way that people with a low socioeconomic position were more exposed to the policy, but the policy effect among the exposed was equal between socioeconomic groups. In reality, health inequalities might also be reduced in cases where people with different socioeconomic positions are equally exposed to the policy, but where the policy effect among those exposed is larger among people with a low socioeconomic position. It can also happen that people with a low socioeconomic position are both more exposed to the policy and experience a larger policy effect among those exposed. The process of analysis and the interpretation of the results, however, are similar in these cases. Moreover, although we mainly constructed the examples using individual-level data (except for the interrupted time-series), some methods can also be used with aggregated data. For example, a fixed effects model can also be applied to country-level longitudinal data.
In this paper, we performed the analysis based on a standard setting of each method. The analysis, however, can easily be extended to more complicated examples. We only used a few covariates in our analysis, but more can be incorporated. It is also possible to use the methods with longitudinal or repeated cross-sectional data with multiple periods, continuous health outcomes, and more than one instrument. We used propensity score matching, as it is the most commonly used method of matching. However, other matching methods may be more effective in reducing imbalance, model dependence and bias, such as Mahalanobis distance matching and coarsened exact matching [38]. Moreover, extensions of the models described here with relaxed assumptions have been applied, such as the quasi difference-in-differences model [39], the changes-in-changes model [40] and the dynamic fixed effects model [41, 42]. These extended models are not covered by this paper, but the general way of applying them for assessing the impact of policy on health inequalities is similar to the standard models. Combining methods in one study is also possible. For example, some papers recommend combining regression adjustment and propensity score matching by weighting the treated population by the inverse of the propensity score (i.e. the inverse probability weighting estimator) to reduce bias and improve efficiency [31, 43]. In this way, matching can be combined with many techniques, such as difference-in-differences analysis and the fixed effects model [44, 45]. Another example is incorporating instrumental variables into fixed effects models to tackle potential measurement error [26].
This study demonstrated quantitative tools to assess if and to what extent natural policy experiments impact upon socioeconomic inequalities in health. While our approach offers further insight into whether effects resulted from the size of the policy effects and/or the size of the populations exposed, it does not offer in-depth insight into how effects were achieved. Quantitatively, (causal) mediation analyses could be used to assess explanations for potential effects, whereby the effect of the policy experiment on potential determinants could be assessed, as well as the effects of the potential explanatory factors on the outcome [46]. Future research should explore to what extent mediation analysis can be used to assess explanations of the impact of NPEs on inequalities in health. Simultaneously, qualitative approaches can be used to further examine the processes leading to an impact [47, 48].
The demonstrated possibility of using the techniques described in this paper for studying the impact of NPEs on socioeconomic inequalities raises the question whether all policy evaluations should include an evaluation of the equity impact. Researchers who evaluate the equity impact of interventions are often criticized by statisticians for conducting unreliable (underpowered) analyses; those who do not are, at the same time, criticized by policymakers in need of evidence on what works to close the gap in health between socioeconomic groups [49]. Provided that a logic model includes theoretically plausible mechanisms for a reduction of inequalities in health, and that statistical power is not a real issue, we recommend that an equity impact analysis be an integral part of any policy experiment evaluation.
In conclusion, methods commonly used in economics and econometrics can be applied to assess the equity impact of natural policy experiments. The low external validity of results from instrumental variable and regression discontinuity analyses makes these methods less suitable for assessing policy effects on population-level health inequalities. Increased use in social-epidemiological research will help to build an evidence base to support policy making in this area.
Footnote 1. There was one disadvantage of using linear probability regressions with the fixed effects model in our dataset. Although the relative policy effect on health was independent of gender, there were some interaction effects between gender and policy when linear regression was used to assess the absolute policy effect. The absolute policy effect was lower among women, which was caused by a relatively lower prevalence of poor health among women; women thus experienced a smaller absolute health effect of the policy in the second period. Strictly speaking, this means the gender effect is no longer "fixed" on an absolute scale. The interaction effect, however, was rather limited in our data and did not lead to large discrepancies in the results from the fixed effects models. We therefore decided to ignore these limited interaction effects between "female" and "policy", and assumed that the variable "female" was still a "fixed effect" that could be eliminated by a fixed effects model.
Mackenbach JP. Health inequalities: Europe in profile. 2006. http://ec.europa.eu/health/ph_determinants/socio_economics/documents/ev_060302_rd06_en.pdf. Accessed 6 Jan 2016.
Mackenbach JP. Socioeconomic inequalities in health in high-income countries: the facts and the options. In: Detels R, Gulliford M, Karim QA, Tan CC, editors. Oxford Textbook of Global Public Health. Oxford: Oxford University Press; 2015. p. 106–26.
Mackenbach JP, Stirbu I, Roskam AJ, Schaap MM, Menvielle G, Leinsalu M, et al. Socioeconomic inequalities in health in 22 European countries. N Engl J Med. 2008;358:2468–81.
Commission on Social Determinants of Health. Closing the gap in a generation. Health equity through the social determinants of health. Geneva: World Health Organization; 2008.
Mackenbach JP, Bakker MJ. Tackling socioeconomic inequalities in health: analysis of European experiences. Lancet. 2003;362:1409–14.
Judge K, Platt S, Costongs C, Jurczak K. Health inequalities: a challenge for Europe. 2006. http://ec.europa.eu/health/ph_determinants/socio_economics/documents/ev_060302_rd05_en.pdf. Accessed 2 Feb 2015.
Mackenbach JP, Stronks K. The development of a strategy for tackling health inequalities in the Netherlands. Int J Equity Health. 2004;3:11.
Mackenbach JP. Has the English strategy to reduce health inequalities failed? Soc Sci Med. 2010;71:1249–53.
Mackenbach JP. Socioeconomic inequalities in health in western Europe: from description to explanation to intervention. In: Siegrist J, Marmot M, editors. Social inequalities in health: New evidence and policy implications. New York: Oxford University Press; 2006. p. 223–50.
Kulik MC, Hoffmann R, Judge K, Looman C, Menvielle G, Kulhánová I, et al. Smoking and the potential for reduction of inequalities in mortality in Europe. Eur J Epidemiol. 2013;28:959–71.
Eikemo TA, Hoffmann R, Kulik MC, Kulhánová I, Toch-Marquardt M, Menvielle G, et al. How can inequalities in mortality be reduced? A quantitative analysis of 6 risk factors in 21 European populations. PLoS One. 2014;9:e110952.
Schrijvers CTM, van de Mheen HD, Stronks K, Mackenbach JP. Socioeconomic inequalities in health in the working population: the contribution of working conditions. Int J Epidemiol. 1998;27:1011–8.
Bambra C, Gibson M, Sowden AJ, Wright K, Whitehead M, Petticrew M. Working for health? Evidence from systematic reviews on the effects on health and health inequalities of organisational changes to the psychosocial work environment. Prev Med. 2009;48:454–61.
Kaikkonen R, Rahkonen O, Lallukka T, Lahelma E. Physical and psychosocial working conditions as explanations for occupational class inequalities in self-rated health. Eur J Public Health. 2009;19:458–63.
Nutbeam D. How does evidence influence public health policy? Tackling health inequalities in England. Health Promot J Aust. 2003;14:154–8.
Bonell CP, Hargreaves J, Cousens S, Ross D, Hayes R, Petticrew M, et al. Alternatives to randomisation in the evaluation of public health interventions: design challenges and solutions. J Epidemiol Comm Health. 2011;65:582–7.
Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312:1215–8.
Petticrew M, Cummins S, Ferrell C, Findlay A, Higgins C, Hoy C, et al. Natural experiments: an underused tool for public health? Public Health. 2005;119:751–7.
Roe BE, Just D. Internal and external validity in economics research: Tradeoffs between experiments, field experiments, natural experiments, and field data. Am J Agricult Economics. 2009;91:1266–71.
Jones AM, Rice N. Econometric evaluation of health policies. In: Glied S, Smith PC, editors. The Oxford Handbook of Health Economics. Oxford: Oxford University Press; 2011. p. 890–923.
Oakes JM, Kaufman JS. Methods in social epidemiology. San Francisco: John Wiley & Sons; 2006.
Harper S, Strumpf EC, Burris S, Davey Smith G, Lynch J. The effect of mandatory seat belt laws on seat belt use by socioeconomic position. J Policy Anal Manage. 2014;33:141–61.
Ludwig J, Miller DL. Does head start improve children's life chances? Evidence from a regression discontinuity design. Quart J Economics. 2007;122:159–208.
Mackenbach JP, Kunst AE. Measuring the magnitude of socio-economic inequalities in health: an overview of available measures illustrated with two examples from Europe. Soc Sci Med. 1997;44:757–71.
Wooldridge JM. Econometric Analysis of Cross Section and Panel Data. Cambridge: MIT Press; 2001.
Angrist JD, Pischke JS. Mostly harmless econometrics: an empiricist's companion. Princeton: Princeton University Press; 2009.
Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal Inference. Boston: Houghton Mifflin Company; 2002.
Blundell R, Dias MC. Alternative approaches to evaluation in empirical microeconomics. J Hum Res. 2009;44:565–640.
Dinardo J, Lee DS. Program evaluation and research designs. Cambridge: NBER Working Paper 16016; 2010.
Cameron AC, Trivedi PK. Microeconometrics: Methods and Applications. New York: Cambridge University Press; 2005.
Imbens GW, Wooldridge JM. Recent developments in the econometrics of program evaluation. J Economic Lit. 2009;47:5–86.
Cousens S, Hargreaves J, Bonell C, Armstrong B, Thomas J, Kirkwood BR, et al. Alternatives to randomisation in the evaluation of public-health interventions: statistical analysis and causal inference. J Epidemiol Comm Health. 2011;65:576–81.
Cumming G. Inference by eye: reading the overlap of independent confidence intervals. Stat Med. 2009;28:205–20.
Imbens GW, Angrist JD. Identification and Estimation of Local Average Treatment Effects. Econometrica. 1994;62:467–75.
Gangl M. Causal Inference in Sociological Research. Ann Rev Sociol. 2010;36:21–47.
Hirano K, Imbens GW, Ridder G. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica. 2003;71:1161–89.
Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educat Psychol. 1974;66:688–701.
King G, Nielsen R. Why propensity scores should not be used for matching. Working paper, http://gking.harvard.edu/publications/why-propensity-scores-should-not-be-used-formatching. Accessed 05 Jan 2017.
Lee M. Quasi-differences in differences with panel data: complete pairing and linear-model LSE. 2012: Working paper, http://www.econ.sinica.edu.tw/mobile/conferences/contents/2013090215153345654/?MSID=2013102818183246335. Accessed 3 July 2015.
Athey S, Imbens GW. Identification and inference in nonlinear difference-in-differences models. Econometrica. 2006;74:431–97.
Arellano M, Bond S. Some tests of specification for panel data - Monte-Carlo evidence and an application to employment equations. Rev Econom Studies. 1991;58:277–97.
Bun MJG, Carree MA. Bias-corrected estimation in dynamic panel data models with heteroscedasticity. Econ Lett. 2006;92:220–7.
Freedman DA, Berk RA. Weighting regressions by propensity scores. Eval Rev. 2008;32:392–409.
Nichols A. Causal inference with observational data. Stata J. 2007;7:507–41.
García-Gómez P, van Kippersluis H, O'Donnell O, van Doorslaer E. Long-term and spillover effects of health shocks on employment and income. J Human Res. 2012;48:873–909.
Vanderweele T. Explanation in causal inference: Methods for mediation and interaction. New York: Oxford University Press; 2015.
Whitehead MA. A typology of actions to tackle social inequalities in health. J Epidemiol Comm Health. 2007;61:473–8.
Morestin F, Gauvin FP, Hogwe M, Benoit F. Method for synthesizing knowledge about public policies 2010: National Collaborating Centre for Healthy Public Policy. http://www.ncchpp.ca/docs/MethodPP_EN.pdf. Accessed 1 Mar 2016.
Petticrew M, Tugwell P, Kristjansson E, Oliver S, Ueffing E, Welch V. Damned if you do, damned if you don't: subgroup analysis and equity. J Epidemiol Comm Health. 2012;66:95–8.
Chaloupka FJ, Pacula RL. Sex and race differences in young people's responsiveness to price and tobacco control policies. Tob Control. 1999;8:373–7.
Melhuish E. Effects of fully-established Sure Start Local Programmes on 3-year-old children and their families living in England: a quasi-experimental observational study. Lancet. 2008;372:1641–7.
Gruber J. Youth smoking in the U.S.: prices and polices. 2000: NBER Working paper No. 7506. http://www.nber.org/papers/w7506. Accessed 19 Aug 2014.
Barr B, Bambra C, Whitehead M. The impact of NHS resource allocation policy on health inequalities in England 2001-11: longitudinal ecological study. BMJ. 2014;348:g3231.
Evans WN, Lien DS. The benefits of prenatal care: evidence from the PAT bus strike. J Econometrics. 2005;125:207–39.
Federico B, Mackenbach JP, Eikemo TA, Kunst AE. Impact of the 2005 smoke-free policy in Italy on prevalence, cessation and intensity of smoking in the overall population and by educational group. Addiction. 2012;107:1677–86.
Kramer D, Droomers M, Jongeneel-Grimen B, Wingen M, Stronks K, Kunst AE. The impact of area-based initiatives on physical activity trends in deprived areas; a quasi-experimental evaluation of the Dutch District Approach. Int J Behav Nutr Phys Act. 2014;11:36.
Kuipers MA, Nagelhout GE, Willemsen MC, Kunst AE. Widening educational inequalities in adolescent smoking following national tobacco control policies in the Netherlands in 2003: a time-series analysis. Addiction. 2014;109:1750–9.
This study was supported by the project DEMETRIQ (Developing Methodologies to Reduce Inequalities in the Determinants of Health, grant agreement number 278511), which received funding from the European Union under the FP7 Health programme.
The funding agencies had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. The authors were independent of the study funders.
The datasets used and/or analysed during the current study are available from the first author on reasonable request.
YH constructed the dataset, carried out the analyses and drafted the manuscript, FJvL commented on the study and critically revised draft versions of the manuscript, RH participated in the design of the study, the construction of the dataset and commented on draft versions of the manuscript, KvH contributed to the application of the methods and commented on the study, JPM designed and coordinated the study, and critically revised draft versions. All authors read and approved the final manuscript.
Erasmus MC, P.O. Box 2040, Rotterdam, 3000 CA, The Netherlands
Yannan Hu, Frank J. van Lenthe, Rasmus Hoffmann, Karen van Hedel & Johan P. Mackenbach
European University Institute, Florence, Italy
Rasmus Hoffmann
Max Planck Institute for Demographic Research, Rostock, Germany
Karen van Hedel
Yannan Hu
Frank J. van Lenthe
Johan P. Mackenbach
Correspondence to Johan P. Mackenbach.
Hu, Y., van Lenthe, F.J., Hoffmann, R. et al. Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to apply commonly used quantitative analytical methods?. BMC Med Res Methodol 17, 68 (2017). https://doi.org/10.1186/s12874-017-0317-5
Keywords: Health inequality, Propensity score matching, Socioeconomic position, Socioeconomic inequality, Policy effect
\begin{document}
\markboth{M. Arabzadeh et al.}{Quantum-Logic Synthesis of Hermitian Gates}
\title{Quantum-Logic Synthesis of Hermitian Gates} \author{MONA ARABZADEH \affil{Amirkabir University of Technology} MAHBOOBEH HOUSHMAND \affil{Amirkabir University of Technology} MEHDI SEDIGHI \affil{Amirkabir University of Technology} MORTEZA SAHEB~ZAMANI \affil{Amirkabir University of Technology}}
\begin{abstract} In this paper, the problem of synthesizing a general Hermitian quantum gate into a set of primary quantum gates is addressed. To this end, an extended version of the Jacobi approach for calculating the eigenvalues of Hermitian matrices in linear algebra is considered as the basis of the proposed synthesis method. The quantum circuit synthesis method derived from the Jacobi approach and its optimization challenges are described. It is shown that the proposed method results in multiple-control rotation gates around the $y$ axis, multiple-control phase shift gates, multiple-control NOT gates and a middle diagonal Hermitian matrix, which can be synthesized to multiple-control Pauli \emph{Z} gates. Using the proposed approach, it is shown how multiple-control $U$ gates, where $U$ is a single-qubit Hermitian quantum gate, can be implemented using a linear number of elementary gates in terms of circuit lines with the aid of one auxiliary qubit in an arbitrary state. \end{abstract}
\keywords{Quantum computation, Synthesis, Hermitian gates}
\maketitle
\section{Introduction}\label{sec:level1} Quantum computers use quantum mechanical phenomena, such as superposition and entanglement which make them advantageous over the classical ones, e.g., they can solve certain problems such as integer factorization~\cite{shor-1997-26} and database search~\cite{Grover} much more quickly than any classical computer using the best currently known algorithms.
Besides that, the idea of simulating quantum-mechanical effects by computers \cite{Feynman82} is a further motivation for working on quantum computation problems.
Quantum-logic synthesis refers to the problem of decomposing a given arbitrary quantum function into a set of quantum gates, i.e., elementary operations
which can be implemented in quantum technologies. This problem has been widely considered for a general unitary matrix and a number of solutions based on matrix decomposition have been proposed. QR decomposition in \cite{George2001,vartiainen-2004} and the cosine-sine (CS) decomposition in CSD \cite{Bergholm:2004}, QSD \cite{Shende06} and their combination, BQD \cite{saeedi2010block}, have been applied for quantum-logic synthesis. On the other hand, a set of methods for more specific unitary matrices such as two-qubit operators \cite{vidal-2004-69,Shende-1-2004,vatan-2004-69}, three-qubit operators~\cite{vatan-2004}, general diagonal matrices \cite{bullock-2004-4} and diagonal Hermitian quantum gates~\cite{diagonal} were proposed in the literature. It is shown in~\cite{diagonal} that diagonal Hermitian quantum gates can be decomposed to a set that solely consists of multiple-controlled \emph{Z} gates.
Since quantum gates are linear unitary transformations, they are invertible. The inverse of a unitary matrix $U$ is $U^{-1}=U^\dag$, which is its conjugate transpose. A restricted set of unitary operations are those which are self-inverse, i.e., $U=U^{-1}=U^\dag$. These sets are called Hermitian quantum gates.
Many quantum gates such as CNOT, SWAP, Toffoli, Fredkin, Hadamard, and Pauli gates, which are used frequently in quantum circuits, are Hermitian~\cite{pathak13}. It is notable that if a gate is Hermitian, then its controlled gate with any number of control lines is also Hermitian. Hermitian gates also appear in well-known quantum circuits, such as encoding and decoding circuits of stabilizer codes~\cite{Grassal,stab} where these circuits solely consist of Hadamard, Pauli and controlled-Pauli gates. Besides, Hermitian matrices, according to their properties, are used as a specific set of matrices in quantum algorithms. In \cite{harrow2009quantum}, the solution to a set of equations is obtained by a quantum algorithm assuming that the matrix of coefficient is a sparse Hermitian matrix.
Subsequently, another research team~\cite{wiebe2012quantum} applied the same algorithm to determine the quality of a least-squares fit over an exponentially large data set.
In this paper, the decomposition problem of Hermitian unitary matrices using an extended version of the Jacobi idea~\cite{golub2012matrix} is addressed. The Jacobi method in linear algebra is an iterative method to find the eigenvalues and eigenvectors of symmetric and Hermitian matrices. Using this, a given Hermitian quantum gate is synthesized to a set of multiple-control gates and a diagonal Hermitian matrix by the proposed approach. The decomposition of these high-level gates to CNOT and single-qubit gates applicable in quantum technologies has been shown previously \cite{Barenco95,bullock-2004-4,Shende09}. While synthesis approaches devised for general unitary matrices would work on Hermitians as well, for the reasons explained in the paper, we believe exploiting the special features of Hermitians would produce superior results. Section 3.5 of the paper shows that the proposed approach can lead to better results than one of the general well-known quantum synthesis approaches, QSD \cite{Shende06}, for synthesizing $CU$ gates and the general synthesis approach for $C^kU$ gates, \cite{Barenco95} where $U$ is a single-qubit Hermitian gate. If the quantum circuit that needs to be synthesized comprises Hermitian matrices, the proposed approach can be applied to gain better results. Otherwise, the general methods can still be applied.
To describe our proposed method for quantum-logic synthesis of Hermitian matrices, the remainder of the paper is organized as follows. In Section \ref{sec:BasicCo}, some basic concepts about quantum computation and the Jacobi method are explained. The proposed synthesis approach is introduced in Section \ref{sec:method}. Section \ref{sec:Conc} concludes the paper.
\section{Preliminaries and Definitions}\label{sec:BasicCo} \subsection{Quantum Basic Concepts}
A quantum bit, called a \emph{qubit}, is a quantum state with two basis states $|0\rangle$ and $|1\rangle$. Based on the superposition principle, a qubit can take any linear combination of its two basis states, i.e., $|\psi\rangle = \alpha|0\rangle+\beta|1\rangle$. In this equation, $\alpha$ and $\beta$ are complex numbers such that $|\alpha|^2 + |\beta|^2 = 1$.
If the qubit is measured in the computational, i.e., $\{$$\left\vert 0\right\rangle$, $\left\vert 1\right\rangle$$\}$ basis, the classic outcome of 0 is observed with the probability of $|\alpha|^2$ and the classic outcome of 1 is observed with the probability of $|\beta|^2$. If 0 is observed, the state of the qubit after the measurement collapses to $|0\rangle$. Otherwise, it collapses to $|1\rangle$.
A matrix $U$ is \emph{unitary} if $UU^\dag=I$, in which $U^\dag$ is the conjugate transpose of $U$ and $I$ is the identity matrix. An $n$-qubit quantum gate corresponds to a $2^n\times2^n$ unitary matrix which performs a particular operation on $n$ qubits. Various quantum gates with different functionalities have been introduced. Quantum circuits constructed from a set of quantum gates are often synthesized using either a ``basic gate" library \cite{Barenco95}, with CNOT and single-qubit gates, or an ``elementary gate" library \cite{bullock-2004-4}, with CNOT and single-qubit rotation gates. Single-qubit rotation gates around $y$ and $z$ axes with the angle of $\theta$ have the matrix representations as illustrated in (\ref{gate1}). \begin{equation}\label{gate1}
R_y (\theta ) = \left[ {\begin{array}{*{20}c} {\cos {\textstyle{\theta \over 2}}} & {\sin {\textstyle{\theta \over 2}}} \\ { - \sin {\textstyle{\theta \over 2}}} & {\cos {\textstyle{\theta \over 2}}} \\ \end{array}} \right]\,\,\,\ R_z (\theta) = \left[ {\begin{array}{*{20}c} {e^{ - i\theta /2}} & 0 \\ 0 & {e^{ i\theta /2}} \\ \end{array}} \right] \end{equation} \normalsize
There is another set of useful single-qubit gates called Pauli gates (\ref{gate2}). This set together with the identity matrix span the full vector space of two-dimensional matrices.
\begin{equation}\label{gate2}
\sigma _x = X = \left[ \begin{array}{*{20}c}
0 & 1\\
1 & 0\\
\end{array} \right]{\kern 1pt} \,\,\,\sigma _y = Y =\left[ {\begin{array}{*{20}c}
0 & { - i} \\
i & 0 \\ \end{array}} \right]\,\,\,\sigma _z = Z = \left[ \begin{array}{*{20}c}
1 & 0 \\
0 & { - 1} \\
\end{array} \right] \end{equation}
Phase shift gates are a set of single-qubit gates which leave the $\left\vert 0\right\rangle$ state unchanged and map $\left\vert 1\right\rangle$ to $e^{i\alpha}$$\left\vert 1\right\rangle$ as shown in (\ref{gate3}).
Some common phase shift gates are the phase gate ($S$), the $\frac{\pi}{8}$ gate ($T$) and $\sigma _z$, for which $\alpha=\frac{\pi}{2}$, $\frac{\pi}{4}$ and $\pi$, respectively. \begin{equation}\label{gate3}
R(\alpha) = \left[ \begin{array}{*{20}c}
1 & 0 \\
0 & {e^{i\alpha } } \\
\end{array} \right] \end{equation}
$C^kU$ gates acting on $n$ qubits for $1\leq k\leq n-1$ are a set of quantum gates with one target and $k$ control qubits. These control qubits can be either positive or negative. If the states of the positive control(s) and the negative control(s) are $\left\vert 1\right\rangle$ and $\left\vert 0\right\rangle$ respectively, then the $U$ gate is applied to the target qubit; otherwise, no action is taken. CNOT and CZ are two examples of $CU$ gates with a positive control. They operate on two qubits, i.e., a control and a target qubit; if the control qubit is $\left\vert 1\right\rangle$, then a $\sigma_x$ or $\sigma_z$ gate, respectively, is performed on the target qubit, and otherwise the state of the target qubit is left unchanged.
In this paper, a $C^{n-1}U$ gate which operates on $n$ qubits is called a multiple-control $U$ gate. Multiple-control $U$ gates can be decomposed to CNOT and single-qubit gates as shown in \cite{Barenco95}.
A two-level unitary matrix~\cite{Nielsen10} is a unitary matrix which operates non-trivially only on two vector components. Two-level matrices differ from the identity matrix in at most four elements, placed at the indices $pp$, $pq$, $qp$ and $qq$. Multiple-control $U$ gates are examples of two-level matrices.
If the elements outside the main diagonal of a square matrix are all zero, the matrix is called diagonal. The eigenvalues of a diagonal matrix are its diagonal elements. The matrix representation of CZ gate is an example of a diagonal matrix.
A square matrix can be partitioned into smaller square matrices with the same size, called blocks. If blocks outside the main diagonal are zero matrices, a matrix is called block-diagonal. A quantum multiplexer~\cite{Shende06} over $n$ qubits, has $m$ target qubits and $s=n-m$ select qubits. A different quantum gate is applied on the targets according to the values of select qubits. In the case that the select qubits are the most significant ones, quantum multiplexers have a block-diagonal matrix representation with $U_i(2^m)$ matrices, $0 \leq i \leq 2^{n-m}-1$, on main diagonal blocks. In this case, each $U_i(2^m)$ gate is applied to the target qubit(s) when the select qubit(s) are in the state $\left\vert i\right\rangle$. If $U_i$'s are $R_z$ (respectively $R_y$), the quantum multiplexer is called multiplexed $R_z$ (respectively $R_y$) gate. A select qubit in a quantum multiplexer is denoted by $\Box$ as used in~\cite{Shende06}. If a quantum multiplexer has a single select bit which is the most significant one, it can be written as $U_0\oplus U_1$ where $U_0$ and $U_1$ are applied on the target qubits when the select qubit is $|0\rangle$ and $|1\rangle$ respectively. \subsection{Hermitian Matrix Properties} A matrix $A$ is called Hermitian~\cite{Nielsen10} or self-adjoint if $A^\dag=A$. Every Hermitian matrix is normal and therefore, it is diagonalizable as shown in~(\ref{eq0}). \begin{equation}\label{eq0}
\centering A=U D U^\dag \end{equation} The elements of $D$ are the eigenvalues of $A$ and the columns of the unitary matrix $U$ are the eigenvectors of $A$. All eigenvalues of a Hermitian matrix are real numbers. On the other hand, the eigenvalues of a unitary matrix have modulus equal to 1 and therefore, the eigenvalues of a Hermitian unitary matrix, the elements of the matrix $D$, are either +1 or -1. Pauli matrices are some examples of Hermitian matrices.
It should be noted that if a circuit solely consists of Hermitian quantum gates, its matrix is not necessarily Hermitian, since Hermitian matrices are closed under tensor product but they are not closed under matrix multiplication.
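As a quick numerical illustration of the eigenvalue property noted above (a Python/NumPy sketch added for exposition, not part of the synthesis method itself), the Hadamard gate is both Hermitian and unitary, and its eigenvalues are indeed $\pm 1$:
\begin{verbatim}
# Eigenvalues of a Hermitian unitary are +1 or -1 (Hadamard as an example)
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hermitian and unitary
print(np.allclose(H @ H.conj().T, np.eye(2)))          # True: H is unitary
print(np.linalg.eigvalsh(H))                           # [-1.  1.]
\end{verbatim}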
In this paper, symmetric, Hermitian and diagonal Hermitian quantum gates which operate on $n$ qubits are denoted by $\mathbb{S}(2^n)$, $\mathbb{H}(2^n)$ and $\mathbb{D}_n$, respectively.
\subsection{Jacobi Method}\label{sec:Jacob} The Jacobi method includes a set of algorithmic solutions for the real symmetric eigenvalue problem \cite{golub2012matrix}. The main idea of these algorithms is to reduce the norm of the off-diagonal elements shown in (\ref{eq1}), in a systematic manner.
\begin{equation}\label{eq1}
\centering {\rm off}(A) = \sqrt {\sum\limits_{p = 1}^n {\sum\limits_{\scriptstyle q = 1
\atop
\scriptstyle q \ne p
}^n {a_{pq}^2 } } } \end{equation}
This is done by the help of Jacobi rotations, or Givens rotations denoted by $G$, to make each off-diagonal element zero in each iteration of the Jacobi method. For an $n$$\times$$n$ matrix, $G_{pq}$ is a two-level matrix, i.e., it is similar to an identity matrix $I_n$ except for four elements $pp$, $pq$, $qp$ and $qq$ where $p$ and $q$ are the indices that specify the row and column of the element which is targeted to become zero ($p$$<$$q$). These elements are illustrated in~(\ref{eq2}) as $G(\theta)$.
In the $j^{th}$ Jacobi iteration, the given matrix $A^{(j-1)}$ with a non-zero off-diagonal element at $pq$ is converted to the new matrix $A^{(j)}= G_{pq}^T A^{(j-1)}G_{pq}$ where $G_{pq}^T$ is the transpose of $G_{pq}$. The angle $\theta$ is chosen as shown in (\ref{eq2}) in order to set the $pq$ element to zero.
\begin{equation}\label{eq2}
\centering G(\theta) = \left[ {\begin{array}{*{20}c}
{\cos {\theta \over 2 }} & { \sin {\theta \over 2} } \\
{ - \sin {\theta \over 2} } & {\cos {\theta \over 2} } \\ \end{array}} \right] ,\,\,\,\,\,\,\theta = \arctan (\frac{{ - 2a_{pq} }}{{a_{pp} - a_{qq} }}) \end{equation}
The final result is a diagonal matrix, whose diagonal elements are the eigenvalues of the initial matrix $A^{(0)}$.
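The iteration just described can be prototyped in a few lines of Python/NumPy; the sketch below repeatedly zeroes the largest off-diagonal element using the rotation of (\ref{eq2}) and is an illustration only, not an optimized implementation:
\begin{verbatim}
# Classical Jacobi iteration for a real symmetric matrix (illustrative)
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    while True:
        off = np.abs(np.triu(A, 1))               # strictly upper triangle
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:                       # off(A) negligible: done
            return np.diag(A)
        theta = np.arctan2(-2 * A[p, q], A[p, p] - A[q, q])
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        G = np.eye(n)
        G[p, p] = G[q, q] = c
        G[p, q], G[q, p] = s, -s                  # the two-level block G(theta)
        A = G.T @ A @ G                           # one Jacobi update

print(jacobi_eigenvalues([[2.0, 1.0], [1.0, 3.0]]))   # (5 +/- sqrt(5))/2
\end{verbatim}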
\subsection{Jacobi Method for Hermitian Matrices}\label{sec:JacobHerm} To extend the Jacobi method for Hermitian matrices, Jacobi rotations should be replaced by their complex counterparts. A typical complex rotation, $Q(\theta,\alpha)$ is defined as (\ref{eq4}) which can be used in a similar manner as $G({\theta})$ to construct a Jacobi rotation, where $\alpha$ and $\theta$ can be computed using (\ref{eq7}).
\begin{equation}\label{eq4}
\centering Q(\theta,\alpha) = \left[ {\begin{array}{*{20}c}
{\cos {\theta \over 2} } & {e^{i\alpha } \sin {\theta \over 2} } \\
{ -e^{-i\alpha } \sin {\theta \over 2} } & {\cos {\theta \over 2} } \\ \end{array}} \right] \end{equation}
\begin{equation}\label{eq7}
\tan (\theta ) = \frac{{ - 2\left| {a_{pq} } \right|}}{{a_{pp} - a_{qq} }},
\,\,\,\,\,\, e^{i\alpha } = \frac{{a_{pq} }}{{\left| {a_{pq} } \right|}} \end{equation}
There is another complex rotation matrix introduced in \cite{park1993real} which is used in our proposed method. This complex rotation matrix, $Q'(\theta,\alpha)$, is defined as (\ref{eq5}), where $G(\theta)$ is a real Jacobi rotation described earlier, and $R(\alpha)$ is the phase shift matrix as introduced in Section~\ref{sec:BasicCo}.
\begin{equation}\label{eq5}
\centering Q'(\theta,\alpha) = R(-\alpha)G(\theta) = \left[ {\begin{array}{*{20}c}
{\cos {\theta \over 2} } & { \sin {\theta \over 2} } \\
{ -e^{-i\alpha } \sin {\theta \over 2} } & {e^{-i\alpha } \cos {\theta \over 2} } \\ \end{array}} \right] \end{equation}
The use of the complex rotation matrix is shown in (\ref{eq6}), where $p$ and $q$ are the indices of the element in the matrix $A$ which should be zeroed $(p<q)$. This definition is used in our synthesis method in Section \ref{sec:method}.
\begin{equation}\label{eq6}
\centering \begin{array}{l}
A^{(j)} = {Q'}_{pq}^{\dag} A^{(j - 1)} {Q'}_{pq} \\\\
= \left[ {\begin{array}{*{20}c}
{c } & { - s } \\
{s} & {c} \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
1 & 0 \\
0 & {e^{i\alpha } } \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
{a_{pp}^{(j-1)} } & {a_{pq}^{(j-1)} } \\
{a_{qp}^{(j-1)} } & {a_{qq}^{(j-1)} } \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
1 & 0 \\
0 & {e^{ - i\alpha } } \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
{c} & {s } \\
{ - s} & {c} \\ \end{array}} \right], \\\\
c = {\cos {\theta \over 2} }, \,\,
s = {\sin {\theta \over 2} }
\end{array} \end{equation}
The parameters $\theta$ and $\alpha$ can be calculated using (\ref{eq7}) where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$ \cite{park1993real}. In the remainder of this paper, whenever the indices $p$ and $q$ are not important, $G_{pq}$ and $R_{pq}$ are simply referred to as $G$ and $R$.
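A single step of this complex variant can be prototyped as follows, building $Q'(\theta,\alpha)$ of (\ref{eq5}) with $\theta$ and $\alpha$ from (\ref{eq7}) and checking that the targeted off-diagonal element is zeroed; the example input $(Y+Z)/\sqrt{2}$ is a Hermitian unitary, so the resulting diagonal has entries $\pm 1$. Again, this is an illustrative sketch only:
\begin{verbatim}
# One complex Jacobi step A -> Q'^dagger A Q' on a Hermitian matrix
import numpy as np

def complex_jacobi_step(A, p, q):
    theta = np.arctan2(-2 * abs(A[p, q]), (A[p, p] - A[q, q]).real)
    alpha = np.angle(A[p, q])            # e^{i alpha} = a_pq / |a_pq|
    Qp = np.eye(A.shape[0], dtype=complex)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    Qp[p, p], Qp[p, q] = c, s            # Q' = R(-alpha) G(theta), two-level
    Qp[q, p], Qp[q, q] = -np.exp(-1j * alpha) * s, np.exp(-1j * alpha) * c
    return Qp, Qp.conj().T @ A @ Qp

A = np.array([[1, -1j], [1j, -1]]) / np.sqrt(2)   # (Y+Z)/sqrt(2)
Qp, A1 = complex_jacobi_step(A, 0, 1)
print(np.round(A1, 12))                  # diagonal with entries +1 and -1
\end{verbatim}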
\section{Jacobi-Based Quantum-Logic Synthesis}\label{sec:method} In this section, the main structure of the proposed Jacobi-based method for Hermitian quantum gate synthesis (JBHS) is discussed.
The Jacobi method described in Sections \ref{sec:Jacob} and \ref{sec:JacobHerm} is used in our synthesis algorithm as a matrix decomposition method. It transforms a given Hermitian unitary matrix $\mathbb{H}(2^n)$ into a set of two-level matrices, their conjugate transposes and a single diagonal matrix, via the recursion shown in (\ref{eq8}). The elements of $\mathbb{H}^{(j)}$ are denoted by $h^{(j)}_{pq}$ for the $j^{th}$ iteration. \begin{equation}\label{eq8}
\begin{array}{l}
\left\{ \begin{array}{l}
\mathbb{H}(2^n)^{(0)} = \mathbb{H}(2^n) \\
\mathbb{H}(2^n)^{(j)} = G_{pq}^{\dag} R_{pq}^{\dag} \mathbb{H}^{(j - 1)}(2^n) R_{pq} G_{pq} \,{\kern 1pt} \,\, \\
\end{array} \right. \\\\
{\kern 1pt} 1 \le j \le 2^{2n-1} - 2^{n-1},\,\,\,\,
0 \le p<q \le 2^n-1 ,\,\,\,\,h^{(j-1)}_{pq} \ne 0\\
\end{array} \end{equation} Solving the recursion of (\ref{eq8}) leads to (\ref{eq8-1}), where $j_{\max }$ denotes the total number of iterations, which equals the number of non-zero off-diagonal elements of the given matrix whose row index is less than the column index.
\begin{equation}\label{eq8-1}
\begin{array}{l}
\mathbb{D}_n = G^{\dag (j_{\max})} R^{\dag (j_{\max})} \cdots G^{\dag (1)} R^{\dag (1)} \mathbb{H}(2^n)R^{(1)} G^{(1)} \cdots R^{(j_{\max})} G^{(j_{\max})} \\
\mathbb{D}_n =\prod\limits_{j = j_{\max } }^1 {\{ G^{\dag (j)} R^{\dag (j)} \} } \mathbb{H}(2^n)\prod\limits_{j = 1}^{j_{\max } } {\{ R^{(j)} G^{(j)} \} } \\
\end{array} \end{equation}
Based on (\ref{eq8-1}), the general structure of the proposed JBHS is shown in (\ref{eq8-2}). \begin{equation}\label{eq8-2}
\begin{array}{l}
\mathbb{H}(2^n) = R^{(1)} G^{(1)} \cdots R^{(j_{\max})} G^{(j_{\max})} \mathbb{D}_{n}G^{\dag (j_{\max})} R^{\dag (j_{\max})} \cdots G^{\dag (1)} R^{\dag (1)} \\
\mathbb{H}(2^n) = \prod\limits_{j = 1}^{j_{\max } } {\{ R^{(j)} G^{(j)} \} } \mathbb{D}_{n}\prod\limits_{j = j_{\max } }^1 {\{ G^{\dag (j)} R^{\dag (j)} \} } \\
\end{array} \end{equation}
In the remainder of this section, the quantum equivalents of the produced two-level matrices and the middle diagonal matrix are discussed independently and then the whole structure of the proposed synthesis method in (\ref{eq8-2}) is presented by synthesizing a general $\mathbb{H}(4)$ matrix as an example. \subsection{Quantum-Equivalence of the Two-Level Matrices} In this section, first, possible multiple-control $U$ gates on $n$ qubits and their corresponding matrices are introduced. Then, in Theorem \ref{theo1}, two-level matrices produced by the proposed synthesis method are described as quantum operators according to the definition of multiple-control gates.
An $n$-qubit multiple-control $U$ gate is denoted as $C^{n-1}U_i^j$, where $0\leq i\leq n-1$ and $0\leq j\leq 2^{n-1}-1$. The target qubit of $C^{n-1}U_i^j$ is the $i^{th}$ qubit and the ($n$-1)-bit binary expression of $j$ represents the control string, $0$ for negative and $1$ for positive control(s).
$C^{n-1}U_i^j$ gates are two-level matrices which change only two basis states, i.e., $|a\rangle$ and $|b\rangle$, where the binary expressions of $a$ and $b$ differ in only one bit, i.e., they are two adjacent Gray codes. $a$ and $b$ can be computed from $i$ and $j$ by an injective function as follows. The binary expression of $a$ (respectively $b$) is obtained by inserting a zero (respectively a one) at the $i^{th}$ bit position of the binary expression of $j$. The matrix of $C^{n-1}U_i^j$ is the same as an $I^{ \otimes n}$ matrix except for the four elements of $U$, which are placed at the indices $aa$, $ab$, $ba$ and $bb$.
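For illustration, the following small helper (our own sketch; it assumes the most-significant-bit-first convention, which is consistent with the indices of (\ref{eq9})) recovers $a$ and $b$ from $i$ and $j$:
\begin{verbatim}
# Sketch (ours): basis-state indices (a, b) acted on by C^{n-1}U_i^j,
# assuming an MSB-first bit convention consistent with (eq9).
def basis_indices(i, j, n):
    s = format(j, "0{}b".format(n - 1))  # (n-1)-bit control word of j
    a = int(s[:i] + "0" + s[i:], 2)      # insert 0 at bit position i
    b = int(s[:i] + "1" + s[i:], 2)      # insert 1 at bit position i
    return a, b

# n = 2 reproduces the four cases of (eq9):
assert [basis_indices(i, j, 2) for i in (0, 1) for j in (0, 1)] \
       == [(0, 2), (1, 3), (0, 1), (2, 3)]
\end{verbatim}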
It should be noted that the number of different $C^{n-1}U_i^j$ gates is equal to $n2^{n-1}$, since there are $\binom{n}{1}=n$ possible target qubits and, for each target qubit, $2^{n-1}$ different control strings. As an example, for $n=2$, Figure \ref{fig1} shows four different $C U_i^j$ gates corresponding to the matrices in (\ref{eq9}).
\begin{equation}\label{eq9}
\begin{array}{l} \begin{array}{*{20}c}
{\,\,\,\,\,\,\,\,\,\,\,\,\,00} & {01} & {10} & {11} \\ \end{array}\begin{array}{*{20}c}
{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,00} & {01} & {10} & {11} \\ \end{array} \\
\begin{array}{*{20}c}
{00} \\
{01} \\
{10} \\
{11} \\ \end{array}\left[ {\begin{array}{*{20}c}
{u_{00} } & 0 & {u_{01} } & 0 \\
0 & 1 & 0 & 0 \\
{u_{10} } & 0 & {u_{11} } & 0 \\
0 & 0 & 0 & 1 \\ \end{array}} \right]{\kern 1pt} {\kern 1pt} \,\,\,\,\,\,\,\begin{array}{*{20}c}
{00} \\
{01} \\
{10} \\
{11} \\ \end{array}\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & {u_{00} } & 0 & {u_{01} } \\
0 & 0 & 1 & 0 \\
0 & {u_{10} } & 0 & {u_{11} } \\ \end{array}} \right] \\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,i = 0\,,j = 0{\kern 1pt} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,i = 0\,,j = 1 \\\\
\begin{array}{*{20}c}
{\,\,\,\,\,\,\,\,\,\,\,\,\,00} & {01} & {10} & {11} \\ \end{array}\begin{array}{*{20}c}
{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,00} & {01} & {10} & {11} \\ \end{array} \\
\begin{array}{*{20}c}
{00} \\
{01} \\
{10} \\
{11} \\ \end{array}\left[ {\begin{array}{*{20}c}
{u_{00} } & {u_{01} } & 0 & 0 \\
{u_{10} } & {u_{11} } & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\ \end{array}} \right]{\kern 1pt} {\kern 1pt} \,\,\,\,\,\,\,\begin{array}{*{20}c}
{00} \\
{01} \\
{10} \\
{11} \\ \end{array}\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & {u_{00} } & {u_{01} } \\
0 & 0 & {u_{10} } & {u_{11} } \\ \end{array}} \right] \\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,i = 1\,,j = 0{\kern 1pt} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,i = 1\,,j = 1 \\\\
\end{array} \end{equation}
\begin{figure}
\caption{Corresponding gates of the matrices in (\ref{eq9}). From left to right: ($i=0$ and $j=0$), ($i=0$ and $j=1$), ($i=1$ and $j=0$) and ($i=1$ and $j=1$).}
\label{fig1}
\end{figure} \normalsize
\begin{theorem}\label{theo1} Each two-level matrix produced in the Jacobi iteration can be decomposed into an $n$-qubit multiple-control $R_y(\theta)$ gate, an $n$-qubit multiple-control $R(\alpha)$ gate and a set of multiple-control NOT gates which act on $n$ qubits. \end{theorem}
\begin{proof}If the binary expressions of $p$ and $q$ in the $pq$ index of $G_{pq}$ and $R_{pq}$ in (\ref{eq8}) are two adjacent Gray codes, there will be an $n$-qubit multiple-control $R_y(\theta)$, i.e., $C^{n-1}R_y(\theta)_i^j$, equivalent to $G_{pq}$, and an $n$-qubit multiple-control $R(\alpha)$, i.e., $C^{n-1}R(\alpha)_i^j$, equivalent to $R_{pq}$, where $i$ and $j$ can be easily obtained from $p$ and $q$, as explained earlier.
Now consider the case that the difference between the binary expressions of $p$ and $q$ is more than one bit, i.e., $l$ bits where $2\leq l \leq n$. There exists a Gray code sequence that connects the binary expressions of $p$ and $q$ using $l+1$ elements (including the source and destination). $l-2$ $n$-qubit multiple-control NOT gates are required to map the binary expression of $p$ to an adjacent Gray code of $q$, denoted by $g$. Then an $n$-qubit multiple-control $R(\alpha)$, i.e., $C^{n-1}R(\alpha)_i^j$, and an $n$-qubit multiple-control $R_y(\theta)$, i.e., $C^{n-1}R_y(\theta)_i^j$, are applied. $i$ and $j$ can be easily computed from $p$ and $g$ as mentioned before. Finally, $l-2$ multiple-control NOT gates are required to map $|g\rangle$ back to $|p\rangle$. It is worth mentioning that if $h_{pq}$ in (\ref{eq8}) is a real number, no multiple-control $R(\alpha)$ gate is required. \end{proof}
\subsection{Quantum-Equivalence of the Diagonal Matrix} \label{sec:diag} The produced diagonal matrix $\mathbb{D}_n$ in (\ref{eq8-1}) can be synthesized either by previous general synthesis methods for quantum diagonal matrices such as \cite{bullock-2004-4}, or by specific methods for Hermitian diagonal matrices such as the one presented in~\cite{diagonal}. Figure \ref{fig2} shows the synthesis of an arbitrary diagonal matrix $\Delta_{3}$ for $n=3$ qubits. The synthesis method of~\cite{diagonal} was presented using the fact that the diagonal elements of diagonal Hermitian gates are either $+1$ or $-1$, and hence they can be synthesized using a set that solely consists of multiple-control Pauli \emph{Z} gates. The authors of~\cite{diagonal} introduced a binary representation for the diagonal Hermitian gates and showed that the binary representations of multiple-controlled \emph{Z} gates form a basis for the vector space that is produced by the binary representations of all diagonal Hermitian quantum gates. Finally, the problem of decomposing a given diagonal Hermitian gate was mapped to the problem of writing its binary representation in the specific basis mentioned above. It was shown that this approach can lead to circuits with lower costs in comparison with the approach of~\cite{bullock-2004-4}.
\begin{figure}
\caption{Synthesis of $\Delta_3$ gate \cite{bullock-2004-4}.}
\label{fig2}
\end{figure}
\subsection{Gate-Order Analysis of the Proposed Method}
To determine the number of gates produced by the proposed JBHS method when the pure synthesis procedure is applied to a given $\mathbb{H}(2^n)$ quantum gate without any optimizations, four kinds of gates are considered: $C^{n-1}R_y(\theta)$, $C^{n-1}R(\alpha)$ and $C^{n-1}$NOT gates, in addition to a diagonal Hermitian gate, which can be synthesized by the method of \cite{bullock-2004-4}, as mentioned in Section \ref{sec:diag}, with at most $2^n-2$ CNOT gates.
For a general $\mathbb{H}(2^n)$ matrix, there are at most $2^{2n-1}-2^{n-1}$ non-zero elements which must be zeroed. Therefore, in the worst case, $2^{2n}-2^n$ gates of each of the $C^{n-1}R_y(\theta)$ and $C^{n-1}R(\alpha)$ types, the latter only for complex elements, are needed. Among these $2^{2n-1}-2^{n-1}$ non-zero elements, $h_{ij}$, $0$$\leq$$ i$$<$$j$ $\leq$ $2^n-1$, there are $n2^{n-1}$ elements for which the binary representations of $i$ and $j$ differ in only one bit; for these, no $C^{n-1}$NOT gate is needed. The other $2^{2n-1}-n2^{n-1}-2^{n-1}$ elements need at most $4(n-2)$ $C^{n-1}$NOT gates each. As a result, the numbers of produced $C^{n-1}R_y(\theta)$ and $C^{n-1}R(\alpha)$ gates are of the order $O(4^n)$, and the number of needed $C^{n-1}$NOT gates is of the order $O(n4^n)$. It is worth mentioning that zero elements eliminate their related gates without any effect on the other non-zero elements. For sparse $\mathbb{H}(2^n)$ matrices, the actual number of needed gates is much smaller, since the gate count scales with the number of non-zero elements. Moreover, possible post-synthesis optimizations decrease the number of required elementary gates. These post-synthesis optimizations include eliminating the control lines of the two produced Hermitian conjugate multiple-control $U$ gates around a middle multiple-control $Z$ gate. An example of this optimization is shown in Fig.~\ref{fig:multiple}.
\subsection{Possible Options in the Synthesis Procedure} There are two major options for the order of selecting $p$ and $q$ in (\ref{eq8}) during the synthesis process which are discussed in the following propositions. Using these propositions which are based on the inherent parallelism of the Jacobi algorithm, elements can be selected in order to produce circuits with better results in terms of the number of elementary gates.
\textbf{Proposition 1.} The off-diagonal elements of a given Hermitian gate on $n$ qubits, $h_{pq}$, $p<q$, can be divided into $2^n-1$ independent sets, i.e., sets whose computations have no conflicts.
\begin{proof}According to the definition of the \emph{parallel-ordering} problem~\cite{golub2012matrix}, $(p_1,q_1),(p_2,q_2),\ldots,(p_N,q_N)$, $N=(2^n-1)2^{n-1}$, is a parallel ordering of the set $\{(p,q)|0\leq p<q\leq 2^n-1\}$ if, for $s$ from $0$ to $2^n-2$, the rotation set $rotation.set(s)=\{(p_k,q_k):k=2^{n-1} s+1: 2^{n-1} (s+1)\}$ consists of rotations with no conflicts, which results in $2^n-1$ independent sets. Therefore, $(2^n-1)!$ different arrangements of these sets are possible during or after the JBHS synthesis process. Any arrangement of these sets determines the arrangements of their conjugate transposes on the other side of the middle diagonal gate. \end{proof}
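For concreteness, one standard way to generate such a parallel ordering is the round-robin (``chess tournament'') scheme discussed in \cite{golub2012matrix}; the sketch below (ours) reproduces, for $2^n=4$, the three independent sets used in the example later in this section:
\begin{verbatim}
# Sketch (ours): round-robin parallel ordering of the index pairs (p, q),
# p < q, into 2^n - 1 conflict-free sets of 2^(n-1) pairs each.
def parallel_ordering(m):                      # m = 2^n, assumed even
    players = list(range(m))
    rounds = []
    for _ in range(m - 1):
        pairs = [tuple(sorted((players[k], players[m - 1 - k])))
                 for k in range(m // 2)]
        rounds.append(pairs)
        # keep the first element fixed, rotate the rest
        players = [players[0], players[-1]] + players[1:-1]
    return rounds

# parallel_ordering(4) ==
#   [[(0, 3), (1, 2)], [(0, 2), (1, 3)], [(0, 1), (2, 3)]]
\end{verbatim}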
\textbf{Proposition 2.} The resulted gates of each member in every set of Proposition 1 are interchangeable with the resulted gates of other members in that set.
\begin{proof}Based on Proposition 1, each set consists of non-conflicting members. Therefore, their computations have no conflicts and make their resulted gates interchangeable. It should be noted that any arrangement of these gates of each member determines the arrangements of their conjugate transposes on the other side of the middle diagonal gate. \end{proof}
These options provide the possibility of arranging these parallel sets and the gates inside each set in the produced circuit to cancel some redundant gates.
As an illustrative example, consider the Hermitian quantum gate $\mathbb{H}(4)$ in (\ref{eq10}). Since the matrix is Hermitian, setting the off-diagonal elements $h_{pq}$, $p<q$, to zero via (\ref{eq8-2}) turns the remaining off-diagonal elements into zero as well.
\begin{equation}\label{eq10}
\begin{array}{l}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,00\,\,\,\,\,01\,\,\,\,\,10\,\,\,\,\,11 \\
\begin{array}{*{20}c}
{00} \\
{01} \\
{10} \\
{11} \\ \end{array}\left[ {\begin{array}{*{20}c}
{d_{00} } & {h_{01} } & {h_{02} } & {h_{03} } \\
{} & {d_{11} } & {h_{12} } & {h_{13} } \\
{} & {} & {d_{22} } & {h_{23} } \\
{} & {} & {} & {d_{33} } \\ \end{array}} \right] \\
\end{array} \end{equation}
According to Proposition 1, there are three independent sets \{(0,2)(1,3)\}, \{(0,1)(2,3)\}, \{(0,3)(1,2)\}. The circuit synthesized according to these sets is illustrated in Figure \ref{fig3}. The order of gates is the same as the order of the sets from left to right. The independent sets which can be exchanged with each other at the left side of the middle diagonal gate are separated by dashed lines. \begin{figure}
\caption{Synthesis of a Hermitian quantum gate for $n$$=$$2$ qubits.}
\label{fig3}
\end{figure} General decomposition of a symmetric gate on two qubits is illustrated in Figure \ref{fig3.1}. \begin{figure}
\caption{Synthesis of a symmetric quantum gate for $n$$=$$2$ qubits.}
\label{fig3.1}
\end{figure} Using the quantum multiplexer notation from \cite{Shende06} and the optimization rules of \cite{arabzadeh2010rule}, the circuit of Figure \ref{fig4} is obtained. Additional optimizations lead to the circuit of Figure \ref{fig4.1} by eliminating more CNOT gates.
\begin{figure}
\caption{The circuit of Figure \ref{fig3.1} after applying some optimizations from \cite{arabzadeh2010rule} on CNOT gates and merging controlled-$R_y$ gates to reach a multiplexed $R_y$ with fewer CNOT and single-qubit rotation gates.}
\label{fig4}
\end{figure}
\begin{figure}
\caption{The circuit of Figure \ref{fig4} after merging the adjacent CNOT gates into $\mathbb{D}_2$, which produces a new $\mathbb{D}_2$ gate. Two CNOT gates, next to multiplexed $R_y$ gates, can be canceled out by one CNOT in the decomposition of each multiplexed $R_y$ in accordance to \cite{Shende06}.}
\label{fig4.1}
\end{figure}
\subsection{Application: Synthesis of Multiple-Control Hermitian Gates} The Jacobi method for calculating the eigenvalues and eigenvectors of Hermitian matrices was implemented in MATLAB. In this section, the results of applying the JBHS approach to a special set of circuits, C$\mathbb{H}(2)$ gates and their general case with $k$ control qubits, are considered.
It can be readily verified that an $\mathbb{H}(2)$ quantum gate (except $I$ and $-I$) can be written as $\mathbb{H(\theta,\alpha)}$: \begin{equation} \label{eq:h2} \mathbb{H(\theta,\alpha)}= \left[ {\begin{array}{*{20}c}
{\cos (\theta )} & {e^{ - i\alpha } \sin (\theta )} \\
{e^{i\alpha } \sin (\theta )} & { - \cos (\theta )} \\ \end{array}} \right], \end{equation} where $\alpha$ is a real parameter and $0\leq\theta\leq\pi$. Applying the JBHS method on C$\mathbb{H(\theta,\alpha)}$ will lead to the following decomposition: \begin{equation} \label{eq:jac}
\mathrm{C}\mathbb{H(\theta,\alpha)}=\mathrm{CR(\alpha)}\mathrm{CR_y(-\theta)}\mathrm{CZ}\mathrm{CR_y(\theta)}\mathrm{CR(-\alpha)}. \end{equation} The controls of the gates synthesized for the off-diagonal part can be eliminated, since the gates on the two sides of the middle CZ gate are Hermitian conjugates of each other. Using this optimization,~(\ref{eq:jac}) can be written as: \begin{equation} \label{eq:jac1}
\mathrm{C}\mathbb{H(\theta,\alpha)}=(I \otimes A)\mathrm{CZ} (I \otimes B), \end{equation} where $A=R(\alpha)R_y(-\theta)$ and $B=R_y(\theta)R(-\alpha)$.
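The decomposition (\ref{eq:jac1}) can be checked numerically; the short sketch below (ours) uses the rotation convention of (\ref{eq2}) for $R_y$ and the phase-shift gate $R(\alpha)$, with the control on the first qubit:
\begin{verbatim}
# Numerical check (ours) of (eq:jac1); Ry follows the G(theta) convention
# of (eq2), Rph is the phase-shift gate R(alpha).
import numpy as np

def Ry(t):
    c, s = np.cos(t / 2.0), np.sin(t / 2.0)
    return np.array([[c, s], [-s, c]], dtype=complex)

def Rph(a):
    return np.diag([1.0, np.exp(1j * a)])

theta, alpha = 0.7, 1.3                # arbitrary test angles
H = np.array([[np.cos(theta), np.exp(-1j * alpha) * np.sin(theta)],
              [np.exp(1j * alpha) * np.sin(theta), -np.cos(theta)]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)
A = Rph(alpha) @ Ry(-theta)            # gate A of (eq:jac1)
B = Ry(theta) @ Rph(-alpha)            # gate B of (eq:jac1)
CH = np.block([[I2, Z2], [Z2, H]])     # controlled-H(theta, alpha)

assert np.allclose(np.kron(I2, A) @ CZ @ np.kron(I2, B), CH)
\end{verbatim}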
The C$\mathbb{H(\theta,\alpha)}$ decomposition using the JBHS method is shown in Figure~\ref{fig7}. \begin{figure}
\caption{The synthesized circuit of C$\mathbb{H(\theta,\alpha)}$ using JBHS method.}
\label{fig7}
\end{figure}
Using the approach presented in~\cite[Lemma 5.5]{Barenco95}, the following decomposition is obtained for C$\mathbb{H(\theta,\alpha)}$ gates: \begin{equation}
\mathrm{C}\mathbb{H(\theta,\alpha)}=(I \otimes P) \mathrm{CNOT} (I \otimes Q), \end{equation} where $P=R_z(\alpha)R_y(-\theta+\frac{\pi }{2})$ and $Q=R_y(\theta-\frac{\pi }{2})R_z(-\alpha)$. The C$\mathbb{H(\theta,\alpha)}$ decomposition using~\cite[Lemma 5.5]{Barenco95} is shown in Figure~\ref{fig8}.
\begin{figure}
\caption{The synthesized circuit of C$\mathbb{H(\theta,\alpha)}$ using the method of~\cite[Lemma 5.5]{Barenco95}.}
\label{fig8}
\end{figure}
Decomposition of these gates using the QSD method~\cite{Shende06} is also calculated. In QSD, each C$\mathbb{H(\theta,\alpha)}$ gate is considered as a single-select qubit quantum multiplexer and is synthesized as follows: \begin{equation} \label{eq:qsd}
\mathrm{C}\mathbb{H(\theta,\alpha)}=(I \otimes V)(D\oplus D^\dagger) (I \otimes W), \end{equation} where $V=R(\alpha)R_y(-\theta)$, $W=SR_y(\theta)R(-\alpha)\mathbb{H}(\theta,\alpha)$ and $D=S$. The middle diagonal gate in~(\ref{eq:qsd}) is indeed a multiplexed $R_z$ gate whose target qubit is the first one. Applying the QSD to synthesize $\mathrm{C\mathbb{H}(\theta,\alpha)}$ gates produces a circuit structure as shown in Figure~\ref{fig9}.
\begin{figure}
\caption{The synthesized circuit of C$\mathbb{H(\theta,\alpha)}$ using the method of~\cite{Shende06}, QSD.}
\label{fig9}
\end{figure}
Table \ref{tab:res} shows the obtained decompositions to synthesize $\mathrm{C\mathbb{H}(\theta,\alpha)}$ gates using the proposed JBHS,~\cite[Lemma 5.5]{Barenco95} and the QSD methods.
Each CZ gate can be implemented using a CNOT gate at the cost of inserting two single-qubit rotation gates around $y$ axis as shown in Figure~\ref{fig:CZCNOT}.
\begin{figure}
\caption{Circuit equivalence of CZ and CNOT gates.}
\label{fig:CZCNOT}
\end{figure}
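This equivalence is also easy to verify numerically; a minimal sketch (ours, with the same $R_y$ convention as above, cf. the CZ row of Table \ref{tab:res}) is given below:
\begin{verbatim}
# Sanity check (ours) of the CZ/CNOT equivalence of Figure fig:CZCNOT.
import numpy as np

def Ry(t):
    c, s = np.cos(t / 2.0), np.sin(t / 2.0)
    return np.array([[c, s], [-s, c]])

CZ = np.diag([1.0, 1.0, 1.0, -1.0])
CNOT = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.],
                 [0., 0., 0., 1.], [0., 0., 1., 0.]])
lhs = (np.kron(np.eye(2), Ry(np.pi / 2)) @ CNOT
       @ np.kron(np.eye(2), Ry(-np.pi / 2)))
assert np.allclose(lhs, CZ)
\end{verbatim}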
Table \ref{tab:res2} compares the number of produced gates. Although the proposed JBHS method and the method of~\cite{Barenco95} produce the same number of elementary gates, the JBHS approach directly synthesizes $\mathrm{C\mathbb{H}(\theta,\alpha)}$ gates into a library that consists of CZ and single-qubit rotation gates around the $y$ and $z$ axes. The CZ gate is of interest as it is supported as a primitive operation by four quantum physical machine descriptions (PMDs), while the CNOT gate is supported by only two PMDs~\cite{ftqls2014}. CZ gates are also useful in producing a parallel structure for quantum circuits~\cite{extended} using the one-way quantum computation model~\cite{par}, as the input quantum circuits to that procedure are assumed to contain CZ gates.
As can be seen, the proposed JBHS method and the method of~\cite{Barenco95} produce almost the same number of gates, although of different types. Moreover, in comparison with the QSD method, and considering the same kinds of single- and two-qubit gates, the proposed method produces fewer gates.
\renewcommand{\arraystretch}{1.3}
\begin{table}[t] \tbl{Synthesis comparison of $\mathrm{C\mathbb{H}(\theta,\alpha)}$ gates. $\mathrm{CNOT}^{2,1}$ denotes a CNOT gate with the control on the second and target on the first qubit.\label{tab:res}}{
\small
\begin{tabular}{|l|l|l|} \hline Gate &Method &Synthesized circuit \\ \hline \hline \multirow{3}{*}{CH} &JBHS &$(I\otimes R_y(\frac{-\pi}{4}))$CZ$(I\otimes R_y(\frac{\pi}{4}))$ \\* \cline{2-3}
&\cite[Lemma 5.5]{Barenco95} &$(I\otimes R_y(\frac{\pi}{4}))$CNOT$(I\otimes R_y(\frac{-\pi}{4}))$ \\* \cline{2-3}
&QSD \cite{Shende06} &$(I\otimes R_y(\frac{-\pi}{4}))$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{-3\pi}{2})\otimes I)$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{3\pi}{2})\otimes(SR_y(\frac{\pi}{4})H))$ \\* \cline{2-3} \hline \hline \multirow{3}{*}{CY} &JBHS &$(I\otimes SR_y(\frac{-\pi}{2}))$CZ$(I\otimes R_y(\frac{\pi}{2})S^{\dag})$ \\* \cline{2-3}
&\cite[Lemma 5.5]{Barenco95} &$(I\otimes R_z(\frac{\pi}{2}))$CNOT$(I\otimes R_z(\frac{-\pi}{2}))$ \\* \cline{2-3}
&QSD \cite{Shende06} &$(I\otimes SR_y(\frac{-\pi}{2}))$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{-3\pi}{2})\otimes I)$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{3\pi}{2})\otimes(SR_y(\frac{\pi}{2})S^{\dag}Y))$ \\* \cline{2-3} \hline \hline \multirow{3}{*}{CNOT} &JBHS &$(I\otimes R_y(\frac{-\pi}{2}))$CZ$(I\otimes R_y(\frac{\pi}{2}))$ \\* \cline{2-3}
&\cite[Lemma 5.5]{Barenco95} &CNOT \\* \cline{2-3}
&QSD \cite{Shende06} &$(I\otimes R_y(\frac{\pi}{2}))$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{-3\pi}{2})\otimes I)$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{3\pi}{2})\otimes(SR_y(\frac{\pi}{2})X))$ \\* \cline{2-3} \hline \hline \multirow{3}{*}{CZ} &JBHS &CZ \\* \cline{2-3}
&\cite[Lemma 5.5]{Barenco95} &$(I\otimes R_y(\frac{\pi}{2}))$CNOT$(I\otimes R_y(\frac{-\pi}{2}))$ \\* \cline{2-3}
&QSD \cite{Shende06} &$\mathrm{CNOT}^{2,1}$$(R_z(\frac{-3\pi}{2})\otimes I)$$\mathrm{CNOT}^{2,1}$$(R_z(\frac{3\pi}{2})\otimes (SZ))$ \\* \cline{2-3}
\hline \end{tabular}} \end{table}
\renewcommand{\arraystretch}{1.3}
\begin{table}[t] \tbl{Comparison of the numbers of produced CNOT and single-qubit rotation gates around the $y$ and $z$ axes, and of CZ and single-qubit rotation gates around the $y$ and $z$ axes, after the synthesis of $\mathrm{C\mathbb{H}(\theta,\alpha)}$ gates. The JBHS method directly produces CZ gates, and the methods of \cite{Barenco95} and \cite{Shende06} directly produce CNOT gates.\label{tab:res2}}{
\small
\begin{tabular}{|l|l|c|c||c|c|} \hline Gate &Method &\#(CNOT)&\#($R_y$,$R_z$) &\#(CZ) &\#($R_y$,$R_z$) \\ \hline \hline \multirow{3}{*}{CH} &JBHS &1 &2 &1 &2 \\* \cline{2-6}
&\cite[Lemma 5.5]{Barenco95} &1 &2 &1 &2 \\* \cline{2-6}
&QSD \cite{Shende06} &2 &6 &2 &10 \\* \cline{2-6} \hline \hline \multirow{3}{*}{CY} &JBHS &1 &2 &1 &4 \\* \cline{2-6}
&\cite[Lemma 5.5]{Barenco95} &1 &2 &1 &4 \\* \cline{2-6}
&QSD \cite{Shende06} &2 &7 &2 &11 \\* \cline{2-6} \hline \hline \multirow{3}{*}{CNOT} &JBHS &1 &0 &1 &2 \\* \cline{2-6}
&\cite[Lemma 5.5]{Barenco95} &1 &0 &1 &2 \\* \cline{2-6}
&QSD \cite{Shende06} &2 &6 &2 &10 \\* \cline{2-6} \hline \hline \multirow{3}{*}{CZ} &JBHS &1 &2 &1 &0 \\* \cline{2-6}
&\cite[Lemma 5.5]{Barenco95} &1 &2 &1 &0 \\* \cline{2-6}
&QSD \cite{Shende06} &2 &4 &2 &8 \\* \cline{2-6}
\hline \end{tabular}} \end{table}
Proposition 3 shows how applying the JBHS method on multiple-control $\mathbb{H}(2)$ gates can lead to an implementation which requires a linear number of elementary gates in terms of circuit lines.
\textbf{Proposition 3.} Using one auxiliary qubit with an arbitrary state, any multiple-control $\mathbb{H}(2)$ gate on $n$ qubits can be decomposed into $O(n)$ elementary gates using the proposed JBHS method.
\begin{proof} If $\mathbb{H}(2)$ is the $I$ or $-I$ gate, then the JBHS method will produce the $I^{\otimes {n}}$ or $-I^{\otimes {n}}$ gate, which requires no elementary gates to be implemented. Otherwise, each specific $\mathbb{H}(2)$ quantum gate can be written as $\mathbb{H}(\theta,\alpha)$ using~(\ref{eq:h2}). Applying the JBHS method to multiple-control $\mathbb{H}(2)$ gates will lead to a circuit structure similar to Figure~\ref{fig:multiple}. The middle multiple-control $Z$ gate can be decomposed into a multiple-control NOT gate at the cost of inserting two rotation gates around the $y$ axis (Figure~\ref{fig:CZCNOT}). This can in turn be decomposed into $O(n)$ elementary gates using one auxiliary qubit with an arbitrary state by the approach presented in~\cite{maslov2008}. \end{proof}
\renewcommand{\arraystretch}{1.3}
\begin{table}[t] \tbl{Comparison of the number of produced CZ and single-qubit (1-qu) gates for decomposing $C^{n-2}U$ gates where $U$ is a single-qubit Hermitian gate.\label{tab:res3}}{
\scriptsize
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
\multirow{3}{*}{Method} &\multicolumn{8}{c|}{Number of qubits} \\
&\multicolumn{2}{c|}{7} &\multicolumn{2}{c|}{8} &\multicolumn{2}{c|}{9} &\multicolumn{2}{c|}{$n$} \\* \cline{2-9}
&\#CZ &\#1-qu &\#CZ &\#1-qu &\#CZ &\#1-qu &\#CZ &\#1-qu \\ \hline JBHS &\textbf{84} &\textbf{98} &\textbf{108} &\textbf{122} &\textbf{168} &\textbf{146} &\textbf{24$n$-48} &\textbf{24$n$-70} \\ \hline \cite[Corollary 7.12]{Barenco95} &122 &124 &170 &172 &218 &220 &48$n$-214 &48$n$-212 \\ \hline \end{tabular}} \end{table}
\begin{figure}
\caption{The synthesized circuit of multiple-control $\mathbb{H}(\theta,\alpha)$ gate on $n$ qubits.}
\label{fig:multiple}
\end{figure}
It should be noted that an arbitrary multiple-control $U$ gate can also be implemented using a linear number of elementary gates and one auxiliary qubit by~\cite[Corollary 7.12]{Barenco95}. However, that auxiliary qubit must be initially fixed in the state $|0\rangle$. Auxiliary qubits with an arbitrary state, in contrast to qubits with fixed states, can be employed in the rest of the circuit for other computations. In addition, Table \ref{tab:res3} compares the gate counts produced by the proposed JBHS approach and the approach of~\cite[Corollary 7.12]{Barenco95}. The synthesized circuits resulting from the two approaches, one with an auxiliary qubit in an arbitrary state and the other with the auxiliary qubit fixed to $|0\rangle$, are considered. To do this, the results of~\cite[Corollary 7.4]{Barenco95} and \cite{Shende09} are used to decompose the produced multiple-control NOT gates into CZ and single-qubit gates.
As shown in the table, the numbers of both CZ and single-qubit gates improve upon the results of~\cite[Corollary 7.12]{Barenco95}.
As some examples, decomposition results of controlled-$Y$ and controlled-Hadamard gates are shown in (\ref{eq-cn}) and (\ref{eq-ch}), respectively. The synthesized circuits of multiple-control $Y$ and multiple-control Hadamard gates on $n$ qubits are illustrated in Figures \ref{fig5} and \ref{fig6}, respectively.
\begin{figure}
\caption{The synthesized circuit of multiple-control $Y$ gate on $n$ qubits.}
\label{fig5}
\end{figure}
\begin{figure}
\caption{The synthesized circuit of multiple-control Hadamard gate on $n$ qubits.}
\label{fig6}
\end{figure}
\begin{equation}\label{eq-cn}
\small \begin{array}{l}
\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\ \end{array}} \right] =
\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & i \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & {0.7071} & {-0.7071} \\
0 & 0 & {0.7071} & {0.7071} \\ \end{array}} \right] \left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & { - 1} \\ \end{array}} \right] \left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & {0.7071} & {0.7071} \\
0 & 0 & {-0.7071} & {0.7071} \\ \end{array}} \right] \left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -i \\ \end{array}} \right] \\
\end{array} \end{equation} \begin{equation}\label{eq-ch}
\small \begin{array}{l}
\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & {2^{ - 0.5} } & {2^{ - 0.5} } \\
0 & 0 & {2^{ - 0.5} } & { - 2^{ - 0.5} } \\ \end{array}} \right] =
\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & { 0.9239} & {-0.3827} \\
0 & 0 & { 0.3827} & { 0.9239} \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & { - 1} \\ \end{array}} \right]\left[ {\begin{array}{*{20}c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & { 0.9239} & { 0.3827} \\
0 & 0 & {-0.3827} & { 0.9239} \\ \end{array}} \right] \\
\end{array} \end{equation}
\section{Conclusions and Future Works}\label{sec:Conc} The problem of quantum-logic synthesis of Hermitian quantum gates was addressed in this paper. The Jacobi-based synthesis approach, JBHS, was introduced, which uses the Jacobi method to decompose a given matrix into a set of two-level matrices and a middle diagonal Hermitian matrix. The quantum-gate equivalence of this matrix decomposition was discussed, and the structure of the circuit and its possible optimizations were described.
Finally, the results of applying the JBHS method on multiple-control $\mathbb{H}(2)$ gates were presented to demonstrate how the proposed method can synthesize these gates using a linear number of elementary gates in terms of circuit lines, with the aid of one auxiliary qubit in an arbitrary state.
Some further improvements can be applied to the proposed approach. As future work, we are considering how to find the best order for zeroing off-diagonal elements during the JBHS method in order to reduce the number of produced elementary gates.
\end{document}
\begin{document}
\title{Stability Analysis of Trajectories on Manifolds with Applications to Observer and Controller Design}
\author{Dongjun Wu$^{1}$, Bowen Yi$^{2}$ and Anders
Rantzer$^1$
\thanks{*This project has received funding from the European Research
Council (ERC) under the European Union's Horizon 2020 research
and innovation programme under grant agreement No 834142
(ScalableControl).}
\thanks{$^{1}$ D. Wu and A. Rantzer are with the Department of Automatic
Control, Lund
University, Box 118, SE-221 00 Lund, Sweden {\tt\small
[email protected]}, {\tt\small [email protected]}.}
\thanks{$^{2}$ B. Yi is with Robotics Institute, University of
Technology Sydney, Sydney, NSW 2006, Australia {\tt \small [email protected]}.} } \maketitle \begin{abstract} This paper examines the local exponential stability (LES) of trajectories for nonlinear systems on Riemannian manifolds. We present necessary and sufficient conditions for LES of a trajectory on a Riemannian manifold by analyzing the complete lift of the system along the given trajectory. These conditions are coordinate-free and reveal fundamental relationships between exponential stability and incremental stability in a local sense. We then apply these results to design tracking controllers and observers for Euler-Lagrange systems on manifolds; a notable advantage of our design is that it visibly reveals the effect of curvature on system dynamics and hence suggests compensation terms in the controller and observer. Additionally, we revisit some well-known intrinsic observer problems using our proposed method, which largely simplifies the analysis compared to existing results.
\end{abstract}
\section{Introduction} \label{sec:intro}
Many physical systems are naturally modelled on Riemannian manifolds. Perhaps the most important example is the class of mechanical systems whose configuration spaces are Riemannian manifolds rather than Euclidean spaces \cite{bullo2019geometric}. Another well-known example appears in quantum systems \cite{d2007introduction}, in which the system state lives on a Lie group \cite{jurdjevic1972control}.
It is well known that the local stability of equilibria for systems whose state space is a Riemannian manifold can be analyzed via linearization in local coordinates---similar to the case in Euclidean space---known as the Lyapunov indirect method. In many practically important control tasks, we are interested in the stability of \emph{a particular solution} $X(\cdot)$, a problem which arises widely in observer design, trajectory tracking \cite{bullo1999tracking}, orbital stabilization \cite{yi2020orbital}, and synchronization \cite{andrieu2016transverse}. In Euclidean space, these tasks, or equivalently the stability of a solution $X(\cdot)$, may be addressed by introducing an error variable and then studying the error dynamics, which is usually a nonlinear time-varying system. In particular, the local exponential stability (LES) of $X(\cdot)$ for a given nonlinear system can be characterized by the linearization of its error dynamics near the trajectory.
A similar problem arises in contraction and incremental stability analysis \cite{lohmiller1998contraction,forni2013differential}, in which we are interested in the attraction of any two trajectories to each other, rather than a particular one $X(\cdot)$. The basic idea is to explore the stability of a linearized dynamics, regarded as first-order approximation, to obtain the incremental stability of the given system.
Indeed, studying the stability of a \emph{particular} solution via first-order approximation has already been used; from the perspective of incremental stability, this is known as partial (or virtual) contraction \cite{wang2005partial,forni2013differential}. As discussed above, some excitation conditions on the given trajectory may be needed to carry out the stability analysis. A successful application may be found in \cite{bonnabel2014contraction} for the stability of the extended Kalman filter (EKF).
For systems evolving on Riemannian manifolds, however, the stability analysis of a solution $X(\cdot)$ is much more challenging. The difficulty arises from two aspects. On one hand, the ``error dynamics'' for such a case is more involved---there is, indeed, no generally preferred definition of tracking (or observation, synchronization) errors---and the induced Riemannian distance on manifolds can hardly be used to derive error dynamics directly. In practice, one has to choose an error vector according to the structure of the manifold; see \cite{bullo1999tracking,lageman2009gradient,mahony2008nonlinear} for examples. On the other hand, the alternative method, via first-order approximation (or partial contraction), is nontrivial to apply to Riemannian manifolds, since it is usually a daunting task to calculate the differential dynamics on Riemannian manifolds, and some complicated calculations of parallel transport are involved. Overcoming these two major challenges is the main motivation of the paper.
To address this, we provide in this paper an alternative way to study LES of trajectories on Riemannian manifolds; namely, LES will be characterized by the stability of the \emph{complete lift} of the system along the trajectory, thereby removing the need to derive error dynamics. The complete lift, or tangent lift, has been used to study various control problems, see, for example, \cite{cortes2005characterization, van2015geometric, bullo2019geometric, Wu2021, bullo2007reduction}. Among the listed references, the most relevant works to ours are \cite{bullo2007reduction, Wu2021}. In \cite{bullo2007reduction} the authors have remarked that the complete lift can be seen as a linearization procedure. However, verifying the stability of the complete lift system is challenging since it is a system living in the tangent bundle, and thus how to effectively use the aforementioned characterization to guide controller and observer design is an open question. We address this question in this paper.
The main contributions of the paper are threefold.
\begin{itemize}
\item[-] Establish the relationship between LES of a solution
and the stability of the complete lift along this solution on
a Riemannian manifold, which can be seen as the Lyapunov indirect
method on manifolds. Then show that LES of a
solution is equivalent to local contraction near the solution
$X(\cdot)$.
\item[-] Propose an alternative approach for analysis of LES based on the
characterization of complete lift system. This novel approach
obviates the calculation of the complete lift and hence facilitates the
analysis of local exponential stability and contraction. We
demonstrate the efficiency of the proposed methods by revisiting
some well-known research problems.
\item[-] Two main types of application problems are studied, namely,
controller and observer design, especially for mechanical
systems on manifolds. These results largely simplify the
analysis in some existing works.
In particular, the proposed method is quite efficient for
analyzing a class of systems called Killing systems.
\end{itemize}
{\em Notation.} Throughout this paper we use rather standard notations from Riemannian geometry \cite{carmo1992riemannian, petersen2006riemannian}. Denote $M$ the Riemannian manifold of dimension $n$, $\langle \cdot, \cdot\rangle$ the metric, $\nabla$ the Levi-Civita connection, $R(X,Y)Z$ the Riemannian curvature, $\pi:TM \rightarrow M$ the natural projection of the tangent bundle. We use $\nabla$ and $\grad(\cdot)$ interchangeably to represent the gradient operator. Let $\Hess(\cdot)$ be the Hessian, $\exp(\cdot)$
the exponential map, $P_x^y: T_x M \rightarrow T_y M$ the parallel transport from $T_x M$ to $T_y M$, $d(x,y)$ the Riemannian distance between $x$ and $y$, and $B_c(x)=\{\xi\in M | d(\xi,x)\le c \}$ the Riemannian ball. Let $\phi^f(t;t_0,x_0)$ be the flow of the equation $\dot{x}=f(x,t)$; we sometimes write $\phi(\cdot)$ when clear from the context. The notation $L_f Y$ stands for the Lie derivative of $Y$ along $f$.
\section{Local Exponential Stability on Riemannian Manifolds} \label{sec:loc-exp}
\subsection{Theory: LES and Complete Lift} \label{subsec:LES et Clift} Consider a system \begin{equation} \label{sys:NL-Rie}
\dot{x} = f(x,t) \end{equation} with the system state $x$ on the Riemannian manifold $M$, and $X(\cdot)$ a particular solution, {\it i.e.}, $\dot{X}(t)= f(X(t),t)$ from the initial condition $X(t_0) =X_0 \in M$. We study the local exponential stability of the solution $X(t)$. Some definitions are recalled below.
\begin{definition}\label{def:stab of traj}\rm
The solution $X(\cdot)$ of the system (\ref{sys:NL-Rie}) is
\emph{locally exponentially stable} (LES) if there exist positive
constants $c, K$ and $\lambda$, all independent of $t_0$, such that
\[
d(x(t), X(t)) \le K d(x(t_0), X(t_0)) e^{-\lambda
(t-t_0)}, \;
\]
for all $t \ge t_0 \ge 0$ and $x(t_0)$ satisfying $d(x(t_0),
X(t_0))<c$. \end{definition} \begin{remark} \label{rmk:1}\rm For the case that $X(t)$ is a trivial solution at an equilibrium, {\em i.e.}, $X(t) \equiv X_0,~\forall t\ge t_0 $, Definition \ref{def:stab of traj} coincides with the standard definition of LES of an equilibrium. We should also notice the peculiarity of this definition---it may happen that the union of LES solutions forms a dense set.
For example, every solution of $\dot{x}=Ax$ is LES when $A$ is Hurwitz. \end{remark}
We recall the definition of complete lift of a vector field, see \cite{yano1973tangent, crampin1986applicable} for more detailed discussions.
\begin{definition}[Complete Lift] \rm
\label{def: CLift}Consider the time-varying vector field $f(x,t)$. Given
a point $v\in TM$, let $\sigma(t,s)$ be the integral curve of
$f$ with $\sigma(s,s)=\pi(v)$. Let $V(t)$ be the vector field along
$\sigma$ obtained by Lie transport of $v$ by $f$. Then $(\sigma,V)$
defines a curve in $TM$ through $v$. For every $t\geq s$, the
\emph{complete lift} of $f$ into $TTM$ is defined at $v$ as
the tangent vector to the curve $(\sigma,V)$ at $t=s$. We denote this
vector field by $\tilde{f}(v,t)$ , for $v\in TM$. \end{definition}
\begin{definition} \label{defsys:lift}\rm
Given the system (\ref{sys:NL-Rie}), and a solution $X(t)$. Define the
complete lift of the system (\ref{sys:NL-Rie}) along $X(t)$ as
\begin{equation} \label{eqdef: clift}
\dot{v} = \tilde{f}(v, t), \; v(t) \in T_{X(t)}M
\end{equation}where $\tilde{f}$ is the complete lift of $f$ as in
Definition \ref{def: CLift}. \end{definition}
The most important property of the complete lift system is {\em linearity} at a fixed fibre. We refer the reader to \cite{Wu2021} for coordinate expression of (\ref{eqdef: clift}). From this definition, one can easily verify that the solution to (\ref{eqdef: clift}), {\em i.e.}, $v(t)$ has the property that $\pi v(t) = X(t)$. Hence we say that (\ref{eqdef: clift}) defines a dynamical system along the particular solution $X(t)$.
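For intuition, we note (as a sketch in a local coordinate chart; see \cite{Wu2021} for the precise expression) that the complete lift system takes the familiar variational form
\begin{equation*}
\dot{x}^i = f^i(x,t), \qquad \dot{v}^i = \frac{\partial f^i}{\partial x^j}(x,t)\, v^j,
\end{equation*}
so that in Euclidean space it reduces to the standard linearization of (\ref{sys:NL-Rie}) along $X(\cdot)$.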
The following simple characterization is the theoretical basis of this paper. It can be viewed as an analogue of the Lyapunov indirect method on Riemannian manifolds.
\begin{theorem}\label{thm:lin}\rm Assume the system \eqref{sys:NL-Rie} is forward complete for $t\ge 0$. If the solution $X(t)$ is LES, then the complete
lift of the system \eqref{sys:NL-Rie}
along $X(\cdot)$ is exponentially stable. If the solution $X(\cdot)$ is bounded, the
converse is also true. \end{theorem} \begin{proof}
($\Longrightarrow$) Assume that the solution $X(t)$ is LES. Denote the minimizing normalized (\emph{i.e.} with unit speed) geodesic joining $X(t_0)$ to $x(t_0)$ as $\gamma:[0, \hat{s}]\rightarrow M$, with $\gamma(0) = X(t_0), \quad \gamma(\hat s) = x(t_0)$ and $0\leq\hat{s}=d(X(t_0),x(t_0))$. Let $ v_0\in TM $ with $\pi(v_0)=X(t_0)$ and
$v_0=\gamma ^{\prime}(0)$, and $v(t)$ be the solution to the complete
lift system (\ref{eqdef: clift}). Then
\begin{equation}
\hat{s}\left\vert v(t)\right\vert =d\left(
\exp_{X(t)}\left(
\hat{s}v(t)\right) ,X(t) \right) , \label{eq:1}
\end{equation}
where $\exp_{x}:TM\rightarrow M$ is the exponential map, by choosing $\hat{s}$ sufficiently small such that $\exp$ is defined. Using the
metric property of $d$, we have \begin{align}
&d\left( \exp_{X(t)}\left( \hat{s}v(t)\right), X(t) \right) \notag \\
\leq &d\left( \exp_{X(t)}\left( \hat{s}v(t)\right)
,x(t)\right) +d(x(t),X(t)) \\
\leq &d\left( \exp_{X(t)}\left( \hat{s}v(t)\right)
,x(t)\right) +K\hat{s}e^{-\lambda(t-t_{0})}, \label{eq:2} \end{align} where the second inequality follows from Definition \ref{def:stab of traj}. Fixing $t$ at the moment and invoking (\ref {eq:1}) and (\ref{eq:2}) we get \begin{equation} \begin{aligned} \left\vert v(t)\right\vert & \leq \kappa(\hat s) +Ke^{-\lambda(t-t_{0})} \\ \kappa(\hat s) & := \frac{d\left( \exp_{X(t)}\left( \hat{s}v(t)\right) ,x(t)\right) }{\hat{s}} . \label{eq:3} \end{aligned} \end{equation} Note that more precisely $\kappa$ is a function of both $t$ and $\hat{s}$. But omitting the $t$ argument does not affect the following analysis.
Now we need to show the term $\kappa(\hat s)$ is of order $O(\hat{s})$. Since $x(t_0)=\gamma(\hat{s})$, this term can be rewritten as \begin{equation*} \kappa(s)=\frac{d\left( \exp_{X(t)}\left( sv(t)\right) ,\phi(t;t_{0},\gamma(s))\right) }{s} \end{equation*} where we have replaced $\hat{s}$ by $s$. To this end, we consider two functions $ \alpha_{1}(s) =\exp_{X(t)}\left( sv(t)\right)$, $\alpha_{2}(s) =\phi(t;t_{0},\gamma(s))$. Similarly, we have omitted the $t$ argument which does not affect the proof. We have $\alpha_{1}(0)=\alpha_{2}(0)=X(t)$ and $\alpha_{1}^{\prime}(0)= \alpha_{2}^{\prime}(0)=v(t).$ Thus \begin{equation*}
\kappa(s)=\frac{1}{s}d(\alpha_{1}(s),\alpha_{2}(s))=O(s), \end{equation*} where we have used Lemma \ref{lem: dist} given in Appendix. Now letting $\hat{s}\rightarrow0$ in ( \ref{eq:3}) and noticing that the geodesic is unit speed, we have \[
|v(t)| \le K|v(t_0)|e^{-\lambda (t-t_0)}, \]for any $v(t_0) \in T_{X(t_0)} M$.
($\Longleftarrow$) This is a consequence of Proposition \ref{prop:LES in W} (see Section \ref{subsec:contra}): if the complete lift along $X(\cdot)$ is ES, then the proof of Proposition \ref{prop:LES in W} shows that the system is contractive on a bounded set $B_c$, which implies the LES of $X(\cdot)$.
\end{proof}
\begin{remark}
Theorem \ref{thm:lin} provides a characterization for LES of trajectories
on manifolds via complete lift. Unfortunately, the original form of this
theoretical result lacks practical utility for applications.
The main reason is that the complete lift on manifolds is
difficult to obtain and, quite often, its calculation relies on
local coordinates, which conflicts with the purpose
(coordinate-free design)
of this paper. To circumvent this issue, we propose an
alternative approach in Section \ref{subsec:use} based on Theorem
\ref{thm:lin}, which will be much more efficient to use.
But we must emphasize that Theorem \ref{thm:lin} plays the
fundamental role for the rest of the paper.
\end{remark}
From Theorem \ref{thm:lin}, we can derive the following interesting corollary, which says that no unbounded LES solution exists for \emph{autonomous} systems. \begin{corollary}\rm
For a time invariant system $\dot{x} = f(x)$, a LES
solution $X(t)$ should always be bounded and non-periodic. \end{corollary} \begin{proof}
The complete lift of $\dot{x} = f(x)$ is
$
\dot{v} = \frac{\partial f}{\partial x} v, \; v \in T_x \mathbb{R}^n.
$ Clearly, $v=\dot{x}$ is a solution to the complete lift system. Then
by Theorem \ref{thm:lin},
$
|\dot{X}(t)| \le k |\dot{X}(0)| e^{-\lambda t},
$ hence $X(t)$ cannot be periodic. Furthermore, $
|X(t)| \le |X(0)| + \int_0^t k |\dot{X}(0)| e^{-\lambda s} ds
= |X(0)| + \frac{k|\dot{X}(0)|}{\lambda} (1 -
e^{-\lambda t})
< |X(0)| + \frac{k|\dot{X}(0)|}{\lambda}.
$
\end{proof}
In \cite[Lemma 1]{giaccagli2020sufficient}, the authors obtain a similar result for autonomous systems, {\em i.e.}, there is a unique attractive equilibrium in an invariant set, in which the system is incrementally exponentially stable.
\subsection{Contraction and LES} \label{subsec:contra} Contraction theory has become a powerful tool for analysis and design of control systems, see \cite{lohmiller1998contraction, forni2013differential, andrieu2016transverse, ruffer2013convergent, manchester2017control, aminzare2014contraction} and the references therein. In Section \ref{subsec:LES et Clift}, we have studied LES of solutions to the system (\ref{sys:NL-Rie}). In this subsection, we will show the close connection between the proposed result and contraction analysis on manifolds \cite{simpson2014contraction, Wu2021}. The reader may refer to \cite{ruffer2013convergent, angeli2002lyapunov} for the case on Euclidean space.
We say that the system (\ref{sys:NL-Rie}) is contractive on a set $C$ if there exist positive constants $K, \lambda$, independent of $t_0$ such that \begin{equation}
d(\phi(t;t_0, x_1), \phi(t;t_0, x_2)) \le K d(x_1, x_2) e^{-\lambda
(t-t_0)}, \end{equation} for all $x_1, x_2 \in C, t\ge t_0 \ge 0$. For technical ease, we have slightly modified the definition of contraction by allowing the set $C$ to be not forward invariant. Based on Theorem \ref{thm:lin}, we have the following proposition, which can be viewed as a bridge from LES to local contraction.
\begin{proposition} \label{prop:LES in W}\rm
A bounded solution $X(t)$ to the system (\ref{sys:NL-Rie}) is LES if and
only if there exists a constant $c$ such that the system
\eqref{sys:NL-Rie} is contractive on a bounded set $B_c$ whose interior
contains $X(\cdot)$.
\end{proposition} \begin{proof}
Assume that $X(t)$ is LES. Then the complete lift system along $X(t)$
is exponentially stable (ES) by Theorem \ref{thm:lin}. By a converse Lyapunov theorem, there exists a $\mathcal{C}^1$
function $V(t,v)$, quadratic in $v$ satisfying
\begin{equation}\label{eq:5}
c_1|v|^2 \le V(t,v) \le c_2 |v|^2, \ \forall v \in T_{X(t)}M
\end{equation}
and
\begin{equation}
\dot{V}(t,v) = \frac{\partial V}{\partial t}(t,v) + L_{\tilde{f}} V(t,v) \le
-c_3 |v|^2, \ \forall v \in T_{X(t)}M,
\end{equation}for all $t\ge t_0 \ge 0$ and three positive constants
$c_1,c_2,c_3$.
Due to the smoothness of $V$, we have
\[
|\dot{V}(t,P_{X(t)}^{x(t)}v) - \dot{V}(t,v)| \le c_4
d_{TM}(P_{X(t)}^{x(t)}v, v) = c_4 d_M(x(t),X(t)).
\]
Thus
\begin{align*}
\sup_{\tiny{\substack{|w|=1, \\ w\in T_{x(t)}M}} }
\dot{V}(t,w)
&=
\sup_{\tiny{\substack{|v|=1, \\ v\in T_{X(t)}M}}}
\dot{V}(t,P_{X(t)}^{x(t)}v) \\
&=
\sup_{\tiny{\substack{|v|=1, \\ v\in T_{X(t)}M}}}
\dot{V}(t,v) +
\dot{V}(t,P_{X(t)}^{x(t)}v) - \dot{V}(t,v) \\
& \le -c_3 + c_4 d(x(t),X(t)) < -c_5 <0,
\end{align*}
for $c$ small enough such that $d(x(t),X(t))$ will be small
enough for all $t\ge t_0$ when $x(t_0) \in B_{c}({X(t_0)})$. Since
$\dot{V}$ is quadratic in $v$ (due to the linearity of the complete lift
system and that $V(t,v)$ is quadratic in $v$), this implies
\[
\dot{V}(t,v) \le -c_5 |v|^2, \ \forall v\in T_{x(t)}M, \ t\ge t_0
\]
for all $x(t_0) \in B_c({X(t_0)})$. Then the
system \eqref{sys:NL-Rie} is contractive on $B_c:= \bigcup_{t_0 \ge 0}
B_c(X(t_0))$, which is bounded since $X(\cdot)$ is (by Theorem 2 of \cite{Wu2021}).
The converse is obvious, and hence the proof is completed. \end{proof}
The following corollary is a straightforward consequence.
\begin{corollary}\rm
Assume that the system (\ref{sys:NL-Rie}) has an equilibrium point
$x_\star \in M$. Then $x_\star$ is LES if and only if there exists an open
neighborhood of $x_\star$ on which the system is contractive. \end{corollary}
In \cite{forni2015differential}, the authors proved a similar result to this corollary for \emph{autonomous} systems in Euclidean space. The paper \cite{ruffer2013convergent} focuses on asymptotic stability and asymptotic contraction, also in Euclidean space.
\subsection{A More Usable Form}\label{subsec:use}
As remarked earlier, Theorem \ref{thm:lin} is not suitable for practical applications due to the difficulty of calculating the complete lift system. In this subsection, we propose a more usable version of Theorem \ref{thm:lin} (still intrinsic) which will make the analysis of LES a routine task.
For reasons that will be clear later, we rename the state $x$ in the system \eqref{sys:NL-Rie} as $q$. Fig. \ref{fig:q} is drawn to illustrate our idea. In Fig. \ref{fig:q}, the solid curve represents a trajectory of the system \eqref{sys:NL-Rie}, say $q:\mathbb{R}_{\ge 0} \to M$, whose velocity vectors are drawn as black arrows, denoted $\dot{q}$. The dashed curves are flows of the initial curve $\gamma: s \mapsto \gamma(s) \in M$. The blue arrows emanating from the curve $q$ are the (transversal) velocities of the dashed curves, denoted as $q'$, or, more precisely, $q' = \frac{\partial q(s,t)}{\partial s} $ for the parameterized curve $(s,t) \mapsto q(s,t)$. We call $q'$ a variation along $q(\cdot)$.
\begin{figure}
\caption{Illustration of $\dot{q}$ and $q'$.}
\label{fig:q}
\end{figure}
Two important observations can be made from the figure: \begin{itemize}
\item[-] By construction, $q'$ is the solution to the complete lift of the
system \eqref{sys:NL-Rie} along the trajectory $q(\cdot)$.
Thanks to this, the Lie bracket $[\dot{q},q']$ vanishes for all
$t\ge t_0$ along $q(\cdot)$.
\footnote{
Recall that $[X,Y] = \left. \frac{d}{dt}\right|_{t_0}
(\phi^X_t)^* Y(t_0)$, thus $[\dot{q},q'] = \left.
\frac{d}{dt}\right|_{t_0} (\phi^f_t)^*
(\phi_t^f)_* q'(t_0) = \left. \frac{d}{dt}
\right|_{t_0} q'(t_0) = 0$.
} \item[-] The map $(s,t) \mapsto q(s,t)$ forms a parameterized surface in
$M$. Then due to the torsion-free property of Levi-Civita
connection, there holds $\frac{D }{dt}
\frac{\partial q}{\partial s} = \frac{D}{ds} \frac{\partial
q}{\partial t} $ (see \eqref{eq:swap-cov} and \cite[Lemma 3.4]{carmo1992riemannian}),
which implies that $\nabla_{\dot{q}} q' = \nabla_{q'}{\dot q} =
\nabla_{q'} {f}$.
\end{itemize}
Now that $q'$ is the solution to the complete lift system, it is sufficient to analyze the dynamics of $q'$. This may seem na\"ive at first thought, and the novelty may appear to be merely notational. However, due to the above two observations, we now have access to rich results in Riemannian geometry. In particular, we will see how LES on a Riemannian manifold is affected by curvature -- the most important ingredient of a Riemannian manifold.
\subsection{Revisit of some existing results}\label{subsec:revisit} \subsubsection{Contraction on Riemannian manifolds \cite{simpson2014contraction}} The following result is obtained in \cite{simpson2014contraction} (the contraction version): \begin{theorem}[\cite{simpson2014contraction}]\label{cor: cov}\rm
Let $q(\cdot)$ be a solution to the system \eqref{sys:NL-Rie}, if
\begin{equation*}
\langle \nabla_{v} f, v \rangle \le -k \langle
v, v \rangle, \; \forall v \in T_{q(t)} M, \ t\ge 0,
\end{equation*}for some positive constant $k$, then the solution $q(t)$
is LES. \end{theorem} The proof of this theorem now simplifies to a few lines: \begin{proof}
It suffices to show the exponential convergence of the squared norm $\langle q',
q' \rangle$. Indeed,
\begin{equation*}
\frac{1}{2} \frac{d}{dt} \langle q', q' \rangle
= \left< \nabla_{\dot{q}} q',q' \right>
= \left< \nabla_{q'} f, q' \right> \le -k \left< q',q' \right>.
\end{equation*} Thus $ \left<q',q'\right>$ converges exponentially. \end{proof}
Notice that we have used the fact that $[q',\dot{q}]=0$.
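As a sanity check, when $M=\mathbb{R}^n$ with the Euclidean metric, $\nabla_{v} f$ reduces to $\frac{\partial f}{\partial x}\, v$, and the condition of Theorem \ref{cor: cov} becomes the classical Demidovich-type condition
\begin{equation*}
v^\top \frac{\partial f}{\partial x}(q,t)\, v \le -k |v|^2 \quad \Longleftrightarrow \quad \frac{\partial f}{\partial x}(q,t) + \frac{\partial f}{\partial x}(q,t)^{\top} \preceq -2k I,
\end{equation*}
which is well known to imply incremental exponential stability.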
\subsubsection{Intrinsic reduced observer \cite{bonnabel2010simple}} The following lemma was among the key results in \cite{bonnabel2010simple}: \begin{lemma}[\cite{bonnabel2010simple}] \label{lem:bonna}
Let $M$ be a smooth Riemannian manifold. Let $P\in M$ be fixed. On the
subspace of $M$ defined by the injectivity radius at $P$, we consider
\begin{equation} \label{sys:bonna1}
\dot{q} = -\frac{1}{2\lambda} \grad d(q,P)^2, \quad \lambda >0.
\end{equation}
If the sectional curvature is non-positive, the dynamics is a
contraction in the sense of \cite{lohmiller1998contraction}, i.e., if
$\delta x$ is a virtual displacement at fixed $t$, we have
\begin{equation} \label{eq:bonna1}
\frac{d}{dt} \left< \delta q, \delta q \right> \le -
\frac{2}{\lambda} \left< \delta q, \delta q \right>.
\end{equation} If the sectional curvature is upper bounded by $A>0$,
then \eqref{eq:bonna1} holds for $d(q,P)< \pi /(4 \sqrt{A})$. \end{lemma}
The proof provided in \cite{bonnabel2010simple} is a bit technical. We now give a simplified proof using the methods developed in this paper and provide a new estimation of the convergence rate.
\begin{lemma} \label{lem:re-bonna}
Let $M$ be a smooth Riemannian manifold whose curvature is upper bounded
by $A\ge 0$. Let $P\in M$ be fixed. Then the dynamics \eqref{sys:bonna1}
is globally contractive if $A=0$, and locally
contractive otherwise, with contraction rate $ \gamma (q) =
\frac{2 \sqrt{A} d(q,P)}{\lambda \tan (\sqrt{A}d(q,P))}$,\footnote{$\gamma (P)$ is
understood as $\lim_{d(q,P)\to 0} \gamma (q) =
\frac{2}{\lambda}$. Notice that $\gamma$ is monotone decreasing and
strictly positive on the interval $[0,\frac{\pi}{2})$} i.e.,
\begin{equation} \label{eq:bonna}
\frac{d}{dt} \left< \delta q, \delta q \right> \le -
\gamma \left< \delta q, \delta q \right>.
\end{equation} \end{lemma} \begin{proof}
Let $F(q) = \frac{1}{2} d(q,P)^2$ and we estimate
\begin{align*}
\frac{d}{dt} \left< q', q' \right>
& = 2 \left< - \frac{1}{\lambda} \nabla_{q'} \nabla F, q' \right> \\
& = -\frac{2}{\lambda} \Hess F (q',q')
\end{align*}where the last equality follows from the definition of
the Hessian operator, see \eqref{eq:Hess}.
The conclusion follows by invoking the comparison theorem for the Hessian of the squared distance (e.g.,
\cite[Theorem 6.6.1]{Jost2017}):
\begin{equation*}
\Hess F \ge \sqrt{A} d(q,P) \cot (\sqrt{A} d(q,
P)) \text{Id}
\end{equation*}for all $q\in \text{inj} (P)$ if $A > 0$ and for all $q
\in M$ if $A=0$. \end{proof}
\begin{remark} The second part of Lemma \ref{lem:bonna} \cite{bonnabel2010simple} seems incorrect: by the Rauch comparison theorem (see \cite[Theorem 6.4.3]{petersen2006riemannian}), for a manifold with sectional curvature lower bounded by $k>0$, there holds $\Hess F \le (1- k F ) g$, where $g$ is the Riemannian metric. Therefore, the contraction rate is strictly less than $\frac{2}{\lambda}$ in any neighborhood of $P$. \end{remark}
\begin{remark}
Since $\Hess F |_P = g$, if the Hessian is continuous at $P$, then
the dynamics \eqref{sys:bonna1} is always locally contractive without
assumptions on curvature, which also implies that $P$ is an LES
equilibrium. \end{remark}
The above method is not limited to studying contraction of distances; in fact, it can easily be adapted to study $k$-contraction \cite{WCS2022} (Hausdorff measures such as area and volume) on Riemannian manifolds. As an example, let us consider the contraction of volume. Suppose that $\{ q'_1 ,\cdots, q'_n \}$ forms a frame at $q$, denote by $ \vol (q_1',\cdots, q'_n)$ the signed volume of the parallelepiped spanned by this frame, and study the change of the volume under the dynamics \eqref{sys:bonna1}: \begin{equation} \begin{aligned}
& \frac{d}{dt} \vol (q'_{1}, q'_{2}, \cdots, q'_{n}) \\
= & -(\dvg \nabla F / \lambda) \vol (q'_{1}, q'_{2}, \cdots, q'_{n}) \\
= & - \frac{\Delta F}{\lambda} \vol (q'_{1}, q'_{2}, \cdots, q'_{n}) \end{aligned} \end{equation}where $\Delta $ is the Laplace-Beltrami operator \cite{Jost2017}. Since $\Delta F = \tr (G^{-1} \Hess F)$, with $G$ the Riemannian metric, we can conclude that the condition in Lemma \ref{lem:re-bonna} implies exponential contraction of volume (on a manifold of non-positive curvature). However, as only the trace of $G^{-1} \Hess F$ needs to be controlled, the non-positive curvature assumption is too restrictive. In fact, $\Delta F = 1 + H(q,P) d(q,P) $, where $H(q,P)$ is the mean curvature; thus the same conclusion can be drawn for manifolds with non-positive mean curvature.
\begin{remark}
From the proof of Lemma \ref{lem:re-bonna} we see that the function $F$
need not be the square distance. It can be replaced by any function
whose Hessian has the required property, as the next example shows. \end{remark}
\subsubsection{Filtering on ${\it SO}(3)$} Consider first the attitude control problem \begin{equation} \label{sys:so(3)}
\dot{R} = R u
\end{equation} where $R \in SO(3)$ and the control input $u \in
\mathfrak{so}(3)$. The control objective is to exponentially stabilize a
solution $R_*(t) \in SO(3)$, which verifies
$\dot{R_*}(t)=R_*(t)\Omega(t)$, where
$\Omega(t)$ is some known signal.
The Lie group $SO(3)$ is a Riemannian manifold
with the bi-invariant metric $\langle X, Y \rangle = \tr (X^\top Y)$.
Due to the bi-invariance of the metric, the Levi-Civita connection is
simply $\nabla_X Y = \frac{1}{2} [X,Y]$, see \eqref{eq:Lie-Levi}.
Consider the function
\[
F(R,R_*) = \frac{1}{2} ||R - R_*||^2,
\] where $||\cdot ||$ is the Frobenius norm ($F$ is not the square
distance). The gradient and Hessian
of $F$ can be calculated as $\nabla F = \frac{1}{2} R (R_*^\top R -
R^\top R_*)$, $\Hess F (RY,RZ) = \frac{1}{4} \tr (Z^{\top} Y R^{\top}_*
R )$, for $Y,Z \in \mathfrak{so}(3)$.
Clearly, $R_*(\cdot)$ is the solution to
\begin{align*}
\dot{R} & = -k \nabla F(R,R_*) + R\Omega(t) \\
&= -\frac{k}{2} R(R_*^\top(t)R - R^\top R_*(t)) +
R\Omega(t).
\end{align*}
Let us check the LES of $R_*(\cdot)$. For $T_R SO(3) \ni R' = R X$
for some $X \in \mathfrak{so}(3)$, we calculate
\begin{align*}
\frac{1}{2} \frac{d}{dt} \left< R', R'\right>
= & -k \Hess F (R',R') + \left< \nabla_{R'} (R\Omega (t)), R'
\right> \\
= & -k \Hess F (R',R') + \frac{1}{2} \left< [ R', R\Omega (t)], R' \right>
\\
= & -k \Hess F (R',R') + \frac{1}{2} \tr \{ (X^{\top} X - X X^{\top} ) \Omega \}
\\
= & -k \Hess F(R', R')
\end{align*}since $X^{\top} X - X X^{\top}$ is symmetric.
Note that the Hessian of $F$ is positive definite at $R=R_*$.
Hence the controller
\[
u = -\frac{k}{2} (R_*^\top(t)R - R^\top R_*(t)) + \Omega(t).
\]renders the trajectory $R_*(\cdot)$ LES as expected.
The extension to the design of a low pass filter becomes straightforward: the
following dynamics
\begin{equation}
\dot{\hat{R}} = -\frac{k}{2} \hat{R}(R^\top\hat{R}-\hat{R}^\top R) +
\hat{R}\Omega
\end{equation} is a locally exponentially convergent observer (filter) for
$\dot{R}=R\Omega$. This result has been obtained in
\cite{lageman2009gradient}, see also \cite{mahony2008nonlinear}.
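Before moving on, we note that the observer is straightforward to test numerically. The following is a minimal simulation sketch (assuming \texttt{numpy} and \texttt{scipy} are available; the gain $k$, the signal $\Omega(\cdot)$, the initial attitude error and the step size are arbitrary illustrative choices, not part of the design):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def hat(w):  # so(3) hat map: R^3 -> skew-symmetric matrices
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

k, dt = 2.0, 1e-3
R = np.eye(3)                                 # true attitude
Rhat = expm(hat(np.array([0.3, -0.2, 0.4])))  # perturbed initial estimate
for step in range(10000):
    t = step * dt
    Omega = hat(np.array([np.sin(t), np.cos(2*t), 0.5]))  # known input
    # body-frame observer input: Omega - (k/2)(R^T Rhat - Rhat^T R)
    xi = Omega - 0.5 * k * (R.T @ Rhat - Rhat.T @ R)
    R = R @ expm(dt * Omega)      # integrate the plant on the group
    Rhat = Rhat @ expm(dt * xi)   # integrate the observer on the group
print(np.linalg.norm(R - Rhat))   # Frobenius error, should be near zero
\end{verbatim}
The group-preserving Euler step $\hat{R}\leftarrow\hat{R}\exp(\Delta t\,\xi)$ keeps both matrices on $SO(3)$; the printed Frobenius error decays essentially at the rate predicted by the analysis.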
\subsection{Killing system} \label{subsubsec:Kill} \subsubsection{Low-pass filter for Killing system} Consider a system defined by a time-varying Killing field \cite[Chapter 8]{petersen2006riemannian} on a Riemannian manifold $(M,g)$: \begin{equation} \label{sys:Kill}
\dot{q}=f(t,q) \end{equation}i.e., $L_f g =0$, see also \eqref{eq:Killing} in the Appendix. We call such a system a Killing system. When the system \eqref{sys:Kill} is perturbed by some noise, it is natural to design a low-pass filter to reconstruct the system state from the corrupted data $q$. For that, we propose the simple filter \begin{equation} \label{sys:filter-Kill}
\dot{\hat{q}} = f(t,\hat{q}) - k \nabla F(\hat{q}, q), \end{equation}where $F(q,p)=\frac{1}{2} d(q,p)^{2}$ and $k$ is a positive constant. To verify the convergence of this filter, we calculate as before \begin{align*}
\frac{1}{2} \frac{d}{dt} \left< \hat{q}',\hat{q}' \right>
& = \left< \nabla_{\hat{q}'} (f-k\nabla F), \hat{q}' \right> \\
& = \left< \nabla_{\hat{q}'} f, \hat{q}' \right> - k \left<
\nabla_{\hat{q}'} \nabla F, \hat{q}'\right> \\
& = - k \left< \nabla_{\hat{q}'} \nabla F, \hat{q}'\right> \text{ (by
\eqref{eq:Killing}) }\\
& = - k \Hess F (\hat{q}', \hat{q}'). \end{align*} Since $\Hess F$ is locally positive definite, the filter converges at least locally. If, in addition, the manifold has non-positive curvature, then the convergence is global, recalling that $\Hess F \ge g$ on manifolds with non-positive curvature. This is the case for the manifold of symmetric positive definite matrices: ${\rm SPD} := \{ P \in \mathbb{R}^{n\times n}: P = P^{\top} >0 \}$ equipped with the metric $ \left< X, Y \right> := \tr (X P^{-1} Y P^{-1})$ for $X,Y \in T_{P} {\rm SPD}$, see e.g., \cite{Criscitiello2022}.
\subsubsection{Discretization} Let us now consider the discretization of this filter. First, notice that $f$ is Killing, thus the discretization of \eqref{sys:Kill} may be written as $x_{k+1} = \tau \cdot x_{k}$ for some isometric action $\tau \in \text{Iso}(M)$ (assuming \eqref{sys:Kill} is autonomous, so that $\tau$ is time-invariant). This model has been considered, for example, in \cite{Tyagi2008}, where the manifold is the set of symmetric positive definite matrices and the model is called a ``linear system'' on Riemannian manifolds. Next, viewing $-k \nabla F (\hat{q},q)$ as a disturbance, we may discretize \eqref{sys:filter-Kill} as \[
\hat{q}_{k+1} = \exp_{\tau \cdot \hat{q}_{k}} (k \Delta t
\log_{\hat{q}_k} q_{k}) \] where $\Delta t$ is the sampling time and we use the standard notation $\log_{x}y = \exp_x^{-1} y$, see e.g. \cite{Pennec2006}.
Indeed, fix $q$ and let $r(x) = d(x,q)$, then $ \nabla r(x) =
-\frac{\exp_x^{-1} q}{||\exp_x^{-1} q||}$, see \cite[Lemma
5.1.3]{Jost2017}. Thus $\nabla F = -\log_{\hat{q}} q $. For example, in
Euclidean space $-\log_{\hat{q}}q = \hat{q}-q$, which is in accordance with
$\nabla_{\hat{q}} \frac{1}{2} ||\hat{q} - q||^{2}$. Let $e_{k} = \log_{\hat{q}_k} q_k$ be the error, then \begin{align*}
e_{k+1} & =\log_{\hat{q}_{k+1}} q_{k+1} \\
& = \log_{\exp_{\tau \cdot \hat{q}_{k}} ( k \Delta t \log_{\hat{q}_k}q_k)}
\tau \cdot q_k \\
& \approx \log_{\tau \cdot \hat{q}_k} \tau \cdot q_k - k \Delta t
\log_{\hat{q}_k}q_k \\
& = D\tau \cdot \log_{\hat{q}_k} q_k - k \Delta t \log_{\hat{q}_k}q_k \\
& = (D \tau - k \Delta t \, {\rm Id}) e_k \end{align*}By assumption, $D\tau$ is a linear isometric mapping (unitary), therefore, $e_{k}$ tends to zero exponentially for all sufficiently small sampling time.
\subsubsection{Revisit of filter on ${\it SO}(3)$} For a Lie group $G$ equipped with a left- (resp. right-) invariant metric $g$, it is known that any right- (resp. left-) invariant vector fields are Killing fields, see for example \cite[Chapter 8]{petersen2006riemannian}. Indeed, equip $G$ with a right-invariant metric and consider a left-invariant vector field $V(x) := dL_{x} (v)$ for $v \in \mathfrak{g}$ and $x\in G$, whose flow reads $F^{t}(x) = R_{\exp (tv)} x$; here $L$ and $ R$ represent left and right action respectively. Thus $DF^{t} = dR_{\exp(tv)}$. Hence for any right-invariant vector fields $W_1(x) = dR_x(w_1) , W_2(x)=dR_x(w_2)$, for $w_1, w_2 \in \mathfrak{g}$, we have \begin{align*}
\left< DF^{t} (W_1), DF^{t} (W_2) \right>
& = \left< dR_{\exp (tv)} (W_1), dR_{\exp (tv)} (W_2) \right> \\
& = \left< W_1, W_2 \right>_x \text{ (right-invariant metric)}\\
& = \left<w_1, w_2\right>, \end{align*}which is constant for all $t\ge 0$. Summarizing, a system defined by left-invariant vector field (right-invariant is similar) e.g., \begin{equation}
\dot{x} = dL_x (v(t)), \quad x \in G, \; v(t) \in \mathfrak{g}, \, \forall
t\ge 0 \end{equation} is a Killing system when the underlying metric of $G$ is right-invariant. Therefore, a low-pass filter can be designed using formula \eqref{sys:filter-Kill}. In particular, for the system \eqref{sys:so(3)} on $SO(3)$, when $u$ is known, it reads $\dot{\hat{R}} = \hat{R}u + k \log_{\hat{R}} R = \hat{ R} u + k \hat{R}\log (\hat{R}^{\top} R)$ when $SO(3)$ is equipped with the standard bi-invariant metric. Thus we obtain another low-pass filter for left-invariant dynamics on $SO(3)$.
\section{Application to Euler-Lagrangian Systems}\label{sec:Lag} In this section, we utilize the proposed methods to study LES of trajectories of Euler-Lagrangian (EL) systems. As pointed out in Section \ref{sec:intro}, trajectory tracking and observer design are two typical and important applications which involve the analysis of LES of trajectories. Compared to the analysis of stability of an equilibrium, these tasks are generally much harder on manifolds. Most existing results rely on calculations in local coordinates, which is usually a daunting task. We demonstrate in this section that the proposed approach can be used efficiently to design and analyze controllers and observers for mechanical systems, while obviating calculations in local coordinates.
There are two pervasive approaches -- Lagrangian and Hamiltonian -- for the modelling of mechanical systems \cite{Abraham2008}. These two approaches have led to different design paradigms. Amongst the vast literature, we mention two books that include some of the most important results in the two fields: the book of R. Ortega {\it et al.} \cite{Ortega2013} (Lagrangian approach) and the book of van der Schaft \cite{van2017l2} (Hamiltonian approach).
In this paper, we focus on the Lagrangian approach. Since we will work on manifolds (the configuration space is a manifold rather than Euclidean), we adopt the geometric modelling which is well documented in \cite{bullo2019geometric}. Briefly speaking, one starts with a configuration space $Q$ and then calculates the kinetic energy $\left< v_q, v_q \right> $ and the potential energy $V(q)$. The kinetic energy thus defines a Riemannian metric on the configuration space, which depends only on the inertia of the system. Using principles of classical mechanics (e.g., d'Alembert's principle), one can derive the following so-called Euler-Lagrangian (EL) equation: \begin{equation} \label{eq:EL}
\nabla_{\dot{q}} \dot{q} = -\grad V(q) +\sum_{i=1}^{m} u_i B_{i}(q) \end{equation}where $\dot{q}$ is the velocity, $\nabla$ is the Levi-Civita connection associated with the metric, $B_i$ are some tangent vectors, and $u_i$ are external forces (viewed as input in our setting) taking values in $\mathbb{R}$.
\subsection{Tracking controllers for EL systems} Suppose that $(q_*(\cdot), \dot{q}_*(\cdot), u_*(\cdot))$ is a feasible pair of the system \eqref{eq:EL}, i.e., $\nabla_{\dot{q}_*}\dot{q}_* = -\nabla V(q_*) + \sum_{i=1}^{m} u_{i*}(t) B_{i}(q_*)$. The objective is to design a controller $u(\cdot)$ such that $(q_*(\cdot), \dot{q}_*(\cdot))$ is LES. Before moving on however, we must stop for a moment to clarify the statement that ``$(q_*(\cdot), \dot{q}_*(\cdot))$ is LES''. Unlike $q(\cdot)$, the curve $(q_*(\cdot), \dot{q}_*(\cdot))$ lives in the tangent bundle $TM$, which is not equipped with a distance function {\it a priori}. Thus in order to talk about convergence, a topology should be defined on $TM$. This is achieved via the so-called Sasaki metric \cite{yano1973tangent}. Due to the importance of this metric, we briefly recall its construction in the following.
Let $V,W \in TTM$ be two tangent vectors at $(p,v)\in TM$ and \[
\alpha: t \mapsto (p(t), v(t)), \, \beta: t \mapsto (q(t), w(t)), \] are two curves in $TM$ with $p(0)= q(0) = p$, $v(0)= w(0) = v$, $\alpha'(0)=V$, $\beta'(0) = W$. Define the inner product on $TM$ by \begin{equation}\label{sasaki}
\left< V, W \right>_{\rm s} :=
\left< p'(0), q'(0) \right> + \left< v'(0), w'(0) \right>
\end{equation} in which we write $v'(0) = \left. \frac{Dv(t)}{dt}\right|_{0}$. The Sasaki metric is well-known to be a {\it bona fide} Riemannian metric on $TM$. For details, see \cite{yano1973tangent}.
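For orientation, consider the flat case (a sanity check only): when $M=\mathbb{R}^{n}$ with the Euclidean metric, covariant derivatives reduce to ordinary derivatives and \eqref{sasaki} becomes the standard inner product on $T\mathbb{R}^{n}\cong\mathbb{R}^{2n}$, so convergence in the Sasaki metric is the usual joint convergence of $(q,\dot{q})$.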
For a curve $w(s) = (c(s), v(s))$ lying in $TM$, we can calculate its length under the Sasaki metric as: \begin{align*}
\ell (w)
= & \int \sqrt{\langle w'(s), w'(s) \rangle_{\rm s}} ds \\
= & \int \sqrt{\langle c'(s), c'(s) \rangle
+ \langle v'(s), v'(s) \rangle} ds \end{align*}in which $v'(s)$ is understood as the covariant derivative of $v(\cdot)$ along $c(\cdot)$.
\begin{assumption}\rm In the sequel we assume that for each pair of points $(q,v)$ and $(p,w)$ in $TM$, the minimizing geodesic that joins $(q,v)$ to $(p,w)$ always exists. \end{assumption}
Since the EL equation \eqref{eq:EL} defines a system on $TM$, it seems that to analyze LES of solutions of the EL equation, one has to consider variations (see Section \ref{subsec:use}) of the form $(q',v') \in TTM$. The next theorem shows that this is not needed.
\begin{theorem} \label{thm:lag-simp}
Consider a dynamical system on a Riemannian manifold $(M,g)$:
\begin{equation} \label{sys:nabla}
\nabla_{\dot{q}} \dot{q} = f(q,\dot{q})
\end{equation}
where $f$ is smooth. Let $(q(\cdot), \dot{q}(\cdot))$ be a
trajectory of the system and $q'$ any variation along $q(\cdot)$.
Then the system \eqref{sys:nabla} is contractive if the following system
\begin{equation} \label{sys:sasaki}
\frac{D}{dt}
\begin{bmatrix} q' \\ \frac{Dq'}{dt} \end{bmatrix}
= F\left(
\begin{bmatrix} q' \\ \frac{Dq'}{dt} \end{bmatrix} \right)
\end{equation}is exponentially stable along any $q(\cdot)$. \end{theorem}
\begin{remark} Notice that $(q', Dq'/dt) \in T_q M \times T_q M $, thus exponential stability can be defined in the obvious way for the system \eqref{sys:sasaki} using Sasaki metric. \end{remark}
\begin{proof}
Given a point $(q_1,v_1)\in TM$, let $\eta_1(t) = ( q_1(t),
\dot{q}_1(t))$ be the integral curve of the system \eqref{sys:nabla}
passing through it at time $t=0$. Let $\eta_0(t) =
(q(t), \dot{q}(t))$ be another integral curve with initial condition
$(q_0, v_0)$. By
assumption, there exists a minimizing geodesic
$\gamma(s)=(q(s),v(s)), \ s\in[0,1]$ joining $(q_0, v_0)$ to
$(q_1,v_1)$, that is, $\gamma(0)=(q_0,v_0),\ \gamma(1)=(q_1,v_1)$. Let
$q(s,t)$ be the solution to the system \eqref{sys:nabla} with initial
condition $\gamma(s)$, then the parameterized curve $s\mapsto
(q(s,t),\frac{\partial q(s,t)}{\partial t})$ forms a
variation between the curves $\eta_0(\cdot)$ and $\eta_1(\cdot)$.
Therefore, the following estimation of the distance between the two points
$\eta_0(t)$ and $\eta_1(t)$ is obvious:
\begin{equation}
\begin{aligned}
d_{TM}(\eta_0(t), \eta_1(t))
& \le \int_0^1 \sqrt{\left|\frac{\partial q}{\partial s}(s,t)\right|^2
+ \left|\frac{D}{ds} \frac{\partial q}{\partial t}
\right|^2} ds \\
& = \int_0^1 \sqrt{\left|\frac{\partial q}{\partial s}(s,t)\right|^2
+ \left|\frac{D}{dt} \frac{\partial q}{\partial s} \right|^{2} }ds
\end{aligned}
\end{equation} The conclusion follows immediately after replacing $\frac{\partial
q}{\partial s}$ by $q'$.
\end{proof}
As we have remarked earlier, due to Theorem \ref{thm:lag-simp}, the analysis of LES and contraction does not require variations of the form $(q',v')$; $q'$ alone is sufficient. This observation is crucial for the rest of this section.
With the preceding preparations, we are now in a position to study tracking controller for the EL system. We focus on fully-actuated system: \begin{equation} \label{eq:EL-full}
\nabla_{\dot{q}} \dot{q} = -\grad V(q) +u \end{equation} and assume $(q_*(\cdot), \dot{q}_*(\cdot), u_*(\cdot) \equiv 0 )$ is a bounded feasible solution to the EL equation, i.e., $\nabla_{\dot{q}_*} \dot{q}_{*} = - \nabla V(q_{*})$ (the case of non-zero $u_*$ is similar). We propose a controller with structure $u = u_{P} + u_{D} +u_{R}$ to locally exponentially stabilize $(q_*(\cdot), \dot{q}_*(\cdot))$, where \begin{equation} \label{ctrlu}
\begin{aligned}
& u_{P}(q) = - k_2 \nabla F(q,q_{*}), \\
& u_{D}(q,\dot{q}) = - k_1 (\dot{q} - P_{q_*}^{q} \dot{q}_{*}), \\
& u_{R}(q,\dot{q}) = R(\dot{q}, \nabla F(q,q_*)) \dot{q}
\end{aligned} \end{equation}As before, $F$ is half of the square distance function. $k_1$ and $k_2$ are constants to be determined and $P_{q}^{p}$ is the parallel transport from $q$ to $p$, $R(\cdot, \cdot)\cdot$ is the curvature tensor. Heuristically, this can be seen as a PD-controller \cite{Ortega2013}, with a curvature compensation term. By construction, $(q_*(\cdot), \dot{q}_*(\cdot))$ is a solution to the closed loop system since $u(q_*,\dot{q}_*) \equiv 0$. Hence it remains to show the LES of this solution.
Thanks to Theorem \ref{thm:lag-simp} and Proposition \ref{prop:LES in W}, we need only check the exponential stability of the system \eqref{sys:sasaki} along $q_*(\cdot)$. For this we calculate \begin{equation} \label{eq:nablaEL} \begin{aligned}
\nabla_{q'} \nabla_{\dot{q}}\dot{q}
& = \nabla_{\dot{q}} \nabla_{q'} \dot{q} + R(\dot{q}, q')\dot{q} \\
& = \nabla_{\dot{q}} \nabla_{\dot{q}} q' + R(\dot{q}, q')\dot{q} \\
& = \frac{D^{2} q'}{dt^{2}} + R(\dot{q}, q')\dot{q} \end{aligned} \end{equation} where we used the basic fact about the curvature tensor: $ \frac{D}{dt}\frac{D}{ds}X - \frac{D}{ds}\frac{D}{dt}X = R(\dot{q},q')X$, see e.g., \cite[Lemma 4.1]{carmo1992riemannian}. The following calculations are in order (notice that we calculate along $q_*(\cdot)$, otherwise these are invalid): \begin{equation} \label{eq:nabla u}
\begin{aligned}
\nabla_{q'} u_{P} &= - k_2 \nabla_{q'} \nabla F = -k_2 q' \\
\nabla_{q'} u_{D} &= - k_1 \nabla_{q'} (\dot{q} - P_{q_*}^{q}
\dot{q}_{*} ) = - k_1 \nabla_{\dot{q}} q'\\
\nabla_{q'} u_{R} &= \nabla_{q'} R(\dot{q}, \nabla F) \dot{q}
\\
& = (\nabla_{q'}R ) (\dot{q}, \nabla F)
\dot{q} + R(\nabla_{q'} \dot{q}, \nabla F)
\dot{q} \\
& \quad + R(\dot{q}, \nabla_{q'} \nabla
F)\dot{q} + R(\dot{q}, \nabla F)
\nabla_{q'}\dot{q} \\
& = R(\dot{q}, \nabla_{q'} \nabla F) \dot{q}
\\
& = R( \dot{q}, q') \dot{q}
\end{aligned} \end{equation} where we have used the facts that $\nabla_{q'}\nabla F(q,q_*)
|_{q=q_*(t)} = q'$ and $\nabla F(q_*,q_*) =0$. The second line of \eqref{eq:nabla u} holds because one can take $s \mapsto q(s,t)$ to be a geodesic. Substituting \eqref{eq:nablaEL} and \eqref{eq:nabla u} into the EL equation we immediately get \begin{equation} \label{sys:linD}
\frac{D^{2} q'}{dt^{2}} = - k_1 \frac{Dq'}{dt} - k_2 q' - \nabla_{q'} \nabla
V. \end{equation}
\begin{theorem} \label{thm:track}
Let $(q_*(\cdot), \dot{q}_*(\cdot), u_* \equiv 0)$ be a bounded feasible solution to
the fully-actuated Euler-Lagrangian system \eqref{eq:EL-full}.
If the Hessian of the potential function $V$ is bounded along
$q_*(\cdot)$, then the controller \eqref{ctrlu} renders $(q_*(\cdot),
\dot{q}_*(\cdot))$ LES for $k_1>0$ and $k_2>0$ large enough. \end{theorem}
\begin{proof} If the Hessian of $V$ is bounded along $q_*(\cdot)$, then it is obvious that the ``linear system'' \eqref{sys:linD} is exponentially stable setting $k_1>0$ and choosing $k_2>0$ large enough. The theorem follows invoking Theorem \ref{thm:lag-simp}. \end{proof}
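For intuition on the gain condition, here is a heuristic frozen-coefficient sketch (not a substitute for the proof): if along $q_*(\cdot)$ one had $\nabla_{q'}\nabla V = c\,q'$ for a constant $c$, then \eqref{sys:linD} would read $\frac{D^{2}q'}{dt^{2}}+k_1\frac{Dq'}{dt}+(k_2+c)q'=0$, with characteristic polynomial $s^{2}+k_{1}s+(k_{2}+c)$, which is Hurwitz precisely when $k_{1}>0$ and $k_{2}>-c$. A bounded Hessian of $V$ along $q_*(\cdot)$ thus suggests that any $k_{2}$ exceeding $\sup_{t}\|\Hess V\|$ is large enough.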
\begin{remark}
Note that the assumption of Theorem \ref{thm:track} holds if $V\in
\mathcal{C}^{2}$ as $q_*(\cdot)$ is bounded.
If $V$ is (weakly) convex, then the Hessian of $V$ is positive semi-definite,
hence the same holds true for arbitrary positive constants $k_1, k_2$. \end{remark}
\begin{remark} In equation \eqref{sys:linD}, we have in fact obtained the celebrated Jacobi equation by setting $u=0$ (i.e., $k_1=k_2=0$) and $V=0$: \begin{equation}\label{eq:Jacobi}
\frac{D^2 q'}{dt^2} = -R(\dot{q},q')\dot{q}. \end{equation} Since in this case the EL equation reads $\nabla_{\dot{q}}\dot{q} =0$ (the geodesic equation), equation \eqref{eq:Jacobi} characterizes the effect of curvature on the geodesic flow. The Jacobi equation plays a significant role in Riemannian geometry and has many important implications. In order to help readers from the control community appreciate this equation, we now give it a control flavour.
For (\ref{eq:Jacobi}), choose a ``Lyapunov function'' \[
V(\dot{q},q') = \langle \frac{D q'}{dt}, \frac{D q'}{dt} \rangle
+ \langle R(\dot{q}, q')\dot{q}, q' \rangle. \] Since we work only locally, let us consider a constant curvature manifold, that is \[
\langle R(\dot{q},q')\dot{q}, q'\rangle = K \langle \dot{q},\dot{q}
\rangle \langle q', q' \rangle, \quad \forall \dot{q}, q' \]for some constant $K$. The time derivative of $V$ reads \begin{align*}
\dot{V} &= 2 \langle \frac{D^2 q'}{dt^2},\frac{D q'}{dt} \rangle +
\langle R(\dot{q}, \frac{Dq'}{dt})\dot{q}, q'\rangle + \langle
R(\dot{q},q')\dot{q}, \frac{Dq'}{dt} \rangle \\
&= 2 \langle -R(\dot{q},q')\dot{q},\frac{D q'}{dt} \rangle + 2
\langle R(\dot{q},q')\dot{q}, \frac{Dq'}{dt} \rangle \\
&= 0,
\end{align*} where we have used the fact that $\frac{D\dot{q}}{dt}=0$. Recalling that $q(\cdot)$ is a geodesic, we may assume $|\dot{q}|=1$; it then follows that \[
V(\dot{q},q') = |Dq'/dt|^2 + K |q'|^2 = \text{ constant}. \] Therefore, we can draw the following non-rigorous conclusions: \begin{itemize}
\item $K>0$: along a given geodesic, nearby geodesics oscillate around
it (see Fig. \ref{fig:K>0}).
\item $K<0$: along a given geodesic, nearby geodesics tend to
diverge.
\item $K=0$: the geodesics neither converge nor diverge. \end{itemize} \end{remark}
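These conclusions can be made precise in the constant curvature case via the explicit (and standard) solutions of \eqref{eq:Jacobi}: for $|\dot{q}|=1$ and the normal Jacobi field with $q'(0)=0$, $\frac{Dq'}{dt}(0)=e$,
\[
q'(t)=\frac{\sin(\sqrt{K}\,t)}{\sqrt{K}}E(t) \ (K>0), \qquad
q'(t)=t\,E(t) \ (K=0), \qquad
q'(t)=\frac{\sinh(\sqrt{-K}\,t)}{\sqrt{-K}}E(t) \ (K<0),
\]
where $E(\cdot)$ is parallel along $q(\cdot)$ with $E(0)=e$: oscillation with period $2\pi/\sqrt{K}$, linear drift, and exponential divergence, respectively.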
\begin{figure}
\caption{For $K>0$, the geodesics oscillate near a given geodesic.}
\label{fig:K>0}
\end{figure}
In the above we have studied tracking controller design for fully-actuated EL systems. This problem becomes more involved for under-actuated systems. In that case, we may apply the energy-shaping method to obtain matching conditions and then try to solve certain PDEs on the manifolds \cite{Ortega2002}, see also \cite{Blankenstein2002} and the references therein.
\subsection{Speed Observer for EL Systems} \label{exmp:rouchon} Consider the EL system without input \begin{equation}\label{sys:lag-U}
\nabla_{\dot{q}}\dot{q} = -\nabla V(q) \end{equation}where $V(q)$ is the potential energy. The objective is to design a speed observer for $\dot{q}(\cdot)$ knowing $q(\cdot)$. In \cite{aghannan2003intrinsic}, Aghannan and Rouchon proposed the following intrinsic speed observer for the system (\ref{sys:lag-U}) when there is no potential energy in the EL equation: \begin{equation} \label{sys:rouchon}
\left\{
\begin{aligned}
\dot{\hat{q}} & = \hat{v} - \alpha \nabla F(\hat{q},q) \\
\nabla_{\dot{\hat{q}}} \hat{v} &= - \beta \nabla
F(\hat{q},q) + R(\hat{v}, \nabla F)\hat{v}.
\end{aligned}
\right. \end{equation}where $F$ is half of the square distance as before. The convergence of this observer was analyzed in local coordinates via contraction analysis \cite{aghannan2003intrinsic}, which was, in our opinion, quite tedious.
\begin{remark}
Using the notation introduced in Section \ref{subsubsec:Kill}, we may rewrite
\eqref{sys:rouchon} as
\[
\left\{
\begin{aligned}
\dot{\hat{q}} & = \hat{v} + \alpha \log_{\hat{q}} q\\
\nabla_{\dot{\hat{q}}} \hat{v} &= \beta
\log_{\hat{q}} q - R(\hat{v}, \log_{\hat{q}}q)\hat{v}.
\end{aligned}
\right.
\]obviating the use of the square distance function.
\end{remark}
In this subsection, we provide a much simpler proof using the methods developed in this paper. Note that our model contains a non-vanishing potential energy function, thus it is an extension of the free Lagrangian case in \cite{aghannan2003intrinsic}.
To cope with the potential energy, we consider a slightly modified version of \eqref{sys:rouchon}: \begin{equation} \label{sys:rouchon+u}
\left\{
\begin{aligned}
\dot{\hat{q}} & = \hat{v} - \alpha \nabla F(\hat{q},q) \\
\nabla_{\dot{\hat{q}}} \hat v &= - \beta \nabla F(\hat{q},q) +
R(\hat{v}, \nabla F)\hat{v} - P_{q}^{\hat{q}} \nabla
V(q).
\end{aligned}
\right. \end{equation}Note that by construction, $(q(\cdot), \dot{q}(\cdot))$ is a solution to the observer. Hence it suffices to study LES of $(q(\cdot), \dot{q}(\cdot))$.
Substituting $\hat{v} = \dot{\hat{q}} + \alpha \nabla F(\hat{q},q)$ into the second line of \eqref{sys:rouchon+u}, we get \begin{align*}
\nabla_{\dot{\hat{q}}}(\dot{\hat{q}} + \alpha \nabla F)
= & -\beta \nabla F + R(\dot{\hat{q}}+\alpha \nabla F, \nabla
F)(\dot{\hat{q}}+\alpha \nabla F ) \\
& \; - P_{q}^{\hat{q}} \nabla V(q) \end{align*}or \begin{align*}
\nabla_{\dot{\hat{q}}}\dot{\hat{q}} = - \alpha \nabla_{\dot{\hat{q}}} \nabla F -
& \beta \nabla F + R(\dot{\hat{q}}, \nabla F)(\dot{\hat{q}}+\alpha \nabla F) \\
& - P_{q}^{\hat{q}} \nabla V(q) \end{align*} Taking covariant derivative along $q(\cdot)$ on both sides yields \begin{equation}\label{eq:rouchon1}
\nabla_{{q}'}\nabla_{\dot{\hat{q}}}\dot{\hat{q}} = \frac{D^2 {q}'}{dt^2} +
R(\dot{\hat{q}},{q}')\dot{\hat{q}}, \end{equation} on the left, and \begin{align*}
& -\alpha \nabla_{{q}'}\nabla_{\dot{\hat{q}}}\nabla F - \beta
\nabla_{{q}'}
\nabla F + \nabla_{{q}'}[R(\dot{\hat{q}}, \nabla F)(\dot{\hat{q}}+\alpha
\nabla F)] \\
= & -\alpha \nabla_{\dot{\hat{q}}} \nabla_{{q}'} \nabla F - \alpha
R(\dot{\hat{q}},{q}')\nabla F - \beta \nabla_{{q}'} \nabla F \\
& \; + \nabla_{{q}'}[R(\dot{\hat{q}}, \nabla F)(\dot{\hat{q}}+\alpha \nabla F)] \\
= & -\alpha \nabla_{\dot{\hat{q}}} {q}' - \beta {q}' +
R(\dot{\hat{q}},\nabla_{{q}'}\nabla F)\dot{\hat{q}} \\
= & -\alpha \nabla_{\dot{\hat{q}}}{q}' - \beta {q}' +
R(\dot{\hat{q}},{q}')\dot{\hat{q}},
\end{align*} on the right, where we have used the relations $\nabla F |_{\hat{q} = q} =0 $,
$\nabla_{q'} \nabla F |_{\hat{q}=q} = q'$ and $\nabla_{q'}P_{q}^{\hat{q}} \nabla V(q) = 0$ (taking $q'$ tangent to a geodesic). Combining this with (\ref{eq:rouchon1}) yields \begin{equation}
\frac{D^2q'}{dt^2} + \alpha\frac{Dq'}{dt}+\beta q' =0. \end{equation} This, together with Theorem \ref{thm:lag-simp} shows the local exponential convergence of the observer.
\begin{remark} Notice that in both the tracking controller and the observer design, we have to calculate the geodesic distance. Although there are efficient computation schemes, it is still tempting to avoid computing geodesics. This may be achieved by embedding the system into Euclidean space and using equivalent distance functions in Euclidean space. The example of observer design on $SO(3)$ in Section \ref{subsec:revisit} has used this method. \end{remark} \section{Conclusion} In this paper, we have proposed a novel intrinsic approach for analyzing local exponential stability of trajectories and contraction. The advantages of our approach have been justified by applications and by improved analysis of some existing works in the literature. We leave the study of concrete examples, including under-actuated mechanical systems, for future research.
\section{Appendix}
We collect some elementary formulas in Riemannian geometry as a reference for the reader. They can be found in standard texts such as \cite{carmo1992riemannian, petersen2006riemannian}. Let $(M,g)$ be a smooth Riemannian manifold. The Levi-Civita connection on $M$ is compatible with the metric $g$: for any three vector fields $X,Y,Z \in \Gamma(M)$, $
X \left< Y, Z \right> = \left< \nabla_X Y, Z \right> + \left< Y,
\nabla_X Z \right>$. The Levi-Civita connection is torsion-free in the sense that $\nabla_{X} Y - \nabla_Y X = [X,Y]$, where $[X,Y]$ is the Lie bracket. Given a curve $q: t \mapsto q(t)$ in $M$ and a vector field $v(t)$ along $q(\cdot)$, the covariant derivative of $v(\cdot)$ along $q(\cdot)$ is defined as $\frac{Dv(t)}{dt} := \nabla_{\dot{q}(t)} v(t)$. Given a $2$-surface parameterized by $(s,t) \mapsto q(s,t) $, then there holds \begin{equation} \label{eq:swap-cov}
\frac{D}{ds}\frac{\partial q}{\partial t} =
\frac{D}{dt}\frac{\partial q}{\partial s}. \end{equation} The gradient of a scalar function $f$ on $M$ is defined as the unique vector $\nabla f$ satisfying $
\left< \nabla f, X \right> = df(X)$. The Hessian of a scalar function is a symmetric bilinear form on $TM$ defined as \begin{equation} \label{eq:Hess}
\Hess f (X,Y) := \left< \nabla_X \nabla f , Y \right>, \; \forall X,Y \in
\Gamma(M). \end{equation}
For a parameterized surface $(s,t)\to q(s,t)$ and a vector field along the surface, there holds \begin{equation} \frac{D}{ds}\frac{DX}{dt} - \frac{D}{dt}\frac{DX}{ds} = R
\left( \frac{\partial q}{\partial t}, \frac{\partial q}{\partial s}
\right)X. \end{equation}
A metric on a Lie group $G$ is bi-invariant if it is both left-invariant, i.e., $dL_{x} \left< v,w \right> = \left< v, w \right>$ and right-invariant. For a bi-invariant metric, the Levi-Civita connection admits a simple formula \begin{equation} \label{eq:Lie-Levi}
\nabla_X Y = \frac{1}{2} [X,Y]. \end{equation} A vector field $X$ on $M$ is called a Killing field (w.r.t. $g$) if $L_X g =0$. Consequently, if $X$ is Killing and $Y$ is an arbitrary vector field, there holds \begin{equation} \label{eq:Killing}
g(\nabla_Y X, Y)=0. \end{equation}
\begin{lemma}\rm \label{lem: dist}Given $\gamma_{1},\gamma_{2}\in\mathcal{C}^{1}(\mathbb{R} _{+};M)$, where $M$ is a Riemannian manifold. If $\gamma_{1}(0)=\gamma_{2}(0)=x$ and $\gamma_{1}^{\prime}(0)=\gamma_{2}^{\prime}(0)=v$, then $d(\gamma_{1}(s),\gamma_{2}(s))=O(s^{2})$ when $s>0$ is sufficiently small. \end{lemma}
\end{document}
\begin{document}
\title{Coprime permutations} \date{\today}
\author{Carl Pomerance} \address{Mathematics Department, Dartmouth College, Hanover, NH 03784} \email{[email protected]}
\begin{abstract} Let $C(n)$ denote the number of permutations $\sigma$ of $[n]=\{1,2,\dots,n\}$ such that $\gcd(j,\sigma(j))=1$ for each $j\in[n]$. We prove that for $n$ sufficiently large, $n!/3.73^n < C(n) < n!/2.5^n$.
\end{abstract}
\subjclass[2010]{11A25, 11B75, 11N60}
\keywords{coprime permutation, coprime matching, distribution function, Euler's function} \maketitle
\vskip-30pt \newenvironment{dedication}
{
\begin{quotation}\begin{center}\begin{em}}
{\par\end{em}\end{center}\end{quotation}} \begin{dedication} {In memory of Andrzej Schinzel (1937--2021)} \end{dedication} \vskip20pt
\section{Introduction}
Several papers, some recent, have dealt with coprime matchings between two sets of $n$ consecutive integers; that is, matchings where corresponding pairs are coprime. For example, in a paper \cite{PS} with Selfridge, we showed such a matching always exists if one of the intervals is $[n]=\{1,2,\dots,n\}$. In Bohman and Peng \cite{BP} it is shown that a matching always exists if $n$ is even and the numbers involved are not too large as a function of $n$, with an interesting application to the lonely runner problem in Diophantine approximation. Their result was somewhat strengthened in \cite{P}.
The current paper considers the situation when both intervals are $[n]$. In this case it is trivial that a coprime matching exists, just take the cyclic permutation $(1,2,\dots,n)$. So instead we consider the enumeration problem. Let $C(n)$ denote the number of permutations $\sigma$ of $[n]$ where $\gcd(j,\sigma(j))=1$ for each $j\in[n]$. This problem was considered in Jackson \cite{J} where $C(n)$ was enumerated for $n\le24$. For example, \[ C(24)=1{,}142{,}807{,}773{,}593{,}600. \] After factoring his values, Jackson notes the appearance of sporadically large primes, which indicates there may not be a simple formula. The sequence also has an OEIS page, see \cite{O}, where the value of $C(25)$, due to A. P. Heinz, is presented (and the value for $C(16)$ is corrected). There are also links to further computations, especially those of Locke. In Section \ref{sec:comp} we discuss how $C(n)$ can be computed and verify Locke's values.
Our principal result is the following. \begin{theorem} \label{thm:main} For all large $n$, $n!/3.73^n < C(n) < n!/2.5^n$. \end{theorem}
Important in the proof of the lower bound is a numerically explicit estimation of the distribution function for $\varphi(n)/n$, where $\varphi$ is Euler's function.
It would seem likely that there is a constant $c$ with $2.5\le c\le 3.73$ such that $C(n)=n!/(c+o(1))^n$ as $n\to\infty$. In Section \ref{sec:ub} we give some thoughts towards this possibility.
In Section \ref{sec:comp} we discuss the numerical calculation of $C(n)$. Finally, in Section \ref{sec:anti} we briefly discuss the number of permutations $\sigma$ of $[n]$ where $\sigma(1)=1$ and for $2\le j\le n$, $\gcd(j,\sigma(j))>1$.
\section{Preliminaries} \label{sec:pre}
Regarding notation, we have \[ [n]=\{1,2,\dots,n\},\quad [n]_{\rm o}=\{1,3,\dots,2n-1\}. \] Thus, $[n]_{\rm o}$ is the set of the first $n$ odd positive integers. Let $C_0(n)$ denote the number of one-to-one functions \[ f: [n]_{\rm o}\longrightarrow[n] \]
such that each $\gcd(i,f(i))=1$. Similarly, let $C_1(n)$ denote the number of one-to-one functions \[ f:[n]\longrightarrow [n+1]_{\rm o} \]
such that each $\gcd(i,f(i))=1$. \begin{lemma} \label{lem:odd} We have $C(2n)=C_0(n)^2$ and for $n\ge2$, $2C_0(n-1)^2\le C(2n+1)\le C_1(n)^2$. \end{lemma} \begin{proof} Let $\sigma$ be a coprime permutation of $[2n]$. Then $\sigma$ maps evens to odds and odds to evens, so that $\sigma$ corresponds to a pair of coprime matchings $\sigma_0,\sigma_1$ where $\sigma_0$ maps $\{2,4,\dots,2n\}$ to $\{1,3,\dots,2n-1\}$ and $\sigma_1$ maps $\{1,3,\dots,2n-1\}$ to $\{2,4,\dots,2n\}$. Then $f(2i-1)=\frac12\sigma_1(2i-1)$ is one of the maps counted by $C_0(n)$ and so is $g(2i-1)=\frac12\sigma_0^{-1}(2i-1)$. Conversely, each such pair of maps corresponds to a coprime permutation $\sigma$ of $[2n]$. This proves that $C(2n)=C_0(n)^2$.
The upper bound for $C(2n+1)$ follows in the same way. Let $\sigma$ be a coprime permutation of $[2n+1]$ and let $\sigma_0$ be $\sigma$ restricted to even numbers. Then define $f_\sigma(i)=\sigma_0(2i)$, so that $f_\sigma$ is one of the functions counted by $C_1(n)$. Note that there is some $a\in\{1,3,\dots,2n+1\}$ with $\sigma(a)$ odd, but all other members $b$ of $\{1,3,\dots,2n+1\}$ have $\sigma(b)$ even. Let $\sigma_1$ be $\sigma$ restricted to $S_{a,n}:=[n+1]_{\rm o}\setminus\{a\}$ and let $g_\sigma(2i-1)=\frac12\sigma_1(2i-1)$ for $2i-1\in S_{a,n}$. Then $g_\sigma^{-1}$ is one of the functions counted by $C_1(n)$. Note that if $\tau$ is a coprime permutation of $[2n+1]$ such that $f_\tau=f_\sigma$ and $g_\tau=g_\sigma$, then $\tau=\sigma$. This proves that $C(2n+1)\le C_1(n)^2$. Note that the proof ignores the condition $\gcd(a,\sigma(a))=1$, so it only gives an upper bound.
For the lower bound, note that $C(2n+1)\ge 2C(2n-2)$. Indeed, corresponding to a coprime permutation of $[2n-2]$ we augment it with either the cycle $(2n-1,2n,2n+1)$ or its inverse, giving two coprime permutations of $[2n+1]$. The lower bound in the lemma for $C(2n+1)$ now follows from the first part of the lemma. \end{proof}
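As a small worked instance of Lemma \ref{lem:odd} (included for illustration): $C_0(2)=2$, since both one-to-one maps from $[2]_{\rm o}=\{1,3\}$ to $[2]=\{1,2\}$ are coprime ($\gcd(3,1)=\gcd(3,2)=1$), and accordingly $C(4)=C_0(2)^2=4$: the coprime permutations of $[4]$ are exactly those interchanging $\{1,3\}$ and $\{2,4\}$, in all four possible ways.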
We remark that the sequence $C(1),C(2),\dots$ is not monotone, but it is
monotone restricted to integers of the same parity. Indeed, augmenting
a coprime permutation of $[n]$ with the cycle $(n+1,n+2)$ gives a coprime
permutation of $[n+2]$, so that $C(n)\le C(n+2)$.
Since $C_0(n)\le n!$ and $C_1(n)\le (n+1)!$, Lemma \ref{lem:odd} immediately gives us that $C(2n)\le (n!)^2$ and $C(2n+1)\le (n+1)!^2$. With Stirling's formula this gives $C(n)\le n!/(2+o(1))^n$ as $n\to\infty$. Note that this argument considers only parity. By bringing in 3, 5, etc., we can improve this upper bound. In Section \ref{sec:ub} we begin this process and show that $C(n) <n!/(5/2)^n$ for all large $n$.
It is much harder to get a comparable lower bound for $C(n)$, and this is our undertaking in the next two sections. From the thoughts above it suffices to get a lower bound for $C_0(n)$. The lower bound in Theorem \ref{thm:main} is a consequence of the following result.
\begin{theorem} \label{thm:main2} For all large $n$, $C_0(n)\ge n!/1.864^n$. \end{theorem}
\section{The distribution function}
Let $\omega(n)$ denote the number of distinct prime factors of $n$.
\begin{lemma} \label{lem:sieve} For positive integers $m,n$, the number of $j\le n$ with $\gcd(j,m)=1$ is within $2^{\omega(m)-1}$ of $(\varphi(m)/m)n$. \end{lemma} \begin{proof} The result is clear if $m=1$, so assume that $m>1$. With $\mu$ the M\"obius function, the exact number of $j$'s is \[
\sum_{\substack{d|m}}\sum_{\substack{j\le n\\d|j}}\mu(d)
=\sum_{\substack{d|m}}\left(\frac{\mu(d)}dn-\mu(d)\theta_d\right), \] where $0\le\theta_d<1$. The sum of the main terms is $(\varphi(m)/m)n$. There are $2^{\omega(m)}$ error terms $-\mu(d)\theta_d$ with $\mu(d)\ne0$, and since $m>1$, half of them are $\ge0$ and half are $\le0$. So the sum of the error terms has absolute magnitude $<2^{\omega(m)-1}$. \end{proof} \begin{corollary} \label{cor:sieve} For $m\le n$, the number of $j\le n$ with $\gcd(j,m)=1$ is greater than $(\varphi(m)/m)n-\sqrt{n}$. \end{corollary} \begin{proof} A short induction argument shows that $\sqrt{m}>2^{\omega(m)-1}$, so the result follows directly from the lemma. \end{proof}
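For example, with $m=15$ and $n=20$: the number of $j\le 20$ coprime to $15$ is $20-\lfloor 20/3\rfloor-\lfloor 20/5\rfloor+\lfloor 20/15\rfloor=11$, while $(\varphi(15)/15)\cdot 20=32/3\approx 10.67$; the discrepancy is well within $2^{\omega(15)-1}=2$.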
\begin{lemma} \label{lem:2ndmom} For all large $n$ we have \[ \sum_{\substack{m<2n\\m\,{\rm odd}}}\left(\frac m{\varphi(m)}\right)^2<1.78n. \] \end{lemma} \begin{proof} Define a multiplicative function $h$ with $h(p)=(2p-1)/(p-1)^2$ for each prime $p$ and $h(p^a)=0$ for $a\ge2$. Then \begin{align*} \sum_{\substack{m<2n\\m\,{\rm odd}}}\left(\frac m{\varphi(m)}\right)^2 &=\sum_{\substack{m<2n\\m\,{\rm odd}}}\sum_{d\mid m}h(d)\\ &=\sum_{\substack{d<2n\\d\,{\rm odd}}}h(d)\sum_{\substack{j<2n/d\\j\,{\rm odd}}}1 =\sum_{\substack{d<2n\\d\,{\rm odd}}}h(d)\left(\frac nd+O(1)\right). \end{align*} The main term is \[ <n\prod_{p>2}\left(1+\frac{2p-1}{(p-1)^2p}\right), \] and this infinite product converges to a constant smaller than $1.7725$. For the error term a simple calculation shows that it is $O((\log n)^2)$, so our conclusion follows. \end{proof}
Let $\delta_\varphi(\alpha)$ be the distribution function for $\varphi(m)/m$; that is, for $0\le\alpha\le1$, \[ \delta_\varphi(\alpha)=\lim_{n\to\infty}\frac1n\sum_{\substack{m\le n\\\varphi(m)/m\le\alpha}}1. \] It is known after various papers of Schoenberg, Behrend, Chowla, Erd\H os, and Erd\H os--Wintner
that the limit exists, $\delta_\varphi(0)=0$, $\delta_\varphi(1)=1$, and $\delta_\varphi$ is strictly increasing and continuous. In addition, at a dense set of numbers in $[0,1]$, namely the values of $\varphi(m)/m$, the distribution function $\delta_\varphi$ has an infinite left derivative. This all can be generalized to odd numbers. For $0\le\alpha\le 1$, let $D(\alpha,n)$ denote the number of odd $m<2n$ with $\varphi(m)/m\le \alpha$. As with $\delta_\varphi$, \[ \delta(\alpha):=\lim_{n\to\infty}D(\alpha,n)/n \]
exists, with $\delta$ continuous and strictly increasing on $[0,1]$, with $\delta(0)=0$ and $\delta(1)=1$. By extending it to take the value 1 when $\alpha>1$, we have \[ \delta_\varphi(\alpha)=\frac12(\delta(\alpha)+\delta(2\alpha)), \] as noted in \cite{PS}. In particular, for $\frac12\le\alpha\le1$, \begin{equation} \label{eq:dist} \delta(\alpha)=2\delta_\varphi(\alpha)-1. \end{equation}
A consequence of the argument in \cite{PS} is that $\delta(\alpha)\le\alpha$ on $[0,1]$. We shall need a somewhat stronger version of this inequality. In particular, note that Lemma \ref{lem:2ndmom} immediately gives (since each odd $m<2n$ with $\varphi(m)/m\le\alpha$ contributes at least $\alpha^{-2}$ to the sum there) \begin{equation} \label{eq:ineq} \delta(\alpha)<1.78\alpha^2, \end{equation} which is stronger than $\delta(\alpha)\le\alpha$ for $\alpha<1/2$. It is certainly possible to get improvements on \eqref{eq:ineq} by averaging higher moments of $m/\varphi(m)$, as was done in \cite{KKP}, which would lead to small improvements on our lower bound for $C(n)$.
We shall also need some estimates for $\delta(\alpha)$ when $\alpha$
is close to 1, and for this we use an argument of Erd\H os \cite[Theorem 3]{E}. There he shows, essentially, that $1-\delta(1-\epsilon)\sim2/(e^\gamma|\log\epsilon|)$ as $\epsilon\to0$, where $\gamma$ is Euler's constant. We will need an estimate with somewhat more precision.
Let \[ \delta(\alpha,n)=\frac1nD(\alpha,n),\quad M(x)=\prod_{3\le p\le x}\left(1-\frac1p\right),\quad s_{j,x}=\sum_{4^{j-1}x<p\le 4^jx}\frac1p. \] \begin{lemma} \label{lem:ep} Uniformly for $2\le x\le\log n$ we have \[ 1-\delta(1-1/x,n)\le M(x)-1/\sqrt{n} \] and \[ 1-\delta(1-1/x,n)\ge M(2x)\left(1-\sum_{j\ge1}\frac{s_{j,2x}^{j+1}}{(j+1)!}\right) +O\left(\frac1{(\log n)^{\log\log\log n}}\right). \] \end{lemma} \begin{proof} Note that $1-\delta(1-1/x,n)$ denotes the fraction of numbers $m<2n$
with $\varphi(m)/m > 1-1/x$ (all such $m$ are odd). In fact, such numbers are not divisible by any prime $p\le x$, which with Lemma \ref{lem:sieve} gives the upper bound.
For the lower bound we count the numbers $m<2n$ that are not divisible by any prime $p\le 2x$ and also divisible by at most $j$ distinct primes from each interval $I_j:=(4^{j-1}\cdot2x,\,4^j\cdot2x]$. Indeed, if $m$ is such a number, then \[ \frac{\varphi(m)}{m}>\prod_{j\ge1}\left(1-\frac1{4^{j-1}\cdot2x}\right)^j >1-\sum_{j\ge1}\frac j{4^{j-1}\cdot2x}=1-\frac8{9x}. \] Let $A_j$ be the set of products of $j+1$ distinct primes from $I_j$. For $a\in A_j$, the number of odd numbers $m<2n$ with $a\mid m$ and $m$ not divisible by any prime up to $2x$ is, by Lemma \ref{lem:sieve}, within $2^{\pi(2x)-1}$ of $M(2x)n/a$. Note too that the sum of $1/a$ for $a\in A_j$ is at most $s_{j,2x}^{j+1}/(j+1)!$, by the multinomial theorem. Let $\pi(I_j)$ denote the number of primes in $I_j$. Thus, the number of odd $m<2n$ not divisible by any prime up to $2x$ and divisible by some $a\in A_j$ is uniformly \[ M(2x)ns_{j,2x}^{j+1}/(j+1)!+O\left(2^{\pi(2x)}\binom{\pi(I_j)}{j+1}\right). \] The binomial coefficient here is bounded by $O(4^{j(j+1)}x^{j+1})$ using only that $\pi(I_j)<4^jx$. Note that for $j\le\log\log n$ this expression is $O_\epsilon(n^\epsilon)$ for any $\epsilon>0$, as is $2^{\pi(2x)}$, so that the number of integers $m<2n$ not divisible by any prime $p\le2x$ yet divisible by some $a\in A_j$ for $j\le\log\log n$ is at most \[ M(2x)n\sum_{j\le\log\log n}\frac{s_{j,2x}^{j+1}}{(j+1)!}+O(n^{1/2}). \] Thus, \begin{align*} n-D(1-1/x,n)\ge &M(2x)n\left(1-\sum_{j\le\log\log n}\frac{s_{j,2x}^{j+1}}{(j+1)!}\right)\\ &\quad-2n\sum_{j>\log\log n}\frac{s_{j,2x}^{j+1}}{(j+1)!}+O(n^{1/2}). \end{align*} Since $s_{j,2x}=O(1/j)$, we have \[ \sum_{j>\log\log n}\frac{s_{j,2x}^{j+1}}{(j+1)!}=O(1/(\log n)^{\log\log\log n}). \] Thus, our count is \[ \ge M(2x)n\left(1-\sum_{j\ge1}\frac{s_{j,2x}^{j+1}}{(j+1)!}\right)+O(n/(\log n)^{\log\log\log n}), \] which gives our lower bound. \end{proof}
\begin{corollary} \label{cor:ep} Uniformly for $2\le x\le \log n$, we have as $n\to\infty$, \[ 1-\delta(1-1/x,n)\le\frac{2}{e^\gamma\log x}\left(1+\frac1{2(\log x)^2}+o(1)\right). \] Further, for $150\le x\le\log n$ and $n$ sufficiently large, \[ 1-\delta(1-1/x,n)\ge\frac2{e^\gamma\log(2x)}\left(1-\frac7{4(\log(2x))^2}\right). \] \end{corollary} \begin{proof} By Rosser and Schoenfeld \cite[(3.26)]{RS} we have \[ M(x)<\frac2{e^\gamma\log x}\left(1+\frac1{2(\log x)^2}\right), \] so our first assertion follows from the first part of Lemma \ref{lem:ep}. Further, using \cite[(3.25)]{RS} we have \[ M(2x)>\frac2{e^\gamma\log(2x)}\left(1-\frac1{2(\log(2x))^2}\right), \] and so the second part of our assertion will follow from Lemma \ref{lem:ep} if we show \[ \sum_{j\ge1}\frac{s_{j,2x}^{j+1}}{(j+1)!}<\frac {1.4}{(\log(2x))^2} \] for all $x\ge150$, noting that $1.4<(2/e^\gamma)1.25$. Using \cite[(3.17),(3.18)]{RS}, we have \begin{align*} s_{j,2x}&<\log\log (2\cdot4^jx)-\log\log(2\cdot4^{j-1}x)+\frac1{(\log(2\cdot4^{j-1}x))^2}\\ &<\frac{\log 4}{\log(2\cdot4^{j-1}x)}+\frac1{(\log(2\cdot4^{j-1}x))^2} \le \frac{\log 4}{\log(2x)}+\frac1{(\log(2x))^2}=:s. \end{align*} Thus, using $x\ge150$, \[ \sum_{j\ge1}\frac{s_{j,2x}^{j+1}}{(j+1)!}<e^s-1-s<0.55s^2 \] and $s^2<2.5/(\log(2x))^2$, so our claim follows. \end{proof}
In addition, we shall use the following numerical bounds. The first of these follows from Kobayashi \cite{K}, the last two from Lemma \ref{lem:ep}, and the others from Wall \cite{W}.
\begin{align} \label{eq:est} \begin{split} 0.02240&<\delta(0.5)<0.02352,\\ 0.1160&<\delta(0.6)<0.1624,\\ 0.3556&<\delta(0.7)<0.3794,\\ 0.4808&<\delta(0.8)<0.5120,\\ 0.5644&<\delta(0.9)<0.6310,\\ 0.7593&<\delta(0.99)<0.7949,\\ 0.8380&<\delta(0.999)<0.8539. \end{split} \end{align}
\section{The lower bound}
We partition $(0,1]$ into consecutive intervals \[ (\alpha_0,\alpha_1],(\alpha_1,\alpha_2],\dots,(\alpha_{k-1},\alpha_k], \hbox{ where } 0=\alpha_0<\alpha_1<\dots<\alpha_k=1. \] The parameter $k$ will depend gently on $n$, namely $k=O(\log\log n)$. The partition of $(0,1]$ will correspond to a partition of $[n]$ into subsets as follows. For $j=0,1,\dots,k-1$, let \[ S_j=\big\{m\in\{1,3,\dots,2n-1\}:\alpha_j<\varphi(m)/m\le\alpha_{j+1}\big\}. \]
In getting a lower bound for $C_0(n)$, we show that there are many ways to assign coprime companions for each member $m$ of $\{1,3,\dots,2n-1\}$ that do not overlap with the choices for other values of $m$. We organize the odd numbers $m<2n$ by increasing size of $\varphi(m)/m$, and so group them into the sets $S_1,S_2,\dots$. In particular, we will choose the parameters $\alpha_j$ in such a way that there are more ways to assign coprime companions for $m\in S_j$ than there are members in all of the sets $S_i$ for $i\le j$ combined.
For an odd number $m<2n$ let $F(m,n)$ denote the number of integers in $[n]$ coprime to $m$. Suppose $0<\alpha<\beta<1$ and we wish to find coprime assignments for members of \[ S=\{m\hbox{ odd}:m<2n,\,\varphi(m)/m\in(\alpha,\beta]\}=\{m_1,m_2,\dots,m_t\}, \] where $t=\#S=D(\beta,n)-D(\alpha,n)$. Let $M=\lceil\alpha n-\sqrt{n}\rceil$, so that for each $m\in S$ we have $F(m,n)\ge M$, via Corollary \ref{cor:sieve}. Assume that those odd $m<2n$ with $\varphi(m)/m\le \alpha$ already have their coprime assignments. Then $m_1$ can be assigned to at least $M-D(\alpha,n)$ numbers in $[n]$, $m_2$ can be assigned to at least $M-1-D(\alpha,n)$ numbers in $[n]$, etc. In all, the numbers in $S$ have at least \begin{equation} \label{eq:Scount} \frac{(M-D(\alpha,n))!}{(M-D(\alpha,n)-\#S)!} =\frac{(M-D(\alpha,n))!}{(M-D(\beta,n))!} \end{equation} coprime assignments that do not interfere with those for $\varphi(m)/m\le\alpha$. If $0<b<a<1$ and $an,bn$ are integers, then \[ (an-bn)!=\exp((a-b)n(\log n-1) +(a-b)n\log(a-b)+O(\log n)). \] Let $f(x)=x\log x$. Thus, the expression in \eqref{eq:Scount} is equal to \[ \exp((\delta(\beta,n)-\delta(\alpha,n))n(\log n-1)+E(\alpha,\beta,n)n+O(\log n)), \] where \[ E(\alpha,\beta,n)=f(\alpha-\delta(\alpha,n))-f(\alpha-\delta(\beta,n)). \] We will thus have that $C_0(n)\ge n!\exp\left(nE+O(k\log n)\right)$, where \begin{equation} \label{eq:key} E=\sum_{1\le i\le k-1}(f(\alpha_i-\delta(\alpha_i,n))-f(\alpha_i-\delta(\alpha_{i+1},n))). \end{equation} (We will choose $\alpha_1=1/\log\log n$ and for $n$ sufficiently large, every odd $m<2n$ will have $\varphi(m)/m>\alpha_1$, so the interval $(0,\alpha_1]$ does not contribute.)
The sum in \eqref{eq:key} is almost telescoping. In particular the density $\delta(\alpha_{i+1},n)$ when $1\le i\le k-2$ appears twice, the two $f$-values being \[ -f(\alpha_i-\delta(\alpha_{i+1},n))+f(\alpha_{i+1}-\delta(\alpha_{i+1},n)). \] We do not have a completely accurate evaluation for $\delta(\alpha_{i+1},n)$ nor for the limiting value of $\delta(\alpha_{i+1})$, but we do have a fairly narrow interval where this limit lives. Note that the expression \[ -f(\alpha_i-x)+f(\alpha_{i+1}-x) \] is decreasing in $x$ when $0<x<\alpha_i$, so if we use an upper bound for $\delta(\alpha_{i+1},n)$ in \eqref{eq:key}, we will get a lower bound for the sum.
\subsection{The interval $(0,1/4]$} Let $j_0$ be the least integer with $2^{j_0}>\log\log n$ and let $\alpha_1=1/2^{j_0}$. Further, let $\alpha_j=2^{j-1}\alpha_1=1/2^{j_0-j+1}$, for $j\le j_0-1$. This gives the first part of our partition of $(0,1]$, namely the sets $(\alpha_{j},\alpha_{j+1}]$ for $j\le j_0-1$ give a partition of $(0,1/4]$.
Using \eqref{eq:ineq} and the upper bound for $\delta(1/2)$ in \eqref{eq:est}, we
have \[ \delta(1/2^i,n)\le\min\{1.78/4^i,\,0.02352\} \] for all $i$ and all large $n$.
We find that the $E$-sum from \eqref{eq:key} for the portion corresponding to $(0,1/4]$ is
$>-0.0538$. So the contribution from this part of the count is greater than \begin{equation} \label{eq:1part} \exp\big(D(1/4,n)(\log n-1)-0.0538n\big) \end{equation} for all large $n$.
\subsection{The interval $(1/4,0.999]$} We split the interval $(1/4,0.999]$ at \[ 0.5,~0.6,~0.7,~0.8,~0.9,~0.99. \] Using the upper bounds for our various densities from \eqref{eq:est}, we have the $E$-sum from \eqref{eq:key} is \begin{align*} &f(1/4-.02352)-f(1/4-.02352)+f(.5-.02352)-f(0.5-.1624)\\ &+f(.6-.1624)-f(.6-.3794)+f(.7-.3794)-f(.7-.5120)\\ &+f(.8-.5120)-f(.8-.6310)+f(.9-.6310)-f(.9-.7949)\\ &+f(.99-.7949)-f(.99-.8539) >-0.2873. \end{align*} (Note the first two terms are not a typo!) Thus, the contribution from $(1/4,0.999]$ is greater than
\begin{equation} \label{eq:2part} \exp\big((D(0.999,n)-D(1/4,n))(\log n-1)-0.2873n\big) \end{equation} for all large $n$.
\subsection{The interval $(0.999,1-1/\log n]$}
Let $j_1$ be the least integer with $10^{j_1}>\log n$. Let $\epsilon_i=10^{-i}$. We deal with the intervals \[ (1-\epsilon_{i-1},1-\epsilon_i] ~\hbox{ for }~4\le i\le j_1-1. \] For our argument to work we will need to show that $D(1-\epsilon_i,n)<(1-\epsilon_{i-1})n-\sqrt{n}$, that is, \begin{equation} \label{eq:need} \epsilon_{i-1}n+\sqrt{n}<n-D(1-\epsilon_i,n)\hbox{ for }i\ge4. \end{equation}
From Corollary \ref{cor:ep} we have \[ n-D(1-\epsilon_i,n)>\frac{2n}{e^\gamma\log(2/\epsilon_i)}\left(1-\frac7{4(\log(2/\epsilon_i))^2}\right). \] Note that $\log(2/\epsilon_i)=\log2+i\log10$, so that an expression of magnitude $1/\log(2/\epsilon_i)$ is much larger than $\epsilon_{i-1}$ when $i\ge4$, so we have \eqref{eq:need}.
We now compute the contribution from the intervals $(1-\epsilon_{i-1},1-\epsilon_i]$ for $i=4,5,\dots,j_1-1$. This is at least \[ \exp((D(1-\epsilon_{j_1-1},n)-D(0.999,n))(\log n-1)+En), \] where \[ E=\sum_{4\le i\le j_1-1}f(1-\epsilon_{i-1}-\delta(1-\epsilon_{i-1},n))-f(1-\epsilon_{i-1}-\delta(1-\epsilon_{i},n)). \] Using our bound $0.8539$ for $\delta(0.999)$ from \eqref{eq:est} and Corollary \ref{cor:ep} for $\delta(1-\epsilon_i)$ for $i\ge4$, we have $E>-0.2814$, so the contribution for all large $n$ is at least \[ \exp((D(1-\epsilon_{j_1-1},n)-D(0.999,n))(\log n-1)-0.2814n). \] The final interval $(1-\epsilon_{j_1-1},1-1/\log n]$ contributes \[ \exp((D(1-1/\log n,n)-D(1-\epsilon_{j_1-1},n))(\log n-1)+O(n/\log\log n)), \]
so our total contribution from $(0.999,1-1/\log n]$ is at least \begin{equation} \label{eq:3part} \exp((D(1-1/\log n,n)-D(0.999,n))(\log n-1)-0.2815n) \end{equation} for all large $n$.
\subsection{The interval $(1-1/\log n,1]$}
We break this interval at $1-1/\sqrt{2n}$. It is evident that if $m<2n$ is odd and $\varphi(m)/m>1-1/\sqrt{2n}$, then $m=1$ or $m$ is a prime in the interval $(\sqrt{2n},2n)$. Thus, \begin{equation} \label{eq:top} D(1-1/\sqrt{2n},n)=n-2n/\log n +O(n/(\log n)^2) \end{equation} by the prime number theorem. Thus, $\delta(1-1/\sqrt{2n},n)<1-1/\log n$ for all large $n$.
A calculation shows that the contribution is at least \[ \exp\left(\Big(D\big(1-\frac1{\sqrt{2n}},n\big)-D\big(1-\frac1{\log n},n\big)\Big)(\log n-1)+E\right), \] where $E=O(n\log\log\log n/\log\log n)$, this term coming from $f(1-1/\log n-\delta(1-1/\log n,n))$.
For the final interval, we have already noted that the numbers in $[n]_{\rm o}$ remaining are 1 and the primes in $(\sqrt{2n},2n)$. We follow the argument in \cite[Proposition 1]{PS}. Label the primes in $(\sqrt{2n},2n)$ in decreasing order $p_1,p_2,\dots,p_t$, so that, by \eqref{eq:top}, $t= 2n/\log n+O(n/(\log n)^2)$. Each $p_i$ has $<2n/p_i$ multiples up to $2n$, of which $<n/p_i+1/2$ are odd. Let $u=\lfloor t/2\rfloor=n/\log n+O(n/(\log n)^2)$, so that $p_u\sim n$. We count assignments for $p_i$ for $i=t,t-1,\dots,u$ in order. At each $i$ there are $i+1$ numbers remaining to be associated with $p_i$, of which at most $n/p_i+1/2$ are multiples of $p_i$. So, there are at least $i-\sqrt{n}$ coprime choices for $p_i$'s assignment. Multiplying these counts, we have at least \[ (u-\sqrt{n})^u=\exp(n+O(n/\log n)) \] choices. For each of the remaining primes $p_i$ there are $i+1$ numbers left as possible assignments, with at most one of these divisible by (actually, equal to) $p_i$. So the contribution of these primes is $(u-1)!=\exp(n+O(n/\log n))$. The final number to assign is 1, and it goes freely to the remaining number left. So for this interval we have at least \[ \exp(2n+O( n/{\log n})) \] possibilities. By \eqref{eq:top} the count can be rewritten as \[ \exp\left(\big(n-D\big(1-\frac1{\sqrt{2n}},n\big)\big)(\log n-1)+O\big(\frac{n}{\log n}\big)\right). \]
With the prior calculation, we have at least \begin{equation} \label{eq:4part} \exp((n-D(1-1/\log n,n))(\log n-1)+O(n\log\log\log n/\log n)) \end{equation} assignments.
To conclude the proof we multiply the expressions in \eqref{eq:1part}, \eqref{eq:2part}, \eqref{eq:3part}, and \eqref{eq:4part}, getting at least \[ \exp(n(\log n-1)-0.6226n) \] coprime matchings from $[n]_{\rm o}$ to $[n]$ for all large $n$. Since $e^{0.6226}>1.8637$, this completes the proof of the lower bound in Theorem \ref{thm:main2}.
\section{The upper bound and a conjecture} \label{sec:ub}
For each integer $k\ge2$, let $C_k(n)$ denote the number of permutations $\sigma$ of $[n]$ where $\gcd(j,\sigma(j),k!)=1$ for each $j\in[n]$. Thus, $C(n)\le C_k(n)$ for every $k$. In fact, $C(n)=C_k(n)$ when $k\ge n$, but we are interested here in the situation when $k$ is fixed and $n$ is large. We claim that for each fixed $k\ge2$ there is a positive constant $c_k$ such that $C_k(n)=n!/(c_k+o(1))^n$ as $n\to\infty$.
Here is a possible plan for the proof of this claim. Let $K$ be the product of the primes up to $k$. If $ dd' \mid K$, then one can count the number of $m\in[n]$ with $\gcd(m,K)=d$ that get mapped to an $m'$ with $\gcd(m',K)=d'$. The number of possible suites of counts is $n^{O(1)}$, so basically, up to a factor of this shape, the number of permutations is given by those with one optimal suite of counts.
Let $I_d$ be the set of $m$ with $\gcd(m,K)=d$ and let $\beta(d,d')$ be the proportion of members of $I_d$ that get sent to $I_{d'}$ by a given permutation. Then for a fixed $d$, the numbers $\beta(d,d')$ have sum 1 for $d' \mid K/d$, and sum 1 for a fixed $d'$ and $d\mid K/d' $. One can start with some suite of proportions $\beta(d,d')$ that are ``legal" and consider permutations which approximate these proportions, and see the count as some complicated, but continuous function of the variables $\beta(d,d')$. So, there is an optimal suite of proportions, via calculus, and this gives rise to $c_k$.
Assume that $c_k$ exists. Note that the sequence $(c_k)$ is monotone nondecreasing and that if $p<p'$ are consecutive primes, then $c_k=c_p$ for $p\le k<p'$. It follows from our lower bound for $C(n)$ that the numbers $c_k$ are bounded above. Let $c_0=\lim_{k\to\infty}c_k$. \begin{conjecture} \label{conj:main} We have $C(n)=n!/(c_0+o(1))^n$ as $n\to\infty$. \end{conjecture}
We now prove for $k=2,3,5$ that $c_k$ exists and we compute it. Our value for $c_5$ gives our upper bound theorem for $C(n)$.
The results in Section \ref{sec:pre} largely carry over in the case $k=2$. Indeed, note that $C_0(n)\le n!$ and $C_1(n)\le (n+1)!$, so that $C(2n)\le n!^2$ and $C(2n+1)\le (n+1)!^2$. From this we immediately get that
$C_2(n)\le n!/(2+o(1))^n$ as $n\to\infty$. In fact, from the proof of Lemma
\ref{lem:odd} we have $C_2(2n)=n!^2$ and $C_2(2n+1)=(n+1)!^2$,
so that $c_2=2$.
For $k=3$, we first deal with $6n$ and count one-to-one functions $\sigma$ from $\{1,2,\dots,3n\}$ to $\{1,3,\dots,6n-1\}$ that map multiples of 3 to non-multiples of 3. There are precisely $(2n)!^2/n!$ of them, so $C_3(6n)=((2n)!^2/n!)^2$. Similarly we get $C_3(6n+3)=((2n+1)!^2/(n+1)!)^2$, so these two formulas lead to $C_3(n)=n!/(3/2^{1/3}+o(1))^n$ as $n\to\infty$ with $3\mid n$. To get to other cases, note that $C_3(n)\le C_3(n+2)$ for all $n$, so we can sandwich $n$ between two consecutive multiples of 3 and absorb the error in the ``$o(1)$". We thus have \[ c_3=2^{-1/3}3=2.381101\dots. \]
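The Stirling computation behind the last step, included here for the reader's convenience, is
\[
\log\frac{(6n)!}{C_3(6n)}=\log\frac{(6n)!\,(n!)^2}{(2n)!^4}
=(6\log 6-8\log 2)n+O(\log n)=6n\log\frac{3}{2^{1/3}}+O(\log n),
\]
so that $C_3(6n)=(6n)!/(3\cdot 2^{-1/3}+o(1))^{6n}$, and similarly for $C_3(6n+3)$.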
The case $k=5$ is considerably harder. We only treat multiples of 30; the case of $15\pmod{30}$ is similar, and since $C_k(n)\le C_k(n+2)$, we can extend to all $n$ readily. The problem is reduced to counting matchings from $[15n]$ to $[15n]_{\rm o}$ where corresponding terms have gcd coprime to 15. We split $[15n]$ into the $n$ multiples of 15, the $2n$ numbers that are divisible by 5 but not 3, the $4n$ numbers divisible by 3 but not 5, and the $8n$ numbers coprime to 15. We have the corresponding decomposition for $\{1,3,\dots,30n-1\}$. The first group consisting of the multiples of 15 must be mapped to the numbers coprime to 15, and this can be done in \[ \frac{(8n)!}{(7n)!} \] ways. The next case we consider is the $2n$ multiples of 5 but not 3. They must be mapped to the numbers coprime to 5, where some of them are mapped to numbers coprime to 15 and the rest of them are mapped to numbers divisible by 3 but not 5. A calculation shows that the most numerous case is when it is half and half, also considering the next step, which is to place the multiples of 3 but not 5. So the total will be within a factor $2n$ of this most numerous case, which has \[ \binom{2n}{n}\frac{(7n)!}{(6n)!}\frac{(4n)!}{(3n)!} \] matchings. For the multiples of 3 but not 5, these are mapped into the union of the remaining $6n$ numbers coprime to 15 and the $2n$ numbers divisible by 5 but not 3, for a total of \[ \frac{(8n)!}{(4n)!} \] matchings. The remaining $8n$ numbers are all coprime to 15 and can be mapped to the remaining $8n$ numbers in every possible way, giving $(8n)!$ matchings. In all we thus have \[ \frac{(8n)!}{(7n)!}\binom{2n}{n}\frac{(7n)!}{(6n)!}\frac{(4n)!}{(3n)!}\frac{(8n)!}{(4n)!}(8n)!n^{O(1)} =\frac{(8n)!^3(2n)!}{(6n)!(3n)!n!^2}n^{O(1)} \] matchings. The log of this expression is within $O(\log n)$ of \[ 15n(\log n-1)+(24\log 8+2\log2-6\log6-3\log3)n. \] Our count is then squared and $(30n)!$ is factored out, giving \[ (30n)!\exp((136\log2-18\log3-30\log30)n+O(\log n)). \] This then gives that \[ C_5(n)=(n!/c_5^n)n^{O(1)}, \] where \[ c_5=\exp\big(-\frac{53}{15}\log2+\frac85\log3+\log5\big) =2^{-53/15}3^{8/5}5=2.504521\dots. \]
\subsection{A possible value for $c_0$} \label{sec:ubmcnew} Nathan McNew has suggested the following argument. First, for a prime $p$, let $N_p(n)$ be the number of permutations $\sigma$ of $[n]$ with each $\gcd(j,\sigma(j),p)=1$. So the constraint is that the multiples of $p$ get mapped to the non-multiples of $p$, and so we have \[ N_p(n)=\frac{(\lfloor(1-1/p)n\rfloor!)^2}{\lfloor(1-2/p)n\rfloor!}n^{O(1)}. \] Then, up to a factor $n^{O(1)}$, we have \[ \frac{n!}{N_p(n)}=\left(\frac{p(p-2)^{1-2/p}}{(p-1)^{2(1-1/p)}}\right)^n, \] which suggests by independence that \[ c_k=\prod_{p\le k}\frac{p(p-2)^{1-2/p}}{(p-1)^{2(1-1/p)}}. \] (We interpret the factor at $p=2$ as 2.) This expression agrees with our computation of $c_k$ for $k$ up to 5. And it suggests that $c_0$ is the infinite product over all primes $p$, so that $c_0=2.65044\dots$.
\section{Computing $C(n)$} \label{sec:comp}
In this section we discuss the numerical computation of $C(n)$ for modest values of $n$. In \cite{O} it is remarked that $C(n)$ has been computed to $n=30$ by Seiichi Manyama, and extended to $n=50$ by Stephen Locke, see https://oeis.org/A005326/b005326.txt. We have verified these values using the methods of this section and Mathematica.
\begin{footnotesize} \begin{table}[] \caption{Values of $C_0(n)=\sqrt{C(2n)}$ and $r_{2n}$.} \label{Ta:C0}
\begin{tabular}{|rrrr|} \hline $n$&$C_0(n)$ & $r_{2n}$&\\ \hline 1 & 1 & 1.4142&\\ 2 & 2 & 1.5651&\\ 3 & 4 & 1.8860&\\ 4 & 18 & 1.8276&\\ 5 & 60 & 1.9969&\\ 6 & 252 & 2.1044&\\ 7 & 1{,}860 & 2.0625&\\ 8 & 9{,}552 & 2.1629&\\ 9 & 59{,}616 & 2.2260&\\ 10 & 565{,}920 & 2.2082&\\ 11 & 4{,}051{,}872 & 2.2707&\\ 12 & 33{,}805{,}440 & 2.3118&\\ 13 & 465{,}239{,}808 & 2.2727&\\ 14 & 4{,}294{,}865{,}664 & 2.3171&\\ 15 & 35{,}413{,}136{,}640 & 2.3850&\\ 16 & 768{,}372{,}168{,}960 & 2.3122&\\ 17 & 8{,}757{,}710{,}173{,}440 & 2.3451&\\ 18 & 79{,}772{,}814{,}777{,}600 & 2.4122&\\ 19 & 1{,}986{,}906{,}367{,}584{,}000 & 2.3531&\\ 20 & 22{,}082{,}635{,}812{,}268{,}800 & 2.4029&\\ 21 & 280{,}886{,}415{,}019{,}776{,}000 & 2.4374&\\ 22 & 7{,}683{,}780{,}010{,}315{,}046{,}400 & 2.3905&\\ 23 & 102{,}400{,}084{,}005{,}498{,}547{,}200 & 2.4278&\\ 24 & 1{,}774{,}705{,}488{,}555{,}494{,}476{,}800 & 2.4401&\\ 25 & 40{,}301{,}474{,}964{,}335{,}327{,}232{,}000 & 2.4291&\\ \hline \end{tabular} \end{table} \end{footnotesize}
As is easy to see, the permanent of the biadjacency matrix of a bipartite graph on two $n$-sets gives the number of perfect matchings contained in the graph. Let ${\bf B}(n)$ be the $n\times n$ ``coprime matrix", where ${\bf B}(n)_{i,j}=1$ when $i,j$ are coprime and 0 otherwise. So, in particular, and as noted by Jackson \cite{J}, \begin{equation} \label{eq:perm} C(n)={\rm perm}({\bf B}(n)). \end{equation} However, it is not so simple to compute a large permanent, though we do have some algorithms that are better than brute force, for example \cite{R} and \cite{B}.
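As a small illustration of \eqref{eq:perm}, the following Python sketch evaluates ${\rm perm}({\bf B}(n))$ via Ryser's formula. The helper names are ours and the $O(2^nn^2)$ evaluation is practical only for small $n$; it is meant as an illustration, not as the code used for the tables below.
\begin{verbatim}
from math import gcd
from itertools import combinations

def coprime_matrix(n):
    # B(n)[i][j] = 1 exactly when gcd(i, j) = 1, for 1 <= i, j <= n
    return [[1 if gcd(i, j) == 1 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def permanent(a):
    # Ryser's formula:
    # perm(A) = (-1)^n sum_{S nonempty} (-1)^{|S|} prod_i sum_{j in S} a[i][j]
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
                if prod == 0:
                    break
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent(coprime_matrix(5)))  # C(5) = 28, matching the odd table
\end{verbatim}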
Recall from Lemma \ref{lem:odd} that $C(2n)=C_0(n)^2$, where $C_0(n)$ is the number of coprime matchings between $[n]$ and $[n]_{\rm o}$. Thus, $C(2n)$ can be obtained from an $n\times n$ permanent, which is considerably easier than the more naive $2n\times2n$ permanent required when applying \eqref{eq:perm} to $C(2n)$.
There is a similar reduction for computing $C(2n+1)$. For each $a\in[n+1]_{\rm o}$, let $C_{(a)}(n)$ denote the number of coprime matchings between $[n]$ and $[n+1]_{\rm o}\setminus\{a\}$. Then $C_1(n)=\sum_{a\in[n+1]_{\rm o}}C_{(a)}(n)$ and \[ C(2n+1)=\sum_{a\in[n+1]_{\rm o}}\sum_{\substack{b\in[n+1]_{\rm o}\\\gcd(a,b)=1}} C_{(a)}(n)C_{(b)}(n). \] Thus, $C(2n+1)$ can be easily computed from $n+1$ permanents of size $n\times n$.
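Both reductions are straightforward to realize in code. As one concrete instance for the even case, the following continues the sketch above (reusing \texttt{gcd} and \texttt{permanent}); the matrix below records the coprimality of $i\in[n]$ with the $j$-th odd number $2j-1$.
\begin{verbatim}
def C_even(n):
    # C(2n) = C_0(n)^2, where C_0(n) is the permanent of the n x n
    # matrix whose (i, j) entry is 1 exactly when gcd(i, 2j - 1) = 1
    b = [[1 if gcd(i, 2 * j - 1) == 1 else 0 for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    return permanent(b) ** 2

print(C_even(3))  # C(6) = 4^2 = 16, since C_0(3) = 4
\end{verbatim}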
Let $r_n=(n!/C(n))^{1/n}$, so that $C(n)=n!/r_n^n$. We have shown that for all large $n$ we have $2.5<r_n<3.73$. In the following tables we have computed the actual values of $r_n$ for $n\le 50$ rounded to 4 decimal places.
It is easy to see that $C_0(n)$ is the number of partitions of $[2n]$ into coprime unordered pairs. This has its own OEIS page: A009679, and has been enumerated there up to $n=30$.
\begin{footnotesize} \begin{table}[] \caption{Values of $C(n)$ for $n$ odd and $r_{n}$.} \label{Ta:C1}
\begin{tabular}{|rrrr|} \hline $n$&$C(n)$ & $r_{n}$&\\ \hline 1 & 1 & 1&\\ 3 & 3 & 1.2599&\\ 5 & 28 & 1.3378&\\ 7 & 256 & 1.5307&\\ 9 & 3{,}600 &1.6696&\\ 11 & 129{,}774 & 1.6834&\\ 13 & 3{,}521{,}232 & 1.7776&\\ 15 & 60{,}891{,}840 & 1.9444&\\ 17 & 8{,}048{,}712{,}960 & 1.8761&\\ 19 & 425{,}476{,}094{,}976 & 1.9372&\\ 21 & 12{,}474{,}417{,}291{,}264 & 2.0648&\\ 23 &2{,}778{,}580{,}249{,}611{,}264 & 2.0090&\\ 25 & 172{,}593{,}628{,}397{,}420{,}544 & 2.0804&\\ 27 & 17{,}730{,}530{,}614{,}153{,}986{,}048 & 2.1159&\\ 29 & 4{,}988{,}322{,}633{,}552{,}214{,}818{,}816 & 2.0841&\\ 31 & 427{,}259{,}978{,}841{,}815{,}654{,}400{,}000 & 2.1466&\\ 33 & 57{,}266{,}563{,}000{,}754{,}880{,}493{,}977{,}600 & 2.1818&\\ 35 & 14{,}786{,}097{,}120{,}330{,}296{,}843{,}693{,}260{,}800 & 2.1798&\\ 37 & 3{,}004{,}050{,}753{,}199{,}657{,}126{,}879{,}764{,}480{,}000 & 2.1988&\\ 39 & 536{,}232{,}134{,}065{,}318{,}935{,}894{,}365{,}552{,}640{,}000 & 2.2295&\\ 41 & 274{,}431{,}790{,}155{,}416{,}580{,}402{,}144{,}584{,}785{,}920{,}000 & 2.2058&\\ 43 & 51{,}681{,}608{,}012{,}142{,}138{,}983{,}265{,}921{,}023{,}262{,}720{,}000 & 2.2409&\\ 45& 7{,}417{,}723{,}304{,}411{,}612{,}192{,}092{,}096{,}851{,}178{,}291{,}200{,}000 & 2.2918&\\ 47 & 7{,}896{,}338{,}788{,}322{,}918{,}879{,}731{,}318{,}625{,}512{,}774{,}041{,}600{,}000 & 2.2459&\\ 49 & 1{,}989{,}208{,}671{,}980{,}285{,}257{,}956{,}064{,}090{,}726{,}080{,}876{,}380{,}160{,}000 & 2.2743&\\ \hline \end{tabular} \end{table} \end{footnotesize}
\section{Anti-coprime permutations} \label{sec:anti}
One might also wish to consider permutations $\sigma$ of $[n]$ where each $\gcd(j,\sigma(j))>1$. Of course, none exist, since $1\in[n]$. Instead we can count the number $A(n)$ where $\gcd(j,\sigma(j))>1$ for $2\le j\le n$. This seems like an interesting problem. We can prove the following lower bound.
\begin{proposition} \label{prop:anti} As $n\to\infty$, we have \begin{equation} \label{eq:anti} A(n) \ge n!/\exp((e^{-\gamma}+o(1))n\log\log n). \end{equation} \end{proposition}
We sketch the proof. Let $\epsilon_n=1/\sqrt{\log\log n}$ and let $g(x)=\prod_{p<x}(1-1/p)$, so that $g(x)$ is similar to the function $M(x)$ we considered earlier. For each prime $p<n^{\epsilon_n}$ consider the set $L_n(p)$ of integers $m\le n$ with least prime factor $p$, and let \[ \lambda(p,n)=\frac1n\#L_n(p). \] Note that \[ \bigcup_{p<n^{\epsilon_n}}L_n(p) \] is the set of integers with least prime factor $<n^{\epsilon_n}$, so the number of integers $m\le n$ not in this union is $O(n/(\epsilon_n\log n))$. In particular, \begin{equation} \label{eq:lambda} \sum_{p<n^{\epsilon_n}}\lambda(p,n)=1+O(1/(\epsilon_n\log n)). \end{equation}
Each of the $(\#L_n(p))!$ permutations of $L_n(p)$ is anti-coprime, and gluing these together for $p<n^{\epsilon_n}$ and having the remaining elements of $[n]$ as fixed points, gives an anti-coprime permutation of $[n]$. So we have \[ A(n) \ge \prod_{p<n^{\epsilon_n}}(\#L_n(p))!. \] Thus, by the inequality $k!>(k/e)^k$, \begin{align*} A(n)&\ge\exp\left(\sum_{p<n^{\epsilon_n}}\lambda(p,n)n(\log n+\log(\lambda(p,n))-1)\right)\\ &=\exp\left(\sum_{p<n^{\epsilon_n}}\lambda(p,n)n(\log n-1)+nE\right) \end{align*} where \[ E =\sum_{p<n^{\epsilon_n}}\lambda(p,n)\log(\lambda(p,n)). \]
Note that by \eqref{eq:lambda} \[ \sum_{p<n^{\epsilon_n}}\lambda(p,n)n(\log n-1)=n\log n+O(n/\epsilon_n). \]
To deal with $E$, we have that for $p<n^{1/\epsilon_n}$, \begin{equation*} \lambda(p,n)=\frac{ g(p)}{p}(1+O(e^{-1/\epsilon_n })) \end{equation*} uniformly for large $n$. Indeed each $m\in L_n(p)$ is of the form $pk$ where $k\le n/p$ is an integer not divisible by any prime $q< p$. Such integers $k$ are easily counted by the fundamental lemma of either Brun's or Selberg's sieve, which gives the above estimate.
We have $g(p)$ of magnitude $1/\log p$, in fact $g(p)=1/(e^\gamma\log p)(1+O(1/\log p))$. Thus, we have \begin{align*} E&=\sum_{p<n^{\epsilon_n}}\frac{g(p)}{p}\log(g(p)/p)(1+O(e^{-1/\epsilon_n }))\\ &\kern-3pt=\sum_{p<n^{\epsilon_n}}\frac1{e^\gamma p\log p}(-\log p-\log\log p-\gamma)(1+O(1/\log p) +O(e^{-1/\epsilon_n }))\\ &\kern-3pt=-\sum_{p<n^{\epsilon_n}}\frac1{e^\gamma p}(1+O(e^{-1/\epsilon_n }))+O(1). \end{align*} It remains to note that \[ \sum_{p<n^{\epsilon_n}}\frac1p=\log\log n-\log\epsilon_n +O(1). \] Thus, we have Proposition \ref{prop:anti}.
We conjecture that $A(n)=n!/\exp((e^{-\gamma}+o(1))n\log\log n)$ as $n\to\infty$, that is, Proposition \ref{prop:anti} is best possible. Though it is difficult to ``see" $\log\log n$ tending to infinity, we have some scant evidence in Table \ref{Ta:A}. Let $u_n=(n!/A(n))^{1/n}$, so the conjecture is that $u_n\sim e^{-\gamma}\log\log n$.
The computation of $A(n)$ is helped by the realization that all of the permutations counted have 1 and the primes in $(n/2,n]$ as fixed points, so one can deal with a somewhat smaller adjacency matrix than $n\times n$. In particular, if $n$ is prime, then $A(n)=A(n-1)$, so in Table \ref{Ta:A} we only consider $n$ composite (the cases $n=1,2$ being trivial).
In addition, for a prime $p\in(n/3,n/2]$ either $p$ is a fixed point or $(p,2p)$ is a 2-cycle, which gives another reduction.
\begin{footnotesize} \begin{table}[] \caption{Values of $A(n)$ for $n$ composite and $u_{n}$.} \label{Ta:A}
\begin{tabular}{|rrrr|} \hline $n$&$A(n)$ & $u_{n}$&\\ \hline 4&2&1.8612&\\ 6&8&2.1170&\\ 8&30&2.4607&\\ 9&72&2.5786&\\ 10&408&2.4826&\\ 12&4{,}104&2.6440&\\ 14&29{,}640&2.8976&\\ 15&208{,}704&2.8388&\\ 16&1{,}437{,}312&2.8034&\\ 18& 22{,}653{,}504&2.9479&\\ 20&318{,}695{,}040&3.1199&\\ 21&2{,}686{,}493{,}376&3.0866&\\ 22&27{,}628{,}410{,}816&3.0356&\\ 24&575{,}372{,}874{,}240&3.1722&\\ 25&1{,}775{,}480{,}841{,}216&3.2935&\\ 26&21{,}115{,}550{,}048{,}256&3.2420&\\ 27&132{,}879{,}856{,}582{,}656&3.2758&\\ 28&2{,}321{,}256{,}928{,}702{,}464&3.1932&\\ 30&83{,}095{,}013{,}944{,}442{,}880&3.2870&\\ \hline \end{tabular} \end{table} \end{footnotesize}
\subsection{Other types of permutations}
One might consider other arithmetic constraints on permutations. For example, what can be said about the number of permutations $\sigma$ of $[n]$ where for each $j\in[n]$, either $j\mid\sigma(j)$ or $\sigma(j)\mid j$? Or, the number where each lcm$[j,\sigma(j)]\le n$? Problems such as the longest possible cycle in such permutations, the minimum number of disjoint cycles, etc.\ were studied in \cite{P2}, \cite{S}, \cite{M} and elsewhere. The enumeration problems have not been well-studied, though the first one has an OEIS page: A320843.
\section*{Acknowledgments} I thank Sergi Elizalde for informing me of \cite{J} and \cite{O} and I am grateful to Nathan McNew for suggesting the argument in Section \ref{sec:ubmcnew}.
\end{document}
One in ten rule
In statistics, the one in ten rule is a rule of thumb for how many predictor parameters can be estimated from data when doing regression analysis (in particular proportional hazards models in survival analysis and logistic regression) while keeping the risk of overfitting and finding spurious correlations low. The rule states that one predictive variable can be studied for every ten events.[1][2][3][4] For logistic regression the number of events is given by the size of the smallest of the outcome categories, and for survival analysis it is given by the number of uncensored events.[3]
For example, if a sample of 200 patients is studied and 20 patients die during the study (so that 180 patients survive), the one in ten rule implies that two pre-specified predictors can reliably be fitted to the total data. Similarly, if 100 patients die during the study (so that 100 patients survive), ten pre-specified predictors can be fitted reliably. If more are fitted, the rule implies that overfitting is likely and the results will not predict well outside the training data. It is not uncommon to see the 1:10 rule violated in fields with many variables (e.g. gene expression studies in cancer), decreasing the confidence in reported findings.[5]
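The arithmetic behind these examples is simple to make explicit. The sketch below uses an illustrative function name (not from any statistical package): the limiting count is the smaller outcome category, and the integer quotient by ten gives the number of predictors the rule allows.

```python
def max_predictors(events, non_events, events_per_variable=10):
    # For logistic regression the limiting count is the size of the
    # smaller outcome category; for survival analysis, substitute the
    # number of uncensored events.
    limiting = min(events, non_events)
    return limiting // events_per_variable

print(max_predictors(20, 180))   # -> 2 predictors
print(max_predictors(100, 100))  # -> 10 predictors
```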
Improvements
A "one in 20 rule" has been suggested, indicating the need for shrinkage of regression coefficients, and a "one in 50 rule" for stepwise selection with the default p-value of 5%.[4][6] Other studies, however, show that the one in ten rule may be too conservative as a general recommendation and that five to nine events per predictor can be enough, depending on the research question.[7]
More recently, a study has shown that the ratio of events per predictive variable is not a reliable statistic for estimating the minimum number of events for estimating a logistic prediction model.[8] Instead, the number of predictor variables, the total sample size (events + non-events) and the events fraction (events / total sample size) can be used to calculate the expected prediction error of the model that is to be developed.[9] One can then estimate the required sample size to achieve an expected prediction error that is smaller than a predetermined allowable prediction error value.[9]
Alternatively, three requirements for prediction model estimation have been suggested: the model should have a global shrinkage factor of ≥ .9, an absolute difference of ≤ .05 in the model's apparent and adjusted Nagelkerke R2, and a precise estimation of the overall risk or rate in the target population.[10] The necessary sample size and number of events for model development are then given by the values that meet these requirements.[10]
Literature
• David A. Freedman (1983) A Note on Screening Regression Equations, The American Statistician, 37:2, 152-155, DOI: 10.1080/00031305.1983.10482729
References
1. Harrell, F. E. Jr.; Lee, K. L.; Califf, R. M.; Pryor, D. B.; Rosati, R. A. (1984). "Regression modelling strategies for improved prognostic prediction". Stat Med. 3 (2): 143–52. doi:10.1002/sim.4780030207. PMID 6463451.
2. Harrell, F. E. Jr.; Lee, K. L.; Mark, D. B. (1996). "Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors" (PDF). Stat Med. 15 (4): 361–87. doi:10.1002/(sici)1097-0258(19960229)15:4<361::aid-sim168>3.0.co;2-4. PMID 8668867.
3. Peduzzi, Peter; Concato, John; Kemper, Elizabeth; Holford, Theodore R.; Feinstein, Alvan R. (1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology. 49 (12): 1373–1379. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.
4. "Chapter 8: Statistical Models for Prognostication: Problems with Regression Models". Archived from the original on October 31, 2004. Retrieved 2013-10-11.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
5. Ernest S. Shtatland, Ken Kleinman, Emily M. Cain. Model building in Proc PHREG with automatic variable selection and information criteria. Paper 206–30 in SUGI 30 Proceedings, Philadelphia, Pennsylvania April 10–13, 2005. http://www2.sas.com/proceedings/sugi30/206-30.pdf
6. Steyerberg, E. W.; Eijkemans, M. J.; Harrell, F. E. Jr.; Habbema, J. D. (2000). "Prognostic modelling with logistic regression analysis: a comparison of selection and estimation methods in small data sets". Stat Med. 19 (8): 1059–1079. doi:10.1002/(sici)1097-0258(20000430)19:8<1059::aid-sim412>3.0.co;2-0. PMID 10790680.
7. Vittinghoff, E.; McCulloch, C. E. (2007). "Relaxing the Rule of Ten Events per Variable in Logistic and Cox Regression". American Journal of Epidemiology. 165 (6): 710–718. doi:10.1093/aje/kwk052. PMID 17182981.
8. van Smeden, Maarten; de Groot, Joris A. H.; Moons, Karel G. M.; Collins, Gary S.; Altman, Douglas G.; Eijkemans, Marinus J. C.; Reitsma, Johannes B. (2016-11-24). "No rationale for 1 variable per 10 events criterion for binary logistic regression analysis". BMC Medical Research Methodology. 16 (1): 163. doi:10.1186/s12874-016-0267-3. ISSN 1471-2288. PMC 5122171. PMID 27881078.
9. van Smeden, Maarten; Moons, Karel Gm; de Groot, Joris Ah; Collins, Gary S.; Altman, Douglas G.; Eijkemans, Marinus Jc; Reitsma, Johannes B. (2018-01-01). "Sample size for binary logistic prediction models: Beyond events per variable criteria". Statistical Methods in Medical Research. 28 (8): 2455–2474. doi:10.1177/0962280218784726. ISSN 1477-0334. PMC 6710621. PMID 29966490.
10. Riley, Richard D.; Snell, Kym IE; Ensor, Joie; Burke, Danielle L.; Jr, Frank E. Harrell; Moons, Karel GM; Collins, Gary S. (2018). "Minimum sample size for developing a multivariable prediction model: PART II - binary and time-to-event outcomes". Statistics in Medicine. 38 (7): 1276–1296. doi:10.1002/sim.7992. ISSN 1097-0258. PMC 6519266. PMID 30357870.
Review | Open Access | Published: 27 February 2018

A review on multi-task metric learning

Peipei Yang (ORCID: 0000-0002-1106-7062), Kaizhu Huang & Amir Hussain

Big Data Analytics, volume 3, Article number: 3 (2018)
Distance metric plays an important role in machine learning which is crucial to the performance of a range of algorithms. Metric learning, which refers to learning a proper distance metric for a particular task, has attracted much attention in machine learning. In particular, multi-task learning deals with the scenario where there are multiple related metric learning tasks. By jointly training these tasks, useful information is shared among the tasks, which significantly improves their performances. This paper reviews the literature on multi-task metric learning. Various methods are investigated systematically and categorized into four families. The central ideas of these methods are introduced in detail, followed by some representative applications. Finally, we conclude the review and propose a number of future work directions.
In the areas of machine learning, pattern recognition, and data mining, the concept of a distance metric plays an important role, and a proper distance metric is critical to the performance of many algorithms. For example, nearest neighbor classification relies on the metric to identify the nearest neighbors of a sample and determine its class, whilst k-means clustering uses the metric to determine which cluster a sample should belong to.
The metric is usually used as a measure of similarity or dissimilarity, and there are various types of pre-defined distance metrics, such as Euclidean distance, cosine similarity, Hamming distance, etc. However, in practical applications, these general-purpose metrics are insufficient to capture the particular properties of different tasks. Therefore, researchers propose learning a metric from data for a particular task, so as to improve algorithm performance. This is termed metric learning [1–7].
With the advent of data science, challenging and evolving problems have arisen. Obtaining training data is a costly process, hence complex models are being trained on small datasets, resulting in poor generalization. Alongside this the number of tasks to be learnt has increased significantly. To overcome these problems, multi-task learning is proposed [8–13]. It aims to consider multiple tasks simultaneously at a higher level, whilst transferring useful information among different tasks to improve their performances.
Since multi-task learning was proposed by Caruana [8] in 1997, various strategies have been designed based on different assumptions. There are also some closely related topics, such as transfer learning [14, 15], domain adaptation [16], meta-learning [17], life-long learning [18], learning to learn [19], etc. In spite of some minor discrepancies among them, they share the same basic idea that performance is improved by considering multiple learning tasks jointly and sharing information among tasks.
Against this background, it is natural to consider the problem of multi-task metric learning. However, most multi-task learning algorithms designed for traditional models are difficult to apply to metric learning due to the obvious differences between the two kinds of models. To resolve this problem, a series of multi-task learning approaches have been specifically designed for metric learning models. By properly coupling multiple metric learning tasks, their performances are effectively improved.
Metric learning has the particularity that its effect on performance can only be evaluated indirectly, through the algorithm that relies on the learnt metric. This requires a way of constructing the multi-task learning framework different from that for traditional models. As far as we know, there is at present no review on multi-task metric learning, hence this paper gives a general overview of the existing works.
The rest of the paper is organized as follows. First we provide an overview of the basic concepts of metric learning and briefly introduce multi-task metric learning. Next, various strategies of multi-task metric learning approaches are reviewed. We then introduce some representative applications of multi-task metric learning, and conclude with a discussion on potential future issues.
In this section, we first provide an overview of metric learning, including its concept and several representative algorithms. Then a general description about multi-task metric learning is presented, leaving the details of the algorithms for the next section.
A brief review on metric learning
The notion of a distance metric was originally a mathematical concept, referring to a function defined on $\mathcal {X}$ as $d:\mathcal {X}\times \mathcal {X}\rightarrow \mathbf {R}_{+}=[\!0,+\infty)$ satisfying positiveness, symmetry, and the triangle inequality [20]. In the machine learning community, a metric need not keep its original mathematical definition, but usually refers to a general measure of dissimilarity or similarity. Many machine learning algorithms, such as nearest neighbor classification and k-means clustering, use it to measure the dissimilarity between samples without explicitly referring to its definition.
There have been various types of pre-defined metrics for general purposes. For example, for two points in the d-dimensional space $\mathbf {x}_{i},\mathbf {x}_{j}\in \mathcal {X}=\mathbb {R}^{d}$, the most frequently used Euclidean distance is defined as $d(\mathbf{x}_i,\mathbf{x}_j)=\|\mathbf{x}_i-\mathbf{x}_j\|_2$. Another example is the Mahalanobis metric [21], defined as $d_{\mathbf {M}}(\mathbf {x}_{i},\mathbf {x}_{j})=\sqrt {(\mathbf {x}_{i}-\mathbf {x}_{j})^{\top }\mathbf {M}(\mathbf {x}_{i}-\mathbf {x}_{j})}$, where the symmetric positive semi-definite matrix M is the Mahalanobis matrix that determines the metric.
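For concreteness, the following minimal sketch (the function name and example values are ours) evaluates both definitions directly; choosing M = I recovers the Euclidean distance.

```python
import numpy as np

def mahalanobis(x, y, M):
    # d_M(x, y) = sqrt((x - y)^T M (x - y)); M must be symmetric PSD
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.0])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
print(mahalanobis(x, y, np.eye(2)))  # Euclidean: sqrt(8)  ~ 2.8284
print(mahalanobis(x, y, M))          # under M:   sqrt(12) ~ 3.4641
```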
In spite of their widespread usage, pre-defined metrics are incapable of capturing the variety of real applications. Considering their importance to the performance of algorithms, researchers propose to learn a metric from the data instead of using pre-defined metrics directly. By adapting the metric to the specific data of some algorithm, the performance is expected to be effectively improved. This is the central idea of metric learning.
However, it is hardly practicable to learn a general metric by directly finding an optimum in function space. A practical way is to define a family of metrics determined by some parameters, and transform the problem into solving for the optimal parameters. The Mahalanobis metric provides a perfect candidate for such a family: it has a simple formulation and is uniquely determined by the Mahalanobis matrix. In this case, metric learning is equivalent to learning the Mahalanobis matrix.
Xing et al. [1] proposed the idea of metric learning together with the first algorithm in 2002. Since then, various metric learning methods have been proposed based on different strategies. Since metrics can be categorized into several families according to their properties, such as global vs. local, or linear vs. non-linear, metric learning approaches can be categorized accordingly. The Mahalanobis metric is a typical global linear metric. Because almost all existing multi-task metric learning approaches are based on global metrics, we focus on this type in this review, especially global linear metrics. Please refer to [22, 23] and their references for other types.
Most metric learning algorithms are formulated as a constrained optimization problem and the metric is obtained by solving this optimization. Since the distance is defined on two points, the supervised information to determine the metric, which is also called side-information in metric learning, is usually given by constraints on pairs or triplets as follows [22].
Must-link / cannot-link constraints (positive/negative pairs)
$$ \begin{aligned} \mathcal{S}=&\{(\mathbf{x}_{i},\mathbf{x}_{j}): \mathbf{x}_{i}\ \text{and}\ \mathbf{x}_{j}\ \text{should be similar}\},\\ \mathcal{D}=&\{(\mathbf{x}_{i},\mathbf{x}_{j}): \mathbf{x}_{i}\ \text{and}\ \mathbf{x}_{j}\ \text{should be dissimilar}\}. \end{aligned} $$
Relative constraints (training triplets)
$$ \mathcal{R}=\{(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{x}_{k}): \mathbf{x}_{i}\ \text{should be more similar to}\ \mathbf{x}_{j}\ \text{than to}\ \mathbf{x}_{k}\}. $$
Using these constraints, we briefly introduce the strategies of some metric learning approaches. Xing's method [1] aims to maximize the sum of distances between dissimilar pairs while keeping the sum of squared distances between similar pairs small. It is an example of learning with positive/negative pairs. Large Margin Nearest Neighbors (LMNN) [2, 24] requires the k nearest neighbors to belong to the same class and pushes out all the impostors (instances of other classes existing in the neighborhood). The side-information is provided by relative constraints. Information-theoretic metric learning (ITML) [3], which is also built with positive/negative pairs, models the problem with the log-determinant. Sparse Metric Learning [6] uses the mixed L2,1 norm to obtain joint feature selection during metric learning, and Huang et al. [4, 5] proposes a unified framework for Generalized Sparse Metric Learning (GSML). Robust Metric Learning (RML) [25] deals with noisy training constraints based on robust optimization.
It is notable that learning a Mahalanobis matrix can also be regarded as learning a linear transformation. For any symmetric positive semi-definite Mahalanobis matrix M, there is a decomposition M=L⊤L, and the distance can then be reformulated as
$$ \begin{aligned} d_{\mathbf{M}}(\mathbf{x}_{i},\mathbf{x}_{j})=&\sqrt{(\mathbf{x}_{i}-\mathbf{x}_{j})^{\top} \mathbf{M} (\mathbf{x}_{i}-\mathbf{x}_{j})}\\ = & \sqrt{(\mathbf{x}_{i}-\mathbf{x}_{j})^{\top} \mathbf{L}^{\top} \mathbf{L} (\mathbf{x}_{i}-\mathbf{x}_{j})} \\ = & \sqrt{(\mathbf{L}\mathbf{x}_{i}-\mathbf{L}\mathbf{x}_{j})^{\top} (\mathbf{L}\mathbf{x}_{i}-\mathbf{L}\mathbf{x}_{j})} \\ = & \|\mathbf{L}\mathbf{x}_{i}-\mathbf{L}\mathbf{x}_{j}\|_{2}. \end{aligned} $$
By (1), the Mahalanobis metric defined by M is equivalent to the Euclidean distance after performing the linear transformation L, and thus metric learning can be also performed by learning such a linear transformation. Neighbourhood Component Analysis (NCA) [26] is an example of this class that optimizes the expected leave-one-out error of a stochastic nearest neighbor classifier by learning a linear transformation. Furthermore, the linear metric can be easily extended to the non-linear metric by replacing the linear transformation L with a non-linear transformation f, which is defined as
$$ d_{\mathbf{f}}(\mathbf{x}_{i},\mathbf{x}_{j})=\|\mathbf{f}(\mathbf{x}_{i})-\mathbf{f}(\mathbf{x}_{j})\|_{2}. $$
Then, the metric is obtained by learning an appropriate non-linear transformation f. Since deep learning has achieved remarkable success in computer vision and machine learning [27], researchers have recently proposed deep metric learning [28, 29]. These methods resort to deep neural networks to learn a non-linear transformation, and differ from traditional neural networks in that their learning objectives are given by constraints on distances.
There are a lot of metric learning methods because the metric plays an important role in many applications. We cannot introduce them all in detail due to space limitations; readers can refer to [22] for a systematic review on metric learning.
An overview of multi-task metric learning
Since the concept of multi-task learning was proposed by Caruana [8] in 1997, this topic has attracted much attention from researchers in machine learning. Many different methods have been proposed to construct frameworks for simultaneously learning multiple tasks with conventional models, such as linear classifiers or support vector machines. The performances of the original models are effectively improved by learning them simultaneously.
However, these methods cannot be used directly for metric learning since there exist significant discrepancies between the conventional learning models and metric learning models. Taking the popular support vector machine (SVM) [30] as an example of conventional models, we can show the differences between it and metric learning. First, the training data of the two models are of different structures. For SVM, the training samples are given by points with a label for each one, while for metric learning they are given by pairs or triplets with a label for each one. Second, their models are of different types. The model of SVM is a single-input single-output function parameterized by a weight vector and a bias, while the model of metric learning is a double-input single-output function parameterized by a symmetric positive semi-definite matrix. Third, the algorithms take effect on the performance in different ways. For SVM, the classification accuracy is given by the algorithm directly, while for metric learning, the performance has to be evaluated indirectly by other algorithms working with the learned metric.
Due to the reasons mentioned above, strategies have to be specially designed to construct a multi-task metric learning model. They have to deal with two problem: (1) what type of useful information is shared among different metric learning tasks; (2) how such information is shared by the proposed model and algorithm. Parameswaran et al. [31] proposes the first multi-task metric learning approach in 2010, and in the following years a variety of strategies have been proposed for multi-task metric learning. We generally categorize them into the following families according to the way how the information is shared:
Assume that the Mahalanobis matrix of each metric is composed of several components and share some composition.
Pre-define the relationship among tasks or learn such relationship from data, and constrain the learning process with this relationship.
Use a common metric with proper regularization to couple the metrics.
Consider metric learning from the perspective of learning a transformation and share some parts of transformation.
There are some representative works in each family and we will introduce them in detail in the next section. Figure 1 gives a summary of the multi-task metric learning approaches mentioned in this paper.
A summary of multi-task metric learning approaches. This figure gives a summary of the approaches mentioned in this paper, where the name of each method is under the branch of its corresponding type
Review on multi-task metric learning approaches
In this section, we investigate the multi-task metric learning approaches published to-date and provide a detailed review on them. The methods are organized according to the type of strategies. We focus on only the models and algorithms in this section without mentioning their application backgrounds, which are left for the next section. The discussion about the relation between some closely related methods is also included.
Before diving into the details, we summarize the main features of these multi-task metric learning methods in Table 1. Besides, in this section, we always use M to represent the Mahalanobis matrix to keep the notations uniform, which may be different from the original papers.
Table 1 Main features of multi-task metric learning methods
Sharing composition of Mahalanobis matrices
Since the Mahalanobis metric is uniquely determined by the Mahalanobis matrix, a natural way to couple multiple related metrics is to share some composition of their Mahalanobis matrices. Specifically, the Mahalanobis matrix of each task is assumed to be composed of both common composition shared by all tasks and task-specific composition preserving its specific properties. This strategy is the most popular way to construct a multi-task metric learning model and we introduce some representative ones below.
Large margin multi-task metric learning (mt-LMNN) Parameswaran et al. [31] proposes a multi-task learning model based on the idea of sharing a common composition of the Mahalanobis matrices. It is motivated by regularized multi-task learning (RMTL) [9], and obtained by adapting RMTL to large-margin nearest neighbor metric learning (LMNN) [2, 24]. To couple multiple tasks, each Mahalanobis matrix is decomposed into a common part M0 and a task-specific part M_t. Thus the distance between two points $\mathbf {x}_{i},\mathbf {x}_{j}\in \mathcal {X}$ under the metric of the t-th task is defined as
$$ d_{t}(\mathbf{x}_{i},\mathbf{x}_{j})=(\mathbf{x}_{i}-\mathbf{x}_{j})^{\top}(\mathbf{M}_{0}+\mathbf{M}_{t})(\mathbf{x}_{i}-\mathbf{x}_{j}), $$
By restricting that M0≽0 and M t ≽0,∀t, the Mahalanobis matrix for each task is ensured to be positive semi-definite, which induces a positive semi-definite metric. In this model, M0 picks up the general trends across all tasks while M t gathers the individual information for each task. The obtained regularization of mt-LMNN is
$$ \gamma_{0} \|\mathbf{M}_{0}-\mathbf{I}\|_{\mathrm{F}}^{2}+\sum_{t=1}^{T}{\gamma_{t}\|\mathbf{M}_{t}\|_{\mathrm{F}}^{2}}. $$
In (3), the side-information is incorporated by constraints generated from triplets as LMNN [2]. The regularization on task-specific matrices M t 's represses the specialty of each task and encourages the shared part of all tasks, while the regularization on M0 restricts the common part to be close to the identity. They further make the learnt metric of each task not far from the Euclidean metric.
The hyper-parameters $\gamma_t>0$ control the balance between commonness and speciality, while $\gamma_0$ controls the regularization of the common part. As $\gamma_t$ increases, the task-specific parts become small and the learnt metrics of all tasks tend to be similar. When $\gamma_t\to\infty$, the algorithm learns a single metric M0 shared by all tasks, while when $\gamma_t\to 0$, all tasks tend to be learnt individually. On the other hand, when $\gamma_0\to\infty$, the common part M0 becomes the identity and the Euclidean metric is obtained; when $\gamma_0\to 0$, there is no regularization on the common part. This model is convex and can be solved effectively.
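The two ingredients above are easy to express in code. The following is a minimal sketch of the coupled squared distance (3) and the regularizer (4) only; the triplet-based loss and the optimization are omitted, and the function names are ours.

```python
import numpy as np

def mt_lmnn_sq_distance(x_i, x_j, M0, Mt):
    # Squared distance of task t: (x_i - x_j)^T (M0 + Mt) (x_i - x_j)
    d = x_i - x_j
    return float(d @ (M0 + Mt) @ d)

def mt_lmnn_regularizer(M0, Ms, gamma0, gammas):
    # gamma_0 * ||M0 - I||_F^2 + sum_t gamma_t * ||Mt||_F^2
    reg = gamma0 * np.linalg.norm(M0 - np.eye(M0.shape[0]), 'fro') ** 2
    return reg + sum(g * np.linalg.norm(M, 'fro') ** 2
                     for g, M in zip(gammas, Ms))
```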
This is the first attempt to apply a multi-task approach to metric learning problem. It provides a simple yet effective way to improve the performance of metric learning by jointly learning multiple tasks. However, the idea of splitting each Mahalanobis matrix into a common part and an individual part is not easy to explain from the perspective of distance metric and can only deal with some simple cases.
Multi-task multi-feature similarity learning (M2SL) Wang et al. [32] proposes a multi-task multi-feature metric learning approach to adapt metric learning to large-scale visual applications. For each sample, M types of features are extracted and a metric is learnt individually for each feature. For each feature channel, there are T tasks and each task learns a distance metric. To share information among tasks, the Mahalanobis matrix of the t-th task in the m-th feature channel is defined to be a combination of a common part $\mathbf {M}_{0}^{(m)}$ and an individual part $\mathbf {M}_{t}^{(m)}$. The authors then incorporate this formulation into idealized kernel learning [33] and obtain the multi-feature multi-task metric learning model as
$$ \begin{aligned} \min_{\mathbf{b},\mathbf{M}}&{\frac{1}{2}\left(\gamma_{0}\sum\limits_{m=1}^{M}{\frac{1}{b_{0}^{(m)}}\left\|\mathbf{M}_{0}^{(m)}\right\|_{\mathrm{F}}} +\sum\limits_{t=1}^{T}\sum\limits_{m=1}^{M}{\frac{\gamma_{t}}{b_{t}^{(m)}}\left\|\mathbf{M}_{t}^{(m)}\right\|_{\mathrm{F}}^{2}}\right)}\\ &+\frac{C}{N}\sum\limits_{t=1}^{T}\sum\limits_{ij\in S}{\xi_{t}^{ij}}+\frac{\eta}{2}\sum\limits_{t=0}^{T}{\big\|\mathbf{b}_{t}\big\|_{p}^{2}}\\ \mathrm{s.t.}\;&\delta_{t}^{ij}\left(d_{t}^{ij}-\tilde{d}_{t}^{ij}\right)\geq \sigma_{t}^{ij}-\xi_{t}^{ij},\,\xi_{t}^{ij}\geq 0,\,b_{t}^{(m)}\geq 0,\,p>1,\,\mathbf{M}_{t}^{(m)}\succeq \mathbf{0} \end{aligned} $$
where the distance is defined as
$$ \tilde{d}_{t}^{ij}=\sum\limits_{m=1}^{M}{\tilde{d}_{t}^{ij,m}},\; \tilde{d}_{t}^{ij,m}=\left(\mathbf{x}_{t}^{i,m}-\mathbf{x}_{t}^{j,m}\right)^{\top}\left(\mathbf{M}_{0}^{(m)}+\mathbf{M}_{t}^{(m)}\right)\left(\mathbf{x}_{t}^{i,m}-\mathbf{x}_{t}^{j,m}\right). $$
The variable $\delta _{t}^{ij}$ denotes the label of similar/dissimilar labeled pairs, and $\sigma _{t}^{ij}$ is a predefined threshold for hinge loss. The parameters b0 and b t represent weights for the sharing part and discriminating parts respectively, and the last term is the regularization on these weights.
Using this approach, the information contained in different tasks is shared among them and the multiple features are used in a more effective way. It uses the same strategy as mt-LMNN to construct the multi-task metric learning model and thus has similar advantages and disadvantages.
Multi-task sparse compositional metric learning (mt-SCML) Shi et al. [34] proposes a multi-task metric learning framework from the perspective of sparse combination. The authors first propose a sparse compositional metric learning (SCML) approach which regards a Mahalanobis matrix as a nonnegative weighted sum of K rank-1 positive semi-definite matrices:
$$ \mathbf{M}=\sum\limits_{i=1}^{K}{w_{i}\mathbf{b}_{i}\mathbf{b}_{i}^{\top}},\;\textup{with}\,\mathbf{w}\geq 0, $$
where the b i 's are D-dimensional column vectors. Noting that the distance between any two points (x,y) determined by M is calculated by
$$ d^{2}_{\mathbf{M}}(\mathbf{x},\mathbf{y})=(\mathbf{x}-\mathbf{y})^{\top}\mathbf{M}(\mathbf{x}-\mathbf{y}) =\sum\limits_{i=1}^{K}{w_{i}\left(\mathbf{b}_{i}^{\top}(\mathbf{x}-\mathbf{y})\right)^{2}}, $$
the vectors b i 's span the common low-dimensional subspace in which the metric is defined.
Using such a formulation, each rank-1 matrix is a basis and the metric can be reformulated as a sparse combination of these bases. Then the metric learning is a process of learning such weights, which is shown as
$$\min_{\mathbf{w}}{\frac{1}{|C|}\sum_{(\mathbf{x}_i,\mathbf{x}_j,\mathbf{x}_k)\in C} {L_{\mathbf{w}}(\mathbf{x}_i,\mathbf{x}_j,\mathbf{x}_k)}+\beta\|\mathbf{w}\|_1}, $$
where L defines the loss from side-information as
$$ L_{\mathbf{w}}(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{x}_{k}) =\left[1+d_{\mathbf{w}}(\mathbf{x}_{i},\mathbf{x}_{j})-d_{\mathbf{w}}(\mathbf{x}_{i},\mathbf{x}_{k})\right]_{+} $$
with [ ·]+= max(0,·), and the ℓ1 regularization encourages a sparse solution of w.
When there are T tasks to be learned together, the multi-task learning can be easily obtained by applying a structure regularization on these weights. To be specific, the authors assume that the different tasks share a common low-dimensional subspace for the reconstruction weights, and use a mixed norm to obtain the structure sparsity. The formulation of mt-SCML is shown as
$$ \min_{\mathbf{W}}{\sum\limits_{t=1}^{T}{\frac{1}{|C_{t}|}\sum\limits_{(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{x}_{k})\in C_{t}} {L_{\mathbf{w}_{t}}(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{x}_{k})}}+\beta\|\mathbf{W}\|_{2,1}}, $$
where W is a T×K nonnegative matrix in which each row w_t defines the reconstruction weight vector for the t-th task, and ∥W∥2,1 is the ℓ2/ℓ1 mixed norm. It equals the ℓ1 norm applied to the ℓ2 norms of the columns of W, which induces group sparsity at the column level, i.e., it encourages some columns to be zero together and thus makes the different tasks share the same reconstruction bases.
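The two building blocks of mt-SCML are simple to sketch: assembling a task metric from the shared rank-1 bases, and the ℓ2/ℓ1 mixed norm that induces column-level group sparsity. Below is a minimal illustration with random example data; the names and sizes are ours.

```python
import numpy as np

def task_metric(w_t, B):
    # M_t = sum_i w_{t,i} * b_i b_i^T with nonnegative weights w_t;
    # the rows of B are the K basis vectors b_i
    return sum(w * np.outer(b, b) for w, b in zip(w_t, B))

def l21_norm(W):
    # ||W||_{2,1}: l1 norm of the column-wise l2 norms, which pushes
    # entire columns (i.e., shared bases) to vanish together
    return float(np.sum(np.linalg.norm(W, axis=0)))

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 3))          # K = 5 bases in dimension d = 3
W = np.abs(rng.normal(size=(2, 5)))  # T = 2 tasks, one weight row each
M_1 = task_metric(W[0], B)           # PSD by construction
print(l21_norm(W))
```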
This method naturally introduces the idea of group sparsity into multi-task metric learning, and the proposed approach is not difficult to realize. However, the algorithm requires the set of rank-one basis metrics to be pre-trained, so they cannot be optimized simultaneously with the weights.
Two-level multi-task metric learning (TMTL) Liu et al. [35] proposes a two-level multi-task metric learning approach that combines multiple metrics directly, without an explicit optimization procedure. It is developed based on KISSME [36], a metric learning approach motivated by statistical inference that defines the Mahalanobis matrix as
$$ \mathbf{M}=\Sigma_{S}^{-1}-\Sigma_{D}^{-1}. $$
This model is extended to a two-level multi-task learning paradigm in a rather simple way. The authors first learn a Mahalanobis matrix for each task respectively, as well as a common metric over all samples. The final individual Mahalanobis matrix is then given by the direct weighted composition
$$ \hat{\mathbf{M}}_{t}^{k}=\mathbf{M}_{0}^{k}+\mu\mathbf{M}_{t}^{k}. $$
This method is so simple that no optimization procedure is needed. Strictly speaking, however, it is not a typical metric learning method and can deal with only some special problems.
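Since the combination step is purely arithmetic, the whole construction fits in a few lines. The sketch below uses our own naming, and omits the projection onto the PSD cone that KISSME applies to the estimated matrix.

```python
import numpy as np

def kissme_metric(similar_pairs, dissimilar_pairs):
    # M = Sigma_S^{-1} - Sigma_D^{-1}, with each Sigma estimated as the
    # covariance of the difference vectors of the given pairs
    def cov(pairs):
        d = np.array([x - y for x, y in pairs])
        return d.T @ d / len(d)
    return (np.linalg.inv(cov(similar_pairs))
            - np.linalg.inv(cov(dissimilar_pairs)))

def tmtl_metric(M0, Mt, mu):
    # Two-level combination: M_hat_t = M0 + mu * Mt
    return M0 + mu * Mt
```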
Online semi-supervised multi-task distance metric learning (online-SMDM) Li et al. [37] proposes a semi-supervised metric learning approach that is capable of utilizing unlabeled data to learn the metric. The method is designed based on regularized distance metric learning [38] and extended to a multi-task metric learning model called online semi-supervised multi-task distance metric learning. It assumes each Mahalanobis matrix to be composed of a common part M0 and a task-specific part M_t, as [31] does, and proposes an online algorithm to solve the model effectively.
To utilize the unlabeled data during training process, the authors assign labels for the unlabeled pairs:
$$ y_{ij}=\left\{ \begin{array}{ll} 1, & \text{if}\ \mathbf{x}_{i}\in N(\mathbf{x}_{j})\ \text{or}\ \mathbf{x}_{j}\in N(\mathbf{x}_{i});\\ 0, & \text{otherwise}, \end{array} \right. $$
where N(x_i) denotes the set of nearest neighbors of x_i computed with the Euclidean distance. Eq. (5) indeed assumes that if one point is among the nearest neighbors of the other, the two points should have the same label. The semi-supervised model can then be formulated as
$$ \begin{aligned} \min_{\mathbf{M}}&{\sum\limits_{t=1}^{T}\left\{\frac{2}{N_{tl}(N_{tl}-1)}\sum\limits_{(\mathbf{x}_{i},\mathbf{x}_{j})\in D_{tl}}{g_{l}\left(y_{ij}\left[1-\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{\mathbf{M}_{t}+\mathbf{M}_{0}}\right]\right)}\right.}\\ &+\frac{2\beta}{N_{tu}(N_{tu}-1)}\sum\limits_{(\mathbf{x}_{i},\mathbf{x}_{j})\in D_{tu}}{g_{u}\left(y_{ij}\left[1-\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{\mathbf{M}_{t}+\mathbf{M}_{0}}\right]\right)}\\ &\left.+\frac{\lambda}{2}\|\mathbf{M}_{t}\|_{\mathrm{F}}^{2}{\vphantom{\frac{2}{N_{tl}(N_{tl}-1)}\sum\limits_{(\mathbf{x}_{i},\mathbf{x}_{j})\in D_{tl}}}}\right\}+\gamma T\|\mathbf{M}_{0}\|_{\mathrm{F}}^{2},\\ \mathrm{s.t.}\;&\mathbf{M}\succeq \mathbf{0}, \end{aligned} $$
where D tl and D tu represent the sets of labeled data pairs and unlabeled data pairs respectively, N tl and N tu are the numbers of the labeled and unlabeled training data, λ and γ are both hyper-parameters to control the regularization on the individual parts and the common part, and M represents all the M t 's and M0 for brevity.
This method utilizes the unlabeled data by assigning labels to them according to the original distances. The strategy for constructing the multi-task model is the same as in the previous approaches.
Hierarchical multi-task metric learning (HMML) Zheng et al. [39] proposes an approach that learns multiple sparse metrics hierarchically over a visual tree. In this work, a visual tree is first constructed to organize the categories in a coarse-to-fine fashion. Then a top-down approach is used to learn multiple metrics along the visual tree, where the model is expected to benefit from leveraging both the inter-node and the inter-level visual correlations.
Construction of the visual tree is composed of two key steps: (a) Active Sampling for Category Representation, which utilizes active sampling to find multiple representative samples for each image category. (b) Hierarchical Affinity Propagation Clustering for Node Partitioning, which is a top-down approach to hierarchical affinity propagation (AP) clustering. It starts from the root node containing all the image categories and ends at the leaf nodes containing only one single image category. Figure 2 gives an example of the enhanced visual tree for CIFAR-100. In this tree, the categories are organized in a hierarchical structure according to their similarities.
An example of enhanced visual tree. The visual tree is constructed on the CIFAR-100 image set with 100 categories and its depth is 4. This figure is from the original paper [39]
According to the construction procedure of the visual tree, categories on the same branch are more similar to each other than the ones on other branches. Thus, it is reasonable to perform multi-task metric learning over the sibling child nodes under the same parent node to utilize the inter-node visual correlation among them. The authors exploit the same strategy as mtLMNN [31] which decomposes the metric into a common part and an individual part as
$$ d_{t}(\mathbf{x}_{i},\mathbf{x}_{j})=\sqrt{(\mathbf{x}_{i}-\mathbf{x}_{j})^{\top} (\mathbf{M}_{0}+\mathbf{M}_{t})(\mathbf{x}_{i}-\mathbf{x}_{j})}, $$
where M0 defines the common metric shared among all sibling child nodes and M t defines the node-specific metric.
For the root node, the joint objective function is then defined as
$$ \begin{aligned} \min_{\mathbf{M}_{0},\ldots,\mathbf{M}_{T}}&{\gamma_{0}\big\|\mathbf{M}_{0}-\mathbf{I}\big\|_{\mathrm{F}}^{2}+\sum\limits_{t=1}^{T}{\alpha_{t}\text{tr}[\mathbf{M}_{0}+\mathbf{M}_{t}]}}\\ &+\sum\limits_{t=1}^{T}\left[\gamma_{t}\big\|\mathbf{M}_{t}\big\|_{\mathrm{F}}^{2}+\sum\limits_{i,j}{d_{t}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})}+\sum\limits_{i,j,k}{\xi_{i,j,k}}\right],\\ \mathrm{s.t.}\;&d_{t}^{2}(\mathbf{x}_{i},\mathbf{x}_{k})-d_{t}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})\geq 1-\xi_{i,j,k},\\ &\xi_{i,j,k}\geq 0,\\ &\mathbf{M},\mathbf{M}_{1},\ldots,\mathbf{M}_{T} \succeq \mathbf{0}. \end{aligned} $$
where the parameters γ0 and γ t 's control the regularization on the common part and individual part respectively.
For non-root nodes at the mid-level of the visual tree, besides the inter-node correlations, the inter-level visual correlations between the parent node and its sibling child nodes at the next level should also be exploited. Since all nodes on the same branch are similar, any node p characterizes the common visual properties of its sibling child nodes. On the other hand, the task-specific metric M_p for node p contains the task-specific composition. Thus, it is reasonable to utilize the task-specific metric of node p to help the learning of its sibling child nodes. Based on this idea, the regularization β∥M0−M_p∥2 is added into the objective of (6) for non-root nodes, where M0 is the common metric shared among the sibling child nodes under parent node p and M_p is the task-specific metric for node p at the upper level.
This method introduces the hierarchical visual tree into multi-task metric learning, which is used to guide the multi-task learning and thus provides a more powerful capability of describing the relationship among tasks.
Task relationship learning and regularization
Transfer metric learning by learning task relationship (TML) Zhang et al. [40, 41] proposes a multi-task metric learning approach that learns the task relationships. This model is also a direct adaptation of a traditional multi-task learning approach to the metric learning task. The authors proposed multi-task relationship learning (MTRL) [13] in their previous work, which assumes all the parameter vectors follow a matrix-variate normal distribution [42] and automatically learns the relationships between tasks through a regularization. Since the parameter to be learned in metric learning is a matrix rather than a vector, the authors concatenate all columns of the Mahalanobis matrix to form a vector for each task, $\tilde {\mathbf {M}}_{t}=\text {vec}(\mathbf {M}_{t})$, and then apply the regularization of MTRL to it: $\text{tr}(\tilde {\mathbf {M}}\mathbf {\Omega }^{-1}\tilde {\mathbf {M}}^{\top })$, where $\tilde {\mathbf {M}}=\left [\text {vec}(\mathbf {M}_{1}),\ldots,\text {vec}(\mathbf {M}_{T})\right ]$. This is equivalent to applying the following matrix-variate normal prior distribution to the $\tilde {\mathbf {M}}_{t}$'s.
$$ q(\tilde{\mathbf{M}})=\mathcal{MN}_{d^{2}\times T}(\tilde{\mathbf{M}}|\mathbf{0}_{d^{2}\times T},\mathbf{I}_{d^{2}}\otimes\mathbf{\Omega}) $$
In this definition, the row covariance matrix $\mathbf{I}_{d^{2}}$ models the relationships between features and the column covariance matrix Ω models the relationships between the different vectorized Mahalanobis matrices $\tilde {\mathbf {M}}$'s. Thus, Ω indeed determines the relationships between tasks. Since it cannot be given a priori in most cases, the authors propose to estimate it from data automatically.
The obtained model is shown in (7) and can be solved by alternating optimization.
$$ \begin{aligned} \min_{\{\mathbf{M}_{t}\},\mathbf{\Omega}}\;&{\sum\limits_{t=1}^{T}{\frac{2}{n_{t}(n_{t}-1)} \sum\limits_{i<j}{g\left(y_{i,j}^{t}\left[1-\left\|\mathbf{x}_{i}^{t}-\mathbf{x}_{j}^{t}\right\|_{\mathbf{M}_{t}}^{2}\right]\right)}}}\\ &+\frac{\lambda_{1}}{2}\sum\limits_{t=1}^{T}{\|\mathbf{M}_{t}\|_{\mathrm{F}}^{2}} +\frac{\lambda_{2}}{2}\text{tr}(\tilde{\mathbf{M}}\mathbf{\Omega}^{-1}\tilde{\mathbf{M}}^{\top})\\ \mathrm{s.t.}\;&\mathbf{M}_{t}\succeq\mathbf{0},\forall t\\ &\tilde{\mathbf{M}}=\left(\text{vec}(\mathbf{M}_{1}),\ldots,\text{vec}(\mathbf{M}_{T})\right)\\ &\mathbf{\Omega}\succeq\mathbf{0},\;\text{tr}(\mathbf{\Omega})=1. \end{aligned} $$
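Two pieces of (7) are easy to sketch in isolation: the coupling regularizer and the update of Ω. The code below uses our own naming; the closed-form Ω update is the one from MTRL [13], which we assume carries over here.

```python
import numpy as np

def stack_vec(Ms):
    # M~ = [vec(M_1), ..., vec(M_T)]: one vectorized metric per column
    return np.stack([M.reshape(-1) for M in Ms], axis=1)

def coupling_reg(Ms, Omega):
    # tr(M~ Omega^{-1} M~^T)
    Mtil = stack_vec(Ms)
    return float(np.trace(Mtil @ np.linalg.inv(Omega) @ Mtil.T))

def update_omega(Ms):
    # Assumed MTRL-style update: Omega = S / tr(S), S = (M~^T M~)^{1/2}
    Mtil = stack_vec(Ms)
    vals, vecs = np.linalg.eigh(Mtil.T @ Mtil)
    S = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return S / np.trace(S)
```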
In that paper, the authors further propose a transfer metric learning method based on this model, which trains the concatenated Mahalanobis matrix of only the target task while leaving the other matrices fixed as source tasks. The idea of learning the relationship between tasks is interesting, but the covariance between the vectorized Mahalanobis matrices is hard to interpret from the perspective of distance metrics.
Multi-task maximally collapsing metric learning (MtMCML) Ma et al. [43] proposes a multi-task metric learning approach using the graph-based regularization. To be specific, a graph is constructed to describe the relations between the tasks, where each node corresponds to a Mahalanobis matrix of one task, and an edge connecting two nodes represents the similarity between the two tasks. Thus an adjacency matrix W(0≤W(i,j)≤ 1) is obtained where a higher W(i,j) indicates that metrics i and j are more related. The regularization is
$$ \begin{aligned} J(\mathbf{M}_{1},\ldots,\mathbf{M}_{T}) =&\sum\limits_{i=1}^{T}{\sum\limits_{j=1}^{T}{\mathbf{W}(i,j)\|\tilde{\mathbf{M}}_{i}-\tilde{\mathbf{M}}_{j}\|_{2}^{2}}}\\ =&\text{trace}\left(\tilde{\mathbf{M}}(\mathbf{DIA}-\mathbf{W})\tilde{\mathbf{M}}^{\top}\right)\\ =&\text{trace}\left(\tilde{\mathbf{M}}\mathbf{L}\tilde{\mathbf{M}}^{\top}\right), \end{aligned} $$
where $\tilde {\mathbf {M}}_{i}=\text {vec}(\mathbf {M}_{i})$ converts the Mahalanobis matrix of the i-th task into a vector in a column-wise manner, DIA is a diagonal matrix with $\mathbf {DIA}(i,i)=\sum _{j=1}^{T}{\mathbf {W}(i,j)}$, and thus the matrix L=DIA−W is indeed the graph Laplacian matrix. The model can be optimized by an alternating method.
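The quadratic form in (8) is a one-liner once the metrics are vectorized; a minimal sketch (our naming) is:

```python
import numpy as np

def graph_reg(Ms, W):
    # trace(M~ L M~^T), where L = DIA - W is the graph Laplacian and
    # DIA(i, i) = sum_j W(i, j)
    Mtil = np.stack([M.reshape(-1) for M in Ms], axis=1)  # d^2 x T
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(Mtil @ L @ Mtil.T))
```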
In this work, the authors empirically set the adjacency matrix to W(i,j)=1, which indeed assumes that every pair of tasks is related. It is not difficult to prove that such a regularization is just a variant of the regularization of mt-LMNN, so the two methods are closely related in this special case.
This work naturally introduces graph regularization into multi-task learning by applying a Laplacian to the vectorized Mahalanobis matrices. However, the relationship between two metrics remains vague, and the Laplacian matrix L is not easy to determine in a principled way.
Regularization with a common metric
A framework for approaches based on a common metric Yang et al. [44] proposes a general framework for multi-task metric learning for the scenario where all metrics are similar to a common one. The optimization problem is shown in (9), where M_t is the Mahalanobis matrix of the t-th task and M_c is the common one.
$$ \begin{aligned} \min_{\{\mathbf{M}_{t}\},\mathbf{M}_{c}}&{\sum\limits_{t}{\left(L(\mathbf{M}_{t},\mathcal{X}_{t})+\gamma D(\mathbf{M}_{t},\mathbf{M}_{c})\right)}+ \gamma_{0} D(\mathbf{M}_{0},\mathbf{M}_{c})}\\ \mathrm{s.t.}\;&\mathbf{M}_{t}\in\mathcal{C}_{t}(\mathcal{X}_{t}),\\ &\mathbf{M}_{t}\succeq\mathbf{0}, \end{aligned} $$
In this framework, the loss L and constraints $\mathcal {C}_{t}$ are used to incorporate the side-information from training samples into the learning process, while the regularization D(M t ,M c ) encourages the metric of each task to be similar to a common one M c , and D(M0,M c ) further regularizes the common metric to be close to a pre-defined metric. Without more prior information available, M0 is set to the identity I to define a Euclidean metric.
The mt-LMNN can be easily included as a special case of this framework by taking $D(\mathbf {X},\mathbf {Y})=\|\mathbf {X}-\mathbf {Y}\|_{\mathrm {F}}^{2}$. The only difference lies in the constraints: the Mahalanobis matrix of the t-th task in mt-LMNN is M0+M_t, where both parts are positive semi-definite; the Mahalanobis matrix of the t-th task in (9) with the Frobenius norm is M_t, and only this matrix is required to be positive semi-definite. The authors indicate that the latter actually provides a more reasonable model because the constraints in mt-LMNN are unnecessarily strict.
Geometry preserving multi-task metric learning (GPmtML) Yang et al. [44] proposes the geometry preserving multi-task metric learning approach based on the general framework (9). Different from most previous approaches, GPmtML considers the multi-task metric learning problem from the perspective of propagating relative distances. The authors indicate that learning a metric is a process of embedding the supervised information from training samples into the learnt metric, and thus it is reasonable to couple the multiple tasks by sharing the embedded supervised information among metrics. As we have illustrated, an important class of metric learning approaches learns the metric from relative distances given by triplets, and thus it is reasonable to propagate such relative-distance relationships between metrics. Motivated by this, the authors propose the concept of the geometry preserving probability [44, 45] to measure this kind of characteristic between two metrics defined by Mahalanobis matrices A and B.
$$\begin{aligned} \text{PG}_{f}(d_{\mathbf{A}},d_{\mathbf{B}})= &\mathrm{P}\left[d_{\mathbf{A}}(\mathbf{x}_{1},\mathbf{y}_{1})>d_{\mathbf{A}}(\mathbf{x}_{2},\mathbf{y}_{2})\; \wedge\;d_{\mathbf{B}}(\mathbf{x}_{1},\mathbf{y}_{1})>d_{\mathbf{B}}(\mathbf{x}_{2},\mathbf{y}_{2})\right]\\ &+\mathrm{P}\left[d_{\mathbf{A}}(\mathbf{x}_{1},\mathbf{y}_{1})<d_{\mathbf{A}}(\mathbf{x}_{2},\mathbf{y}_{2})\; \wedge\;d_{\mathbf{B}}(\mathbf{x}_{1},\mathbf{y}_{1})<d_{\mathbf{B}}(\mathbf{x}_{2},\mathbf{y}_{2})\right]\\ &+\mathrm{P}\left[d_{\mathbf{A}}(\mathbf{x}_{1},\mathbf{y}_{1})=d_{\mathbf{A}}(\mathbf{x}_{2},\mathbf{y}_{2})\; \wedge\;d_{\mathbf{B}}(\mathbf{x}_{1},\mathbf{y}_{1})=d_{\mathbf{B}}(\mathbf{x}_{2},\mathbf{y}_{2})\right], \end{aligned} $$
where (x1,y1,x2,y2)∼f and ∧ denotes the logical "and" operator.
Then geometry preserving multi-task metric learning is proposed, which aims to increase the geometry preserving probability. The method is obtained by using the von Neumann divergence [46, 47] (10) as the regularization in (9).
$$ D_{\text{vN}}(\mathbf{M},\mathbf{M}_{c})=\text{tr}\left(\mathbf{M}\log \mathbf{M}-\mathbf{M}\log \mathbf{M}_{c}-\mathbf{M}+\mathbf{M}_{c}\right) $$
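For symmetric positive definite matrices, (10) can be evaluated through an eigendecomposition-based matrix logarithm. The sketch below uses our own naming and clips the eigenvalues away from zero for numerical safety.

```python
import numpy as np

def logm_spd(M, eps=1e-12):
    # Matrix logarithm of a symmetric positive definite matrix
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.log(np.clip(vals, eps, None))) @ vecs.T

def von_neumann_div(M, Mc):
    # D_vN(M, Mc) = tr(M log M - M log Mc - M + Mc)
    return float(np.trace(M @ logm_spd(M) - M @ logm_spd(Mc) - M + Mc))

print(von_neumann_div(np.eye(3), np.eye(3)))  # -> 0.0
```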
By a series of theoretical analyses, this method is proved to encourage a higher geometry preserving probability, and is thus more likely to keep the relative distances of samples consistent across different metrics. The introduced regularization is jointly convex and thus the problem can be effectively solved by alternating optimization.
This is the first work that attempts to construct multi-task metric learning by sharing the supervised side-information among tasks, and it provides a reasonable explanation from the perspective of metric learning. However, the macrostructure of the model is simple and thus cannot describe more complex relationships among tasks.
Sharing transformation
According to (1), learning a Mahalanobis distance is equivalent to learning a corresponding linear transformation. There are indeed some metric learning algorithms that aim to learn such a transformation directly, and this naturally provides a way to construct multi-task metric learning by sharing some parts of the transformation.
Multi-task metric learning based on common subspace (mtMLCS) Yang et al. [48] proposes a multi-task learning method based on the assumption of a common subspace. The idea is motivated by multi-task feature learning [11], which learns a common sparse representation across multiple tasks. Based on the same assumption that all the tasks share a common low-dimensional subspace, the authors propose a multi-task framework for metric learning via transformation.
To couple multiple tasks with a common low-dimensional subspace, the authors notice that for any low-rank Mahalanobis matrix M, the corresponding linear transformation matrix L is of full row rank and has size r×d, where r=rank(M) is the dimension of the subspace. Applying a compact SVD to L gives L=UΛV⊤, where V is a d×r matrix defining a projection onto the low-dimensional subspace, and UΛ defines a transformation within the subspace. This fact motivates a straightforward multi-task strategy with a common subspace: share a common projection matrix V and learn an individual transformation $\mathbf {R}_{t}\doteq \mathbf {U}_{t}\mathbf {\Lambda }_{t}$ for each task.
However, it is computationally expensive to enforce an orthogonality constraint on V. On the other hand, orthogonality is not actually necessary for defining a subspace: as long as a matrix of size r×d with r<d is used, it defines a subspace of dimensionality at most r, up to some extra full-rank transformation within the subspace. Therefore, a common matrix L_0 of size r×d is used to realize the common projection instead of V⊤, and the extra transformation can be absorbed into R_t. The obtained model learns a transformation L_t = R_t L_0 for each task, where L_0 defines the common subspace and R_t defines the task-specific metric, as sketched below. This strategy is then incorporated into LMCA [49], a variant of LMNN [2] that learns the transformation directly.
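A minimal sketch of the resulting distance computation (names are ours): the squared distance for task t applies the shared projection L0 followed by the task-specific transformation R_t.

```python
import numpy as np

def mtmlcs_sq_distance(x, y, R_t, L0):
    """Squared distance under the task-t transformation L_t = R_t @ L0 (shared L0)."""
    diff = R_t @ (L0 @ (x - y))    # project onto the shared subspace, then transform
    return float(diff @ diff)
```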
This approach is simple to implement. Compared with approaches that learn metrics through Mahalanobis matrices, mtMLCS does not require symmetric positive semi-definite constraints and is thus much easier to optimize. However, the model is not convex and thus the global optimum cannot be guaranteed.
Coupled projection multi-task metric learning (CP-mtML) Bhattarai et al. [50] proposes a multi-task metric learning approach that also focuses on learning a linear transformation. In this paper, the authors refer to the transformation in (1) as a "projection", and the idea for coupling different tasks is to decompose it into a common projection and a task-specific projection. Different from mtMLCS, in which the common and task-specific projections are composed, CP-mtML decomposes the projection at the level of the distance:
$$ \begin{aligned} d_{t}^{2}(\mathbf{x}_{i},\mathbf{x}_{j}) =& d_{\mathbf{L}_{0}}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})+ d_{\mathbf{L}_{t}}^{2}(\mathbf{x}_{i},\mathbf{x}_{j}) \\ = & \|\mathbf{L}_{0}\mathbf{x}_{i}-\mathbf{L}_{0}\mathbf{x}_{j}\|_{2}^{2}+ \|\mathbf{L}_{t}\mathbf{x}_{i}-\mathbf{L}_{t}\mathbf{x}_{j}\|_{2}^{2} \end{aligned} $$
It is easy to show that the relation among different tasks is the same as in mt-LMNN, where both approaches obtain the distance by summing the squared distances of the common and task-specific parts:
$$ \begin{aligned} d_{t}^{2}(\mathbf{x}_{i},\mathbf{x}_{j}) =& (\mathbf{x}_{i}-\mathbf{x}_{j})^{\top}\left(\mathbf{L}_{0}^{\top}\mathbf{L}_{0} +\mathbf{L}_{t}^{\top}\mathbf{L}_{t}\right)(\mathbf{x}_{i}-\mathbf{x}_{j}) \\ =& (\mathbf{x}_{i}-\mathbf{x}_{j})^{\top}(\mathbf{M}_{0}+\mathbf{M}_{t})(\mathbf{x}_{i}-\mathbf{x}_{j}) \end{aligned} $$
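A minimal sketch of this decomposition (the helper name is ours): the task-t squared distance is the sum of the squared distances under the common projection L0 and the task-specific projection L_t, which by the identity above corresponds to the Mahalanobis matrix L0⊤L0 + L_t⊤L_t.

```python
import numpy as np

def cp_mtml_sq_distance(x_i, x_j, L0, L_t):
    """CP-mtML squared distance: common part plus task-specific part."""
    d = x_i - x_j
    return float(np.sum((L0 @ d) ** 2) + np.sum((L_t @ d) ** 2))
```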
The authors point out that there are important differences between the two approaches. First, the side-information in mt-LMNN is based on triplets while CP-mtML is based on similar/dissimilar pairs. Second, using the projection formulation, it is easy to obtain a low-rank metric. Third, the authors propose a scalable SGD-based learning algorithm. Finally, it can work in an online setting.
Since this method learns the metric by optimizing over the transformation L, it has similar merits and drawbacks to mtMLCS. It is also designed for the simple case where the tasks are correlated through a common Mahalanobis matrix.
Deep multi-task metric learning (DMML) Soleimani et al. [51] proposes a multi-task version of deep metric learning. The method is constructed based on discriminative deep metric learning (DDML) [29]. For any pair of points, DDML transforms the two points with a neural network, and the distance is then defined as the Euclidean distance between their transformed representations. Thus metric learning is performed by learning the parameters of the network.
DMML uses a straightforward way to construct a multi-task version of DDML by sharing the same first layer. Assuming there are T tasks, the first-layer outputs for two points x_{i,t}, x_{j,t} in the t-th task are $\mathbf {h}_{1,t}^{(1)}=s\left (\mathbf {W}^{(1)}\mathbf {x}_{i,t}+\mathbf {b}^{(1)}\right)$ and $\mathbf {h}_{2,t}^{(1)}=s\left (\mathbf {W}^{(1)}\mathbf {x}_{j,t}+\mathbf {b}^{(1)}\right)$, where all tasks share a common weight matrix W^{(1)} and a common bias vector b^{(1)}, and s is a nonlinear operator such as tanh. The outputs of the second layer are then calculated separately for each task as $h_{1,t}^{(2)}=s\left (\mathbf {W}_{t}^{(2)}\mathbf {h}_{1,t}^{(1)}+\mathbf {b}_{t}^{(2)}\right)$ and $h_{2,t}^{(2)}=s\left (\mathbf {W}_{t}^{(2)}\mathbf {h}_{2,t}^{(1)}+\mathbf {b}_{t}^{(2)}\right)$, where each task uses its task-specific weight matrix $\mathbf {W}_{t}^{(2)}$ and bias vector $\mathbf {b}_{t}^{(2)}$. The obtained distance can then be calculated by
$$ d^{2}(\mathbf{x}_{i,t},\mathbf{x}_{j,t})=\|\mathbf{h}_{1,t}^{(2)}-\mathbf{h}_{2,t}^{(2)}\|_{2}^{2}. $$
Then the model is learned by the following optimization problem:
$$ \begin{aligned} \min_{\mathbf{W},\mathbf{b}} J=& \frac{1}{2}\sum\limits_{t=1}^{T}{\sum\limits_{i,j}{g\left(1-l_{i,j}(\tau-d^{2}(\mathbf{x}_{i,t},\mathbf{x}_{j,t}))\right)}} \\ & +\frac{\lambda}{2}\left(\|\mathbf{W}^{(1)}\|_{\mathrm{F}}^{2}+\|\mathbf{b}^{(1)}\|_{2}^{2}\right) +\frac{\lambda}{2}\sum\limits_{t=1}^{T}{\left(\|\mathbf{W}_{t}^{(2)}\|_{\mathrm{F}}^{2}+\|\mathbf{b}_{t}^{(2)}\|_{2}^{2}\right)}, \end{aligned} $$
where $g(z)=\frac {1}{\beta }\log (1+\exp (\beta z))$ is a smooth approximation of $[z]_{+}=\max (z,0)$, and β controls its sharpness.
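A minimal sketch of the forward pass and of the smoothed hinge above (names are ours; the weight shapes are assumed compatible):

```python
import numpy as np

def smoothed_hinge(z, beta=10.0):
    """g(z) = (1/beta) * log(1 + exp(beta*z)), a smooth surrogate for max(z, 0)."""
    return np.logaddexp(0.0, beta * z) / beta

def dmml_sq_distance(x_i, x_j, W1, b1, W2_t, b2_t, s=np.tanh):
    """Squared DMML distance for task t: shared first layer, task-specific second."""
    h1_i, h1_j = s(W1 @ x_i + b1), s(W1 @ x_j + b1)            # shared W1, b1
    h2_i, h2_j = s(W2_t @ h1_i + b2_t), s(W2_t @ h1_j + b2_t)  # task-specific
    return float(np.sum((h2_i - h2_j) ** 2))
```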
This method is based on a simple yet effective idea in which part of the network weights is shared across multiple tasks. It is not difficult to implement by slightly modifying the original network architecture. However, only the first layer is shared across tasks in this model, which may not be the optimal choice, and it is not easy to determine how many layers should be shared.
Deep convnets metric learning with multi-task learning (mtDCML) McLaughlin et al. [52] proposes to introduce auxiliary tasks into the model to help the metric learning task. The central idea is again to learn the distance metric by learning a feature representation. Denoting the subnetwork that transforms a sample into its feature representation by G, and the network parameters by w, the learned distance can be calculated as the Euclidean distance between the representations:
$$ D(\mathbf{x}_{1},\mathbf{x}_{2};w)=\|G(\mathbf{x}_{1};w)-G(\mathbf{x}_{2};w)\|_{2}. $$
The network is trained using sample pairs from the training dataset. The costs for similar and dissimilar pairs are shown below:
$$ \mathcal{V}_{S}(\mathbf{x}_{1},\mathbf{x}_{2};w)=\frac{1}{2}D(\mathbf{x}_{1},\mathbf{x}_{2};w)^{2},\; \mathcal{V}_{D}(\mathbf{x}_{1},\mathbf{x}_{2};w)=\frac{1}{2}\left(\max(0,m-D(\mathbf{x}_{1},\mathbf{x}_{2};w))\right)^{2}. $$
Then the cost function is written as
$$ \mathcal{V}(\mathbf{x}_{1},\mathbf{x}_{2}|y;w)= (1-y)\mathcal{V}_{S}(\mathbf{x}_{1},\mathbf{x}_{2};w)+y\mathcal{V}_{D}(\mathbf{x}_{1},\mathbf{x}_{2};w). $$
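A minimal sketch of this contrastive pairwise cost (the helper name is ours; following the equation above, y = 0 marks a similar pair, y = 1 a dissimilar pair, and m is the margin):

```python
def pair_cost(D, y, m=1.0):
    """V(x1, x2 | y; w), with D the learned distance between the pair."""
    v_similar = 0.5 * D ** 2                     # pull similar pairs together
    v_dissimilar = 0.5 * max(0.0, m - D) ** 2    # push dissimilar pairs past margin m
    return (1 - y) * v_similar + y * v_dissimilar
```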
To improve the metric learning task, the authors include other related auxiliary tasks into the objective and obtain the multi-task version:
$$ \mathcal{C}_{m}(X)=\sum\limits_{(\mathbf{x}_{1}^{i},\mathbf{x}_{2}^{i})\in X}{\mathcal{V}\left(\mathbf{x}_{1}^{i},\mathbf{x}_{2}^{i}|y^{i};w\right)} +\sum\limits_{t}{\alpha_{t}\mathcal{T}_{t}\left(G\left(\mathbf{x}_{1}^{i}\right)\big|l_{1}^{i,t};w\right)} +\sum\limits_{t}{\alpha_{t}\mathcal{T}_{t}\left(G\left(\mathbf{x}_{2}^{i}\right)\big|l_{2}^{i,t};w\right)}, $$
where $\mathcal {T}_{t}$ is an auxiliary task which helps to learn a better representation.
The selection of the auxiliary task depends on the problem of interest and there are a variety of choices. For the example in [52] where the main task is a metric learning for face verification, all the auxiliary tasks involve assigning one of several mutually exclusive labels to each training image. Thus the following softmax regression cost function is used
$$ \mathcal{T}_{t}\left(\mathbf{z}|l^{t};\mathbf{w}\right) =\sum\limits_{j\in L_{t}}{\mathbf{1}\left\{l^{t}=j\right\}\log\frac{e^{\mathbf{w}_{j}^{\top}\mathbf{z}}}{\sum_{q\in L_{t}}{e^{\mathbf{w}_{q}^{\top}\mathbf{z}}}}}, $$
where z is the feature representation $G\left (\mathbf {x}_{1}^{i}\right)$ or $G\left (\mathbf {x}_{2}^{i}\right)$ of the input image, L_t is the label set for the t-th task, and 1{l^t=j} is an indicator function that takes value one when j equals the ground truth l^t and zero otherwise. Using this framework, several auxiliary tasks can be included by using different label sets L_t, such as identification, attributes, pose, etc. Please refer to [52] for more details.
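A minimal sketch of this auxiliary cost (our helper names): the expression above is a log-likelihood, so in practice one minimizes its negative; `W` is assumed to hold one weight row per label in L_t.

```python
import numpy as np

def softmax_aux_cost(z, label, W):
    """Negative log-likelihood of the ground-truth label under softmax regression."""
    logits = W @ z
    logits = logits - logits.max()                  # log-sum-exp trick for stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-log_probs[label])
```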
The strategy used in this paper to construct multi-task metric learning is common in the multi-task learning community. The model is flexible, since different auxiliary tasks can be plugged in. However, for some problems it is difficult to choose a proper auxiliary task, and a poorly chosen auxiliary task may degrade the performance.
Multi-task metric learning has been widely used in a variety of practical applications, and we would like to introduce some representative works in this section.
Semantic categorization and social tagging with knowledge transfer among tasks Wang et al. [32] uses their proposed multi-task multi-feature similarity learning (M2SL) to solve large-scale visual applications. The metrics for visual categorization and automatic tagging are learned jointly within the framework, which is beneficial from several perspectives. First, M2SL learns a metric for each feature instead of concatenating the multiple features into one feature vector, which effectively reduces the growth of computational complexity from O(M²d²) to O(Md²) and also the risk of over-fitting. Second, the multi-task framework is more flexible in exploring the intrinsic model-sharing and feature-weighting relations on image data with a large number of classes. Third, knowledge is transferred between semantic labels and social tagging information by the model, which combines the information from both sides for effective image understanding.
The authors compare the performance of two versions of M2SL (linear and kernelized) with some other methods, and the experimental results are shown in Fig. 3. From the results, the kernelized M2SL always achieves the best performance, especially when the number of tasks is larger. The linear M2SL also outperforms the single-task MSL. Thus, the knowledge transfer achieved by multi-task learning effectively improves the performance of metric learning.
Fig. 3 Comparison of M2SL with other methods. The performance curves of M2SL and other methods on the ImageNet-250 dataset are shown, where M2SL-K and M2SL-L indicate the kernelized and linear M2SL respectively. The x-axis represents the number of tasks and the y-axis the mean accuracy over all tasks. This figure is from the original paper [32]
Person re-identification over camera networks Ma et al. [43] uses their proposed multi-task maximally collapsing metric learning (MtMCML) to solve person re-identification over camera networks. Person re-identification in a camera network is a challenging problem because the data are collected from different cameras. Using a single common metric overlooks the differences between cameras, and thus the authors propose a multi-task learning approach for this problem. With MtMCML, a particular metric is learned for each pair of cameras, while common information can be shared among them. The experimental results show that the multi-task approach works substantially better than other state-of-the-art methods, as shown in Fig. 4.
Fig. 4 Comparison of MtMCML with other methods. The performance of MtMCML and other methods on the GRID dataset is presented, where the x-axis and y-axis represent the rank score and matching rate respectively. From the results, the multi-task learning approach evidently improves the matching rate. This figure is from the original paper [43]
Large-scale face retrieval Bhattarai et al. [50] uses their proposed coupled projection multi-task metric learning to solve large-scale face retrieval. They use the multi-task framework to learn different tasks on heterogeneous datasets simultaneously, where a common projection is used to share information among these tasks. The tasks include face identity, age recognition, and expression recognition. By jointly learning these tasks, the authors obtain improved performance, as shown in Fig. 5.
Fig. 5 Comparison of CP-mtML with other methods. Age retrieval performance (1-call@K) for different K with the auxiliary task of identity matching. This figure is from the original paper [50]
Offline signature verification Soleimani et al. [51] deals with the offline signature verification problem using deep multi-task metric learning. For offline signature verification, there are writer-dependent (WD) approaches and writer-independent (WI) approaches, each with its own advantages. The two approaches are well integrated in this model, where the shared layer acts as a WI component while the separated layers learn WD factors. In the experiments, DMML achieves better performance than other methods. For example, on the UTSig dataset using the HOG feature, DMML achieves an equal error rate (EER) of 17.45% while the SVM achieves an EER of 20.63%; using the DRT feature, DMML achieves an EER of 20.28% while the SVM achieves an EER of 27.64%.
Hierarchical large-scale image classification Zheng et al. [39] uses their proposed hierarchical multi-task metric learning (HMML) to solve the large-scale image classification problem. To deal with the large scale, the authors first learn a visual tree that organizes a large number of image categories hierarchically in a coarse-to-fine fashion. A series of metrics is then learned hierarchically. Using HMML, both the inter-node and the inter-level visual correlations are utilized: the inter-node correlation is obtained directly from the multi-task framework, while the inter-level correlation is obtained by passing the task-specific part to the next level. The experimental results shown in Fig. 6 demonstrate that the multi-task model obtains better performance on large-scale classification.
Fig. 6 Comparison of HMML with other methods. Accuracy comparison on the ILSVRC-2012 image set with 1000 image categories. This figure is from the original paper [39]
Person re-identification with auxiliary tasks McLaughlin et al. [52] uses multi-task learning to improve the performance of person re-identification. Using their proposed deep convnets metric learning with multi-task learning, the authors train the network to jointly perform verification and identification and to recognize attributes related to the clothing and pose of the person in each image. The main job of the network is to learn a metric using similar and dissimilar pairs; with the help of the auxiliary tasks (attribute recognition), the network learns a metric that gives satisfactory performance. Figure 7 shows the experimental results. It is obvious that the accuracy is effectively improved by introducing auxiliary tasks.
Fig. 7 Comparison of mtDCML with other methods. CMC curve after multi-task training on the VIPeR dataset. This figure is from the original paper [52]
In this paper, we have systematically reviewed multi-task metric learning. Following a brief overview of metric learning, various multi-task metric learning approaches were categorized into four families and introduced respectively. We then reviewed their motivations, models, and algorithms, and also discussed and compared some closely related approaches. Finally, some representative applications of multi-task metric learning were illustrated.
For future work, we suggest several potential issues for exploration. First, the theoretical analysis of multi-task metric learning should be addressed. This has long been an important issue with multiple results [53–56], most of which focus on how multi-task learning improves the generalization [57] of a conventional algorithm. However, as mentioned earlier, metric learning improves the performance of the algorithms that use the metric only indirectly, which makes these results difficult to apply to metric learning algorithms. There has also been some research [58–61] on the theoretical analysis of metric learning, but it has been difficult to extend these results to the multi-task setting. Whilst Yang et al. [44] has attempted to provide an intuitive explanation, the issue pertaining to multi-task learning remains unresolved. Second, it is important to study how to avoid negative transfer among tasks. Existing approaches are designed to couple multiple metrics without considering the problem of negative transfer, and thus they are likely to deteriorate in performance when the tasks are not related. Third, most existing multi-task metric learning approaches are designed for global linear metrics; they should be extended to more types of metric learning approaches, including local metric learning and non-linear metric learning. Finally, more applications of multi-task metric learning are expected to be discovered.
Xing EP, Ng AY, Jordan MI, Russell SJ. Distance metric learning with application to clustering with side-information. In: Advances in Neural Information Processing Systems 15 [Neural Information Processing Systems, NIPS 2002, December 9-14, 2002, Vancouver, British Columbia, Canada].2002. p. 505–12. http://papers.nips.cc/paper/2164-distance-metric-learning-with-application-to-clustering-with-side-information.
Weinberger KQ, Saul LK. Distance metric learning for large margin nearest neighbor classification. J Mach Learn Res. 2009; 10:207–44.
Davis JV, Kulis B, Jain P, Sra S, Dhillon IS. Information-theoretic metric learning. In: Proceedings of the 24th International Conference on Machine Learning.2007. p. 209–16.
Huang K, Ying Y, Campbell C. Gsml: A unified framework for sparse metric learning. In: Ninth IEEE International Conference on Data Mining.2009. p. 189–98.
Huang K, Ying Y, Campbell C. Generalized sparse metric learning with relative comparisons. Knowl Inf Syst. 2011; 28(1):25–45.
Ying Y, Huang K, Campbell C. Sparse metric learning via smooth optimization In: Bengio Y, Schuurmans D, Lafferty J, Williams CKI, Culotta A, editors. Advances in Neural Information Processing Systems 22.2009. p. 2214–222.
Ying Y, Li P. Distance metric learning with eigenvalue optimization. J Mach Learn Res. 2012; 13:1–26.
Caruana R. Multitask learning. Mach Learn. 1997; 28(1):41–75.
Evgeniou T, Pontil M. Regularized multi-task learning. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.2004. p. 109–17.
Argyriou A, Micchelli CA, Pontil M, Ying Y. A spectral regularization framework for multi-task structure learning. In: Advances in Neural Information Processing Systems 20.2008. p. 25–32.
Argyriou A, Evgeniou T. Convex multi-task feature learning. Mach Learn. 2008; 73(3):243–72.
Zhang J, Ghahramani Z, Yang Y. Flexible latent variable models for multi-task learning. Mach Learn. 2008; 73(3):221–42.
Zhang Y, Yeung DY. A convex formulation for learning task relationships in multi-task learning. In: Proceedings of the Twenty-Sixth Conference Annual Conference on Uncertainty in Artificial Intelligence.2010. p. 733–42.
Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng. 2010; 22(10):1345–59.
Dai W, Yang Q, Xue GR, Yu Y. Boosting for transfer learning. In: Proceedings of the 24th International Conference on Machine Learning, ICML '07. New York: ACM: 2007. p. 193–200.
Gopalan R, Li R, Chellappa R. Domain adaptation for object recognition: An unsupervised approach. In: Proceedings of IEEE International Conference on Computer Vision, ICCV 2011. p. 999–1006.
Vilalta R, Drissi Y. A perspective view and survey of meta-learning. Artif Intell Rev. 2002; 18(2):77–95.
Thrun S. Lifelong learning algorithms. In: Learning to Learn. USA: Springer: 1998. p. 181–209.
Thrun S, Pratt L. Learning to Learn. USA: Springer; 2012.
Burago D, Burago Y, Ivanov S. A Course in Metric Geometry. USA: American Mathematical Society; 2001. Chap. Ch 1.1.
Mahalanobis PC. On the generalised distance in statistics. In: Proceedings National Institute of Science, vol. 2. India: 1936. p. 49–55.
Bellet A, Habrard A, Sebban M. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709v4, 2014.
Kulis B. Metric learning: A survey. Found Trends Mach Learn. 2013; 5(4):287–364.
Weinberger KQ, Blitzer J, Saul L. Distance metric learning for large margin nearest neighbor classification. In: Advances in Neural Information Processing Systems 18.2006.
Huang K, Jin R, Xu Z, Liu CL. Robust metric learning by smooth optimization. In: The 26th Conference on Uncertainty in Artificial Intelligence.2010. p. 244–51.
Goldberger J, Roweis S, Hinton G, Salakhutdinov R. Neighbourhood components analysis. In: Advances in Neural Information Processing Systems.2004. p. 513–20.
Schmidhuber J. Deep learning in neural networks: An overview. Neural Netw. 2015; 61:85–117.
Salakhutdinov R, Hinton G. Learning a nonlinear embedding by preserving class neighbourhood structure. In: Artificial Intelligence and Statistics.2007. p. 412–9.
Hu J, Lu J, Tan Y. Discriminative deep metric learning for face verification in the wild. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014.2014. p. 1875–82.
Vapnik VN. Statistical Learning Theory, 1st ed. USA: Wiley; 1998.
Parameswaran S, Weinberger K. Large margin multi-task metric learning. In: Advances in Neural Information Processing Systems 23.2010. p. 1867–75.
Wang S, Jiang S, Huang Q, Tian Q. Multi-feature metric learning with knowledge transfer among semantics and social tagging. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012.2012. p. 2240–7.
Kwok JT, Tsang IW. Learning with idealized kernels. In: Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA: 2003. p. 400–7. http://www.aaai.org/Library/ICML/2003/icml03-054.php.
Shi Y, Bellet A, Sha F. Sparse compositional metric learning. In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Québec City, Québec, Canada: 2014. p. 2078–084. http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8224.
Liu H, Zhang X, Wu P. Two-level multi-task metric learning with application to multi-classification. In: 2015 IEEE International Conference on Image Processing, ICIP 2015, Quebec City, QC, Canada, September 27-30, 2015: 2015. p. 2756–60.
Köstinger M, Hirzer M, Wohlhart P, Roth PM, Bischof H. Large scale metric learning from equivalence constraints. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012: 2012. p. 2288–95.
Li Y, Tao D. Online semi-supervised multi-task distance metric learning. In: IEEE International Conference on Data Mining Workshops, ICDM Workshops 2016, December 12-15, 2016, Barcelona, Spain: 2016. p. 474–9.
Jin R, Wang S, Zhou Y. Regularized distance metric learning: Theory and algorithm. In: Advances in Neural Information Processing Systems, vol. 22.2009. p. 862–70.
Zheng Y, Fan J, Zhang J, Gao X. Hierarchical learning of multi-task sparse metrics for large-scale image classification. Pattern Recogn. 2017; 67:97–109.
Zhang Y, Yeung DY. Transfer metric learning by learning task relationships. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.2010.
Zhang Y, Yeung DY. Transfer metric learning with semi-supervised extension. ACM Trans Intell Syst Tech (TIST). 2012; 3(3):54:1–54:28.
Gupta AK, Nagar DK. Matrix Variate Distributions. Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics, vol. 104. London: Chapman & Hall; 2000.
Ma L, Yang X, Tao D. Person re-identification over camera networks using multi-task distance metric learning. IEEE Trans Image Process. 2014; 23(8):3656–70.
Yang P, Huang K, Liu CL. Geometry preserving multi-task metric learning. Mach Learn. 2013; 92(1):133–75.
Yang P, Huang K, Liu CL. Geometry preserving multi-task metric learning. In: European Conference on Machine Learning and Knowledge Discovery in Databases, vol. 7523.2012. p. 648–64.
Dhillon IS, Tropp JA. Matrix nearness problems with bregman divergences. SIAM J Matrix Anal Appl. 2008; 29:1120–46.
Kulis B, Sustik MA, Dhillon IS. Low-rank kernel learning with bregman matrix divergences. J Mach Learn Res. 2009; 10:341–76.
Yang P, Huang K, Liu C. A multi-task framework for metric learning with common subspace. Neural Comput Applic. 2013; 22(7-8):1337–47.
Torresani L, Lee K. Large margin component analysis. In: Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006: 2006. p. 1385–92. http://papers.nips.cc/paper/3088-large-margin-component-analysis.
Bhattarai B, Sharma G, Jurie F. Cp-mtml: Coupled projection multi-task metric learning for large scale face retrieval. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016: 2016. p. 4226–35.
Soleimani A, Araabi BN, Fouladi K. Deep multitask metric learning for offline signature verification. Pattern Recogn Lett. 2016; 80:84–90.
McLaughlin N, del Rincón JM, Miller PC. Person reidentification using deep convnets with multitask learning. IEEE Trans Circ Syst Video Techn. 2017; 27(3):525–39.
Baxter J. A bayesian/information theoretic model of learning to learn via multiple task sampling. Mach Learn. 1997; 28(1):7–39.
Baxter J. A model of inductive bias learning. J Artif Intell Res. 2000; 12:149–98.
Blitzer J, Crammer K, Kulesza A, Pereira F, Wortman J. Learning bounds for domain adaptation. In: Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007: 2007. p. 129–36. http://papers.nips.cc/paper/3212-learning-bounds-for-domain-adaptation.
Ben-David S, Blitzer J, Crammer K, Kulesza A, Pereira F, Vaughan JW. A theory of learning from different domains. Mach Learn. 2010; 79(1-2):151–75.
Bousquet O, Elisseeff A. Stability and generalization. J Mach Learn Res. 2002; 2:499–526.
Balcan MF, Blum A, Srebro N. A theory of learning with similarity functions. Mach Learn. 2008; 72(1-2):89–112.
Wang L, Sugiyama M, Yang C, Hatano K, Feng J. Theory and algorithm for learning with dissimilarity functions. Neural Comput. 2009; 21(5):1459–84.
Perrot M, Habrard A. A theoretical analysis of metric hypothesis transfer learning. In: Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015: 2015. p. 1708–17. http://jmlr.org/proceedings/papers/v37/perrot15.html.
Bellet A, Habrard A. Robustness and generalization for metric learning. Neurocomputing. 2015; 151:259–67.
The paper was partially supported by National Natural Science Foundation of China (NSFC) under grant no.61403388, no.61473236, Natural science fund for colleges and universities in Jiangsu Province under grant no. 17KJD520010, Suzhou Science and Technology Program under grant no. SYG201712, SZS201613, Key Program Special Fund in XJTLU (KSF-A-01), and UK Engineering and Physical Sciences Research Council (EPSRC) grant numbers EP/I009310/1, EP/M026981/1.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
National Laboratory of Pattern Recognition, 95 East Zhongguancun Road, Beijing, 100190, China
Peipei Yang
Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, 215123, China
Kaizhu Huang
University of Stirling, Stirling, FK9 4LA, UK, Scotland
Amir Hussain
PY developed the overall structure of the idea and mainly drafted the manuscript. KH provided guidance on the whole manuscript and revised the draft. AH participated in the discussion and gave valuable suggestions on the idea. All authors read and approved the final manuscript.
Correspondence to Peipei Yang.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Multi-task learning
Metric learning | CommonCrawl |
A highly nonlinear S-box based on a fractional linear transformation
Shabieh Farwa, Tariq Shah and Lubna Idrees
We study the structure of an S-box based on a fractional linear transformation applied on the Galois field \(GF(2^{8})\). The algorithm followed is very simple and yields an S-box with a very high ability to create confusion in the data. The cryptographic strength of the new S-box is critically analyzed by studying its properties such as nonlinearity, strict avalanche, bit independence, linear approximation probability and differential approximation probability. We also apply the majority logic criterion to determine the effectiveness of our proposed S-box in image encryption applications.
S-box
Galois field
Fractional linear transformation
Majority logic criterion
The advanced encryption standard (AES) (Daemen and Rijmen 2002) is based on the substitution permutation network (SPN), which applies several layers of substitution and permutation. In any SPN, substitution followed by permutation is performed a certain number of times to encrypt the plaintext into ciphertext in order to assure secure communication (Daemen and Rijmen 2002). The choice of a substitution box (S-box) (Shannon 1949) is the most sensitive step in determining the strength of a cryptosystem against several attacks. It is therefore essential to understand the design and properties of an S-box for encryption applications (Detombe and Tavares 1992). Improving the quality of the S-box to enhance the confusion-creating capability of a given SPN has been a challenge for researchers.
In the literature many algorithms for algebraically complex and cryptographically strong S-boxes, such as the AES, APA, Gray, Skipjack, Xyi and Residue Prime (RP) S-boxes, are available. For the interest of readers we give a brief description of these structures. The AES S-box is based on the composition of the inversion map and an affine transformation. It is a non-Feistel cipher. The algebraic expression of the AES S-box is simple and involves only nine items (Daemen and Rijmen 2002). The structure of the APA S-box uses the composition of an affine surjection, a power function and again an affine surjection. This design improves the algebraic complexity from 9 to 253 as compared to the AES S-box (Cui and Cao 2007). The Gray S-box is obtained from the AES S-box with an additional transform based on binary Gray codes. It inherits all the important cryptographic properties of the AES S-box with increased security against attacks (Tran et al. 2008). Skipjack is a Feistel network based on 32 rounds. This algorithm uses an 80-bit key to encrypt or decrypt 64-bit data blocks. The S-box based on the Skipjack algorithm is also known as the Skipjack F-table (Kim and Phan 2009). The Xyi S-box is a mini version of a block cipher with a block size of 8 bits. It has increased efficiency in computer applications (Shi et al. 2002). The Residue Prime S-box uses the field of residues of a prime number as an alternative to the Galois field based S-boxes (Abuelyman and Alsehibani 2008). These widely used S-boxes play the role of benchmarks in the field of cryptography. Among these, the AES, APA and Gray S-boxes attain the highest nonlinearity measure 112. The S-box algorithm proposed in this framework produces a nonlinearity effect as high as that achieved by the top S-boxes AES, APA and Gray; however, unlike these S-boxes, our S-box is structured by employing a single direct map rather than the composition of two or more maps, which makes this algorithm more efficient and economical in both software and hardware applications.
It is a highly desirable property for a cryptographically strong S-box to show good resistance towards linear and differential cryptanalysis (Biham and Shamir 1991; Matsui 1998). For a Boolean function f, linear cryptanalysis is based on finding an affine approximation to the action of a cipher (Nyberg 1993). Recently some efficient models have been studied for S-boxes based on fractional linear transformations (Hussain et al. 2011, 2013a, b). The S-box, being the only nonlinear component in a block cipher, always requires a high nonlinearity effect (Carlet and Ding 2004, 2007; Nyberg 1992, 1993). Motivated by some recently presented designs, we in this paper propose an algorithm to construct an \(8 \times 8\) S-box using a fractional linear transformation applied on the Galois field \(GF(2^{8})\), which produces a very high nonlinearity measure. We further analyse the properties of the new S-box with different commonly used tests such as nonlinearity, strict avalanche criterion (SAC), bit independence criterion (BIC), and linear and differential approximation probability tests (LAPT, DAPT). We then compare the results with those for the famous S-boxes and observe that our new S-box, based on a simple and straightforward algorithm, produces coherent results.
The material presented in this paper is organized as follows. In "Algorithm for S-box" section we explain in detail the construction and major properties of the underlying Galois field \(GF(2^{8})\). We further discuss some interesting features of the fractional linear transformation and describe how this transformation is applied on the Galois field to structure the new S-box. "Analyses of S-box" section deals with the analysis of S-box against several common attacks and compares the cryptographic potential of our proposed S-box with other S-boxes such as AES, APA, Gray, Skipjack, Xyi and Residue Prime. In "Statistical analyses of S-box" section we perform some statistical analysis based on the image encryption application of the S-box and in "Conclusion" section we present conclusion regarding the significance of the new S-box when critically observed in comparison with the previously known models.
This section mainly deals with the structure of our S-box. Before we discuss the constituent algorithm, we need to go through some fundamental facts.
A function \(f: {\mathbb {F}}_{2}^{n}\rightarrow {\mathbb {F}}_{2}\) is called a Boolean function. We define a vectorial Boolean function \(F: {\mathbb {F}}_{2}^{n}\rightarrow {\mathbb {F}}_{2}^{m}\) as
$$\begin{aligned} F(x)=(f_{1}(x),\,f_{2}(x),\ldots ,f_{m}(x)), \end{aligned}$$
where \(x=(x_{1},\, x_{2},\ldots ,x_{n})\in {\mathbb {F}}_{2}^{n}\) and each of \(f_{i}\)'s for \(1\le i\le m\) is a Boolean function referred to as coordinate Boolean function. An \(n \times n\) S-box is precisely defined as a vectorial Boolean function \(S: {\mathbb {F}}_{2}^{n}\rightarrow {\mathbb {F}}_{2}^{n}\).
At this stage, it seems quite practical to understand the structural properties of the Galois field used to construct an S-box. Generally for any prime p, Galois field \(GF(p^{n})\) is given by the factor ring \({\mathbb {F}}_{p}[X]/ <\eta (x)>\) where \(\eta (x)\in {\mathbb {F}}_{p}[X]\) is an irreducible polynomial of degree n.
For an \(8 \times 8\) S-box, we use \(GF(2^{8})\). In advanced encryption standards (AES), the construction of \(GF(2^{8})\) is based on the degree 8 irreducible polynomial \(\eta (x)=x^{8}+x^{4}+x^{3}+x+1\). In Hussain et al. (2013b), \(\eta (x)=x^{8}+x^{4}+x^{3}+x^{2}+x+1\) is used as the generating polynomial. Here we choose \(\eta (x)=x^{8}+x^{6}+x^{5}+x^{4}+1\) as the irreducible polynomial that generates the maximal ideal \(<\eta (x)>\) of the principal ideal domain \({\mathbb {F}}_{2}[X]\). It is important to note that we may choose any degree 8 irreducible polynomial for constructing \(GF(2^{8})\) however the choice of generating polynomial may affect our calculations as the binary operations are carried modulo the used polynomial (see Benvenuto 2012 for details).
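As a concrete illustration, the following is a minimal sketch (with our own helper names) of multiplication and inversion in \(GF(2^{8})\) modulo the generating polynomial \(x^{8}+x^{6}+x^{5}+x^{4}+1\) used here, encoded as the bit mask 0x171:

```python
def gf_mul(a, b, poly=0x171):
    """Multiply in GF(2^8) modulo x^8 + x^6 + x^5 + x^4 + 1 (bit mask 0x171)."""
    r = 0
    while b:
        if b & 1:
            r ^= a                 # addition in GF(2^8) is XOR
        a <<= 1
        if a & 0x100:              # reduce as soon as degree 8 is reached
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^(2^8 - 2); the zero element has no inverse."""
    r, e = 1, 254
    while e:                       # exponentiation by squaring
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r
```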
Generally the construction of an S-box requires a nonlinear bijective map. In the literature many algorithms based on such maps or their compositions have been presented to synthesize cryptographically strong S-boxes. We present the construction of an S-box based on an invertible nonlinear map known as the fractional linear transformation. It is a function of the form \(\frac{az+b}{cz+d}\), generally defined on the complex plane \({\mathbb {C}}\), such that a, b, c and \(d \in {\mathbb {C}}\) satisfy the non-degeneracy condition \(ad-bc\ne 0\). The set of these transformations forms a group under composition. The identity element in this group is the identity map, and the inverse \(\frac{dz-b}{-cz+a}\) of \(\frac{az+b}{cz+d}\) is assured by the condition \(ad-bc\ne 0\). One can easily observe that the algebraic expression of this map has a combined effect of inversion, dilation, rotation and translation. The nonlinearity and algebraic complexity of the fractional linear transformation motivate the idea of employing this map for byte substitution.
For the proposed S-box we apply a fractional linear transformation g on the Galois field discussed above, i.e. \(g:GF(2^{8})\rightarrow GF(2^{8})\) given by \(g(t)=\frac{at+b}{ct+d}\), where \(a,\, b,\, c\) and \(d\in GF(2^{8})\) satisfy \(ad-bc\ne 0\) and t varies from 0 to \(255 \in GF(2^{8})\). We may choose any values for the parameters a, b, c and d that satisfy the condition \(ad-bc\ne 0\). Here, for calculations, we take \(a=29=00011101,\, b=15=00001111,\,c=8=00001000\) and \(d=9=00001001\). One may observe that, as we are working on a finite field, g(t) needs to be explicitly defined at \(t=47\) (at which the denominator vanishes), so in order to keep g bijective we define the transformation as given below:
$$\begin{aligned} g(t): {\left\{ \begin{array}{ll} \frac{29t+15}{8t+9};&{}\quad t\ne 47\\ 149;&{}\quad t=47 \end{array}\right. } \end{aligned}$$
Following the binary operations defined on the Galois field (Benvenuto 2012), we calculate the images of g as shown in Table 1.
Table 1 Images of g, listing t ∈ \({\mathbb {Z}}_{2^{8}}\), its representation in \(GF(2^{8})\), and the image g(t) (numerical values omitted here)
Thus the images of the above defined transformation yield the elements of the proposed S-box (see Table 2).
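Putting the pieces together, a minimal sketch of generating the proposed S-box, reusing the hypothetical gf_mul and gf_inv helpers sketched above:

```python
def sbox_entry(t):
    """g(t) = (29t + 15) / (8t + 9) over GF(2^8), with g(47) defined to be 149."""
    if t == 47:                    # the denominator 8t + 9 vanishes at t = 47
        return 149
    num = gf_mul(29, t) ^ 15       # field addition is XOR
    den = gf_mul(8, t) ^ 9
    return gf_mul(num, gf_inv(den))

SBOX = [sbox_entry(t) for t in range(256)]
assert len(set(SBOX)) == 256       # g is a bijection on GF(2^8)
```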
It is important to mention that an \(8 \times 8\) S-box has 8 constituent Boolean functions. A Boolean function f is balanced if \(\{x|f(x)=0\}\) and \(\{x|f(x)=1\}\) have the same cardinality, or equivalently the Hamming weight HW\((f)=2^{n-1}\). The significance of the balance property is that the larger the magnitude of a function's imbalance, the more likely a high-probability linear approximation can be obtained. Thus, imbalance makes a Boolean function weak in terms of linear cryptanalysis. Furthermore, a function with a large imbalance can easily be approximated by a constant function. All the Boolean functions \(f_{i},\,1 \le i \le 8\), involved in the S-box shown in Table 2 satisfy the balance property. Hence, the proposed S-box is balanced. It might be of interest that, in order to choose feasible parameters leading to balanced S-boxes satisfying all other desirable properties (as discussed in the next section), one can use constraint programming, a problem-solving strategy which characterises the problem as a set of constraints over a set of variables (Kellen 2014; Ramamoorthy et al. 2011).
An S-box is used to convert the plain data into the encrypted data, it is therefore essential to investigate the comparative performance of the S-box. We, in the next section, analyse the newly designed S-box through various indices to establish the forte of our proposed S-box.
For the assessment of the cryptographic strength of our S-box, in this section, we apply some widely used analysis techniques such as nonlinearity, bit independence, strict avalanche, linear and differential approximation probabilities etc. In the following subsections we present all these performance indices one by one.
Nonlinearity
The nonlinearity indicator counts the number of bits that must be altered in the truth table of a Boolean function to reach the nearest affine function. A sketch of its computation is given below.
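The following is a minimal sketch (our helper name) of computing this measure for one coordinate Boolean function via the fast Walsh-Hadamard transform, using the standard identity that the nonlinearity equals \(2^{n-1}-\frac{1}{2}\max|W_f|\):

```python
import numpy as np

def nonlinearity(truth_table):
    """Nonlinearity of a Boolean function given as a 0/1 truth table of length 2^n."""
    n = int(np.log2(len(truth_table)))
    s = 1 - 2 * np.asarray(truth_table, dtype=int)   # (-1)^f(x)
    h = 1
    while h < len(s):                                # fast Walsh-Hadamard transform
        for i in range(0, len(s), 2 * h):
            a, b = s[i:i + h].copy(), s[i + h:i + 2 * h].copy()
            s[i:i + h], s[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return 2 ** (n - 1) - int(np.abs(s).max()) // 2
```

For an \(8 \times 8\) S-box this is applied to each of the 8 coordinate functions, e.g. `nonlinearity([(SBOX[x] >> k) & 1 for x in range(256)])` for bit k.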
Table 3 shows that, for the newly designed S-box, the average nonlinearity measure is 112. Figure 1 shows that, when compared with different famous S-boxes, the nonlinearity of the proposed S-box matches that of the top S-boxes such as AES, APA and Gray and is much higher than that of the Skipjack, Xyi and Residue Prime S-boxes.
Fig. 1 Nonlinearity of different S-boxes
Table 3 Performance indices for the new S-box, reporting its nonlinearity, square deviation, differential approximation probability and linear approximation probability (italic values are used for comparison purposes; numerical values omitted here)
Table 4 Comparison of performance indices for different S-boxes, including the new, AES, APA, Gray, Skipjack, Xyi and Residue Prime S-boxes (numerical values omitted here)
Linear approximation probability
The linear approximation probability measures the maximum value of the imbalance of an event. Let \(\Gamma _{x}\) and \(\Gamma _{y}\) be the input and output masks respectively, and let X consist of all possible inputs, with cardinality \(2^{n}\); the linear approximation probability for a given S-box is defined as
$$\begin{aligned} LP=\underset{\Gamma _{x},\Gamma _{y}\ne 0}{\max }\left| \frac{ \#\{x|x.\Gamma _{x}=S(x).\Gamma _{y}\}}{2^{n}}-\frac{1}{2}\right| \end{aligned}$$
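A brute-force sketch of this quantity (the helper name is ours; it loops over all \(2^{8}\times 2^{8}\) nonzero mask pairs, so it is slow but direct):

```python
def linear_probability(S, n=8):
    """Max bias |#{x : x.Gx = S(x).Gy} / 2^n - 1/2| over nonzero mask pairs."""
    parity = lambda v: bin(v).count("1") & 1
    best = 0.0
    for gx in range(1, 2 ** n):
        for gy in range(1, 2 ** n):
            count = sum(parity(x & gx) == parity(S[x] & gy) for x in range(2 ** n))
            best = max(best, abs(count / 2 ** n - 0.5))
    return best
```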
Table 4 and Fig. 2 show that the linear approximation probability of the newly structured S-box is much better than those of the Skipjack, Xyi and Residue Prime S-boxes.
Fig. 2 LP of different S-boxes
Differential approximation probability
The differential approximation probability is defined as;
$$\begin{aligned} DP=\left[ \frac{\#\{x\in X|S(x)\oplus S(x\oplus \Delta x)=\Delta y\}}{2^{n}}\right] , \end{aligned}$$
where \(\Delta x\) and \(\Delta y\) are the input and output differentials respectively. In ideal conditions, the S-box shows differential uniformity (Biham and Shamir 1991); the smaller the differential uniformity, the stronger the S-box. It is evident from Table 4 and Fig. 3 that the differential approximation probability of the proposed S-box is similar to those of the AES, APA and Gray S-boxes and much better than those of the Skipjack, Xyi and Residue Prime S-boxes.
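A minimal sketch of the corresponding brute-force computation (the helper name is ours):

```python
def differential_probability(S, n=8):
    """Max over nonzero dx of the most frequent output difference, divided by 2^n."""
    best = 0
    for dx in range(1, 2 ** n):
        counts = [0] * (2 ** n)            # histogram of output differences
        for x in range(2 ** n):
            counts[S[x] ^ S[x ^ dx]] += 1
        best = max(best, max(counts))
    return best / 2 ** n
```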
Fig. 3 DP of different S-boxes
Strict avalanche criterion
For any cryptographic design, this criterion examines the behaviour of the output bits when the input bits are changed. It is desired that a change in a single input bit causes changes in half of the output bits. In other words, a function \(F:{\mathbb {F}}_{2}^{n}\rightarrow {\mathbb {F}}_{2}^{n}\) is said to satisfy the SAC if, for a change in any input bit \(i \in \{1,\,2, \ldots ,n\}\), the probability of a change in output bit \(j \in \{1,\,2, \ldots , n\}\) is 1/2. It is clear from the results shown in Table 4 and Fig. 4 that our S-box satisfies the requirements of this criterion.
Fig. 4 SAC of different S-boxes
Bit independence criterion
The criterion of bit independence, introduced by Webster and Tavares (1986), is used to analyse the behaviour of bit patterns at the output and the effects of these changes in the subsequent rounds of encryption (Tran et al. 2008). For any vectorial Boolean function \(F:{\mathbb {F}}_{2}^{n}\rightarrow {\mathbb {F}}_{2}^{n}\) and all \(i,j,k\in \{1,\,2,\ldots ,\,n\}\) with \(j\ne k\), inverting input bit i should cause output bits j and k to change independently. In cryptographic systems it is highly desirable to increase the independence between bits, as it makes the design of the system harder to understand and forecast.
The numerical results of BIC are given in Table 4 and are compared in Fig. 5. It can be observed that according to these results our S-box is quite similar to the AES, APA and Gray S-boxes.
Fig. 5 BIC of different S-boxes
Table 5 Comparison of MLC (entropy, contrast, correlation, energy, homogeneity) for the new S-box and other S-boxes on the plain image (numerical values omitted here)
In this section we present some useful statistical analysis of the new and some famous S-boxes. We apply the majority logic criterion (Hussain et al. 2012) in order to determine the effectiveness of the proposed S-box in image encryption applications.
Due to the expeditious developments in the area of digital image processing, it is quite challenging to protect digital information against different attacks. In the last few years many efficient algorithms have been presented by researchers regarding secure image encryption schemes (Bao and Zhou 2015; Gao and Chen 2008; Murguia et al. 2012; Ramirez-Torres et al. 2014; Vargas-Olmos et al. 2015, 2016). During the image encryption process distortions occur, and the strength of the encryption algorithm is characterized by the type of these distortions. We examine this using various parameters generated by different statistical analyses, namely entropy, contrast, correlation, energy and homogeneity. We begin with the entropy analysis, which measures the randomness in a system and characterizes the texture of an image. The other analyses named above are applied in combination with the entropy analysis to enhance the reliability of the results regarding the performance of an S-box. Contrast analysis measures the ability to identify objects in an image; to ensure strong encryption an elevated level of contrast is required. Correlation analysis is used to analyze the statistical properties of an S-box: it determines the similarity between the pixel patterns of the plain and the encrypted images. Energy analysis determines the energy of an encrypted image when processed by various S-boxes; this measure gives the sum of squared elements in the grey level co-occurrence matrix (GLCM). The homogeneity analysis is used to determine the closeness of the distribution of elements in the GLCM to the GLCM diagonal. It is worth mentioning that a strong encryption algorithm requires small measures of correlation, energy and homogeneity but high values of entropy and contrast.
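A minimal sketch of computing these five measures with scikit-image (the helper name is ours; the exact definitions used in the paper may differ in details such as GLCM offsets, and the entropy here is the first-order image entropy):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

def mlc_scores(img):
    """Contrast, correlation, energy and homogeneity from the GLCM, plus entropy.

    `img` is assumed to be an 8-bit grayscale array.
    """
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
    scores = {p: float(graycoprops(glcm, p)[0, 0])
              for p in ("contrast", "correlation", "energy", "homogeneity")}
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    nz = hist[hist > 0]
    scores["entropy"] = float(-np.sum(nz * np.log2(nz)))
    return scores
```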
Table 6 Results for encryption using the new S-box in different noisy environments (σ = 25, 50, 75; numerical values omitted here)
Table 7 Results for encryption using the AES S-box in different noisy environments (σ = 25, 50, 75; numerical values omitted here)
Fig. 6 Lena's plain image and its encryption using the new S-box. a Plain image. b Encrypted image
Fig. 7 Histograms of the images in Fig. 6. a Plain image. b Encrypted image
Figure 6 shows the plain image of Lena and its encryption using the new S-box. It is quite obvious from the visual results that our method of encryption creates an acceptable level of confusion.
For an image, its histogram graphically represents the image-pixel distribution by plotting the number of pixels at each intensity level (Ramirez-Torres et al. 2014). It is well established that the histograms of the original and the encrypted image should be significantly different, so that attackers cannot extract the original image from the encrypted one. Figure 7 shows the respective histograms of Lena's plain image and its encrypted version. The histogram analysis evidently proves the stability of our proposed method against histogram-based attacks.
In order to determine a quantitative measure of the efficiency of the proposed method in image encryption, the MLC is applied on a typical \(512 \times 512\) image of Lena for the new S-box, and the results are compared with those of the other famous S-boxes. The numerical results for correlation, entropy, contrast, homogeneity and energy are arranged in Table 5. It is observed that the proposed S-box satisfies all the criteria to be used for secure communication.
Fig. 8 Noise-affected images. a \(\sigma =25\), b \(\sigma =50\), c \(\sigma =75\)
Fig. 9 Encryption with the proposed S-box in noisy environments. a Encryption of Fig. 8a. b Encryption of Fig. 8b. c Encryption of Fig. 8c
Fig. 10 Encryption with the AES S-box in noisy environments. a Encryption of Fig. 8a. b Encryption of Fig. 8b. c Encryption of Fig. 8c
We may further test the performance of the proposed method in noisy environments. For this purpose, we consider \(\Omega \subset {\mathbb {Z}}^2\) as a bounded rectangular grid. Let \(U=\{u(i)\,|\, i\in \Omega \}\) and \(V=\{v(i)\,|\, i\in \Omega \}\) be the true and noisy images, respectively, such that
$$\begin{aligned} v(i)=u(i)+n(i),\quad i=(i_1,i_2)\in \Omega , \end{aligned}$$
where u(i) and \(v(i)\in {\mathbb {R}}_+\) are the grey-level intensities and n(i) is independent and identically distributed Gaussian random noise with zero mean and variance \(\sigma ^2\) at pixel \(i\in \Omega\). The continuous image is interpreted as the Shannon interpolation of the discrete grid of samples v(i) over \(\Omega\). The goal here is to test the performance of the method on the noisy image V in order to analyse its behaviour in comparison with its performance on the true image U. For this purpose three different noise levels with \(\sigma = 25\), 50 and 75 are considered in Fig. 8. It can be observed that in noisy environments slight variations occur in the visual quality and the quantitative results, as shown in Fig. 9 and Table 6. One can see that the entropy of the noise-corrupted pixels decreases as the level of Gaussian random noise increases, which shows that most of the pixels adopt similar grey levels in the random data instead of the particular arrangement of pixels in the original image. The contrast level also decreases with increasing noise level, and similar changes in the other parameters can be observed. A comparative analysis performed by applying the AES S-box at the same noise levels is shown in Table 7 and Fig. 10. One can observe that, as the noise increases, the visual and numerical results obtained by the newly designed S-box remain better than, or at least very similar to, those of the state-of-the-art AES S-box (Daemen and Rijmen 2002).
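A minimal sketch of generating such noisy test images (the helper name is ours; clipping back to the 8-bit range is our assumption):

```python
import numpy as np

def add_gaussian_noise(u, sigma, seed=0):
    """v(i) = u(i) + n(i) with n ~ N(0, sigma^2) i.i.d. over the pixel grid."""
    rng = np.random.default_rng(seed)
    v = u.astype(float) + rng.normal(0.0, sigma, size=u.shape)
    return np.clip(v, 0, 255).astype(np.uint8)   # keep values in the 8-bit range
```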
Based on the experimental results regarding the overall performance of our proposed algorithm, it is demonstrated that the newly synthesized S-box satisfies all the criteria of acceptability to be used for secure communication.
In this work we propose an S-box structured by an extremely simple and direct algorithm. Its strength is analyzed by several tests, and it is evident that its confusion-creating capability is quite high compared with some other very famous S-boxes. The algebraic complexity based on the fractional linear transformation produces ideal results that make this S-box authentic and more reliable.
The corresponding author acknowledges that every author has agreed with the information contained in the author roles, originality statement, and competing interests declaration. All authors read and approved the final manuscript.
The authors are thankful to the anonymous reviewers for their constructive suggestions to improve the quality of the presented work. They also express gratitude to BioMed Central for waiving the cost of processing this manuscript.
Department of Mathematics, COMSATS Institute of Information Technology, Wah Cantt, Pakistan
Department of Mathematics, Quaid-i-Azam University, Islamabad, Pakistan
Abuelyman ES, Alsehibani AAS (2008) An optimized implementation of the S-box using residue of prime numbers. Int J Comput Sci Netw Secur 8:304–309
Bao L, Zhou Y (2015) Image encryption: generating visually meaningful encrypted images. Inf Sci 324(10):197–207
Benvenuto CJ (2012) Galois field in cryptography. University of Washington, Seattle
Biham E, Shamir A (1991) Differential cryptanalysis of DES-like cryptosystems. J Cryptol 4(1):3–72
Carlet C, Ding C (2004) Highly nonlinear mappings. J Complex 20:205–244
Carlet C, Ding C (2007) Nonlinearity of S-boxes. Finite Fields Appl 13:121–135
Cui L, Cao Y (2007) A new S-box structure named Affine-Power-Affine. Int J Innov Comput I 3(3):45–53
Daemen J, Rijmen V (2002) The design of Rijndael-AES: the advanced encryption standard. Springer, Berlin
Detombe J, Tavares S (1992) Constructing large cryptographically strong S-boxes. In: Advances in cryptology-AUSCRYPT'92. Lecture notes in computer science, vol 718, pp 165–181
Gao T, Chen Z (2008) A new image encryption algorithm based on hyper-chaos. Phys Lett A 372(4):394–400
Hussain I, Shah T, Gondal MA, Khan M, Khan WA (2011) Construction of new S-box using linear fractional transformation. World Appl Sci J 14(12):1779–1785
Hussain I, Shah T, Gondal MA, Mahmood H (2012) Generalized majority logic criterion to analyze the statistical strength of S-boxes. Z Naturforsch A 67:282–288
Hussain I, Shah T, Gondal MA, Khan WA, Mahmood H (2013a) A group theoretic approach to construct cryptographically strong substitution boxes. Neural Comput Appl 23(1):97–104
Hussain I, Shah T, Mahmood H, Gondal MA (2013b) A projective general linear group based algorithm for the construction of substitution box for block ciphers. Neural Comput Appl 22(6):1085–1093
Kellen D (2014) Implementation of bit-vector variables in a CP solver, with an application to the generation of cryptographic S-boxes. Masters dissertation, Uppsala Universitet
Kim J, Phan RCW (2009) Advanced differential-style cryptanalysis of the NSA's Skipjack block cipher. Cryptologia 33(3):246–270
Matsui M (1998) Linear cryptanalysis method for DES cipher. In: Proceedings of EUROCRYPT'93. Springer, Berlin, pp 386–397
Murguia JS, Flores-Erana G, Carlos MM, Rosu HC (2012) Matrix approach of an encryption system based on cellular automata and its numerical implementation. Int J Mod Phys C 23(11):1250078
Nyberg K (1992) Perfect nonlinear S-boxes. In: Advances in cryptology-EUROCRYPT'91. Lecture notes in computer science, vol 457. Springer, pp 378–386
Nyberg K (1993) On the construction of highly nonlinear permutations. In: Advances in cryptology-EUROCRYPT'92. Lecture notes in computer science, vol 658. Springer, Heidelberg, pp 92–98
Ramamoorthy V, Silaghi MC, Matsui T, Hirayama K, Yokoo M (2011) The design of cryptographic S-boxes using CSPs. In: Principles and practice of constraint programming. Lecture notes in computer science, vol 6876. Springer, Berlin, pp 54–68
Ramirez-Torres MT, Murguia JS, Carlos MM (2014) Image encryption with an improved cryptosystem based on a matrix approach. Int J Mod Phys C 25(10):1450054
Shannon CE (1949) Communication theory of secrecy systems. Bell Syst Tech J 28:656–715
Shi XY, You XCXH, Lam KY (2002) A method for obtaining cryptographically strong \(8\times 8\) S-boxes. Int Conf Inf Netw Appl 2(3):14–20
Tran MT, Bui DK, Doung AD (2008) Gray S-box for advanced encryption standard. In: International conference on computational intelligence and security, pp 253–256
Vargas-Olmos C, Murguia JS, Ramirez-Torres MT, Carlos MM, Rosu HC, Gonzalez Aguilar H (2015) Two-dimensional DFA scaling analysis applied to encrypted images. Int J Mod Phys C 26(08):1550093
Vargas-Olmos C, Murguia JS, Ramirez-Torres MT, Carlos MM, Rosu HC, Gonzalez-Aguilar H (2016) Perceptual security of encrypted images based on wavelet scaling analysis. Phys A 456:22–30
Webster AF, Tavares SE (1986) On the design of S-boxes. In: Advances in cryptology: proceedings of CRYPTO'85. Springer, Berlin, pp 523–534
# Discrete Mathematics
Lecture Notes, Yale University, Spring 1999
## L. Lovász and K. Vesztergombi
Parts of these lecture notes are based on
L. Lovász - J. Pelikán - K. Vesztergombi: Kombinatorika
(Tankönyvkiadó, Budapest, 1972);
Chapter 14 is based on a section in
L. Lovász - M.D. Plummer: Matching Theory
(Elsevier, Amsterdam, 1986)
## Introduction
For most students, the first and often only area of mathematics in college is calculus. And it is true that calculus is the single most important field of mathematics, whose emergence in the 17th century signalled the birth of modern mathematics and was the key to the successful applications of mathematics in the sciences.
But calculus (or analysis) is also very technical. It takes a lot of work even to introduce its fundamental notions like continuity or derivatives (after all, it took 2 centuries just to define these notions properly). To get a feeling for the power of its methods, say by describing one of its important applications in detail, takes years of study.
If you want to become a mathematician, computer scientist, or engineer, this investment is necessary. But if your goal is to develop a feeling for what mathematics is all about, where mathematical methods can be helpful, and what kinds of questions mathematicians work on, you may want to look for the answer in some other fields of mathematics.
There are many success stories of applied mathematics outside calculus. A recent hot topic is mathematical cryptography, which is based on number theory (the study of the positive integers $1,2,3,\ldots$), and is widely applied, among others, in computer security and electronic banking. Other important areas in applied mathematics include linear programming, coding theory, and the theory of computing. The mathematics in these applications is collectively called discrete mathematics. ("Discrete" here is used as the opposite of "continuous"; it is also often used in the more restrictive sense of "finite".)
The aim of this book is not to cover "discrete mathematics" in depth (it should be clear from the description above that such a task would be ill-defined and impossible anyway). Rather, we discuss a number of selected results and methods, mostly from the areas of combinatorics, graph theory, and combinatorial geometry, with a little elementary number theory.
At the same time, it is important to realize that mathematics cannot be done without proofs. Merely stating the facts, without saying something about why these facts are valid, would be terribly far from the spirit of mathematics and would make it impossible to give any idea about how it works. Thus, wherever possible, we'll give the proofs of the theorems we state. Sometimes this is not possible; quite simple, elementary facts can be extremely difficult to prove, and some such proofs may take advanced courses to go through. In these cases, we'll state at least that the proof is highly technical and goes beyond the scope of this book.
Another important ingredient of mathematics is problem solving. You won't be able to learn any mathematics without dirtying your hands and trying out the ideas you learn about in the solution of problems. To some, this may sound frightening, but in fact most people pursue this type of activity almost every day: everybody who plays a game of chess, or solves a puzzle, is solving discrete mathematical problems. The reader is strongly advised to answer the questions posed in the text and to go through the problems at the end of each chapter of this book. Treat it as puzzle solving, and if an idea you come up with in a solution turns out to play some role later, be satisfied that you are beginning to get the essence of how mathematics develops.
We hope that we can illustrate that mathematics is a building, where results are built on earlier results, often going back to the great Greek mathematicians; that mathematics is alive, with more new ideas and more pressing unsolved problems than ever; and that mathematics is an art, where the beauty of ideas and methods is as important as their difficulty or applicability.
## Let us count!
### A party
Alice invites six guests to her birthday party: Bob, Carl, Diane, Eve, Frank and George. When they arrive, they shake hands with each other (strange European custom). This group is strange anyway, because one of them asks: "How many handshakes does this mean?"
"I shook 6 hands altogether" says Bob, "and I guess, so did everybody else."
"Since there are seven of us, this should mean $7 \cdot 6=42$ handshakes" ventures Carl.
"This seems too many" says Diane. "The same logic gives 2 handshakes if two persons meet, which is clearly wrong."
"This is exactly the point: every handshake was counted twice. We have to divide 42 by 2 , to get the right number: 21." settles Eve the issue.
When they go to the table, Alice suggests:
"Let's change the seating every half an hour, until we get every seating."
"But you stay at the head of the table" says George, "since you have your birthday."
How long is this party going to last? How many different seatings are there (with Alice's place fixed)?
Let us fill the seats one by one, starting with the chair on Alice's right. We can put here any of the 6 guests. Now look at the second chair. If Bob sits on the first chair, we can put here any of the remaining 5 guests; if Carl sits there, we again have 5 choices, etc. So the number of ways to fill the first two chairs is $5+5+5+5+5+5=6 \cdot 5=30$. Similarly, no matter how we fill the first two chairs, we have 4 choices for the third chair, which gives $6 \cdot 5 \cdot 4$ ways to fill the first three chairs. Going on similarly, we find that the number of ways to seat the guests is $6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1=720$.
If they change seats every half an hour, it takes 360 hours, that is, 15 days to go through all seating orders. Quite a party, at least as far as the duration goes!
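For readers who like to check such counts by machine, here is a small Python sketch (our own addition, not part of the original argument); it verifies both the product reasoning and the brute-force count:

```python
import math
from itertools import permutations

guests = ["Bob", "Carl", "Diane", "Eve", "Frank", "George"]

# The product argument: 6 choices for the first chair, then 5, then 4, ...
product_count = math.factorial(len(guests))   # 6*5*4*3*2*1 = 720

# Brute force: explicitly generate every ordering of the six guests.
brute_count = sum(1 for _ in permutations(guests))

assert product_count == brute_count == 720
print(product_count // 2, "hours, i.e.,", product_count // 48, "days")
```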
2.1 How many ways can these people be seated at the table, if Alice too can sit anywhere?
After the cake, the crowd wants to dance (boys with girls, remember, this is a conservative European party). How many possible pairs can be formed?
OK, this is easy: there are 3 girls, and each can choose one of 4 guys, which makes $3 \cdot 4=12$ possible pairs.
After about ten days, they really need some new ideas to keep the party going. Frank has one:
"Let's pool our resources and win a lot on the lottery! All we have to do is to buy enough tickets so that no matter what they draw, we should have a ticket with the right numbers. How many tickets do we need for this?"
(In the lottery they are talking about, 5 numbers are selected from 90.)
"This is like the seating" says George, "Suppose we fill out the tickets so that Alice marks a number, then she passes the ticket to Bob, who marks a number and passes it to Carl, ... Alice has 90 choices, no matter what she chooses, Bob has 89 choices, so there are $90 \cdot 89$ choices for the first two numbers, and going on similarly, we get $90 \cdot 89 \cdot 88 \cdot 87 \cdot 86$ possible choices for the five numbers."
"Actually, I think this is more like the handshake question" says Alice. "If we fill out the tickets the way you suggested, we get the same ticket more then once. For example, there will be a ticket where I mark 7 and Bob marks 23, and another one where I mark 23 and Bob marks 7."
Carl jumped up:
"Well, let's imagine a ticket, say, with numbers 7,23,31,34 and 55. How many ways do we get it? Alice could have marked any of them; no matter which one it was that she marked, Bob could have marked any of the remaining four. Now this is really like the seating problem. We get every ticket $5 \cdot 4 \cdot 3 \cdot 2 \cdot 1$ times."
"So" concludes Diane, "if we fill out the tickets the way George proposed, then among the $90 \cdot 89 \cdot 88 \cdot 87 \cdot 86$ tickets we get, every 5 -tuple occurs not only once, but $5 \cdot 4 \cdot 3 \cdot 2 \cdot 1$ times. So the number of different tickets is only
$$
\frac{90 \cdot 89 \cdot 88 \cdot 87 \cdot 86}{5 \cdot 4 \cdot 3 \cdot 2 \cdot 1}
$$
We only need to buy this number of tickets."
Somebody with a good pocket calculator computed this value at a glance; it was $43,949,268$. So they had to realize (remember, this happens in a poor European country) that they don't have enough money to buy so many tickets. (Besides, they would win much less. And to fill out so many tickets would spoil the party...)
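(A quick machine check, again our own Python addition: the division in Diane's formula comes out to an integer, and it agrees with the built-in binomial coefficient.)

```python
import math

# Diane's count: ordered choices, divided by the orderings of one ticket.
tickets = (90 * 89 * 88 * 87 * 86) // (5 * 4 * 3 * 2 * 1)

assert tickets == math.comb(90, 5) == 43_949_268
```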
So they decide to play cards instead. Alice, Bob, Carl and Diane play bridge. Looking at his cards, Carl says: "I think I had the same hand last time."
"This is very unlikely" says Diane.
How unlikely is it? In other words, how many different hands can you have in bridge? (The deck has 52 cards, each player gets 13.) I hope you have noticed it: this is essentially the same question as the lottery. Imagine that Carl picks up his cards one by one. The first card can be any one of the 52 cards; whatever he picked up first, there are 51 possibilities for the second card, so there are $52 \cdot 51$ possibilities for the first two cards. Arguing similarly, we see that there are $52 \cdot 51 \cdot 50 \cdot \ldots \cdot 40$ possibilities for the 13 cards.
But now every hand was counted many times. In fact, if Eve comes to kibitz and looks into Carl's cards after he arranged them, and tries to guess (I don't know why) the order in which he picked them up, she could think: "He could have picked up any of the 13 cards first; he could have picked up any of the remaining 12 cards second; any of the remaining 11 cards third; $\ldots$ Aha, this is again like the seating: there are $13 \cdot 12 \cdot \ldots \cdot 2 \cdot 1$ orders in which he could have picked up his cards."
But this means that the number of different hands in bridge is
$$
\frac{52 \cdot 51 \cdot 50 \cdot \ldots \cdot 40}{13 \cdot 12 \cdot \ldots \cdot 2 \cdot 1}=635,013,559,600
$$
So the chance that Carl had the same hand twice in a row is one in $635,013,559,600$, very small indeed.
Finally, the six guests decide to play chess. Alice, who just wants to watch them, sets up 3 boards. "How many ways can you guys be matched with each other?" she wonders. "This is clearly the same problem as seating you on six chairs; it does not matter whether the chairs are around the dinner table or at the three boards. So the answer is 720 as before."
"I think you should not count it as a different matching if two people at the same board switch places" says Bob, "and it should not matter which pair sits at which table."
"Yes, I think we have to agree on what the question really means" adds Carl. "If we include in it who plays white on each board, then if a pair switches places we do get a different matching. But Bob is right that it does not matter which pair uses which board."
"What do you mean it does not matter? You sit at the first table, which is closest to the peanuts, and I sit at the last, which is farthest" says Diane.
"Let's just stick to Bob's version of the question" suggests Eve. "It is not hard, actually. It is like with handshakes: Alice's figure of 720 counts every matching several times. We could rearrange the tables in 6 different ways, without changing the matching."
"And each pair may or may not switch sides" adds Frank. "This means $2 \cdot 2 \cdot 2=8$ ways to rearrange people without changing the matching. So in fact there are $6 \cdot 8=48$ ways to sit which all mean the same matching. The 720 seatings come in groups of 48 , and so the number of matchings is $720 / 48=15 . "$
"I think there is another way to get this" says Alice after a little time. "Bob is youngest, so let him choose a partner first. He can choose his partner in 5 ways. Whoever is youngest among the rest, can choose his or her partner in 3 ways, and this settles the matching. So the number of matchings is $5 \cdot 3=15$."
"Well, it is nice to see that we arrived at the same figure by two really different arguments. At the least, it is reassuring" says Bob, and on this happy note we leave the party.
2.2 What is the number of "matchings" in Carl's sense (when it matters who sits on which side of the board, but the boards are all alike), and in Diane's sense (when it is the other way around)?
### Sets and the like
We want to formalize assertions like "the problem of counting the number of hands in bridge is essentially the same as the problem of counting tickets in the lottery". The usual tool in mathematics to do so is the notion of a set. Any collection of things, called elements, is a set. The deck of cards is a set, whose elements are the cards. The participants of the party form a set, whose elements are Alice, Bob, Carl, Diane, Eve, Frank and George (let us denote this set by $P$ ). Every lottery ticket contains a set of 5 numbers.
For mathematics, various sets of numbers are important: the set of real numbers, denoted by $\mathbb{R}$; the set of rational numbers, denoted by $\mathbb{Q}$; the set of integers, denoted by $\mathbb{Z}$; the set of non-negative integers, denoted by $\mathbb{Z}_{+}$; the set of positive integers, denoted by $\mathbb{N}$. The empty set, the set with no elements, is another important (although not very interesting) set; it is denoted by $\emptyset$.
If $A$ is a set and $b$ is an element of $A$, we write $b \in A$. The number of elements of a set $A$ (also called the cardinality of $A$ ) is denoted by $|A|$. Thus $|P|=7 ;|\emptyset|=0$; and $|\mathbb{Z}|=\infty$ (infinity). ${ }^{1}$
${ }^{1}$ In mathematics, one can distinguish various levels of "infinity"; for example, one can distinguish between the cardinalities of $\mathbb{Z}$ and $\mathbb{R}$. This is the subject matter of set theory and does not concern us here.

We may specify a set by listing its elements between braces; so
$$
P=\{\text { Alice, Bob, Carl, Diane, Eve, Frank, George }\}
$$
is the set of participants of Alice's birthday party, or
$$
\{7,23,31,34,55\}
$$

is the set of numbers on my uncle's lottery ticket. Sometimes we replace the list by a verbal description, like
$$
\text { \{Alice and her guests\}. }
$$
Often we specify a set by a property that singles out the elements from a large universe like real numbers. We then write this property inside the braces, but after a colon. Thus
$$
\{x \in \mathbb{Z}: x \geq 0\}
$$
is the set of non-negative integers (which we have called $\mathbb{Z}_{+}$before), and
$$
\{x \in P: x \text { is a girl }\}=\{\text { Alice, Diane, Eve }\}
$$
(we denote this set by $G$ ). Let me also tell you that
$$
D=\{x \in P: x \text { is over } 21\}=\{\text { Alice, Carl, Frank }\}
$$
(we denote this set by $D$ ).
A set $A$ is called a subset of a set $B$, if every element of $A$ is also an element of $B$. In other words, $A$ consists of certain elements of $B$. We allow that $A$ consists of all elements of $B$ (in which case $A=B$ ), or none of them (in which case $A=\emptyset$ ). So the empty set is a subset of every set. The relation that $A$ is a subset of $B$ is denoted by
$$
A \subseteq B
$$
For example, among the various sets of people considered above, $G \subseteq P$ and $D \subseteq P$. Among the sets of numbers, we have a long chain:
$$
\emptyset \subseteq \mathbb{N} \subseteq \mathbb{Z}_{+} \subseteq \mathbb{Z} \subseteq \mathbb{Q} \subseteq \mathbb{R}
$$
The intersection of two sets is the set consisting of those elements that are elements of both sets. The intersection of two sets $A$ and $B$ is denoted by $A \cap B$. For example, we have $G \cap D=\{$ Alice $\}$. Two sets whose intersection is the empty set (in other words, which have no element in common) are called disjoint.
2.3 Name sets whose elements are (a) buildings, (b) people, (c) students, (d) trees, (e) numbers, (f) points.
2.4 What are the elements of the following sets: (a) army, (b) mankind, (c) library, (d) the animal kingdom?
2.5 Name sets having cardinality (a) 52, (b) 13, (c) 32, (d) 100, (e) 90, (f) 2,000,000.
2.6 What are the elements of the following (admittedly peculiar) set: $\{$ Alice, $\{1\}\}$ ?
2.7 We have not written up all subset relations between various sets of numbers; for example, $\mathbb{Z} \subseteq \mathbb{R}$ is also true. How many such relations can you find between the sets $\emptyset, \mathbb{N}, \mathbb{Z}_{+}, \mathbb{Z}, \mathbb{Q}, \mathbb{R} ?$
2.8 Is an "element of a set" a special case of a "subset of a set"?
2.9 List all subsets of $\{0,1,3\}$. How many do you get?
2.10 Define at least three sets, of which $\{$ Alice, Diane, Eve $\}$ is a subset.
2.11 List all subsets of $\{a, b, c, d, e\}$, containing $a$ but not containing $b$.
2.12 Define a set, of which both $\{1,3,4\}$ and $\{0,3,5\}$ are subsets. Find such a set with a smallest possible number of elements.
2.13 (a) Which set would you call the union of $\{a, b, c\},\{a, b, d\}$ and $\{b, c, d, e\}$ ?
(b) Find the union of the first two sets, and then the union of this with the third. Also, find the union of the last two sets, and then the union of this with the first set. Try to formulate what you observed.
(c) Give a definition of the union of more than two sets.
2.14 Explain the connection between the notion of the union of sets and exercise 2.2.
2.15 We form the union of a set with 5 elements and a set with 9 elements. Which of the following numbers can we get as the cardinality of the union: $4,6,9,10,14,20$ ?
2.16 We form the union of two sets. We know that one of them has $n$ elements and the other has $m$ elements. What can we infer for the cardinality of the union?
2.17 What is the intersection of
(a) the sets $\{0,1,3\}$ and $\{1,2,3\}$;
(b) the set of girls in this class and the set of boys in this class;
(c) the set of prime numbers and the set of even numbers?
2.18 We form the intersection of two sets. We know that one of them has $n$ elements and the other has $m$ elements. What can we infer for the cardinality of the intersection?
2.19 Prove that $|A \cup B|+|A \cap B|=|A|+|B|$.
2.20 The symmetric difference of two sets $A$ and $B$ is the set of elements that belong to exactly one of $A$ and $B$.
(a) What is the symmetric difference of the set $\mathbb{Z}_{+}$ of non-negative integers and the set $E$ of even integers ($E=\{\ldots,-4,-2,0,2,4,\ldots\}$ contains both negative and positive even integers)?
(b) Form the symmetric difference of $A$ and $B$, to get a set $C$. Form the symmetric difference of $A$ and $C$. What did you get? Give a proof of the answer.
### The number of subsets
Now that we have introduced the notion of subsets, we can formulate our first general combinatorial problem: what is the number of all subsets of a set with $n$ elements?
We start with trying out small numbers. It plays no role what the elements of the set are; we call them $a, b, c$ etc. The empty set has only one subset (namely, itself). A set with a single element, say $\{a\}$, has two subsets: the set $\{a\}$ itself and the empty set $\emptyset$. A set with two elements, say $\{a, b\}$, has four subsets: $\emptyset,\{a\},\{b\}$ and $\{a, b\}$. It takes a little more effort to list all the subsets of a set $\{a, b, c\}$ with 3 elements:
$$
\emptyset,\{a\},\{b\},\{c\},\{a, b\},\{b, c\},\{a, c\},\{a, b, c\} .
$$
We can make a little table from these data:
| No. of elements | 0 | 1 | 2 | 3 |
| :---: | :--- | :--- | :--- | :--- |
| No. of subsets | 1 | 2 | 4 | 8 |
Looking at these values, we observe that the number of subsets is a power of 2 : if the set has $n$ elements, the result is $2^{n}$, at least on these small examples.
It is not difficult to see that this is always the answer. Suppose you have to select a subset of a set $A$ with $n$ elements; let us call these elements $a_{1}, a_{2}, \ldots, a_{n}$. Then we may or may not want to include $a_{1}$, in other words, we can make two possible decisions at this point. No matter how we decided about $a_{1}$, we may or may not want to include $a_{2}$ in the subset; this means two possible decisions, and so the number of ways we can decide about $a_{1}$ and $a_{2}$ is $2 \cdot 2=4$. Now no matter how we decide about $a_{1}$ and $a_{2}$, we have to decide about $a_{3}$, and we can again decide in two ways. Each of these ways can be combined with each of the 4 decisions we could have made about $a_{1}$ and $a_{2}$, which makes $4 \cdot 2=8$ possibilities to decide about $a_{1}, a_{2}$ and $a_{3}$.
We can go on similarly: no matter how we decide about the first $k$ elements, we have two possible decisions about the next, and so the number of possibilities doubles whenever we take a new element. For deciding about all the $n$ elements of the set, we have $2^{n}$ possibilities.
Thus we have proved the following theorem.
Theorem 2.1 A set with $n$ elements has $2^{n}$ subsets.
We can illustrate the argument in the proof by the picture in Figure 1.
We read this figure as follows. We want to select a subset called $S$. We start from the circle on the top (called a node). The node contains a question: is $a_{1}$ an element of $S$ ? The two arrows going out of this node are labeled with the two possible answers to this question (Yes and No). We make a decision and follow the appropriate arrow (also called an edge) to the node at the other end. This node contains the next question: is $a_{2}$ an element of $S$ ? Follow the arrow corresponding to your answer to the next node, which contains the third (and in this case last) question you have to answer to determine the subset: is $a_{3}$ an element of $S$ ? Giving an answer and following the appropriate arrow we get to a node, which contains a listing of the elements of $S$.
Thus to select a subset corresponds to walking down this diagram from the top to the bottom. There are just as many subsets of our set as there are nodes on the last level.
Figure 1: A decision tree for selecting a subset of $\{a, b, c\}$.
Since the number of nodes doubles from level to level as we go down, the last level contains $2^{3}=8$ nodes (and if we had an $n$-element set, it would contain $2^{n}$ nodes).
Remark. A picture like this is called a tree. (This is not a mathematical definition, which we'll see later.) If you want to know why is the tree growing upside down, ask the computer scientists who introduced it, not us.
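The decision tree translates almost word for word into a recursive program. The following Python sketch is our own illustration (the function name is ours): at each element it branches into the "No" and "Yes" decisions, exactly as in Figure 1.

```python
def subsets(elements):
    """List all subsets, following the decision tree."""
    if not elements:
        return [[]]                  # one subset of the empty set: the empty set
    first, rest = elements[0], elements[1:]
    without_first = subsets(rest)    # decision "No": leave out the first element
    with_first = [[first] + s for s in without_first]   # decision "Yes"
    return without_first + with_first

print(subsets(["a", "b", "c"]))      # 8 subsets, as Theorem 2.1 predicts
assert len(subsets(list(range(10)))) == 2**10
```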
We can give another proof of theorem 2.1. Again, the answer will be made clear by asking a question about subsets. But now we don't want to select a subset; what we want is to enumerate subsets, which means that we want to label them with numbers $0,1,2, \ldots$ so that we can speak, say, about subset No. 23 of the set. In other words, we want to arrange the subsets of the set in a list and then speak about the 23rd subset on the list.
(We actually want to call the first subset of the list No. 0, the second subset on the list No. 1 etc. This is a little strange but this time it is the logicians who are to blame. In fact, you will find this quite natural and handy after a while.)
There are many ways to order the subsets of a set to form a list. A fairly natural thing to do is to start with $\emptyset$, then list all subsets with 1 element, then list all subsets with 2 elements, etc. This is the way the list (1) is put together.
We could order the subsets as in a phone book. This method will be more transparent if we write the subsets without braces and commas. For the subsets of $\{a, b, c\}$, we get the list
$$
\emptyset, a, a b, a b c, a c, b, b c, c .
$$
These are indeed useful and natural ways of listing all subsets. They have one shortcoming though. Imagine the list of the subsets of five elements, and ask yourself to name the 23rd subset on the list, without actually writing down the whole list. This will be difficult! Is there a way to make this easier?
Let us start with another way of denoting subsets (another encoding in the mathematical jargon). We illustrate it on the subsets of $\{a, b, c\}$. We look at the elements one by one, and write down a 1 if the element occurs in the subset and a 0 if it does not. Thus for the subset $\{a, c\}$, we write down 101, since $a$ is in the subset, $b$ is not, and $c$ is in it again. This way every subset is "encoded" by a string of length 3, consisting of 0's and 1's. If we specify any such string, we can easily read off the subset it corresponds to. For example, the string 010 corresponds to the subset $\{b\}$, since the first 0 tells us that $a$ is not in the subset, the 1 that follows tells us that $b$ is in there, and the last 0 tells us that $c$ is not there.
Now such strings consisting of 0's and 1's remind us of the binary representation of integers (in other words, representations in base 2). Let us recall the binary form of nonnegative integers up to 10 :
$$
\begin{aligned}
0 & =0_{2} \\
1 & =1_{2} \\
2 & =10_{2} \\
3 & =2+1=11_{2} \\
4 & =100_{2} \\
5 & =4+1=101_{2} \\
6 & =4+2=110_{2} \\
7 & =4+2+1=111_{2} \\
8 & =1000_{2} \\
9 & =8+1=1001_{2} \\
10 & =8+2=1010_{2}
\end{aligned}
$$
(We put the subscript 2 there to remind ourselves that we are working in base 2, not 10.)
Now the binary forms of integers $0,1, \ldots, 7$ look almost like the "codes" of subsets; the difference is that the binary form of a positive integer always starts with a 1, and the first 4 of these integers have binary forms shorter than 3, while all codes of subsets consist of exactly 3 digits. We can make this difference disappear if we append 0's to the binary forms at their beginning, to make them all have the same length. This way we get the following correspondence:
$$
\begin{aligned}
& 0 \Leftrightarrow 0_{2} \Leftrightarrow 000 \Leftrightarrow \emptyset \\
& 1 \Leftrightarrow 1_{2} \Leftrightarrow 001 \Leftrightarrow\{c\} \\
& 2 \Leftrightarrow 10_{2} \Leftrightarrow 010 \Leftrightarrow\{b\} \\
& 3 \Leftrightarrow 11_{2} \Leftrightarrow 011 \Leftrightarrow\{b, c\} \\
& 4 \Leftrightarrow 100_{2} \Leftrightarrow 100 \Leftrightarrow\{a\} \\
& 5 \Leftrightarrow 101_{2} \Leftrightarrow 101 \Leftrightarrow\{a, c\} \\
& 6 \Leftrightarrow 110_{2} \Leftrightarrow 110 \Leftrightarrow\{a, b\} \\
& 7 \Leftrightarrow 111_{2} \Leftrightarrow 111 \Leftrightarrow\{a, b, c\}
\end{aligned}
$$
So we see that the subsets of $\{a, b, c\}$ correspond to the numbers $0,1, \ldots, 7$.
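In Python, this correspondence can be spelled out in a few lines (our own sketch; `format(number, "03b")` produces the binary form padded with leading 0's):

```python
elements = ["a", "b", "c"]
n = len(elements)

for number in range(2**n):              # the numbers 0, 1, ..., 2^n - 1
    bits = format(number, f"0{n}b")     # binary form, padded to n bits
    subset = [e for e, bit in zip(elements, bits) if bit == "1"]
    print(number, bits, subset)         # e.g. 5 101 ['a', 'c']
```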
What happens if we consider, more generally, subsets of a set with $n$ elements? We can argue just like above, to get that the subsets of an $n$-element set correspond to integers, starting with 0, and ending with the largest integer that has only $n$ digits in its binary representation (digits in the binary representation are usually called bits). Now the smallest number with $n+1$ bits is $2^{n}$, so the subsets correspond to the numbers $0,1,2, \ldots, 2^{n}-1$. It is clear that the number of these numbers is $2^{n}$, hence the number of subsets is $2^{n}$.

Comments. We have given two proofs of theorem 2.1. You may wonder why we needed two proofs. Certainly not because a single proof would not have given enough confidence in the truth of the statement! Unlike in a legal procedure, a mathematical proof either gives absolute certainty or else it is useless. No matter how many incomplete proofs we give, they don't add up to a single complete proof.
For that matter, we could ask you to take our word for it, and not give any proof. Later in some cases this will be necessary, when we will state theorems whose proof is too long or too involved to be included in these notes.
So why did we bother to give any proof, let alone two proofs of the same statement? The answer is that every proof reveals much more than just the bare fact stated in the theorem, and this plus may be even more valuable. For example, the first proof given above introduced the idea of breaking down the selection of a subset into independent decisions, and the representation of this idea by a tree.
The second proof introduced the idea of enumerating these subsets (labeling them with integers $0,1,2, \ldots)$. We also saw an important method of counting: we established a correspondence between the objects we wanted to count (the subsets) and some other kinds of objects that we can count easily (the numbers $0,1, \ldots, 2^{n}-1$ ). In this correspondence
- for every subset, we had exactly one corresponding number, and
- for every number, we had exactly one corresponding subset.
A correspondence with these properties is called a one-to-one correspondence (or bijection). If we can make a one-to-one correspondence between the elements of two sets, then they have the same number of elements.
So we know that the number of subsets of a 100 -element set is $2^{100}$. This is a large number, but how large? It would be good to know, at least, how many digits it will have in the usual decimal form. Using computers, it would not be too hard to find the decimal form of this number, but let's try to estimate at least the order of magnitude of it.
We know that $2^{3}=8<10$, and hence $2^{99}<10^{33}$. Therefore, $2^{100}<2 \cdot 10^{33}$. Now $2 \cdot 10^{33}$ is a 2 followed by 33 zeroes; it has 34 digits, and therefore $2^{100}$ has at most 34 digits.
We also know that $2^{10}=1024>1000=10^{3} .{ }^{2}$ Hence $2^{100}>10^{30}$, which means that $2^{100}$ has at least 30 digits.
This gives us a reasonably good idea of the size of $2^{100}$. With a little more high school math, we can get the number of digits exactly. What does it mean that a number has exactly $k$ digits? It means that it is between $10^{k-1}$ and $10^{k}$ (the lower bound is allowed, the upper is not). We want to find the value of $k$ for which
$$
10^{k-1} \leq 2^{100}<10^{k}
$$
Now we can write $2^{100}$ in the form $10^{x}$, only $x$ will not be an integer: the appropriate value of $x$ is $x=\lg 2^{100}=100 \lg 2$. We have then
$$
k-1 \leq x<k,
$$
which means that $k-1$ is the largest integer not exceeding $x$. Mathematicians have a name for this: it is the integer part or floor of $x$, and it is denoted by $\lfloor x\rfloor$. We can also say that we obtain $k-1$ by rounding $x$ down to the next integer. There is also a name for the number obtained by rounding $x$ up to the next integer: it is called the ceiling of $x$, and denoted by $\lceil x\rceil$.

${ }^{2}$ The fact that $2^{10}$ is so close to $10^{3}$ is used - or rather misused - in the name "kilobyte", which means 1024 bytes, although it should mean 1000 bytes, just like a "kilogram" means 1000 grams. Similarly, "megabyte" means $2^{20}$ bytes, which is close to 1 million bytes, but not exactly the same.
Using any scientific calculator (or table of logarithms), we see that $\lg 2 \approx 0.30103$, thus $100 \lg 2 \approx 30.103$, and rounding this down we get that $k-1=30$. Thus $2^{100}$ has 31 digits.
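This computation is easy to replay by machine; the few lines of Python below (ours, for illustration) check both the logarithmic estimate and, using exact integer arithmetic, the number itself:

```python
import math

x = 100 * math.log10(2)        # 2^100 = 10^x
k = math.floor(x) + 1          # number of digits: round x down, add 1
assert k == 31

# Python's exact big integers allow a direct check:
assert len(str(2**100)) == 31
```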
2.21 Under the correspondence between numbers and subsets described above, which numbers correspond to subsets with 1 element?
2.22 What is the number of subsets of a set with $n$ elements, containing a given element?
2.23 What is the number of integers with (a) at most $n$ (decimal) digits; (b) exactly $n$ digits?
2.24 How many bits (binary digits) does $2^{100}$ have if written in base 2 ?
2.25 Find a formula for the number of digits of $2^{n}$.
### Sequences
Motivated by the "encoding" of subsets as strings of 0's and 1's, we may want to determine the number of strings of length $n$ composed of some other set of symbols, for example, $a$, $b$ and $c$. The argument we gave for the case of 0's and 1's can be carried over to this case without any essential change. We can observe that for the first element of the string, we can choose any of $a, b$ and $c$, that is, we have 3 choices. No matter what we choose, there are 3 choices for the second element of the string, so the number of ways to choose the first two elements is $3^{2}=9$. Going on in a similar manner, we get that the number of ways to choose the whole string is $3^{n}$.
In fact, the number 3 has no special role here; the same argument proves the following theorem:
Theorem 2.2 The number of strings of length $n$ composed of $k$ given elements is $k^{n}$.
The following problem leads to a generalization of this question. Suppose that a database has 4 fields: the first, containing an 8-character abbreviation of an employee's name; the second, M or F for sex; the third, the birthday of the employee, in the format mm-dd-yy (disregarding the problem of not being able to distinguish employees born in 1880 from employees born in 1980); and the fourth, a jobcode which can be one of 13 possibilities. How many different records are possible?
The number will certainly be large. We already know from theorem 2.2 that the first field may contain $26^{8}>200,000,000,000$ names (most of these will be very difficult to pronounce, and are not likely to occur, but let's count all of them as possibilities). The second field has 2 possible entries; the third, 36524 possible entries (the number of days in a century); the last, 13 possible entries.
Now how do we determine the number of ways these can be combined? The argument we described above can be repeated, just "3 choices" has to be replaced, in order, by "$26^{8}$ choices", "2 choices", "36524 choices" and "13 choices". We get that the answer is $26^{8} \cdot 2 \cdot 36524 \cdot 13=198,307,192,370,919,424$.
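(For the skeptical: a couple of lines of Python, our addition, reproduce this product exactly, thanks to exact integer arithmetic.)

```python
# One factor per field: name, sex, birthday, jobcode.
records = 26**8 * 2 * 36524 * 13
assert records == 198_307_192_370_919_424
```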
We can formulate the following generalization of theorem 2.2.
Theorem 2.3 Suppose that we want to form strings of length $n$ so that we can use any of a given set of $k_{1}$ symbols as the first element of the string, any of a given set of $k_{2}$ symbols as the second element of the string, etc., any of a given set of $k_{n}$ symbols as the last element of the string. Then the total number of strings we can form is $k_{1} \cdot k_{2} \cdot \ldots \cdot k_{n}$.
As another special case, consider the problem: how many non-negative integers have exactly $n$ digits (in decimal)? It is clear that the first digit can be any of 9 numbers $(1,2, \ldots, 9)$, while the second, third, etc. digits can be any of the 10 digits. Thus we get a special case of the previous question with $k_{1}=9$ and $k_{2}=k_{3}=\ldots=k_{n}=10$. Thus the answer is $9 \cdot 10^{n-1}$ (cf. exercise 2.23).
2.26 Draw a tree illustrating the way we counted the number of strings of length 2 formed from the characters $a, b$ and $c$, and explain how it gives the answer. Do the same for the more general problem when $n=3, k_{1}=2, k_{2}=3, k_{3}=2$.
2.27 In a sport shop, there are T-shirts of 5 different colors, shorts of 4 different colors, and socks of 3 different colors. How many different uniforms can you compose from these items?
2.28 On a ticket for a soccer sweepstake, you have to guess 1, 2, or $\mathrm{X}$ for each of 13 games. How many different ways can you fill out the ticket?
2.29 We roll a die twice; how many different outcomes can we have (a 1 followed by a 4 is different from a 4 followed by a 1)?
2.30 We have 20 different presents that we want to distribute to 12 children. It is not required that every child gets something; it could even happen that we give all the presents to the same child. In how many ways can we distribute the presents?
2.31 We have 20 kinds of presents; this time, we have a large supply from each. We want to give presents to 12 children. Again, it is not required that every child gets something; but no child can get two copies of the same present. In how many ways can we give presents?
### Permutations
During the party, we have already encountered the problem: how many ways can we seat $n$ people on $n$ chairs (well, we have encountered it for $n=6$ and $n=7$, but the question is natural enough for any $n$ ). If we imagine that the seats are numbered, then finding a seating for these people is the same as assigning them to the numbers $1,2, \ldots, n$ (or $0,1, \ldots, n-1$ if we want to please the logicians). Yet another way of saying this is to order the people in a single line, or write down an (ordered) list of their names.
If we have an ordered list of $n$ objects, and we rearrange them so that they are in another order, this is called permuting them, and the new order is also called a permutation of the objects. We also call the rearrangement that does not change anything, a permutation (somewhat in the spirit of calling the empty set a set).
Figure 2: A decision tree for ordering the set $\{a, b, c\}$.
For example, the set $\{a, b, c\}$ has the following 6 permutations:
$$
a b c, a c b, b a c, b c a, c a b, c b a .
$$
So the question is to determine the number of ways $n$ objects can be ordered, i.e., the number of permutations of $n$ objects. The solution found by the people at the party works in general: we can put any of the $n$ people on the first place; no matter whom we choose, we have $n-1$ choices for the second. So the number of ways to fill the first two positions is $n(n-1)$. No matter how we have filled the first and second positions, there are $n-2$ choices for the third position, so the number of ways to fill the first three positions is $n(n-1)(n-2)$.
It is clear that this argument goes on like this until all positions are filled. The last but one position can be filled in two ways; the person put in the last position is determined once the other positions are filled. Thus the number of ways to fill all positions is $n \cdot(n-1) \cdot(n-2) \cdot \ldots \cdot 2 \cdot 1$. This product is so important that we have a notation for it: $n!$ (read $n$ factorial). In other words, $n!$ is the number of ways to order $n$ objects. With this notation, we can state our second theorem.
Theorem 2.4 The number of permutations of $n$ objects is $n!$.
Again, we can illustrate the argument above graphically (Figure 2). We start with the node on the top, which poses our first decision: whom to seat on the first chair? The 3 arrows going out correspond to the three possible answers to the question. Making a decision, we can follow one of the arrows down to the next node. This carries the next decision problem: whom to put on the second chair? The two arrows out of the node represent the two possible choices. (Note that these choices are different for different nodes on this level; what is important is that there are two arrows going out from each node.) If we make a decision and follow the corresponding arrow to the next node, we know who sits on the third chair. The node carries the whole "seating order".
It is clear that for a set with $n$ elements, $n$ arrows leave the top node, and hence there are $n$ nodes on the next level. $n-1$ arrows leave each of these, hence there are $n(n-1)$ nodes on the third level. $n-2$ arrows leave each of these, etc. The bottom level has $n!$ nodes. This shows that there are exactly $n!$ permutations.

2.32 $n$ boys and $n$ girls go out to dance. In how many ways can they all dance simultaneously? (We assume that only couples of different sex dance with each other.)
2.33 (a) Draw a tree for Alice's solution of enumerating the number of ways 6 people can play chess, and explain Alice's argument using the tree.
(b) Solve the problem for 8 people. Can you give a general formula for $2 n$ people?
It is nice to know such a formula for the number of permutations, but often we just want to have a rough idea about how large it is. We might want to know, how many digits does 100 ! have? Or: which is larger, $n$ ! or $2^{n}$ ? In other words, does a set with $n$ elements have more permutations or more subsets?
Let us experiment a little. For small values of $n$, subsets are winning: $2^{1}=2>1!=1$, $2^{2}=4>2!=2$, $2^{3}=8>3!=6$. But then the picture changes: $2^{4}=16<4!=24$, $2^{5}=32<5!=120$. It is easy to see that as $n$ increases, $n!$ grows much faster than $2^{n}$: if we go from $n$ to $n+1$, then $2^{n}$ grows by a factor of 2, while $n!$ grows by a factor of $n+1$.
This shows that $100 !>2^{100}$; we already know that $2^{100}$ has 31 digits, and hence it follows that 100 ! has at least 31 digits.
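A short loop shows where the crossover happens; this is again our own Python illustration:

```python
import math

for n in range(1, 8):
    winner = "subsets" if 2**n > math.factorial(n) else "permutations"
    print(n, 2**n, math.factorial(n), winner)
# From n = 4 on, n! wins, and the gap only widens.
```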
What upper bound can we give on $n$ !? It is trivial that $n !<n^{n}$, since $n$ ! is the product of $n$ factors, each of which is at most $n$. (Since most of them are smaller than $n$, the product is in fact much smaller.) In particular, for $n=100$, we get that $100 !<100^{100}=10^{200}$, so 100 ! has at most 200 digits.
In general we know that, for $n \geq 4$,
$$
2^{n}<n !<n^{n}
$$
These bounds are rather weak; for $n=10$, the lower bound is $2^{10}=1024$ while the upper bound is $10^{10}$ (i.e., ten billion).
We could also notice that $n-9$ factors in $n!$ are greater than or equal to 10, and hence $n! \geq 10^{n-9}$. This is a much better bound for large $n$, but it is still far from the truth. For $n=100$, we get that $100! \geq 10^{91}$, so it has at least 92 digits.
There is a formula that gives a very good approximation of $n$ !. We state it without proof, since the proof (although not terribly difficult) needs calculus.
Theorem 2.5 [Stirling's Formula]
$$
n ! \sim\left(\frac{n}{e}\right)^{n} \sqrt{2 \pi n}
$$
Here $\pi=3.14 \ldots$ is the area of the circle with unit radius, $e=2.718 \ldots$ is the base of the natural logarithm, and $\sim$ means approximate equality in the precise sense that
$$
\frac{n !}{\left(\frac{n}{e}\right)^{n} \sqrt{2 \pi n}} \rightarrow 1 \quad(n \rightarrow \infty)
$$
Both these funny irrational numbers $e$ and $\pi$ occur in the same formula!
So how many digits does 100! have? We know by Stirling's Formula that
$$
100 ! \approx(100 / e)^{100} \cdot \sqrt{200 \pi} .
$$
The number of digits of this number is its logarithm, in base 10, rounded up. Thus we get
$$
\lg (100 !) \approx 100 \lg (100 / e)+1+\lg \sqrt{2 \pi}=157.969 \ldots
$$
So the number of digits in 100! is about 158 (actually, this is the right value).
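One can watch Stirling's formula in action with a few lines of Python (our own addition); note that $100!$ is still small enough to fit in a floating-point number, so the ratio can be computed directly:

```python
import math

n = 100
stirling = (n / math.e)**n * math.sqrt(2 * math.pi * n)
exact = math.factorial(n)

print(exact / stirling)        # about 1.0008; the ratio tends to 1
print(len(str(exact)))         # 158 digits, as computed above
```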
2.34 (a) Which is larger, $n$ or $n(n-1) / 2$ ?
(b) Which is larger, $n^{2}$ or $2^{n}$ ?
2.35 (a) Prove that $2^{n}>n^{3}$ if $n$ is large enough.
(b) Use (a) to prove that $2^{n} / n^{2}$ becomes arbitrarily large if $n$ is large enough.
## Induction
### The sum of odd numbers
It is time to learn one of the most important tools in discrete mathematics. We start with a question: We add up the first $n$ odd numbers. What do we get?
Perhaps the best way to try to find the answer is to experiment. If we try small values of $n$, this is what we find:
$$
\begin{aligned}
1 & =1 \\
1+3 & =4 \\
1+3+5 & =9 \\
1+3+5+7 & =16 \\
1+3+5+7+9 & =25 \\
1+3+5+7+9+11 & =36 \\
1+3+5+7+9+11+13 & =49 \\
1+3+5+7+9+11+13+15 & =64 \\
1+3+5+7+9+11+13+15+17 & =81 \\
1+3+5+7+9+11+13+15+17+19 & =100
\end{aligned}
$$
It is easy to observe that we get squares; in fact, it seems from these examples that the sum of the first $n$ odd numbers is $n^{2}$. This we have observed for the first 10 values of $n$; can we be sure that it is valid for all? Well, I'd say we can be reasonably sure, but not with mathematical certainty. How can we prove the assertion?
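(Before turning to a proof, we could at least push the experiment much further by machine; the Python check below, our addition, confirms the pattern up to $n=1000$. Of course, no amount of checking replaces a proof.)

```python
for n in range(1, 1001):
    odd_sum = sum(2*k - 1 for k in range(1, n + 1))   # 1 + 3 + ... + (2n - 1)
    assert odd_sum == n**2
```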
Consider the sum for a general $n$. The $n$-th odd number is $2 n-1$ (check!), so we want to prove that
$$
1+3+\ldots+(2 n-3)+(2 n-1)=n^{2} .
$$
If we separate the last term in this sum, we are left with the sum of the first $(n-1)$ odd numbers:
$$
1+3+\ldots+(2 n-3)+(2 n-1)=(1+3+\ldots+(2 n-3))+(2 n-1)
$$
Now here the sum in the large parenthesis is $(n-1)^{2}$, so the total is
$$
(n-1)^{2}+(2 n-1)=\left(n^{2}-2 n+1\right)+(2 n-1)=n^{2},
$$
just as we wanted to prove.
Wait a minute! Aren't we using in the proof the statement that we are proving? Surely this is unfair! One could prove everything if this were allowed.
But in fact we are not quite using the same. What we were using is the assertion about the sum of the first $n-1$ odd numbers; and we argued (in (3)) that this proves the assertion about the sum of the first $n$ odd numbers. In other words, what we have shown is that if the assertion is true for a certain value of $n$, it is also true for the next.
This is enough to conclude that the assertion is true for every $n$. We have seen that it is true for $n=1$; hence by the above, it is also true for $n=2$ (we have seen this anyway by direct computation, but this shows that this was not even necessary: it followed from the case $n=1$ ).
In a similar way, the truth of the assertion for $n=2$ implies that it is also true for $n=3$, which in turn implies that it is true for $n=4$, etc. If we repeat this sufficiently many times, we get the truth for any value of $n$.
This proof technique is called induction (or sometimes mathematical induction, to distinguish it from a notion in philosophy). It can be summarized as follows.
Suppose that we want to prove a property of positive integers. Also suppose that we can prove two facts:
(a) 1 has the property, and
(b) whenever $n-1$ has the property, then also $n$ has the property $(n>1)$.
The principle of induction says that if (a) and (b) are true, then every natural number has the property.
Often the best way to try to carry out an induction proof is the following. We try to prove the statement (for a general value of $n$ ), and we are allowed to use that the statement is true if $n$ is replaced by $n-1$. (This is called the induction hypothesis.) If it helps, one may also use the validity of the statement for $n-2, n-3$, etc., in general for every $k$ such that $k<n$.
Sometimes we say that if 0 has the property, and every integer $n$ inherits the property from $n-1$, then every integer has the property. (Just like if the founding father of a family has a certain piece of property, and every new generation inherits this property from the previous generation, then the family will always have this property.)
3.1 Prove, using induction but also without it, that $n(n+1)$ is an even number for every non-negative integer $n$.
3.2 Prove by induction that the sum of the first $n$ positive integers is $n(n+1) / 2$.
3.3 Observe that the number $n(n+1) / 2$ is the number of handshakes among $n+1$ people. Suppose that everyone counts only handshakes with people older than him/her (pretty snobbish, isn't it?). Who will count the largest number of handshakes? How many people count 6 handshakes?
Give a proof of the result of exercise 3.2, based on your answer to these questions.
3.4 Give a proof of exercise 3.2, based on figure 3.
3.5 Prove the following identity:
$$
1 \cdot 2+2 \cdot 3+3 \cdot 4+\ldots+(n-1) \cdot n=\frac{(n-1) \cdot n \cdot(n+1)}{3} .
$$
Exercise 3.2 relates to a well-known (though apocryphal) anecdote from the history of mathematics. Carl Friedrich Gauss (1777-1855), one of the greatest mathematicians of all times, was in elementary school when his teacher gave the class the task to add up the integers from 1 to 1000 (he was hoping that he would get an hour or so to relax while his students were working). To his great surprise, Gauss came up with the correct answer almost immediately. His solution was extremely simple: combine the first term with the last, you get $1+1000=1001$; combine the second term with the last but one,
Figure 3: The sum of the first $n$ integers: $1+2+3+4+5=?$; doubling, $2(1+2+3+4+5)=5 \cdot 6=30$.
you get $2+999=1001$; going on in a similar way, combining the first remaining term with the last one (and then discarding them) you get 1001. The last pair added this way is $500+501=1001$. So we obtained 500 times 1001, which makes 500500. We can check this answer against the formula given in exercise 3.2: $1000 \cdot 1001 / 2=500500$.
3.6 Use the method of the little Gauss to give a third proof of the formula in exercise 3.2.
3.7 How would the little Gauss prove the formula for the sum of the first $n$ odd numbers (2)?
3.8 Prove that the sum of the first $n$ squares $\left(1+4+9+\ldots+n^{2}\right)$ is $n(n+1)(2 n+1) / 6$.
3.9 Prove that the sum of the first $n$ powers of $2\left(\right.$ starting with $\left.1=2^{0}\right)$ is $2^{n}-1$.
### Subset counting revisited
In chapter 2 we often relied on the convenience of saying "etc.": we described some argument that had to be repeated $n$ times to give the result we wanted to get, but after giving the argument once or twice, we said "etc." instead of further repetition. There is nothing wrong with this, if the argument is sufficiently simple so that we can intuitively see where the repetition leads. But it would be nice to have some tool at hand which could be used instead of "etc." in cases when the outcome of the repetition is not so transparent.
The precise way of doing this is using induction, as we are going to illustrate by revisiting some of our results. First, let us give a proof of the formula for the number of subsets of an $n$-element set, given in Theorem 2.1 (recall that the answer is $2^{n}$ ).
As the principle of induction tells us, we have to check that the assertion is true for $n=0$. This is trivial, and we already did it. Next, we assume that $n>0$, and that the assertion is true for sets with $n-1$ elements. Consider a set $S$ with $n$ elements, and fix any element $a \in S$. We want to count the subsets of $S$. Let us divide them into two classes: those containing $a$ and those not containing $a$. We count them separately.
First, we deal with those subsets which don't contain $a$. If we delete $a$ from $S$, we are left with a set $S^{\prime}$ with $n-1$ elements, and the subsets we are interested in are exactly the subsets of $S^{\prime}$. By the induction hypothesis, the number of such subsets is $2^{n-1}$.
Second, we consider subsets containing $a$. The key observation is that every such subset consists of $a$ and a subset of $S^{\prime}$. Conversely, if we take any subset of $S^{\prime}$, we can add $a$ to it to get a subset of $S$ containing $a$. Hence the number of subsets of $S$ containing $a$ is the same as the number of subsets of $S^{\prime}$, which, as we already know, is $2^{n-1}$. (With the jargon we introduced before, the last piece of the argument establishes a one-to-one correspondence between those subsets of $S$ containing $a$ and those not containing $a$.)
To conclude: the total number of subsets of $S$ is $2^{n-1}+2^{n-1}=2 \cdot 2^{n-1}=2^{n}$. This proves Theorem 2.1 (again).
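The induction can itself be turned into a (deliberately naive) program; the following Python sketch, our own, mirrors the two classes of subsets in the proof:

```python
def count_subsets(n):
    """Count subsets of an n-element set, following the induction."""
    if n == 0:
        return 1                 # the empty set has one subset: itself
    # subsets not containing a fixed element a, plus subsets containing a;
    # both classes are counted by the (n-1)-element case.
    return count_subsets(n - 1) + count_subsets(n - 1)

assert count_subsets(10) == 2**10
```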
3.10 Use induction to prove Theorem 2.2 (the number of strings of length $n$ composed of $k$ given elements is $k^{n}$) and Theorem 2.4 (the number of permutations of a set with $n$ elements is $n!$).
3.11 Use induction on $n$ to prove the "handshake theorem" (the number of handshakes between $n$ people is $n(n-1)/2$).
3.12 Read carefully the following induction proof:
Assertion: $n(n+1)$ is an odd number for every $n$.
Proof: Suppose that this is true for $n-1$ in place of $n$; we prove it for $n$, using the induction hypothesis. We have
$$
n(n+1)=(n-1) n+2 n .
$$
Now here $(n-1) n$ is odd by the induction hypothesis, and $2 n$ is even. Hence $n(n+1)$ is the sum of an odd number and an even number, which is odd.
The assertion that we proved is obviously wrong for $n=10$ : $10 \cdot 11=110$ is even. What is wrong with the proof?
3.13 Read carefully the following induction proof:
Assertion: If we have $n$ lines in the plane, no two of which are parallel, then they all go through one point.
Proof: The assertion is true for one line (and also for 2, since we have assumed that no two lines are parallel). Suppose that it is true for any set of $n-1$ lines. We are going to prove that it is also true for $n$ lines, using this induction hypothesis.
So consider a set $S=\{a, b, c, d, \ldots\}$ of $n$ lines in the plane, no two of which are parallel. Delete the line $c$, then we are left with a set $S^{\prime}$ of $n-1$ lines, and obviously no two of these are parallel. So we can apply the induction hypothesis and conclude that there is a point $P$ such that all the lines in $S^{\prime}$ go through $P$. In particular, $a$ and $b$ go through $P$, and so $P$ must be the point of intersection of $a$ and $b$.
Now put $c$ back and delete $d$, to get a set $S^{\prime \prime}$ of $n-1$ lines. Just as above, we can use the induction hypothesis to conclude that these lines go through the same point $P^{\prime}$; but just like above, $P^{\prime}$ must be the point of intersection of $a$ and $b$. Thus $P^{\prime}=P$. But then we see that $c$ goes through $P$. The other lines also go through $P$ (by the choice of $P$ ), and so all the $n$ lines go through $P$.
But the assertion we proved is clearly wrong; where is the error?
### Counting regions
Let us draw $n$ lines in the plane. These lines divide the plane into some number of regions. How many regions do we get?
Figure 4:
Figure 5:
A first thing to notice is that this question does not have a single answer. For example, if we draw two lines, we get 3 regions if the two are parallel, and 4 regions if they are not.
OK, let us assume that no two of the lines are parallel; then 2 lines always give us 4 regions. But if we go on to three lines, we get 6 regions if the lines go through one point, and 7 regions, if they do not (Figure 4).
OK, let us also exclude this, and assume that no 3 lines go through the same point. One might expect that the next unpleasant example comes with 4 lines, but if you experiment with drawing 4 lines in the plane, with no two parallel and no three going through the same point, then you invariably get 11 regions (Figure 5). In fact, we'll have a similar experience for any number of lines.
A set of lines in the plane such that no two are parallel and no three go through the same point is said to be in general position. If we choose the lines "randomly" then accidents like two being parallel or three going through the same point will be very unlikely, so our assumption that the lines are in general position is quite natural.
Even if we accept that the number of regions is always the same for a given number of lines, the question still remains: what is this number? Let us collect our data in a little table (including also the observation that 0 lines divide the plane into 1 region, and 1 line divides the plane into 2):
| Number of lines | 0 | 1 | 2 | 3 | 4 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Number of regions | 1 | 2 | 4 | 7 | 11 |
Staring at this table for a while, we observe that each number in the second row is the sum of the number above it and the number before it. This suggests a rule: the $n$-th entry is $n$ plus the previous entry. In other words: If we have a set of $n-1$ lines in the plane in general position, and add a new line (preserving general position), then the number of regions increases by $n$.
Let us prove this assertion. How does the new line increase the number of regions? By cutting some of them into two. The number of additional regions is just the same as the number of regions intersected.
So, how many regions does the new line intersect? At a first glance, this is not easy to answer, since the new line can intersect very different sets of regions, depending on where we place it. But imagine walking along the new line, starting from very far away. We get to a new region every time we cross a line. So the number of regions the new line intersects is one larger than the number of crossing points on the new line with the other lines.
Now the new line crosses every other line (since no two lines are parallel), and it crosses them in different points (since no three lines go through the same point). Hence during our walk, we see $n-1$ crossing points. So we see $n$ different regions. This proves that our observation about the table is true for every $n$.
We are not done yet; what does this give for the number of regions? We start with 1 region for 0 lines, and then add to it $1,2,3, \ldots, n$. This way we get
$$
1+(1+2+3+\ldots+n)=1+\frac{n(n+1)}{2} .
$$
Thus we have proved:
Theorem 3.1 A set of $n$ lines in general position in the plane divides the plane into $1+$ $n(n+1) / 2$ regions.
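The recurrence we discovered in the table also makes for a quick verification of Theorem 3.1; the Python below (our addition) rebuilds the table by adding $k$ regions when the $k$-th line arrives:

```python
def regions(n):
    """Number of regions cut out by n lines in general position."""
    r = 1                          # no lines: one region
    for k in range(1, n + 1):
        r += k                     # the k-th line adds k new regions
    return r

assert [regions(n) for n in range(5)] == [1, 2, 4, 7, 11]
assert all(regions(n) == 1 + n * (n + 1) // 2 for n in range(100))
```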
3.14 Describe a proof of Theorem 3.1 using induction on the number of lines.
Let us give another proof of Theorem 3.1; this time, we will not use induction, but rather try to relate the number of regions to other combinatorial problems. One gets a hint from writing the number in the form $1+n+\binom{n}{2}$.
Assume that the lines are drawn on a vertical blackboard (Figure 6), which is large enough so that all the intersection points appear on it. We also assume that no line is horizontal (else, we tilt the picture a little), and that in fact every line intersects the bottom edge of the blackboard (the blackboard is very long).
Now consider the lowest point in each region. Each region has only one lowest point, since the bordering lines are not horizontal. This lowest point is then an intersection point of two of our lines, or the intersection point of a line with the lower edge of the blackboard, or the lower left corner of the blackboard. Furthermore, each of these points is the lowest point of one and only one region. For example, if we consider any intersection point of two lines, then we see that four regions meet at this point, and the point is the lowest point of exactly one of them.
Thus the number of lowest points is the same as the number of intersection points of the lines, plus the number of intersection points between lines and the lower edge of the blackboard, plus one. Since any two lines intersect, and these intersection points are all different (this is where we use that the lines are in general position), the number of such lowest points is $\left(\begin{array}{l}n \\ 2\end{array}\right)+n+1$.
Figure 6:
## 4 Counting subsets
### The number of ordered subsets
At a competition of 100 athletes, only the order of the first 10 is recorded. How many different outcomes does the competition have?
This question can be answered along the lines of the arguments we have seen. The first place can be won by any of the athletes; no matter who wins, there are 99 possible second place winners, so the first two prizes can be given out in $100 \cdot 99$ ways. Given the first two, there are 98 athletes who can be third, etc. So the answer is $100 \cdot 99 \cdot \ldots \cdot 91$.
4.1 Illustrate this argument by a tree.
4.2 Suppose that we record the order of all 100 athletes.
(a) How many different outcomes can we have then?
(b) How many of these give the same for the first 10 places?
(c) Show that the result above for the number of possible outcomes for the first 10 places can be also obtained using (a) and (b).
There is nothing special about the numbers 100 and 10 in the problem above; we could carry out the same argument for $n$ athletes with the first $k$ places recorded.
To give a more mathematical form to the result, we can replace the athletes by any set of size $n$. The list of the first $k$ places is given by a sequence of $k$ elements of the $n$-set, all of which have to be different. We may also view this as selecting a subset of the athletes with $k$ elements, and then ordering them. Thus we have the following theorem.
Theorem 4.1 The number of ordered $k$-subsets of an $n$-set is $n(n-1) \ldots(n-k+1)$. (Note that if we start with $n$ and count down $k$ numbers, the last one will be $n-k+1$.)
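Readers who like to experiment can confirm Theorem 4.1 by brute force; here is a short Python sketch (the helper name is ours) that compares an explicit enumeration with the falling product for one small case:

```python
from itertools import permutations

# Compare a brute-force enumeration of ordered k-subsets of an n-set
# with the product n(n-1)...(n-k+1) from Theorem 4.1.
def falling_product(n, k):
    result = 1
    for i in range(k):
        result *= n - i
    return result

n, k = 6, 3
brute_force = sum(1 for _ in permutations(range(n), k))
assert brute_force == falling_product(n, k) == 120
```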
4.3 If you generalize the solution of exercise 4.2, you get the answer in the form
$$
\frac{n !}{(n-k) !}
$$
Check that this is the same number as given in theorem 4.1.
4.4 Explain the similarity and the difference between the counting questions answered by theorem 4.1 and theorem 2.2 .
### The number of subsets of a given size
From here, we can easily derive one of the most important counting results.
Theorem 4.2 The number of $k$-subsets of an $n$-set is
$$
\frac{n(n-1) \ldots(n-k+1)}{k !}=\frac{n !}{k !(n-k) !}
$$
Recall that if we count ordered subsets, we get $n(n-1) \ldots(n-k+1)=n!/(n-k)!$, by Theorem 4.1. Of course, if we want to know the number of unordered subsets, then we have overcounted; every subset was counted exactly $k!$ times (with every possible ordering of its elements). So we have to divide this number by $k!$ to get the number of subsets with $k$ elements (without ordering).
The number of $k$-subsets of an $n$-set is such an important quantity that one has a separate notation for it: $\left(\begin{array}{l}n \\ k\end{array}\right)$ (read: ' $n$ choose $k$ '). Thus
$$
\left(\begin{array}{l}
n \\
k
\end{array}\right)=\frac{n !}{k !(n-k) !}
$$
Thus the number of different lottery tickets is $\left(\begin{array}{c}90 \\ 5\end{array}\right)$, the number of handshakes is $\left(\begin{array}{l}7 \\ 2\end{array}\right)$, etc.
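In Python, the standard library function math.comb computes exactly this quantity (assuming Python 3.8 or later); a quick check against the factorial formula and the examples above:

```python
import math

# math.comb(n, k) is n choose k; check it against n!/(k!(n-k)!).
n, k = 90, 5
by_factorials = math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
assert math.comb(n, k) == by_factorials

print(math.comb(90, 5))  # the number of lottery tickets: 43949268
print(math.comb(7, 2))   # the number of handshakes: 21
```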
4.5 Which problems discussed during the party were special cases of theorem 4.2?
4.6 Tabulate the values of $\left(\begin{array}{l}n \\ k\end{array}\right)$ for $n, k \leq 5$.
In the following exercises, try to prove the identities by using the formula in theorem 4.2 , and also without computation, by explaining both sides of the equation as the result of a counting problem.
4.7 Prove that $\left(\begin{array}{l}n \\ 2\end{array}\right)+\left(\begin{array}{c}n+1 \\ 2\end{array}\right)=n^{2}$.
4.8 (a) Prove that $\left(\begin{array}{c}90 \\ 5\end{array}\right)=\left(\begin{array}{c}89 \\ 5\end{array}\right)+\left(\begin{array}{c}89 \\ 4\end{array}\right)$.
(b) Formulate and prove a general identity based on this.
4.9 Prove that $\left(\begin{array}{l}n \\ k\end{array}\right)=\left(\begin{array}{c}n \\ n-k\end{array}\right)$.

4.10 Prove that
$$
1+\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)+\left(\begin{array}{l}
n \\
n
\end{array}\right)=2^{n} .
$$
4.11 Prove that for $0<c \leq b \leq a$,
$$
\left(\begin{array}{l}
a \\
b
\end{array}\right)\left(\begin{array}{l}
b \\
c
\end{array}\right)=\left(\begin{array}{c}
a \\
a-c
\end{array}\right)\left(\begin{array}{l}
a-c \\
b-c
\end{array}\right)
$$
### The Binomial Theorem
The numbers $\left(\begin{array}{l}n \\ k\end{array}\right)$ also have a name, binomial coefficients, which comes from a very important formula in algebra involving them. We are now going to discuss this theorem.
The issue is to compute powers of the simple algebraic expression $(x+y)$. We start with small examples:
$$
\begin{gathered}
(x+y)^{2}=x^{2}+2 x y+y^{2}, \\
(x+y)^{3}=(x+y) \cdot(x+y)^{2}=(x+y) \cdot\left(x^{2}+2 x y+y^{2}\right)=x^{3}+3 x^{2} y+3 x y^{2}+y^{3},
\end{gathered}
$$
and, going on like this,
$$
(x+y)^{4}=(x+y) \cdot(x+y)^{3}=x^{4}+4 x^{3} y+6 x^{2} y^{2}+4 x y^{3}+y^{4} .
$$
You may have noticed that the coefficients you get are the numbers that we have seen, e.g. in exercise 4.6, as the numbers $\left(\begin{array}{l}n \\ k\end{array}\right)$. Let us make this observation precise. We illustrate the argument for the next value of $n$, namely $n=5$, but it works in general.
Think of expanding
$$
(x+y)^{5}=(x+y)(x+y)(x+y)(x+y)(x+y)
$$
so that we get rid of all parentheses. We get each term in the expansion by selecting one of the two terms in each factor, and multiplying them. If we choose $x$ from, say, 2 of the factors, then we choose $y$ from the remaining 3, and we get $x^{2} y^{3}$. How many times do we get this same term? Clearly as many times as the number of ways to select the two factors that supply $x$ (the remaining factors supply $y$). Thus we have to choose two factors out of 5, which can be done in $\left(\begin{array}{l}5 \\ 2\end{array}\right)$ ways.
Hence the expansion of $(x+y)^{5}$ looks like this:
$$
(x+y)^{5}=\left(\begin{array}{l}
5 \\
0
\end{array}\right) y^{5}+\left(\begin{array}{l}
5 \\
1
\end{array}\right) x y^{4}+\left(\begin{array}{l}
5 \\
2
\end{array}\right) x^{2} y^{3}+\left(\begin{array}{l}
5 \\
3
\end{array}\right) x^{3} y^{2}+\left(\begin{array}{l}
5 \\
4
\end{array}\right) x^{4} y+\left(\begin{array}{l}
5 \\
5
\end{array}\right) x^{5}
$$
We can apply this argument in general to obtain
Theorem 4.3 (The Binomial Theorem) The coefficient of $x^{k} y^{n-k}$ in the expansion of $(x+y)^{n}$ is $\left(\begin{array}{l}n \\ k\end{array}\right)$. In other words, we have the identity:
$$
(x+y)^{n}=y^{n}+\left(\begin{array}{l}
n \\
1
\end{array}\right) x y^{n-1}+\left(\begin{array}{l}
n \\
2
\end{array}\right) x^{2} y^{n-2}+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right) x^{n-1} y+\left(\begin{array}{l}
n \\
n
\end{array}\right) x^{n} .
$$
This important theorem is called the Binomial Theorem; the name comes from the Greek word binome for an expression consisting of two terms, in this case, $x+y$. The appearance of the numbers $\left(\begin{array}{l}n \\ k\end{array}\right)$ in this theorem is the source of their name: binomial coefficients.
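The theorem is easy to check numerically; the following Python sketch verifies the identity for sample integer values of $x$ and $y$ (which of course is only evidence, not a proof):

```python
from math import comb

# The expansion sum of the Binomial Theorem must equal (x + y)**n
# exactly when x and y are integers.
def expansion(x, y, n):
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

for n in range(8):
    assert expansion(3, 5, n) == (3 + 5) ** n
```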
The Binomial Theorem can be applied in many ways to get identities concerning binomial coefficients. For example, let us substitute $x=y=1$, then we get
$$
2^{n}=\left(\begin{array}{l}
n \\
0
\end{array}\right)+\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)+\left(\begin{array}{l}
n \\
n
\end{array}\right) .
$$
Later on we are going to see trickier applications of this idea. For the time being, another twist on it is contained in the next exercise.
4.12 Give a proof of the Binomial Theorem by induction, based on exercise 4.8.
4.13 (a) Prove the identity
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)-\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)-\left(\begin{array}{l}
n \\
3
\end{array}\right) \cdots=0
$$
(The sum ends with $\left(\begin{array}{l}n \\ n\end{array}\right)=1$; the sign of this last term is $+$ or $-$ depending on the parity of $n$.)
(b) This identity is obvious if $n$ is odd. Why?
4.14 Prove identity (4), using a combinatorial interpretation of the two sides (recall exercise 4.2).
### Distributing presents
Suppose we have $n$ different presents, which we want to distribute to $k$ children. For some reason, we are told how many presents each child should get; so Adam should get $n_{\text {Adam }}$ presents, Barbara, $n_{\text {Barbara }}$ presents, etc. In a mathematically convenient (though not very friendly) way, we call the children $1,2, \ldots, k$; thus we are given the numbers (non-negative integers) $n_{1}, n_{2}, \ldots, n_{k}$. We assume that $n_{1}+n_{2}+\ldots+n_{k}=n$, else there is no way to distribute all the presents.
The question is, of course, how many ways can these presents be distributed?
We can organize the distribution of presents as follows. We lay out the presents in a single row of length $n$. The first child comes and picks up the first $n_{1}$ presents, starting from the left. Then the second comes, and picks up the next $n_{2}$; then the third picks up the next $n_{3}$ presents, etc. Child No. $k$ gets the last $n_{k}$ presents.
It is clear that we can determine who gets what by choosing the order in which the presents are laid out. There are $n$ ! ways to order the presents. But, of course, the number $n$ ! overcounts the number of ways to distribute the presents, since many of these orderings lead to the same results (that is, every child gets the same set of presents). The question is, how many?
Figure 7: Placing 8 non-attacking rooks on a chessboard

So let us start with a given distribution of presents, and let's ask the children to lay out the presents for us, nicely in a row, starting with the first child, then continuing with the second, third, etc. This way we get back one possible ordering that leads to the current distribution. The first child can lay out his presents in $n_{1} !$ possible orders; no matter which order he chooses, the second child can lay out her presents in $n_{2} !$ possible ways, etc. So the number of ways the presents can be laid out (given the distribution of the presents to the children) is a product of factorials:
$$
n_{1} ! \cdot n_{2} ! \cdot \ldots \cdot n_{k} !
$$
Thus the number of ways of distributing the presents is
$$
\frac{n !}{n_{1} ! n_{2} ! \ldots n_{k} !} .
$$
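For small cases the result can be confirmed by exhaustive enumeration; here is a Python sketch (the variable names are ours) that lays out $n=4$ presents in all $4!$ orders and counts the distinct distributions for $n_{1}=2, n_{2}=1, n_{3}=1$:

```python
from itertools import permutations
from math import factorial

sizes = [2, 1, 1]            # n_1, n_2, n_3
n = sum(sizes)

# Record, for each ordering, which set of presents each child gets.
distributions = set()
for order in permutations(range(n)):
    pos, handout = 0, []
    for s in sizes:
        handout.append(frozenset(order[pos:pos + s]))
        pos += s
    distributions.add(tuple(handout))

formula = factorial(n)
for s in sizes:
    formula //= factorial(s)  # divide by n_1! n_2! ... n_k!

assert len(distributions) == formula == 12
```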
4.15 We can describe the procedure of distributing the presents as follows. First, we select $n_{1}$ presents and give them to the first child. This can be done in $\left(\begin{array}{c}n \\ n_{1}\end{array}\right)$ ways. Then we select $n_{2}$ presents from the remaining $n-n_{1}$ and give them to the second child, etc.
Complete this argument and show that it leads to the same result as the previous one.
4.16 The following special cases should be familiar from previous problems and theorems. Explain why.
(a) $n=k, n_{1}=n_{2}=\ldots=n_{k}=1$;
(b) $n_{1}=n_{2}=\ldots=n_{k-1}=1, n_{k}=n-k+1$;
(c) $k=2$;
(d) $k=3, n=6, n_{1}=n_{2}=n_{3}=2$.
4.17 (a) How many ways can you place $n$ rooks on a chessboard so that no two attack each other (Figure 7)? We assume that the rooks are identical, so e.g. interchanging two rooks does not count as a separate placement.
(b) How many ways can you do this if you have 4 black and 4 white rooks?
(c) How many ways can you do this if all the 8 rooks are different?
### Anagrams
Have you played with anagrams? One selects a word (say, COMBINATORICS) and tries to compose from its letters meaningful, often funny words or expressions.
How many anagrams can you build from a given word? If you try to answer this question by playing around with the letters, you will realize that the question is badly posed; it is difficult to draw the line between meaningful and non-meaningful anagrams. For example, it could easily happen that A CROC BIT SIMON. And it may be true that Napoleon always wanted a TOMB IN CORSICA. It is questionable, but certainly grammatically correct, to assert that COB IS ROMANTIC. Some universities may have a course on MAC IN ROBOTICS.
But one would have to write a book to introduce an exciting character, ROBIN COSMICAT, who enforces a COSMIC RIOT BAN, while appealing TO COSMIC BRAIN.
And it would be terribly difficult to explain an anagram like MTBIRASCIONOC.
To avoid this controversy, let's accept everything, i.e., we don't require the anagram to be meaningful (or even pronounceable). Of course, the production of anagrams then becomes uninteresting; but at least we can tell how many of them there are!
4.18 How many anagrams can you make from the word COMBINATORICS?
4.19 Which word gives rise to more anagrams: COMBINATORICS or COMBINATORICA? (The latter is the Latin name of the subject.)
4.20 Which word with 13 letters gives rise to the most anagrams? Which word gives rise to the least?
So let's see the general answer to the question of counting anagrams. If you have solved the problems above, it should be clear that the number of anagrams of an $n$-letter word depends on how many times the letters of the word are repeated. So suppose that the word contains letter No. 1 exactly $n_{1}$ times, letter No. 2 exactly $n_{2}$ times, etc., and letter No. $k$ exactly $n_{k}$ times. Clearly, $n_{1}+n_{2}+\ldots+n_{k}=n$.
Now to form an anagram, we have to select $n_{1}$ positions for letter No. 1, $n_{2}$ positions for letter No. 2, etc., and $n_{k}$ positions for letter No. $k$. Having formulated it this way, we can see that this is nothing but the question of distributing $n$ presents to $k$ children, when it is prescribed how many presents each child gets. Thus we know from the previous section that the answer is
$$
\frac{n !}{n_{1} ! n_{2} ! \ldots n_{k} !} .
$$
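The computation is easy to automate. Here is a small Python sketch (the helper name is ours) that reproduces the count for the words discussed in the next exercise:

```python
from collections import Counter
from math import factorial

# Number of anagrams: n! divided by the factorials of the letter
# multiplicities.
def anagram_count(word):
    total = factorial(len(word))
    for multiplicity in Counter(word).values():
        total //= factorial(multiplicity)
    return total

print(anagram_count("STATUS"), anagram_count("LETTER"))  # 180 180
```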
4.21 It is clear that STATUS and LETTER have the same number of anagrams (in fact, $6 ! /(2 ! 2 !)=180)$. We say that these words are "essentially the same" (at least as far as counting anagrams goes): they have two letters repeated twice and two letters occurring only once.
(a) How many 6-letter words are there? (As before, the words don't have to be meaningful. The alphabet has 26 letters.)
(b) How many words with 6 letters are "essentially the same" as the word LETTER?
(c) How many "essentially different" 6-letter words are there?
(d) Try to find a general answer to question (c) (that is, how many "essentially different" words are there on $n$ letters?). If you can't find it, read the following section and return to this exercise after it.

Figure 8: How to distribute $n$ pennies to $k$ children?
### Distributing money
Instead of distributing presents, let's distribute money. Let us formulate the question in general: we have $n$ pennies that we want to distribute among $k$ kids. Each child must get at least one penny (and, of course, an integer number of pennies). How many ways can we distribute the money?
Before answering this question, we must clarify the difference between distributing money and distributing presents. If you are distributing presents, you have to decide not only how many presents each child gets, but also which are these presents. If you are distributing money, only the quantity matters. In other words, the presents are distinguishable while the pennies are not. (A question like in section 4.4, where we specify in advance how many presents does a given child get, would be trivial for money: there is only one way to distribute $n$ pennies so that the first child gets $n_{1}$, the second child gets $n_{2}$, etc.)
Even though the problem is quite different from the distribution of presents, we can solve it by imagining a similar distribution method. We line up the pennies (it does not matter in which order, they are all alike), and then let child No. 1 begin to pick them up from left to right. After a while we stop him and let the second child pick up pennies, etc. (Figure 8) The distribution of the money is determined by specifying where to start with a new child.
Now there are $n-1$ points (between consecutive pennies) where we can let a new child in, and we have to select $k-1$ of them (since the first child always starts at the beginning, we have no choice there). Thus we have to select a $(k-1)$-element subset from an $(n-1)$-element set. The number of possibilities to do so is $\left(\begin{array}{l}n-1 \\ k-1\end{array}\right)$.
To sum up, we get
Theorem 4.4 The number of ways to distribute $n$ identical pennies to $k$ children, so that each child gets at least one, is $\left(\begin{array}{l}n-1 \\ k-1\end{array}\right)$.
It is quite surprising that the binomial coefficients give the answer here, in a quite non-trivial and unexpected way.
Let's also discuss the natural (though unfair) modification of this question, where we also allow distributions in which some children get no money at all; we consider even giving all the money to one child. With the following trick, we can reduce the problem of counting such distributions to the problem we just solved: we borrow 1 penny from each child, and then distribute the whole amount (i.e., $n+k$ pennies) to the children so that each child gets at least one penny. This way every child gets back the money we borrowed from him or her, and the lucky ones get some more. The "more" is exactly $n$ pennies distributed to $k$ children. We already know that the number of ways to distribute $n+k$ pennies to $k$ children so that each child gets at least one penny is $\left(\begin{array}{c}n+k-1 \\ k-1\end{array}\right)$. So we have
Theorem 4.5 The number of ways to distribute $n$ identical pennies to $k$ children is $\left(\begin{array}{c}n+k-1 \\ k-1\end{array}\right)$.
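Theorem 4.5 can be confirmed by exhaustive enumeration for small values; the following Python sketch counts all ways to write $n$ as an ordered sum of $k$ non-negative integers:

```python
from itertools import product
from math import comb

# Count ordered k-tuples of non-negative integers summing to n,
# and compare with C(n+k-1, k-1) from Theorem 4.5.
def count_distributions(n, k):
    return sum(1 for shares in product(range(n + 1), repeat=k)
               if sum(shares) == n)

n, k = 6, 3
assert count_distributions(n, k) == comb(n + k - 1, k - 1) == 28
```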
4.22 In how many ways can you distribute $n$ pennies to $k$ children, if each child is supposed to get at least 2 ?
4.23 We distribute $n$ pennies to $k$ boys and $\ell$ girls, so that (to be really unfair) we require that each of the girls gets at least one penny. In how many ways can we do this?
4.24 $k$ earls play cards. Originally, they all have $p$ pennies. At the end of the game, they count how much money they have. They do not borrow from each other, so they cannot lose more than their own $p$ pennies. How many possible results are there?
## 5 Pascal's Triangle
To study various properties of binomial coefficients, the following picture is very useful. We arrange all binomial coefficients into a triangular scheme: in the "zeroeth" row we put $\left(\begin{array}{l}0 \\ 0\end{array}\right)$, in the first row, we put $\left(\begin{array}{l}1 \\ 0\end{array}\right)$ and $\left(\begin{array}{l}1 \\ 1\end{array}\right)$, in the second row, $\left(\begin{array}{l}2 \\ 0\end{array}\right),\left(\begin{array}{l}2 \\ 1\end{array}\right)$ and $\left(\begin{array}{l}2 \\ 2\end{array}\right)$, etc. In general, the $n$-th row contains the numbers $\left(\begin{array}{l}n \\ 0\end{array}\right),\left(\begin{array}{l}n \\ 1\end{array}\right), \ldots,\left(\begin{array}{l}n \\ n\end{array}\right)$. We shift these rows so that their midpoints match; this way we get a pyramid-like scheme, called the Pascal Triangle (named after the French mathematician and philosopher Blaise Pascal, 1623-1662). The Figure below shows only a finite piece of the Pascal Triangle.
We can replace each binomial coefficient by its numerical value, to get another version of Pascal's Triangle.
5.1 Prove that the Pascal Triangle is symmetric with respect to the vertical line through its apex.
5.2 Prove that each row in the Pascal Triangle starts and ends with 1.
### Identities in the Pascal Triangle
Looking at the Pascal Triangle, it is not hard to notice its most important property: every number in it (other than the 1's on the boundary) is the sum of the two numbers immediately above it. This in fact is a property of the binomial coefficients you have already met: it translates into the relation
$$
\left(\begin{array}{l}
n \\
k
\end{array}\right)=\left(\begin{array}{l}
n-1 \\
k-1
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \tag{5}
$$
(cf. exercise 4.8).
This property of the Pascal Triangle enables us to generate the triangle very fast, building it up row by row, using (5). It also gives us a tool to prove many properties of the binomial coefficients, as we shall see.
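In Python, this row-by-row generation takes only a few lines (a sketch; the generator name is ours):

```python
# Build the Pascal Triangle using only identity (5): each interior
# entry is the sum of the two entries above it.
def pascal_rows(n):
    row = [1]
    for _ in range(n + 1):
        yield row
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]

for row in pascal_rows(4):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```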
As a first application, let us give a new solution of exercise 4.13. There the task was to prove the identity
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)-\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)-\left(\begin{array}{l}
n \\
3
\end{array}\right) \ldots+(-1)^{n}\left(\begin{array}{l}
n \\
n
\end{array}\right)=0
$$
using the binomial theorem. Now we give a proof based on (5): we can replace $\left(\begin{array}{l}n \\ 0\end{array}\right)$ by $\left(\begin{array}{c}n-1 \\ 0\end{array}\right)$ (both are just 1), $\left(\begin{array}{l}n \\ 1\end{array}\right)$ by $\left(\begin{array}{c}n-1 \\ 0\end{array}\right)+\left(\begin{array}{c}n-1 \\ 1\end{array}\right)$, $\left(\begin{array}{l}n \\ 2\end{array}\right)$ by $\left(\begin{array}{c}n-1 \\ 1\end{array}\right)+\left(\begin{array}{c}n-1 \\ 2\end{array}\right)$, etc. Thus we get the sum
$$
\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)-\left[\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)\right]+\left[\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
2
\end{array}\right)\right]-\left[\left(\begin{array}{c}
n-1 \\
2
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
3
\end{array}\right)\right]+\ldots
$$
which is clearly 0 , since the second term in each bracket cancels with the first term of the next.
This method gives more than just a new proof of an identity we already know. What do we get if we start the same way, adding and subtracting binomial coefficients alternatingly, but stop earlier? In formula, we take
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)-\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)-\left(\begin{array}{l}
n \\
3
\end{array}\right) \ldots+(-1)^{k}\left(\begin{array}{l}
n \\
k
\end{array}\right)
$$
If we do the same trick as above, we get
$$
\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)-\left[\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)\right]+\left[\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
2
\end{array}\right)\right]-\ldots(-1)^{k}\left[\left(\begin{array}{c}
n-1 \\
k-1
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
k
\end{array}\right)\right] .
$$
Here again every term cancels except the last one; so the result is $(-1)^{k}\left(\begin{array}{c}n-1 \\ k\end{array}\right)$.
There are many other surprising relations satisfied by the numbers in the Pascal Triangle. For example, let's ask: what is the sum of squares of elements in each row?
Let's experiment by computing the sum of squares of elements in the first few rows:
$$
\begin{aligned}
1^{2} & =1, \\
1^{2}+1^{2} & =2, \\
1^{2}+2^{2}+1^{2} & =6, \\
1^{2}+3^{2}+3^{2}+1^{2} & =20, \\
1^{2}+4^{2}+6^{2}+4^{2}+1^{2} & =70 .
\end{aligned}
$$
We may recognize these numbers as the numbers in the middle column of the Pascal triangle. Of course, only every second row contains an entry in the middle column, so the last value above, the sum of squares in row No. 4 , is the middle element in row No. 8. So the examples above suggest the following identity:
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)^{2}+\left(\begin{array}{l}
n \\
1
\end{array}\right)^{2}+\left(\begin{array}{l}
n \\
2
\end{array}\right)^{2}+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)^{2}+\left(\begin{array}{l}
n \\
n
\end{array}\right)^{2}=\left(\begin{array}{c}
2 n \\
n
\end{array}\right) \tag{6}
$$
Of course, the few experiments above do not prove that this identity always holds, so we need a proof.
We will give an interpretation of both sides of the identity as the result of a counting problem; it will turn out that they count the same things, so they are equal. It is obvious what the right hand side counts: the number of subsets of size $n$ of a set of size $2 n$. It will be convenient to choose, as our $2 n$-element set, the set $S=\{1,2, \ldots, 2 n\}$.
The combinatorial interpretation of the left hand side is not so easy. Consider a typical term, say $\left(\begin{array}{l}n \\ k\end{array}\right)^{2}$. We claim that this is the number of those $n$-element subsets of $\{1,2, \ldots, 2 n\}$ that contain exactly $k$ elements from $\{1,2, \ldots, n\}$ (the first half of our set $S$). In fact, how do we choose such an $n$-element subset of $S$? We choose $k$ elements from $\{1,2, \ldots, n\}$ and then $n-k$ elements from $\{n+1, n+2, \ldots, 2 n\}$. The first can be done in $\left(\begin{array}{l}n \\ k\end{array}\right)$ ways; no matter which $k$-element subset of $\{1,2, \ldots, n\}$ we selected, we have $\left(\begin{array}{c}n \\ n-k\end{array}\right)$ ways to choose the other part. Thus the number of ways to choose an $n$-element subset of $S$ having $k$ elements from $\{1,2, \ldots, n\}$ is
$$
\left(\begin{array}{l}
n \\
k
\end{array}\right) \cdot\left(\begin{array}{c}
n \\
n-k
\end{array}\right)=\left(\begin{array}{l}
n \\
k
\end{array}\right)^{2}
$$
(by the symmetry of the Pascal Triangle).
Now to get the total number of $n$-element subsets of $S$, we have to sum these numbers for all values of $k=0,1, \ldots, n$. This proves identity (6).
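Identity (6) is also easy to confirm numerically before (or after) proving it; a one-loop Python check:

```python
from math import comb

# Sum of squares of row n equals the middle entry of row 2n.
for n in range(12):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)
```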
5.3 Give a proof of the formula in exercise 4.10
$$
1+\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)+\left(\begin{array}{l}
n \\
n
\end{array}\right)=2^{n}
$$
along the same lines. (One could expect that, similarly as for the "alternating" sum, we could get a nice formula for the sum obtained by stopping earlier, like $\left(\begin{array}{l}n \\ 0\end{array}\right)+\left(\begin{array}{l}n \\ 1\end{array}\right)+$ $\ldots+\left(\begin{array}{l}n \\ k\end{array}\right)$. But this is not the case: no simpler expression is known for this sum in general.)
5.4 By the Binomial Theorem, the right hand side in identity (6) is the coefficient of $x^{n} y^{n}$ in the expansion of $(x+y)^{2 n}$. Write $(x+y)^{2 n}$ in the form $(x+y)^{n}(x+y)^{n}$, expand both factors $(x+y)^{n}$ using the binomial theorem, and then try to figure out the coefficient of $x^{n} y^{n}$ in the product. Show that this gives another proof of identity (6).
5.5 Prove the following identity:
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)\left(\begin{array}{c}
m \\
k
\end{array}\right)+\left(\begin{array}{c}
n \\
1
\end{array}\right)\left(\begin{array}{c}
m \\
k-1
\end{array}\right)+\left(\begin{array}{c}
n \\
2
\end{array}\right)\left(\begin{array}{c}
m \\
k-2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
k-1
\end{array}\right)\left(\begin{array}{c}
m \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
k
\end{array}\right)\left(\begin{array}{c}
m \\
0
\end{array}\right)=\left(\begin{array}{c}
n+m \\
k
\end{array}\right) .
$$
You can use a combinatorial interpretation of both sides, similarly as in the proof of (6) above, or the Binomial Theorem as in the previous exercise.
Here is another relation between the numbers in the Pascal Triangle. Let us start with the last element in any row, and sum the elements moving down diagonally to the left. For example, starting with the last element in the second row, we get
$$
\begin{aligned}
1 & =1 \\
1+3 & =4 \\
1+3+6 & =10 \\
1+3+6+10 & =20 .
\end{aligned}
$$
These numbers are just the numbers in the next skew line of the table! If we want to put this in a formula, we get
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)+\left(\begin{array}{c}
n+1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n+2 \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n+k \\
k
\end{array}\right)=\left(\begin{array}{c}
n+k+1 \\
k
\end{array}\right) . \tag{7}
$$
To prove this identity, we use induction on $k$. If $k=0$, the identity just says $1=1$, so it is trivially true. (We can check it also for $k=1$, even though this is not necessary. Anyway, it says $1+(n+1)=n+2$.)
So suppose that the identity (7) is true for a given value of $k$, and we want to prove that it also holds for $k+1$ in place of $k$. In other words, we want to prove that
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)+\left(\begin{array}{c}
n+1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n+2 \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n+k \\
k
\end{array}\right)+\left(\begin{array}{c}
n+k+1 \\
k+1
\end{array}\right)=\left(\begin{array}{c}
n+k+2 \\
k+1
\end{array}\right)
$$
Here the sum of the first $k+1$ terms on the left hand side is $\left(\begin{array}{c}n+k+1 \\ k\end{array}\right)$ by the induction hypothesis, and so the left hand side is equal to
$$
\left(\begin{array}{c}
n+k+1 \\
k
\end{array}\right)+\left(\begin{array}{c}
n+k+1 \\
k+1
\end{array}\right)
$$
But this is indeed equal to $\left(\begin{array}{c}n+k+2 \\ k+1\end{array}\right)$ by the fundamental property of the Pascal Triangle. This completes the proof by induction.
5.6 Suppose that you want to choose a $(k+1)$-element subset of the $(n+k+1)$-element set $\{1,2, \ldots, n+k+1\}$. You decide to do this by choosing first the largest element, then the rest. Show that counting the number of ways to choose the subset this way, you get a combinatorial proof of identity (7).
### A bird's eye view at the Pascal Triangle
Let's imagine that we are looking at the Pascal Triangle from a distance. Or, to put it differently, we are not interested in the exact numerical value of the entries, but rather in their order of magnitude, rise and fall, and other global properties. The first such property of the Pascal Triangle is its symmetry (with respect to the vertical line through its apex), which we already know.
Another property one observes is that along any row, the entries increase until the middle, and then decrease. If $n$ is even, there is a unique middle element in the $n$-th row, and this is the largest; if $n$ is odd, then there are two equal middle elements, which are largest. So let us prove that the entries increase until the middle (then they begin to decrease by the symmetry of the table). We want to compare two consecutive entries:
$$
\left(\begin{array}{l}
n \\
k
\end{array}\right) ?\left(\begin{array}{c}
n \\
k+1
\end{array}\right)
$$
If we use the formula in Theorem 4.2, we can write this as
$$
\frac{n(n-1) \ldots(n-k+1)}{k(k-1) \ldots 1} ? \frac{n(n-1) \ldots(n-k)}{(k+1) k \ldots 1} .
$$
There are a lot of common factors on both sides with which we can simplify. We get the really simple comparison
$$
1 ? \frac{n-k}{k+1} \text {. }
$$
Rearranging, we get
$$
k ? \frac{n-1}{2} .
$$
So if $k<(n-1) / 2$, then $\left(\begin{array}{l}n \\ k\end{array}\right)<\left(\begin{array}{c}n \\ k+1\end{array}\right)$; if $k=(n-1) / 2$, then $\left(\begin{array}{l}n \\ k\end{array}\right)=\left(\begin{array}{c}n \\ k+1\end{array}\right)$ (this is the case of the two entries in the middle if $n$ is odd); and if $k>(n-1) / 2$, then $\left(\begin{array}{l}n \\ k\end{array}\right)>\left(\begin{array}{c}n \\ k+1\end{array}\right)$.
It will be useful later that this computation also describes by how much consecutive elements increase or decrease. If we start from the left, the second entry $(n)$ is larger by a factor of $n$ than the first; the third $(n(n-1) / 2)$ is larger by a factor of $(n-1) / 2$ than the second. In general,
$$
\frac{\left(\begin{array}{c}
n \\
k+1
\end{array}\right)}{\left(\begin{array}{l}
n \\
k
\end{array}\right)}=\frac{n-k}{k+1} . \tag{8}
$$
5.7 For which values of $n$ and $k$ is $\left(\begin{array}{c}n \\ k+1\end{array}\right)$ twice the previous entry in the Pascal Triangle?
5.8 Instead of the ratio, look at the difference of two consecutive entries in the Pascal triangle:
$$
\left(\begin{array}{c}
n \\
k+1
\end{array}\right)-\left(\begin{array}{l}
n \\
k
\end{array}\right)
$$
For which value of $k$ is this difference largest?
We know that each row of the Pascal Triangle is symmetric. We also know that the entries start with 1, rise to the middle, and then fall back to 1. Can we say more about their shape?
Figure 9 shows the graph of the numbers $\left(\begin{array}{l}n \\ k\end{array}\right)(k=0,1, \ldots, n)$ for the values $n=10$ and $n=100$. We can make several further observations.
- First, the largest number gets very large.
- Second, not only do these numbers increase to the middle and then decrease, but the middle ones are substantially larger than those at the beginning and end. For $n=100$, the figure only shows the range $\left(\begin{array}{c}100 \\ 20\end{array}\right),\left(\begin{array}{c}100 \\ 21\end{array}\right), \ldots,\left(\begin{array}{c}100 \\ 80\end{array}\right)$; the numbers outside this range are so small compared to the largest that they are negligible.
Figure 9: Graph of the $n$-th row of the Pascal Triangle, for $n=10$ and 100.

- Third, we can observe that the shape of the graph is quite similar for different values of $n$.
Let's look more carefully at these observations. For the discussions that follow, we shall assume that $n$ is even (for odd values of $n$, the results would be quite similar, just one would have to word them differently). If $n$ is even, then we know that the largest entry in the $n$-th row is the middle number $\left(\begin{array}{c}n \\ n / 2\end{array}\right)$, and all other entries are smaller.
How large is the largest number in the $n$-th row of the Pascal Triangle? We know immediately an upper bound on this number:
$$
\left(\begin{array}{c}
n \\
n / 2
\end{array}\right)<2^{n}
$$
since $2^{n}$ is the sum of all entries in the row. It only takes a little more sophistication to get a lower bound:
$$
\left(\begin{array}{c}
n \\
n / 2
\end{array}\right)>\frac{2^{n}}{n+1}
$$
since $2^{n} /(n+1)$ is the average of the numbers in the row, and the largest number is certainly at least as large as the average.
These bounds already give a pretty good idea about the size of $\left(\begin{array}{c}n \\ n / 2\end{array}\right)$. Take, say, $n=500$. Then we get
$$
\frac{2^{500}}{501}<\left(\begin{array}{c}
500 \\
250
\end{array}\right)<2^{500}
$$
If we want to know, say, the number of digits of $\left(\begin{array}{l}500 \\ 250\end{array}\right)$, we just have to take the logarithm (in base 10) of it. From the bounds above, we get
$$
500 \lg 2-\lg 501<\lg \left(\begin{array}{c}
500 \\
250
\end{array}\right)<500 \lg 2
$$
Since $\lg 501<3$, this inequality gives the number of digits with a very small error: if we guess that it is $(500 \lg 2)-1$, rounded up (which is 150), then we are off by at most 2 (actually, 150 is the true value).
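Python's exact integer arithmetic lets us confirm both the bounds and the digit count (a sketch, assuming math.comb is available):

```python
import math

value = math.comb(500, 250)
assert 2**500 // 501 < value < 2**500      # the bounds derived above

print(len(str(value)))                      # 150 digits
print(math.ceil(500 * math.log10(2) - 1))   # the estimate above: 150
```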
Using the Stirling Formula, one can get a much better approximation of this largest entry.
$$
\left(\begin{array}{c}
n \\
n / 2
\end{array}\right) \sim \frac{2^{n}}{\sqrt{\pi n / 2}} \tag{9}
$$
5.9 (a) Give a combinatorial argument to show that $2^{n}>\left(\begin{array}{l}n \\ 4\end{array}\right)$.
(b) Use this inequality to show that $2^{n}>n^{3}$ if $n$ is large enough.
5.10 Show how to use the Stirling Formula to derive (9).
So we know that the largest entry in the $n$-th row of the Pascal triangle is in the middle, and we know approximately how large this element is. We also know that going either left or right, the elements begin to drop. How fast do they drop? We show that if we move away from the middle of the row, quite soon the numbers drop to less than a half of the maximum. (We pick this $1 / 2$ only for illustration; we shall see that one can estimate similarly how far away from the middle we have to move before the entries drop to a third, or to one percent, of the middle entry.) This quantifies our observation that the large binomial coefficients are concentrated in the middle of the row.
It is important to point out that in the arguments below, using calculus would give stronger results; we only give here as much as we can without appealing to calculus.
Again, we consider the case when $n$ is even; then we can write $n=2 m$, where $m$ is a positive integer. The middle entry is $\left(\begin{array}{c}2 m \\ m\end{array}\right)$. Consider the binomial coefficient that is $t$ steps before the middle: this is $\left(\begin{array}{c}2 m \\ m-t\end{array}\right)$. We want to compare it with the largest coefficient. We shall choose the value of $t$ later; this way our calculations will be valid for any value of $t$ such that $0 \leq t \leq m$, and so they will be applicable in different situations.
We already know that $\left(\begin{array}{c}2 m \\ m\end{array}\right)>\left(\begin{array}{c}2 m \\ m-t\end{array}\right)$; we also know by (8) that going left from $\left(\begin{array}{c}2 m \\ m\end{array}\right)$, the entries drop by factors $\frac{m}{m+1}$, then by $\frac{m-1}{m+2}$, then by $\frac{m-2}{m+3}$, etc. The last drop is by a factor of $\frac{m-t+1}{m+t}$.
Multiplying these factors together, we get
$$
\left(\begin{array}{c}
2 m \\
m-t
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m
\end{array}\right)=\frac{m(m-1) \ldots(m-t+1)}{(m+1)(m+2) \ldots(m+t)} .
$$
So we have a formula for this ratio, but how do we know for which values of $t$ it becomes less than $1 / 2$? We could write up the inequality
$$
\frac{m(m-1) \ldots(m-t+1)}{(m+1)(m+2) \ldots(m+t)}<\frac{1}{2}
$$
and solve it for $t$ (as we did when we proved that the entries are increasing to the middle), but this would lead to inequalities that are too complicated to solve. So we have to do some arithmetic trickery here. We start with looking at the reciprocal ratio:
$$
\left(\begin{array}{c}
2 m \\
m
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t
\end{array}\right)=\frac{(m+1)(m+2) \ldots(m+t)}{m(m-1) \ldots(m-t+1)}
$$
and write it in the following form:
$$
\begin{aligned}
& \frac{(m+t)(m+t-1) \ldots(m+1)}{m(m-1) \ldots(m-t+1)}=\frac{m+t}{m} \cdot \frac{m+t-1}{m-1} \cdot \ldots \cdot \frac{m+1}{m-t+1} \\
& =\left(1+\frac{t}{m}\right) \cdot\left(1+\frac{t}{m-1}\right) \cdot \ldots \cdot\left(1+\frac{t}{m-t+1}\right) .
\end{aligned}
$$
If we replace the numbers $m, m-1, \ldots, m-t+1$ in the denominator by $m$, we decrease the value of each factor and thereby the value of the whole product:
$$
\left(1+\frac{t}{m}\right) \cdot\left(1+\frac{t}{m-1}\right) \cdot \ldots \cdot\left(1+\frac{t}{m-t+1}\right) \geq\left(1+\frac{t}{m}\right)^{t} .
$$
It is still not easy to see how large this expression is; the base is close to 1 , but the exponent may be large. We can get a simpler expression if we use the Binomial Theorem and then retain just the first two terms:
$$
\left(1+\frac{t}{m}\right)^{t}=1+\left(\begin{array}{l}
t \\
1
\end{array}\right) \frac{t}{m}+\left(\begin{array}{l}
t \\
2
\end{array}\right)\left(\frac{t}{m}\right)^{2}+\ldots>1+\left(\begin{array}{l}
t \\
1
\end{array}\right) \frac{t}{m}=1+\frac{t^{2}}{m} .
$$
To sum up, we conclude that
Theorem 5.1 For all integers $0 \leq t \leq m$,
$$
\left(\begin{array}{c}
2 m \\
m
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t
\end{array}\right)>1+\frac{t^{2}}{m} .
$$
Let us choose $t$ to be the least integer that is not smaller than $\sqrt{m}$ (in notation: $\lceil\sqrt{m}\rceil$ ). Then $t^{2} \geq m$, and we get the following:
$$
\left(\begin{array}{c}
2 m \\
m-\lceil\sqrt{m}\rceil
\end{array}\right)<\frac{1}{2}\left(\begin{array}{c}
2 m \\
m
\end{array}\right)
$$
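A quick Python check of this consequence (we test the weak form "at most half", since for $m=1$ the ratio is exactly 2):

```python
import math

for m in range(1, 200):
    t = math.isqrt(m - 1) + 1           # ceil(sqrt(m)) for m >= 1
    middle = math.comb(2 * m, m)
    assert 2 * math.comb(2 * m, m - t) <= middle
```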
5.11 Find a number $c$ such that if $t>c \sqrt{m}$, then
$$
\left(\begin{array}{c}
2 m \\
m-t
\end{array}\right)<\frac{1}{100}\left(\begin{array}{c}
2 m \\
m
\end{array}\right)
$$
Consider, for example, row No. 500. Here $m=250$, and since $\sqrt{250}=15.81 \ldots$, we can take $t=16$; it follows that the entry $\left(\begin{array}{l}500 \\ 234\end{array}\right)$ is less than half of the largest entry $\left(\begin{array}{l}500 \\ 250\end{array}\right)$. This argument only gives an upper bound on how far we have to go; it does not say that the entries closer to the middle are all larger than half of the middle entry. In fact, the entry $\left(\begin{array}{l}500 \\ 236\end{array}\right)$ is already smaller than $\frac{1}{2}\left(\begin{array}{l}500 \\ 250\end{array}\right)$, but the entry $\left(\begin{array}{l}500 \\ 237\end{array}\right)$ is larger.

Next we prove that even if we sum all entries outside a narrow range in the middle, we get a small fraction of the sum of all entries. (This fact will be very important when we apply these results in probability theory.)
We need an inequality that is a generalization of the inequality in Theorem 5.1. We state it as a "lemma"; this means an auxiliary result that may not be so interesting in itself, but will be important in proving some other theorem.
Lemma 5.1 For all integers $0 \leq t, s \leq m$ such that $t+s \leq m$,
$$
\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)>\frac{t^{2}}{m}
$$
For $s=0$, this lemma says the same as Theorem 5.1. We leave the proof of it as an exercise.
5.12 (a) Prove Lemma 5.1, by following the proof of Theorem 5.1.
(b) Show that Lemma 5.1 follows from Theorem 5.1, if one observes that as $s$ increases, the binomial coefficient $\left(\begin{array}{c}2 m \\ m-t-s\end{array}\right)$ decreases faster than the binomial coefficient $\left(\begin{array}{c}2 m \\ m-s\end{array}\right)$.
(c) Show that by doing the calculations more carefully, the lower bound of $t^{2} / m$ in Lemma 5.1 can be improved to $1+t(2 t+s) / m$.
Now we state the theorem about the sum of the "small" binomial coefficients.
Theorem 5.2 Let $0 \leq k \leq m$ and let $t=\lceil\sqrt{k m}\rceil$. Then
$$
\left(\begin{array}{c}
2 m \\
0
\end{array}\right)+\left(\begin{array}{c}
2 m \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-t-1
\end{array}\right)<\frac{1}{2 k} 2^{2 m} \tag{12}
$$
To digest the meaning of this, choose $k=100$. The quantity $2^{2 m}$ on the right hand side is the sum of all binomial coefficients in row No. $2 m$ of the Pascal Triangle; so the theorem says that if we take the sum of the first $m-t$ entries (where $t=\lceil 10 \sqrt{m}\rceil$), then we get less than half a percent of the total sum. By the symmetry of the Pascal Triangle, the sum of the last $m-t$ entries in this row will be the same, so the $2 t+1$ remaining terms in the middle make up 99 percent of the sum.
To prove this theorem, let us compare the sum on the left hand side of (12) with the sum
$$
\left(\begin{array}{c}
2 m \\
t
\end{array}\right)+\left(\begin{array}{c}
2 m \\
t+1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-1
\end{array}\right) \tag{13}
$$
Let us note right away that the sum in (13) is clearly less than $2^{2 m} / 2$, since even if we add the mirror image of this part of the row of the Pascal Triangle, we get less than $2^{2 m}$. Now for the comparison, we have
$$
\left(\begin{array}{c}
2 m \\
m-t-1
\end{array}\right) \leq \frac{1}{k}\left(\begin{array}{c}
2 m \\
m-1
\end{array}\right)
$$
by Lemma 5.1 (check the computation!), and
$$
\left(\begin{array}{c}
2 m \\
m-t-2
\end{array}\right)<\frac{1}{k}\left(\begin{array}{c}
2 m \\
m-2
\end{array}\right)
$$
and similarly, each term on the left hand side of (12) is less than $1 / k$ times the corresponding term in (13). Hence we get that
$$
\left(\begin{array}{c}
2 m \\
0
\end{array}\right)+\left(\begin{array}{c}
2 m \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-t-1
\end{array}\right)<\frac{1}{k}\left(\left(\begin{array}{c}
2 m \\
t
\end{array}\right)+\left(\begin{array}{c}
2 m \\
t+1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-1
\end{array}\right)\right)<\frac{1}{2 k} 2^{2 m} .
$$
This proves the theorem.
## 6 Fibonacci numbers
### Fibonacci's exercise
In the $13^{\text {th }}$ century, the Italian mathematician Leonardo Fibonacci studied the following (not too realistic) exercise:
A farmer raises rabbits. Each rabbit gives birth to one rabbit when it turns 2 months old, and then to one rabbit each month. Rabbits never die, and we ignore hares. How many rabbits will the farmer have in the $n$-th month, if he starts with one newborn rabbit?
It is easy to figure out the answer for small values of $n$. He has 1 rabbit in the first month and 1 rabbit in the second month, since the rabbit has to be 2 months old before starting to reproduce. He has 2 rabbits during the third month, and 3 rabbits during the fourth, since his first rabbit delivered a new one after the second month and one after the third. After 4 months, the second rabbit also delivers a new rabbit, so two new rabbits are added. This means that the farmer will have 5 rabbits during the fifth month.
It is easy to follow the multiplication of rabbits for any number of months, if we notice that the number of new rabbits added after $n$ months is just the same as the number of rabbits who are at least 2 months old, i.e., who were already there during the $(n-1)$-st month. In other words, if we denote by $F_{n}$ the number of rabbits during the $n$-th month, then we have, for $n=2,3,4, \ldots$,
$$
F_{n+1}=F_{n}+F_{n-1} . \tag{14}
$$
We also know that $F_{1}=1, F_{2}=1, F_{3}=2, F_{4}=3, F_{5}=5$. It is convenient to define $F_{0}=0$; then equation (14) will remain valid for $n=1$ as well. Using the equation (14), we can easily determine any number of terms in this sequence of numbers:
$$
0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597 \ldots
$$
The numbers in this sequence are called Fibonacci numbers.
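The recurrence translates directly into a few lines of Python (a sketch; the function name is ours):

```python
# Fibonacci numbers from the recurrence F_{n+1} = F_n + F_{n-1},
# starting from F_0 = 0 and F_1 = 1.
def fib(n):
    a, b = 0, 1          # F_0 and F_1
    for _ in range(n):
        a, b = b, a + b  # one step of the recurrence
    return a

print([fib(n) for n in range(12)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```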
We see that equation (14), together with the special values $F_{0}=0$ and $F_{1}=1$, uniquely determines the Fibonacci numbers. Thus we can consider (14), together with $F_{0}=0$ and $F_{1}=1$, as the definition of these numbers. This may seem a somewhat unusual definition: instead of telling what $F_{n}$ is (say, by a formula), we just give a rule that computes each Fibonacci number from the two previous numbers, and specify the first two values. Such a definition is called a recurrence. It is quite similar in spirit to induction (except that it is not a proof technique, but a definition method), and is sometimes also called definition by induction.

6.1 Why do we have to specify exactly two of the elements to begin with? Why not one or three?
Before trying to say more about these numbers, let us consider another counting problem:
A staircase has $n$ steps. You walk up taking one or two at a time. How many ways can you go up?
For $n=1$, there is only 1 way. For $n=2$, you have 2 choices: take one step twice or two steps once. For $n=3$, you have 3 choices: three single steps, or one single followed by one double, or one double followed by one single.
Now stop and try to guess what the answer is in general! If you guessed that the number of ways to go up on a stair with $n$ steps is $n$, you are wrong. The next case, $n=4$, gives 5 possibilities $(1+1+1+1,2+1+1,1+2+1,1+1+2,2+2)$.
So instead of guessing, let's try the following strategy. We denote by $G_{n}$ the answer, and try to figure out what $G_{n+1}$ is, assuming we know the value of $G_{k}$ for $k \leq n$. If we start with a single step, we have $G_{n}$ ways to go up the remaining $n$ steps. If we start with a double step, we have $G_{n-1}$ ways to go up the remaining $n-1$ steps. Now these are all the possibilities, and so
$$
G_{n+1}=G_{n}+G_{n-1} .
$$
This equation is the same as the equation we have used to compute the Fibonacci numbers $F_{n}$. Does this mean that $F_{n}=G_{n}$? Of course not, as we see by looking at the beginning values: for example, $F_{3}=2$ but $G_{3}=3$. However, it is easy to observe that all that happens is that the $G_{n}$ are shifted by one:
$$
G_{n}=F_{n+1}
$$
This is valid for $n=1,2$, and then of course it is valid for every $n$ since the sequences $F_{2}, F_{3}, F_{4} \ldots$ and $G_{1}, G_{2}, G_{3}, \ldots$ are computed by the same rule from their first two elements.
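We can also confirm $G_{n}=F_{n+1}$ by exhaustively listing all walks for small $n$; a Python sketch:

```python
from itertools import product

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Count all sequences of steps of size 1 or 2 that sum to n.
def staircase_ways(n):
    count = 0
    for length in range(n + 1):          # number of steps taken
        for moves in product((1, 2), repeat=length):
            if sum(moves) == n:
                count += 1
    return count

for n in range(1, 11):
    assert staircase_ways(n) == fib(n + 1)
```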
6.2 We have $n$ dollars to spend. Every day we either buy a candy for 1 dollar, or an ice cream for 2 dollars. In how many ways can we spend the money?
### Lots of identities
There are many interesting relations valid for the Fibonacci numbers. For example, what is the sum of the first $n$ Fibonacci numbers? We have
$$
\begin{aligned}
0 & =0, \\
0+1 & =1, \\
0+1+1 & =2, \\
0+1+1+2 & =4, \\
0+1+1+2+3 & =7, \\
0+1+1+2+3+5 & =12, \\
0+1+1+2+3+5+8 & =20, \\
0+1+1+2+3+5+8+13 & =33 .
\end{aligned}
$$
Staring at these numbers for a while, it is not hard to recognize that adding 1 to the right hand sides we get Fibonacci numbers; in fact, we get Fibonacci numbers two steps after the last summand. In formula:
$$
F_{0}+F_{1}+F_{2}+\ldots+F_{n}=F_{n+2}-1
$$
Of course, at this point this is only a conjecture, an unproved mathematical statement we believe to be true. To prove it, we use induction on $n$ (since the Fibonacci numbers are defined by recurrence, induction is the natural and often only proof method at hand).
We have already checked the validity of the statement for $n=0$ and 1 . Suppose that we know that the identity holds for the sum of the first $n-1$ Fibonacci numbers. Consider the sum of the first $n$ Fibonacci numbers:
$$
F_{0}+F_{1}+\ldots+F_{n}=\left(F_{0}+F_{1}+\ldots+F_{n-1}\right)+F_{n}=\left(F_{n+1}-1\right)+F_{n},
$$
by the induction hypothesis. But now we can use the recurrence equation for the Fibonacci numbers, to get
$$
\left(F_{n+1}-1\right)+F_{n}=F_{n+2}-1 .
$$
This completes the induction proof.
6.3 Prove that $F_{3 n}$ is even.
6.4 Prove that $F_{5 n}$ is divisible by 5 .
6.5 Prove the following identities.
(a) $F_{1}+F_{3}+F_{5}+\ldots+F_{2 n-1}=F_{2 n}$.
(b) $F_{0}-F_{1}+F_{2}-F_{3}+\ldots-F_{2 n-1}+F_{2 n}=F_{2 n-1}-1$.
(c) $F_{0}^{2}+F_{1}^{2}+F_{2}^{2}+\ldots+F_{n}^{2}=F_{n} \cdot F_{n+1}$.
(d) $F_{n-1} F_{n+1}-F_{n}^{2}=(-1)^{n}$.
6.6 Mark the first entry (a 1) of any row of the Pascal triangle. Move one step East and one step Northeast, and mark the entry there. Repeat this until you get out of the triangle. Compute the sum of the entries you marked.
(a) What numbers do you get if you start from different rows? First "conjecture", than prove your answer.
(b) Formulate this fact as an identity involving binomial coefficients.
6.7 Cut a chessboard into 4 pieces as shown in Figure 10 and assemble a $5 \times 13$ rectangle from them. Does this prove that $5 \cdot 13=8^{2}$ ? Where are we cheating? What does this have to do with Fibonacci numbers?
### A formula for the Fibonacci numbers
How large are the Fibonacci numbers? Is there a simple formula that expresses $F_{n}$ as a function of $n$ ?
An easy way out, at least for the author of a book, is to state the answer right away:
Figure 10: Proof of $65=64$
Theorem 6.1 The Fibonacci numbers are given by the formula
$$
F_{n}=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^{n}\right) .
$$
It is straightforward to check that this formula gives the right value for $n=0,1$, and then one can prove its validity for all $n$ by induction.
6.8 Prove Theorem 6.1 by induction on $n$.
Do you feel cheated by this? You should; while it is of course logically correct what we did, one would like to see more: how can one arrive at such a formula? What should we try if we face a similar, but different recurrence?
So let us forget Theorem 6.1 for a while and let us try to find a formula for $F_{n}$ "from scratch".
One thing we can try is to experiment. The Fibonacci numbers grow quite fast; how fast? Let's compute the ratio of consecutive Fibonacci numbers:
$$
\begin{gathered}
\frac{1}{1}=1, \quad \frac{2}{1}=2, \quad \frac{3}{2}=1.5, \quad \frac{5}{3}=1.666666667, \quad \frac{8}{5}=1.600000000 \\
\frac{13}{8}=1.625000000, \quad \frac{21}{13}=1.615384615, \quad \frac{34}{21}=1.619047619, \quad \frac{55}{34}=1.617647059 \\
\frac{89}{55}=1.618181818, \quad \frac{144}{89}=1.617977528, \quad \frac{233}{144}=1.618055556, \quad \frac{377}{233}=1.618025751 .
\end{gathered}
$$
It seems that the ratio of consecutive Fibonacci numbers is very close to 1.618 , at least if we ignore the first few values. This suggests that the Fibonacci numbers behave like a geometric progression. So let's see if there is any geometric progression that satisfies the same recurrence as the Fibonacci numbers. Let $G_{n}=c \cdot q^{n}$ be a geometric progression $(c, q \neq 0)$. Then
$$
G_{n+1}=G_{n}+G_{n-1}
$$
translates into
$$
c \cdot q^{n+1}=c \cdot q^{n}+c \cdot q^{n-1}
$$
which after simplification becomes
$$
q^{2}=q+1
$$
(So the number $c$ disappears: what this means that if we find a sequence of this form that satisfies Fibonacci's recurrence, then we can change $c$ in it to any other real number, and get another sequence that satisfies the recurrence.)
What we have is a quadratic equation for $q$, which we can solve and get
$$
q_{1}=\frac{1+\sqrt{5}}{2} \approx 1.618034, \quad q_{2}=\frac{1-\sqrt{5}}{2} \approx-0.618034 .
$$
So we have found two kinds of geometric progressions that satisfy the same recurrence as the Fibonacci numbers:
$$
G_{n}=c\left(\frac{1+\sqrt{5}}{2}\right)^{n}, \quad G_{n}^{\prime}=c\left(\frac{1-\sqrt{5}}{2}\right)^{n}
$$
(where $c$ is an arbitrary constant). Unfortunately, neither $G_{n}$ nor $G_{n}^{\prime}$ gives the Fibonacci sequence: for one, $G_{0}=G_{0}^{\prime}=c$ while $F_{0}=0$. But notice that the sequence $G_{n}-G_{n}^{\prime}$ also satisfies the recurrence:
$$
G_{n+1}-G_{n+1}^{\prime}=\left(G_{n}+G_{n-1}\right)-\left(G_{n}^{\prime}+G_{n-1}^{\prime}\right)=\left(G_{n}-G_{n}^{\prime}\right)+\left(G_{n-1}-G_{n-1}^{\prime}\right)
$$
(using that $G_{n}$ and $G_{n}^{\prime}$ satisfy the recurrence). Now we have matched the first value $F_{0}$, since $G_{0}-G_{0}^{\prime}=0$. What about the next one? We have $G_{1}-G_{1}^{\prime}=c \sqrt{5}$. We can match this with $F_{1}=1$ if we choose $c=1 / \sqrt{5}$.
Now we have two sequences: $F_{n}$ and $G_{n}-G_{n}^{\prime}$, that both begin with the same two numbers, and satisfy the same recurrence. Hence they must be the same: $F_{n}=G_{n}-G_{n}^{\prime}$.
Now you can substitute for the values of $G_{n}$ and $G_{n}^{\prime}$, and see that we got the formula in the Theorem!
Let us include a little discussion of the formula we just derived. The first base in the exponential expression is $q_{1}=(1+\sqrt{5}) / 2 \approx 1.618034>1$, while the second base $q_{2}$ is between -1 and 0. Hence if $n$ increases, then $G_{n}$ will become very large, while $\left|G_{n}^{\prime}\right|<1 / 2$ and in fact $G_{n}^{\prime}$ becomes very small. This means that
$$
F_{n} \approx G_{n}=\frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^{n}
$$
where the term we ignore is less than $1 / 2$ (and tends to 0 as $n$ tends to infinity); this means that $F_{n}$ is the integer nearest to $G_{n}$.
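This "nearest integer" description is easy to verify in Python (floating-point arithmetic is accurate enough for small $n$):

```python
import math

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
for n in range(30):
    # F_n is the integer nearest to phi**n / sqrt(5).
    assert round(phi ** n / math.sqrt(5)) == fib(n)
```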
The base $(1+\sqrt{5}) / 2$ is called the golden ratio, and it comes up all over mathematics; for example, it is the ratio between the diagonal and side of a regular pentagon.

6.9 Define a sequence of integers $H_{n}$ by $H_{0}=1, H_{1}=3$, and $H_{n+1}=H_{n}+H_{n-1}$. Show that $H_{n}$ can be expressed in the form $a \cdot q_{1}^{n}+b \cdot q_{2}^{n}$ (where $q_{1}$ and $q_{2}$ are the same numbers as in the proof above), and find the values of $a$ and $b$.
6.10 Define a sequence of integers $I_{n}$ by $I_{0}=0, I_{1}=1$, and $I_{n+1}=4 I_{n}+I_{n-1}$. Find a formula for $I_{n}$.
## 7 Combinatorial probability
### Events and probabilities
Probability theory is one of the most important areas of mathematics from the point of view of applications. In this book, we do not attempt to introduce even the most basic notions of probability theory; our only goal is to illustrate the importance of combinatorial results about the Pascal Triangle by explaining a key result in probability theory, the Law of Large Numbers. To do so, we have to talk a little about what probability is.
If we make an observation about our world, or carry out an experiment, the outcome will always depend on chance (to a varying degree). Think of the weather, the stock market, or a medical experiment. Probability theory is a way of modeling this dependence on chance.
We start with making a mental list of all possible outcomes of the experiment (or observation, which we don't need to distinguish). These possible outcomes form a set $S$. Perhaps the simplest experiment is tossing a coin. This has two outcomes: $H$ (head) and $T$ (tail). So in this case $S=\{H, T\}$. As another example, the outcomes of throwing a dice form the set $S=\{1,2,3,4,5,6\}$. In this book, we assume that the set $S=\left\{s_{1}, s_{2}, \ldots, s_{k}\right\}$ of possible outcomes of our experiment is finite. The set $S$ is often called a sample space.
Every subset of $S$ is called an event (the event that the observed outcome falls in this subset). So if we are throwing a dice, the subset $\{2,4,6\} \subseteq S$ can be thought of as the event that we have thrown an even number.
The intersection of two subsets corresponds to the event that both events occur; for example, if $E=\{2,4,6\}$ is the event that we throw an even number and $L=\{4,5,6\}$ is the event that we throw a better-than-average number, then the subset $L \cap E=\{4,6\}$ corresponds to the event that we throw a better-than-average number that is also even. Two events $A$ and $B$ (i.e., two subsets of $S$ ) are called exclusive if they never occur at the same time, i.e., $A \cap B=\emptyset$. For example, the event $O=\{1,3,5\}$ that the outcome of tossing a dice is odd and the event $E$ that it is even are exclusive, since $E \cap O=\emptyset$.
7.1 What event does the union of two subsets correspond to?
So let $S=\left\{s_{1}, s_{2}, \ldots, s_{k}\right\}$ be the set of possible outcomes of an experiment. To get a probability space we assume that each outcome $s_{i} \in S$ has a "probability" $\mathrm{P}\left(s_{i}\right)$ such that
(a) $\mathrm{P}\left(s_{i}\right) \geq 0$ for all $s_{i} \in S$,
and
(b) $\mathrm{P}\left(s_{1}\right)+\mathrm{P}\left(s_{2}\right)+\ldots+\mathrm{P}\left(s_{k}\right)=1$.
Then we call $S$, together with these probabilities, a probability space. For example, if we toss a "fair" coin, then $\mathrm{P}(H)=\mathrm{P}(T)=1 / 2$. If the dice in our example is of good quality, then we will have $\mathrm{P}(i)=1 / 6$ for every outcome $i$.
A probability space in which every outcome has the same probability is called a uniform probability space. We shall only discuss uniform spaces here, since they are the easiest to imagine and they are the best for the illustration of combinatorial methods. But you should be warned that in more complicated modelling, non-uniform probability spaces are very often needed. For example, if we are observing if a day is rainy or not, we will have a 2-element sample space $S=\{$ RAINY,NON-RAINY $\}$, but these two will typically not have the same probability.

The probability of an event $A \subseteq S$ is defined as the sum of probabilities of outcomes in $A$, and is denoted by $\mathrm{P}(A)$. If the probability space is uniform, then the probability of $A$ is
$$
\mathrm{P}(A)=\frac{|A|}{|S|}=\frac{|A|}{k} .
$$
7.2 Prove that the probability of any event is at most 1.
7.3 What is the probability of the event $E$ that we throw an even number with the dice? What is the probability of the event $T=\{3,6\}$ that we toss a number that is divisible by 3 ?
7.4 Prove that if $A$ and $B$ are exclusive, then $\mathrm{P}(A)+\mathrm{P}(B)=\mathrm{P}(A \cup B)$.
7.5 Prove that for any two events $A$ and $B$,
$$
\mathrm{P}(A \cap B)+\mathrm{P}(A \cup B)=\mathrm{P}(A)+\mathrm{P}(B) .
$$
### Independent repetition of an experiment
Let us repeat our experiment $n$ times. We can consider this as a single big experiment, and a possible outcome of this repeated experiment is a sequence of length $n$, consisting of elements of $S$. Thus the sample space corresponding to this repeated experiment is the set $S^{n}$ of such sequences, and so the number of outcomes of this "big" experiment is $k^{n}$. We consider every sequence equally likely, which means that we consider it a uniform probability space. Thus if $\left(a_{1}, a_{2}, \ldots, a_{n}\right)$ is an outcome of the "big" experiment, then we have
$$
\mathrm{P}\left(a_{1}, a_{2}, \ldots, a_{n}\right)=\frac{1}{k^{n}} .
$$
As an example, consider the experiment of tossing a coin twice. Then $S=\{H, T\}$ (head,tail) for a single coin toss, and so the sample space for the two coin tosses is $\{H H, H T, T H, T T\}$. The probability of each of these outcomes is $1 / 4$.
This definition intends to model the situation where the outcome of each repeated experiment is independent of the previous outcomes, in the everyday sense that "there cannot possibly be any measurable influence of one experiment on the other". We cannot go here into the philosophical questions that this notion raises; all we can do is to give a mathematical definition and check on examples that it expresses the informal notion above.
A key notion in probability is that of independence of events. Informally, this means that information about one (whether or not it occurred) does not influence the probability of the other. Formally, two events $A$ and $B$ are independent if $\mathrm{P}(A \cap B)=\mathrm{P}(A) \mathrm{P}(B)$. For example, if $E=\{2,4,6\}$ is the event that the result of throwing a dice is even, and $T=\{3,6\}$ is the event that it is a multiple of 3 , then $E$ and $T$ are independent: we have $E \cap T=\{6\}$ (the only possibility to throw a number that is even and divisible by 3 is to throw 6 ), and hence
$$
\mathrm{P}(E \cap T)=\frac{1}{6}=\frac{1}{2} \cdot \frac{1}{3}=\mathrm{P}(E) \mathrm{P}(T) .
$$
Consider again the experiment of tossing a coin twice. Let $A$ be the event that the first toss is head; let $B$ be the event that the second toss is head. Then we have $\mathrm{P}(A)=$ $\mathrm{P}(H H)+\mathrm{P}(H T)=1 / 4+1 / 4=1 / 2$, similarly $\mathrm{P}(B)=1 / 2$, and $\mathrm{P}(A \cap B)=\mathrm{P}(H H)=$ $1 / 4=(1 / 2) \cdot(1 / 2)$. Thus $A$ and $B$ are independent events (as they should be).
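These checks are easy to mechanize. The following short Python sketch (our own illustration, using exact rational arithmetic from the standard `fractions` module) verifies the dice example above:

```python
from fractions import Fraction

S = set(range(1, 7))                 # outcomes of throwing a dice
def P(A):                            # probability in the uniform space
    return Fraction(len(A & S), len(S))

E = {2, 4, 6}                        # even outcome
T = {3, 6}                           # outcome divisible by 3
O = {1, 3, 5}                        # odd outcome

assert P(E & T) == P(E) * P(T)       # E and T are independent
assert P(E & O) == 0                 # E and O are exclusive...
assert P(E & O) != P(E) * P(O)       # ...and hence not independent
```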
7.6 Which pairs of the events $E, O, T, L$ are independent? Which pairs are exclusive?
7.7 Show that $\emptyset$ is independent from every event. Is there any other event with this property?
7.8 Consider an experiment with sample space $S$ repeated $n$ times $(n \geq 2)$. Let $s \in S$. Let $A$ be the event that the first outcome is $s$, and let $B$ be the event that the last outcome is $s$. Prove that $A$ and $B$ are independent.
### The Law of Large Numbers
In this section we study an experiment that consists of $n$ independent coin tosses. For simplicity, assume that $n$ is even, so that $n=2 m$ for some integer $m$. Every outcome is a sequence of length $n$, in which each element is either $H$ or $T$. A typical outcome would look like this:
$$
\text { HHTTTHTHT THTHHHTHTT }
$$
(for $n=20$ ).
The "Law of Large Numbers" says that if we toss a coin many times, the number of 'heads' will be about the same as the number of 'tails'. How can we make this statement precise? Certainly, this will not always be true; one can be extremely lucky or unlucky, and have a winning or loosing streak of arbitrary length. Also, we can't claim that the number of heads is equal to the number of tails; only that they are close. The following theorem is the simplest form of the Law of Large Numbers.
Theorem 7.1 Let $0 \leq t \leq m$. Then the probability that out of $2 m$ independent coin tosses, the number of heads is less than $m-t$ or larger than $m+t$, is at most $m / t^{2}$.
Let us discuss this a little. If $t<\sqrt{m}$ then the assertion does not say anything, since we know anyhow that the probability of any event is at most 1 , and the upper bound given is larger than 1 . But if we choose, say, $t=10 \sqrt{m}$, then we get an interesting special case:
Corollary 7.1 With probability at least .99, the number of heads among $2 m$ independent coin tosses is between $m-10 \sqrt{m}$ and $m+10 \sqrt{m}$.
Of course, we need that $m>10 \sqrt{m}$, or $m>100$, for this to have any meaning. But after all, we are talking about Large Numbers! If $m$ is very large, then $10 \sqrt{m}$ is much smaller than $m$, so we get that the number of heads is very close to $m$. For example, if $m=1,000,000$ then $10 \sqrt{m}=m / 100$, and so it follows that with probability at least .99 , the number of heads is within 1 percent of $m=n / 2$.
We would like to show that the Law of Large Numbers is not a mysterious force, but a simple consequence of the properties of binomial coefficients. Let $A_{k}$ denote the event that we toss exactly $k$ heads. It is clear that the events $A_{k}$ are mutually exclusive. It is also clear that for every outcome of the experiment, exactly one of the $A_{k}$ occurs.
The number of outcomes for which $A_{k}$ occurs is the number of sequences of length $n$ consisting of $k$ 'heads' and $n-k$ 'tails'. If we specify which of the $n$ positions are 'heads', we are done. This can be done in $\left(\begin{array}{l}n \\ k\end{array}\right)$ ways, so the set $A_{k}$ has $\left(\begin{array}{l}n \\ k\end{array}\right)$ elements. Since the total number of outcomes is $2^{n}$, we get the following.
$$
\mathrm{P}\left(A_{k}\right)=\frac{\left(\begin{array}{l}
n \\
k
\end{array}\right)}{2^{n}} .
$$
What is the probability that the number of heads is far from its expected value, $m=n / 2$? Say, that it is less than $m-t$ or larger than $m+t$, where $t$ is any positive integer not larger than $m$? Using exercise 7.4, we see that the probability that this happens is
$$
\frac{1}{2^{2 m}}\left(\left(\begin{array}{c}
2 m \\
0
\end{array}\right)+\left(\begin{array}{c}
2 m \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-t-1
\end{array}\right)+\left(\begin{array}{c}
2 m \\
m+t+1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
2 m-1
\end{array}\right)+\left(\begin{array}{c}
2 m \\
2 m
\end{array}\right)\right) .
$$
Now we can use Theorem 5.2 , with $k=t^{2} / m$, and get that
$$
\left(\begin{array}{c}
2 m \\
0
\end{array}\right)+\left(\begin{array}{c}
2 m \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
m-t-1
\end{array}\right)<\frac{m}{2 t^{2}} 2^{2 m}
$$
By the symmetry of the Pascal Triangle, we also have
$$
\left(\begin{array}{c}
2 m \\
m+t+1
\end{array}\right)+\ldots+\left(\begin{array}{c}
2 m \\
2 m-1
\end{array}\right)+\left(\begin{array}{c}
2 m \\
2 m
\end{array}\right)<\frac{m}{2 t^{2}} 2^{2 m} .
$$
Hence we get that the probability that we toss either less than $m-t$ or more than $m+t$ heads is less than $m / t^{2}$. This proves the theorem.
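Since the proof only uses binomial coefficients, the bound is easy to compare with the exact tail probability. Here is a small Python sketch (our own; `math.comb` computes binomial coefficients). You should find that the exact probability is far below the bound $m / t^{2}$; the theorem is deliberately generous.

```python
from math import comb

def tail_prob(m, t):
    """Exact probability that 2m fair coin tosses yield fewer than m - t
    or more than m + t heads."""
    n = 2 * m
    bad = sum(comb(n, k) for k in range(0, m - t))           # k < m - t
    bad += sum(comb(n, k) for k in range(m + t + 1, n + 1))  # k > m + t
    return bad / 2 ** n

m, t = 100, 20
print(tail_prob(m, t))   # the exact tail probability
print(m / t ** 2)        # the bound from Theorem 7.1
```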
## Integers, divisors, and primes
In this chapter we discuss properties of integers. This area of mathematics is called number theory, and it is a truly venerable field: its roots go back about 2500 years, to the very beginning of Greek mathematics. One might think that after 2500 years of research, one would know essentially everything about the subject. But we shall see that this is not the case: there are very simple, natural questions which we cannot answer; and there are other simple, natural questions to which an answer has been found only in the last few years!
### Divisibility of integers
We start with some very basic notions concerning integers. Let $a$ and $b$ be two integers. We say that $a$ divides $b$, or $a$ is a divisor of $b$, or $b$ is a multiple of $a$ (these phrases mean the same thing), if there exists an integer $m$ such that $b=a m$. In notation: $a \mid b$. If $a$ is not a divisor of $b$, then we write $a \nmid b$. If $a \neq 0$, then this means that the ratio $b / a$ is an integer.
If $a \nmid b$, and $a>0$, then we can still divide $b$ by $a$ with remainder. The remainder $r$ of the division $b: a$ is an integer that satisfies $0 \leq r<a$. If the quotient of the division with remainder is $q$, then we have
$$
b=a q+r .
$$
This is a very useful way of thinking about a division with remainder.
You have probably seen these notions before; the following exercises should help you check if you remember enough.
8.1 Check (using the definition) that $1|a,-1| a, a \mid a$ and $-a \mid a$ for every integer $a$.
8.2 What does it mean for $a$, in more everyday terms, if (a) $2 \mid a$; (b) $2 \nmid a$; (c) $0 \mid a$.
8.3 Prove that
(a) if $a \mid b$ and $b \mid c$ then $a \mid c$;
(b) if $a \mid b$ and $a \mid c$ then $a \mid b+c$ and $a \mid b-c$;
(c) if $a, b>0$ and $a \mid b$ then $a \leq b$;
(d) if $a \mid b$ and $b \mid a$ then either $a=b$ or $a=-b$.
8.4 Let $r$ be the remainder of the division $b: a$. Assume that $c \mid a$ and $c \mid b$. Prove that $c \mid r$.
8.5 Assume that $a \mid b$, and $a, b>0$. Let $r$ be the remainder of the division $c: a$, and let $s$ be the remainder of the division $c: b$. What is the remainder of the division $s: a$ ?
8.6 (a) Prove that for every integer $a, a-1 \mid a^{2}-1$.
(b) More generally, for every integer $a$ and positive integer $n$,
$$
a-1 \mid a^{n}-1 .
$$
### Primes and their history
An integer $p>1$ is called a prime if it is not divisible by any integer other than $1,-1, p$ and $-p$. Another way of saying this is that an integer $p>1$ is a prime if it cannot be written as the product of two smaller positive integers. An integer $n>1$ that is not a prime is called composite (the number 1 is considered neither prime, nor composite). Thus $2,3,5,7,11$ are primes, but $4=2 \cdot 2,6=2 \cdot 3,8=2 \cdot 4,9=3 \cdot 3,10=2 \cdot 5$ are not primes. Table 1 contains the first 500 primes.
$2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103$, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, $457,461,463,467,479,487,491,499,503,509,521,523,541,547,557,563,569,571,577,587$, $593,599,601,607,613,617,619,631,641,643,647,653,659,661,673,677,683,691,701,709$, $719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853$, $857,859,863,877,881,883,887,907,911,919,929,937,941,947,953,967,971,977,983,991$, 997, 1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097, $1103,1109,1117,1123,1129,1151,1153,1163,1171,1181,1187,1193,1201,1213,1217,1223$, 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, $1579,1583,1597,1601,1607,1609,1613,1619,1621,1627,1637,1657,1663,1667,1669,1693$, 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, $1823,1831,1847,1861,1867,1871,1873,1877,1879,1889,1901,1907,1913,1931,1933,1949$, 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, $2437,2441,2447,2459,2467,2473,2477,2503,2521,2531,2539,2543,2549,2551,2557,2579$, 2591, 2593, 2609, 2617, 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, 2909, 2917, 2927, 2939, 2953, 2957, 2963, 2969, 2971, 2999, 3001, 3011, 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079, 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167, 3169, 3181, 3187, 3191, 3203, 3209, 3217, 3221, 3229, 3251, 3253, 3257, 3259, 3271, 3299, 3301, 3307, 3313, 3319, 3323, 3329, 3331, 3343, 3347, $3359,3361,3371,3373,3389,3391,3407,3413,3433,3449,3457,3461,3463,3467,3469,3491$, $3499,3511,3517,3527,3529,3533,3539,3541,3547,3557,3559,3571$
Table 1: The first 500 primes
Primes have fascinated people ever since ancient times. Their sequence seems very irregular, yet on closer inspection it seems to carry a lot of hidden structure. The ancient Greeks already knew that there are infinitely many such numbers. (Not only did they know this; they proved it!)
It was not easy to prove any further facts about primes. Their sequence has holes and dense spots (see also Figure 11). How large are these holes? For example, is there a prime number with any given number of digits? The answer to this question will be important for us when we discuss cryptography. The answer is affirmative, but this fact was not proved until the mid-19th century, and many similar questions are open even today.

Figure 11: A bar chart of primes up to 1000
A new wave of developments in the theory of prime numbers came with the spread of computers. How do you decide about a positive integer $n$ whether it is a prime? Surely this is a finite problem (you can try out all smaller positive integers to see if any of them is a proper divisor), but such simple methods become impractical as soon as the number of digits is more than 20 or so.
It is only in the last 20 years that much more efficient algorithms (computer programs) have been developed to test whether a given integer is a prime. We will get a glimpse of these methods later. Using these methods, one can now rather easily determine about a number with 1000 digits whether it is a prime or not.
If an integer larger than 1 is not a prime itself, then it can be written as a product of primes: we can write it as a product of two smaller positive integers; if one of these is not a prime, we write it as the product of two smaller integers etc; sooner or later we must end up with only primes. The ancient Greeks also knew (and proved!) a subtler fact about this representation: that it is unique. What this means is that there is no other way of writing $n$ as a product of primes (except, of course, we can multiply the same primes in a different order). To prove this takes some sophistication, and to recognize the necessity of such a result was quite an accomplishment; but this is all more than 2000 years old!
It is really surprising that, even today, no efficient way is known to find such a decomposition. Of course, powerful supercomputers and massively parallel systems can be used to find decompositions by brute force for fairly large numbers; the current record is around 140 digits, and the difficulty grows very fast (exponentially) with the number of digits. To find the prime decomposition of a number with 400 digits, by any of the known methods, is way beyond the possibilities of computers in the foreseeable future.
### Factorization into primes
We have seen that every integer larger than 1 that is not a prime itself can be written as a product of primes. We can even say that every positive integer can be written as a product of primes: primes can be considered as "products with one factor", and the integer 1 can be thought of as the "empty product". With this in mind, we can state and prove the following theorem, announced above, sometimes called the "Fundamental Theorem of Number Theory".
Theorem 8.1 Every positive integer can be written as the product of primes, and this factorization is unique up to the order of the prime factors.
We prove this theorem by a version of induction, which is sometimes called the "minimal criminal" argument. The proof is indirect: we suppose that the assertion is false, and using this assumption, we derive a logical contradiction.
So assume that there exists an integer with two different factorizations; call such an integer a "criminal". There may be many criminals, but we consider the smallest one. Being a criminal, this has at least two different factorizations:
$$
n=p_{1} \cdot p_{2} \cdot \ldots \cdot p_{m}=q_{1} \cdot q_{2} \cdot \ldots \cdot q_{k} .
$$
We may assume that $p_{1}$ is the smallest prime occurring in these factorizations. (Indeed, if necessary, we can interchange the left hand side and the right hand side so that the smallest prime in any of the two factorizations occurs on the left hand side; and then change the order of the factors on the left hand side so that the smallest factor comes first. In the usual slang of mathematics, we say that we may assume that $p_{1}$ is the smallest prime without loss of generality.)
The number $p_{1}$ cannot occur among the factors $q_{i}$, otherwise we can divide both sides by $p_{1}$ and get a smaller criminal.
Divide each $q_{i}$ by $p_{1}$ with residue: $q_{i}=p_{1} a_{i}+r_{i}$, where $0 \leq r_{i}<p_{1}$. We know that $r_{i} \neq 0$, since a prime cannot be a divisor of another prime.
Let $n^{\prime}=r_{1} \cdot \ldots \cdot r_{k}$. We show that $n^{\prime}$ is a smaller criminal. Trivially $n^{\prime}=r_{1} r_{2} \ldots r_{k}<$ $q_{1} q_{2} \ldots q_{k}=n$. We show that $n^{\prime}$, too, has two different factorizations into primes. One of these can be obtained from the definition $n^{\prime}=r_{1} r_{2} \ldots r_{k}$. Here the factors may not be primes, but we can break them down into products of primes, so that we end up with a prime decomposition of $n^{\prime}$.
To get another decomposition, we observe that $p_{1} \mid n^{\prime}$. Indeed, we can write the definition of $n^{\prime}$ in the form
$$
n^{\prime}=\left(q_{1}-a_{1} p_{1}\right)\left(q_{2}-a_{2} p_{1}\right) \ldots\left(q_{k}-a_{k} p_{1}\right),
$$
and if we expand, then every term will be divisible by $p_{1}$. Now we divide $n^{\prime}$ by $p_{1}$ and then continue to factor $n^{\prime} / p_{1}$, to get a factorization of $n^{\prime}$.
But are these two factorizations different? Yes! The prime $p_{1}$ occurs in the second, but it cannot occur in the first, where every prime factor is smaller than $p_{1}$. Thus we have found a smaller criminal. Since $n$ was supposed to be the smallest among all criminals, this is a contradiction. The only way to resolve this contradiction is to conclude that there are no criminals; our "indirect assumption" was false, and no integer can have two different prime factorizations.
As an application of Theorem 8.1, we prove a fact that was known to the Pythagoreans (students of Pythagoras) in the 6th century B.C.
Theorem 8.2 The number $\sqrt{2}$ is irrational.
(A real number is irrational if it cannot be written as the ratio of two integers. For the Pythagoreans, the question arose from geometry: they wanted to know whether the diagonal of a square is "commensurable" with its side, i.e., is there any segment which would be contained in both an integer number of times. The above theorem answered this question in the negative, causing a substantial turmoil in their ranks.)
We give an indirect proof again: we suppose that $\sqrt{2}$ is rational, and derive a contradiction. What the indirect assumption means is that $\sqrt{2}$ can be written as the quotient of two positive integers: $\sqrt{2}=\frac{a}{b}$. Squaring both sides and rearranging, we get $2 b^{2}=a^{2}$.
Now consider the prime factorization of both sides, and in particular, the prime number 2 on both sides. Suppose that 2 occurs $m$ times in the prime factorization of $a$ and $n$ times in the prime factorization of $b$. Then it occurs $2 m$ times in the prime factorization of $a^{2}$. On the other hand, it occurs $2 n$ times in the prime factorization of $b^{2}$, and thus $2 n+1$ times in the prime factorization of $2 b^{2}$. Since $2 b^{2}=a^{2}$ and the prime factorization is unique, we must have $2 n+1=2 m$. But this is impossible since $2 n+1$ is odd but $2 m$ is even. This contradiction proves that $\sqrt{2}$ must be irrational.
8.7 Are there any even primes?
8.8 (a) Prove that if $p$ is a prime, $a$ and $b$ are integers, and $p \mid a b$, then either $p \mid a$ or $p \mid b$ (or both).
(b) Suppose that $a$ and $b$ are integers and $a \mid b$. Also suppose that $p$ is a prime and $p \mid b$ but $p \nmid a$. Prove that $p$ is a divisor of the ratio $b / a$.
8.9 Prove that the prime factorization of a number $n$ contains at most $\log _{2} n$ factors.
8.10 Let $p$ be a prime and $1 \leq a \leq p-1$. Consider the numbers $a, 2 a, 3 a, \ldots,(p-1) a$. Divide each of them by $p$, to get residues $r_{1}, r_{2}, \ldots, r_{p-1}$. Prove that every integer from 1 to $p-1$ occurs exactly once among these residues.
[Hint: First prove that no residue can occur twice.]
8.11 Prove that if $p$ is a prime, then $\sqrt{p}$ is irrational. More generally, prove that if $n$ is an integer that is not a square, then $\sqrt{n}$ is irrational.
8.12 Try to formulate and prove an even more general theorem about the irrationality of the numbers $\sqrt[k]{n}$.
### On the set of primes
The following theorem was also known to Euclid.

Theorem 8.3 There are infinitely many primes.
What we need to do is to show that for every positive integer $n$, there is a prime number larger than $n$. To this end, consider the number $n !+1$, and any prime divisor $p$ of it. We show that $p>n$. Again, we use an indirect proof, supposing that $p \leq n$ and deriving a contradiction. If $p \leq n$ then $p \mid n$ !, since it is one of the integers whose product is $n$ !. We also know that $p \mid n !+1$, and so $p$ is a divisor of the difference $(n !+1)-n !=1$. But this is impossible, and thus $p$ must be larger than $n$.
If we look at various charts or tables of primes, our main impression is that there is a lot of irregularity in them. For example, figure 11 represents each prime up to 200 by a bar. We see large "gaps" and then we also see primes that are very close. We can prove that these gaps get larger and larger as we consider larger and larger numbers; somewhere out there is a string of 100 consecutive composite numbers, somewhere (much farther away) there is a string of 1000, etc. To state this in a mathematical form:
Theorem 8.4 For every positive integer $k$, there exist $k$ consecutive composite integers.
We can prove this theorem by an argument quite similar to the proof of theorem 8.3. Let $n=k+1$ and consider the numbers
$$
n !+2, n !+3, \ldots, n !+n .
$$
Can any of these be a prime? The answer is no: the first number is even, since $n$ ! and 2 are both even. The second number is divisible by 3 , since $n$ ! and 3 are both divisible by 3 (assuming that $n>2$ ). In general $n !+i$ is divisible by $i$, for every $i=2,3, \ldots, n$. Hence these numbers cannot be primes, and so we have found $n-1=k$ consecutive composite numbers.
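The construction is easy to try out by computer. A minimal Python sketch (ours):

```python
from math import factorial

def composite_run(k):
    """Return the k consecutive composite numbers n!+2, ..., n!+n, n = k+1."""
    n = k + 1
    return [factorial(n) + i for i in range(2, n + 1)]

# each n! + i is divisible by i (2 <= i <= n), hence composite
for i, x in enumerate(composite_run(6), start=2):
    assert x % i == 0 and x > i
```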
What about the opposite question, finding primes very close to each other? Since all primes except 2 are odd, the difference of two primes must be at least two, except for 2 and 3. Two primes whose difference is 2 are called twin primes. Thus $(3,5),(5,7)$, $(11,13),(17,19)$ are twin primes. Looking at the table of the first 500 primes, we find many twin primes; extensive computation shows that there are twin primes with hundreds of digits. However, it is not known whether there are infinitely many twin primes! (Almost certainly there are, but no proof of this fact has been found, in spite of the efforts of many mathematicians for over 2000 years!)
Another way of turning Theorem 8.4 around: how large can these gaps be? Could it happen that there is no prime at all with, say, 100 digits? This is again a very difficult question, but here we do know the answer. (No, this does not happen.)
One of the most important questions about primes is: how many primes are there up to a given number $n$? We denote the number of primes up to $n$ by $\pi(n)$. Figure 12 illustrates the graph of this function in the range of 1 to 100, and Figure 13, in the range of 1 to 2000. We can see that the function grows reasonably smoothly, and that its slope decreases slowly. An exact formula for $\pi(n)$ is certainly impossible to obtain. Around the turn of the century, a powerful result called the Prime Number Theorem was proved by the French mathematician Hadamard and the Belgian mathematician de la Vallée-Poussin. From this result it follows that we can find prime numbers with a prescribed number of digits, and up to a third of the digits also prescribed. For example, there exist prime numbers with 30 digits starting with 1234567890.

Figure 12: The graph of $\pi(n)$ from 1 to 100

Figure 13: The graph of $\pi(n)$ from 1 to 2000
Theorem 8.5 (The Prime Number Theorem) Let $\pi(n)$ denote the number of primes among $1,2, \ldots, n$. Then
$$
\pi(n) \sim \frac{n}{\ln n}
$$
(Here $\ln n$ means the "natural logarithm", i.e., the logarithm to the base $e=2.718281 \ldots$ Also recall that the notation means that
$$
\frac{\pi(n)}{n / \ln n}
$$
will be arbitrarily close to 1 if $n$ is sufficiently large.)
The proof of the prime number theorem is very difficult; the fact that the number of primes up to $n$ is about $n / \ln n$ was observed empirically in the $18^{\text {th }}$ century, but it took more than 100 years until Hadamard and de la Vallée-Poussin proved it in 1896.
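You can watch the theorem at work numerically. Here is a short Python sketch (our own) that counts primes with a sieve of Eratosthenes and compares $\pi(n)$ with $n / \ln n$; the printed ratio decreases toward 1, though rather slowly:

```python
from math import log

def pi(n):
    """Count the primes up to n with a sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return sum(is_prime)

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, pi(n), round(n / log(n)), pi(n) / (n / log(n)))
```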
As an illustration of the use of this theorem, let us find the answer to a question that we have posed in the introduction: how many primes with (say) 200 digits are there? We get the answer by subtracting the number of primes up to $10^{199}$ from the number of primes up to $10^{200}$. By the Prime Number Theorem, this number is about
$$
\frac{10^{200}}{200 \ln 10}-\frac{10^{199}}{199 \ln 10} \approx 1.95 \cdot 10^{197}
$$
This is a lot of primes! Comparing this with the total number of positive integers with 200 digits, which we know is $10^{200}-10^{199}=9 \cdot 10^{199}$, we get
$$
\frac{9 \cdot 10^{199}}{1.95 \cdot 10^{197}} \approx 460
$$
Thus among the integers with 200 digits, one in every 460 is a prime.
(Warning: This argument is not precise; the main source of concern is that in the prime number theorem, we only stated that $\pi(n)$ is close to $n / \ln n$ if $n$ is sufficiently large. One can say more about how large $n$ has to be to have, say, an error less than 1 percent, but this leads to even more difficult questions, which are even today not completely resolved.)
There are many other simple observations one can make by looking at tables of primes, but they tend to be very difficult and most of them are not resolved even today, in some cases after 2,500 years of attempts. We have already mentioned the problem whether there are infinitely many twin primes.
Another famous unsolved problem is Goldbach's conjecture. This states that every even integer larger than 2 can be written as the sum of two primes. (Goldbach also formulated a conjecture about odd numbers: every odd integer larger than 5 can be written as the sum of three primes. This conjecture was essentially proved, using very deep methods, by Vinogradov in the 1930's. We said "essentially" since the proof only works for numbers that are very large, and the possibility of a finite number of exceptions remains open.)

Suppose that we have an integer $n$ and want to know how soon after $n$ we can be sure to find a prime. For example, how small, or large, is the first prime with at least 100 digits? Our proof of the infinity of primes gives that for every $n$, there is a prime between $n$ and $n !+1$. This is a very weak statement; it says for example that there is a prime between 10 and $10 !+1=3628801$, while of course the next prime is 11 . Chebyshev proved in the last century that there is always a prime between $n$ and $2 n$. It is now proved that there is always a prime between two consecutive cubes (say, between $27=3^{3}$ and $64=4^{3}$ ). But it is another old and famous unsolved problem whether there is always a prime between two consecutive squares. (Try this out: you'll in fact find many primes. For example, between $100=10^{2}$ and $121=11^{2}$ we find 101, 103,
107, 109, 113. Between $100^{2}=10,000$ and $101^{2}=10,201$ we find 10007, 10009, 10037, 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099, 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163, 10169, 10177, 10181, 10193.)
8.13 Show that among $k$-digit numbers, one in about every $2.3 k$ is a prime.
### Fermat's "Little" Theorem
Primes are important because we can compose every integer from them; but it turns out that they also have many other, often surprising properties. One of these was discovered by the French mathematician Pierre de Fermat (1601-1665), now called the "Little" Theorem of Fermat.
Theorem 8.6 If $p$ is a prime and $a$ is an integer, then $p \mid a^{p}-a$.
To prove this theorem, we need a lemma, which states another divisibility property of primes (but is easier to prove):
Lemma 8.1 If $p$ is a prime and $1<k<p$, then $p \mid\left(\begin{array}{l}p \\ k\end{array}\right)$.
Proof. We know by theorem 4.2 that
$$
\left(\begin{array}{l}
p \\
k
\end{array}\right)=\frac{p(p-1) \cdot \ldots \cdot(p-k+1)}{k(k-1) \cdot \ldots \cdot 1} .
$$
Here $p$ divides the numerator, but not the denominator, since all factors in the denominator are smaller than $p$, and we know by exercise 8.8(a) that if a prime $p$ does not divide any of these factors, then it does not divide the product. Hence it follows (see exercise 8.8(b)) that $p$ is a divisor of $\left(\begin{array}{l}p \\ k\end{array}\right)$.
Now we can prove Fermat's Theorem by induction on $a$. The assertion is trivially true if $a=0$. Let $a>0$, and write $a=b+1$. Then
$$
a^{p}-a=(b+1)^{p}-(b+1)=b^{p}+\left(\begin{array}{c}
p \\
1
\end{array}\right) b^{p-1}+\ldots+\left(\begin{array}{c}
p \\
p-1
\end{array}\right) b+1-b-1
$$
$$
=\left(b^{p}-b\right)+\left(\begin{array}{c}
p \\
1
\end{array}\right) b^{p-1}+\ldots+\left(\begin{array}{c}
p \\
p-1
\end{array}\right) b
$$
Here the expression $b^{p}-b$ in parentheses is divisible by $p$ by the induction hypothesis, while the other terms are divisible by $p$ by lemma 8.1. It follows that $a^{p}-a$ is also divisible by $p$, which completes the induction.
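Both the lemma and the theorem are easy to check numerically; here is a short Python test for small primes (our own illustration; `math.comb` gives the binomial coefficients):

```python
from math import comb

for p in (2, 3, 5, 7, 11, 13):
    # Lemma 8.1: p divides the binomial coefficients (p choose k), 0 < k < p
    assert all(comb(p, k) % p == 0 for k in range(1, p))
    # Theorem 8.6: p divides a^p - a for every a
    assert all((a ** p - a) % p == 0 for a in range(p))

# both assertions fail for a composite modulus, e.g. 4
assert comb(4, 2) % 4 != 0 and (2 ** 4 - 2) % 4 != 0
```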
Let us make a remark about the history of mathematics here. Fermat is most famous for his "Last" theorem, which is the following assertion:
If $n>2$, then the sum of the $n$-th powers of two positive integers is never the $n$-th power of a positive integer.
(The assumption that $n>2$ is essential: there are many examples of two squares whose sum is a third square: for example, $3^{2}+4^{2}=5^{2}$, or $5^{2}+12^{2}=13^{2}$.)
Fermat claimed in a note that he proved this, but never wrote down the proof. This statement remained perhaps the most famous unsolved problem in mathematics until 1995, when Andrew Wiles (in one part with the help of Richard Taylor) finally proved it.
8.14 Show by examples that neither the assertion in lemma 8.1 nor Fermat's Little Theorem remain valid if we drop the assumption that $p$ is a prime.
8.15 Consider a regular $p$-gon, and all $k$-subsets of the set of its vertices, where $1 \leq k \leq p-1$. Put all these $k$-subsets into a number of boxes: we put two $k$-subsets into the same box if they can be rotated into each other. For example, all $k$-subsets consisting of $k$ consecutive vertices will belong to one and the same box.
(a) Prove that if $p$ is a prime, then each box will contain exactly $p$ of these sets.
(b) Show by an example that (a) does not remain true if we drop the assumption that $p$ is a prime.
(c) Use (a) to give a new proof of Lemma 8.1.
8.16 Imagine numbers written in base $a$, with at most $p$ digits. Put two numbers in the same box if they arise by cyclic shift from each other. How many will be in each class? Give a new proof of Fermat's theorem this way.
8.17 Give a third proof of Fermat's "Little Theorem" based on exercise 8.10.
[Hint: consider the product $a(2 a)(3 a) \ldots((p-1) a)$.]
### The Euclidean Algorithm
So far, we have discussed several notions and results concerning integers. Now we turn our attention to the question of how to do computations in connection with these results. How to decide whether or not a given number is a prime? How to find the prime factorization of a number? We can do basic arithmetic: addition, subtraction, multiplication, division with remainder efficiently, and will not discuss this here.
The key to a more advanced algorithmic number theory is an algorithm that computes the greatest common divisor of two positive integers $a$ and $b$. This is defined as the largest positive integer that is a divisor of both of them. (Since 1 is always a common divisor, and no common divisor is larger than either integer, this definition makes sense.) We say that two integers are relatively prime if their greatest common divisor is 1 . The greatest common divisor of $a$ and $b$ is denoted by $\operatorname{gcd}(a, b)$. Thus
$$
\begin{array}{lll}
\operatorname{gcd}(1,6)=1, & \operatorname{gcd}(2,6)=2, & \operatorname{gcd}(3,6)=3, \quad \operatorname{gcd}(4,6)=2, \\
\operatorname{gcd}(5,6)=1, & \operatorname{gcd}(6,6)=6 .
\end{array}
$$
It will be convenient to also define $\operatorname{gcd}(a, 0)=a$ for every $a \geq 0$.
A somewhat similar notion is the least common multiple of two integers, which is the least positive integer that is a multiple of both integers; it is denoted by $\operatorname{lcm}(a, b)$. For example,
$$
\begin{aligned}
& \operatorname{lcm}(1,6)=6, \quad \operatorname{lcm}(2,6)=6, \quad \operatorname{lcm}(3,6)=6, \quad \operatorname{lcm}(4,6)=12, \\
& \operatorname{lcm}(5,6)=30, \quad \operatorname{lcm}(6,6)=6
\end{aligned}
$$
The greatest common divisor of two positive integers can be found quite simply by using their prime factorizations: look at the common prime factors, raise them to the smaller of the two exponents, and take the product of these prime powers. For example, $300=2^{2} \cdot 3 \cdot 5^{2}$ and $18=2 \cdot 3^{2}$, and hence $\operatorname{gcd}(300,18)=2 \cdot 3=6$.
The trouble with this method is that it is very difficult to find the prime factorization of large integers. The algorithm to be discussed in this section will compute the greatest common divisor of two integers in a much faster way, without finding their prime factorization. This algorithm is an important ingredient of almost all other algorithms involving computation with integers. (And, as we see it from its name, it goes back to the great Greek mathematicians!)
8.18 Show that if $a$ and $b$ are positive integers with $a \mid b$, then $\operatorname{gcd}(a, b)=a$.
8.19 (a) $\operatorname{gcd}(a, b)=\operatorname{gcd}(a, b-a)$.
(b) Let $r$ be the remainder if we divide $b$ by $a$. Then $\operatorname{gcd}(a, b)=\operatorname{gcd}(a, r)$.
8.20 (a) If $a$ is even and $b$ is odd, then $\operatorname{gcd}(a, b)=\operatorname{gcd}(a / 2, b)$.
(b) If both $a$ and $b$ are even, then $\operatorname{gcd}(a, b)=2 \operatorname{gcd}(a / 2, b / 2)$.
8.21 How can you express the least common multiple of two integers, if you know the prime factorization of each?
8.22 Suppose that you are given two integers, and you know the prime factorization of one of them. Describe a way of computing the greatest common divisor of these numbers.

8.23 Prove that for any two positive integers $a$ and $b$,
$$
\operatorname{gcd}(a, b) \operatorname{lcm}(a, b)=a b .
$$
Now we turn to Euclid's Algorithm. Let $a$ and $b$ be two positive integers, and suppose that $a<b$. The algorithm is based on two simple facts, already familiar as exercises 8.18 and 8.19.
Suppose that we are given two positive integers $a$ and $b$, and we want to find their g.c.d. Here is what we do:
1. If $a>b$ then we interchange $a$ and $b$.
2. If $a>0$, divide $b$ by $a$, to get a remainder $r$. Replace $b$ by $r$ and return to 1 .
3. Else (if $a=0$ ), return $b$ as the g.c.d. and halt.
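In code, the three steps collapse to a very short loop. Here is a minimal Python sketch (ours; the swap of step 1 is built into the simultaneous assignment):

```python
def gcd(a, b):
    """Greatest common divisor by the Euclidean Algorithm."""
    while a > 0:
        # step 2: replace b by the remainder of the division b : a;
        # the pair is also reordered so the next division is by the smaller
        a, b = b % a, a
    return b          # step 3: a = 0, so b is the g.c.d.

assert gcd(300, 18) == 6
assert gcd(89, 55) == 1
```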
When you carry out the algorithm, especially by hand, there is no reason to interchange $a$ and $b$ if $a<b$ : we can simply divide the larger number by the smaller (with remainder), and replace the larger number by the remainder if the remainder is not 0 . Let us do some examples.
$$
\begin{aligned}
\operatorname{gcd}(300,18) & =\operatorname{gcd}(12,18)=\operatorname{gcd}(12,6)=6 \\
\operatorname{gcd}(101,100) & =\operatorname{gcd}(1,100)=1 \\
\operatorname{gcd}(89,55) & =\operatorname{gcd}(34,55)=\operatorname{gcd}(34,21)=\operatorname{gcd}(13,21)=\operatorname{gcd}(13,8)=\operatorname{gcd}(5,8) \\
& =\operatorname{gcd}(5,3)=\operatorname{gcd}(2,3)=\operatorname{gcd}(2,1)=1
\end{aligned}
$$
You can check in each case (using a prime factorization of the numbers) that the result is indeed the g.c.d.
If we describe an algorithm, the first thing to worry about is whether it terminates at all. So why is the euclidean algorithm finite? This is easy: the numbers never increase, and one of them decreases any time step 2 is executed, so it cannot last infinitely long.
Then of course we have to make sure that our algorithm yields what we need. This is clear: step 1 (interchanging the numbers) trivially does not change the g.c.d., and step 2 (replacing the larger number by the remainder of a division) does not change the g.c.d. by exercise 8.19(b). And when we halt at step 3, the number returned is indeed the g.c.d. of the two current numbers by exercise 8.18.
A third, and more subtle, question you should ask when designing an algorithm: how long does it take? How many steps will it make before it terminates? We can get a bound from the argument that proves finite termination: since one or the other number decreases any time the 1-2 loop is executed, it will certainly halt in less than $a+b$ iterations. This is really not a great time bound: if we apply the euclidean algorithm to two numbers with 100 digits, then it says that it will not last longer than $2 \cdot 10^{100}$ steps, which is astronomical and therefore useless. But this is only a most pessimistic bound; the examples seem to show that the algorithm terminates much faster than this.
But the examples also suggest that the question is quite delicate. We see that the euclidean algorithm may be quite different in length, depending on the numbers in question. Some of the possible observations made from these examples are contained in the following exercises.
8.24 Show that the euclidean algorithm can terminate in two steps for arbitrarily large positive integers, even if their g.c.d. is 1.

8.25 Describe the euclidean algorithm applied to two consecutive Fibonacci numbers. Use your description to show that the euclidean algorithm can last arbitrarily many steps.
So what can we say about how long does the euclidean algorithm last? The key to the answer is the following lemma:
Lemma 8.2 During the execution of the euclidean algorithm, the product of the two current numbers drops by a factor of two in each iteration.
To see that this is so, consider the step when the pair $(a, b)(a<b)$ is replaced by the pair $(r, a)$, where $r$ is the remainder of $b$ when divided by $a$. Then we have $r<a$ and hence $b \geq a+r>2 r$. Thus $a r<\frac{1}{2} a b$ as claimed.
Suppose that we apply the euclidean algorithm to two numbers $a$ and $b$ and we make $k$ steps. It follows by Lemma 8.2 that after the $k$ steps, the product of the two current numbers will be at most $a b / 2^{k}$. Since this is at least 1 , we get that
$$
a b \geq 2^{k},
$$
and hence
$$
k \leq \log _{2}(a b)=\log _{2} a+\log _{2} b .
$$
Thus we have proved the following.
Theorem 8.7 The number of steps of the euclidean algorithm, applied to two positive integers $a$ and $b$, is at most $\log _{2} a+\log _{2} b$.
We have replaced the sum of the numbers by the sum of the logarithms of the numbers in the bound on the number of steps, which is a really substantial improvement. For example, the number of iterations in computing the g.c.d. of two 300-digit integers is less than $2 \log _{2} 10^{300}=600 \log _{2} 10<2000$. Quite a bit less than our first estimate of $10^{100}$. Note that $\log _{2} a$ is less than the number of bits of $a$ (when written in base 2), so we can say that the euclidean algorithm does not take more iterations than the number of bits needed to write down the numbers in base 2 .
The theorem above gives only an upper bound on the number of steps the euclidean algorithm takes; we can be much more lucky: for example, when we apply the euclidean algorithm to two consecutive integers, it takes only one step. But sometimes, one cannot do much better. If you did exercise 8.25, you saw that when applied to two consecutive Fibonacci numbers $F_{k}$ and $F_{k+1}$, the euclidean algorithm takes $k-1$ steps. On the other hand, the lemma above gives the bound
$$
\begin{gathered}
\log _{2} F_{k}+\log _{2} F_{k+1} \approx \log _{2}\left(\frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^{k}\right)+\log _{2}\left(\frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^{k+1}\right) \\
=-\log _{2} 5+(2 k+1) \log _{2}\left(\frac{1+\sqrt{5}}{2}\right) \approx 1.388 k-1.628
\end{gathered}
$$
so we have overestimated the number of steps only by a factor of about 1.388.
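You can observe this worst-case behavior directly. The sketch below (ours) counts the division steps; with this way of counting, the pair $F_{k}, F_{k+1}$ takes exactly $k-1$ steps:

```python
def gcd_steps(a, b):
    """Number of division steps the Euclidean Algorithm makes on (a, b)."""
    steps = 0
    while a > 0:
        a, b = b % a, a
        steps += 1
    return steps

fibs = [0, 1]
while len(fibs) < 20:
    fibs.append(fibs[-1] + fibs[-2])

for k in range(3, 19):
    print(k, gcd_steps(fibs[k], fibs[k + 1]))   # prints k - 1 in each row
```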
Fibonacci numbers are not only good for giving examples of large numbers on which we can see how the euclidean algorithm works; they are also useful in obtaining an even better bound on the number of steps. We state the result as an exercise. Its content is that, in a sense, the euclidean algorithm is longest on two consecutive Fibonacci numbers.
8.26 Suppose that $a<b$ and the euclidean algorithm applied to $a$ and $b$ takes $k$ steps. Prove that $a \geq F_{k}$ and $b \geq F_{k+1}$.
8.27 Consider the following version of the euclidean algorithm to compute $\operatorname{gcd}(a, b)$ : (1) swap the numbers if necessary to have $a \leq b$; (2) if $a=0$, then return $b$; (3) if $a \neq 0$, then replace $b$ by $b-a$ and go to (1).
(a) Carry out this algorithm to compute $\operatorname{gcd}(19,2)$.
(b) Show that the modified euclidean algorithm always terminates with the right answer.
(c) How long does this algorithm take, in the worst case, when applied to two 100-digit integers?
8.28 Consider the following version of the euclidean algorithm to compute $\operatorname{gcd}(a, b)$. Start with computing the largest power of 2 dividing both $a$ and $b$. If this is $2^{r}$, then divide $a$ and $b$ by $2^{r}$. After this "preprocessing", do the following:
(1) Swap the numbers if necessary to have $a \leq b$.
(2) If $a \neq 0$, then check the parities of $a$ and $b$; if $a$ is even and $b$ is odd, then replace $a$ by $a / 2$; if $a$ is odd and $b$ is even, then replace $b$ by $b / 2$; if both $a$ and $b$ are odd, then replace $b$ by $b-a$; in each case, go to (1).
(3) if $a=0$, then return $2^{r} b$ as the g.c.d.
Now come the exercises:
(a) Carry out this algorithm to compute $\operatorname{gcd}(19,2)$.
(b) It seems that in step (2), we ignored the case when both $a$ and $b$ are even. Show that this never occurs.
(c) Show that the modified euclidean algorithm always terminates with the right answer.
(d) Show that this algorithm, when applied to two 100-digit integers, does not take more than 1500 iterations.
The Euclidean Algorithm gives much more than just the g.c.d. of the two numbers. The main observation is that if we carry out the Euclidean Algorithm to compute the greatest common divisor of two positive integers $a$ and $b$, all the numbers we produce along the computation can be written as the sum of an integer multiple of $a$ and an integer multiple of $b$. Indeed, suppose that this holds for two consecutive numbers we computed, so that one is $a^{\prime}=a m+b n$, and the other is $b^{\prime}=a k+b l$, where $m, n, k, l$ are integers (not necessarily positive). Then in the next step we compute (say) the remainder of $b^{\prime}$ modulo $a^{\prime}$, which is
$$
b^{\prime}-q a^{\prime}=(a k+b l)-q(a m+b n)=a(k-q m)+b(l-q n),
$$
which is of the right form again.
In particular, we get the following:

Theorem 8.8 Let $d$ be the greatest common divisor of the integers $a$ and $b$. Then $d$ can be written in the form
$$
d=a m+b n
$$
where $m$ and $n$ are integers.
By maintaining the representation of integers in the form $a m+b n$ during the computation, we see that the expression for $d$ in the theorem not only exists, but is easily computable.
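This bookkeeping is exactly the "extended" Euclidean Algorithm. A compact recursive Python sketch (ours):

```python
def extended_gcd(a, b):
    """Return (d, m, n) with d = gcd(a, b) and d = a*m + b*n."""
    if a == 0:
        return b, 0, 1
    d, m, n = extended_gcd(b % a, a)
    # d = (b % a)*m + a*n, and b % a = b - (b // a)*a; substituting back
    # expresses d as a combination of a and b:
    return d, n - (b // a) * m, m

d, m, n = extended_gcd(300, 18)
assert d == 6 and 300 * m + 18 * n == 6
```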
### Testing for primality
Returning to a question proposed in the introduction of this chapter: how do you decide about a positive integer $n$ whether it is a prime? You can try to see if $n$ can be divided, without remainder, by any of the smaller integers (other than 1). This is, however, a very clumsy and inefficient procedure! To determine this way whether a number with 20 digits is a prime would take astronomical time.
You can do a little better by noticing that it is enough to try to divide $n$ by the positive integers not larger than $\sqrt{n}$ (since if $n$ can be written as the product of two integers larger than 1 , then one of these integers is necessarily at most $\sqrt{n}$ ). But this method is still hopelessly slow if the number of digits goes above 20 .
A much more promising approach is offered by Fermat's "Little" Theorem. Its simplest nontrivial case says that if $p$ is a prime, then $p \mid 2^{p}-2$. If we assume that $p$ is odd (which only excludes the case $p=2$ ), then we also know that $p \mid 2^{p-1}-1$.
What happens if we check the divisibility relation $n \mid 2^{n-1}-1$ for composite numbers? It obviously fails if $n$ is even (no even number is a divisor of an odd number), so let's restrict our attention to odd numbers. Here are some results:
$$
\begin{gathered}
9 \nmid 2^{8}-1=255, \quad 15 \nmid 2^{14}-1=16,383, \quad 21 \nmid 2^{20}-1=1,048,575, \\
25 \nmid 2^{24}-1=16,777,215 .
\end{gathered}
$$
This suggests that perhaps we could test whether or not the number $n$ is a prime by checking whether or not the relation $n \mid 2^{n-1}-1$ holds. This is a nice idea, but it has several major shortcomings.
First, it is easy to write up the formula $2^{n-1}-1$, but it is quite a different matter to compute it! It seems that to get $2^{n-1}$, we have to multiply $n-2$ times by 2 . For a 100 digit number, this is about $10^{100}$ steps, which we will never be able to carry out.
But, we can be tricky when we compute $2^{n-1}$. Let us illustrate this on the example of $2^{24}$ : we could start with $2^{3}=8$, square it to get $2^{6}=64$, square it again to get $2^{12}=4096$, and square it once more to get $2^{24}=16,777,216$. Instead of 23 multiplications, we only needed 5 .
It seems that this trick only worked because 24 was divisible by such a large power of 2 , and we could compute $2^{24}$ by repeated squaring, starting from a small number. So let us show how to do a similar trick if the exponent is a less friendly integer. Let us compute, say, $2^{29}$ :
$$
2^{2}=4,2^{3}=8,2^{6}=64,2^{7}=128,2^{14}=16,384,2^{28}=268,435,456,2^{29}=536,870,912 .
$$
It is perhaps best to read this sequence backwards: if we have to compute an odd power of 2 , we obtain it by multiplying the previous power by 2 ; if we have to compute an even power, we obtain it by squaring the appropriate smaller power.
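Reading the sequence backwards translates directly into a recursion; here is a minimal Python sketch (ours):

```python
def power2(n):
    """Compute 2**n by repeated squaring, as in the text."""
    if n == 0:
        return 1
    if n % 2 == 0:
        half = power2(n // 2)
        return half * half        # even exponent: square the half power
    return 2 * power2(n - 1)      # odd exponent: double the previous power

assert power2(29) == 536870912
assert power2(24) == 16777216
```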
8.29 Show that if $n$ has $k$ bits in base 2 , then $2^{n}$ can be computed using less than $2 k$ multiplications.
Thus we have shown how to overcome the first difficulty; but the computations we did above reveal the second: the numbers grow too large! Let's say that $n$ has 100 digits; then $2^{n-1}$ is not only astronomical: the number of its digits is astronomical. We could never write it down, let alone check whether it is divisible by $n$.
The way out is to divide by $n$ as soon as we get any number that is larger than $n$, and just work with the remainder of the division. For example, if we want to check whether $25 \mid 2^{24}-1$, then we have to compute $2^{24}$. As above, we start with computing $2^{3}=8$, then square it to get $2^{6}=64$. We immediately replace it by the remainder of the division $64: 25$, which is 14 . Then we compute $2^{12}$ by squaring $2^{6}$, but instead we square 14 to get 196 , which we replace by the remainder of the division $196: 25$, which is 21 . Finally, we obtain $2^{24}$ by squaring $2^{12}$, but instead we square 21 to get 441 , and then divide this by 25 to get the remainder 16 . Since $16-1=15$ is not divisible by 25 , it follows that 25 is not a prime.
This does not sound like an impressive conclusion, considering the triviality of the result, but this was only an illustration. If $n$ has $k$ bits in base 2 , then as we have seen, it takes only $2 k$ multiplications to compute $2^{n}$, and all we have to do is one division (with remainder) in each step to keep the numbers small. We never have to deal with numbers larger than $n^{2}$. If $n$ has 100 digits, then $n^{2}$ has 199 or 200 digits - not much fun to multiply by hand, but quite easily manageable by computers.
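Combining the repeated squaring with reduction modulo $n$ after every multiplication gives the following Python sketch (ours; Python's built-in three-argument `pow(a, e, n)` does the same thing):

```python
def power_mod(base, exponent, n):
    """base**exponent mod n, reducing mod n after every multiplication,
    so no intermediate result ever exceeds n**2."""
    result = 1
    base %= n
    while exponent > 0:
        if exponent % 2 == 1:
            result = (result * base) % n
        base = (base * base) % n
        exponent //= 2
    return result

# the worked example: 2^24 mod 25 = 16, so 25 does not divide 2^24 - 1,
# and hence 25 is not a prime
assert power_mod(2, 24, 25) == 16
```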
But here comes the third shortcoming of the primality test based on Fermat's Theorem. Suppose that we carry out the test for a number $n$. If it fails (that is, $n$ is not a divisor of $2^{n-1}-1$ ), then of course we know that $n$ is not a prime. But suppose we find that $n \mid 2^{n-1}-1$. Can we conclude that $n$ is a prime? Fermat's Theorem certainly does not justify this conclusion. Are there composite numbers $n$ for which $n \mid 2^{n-1}-1$ ? Unfortunately, the answer is yes. The smallest such number is $341=11 \cdot 31$.
Here is an argument showing that 341 is a pseudoprime, without using a computer. We start by noticing that 11 divides $2^{5}+1=33$, and that 31 divides $2^{5}-1=31$. Therefore $341=11 \cdot 31$ divides $\left(2^{5}+1\right)\left(2^{5}-1\right)=2^{10}-1$. To finish, we invoke the result of exercise 8.6(b): it implies that $2^{10}-1$ is a divisor of $\left(2^{10}\right)^{34}-1=2^{340}-1$.
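For those who prefer the computer after all, these claims take one line each to verify with Python's built-in modular exponentiation:

```python
assert 341 == 11 * 31
assert pow(2, 340, 341) == 1    # 341 passes the base-2 Fermat test...
assert pow(3, 340, 341) != 1    # ...but base 3 (see below) unmasks it
```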
Such numbers are sometimes called pseudoprimes (fake primes). While such numbers are quite rare (there are only 22 of them between 1 and 10,000), they do show that our primality test can give a "false positive", and thus (in a strict mathematical sense) it is not a primality test at all.
Nevertheless, efficient primality testing is based on Fermat's Theorem. Below we sketch how it is done; the mathematical details are not as difficult as in the case of, say, the Prime Number Theorem, but they do go beyond the scope of this book.
One idea that comes to the rescue is that we haven't used the full force of Fermat's Theorem: we can also check that $n \mid 3^{n}-3$, $n \mid 5^{n}-5$, etc. These can be carried out using the same tricks as described above. And in fact already the first of these rules out the "fake prime" 341. Unfortunately, some pseudoprimes are worse than others; for example, the number $561=3 \cdot 11 \cdot 17$ has the property that
$$
561 \mid a^{561}-a
$$
for every integer $a$. These pseudoprimes are called Carmichael numbers, and it was believed that their existence kills every primality test based on Fermat's theorem.
But in the late 70's, M. Rabin and G. Miller found a very simple way to strengthen Fermat's Theorem just a little bit, and thereby overcome the difficulty caused by Carmichael numbers. We illustrate the method on the example of 561 . We use some high school math, namely the identity $x^{2}-1=(x-1)(x+1)$, to factor the number $a^{561}-a$ :
$$
\begin{aligned}
a^{561}-a & =a\left(a^{560}-1\right)=a\left(a^{280}-1\right)\left(a^{280}+1\right) \\
& =a\left(a^{140}-1\right)\left(a^{140}+1\right)\left(a^{280}+1\right) \\
& =a\left(a^{70}-1\right)\left(a^{70}+1\right)\left(a^{140}+1\right)\left(a^{280}+1\right) \\
& =a\left(a^{35}-1\right)\left(a^{35}+1\right)\left(a^{70}+1\right)\left(a^{140}+1\right)\left(a^{280}+1\right)
\end{aligned}
$$
Now suppose that 561 were a prime. Then by Fermat's Theorem, it must divide $a^{561}-a$, whatever $a$ is (this in fact happens). Now if a prime divides a product, it divides one of the factors (exercise 8.8), and hence at least one of the relations
$$
561 \mid a, \quad 561 \mid a^{35}-1, \quad 561 \mid a^{35}+1, \quad 561 \mid a^{70}+1, \quad 561 \mid a^{140}+1, \quad 561 \mid a^{280}+1
$$
must hold. But already for $a=2$, none of these relations hold.
The Miller-Rabin test is an elaboration of this idea. Suppose we are given an odd integer $n>1$ that we want to test for primality. We choose an integer $a$ from the range $0 \leq a \leq n-1$ at random, and consider $a^{n}-a$. We factor it as $a\left(a^{n-1}-1\right)$, and then go on factoring it, using the identity $x^{2}-1=(x-1)(x+1)$, as long as we can. Then we test whether one of the factors is divisible by $n$.
If the test fails, we can be sure that $n$ is not a prime. But what happens if it succeeds? Unfortunately, this can still happen even if $n$ is composite; but the crucial point is that this test gives a false positive with probability less than $1 / 2$ (remember that we chose $a$ at random).
Reaching a wrong conclusion half of the time does not sound good enough; but we can repeat the experiment several times. If we repeat it 10 times (with a new, randomly chosen $a$ ), the probability of a false positive is less than $2^{-10}<1 / 1000$ (since to conclude that $n$ is prime, all 10 runs must give a false positive, independently of each other). If we repeat the experiment 100 times, the probability of a false positive drops below $2^{-100}<10^{-30}$, which is astronomically small.
Thus this test, when repeated sufficiently often, tests primality with error probability that is much less than the probability of, say, hardware failure, and therefore it is quite adequate for practical purposes. It is widely used in programs like Maple or Mathematica and in cryptographic systems.
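To make the description above concrete, here is a minimal Python sketch of one standard way to organize the Miller-Rabin test (the function name, the choice of bases from the range $2 \leq a \leq n-2$, and the default number of rounds are our own choices, not prescribed by the text). Writing $n-1=2^{s} d$ with $d$ odd, the factors produced by the repeated use of $x^{2}-1=(x-1)(x+1)$ are $a$, $a^{d}-1$, $a^{d}+1$, $a^{2 d}+1, \ldots, a^{2^{s-1} d}+1$, and we check whether $n$ divides one of them:

```python
import random

def is_probable_prime(n, rounds=10):
    """Miller-Rabin: check whether n divides one of the factors
    a * (a^d - 1) * (a^d + 1) * (a^(2d) + 1) * ... * (a^(2^(s-1) d) + 1)
    of a^n - a, where n - 1 = 2^s * d with d odd."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)       # a random base, as in the text
        x = pow(a, d, n)                     # a^d mod n
        if x == 1 or x == n - 1:             # n divides a^d - 1 or a^d + 1
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:                   # n divides a^(2^i d) + 1
                break
        else:
            return False                     # n divides none of the factors
    return True

print(is_probable_prime(341), is_probable_prime(561))   # almost surely False False
print(is_probable_prime(2**31 - 1))                     # True: a Mersenne prime
```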
Suppose that we test a number $n$ for primality and find that it is composite. Then we would like to find its prime factorization. It is easy to see that instead of this, we could ask for less: for a decomposition of $n$ into the product of two smaller positive integers: $n=a b$. If we have a method to find such a decomposition efficiently, then we can go on and test $a$ and $b$ for primality. If they are primes, we have found the prime factorization of $n$; if (say) $a$ is not a prime, we can use our method to find a decomposition of $a$ into the product of two smaller integers, and so on. Since every factor is at least 2, we have to repeat this at most $\log _{2} n$ times (which is less than the number of its bits).
But, unfortunately (or fortunately? see the next chapter on cryptography) no efficient method is known to write a composite number as a product of two smaller integers. It would be very important to find an efficient factorization method, or to give a mathematical proof that no such method exists; but we don't know what the answer is.
8.30 Show that $561 \mid a^{561}-a$, for every integer $a$. [Hint: since $561=3 \cdot 11 \cdot 17$, it suffices to prove that $3 \mid a^{561}-a$, $11 \mid a^{561}-a$ and $17 \mid a^{561}-a$. Prove these relations separately, using the method of the proof of the fact that $341 \mid 2^{340}-1$.]
## Graphs
### Even and odd degrees
We start with the following exercise (admittedly of no practical significance).
Prove that at a party with 51 people, there is always a person who knows an even number of others. (We assume that acquaintance is mutual. There may be people who don't know each other. There may even be people who don't know anybody else - of course, such people know an even number of others, so the assertion is true if there is such a person.)
If you don't have any idea how to begin a solution, you should try to experiment. But how to experiment with such a problem? Should we find 51 names for the participants, then create, for each person, a list of those people he knows? This would be very tedious, and we would be lost among the data. It would be good to experiment with smaller numbers. But which number can we take instead of 51? It is easy to see that 50, for example, would not do: if, say, all 50 people know each other, then everybody knows 49 others, so there is no person with an even number of acquaintances. For the same reason, we could not replace 51 by 48, or 30, or any even number. Let's hope that this is all; let's try to prove that
at a party with an odd number of people, there is always a person who knows an even number of others.
Now we can at least experiment with smaller numbers. Let us have, say, 5 people: Alice, Bob, Carl, Diane and Eve. When they first met, Alice knew everybody else, Bob and Carl knew each other, and Carl also knew Eve. So the numbers of acquaintances are: Alice 4, Bob 2, Carl 3, Diane 1 and Eve 2. So we have not only one but three people with an even number of acquaintances.
It is still rather tedious to consider examples by listing people and listing pairs knowing each other, and it is quite easy to make mistakes. We can, however, find a graphic illustration that helps a lot. We represent each person by a point in the plane (well, by a small circle, to make the picture nicer), and we connect two of these points by a segment if the people know each other. This simple drawing contains all the information we need (Figure 14).
A picture of this kind is called a graph. More exactly, a graph consists of a set of nodes (or points, or vertices; all these names are in use), and some pairs of these (not necessarily all pairs) are connected by edges (or lines). It does not matter whether these edges are straight or curvy; all that is important is which pairs of nodes they connect. The set of nodes of a graph $G$ is usually denoted by $V$; the set of edges, by $E$. Thus we write $G=(V, E)$ to indicate that the graph $G$ has node set $V$ and edge set $E$.
The only thing that matters about an edge is the pair of nodes it connects; hence the edges can be considered as 2-element subsets of $V$. This means that the edge connecting nodes $u$ and $v$ is just the set $\{u, v\}$. We'll further simplify notation and denote this edge by $u v$.
Coming back to our problem, we see that we can represent the party by a graph very conveniently. Our concern is the number of people known by a given person. We can read this off the graph by counting the number of edges leaving a given node. This number is called the degree (or valency) of the node. The degree of node $v$ is denoted by $d(v)$. So $A$ has degree 4, $B$ has degree 2, etc. If Frank now arrives, and he does not know anybody, then we add a new node that is not connected to any other node. So this new node has degree 0.

Figure 14: The graph depicting acquaintance between our friends

Figure 15: Some graphs with an odd number of nodes, with nodes of even degree marked
In the language of graph theory, we want to prove:
if a graph has an odd number of nodes then it has a node with even degree.
Since it is easy to experiment with graphs, we can draw a lot of graphs with an odd number of nodes, and count the number of nodes with even degree (Fig. 15). We find that they contain $1,5,3,3,7$ such nodes. So we observe that not only is there always such a node, but the number of such nodes is odd.
Now this is a case when it is easier to prove more. If we formulate the following stronger statement:

if a graph has an odd number of nodes then the number of nodes with even degree is odd,

then we have made an important step towards the solution! (Why is this statement stronger? Because 0 is not an odd number.) Let's try to find an even stronger statement by looking also at graphs with an even number of nodes. Experimenting on several small graphs again (Fig. 16), we find that the number of nodes with even degree is $2,0,4,2,0$. So we conjecture that the following is true:
if a graph has an even number of nodes then the number of nodes with even degree is even.
Figure 16: Some graphs with an even number of nodes, with nodes of even degree marked
Figure 17: Building up a graph one edge at a time
This is nicely parallel to the statement about graphs with an odd number of nodes, but it would be better to have a single common statement for the odd and even case. We get such a version if we look at the number of nodes with odd, rather than even, degree. This number is obtained by subtracting the number of nodes with even degree from the total number of nodes, and hence both statements will be implied by the following:
Theorem 9.1 In every graph, the number of nodes with odd degree is even.
We still have to prove this theorem. It seems that having made the statement stronger and more general in several steps, we made our task harder and harder. But in fact we got closer to the solution.
One way of proving the theorem is to build up the graph one edge at a time, and observe how the parities of the degrees change. An example is shown in Figure 17. We start with a graph with no edge, in which every degree is 0 , and so the number of nodes with odd degree is 0 , which is an even number.
Now if we connect two nodes by a new edge, we change the parity of the degrees at these nodes. In particular,
- if both endpoints of the new edge had even degree, we increase the number of nodes with odd degree by 2 ;
- if both endpoints of the new edge had odd degree, we decrease the number of nodes with odd degree by 2;
- if one endpoint of the new edge had even degree and the other had odd degree, then we don't change the number of nodes with odd degree.
Thus if the number of nodes with odd degree was even before adding the new edge, it remained even after this step. This proves the theorem. (Note that this is a proof by induction on the number of edges.)
Graphs are very handy in representing a large variety of situations, not only parties. It is quite natural to consider the graph whose nodes are towns and whose edges are highways (or railroads, or telephone lines) between these towns. We can use a graph to describe an electrical network, say the printed circuit on a card in your computer.
In fact, graphs can be used in any situation where a "relation" between certain objects is defined. Graphs are used to describe bonds between atoms in a molecule; connections between cells in the brain; descendence between species, etc. Sometimes the nodes represent more abstract things: for example, they may represent stages of a large construction project, and an edge between two stages means that one arises from the other in a single phase of work. Or the nodes can represent all possible positions in a game (say, chess, although you don't really want to draw this graph), where we connect two nodes by an edge if one can be obtained from the other in a single move.
9.1 Find all graphs with 2,3 and 4 nodes.
9.2 (a) Is there a graph on 6 nodes with degrees $2,3,3,3,3,3$ ?
(b) Is there a graph on 6 nodes with degrees $0,1,2,3,4,5$ ?
(c) How many graphs are there on 4 nodes with degrees $1,1,2,2$ ?
(d) How many graphs are there on 10 nodes with degrees $1,1,1,1,1,1,1,1,1,1$ ?
9.3 At the end of the party, everybody knows everybody else. Draw the graph representing this situation. How many edges does it have?
9.4 Draw the graphs representing the bonds between atoms in (a) a water molecule; (b) a methane molecule; (c) two water molecules.
9.5 (a) Draw a graph with nodes representing the numbers $1,2, \ldots, 10$, in which two nodes are connected by an edge if and only if one is a divisor of the other.
(b) Draw a graph with nodes representing the numbers $1,2, \ldots, 10$, in which two nodes are connected by an edge if and only if they have no common divisor larger than 1 .
(c) Find the number of edges and the degrees in these graphs, and check that theorem 9.1 holds.
9.6 What is the largest number of edges a graph with 10 nodes can have?
9.7 How many graphs are there on 20 nodes? (To make this question precise, we have to make sure we know what it means that two graphs are the same. For the purposes of this exercise, we consider the nodes given, and labeled say, as Alice, Bob, ... The graph consisting of a single edge connecting Alice and Bob is different from the graph consisting of a single edge connecting Eve and Frank.)
9.8 Formulate the following assertion as a theorem about graphs, and prove it: At every party one can find two people who know the same number of other people (like Bob and Eve in our first example).
Figure 18: Two paths and two cycles
It will be instructive to give another proof of the theorem formulated in the last section. This will hinge on the answer to the following question: How many edges does a graph have? This can be answered easily, if we think back to the problem of counting handshakes: for each node, we count the edges that leave that node (this is the degree of the node). If we sum these numbers, we count every edge twice. So dividing the sum by two, we get the number of edges. Let us formulate this observation as a theorem:
Theorem 9.2 The sum of degrees of all nodes in a graph is twice the number of edges.
In particular, we see that the sum of degrees in any graph is an even number. If we omit the even terms from this sum, we still get an even number. So the sum of odd degrees is even. But this is only possible if the number of odd degrees is even (since the sum of an odd number of odd numbers is odd). Thus we have obtained a new proof of Theorem 9.1.
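Both theorems are easy to check by machine on any example. A small Python sketch, using the party graph of Figure 14 with Alice, Bob, Carl, Diane and Eve numbered 0 to 4 (this encoding is our own):

```python
edges = [{0, 1}, {0, 2}, {0, 3}, {0, 4}, {1, 2}, {2, 4}]     # the party graph
degree = {v: sum(v in e for e in edges) for v in range(5)}

print(degree)                                   # {0: 4, 1: 2, 2: 3, 3: 1, 4: 2}
print(sum(degree.values()) == 2 * len(edges))   # True: Theorem 9.2
print(sum(d % 2 for d in degree.values()))      # 2, an even number: Theorem 9.1
```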
### Paths, cycles, and connectivity
Let us get acquainted with some special kinds of graphs. The simplest graphs are the empty graphs, having any number of nodes but no edges.
We get another very simple kind of graph if we take $n$ nodes and connect any two of them by an edge. Such a graph is called a complete graph (or a clique). A complete graph with $n$ nodes has $\binom{n}{2}$ edges (recall exercise 9.3).
Let us draw $n$ nodes in a row and connect the consecutive ones by an edge. This way we obtain a graph with $n-1$ edges, which is called a path. The first and last nodes in the row are called the endpoints of the path. If we also connect the last node to the first, we obtain a cycle (or circuit). Of course, we can draw the same graph in many other ways, placing the nodes elsewhere, and we may get edges that intersect (Figure 18).
A graph $H$ is called a subgraph of a graph $G$ if it can be obtained from $G$ by deleting some of its edges and nodes (of course, if we delete a node we automatically delete all the edges that connect it to other nodes).
9.9 Find all complete graphs, paths and cycles among the graphs in the previous figures.
9.10 How many subgraphs does an empty graph on $n$ nodes have? How many subgraphs does a triangle have?
Figure 19: A path in a graph connecting two nodes
A key notion in graph theory is that of a connected graph. It is heuristically clear what this should mean, but it is also easy to formulate the property as follows: a graph $G$ is connected if every two nodes of the graph can be connected by a path in $G$. To be more precise: a graph $G$ is connected if for every two nodes $u$ and $v$, there exists a path with endpoints $u$ and $v$ that is a subgraph of $G$ (Figure 19).
It will be useful to include a little discussion of this notion. Suppose that nodes $a$ and $b$ can be connected by a path $P$ in our graph. Also suppose that nodes $b$ and $c$ can be connected by a path $Q$. Can $a$ and $c$ be connected by a path? The answer seems to be obviously "yes", since we can just go from $a$ to $b$ and then from $b$ to $c$. But there is a difficulty: concatenating (joining together) the two paths may not yield a path from $a$ to $c$, since $P$ and $Q$ may intersect each other (Figure 20). But we can construct a path from $a$ to $c$ easily: Let us follow the path $P$ to its first common node $d$ with $Q$; then let us follow $Q$ to $c$. Then the nodes we traversed are all distinct. Indeed, the nodes on the first part of our walk are distinct because they are nodes of the path $P$; similarly, the nodes on the second part are distinct because they are nodes of the path $Q$; finally, any node of the first part must be distinct from any node of the second part (except, of course, the node $d$ ), because $d$ is the first common node of the two paths and so the nodes of $P$ that we passed through before $d$ are not nodes of $Q$ at all. Hence the nodes and edges we have traversed form a path from $a$ to $c$ as claimed. ${ }^{3}$
9.11 Is the proof as given above valid if (a) the node $a$ lies on the path $Q$; (b) the paths $P$ and $Q$ have no node in common except $b$?
9.12 (a) We delete an edge $e$ from a connected graph $G$. Show by an example that the remaining graph may not be connected.
${ }^{3}$ We have given more details of this proof than was perhaps necessary. One should note, however, that when arguing about paths and cycles in graphs, it is easy to draw pictures (on paper or mentally) that make implicit assumptions and are, therefore, misleading. For example, when joining together two paths, one's first mental image is a single (longer) path, which may not be the case.
Figure 20: How to select a path from $a$ to $c$, given a path from $a$ to $b$ and a path from $b$ to $c$?
(b) Prove that if we assume that the deleted edge $e$ belongs to a cycle that is a subgraph of $G$, then the remaining graph is connected.
9.13 Let $G$ be a graph and let $u$ and $v$ be two nodes of $G$. A walk in $G$ from $u$ to $v$ is a sequence of nodes $\left(w_{0}, w_{1}, \ldots, w_{k}\right)$ such that $w_{0}=u, w_{k}=v$, and consecutive nodes are connected by an edge, i.e., $w_{i} w_{i+1} \in E$ for $i=0,1, \ldots, k-1$.
(a) Prove that if there is a walk in $G$ from $u$ to $v$, then $G$ contains a path connecting $u$ to $v$.
(b) Use part (a) to give another proof of the fact that if $G$ contains a path connecting $a$ and $b$, and also a path connecting $b$ to $c$, then it contains a path connecting $a$ to $c$.
9.14 Let $G$ be a graph, and let $H_{1}=\left(V_{1}, E_{1}\right)$ and $H_{2}=\left(V_{2}, E_{2}\right)$ be two subgraphs of $G$ that are connected. Assume that $H_{1}$ and $H_{2}$ have at least one node in common. Form their union, i.e., the subgraph $H=\left(V^{\prime}, E^{\prime}\right)$, where $V^{\prime}=V_{1} \cup V_{2}$ and $E^{\prime}=E_{1} \cup E_{2}$. Prove that $H$ is connected.
Let $G$ be a graph that is not necessarily connected. $G$ will have connected subgraphs; for example, the subgraph consisting of a single node (and no edge) is connected. A connected component $H$ is a maximal subgraph that is connected; in other words, $H$ is a connected component if it is connected but every other subgraph of $G$ that contains $H$ is disconnected. It is clear that every node of $G$ belongs to some connected component. It follows by exercise 9.14 that different connected components of $G$ have no node in common (else, their union would be a connected subgraph containing both of them). In other words, every node of $G$ is contained in a unique connected component.
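Computing the connected components is a matter of systematic search: start at some node, keep collecting neighbors of nodes already collected, and when nothing new is reached, the collected set is exactly the component of the starting node. A minimal Python sketch of this idea (the function name and graph format are our own):

```python
def components(nodes, edges):
    """Connected components found by repeated graph search."""
    nbrs = {v: set() for v in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = {s}, [s]          # grow the component of s
        while stack:
            u = stack.pop()
            for w in nbrs[u]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        comps.append(comp)
    return comps

print(components(range(6), [(0, 1), (1, 2), (3, 4)]))   # [{0, 1, 2}, {3, 4}, {5}]
```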
9.15 Determine the connected components of the graphs constructed in exercise 9.1.
9.16 Prove that no edge of $G$ can connect nodes in different connected components.
9.17 Prove that a node $v$ is a node of the connected component of $G$ containing node $u$ if and only if $G$ contains a path connecting $u$ to $v$.
9.18 Prove that a graph with $n$ nodes and more than $\binom{n-1}{2}$ edges is always connected.
Figure 21: Five trees
## Trees
We have met trees when studying enumeration problems; now we have a look at them as graphs. A graph $G=(V, E)$ is called a tree if it is connected and contains no cycle as a subgraph. The simplest tree has one node and no edges. The second simplest tree consists of two nodes connected by an edge. Figure 21 shows a variety of other trees.
Note that the two properties defining trees work in opposite directions: connectedness means that the graph cannot have "too few" edges, while the exclusion of cycles means that it cannot have "too many". To be more precise, if a graph is connected, then adding a new edge to it, it remains connected (while deleting an edge, it may or may not stay connected). If a graph contains no cycle, then deleting any edge, the remaining graph will not contain a cycle either (while adding a new edge may or may not create a cycle). The following theorem shows that trees can be characterized as "minimally connected" graphs as well as "maximally cycle-free" graphs.
Theorem 10.1 (a) A graph $G$ is a tree if and only if it is connected, but deleting any of its edges results in a disconnected graph.
(b) A graph $G$ is a tree if and only if it contains no cycles, but adding any new edge creates a cycle.
We prove part (a) of this theorem; the proof of part (b) is left as an exercise.
First, we have to prove that if $G$ is a tree then it satisfies the condition given in the theorem. It is clear that $G$ is connected (by the definition of a tree). We want to prove that deleting any edge, it cannot remain connected. The proof is indirect: assume that deleting the edge $u v$ from a tree $G$, the remaining graph $G^{\prime}$ is connected. Then $G^{\prime}$ contains a path $P$ connecting $u$ and $v$. But then, if we put the edge $u v$ back, the path $P$ and the edge $u v$ will form a cycle in $G$, which contradicts the definition of trees.
Second, we have to prove that if $G$ satisfies the condition given in the theorem, then it is a tree. It is clear that $G$ is connected, so we only have to argue that $G$ does not contain a cycle. Again by an indirect argument, assume that $G$ does contain a cycle $C$. Then deleting any edge of $C$, we obtain a connected graph (exercise 9.12(b)). But this contradicts the condition in the theorem.
10.1 Prove part (b) of Theorem 10.1.

10.2 Prove that connecting two nodes $u$ and $v$ in a graph $G$ by a new edge creates a new cycle if and only if $u$ and $v$ are in the same connected component of $G$.
10.3 Prove that in a tree, every two nodes can be connected by a unique path. Conversely, prove that if a graph $G$ has the property that every two nodes can be connected by a path, and there is only one connecting path for each pair, then the graph is a tree.
### How to grow a tree?
The following is one of the most important properties of trees.
Theorem 10.2 Every tree with at least two nodes has at least two nodes of degree 1.
Let $G$ be a tree with at least two nodes. We prove that $G$ has a node of degree 1 , and leave it to the reader as an exercise to prove that it has at least one more. (A path has only two such nodes, so this is the best possible we can claim.)
Let us start from any node $v_{0}$ of the tree and take a walk (climb?) on the tree. Let's say we never want to turn back from a node on the edge through which we entered it; this is possible unless we get to a node of degree 1, in which case we stop and the proof is finished.
So let's argue that this must happen sooner or later. If not, then eventually we must return to a node we have already visited; but then the nodes and edges we have traversed between the two visits form a cycle. This contradicts our assumption that $G$ is a tree and hence contains no cycle.
10.4 Apply the argument above to find a second node of degree 1.
A real tree grows by developing a new twig again and again. We show that graph-trees can be grown in the same way. To be more precise, consider the following procedure, which we call the Tree-growing Procedure:
- Start with a single node.
- Repeat the following any number of times: if you have any graph $G$, create a new node and connect it by a new edge to any node of $G$.
Theorem 10.3 Every graph obtained by the Tree-growing Procedure is a tree, and every tree can be obtained this way.
The proof of this is again rather straightforward, but let us go through it, if only to gain practice in arguing about graphs.
First, consider any graph that can be obtained by this procedure. The starting graph is certainly a tree, so it suffices to argue that we never create a non-tree; in other words, if $G$ is a tree, and $G^{\prime}$ is obtained from $G$ by creating a new node $v$ and connecting it to a node $u$ of $G$, then $G^{\prime}$ is a tree. This is straightforward: $G^{\prime}$ is connected, since any two "old" nodes can be connected by a path in $G$, while $v$ can be connected to any other node $w$ by first going to $u$ and then connecting $u$ to $w$. Moreover, $G^{\prime}$ cannot contain a cycle: $v$ has degree 1 and so no cycle can go through $v$, but a cycle that does not go through $v$ would be a cycle in the old graph, which is supposed to be a tree.
Second, let's argue that every tree can be constructed this way. We prove this by induction on the number of nodes. ${ }^{4}$ If the number of nodes is 1, then the tree arises by the construction, since this is the way we start.
${ }^{4}$ The first part of the proof is also an induction argument, even though it was not phrased as such.
Figure 22: The descendence tree of trees
Assume that $G$ is a tree with at least 2 nodes. Then by Theorem 10.2, $G$ has a node of degree 1 (in fact, at least two such nodes). Let $v$ be a node with degree 1. Delete $v$ from $G$, together with the edge with endpoint $v$, to get a graph $G^{\prime}$.
We claim that $G^{\prime}$ is a tree. Indeed, $G^{\prime}$ is connected: any two nodes of $G^{\prime}$ can be connected by a path in $G$, and this path cannot go through $v$ as $v$ has degree 1 . So this path is also a path in $G^{\prime}$. Furthermore, $G^{\prime}$ does not contain a cycle as $G$ does not.
By the induction hypothesis, every tree with fewer nodes than $G$ arises by the construction; in particular, $G^{\prime}$ does. But then $G$ arises from $G^{\prime}$ by one more iteration of the second step. This completes the proof of Theorem 10.3.
Figure 22 shows how trees with up to 4 nodes arise by this construction. Note that there is a "tree of trees" here. The fact that the logical structure of this construction is a tree does not have anything to do with the fact that we are constructing trees: any iterative construction with free choices at each step results in a similar "descendence tree".
The Tree-growing Procedure can be used to establish a number of properties of trees. Perhaps most important of these concerns the number of edges. How many edges does a tree have? Of course, this depends on the number of nodes; but surprisingly, it depends only on the number of nodes:
Theorem 10.4 Every tree on $n$ nodes has $n-1$ edges.
Indeed, if we build up the tree by the Tree-growing Procedure, we start with one more node (1) than edges (0), and at each step, one new node and one new edge are added, so this difference of 1 is maintained.
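Theorems 10.1 and 10.4 together suggest a quick computational test for treeness: a graph on $n$ nodes is a tree if and only if it is connected and has exactly $n-1$ edges (the "only if" direction is what we just proved; the converse can be deduced using Theorem 10.4 and exercise 9.12(b)). A minimal Python sketch under this characterization (names and data format are our own):

```python
def is_tree(n, edges):
    """A graph on n nodes is a tree iff it is connected with n - 1 edges."""
    if len(edges) != n - 1:
        return False
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    reached, stack = {0}, [0]
    while stack:                    # can every node be reached from node 0?
        u = stack.pop()
        for w in nbrs[u]:
            if w not in reached:
                reached.add(w)
                stack.append(w)
    return len(reached) == n

print(is_tree(4, [(0, 1), (1, 2), (1, 3)]))   # True: a star
print(is_tree(4, [(0, 1), (1, 2), (2, 0)]))   # False: triangle plus an isolated node
```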
10.5 Let $G$ be a tree, which we consider as the network of roads in a medieval country, with castles as nodes. The King lives at node $r$. On a certain day, the lord of each castle sets out to visit the King. Argue carefully that soon after they leave their castles, there will be exactly one lord on each edge. Give a proof of Theorem 10.4 based on this.
10.6 If we delete a node $v$ from a tree (together with all edges that end there), we get a graph whose connected components are trees. We call these connected components the branches at node $v$. Prove that every tree has a node such that every branch at this node contains at most half the nodes of the tree.
### Rooted trees
Often we use trees that have a special node, which we call the root. For example, trees that occurred in counting subsets or permutations were built starting with a given node.
We can take any tree, select any of its nodes, and call it a root. A tree with a specified root is called a rooted tree.
Let $G$ be a rooted tree with root $r$. Given any node $v$ different from $r$, we know that the tree contains a unique path connecting $v$ to $r$. The node on this path next to $v$ is called the father of $v$. The other neighbors of $v$ are called the sons of $v$. The root $r$ does not have a father, but all its neighbors are called its sons.
Now a basic genealogical assertion: every node is the father of its sons. Indeed, let $v$ be any node and let $u$ be one of its sons. Consider the unique path $P$ connecting $v$ to $r$. The node $u$ cannot lie on $P$: it cannot be the first node after $v$, since then it would be the father of $v$, and not its son; and it cannot be a later node, since then going from $v$ to $u$ on the path $P$ and then back to $v$ on the edge $u v$ we would traverse a cycle. But this implies that adding the node $u$ and the edge $u v$ to $P$ we get a path connecting $u$ to $r$. Since $v$ is the first node on this path after $u$, it follows that $v$ is the father of $u$. (Is this argument valid when $v=r$? Check!)
We have seen that every node different from the root has exactly one father. A node can have any number of sons, including zero. A node with no sons is called a leaf. In other words, a leaf is a node with degree 1 , different from $r$. (To be precise, if the tree consists of a single node $r$, then this is a leaf.)
### How many trees are there?
We have counted all sorts of things in the first part of this book; now that we are familiar with trees, it is natural to ask: how many trees are there on $n$ nodes?
Before attempting to answer this question, we have to clarify an important issue: when do we consider two trees different? There is more than one reasonable answer to this question. Consider the trees in Figure 23. Are they the same? One could say that they are; but then, if the nodes are, say, towns, and the edges represent roads to be built between them, then clearly the inhabitants of the towns will consider the two plans very different.
So we have to define carefully when we consider two trees the same. The following are two possibilities:
- We fix the set of nodes, and consider two trees the same if the same pairs of nodes are connected in each. (This is the position the town people would take when they consider road construction plans.) In this case, it is advisable to give names to the nodes, so that we can distinguish them. It is convenient to use the numbers $0,1,2, \ldots, n-1$ as names (if the tree has $n$ nodes). We express this by saying that the vertices of the tree are labelled by $0,1,2, \ldots, n-1$. Figure 24 shows a labelled tree. Interchanging the labels 2 and 4 (say) would yield a different labelled tree.

Figure 23: Are these trees the same?
- We don't give names to the nodes, and consider two trees the same if we can rearrange the nodes of one so that we get the other tree. More exactly, we consider two trees the same (the mathematical term for this is isomorphic) if there exists a one-to-one correspondence between the nodes of the first tree and the nodes of the second tree so that two nodes in the first tree that are connected by an edge correspond to nodes in the second tree that are connected by an edge, and vice versa. If we speak about unlabelled trees, we mean that we don't distinguish isomorphic trees from each other. For example, all paths on $n$ nodes are the same as unlabelled trees.
So we can ask two questions: how many labelled trees are there on $n$ nodes? and how many unlabelled trees are there on $n$ nodes? These are really two different questions, and we have to consider them separately.
10.7 Find all unlabelled trees on 2,3,4 and 5 nodes. How many labelled trees do you get from each? Use this to find the number of labelled trees on 2,3,4 and 5 nodes.
10.8 How many labelled trees on $n$ nodes are stars? How many are paths?
The number of labelled trees. For the case of labelled trees, there is a very nice solution.
Theorem 10.5 (Cayley's Theorem) The number of labeled trees on $n$ nodes is $n^{n-2}$.
The formula is elegant, but the surprising fact about it is that it is quite difficult to prove! It is substantially deeper than any of the previous formulas for the number of this and that. There are various ways to prove it, but each uses some deeper tool from mathematics or a deeper idea. We'll give a proof that is perhaps best understood by first discussing a quite different question in computer science: how to store a tree?
### How to store a tree?
Suppose that you want to store a labelled tree, say the tree in Figure 24, in a computer. How would you do this? Of course, the answer depends on what you need to store the tree for, what information about it you want to retrieve and how often, etc. Right now, we are only concerned with the amount of memory we need. We want to store the tree so that it occupies the least amount of memory.

Figure 24: A labelled tree
Let's try some simple solutions.
(a) Suppose that we have a tree $G$ with $n$ nodes. One thing that comes to mind is to make a big table, with $n$ rows and $n$ columns, and put (say) the number 1 in the $j$-th position of the $i$-th row if nodes $i$ and $j$ are connected by an edge, and the number 0 , if they are not:
$$
\begin{array}{llllllllll}
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
$$
We need one bit to store each entry of this table, so this takes $n^{2}$ bits. We can save a little by noticing that it is enough to store the part below the diagonal, since the diagonal is always 0 and the other half of the table is just the reflection of the half below the diagonal. But this is still $\left(n^{2}-n\right) / 2$ bits.
This method of storing the tree can, of course, be used for any graph. It is often very useful, but, at least for trees, it is very wasteful.
(b) We fare better if we specify each tree by listing all its edges. We can specify each edge by its two endpoints. It will be convenient to arrange this list in an array whose columns correspond to the edges. For example, the tree in Figure 24 can be encoded by
$$
\begin{array}{lllllllll}
7 & 8 & 9 & 6 & 3 & 0 & 2 & 6 & 6 \\
9 & 9 & 2 & 2 & 0 & 2 & 4 & 1 & 5
\end{array}
$$
Instead of a table with $n$ rows, we get a table just with two rows. We pay a little for this: instead of just 0 and 1, the table will contain integers between 0 and $n-1$. But this is certainly worth it: even if we count bits, to write down the label of a node takes $\log _{2} n$ bits, so the whole table occupies only $2 n \log _{2} n$ bits, which is much less than $\left(n^{2}-n\right) / 2$ if $n$ is large.
There is still a lot of free choice here, which means that the same tree may be encoded in different ways: we have freedom in choosing the order of the edges, and also in choosing the order in which the two endpoints of an edge are listed. We could agree on some arbitrary conventions to make the code well defined (say, listing the two endnodes of an edge in increasing order, and then the edges in increasing order of their first endpoints, breaking ties according to the second endpoints); but it will be more useful to do this in a way that also allows us to save more memory.
(c) The father code. From now on, the node with label 0 will play a special role; we'll consider it the "root" of the tree. Then we can list the two endnodes of an edge by listing the endpoint further from the root first, and then the endpoint nearer to the root second. So for every edge, the node written below is the father of the node written above. For the order in which we list the edges, let us take the order of their first nodes. For the tree in Figure 24, we get the table
$$
\begin{array}{lllllllll}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
6 & 0 & 0 & 2 & 6 & 2 & 9 & 9 & 2
\end{array}
$$
Do you notice anything special about this table? The first row consists of the numbers $1,2,3,4,5,6,7,8,9$, in this order. Is this a coincidence? Well, the order is certainly not (we ordered the edges by the increasing order of their first endpoints), but why do we get every number exactly once? After a little reflection, this should be also clear: if a node occurs in the first row, then its father occurs below it. Since a node has only one father, it can occur only once. Since every node other than the root has a father, every node other than the root occurs in the first row.
Thus we know in advance that if we have a tree on $n$ nodes, and write up the array using this method, then the first row will consist of $1,2,3, \ldots, n-1$. So we may as well suppress the first row without losing any information; it suffices to store the second. Thus we can specify the tree by a sequence of $n-1$ numbers, each between 0 and $n-1$. This takes $(n-1)\left\lceil\log _{2} n\right\rceil$ bits.
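A short Python sketch of computing the father code (the function is our own illustration; the edge list below is the tree of Figure 24): we search outward from the root 0 and record, for each node, the node through which it was first reached — by the uniqueness of paths in a tree, this is its father.

```python
def father_code(n, edges):
    """Father code of a labelled tree rooted at 0: the k-th entry is the
    father of node k (k = 1, ..., n-1)."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    father = [None] * n
    father[0] = 0                      # sentinel: never overwritten
    stack = [0]
    while stack:                       # search outward from the root
        u = stack.pop()
        for w in nbrs[u]:
            if father[w] is None:
                father[w] = u
                stack.append(w)
    return father[1:]

tree = [(1, 6), (2, 0), (3, 0), (4, 2), (5, 6), (6, 2), (7, 9), (8, 9), (9, 2)]
print(father_code(10, tree))           # [6, 0, 0, 2, 6, 2, 9, 9, 2], as in the text
```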
This coding is not optimal, in the sense that not every "code" gives a tree (see exercise 10.9). But we'll see that this method is already nearly optimal.
10.9 Consider the following "codes": $(0,1,2,3,4,5,6,7) ;(7,6,5,4,3,2,1,0)$; $(0,0,0,0,0,0,0,0) ;(2,3,1,2,3,1,2,3)$. Which of these are "father codes" of trees?
10.10 Prove, based on the "father code" method of storing trees, that the number of labelled trees on $n$ nodes is at most $n^{n-1}$.
(d) Now we describe a procedure, the so-called Prüfer code, that will assign to any $n$-point labelled tree a sequence of length $n-2$, not $n-1$, consisting of the numbers $0, \ldots, n-1$. The gain is little, but important: we'll show that every such sequence corresponds to a tree. Thus we will establish a bijection, a one-to-one correspondence between labelled trees on $n$ nodes and sequences of length $n-2$, consisting of numbers $0,1, \ldots, n-1$. Since the number of such sequences is $n^{n-2}$, this will also prove Cayley's Theorem. The Prüfer code can be considered as a refinement of method (c). We still consider 0 as the root, we still order the two endpoints of an edge so that the node farther from the root comes first and its father second, but we order the edges (the columns of the array) not by the magnitude of their first endpoint but a little differently, more closely related to the tree itself.
So again, we construct a table with two rows, whose columns correspond to the edges, and each edge is listed so that the node farther from 0 is on the top, its father on the bottom. The issue is the order in which we list the edges.
Here is the rule for this order: we look for a node of degree 1, different from 0, with the smallest label, and write down the edge with this endnode. In our example, this means that we write down the column $\begin{array}{l}1 \\ 6\end{array}$. Then we delete this node and edge from the tree, and repeat: we look for the node of degree 1 with smallest label, different from 0, and write down the edge incident with it. In our case, this means adding a column $\begin{array}{l}3 \\ 0\end{array}$ to the table. Then we delete this node and edge etc. We go on until all edges are listed. The array we get is called the extended Prüfer code of the tree (we call it extended because, as we'll see, we only need a part of it as the "real" Prüfer code). The extended Prüfer code of the tree in Figure 24 is:
$\begin{array}{lllllllll}1 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 2 \\ 6 & 0 & 2 & 6 & 2 & 9 & 9 & 2 & 0\end{array}$
Why is this any better than the "father code"? One little observation is that the last entry in the second row is now always 0: it comes from the last edge, and since we never touched the node 0, this last edge must be incident with it. But we have paid a lot for this, it seems: it is not clear any more that the first row is superfluous; it still consists of the numbers $1,2, \ldots, n-1$, but now they are not in increasing order any more.
The key lemma is that the first row is determined by the second:
Lemma 10.1 The second row of an extended Prüfer code determines the first.
Let us illustrate the proof of the lemma by an example. Suppose that somebody gives us the second row of an extended Prüfer code of a labelled tree on 8 nodes; say, $2,4,0,3,3,1,0$ (we have one fewer edge than nodes, so the second row consists of 7 numbers, and as we have seen, it must end with a 0). Let us figure out what the first row must have been.
How does the first row start? Remember that this is the node that we delete in the first step; by the rule of constructing the Prüfer code, this is the node of degree 1 with smallest label. Could this node be the node 1? No, because then we would have to delete it in the first step, and it could not occur in the second row any more; but it does. By the same token, no number occurring in the second row could be a leaf of the tree at the beginning. This rules out 2, 3 and 4.
What about 5? It does not occur in the second row; does this mean that it is a leaf of the original tree? The answer is yes; else, 5 would have been the father of some other node, and it would have been written in the second row when this other node was deleted. Thus 5 was the leaf with smallest label, and the first row of the extended Prüfer code must start with 5.
Let's try to figure out the next entry in the first row, which is, as we know, the leaf with smallest label of the tree after 5 was deleted. The node 1 is still ruled out, since it occurs later in the second row; but 2 does not occur any more, which means (by the same argument as before) that 2 was the leaf with smallest label after deleting 5. Thus the second entry in the first row is 2.

Figure 25: A tree reconstructed from its Prüfer code
Similarly, the third entry must be 4 , since all the smaller numbers either occur later or have been used already. Going on in a similar fashion, we get that the full array must have been:
$\begin{array}{lllllll}5 & 2 & 4 & 6 & 7 & 3 & 1 \\ 2 & 4 & 0 & 3 & 3 & 1 & 0\end{array}$
This corresponds to the tree in Figure 25.
The considerations above are completely general, and can be summed up as follows:
each entry in the first row of the extended Prüfer code is the smallest integer that does not occur in the first row before it, nor in the second row below or after it.
Indeed, when this entry (say, the $k$-th entry in the first row) was recorded, then the nodes before it in the first row were deleted (together with the edges corresponding to the first $k-1$ columns). The remaining entries in the second row are exactly those nodes that are fathers at this time, which means that they are not leaves.
This describes how the first row can be reconstructed from the second. So we don't need the full extended Prüfer code to store the tree; it suffices to store the second row. In fact, we know that the last entry in the second row is 0, so we don't have to store this either. The sequence consisting of the first $n-2$ entries of the second row is called the Prüfer code of the tree. Thus the Prüfer code is a sequence of length $n-2$, each entry of which is a number between 0 and $n-1$.
This is similar to the father code, just one shorter; not much gain here for all the work. But the beauty of the Prüfer code is that it is optimal, in the sense that
every sequence of numbers between 0 and $n-1$, of length $n-2$, is a Prüfer code of some tree on $n$ nodes.
This can be proved in two steps. First, we extend this sequence to a table with two rows: we add a 0 at the end, and then, above each entry of the second row, we write the smallest integer that does not occur in the first row before it, nor in the second row below or after it (note that it is always possible to find such an integer: the condition excludes at most $n-1$ values out of $n$).
Now this table with two rows is the extended Prüfer code of a tree. The proof of this fact, which is not difficult any more, is left to the reader as an exercise.

10.11 Complete the proof.
Let us sum up what the Prüfer code gives. First, it proves Cayley's Theorem. Second, it provides a theoretically most efficient way of encoding trees. Each Prüfer code can be considered as a natural number written in the base $n$ number system; in this way, we assign a "serial number" between 0 and $n^{n-2}-1$ to the $n$-point labelled trees. Expressing these serial numbers in base two, we get a code by 0-1 sequences of length at most $\lceil(n-2) \log _{2} n\rceil$.
As a third use of the Prüfer code, let's suppose that we want to write a program that generates a random labeled tree on $n$ nodes in such a way that all trees occur with the same probability. This is not easy from scratch; but the Prüfer code gives an efficient solution. We just have to generate $n-2$ independent random integers between 0 and $n-1$ (most programming languages have a statement for this) and then "decode" this sequence as a tree, as described.
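Here is a minimal Python sketch of the decoding step (the function name is ours), following the reconstruction rule word for word; it is written for clarity rather than speed. Combined with $n-2$ random integers, it yields the uniform random tree generator just described.

```python
import random

def pruefer_decode(code):
    """Rebuild the edges of a labelled tree from its Pruefer code: append
    the final 0, then each first-row entry is the smallest label that was
    not used before and does not occur in the second row from here on."""
    n = len(code) + 2
    second = list(code) + [0]            # the extended second row
    used, edges = set(), []
    for k in range(n - 1):
        rest = set(second[k:])           # labels still needed as fathers
        leaf = min(v for v in range(n) if v not in used and v not in rest)
        edges.append((leaf, second[k]))
        used.add(leaf)
    return edges

# the example from the text (the trailing 0 of the second row is implied):
print(pruefer_decode([2, 4, 0, 3, 3, 1]))
# a uniformly random labelled tree on 8 nodes:
print(pruefer_decode([random.randrange(8) for _ in range(6)]))
```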
The number of unlabelled trees. The number of unlabelled trees on $n$ nodes, usually denoted by $T_{n}$, is even more difficult to handle. No simple formula like Cayley's theorem is known for this number. Our goal is to get a rough idea about how large this number is.
There is only one unlabelled tree on 1, 2 or 3 nodes; there are two on 4 nodes (the path and the star). There are 3 on 5 nodes (the star, the path, and the tree in Figure 23). These numbers are much smaller than the numbers of labelled trees with these numbers of nodes, which are $1,1,3,16$ and 125 by Cayley's Theorem.
It is of course clear that the number of unlabelled trees is less than the number of labelled trees; every unlabelled tree can be labelled in many ways. How many ways? If we draw an unlabelled tree, we can label its nodes in $n!$ ways. The labelled trees we get this way are not necessarily all different: for example, if the tree is a "star", i.e., it has $n-1$ leaves and one more node connected to all leaves, then no matter how we permute the labels of the leaves, we get the same labelled tree. So an unlabelled star yields $n$ labelled stars.
But at least we know that each unlabelled tree can be labelled in at most $n!$ ways. Since the number of labelled trees is $n^{n-2}$, it follows that the number of unlabelled trees is at least $n^{n-2} / n!$. Using Stirling's formula (Theorem 2.5), we see that this number is about $e^{n} / n^{5 / 2} \sqrt{2 \pi}$.
This number is much smaller than the number of labelled trees, $n^{n-2}$, but of course it is only a lower bound on the number of unlabelled trees. How can we obtain an upper bound on this number? If we think in terms of storage, the issue is: can we store an unlabelled tree more economically than labelling its nodes and then storing it as a labelled tree? Very informally, how should we describe a tree, if we only want the "shape" of it, and don't care which node gets which label?
Take an $n$-point tree $G$, and specify one of its leaves as its "root". Next, draw $G$ in the plane without crossing edges; this can always be done, and we almost always draw trees this way.
Now we imagine that the edges of the tree are walls, perpendicular to the plane. Starting at the root, walk around this system of walls, keeping the wall always to your right. We'll call walking along an edge a "step". Since there are $n-1$ edges, and each edge has two sides, we'll make $2(n-1)$ steps before returning to the root (Figure 26).
Each time we make a step away from the root (i.e., a step from a father to one of its sons), we write down a 1 ; each time we make a step toward the root, we write down a 0 .
Figure 26: Walking around a tree
This way we end up with a sequence of length $2(n-1)$, consisting of 0's and 1's. We call this sequence the planar code of the (unlabelled) tree. The planar code of the tree in Figure 26 is 1110011010100100.
Now this name already indicates that the planar code has the following important property:
Every unlabelled tree is uniquely determined by its planar code.
Let us illuminate the proof of this by assuming that the tree is covered by snow, and we only have its code. We ask a friend of ours to walk around the tree just like above, and uncover the walls, and we look at the code in the meanwhile. What do we see? Any time we see a 1, he walks along a wall away from the root, and he cleans it from the snow. We see this as growing a new twig. Any time we see a 0, he walks back, along an edge already uncovered, toward the root.
Now this describes a perfectly good way to draw the tree: we look at the bits of the code one by one, while keeping the pen on the paper. Any time we see a 1 , we draw a new edge to a new node (and move the pen to the new node). Any time we see a 0, we move the pen back by one edge toward the root. Thus the tree is indeed determined by its planar code.
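This drawing recipe is easy to mechanize. A minimal Python sketch (our own encoding: the code is a list of bits, and nodes are numbered in the order in which they are discovered):

```python
def planar_decode(code):
    """Rebuild a rooted tree from its planar code: 1 = grow a new twig
    (an edge to a brand-new node), 0 = step back toward the root."""
    edges, stack, next_label = [], [0], 1
    for bit in code:
        if bit == 1:
            edges.append((stack[-1], next_label))   # new edge at the current node
            stack.append(next_label)                # move out to the new node
            next_label += 1
        else:
            stack.pop()                             # walk back toward the root
    return edges

# the code of the tree in Figure 26:
print(planar_decode([1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0]))
```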
Since the number of possible planar codes is at most $2^{2(n-1)}=4^{n-1}$, we get that the number of unlabelled trees is at most this large. Summing up:
Theorem 10.6 The number $T_{n}$ of unlabelled trees with $n$ nodes satisfies
$$
\frac{n^{n-2}}{n !}<T_{n}<4^{n-1}
$$
The exact form of this lower bound does not matter much; we can conclude, just to have a statement simpler to remember, that the number of unlabelled trees on $n$ nodes is larger than $2^{n}$ if $n$ is large enough ( $n>30$ if you work it out). So we get, at least for $n \geq 30$, the following bounds that are easier to remember:
$$
2^{n} \leq T_{n} \leq 4^{n}
$$
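If you'd rather let the machine "work it out", exact integer arithmetic (a short Python check of our own) confirms that the threshold is indeed just above 30:

```python
from math import factorial

for n in (30, 31):
    # is n^(n-2) / n! larger than 2^n?  compare as integers, with no rounding
    print(n, n**(n - 2) > 2**n * factorial(n))   # prints: 30 False, then 31 True
```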
The planar code is far from optimal; every unlabelled tree has many different codes (depending on how we draw it in the plane and how we choose the root), and not every 0-1 sequence of length $2(n-1)$ is a code of a tree (for example, it must start with a 1 and have the same number of 0's as 1's). Still, the planar code is quite an efficient way of encoding unlabelled trees: it uses less than $2 n$ bits for trees with $n$ nodes. Since there are more than $2^{n}$ unlabelled trees (at least for $n>30$ ), we could not possibly get by with codes of length $n$ : there are just not enough of them.
Unlike for labelled trees, we don't know a simple formula for the number of unlabelled trees on $n$ nodes, and probably none exists. According to a difficult result of George Pólya, the number of unlabelled trees on $n$ nodes is asymptotically $a b^{n} / n^{5 / 2}$, where $a$ and $b$ are real numbers defined in a complicated way.
10.12 Does there exist an unlabelled tree with planar code (a) 1111111100000000; (b) 1010101010101010; (c) 1100011100?
## Finding the optimum
### Finding the best tree
A country with $n$ towns wants to construct a new telephone network to connect all towns. Of course, they don't have to build a separate line between every pair of towns; but they do need to build a connected network; in our terms, this means that the graph of direct connections must form a connected graph. Let's assume that they don't want to build a direct line between towns that can be reached otherwise (there may be good reasons for doing so, as we shall see later, but at the moment let's assume their only goal is to get a connected network). Thus they want to build a minimal connected graph with these nodes, i.e., a tree.
We know that no matter which tree they choose to build, they have to construct $n-1$ lines. Does this mean that it does not matter which tree they build? No, because lines are not equally easy to build. Some lines between towns may cost much more than some other lines, depending on how far the towns are, whether there are mountains or lakes between them etc. So the task is to find a spanning tree whose total cost (the sum of costs of its edges) is minimal.
How do we know what these costs are? Well, this is not something mathematics can tell you; it is the job of the engineers and economists to estimate the cost of each possible line in advance. So we just assume that these costs are given.
At this point, the task seems trivial (very easy) again: just compute the cost of each tree on these nodes, and select the tree with smallest cost.
We dispute the claim that this is easy. The number of trees to consider is enormous: We know (by Cayley's Theorem 10.5) that the number of labelled trees on $n$ nodes is $n^{n-2}$. So for 10 cities, we'd have to look at $10^{8}$ (hundred million) possible trees; for 20 cities, the number is astronomical (more than $10^{20}$ ). We have to find a better way to select an optimal tree; and that's the point where mathematics comes to the rescue.
There is this story about the pessimist and the optimist: they both get a box of assorted candies. The optimist always picks the best; the pessimist always eats the worst (to save the better candies for later). So the optimist always eats the best available candy, the pessimist always eats the worst available candy - and yet, they end up with eating exactly the same.
So let's see how the optimistic government would build the telephone network. They start with raising money; as soon as they have enough money to build a line (the cheapest line), they build it. Then they wait until they have enough money to build a second connection. Then they wait until they have enough money to build a third connection... It may happen that the third cheapest connection forms a triangle with the first two (say, three towns are close to each other). Then, of course, they skip this and raise enough money to build the fourth cheapest connection.
At any time, the optimistic government will wait until they have enough money to build a connection between two towns that are not yet connected by any path, and build this connection.
Finally, they will get a connected graph on the $n$ nodes representing the towns. The graph does not contain a cycle, since the edge of the cycle constructed last would connect two towns that are already accessible from each other through the other edges of the cycle. So, the graph they get is indeed a tree.

Figure 27: Failure of the greedy method. Construction costs are proportional to the distance. The first figure shows a cheapest (shortest) cycle through all 4 towns; the second shows the cycle obtained by the optimistic (greedy) method.
But is this network the cheapest possible? Could stinginess at the beginning backfire and force them to spend much more at the end? We'll prove below that our optimistic government has undeserved success: the tree they build is as cheap as possible.
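In modern terminology, the optimistic rule is called the greedy algorithm, and for this particular problem it is known as Kruskal's algorithm. A minimal Python sketch of the rule as stated above (the data format and the bookkeeping used to detect cycles are our own choices):

```python
def cheapest_tree(n, lines):
    """Scan the possible lines from the cheapest up; build a line unless it
    would close a cycle.  'lines' is a list of (cost, u, v) triples with
    towns labelled 0..n-1; returns the list of lines actually built."""
    parent = list(range(n))            # union-find bookkeeping: two towns are
                                       # already connected iff they share a root
    def root(v):
        while parent[v] != v:
            v = parent[v]
        return v

    tree = []
    for cost, u, v in sorted(lines):   # cheapest line first
        ru, rv = root(u), root(v)
        if ru != rv:                   # u, v not yet connected: build the line
            parent[ru] = rv
            tree.append((cost, u, v))
    return tree

lines = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(cheapest_tree(4, lines))   # (3, 0, 2) is skipped: it would close a cycle
```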
Before we jump into the proof, we should discuss why we said that the government's success was "undeserved". We show that if we modify the task a little, the same optimistic approach might lead to very bad results.
Let us assume that, for reasons of reliability, they require that between any two towns, there should be at least two paths with no edge in common (this guarantees that when a line is inoperative because of failure or maintenance, any two towns can still be connected). For this, $n-1$ lines are not enough ($n-1$ edges forming a connected graph must form a tree, but then deleting any edge, the rest will not be connected any more). But $n$ lines suffice: all we have to do is to draw a single cycle through all the towns. This leads to the following task:
Find a cycle with given $n$ towns as nodes, so that the total cost of constructing its edges is minimum.
(This problem is one of the most famous tasks in mathematical optimization: it is called the Travelling Salesman Problem. We'll say more about it later.)
Our optimistic government would do the following: build the cheapest line, then the second cheapest, then the third cheapest etc., skipping the construction of lines that are superfluous: it will not build a third edge out of a town that already has two, and will not build an edge completing a cycle unless this cycle connects all nodes. Eventually they get a cycle through all towns, but this is not necessarily the best! Figure 27 shows an example where the optimistic method (called "greedy" in this area of applied mathematics) gives a cycle that is quite a bit worse than optimal.
So the greedy method can be bad for the solution of a problem that is only slightly different from the problem of finding the cheapest tree. Thus the fact (to be proved below) that the optimistic governments builds the best tree is indeed undeserved luck.
So let us return to the solution of the problem of finding a tree with minimum cost, and prove that the optimistic method yields a cheapest tree. Let us call the tree obtained by the greedy method the greedy tree, and denote it by $F$. In other words, we want to prove that any other tree would cost at least as much as the greedy tree (and so no one could accuse the government of wasting money, and justify the accusation by exhibiting another tree that would have been cheaper).
Figure 28: The greedy tree is optimal
So let $G$ be any tree different from the greedy tree $F$. Let us imagine the process of constructing $F$, and the step when we first pick an edge that is not an edge of $G$. Let $e$ be this edge. If we add $e$ to $G$, we get a cycle $C$. This cycle is not fully contained in $F$, so it has an edge $f$ that is not an edge of $F$ (Figure 28). If we add the edge $e$ to $G$ and then delete $f$, we get a (third) tree $H$. (Why is $H$ a tree? Give an argument!)
We want to show that $H$ is at most as expensive as $G$. This clearly means that $e$ is at most as expensive as $f$. Suppose (by indirect argument) that $f$ is cheaper than $e$.
Now comes a crucial question: why didn't the optimistic government select $f$ instead of $e$ at this point in time? The only reason could be that $f$ was ruled out because it would have formed a cycle $C^{\prime}$ with the edges of $F$ already selected. But all these previously selected edges are edges of $G$, since we are inspecting the step when the first edge not in $G$ was added to $F$. Since $f$ itself is an edge of $G$, it follows that all edges of $C^{\prime}$ are edges of $G$, which is impossible since $G$ is a tree. This contradiction proves that $f$ cannot be cheaper than $e$ and hence $G$ cannot be cheaper than $H$.
So we replace $G$ by this tree $H$ that is not more expensive. In addition, the new tree $H$ has the advantage that it coincides with $F$ in more edges, since we deleted from $G$ an edge not in $F$ and added an edge in $F$. This implies that if $H$ is different from $F$ and we repeat the same argument again and again, we get trees that are not more expensive than $G$, and coincide with $F$ in more and more edges. Sooner or later we must end up with $F$ itself, proving that $F$ was no more expensive than $G$.
11.1 A pessimistic government could follow the following logic: if we are not careful, we may end up with having to build that extremely expensive connection through the mountain; so let us decide right away that building this connection is not an option, and mark it as "impossible". Similarly, let us find the second most expensive line and mark it "impossible", etc. Well, we cannot go on like this forever: we have to look at the graph formed by those edges that are still possible, and this "possibility graph" must stay connected. In other words, if deleting the most expensive edge that is still possible from the possibility graph would destroy the connectivity of this graph, then, like it or not, we have to build this line. So we build this line (the pessimistic government ends up building the most expensive line among those that are still possible). Then they go on to find the most expensive line among those that are still possible and not yet built, mark it impossible if this does not disconnect the possibility graph, etc.
Prove that the pessimistic government will have the same total cost as the optimistic.

11.2 In a more real-life government, optimists and pessimists win in unpredictable order. This means that sometimes they build the cheapest line that does not create a cycle with those lines already constructed; sometimes they mark the most expensive lines "impossible" until they get to a line that cannot be marked impossible without disconnecting the network, and then they build it. Prove that they still end up with the same cost.
11.3 If the seat of the government is town $r$, then quite likely the first line constructed will be the cheapest line out of $r$ (to some town $s$, say), then the cheapest line that connects either $r$ or $s$ to a new town etc. In general, there will be a connected graph of telephone lines constructed on a subset $S$ of the towns including the capital, and the next line will be the cheapest among all lines that connect $S$ to a node outside $S$. Prove that the lucky government still obtains a cheapest possible tree.
11.4 Formulate how the pessimistic government will construct a cycle through all towns. Show by an example that they don't always get the cheapest solution.
### Travelling Salesman
Let us return to the question of finding a cheapest possible cycle through all the given towns: we have $n$ towns (points) in the plane, and for any two of them we are given the "cost" of connecting them directly. We have to find a cycle with these nodes so that the cost of the cycle (the sum of the costs of its edges) is as small as possible.
This problem is one of the most important in the area of combinatorial optimization, the field dealing with finding the best possible design in various combinatorial situations, like finding the optimal tree discussed in the previous section. It is called the Travelling Salesman Problem, and it appears in many disguises. Its name comes from the version of the problem where a travelling salesman has to visit all towns in the region and then return to his home, and of course he wants to minimize his travel costs. It is clear that mathematically, this is the same problem. It is easy to imagine that one and the same mathematical problem appears in connection with designing optimal delivery routes for mail, optimal routes for garbage collection etc.
The following important question leads to the same mathematical problem, except on an entirely different scale. A machine has to drill a number of holes in a printed circuit board (this number could be in the thousands), and then return to the starting point. In this case, the important quantity is the time it takes to move the drilling head from one hole to the next, since the total time a given board has to spend on the machine determines the number of boards that can be processed in a day. So if we take the time needed to move the head from one hole to another as the "cost" of this edge, we need to find a cycle with the holes as nodes, and with minimum cost.
The Travelling Salesman Problem is much more difficult than the problem of finding the cheapest tree, and there is no algorithm to solve it that would be anywhere near as simple, elegant and efficient as the "optimistic" algorithm discussed in the previous section. There are methods that work quite well most of the time, but they are beyond the scope of this book.
But we want to show at least one simple algorithm that, even though it does not give the best solution, never loses more than a factor of 2. We describe this algorithm in the case when the cost of an edge is just its length, but it would not make any difference to consider any other measure (like time, or the price of a ticket), at least as long as the costs $c(i j)$ satisfy the triangle inequality:
$$
c(i j)+c(j k) \geq c(i k)
$$
(Air fares sometimes don't satisfy this inequality: it may be cheaper to fly from New York to Chicago to Philadelphia than to fly from New York to Philadelphia. But in this case, of course, we might consider the flight New York-Chicago-Philadelphia as one "edge", which does not count as a visit in Chicago.)
We begin by solving a problem we know how to solve: find a cheapest tree connecting up the given nodes. We can use any of the algorithms discussed in the previous section for this. So we find the cheapest tree $T$, with total cost $c$.
Now how does this tree help in finding a tour? One thing we can do is to walk around the tree just like we did when constructing the "planar code" of a tree in the proof of theorem 10.6 (see figure 26). This certainly gives a walk through each town, and returns to the starting point. The total cost of this walk is exactly twice the cost $c$ of $T$, since we used every edge twice.
Of course this walk may pass through some of the towns more than once. But this is just good for us: we can make shortcuts. If the walk takes us from $i$ to $j$ to $k$, and we have seen $j$ already, we can proceed directly from $i$ to $k$. The triangle inequality guarantees that we have only shortened our walk by doing so. Making such shortcuts as long as we can, we end up with a tour through all the towns whose cost is not more than twice the cost of the cheapest spanning tree (Figure 29).
But we want to relate the cost of the tour we obtained to the cost of the optimum tour, not to the cost of the optimum spanning tree! Well, this is easy now: the cost of a cheapest spanning tree is always less than the cost of the cheapest tour. Why? Because we can omit any edge of the cheapest tour to get a spanning tree. This is a very special kind of tree (a path), and as a spanning tree it may or may not be optimal. However, its cost is certainly not smaller than the cost of the cheapest spanning tree, while it is smaller than the cost of the optimal tour; this proves the assertion above.
To sum up, the cost of the tour we constructed is at most twice that of the cheapest spanning tree, which in turn is less than twice the cost of a cheapest tour.
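As a rough sketch of this construction (assuming points in the plane with Euclidean distances as costs, and a cheapest tree computed beforehand, say by the sketch in the previous section; all names here are ours):

```python
# Walk around the cheapest tree and shortcut past towns already seen.
from math import dist

def shortcut_tour(points, tree_edges):
    nbrs = {i: [] for i in range(len(points))}
    for i, j in tree_edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # Depth-first walk around the tree; recording only first visits is
    # exactly the "shortcut" step justified by the triangle inequality.
    tour, seen, stack = [], set(), [0]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(nbrs[v])
    return tour  # visit the towns in this order, then return to the start

def tour_cost(points, tour):
    return sum(dist(points[tour[k - 1]], points[tour[k]])
               for k in range(len(tour)))
```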
11.5 Is the tour in figure 29 shortest possible?
11.6 Prove that if all costs are proportional to distances, then a shortest tour cannot intersect itself.
Figure 29: The cheapest tree connecting 15 given towns, the walk around it, and the tour produced by shortcuts. Costs are proportional to distances.
## Matchings in graphs
### A dancing problem
At the prom, 300 students took part. They did not all know each other; in fact, every girl knew exactly 50 boys and every boy knew exactly 50 girls (we assume, as before, that acquaintance is mutual).
We claim that they can all dance simultaneously (so that only pairs who know each other dance with each other).
Since we are talking about acquaintances, it is natural to describe the situation by a graph (or at least, imagine the graph that describes it). So we draw 300 nodes, each representing a student, and connect two of them if they know each other. Actually, we can make the graph a little simpler: the fact that two boys, or two girls, know each other plays no role whatsoever in this problem: so we don't have to draw those edges that correspond to such acquaintances. We can then arrange the nodes, conveniently, so that the nodes representing boys are on the left, nodes representing girls are on the right; then every edge will connect a node from the left to a node from the right. We shall denote the set of nodes on the left by $A$, the set of nodes on the right by $B$.
This way we got a special kind of graph, called a bipartite graph. Figure 30 shows such a graph (of course, depicting a smaller party). The thick edges show one way to pair up people for dancing. Such a set of edges is called a perfect matching.
To be precise, let's give the definitions of these terms: a graph is bipartite if its nodes can be partitioned into two classes, say $A$ and $B$ so that every edge connects a node in $A$
Figure 30: A bipartite graph and a perfect matching in it
to a node in $B$. A perfect matching is a set of edges such that every node is incident with exactly one of them.
After this, we can formulate our problem in the language of graph theory as follows: we have a bipartite graph with 300 nodes, in which every node has degree 50. We want to prove that it contains a perfect matching.
As before, it is a good idea to generalize the assertion to any number of nodes. Let's be daring and guess that the numbers 300 and 50 play no role whatsoever. The only condition that matters is that all nodes have the same degree (and this is not 0). Thus we set out to prove the following theorem:
Theorem 12.1 If every node of a bipartite graph has the same degree $d \geq 1$, then it contains a perfect matching.
Before proving the theorem, it will be useful to solve some exercises, and discuss another problem.
12.1 It is obvious that for a bipartite graph to contain a perfect matching, it is necessary that $|A|=|B|$. Show that if every node has the same degree, then this is indeed so.
12.2 Show by examples that the conditions formulated in the theorem cannot be dropped:
(a) A non-bipartite graph in which every node has the same degree need not contain a perfect matching.
(b) A bipartite graph in which every node has positive degree (but not all the same) need not contain a perfect matching.
12.3 Prove Theorem 12.1 for $d=1$ and $d=2$.
Figure 31: Six tribes and six tortoises on an island
### Another matching problem
An island is inhabited by six tribes. They are on good terms and split up the island between them, so that each tribe has a hunting territory of 100 square miles. The whole island has an area of 600 square miles.
The tribes decide that they all should choose new totems. They decide that each tribe should pick one of the six species of tortoise that live on the island. Of course, they want to pick different totems, and the totem of each tribe should occur somewhere on its territory.
It is given that the areas where the different species of tortoises live don't overlap, and they have the same area, 100 square miles (so it also follows that some tortoise lives everywhere on the island). Of course, the way the tortoises divide up the island may be entirely different from the way the tribes do (Figure 31).
We want to prove that such a selection of totems is always possible.
To see the significance of the conditions, let's assume that we did not stipulate that the area of each tortoise species is the same. Then some species could occupy more, say, 200 square miles. But then it could happen that two of the tribes live on exactly these 200 square miles, and so their only possible choice for a totem would be one and the same species.
Let's try to illustrate our problem by a graph. We can represent each tribe by a node, and also each species of tortoise by a node. Let us connect a tribe-node to a species-node if the species occurs somewhere on the territory of the tribe (we could also say that the tribe occurs on the territory of the species, just in case the tortoises want to pick totems too). Drawing the tribe-nodes on the right, and the species-nodes on the left, makes it clear that
Figure 32: The graph of tribes and tortoises
we get a bipartite graph (Figure 32). And what is it that we want to prove? It is that this graph has a perfect matching!
So this is very similar to the problem discussed (but not solved!) in the previous section: we want to prove that a certain bipartite graph has a perfect matching. Theorem 12.1 says that for this conclusion it suffices to know that every node has the same degree. But this is too strong a condition; it is not at all fulfilled in our example (tribe A has only two tortoises to choose from, while tribe $\mathrm{D}$ has four).
So what property of this graph should guarantee that a perfect matching exists? Turning this question around: what would exclude a perfect matching?
For example, it would be bad if a tribe could not find any tortoise on its own territory. In the graph, this would correspond to a node with degree 0. Now this is not a danger, since we know that tortoises occur everywhere on the island.
It would also be bad (as this has come up already) if two tribes could only choose one and the same tortoise. But then this tortoise would have an area of at least 200 square miles, which is not the case. A somewhat more subtle sort of trouble would arise if three tribes had only two tortoises on their combined territory. But this, too, is impossible: the two species of tortoises would cover an area of at least 300 square miles, so one of them would have to cover more than 100. More generally, we can see that the combined territory of any $k$ tribes holds at least $k$ species of tortoises. In terms of the graph, this means that for any $k$ nodes on the left, there are at least $k$ nodes on the right connected to at least one of them. We'll see in the next section that this is all we need to observe about this graph.
### The main theorem
Now we state and prove a fundamental theorem about perfect matchings. This will complete the solution of the problem about tribes and tortoises, and (with some additional work) of the problem about dancing at the prom.
Figure 33: The good graph is also good from right to left
Theorem 12.2 (The Marriage Theorem) A bipartite graph has a perfect matching if and only if $|A|=|B|$ and, for any subset of (say) $k$ nodes of $A$, there are at least $k$ nodes in $B$ that are connected to at least one of them.
Before proving this theorem, let us discuss one more question. If we interchange "left" and "right", perfect matchings remain perfect matchings. But what happens to the condition stated in the theorem? It is easy to see that it remains valid (as it should). To see this, we have to argue that if we pick any set $S$ of $k$ nodes in $B$, then they are connected to at least $k$ nodes in $A$. Let $n=|A|=|B|$ and let us color the nodes in $A$ connected to nodes in $S$ black, the other nodes white (Figure 33). Then the white nodes are connected to at most $n-k$ nodes (since they are not connected to any node in $S$). Since the condition holds "from left to right", the number of white nodes is at most $n-k$. But then the number of black nodes is at least $k$, which proves that the condition also holds "from right to left".
Now we can turn to the proof of theorem 12.2. We shall have to refer to the condition given in the theorem so often that it will be convenient to call graphs satisfying this condition "good" (just for the duration of this proof). Thus a bipartite graph is "good" if it has the same number of nodes left and right, and any $k$ "left" nodes are connected to at least $k$ "right" nodes.
It is obvious that every graph with a perfect matching is "good", so what we need to prove is the converse: every "good" graph contains a perfect matching. For a graph on just two nodes, being "good" means that these two nodes are connected. Thus for a graph to have a perfect matching means that it can be partitioned into "good" graphs with 2 nodes. (To partition a graph means that we divide the nodes into classes, and only keep an edge between two nodes if they are in the same class.)
Now our plan is to partition our graph into two "good" parts, then partition each of these into two "good" parts etc., until we get "good" parts with 2 nodes. Then the edges
Figure 34: Goodness lost when two nodes are removed
that remain form a perfect matching. To carry out this plan, it suffices to prove that
if a "good" bipartite graph has more than 2 nodes, then it can be partitioned into two good bipartite graphs.
Let us try a very simple partition first: select a node $a \in A$ and $b \in B$ that are connected by an edge; let these two nodes be the first part, and the remaining nodes the other. There is no problem with the first part: it is "good". But the second part may not be good: it can have some set $S$ of $k$ nodes on the left connected to fewer than $k$ nodes on the right (Figure 34). In the original graph, these $k$ nodes were connected to at least $k$ nodes in $B$; this can only happen if they were connected to exactly $k$ nodes, one of which was the node $b$. Let $T$ denote the set of neighbors of $S$ in the original graph. What is important to remember is that $|S|=|T|$.
Now we try another way of partitioning the graph: we take $S \cup T$ (together with the edges between them) as one part and the rest of the nodes, as the other. (This rest is not empty: the node $a$ belongs to it, for example.)
Let's argue that both these parts are "good". Consider the first part, and take any subset of, say, $j$ nodes in $S$ (the left hand side of the first graph). Since the original graph was good, these nodes are connected to at least $j$ nodes, which all lie in $T$ by the definition of $T$.
For the second graph, it follows similarly that it is good, if we interchange "left" and "right". This completes the proof.
We still have to prove Theorem 12.1. This is now quite easy and is left to the reader as the following exercise.
12.4 Prove that if in a bipartite graph every node has the same degree $d \neq 0$, then the bipartite graph is "good" (and hence contains a perfect matching; this proves theorem 12.1).
### How to find a perfect matching?
We have a condition for the existence of a perfect matching in a graph that is necessary and sufficient. Does this condition settle the issue once and for all? To be more precise: suppose that somebody gives us a bipartite graph; what is a good way to decide whether or not it contains a perfect matching? And how can we find a perfect matching if there is one?
We may assume that $|A|=|B|$ (where, as before, $A$ is the set of nodes on the left and $B$ is the set of nodes on the right). This is easy to check, and if it fails then it is obvious that no perfect matching exists and we have nothing else to do.
One thing we can try is to look at all subsets of the edges, and see if any of these is a perfect matching. It is easy enough to do so; but there are terribly many subsets to check! Say, in our introductory example, we have 300 nodes, so $|A|=|B|=150$; every node has degree 50, so the number of edges is $150 \cdot 50=7500$; the number of subsets of a set of this size is $2^{7500}>10^{2257}$, a number that is more than astronomical...
We can do a little bit better if instead of checking all subsets of the edges, we look at all possible ways to pair up elements of $A$ with elements of $B$, and check if any of these pairings matches only nodes that are connected to each other by an edge. Now the number of ways to pair up the nodes is "only" $150 ! \approx 10^{263}$. Still hopeless.
Can we use Theorem 12.2? To check that the necessary and sufficient condition for the existence of a perfect matching is satisfied, we have to look at every subset $S$ of $A$, and see whether the number of its neighbors in $B$ is at least as large as $S$ itself. Since the set $A$ has $2^{150} \approx 10^{45}$ subsets, this takes a much smaller number of cases to check than either of the previous possibilities, but it is still astronomical!
So theorem 12.2 does not really help too much in deciding whether a given graph has a perfect matching. We have seen that it does help in proving that certain properties of a graph imply that the graph has a perfect matching. We'll come back to this theorem later and discuss its significance. Right now, we have to find some other way to deal with our problem.
Let us introduce one more expression: by a matching we mean a set of edges that have no endpoint in common. A perfect matching is the special case when, in addition, the edges cover all the nodes. But a matching can be much smaller: the empty set, or any edge in itself, is a matching.
Let's try to construct a perfect matching in our graph by starting with the empty set and building up a matching edge by edge. So we select two nodes that are connected, and mark the edge between them; then we select two other nodes that are connected, and mark the edge between them, etc. We can do this until no two unmatched nodes are connected by an edge. The edges we have marked form a matching $M$. If we are lucky, then $M$ is a perfect matching, and we have nothing else to do. But what do we do if $M$ is not perfect? Can we conclude that the graph has no perfect matching at all? No, we cannot; it may happen that the graph has a perfect matching, but we made some unlucky choices when selecting the edges of $M$.
12.5 Show by an example that it may happen that a bipartite graph $G$ has a perfect matching but, if we are unlucky, the matching $M$ constructed above is not perfect.
12.6 Prove that if $G$ has a perfect matching, then $M$ matches up at least half of the nodes.
Figure 35: An augmenting path in a bipartite graph
So suppose that we have constructed a matching $M$ that is not perfect. We have to try to increase its size by "backtracking", i.e., by deleting some of its edges and replacing them by more edges. But how to find the edges we want to replace?
The trick is the following. We look for a path $P$ in $G$ of the following type: $P$ starts and ends at nodes $u$ and $v$ that are unmatched by $M$; and every second edge of $P$ belongs to $M$ (Figure 35). Such a path is called an augmenting path. It is clear that an augmenting path $P$ contains an odd number of edges, and in fact the number of its edges not in $M$ is one larger than the number of its edges in $M$.
If we find an augmenting path $P$, we can delete those edges of $P$ that are in $M$ and replace them by those edges of $P$ that are not in $M$. It is clear that this results in a matching $M^{\prime}$ that is larger than $M$ by one edge. (The fact that $M^{\prime}$ is a matching follows from the observation that the remaining edges of $M$ cannot contain any node of $P$ : the two endpoints of $P$ were supposed to be unmatched, while the interior nodes of $P$ were matched by edges of $M$ that we deleted.) So we can repeat this until we either get a perfect matching, or a matching $M$ for which no augmenting path exists.
So we have two questions to answer: how to find an augmenting path, if it exists? and if it does not exist, does this mean that there is no perfect matching at all? It will turn out that an answer to the first question will also imply the (affirmative) answer to the second.
Let $U$ be the set of unmatched nodes in $A$ and let $W$ be the set of unmatched nodes in $B$. As we noted, any augmenting path must have an odd number of edges and hence
Figure 36: Reaching nodes by almost augmenting paths. Only edges on these paths, and of $M$, are shown.
it must connect a node in $U$ to a node in $W$. Let us try to find such an augmenting path starting from some node in $U$. Let's say that a path $Q$ is almost augmenting if it starts at a node in $U$, ends at a node in $A$, and every second edge of it belongs to $M$. An almost augmenting path must have an even number of edges, and must end with an edge of $M$.
What we want to do is to find the set of nodes in $A$ that can be reached on an almost augmenting path. Let's agree that we consider a node in $U$ as an almost augmenting path in itself (of length 0); then we know that every node in $U$ has this property. Starting with $S=U$, we build up a set $S$ gradually. At any stage, the set $S$ will consist of nodes we already know are reachable by some almost augmenting path. We denote by $T$ the set of nodes in $B$ that are matched with nodes in $S$ (Figure 36). Since the nodes of $U$ have nothing matched with them and they are all in $S$, we have
$$
|S|=|T|+|U| .
$$
We look for an edge that connects a node $s \in S$ to some node $r \in B$ that is not in $T$. Let $Q$ be an almost augmenting path starting at some node $u \in U$ and ending at $s$. Now there are two cases to consider:
- If $r$ is unmatched (which means that it belongs to $W$ ) then appending the edge $s r$ to $Q$ we get an augmenting path $P$. So we can increase the size of $M$ (and forget about $S$ and $T$ ).
- If $r$ is matched with a node $q \in A$, then we can append the edges $s r$ and $r q$ to $Q$, to get an almost augmenting path from $u$ to $q$. So we can add $q$ to $S$.
So if we find an edge connecting a node in $S$ to a node not in $T$, we can either increase the size of $M$ or increase the set $S$ (and leave $M$ as it was). Sooner or later we must encounter a situation where either $M$ is a perfect matching (and we are done), or $M$ is not perfect but no edge connects $S$ to any node outside $T$.
So what to do in this case? Nothing! If this occurs, we can conclude that there is no perfect matching at all. In fact, all neighbors of the set $S$ are in $T$, and $|T|=|S|-|U|<|S|$. We know that this implies that there is no perfect matching at all in the graph. Figure 37 shows how this algorithm finds a matching in the bipartite graph that is a subgraph of the "grid".
To sum up, we do the following. At any point in time, we'll have a matching $M$ and a set $S$ of nodes in $A$ that we know can be reached on almost augmenting paths. If we find an edge connecting $S$ to a node not matched with any node in $S$, we can either increase the size of $M$ or the set $S$, and repeat. If no such edge exists, either $M$ is perfect or no perfect matching exists at all.
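Here is a compact sketch of this method in Python, in the usual recursive formulation (the search for an almost augmenting path is hidden inside the recursion; all names are ours):

```python
# Bipartite matching by augmenting paths. adj[a] lists the B-side
# neighbors of node a in A; `match` maps each matched B-node to its partner.

def maximum_matching(adj):
    match = {}

    def augment(a, visited):
        # Try to find an augmenting path starting at the A-side node a.
        for b in adj[a]:
            if b not in visited:
                visited.add(b)
                # b is unmatched, or b's partner can be re-matched elsewhere:
                if b not in match or augment(match[b], visited):
                    match[b] = a
                    return True
        return False

    for a in adj:
        augment(a, set())
    return match        # perfect iff every node of A and B is matched
```

If some call to `augment` fails, the right-side nodes it visited, together with their partners and the starting node, play the role of the sets $T$ and $S$ above, and so certify that no perfect matching exists.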
Remark. In this chapter, we restricted our attention to matchings in bipartite graphs. One can, of course, define matchings in general (nonbipartite) graphs. It turns out that both the necessary and sufficient condition given in theorem 12.2 and the algorithm described in this section can be extended to non-bipartite graphs. This requires, however, quite a bit more involved methods, which lie beyond the scope of this book.
12.7 Follow how the algorithm works on the graph in Figure 38.
12.8 Show how the description of the algorithm above contains a new proof of theorem 12.2.
### Hamiltonian cycles
A Hamiltonian cycle is a cycle that contains all nodes of a graph. We have met a similar notion before: travelling salesman tours. Indeed, travelling salesman tours can be viewed as Hamiltonian cycles in the complete graph on the given set of nodes.
Hamiltonian cycles are quite similar to matchings: in a perfect matching, every node belongs to exactly one edge; in a Hamiltonian cycle, every node belongs to exactly two edges. But much less is known about them than about matchings. For example, no efficient way is known to check whether a given graph has a Hamiltonian cycle (even if it is bipartite), and no useful necessary and sufficient condition for the existence of a Hamiltonian cycle is known. If you solve exercise 12.9, you'll get a feeling for the difficulty of the Hamiltonian cycle problem.
12.9 Decide whether or not the graphs in Figure 39 have a Hamiltonian cycle.
Figure 37: A graph for trying out the algorithm
Figure 38: A graph for trying out the algorithm
Figure 39: Do these graphs have a Hamilton cycle?
Figure 40: Two-coloring the regions formed by a set of circles
## Graph coloring
### Coloring regions: an easy case
We draw some circles on the plane (say, $n$ in number). These divide the plane into a number of regions. Figure 40 shows such a set of circles, and also an "alternating" coloring of the regions with two colors; it gives a nice pattern. Now our question is: can we always color these regions this way? We'll show that the answer is yes; to state this more exactly:
Theorem 13.1 The regions formed by $n$ circles in the plane can be colored with red and blue so that any two regions that share a common boundary arc get different colors.
(If two regions have only one or two boundary points in common, then they may get the same color.)
Let us first see why a direct approach does not work. We could start with coloring the outer region, say blue; then we have to color its neighbors red. Could it happen that two neighbors are at the same time neighbors of each other? Perhaps drawing some pictures and then arguing carefully about them, you can design a proof that this cannot happen. But then we have to color the neighbors of the neighbors blue again, and we would have to prove that no two of these are neighbors of each other. This could get quite complicated! And then we would have to repeat this for the neighbors of the neighbors of the neighbors...
We should find a better way to prove this and, fortunately, there is a better way! We prove the assertion by induction on $n$, the number of circles. If $n=1$, then we only get two regions, and we can color one of them red, the other one blue. So let $n>1$. Select any of the circles, say $C$, and forget about it for the time being. The regions formed by the remaining $n-1$ circles can be colored with red and blue so that regions that share a common boundary arc get different colors (this is just the induction hypothesis).
Now we take back the remaining circle, and change the coloring as follows: outside $C$, we leave the coloring as it was; inside $C$, we interchange red and blue (Figure 41).
It is easy to see that the coloring we obtained satisfies what we wanted. In fact, look at any small piece of arc of any of the circles. If this arc is outside $C$, then the two regions on its two sides were differently colored, and their colors did not change. If the arc is inside $C$, then again, the two regions on both of its sides were differently colored, and even though their colors were switched, they are still different. Finally, the arc could be on $C$ itself. Then
Figure 41: Adding a new circle and recoloring
Figure 42: The proof of Euler's Theorem
the two regions on both sides of the arc were one and the same before putting $C$ back, and so they had the same color. Now, one of them is inside $C$, and the recoloring switched its color; the other is outside, and its color did not change. So after the recoloring, their colors will be different.
Thus we proved that the regions formed by $n$ circles can be colored with two colors, provided the regions formed by $n-1$ circles can be colored with 2 colors. By the Principle of Induction, this proves the theorem.
13.1 Assume that the color of the outer region is blue. Then we can describe what the color of a particular region $R$ is, without having to color the whole picture, as follows:
— if $R$ lies inside an even number of circles, it will be colored blue;
— if $R$ lies inside an odd number of circles, it will be colored red.
Prove this assertion.
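Without giving away the proof, the rule itself is easy to test numerically. A small sketch in Python (the circle data and all names are ours) that colors a point by counting the circles containing it:

```python
# Color a point by the parity of the number of circles containing it.
from math import hypot

def color(point, circles):            # circles: (center_x, center_y, radius)
    inside = sum(hypot(point[0] - x, point[1] - y) < r
                 for (x, y, r) in circles)
    return "blue" if inside % 2 == 0 else "red"

circles = [(0, 0, 2), (1, 0, 2)]
print(color((-1.5, 0), circles))      # inside one circle only: red
print(color((0.5, 0), circles))       # inside both circles: blue
```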
13.2 (a) Prove that the regions, into which $n$ straight lines divide the plane, are colorable with 2 colors.
(b) How could you describe what the color of a given region is?
Figure 43: 4-coloring the countries
Figure 44: Proof of the 5-color theorem
Figure 45: 4-coloring the countries and 3-coloring the edges
Figure 46: A graph and its dual
Figure 47: Two non-3-colorable graphs
## A Connecticut class in King Arthur's court
In the court of King Arthur there dwelt 150 knights and 150 ladies-in-waiting. The king decided to marry them off, but the trouble was that some pairs hated each other so much that they would not even speak, let alone marry! King Arthur tried several times to pair them off but each time he ran into conflicts. So he summoned Merlin the Wizard and ordered him to find a pairing in which every pair was willing to marry. Now Merlin had supernatural powers and he saw immediately that none of the $150!$ possible pairings was feasible, and this he told the king. But Merlin was not only a great wizard, but a suspicious character as well, and King Arthur did not quite trust him. "Find a pairing or I shall sentence you to be imprisoned in a cave forever!" said Arthur.
Fortunately for Merlin, he could use his supernatural powers to browse forthcoming scientific literature, and he found several papers in the early 20th century that gave the reason why such a pairing could not exist. He went back to the King when all the knights and ladies were present, and asked a certain 56 ladies to stand on one side of the king and 95 knights on the other side, and asked: "Is any one of you ladies, willing to marry any of these knights?", and when all said "No!", Merlin said: "O King, how can you command me to find a husband for each of these 56 ladies among the remaining 55 knights?" So the king, whose courtly education did include the pigeon-hole principle, saw that in this case Merlin had spoken the truth and he graciously dismissed him.
Some time elapsed and the king noticed that at the dinners served for the 150 knights at the famous round table, neighbors often quarrelled and even fought. Arthur found this bad for the digestion and so once again he summoned Merlin and ordered him to find a way to seat the 150 knights around the table so that each of them should sit between two friends. Again, using his supernatural powers Merlin saw immediately that none of the $150!$ seatings would do, and this he reported to the king. Again, the king bade him find one or explain why it was impossible. "Oh I wish there were some simple reason I could give to you! With some luck there could be a knight having only one friend, and so you too could see immediately that what you demand from me is impossible. But alas, there is no such simple reason here, and I cannot explain to you mortals why no such seating exists, unless you are ready to spend the rest of your life listening to my arguments!" The king was naturally unwilling to do that and so Merlin has lived imprisoned in a cave ever since.
Figure 48: A bigraph $G$ with a perfect matching and a bigraph $H$ without one
(A severe loss for applied mathematics!)
The moral of this tale is that there are properties of graphs which, when they hold, are easily proven to hold. If a graph has a perfect matching, or a Hamilton cycle, this can be "proved" easily by exhibiting one. If a bipartite graph does not have a perfect matching, then this can be "proved" by exhibiting a subset $X$ of one color class which has fewer than $|X|$ neighbors in the other. The reader (and King Arthur!) are directed to Figure 48 in which graph $G$ has a perfect matching (indicated by the heavy lines), but graph $H$ does not. To see the latter, consider the four black points and their neighbors.
Most graph-theoretic properties which interest us have this logical structure. Such a property is called (in the jargon of computer science) an NP-property (if you really want to know, NP is an abbreviation of Nondeterministic Polynomial Time, but it would be difficult to explain where this highly technical phrase comes from). The two problems that Merlin had to face - the existence of a perfect matching and the existence of a Hamilton cycle - are clearly NP-properties. But NP-properties also appear quite frequently in other parts of mathematics. A very important NP-property of natural numbers is their compositeness: if a natural number is composite then this can be exhibited easily by showing a decomposition $n=ab$ $(a, b>1)$.
The remarks we have made so far explain how Merlin can remain free if he is lucky and the task assigned to him by King Arthur has a solution. For instance, suppose he could find a good way to seat the knights for dinner. He could then convince King Arthur that his seating plan was "good" by asking if there was anybody sitting beside an enemy of his. This shows that the property of the corresponding "friendship graph" that it contains a Hamilton cycle is an NP-property. But how could he survive Arthur's wrath in the case of the marriage problem and not in the case of the seating problem, when these problems do not have solutions? What distinguishes the non-existence of a Hamilton cycle in a graph from the non-existence of a perfect matching in a bigraph? From our tale, we hope the answer is clear: the non-existence of a perfect matching in a bigraph is also an NP-property (this is a main implication of Frobenius' Theorem), while the non-existence of a Hamilton cycle in a graph is not! (To be precise, no proof of this latter fact is known, but there is strong evidence in favor of it.)
So for certain NP-properties the negation of the property is again an NP-property. A theorem asserting the equivalence of an NP-property with the negation of another NP-property is called a good characterization. There are famous good characterizations throughout graph theory and elsewhere.
Many NP-properties are even better. Facing the problem of marrying off his knights and ladies, Arthur himself (say, after attending this class) could have decided whether or not it was solvable: he could run the algorithm described in section 12.4. A lot of work, but probably doable with the help of quite ordinary people, without using the supernatural talents of Merlin. Properties that can be decided efficiently are called properties in the class P (here P stands for Polynomial Time, an exact but quite technical version of the phrase "efficiently"). Many other simple properties of graphs discussed in this book also belong to this class: connectivity, or the existence of a cycle, for example.
## A glimpse of cryptography
### Classical cryptography
Ever since writing was invented, people have been interested not only in using it to communicate with their partners, but also in trying to conceal the content of their message from their adversaries. This leads to cryptography (or cryptology), the science of secret communication.
The basic situation is that one party, say King Arthur, wants to send a message to King Bela. There is, however, a danger that the evil Caesar Caligula intercepts the message and learns things that he is not supposed to know about. The message, understandable even for Caligula, is called the plain text. To protect its content, King Arthur encrypts his message. When King Bela receives it, he must decrypt it in order to be able to read it. For the Kings to be able to encrypt and decrypt the message, they must know something that the Caesar does not know: this information is the key.
Many cryptosystems have been used in history; most of them, in fact, turn out to be insecure, especially if the adversary can use powerful computing tools to break them.
Perhaps the simplest method is the substitution code: we replace each letter of the alphabet by another letter. The key is the table that contains, for each letter, the letter to be substituted for it. While a message encrypted this way looks totally scrambled, substitution codes are in fact easy to break. Solving exercise 16.1 will make it clear how the lengths and positions of the words can be used to figure out the original meaning of letters, if the breaking into words is preserved (i.e., "Space" is not replaced by another character). But even if the splitting into words is hidden, an analysis of the frequency of the various letters gives enough information to break the substitution code.
### One-time pads
There is another simple, frequently used method, which is much more secure: the use of "one-time pads". This method is very safe; it was used e.g. during World War II for communication between the American President and the British Prime Minister. Its disadvantage is that it requires a very long key, which can only be used once.
A one-time pad is a randomly generated string of 0's and 1's. Say, here is one:
$$
11000111000010000110010100100100101100110010101100001110110000010
$$
Both King Arthur and King Bela have this sequence (it was sent well in advance by a messenger). Now King Arthur wants to send the following message to King Bela:
ATTACK MONDAY
First, he has to convert it to 0's and 1's. It is not clear that medieval kings had the knowledge to do so, but the reader should be able to think of various ways: using ASCII codes, or Unicodes of the letters, for example. But we want to keep things simple, so we just number the letters from 1 to 26, and then write down the binary representation of the numbers, putting 0's in front so that we get a string of length 5 for each letter. Thus we have "00001" for A, "00010" for B, etc. We use "00000" for "Space". The above message becomes:
$$
00001101001010000001000110101100000011010111101110001000000111001
$$
This might look cryptic enough, but Caligula (or rather one of the excellent Greek scientists he keeps in his court) could easily figure out what it stands for. To encode it, Arthur adds the one-time pad to the message bit-by-bit. To the first bit of the message (which is 0) he adds the first bit of the pad (1) and writes down the first bit of the encoded message: $0+1=1$. He computes the second, third, etc. bits similarly: $0+1=1,0+0=0,0+0=0$, $1+0=1,1+1=0, \ldots$ (What is this $1+1=0$? Isn't $1+1=2$? Or, if we want to use the binary number system, $1+1=10$? Well, all that happens is that we ignore the "carry", and just write down the last bit. We could also say that the computation is done modulo 2.) Another way of saying what King Arthur does is the following: if the $k$-th bit of the pad is 1, he flips the $k$-th bit of the text; else, he leaves it as it was.
So Arthur computes the encoded message:
$$
11001011101011000111010010001000101111100101000010000110110111011
$$
He sends this to King Bela, who, looking at the one-time pad, can easily flip back the appropriate bits and recover the original message.
But Caligula (and even his excellent scientists) does not know the one-time pad, so he does not know which bits were flipped, and so he is helpless. The message is safe.
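The whole scheme fits in a few lines of code. Here is a toy version in Python, using the 5-bits-per-letter encoding described above (the helper names are ours):

```python
# A toy one-time pad: A=1, ..., Z=26, space=0, five bits per character.

def to_bits(text):
    code = lambda ch: 0 if ch == " " else ord(ch) - ord("A") + 1
    return [int(b) for ch in text for b in format(code(ch), "05b")]

def xor_bits(bits, pad):
    return [b ^ p for b, p in zip(bits, pad)]   # flip a bit where the pad is 1

pad = [int(b) for b in
       "11000111000010000110010100100100101100110010101100001110110000010"]
cipher = xor_bits(to_bits("ATTACK MONDAY"), pad)
```

Decryption is the very same operation: flipping a bit twice restores it, so `xor_bits(cipher, pad)` gives back the plain text.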
It can be expensive to make sure that Sender and Receiver both have such a common key; but note that the key can be sent at a safer time and by a completely different method than the message; moreover, it may be possible to agree on a key even without actually passing it.
### How to save the last move in chess?
Modern cryptography started in the late 1970's with the idea that it is not only lack of information that can protect our message against an unauthorized eavesdropper, but also the computational complexity of processing it. The idea is illustrated by the following simple example.
Alice and Bob are playing chess over the phone. They want to interrupt the game for the night; how can they do it so that the person to move should not get the improper advantage of being able to think about his move whole night? At a tournament, the last move is not made on the board, only written down, put in an envelope, and deposited with the referee. But now the two players have no referee, no envelope, no contact other than the telephone line. The player making the last move (say, Alice) has to send Bob some message. The next morning (or whenever they continue the game) she has to give some additional information, some "key", which allows Bob to reconstruct the move. Bob should not be able to reconstruct Alice's move without the key; Alice should not be able to change her mind overnight and modify her move.
Surely this seems to be impossible! If she gives enough information the first time to uniquely determine her move, Bob will know the move too soon; if the information given the first time allows several moves, then she can think about it overnight, figure out the best among these, and give the remaining information, the "key" accordingly.
If we measure information in the sense of classical information theory, then there is no way out of this dilemma. But complexity comes to our help: it is not enough to communicate information, it must also be processed.
So here is a solution to the problem, using elementary number theory! (Many other schemes can be designed.) Alice and Bob agree to encode every move as a 4-digit number (say, '11' means 'K', '6' means 'f', and '3' means itself, so '1163' means 'Kf3'). So far, this is just notation.
Next, Alice extends the four digits describing her move to a prime number $p=1163 \ldots$ with 200 digits. She also generates another prime $q$ with 201 digits and computes the product $N=p q$ (this would take rather long on paper, but is trivial using a personal computer). The result is a number with 400 or 401 digits; she sends this number to Bob.
Next morning, she sends both prime factors $p$ and $q$ to Bob. He reconstructs Alice's move from the first four digits of the smaller prime. To make sure that Alice was not cheating, he should check that $p$ and $q$ are primes and that their product is $N$.
Let us argue that this protocol does the job.
First, Alice cannot change her mind overnight. This is because the number $N$ contains all the information about her move: this is encoded as the first four digits of the smaller prime factor of $N$. So Alice commits herself to the move when sending $N$.
But exactly because the number $N$ contains all the information about Alice's move, Bob seems to have the advantage, and he indeed would have if he had unlimited time or unbelievably powerful computers. What he has to do is to find the prime factors of the number $N$. But since $N$ has 400 digits (or more), this is a hopelessly difficult task with current technology.
Can Alice cheat by sending a different pair $\left(p^{\prime}, q^{\prime}\right)$ of primes the next morning? No, because Bob can easily compute the product $p^{\prime} q^{\prime}$, and check that this is indeed the number $N$ that was sent the previous night. (Note the role of the uniqueness of prime factorization, Theorem 8.1.)
All the information about Alice's move is encoded in the first 4 digits of the smaller prime factor $p$. We could say that the rest of $p$ and the other prime factor $q$ serve as a "deposit box": they hide this information from Bob, and can be opened only if the appropriate key (the factorization of $N$ ) is available. The crucial ingredient of this scheme is complexity: the computational difficulty to find the factorization of an integer.
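For the curious, here is how Alice's evening and Bob's morning steps might look in Python. We use `sympy.isprime` for primality testing; everything else (the names, the random filling of digits) is our own sketch of the scheme, not a prescribed implementation:

```python
import random
from sympy import isprime

def commit(move_digits, digits=200):
    # Evening: extend the 4-digit move with random digits until prime.
    while True:
        p = int(move_digits + "".join(random.choice("0123456789")
                                      for _ in range(digits - 4)))
        if isprime(p):
            break
    while True:                         # q: any prime with one more digit
        q = random.randrange(10**digits, 10**(digits + 1))
        if isprime(q):
            break
    return p, q, p * q                  # Alice sends N = p * q right away

def verify(p, q, N):
    # Morning: Bob checks the factors; the move is str(min(p, q))[:4].
    return isprime(p) and isprime(q) and p * q == N
```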
With the spread of electronic communication in business, many solutions of traditional correspondence and trade must be replaced by electronic versions. We have seen an electronic "deposit box" above. Other schemes (similar or more involved) can be found for electronic passwords, authorization, authentication, signatures, watermarking, etc. These schemes are extremely important in computer security, cryptography, automatic teller machines, and many other fields. The protocols are often based on simple number theory; in the next section we discuss (a very simplified version of) one of them.
### How to verify a password without learning it?
In a bank, a cash machine works by name and password. This system is safe as long as the password is kept secret. But there is one weak point in security: the computer of the bank must store the password, and the administrator of this computer may learn it and later misuse it.
Complexity theory provides a scheme where the bank can verify that the customer does indeed know the password, without storing the password itself! At first glance this looks just as impossible as the problem of saving the last chess move was. And the solution (at least the one we discuss here) uses the same kind of construction as our telephone chess example.
Suppose that the password is a 100-digit prime number $p$ (this is, of course, too long for everyday use, but it illustrates the idea best). When the customer chooses the password, he chooses another prime $q$ with 101 digits, forms the product $N=p q$ of the two primes, and tells the bank the number $N$. When the teller is used, the customer tells his name and the password $p$. The computer of the bank checks whether or not $p$ is a divisor of $N$; if so, it accepts $p$ as a proper password. The division of a 200 digit number by a 100 digit number is a trivial task for a computer.
Let us assume that the system administrator learns the number $N$ stored along with the files of our customer. To use this to impersonate the customer, he has to find a 100-digit number that is a divisor of $N$; but this is essentially the same problem as finding the prime factorization of $N$, and this is hopelessly difficult. So, even though all the necessary information is contained in the number $N$, the computational complexity of the factoring problem protects the password of the customer!
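The bank's side of this protocol is literally a single divisibility test. A minimal sketch in Python (the names are ours):

```python
# The bank stores only N; checking a submitted password p is one division.
def password_ok(N, p):
    return 10**99 <= p < 10**100 and N % p == 0   # a 100-digit divisor of N
```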
### How to find these primes?
In our two simple examples of "modern cryptography", as well as in almost all the others, one needs large prime numbers. We know that there are arbitrarily large primes (Theorem 8.3), but are there any with 200 digits, starting with 1163 (or any other 4 given digits)? Maple found (in a few seconds on a laptop!) the smallest such prime number:
1163000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000371
The smallest 200-digit integer starting with 1163 is $1163 \cdot 10^{196}$. This is of course not a prime, but above we found a prime very close by. There must be zillions of such primes! In fact, a computation very similar to what we did in section 8.4 shows that the number of primes Alice can choose from is about $1.95 \cdot 10^{193}$.
This is a lot of possibilities, but how to find one? It would not be good to use the prime above (the smallest eligible): Bob could guess this and thereby find out Alice's move. What Alice can do is to fill in the missing 196 digits randomly, and then test whether the number she obtains is a prime. If not, she can throw it away and try again. As we computed in section 8.4, one in every 460 200-digit numbers is a prime, so on the average in about 460 trials she gets a prime. This looks like a lot of trials, but of course she uses a computer; here is one we computed for you with this method (in a few seconds again):

1163146712876555763279909704559660690828365476006668873814489354662
4743604198911046804111038868958805745715572480009569639174033385458
418593535488622323782317577559864739652701127177097278389465414589
So we see that in the "envelope" scheme above, both computational facts mentioned in section 8.7 play a crucial role: it is easy to test whether a number is a prime (and thereby it is easy to compute the encryption), but it is difficult to find the prime factors of a composite number (and so it is difficult to break the cryptosystem).
16.1 For the following message, the Kings used a substitution code. Caligula intercepted the message and quite easily broke it. Can you do it too?
U GXUAY LS ZXMEKW AMG TGGTIY HMD TAMGXSD LSSY, FEG GXSA LUGX HEKK HMDIS. FSKT
16.2 At one time, Arthur made the mistake of using the one-time pad shifted: the first bit of the plain text he encoded using the second bit of the pad, the second bit of the plain text he encoded using the third bit of the pad etc. He noticed his error after he sent the message off. Being afraid that Bela will not understand his message, he encoded it again (now correctly) using the same one-time pad, and sent it to Bela by another courier, explaining what happened.
Caligula intercepted both messages, and was able to recover the plain text. How?
16.3 The Kings were running low on one-time pads, and so Bela had to use the same pad to encode his reply as they used for Arthur's message. Caligula intercepted both messages, and was able to reconstruct the plain texts. Can you explain how?
16.4 Motivated by the one-time pad method, Alice suggests the following protocol for saving the last move in their chess game: in the evening, she encrypts her move (perhaps with other text added, to make it reasonably long) using a randomly generated 0-1 sequence as the key (just like in the one-time pad method). The next morning she sends the key to Bob, so that he can decrypt the message. Should Bob accept this suggestion?
16.5 Alice modifies her suggestion as follows: instead of the random 0-1 sequence, she offers to use a random, but meaningful text as the key. For whom would this be advantageous?
### Public key cryptography
Cryptographic systems used in real life are more complex than those described in the previous section - but they are based on similar principles. In this section we sketch the math behind the most commonly used system, the RSA code (named after its inventors, Rivest, Shamir and Adleman).
The protocol. Alice generates two 100-digit prime numbers, $p$ and $q$, and computes their product $m=pq$. Then she generates two 200-digit numbers $d$ and $e$ such that $(p-1)(q-1)$ is a divisor of $ed-1$. (We'll come back to the question of how this is done.)
The numbers $m$ and $e$ she publishes on her web site, or in the phone book, but the prime factors $p$ and $q$ and the number $d$ remain her closely guarded secrets. The number $d$ is called her private key, and the number $e$, her public key. (The numbers $p$ and $q$ she may even forget; they will not be needed to operate the system, just to set it up.)
Suppose first that Bob wants to send a message to Alice. He writes the message as a number $x$ (we have seen before how to do so). This number $x$ must be a non-negative integer less than $m$ (if the message is longer, he can just break it up into smaller chunks).
The next step is the trickiest: Bob computes the remainder of $x^{e}$ modulo $m$. Since both $x$ and $e$ are huge integers (200 digits), the number $x^{e}$ has more than $10^{200}$ digits; we could not even write it down, let alone compute it! Luckily, we don't have to compute this number, only its remainder when dividing by $m$. This is still a large number, but at least it can be written down in 2-3 lines. We'll return to computing it in the exercises.
So let $r$ be this remainder; this is sent to Alice. When she receives it, she can decrypt it using her private key $d$ by doing essentially the same procedure as Bob did: she computes the remainder of $r^{d}$ modulo $m$. And - a black magic of number theory, until you see the explanations - this remainder is just the plain text $x$.
What if Alice wants to send a message to Bob? Then Bob also needs to go through the trouble of generating his private and public keys. He has to pick two primes $p^{\prime}$ and $q^{\prime}$, compute their product $m^{\prime}$, select two positive integers $d^{\prime}$ and $e^{\prime}$ so that $\left(p^{\prime}-1\right)\left(q^{\prime}-1\right)$ is a divisor of $e^{\prime} d^{\prime}-1$, and finally publish $m^{\prime}$ and $e^{\prime}$. Then Alice can send him a secure message.
The black math magic behind the protocol. The key fact from mathematics we use is Fermat's Theorem 8.6. Recall that $x$ is the plain text (written as an integer) and the encrypted message $r$ is the remainder of $x^{e}$ modulo $m$. So we can write
$$
r=x^{e}-k m
$$
with an appropriate integer $k$ (the value of $k$ is irrelevant for us). To decrypt, Alice raises this to the $d$-th power, to get
$$
r^{d}=\left(x^{e}-k m\right)^{d}=x^{e d}+k^{\prime} m
$$
where $k^{\prime}$ is again some integer. To be more precise, she computes the remainder of this modulo $m$, which is the same as the remainder of $x^{e d}$ modulo $m$. We want to show that this is just $x$. Since $0 \leq x<m$, it suffices to argue that $x^{e d}-x$ is divisible by $m$. Since $m=p q$ is the product of two distinct primes, it suffices to prove that $x^{e d}-x$ is divisible by each of $p$ and $q$. Let us consider divisibility by $p$, for example. The main property of $e$ and $d$ is that $e d-1$ is divisible by $(p-1)(q-1)$, and hence also by $p-1$. This means that we can write $e d=(p-1) l+1$, where $l$ is a positive integer. We have
$$
x^{ed}-x=x\left(x^{(p-1)l}-1\right) .
$$
Here $x^{(p-1) l}-1$ is divisible by $x^{p-1}-1$ (see exercise 8.1), and so $x\left(x^{(p-1) l}-1\right)$ is divisible by $x\left(x^{p-1}-1\right)=x^{p}-x$, which in turn is divisible by $p$ by Fermat's "Little" Theorem.
How to do all this computation? We already discussed how to find primes, and Alice can follow the method described in section 8.7.
The next issue is the computation of the two keys $e$ and $d$. One of them, say $e$, Alice can choose at random from the range $1, \ldots, (p-1)(q-1)-1$. She has to check that it is relatively prime to $(p-1)(q-1)$; this can be done efficiently with the help of the Euclidean Algorithm discussed in section 8.6. If the number she chose is not relatively prime to $(p-1)(q-1)$, she just throws it out and tries another one. This is similar to the method we used for finding a prime, and it is not hard to see that, on average, she'll find a good number in fewer trials than she needed to find a prime.
When the Euclidean Algorithm finally succeeds, then, as in section 8.6, it also gives two integers $u$ and $v$ so that
$$
e u+(p-1)(q-1) v=1 .
$$
So $e u-1$ is divisible by $(p-1)(q-1)$. Let $d$ denote the remainder of $u$ modulo $(p-1)(q-1)$; then $e d-1$ is also divisible by $(p-1)(q-1)$, and so we have found a suitable key $d$.
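To make this concrete, here is a minimal Python sketch of the whole key-generation step for toy-sized primes. The function name and the sample primes are our own choices, and Python's built-in three-argument `pow` (with `pow(e, -1, phi)`, available since Python 3.8, performing exactly the extended Euclidean computation described above) stands in for the modular arithmetic:

```python
import random
from math import gcd

def make_keys(p, q):
    """Generate an RSA key pair from two (toy-sized) primes p and q."""
    m = p * q
    phi = (p - 1) * (q - 1)            # this is (p-1)(q-1)
    while True:                        # pick e at random, retry until coprime
        e = random.randrange(2, phi)
        if gcd(e, phi) == 1:
            break
    d = pow(e, -1, phi)                # d with e*d = 1 mod (p-1)(q-1)
    return m, e, d                     # (m, e) is published; d stays private

m, e, d = make_keys(47, 59)
x = 1234 % m                           # a plain text below m
r = pow(x, e, m)                       # Bob encrypts
assert pow(r, d, m) == x               # Alice decrypts and recovers x
```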
Finally, we have to address the question: how to compute the remainder of $x^{e}$ modulo $m$, when just to write down $x^{e}$ would fill the universe? The answer is easy: after each operation, we can replace the number we get by its remainder modulo $m$. This way we never get numbers with more than 400 digits, which is manageable.
But there is another problem: $x^{e}$ denotes $x$ multiplied by itself $e \approx 10^{200}$ times; even if we carry out 1 billion multiplications every second, we will not finish before the end of the universe!
The first hint that something can be done comes if we think of the special case when $e=2^{k}$ is a power of 2. In this case, we don't have to multiply by $x$ a total of $2^{k}-1$ times; instead, we can repeatedly square $x$ just $k$ times: we get $x^{2},\left(x^{2}\right)^{2}=x^{4},\left(x^{4}\right)^{2}=x^{8}$, etc.
If $e$ is not a power of 2, but say the sum of two powers of $2: e=2^{k}+2^{l}$, then we can separately compute $x^{2^{k}}$ and $x^{2^{l}}$ by this repeated squaring, and then multiply these two numbers (not forgetting that after each squaring and multiplication, we replace the number by its remainder modulo $m$). This works similarly if $e$ is the sum of a small number of powers of 2.
But every number is the sum of a small number of powers of 2: just think of its representation in binary. The binary representation $101100101_{2}$ actually means that the number is $2^{8}+2^{6}+2^{5}+2^{2}+2^{0}$. A 200 digit number is the sum of at most 665 powers of 2. We can easily compute (with a computer, of course) $x^{2^{k}}$ for every $k \leq 664$ by repeated squaring, and then the product of these numbers.
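Here is a sketch of this repeated-squaring method in Python; it processes the binary digits of $e$ from the low end, a common variant of the scheme in the exercise below, and reduces modulo $m$ after every multiplication (the function name is ours):

```python
def mod_exp(x, e, m):
    """Compute x^e mod m by repeated squaring."""
    result = 1
    base = x % m
    while e > 0:
        if e & 1:                      # the current binary digit of e is 1
            result = (result * base) % m
        base = (base * base) % m       # square, then reduce mod m immediately
        e >>= 1                        # move on to the next binary digit
    return result

assert mod_exp(7, 560, 561) == pow(7, 560, 561)
```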
16.6 Let $e=e_{0} e_{1} \ldots e_{k}$ be the expression of $e$ in binary $\left(e_{i}=0\right.$ or $1, e_{0}$ is always 1$)$. Let $x_{0}=x$, and for $j=1, \ldots, k$, let
$$
x_{j}= \begin{cases}x_{j-1}^{2}, & \text { if } e_{j}=0 \\ x_{j-1}^{2} x, & \text { if } e_{j}=1\end{cases}
$$
Show that $x_{k}=x^{e}$.
Signatures, etc. There are many other nice things this system can do. For example, suppose that Alice gets a message from Bob as described above. How can she know that it indeed came from Bob? Just because it is signed "Bob", it could have come from anybody. But Bob can do the following. First, he encrypts the message with his private key, then adds "Bob", and encrypts it again with Alice's public key. When Alice receives it, she can decrypt it with her private key. She'll see a still encrypted message, signed "Bob". She can cut away the signature, look up Bob's public key in the phonebook, and use it to decrypt the message.
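A sketch of this signing trick, under two simplifications that are ours, not the text's: we drop the literal "Bob" label, and we assume Bob's modulus is smaller than Alice's, so that the signed value is a valid plain text for her key.

```python
def sign_and_send(x, bob_private, alice_public):
    d_b, m_b = bob_private
    e_a, m_a = alice_public
    signed = pow(x, d_b, m_b)          # encrypt with Bob's private key
    return pow(signed, e_a, m_a)       # then with Alice's public key

def receive_and_verify(c, alice_private, bob_public):
    d_a, m_a = alice_private
    e_b, m_b = bob_public
    signed = pow(c, d_a, m_a)          # Alice's private key removes the outer layer
    return pow(signed, e_b, m_b)       # Bob's public key recovers x, so only he could have signed it
```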
One can use similar tricks to implement many other electronic gadgets, using RSA.
Security. The security of the RSA protocol is a difficult issue, and since its inception in 1977, thousands of researchers have investigated it. The fact that no attack has been generally successful is a good sign; but unfortunately no exact proof of its security has been found (and it appears that current mathematics lacks the tools to provide such a proof in the foreseeable future).
We can give, however, at least some arguments that support its security. Suppose that you intercept the message of Bob, and want to decipher it. You know the remainder $r$ (this is the intercepted message). You also know Alice's public key $e$, and the number $m$. One could think of two lines of attack: either you can figure out her private key $d$ and then decrypt the message just as she does, or you could somehow more directly find the integer $x$, knowing the remainder of $x^{e}$ modulo $m$.
Unfortunately there is no theorem stating that either of these is impossible in less than astronomical time. But one can justify the security of the system with the following fact: if one can break the RSA system, then one can use the same algorithm to find the prime factors of $m$ (see exercise ??). Since the factorization problem has been studied by so many and no efficient method has been found, this makes the security of RSA quite probable.
16.7 Suppose that Bob develops an algorithm that can break RSA in the first, more direct way described above: knowing Alice's public key $m$ and $e$, he can find her private key $d$.
(a) Show that he can use this to find the number $(p-1)(q-1)$;
(b) from this, he can find the prime factorization $m=p q$.
The real world. How practical could such a complicated system be? It seems that only a few mathematicians could ever use it. But in fact you have probably used it yourself hundreds of times! RSA is used in SSL (Secure Socket Layer), which in turn is used in https (secure http). Any time you visit a "secure site" on the internet, your computer generates a public and private key for you, and uses them to make sure that your credit card number and other personal data remain secret. It does not have to involve you in this at all - all you notice is that the connection is a bit slower. In practice, the two 100 digit primes are not considered sufficiently secure. Commercial applications use more than twice this length; military applications, more than 4 times.
While the hairy computations of raising the plain text $x$ to an exponent which itself has hundreds of digits are surprisingly efficient, it would still be too slow to encrypt and decrypt each message this way. A way out is to send, as a first message, the key to a simpler system (think of a one-time pad, although in practice one uses a more efficient system like DES, the Data Encryption Standard). This key is then used for a few minutes to encode the messages going back and forth, then thrown away. The idea is that in a short session, the number of encoded messages is not enough for an eavesdropper to break the system.
## Answers to exercises
## Let us count!
### A party
2.1. $7 !$.
2.1. Carl: $15 \cdot 2^{3}=120$. Diane: $15 \cdot 3 !=90$.
### Sets
2.2. (a) all houses in a street; (b) an Olympic team; (c) class of '99; (d) all trees in a forest; (e) the set of rational numbers; (f) a circle in the plane.
2.2. (a) soldiers; (b) people; (c) books; (d) animals.
2.2. (a) all cards in a deck; (b) all spades in a deck; (c) a deck of Swiss cards; (d) nonnegative integers with at most two digits; (e) non-negative integers with exactly two digits; (f) inhabitants of Budapest, Hungary.
2.2. Alice, and the set whose only element is the number 1 .
2.2. $6 \cdot 5 / 2=15$.
2.2. No.
2.2. $\emptyset,\{0\},\{1\},\{3\},\{0,1\},\{0,3\},\{1,3\},\{0,1,3\}$. 8 subsets.
2.2. women; people at the party; students of Yale.
2.2. $\mathbb{Z}$ or $\mathbb{Z}_{+}$. The smallest is $\{0,1,3,4,5\}$.
2.2. (a) $\{a, b, c, d, e\}$. (b) The union operation is associative. (c) The union of any set of sets consists of those elements which are elements of at least one of the sets.
2.2. The union of a set of sets $\left\{A_{1}, A_{2}, \ldots, A_{k}\right\}$ is the smallest set containing each $A_{i}$ as a subset.
2.2. $6,9,10,14$.
2.2. The cardinality of the union is at least the larger of $n$ and $m$ and at most $n+m$.
2.2. (a) $\{1,3\}$; (b) $\emptyset$; (c) $\{2\}$.
2.2. The cardinality of the intersection is at most the minimum of $n$ and $m$.
2.2. The common elements of $A$ and $B$ are counted twice on both sides; the elements in either $A$ or $B$ but not both are counted once on both sides.
2.2 (a) The set of negative even integers and positive odd integers. (b) $B$.
### The number of subsets
2.3. Powers of 2 .
2.3. $2^{n-1}$.
2.3. (a) $2 \cdot 10^{n}-1$; (b) $2 \cdot\left(10^{n}-10^{n-1}\right)$.
2.3. 101 .
2.3. $1+\lfloor n \lg 2\rfloor$.
### Sequences
2.4. The trees have 9 and 8 leaves, respectively.
2.4. $5 \cdot 4 \cdot 3=60$.
2.4. $3^{13}$.
2.4. $6 \cdot 6=36$.
2.4. $12^{20}$.
2.4. $\left(2^{20}\right)^{12}$.
### Permutations
2.5. $n !$.
2.5. (b) $7 \cdot 5 \cdot 3=105$. In general, $(2 n-1) \cdot(2 n-3) \cdot \ldots \cdot 3 \cdot 1$.
2.5. (a) $n(n-1) / 2$ is larger for $n \geq 4$. (b) $2^{n}$ is larger for $n \geq 5$.
2.5. (a) This is true for $n \geq 10$. (b) $2^{n} / n^{2}>n$ for $n \geq 10$.
## Induction
### The sum of odd numbers
3.1. One of $n$ and $n+1$ is even, so the product $n(n+1)$ is even. By induction: true for $n=1$; if $n>1$ then $n(n+1)=(n-1) n+2 n$, and $n(n-1)$ is even by the induction hypothesis, $2 n$ is even, and the sum of two even numbers is even.
3.1. True for $n=1$. If $n>1$ then
$$
1+2+\ldots+n=(1+2+\ldots+(n-1))+n=\frac{(n-1) n}{2}+n=\frac{n(n+1)}{2} .
$$
3.1. The youngest person will count $n-1$ handshakes. The 7 -th oldest will count 6 handshakes. So they count $1+2+\ldots+(n-1)$ handshakes. We also know that there are $n(n-1) / 2$ handshakes.
3.1. Compute the area of the rectangle in two different ways.
3.1. By induction on $n$ true for $n=1$. For $n>1$, we have
$$
1 \cdot 2+2 \cdot 3+3 \cdot 4+\ldots+(n-1) \cdot n=\frac{(n-2) \cdot(n-1) \cdot n}{3}+(n-1) \cdot n=\frac{(n-1) \cdot n \cdot(n+1)}{3} .
$$
If $n$ is odd then we have to add the middle term separately.
3.1. If $n$ is even, then $1+(2 n-1)=3+(2 n-3)=\ldots=(n-1)+(n+1)=2 n$, so the sum is $\frac{n}{2}(2 n)=n^{2}$. Again, if $n$ is odd the solution is similar, but we have to add the middle term separately.
3.1. By induction. True for $n=1$. If $n>1$ then
$$
1^{2}+2^{2}+\ldots+n^{2}=\left(1^{2}+2^{2}+\ldots+(n-1)^{2}\right)+n^{2}=\frac{(n-1) n(2 n-1)}{6}+n^{2}=\frac{n(n+1)(2 n+1)}{6} .
$$
3.1. By induction. True for $n=1$. If $n>1$ then
$$
2^{0}+2^{1}+2^{2}+\ldots+2^{n-1}=\left(2^{0}+2^{1}+\ldots+2^{n-2}\right)+2^{n-1}=\left(2^{n-1}-1\right)+2^{n-1}=2^{n}-1 .
$$
### Subset counting revisited
3.2. (Strings) True for $n=1$. If $n>1$ then to get a string of length $n$ we can start with a string of length $n-1$ (this can be chosen in $k^{n-1}$ ways by the induction hypothesis) and append an element (this can be chosen in $k$ ways). So we get $k^{n-1} \cdot k=k^{n}$.
(Permutations) True for $n=1$. To seat $n$ people, we can start with seating the oldest (this can be done in $n$ ways) and then seating the rest (this can be done in $(n-1)!$ ways by the induction hypothesis). We get $n \cdot(n-1) !=n !$.
3.2. True if $n=1$. Let $n>1$. The number of handshakes between $n$ people is the number of handshakes by the oldest person $(n-1)$ plus the number of handshakes between the remaining $n-1$ people (which is $(n-1)(n-2) / 2$ by the induction hypothesis). We get $(n-1)+(n-1)(n-2) / 2=n(n-1) / 2$.
13.1. By induction. True if $n=1$. Let $n>1$. Assume the description of the coloring is valid for the first $n-1$ circles. If we add the $n$-th, the color and the parity don't change outside this circle; both change inside the circle. So the description remains valid.
13.1. (a) By induction. True for 1 line. Adding a line, we recolor all regions on one side.
(b) One possible description: designate a direction as "up". Let $P$ be any point not on any of the lines. Start a semiline "up" from $P$. Count how many of the given lines intersect it. Color according to the parity of this intersection number.
3.2. We did not check the base case $n=1$.
3.2. The proof uses that there are at least four lines. But we only checked $n=1,2$ as base cases. The assertion is false for $n=3$ and also for every value after that.
### Counting regions
3.3. True for $n=1$. Let $n>1$. Delete any line. The remaining lines divide the plane into $(n-1) n / 2+1$ regions by the induction hypothesis. The last line cuts $n$ of these into two. So we get
$$
\frac{(n-1) n}{2}+1+n=\frac{n(n+1)}{2}+1
$$
## Counting subsets
### The number of ordered subsets
4.1. (I don't think you could really draw the whole tree; it has almost $10^{20}$ leaves. It has 11 levels of nodes.)
4.1. (a) $100 !$. (b) $90 !$. (c) $100 ! / 90 !=100 \cdot 99 \cdot \ldots \cdot 91$.
4.1. $\frac{n !}{(n-k) !}=n(n-1) \cdots(n-k+1)$.
4.1. In one case, repetition is not allowed, while in the other case, it is allowed.
### The number of subsets of a given size
4.2. Handshakes; lottery; hands in bridge.
4.2. See next chapter.
4.2 .
$$
\frac{n(n-1)}{2}+\frac{(n+1) n}{2}=n^{2}
$$
4.2. Solution of (b) ((a) is a special case). The identity is
$$
\left(\begin{array}{l}
n \\
k
\end{array}\right)=\left(\begin{array}{c}
n-1 \\
k
\end{array}\right)+\left(\begin{array}{l}
n-1 \\
k-1
\end{array}\right) .
$$
The right hand side counts $k$-subsets of an $n$-element set by separately counting those that do not contain a given element and those that do.
4.2. The number of $k$-element subsets is the same as the number of $(n-k)$-element subsets, since the complement of a $k$-subset is an $(n-k)$-subset and vice versa.
4.2. Both sides count all subsets of an $n$-element set.
4.2. Both sides count the number of ways to divide an $a$-element set into three sets with $a-b, b-c$, and $c$ elements.
### The Binomial Theorem
4.3 .
$$
\begin{aligned}
& (x+y)^{n}=(x+y)^{n-1}(x+y) \\
& =\left(x^{n-1}+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right) x^{n-2} y+\ldots\right. \\
& \left.+\left(\begin{array}{l}
n-1 \\
n-2
\end{array}\right) x y^{n-2}+\left(\begin{array}{l}
n-1 \\
n-1
\end{array}\right) y^{n-1}\right)(x+y) \\
& =x^{n-1}(x+y)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right) x^{n-2} y(x+y)+\ldots \\
& +\left(\begin{array}{l}
n-1 \\
n-2
\end{array}\right) x y^{n-2}(x+y)+\left(\begin{array}{l}
n-1 \\
n-1
\end{array}\right) y^{n-1}(x+y) \\
& =\left(x^{n}+x^{n-1} y\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)\left(x^{n-1} y+x^{n-2} y^{2}\right)+\ldots+\left(\begin{array}{c}
n-1 \\
n-2
\end{array}\right)\left(x^{2} y^{n-2}+x y^{n-1}\right) \\
& +\left(\begin{array}{l}
n-1 \\
n-1
\end{array}\right)\left(x y^{n-1}+y^{n}\right) \\
& =x^{n}+\left(1+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)\right) x^{n-1} y+\ldots \\
& +\left(\left(\begin{array}{l}
n-1 \\
n-2
\end{array}\right)+\left(\begin{array}{l}
n-1 \\
n-1
\end{array}\right)\right) x y^{n-1}+y^{n} \\
& =x^{n}+\left(\begin{array}{l}
n \\
1
\end{array}\right) x^{n-1} y+\left(\begin{array}{l}
n \\
2
\end{array}\right) x^{n-2} y^{2}+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right) x y^{n-1}+y^{n} .
\end{aligned}
$$
4.3. (a) $(1-1)^{n}=0$. (b) By $\left(\begin{array}{l}n \\ k\end{array}\right)=\left(\begin{array}{c}n \\ n-k\end{array}\right)$.
4.3. Both sides count all subsets of an $n$-element set.
### Distributing presents
4.4 .
$$
\left(\begin{array}{c}
n \\
n_{1}
\end{array}\right) \cdot\left(\begin{array}{c}
n-n_{1} \\
n_{2}
\end{array}\right) \cdot \ldots \cdot\left(\begin{array}{c}
n \\
n_{k}
\end{array}\right)
$$
$$
=\frac{n !}{n_{1} !\left(n-n_{1}\right) !} \frac{\left(n-n_{1}\right) !}{n_{2} !\left(n-n_{1}-n_{2}\right) !} \cdots \frac{\left(n-n_{1}-\ldots-n_{k-2}\right) !}{n_{k-1} !\left(n-n_{1}-\ldots-n_{k-1}\right) !}=\frac{n !}{n_{1} ! n_{2} ! \ldots n_{k} !},
$$
since $n-n_{1}-\ldots-n_{k-1}=n_{k}$.
4.4. (a) $n$ ! (distribute positions instead of presents). (b) $n(n-1) \ldots(n-k+1)$ (distribute as "presents" the first $k$ positions at the competition and $n-k$ participation certificates). (c) $\left(\begin{array}{l}n \\ n_{1}\end{array}\right)$. (d) Chess seating in Diane's sense (distribute players to boards).
4.4. (a) $[n=8] 8$ !. (b) 8 ! $\cdot\left(\begin{array}{l}8 \\ 4\end{array}\right)$. (c) $(8 !)^{2}$.
### Anagrams
4.5. $13 ! / 2^{3}$.
4.5. Most: any word with 13 different letters; least: any word with 13 identical letters.
4.5. (a) $26^{6}$.
(b) $\left(\begin{array}{c}26 \\ 4\end{array}\right)$ ways to select the four letters that occur; for each selection, $\left(\begin{array}{l}4 \\ 2\end{array}\right)$ ways to select the two letters that occur twice; for each selection, we distribute 6 positions to these letters $(2$ of them get 2 positions), this gives $\frac{6 !}{2 ! 2 !}$ ways. Thus we get $\left(\begin{array}{c}26 \\ 4\end{array}\right)\left(\begin{array}{c}4 \\ 2\end{array}\right) \frac{6 !}{2 ! 2 !}$. (There are many other ways to arrive at the same number!)
(c) Number of ways to partition 6 into the sum of positive integers:
$$
\begin{gathered}
6=6=5+1=4+2=4+1+1=3+3=3+2+1=3+1+1+1=2+2+2 \\
=2+2+1+1=2+1+1+1+1=1+1+1+1+1+1
\end{gathered}
$$
which makes 11 possibilities.
(d) This is too difficult in this form. What I meant is the following: how many words of length $n$ are there such that none is an anagram of another? This means distributing $n$ pennies to 26 children, and so the answer is $\left(\begin{array}{c}n+25 \\ 25\end{array}\right)$.
### Distributing money
4.6. $\left(\begin{array}{c}n-k-1 \\ k-1\end{array}\right)$.
4.6. $\left(\begin{array}{c}n+k-1 \\ \ell+k-1\end{array}\right)$.
4.6. $\left(\begin{array}{c}k p+k-1 \\ k-1\end{array}\right)$.
## Pascal's Triangle
5. This is the same as $\left(\begin{array}{c}n \\ k\end{array}\right)=\left(\begin{array}{c}n \\ n-k\end{array}\right)$.
6. $\left(\begin{array}{l}n \\ 0\end{array}\right)=\left(\begin{array}{l}n \\ n\end{array}\right)=1$ (e.g. by the general formula for the binomial coefficients).
### Identities in the Pascal Triangle
5.1 .
$$
\begin{gathered}
1+\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)+\left(\begin{array}{l}
n \\
n
\end{array}\right) \\
=1+\left(\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)\right)+\left(\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
2
\end{array}\right)\right)+\ldots+\left(\left(\begin{array}{c}
n-1 \\
n-2
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
n-1
\end{array}\right)\right)+1 \\
=2\left(\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
n-1 \\
n-2
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
n-1
\end{array}\right)\right)=2 \cdot 2^{n-1}=2^{n} .
\end{gathered}
$$
5.1. The coefficient of $x^{n} y^{n}$ in
$$
\left(\left(\begin{array}{l}
n \\
0
\end{array}\right) x^{n}+\left(\begin{array}{c}
n \\
1
\end{array}\right) x^{n-1} y+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right) x y^{n-1}+\left(\begin{array}{l}
n \\
n
\end{array}\right) y^{n}\right)^{2}
$$
is
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)\left(\begin{array}{l}
n \\
n
\end{array}\right)+\left(\begin{array}{l}
n \\
1
\end{array}\right)\left(\begin{array}{c}
n \\
n-1
\end{array}\right)+\ldots+\left(\begin{array}{c}
n \\
n-1
\end{array}\right)\left(\begin{array}{l}
n \\
1
\end{array}\right)+\left(\begin{array}{l}
n \\
n
\end{array}\right)\left(\begin{array}{l}
n \\
0
\end{array}\right) .
$$
5.1. The left hand side counts all $k$-element subsets of an $(n+m)$-element set by distinguishing them according to how many elements they pick up from the first $n$.
5.1. If the largest element is $j$, the rest can be chosen $\left(\begin{array}{c}j-1 \\ k\end{array}\right)$ ways.
### A bird's eye view at the Pascal Triangle
5.2. $n=3 k+2$.
5.2. $k=\lfloor(n-\sqrt{2 n}) / 2\rfloor$. (This is not easy: one looks at the difference of differences:
$$
\left(\left(\begin{array}{c}
n \\
k+1
\end{array}\right)-\left(\begin{array}{l}
n \\
k
\end{array}\right)\right)-\left(\left(\begin{array}{l}
n \\
k
\end{array}\right)-\left(\begin{array}{c}
n \\
k-1
\end{array}\right)\right)
$$
and determines the value of $k$ where it turns negative.)
5.2. (a) $2^{n}$ is a sum with positive terms in which $\left(\begin{array}{l}n \\ 4\end{array}\right)$ is only one of the terms.
(b) Assume that $n>200$. Then
$$
\frac{2^{n}}{n^{3}} \geq \frac{\left(\begin{array}{l}
n \\
4
\end{array}\right)}{n^{3}}=\frac{(n-1)(n-2)(n-3)}{24 n^{2}}>\frac{(n / 2)^{3}}{24 n^{2}}=\frac{n}{192}>1 .
$$
5.2 .
$$
\left(\begin{array}{c}
n \\
n / 2
\end{array}\right)=\frac{n !}{((n / 2) !)^{2}} \sim \frac{\left(\frac{n}{e}\right)^{n} \sqrt{2 \pi n}}{\left(\left(\frac{n}{2 e}\right)^{n / 2} \sqrt{\pi n}\right)^{2}}=\frac{2^{n} \sqrt{2}}{\sqrt{\pi n}}
$$
5.2. Using
$$
\left(\begin{array}{c}
2 m \\
m
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t
\end{array}\right)>\frac{t^{2}}{m},
$$
it is enough to find a $t>0$ for which $t^{2} / m \geq 1 / c$. Solving for $t$, we get that $t=\lceil\sqrt{m / c}\rceil$ is a good choice.
5. (a) See (c).
(b) We prove by induction on $s$ that for $0 \leq s \leq m-t$,
$$
\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)>\frac{t^{2}}{m}
$$
For $s=0$ this is just the theorem we already know. Let $s>0$; then
$$
\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right)=\frac{m-s+1}{m+s}\left(\begin{array}{c}
2 m \\
m-s+1
\end{array}\right)
$$
and
$$
\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)=\frac{m-s-t+1}{m+s+t}\left(\begin{array}{c}
2 m \\
m-s-t+1
\end{array}\right)
$$
Hence
$$
\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)=\frac{(m-s+1)(m+s+t)}{(m+s)(m-s-t+1)}\left(\left(\begin{array}{c}
2 m \\
m-s+1
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s+1
\end{array}\right)\right) .
$$
Since
$$
\frac{(m-s+1)(m+s+t)}{(m+s)(m-s-t+1)}>1
$$
it follows that
$$
\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)>\left(\begin{array}{c}
2 m \\
m-s+1
\end{array}\right) /\left(\begin{array}{c}
2 m \\
m-t-s+1
\end{array}\right)>\frac{t^{2}}{m}
$$
by the induction hypothesis.
(c)
$$
\begin{gathered}
\frac{\left(\begin{array}{c}
2 m \\
m-s
\end{array}\right)}{\left(\begin{array}{c}
2 m \\
m-t-s
\end{array}\right)}=\frac{\frac{(2 m) !}{(m-s) !(m+s) !}}{\frac{(2 m) !}{(m-t-s) !(m+t+s) !}}=\frac{(m+t+s)(m+t+s-1) \ldots(m+s+1)}{(m-s)(m-s-1) \ldots(m-s-t+1)} \\
=\frac{m+t+s}{m-s} \cdot \frac{m+t+s-1}{m-s-1} \cdot \ldots \cdot \frac{m+s+1}{m-t-s+1} \\
=\left(1+\frac{t+2 s}{m-s}\right) \cdot\left(1+\frac{t+2 s}{m-s-1}\right) \cdot \ldots \cdot\left(1+\frac{t}{m-s-t+1}\right) \\
\geq\left(1+\frac{t+2 s}{m}\right)^{t}>1+t \frac{t+2 s}{m} .
\end{gathered}
$$
## Combinatorial probability
### Events and probabilities
7.1. The union of two events $A$ and $B$ corresponds to " $A$ or $B$ ".
7.1. It is the sum of some of the probabilities of outcomes, and even if we add all of them we get just 1.
7.1. $\mathrm{P}(E)=\frac{1}{2}, \mathrm{P}(T)=\frac{1}{3}$.
7.1. The same probabilities $\mathrm{P}(s)$ are added up on both sides.
7.1. Every probability $\mathrm{P}(s)$ with $s \in A \cap B$ is added twice to both sides; every probability $\mathrm{P}(s)$ with $s \in A \cup B$ but $s \notin A \cap B$ is added once to both sides.
### Independent repetition of an experiment
7.2. The pairs $(E, T),(O, T),(L, T)$ are independent. The pair $(E, O)$ is exclusive.
7.2. $\mathrm{P}(\emptyset \cap A)=\mathrm{P}(\emptyset)=0=\mathrm{P}(\emptyset) \mathrm{P}(A)$. The set $S$ also has this property: $\mathrm{P}(S \cap A)=\mathrm{P}(A)=\mathrm{P}(S) \mathrm{P}(A)$.
7.2. $\mathrm{P}(A)=\frac{|S|^{n-1}}{|S|^{n}}=\frac{1}{|S|}, \mathrm{P}(B)=\frac{|S|^{n-1}}{|S|^{n}}=\frac{1}{|S|}, \mathrm{P}(A \cap B)=\frac{|S|^{n-2}}{|S|^{n}}=\frac{1}{|S|^{2}}=\mathrm{P}(A) \mathrm{P}(B)$.
## Fibonacci numbers
### Fibonacci's exercise
6.1. Because we use the two previous elements to compute the next.
6.1. $F_{n+1}$.
6.2. It is clear from the recurrence that two odd members are followed by an even, then by two odd.
6.2. We formulate the following nasty looking statement: if $n$ is divisible by 5 , then so is $F_{n}$; if $n$ has remainder 1 when divided by 5 , then $F_{n}$ has remainder 1 ; if $n$ has remainder 2 when divided by 5 , then $F_{n}$ has remainder 1 ; if $n$ has remainder 3 when divided by 5 , then $F_{n}$ has remainder 2 ; if $n$ has remainder 4 when divided by 5 , then $F_{n}$ has remainder 3 . This is then easily proved by induction on $n$.
6.2. By induction. All of them are true for $n=1$ and $n=2$. Assume that $n \geq 3$.
(a) $F_{1}+F_{3}+F_{5}+\ldots+F_{2 n-1}=\left(F_{1}+F_{3}+\ldots+F_{2 n-3}\right)+F_{2 n-1}=F_{2 n-2}+F_{2 n-1}=F_{2 n}$.
(b) $F_{0}-F_{1}+F_{2}-F_{3}+\ldots-F_{2 n-1}+F_{2 n}=\left(F_{0}-F_{1}+F_{2}-\ldots+F_{2 n-2}\right)-F_{2 n-1}+F_{2 n}=\left(F_{2 n-3}-1\right)+F_{2 n-2}=F_{2 n-1}-1$.
(c) $F_{0}^{2}+F_{1}^{2}+F_{2}^{2}+\ldots+F_{n}^{2}=\left(F_{0}^{2}+F_{1}^{2}+\ldots+F_{n-1}^{2}\right)+F_{n}^{2}=F_{n-1} F_{n}+F_{n}^{2}=F_{n}\left(F_{n-1}+F_{n}\right)=F_{n} F_{n+1}$.
(d) $F_{n-1} F_{n+1}-F_{n}^{2}=F_{n-1}\left(F_{n-1}+F_{n}\right)-F_{n}^{2}=F_{n-1}^{2}+F_{n}\left(F_{n-1}-F_{n}\right)=F_{n-1}^{2}-F_{n} F_{n-2}=-(-1)^{n-1}=$ $(-1)^{n}$.
6.2. The identity is
$$
\left(\begin{array}{l}
n \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-2 \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n-k \\
k
\end{array}\right)=F_{n+1},
$$
where $k=\lfloor n / 2\rfloor$. Proof by induction. True for $n=0$ and $n=1$. Let $n \geq 2$. Assume that $n$ is odd; the even case is similar, just the last term below needs a little different treatment.
$$
\begin{gathered}
\left(\begin{array}{l}
n \\
0
\end{array}\right)+\left(\begin{array}{c}
n-1 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-2 \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n-k \\
k
\end{array}\right) \\
=1+\left(\left(\begin{array}{c}
n-2 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-2 \\
1
\end{array}\right)\right)+\left(\left(\begin{array}{c}
n-3 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-3 \\
2
\end{array}\right)\right)+\ldots+\left(\left(\begin{array}{c}
n-k-1 \\
k-1
\end{array}\right)+\left(\begin{array}{c}
n-k-1 \\
k
\end{array}\right)\right) \\
=\left(\left(\begin{array}{c}
n-1 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-2 \\
1
\end{array}\right)+\left(\begin{array}{c}
n-3 \\
2
\end{array}\right)+\ldots+\left(\begin{array}{c}
n-k-1 \\
k
\end{array}\right)\right) \\
+\left(\left(\begin{array}{c}
n-2 \\
0
\end{array}\right)+\left(\begin{array}{c}
n-3 \\
1
\end{array}\right)+\ldots+\left(\begin{array}{c}
n-k-1 \\
k-1
\end{array}\right)\right)=F_{n}+F_{n-1}=F_{n+1} .
\end{gathered}
$$
6.2. The "diagonal" is in fact a very long and narrow parallelogram with area 1 . The trick depends on the fact $F_{n+1} F_{n-1}-F_{n}^{2}=(-1)^{n}$ is very small compared to $F_{n}^{2}$.
### A formula for the Fibonacci numbers
6.3. True for $n=0,1$. Let $n \geq 2$. Then by the induction hypothesis,
$$
\begin{gathered}
F_{n}=F_{n-1}+F_{n-2} \\
=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}-\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}\right)+\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-2}-\left(\frac{1-\sqrt{5}}{2}\right)^{n-2}\right) \\
=\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n-2}\left(\frac{1+\sqrt{5}}{2}+1\right)-\left(\frac{1-\sqrt{5}}{2}\right)^{n-2}\left(\frac{1-\sqrt{5}}{2}+1\right)\right] \\
=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^{n}\right) .
\end{gathered}
$$
6.3. For $n=0$ and $n=1$, if we require that $H_{n}$ is of the given form we get
$$
H_{0}=1=a+b, \quad H_{1}=3=a \frac{1+\sqrt{5}}{2}+b \frac{1-\sqrt{5}}{2} .
$$
Solving for $a$ and $b$, we get
$$
a=\frac{1+\sqrt{5}}{2}, b=\frac{1-\sqrt{5}}{2} .
$$
Then
$$
H_{n}=\left(\frac{1+\sqrt{5}}{2}\right)^{n+1}+\left(\frac{1-\sqrt{5}}{2}\right)^{n+1}
$$
follows by induction on $n$ just like in the previous problem.
6.3 .
$$
I_{n}=\frac{1}{2 \sqrt{5}}\left((2+\sqrt{5})^{n}-(2-\sqrt{5})^{n}\right) .
$$
## Integers, divisors, and primes
### Divisibility of integers
8.1. $a=a \cdot 1=(-a) \cdot(-1)$.
8.1. (a) even; (b) odd; (c) $a=0$.
8.1. (a) If $b=a m$ and $c=b n$ then $c=a m n$. (b) If $b=a m$ and $c=a n$ then $b+c=a(m+n)$ and $b-c=a(m-n)$. (c) If $b=a m$ and $a, b>0$ then $m>0$, hence $m \geq 1$ and so $b \geq a$. (d) Trivial if $a=0$. Assume $a \neq 0$. If $b=a m$ and $a=b n$ then $a=a m n$, so $m n=1$. Hence either $m=n=1$ or $m=n=-1$.
8.1. We have $a=c n$ and $b=c m$, hence $r=b-a q=c(m-n q)$.
8.1. We have $b=a m, c=a q+r$ and $c=b t+s$. Hence $s=c-b t=(a q+r)-(a m) t=(q-m t) a+r$. Since $0 \leq r<a$, the remainder of the division $s: a$ is $r$.
8.1. (a) $a^{2}-1=(a-1)(a+1)$. (b) $a^{n}-1=(a-1)\left(a^{n-1}+\ldots+a+1\right)$.
### Factorization into primes
8.3. Yes, the number 2.
8.3. (a) $p$ occurs in the prime factorization of $a b$, so it must occur in the prime factorization of $a$ or in the prime factorization of $b$.
(b) $p \mid a(b / a)$, but $p \nmid a$, so by (a), we must have $p \mid(b / a)$.
8.3. Let $n=p_{1} p_{2} \ldots p_{k}$; each $p_{i} \geq 2$, hence $n \geq 2^{k}$.
8.3. If $r_{i}=r_{j}$ then $i a-j a$ is divisible by $p$. But $i a-j a=(i-j) a$ and neither $a$ nor $i-j$ are divisible by $p$. Hence the $r_{i}$ are all different. None of them is 0 . Their number is $p-1$, so every value $1,2, \ldots, p-1$ must occur among the $r_{i}$.
8.3. For a prime $p$, the proof is the same as for 2 . If $n$ is composite but not a square, then there is a prime $p$ that occurs in the prime factorization of $n$ an odd number of times. We can repeat the proof by looking at this $p$.
8.3. Fact: If $\sqrt[k]{n}$ is not an integer then it is irrational. Proof: there is a prime $p$ that occurs in the prime factorization of $n$, say $t$ times, where $k \nmid t$. If (indirect assumption) $\sqrt[k]{n}=a / b$ then $n b^{k}=a^{k}$, and so the number of times $p$ occurs in the prime factorization of the left hand side is not divisible by $k$, while the number of times it occurs in the prime factorization of the right hand side is divisible by $k$. A contradiction.
### Fermat's "Little" Theorem
8.5. $4 \nmid\left(\begin{array}{l}4 \\ 2\end{array}\right)=6$ and $4 \nmid 2^{4}-2=14$.
8.5. (a) What we need is that the $p$ rotated copies of a set are all different. Suppose that there is a set which occurs $a$ times. Then trivially every other set occurs $a$ times. But then $a \mid p$, so we must have $a=1$ or $p$. If all $p$ rotated copies are the same then trivially either $k=0$ or $k=p$, which were excluded. So we have $a=1$ as claimed. (b) Consider the set of two opposite vertices of a square. (c) If each box contains $p$ subsets of size $k$, the total number of subsets must be divisible by $p$.
8.5. We consider each number to have $p$ digits, by adding zeros at the front if necessary. We get $p$ numbers from each number $a$ by cyclic shift. These are all the same when all digits of $a$ are the same, but all different otherwise (why? the assumption that $p$ is a prime is needed here!). So we get $a^{p}-a$ numbers that are divided into classes of size $p$. Thus $p \mid a^{p}-a$.
8.5. Assume that $\operatorname{gcd}(a, p)=1$. Consider the product $a(2 a)(3 a) \ldots((p-1) a)=(p-1) ! a^{p-1}$. Let $r_{i}$ be the remainder of $i a$ when divided by $p$. Then the product above has the same remainder when divided by $p$ as the product $r_{1} r_{2} \ldots r_{p-1}$. But this product is just $(p-1)$ !. Hence $p$ is a divisor of $(p-1) ! a^{p-1}-(p-1) !=(p-1) !\left(a^{p-1}-1\right)$. Since $p$ is a prime, it is not a divisor of $(p-1) !$, and so it is a divisor of $a^{p-1}-1$.
### The Euclidean Algorithm
8.6. $\operatorname{gcd}(a, b) \leq a$, but $a$ is a common divisor, so $\operatorname{gcd}(a, b)=a$.
8.6. Let $d=\operatorname{gcd}(a, b)$. Then $d \mid a$ and $d \mid b$, and hence $d \mid b-a$. Thus $d$ is a common divisor of $a$ and $b-a$, and hence $\operatorname{gcd}(a, b)=d \leq \operatorname{gcd}(a, b-a)$. A similar argument shows the reverse inequality.
8.6. (a) $\operatorname{gcd}(a / 2, b) \mid(a / 2)$ and hence $\operatorname{gcd}(a / 2, b) \mid a$. So $\operatorname{gcd}(a / 2, b)$ is a common divisor of $a$ and $b$ and hence $\operatorname{gcd}(a / 2, b) \leq \operatorname{gcd}(a, b)$. The reverse inequality follows similarly, using that $\operatorname{gcd}(a, b)$ is odd, and hence $\operatorname{gcd}(a, b) \mid(a / 2)$. (b) $\operatorname{gcd}(a / 2, b / 2) \mid(a / 2)$ and hence $2 \operatorname{gcd}(a / 2, b / 2) \mid a$. Similarly, $2 \operatorname{gcd}(a / 2, b / 2) \mid b$, and hence $2 \operatorname{gcd}(a / 2, b / 2) \leq \operatorname{gcd}(a, b)$. Conversely, $\operatorname{gcd}(a, b) \mid a$ and hence $\frac{1}{2} \operatorname{gcd}(a, b) \mid a / 2$. Similarly, $\frac{1}{2} \operatorname{gcd}(a, b) \mid b / 2$, and hence $\frac{1}{2} \operatorname{gcd}(a, b) \leq \operatorname{gcd}(a / 2, b / 2)$.
8.6. Consider each prime that occurs in either one of them, raise it to the larger of the two exponents, and multiply these prime powers.
8.6. If $a$ and $b$ are the two integers, and you know the prime factorization of $a$, then take the prime factors of $a$ one by one, divide $b$ with them repeatedly to determine their exponent in the prime factorization of $b$, raise them to the smaller of their exponent in the prime factorizations of $a$ and $b$, and multiply these prime powers.
8.6. By the descriptions of the gcd and lcm above, each prime occurs the same number of times in the prime factorization of both sides.
8.6. $\operatorname{gcd}(a, a+1)=\operatorname{gcd}(a, 1)=\operatorname{gcd}(0,1)=1$.
8.6. The remainder of $F_{n+1}$ divided by $F_{n}$ is $F_{n-1}$. Hence $\operatorname{gcd}\left(F_{n+1}, F_{n}\right)=\operatorname{gcd}\left(F_{n}, F_{n-1}\right)=\ldots=$ $\operatorname{gcd}\left(F_{3}, F_{2}\right)=1$. This lasts $n-1$ steps.
8.6. By induction on $k$. True if $k=1$. Suppose that $k>1$. Let $b=a q+r, 1 \leq r<a$. Then the euclidean algorithm for computing $\operatorname{gcd}(a, r)$ lasts $k-1$ steps, hence $a \geq F_{k}$ and $r \geq F_{k-1}$ by the induction hypothesis. But then $b=a q+r \geq a+r \geq F_{k}+F_{k-1}=F_{k+1}$.
8.6. (a) Takes 10 steps. (b) Follows from $\operatorname{gcd}(a, b)=\operatorname{gcd}(a-b, b)$. (c) $\operatorname{gcd}\left(10^{100}-1,10^{100}-2\right)$ takes $10^{100}-1$ steps.
8.6. (a) Takes 8 steps. (b) At least one of the numbers remains odd all the time. (c) Follows from exercises 8.6 and 8.6. (d) The product of the two numbers drops by a factor of two in one of any two iterations.
### Testing for primality
8.7. By induction on $k$. True if $k=1$. Let $n=2 m+a$, where $a$ is 0 or 1 . Then $m$ has $k-1$ bits, so by induction, we can compute $2^{m}$ using at most $2(k-1)$ multiplications. Now $2^{n}=\left(2^{m}\right)^{2}$ if $a=0$ and $2^{n}=\left(2^{m}\right)^{2} \cdot 2$ if $a=1$.
8.7. If $3 \mid a$ then clearly $3 \mid a^{561}-a$. If $3 \nmid a$, then $3 \mid a^{2}-1$ by Fermat, hence $3 \mid\left(a^{2}\right)^{280}-1=a^{560}-1$. Similarly, if $11 \nmid a$, then $11 \mid a^{10}-1$ and hence $11 \mid\left(a^{10}\right)^{56}-1=a^{560}-1$. Finally, if $17 \nmid a$, then $17 \mid a^{16}-1$ and hence $17 \mid\left(a^{16}\right)^{35}-1=a^{560}-1$.
## Graphs
### Even and odd degrees
9.1. There are 2 graphs on 2 nodes, 8 graphs on 3 nodes (but only four "essentially different"), 64 graphs on 4 nodes (but only 11 "essentially different").
9.1. (a) No; sum of degrees must be even. (b) No; node with degree 5 must be connected to all other nodes, so we cannot have a node with degree 0 . (c) 12 . (d) $9 \cdot 7 \cdot 5 \cdot 3 \cdot 1=945$.
9.1. This graph, the complete graph, has $\left(\begin{array}{l}n \\ 2\end{array}\right)$ edges if it has $n$ nodes.
9.1. (a) a path with 3 nodes; (b) a star with 4 endpoints; (c) the union of two paths with 3 nodes.
9.1. In graph (a), the number of edges is 17, the degrees are $9,5,3,3,2,3,1,3,2,3$. In graph (b), the number of edges is 31, the degrees are $9,5,7,5,8,3,9,5,7,4$.
9.1. $\left(\begin{array}{c}10 \\ 2\end{array}\right)=45$.
9.1. $2^{\left(\begin{array}{c}20 \\ 2\end{array}\right)}=2^{190}$.
9.1. Every graph has two nodes with the same degree. Since each degree is between 0 and $n-1$, if all degrees were different then they would be $0,1,2,3, \ldots n-1$ (in some order). But the node with degree $n-1$ must be connected to all the others, in particular to the node with degree 0 , which is impossible.
### Paths, cycles, and connectivity
9.2. There are 8 such graphs.
9.2. The empty graph on $n$ nodes has $2^{n}$ subgraphs. The triangle has 18 subgraphs.
9.2. Yes, the proof remains valid.
9.2. (a) Delete any edge from a path. (b) Consider two nodes $u$ and $v$. The original graph contains a path connecting them. If this does not go through $e$, then it remains a path after $e$ is deleted. If it goes through $e$, then let $e=x y$, and assume that the path reaches $x$ first (when traversed from $u$ to $v$). Then in the graph after $e$ is deleted, there is a path from $u$ to $x$, and also from $x$ to $y$ (the remainder of the cycle), so there is one from $u$ to $y$. But there is also one from $y$ to $v$, so there is also a path from $u$ to $v$.
9.2. (a) Consider a shortest walk from $u$ to $v$; if this goes through any nodes more than once, the part of it between two passes through this node can be deleted, to make it shorter. (b) The two paths together form a walk from $a$ to $c$.
9.2. Let $w$ be a common node of $H_{1}$ and $H_{2}$. If you want a path between nodes $u$ and $v$ in $H$, then we can take a path from $u$ to $w$, followed by a path from $w$ to $v$, to get a walk from $u$ to $v$.
9.2. Both graphs are connected.
9.2. The union of this edge and one of these components would form a connected graph that is strictly larger than the component, contradicting the definition of a component.
9.2. If $u$ and $v$ are in the same connected component, then this component, and hence $G$ too, contains a path connecting them. Conversely, if there is a path $P$ in $G$ connecting $u$ and $v$, then this path is a connected subgraph, and a maximal connected subgraph containing $P$ is a connected component containing $u$ and $v$.
9.2. Assume that the graph is not connected and let a connected component $H$ of it have $k$ nodes. Then $H$ has at most $\left(\begin{array}{l}k \\ 2\end{array}\right)$ edges. The rest of the graph has at most $\left(\begin{array}{c}n-k \\ 2\end{array}\right)$ edges. Then the number of edges is at most
$$
\left(\begin{array}{l}
k \\
2
\end{array}\right)+\left(\begin{array}{c}
n-k \\
2
\end{array}\right)=\left(\begin{array}{c}
n-1 \\
2
\end{array}\right)-(k-1)(n-k-1) \leq\left(\begin{array}{c}
n-1 \\
2
\end{array}\right) .
$$
## Trees
10. If $G$ is a tree then it contains no cycles (by definition), but adding any new edge creates a cycle (with the path in the tree connecting the endpoints of the new edge). Conversely, if a graph has no cycles but adding any edge creates a cycle, then it is connected (two nodes $u$ and $v$ are either connected by an edge, or else adding an edge connecting them creates a cycle, which contains a path between $u$ and $v$ in the old graph), and therefore it is a tree.
10. If $u$ and $v$ are in the same connected component, then the new edge $u v$ forms a cycle with the path connecting $u$ and $v$ in the old graph. If joining $u$ and $v$ by a new edge creates a cycle, then the rest of this cycle is a path between $u$ and $v$, and hence $u$ and $v$ are in the same component.
11. Assume that $G$ is a tree. Then there is at least one path between two nodes, by connectivity. But there cannot be two paths, since then we would get a cycle (find the node $v$ where the two paths branch away, and follow the second path until it hits the first path again; follow the first path back to $v$, to get a cycle).
Conversely, assume that there is a unique path between each pair of nodes. Then the graph is connected (since there is a path) and cannot contain a cycle (since two nodes on the cycle would have at least two paths between them).
### How to grow a tree?
10.1. Start the path from a node of degree 1.
10.1. Any edge has only one lord, since if there were two, they would have to start from different ends, and they would then have two ways to get to the King. Similarly, an edge with no lord would lead to two different ways to get to the King.
10.1. Start at any node $v$. If one of the branches at this node contains more than half of all nodes, move along the edge leading to this branch. Repeat. You'll never backtrack, because this would mean that there is an edge whose deletion results in two connected components, both containing more than half of the nodes. You'll never cycle back to a node already seen because the graph is a tree. Therefore you must get stuck at a node such that each branch at this node contains at most half of all nodes.
### Rooted trees
10.3. The number of unlabeled trees on $2,3,4,5$ nodes is $1,1,2,3$. They give rise to a total of $1,3,16,125$ labeled trees.
10.3. There are $n$ stars and $n ! / 2$ paths on $n$ nodes.
### How to store a tree?
10.4. The first is the father code of a path; the third is the father code of a star. The other two are not father codes of trees.
10.4. This is the number of possible father codes.
10.4. Define a graph on $\{1, \ldots, n\}$ by connecting all pairs of nodes in the same column. If we do it backwards, starting with the last column, we get a procedure of growing a tree by adding new node and an edge connecting it to an old node.
10.4. (a) encodes a path; (b) encodes a star; (c) does not encode any tree (there are more 0 's than 1's among the first 5 elements, which is impossible in the planar code of any tree).
## Finding the optimum
### Finding the best tree
11.1. Let $H$ be an optimal tree and let $G$ be the tree contructed by the pessimistic government. Look at the first step when an edge $e=u v$ of $H$ is eliminated. Deleting $e$ from $H$ we get two components; since $G$ is connected, it has an edge $f$ connecting these two components. The edge $f$ cannot be more expensive than $e$, else the pessimistic government would have chosen $f$ to eliminate instead of $e$. But then we can replace $e$ by $f$ in $H$ without increasing its cost. Hence we conclude as in the proof given above.
11.1. [Very similar.]
11.1. [Very similar.]
11.1. Take nodes $1,2,3,4$ and costs $c(12)=c(23)=c(34)=c(41)=3, c(13)=4, c(24)=1$. The pessimistic government builds (12341), while the best solution is 12431 .
### Traveling Salesman
11.2. No, because it intersects itself (see next exercise).
11.2. Replacing two intersecting edges by two other edges pairing up the same 4 nodes, just differently, gives a shorter tour by the triangle inequality.
## Matchings in graphs
### A dancing problem
12.1. If every degree is $d$, then the number of edges is $d \cdot|A|$, but also $d \cdot|B|$.
12.1. (a) A triangle; (b) a star.
12.1. A graph in which every node has degree 2 is the union of disjoint cycles. If the graph is bipartite, these cycles have even length.
12.3. Let $X \subseteq A$ and let $Y$ denote the set of neighbors of $X$ in $B$. There are exactly $d|X|$ edges starting from $X$. Every node in $Y$ accommodates no more than $d$ of these; hence $|Y| \geq|X|$.
12.4. On a path with 4 nodes, we may select the middle edge.
12.4. The edges in $M$ must meet every edge in $G$, in particular every edge in the perfect matching. So every edge in the perfect matching has at most one endpoint unmatched by $M$.
12.4. The largest matching has 5 edges.
12.4. If the algorithm terminates without a perfect matching, then the set $S$ shows that the graph is not "good".
12.5. The first graph does; the second does not.
2013, 10(2): 399-424. doi: 10.3934/mbe.2013.10.399
Competition of motile and immotile bacterial strains in a petri dish
Silogini Thanarajah and Hao Wang
Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, T6G 2G1, Canada
Received September 2012; revised November 2012; published January 2013.
Bacterial competition is an important component in many practical applications such as plant roots colonization and medicine (especially in dental plaque). Bacterial motility has two types of mechanisms --- directed movement (chemotaxis) and undirected movement. We study undirected bacterial movement mathematically and numerically which is rarely considered in literature. To study bacterial competition in a petri dish, we modify and extend the model used in Wei et al. (2011) to obtain a group of more general and realistic PDE models. We explicitly consider the nutrients and incorporate two bacterial strains characterized by motility. We use different nutrient media such as agar and liquid in the theoretical framework to discuss the results of competition. The consistency of our numerical simulations and experimental data suggest the importance of modeling undirected motility in bacteria. In agar the motile strain has a higher total density than the immotile strain, while in liquid both strains have similar total densities. Furthermore, we find that in agar as bacterial motility increases, the extinction time of the motile bacteria decreases without competition but increases in competition. In addition, we show the existence of traveling-wave solutions mathematically and numerically.
Keywords: Motility, competition, traveling-wave solution, diffusion, extinction time, partial differential equation.
Mathematics Subject Classification: Primary: 92B05, 35Kxx, 35C07; Secondary: 92D25, 92D40, 35B3.
Citation: Silogini Thanarajah, Hao Wang. Competition of motile and immotile bacterial strains in a petri dish. Mathematical Biosciences & Engineering, 2013, 10 (2) : 399-424. doi: 10.3934/mbe.2013.10.399
S. Asei, B. Byers, A. Eng, N. James and J. Leto, "Bacterial Chemostat Model," 2007.
P. K. Brazhnik and J. J. Tyson, On traveling wave solutions of Fisher's equation in two spatial dimensions, SIAM J. Appl. Math., 60 (2000), 371. doi: 10.1137/S0036139997325497.
I. Chang, E. S. Gilbert, N. Eliashberg and J. D. Keasling, A three-dimensional stochastic simulation of biofilm growth and transport-related factors that affect structure, Micro. Bio., 149 (2003), 2859.
M. Fontes and D. Kaiser, Myxococcus cells respond to elastic forces in their substrate, Proceedings of the National Academy of Sciences of the United States of America, 96 (1999), 8052.
H. Fujikawa and M. Matsushita, Fractal growth of Bacillus subtilis on agar plates, J. Phys. Soc. Jpn., 58 (1989), 3875.
H. Fujikawa and M. Matsushita, Bacterial fractal growth in the concentration field of nutrient, J. Phys. Soc. Jpn., 60 (1991), 88.
M. E. Hibbing, C. Fuqua, M. R. Parsek and B. S. Peterson, Bacterial competition: Surviving and thriving in the microbial jungle, Nature Reviews Microbiology, 8 (2010), 15.
D. P. Häder, R. Hemmersbach and M. Lebert, Gravity and the behavior of unicellular organisms, Developmental and Cell Biology Series, 40 (2005).
C. R. Kennedy and R. Aris, Traveling waves in a simple population model involving growth and death, Bull. of Math. Biol., 42 (1980), 397. doi: 10.1016/S0092-8240(80)80057-7.
E. Keller, Mathematical aspects of bacterial chemotaxis, Antibiotics and Chemotherapy, 19 (1974), 79.
F. X. Kelly, K. J. Dapsis and D. Lauffenburger, Effect of bacterial chemotaxis on dynamics of microbial competition, Microb. Ecol., 16 (1988), 115.
E. Khain, L. M. Sander and A. M. Stein, A model for glioma growth, Complexity, 11 (2005), 53. doi: 10.1002/cplx.20108.
S. M. Krone, R. Lu, R. Fox, H. Suzuki and E. M. Top, Modelling the spatial dynamics of plasmid transfer and persistence, Micro. Biol., 153 (2007), 2803.
D. Lauffenburger, R. Aris and K. H. Keller, Effects of random motility on growth of bacterial populations, Micro. Ecol., 7 (1981), 207.
D. Lauffenburger, R. Aris and K. H. Keller, Effects of cell motility and chemotaxis on growth of bacterial populations, Biophys. J., 40 (1982), 209.
D. Lauffenburger and P. Calcagno, Competition between two microbial populations in a nonmixed environment: Effect of cell random motility, Biotech. and Bioeng., XXV (1983), 2103.
M. Matsushita, J. Wakita, H. Itoh, K. Watanabe, T. Arai, T. Matsuyama, H. Sakaguchi and M. Mimura, Formation of colony patterns by a bacterial cell population, Physica A: Statistical Mechanics and Its Applications, 274 (1999), 190.
M. Matsushita, F. Hiramatsu, N. Kobayashi, T. Ozawa, Y. Yamazaki and T. Matsuyama, Colony formation in bacteria: Experiments and modeling, Biofilms, 1 (2004), 305.
M. Mimura, H. Sakaguchi and M. Matsushita, Reaction-diffusion modeling of bacterial colony patterns, Physica A. Stat. Mech. Appl., 282 (2000), 283.
J. D. Murray, "Mathematical Biology," (2002).
K. Nowaczyk, A. Juszczak and F. Domka, Microbiological oxidation of the waste ferrous sulphate, Polish Journal of Environmental Studies, 6 (1999), 409.
C. S. Patlak, Random walk with persistence and external bias, Bull. Math. Biophys., 15 (1953), 311.
P. T. Saunders and M. J. Bazin, On the stability of food chains, J. Theor. Biol., 52 (1975), 121.
R. N. D. Shepard and D. Y. Sumner, Undirected motility of filamentous cyanobacteria produces reticulate mats, Geobiology, 8 (2010), 179.
J. M. Skerker and H. C. Berg, Direct observation of extension and retraction of type IV pili, PNAS, 98 (2001), 6901.
L. Simonsen, Dynamics of plasmid transfer on surfaces, J. General Microbiology, 136 (1990), 1001.
R. Tokita, T. Katoh, Y. Maeda, J. I. Wakita, M. Sano, T. Matsuyama and M. Matsushita, Pattern formation of bacterial colonies by Escherichia coli, J. Phys. Soc. Jpn., 78 (2009).
Y. Wei, X. Wang, J. Liu, I. Nemenman, A. H. Singh, H. Howie and B. R. Levin, The population and evolutionary dynamics of bacteria in physically structured habitats: The adaptive virtues of motility, PNAS, 108 (2011), 4047.
J. T. Wimpenny, "CRC Handbook of Laboratory Model Systems for Microbial Ecosystems," 2nd edition, 1998.
P. Youderian, Bacterial motility: Secretory secrets of gliding bacteria, Current Biology, 8 (1998), 408.
A. Ishihara, J. E. Segall, S. M. Block and H. C. Berg, Coordination of flagella on filamentous cells of Escherichia coli, J. Bacteriology, 155 (1983), 228.
B. L. Taylor and D. E. Koshland, Reversal of flagella rotation in monotrichous and peritrichous bacteria: Generation of changes in direction, J. Bacteriology, 119 (1974), 640.
# Representation of linear regression as a function
Linear regression is a fundamental statistical technique used to model the relationship between a dependent variable and one or more independent variables. The goal of linear regression is to find the best-fitting line through the data points, which can be represented as a function.
The linear regression model can be written as:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + \epsilon$$
where $y$ is the dependent variable, $x_1, x_2, \dots, x_p$ are the independent variables, $\beta_0, \beta_1, \dots, \beta_p$ are the coefficients to be estimated, and $\epsilon$ is the error term.
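As an illustration, the model can be written directly as a function in code. The following is a sketch of ours, not part of any particular library; `beta[0]` plays the role of $\beta_0$:

```python
import numpy as np

def predict(X, beta):
    """Linear regression as a function.

    X is an (n, p) array of independent variables; beta has length p + 1,
    with beta[0] the intercept and beta[1:] the slope coefficients.
    """
    return beta[0] + X @ beta[1:]
```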
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Write the linear regression function for this dataset.
# Loss function and its properties
The loss function is a measure of how well the model fits the data. In linear regression, a common choice for the loss function is the mean squared error (MSE):
$$\text{MSE} = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2$$
where $n$ is the number of data points, $y_i$ is the actual value of the dependent variable for the $i$-th data point, and $\hat{y}_i$ is the predicted value of the dependent variable by the linear regression model.
The MSE is a quadratic, convex function of the coefficients $\beta_0, \beta_1, \dots, \beta_p$. It attains its global minimum at the least-squares estimates of the coefficients, and the minimum value is zero exactly when the model fits every data point perfectly.
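A short sketch of the MSE in code (the helper name is ours, and the dataset is the one from the exercises below):

```python
import numpy as np

def mse(beta, X, y):
    """Mean squared error of the model y_hat = beta[0] + X @ beta[1:]."""
    y_hat = beta[0] + X @ beta[1:]
    return np.mean((y - y_hat) ** 2)

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(mse(np.array([0.0, 2.0]), X, y))  # 0.0: the line y = 2x fits exactly
print(mse(np.array([1.0, 1.0]), X, y))  # 3.5: any other line gives a positive MSE
```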
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Write the MSE of the linear regression model for this dataset.
# Gradient descent algorithm for optimization
Gradient descent is an optimization algorithm that is used to find the minimum of a function. In the context of linear regression, gradient descent is used to find the coefficients $\beta_0, \beta_1, \dots, \beta_p$ that minimize the mean squared error.
The gradient of the mean squared error with respect to a coefficient $\beta_j$ is given by:
$$\frac{\partial \text{MSE}}{\partial \beta_j} = -\frac{2}{n} \sum_{i=1}^n (y_i - \hat{y}_i) x_{ij}$$

where $x_{ij}$ is the value of the $j$-th independent variable for the $i$-th data point, with the convention $x_{i0} = 1$ so that the same formula covers the intercept $\beta_0$.
The gradient descent algorithm updates the coefficients as follows:
$$\beta_j := \beta_j - \alpha \frac{\partial \text{MSE}}{\partial \beta_j}$$
where $\alpha$ is the learning rate.
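A minimal batch gradient-descent sketch follows, assuming NumPy; the function name `gradient_step`, the learning rate, and the iteration count are illustrative choices.

```python
import numpy as np

def gradient_step(X, y, beta, alpha):
    """One batch gradient-descent update for linear regression (MSE loss).
    X includes a leading column of ones; the gradient is
    -(2/n) * X^T (y - X beta)."""
    n = len(y)
    residuals = y - X @ beta
    grad = -(2.0 / n) * X.T @ residuals
    return beta - alpha * grad

# Hypothetical run on y = 2x starting from beta = (0, 0).
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.0, 4.0, 6.0, 8.0])
beta = np.zeros(2)
for _ in range(2000):
    beta = gradient_step(X, y, beta, alpha=0.05)
print(beta)  # approximately [0, 2]
```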
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Perform one step of gradient descent to update the coefficients.
# Understanding the descent direction and learning rate
The direction of the descent is determined by the negative gradient of the loss function. The learning rate determines the step size in moving towards the minimum.
A smaller learning rate will result in smaller steps, which can lead to slower convergence but a more precise estimate of the coefficients. A larger learning rate can lead to faster convergence, but the steps may be too large and the estimate may be less precise.
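The effect of the step size can be seen empirically. This self-contained sketch (toy data and the three rates are arbitrary illustrative choices) compares a too-small, a reasonable, and a too-large learning rate on the same problem.

```python
import numpy as np

X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.0, 4.0, 6.0, 8.0])

def step(beta, alpha):
    # One gradient-descent update for the MSE loss.
    return beta - alpha * (-(2.0 / len(y)) * X.T @ (y - X @ beta))

# Hypothetical comparison: too small, reasonable, too large.
for alpha in (0.001, 0.05, 0.2):
    beta = np.zeros(2)
    for _ in range(100):
        beta = step(beta, alpha)
    print(alpha, beta)
# alpha=0.001 makes slow progress, alpha=0.05 approaches [0, 2],
# alpha=0.2 takes steps that are too large and the entries blow up.
```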
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Choose an appropriate learning rate for gradient descent.
# Stochastic gradient descent for large datasets
When dealing with large datasets, it can be computationally expensive to compute the gradient of the loss function for all data points. Stochastic gradient descent (SGD) is an optimization algorithm that addresses this issue by using a random subset of data points to estimate the gradient.
In SGD, the gradient of the loss function with respect to a coefficient $\beta_j$ is estimated using a single data point:
$$\frac{\partial \text{MSE}}{\partial \beta_j} \approx -2 (y_i - \hat{y}_i) x_{ij}$$
where $i$ is a randomly chosen index.
SGD updates the coefficients as follows:
$$\beta_j := \beta_j - \alpha \frac{\partial \text{MSE}}{\partial \beta_j}$$
where $\alpha$ is the learning rate.
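A minimal SGD sketch, assuming NumPy; the function name `sgd_step`, the random seed, and the iteration budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(X, y, beta, alpha):
    """One stochastic gradient-descent update using a single random row."""
    i = rng.integers(len(y))
    residual = y[i] - X[i] @ beta
    grad = -2.0 * residual * X[i]
    return beta - alpha * grad

# Hypothetical run on the toy dataset y = 2x.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.0, 4.0, 6.0, 8.0])
beta = np.zeros(2)
for _ in range(5000):
    beta = sgd_step(X, y, beta, alpha=0.01)
print(beta)  # near [0, 2]; here the data are exactly linear, so the
             # per-sample gradients all vanish at the optimum
```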
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Perform one step of stochastic gradient descent to update the coefficients.
# Convergence and overfitting
Convergence refers to the iterates approaching the optimal coefficient values as the number of updates grows. Overfitting occurs when the model is too complex and fits the training data too well, leading to poor generalization to new data.
To prevent overfitting, regularization techniques can be used.
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Discuss the potential issues with overfitting in this dataset.
# Regularization techniques for improving the model
Regularization techniques are used to prevent overfitting by adding a penalty term to the loss function. Some common regularization techniques include (a worked sketch follows the list):

- Lasso regularization: Adds the sum of the absolute values of the coefficients (an $\ell_1$ penalty) to the loss function, which can drive some coefficients exactly to zero.
- Ridge regularization: Adds the sum of the squares of the coefficients (an $\ell_2$ penalty) to the loss function, which shrinks coefficients smoothly.
- Elastic net regularization: Combines both the Lasso and Ridge penalties.
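As a concrete illustration of the ridge case, the sketch below adds the $\ell_2$ penalty's gradient, $2\lambda\beta_j$, to the MSE gradient. The function name, the choice to leave the intercept unpenalized, and the value of `lam` are illustrative assumptions.

```python
import numpy as np

def ridge_gradient_step(X, y, beta, alpha, lam):
    """Gradient step for ridge regression: MSE plus lam * sum(beta_j^2).
    The intercept beta_0 is conventionally left unpenalized."""
    n = len(y)
    grad = -(2.0 / n) * X.T @ (y - X @ beta)
    penalty = 2.0 * lam * beta
    penalty[0] = 0.0                      # do not shrink the intercept
    return beta - alpha * (grad + penalty)

# Hypothetical run: the penalty shrinks the slope slightly below 2.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.0, 4.0, 6.0, 8.0])
beta = np.zeros(2)
for _ in range(5000):
    beta = ridge_gradient_step(X, y, beta, alpha=0.05, lam=0.1)
print(beta)  # slope a bit less than 2, intercept adjusts to compensate
```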
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Discuss how regularization techniques can be applied to improve the linear regression model.
# Extensions and variants of the gradient descent algorithm
There are several extensions and variants of the gradient descent algorithm that can be used to improve its performance in various scenarios. Some of these include (a momentum sketch follows the list):

- Momentum: Adds a fraction of the previous update to the current update, which can help accelerate convergence along consistent descent directions.
- Adaptive learning rate methods: Adjust the step size per coordinate based on the history of gradients (for example AdaGrad, RMSProp, and Adam), which can help avoid getting stuck in ill-conditioned regions.
- Nesterov accelerated gradient (NAG): A look-ahead variant of momentum that evaluates the gradient at the anticipated next position rather than the current one, which often improves convergence.
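The following is a minimal sketch of classical (heavy-ball) momentum applied to the toy problem; the hyperparameters `alpha`, `mu`, and `iters` are illustrative defaults, not recommended settings.

```python
import numpy as np

def momentum_run(X, y, alpha=0.05, mu=0.9, iters=200):
    """Gradient descent with a classical momentum term:
    v <- mu * v + grad;  beta <- beta - alpha * v."""
    beta = np.zeros(X.shape[1])
    v = np.zeros_like(beta)
    n = len(y)
    for _ in range(iters):
        grad = -(2.0 / n) * X.T @ (y - X @ beta)
        v = mu * v + grad
        beta = beta - alpha * v
    return beta

# Hypothetical run on the toy dataset.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(momentum_run(X, y))  # close to [0, 2] in fewer iterations than plain GD
```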
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Discuss how extensions and variants of the gradient descent algorithm can be used to improve the linear regression model.
## Exercise
Consider the following dataset:
| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
Perform one step of gradient descent using a momentum term and an adaptive learning rate. | Textbooks |
\begin{document}
\title[Compact approach to the positivity of Brown-York mass]{Compact approach to the positivity of Brown-York mass and rigidity of manifolds with mean-convex boundaries in flat and spherical contexts}
\author{Sebasti{\'a}n Montiel} \address[Montiel]{Departamento de Geometr{\'\i}a y Topolog{\'\i}a\\ Universidad de Granada\\ 18071 Granada \\ Spain} \email{[email protected]}
\begin{abstract} In this article we develope a spinorial proof of the Shi-Tam theorem for the positivity of the Brown-York mass without necessity of building non smooth infinite asymptotically flat hypersurfaces in the Euclidean space and use the positivity of the ADM mass proved by Schoen-Yau and Witten. This same compact approach provides an optimal lower bound \cite{HMZ} for the first non null eigenvalue of the Dirac operator of a mean convex boundary for a compact spin manifold with non negative scalar curvature, an a rigidity result for mean-convex bodies in flat spaces. The same machinery provides analogous, but new, results of this type, as far as we know, in spherical contexts, including a version of Min-Oo's conjecture. \end{abstract}
\thanks{Declaration: The author declares that he has no conflicts of interest or competing interests of any type relative to this article.}\thanks{Research partially supported by the Junta de Andaluc\'{\i}a grant FQM-325}
\keywords{Dirac Operator, Spectrum, APS boundary condition, Brown-York mass, Mean-convexity, Min-Oo conjecture}
\subjclass{Differential Geometry, Global Analysis, 53C27, 53C40, 53C80, 58G25}
\date{September, 2022}
\maketitle \pagenumbering{arabic}
\section{Introduction} In a beautiful work by Shi and Tam [ST, Theorem 4.1], the positivity of the ADM mass, proved independently by Schoen and Yau \cite{SY} and by Witten \cite{Wi}, was used in a new way to get nice results on the boundary behaviour of a compact Riemannian manifold with non-negative scalar curvature. They proved that if $\Omega$ is a three-dimensional compact Riemannian manifold with non-negative scalar curvature and strictly convex boundary $\Sigma$, then $$ \int_{\Sigma}H_0-\int_{\Sigma}H,$$ that is, the so-called Brown-York mass enclosed in $\Omega$, is greater than or equal to zero, where $H$ is the inner mean curvature of $\Sigma$ in $\Omega$ and $H_0$ is the Euclidean inner mean curvature of $\Sigma$ in ${\mathbb R}^{3}$ corresponding to the (unique up to rigid motions) Weyl embedding of $\Sigma$ into ${\mathbb R}^3$ obtained by Pogorelov \cite{Po} and Nirenberg \cite{Ni} independently. Moreover, if the equality holds for some boundary component $\Sigma$, then $\Omega$ is a domain in ${\mathbb R}^3$. The proof uses two fundamental facts. First, following an idea by Bartnik \cite{Bar}, the construction of a suitable infinite asymptotically flat extension of $\Omega$ with non-negative scalar curvature, obtained by deforming the exterior of $\Sigma$ in ${\mathbb R}^3$ so that it has the same mean curvature as $\Sigma$ in $\Omega$, with the intention of gluing it to the manifold $\Omega$ along its boundary $\Sigma$. This construction is equivalent to solving a non-linear parabolic equation in a $C^1$ context, because this is the degree of differentiability obtained after the gluing. The second point is that the difference of integrals$$ \int_{\Sigma_r}H_0-\int_{\Sigma_r}H_r,$$ where $\Sigma_r$ is an expansion to infinity of the original boundary $\Sigma$ into ${\mathbb R}^3$, $H_0$ is the mean curvature with respect to the Euclidean metric and $H_r$ is the mean curvature with respect to the Bartnik metric, converges in a non-increasing way to the ADM mass of the asymptotically flat manifold built in this way and, so, one can use the non-negativity of its mass. This approach was successfully used to prove the positivity of other quasi-local masses proposed by Liu and Yau \cite{LiY1,LiY2} and Wang and Yau \cite{WaY}. Along these comments one can see that positivity of quasi-local masses and rigidity of compact manifolds with non-empty boundary are different, although similar, aspects of almost the same question.
It can be seen, from the moment one starts to study elementary differential geometry, for example, when one first proves the Cohn-Vossen rigidity of ovaloids in ${\mathbb R}^3$, that there is a close connection between the rigidity of these manifolds and the fact that the integrals of their mean curvatures coincide \cite{MR} (pages 218-219). In this paper, we will obtain a compact approach to the positivity of the Brown-York mass for compact spin manifolds of arbitrary dimension and the corresponding rigidity theorem for mean-convex bodies in Euclidean spaces. These two types of results are related to an estimate for the lowest eigenvalues of the Dirac operator of these bodies and this, in turn, to theorems of Min-Oo type. In fact, in the flat case the corresponding Min-Oo conjecture was obtained by Miao \cite{Mi1} (see Remark 1), although it was an easy consequence of the estimate obtained in \cite{HMZ} for this eigenvalue of the Dirac operator.
Also in this article, for the sake of completeness, we explain the relation between the spin structures of the Euclidean space or the sphere and those of their hypersurfaces, and we will see that their Dirac operators are related basically through the mean curvature of these hypersurfaces and the scalar curvature of the ambient spaces. This fact will allow us, for an $(n+1)$-dimensional compact spin Riemannian manifold $\Omega$ with non-negative scalar curvature and mean-convex boundary $\Sigma$, to give a unified compact proof of a lower estimate for the spectrum of the Dirac operator of $\Sigma$, of the positivity of the Brown-York mass for mean-convex, non-necessarily convex, domains, of the aforementioned resolution of the {\em flat version} of Min-Oo's conjecture and of the rigidity of these mean-convex bodies in Euclidean spaces.
In the final section of the article, we will show that this same scheme, with the important difference of establishing the exact value $R=n(n+1)$ of the scalar curvature of the bulk manifold, works to obtain exactly the same sequence of results. In this case, all of them, as far as we know, are new.
\section{Riemannian spin manifolds and hypersurfaces} Consider an $(n+1)$-dimensional spin Riemannian manifold $\Omega$ with non-empty boundary $\partial\Omega=\Sigma$ and denote by $\langle\;,\;\rangle$ its scalar product and by $\nabla$ its corresponding Levi-Civita connection on the tangent bundle $T\Omega$. We fix a spin structure (and so a corresponding orientation) on the manifold $\Omega$ and denote by $\hbox{\bb S}\Omega$ the associated spinor bundle, which is a complex vector bundle of rank $2^ {\left[\frac{n+1}{2}\right]}$. Then let $$ \gamma:{\mathbb C}\ell(\Omega)\ensuremath{\longrightarrow} {\rm End}_{{\mathbb C}}(\hbox{\bb S}\Omega)$$ be the Clifford multiplication, which provides a fibre preserving irreducible representation of the Clifford algebras constructed over the tangent spaces of $\Omega$. When the dimension $n+1$ is even, we have the standard chirality decomposition \begin{equation}\label{chi-de} \hbox{\bb S}\Omega=\hbox{\bb S}\Omega^+\oplus\hbox{\bb S}\Omega^-, \end{equation} where the two direct summands are respectively the $\pm 1$-eigenspaces of the endomorphism $\gamma(\omega_{n+1})$, with $\omega_{n+1}=i^{\left[\frac{n+2}{2}\right]}e_1\cdots e_{n+1}$, the complex volume form. It is well-known (see \cite{LM}) that there are, on the complex spinor bundle $\hbox{\bb S}\Omega$, a natural Hermitian metric $(\;,\;)$ and a spinorial Levi-Civita connection, denoted also by $\nabla$, which is compatible with both $(\;,\;)$ and $\gamma$ in the following sense: \beq
&X(\psi,\phi)=(\nabla_X\psi,\phi)+(\psi,\nabla_X\phi)& \label{nabla-metric} \\ &\nabla_X\left(\gamma(Y)\psi\right)=\gamma(\nabla_XY)\psi+\gamma(Y)\nabla_X\psi& \label{nabla-gamma} \eeq for any tangent vector fields $X,Y\in\Gamma(T\Omega)$ and any spinor fields $\psi,\phi\in\Gamma (\hbox{\bb S}\Omega)$ on $\Omega$. Moreover, with respect to this Hermitian product on $\hbox{\bb S}\Omega$, Clifford multiplication by vector fields is skew-Hermitian or equivalently \begin{equation}\label{skew}
(\gamma(X)\psi,\gamma(X)\phi)=|X|^2(\psi,\phi). \end{equation} Since the complex volume form $\omega_{n+1}$ is parallel with respect to the spinorial Levi-Civita connection, when $n+1=\dim \Omega$ is even, the chirality decomposition (\ref{chi-de}) is preserved by $\nabla$. Moreover, from (\ref{skew}), one sees that it is an orthogonal decomposition.
In this setting, the (fundamental) Dirac operator $D$ on the manifold $\Omega$ is the first order elliptic differential operator acting on spinor fields given locally by$$ D=\sum_{i=1}^{n+1}\gamma(e_i)\nabla_{e_i},$$ where $\{e_1,\dots,e_{n+1}\}$ is a local orthonormal frame in $T\Omega$. When $n+1=\dim \Omega$ is even, $D$ interchanges the chirality subbundles $\hbox{\bb S}\Omega^\pm$.
The boundary hypersurface $\Sigma$ is also an oriented Riemannian manifold with the induced orientation and metric. If $\nabla^{\Sigma}$ stands for the Levi-Civita connection of this induced metric we have the Gauss and Weingarten equations$$ \nabla_XY=\nabla^{\Sigma}_XY+\langle AX,Y\rangle N,\qquad \nabla_XN=-AX,$$ for any vector fields $X,Y$ tangent to $\Sigma$, where $A$ is the shape operator or Weingarten endomorphism of the hypersurface $\Sigma$ corresponding to the unit normal field $N$ compatible with the given orientation. As the normal bundle of the boundary hypersurface is trivial, the Riemannian manifold $\Sigma$ is also a spin manifold and so we will have the corresponding spinor bundle $\hbox{\bb S}\Sigma$, the Clifford multiplication $\gamma^{\Sigma}$, the spinorial Levi-Civita connection $\nabla^{\Sigma}$ and the intrinsic Dirac operator $D^{\Sigma}$. It is not difficult to show (see \cite{Ba2,BFGK,Bur,Tr,Mo}) that the restricted Hermitian bundle$$
{\bf S}:=\hbox{\bb S}\Omega_{|\Sigma}$$ can be identified with the intrinsic Hermitian spinor bundle $\hbox{\bb S}\Sigma$, provided that $n+1=\dim \Omega$ is odd. Instead, if $n+1=\dim \Omega$ is even, the restricted bundle ${\bf S}$ could be identified with the sum $\hbox{\bb S}\Sigma\oplus\hbox{\bb S}\Sigma$. With such identifications, for any spinor field $\psi\in\Gamma({\bf S})$ on the boundary hypersurface $\Sigma$ and any vector field $X\in\Gamma(T\Sigma)$, define on the restricted bundle ${\bf S}$, the Clifford multiplication $\gamma^{{\bf S}}$ and the connection $\nabla^{{\bf S}}$ by \beq\label{blackCli} & \gamma^{{\bf S}} (X)\psi=\gamma(X)\gamma(N)\psi & \\ & \nabla^{{\bf S}}_X\psi=\nabla_X\psi-\frac{1}{2}\gamma^{{\bf S}}(AX)\psi=\nabla_X\psi- \frac{1}{2}\gamma(AX)\gamma(N)\psi\,.\label{blacknabla} \eeq
Then it is easy to see that $\gamma^{{\bf S}}$ and $\nabla^{{\bf S}}$ correspond respectively to $\gamma^{\Sigma}$ and $\nabla^{\Sigma}$, for $n+1$ odd, and to $\gamma^{\Sigma}\oplus -\gamma^{\Sigma}$ and $\nabla^{\Sigma}\oplus\nabla^{\Sigma}$, for $n+1$ even. Then, $\gamma^{{\bf S}}$ and $\nabla^{{\bf S}}$ satisfy the same compatibility relations (\ref{nabla-metric}), (\ref{nabla-gamma}) and (\ref{skew}), together with the following additional identity $$ \nabla^{{\bf S}}_X\left(\gamma(N)\psi\right)=\gamma(N)\nabla^{{\bf S}}_X\psi.$$
As a consequence, the hypersurface Dirac operator ${\bf D}$ acting on smooth sections $\psi\in\Gamma({\bf S})$ as $$ {\bf D}\psi :=\sum_{j=1}^{n}\gamma^{{\bf S}}(u_j)\nabla^{{\bf S}}_{u_j}\psi= \frac{n}{2}H\psi-\gamma(N)\sum_{j=1}^{n}\gamma(u_j)\nabla_{u_j}\psi,$$ where $\{u_1,\dots,u_{n}\}$ is a local orthonormal frame tangent to the boundary $\Sigma$ and $H=(1/n)\,{\rm trace}\, A$ is its mean curvature function, coincides with the intrinsic Dirac operator $D^{\Sigma}$ on the boundary, for $n+1$ odd, and with the pair $D^{\Sigma}\oplus -D^{\Sigma}$, for $n+1$ even. In the particular case where the field $\psi\in\Gamma({\bf S})$ is the restriction of a spinor field $\psi\in\Gamma(\hbox{\bb S}\Omega)$ on $\Omega$, this means that \begin{equation}\label{twoD} {\bf D}\psi=\frac{n}{2}H\psi-\gamma(N)D\psi-\nabla_N\psi. \end{equation} Note that we always have the anticommutativity property \begin{equation}\label{supercom} {\bf D}\gamma(N)=-\gamma(N){\bf D} \end{equation} and so, when $\Sigma$ is compact, the spectrum of ${\bf D}$ is symmetric with respect to zero and coincides with the spectrum of $D^{\Sigma}$, for $n+1$ odd, and with ${\rm Spec}(D^{\Sigma})\cup -{\rm Spec}(D^{\Sigma})$, for $n+1$ even [see HMR]. In fact, we and other authors have remarked in several papers that the spectrum ${\rm Spec}({\bf D})$ is a ${\mathbb Z}$-symmetric sequence$$
-\infty\longleftarrow\cdots\le -\lambda_{k}\le\cdots\le-\lambda_{1}<\lambda_0=0<\lambda_1\le\cdots\le\lambda_k\le\cdots\longrightarrow +\infty
$$ (each eigenvalue repeated according to its corresponding multiplicity). This is because ${\bf D}$ is a first order elliptic operator which is self-adjoint, due to the compactness of $\Sigma$, and because $\gamma(N)$ maps the eigenspace of $\lambda_k$ onto that of $-\lambda_k$, where $\gamma$ is the Clifford multiplication and $N$ is the inner unit normal of $\Sigma$ in $\Omega$. Let us choose an $L^2(\hbox{\bb S}\Sigma)$-orthonormal basis $\{\psi_k\}_{k\in{\mathbb Z}}$ of the Hilbert space $L^2(\hbox{\bb S}\Sigma)$ consisting of eigenspinors of ${\bf D}$, that is, ${\bf D}\psi_k=\lambda_k\psi_k$ (and so $\psi_k\in C^\infty(\Sigma)$), for all $k\in{\mathbb Z}$. We remark that the presence of the eigenvalue $\lambda_0=0$ (repeated with its corresponding multiplicity) is not compulsory. It is a standard fact (Parseval identity) that, if $\phi$ is an $L^2$ spinor field on $\Sigma$, the series $$ \sum^{+\infty}_{-\infty}\phi_k=\lim_{k\rightarrow\infty}\sum^{k}_{-k} \phi_k=\phi$$ converges in the strong $L^2(\hbox{\bb S}\Sigma)$-topology, where each $\phi_k$ is the projection of $\phi$ onto the eigenspace corresponding to the eigenvalue $\lambda_k$. As a consequence of the H\"older inequality and well-known facts, we have $L^1(\hbox{\bb S}\Sigma)$-convergence as well and moreover, as an easy consequence of the completeness of $L^2(\hbox{\bb S}\Sigma)$ or $L^1(\hbox{\bb S}\Sigma)$, pointwise convergence almost everywhere (and so everywhere if $\phi$ is continuous) for a suitable subsequence of the partial sums of $\sum^{+\infty}_{-\infty}\phi_k$.
\section{A spinorial Reilly inequality} A basic tool to relate the eigenvalues of the Dirac operator and the geometry of the manifold $\Omega$ and those of its boundary $\Sigma$ will be, as in the closed case (see \cite{Fr1}), the integral version of the Schr{\"o}dinger-Lichnerowicz formula \begin{equation} D^2=\nabla^*\nabla+\frac{1}{4}R, \end{equation} where $R$ is the scalar curvature of $\Omega$. In fact, given a spinor field $\psi$ on $\Omega$, taking into account the formula above, if we compute the divergence of the one-form $\alpha$ defined by$$ \alpha(X)=\left<\gamma(X)D\psi+\nabla_X\psi,\psi\right>,\qquad \forall X\in T\Omega$$ and integrate, one gets$$
-\int_{\Sigma}\left<\gamma(N)D\psi+\nabla_N\psi,\psi\right>=\int_\Omega\left(|\nabla\psi|^2-|D\psi|^2+\frac{1}{4}
R|\psi|^2\right),$$ which by (\ref{twoD}), could be written as$$
\int_{\Sigma}\left( ({\bf D}\psi,\psi)-\frac{n}{2}H|\psi|^2\right)=\int_\Omega\left(|\nabla\psi|^2-|D\psi|^2+\frac{1}{4}R|\psi|^2\right).$$ Finally, we will use the pointwise spinorial Schwarz inequality$$
|D\psi|^2\le (n+1)|\nabla\psi|^2,\qquad\forall \psi\in\Gamma(\hbox{\bb S}\Omega),$$ where the equality is achieved only by the so-called {\em twistor spinors}, that is, those satisfying the following over-determined first order equation$$ \nabla_X\psi=-\frac{1}{n+1}\gamma(X)D\psi,\qquad\forall X\in T\Omega.$$ Then we get the following integral inequality, called {\em Reilly inequality} [see HMZ, for example], because of its similarity with the corresponding one obtained in \cite{Re} for the Laplace operator, \begin{equation}\label{Reilly}
\int_{\Sigma}\left( ({\bf D}\psi,\psi)-\frac{n}{2}H|\psi|^2\right)\ge\int_\Omega\left(\frac{1}{4}R|\psi|^2-\frac{n}{n+1}|D\psi|^2\right), \end{equation} with equality only for twistor spinors on $\Omega$.
\section{The APS boundary condition} It is a well-known fact that the Dirac operator $D$ on a compact spin Riemannian manifold $\Omega$ with boundary, $D:\Gamma(\hbox{\bb S}\Omega)\rightarrow\Gamma(\hbox{\bb S}\Omega)$, has an infinite dimensional kernel and a closed image with finite codimension. People have looked for conditions $B$ to be imposed on the restrictions to the boundary $\Sigma$ of the spinor fields on $\Omega$ so that this kernel becomes finite dimensional and the boundary problem \begin{equation}\label{boun-pro}\tag{BP} \left\{ \ba{lll} D\psi&=\Phi\quad&\hbox{on $\Omega$}\\
B\psi_{|\Sigma}&=\chi\quad&\hbox{along $\Sigma$}, \ea \right. \end{equation} for $\Phi\in\Gamma(\hbox{\bb S}\Omega)$ and $\chi\in\Gamma({\bf S})$, is of Fredholm type. In this case, we will have smooth solutions for any data $\Phi$ and $\chi$ belonging to a certain subspace with finite codimension and these solutions will be unique up to a finite dimensional kernel.
To our knowledge, the study of boundary conditions suitable for an elliptic operator $D$ (of any order although, for simplicity, we only consider first order operators) acting on smooth sections of a Hermitian vector bundle $F\rightarrow \Omega$ was first carried out in the fifties of the past century by Lopatinsky and Shapiro (\cite{Ho,Lo}), but the main tool was discovered by Calder{\'o}n in the sixties: the so-called Calder{\'o}n projector$$
{\mathcal P}_+(D): H^{\frac{1}{2}}(F_{|\Sigma})\longrightarrow\{ \psi_{|\Sigma}\,|\,\psi\in H^1(F), D\psi=0\}.$$ This is a pseudo-differential operator of order zero (see \cite{BW,Se}) with principal symbol ${\mathfrak p}_+(D):T\Sigma\rightarrow {\rm End}_{\mathbb C}(F)$ depending only on the principal symbol $\sigma_D$ of the operator $D$ and can be calculated as follows \begin{equation}\label{symbol} {\mathfrak p}_+(D)(X)=-\frac{1}{2\pi i}\int_\Gamma\left[(\sigma_D(N))^{-1}\sigma_D(X)-\zeta I\right]^{-1}\, d\zeta, \end{equation} for any $p\in\Sigma$ and $X\in T_p\Sigma$, where $N$ is the inner unit normal along the boundary $\Sigma$ and $\Gamma$ is a positively oriented cycle in the complex plane enclosing the poles of the integrand with negative imaginary part. Although the Calder{\'o}n projector is not unique for a given elliptic operator $D$, its principal symbol is uniquely determined by $\sigma_D$. One of the important features of the Calder{\'o}n projector is that its principal symbol detects the {\em ellipticity of a boundary condition}, or in other words, if the corresponding boundary problem (\ref{boun-pro}) is a {\em well-posed problem} (according to Seeley in \cite{Se}). In fact (cfr. \cite{Se} or \cite[Chap. 18]{BW}), {\em a pseudo-differential operator$$
B: L^2(F_{|\Sigma})\longrightarrow L^2(V),$$ where $V\rightarrow \Sigma$ is a complex vector bundle over the boundary, is called a {\em (global) elliptic boundary condition} when its principal symbol $b:T\Sigma\rightarrow {\rm Hom}_{\mathbb C}(
F_{|\Sigma},V)$ satisfies that, for any non-trivial $X\in T_p\Sigma$, $p\in\Sigma$, the restriction$$
b(X)_{|{\rm image}\,{\mathfrak p}_+(D)(X)}:{\rm image}\,{\mathfrak p}_+(D)(X)\subset F_p\longrightarrow V_p$$ is an isomorphism onto ${\rm image}\,b(X)\subset V_p$. Moreover, if ${\rm rank}\,V=\dim\- {\rm image}\- {\mathfrak p}_+(D)(X)$, we say that $B$ is a {\em local elliptic boundary condition}}. When $B$ is a local operator this definition yields the so-called Lopatinsky-Shapiro conditions for ellipticity (see for example \cite{Ho}).
When these definitions and the subsequent theorems are applied to the case where the vector bundle $F$ is the spinor bundle $\hbox{\bb S}\Omega$ and the elliptic operator $D$ is the Dirac operator $D$ on the spin Riemannian manifold $\Omega$, we obtain the following well-known facts in the setting of the general theory of boundary problems for elliptic operators (see for example \cite{BrL,BW,GLP,Ho,Se}):
It is easy to see that the principal symbol $\sigma_D$ of the Dirac operator $D$ on $\Omega$ is given by$$ \sigma_D(X)=i\gamma(X),\qquad \forall X\in T\Omega.$$ Then by (\ref{symbol}), the principal symbol of the Calder{\'o}n projector of the Dirac operator is given by$$
{\mathfrak p}_+(D)(X)=-\frac{1}{2|X|}(i\gamma(N)\gamma(X)-|X|I)=\frac{1}{2|X|}(i\gamma^{\bf S}(X)+|X|I),$$ for each non-trivial $X\in T\Sigma$ and where $\gamma^{\bf S}$ is identified in (\ref{blackCli}) as the intrinsic Clifford product on the boundary. As the endomorphism $i\gamma(N)\gamma(X)=-i\gamma^{\bf S}(X)$ is self-adjoint and its square is
$|X|^2$ times the identity map, then it has exactly two eigenvalues, say
$|X|$ and $-|X|$, whose eigenspaces
are of the same dimension $\frac{1}{2}\dim\hbox{\bb S}\Omega_p=2^{[\frac{n+1}{2}]-1}$, since they are interchanged by $\gamma(N)$. Hence the symbol ${\mathfrak p}_+(D)(X)$ is, up to a constant, the orthogonal projection onto the eigenspace corresponding to the eigenvalue $-|X|$ and so \beQ
&{\rm image}\,{\mathfrak p}_+(D)(X)=\{\eta\in\hbox{\bb S}\Omega_p\,|\,i\gamma(N)\gamma(X)\eta=-|X|\eta\},& \\ &\dim {\rm image}\,{\mathfrak p}_+(D)(X)=\frac{1}{2}\dim\hbox{\bb S}\Omega_p=2^{[\frac{n+1}{2}]-1}.& \eeQ
{}From these equalities and from the definition of ellipticity for the boundary condition represented by the pseudo-differential operator $B$, we have that the first equation in the statement above is equivalent to the injectivity of the map $b(X)_{|{\rm image}\;{\mathfrak p}_+(D)(X)}$. The second one implies that $\dim {\rm image}\;b(X)=\dim {\rm image}\;{\mathfrak p}_+(D)(X)$ and so, together with the injectivity, this means that $b(X)_{|{\rm image}\;{\mathfrak p}_+(D)(X)}$ is surjective. So we have proved that the two claimed conditions are equivalent to the ellipticity of a given boundary condition $B$ for the Dirac operator $D$ on $\Omega$. Now, from this ellipticity, one may deduce that the problem (\ref{boun-pro}) and the spectral corresponding one are of Fredholm type and the remaining assertions on eigenvalues and eigenspaces follow in a standard way (see \cite{BW,Ho}).
At the risk of being too long, we will explain what the Atiyah-Patodi-Singer condition is in this context. This well-known boundary condition was introduced in \cite{APS} in order to establish index theorems for compact manifolds with boundary. Later, this condition has been used to study the positive mass and Penrose inequalities (see \cite{He2,Wi}). Such a condition does not allow one to model confined particle fields since, from the physical point of view, its global nature is interpreted as a causality violation. Although it is a well-known fact that the APS condition is an elliptic boundary condition, we are going to sketch the proof in the setting established above, for three reasons: first, for completeness; second, to point out that the APS condition for an achiral Dirac operator covers both cases of odd and even dimension, although the latter case is not referred to the spectral resolution of the intrinsic Dirac operator $D^{\Sigma}$ but to the system $D^{\Sigma}\oplus -D^{\Sigma}$; and third, because we will use it with some small modifications which may not be familiar to some readers.
Precisely, this condition can be described as follows. Choose the aforementioned Hermitian bundle $V$ over the boundary hypersurface $\Sigma$ to be the restricted spinor bundle ${\bf S}$ defined in Section 2 and define, for each $a\in\mathbb Z$, $B^{a}_{{\rm APS}}:=B_{\ge a}:L^2({\bf S})\rightarrow L^2({\bf S})$ as the orthogonal projection onto the subspace spanned by the eigenspaces of the self-adjoint intrinsic operator ${\bf D}$ corresponding to eigenvalues greater than or equal to $a$. Atiyah, Patodi and Singer showed in \cite{APS} (see also \cite[Prop. 14.2]{BW}) that $B_{{\rm APS}}^a$ is a zero order pseudo-differential operator whose principal symbol $b_{{\rm APS}}$ (independent of $a$) satisfies the following fact: for each $p\in\Sigma$ and $X\in T_p\Sigma-\{0\}$, the map $b_{{\rm APS}}(X)$ is the orthogonal projection onto the eigenspace of $\sigma_{{\bf D}}(X)=i\gamma^{\bf S}(X)$ corresponding to the positive eigenvalue $|X|$. That is \begin{equation}\label{APS-symb}
b_{{\rm APS}\;}(X)=\frac{1}{2}\left(i\gamma^{\bf S}(X)+|X|I\right)=\frac{1}{2}\left(-i\gamma(N)\gamma(X)+|X|I\right), \end{equation} and so the principal symbol $b_{{\rm APS}\;}$ of the APS operator coincides, up to a constant, with the principal symbol ${\mathfrak p}_+(D)$ of the Calder{\'o}n projector of $D$, for all $a\in {\mathbb Z}$. From this, it is immediate to see that the two ellipticity required conditions are satisfied.
Analytically, this APS boundary condition can be formulated as follows. If $\phi$ is a spinor on $\Sigma$ and $a\in\mathbb Z$ is an integer number, we will denote by $b_{APS}^a(\phi)=\phi_{\ge a}$ the orthogonal projection determined by the diagonalization of ${\bf D}$ given above just after (8), $$ b_{APS}^a(\phi)=\phi_{\ge a}=\sum^{+\infty}_{k\ge a}\phi_k=\lim_{l\rightarrow+\infty}\sum^{l}_{m=a}\phi_m,$$ where the above convergences of subseries remain valid, as we have already commented for such truncated series, and moreover $$
\int_\Sigma|\phi_{\ge a}|^2\le\int_\Sigma|\phi|^2\qquad \forall a\in{\mathbb Z} $$ (Bessel inequality). It has long been known [see APS, Se, BB, HMR, B\"aBa1, B\"aBa2] that this spectral projection $b_{APS}^a$ through ${\bf D}$ provides an elliptic self-adjoint boundary condition on $\Omega$ for $D$, for all $a\in{\mathbb R}$, if we take the spectral projection onto the eigenspaces corresponding to its non-negative eigenvalues. (Be careful: in some papers, such as [APS] and [B{\"a}Ba1, B{\"a}Ba2], the {\em adapted} Dirac operator $\bf D$ on the boundary is taken to be $-\bf D$ and so it is necessary to interchange {\em positive} and {\em negative}.) We strongly advise the reader to study all these questions about first order differential operators, existence of solutions and their regularity in the two recent works by Ballmann and B\"ar [B{\"a}Ba1, B{\"a}Ba2]. They are very clearly written, in a much more modern language, and we have tried to reach the widest possible readership.
Anyway, we conclude that, for each integer number $a\le 0$, we can pose the elliptic boundary problem \begin{equation} \left\{\begin{array}{ll} D\psi_a=\xi \\ \psi_{\ge a}=\phi_{\ge a}, \end{array}\right. \end{equation} where $\psi$ and $\xi$ are spinor fields on the bulk manifold $\Omega$, $\phi$ is a spinor field on the boundary $\Sigma$ and the meaning of $\psi_{\ge a}$ should be already evident. This problem will be a Fredholm type problem. Analogously, the corresponding eigenvalue problem \begin{equation} \left\{\begin{array}{ll} D\psi_a=\lambda \psi_a \\ \psi_{\ge a}=0, \end{array}\right. \end{equation} [see HMR, BaB\"a1,BaB\"a2], is also well posed and has nice existence and regularity properties.
\section{A compact spinorial proof of the Shi-Tam theorem valid for mean-convex domains} We assume that the scalar curvature $R$ of the compact spin Riemannian manifold $\Omega$ is non-negative and consider a solution to the following homogeneous problem on $\Omega$ \begin{equation} \left\{\begin{array}{ll}\label{s-t} D\psi_a=0 \\\psi_{\ge a}=0, \end{array}\right. \end{equation} for any $a\in{\mathbb Z}$. Then, if we substitute it into inequality (\ref{Reilly}) and use that $R\ge 0$, only the terms on the boundary $\Sigma$ of $\Omega$ remain. We get \begin{equation}
0\le\int_\Sigma\left(\left<{\bf D}\psi_a,\psi_a\right>-\frac{n}{2}H|\psi_a|^2\right), \end{equation} and the equality is attained if and only if $\psi_a$ is a parallel (twistor plus harmonic) spinor on $\Omega$. Now, choose $a\le
0$ in the problem above and substitute the corresponding solution in the integral inequality above. Since $\left<{\bf D}\psi_{< a},\psi_{< a}\right>$ and $\left<{\bf D}\psi_{\ge a},\psi_{\ge a}\right>$ are clearly $L^ 2$-orthogonal, we have$$
\int_\Sigma\left<{\bf D}\psi_a,\psi_a\right>=\int_\Sigma\left<{\bf D}\psi_{< a},\psi_{< a}\right>=\sum^{-\infty}_{k<a\le 0}\lambda_k
\int_\Sigma|\psi_k|^2\le 0,
$$
for each integer $a\le 0$.
If, moreover, we add the hypothesis that the inner mean curvature $H$ is non negative along the boundary $\Sigma$, (16) implies
\begin{equation} \label{ineqint}
0\le\int_\Sigma\left(\left<{\bf D}\psi_a,\psi_a\right>-\frac{n}{2}H|\psi_a|^2\right) \le 0. \end{equation} Hence, the first term in the integral above implies $\psi_{<a}=0$ for all $a\le 0$. Since, from the boundary condition in (\ref{s-t}), $\psi_{\ge a}=0$ for all $a\in{\mathbb Z}$, $a\le 0$, we conclude $$ \psi_a=\psi_{<a}+\psi_{\ge a}=0 $$ for all $a\in{\mathbb Z}$, $a\le 0$, along $\Sigma$. So, the spinor $\psi_a$ is identically zero along $\Sigma$. As $\psi_a$ is harmonic on $\Omega$ and vanishes along $\Sigma$, it must vanish identically on $\Omega$: indeed, B\"ar showed in \cite{Ba3} that the zero set of a non-trivial harmonic spinor has codimension at least two in the bulk manifold, so it cannot contain the hypersurface $\Sigma$. Then, the unique solution to the
homogeneous equation (\ref{s-t}) is $\psi_a=0$. Since this equation is of Fredholm type, we need to study its cokernel, that is, to show that the adjoint problem to (15) also has trivial kernel. Thus, consider the homogeneous problem \begin{equation} \left\{\begin{array}{ll}\label{s-t-a} D\psi_a=0 \\\psi_{> a}=0, \end{array}\right. \end{equation} where it is easy to imagine what $\psi_{>a}$ denotes. Working as in (15), we also obtain the same inequality (17) for all $a\le 0$. But, in this case, when the equality is attained, we are not sure that $\psi_{\le a}=0$ for all $a\le 0$, because the harmonic component $\psi_0$ of $\psi$ could be non-trivial. So, the homogeneous problem (18) could have non-trivial harmonic solutions, unless $H>0$, that is, unless the boundary $\Sigma$ is {\em strictly} mean-convex. In any of these two cases, the unique solutions to (15) and its adjoint (18) would be the null ones. Then, the non-homogeneous problems would always have unique solutions.
\begin{proposition}Let $\Omega$ be a compact spin manifold with non negative scalar curvature and non-empty boundary $\Sigma$ such that its inner mean curvature is non-negative. Then, if we suppose that $\Sigma$ either does not admit harmonic spinors or is strictly mean convex ($H>0$) in $\Omega$, both homogeneous problems (\ref{s-t}) and (\ref{s-t-a}) have as unique solution the zero spinor. As a consequence of this, the inhomogeneous problem \begin{equation}\label{nonhomo} \left\{\begin{array}{ll} D\psi_a=\xi \\ \psi_{\ge a}=\phi_{\ge a}, \end{array}\right. \end{equation}
has a unique solution $\psi_a\in H^1(\Omega) \cap H^{\frac1{2}}(\Sigma)$ for prescribed spinor fields $\xi\in L^2(\Omega)$ and $\phi\in L^2(\Sigma)$ for each integer number $a\le 0$.
\end{proposition}
We will apply this result {\em \`a la Reilly}: solving equations on the bulk manifold to obtain results on its boundary. \begin{theorem} Let $\Omega$ be a compact spin Riemannian manifold of dimension $n+1$ with non-negative scalar curvature $R\ge 0$ and having a non-empty boundary $\Sigma$ whose inner mean curvature $H\ge 0$ is also non-negative (mean-convex). Suppose also that $\Sigma$ either does not carry harmonic spinors or that $H>0$. Then, for every spinor $\phi\in H^1(\Sigma)$, we have$$
\int_\Sigma|{\bf D}\phi||\phi| \ge \frac{n}{2}\int_\Sigma H|\phi|^2.$$ The equality occurs if and only if $\phi$ is the restriction to $\Sigma $ of a parallel spinor on $\Omega$ and so $${\bf D}\phi=\frac{n}{2}H\phi.$$
\end{theorem} {\bf Proof.} Using the Proposition above, for each integer $a\le 0$, solve the equation of type (\ref{nonhomo}) \begin{equation}\label{nonhomo2} \left\{\begin{array}{ll} D\psi_a=0 \\ \psi_{\ge a}=\phi_{\ge a}. \end{array}\right. \end{equation} By working in the same manner as at the beginning of this section, we arrive at inequality (16) because, up to this point, we have not used the boundary conditions. On the other hand, by using the orthogonal projections defined above, we know that $$ \int_\Sigma\left<{\bf D}\psi_a,\psi_a\right>\le \int_\Sigma\left<{\bf D}\psi_{\ge a},\psi_{\ge a}\right>,$$
because the summand$$ \int_\Sigma\left<{\bf D}\psi_{< a},\psi_{< a}\right> $$ only involves non-positive eigenvalues of ${\bf D}$. So, since $\psi_{\ge a}=\phi_{\ge a}$, using the pointwise Cauchy-Schwarz inequality, the unique solution $\psi_a$ of (\ref{nonhomo2}) satisfies \begin{equation} \int_\Sigma\left<{\bf D}\psi_a,\psi_a\right>\le \int_\Sigma\left<{\bf D}\phi_{\ge a},\phi_{\ge a}\right>
\le \int_\Sigma|{\bf D}\phi_{\ge a}||\phi_{\ge a}|, \end{equation} for all $a\in{\mathbb Z}, a\le 0$. Analogously, we have the $L^2$ decomposition $$
0\le \int_\Sigma|\psi_a|^2=\int_\Sigma|\psi_{\ge a}|^2+\int_\Sigma|\psi_{< a}|^2. $$ So, in a similar way as above, \begin{equation}
\int_\Sigma|\psi_a|^2\le\int_\Sigma|\psi_{\ge a}|^2=\int_\Sigma|\phi_{\ge a}|^2, \end{equation} for each non positive integer $a$. From the fact that $\psi_{\ge a}=\phi_{\ge a}$ lies in $H^{\frac1{2}}(\Sigma)$, we know that this sequence with index $a$ converges strongly in the $L^ 2(\Sigma)$ topology and, so, in the $L^1$ topology as well. As the non negative mean curvature function $H$ is smooth on the compact manifold $\Sigma$, $\sqrt{H}\in L^2(\Sigma)$. Then, $$
\lim_{a\rightarrow-\infty}\int_\Sigma H|\phi_{\ge a}|^2\,d\Sigma =\int_\Sigma H|\phi|^2.$$
From (16), (20) and this last limit, we have $$
0\le\lim_{a\rightarrow-\infty}\int_\Sigma\left(|{\bf D}\phi_{\ge a}||\phi_{\ge a}|-\frac{n}{2}H|\phi_{\ge a}|^2\right)
=\int_\Sigma\left(|{\bf D}\phi||\phi|-\frac{n}{2}H|\phi|^2\right) .$$
Hence, the inequality is true.
Let us see that the solution $\psi_a$ to the boundary problem (\ref{nonhomo2}) is smooth on ${\bar \Omega}$ for every integer $a\le 0$. In fact, the classical theory for the Dirac operator [see B{\"a}Ba1, B{\"a}Ba2] ensures the smoothness of solutions in the interior of $\Omega$. With respect to the regularity at $\Sigma$, we can apply Theorems 6.11 and 7.17 and Corollary 7.18 in the first of the papers cited above (using a bootstrapping process) and obtain smoothness on $\Sigma$ as well. Then $\psi_a\in\cap_{k=1}^\infty H^k({\bar \Omega})=C^\infty({\bar \Omega})$, for every non-positive integer $a$, and the limit function$$ \psi=\lim_{a\rightarrow -\infty}\psi_{a}$$ is also smooth on the compact manifold ${\bar \Omega}$ and harmonic on $\Omega$. Taking limits in (\ref{nonhomo2}) and taking into account that we also have the equality in the right side of (\ref{Reilly}), we see that $\psi$ is in fact a parallel spinor on the whole of ${\bar\Omega}$. And it is clear that $$
\phi=\lim_{a\rightarrow-\infty}\phi_{\ge a}=\psi_{|\Sigma}$$ as well. But the relation between the operators $D$ and its restriction ${\bf D}$ is \begin{equation} {\bf D}\phi=\frac{n}{2}H\phi-\gamma(N)D\psi-\nabla_N\psi. \end{equation} Then, [see B\"a1,BFGK,Bur] the spinor $\phi$ has to satisfy on the whole $\Sigma$ the equality $${\bf D}\phi=\frac{n}{2}H\phi.$$ The converse is obvious.
\begin{remark}{\em Suppose that, in Theorem 2 above, we choose the spinor $\phi$ on the boundary $\Sigma$ as an eigenspinor corresponding to the first positive eigenvalue $\lambda_1({\bf D})$ of the intrinsic Dirac operator ${\bf D}$ of the boundary. A direct application of Theorem 2 gives$$
\int_\Sigma\left(\lambda_1({\bf D})-\frac{n}{2}H\right)|\phi|^2\ge 0.$$ As a consequence, we obtain the following lower bound, which we first found in \cite{HMZ}:$$
\lambda_1({\bf D})\ge \frac{n}{2}\min_\Sigma H,$$
which improves, in the immersed case, the well-known lower bound by Friedrich [F] for boundary manifolds in ${\mathbb R}^{n+1}$.}
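For instance (a standard check, stated here only as an illustration), if $\Sigma={\mathbb S}^n$ is the unit round sphere bounding the unit ball $\Omega=B^{n+1}\subset{\mathbb R}^{n+1}$, then $H=1$ and the first positive eigenvalue of the intrinsic Dirac operator of ${\mathbb S}^n$ is well known to be exactly $n/2$, so the estimate above is attained and hence sharp.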
\end{remark}
\begin{remark}{\em It is also worth noting that, as a consequence of this estimate, if $\Sigma$ is a compact boundary in a Euclidean space and we know that $\lambda_1({\bf D})\le n/2$ and $H\ge 1$, then we have the equality and, {\em a fortiori}, the mean curvature $H$ of $\Sigma$ must be constant. Hence, from the Alexandrov Theorem, $\Omega$ is a round disc. This is a Euclidean analogue of the Min-Oo conjecture and was remarked to be true by Miao in the paper \cite{Mi2}, under the assumption that $\Sigma$ is isometric to a unit $n$-sphere. } \end{remark}
\begin{theorem}[Brown-York mass for mean-convex surfaces] Let $\Omega$ be a compact spin Riemannian manifold of dimension $n+1$ with non-negative scalar curvature $R\ge 0$ and having a non-empty boundary $\Sigma$ whose inner mean curvature $H>0$ is strictly positive (strictly mean-convex). Suppose that there is an isometric and isospin immersion from $\Sigma$ into another spin manifold $\Omega_0$ carrying a non-trivial parallel spinor field and let $H_0$ be its mean curvature with respect to any of its orientations. Then, we have$$
\int_\Sigma H\le \int_\Sigma |H_0|.$$
The equality implies that $H=|H_0|=H_0$. Then, if $n=2$, $\Omega_0$ is a domain in ${\mathbb R}^3$ and the two embeddings differ by a direct rigid motion.
\end{theorem}
\begin{remark}{\em Note that, in the original Shi-Tam result, the authors assume that the boundary $\Sigma$ is strictly convex. Then a well-known theorem by Pogorelov \cite{Po} and Nirenberg \cite{Ni} guarantees the existence of an isometric embedding into the Euclidean space. Here, we need to suppose the existence of this second isometric immersion. Moreover, we have $H_0>0$ {\em a fortiori}.} \end{remark}
{\bf Proof.} Denote by $\psi$ the parallel spinor on $\Omega_0$ and let $\phi=\psi_{|\Sigma}$ be its restriction to $\Sigma$ through the existing immersion. Let us recall that the parallelism of $\psi$ (see (22)) gives$$
{\bf D}\phi=\frac{n}{2} H_0\phi\qquad\hbox{and}\qquad |\phi|=1.$$ Now, we apply Theorem 2 and have the desired inequality$$
\int_\Sigma H\le \int_\Sigma |H_0|\,d\Sigma.$$
If the equality is attained, then $$\frac{n}{2}H_0\,\phi={\bf D}\phi=\frac{n}{2}H\,\phi $$ and so $H=H_0> 0$. Then, the immersion of $\Sigma$ into the second ambient space $\Omega_0$ is strictly mean-convex as well (with respect to the choice of inner normal corresponding to $\Omega$).
When $n=2$, from this equality and the fact that $K=K_0$ (because the two embeddings are isometric and so the Gauss curvatures coincide), we deduce that the two second fundamental forms coincide. The Fundamental Theorem of the Local Theory of Surfaces allows us to conclude that the two boundaries differ by a direct rigid motion of the Euclidean space.
\begin{remark}{\em It is obvious that the result above is a generalization of the positivity theorem for the Brown-York mass previously proved for strictly convex surfaces by Shi and Tam. In their proof, the solution of difficult boundary equations and the positivity of the ADM mass obtained by Schoen-Yau and Witten \cite{SY,Wi} in the context of asymptotically flat manifolds are essential components. Here, these difficulties are avoided and, as we have already remarked above, this {\em compact version} of the theorem implies the asymptotically flat version for the ADM mass (see \cite{HMRa}).} \end{remark}
\begin{corollary} Let $\Omega$ be a compact spin manifold of dimension $3$ with non-negative scalar curvature $R\ge 0$ and having a mean-convex boundary $\Sigma$ isometric to a sphere of any radius. Then, we have$$ \int_\Sigma H\le \sqrt{4\pi A(\Sigma)}$$ where $A(\Sigma)$ is the area of $\Sigma$. If the equality holds, then the two boundaries are spheres of the same radius. \end{corollary}
{\bf Proof.} It is clear that the boundary ${\mathbb S}^2$ of $\Omega$ admits an isometric and isospin (the sphere supports a unique spin structure) embedding into the Euclidean space ${\mathbb R}^3$ with $|H_0|=1/r$ and area $A(\Sigma)=4\pi r^2$, where $r>0$ is the radius of the sphere, so that $\int_\Sigma|H_0|=4\pi r=\sqrt{4\pi A(\Sigma)}$. The fact that the two embeddings of ${\mathbb S}^2$ are isometric allows us to finish.
\begin{corollary}[Cohn-Vossen rigidity theorem for mean-convex domains] Two isometric and isospin strictly mean-convex compact surfaces in the Euclidean space ${\mathbb R}^3$ must be congruent. \end{corollary} {\bf Proof.} Let $\Omega$ and $\Omega_0$ be the two domains determined in ${\mathbb R}^3$ by two corresponding surfaces identified by means of an isometry. Then, we can apply Theorem 3 interchanging the roles of $\Omega$ and $\Omega_0$ and applying the case of the equality.
\begin{remark}{\em The integral inequality in Corollary 4, for strictly convex surfaces of ${\mathbb R}^3$, is attributed to Minkowski (1901), although it is very probable that it was previously known to Alexandrov and Fenchel. Recently, it has been proved that the Minkowski inequality is not valid for arbitrary compact surfaces, although it does hold for the axisymmetric ones. It is also worth recalling the following conjecture by Gromov: if $\Sigma$ is the boundary of a compact Riemannian manifold $\Omega$ with scalar curvature $R\ge \sigma$, for a certain constant $\sigma$, then there exists a constant $\Lambda(\Sigma,\sigma)$ such that $$ \int_\Sigma H\,d\Sigma\le \Lambda(\Sigma,\sigma).$$}
\end{remark}
\begin{remark}{\em Much more recently, in the context of {\em fill-in} problems first posed by Bartnik, the authors of \cite{SWWZ} proved that, if $\Omega$ is the ball $B^{n+1}$ and $\gamma$ is a metric on the boundary ${\mathbb S}^n$ isotopic to the standard one with mean curvature $H>0$, then there is a constant $h_0=h_0(\gamma)$ such that$$ \int_\Sigma H\le h_0.$$ It is clear that this result and our Corollary 4 belong to the same family.} \end{remark}
\section{Ambients with positive scalar curvature} Until now we have supposed that the scalar curvature of our compact spin Riemannian manifold $\Omega$ satisfies $R\ge0$ (Euclidean context). Let us strengthen this positivity assumption to $R\ge n(n+1)$ (spherical context). This lower bound is just the constant value of the scalar curvature of the $(n+1)$-dimensional unit sphere. Then, by putting this assumption and the Schwarz inequality \begin{equation}
|D\psi|^2\le (n+1)|\nabla \psi|^2 \end{equation} (already used in Section 3) into the right side of the Weitzenb\"ock-Lichnerowicz formula, we obtain \begin{equation} \int_\Sigma\left(\left<{\bf D}\psi,\psi\right>
-\frac{n}{2}H|\psi|^2\right)\ge n\int_\Omega\left(\frac{n+1}{4}|\psi|^2-\frac{1}{n+1}|{D}\psi|^2\right), \end{equation} for every compact spin manifold $\Omega$ as above, with equality only for the twistor spinor fields on $\Omega$. From now on, by using this integral inequality, we will work in a similar, though slightly more elaborate, way as in Theorem 2, and will get the following result.
\begin{theorem} Let $\Omega$ be an $(n+1)$-dimensional compact spin Riemannian manifold whose scalar curvature is constant $R = n(n+1)$ and having a non-empty boundary $\Sigma$ whose inner mean curvature $H \ge 0$ is non-negative (mean-convex). Suppose also that $\Sigma$ does not admit harmonic spinors. Then, for every spinor $\phi\in H^1(\Sigma)$, we have$$
\int_\Sigma|{\bf D}\phi||\phi| \ge\frac{n}{2}\int_\Sigma|\phi|^2 \sqrt{H^2+1}.$$ The equality holds if and only if $\phi$ is a spinor field coming from a real Killing spinor $\psi$ defined on $\Omega$, and so$$ {\bf D} \phi=\frac{n}{2}H\phi-\frac{n}{2}\gamma(N)\phi.$$ \end{theorem}
\begin{remark}{\em Note that our hypotheses impose the {\em equality} $R=n(n+1)$ and not the perhaps expected inequality $R\ge n(n+1)$, conjectured by Min-Oo by analogy with the {\em flat} case. Obviously the condition $R\ge n(n+1)$ is necessary, because one can otherwise perturb the hemisphere at an interior point so that $R\ge n(n+1)-\varepsilon$, for some small $\varepsilon > 0$, without changing the assumptions on the boundary \cite{HW1}. Moreover, the other possibility $R>n(n+1)$ is ruled out by the counterexamples built by Brendle, Marques and Neves [BMN], since all of them require at least one point of the bulk manifold where this strict inequality holds. Hence, we are led to assume the equality.} \end{remark}
{\bf Proof}. Given $a\in {\mathbb Z}$, $a\le 0$, consider now the self-adjoint eigenvalue problem for the Dirac operator $D$ on the bulk manifold $\Omega$ \begin{equation} \left\{\begin{array}{ll} D\psi_a=\mu_a\,\psi_a \\ \psi_{\ge a}=0, \end{array}\right. \end{equation} corresponding to the Dirac operator on $\Omega$ subject to the usual APS elliptic boundary condition as in the section above. Since the boundary does not carry harmonic spinors, the corresponding $D$ is symmetric on $\Omega$ and its eigenvalues are real numbers. If $\psi_a$ is a solution to (25), from the Schwarz inequality (23), we get from the spinorial Reilly inequality$$ 0\ge \int_\Sigma\left(\left<{\bf D}\psi_a,\psi_a\right>-\frac{n}{2}H|\psi_a|^2\right)\ge n\int_\Omega\left(\frac{n+1}{4}-\frac{\mu^2_a}{n+1}\right)|\psi_a|^2. $$
Indeed, since we have the $L^2$-orthogonal decomposition $\psi_{a|\Sigma}=\psi_{\ge a}+\psi_{<a}$ with $\psi_{\ge a}=0$ on $\Sigma$, and $H\ge 0$ by hypothesis, the boundary integral on the left side is non-positive. Then$$ \mu_a^2\ge \frac{(n+1)^2}{4},$$ with equality if and only if $\psi_a$ is a twistor spinor and, in this case, a real Killing spinor on $\Omega$ with $${\nabla_X}\psi_a=-\frac{\mu_a}{n+1}\gamma(X)\psi_a, \qquad\forall X\in T\Omega.$$ The equality would imply as well$$ \int_\Sigma\left(\left<{\bf D}\psi_{<a},\psi_{<a}\right>
-\frac{n}{2}H|\psi|^2\right)=0, $$
and then ${\psi_a}_{|\Sigma}=0$. But a real Killing spinor has constant length and so $\psi_a$ would be identically null on the whole of $\Omega$. As a consequence, the first eigenvalue $\mu_1(a)$ of (25) satisfies$$ \mu_1(a)>\frac{n+1}{2},$$ for every non-positive integer $a$. Hence, there exists a unique solution $\psi_a\in H^1(\Omega)\cap H^{\frac1{2}}(\Sigma)$ of the problem \begin{equation} \left\{\begin{array}{ll} D\psi_a=\frac{n+1}{2}\,\psi_a \\ \psi_{\ge a}=\phi_{\ge a}, \end{array}\right. \end{equation} for each $a\in{\mathbb Z}$, $a\le 0$, and for every $\phi\in L^2(\Sigma)$. Take such a solution to (26) and put it in (24). Then \begin{equation}\label{(4)}
0\le\int_\Sigma\left(\left<{\bf D}\psi_a,\psi_a\right>-\frac{n}{2}H|\psi_a|^2\right) ,\qquad \forall a\in{\mathbb Z},\ a\le 0, \end{equation} which is exactly the same expression that we obtained in the flat ambient case, but here the equality would be attained only in the case where $\psi_a$ is a real Killing spinor (and not a parallel one)$$ \nabla_X\psi_a=-\frac1{2}\gamma(X)\psi_a,\qquad\forall X\in T\Omega.$$ From this point, by working in exactly the same way as in the proof of Theorem 2, we arrive at \begin{equation}
0\le\lim_{a\rightarrow-\infty}\int_\Sigma\left(|{\bf D}\phi_{\ge a}||\phi_{\ge a}|-\frac{n}{2}H|\phi_{\ge a}|^2\right). \end{equation} But, with this choice of $\psi_a$ as a solution to (26), the left side in the integral Reilly inequality after (25) vanishes. Hence, $\psi_a$ is a twistor spinor and also $$ D\psi_a=\frac{n+1}{2}\psi_a$$
on the whole of $\Omega$. That is, $\psi_a$ is a real Killing spinor. More precisely, we arrive exactly at the expression which we had already rejected several times in the previous {\em reductiones ad absurdum}, namely \begin{equation} \nabla_X\psi_a=-\frac1{2}\gamma(X)\psi_a,\qquad\forall X\in T\Omega. \end{equation} By using (7) and the equality above, we have \begin{equation} {\bf D}\psi_{a}=\frac{n}{2}H\psi_{a}-\frac{n}{2}\gamma(N)\psi_{a}, \end{equation} along the boundary $\Sigma$. Taking into account the orthogonal $L^2$ decomposition $\psi_a=\psi_{\ge a}+\psi_{<a}$, the fact that ${\bf D}$ preserves this decomposition and that $\gamma(N)$ reverses it pointwise, if we multiply scalarly by the spinor $\gamma(N)\psi_{a}$ and integrate on $\Sigma$, we get$$
\int_\Sigma \left<{\bf D}\psi_{a},\gamma(N)\psi_{a}\right>=-\frac{n}{2}\int_\Sigma|\psi_{a}|^2.$$ By working in an analogous, but not exactly, equal way as in (20), we obtain \begin{equation}
\lim_{a\rightarrow-\infty}\int_\Sigma |{\bf D}\phi_{\ge a}||\phi_{\ge a}|\ge \lim_{a\rightarrow-\infty}\frac{n}{2}\int_\Sigma|\phi_{\ge a}|^2, \end{equation} where the convergence is in $H^{\frac1{2}}(\Sigma)$ and, so, strongly in $L^2(\Sigma)$.
In order to obtain the inequality in the Theorem, note that (28) and (31) provide integral sequences which converge strongly in $L^2(\Sigma)$. Then, we can extract from each of them a subsequence converging a.e. (and so, since $\phi$ is continuous, converging everywhere) to the corresponding limit function. These two subsequences will probably be different but, since they have been extracted from the same converging sequence, we will label them with the same indices as the original ones, and we have$$
\lim_{a\rightarrow -\infty}|{\bf D}\phi_{\ge a}||\phi_{\ge a}|\ge\frac{n}{2}H|\phi|^2\quad\mbox{and}\quad
\lim_{a\rightarrow -\infty}|{\bf D}\phi_{\ge a}||\phi_{\ge a}|\ge \frac{n}{2}|\phi|^2. $$ As a consequence, we obtain$$
\lim_{a\rightarrow -\infty}|{\bf D}\phi_{\ge a}||\phi_{\ge a}|\ge\frac{n}{2}\sqrt{1+H^2}\,|\phi|^2,$$ which is the inequality we were looking for.
For the case of the equality, as in the proof of Theorem 2, we can use the standard regularity results in [BaB\"a1, BaB\"a2] to show that the solution $\psi_a$ to the boundary problem (26) is smooth on ${\bar \Omega}$ for every integer $a\le 0$. As before, with respect to the boundary $\Sigma$, we can apply Theorem 7.17 and Corollary 7.18 in [BaB\"a1] repeatedly in a bootstrapping process and obtain smoothness on $\Sigma$ as well. Then $\psi_a\in\cap_{k=1}^\infty H^k({\bar \Omega})=C^\infty({\bar \Omega})$, for every non-positive integer $a$, and the limit function$$ \psi=\lim_{a\rightarrow -\infty}\psi_{a}$$ is also smooth on the compact manifold ${\bar \Omega}$ and satisfies $D\psi=\frac{n+1}{2}\psi$ on $\Omega$ as well. Taking limits in (29) and substituting in the integral Reilly inequality, we see that $\psi$ is a real Killing spinor on the whole of ${\bar\Omega}$. Also it is clear that $$
\phi=\lim_{a\rightarrow-\infty}\phi_{\ge a}=\psi_{|\Sigma}.$$ But the relation between the operator $D$ and its restriction ${\bf D}$ is \begin{equation} {\bf D}\phi=\frac{n}{2}H\phi-\gamma(N)D\psi-\nabla_N\psi \end{equation} and the spinor $\phi$ comes from a real Killing spinor on $\Omega$ with constant $-1/2$. Hence, it has to satisfy on the whole of $\Sigma$ the equality $${\bf D}\phi=\frac{n}{2}H\phi-\frac{n}{2}\gamma(N)\phi.$$ The converse is obvious.
From Theorem 6 we obtain a lower estimate, new as far as we know, for the first eigenvalue of the Dirac operator of a hypersurface in a context of positive scalar curvature. \begin{corollary}
Consider a compact spin Riemannian manifold $\Omega$ of dimension $n+1$ with scalar curvature $R=n(n+1)$ and mean-convex boundary $\Sigma$. Suppose that $\Sigma$ does not support harmonic spinors and let $\phi$ be an eigenspinor corresponding to the first positive eigenvalue $\lambda_1({\bf D})$ of the intrinsic Dirac operator ${\bf D}$ of the boundary. A direct application of Theorem 6 gives$$
\int_\Sigma\left(\lambda_1({\bf D})-\frac{n}{2}\sqrt{1+H^2}\right)|\phi|^2\ge 0.$$ As a consequence, we obtain the following lower bound:$$
\lambda_1({\bf D})\ge \frac{n}{2}\min_\Sigma \sqrt{1+H^2},$$
which improves the well-known lower bound by Friedrich [F] for boundary hypersurfaces (the Gau{\ss} equation for the scalar curvature in the unit $(n+1)$-dimensional sphere gives
$$
\sqrt{\frac{nR}{4(n-1)}}\le \frac{n}{2}\sqrt{1+H^2}$$
and the equality is attained only by the umbilical hypersurfaces).
\end{corollary}
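As a simple consistency check (added here only as an illustration), consider a geodesic sphere of radius $r\in(0,\pi/2]$ in the unit sphere ${\mathbb S}^{n+1}$: it is intrinsically a round sphere of radius $\sin r$, so the first positive eigenvalue of its Dirac operator is $n/(2\sin r)$, while its mean curvature is $H=\cot r$, and hence $\frac{n}{2}\sqrt{1+H^2}=\frac{n}{2\sin r}$. Thus the estimate above is attained by these umbilical hypersurfaces.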
\begin{remark}{\em It is also worth noting that, as a consequence of this estimate, if $\Sigma$ is the compact boundary of such a manifold $\Omega$ and we know that $\lambda_1({\bf D})\le n/2$ and $H\ge 0$, then we have the equality and, {\em a fortiori}, the mean curvature $H$ of $\Sigma$ must be identically null. Moreover, $\Omega$ supports a non-trivial real Killing spinor (see [B\"a]). Hence, $\Omega$ is close to being a spherical domain bounded by a minimal hypersurface. This would be the closest statement to a solution of the spherical analogue of the Min-Oo conjecture.} \end{remark}
As in the flat case, we can obtain a kind of positivity for a possible quasi-local mass in this new context.
\begin{theorem}[Brown-York mass in the mean-convex spherical case] Let $\Omega$ be a compact spin Riemannian manifold of dimension $n+1$ with scalar curvature $R=n(n+1)$ and having a non-empty boundary $\Sigma$ whose inner mean curvature $H\ge 0$ is non-negative (mean-convex). Suppose that there is an isometric and isospin immersion from $\Sigma$ into another spin manifold $\Omega_0$ carrying a non-trivial real Killing spinor field and let $H_0$ be its mean curvature with respect to any of its orientations. Then, we have$$ \int_\Sigma\sqrt{1+ H^2}\le \int_\Sigma\sqrt{1+H_0^2},$$ provided that the boundaries do not admit harmonic spinors. The equality implies that $H=H_0$. Then, if $n=2$, $\Omega_0$ is a domain in ${\mathbb S}^3$ and the two embeddings differ by a direct rigid motion.
\end{theorem}
{\bf Proof.} Denote by $\psi$ the spinor on $\Omega_0$ and let $\phi=\psi_{|\Sigma}$ be its restriction onto $\Sigma$ through the given immersion. Let us recall that the Killing equation satisfied by $\psi$ gives$$
{\bf D}\phi=\frac{n}{2} \sqrt{1+H_0^2}\,\phi\qquad\hbox{and}\qquad |\phi|=1.$$ Now, we apply Theorem 2 and have the desired inequality$$ \int_\Sigma \sqrt{1+H^2}\le \int_\Sigma \sqrt{1+H_0^2}.$$
If the equality is attained, then $$\frac{n}{2}\sqrt{1+H_0^2}={\bf D}\phi=\frac{n}{2}\sqrt{1+H^2} $$ and so $H=H_0\ge 0$. Then, the immersion of $\Sigma$ into the second ambient space $\Omega_0$ is mean-convex as well (with respect to the choice of inner normal to $\Omega$).
When $n=2$, from this equality and the fact that $K=K_\phi$ (because the two embeddings are isometric and preserve the Gauss curvatures), we deduce that the two second fundamental forms coincide. The Fundamental Theorem of the Local Theory of Surfaces allows us to conclude that the two boundaries differ by a direct rigid motion of the Euclidean space.
\begin{corollary}[Cohn-Vossen rigidity theorem in the sphere] Two isometric and isospin mean-convex compact surfaces embedded in a sphere ${\mathbb S}^3$ must be congruent, provided they do not admit harmonic spinors. \end{corollary} {\bf Proof.} Let $\Omega$ and $\Omega_0$ be the two domains determined in ${\mathbb S}^3$ by two corresponding surfaces identified by means of an isometry. Then, we can apply Theorem 8 interchanging the roles of $\Omega$ and $\Omega_0$ and applying the case of the equality.
\end{document}
May 2018, 17(3): 1121-1145. doi: 10.3934/cpaa.2018054
Ground state solutions for asymptotically periodic quasilinear Schrödinger equations with critical growth
Yanfang Xue 1,2 and Chunlei Tang 1,*
School of Mathematics and Statistics, Southwest University, Chongqing, 400700, China
School of Mathematics and Statistics, Xin-Yang Normal University, Xinyang, 464000, China
* Corresponding author: Chunlei Tang
Received January 2017 Revised November 2017 Published January 2018
Fund Project: The research is supported by the National Natural Science Foundation of China (No. 11471267, 11601438) and the Chongqing Research Program of Basic Research and Frontier Technology (No. cstc2017jcyjAX0331).
In this paper, we are concerned with the existence of ground state solutions for the following quasilinear Schrödinger equation:
$-\Delta u+V(x)u-\Delta (u^2)u = K(x)|u|^{2\cdot 2^*-2}u+g(x,u), \quad x\in \mathbb{R}^N \qquad (1)$
where $N\ge 3$ and $V,\ g$ are asymptotically periodic functions in $x$. By combining variational methods and the concentration-compactness principle, we obtain a ground state solution for equation (1) under a new reformative condition which unifies the asymptotic processes of $V$ and $g$ at infinity.
Keywords: Asymptotically periodic, Sobolev critical exponent, ground state solution, Nehari manifold.
Mathematics Subject Classification: Primary: 35A15, 35B33, 35J60.
Citation: Yanfang Xue, Chunlei Tang. Ground state solutions for asymptotically periodic quasilinear Schrödinger equations with critical growth. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1121-1145. doi: 10.3934/cpaa.2018054
Shirin Rahnamaei Yahiabadi1,
Behrang Barekatain ORCID: orcid.org/0000-0001-5344-62821,2 &
Kaamran Raahemifar3,4
Recently, the vehicular ad hoc network (VANET) has attracted great attention from many service providers in urban environments. These networks can not only improve road safety and prevent accidents, but also provide a means of entertainment for passengers. However, according to recent studies, efficient routing still remains a major open issue in VANETs. In particular, broadcast storms can considerably degrade routing performance. To address this problem, this research proposes TIHOO, an enhanced intelligent hybrid routing protocol based on improved fuzzy and cuckoo approaches to find the most stable path between a source and a destination node. In TIHOO, the route discovery phase is limited intelligently using the fuzzy logic system; by limiting the route request messages, the imposed extra overhead can be efficiently controlled. Moreover, as part of an intelligent hybrid approach, the improved cuckoo algorithm, one of the most effective metaheuristics especially in large search spaces, intelligently selects the most stable and optimal route among the known routes by calculating an enhanced fitness function. The simulation results using the NS2 tool demonstrate that TIHOO considerably improves network throughput, routing overhead, packet delivery ratio, packet loss ratio, and end-to-end delay compared to similar routing protocols.
One group of ad hoc networks is the mobile ad hoc network (MANET), a collection of wireless mobile nodes which operate independently of central management with little or no use of infrastructure. These networks are characterized by dynamic topologies, energy limitations, limited bandwidth, etc. A special category of MANETs is the vehicular ad hoc network (VANET) (Fig. 1), which has a number of unique characteristics such as predictable mobility, freedom from energy constraints, and rapid changes in network topology [1, 2]. In contrast to wireless sensor networks [3, 4] or MANETs, the long-life batteries of vehicles prevent energy-induced constraints in these networks. In VANETs, communications occur between adjacent vehicles and between vehicles and fixed units placed at special locations such as intersections and parking lots. Accordingly, there are two types of communication in these networks: vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). In V2V communications, vehicles adjacent to each other share data via dedicated short-range communications (DSRC) and wireless access in vehicular environments (WAVE) [5], among others, while in V2I, vehicles connect to roadside infrastructure to exchange information [6]. Inter-vehicular networks improve traffic efficiency and promote safe driving as well as the welfare and comfort of passengers. However, numerous challenges impact the achievements of vehicular ad hoc networks, including signal loss, limited bandwidth, security and privacy, connectivity, and routing. Timely delivery of warning messages at the time of collisions and accidents can prevent further accidents, an objective fulfilled through reliable data packet routing. Designing a routing protocol which can deliver data packets in the shortest possible time and with the fewest packet losses is essential to improving the safety of vehicles and winning users' satisfaction. The high mobility of vehicles, limitations on wireless resources, and the characteristic losses of wireless channels present a serious challenge to establishing a route from the source node to the destination node through intermediate ones. The effectiveness of a route depends on all the wireless links which make it up [5, 7, 8].
Vehicular ad hoc network. A special category of MANETs is the vehicular ad hoc network (VANET). In VANETs, communications occur between adjacent vehicles and between vehicles and fixed units placed at special locations such as intersections and parking lots
Routing faces serious challenges in these networks. The movement of vehicles constantly changes the network topology [9]. In addition, large-scale network expansion significantly increases routing overhead [2, 10, 11]. Another challenge is to prevent VANET routing from being trapped in a local optimum [12].
Like other networks such as wireless mesh and peer-to-peer networks, VANETs use a variety of techniques such as network coding for data sharing in order to enhance routing effectiveness [13,14,15]. Various methods have been proposed to improve routing in VANETs. Some routing protocols draw on the topology of the intermediate links between the source and destination nodes to find the optimal route [16], while others use the geographical position of vehicles to design a routing protocol [17]. The latter focus on the location of connections and predict the next locations of the vehicles to find the appropriate route [18,19,20]. Fuzzy logic has also proven very effective in designing routing protocols [21, 22]. A number of studies apply bio-inspired methods for routing; these methods are very effective for large-scale VANETs and at the same time offer solutions of low computational complexity [23,24,25]. However, as mentioned in [26], there is no routing protocol in VANETs that performs satisfactorily in every scenario and completely fulfills all routing objectives. Thus, a hybrid method can prove very effective. In order to address the problems in previous studies, this paper proposes TIHOO, an enhanced hybrid routing protocol. This protocol intelligently employs fuzzy [27, 28] and cuckoo [29] approaches and introduces novel fitness functions in which many parameters important for finding the most stable path between the source and the destination node are considered.
In other words, the route discovery phase is limited using the fuzzy logic system and, by limiting the route request messages, the extra overhead is partially controlled. After the routes have been identified at the source node, the cuckoo search algorithm selects the most stable and optimal route among the known routes by calculating a fitness function based on criteria such as route lifetime, route reliability, and average available buffer. Simulation results using the NS2 (network simulator 2) simulator show values of 55.52, 65.85, 79.27, 31.67, and 78.19% for the most important network parameters, namely packet delivery ratio, throughput, end-to-end delay, routing overhead, and packet loss ratio, respectively.
The rest of this paper is organized as follows. Section 2 reviews previous research in this area. Section 3 provides a brief review of some basic concepts. Section 4 explains the method and experimental approach. Section 5 summarizes the open issues in the related works. Section 6 discusses the proposed routing protocol (TIHOO) in detail and how it addresses the open issues mentioned in Section 5. Performance evaluation parameters and simulation results are discussed in Section 7. Finally, the paper is concluded in Section 8.
As mentioned before, routing is a challenging issue in VANETs. The existing protocols cannot sufficiently satisfy the routing needs of these networks due to their dynamic nature, which is a result of traffic conditions and limitations. This section reviews the most important attempts at developing routing algorithms for VANETs. Adaptive fuzzy multiple attribute decision routing in VANETs (AFMADR) [17] is a location-based protocol in which data packet carriers are employed for selection of the next hop. In this protocol, every candidate vehicle is characterized by distance, direction, road density, and location, and each characteristic receives a fuzzy score. The vehicle receiving the highest score is selected for carrying the data packets. This protocol has a desirable profile in terms of data packet delivery rate and transfer delay. However, a more careful selection of carriers could significantly improve its delivery rate; for example, city buses, which visit different locations on a fixed schedule, are suitable carrier candidates. In contrast to AFMADR, TIHOO draws on fuzzy logic for the selection of appropriate links for receiving route requests (RREQs). Stable routes are determined by designing effective fitness functions based on the lifetime of the links, among other factors; choosing a stable route improves the packet delivery rate and reduces packet loss. Moridi and Barati [22] have proposed a trust-based multi-level routing protocol using tabu search in VANETs. In the first step, in order to improve the Ad hoc On-demand Distance Vector (AODV) routing protocol, fuzzy logic is used among cluster members. At this level, the reliability and sustainability of a link are used as the input criteria of the fuzzy system; the ultimate goal is to choose the most reliable link. Afterwards, the best links of cluster heads are presented in a tabular list, selected based on distance, speed, and direction. This protocol is credited with fewer link failures and lower packet losses; however, its overhead has not been compared with that of other protocols. Dharani Kumari and Shylaja [11] have proposed a multi-hop geographical routing protocol for V2V communications which improves the Greedy Perimeter Stateless Routing (GPSR) protocol. The AHP-based Multimetric Geographical Routing Protocol (AMGRP) improves the data forwarding mechanism of geographical routing via four routing criteria: mobility, link lifetime, node density, and node position. The routing criteria are organized hierarchically so that they can be compared as a general unit once similar parameters have been grouped, and the sub-criteria of each group are tested and compared with each other. This protocol identifies the next hop by computing a weighted utility function over a predefined range, which can guarantee an improved forwarding process. Although the AMGRP routing protocol has had some success in improving the data packet delivery rate compared to other protocols, it has failed to control the overhead. F-ANT [2] provides an improved, fuzzy logic-based framework for the ant colony optimization (ACO) protocol. Devised for VANETs, it draws on factors such as bandwidth, the strength of received signals, and a congestion criterion for assessing link credibility.
Despite its high packet delivery rate and low end-to-end delay, this protocol is challenged by high overhead due to employing the ant colony algorithm along with fuzzy logic, whereas the metaheuristic algorithm proposed here produces considerably lower overhead than the ant colony algorithm. By combining the advantages of proactive and reactive routing, a reliable routing protocol (R2P) for VANETs was developed in [30]. R2P uses a route discovery mechanism to detect available routes to the destination and then selects the safest route, which is not necessarily the shortest one. This routing method is more efficient than previous methods and shortens delays. However, it is not superior to other methods in terms of overhead, and its packet delivery is low at some of its default simulation speeds. PFQ-AODV, presented in [21], learns the optimal route by applying Q-learning and fuzzy constraints on top of the AODV protocol. This protocol employs fuzzy logic and considers a number of criteria such as available bandwidth, link quality, and vehicle direction to assess each wireless link. It then learns the best route through Hello and RREQ messages. The flexible and practical routing that this protocol offers is due to its independence from lower layers; however, it does not perform desirably in terms of overhead. The intelligent protocol proposed in this article (TIHOO) limits route discovery through fuzzy logic and decreases the number of control packets in the network to keep the overhead in check.
The method introduced in [31] is a zone-based routing protocol that utilizes fuzzy logic and the bacterial foraging optimization algorithm (BFOA) to detect unsafe situations associated with rapid topological changes and to find the best possible route to the destination node. This protocol applies three techniques to enhance the sustainability of the selected route. First, zones are created based on the mobility of vehicles and their dispersion characteristics, and nodes are assigned to different zones. Once a node generates the Hello message in the data transfer phase, it attempts to find the route between the source and destination. Fuzzy logic is implemented in the next stage to determine the status of the links based on link quality, bandwidth, and mobility. BFOA is then applied to find the best route based on the results of the fuzzy logic and the credibility of the links. Although this method produces lower delays and higher delivery rates, its computations drastically increase in complexity as the population grows, in contrast to the low computational complexity offered by the metaheuristic method used here [31].
AODV [32] is a Bellman-Ford-based dynamic protocol that creates routes when a node needs to send data packets [33]. AODV comprises three phases: route discovery, data transfer, and route maintenance. What primarily distinguishes AODV from other protocols is its use of sequence numbers. When information needs to be transferred in this protocol, the source node broadcasts an RREQ control packet to all neighboring nodes. These nodes repeat this process until the destination node is confirmed as the receiver. In the next stage, the destination node unicasts a route reply (RREP) packet back to the source node, and data transfer begins once the route has been established. If links break at this stage, the maintenance phase is invoked to find a new route. AODV suits networks characterized by high mobility; however, it consumes large amounts of bandwidth due to periodic beaconing. The method in [34], which attempts to improve GPSR routing performance, controls network congestion via the buffering of the nodes and as a result displays higher efficiency in terms of delay and packet loss. Two factors, the distance between the forwarding node and the destination node and the remaining buffer length of the forwarding nodes, are considered in this method to calculate the forwarding probability of a given node; the nodes assigned higher probabilities are selected as forwarding nodes. However, this protocol fails to resolve the local optimum issue and provides no recovery strategy [34]. The fuzzy control-based AODV routing (FCAR) protocol of [35], built on the traditional AODV platform, employs both fuzzy logic and fuzzy control to decide the route. The length of the routes and the percentage of vehicles taking the same direction are the two criteria of route assessment. Regarding packet loss and end-to-end delay, FCAR outperforms AODV; however, it produces extra overhead compared to AODV when there are few network nodes. In contrast, TIHOO manages to control the overhead relatively satisfactorily by using fuzzy logic for the selection of links receiving RREQs and thereby limiting the route discovery phase.
In [36], the performance of each route segment is evaluated using a transmission quality criterion, a combination of connectivity and the data packet delivery rate, so that each segment can be selected dynamically and the route contains the best path. This method improves data packet forwarding and delivery rates; however, it can get stuck in a local maximum, and its recovery strategy leads to end-to-end delay because it uses greedy forwarding.
In [37], an algorithm is proposed that predicts the variability of the receiver queue length and the signal and noise rates of intermediate links. The results of this prediction determine the usefulness of the relay vehicles in the candidate set. The utility of the vehicles is computed by a weighting algorithm whose weights are the variances of these two criteria, and the relay priority of each relay device is determined by its usefulness. This algorithm does not consider the effects of parameters such as packet length and hop count on the obtained results.
In [38], a unique opportunistic protocol based on attractor selection is presented, which can adapt routing feedback packets to the dynamics and complexity of the environment. A multi-attribute decision-making strategy is applied to reduce the number of next-hop candidates and thereby improve the effectiveness of the attractor selection mechanism. When a route is detected, URAS either keeps the current path or finds a better route based on the current path's efficiency; this self-evaluation continues until the best route is found. Some of the latest research studies are summarized in Table 1.
Table 1 Latest research studies
In this section, we provide a brief review of some basic concepts: the fuzzy logic system and the cuckoo search algorithm.
The fuzzy logic system
Fuzzy logic variables have a truth value that ranges from 0 (false) to 1 (true).
Fuzzy logic control comprises four main phases: fuzzification, a rule base, inference, and defuzzification. In the first phase, crisp numerical values are converted into fuzzy values using linguistic variables and predefined membership functions. The second phase provides a limited set of rules which are applied in the decision-making process and are intended to guide the final fuzzy value.
In the third phase, inference, fuzzy decisions are generated through the rules available in the rule base. The rules of this database are a set of if-then rules which relate the fuzzy input variables to the output variables. In the last phase, using output membership functions that are defined in advance, defuzzification methods convert the fuzzy values back into crisp numerical values. Fuzzy logic is a very useful method for sophisticated processes that cannot be described by simple mathematical models, as well as for non-linear models. Since fuzzy logic can handle inaccurate and variable information, it is used to solve routing-associated problems without applying mathematical models; it can process approximate data through linguistic, non-numerical variables [8].
Cuckoo search algorithm
The cuckoo search (CS) algorithm [29], presented by Yang in 2009, is inspired by the lifestyle of the cuckoo bird, which has supplied ideas for many optimization algorithms. To have its chicks raised, the bird lays its eggs in the nests of other birds: a cuckoo pushes out one of the host bird's eggs and lays an egg that imitates the host's eggs. The eggs with the highest resemblance to the host's have a much greater chance of being hatched into mature cuckoos, while the eggs discovered by the host are destroyed [39]. In CS, every egg in a host nest represents a solution whose quality is computed through a fitness function; if a new solution is found to be superior to an old one, a replacement takes place [29].
Yang and Deb described CS in terms of three idealized rules [29]; a minimal code sketch follows the list:
Each cuckoo lays one egg at a time and dumps its egg in a randomly chosen nest;
The best nests with high-quality eggs will be carried over to the next generation;
The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability in [0, 1].
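As an illustration of these three rules, the following minimal sketch shows a generic cuckoo search loop that maximizes a user-supplied fitness function, with Lévy-distributed step lengths generated by Mantegna's algorithm. The function names, the step scale 0.01, and the parameter values are our own illustrative choices rather than part of [29].

import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, n_nests=30, pa=0.25, iters=200, lo=0.0, hi=1.0):
    clamp = lambda x: max(lo, min(hi, x))
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = max(nests, key=fitness)
    for _ in range(iters):
        # Rule 1: a cuckoo lays one egg in a randomly chosen nest; the egg is
        # generated by a Levy flight whose step is relative to the current best.
        i = random.randrange(n_nests)
        new = [clamp(x + 0.01 * levy_step() * (x - b)) for x, b in zip(nests[i], best)]
        j = random.randrange(n_nests)
        if fitness(new) > fitness(nests[j]):
            nests[j] = new
        # Rule 3: a fraction pa of the worst nests is abandoned and rebuilt.
        nests.sort(key=fitness)
        for k in range(int(pa * n_nests)):
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
        # Rule 2: the best nest is carried over to the next generation.
        best = max(nests + [best], key=fitness)
    return best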
Recent studies have tried to address the routing issue in VANETs by introducing enhanced methods. However, they have not carefully considered the problem of the large number of messages exchanged in the routing process. Moreover, like other wireless networks, VANETs suffer from unwanted broadcast storms both in routing and in data dissemination. In order to efficiently address these challenges, the first phase of TIHOO, contrary to previous methods where fuzzy logic was applied as a tool to find the best path, uses fuzzy logic intelligently to limit the route discovery phase. By limiting RREQ control packets, the additional overhead created in the network is reduced and the network traffic is prevented from creating a storm.
On the other hand, by reducing the number of wandering packets in the network, bandwidth usage is also decreased. This not only considerably reduces the routing overhead but also decreases the number of discovered routes, which helps the source select the best path in less time. It is important to note that fuzzy logic is applied in such a way that only the paths where link failures are very likely to occur are removed.
The next required step is to find the most stable path among the paths introduced by the first phase. An efficient new fitness function is introduced and used in this phase: the cuckoo search (CS) algorithm selects the most stable route via intelligent configuration of the fitness function based on parameters such as the link lifetime, the reliability of the route, and its available buffer.
Including the link lifetime and reliability in the fitness function leads to the selection of more stable paths, resulting in fewer path failures and higher delivery rates for data packets. Including the available buffer parameter in the fitness function leads to the selection of paths with lower traffic load and is therefore effective in reducing the end-to-end delay.
This algorithm, as one of the most effective metaheuristic algorithms, selects the most stable path with higher execution speed and convergence rate than other metaheuristic algorithms. The combination of these two phases yields an enhanced routing protocol, named TIHOO, for efficient routing in vehicular ad hoc networks.
Finally, TIHOO is evaluated with the NS2 simulation tool under various conditions and compared with similar works. The obtained results indicate that TIHOO significantly improves routing in VANETs and makes this process more effective and efficient, especially under high network traffic.
VANETs are dynamic networks; the frequent changes in the environment caused by various factors, such as traffic conditions and changes in road topology, necessitate a sufficiently adaptive routing protocol. Multiple routing protocols have been proposed for this purpose, a number of which were presented in the previous section. Although these protocols make great efforts to fulfill the routing objectives and improve routing conditions, they are challenged by serious shortcomings.
Link failures
The lifetime of the links, which is affected by the motion of vehicles, is an influential factor in identifying stable routes. Ignoring this factor, as a number of routing protocols do, leads to consecutive route failures, which in turn result in lower data packet delivery rates and an increased number of packet losses.
In TIHOO, the intelligent configuration of the parameters applied in the fitness function of the CS [29] metaheuristic algorithm leads to the identification and selection of stable routes with a lower possibility of failure and, naturally, an improved data packet delivery rate as well as fewer cases of packet loss.
High overhead
Another serious shortcoming of some routing protocols [21, 35] is their inability to control the number of control packets circulating over the network. These factors contribute to higher overhead and increased bandwidth consumption, which negatively affects the efficiency of the protocols. Moreover, using some metaheuristic algorithms to improve routing protocols and enhance the quality of the provided services [2, 31] suffers from issues such as high overhead and overly complex computations. Since the fuzzy logic system [27] can handle inaccurate and variable information, it is used to solve routing-associated problems without applying mathematical models. Fuzzy logic uses linguistic, non-numerical variables and can thus process approximate data [8, 28].
Although fuzzy-based approaches were employed in previous research for deciding the route, TIHOO attempts to use fuzzy logic in a new manner: it intelligently selects appropriate links for sending RREQ packets and at the same time limits the broadcasting of the route discovery phase, decreasing the number of control packets and lowering the overhead imposed on the system. Also, the lower overhead and simpler computations of the metaheuristic algorithm of this research compared to other metaheuristic algorithms, such as ACO and BFOA, make this bio-inspired algorithm a desirable candidate for the proposed protocol.
The proposed TIHOO protocol
TIHOO is an intelligent hybrid routing protocol, as mentioned in the previous section. This protocol is discussed in detail in the following paragraphs.
General description of TIHOO
TIHOO is an optimized hybrid protocol developed for routing in VANETs and is implemented on top of the AODV routing protocol. In contrast to previous methods, this research uses fuzzy logic to limit the route discovery phase; the selection of effective input parameters contributes to the selection of more stable links to receive route request control packets. In addition, the CS algorithm selects the most stable route via intelligent configuration of the fitness function based on parameters such as the link lifetime, the reliability of the route, and its available buffer. The CS algorithm is one of the most effective metaheuristic algorithms, with higher execution speed and convergence rate than other metaheuristic algorithms, and it is able to discover the best route in a reasonable time [40]. TIHOO consists of two main phases.
Phase I: Broadcast limitation phase
The first phase of this protocol starts when a node intends to send a data packet but no route from source to destination is available. In VANETs, the communication between the source and destination nodes occurs over multiple hops, and the main purpose of the communication is routing the messages. At runtime, a route is created through several moving vehicles; if a vehicle on the route moves outside the communication range of another, the route's links break. To create and maintain routes, the vehicles send and receive control packets, and increasing the number of these packets considerably affects the quality of service in VANETs. The first phase of this protocol, which relies on the fuzzy logic system, sends route request packets to a selected subset of neighboring nodes in a targeted way. This subset of the neighboring nodes is selected based on certain features, which serve as the inputs of the fuzzy logic system; the neighbors that are the most appropriate receivers of the control packets are determined by the output of this system. By limiting the nodes that receive route requests, the number of discovered routes is reduced and the system's overhead is partially controlled. This process is repeated in the nodes receiving the control packet, so that the destination node eventually receives route request control packets from different routes.
Phase II: Find the best route
After all routes have been identified, the second phase of the proposed protocol begins and the cuckoo search algorithm is invoked at the source node. The Lévy flight in this algorithm makes it possible to select routes randomly; the lengths of the random steps follow a Lévy distribution, which allows occasional large steps. For each of the routes, the proposed enhanced fitness function is calculated based on a number of factors. These factors, which determine path stability, are a criterion for measuring path optimality. Ultimately, the cuckoo algorithm selects the most optimal route by comparing the fitness functions of the various routes and converges toward it. Since this algorithm has high performance and convergence speed, this routing protocol is expected to provide fairly satisfactory results. Table 2 shows the pseudo-code, and Fig. 2 shows the flowchart of the proposed protocol; the steps of the proposed protocol are shown in them in detail. The following section reviews the proposed protocol carefully.
Table 2 Pseudo-code of TIHOO. The pseudo-code of the proposed protocol; its steps are shown in this table in detail
Flowchart of TIHOO. The flowchart of the proposed protocol. The steps of the proposed protocol are shown in this figure in detail
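Since the body of Table 2 is not reproduced here, the following minimal sketch summarizes our reading of the two phases. The helper names (neighbors, fuzzy_priority, fitness) and the threshold GOOD are illustrative assumptions, not part of the protocol specification; Phase II is shown as an exhaustive maximum, whereas the protocol itself uses cuckoo search, which scales to large route sets.

GOOD = 0.5  # fuzzy-priority threshold for forwarding RREQs (illustrative value)

def tihoo_route(source, destination, neighbors, fuzzy_priority, fitness):
    """Sketch of TIHOO's two phases.

    neighbors:      dict mapping a node to its neighbor list (from Hello packets)
    fuzzy_priority: callable(node, neighbor, destination) -> value in [0, 1]
    fitness:        callable(route) -> float, Eq. (13) in the text
    """
    # Phase I: fuzzy-limited route discovery (steps 1-8 of Fig. 2).
    routes, frontier = [], [(source, [source])]
    while frontier:
        node, path = frontier.pop()
        if node == destination:
            routes.append(path)
            continue
        for nb in neighbors[node]:
            # Forward the RREQ only over links the fuzzy system rates highly,
            # and avoid loops by skipping nodes already on the path.
            if nb not in path and fuzzy_priority(node, nb, destination) >= GOOD:
                frontier.append((nb, path + [nb]))
    # Phase II: choose the route with the best fitness (steps 10-15 of Fig. 2).
    return max(routes, key=fitness) if routes else None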
Details of TIHOO
This section discusses the proposed protocol in detail; each part of the flowchart is examined separately.
AODV, a reactive protocol, provides the platform for the implementation of this research. In the first step of Fig. 2, the current node, denoted ai, is the source node in the first run of the algorithm, while the surrounding nodes which receive the route request control packets are denoted aj.
In the second step of Fig. 2, the algorithm starts by sending Hello packets from the source node. Sending these packets to the surrounding nodes helps identify the neighboring nodes and learn their positions. Once the source node has identified its surrounding nodes and their positions using GPS (Global Positioning System), the time arrives for sending the route request packets. Unlike other protocols, in TIHOO the source node does not broadcast route request packets to all neighboring nodes. Instead, it sends them only to those neighboring nodes with more effective characteristics, such as a smaller speed difference with the current node, a smaller direction divergence from the current node, and a shorter distance to the destination. Placing limits on sending route request packets means limiting the route request phase and decreasing the number of control packets, in turn resulting in reduced control overhead. Since not every neighbor participates in forwarding route request packets, fewer control packets are sent over the network.
Fuzzy logic is employed in this method for the selection of RREQ-receiving nodes. The inputs of the fuzzy logic system are calculated and determined in the third step of Fig. 2. These inputs consist of the speed difference between the current node and the neighboring node, the direction difference between the current node and the neighboring node, and the distance between the neighboring node and the destination node. The smaller the speed difference of these nodes, the longer they remain in mutual communication range, which boosts the possibility of a successful exchange of data packets. Large differences in speed, in contrast, quickly push the nodes out of each other's range, leaving fewer chances of sharing information; fast vehicle movement prevents the development of stable connections in the network. The second fuzzy input is the movement direction of the vehicles: the angle between the movement directions of the receiving node and the previous node is calculated and used as the second input of the fuzzy logic. The angle α between two vehicles is calculated via Eq. (1).
$$ \alpha =\arccos \frac{dx_1\, dx_2+ dy_1\, dy_2}{\sqrt{dx_1^2+ dy_1^2}\ \sqrt{dx_2^2+ dy_2^2}} $$
In this equation, (dx1, dy1) is the direction vector of vehicle 1 and (dx2, dy2) is the direction vector of vehicle 2, where dx and dy denote the components of the direction vector on the x- and y-axes, respectively.
In general, the movement direction of vehicles is constrained by the road, and previous research has shown that a route is more stable when a greater number of its vehicles move in the same direction [35]. Therefore, the movement direction is used in this research as one of the fuzzy inputs, so that route request packets are sent along routes with higher potential.
Distance is the third factor used to limit the route discovery phase. At this stage, we use the information on the location of the neighboring node obtained from the Hello message and calculate the distance between the neighboring node and the destination. Clearly, neighboring nodes at a much shorter distance to the destination node are in a better position and are better candidates for forwarding RREQ packets. The distance between the neighboring node and the destination is calculated using the Euclidean distance, Eq. (2) [22].
$$ D=\sqrt{{\left(x_1-x_2\right)}^2+{\left(y_1-y_2\right)}^2} $$
In the equation above, (x1, y1) indicates the neighboring node's location and (x2, y2) represents the location of the destination node.
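As a small illustration, the two quantities of Eqs. (1) and (2) can be computed as follows; the tuple conventions are ours.

import math

def direction_angle(d1, d2):
    """Angle (radians) between two direction vectors d = (dx, dy), Eq. (1)."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    norm = math.hypot(*d1) * math.hypot(*d2)
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def euclidean_distance(p1, p2):
    """Distance between positions p = (x, y), Eq. (2)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])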
In the fourth step of Fig. 2, after the input parameters have been determined and the neighboring node's distance to the destination and angular divergence have been calculated, these three criteria are used as the inputs of the fuzzy system so that a proper conclusion can be drawn from this uncertain and inaccurate information. Figure 3 shows the fuzzy system used in this paper.
Fuzzy control system of the proposed protocol. The fuzzy system used in this paper. The main processes of fuzzy logic include fuzzification, fuzzy inference, and defuzzification. Three criteria are used as the input of this system so that a proper conclusion is provided using the fuzzy logic based on this uncertain and inaccurate information
The main processes of fuzzy logic are fuzzification, fuzzy inference, and defuzzification. Fuzzification takes place in this step, whereby numerical values are converted into fuzzy values via the membership functions.
For each of the mentioned criteria, a membership function is defined over a linguistic set; the words low, medium, and high are used to express the value of each input criterion.
The linguistic set of the membership functions of the fuzzy input parameters thus covers three values (low, medium, and high). The diagrams of the membership functions of the speed difference, direction difference, and distance are presented in Fig. 4. Although more complex membership functions could be used, the triangular membership function suits the proposed protocol, since it only needs to identify the best links.
Fuzzy member function for fuzzy input. For each of the mentioned criteria, the membership function is defined in the linguistic set. The words large, medium, and low are used to determine the rate of input criteria. The linguistic set of the member function of the fuzzy logic input parameters covers three values (low, medium, and high). The diagram of member functions of the speed difference, direction difference, and distance is presented in this figure. Although we could use other complex member functions as well, this triangular member function suits the proposed protocol, since it only needs to identify the best links
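A minimal sketch of such a triangular membership function is shown below; the breakpoints are illustrative choices of ours, not the exact values of Fig. 4.

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzification of the speed difference (m/s) into
# the linguistic set {low, medium, high}.
def fuzzify_speed_diff(dv):
    return {
        "low":    triangular(dv, -1.0, 0.0, 10.0),
        "medium": triangular(dv,  5.0, 12.5, 20.0),
        "high":   triangular(dv, 15.0, 25.0, 26.0),
    }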
In the fifth step of Fig. 2, once the input parameters have been determined, fuzzy inference is performed using a set of rules informed by expert knowledge. The fuzzy knowledge base is designed to integrate the input and output variables and is informed by an accurate understanding of the traffic patterns of urban vehicular ad hoc networks. Fuzzy rules follow the IF-THEN form [41].
Every fuzzy rule consists of an IF component, a logical relation, and a THEN component. The IF part is formed by the antecedents, the logical relation connects the input logic to the result, and the THEN part defines the degree of the output membership function, i.e., the desirability. These rules are presented in Table 3. The linguistic variables for expressing the desirability of a candidate node for receiving RREQs are very bad, bad, unpredictable, acceptable, good, and very good.
Table 3 Fuzzy rules used in TIHOO
For example, if the speed difference between the current vehicle and the neighboring vehicle is small, the distance between the neighboring vehicle and the destination vehicle is short, and the angular divergence between the current vehicle and the neighboring vehicle is small, then the neighboring vehicle is a very good candidate for receiving RREQ packets.
Since several rules fire simultaneously, we apply the Min-Max methodology to combine the evaluation results. In this method [42], the minimum of the antecedent values of a rule is used as its firing degree, and when different rules are combined, the maximum of their results is used.
Defuzzification is the process of generating numerical data from the output membership functions; the outputs of this system are shown in Fig. 5. In TIHOO, unlike previous protocols, the fuzzy logic system is not used to determine the optimal path; rather, it determines the proper neighboring nodes for receiving RREQs.
Fuzzy member function for the fuzzy priority. Defuzzification is the process of generating numerical data based on the output member function. The outputs of this system are shown in this figure. In TIHOO, unlike previous protocols, the fuzzy logic system is not used to determine the optimal path but determines the proper neighboring nodes for reception RREQ
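Continuing the sketch above, Min-Max inference over rules of the form of Table 3, followed by a simple centroid defuzzification, could be implemented as follows. The output centroids and the sample rule are illustrative assumptions, not the exact values behind Fig. 5.

# Illustrative output centroids for the linguistic priorities of Fig. 5.
CENTROIDS = {"very bad": 0.05, "bad": 0.2, "unpredictable": 0.4,
             "acceptable": 0.6, "good": 0.8, "very good": 0.95}

def infer_priority(speed, direction, distance, rules):
    """Min-Max inference over fuzzified inputs.

    speed/direction/distance: dicts such as {"low": 0.7, "medium": 0.3, "high": 0.0}
    rules: list of ((speed_term, direction_term, distance_term), output_term)
    """
    strength = {term: 0.0 for term in CENTROIDS}
    for (s, d, dist), out in rules:
        # IF part: minimum of the antecedent degrees (Min).
        w = min(speed[s], direction[d], distance[dist])
        # Combine rules firing on the same output term (Max).
        strength[out] = max(strength[out], w)
    # Centroid defuzzification into a crisp priority in [0, 1].
    num = sum(w * CENTROIDS[t] for t, w in strength.items())
    den = sum(strength.values())
    return num / den if den else 0.0

# Example rule from Table 3: low speed difference, low direction
# difference, and low distance -> "very good".
rules = [(("low", "low", "low"), "very good")]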
In the sixth step of Fig. 2, after obtaining the output from the fuzzy logic, the neighboring nodes that are in a better situation, i.e., have more stable links to the source node (and, in the next iterations, to the current forwarding node), are selected, and the route request control packets are sent only over these links.
Sending a limited number of control packets in this part of the algorithm reduces the overhead in practice.
The seventh step of Fig. 2 shows the control condition: every neighboring node receiving the route request packet repeats this process and sends the route request packet over a selected group of neighboring links. This controlled process is repeated at each node until the destination node receives the packets.
The next step is the eighth stage of Fig. 2. Since different routes involve different numbers of hops and consequently face varying congestion, the messages sent over some routes are delivered faster; thus, a waiting period is introduced at this stage to ensure that all route request packets from all available routes are delivered in time.
In the ninth step of Fig. 2, a route reply packet is sent by the destination node to the source node for each identified route. Factors such as reliability, lifetime, and available buffer are calculated for each route and attached to the reply packet along with the other usual information. Table 4 presents the structure of the RREP packet.
Table 4 Structure of the RREP packet
The reliability of the communication link between two vehicles is the ability to transfer information packets with the minimum possibility of link failure, which is a very important parameter for assessing the system's stability and effectiveness. The reliability is defined by Eq. (3) [22].
Applying this parameter to the fitness function helps select a more stable route, resulting in an enhanced data packet delivery rate; more stable routes have the added advantage of fewer data packet losses.
$$ r(l)=P\left\{\text{the link remains available until } t+T_{\mathrm{p}} \mid \text{the link is available at } t\right\} $$
Tp is the predicted time for which the link between vehicles ci and cj remains available. The reliability is thus defined as follows: if the link is available at time t, it will remain available until time t + Tp. Lij denotes the distance between the two vehicles, obtained through Eq. (4) [22].
$$ Lij=\sqrt{\ {\left(X1-X2\right)}^2+{\left(Y1-Y2\right)}^{2\kern0.5em }} $$
Tp can be calculated by Eq. (5) and Eq. (6) [22], where R is the radio range in the VANET:
If the two vehicles move in the same direction:
$$ T_p=\begin{cases}\dfrac{2R- L_{ij}}{\left| v_j - v_i\right|} & \text{if } v_j> v_i,\ \text{i.e., } C_j\ \text{approaches}\ C_i\ \text{from behind}\\[2ex] \dfrac{R- L_{ij}}{\left| v_i - v_j\right|} & \text{if } v_i> v_j,\ \text{i.e., } C_i\ \text{moves forward in front of } C_j\end{cases} $$
If the two vehicles move in opposite directions:
$$ T_p=\begin{cases}\dfrac{R+ L_{ij}}{v_i+ v_j} & \text{if } C_i\ \text{and}\ C_j\ \text{are moving toward each other}\\[2ex] \dfrac{R- L_{ij}}{v_i+ v_j} & \text{if } C_i\ \text{and}\ C_j\ \text{are moving away from each other}\end{cases} $$
The reliability of the link is determined through Eq. (7):
$$ r_{t}(l)=\operatorname{Erf}\left(\frac{\frac{2R}{t}-\mu_{\Delta V}}{\sigma_{\Delta V}\sqrt{2}}\right)-\operatorname{Erf}\left(\frac{\frac{2R}{t+T_p}-\mu_{\Delta V}}{\sigma_{\Delta V}\sqrt{2}}\right) $$
μ and σ are the mean and standard deviation, respectively, which are included in this equation to account for the speed differences of the vehicles. These parameters can be easily calculated using the maximum and minimum speed limits in urban settings. The speed difference and the Erf function are calculated through Eq. (8) [22] and Eq. (9) [22], respectively:
$$ \Delta V=\left|V_2-V_1\right| $$
The Erf function is obtained using the following equation:
$$ \operatorname{Erf}(w)=\frac{2}{\sqrt{\pi}}\int_{0}^{w} e^{-t^2}\, dt,\qquad -\infty <w<\infty $$
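A sketch of the reliability computation of Eqs. (5)-(9) is given below; the variable names are ours, and Python's math.erf plays the role of the Erf function.

import math

def predicted_link_time(r, l_ij, v_i, v_j, same_direction, approaching=True):
    """T_p of Eqs. (5)-(6): predicted time the link stays available."""
    if same_direction:
        rel = abs(v_i - v_j) or 1e-9          # relative speed, Eq. (5)
        return (2 * r - l_ij) / rel if v_j > v_i else (r - l_ij) / rel
    closing = v_i + v_j                        # opposite directions, Eq. (6)
    return (r + l_ij) / closing if approaching else (r - l_ij) / closing

def link_reliability(r, t, t_p, mu_dv, sigma_dv):
    """r_t(l) of Eq. (7), using the Gaussian error function of Eq. (9)."""
    z = sigma_dv * math.sqrt(2)
    return math.erf((2 * r / t - mu_dv) / z) - math.erf((2 * r / (t + t_p) - mu_dv) / z)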
The link lifetime is the shortest time for which two vehicles on a route remain in communication. Including this parameter in the fitness function helps find the most stable route; selecting the most stable route contributes to more effective packet delivery and lowers the loss rate of data packets during transfers. Suppose that (x1, y1) is the position of the last-hop vehicle, vx1 and vy1 are the components of its speed vector, (x2, y2) is the position of the current vehicle, and vx2 and vy2 are the components of its speed vector. After time t, the positions of the last-hop and current vehicles will be (x1 + t·vx1, y1 + t·vy1) and (x2 + t·vx2, y2 + t·vy2), respectively. Let
$$ a=x_1-x_2,\qquad b=y_1-y_2,\qquad c=v_{x1}-v_{x2},\qquad d=v_{y1}-v_{y2} $$
Based on the Pythagorean theorem, the distance r between the two vehicles after time t satisfies Eq. (10) [35].
$$ r^2={\left(a+c\,t\right)}^2+{\left(b+d\,t\right)}^2 $$
Setting r equal to the effective communication range between the two vehicles and solving for t, the communication time t is estimated using Eq. (11) [35].
$$ t=\frac{-\left( ac+ bd\right)+\sqrt{{\left( ac+ bd\right)}^2-\left({a}^2+{b}^2-{r}^2\right)\left({c}^2+{d}^2\right)}}{c^2+{d}^2} $$
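A sketch of the lifetime computation of Eqs. (10) and (11) follows, assuming the two vehicles are currently within range r (so that the discriminant is non-negative).

import math

def link_lifetime(p1, v1, p2, v2, r):
    """Time t (Eq. 11) until two vehicles move out of range r of each other."""
    a, b = p1[0] - p2[0], p1[1] - p2[1]       # relative position
    c, d = v1[0] - v2[0], v1[1] - v2[1]       # relative velocity
    if c == 0 and d == 0:
        return math.inf                        # same velocity: link never breaks
    disc = (a * c + b * d) ** 2 - (a * a + b * b - r * r) * (c * c + d * d)
    return (-(a * c + b * d) + math.sqrt(disc)) / (c * c + d * d)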
Equation (12) [43] shows the available buffer. Considering the available buffer of each node is intended to prevent the loss of data packets: this parameter helps select less congested routes, preventing losses of data packets and decreasing end-to-end delays while sending data packets. The term \( \frac{q_i}{b_i} \) indicates the buffer occupancy, in which qi is the length of the data packet queue at node Ni and bi is the buffer capacity of the node.
$$ \mathrm{Avail\_Buffer}=1-\frac{q_i}{b_i} $$
The tenth step of Fig. 2 calls the cuckoo search algorithm. In this research, the initial cuckoo population and the number of nests are both 30. The CS algorithm demands accurate adjustment of fewer parameters than other algorithms such as genetic algorithms and particle swarm optimization. Three components are included in the cuckoo search algorithm: selection of the best, global random exploration through Lévy flights, and exploration through local random walks. Selection of the best nest (solution) is done by protecting the best nest (solution), which guarantees that the best solution found so far survives into the next iteration. Using Lévy flights in the random walks produces larger and more effective steps [39] in CS, whereas in the F-ANT algorithm the random walks follow a standard format as a result of using a Gaussian distribution.
Moreover, applying Lévy flights to the cuckoo algorithm makes it possible to sample the entire search space and find the best solution. Lévy flights are more effective than uniform random exploration for conducting searches in large spaces [39].
These components make the cuckoo search algorithm one of the best metaheuristic algorithms and give our method an edge over its counterparts.
The fast convergence of the algorithm, combined with the ability to search a larger space, finds the optimal route in a shorter time and decreases end-to-end delays when sending data packets.
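For concreteness, a common way to draw Lévy-distributed steps is Mantegna's algorithm, often used in cuckoo search implementations. The sketch below is illustrative rather than a transcription of TIHOO's implementation; the stability index beta = 1.5 is the value conventionally paired with cuckoo search.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(beta=1.5, size=1, rng=None):
    """Heavy-tailed step lengths via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)      # numerator: wide Gaussian
    v = rng.normal(0.0, 1.0, size)          # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / beta)      # occasionally very large jumps

```

Most steps drawn this way are small, but the heavy tail occasionally produces a long jump, which is what lets the search escape local optima and sample the whole space.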
In the 11th to 13th steps of Fig. 2, the source node receives RREPs from different routes and then calculates the fitness function of each route. In TIHOO the fitness function is configured as in Eq. 13, where r_t(l) denotes reliability, t represents the link lifetime, Avail_Buffer is the available buffer, and α, β, and γ are weighting factors for reliability, link lifetime, and available buffer, all set by experience and repeated tests (α + β + γ = 1).
$$ \mathrm{Fitness\ Function}=\alpha\, r_t(l)+\beta\, t+\gamma\, \mathrm{Avail\_Buffer} $$
Since the factors in the fitness function are not measured on the same scales, they can be neither compared nor combined directly. It is therefore necessary to normalize (descale) them so that a single measurement scale applies. We use linear descaling in this research: since the indicators are positive and treated equally in this step, each value of the data matrix is divided by the maximum of its column.
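Putting Eq. (13) and the descaling step together, route selection reduces to a few lines. In the sketch below the weight values are placeholders (the paper only states that α + β + γ = 1 and that the values were tuned experimentally), and the function name is ours.

```python
import numpy as np

def select_route(routes, alpha=0.5, beta=0.3, gamma=0.2):
    """Pick the route with the best fitness (Eq. 13).

    routes: array of shape (n_routes, 3) with columns
            [reliability r_t(l), link lifetime t, available buffer].
    """
    X = np.asarray(routes, dtype=float)
    X = X / X.max(axis=0)                    # linear descaling: divide by column max
    fitness = X @ np.array([alpha, beta, gamma])
    return int(np.argmax(fitness)), fitness
```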
In the 14th and 15th steps of Fig. 2, once the fitness function of each route has been calculated, the route with the best fitness function is selected as the optimal route for data transfer.
The performance of TIHOO, F-ANT, and AODV in the routing challenge is evaluated in this section. TIHOO is compared with these two methods; why these protocols?
The F-ANT algorithm combines fuzzy logic with the ant colony bio-inspired algorithm. Since our proposed method also combines a bio-inspired algorithm with fuzzy logic, F-ANT is an appropriate baseline for our protocol. As the articles cited below show, both the cuckoo search and ant colony algorithms have been combined with other techniques in routing research. We therefore chose for comparison a work that applies the ant colony algorithm, in order to evaluate its efficiency and effectiveness against our use of the cuckoo search algorithm.
To evaluate the effectiveness and performance of TIHOO, we conduct extensive simulations and compare the results with those of F-ANT and AODV. We evaluate end-to-end delay, packet delivery ratio, packet loss ratio, throughput, and routing overhead.
Packet delivery rate (PDR)
PDR is the ratio of the number of data packets successfully delivered to destination nodes to the total number of data packets sent from the source node. Accordingly, PDR is calculated via Eq. (14).
$$ PDR=\frac{\sum_{j=1}^{n}\text{Packets received}}{\sum_{j=1}^{n}\text{Packets originated}} $$
End-to-end delay (EED)
The average delay between the moment a packet is sent by the source and the moment it is received at the destination is called end-to-end delay. EED is measured in milliseconds and includes all delays incurred along the way: route discovery, queuing, processing at intermediate nodes, propagation time, retransmission delays at the MAC layer, etc. The lower the EED, the better the performance of the protocol. EED is calculated using Eq. (15) below:
$$ EED=\sum_i \left(PA_i-PS_i\right) $$
where PA_i and PS_i are the arrival and send times of the i-th packet.
Throughput
This parameter denotes the number of data packets delivered to the destination node per unit of time. Throughput is measured as the bits per second received at the traffic destination, and is calculated via Eq. (16) as follows:
$$ \mathrm{Throughput}=\frac{\sum \limits_{j=1}^n\mathrm{Packets}\ \mathrm{received}}{t} $$
Routing overhead
Routing overhead is the number of control packets generated. Lower overhead reflects better protocol efficiency. The overhead is calculated via Eq. (17), where Nc is the number of control packets and Nt is the total number of packets.
$$ \mathrm{Routing\ Overhead}=\frac{N_c}{N_t} $$
Packet loss rate (PLR)
PLR is the fraction of the total transmitted packets that were not received at the destination. The packet loss rate is calculated in Eq. (18) as follows:
$$ \mathrm{PLR}=\frac{\sum_{j=1}^{n}\text{Packets sent}-\sum_{j=1}^{n}\text{Packets received}}{\sum_{j=1}^{n}\text{Packets sent}}\times 100 $$
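The five metrics above are simple aggregates over a simulation trace. A minimal sketch, with argument names of our choosing:

```python
def evaluate_run(sent, received, delays_ms, control_pkts, total_pkts, duration_s):
    """Eqs. (14)-(18): the five evaluation metrics."""
    pdr = received / sent                    # Eq. 14: packet delivery ratio
    eed = sum(delays_ms) / len(delays_ms)    # Eq. 15, averaged per packet (ms)
    throughput = received / duration_s       # Eq. 16: packets (or bits) per second
    overhead = control_pkts / total_pkts     # Eq. 17: routing overhead
    plr = 100.0 * (sent - received) / sent   # Eq. 18: packet loss rate (%)
    return pdr, eed, throughput, overhead, plr
```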
Simulation results and analysis
We have implemented the TIHOO, F-ANT, and AODV approaches in NS-2 [7] on Fedora 10. Table 5 shows the simulation parameters.
An assessment of the simulation parameters based on the number of vehicles
This section assesses end-to-end delay, throughput, packet delivery rate, routing overhead, and packet loss rate as functions of the number of vehicles. The protocols are compared with 30, 40, 50, 80, and 120 nodes. Figures 6, 7, 8, 9, and 10 show that the proposed protocol is scalable, as the desired results still hold as the number of vehicles grows.
Fig. 6 Packet delivery ratio based on the number of vehicles
Fig. 7 End-to-end delay based on the number of vehicles
Fig. 8 Throughput based on the number of vehicles
Fig. 9 Routing overhead based on the number of vehicles
Fig. 10 Packet loss rate based on the number of vehicles
Figure 6 compares the performance of TIHOO with that of F-ANT and AODV for packet delivery ratio.
This improvement is due to the cuckoo search algorithm in the proposed protocol and the intelligent configuration of the fitness function parameters such as the reliability and the link lifetime which positively affect the data packet delivery rate via the selection of more stable routes as the optimal route. Simulation results show that TIHOO performs better than F-ANT and AODV in terms of PDR (as shown in Fig. 6).
Figure 7 shows the end-to-end delay based on the number of vehicles. In Fig. 7, when the number of vehicles increases, the end-to-end delay also increases. Applying the cuckoo metaheuristic algorithm to TIHOO has helped in the selection of the best route in a shorter time than the other two protocols. In contrast to F-ANT which is very slow to converge as a result of the ant colony optimization algorithm (ACO), TIHOO has a fast convergence rate and could identify the optimal route in a shorter time. Moreover, the intelligent configuration of the fitness function of this protocol and considering parameters such as available buffer have proved to be very useful in the deselection of congested routes. The selection of less busy routes has decreased end-to-end delays.
Figure 8 compares the performance of TIHOO with that of F-ANT and AODV for throughput.
Based on the simulation figures, TIHOO outperforms the other two protocols. Selecting the best route takes more time in F-ANT, since ACO is slow to converge, while the CS algorithm in TIHOO converges faster and demands less parameter configuration than other metaheuristics during the implementation phase; this accelerates the selection of the optimal route and improves the throughput of the proposed protocol.
Figure 9 shows the routing overhead of TIHOO, F-ANT, and AODV for various numbers of vehicles. In TIHOO, broadcasting of route request control packets is prevented and discovery is conducted in a controlled manner, while running this phase in the AODV algorithm increases system overhead. The F-ANT protocol also imposes extra overhead because its ACO metaheuristic stores the information of the entire colony. Therefore, TIHOO outperforms the other protocols in terms of overhead produced.
Figure 10 depicts the packet loss rate of the TIHOO, F-ANT, and AODV protocols for various numbers of vehicles. Data packets are lost partly due to packet collisions, link failures, insufficient bandwidth, buffer overflow, etc. In TIHOO, the fitness function of the CS algorithm is configured with parameters such as route lifetime and reliability; employing these factors leads to the selection of more stable routes with fewer link failures, which reduces packet losses. Moreover, accounting for the available buffer when selecting a less congested route is very effective: less congestion means fewer packet losses.
Assessment of simulation parameters based on simulation duration
In this section, the packet delivery rate and end-to-end delay are assessed as functions of the simulation time. Figures 11 and 12 display the results and show that TIHOO is superior to the other two protocols. The higher packet delivery rate of TIHOO results from packets traveling on the most stable route, which is identified thanks to the stability factors in TIHOO's fitness function. In addition, accounting for the available buffer when selecting a less congested route is very effective in raising the number of delivered data packets, and the fast-converging CS algorithm improves end-to-end delay.
Fig. 11 Packet delivery ratio based on simulation time
Fig. 12 End-to-end delay based on simulation time
VANETs are a special type of ad hoc networks that consist of nodes connected via wireless links without any fixed infrastructure. This and the lack of a centralized administration have made efficient routing a significant challenge in these networks. In this study, we proposed a new routing protocol called TIHOO for VANETs. This protocol intelligently employed fuzzy and cuckoo approaches. In TIHOO, the fuzzy logic system is used to limit the route discovery phase and, by limiting the route request messages, it somewhat controls the created extra overhead. The inputs in the fuzzy logic system include three factors of vehicle speed, direction of movement, and neighbor node-to-destination distance. After identifying the routes in the source node, the cuckoo search algorithm is called. This algorithm is one of the most effective metaheuristic algorithms, especially in the large search space. It selects the most stable and optimal route among known routes by calculating the fitness function based on criteria such as route lifetime, route reliability, and available buffer.
Simulation results show that TIHOO outperforms F-ANT and AODV in terms of throughput, routing overhead, packet delivery ratio, packet loss ratio, and end-to-end delay. The reliability and link-lifetime parameters in the fitness function make it possible to select a more stable path; together with the available buffer, they increase the delivery rate of data packets. TIHOO also improves on the F-ANT protocol, which uses the ant colony algorithm, because the cuckoo search algorithm offers unique features such as faster execution and convergence resulting from its use of Lévy-distributed steps.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
ACO:
Ant colony optimization algorithm
AFMADR:
Adaptive fuzzy multiple attribute decision routing in VANETs
AMGRP:
AHP-based Multimetric Geographical Routing Protocol
AODV:
Ad hoc On-demand Distance Vector routing
BFOA:
Bacterial foraging optimization algorithm
CS:
Cuckoo search
DSRC:
Dedicated short-range communications
EED:
End-to-end delay
FCAR:
Fuzzy control based AODV routing
GPSR:
Greedy Perimeter Stateless Routing
MANETs:
Mobile ad hoc networks
NS-2:
Network simulator 2
PA:
Packet arrival
PDR:
Packet delivery rate
PLR:
Packet loss rate
PS:
Packet start
R2P:
Reliable routing protocol
RREP:
Route reply
RREQ:
Route request
V2I:
Vehicle-to-infrastructure
V2V:
Vehicle-to-vehicle
VANETs:
Vehicular ad hoc networks
WAVE:
Wireless access in vehicular environment
S. Singh, S. Agrawal, VANET routing protocols: issues and challenges, 2014 Recent Advances in Engineering and Computational Sciences (RAECS), 6–8 March 2014, pp. 1–5
H. Fatemidokht, M. Kuchaki Rafsanjani, F-Ant: an effective routing protocol for ant colony optimization based on fuzzy logic in vehicular ad hoc networks. Neural Computing and Applications, 1–11 (2016)
S. Dehghani, M. Pourzaferani, B. Barekatain, Comparison on energy-efficient cluster based routing algorithms in wireless sensor network. Procedia Computer Science 72, 535–542 (2015)
S. Dehghani, B. Barekatain, M. Pourzaferani, An enhanced energy-aware cluster-based routing algorithm in wireless sensor networks (Wireless Personal Communications, 2017), pp. 1605–1635
S. Bitam, A. Mellouk, S. Zeadally, Bio-inspired routing algorithms survey for vehicular ad hoc networks. IEEE Communications Surveys & Tutorials 17, 843–867 (2015)
H. Rana, P. Thulasiraman, R.K. Thulasiram, MAZACORNET: mobility aware zone based ant colony optimization routing for VANET (2013 IEEE Congress on Evolutionary Computation, 2013), pp. 2948–2955
S. Al-Sultan et al., A comprehensive survey on vehicular ad hoc network. Journal of Network and Computer Applications 37, 380–392 (2014)
C. Wu, S. Ohzahata, T. Kato, Routing in VANETs: a fuzzy constraint Q-learning approach (2012 IEEE Global Communications Conference (GLOBECOM), 2012), pp. 195–200
K.N. Qureshi, A.H. Abdullah, A. Altameem, Road aware geographical routing protocol coupled with distance, direction and traffic density metrics for urban vehicular ad hoc networks. Wireless Personal Communications 92, 1251–1270 (2017)
B. Barekatain et al., GAZELLE: An enhanced random network coding based framework for efficient P2P live video streaming over hybrid WMNs. Wireless Personal Communications 95, 2485–2505 (2017)
N.V. Dharani Kumari, B.S. Shylaja, AMGRP: AHP-based multimetric geographical routing protocol for urban environment of VANETs. Journal of King Saud University - Computer and Information Sciences, 849–857 (2017)
T.-Y. Wu, Y.-B. Wang, W.-T. Lee, Mixing greedy and predictive approaches to improve geographic routing for VANET. Wireless Communications, and Mobile Computing 12, 367–378 (2012)
B. Barekatain et al., efficient P2P live video streaming over hybrid WMNs using random network coding. Wireless Personal Communications 80, 1761–1789 (2015)
B. Barekatain et al., MATIN: a random network coding based framework for high quality peer-to-peer live video streaming. PLOS ONE 8, e69844 (2013)
B. Barekatain et al., Performance evaluation of routing protocols in live video streaming over wireless mesh networks. Jurnal Teknologi 10, 85–94 (2013)
B. Barekatain et al., GREENIE: a novel hybrid routing protocol for efficient video streaming over wireless mesh networks. EURASIP Journal on Wireless Communications and Networking 168(2013), 1–22 (2013)
G. Li et al., Adaptive fuzzy multiple attribute decision routing in VANETs. International Journal of Communication Systems 30, 1543–1563 (2017)
P. Sermpezis, G. Koltsidas, F.N. Pavlidou, Investigating a junction-based multipath source routing algorithm for VANETs. IEEE Communications Letters 17, 600–603 (2013)
L.N. Balico et al., A prediction-based routing algorithm for vehicular ad hoc networks (2015 IEEE Symposium on Computers and Communication (ISCC), 2015), pp. 365–370
J.-M. Chang et al., An energy-efficient geographic routing protocol design in vehicular ad-hoc network. Computing 96, 119–131 (2014)
C. Wu, S. Ohzahata, T. Kato, Flexible, portable, and practicable solution for routing in VANETs: a fuzzy constraint Q-learning approach. IEEE Transactions on Vehicular Technology 62, 4251–4263 (2013)
E. Moridi, H. Barati, RMRPTS: a reliable multi-level routing protocol with tabu search in VANET. Telecommunication Systems, 1–11 (2016)
S. Bitam, A. Mellouk, S. Zeadally, HyBR: A hybrid bio-inspired bee swarm routing protocol for safety applications in vehicular ad hoc networks (VANETs). Journal of Systems Architecture 59, 953–967 (2013)
H. Dong et al., Multi-hop routing optimization method based on improved ant algorithm for vehicle to roadside network. Journal of Bionic Engineering 11, 490–496 (2014)
B. Barekatain, S. Dehghani, M. Pourzaferani, An energy-aware routing protocol for wireless sensor networks based on new combination of genetic algorithm & k-means. Procedia Computer Science 72, 552–560 (2015)
M. Al-Rabayah, R. Malaney, A new scalable hybrid routing protocol for VANETs. IEEE Transactions on Vehicular Technology 61, 2625–2635 (2012)
L.A. Zadeh, Fuzzy sets. Information and Control 8, 338–353 (1965)
L. Altoaimy, I. Mahgoub, Fuzzy logic based localization for vehicular ad hoc networks, 2014 IEEE Symposium on Computational Intelligence in Vehicles and Transportation Systems (CIVTS) (2014), pp. 121–128
X.S. Yang, D. Suash, Cuckoo Search via Levy flights, 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) (2009), pp. 210–214
A.I. Saleh, S.A. Gamel, K.M. Abo-Al-Ez, A reliable routing protocol for vehicular ad hoc networks. Computers & Electrical Engineering, 473–495 (2016)
K. Mehta, P.R. Bajaj, L.G. Malik, Fuzzy bacterial foraging optimization zone based routing (FBFOZBR) protocol for VANET, 2016 International Conference on ICT in Business Industry & Government (ICTBIG) (2016), pp. 1–10
C.E. Perkins, E.M. Royer, Ad-hoc on-demand distance vector routing, Mobile Computing Systems and Applications (Proceedings. WMCSA '99. Second IEEE Workshop on, New Orleans, 1999), pp. 90–100
A. Feyzi, V. Sattari-Naeini, Application of fuzzy logic for selecting the route in AODV routing protocol for vehicular ad hoc networks, 2015 23rd Iranian Conference on Electrical Engineering (2015), pp. 684–687
T. Hu et al., An enhanced GPSR routing protocol based on the buffer length of nodes for the congestion problem in VANETs, 2015 10th International Conference on Computer Science & Education (ICCSE) (2015), pp. 416–419
X.B. Wang, Y.L. Yang, J.W. An, Multi-metric routing decisions in VANET, 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing (2009), pp. 551–556
X. Li et al., An adaptive geographic routing protocol based on quality of transmission in urban VANETs, 2018 IEEE International Conference on Smart Internet of Things (SmartIoT) (2018), pp. 52–57
N. Li et al., Probability prediction-based reliable and efficient opportunistic routing algorithm for VANETs. IEEE/ACM Transactions on Networking 26, 1933–1947 (2018)
D. Tian et al., A microbial inspired routing protocol for VANETs. IEEE Internet of Things Journal 5, 2293–2303 (2018)
A.H. Gandomi, X.-S. Yang, A.H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Engineering with Computers 29, 17–35 (2013)
A. Kout et al., AODVCS, a new bio-inspired routing protocol based on cuckoo search algorithm for mobile ad hoc networks. Wireless Networks, 2509–2519 (2017)
Z. Qin, M. Bai, D. Ralescu, A fuzzy control system with application to production planning problems. Information Sciences 181, 1018–1027 (2011)
C. Wu, S. Ohzahata, T. Kato, VANET broadcast protocol based on fuzzy logic and lightweight retransmission mechanism (IEICE Transactions on Communications, E95.B, 2012), pp. 415–425
L. Hui et al., An adaptive genetic fuzzy multi-path routing protocol for wireless ad hoc networks, Sixth International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (2005), pp. 468–475
Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
Shirin Rahnamaei Yahiabadi
& Behrang Barekatain
Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
Behrang Barekatain
Electrical and Computer Engineering, Sultan Qaboos University, P.O. Box: 31, Al-Khoud, 123, Sultanate of Oman
Kaamran Raahemifar
Chemical Engineering Department, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1, Canada
SRY and BB carried out all research steps including finding the problem, analyzing the related works, and finding, implementing, and simulating the proposed method as well as writing the manuscript. KR has contributed in analyzing the data and editing the paper. All authors read and approved the final manuscript.
Shirin Rahnamaei Yahiabadi received her bachelor's in computer science from the University of Yazd, Yazd, Iran, in 2013 and is now a master's student at the Najafabad Branch, Islamic Azad University, Iran. Her research interests encompass fuzzy logic systems, bio-inspired metaheuristic algorithms, and wireless communication.
Behrang Barekatain earned his BSc and MSc in computer software engineering in 1996 and 2001, respectively. He has more than 22 years of experience in computer networking and security and has been a faculty member at the Najafabad Branch, Islamic Azad University, Iran, for 18 years. He received his PhD and completed his post-doc in computer networks at Ryerson University, Canada. His research interests encompass wired and wireless systems, VANETs, FANETs, SDN, NDN, IoT, peer-to-peer networking, network coding, video streaming, network security, and wireless mesh networks using network coding.
Kaamran Raahemifar received his B.Sc. degree (1985-1988) in Electrical Engineering from Sharif University of Technology, Tehran, Iran, his MASc. degree (1991-1993) from Electrical and Computer Engineering Dept., Waterloo University, Waterloo, Ontario, Canada, and his PhD degree (1996–1999) from Windsor University, Ontario, Canada. He was Chief Scientist (1999–2000), in Electronic Workbench, Toronto, Ontario, Canada. He joined Ryerson University in September 1999 and was tenured in 2001. Since 2011, he has been a Professor with the Department of Electrical and Computer Engineering, Ryerson University. He is the recipient of ELCE-GSA Professor of the Year Award (Elected by Graduate Student's body, 2010), Faculty of Engineering, Architecture, and Science Best Teaching Award (April 2011), and Department of Electrical and Computer Engineering Best Teaching Award (December 2011), and Research Award (December 2014). He has been awarded more than $6 M external research fund during his time at Ryerson. His research interests include (1) Optimization in Engineering: Theory and Application, which includes grid optimization and net-zero communities, as well as biomedical signal and image processing techniques, (2) Big Data Analysis (Dictionary/Sparse Representations, Interpolation, Predictions), (3) Modelling, Simulation, Design, and Testing, and (4) Time-Based Operational Circuit designs.
Correspondence to Behrang Barekatain.
Rahnamaei Yahiabadi, S., Barekatain, B. & Raahemifar, K. TIHOO: An Enhanced Hybrid Routing Protocol in Vehicular Ad-hoc Networks. J Wireless Com Network 2019, 192 (2019). https://doi.org/10.1186/s13638-019-1503-4
Hybrid routing protocol
Cuckoo-fuzzy approach
Stable path
Vehicular ad hoc network | CommonCrawl |
What I expected was that the first four values of $y$ would be 1, 2, 3, 4, since I set the first four entries of innov to zero. Can someone help me with this problem? Thanks in advance.
This gives the solution: $y_1=-7.086890,\, y_2=-2.607843,\, y_3=-3.254902\;\,$ and $\,y_4=-3.901961$.
The vector $(1, 1.33, 1.66, 1.99)$ is obtained as follows: the first element is $y_5$, for which we want the value $1$ ($\epsilon_5$ is set to zero); the second element is $-0.51y_2 = y_6 - 0.67y_5 = 2 - 0.67\times1 = 1.33$ (the desired value for $y_6$ is $2$ and $\epsilon_6$ is set to zero); the third element is $-0.51y_3 = y_7 - 0.67y_6 = 3-0.67\times 2 = 1.66$; and from the last equation $-0.51y_4 = y_8 - 0.67y_7 = 4 - 0.67\times 3 = 1.99$.
After the auxiliary observations $y_1$ to $y_4$ that were found above, the series continues with the desired values $1, 2, 3, 4$.
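A quick numerical check (in Python/NumPy here, although the thread is about R's arima.sim) confirms the algebra, assuming the model implied by the equations above, i.e. coefficients 0.67 at lag 1 and −0.51 at lag 4:

```python
import numpy as np

# y_t = 0.67*y_{t-1} - 0.51*y_{t-4} + eps_t, with eps_5..eps_8 set to zero
y = [-7.086890, -2.607843, -3.254902, -3.901961]   # auxiliary values y_1..y_4
for t in range(4, 8):
    y.append(0.67 * y[t - 1] - 0.51 * y[t - 4])
print(np.round(y[4:], 6))                           # -> [1. 2. 3. 4.]
```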
Wave dispersion in pulsar plasma. Part 2. Pulsar frame
M. Z. Rafat, D. B. Melrose, A. Mastrano
Journal: Journal of Plasma Physics / Volume 85 / Issue 3 / June 2019
Published online by Cambridge University Press: 26 June 2019, 905850311
Wave dispersion in a pulsar plasma is discussed emphasizing the relevance of different inertial frames, notably the plasma rest frame ${\mathcal{K}}$ and the pulsar frame ${\mathcal{K}}^{\prime}$ in which the plasma is streaming with speed $\beta_{\text{s}}$. The effect of a Lorentz transformation on both subluminal, $|z|<1$, and superluminal, $|z|>1$, waves is discussed. It is argued that the preferred choice for a relativistically streaming distribution should be a Lorentz-transformed Jüttner distribution; such a distribution is compared with other choices including a relativistically streaming Gaussian distribution. A Lorentz transformation of the dielectric tensor is written down, and used to derive an explicit relation between the relativistic plasma dispersion functions in ${\mathcal{K}}$ and ${\mathcal{K}}^{\prime}$. It is shown that the dispersion equation can be written in an invariant form, implying a one-to-one correspondence between wave modes in any two inertial frames. Although there are only three modes in the plasma rest frame, it is possible for backward-propagating or negative-frequency solutions in ${\mathcal{K}}$ to transform into additional forward-propagating, positive-frequency solutions in ${\mathcal{K}}^{\prime}$ that may be regarded as additional modes.
Wave dispersion in pulsar plasma. Part 1. Plasma rest frame
Wave dispersion in a pulsar plasma (a one-dimensional, strongly magnetized, pair plasma streaming highly relativistically with a large spread in Lorentz factors in its rest frame) is discussed, motivated by interest in beam-driven wave turbulence and the pulsar radio emission mechanism. In the rest frame of the pulsar plasma there are three wave modes in the low-frequency, non-gyrotropic approximation. For parallel propagation (wave angle $\theta=0$) these are referred to as the X, A and L modes, with the X and A modes having dispersion relation $|z|=z_{\text{A}}\approx 1-1/2\beta_{\text{A}}^{2}$, where $z=\omega/k_{\Vert}c$ is the phase speed and $\beta_{\text{A}}c$ is the Alfvén speed. The L mode dispersion relation is determined by a relativistic plasma dispersion function, $z^{2}W(z)$, which is negative for $|z|<z_{0}$ and has a sharp maximum at $|z|=z_{\text{m}}$, with $1-z_{\text{m}}<1-z_{0}\ll 1$. We give numerical estimates for the maximum of $z^{2}W(z)$ and for $z_{\text{m}}$ and $z_{0}$ for a one-dimensional Jüttner distribution. The L and A modes reconnect, for $z_{\text{A}}>z_{0}$, to form the O and Alfvén modes for oblique propagation ($\theta\neq 0$). For $z_{\text{A}}<z_{0}$ the Alfvén and O mode curves reconnect forming a new mode that exists only for $\tan^{2}\theta\gtrsim z_{0}^{2}-z_{\text{A}}^{2}$. The L mode is the nearest counterpart to Langmuir waves in a non-relativistic plasma, but we argue that there are no 'Langmuir-like' waves in a pulsar plasma, identifying three features of the L mode (dispersion relation, ratio of electric to total energy and group speed) that are not Langmuir like. A beam-driven instability requires a beam speed equal to the phase speed of the wave. This resonance condition can be satisfied for the O mode, but only for an implausibly energetic beam and only for a tiny range of angles for the O mode around $\theta\approx 0$. The resonance is also possible for the Alfvén mode but only near a turnover frequency that has no counterpart for Alfvén waves in a non-relativistic plasma.
Coherent Radio-Emission Mechanisms for Pulsars
D. B. Melrose
Journal: International Astronomical Union Colloquium / Volume 128 / 1992
Coherent emission mechanisms may be classified as (i) maser mechanisms, attributed to negative absorption by resonant particles in a resistive instability, (ii) a reactive or hydrodynamic instability, or (iii) to emission by bunches. Known coherent emission mechanisms in radio astronomy are plasma emission in solar radio bursts, maser emission in OH and other molecular line sources, electron-cyclotron maser emission from the planets, and pulsar emission. Pulsar radio emission is the brightest of all known coherent emission, and its brightness temperature is close to the maximum conceivable in terms of energy efficiency. Three possible pulsar radio emission mechanisms warrant serious consideration in polar cap models; here these are called coherent curvature emission, relativistic plasma emission, and free electron maser emission, respectively.
1. Coherent curvature emission is attributed to emission by bunches. There is a fundamental weakness in existing theoretical treatments, which do not allow for any velocity dispersion of the particles. There is no satisfactory mechanism for the formation of the required bunches, and were such bunches to form they would quickly lose their ability to emit coherently due to the curvature of the field lines.
2. Relativistic plasma emission is a multistage emission process involving the generation of plasma turbulence and the partial conversion of this turbulence into escaping radiation. In pulsars the dispersion characteristics of the relativistic electron-positron plasma determine the form of the turbulence, which may be in either longitudinal waves or Alfvén-like waves. Various instabilities have been suggested to produce turbulence, and a streaming instability is one possibility. Alternatively, in a detailed model proposed by Beskin et al. (1988) the instability depends intrinsically on the curvature of the field lines, and in a theory discussed by Kazbegi et al. (1988), a cyclotron instability generates the turbulence relatively far from the neutron star.
3. Free electron maser emission or linear acceleration emission requires an oscillating electric field, postulated to be due to a large amplitude electrostatic wave. A recent analysis of this mechanism (Rowe 1992) shows that it allows emission in two different regimes that provide a possible basis for the interpretation of core and conal emission in pulsars. Effective maser emission seems to require Lorentz factors smaller than other constraints allow.
Other suggested theories for the emission mechanism include one that arises from a loophole in the proof that curvature absorption cannot be negative, and another that involves a closed "electrosphere" in which the radio emission is attributed to emission by bunches formed as a result of pair production due to a primary charge accelerated towards the star by its Coulomb field.
Transfer of Energy to the Crab Nebula Following the Spin-Up of the Pulsar
Journal: Symposium - International Astronomical Union / Volume 46 / 1971
Observed enhanced activity in the central region of the Crab Nebula following the spin-up of the pulsar is discussed from the point of view of the transfer of energy to relativistic electrons. It is argued that a rapid deposition of energy associated with the spin-up of the pulsar causes a radial energy flux which becomes a flux in hydromagnetic activity at about the regions where enhanced synchrotron emission is observed. It is shown that such hydromagnetic activity is rapidly damped by the relativistic electrons with energy being transferred to the relativistic electrons. This acceleration can account for the short synchrotron halflifetimes observed. The model predicts highly enhanced X-ray emission from the central region of the Nebula following a spin-up.
Orthogonal Mode Polarization of Pulsar Radio Emission
Q. Luo, D. B. Melrose
Journal: Symposium - International Astronomical Union / Volume 218 / 2004
We discuss a model for polarization of pulsar radio emission, based on the assumption that waves propagate in two elliptically polarized natural modes. Some results from numerical simulation of single pulses are discussed with emphasis on circular polarization, microstructures and single pulse statistics.
A Relationship Between the Brightness Temperatures for Type III Bursts
(Solar Phys.). The widely accepted emission mechanisms for type III bursts involve at least two stages. The first stage is the generation of Langmuir waves by the inferred stream of electrons. Emission at the fundamental frequency arises when these waves are scattered by thermal ions. Emission at the second harmonic arises when two Langmuir waves coalesce; however, the coalescence is possible only after an intermediate stage in which the distribution of Langmuir waves evolves towards isotropy due to scattering by thermal ions.
Induced Three-wave Interactions in Eclipsing Pulsars
Qinghuan Luo, D. B. Melrose
Journal: Publications of the Astronomical Society of Australia / Volume 12 / Issue 1 / April 1995
Three-wave interactions involving two high-frequency waves (in the same mode) and a low-frequency wave are discussed and applied to pulsar eclipses. When the magnetic field is taken into account, the low-frequency waves can be the ω-mode (the low-frequency branch of the ordinary mode) or the z-mode (the low-frequency branch of the extraordinary mode). It is shown that in the cold plasma approximation, effective growth of the low-frequency waves due to an anisotropic photon beam can occur only for z-mode waves near the resonance frequency. In the application to pulsar eclipses, the cold plasma approximation may not be adequate and we suggest that when thermal effects are included, three-wave interaction involving low-frequency cyclotron waves (e.g. Bernstein modes) is a plausible candidate for pulsar eclipses.
The Relevance of Bipolar Type I Storm Structures to the Theory of Mode Coupling in the Solar Corona
Journal: Publications of the Astronomical Society of Australia / Volume 2 / Issue 4 / October 1973
Bipolar structures are discernible in three categories of solar radio emission: the slowly varying component (S-component), microwave bursts and metre-wavelength type I storms. Piddington and Minnett, discussing the S-component, showed that when coupling between the magneto-ionic modes along the ray path is ignored the emission from a regular dipolar field should have two specific polarization properties. These properties (see Figure 1) are that both feet of the bipolar structure should be similarly polarized, i.e. both left-hand (LH) or both right-hand (RH), and that this handedness should reverse on central meridian passage (CMP). (Reversal of the handedness at CMP had been suggested earlier by Martyn.) These arguments should apply also to microwave bursts and to type I storms.
Electron Cyclotron Masers during Solar Flares
S. M. White, D. B. Melrose, G. A. Dulk
Journal: Publications of the Astronomical Society of Australia / Volume 5 / Issue 2 / 1983
It has been suggested (Holman et al 1980; Melrose and Dulk 1982a) that solar microwave spike bursts are due to electron cyclotron maser action. These bursts have been observed in the range 1-3 Ghz, and occur in conjunction with flare-associated impulsive microwave and hard X-ray bursts. The bursts have rise times of a millisecond or less (e.g. Slottje 1978).
Solar Second Harmonic Plasma Emission and the Head-on Approximation
A. J. Willes, P. A. Robinson, D. B. Melrose
Journal: Publications of the Astronomical Society of Australia / Volume 12 / Issue 2 / August 1995
The coalescence of two Langmuir waves, L and L′, produces emission at twice the plasma frequency in type II and type III solar radio bursts. The analysis of the coalescence process is usually simplified by assuming the head-on approximation, where the wavevectors of the coalescing waves satisfy k_L′ ≈ −k_L, corresponding to the two Langmuir waves meeting head on. However, this is not always a valid approximation, particularly when the peak of the Langmuir spectrum lies at small wavenumbers, for narrow-band spectra, and for spectra with broad angular ranges. Realistic Langmuir wave spectra are used to investigate the effects of relaxing the head-on approximation.
A Plasma Hypothesis for Anomalous OH Emission
Although the maser hypothesis for anomalous OH emission encounters difficulties it has not been seriously questioned because, as pointed out by Turner, no alternative has been available. The following ideas might suggest that there is an alternative hypothesis for anomalous OH emission.
Alfvénic Fronts and the turning-off of the Energy Release in Solar Flares
M. S. Wheatland, D. B. Melrose
The effect of impulsively turning off the dissipation in an existing model for energy propagation through Alfvénic fronts into the coronal site of energy release in a solar flare is examined. In the optimum case of impedance matching, the flux tube re-stresses on a much longer timescale than it relaxes, suggesting an explanation for the timescales observed in homologous flares.
Two-Photon Cyclotron Emission in Accretion Columns
J. G. Kirk, D. B. Melrose, J. G. Peters
Cyclotron lines have been observed in the X-ray spectra of two pulsed sources: Her X-1 (Trümper et al. 1978) and 4U 0115 + 63 (Wheaton et al. 1979). The generally accepted model for these objects involves an accretion flow from a companion star in a close binary system onto small regions close to the magnetic poles of a strongly magnetized neutron star. Immediately above the surface, matter is confined in an accretion column by the magnetic field.
Boundary Effects and the Circular Polarization of Synchrotron Sources
Some self-absorbed synchrotron sources have a small but observable degree of circular polarization r_c near the absorption turnover (Roberts et al. 1975). The simplest interpretation of the circular component is in terms of the intrinsic degree of circular polarization of synchrotron radiation (Legg and Westfold 1968); this provides a relation (Roberts et al. 1975)
A Cyclotron Theory for the Beaming Pattern of Jupiter's Decametric Radio Emission
R. G. Hewitt, D. B. Melrose, K. G. Rönnmark
Ground-based observations of Jupiter's decametric radio emission (DAM) have been reviewed by Ellis (1965), Warwick (1967, 1970) and Carr and Gulkis (1969). A startling feature of DAM is the modulating effect of Io, and interpretation of the Io effect has dominated theoretical discussions of DAM until quite recently, specifically until the fly-bys of Voyagers 1 and 2. The Voyager data showed that the DAM appears as nested arcs in the frequency-Jovian longitude plane (Warwick et al. 1979, Boischot et al. 1981). The interpretation of this arc structure has been of primary theoretical interest over the past two years. The most widely adopted explanation is that the emission from each point is confined to the surface of a hollow cone (Goldstein and Thieman 1981). This idea is not new: emission on the surface of a cone was discussed by Ellis and McCulloch (1963); Dulk (1967) derived detailed parameters for the cone (half angle 79°, width 1°) from the occurrence pattern of DAM; and Goldreich and Lynden-Bell (1969) presented a theoretical interpretation of it. More recently Goldstein et al. (1979) used observational data on the Jovian magnetic field in deriving properties of the required emission cone. It seems that one requires the properties of the emission cone to vary with position in the Jovian magnetosphere to account for the nested arc pattern (Goldstein and Thieman 1981; Gurnett and Goertz 1981).
Solar Flares: Implications of the Circuit Model (Invited Paper)
Published online by Cambridge University Press: 25 April 2016, pp. 6-12
A review is given of recent developments in the interpretation of the structure and heating of the solar corona, and of solar flares. An electric circuit model for a flaring magnetic loop is introduced, and used to discuss the closure of the current pattern. It is argued that cross-field current flow cannot be set up after a flux tube has emerged above the photosphere. The energy dissipated in a flare is attributed to a change in the inductance of the flaring loop, with the current remaining approximately constant. Emphasis is placed on the value of the resistance of the flaring loop, and on the associated inductive timescale.
Curvature Emission and Absorption: Single Particle Treatment
Journal: Publications of the Astronomical Society of Australia / Volume 10 / Issue 1 / 1992
The absorption counterpart of curvature emission is reexamined based on the Landau-Lifshitz approach. Early derivations led to the conclusion that maser emission is not possible, but these early derivations neglected a drift effect which was first discussed by Zheleznyakov and Shaposhnikov. When the drift effect is included, the derivation implies that curvature maser emission is possible. It is shown that for maser emission to be possible, the Lorentz factor needs to satisfy γ ≳ 10³ for radius of curvature of the magnetic field lines R_B ≈ 10⁶ to 10⁹ cm and frequency ω ≈ 10⁷ to 10¹¹ s⁻¹. Possible application to pulsars is discussed.
The Theory of Solar Radio Bursts: Can it be understood by the Non-Expert?
The theory of solar radio bursts remains a mystery to most astronomers and astrophysicists. The reasons for this are not hard to identify. First, the solar radioastronomical data are unfamiliar. (The observational data on solar radio bursts is being reviewed separately at this meeting (McLean 1981).) The important features of this data involve frequency-time structures in dynamic spectra, and such features are absent in data on galactic and extra galactic objects. Even for pulsars the data are obtained at discrete frequencies, and the frequency-time structures are not of major importance. Second, the theory itself involves plasma physical concepts which are unfamiliar to most physicists and astronomers. These concepts include those of plasma instabilities, microturbulence, and of particle-wave and wave-wave interactions. Third, one must also admit that there is a prejudice amongst many astronomers against solar physics: the Sun is regarded as interesting only to the extent that it can teach us about other astronomical objects. I shall return to this third point later.
Evidence for Extreme Divergence of Open Field Lines from Solar Active Regions
G. A. Dulk, D. B. Melrose, S. Suzuki
In this paper we review the evidence on the structure of the open magnetic field lines that emerge from solar active regions into interplanetary space. The evidence comes mainly from the measured sizes, positions and polarization of Type III and Type V bursts, and from electron streams observed from space. We find that the observations are best interpreted in terms of a strongly-diverging field topology, with the open field lines filling a cone of angle ~60°.
The Energy Release in Solar Flares: Implications of a Constant-Current Model
A model is explored for energy release in solar flares that involves a constant coronal current. An emerging flux tube is assumed to carry a current I ≲ 10¹² A, and this current is assumed not to change during a flare. Using a circuit model, explosive energy release is attributed to a rapid rise in the coronal resistance R_c, which must adjust to R_c = −dL_c/dt, with dL_c/dt the rate of change of the coronal inductance L_c, to ensure I = constant. In this model the total energy released in the corona is twice the change in the magnetic energy stored in the corona. It is argued that this energy is inadequate to power a large flare and the implications of this conclusion are discussed.
\begin{document}
\title[Boundary continuity for the Stefan problem]{On the logarithmic type boundary modulus of continuity for the Stefan problem}
\date{} \maketitle
\begin{abstract} A logarithmic type modulus of continuity is established for weak solutions to a two-phase Stefan problem, up to the parabolic boundary of a cylindrical space-time domain. For the Dirichlet problem, we merely assume that the spatial domain satisfies a measure density property, and the boundary datum has a logarithmic type modulus of continuity. For the Neumann problem, we assume that the lateral boundary is smooth, and the boundary datum is bounded. The proofs are measure theoretical in nature, exploiting De Giorgi's iteration and refining DiBenedetto's approach. Based on the sharp quantitative estimates, construction of continuous weak (physical) solutions is also indicated. The logarithmic type modulus of continuity has been conjectured to be optimal as a structural property
for weak solutions to such partial differential equations.
\vskip.2truecm
\noindent{\bf Mathematics Subject Classification (2020):} Primary, 35R70; Secondary, 35A01, 35B65, 35D30, 35K65, 35R35, 80A22
\vskip.2truecm
\noindent{\bf Key Words:} Stefan problem, parabolic $p$-Laplacian, boundary continuity, intrinsic scaling, expansion of positivity, parabolic De Giorgi class
\end{abstract}
\tableofcontents
\section{Introduction} The classical Stefan problem aims to describe the evolution of the moving boundary between two phases of a material undergoing a phase change, for instance the melting of ice to water. The temperature of the material in both phases is governed by the heat equation, subject to the usual initial and boundary conditions. However, the interface of the two phases is free, and additional conditions need to be imposed on that part of the boundary (an unknown hyper-surface), which can be viewed as a kind of energy balance law. As such the classical Stefan problem is genuinely a parabolic free boundary problem, cf.~\cite{Caff-Salsa}. The model accounts for a large class of physical phenomena, cf.~\cite{Visintin-08}.
The Stefan problem also admits a variational perspective, cf.~\cite{Friedman-68, Kameno-61, Oleinik} and \cite[Chapter~V, Section~9]{LSU}. Under this point of view, we are led to formulate the initial-boundary value problem for the following nonlinear parabolic equation in a fixed space-time domain $E_T:=E\times(0,T]\subset \mathbb{R}^{N+1}$, that is, \begin{equation}\label{Eq:1:0} \partial_t\beta(u)-\Delta u\ni 0\quad\text{ weakly in }E_T. \end{equation} Here $\beta(\cdot)$ represents the energy (more precisely, the {\it enthalpy}) of the two phases, i.e. $[u>0]$ and $[u<0]$, and it is a maximal monotone graph that permits a jump when the temperature $u$ is zero. The equation \eqref{Eq:1:0} is understood in the sense of differential inclusions. No explicit reference is made to any free boundary in this formulation. Instead, a {\it mushy region}, i.e. $[u=0]$, which consists of a mixture of the two states of the material,
is allowed. The solutions are sought in Sobolev spaces and thus are certain weak ones. As a result, the question concerning the regularity of the weak solutions is naturally raised. This is the problem we try to tackle in this note. See \cite[\S~1.3]{Visintin-08} for a comparison between the classical and the weak formulations. See \cite{Kinder} for another formulation in terms of variational inequalities.
Pioneering works of Caffarelli \& Evans \cite{Caff-Evans-83}, DiBenedetto \cite{DB-82, DB-86}, Sacks \cite{Sacks-83}, and Ziemer \cite{Ziemer-82}, have shown that while the energy admits a jump, the temperature is always continuous, even across the mushy region $[u=0]$. Among these contributions, we single out \cite{DB-82, DB-86} whose approach is closely followed in this work. In \cite{DB-82} the interior and up to the boundary continuity of the temperature was obtained, given general Neumann data or homogeneous Dirichlet data on the lateral boundary. However, the case of nonhomogeneous Dirichlet data turned out to be more involved and was solved later in \cite{DB-86}. An explicit modulus of continuity of the temperature, both in the interior and at the boundary, can be derived from the method used in \cite{DB-82, DB-86}. Roughly speaking, it is of the following type (cf.~\cite{DB-Friedman-84}): \begin{equation}\tag{I}\label{Eq:modulus:1}
(0,1)\ni r\mapsto \boldsymbol\omega(r)=\big(\ln|\ln (cr)| \big)^{-\sigma}\quad\text{ for some }c,\,\sigma>0. \end{equation} The method employed in \cite{DB-82, DB-86} is flexible enough to deal with the situation when the Laplacian $\Delta$ in \eqref{Eq:1:0} is replaced by a general quasilinear diffusion part and when a lower order term appears.
Subsequently, the equation \eqref{Eq:1:0} replacing the Laplacian $\Delta$ by the $p$-Laplacian $\Delta_p$ for $p\ge2$ has been studied in \cite{Urbano-14, Urbano-17, Urbano-97, Urbano-00, Urbano-08} by Urbano et al.
In particular, we single out the recent work \cite{Urbano-14} where the {\it interior} modulus of continuity has been improved to be of the following type \begin{equation}\tag{II}\label{Eq:modulus:2}
(0,1)\ni r\mapsto \boldsymbol\omega(r)=|\ln (cr)|^{-\sigma}\quad\text{ for some }c,\,\sigma>0. \end{equation}
Discarding a logarithm, this type of modulus of continuity represents an improvement even in the case of Laplacian, let alone the complication brought by the $p$-Laplacian. The modulus \eqref{Eq:modulus:2} has been conjectured to be optimal in \cite{Urbano-14}. (See also \cite{Caff-Fried} for the one-phase problem.)
Although a {\it boundary} modulus of continuity is derived in \cite{Urbano-17} for Dirichlet data, it is still of type \eqref{Eq:modulus:1}.
How to achieve a boundary modulus of type \eqref{Eq:modulus:2} with Dirichlet data, and how to deal with the same issue for Neumann data, remain elusive. We will answer these questions in this note. The significance of continuity for weak solutions (temperatures) stems from their physical bearings; an explicit and sharp modulus has further mathematical implications, cf.~\cite{Caff-Fried}. Our quantitative estimates up to the boundary allow us to construct physical solutions with the type \eqref{Eq:modulus:2} modulus of continuity.
\subsection{Statement of the results} Our main goal is to establish a modulus of continuity of type \eqref{Eq:modulus:2} for weak solutions to the Stefan problem, up to the parabolic boundary, given either Dirichlet or Neumann conditions. Moreover, for the Dirichlet problem, the $p$-Laplacian with $p\ge2$ is considered. Evidently, our results are new even for the Laplacian.
More precisely, denoting an open set in $\mathbb{R}^N$ ($N\ge1$) by $E$
and setting $E_T:=E\times(0,T]$, we are concerned with
the following
nonlinear parabolic equation \begin{equation}\label{Eq:1:1} \partial_t\beta(u)-\operatorname{div}\bl{A}(x,t,u, Du) \ni 0\quad \text{ weakly in }\> E_T. \end{equation} Here $\beta$ is a maximal monotone graph in $\mathbb{R}\times\mathbb{R}$ defined by \begin{equation*} \beta(u)=\left\{ \begin{array}{cl} u,\quad& u>0,\\[5pt] \left[-\nu,0\right],\quad& u=0,\\[5pt] u-\nu,\quad& u<0, \end{array} \right. \end{equation*} for some constant $\nu>0$ that represents the exchange of {\it latent heat}.
The function $\bl{A}(x,t,u,\xi)\colon E_T\times\mathbb{R}^{N+1}\to\rr^N$ is assumed to be measurable with respect to $(x, t) \in E_T$ for all $(u,\xi)\in \mathbb{R}\times\rr^N$, and continuous with respect to $(u,\xi)$ for a.e.~$(x,t)\in E_T$. Moreover, we assume the structure conditions \begin{equation} \label{Eq:1:2} \left\{ \begin{array}{l}
\bl{A}(x,t,u,\xi)\cdot \xi\ge C_o|\xi|^p \\[5pt]
|\bl{A}(x,t,u,\xi)|\le C_1|\xi|^{p-1} \end{array} \right .\quad \text{ a.e.}\> (x,t)\in E_T,\, \forall\,u\in\mathbb{R},\,\forall\xi\in\rr^N, \end{equation} where $C_o$ and $C_1$ are given positive constants, and we take $p\ge2$. The set of parameters $\{\nu, p,N,C_o,C_1\}$ will be referred to as the (structural) data in the sequel.
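The prototype of \eqref{Eq:1:2} is $\bl{A}(x,t,u,\xi)=|\xi|^{p-2}\xi$, for which \begin{equation*} \bl{A}(x,t,u,\xi)\cdot\xi=|\xi|^p \quad\text{ and }\quad |\bl{A}(x,t,u,\xi)|=|\xi|^{p-1}, \end{equation*} so that \eqref{Eq:1:2} holds with $C_o=C_1=1$. In this case the diffusion part of \eqref{Eq:1:1} is the $p$-Laplacian $\Delta_p u=\operatorname{div}\big(|Du|^{p-2}Du\big)$, and for $p=2$ equation \eqref{Eq:1:1} formally reduces to the model equation \eqref{Eq:1:0}.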
Before considering the initial-boundary value problems for \eqref{Eq:1:1}, let us recall the parabolic $p$-Laplace type equation, which is pertinent to \eqref{Eq:1:1}: \begin{equation}\label{Eq:p-Laplace} u_t-\operatorname{div}\bl{A}(x,t,u, Du) = 0\quad \text{ weakly in }\> E_T, \end{equation} where the properties of $\bl{A}(x,t,u, \xi)$ are retained.
The Dirichlet problem for \eqref{Eq:1:1} is formulated as: \begin{equation}\label{Dirichlet} \left\{ \begin{aligned} &\partial_t\beta(u)-\operatorname{div}\bl{A}(x,t,u, Du) \ni 0\quad \text{ weakly in }\> E_T\\
&u(\cdot,t)\Big|_{\partial E}=g(\cdot,t)\quad \text{ a.e. }\ t\in(0,T]\\ &u(\cdot,0)=u_o. \end{aligned} \right. \end{equation}
The boundary datum satisfies \begin{equation}\tag{D}\label{D} \left\{ \begin{aligned} &g\in L^p\big(0,T;W^{1,p}( E)\big),\\ &g \text{ continuous on}\ \overline{E}_T\ \text{with modulus of continuity }\boldsymbol\omega_g(\cdot). \end{aligned} \right. \end{equation} In addition, the initial datum satisfies \begin{equation}\tag{U$_o$}\label{I} \mbox{ $u_o$ is continuous in $\overline{E}$ with modulus of continuity $\boldsymbol\omega_{o}(\cdot)$.} \end{equation}
Regarding the geometry of the boundary $\partial E$, we assume a measure density condition, that is, \begin{equation}\tag{G}\label{geometry} \left\{\;\;
\begin{minipage}[c][1.5cm]{0.7\textwidth}
there exist $\alpha_*\in(0,1)$ and $\bar\varrho\in(0,1)$, such that for all $x_o\in\partial E$,
for every cube $K_\varrho(x_o)$ and $0<\varrho\le\bar\varrho$, there holds
$$
|E\cap K_{\varrho}(x_o)|\le(1-\alpha_*)|K_\varrho|.
$$
\end{minipage} \right. \end{equation}
Here we have denoted by $K_\varrho(x_o)$ the cube of side length $2\varrho$ and center $x_o$, with faces parallel with the coordinate planes of $\mathbb{R}^N$. Intuitively, the condition \eqref{geometry} means that one can place a cone at $x_o$ exterior to $E$, with an angle quantified by $\alpha_*$.
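For instance, any convex domain $E$ verifies \eqref{geometry} with $\alpha_*=\frac12$ and an arbitrary $\bar\varrho$: at each $x_o\in\partial E$ there is a supporting hyperplane through $x_o$, and since the cube $K_\varrho(x_o)$ is symmetric about its center $x_o$, the closed half-space containing $E$ intersects $K_\varrho(x_o)$ in exactly half of its volume, whence \begin{equation*} |E\cap K_{\varrho}(x_o)|\le\tfrac12|K_\varrho|. \end{equation*} More generally, any domain with a uniform exterior cone condition, in particular any Lipschitz domain, satisfies \eqref{geometry} for suitable $\alpha_*$ and $\bar\varrho$.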
Throughout the rest of this note, we will use the symbols \begin{equation*} \left\{ \begin{aligned} Q_\varrho(\theta)&:=K_{\varrho}(x_o)\times(t_o-\theta\varrho^p,t_o),\\[5pt] Q_{R,S}&:=K_R(x_o)\times (t_o-S,t_o), \end{aligned}\right. \end{equation*} to denote (backward) cylinders with the indicated positive parameters; we omit the vertex $(x_o,t_o)$ from the notations for simplicity. We will also denote the lateral boundary by $S_T:=\partial E\times(0,T]$ and the parabolic boundary by $\partial_{\mathcal{P}}E_T:=S_T\cup [\overline{E}\times\{0\}]$.
The formal definition of solution to the Dirichlet problem \eqref{Dirichlet} will be postponed to Section~\ref{S:1:2}. We now present the regularity theorem concerning the Dirichlet problem \eqref{Dirichlet} up to the parabolic boundary $\partial_{\mathcal{P}}E_T$. \begin{theorem}\label{Thm:1:1} Let $u$ be a bounded weak solution to the Dirichlet problem \eqref{Dirichlet} under the condition \eqref{Eq:1:2} with $p\ge2$. Assume that \eqref{D}, \eqref{I} and \eqref{geometry} hold. Then $u$ is continuous in
$\overline{ E_T}$. More precisely, there is a modulus of continuity $\boldsymbol\omega(\cdot)$, determined by the data, $\alpha_*$, $\bar\varrho$, $\|u\|_{\infty,E_T}$, $\boldsymbol\omega_{o}(\cdot)$ and $\boldsymbol\omega_{g}(\cdot)$, such that
\begin{equation*}
\big|u(x_1,t_1)-u(x_2,t_2)\big|
\le
\boldsymbol\omega\!\left(|x_1-x_2|+|t_1-t_2|^{\frac1p}\right),
\end{equation*} for every pair of points $(x_1,t_1), (x_2,t_2)\in \overline{E_T}$. In particular, there exists $\sigma\in(0,1)$ depending only on the data and $\alpha_*$, such that if \[
\boldsymbol\omega_g(r)\le \frac{C_g}{|\ln r|^{\lambda}}\quad\text{ and }\quad\boldsymbol \omega_o(r)\le \frac{C_{u_o}}{|\ln r|^{\lambda}}\quad\text{ for all }r\in(0,\bar\varrho), \] where $C_g,\,C_{u_o}>0$ and $\lambda>\sigma$, then the modulus of continuity is $$\boldsymbol\omega(r)=C \Big(\ln \frac{\bar\varrho}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0, \bar\varrho)$$ with some $C>0$
depending on the data, $\lambda$, $\alpha_*$, $\|u\|_{\infty,E_T}$, $C_{u_o}$ and $C_{g}$. \end{theorem}
Now we consider the Neumann problem. In order to deal with possible variational data on $S_T$, we assume $\partial E$ is of class $C^1$, such that the outward unit normal, which we denote by {\bf n}, is defined on $\partial E$ pointwise. Let us consider the initial-boundary value problem of Neumann type: \begin{equation}\label{Neumann} \left\{ \begin{aligned} &\partial_t\beta(u)-\operatorname{div}\bl{A}(x,t,u, Du) \ni 0\quad \text{ weakly in }\> E_T\\ &\bl{A}(x,t,u, Du)\cdot {\bf n}=\psi(x,t, u)\quad \text{ a.e. }\ \text{ on }S_T\\ &u(\cdot,0)=u_o(\cdot), \end{aligned} \right. \end{equation} where the structure conditions \eqref{Eq:1:2} and the initial condition \eqref{I} are retained. On the Neumann datum $\psi$ we assume for simplicity that, for some absolute constant $C_2$, there holds \begin{equation}\label{N-data}\tag{\bf{N}}
|\psi(x,t, u)|\le C_2\quad \text{ for a.e. }(x,t, u)\in S_T\times\mathbb{R}. \end{equation} More general conditions should also work (cf.~\cite[Chapter~II, Section~2]{DB}). The formal definition of weak solution to \eqref{Neumann} will be given in Section~\ref{S:1:2}. Now we are ready to present the results concerning regularity of solutions to the Neumann problem \eqref{Neumann} up to the parabolic boundary $\partial_{\mathcal{P}}E_T$.
\begin{theorem}\label{Thm:1:2} Let $u$ be a bounded weak solution to the Neumann problem \eqref{Neumann} under the condition \eqref{Eq:1:2} with $p=2$. Assume that $\partial E$ is of class $C^1$, and \eqref{N-data} and \eqref{I} hold.
Then $u$ is continuous in $\overline{E_T}$.
More precisely, there is a modulus of continuity $\boldsymbol\omega(\cdot)$, determined by the data, the structure of $\partial E$, $C_2$, $\|u\|_{\infty,E_T}$ and $\boldsymbol\omega_{o}(\cdot)$, such that
\begin{equation*}
\big|u(x_1,t_1)-u(x_2,t_2)\big|
\le
\boldsymbol\omega\!\left(|x_1-x_2|+|t_1-t_2|^{\frac12}\right),
\end{equation*} for every pair of points $(x_1,t_1), (x_2,t_2)\in \overline{E_T}$. In particular, there exists $\sigma\in(0,1)$ depending only on the data, $C_2$ and the structure of $\partial E$, such that if \[
\boldsymbol\omega_o(r)\le \frac{C_{u_o}}{|\ln r|^{\lambda}}\quad\text{ for all }r\in(0,1), \] where $C_{u_o}>0$ and $\lambda>\sigma$, then the modulus of continuity is $$\boldsymbol\omega(r)=C \Big(\ln \frac{1}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0, 1),$$ with some $C>0$
depending on the data, the structure of $\partial E$, $C_2$, $\|u\|_{\infty,E_T}$ and $C_{u_o}$.
\end{theorem}
\begin{remark}\upshape The type \eqref{Eq:modulus:2} boundary modulus of continuity for the Neumann problem \eqref{Neumann}
has been left open in Theorem~\ref{Thm:1:2} for the case $p\neq 2$. This is mainly because of our limited knowledge of the {\it parabolic De Giorgi class} modeled on the parabolic $p$-Laplacian \eqref{Eq:p-Laplace}; see further Remark~\ref{Rmk:6:2}. Nevertheless, when $p>2$, we do have a type \eqref{Eq:modulus:1} boundary modulus for the Neumann problem \eqref{Neumann} with a general $\psi$ as in \eqref{N-data}. This can be achieved by adapting the proof in \cite{Urbano-00} for the interior regularity. \end{remark}
\begin{remark}\upshape We have stated Theorems~\ref{Thm:1:1} -- \ref{Thm:1:2} in a global fashion. However, their proofs are entirely local, and the results could be stated near a distinguished part of $\partial_{\mathcal{P}}E_T$. \end{remark}
\begin{remark}\upshape
Theorems~\ref{Thm:1:1} -- \ref{Thm:1:2} continue to hold for \eqref{Eq:1:1}
with lower order terms:
the convection resulting from the heat transfer is reflected by a lower order term.
Also, the function $\beta(u)$ could take more general forms, reflecting thermal properties of the material that may change slightly with the temperature, as long as it presents only one jump. (For the case of multiple jumps, see \cite{DBV-95}.)
The modifications of the proofs can be modeled on the arguments in \cite{DB,DB-82,DB-86}.
However, we will not pursue generality in this direction; instead, the focus will be on the actual novelties. \end{remark}
\subsection{Definitions of solution}\label{S:1:2} The notion of local, weak solution to the parabolic $p$-Laplace type equation \eqref{Eq:p-Laplace}, and the notions for its initial--boundary value problems can be found in \cite[Chapter~II]{DB}. In particular, weak solutions to \eqref{Eq:p-Laplace} are defined in the function space \begin{equation} \label{Eq:func-space-1}
u\in C\big(0,T;L^2(E)\big)\cap L^p\big(0,T; W^{1,p}(E)\big), \end{equation} which does not require any knowledge of the time derivative.
In contrast to \eqref{Eq:func-space-1}, a special feature of the notion of solution to the Stefan problem \eqref{Eq:1:1} is that solutions are required to possess a time derivative in the Sobolev sense. This is necessary to justify the calculations in the proofs of the theorems. On the other hand, we will explain in Section~\ref{S:approx} how to construct continuous weak solutions to \eqref{Eq:1:1} without any knowledge of the time derivative. \subsubsection{Notion of Local Solution}\label{S:1:2:1} A function \begin{equation*}
u\in W_{\operatorname{loc}}^{1,2}\big(0,T;L^2_{\operatorname{loc}}(E)\big)\cap L^p_{\operatorname{loc}}\big(0,T; W^{1,p}_{\operatorname{loc}}(E)\big) \end{equation*} is a local, weak sub(super)-solution to \eqref{Eq:1:1} with the structure conditions \eqref{Eq:1:2}, if for every compact set $K\subset E$ and every sub-interval $[t_1,t_2]\subset (0,T]$, there is a selection $v\subset\beta(u)$, i.e. \[ \left\{\big(z,v(z)\big): z\in E_T\right\}\subset \left\{\big(z,\beta[u(z)]\big): z\in E_T\right\}, \]
such that \begin{equation*}
\int_K v\zeta \,\mathrm{d}x\bigg|_{t_1}^{t_2}
+
\iint_{K\times(t_1,t_2)} \big[-v\partial_t\zeta+\bl{A}(x,t,u,Du)\cdot D\zeta\big]\mathrm{d}x\mathrm{d}t
\le(\ge)0 \end{equation*} for all non-negative test functions \begin{equation}\label{Eq:function-space} \zeta\in W^{1,2}_{\operatorname{loc}}\big(0,T;L^2(K)\big)\cap L^p_{\operatorname{loc}}\big(0,T;W_o^{1,p}(K) \big). \end{equation} Observe that $v\in L^{\infty}_{\operatorname{loc}} \big(0,T;L^2_{\operatorname{loc}}(E)\big)$ and hence all the integrals are well-defined. Note that the above integral formulation itself does not involve the time derivative of $u$. On the other hand, if we use the time derivative of $u$, the integral formulation may be written as \begin{equation}\label{Eq:int-form} \begin{aligned}
-\int_K&\nu(x,t)\copy\chibox_{[u\le0]}\zeta\,\mathrm{d}x\bigg|_{t_1}^{t_2}+\iint_{K\times(t_1,t_2)}\nu(x,t)\copy\chibox_{[u\le0]}\partial_t\zeta\,\mathrm{d}x\mathrm{d}t\\ &+\iint_{K\times(t_1,t_2)} \big[\partial_t u\zeta+\bl{A}(x,t,u,Du)\cdot D\zeta\big]\mathrm{d}x\mathrm{d}t
\le(\ge)0, \end{aligned} \end{equation} where $\zeta$ is as in \eqref{Eq:function-space} and $\nu(x,t)\ge0$ is given by \begin{equation*} \nu(x,t):=\left\{ \begin{array}{cl} \nu,&\quad (x,t)\in [u<0],\\[5pt] -v(x,t),&\quad (x,t)\in [u=0], \end{array}\right. \end{equation*} and $\copy\chibox$ is the characteristic function of the indicated set.
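The passage between the two formulations is a direct computation: by the definition of $\nu(x,t)$, the selection may be written as $v=u-\nu(x,t)\copy\chibox_{[u\le0]}$, while, since both $u$ and $\zeta$ possess a time derivative in the indicated spaces, an integration by parts in time gives \begin{equation*} \int_K u\zeta\,\mathrm{d}x\bigg|_{t_1}^{t_2}-\iint_{K\times(t_1,t_2)}u\partial_t\zeta\,\mathrm{d}x\mathrm{d}t=\iint_{K\times(t_1,t_2)}\partial_tu\,\zeta\,\mathrm{d}x\mathrm{d}t. \end{equation*} Substituting the decomposition of $v$ and this identity into the first integral formulation yields exactly \eqref{Eq:int-form}.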
A function $u$ that is both a local weak sub-solution and a local weak super-solution to \eqref{Eq:1:1} is a local weak solution.
\subsubsection{Notion of Solution to the Dirichlet Problem}\label{S:1:4:3} A function \begin{equation*}
u\in W^{1,2}\big(0,T;L^2(E)\big)\cap L^p\big(0,T; W^{1,p}(E)\big) \end{equation*} is a weak sub(super)-solution to \eqref{Dirichlet}, if there is a selection $v\subset\beta(u)$, for every sub-interval $[t_1,t_2]\subset (0,T]$,
such that \begin{equation*} \begin{aligned}
\int_{E} v\zeta \,\mathrm{d}x\bigg|_{t_1}^{t_2}
&+
\iint_{E\times(t_1,t_2)} \big[-v\partial_t\zeta+\bl{A}(x,t,u,Du)\cdot D\zeta\big]\mathrm{d}x\mathrm{d}t
\le(\ge)0 \end{aligned} \end{equation*} for all non-negative test functions $\zeta$ satisfying \eqref{Eq:function-space}. An equivalent form can be given as in \eqref{Eq:int-form} involving $\partial_t u$. Clearly, now the compact set $K$ in \eqref{Eq:function-space} could touch the boundary $\partial E$, i.e. $K\subset \overline{E}$.
Moreover, the initial datum is taken in the sense that there exists a selection $v_o\subset\beta(u_o)$ such that for any compact set $K\subset\overline{E}$, \[ \int_{K\times\{t\}}(v-v_o)^{2}_{\pm}\,\mathrm{d}x\to0\quad\text{ as }t\to0. \] The Dirichlet datum $g$ is attained in the sense that $u\le(\ge)g$ on $\partial E$, i.e., the traces of $(u-g)_{\pm}$ vanish as functions in $W^{1,p}(E)$ for a.e. $t\in(0,T]$, that is, $(u-g)_{\pm}\in L^p(0,T; W^{1,p}_o(E))$. Notice that no {\it a priori} information on the smoothness of $\partial E$ is needed for the time being.
A function $u$ that is both a weak sub-solution and a weak super-solution to \eqref{Dirichlet} is a weak solution.
\subsubsection{Notion of Solution to the Neumann Problem}\label{S:1:4:4} A function \begin{equation*}
u\in W^{1,2}\big(0,T;L^2(E)\big)\cap L^p\big(0,T; W^{1,p}(E)\big) \end{equation*} is a weak sub(super)-solution to \eqref{Neumann}, if there is a selection $v\subset\beta(u)$, for every compact set $K\subset \mathbb{R}^N$ and every sub-interval $[t_1,t_2]\subset (0,T]$,
such that \begin{equation*} \begin{aligned}
\int_{K\cap E} v\zeta \,\mathrm{d}x\bigg|_{t_1}^{t_2}
&+
\iint_{\{K\cap E\}\times(t_1,t_2)} \big[-v\partial_t\zeta+\bl{A}(x,t,u,Du)\cdot D\zeta\big]\mathrm{d}x\mathrm{d}t\\
&\le(\ge)\iint_{\{K\cap\partial E\}\times(t_1,t_2)}\psi(x,t,u)\zeta\,\mathrm{d}\sigma\mathrm{d}t \end{aligned} \end{equation*} for all non-negative test functions $\zeta$ satisfying \eqref{Eq:function-space}, where $K$ is an arbitrary compact set in $\rr^N$ now,
and $\mathrm{d}\sigma$ denotes the surface measure on $\partial E$. The Neumann datum $\psi$ is reflected in the boundary integral on the right-hand side. Moreover, the initial datum is taken as in the Dirichlet problem. An equivalent form can be given as in \eqref{Eq:int-form} involving $\partial_t u$.
A function $u$ that is both a weak sub-solution and a weak super-solution to \eqref{Neumann} is a weak solution.
\subsection{DiBenedetto's approach: revisit and refinement}
Any known approach to the local continuity issue for the equation \eqref{Eq:1:0} relies, in one way or another, on De Giorgi's method. Since many technicalities are to follow that might obscure the main ideas behind them, we briefly review the original approach of DiBenedetto in \cite{DB-82, DB-86} for $p=2$, and highlight the main improvement to the method needed in order to achieve a type \eqref{Eq:modulus:2} modulus of continuity.
For a cylinder $Q_\varrho=K_{\varrho}(x_o)\times(t_o-\varrho^2,t_o)\subset E_T$, we introduce the numbers $\mu^{\pm}$ and $\omega$ satisfying \begin{equation*}
\mu^+=\operatornamewithlimits{ess\,sup}_{Q_{\varrho}} u,
\quad
\mu^-=\operatornamewithlimits{ess\,inf}_{Q_{\varrho}} u,
\quad
\omega=\operatornamewithlimits{ess\,osc}_{Q_{\varrho}} u =\mu^+ - \mu^-. \end{equation*} The goal is to reduce the oscillation of $u$ over a cylinder smaller than $Q_\varrho$ with the same vertex $(x_o,t_o)$. To this end, we first observe that one of the following must hold:
\begin{equation}\label{Eq:mu-pm-0} \mu^+-\tfrac14\omega\ge \tfrac14\omega\quad\text{ or }\quad\mu^-+\tfrac14\omega\le -\tfrac14\omega. \end{equation} Indeed, if both failed, then $\mu^+<\tfrac12\omega$ and $\mu^->-\tfrac12\omega$, so that $\omega=\mu^+-\mu^-<\omega$, a contradiction. Let us suppose the first one, i.e.~\eqref{Eq:mu-pm-0}$_1$, holds, as the other case is similar.
The first step lies in showing a De Giorgi type lemma, which asserts that there exists a positive constant $c_o$ depending only on the structural data, such that if \begin{equation}\label{Eq:DG:measure}
|[u\le\mu^-+\tfrac14\omega]\cap Q_{\varrho} |\le \alpha |Q_{\varrho}|,\quad\text{ where }\alpha=c_o\omega^{\frac{N+2}2}, \end{equation}
then \[ u\ge \mu^-+\tfrac18\omega\quad\text{ a.e. in }Q_{\frac12\varrho}, \] which in turn yields a reduction of oscillation \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac12\varrho}}u\le\tfrac78\omega. \] The singularity of $\beta(\cdot)$ at $[u=0]$ is reflected in the dependence of $\alpha$ on $\omega$ in \eqref{Eq:DG:measure}.
The second step is to consider the case when \eqref{Eq:DG:measure} does not hold. Since $\mu^+-\frac14\omega\ge \mu^-+\frac14\omega$ always holds, the reverse of the measure information \eqref{Eq:DG:measure} implies that \[
|[\mu^+-u\ge\tfrac14\omega]\cap Q_{\varrho} |> \alpha |Q_{\varrho}|. \] Based on this, it is not hard to check that there exists $t_*\in[t_o-\varrho^2, t_o-\tfrac12\alpha\varrho^2]$, such that \begin{equation}\label{Eq:DG:measure:1}
|[\mu^+-u(\cdot, t_*)\ge\tfrac14\omega]\cap K_{\varrho}(x_o) |> \tfrac12\alpha |K_{\varrho}|. \end{equation}
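Let us indicate the averaging argument behind the existence of such a $t_*$: if \eqref{Eq:DG:measure:1} failed for every $t_*\in[t_o-\varrho^2, t_o-\tfrac12\alpha\varrho^2]$, then splitting the time interval would give \begin{equation*} \big|[\mu^+-u\ge\tfrac14\omega]\cap Q_{\varrho}\big| \le \tfrac12\alpha|K_\varrho|\,\varrho^2+|K_\varrho|\,\tfrac12\alpha\varrho^2 =\alpha|Q_\varrho|, \end{equation*} contradicting the lower bound $|[\mu^+-u\ge\tfrac14\omega]\cap Q_{\varrho}|>\alpha|Q_{\varrho}|$.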
The key to proceeding now is to observe that because of \eqref{Eq:mu-pm-0}$_1$, the function $v$ defined by $$v:=\tfrac14\omega-(u-k)_+\quad\text{ with }k=\mu^+-\tfrac14\omega>0,$$ is actually a non-negative, weak super-solution to the parabolic equation \eqref{Eq:p-Laplace} with $p=2$ in $Q_\varrho$, as the set $[u>k]$ excludes the potential singularity of $\beta(u)$. The information in \eqref{Eq:DG:measure:1} can be rephrased in terms of $v$: \[
|[v(\cdot, t_*)\ge\tfrac14\omega]\cap K_{\varrho}(x_o) |> \tfrac12\alpha |K_{\varrho}|. \] As such, the machinery of De Giorgi can be applied to $v$ as in \cite[Chapter~II, Section~7]{LSU}, and the measure information \eqref{Eq:DG:measure:1} translates into pointwise positivity for $v$ (thus also for $\mu^+-u$) up to the top of the cylinder $Q_{\varrho}$. This kind of property for non-negative, weak super-solutions is called {\it expansion of positivity}.
More precisely, there holds \[ \mu^+-u\ge\eta(\alpha)\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\alpha), \] which gives the reduction of oscillation \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac12\varrho}(\alpha)}u\le\big(1-\eta(\alpha)\big)\omega. \] The singularity of $\beta(\cdot)$, reflected in the dependence of $\alpha$ on $\omega$, has now been passed from the measure information \eqref{Eq:DG:measure:1} to this reduction of oscillation, through the dependence of $\eta$ on $\alpha$ and hence on $\omega$. Such dependence of $\eta$ on $\alpha$ is traced by $\eta\approx2^{-\frac{1}{\alpha^q}}$, for a generic positive constant $q$ determined by the data; see also \cite[Chapter~4, Section~2]{DBGV-mono}. Apart from technical complications, these are the main steps in \cite{DB-82}.
Strikingly, the dependence of $\eta$ on $\alpha$ can be ameliorated in the sense that $\eta\approx \alpha^{q}$, for a generic positive constant $q$ determined by the data.
This was first discovered by DiBenedetto \& Trudinger in \cite{DT} for the elliptic De Giorgi class via a Krylov-Safonov type covering argument. Various parabolic versions have been developed since then; see for instance \cite{DBGV-mono, Liao, W}.
It is exactly the improvement of the dependence of $\eta$ on $\alpha$ in the expansion of positivity that leads to the refinement of the modulus of continuity from \eqref{Eq:modulus:1} to \eqref{Eq:modulus:2}. Indeed, iterating the arguments presented above yields the reduction of oscillation \[ \operatornamewithlimits{ess\,osc}_{Q_n}u\le \omega_n,\quad n=0,1,\cdots, \] along a nested family of cylinders $\{Q_n\}$ with their common vertex at $(x_o,t_o)$. The sequence $\{\omega_n\}$ obeys two types of recurrences, contingent upon the dependence of $\eta$ on $\alpha$, recalling also the dependence of $\alpha$ on $\omega$ in \eqref{Eq:DG:measure}: \begin{equation*} \left\{ \begin{array}{lc} \omega_{n+1}=\big(1-2^{-\frac1{\omega^q_n}}\big)\omega_n,\quad&\text{(Type I)},\\[5pt] \omega_{n+1}=\big(1-\omega^q_n\big)\omega_n,\quad&\text{(Type II)}. \end{array}\right. \end{equation*} Here we have again used $q>0$ as a generic constant determined by the data. An inspection of the sequence $\{\omega_n\}$ reveals that for $n$ large \begin{equation*} \left\{ \begin{array}{lc} \omega_{n}\lesssim(\ln n)^{-\sigma},\quad&\text{(Type I)},\\[5pt] \omega_{n}\lesssim n^{-\sigma},\quad&\text{(Type II)}, \end{array}\right. \end{equation*}
for some proper $\sigma\in(0,\frac1q)$. If the size of $Q_r\approx Q_n$ is quantified in a geometric fashion, $r\approx(\tfrac12)^n$ for instance,
then we have $n\approx\ln\frac1r$. Consequently, the modulus of continuity can be estimated by
\begin{equation*} \left\{ \begin{array}{lc} \displaystyle\operatornamewithlimits{ess\,osc}_{Q_r}u\approx\operatornamewithlimits{ess\,osc}_{Q_n}u\le\omega_{n}\lesssim(\ln n)^{-\sigma}\approx(\ln \ln\tfrac1r)^{-\sigma},\quad&\text{(Type I)},\\[5pt] \displaystyle\operatornamewithlimits{ess\,osc}_{Q_r}u\approx\operatornamewithlimits{ess\,osc}_{Q_n}u\le\omega_{n}\lesssim n^{-\sigma}\approx (\ln\tfrac1r)^{-\sigma},\quad&\text{(Type II)}. \end{array}\right. \end{equation*}
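Let us indicate, for the Type II recurrence, how this decay can be verified (a sketch, assuming $\omega_o\le1$): setting $y_n:=\omega_n^{-q}$ and using the elementary inequality $(1-s)^{-q}\ge1+qs$ for $s\in(0,1)$, the recurrence gives \begin{equation*} y_{n+1}=y_n\big(1-\omega_n^q\big)^{-q}\ge y_n\big(1+q\,\omega_n^q\big)=y_n+q, \end{equation*} so that $y_n\ge y_o+qn$, i.e. $\omega_n\le(y_o+qn)^{-\frac1q}\lesssim n^{-\frac1q}$, consistent with the stated decay for any $\sigma\in(0,\frac1q)$.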
The type \eqref{Eq:modulus:2} {\it interior} modulus of continuity has been achieved in \cite{Urbano-14}. Instead of the expansion of positivity, a weak Harnack inequality was employed in \cite{Urbano-14}. One purpose of this note is to demonstrate that the expansion of positivity is more flexible when the type \eqref{Eq:modulus:2} modulus of continuity is requested not only in the interior, but also at the {\it boundary}, given Neumann boundary data. Needless to say, the expansion of positivity lies at the heart of any kind of Harnack estimate, cf.~\cite{DBGV-acta, DBGV-mono}.
In the above outline, the singularity of $\beta(\cdot)$ at $[u=0]$ is accommodated by choosing proper levels $k$ for $u$, either close to $\mu^-$ or to $\mu^+$. However, when the boundary regularity is considered, given general Dirichlet data, the level $k$ has to obey extra restrictions, such that $(u-k)_\pm$ vanish on the lateral boundary, cf.~\eqref{Eq:k-restriction}. As a result, the analysis becomes more involved. The most crucial part of DiBenedetto's approach in \cite{DB-86}
consists in quantifying the largeness of $|[u\le0]|$ where the singularity appears. If it is small enough, then the usual parabolic scaling prevails.
Otherwise, the largeness of $|[u\le0]|$ is incorporated into the singular term on the left-hand side of the energy estimate, and then offsets the relative largeness brought by the singularity on the right-hand side; in this case, the usual parabolic scaling fails and intrinsically scaled cylinders have to be introduced. Retrospectively, this idea lies at the origin of the {\it method of intrinsic scaling}, cf.~\cite{DB, DBGV-mono, Urbano-08}.
We will adapt this idea, further balancing the singularity of $\beta(\cdot)$ against the degeneracy brought by $\Delta_p$, and refine the approach by an argument from \cite[Section~4.4]{DBGV-09} and \cite[Proposition~5.1]{DBG-16}, which allows us to shrink the measure of the set where $u\approx0$ faster than the original De Giorgi method does, cf.~Lemma~\ref{Lm:shrink:1}. We will not try to recapitulate all the steps beforehand, as they are more involved and delicate than in the interior case. Instead, we add remarks along the course of the proof, which may assist understanding of the technicalities.
\
\noindent{\it Acknowledgement. } This research has been funded by the FWF--Project P31956--N32 “Doubly nonlinear evolution equations”.
\section{The Dirichlet problem: preliminary boundary estimates}\label{S:2}
We will deal with the oscillation decay of $u$ in the vicinity of the lateral boundary in Sections~\ref{S:2} -- \ref{S:bdry}; in the interior in Section~\ref{S:interior}; at the initial level in Section~\ref{S:Thm-proof}. Afterwards, in Section~\ref{S:Thm-proof}, the proof of Theorem~\ref{Thm:1:1} follows from standard covering arguments based on these three cases. In the present section, we derive energy estimates for sub(super)-solutions to the Dirichlet problem \eqref{Dirichlet} near the lateral boundary. Consequently, two De Giorgi type lemmas are implied by the energy estimates.
\subsection{Energy estimates}\label{S:energy-1} In this section, we derive some energy estimates near $S_T$, for solutions to the Dirichlet problem \eqref{Dirichlet}. We will work within the cylinder $Q_{R,S}:=K_R(x_o)\times (t_o-S,t_o)$ for some $(x_o,t_o)\in S_T$, such that $t_o-S>0$. In order to employ the truncated functions $(u-k)_\pm$ across the lateral boundary, we need to impose certain restrictions on the level $k$, i.e.,
\begin{equation}\label{Eq:k-restriction}
\left\{
\begin{aligned}
&k\ge\sup_{Q_{R,S}\cap S_T}g\quad\text{ for sub-solutions},\\
&k\le\inf_{Q_{R,S}\cap S_T}g\quad\text{ for super-solutions}.
\end{aligned}
\right. \end{equation} Let $\zeta$ be a non-negative, piecewise smooth cutoff function vanishing on $\partial K_{R}(x_o)\times(t_o-S,t_o)$. In such a way, the test functions \[ Q_{R,S}\cap E_T\ni(x,t)\mapsto\varphi(x,t)
=
\zeta^p(x,t) \big(u(x,t)-k\big)_\pm \] become admissible in the weak formulation of \eqref{Dirichlet}, as the functions $x\mapsto\big(u(x,t)-k\big)_\pm$ vanish on $Q_{R,S}\cap S_T$ in the sense of traces for a.e. $t\in(t_o-S,t_o)$. This fact does not require any smoothness of $\partial E$ (cf.~\cite[Lemma~2.1]{GLL}).
In what follows, when we deal with integrals of $(u-k)_\pm$ across the lateral boundary $S_T$, the functions $(u-k)_\pm$ will be understood as zero outside $E_T$. Likewise, when we deal with a sub(super)-solution $u$ across the lateral boundary $S_T$, we tacitly understand it as \begin{equation*} u_k^{\pm}:=\left\{ \begin{array}{cl} k\pm(u-k)_\pm\quad&\text{ in }Q_{R,S}\cap E_T,\\[5pt] k\quad&\text{ in }Q_{R,S}\setminus E_T, \end{array}\right. \end{equation*} for $k$ satisfying \eqref{Eq:k-restriction}.
Upon using the above test function $\varphi$ in the boundary version of \eqref{Eq:int-form}, we have the following energy estimate; the detailed calculation is analogous to the one for \cite[(2.7)]{DB-82}.
\begin{proposition}\label{Prop:2:1}
Let $u$ be a local weak sub(super)-solution to \eqref{Dirichlet} with \eqref{Eq:1:2} in $E_T$.
There exists a constant $\gamma (C_o,C_1,p)>0$, such that
for all cylinders $Q_{R,S}=K_R(x_o)\times (t_o-S,t_o)$ with the vertex $(x_o,t_o)\in S_T$,
every $k\in\mathbb{R}$ satisfying \eqref{Eq:k-restriction}, and every non-negative, piecewise smooth cutoff function
$\zeta$ vanishing on $\partial K_{R}(x_o)\times(t_o-S,t_o)$, there holds \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\Big\{\int_{K_R(x_o)\times\{t\}}
\zeta^p (u-k)_\pm^2\,\mathrm{d}x +\Phi_{\pm}(k, t_o-S,t,\zeta)\Big\}\\
&\quad+
\iint_{Q_{R,S}}\zeta^p|D(u-k)_\pm|^p\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}}
\Big[
(u-k)^{p}_\pm|D\zeta|^p + (u-k)_\pm^2|\partial_t\zeta^p|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad
+\int_{K_R(x_o)\times \{t_o-S\}} \zeta^p (u-k)_\pm^2\,\mathrm{d}x, \end{align*} where \begin{align*}
\Phi_{\pm}(k, t_o-S,t,\zeta)=&-\int_{K_R(x_o)\times\{\tau\}}\nu(x,\tau)\copy\chibox_{[u\le0]}[\pm(u-k)_\pm\zeta^p]\,\mathrm{d}x\Big|_{t_o-S}^t\\ &+\int_{t_o-S}^t\int_{K_R(x_o)}\nu(x,\tau)\copy\chibox_{[u\le0]}\partial_t[\pm(u-k)_\pm\zeta^p]\,\mathrm{d}x\mathrm{d}\tau \end{align*} and $\nu(x,t)$ is a selection out of $[0,\nu]$ for $u(x,t)=0$, and $\nu(x,t)=\nu$ for $u(x,t)<0$. \end{proposition}
Based on the energy estimate in Proposition~\ref{Prop:2:1}, we may derive
two tailored versions. Note that the functions $\Phi_\pm$ carry information near the singularity of $\beta(\cdot)$. The first tailored version examines the case when $\Phi_\pm=0$: the singularity disappears, and we recover the energy estimate for the parabolic $p$-Laplacian \eqref{Eq:p-Laplace}. \begin{proposition}\label{Prop:2:2} Let the hypotheses in Proposition~\ref{Prop:2:1} hold. If, in addition, $k\ge0$ in the case of sub-solutions or $k\le0$ in the case of super-solutions, then for every non-negative, piecewise smooth cutoff function
$\zeta$ vanishing on $\partial_{\mathcal{P}} Q_{R,S}$, there holds \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\int_{K_R(x_o)\times\{t\}}
\zeta^p (u-k)_\pm^2\,\mathrm{d}x
+
\iint_{Q_{R,S}}\zeta^p|D(u-k)_\pm|^p\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}}
\Big[
(u-k)^{p}_\pm|D\zeta|^p + (u-k)_\pm^2|\partial_t\zeta^p|
\Big]
\,\mathrm{d}x\mathrm{d}t.
\end{align*} \end{proposition} \begin{proof} We need only notice that under the current restrictions on $k$, the functions $\Phi_{\pm}$ vanish. Note also that now the cutoff function $\zeta$ is required to vanish at the bottom level $t_o-S$. \end{proof}
The second one examines the effect of the singularity of $\beta(\cdot)$ more carefully. \begin{proposition}\label{Prop:2:3} Let the hypotheses in Proposition~\ref{Prop:2:1} hold. If, in addition, $k\ge0$ in the case of super-solutions, then for every non-negative, piecewise smooth cutoff function
$\zeta$ vanishing on $\partial_{\mathcal{P}} Q_{R,S}$, there holds \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\Big\{\int_{K_R(x_o)\times\{t\}}
\zeta^p (u-k)_-^2\,\mathrm{d}x +k\int_{K_R(x_o)\times\{t\}}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x\Big\}\\
&\quad+
\iint_{Q_{R,S}}\zeta^p|D(u-k)_-|^p\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}}
\Big[
(u-k)^{p}_-|D\zeta|^p + (u-k)_-^2 |\partial_t\zeta^p|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad+\iint_{Q_{R,S}}\nu\copy\chibox_{[u\le0]}(k-u)|\partial_t\zeta^p|\,\mathrm{d}x\mathrm{d}t.
\end{align*} \end{proposition} \begin{proof} Let us omit the reference to $x_o$ for simplicity. We only need to calculate \begin{align*}
\Phi_{-}&(k, t_o-S,t,\zeta)=\int_{K_R}\nu(x,\tau)\copy\chibox_{[u\le0]}[(u-k)_-\zeta^p]\,\mathrm{d}x\Big|_{t_o-S}^t\\ &\quad-\iint_{Q_{R,S}}\nu(x,\tau)\copy\chibox_{[u\le0]}\partial_t[(u-k)_-\zeta^p]\,\mathrm{d}x\mathrm{d}\tau\\
&=k\int_{K_R}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x\Big|_{t_o-S}^t+\int_{K_R}\nu u_-\zeta^p\,\mathrm{d}x\Big|_{t_o-S}^t\\ &\quad-\iint_{Q_{R,S}}\nu \partial_tu_-\zeta^p\,\mathrm{d}x\mathrm{d}\tau
-\iint_{Q_{R,S}}\nu \copy\chibox_{[u\le0]}(u-k)_-\partial_t\zeta^p\,\mathrm{d}x\mathrm{d}\tau\\ &\ge k\int_{K_R\times\{t\}}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x -\iint_{Q_{R,S}}\nu \copy\chibox_{[u\le0]}(k-u)\partial_t\zeta^p\,\mathrm{d}x\mathrm{d}\tau. \end{align*} Then substituting it back to the energy estimate in Proposition~\ref{Prop:2:1}, we conclude. \end{proof}
\subsection{De Giorgi type lemmas}\label{S:DG}
For $(x_o,t_o)\in S_T$ and a cylinder $Q_{R,S}=K_{R}(x_o)\times(t_o-S,t_o)$ as in Section~\ref{S:energy-1}, we introduce the numbers $\mu^{\pm}$ and $\omega$ satisfying \begin{equation*}
\mu^+\ge\operatornamewithlimits{ess\,sup}_{Q_{R,S}} u,
\quad
\mu^-\le\operatornamewithlimits{ess\,inf}_{Q_{R,S}} u,
\quad
\omega\ge\mu^+ - \mu^-. \end{equation*} Let $\widetilde\theta\ge\theta>0$ be parameters to be determined. The cylinders $Q_\varrho(\theta)$ and $Q_\varrho(\widetilde\theta)$ are coaxial with $Q_{R,S}$ and with the same vertex $(x_o,t_o)$; moreover, we will impose that \[ \varrho<R, \quad\widetilde\theta(8\varrho)^p<S,\quad\text{ such that }\quad Q_\varrho(\theta)\subset Q_\varrho(\widetilde\theta)\subset Q_{R,S}. \]
We now present the first De Giorgi type lemma. \begin{lemma}\label{Lm:DG:1} Let $u$ be a weak super-solution to \eqref{Dirichlet} with \eqref{Eq:1:2} in $E_T$. For $\xi\in(0,1)$, set $\theta=(\xi\omega)^{2-p}$ and suppose that $$\mu^-+\xi\omega\le \inf_{Q_{R,S}\cap S_T}g.$$ There exists a constant $c_o\in(0,1)$ depending only on the data, such that if \[
|[u\le\mu^-+\xi\omega]\cap Q_{\varrho}(\theta)|\le c_o(\xi\omega)^{\frac{N+p}p}|Q_{\varrho}(\theta)|, \] then \[ u\ge \mu^-+\tfrac12\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta). \] \end{lemma} \begin{proof} Let us assume $(x_o,t_o)=(0,0)$. In order to use the energy estimates in Propositions~\ref{Prop:2:2} -- \ref{Prop:2:3}, we set \begin{align}\label{choices:k_n}
\left\{
\begin{array}{c}
\displaystyle k_n=\mu^-+\frac{\xi\omega}2+\frac{\xi\omega}{2^{n+1}},\quad \tilde{k}_n=\frac{k_n+k_{n+1}}2,\\[5pt]
\displaystyle \varrho_n=\frac{\varrho}2+\frac{\varrho}{2^{n+1}},
\quad\tilde{\varrho}_n=\frac{\varrho_n+\varrho_{n+1}}2,\\[5pt]
\displaystyle K_n=K_{\varrho_n},\quad \widetilde{K}_n=K_{\tilde{\varrho}_n},\\[5pt]
\displaystyle Q_n=Q_{\varrho_n}(\theta),\quad
\widetilde{Q}_n=Q_{\tilde\varrho_n}(\theta).
\end{array}
\right. \end{align} Recall that $Q_{\varrho_n}(\theta)=K_n\times(-\theta\varrho_n^p,0)$ and $Q_{\tilde\varrho_n}(\theta)=\widetilde{K}_n\times(-\theta\tilde{\varrho}_n^p,0)$. Introduce the cutoff function $0\le\zeta\le 1$ vanishing on the parabolic boundary of $Q_{n}$ and equal to identity in $\widetilde{Q}_{n}$, such that \begin{equation*}
|D\zeta|\le \gamma\frac{2^n}{\varrho}
\quad\text{and}\quad
|\partial_t\zeta|\le \gamma\frac{2^{pn}}{\theta\varrho^p}. \end{equation*} In this setting, the energy estimates in Propositions~\ref{Prop:2:2} -- \ref{Prop:2:3} combined will yield that \begin{align*}
\operatornamewithlimits{ess\,sup}_{-\theta\tilde{\varrho}_n^p<t<0}
&\int_{\widetilde{K}_n} (u-\tilde{k}_n)_-^2\,\mathrm{d}x
+
\iint_{\widetilde{Q}_n}|D(u-\tilde{k}_n)_-|^p \,\mathrm{d}x\mathrm{d}t\\
&\qquad\le
\gamma \frac{2^{pn}}{\varrho^p}(\xi\omega)^{p-1}|A_n|, \end{align*} where \begin{equation*}
A_n=\big[u<k_n\big]\cap Q_n. \end{equation*} Now setting $0\le\phi\le1$ to be a cutoff function which vanishes on the parabolic boundary of $\widetilde{Q}_n$ and equals the identity in $Q_{n+1}$, an application of the H\"older inequality and the Sobolev imbedding \cite[Chapter I, Proposition~3.1]{DB} gives that \begin{align*}
\frac{\xi\omega}{2^{n+3}}
|A_{n+1}|
&\le
\iint_{\widetilde{Q}_n}\big(u-\tilde{k}_n\big)_-\phi\,\mathrm{d}x\mathrm{d}t\\
&\le
\bigg[\iint_{\widetilde{Q}_n}\big[\big(u-\tilde{k}_n\big)_-\phi\big]^{p\frac{N+2}{N}}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{N}{p(N+2)}}|A_n|^{1-\frac{N}{p(N+2)}}\\
&\le \gamma
\bigg[\iint_{\widetilde{Q}_n}\big|D\big[(u-\tilde{k}_n)_-\phi\big]\big|^p\,
\mathrm{d}x\mathrm{d}t\bigg]^{\frac{N}{p(N+2)}}\\
&\quad\
\times\bigg[\operatornamewithlimits{ess\,sup}_{-\theta\tilde{\varrho}_n^p<t<0}
\int_{\widetilde{K}_n}\big(u-\tilde{k}_n\big)^{2}_-\,\mathrm{d}x\bigg]^{\frac{1}{N+2}}
|A_n|^{1-\frac{N}{p(N+2)}}\\
&\le
\gamma\bigg[(\xi\omega)^{p-1}\frac{2^{pn}}{\varrho^p}\bigg]^{\frac{N+p}{p(N+2)}}
|A_n|^{1+\frac{1}{N+2}}. \end{align*}
In the second-to-last line we used the above energy estimate. In terms of $ Y_n=|A_n|/|Q_n|$, this can be rewritten as \begin{equation*}
Y_{n+1}
\le
\frac{\gamma C^n}{(\xi\omega)^{\frac{N+p}{p(N+2)}}} Y_n^{1+\frac{1}{N+2}}, \end{equation*} for a positive constant $\gamma$ depending only on the data and a constant $C=C(p,N)$. Hence, by \cite[Chapter I, Lemma~4.1]{DB}, there exists a positive constant $c_o$ depending only on the data, such that $Y_n\to0$ if we require that $Y_o\le c_o(\xi\omega)^{\frac{N+p}p}$, which amounts to \begin{equation*}
|A_o|=\big|\big[u<k_o\big]\cap Q_o\big|
=
\Big|\Big[u<\mu^-+\xi\omega\Big]\cap Q_{\varrho}(\theta)\Big|
\le
c_o(\xi\omega)^{\frac{N+p}p}\big| Q_{\varrho}(\theta)\big|. \end{equation*} Since $Y_n\to 0$ as $n\to\infty$, we have \begin{equation*}
\Big|\Big[u<\mu^-+\tfrac12 \xi\omega \Big]\cap Q_{\frac12 \varrho}(\theta)\Big|
=
0. \end{equation*} This concludes the proof of the lemma. \end{proof}
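\begin{remark}\upshape For the reader's convenience, we recall the lemma on fast geometric convergence \cite[Chapter I, Lemma~4.1]{DB} in the form used above: if $\{Y_n\}_{n\ge0}$ is a sequence of positive numbers satisfying $Y_{n+1}\le Cb^nY_n^{1+\epsilon}$ for constants $C,\,\epsilon>0$ and $b>1$, then $Y_n\to0$ as $n\to\infty$, provided $Y_o\le C^{-\frac1{\epsilon}}b^{-\frac1{\epsilon^2}}$. The same lemma will be invoked again in the proof of Lemma~\ref{Lm:DG:3} below. \end{remark}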
The second De Giorgi type lemma is as follows. \begin{lemma}\label{Lm:DG:2} Let the hypotheses in Lemma~\ref{Lm:DG:1} hold. There exists $\xi\in(0,1)$ depending only on the data and $\alpha_*$, such that if \[ \iint_{Q_{\varrho}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t
\le\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)| \] and \[
|\mu^-|\le\xi\omega, \] then \[ u\ge\mu^-+\tfrac14\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta). \] \end{lemma} \begin{proof} We intend to use the energy estimate in Proposition~\ref{Prop:2:3} with $Q_{R,S}=Q_r(\theta)$ for $\frac12\varrho\le r\le \varrho$, and with $k=\mu^-+\xi\omega\ge0$. To this end, let us first estimate the last integral on the right-hand side of
the energy estimate via the given measure theoretical information: \begin{align*}
\iint_{Q_{r}(\theta)}&\nu\copy\chibox_{[u\le0]}(k-u) |\partial_t\zeta^p|\,\mathrm{d}x\mathrm{d}t\\
&\le(k-\mu^-)\iint_{Q_{r}(\theta)}\nu\copy\chibox_{[u\le0]} |\partial_t\zeta^p|\,\mathrm{d}x\mathrm{d}t\\
&\le\xi\omega \|\partial_t\zeta^p\|_{\infty} \iint_{Q_{\varrho}(\theta)}\nu\copy\chibox_{[u\le0]} \,\mathrm{d}x\mathrm{d}t\\
&\le (\xi\omega)^2 \|\partial_t\zeta^p\|_{\infty} |[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12 \varrho}(\theta)|\\
&\le 4 \|\partial_t\zeta^p\|_{\infty} \iint_{Q_{r}(\theta)}[u-(\mu^-+\xi\omega)]^2_-\,\mathrm{d}x\mathrm{d}t. \end{align*} This means that the integral can be combined with a similar term involving $\partial_t\zeta^p$ on the right-hand side of the energy estimate in Proposition~\ref{Prop:2:3}. Consequently, we obtain an energy estimate of $(u-k)_-$ from which the theory of parabolic $p$-Laplacian applies, cf.~Lemmas~\ref{Lm:reduc-osc-1} -- \ref{Lm:reduc-osc-2} and Appendix~\ref{A:1}. Therefore we obtain a constant $\xi$ determined only by the data and $\alpha_*$ of \eqref{geometry}, such that \[ u\ge\mu^-+\tfrac14\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta). \] The proof is now completed. \end{proof}
\begin{remark}\label{Rmk:2:1}\upshape Before heading to the reduction of boundary oscillation, we try to shed some light on the roles of Lemmas~\ref{Lm:DG:1} -- \ref{Lm:DG:2} as a preparation for the technically involved Section~\ref{S:bdry}. Both lemmas are examining the {\it measure information} needed in order for the degeneracy of $\Delta_p$ to dominate the singularity of $\beta(\cdot)$. As such the second term on the left-hand side of the energy estimate in Proposition~\ref{Prop:2:3} is discarded and
the time scaling $\theta=(\xi\omega)^{2-p}$ is employed to accommodate the degeneracy of $\Delta_p$;
the reduction of oscillation in this case will be demonstrated in Section~\ref{S:reduc-osc-1}.
Attention is called to the difference between Section~\ref{S:reduc-osc-1} and Section~\ref{S:reduc-osc-4}, where the same time scaling is invoked. Indeed, {\it pointwise information} is imposed in Section~\ref{S:reduc-osc-4} to avoid the singularity of $\beta(\cdot)$ completely, such that the case falls into the scope of the theory for the parabolic $p$-Laplacian in \cite{DB}. On the other hand, violation of the measure information in Lemmas~\ref{Lm:DG:1} -- \ref{Lm:DG:2} means that the singularity of $\beta(\cdot)$ overtakes the degeneracy of $\Delta_p$ and will yield an estimate from below on $|[u\le0]|$ (see \eqref{Eq:opposite:3}), which bears the singularity of $\beta(\cdot)$. This lower bound of $|[u\le0]|$ in turn gives a lower bound for the second term on the left-hand side of the energy estimate in Proposition~\ref{Prop:2:3}; in this case the first term on the left-hand side of the energy estimate in Proposition~\ref{Prop:2:3} is discarded (cf.~Lemma~\ref{Lm:energy}) and consequently, the ``longer'' time scaling $\widetilde\theta=(\delta\xi\omega)^{1-p}$ will be needed to accommodate the singularity, cf.~Remark~\ref{Rmk:3:2}. The reduction of oscillation in this case will be taken up in Section~\ref{S:reduc-osc-2}. \end{remark}
\section{The Dirichlet problem: reduction of boundary oscillation}\label{S:bdry} Let $u$ be a solution to the Dirichlet problem \eqref{Dirichlet}. Fix $(x_o,t_o)\in S_T$ and define the numbers $\mu^\pm$ and $\omega$ as in Section~\ref{S:DG} with $Q_{R,S}$ replaced by $$\widetilde{Q}_o:=K_{8\varrho}(x_o)\times(t_o-\varrho^{p-1}, t_o),\quad \text{for some }\varrho\in(0,\tfrac18\bar\varrho],$$ where $\bar\varrho$ is defined in \eqref{geometry}, which we may assume to be so small that $t_o\ge (\frac18\bar\varrho)^{p-1}$. We now state the main result of this section, which describes the oscillation decay of $u$, along shrinking cylinders with the same vertex $(x_o,t_o)$ fixed on the lateral boundary $S_T$.
\begin{proposition}\label{Prop:bdry:D} Suppose the hypotheses in Theorem~\ref{Thm:1:1} hold true. There exists $\sigma\in(0,1)$ depending only on the data and $\alpha_*$, such that if $g$ verifies \[
\operatornamewithlimits{osc}_{K_{r}(x_o)\times(t_o-r^{p-1}, t_o)}g\le \frac{C_g}{|\ln r|^{\lambda}}\quad\text{ for all }r\in(0,\bar\varrho), \] with $C_g>0$ and $\lambda>\sigma$, then for some $\gamma>1$ depending only on the data, $\alpha_*$ and $C_g$, \[ \operatornamewithlimits{ess\,osc}_{Q_r(\omega^{2-p})}u\le\gamma\max\{1,\omega\}\big(1-\ln\min\{1,\omega\}\big)^{\frac{\sigma}2}\Big(\ln\frac{\bar\varrho}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0,\bar\varrho), \] where $Q_r(\omega^{2-p})=K_r(x_o)\times(t_o-\omega^{2-p}r^p,t_o)$. \end{proposition} The proof of Proposition~\ref{Prop:bdry:D} will be expanded on in Sections~\ref{S:reduc-osc-1} -- \ref{S:bdry-modulus}. To start with, we introduce the following intrinsic cylinders \begin{equation*} \left\{ \begin{array}{ll} Q_{\varrho}(\theta)=K_\varrho(x_o)\times(t_o-\theta\varrho^p,t_o),\quad\theta=(\xi\omega)^{2-p},\\[5pt] Q_{\varrho}(\widetilde\theta)=K_\varrho(x_o)\times(t_o-\widetilde\theta\varrho^p,t_o),\quad \widetilde\theta=(\delta\xi\omega)^{1-p}, \end{array}\right. \end{equation*}
where $\delta=\bar\xi\omega^q$ for some positive $\bar\xi$ and $q$, which together with the positive $\xi$ are
to be determined in terms of the data and $\alpha_*$. We may assume with no loss of generality that \begin{equation}\label{Eq:start-cylinder} \widetilde\theta(8\varrho)^p\le\varrho^{p-1}, \quad\text{ such that }\quad\operatornamewithlimits{ess\,osc}_{Q_{8\varrho}(\widetilde{\theta})}u\le \omega, \end{equation} since otherwise, rearranging $\widetilde\theta(8\varrho)^p>\varrho^{p-1}$ with $\widetilde\theta=(\delta\xi\omega)^{1-p}$ and $\delta=\bar\xi\omega^q$, we would have \begin{equation}\label{Eq:rho-control} 8^{\frac{p}{1-p}}\xi\bar\xi\omega^{1+q}\le\varrho^{\frac1{p-1}}. \end{equation} Moreover, one of the following must hold:
\begin{equation}\label{Eq:mu-pm} \mu^+-\tfrac14\omega\ge\tfrac14\omega\quad\text{ or }\quad \mu^-+\tfrac14\omega\le-\tfrac14\omega. \end{equation} Let us take for instance the first one, i.e. \eqref{Eq:mu-pm}$_1$, as the other one can be treated analogously.
Next, we observe that one of the following two cases must hold: \begin{equation}\label{Eq:alternative} \mu^+-\tfrac14\omega\ge\sup_{\widetilde{Q}_o\cap S_T}g\quad\text{ or }\quad \mu^-+\tfrac14\omega\le\inf_{\widetilde{Q}_o\cap S_T}g. \end{equation} Otherwise, the violation of both inequalities implies that \begin{equation}\label{Eq:g-control} \omega\le 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g. \end{equation}
If \eqref{Eq:alternative}$_1$ holds, then thanks to \eqref{Eq:mu-pm}$_1$, the function $(u-k)_+$ with $k=\mu^+-2^{-n}\omega$ will satisfy the energy estimate in Proposition~\ref{Prop:2:2} for all $n\ge2$. In this case, the reduction of oscillation is achieved in Section~\ref{S:reduc-osc-4}, cf.~Lemma~\ref{Lm:reduc-osc-1}. Consequently we need only to examine the case when \eqref{Eq:alternative}$_2$ holds. This case splits into two sub-cases. In Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3} we deal with the situation when $|\mu^-|$ is near zero; whereas the opposite situation is treated in Section~\ref{S:reduc-osc-4}, cf.~Lemma~\ref{Lm:reduc-osc-2}.
In summary, Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3} will be devoted to reducing the oscillation of $u$ near the set $[u\approx0]$, where the singularity of $\beta(u)$ appears; Section~\ref{S:reduc-osc-4} treats the cases when the singularity of $\beta(u)$ is avoided, and hence the behavior of $u$ resembles the one to the parabolic $p$-Laplacian \eqref{Eq:p-Laplace} and the theory in \cite{DB} applies.
\subsection{Reduction of oscillation near zero}\label{S:reduc-osc-1} Let us deal with the case when \eqref{Eq:alternative}$_2$ holds. Throughout Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3}, we always assume that \begin{equation}\label{Eq:mu-minus}
|\mu^-|\le\tfrac12\delta\xi\omega\quad\text{ where }\delta=\bar\xi\omega^q, \end{equation} for some positive parameters $\{\xi,\,\bar\xi,\, q\}$ to be determined in terms of the data and $\alpha_*$ only. When this restriction \eqref{Eq:mu-minus} does not hold, the case will be examined in Section~\ref{S:reduc-osc-4}. In Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3}, we will work with $u$ as a super-solution and reduce its oscillation near its infimum $\mu^-$
in a smaller, quantified cylinder with the same vertex $(x_o,t_o)$.
To this end, we turn our attention to Lemma~\ref{Lm:DG:1} and Lemma~\ref{Lm:DG:2}. Suppose $\xi$ is determined in Lemma~\ref{Lm:DG:2} in terms of the data and $\alpha_*$, assume $\xi<\frac14$ with no loss of generality, and let $\theta=(\xi\omega)^{2-p}$. If \[
|[u\le\mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|\le c_o(\xi\omega)^{\frac{N+p}p}|Q_{\frac12\varrho}(\theta)|, \] then by Lemma~\ref{Lm:DG:1} we have \begin{equation}\label{Eq:lower-bd-1} u\ge \mu^-+\tfrac14\xi\omega\quad\text{ a.e. in }Q_{\frac14\varrho}(\theta). \end{equation} Likewise, if \begin{equation}\label{Eq:measure-2}
\iint_{Q_{\varrho}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t\le\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)| \end{equation} then by Lemma~\ref{Lm:DG:2} we have \begin{equation}\label{Eq:lower-bd-2} u\ge \mu^-+\tfrac14\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta), \end{equation}
since $|\mu^-|\le\xi\omega$ always holds true in this section. Hence either \eqref{Eq:lower-bd-1} or \eqref{Eq:lower-bd-2} yields a reduction of oscillation \begin{equation}\label{Eq:reduc-osc-1} \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho}(\theta)}u\le(1-\tfrac14\xi)\omega. \end{equation}
\subsection{Reduction of oscillation near zero continued}\label{S:reduc-osc-2} In this section, we continue to examine the situation when the measure condition in Lemma~\ref{Lm:DG:1} is violated: \begin{equation}\label{Eq:opposite:1}
|[u\le\mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|> c_o(\xi\omega)^{\frac{N+p}p}|Q_{\frac12\varrho}(\theta)|, \end{equation} and when the condition in Lemma~\ref{Lm:DG:2} is violated: \begin{equation}\label{Eq:opposite:2}
\iint_{Q_{\varrho}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t>\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|.
\end{equation}
Combining \eqref{Eq:opposite:1} and \eqref{Eq:opposite:2}, we obtain that for all $r\in[2\varrho, 8\varrho]$, \begin{equation}\label{Eq:opposite:3} \begin{aligned}
\iint_{Q_{r}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t&\ge\iint_{Q_{\varrho}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t\\
&\ge\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|\\
&\ge c_o(\xi\omega)^b|Q_{\frac12\varrho}(\theta)|\ge\widetilde{\gamma}(\xi\omega)^b|Q_r(\theta)|, \end{aligned} \end{equation} where $\widetilde{\gamma}=c_o16^{-N-p}$ and $b=1+\tfrac{N+p}p$.
To proceed, let $\bar\delta\in(\delta,1)$ be a free parameter and set $\bar\theta:=(\bar\delta\xi\omega)^{1-p}$. Recall that $\widetilde{\theta}=(\delta\xi\omega)^{1-p}$, $\theta=(\xi\omega)^{2-p}$, and we have assumed $\widetilde{\theta}(8\varrho)^p<\varrho^{p-1}$ in \eqref{Eq:start-cylinder}. Therefore \[ Q_r(\theta)\subset Q_r(\bar{\theta})\subset Q_r(\widetilde{\theta})\subset \widetilde{Q}_o \quad\text{ for any } r\in[2\varrho, 8\varrho]. \] The estimate \eqref{Eq:opposite:3} implies that there exists $t_*\in [t_o-\theta r^p,t_o]$, such that \begin{equation}\label{Eq:time:1}
\int_{K_{r}\times\{t_*\}}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\ge\widetilde{\gamma}(\xi\omega)^b|K_r|. \end{equation}
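The existence of such a $t_*$ again follows from an averaging argument: if we had $\int_{K_{r}\times\{t\}}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x<\widetilde{\gamma}(\xi\omega)^b|K_r|$ for a.e.~$t\in(t_o-\theta r^p,t_o)$, then an integration in time would give \begin{equation*} \iint_{Q_{r}(\theta)}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\mathrm{d}t<\widetilde{\gamma}(\xi\omega)^b|K_r|\,\theta r^p=\widetilde{\gamma}(\xi\omega)^b|Q_r(\theta)|, \end{equation*} contradicting \eqref{Eq:opposite:3}.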
Observe also that for any $t\in[t_o-\bar{\theta} r^p, t_o]$ and any $\bar\delta\in(\delta,1)$, there holds
\begin{equation}\label{Eq:time:2} \begin{aligned}
|K_r|&\ge|[u\le\mu^-+\bar\delta\xi\omega]\cap K_r|\\ &\ge(\bar\delta\xi\omega)^{-2}\int_{K_r\times\{t\}}[u-(\mu^-+\bar\delta\xi\omega)]_-^2\,\mathrm{d}x. \end{aligned} \end{equation} Combining \eqref{Eq:time:1} and \eqref{Eq:time:2} gives that \begin{equation*} \begin{aligned} &\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta} r^p<t<t_o} \int_{K_{r}\times\{t\}}\nu\copy\chibox_{[u\le0]}\,\mathrm{d}x\\ &\quad\ge \widetilde{\gamma}(\xi\omega)^b (\bar\delta\xi\omega)^{-2}\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta} r^p<t<t_o}\int_{K_r\times\{t\}}[u-(\mu^-+\bar\delta\xi\omega)]_-^2\,\mathrm{d}x. \end{aligned} \end{equation*} The above analysis together with Proposition~\ref{Prop:2:3} yields the following energy estimate. \begin{lemma}\label{Lm:energy} Let $u$ be a weak super-solution to \eqref{Dirichlet} with \eqref{Eq:1:2} in $E_T$. Let \eqref{Eq:mu-pm}$_1$, \eqref{Eq:alternative}$_2$, \eqref{Eq:opposite:1} and \eqref{Eq:opposite:2} hold true. Denoting $b:=1+\tfrac{N+p}p$ and setting $k=\mu^-+\bar\delta\xi\omega$ with $\bar\delta\in(\delta,1)$, there exists a positive constant $\gamma$ depending only on the data, such that for all $\sigma\in(0,1)$ and all $r\in[2\varrho, 8\varrho]$ we have \begin{align*} k(\bar\delta&\xi\omega)^{-2}(\xi\omega)^b\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta}(\sigma r)^p<t<t_o}\int_{K_{\sigma r}\times\{t\}}(u-k)_-^2\,\mathrm{d}x\\
&\quad+\iint_{Q_{\sigma r}(\bar{\theta})}|D(u-k)_-|^p\,\mathrm{d}x\mathrm{d}t\\ &\le\frac{\gamma}{(1-\sigma)^p r^p}\iint_{Q_{r}(\bar{\theta})}
(u-k)^{p}_- \mathrm{d}x\mathrm{d}t\\
&\quad+ \frac{\gamma}{(1-\sigma) \bar{\theta} r^p}\iint_{Q_{r}(\bar{\theta})}(u-k)_-^2\,\mathrm{d}x\mathrm{d}t\\
&\quad+\frac{\gamma\nu}{(1-\sigma) \bar{\theta} r^p}\iint_{Q_{r}(\bar{\theta})}(u-k)_-\,\mathrm{d}x\mathrm{d}t, \end{align*} provided that \[
|\mu^-|\le\delta\xi\omega. \] \end{lemma}
\begin{remark}\upshape Note that the restriction on $|\mu^-|$ in the last line of Lemma~\ref{Lm:energy} has been imposed to ensure that the level $k\ge0$, as required in Proposition~\ref{Prop:2:3}. Recall that such a restriction has always been assumed in this section, cf.~\eqref{Eq:mu-minus}. Note carefully that the number $\bar\delta\in(\delta,1)$ in Lemma~\ref{Lm:energy} is a free parameter, whereas $\delta$ is yet to be determined.
\end{remark}
\begin{remark}\label{Rmk:3:2}\upshape When we generate the energy estimate of Lemma~\ref{Lm:energy}, the first term of the energy estimate in Proposition~\ref{Prop:2:3} has been discarded. Hence the singular effect of $\beta(\cdot)$ overtakes the degeneracy brought by the $p$-Laplacian in Lemma~\ref{Lm:energy}. The machinery of De Giorgi has to be reproduced to reflect this change. This is the reason why the time scaling will be changed from $\theta=(\xi\omega)^{2-p}$ to $\widetilde{\theta}=(\delta\xi\omega)^{1-p}$ in the following two lemmas. \end{remark}
Based on the energy estimate in Lemma~\ref{Lm:energy}, we then show that the measure density of the set $[u\approx\mu^-]$ can be made arbitrarily small, in a way that favors its later application.
The geometric condition \eqref{geometry} on the boundary $\partial E$ comes into play here. \begin{lemma}\label{Lm:shrink:1} Suppose the hypotheses in Lemma~\ref{Lm:energy} hold. There exists a positive integer $j_*$ depending only on the data and $\alpha_*$, such that for any $m\in\mathbb{N}$ we have either \[
|\mu^-|>\frac{\xi\omega}{2^{mj_*}}, \] or \begin{equation*}
\Big|\Big[
u\le\mu^-+\frac{\xi \omega}{2^{mj_*}}\Big]\cap Q_\varrho(\widetilde{\theta})\Big|
\le\frac1{2^m}|Q_\varrho(\widetilde{\theta})|,\quad\text{ where }\widetilde{\theta}=\Big(\frac{\xi\omega}{2^{mj_*}}\Big)^{1-p}. \end{equation*} \end{lemma} \begin{proof}
We employ the energy estimate of Lemma~\ref{Lm:energy} in $Q_{2\varrho}(\widetilde{\theta})$
with $\sigma=\tfrac12$ and the levels \begin{equation*}
k_j:=\mu^-+\frac{\xi \omega}{2^{j}},\quad j=0,1,\cdots, j_*. \end{equation*}
Restricting $|\mu^-|\le 2^{-j_*}\xi\omega$ implies that $k_j\ge0$ for all $j=0,1,\cdots, j_*$. Therefore, assuming $j_*$ has been chosen and $m$ is fixed for now, and taking into account the definition of $\widetilde{\theta}(j_*, m)$, the energy estimate in Lemma~\ref{Lm:energy} yields that \begin{equation*} \begin{aligned}
\iint_{Q_\varrho(\widetilde{\theta})}|D(u-k_j)_-|^p\,\mathrm{d}x\mathrm{d}t&\le\frac{\gamma}{\varrho^p}\bigg(\frac{\xi \omega}{2^j}\bigg)^p\bigg[1+\frac1{\widetilde{\theta}}\bigg(\frac{\xi\omega}{2^j}\bigg)^{1-p}\bigg] |A_{j,2\varrho}|\\
&\le\frac{\gamma}{\varrho^p}\bigg(\frac{\xi \omega}{2^j}\bigg)^p|A_{j,2\varrho}|, \end{aligned} \end{equation*} where $ A_{j,2\varrho}:= \big[u<k_{j}\big]\cap Q_{2\varrho}(\widetilde{\theta})$. Next, we apply \cite[Chapter I, Lemma 2.2]{DB} slicewise to $u(\cdot,t)$ for $t\in(-\widetilde{\theta}\varrho^p,0]$
over the cube $K_\varrho$, for levels $k_{j+1}<k_{j}$. Taking into account the measure theoretical information derived from \eqref{geometry}: \begin{equation*}
\Big|\Big[u(\cdot, t)>\mu^-+\xi \omega\Big]\cap K_{\varrho}\Big|\ge\alpha_* |K_\varrho|
\qquad\mbox{for all $t\in(-\widetilde{\theta}\varrho^p,0]$,} \end{equation*} this gives \begin{align*}
&(k_j-k_{j+1})\big|\big[u(\cdot, t)<k_{j+1} \big]
\cap K_{\varrho}\big|
\\
&\qquad\le
\frac{\gamma \varrho^{N+1}}{\big|\big[u(\cdot, t)>k_{j}\big]\cap K_{\varrho}\big|}
\int_{\{ k_{j+1}<u(\cdot,t)<k_{j}\} \cap K_{\varrho}}|Du(\cdot,t)|\,\mathrm{d}x\\
&\qquad\le
\frac{\gamma\varrho}{\alpha_*}
\bigg[\int_{\{k_{j+1}<u(\cdot,t)<k_{j}\}\cap K_{\varrho}}|Du(\cdot,t)|^p\,\mathrm{d}x\bigg]^{\frac1p}
\big|\big[ k_{j+1}<u(\cdot,t)<k_{j}\big]\cap K_{\varrho}\big|^{1-\frac1p}
\\
&\qquad=
\frac{ \gamma\varrho}{\alpha_*}
\bigg[\int_{K_{\varrho}}|D(u-k_j)_-(\cdot,t)|^p\,\mathrm{d}x\bigg]^{\frac1p}
\big[ |A_{j,\varrho}(t)|-|A_{j+1,\varrho}(t)|\big]^{1-\frac1p}. \end{align*} Here we used in the last line the notation $ A_{j,\varrho}(t):= \big[u(\cdot,t)<k_{j}\big]\cap K_{\varrho}$. We now integrate the last inequality with respect to $t$ over $(-\widetilde{\theta}\varrho^p,0]$ and apply H\"older's inequality in time. With the abbreviation $A_{j,\varrho}=[u<k_j]\cap Q_\varrho(\widetilde{\theta})$ this procedure leads to \begin{align*}
\frac{\xi \omega}{2^{j+1}}\big|A_{j+1,\varrho}\big|
&\le
\frac{\gamma\varrho}{\alpha_*}\bigg[\iint_{Q_\varrho(\widetilde{\theta})}|D(u-k_j)_-|^p\,\mathrm{d}x\mathrm{d}t\bigg]^\frac1p
\big[|A_{j,\varrho}|-|A_{j+1,\varrho}|\big]^{1-\frac{1}p}\\
&\le
\gamma \frac{\xi \omega}{2^j}|A_{o,2\varrho}|^{\frac1p}\big[|A_{j,\varrho}|-|A_{j+1,\varrho}|\big]^{1-\frac{1}p}. \end{align*}
Here the constant $\gamma$ in the last line has absorbed $\alpha_*$.
Now take the power $\frac{p}{p-1}$ on both sides of the above inequality to obtain \[
\big|A_{j+1,\varrho}\big|^{\frac{p}{p-1}}
\le
\gamma|A_{o,2\varrho}|^{\frac1{p-1}}\big[|A_{j,\varrho}|-|A_{j+1,\varrho}|\big]. \] Add these inequalities from $0$ to $j_*-1$ to obtain \[
j_* |A_{j_*,\varrho}|^{\frac{p}{p-1}}\le\gamma|A_{o,2\varrho}|^{\frac1{p-1}}|A_{o,\varrho}|. \] From this we conclude \[
|A_{j_*,\varrho}|\le\frac{\gamma}{j_*^{\frac{p-1}p}}|A_{o,2\varrho}|^{\frac1{p}}|A_{o,\varrho}|^{\frac{p-1}p}. \] Similarly, replacing $\varrho$ by $2\varrho$, we have \[
|A_{j_*,2\varrho}|\le\frac{\gamma}{j_*^{\frac{p-1}p}}|A_{o,4\varrho}|^{\frac1{p}}|A_{o,2\varrho}|^{\frac{p-1}p}. \] The constant $\gamma(\text{data}, \alpha_*)$ appearing in the above two inequalities may be different, but we can take the larger one.
Suppose $j_*$ has been chosen for the moment. We may repeat the above arguments for the levels $k_j$ with $j=j_*,\cdots, 2j_*$. Restricting $|\mu^-|\le 2^{-2j_*}\xi\omega$ implies $k_j\ge0$. Hence the energy estimate can be written in the same form with such choices of $k_j$, and the geometry condition \eqref{geometry} allows us to apply \cite[Chapter I, Lemma 2.2]{DB} just as above. Consequently, this leads to \begin{align*}
&|A_{2j_*,\varrho}|\le\frac{\gamma}{j_*^{\frac{p-1}p}}|A_{j_*,2\varrho}|^{\frac1{p}}|A_{j_*,\varrho}|^{\frac{p-1}p},\\
&|A_{2j_*,2\varrho}|\le\frac{\gamma}{j_*^{\frac{p-1}p}}|A_{j_*,4\varrho}|^{\frac1{p}}|A_{j_*,2\varrho}|^{\frac{p-1}p}. \end{align*} Here the constant $\gamma$ is the same as above.
Combining the above estimates yields \[ |A_{2j_*,\varrho}|\le\frac{\gamma^2 4^{2(N+2)}}{j_*^{2\frac{p-1}p}}|Q_\varrho(\widetilde{\theta})|. \] Iterating the procedure $m$ times gives \[ |A_{mj_*,\varrho}|\le\frac{\gamma^m 4^{m(N+2)}}{j_*^{m\frac{p-1}p}}|Q_\varrho(\widetilde{\theta})|. \] Finally, we may choose $j_*$ so large that \[ \frac{\gamma 4^{(N+2)}}{j_*^{\frac{p-1}p}}\le\frac12. \] The proof is completed.
\end{proof}
Next we apply the energy estimate in Lemma~\ref{Lm:energy} to show a De Giorgi type lemma. Notice that the time scaling used here is different from the one in Lemmas~\ref{Lm:DG:1} -- \ref{Lm:DG:2}; recall Remark~\ref{Rmk:2:1} and Remark~\ref{Rmk:3:2} for an explanation. \begin{lemma}\label{Lm:DG:3} Suppose the hypotheses in Lemma~\ref{Lm:energy} hold. Let $\delta\in(0,1)$. There exists a constant $c_1\in(0,1)$ depending only on the data, such that if \[
\Big|\Big[u<\mu^-+2\delta\xi\omega\Big]\cap Q_{4\varrho}(\widetilde{\theta})\Big|\le c_1 (\xi\omega)^{b}|Q_{4\varrho}(\widetilde{\theta})|\quad\text{ where }\widetilde{\theta}=(\delta\xi\omega)^{1-p}, \]
then either $|\mu^-|>\tfrac12\delta\xi\omega$ or \[ u\ge\mu^-+\delta\xi\omega\quad\text{ a.e. in }Q_{2\varrho}(\widetilde{\theta}). \] \end{lemma} \begin{proof} In order to use the energy estimate in Lemma~\ref{Lm:energy}, we set as in \eqref{choices:k_n} for $n=0,1,\cdots,$
\begin{align*}
\left\{
\begin{array}{c}
\displaystyle k_n=\mu^-+ \delta\xi\omega+\frac{\delta\xi\omega}{2^{n}},\quad \tilde{k}_n=\frac{k_n+k_{n+1}}2,\\[5pt]
\displaystyle \varrho_n=2\varrho+\frac{\varrho}{2^{n-1}},
\quad\tilde{\varrho}_n=\frac{\varrho_n+\varrho_{n+1}}2,\\[5pt]
\displaystyle K_n=K_{\varrho_n},\quad \widetilde{K}_n=K_{\tilde{\varrho}_n},\\[5pt]
\displaystyle Q_n=Q_{\varrho_n}(\widetilde{\theta}),\quad
\widetilde{Q}_n=Q_{\tilde\varrho_n}(\widetilde{\theta}).
\end{array}
\right. \end{align*} We will use the energy estimate in Lemma~\ref{Lm:energy} with the pair of cylinders $\widetilde{Q}_n\subset Q_n$.
Now the constant $\bar\delta$ in Lemma~\ref{Lm:energy} is replaced by $(1+2^{-n})\delta$ here as indicated in the definition of $k_n$.
In this setting, enforcing $|\mu^-|\le\tfrac12\delta\xi\omega$, the energy estimate in Lemma~\ref{Lm:energy} may be rephrased as \begin{align*}
(\delta\xi\omega)^{-1}(\xi\omega)^b&\operatornamewithlimits{ess\,sup}_{-\widetilde{\theta}\tilde{\varrho}_n^p<t<0}
\int_{\widetilde{K}_n} (u-\tilde{k}_n)_-^2\,\mathrm{d}x
+
\iint_{\widetilde{Q}_n}|D(u-\tilde{k}_n)_-|^p \,\mathrm{d}x\mathrm{d}t\\
&\qquad\le
\gamma \frac{2^{pn}}{\varrho^p}(\delta\xi\omega)^{p}|A_n|, \end{align*} where \begin{equation*}
A_n=\big[u<k_n\big]\cap Q_n. \end{equation*} Now setting $0\le\phi\le1$ to be a cutoff function which vanishes on the parabolic boundary of $\widetilde{Q}_n$ and is identically one in $Q_{n+1}$, an application of the H\"older inequality and the Sobolev imbedding \cite[Chapter I, Proposition~3.1]{DB} gives that \begin{align*}
\frac{\delta\xi\omega}{2^{n+3}}
|A_{n+1}|
&\le
\iint_{\widetilde{Q}_n}\big(u-\tilde{k}_n\big)_-\phi\,\mathrm{d}x\mathrm{d}t\\
&\le
\bigg[\iint_{\widetilde{Q}_n}\big[\big(u-\tilde{k}_n\big)_-\phi\big]^{p\frac{N+2}{N}}
\,\mathrm{d}x\mathrm{d}t\bigg]^{\frac{N}{p(N+2)}}|A_n|^{1-\frac{N}{p(N+2)}}\\
&\le \gamma
\bigg[\iint_{\widetilde{Q}_n}\big|D\big[(u-\tilde{k}_n)_-\phi\big]\big|^p\,
\mathrm{d}x\mathrm{d}t\bigg]^{\frac{N}{p(N+2)}}\\
&\quad\
\times\bigg[\operatornamewithlimits{ess\,sup}_{-\widetilde{\theta}\tilde{\varrho}_n^p<t<0}
\int_{\widetilde{K}_n}\big(u-\tilde{k}_n\big)^{2}_-\,\mathrm{d}x\bigg]^{\frac{1}{N+2}}
|A_n|^{1-\frac{N}{p(N+2)}}\\
&\le
\gamma (\xi\omega)^{-\frac{b}{N+2}}(\delta\xi\omega)^{\frac1{N+2}}
\bigg[(\delta\xi\omega)^{p}\frac{2^{pn}}{\varrho^p}\bigg]^{\frac{N+p}{p(N+2)}}
|A_n|^{1+\frac{1}{N+2}}. \end{align*}
In the second-to-last line we used the above energy estimate. In terms of $ Y_n=|A_n|/|Q_n|$, this can be rewritten as \begin{equation*}
Y_{n+1}
\le
\frac{\gamma C^n}{(\xi\omega)^{\frac{b}{N+2}}}
Y_n^{1+\frac{1}{N+2}}, \end{equation*} for a constant $\gamma$ depending only on the data and with $C=C(N,p)$. Hence, by \cite[Chapter I, Lemma~4.1]{DB}, there exists a positive constant $c_1$ depending only on the data, such that $Y_n\to0$ if we require that $Y_o\le c_1(\xi\omega)^b$, which amounts to \begin{equation*}
|A_o|=\big|\big[u<k_o\big]\cap Q_o\big|
=
\Big|\Big[u<\mu^-+2 \delta\xi\omega\Big]\cap Q_{4\varrho}(\widetilde{\theta})\Big|
\le
c_1(\xi\omega)^b\big| Q_{4\varrho}(\widetilde{\theta})\big|. \end{equation*} Since $Y_n\to 0$ as $n\to\infty$, we have \begin{equation*}
\Big|\Big[u<\mu^-+ \delta\xi\omega \Big]\cap Q_{2 \varrho}(\widetilde{\theta})\Big|
=
0. \end{equation*} This concludes the proof of the lemma. \end{proof}
\subsection{Reduction of oscillation near zero concluded}\label{S:reduc-osc-3} Under the conditions \eqref{Eq:mu-pm}$_1$, \eqref{Eq:alternative}$_2$, \eqref{Eq:opposite:1} and \eqref{Eq:opposite:2}$_1$, we may reduce the oscillation in the following way. First of all, let $\xi$ be determined in Lemma~\ref{Lm:DG:2} and $j_*$ be determined in Lemma~\ref{Lm:shrink:1} in terms of the data and $\alpha_*$. We may assume without loss of generality that Lemma~\ref{Lm:shrink:1} holds true for $Q_{4\varrho}(\widetilde{\theta})$ instead of $Q_{\varrho}(\widetilde{\theta})$. Then we choose the integer $m$ so large that \begin{equation}\label{Eq:choice-m} \frac1{2^m}\le c_1(\xi\omega)^b, \end{equation} where $c_1$ is the constant appearing in Lemma~\ref{Lm:DG:3}.
Next, we can fix $2\delta=2^{-mj_*}$ in Lemma~\ref{Lm:DG:3}.
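Indeed, recall from the proof of Lemma~\ref{Lm:shrink:1} (applied in $Q_{4\varrho}(\widetilde{\theta})$) that $|A_{mj_*,4\varrho}|\le 2^{-m}|Q_{4\varrho}(\widetilde{\theta})|$, and that the level $k_{mj_*}=\mu^-+2^{-mj_*}\xi\omega$ equals $\mu^-+2\delta\xi\omega$ with this choice of $\delta$. Hence \eqref{Eq:choice-m} gives \[ \big|\big[u<\mu^-+2\delta\xi\omega\big]\cap Q_{4\varrho}(\widetilde{\theta})\big| \le\frac1{2^m}\big|Q_{4\varrho}(\widetilde{\theta})\big| \le c_1(\xi\omega)^b\big|Q_{4\varrho}(\widetilde{\theta})\big|, \] which is exactly the measure-theoretical hypothesis of Lemma~\ref{Lm:DG:3}.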
Consequently, by the choice of $m$ in \eqref{Eq:choice-m}, Lemma~\ref{Lm:DG:3} can be applied assuming that $|\mu^-|\le\tfrac12\delta\xi\omega$, and we arrive at \[ u\ge\mu^-+ \delta\xi\omega \equiv \mu^-+\eta(\xi\omega)^{1+q}\quad\text{ a.e. in }Q_{2\varrho}(\widetilde{\theta}), \] where \[ \eta=\tfrac12 c_1^{j_*}\quad\text{ and }\quad q=bj_*. \] This gives a reduction of oscillation \begin{equation}\label{Eq:reduc-osc-2} \operatornamewithlimits{ess\,osc}_{Q_{2\varrho}(\widetilde{\theta})}u\le(1-\eta\omega^q)\omega \end{equation} with a properly redefined $\eta$. Attention is called to the choice of $\delta$ above: by \eqref{Eq:choice-m}, we actually have chosen \begin{equation*} \delta=\tfrac12[c_1(\xi\omega)^b]^{j_*}\equiv\bar{\xi}\omega^q \end{equation*} where the positive constants $\bar\xi$ and $q$ are now quantitatively determined by the data and $\alpha_*$ only.
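For the reader's convenience, the identification of $\eta$ and $q$ above is just bookkeeping with this $\delta$: \[ \delta\xi\omega=\tfrac12\big[c_1(\xi\omega)^b\big]^{j_*}\,\xi\omega =\tfrac12 c_1^{j_*}\,(\xi\omega)^{1+bj_*} =\eta(\xi\omega)^{1+q}. \]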
To summarize the achievements in Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3}, taking \eqref{Eq:rho-control}, \eqref{Eq:g-control}, \eqref{Eq:reduc-osc-1} and \eqref{Eq:reduc-osc-2} all into account,
if \eqref{Eq:start-cylinder}, \eqref{Eq:alternative}$_2$ and $|\mu^-|\le\frac12\delta\xi\omega$ hold true, then we have \begin{equation*} \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho}(\theta)}u\le (1-\eta\omega^q)\omega\quad \text{ or }\quad \omega\le \max\Big\{A\varrho^{\alpha_o},\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g\Big\}, \end{equation*} where \[ \theta=(\xi\omega)^{2-p},\quad\alpha_o=[(p-1)(1+q)]^{-1},\quad A=(8^{\frac{p}{p-1}}\xi\bar\xi)^{-\frac1{1+q}}. \] All the constants $\{\xi,\bar\xi, \eta, q\}$ have been determined in terms of the data and $\alpha_*$.
\subsection{Reduction of oscillation away from zero}\label{S:reduc-osc-4} In this section, we handle the case when the equation \eqref{Eq:1:1} resembles the parabolic $p$-Laplacian \eqref{Eq:p-Laplace}. The reduction of oscillation is summarized in the following two lemmas; the proofs rely on the theory for the parabolic $p$-Laplacian \eqref{Eq:p-Laplace} in \cite[Chapter~III, Section~12]{DB}, which we collect in Appendix~\ref{A:1} for completeness. \begin{lemma}\label{Lm:reduc-osc-1} Let $u$ be a weak sub-solution to \eqref{Dirichlet} with \eqref{Eq:1:2} in $E_T$. Suppose \eqref{Eq:alternative}$_1$ holds true and $\mu^+-\frac14\omega\ge0$. There exist positive constants $\gamma$ and $\xi_1$ depending on the data and $\alpha_*$, such that for $\theta=(\xi_1\omega)^{2-p}$ we have \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho}(\theta)}u\le (1-\tfrac12\xi_1)\omega\quad \text{ or }\quad \omega\le \max\Big\{A\varrho^{\alpha_o},\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g\Big\}. \] \end{lemma}
Let $\xi$ and $\delta=\bar\xi\omega^q$ be determined in Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-3}. We now examine the case when \eqref{Eq:mu-minus} does not hold. \begin{lemma}\label{Lm:reduc-osc-2}
Let $u$ be a weak super-solution to \eqref{Dirichlet} with \eqref{Eq:1:2} in $E_T$. Suppose \eqref{Eq:alternative}$_2$ holds true and $|\mu^-|>\frac12\delta\xi\omega$. There exist positive constants $\gamma$ and $\xi_2$ depending on the data and $\alpha_*$, such that for $ \theta=(\xi_2\delta\xi\omega)^{2-p}$ we have \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho}( \theta)}u\le (1- \tfrac12\xi_2\delta\xi)\omega\quad \text{ or }\quad \omega\le \max\Big\{A\varrho^{\alpha_o},\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g\Big\}. \] \end{lemma} Under the pointwise conditions of Lemmas~\ref{Lm:reduc-osc-1} -- \ref{Lm:reduc-osc-2}, the singularity of $\beta(\cdot)$ is avoided, and the energy estimates of Proposition~\ref{Prop:A:1} are verified for $(u-k)_+$ and for $(u-k)_-$, with $\omega$ there replaced by $\frac14\omega$ and $\frac12\delta\xi\omega$ respectively.
\subsection{Derivation of the modulus of continuity}\label{S:bdry-modulus} This is the final part of the proof of Proposition~\ref{Prop:bdry:D}. Recall that we have defined the quantities $\mu^\pm$ and $\omega$ over $\widetilde{Q}_o=K_{8\varrho}(x_o)\times(t_o-\varrho^{p-1}, t_o)$ for some $\varrho\in(0,\frac18\bar\varrho)$, and we have performed the arguments in Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4} starting from the intrinsic relation \eqref{Eq:start-cylinder}$_2$, that is, \[ \operatornamewithlimits{ess\,osc}_{Q_{8\varrho}(\widetilde{\theta})}u\le \omega\quad\text{ where }\widetilde\theta=(\delta\xi\omega)^{1-p}\text{ and } \delta=\bar\xi\omega^q. \] The positive constants $\{\xi, \bar\xi, q\}$ have been selected in terms of the data and $\alpha_*$, in the course of proof in Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4}.
Therefore, combining all the arguments in Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4},
we arrive at the reduction of oscillation \begin{equation}\label{Eq:induction-1} \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho}(\theta)}u\le (1-\eta\omega^q)\omega\quad \text{ or }\quad \omega\le \max\Big\{A\varrho^{\alpha_o},\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g\Big\}, \end{equation} where $\theta=(\xi\omega)^{2-p}$ and $\{\xi,\bar\xi, q, \alpha_o, \eta, A\}$ have been properly
determined in terms of the data and $\alpha_*$.
In order to iterate the arguments of Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4}, we set \[ \omega_1=\max\Big\{(1-\eta\omega^q)\omega,\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_o\cap S_T}g,\, A\varrho^{\alpha_o}\Big\}. \] We need to choose $\varrho_1>0$, such that the following two set inclusions hold: \[ Q_{8\varrho_1}(\widetilde\theta_1)\subset Q_{\frac14\varrho}(\theta),\quad Q_{8\varrho_1}(\widetilde\theta_1)\subset \widetilde{Q}_o \quad\text{ where }\widetilde\theta_1=(\delta_1\xi\omega_1)^{1-p}\text{ and } \delta_1=\bar\xi\omega_1^q. \] Just like \eqref{Eq:start-cylinder}, they will validate the intrinsic relation needed to repeat the arguments of Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4}. To this end, we may assume that $\eta\omega^q\le\frac12$ and then observe that \[ \widetilde\theta_1 (8\varrho_1)^p\le[(\tfrac12\omega)^{1+q}\xi\bar\xi]^{1-p}(8\varrho_1)^p. \] Hence to satisfy the first set inclusion, it suffices to choose $\varrho_1$ by setting \[ [(\tfrac12\omega)^{1+q}\xi\bar\xi]^{1-p}(8\varrho_1)^p=(\tfrac14\varrho)^p(\xi\omega)^{2-p},\quad\text{ i.e. } \quad \varrho_1^p=\widetilde\xi\omega^{\bar{q}}\varrho^p, \] where $\bar{q}:=1+(p-1)q$ and $\widetilde\xi$ depends only on the data and $\alpha_*$. It is not hard to verify the second inclusion also with such a choice of $\varrho_1$. As a result of the set inclusions and \eqref{Eq:induction-1}, we obtain the following intrinsic relation \[ \operatornamewithlimits{ess\,osc}_{Q_{8\varrho_1}(\widetilde\theta_1)}u\le\omega_1\quad\text{ where }\widetilde\theta_1=(\delta_1\xi\omega_1)^{1-p}\text{ and } \delta_1=\bar\xi\omega_1^q, \] which takes the place of \eqref{Eq:start-cylinder}$_2$ in the next stage. Therefore the arguments of Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4} can be repeated within $Q_{8\varrho_1}(\widetilde\theta_1)$, to obtain that either \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac14\varrho_1}(\theta_1)}u\le (1-\eta\omega_1^q)\omega_1\quad \text{ or }\quad \omega_1\le \max\Big\{A\varrho_1^{\alpha_o},\, 2\operatornamewithlimits{osc}_{\widetilde{Q}_1\cap S_T}g\Big\}, \] where \[ \widetilde{Q}_1=K_{8\varrho_1}(x_o)\times(t_o-\varrho_1^{p-1},t_o). \] Here the constants $\{\xi,\bar\xi, q, \alpha_o, \eta, A\}$ are the same as in the first step.
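The exponent $\bar{q}$ arises from elementary bookkeeping: solving the last displayed equation for $\varrho_1^p$ produces \[ \varrho_1^p=\frac{(\tfrac14)^p\,\xi^{2-p}\,(\xi\bar\xi)^{p-1}}{8^p\,(\tfrac12)^{(1+q)(1-p)}}\;\omega^{2-p-(1+q)(1-p)}\varrho^p, \quad\text{ and }\quad 2-p-(1+q)(1-p)=1+(p-1)q=\bar{q}; \] the constant in front, which depends only on the data and $\alpha_*$, is what we call $\widetilde\xi$.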
Now we may proceed by induction and introduce the following notations: \begin{equation*} \left\{ \begin{array}{cc} \varrho_o=\varrho,\quad \varrho_{n+1}^p=\widetilde\xi\omega_n^{\bar{q}}\varrho_n^p,\quad\widetilde{\theta}_n=(\xi\bar\xi\omega_n^{1+q})^{1-p},\,\quad\theta_n=(\xi\omega_n)^{2-p}\\[5pt] \displaystyle\omega_o=\omega,\quad \omega_{n+1}=\max\Big\{\omega_n(1-\eta\omega^q_n),\,A\varrho_n^{\alpha_o},\,2\operatornamewithlimits{osc}_{\widetilde{Q}_n\cap S_T}g\Big\},\\[5pt] \widetilde{Q}_n=K_{8\varrho_n}(x_o)\times(t_o-\varrho_n^{p-1},t_o),\quad Q_n=Q_{\frac14\varrho_n}(\theta_n),\quad Q'_n=Q_{8\varrho_n}(\widetilde\theta_n). \end{array}\right. \end{equation*} Upon using the induction we have that for all $n=0,1,\cdots$, \begin{equation}\label{Eq:induction-n}
Q'_{n+1}\subset Q_n\quad\text{ and }\quad\operatornamewithlimits{ess\,osc}_{Q_n} u\le \omega_{n+1}. \end{equation}
Next we derive an explicit modulus of continuity of $u$ inherent in \eqref{Eq:induction-n}. To this end, we first define a sequence $\{r_n\}$ and cylinders $\{\widehat{Q}_n\}$ by \begin{equation*} \begin{array}{cc} r_o=1,\quad r_{n+1}^p=\widetilde\xi r_n^p,\quad
\widehat{Q}_n=K_{8r_n}(x_o)\times(t_o-r_n^{p-1},t_o).
\end{array} \end{equation*} Notice that $\varrho_n\le r_n$ and $\widetilde{Q}_n\subset\widehat{Q}_n$ for all $n\ge0$. Observe that if there exists $n_o\in\mathbb{N}$, such that a sequence $\{a_n\}$ satisfies \[
a_{n+1}\ge\max\Big\{a_n(1-\eta a^q_n),\,A r_n^{\alpha_o},\,2\operatornamewithlimits{osc}_{\widehat{Q}_n\cap S_T}g\Big\} \]
for all $n\ge n_o$ and $a_{n_o}\ge\omega_{n_o}$,
then $a_n\ge \omega_n$ for all $n\ge n_o$. As a matter of fact, we may take $a_n=(1+n)^{-\sigma} \max\{1,\omega\}$ with $\sigma\in(0,\frac1q)$. In such a case, the number $n_o$ is determined by the parameters $\{\eta, q, \sigma, \alpha_o, A, \widetilde{\xi}, C_g,\lambda\}$. Indeed, the following two assertions are verified by elementary computations (a sketch of the first one follows this paragraph): \[ a_{n+1}\ge a_n(1-\eta a^q_n),\quad a_{n+1}\ge A r_n^{\alpha_o}, \] for all $n\ge n_o$, where $n_o$ is determined by $\{\eta, q,\sigma, \alpha_o, A\}$. Using the modulus of continuity of $g$, there exists $\gamma=\gamma(\widetilde{\xi}, C_g,\lambda)$, such that \[ \operatornamewithlimits{osc}_{\widehat{Q}_n\cap S_T}g\le \frac{\gamma}{n^{\lambda}}. \] Recall that $\lambda>\sigma$. Hence $n_o$ here can be determined in terms of $\{\widetilde{\xi}, C_g,\lambda\}$ to ensure that for any $n\ge n_o$, \[ a_{n+1}\ge\operatornamewithlimits{osc}_{\widehat{Q}_n\cap S_T}g. \] Once the number $n_o$ has been determined by the parameters $\{\eta, q, \sigma, \alpha_o, A, \widetilde{\xi}, C_g,\lambda\}$, independently of $\omega$, we may assume that $n_o=0$ for simplicity. As a result of $a_o\ge\omega_o$, we arrive at \[ \operatornamewithlimits{ess\,osc}_{Q_n} u\le \omega_{n+1}\le\frac{\max\{1,\omega\}}{(n+2)^\sigma}\quad\text{ for all }n=0,1,\cdots. \]
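Here is a sketch of the first of the two assertions above, under the stated choice of $a_n$: since $t\mapsto t^\sigma$ satisfies $t^\sigma\ge t$ on $[0,1]$, \[ \frac{a_{n+1}}{a_n}=\Big(\frac{n+1}{n+2}\Big)^{\sigma}\ge\frac{n+1}{n+2}=1-\frac1{n+2}, \] whereas $\eta a_n^q\ge\eta(1+n)^{-\sigma q}$ because $\max\{1,\omega\}\ge1$. Since $\sigma q<1$, we have $\eta(1+n)^{-\sigma q}\ge\frac1{n+2}$ for all $n\ge n_o$ with $n_o$ depending only on $\{\eta,q,\sigma\}$, whence $a_{n+1}\ge a_n(1-\eta a_n^q)$ for such $n$.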
Let us take a number $4r\in(0,\varrho)$. There must be some $n$ such that \[ \varrho_{n+1}< 4r\le\varrho_{n}. \] The right-hand side inequality yields that $Q_r(\omega^{2-p})\subset Q_{\frac14\varrho_n}(\theta_n)=Q_n$, and consequently \begin{equation}\label{Eq:osc-final-1} \operatornamewithlimits{ess\,osc}_{Q_r(\omega^{2-p})}u\le \operatornamewithlimits{ess\,osc}_{Q_n}u\le\omega_{n+1}\le\frac{\max\{1,\omega\}}{(n+2)^\sigma}. \end{equation} Next we analyze the left-hand side inequality. First notice that by the definition of $\omega_n$ and assuming $\eta\omega_n^q\le\frac12$, we can estimate \[ \omega_n\ge\frac{\omega}{2^n}. \] Use this to estimate $\varrho_{n+1}^p$ from below by \begin{align*} (4r)^p&>\varrho^p_{n+1}=\widetilde{\xi}\omega_n^{\bar{q}}\varrho^p_n\ge\widetilde{\xi}\Big(\frac{\omega}{2^n}\Big)^{\bar{q}}\varrho^p_{n}\ge\cdots\\ &\ge \widetilde{\xi}^{n+1}\varrho^p\prod_{i=0}^{n}\Big(\frac{\omega}{2^i}\Big)^{\bar{q}} =\widetilde{\xi}^{n+1}\Big(\frac{\omega^{n+1}}{2^{\frac{n(n+1)}2}}\Big)^{\bar{q}}\varrho^p. \end{align*} Then taking the logarithm on both sides, a simple calculation gives a lower bound for $n$: \[ n^2\ge\frac{\gamma}{1-\ln\min\{1,\omega\}}\ln\frac{\varrho}{r},\quad\text{ for some }\gamma=\gamma(p,q,\widetilde{\xi}). \] Substituting this into \eqref{Eq:osc-final-1}, we obtain the modulus of continuity
\[ \operatornamewithlimits{ess\,osc}_{Q_r(\omega^{2-p})}u\le\gamma\max\{1,\omega\}\big(1-\ln\min\{1,\omega\}\big)^{\frac{\sigma}2}\Big(\ln\frac{\varrho}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0,\varrho). \] This concludes the proof of Proposition~\ref{Prop:bdry:D}.
\section{Reduction of interior oscillation}\label{S:interior} For a cylinder $\widetilde{Q}_{o}=K_{\varrho}(x_o)\times(t_o-\varrho^{p-1},t_o)\subset E_T$ with $\varrho\in(0,1)$, we introduce the numbers $\mu^{\pm}$ and $\omega$ satisfying \begin{equation*}
\mu^+\ge\operatornamewithlimits{ess\,sup}_{\widetilde{Q}_{o}} u,
\quad
\mu^-\le\operatornamewithlimits{ess\,inf}_{\widetilde{Q}_{o}} u,
\quad
\omega\ge\mu^+ - \mu^-. \end{equation*} The main result of this section is the following. \begin{proposition}\label{Prop:int-reg}
Let $u$ be a locally bounded, local weak solution to \eqref{Eq:1:1} with \eqref{Eq:1:2} in $E_T$. Then there exist positive constants $\gamma$, $\sigma$ and $q$ depending only on the data, such that \[ \operatornamewithlimits{ess\,osc}_{Q_r(\omega^{(2-p)(1+q)})}u\le\gamma \max\{1,\omega\}\Big(\ln\frac{\varrho}{r}\Big)^{-\sigma}\quad\text{ for all }r\in(0,\varrho). \]
\end{proposition}
\subsection{Preliminaries} We first collect a set of lemmas from which the proof of Proposition~\ref{Prop:int-reg} will follow. Let us remark that the energy estimate in Proposition~\ref{Prop:2:1} still holds true in the interior with arbitrary $k\in\mathbb{R}$. Keeping this remark in mind, we present the following De Giorgi type lemma similar to Lemma~\ref{Lm:DG:1}.
Let $\theta>0$ be a parameter to be determined. The cylinder $Q_\varrho(\theta)$ is coaxial with $\widetilde{Q}_{o}$ and with the same vertex $(x_o,t_o)$; moreover, we will impose that \[ \theta\varrho^p<\varrho^{p-1},\quad\text{ such that }\quad Q_\varrho(\theta) \subset \widetilde{Q}_{o}. \] Then we have the following.
\begin{lemma}\label{Lm:DG:int} Let $u$ be a local weak super-solution to \eqref{Eq:1:1} with \eqref{Eq:1:2} in $E_T$. For $\xi\in(0,1)$, set $\theta=(\xi\omega)^{2-p}$ and assume that $Q_{\varrho}(\theta)\subset E_T$.
There exists a positive constant $c_o$ depending only on the data, such that if \[
|[u\le\mu^-+\xi\omega]\cap Q_{\varrho}(\theta)|\le c_o(\xi\omega)^{\frac{N+p}p}|Q_{\varrho}(\theta)|, \] then \[ u\ge \mu^-+\tfrac12\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta). \] \end{lemma} \begin{proof} This is exactly Lemma~\ref{Lm:DG:1} in the interior. The proof runs almost verbatim. \end{proof}
The next lemma is a variant of the previous one, involving quantitative initial data. \begin{lemma}\label{Lm:DG:initial:1} Let $u$ be a local weak super-solution to \eqref{Eq:1:1} with \eqref{Eq:1:2} in $E_T$. For $\xi\in(0,1)$, set $\theta=(\xi\omega)^{2-p}$.
There exists a positive constant $\gamma_o$ depending only on the data, such that if \[ u(\cdot, t_o)\ge\mu^-+ \xi\omega,\quad\text{ a.e. in } K_{\varrho}(x_o), \] then \[ u\ge\mu^-+\tfrac12\xi\omega\quad\text{ a.e. in }K_{\frac12\varrho}(x_o)\times(t_o,t_o+\gamma_o\theta\varrho^p), \] provided the cylinders are included in $\widetilde{Q}_o$. \end{lemma} \begin{proof} We will use the energy estimate in Proposition~\ref{Prop:2:1} (the interior version) in the cylinder $K_{\varrho}(x_o)\times(t_o,t_o+\gamma_o\theta\varrho^p)$, with a cutoff function $\zeta(x)$ in $K_\varrho(x_o)$ independent of $t$, and with levels \[ k_n=\mu^-+\frac{\xi\omega}2+\frac{\xi\omega}{2^{n+1}},\quad n=0,1,\cdots. \] Let us first analyze $\Phi_-$ under this setting. If $k_n\le0$, then $\Phi_{-}(k_n, t_o,t,\zeta)=0$. If instead $k_n>0$ for $n\in\{0,1,\cdots,n_*\}$, then in particular $k_o=\mu^-+\xi\omega>0$, and we calculate for $n\in\{0,1,\cdots,n_*\}$ that, omitting $x_o$ for simplicity, \begin{align*}
\Phi_{-}&(k_n, t_o,t,\zeta)=\int_{K_\varrho\times\{\tau\}}\nu \copy\chibox_{[u\le0]}(u-k_n)_-\zeta^p\,\mathrm{d}x\Big|_{t_o}^t\\ &\quad-\int_{t_o}^t\int_{K_\varrho}\nu\copy\chibox_{[u\le0]}\partial_t (u-k_n)_-\zeta^p\,\mathrm{d}x\mathrm{d}\tau\\
&=k_n\int_{K_\varrho}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x\Big|_{t_o}^t+\int_{K_\varrho}\nu u_-\zeta^p\,\mathrm{d}x\Big|_{t_o}^t -\int_{t_o}^t\int_{K_\varrho}\nu \partial_t u_-\zeta^p\,\mathrm{d}x\mathrm{d}\tau\\
&=k_n\int_{K_\varrho\times\{t\}}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x-k_n\int_{K_\varrho\times\{t_o\}}\nu\copy\chibox_{[u\le0]}\zeta^p\,\mathrm{d}x.
\end{align*} Here the last two terms in the second line cancel each other, since $\zeta$ is independent of $t$. The first term in the last line is non-negative, while the second term vanishes because of the initial information \[ u(\cdot, t_o)\ge\mu^-+ \xi\omega>0,\quad\text{ a.e. in } K_{\varrho}. \] Therefore, in any case the term $\Phi_-$ can be discarded in the energy estimate of Proposition~\ref{Prop:2:1}, whereas the last spatial integral at $t_o$ also vanishes due to the initial information. As a result, we obtain energy estimates similar to those for the parabolic $p$-Laplacian \eqref{Eq:p-Laplace}, and the proof can be completed as in \cite[Chapter~3, Section~4]{DBGV-mono}. \end{proof}
\begin{remark}\label{Rmk:DG:initial:1}\upshape Lemma~\ref{Lm:DG:initial:1} admits a variant version involving the lateral boundary datum $g$. To this end, we only need to impose the additional condition that $$\mu^-+\xi\omega\le \inf_{Q^+_{\varrho}(\theta)\cap S_T} g, \quad\text{ where }Q^+_{\varrho}(\theta)=K_{\varrho}(x_o)\times(t_o,t_o+\theta\varrho^p),$$ such that the levels $k_n$ become admissible in the application of Proposition~\ref{Prop:2:1}. \end{remark}
The next lemma asserts that if we truncate a solution by a positive number, the resultant function is a sub-solution to the parabolic $p$-Laplacian \eqref{Eq:p-Laplace}. \begin{lemma}\label{Lm:sub-solution} Let $u$ be a local weak sub-solution to \eqref{Eq:1:1} with \eqref{Eq:1:2} in $E_T$. Then for any $k>0$ the truncation $(u-k)_+$ is a local weak sub-solution to \eqref{Eq:p-Laplace} with \eqref{Eq:1:2} in $E_T$. \end{lemma} \begin{proof} Upon using the test function \[ \zeta=\frac{(u-k)_+}{(u-k)_++\varepsilon}\varphi \]
where $\varepsilon>0$, and $\varphi\ge0$ satisfying \eqref{Eq:function-space} in the weak formulation \eqref{Eq:int-form} in Section~\ref{S:1:2:1},
and noticing that the terms involving $\copy\chibox_{[u\le0]}$ all vanish due to the choice $k>0$, the proof then runs similarly to the one in \cite[Lemma~1.1, Chapter~1]{DBGV-mono}. \end{proof}
Based on Lemma~\ref{Lm:sub-solution}, together with the expansion of positivity of \cite[Chapter~5, Proposition~7.1]{DBGV-mono}, we have the following. \begin{lemma}\label{Lm:expansion:p} Let $u$ be a local weak sub-solution to \eqref{Eq:1:1} with \eqref{Eq:1:2} in $E_T$. Suppose that $\mu^+-\frac14\omega>0$ and that for some $\alpha\in(0,1)$ there holds \[
|[\mu^+-u(\cdot, t_o)\ge\tfrac14\omega]\cap K_{\varrho}(x_o)|\ge\alpha |K_{\varrho}|. \] Then there exist positive constants $\eta$ and $\kappa$ depending only on the data, such that \[ \mu^+-u(\cdot, t)\ge \eta_*\omega\quad\text{ a.e. in } K_{\frac12\varrho}(x_o),\quad\text{ where }\eta_*=\eta\alpha^\kappa, \] for all times \[ t_o+\tfrac12(\eta_*\omega)^{2-p}\varrho^p\le t\le t_o+(\eta_*\omega)^{2-p}\varrho^p, \] provided the cylinders are included in $\widetilde{Q}_o$. \end{lemma} \begin{proof} Let $k=\mu^+-\frac14\omega$. By Lemma~\ref{Lm:sub-solution}, the function $(u-k)_+$ is a sub-solution to \eqref{Eq:p-Laplace}. Since $(u-k)_+\le\frac14\omega$ in $\widetilde{Q}_o$, the function $v:=\frac14\omega-(u-k)_+$ is a non-negative, local, weak super-solution to \eqref{Eq:p-Laplace}
in $\widetilde{Q}_o$. The measure theoretical information given here then reads \[
|[v(\cdot, t_o)\ge\tfrac14\omega]\cap K_{\varrho}(x_o)|\ge\alpha |K_{\varrho}|. \] Now an application of \cite[Chapter~5, Proposition~7.1]{DBGV-mono} will complete the proof. \end{proof}
\begin{remark}\upshape The original time interval given in \cite[Chapter~5, Proposition~7.1]{DBGV-mono} is \[ t_o+\tfrac12\delta b^{p-2}(\eta_*\omega)^{2-p}\varrho^p\le t\le t_o+\delta b^{p-2}(\eta_*\omega)^{2-p}\varrho^p. \] A careful inspection of the proof there actually reveals that $\delta$ and $b$ can be selected independent of $\alpha$. Hence we have absorbed them into $\eta$ here. Moreover, Lemma~\ref{Lm:expansion:p} is stable as $p\to2$. \end{remark}
\subsection{Proof of Proposition~\ref{Prop:int-reg}}
Assume $(x_o,t_o)=(0,0)$. Let $\theta=L(\frac14\omega)^{2-p}$ with $L=L_o\omega^{-a}$ for some $a,\, L_o>0$ to be determined in terms of the data. To proceed we will assume that \[ Q_\varrho(\theta)\subset \widetilde{Q}_o=K_\varrho\times(-\varrho^{p-1},0),\quad\text{ such that }\quad \operatornamewithlimits{ess\,osc}_{Q_{\varrho}(\theta)}u\le\omega; \] otherwise we would have \begin{equation}\label{Eq:extra-control} \omega\le A\varrho^{\frac1{a+p-2}}\quad\text{ where }A=L_o^{\frac1{a+p-2}} 4^{\frac{p-2}{a+p-2}}. \end{equation} As in \eqref{Eq:mu-pm} we may also assume that $\mu^+-\frac14\omega> \frac14\omega>0$, and hence Lemma~\ref{Lm:expansion:p} is at our disposal.
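Indeed, the failure of the inclusion $Q_\varrho(\theta)\subset\widetilde{Q}_o$ means $\theta\varrho^p>\varrho^{p-1}$, which unravels into \eqref{Eq:extra-control}: \[ L_o\omega^{-a}(\tfrac14\omega)^{2-p}\varrho^p>\varrho^{p-1} \iff \omega^{a+p-2}<L_o4^{p-2}\varrho \iff \omega< L_o^{\frac1{a+p-2}} 4^{\frac{p-2}{a+p-2}}\varrho^{\frac1{a+p-2}}=A\varrho^{\frac1{a+p-2}}. \]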
Let us suppose $L$ is a large integer with no loss of generality, and slice $Q_{\varrho}(\theta)$ into a vertical stack of $L$ coaxial cylinders \[ Q_i=K_{\varrho}\times(t_i-\bar\theta\varrho^p,t_i),\quad t_i=-i\bar\theta\varrho^p,\quad i=0,\cdots, L-1, \] all congruent with $Q_\varrho(\bar\theta)$. Here we have set $\bar\theta:=(\frac14\omega)^{2-p}$ for ease of notation.
Let us focus on the bottom cylinder $Q_{L-1}$. The proof unfolds along the following two alternatives: \begin{equation}\label{Eq:alternative-int} \left\{ \begin{array}{cc}
|[u\le\mu^-+\tfrac14\omega]\cap Q_{L-1}|\le c_o(\tfrac14\omega)^{\frac{N+p}p}|Q_{L-1}|,\\[5pt]
|[u\le\mu^-+\tfrac14\omega]\cap Q_{L-1}|> c_o(\tfrac14\omega)^{\frac{N+p}p}|Q_{L-1}|. \end{array}\right. \end{equation}
Let us first suppose \eqref{Eq:alternative-int}$_1$ holds true. An application of Lemma~\ref{Lm:DG:int} gives that \[ u\ge \mu^-+\tfrac18\omega\quad\text{ a.e. in }\tfrac12Q_{L-1}. \] Here the notation $\tfrac12Q_{L-1}$ should be self-explanatory in view of Lemma~\ref{Lm:DG:int}. In particular, the above pointwise lower bound of $u$ holds at the time level $t_o=-(L-1)\bar\theta\varrho^p$ and it serves as the initial datum for an application of Lemma~\ref{Lm:DG:initial:1}. As a result, by choosing $\xi(\omega)$ so small that $\gamma_o(\xi\omega)^{2-p}\ge L(\frac14\omega)^{2-p}$, we obtain that \begin{equation}\label{Eq:reduc-osc-int-1} u\ge\mu^-+\tfrac12\xi\omega\quad\text{ a.e. in }K_{\frac12\varrho}\times (t_o,0). \end{equation} Note that the choice of $\xi(\omega)$ is made in terms of the data and $L=L_o\omega^{-a}$, that is, $$\xi=\tfrac14(\gamma_o/L_o)^{\frac1{p-2}}\omega^{\frac{a}{p-2}}$$ where $a, L_o>0$ will be determined later in terms of the data.
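The formula for $\xi$ simply solves the requirement imposed above: for $p>2$, \[ \gamma_o(\xi\omega)^{2-p}\ge L_o\omega^{-a}(\tfrac14\omega)^{2-p} \iff \xi^{p-2}\le\frac{\gamma_o}{L_o}\,4^{2-p}\,\omega^{a} \iff \xi\le\tfrac14\,(\gamma_o/L_o)^{\frac1{p-2}}\,\omega^{\frac{a}{p-2}}. \]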
Next, we turn our attention to the second alternative \eqref{Eq:alternative-int}$_2$. Since $\mu^+-\frac14\omega\ge\mu^-+\frac14\omega$, we may rephrase \eqref{Eq:alternative-int}$_2$ as \[
|[\mu^+-u\ge\tfrac14\omega]\cap Q_{L-1}|> c_o(\tfrac14\omega)^{\frac{N+p}p}|Q_{L-1}|=:\alpha |Q_{L-1}|. \] Then it is not hard to see that there exists $$t_*\in \Big[t_{L-1}-\bar\theta\varrho^p, t_{L-1}- \tfrac12 \alpha\bar\theta\varrho^p\Big],$$ such that \[
|[\mu^+-u(\cdot, t_*)\ge\tfrac14\omega]\cap K_\varrho|>\alpha |K_\varrho|. \] Based on this measure theoretical information at $t_*$, an application of Lemma~\ref{Lm:expansion:p}
yields that
\begin{equation}\label{Eq:reduc-osc-int-2}
\mu^+-u(\cdot, t)\ge\eta_o\alpha^{\kappa}\omega\quad\text{ a.e. in }K_{\frac12\varrho}
\end{equation}
for all times
\[
t_*+\tfrac12(\eta_o\alpha^{\kappa}\omega)^{2-p}\varrho^p\le t\le t_*+(\eta_o\alpha^{\kappa}\omega)^{2-p}\varrho^p.
\] Now we may determine $L$ by setting
\[ (\eta_o\alpha^{\kappa}\omega)^{2-p}=L\bar\theta,\quad\text{ i.e. }L=(4\eta_o\alpha^\kappa)^{-(p-2)} \equiv L_o\omega^{-a}.
\] Hence we have chosen $a:=\frac{N+p}p(p-2)\kappa$ and $L_o:=(4\eta_o c_o^\kappa 4^{-\frac{N+p}{p} \kappa})^{2-p}$.
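The identification of $a$ and $L_o$ is again elementary bookkeeping: since $\alpha=c_o(\tfrac14\omega)^{\frac{N+p}p}$, \[ L=(4\eta_o\alpha^{\kappa})^{-(p-2)} =\big(4\eta_o c_o^{\kappa}4^{-\frac{N+p}{p}\kappa}\big)^{2-p}\,\omega^{-\frac{N+p}{p}(p-2)\kappa} \equiv L_o\omega^{-a}. \]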
Combining \eqref{Eq:reduc-osc-int-1} and \eqref{Eq:reduc-osc-int-2}, and taking \eqref{Eq:extra-control} into account, we obtain the reduction of oscillation \[ \operatornamewithlimits{ess\,osc}_{Q_{\frac12\varrho}(\theta)}u\le (1-\eta\omega^q)\omega\quad\text{ or }\quad \omega\le A\varrho^{\frac1{a+p-2}}, \] where $q=\frac{N+p}p \kappa$ and $\eta$ depends only on the data. In order to iterate the above arguments, we define \[ \omega_1=\max\Big\{(1-\eta\omega^q)\omega,\,A\varrho^{\frac1{a+p-2}}\Big\}; \] we need to choose $\varrho_1>0$, such that \[
Q_{\varrho_1}(\theta_1)\subset Q_{\frac12\varrho}(\theta), \quad Q_{\varrho_1}(\theta_1)\subset \widetilde{Q}_o, \quad\text{ where }\theta_1=L_o\omega_1^{-a} (\tfrac14\omega_1)^{2-p}. \] Since we may assume $\omega_1\ge\frac12\omega$ and estimate \[ \theta_1\varrho_1^p=L_o\omega_1^{-a}(\tfrac14\omega_1)^{2-p}\varrho_1^p\le L_o(\tfrac12\omega)^{-a}(\tfrac18\omega)^{2-p}\varrho_1^p, \] the first set inclusion amounts to requiring that \[ L_o(\tfrac12\omega)^{-a}(\tfrac18\omega)^{2-p}\varrho_1^p=\theta(\tfrac12\varrho)^p=L_o\omega^{-a}(\tfrac14\omega)^{2-p}\varrho^p,\quad\text{ i.e. }\varrho_1=c\varrho \] for some $c\in(0,1)$ determined by the data only. The second set inclusion is verified similarly for such a choice of $\varrho_1$. As a result, we arrive at \[ \operatornamewithlimits{ess\,osc}_{Q_{\varrho_1}(\theta_1)}u\le\omega_1. \]
Now we may proceed by induction and introduce the following notations: \begin{equation*} \left\{ \begin{array}{cc} \varrho_o=\varrho,\quad \varrho_{n+1}=c\varrho_n,\quad\theta_n=L_o\omega_n^{-a} (\tfrac14\omega_n)^{2-p},\\[5pt] \displaystyle\omega_o=\omega,\quad \omega_{n+1}=\max\Big\{\omega_n(1-\eta\omega^q_n), \,A\varrho_n^{\frac1{a+p-2}}\Big\},\\[5pt]
Q_n=Q_{\frac12\varrho_n}(\theta_n),\quad Q'_{n}=Q_{\varrho_n}(\theta_n),\quad Q'_{n+1}\subset Q_n. \end{array}\right. \end{equation*} Upon using the induction we have that for all $n=0,1,\cdots$, \begin{equation*} \operatornamewithlimits{ess\,osc}_{Q_n} u\le \omega_{n}. \end{equation*} From now on the deduction of a logarithmic type modulus of continuity is performed as in Section~\ref{S:bdry-modulus}.
\section{Proof of Theorem~\ref{Thm:1:1}}\label{S:Thm-proof} Before starting the proof of Theorem~\ref{Thm:1:1}, we still have to deal with the continuity at the initial level. This is considerably simpler than the lateral boundary case and the interior case. Let us define $\mu^\pm$ and $\omega$ to be the supremum/infimum and the oscillation of $u$
over the forward cylinder $\widetilde{Q}^+_\varrho=K_{\varrho}(x_o)\times(0,\varrho^{p-1})$
for some $x_o\in \overline{E}$ and $\varrho\in(0,1)$. We will impose that $\omega^{2-p}\varrho^p\le\varrho^{p-1}$,
such that $$Q^+_{\varrho}(\theta):=K_{\varrho}(x_o)\times(0,\theta\varrho^{p})\subset \widetilde{Q}^+_\varrho.$$
Then we have the following oscillation decay estimate. \begin{proposition}\label{Prop:bdry:Initial} Suppose the hypotheses in Theorem~\ref{Thm:1:1} hold true. There exist $\gamma>1$ and $\alpha_1\in(0,1)$ depending only on the data, such that \[ \operatornamewithlimits{ess\,osc}_{Q^+_r(\omega^{2-p})\cap E_T}u\le \omega \Big(\frac{r}{\varrho}\Big)^{\alpha_1}+ \gamma \operatornamewithlimits{osc}_{K_{\sqrt{r\varrho}}(x_o)\cap E}u_o+\gamma \operatornamewithlimits{osc}_{\widetilde{Q}^+_{\sqrt{r\varrho}}\cap S_T} g\quad\text{ for all }r\in(0,\varrho). \]
\end{proposition} \begin{proof} Observe first that one of the following two alternatives must hold: \[ \mu^+-\tfrac14\omega>\sup_{\widetilde{Q}^+_\varrho\cap \partial_{\mathcal{P}}E_T}\{u_o,\,g\}\quad\text{ or }\quad \mu^-+\tfrac14\omega<\inf_{\widetilde{Q}^+_\varrho\cap \partial_{\mathcal{P}}E_T}\{u_o,\,g\}; \] otherwise we obtain \begin{equation*} \omega\le 2\max\Big\{\operatornamewithlimits{osc}_{K_\varrho(x_o)\cap E}u_o,\, \operatornamewithlimits{osc}_{\widetilde{Q}^+_\varrho\cap S_T} g\Big\}. \end{equation*} With no loss of generality, let us suppose the second alternative holds true. Then we are in a position to apply Lemma~\ref{Lm:DG:initial:1} (recalling Remark~\ref{Rmk:DG:initial:1}), such that for some $\gamma_o\in(0,1)$ depending only on the data, there holds \[ u\ge\mu^-+\tfrac18\omega\quad\text{ a.e. in }\{K_{\frac12\varrho}(x_o)\cap E\}\times(t_o,t_o+\gamma_o\theta\varrho^p), \] where $\theta=(\tfrac14\omega)^{2-p}$. This in turn yields a reduction of oscillation \[ \operatornamewithlimits{ess\,osc}_{Q^+_{\frac12\varrho}(\gamma_o\theta)\cap E_T}u\le\tfrac78\omega. \] Notice that the above reduction of oscillation at the initial level bears no singularity of $\beta(\cdot)$ and in fact yields a H\"older type decay. As such the final oscillation decay is dominated by that of $u_o$ and $g$ near the initial level. The technical realization runs similarly to Section~\ref{S:bdry-modulus}; see also \cite[Section~7.1]{BDL}. We refrain from giving further details, to avoid repetition. \end{proof}
The proof of Theorem~\ref{Thm:1:1} consists of several cases, which rely on Proposition~\ref{Prop:bdry:D} near the lateral boundary, Proposition~\ref{Prop:int-reg} in the interior and Proposition~\ref{Prop:bdry:Initial} near the initial level.
Let us consider two points $z_i:=(x_i,t_i)$ in $\overline{E_T}$ with $i=1,2$, satisfying \[
\operatorname{dist}_p(z_1,z_2):=|x_1-x_2|+|t_1-t_2|^{\frac1{p}}\le\tfrac14\bar\varrho \] where $\bar\varrho$ is defined in the geometric property \eqref{geometry} of $\partial E$.
Suppose first that $\operatorname{dist}_p(z_i,\partial_{\mathcal{P}} E_T):=\inf\{\operatorname{dist}_p(z_i,Q): Q\in\partial_{\mathcal{P}} E_T\}>\frac12\bar\varrho$ for $i=1,2$. Then according to the interior estimate of Proposition~\ref{Prop:int-reg}, we have for some proper $\sigma\in(0,1)$ that \begin{equation}\label{Eq:modulus-final}
|u(x_1,t_1)-u(x_2,t_2)|\le C\Bigg(\ln \frac{\bar\varrho}{|x_1 - x_2|+|t_1 - t_2|^{\frac1p}}\Bigg)^{-\sigma}, \end{equation}
where $C>0$ takes into account the data and $\|u\|_{\infty, E_T}$.
Next suppose that one of $\{z_1,z_2\}$, $z_1$ for instance, satisfies $\operatorname{dist}_p(z_1,\partial_{\mathcal{P}} E_T)\le\frac12\bar\varrho$. Suppose further that $t_1, t_2>(\frac12\bar\varrho)^{p-1}$. Then we may apply Proposition~\ref{Prop:bdry:D} to obtain the same type of estimate as \eqref{Eq:modulus-final}. The present constant $C$ involves the data, $\alpha_*$, $\lambda$, $C_g$ and $\|u\|_{\infty, E_T}$.
Finally if one of $\{t_1,t_2\}$, $t_1$ for instance, satisfies that $t_1\le(\frac12\bar\varrho)^{p-1}$,
then we may apply Proposition~\ref{Prop:bdry:Initial} to obtain the same type of estimate as \eqref{Eq:modulus-final}. The present constant $C$ involves the data, $\lambda$, $C_g$, $C_{u_o}$ and $\|u\|_{\infty, E_T}$.
\section{The Neumann problem} In this section, we will focus on the Neumann problem \eqref{Neumann}. The treatment more or less parallels the interior regularity in Section~\ref{S:interior}. However, we are not able to treat a general $p$-Laplacian for the moment; see Remark~\ref{Rmk:6:2}.
First of all, an energy estimate is in order. \begin{proposition}\label{Prop:energy:Neumann}
Let $u$ be a bounded weak solution to the Neumann problem \eqref{Neumann} under the condition \eqref{Eq:1:2} with $p=2$.
Assume that $\partial E$ is of class $C^1$ and \eqref{N-data} holds.
There exists a constant $\gamma>0$ depending on $C_o$, $C_1$ and the structure of $\partial E$, such that
for all cylinders $Q_{R,S}=K_R(x_o)\times (t_o-S,t_o)$,
every $k\in\mathbb{R}$, and every non-negative, piecewise smooth cutoff function
$\zeta$ vanishing on $\partial K_R(x_o)\times (t_o-S,t_o)$, there holds \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\Big\{\int_{\{K_R(x_o)\cap E\}\times\{t\}}
\zeta^2 (u-k)_\pm^2\,\mathrm{d}x +\Phi_{\pm}(k, t_o-S,t,\zeta)\Big\}\\
&\quad+
\iint_{Q_{R,S}\cap E_T}\zeta^2|D(u-k)_\pm|^2\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}\cap E_T}
\Big[
(u-k)^{2}_\pm|D\zeta|^2 + (u-k)_\pm^2|\partial_t\zeta^2|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad
+\int_{\{K_R(x_o)\cap E\}\times \{t_o-S\}} \zeta^2 (u-k)_\pm^2\,\mathrm{d}x\\
&\quad
+\gamma C_2^{2}\iint_{Q_{R,S}\cap E_T} \zeta^2\copy\chibox_{[(u-k)_\pm>0]}\,\mathrm{d}x\mathrm{d}t, \end{align*} where \begin{align*}
\Phi_{\pm}(k, t_o-S,t,\zeta)=&-\int_{\{K_R(x_o)\cap E\}\times\{\tau\}}\nu(x,t)\copy\chibox_{[u\le0]}[\pm(u-k)_\pm]\zeta^2\,\mathrm{d}x\Big|_{t_o-S}^t\\ &+\int_{t_o-S}^t\int_{K_R(x_o)\cap E}\nu(x,t) \copy\chibox_{[u\le0]}\partial_t[\pm(u-k)_\pm\zeta^2]\,\mathrm{d}x\mathrm{d}\tau \end{align*} and $\nu(x,t)$ is a selection out of $[0,\nu]$ for $u(x,t)=0$, and $\nu(x,t)=\nu$ for $u(x,t)<0$. \end{proposition} \begin{proof} We deal with sub-solutions only as the other case is analogous. Upon using the test function $(u-k)_+\zeta^2$ in the weak formulation of Section~\ref{S:1:4:4} and taking the variational datum into account, similar calculations as in Proposition~\ref{Prop:2:1} can be reproduced here to obtain \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\Big\{\int_{\{K_R(x_o)\cap E\}\times\{t\}}
\zeta^2 (u-k)_+^2\,\mathrm{d}x +\Phi_{+}(k, t_o-S,t,\zeta)\Big\}\\
&\quad+
\iint_{Q_{R,S}\cap E_T}\zeta^2|D(u-k)_+|^2\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}\cap E_T}
\Big[
(u-k)^{2}_+|D\zeta|^2 + (u-k)_+^2|\partial_t\zeta^2|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad
+\iint_{Q_{R,S}\cap S_T}\zeta^p\psi(x,t,u)(u-k)_+\,\mathrm{d} \sigma\mathrm{d}t\\
&\quad
+\int_{\{K_R(x_o)\cap E\}\times \{t_o-S\}} \zeta^2 (u-k)_+^2\,\mathrm{d}x. \end{align*} In order to treat the boundary integral, we make use of \eqref{N-data}, apply the {\it trace inequality} (cf.~\cite[Proposition~18.1]{DB-RA}) for each time slice and then integrate in time, and use Young's inequality to estimate: \begin{align*} &\iint_{Q_{R,S}\cap S_T}\psi(x,t,u)(u-k)_+\zeta^2\,\mathrm{d}\sigma\mathrm{d}t\\ &\quad\le C_2\iint_{\partial (K_R\cap E)\times (t_o-S,t_o)}(u-k)_+\zeta^2\,\mathrm{d} \sigma\mathrm{d}t\\ &\quad\le \gamma C_2\iint_{Q_{R,S}\cap E_T}
\Big[|D(u-k)_+|\zeta^2+(u-k)_+\big(\zeta^2+|D\zeta^2|\big)\Big]\,\mathrm{d}x\mathrm{d}t\\
&\quad\le\frac{C_o}{4}\iint_{Q_{R,S}\cap E_T} \zeta^2 |D(u-k)_+|^2 \mathrm{d}x\mathrm{d}t
+ \gamma \iint_{Q_{R,S}\cap E_T} (u-k)^2_+|D\zeta|^2\,\mathrm{d}x\mathrm{d}t\\
&\qquad+ \gamma C_2^{2}\iint_{Q_{R,S}\cap E_T} \zeta^2\copy\chibox_{[u>k]}\,\mathrm{d}x\mathrm{d}t. \end{align*} Substituting it back yields the desired energy estimate. Notice also that the structure of $\partial E$ enters the constant $\gamma$ through the trace inequality. \end{proof}
\begin{remark}\label{Rmk:6:1}\upshape The vertex $(x_o,t_o)$ of the cylinder $Q_{R,S}$ need not be on the lateral boundary $S_T$. If $Q_{R,S}\subset E_T$, then the energy estimate in Proposition~\ref{Prop:energy:Neumann} reduces to the interior case in Proposition~\ref{Prop:2:1} for $p=2$ and $C_2=0$. This remark also applies to the following Lemmas~\ref{Lm:DG:Neumann:1} -- \ref{Lm:expansion}. \end{remark}
\subsection{Preliminaries}\label{S:6:1}
Let $(x_o,t_o)\in \overline{E}_T$ and assume $t_o-\varrho^2>0$.
For a cylinder $Q_{o}=Q_\varrho\equiv K_{\varrho}(x_o)\times(t_o-\varrho^{2},t_o)$ with $\varrho\in(0,1)$, we introduce the numbers $\mu^{\pm}$ and $\omega$ satisfying \begin{equation*}
\mu^+\ge\operatornamewithlimits{ess\,sup}_{Q_{o}\cap E_T} u,
\quad
\mu^-\le\operatornamewithlimits{ess\,inf}_{Q_{o}\cap E_T} u,
\quad
\omega\ge\mu^+ - \mu^-. \end{equation*} The numbers $\mu^{\pm}$ and $\omega$ are used in the following lemmas.
\begin{lemma}\label{Lm:DG:Neumann:1}
Let the hypotheses in Proposition~\ref{Prop:energy:Neumann} hold true and $\xi\in(0,1)$. There exist constants $\gamma>0$ and $c_o\in(0,1)$ depending only on the data and the structure of $\partial E$, such that if $\gamma C_2\varrho<\xi\omega$ and \[
|[ u<\mu^-+\xi\omega]\cap Q_{\varrho}\cap E_T|\le c_o(\xi\omega)^{\frac{N+2}2}|Q_{\varrho}\cap E_T|, \] then \[
u\ge\mu^-+\tfrac12\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}\cap E_T. \] \end{lemma} \begin{proof}
The proof runs almost verbatim as in Lemma~\ref{Lm:DG:1}. One needs only to intersect all cylinders and cubes with $E_T$ and take $p=2$. As for the term containing $C_2$, it is absorbed into other terms on the right by imposing $\gamma C_2\varrho<\xi\omega$. Certainly, if $Q_\varrho\subset E_T$, the term containing $C_2$ does not appear and we are back to Lemma~\ref{Lm:DG:int} for $p=2$. \end{proof}
We present the following lemma on expansion of positivity. \begin{lemma}\label{Lm:expansion}
Let the hypotheses in Proposition~\ref{Prop:energy:Neumann} hold true. Suppose that $\mu^+-\tfrac14\omega>0$ and that for some $\alpha\in(0,1)$ and some $t_*\in(t_o-\varrho^2,t_o)$ there holds \[
|[\mu^+ -u(\cdot, t_*)>\tfrac14\omega]\cap K_{\varrho}(x_o)\cap E|\ge\alpha |K_{\varrho}(x_o)\cap E|. \] Then there exist positive constants $\gamma$, $\delta$, $\eta$ and $\kappa$ depending only on the data and the structure of $\partial E$, such that either $\gamma C_2\varrho\ge\omega$ or \[ \mu^+-u(\cdot, t)\ge\eta\alpha^\kappa \omega\quad\text{ a.e. in } K_{\varrho}(x_o)\cap E, \] for all times \[ t_*+\tfrac12\delta\varrho^2\le t\le t_*+\delta\varrho^2, \] provided $t_*+\delta\varrho^2\le t_o$. \end{lemma} \begin{proof} We will employ the energy estimate for sub-solutions in Proposition~\ref{Prop:energy:Neumann}. For this purpose, we restrict the level $k$ in $(\mu^+-\tfrac14\omega,\mu^+)$. In this way the term $\Phi_+$ vanishes as $k>0$, because of the assumption that $\mu^+-\tfrac14\omega>0$, and the energy estimate becomes \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\int_{\{K_R(x_o)\cap E\}\times\{t\}}
\zeta^2 (u-k)_+^2\,\mathrm{d}x+
\iint_{Q_{R,S}\cap E_T}\zeta^2|D(u-k)_+|^2\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}\cap E_T}
\Big[
(u-k)^{2}_+|D\zeta|^2 + (u-k)_+^2|\partial_t\zeta^2|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad
+\int_{\{K_R(x_o)\cap E\}\times \{t_o-S\}} \zeta^2 (u-k)_+^2\,\mathrm{d}x\\
&\quad
+ \gamma C_2^{2}\iint_{Q_{R,S}\cap E_T} \zeta^2\copy\chibox_{[u>k]}\,\mathrm{d}x\mathrm{d}t. \end{align*} Next we introduce the non-negative function $v:=(\mu^+-u)/\omega$ and let $\ell:=(\mu^+-k)/\omega\in(0,\tfrac14)$, such that the above energy estimate is equivalent to \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\int_{\{K_R(x_o)\cap E\}\times\{t\}}
\zeta^2 (v-\ell)_-^2\,\mathrm{d}x+
\iint_{Q_{R,S}\cap E_T}\zeta^2|D(v-\ell)_-|^2\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}\cap E_T}
\Big[
(v-\ell)^{2}_-|D\zeta|^2 + (v-\ell)_-^2|\partial_t\zeta^2|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad
+\int_{\{K_R(x_o)\cap E\}\times \{t_o-S\}} \zeta^2 (v-\ell)_-^2\,\mathrm{d}x\\
&\quad
+ \gamma \frac{C_2^{2}}{\omega^2}\iint_{Q_{R,S}\cap E_T} \zeta^2\copy\chibox_{[v<\ell]}\,\mathrm{d}x\mathrm{d}t. \end{align*} When we work with the pair of cylinders $Q_{\sigma\varrho}\subset Q_{\varrho}$ for some $\sigma\in(0,1)$ and a standard cutoff function $\zeta$ in $Q_\varrho$, the term containing $C_2$ can be absorbed into other terms on the right-hand side via the assumption that $\gamma C_2\varrho<\omega$, which with no loss of generality we may always assume. Now we observe that the above inequalities indicate that the non-negative function $v$ is a member of certain parabolic De Giorgi class. The machinery of De Giorgi can be reproduced near $S_T$; see \cite[Chapter~III, Section~13]{DB}. Taking Remark~\ref{Rmk:6:1} into consideration, an application of \cite[Proposition~6.1]{Liao} would allow us to conclude. \end{proof}
\begin{remark}\label{Rmk:6:2}\upshape Lemma~\ref{Lm:expansion} for $p=2$ parallels Lemma~\ref{Lm:expansion:p} for $p\ge2$, both concerning the expansion of positivity. However, there is a fundamental difference between them. Namely, Lemma~\ref{Lm:expansion:p} presents the expansion of positivity in the context of {\it partial differential equations} (cf.~Lemma~\ref{Lm:sub-solution}), whereas Lemma~\ref{Lm:expansion} rests upon {\it parabolic De Giorgi classes} (cf.~\cite{LSU, Liao, W}), which are much more general, and yet are not understood at the same level as partial differential equations. This is ultimately the reason why the case $p>2$ has been excluded in Lemma~\ref{Lm:expansion}. \end{remark}
\subsection{Proof of Theorem~\ref{Thm:1:2}}
Introduce the cylinder $Q_o=Q_\varrho$ and the numbers $\mu^\pm$ and $\omega$ as in Section~\ref{S:6:1}. As in \eqref{Eq:mu-pm}, we may assume \begin{equation}\label{Eq:mu-pm-1} \mu^+-\tfrac14\omega\ge \tfrac14\omega>0, \end{equation} such that Lemma~\ref{Lm:expansion} is at our disposal.
Let us first suppose that the condition of Lemma~\ref{Lm:DG:Neumann:1} holds true with $\xi=\frac14$, that is, \begin{equation}\label{Eq:N:alt:1}
|[ u<\mu^-+\tfrac14\omega]\cap Q_{\varrho}\cap E_T|\le c_o(\tfrac14\omega)^{c}|Q_{\varrho}\cap E_T|, \end{equation} where we have denoted $(N+2)/2$ by $c$ for simplicity. Then according to Lemma~\ref{Lm:DG:Neumann:1}, we have either $\gamma C_2\varrho\ge\omega$ or \[
u\ge\mu^-+\tfrac18\omega\quad\text{ a.e. in }Q_{\frac12\varrho}\cap E_T, \] which in turn yields a reduction of oscillation \begin{equation}\label{Eq:reduc-osc-1-N} \operatornamewithlimits{ess\,osc}_{Q_{\frac12\varrho}\cap E_T}u\le \tfrac78\omega. \end{equation}
Next, we examine the case when \eqref{Eq:N:alt:1} does not hold, that is, \[
|[ u<\mu^-+\tfrac14\omega]\cap Q_{\varrho}\cap E_T|> c_o(\tfrac14\omega)^{c}|Q_{\varrho}\cap E_T|. \]
Since $\mu^+-\frac14\omega>\mu^-+\frac14\omega$, this gives that \[
|[ \mu^+-u>\tfrac14\omega]\cap Q_{\varrho}\cap E_T|> c_o(\tfrac14\omega)^{c}|Q_{\varrho}\cap E_T|, \] and consequently, it is not hard to see that there exists $$t_*\in\big[t_o-\varrho^2, t_o-\tfrac12c_o(\tfrac14\omega)^{c}\varrho^2\big],$$ such that \[
|[ \mu^+-u(\cdot, t_*)>\tfrac14\omega]\cap K_{\varrho}(x_o)\cap E|> \tfrac12 c_o(\tfrac14\omega)^{c}|K_{\varrho}(x_o)\cap E|. \] Indeed, were the measure of the slice at most $\tfrac12 c_o(\tfrac14\omega)^{c}|K_{\varrho}(x_o)\cap E|$ for a.e.~$t_*$ in the indicated interval, an integration in time over $(t_o-\varrho^2,t_o)$ would contradict the previous display. Using this measure information, thanks to \eqref{Eq:mu-pm-1}, we are allowed to apply Lemma~\ref{Lm:expansion} and obtain that there exist positive constants $\{\gamma,\kappa,\delta,\eta\}$ depending only on the data, such that either $\gamma C_2\varrho\ge\omega$ or \[ \mu^+-u(\cdot, t)\ge\eta\omega^{1+q}\quad\text{ a.e. in } K_{\varrho}(x_o)\cap E, \] for $q=c\kappa$ and for all times \[ t_*+\tfrac12\delta\varrho^2\le t\le t_*+\delta\varrho^2. \] Starting from this pointwise information, finitely many (at most $1/\delta$) further applications of Lemma~\ref{Lm:expansion} with $\alpha=1$ give that, after a proper redefinition of $\eta$, \[ \mu^+-u(\cdot, t)\ge\eta\omega^{1+q}\quad\text{ a.e. in } K_{\varrho}(x_o)\cap E, \] for all times \[ t\in\big[t_o-\tfrac12c_o(\tfrac14\omega)^{c}\varrho^2,t_o\big], \] which in turn gives a reduction of oscillation \begin{equation}\label{Eq:reduc-osc-2-N} \operatornamewithlimits{ess\,osc}_{Q_{\frac12\varrho}(\theta)\cap E_T} u\le \omega(1-\eta\omega^q),\quad\text{ where }\theta=c_o(\tfrac14\omega)^{c}. \end{equation} In order to iterate, we set \[ \omega_1:=\max\Big\{\omega(1-\eta\omega^q), \gamma C_2\varrho\Big\}, \] and select $\varrho_1$ satisfying \[ \varrho_1^2=(\tfrac12\varrho)^2\theta\quad\implies\quad Q_{\varrho_1}\subset Q_{\frac12\varrho}(\theta), \] and hence \eqref{Eq:reduc-osc-1-N} and \eqref{Eq:reduc-osc-2-N} imply that \[ \operatornamewithlimits{ess\,osc}_{Q_{\varrho_1}\cap E_T}u\le \omega_1. \] Repeating in this fashion, we obtain the following construction for $n\ge0$: \begin{equation*} \left\{ \begin{array}{cc} \varrho_o=\varrho,\quad \varrho^2_{n+1}=(\tfrac12\varrho_n)^2\theta_n,\quad\theta_n=c_o(\tfrac14\omega_n)^{c},\\[5pt] \displaystyle\omega_o=\omega,\quad \omega_{n+1}=\max\Big\{\omega_n(1-\eta\omega^q_n),\gamma C_2\varrho_n\Big\},\\[5pt]
Q'_n=Q_{\frac12\varrho_n}(\theta_n),\quad Q_{n}=Q_{\varrho_n},\quad Q_{n+1}\subset Q'_n. \end{array}\right. \end{equation*} By induction, we have for all $n\ge0$ that \[ \operatornamewithlimits{ess\,osc}_{Q_{n}\cap E_T} u\le \omega_n. \] Then the derivation of the modulus of continuity inherent in the above estimate can be performed as in Section~\ref{S:bdry-modulus}.
Finally, we recall that the vertex of $Q_\varrho$ could be any point in $\overline{E}_T$ and therefore the above oscillation decay holds in the interior and up to the lateral boundary. This, together with the continuity up to the initial level presented in Proposition~\ref{Prop:bdry:Initial}, allows us to conclude the proof of Theorem~\ref{Thm:1:2}, just like in Section~\ref{S:Thm-proof} for the Dirichlet problem.
\section{Uniform approximations}\label{S:approx} Existence of weak solutions to the Stefan problem \eqref{Eq:1:1}, given proper initial and boundary conditions, is an issue of independent interest. A standard device in the construction of weak solutions to boundary value problems
consists in first solving regularized versions, deriving {\it a priori} estimates that suggest where to find a solution, and then obtaining a solution in a proper limiting process via the {\it a priori} estimates and compactness arguments. In general, such a limiting process requires the function $\bl{A}(x,t,u, \xi)$ to satisfy more stringent structural conditions than \eqref{Eq:1:2}, such as monotonicity in $\xi$, cf.~\cite[Chapter~V, Theorem~6.7]{LSU}. The so-obtained limit function will be in the function space \[
L^{\infty}\big(0,T;L^2(E)\big)\cap L^p\big(0,T; W^{1,p}(E)\big), \] and meanwhile verifies one of the integral formulations in Section~\ref{S:1:2}, cf.~\cite{Friedman-68, Urbano-97} and \cite[Chapter~V, Section~9]{LSU}. Nevertheless, Theorems~\ref{Thm:1:1} -- \ref{Thm:1:2} do not grant continuity to this kind of solution as it does not possess a time derivative in the Sobolev sense. {\it A priori} estimates on the time derivative are generally unavailable. On the other hand, the importance of continuous weak solutions (temperatures) lies in their physical bearings.
The purpose of the present section is to exhibit that the arguments presented in the previous sections permit us to identify a limit function, which is continuous with the same kind of moduli as in Theorems~\ref{Thm:1:1} -- \ref{Thm:1:2}, with the aid of the Ascoli-Arzel\`a theorem (cf.~\cite[Chapter~5, Section~19]{DB-RA}). This, combined with the existence results described above, will yield continuous weak solutions without any knowledge on the time derivative.
To this end, let $H_\varepsilon(s)$ be the mollification with $\varepsilon\in(0,1)$, by the standard Friedrichs kernel supported in $(-\varepsilon,\varepsilon)$ (cf.~\cite[Chapter~6, Section~18]{DB-RA}), of the function \begin{equation*} H(s):=\left\{ \begin{array}{cl} 0,\quad&s>0,\\[5pt] -\nu,\quad& s\le0. \end{array}\right. \end{equation*} Here $\nu$ is from the definition of $\beta(\cdot)$. Clearly, the function $s\mapsto s+H_\varepsilon(s)$ is an approximation of $\beta(s)$.
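For later use we record the basic properties of this mollification, which follow directly from the definition: \[ H_\varepsilon\in C^\infty(\mathbb{R}),\qquad H_\varepsilon=0\ \text{ on }[\varepsilon,\infty),\qquad H_\varepsilon=-\nu\ \text{ on }(-\infty,-\varepsilon],\qquad H_\varepsilon'\ge0,\qquad \int_{\mathbb{R}}H_\varepsilon'(s)\,\mathrm{d}s=\nu. \] In particular, $s\mapsto s+H_\varepsilon(s)$ is smooth and increasing, with derivative at least $1$.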
Consider the regularized Dirichlet problem: \begin{equation}\label{Dirichlet-reg} \left\{ \begin{aligned} &\partial_t [u+H_\varepsilon(u)] -\operatorname{div}\bl{A}(x,t,u, Du) = 0\quad \text{ weakly in }\> E_T\\
&u(\cdot,t)\Big|_{\partial E}=g(\cdot,t)\quad \text{ a.e. }\ t\in(0,T]\\ &u(\cdot,0)=u_o. \end{aligned} \right. \end{equation} Here the boundary datum $g$ and the initial datum $u_o$ are as in Theorem~\ref{Thm:1:1}. For each fixed $\varepsilon$,
the notion of solution to \eqref{Dirichlet-reg} can be defined via a similar integral identity as in Section~\ref{S:1:4:3}. A main difference now is that the function space for solutions becomes \begin{equation}\label{Eq:func-space-approx} \left\{ \begin{array}{cc} \displaystyle\int_0^u [1+H'_{\varepsilon}(s)]s\,\mathrm{d} s\in C\big(0,T;L^1(E)\big),\\[5pt] u\in L^p\big(0,T; W^{1,p}(E)\big). \end{array}\right. \end{equation} This notion does not require any {\it a priori} knowledge on the time derivative and is similar to the one for \eqref{Eq:p-Laplace} in \cite[Chapter~II]{DB}, cf.~\eqref{Eq:func-space-1}. The following theorem can be viewed as a ``cousin'' of Theorem~\ref{Thm:1:1}.
\begin{theorem}\label{Thm:7:1} Let $\{u_\varepsilon\}$ be a family of weak solutions to the Dirichlet problem \eqref{Dirichlet-reg} under the condition \eqref{Eq:1:2} with $p\ge2$. Assume that \eqref{D}, \eqref{I} and \eqref{geometry} hold.
Then $\{u_\varepsilon\}$ is equibounded by $M:=\max\{\|u_o\|_{\infty,E},\|g\|_{\infty,S_T}\}$ and is equicontinuous in $\overline{ E_T}$. More precisely, there exist positive constants $\gamma$ and $q$ depending only on the data and $\alpha_*$, and a modulus of continuity $\boldsymbol\omega(\cdot)$, determined by the data, $\alpha_*$, $\bar\varrho$, $M$, $\boldsymbol\omega_{o}(\cdot)$ and $\boldsymbol\omega_{g}(\cdot)$, independent of $\varepsilon$, such that
\begin{equation*}
\big|u_{\varepsilon}(x_1,t_1)-u_{\varepsilon}(x_2,t_2)\big|
\le
\boldsymbol\omega\!\left(|x_1-x_2|+|t_1-t_2|^{\frac1p}\right)+\gamma\varepsilon^{\frac1{1+q}},
\end{equation*} for every pair of points $(x_1,t_1), (x_2,t_2)\in \overline{E_T}$. In particular, there exists $\sigma\in(0,1)$ depending only on the data and $\alpha_*$, such that if \[
\boldsymbol\omega_g(r)\le \frac{C_g}{|\ln r|^{\lambda}}\quad\text{ and }\quad\boldsymbol \omega_o(r)\le \frac{C_{u_o}}{|\ln r|^{\lambda}}\quad\text{ for all }r\in(0,\bar\varrho), \] where $C_g,\,C_{u_o}>0$ and $\lambda>\sigma$, then the modulus of continuity is $$\boldsymbol\omega(r)=C \Big(\ln \frac{\bar\varrho}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0, \bar\varrho)$$ with some $C>0$
depending on the data, $\lambda$, $\alpha_*$, $M$, $C_{u_o}$ and $C_{g}$. \end{theorem}
Similarly we may consider a regularized Neumann problem: \begin{equation}\label{Neumann-reg} \left\{ \begin{aligned} &\partial_t [u+H_\varepsilon(u)]-\operatorname{div}\bl{A}(x,t,u, Du) = 0\quad \text{ weakly in }\> E_T\\ &\bl{A}(x,t,u, Du)\cdot {\bf n}=\psi(x,t, u)\quad \text{ a.e. }\ \text{ on }S_T\\ &u(\cdot,0)=u_o(\cdot). \end{aligned} \right. \end{equation} Here the boundary datum $\psi$ and the initial datum $u_o$ are as in Theorem~\ref{Thm:1:2}. The notion of solution to \eqref{Neumann-reg} can be defined via a similar integral identity as in Section~\ref{S:1:4:4}. In particular, the function space in \eqref{Eq:func-space-approx} is used. The following theorem can be regarded as a ``cousin'' of Theorem~\ref{Thm:1:2}.
\begin{theorem}\label{Thm:7:2} Let $\{u_\varepsilon\}$ be a family of weak solutions to the Neumann problem \eqref{Neumann-reg} under the condition \eqref{Eq:1:2} with $p=2$. Assume that $\partial E$ is of class $C^1$, and \eqref{N-data} and \eqref{I} hold.
Then $\{u_\varepsilon\}$ is equibounded by a constant $M$ depending only on
the data, $|E|$, $T$, $C_2$, $\|u_o\|_{\infty,E}$,
and the structure of $\partial E$, and is equicontinuous in $\overline{E_T}$.
More precisely, there exists a modulus of continuity $\boldsymbol\omega(\cdot)$, determined by the data, the structure of $\partial E$, $C_2$, $M$ and $\boldsymbol\omega_{o}(\cdot)$, independent of $\varepsilon$, such that
\begin{equation*}
\big|u_{\varepsilon}(x_1,t_1)-u_{\varepsilon}(x_2,t_2)\big|
\le
\boldsymbol\omega\!\left(|x_1-x_2|+|t_1-t_2|^{\frac12}\right)+4\varepsilon,
\end{equation*} for every pair of points $(x_1,t_1), (x_2,t_2)\in \overline{E_T}$. In particular, there exists $\sigma\in(0,1)$ depending only on the data, $C_2$ and the structure of $\partial E$, such that if \[
\boldsymbol \omega_o(r)\le \frac{C_{u_o}}{|\ln r|^{\lambda}}\quad\text{ for all }r\in(0,1), \] where $C_{u_o}>0$ and $\lambda>\sigma$, then the modulus of continuity is $$\boldsymbol\omega(r)=C \Big(\ln \frac{1}{r}\Big)^{-\frac{\sigma}2}\quad\text{ for all }r\in(0, 1),$$ with some $C>0$
depending on the data, the structure of $\partial E$, $C_2$, $M$ and $C_{u_o}$.
\end{theorem}
\begin{remark}\upshape Theorems~\ref{Thm:7:1} -- \ref{Thm:7:2} lay the foundation of obtaining continuous solutions to the Dirichlet problem \eqref{Dirichlet} or the Neumann problem \eqref{Neumann}, once we can construct solutions $\{u_\varepsilon\}$ to \eqref{Dirichlet-reg} or \eqref{Neumann-reg} and identify the limit function $u$ as a proper weak solution to \eqref{Dirichlet} or \eqref{Neumann}. Clearly, the so-obtained solution $u$ will enjoy the logarithmic type modulus of continuity. \end{remark}
We will treat Theorem~\ref{Thm:7:1} only, as Theorem~\ref{Thm:7:2} can be dealt with in a similar way. We will concentrate on examining the boundary arguments in Section~\ref{S:bdry}. The subscript $\varepsilon$ will be suppressed from $u$, $\mu^\pm$, $\omega$, $\theta$, $\widetilde{\theta}$, etc. The idea is to adapt the arguments in Section~\ref{S:bdry} and to determine the quantities, such as $\bar\xi$, $\xi$, $\eta$, $A$, independent of $\varepsilon$. In this way the reduction of oscillation of $u$ can be achieved just like in Section~\ref{S:bdry}, independent of $\varepsilon$.
Let us first observe that the arguments in Section~\ref{S:bdry}
hinge solely on the energy estimates in Proposition~\ref{Prop:2:1}. Now the use of the test function \[ \zeta^p(x,t) \big(u(x,t)-k\big)_\pm \] against \eqref{Dirichlet-reg}$_1$, with $k$ satisfying \eqref{Eq:k-restriction}, is justified modulo an averaging process in the time variable; this can be done as in \cite[Proposition~3.1]{BDL}. No {\it a priori} knowledge on the time derivative is needed, because now $s\mapsto s+H_\varepsilon(s)$ is a smooth, increasing function.
After standard calculations, the energy estimates for the weak solution $u$ to \eqref{Dirichlet-reg} become, omitting the reference to $x_o$, \begin{equation}\label{Eq:energy-approx} \begin{aligned}
\operatornamewithlimits{ess\,sup}_{t_o-S<t<t_o}&\Big\{\int_{K_R\times\{t\}}
\zeta^p (u-k)_\pm^2\,\mathrm{d}x \pm \int_{K_R\times\{t\}}\int_k^u H_\varepsilon'(s)(s-k)_\pm\,\mathrm{d} s\,\zeta^p\mathrm{d}x \Big\}\\
&\quad+
\iint_{Q_{R,S}}\zeta^p|D(u-k)_\pm|^p\,\mathrm{d}x\mathrm{d}t\\
&\le
\gamma\iint_{Q_{R,S}}
\Big[
(u-k)^{p}_\pm|D\zeta|^p + (u-k)_\pm^2|\partial_t\zeta^p|
\Big]
\,\mathrm{d}x\mathrm{d}t\\
&\quad\pm\iint_{Q_{R,S}}\int_k^u H_\varepsilon'(s)(s-k)_\pm\,\mathrm{d} s |\partial_t\zeta^p|\, \mathrm{d}x\mathrm{d}t\\
&\quad
+\int_{K_R\times \{t_o-S\}} \zeta^p (u-k)_\pm^2\,\mathrm{d}x\\
&\quad \pm \int_{K_R\times\{t_o-S\}}\int_k^u H_\varepsilon'(s)(s-k)_\pm\,\mathrm{d} s\,\zeta^p\mathrm{d}x. \end{aligned} \end{equation}
The three terms containing $H'_\varepsilon$ here play the role of $\Phi_{\pm}$ in Proposition~\ref{Prop:2:1}, which preserve the singularity of $\beta(\cdot)$ at the origin as $\varepsilon\to0$.
Let us consider the case of super-solution, i.e. $(u-k)_-$. The term containing $H'_\varepsilon$ (together with the minus sign in front) on the left-hand side is non-negative and hence can be discarded. The first term containing $H'_\varepsilon$ on the right-hand side is estimated by \[
\iint_{Q_{R,S}}\int_u^k H_\varepsilon'(s)(s-k)_-\,\mathrm{d} s |\partial_t\zeta^p|\, \mathrm{d}x\mathrm{d}t
\le \nu\iint_{Q_{R,S}}(u-k)_- |\partial_t\zeta^p|\,\mathrm{d}x\mathrm{d}t. \] The second term containing $H'_\varepsilon$ on the right-hand side is discarded because now $\zeta=0$ on $\partial_{\mathcal{P}}Q_{R,S}$. Using these remarks, one can perform the De Giorgi iteration in Lemma~\ref{Lm:DG:1} and reach the same conclusion.
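(Let us note the two elementary facts behind this estimate: $(s-k)_-\le(u-k)_-$ whenever $s$ lies between $u$ and $k$, and $\int_u^k H_\varepsilon'(s)\,\mathrm{d} s\le \nu$, the latter being how we read the normalization of the regularization $H_\varepsilon$, namely that its total increase does not exceed $\nu$.)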
As for Lemma~\ref{Lm:DG:2}, now the condition becomes $|\mu^-|\le\xi\omega$ and, letting $k=\mu^-+\xi\omega$, \begin{equation}\label{Eq:measure-2-approx} \iint_{Q_{\varrho}(\theta)}\int_u^k H_\varepsilon'(s) \,\mathrm{d} s\, \mathrm{d}x\mathrm{d}t
\le\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|. \end{equation} Then the same conclusion holds as in Lemma~\ref{Lm:DG:2}.
With the modified versions of Lemma~\ref{Lm:DG:1} and Lemma~\ref{Lm:DG:2} at hand,
we can start the arguments as in Section~\ref{S:bdry}.
Notice also that Proposition~\ref{Prop:2:2} now holds for $k\ge\varepsilon$
in the case of sub-solution and for $k\le-\varepsilon$ in the case of super-solution.
If \eqref{Eq:alternative}$_1$ holds, then thanks to \eqref{Eq:mu-pm}$_1$, the function $(u-k)_+$ with $k=\mu^+-2^{-n}\omega$ will satisfy the energy estimate in Proposition~\ref{Prop:2:2} for all $n\ge2$, provided we assume that $\frac14\omega\ge\varepsilon$. In this case, the reduction of oscillation can be achieved as in Section~\ref{S:reduc-osc-4}, cf.~Lemma~\ref{Lm:reduc-osc-1}. Consequently we need only examine the case when \eqref{Eq:alternative}$_2$ holds. This case splits into two sub-cases that parallel Sections~\ref{S:reduc-osc-1} -- \ref{S:reduc-osc-4}, which we now examine.
The conclusion \eqref{Eq:reduc-osc-1} is reached as in Section~\ref{S:reduc-osc-1}.
The only change is that we need to assume \eqref{Eq:measure-2-approx} instead of \eqref{Eq:measure-2}.
Let us examine Section~\ref{S:reduc-osc-2}. A key change appears there, as \eqref{Eq:opposite:2} becomes the opposite of \eqref{Eq:measure-2-approx}, that is, letting $k=\mu^-+\xi\omega$, \[ \iint_{Q_{\varrho}(\theta)}\int_u^k H_\varepsilon'(s) \,\mathrm{d} s\, \mathrm{d}x\mathrm{d}t
>\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|. \] This, combined with \eqref{Eq:opposite:1}, yields a variant of \eqref{Eq:opposite:3}, that is, for all $r\in[2\varrho,8\varrho]$ we have \begin{align*} \iint_{Q_{r}(\theta)}\int_u^k H_\varepsilon'(s)\,\mathrm{d} s\, \mathrm{d}x\mathrm{d}t
&>\xi\omega|[u\le \mu^-+\tfrac12\xi\omega]\cap Q_{\frac12\varrho}(\theta)|\\
&\ge c_o(\xi\omega)^{b}|Q_{\frac12\varrho}(\theta)|\ge\widetilde{\gamma}(\xi\omega)^{b}|Q_{r}(\theta)|, \end{align*} where just like in \eqref{Eq:opposite:3} we have set $\widetilde{\gamma}=c_o16^{-N-p}$ and $b=1+\tfrac{N+p}p$. Using \eqref{Eq:time:2}, this in turn gives \begin{align*} &\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta}r^p<t<t_o}\int_{K_{r}}\int_u^k H_\varepsilon'(s)(s-k)_-\,\mathrm{d} s\, \mathrm{d}x\\ &\qquad\ge(k-\varepsilon)\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta}r^p<t<t_o}\int_{K_{r}}\int_u^k H_\varepsilon'(s)\,\mathrm{d} s\, \mathrm{d}x\\
&\qquad\ge\widetilde{\gamma}(k-\varepsilon)(\xi\omega)^{b}|K_{r}|\\ &\qquad\ge \widetilde{\gamma}(k-\varepsilon)(\xi\omega)^{b} (\bar\delta\xi\omega)^{-2}\operatornamewithlimits{ess\,sup}_{t_o-\bar{\theta} r^p<t<t_o}\int_{K_r\times\{t\}}[u-(\mu^-+\bar\delta\xi\omega)]_-^2\,\mathrm{d}x. \end{align*} Consequently, Lemma~\ref{Lm:energy} holds true in view of the energy estimate \eqref{Eq:energy-approx}, and Lemmas~\ref{Lm:shrink:1} -- \ref{Lm:DG:3} can be reproduced, once the condition $\varepsilon\le\frac14\delta\xi\omega\equiv \frac14\xi\bar\xi\omega^{1+q}$ is imposed;
otherwise, it just gives us an extra control $\omega\le \gamma \varepsilon^{\frac1{1+q}}$ for some positive $\gamma(\text{data},\alpha_*)$. The rest of the arguments in Sections~\ref{S:reduc-osc-3} -- \ref{S:bdry-modulus} remains unchanged.
The reduction of interior oscillation can be reproduced as in Section~\ref{S:interior}. Lemmas~\ref{Lm:DG:int} -- \ref{Lm:DG:initial:1} are unchanged. Lemma~\ref{Lm:expansion:p} still holds if we impose $\mu^+-\frac14\omega>\varepsilon$. This is not a problem, as we may assume that $\mu^+-\frac14\omega\ge\frac14\omega>\varepsilon$, cf.~\eqref{Eq:mu-pm};
otherwise, it just gives us an extra control $\omega\le 4\varepsilon$.
The derivation of the modulus of continuity runs similarly to Section~\ref{S:bdry-modulus};
here it is affected only by an extra control of order $\varepsilon$ or $\varepsilon^{\frac1{1+q}}$. Therefore we may conclude the proof of Theorem~\ref{Thm:7:1}.
\appendix
\section{Boundary regularity for the parabolic $p$-Laplacian}\label{A:1}
Let $\widetilde{Q}_o$ be as in Section~\ref{S:bdry} with its vertex $(x_o,t_o)\in S_T$ and $t_o\ge\varrho^{p-1}$. Define $\mu^{\pm}$ to be the supremum/infimum of $u$ over $\widetilde{Q}_o\cap E_T$. Introduce parameters $a\in(0,1)$, $\omega>0$ and \begin{equation}\label{Eq:level:A} k=\left\{ \begin{array}{ll} \displaystyle\mu^+- a \omega\ge \sup_{\widetilde{Q}_o\cap S_T}g,&\quad\text{ for }(u-k)_+,\\[5pt] \displaystyle\mu^-+ a \omega\le \inf_{\widetilde{Q}_o\cap S_T}g,&\quad\text{ for }(u-k)_-. \end{array}\right. \end{equation} We will consider a cylinder $Q_r(\theta)$ with its vertex $(x_o,t_o)$ and $\theta=(\xi \omega)^{2-p}$, for some $\xi\in(0,1)$ to be determined later. We assume that
$Q_r(\theta)\subset \widetilde{Q}_o$ for all $r\in[\frac12\varrho, 2\varrho]$. Now we state the boundary regularity
for the parabolic $p$-Laplacian with $p\ge2$, which was sketched in \cite[Chapter~III, Section~12]{DB}. \begin{proposition}\label{Prop:A:1}
Let $\bar\theta=(a\omega)^{2-p}$ for $a\in(\xi,1)$ and let $\zeta$ be a piecewise smooth function in $Q_{r}(\bar\theta)$ that vanishes on $\partial_{\mathcal{P}}Q_{r}(\bar\theta)$. Suppose that $u$ verifies the following energy inequalities in $Q_{r}(\bar\theta)$ for some positive $\widetilde{C}$, \begin{align*}
\operatornamewithlimits{ess\,sup}_{t_o-\bar\theta r^p<t<t_o}&\int_{K_r(x_o)\times\{t\}}
\zeta^p (u-k)_\pm^2\,\mathrm{d}x+
\iint_{Q_{r}(\bar\theta)}\zeta^p|D(u-k)_\pm|^p\,\mathrm{d}x\mathrm{d}t\\
&\le
\widetilde{C}\iint_{Q_{r}(\bar\theta)}
\Big[
(u-k)^{p}_\pm|D\zeta|^p + (u-k)_\pm^2|\partial_t\zeta^p|
\Big]
\,\mathrm{d}x\mathrm{d}t,
\end{align*} with the choice of $k$ in \eqref{Eq:level:A} for all $a\in(\xi,1)$ and all $r\in[\frac12\varrho, 2\varrho]$. Then there exists $\xi\in(0,1)$ determined by $\{\widetilde{C}, N, p,\alpha_*\}$, such that \[ \pm(\mu^{\pm}-u)\ge\tfrac12\xi\omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta)\quad\text{ where }\theta=(\xi \omega)^{2-p}.
\] \end{proposition}
Proposition~\ref{Prop:A:1} is a direct consequence of the following two lemmas. \begin{lemma} For $j_*\in\mathbb{N}$, let $\theta=(2^{-j_*}\omega)^{2-p}$. There exists $\gamma>0$ depending only on $\{\widetilde{C}, N, p,\alpha_*\}$, such that \[
\Big| \Big[ \pm(\mu^{\pm}-u)\le\frac{\omega}{2^{j_*}}\Big]\cap Q_{\varrho}(\theta)\Big|\le\frac{\gamma}{j_*^{\frac{p-1}p}}|Q_{\varrho}(\theta)|. \] \end{lemma} \begin{proof} We only treat the case of $u-\mu^-$ here as the other case is analogous. The proof runs similarly to that of Lemma~\ref{Lm:shrink:1}. We define $k_j=\mu^-+2^{-j}\omega$ for $j=0,1,\cdots, j_*$, and work within $Q_{2\varrho}(\theta)$. After using a standard cutoff function $\zeta$ in $Q_{2\varrho}(\theta)$ that equals $1$ in $Q_{\varrho}(\theta)$ and vanishes on $\partial_{\mathcal{P}}Q_{2\varrho}(\theta)$, the energy estimate in Proposition~\ref{Prop:A:1} gives that \begin{equation*} \begin{aligned}
\iint_{Q_\varrho(\theta)}|D(u-k_j)_-|^p\,\mathrm{d}x\mathrm{d}t&\le\frac{\gamma}{\varrho^p}\bigg(\frac{\omega}{2^j}\bigg)^p\bigg[1+\frac1{\theta}\bigg(\frac{\omega}{2^j}\bigg)^{2-p}\bigg] |A_{j,2\varrho}|\\
&\le\frac{\gamma}{\varrho^p}\bigg(\frac{\omega}{2^j}\bigg)^p|A_{j,2\varrho}|, \end{aligned} \end{equation*} where $ A_{j,2\varrho}:= \big[u<k_{j}\big]\cap Q_{2\varrho}(\theta)$; the second inequality holds because $\frac1{\theta}\big(\frac{\omega}{2^j}\big)^{2-p}=2^{(j-j_*)(p-2)}\le1$ for $j\le j_*$ and $p\ge2$, so the bracket is bounded by $2$ and is absorbed into $\gamma$. The rest of the proof can be carried out as in the proof of Lemma~\ref{Lm:shrink:1} almost verbatim. Clearly, we do not need to iterate $m$ times here. \end{proof}
\begin{lemma} For $\xi\in(0,1)$, let $\theta=(\xi \omega)^{2-p}$. There exists $c_o\in(0,1)$ depending only on $\{\widetilde{C}, N, p\}$, such that if \[
\big| \big[ \pm(\mu^{\pm}-u)\le \xi \omega \big]\cap Q_{\varrho}(\theta) \big|\le c_o |Q_{\varrho}(\theta)|, \] then \[ \pm(\mu^{\pm}-u)\ge\tfrac12\xi \omega\quad\text{ a.e. in }Q_{\frac12\varrho}(\theta).
\] \end{lemma} \begin{proof} We only treat the case of $u-\mu^-$ here as the other case is analogous. The argument runs similarly to that of Lemma~\ref{Lm:DG:1}. We may first define the various quantities, cubes and cylinders as in \eqref{choices:k_n}. The difference is that the energy estimate now, according to Proposition~\ref{Prop:A:1}, becomes \begin{align*}
\operatornamewithlimits{ess\,sup}_{-\theta\tilde{\varrho}_n^p<t<0}
&\int_{\widetilde{K}_n} (u-\tilde{k}_n)_-^2\,\mathrm{d}x
+
\iint_{\widetilde{Q}_n}|D(u-\tilde{k}_n)_-|^p \,\mathrm{d}x\mathrm{d}t\\
&\qquad\le
\gamma \frac{2^{pn}}{\varrho^p}(\xi\omega)^{p}|A_n|, \end{align*} where $A_n=\big[u<k_n\big]\cap Q_n$. After an application of the above energy estimate, the H\"older inequality and the Sobolev imbedding
\cite[Chapter I, Proposition~3.1]{DB} as in Lemma~\ref{Lm:DG:1}, the recurrence of $Y_n=|A_n|/|Q_n|$ becomes \begin{equation*}
Y_{n+1}
\le\gamma C^n Y_n^{1+\frac{1}{N+2}}, \end{equation*} where $\gamma$ and $C$ depend only on $\{\widetilde{C}, N, p\}$. As in Lemma~\ref{Lm:DG:1} we may conclude. \end{proof}
\end{document}
\begin{document}
\title[Extended Bloch group and the Chern-Simons class]{Extended Bloch group and the Chern-Simons class\\(Incomplete Working version)} \author{Walter D. Neumann} \address{Department of Mathematics\\The University of Melbourne\\Carlton, Vic 3052\\Australia} \email{[email protected]} \subjclass{57M99; 19E99, 19F27}
\begin{abstract} We define an extended Bloch group and show it is isomorphic to $H_3(\operatorname{PSL}(2,\C)^\delta;\Z)$. Using the Rogers dilogarithm function this leads to an exact simplicial formula for the universal Cheeger-Simons class on this homology group. It also leads to an independent proof of the analytic relationship between volume and Chern-Simons invariant of hyperbolic manifolds conjectured in \cite{neumann-zagier} and proved in \cite{yoshida}, as well as an effective formula for the Chern-Simons invariant of a hyperbolic manifold. \end{abstract} \maketitle
\section{Introduction} There are several variations of the definition of the Bloch group in the literature; by \cite{dupont-sah} they differ at most by torsion and they agree with each other for algebraically closed fields. In this paper we shall use the following.
\begin{definition}\label{def-bloch} Let $k$ be a field. The \emph{pre-Bloch group $\prebloch(k)$} is the quotient of the free $\Z$-module $\Z (k-\{0,1\})$ by all instances of the following relation: \begin{equation} [x]-[y]+[\frac yx]-[\frac{1-x^{-1}}{1-y^{-1}}]+[\frac{1-x}{1-y}]=0, \label{5term} \end{equation} This relation is usually called the \emph{five term relation}. The \emph{Bloch group $\bloch(k)$} is the kernel of the map \begin{equation*} \prebloch(k)\to k^*\wedge_\Z k^*,\quad [z]\mapsto 2\bigl(z\wedge(1-z)\bigr). \end{equation*} \end{definition}
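To illustrate (\ref{5term}) with a concrete instance: taking $x=\frac23$ and $y=\frac13$ gives $\frac yx=\frac12$, $\frac{1-x^{-1}}{1-y^{-1}}=\frac14$ and $\frac{1-x}{1-y}=\frac12$, so the five term relation specializes to \begin{equation*} [\tfrac23]-[\tfrac13]+2[\tfrac12]-[\tfrac14]=0 \end{equation*} in $\prebloch(\Q)$.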
(In \cite{neumann-yang} the additional relations \begin{equation*} [x]=[1-\frac 1x]=[\frac 1{1-x}]=-[\frac1x]=-[\frac{x-1}x]=-[1-x] \label{invsim} \end{equation*} were used. These follow from the five term relation when $k$ is algebraically closed, as shown by Dupont and Sah \cite{dupont-sah}. Dupont and Sah use a different five term relation but it is conjugate to the one used here by $z\mapsto\frac1z$.)
There is an exact sequence due to Bloch and Wigner: \begin{equation*} 0\to \Q/\Z\to H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to \bloch(\C)\to 0. \end{equation*} The superscript $\delta$ means ``with discrete topology.'' \renewcommand\discrete{} We will omit it from now on.
$\bloch(\C)$ is known to be uniquely divisible, so it has canonically the structure of a $\Q$-vector space (Suslin \cite{suslin}). Its $\Q$-dimension is infinite and conjectured to be countable (the ``Rigidity Conjecture,'' equivalent to the conjecture that $\bloch(\C)=\bloch(\overline\Q)$, where $\overline\Q$ is the field of algebraic numbers). In particular, the $\Q/\Z$ in the Bloch-Wigner exact sequence is precisely the torsion of $H_3(\operatorname{PSL}(2,\C);\Z)$, so any finite torsion subgroup is cyclic.
In the present paper we define an \emph{extended Bloch group} $\eebloch(\C)$ by replacing $\C-\{0,1\}$ in the definition of $\bloch(\C)$ by its universal abelian cover $\abc$ and appropriately lifting the five term relation (\ref{5term}). Our main results are that we can lift the Bloch-Wigner map $H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to \bloch(\C)$ to an isomorphism \begin{equation*} \themap\colon H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to \eebloch(\C). \end{equation*} Moreover, the ``Rogers dilogarithm function'' (see below) gives a natural map \begin{equation*}R\colon\eebloch(\C)\to\C/2\pi^2\Z.\end{equation*} We show that the composition \begin{equation*}R\circ\themap\colon H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to\C/2\pi^2\Z\end{equation*} is the Cheeger-Simons class (cf \cite{cheeger-simons}), so it can also be described as $i(\operatorname{vol} + i \operatorname{cs})$, where $\operatorname{cs}$ is the universal Chern-Simons class. It has been a longstanding problem to provide such a computation of the Chern-Simons class. Dupont in \cite{dupont1} gave an answer modulo $\pi^2\Q$ and our computation is a natural lift of his.
Another consequence of our result is that any complete hyperbolic 3-manifold $M$ of finite volume has a natural ``fundamental class'' in $H_3(\operatorname{PSL}(2,\C)\discrete;\Z)/C_2$, where $C_2$ is the (unique) order 2 subgroup. For compact $M$ the existence of this class, even without the $C_2$ ambiguity, is easy and well known: $M=\H^3/\Gamma$ is a $K(\Gamma,1)$-space, so the inclusion $\Gamma\to \operatorname{Isom}^+(\H^3) = \operatorname{PSL}(2,\C)$ induces $H_3(M)=H_3(\Gamma)\to H_3(\operatorname{PSL}(2,\C))$, and the class in question is the image of the fundamental class $[M]\in H_3(M)$. For non-compact $M$ the existence of such a class is somewhat surprising, although it was already strongly suggested by earlier results.
We can describe this fundamental class nicely in terms of an ideal triangulation of $M$. However, this ideal triangulation has to be a ``true'' ideal triangulation rather than the less restrictive ``degree 1'' ideal triangulations used in \cite{neumann-yang}. The ideal triangulations resulting from Dehn filling that are used by the programs Snappea and Snap \cite{snap} are not true ideal triangulations. Nevertheless, we can describe the fundamental class in terms of these ``Dehn filling triangulations.'' This leads also to an exact simplicial formula for the Chern-Simons invariant of a hyperbolic 3-manifold, refining the formula of \cite{neumann}.
We work initially with a different version of the extended Bloch group, based on a disconnected $\Z\times\Z$ cover of $\C-\{0,1\}$. This group, which we call $\ebloch(\C)$ is a quotient of $\eebloch(\C)$ by a subgroup of order $2$.
\noindent{\bf Acknowledgements.} The definition of the extended Bloch group was suggested by an idea of Jun Yang, to whom I am grateful also for many useful conversations. In particular, he informs me that this work can be interpreted as giving a motivic complex for $K_3(\C)$. The main results of this paper were announced in \cite{neumann-hilbert}. This research is supported by the Australian Research Council.
\section{The preliminary version of extended Bloch group}
We shall need a $\Z\times\Z$ cover $\Cover$ of $\C-\{0,1\}$ which can be constructed as follows. Let $P$ be $\C-\{0,1\}$ split along the rays $(-\infty,0)$ and $(1,\infty)$. Thus each real number $r$ outside the interval $[0,1]$ occurs twice in $P$, once in the upper half plane of $\C$ and once in the lower half plane of $\C$. We denote these two occurrences of $r$ by $r+0i$ and $r-0i$. We construct $\Cover $ as an identification space from $P\times\Z\times\Z$ by identifying \begin{equation*} \begin{aligned} (x+0i, p,q)&\sim (x-0i,p+2,q)\quad\hbox{for each }x\in(-\infty,0)\\ (x+0i, p,q)&\sim (x-0i,p,q+2)\quad\hbox{for each }x\in(1,\infty).\\ \end{aligned} \end{equation*} We will denote the equivalence class of $(z,p,q)$ by $(z;p,q)$. $\Cover $ has four components: \begin{equation*} \Cover =X_{00}\cup X_{01}\cup X_{10}\cup X_{11} \end{equation*} where $X_{\epsilon_0\epsilon_1}$ is the set of $(z;p,q)\in \Cover $ with $p\equiv\epsilon_0$ and $q\equiv\epsilon_1$ (mod $2$).
We may think of $X_{00}$ as the Riemann surface for the function $\C-\{0,1\}\to\C^2$ defined by $z\mapsto \bigl(\log z, -\log (1-z)\bigr)$. Taking the branch $(\log z + 2p\pi i, -\log (1-z) + 2q\pi i)$ of this function on the portion $P\times\{(2p,2q)\}$ of $X_{00}$ for each $p,q\in\Z$ defines an analytic function from $X_{00}$ to $\C^2$. In the same way, we may think of $\Cover $ as the Riemann surface for the collection of all branches of the functions $(\log z + p\pi i, -\log (1-z) + q\pi i)$ on $\C-\{0,1\}$.
\medbreak Consider the set \begin{equation*} \FT:=\biggl\{\biggl(x,y,\frac yx,\frac{1-x^{-1}}{1-y^{-1}}, \frac{1-x}{1-y}\biggr):x\ne y, x,y\in\C-\{0,1\}\biggr\}\subset(\C-\{0,1\})^5 \end{equation*} of 5-tuples involved in the five term relation (\ref{5term}). An elementary computation shows:
\begin{lemma} The subset $\FT^+$ of $(x_0,\dots,x_4)\in \FT$ with each $x_i$ in the upper half plane of $\C$ is the set of elements of $\FT$ for which $y$ is in the upper half plane of $\C$ and $x$ is inside the triangle with vertices $0,1,y$. Thus $\FT^+$ is connected (even contractible). \qed\end{lemma}
\begin{definition}\label{liftFT} Let $V\subset(\Z\times\Z)^5$ be the subspace \begin{multline*} V:=\{\bigl((p_0,q_0),(p_1,q_1),(p_1-p_0,q_2), (p_1-p_0+q_1-q_0,q_2-q_1),\\ (q_1-q_0,q_2-q_1-p_0)\bigr):p_0,p_1,q_0,q_1,q_2\in\Z\}. \end{multline*} Let $\liftFT_0$ denote the unique component of the inverse image of $\FT$ in $\Cover^5$ which includes the points $\bigl((x_0;0,0),\ldots,(x_4;0,0)\bigr)$ with $(x_0,\ldots,x_4)\in \FT^+$, and define \begin{equation*} \liftFT:=\liftFT_0+V=\{ {\mathbf x}+{\mathbf v}:{\mathbf x}\in \liftFT_0\text{ and }{\mathbf v}\in V\}, \end{equation*} where we are using addition to denote the action of $(\Z\times\Z)^5$ by covering transformations on $\Cover^5$. (Although we do not need it, one can show that the action of $2V$ takes $\liftFT_0$ to itself, so $\liftFT$ has $2^5$ components, determined by the parities of $p_0,p_1,q_0,q_1,q_2$.)
Define $\eprebloch(\C)$ as the free $\Z$-module on $\Cover$ factored by all instances of the relations: \begin{equation} \sum_{i=0}^4(-1)^i(x_i;p_i,q_i)=0\label{five term}\quad \text {with $\bigl((x_0;p_0,q_0),\dots,(x_4;p_4,q_4)\bigr)\in\liftFT$} \end{equation} and \begin{equation} (x;p,q)+(x;p',q')=(x;p,q')+(x;p',q)\label{transfer} \quad\text{with $p,q,p',q'\in\Z$} \end{equation} We shall denote the class of $(z;p,q)$ in $\eprebloch(\C)$ by $[z,p,q]$. \end{definition}
We call relation (\ref{five term}) the \emph{lifted five term relation}. We shall see that its precise form arises naturally in several contexts. In particular, we give it a geometric interpretation in Sect.~\ref{simplex parameters}.
We call relation (\ref{transfer}) the \emph{transfer relation}. It is almost a consequence of the lifted five term relation, since we shall see that the effect of omitting it would be to replace $\eprebloch(\C)$ by $\eprebloch(\C)\times\Z/2$, with $\Z/2$ generated by an element $\kappa:=[x,1,1]+[x,0,0]-[x,1,0]-[x,0,1]$ which is independent of $x$.
\begin{lemma} There is a well-defined homomorphism \begin{equation*}\nu\colon\eprebloch(\C)\to \C\wedge_\Z\C \end{equation*} defined on generators by $[z,p,q]\mapsto (\log z + p\pi i)\wedge (-\log(1-z) +q\pi i)$. \end{lemma} \begin{proof} We must verify that $\nu$ vanishes on the relations that define $\eprebloch(\C)$. This is trivial for the transfer relation (\ref{transfer}). We shall show that the lifted five term relation is the most general lift of the five term relation (\ref{5term}) for which $\nu$ vanishes. If one applies $\nu$ to an element $\sum_{i=0}^4(-1)^i[x_i,p_i,q_i]$ with $(x_0,\ldots,x_4)=(x,y,\ldots)\in \FT^+$ one obtains after simplification: \begin{multline*} \bigl((q_0-p_2-q_2+p_3+q_3)\log x +(p_0-q_3+q_4)\log(1-x)+(-q_1+q_2-q_3)\log y +{}\\ {}+ (-p_1+p_3+q_3-p_4-q_4)\log(1-y)+(p_2-p_3+p_4)\log(x-y)\bigr)\wedge \pi i. \end{multline*} An elementary linear algebra computation shows that this vanishes identically if and only if $p_2=p_1-p_0$, $p_3=p_1-p_0+q_1-q_0$, $q_3=q_2-q_1$, $p_4=q_1-q_0$, and $q_4 = q_2-q_1-p_0$, as in the lifted five term relation. The vanishing of $\nu$ for the general lifted five term relation now follows by analytic continuation. \end{proof} \begin{definition} Define $\ebloch(\C)$ as the kernel of $\nu\colon\eprebloch(\C)\to\C\wedge\C$. \end{definition}
Define \begin{equation*} R(z;p,q)={\calR}(z)+\frac{\pi i}2(p\log(1-z)+q\log z) -\frac{\pi^2}6 \end{equation*} where $\calR$ is the Rogers dilogarithm function \begin{equation*} {\calR}(z)=\frac12\log(z)\log(1-z)-\int_0^z\frac{\log(1-t)}tdt.\end{equation*} Then
\begin{proposition}\label{R} $R$ gives a well defined map $R\colon \Cover\to \C/\pi^2\Z$. The relations which define $\eprebloch(\C)$ are functional equations for $R$ modulo $\pi^2$ (the lifted five term relation is in fact the most general lift of the five term relation (\ref{5term}) with this property). Thus $R$ also gives a homomorphism $R\colon\eprebloch(\C)\to\C/\pi^2\Z$. \end{proposition}
\begin{proof} If one follows a closed path from $z$ that goes anti-clockwise around the origin it is easily verified that $R(z;p,q)$ is replaced by $R(z;p,q)+\pi i \log(1-z) -q\pi^2=R[z,p+2,q]-q\pi^2$. Similarly, following a closed path clockwise around $1$ replaces $R(z;p,q)$ by $R(z;p,q+2)+p\pi^2$. Thus $R$ modulo $\pi^2$ is well defined on $\Cover$ (in fact $R$ itself is well defined on a $\Z$ cover of $\Cover$ which is a nilpotent cover of $\C-\{0,1\}$).
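(To spell out the first of these continuations: following such a path replaces $\log z$ by $\log z+2\pi i$ and leaves $\log(1-z)$ and the integral term of $\calR$ unchanged, the integrand $\frac{\log(1-t)}t$ being regular at $t=0$; hence \begin{equation*} \tfrac12\log z\log(1-z)\ \text{ changes by }\ \pi i\log(1-z) \qquad\text{and}\qquad \tfrac{\pi i}2\,q\log z\ \text{ changes by }\ \tfrac{\pi i}2\,q\cdot2\pi i=-q\pi^2, \end{equation*} which is the stated total change.)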
It is well known that $L(z):=\calR(z)-\frac{\pi^2}6$ satisfies the functional equation \begin{equation*} L(x)-L(y)+L\bigl(\frac yx\bigr)-L\bigl(\frac{1-x^{-1}}{1-y^{-1}}\bigr)+ L\bigl(\frac{1-x}{1-y}\bigr)=0\end{equation*} for $0<y<x<1$. Since the 5-tuples involved in this equation are on the boundary of $\FT^+$, the functional equation \begin{equation*}\sum(-1)^iR(x_i;0,0)=0\end{equation*} is valid by analytic continuation on the whole of $\FT^+$. Now \begin{equation*}\sum(-1)^iR(x_i;p_i,q_i)\end{equation*} differs from this by \begin{equation*}\frac{\pi i}2\sum(-1)^i(p_i\log(1-x_i)+q_i\log x_i) \end{equation*} and it is an elementary calculation to verify that this vanishes identically for $(x_0,\ldots,x_4)\in \FT^+$ if and only if the $p_i$ and $q_i$ are as in the lifted five term relation. Thus the lifted five-term relation gives a functional equation for $R$ when $(x_0,\ldots,x_4)\in \FT^+$. By analytic continuation, it is a functional equation for $R$ mod $\pi^2$ in general. The transfer relation is trivially a functional equation for $R$. \end{proof}
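As a simple consistency check on the normalizations, we record one explicit value; it uses only the classical evaluation $\operatorname{Li}_2(\tfrac12)=-\int_0^{1/2}\frac{\log(1-t)}t\,dt=\frac{\pi^2}{12}-\frac12\log^22$: \begin{equation*} \calR(\tfrac12)=\tfrac12\log^22+\Bigl(\frac{\pi^2}{12}-\tfrac12\log^22\Bigr)=\frac{\pi^2}{12}, \qquad\text{so}\qquad R(\tfrac12;0,0)=-\frac{\pi^2}{12}\in\C/\pi^2\Z. \end{equation*}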
The first version of our main result is \begin{theorem} There exists an epimorphism $\themap\colon H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to\ebloch(\C)$ with kernel of order 2 such that the composition $R\circ\themap\colon H_3(\operatorname{PSL}(2,\C)\discrete;\Z)\to\C/\pi^2\Z$ is the characteristic class given by $i(\operatorname{vol} + i \operatorname{cs})$. \end{theorem}
We shall later modify the definition of $\ebloch(\C)$ to eliminate the $\Z/2$ kernel. To describe the map $\themap$ we must give a geometric interpretation of $\Cover$.
\section{Parameters for ideal hyperbolic simplices}\label{simplex parameters}
In this section we shall interpret $\Cover$ as a space of parameters for what we call ``combinatorial flattenings'' of ideal hyperbolic simplices. We need this to define the above map $\themap$. It also gives a geometric interpretation of the lifted five term relation.
We shall denote the standard compactification of $\H^3$ by $\overline \H^3 = \H^3\cup\CP^1$. An ideal simplex $\Delta$ with vertices $z_1,z_2,z_3,z_4\in\CP^1$ is determined up to congruence by the cross ratio \begin{equation*} z=[z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]=\frac{(z_3-z_2)(z_4-z_1)}{(z_3-z_1)(z_4-z_2)}. \end{equation*} Permuting the vertices by an even (i.e., orientation preserving) permutation replaces $z$ by one of \begin{equation*} z,\quad z'=\frac 1{1-z}, \quad\text{or}\quad z''=1-\frac 1z. \end{equation*} The parameter $z$ lies in the upper half plane of $\C$ if the orientation induced by the given ordering of the vertices agrees with the orientation of $\H^3$. But we allow simplices whose vertex ordering does not agree with their orientation. We also allow degenerate ideal simplices whose vertices lie in one plane, so the parameter $z$ is real. However, we always require that the vertices are distinct. Thus the parameter $z$ of the simplex lies in $\C-\{0,1\}$ and every such $z$ corresponds to an ideal simplex.
There is another way of describing the cross-ratio parameter $z= [z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]$ of a simplex. The group of orientation preserving isometries of $\H^3$ fixing the points $z_1$ and $z_2$ is isomorphic to $\C^*$ and the element of this $\C^*$ that takes $z_4$ to $z_3$ is $z$. Thus the cross-ratio parameter $z$ is associated with the edge $z_1z_2$ of the simplex. The parameters associated in this way with the other two edges $z_1z_4$ and $z_1z_3$ out of $z_1$ are $z'$ and $z''$ respectively, while the edges $z_3z_4$, $z_2z_3$, and $z_2z_4$ have the same parameters $z$, $z'$, and $z''$ as their opposite edges. See fig.\ 1.
Note that $zz'z''=-1$ (indeed $z\cdot\frac 1{1-z}\cdot\frac{z-1}z=-1$), so the sum \begin{equation*}\log z + \log z' + \log z''\end{equation*} is an odd multiple of $\pi i$, depending on the branches of $\log$ used. In fact, if we use the standard branch of $\log$ then this sum is $\pi i$ or $-\pi i$ depending on whether $z$ is in the upper or lower half plane. \begin{definition}\label{flattening} We shall call any triple of the form \begin{equation*} \bfw=(w_0,w_1,w_2)=(\log z +p\pi i, \log z' + q\pi i, \log z'' + r\pi i) \end{equation*} with \begin{equation*}p,q,r\in \Z\quad\text{and}\quad w_0+w_1+w_2=0 \end{equation*} a \emph{combinatorial flattening} for our simplex.
Each edge $E$ of $\Delta$ is assigned one of the components $w_i$ of $\bfw$, with opposite edges being assigned the same component. We call $w_i$ the \emph{log-parameter} for the edge $E$ and denote it $l_E(\Delta,\bfw)$. \end{definition} This combinatorial flattening can be written \begin{equation*}\ell(z;p,q):=(\log z + p\pi i, -\log(1-z) + q\pi i, \log(1-z)-\log z - (p+q)\pi i), \end{equation*} and $\ell$ is then a map of $\Cover$ to the set of combinatorial flattenings of simplices.
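By way of example (all logarithms principal): for the ideal simplex with cross-ratio parameter $z=i$ one finds \begin{equation*} \ell(i;0,0)=\Bigl(\frac{\pi i}2,\ -\frac12\log2+\frac{\pi i}4,\ \frac12\log2-\frac{3\pi i}4\Bigr), \end{equation*} whose entries sum to zero as required; in the notation of Definition \ref{flattening} this is the flattening with $(p,q,r)=(0,0,-1)$, consistent with $\log z+\log z'+\log z''=\pi i$ for $z$ in the upper half plane.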
\begin{lemma}\label{Cover is flattenings} This map $\ell$ is a bijection, so $\Cover$ may be identified with the set of all combinatorial flattenings of ideal tetrahedra. \end{lemma}
\begin{proof} We must show that we can recover $(z;p,q)$ from $(w_0,w_1,w_2)= \ell(z;p,q)$. It clearly suffices to recover $z$. But $z=\pm e^{w_0}$ and $1-z=\pm e^{-w_1}$, and the knowledge of both $z$ and $1-z$ up to sign determines $z$ (if both $e^{w_0}$ and $-e^{w_0}$ were possible values of $z$, then $1+e^{w_0}=\pm(1-e^{w_0})$, forcing either $e^{w_0}=0$ or $1=-1$). \end{proof}
We can give a geometric interpretation of the choice of parameters in the five term relation (\ref{five term}). If $z_0,\ldots,z_4$ are five distinct points of $\partial\overline\H^3$, then each choice of four of the five points $z_0,\dots,z_4$ gives an ideal simplex. We denote the simplex which omits vertex $z_i$ by $\Delta_i$. The cross ratio parameters $x_i=[z_0\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} \hat{z_i}\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} z_4]$ of these simplices can be expressed in terms of $x:=x_0$ and $y:=x_1$ as \begin{align*} x_0=[z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]&=:x\\ x_1=[z_0\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]&=:y\\ x_2=[z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]&=\frac yx\\ x_3=[z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_4]&=\frac {1-x^{-1}}{1-y^{-1}}\\ x_4=[z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3]&=\frac {1-x}{1-y} \end{align*} The lifted five term relation has the form \def\term#1{(x_#1;p_#1,q_#1)} \begin{equation}\label{general 5-term} \sum_{i=0}^4(-1)^i \term i=0 \end{equation} with certain relations on the $p_i$ and $q_i$. We will give a geometric interpretation of these relations.
Using the map of Lemma \ref{Cover is flattenings}, each summand in this relation (\ref{general 5-term}) represents a choice $\ell\term i$ of combinatorial flattening for one of the five ideal simplices. For each edge $E$ connecting two of the points $z_i$ we get a corresponding linear combination \begin{equation}\label{edge sums} \sum_{i=0}^4(-1)^il_E(\Delta_i,\ell\term i) \end{equation} of log-parameters (Definition \ref{flattening}), where we put $l_E(\Delta_i,\ell\term i)=0$ if the line $E$ is not an edge of $\Delta_i$. This linear combination has just three non-zero terms corresponding to the three simplices that meet at the edge $E$. One easily checks that the real part is zero and the imaginary part can be interpreted (with care about orientations) as the sum of the ``adjusted angles'' of the three flattened simplices meeting at $E$.
\begin{definition} We say that the $\term i$ satisfy the \emph{flattening condition} if each of the above linear combinations (\ref{edge sums}) of log-parameters is equal to zero. That is, the adjusted angle sum of the three simplices meeting at each edge is zero. \end{definition}
\begin{lemma}\label{geometric five term} Relation (\ref{general 5-term}) is an instance of the lifted five term relation (\ref{five term}) if and only if the $\term i$ satisfy the flattening condition. \end{lemma} \begin{proof} We first consider the case that $(x_0,\ldots,x_4)\in \FT^+$. Recall this means that each $x_i$ is in $\H$. Geometrically, this implies that each of the above five tetrahedra is positively oriented by the ordering of its vertices. This implies the configuration of fig.\ 2 with $z_1$ and $z_3$ on opposite sides of the plane of the triangle $z_0z_2z_4$ and the line from $z_1$ to $z_3$ passing through the interior of this triangle. Denote the combinatorial flattening of the $i^{th}$ simplex by $\ell(x_i;p_i,q_i)$. If we consider the log-parameters at the edge $z_3z_4$ for example, they are $\log x + p_0\pi i$, $\log y + p_1 \pi i$, and $\log(y/x) + p_2\pi i$ and the condition is that $(\log x + p_0\pi i)-(\log y + p_1 \pi i) +(\log(y/x) +p_2\pi i)=0$. This implies $p_2=p_1-p_0$. Similarly the other edges lead to other relations among the $p_i$ and $q_i$, namely:
\begin{tabular} {l r @{\,\,} c @{\,\,} l @{\qquad\qquad} l r @{\,\,} c @{\,\,} l} $z_0z_1$:&$ p_2-p_3+p_4$&$=$&$0$&$z_0z_2$:&$ -p_1+p_3+q_3-p_4-q_4$&$=$&$0$\\ $z_1z_2$:&$ p_0-q_3+q_4$&$=$&$0$&$z_1z_3$:&$ -p_0-q_0+q_2-p_4-q_4$&$=$&$0$\\ $z_2z_3$:&$ q_0-q_1+p_4$&$=$&$0$&$z_2z_4$:&$ -p_0-q_0+p_1+q_1-p_3$&$=$&$0$\\ $z_3z_4$:&$ p_0-p_1+p_2$&$=$&$0$&$z_3z_0$:&$ p_1+q_1-p_2-q_2+q_4$&$=$&$0$\\ $z_4z_0$:&$ -q_1+q_2-q_3$&$=$&$0$&$z_4z_1$:&$ q_0-p_2-q_2+p_3+q_3$&$=$&$0$. \end{tabular}
Elementary linear algebra verifies that these relations are equivalent to the equations $p_2=p_1-p_0$, $p_3=p_1-p_0+q_1-q_0$, $q_3=q_2-q_1$, $p_4=q_1-q_0$, and $q_4 = q_2-q_1-p_0$, as in the lifted five term relation (\ref{five term}). The lemma thus follows for $(x_0,\ldots,x_4)\in \FT^+$. It is then true in general by analytic continuation. \end{proof}
\section{Definition of $\themap$}
We can now describe the map $\themap\colon H_3(\operatorname{PSL}(2,\C);\Z)\to \eprebloch(\C)$.
We shall first recall a standard chain complex for homology of $G=\operatorname{PSL}(2,\C)$, the chain complex of ``homogeneous simplices for $G$.'' We will, however, diverge from the standard by using only non-degenerate simplices, i.e., simplices with distinct vertices --- we may do this because $G$ is infinite.
Let $C_n(G)$ denote the free $\Z$-module on all ordered $(n+1)$-tuples $\langle g_0,\dots,g_n\rangle$ of distinct elements of $G$. Define $\delta\colon C_n\to C_{n-1}$ by \begin{equation*}\delta\langle g_0,\dots,g_n\rangle=\sum_{i=0}^n(-1)^i\langle g_0,\dots,\hat g_i,\dots,g_n\rangle. \end{equation*} Then each $C_n$ is a free $\Z G$-module under left-multiplication by $G$. Since $G$ is infinite the sequence \begin{equation*}\cdots \to C_2\to C_1\to C_0\to \Z\to 0\end{equation*} is exact, so it is a $\Z G$-free resolution of $\Z$. Thus the chain complex \begin{equation*}\cdots\to C_2\otimes_{\Z G}\Z\to C_1\otimes_{\Z G}\Z\to C_0\otimes_{\Z G}\Z\to 0\end{equation*} computes the homology of $G$. Note that $C_n\otimes_{\Z G}\Z$ is the free $\Z$-module on symbols $\langle g_0\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} g_n\rangle$, where the $g_i$ are distinct elements of $G$ and $\langle g_0\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} g_n\rangle=\langle g_0'\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} g_n'\rangle$ if and only if there is a $g\in G$ with $gg_i=g_i'$ for $i=0,\ldots,n$.
Thus an element of $\alpha\in H_3(G;\Z)$ is represented by a sum \begin{equation} \sum \epsilon_i\langle g_0^{(i)}\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} g_3^{(i)}\rangle \label{homology element} \end{equation} of homogeneous $3$-simplices for $G$ and their negatives (here each $\epsilon_i$ is $\pm1$). The fact that this is a cycle means that the 2-faces of these homogeneous simplices cancel in pairs. We choose some specific way of pairing cancelling faces and form a geometric quasi-simplicial complex $K$ by taking a 3-simplex $\Delta_i$ for each homogeneous 3-simplex of the above sum and gluing together 2-faces of these $\Delta_i$ that correspond to 2-faces of the homogeneous simplices that have been paired with each other.
We call a closed path $\gamma$ in $K$ a \emph{normal} path if it meets no $0$- or $1$-simplices of $K$ and crosses all $2$-faces that it meets transversally. When such a path passes through a $3$-simplex $\Delta_i$, entering and departing at different faces, there is a unique edge $E$ of the 3-simplex between these faces. We say the path \emph{passes} this edge $E$.
Consider a choice of combinatorial flattening $\bfw_i$ for each simplex $\Delta_i$. Then for each edge $E$ of a simplex $\Delta_i$ of $K$ we have a log-parameter $l_E=l_E(\Delta_i,\bfw_i)$ assigned. Recall that this log-parameter has the form $\log z + s\pi i$ where $z$ is the cross-ratio parameter associated to the edge $E$ of simplex $\Delta_i$ and $s$ is some integer. We call ($s$ mod $2$) the \emph{parity parameter} at the edge $E$ of $\Delta_i$ and denote it $\delta_E=\delta_E(\Delta_i,\bfw_i)$. \begin{definition}\label{parity} Suppose $\gamma$ is a normal path in $K$. The \emph{parity} of $\gamma$ is the sum ($\sum_E\delta_E$ modulo $2$) of the parity parameters of all the edges $E$ that $\gamma$ passes. Moreover, if $\gamma$ runs in a neighbourhood of some fixed vertex $V$ of $K$, then the {\em log-parameter for the path} is the sum $\sum_E\pm \epsilon_{i(E)}l_E$, summed over all edges $E$ that $\gamma$ passes, where: \begin{itemize} \item $i(E)$ is the index $i$ of the simplex $\Delta_i$ that the edge $E$ belongs to and $\epsilon_{i(E)}$ is the corresponding coefficient $\pm1$ from equation (\ref{homology element}); \item the extra sign $\pm$ is $+$ or $-$ according as the edge $E$ is passed in a counterclockwise or clockwise fashion as viewed from the vertex. \end{itemize} \end{definition} \begin{theorem}\label{beta exists} Choose $z\in \partial\overline\H^3$ such that $g_0^{(i)}z, g_1^{(i)}z, g_2^{(i)}z, g_3^{(i)}z$ are distinct points for each $i$. This defines an ideal hyperbolic simplex shape for each simplex $\Delta_i$ of $K$ and an associated cross ratio $x_i=[g_0^{(i)}z\hbox{$\,:\,$} g_1^{(i)}z\hbox{$\,:\,$} g_2^{(i)}z\hbox{$\,:\,$} g_3^{(i)}z]$. There is a way of assigning combinatorial flattenings $\bfw_i=\ell(x_i;p_i,q_i)$ to the simplices of $K$ such that the parity of any normal path in $K$ is zero and the log-parameter of any normal path in any vertex neighbourhood of $K$ is zero.
For such an assignment the element $\sum_i\epsilon_{i}[x_i,p_i,q_i]\in\eprebloch(\C)$ is independent of choices and only depends on the original homology class $\alpha$. We denote it $\themap(\alpha)$. Moreover, $\themap(\alpha)\in\ebloch(\C)$ and $\themap\colon H_3(\operatorname{PSL}(2,\C);\Z)\to\ebloch(\C)$ is a homomorphism. \end{theorem}
To prove this theorem we will need a general relation (Lemma \ref{cycle} below) in $\eprebloch(\C)$ that follows from the lifted five term relation.
\section{Consequences of the lifted five term relation}\label{consequences}
Let $K$ be a simplicial complex obtained by gluing 3-simplices $\Delta_1,\dots,\Delta_n$ together in sequence around a common edge $E$. Thus, for each index $j$ modulo $n$, $\Delta_j$ is glued to each of $\Delta_{j-1}$ and $\Delta_{j+1}$ along one of the two faces of $\Delta_j$ incident to $E$. Suppose, moreover, that the vertices of each $\Delta_j$ are ordered such that orderings agree on the common 2-faces of adjacent 3-simplices.
There is then a sequence $\epsilon_1=\pm1$, $\ldots$, $\epsilon_n=\pm1$ such that the 2-faces used for gluing all cancel in the boundary of the 3-chain $\sum_{j=1}^n\epsilon_j\Delta_j$. (Proof: choose $\epsilon_1=1$ and then for $i=2,\dots,n$ choose $\epsilon_i$ so the common face of $\Delta_{i-1}$ and $\Delta_i$ cancels. The common face of $\Delta_n$ and $\Delta_1$ must then cancel since otherwise that face occurs with coefficient $\pm2$ in $\partial\sum_{j=1}^n\epsilon_j\Delta_j$, and $E$ occurs with coefficient $\pm2$ in $\partial\partial\sum_{j=1}^n\epsilon_j\Delta_j$.)
Suppose now further that a combinatorial flattening $\bfw_j$ has been chosen for each $\Delta_j$ such that the ``signed sum'' of log parameters around the edge $E$ vanishes and the same for parity parameters: \begin{equation}\sum_{j=1}^n\epsilon_j l_E(\Delta_j,\bfw_j)=0,\quad \sum_{j=1}^n\epsilon_j \delta_E(\Delta_j,\bfw_j)=0 . \label{around E}\end{equation}
We think of the edge $E$ as being vertical, so that we can label the two edges other than $E$ of the common triangle of $\Delta_j$ and $\Delta_{j+1}$ as $T_j$ and $B_j$ (for ``top'' and ``bottom''). Let $\bfw_j'$ be the flattening obtained from $\bfw_j$ by adding $\epsilon_j\pi i$ to the log parameter at $T_j$ and its opposite edge and subtracting $\epsilon_j\pi i$ from the log parameter at $B_j$ and its opposite edge. If we do this for each $j$ then the total log parameter and parity parameter at any edge of the complex $K$ is not changed (we sum log-parameters with the appropriate sign $\epsilon_j$): at $E$ no log-parameter has changed, while at every other edge $\pi i$ has been added at one of the two simplices at the edge and subtracted at the other.
\begin{lemma}\label{cycle} With the above notation, \begin{equation*}\sum_{j=1}^n\epsilon_j[\bfw_j]= \sum_{j=1}^n\epsilon_j[\bfw_j']~~\in\eprebloch(\C),\end{equation*} where we are using $[\bfw]$ as a shorthand for $[\ell^{-1}\bfw]$ (i.e., $[\bfw]$ means $[z,p,q]$ where $\ell(z;p,q)=\bfw$; see Lemma \ref{Cover is flattenings}).\end{lemma}
\begin{proof} Each of the simplices $\Delta_i$ has an associated ideal hyperbolic structure compatible with the combinatorial flattenings $\bfw_j$. This ideal hyperbolic structure is also compatible with the flattening $\bfw_j'$. Choose a realization of $\Delta_1$ as an ideal simplex in $\overline\H^3$. We think of this as a mapping of $\Delta_1$ to $\overline\H^3$. We can extend this to a mapping of $K$ to $\overline \H^3$ which maps each $\Delta_j$ to an ideal simplex with shape appropriate to its combinatorial flattening. Adjacent simplices will map to the same side of their common face in $\overline\H^3$ if their orientations or the signs $\epsilon_j$ do not match and will be on opposite sides otherwise. The fact that the signed sums of log and parity parameters at edge $E$ are zero guarantees that the identifications match up as we go once around the edge $E$ of $K$.
Note that $K$ has $n+2$ vertices. We first consider the special case that $n=3$ and there is an ordering $v_0,\dots,v_4$ of the five vertices of $K$ that restricts to the given vertex ordering for each simplex. We also assume the five vertices of $K$ map to distinct points $z_0,\dots,z_4$ of $\partial \H^3$.
Each simplex $\Delta_j$ for $j=1,2,3$ has vertices obtained by omitting one of the five vertices $v_0,\dots,v_4$. Denote by $\Delta_4$ and $\Delta_5$ the simplices obtained by omitting each of the other two vertices. The fact that the common 2-faces of the $\Delta_j$ cancel when taking boundary of the chain $\epsilon_1\Delta_1+\epsilon_2\Delta_2+\epsilon_3\Delta_3$ means that, up to sign, this sum corresponds to three summands of the chain $\partial\langle v_0,\dots,v_4\rangle =\sum(-1)^i\langle v_0,\dots,\hat{v_i},\dots,v_4\rangle$. Choose $\epsilon_4$ and $\epsilon_5$ so that $\sum_{j=1}^5\epsilon_j\Delta_j$ is $\pm\partial\langle v_0,\dots,v_4\rangle$.
We now claim that we can choose unique combinatorial flattenings $\bfw_4$ and $\bfw_5$ of $\Delta_4$ and $\Delta_5$ so that the signed sum of log parameters and parity parameters at any edge of $K\cup\Delta_4\cup\Delta_5$ is zero. Indeed, this claim does not depend on the order of the vertices, so by permuting the vertices we can assume the five vertices are ordered so that $\Delta_1$, $\Delta_2$ and $\Delta_3$ are the first three simplices occurring in the five term relation. Then the common edge $E$ is $v_3v_4$ and the fact that the simplices fit together around this edge is the condition that their cross-ratio parameters $x_0$, $x_1$, and $x_2$ satisfy $x_0x_1^{-1}x_2=1$. Writing the flattenings as elements of $\Cover$ as $(x_0;p_0,q_0)$, $(x_1;p_1,q_1)$, and $(x_2;p_2,q_2)$, the equation saying the signed sum of log parameters at this edge is zero is $(\log x_0 + p_0\pi i) -(\log x_1 +p_1\pi i)+(\log x_2 +p_2\pi i)=0$. If $x_0$, $x_1$, and $x_2$ are in the complex upper half plane this implies the equation $p_2=p_1-p_0$ of the lifted five term relation, while otherwise it implies the appropriate analytic continuations in $\Cover$ of this. The desired choice of flattenings of $\Delta_4$ and $\Delta_5$ is thus determined as in the lifted five term relation by the choice of $p_0$, $p_1$, $q_0$, $q_1$, and $q_2$ (namely $p_3=p_1-p_0+q_1-q_0$, $q_3=q_2-q_1$, $p_4=q_1-q_0$, and $q_4=q_2-q_1-p_0$ if $x_0,x_1,x_2$ are in the upper half plane and otherwise the appropriate analytic continuation).
Note that $\bfw_4$ and $\bfw_5$ do not change if we replace $\bfw_1,\dots,\bfw_3$ by $\bfw_1',\dots,\bfw_3'$ (with the above reordering of vertices this just subtracts $1$ from each of $q_0$, $q_1$ and $q_2$ in the lifted five term relation, so it does not alter $p_3,q_3,p_4,q_4$). By Lemma \ref{geometric five term} we then have \begin{equation*} \begin{aligned} \epsilon_1\bfw_1+\epsilon_2\bfw_2+\epsilon_3\bfw_3 &= -(\epsilon_4\bfw_4+\epsilon_5\bfw_5)\\ \epsilon_1\bfw_1'+\epsilon_2\bfw_2'+\epsilon_3\bfw_3' &= -(\epsilon_4\bfw_4+\epsilon_5\bfw_5), \end{aligned} \end{equation*} proving this case.
We next consider the case that for some index $j$ modulo $n$ the images of $\Delta_j$ and $\Delta_{j+1}$ in $\overline\H^3$ do not coincide, so their union has five distinct vertices. By cycling our indices we may assume $j=1$. Since the orderings of the vertices of $\Delta_1$ and of $\Delta_{2}$ agree on the three vertices they have in common, there is an ordering of all five vertices compatible with both $\Delta_1$ and $\Delta_{2}$. Let $\Delta_0$ be the simplex determined by the common edge $E$ and the two vertices that $\Delta_1$ and $\Delta_{2}$ do not have in common. Then there is an $\epsilon_0=\pm1$ such that the common faces of $\Delta_0$, $\Delta_1$, and $\Delta_{2}$ cancel in the boundary of the chain $\epsilon_0\Delta_0+\epsilon_1\Delta_1+\epsilon_{2}\Delta_{2}$. Choose a flattening $\bfw_0$ of $\Delta_0$ such that $\epsilon_0l_E(\Delta_0,\bfw_0)+\epsilon_1l_E(\Delta_1,\bfw_1)+ \epsilon_{2}l_E(\Delta_{2},\bfw_{2}) =0$. Then the relation of the lemma has already been proved for $\bfw_0$, $\bfw_1$, $\bfw_2$, and by subtracting this relation from the relation to be proved for $\bfw_1,\dots,\bfw_n$ we obtain a case of the lemma with one fewer simplex. Thus, if we assume the lemma proved for $n-1$ simplices then this case is also proved.
The above induction argument fails only for the case that there are $2m$ simplices that alternately ``fold back on each other'' so that their images in $\overline\H^3$ all have the same four vertices. The above induction eventually reduces us to this case (usually with $m=1$). We must therefore deal with this situation to complete the proof. We first consider the case that $m=1$ so $n=2$. We then have four vertices $z_0,\ldots,z_3$ in $\partial\overline\H^3$. We assume the edge $E$ is $z_0z_1$. Then the ordering of the vertices of the faces $z_0z_1z_2$ and $z_0z_1z_3$ is the same in each of $\Delta_1$ and $\Delta_2$. Choose a new point $z_4$ in $\partial\overline\H^3$ distinct from $z_0,\dots,z_3$ and consider the ordered simplex with vertices $z_0,z_1,z_2$ ordered as above followed by $z_4$. Call this $\Delta_3$. Similarly make $\Delta_4$ using $z_0,z_1,z_3$ ordered as above followed by $z_4$. Choose flattenings of $\Delta_3$ and $\Delta_4$ so that the signed sum of log parameters for $\Delta_1,\Delta_3,\Delta_4$ around $E$ is zero. Then we obtain a three simplex relation of the type already proved for $\Delta_1,\Delta_3,\Delta_4$ and another for $\Delta_2,\Delta_3,\Delta_4$, and the difference of these two relations gives the desired two-simplex relation.
More generally, if we are in the above ``folded'' case with $m>1$ we can use an instance of the three-simplex relation to replace one of the $2m$ simplices by two. We then use the induction step to replace one of these new simplices together with an adjacent old simplex by one simplex and then repeat for the other new simplex. In this way we reduce to a relation involving $2m-1$ simplices, completing the proof. \end{proof}
Before we continue, we note a consequence of the first case we considered in the above proof that we will need later. If vertices have been reordered as in that proof then the relation we proved can be written (with the appropriate relationship among $p_0,p_1,p_2$): \begin{equation}\begin{aligned}{} &[x,p_0,q_0]-[y,p_1,q_1]+[y/x,p_2,q_2]={}\\ &[x,p_0,q_0-1]-[y,p_1,q_1-1]+ [y/x,p_2,q_2-1].\label{homo}\end{aligned}\end{equation} This is true for any choice of $q_0,q_1,q_2$ so long as $p_0,p_1,p_2$ satisfy the appropriate relation. Thus if we just change $q_0$ and subtract the resulting equation from the above we get \begin{equation*} [x,p_0,q_0]-[x,p_0,q_0']=[x,p_0,q_0-1]-[x,p_0,q_0'-1]. \end{equation*} From the versions of the above three-simplex case with different orderings of the vertices we can derive three versions of this relation: \begin{equation} \begin{aligned}{} [x,p,q]-[x,p,q']&=[x,p,q-1]-[x,p,q'-1]\\ [x,p,q]-[x,p',q]&=[x,p-1,q]-[x,p'-1,q]\label{three equns}\\ [x,p,q]-[x,p+s,q-s]&=[x,p+1,q-1]-[x,p+s+1,q-s-1] \end{aligned} \end{equation} From these we obtain: \begin{lemma}\label{super transfer} $ [x,p,q]= pq[x,1,1]-(pq-p)[x,1,0] -(pq-q)[x,0,1]+(pq-p-q+1)[x,0,0]$. \end{lemma} \begin{proof} The first of the relations (\ref{three equns}) implies $[x,p,q]=[x,p,q-1]+[x,p,1]-[x,p,0]$ and applying this repeatedly shows \begin{equation}[x,p,q]=q[x,p,1]-(q-1)[x,p,0].\label{dd}\end{equation} The second equation of (\ref{three equns}) implies similarly that $[x,p,q]=p[x,1,q]-(p-1)[x,0,q]$ and using this to expand each of the terms on the right of (\ref{dd}) gives the desired equation. \end{proof}
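As a quick illustration of Lemma \ref{super transfer}: for $p=2$ and $q=1$ it reads $[x,2,1]=2[x,1,1]-[x,0,1]$, which agrees with a direct application of the iterated form $[x,p,q]=p[x,1,q]-(p-1)[x,0,q]$ of the second equation of (\ref{three equns}).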
Up to this point we have not used the transfer relation (\ref{transfer}). We digress briefly to show that the transfer relation almost follows from the five term relation. \begin{proposition}\label{kappa} If $\eprebloch'(\C)$ and $\ebloch'(\C)$ are defined like $\eprebloch(\C)$ and $\ebloch(\C)$ but without the transfer relation, then in $\eprebloch'(\C)$ the element $\kappa:=[x,1,1]+[x,0,0]-[x,1,0]-[x,0,1]$ is independent of $x$ and has order $2$. Moreover, $\eprebloch'(\C)={\eprebloch}(\C)\times C_2$ and $\ebloch'(\C)={\ebloch}(\C)\times C_2$, where $C_2$ is the cyclic group of order 2 generated by $\kappa$. \end{proposition} \begin{proof} If we subtract equation (\ref{homo}) with $p_0=p_1=q_0=q_1=q_2=1$ from the same equation with $p_0=p_1=0$, $q_0=q_1=q_2=1$ we obtain $[x,1,1]-[y,1,1]-[x,0,1]+[y,0,1]=[x,1,0]-[y,1,0]-[x,0,0]+[y,0,0]$, which rearranges to show that $\kappa$ is independent of $x$. The last of the equations (\ref{three equns}) with $p=q=0$ and $s=-1$ gives $2[x,0,0]=[x,1,-1]+[x,-1,1]$ and expanding the right side of this using Lemma \ref{super transfer} gives $2[x,0,0]=-2[x,1,1]+2[x,1,0]+2[x,0,1]$, showing that $\kappa$ has order dividing 2.
To show $\kappa$ has order exactly 2 we note that there is a homomorphism $\epsilon:\eprebloch'(\C)\to \Z/2$ defined on generators by $[z,p,q]\mapsto (pq\text{ mod }2)$. Indeed, it is easy to check that this vanishes on the lifted five-term relation, and is thus well defined on $\eprebloch'(\C)$. Since $\epsilon(\kappa)=1$ we see $\kappa$ is non-trivial. Finally, Lemma \ref{super transfer} implies that the effect of the transfer relation is simply to kill the element $\kappa$, so the final sentence of the proposition follows. \end{proof}
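(For the reader's convenience, here is the verification that $\epsilon$ vanishes on the lifted five term relation; signs are immaterial modulo $2$, and the $p_i,q_i$ are as in Definition \ref{liftFT}: \begin{align*} \sum_{i=0}^4p_iq_i&\equiv p_0q_0+p_1q_1+(p_1+p_0)q_2\\ &\qquad+(p_1+p_0+q_1+q_0)(q_2+q_1)+(q_1+q_0)(q_2+q_1+p_0)\\ &\equiv p_0q_0+p_1q_1+\bigl(p_1q_1+p_0q_0\bigr)\equiv0\pmod2, \end{align*} the two copies of $(q_1+q_0)(q_2+q_1)$ and the two copies of $p_0q_1$ cancelling along the way.)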
\begin{lemma}\label{1-x} For any $[x,p,q]\in\eprebloch(\C)$ one has $[x,p,q]+[1-x,-q,-p]=2[1/2,0,0]$. \end{lemma} \begin{proof} Assume first that $0<y<x<1$. Then, as remarked in the proof of Proposition \ref{R}, \begin{equation*}\begin{aligned}{} [x,p_0,q_0]&-[y,p_1,q_1]+[\frac yx,p_1-p_0,q_2]-[\frac{1-x^{-1}}{1-y^{-1}},p_1-p_0+q_1-q_0,q_2-q_1]+{}\\ &[\frac{1-x}{1-y},q_1-q_0,q_2-q_1-p_0]=0\end{aligned}\end{equation*} is an instance of the lifted five term relation. Replacing $y$ by $1-x$, $x$ by $1-y$, $p_0$ by $-q_1$, $p_1$ by $-q_0$, $q_0$ by $-p_1$, $q_1$ by $-p_0$, and $q_2$ by $q_2-q_1-p_0$ replaces this relation by exactly the same relation except that the first two terms are replaced by $[1-y,-q_1,-p_1]-[1-x,-q_0,-p_0]$. Thus subtracting the two relations gives: \begin{equation*}[x,p_0,q_0]-[y,p_1,q_1]-[1-y,-q_1,-p_1]+[1-x,-q_0,-p_0]=0. \end{equation*} Putting $[y,p_1,q_1]=[1/2,0,0]$ now proves the lemma for $1/2<x<1$. But since we have shown this as a consequence of the lifted five term relation, we can analytically continue it over the whole of $\Cover$. \end{proof} \begin{proposition} The following sequence is exact: \begin{equation*}0\to\C^*\stackrel\chi\longrightarrow {\eprebloch}(\C)\to\prebloch(\C)\to 0\end{equation*} where $\eprebloch(\C)\to\prebloch(\C)$ is the natural map and $\chi(z):= [z,0,1]-[z,0,0]$ for $z\in \C^*$. \end{proposition} \begin{proof} Denote $\{z,p\}:=[z,p,q]-[z,p,q-1]$ which is independent of $q$ by the first equation of (\ref{three equns}). By Lemma \ref{1-x} we have $[z,p,q]-[z,p-1,q]=\{1-z,-q\}$. It follows that elements of the form $\{z,p\}$ generate $\operatorname{Ker}\bigl(\eprebloch(\C)\to\prebloch(\C)\bigr)$. Computing $\{z,p\}$ using Lemma \ref{super transfer} and the transfer relation, one finds $\{z,p\}=\{z,0\}$ which only depends on $z$. Thus the elements $\{z,0\}=\chi(z)$ generate $\operatorname{Ker}\bigl({\eprebloch}(\C)\to\prebloch(\C)\bigr)$. If we take equation (\ref{homo}) with even $p_i$ and subtract the same equation with the $q_i$ reduced by $1$ we get an equation that says that $\chi\colon\C^*\to \operatorname{Ker}\bigl({\eprebloch}(\C)\to\prebloch(\C)\bigr)$ is a homomorphism. We have just shown that it is surjective, and it is injective because $R\circ\chi$ is the map $\C^*\to \C/\pi^2\Z$ defined by $z\mapsto \frac{\pi i}2\log z$. \end{proof}
We can now describe the relationship of our extended groups with the ``classical'' ones. \begin{theorem} There is a commutative diagram with exact rows and columns \begin{equation*} \begin{CD} && 0 && 0 && 0 \\ && @VVV @VVV @VVV \\ 0 @>>> \mu^* @>>> \C^* @>>> \C^*/\mu^* @>>> 0\\
&& @V\chi|\mu^* VV @V\chi VV @V\beta VV @VVV\\ 0 @>>> \ebloch(\C) @>>> \eprebloch(\C) @>\nu>> \C\wedge\C @>>> K_2(\C) @>>> 0\\ && @VVV @VVV @V\epsilon VV @V=VV\\ 0 @>>> \bloch(\C) @>>> \prebloch(\C) @>\nu'>> \C^*\wedge\C^* @>>> K_2(\C) @>>> 0\\ && @VVV @VVV @VVV @VVV\\ && 0 && 0 && 0 && 0 \end{CD} \end{equation*} Here $\mu^*$ is the group of roots of unity and the labelled maps are defined as follows: \begin{equation*}\begin{aligned} \chi(z)&=[z,0,1]-[z,0,0]\in \eprebloch(\C);\\ \nu[z,p,q]&=(\log z + p\pi i)\wedge(-\log(1-z)+ q\pi i);\\ \nu'[z]&=2\bigl(z \wedge (1-z)\bigr);\\ \beta[z]&=\log z \wedge \pi i;\\ \epsilon(w_1\wedge w_2)&= -2(e^{w_1}\wedge e^{w_2}); \end{aligned}\end{equation*} and the unlabelled maps are the obvious ones. \end{theorem} \begin{proof} The top horizontal sequence is trivially exact while the other two are exact at their first two non-trivial groups by definition of $\ebloch$ and $\bloch$. The bottom row is exact also at its other two places by Milnor's definition of $K_2$. The exactness of the third vertical sequence is elementary and the second one has just been proved. The commutativity of all but the top left square is elementary. A diagram chase confirms that $\chi$ maps $\mu^*$ to $\ebloch(\C)$ and that the left vertical sequence is also exact. Another confirms exactness of the middle row. \end{proof}
\section{Proof of Theorem \ref{beta exists}}
We must first recall some notation and results from \cite{neumann}.
The complex $K$ of Theorem \ref{beta exists} is what is called an ``oriented 3-cycle'' in \cite{neumann}. That is, it is a finite quasi-simplicial 3-complex such that the complement $K-K^{(0)}$ of the vertices is an oriented 3-manifold. (``Quasi-simplicial'' means that $K$ is a CW-complex obtained by gluing together simplices along their faces in such a way that the interior of each face of each simplex injects into the resulting complex.) Each simplex $\Delta_i$ of $K$ has an orientation compatible with the ordering of its vertices, and this orientation is compatible with or opposite to the orientation of $K$ according as $\epsilon_i$ is $+1$ or $-1$.
To an oriented $3$-simplex $\Delta$ of $K$ we associate a $2$-dimensional bilinear space $J_\Delta$ over $\Z$ as follows. As a $\Z$-module $J_\Delta$ is generated by the six edges $e_0,\dots,e_5$ of $\Delta$ (see Fig.~\ref{figdelta}) with the relations: \begin{gather*} e_i - e_{i+3} = 0 \quad\hbox{for }i=0,1,2.\\ e_0 + e_1 + e_2 = 0. \end{gather*} \begin{figure}\label{figdelta}
\end{figure}
Thus, opposite edges of $\Delta$ represent the same element of $J_\Delta$, so $J_\Delta$ has three ``geometric'' generators, and the sum of these three generators is zero. The bilinear form on $J_\Delta$ is the non-singular skew-symmetric form given by \begin{equation*} \langle e_0,e_1\rangle = \langle e_1,e_2\rangle = \langle e_2,e_0\rangle = -\langle e_1,e_0\rangle = -\langle e_2,e_1\rangle = -\langle e_0,e_2\rangle = 1. \end{equation*}
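Concretely, $e_0$ and $e_1$ form a basis of $J_\Delta$, and with respect to this basis the form is given by the standard symplectic matrix $\bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)$; in particular it is unimodular, which makes the asserted non-singularity evident.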
Let $J$ be the direct sum $\coprod J_\Delta$, summed over the oriented $3$-simplices of $K$. For $i=0,1$ let $C_i$ be the free $\Z$-module on the \emph{unoriented} $i$-simplices of $K$. Define homomorphisms \begin{equation*} \alpha\colon C_0\mathop{\longrightarrow}\limits C_1 \quad\text{and}\quad \beta\colon C_1\mathop{\longrightarrow}\limits J \end{equation*} as follows. $\alpha$ takes a vertex to the sum of the incident edges (with an edge counted twice if both endpoints are at the given vertex). The $J_\Delta$ component of $\beta$ takes an edge $E$ of $K$ to the sum of those edges $e_i$ in the edge set $\{e_0,e_1,\dots,e_5\}$ of $\Delta$ which are identified with $E$ in $K$.
The natural basis of $C_i$ gives an identification of $C_i$ with its dual space and the bilinear form on $J$ gives an identification of $J$ with its dual space. With respect to these identifications, the dual map \begin{equation*} \alpha^*\colon C_1\mathop{\longrightarrow}\limits C_0 \end{equation*} is easily seen to map an edge $E$ of $K$ to the sum of its endpoints, and the dual map \begin{equation*} \beta^*\colon J \mathop{\longrightarrow}\limits C_1 \end{equation*} can be described as follows. To each $3$-simplex $\Delta$ of $K$ we have a map $j = j_\Delta$ of the edge set $\{e_0,e_1,\dots,e_5\}$ of $\Delta$ to the set of edges of $K$: put $j(e_i)$ equal to the edge that $e_i$ is identified with in $K$. For $e_i$ in $J_\Delta$ we have \begin{equation*} \beta^*(e_i) = j(e_{i+1}) - j(e_{i+2}) + j(e_{i+4}) - j(e_{i+5}) \quad\hbox{(indices mod 6).} \end{equation*}
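As a sanity check, the displayed formula is compatible with the defining relations of $J_\Delta$: replacing $i$ by $i+3$ merely permutes the four terms, so $\beta^*(e_i)=\beta^*(e_{i+3})$, while summing over $i=0,1,2$ each $j(e_k)$ occurs once with each sign, so
\begin{equation*}
\beta^*(e_0)+\beta^*(e_1)+\beta^*(e_2)=0.
\end{equation*}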
\iffalse In \cite{neumann} it is shown that $\operatorname{Im}\beta\subseteq\operatorname{Ker}\beta^*$. Since $\operatorname{Ker}\beta^*=(\operatorname{Im}\beta)^\perp$, the form on $J$ then induces a form on $\operatorname{Ker}\beta^*/\operatorname{Im}\beta$ which is non-degenerate on $(\operatorname{Ker}\beta^*/\operatorname{Im}\beta)/\text{Torsion}$. We shall denote this form also by $\langle\,,\,\rangle$. \fi
Let $K_0$ be the result of removing a small open cone neighborhood of each vertex $V$ of $K$, so $\partial K_0$ is the disjoint union of the links $L_V$ of the vertices of $K$.
\begin{theorem}[\cite{neumann}, Theorem 4.2] The sequence \begin{equation*} \mathcal J\colon\quad0\mathop{\longrightarrow}\limits C_0\mathop{\longrightarrow}\limits^\alpha C_1\mathop{\longrightarrow}\limits^\beta J \mathop{\longrightarrow}\limits^{\beta^*} C_1\mathop{\longrightarrow}\limits^{\alpha^*} C_0\mathop{\longrightarrow}\limits 0 \end{equation*} is a chain complex. Its homology groups $H_i(\mathcal J)$ (indexing the non-zero groups of $\mathcal J$ from left to right with indices $5,4,3,2,1$) are \begin{gather*} H_5(\mathcal J)=0,\quad H_4(\mathcal J)=\Z/2,\quad H_1(\mathcal J)=\Z/2,\\ H_3(\mathcal J)=\mathcal H\oplus H^1(K;\Z/2),\quad H_2(\mathcal J)=H_1(K;\Z/2), \end{gather*} where $\mathcal H=\operatorname{Ker}(H_1(\partial K_0;\Z)\to H_1(K_0;\Z/2))$. Moreover, the isomorphism $H_2(\mathcal J)\to H_1(K;\Z/2)$ results by interpreting an element of $\operatorname{Ker}(\alpha^*)\subset C_1$ as an unoriented 1-cycle in $K$.
\qed \end{theorem}
If $\Delta$ is an ideal simplex and $\bfw=(w_0,w_1,w_2)$ is a flattening of it, then denote \begin{equation*} \xi(\bfw):=w_1e_0-w_0e_1\in J_\Delta\otimes\C.\end{equation*} This definition is only apparently unsymmetric since $w_1e_0-w_0e_1=w_2e_1-w_1e_2 =w_0e_2-w_2e_0$.
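Indeed, using the relation $e_2=-e_0-e_1$ in $J_\Delta$ together with the fact that the components of a flattening sum to zero, $w_0+w_1+w_2=0$ (part of the definition of a flattening, not restated in this section), one computes
\begin{equation*}
w_2e_1-w_1e_2=w_2e_1+w_1(e_0+e_1)=w_1e_0+(w_1+w_2)e_1=w_1e_0-w_0e_1,
\end{equation*}
and symmetrically for $w_0e_2-w_2e_0$.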
Give each simplex $\Delta_i$ of the complex $K$ of Theorem \ref{beta exists} the flattening $\bfw_i^{(0)}:=\ell(x_i;0,0)$. Denote by $\omega$ the element of $J\otimes\C$ whose $\Delta_i$-component is $\xi(\bfw_i^{(0)})$ for each $i$. That is, the ${\Delta_i}$-component of $\omega$ is $-\bigl(\log(1-x_i)e_0+\log(x_i)e_1\bigr)$. \begin{lemma} $\frac1{\pi i}\beta^*(\omega)$ is an integer class in the kernel of $\alpha^*$, so it represents an element of the homology group $H_2(\mathcal J)$. Moreover
this element in $H_2(\mathcal J)$ vanishes, so $\frac1{\pi i}\beta^*(\omega)=\beta^*(x)$ for some $x\in J$. \end{lemma} \begin{proof} Let $\overline J_\Delta$ be defined like $J_\Delta$ but without the relation $e_0+e_1+e_2=0$, so it is generated by the six edges $e_0,\dots,e_5$ of $\Delta$ with relations $e_i=e_{i+3}$ for $i=0,1,2$. Let $\overline J$ be the direct sum $\coprod\overline J_\Delta$ over 3-simplices $\Delta$ of $K$.
The map $\beta^*\colon J\to C_1$ factors as \begin{equation*} \beta^*\colon J\mathop{\longrightarrow}\limits^{\beta_1}\overline J\mathop{\longrightarrow}\limits^{\beta_2}C_1, \end{equation*} with $\beta_1$ and $\beta_2$ defined on each component by: \begin{equation*} \begin{split} \beta_1(e_i)&=e_{i+1}-e_{i+2}\\ \beta_2(e_i)&=j(e_i)+j(e_{i+3}) \end{split} \quad\Biggr\}\text{ for $i=0,1,2$.} \end{equation*}
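Composing the two maps indeed recovers the formula for $\beta^*$ given earlier: for $e_i\in J_\Delta$, with indices mod $6$,
\begin{equation*}
\beta_2\beta_1(e_i)=\beta_2(e_{i+1}-e_{i+2})=j(e_{i+1})+j(e_{i+4})-j(e_{i+2})-j(e_{i+5}).
\end{equation*}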
Note that $\beta_1(\xi(\bfw))=w_0e_0+w_1e_1+w_2e_2 \in \overline J_\Delta\otimes\C$. Thus if $E$ is an edge of $K$ then the $E$-component of $\beta^*(\omega)=\beta_2\beta_1(\omega)$ is the sum of the log-parameters for $E$ in the ideal simplices of $K$ around $E$ and is hence a multiple of $\pi i$. In fact, it would be a multiple of $2\pi i$ except for the adjustments by multiples of $\pi i$ that are involved in forming the log-parameters from the logarithms of cross-ratio parameters of the simplices. We claim that these adjustments add to an even multiple of $\pi i$ around each edge, so in fact \begin{equation}\frac1{\pi i}\beta^*(\omega)\in 2C_1.\label{even path} \end{equation} Once this is proved the lemma follows, since the isomorphism $H_2(\mathcal J)\to H_1(K;\Z/2)$ is the map which interprets an element of $\operatorname{Ker}(\alpha^*)$ as an unoriented 1-cycle in $K$, and equation (\ref{even path}) says this 1-cycle is zero modulo 2.
In the terminology of Definition \ref{parity}, our claim can be restated as saying that the parity of the normal path that circles the edge $E$ is zero for each edge $E$. In fact, the parity of any normal path in $K$ is zero. To see this, note that as we follow a normal path the contribution to the parity as we pass an edge of a simplex is $0$ if the edge is the $01$, $03$, $12$, or $23$ edge of the simplex and the contribution is $\pm1$ if it is the $02$ or $13$ edge. Consider the orientations of the triangular faces we cross as we follow the path, where the orientation is the one induced by the ordering of its vertices. As we pass a $02$ or $13$ edge this orientation changes while for the other edges it does not. Since $K$ is oriented, we must have an even number of orientation changes as we traverse the normal path, proving the claim. (This argument, which simplifies my original one, is due to Brian Bowditch.) \end{proof}
Let $\omega':=\omega-\pi i x\in J\otimes\C$ with $x$ as in the lemma, so $\beta^*(\omega')=0$. The $\Delta_i$-component of $\omega'$ is $\xi(\bfw_i)$, where $\bfw_i=\ell(x_i;p_i,q_i)$ for suitable integers $p_i,q_i$. These integers $p_i$ and $q_i$ are the coefficients occurring in the element $x\in J$, which is only determined by the lemma up to elements of $\operatorname{Ker}(\beta^*)$. We claim that for a suitable choice of $x$, the $\bfw_i$ satisfy the parity and log-parameter conditions of the first paragraph of Theorem \ref{beta exists}. To see this we need to review a computation of $H_3(\mathcal J)$ from \cite{neumann}.
We define a map $\gamma'\colon H_3(\mathcal J)\to H^1(\partial K_0;\Z)=\operatorname{Hom}(H_1(\partial K_0),\Z)$ as follows. Given elements $a\in H_3(\mathcal J)$ and $c\in H_1(\partial K_0)$ we wish to define $\gamma'(a)(c)$. It is enough to do this for a class $c$ which is represented by a normal path $C$ in the link of some vertex of $K$. Represent $a$ by an element $A\in J$ with $\beta^*(A)=0$ and consider the element $\beta_1(A)\in \overline J$. This element has a coefficient for each edge of each simplex of $K$. To define $\gamma'(a)(c)$ we consider the coefficients of $\beta_1(A)$ corresponding to edges of simplices that $C$ passes and sum these using the orientation conventions of Definition \ref{parity}. It is easy to see that the result only depends on the homology class of $C$.
We can similarly define a map $\gamma_2'\colon H_3(\mathcal J)\to H^1(K_0;\Z/2)=\operatorname{Hom}(H_1(K_0),\Z/2)$ by using normal paths in $K_0$ and taking the modulo 2 sum of the coefficients of $\beta_1(A)$.
\begin{lemma}[\cite{neumann}, Theorem 5.1] The sequence \begin{equation*} 0\to H_3(\mathcal J)\Mapright{30}{(\gamma',\gamma_2')} H^1(\partial K_0;\Z)\oplus H^1(K_0;\Z/2)\Mapright{25} {r-i^*}H^1(\partial K_0;\Z/2)\to 0 \end{equation*} is exact, where $r\colon H^1(\partial K_0;\Z)\to H^1(\partial K_0;\Z/2)$ is the coefficient map and the map $i^*\colon H^1(K_0;\Z/2)\to H^1(\partial K_0;\Z/2)$ is induced by the inclusion $\partial K_0\to K_0$. \qed\end{lemma}
Returning to the choice of $x$ above, assume we have made a choice so that the resulting flattenings $\bfw_i$ do not lead to zero log-parameters and parities for normal paths. Taking $\frac1{\pi i}$ times the log-parameters of normal paths leads as above to an element $c\in H^1(\partial K_0;\Z)$. Similarly, the parities of normal paths lead to an element $c_2\in H^1(K_0;\Z/2)$. These elements satisfy $r(c)=i^*(c_2)$. The lemma thus gives an element of $H_3(\mathcal J)$ that maps to $(c,c_2)$, and subtracting a representative for this element from $x$ gives the desired correction of $x$ so that the log-parameters and parities of normal paths with respect to the correspondingly changed $\bfw_i$'s are zero. This completes the proof of the first paragraph of Theorem \ref{beta exists}.
Suppose now that we have a different choice of flattenings $\bfw_i$ satisfying the parity and log-parameter conditions of the Theorem. Then the above lemma implies that the difference between the corresponding elements $x$ represents $0$ in $H_3(\mathcal J)$ and is thus in the image of $\beta$. For $E\in C_1$ the effect of changing $x$ by $\beta(E)$ is to change the element $\sum_i\epsilon_i [x_i,p_i,q_i] \in\eprebloch(\C)$ by the corresponding relation of Lemma \ref{cycle}. Since this is a consequence of the lifted five term relations, the element in $\eprebloch(\C)$ is unchanged.
Finally we need to show that none of the other choices we made in defining the element $\lambda(\alpha)$ in Theorem \ref{beta exists} have an effect. These were: \begin{itemize} \item the representative of the homology class $\alpha$; \item the choice of pairing of faces of simplices of $\alpha$ to form the complex $K$; \item the choice of the point $z\in\partial\overline\H^3$ in Theorem \ref{beta exists}. \end{itemize} To prove this we will use a bordism theory based on $n$-cycles which we describe next.
An \emph{ordered $n$-cycle with boundary} is defined by the following: \begin{itemize} \item a finite collection of ordered $n$-simplices $\Delta_i$ is given; \item each simplex has a sign $\epsilon_i=\pm1$; \item the $(n-1)$-faces of the simplices $\Delta_i$ are given signs as follows: the sign of the $(n-1)$-face that omits the $j$-th vertex of $\Delta_i$ is $(-1)^j\epsilon_i$;
\item a subset $D$ of the $(n-1)$-faces of the collection of simplices is given; \item the $(n-1)$-faces which are not in $D$ are paired in such a way that the two faces in each pair have opposite signs. \end{itemize} We can form a geometric $n$-complex $K$ by realizing the $\Delta_i$ by geometric $n$-simplices and gluing (by affine isomorphisms that respect the vertex orderings) faces that are paired. The result is a space that is an oriented $n$-manifold with boundary in the complement of its $(n-2)$-skeleton. We shall use the same letter $K$ to denote the underlying combinatorial object and the geometric realization.
If $K$ is an ordered $n$-cycle with boundary as above, we say it is \emph{closed} if $D$ is empty. If $K$ is not closed, then $D$ inherits the structure of a closed ordered $(n-1)$-cycle, and we denote this structure by $\partial K$. The pairing on the faces of the simplices of $D$ can be described as follows. Consider the set of pairs $(\sigma,\tau)$ consisting of an $(n-2)$-face $\tau$ of an $(n-1)$-face $\sigma$ of a simplex of $K$. Construct a graph with this set as vertex set and with edges of two types: $(\sigma,\tau)$ is connected to $(\sigma',\tau')$ by an edge if either $\sigma$ and $\sigma'$ are adjacent faces of an $n$-simplex of $K$ and $\tau=\tau'$ is their common $(n-2)$-face, or if $\sigma$ and $\sigma'$ are paired $(n-1)$-simplices and $\tau$ and $\tau'$ are corresponding faces under an order-preserving matching of $\sigma$ with $\sigma'$. Each vertex $(\sigma,\tau)$ of this graph has valency $2$ except when $\sigma$ is in $D$, in which case the valency is $1$. Thus the graph consists of a collection of arcs and cycles. The endpoints of each arc give two $(n-2)$-faces of elements of $D$ that are to be paired.
\section{The true extended Bloch group}
Recall that $\Cover$ consists of four components $X_{00}$, $X_{01}$, $X_{10}$, and $X_{11}$, of which $X_{00}$ is naturally the universal abelian cover of $\C-\{0,1\}$.
Let $\liftFT_{00}$ be $\liftFT\cap (X_{00})^5$, so $\liftFT_{00}=\liftFT_0+2V$ in the notation of Definition \ref{liftFT}. As mentioned earlier, $\liftFT_{00}$ is, in fact, equal to $\liftFT_0$, but we do not need this.
Define $\eeprebloch(\C)$ to be the free $\Z$-module on $X_{00}$ factored by all instances of the relation \begin{equation} \sum_{i=0}^4(-1)^i(x_i;2p_i,2q_i)=0\quad\text{with $\bigl((x_0;2p_0,2q_0),\dots,(x_4;2p_4,2q_4)\bigr)\in\liftFT_{00}$}. \end{equation}
As before, we have a well-defined map \begin{equation*} \nu\colon\eeprebloch(\C)\to\C\wedge\C, \end{equation*} given by $\nu[z;2p,2q]=(\log z+2p\pi i)\wedge(-\log(1-z)+2q\pi i)$, and we define \begin{equation} \eebloch(\C):=\operatorname{Ker}\nu. \end{equation}
The proof of Proposition \ref{R} shows: \begin{proposition} The function
$R(z;2p,2q):=\calR(z)+\pi i (p\log(1-z)+q\log z)-\frac{\pi^2}6$
gives a well-defined map $X_{00}\to\C/2\pi^2\Z$ and induces a homomorphism $R\colon \eeprebloch(\C)\to\C/2\pi^2\Z$. \qed\end{proposition}
We can repeat the computations in Sect.~\ref{consequences} word-for-word, replacing anything of the form $[x,p,q]$ by $[x,2p,2q]$, to show that $\operatorname{Ker}\bigl(\eeprebloch(\C)\to\prebloch(\C)\bigr)$ is generated by elements of the form $\hat\chi(z):=[z,0,2]-[z,0,0]$. We thus get: \begin{proposition} The following sequence is exact: \begin{equation*} 0\to\C^*\stackrel{\hat\chi}\longrightarrow \eeprebloch(\C)\to\prebloch(\C)\to 0 \end{equation*} where $\eeprebloch(\C)\to\prebloch(\C)$ is the natural map and $\hat\chi(z):= [z,0,2]-[z,0,0]$ for $z\in \C^*$. \end{proposition} \begin{proof} The only thing to prove is the injectivity of $\hat\chi$, which follows by noting that $R\bigl(\hat\chi(z)\bigr)=\pi i\log z\in\C/2\pi^2\Z$.\end{proof} \begin{corollary} We have a commutative diagram with exact rows and columns: \begin{equation*} \begin{CD} && 0 && 0 \\ && @VVV @VVV \\ && \Z/2 @>=>> \Z/2 \\ && @VVV @VVV \\ 0 @>>> \C^* @>\hat\chi>> \eeprebloch(\C) @>>> \prebloch(\C) @>>> 0\\ && @VV z\mapsto z^2V @VVV @VV=V \\ 0 @>>> \C^* @>\chi>> \eprebloch(\C) @>>> \prebloch(\C) @>>> 0\\ && @VVV @VVV \\ && 0 && 0 && \end{CD} \end{equation*} and analogously for the Bloch group: \begin{equation*} \begin{CD} && 0 && 0 \\ && @VVV @VVV \\ && \Z/2 @>=>> \Z/2 \\ && @VVV @VVV \\ 0 @>>> \mu^* @>\hat\chi>> \eebloch(\C) @>>> \bloch(\C) @>>> 0\\ && @VV z\mapsto z^2V @VVV @VV=V \\ 0 @>>> \mu^* @>\chi>> \ebloch(\C) @>>> \bloch(\C) @>>> 0\\ && @VVV @VVV \\ && 0 && 0 && \end{CD} \end{equation*} \end{corollary}
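One orienting remark on the second diagram: a direct computation gives
\begin{equation*}
\nu\bigl(\hat\chi(z)\bigr)=\log z\wedge 2\pi i,
\end{equation*}
which vanishes in $\C\wedge\C$ exactly when $\log z$ is a rational multiple of $\pi i$, that is, when $z$ is a root of unity; this is why it is precisely $\mu^*$ that $\hat\chi$ carries into $\eebloch(\C)$.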
\end{document}
\begin{definition}[Definition:Pointwise Maximum of Mappings/Real-Valued Functions]
Let $S$ be a set.
Let $f, g: S \to \R$ be real-valued functions.
Let $\max$ be the max operation on $\R$ (Ordering on Real Numbers is Total Ordering ensures it is in fact defined).
Then the '''pointwise maximum of $f$ and $g$''', denoted $\map \max {f, g}$, is defined by:
:$\map \max {f, g}: S \to \R: \map {\map \max {f, g} } x := \map \max {\map f x, \map g x}$
'''Pointwise maximum''' thence is an instance of a pointwise operation on real-valued functions.
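As a quick illustration of the definition, take $S = \R$, $\map f x = x$ and $\map g x = -x$. Then:
:$\map {\map \max {f, g} } x = \map \max {x, -x} = \size x$
so the '''pointwise maximum''' of two everywhere differentiable real-valued functions need not be differentiable.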
\end{definition}
Set and Computability in Udine
Logic Group
Cross-Alps Logic Seminar
All seminars will be held remotely using Webex. Please write to luca.mottoros [at] unito.it to obtain a link and the access code.
Recordings of some talks are available at this page.
January 13th, 2022, 17.00-18.00 (Online on Webex)
M. Magidor (Hebrew University of Jerusalem) "Sets of reals are not created equal: regularity properties of subsets of the reals and other Polish spaces".
A "pathological set" can be a non measurable set, a set which does not have the property of Baire (namely it is not a Borel set modulo a first category set).
A subset of the collection of infinite sets of natural numbers can be considered to be "pathological" if it is a counterexample to the infinitary Ramsey theorem. Namely, there does not exist an infinite set of natural numbers such that all its infinite subsets are in our set or all its infinite subsets are not in the set.
A subset $A$ of the Baire space can be considered to be "pathological" if the infinite game associated with it is not determined. The game is an infinite game where two players alternate picking natural numbers, forming an infinite sequence, namely a member of the Baire space. The first player wins the round if the resulting sequence is in $A$. The game is determined if one of the players has a winning strategy.
A prevailing paradigm in Descriptive Set Theory is that sets that have a "simple description" should not be pathological. Evidence for this maxim is the fact that Borel sets are not pathological in any of the senses above. In this talk we shall present a notion of "super regularity" for subsets of a Polish space, the family of universally Baire sets. This family of sets generalizes the family of Borel sets and forms a $\sigma$-algebra. We shall survey some regularity properties of universally Baire sets, such as their measurability with respect to any regular Borel measure, the fact that they have an infinitary Ramsey property, etc. Some of these theorems will require assuming some strong axioms of infinity. Most of the talk should be accessible to a general mathematical audience, but in the second part we shall survey some newer results.
December 17th, 2021, 16.00-18.00 (Online on Webex)
M. Rathjen (University of Leeds) "Well-ordering principles in Proof theory and Reverse Mathematics" (Video).
There are several familiar theories of reverse mathematics that can be characterized by well-ordering principles of the form (*) "if $X$ is well ordered then $f(X)$ is well ordered", where $f$ is a standard proof theoretic function from ordinals to ordinals (such $f$'s are always dilators). Some of these equivalences have been obtained by recursion-theoretic and combinatorial methods. They (and many more) can also be shown by proof-theoretic methods, employing search trees and cut elimination theorems in infinitary logic with ordinal bounds. One could perhaps generalize and say that every cut elimination theorem in ordinal-theoretic proof theory encapsulates a theorem of this type.
J. Hirst (Appalachian State University) "Reverse mathematics and Banach's theorem" (Slides,Video).
The Schröder-Bernstein theorem asserts that if there are injections of two sets into each other, then there is a bijection between the sets. In his note "Un théorème sur les transformations biunivoques," Banach proved an extension of the Schröder-Bernstein theorem in which the values of the bijection between the sets depend directly on the injections. This talk will present some old theorems of reverse mathematics about restrictions of Banach's theorem. Also, we will look at preliminary results of work with Carl Mummert on restrictions of Banach's theorem in higher order reverse mathematics. The talk will not assume familiarity with reverse mathematics.
December 3rd, 2021, 16.00-18.00 (Online on Webex)
G. Goldberg (University of California, Berkeley) "The optimality of Usuba's theorem" (Video).
The method of forcing was introduced by Cohen in his proof of the independence of the Continuum Hypothesis and has since been used to demonstrate that a diverse array of set theoretic problems are formally unsolvable from the standard ZFC axioms. The technique allows one to expand a model of ZFC by adjoining to it a generic set. The resulting forcing extension is again a model of ZFC that may have a very different first order theory from the original structure; for example, according to one's tastes, one can build forcing extensions in which the Continuum Hypothesis is either true or false, demonstrating that the ZFC axioms can neither prove nor refute the Continuum Hypothesis. But does the forcing technique really show that the Continuum Hypothesis has no truth value? This seems to hinge on whether one believes that the true universe of sets (which the ZFC axioms attempt to axiomatize) could itself be a forcing extension of a smaller model of ZFC. This talk concerns a theorem of Usuba that bears on this question. I'll discuss recent work proving the optimality of the large cardinal hypothesis of Usuba's theorem and some applications of the associated techniques to questions outside the theory of forcing.
November 26th, 2021, 16.00-17.00 (Online on Webex)
A. Martin-Pizarro (University of Freiburg) "On abelian corners and squares" (Video).
Given an abelian group \(G\), a corner is a subset of pairs of the form \(\{ (x,y), (x+g, y), (x, y+g)\}\) with \(g\) non-trivial. Ajtai and Szemerédi proved that, asymptotically for finite abelian groups, every dense subset \(S\) of \(G\times G\) contains a corner. Shkredov gave a quantitative lower bound on the density of the subset \(S\). In this talk, we will explain how model-theoretic conditions on the subset \(S\), such as local stability, will imply the existence of corners and of cubes for (pseudo-)finite abelian groups. This is joint work with D. Palacin (Madrid) and J. Wolf (Cambridge).
November 19th, 2021, 16.00-17.00 (Online on Webex and in Aula 4, Palazzo Campana, Torino)
A. Törnquist (University of Copenhagen) "Set-theoretic aspects of a proposed model of the mind in psychology" (Slides).
Jens Mammen (Professor Emeritus of psychology at Aarhus and Aalborg University) has developed a theory in psychology which aims to provide a model for the interface between a human being (and mind) and the real world.
This theory is formalized in a very mathematical way: indeed, it is described through a mathematical axiom system. Realizations ("models") of this axiom system consist of a non-empty set $U$ (the universe of objects), as well as a perfect Hausdorff topology $\mathcal{S}$ on $U$, and a family $\mathcal{C}$ of subsets of $U$ which must satisfy certain axioms in relation to $\mathcal{S}$. The topology $\mathcal{S}$ is used to model broad categories that we sense in the world (e.g., all the stones on a beach) and $\mathcal{C}$ is used to model the process of selecting an object in a category that we sense (e.g., a specific stone on the beach that we pick up). The most desirable kind of model of Mammen's theory is one in which every subset of $U$ is the union of an open set in $\mathcal{S}$ and a set in $\mathcal{C}$. Such a model is called "complete".
Coming from mathematics, models of Mammen's theory were first studied in detail by J. Hoffmann-Joergensen in the 1990s. Hoffmann-Joergensen used the Axiom of Choice (AC) to show that a complete model of Mammen's axiom system, in which the universe $U$ is infinite, does exist. Hoffmann-Joergensen conjectured at the time that the existence of a complete model of Mammen's axioms would imply the Axiom of Choice.
In this talk, I will discuss various set-theoretic aspects related to complete Mammen models; firstly, the question of "how much" AC is needed to obtain a complete Mammen model; secondly, I will introduce some cardinal invariants related to complete Mammen models and establish elementary ZFC bounds for them, as well as some consistency results.
This is joint work with Jens Mammen.
S. Müller (TU Wien) "Large Cardinals and Determinacy" (Video).
Determinacy assumptions, large cardinal axioms, and their consequences are widely used and have many fruitful implications in set theory and even in other areas of mathematics. Many applications, in particular, proofs of consistency strength lower bounds, exploit the interplay of determinacy axioms, large cardinals, and inner models. I will survey some recent results in this flourishing area. This, in particular, includes results on connecting the determinacy of longer games to canonical inner models with many Woodin cardinals, a new lower bound for a combinatorial statement about infinite trees, as well as an application of determinacy answering a question in general topology.
November 5th, 2021, 16.00-18.00 (Online on Webex)
A. Zucker (University of California San Diego) "Big Ramsey degrees in binary free amalgamation classes" (Video).
In structural Ramsey theory, one considers a "small" structure A, a "medium" structure B, a "large" structure C and a number r, then considers the following combinatorial question: given a coloring of the copies of A inside C in r colors, can we find a copy of B inside C all of whose copies of A receive just one color? For example, when C is the rational linear order and A and B are finite linear orders, then this follows from the finite version of the classical Ramsey theorem. More generally, when C is the Fraïssé limit of a free amalgamation class in a finite relational language, then for any finite A and B in the given class, this can be done by a celebrated theorem of Nešetřil and Rödl. Things get much more interesting when both B and C are infinite. For example, when B and C are the rational linear order and A is the two-element linear order, a pathological coloring due to Sierpiński shows that this cannot be done. However, if we weaken our demands and only ask for a copy of B inside C whose copies of A receive "few" colors, rather than just one color, we can succeed. For the two-element linear order, we can get down to two colors. For the three-element order, 16 colors. This number of colors is called the big Ramsey degree of a finite structure in a Fraïssé class. Recently, building on groundbreaking work of Dobrinen, I proved a generalization of the Nešetřil-Rödl theorem to binary free amalgamation classes defined by a finite forbidden set of irreducible structures (for instance, the class of triangle-free graphs), showing that every structure in every such class has a finite big Ramsey degree. My work only bounded the big Ramsey degrees, and left open what the exact values were. In recent joint work with Balko, Chodounský, Dobrinen, Hubička, Konečný, and Vena, we characterize the exact big Ramsey degree of every structure in every binary free amalgamation class defined by a finite forbidden set.
October 1st, 2021, 14.30-15.30 (DMIF, Sala Riunioni)
M. Fiori Carones (University of Warsaw) "Different behaviour of $CRT^2_2$ and $COH$ over $RCA_0^*$".
The common base theory of reverse mathematics is the theory $RCA_0$, which guarantees the existence of $\Delta^0_1$-definable sets and where mathematical induction for $\Sigma^0_1$-formulae holds. In 1986, Simpson and Smith introduced a different base theory, $RCA_0^*$, where induction is weakened to $\Delta^0_1$-formulae. In more recent years Kolodziejczyk, Kowalik, Wong, and Yokoyama started wondering about the strength of Ramsey's theorem over $RCA_0^*$. In this talk we concentrate on two well-known consequences of Ramsey's theorem for pairs $RT^2_2$, namely the Cohesive Ramsey theorem for pairs $CRT^2_2$ and the Cohesion principle $COH$. Over $RCA_0$ both follow from $RT^2_2$, while we proved that only $CRT^2_2$, and not $COH$, follows from $RT^2_2$ over $RCA_0^*$.
June 18th, 2021, 16.30-18.30 (Online on WebEx)
C. Brech (Universidade de São Paulo) "Isomorphic combinatorial families" (Video).
We will recall the notion of compact and hereditary families of finite subsets of some cardinal $\kappa$ and their corresponding combinatorial Banach spaces. We present a combinatorial version of Banach-Stone theorem, which leads naturally to a notion of isomorphism between families. Our main result shows that different families on $\omega$ are not isomorphic, if we assume them to be spreading. We also discuss the difference between the countable and the uncountable setting. This is a joint work with Claribet Piña.
V. Gitman (CUNY Graduate Center) "The old and the new of virtual large cardinals" (Video).
The idea of defining a generic version of a large cardinal by asking that some form of the elementary embeddings characterizing the large cardinal exist in a forcing extension has a long history. A large cardinal (typically measurable or stronger) can give rise to several natural generic versions with vastly different properties. For a \emph{generic large cardinal}, a forcing extension should have an elementary embedding $j:V\to M$ of the form characterizing the large cardinal where the target model $M$ is an inner model of the forcing extension, not necessarily contained in $V$. The closure properties on $M$ must correspondingly be taken with respect to the forcing extension. Very small cardinals such as $\omega_1$ can be generic large cardinals under this definition. Quite recently set theorists started studying a different version of generic-type large cardinals, called \emph{virtual large cardinals}. Large cardinals characterized by the existence of an elementary embedding $j:V\to M$ typically have equivalent characterizations in terms of the existence of set-sized embeddings of the form $j:V_\lambda\to M$. For a virtual large cardinal, a forcing extension should have an elementary embedding $j:V_\lambda\to M$ of the form characterizing the large cardinal with $M\in V$ and all closure properties on $M$ considered from $V$'s standpoint. Virtual large cardinals are actually large cardinals: they are completely ineffable and more, but usually bounded above by an $\omega$-Erdös cardinal. Despite sitting much lower in the large cardinal hierarchy, they mimic the reflecting properties of their original counterparts. Several of these notions arose naturally out of equiconsistency results. In this talk, I will give an overview of the virtual large cardinal hierarchy including some surprising recent directions.
June 4th, 2021, 16.30-18.30 (Online on WebEx)
M. Pinsker (Vienna University of Technology) "Uniqueness of Polish topologies on endomorphism monoids of countably categorical structures" (Video).
"The automorphism group Aut(A) of a countable countably categorical structure A, viewed as a topological group equipped with the topology of pointwise convergence, carries sufficient information about the structure A to reconstruct it up to bi-interpretability. It turns out that in many cases, including the order of the rational numbers or the random graph, the algebraic group structure of Aut(A) alone is sufficient for this kind of reconstruction, since its topology is already uniquely determined by it. Which structures A have this property has been subject to investigations for many years. Sometimes, we wish to associate to the structure A other objects than Aut(A) which retain more information about A; for example, its endomorphism monoid End(A) or its polymorphism clone Pol(A) are such objects. As in the case for automorphism groups, these objects are naturally equipped with the topology of pointwise convergence on top of their algebraic structure. We consider the question of when the former is already uniquely determined by the latter. In particular, we show that the endomorphism monoid of the random graph has a unique Polish topology, namely that of pointwise convergence. In the first part of the talk, which I hope to make accessible to anyone, I present a history of the known and unknown results as well as our new ones, and outline the differences between groups and monoids in this context. In the second part, which I also hope to make accessible to anyone, I try to outline the proof methods for our new results. This is joint work with L. Elliott, J. Jonušas, J. D. Mitchell, and Y. Péresse."
May 28th, 2021, 16.30-18.30 (Online on WebEx)
D. Bartosova (University of Florida) "Short exact sequences and universal minimal flows" (Video).
We will investigate an interplay between short exact sequences of topological groups and their universal minimal flows in case one of the factors is compact. We will discuss possible and impossible extensions of the results in a few directions. An indispensable ingredient in our technique is a description of the universal pointed flow of a given group in terms of filters on the group, which we will describe.
May 21st, 2021, 16.30-18.30 (Online on WebEx)
L. Westrick (Penn State University) "Borel combinatorics fail in HYP" (Video).
Of the principles just slightly weaker than ATR, the most well-known are the theories of hyperarithmetic analysis (THA). By definition, such principles hold in HYP. Motivated by the question of whether the Borel Dual Ramsey Theorem is a THA, we consider several theorems involving Borel sets and ask whether they hold in HYP. To make sense of Borel sets without ATR, we formalize the theorems using completely determined Borel sets. We characterize the completely determined Borel subsets of HYP as precisely the sets of reals which are $\Delta^1_1$ in $L_{\omega_1^\mathrm{CK}}$. Using this, we show that in HYP, Borel sets behave quite differently than in reality. In HYP, the Borel dual Ramsey theorem fails, every $n$-regular Borel acyclic graph has a Borel $2$-coloring, and the prisoners have a Borel winning strategy in the infinite prisoner hat game. Thus the negations of these statements are not THA. Joint work with Henry Towsner and Rose Weisshaar.
R. Sklinos (Stevens Institute of Technology) "Fields interpretable in the free group" (Video).
After Sela and Kharlampovich-Myasnikov proved that nonabelian free groups share the same common theory, a model-theoretic interest in the theory of the free group arose. Moreover, maybe surprisingly, Sela proved that this common theory is stable. Stability is the first dividing line in Shelah's classification theory and it is equivalent to the existence of a nicely behaved independence relation - forking independence. This relation, in the theory of the free group, has been proved (Ould Houcine-Tent and Sklinos) to be as complicated as possible (n-ample for all n). This behavior of forking independence is usually witnessed by the existence of an infinite field. We prove that no infinite field is interpretable in the theory of the free group, giving the first example of a stable group which is ample but does not interpret an infinite field.
May 7th, 2021, 16.30-18.30 (Online on WebEx)
M. Valenti (University of Udine) "Uniform reducibility and descending sequences through ill-founded orders" (Video).
We explore the uniform computational strength of the problem DS of computing an infinite descending sequence through an ill-founded linear order. This is done by characterizing its degree from the point of view of Weihrauch reducibility, and comparing it with the one of other classical problems, like the problem of finding a path through an ill-founded tree (known as choice on the Baire space). We show that, despite being "hard" to compute, the lower cone of DS misses many arithmetical problems (in particular, DS uniformly computes only the limit computable functions). We also generalize our results in the context of arithmetically or analytically presented quasi orders. In particular, we use a technique based on inseparable $\Pi^1_1$ sets to separate $\Sigma^1_1$-DS from the choice on Baire space.
April 30th, 2021, 16.30-18.30 (Online on WebEx)
S. Barbina (Open University) "The theory of the universal-homogeneous Steiner triple system" (Video).
A Steiner triple system is a set S together with a collection B of subsets of S of size 3 such that any two elements of S belong to exactly one element of B. It is well known that the class of finite Steiner triple systems has a Fraïssé limit, the countable homogeneous universal Steiner triple system M. In joint work with Enrique Casanovas, we have proved that the theory T of M has quantifier elimination, is not small, has TP2, NSOP1, eliminates hyperimaginaries and weakly eliminates imaginaries. In this talk I will review the construction of M, give an axiomatisation of T and prove some of its properties.
April 23rd, 2021, 16.30-18.30 (Online on WebEx)
F. Loregian (Tallinn University of Technology) "Functorial Semantics for Partial Theories" (Video).
We provide a Lawvere-style definition for partial theories, extending the classical notion of equational theory by allowing partially defined operations. As in the classical case, our definition is syntactic: we use an appropriate class of string diagrams as terms. This allows for equational reasoning about the class of models defined by a partial theory. We demonstrate the expressivity of such equational theories by considering a number of examples, including partial combinatory algebras and cartesian closed categories. Moreover, despite the increase in expressivity of the syntax we retain a well-behaved notion of semantics: we show that our categories of models are precisely locally finitely presentable categories, and that free models exist.
A. Poveda (Hebrew University of Jerusalem) "Forcing iterations around singular cardinals and an application to stationary reflection" (Video).
"In this talk we will give an overview of the theory of \Sigma-Prikry forcings and their iterations, recently introduced in a series of papers. We will begin motivating the class of \Sigma-Prikry forcings and showing that this class is broad enough to encompass many Prikry-type posets that center on countable cofinalities. Afterwards, we will present a viable iteration scheme for this family and discuss an application of the framework to the investigation of stationary reflection at the level of successors of singular cardinals. This is joint work with A. Rinot and D. Sinapova."
April 9th, 2021, 16.30-18.30 (Online on WebEx)
A. Berarducci (University of Pisa) "Asymptotic analysis of Skolem's exponential functions" (Video).
Skolem (1956) studied the germs at infinity of the smallest class of real-valued functions on the positive real line containing the constant $1$, the identity function $x$, and such that whenever $f$ and $g$ are in the set, $f+g$, $fg$ and $f^g$ are also in the set. This set of germs is well ordered and Skolem conjectured that its order type is epsilon-zero. Van den Dries and Levitz (1984) computed the order type of the fragment below $2^{2^x}$. They did so by studying the possible limits at infinity of the quotient $f(x)/g(x)$ of two functions in the fragment: if $g$ is kept fixed and $f$ varies, the possible limits form a discrete set of real numbers of order type $\omega$. Using the surreal numbers, we extend the latter result to the whole class of Skolem functions and we discuss some additional progress towards the conjecture of Skolem. This is joint work with Marcello Mamino (http://arxiv.org/abs/1911.07576, to appear in the JSL).
March 26th, 2021, 16.30-18.30 (Online on WebEx)
V. Dimonte (University of Udine) "The role of Prikry forcing in generalized Descriptive Set Theory" (Video).
In this seminar we want to take stock of some of the most important applications of the peculiarities of Prikry-like forcings on generalized descriptive set theory. In our case, with generalized descriptive set theory we mean the study of definable subsets of $\lambda^\omega$, with $\lambda$ uncountable cardinal of countable cofinality. It turns out that in this case there is a lot of symmetry with the classical case of Polish spaces, and we are going to provide three examples where the particular combinatorial structure of Prikry-like forcings comes in to save the day: an adequate definition of $\lambda$-Baire property for the generalized case, a generic absoluteness result under the very large cardinal I0, and the construction of a Solovay-like model for $\lambda^\omega$, i.e., the construction of a model where each subset of $\lambda^\omega$ either has cardinality less or equal then $\lambda$, or we can embed in it the whole $\lambda^\omega$.
G. Paolini (Turin) "Torsion-Free Abelian Groups are Borel Complete" (Video).
We prove that the Borel space of torsion-free Abelian groups with domain $\omega$ is Borel complete, i.e., the isomorphism relation on this Borel space is as complicated as possible, as an isomorphism relation. This solves a long-standing open problem in descriptive set theory, which dates back to the seminal paper on Borel reducibility of Friedman and Stanley from 1989.
C. Conley (Carnegie Mellon University) "Dividing the sphere by rotations" (Video).
We say that a subset A of the sphere r-divides it if r-many rotations of A perfectly tile the sphere's surface. Such divisions were first exhibited by Robinson (1947) and developed by Mycielski (1955). We discuss a colorful approach to finding these divisions which are Lebesgue measurable or possess the property of Baire. This includes joint work with J. Grebik, A. Marks, O. Pikhurko, and S. Unger.
March 5th, 2021, 16.30-18.30 (Online on WebEx)
N. de Rancourt (University of Wien) "A dichotomy for countable unions of smooth Borel equivalence relations" (Video).
I will present a dichotomy for equivalence relations on Polish spaces that can be expressed as countable unions of smooth Borel equivalence relations. It can be seen as an extension of Kechris-Louveau's dichotomy for hypersmooth Borel equivalence relations. A generalization of our dichotomy, for equivalence relations that can be expressed as countable unions of Borel equivalence relations belonging to certain fixed classes, will also be presented. This is a joint work with Benjamin Miller.
February 26th, 2021, 16.30-18.30 (Online on WebEx)
S. Barbina (Open University) "The theory of the universal-homogeneous Steiner triple system" (CANCELED).
P. Shafer (University of Leeds) "An inside-outside Ramsey theorem in the Weihrauch degrees" (Video).
Recall Ramsey's theorem for pairs and two colors, which, in terms of graphs, may be phrased as follows: For every countably infinite graph $G$, there is an infinite set of vertices $H$ such that either every pair of distinct vertices from $H$ is adjacent or no pair of distinct vertices from $H$ is adjacent. The conclusion of Ramsey's theorem gives complete information about how the vertices in $H$ relate to each other, but it gives no information about how the vertices outside $H$ relate to the vertices inside $H$. Rival and Sands (1980) proved the following theorem, which weakens the conclusion of Ramsey's theorem with respect to pairs of vertices in $H$, but does add information about how the vertices outside $H$ relate to the vertices inside $H$: For every countably infinite graph $G$, there is an infinite set of vertices $H$ such that each vertex of $G$ is either adjacent to no vertices of $H$, to exactly one vertex of $H$, or to infinitely many vertices of $H$. We give an exact characterization of the computational strength of the Rival-Sands theorem by showing that it is strongly Weihrauch equivalent to the double-jump of weak König's lemma (which is the problem of producing infinite paths through infinite trees that are given by $\Delta^0_3$ approximations). In terms of Ramsey's theorem, this means that solving one instance of the Rival-Sands theorem is equivalent to simultaneously solving countably many instances of Ramsey's theorem for pairs and two colors in parallel. This work is joint with Marta Fiori Carones and Giovanni Soldà.
A. Kwiatkowska (University of Münster) "The automorphism group of the random poset".
"A number of well-studied properties of Polish groups concern the interactions between the topological and algebraic structure of those groups. Examples of such properties are the small index property, the automatic continuity, and the Bergman property. An important approach for proving them is showing that the group has ample generics. Therefore we are often interested whether a given Polish group has a comeager conjugacy class, i.e a generic element, a generic pair, or more generally, a generic n-tuple. After a survey on this topic, I will discuss a recent result joint with Aristotelis Panagiotopoulos, where we show that the automorphism group of the random poset does not admit a generic pair. This answers a question of Truss and Kuske-Truss."
February 5th, 2021, 16.30-18.30 (Online on WebEx)
M. Viale (University of Turin) "Tameness for set theory" (Video).
We show that (assuming large cardinals) set theory is a tractable (and we dare to say tame) first order theory when formalized in a first order signature with natural predicate symbols for the basic definable concepts of second and third order arithmetic, and appealing to the model-theoretic notions of model completeness and model companionship. Specifically we develop a general framework linking generic absoluteness results to model companionship and show that (with the required care in details) a property formalized in an appropriate language for second or third order number theory is forcible from some T extending ZFC + large cardinals if and only if it is consistent with the universal fragment of T if and only if it is realized in the model companion of T. Part (but not all) of our results are a byproduct of the groundbreaking result of Schindler and Asperò showing that Woodin's axiom (*) can be forced by a stationary set preserving forcing.
V. Fischer (University of Wien) "The spectrum of independence" (Video).
Families of infinite sets of natural numbers are said to be independent if for every two disjoint non-empty subfamilies the intersection of the members of the first subfamily with the complements of the members of the second family is infinite. Maximal independent families are independent families which are maximal under inclusion. In this talk, we will consider the set of cardinalities of maximal independent families, referred to as the spectrum of independence, and show that this set can be quite arbitrary. This is a joint work with Saharon Shelah.
January 22nd, 2021, 16.30-18.30 (Online on WebEx)
R. Schindler (University of Muenster) "Martin's Maximum^++ implies the P_max axiom (*)" (Video).
Forcing axioms spell out the dictum that if a statement can be forced, then it is already true. The P_max axiom (*) goes beyond that by claiming that if a statement is consistent, then it is already true. Here, the statement in question needs to come from a restricted class of statements, and "consistent" needs to mean "consistent in a strong sense." It turns out that (*) is actually equivalent to a forcing axiom, and the proof is by showing that the (strong) consistency of certain theories gives rise to a corresponding notion of forcing producing a model of that theory. This is joint work with D. Asperó building upon earlier work of R. Jensen and (ultimately) Keisler's "consistency properties."
A. Freund (TU Darmstadt) "Ackermann, Goodstein, and infinite sets" (Video).
This seminar is part of the event World Logic Day 2021
In this talk, I show how Goodstein's classical theorem can be turned into a statement that entails the existence of complex infinite sets, or in other words: into an object of reverse mathematics. This more abstract approach allows for very uniform results of high explanatory power. Specifically, I present versions of Goodstein's theorem that are equivalent to arithmetical comprehension and arithmetical transfinite recursion. To approach the latter, we will study a functorial extension of the Ackermann function to all ordinals. The talk is based on a joint paper with J. Aguilera, M. Rathjen and A. Weiermann.
January 8th, 2021, 16.30-18.30 (Online on WebEx)
F. Calderoni (University of Illinois at Chicago) "The Borel structure on the space of left-orderings" (Video).
In this talk we shall present some results on left-orderable groups and their interplay with descriptive set theory. We shall discuss how Borel classification can be used to analyze the space of left-orderings of a given countable group modulo the conjugacy action. In particular, we shall see that if G is not locally indicable then the conjugacy relation on LO(G) is not smooth. Also, if G is a nonabelian free group, then the conjugacy relation on LO(G) is a universal countable Borel equivalence relation. Our results address a question of Deroin-Navas-Rivas and show that in many cases LO(G) modulo the conjugacy action is nonstandard. This is joint work with A. Clay.
M. Eskew (Vienna) "Weak square from weak presaturation" (Video).
Can we have both a saturated ideal and the tree property on $\aleph_2$? Towards the negative direction, we show that for a regular cardinal $\kappa$, if $2^{<\kappa}\leq\kappa^+$ and there is a weakly presaturated ideal on $\kappa^+$ concentrating on cofinality $\kappa$, then $\square^*_\kappa$ holds. This partially answers a question of Foreman and Magidor about the approachability ideal on $\aleph_2$. A surprising corollary is that if there is a presaturated ideal $J$ on $\aleph_2$ such that $P(\aleph_2)/J$ is a semiproper forcing, then CH holds. This is joint work with Sean Cox.
A. Shani (Harvard University) "Anti-classification results for countable Archimedean groups" (Video).
We study the isomorphism relation for countable ordered Archimedean groups. We locate its complexity with respect to the hierarchy defined by Hjorth, Kechris, and Louveau, showing in particular that its potential complexity is $\mathrm{D}(\mathbf{\Pi}^0_3)$, and it cannot be classified using countable sets of reals as invariants. We obtain analogous results for the bi-embeddability relation, and we consider similar problems for circularly ordered groups and ordered divisible Abelian groups. This is joint work with F. Calderoni, D. Marker, and L. Motto Ros.
December 4th, 2020, 16.30-18.30 (Online on WebEx)
L. San Mauro (Vienna) "Revisiting the complexity of word problems " (Video).
The study of word problems dates back to the work of Dehn in 1911. Given a recursively presented algebra $A$, the word problem of $A$ is to decide if two words in the generators of $A$ refer to the same element. Nowadays, much is known about the complexity of word problems for algebraic structures: e.g., the Novikov-Boone theorem – one of the most spectacular applications of computability to general mathematics – states that the word problem for finitely presented groups is unsolvable. Yet, the computability theoretic tools commonly employed to measure the complexity of word problems (Turing or m-reducibility) are defined for sets, while it is generally acknowledged that many computational facets of word problems emerge only if one interprets them as equivalence relations. In this work, we revisit the world of word problems through the lens of the theory of equivalence relations, which has grown immensely in recent decades. To do so, we employ computable reducibility, a natural effectivization of Borel reducibility. This is joint work with Valentino Delle Rose and Andrea Sorbi.
M. Viale (Turin) "Tameness for set theory" (CANCELED).
We show that (assuming large cardinals) set theory is a tractable (and we dare to say tame) first order theory when formalized in a first order signature with natural predicate symbols for the basic definable concepts of second and third order arithmetic, and appealing to the model-theoretic notions of model completeness and model companionship. Specifically we develop a general framework linking generic absoluteness results to model companionship and show that (with the required care in details) a property formalized in an appropriate language for second or third order number theory is forcible from some $T$ extending ZFC + large cardinals if and only if it is consistent with the universal fragment of T if and only if it is realized in the model companion of $T$. Part (but not all) of our results are a byproduct of the groundbreaking result of Schindler and Asperò showing that Woodin's axiom (*) can be forced by a stationary set preserving forcing.
P. Holy (Udine) "Large Cardinal Operators".
Many notions of large cardinals have associated ideals, and also operators on ideals. Classical examples of this are the subtle, the ineffable, the pre-Ramsey and the Ramsey operator. We will recall their definitions, and show that they can be seen to fit within a framework for large cardinal operators below measurability. We will use this framework to introduce a new operator, that is closely connected to the notion of a $T_\omega^\kappa$-Ramsey cardinal that was recently introduced by Philipp Luecke and myself, and we will provide a sample result about our framework that generalizes classical results of James Baumgartner.
C. Agostini (Turin) "Large cardinals, elementary embeddings and "generalized" ultrafilters", part 2 (Slides).
This talk is a survey on large cardinals and the relations between elementary embeddings and ultrafilters.
A measurable cardinal is a cardinal $\kappa$ such that there exists a non-principal, $\kappa$-complete ultrafilter on $\kappa$. This was the first large cardinal defined through ultrafilters. Measurable cardinals, however, also have an alternative definition: a cardinal is measurable if and only if it is the critical point of a (definable) elementary embedding of the universe into an inner model of set theory. Historically, this second definition has proven more suitable to define large cardinals. One may require that the elementary embedding satisfy some additional properties to obtain even larger cardinals, and measurable cardinals soon became the smallest of a series of large cardinals defined through elementary embeddings. Later on, a definition with ultrafilters was discovered for many of these cardinals. Similarly to what happens for measurable cardinals, every elementary embedding generates an ultrafilter, and every ultrafilter generates an elementary embedding through the ultrapower of the universe. In this duality, properties of ultrafilters translate into properties of the embeddings, and vice versa. This led in particular to the definition of two new crucial properties of ultrafilters: fineness and normality.
In the first part of this talk, we will focus on the analysis and comprehension of these two properties of ultrafilters. In the second part, we will focus on the understanding of the "duality" between elementary embeddings and ultrafilters, with particular attention to the ultrafilters/elementary embeddings generated by supercompact and huge cardinals.
May 29th, 2020, 16.30-17.30 (Online)
P. Holy (Udine) "Generalized topologies on 2^kappa, Silver forcing, and the diamond principle".
I will talk about the connections between topologies on 2^kappa induced by ideals on kappa and topologies on 2^kappa induced by certain tree forcing notions, highlighting the connection of the topology induced by the nonstationary ideal with kappa-Silver forcing. Assuming that Jensen's diamond principle holds at kappa, we then generalize results on kappa-Silver forcing of Friedman, Khomskii and Kulikov that were originally shown for inaccessible kappa: In particular, I will present a proof that also in our situation, kappa-Silver forcing satisfies a strong form of Axiom A. By a result of Friedman, Khomskii and Kulikov, this implies that meager sets are nowhere dense in the nonstationary topology. If time allows, I will also sketch a proof of the consistency of the statement that every Delta^1_1 set (in the standard bounded topology on 2^kappa) has the Baire property in the nonstationary topology, again assuming the diamond principle to hold at kappa (rather than its inaccessibility). This is joint work with Marlene Koelbing, Philipp Schlicht and Wolfgang Wohofsky.
May 8th, 2020, 16.30-17.30 (Online)
M. Fiori Carones (Monaco) "Unique orderability of infinite interval graphs and reverse mathematics".
Interval graphs are graphs whose vertices can be mapped to intervals of a linear order in such a way that intervals associated to adjacent vertices have non-empty intersection. For each interval graph there exists an order whose incomparability relation corresponds to the adjacency relation of the graph. In general different orders can be associated to an interval graph. We are interested in capturing the class of interval graphs which have a unique, up to duality, order associated to them. In particular, we prove that a characterisation known to hold for finite connected interval graphs holds for infinite connected interval graphs as well. Finally, we settle the strength of this characterisation in the hierarchy of subsystems of second order arithmetic. (Joint work with Alberto Marcone)
April 24th, 2020, 16.00-18.00 (Online)
L. Carlucci (Rome) "Questions and results about the strength of(variants of) Hindman's Theorem".
Measuring the logical and computational strength of Hindman's Finite Sums Theorem is one of the main open problems in Reverse Mathematics since the seminal work of Blass, Hirst and Simpson from the late Eighties. Hindman's Theorem states that any finite coloring of the positive integers admits an infinite set such that all non-empty finite sums of distinct elements from that set have the same color. The strength of Hindman's Theorem is known to lie between ACA_0 and ACA_0^+ or, in computability-theoretic terms, between the Halting Problem and the degree of unsolvability of first-order arithmetic. In recent years, following a suggestion of Blass, researchers have investigated variants of Hindman's Theorem in which the type of sums that are required to be monochromatic is restricted by some natural constraint. Restrictions on the number of terms have received particular attention, but other structural constraints give rise to interesting phenomena as well. The main open problem remains that of finding a non-trivial upper bound on Hindman's Theorem for sums of at most two elements. By work of Kolodziejczyk, Lepore, Zdanowski and the author, this theorem implies ACA_0, yet it is an open problem in Combinatorics (Hindman, Leader, Strauss 2003) whether a proof of it exists that does not also prove the full version of the theorem. We will review some of the results from the recent past on this and related questions and discuss a number of old and new open problems.
April 1st, 2020, 14.30-16.30 (Online)
P. Holy (Udine) "Ideal Topologies".
The classical Cantor space is the space of all functions from the natural numbers to {0,1} with the topology induced by the bounded ideal -- that is, the topology given by the basic clopen sets [f] for functions f from a bounded subset of the natural numbers to {0,1}. When generalizing this from the natural numbers to the so-called higher Cantor space on some regular and uncountable cardinal kappa, this is usually done by working with the bounded ideal on kappa. However, there are other natural ideals on such cardinals apart from the bounded ideal -- in particular there is always the nonstationary ideal on kappa. In this talk, I will define and investigate some basic properties of spaces on regular and uncountable cardinals kappa using topologies based on general ideals, and in particular on the nonstationary ideal on kappa. This is joint work with Marlene Koelbing (Vienna), Philipp Schlicht (Bristol) and Wolfgang Wohofsky (Vienna).
March 13th, 2020, 14.00-16.00 (DMIF, Sala Riunioni)
L. Carlucci (Rome) "Questions and results about the strength of (variants of) Hindman's Theorem" (CANCELED).
March 3rd, 2020, 14.00-16.00 (DMIF, Sala Riunioni)
P. Holy (Udine) "Ideal Topologies" (CANCELED).
June 7th, 2019, 14.00-16.00 (DMIF, Sala Riunioni)
V. Torres-Peréz (Vienna) "Compactness Principles and Forcing Axioms without Martin's Axiom ".
Rado's Conjecture (RC) in the formulation of Todorcevic is the statement that every tree $T$ that is not decomposable into countably many antichains contains a subtree of cardinality $\aleph_1$ with the same property. Todorcevic has shown the consistency of this statement relative to the consistency of the existence of a strongly compact cardinal. RC implies the Singular Cardinal Hypothesis, a strong form of Chang's Conjecture, that the continuum is at most $\aleph_2$, the negation of $\Box(\theta)$ for every regular $\theta\geq\omega_2$, Fuchino's Fodor-type Reflection Principle, etc. These implications are very similar to the ones obtained from traditional forcing axioms such as MM or PFA. However, RC is incompatible even with $\mathrm{MA}_{\aleph_1}$. In this talk we will take the opportunity to give an overview of the results we obtained with different coauthors in the last few years, together with more recent ones. These new implications seem to continue suggesting that RC is a good alternative to forcing axioms. We will discuss to which extent this may hold true and where we can find some limitations. We will end the talk with some open problems and possible new directions. For example, we will also discuss some recent results with Liuzhen Wu and David Chodounsky regarding squares of the form $\Box(\theta, \lambda)$ and YPFA. This forcing axiom, a consequence of PFA, was introduced by David Chodounsky and Jindrich Zapletal, who proved that it has consequences similar to those of PFA, such as the P-Ideal Dichotomy, $2^{\aleph_0}= \aleph_2$, all $\aleph_2$-Aronszajn trees being special, etc. However, YPFA is consistent with the negation of $\mathrm{MA}_{\aleph_1}$.
January 29th, 2019, 17.00-18.30 (DMIF, Sala Riunioni)
X. Shi (Beijing) "Large cardinals and generalized degree structures".
A central task of modern set theory is to study various extensions of ZF/ZFC; for some, the aim is to search for the "right" extension of the current foundation. Large cardinal axioms were proposed by G\"{o}del as candidates, originally to settle the continuum problem. It turns out that they serve nicely as a scale for measuring the strength of most "natural" statements in set theory. Recursion theory is one of the big four branches of mathematical logic. Classical recursion theory studies the structure of Turing degrees. It has been extended/generalized to higher levels of computability/definability, as well as to higher ordinals/cardinals. However, these results do not go beyond ZFC. Recent studies reveal that there are deep connections between the strength of large cardinals and the complexity of generalized degree structures. I will present the latest developments in this new research program -- higher degree theory.
September 26th, 2018, 16.30-18.00 (DMIF, Aula multimediale)
P. Shafer (Leeds) "Describing the complexity of the "problem B is harder than problem A relation"".
Some mathematical problems are harder than others. Using concepts from computability theory, we formalize the "problem B is harder than problem A" relation and analyze its complexity. Our results express that this "harder than" relation is, in a certain sense, as complicated as possible, even when restricted to several special classes of mathematical problems.
W. Gomaa (Alexandria) "On the extension of computable real functions".
We investigate interrelationships among different notions from mathematical analysis, effective topology, and classical computability theory. Our main object of study is the class of computable functions defined over an interval with the boundary being a left-c.e. real number. We investigate necessary and sufficient conditions under which such functions can be computably extended. It turns out that this depends on the behavior of the function near the boundary as well as on the class of left-c.e. real numbers to which the boundary belongs, that is, how it can be constructed. A class of functions of particular interest is investigated: sawtooth functions constructed from computable enumerations of c.e. sets.
November 21st, 2017, 17.00-18.30 (DMIF, Aula multimediale)
V. Brattka (Munich) "How can one sort mathematical theorems".
It is common mathematical practice to say that one theorem implies another one. For instance, it is mathematical folklore that the Baire Category Theorem implies the Closed Graph Theorem and Banach's Inverse Mapping Theorem. However, after a bit of reflection it becomes clear that this notion of implication cannot be the usual logical implication that we teach to our undergraduate students, since all true theorems are logically equivalent to each other. What is actually meant by implication in this informal sense is rather something such as "one theorem is easily derivable from another one". However, what does "easily derivable" mean exactly? We present a survey on a recent computational approach to metamathematics that provides a formal definition of what "easily derivable" could mean. This approach borrows ideas from theoretical computer science, in particular the notion of a reducibility. The basic idea is that Theorem A is easily derivable from Theorem B if A is reducible to B in the sense that the input and output data of these theorems can be transferred into each other. In this case the task of Theorem A can be reduced to the task of Theorem B. Such reductions should at least be continuous and they are typically considered to be computable, which means that they can be performed algorithmically. The resulting structure is a lattice that allows one to sort mathematical theorems according to their computational content and phenomenologically, the emerging picture is very much in line with how mathematicians actually use the notion of implication in their daily practice.
November 16th, 2017, 14.30-16.00 (DMIF, Sala Riunioni)
R. Cutolo (Napoli) "Berkeley Cardinals and the search for V ".
The talk will focus on Berkeley cardinals, the strongest known large cardinal axioms, recently introduced by J. Bagaria, P. Koellner and W. H. Woodin. Berkeley cardinals are inconsistent with the Axiom of Choice; their definition is indeed formulated in the context of ZF (Zermelo-Fraenkel set theory without AC). The aim of the talk is to provide an account of their main features and the foundational issues involved. A noteworthy contribution to the topic is my result establishing the independence from ZF of the cofinality of the least Berkeley cardinal, which is in fact connected with the failure of AC; I will describe the forcing notion employed and give a sketch of the proof. In order to show that interesting mathematical consequences can be developed from Berkeley cardinals, I'll then analyze the structural properties of the inner model $L(V_{\delta+1})$ under the assumption that $\delta$ is a singular limit of Berkeley cardinals, each of which is itself a limit of extendible cardinals, lifting some of the theory of the large cardinal axiom I0 to a more general level. Finally, I will discuss the role of Berkeley cardinals within Woodin's ultimate project of attaining a "definitive" description of the universe of set theory.
Logic Working Group
January 21st, 2022, 16.00-17.00 (Online on Webex and in Aula 4, Palazzo Campana, Torino)
L. Notaro (University of Turin) "Tree representations for Borel functions".
In 2009, Brian Semmes, in his PhD thesis, provided a characterization of Borel measurable functions from and into the Baire space using a reduction game called the Borel game. Around the same time, Alain Louveau wrote some (still unpublished) notes in which he provided a characterization of Baire class $\alpha$ functions (again from and into the Baire space), for all fixed $\alpha$ and, importantly, of $\boldsymbol{\Sigma}_\lambda^0$-measurable functions for $\lambda$ countable limit, using tree representations instead of games. In this talk, we present Louveau's characterization, comparing it with Semmes' one, and see that if we modify the Borel game a bit we end up characterizing functions having a $G_\delta$ graph. Then we notice that under $\mathrm{AC}$ there are functions for which the Borel game is undetermined, thus opening questions regarding the consistency strength of the general determinacy of this game.
D. Quadrellaro (University of Helsinki) TBA.
October 29th, 2021, 16.00-18.00 (Online on Webex and in Aula 4, Palazzo Campana, Torino)
B. Pitton (University of Lausanne) "Borel and Borel sets in Generalized Descriptive Set Theory " (Slides).
Generalized descriptive set theory (GDST) aims at developing a higher analogue of classical descriptive set theory in which $\omega$ is replaced with an uncountable cardinal $\kappa$ in all definitions and relevant notions. In the literature on GDST it is often required that $\kappa^{<\kappa}=\kappa$, a condition equivalent to $\kappa$ being regular and $2^{<\kappa}=\kappa$. In contrast, in this talk we use a more general approach and develop in a uniform way the basics of GDST for cardinals $\kappa$ still satisfying $2^{<\kappa}=\kappa$ but independently of whether they are regular or singular. This allows us to retrieve as a special case the known results for regular $\kappa$, but it also uncovers their analogues when $\kappa$ is singular. We also discuss some new phenomena specifically arising in the singular context (such as the existence of two distinct yet related Borel hierarchies), and obtain some results which are new also in the setup of regular cardinals, such as the existence of unfair Borel codes for all Borel sets.
October 22nd, 2021, 16.00-17.00 (Online on Webex and in Aula 4, Palazzo Campana, Torino)
E. Colla (University of Turin) "Words and other words" (Slides).
We gently review some definitions and theorems regarding the free semigroup of words, from two different areas: automata theory and Ramsey theory. Our recent results in Ramsey theory hint at a possible connection between two classes of monoids introduced by Solecki and "[the second] most important result of the algebraic theory of automata" (J. E. Pin). While this possibility is still vague, it seems exciting enough to be investigated. As far as prerequisites are concerned, most of this talk could be followed by anyone with a bachelor's degree in mathematics. Based on joint work with Claudio Agostini.
S. Scamperti (University of Turin) "A complete picture of the Wadge Hierarchy in 0-dimensional Polish space", part 2.
Wadge reducibility is a very important tool in descriptive set theory. On 0-dimensional Polish spaces it yields a nice hierarchy, much studied through results of Wadge, Martin, Monk, Andretta, Louveau, Duparc, Carroy, Medini, Müller, Motto Ros, and others. On Cantor space and Baire space the Wadge hierarchy exhibits some different behaviors, one of which concerns countable degrees: countable degrees on Cantor space are non-selfdual classes, while on Baire space they are selfdual classes. This phenomenon raises a question: what happens in general 0-dimensional Polish spaces? We will answer this question with the notion of compactness degree of a 0-dimensional Polish space and see that infinitely many different cases can be realized on 0-dimensional Polish spaces.
October 8th, 2021, 16.00-17.00 (Online on Webex and in Aula 4, Palazzo Campana, Torino)
S. Scamperti (University of Turin) "A complete picture of the Wadge Hierarchy in 0-dimensional Polish space", part 1 (Slides).
April 9th, 2021, 11.00-13.00 (Online)
V. Cipriani (Udine) "Reading seminar on Effective Descriptive Set Theory", part 9.
March 26th, 2021, 11.00-13.00 (Online)
Uniformization and Basis results
March 5th, 2021, 11.00-13.00 (Online)
Scales and uniformization for Pi_1^1 sets
February 25th, 2021, 09.00-11.00 (Online)
Some consequences of the Parametrization Theorem (2)
Universal sets and Parametrization Theorems for Delta_1^1
February 4th, 2021, 09.00-11.00 (Online)
Norms and (easy) uniformization for Pi_1^1 sets
January 28th, 2021, 09.00-11.00 (Online)
Basic notions of EDST and Representation Theorem for Pi_1^1 sets
May 22nd, 2020, 16.30-18.30 (Online)
M. Valenti (Udine) "The complexity of closed Salem sets".
A central notion in geometric measure theory is the one of Hausdorff dimension. As a consequence of Frostman's lemma, the Hausdorff dimension of a Borel subset A of the Euclidean n-dimensional space can be determined by looking at the behaviour of probability measures with support in A. The possibility to apply methods from Fourier analysis to estimate the Hausdorff dimension gives birth to the notion of Fourier dimension. It is known that, for Borel sets, the Fourier dimension is less than or equal to the Hausdorff dimension. The sets for which the two notions agree are called Salem sets. In this talk we will study the descriptive complexity of the family of closed Salem subsets of [0,1], [0,1]^n and of the n-dimensional Euclidean space.
M. Iannella (Udine) "The G_0 dichotomy".
Dichotomy theorems have always played a fundamental role in set theory, and for many decades the way to prove them was akin to the proof of the Cantor-Bendixson theorem, i.e., derivative arguments. This changed in the early 1970s, when Silver proved the Silver dichotomy using sophisticated techniques borrowed from the theory of forcing and from effective descriptive set theory, ushering in a new era of dichotomy proofs. Around ten years ago, Ben Miller took it upon himself to reverse this development and to find proofs of these new results that do not rely on forcing and effective arguments, but just on the good old derivative ones. The key for this is switching from equivalence relations to graphs. The result of this research is a handful of dichotomies at the core of descriptive set theory that prove many other ones, with classical "easy" proofs, the most important of them being the G_0 dichotomy.
April 3rd, 2020, 14.30-16.30 (Online)
M. Valenti (Udine) "Finding descending sequences in an ill-founded linear order", part 7.
We will explore the computational content of the problem "given an ill-founded linear order, find an infinite descending sequence" from the point of view of Weihrauch reducibility. The talk will cover the background notions on Weihrauch reducibility. The results are joint work with Jun Le Goh and Arno Pauly.
March 27th, 2020, 14.30-16.30 (Online)
S. Tamburlini (Udine) "Reverse mathematics of Second Order set theory".
February 21st, 2020, 14.00-16.00 (DMIF, Aula multimediale)
February 14th, 2020, 14.00-16.00 (DMIF, Sala Riunioni)
February 6th, 2020, 14.30-16.30 (DMIF, Sala Riunioni)
December 10th, 2019, 16.30-18.00 (DMIF, Sala Riunioni)
M. Fiori Carones (Udine) "An On-Line Algorithm for Reorientation of Graphs".
M. Iannella (Udine) "On the classification of wild Proper Arcs and Knots ", part 2.
In this talk we analyze the complexity of some classification problems for proper arcs and knots using Borel reducibility. We prove some anticlassification results by adapting a recent result of Kulikov on the classification of wild proper arcs/knots up to equivalence. Moreover, we prove various new results on the complexity of the (oft-overlooked) sub-interval relation between countable linear orders, showing in particular that it is at least as complicated as the isomorphism relation between linear orders.
October 12th, 2018, 14.00-15.30 (DMIF, Sala Riunioni)
E. Lena (Udine) "I sistemi assiomatici di Tarski per la geometria".
June 28th, 2017, 10.00-11.30 (DMIF, Aula multimediale)
M. Fiori Carones (Udine) "Espressività -in termini di classi di complessita' "catturate"- dei linguaggi logici per la rappresentazione della conoscenza".
# Lecture Notes - Probability Theory
Manuel Cabral Morais
Department of Mathematics
Instituto Superior Técnico
Lisbon, September 2009/10 - January 2010/11
## Warm up
### Historical note
Mathematical probability has its origins in games of chance [...]. Early calculations involving dice were included in a well-known and widely distributed poem entitled De Vetula. ${ }^{1}$ Dice and cards continued as the main vessels of gambling in the 15th. and 16th. centuries [...]. [...] (G. Cardano) went so far as to write a book, On games of chance, sometime shortly after 1550. This was not published however until 1663, by which time probability theory had already had its official inauguration elsewhere.
It was around 1654 that B. Pascal and P. de Fermat generated a celebrated correspondence about their solutions of the problem of the points. These were soon widely known, and C. Huygens developed these ideas in a book published in 1657, in Latin. [...] the intuitive notions underlying this work were similar to those commonly in force nowadays.
These first simple ideas were soon extended by Jacob Bernoulli in Ars conjectandi (1713) and by A. de Moivre in Doctrine of chances (1718, 1738, 1756). [...]. Methods, results, and ideas were all greatly refined and generalized by P. Laplace [...]. Many other eminent mathematicians of this period wrote on probability: Euler, Gauss, Lagrange, Legendre, Poisson, and so on.
However, as ever harder problems were tackled by ever more powerful mathematical techniques during the 19th. century, the lack of a well-defined axiomatic structure was recognized as a serious handicap. [...] A. Kolmogorov provided the axioms which today underpin most mathematical probability.
Grimmett and Stirzaker (2001, p. 571)
${ }^{1}$ De vetula ("The Old Woman") is a long thirteenth-century poem written in Latin. (For more details see http://en.wikipedia.org/wiki/De_vetula.) For more extensive and exciting accounts on the history of Statistics and Probability, we recommend:
- Hald, A. (1998). A History of Mathematical Statistics from 1750 to 1930. John Wiley \& Sons. (QA273-280/2.HAL.50129);
- Stigler, S.M. (1986). The History of Statistics: the Measurement of Uncertainty Before 1900. Belknap Press of Harvard University Press. (QA273-280/2.STI.39095).
## 2 (Symmetric) random walk
This section is inspired by Karr (1993, pp. 1-14) and has the sole purpose of:
- illustrating concepts such as probability, random variables, independence, expectation and convergence of random sequences, and recalling some limit theorems;
- drawing our attention to the fact that exploiting the special structure of a random process can provide answers for some of the questions raised.
## Informal definition 0.1 - Symmetric random walk
The symmetric random walk (SRW) is a random experiment which can result from the observation of a particle moving randomly on $Z=\{\ldots,-1,0,1, \ldots\}$. Moreover, the particle starts at the origin at time 0 , and then moves either one step up or one step down with equal likelihood.
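A minimal simulation sketch of this definition (ours, not Karr's; the function name and seed are arbitrary choices):

```python
import random

def srw_path(n, seed=None):
    """Simulate n steps of a symmetric random walk started at the origin.

    Returns the positions X_0, X_1, ..., X_n as a list.
    """
    rng = random.Random(seed)
    positions = [0]
    for _ in range(n):
        # each step is +1 or -1 with equal likelihood
        positions.append(positions[-1] + rng.choice((-1, +1)))
    return positions

# one realization of a 10-step walk
print(srw_path(10, seed=42))
```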
## Remark 0.2 - Applications of random walk
The path followed by an atom in a gas moving under the influence of collisions with other atoms can be described by a random walk (RW). Random walk has also been applied in other areas such as:
- economics (RW used to model shares prices and other factors);
- population genetics (RW describes the statistical properties of genetic drift); ${ }^{2}$
- mathematical ecology (RW used to describe individual animal movements, to empirically support processes of biodiffusion, and occasionally to model population dynamics);
- computer science (RW used to estimate the size of the Web). ${ }^{3}$

${ }^{2}$ Genetic drift is one of several evolutionary processes which lead to changes in allele frequencies over time.
The next proposition provides answers to the following questions:
- How can we model and analyze the symmetric random walk?
- What random variables can arise from this random experiment and how can we describe them?
Proposition 0.3 - Symmetric random walk (Karr, 1993, pp. 1-4)
## The model
Let:
- $\omega_{n}$ be the step at time $n\left(\omega_{n}= \pm 1\right)$;
- $\underline{\omega}=\left(\omega_{1}, \omega_{2}, \ldots\right)$ be a realization of the random walk;
- $\Omega$ be the sample space of the random experiment, i.e. the set of all possible realizations.
## Random variables
Two random variables immediately arise:
- $Y_{n}$ defined as $Y_{n}(\underline{\omega})=\omega_{n}$, the size of the $n$ th. step; ${ }^{4}$
- $X_{n}$ which represents the position at time $n$ and is defined as $X_{n}(\underline{\omega})=\sum_{i=1}^{n} Y_{i}(\underline{\omega})$.
## Probability and independence
The sets of outcomes of this random experiment are termed events. An event $A \subset \Omega$ occurs with probability $P(A)$.
Recall that a probability function is countably additive, i.e. for sequences of (pairwise) disjoint events $A_{1}, A_{2}, \ldots\left(A_{i} \cap A_{j}=\emptyset, i \neq j\right)$ we have
${ }^{3}$ For more applications check http://en.wikipedia.org/wiki/Random_walk.
${ }^{4}$ Steps are functions defined on the sample space. Thus, steps are random variables.
$$
P\left(\bigcup_{i=1}^{+\infty} A_{i}\right)=\sum_{i=1}^{+\infty} P\left(A_{i}\right) .
$$
Invoking the random and symmetric character of this walk, and assuming that the steps are independent and identically distributed, all $2^{n}$ possible values of $\left(Y_{1}, \ldots, Y_{n}\right)$ are equally likely, and, for every $\left(y_{1}, \ldots, y_{n}\right) \in\{-1,1\}^{n}$,
$$
\begin{aligned}
P\left(Y_{1}=y_{1}, \ldots, Y_{n}=y_{n}\right) & =\prod_{i=1}^{n} P\left(Y_{i}=y_{i}\right) \\
& =\left(\frac{1}{2}\right)^{n} .
\end{aligned}
$$
The interpretation of (2) is absence of probabilistic interaction, that is, independence.
## First calculations
Let us assume from now on that $X_{0}=0$. Then:
- $\left|X_{n}\right| \leq n, \forall n \in \mathbb{N}$
- $X_{n}$ is even at even times ($n \bmod 2=0$) (e.g. $X_{2}$ cannot be equal to $1$);
- $X_{n}$ is odd at odd times ($n \bmod 2=1$) (e.g. $X_{1}$ cannot be equal to $0$).
If $n \in \mathbb{N}, k \in\{-n, \ldots, 0, \ldots, n\}$, and $\frac{n+k}{2}$ is an integer $(n \bmod 2=k \bmod 2)$ then the event $\left\{X_{n}=k\right\}$ occurs if $\frac{n+k}{2}$ of the steps $Y_{1}, \ldots, Y_{n}$ are equal to 1 and the remainder are equal to -1 .
In fact, $X_{n}=k$ if we observe $a$ steps up and $b$ steps down where
$$
(a, b):\left\{\begin{array}{l}
a, b \in\{0, \ldots, n\} \\
a+b=n \\
a-b=k
\end{array}\right.
$$
that is, $a=\frac{n+k}{2}$ and $a$ has to be an integer in $\{0, \ldots, n\}$.
As a consequence,
$$
P\left(X_{n}=k\right)=\left(\begin{array}{c}
n \\
\frac{n+k}{2}
\end{array}\right) \times\left(\frac{1}{2}\right)^{n},
$$
for $n \in \mathbb{N}, k \in\{-n, \ldots, 0, \ldots, n\}, \frac{n+k}{2} \in\{0,1, \ldots, n\}$. Recall that the binomial coefficient $\left(\begin{array}{c}n \\ \frac{n+k}{2}\end{array}\right)$ (it often reads as "$n$ choose $\frac{n+k}{2}$") represents the number of subsets of size $\frac{n+k}{2}$ of a set of size $n$. More generally,
$$
\begin{aligned}
P\left(X_{n} \in B\right) & =P\left(\left\{\underline{\omega}: X_{n}(\underline{\omega}) \in B\right\}\right) \\
& =\sum_{k \in B \cap\{-n, \ldots, 0, \ldots, n\}} P\left(X_{n}=k\right),
\end{aligned}
$$
for any real set $B \subset \mathbb{R}$, by countable additivity of the probability function $P$. (Rewrite (6) taking into account that $n$ and $k$ have to be both even or both odd.)
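As a quick sanity check (ours; `math.comb` requires Python 3.8+, and the function names are our own) of the formula for $P\left(X_{n}=k\right)$ above, the following sketch compares it with brute-force enumeration of all $2^{n}$ equally likely paths:

```python
from itertools import product
from math import comb  # Python 3.8+

def prob_position(n, k):
    """P(X_n = k) via the closed-form formula; zero when parities differ."""
    if abs(k) > n or (n + k) % 2 != 0:
        return 0.0
    return comb(n, (n + k) // 2) * 0.5 ** n

def prob_position_bruteforce(n, k):
    """Same probability by enumerating all 2^n equally likely paths."""
    return sum(1 for p in product((-1, 1), repeat=n) if sum(p) == k) / 2 ** n

n = 8
assert all(abs(prob_position(n, k) - prob_position_bruteforce(n, k)) < 1e-12
           for k in range(-n, n + 1))
print(prob_position(8, 2))  # C(8, 5) / 2^8 = 56/256 = 0.21875
```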
Remark 0.4 - Further properties of the symmetric random walk (Karr, 1993, pp. 4-5)

Exploiting the special structure of the SRW leads us to conclude that:
- the SRW cannot move from one level to another without passing through all values between ("continuity");
- all $2^{n}$ length-$n$ paths are equally likely, so two events containing the same number of paths have the same probability, $\frac{\text{no. paths}}{2^{n}}$, which allows the probability of one event to be determined by showing that the paths belonging to this event are in one-to-one correspondence with those of an event of known probability - in many cases this correspondence is established geometrically, via reasoning known as the reflection principle.
## Exercise 0.5 - Symmetric random walk
Prove that:
(a) $P\left(X_{2} \neq 0\right)=P\left(X_{2}=0\right)=\frac{1}{2}$;
(b) $P\left(X_{n}=k\right)=P\left(X_{n}=-k\right)$, for each $n$ and $k$.
Proposition 0.6 - Expectation and symmetric random walk (Karr, 1993, p. 5)

The average value of the $i$ th. step of a SRW is equal to
$$
\begin{aligned}
E\left(Y_{i}\right) & =(-1) \times P\left(Y_{i}=-1\right)+(+1) \times P\left(Y_{i}=+1\right) \\
& =0 .
\end{aligned}
$$
Additivity of probability translates to linearity of expectation, thus the average position equals
$$
\begin{aligned}
E\left(X_{n}\right) & =E\left(\sum_{i=1}^{n} Y_{i}\right) \\
& =\sum_{i=1}^{n} E\left(Y_{i}\right) \\
& =0 .
\end{aligned}
$$
Proposition 0.7 - Conditioning and symmetric random walk (Karr, 1993, p. 6)

We can revise probability in light of the knowledge that some event has occurred. For example, we know that $P\left(X_{2 n}=0\right)=\left(\begin{array}{c}2 n \\ n\end{array}\right) \times\left(\frac{1}{2}\right)^{2 n}$. However, if we knew that $X_{2 n-1}=1$ then the event $\left\{X_{2 n}=0\right\}$ would occur with probability $\frac{1}{2}$. In fact,
$$
\begin{aligned}
P\left(X_{2 n}=0 \mid X_{2 n-1}=1\right) & =\frac{P\left(X_{2 n-1}=1, X_{2 n}=0\right)}{P\left(X_{2 n-1}=1\right)} \\
& =\frac{P\left(X_{2 n-1}=1, Y_{2 n}=-1\right)}{P\left(X_{2 n-1}=1\right)} \\
& =\frac{P\left(X_{2 n-1}=1\right) \times P\left(Y_{2 n}=-1\right)}{P\left(X_{2 n-1}=1\right)} \\
& =\frac{1}{2} .
\end{aligned}
$$
Note that, since the steps $Y_{i}$ are independent random variables and $X_{2 n-1}=\sum_{i=1}^{2 n-1} Y_{i}$, we can state that $Y_{2 n}$ is independent of $X_{2 n-1}$.
## Exercise 0.8 - Conditioning and asymmetric random walk ${ }^{5}$
Random walk models are often found in physics, from particle motion to a simple description of a polymer.
A physicist assumes that the position of a particle at time $n, X_{n}$, is governed by an asymmetric random walk - starting at 0 and with probability of an upward (resp. downward) step equal to $p$ (resp. $1-p$ ), where $p \in(0,1) \backslash\left\{\frac{1}{2}\right\}$.
Derive $P\left(X_{2 n}=0 \mid X_{2 n-2}=0\right)$, for $n=2,3, \ldots$
${ }^{5}$ Exam 2010/01/19.

Proposition 0.9 - Time of first return to the origin and symmetric random walk (Karr, 1993, pp. 7-9)
The time at which the SRW first returns to the origin,
$$
T^{0}=\min \left\{n \in \mathbb{N}: X_{n}=0\right\}
$$
is an important functional of the SRW (it maps the SRW into a scalar). It can represent the time to ruin.
Interestingly enough, $T^{0}$ must be even (recall that $X_{0}=0$). And, for $n \in \mathbb{N}$:
$$
\begin{aligned}
& P\left(T^{0}>2 n\right)=P\left(X_{1} \neq 0, \ldots, X_{2 n} \neq 0\right)=\left(\begin{array}{c}
2 n \\
n
\end{array}\right) \times\left(\frac{1}{2}\right)^{2 n} ; \\
& P\left(T^{0}=2 n\right)=\frac{1}{2 n-1}\left(\begin{array}{c}
2 n \\
n
\end{array}\right) \times\left(\frac{1}{2}\right)^{2 n} .
\end{aligned}
$$
Moreover, using Stirling's approximation to $n !, n ! \simeq \sqrt{2 \pi} n^{n+\frac{1}{2}} e^{-n}$, we get
$$
P\left(T^{0}<+\infty\right)=1
$$
If we note that $P\left(T^{0}>2 n\right) \simeq \frac{1}{\sqrt{\pi n}}$ (so that $2 n P\left(T^{0}=2 n\right)$ behaves like $n^{-1/2}$) and recall that $\sum_{n=1}^{+\infty} \frac{1}{n^{s}}$ only converges for $s>1$, we can conclude that $T^{0}$ assumes large values with probabilities large enough that
$$
\sum_{n=1}^{+\infty} 2 n P\left(T^{0}=2 n\right)=+\infty \Rightarrow E\left(T^{0}\right)=+\infty .
$$
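As a numeric check (ours) of (11) and (12): since $T^{0}$ only takes even values, $P\left(T^{0}=2 n\right)=P\left(T^{0}>2 n-2\right)-P\left(T^{0}>2 n\right)$, and the sketch below verifies that the two formulas satisfy this relation; it also illustrates how slowly the tail probabilities decay.

```python
from math import comb

def p_no_return_by(n):
    """P(T0 > 2n) = C(2n, n) / 4^n, as in (11); n = 0 gives 1."""
    return comb(2 * n, n) / 4 ** n

def p_first_return(n):
    """P(T0 = 2n) = C(2n, n) / ((2n - 1) 4^n), as in (12)."""
    return comb(2 * n, n) / ((2 * n - 1) * 4 ** n)

# consistency: P(T0 = 2n) = P(T0 > 2n - 2) - P(T0 > 2n), since T0 is even
for n in range(1, 20):
    assert abs(p_first_return(n) - (p_no_return_by(n - 1) - p_no_return_by(n))) < 1e-12

# the tail decays like 1 / sqrt(pi n): slowly enough that E(T0) = +infinity
print([round(p_no_return_by(n), 4) for n in (1, 10, 100, 1000)])
```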
Exercise 0.10 - Time of first return to the origin and symmetric random walk
(a) Prove result (12) using (11).
(b) Use Stirling's approximation to $n !, n ! \simeq \sqrt{2 \pi} n^{n+\frac{1}{2}} e^{-n}$, to prove that
$$
\lim _{n \rightarrow+\infty} P\left(T^{0}>2 n\right)=\lim _{n \rightarrow+\infty} \frac{1}{\sqrt{\pi n}} .
$$
(c) Use the previous result and the fact that
$$
P\left(T^{0}<+\infty\right)=1-\lim _{n \rightarrow+\infty} P\left(T^{0}>2 n\right)
$$
to derive (13).
(d) Verify that $\sum_{n=1}^{+\infty} 2 n P\left(T^{0}=2 n\right)=1+\sum_{n=1}^{+\infty} P\left(T^{0}>2 n\right)$, even though we have $E(Z)=2 \times\left[1+\sum_{n=1}^{+\infty} P(Z>2 n)\right]$, for any positive and even random variable $Z$ with finite expected value $E(Z)=\sum_{n=1}^{+\infty} 2 n \times P(Z=2 n)$.

Proposition 0.11 - First passage times and symmetric random walk (Karr, 1993, pp. 9-11)
Similarly, the first passage time
$$
T^{k}=\min \left\{n \in \mathbb{N}: X_{n}=k\right\}
$$
has the following properties, for $n \in \mathbb{N}, k \in\{-n, \ldots,-1,1, \ldots, n\}$ and $n \bmod 2=k \bmod 2$:
$$
\begin{aligned}
P\left(T^{k}=n\right) & =\frac{|k|}{n} \times P\left(X_{n}=k\right) ; \\
P\left(T^{k}<\infty\right) & =1 ; \\
E\left(T^{k}\right) & =+\infty .
\end{aligned}
$$
The following results pertain to the asymptotic behaviour of the position of a symmetric random walk and to the fraction of time spent positive.
Proposition 0.12 - Law of large numbers (Karr, 1993, p. 12)
Let $Y_{n}$ and $X_{n}=\sum_{i=1}^{n} Y_{i}$ represent the size of the $n$ th. step and the position at time $n$ of a random walk, respectively.
$$
P\left(\lim _{n \rightarrow+\infty} \frac{X_{n}}{n}=0\right)=1,
$$
that is, the "empirical averages", $\frac{X_{n}}{n}=\frac{1}{n} \sum_{i=1}^{n} Y_{i}$, converge to the "theoretical average" $E\left(Y_{1}\right)$.
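A simulation sketch (ours) of this convergence: we extend a single walk and report the empirical average $\frac{X_{n}}{n}$ at increasing times $n$.

```python
import random

rng = random.Random(0)  # arbitrary seed
x, n = 0, 0
for target in (10 ** 2, 10 ** 4, 10 ** 6):
    while n < target:
        x += rng.choice((-1, 1))
        n += 1
    print(n, x / n)  # the empirical averages X_n / n shrink toward E(Y_1) = 0
```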
Proposition 0.13 - Central limit theorem (Karr, 1993, pp. 12-13)
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left\{\frac{\frac{X_{n}}{n}-E\left(\frac{X_{n}}{n}\right)}{\sqrt{V\left(\frac{X_{n}}{n}\right)}} \leq x\right\} & =\lim _{n \rightarrow+\infty} P\left\{\frac{\frac{X_{n}}{n}-E\left(Y_{1}\right)}{\sqrt{\frac{V\left(Y_{1}\right)}{n}}} \leq x\right\} \\
& =\int_{-\infty}^{x} \frac{1}{\sqrt{2 \pi}} e^{-\frac{y^{2}}{2}} d y \\
& =\Phi(x), x \in \mathbb{R}
\end{aligned}
$$
So, for large values of $n$, difficult-to-compute probabilities can be approximated. For instance, for $a<b$, we get:
$$
\begin{aligned}
P\left(a<X_{n} \leq b\right) & =\sum_{a<k \leq b} P\left(X_{n}=k\right) \\
& =P\left\{\frac{\frac{a}{n}-0}{\sqrt{\frac{1}{n}}}<\frac{\frac{X_{n}}{n}-0}{\sqrt{\frac{1}{n}}} \leq \frac{\frac{b}{n}-0}{\sqrt{\frac{1}{n}}}\right\} \\
& \simeq \Phi(b / \sqrt{n})-\Phi(a / \sqrt{n}) .
\end{aligned}
$$
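The following sketch (ours; $\Phi$ is built from `math.erf`, and all function names are our own) compares the exact probability with the normal approximation above for $n=100$, $a=-10$ and $b=10$; both values are close to $\Phi(1)-\Phi(-1) \simeq 0.6827$.

```python
from math import comb, erf, sqrt

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def exact(n, a, b):
    """P(a < X_n <= b), summing P(X_n = k) over admissible k."""
    return sum(comb(n, (n + k) // 2) * 0.5 ** n
               for k in range(a + 1, b + 1) if (n + k) % 2 == 0)

def approx(n, a, b):
    """Normal approximation: Phi(b / sqrt(n)) - Phi(a / sqrt(n))."""
    return Phi(b / sqrt(n)) - Phi(a / sqrt(n))

n = 100
print(exact(n, -10, 10), approx(n, -10, 10))  # both close to 0.68
```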
## Exercise 0.14 - Central limit theorem ${ }^{6}$
The words "symmetric random walk" refer to this situation.
The proverbial drunk (PD) is clinging to the lamppost. He decides to start walking. The road runs east and west. In his inebriated state he is as likely to take a step east (forward) as west (backward). In each new position he is again as likely to go forward as backward. Each of his steps are of the same length but of random direction - east or west.
http://www.physics.ucla.edu/~chester/TECH/RandomWalk/3Pane.html
Admit that each step of PD has length equal to one meter and that he has already taken exactly 100 (a hundred) steps.
Find an approximate value for the probability that $\mathrm{PD}$ is within a five-meter neighborhood of the lamppost.
Proposition 0.15 - Arc sine law (Karr, 1993, pp. 13-14)
The fraction of time spent positive $\frac{W_{n}}{n}=\frac{1}{n} \sum_{i=1}^{n} I_{\mathbb{N}}\left(X_{i}+X_{i-1}\right)$ has the following limiting law: ${ }^{7}$
$$
\lim _{n \rightarrow+\infty} P\left(\frac{W_{n}}{n} \leq x\right)=\frac{2}{\pi} \arcsin \sqrt{x} .
$$
Moreover, the associated limiting density function, $\frac{1}{\pi \sqrt{x(1-x)}}$, is a $U$-shaped density. Thus, $\frac{W_{n}}{n}$ is more likely to be near 0 or 1 than near $1 / 2$.
Please note that we can get the limiting distribution function by using Stirling's approximation and the following result:
$$
P\left(W_{2 n}=2 k\right)=\left(\begin{array}{c}
2 k \\
k
\end{array}\right) \times\left(\begin{array}{c}
2 n-2 k \\
n-k
\end{array}\right) \times\left(\frac{1}{2}\right)^{2 n} .
$$
${ }^{6}$ Exam 2010/02/04.
${ }^{7}$ Please note that we need $X_{i-1}$ to be able to count the last step just before hitting zero from above.
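A numeric illustration (ours) of the arc sine law: summing the displayed formula for $P\left(W_{2 n}=2 k\right)$ approximates $P\left(\frac{W_{2 n}}{2 n} \leq x\right)$, which can be compared with the limit (22); for $x=1 / 4$ the limit equals $\frac{2}{\pi} \arcsin \frac{1}{2}=\frac{1}{3}$.

```python
from math import asin, comb, pi, sqrt

def p_time_positive(n, k):
    """P(W_{2n} = 2k): 2k of the first 2n time units are spent positive."""
    return comb(2 * k, k) * comb(2 * n - 2 * k, n - k) / 4 ** n

def arcsine_cdf(x):
    """Limiting distribution function from (22)."""
    return 2.0 / pi * asin(sqrt(x))

n, x = 200, 0.25
finite_n = sum(p_time_positive(n, k) for k in range(int(x * n) + 1))
print(finite_n, arcsine_cdf(x))  # P(W_{2n}/(2n) <= 1/4) vs its limit 1/3
```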
## Exercise 0.16 - Arc sine law
Prove result $(22)$.
## Exercise 0.17 - Arc sine law ${ }^{8}$
The random walk hypothesis is due to the French economist Louis Bachelier (1870-1946) and asserts that commodity and stock prices move randomly, so that they cannot reveal trends and current prices are no guide to future prices. Surprisingly, an investor assumes that his/her daily financial score is governed by a symmetric random walk starting at 0.
Obtain the corresponding approximate value for the probability that the fraction of time the financial score is positive exceeds $50 \%$.
Exercise 0.18 - The cliff-hanger problem (Mosteller, 1965, pp. 51-54)
From where he stands $\left(X_{0}=1\right)$, one step toward the cliff would send the drunken man over the edge. He takes random steps, either toward or away from the cliff. At any step, his probability of taking a step away is $p$ and of a step toward the cliff $1-p$.
What is his chance of not escaping the cliff? (Write the result in terms of $p$.)
## References
- Grimmett, G.R. and Stirzaker, D.R. (2001). Probability and Random Processes (3rd. edition). Oxford. (QA274.12-.76.GRI.40695 refers to the library code of the 1st. and 2nd. editions from 1982 and 1992, respectively.)
- Karr, A.F. (1993). Probability. Springer-Verlag.
${ }^{8}$ Test 2009/11/07.
## Chapter 1
## Probability spaces
[...] have been taught that the universe evolves according to deterministic laws that specify exactly its future, and a probabilistic description is necessary only because of our ignorance. This deep-rooted skepticism in the validity of probabilistic results can be overcome only by proper interpretation of the meaning of probability. Papoulis (1965, p. 3)
Probability is the mathematics of uncertainty. It has flourished under the stimulus of applications, such as insurance, demography, [...], clinical trials, signal processing, [...], spread of infectious diseases, [...], medical imaging, etc. and have furnished both mathematical questions and genuine interest in the answers.
Karr (1993, p. 15)
Much of our life is based on the belief that the future is largely unpredictable (Grimmett and Stirzaker, 2001, p. 1), nature is liable to change and chance governs life.
We express this belief in chance behaviour by the use of words such as random, probable (probably), probability, likelihood (likeliness), etc.
There are essentially four ways of defining probability (Papoulis, 1965, p. 7) and this is quite a controversial subject, proving that not all of probability and statistics is cut-and-dried (Righter, 200-):
- a priori definition as a ratio of favorable to total number of alternatives (classical definition; Laplace); ${ }^{1}$
- relative frequency (Von Mises); ${ }^{2}$
- probability as a measure of belief (inductive reasoning, ${ }^{3}$ subjective probability; Bayesianism); ${ }^{4}$
- axiomatic (measure theory; Kolmogorov's axioms). ${ }^{5}$

${ }^{1}$ See the first principle of probability in http://en.wikipedia.org/wiki/Pierre-Simon_Laplace
## Classical definition of probability
The classical definition of the probability of an event $A$ is found a priori, without actual experimentation, by counting the total number $N=\# \Omega<+\infty$ of possible outcomes of the random experiment. If these outcomes are equally likely and the event $A$ occurs in $N_{A}=\# A$ of them, then
$$
P(A)=\frac{N_{A}}{N}=\frac{\# A}{\# \Omega} .
$$
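A toy illustration (ours, not part of the original text): for two fair dice and the event $A$ = "the sum is 7", direct counting gives $P(A)=6 / 36=1 / 6$.

```python
from fractions import Fraction
from itertools import product

# two fair dice: 36 equally likely outcomes, A = "the sum is 7"
omega = list(product(range(1, 7), repeat=2))
A = [w for w in omega if sum(w) == 7]
print(Fraction(len(A), len(omega)))  # 1/6
```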
## Criticism of the classical definition of probability
It only holds if $N=\# \Omega<+\infty$ and all the $N$ outcomes are equally likely. Moreover,
- serious problems often arise in determining $N=\# \Omega$;
- it can be used only for a limited class of problems since the equally likely condition is often violated in practice;
- the classical definition, although presented as a priori logical necessity, makes implicit use of the relative-frequency interpretation of probability;
- in many problems the possible number of outcomes is infinite, so that to determine probabilities of various events one must introduce some measure of length or area.
${ }^{2}$ Kolmogorov said: "[...] mathematical theory of probability to real 'random phenomena' must depend on some form of the frequency concept of probability, [...] which has been established by von Mises [...]." (http://en.wikipedia.org/wiki/Richard_von_Mises)
${ }^{3}$ Inductive reasoning or inductive logic is a type of reasoning which involves moving from a set of specific facts to a general conclusion (http://en.wikipedia.org/wiki/Inductive_reasoning).
${ }^{4}$ Bayesianism uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change with the data we collected.
${ }^{5}$ http://en.wikipedia.org/wiki/Kolmogorov_axioms
## Relative frequency definition of probability
The relative frequency approach was developed by Von Mises in the beginning of the 20th. century; at that time the prevailing definition of probability was the classical one and his work was a healthy alternative (Papoulis, 1965, p. 9).
The relative frequency definition of probability used to be popular among engineers and physicists. A random experiment is repeated over and over again, $N$ times; if the event $A$ occurs $N_{A}$ times out of $N$, then the probability of $A$ is defined as the limit of the relative frequency of the occurrence of $A$ :
$$
P(A)=\lim _{N \rightarrow+\infty} \frac{N_{A}}{N} .
$$
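A simulation sketch (ours; seed and checkpoints are arbitrary) of this definition for a fair coin, where $A$ = "heads": the ratio $N_{A} / N$ drifts toward $1 / 2$ as $N$ grows.

```python
import random

rng = random.Random(2010)  # arbitrary seed
heads = 0
for n in range(1, 100001):
    heads += rng.random() < 0.5  # one more flip; count the heads
    if n in (10, 100, 1000, 10000, 100000):
        print(n, heads / n)  # the relative frequency N_A / N approaches 1/2
```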
## Criticism of the relative frequency definition of probability
This notion is meaningless in most important applications, e.g. finding the probability of the space shuttle blowing up, or of an earthquake (Righter, 200-), essentially because we cannot repeat the experiment.
It is also useless when dealing with hypothetical experiments (e.g. visiting Jupiter).
## Subjective probability, personal probability, Bayesian approach; criticism
Each person determines for herself what the probability of an event is; this value is in $[0,1]$ and expresses the personal belief on the occurrence of the event.
The Bayesian approach is the approach used by most engineers and many scientists and business people. It bothers some, because it is not "objective". For a Bayesian, anything that is unknown is random, and therefore has a probability, even events that have already occurred. (Someone flipped a fair coin in another room, the chance that it was heads or tails is .5 for a Bayesian. A non-Bayesian could not give a probability.)
With a Bayesian approach it is possible to include nonstatistical information (such as expert opinions) to come up with a probability. The general Bayesian approach is to come up with a prior probability, collect data, and use the data to update the probability (using Bayes' Law, which we will study later).
(Righter, 200-)
To understand the (axiomatic) definition of probability we shall need the following concepts:
- random experiment, whose outcome cannot be determined in advance;
- sample space $\Omega$, the set of all (conceptually) possible outcomes;
- outcomes $\omega$, elements of the sample space, also referred to as sample points or realizations;
- events $A$, a set of outcomes;
- $\sigma$-algebra on $\Omega$, a family of subsets of $\Omega$ containing $\Omega$ and closed under complementation and countable union.
### Random experiments
## Definition 1.1 - Random experiment
A random experiment consists of both a procedure and observations, ${ }^{6}$ and its outcome cannot be determined in advance.
There is some uncertainty in what will be observed in the random experiment; otherwise, performing the experiment would be unnecessary.
## Example 1.2 - Random experiments
| | Random experiment |
| :---: | :---: |
| $E_{1}$ | Give a lecture. |
| | Observe the number of students seated in the 4 th. row, which has 7 seats. |
| $E_{2}$ | Choose a highway junction. |
| | Observe the number of car accidents in 12 hours. |
| $E_{3}$ | Walk to a bus stop. |
| | Observe the time (in minutes) you wait for the arrival of a bus. |
| $E_{4}$ | Give $n$ lectures. |
| | Observe the number of students seated in the fourth row in each of those $n$ lectures. |
| $E_{5}$ | Consider a particle in a gas modeled by a random walk. |
| | Observe the steps at times $1,2, \ldots$ |
| $E_{6}$ | Consider a cremation chamber. |
| | Observe the temperature in the center of the chamber over the interval of time $[0,1]$. |
## Exercise 1.3 - Random experiment
Identify at least one random experiment based on your daily schedule.
Definition 1.4 - Sample space (Yates and Goodman, 1999, p. 8)
The sample space $\Omega$ of a random experiment is the finest-grain, mutually exclusive, collectively exhaustive set of all possible outcomes of the random experiment.
${ }^{6}$ Yates and Goodman (1999, p. 7).

The finest-grain property simply means that all possible distinguishable outcomes are identified separately. Moreover, $\Omega$ is (usually) known before the random experiment takes place. The choice of $\Omega$ balances fidelity to reality with mathematical convenience (Karr, 1993, p. 12).
Remark 1.5 - Categories of sample spaces (Karr, 1993, p. 16-17)
In practice, most sample spaces fall into one of the six categories:
## - Finite set
The simplest random experiment has two outcomes.
A random experiment with $n$ possible outcomes may be modeled with a sample space consisting of $n$ integers.
## - Countable set
The sample space for an experiment with countably many possible outcomes is ordinarily the set $\mathbb{I}=\{1,2, \ldots\}$ of positive integers or the set $\{\ldots,-1,0,+1, \ldots\}$ of all integers.
Whether a finite or a countable sample space better describes a given phenomenon is a matter of judgement and compromise. (Comment!)
## - The real line $\mathbb{R}$ (and intervals in $\mathbb{R}$ )
The most common sample space is the real line $\mathbb{R}$ (or the unit interval $[0,1]$ or the nonnegative half-line $\mathbb{R}_{0}^{+}$), which is used for almost all numerical phenomena that are not inherently integer-valued.
## - Finitely many replications
Some random experiments result from the $n(n \in \mathbb{N})$ replications of a basic experiment with sample space $\Omega_{0}$. In this case the sample space is the Cartesian product $\Omega=\Omega_{0}^{n}$.
## - Infinitely many replications
If a basic random experiment is repeated infinitely many times we deal with the sample space $\Omega=\Omega_{0}^{\mathbb{N}}$.
## - Function spaces
In some random experiments the outcome is a trajectory followed by a system over an interval of time. In this case the outcomes are functions.
## Example 1.6 - Sample spaces
The sample spaces defined below refer to the random experiments defined in Example 1.2:
| Random experiment | Sample space $(\Omega)$ | Classification of $\Omega$ |
| :---: | :--- | :--- |
| $E_{1}$ | $\{0,1,2,3,4,5,6,7\}$ | Finite set |
| $E_{2}$ | $\mathbb{N}_{0}=\{0,1,2, \ldots\}$ | Countable set |
| $E_{3}$ | $\mathbb{R}_{0}^{+}$ | Interval in $\mathbb{R}$ |
| $E_{4}$ | $\{0,1,2,3,4,5,6,7\}^{n}$ | Finitely many replications |
| $E_{5}$ | $\{-1,+1\}^{\mathbb{N}}$ | Infinitely many replications |
| $E_{6}$ | $\mathbf{C}([0,1])$ | Function space |
Note that $\mathbf{C}([0,1])$ represents the vector space of continuous, real-valued functions on $[0,1]$.
### Events and classes of sets
## Definition 1.7 - Event (Karr, 1993, p. 18)
Given a random experiment with sample space $\Omega$, an event can be provisionally defined as a subset of $\Omega$ whose probability is defined.
Remark 1.8 - An event $A$ occurs if the outcome $\omega$ of the random experiment belongs to $A$, i.e. $\omega \in A$.
## Example 1.9 - Events
Some events associated to the six random experiments described in examples 1.2 and 1.6:
| Random experiment | Event |
| :---: | :---: |
| $E_{1}$ | $A=$ "observe at least 3 students in the 4 th. row" |
| | $=\{3, \ldots, 7\}$ |
| $E_{2}$ | $B=$ "observe more than 4 car accidents in 12 hours" |
| | $=\{5,6, \ldots\}$ |
| $E_{3}$ | $C=$ "wait more than 8 minutes" |
| | $=(8,+\infty)$ |
| $E_{4}$ | $D=$ "observe at least 3 students in the 4 th. row, in 5 consecutive days" |
| | $=\{3, \ldots, 7\}^{5}$ |
| $E_{5}$ | $E=$ "an ascending path" |
| | $=\{(1,1, \ldots)\}$ |
| $E_{6}$ | $F=$ "temperature above $250^{\circ}$ over the interval $[0,1] "$ |
| | $=\{f \in \mathbf{C}([0,1]): f(x)>250, x \in[0,1]\}$ |
Definition 1.10 - Set operations (Resnick, 1999, p. 3)
As subsets of the sample space $\Omega$, events can be manipulated using set operations. The set operations which you should know and will be commonly used are listed next:
## - Complementation
The complement of an event $A \subset \Omega$ is
$$
A^{c}=\{\omega \in \Omega: \omega \notin A\} .
$$
## - Intersection
The intersection of events $A$ and $B(A, B \subset \Omega)$ is
$$
A \cap B=\{\omega \in \Omega: \omega \in A \text { and } \omega \in B\}
$$
The events $A$ and $B$ are disjoint (mutually exclusive) if $A \cap B=\emptyset$, i.e. they have no outcomes in common, therefore they never happen at the same time.
## - Union
The union of events $A$ and $B(A, B \subset \Omega)$ is
$$
A \cup B=\{\omega \in \Omega: \omega \in A \text { or } \omega \in B\}
$$
Karr (1993) uses $A+B$ to denote $A \cup B$ when $A$ and $B$ are disjoint.
## - Set difference
Given two events $A$ and $B(A, B \subset \Omega)$, the set difference between $B$ and $A$ consists of those outcomes in $B$ but not in $A$ :
$$
B \backslash A=B \cap A^{c} .
$$
## - Symmetric difference
Let $A$ and $B$ be two events $(A, B \subset \Omega)$. Then the outcomes that are in one but not in both sets form the symmetric difference:
$$
A \Delta B=(A \backslash B) \cup(B \backslash A)
$$
## Exercise 1.11 - Set operations
Represent the five set operations in Definition 1.10 pictorially by Venn diagrams.
Proposition 1.12 - Properties of set operations (Resnick, 1999, pp. 4-5)

Set operations satisfy well-known properties such as commutativity, associativity, De Morgan's laws, etc., which now and then provide connections between set operations. These properties have been condensed in the following table:
| Set operation | Property |
| :--- | :--- |
| Complementation | $\left(A^{c}\right)^{c}=A$ |
| | $\emptyset^{c}=\Omega$ |
| | $\Omega^{c}=\emptyset$ |
| Intersection and union | Commutativity |
| | $A \cap B=B \cap A, A \cup B=B \cup A$ |
| | $A \cap \emptyset=\emptyset, A \cup \emptyset=A$ |
| | $A \cap A=A, A \cup A=A$ |
| | $A \cap \Omega=A, A \cup \Omega=\Omega$ |
| | $A \cap A^{c}=\emptyset, A \cup A^{c}=\Omega$ |
| | Associativity |
| | $(A \cap B) \cap C=A \cap(B \cap C)$ |
| | $(A \cup B) \cup C=A \cup(B \cup C)$ |
| | De Morgan's laws |
| | $(A \cap B)^{c}=A^{c} \cup B^{c}$ |
| | $(A \cup B)^{c}=A^{c} \cap B^{c}$ |
| | Distributivity |
| | $(A \cap B) \cup C=(A \cup C) \cap(B \cup C)$ |
| | $(A \cup B) \cap C=(A \cap C) \cup(B \cap C)$ |
Definition 1.13 - Relations between sets (Resnick, 1999, p. 4)

Now we list ways sets $A$ and $B$ can be compared:
- Set containment or inclusion
$A$ is a subset of $B$, written $A \subset B$ or $B \supset A$, iff $A \cap B=A$. This means that
$$
\omega \in A \Rightarrow \omega \in B
$$
So if $A$ occurs then $B$ also occurs. However, the occurrence of $B$ does not imply the occurrence of $A$.
## - Equality
Two events $A$ and $B$ are equal, written $A=B$, iff $A \subset B$ and $B \subset A$. This means
$$
\omega \in A \Leftrightarrow \omega \in B
$$
Proposition 1.14 - Properties of set containment (Resnick, 1999, p. 4)

These properties are straightforward, but we state them for the sake of completeness and for their utility in comparing the probabilities of events:
- $A \subset A$
- $A \subset B, B \subset C \Rightarrow A \subset C$
- $A \subset C, B \subset C \Rightarrow(A \cup B) \subset C$
- $A \supset C, B \supset C \Rightarrow(A \cap B) \supset C$
- $A \subset B \Leftrightarrow B^{c} \subset A^{c}$.
These properties will be essential to calculate or relate probabilities of (sophisticated) events.
Remark 1.15 - The jargon of set theory and probability theory
What follows results from minor changes to Table 1.1 from Grimmett and Stirzaker (2001, p. 3):
| Typical notation | Set jargon | Probability jargon |
| :--- | :--- | :--- |
| $\Omega$ | Collection of objects | Sample space |
| $\omega$ | Member of $\Omega$ | Outcome |
| $A$ | Subset of $\Omega$ | Event (that some outcome in $A$ occurs) |
| $A^{c}$ | Complement of $A$ | Event (that no outcome in $A$ occurs) |
| $A \cap B$ | Intersection | $A$ and $B$ occur |
| $A \cup B$ | Union | Either $A$ or $B$ or both $A$ and $B$ occur |
| $B \backslash A$ | Difference | $B$ occurs but not $A$ |
| $A \Delta B$ | Symmetric difference | Either $A$ or $B$, but not both, occur |
| $A \subset B$ | Inclusion | If $A$ occurs then $B$ occurs too |
| $\emptyset$ | Empty set | Impossible event |
| $\Omega$ | Whole space | Certain event |
Functions on the sample space (such as random variables defined in the next chapter) are even more important than events themselves.
An indicator function is the simplest way to associate a set with a (binary) function.
Definition 1.16 - Indicator function (Karr, 1993, p. 19)
The indicator function of the event $A \subset \Omega$ is the function on $\Omega$ given by
$$
\mathbf{1}_{A}(w)= \begin{cases}1 & \text { if } w \in A \\ 0 & \text { if } w \notin A\end{cases}
$$
Therefore, $\mathbf{1}_{A}$ indicates whether $A$ occurs.
The indicator function of an event which results from a set operation on events $A$ and $B$ can often be written in terms of the indicator functions of these two events.
Proposition 1.17 - Properties of indicator functions (Karr, 1993, p. 19)
Simple algebraic operations on the indicator functions of the events $A$ and $B$ translate set operations on these two events:
$$
\begin{aligned}
\mathbf{1}_{A^{c}} & =1-\mathbf{1}_{A} \\
\mathbf{1}_{A \cap B} & =\min \left\{\mathbf{1}_{A}, \mathbf{1}_{B}\right\} \\
& =\mathbf{1}_{A} \times \mathbf{1}_{B} \\
\mathbf{1}_{A \cup B} & =\max \left\{\mathbf{1}_{A}, \mathbf{1}_{B}\right\} ; \\
\mathbf{1}_{B \backslash A} & =\mathbf{1}_{B \cap A^{c}} \\
& =\mathbf{1}_{B} \times\left(1-\mathbf{1}_{A}\right) \\
\mathbf{1}_{A \triangle B} & =\left|\mathbf{1}_{A}-\mathbf{1}_{B}\right| .
\end{aligned}
$$
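These identities are easy to verify mechanically. The following sketch (ours; it encodes indicator functions as Python functions on a 10-point sample space of our choosing) checks all five of them:

```python
def indicator(A):
    """Indicator function of the set A, as a map on outcomes."""
    return lambda w: 1 if w in A else 0

omega = set(range(10))
A, B = {1, 2, 3, 4}, {3, 4, 5, 6}
one_A, one_B = indicator(A), indicator(B)

for w in omega:
    assert indicator(omega - A)(w) == 1 - one_A(w)               # complementation
    assert indicator(A & B)(w) == one_A(w) * one_B(w)            # intersection
    assert indicator(A | B)(w) == max(one_A(w), one_B(w))        # union
    assert indicator(B - A)(w) == one_B(w) * (1 - one_A(w))      # set difference
    assert indicator(A ^ B)(w) == abs(one_A(w) - one_B(w))       # symmetric difference
print("all identities from Proposition 1.17 hold on this sample space")
```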
## Exercise 1.18 - Indicator functions
Solve exercises 1.1 and 1.7 of Karr (1993, p. 40).
The definition of indicator function quickly yields the following result when we are able to compare events $A$ and $B$.

Proposition 1.19 - Another property of indicator functions (Resnick, 1999, p. 5)
Let $A$ and $B$ be two events of $\Omega$. Then
$$
A \subseteq B \Leftrightarrow \mathbf{1}_{A} \leq \mathbf{1}_{B}
$$
Note here that we use the convention that for two functions $f, g$ with domain $\Omega$ and range $\mathbb{R}$, we have $f \leq g$ iff $f(\omega) \leq g(\omega)$ for all $\omega \in \Omega$.
Motivation 1.20 - Limits of sets (Resnick, 1999, p. 6)
The definition of convergence concepts for random variables rests on manipulations of sequences of events which require the definition of limits of sets.
Definition 1.21 - Operations on sequences of sets (Karr, 1993, p. 20)
Let $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a sequence of events of $\Omega$. Then the union and the intersection of $\left(A_{n}\right)_{n \in \mathbb{N}}$ are defined as follows
$$
\begin{aligned}
& \bigcup_{n=1}^{+\infty} A_{n}=\left\{\omega: \omega \in A_{n} \text { for some } n\right\} \\
& \bigcap_{n=1}^{+\infty} A_{n}=\left\{\omega: \omega \in A_{n} \text { for all } n\right\} .
\end{aligned}
$$
The sequence $\left(A_{n}\right)_{n \in \mathbb{N}}$ is said to be pairwise disjoint if $A_{i} \cap A_{j}=\emptyset$ whenever $i \neq j$.
Definition 1.22 - Lim sup, lim inf and limit set (Karr, 1993, p. 20)
Let $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a sequence of events of $\Omega$. Then we define the two following limit sets:
$$
\begin{aligned}
\lim \sup A_{n} & =\bigcap_{k=1}^{+\infty} \bigcup_{n=k}^{+\infty} A_{n} \\
& =\left\{\omega \in \Omega: \omega \in A_{n} \text { for infinitely many values of } n\right\} \\
& =\left\{A_{n}, \text { i.o. }\right\} \\
\liminf A_{n} & =\bigcup_{k=1}^{+\infty} \bigcap_{n=k}^{+\infty} A_{n} \\
& =\left\{\omega \in \Omega: \omega \in A_{n} \text { for all but finitely many values of } n\right\} \\
& =\left\{A_{n}, \text { ult. }\right\},
\end{aligned}
$$
where i.o. and ult. stand for infinitely often and ultimately, respectively.

Let $A$ be an event of $\Omega$. Then the sequence $\left(A_{n}\right)_{n \in \mathbb{N}}$ is said to converge to $A$, written $A_{n} \rightarrow A$ or $\lim _{n \rightarrow+\infty} A_{n}=A$, if
$$
\liminf A_{n}=\lim \sup A_{n}=A .
$$
Example 1.23 - Lim sup, lim inf and limit set

Let $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a sequence of events of $\Omega$ such that
$$
A_{n}= \begin{cases}A & \text { for } n \text { even } \\ A^{c} & \text { for } n \text { odd. }\end{cases}
$$
Then
$$
\begin{aligned}
\limsup A_{n} & =\bigcap_{k=1}^{+\infty} \bigcup_{n=k}^{+\infty} A_{n} \\
& =\Omega \\
& \neq \\
\liminf A_{n} & =\bigcup_{k=1}^{+\infty} \bigcap_{n=k}^{+\infty} A_{n} \\
& =\emptyset,
\end{aligned}
$$
so there is no limit set $\lim _{n \rightarrow+\infty} A_{n}$.
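Since the sequence above is periodic, the tail unions and intersections in Definition 1.22 stabilize after one full period, so the limit sets can be computed mechanically. A sketch (ours; the sample space, the set $A$ and the truncation length are arbitrary illustrative choices):

```python
from functools import reduce

def tail_union(sets, k, period):
    """Union of A_n over n >= k; for a periodic sequence one period suffices."""
    return reduce(set.union, sets[k:k + period])

def tail_intersection(sets, k, period):
    """Intersection of A_n over n >= k; same remark."""
    return reduce(set.intersection, sets[k:k + period])

omega = {0, 1, 2, 3}
A = {0, 1}
seq = [A if n % 2 == 0 else omega - A for n in range(10)]  # A, A^c, A, ...

lim_sup = reduce(set.intersection, (tail_union(seq, k, 2) for k in range(8)))
lim_inf = reduce(set.union, (tail_intersection(seq, k, 2) for k in range(8)))
print(lim_sup, lim_inf)  # {0, 1, 2, 3} and set(): limsup != liminf, no limit
```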
Exercise 1.24 - Lim sup, lim inf and limit set
Solve Exercise 1.3 of Karr (1993, p. 40).
Proposition 1.25 - Properties of lim sup and lim inf (Resnick, 1999, pp. 7-8)

Let $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a sequence of events of $\Omega$. Then
$$
\begin{aligned}
\liminf A_{n} & \subset \limsup A_{n} \\
\left(\liminf A_{n}\right)^{c} & =\lim \sup \left(A_{n}^{c}\right) .
\end{aligned}
$$
Definition 1.26 - Monotone sequences of events (Resnick, 1999, p. 8)

Let $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a sequence of events of $\Omega$. It is said to be monotone non-decreasing, written $A_{n} \uparrow$, if
$$
A_{1} \subseteq A_{2} \subseteq A_{3} \subseteq \cdots
$$
$\left(A_{n}\right)_{n \in \mathbb{N}}$ is monotone non-increasing, written $A_{n} \downarrow$, if
$$
A_{1} \supseteq A_{2} \supseteq A_{3} \supseteq \cdots
$$
Proposition 1.27 - Properties of monotone sequences of events (Karr, 1993, pp. 20-21)
Suppose $\left(A_{n}\right)_{n \in \mathbb{N}}$ be a monotone sequence of events. Then
$$
\begin{aligned}
& A_{n} \uparrow \Rightarrow \lim _{n \rightarrow+\infty} A_{n}=\bigcup_{n=1}^{+\infty} A_{n} \\
& A_{n} \downarrow \Rightarrow \lim _{n \rightarrow+\infty} A_{n}=\bigcap_{n=1}^{+\infty} A_{n} .
\end{aligned}
$$
## Exercise 1.28 - Properties of monotone sequences of events
Prove Proposition 1.27.
## Example 1.29 - Monotone sequences of events
The Galton-Watson process is a branching stochastic process arising from Francis Galton's statistical investigation of the extinction of family names. Modern applications include the survival probabilities for a new mutant gene, [...], or the dynamics of disease outbreaks in their first generations of spread, or the chances of extinction of small populations of organisms. (http://en.wikipedia.org/wiki/Galton-Watson_process)
Let $\left(X_{n}\right)_{n \in \mathbb{N}_{0}}$ be a stochastic process, where $X_{n}$ represents the size of generation $n$. $\left(X_{n}\right)_{n \in \mathbb{N}_{0}}$ is a Galton-Watson process if it evolves according to the recurrence formula:
- $X_{0}=1$ (we start with one individual); and
- $X_{n+1}=\sum_{i=1}^{X_{n}} Z_{i}^{(n)}$, where, for each $n$, $Z_{i}^{(n)}$ represents the number of descendants of the individual $i$ from generation $n$ and $\left(Z_{i}^{(n)}\right)_{i \in \mathbb{N}}$ is a sequence of i.i.d. non-negative random variables.

Let $A_{n}=\left\{X_{n}=0\right\}$. Since $A_{1} \subseteq A_{2} \subseteq \ldots$, i.e. $\left(A_{n}\right)_{n \in \mathbb{N}}$ is a non-decreasing monotone sequence of events, written $A_{n} \uparrow$, we get $A_{n} \rightarrow A=\bigcup_{n=1}^{+\infty} A_{n}$. Moreover, the extinction probability is given by
$$
\begin{aligned}
P\left(\left\{X_{n}=0 \text { for some } n\right\}\right) & =P\left(\bigcup_{n=1}^{+\infty}\left\{X_{n}=0\right\}\right)=P\left(\lim _{n \rightarrow+\infty}\left\{X_{n}=0\right\}\right) \\
& =P\left(\bigcup_{n=1}^{+\infty} A_{n}\right) \\
& =P\left(\lim _{n \rightarrow+\infty} A_{n}\right) .
\end{aligned}
$$
Later on, we shall conclude that we can conveniently interchange the limit sign and the probability function and write: $P\left(X_{n}=0 \text{ for some } n\right)=P\left(\lim _{n \rightarrow+\infty}\left\{X_{n}=0\right\}\right)=\lim _{n \rightarrow+\infty} P\left(\left\{X_{n}=0\right\}\right)$.
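A Monte Carlo sketch of the extinction probability (the Poisson offspring law, the mean $m$, the truncation parameters and the use of NumPy below are illustrative assumptions, not part of the definition of the process):

```python
import numpy as np

rng = np.random.default_rng(0)

def extinct(m, max_gen=200, max_pop=100_000):
    """One Galton-Watson path with Poisson(m) offspring; True if it dies out."""
    x = 1  # X_0 = 1: we start with one individual
    for _ in range(max_gen):
        if x == 0:
            return True                   # the event A_n = {X_n = 0} has occurred
        if x > max_pop:
            return False                  # population so large that survival is near-certain
        x = rng.poisson(m, size=x).sum()  # X_{n+1} = sum of X_n i.i.d. offspring counts
    return x == 0

m = 1.5   # mean offspring number (supercritical case, so extinction prob. < 1)
print(np.mean([extinct(m) for _ in range(5_000)]))
# ~0.417, the smallest root q of q = exp(m*(q - 1)),
# the classical fixed-point characterization of the extinction probability
```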
Proposition 1.30 - Limits of indicator functions (Karr, 1993, p. 21) In terms of indicator functions,
$$
A_{n} \rightarrow A \Leftrightarrow \mathbf{1}_{A_{n}}(\omega) \rightarrow \mathbf{1}_{A}(\omega), \forall \omega \in \Omega .
$$
Thus, the convergence of sets is the same as pointwise convergence of their indicator functions.
Exercise 1.31 - Limits of indicator functions (Exercise 1.8, Karr, 1993, p. 40) Prove Proposition 1.30.
Motivation 1.32 - Closure under set operations (Resnick, 1999, p. 12)
We need the notion of closure because we want to combine and manipulate events to make more complex events via set operations and we require that certain set operations do not carry events outside the family of events.
Definition 1.33 - Closure under set operations (Resnick, 1999, p. 12)
Let $\mathcal{C}$ be a collection of subsets of $\Omega$. $\mathcal{C}$ is closed under a set operation ${ }^{7}$ if the set obtained by performing the set operation on sets in $\mathcal{C}$ yields a set in $\mathcal{C}$.
${ }^{7}$ Be it a countable union, finite union, countable intersection, finite intersection, complementation, monotone limits, etc.

Example 1.34 - Closure under set operations (Resnick, 1999, p. 12)
- $\mathcal{C}$ is closed under finite union if for any finite collection $A_{1}, \ldots, A_{n}$ of sets in $\mathcal{C}$, $\bigcup_{i=1}^{n} A_{i} \in \mathcal{C}$.
- Suppose $\Omega=\mathbb{R}$ and $\mathcal{C}=\{$ finite real intervals $\}=\{(a, b]:-\infty<a<b<+\infty\}$. Then $\mathcal{C}$ is not closed under finite unions since $(1,2] \cup(36,37]$ is not a finite interval. However, $\mathcal{C}$ is closed under intersection since $(a, b] \cap(c, d]=(\max \{a, c\}, \min \{b, d\}]=$ $(a \vee c, b \wedge d]$.
- Consider now $\Omega=\mathbb{R}$ and $\mathcal{C}=\{$ open real subsets $\}$. $\mathcal{C}$ is not closed under complementation since the complement of an open set is not open in general.
Definition 1.35 - Algebra (Resnick, 1999, p. 12)
$\mathcal{A}$ is an algebra (or field) on $\Omega$ if it is a non-empty class of subsets of $\Omega$ closed under finite union, finite intersection and complementation.
A minimal set of postulates for $\mathcal{A}$ to be an algebra on $\Omega$ is:
1. $\Omega \in \mathcal{A}$
2. $A \in \mathcal{A} \Rightarrow A^{c} \in \mathcal{A}$
3. $A, B \in \mathcal{A} \Rightarrow A \cup B \in \mathcal{A}$.
## Remark 1.36 - Algebra
Please note that, by De Morgan's laws, $\mathcal{A}$ is closed under finite intersection (if $A, B \in \mathcal{A}$ then $A \cap B=\left(A^{c} \cup B^{c}\right)^{c} \in \mathcal{A}$), thus we do not need a postulate concerning finite intersection.
Motivation 1.37 - $\sigma$-algebra (Karr, 1993, p. 21)
To define a probability function dealing with an algebra is not enough: we need to define a collection of sets which is closed under countable union, countable intersection, and complementation.
Definition 1.38 - $\sigma$-algebra (Resnick, 1999, p. 12)
$\mathcal{F}$ is a $\sigma$-algebra on $\Omega$ if it is a non-empty class of subsets of $\Omega$ closed under countable union, countable intersection and complementation.
A minimal set of postulates for $\mathcal{F}$ to be a $\sigma$-algebra on $\Omega$ is:
1. $\Omega \in \mathcal{F}$
2. $A \in \mathcal{F} \Rightarrow A^{c} \in \mathcal{F}$
3. $A_{1}, A_{2}, \ldots \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{+\infty} A_{i} \in \mathcal{F}$.
Example 1.39 - $\sigma$-algebra (Karr, 1993, p. 21)
- Trivial $\sigma$-algebra
$\mathcal{F}=\{\emptyset, \Omega\}$
- Power set
$\mathcal{F}=\mathbb{P}(\Omega)=$ class of all subsets of $\Omega$
In general, neither of these two $\sigma$-algebras is especially interesting or useful - we need something in between.
Definition 1.40 - Generated $\sigma$-algebra (http://en.wikipedia.org/wiki/Sigma-algebra)
If $\mathcal{U}$ is an arbitrary family of subsets of $\Omega$ then we can form a special $\sigma$-algebra containing $\mathcal{U}$, called the $\sigma$-algebra generated by $\mathcal{U}$ and denoted by $\sigma(\mathcal{U})$, by intersecting all $\sigma$-algebras containing $\mathcal{U}$.
Defined in this way $\sigma(\mathcal{U})$ is the smallest/minimal $\sigma$-algebra on $\Omega$ that contains $\mathcal{U}$.
Example 1.41 - Generated $\sigma$-algebra (http://en.wikipedia.org/wiki/Sigma-algebra; Karr, 1993, p. 22)
## - Trivial example
Let $\Omega=\{1,2,3\}$ and $\mathcal{U}=\{\{1\}\}$. Then $\sigma(\mathcal{U})=\{\emptyset,\{1\},\{2,3\}, \Omega\}$ is a $\sigma$-algebra on $\Omega$.
- $\sigma$-algebra generated by a finite partition
If $\mathcal{U}=\left\{A_{1}, \ldots, A_{n}\right\}$ is a finite partition of $\Omega$ - that is, $A_{1}, \ldots, A_{n}$ are disjoint and $\bigcup_{i=1}^{n} A_{i}=\Omega$ - then $\sigma(\mathcal{U})=\left\{\bigcup_{i \in I} A_{i}: I \subseteq\{1, \ldots, n\}\right\}$ which includes $\emptyset$.
- $\sigma$-algebra generated by a countable partition
If $\mathcal{U}=\left\{A_{1}, A_{2}, \ldots\right\}$ is a countable partition of $\Omega$ - that is, $A_{1}, A_{2}, \ldots$ are disjoint and $\bigcup_{i=1}^{+\infty} A_{i}=\Omega$ - then $\sigma(\mathcal{U})=\left\{\bigcup_{i \in I} A_{i}: I \subseteq \mathbb{N}\right\}$, which also includes $\emptyset$.

Since we tend to deal with real random variables we have to define a $\sigma$-algebra on $\Omega=\mathbb{R}$, and the power set of $\mathbb{R}$, $\mathbb{P}(\mathbb{R})$, is not an option. The most important $\sigma$-algebra on $\mathbb{R}$ is the one defined as follows.
Definition 1.42 - Borel $\sigma$-algebra on $\mathbb{R}$ (Karr, 1993, p. 22)
The Borel $\sigma$-algebra on $\mathbb{R}$, denoted by $\mathcal{B}(\mathbb{R})$, is generated by the class of intervals
$$
\mathcal{U}=\{(a, b]:-\infty<a<b<+\infty\}
$$
that is, $\sigma(\mathcal{U})=\mathcal{B}(\mathbb{R})$. Its elements are called Borel sets. ${ }^{8}$
Remark 1.43 - Borel $\sigma$-algebra on $\mathbb{R}$ (Karr, 1993, p. 22)
- Every "reasonable" set of $\mathbb{R}$ - such as intervals, closed sets, open sets, finite sets, and countable sets - belong to $\mathcal{B}(\mathbb{R})$. For instance, $\{x\}=\bigcap_{n=1}^{+\infty}(x-1 / n, x]$.
- Moreover, the Borel $\sigma$-algebra on $\mathbb{R}, \mathcal{B}(\mathbb{R})$, can also be generated by the class of intervals $\{(-\infty, a]:-\infty<a<+\infty\}$ or $\{(b,+\infty):-\infty<b<+\infty\}$.
- $\mathcal{B}(\mathbb{R}) \neq \mathbb{P}(\mathbb{R})$.
Definition 1.44 - Borel $\sigma$-algebra on $\mathbb{R}^{d}$ (Karr, 1993, p. 22)
The Borel $\sigma$-algebra on $\mathbb{R}^{d}, d \in \mathbb{N}, \mathcal{B}\left(\mathbb{R}^{d}\right)$, is generated by the class of rectangles that are Cartesian products of real intervals
$$
\mathcal{U}=\left\{\prod_{i=1}^{d}\left(a_{i}, b_{i}\right]:-\infty<a_{i}<b_{i}<+\infty, i=1, \ldots, d\right\} .
$$
Exercise 1.45 - Generated $\sigma$-algebra (Exercise 1.9, Karr, 1993, p. 40)
Given sets $A$ and $B$ of $\Omega$, identify all sets in $\sigma(\{A, B\})$.
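For a finite $\Omega$, $\sigma(\{A, B\})$ can be computed by brute force, which is handy for checking an answer to this kind of exercise; in the following Python sketch the concrete $\Omega$, $A$ and $B$ are illustrative choices, not part of the exercise statement:

```python
def generated_sigma_algebra(omega, U):
    """Close U under complementation and union until stable (finite Omega only)."""
    F = {frozenset(), frozenset(omega)} | {frozenset(C) for C in U}
    while True:
        new = {frozenset(omega) - C for C in F}     # complements
        new |= {C | D for C in F for D in F}        # finite unions
        if new <= F:
            return F
        F |= new

omega = {1, 2, 3, 4}
A, B = {1, 2}, {2, 3}                               # illustrative choices
sigma = generated_sigma_algebra(omega, [A, B])
for E in sorted(sigma, key=lambda s: (len(s), sorted(s))):
    print(set(E))
# 16 sets here: A and B induce the 4-block partition {A&B, A-B, B-A, complement
# of A|B} of Omega, and sigma({A, B}) consists of all 2^4 unions of blocks.
```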
Exercise 1.46 - Borel $\sigma$-algebra on $\mathbb{R}$ (Exercise 1.10, Karr, 1993, p. 40)
Prove that $\{x\}$ is a Borel set for every $x \in \mathbb{R}$.
${ }^{8}$ Borel sets are named after Émile Borel. Along with René-Louis Baire and Henri Lebesgue, he was among the pioneers of measure theory and its application to probability theory (http://en.wikipedia.org/wiki/Émile_Borel).
### Probabilities and probability functions
Motivation 1.47 - Probability function (Karr, 1993, p. 23)
A probability is a set function, defined for events; it should be countably additive (i.e. $\sigma$-additive), that is, the probability of a countable union of disjoint events is the sum of their individual probabilities.
Definition 1.48 - Probability function (Karr, 1993, p. 24)
Let $\Omega$ be the sample space and $\mathcal{F}$ be the $\sigma$-algebra of events of $\Omega$. A probability on $(\Omega, \mathcal{F})$ is a function $P: \mathcal{F} \rightarrow \mathbb{R}$ such that:
1. Axiom $1^{9}-P(A) \geq 0, \forall A \in \mathcal{F}$.
2. Axiom $2-P(\Omega)=1$.
3. Axiom 3 (countable additivity or $\sigma$-additivity)
Whenever $A_{1}, A_{2}, \ldots$ are (pairwise) disjoint events in $\mathcal{F}$,
$$
P\left(\bigcup_{i=1}^{+\infty} A_{i}\right)=\sum_{i=1}^{+\infty} P\left(A_{i}\right)
$$
## Remark 1.49 - Probability function
The probability function $P$ transforms events into real numbers in $[0,1]$.
Definition 1.50 - Probability space (Karr, 1993, p. 24)
The triple $(\Omega, \mathcal{F}, P)$ is a probability space.
Example 1.51 - Probability function (Karr, 1993, p. 25) Let
- $\left\{A_{1}, \ldots, A_{n}\right\}$ be a finite partition of $\Omega$ - that is, $A_{1}, \ldots, A_{n}$ are (nonempty and pairwise) disjoint events and $\bigcup_{i=1}^{n} A_{i}=\Omega$;
- $\mathcal{F}$ be the $\sigma$-algebra generated by the finite partition $\left\{A_{1}, A_{2}, \ldots, A_{n}\right\}$, i.e. $\mathcal{F}=$ $\sigma\left(\left\{A_{1}, \ldots, A_{n}\right\}\right)$
- $p_{1}, \ldots, p_{n}$ positive numbers such that $\sum_{i=1}^{n} p_{i}=1$.
${ }^{9}$ Righter (200-) called the first and second axioms duh rules.

Then the function defined as
$$
P\left(\bigcup_{i \in I} A_{i}\right)=\sum_{i \in I} p_{i}, \forall I \subseteq\{1, \ldots, n\},
$$
where $p_{i}=P\left(A_{i}\right)$, is a probability function on $(\Omega, \mathcal{F})$.
Exercise 1.52 - Probability function (Exercise 1.11, Karr, 1993, p. 40)
Let $A, B$ and $C$ be disjoint events such that: $A \cup B \cup C=\Omega$; $P(A)=0.6$, $P(B)=0.3$ and $P(C)=0.1$. Calculate the probabilities of all events in $\sigma(\{A, B, C\})$.
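A computational sketch of this exercise (assuming only the construction of Example 1.51): every event in $\sigma(\{A, B, C\})$ is a union of blocks of the partition, and its probability is the corresponding sum of block probabilities.

```python
from itertools import chain, combinations

blocks = {"A": 0.6, "B": 0.3, "C": 0.1}   # the partition and its probabilities

def subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# each event of sigma({A, B, C}) is a union of blocks; P is the sum over the blocks used
for I in subsets(blocks):
    print(set(I), round(sum(blocks[b] for b in I), 10))
# 8 events in total, from the empty set (probability 0) up to Omega (probability 1)
```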
Motivation 1.53 - Elementary properties of probability functions
The axioms do not teach us how to calculate the probabilities of events. However, they establish rules for their calculation such as the following ones.
Proposition 1.54 - Elementary properties of probability functions (Karr, 1993, p. 25)
Let $(\Omega, \mathcal{F}, P)$ be a probability space then:
1. Probability of the empty set
$$
P(\emptyset)=0 .
$$
2. Finite additivity
If $A_{1}, \ldots, A_{n}$ are (pairwise) disjoint events then
$$
P\left(\bigcup_{i=1}^{n} A_{i}\right)=\sum_{i=1}^{n} P\left(A_{i}\right)
$$
Probability of the complement of $A$: consequently, for each $A$,
$$
P\left(A^{c}\right)=1-P(A) .
$$
3. Monotonicity of the probability function
If $A \subseteq B$ then
$$
P(B \backslash A)=P(B)-P(A) .
$$
Therefore if $A \subseteq B$ then
$$
P(A) \leq P(B)
$$
4. Addition rule
For all $A$ and $B$ (disjoint or not),
$$
P(A \cup B)=P(A)+P(B)-P(A \cap B) .
$$
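These properties are easy to confirm numerically on a finite uniform space; in the Python sketch below the sample space and the events are arbitrary illustrative choices, not from Karr:

```python
import random

omega = set(range(20))
P = lambda E: len(E) / len(omega)   # classical (uniform) probability on a finite space

A = set(random.sample(sorted(omega), 8))
B = set(random.sample(sorted(omega), 5))

assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12   # addition rule
assert P(omega - A) == 1 - P(A)                           # complement rule
assert A <= A | B and P(A) <= P(A | B)                    # monotonicity
print("elementary properties verified")
```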
## Remark 1.55 - Elementary properties of probability functions
According to Righter (200-), (1.41) is another duh rule, but she adds one of Kahneman and Tversky's most famous examples, the Linda problem.
Subjects were told the following story (in the 1970s):
- Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.
They are asked to rank the following statements by their probabilities:
- Linda is a bank teller.
- Linda is a bank teller who is active in the feminist movement.
Kahneman and Tversky found that about $85 \%$ of the subjects ranked "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller", a ranking that contradicts the monotonicity property: the first event is a subset of the second, so its probability cannot be larger.
## Exercise 1.56 - Elementary properties of probability functions
Prove properties 1. through 4. of Proposition 1.54 and that
$$
\begin{aligned}
& P(B \backslash A)=P(B)-P(A \cap B) \\
& P(A \Delta B)=P(A \cup B)-P(A \cap B)=P(A)+P(B)-2 \times P(A \cap B) .
\end{aligned}
$$
Hints (Karr, 1993, p. 25):
- property 1. can also be proved by using the finite additivity;
- property 2. by considering $A_{n+1}=A_{n+2}=\ldots=\emptyset$;
- property 3. by rewriting $B$ as $(B \backslash A) \cup(A \cap B)=(B \backslash A) \cup A$;
- property 4. by rewriting $A \cup B$ as $(A \backslash B) \cup(A \cap B) \cup(B \backslash A)$.

We proceed with some advanced properties of probability functions.
Proposition 1.57 - Boole's inequality or $\sigma$-subadditivity (Karr, 1993, p. 26) Let $A_{1}, A_{2}, \ldots$ be events in $\mathcal{F}$. Then
$$
P\left(\bigcup_{n=1}^{+\infty} A_{n}\right) \leq \sum_{n=1}^{+\infty} P\left(A_{n}\right) .
$$
## Exercise 1.58 - Boole's inequality or $\sigma$-subadditivity
Prove Boole's inequality by using the disjointification technique (Karr, 1993, p. 26),${ }^{10}$ the fact that $B_{n}=A_{n} \backslash\left(\bigcup_{i=1}^{n-1} A_{i}\right) \subseteq A_{n}$, and by applying the $\sigma$-additivity and monotonicity of probability functions.
Proposition 1.59 - Finite subadditivity (Resnick, 1999, p. 31)
The probability function $P$ is finite subadditive in the sense that
$$
P\left(\bigcup_{i=1}^{n} A_{i}\right) \leq \sum_{i=1}^{n} P\left(A_{i}\right),
$$
for all events $A_{1}, \ldots, A_{n}$.
## Remark 1.60 - Finite subadditivity
Finite subadditivity is a consequence of Boole's inequality (i.e. $\sigma$-subadditivity). However, finite subadditivity does not imply $\sigma$-subadditivity.
Proposition 1.61 - Inclusion-exclusion formula (Resnick, 1999, p. 30) If $A_{1}, \ldots, A_{n}$ are events, then the probability of their union can be written as follows:
$$
\begin{aligned}
P\left(\bigcup_{i=1}^{n} A_{i}\right)= & \sum_{i=1}^{n} P\left(A_{i}\right)-\sum_{\substack{1 \leq i<j \leq n}} P\left(A_{i} \cap A_{j}\right)+\sum_{1 \leq i<j<k \leq n} P\left(A_{i} \cap A_{j} \cap A_{k}\right) \\
& -\ldots-(-1)^{n} \times P\left(A_{1} \cap \ldots \cap A_{n}\right) .
\end{aligned}
$$
${ }^{10}$ Note that $\bigcup_{n=1}^{+\infty} A_{n}=\bigcup_{n=1}^{+\infty} B_{n}$, where $B_{n}=A_{n} \backslash\left(\bigcup_{i=1}^{n-1} A_{i}\right)$ are disjoint events.

Remark 1.62 - Inclusion-exclusion formula (Resnick, 1999, p. 30)
The terms on the right side of (1.47) alternate in sign and give inequalities called Bonferroni inequalities when we neglect the remainders. Two examples:
$$
\begin{aligned}
& P\left(\bigcup_{i=1}^{n} A_{i}\right) \leq \sum_{i=1}^{n} P\left(A_{i}\right) \\
& P\left(\bigcup_{i=1}^{n} A_{i}\right) \geq \sum_{i=1}^{n} P\left(A_{i}\right)-\sum_{1 \leq i<j \leq n} P\left(A_{i} \cap A_{j}\right) .
\end{aligned}
$$
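The formula and the first Bonferroni inequality can be verified mechanically on a finite uniform space; in the Python sketch below the three events are illustrative choices:

```python
from itertools import combinations

omega = set(range(30))
P = lambda E: len(E) / len(omega)
events = [set(range(0, 12)), set(range(8, 20)), set(range(15, 30))]   # illustrative

n = len(events)
total = 0.0
for r in range(1, n + 1):                      # alternating sum over index subsets
    for idx in combinations(range(n), r):
        inter = set.intersection(*(events[i] for i in idx))
        total += (-1) ** (r + 1) * P(inter)

assert abs(total - P(set.union(*events))) < 1e-12          # exact match
assert P(set.union(*events)) <= sum(P(E) for E in events)  # first Bonferroni inequality
print(total)   # ~= 1.0 here, since the three events cover Omega
```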
## Exercise 1.63 - Inclusion-exclusion formula
Prove the inclusion-exclusion formula by induction using the addition rule for $n=2$ (Resnick, 1999, p. 30).
Proposition 1.64 - Monotone continuity (Resnick, 1999, p. 31)
Probability functions are continuous for monotone sequences of events in the sense that:
1. If $A_{n} \uparrow A$, where $A_{n} \in \mathcal{F}$, then $P\left(A_{n}\right) \uparrow P(A)$.
2. If $A_{n} \downarrow A$, where $A_{n} \in \mathcal{F}$, then $P\left(A_{n}\right) \downarrow P(A)$.
## Exercise 1.65 - Monotone continuity
Prove Proposition 1.64 by using the disjointification technique, the monotone character of the sequence of events and $\sigma$-additivity (Resnick, 1999, p. 31).
For instance, property 1. can be proved as follows:
- $A_{1} \subset A_{2} \subset A_{3} \subset \ldots \subset A_{n} \subset \ldots$
- $B_{1}=A_{1}, B_{2}=A_{2} \backslash A_{1}, B_{3}=A_{3} \backslash\left(A_{1} \cup A_{2}\right), \ldots, B_{n}=A_{n} \backslash\left(\bigcup_{i=1}^{n-1} A_{i}\right)$ are disjoint events.
- Since $A_{1}, A_{2}, \ldots$ is a non-decreasing sequence of events $A_{n} \uparrow A=\bigcup_{n=1}^{+\infty} A_{n}=$ $\bigcup_{n=1}^{+\infty} B_{n}, B_{n}=A_{n} \backslash A_{n-1}$, and $\bigcup_{i=1}^{n} B_{i}=A_{n}$. If we add to this $\sigma$-additivity, we conclude that
$$
\begin{aligned}
P(A) & =P\left(\bigcup_{n=1}^{+\infty} A_{n}\right)=P\left(\bigcup_{n=1}^{+\infty} B_{n}\right)=\sum_{n=1}^{+\infty} P\left(B_{n}\right) \\
& =\lim _{n \rightarrow+\infty} \uparrow \sum_{i=1}^{n} P\left(B_{i}\right)=\lim _{n \rightarrow+\infty} \uparrow P\left(\bigcup_{i=1}^{n} B_{i}\right)=\lim _{n \rightarrow+\infty} \uparrow P\left(A_{n}\right) .
\end{aligned}
$$
Motivation 1.66 - $\sigma$-additivity as a result of finite additivity and monotone continuity (Karr, 1993, p. 26)
The next theorem shows that $\sigma$-additivity is equivalent to the confluence of finite additivity (which is reasonable) and monotone continuity (which is convenient and desirable mathematically).
Theorem 1.67 - $\sigma$-additivity as a result of finite additivity and monotone continuity (Karr, 1993, p. 26)
Let $P$ be a nonnegative, finitely additive set function on $\mathcal{F}$ with $P(\Omega)=1$. Then, the following are equivalent:
1. $P$ is $\sigma$-additive (thus a probability function).
2. Whenever $A_{n} \uparrow A$ in $\mathcal{F}, P\left(A_{n}\right) \uparrow P(A)$.
3. Whenever $A_{n} \downarrow A$ in $\mathcal{F}, P\left(A_{n}\right) \downarrow P(A)$.
4. Whenever $A_{n} \downarrow \emptyset$ in $\mathcal{F}, P\left(A_{n}\right) \downarrow 0$.
Exercise 1.68 - $\sigma$-additivity as a result of finite additivity and monotone continuity
Prove Theorem 1.67.
Note that we need to prove 1. $\Rightarrow$ 2. $\Rightarrow$ 3. $\Rightarrow$ 4. $\Rightarrow$ 1. But since 2. $\Leftrightarrow$ 3. by complementation and 4. is a special case of 3., we just need to prove that 1. $\Rightarrow$ 2. and 4. $\Rightarrow$ 1. (Karr, 1993, pp. 26-27).
Remark 1.69 - Inf, sup, lim inf and lim sup Let $a_{1}, a_{2}, \ldots$ be a sequence of real numbers. Then
## - Infimum
The infimum of the set $\left\{a_{1}, a_{2}, \ldots\right\}$ - written inf $a_{n}$ - corresponds to the greatest element (not necessarily in $\left\{a_{1}, a_{2}, \ldots\right\}$ ) that is less than or equal to all elements of $\left\{a_{1}, a_{2}, \ldots\right\} .^{11}$
## - Supremum
The supremum of the set $\left\{a_{1}, a_{2}, \ldots\right\}$ - written $\sup a_{n}$ - corresponds to the smallest element (not necessarily in $\left\{a_{1}, a_{2}, \ldots\right\}$ ) that is greater than or equal to every element of $\left\{a_{1}, a_{2}, \ldots\right\} .{ }^{12}$
${ }^{11}$ For more details check http://en.wikipedia.org/wiki/Infimum
${ }^{12}$ http://en.wikipedia.org/wiki/Supremum

- Limit inferior and limit superior of a sequence of real numbers ${ }^{13}$

$$
\liminf a_{n}=\sup _{k \geq 1} \inf _{n \geq k} a_{n} \quad \limsup a_{n}=\inf _{k \geq 1} \sup _{n \geq k} a_{n} .
$$
Let $A_{1}, A_{2}, \ldots$ be a sequence of events. Then
- Limit inferior and limit superior of a sequence of sets

$$
\limsup A_{n}=\bigcap_{k=1}^{+\infty} \bigcup_{n=k}^{+\infty} A_{n} \quad \liminf A_{n}=\bigcup_{k=1}^{+\infty} \bigcap_{n=k}^{+\infty} A_{n} .
$$
Motivation 1.70 - A special case of Fatou's lemma
This result plays a vital role in the proof of the continuity of probability functions.
Proposition 1.71 - A special case of Fatou's lemma (Resnick, 1999, p. 32) Suppose $A_{1}, A_{2}, \ldots$ is a sequence of events in $\mathcal{F}$. Then
$$
P\left(\lim \inf A_{n}\right) \leq \lim \inf P\left(A_{n}\right) \leq \lim \sup P\left(A_{n}\right) \leq P\left(\lim \sup A_{n}\right) .
$$
Exercise 1.72 - A special case of Fatou's lemma
Prove Proposition 1.71 (Resnick, 1999, pp. 32-33; Karr, 1993, p. 27).
Theorem 1.73 - Continuity (Karr, 1993, p. 27) If $A_{n} \rightarrow A$ then $P\left(A_{n}\right) \rightarrow P(A)$.
## Exercise 1.74 - Continuity
Prove Theorem 1.73 by using Proposition 1.71 (Karr, 1993, p. 27).
Motivation 1.75 - (1st.) Borel-Cantelli Lemma (Resnick, 1999, p. 102) This result is simple but is still a basic tool for proving almost sure convergence of sequences of random variables.
${ }^{13}$ http://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior

Theorem 1.76 - (1st.) Borel-Cantelli Lemma (Resnick, 1999, p. 102) Let $A_{1}, A_{2}, \ldots$ be any events in $\mathcal{F}$. Then
$$
\sum_{n=1}^{+\infty} P\left(A_{n}\right)<+\infty \Rightarrow P\left(\lim \sup A_{n}\right)=0 .
$$
Exercise 1.77 - (1st.) Borel-Cantelli Lemma
Prove Theorem 1.76 (Resnick, 1999, p. 102).
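A simulation sketch of what the lemma asserts (the choice $P\left(A_{n}\right)=1 / n^{2}$, the independence of the events, the horizon $N$ and the use of NumPy are illustrative assumptions): since $\sum_{n} 1 / n^{2}<+\infty$, along each simulated path only finitely many $A_{n}$ should occur.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
n = np.arange(1, N + 1)

for path in range(5):
    occurred = rng.random(N) < 1.0 / n**2      # indicator of A_n, with P(A_n) = 1/n^2
    last = n[occurred][-1] if occurred.any() else 0
    print(f"path {path}: {occurred.sum()} events occurred, last one at n = {last}")
# only finitely many (indeed very few) A_n occur on each path
```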
### Distribution functions; discrete, absolutely continuous and mixed probabilities
Motivation 1.78 - Distribution function (Karr, 1993, pp. 28-29)
A probability function $P$ on the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ is determined by its values $P((-\infty, x])$, for all intervals $(-\infty, x]$.
Probability functions on the real line play an important role as distribution functions of random variables.
Definition 1.79 - Distribution function (Karr, 1993, p. 29)
Let $P$ be a probability function defined on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. The distribution function associated to $P$ is represented by $F_{P}$ and defined by
$$
F_{P}(x)=P((-\infty, x]), x \in \mathbb{R} .
$$
Theorem 1.80 - Some properties of distribution functions (Karr, 1993, pp. 29-30)
Let $F_{P}$ be the distribution function associated to $P$. Then
1. $F_{P}$ is non-decreasing. Hence, the left-hand limit
$$
F_{P}\left(x^{-}\right)=\lim _{s \uparrow x, s<x} F_{P}(s)
$$
and the right-hand limit
$$
F_{P}\left(x^{+}\right)=\lim _{s \downarrow x, s>x} F_{P}(s)
$$
exist for each $x$, and
$$
F_{P}\left(x^{-}\right) \leq F_{P}(x) \leq F_{P}\left(x^{+}\right) .
$$
2. $F_{P}$ is right-continuous, i.e.
$$
F_{P}\left(x^{+}\right)=F_{P}(x)
$$
for each $x$.
3. $F_{P}$ has the following limits:
$$
\begin{aligned}
& \lim _{x \rightarrow-\infty} F_{P}(x)=0 \\
& \lim _{x \rightarrow+\infty} F_{P}(x)=1 .
\end{aligned}
$$
Definition 1.81 - Distribution function (Resnick, 1999, p. 33)
A function $F_{P}: \mathbb{R} \rightarrow[0,1]$ satisfying properties 1., 2. and 3. from Theorem 1.80 is called a distribution function.
Exercise 1.82 - Some properties of distribution functions
Prove Theorem 1.80 (Karr, 1993, p. 30).
Definition 1.83 - Survival (or survivor) function (Karr, 1993, p. 31)
The survival (or survivor) function associated to $P$ is
$$
S_{P}(x)=1-F_{P}(x)=P((x,+\infty)), x \in \mathbb{R} .
$$
The values $S_{P}(x)$ are also termed tail probabilities.
Proposition 1.84 - Probabilities in terms of the distribution function (Karr, 1993, p. 30)
The following table condenses the probabilities of various intervals in terms of the distribution function
$$
\begin{array}{ll}
\text { Interval } I & \text { Probability } P(I) \\
\hline(-\infty, x] & F_{P}(x) \\
(x,+\infty) & 1-F_{P}(x) \\
(-\infty, x) & F_{P}\left(x^{-}\right) \\
{[x,+\infty)} & 1-F_{P}\left(x^{-}\right) \\
(a, b] & F_{P}(b)-F_{P}(a) \\
{[a, b)} & F_{P}\left(b^{-}\right)-F_{P}\left(a^{-}\right) \\
{[a, b]} & F_{P}(b)-F_{P}\left(a^{-}\right) \\
(a, b) & F_{P}\left(b^{-}\right)-F_{P}(a) \\
\{x\} & F_{P}(x)-F_{P}\left(x^{-}\right)
\end{array}
$$
where $x \in \mathbb{R}$ and $-\infty<a<b<+\infty$.

Example 1.85 - Point mass (Karr, 1993, p. 31)
Let $P$ be defined as
$$
P(A)=\epsilon_{s}(A)= \begin{cases}1 & \text { if } s \in A \\ 0 & \text { otherwise }\end{cases}
$$
for every event $A \in \mathcal{F}$, i.e. $P$ is a point mass at $s$. Then
$$
\begin{aligned}
F_{P}(x) & =P((-\infty, x]) \\
& = \begin{cases}0, & x<s \\
1, & x \geq s\end{cases}
\end{aligned}
$$
Example 1.86 - Uniform distribution on [0,1] (Karr, 1993, p. 31) Let $P$ be defined as
$$
P((a, b])=\operatorname{Length}((a, b] \cap[0,1]),
$$
for any real interval $(a, b]$ with $-\infty<a<b<+\infty$. Then
$$
\begin{aligned}
F_{P}(x) & =P((-\infty, x]) \\
& = \begin{cases}0, & x<0 \\
x, & 0 \leq x \leq 1 \\
1, & x>1\end{cases}
\end{aligned}
$$
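The table of Proposition 1.84 can be read off numerically from $F_{P}$; here is a Python sketch for the uniform probability above (the interval endpoints are illustrative choices):

```python
F = lambda x: min(max(x, 0.0), 1.0)   # F_P for the uniform distribution on [0, 1]

# F is continuous here, so F(x^-) = F(x) and singletons get probability zero.
a, b = 0.2, 0.7
print(F(b) - F(a))      # P((a, b])    = 0.5
print(1 - F(a))         # P((a, +inf)) = 0.8
print(F(0.5) - F(0.5))  # P({0.5})     = F(x) - F(x^-) = 0
```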
We are going to revisit the discrete and absolutely continuous probabilities and introduce mixed distributions.
Definition 1.87 - Discrete probabilities (Karr, 1993, p. 32)
A probability $P$ defined on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is discrete if there is a countable set $C$ such that $P(C)=1$.
Remark 1.88 - Discrete probabilities (Karr, 1993, p. 32)
Discrete probabilities are finite or countable convex combinations of point masses. The associated distribution functions do not increase "smoothly" - they increase only by means of jumps.

Proposition 1.89 - Discrete probabilities (Karr, 1993, p. 32)
Let $P$ be a probability function on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. Then the following are equivalent:
1. $P$ is a discrete probability.
2. There is a real sequence $x_{1}, x_{2}, \ldots$ and numbers $p_{1}, p_{2}, \ldots$ with $p_{n}>0$, for all $n$, and $\sum_{n=1}^{+\infty} p_{n}=1$ such that
$$
P(A)=\sum_{n=1}^{+\infty} p_{n} \times \epsilon_{x_{n}}(A),
$$
for all $A \in \mathcal{B}(\mathbb{R})$.
3. There is a real sequence $x_{1}, x_{2}, \ldots$ and numbers $p_{1}, p_{2}, \ldots$ with $p_{n}>0$, for all $n$, and $\sum_{n=1}^{+\infty} p_{n}=1$ such that
$$
F_{P}(x)=\sum_{n=1}^{+\infty} p_{n} \times \mathbf{1}_{(-\infty, x]}\left(x_{n}\right),
$$
for all $x \in \mathbb{R}$.
Remark 1.90 - Discrete probabilities (Karr, 1993, p. 33)
The distribution function associated to a discrete probability increases only by jumps of size $p_{n}$ at $x_{n}$.
## Exercise 1.91 - Discrete probabilities
Prove Proposition 1.89 (Karr, 1993, p. 32).
## Example 1.92 - Discrete probabilities
Let $p_{x}$ represent from now on $P(\{x\})$.
- Uniform distribution on a finite set $C$
$$
\begin{aligned}
& p_{x}=\frac{1}{\# C}, x \in C \\
& P(A)=\frac{\# A}{\# C}, A \subseteq C .
\end{aligned}
$$
This distribution is also known as the Laplace distribution.
- Bernoulli distribution with parameter $p(p \in[0,1])$
$$
\begin{aligned}
& C=\{0,1\} \\
& p_{x}=p^{x}(1-p)^{1-x}, x \in C .
\end{aligned}
$$
This probability function arises in what we call a Bernoulli trial - a yes/no random experiment which yields success with probability $p$.

- Binomial distribution with parameters $n$ and $p$ $(n \in \mathbb{N}, p \in[0,1])$
$$
\begin{aligned}
& C=\{0,1, \ldots, n\} \\
& p_{x}=\left(\begin{array}{l}
n \\
x
\end{array}\right) p^{x}(1-p)^{n-x}, x \in C .
\end{aligned}
$$
The binomial distribution is the discrete probability distribution of the number of successes in a sequence of $n$ independent yes/no experiments, each of which yields success with probability $p$.
Moreover, the result $\sum_{x=0}^{n} p_{x}=1$ follows from the binomial theorem (http://en.wikipedia.org/wiki/Binomial_theorem).
- Geometric distribution with parameter $p(p \in[0,1])$
$C=\mathbb{N}$
$p_{x}=(1-p)^{x-1} p, x \in C$.
This probability function is associated to the total number of (i.i.d.) Bernoulli trials needed to get one success - the first success (http://en.wikipedia.org/wiki/Geometric_distribution). The associated distribution function is
$$
F_{P}(x)= \begin{cases}0, & x<1 \\ \sum_{i=1}^{[x]}(1-p)^{i-1} p=1-(1-p)^{[x]}, & x \geq 1,\end{cases}
$$
where $[x]$ represents the integer part of the real number $x$.
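A quick numerical check of this closed form against partial sums of the pmf (the value $p=0.3$ and the evaluation points are illustrative choices):

```python
import math

p = 0.3                                    # illustrative success probability
for x in [1, 2, 5, 10.7]:
    k = math.floor(x)                      # [x], the integer part
    partial = sum((1 - p) ** (i - 1) * p for i in range(1, k + 1))
    closed = 1 - (1 - p) ** k
    assert abs(partial - closed) < 1e-12   # partial sums match 1 - (1-p)^[x]
    print(x, closed)
```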
- Negative binomial distribution with parameters $r$ and $p(r \in \mathbb{N}, p \in[0,1])$
$C=\{r, r+1, \ldots\}$
$p_{x}=\left(\begin{array}{l}x-1 \\ r-1\end{array}\right)(1-p)^{x-r} p^{r}, x \in C$.
This probability function is associated to the total number of (i.i.d.) Bernoulli trials needed to get a pre-specified number $r$ of successes (http://en.wikipedia.org/wiki/Negative_binomial_distribution). The geometric distribution is a particular case: $r=1$.
- Hypergeometric distribution with parameters $N, M, n$ $(N, M, n \in \mathbb{N}$ and $M, n \leq N)$
$C=\left\{x \in \mathbb{N}_{0}: \max \{0, n-N+M\} \leq x \leq \min \{n, M\}\right\}$
$p_{x}=\frac{\left(\begin{array}{c}M \\ x\end{array}\right)\left(\begin{array}{c}N-M \\ n-x\end{array}\right)}{\left(\begin{array}{c}N \\ n\end{array}\right)}, x \in C$.
It is a discrete probability that describes the number of successes in a sequence of $n$ draws, without replacement, from a finite population of size $N$ (http://en.wikipedia.org/wiki/Hypergeometric_distribution).
- Poisson distribution with parameter $\lambda\left(\lambda \in \mathbb{R}^{+}\right)$
$C=\mathbb{N}_{0}$
$p_{x}=e^{-\lambda} \frac{\lambda^{x}}{x!}, x \in C$.
It is a discrete probability that expresses the probability of a number of events occurring in a fixed period of time if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume (http://en.wikipedia.org/wiki/Poisson_distribution).

[Figure omitted: distribution functions for three different values of $\lambda$.]

Motivation 1.93 - Absolutely continuous probabilities (Karr, 1993, p. 33)
Absolutely continuous probabilities are the antithesis of discrete probabilities in the sense that they have "smooth" distribution functions.
Definition 1.94 - Absolutely continuous probabilities (Karr, 1993, p. 33)
A probability function $P$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is absolutely continuous if there is a non-negative function $f_{P}$ on $\mathbb{R}$ such that
$$
P((a, b])=\int_{a}^{b} f_{P}(x) d x,
$$
for every interval $(a, b] \in \mathcal{B}(\mathbb{R})$.
## Remark 1.95 - Absolutely continuous probabilities
If $P$ is an absolutely continuous probability then $F_{P}(x)$ is an absolutely continuous real function.
## Remark 1.96 - Continuous, uniformly continuous and absolutely continuous functions
- Continuous function
A real function $f$ is continuous in $x$ if for any sequence $\left\{x_{1}, x_{2}, \ldots\right\}$ such that $\lim _{n \rightarrow \infty} x_{n}=x$, it holds that $\lim _{n \rightarrow \infty} f\left(x_{n}\right)=f(x)$.
One can say, briefly, that a function is continuous iff it preserves limits.
For the Cauchy definition (epsilon-delta) of continuous function see http://en.wikipedia.org/wiki/Continuous_function
## - Uniformly continuous function
Given metric spaces $\left(X, d_{1}\right)$ and $\left(Y, d_{2}\right)$, a function $f: X \rightarrow Y$ is called uniformly continuous on $X$ if for every real number $\epsilon>0$ there exists $\delta>0$ such that for every $x, y \in X$ with $d_{1}(x, y)<\delta$, we have that $d_{2}(f(x), f(y))<\epsilon$.
If $X$ and $Y$ are subsets of the real numbers, $d_{1}$ and $d_{2}$ can be the standard Euclidean norm, |.|, yielding the definition: for all $\epsilon>0$ there exists a $\delta>0$ such that for all $x, y \in X,|x-y|<\delta$ implies $|f(x)-f(y)|<\epsilon$.
The difference between being uniformly continuous, and simply being continuous at every point, is that in uniform continuity the value of $\delta$ depends only on $\epsilon$ and not on the point in the domain (http://en.wikipedia.org/wiki/Uniform_continuity).
## - Absolutely continuous function
Let $(X, d)$ be a metric space and let $I$ be an interval in the real line $\mathbb{R}$. A function $f$ : $I \rightarrow X$ is absolutely continuous on $I$ if for every positive number $\epsilon$, there is a positive number $\delta$ such that whenever a (finite or infinite) sequence of pairwise disjoint subintervals $\left[x_{k}, y_{k}\right]$ of $I$ satisfies $\sum_{k}\left|y_{k}-x_{k}\right|<\delta$ then $\sum_{k} d\left(f\left(y_{k}\right), f\left(x_{k}\right)\right)<\epsilon$.
Absolute continuity is a smoothness property which is stricter than continuity and uniform continuity.
The two following functions are continuous everywhere but not absolutely continuous:
1. $f(x)=\left\{\begin{array}{l}0, \text { if } x=0 \\ x \sin (1 / x), \text { if } x \neq 0,\end{array}\right.$
on a finite interval containing the origin;
2. $f(x)=x^{2}$ on an unbounded interval.
(http://en.wikipedia.org/wiki/Absolute_continuity)
Proposition 1.97 - Absolutely continuous probabilities (Karr, 1993, p. 34)
A probability function $P$ is absolutely continuous iff there is a non-negative function $f$ on $\mathbb{R}$ such that
$$
\begin{aligned}
\int_{-\infty}^{+\infty} f(s) d s & =1 \\
F_{P}(x) & =\int_{-\infty}^{x} f(s) d s, x \in \mathbb{R} .
\end{aligned}
$$
## Exercise 1.98 - Absolutely continuous probabilities
Prove Proposition 1.97 (Karr, 1993, p. 34).
## Example 1.99 - Absolutely continuous probabilities
- Uniform distribution on $[a, b](a, b \in \mathbb{R}, a<b)$
$$
\begin{aligned}
& f_{P}(x)= \begin{cases}\frac{1}{b-a}, & a \leq x \leq b \\
0, & \text { otherwise }\end{cases} \\
& F_{P}(x)= \begin{cases}0, & x<a \\
\frac{x-a}{b-a}, & a \leq x \leq b \\
1, & x>b .\end{cases}
\end{aligned}
$$
This absolutely continuous probability is such that all intervals of the same length on the distribution's support are equally probable.
The support is defined by the two parameters, $a$ and $b$, which are its minimum and maximum values (http://en.wikipedia.org/wiki/Uniform_distribution_(continuous)).
- Exponential distribution with parameter $\lambda\left(\lambda \in \mathbb{R}^{+}\right)$
$f_{P}(x)= \begin{cases}\lambda e^{-\lambda x}, & x \geq 0 \\ 0, & \text { otherwise. }\end{cases}$
$F_{P}(x)= \begin{cases}0, & x<0 \\ 1-e^{-\lambda x}, & x \geq 0\end{cases}$
The exponential distribution is used to describe the times between consecutive events in a Poisson process. ${ }^{14}$ (http://en.wikipedia.org/wiki/Exponential_distribution).
${ }^{14}$ I.e. a process in which events occur continuously and independently at a constant average rate.

Let $P^{*}$ be the (discrete) Poisson probability with parameter $\lambda x$. Then $P^{*}(\{0\})=e^{-\lambda x}=P((x,+\infty))=1-F_{P}(x)$.
- Normal distribution with parameters $\mu(\mu \in \mathbb{R})$ and $\sigma^{2}\left(\sigma^{2} \in \mathbb{R}^{+}\right)$
$f_{P}(x)=\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp \left\{-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right\}, x \in \mathbb{R}$
The normal distribution or Gaussian distribution is used to describe data that cluster around a mean or average. The graph of the associated probability density function is bell-shaped, with a peak at the mean, and is known as the Gaussian function or bell curve (http://en.wikipedia.org/wiki/Normal_distribution).
$F_{P}(x)=\int_{-\infty}^{x} f_{P}(s) d s, x \in \mathbb{R}$
Motivation 1.100 - Mixed distributions (Karr, 1993, p. 34)
A probability function need not be discrete or absolutely continuous...
Definition 1.101 - Mixed distributions (Karr, 1993, p. 34)
A probability function $P$ is mixed if there is a discrete probability $P_{d}$, an absolutely continuous probability $P_{c}$ and $\alpha \in(0,1)$ such that $P$ is a convex combination of $P_{d}$ and $P_{c}$, that is,
$$
P=\alpha P_{d}+(1-\alpha) P_{c}
$$
## Example 1.102 - Mixed distributions
Let $M(\lambda) / M(\mu) / 1$ represent a queueing system with Poisson arrivals (rate $\lambda>0$ ) and exponential service times (rate $\mu>0$ ) and one server.
In the equilibrium state, the probability function associated to the waiting time of an arriving customer is
$$
P(A)=(1-\rho) \epsilon_{\{0\}}(A)+\rho P_{E x p(\mu(1-\rho))}(A), A \in \mathcal{B}(\mathbb{R}),
$$
where $0<\rho=\frac{\lambda}{\mu}<1$ and
$$
P_{E x p(\mu(1-\rho))}(A)=\int_{A} \mu(1-\rho) e^{-\mu(1-\rho) s} d s .
$$
The associated distribution function is given by
$$
F_{P}(x)= \begin{cases}0, & x<0 \\ (1-\rho)+\rho\left[1-e^{-\mu(1-\rho) x}\right], & x \geq 0 .\end{cases}
$$
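A small Python sketch of this mixed distribution function (the rates $\lambda=2$ and $\mu=3$ are illustrative; any pair with $0<\rho<1$ works):

```python
import math

lam, mu = 2.0, 3.0                 # illustrative arrival and service rates
rho = lam / mu                     # traffic intensity, must satisfy 0 < rho < 1

def F(x):
    """Distribution function of the M/M/1 waiting time in equilibrium."""
    if x < 0:
        return 0.0
    return (1 - rho) + rho * (1 - math.exp(-mu * (1 - rho) * x))

print(F(0.0))      # P(W = 0) = 1 - rho = 1/3: the discrete part (atom at 0)
print(F(1.0))      # ~0.755: atom plus part of the exponential component
print(1 - F(2.0))  # tail probability P(W > 2)
```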
### Conditional probability
Motivation 1.103 - Conditional probability (Karr, 1993, p. 35)
We shall revise probabilities to account for the knowledge that an event has occurred, using a concept known as conditional probability.
Definition 1.104 - Conditional probability (Karr, 1993, p. 35)
Let $A$ and $B$ be events. If $P(A)>0$ the conditional probability of $B$ given $A$ is equal to
$$
P(B \mid A)=\frac{P(B \cap A)}{P(A)} .
$$
In case $P(A)=0$, we make the convention that $P(B \mid A)=P(B)$.
Remark 1.105 - Conditional probability (Karr, 1993, p. 35)
$P(B \mid A)$ can be interpreted as the relative likelihood that $B$ occurs given that $A$ is known to have occurred.
## Exercise 1.106 - Conditional probability
Solve exercises 1.23 and 1.24 of Karr (1993, p. 40).
Example 1.107 - Conditional probability (Grimmett and Stirzaker, 2001, p. 9) A family has two children.
- What is the probability that both are boys, given that at least one is a boy?
The older and younger child may each be male or female, so there are four possible combinations of sexes, which we assume to be equally likely. Therefore
- $\Omega=\{G G, G B, B G, B B\}$
where $G=$ girl, $B=$ boy, and $P(G G)=P(G B)=P(B G)=P(B B)=\frac{1}{4}$.
From the definition of conditional probability
$$
\begin{aligned}
P(B B \mid \text { one boy at least }) & =P[B B \mid(G B \cup B G \cup B B)] \\
& =\frac{P[B B \cap(G B \cup B G \cup B B)]}{P(G B \cup B G \cup B B)} \\
& =\frac{P(B B)}{P(G B)+P(B G)+P(B B)} \\
& =\frac{\frac{1}{4}}{\frac{1}{4}+\frac{1}{4}+\frac{1}{4}} \\
& =\frac{1}{3} .
\end{aligned}
$$
A popular but incorrect answer to this question is $\frac{1}{2}$. This is the correct answer to another question:
- For a family with two children, what is the probability that both are boys given that the younger is a boy?
In this case
$$
\begin{aligned}
P(B B \mid \text { younger child is a boy }) & =P[B B \mid(G B \cup B B)] \\
& =\cdots \\
& =\frac{P(B B)}{P(G B)+P(B B)} \\
& =\frac{\frac{1}{4}}{\frac{1}{4}+\frac{1}{4}} \\
& =\frac{1}{2} .
\end{aligned}
$$
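Both computations can be reproduced by brute-force enumeration of the (uniform) sample space; a short Python sketch:

```python
from itertools import product

omega = list(product("GB", repeat=2))        # (older, younger): 4 equally likely outcomes
P = lambda E: len(E) / len(omega)

BB = [w for w in omega if w == ("B", "B")]
at_least_one_boy = [w for w in omega if "B" in w]
younger_is_boy = [w for w in omega if w[1] == "B"]

# BB is contained in both conditioning events, so P(BB and cond) = P(BB)
print(P(BB) / P(at_least_one_boy))   # 1/3
print(P(BB) / P(younger_is_boy))     # 1/2
```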
Exercise 1.108 - Conditional probability (Grimmett and Stirzaker, 2001, p. 9) The prosecutor's fallacy ${ }^{15}$ - Let $G$ be the event that an accused is guilty, and $T$ the event that some testimony is true.
Some lawyers have argued on the assumption that $P(G \mid T)=P(T \mid G)$. Show that this holds iff $P(G)=P(T)$.
Motivation 1.109 - Multiplication rule (Montgomery and Runger, 2003, p. 42) The definition of conditional probability can be rewritten to provide a general expression for the probability of the intersection of (two) events. This formula is referred to as a multiplication rule for probabilities.
Proposition 1.110 - Multiplication rule (Montgomery and Runger, 2003, p. 43) Let $A$ and $B$ be two events. Then
$$
P(A \cap B)=P(B \mid A) \times P(A)=P(A \mid B) \times P(B) .
$$
More generally: let $A_{1}, \ldots, A_{n}$ be events then
$$
\begin{aligned}
P\left(A_{1} \cap A_{2} \cap \ldots \cap A_{n-1} \cap A_{n}\right)= & P\left(A_{1}\right) \times P\left(A_{2} \mid A_{1}\right) \times P\left[A_{3} \mid\left(A_{1} \cap A_{2}\right)\right] \\
& \ldots \times P\left[A_{n} \mid\left(A_{1} \cap A_{2} \cap \ldots A_{n-1}\right)\right] .
\end{aligned}
$$
${ }^{15}$ The prosecution made this error in the famous Dreyfus affair (http://en.wikipedia.org/wiki/Alfred_Dreyfus) in 1894.

Example 1.111 - Multiplication rule (Montgomery and Runger, 2003, p. 43) The probability that an automobile battery, subject to high engine compartment temperature, suffers low charging current is 0.7. The probability that a battery is subject to high engine compartment temperature is 0.05.
What is the probability that a battery suffers low charging current and is subject to high engine compartment temperature?
## - Table of events and probabilities
| Event | Probability |
| :--- | :--- |
| $C=$ battery suffers low charging current | $P(C)=?$ |
| $T=$ battery subject to high engine compartment temperature | $P(T)=0.05$ |
| $C \mid T=$ battery suffers low charging current given that it is | $P(C \mid T)=0.7$ |
| $\quad$ subject to high engine compartment temperature | |
- Probability
$$
\begin{aligned}
P(C \cap T) & \stackrel{\text { mult. rule }}{=} P(C \mid T) \times P(T) \\
& =0.7 \times 0.05 \\
& =0.035 .
\end{aligned}
$$
Motivation 1.112 - Law of total probability (Karr, 1993, p. 35)
The next law expresses the probability of an event in terms of its conditional probabilities given elements of a partition of $\Omega$.
Proposition 1.113 - Law of total probability (Karr, 1993, p. 35) Let $\left\{A_{1}, A_{2}, \ldots\right\}$ be a countable partition of $\Omega$. Then, for each event $B$,
$$
P(B)=\sum_{i=1}^{+\infty} P\left(B \mid A_{i}\right) \times P\left(A_{i}\right) .
$$
## Exercise 1.114 - Law of total probability
Prove Proposition 1.113 by using $\sigma$-additivity of a probability function and the fact that $B=\bigcup_{i=1}^{+\infty}\left(B \cap A_{i}\right)$ (Karr, 1993, p. 36).

Corollary 1.115 - Law of total probability (Montgomery and Runger, 2003, p. 44) For any events $A$ and $B$,
$$
P(B)=P(B \mid A) \times P(A)+P\left(B \mid A^{c}\right) \times P\left(A^{c}\right) .
$$
Example 1.116 - Law of total probability (Grimmett and Stirzaker, 2001, p. 11) Only two factories manufacture zoggles. $20 \%$ of the zoggles from factory I and $5 \%$ from factory II are defective. Factory I produces twice as many zoggles as factory II each week.
What is the probability that a zoggle, randomly chosen from a week's production, is not defective?
## - Table of events and probabilities
| Event | Probability |
| :--- | :--- |
| $D=$ defective zoggle | $P(D)=?$ |
| $A=$ zoggle made in factory I | $P(A)=2 \times[1-P(A)]=\frac{2}{3}$ |
| $A^{c}=$ zoggle made in factory II | $P\left(A^{c}\right)=1-P(A)=\frac{1}{3}$ |
| $D \mid A=$ defective zoggle given that it is made in factory I | $P(D \mid A)=0.20$ |
| $D \mid A^{c}=$ defective zoggle given that it is made in factory II | $P\left(D \mid A^{c}\right)=0.05$ |
- Probability
$$
\begin{aligned}
P\left(D^{c}\right) & =1-P(D) \\
& \stackrel{\text { law total prob. }}{=} 1-\left[P(D \mid A) \times P(A)+P\left(D \mid A^{c}\right) \times P\left(A^{c}\right)\right] \\
& =1-\left(0.20 \times \frac{2}{3}+0.05 \times \frac{1}{3}\right) \\
& =\frac{51}{60} .
\end{aligned}
$$
Motivation 1.117 - Bayes' theorem (Karr, 1993, p. 36)
Traditionally (and probably incorrectly) attributed to the English cleric Thomas Bayes (http://en.wikipedia.org/wiki/Thomas_Bayes), the theorem that bears his name is used to compute conditional probabilities "the other way around".
Proposition 1.118 - Bayes' theorem (Karr, 1993, p. 36)
Let $\left\{A_{1}, A_{2}, \ldots\right\}$ be a countable partition of $\Omega$. Then, for each event $B$ with $P(B)>0$ and each $n$,
$$
\begin{aligned}
P\left(A_{n} \mid B\right) & =\frac{P\left(B \mid A_{n}\right) P\left(A_{n}\right)}{P(B)} \\
& =\frac{P\left(B \mid A_{n}\right) P\left(A_{n}\right)}{\sum_{i=1}^{+\infty} P\left(B \mid A_{i}\right) P\left(A_{i}\right)}
\end{aligned}
$$
Exercise 1.119 - Bayes' theorem (Karr, 1993, p. 36)
Prove Bayes' theorem by using the definition of conditional probability and the law of total probability (Karr, 1993, p. 36).
Example 1.120 - Bayes' theorem (Grimmett and Stirzaker, 2001, p. 11)
Resume Example 1.116...
If the chosen zoggle is defective, what is the probability that it came from factory I?
- Probability
$$
\begin{aligned}
P(A \mid D) & =\frac{P(D \mid A) \times P(A)}{P(D)} \\
& =\frac{0.20 \times \frac{2}{3}}{1-\frac{51}{60}} \\
& =\frac{8}{9} .
\end{aligned}
$$
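The two zoggle computations, as a few lines of Python (nothing assumed beyond the probabilities stated in Example 1.116):

```python
P_A = 2 / 3                            # P(A): zoggle made in factory I
P_D_given_A, P_D_given_Ac = 0.20, 0.05

P_D = P_D_given_A * P_A + P_D_given_Ac * (1 - P_A)   # law of total probability
print(1 - P_D)                         # P(D^c) = 51/60 = 0.85
print(P_D_given_A * P_A / P_D)         # Bayes: P(A | D) = 8/9
```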
## References
- Grimmett, G.R. and Stirzaker, D.R. (2001). Probability and Random Processes (3rd. edition). Oxford. (QA274.12-.76.GRI.30385 and QA274.12-.76.GRI.40695 refer to the library code of the 1st. and 2nd. editions from 1982 and 1992, respectively.)
- Karr, A.F. (1993). Probability. Springer-Verlag.
- Montgomery, D.C. and Runger, G.C. (2003). Applied statistics and probability for engineers. John Wiley \& Sons, New York. (QA273-280/3.MON.64193)
- Papoulis, A. (1965). Probability, Random Variables and Stochastic Processes. McGraw-Hill Kogakusha, Ltd. (QA274.12-.76.PAP.28598)
- Resnick, S.I. (1999). A Probability Path. Birkhäuser. (QA273.4-.67.RES.49925)
- Righter, R. (200-). Lecture notes for the course Probability and Risk Analysis for Engineers. Department of Industrial Engineering and Operations Research, University of California at Berkeley.
- Yates, R.D. and Goodman, D.J. (1999). Probability and Stochastic Processes: A friendly Introduction for Electrical and Computer Engineers. John Wiley \& Sons, Inc. (QA273-280/4.YAT.49920)
## Chapter 2
## Random variables
### Fundamentals
Motivation 2.1 - Inverse image of sets (Karr, 1993, p. 43)
Before we introduce the concept of random variable (r.v.) we have to talk rather extensively on inverse images of sets and inverse image mapping.
Definition 2.2 - Inverse image (Karr, 1993, p. 43) Let:
- $X$ be a function with domain $\Omega$ and range $\Omega^{\prime}$, i.e. $X: \Omega \rightarrow \Omega^{\prime}$;
- $\mathcal{F}$ and $\mathcal{F}^{\prime}$ be the $\sigma$-algebras on $\Omega$ and $\Omega^{\prime}$, respectively.
(Frequently $\Omega^{\prime}=\mathbb{R}$ and $\mathcal{F}^{\prime}=\mathcal{B}(\mathbb{R})$.) Then the inverse image under $X$ of the set $B \in \mathcal{F}^{\prime}$ is the subset of $\Omega$ given by
$$
X^{-1}(B)=\{\omega: X(\omega) \in B\},
$$
written from now on $\{X \in B\}$ (graph!).
Remark 2.3 - Inverse image mapping (Karr, 1993, p. 43)
The inverse image mapping $X^{-1}$ maps subsets of $\Omega^{\prime}$ to subsets of $\Omega$. $X^{-1}$ preserves all set operations, as well as disjointness.

Proposition 2.4 - Properties of inverse image mapping (Karr, 1993, p. 43; Resnick, 1999, p. 72)
Let:
- $X: \Omega \rightarrow \Omega^{\prime}$
- $\mathcal{F}$ and $\mathcal{F}^{\prime}$ be the $\sigma$-algebras on $\Omega$ and $\Omega^{\prime}$, respectively;
- $B, B^{\prime}$ and $\left\{B_{i}: i \in I\right\}$ be sets in $\mathcal{F}^{\prime}$.
Then:
1. $X^{-1}(\emptyset)=\emptyset$
2. $X^{-1}\left(\Omega^{\prime}\right)=\Omega$
3. $B \subseteq B^{\prime} \Rightarrow X^{-1}(B) \subseteq X^{-1}\left(B^{\prime}\right)$
4. $X^{-1}\left(\bigcup_{i \in I} B_{i}\right)=\bigcup_{i \in I} X^{-1}\left(B_{i}\right)$
5. $X^{-1}\left(\bigcap_{i \in I} B_{i}\right)=\bigcap_{i \in I} X^{-1}\left(B_{i}\right)$
6. $B \cap B^{\prime}=\emptyset \Rightarrow X^{-1}(B) \cap X^{-1}\left(B^{\prime}\right)=\emptyset$
7. $X^{-1}\left(B^{c}\right)=\left[X^{-1}(B)\right]^{c}$.
Exercise 2.5 - Properties of inverse image mapping
Prove Proposition 2.4 (Karr, 1993, p. 43).
Proposition 2.6 - $\sigma$-algebras and inverse image mapping (Resnick, 1999, pp. 72-73)
Let $X: \Omega \rightarrow \Omega^{\prime}$ be a mapping with inverse image mapping $X^{-1}$. If $\mathcal{F}^{\prime}$ is a $\sigma$-algebra on $\Omega^{\prime}$ then
$$
X^{-1}\left(\mathcal{F}^{\prime}\right)=\left\{X^{-1}(B): B \in \mathcal{F}^{\prime}\right\}
$$
is a $\sigma$-algebra on $\Omega$.
Exercise 2.7 - $\sigma$-algebras and inverse image mapping
Prove Proposition 2.6 by verifying the 3 postulates for a $\sigma$-algebra (Resnick, 1999, p. 73).

Proposition 2.8 - Inverse images of $\sigma$-algebras generated by classes of subsets (Resnick, 1999, p. 73)
Let $\mathcal{C}^{\prime}$ be a class of subsets of $\Omega^{\prime}$. Then
$$
X^{-1}\left(\sigma\left(\mathcal{C}^{\prime}\right)\right)=\sigma\left(\left\{X^{-1}\left(\mathcal{C}^{\prime}\right)\right\}\right),
$$
i.e., the inverse image of the $\sigma$-algebra generated by $\mathcal{C}^{\prime}$ is the same as the $\sigma$-algebra on $\Omega$ generated by the inverse images.

Exercise 2.9 - Inverse images of $\sigma$-algebras generated by classes of subsets
Prove Proposition 2.8. This proof comprises the verification of the 3 postulates for a $\sigma$-algebra (Resnick, 1999, pp. 73-74) and much more.
Definition 2.10 - Measurable space (Resnick, 1999, p. 74)
The pair $(\Omega, \mathcal{F})$ consisting of a set $\Omega$ and a $\sigma$-algebra on $\Omega$ is called a measurable space.
Definition 2.11 - Measurable map (Resnick, 1999, p. 74)
Let $(\Omega, \mathcal{F})$ and $\left(\Omega^{\prime}, \mathcal{F}^{\prime}\right)$ be two measurable spaces. Then a map $X: \Omega \rightarrow \Omega^{\prime}$ is called a measurable map if
$$
X^{-1}\left(\mathcal{F}^{\prime}\right) \subseteq \mathcal{F}
$$
Remark 2.12 - Measurable maps/ Random variables (Karr, 1993, p. 44)
A special case occurs when $\left(\Omega^{\prime}, \mathcal{F}^{\prime}\right)=(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ - in this case $X$ is called a random variable. That is, random variables are functions on the sample space $\Omega$ for which inverse images of Borel sets are events of $\Omega$.
Definition 2.13 - Random variable (Karr, 1993, p. 44)
Let $(\Omega, \mathcal{F})$ and $\left(\Omega^{\prime}, \mathcal{F}^{\prime}\right)=(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ be two measurable spaces. A random variable (r.v.) is a function $X: \Omega \rightarrow \mathbb{R}$ such that
$$
X^{-1}(B) \in \mathcal{F}, \forall B \in \mathcal{B}(\mathbb{R}) .
$$
Remark 2.14 - Random variables (Karr, 1993, p. 44)
A r.v. is a function on the sample space: it maps outcomes $\omega \in \Omega$ to real numbers, and its inverse image mapping transforms Borel sets into events.
The technical requirement that sets $\{X \in B\}=X^{-1}(B)$ be events of $\Omega$ is needed in order that the probability
$$
P(\{X \in B\})=P\left(X^{-1}(B)\right)
$$
be defined.
Motivation 2.15 - Checking if $X$ is a r.v. (Karr, 1993, p. 47)
To verify that $X$ is a r.v. it is not necessary to check that $\{X \in B\}=X^{-1}(B) \in \mathcal{F}$ for all Borel sets $B$. In fact, $\sigma(X) \subseteq \mathcal{F}$ once $X^{-1}(B) \in \mathcal{F}$ for enough "elementary" Borel sets.
Proposition 2.16 - Checking if $X$ is a r.v. (Resnick, 1999, p. 77; Karr, 1993, p. 47) The real function $X: \Omega \rightarrow \mathbb{R}$ is a r.v. iff
$$
\{X \leq x\}=X^{-1}((-\infty, x]) \in \mathcal{F}, \forall x \in \mathbb{R}
$$
Similarly if we replace $\{X \leq x\}$ by $\{X>x\},\{X<x\}$ or $\{X \geq x\}$.
## Example 2.17 - Random variable
## - Random experiment
Throw a traditional fair die and observe the number of points.
- Sample space
$\Omega=\{1,2,3,4,5,6\}$
- $\sigma$-algebra on $\Omega$
Let us consider a non-trivial one:
$\mathcal{F}=\{\emptyset,\{1,3,5\},\{2,4,6\}, \Omega\}$
- Random variable
$X: \Omega \rightarrow \mathbb{R}$ such that: $X(1)=X(3)=X(5)=0$ and $X(2)=X(4)=X(6)=1$
## - Inverse image mapping
Let $B \in \mathcal{B}(\mathbb{R})$. Then
$$
\begin{aligned}
X^{-1}(B) & = \begin{cases}\emptyset, & \text { if } 0 \notin B, 1 \notin B \\
\{1,3,5\}, & \text { if } 0 \in B, 1 \notin B \\
\{2,4,6\}, & \text { if } 0 \notin B, 1 \in B \\
\Omega, & \text { if } 0 \in B, 1 \in B\end{cases} \\
& \in \mathcal{F}, \forall B \in \mathcal{B}(\mathbb{R}) .
\end{aligned}
$$
Therefore $X$ is a r.v. with respect to $\mathcal{F}$.
- A function which is not a r.v.
$Y: \Omega \rightarrow \mathbb{R}$ such that: $Y(1)=Y(2)=Y(3)=1$ and $Y(4)=Y(5)=Y(6)=0$.
$Y$ is not a r.v. with respect to $\mathcal{F}$ because $Y^{-1}(\{1\})=\{1,2,3\} \notin \mathcal{F}$.
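On a finite $\Omega$ the measurability check is a finite computation: it suffices to test $X^{-1}(B)$ for every subset $B$ of the finite range of $X$. A brute-force Python sketch of this example (the dictionaries below encode the $X$ and $Y$ just defined):

```python
from itertools import combinations

omega = {1, 2, 3, 4, 5, 6}
F = [set(), {1, 3, 5}, {2, 4, 6}, omega]       # the sigma-algebra of the example
X = {1: 0, 3: 0, 5: 0, 2: 1, 4: 1, 6: 1}       # the r.v. above
Y = {1: 1, 2: 1, 3: 1, 4: 0, 5: 0, 6: 0}       # the function that is not a r.v.

def is_rv(f):
    values = sorted(set(f.values()))
    for r in range(len(values) + 1):
        for B in combinations(values, r):       # every subset B of the range
            if {w for w in omega if f[w] in B} not in F:
                return False                    # some inverse image is not an event
    return True

print(is_rv(X), is_rv(Y))   # True False
```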
There are generalizations of r.v.
Definition 2.18 - Random vector (Karr, 1993, p. 45)
A $d$-dimensional random vector is a function $\underline{X}=\left(X_{1}, \ldots, X_{d}\right): \Omega \rightarrow \mathbb{R}^{d}$ such that each component $X_{i}, i=1, \ldots, d$, is a random variable.
Remark 2.19 - Random vector (Karr, 1993, p. 45)
Random vectors will sometimes be treated as finite sequences of random variables.
Definition 2.20 - Stochastic process (Karr, 1993, p. 45)
A stochastic process with index set (or parameter space) $T$ is a collection $\left\{X_{t}: t \in T\right\}$ of r.v. (indexed by $T$ ).
Remark 2.21 - Stochastic process (Karr, 1993, p. 45)
Typically:
- $T=\mathbb{N}_{0}$ and $\left\{X_{n}: n \in \mathbb{N}_{0}\right\}$ is called a discrete time stochastic process;
- $T=\mathbb{R}_{0}^{+}$ and $\left\{X_{t}: t \in \mathbb{R}_{0}^{+}\right\}$ is called a continuous time stochastic process.

Proposition 2.22 - $\sigma$-algebra generated by a r.v. (Karr, 1993, p. 46)
The family of events that are inverse images of Borel sets under a r.v. is a $\sigma$-algebra on $\Omega$. In fact, given a r.v. $X$, the family
$$
\sigma(X)=\left\{X^{-1}(B): B \in \mathcal{B}(\mathbb{R})\right\}
$$
is a $\sigma$-algebra on $\Omega$, known as the $\sigma$-algebra generated by $X$.
Remark 2.23 - $\sigma$-algebra generated by a r.v.
- Proposition 2.22 is a particular case of Proposition 2.6 when $\mathcal{F}^{\prime}=\mathcal{B}(\mathbb{R})$.
- Moreover, $\sigma(X)$ is a $\sigma$ - algebra for every function $X: \Omega \rightarrow \mathbb{R}$; and $X$ is a r.v. iff $\sigma(X) \subseteq \mathcal{F}$, i.e., iff $X$ is a measurable map (Karr, 1993, p. 46).
Example 2.24 - $\sigma$-algebra generated by an indicator r.v. (Karr, 1993, p. 46) Let:
- $A$ be a subset of the sample space $\Omega$;
- $X: \Omega \rightarrow \mathbb{R}$ such that
$$
X(\omega)=\mathbf{1}_{A}(\omega)= \begin{cases}1, & \omega \in A \\ 0, & \omega \notin A .\end{cases}
$$
Then $X$ is the indicator r.v. of an event $A$. In addition,
$$
\sigma(X)=\sigma\left(\mathbf{1}_{A}\right)=\left\{\emptyset, A, A^{c}, \Omega\right\}
$$
since
$$
X^{-1}(B)= \begin{cases}\emptyset, & \text { if } 0 \notin B, 1 \notin B \\ A^{c}, & \text { if } 0 \in B, 1 \notin B \\ A, & \text { if } 0 \notin B, 1 \in B \\ \Omega, & \text { if } 0 \in B, 1 \in B,\end{cases}
$$
for any $B \in \mathcal{B}(\mathbb{R})$.
Example 2.25 - $\sigma$-algebra generated by a constant r.v.
Let $X: \Omega \rightarrow \mathbb{R}$ such that $X(\omega)=c, \forall \omega \in \Omega$. Then
$$
X^{-1}(B)= \begin{cases}\emptyset, & \text { if } c \notin B \\ \Omega, & \text { if } c \in B,\end{cases}
$$
for any $B \in \mathcal{B}(\mathbb{R})$, and $\sigma(X)=\{\emptyset, \Omega\}$ (trivial $\sigma$ - algebra). Example 2.26 - $\sigma$-algebra generated by a simple r.v. (Karr, 1993, pp. 45-46) A simple r.v. takes only finitely many values and has the form
$$
X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}}
$$
where $a_{i}, i=1, \ldots, n$, are (not necessarily distinct) real numbers and $A_{i}, i=1, \ldots, n$, are events that constitute a partition of $\Omega$. $X$ is a r.v. since
$$
\{X \in B\}=\bigcup_{i=1}^{n}\left\{A_{i}: a_{i} \in B\right\},
$$
for any $B \in \mathcal{B}(\mathbb{R})$.
For this simple r.v. we get
$$
\begin{aligned}
\sigma(X) & =\sigma\left(\left\{A_{1}, \ldots, A_{n}\right\}\right) \\
& =\left\{\bigcup_{i \in I} A_{i}: I \subseteq\{1, \ldots, n\}\right\},
\end{aligned}
$$
regardless of the values of $a_{1}, \ldots, a_{n}$.
Definition 2.27 - $\sigma$-algebra generated by a random vector (Karr, 1993, p. 46) The $\sigma$-algebra generated by the $d$-dimensional random vector $\left(X_{1}, \ldots, X_{d}\right): \Omega \rightarrow \mathbb{R}^{d}$ is given by
$$
\sigma\left(\left(X_{1}, \ldots, X_{d}\right)\right)=\left\{\left(X_{1}, \ldots, X_{d}\right)^{-1}(B): B \in \mathcal{B}\left(\mathbb{R}^{d}\right)\right\}
$$
### Combining random variables
To work with r.v., we need assurance that algebraic, limiting and transformation operations applied to them yield other r.v.
In the next proposition we state that the set of r.v. is closed under:
- addition and scalar multiplication; ${ }^{1}$
- maximum and minimum;
- multiplication;
- division.
Proposition 2.28 - Closure under algebraic operations (Karr, 1993, p. 47) Let $X$ and $Y$ be r.v. Then:
1. $a X+b Y$ is a r.v., for all $a, b \in \mathbb{R}$;
2. $\max \{X, Y\}$ and $\min \{X, Y\}$ are r.v.;
3. $X Y$ is a r.v.;
4. $\frac{X}{Y}$ is a r.v. provided that $Y(\omega) \neq 0, \forall \omega \in \Omega$.
Exercise 2.29 - Closure under algebraic operations
Prove Proposition 2.28 (Karr, 1993, pp. 47-48).
Corollary 2.30 - Closure under algebraic operations (Karr, 1993, pp. 48-49) Let $X: \Omega \rightarrow \mathbb{R}$ be a r.v. Then
$$
\begin{aligned}
& X^{+}=\max \{X, 0\} \\
& X^{-}=-\min \{X, 0\},
\end{aligned}
$$
the positive and negative parts of $X$ (respectively), are non-negative r.v., and so is the absolute value
$$
|X|=X^{+}+X^{-} .
$$
${ }^{1}$ I.e. the set of r.v. is a vector space.

Remark 2.31 - Canonical representation of a r.v. (Karr, 1993, p. 49) A r.v. can be written as a difference of its positive and negative parts:
$$
X=X^{+}-X^{-}
$$
Theorem 2.32 - Closure under limiting operations (Karr, 1993, p. 49) Let $X_{1}, X_{2}, \ldots$ be r.v. Then $\sup X_{n}$, $\inf X_{n}$, $\limsup X_{n}$ and $\liminf X_{n}$ are r.v. Consequently, if
$$
X(\omega)=\lim _{n \rightarrow+\infty} X_{n}(\omega)
$$
exists for every $\omega \in \Omega$, then $X$ is also a r.v.
Exercise 2.33 - Closure under limiting operations
Prove Theorem 2.32 by noting that
$$
\begin{aligned}
\left\{\sup X_{n} \leq x\right\} & =\left(\sup X_{n}\right)^{-1}((-\infty, x]) \\
& =\bigcap_{n=1}^{+\infty}\left\{X_{n} \leq x\right\} \\
& =\bigcap_{n=1}^{+\infty}\left(X_{n}\right)^{-1}((-\infty, x]) \\
\left\{\inf X_{n} \geq x\right\} & =\left(\inf X_{n}\right)^{-1}([x,+\infty)) \\
& =\bigcap_{n=1}^{+\infty}\left\{X_{n} \geq x\right\} \\
& =\bigcap_{n=1}^{+\infty}\left(X_{n}\right)^{-1}([x,+\infty)) \\
\limsup X_{n} & =\inf _{k} \sup _{m \geq k} X_{m} \\
\liminf X_{n} & =\sup _{k} \inf _{m \geq k} X_{m}
\end{aligned}
$$
and that when $X=\lim _{n \rightarrow+\infty} X_{n}$ exists, $X=\lim \sup X_{n}=\liminf X_{n}$ (Karr, 1993, p. 49).
Corollary 2.34 - Series of r.v. (Karr, 1993, p. 49)
If $X_{1}, X_{2}, \ldots$ are r.v. and $X(\omega)=\sum_{n=1}^{+\infty} X_{n}(\omega)$ converges for each $\omega$, then $X$ is a r.v.

Motivation 2.35 - Transformations of r.v. and random vectors (Karr, 1993, p. 50)
Another way of constructing r.v. is as functions of other r.v.
Definition 2.36 - Borel measurable function (Karr, 1993, p. 66) A function $g: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ (for fixed $n, m \in \mathbb{N}$) is Borel measurable if
$$
g^{-1}(B) \in \mathcal{B}\left(\mathbb{R}^{n}\right), \forall B \in \mathcal{B}\left(\mathbb{R}^{m}\right) .
$$
Remark 2.37 - Borel measurable function (Karr, 1993, p. 66)
- In order that $g: \mathbb{R}^{n} \rightarrow \mathbb{R}$ be Borel measurable it suffices that
$$
g^{-1}((-\infty, x]) \in \mathcal{B}\left(\mathbb{R}^{n}\right), \forall x \in \mathbb{R}
$$
- A function $g: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ is Borel measurable iff each of its components is Borel measurable as a function from $\mathbb{R}^{n}$ to $\mathbb{R}$.
- Indicator functions, monotone functions and continuous functions are Borel measurable.
- Moreover, the class of Borel measurable functions has the same closure properties under algebraic and limiting operations as the family of r.v. on a probability space $(\Omega, \mathcal{F}, P)$.
Theorem 2.38 - Transformations of random vectors (Karr, 1993, p. 50) Let:
- $X_{1}, \ldots, X_{d}$ be r.v.;
- $g: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be a Borel measurable function.
Then $Y=g\left(X_{1}, \ldots, X_{d}\right)$ is a r.v.
Exercise 2.39 - Transformations of r.v.
Prove Theorem 2.38 (Karr, 1993, p. 50).
Corollary 2.40 - Transformations of r.v. (Karr, 1993, p. 50) Let:
- $X$ be r.v.;
- $g: \mathbb{R} \rightarrow \mathbb{R}$ be a Borel measurable function.
Then $Y=g(X)$ is a r.v.
### Distributions and distribution functions
The main importance of probability functions on $\mathbb{R}$ is that they are distributions of r.v.
Proposition 2.41 - R.v. and probabilities on $\mathbb{R}$ (Karr, 1993, p. 52)
Let $X$ be r.v. Then the set function
$$
P_{X}(B)=P\left(X^{-1}(B)\right)=P(\{X \in B\})
$$
is a probability function on $\mathbb{R}$.
Exercise 2.42 - R.v. and probabilities on $\mathbb{R}$
Prove Proposition 2.41 by checking if the three axioms in the definition of probability function hold (Karr, 1993, p. 52).
Definition 2.43 - Distribution, distribution and survival function of a r.v. (Karr, 1993, p. 52)
Let $X$ be a r.v. Then
1. the probability function on $\mathbb{R}$
$P_{X}(B)=P\left(X^{-1}(B)\right)=P(\{X \in B\}), B \in \mathcal{B}(\mathbb{R})$, is the distribution of $X ;$
2. $F_{X}(x)=P_{X}((-\infty, x])=P\left(X^{-1}((-\infty, x])\right)=P(\{X \leq x\})$, $x \in \mathbb{R}$, is the distribution function of $X$;
3. $S_{X}(x)=1-F_{X}(x)=P_{X}((x,+\infty))=P\left(X^{-1}((x,+\infty))\right)=P(\{X>x\})$, $x \in \mathbb{R}$, is the survival (or survivor) function of $X$.
Definition 2.44 - Discrete and absolutely continuous r.v. (Karr, 1993, p. 52) $X$ is said to be a discrete (resp. absolutely continuous) r.v. if $P_{X}$ is a discrete (resp. absolutely continuous) probability function.
Motivation 2.45 - Comparing r.v.

How can we compare two r.v. $X$ and $Y$?

Definition 2.46 - Identically distributed r.v. (Karr, 1993, p. 52)
Let $X$ and $Y$ be two r.v. Then $X$ and $Y$ are said to be identically distributed - written $X \stackrel{d}{=} Y-$ if
$$
\begin{aligned}
P_{X}(B) & =P(\{X \in B\}) \\
& =P(\{Y \in B\})=P_{Y}(B), B \in \mathcal{B}(\mathbb{R})
\end{aligned}
$$
i.e. if $F_{X}(x)=P(\{X \leq x\})=P(\{Y \leq x\})=F_{Y}(x)$, $x \in \mathbb{R}$.
Definition 2.47 - Equal r.v. almost surely (Karr, 1993, p. 52; Resnick, 1999, p. 167)
Let $X$ and $Y$ be two r.v. Then $X$ is equal to $Y$ almost surely - written $X \stackrel{\text { a.s. }}{=} Y$ - if
$$
P(\{X=Y\})=1
$$
Remark 2.48 - Identically distributed r.v. vs. equal r.v. almost surely (Karr, 1993, p. 52)
Equality in distribution of $X$ and $Y$ has no bearing on their equality as functions on $\Omega$, i.e.
$$
X \stackrel{d}{=} Y \nRightarrow X \stackrel{\text { a.s. }}{=} Y
$$
even though
$$
X \stackrel{\text { a.s. }}{=} Y \Rightarrow X \stackrel{d}{=} Y
$$
Example 2.49 - Identically distributed r.v. vs. equal r.v. almost surely
- $X \sim \operatorname{Bernoulli}(0.5)$
$$
P(\{X=0\})=P(\{X=1\})=0.5
$$
- $Y=1-X \sim \operatorname{Bernoulli}(0.5)$ since
$$
\begin{aligned}
& P(\{Y=0\})=P(\{1-X=0\})=P(\{X=1\})=0.5 \\
& P(\{Y=1\})=P(\{1-X=1\})=P(\{X=0\})=0.5
\end{aligned}
$$
- $X \stackrel{d}{=} Y$ but $X \stackrel{\text { a.s. }}{\neq} Y$.

Exercise 2.50 - Identically distributed r.v. vs. equal r.v. almost surely

Prove that $X \stackrel{\text { a.s. }}{=} Y \Rightarrow X \stackrel{d}{=} Y$.
Definition 2.51 - Distribution and distribution function of a random vector (Karr, 1993, p. 53)
Let $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a $d$-dimensional random vector. Then
1. the probability function on $\mathbb{R}^{d}$
$$
P_{\underline{X}}(B)=P\left(\underline{X}^{-1}(B)\right)=P(\{\underline{X} \in B\}), B \in \mathcal{B}\left(\mathbb{R}^{d}\right) \text {, is the distribution of } \underline{X} ;
$$
2. the distribution function of $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$, also known as the joint distribution function of $X_{1}, \ldots, X_{d}$ is the function $F_{\underline{X}}: \mathbb{R}^{d} \rightarrow[0,1]$ given by
$$
\begin{aligned}
F_{\underline{X}}(\underline{x}) & =F_{\left(X_{1}, \ldots, X_{d}\right)}\left(x_{1}, \ldots, x_{d}\right) \\
& =P\left(\left\{X_{1} \leq x_{1}, \ldots, X_{d} \leq x_{d}\right\}\right),
\end{aligned}
$$
for any $\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathbb{R}^{d}$.
Remark 2.52 - Distribution function of a random vector (Karr, 1993, p. 53) The distribution $P_{\underline{X}}$ is determined uniquely by $F_{\underline{X}}$.
Motivation 2.53 - Marginal distribution function (Karr, 1993, p. 53)
Can we obtain the distribution of $X_{i}$ from the joint distribution function?
Proposition 2.54 - Marginal distribution function (Karr, 1993, p. 53)
Let $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a $d$-dimensional random vector. Then, for each $i(i=1, \ldots, d)$ and $x(x \in \mathbb{R})$,
$$
F_{X_{i}}(x)=\lim _{x_{j} \rightarrow+\infty, j \neq i} F_{\left(X_{1}, \ldots, X_{i-1}, X_{i}, X_{i+1}, \ldots, X_{d}\right)}\left(x_{1}, \ldots, x_{i-1}, x, x_{i+1}, \ldots, x_{d}\right) .
$$
Exercise 2.55 - Marginal distribution function
Prove Proposition 2.54 by noting that $\left\{X_{1} \leq x_{1}, \ldots, X_{i-1} \leq x_{i-1}, X_{i} \leq x, X_{i+1} \leq x_{i+1}\right.$, $\left.\ldots, X_{d} \leq x_{d}\right\} \uparrow\left\{X_{i} \leq x\right\}$ when $x_{j} \rightarrow+\infty, j \neq i$, and by considering the monotone continuity of probability functions (Karr, 1993, p. 53).

Definition 2.56 - Discrete random vector (Karr, 1993, pp. 53-54)
The random vector $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ is said to be discrete if $X_{1}, \ldots, X_{d}$ are discrete r.v. i.e. if there is a countable set $\mathcal{C} \subset \mathbb{R}^{d}$ such that $P(\{\underline{X} \in \mathcal{C}\})=1$.
Definition 2.57 - Absolutely continuous random vector (Karr, 1993, pp. 53-54) The random vector $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ is absolutely continuous if there is a non-negative function $f_{\underline{X}}: \mathbb{R}^{d} \rightarrow \mathbb{R}_{0}^{+}$such that
$$
F_{\underline{X}}(\underline{x})=\int_{-\infty}^{x_{1}} \cdots \int_{-\infty}^{x_{d}} f_{\underline{X}}\left(s_{1}, \ldots, s_{d}\right) d s_{d} \ldots d s_{1}
$$
for every $\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathbb{R}^{d}$. $f_{\underline{X}}$ is called the joint density function (of $X_{1}, \ldots, X_{d}$).
Proposition 2.58 - Absolutely continuous random vector; marginal density function (Karr, 1993, p. 54)
If $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ is absolutely continuous then, for each $i(i=1, \ldots, d), X_{i}$ is absolutely continuous and
$$
f_{X_{i}}(x)=\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} f_{\underline{X}}\left(s_{1}, \ldots, s_{i-1}, x, s_{i+1}, \ldots, s_{d}\right) d s_{d} \ldots d s_{i+1} d s_{i-1} \ldots d s_{1} .
$$
$f_{X_{i}}$ is termed the marginal density function of $X_{i}$.
Remark 2.59 - Absolutely continuous random vector (Karr, 1993, p. 54)
If the random vector is absolutely continuous then any "sub-vector" is absolutely continuous. Moreover, the converse of Proposition 2.58 is not true, that is, the fact that $X_{1}, \ldots, X_{d}$ are absolutely continuous does not imply that $\left(X_{1}, \ldots, X_{d}\right)$ is an absolutely continuous random vector.
### Key r.v. and random vectors and distributions
#### Discrete r.v. and random vectors
Integer-valued r.v. like the Bernoulli, binomial, hypergeometric, geometric, negative binomial and Poisson, and integer-valued random vectors like the multinomial are discrete r.v. and random vectors of great interest.
- Uniform distribution on a finite set
| Notation | $X \sim \operatorname{Uniform}\left(\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}\right)$ |
| :--- | :--- |
| Parameter | $\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}\left(x_{i} \in \mathbb{R}, i=1, \ldots, n\right)$ |
| Range | $\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}$ |
| P.f. | $P(\{X=x\})=\frac{1}{n}, x=x_{1}, x_{2}, \ldots, x_{n}$ |
This simple r.v. has the form $X=\sum_{i=1}^{n} x_{i} \times \mathbf{1}_{A_{i}}$, where $A_{i}=X^{-1}\left(\left\{x_{i}\right\}\right)$.
- Bernoulli distribution
| Notation | $X \sim \operatorname{Bernoulli}(p)$ |
| :--- | :--- |
| Parameter | $p=P($ success $)(p \in[0,1])$ |
| Range | $\{0,1\}$ |
| P.f. | $P(\{X=x\})=p^{x}(1-p)^{1-x}, x=0,1$ |
A Bernoulli distributed r.v. $X$ is the indicator function of the event $\{X=1\}$.
- Binomial distribution
| Notation | $X \sim \operatorname{Binomial}(n, p)$ |
| :--- | :--- |
| Parameters | $n=$ number of Bernoulli trials $(n \in \mathbb{N})$ |
| | $p=P($ success $)(p \in[0,1])$ |
| Range | $\{0,1, \ldots, n\}$ |
| P.f. | $P(\{X=x\})=\left(\begin{array}{l}n \\ x\end{array}\right) p^{x}(1-p)^{n-x}, x=0,1, \ldots, n$ |
The binomial r.v. results from the sum of $n$ i.i.d. Bernoulli distributed r.v.
- Geometric distribution
| Notation | $X \sim \operatorname{Geometric}(p)$ |
| :--- | :--- |
| Parameter | $p=P($ success $)(p \in[0,1])$ |
| Range | $\mathbb{N}=\{1,2,3, \ldots\}$ |
| P.f. | $P(\{X=x\})=(1-p)^{x-1} p, x=1,2,3, \ldots$ |
This r.v. satisfies the lack of memory property:
$$
P(\{X>k+x\} \mid\{X>k\})=P(\{X>x\}), \forall k, x \in \mathbb{N}
$$
- Negative binomial distribution

| Notation | $X \sim \operatorname{NegativeBinomial}(r, p)$ |
| :--- | :--- |
| Parameters | $r=$ pre-specified number of successes $(r \in \mathbb{N})$ |
| | $p=P($ success $)(p \in[0,1])$ |
| Range | $\{r, r+1, \ldots\}$ |
| P.f. | $P(\{X=x\})=\binom{x-1}{r-1}(1-p)^{x-r} p^{r}, x=r, r+1, \ldots$ |
The negative binomial r.v. results from the sum of $r$ i.i.d. geometrically distributed r.v.
- Hypergeometric distribution

| Notation | $X \sim \operatorname{Hypergeometric}(N, M, n)$ |
| :--- | :--- |
| Parameters | $N=$ population size $(N \in \mathbb{N})$ |
| | $M=$ sub-population size $(M \in \mathbb{N}, M \leq N)$ |
| | $n=$ sample size $(n \in \mathbb{N}, n \leq N)$ |
| Range | $\{\max \{0, n-N+M\}, \ldots, \min \{n, M\}\}$ |
| P.f. | $P(\{X=x\})=\frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}, x=\max \{0, n-N+M\}, \ldots, \min \{n, M\}$ |
Note that the sample is collected without replacement. If it were collected with replacement, then $X \sim \operatorname{Binomial}\left(n, \frac{M}{N}\right)$.
- Poisson distribution
| Notation | $X \sim \operatorname{Poisson}(\lambda)$ |
| :--- | :--- |
| Parameter | $\lambda\left(\lambda \in \mathbb{R}^{+}\right)$ |
| Range | $\mathbb{N}_{0}=\{0,1,2,3, \ldots\}$ |
| P.f. | $P(\{X=x\})=e^{-\lambda} \frac{\lambda^{x}}{x !}, x=0,1,2,3, \ldots$ |
The distribution was proposed by Siméon-Denis Poisson (1781-1840) and published, together with his probability theory, in 1838 in his work Recherches sur la probabilité des jugements en matière criminelle et en matière civile (Research on the probability of judgments in criminal and civil matters). The Poisson distribution can be derived as a limiting case of the binomial distribution. ${ }^{2}$
In 1898 Ladislaus Josephovich Bortkiewicz (1868-1931) published a book titled The Law of Small Numbers. In this book he first noted that events with low frequency in a large population follow a Poisson distribution even when the probabilities of the events varied. It was that book that made the Prussian horse-kick data famous. Some historians of mathematics have even argued that the Poisson distribution should have been named the Bortkiewicz distribution. ${ }^{3}$
- Multinomial distribution
In probability theory, the multinomial distribution is a generalization of the binomial distribution when we are dealing not only with two types of events - a success with probability $p$ and a failure with probability $1-p$ - but with $d$ types of events with probabilities $p_{1}, \ldots, p_{d}$ such that $p_{1}, \ldots, p_{d} \geq 0, \sum_{i=1}^{d} p_{i}=1$. $^{4}$
| Notation | $\underline{X}=\left(X_{1}, \ldots, X_{d}\right) \sim$ Multinomial $_{d-1}\left(n,\left(p_{1}, \ldots, p_{d}\right)\right)$ |
| :--- | :--- |
| Parameters | $n=$ number of Bernoulli trials $(n \in \mathbb{N})$ |
| | $\left(p_{1}, \ldots, p_{d}\right)$ where $p_{i}=P($ event of type $i)$ |
| | $\left(p_{1}, \ldots, p_{d} \geq 0, \sum_{i=1}^{d} p_{i}=1\right)$ |
| Range | $\left\{\left(n_{1}, \ldots, n_{d}\right) \in \mathbb{N}_{0}^{d}: \sum_{i=1}^{d} n_{i}=n\right\}$ |
| P.f. | $P\left(\left\{X_{1}=n_{1}, \ldots, X_{d}=n_{d}\right\}\right)=\frac{n !}{\prod_{i=1}^{d} n_{i} !} \prod_{i=1}^{d} p_{i}^{n_{i}}$, |
| | $\left(n_{1}, \ldots, n_{d}\right) \in \mathbb{N}_{0}^{d}: \sum_{i=1}^{d} n_{i}=n$ |
${ }^{2}$ http://en.wikipedia.org/wiki/Poisson_distribution
${ }^{3}$ http://en.wikipedia.org/wiki/Ladislaus_Bortkiewicz
${ }^{4}$ http://en.wikipedia.org/wiki/Multinomial_distribution

Exercise 2.60 - Binomial r.v. (Grimmett and Stirzaker, 2001, p. 25)
DNA fingerprinting - In a certain style of detective fiction, the sleuth is required to declare "the criminal has the unusual characteristics...; find this person and you have your man". Assume that any given individual has these unusual characteristics with probability $10^{-7}$ (independently of all other individuals), and the city in question has $10^{7}$ inhabitants.
Given that the police inspector finds such a person, what is the probability that there is at least one other?
Exercise 2.61 - Binomial r.v. (Righter, 200-)
A student (Fred) is getting ready to take an important oral exam and is concerned about the possibility of having an on day or an off day. He figures that if he has an on day, then each of his examiners will pass him independently of each other, with probability 0.8 , whereas, if he has an off day, this probability will be reduced to 0.4 .
Suppose the student will pass if a majority of examiners pass him. If the student feels that he is twice as likely to have an off day as he is to have an on day, should he request an examination with 3 examiners or with 5 examiners?
Exercise 2.62 - Geometric r.v.
Prove that the distribution function of $X \sim \operatorname{Geometric}(p)$ is given by
$$
F_{X}(x)=P(X \leq x)= \begin{cases}0, & x<1 \\ \sum_{i=1}^{[x]}(1-p)^{i-1} p=1-(1-p)^{[x]}, & x \geq 1\end{cases}
$$
where $[x]$ represents the integer part of $x$.
Exercise 2.63 - Hypergeometric r.v. (Righter, 200-)
From a mix of 50 widgets from supplier 1 and 100 from supplier 2, 10 widgets are randomly selected and shipped to a customer.
What is the probability that all 10 came from supplier 1 ?
Exercise 2.64 - Poisson r.v. (Grimmett and Stirzaker, 2001, p. 19)
In your pocket is a random number $N$ of coins, where $N \sim \operatorname{Poisson}(\lambda)$. You toss each coin once, with heads showing with probability $p$ each time.
Show that the total number of heads has a Poisson distribution with parameter $\lambda p$.

Exercise 2.65 - Negative hypergeometric r.v. (Grimmett and Stirzaker, 2001, p. 19)
Capture-recapture - A population of $N$ animals has had a number $M$ of its members captured, marked, and released. Let $X$ be the number of animals it is necessary to recapture (without re-release) in order to obtain $r$ marked animals.
Show that
$$
P(\{X=x\})=\frac{\frac{M}{N}\binom{M-1}{r-1}\binom{N-M}{x-r}}{\binom{N-1}{x-1}} .
$$
Exercise 2.66 - Discrete random vectors
Prove that if
- $Y \sim \operatorname{Poisson}(\lambda)$
- $\left(X_{1}, \ldots, X_{d}\right) \mid\{Y=n\} \sim \operatorname{Multinomial}_{d-1}\left(n,\left(p_{1}, \ldots, p_{d}\right)\right)$
then $X_{i} \sim \operatorname{Poisson}\left(\lambda p_{i}\right), i=1, \ldots, d$.
Exercise 2.67 - Relating the p.f. of the negative binomial and binomial r.v. Let $X \sim \operatorname{NegativeBinomial}(r, p)$ and $Y \sim \operatorname{Binomial}(x-1, p)$. Prove that, for $x=r, r+1, r+2, \ldots$ and $r=1,2,3, \ldots$, we get
$$
\begin{aligned}
P(X=x) & =p \times P(Y=r-1) \\
& =p \times\left[F_{\text {Binomial }(x-1, p)}(r-1)-F_{\text {Binomial }(x-1, p)}(r-2)\right] .
\end{aligned}
$$
Exercise 2.68 - Relating the d.f. of the negative binomial and binomial r.v. Let $X \sim \operatorname{NegativeBinomial}(r, p)$, $Y \sim \operatorname{Binomial}(x, p)$ and $Z=x-Y \sim \operatorname{Binomial}(x, 1-p)$. Prove that, for $x=r, r+1, r+2, \ldots$ and $r=1,2,3, \ldots$, we have
$$
\begin{aligned}
F_{\text {NegativeBinomial }(r, p)}(x) & =P(X \leq x) \\
& =P(Y \geq r) \\
& =1-F_{\text {Binomial }(x, p)}(r-1) \\
& =P(Z \leq x-r) \\
& =F_{\text {Binomial }(x, 1-p)}(x-r) .
\end{aligned}
$$
#### Absolutely continuous r.v. and random vectors
- Uniform distribution on the interval $[a, b]$
| Notation | $X \sim \operatorname{Uniform}(a, b)$ |
| :--- | :--- |
| Parameters | $a=$ minimum value $(a \in \mathbb{R})$ |
| | $b=$ maximum value $(b \in \mathbb{R}, a<b)$ |
| Range | $[a, b]$ |
| P.d.f. | $f_{X}(x)=\frac{1}{b-a}, a \leq x \leq b$ |
Let $X$ be an absolutely continuous r.v. with d.f. $F_{X}(x)$. Then $Y=F_{X}(X) \sim \operatorname{Uniform}(0,1)$ (the probability integral transform).
- Beta distribution
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval $[0,1]$ parameterized by two positive shape parameters, typically denoted by $\alpha$ and $\beta$. In Bayesian statistics, it can be seen as the posterior distribution of the parameter $p$ of a binomial distribution, if the prior distribution of $p$ was uniform. It is also used in information theory, particularly for the information theoretic performance analysis for a communication system.
| Notation | $X \sim \operatorname{Beta}(\alpha, \beta)$ |
| :--- | :--- |
| Parameters | $\alpha\left(\alpha \in \mathbb{R}^{+}\right)$ |
| | $\beta\left(\beta \in \mathbb{R}^{+}\right)$ |
| Range | $[0,1]$ |
| P.d.f. | $f_{X}(x)=\frac{1}{B(\alpha, \beta)} x^{\alpha-1}(1-x)^{\beta-1}, 0 \leq x \leq 1$ |
where
$$
B(\alpha, \beta)=\int_{0}^{1} x^{\alpha-1}(1-x)^{\beta-1} d x
$$
represents the beta function. Note that
$$
B(\alpha, \beta)=\frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha+\beta)},
$$
where
$$
\Gamma(\alpha)=\int_{0}^{+\infty} y^{\alpha-1} e^{-y} d y
$$
is Euler's gamma function.
The uniform distribution on $[0,1]$ is a particular case of the beta distribution, with $\alpha=\beta=1$. Moreover, the beta distribution can be generalized to the interval $[a, b]$:
$$
f_{Y}(y)=\frac{1}{B(\alpha, \beta)} \frac{(y-a)^{\alpha-1}(b-y)^{\beta-1}}{(b-a)^{\alpha+\beta-1}}, a \leq y \leq b .
$$
The p.d.f. of this distribution can take various forms on account of the shape parameters $\alpha$ and $\beta$, as illustrated by the following table:
| Parameters | Shape of the beta p.d.f. |
| :--- | :--- |
| $\alpha, \beta>1$ | Unique mode at $x=\frac{\alpha-1}{\alpha+\beta-2}$ |
| $\alpha, \beta<1$ | Unique anti-mode at $x=\frac{\alpha-1}{\alpha+\beta-2}$ ($U$-shape) |
| $(\alpha-1)(\beta-1) \leq 0$ | $J$-shape |
| $\alpha=\beta$ | Symmetric around $1 / 2$ (e.g. constant or parabolic) |
| $\alpha<\beta$ | Positively asymmetric |
| $\alpha>\beta$ | Negatively asymmetric |
Exercise 2.69 - Relating the Beta and Binomial distributions
(a) Prove that the d.f. of the r.v. $X \sim \operatorname{Beta}(\alpha, \beta)$ can be written in terms of the d.f. of Binomial r.v. when $\alpha$ and $\beta$ are integer-valued:
$$
F_{\text {Beta }(\alpha, \beta)}(x)=1-F_{\text {Binomial }(\alpha+\beta-1, x)}(\alpha-1) .
$$
(b) Prove that the p.d.f. of the r.v. $X \sim \operatorname{Beta}(\alpha, \beta)$ can be rewritten in terms of the p.f. of the r.v. $Y \sim \operatorname{Binomial}(\alpha+\beta-2, x)$, when $\alpha$ and $\beta$ are integer-valued:
$$
\begin{aligned}
f_{\text {Beta }(\alpha, \beta)}(x)= & (\alpha+\beta-1) \times P(Y=\alpha-1) \\
= & (\alpha+\beta-1) \times\left[F_{\operatorname{Binomial}(\alpha+\beta-2, x)}(\alpha-1)\right. \\
& \left.\quad-F_{\operatorname{Binomial}(\alpha+\beta-2, x)}(\alpha-2)\right] .
\end{aligned}
$$
- Normal distribution
The normal distribution or Gaussian distribution is a continuous probability distribution that describes data that cluster around a mean or average. The graph of the associated probability density function is bell-shaped, with a peak at the mean, and is known as the Gaussian function or bell curve. The Gaussian distribution is one of many things named after Carl Friedrich Gauss, who used it to analyze astronomical data and determined the formula for its probability density function. However, Gauss was not the first to study this distribution or the formula for its density function: that had been done earlier by Abraham de Moivre.
| Notation | $X \sim \operatorname{Normal}\left(\mu, \sigma^{2}\right)$ |
| :--- | :--- |
| Parameters | $\mu(\mu \in \mathbb{R})$ |
| | $\sigma^{2}\left(\sigma^{2} \in \mathbb{R}^{+}\right)$ |
| Range | $\mathbb{R}$ |
| P.d.f. | $f_{X}(x)=\frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\mu)^{2}}{2 \sigma^{2}}},-\infty<x<+\infty$ |
The normal distribution can be used to describe, at least approximately, any variable that tends to cluster around the mean. For example, the heights of adult males in the United States are roughly normally distributed, with a mean of about $1.8 \mathrm{~m}$. Most men have a height close to the mean, though a small number of outliers have a height significantly above or below the mean. A histogram of male heights will appear similar to a bell curve, with the correspondence becoming closer if more data are used. (http://en.wikipedia.org/wiki/Normal_distribution).
Standard normal distribution - Let $X \sim \operatorname{Normal}\left(\mu, \sigma^{2}\right)$. Then the r.v. $Z=$ $\frac{X-E(X)}{\sqrt{V(X)}}=\frac{X-\mu}{\sigma}$ is said to have a standard normal distribution, i.e. $Z \sim \operatorname{Normal}(0,1)$. Moreover, $Z$ has d.f. given by
$$
F_{Z}(z)=P(Z \leq z)=\int_{-\infty}^{z} \frac{1}{\sqrt{2 \pi}} e^{-\frac{t^{2}}{2}} d t=\Phi(z)
$$
and
$$
\begin{aligned}
F_{X}(x) & =P(X \leq x) \\
& =P\left(Z=\frac{X-\mu}{\sigma} \leq \frac{x-\mu}{\sigma}\right) \\
& =\Phi\left(\frac{x-\mu}{\sigma}\right) .
\end{aligned}
$$
- Exponential distribution
The exponential distributions are a class of continuous probability distributions. They tend to be used to describe the times between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate (http://en.wikipedia.org/wiki/Exponential_distribution).
| Notation | $X \sim \operatorname{Exponential}(\lambda)$ |
| :--- | :--- |
| Parameter | $\lambda=$ inverse of the scale parameter $\left(\lambda \in \mathbb{R}^{+}\right)$ |
| Range | $\mathbb{R}_{0}^{+}=[0,+\infty)$ |
| P.d.f. | $f_{X}(x)=\lambda e^{-\lambda x}, x \geq 0$ |
Consider $X \sim \operatorname{Exponential}(\lambda)$. Then
$$
P(X>t+x \mid X>t)=P(X>x), \forall t, x \in \mathbb{R}_{0}^{+} .
$$
Equivalently,
$$
(X-t \mid X>t) \sim \operatorname{Exponential}(\lambda), \forall t \in \mathbb{R}_{0}^{+} .
$$
This property is referred to as lack of memory: no matter how old your equipment is, its remaining life has the same distribution as that of a new one.
The exponential (resp. geometric) distribution is the only absolutely continuous (resp. discrete) distribution satisfying this property.
Poisson process - We can relate exponential and Poisson r.v. as follows. Let:
- $X$ be the time between two consecutive events;
- $N_{x}$ be the number of times the event has occurred in the interval $(0, x]$. Then
$$
N_{x} \sim \operatorname{Poisson}(\lambda \times x) \Leftrightarrow X \sim \operatorname{Exponential}(\lambda)
$$
and the collection of r.v. $\left\{N_{x}: x>0\right\}$ is said to be a Poisson process with rate $\lambda$.
- Gamma distribution
The gamma distribution is frequently a probability model for waiting times; for instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution (http://en.wikipedia.org/wiki/Gamma_distribution).
| Notation | $X \sim \operatorname{Gamma}(\alpha, \beta)$ |
| :--- | :--- |
| Parameters | $\alpha=$ shape parameter $\left(\alpha \in \mathbb{R}^{+}\right)$ |
| | $\beta=$ inverse of the scale parameter $\left(\beta \in \mathbb{R}^{+}\right)$ |
| Range | $\mathbb{R}_{0}^{+}=[0,+\infty)$ |
| P.d.f. | $f_{X}(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}, x \geq 0$ |
Special cases:

- Exponential ($\alpha=1$), which has the lack of memory property, just as the geometric distribution does in the discrete case;
- Erlang ($\alpha \in \mathbb{N}$); ${ }^{5}$
- Chi-square with $n$ degrees of freedom ($\alpha=n / 2, \beta=1 / 2$).
This distribution has a shape parameter $\alpha$, so the sheer variety of forms of the gamma p.d.f., summarized in the following table, comes as no surprise.
| Parameters | Shape of the gamma p.d.f. |
| :--- | :--- |
| $\alpha<1$ | Unique supremum at $x=0$ |
| $\alpha=1$ | Unique mode at $x=0$ |
| $\alpha>1$ | Unique mode at $x=\frac{\alpha-1}{\beta}$ and positively asymmetric |
The gamma distribution stands in the same relation to the exponential as the negative binomial does to the geometric: sums of i.i.d. exponential r.v. have gamma distributions. $\chi^{2}$ distributions result from sums of squares of independent standard normal r.v.
${ }^{5}$ The Erlang distribution was developed by Agner Krarup Erlang (1878-1929) to examine the number of telephone calls which might be made at the same time to the operators of the switching stations. This work on telephone traffic engineering has been expanded to consider waiting times in queueing systems in general. The distribution is now used in the fields of stochastic processes and of biomathematics (http://en.wikipedia.org/wiki/Erlang_distribution)
It is possible to relate the d.f. of $X \sim \operatorname{Erlang}(n, \beta)$ with the d.f. of a Poisson r.v.:
$$
\begin{aligned}
F_{\operatorname{Erlang}(n, \beta)}(x) & =\sum_{i=n}^{\infty} e^{-\beta x}(\beta x)^{i} / i ! \\
& =1-F_{\text {Poisson }(\beta x)}(n-1), x>0, n \in \mathbb{N} .
\end{aligned}
$$
- $d$-dimensional uniform distribution
| Notation | $\underline{X} \sim \operatorname{Uniform}\left([0,1]^{d}\right)$ |
| :--- | :--- |
| Range | $[0,1]^{d}$ |
| P.d.f. | $f_{\underline{X}}(\underline{x})=1, \underline{x} \in[0,1]^{d}$ |
- Bivariate STANDARD normal distribution
$$
\begin{array}{ll}
\hline \text { Notation } & \underline{X} \sim \text { Normal }\left(\left[\begin{array}{l}
0 \\
0
\end{array}\right],\left[\begin{array}{ll}
1 & \rho \\
\rho & 1
\end{array}\right]\right) \\
\text { Parameter } & \rho=\text { correlation between } X_{1} \text { and } X_{2}(-1 \leq \rho \leq 1) \\
\text { Range } & \mathbb{R}^{2} \\
\text { P.d.f. } & f_{\underline{X}}(\underline{x})=f_{\left(X_{1}, X_{2}\right)}\left(x_{1}, x_{2}\right)=\frac{1}{2 \pi \sqrt{1-\rho^{2}}} \exp \left(-\frac{1}{2} \frac{x_{1}^{2}-2 \rho x_{1} x_{2}+x_{2}^{2}}{1-\rho^{2}}\right), \underline{x} \in \mathbb{R}^{2} \\
\hline
\end{array}
$$
The graphical representation of the joint density of a random vector with a bivariate standard normal distribution depends on the parameter $\rho$; the contour plots of the joint p.d.f. behave as follows:

| Case | Contour plot of the joint p.d.f. |
| :--- | :--- |
| $\rho=0$ | Circumferences centered at $(0,0)$ |
| $\rho<0$ | Ellipses centered at $(0,0)$ and tilted with respect to the axes, suggesting that $X_{2}$ decreases when $X_{1}$ increases |
| $\rho>0$ | Ellipses centered at $(0,0)$ and tilted with respect to the axes, suggesting that $X_{2}$ increases when $X_{1}$ increases |
Both components of $\underline{X}=\left(X_{1}, X_{2}\right)$ have standard normal marginal densities and are independent iff $\rho=0$.
### Transformation theory
#### Transformations of r.v., general case
Motivation 2.70 - Transformations of r.v., general case (Karr, 1993, p. 60) Let:
- $X$ be a r.v. with d.f. $F_{X}$;
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a Borel measurable function.
Then we know that $Y=g(X)$ is also a r.v. But this is manifestly not enough: we wish to know
- how does the d.f. of $Y$ relate to that of $X$?
This question admits an obvious answer when $g$ is invertible and in a few other cases described below.
Proposition 2.71 - D.f. of a transformation of a r.v., general case (Rohatgi, 1976, p. 68; Murteira, 1979, p. 121)
Let:
- $X$ be a r.v. with d.f. $F_{X}$;
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a Borel measurable function;
- $g^{-1}((-\infty, y])=\{x \in \mathbb{R}: g(x) \leq y\}$ be the inverse image of the Borel set $(-\infty, y]$ under $g$.
Then
$$
\begin{aligned}
F_{Y}(y) & =P(\{Y \leq y\}) \\
& =P\left(\left\{X \in g^{-1}((-\infty, y])\right\}\right) .
\end{aligned}
$$
Exercise 2.72 - D.f. of a transformation of a r.v., general case
Prove Proposition 2.71 (Rohatgi, 1976, p. 68).
Note that if $g$ is a Borel measurable function then
$$
g^{-1}(B) \in \mathcal{B}(\mathbb{R}), \forall B=(-\infty, y] \in \mathcal{B}(\mathbb{R})
$$
Thus, we are able to write
$$
P(\{Y \in B\})=P(\{g(X) \in B\})=P\left(\left\{X \in g^{-1}(B)\right\}\right) .
$$
## Remark 2.73 - D.f. of a transformation of a r.v., general case
Proposition 2.71 relates the d.f. of $Y$ to that of $X$.
The inverse image $g^{-1}((-\infty, y])$ is a Borel set and tends to be a "reasonable" set: a real interval or a union of real intervals.
Exercise 2.74 - D.f. of a transformation of a r.v., general case (Karr, 1993, p. 70, Exercise 2.20(a))
Let $X$ be a r.v. and $Y=X^{2}$. Prove that
$$
F_{Y}(y)=F_{X}(\sqrt{y})-F_{X}\left[(-\sqrt{y})^{-}\right]
$$
for $y \geq 0$.
Exercise 2.75 - D.f. of a transformation of a r.v., general case (Rohatgi, 1976, p. 68)
Let $X$ be a r.v. with d.f. $F_{X}$. Derive the d.f. of the following r.v.:
(a) $|X|$
(b) $a X+b$
(c) $e^{X}$.
Exercise 2.76 - D.f. of a transformation of a r.v., absolutely continuous case The electrical resistance $^{6}(X)$ of an object and its electrical conductance ${ }^{7}(Y)$ are related as follows: $Y=X^{-1}$.
Assuming that $X \sim \operatorname{Uniform}(900 \text{ ohm}, 1100 \text{ ohm})$:
(a) Identify the range of values of the r.v. $Y$.
(b) Derive the survival function of $Y, P(Y>y)$, and calculate $P\left(Y>10^{-3} \mathrm{mho}\right)$.
${ }^{6}$ The electrical resistance of an object is a measure of its opposition to the passage of a steady electric current. The SI unit of electrical resistance is the ohm (http://en.wikipedia.org/wiki/Electrical_resistance).
${ }^{7}$ Electrical conductance is a measure of how easily electricity flows along a certain path through an electrical element. The SI derived unit of conductance is the siemens (also called the mho, because it is the reciprocal of electrical resistance, measured in ohms). Oliver Heaviside coined the term in September 1885 (http://en.wikipedia.org/wiki/Electrical_conductance).

Exercise 2.77 - D.f. of a transformation of a r.v., absolutely continuous case Let $X \sim \operatorname{Uniform}(0,2 \pi)$ and $Y=\sin X$. Prove that
$$
F_{Y}(y)= \begin{cases}0, & y<-1 \\ \frac{1}{2}+\frac{\arcsin y}{\pi}, & -1 \leq y \leq 1 \\ 1, & y>1\end{cases}
$$
#### Transformations of discrete r.v.
Proposition 2.78 - P.f. of a one-to-one transformation of a discrete r.v. (Rohatgi, 1976, p. 69)
Let:
- $X$ be a discrete r.v. with p.f. $P(\{X=x\})$;
- $\mathcal{R}_{X}$ be a countable set such that $P\left(\left\{X \in \mathcal{R}_{X}\right\}\right)=1$ and $P(\{X=x\})>0$, $\forall x \in \mathcal{R}_{X}$
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a one-to-one Borel measurable function that transforms $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$.
Then the inverse map, $g^{-1}$, is a single-valued function of $y$ and
$$
P(\{Y=y\})= \begin{cases}P\left(\left\{X=g^{-1}(y)\right\}\right), & y \in \mathcal{R}_{Y} \\ 0, & \text { otherwise }\end{cases}
$$
Exercise 2.79 - P.f. of a one-to-one transformation of a discrete r.v. (Rohatgi, 1976, p. 69)
Let $X \sim \operatorname{Poisson}(\lambda)$. Obtain the p.f. of $Y=X^{2}+3$.
Exercise 2.80 - P.f. of a one-to-one transformation of a discrete r.v. Let $X \sim \operatorname{Binomial}(n, p)$ and $Y=n-X$. Prove that:
- $Y \sim \operatorname{Binomial}(n, 1-p)$;
- $F_{Y}(y)=1-F_{X}(n-y-1), y=0,1, \ldots, n$.
Remark 2.81 - P.f. of a transformation of a discrete r.v. (Rohatgi, 1976, p. 69) Actually the restriction of a single-valued inverse on $g$ is not necessary. If $g$ has a finite (or even a countable) number of inverses for each $y$, from the countable additivity property of probability functions we can obtain the p.f. of the r.v. $Y=g(X)$.

Proposition 2.82 - P.f. of a transformation of a discrete r.v. (Murteira, 1979, p. 122)
Let:
- $X$ be a discrete r.v. with p.f. $P(\{X=x\})$;
- $\mathcal{R}_{X}$ be a countable set such that $P\left(\left\{X \in \mathcal{R}_{X}\right\}\right)=1$ and $P(\{X=x\})>0$, $\forall x \in \mathcal{R}_{X}$
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a Borel measurable function that transforms $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$;
- $\mathcal{A}_{y}=\left\{x \in \mathcal{R}_{X}: g(x)=y\right\}$ be a non-empty set, for $y \in \mathcal{R}_{Y}$.
Then
$$
\begin{aligned}
P(\{Y=y\}) & =P\left(\left\{X \in \mathcal{A}_{y}\right\}\right) \\
& =\sum_{x \in \mathcal{A}_{y}} P(\{X=x\}),
\end{aligned}
$$
for $y \in \mathcal{R}_{Y}$.
Exercise 2.83 - P.f. of a transformation of a discrete r.v. (Rohatgi, 1976, pp. 69-70)
Let $X$ be a discrete r.v. with p.f.
$$
P(\{X=x\})= \begin{cases}\frac{1}{5}, & x=-2 \\ \frac{1}{6}, & x=-1 \\ \frac{1}{5}, & x=0 \\ \frac{1}{15}, & x=1 \\ \frac{11}{30}, & x=2 \\ 0, & \text { otherwise }\end{cases}
$$
Derive the p.f. of $Y=X^{2}$.
#### Transformations of absolutely continuous r.v.
Proposition 2.84 - D.f. of a strictly monotonic transformation of an absolutely continuous r.v. (Karr, 1993, pp. 60 and 68) Let:
- $X$ be an absolutely continuous r.v. with d.f. $F_{X}$ and p.d.f. $f_{X}$;
- $\mathcal{R}_{X}$ be the range of the r.v. $X$, i.e. $\mathcal{R}_{X}=\left\{x \in \mathbb{R}: f_{X}(x)>0\right\}$;
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a continuous, strictly increasing, Borel measurable function that transforms $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$
- $g^{-1}$ be the pointwise inverse of $g$.
Then
$$
F_{Y}(y)=F_{X}\left[g^{-1}(y)\right]
$$
for $y \in \mathcal{R}_{Y}$. Similarly, if
- $g$ is a continuous, strictly decreasing, Borel measurable function
then
$$
F_{Y}(y)=1-F_{X}\left[g^{-1}(y)\right]
$$
for $y \in \mathcal{R}_{Y}$.
Exercise 2.85 - D.f. of a strictly monotonic transformation of an absolutely continuous r.v.
Prove Proposition 2.84 (Karr, 1993, p. 60).
Exercise 2.86 - D.f. of a strictly monotonic transformation of an absolutely continuous r.v.
Let $X \sim \operatorname{Normal}(0,1)$. Derive the d.f. of
(a) $Y=e^{X}$
(b) $Y=\mu+\sigma X$, where $\mu \in \mathbb{R}$ and $\sigma \in \mathbb{R}^{+}$
(Karr, 1993, p. 60).

Remark 2.87 - Transformations of absolutely continuous and discrete r.v. (Karr, 1993, p. 61)

In general, $Y=g(X)$ need not be absolutely continuous even when $X$ is, as shown in the next exercise, while if $X$ is a discrete r.v. then so is $Y=g(X)$ regardless of the Borel measurable function $g$.
Exercise 2.88 - A mixed r.v. as a transformation of an absolutely continuous r.v.
Let $X \sim \operatorname{Uniform}(-1,1)$. Prove that $Y=X^{+}=\max \{0, X\}$ is a mixed r.v. whose d.f. is given by
$$
F_{Y}(y)= \begin{cases}0, & y<0 \\ \frac{1}{2}, & y=0 \\ \frac{1}{2}+\frac{y}{2}, & 0<y \leq 1 \\ 1, & y>1\end{cases}
$$
(Rohatgi, 1976, p. 70).
Exercise 2.88 shows that we need some conditions on $g$ to ensure that $Y=g(X)$ is also an absolutely continuous r.v. This will be the case when $g$ is a continuous monotonic function.
Theorem 2.89 - P.d.f. of a strictly monotonic transformation of an absolutely continuous r.v. (Rohatgi, 1976, p. 70; Karr, 1993, p. 61)
Suppose that:
- $X$ is an absolutely continuous r.v. with p.d.f. $f_{X}$;
- there is an open subset $\mathcal{R}_{X} \subset \mathbb{R}$ such that $P\left(\left\{X \in \mathcal{R}_{X}\right\}\right)=1$;
- $Y=g(X)$ is a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a continuously differentiable, Borel measurable function such that either $\frac{d g(x)}{d x}>0, \forall x \in \mathcal{R}_{X}$, or $\frac{d g(x)}{d x}<0, \forall x \in \mathcal{R}_{X}{ }^{8}$
- $g$ transforms $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$;
- $g^{-1}$ represents the pointwise inverse of $g$.
${ }^{8}$ This implies that $\frac{d g(x)}{d x} \neq 0, \forall x \in \mathcal{R}_{X}$.

Then $Y=g(X)$ is an absolutely continuous r.v. with p.d.f. given by
$$
f_{Y}(y)=f_{X}\left[g^{-1}(y)\right] \times\left|\frac{d g^{-1}(y)}{d y}\right|,
$$
for $y \in \mathcal{R}_{Y}$.
Exercise 2.90 - P.d.f. of a strictly monotonic transformation of an absolutely continuous r.v.
Prove Theorem 2.89 by considering the case $\frac{d g(x)}{d x}>0, \forall x \in \mathcal{R}_{X}$, applying Proposition 2.84 to derive the d.f. of $Y=g(X)$, and differentiating it to obtain the p.d.f. of $Y$ (Rohatgi, 1976, p. 70).
Remark 2.91 - P.d.f. of a strictly monotonic transformation of an absolutely continuous r.v. (Rohatgi, 1976, p. 71)
The key to the computation of the induced d.f. of $Y=g(X)$ from the d.f. of $X$ is $P(\{Y \leq y\})=P\left(\left\{X \in g^{-1}((-\infty, y])\right\}\right)$. If the conditions of Theorem 2.89 are satisfied, we are able to identify the set $\left\{X \in g^{-1}((-\infty, y])\right\}$ as $\left\{X \leq g^{-1}(y)\right\}$ or $\left\{X \geq g^{-1}(y)\right\}$, according to whether $g$ is strictly increasing or strictly decreasing.
Exercise 2.92 - P.d.f. of a strictly monotonic transformation of an absolutely continuous r.v.
Let $X \sim \operatorname{Normal}(0,1)$. Identify the p.d.f. and the distribution of
(a) $Y=e^{X}$
(b) $Y=\mu+\sigma X$, where $\mu \in \mathbb{R}$ and $\sigma \in \mathbb{R}^{+}$
(Karr, 1993, p. 61).
Corollary 2.93 - P.d.f. of a strictly monotonic transformation of an absolutely continuous r.v. (Rohatgi, 1976, p. 71)
Under the conditions of Theorem 2.89, and by noting that
$$
\frac{d g^{-1}(y)}{d y}=\left.\frac{1}{\frac{d g(x)}{d x}}\right|_{x=g^{-1}(y)},
$$
we conclude that the p.d.f. of $Y=g(X)$ can be rewritten as follows:
$$
f_{Y}(y)=\left.\frac{f_{X}(x)}{\left|\frac{d g(x)}{d x}\right|}\right|_{x=g^{-1}(y)}
$$
$\forall y \in \mathcal{R}_{Y}$.

Remark 2.94 - P.d.f. of a non monotonic transformation of an absolutely continuous r.v. (Rohatgi, 1976, p. 71)

In practice Theorem 2.89 is quite useful, but whenever its conditions are violated we should return to $P(\{Y \leq y\})=P\left(\left\{X \in g^{-1}((-\infty, y])\right\}\right)$ to obtain $F_{Y}(y)$ and then differentiate this d.f. to derive the p.d.f. of the transformation $Y$. This is the case in the next two exercises.
Exercise 2.95 - P.d.f. of a non monotonic transformation of an absolutely continuous r.v.
Let $X \sim \operatorname{Normal}(0,1)$ and $Y=g(X)=X^{2}$. Prove that $Y \sim \chi_{(1)}^{2}$ by noting that
$$
\begin{aligned}
F_{Y}(y) & =F_{X}(\sqrt{y})-F_{X}(-\sqrt{y}), y>0 \\
f_{Y}(y) & =\frac{d F_{Y}(y)}{d y} \\
& = \begin{cases}\frac{1}{2 \sqrt{y}} \times\left[f_{X}(\sqrt{y})+f_{X}(-\sqrt{y})\right], & y \geq 0 \\
0, & y<0\end{cases}
\end{aligned}
$$
(Rohatgi, 1976, p. 72).
Exercise 2.96 - P.d.f. of a non monotonic transformation of an absolutely continuous r.v.
Let $X$ be an absolutely continuous r.v. with p.d.f.
$$
f_{X}(x)= \begin{cases}\frac{2 x}{\pi^{2}}, & 0<x<\pi \\ 0, & \text { otherwise }\end{cases}
$$
Prove that $Y=\sin X$ has p.d.f. given by
$$
f_{Y}(y)= \begin{cases}\frac{2}{\pi \sqrt{1-y^{2}}}, & 0<y<1 \\ 0, & \text { otherwise }\end{cases}
$$
(Rohatgi, 1976, p. 73).
Motivation 2.97 - P.d.f. of a sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v. (Rohatgi, 1976, pp. 73-74)

In the last two exercises the function $y=g(x)$ can be written as the sum of two monotonic restrictions of $g$ on two disjoint intervals. Therefore we can apply Theorem 2.89 to each of these monotonic summands.

In fact, these two exercises are special cases of the following theorem.

Theorem 2.98 - P.d.f. of a finite sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v. (Rohatgi, 1976, pp. 73-74) Let:
- $X$ be an absolutely continuous r.v. with p.d.f. $f_{X}$;
- $Y=g(X)$ be a transformation of $X$ under $g$, where $g: \mathbb{R} \rightarrow \mathbb{R}$ is a Borel measurable function that transforms $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$.
Moreover, suppose that:
- $g(x)$ is differentiable for all $x \in \mathcal{R}_{X}$;
- $\frac{d g(x)}{d x}$ is continuous and nonzero at all but a finite number of points of $\mathcal{R}_{X}$.
Then, for every real number $y \in \mathcal{R}_{Y}$,
(a) there exists a positive integer $n=n(y)$ and real numbers (inverses) $g_{1}^{-1}(y), \ldots, g_{n}^{-1}(y)$ such that
$$
\left.g(x)\right|_{x=g_{k}^{-1}(y)}=y \quad \text { and }\left.\quad \frac{d g(x)}{d x}\right|_{x=g_{k}^{-1}(y)} \neq 0, \quad k=1, \ldots, n(y),
$$
or
(b) there does not exist any $x$ such that $g(x)=y$ and $\frac{d g(x)}{d x} \neq 0$, in which case we write $n=n(y)=0$.
In addition, $Y=g(X)$ is an absolutely continuous r.v. with p.d.f. given by
$$
f_{Y}(y)= \begin{cases}\sum_{k=1}^{n(y)} f_{X}\left[g_{k}^{-1}(y)\right] \times\left|\frac{d g_{k}^{-1}(y)}{d y}\right|, & n=n(y)>0 \\ 0, & n=n(y)=0\end{cases}
$$
for $y \in \mathcal{R}_{Y}$.
Exercise 2.99 - P.d.f. of a finite sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v.
Let $X \sim \operatorname{Uniform}(-1,1)$. Use Theorem 2.98 to prove that $Y=|X| \sim \operatorname{Uniform}(0,1)$ (Rohatgi, 1976, p. 74).

Exercise 2.100 - P.d.f. of a finite sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v.
Let $X \sim \operatorname{Uniform}(0,2 \pi)$ and $Y=\sin X$. Use Theorem 2.98 to prove that
$$
f_{Y}(y)= \begin{cases}\frac{1}{\pi \sqrt{1-y^{2}}}, & -1<y<1 \\ 0, & \text { otherwise. }\end{cases}
$$
Motivation 2.101 - P.d.f. of a countable sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v.
The formula $P(\{Y \leq y\})=P\left(\left\{X \in g^{-1}((-\infty, y])\right\}\right)$ and the countable additivity of probability functions allow us to compute the p.d.f. of $Y=g(X)$ in some instances even if $g$ has a countable number of inverses.
Theorem 2.102 - P.d.f. of a countable sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v. (Rohatgi, 1976, pp. 74-75)
Let $g$ be a Borel measurable function that maps $\mathcal{R}_{X}$ onto some set $\mathcal{R}_{Y}=g\left(\mathcal{R}_{X}\right)$. Suppose that $\mathcal{R}_{X}$ can be represented as a countable union of disjoint sets $A_{k}, k=1,2, \ldots$ Then $Y=g(X)$ is an absolutely continuous r.v. with d.f. given by
$$
\begin{aligned}
F_{Y}(y) & =P(\{Y \leq y\}) \\
& =P\left(\left\{X \in g^{-1}((-\infty, y])\right\}\right) \\
& =P\left(\left\{X \in \bigcup_{k=1}^{+\infty}\left[g^{-1}((-\infty, y]) \cap A_{k}\right]\right\}\right) \\
& =\sum_{k=1}^{+\infty} P\left(\left\{X \in\left[g^{-1}((-\infty, y]) \cap A_{k}\right]\right\}\right)
\end{aligned}
$$
for $y \in \mathcal{R}_{Y}$.
If the conditions of Theorem 2.89 are satisfied by the restriction of $g$ to each $A_{k}, g_{k}$, we may obtain the p.d.f. of $Y=g(X)$ on differentiating the d.f. of $Y .{ }^{9}$ In this case
$$
f_{Y}(y)=\sum_{k=1}^{+\infty} f_{X}\left[g_{k}^{-1}(y)\right] \times\left|\frac{d g_{k}^{-1}(y)}{d y}\right|
$$
for $y \in \mathcal{R}_{Y}$.
${ }^{9}$ We remind the reader that term-by-term differentiation is permissible if the differentiated series is uniformly convergent.

Exercise 2.103 - P.d.f. of a countable sum of monotonic restrictions of a function $g$ of an absolutely continuous r.v.
Let $X \sim \operatorname{Exponential}(\lambda)$ and $Y=\sin X$. Prove that
$$
\begin{aligned}
& F_{Y}(y)=1+\frac{e^{-\lambda \pi+\lambda \arcsin y}-e^{-\lambda \arcsin y}}{1-e^{-2 \pi \lambda}}, 0<y<1 \\
& f_{Y}(y)= \begin{cases}\frac{\lambda e^{-\lambda \pi}}{\left(1-e^{-2 \lambda \pi}\right) \times \sqrt{1-y^{2}}} \times\left[e^{\lambda \arcsin y}+e^{-\lambda \pi-\lambda \arcsin y}\right], & -1 \leq y<0 \\
\frac{\lambda}{\left(1-e^{-2 \lambda \pi}\right) \times \sqrt{1-y^{2}}} \times\left[e^{-\lambda \arcsin y}+e^{-\lambda \pi+\lambda \arcsin y}\right], & 0 \leq y<1 \\
0, & \text { otherwise }\end{cases}
\end{aligned}
$$
(Rohatgi, 1976, p. 75).
#### Transformations of random vectors, general case
What follows is the analogue of Proposition 2.71 in a multidimensional setting.
Proposition 2.104 - D.f. of a transformation of a random vector, general case Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a random vector with joint d.f. $F_{\underline{X}}$
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{m}\right)=\underline{g}(\underline{X})=\left(g_{1}\left(X_{1}, \ldots, X_{d}\right), \ldots, g_{m}\left(X_{1}, \ldots, X_{d}\right)\right)$ be a transformation of $\underline{X}$ under $\underline{g}$, where $\underline{g}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{m}$ is a Borel measurable function;
- $\underline{g}^{-1}\left(\prod_{i=1}^{m}\left(-\infty, y_{i}\right]\right)=\left\{\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathbb{R}^{d}: g_{1}\left(x_{1}, \ldots, x_{d}\right) \leq y_{1}, \ldots\right.$, $\left.g_{m}\left(x_{1}, \ldots, x_{d}\right) \leq y_{m}\right\}$ be the inverse image of the Borel set $\prod_{i=1}^{m}\left(-\infty, y_{i}\right]$ under $\underline{g}$. ${ }^{10}$
Then
$$
\begin{aligned}
F_{\underline{Y}}(\underline{y}) & =P\left(\left\{Y_{1} \leq y_{1}, \ldots, Y_{m} \leq y_{m}\right\}\right) \\
& =P\left(\left\{\underline{X} \in \underline{g}^{-1}\left(\prod_{i=1}^{m}\left(-\infty, y_{i}\right]\right)\right\}\right) .
\end{aligned}
$$
Exercise 2.105 - D.f. of a transformation of a random vector, general case Let $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be an absolutely continuous random vector such that $X_{i} \stackrel{i n d e p}{\sim} \operatorname{Exponential}\left(\lambda_{i}\right), i=1, \ldots, d$.
Prove that $Y=\min _{i=1, \ldots, d} X_{i} \sim \operatorname{Exponential}\left(\sum_{i=1}^{d} \lambda_{i}\right)$.
${ }^{10}$ Let us remind the reader that since $\underline{g}$ is a Borel measurable function we have $\underline{g}^{-1}(B) \in \mathcal{B}\left(\mathbb{R}^{d}\right), \forall B \in$ $\mathcal{B}\left(\mathbb{R}^{m}\right)$.
#### Transformations of discrete random vectors
Theorem 2.106 - Joint p.f. of a one-to-one transformation of a discrete random vector (Rohatgi, 1976, p. 131)
Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a discrete random vector with joint p.f. $P(\{\underline{X}=\underline{x}\})$;
- $\mathcal{R}_{\underline{X}}$ be a countable set of points such that $P\left(\left\{\underline{X} \in \mathcal{R}_{\underline{X}}\right\}\right)=1$ and $P(\{\underline{X}=\underline{x}\})>0$, $\forall \underline{x} \in \mathcal{R}_{\underline{X}}$;
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)=\underline{g}(\underline{X})=\left(g_{1}\left(X_{1}, \ldots, X_{d}\right), \ldots, g_{d}\left(X_{1}, \ldots, X_{d}\right)\right)$ be a transformation of $\underline{X}$ under $\underline{g}$, where $\underline{g}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ is a one-to-one Borel measurable function that maps $\mathcal{R}_{\underline{X}}$ onto some set $\mathcal{R}_{\underline{Y}} \subset \mathbb{R}^{d}$;
- $\underline{g}^{-1}$ be the inverse mapping such that $\underline{g}^{-1}(\underline{y})=\left(g_{1}^{-1}(\underline{y}), \ldots, g_{d}^{-1}(\underline{y})\right)$.
Then the joint p.f. of $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)$ is given by
$$
\begin{aligned}
P(\{\underline{Y}=\underline{y}\}) & =P\left(\left\{Y_{1}=y_{1}, \ldots, Y_{d}=y_{d}\right\}\right) \\
& =P\left(\left\{X_{1}=g_{1}^{-1}(\underline{y}), \ldots, X_{d}=g_{d}^{-1}(\underline{y})\right\}\right),
\end{aligned}
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{d}\right) \in \mathcal{R}_{\underline{Y}}$.
Remark 2.107 - Joint p.f. of a one-to-one transformation of a discrete random vector (Rohatgi, 1976, pp. 131-132)
The marginal p.f. of any $Y_{j}$ (resp. the joint p.f. of any subcollection of $Y_{1}, \ldots, Y_{d}$, say $\left.\left(Y_{j}\right)_{j \in I \subset\{1, \ldots, d\}}\right)$ is easily computed by summing on the remaining $y_{i}, i \neq j$ (resp. $\left.\left(Y_{i}\right)_{i \notin I}\right)$.
Theorem 2.108 - Joint p.f. of a transformation of a discrete random vector Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a discrete random vector with range $\mathcal{R}_{\underline{X}} \subset \mathbb{R}^{d}$;
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{m}\right)=\underline{g}(\underline{X})=\left(g_{1}\left(X_{1}, \ldots, X_{d}\right), \ldots, g_{m}\left(X_{1}, \ldots, X_{d}\right)\right)$ be a transformation of $\underline{X}$ under $\underline{g}$, where $\underline{g}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{m}$ is a Borel measurable function that maps $\mathcal{R}_{\underline{X}}$ onto some set $\mathcal{R}_{\underline{Y}} \subset \mathbb{R}^{m}$;
- $\mathcal{A}_{y_{1}, \ldots, y_{m}}=\left\{\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathcal{R}_{\underline{X}}: g_{1}\left(x_{1}, \ldots, x_{d}\right)=y_{1}, \ldots, g_{m}\left(x_{1}, \ldots, x_{d}\right)=y_{m}\right\}$.
Then the joint p.f. of $\underline{Y}=\left(Y_{1}, \ldots, Y_{m}\right)$ is given by
$$
\begin{aligned}
P(\{\underline{Y}=\underline{y}\}) & =P\left(\left\{Y_{1}=y_{1}, \ldots, Y_{m}=y_{m}\right\}\right) \\
& =\sum_{\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathcal{A}_{y_{1}, \ldots, y_{m}}} P\left(\left\{X_{1}=x_{1}, \ldots, X_{d}=x_{d}\right\}\right),
\end{aligned}
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{m}\right) \in \mathcal{R}_{\underline{Y}}$.

Exercise 2.109 - Joint p.f. of a transformation of a discrete random vector

Let $\underline{X}=\left(X_{1}, X_{2}\right)$ be a discrete random vector with joint p.f. $P\left(\left\{X_{1}=x_{1}, X_{2}=x_{2}\right\}\right)$ given in the following table:
| $X_{1} \backslash X_{2}$ | $-2$ | $0$ | $2$ |
| :---: | :---: | :---: | :---: |
| $-1$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{12}$ |
| $0$ | $\frac{1}{12}$ | $\frac{1}{12}$ | $0$ |
| $1$ | $\frac{1}{6}$ | $\frac{1}{6}$ | $\frac{1}{12}$ |
Derive the joint p.f. of $Y_{1}=\left|X_{1}\right|$ and $Y_{2}=X_{2}^{2}$.
Theorem 2.110 - P.f. of the sum, difference, product and division of two discrete r.v.
Let:
- $(X, Y)$ be a discrete bidimensional random vector with joint p.f. $P(X=x, Y=y)$;
- $Z=X+Y$
- $U=X-Y$
- $V=X Y$
- $W=X / Y$, provided that $P(\{Y=0\})=0$.
Then
$$
\begin{aligned}
P(Z=z) & =P(X+Y=z) \\
& =\sum_{x} P(X=x, X+Y=z) \\
& =\sum_{x} P(X=x, Y=z-x) \\
& =\sum_{y} P(X+Y=z, Y=y) \\
& =\sum_{y} P(X=z-y, Y=y) \\
P(U=u) & =P(X-Y=u) \\
& =\sum_{x} P(X=x, X-Y=u) \\
& =\sum_{x} P(X=x, Y=x-u)
\end{aligned}
$$
$$
\begin{aligned}
& =\sum_{y} P(X-Y=u, Y=y) \\
& =\sum_{y} P(X=u+y, Y=y)
\end{aligned}
$$
$$
\begin{aligned}
P(V=v) & =P(X Y=v) \\
& =\sum_{x} P(X=x, X Y=v) \\
& =\sum_{x} P(X=x, Y=v / x) \\
& =\sum_{y} P(X Y=v, Y=y) \\
& =\sum_{y} P(X=v / y, Y=y) \\
P(W=w) & =P(X / Y=w) \\
& =\sum_{x} P(X=x, X / Y=w) \\
& =\sum_{x} P(X=x, Y=x / w) \\
& =\sum_{y} P(X / Y=w, Y=y) \\
& =\sum_{y} P(X=w y, Y=y) .
\end{aligned}
$$
Exercise 2.111 - P.f. of the difference of two discrete r.v.
Let $(X, Y)$ be a discrete random vector with joint p.f. $P(X=x, Y=y)$ given in the following table:
| $X \backslash Y$ | $1$ | $2$ | $3$ |
| :---: | :---: | :---: | :---: |
| $1$ | $\frac{1}{12}$ | $\frac{1}{12}$ | $\frac{2}{12}$ |
| $2$ | $\frac{2}{12}$ | $0$ | $0$ |
| $3$ | $\frac{1}{12}$ | $\frac{1}{12}$ | $\frac{4}{12}$ |
(a) Prove that $X$ and $Y$ are identically distributed but are not independent.
(b) Obtain the p.f. of $U=X-Y$
(c) Prove that $U=X-Y$ is not a symmetric r.v., that is, $U$ and $-U$ are not identically distributed.

Corollary 2.112 - P.f. of the sum, difference, product and division of two INDEPENDENT discrete r.v.
Let:
- $X$ and $Y$ be two INDEPENDENT discrete r.v. with joint p.f. $P(X=x, Y=y)=P(X=x) \times P(Y=y), \forall x, y$;
- $Z=X+Y$
- $U=X-Y$
- $V=X Y$
- $W=X / Y$, provided that $P(\{Y=0\})=0$.
Then
$$
\begin{aligned}
P(Z=z) & =P(X+Y=z) \\
& =\sum_{x} P(X=x) \times P(Y=z-x) \\
& =\sum_{y} P(X=z-y) \times P(Y=y) \\
P(U=u) & =P(X-Y=u) \\
& =\sum_{x} P(X=x) \times P(Y=x-u) \\
& =\sum_{y} P(X=u+y) \times P(Y=y) \\
P(V=v) & =P(X Y=v) \\
& =\sum_{x} P(X=x) \times P(Y=v / x) \\
& =\sum_{y} P(X=v / y) \times P(Y=y) \\
P(W=w) & =P(X / Y=w) \\
& =\sum_{x} P(X=x) \times P(Y=x / w) \\
& =\sum_{y} P(X=w y) \times P(Y=y) .
\end{aligned}
$$
Exercise 2.113 - P.f. of the sum of two INDEPENDENT r.v. with three well known discrete distributions
Let $X$ and $Y$ be two independent discrete r.v. Prove that
(a) $X \sim \operatorname{Binomial}\left(n_{X}, p\right) \Perp Y \sim \operatorname{Binomial}\left(n_{Y}, p\right) \Rightarrow(X+Y) \sim \operatorname{Binomial}\left(n_{X}+n_{Y}, p\right)$
(b) $X \sim \operatorname{NegativeBinomial}\left(n_{X}, p\right) \Perp Y \sim \operatorname{NegativeBinomial}\left(n_{Y}, p\right) \Rightarrow(X+Y) \sim$ $\operatorname{NegativeBinomial}\left(n_{X}+n_{Y}, p\right)$
(c) $X \sim \operatorname{Poisson}\left(\lambda_{X}\right) \Perp Y \sim \operatorname{Poisson}\left(\lambda_{Y}\right) \Rightarrow(X+Y) \sim \operatorname{Poisson}\left(\lambda_{X}+\lambda_{Y}\right)$,
i.e. the families of Poisson, Binomial and Negative Binomial distributions are closed under summation of independent members.
Exercise 2.114 - P.f. of the difference of two INDEPENDENT Poisson r.v. Let $X \sim \operatorname{Poisson}\left(\lambda_{X}\right) \Perp Y \sim \operatorname{Poisson}\left(\lambda_{Y}\right)$. Then $(X-Y)$ has p.f. given by
$$
\begin{aligned}
P(X-Y=u) & =\sum_{y=0}^{+\infty} P(X=u+y) \times P(Y=y) \\
& =e^{-\left(\lambda_{X}+\lambda_{Y}\right)} \sum_{y=\max \{0,-u\}}^{+\infty} \frac{\lambda_{X}^{u+y} \lambda_{Y}^{y}}{(u+y) ! y !}, u=\ldots,-1,0,1, \ldots
\end{aligned}
$$
Remark 2.115 — Skellam distribution (http://en.wikipedia.org/wiki/ Skellam_distribution)
The Skellam distribution is the discrete probability distribution of the difference of two correlated or uncorrelated r.v. $X$ and $Y$ having Poisson distributions with parameters $\lambda_{X}$ and $\lambda_{Y}$. It is useful in describing the statistics of the difference of two images with simple photon noise, as well as describing the point spread distribution in certain sports where all scored points are equal, such as baseball, hockey and soccer.
When $\lambda_{X}=\lambda_{Y}=\lambda$ is large and $u$ is of the order of $\sqrt{2 \lambda}$,
$$
P(X-Y=u) \simeq \frac{e^{-\frac{u^{2}}{2 \times 2 \lambda}}}{\sqrt{2 \pi \times 2 \lambda}},
$$
the p.d.f. of a Normal distribution with parameters $\mu=0$ and $\sigma^{2}=2 \lambda$.
Please note that the expression of the p.f. of the Skellam distribution that can be found in http://en.wikipedia.org/wiki/Skellam_distribution is not correct.
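To make the approximation in Remark 2.115 concrete, the sketch below evaluates the (truncated) series of Exercise 2.114 iteratively, using the ratio $\lambda_{X} \lambda_{Y} /((u+y+1)(y+1))$ between consecutive terms to avoid overflow, and compares it with the normal p.d.f. above; $\lambda=30$ and the values of $u$ are arbitrary choices.

```python
# Skellam p.f. via the series of Exercise 2.114 (truncated after n_terms terms)
# versus the normal approximation of Remark 2.115; lam and u are arbitrary.
from math import exp, factorial, pi, sqrt

def skellam_pf(u, lam_x, lam_y, n_terms=500):
    y = max(0, -u)
    term = lam_x ** (u + y) * lam_y ** y / (factorial(u + y) * factorial(y))
    total = 0.0
    for _ in range(n_terms):
        total += term
        term *= lam_x * lam_y / ((u + y + 1) * (y + 1))  # ratio of consecutive terms
        y += 1
    return exp(-(lam_x + lam_y)) * total

lam = 30.0
for u in (0, 5, 8):
    normal_approx = exp(-u ** 2 / (2 * 2 * lam)) / sqrt(2 * pi * 2 * lam)
    print(u, skellam_pf(u, lam, lam), normal_approx)
```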
#### Transformations of absolutely continuous random vectors
Motivation 2.116 - P.d.f. of a transformation of an absolutely continuous random vector (Karr, 1993, p. 62)
Recall that a random vector $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ is absolutely continuous if there is a function $f_{\underline{X}}$ on $\mathbb{R}^{d}$ satisfying
$$
\begin{aligned}
F_{\underline{X}}(\underline{x}) & =F_{X_{1}, \ldots, X_{d}}\left(x_{1}, \ldots, x_{d}\right) \\
& =\int_{-\infty}^{x_{1}} \ldots \int_{-\infty}^{x_{d}} f_{X_{1}, \ldots, X_{d}}\left(s_{1}, \ldots, s_{d}\right) d s_{d} \ldots d s_{1} .
\end{aligned}
$$
Computing the density of $\underline{Y}=\underline{g}(\underline{X})$ requires that $\underline{g}$ be invertible, except for the special case that $X_{1}, \ldots, X_{d}$ are independent (and then only for particular choices of $\underline{g}$).
Theorem 2.117 - P.d.f. of a one-to-one transformation of an absolutely continuous random vector (Rohatgi, 1976, p. 135; Karr, 1993, p. 62) Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be an absolutely continuous random vector with joint p.d.f. $f_{\underline{X}}(\underline{x})$;
- $\mathcal{R}_{\underline{X}}$ be an open set of $\mathbb{R}^{d}$ such that $P\left(\underline{X} \in \mathcal{R}_{\underline{X}}\right)=1$;
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)=\underline{g}(\underline{X})=\left(g_{1}\left(X_{1}, \ldots, X_{d}\right), \ldots, g_{d}\left(X_{1}, \ldots, X_{d}\right)\right)$ be a transformation of $\underline{X}$ under $\underline{g}$, where $\underline{g}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ is a one-to-one Borel measurable function that maps $\mathcal{R}_{\underline{X}}$ onto some set $\mathcal{R}_{\underline{Y}} \subset \mathbb{R}^{d}$;
- $\underline{g}^{-1}(\underline{y})=\left(g_{1}^{-1}(\underline{y}), \ldots, g_{d}^{-1}(\underline{y})\right)$ be the inverse mapping defined over the range $\mathcal{R}_{\underline{Y}}$ of the transformation.
Assume that:
- both $\underline{g}$ and its inverse $\underline{g}^{-1}$ are continuous;
- the partial derivatives, $\frac{\partial g_{i}^{-1}(\underline{y})}{\partial y_{j}}, 1 \leq i, j \leq d$, exist and are continuous;
- the Jacobian of the inverse transformation $\underline{g}^{-1}$ (i.e. the determinant of the matrix of partial derivatives $\frac{\partial g_{i}^{-1}(\underline{y})}{\partial y_{j}}$) is such that
$$
J(\underline{y})=\left|\begin{array}{ccc}
\frac{\partial g_{1}^{-1}(\underline{y})}{\partial y_{1}} & \cdots & \frac{\partial g_{1}^{-1}(\underline{y})}{\partial y_{d}} \\
\vdots & \ddots & \vdots \\
\frac{\partial g_{d}^{-1}(\underline{y})}{\partial y_{1}} & \cdots & \frac{\partial g_{d}^{-1}(\underline{y})}{\partial y_{d}}
\end{array}\right| \neq 0,
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{d}\right) \in \mathcal{R}_{\underline{Y}}$. Then the random vector $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)$ is absolutely continuous and its joint p.d.f. is given by
$$
f_{\underline{Y}}(\underline{y})=f_{\underline{X}}\left[\underline{g}^{-1}(\underline{y})\right] \times|J(\underline{y})|
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{d}\right) \in \mathcal{R}_{\underline{Y}}$.
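A Monte Carlo sanity check of Theorem 2.117 (an illustration, not a proof): for $X_{1}, X_{2}$ i.i.d. Uniform$(0,1)$ and $g(x_{1}, x_{2})=(x_{1}+x_{2}, x_{1}-x_{2})$, the inverse is $x_{1}=(y_{1}+y_{2})/2$, $x_{2}=(y_{1}-y_{2})/2$ with $|J(\underline{y})|=1/2$, so $f_{\underline{Y}} \equiv 1/2$ on the image of the unit square. The evaluation point and box half-width below are arbitrary choices.

```python
# Monte Carlo check of Theorem 2.117 for g(x1, x2) = (x1 + x2, x1 - x2) with
# X1, X2 i.i.d. Uniform(0,1): here |J(y)| = 1/2, so f_Y(y) = 1/2 on the image.
import random

random.seed(1)
n, eps, y1, y2 = 1_000_000, 0.05, 1.0, 0.2  # arbitrary point and box half-width
hits = 0
for _ in range(n):
    x1, x2 = random.random(), random.random()
    if abs(x1 + x2 - y1) < eps and abs(x1 - x2 - y2) < eps:
        hits += 1
print(hits / (n * (2 * eps) ** 2))  # estimate of f_Y(1.0, 0.2); should be near 0.5
```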
Exercise 2.118 - P.d.f. of a one-to-one transformation of an absolutely continuous random vector
Prove Theorem 2.117 (Rohatgi, 1976, pp. 135-136).
Exercise 2.119 - P.d.f. of a one-to-one transformation of an absolutely continuous random vector
Let
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be an absolutely continuous random vector with joint p.d.f. $f_{\underline{X}}(\underline{x})$;
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)=\underline{g}(\underline{X})=\mathbf{A} \underline{X}+\underline{b}$ be an invertible affine mapping of $\mathbb{R}^{d}$ into itself, where $\mathbf{A}$ is a nonsingular $d \times d$ matrix and $\underline{b} \in \mathbb{R}^{d}$.
Derive the inverse mapping $\underline{g}^{-1}$ and the joint p.d.f. of $\underline{Y}$ (Karr, 1993, p. 62).
Exercise 2.120 - P.d.f. of a one-to-one transformation of an absolutely continuous random vector
Let
- $\underline{X}=\left(X_{1}, X_{2}, X_{3}\right)$ such that $X_{i} \stackrel{\text{i.i.d.}}{\sim} \operatorname{Exponential}(1)$;
- $\underline{Y}=\left(Y_{1}, Y_{2}, Y_{3}\right)=\left(X_{1}+X_{2}+X_{3}, \frac{X_{1}+X_{2}}{X_{1}+X_{2}+X_{3}}, \frac{X_{1}}{X_{1}+X_{2}}\right)$.
Derive the joint p.d.f. of $\underline{Y}$ and conclude that $Y_{1}, Y_{2}$, and $Y_{3}$ are also independent (Rohatgi, 1976, p. 137).
Remark 2.121 - P.d.f. of a one-to-one transformation of an absolutely continuous random vector (Rohatgi, 1976, p. 136)
In actual applications, we tend to know just $k$ functions, $Y_{1}=g_{1}(\underline{X}), \ldots, Y_{k}=g_{k}(\underline{X})$. In this case, we introduce arbitrarily $(d-k)$ (convenient) r.v., $Y_{k+1}=g_{k+1}(\underline{X}), \ldots, Y_{d}=g_{d}(\underline{X})$, such that the conditions of Theorem 2.117 are satisfied.
To find the joint density of the $k$ r.v. we simply integrate the joint p.d.f. $f_{\underline{Y}}$ over all the $(d-k)$ r.v. that were arbitrarily introduced.
We can state a similar result to Theorem 2.117 when $\underline{g}$ is not a one-to-one transformation.

Theorem 2.122 - P.d.f. of a transformation, with a finite number of inverses, of an absolutely continuous random vector (Rohatgi, 1976, pp. 136-137)
Assume the conditions of Theorem 2.117 and suppose that:
- for each $\underline{y} \in \mathcal{R}_{\underline{Y}} \subset \mathbb{R}^{d}$, the transformation $\underline{g}$ has a finite number $k=k(\underline{y})$ of inverses;
- $\mathcal{R}_{\underline{X}} \subset \mathbb{R}^{d}$ can be partitioned into $k$ disjoint sets, $A_{1}, \ldots, A_{k}$, such that the transformation $\underline{g}$ from $A_{i}(i=1, \ldots, k)$ into $\mathbb{R}^{d}$, say $\underline{g}_{i}$, is one-to-one with inverse transformation $\underline{g}_{i}^{-1}=\left(g_{1 i}^{-1}(\underline{y}), \ldots, g_{d i}^{-1}(\underline{y})\right), i=1, \ldots, k$;
- the first partial derivatives of $\underline{g}_{i}^{-1}$ exist, are continuous and that each Jacobian
$$
J_{i}(\underline{y})=\left|\begin{array}{ccc}
\frac{\partial g_{1 i}^{-1}(\underline{y})}{\partial y_{1}} & \cdots & \frac{\partial g_{1 i}^{-1}(\underline{y})}{\partial y_{d}} \\
\vdots & \ddots & \vdots \\
\frac{\partial g_{d i}^{-1}(\underline{y})}{\partial y_{1}} & \cdots & \frac{\partial g_{d i}^{-1}(\underline{y})}{\partial y_{d}}
\end{array}\right| \neq 0
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{d}\right)$ in the range of the transformation $\underline{g}_{i}$.
Then the random vector $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)$ is absolutely continuous and its joint p.d.f. is given by
$$
f_{\underline{Y}}(\underline{y})=\sum_{i=1}^{k} f_{\underline{X}}\left[\underline{g}_{i}^{-1}(\underline{y})\right] \times\left|J_{i}(\underline{y})\right|,
$$
for $\underline{y}=\left(y_{1}, \ldots, y_{d}\right) \in \mathcal{R}_{\underline{Y}}$.
Theorem 2.123 - P.d.f. of the sum, difference, product and division of two absolutely continuous r.v. (Rohatgi, 1976, p. 141)
Let:
- $(X, Y)$ be an absolutely continuous bidimensional random vector with joint p.d.f. $f_{X, Y}(x, y)$
- $Z=X+Y, U=X-Y, V=X Y$ and $W=X / Y$.
Then
$$
\begin{aligned}
f_{Z}(z) & =f_{X+Y}(z) \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(x, z-x) d x \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(z-y, y) d y
\end{aligned}
$$
$$
\begin{aligned}
f_{U}(u) & =f_{X-Y}(u) \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(x, x-u) d x \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(u+y, y) d y \\
f_{V}(v) & =f_{X Y}(v) \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(x, v / x) \times \frac{1}{|x|} d x \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(v / y, y) \times \frac{1}{|y|} d y \\
f_{W}(w) & =f_{X / Y}(w) \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(x, x / w) \times \frac{|x|}{w^{2}} d x \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(w y, y) \times|y| d y .
\end{aligned}
$$
Remark 2.124 - P.d.f. of the sum and product of two absolutely continuous r.v.
It is interesting to note that:
$$
\begin{aligned}
f_{Z}(z) & =\frac{d F_{Z}(z)}{d z} \\
& =\frac{d P(Z=X+Y \leq z)}{d z} \\
& =\frac{d}{d z}\left[\iint_{\{(x, y): x+y \leq z\}} f_{X, Y}(x, y) d y d x\right] \\
& =\frac{d}{d z}\left[\int_{-\infty}^{+\infty} \int_{-\infty}^{z-x} f_{X, Y}(x, y) d y d x\right] \\
& =\int_{-\infty}^{+\infty} \frac{d}{d z}\left[\int_{-\infty}^{z-x} f_{X, Y}(x, y) d y\right] d x \\
& =\int_{-\infty}^{+\infty} f_{X, Y}(x, z-x) d x \\
f_{V}(v) & =\frac{d F_{V}(v)}{d v} \\
& =\frac{d P(V=X Y \leq v)}{d v} \\
& =\frac{d}{d v}\left[\iint_{\{(x, y): x y \leq v\}} f_{X, Y}(x, y) d y d x\right] \\
& =\frac{d}{d v}\left[\int_{0}^{+\infty} \int_{-\infty}^{v / x} f_{X, Y}(x, y) d y d x+\int_{-\infty}^{0} \int_{v / x}^{+\infty} f_{X, Y}(x, y) d y d x\right] \\
& =\int_{-\infty}^{+\infty} \frac{1}{|x|} f_{X, Y}(x, v / x) d x .
\end{aligned}
$$
Corollary 2.125 - P.d.f. of the sum, difference, product and division of two INDEPENDENT absolutely continuous r.v. (Rohatgi, 1976, p. 141) Let:
- $X$ and $Y$ be two independent absolutely continuous r.v. with joint p.d.f. $f_{X, Y}(x, y)=f_{X}(x) \times f_{Y}(y), \forall x, y ;$
- $Z=X+Y, U=X-Y, V=X Y$ and $W=X / Y$.
Then
$$
\begin{aligned}
& f_{Z}(z)=f_{X+Y}(z) \\
& =\int_{-\infty}^{+\infty} f_{X}(x) \times f_{Y}(z-x) d x \\
& =\int_{-\infty}^{+\infty} f_{X}(z-y) \times f_{Y}(y) d y \\
& f_{U}(u)=f_{X-Y}(u) \\
& =\int_{-\infty}^{+\infty} f_{X}(x) \times f_{Y}(x-u) d x \\
& =\int_{-\infty}^{+\infty} f_{X}(u+y) \times f_{Y}(y) d y \\
& f_{V}(v)=f_{X Y}(v) \\
& =\int_{-\infty}^{+\infty} f_{X}(x) \times f_{Y}(v / x) \times \frac{1}{|x|} d x \\
& =\int_{-\infty}^{+\infty} f_{X}(v / y) \times f_{Y}(y) \times \frac{1}{|y|} d y \\
& f_{W}(w)=f_{X / Y}(w) \\
& =\int_{-\infty}^{+\infty} f_{X}(x) \times f_{Y}(x / w) \times \frac{|x|}{w^{2}} d x \\
& =\int_{-\infty}^{+\infty} f_{X}(w y) \times f_{Y}(y) \times|y| d y .
\end{aligned}
$$
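The convolution integral of Corollary 2.125 can also be evaluated numerically. The sketch below uses the trapezoidal rule for $Z=X+Y$ with $X, Y$ i.i.d. Exponential(1), whose sum is known to have the Gamma$(2,1)$ density $z e^{-z}$; the grid step is an arbitrary discretization choice.

```python
# Numerical convolution (trapezoidal rule) for Z = X + Y, X, Y i.i.d. Exp(1),
# compared with the Gamma(2, 1) density z * exp(-z).
from math import exp

def f(x):  # Exponential(1) p.d.f.
    return exp(-x) if x >= 0 else 0.0

def conv_at(z, h=1e-3):
    xs = [i * h for i in range(int(z / h) + 1)]  # integrand vanishes outside [0, z]
    vals = [f(x) * f(z - x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for z in (0.5, 1.0, 3.0):
    print(z, conv_at(z), z * exp(-z))  # the two columns should nearly coincide
```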
Exercise 2.126 - P.d.f. of the sum and difference of two INDEPENDENT absolutely continuous r.v.
Let $X$ and $Y$ be two r.v. which are independent and uniformly distributed in $(0,1)$. Derive the p.d.f. of
(a) $X+Y$
(b) $X-Y$
(c) $(X+Y, X-Y)$ (Rohatgi, 1976, pp. 137-138).
Exercise 2.127 - P.d.f. of the mean of two INDEPENDENT absolutely continuous r.v.
Let $X$ and $Y$ be two independent r.v. with standard normal distribution. Prove that their mean $\frac{X+Y}{2} \sim \operatorname{Normal}\left(0,2^{-1}\right)$.
Remark 2.128 - D.f. and p.d.f. of the sum, difference, product and division of two absolutely continuous r.v.
In several cases it is simpler to obtain the d.f. of those four algebraic functions of $X$ and $Y$ than to derive the corresponding p.d.f. It suffices to apply Proposition 2.104 and then differentiate the d.f. to get the p.d.f., as seen in the next exercises.
Exercise 2.129 - D.f. and p.d.f. of the difference of two absolutely continuous r.v.
Choosing adequate underkeel clearance (UKC) is one of the most crucial and most difficult problems in the navigation of large ships, especially very large crude oil carriers.
Let $X$ be the water depth in a shallow waterway, say a harbour or a channel, and $Y$ be the maximum ship draft. Then the probability of safely passing a shallow waterway can be expressed as $P(\mathrm{UKC}=X-Y>0)$.
Assume that $X$ and $Y$ are independent r.v. such that $X \sim \operatorname{Gamma}(n, \beta)$ and $Y \sim \operatorname{Gamma}(m, \beta)$, where $n, m \in \mathbb{N}$ and $m<n$. Derive an expression for $P(\mathrm{UKC}=X-Y>0)$ taking into account that $F_{\operatorname{Gamma}(k, \beta)}(x)=\sum_{i=k}^{\infty} e^{-\beta x}(\beta x)^{i} / i!$, $k \in \mathbb{N}$.

Exercise 2.130 - D.f. and p.d.f. of the sum of two absolutely continuous r.v.

Let $X$ and $Y$ be the durations of two independent system components set in what is called a stand by connection. ${ }^{11}$ In this case the system duration is given by $X+Y$.
Prove that the p.d.f. of $X+Y$ equals
$$
f_{X+Y}(z)=\frac{\alpha \beta\left(e^{-\beta z}-e^{-\alpha z}\right)}{\alpha-\beta}, z>0,
$$
if $X \sim \operatorname{Exponential}(\alpha)$ and $Y \sim \operatorname{Exponential}(\beta)$, where $\alpha, \beta>0$ and $\alpha \neq \beta$.
Exercise 2.131 - D.f. of the division of two absolutely continuous r.v.
Let $X$ and $Y$ be the intensity of a transmitted signal and its damping until its reception, respectively. Moreover, $W=X / Y$ represents the intensity of the received signal.
Assume that the joint p.d.f. of $(X, Y)$ equals $f_{X, Y}(x, y)=\lambda \mu e^{-(\lambda x+\mu y)} \times$ $I_{(0,+\infty) \times(0,+\infty)}(x, y)$. Prove that the d.f. of $W=X / Y$ is given by:
$$
F_{W}(w)=\left(1-\frac{\mu}{\mu+\lambda w}\right) \times I_{(0,+\infty)}(w) .
$$
${ }^{11}$ At time 0 , only the component with duration $X$ is on. The component with duration $Y$ replaces the other one as soon as it fails.
#### Random variables with prescribed distributions
Motivation 2.132 - Construction of a r.v. with a prescribed distribution (Karr, 1993, p. 63)
Can we construct (or simulate) explicitly individual r.v., random vectors or sequences of r.v. with prescribed distributions?
Proposition 2.133 - Construction of a r.v. with a prescribed d.f. (Karr, 1993, p. 63)
Let $F$ be a d.f. on $\mathbb{R}$. Then there is a probability space $(\Omega, \mathcal{F}, P)$ and a r.v. $X$ defined on it such that $F_{X}=F$.
Exercise 2.134 - Construction of a r.v. with a prescribed d.f. Prove Proposition 2.133 (Karr, 1993, p. 63).
The construction of a r.v. with a prescribed d.f. depends on the following definition.
Definition 2.135 - Quantile function (Karr, 1993, p. 63)
The inverse function of $F, F^{-1}$, or quantile function associated with $F$, is defined by
$$
F^{-1}(p)=\inf \{x: F(x) \geq p\}, p \in(0,1) .
$$
This function is often referred to as the generalized inverse of the d.f.
Exercise 2.136 - Quantile functions of an absolutely continuous and a discrete r.v.
Obtain and draw the graphs of the d.f. and quantile function of:
(a) $X \sim \operatorname{Exponential}(\lambda)$
(b) $X \sim \operatorname{Bernoulli}(\theta)$.

Remark 2.137 - Existence of a quantile function (Karr, 1993, p. 63)

Even though $F$ need be neither continuous nor strictly increasing, $F^{-1}$ always exists.
As the graph of the quantile function (associated with the d.f.) of $X \sim \operatorname{Bernoulli}(\theta)$ illustrates, $F^{-1}$ jumps where $F$ is flat, and is flat where $F$ jumps.
Although not necessarily a pointwise inverse of $F, F^{-1}$ serves that role for many purposes and has a few interesting properties.
Proposition 2.138 - Basic properties of the quantile function (Karr, 1993, p. 63)
Let $F^{-1}$ be the (generalized) inverse of $F$ or quantile function associated with $F$. Then
1. For each $p$ and $x$,
$$
F^{-1}(p) \leq x \text { iff } p \leq F(x)
$$
2. $F^{-1}$ is non-decreasing and left-continuous;
3. If $F$ is absolutely continuous, then
$$
F\left[F^{-1}(p)\right]=p, \forall p \in(0,1) .
$$
Motivation 2.139 - Quantile transformation (Karr, 1993, p. 63)
A r.v. with d.f. $F$ can be constructed by applying $F^{-1}$ to a r.v. with uniform distribution on $(0,1)$. This is usually known as the quantile transformation and is a very popular transformation in random number generation/simulation on computer.
Proposition 2.140 - Quantile transformation (Karr, 1993, p. 64) Let $F$ be a d.f. on $\mathbb{R}$ and suppose $U \sim \operatorname{Uniform}(0,1)$. Then
$$
X=F^{-1}(U) \text { has distribution function } F .
$$
## Exercise 2.141 - Quantile transformation
Prove Proposition 2.140 (Karr, 1993, p. 64).
## Example 2.142 - Quantile transformation
If $U \sim \operatorname{Uniform}(0,1)$ then both $-\frac{1}{\lambda} \ln (1-U)$ and $-\frac{1}{\lambda} \ln (U)$ have exponential distribution with parameter $\lambda(\lambda>0)$.

Remark 2.143 - Quantile transformation (Karr, 1993, p. 64)
R.v. with d.f. $F$ can be simulated by applying $F^{-1}$ to the (uniformly distributed) values produced by the random number generator.
Feasibility of this technique depends on either having $F^{-1}$ available in closed form or being able to approximate it numerically.
Proposition 2.144 - The quantile transformation and the simulation of discrete and absolutely continuous distributions
To generate (pseudo-)random numbers from a r.v. $X$ with d.f. $F$, it suffices to:
1. Generate a (pseudo-)random number $u$ from the $\operatorname{Uniform}(0,1)$ distribution.
2. Assign
$$
x=F^{-1}(u)=\inf \{m \in \mathbb{R}: u \leq F(m)\},
$$
the quantile of order $u$ of $X$, where $F^{-1}$ represents the generalized inverse of $F$.
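A minimal sketch of this recipe for the Exponential$(\lambda)$ case, whose quantile function is available in closed form ($F^{-1}(u)=-\ln(1-u)/\lambda$, by Example 2.142); the value of $\lambda$ and the sample size below are arbitrary choices.

```python
# Inverse-transform (quantile) sampling of Exponential(lam); lam is arbitrary.
import random
from math import log

random.seed(0)
lam, n = 2.0, 100_000
sample = [-log(1.0 - random.random()) / lam for _ in range(n)]
print(sum(sample) / n)  # sample mean; should be close to 1/lam = 0.5
```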
For a detailed discussion on (pseudo-)random number generation/generators and their properties please refer to Gentle (1998, pp. 6-22). For a brief discussion - in Portuguese - on (pseudo-)random number generation and Monte Carlo simulation method we refer the reader to Morais (2003, Chapter 2).
Exercise 2.145 - The quantile transformation and the generation of the Logistic distribution
$X$ is said to have a $\operatorname{Logistic}(\mu, \sigma)$ distribution if its p.d.f. is given by
$$
f(x)=\frac{e^{-\frac{x-\mu}{\sigma}}}{\sigma\left(1+e^{-\frac{x-\mu}{\sigma}}\right)^{2}},-\infty<x<+\infty .
$$
Define the quantile transformation to produce (pseudo-)random numbers with such a distribution.
Exercise 2.146 - The quantile transformation and the simulation of the Erlang distribution
Describe a method to generate (pseudo-)random numbers from the $\operatorname{Erlang}(n, \lambda)$ distribution. ${ }^{12}$
${ }^{12}$ Let us remind the reader that the sum of $n$ independent exponentially distributed r.v. with parameter $\lambda$ has an $\operatorname{Erlang}(n, \lambda)$ distribution.

Exercise 2.147 - The quantile transformation and the generation of the Beta distribution
Let $Y$ and $Z$ be two independent r.v. with distributions $\operatorname{Gamma}(\alpha, \lambda)$ and $\operatorname{Gamma}(\beta, \lambda)$, respectively $(\alpha, \beta, \lambda>0)$.
(a) Prove that $X=Y /(Y+Z) \sim \operatorname{Beta}(\alpha, \beta)$.
(b) Use this result to describe a random number generation method for the $\operatorname{Beta}(\alpha, \beta)$, where $\alpha, \beta \in \mathbb{N}$.
(c) Use any software you are familiar with to generate and plot the histogram of 1000 observations from the $\operatorname{Beta}(4,5)$ distribution.
Example 2.148 - The quantile transformation and the generation of the Bernoulli distribution (Gentle, 1998, p. 47)
To generate (pseudo-)random numbers from the $\operatorname{Bernoulli}(p)$ distribution, we should proceed as follows:
1. Generate a (pseudo-)random number $u$ from the Uniform $(0,1)$ distribution.
2. Assign
$$
x=\left\{\begin{array}{l}
0, \text { if } u \leq 1-p \\
1, \text { if } u>1-p
\end{array}\right.
$$
or, equivalently,
$$
x=\left\{\begin{array}{l}
0, \text { if } u \geq p \\
1, \text { if } u<p .
\end{array}\right.
$$
(What is the advantage of (2.116) over (2.115)?)
Exercise 2.149 - The quantile transformation and the simulation of the Binomial distribution
Describe a method to generate (pseudo-)random numbers from a $\operatorname{Binomial}(n, p)$ distribution.
Proposition 2.150 - The converse of the quantile transformation (Karr, 1993, p. 64)
A converse of the quantile transformation (Proposition 2.140) holds as well, under certain conditions. In fact, if $F_{X}$ is continuous (not necessarily absolutely continuous) then
$$
F_{X}(X) \sim \operatorname{Uniform}(0,1)
$$
Exercise 2.151 - The converse of the quantile transformation

Prove Proposition 2.150 (Karr, 1993, p. 64).
Motivation 2.152 - Construction of random vectors with a prescribed distribution (Karr, 1993, p. 65)
The construction of a random vector with an arbitrary d.f. is more complicated. We shall address this issue in the next chapter for a special case: when the random vector has independent components. However, we can state the following result.
Proposition 2.153 - Construction of a random vector with a prescribed d.f. (Karr, 1993, p. 65)
Let $F: \mathbb{R}^{d} \rightarrow[0,1]$ be a $d$-dimensional d.f. Then there is a probability space $(\Omega, \mathcal{F}, P)$ and a random vector $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ defined on it such that $F_{\underline{X}}=F$.
Motivation 2.154 - Construction of a sequence of r.v. with a prescribed joint d.f. (Karr, 1993, p. 65)
How can we construct a sequence $\left\{X_{k}\right\}_{k \in \mathbb{N}}$ of r.v. with a prescribed joint d.f. $F_{n}$, where $F_{n}$ is the joint d.f. of $\underline{X}_{n}=\left(X_{1}, \ldots, X_{n}\right)$, for each $n \in \mathbb{N}$? The d.f. $F_{n}$ must satisfy certain consistency conditions since if such r.v. exist then
$$
\begin{aligned}
F_{n}\left(\underline{x}_{n}\right) & =P\left(X_{1} \leq x_{1}, \ldots, X_{n} \leq x_{n}\right) \\
& =\lim _{x \rightarrow+\infty} P\left(X_{1} \leq x_{1}, \ldots, X_{n} \leq x_{n}, X_{n+1} \leq x\right),
\end{aligned}
$$
for all $x_{1}, \ldots, x_{n}$.
Theorem 2.155 - Kolmogorov existence Theorem (Karr, 1993, p. 65)
Let $F_{n}$ be a d.f. on $\mathbb{R}^{n}$, and suppose that
$$
\lim _{x \rightarrow+\infty} F_{n+1}\left(x_{1}, \ldots, x_{n}, x\right)=F_{n}\left(x_{1}, \ldots, x_{n}\right),
$$
for each $n \in \mathbb{N}$ and $x_{1}, \ldots, x_{n}$. Then there is a probability space say $(\Omega, \mathcal{F}, P)$ and a sequence of $\left\{X_{k}\right\}_{k \in \mathbb{N}}$ of r.v. defined on it such that $F_{n}$ is the d.f. of $\left(X_{1}, \ldots, X_{n}\right)$, for each $n \in \mathbb{N}$.
Remark 2.156 - Kolmogorov existence Theorem (http://en.wikipedia.org/wiki/Kolmogorov_extension_theorem)
Theorem 2.155 guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. This theorem is credited to the Soviet mathematician Andrey Nikolaevich Kolmogorov (1903-1987, http://en.wikipedia.org/wiki/Andrey_Kolmogorov).
## References
- Gentle, J.E. (1998). Random Number Generation and Monte Carlo Methods. Springer-Verlag, New York, Inc. (QA298.GEN.50103)
- Grimmett, G.R. and Stirzaker, D.R. (2001). One Thousand Exercises in Probability. Oxford University Press.
- Karr, A.F. (1993). Probability. Springer-Verlag.
- Morais, M.C. (2003). Estatística Computacional - Módulo 1: Notas de Apoio (Caps. 1 e 2), 141 pags.
(http://www.math.ist.utl.pt/~mjmorais/materialECMCM.html)
- Murteira, B.J.F. (1979). Probabilidades e Estatística (volume I). Editora McGraw-Hill de Portugal, Lda. (QA273-280/3.MUR.5922, QA273-280/3.MUR.34472, QA273-280/3.MUR.34474, QA273-280/3.MUR.34476)
- Resnick, S.I. (1999). A Probability Path. Birkhäuser. (QA273.4-.67.RES.49925)
- Righter, R. (200-). Lecture notes for the course Probability and Risk Analysis for Engineers. Department of Industrial Engineering and Operations Research, University of California at Berkeley.
- Rohatgi, V.K. (1976). An Introduction to Probability Theory and Mathematical Statistics. John Wiley \& Sons. (QA273-280/4.ROH.34909)
## Chapter 3
## Independence
Independence is a basic property of events and r.v. in a probability model.
### Fundamentals
Motivation 3.1 - Independence (Resnick, 1999, p. 91; Karr, 1993, p. 71)
The intuitive appeal of independence stems from the easily envisioned property that the occurrence of an event has no effect on the probability that an independent event will occur. Despite the intuitive appeal, it is important to recognize that independence is a technical concept/definition which must be checked with respect to a specific model.
Independence — or the absence of probabilistic interaction — sets probability apart as a distinct mathematical theory.
A series of definitions of independence of increasing sophistication will follow.
Definition 3.2 - Independence for two events (Resnick, 1999, p. 91)
Suppose $(\Omega, \mathcal{F}, P)$ is a fixed probability space. Events $A, B \in \mathcal{F}$ are independent if
$$
P(A \cap B)=P(A) \times P(B) .
$$
## Exercise 3.3 - Independence
Let $A$ and $B$ be two independent events. Show that:
(a) $A^{c}$ and $B$ are independent,
(b) and so are $A$ and $B^{c}$,
(c) and $A^{c}$ and $B^{c}$.
## Exercise 3.4 - (In)dependence and disjoint events
Let $A$ and $B$ be two DISJOINT events with probabilities $P(A), P(B)>0$. Show that these two events are NOT INDEPENDENT.
Exercise 3.5 - Independence (Exercise 3.2, Karr, 1993, p. 95)
Show that:
(a) an event whose probability is either zero or one is independent of every event;
(b) an event that is independent of itself has probability zero or one.
Definition 3.6 - Independence for a finite number of events (Resnick, 1999, p. 91)
The events $A_{1}, \ldots, A_{n} \in \mathcal{F}$ are independent if
$$
P\left(\bigcap_{i \in I} A_{i}\right)=\prod_{i \in I} P\left(A_{i}\right),
$$
for all finite $I \subseteq\{1, \ldots, n\}$.
Remark 3.7 - Independence for a finite number of events (Resnick, 1999, p. 92) Note that (3.2) represents $\sum_{k=2}^{n}\left(\begin{array}{l}n \\ k\end{array}\right)=2^{n}-n-1$ equations and can be rephrased as follows:
- the events $A_{1}, \ldots, A_{n}$ are independent if
$$
P\left(\bigcap_{i=1}^{n} B_{i}\right)=\prod_{i=1}^{n} P\left(B_{i}\right),
$$
where, for each $i=1, \ldots, n, B_{i}$ equals $A_{i}$ or $\Omega$.
Corollary 3.8 - Independence for a finite number of events (Karr, 1993, p. 81) Events $A_{1}, \ldots, A_{n}$ are independent iff $A_{1}^{c}, \ldots, A_{n}^{c}$ are independent.
Exercise 3.9 - Independence for a finite number of events (Exercise 3.1, Karr, 1993, p. 95)
Let $A_{1}, \ldots, A_{n}$ be independent events.

(a) Prove that $P\left(\bigcup_{i=1}^{n} A_{i}\right)=1-\prod_{i=1}^{n}\left[1-P\left(A_{i}\right)\right]$.
(b) Consider a parallel system with $n$ components and assume that $P\left(A_{i}\right)$ is the reliability of the component $i(i=1, \ldots, n)$. What is the system reliability?
Motivation 3.10 - (2nd.) Borel-Cantelli lemma (Karr, 1993, p. 81)
For independent events, Theorem 1.76, the (1st.) Borel-Cantelli lemma, has a converse. It states that if the events $A_{1}, A_{2}, \ldots$ are independent and the sum of the probabilities of the $A_{n}$ diverges to infinity, then the probability that infinitely many of them occur is 1.
Theorem 3.11 - (2nd.) Borel-Cantelli lemma (Karr, 1993, p. 81)
Let $A_{1}, A_{2}, \ldots$ be independent events. Then
$$
\sum_{n=1}^{+\infty} P\left(A_{n}\right)=+\infty \quad \Rightarrow \quad P\left(\lim \sup A_{n}\right)=1
$$
(Moreover, $P\left(\lim \sup A_{n}\right)=1 \Rightarrow \sum_{n=1}^{+\infty} P\left(A_{n}\right)=+\infty$ follows from the 1st. Borel-Cantelli lemma.)
## Exercise 3.12 - (2nd.) Borel-Cantelli lemma
Prove Theorem 3.11 (Karr, 1993, p. 82).
Definition 3.13 - Independent classes of events (Resnick, 1999, p. 92)
Let $\mathcal{C}_{i} \subseteq \mathcal{F}, i=1, \ldots, n$, be classes of events. Then the classes $\mathcal{C}_{1}, \ldots, \mathcal{C}_{n}$ are said to be independent if for any choice $A_{1}, \ldots, A_{n}$, with $A_{i} \in \mathcal{C}_{i}, i=1, \ldots, n$, the events $A_{1}, \ldots, A_{n}$ are independent events according to Definition 3.6.
Definition 3.14 - Independent $\sigma$-algebras (Karr, 1993, p. 94)
Sub-$\sigma$-algebras $\mathcal{G}_{1}, \ldots, \mathcal{G}_{n}$ of the $\sigma$-algebra $\mathcal{F}$ are independent if
$$
P\left(\bigcap_{i=1}^{n} A_{i}\right)=\prod_{i=1}^{n} P\left(A_{i}\right)
$$
for all $A_{i} \in \mathcal{G}_{i}, i=1, \ldots, n$.
Motivation 3.15 - Independence of $\sigma$-algebras (Resnick, 1999, p. 92)

To provide a basic criterion for proving independence of $\sigma$-algebras, we need to introduce the notions of $\pi$-system and $d$-system.

Definition 3.16 - $\pi$-system (Resnick, 1999, p. 32; Karr, 1993, p. 21)

Let $\mathcal{P}$ be a family of subsets of the sample space $\Omega$. $\mathcal{P}$ is said to be a $\pi$-system if it is closed under finite intersection: $A, B \in \mathcal{P} \Rightarrow A \cap B \in \mathcal{P}$.
Remark 3.17 - $\pi$-system (Karr, 1993, p. 21)
A $\sigma$-algebra is a $\pi$-system.
Definition 3.18 - $d$-system (Karr, 1993, p. 21)
Let $\mathcal{D}$ be a family of subsets of the sample space $\Omega$. $\mathcal{D}$ is said to be a $d$-system ${ }^{1}$ if it
1. contains the sample space $\Omega$,
2. is closed under proper difference, ${ }^{2}$
3. and is closed under countable increasing union. ${ }^{3}$
Proposition 3.19 - Relating $\pi$- and $\lambda$-systems and $\sigma$-algebras (Resnick, 1999, p. 38)

If a class $\mathcal{C}$ is both a $\pi$-system and a $\lambda$-system then it is a $\sigma$-algebra.
Theorem 3.20 - Basic independence criterion (Resnick, 1999, p. 92)
If, for each $i=1, \ldots, n, \mathcal{C}_{i}$ is a non-empty class of events satisfying
1. $\mathcal{C}_{i}$ is a $\pi$-system
2. $\mathcal{C}_{i}, i=1, \ldots, n$ are independent
then the $\sigma$-algebras generated by these $n$ classes of events, $\sigma\left(\mathcal{C}_{1}\right), \ldots, \sigma\left(\mathcal{C}_{n}\right)$, are independent.
## Exercise 3.21 - Basic independence criterion
Prove the basic independence criterion in Theorem 3.20 (Resnick, 1999, pp. 92-93).
${ }^{1}$ Synonyms (Resnick, 1999, p. 36): $\lambda$-system, $\sigma$-additive, Dynkin class.
${ }^{2}$ If $A, B \in \mathcal{D}$ and $A \subseteq B$ then $B \backslash A \in \mathcal{D}$.
${ }^{3}$ If $A_{1} \subseteq A_{2} \subseteq \ldots$ and $A_{i} \in \mathcal{D}$ then $\bigcup_{i=1}^{+\infty} A_{i} \in \mathcal{D}$.

Definition 3.22 - Arbitrary number of independent classes (Resnick, 1999, p. 93; Karr, 1993, p. 94)
Let $T$ be an arbitrary index set. The classes $\left\{\mathcal{C}_{t}, t \in T\right\}$ are independent if, for each finite $I$ such that $I \subset T,\left\{\mathcal{C}_{t}, t \in I\right\}$ are independent.
An infinite collection of $\sigma$-algebras is independent if every finite subcollection is independent.
Corollary 3.23 - Arbitrary number of independent classes (Resnick, 1999, p. 93)
If $\left\{\mathcal{C}_{t}, t \in T\right\}$ are non-empty $\pi$-systems that are independent then $\left\{\sigma\left(\mathcal{C}_{t}\right), t \in T\right\}$ are independent.
Exercise 3.24 - Arbitrary number of independent classes
Prove Corollary 3.23 by using the basic independence criterion.
### Independent r.v.
The notion of independence for r.v. can be stated in terms of Borel sets. Moreover, basic independence criteria can be developed based solely on intervals such as $(-\infty, x]$.
Definition 3.25 - Independence of r.v. (Karr, 1993, p. 71)
R.v. $X_{1}, \ldots, X_{n}$ are independent if
$$
P\left(\left\{X_{1} \in B_{1}, \ldots, X_{n} \in B_{n}\right\}\right)=\prod_{i=1}^{n} P\left(\left\{X_{i} \in B_{i}\right\}\right),
$$
for all Borel sets $B_{1}, \ldots, B_{n}$.
Independence for r.v. can also be defined in terms of the independence of $\sigma$-algebras.
Definition 3.26 - Independence of r.v. (Resnick, 1999, p. 93)
Let $T$ be an arbitrary index set. Then $\left\{X_{t}, t \in T\right\}$ is a family of independent r.v. if $\left\{\sigma\left(X_{t}\right), t \in T\right\}$ is a family of independent $\sigma$-algebras as stated in Definition 3.22.
Remark 3.27 - Independence of r.v. (Resnick, 1999, p. 93)
The r.v. are independent if their induced/generated $\sigma$-algebras are independent. The information provided by any individual r.v. should not affect the probabilistic behaviour of other r.v. in the family.
Since $\sigma\left(\mathbf{1}_{A}\right)=\left\{\emptyset, A, A^{c}, \Omega\right\}$ we have $\mathbf{1}_{A_{1}}, \ldots, \mathbf{1}_{A_{n}}$ independent iff $A_{1}, \ldots, A_{n}$ are independent.
Definition 3.28 - Independence of an infinite set of r.v. (Karr, 1993, p. 71) An infinite set of r.v. is independent if every finite subset of r.v. is independent.
Motivation 3.29 - Independence criterion for a finite number of r.v. (Karr, 1993, pp. 71-72)
R.v. are independent iff their joint d.f. is the product of their marginal/individual d.f.
This result affirms the general principle that definitions stated in terms of all Borel sets need only be checked for intervals $(-\infty, x]$.

Theorem 3.30 - Independence criterion for a finite number of r.v. (Karr, 1993, p. 72)
R.v. $X_{1}, \ldots, X_{n}$ are independent iff
$$
F_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=\prod_{i=1}^{n} F_{X_{i}}\left(x_{i}\right)
$$
for all $x_{1}, \ldots, x_{n} \in \mathbb{R}$.
Remark 3.31 - Independence criterion for a finite number of r.v. (Resnick, 1999, p. 94)
Theorem 3.30 is usually referred to as factorization criterion.
Exercise 3.32 - Independence criterion for a finite number of r.v.
Prove Theorem 3.30 (Karr, 1993, p. 72; Resnick, 1999, p. 94, provides a more straightforward proof of this result).
Theorem 3.33 - Independence criterion for an infinite number of r.v. (Resnick, 1999, p. 94)
Let $T$ be an arbitrary index set. A family of r.v. $\left\{X_{t}, t \in T\right\}$ is independent iff
$$
F_{I}\left(x_{t}, t \in I\right)=\prod_{t \in I} F_{X_{t}}\left(x_{t}\right),
$$
for all finite $I \subset T$ and $x_{t} \in \mathbb{R}$.
Exercise 3.34 - Independence criterion for an infinite number of r.v.

Prove Theorem 3.33 (Resnick, 1999, p. 94).
Specialized criteria for discrete and absolutely continuous r.v. follow from Theorem 3.30.
Theorem 3.35 - Independence criterion for discrete r.v. (Karr, 1993, p. 73; Resnick, 1999, p. 94)
The discrete r.v. $X_{1}, \ldots, X_{n}$, with countable ranges $\mathcal{R}_{1}, \ldots, \mathcal{R}_{n}$, are independent iff
$$
P\left(\left\{X_{1}=x_{1}, \ldots, X_{n}=x_{n}\right\}\right)=\prod_{i=1}^{n} P\left(\left\{X_{i}=x_{i}\right\}\right),
$$
for all $x_{i} \in \mathcal{R}_{i}, i=1, \ldots, n$.
Exercise 3.36 - Independence criterion for discrete r.v.
Prove Theorem 3.35 (Karr, 1993, p. 73; Resnick, 1999, pp. 94-95).

Exercise 3.37 - Independence criterion for discrete r.v.
The number of laptops $(X)$ and $\mathrm{PCs}(Y)$ sold daily in a store have a joint p.f. partially described in the following table:
| $X \backslash Y$ | 0 | 1 | 2 |
| :---: | :---: | :---: | :---: |
| 0 | 0.1 | 0.1 | 0.3 |
| 1 | 0.2 | 0.1 | 0.1 |
| 2 | 0 | 0.1 | $\mathrm{a}$ |
Complete the table and prove that $X$ and $Y$ are not independent r.v.
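As a coding sketch of the factorization criterion of Theorem 3.35, the snippet below compares a joint p.f. (a dict keyed by pairs) with the product of its marginals. The joint table used here is a hypothetical one, independent by construction and deliberately different from the table of Exercise 3.37.

```python
# Checking the discrete factorization criterion of Theorem 3.35 on a
# hypothetical joint p.f. (independent by design, unlike Exercise 3.37).
from itertools import product

joint = {(x, y): 0.5 * (0.2 if y == 0 else 0.8) for x, y in product((0, 1), repeat=2)}

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
print(all(abs(joint[x, y] - px[x] * py[y]) < 1e-12 for x, y in joint))  # True
```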
Theorem 3.38 - Independence criterion for absolutely continuous r.v. (Karr, 1993, p. 74)
Let $\underline{X}=\left(X_{1}, \ldots, X_{n}\right)$ be an absolutely continuous random vector. Then $X_{1}, \ldots, X_{n}$ are independent iff
$$
f_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=\prod_{i=1}^{n} f_{X_{i}}\left(x_{i}\right),
$$
for all $x_{1}, \ldots, x_{n} \in \mathbb{R}$.
Exercise 3.39 - Independence criterion for absolutely continuous r.v. Prove Theorem 3.38 (Karr, 1993, p. 74).
Exercise 3.40 - Independence criterion for absolutely continuous r.v.
The r.v. $X$ and $Y$ represent the lifetimes (in $10^{3}$ hours) of two components of a control system and have joint p.d.f. given by
$$
f_{X, Y}(x, y)= \begin{cases}1, & 0<x<1,0<y<1 \\ 0, & \text { otherwise }\end{cases}
$$
Prove that $X$ and $Y$ are independent r.v.
Exercise 3.41 - Independence criterion for absolutely continuous r.v.
Let $X$ and $Y$ be two r.v. that represent, respectively, the width (in $d m$ ) and the length (in $d m$ ) of a rectangular piece. Admit the joint p.d.f. of $(X, Y)$ is given by
$$
f_{X, Y}(x, y)= \begin{cases}2, & 0<x<y<1 \\ 0, & \text { otherwise }\end{cases}
$$
Prove that $X$ and $Y$ are not independent r.v.

Example 3.42 - Independent r.v. (Karr, 1993, pp. 75-76)
Independent r.v. are inherent to certain probability structures.
## - Binary expansions ${ }^{4}$
Let $P$ be the uniform distribution on $\Omega=[0,1]$. Each point $\omega \in \Omega$ has a binary expansion
$$
\omega \rightarrow 0.X_{1}(\omega) X_{2}(\omega) \ldots
$$
where the $X_{i}$ are functions from $\Omega$ to $\{0,1\}$.
This expansion is "unique" and it can be shown that $X_{1}, X_{2}, \ldots$ are independent with a $\operatorname{Bernoulli}\left(p=\frac{1}{2}\right)$ distribution. ${ }^{5}$ Moreover, $\sum_{n=1}^{+\infty} 2^{-n} X_{n} \sim \operatorname{Uniform}(0,1)$. According to Resnick (1999, pp. 98-99), the binary expansion of 1 is $0.111 \ldots$ since
$$
\sum_{n=1}^{+\infty} 2^{-n} \times 1=1
$$
In addition, if a number such as $\frac{1}{2}$ has two possible binary expansions, we agree to use the non-terminating one. Thus, even though $\frac{1}{2}$ has the two expansions $0.0111 \ldots$ and $0.1000 \ldots$, because
$$
\begin{aligned}
& 2^{-1} \times 0+\sum_{n=2}^{+\infty} 2^{-n} \times 1=\frac{1}{2} \\
& 2^{-1} \times 1+\sum_{n=2}^{+\infty} 2^{-n} \times 0=\frac{1}{2}
\end{aligned}
$$
by convention, we use the first binary expansion.
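An empirical illustration of the claim above (the sample size and number of bits are arbitrary choices): draw $\omega$ uniformly on $[0,1]$, peel off the first binary digits, and check that each bit has relative frequency near $\frac{1}{2}$.

```python
# Empirical check that the first k binary digits of a Uniform(0,1) draw behave
# like i.i.d. Bernoulli(1/2) r.v.; n and k are arbitrary choices.
import random

random.seed(7)
n, k = 100_000, 5
counts = [0] * k
for _ in range(n):
    omega = random.random()
    for i in range(k):
        omega *= 2
        bit = int(omega)   # i-th binary digit of omega
        omega -= bit
        counts[i] += bit
print([c / n for c in counts])  # each entry should be close to 0.5
```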
## - Multidimensional uniform distribution
Suppose that $P$ is the uniform distribution on $[0,1]^{n}$. Then the coordinate r.v. $U_{i}\left(\left(\omega_{1}, \ldots, \omega_{n}\right)\right)=\omega_{i}, i=1, \ldots, n$, are independent, and each of them is uniformly distributed on $[0,1]$. In fact, for intervals $I_{1}, \ldots, I_{n}$,
$$
P\left(\left\{U_{1} \in I_{1}, \ldots, U_{n} \in I_{n}\right\}\right)=\prod_{i=1}^{n} P\left(\left\{U_{i} \in I_{i}\right\}\right) .
$$
${ }^{4}$ Or dyadic expansions of uniform random numbers (Resnick, 1999, pp. 98-99).
${ }^{5}$ The proof of this result can also be found in Resnick (1999, pp. 99-100).

In other cases, whether r.v. are independent depends on the value of a parameter.
## - Bivariate normal distribution
Let $(X, Y)$ be a random vector with a bivariate normal distribution with p.d.f.
$$
f_{X, Y}(x, y)=\frac{1}{2 \pi \sqrt{1-\rho^{2}}} \exp \left[-\frac{x^{2}-2 \rho x y+y^{2}}{2\left(1-\rho^{2}\right)}\right],(x, y) \in \mathbb{R}^{2}
$$
Since $X$ and $Y$ both have marginal standard normal distributions, the factorization criterion implies that $X$ and $Y$ are independent iff $\rho=0$.
Exercise 3.43 - Bivariate normal distributed r.v. (Karr, 1993, p. 96, Exercise 3.8)
Let $(X, Y)$ have the bivariate normal p.d.f.
$$
f_{X, Y}(x, y)=\frac{1}{2 \pi \sqrt{1-\rho^{2}}} \exp \left[-\frac{x^{2}-2 \rho x y+y^{2}}{2\left(1-\rho^{2}\right)}\right],(x, y) \in \mathbb{R}^{2} .
$$
(a) Prove that $X \sim Y \sim \operatorname{Normal}(0,1)$.
(b) Prove that $X$ and $Y$ are independent iff $\rho=0$.
(c) Calculate $P(\{X \geq 0, Y \geq 0\})$ as a function of $\rho$.
Exercise 3.44 - I.i.d. r.v. with absolutely continuous distributions (Karr, 1993, p. 96, Exercise 3.9)
Let $(X, Y)$ be an absolutely continuous random vector where $X$ and $Y$ are i.i.d. r.v. with absolutely continuous d.f. $F$. Prove that:
(a) $P(\{X=Y\})=0$;
(b) $P(\{X<Y\})=\frac{1}{2}$.
### Functions of independent r.v.
Motivation 3.45 - Disjoint blocks theorem (Karr, 1993, p. 76)
R.v. that are functions of disjoint subsets of a family of independent r.v. are also independent.
Theorem 3.46 - Disjoint blocks theorem (Karr, 1993, p. 76) Let:
- $X_{1}, \ldots, X_{n}$ be independent r.v.;
- $J_{1}, \ldots, J_{k}$ be disjoint subsets of $\{1, \ldots, n\}$;
- $Y_{l}=g_{l}\left(X^{(l)}\right)$, where $g_{l}$ is a Borel measurable function and $X^{(l)}=\left\{X_{i}, i \in J_{l}\right\}$ is a subset of the family of the independent r.v., for each $l=1, \ldots, k$.
Then
$$
Y_{1}=g_{1}\left(X^{(1)}\right), \ldots, Y_{k}=g_{k}\left(X^{(k)}\right)
$$
are independent r.v.
Remark 3.47 - Disjoint blocks theorem (Karr, 1993, p. 77)
According to Definitions 3.25 and 3.28, the disjoint blocks theorem can be extended to (countably) infinite families and blocks.
Exercise 3.48 - Disjoint blocks theorem
Prove Theorem 3.46 (Karr, 1993, pp. 76-77).
## Example 3.49 - Disjoint blocks theorem
Let $X_{1}, \ldots, X_{5}$ be five independent r.v., and $J_{1}=\{1,2\}$ and $J_{2}=\{3,4\}$ two disjoint subsets of $\{1, \ldots, 5\}$. Then
- $Y_{1}=X_{1}+X_{2}=g_{1}\left(X_{i}, i \in J_{1}=\{1,2\}\right)$ and
- $Y_{2}=X_{3}-X_{4}=g_{2}\left(X_{i}, i \in J_{2}=\{3,4\}\right)$
are independent r.v.

Corollary 3.50 - Disjoint blocks theorem (Karr, 1993, p. 77) Let:
- $X_{1}, \ldots, X_{n}$ be independent r.v.;
- $Y_{i}=g_{i}\left(X_{i}\right), i=1, \ldots, n$, where $g_{1}, \ldots, g_{n}$ are (Borel measurable) functions from $\mathbb{R}$ to $\mathbb{R}$.
Then $Y_{1}, \ldots, Y_{n}$ are independent r.v.
We have already addressed the p.d.f. (or p.f.) of a sum, difference, product or division of two INDEPENDENT absolutely continuous (or discrete) r.v. However, the sum of INDEPENDENT absolutely continuous r.v. merits special consideration: its p.d.f. has a specific designation, the convolution of p.d.f.
Definition 3.51 - Convolution of p.d.f. (Karr, 1993, p. 77) Let:
- $X$ and $Y$ be two INDEPENDENT absolutely continuous r.v.;
- $f$ and $g$ be the p.d.f. of $X$ and $Y$, respectively.
Then the p.d.f. of $X+Y$ is termed the convolution of the p.d.f. $f$ and $g$, represented by $f \star g$ and given by
$$
(f \star g)(t)=\int_{-\infty}^{+\infty} f(t-s) \times g(s) d s .
$$
Proposition 3.52 - Properties of the convolution of p.d.f. (Karr, 1993, p. 78) The convolution of p.d.f. is:
- commutative $-f \star g=g \star f$, for all p.d.f. $f$ and $g$;
- associative $-(f \star g) \star h=f \star(g \star h)$, for all p.d.f. $f, g$ and $h$.
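A numerical spot-check (at a few points, not a proof) of the commutativity property, for two illustrative densities: Exponential(1) and Uniform$(0,1)$. The integration range and step are arbitrary discretization choices.

```python
# Spot-check of f * g = g * f (Proposition 3.52) by Riemann sums.
from math import exp

def f(x):
    return exp(-x) if x >= 0 else 0.0   # Exponential(1) p.d.f.

def g(x):
    return 1.0 if 0 <= x <= 1 else 0.0  # Uniform(0,1) p.d.f.

def convolve(a, b, t, lo=-2.0, hi=6.0, h=1e-3):
    n = int((hi - lo) / h)
    return h * sum(a(t - (lo + i * h)) * b(lo + i * h) for i in range(n))

for t in (0.5, 1.5):
    print(t, convolve(f, g, t), convolve(g, f, t))  # the two values should agree
```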
## Exercise 3.53 - Convolution of p.f.
How could we define the convolution of the p.f. of two INDEPENDENT discrete r.v.?
## Exercise 3.54 - Sum of independent binomial distributions
Let $X \sim \operatorname{Binomial}\left(n_{X}, p\right)$ and $Y \sim \operatorname{Binomial}\left(n_{Y}, p\right)$ be independent.
Prove that $X+Y \sim \operatorname{Binomial}\left(n_{X}+n_{Y}, p\right)$ by using Vandermonde's identity (http://en.wikipedia.org/wiki/Vandermonde's_identity). ${ }^{6}$
Exercise 3.54 gives an example of a distribution family which is closed under convolution. There are several other families with the same property, as illustrated by the next proposition.
## Proposition 3.55 - A few distribution families closed under convolution
| R.v. | Convolution |
| :--- | :--- |
| $X_{i} \sim_{\text {indep }} \operatorname{Binomial}\left(n_{i}, p\right), i=1, \ldots, k$ | $\sum_{i=1}^{k} X_{i} \sim \operatorname{Binomial}\left(\sum_{i=1}^{k} n_{i}, p\right)$ |
| $X_{i} \sim_{\text {indep }} \operatorname{NegativeBinomial}\left(r_{i}, p\right), i=1, \ldots, n$ | $\sum_{i=1}^{n} X_{i} \sim \operatorname{NegativeBinomial}\left(\sum_{i=1}^{n} r_{i}, p\right)$ |
| $X_{i} \sim_{\text {indep }} \operatorname{Poisson}\left(\lambda_{i}\right), i=1, \ldots, n$ | $\sum_{i=1}^{n} X_{i} \sim \operatorname{Poisson}\left(\sum_{i=1}^{n} \lambda_{i}\right)$ |
| $X_{i} \sim_{\text {indep. }} \operatorname{Normal}\left(\mu_{i}, \sigma_{i}^{2}\right), i=1, \ldots, n$ | $\sum_{i=1}^{n} X_{i} \sim \operatorname{Normal}\left(\sum_{i=1}^{n} \mu_{i}, \sum_{i=1}^{n} \sigma_{i}^{2}\right)$ |
| | $\sum_{i=1}^{n} c_{i} X_{i} \sim \operatorname{Normal}\left(\sum_{i=1}^{n} c_{i} \mu_{i}, \sum_{i=1}^{n} c_{i}^{2} \sigma_{i}^{2}\right)$ |
| $X_{i} \sim_{\text {indep. }} \operatorname{Gamma}\left(\alpha_{i}, \lambda\right), i=1, \ldots, n$ | $\sum_{i=1}^{n} X_{i} \sim \operatorname{Gamma}\left(\sum_{i=1}^{n} \alpha_{i}, \lambda\right)$ |
## Exercise 3.56 - Sum of (in)dependent normal distributions
Let $(X, Y)$ have a (non-singular) bivariate normal distribution with mean vector and covariance matrix
$$
\boldsymbol{\mu}=\left[\begin{array}{l}
\mu_{X} \\
\mu_{Y}
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{cc}
\sigma_{X}^{2} & \rho \sigma_{X} \sigma_{Y} \\
\rho \sigma_{X} \sigma_{Y} & \sigma_{Y}^{2}
\end{array}\right],
$$
respectively, that is, the joint p.d.f. is given by
$$
\begin{aligned}
f_{X, Y}(x, y)= & \frac{1}{2 \pi \sigma_{X} \sigma_{Y} \sqrt{1-\rho^{2}}} \exp \left\{-\frac{1}{2\left(1-\rho^{2}\right)}\left[\left(\frac{x-\mu_{X}}{\sigma_{X}}\right)^{2}\right.\right. \\
& \left.\left.-2 \rho\left(\frac{x-\mu_{X}}{\sigma_{X}}\right)\left(\frac{y-\mu_{Y}}{\sigma_{Y}}\right)+\left(\frac{y-\mu_{Y}}{\sigma_{Y}}\right)^{2}\right]\right\},(x, y) \in \mathbb{R}^{2},
\end{aligned}
$$
for $|\rho|=|\operatorname{corr}(X, Y)|<1 .{ }^{7}$

${ }^{6}$ In combinatorial mathematics, Vandermonde's identity, named after Alexandre-Théophile Vandermonde (1772), states that the equality $\left(\begin{array}{c}m+n \\ r\end{array}\right)=\sum_{k=0}^{r}\left(\begin{array}{c}m \\ k\end{array}\right)\left(\begin{array}{c}n \\ r-k\end{array}\right), m, n, r \in \mathbb{N}_{0}$, for binomial coefficients holds. This identity was given already in 1303 by the Chinese mathematician Zhu Shijie (Chu Shi-Chieh).
Prove that $X+Y$ is normally distributed with parameters $E(X+Y)=\mu_{X}+\mu_{Y}$ and $V(X+Y)=V(X)+2 \operatorname{cov}(X, Y)+V(Y)=\sigma_{X}^{2}+2 \rho \sigma_{X} \sigma_{Y}+\sigma_{Y}^{2}$.
Exercise 3.57 - Distribution of the minimum of two exponentially distributed r.v. (Karr, 1993, p. 96, Exercise 3.7)
Let $X \sim \operatorname{Exponential}(\lambda)$ and $Y \sim \operatorname{Exponential}(\mu)$ be two independent r.v.
Calculate the distribution of $Z=\min \{X, Y\}$.
Exercise 3.58 - Distribution of the minimum of exponentially distributed r.v.

Let $X_{i} \sim_{\text {indep }} \operatorname{Exponential}\left(\lambda_{i}\right)$ and $a_{i}>0$, for $i=1, \ldots, n$.
Prove that $\min _{i=1, \ldots, n}\left\{a_{i} X_{i}\right\} \sim \operatorname{Exponential}\left(\sum_{i=1}^{n} \frac{\lambda_{i}}{a_{i}}\right)$.
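A Monte Carlo illustration (not a proof) of this closure property: compare the empirical survival function of $\min_i a_i X_i$ with $e^{-t \sum_i \lambda_i / a_i}$. The rates $\lambda_i$, the scales $a_i$, and the evaluation point $t$ are arbitrary choices.

```python
# Empirical survival of min_i a_i X_i versus exp(-t * sum(lam_i / a_i)).
import random
from math import exp

random.seed(3)
lam, a = [1.0, 2.0, 0.5], [2.0, 1.0, 4.0]  # arbitrary rates and scales
rate = sum(l / s for l, s in zip(lam, a))   # rate of the minimum
n, t = 200_000, 0.4
hits = sum(1 for _ in range(n)
           if min(s * random.expovariate(l) for l, s in zip(lam, a)) > t)
print(hits / n, exp(-rate * t))  # the two values should be close
```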
Exercise 3.59 - Distribution of the minimum of Pareto distributed r.v.
The Pareto distribution, named after the Italian economist Vilfredo Pareto, was originally used to model the wealth of individuals, $X .{ }^{8}$
We say that $X \sim \operatorname{Pareto}(b, \alpha)$ if
$$
f_{X}(x)=\frac{\alpha b^{\alpha}}{x^{\alpha+1}}, x \geq b
$$
where $b>0$ is the minimum possible value of $X$ (it also represents the scale parameter) and $\alpha>0$ is called the Pareto index (or the shape parameter).
Consider $n$ individuals with wealths $X_{i} \stackrel{\text{i.i.d.}}{\sim} X, i=1, \ldots, n$. Identify the survival function of the minimal wealth of these $n$ individuals and comment on the result.
Proposition 3.60 - A few distribution families closed under the minimum operation
| R.v. | Minimum |
| :--- | :--- |
| $X_{i} \sim_{\text {indep }} \operatorname{Geometric}\left(p_{i}\right), i=1, \ldots, n$ | $\min _{i=1, \ldots, n} X_{i} \sim \operatorname{Geometric}\left(1-\prod_{i=1}^{n}\left(1-p_{i}\right)\right)$ |
| $X_{i} \sim_{\text {indep }} \operatorname{Exponential}\left(\lambda_{i}\right), a_{i}>0, i=1, \ldots, n$ | $\min _{i=1, \ldots, n} a_{i} X_{i} \sim \operatorname{Exponential}\left(\sum_{i=1}^{n} \frac{\lambda_{i}}{a_{i}}\right)$ |
| $X_{i} \sim_{\text {indep }} \operatorname{Pareto}\left(b, \alpha_{i}\right), i=1, \ldots, n, a>0$ | $\min _{i=1, \ldots, n} a X_{i} \sim \operatorname{Pareto}\left(a b, \sum_{i=1}^{n} \alpha_{i}\right)$ |
${ }^{7}$ The fact that two random variables $X$ and $Y$ both have a normal distribution does not imply that the pair $(X, Y)$ has a joint normal distribution. A simple example is one in which $Y=X$ if $|X|>1$ and $Y=-X$ if $|X|<1$. This is also true for more than two random variables. (For more details see http://en.wikipedia.org/wiki/Multivariate_normal_distribution).
${ }^{8}$ The Pareto distribution seemed to show rather well the way that a larger portion of the wealth of any society is owned by a smaller percentage of the people in that society (http://en.wikipedia.org/wiki/Pareto_distribution).

Exercise 3.61 - A few distribution families closed under the minimum operation
Prove the first and the third results of Proposition 3.60.
Can we generalize the closure property of the family of Pareto distributions? (Hint: $b$ is a scale parameter.)
### Order statistics
Algebraic operations on independent r.v., such as the minimum, the maximum and order statistics, are now further discussed because they play a major role in applied areas such as reliability.
Definition 3.62 - System reliability function (Barlow and Proschan, 1975, p. 82) The system reliability function for the interval $[0, t]$ is the probability that the system functions successfully throughout the interval $[0, t]$.
If $T$ represents the system lifetime then the system reliability function is the survival function of $T$,
$$
S_{T}(t)=P(\{T>t\})=1-F_{T}(t) .
$$
If the system has $n$ components with INDEPENDENT lifetimes $X_{1}, \ldots, X_{n}$, with survival functions $S_{X_{1}}(t), \ldots, S_{X_{n}}(t)$, then the system reliability function is a function of those $n$ reliability functions, i.e.,
$$
S_{T}(t)=h\left[S_{X_{1}}(t), \ldots, S_{X_{n}}(t)\right] .
$$
If they are not independent then $S_{T}(t)$ depends on more than the component marginal distributions at time $t$.
## Definition 3.63 - Order statistics
Given any r.v., $X_{1}, X_{2}, \ldots, X_{n}$,
- the 1st. order statistic is the minimum of $X_{1}, \ldots, X_{n}, X_{(1)}=\min _{i=1, \ldots, n} X_{i}$,
- the $n$ th. order statistic is the maximum of $X_{1}, \ldots, X_{n}, X_{(n)}=\max _{i=1, \ldots, n} X_{i}$, and
- the $i$ th. order statistic corresponds to the $i$ th.-smallest r.v. of $X_{1}, \ldots, X_{n}, X_{(i)}$, $i=1, \ldots, n$.
Needless to say that the order statistics $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ are also r.v., defined by sorting $X_{1}, X_{2}, \ldots, X_{n}$ in increasing order. Thus, $X_{(1)} \leq X_{(2)} \leq \ldots \leq X_{(n)}$.
## Motivation 3.64 - Importance of order statistics in reliabilty
A system lifetime $T$ can be expressed as a function of order statistics of the components lifetimes, $X_{1}, \ldots, X_{n}$.
If we assume that $X_{i} \stackrel{\text{i.i.d.}}{\sim} X, i=1, \ldots, n$, then the system reliability function $S_{T}(t)=P(\{T>t\})$ can be easily written in terms of the survival function (or reliability function) of $X, S_{X}(t)=P(\{X>t\})$, for some of the most usual reliability structures.
## Example 3.65 - Reliability function of a series system
A series system functions if all its components function. Therefore the system lifetime is given by
$$
T=\min \left\{X_{1}, \ldots, X_{n}\right\}=X_{(1)}
$$
If $X_{i} \stackrel{\text { i.i.d. }}{\sim} X, i=1, \ldots, n$, then the system reliability function is defined as
$$
\begin{aligned}
S_{T}(t) & =P\left(\bigcap_{i=1}^{n}\left\{X_{i}>t\right\}\right) \\
& =\left[S_{X}(t)\right]^{n}
\end{aligned}
$$
where $S_{X}(t)=P(\{X>t\})$.
## Exercise 3.66 - Reliability function of a series system
A series system has two components with i.i.d. lifetimes with common failure rate function given by $\lambda_{X}(t)=\frac{f_{X}(t)}{S_{X}(t)}=0.5 t^{-0.5}, t \geq 0$, i.e., $S_{X}(t)=\exp \left[-\int_{0}^{t} \lambda_{X}(s) d s\right]$. (Prove this result!).
Derive the system reliability function.
## Example 3.67 - Reliability function of a parallel system
A parallel system functions if at least one of its components functions. Therefore the system lifetime is given by
$$
T=\max \left\{X_{1}, \ldots, X_{n}\right\}=X_{(n)} .
$$
If $X_{i} \stackrel{\text{i.i.d.}}{\sim} X, i=1, \ldots, n$, then the system reliability function equals
$$
\begin{aligned}
S_{T}(t) & =1-F_{T}(t) \\
& =1-P\left(\bigcap_{i=1}^{n}\left\{X_{i} \leq t\right\}\right) \\
& =1-\left[1-S_{X}(t)\right]^{n},
\end{aligned}
$$
where $S_{X}(t)=P(\{X>t\})$.
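The two reliability formulas of Examples 3.65 and 3.67 in code, for $n$ i.i.d. components and a generic survival value $S_X(t)$; the exponential survival function and the values of $n$ and $t$ below are illustrative assumptions.

```python
# Series and parallel reliability from Examples 3.65 and 3.67.
from math import exp

def series_reliability(s_x, n):
    return s_x ** n                # S_T(t) = [S_X(t)]^n

def parallel_reliability(s_x, n):
    return 1 - (1 - s_x) ** n      # S_T(t) = 1 - [1 - S_X(t)]^n

s = exp(-0.2)  # illustrative S_X(t): X ~ Exponential(1) evaluated at t = 0.2
print(series_reliability(s, 3), parallel_reliability(s, 3))
```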
## Exercise 3.68 - Reliability function of a parallel system
An obsolete electronic system has 6 valves set in parallel. Assume that the components lifetime (in years) are i.i.d. r.v. with common p.d.f. $f_{X}(t)=50 t e^{-25 t^{2}}, t>0$.
Obtain the system reliability for 2 months.

Proposition 3.69 - Joint p.d.f. of the order statistics and more (Murteira, 1980, pp. 57, 55, 54)
Let $X_{1}, \ldots, X_{n}$ be absolutely continuous r.v. such that $X_{i} \stackrel{\text { i.i.d. }}{\sim} X, i=1, \ldots, n$. Then:
$$
f_{X_{(1)}, \ldots, X_{(n)}}\left(x_{1}, \ldots, x_{n}\right)=n ! \times \prod_{i=1}^{n} f_{X}\left(x_{i}\right)
$$
for $x_{1} \leq \ldots \leq x_{n}$
$$
\begin{aligned}
F_{X_{(i)}}(x) & =\sum_{j=i}^{n}\left(\begin{array}{l}
n \\
j
\end{array}\right) \times\left[F_{X}(x)\right]^{j} \times\left[1-F_{X}(x)\right]^{n-j} \\
& =1-F_{\text {Binomial }\left(n, F_{X}(x)\right)}(i-1),
\end{aligned}
$$
for $i=1, \ldots, n$;
$$
f_{X_{(i)}}(x)=\frac{n !}{(i-1) !(n-i) !} \times\left[F_{X}(x)\right]^{i-1} \times\left[1-F_{X}(x)\right]^{n-i} \times f_{X}(x),
$$
for $i=1, \ldots, n$;
$$
\begin{aligned}
f_{X_{(i)}, X_{(j)}}\left(x_{i}, x_{j}\right)= & \frac{n !}{(i-1) !(j-i-1) !(n-j) !} \\
& \times\left[F_{X}\left(x_{i}\right)\right]^{i-1} \times\left[F_{X}\left(x_{j}\right)-F_{X}\left(x_{i}\right)\right]^{j-i-1} \times\left[1-F_{X}\left(x_{j}\right)\right]^{n-j} \\
& \times f_{X}\left(x_{i}\right) \times f_{X}\left(x_{j}\right)
\end{aligned}
$$
for $x_{i}<x_{j}$, and $1 \leq i<j \leq n$.
Exercise 3.70 - Joint p.d.f. of the order statistics and more Prove Proposition 3.69 (http://en.wikipedia.org/wiki/Order_statistic).
## Example 3.71 - Reliability function of a k-out-of-n system
A $k$-out-of-$n$ system functions if at least $k$ out of its $n$ components function. A series system corresponds to an $n$-out-of-$n$ system, whereas a parallel system corresponds to a 1-out-of-$n$ system. The lifetime of a $k$-out-of-$n$ system is also associated with an order statistic:
$$
T=X_{(n-k+1)}
$$
If $X_{i} \stackrel{\text{i.i.d.}}{\sim} X, i=1, \ldots, n$, then the system reliability function can also be derived by using the auxiliary r.v.
$$
Z_{t}=\text { number of } X_{i} \text {'s }>t \sim \operatorname{Binomial}\left(n, S_{X}(t)\right)
$$
In fact,
$$
\begin{aligned}
S_{T}(t) & =P\left(Z_{t} \geq k\right) \\
& =1-P\left(Z_{t} \leq k-1\right) \\
& =1-F_{\text {Binomial }\left(n, S_{X}(t)\right)}(k-1) \\
& =P\left(n-Z_{t} \leq n-k\right) \\
& =F_{\text {Binomial }\left(n, F_{X}(t)\right)}(n-k) .
\end{aligned}
$$
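A short sketch of this computation: $S_T(t)=P(Z_t \geq k)$ with $Z_t \sim \operatorname{Binomial}(n, S_X(t))$. The values below (a 2-out-of-5 system with component survival 0.9) are illustrative and deliberately different from those of Exercise 3.72.

```python
# k-out-of-n reliability via the Binomial survival function (Example 3.71).
from math import comb

def k_out_of_n_reliability(n, k, s_x):
    return sum(comb(n, j) * s_x ** j * (1 - s_x) ** (n - j) for j in range(k, n + 1))

print(k_out_of_n_reliability(5, 2, 0.9))  # 2-out-of-5 system with S_X(t) = 0.9
```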
## Exercise 3.72 - Reliability function of a k-out-of-n system
Admit a machine has 4 engines and it only functions if at least 3 of those engines are working. Moreover, suppose the lifetimes of the engines (in thousand hours) are i.i.d. r.v. with Exponential distribution with scale parameter $\lambda^{-1}=2$.
Obtain the machine reliability for a period of $1000 \mathrm{~h}$.
### Constructing independent r.v.
The following theorem is similar to Proposition 2.133 and guarantees that we can also construct independent r.v. with prescribed d.f.
Theorem 3.73 - Construction of a finite collection of independent r.v. with prescribed d.f. (Karr, 1993, p. 79)
Let $F_{1}, \ldots, F_{n}$ be d.f. on $\mathbb{R}$. Then there is a probability space $(\Omega, \mathcal{F}, P)$ and r.v. $X_{1}, \ldots, X_{n}$ defined on it such that $X_{1}, \ldots, X_{n}$ are independent r.v. and $F_{X_{i}}=F_{i}$ for each $i$.
Exercise 3.74 - Construction of a finite collection of independent r.v. with prescribed d.f.
Prove Theorem 3.73 (Karr, 1993, p. 79).
Theorem 3.75 - Construction of a sequence of independent r.v. with prescribed d.f. (Karr, 1993, p. 79)

Let $F_{1}, F_{2}, \ldots$ be d.f. on $\mathbb{R}$. Then there is a probability space $(\Omega, \mathcal{F}, P)$ and r.v. $X_{1}, X_{2}, \ldots$ defined on it such that $X_{1}, X_{2}, \ldots$ are independent r.v. and $F_{X_{i}}=F_{i}$ for each $i$.
Exercise 3.76 - Construction of a sequence of independent r.v. with prescribed d.f.

Prove Theorem 3.75 (Karr, 1993, pp. 79-80).
### Bernoulli process
Motivation 3.77 - Bernoulli (counting) process (Karr, 1993, p. 88)
Counting successes in repeated, independent trials, each of which has one of two possible outcomes (success ${ }^{9}$ and failure).
Definition 3.78 - Bernoulli process (Karr, 1993, p. 88)
A Bernoulli process with parameter $p$ is a sequence $\left\{X_{i}, i \in \mathbb{N}\right\}$ of i.i.d. r.v. with Bernoulli distribution with parameter $p=P($ success $)$.
Definition 3.79 - Important r.v. in a Bernoulli process (Karr, 1993, pp. 88-89) In isolation a Bernoulli process is neither deep nor interesting. However, we can identify three associated and very important r.v.:
- $S_{n}=\sum_{i=1}^{n} X_{i}$, the number of successes in the first $n$ trials $(n \in \mathbb{N})$;
- $T_{k}=\min \left\{n: S_{n}=k\right\}$, the time (trial number) at which the $k$ th. success occurs $(k \in \mathbb{N})$, that is, the number of trials needed to get $k$ successes;
- $U_{k}=T_{k}-T_{k-1}$, the time (number of trials) between the $k$ th. and $(k-1)$ th. successes $\left(k \in \mathbb{N}, T_{0}=0, U_{1}=T_{1}\right)$.
Definition 3.80 - Bernoulli counting process (Karr, 1993, p. 88)
The sequence $\left\{S_{n}, n \in \mathbb{N}\right\}$ is usually termed the Bernoulli counting process (or success counting process).
## Exercise 3.81 - Bernoulli counting process
Simulate a Bernoulli process with parameter $p=\frac{1}{2}$ and consider $n=100$ trials. Plot the realizations of both the Bernoulli process and the Bernoulli counting process.
Definition 3.82 - Bernoulli success time process (Karr, 1993, p. 88)
The sequence $\left\{T_{k}, k \in \mathbb{N}\right\}$ is usually called the Bernoulli success time process.
${ }^{9}$ Or arrival.
Proposition 3.83 - Important distributions in a Bernoulli process (Karr, 1993, pp. 89-90)
In a Bernoulli process with parameter $p(p \in[0,1])$ we have:
- $S_{n} \sim \operatorname{Binomial}(n, p), n \in \mathbb{N}$
- $T_{k} \sim \operatorname{NegativeBinomial}(k, p), k \in \mathbb{N}$;
- $U_{k} \stackrel{i . i . d .}{\sim} \operatorname{Geometric}(p) \stackrel{d}{=} \operatorname{NegativeBinomial}(1, p), k \in \mathbb{N}$.
## Exercise 3.84 - Bernoulli counting process
(a) Prove that $T_{k} \sim \operatorname{NegativeBinomial}(k, p)$ and $U_{k} \stackrel{\text { i.i.d. }}{\sim} \operatorname{Geometric}(p)$, for $k \in \mathbb{N}$.
(b) Consider a Bernoulli process with parameter $p=1 / 2$ and obtain the probability of having 57 successes between time 10 and 100 .
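For part (b), one natural reading is that, by the stationarity and independence of the increments, the number of successes strictly after trial 10 and up to trial 100 is $S_{100}-S_{10} \sim \operatorname{Binomial}(90, 1/2)$; a quick check in Python:

```python
from math import comb

# S_100 - S_10 ~ Binomial(90, 1/2), by stationarity and independence of increments
n, k, p = 90, 57, 1 / 2
prob = comb(n, k) * p**k * (1 - p) ** (n - k)
print(prob)   # approximately 0.0034
```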
Exercise 3.85 - Relating the Bernoulli counting process and random walk (Karr, 1993, p. 97, Exercise 3.21)
Let $S_{n}$ be a Bernoulli (counting) process with $p=\frac{1}{2}$.
Prove that the process $Z_{n}=2 S_{n}-n$ is a symmetric random walk.
Proposition 3.86 - Properties of the Bernoulli counting process (Karr, 1993, p. 90)
The Bernoulli counting process $\left\{S_{n}, n \in \mathbb{N}\right\}$ has:
- independent increments - i.e., for $0<n_{1}<\ldots<n_{k}$, the r.v. $S_{n_{1}}, S_{n_{2}}-S_{n_{1}}$, $S_{n_{3}}-S_{n_{2}}, \ldots, S_{n_{k}}-S_{n_{k-1}}$ are independent;
- stationary increments - that is, for fixed $j$, the distribution of $S_{k+j}-S_{k}$ is the same for all $k \in \mathbb{N}$.
Exercise 3.87 - Properties of the Bernoulli counting process
Prove Proposition 3.86 (Karr, 1993, p. 90).
Remark 3.88 - Bernoulli counting process (web.mit.edu/6.262/www/lectures/6.262.Lec1.pdf)
Some application areas for discrete stochastic processes such as the Bernoulli counting process (and the Poisson process, studied in the next section) are:
- Operations Research: queueing in any area, failures in manufacturing systems, finance, risk modelling, network models;
- Biology and Medicine: epidemiology, genetics and DNA studies, cell modelling, bioinformatics, medical screening, neurophysiology;
- Computer Systems: communication networks, intelligent control systems, data compression, detection of signals, job flow in computer systems;
- Physics: statistical mechanics.
## Exercise 3.89 - Bernoulli process modelling of sexual HIV transmission
(Pinkerton and Holtgrave (1998, pp. 13-14)) In the Bernoulli-process model of sexual HIV transmission, each act of sexual intercourse is treated as an independent stochastic trial associated with a probability $\alpha$ of HIV transmission. $\alpha$ is also known as the infectivity of HIV, and a number of factors are believed to influence $\alpha .{ }^{10}$
(a) Prove that the expression of the probability of HIV transmission in $n$ multiple contacts with the same infected partner is $1-(1-\alpha)^{n}$.
(b) Assume now that the consistent use of condoms reduces the infectivity from $\alpha$ to $\alpha^{\prime}=(1-0.9) \times \alpha .{ }^{11}$ Derive the relative reduction in the probability defined in (a) due to the consistent use of condoms. Evaluate this reduction when $\alpha=0.01$ and $n=10$.
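A numerical evaluation of (a) and (b), as a sketch in Python; only the values given in the exercise are used:

```python
alpha, n = 0.01, 10
alpha_c = (1 - 0.9) * alpha                 # reduced infectivity alpha'

p_no_condom = 1 - (1 - alpha) ** n          # part (a): n contacts, same infected partner
p_condom = 1 - (1 - alpha_c) ** n
relative_reduction = (p_no_condom - p_condom) / p_no_condom
print(p_no_condom, p_condom, relative_reduction)   # approx. 0.0956, 0.0100, 0.896
```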
## Definition 3.90 - Independent Bernoulli processes
Two Bernoulli counting processes $\left\{S_{n}^{(1)}, n \in \mathbb{N}\right\}$ and $\left\{S_{n}^{(2)}, n \in \mathbb{N}\right\}$ are independent if for every positive integer $k$ and all times $n_{1}, \ldots, n_{k}$, we have that the random vector $\left(S_{n_{1}}^{(1)}, \ldots, S_{n_{k}}^{(1)}\right)$ associated with the first process is independent of $\left(S_{n_{1}}^{(2)}, \ldots, S_{n_{k}}^{(2)}\right)$ associated with the second process.
${ }^{10}$ Such as the type of sex act engaged, sex role, etc.
${ }^{11} \alpha^{\prime}$ is termed reduced infectivity; 0.9 represents a conservative estimate of condom effectiveness.
## Proposition 3.91 - Merging independent Bernoulli processes
Let $\left\{S_{n}^{(1)}, n \in \mathbb{N}\right\}$ and $\left\{S_{n}^{(2)}, n \in \mathbb{N}\right\}$ be two independent Bernoulli counting processes with parameters $\alpha$ and $\beta$, respectively. Then the merged process $\left\{S_{n}^{(1)} \oplus S_{n}^{(2)}, n \in \mathbb{N}\right\}$ is a Bernoulli counting process with parameter $\alpha+\beta-\alpha \beta$.
Exercise 3.92 - Merging independent Bernoulli processes
Prove Proposition 3.91.
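A Monte Carlo sanity check of Proposition 3.91 (a sketch assuming numpy; here we interpret $\oplus$ as registering a success on a trial whenever at least one of the two processes does, and the parameter values 0.3 and 0.2 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, reps = 0.3, 0.2, 200_000

# one trial of each process, replicated: the merged trial succeeds if either one does
x1 = rng.binomial(1, alpha, size=reps)
x2 = rng.binomial(1, beta, size=reps)
merged = np.maximum(x1, x2)

print(merged.mean(), alpha + beta - alpha * beta)   # both approximately 0.44
```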
Proposition 3.93 - Splitting a Bernoulli process (or sampling a Bernoulli process)
Let $\left\{S_{n}, n \in \mathbb{N}\right\}$ be a Bernoulli counting process with parameter $\alpha$. Splitting the original Bernoulli counting process based on a selection probability $p$ yields two Bernoulli counting processes with parameters $\alpha p$ and $\alpha(1-p)$.
Exercise 3.94 - Splitting a Bernoulli process
Prove Proposition 3.93.
Are the two resulting processes independent? ${ }^{12}$
${ }^{12} \mathrm{NO}$! If we merge the two processes resulting from the split and assume they are independent, we get the parameter $\alpha p+\alpha(1-p)-\alpha p \times \alpha(1-p)$, which is different from $\alpha$.
### Poisson process
In what follows we use the notation of Ross (1989, Chapter 5) which is slightly different from the one of Karr (1993, Chapter 3).
Motivation 3.95 - Poisson process (Karr, 1993, p. 91)
Is there a continuous analogue of the Bernoulli process? Yes!
The Poisson process, named after the French mathematician Siméon-Denis Poisson (1781-1840), is the stochastic process in which events occur continuously and independently of one another. Examples that are well-modeled as Poisson processes include the radioactive decay of atoms, telephone calls arriving at a switchboard, and page view requests to a website. ${ }^{13}$
Definition 3.96 - Counting process (in continuous time) (Ross, 1989, p. 209) A stochastic process $\{N(t), t \geq 0\}$ is said to be a counting process if $N(t)$ represents the total number of events (e.g. arrivals) that have occurred up to time $t$. From this definition we can conclude that a counting process $\{N(t), t \geq 0\}$ must satisfy:
- $N(t) \in \mathbb{N}_{0}, \forall t \geq 0$
- $N(s) \leq N(t), \forall 0 \leq s<t$
- $N(t)-N(s)$ corresponds to the number of events that have occurred in the interval $(s, t], \forall 0 \leq s<t$.
Definition 3.97 - Counting process (in continuous time) with independent increments (Ross, 1989, p. 209)
The counting process $\{N(t), t \geq 0\}$ is said to have independent increments if the number of events that occur in disjoint intervals are independent r.v., i.e.,
- for $0<t_{1}<\ldots<t_{n}, N\left(t_{1}\right), N\left(t_{2}\right)-N\left(t_{1}\right), N\left(t_{3}\right)-N\left(t_{2}\right), \ldots, N\left(t_{n}\right)-N\left(t_{n-1}\right)$ are independent r.v.
${ }^{13}$ For more examples check http://en.wikipedia.org/wiki/Poisson_process.
Definition 3.98 - Counting process (in continuous time) with stationary increments (Ross, 1989, p. 210)
The counting process $\{N(t), t \geq 0\}$ is said to have stationary increments if the distribution of the number of events that occur in any interval of time depends only on the length of the interval, ${ }^{14}$ that is,
- $N\left(t_{2}+s\right)-N\left(t_{1}+s\right) \stackrel{d}{=} N\left(t_{2}\right)-N\left(t_{1}\right), \forall s>0,0 \leq t_{1}<t_{2}$.
Definition 3.99 - Poisson process (Karr, 1993, p. 91)
A counting process $\{N(t), t \geq 0\}$ is said to be a Poisson process with rate $\lambda(\lambda>0)$ if:
- $\{N(t), t \geq 0\}$ has independent and stationary increments;
- $N(t) \sim \operatorname{Poisson}(\lambda t)$.
Remark 3.100 - Poisson process (Karr, 1993, p. 91)
Actually, $N(t) \sim \operatorname{Poisson}(\lambda t)$ follows from the fact that $\{N(t), t \geq 0\}$ has independent and stationary increments, and is thus redundant in Definition 3.99.
Definition 3.101 - The definition of a Poisson process revisited (Ross, 1989, p. 212)
The counting process $\{N(t), t \geq 0\}$ is said to be a Poisson process with rate $\lambda$, if
- $N(0)=0$
- $\{N(t), t \geq 0\}$ has independent and stationary increments;
- $P(\{N(h)=1\})=\lambda h+o(h) ;{ }^{15}$
- $P(\{N(h) \geq 2\})=o(h)$.
## Exercise 3.102 - The definition of a Poisson process revisited
Prove that Definitions 3.99 and 3.101 are equivalent (Ross, 1989, pp. 212-214).
${ }^{14}$ The distributions do not depend on the origin of the time interval; they only depend on the length of the interval.
${ }^{15}$ The function $f$ is said to be $o(h)$ if $\lim _{h \rightarrow 0} \frac{f(h)}{h}=0$ (Ross, 1989, p. 211).
The function $f(x)=x^{2}$ is $o(h)$ since $\lim _{h \rightarrow 0} \frac{f(h)}{h}=\lim _{h \rightarrow 0} h=0$.
The function $f(x)=x$ is not $o(h)$ since $\lim _{h \rightarrow 0} \frac{f(h)}{h}=\lim _{h \rightarrow 0} 1=1 \neq 0$.
Proposition 3.103 - Joint p.f. of $N\left(t_{1}\right), \ldots, N\left(t_{n}\right)$ in a Poisson process (Karr, 1993, p. 91)
For $0<t_{1}<\ldots<t_{n}$ and $0 \leq k_{1} \leq \ldots \leq k_{n}$ (with the conventions $t_{0}=0$ and $k_{0}=0$),
$$
P\left(\left\{N\left(t_{1}\right)=k_{1}, \ldots, N\left(t_{n}\right)=k_{n}\right\}\right)=\prod_{j=1}^{n} \frac{e^{-\lambda\left(t_{j}-t_{j-1}\right)}\left[\lambda\left(t_{j}-t_{j-1}\right)\right]^{k_{j}-k_{j-1}}}{\left(k_{j}-k_{j-1}\right) !} .
$$
Exercise 3.104 - Joint p.f. of $N\left(t_{1}\right), \ldots, N\left(t_{n}\right)$ in a Poisson process Prove Proposition 3.103 (Karr, 1993, p. 92) by taking advantage of the fact that a Poisson process has independent increments.
Exercise 3.105 - Joint p.f. of $N\left(t_{1}\right), \ldots, N\left(t_{n}\right)$ in a Poisson process ("Stochastic Processes" - Test of 2002-11-09)
A machine produces electronic components according to a Poisson process with rate equal to 10 components per hour. Let $N(t)$ be the number of produced components up to time $t$.
Evaluate the probability of producing at least 8 components in the first hour given that exactly 20 components have been produced in the first two hours.
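One way to tackle this numerically: conditionally on $\{N(2)=20\}$, the 20 production times are i.i.d. $\operatorname{Uniform}(0,2)$ (cf. Proposition 3.115 below), so $N(1) \mid\{N(2)=20\} \sim \operatorname{Binomial}(20, 1/2)$. A sketch in Python:

```python
from math import comb

# N(1) | {N(2) = 20} ~ Binomial(20, 1/2)
n, p = 20, 1 / 2
prob = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(8, n + 1))
print(prob)   # P(N(1) >= 8 | N(2) = 20), approximately 0.868
```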
Definition 3.106 - Important r.v. in a Poisson process (Karr, 1993, pp. 88-89) Let $\{N(t), t \geq 0\}$ be a Poisson process with rate $\lambda$. Then:
- $S_{n}=\inf \{t: N(t)=n\}$ represents the time of the occurrence of the $n$ th. event (e.g. arrival), $n \in \mathbb{N}$
- $X_{n}=S_{n}-S_{n-1}$ corresponds to the time between the $n$ th. and $(n-1)$ th. events (e.g. interarrival time), $n \in \mathbb{N}$.
Proposition 3.107 - Important distributions in a Poisson process (Karr, 1993, pp. 92-93)
So far we know that $N(t) \sim \operatorname{Poisson}(\lambda t), t>0$. We can also add that:
- $S_{n} \sim \operatorname{Erlang}(n, \lambda), n \in \mathbb{N}$;
- $X_{n} \stackrel{\text { i.i.d. }}{\sim} \operatorname{Exponential}(\lambda), n \in \mathbb{N}$.
Remark 3.108 - Relating $N(t)$ and $S_{n}$ in a Poisson process We ought to note that:
$$
\begin{aligned}
N(t) \geq n & \Leftrightarrow S_{n} \leq t \\
F_{S_{n}}(t) & =F_{\operatorname{Erlang}(n, \lambda)}(t) \\
& =P(\{N(t) \geq n\}) \\
& =\sum_{j=n}^{+\infty} \frac{e^{-\lambda t}(\lambda t)^{j}}{j !} \\
& =1-F_{\text {Poisson }(\lambda t)}(n-1), n \in \mathbb{N} .
\end{aligned}
$$
## Exercise 3.109 - Important distributions in a Poisson process
Prove Proposition 3.107 (Karr, 1993, pp. 92-93).
## Exercise 3.110 - Time between events in a Poisson process
Suppose that people immigrate into a territory at a Poisson rate $\lambda=1$ per day.
What is the probability that the elapsed time between the tenth and the eleventh arrival exceeds two days? (Ross, 1989, pp. 216-217).
## Exercise 3.111 - Poisson process
Simulate a Poisson process with rate $\lambda=1$ considering the interval $[0,100]$. Plot the realizations of the Poisson process.
The sample path of a Poisson process is a non-decreasing step function: it starts at $N(0)=0$ and increases by unit jumps at the arrival times $S_{1}, S_{2}, \ldots$
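A possible simulation, relying on the fact that the interarrival times are i.i.d. $\operatorname{Exponential}(\lambda)$ (a sketch assuming numpy and matplotlib; 200 interarrival draws comfortably cover $[0,100]$ when $\lambda=1$):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
lam, horizon = 1.0, 100.0

gaps = rng.exponential(scale=1 / lam, size=200)   # X_1, X_2, ... i.i.d. Exponential(lam)
arrivals = np.cumsum(gaps)                        # arrival times S_1, S_2, ...
arrivals = arrivals[arrivals <= horizon]

plt.step(np.concatenate(([0.0], arrivals)), np.arange(len(arrivals) + 1), where="post")
plt.xlabel("$t$")
plt.ylabel("$N(t)$")
plt.show()
```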
Motivation 3.112 - Conditional distribution of the first arrival time (Ross, 1989, p. 222)
Suppose we are told that exactly one event of a Poisson process has taken place by time $t$ (i.e. $N(t)=1$ ), and we are asked to determine the distribution of the time at which the event occurred $\left(S_{1}\right)$.
Proposition 3.113 - Conditional distribution of the first arrival time (Ross, 1989, p. 223)
Let $\{N(t), t \geq 0\}$ be a Poisson process with rate $\lambda>0$. Then
$$
S_{1} \mid\{N(t)=1\} \sim \operatorname{Uniform}(0, t) .
$$
Exercise 3.114 - Conditional distribution of the first arrival time Prove Proposition 3.113 (Ross, 1989, p. 223).
Proposition 3.115 - Conditional distribution of the arrival times (Ross, 1989, p. 224)
Let $\{N(t), t \geq 0\}$ be a Poisson process with rate $\lambda>0$. Then
$$
f_{S_{1}, \ldots, S_{n} \mid\{N(t)=n\}}\left(s_{1}, \ldots, s_{n}\right)=\frac{n !}{t^{n}},
$$
for $0<s_{1}<\ldots<s_{n}<t$ and $n \in \mathbb{N}$.
Remark 3.116 - Conditional distribution of the arrival times (Ross, 1989, p. 224)
Proposition 3.115 is usually paraphrased as stating that, under the condition that $n$ events have occurred in $(0, t)$, the times $S_{1}, \ldots, S_{n}$ at which events occur, considered as unordered r.v., are i.i.d. and $\operatorname{Uniform}(0, t) .{ }^{16}$
Exercise 3.117 - Conditional distribution of the arrival times Prove Proposition 3.115 (Ross, 1989, p. 224).
${ }^{16}$ I.e., they behave as the order statistics $Y_{(1)}, \ldots, Y_{(n)}$, associated to $Y_{i} \stackrel{\text { i.i.d. }}{\sim}$ Uniform $(0, t)$.
## Proposition 3.118 - Merging independent Poisson processes
Let $\left\{N_{1}(t), t \geq 0\right\}$ and $\left\{N_{2}(t), t \geq 0\right\}$ be two independent Poisson processes with rates $\lambda_{1}$ and $\lambda_{2}$, respectively. Then the merged process $\left\{N_{1}(t)+N_{2}(t), t \geq 0\right\}$ is a Poisson process with rate $\lambda_{1}+\lambda_{2}$.
## Exercise 3.119 - Merging independent Poisson processes
Prove Proposition 3.118.
## Exercise 3.120 - Merging independent Poisson processes
Men and women enter a supermarket according to independent Poisson processes having respective rates two and four per minute.
(a) Starting at an arbitrary time, compute the probability that at least two men arrive before three women arrive (Ross, 1989, p. 242, Exercise 20).
(b) What is the probability that the number of arrivals (men and women) exceeds ten in the first 20 minutes?
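For part (b), merging gives arrivals at rate $2+4=6$ per minute, so the number of arrivals in 20 minutes is $\operatorname{Poisson}(120)$; a quick check in Python:

```python
from math import exp, factorial

mu = (2 + 4) * 20   # merged rate 6 per minute, over 20 minutes: Poisson(120)
p_exceeds_10 = 1 - sum(exp(-mu) * mu**k / factorial(k) for k in range(11))
print(p_exceeds_10)   # practically 1 (the complementary probability is of order 1e-38)
```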
Proposition 3.121 - Splitting a Poisson process (or sampling a Poisson process)
Let $\{N(t), t \geq 0\}$ be a Poisson process with rate $\lambda$. Splitting the original Poisson process based on a selection probability $p$ yields two INDEPENDENT Poisson processes with rates $\lambda p$ and $\lambda(1-p)$.
Moreover, we can add that $N_{1}(t) \mid\{N(t)=n\} \sim \operatorname{Binomial}(n, p)$ and $N_{2}(t) \mid\{N(t)=n\} \sim$ $\operatorname{Binomial}(n, 1-p)$.
## Exercise 3.122 - Splitting a Poisson process
Prove Proposition 3.121 (Ross, 1989, pp. 218-219).
Why are the two resulting processes independent?
## Exercise 3.123 - Splitting a Poisson process
If immigrants to area $\mathrm{A}$ arrive at a Poisson rate of ten per week, and if each immigrant is of English descent with probability $\frac{1}{12}$, then what is the probability that no people of English descent will immigrate to area A during the month of February? (Ross, 1989, p. 220.)
Exercise 3.124 - Splitting a Poisson process (Ross, 1989, p. 243, Exercise 23) Cars pass a point on the highway at a Poisson rate of one per minute. If five percent of the cars on the road are Dodges, then:
(a) What is the probability that at least one Dodge passes during an hour?
(b) If 50 cars have passed by an hour, what is the probability that five of them were Dodges?
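A numerical sketch in Python: by the splitting property, Dodges pass at the thinned rate $0.05 \times 1$ per minute, and, given 50 cars, the number of Dodges is $\operatorname{Binomial}(50, 0.05)$:

```python
from math import exp, comb

# (a) Dodges form a Poisson process with rate 0.05 per minute: Poisson(3) in one hour
p_a = 1 - exp(-0.05 * 60)
# (b) given 50 cars, the number of Dodges is Binomial(50, 0.05)
p_b = comb(50, 5) * 0.05**5 * 0.95**45
print(p_a, p_b)   # approximately 0.950 and 0.0658
```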
### Generalizations of the Poisson process
In this section we consider three generalizations of the Poisson process. The first of these is the non homogeneous Poisson process, which is obtained by allowing the arrival rate at time $t$ to be a function of $t$.
Definition 3.125 - Non homogeneous Poisson process (Ross, 1989, p. 234) The counting process $\{N(t), t \geq 0\}$ is said to be a non homogeneous Poisson process with intensity function $\lambda(t)(t \geq 0)$ if
- $N(0)=0$
- $\{N(t), t \geq 0\}$ has independent increments;
- $P(\{N(t+h)-N(t)=1\})=\lambda(t) \times h+o(h), t \geq 0$;
- $P(\{N(t+h)-N(t) \geq 2\})=o(h), t \geq 0$.
Moreover,
$$
N(t+s)-N(s) \sim \text { Poisson }\left(\int_{s}^{t+s} \lambda(z) d z\right)
$$
for $s \geq 0$ and $t>0$.
Exercise 3.126 - Non homogeneous Poisson process ("Stochastic Processes" test, 2003-01-14)
The number of arrivals to a shop is governed by a Poisson process with time dependent rate
$$
\lambda(t)= \begin{cases}4+2 t, & 0 \leq t \leq 4 \\ 24-3 t, & 4<t \leq 8\end{cases}
$$
(a) Obtain the expression of the expected value of the number of arrivals until $t$ $(0 \leq t \leq 8)$. Derive the probability of no arrivals in the interval $[3,5]$.
(b) Determine the expected value of the number of arrivals in the last 5 opening hours (interval $[3,8]$) given that 15 customers have arrived in the last 3 opening hours (interval $[5,8]$); a numerical sketch for this exercise follows Exercise 3.127 below.
Exercise 3.127 - The output process of an infinite server Poisson queue and the non homogeneous Poisson process
Prove that the output process of the $M / G / \infty$ queue - i.e., the number of customers who (by time $t$ ) have already left the infinite server queue with Poisson arrivals and general service d.f. $G$ - is a non homogeneous Poisson process with intensity function $\lambda G(t)$.
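As promised, a numerical sketch for part (a) of Exercise 3.126 (assuming SciPy is available; the function name lam is ours). Since $N(5)-N(3) \sim \operatorname{Poisson}\left(\int_{3}^{5} \lambda(z) d z\right)$:

```python
from math import exp
from scipy.integrate import quad

def lam(t):
    # intensity function from Exercise 3.126 (continuous at t = 4)
    return 4 + 2 * t if t <= 4 else 24 - 3 * t

m_8, _ = quad(lam, 0, 8)      # E[N(8)]: expected number of arrivals until closing time
m_35, _ = quad(lam, 3, 5)     # mean of N(5) - N(3)
print(m_8, m_35, exp(-m_35))  # 56.0, 21.5, and P(no arrivals in [3,5]) = e^{-21.5}
```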
Definition 3.128 - Compound Poisson process (Ross, 1989, p. 237)
A stochastic process $\{X(t), t \geq 0\}$ is said to be a compound Poisson process if it can be represented as
$$
X(t)=\sum_{i=1}^{N(t)} Y_{i},
$$
where
- $\{N(t), t \geq 0\}$ is a Poisson process with rate $\lambda(\lambda>0)$ and
- $Y_{i} \stackrel{i . i . d .}{\sim} Y$ and independent of $\{N(t), t \geq 0\}$.
Proposition 3.129 - Compound Poisson process (Ross, 1989, pp. 238-239) Let $\{X(t), t \geq 0\}$ be a compound Poisson process. Then
$$
\begin{aligned}
E[X(t)] & =\lambda t \times E[Y] \\
V[X(t)] & =\lambda t \times E\left[Y^{2}\right] .
\end{aligned}
$$
## Exercise 3.130 - Compound Poisson process
Prove Proposition 3.129 by noting that $E[X(t)]=E\{E[X(t) \mid N(t)]\}$ and $V[X(t)]=E\{V[X(t) \mid N(t)]\}+V\{E[X(t) \mid N(t)]\}$ (Ross, 1989, pp. 238-239).
Exercise 3.131 - Compound Poisson process (Ross, 1989, p. 239)
Suppose that families migrate to an area at a Poisson rate $\lambda=2$ per week. Assume that the number of people in each family is independent and takes values 1, 2, 3 and 4 with respective probabilities $\frac{1}{6}, \frac{1}{3}, \frac{1}{3}$ and $\frac{1}{6}$.
What is the expected value and variance of the number of individuals migrating to this area during a five-week period?
Definition 3.132 - Conditional Poisson process (Ross, 1983, pp. 49-50) Let:
- $\Lambda$ be a positive r.v. having d.f. $G$; and
- $\{N(t), t \geq 0\}$ be a counting process such that, given that $\{\Lambda=\lambda\},\{N(t), t \geq 0\}$ is a Poisson process with rate $\lambda$.
Then $\{N(t), t \geq 0\}$ is called a conditional Poisson process and
$$
P(\{N(t+s)-N(s)=n\})=\int_{0}^{+\infty} \frac{e^{-\lambda t}(\lambda t)^{n}}{n !} d G(\lambda) .
$$
Remark 3.133 - Conditional Poisson process (Ross, 1983, p. 50)
$\{N(t), t \geq 0\}$ is not a Poisson process. For instance, although it has stationary increments, it does not have independent increments.
## Exercise 3.134 - Conditional Poisson process
Suppose that, depending on factors not at present understood, the rate at which seismic shocks occur in a certain region over a given season is either $\lambda_{1}$ or $\lambda_{2}$. Suppose also that the rate equals $\lambda_{1}$ for $p \times 100 \%$ of the seasons and $\lambda_{2}$ in the remaining time.
A simple model would be to suppose that $\{N(t), t \geq 0\}$ is a conditional Poisson process such that $\Lambda$ is either $\lambda_{1}$ or $\lambda_{2}$ with respective probabilities $p$ and $1-p$.
Prove that the probability that it is a $\lambda_{1}$-season, given $n$ shocks in the first $t$ units of a season, equals
$$
\frac{p e^{-\lambda_{1} t}\left(\lambda_{1} t\right)^{n}}{p e^{-\lambda_{1} t}\left(\lambda_{1} t\right)^{n}+(1-p) e^{-\lambda_{2} t}\left(\lambda_{2} t\right)^{n}},
$$
by applying the Bayes' theorem (Ross, 1983, p. 50).
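A direct transcription of this posterior probability (a sketch in Python; the parameter values in the example call are hypothetical, not from the exercise):

```python
from math import exp

def p_lambda1_season(n, lam1, lam2, t, p):
    """Posterior probability of a lambda_1-season given n shocks in the first t units."""
    a = p * exp(-lam1 * t) * (lam1 * t) ** n
    b = (1 - p) * exp(-lam2 * t) * (lam2 * t) ** n
    return a / (a + b)

print(p_lambda1_season(n=5, lam1=1.0, lam2=4.0, t=2.0, p=0.5))   # approximately 0.283
```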
## References
- Barlow, R.E. and Proschan, F. (1965/1996). Mathematical Theory of Reliability. SIAM (Classics in Applied Mathematics).
(TA169.BAR.64915)
- Barlow, R.E. and Proschan, F. (1975). Reliability and Life Testing. Holt, Rinehart and Winston, Inc.
- Grimmett, G.R. and Stirzaker, D.R. (2001). Probability and Random Processes (3rd. edition). Oxford University Press. (QA274.12-.76.GRI.30385 and QA274.12.76.GRI.40695 refer to the library code of the 1st. and 2nd. editions from 1982 and 1992, respectively.)
- Karr, A.F. (1993). Probability. Springer-Verlag.
- Pinkerton, S.D. and Holtgrave, D.R. (1998). The Bernoulli-process model in HIV transmission: applications and implications. In Handbook of economic evaluation of HIV prevention programs, Holtgrave, D.R. (Ed.), pp. 13-32. Plenum Press, New York.
- Resnick, S.I. (1999). A Probability Path. Birkhäuser. (QA273.4-.67.RES.49925)
- Ross, S.M. (1983). Stochastic Processes. John Wiley \& Sons. (QA274.12.76.ROS.36921 and QA274.12-.76.ROS.37578)
- Ross, S.M. (1989). Introduction to Probability Models (fourth edition). Academic Press. (QA274.12-.76.ROS.43540 refers to the library code of the 5th. revised edition from 1993.)
## Chapter 4
## Expectation
One of the most fundamental concepts of probability theory and mathematical statistics is the expectation of a r.v. (Resnick, 1999, p. 117).
Motivation 4.1 - Expectation (Karr, 1993, p. 101)
The expectation represents the center of gravity of a r.v. and has a measure theory counterpart in integration theory.
Key computational formulas - not definitions of expectation - to obtain the expectation of
- a discrete r.v. $X$ with values in a countable set $\mathcal{C}$ and p.f. $P(\{X=x\})$ and
- the one of an absolutely continuous r.v. $Y$ with p.d.f. $f_{Y}(y)$
are
$$
\begin{aligned}
& E(X)=\sum_{x \in \mathcal{C}} x \times P(\{X=x\}) \\
& E(Y)=\int_{-\infty}^{+\infty} y \times f_{Y}(y) d y
\end{aligned}
$$
respectively.
When $X \geq 0$ it is permissible that $E(X)=+\infty$, but finiteness is mandatory when $X$ can take both positive and negative (or null) values.
Remark 4.2 - Desired properties of expectation (Karr, 1993, p. 101)
1. Constant preserved
If $X \equiv c$ then $E(X)=c$.
2. Monotonicity
If $X \leq Y^{1}$ then $E(X) \leq E(Y)$.
3. Linearity
For $a, b \in \mathbb{R}, E(a X+b Y)=a E(X)+b E(Y)$.
4. Continuity ${ }^{2}$
If $X_{n} \rightarrow X$ then $E\left(X_{n}\right) \rightarrow E(X)$.
5. Relation to the probability
For each event $A$, $E\left(\mathbf{1}_{A}\right)=P(A)$. ${ }^{3}$
Expectation is to r.v. as probability is to events so that properties of expectation extend those of probability.
### Definition and fundamental properties
Many integration results are proved by first showing that they hold true for simple r.v. and then extending the result to more general r.v. (Resnick, 1999, p. 117).
#### Simple r.v.
Let $(\Omega, \mathcal{F}, P)$ be a probability space and let us remind the reader that $X$ is said to be a simple r.v. if it assumes only finitely many values in which case
$$
X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}}
$$
where:
- $a_{1}, \ldots, a_{n}$ are real numbers not necessarily distinct;
- $\left\{A_{1}, \ldots, A_{n}\right\}$ constitutes a partition of $\Omega$;
- $\mathbf{1}_{A_{i}}$ is the indicator function of event $A_{i}, i=1, \ldots, n$.
${ }^{1}$ I.e. $X(\omega) \leq Y(\omega), \forall \omega \in \Omega$.
${ }^{2}$ Continuity is not valid without restriction.
${ }^{3}$ Recall that $\mathbf{1}_{A_{i}}(\omega)=1$, if $\omega \in A_{i}$, and $\mathbf{1}_{A_{i}}(\omega)=0$, otherwise.
Definition 4.3 - Expectation of a simple r.v. (Karr, 1993, p. 102) The expectation of the simple r.v. $X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}}$ is given by
$$
E(X)=\sum_{i=1}^{n} a_{i} \times P\left(A_{i}\right) .
$$
Remark 4.4 - Expectation of a simple r.v. (Resnick, 1999, p. 119; Karr, 1993, p. 102)
- Note that Definition 4.3 coincides with our knowledge of discrete probability from more elementary courses: the expectation is computed by taking a possible value, multiplying by the probability of the possible value and then summing over all possible values.
- $E(X)$ is well-defined in the sense that all representations of $X$ yield the same value for $E(X)$ : different representations of $X, X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}}$ and $X=\sum_{j=1}^{m} a_{j}^{\prime} \times \mathbf{1}_{A_{j}^{\prime}}$, lead to the same expected value $E(X)=\sum_{i=1}^{n} a_{i} \times P\left(A_{i}\right)=\sum_{j=1}^{m} a_{j}^{\prime} \times P\left(A_{j}^{\prime}\right)$.
- The expectation of an indicator function is indeed the probability of the associated event.
Proposition 4.5 - Properties of the set of simple r.v. (Resnick, 1999, p. 118)
Let $\mathcal{E}$ be the set of all simple r.v. defined on $(\Omega, \mathcal{F}, P)$. We have the following properties of $\mathcal{E}$.
1. $\mathcal{E}$ is a vector space, i.e.:
(a) if $X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}} \in \mathcal{E}$ and $\alpha \in \mathbb{R}$ then $\alpha X=\sum_{i=1}^{n} \alpha a_{i} \times \mathbf{1}_{A_{i}} \in \mathcal{E}$; and
(b) If $X=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}} \in \mathcal{E}$ and $Y=\sum_{j=1}^{m} b_{j} \times \mathbf{1}_{B_{j}} \in \mathcal{E}$ then
$$
X+Y=\sum_{i=1}^{n} \sum_{j=1}^{m}\left(a_{i}+b_{j}\right) \times \mathbf{1}_{A_{i} \cap B_{j}} \in \mathcal{E} .
$$
2. If $X, Y \in \mathcal{E}$ then $X Y \in \mathcal{E}$ since
$$
X Y=\sum_{i=1}^{n} \sum_{j=1}^{m}\left(a_{i} \times b_{j}\right) \times \mathbf{1}_{A_{i} \cap B_{j}}
$$
Proposition 4.6 - Expectation of a linear combination of simple r.v. (Karr, 1993, p. 103)
Let $X$ and $Y$ be two simple r.v. and $a, b \in \mathbb{R}$. Then $a X+b Y$ is also a simple r.v. and
$$
E(a X+b Y)=a E(X)+b E(Y)
$$
Exercise 4.7 - Expectation of a linear combination of simple r.v.
Prove Proposition 4.6 by capitalizing on Proposition 4.5 (Karr, 1993, p. 103).
Exercise 4.8 - Expectation of a sum of discrete r.v. in a distribution problem (Walrand, 2004, p. 51, Example 4.10.6)
Suppose you put $m$ balls randomly in $n$ boxes. Each box can hold an arbitrarily large number of balls.
Prove that the expected number of empty boxes is equal to $n \times\left(\frac{n-1}{n}\right)^{m}$.
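A Monte Carlo check of this formula (a sketch assuming numpy; the values of $n$, $m$ and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, reps = 10, 20, 100_000

boxes = rng.integers(0, n, size=(reps, m))    # ball j of replication r goes to boxes[r, j]
occupied = np.zeros((reps, n), dtype=bool)
occupied[np.arange(reps)[:, None], boxes] = True
empty = n - occupied.sum(axis=1)              # number of empty boxes per replication

print(empty.mean(), n * ((n - 1) / n) ** m)   # both approximately 1.216
```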
Exercise 4.9 - Expectation of a sum of discrete r.v. in a selection problem (Walrand, 2004, p. 52, Example 4.10.7)
A cereal company is running a promotion for which it is giving a toy in every box of cereal. There are $n$ different toys and each box is equally likely to contain any one of the $n$ toys.
Prove that the expected number of boxes of cereal you have to purchase to collect all $n$ toys is given by $n \times \sum_{m=1}^{n} \frac{1}{m}$.
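Again, a quick simulation of the coupon-collector expectation (a sketch assuming numpy; $n$ and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(11)
n, reps = 6, 20_000

def boxes_until_complete(n, rng):
    seen, count = set(), 0
    while len(seen) < n:                  # keep buying boxes until all n toys are seen
        seen.add(int(rng.integers(0, n)))
        count += 1
    return count

sims = [boxes_until_complete(n, rng) for _ in range(reps)]
print(np.mean(sims), n * sum(1 / m for m in range(1, n + 1)))   # both approximately 14.7
```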
Remark 4.10 - Monotonicity of expectation for simple r.v. (Karr, 1993, p. 103) The monotonicity of expectation for simple r.v. is a desired property which follows from
- linearity and
- positivity (or, better said, non negativity) - if $X \geq 0$ then $E(X) \geq 0$ -, a seemingly weaker property of expectation.
In fact, if $X \leq Y$, i.e. $Y-X \geq 0$, then
- $E(Y)-E(X)=E(Y-X) \geq 0$.
This argument is valid provided that $E(Y)-E(X)$ is not of the form $+\infty-\infty$. Proposition 4.11 - Monotonicity of expectation for simple r.v. (Karr, 1993, p. 103)
Let $X$ and $Y$ be two simple r.v. such that $X \leq Y$. Then $E(X) \leq E(Y)$.
Example 4.12 - On the (dis)continuity of expectation of simple r.v. (Karr, 1993, pp. 103-104)
Continuity of expectation fails even for simple r.v. Let $P$ be the uniform distribution on $[0,1]$ and
$$
X_{n}=n \times \mathbf{1}_{\left(0, \frac{1}{n}\right)} .
$$
Then $X_{n}(\omega) \rightarrow 0, \forall \omega \in \Omega$, but $E\left(X_{n}\right)=1$, for each $n$.
Thus, we need additional conditions to guarantee continuity of expectation.
#### Non negative r.v.
Before we proceed with the definition of the expectation of non negative r.v., ${ }^{4}$ we need to recall the measurability theorem. This theorem states that any non negative r.v. can be approximated by simple r.v., and it is the reason why integration results about non negative r.v. - such as the expectation and its properties - are often proven first for simple r.v.
Theorem 4.13 - Measurability theorem (Resnick, 1999, p. 91; Karr, 1993, p. 50) Suppose $X(\omega) \geq 0$, for all $\omega$. Then $X: \Omega \rightarrow \mathbb{R}$ is a Borel measurable function (i.e. a r.v.) iff there is an increasing sequence of simple and non negative r.v. $X_{1}, X_{2}, \ldots$ $\left(0 \leq X_{1} \leq X_{2} \leq \ldots\right)$ such that
$$
X_{n} \uparrow X
$$
($X_{n}(\omega) \uparrow X(\omega)$, for every $\omega$).
## Exercise 4.14 - Measurability theorem
Prove Theorem 4.13 by considering
$$
X_{n}=\sum_{k=1}^{n 2^{n}} \frac{k-1}{2^{n}} \mathbf{1}_{\left\{\frac{k-1}{2^{n}} \leq X<\frac{k}{2^{n}}\right\}}+n \times \mathbf{1}_{\{X \geq n\}},
$$
for each $n$ (Resnick, 1999, p. 118; Karr, 1993, p. 50).
${ }^{4}$ Karr (1993) and Resnick (1999) call these r.v. positive when they are actually non negative.
Motivation 4.15 - Expectation of a non negative r.v. (Karr, 1993, pp. 103-104) We now extend the definition of expectation to all non negative r.v. However, we have already seen that continuity of expectation fails even for simple r.v. and therefore we cannot define the expected value of a non negative r.v. simply as $E(X)=\lim _{n \rightarrow+\infty} E\left(X_{n}\right)$.
Unsurprisingly, if we apply the measurability theorem then the definition of expectation of a non negative r.v. virtually forces monotone continuity for increasing sequences of non negative r.v.:
- if $X_{1}, X_{2}, \ldots$ are simple and non negative r.v. and $X$ is a non negative r.v. such that $X_{n} \uparrow X$ (pointwise) then $E\left(X_{n}\right) \uparrow E(X)$.
It is convenient and useful to assume that these non negative r.v. can take values in the extended set of non negative real numbers, $\overline{\mathbb{R}}_{0}^{+}$.
Further on, we shall have to establish another restricted form of continuity: dominated continuity for integrable r.v. ${ }^{5}$
Definition 4.16 - Expectation of a non negative r.v. (Karr, 1993, p. 104) The expectation of a non negative r.v. $X$ is
$$
E(X)=\lim _{n \rightarrow+\infty} E\left(X_{n}\right) \leq+\infty
$$
where $X_{n}$ are simple and non negative r.v. such that $X_{n} \uparrow X$.
The expectation of $X$ over the event $A$ is $E(X ; A)=E\left(X \times \mathbf{1}_{A}\right)$.
Remark 4.17 - Expectation of a non negative r.v. (Karr, 1993, p. 104) The limit defining $E(X)$
- exists in the set of extended non negative real numbers $\overline{\mathbb{R}}_{0}^{+}$, and
- does not depend on the approximating sequence $\left\{X_{n}, n \in \mathbb{N}\right\}$, as stated in the next proposition.
Proposition 4.18 - Expectation of a non negative r.v. (Karr, 1993, p. 104) Let $\left\{X_{n}, n \in \mathbb{N}\right\}$ and $\left\{\tilde{X}_{m}, m \in \mathbb{N}\right\}$ be sequences of simple and non negative r.v. increasing to $X$. Then
$$
\lim _{n \rightarrow+\infty} E\left(X_{n}\right)=\lim _{m \rightarrow+\infty} E\left(\tilde{X}_{m}\right) .
$$
${ }^{5}$ We shall soon define integrable r.v.
Exercise 4.19 - Expectation of a non negative r.v.
Prove Proposition 4.18 (Karr, 1993, p. 104; Resnick, 1999, pp. 122-123).
We now list some basic properties of the expectation operator applied to non negative r.v.: for instance, linearity, monotonicity and monotone continuity/convergence. This last property describes how expectation and limits interact, and under which circumstances we are allowed to interchange them.
Proposition 4.20 - Expectation of a linear combination of non negative r.v. (Karr, 1993, p. 104; Resnick, 1999, p. 123)
Let $X$ and $Y$ be two non negative r.v. and $a, b \in \mathbb{R}^{+}$. Then
$$
E(a X+b Y)=a E(X)+b E(Y) .
$$
Exercise 4.21 - Expectation of a linear combination of non negative r.v.
Prove Proposition 4.20 by considering two sequences of simple and non negative r.v. $\left\{X_{n}, n \in \mathbb{N}\right\}$ and $\left\{Y_{n}, n \in \mathbb{N}\right\}$ such that $X_{n} \uparrow X$ and $Y_{n} \uparrow Y$ - and, thus, $\left(a X_{n}+b Y_{n}\right) \uparrow(a X+b Y)$ - (Karr, 1993, p. 104).
Corollary 4.22 - Monotonicity of expectation for non negative r.v. (Karr, 1993, p. 105)
Let $X$ and $Y$ be two non negative r.v. such that $X \leq Y$. Then $E(X) \leq E(Y)$.
Remark 4.23 - Monotonicity of expectation for non negative r.v. (Karr, 1993, p. 105)
Monotonicity of expectation follows, once again, from positivity and linearity.
Motivation 4.24 - Fatou's lemma (Karr, 1993, p. 105)
The next result plays a vital role in the definition of monotone continuity/convergence.
Theorem 4.25 - Fatou's lemma (Karr, 1993, p. 105; Resnick, 1999, p. 132)
Let $\left\{X_{n}, n \in \mathbb{N}\right\}$ be a sequence of non negative r.v. Then
$$
E\left(\lim \inf X_{n}\right) \leq \liminf E\left(X_{n}\right)
$$
Remark 4.26 - Fatou's lemma (Karr, 1993, p. 105)
The inequality in Fatou's lemma can be strict. For instance, in Example 4.12 we are dealing with $E\left(\lim \inf X_{n}\right)=0<\lim \inf E\left(X_{n}\right)=1$.
Exercise 4.27 - Fatou's lemma
Prove Theorem 4.25 (Karr, 1993, p. 105; Resnick, 1999, p. 132).
## Exercise 4.28 - Fatou's lemma and continuity of p.f.
Verify that Theorem 4.25 could be used in a part of the proof of the continuity of p.f. if we considered $X_{n}=\mathbf{1}_{A_{n}}$ (Karr, 1993, p. 106).
We now state another property of expectation of non negative r.v.: the monotone continuity/convergence of expectation.
Theorem 4.29 - Monotone convergence theorem (Karr, 1993, p. 106; Resnick, 1999, pp. 123-124)
Let $\left\{X_{n}, n \in \mathbb{N}\right\}$ be an increasing sequence of non negative r.v. and $X$ a non negative r.v. If
$$
X_{n} \uparrow X
$$
then
$$
E\left(X_{n}\right) \uparrow E(X)
$$
Remark 4.30 - Monotone convergence theorem (Karr, 1993, p. 106)
The sequence of simple and non negative r.v. from Example 4.12, $X_{n}=n \times \mathbf{1}_{\left(0, \frac{1}{n}\right)}$, does not violate the monotone convergence theorem because in that instance it is not true that $X_{n} \uparrow X$. ${ }^{6}$
## Exercise 4.31 - Monotone convergence theorem
Prove Theorem 4.29 (Karr, 1993, p. 106; Resnick, 1999, pp. 124-125, for a more sophisticated proof).
${ }^{6}$ Please note that the sequence is not even increasing: $n$ increases but the sequence of sets $\left(0, \frac{1}{n}\right)$ is decreasing.
Exercise 4.32 - Monotone convergence theorem and monotone continuity of p.f.
Verify that Theorem 4.29 could be used to prove the monotone continuity of p.f. if we considered $X_{n}=\mathbf{1}_{A_{n}}$ and $X=\mathbf{1}_{A}$, where $A_{n} \uparrow A$ (Karr, 1993, p. 106).
One of the implications of the monotone convergence theorem is the linearity of expectation for convergent series; this is what Resnick (1999, p. 131) calls the series version of the monotone convergence theorem. This result specifies circumstances under which we are allowed to interchange expectation and infinite summation.
Theorem 4.33 - Expectation of a linear convergent series of non negative r.v. (Karr, 1993, p. 106; Resnick, 1999, p. 131)
Let $\left\{Y_{k}, k \in \mathbb{N}\right\}$ be a collection of non negative r.v. such that $\sum_{k=1}^{+\infty} Y_{k}(\omega)<+\infty$, for every $\omega$. Then
$$
E\left(\sum_{k=1}^{+\infty} Y_{k}\right)=\sum_{k=1}^{+\infty} E\left(Y_{k}\right)
$$
Exercise 4.34 - Expectation of a linear convergent series of non negative r.v. Prove Theorem 4.33 by considering $X_{n}=\sum_{k=1}^{n} Y_{k}$ and applying the monotone convergence theorem (Karr, 1993, p. 106).
Exercise 4.35 - Expectation of a linear convergent series of non negative r.v. and $\sigma$-additivity
Verify that Theorem 4.33 could be used to prove the $\sigma$-additivity of p.f. if we considered $Y_{k}=\mathbf{1}_{A_{k}}$, where the $A_{k}$ are disjoint, so that $\sum_{k=1}^{+\infty} Y_{k}=\mathbf{1}_{\cup_{k=1}^{+\infty} A_{k}}$ (Karr, 1993, p. 106).
Proposition 4.36 - "Converse" of the positivity of expectation (Karr, 1993, p. 107)
Let $X$ be a non negative r.v. If $E(X)=0$ then $X \stackrel{\text { a.s. }}{=} 0$.
Exercise 4.37 - "Converse" of the positivity of expectation Prove Proposition 4.36 (Karr, 1993, p. 107).
#### Integrable r.v.
It is time to extend the definition of expectation to r.v. $X$ that can take both positive and negative (or null) values. But first recall that:
- $X^{+}=\max \{X, 0\}$ represents the positive part of the r.v. $X$;
- $X^{-}=-\min \{X, 0\}=\max \{-X, 0\}$ represents the negative part of the r.v. $X$;
- $X=X^{+}-X^{-}$;
- $|X|=X^{+}+X^{-}$.
The definition of expectation of such a r.v. preserves linearity and is based on the fact that $X$ can be written as a linear combination of two non negative r.v.: $X=X^{+}-X^{-}$.
Definition 4.38 - Integrable r.v.; the set of integrable r.v. (Karr, 1993, p. 107; Resnick, 1999, p. 126)
Let $X$ be a r.v., not necessarily non negative. Then $X$ is said to be integrable if $E(|X|)<+\infty$.
The set of integrable r.v. is denoted by $L^{1}$ or $L^{1}(P)$ if the probability measure needs to be emphasized.
Definition 4.39 - Expectation of an integrable r.v. (Karr, 1993, p. 107) Let $X$ be an integrable r.v. Then the expectation of $X$ is given by
$$
E(X)=E\left(X^{+}\right)-E\left(X^{-}\right)
$$
For an event $A$, the expectation of $X$ over $A$ is $E(X ; A)=E\left(X \times \mathbf{1}_{A}\right)$.
Remark 4.40 - Expectation of an integrable r.v. (Karr, 1993, p. 107; Resnick, 1999, p. 126)
1. If $X$ is an integrable r.v. then
$$
E\left(X^{+}\right)+E\left(X^{-}\right)=E\left(X^{+}+X^{-}\right)=E(|X|)<+\infty
$$
so both $E\left(X^{+}\right)$ and $E\left(X^{-}\right)$ are finite, $E\left(X^{+}\right)-E\left(X^{-}\right)$ is not of the form $\infty-\infty$, and thus the definition of expectation of $X$ is coherent.
2. Moreover, since $\left|X \times \mathbf{1}_{A}\right| \leq|X|$, $E(X ; A)=E\left(X \times \mathbf{1}_{A}\right)$ is finite (i.e. exists!) as long as $E(|X|)<+\infty$, that is, as long as $E(X)$ exists.
3. Some conventions when $X$ is not integrable...
If $E\left(X^{+}\right)<+\infty$ but $E\left(X^{-}\right)=+\infty$ then we consider $E(X)=-\infty$.
If $E\left(X^{+}\right)=+\infty$ but $E\left(X^{-}\right)<+\infty$ then we take $E(X)=+\infty$.
If $E\left(X^{+}\right)=+\infty$ and $E\left(X^{-}\right)=+\infty$ then $E(X)$ does not exist.
What follows refers to properties of the expectation operator.
Theorem 4.41 - Expectation of a linear combination of integrable r.v. (Karr, 1993, p. 107)
Let $X$ and $Y$ be two integrable r.v. - i.e., $X, Y \in L^{1}$ - and $a, b \in \mathbb{R}$. Then $a X+b Y$ is also an integrable r.v. ${ }^{7}$ and
$$
E(a X+b Y)=a E(X)+b E(Y) .
$$
Exercise 4.42 - Expectation of a linear combination of integrable r.v. Prove Theorem 4.41 (Karr, 1993, p. 108).
Corollary 4.43 - Modulus inequality (Karr, 1993, p. 108; Resnick, 1999, p. 128) If $X \in L^{1}$ then
$$
|E(X)| \leq E(|X|) .
$$
Exercise 4.44 - Modulus inequality
Prove Corollary 4.43 (Karr, 1993, p. 108).
Corollary 4.45 - Monotonicity of expectation for integrable r.v. (Karr, 1993, p. 108; Resnick, 1999, p. 127)
If $X, Y \in L^{1}$ and $X \leq Y$ then
$$
E(X) \leq E(Y)
$$
Exercise 4.46 - Monotonicity of expectation for integrable r.v. Prove Corollary 4.45 (Resnick, 1999, pp. 127-128).
${ }^{7}$ That is, $a X+b Y \in L^{1}$. In fact, $L^{1}$ is a vector space.
The continuity of expectation for integrable r.v. can finally be stated.
Theorem 4.47 - Dominated convergence theorem (Karr, 1993, p. 108; Resnick, 1999, p. 133)
Let $X_{1}, X_{2}, \ldots \in L^{1}$ and $X \in L^{1}$ with
$$
X_{n} \rightarrow X .
$$
If there is a dominating r.v. $Y \in L^{1}$ such that
$$
\left|X_{n}\right| \leq Y
$$
for each $n$, then
$$
\lim _{n \rightarrow+\infty} E\left(X_{n}\right)=E(X)
$$
Remark 4.48 - Dominated convergence theorem (Karr, 1993, p. 109)
The sequence of simple r.v., $X_{n}=n \times \mathbf{1}_{\left(0, \frac{1}{n}\right)}$, from Example 4.12 does not violate the dominated convergence theorem because any r.v. $Y$ dominating $X_{n}$ for each $n$ must satisfy $Y \geq \sum_{n=1}^{+\infty} n \times \mathbf{1}_{\left(\frac{1}{n+1}, \frac{1}{n}\right)}$, which implies that $E(Y)=+\infty$, thus $Y \notin L^{1}$ and we cannot apply Theorem 4.47.
Exercise 4.49 - Dominated convergence theorem
Prove Theorem 4.47 (Karr, 1993, p. 109; Resnick, 1999, p. 133, for a detailed proof).
Exercise 4.50 - Dominated convergence theorem and continuity of p.f.
Verify that Theorem 4.47 could be used to prove the continuity of p.f. if we considered $X_{n}=\mathbf{1}_{A_{n}}$, where $A_{n} \rightarrow A$, and $Y \equiv 1$ as the dominating integrable r.v. (Karr, 1993, p. 109).
#### Complex r.v.
Definition 4.51 - Integrable complex r.v.; expectation of a complex r.v. (Karr, 1993, p. 109)
A complex r.v. $Z=X+i Y \in L^{1}$ if $E(|Z|)=E\left(\sqrt{X^{2}+Y^{2}}\right)<+\infty$, and in this case the expectation of $Z$ is $E(Z)=E(X)+i E(Y)$.
### Integrals with respect to distribution functions
Integrals (of Borel measurable functions) with respect to d.f. are known as Lebesgue-Stieltjes integrals. Moreover, they are really expectations with respect to probabilities on $\mathbb{R}$ and are reduced to sums and Riemann (more generally, Lebesgue) integrals.
#### On integration
Remark 4.52 - Riemann integral (http://en.wikipedia.org/wiki/Riemann_integral) In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann (1826-1866), was the first rigorous definition of the integral of a function on an interval.
## - Overview
Let $g$ be a non-negative real-valued function on the interval $[a, b]$, and let $S=\{(x, y): 0<y<g(x)\}$ be the region of the plane under the graph of the function $g$ and above the interval $[a, b]$.
The basic idea of the Riemann integral is to use very simple approximations for the area of $S$, denoted by $\int_{a}^{b} g(x) d x$, namely by taking better and better approximations - we can say that "in the limit" we get exactly the area of $S$ under the curve.
## - Riemann sums
Choose a real-valued function $g$ which is defined on the interval $[a, b]$. The Riemann sum of $g$ with respect to the tagged partition $a=x_{0}<x_{1}<x_{2}<\ldots<x_{n}=b$ together with $t_{0}, \ldots, t_{n-1}$ (where $x_{i} \leq t_{i} \leq x_{i+1}$) is
$$
\sum_{i=0}^{n-1} g\left(t_{i}\right)\left(x_{i+1}-x_{i}\right),
$$
where each term represents the area of a rectangle with height $g\left(t_{i}\right)$ and length $x_{i+1}-x_{i}$. Thus, the Riemann sum is the signed area under all the rectangles.
## - Riemann integral
Loosely speaking, the Riemann integral is the limit of the Riemann sums of a function as the partitions get finer.
If the limit exists then the function is said to be integrable (or more specifically Riemann-integrable).
## - Limitations of the Riemann integral
With the advent of Fourier series, many analytical problems involving integrals came up whose satisfactory solution required interchanging limit processes and integral signs.
Failure of monotone convergence - The indicator function $\mathbf{1}_{\mathbb{Q}}$ on the rationals is not Riemann integrable. No matter how the set $[0,1]$ is partitioned into subintervals, each partition will contain at least one rational and at least one irrational number, since rationals and irrationals are both dense in the reals. Thus, the upper Darboux sums $^{8}$ will all be one, and the lower Darboux sums ${ }^{9}$ will all be zero.
Unsuitability for unbounded intervals - The Riemann integral can only integrate functions on a bounded interval. It can however be extended to unbounded intervals by taking limits, so long as this does not yield an answer such as $+\infty-\infty$.
What about integrating on structures other than Euclidean space? - The Riemann integral is inextricably linked to the order structure of the line. How do we free ourselves of this limitation?
Remark 4.53 - Lebesgue integral (http://en.wikipedia.org/wiki/Lebesgue_integral; http://en.wikipedia.org/wiki/Henri_Lebesgue)
Lebesgue integration plays an important role in real analysis, the axiomatic theory of probability, and many other fields in the mathematical sciences. The Lebesgue integral is a construction that extends the integral to a larger class of functions defined over spaces more general than the real line.
## - Lebesgue's theory of integration
Henri Léon Lebesgue (1875-1941) invented a new method of integration to solve this problem. Instead of using the areas of rectangles, which put the focus on the domain of the function, Lebesgue looked at the codomain of the function for his fundamental unit of area. Lebesgue's idea was to first build the integral for what he called simple functions, measurable functions that take only finitely many values. Then he defined it for more complicated functions as the least upper bound of all the integrals of simple functions smaller than the function in question.
${ }^{8}$ The upper Darboux sum of $g$ with respect to the partition is $\sum_{i=0}^{n-1}\left(x_{i+1}-x_{i}\right) M_{i+1}$, where $M_{i+1}=\sup _{x \in\left[x_{i}, x_{i+1}\right]} g(x)$.
${ }^{9}$ The lower Darboux sum of $g$ with respect to the partition is $\sum_{i=0}^{n-1}\left(x_{i+1}-x_{i}\right) m_{i+1}$, where $m_{i+1}=\inf _{x \in\left[x_{i}, x_{i+1}\right]} g(x)$.
Lebesgue integration has the beautiful property that every bounded function defined over a bounded interval with a Riemann integral also has a Lebesgue integral, and for those functions the two integrals agree. But there are many functions with a Lebesgue integral that have no Riemann integral.
As part of the development of Lebesgue integration, Lebesgue invented the concept of Lebesgue measure, which extends the idea of length from intervals to a very large class of sets, called measurable sets.
## - Integration
We start with a measure space $(\Omega, \mathcal{F}, \mu)$ where $\Omega$ is a set, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ and $\mu$ is a (non-negative) measure on $\mathcal{F}$ of subsets of $\Omega$.
In the mathematical theory of probability, we confine our study to a probability measure $\mu$, which satisfies $\mu(\Omega)=1$.
In Lebesgue's theory, integrals are defined for a class of functions called measurable functions.
We build up an integral $\int_{\Omega} g d \mu$ for measurable real-valued functions $g$ defined on $\Omega$ in stages:
- Indicator functions. To assign a value to the integral of the indicator function of a measurable set $S$ consistent with the given measure $\mu$, the only reasonable choice is to set
$$
\int_{\Omega} \mathbf{1}_{S} d \mu=\mu(S)
$$
- Simple functions. A finite linear combination of indicator functions $\sum_{k} a_{k} \mathbf{1}_{S_{k}}$. When the coefficients $a_{k}$ are non-negative, we set
$$
\int_{\Omega}\left(\sum_{k} a_{k} \mathbf{1}_{S_{k}}\right) d \mu=\sum_{k} a_{k} \int_{\Omega} \mathbf{1}_{S_{k}} d \mu=\sum_{k} a_{k} \mu\left(S_{k}\right) .
$$
- Non-negative functions. We define
$$
\int_{\Omega} g d \mu=\sup \left\{\int_{\Omega} s d \mu: 0 \leq s \leq g, s \text { simple }\right\} .
$$
- Signed functions. $g=g^{+}-g^{-}$... And it makes sense to define
$$
\int_{\Omega} g d \mu=\int_{\Omega} g^{+} d \mu-\int_{\Omega} g^{-} d \mu
$$
## Remark 4.54 - Lebesgue/Riemann-Stieltjes integration
(http://en.wikipedia.org/wiki/Lebesgue-Stieltjes_integration)
The Lebesgue-Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue-Stieltjes measure, which may be associated to any function of bounded variation on the real line.
## - Definition
The Lebesgue-Stieltjes integral $\int_{a}^{b} g(x) d F(x)$ is defined when $g:[a, b] \rightarrow \mathbb{R}$ is Borel measurable and bounded and $F:[a, b] \rightarrow \mathbb{R}$ is of bounded variation in $[a, b]$ and right-continuous, or when $g$ is non-negative and $F$ is monotone and right-continuous.
## - Riemann-Stieltjes integration and probability theory
When $g$ is a continuous real-valued function of a real variable and $F$ is a non-decreasing real function, the Lebesgue-Stieltjes integral is equivalent to the Riemann-Stieltjes integral, in which case we often write $\int_{a}^{b} g(x) d F(x)$ for the Lebesgue-Stieltjes integral, letting the measure $P_{F}$ remain implicit.
This is particularly common in probability theory when $F$ is the cumulative distribution function of a real-valued random variable $X$, in which case
$$
\int_{-\infty}^{\infty} g(x) d F(x)=E_{F}[g(X)] .
$$
#### Generalities
First of all, we should recall that given a d.f. $F$ on $\mathbb{R}$, there is a unique p.f. on $\mathbb{R}$ such that $P_{F}((a, b])=F(b)-F(a)$.
Moreover, all functions $g$ appearing below are assumed to be Borel measurable.
Definition 4.55 - Integral of a nonnegative $g$ with respect to a d.f. (Karr, 1993, p. 110) Let $F$ be a d.f. on $\mathbb{R}$ and $g$ a non negative function. Then the integral of $g$ with respect to $F$ is given by
$$
\int_{\mathbb{R}} g(x) d F(x)=E_{F}(g) \leq+\infty,
$$
where the expectation is that of $g(X)$ as a Borel measurable function of the r.v. $X$ defined on the probability space $\left(\mathbb{R}, \mathcal{B}(\mathbb{R}), P_{F}\right)$.
Definition 4.56 - Integrability of a function with respect to a d.f. (Karr, 1993, p. 110)
Let $F$ be a d.f. on $\mathbb{R}$ and $g$ a signed function. Then $g$ is said to be integrable with respect to $F$ if $\int_{\mathbb{R}}|g(x)| d F(x)<+\infty$, and in this case the integral of $g$ with respect to $F$ equals
$$
\int_{\mathbb{R}} g(x) d F(x)=\int_{\mathbb{R}} g^{+}(x) d F(x)-\int_{\mathbb{R}} g^{-}(x) d F(x) .
$$
Definition 4.57 - Integral of a function over a set with respect to a d.f. (Karr, 1993, p. 110)
Let $F$ be a d.f. on $\mathbb{R}$ and $g$ either non negative or integrable and $B \in \mathcal{B}(\mathbb{R})$. The integral of $g$ over $B$ with respect to $F$ is equal to
$$
\int_{B} g(x) d F(x)=\int_{\mathbb{R}} g(x) \times \mathbf{1}_{B}(x) d F(x) .
$$
The properties of the integral of a function with respect to a d.f. are those of expectation:
1. Constant preserved
2. Monotonicity
3. Linearity
4. Relation to $P_{F}$
5. Fatou's lemma
6. Monotone convergence theorem
7. Dominated convergence theorem
#### Discrete distribution functions
Keep in mind that integrals with respect to discrete d.f. are sums.
Theorem 4.58 - Integral with respect to a discrete d.f. (Karr, 1993, p. 111) Consider a d.f. $F$ that can be written as
$$
F(x)=\sum_{i} p_{i} \times \mathbf{1}_{\left[x_{i},+\infty\right)}(x) .
$$
Then, for each $g \geq 0$,
$$
\int_{\mathbb{R}} g(x) d F(x)=\sum_{i} g\left(x_{i}\right) \times p_{i} .
$$
Exercise 4.59 - Integral with respect to a discrete d.f.
Prove Theorem 4.58 (Karr, 1993, p. 111).
Corollary 4.60 - Integrable function with respect to a discrete d.f. (Karr, 1993, p. 111)
The function $g$ is said to be integrable with respect to the discrete d.f. $F$ iff
$$
\sum_{i}\left|g\left(x_{i}\right)\right| \times p_{i}<+\infty
$$
and in this case
$$
\int_{\mathbb{R}} g(x) d F(x)=\sum_{i} g\left(x_{i}\right) \times p_{i} .
$$
#### Absolutely continuous distribution functions
Now note that integrals with respect to absolutely continuous d.f. are Riemann integrals.
Theorem 4.61 - Integral with respect to an absolutely continuous d.f. (Karr, 1993, p. 112)
Suppose that the d.f. $F$ is absolutely continuous and is associated to a piecewise continuous p.d.f. $f$. If $g$ is a non negative function and piecewise continuous then
$$
\int_{\mathbb{R}} g(x) d F(x)=\int_{-\infty}^{+\infty} g(x) \times f(x) d x,
$$
where the integral on the right-hand side is an improper Riemann integral.
Exercise 4.62 - Integral with respect to an absolutely continuous d.f.
Prove Theorem 4.61 (Karr, 1993, p. 112).
Corollary 4.63 - Integral with respect to an absolutely continuous d.f. (Karr, 1993, p. 112)
A piecewise continuous function $g$ is said to be integrable with respect to the d.f. $F$ iff
$$
\int_{-\infty}^{+\infty}|g(x)| f(x) d x<+\infty
$$
and in this case
$$
\int_{\mathbb{R}} g(x) d F(x)=\int_{-\infty}^{+\infty} g(x) \times f(x) d x .
$$
#### Mixed distribution functions
Recall that a mixed d.f. $F$ is a convex combination of a discrete d.f.
$$
F_{d}(x)=\sum_{i} p_{i} \times \mathbf{1}\left(x_{i} \leq x\right)
$$
and an absolutely continuous d.f.
$$
F_{a}(x)=\int_{-\infty}^{x} f_{a}(s) d s .
$$
Thus,
$$
F(x)=\alpha \times F_{d}(x)+(1-\alpha) \times F_{a}(x),
$$
where $\alpha \in(0,1)$.
Corollary 4.64 - Integral with respect to a mixed d.f. (Karr, 1993, p. 112)
The integral of $g$ with respect to the mixed d.f. $F$ is a corresponding combination of integrals with respect to $F_{d}$ and $F_{a}$ :
$$
\begin{aligned}
\int_{\mathbb{R}} g(x) d F(x) & =\alpha \times \int_{\mathbb{R}} g(x) d F_{d}(x)+(1-\alpha) \times \int_{\mathbb{R}} g(x) d F_{a}(x) \\
& =\alpha \times \sum_{i} g\left(x_{i}\right) \times p_{i}+(1-\alpha) \times \int_{-\infty}^{+\infty} g(x) \times f_{a}(x) d x
\end{aligned}
$$
In order that the integral with respect to a mixed d.f. exists, $g$ must be piecewise continuous and either non negative or integrable with respect to both $F_{d}$ and $F_{a}$.
### Computation of expectations
So far we have defined the expectation for simple r.v.
The expectations of other types of r.v. - such as non negative, integrable and mixed r.v. - naturally involve integrals with respect to distribution functions.
#### Non negative r.v.
The second equality in the next formula is quite convenient because it allows us to obtain the expectation of a non negative r.v. - be it discrete, absolutely continuous or mixed - in terms of an improper Riemann integral.
Theorem 4.65 - Expected value of a non negative r.v. (Karr, 1993, p. 113) If $X \geq 0$ then
$$
E(X)=\int_{0}^{+\infty} x d F_{X}(x)=\int_{0}^{+\infty}\left[1-F_{X}(x)\right] d x .
$$
Exercise 4.66 - Expected value of a non negative r.v.
Prove Theorem 4.65 (Karr, 1993, pp. 113-114).
Corollary 4.67 - Expected value of a non negative integer-valued r.v. (Karr, 1993, p. 114)
Let $X$ be a non negative integer-valued r.v. Then
$$
E(X)=\sum_{n=1}^{+\infty} n \times P(\{X=n\})=\sum_{n=1}^{+\infty} P(\{X \geq n\})=\sum_{n=0}^{+\infty} P(\{X>n\}) .
$$
Exercise 4.68 - Expected value of a non negative integer-valued r.v. Prove Corollary 4.67 (Karr, 1993, p. 114).
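A quick numerical illustration of the tail-sum formula in Corollary 4.67 (a sketch in Python; we pick $X \sim \operatorname{Geometric}(p)$ on $\{1,2, \ldots\}$, for which $P(\{X \geq n\})=(1-p)^{n-1}$ and $E(X)=1 / p$):

```python
p = 0.25   # X ~ Geometric(p) on {1, 2, ...}

# E(X) = sum_{n >= 1} P(X >= n); the tail (1 - p)^(n - 1) is negligible beyond n = 500
tail_sum = sum((1 - p) ** (n - 1) for n in range(1, 500))
print(tail_sum, 1 / p)   # both approximately 4
```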
Corollary 4.69 - Expected value of a non negative absolutely continuous r.v. Let $X$ be a non negative absolutely continuous r.v. with p.d.f. $f_{X}(x)$. Then
$$
E(X)=\int_{0}^{+\infty} x \times f_{X}(x) d x=\int_{0}^{+\infty}\left[1-F_{X}(x)\right] d x .
$$
Exercise 4.70 - A nonnegative r.v. with infinite expectation Let $X \sim \operatorname{Pareto}(b=1, \alpha=1)$, i.e.
$$
f_{X}(x)= \begin{cases}\frac{\alpha b^{\alpha}}{x^{\alpha+1}}=\frac{1}{x^{2}}, & x \geq b=1 \\ 0, & \text { otherwise. }\end{cases}
$$
Prove that $E(X)$ exists and $E(X)=+\infty$ (Resnick, 1999, p. 126, Example 5.2.1).
#### Integrable r.v.
Let us remind the reader that $X$ is said to be an integrable r.v. if $E(|X|)=E\left(X^{+}\right)+E\left(X^{-}\right)<+\infty$.
Theorem 4.71 - Expected value of an integrable r.v. (Karr, 1993, p. 114) If $X$ is an integrable r.v. then
$$
E(X)=\int_{-\infty}^{+\infty} x d F_{X}(x)
$$
Exercise 4.72 - Expected value of an integrable r.v.
Prove Theorem 4.71 (Karr, 1993, p. 114).
Corollary 4.73 - Expected value of an integrable absolutely continuous r.v. Let $X$ be an integrable absolutely continuous r.v. with p.d.f. $f_{X}(x)$. Then
$$
E(X)=\int_{-\infty}^{+\infty} x \times f_{X}(x) d x
$$
Exercise 4.74 - Real r.v. without expectation
Prove that $E\left(X^{+}\right)=E\left(X^{-}\right)=+\infty$ - and therefore $E(X)$ does not exist - if $X$ has p.d.f. equal to:
(a) $\quad f_{X}(x)= \begin{cases}\frac{1}{2 x^{2}}, & |x|>1 \\ 0, & \text { otherwise; }\end{cases}$
(b) $\quad f_{X}(x)=\frac{1}{\pi\left(1+x^{2}\right)}, x \in \mathbb{R}$.
(Resnick, 1999, p. 126, Example 5.2.1). ${ }^{10}$
${ }^{10}$ There is a typo in the definition of the first p.d.f. in Resnick (1999): $x>1$ should read as $|x|>1$. The second p.d.f. corresponds to the one of a r.v. with (standard) Cauchy distribution.
#### Mixed r.v.
When dealing with mixed r.v. $X$ we take advantage of the fact that $F_{X}(x)$ is a convex combination of the d.f. of a discrete r.v. $X_{d}$ and the d.f. of an absolutely continuous r.v. $X_{a}$.
## Corollary 4.75 - Expectation of a mixed r.v.
The expected value of the mixed r.v. $X$ with d.f. $F_{X}(x)=\alpha \times F_{X_{d}}(x)+(1-\alpha) \times F_{X_{a}}(x)$, where $\alpha \in(0,1)$, is given by
$$
\begin{aligned}
E(X)=\int_{\mathbb{R}} x d F_{X}(x) & =\alpha \times \int_{\mathbb{R}} x d F_{X_{d}}(x)+(1-\alpha) \times \int_{\mathbb{R}} x d F_{X_{a}}(x) \\
& =\alpha \times \sum_{i} x_{i} \times P\left(\left\{X_{d}=x_{i}\right\}\right)+(1-\alpha) \times \int_{-\infty}^{+\infty} x \times f_{X_{a}}(x) d x .
\end{aligned}
$$
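A minimal simulation sketch of Corollary 4.75 follows; the mixing weight $\alpha$ and the two component distributions are assumed for illustration only:

```python
# Simulate X with d.f. F_X = alpha F_{X_d} + (1 - alpha) F_{X_a}, where
# X_d is uniform on {0, 2} (so E(X_d) = 1) and X_a ~ Exponential(1)
# (so E(X_a) = 1); the sample mean should approach
# alpha * E(X_d) + (1 - alpha) * E(X_a) = 1.
import random

random.seed(2024)
alpha = 0.4

def draw_mixed():
    if random.random() < alpha:           # discrete branch, prob. alpha
        return random.choice([0.0, 2.0])
    return random.expovariate(1.0)        # absolutely continuous branch

n = 200_000
print(sum(draw_mixed() for _ in range(n)) / n)  # approximately 1
```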
## Exercise 4.76 - Expectation of a mixed r.v.
A random variable $X$ has the following d.f.: ${ }^{11}$
$$
F_{X}(x)= \begin{cases}0, & x<0 \\ 0.3, & 0 \leq x<2 \\ 0.3+0.2 x, & 2 \leq x<3 \\ 1, & x \geq 3\end{cases}
$$
(a) Why is $X$ a mixed r.v.?
(b) Write $F_{X}(x)$ as a linear combination of the d.f. of two r.v.: a discrete and an absolutely continuous r.v.
(c) Obtain the expected value of $X$, by using the fact that $X$ is non negative and, thus, $E(X)=\int_{0}^{+\infty}\left[1-F_{X}(x)\right] d x$.
Compare this value with the one you would obtain using Corollary 4.75.
${ }^{11}$ Adapted from Walrand (2004, pp. 53-55, Example 4.10.9).
## Exercise 4.77 - Expectation of a mixed r.v. in a queueing setting
Consider a $M / M / 1$ system. ${ }^{12}$ Let:
- $L_{s}$ be the number of customers an arriving customer finds in the system in equilibrium; ${ }^{13}$
- $W_{q}$ be the waiting time in queue of this arriving customer. ${ }^{14}$
Under these conditions, we can state that
$$
P\left(L_{s}=k\right)=(1-\rho) \times \rho^{k}, k \in \mathbb{N}_{0}
$$
thus, $L_{s} \sim$ geometric $^{*}(1-\rho)$, where $\rho=\frac{\lambda}{\mu} \in(0,1)$ and $E\left(L_{s}\right)=\frac{\rho}{1-\rho}$.
(a) Argue that $W_{q} \mid\left\{L_{s}=k\right\} \sim \operatorname{Gamma}(k, \mu)$, for $k \in \mathbb{N}$.
(b) Prove that $W_{q} \mid\left\{W_{q}>0\right\} \sim \operatorname{Exponential}(\mu(1-\rho))$.
(c) Demonstrate that $W_{q}$ is a mixed r.v. with d.f. given by:
$$
F_{W_{q}}(w)= \begin{cases}0, & w<0 \\ (1-\rho)+\rho \times F_{E x p(\mu(1-\rho))}(w), & w \geq 0\end{cases}
$$
(d) Verify that $E\left(W_{q}\right)=\frac{\rho}{\mu(1-\rho)}$.
${ }^{12}$ The arrivals to the system are governed by a Poisson process with rate $\lambda$, i.e. the time between arrivals has an exponential distribution with parameter $\lambda$; needless to say, $M$ stands for memoryless. The service times are not only i.i.d. with exponential distribution with parameter $\mu$, but also independent from the arrival process. There is only one server, and the service policy is FCFS (first come first served). $\rho=\frac{\lambda}{\mu}$ represents the traffic intensity and we assume that $\rho \in(0,1)$.
${ }^{13}$ Equilibrium roughly means that a lot of time has elapsed since the system has been operating and therefore the initial conditions no longer influence the state of system.
${ }^{14} W_{q}$ is the time elapsed from the moment the customer arrives until his/her service starts in the system in equilibrium.
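For readers who want a numerical companion to this exercise, the sketch below takes the d.f. in (c) for granted and checks (d) by simulation; the rates $\lambda$ and $\mu$ are assumed example values:

```python
# W_q = 0 with probability 1 - rho, and W_q | {W_q > 0} ~ Exponential(mu(1 - rho)),
# so E(W_q) should be rho / (mu (1 - rho)).
import random

random.seed(1)
lam, mu = 3.0, 4.0
rho = lam / mu                                   # traffic intensity, 0.75

def draw_wq():
    if random.random() < 1 - rho:                # customer does not wait
        return 0.0
    return random.expovariate(mu * (1 - rho))    # conditional exponential delay

n = 200_000
print(sum(draw_wq() for _ in range(n)) / n)      # approx rho/(mu(1-rho)) = 0.75
```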
#### Functions of r.v.
Unsurprisingly, we are able to derive expressions for the expectation of a Borel measurable function $g$ of the r.v. $X$, $E[g(X)]$. Obtaining this expectation does not require the derivation of the d.f. of $g(X)$ and follows from section 4.2.
In the sections that follow we shall also discuss the expectation of specific functions of r.v., such as $g(X)=X^{k}, k \in \mathbb{N}$.
Theorem 4.78 - Expected value of a function of a r.v. (Karr, 1993, p. 115)
Let $X$ be a r.v., and $g$ be a Borel measurable function either non negative or integrable. Then
$$
E[g(X)]=\int_{\mathbb{R}} g(x) d F_{X}(x) .
$$
Exercise 4.79 - Expected value of a function of a r.v.
Prove Theorem 4.78 (Karr, 1993, p. 115).
Corollary 4.80 - Expected value of a function of a discrete r.v. (Karr, 1993, p. 115)
Let $X$ be a discrete r.v., and $g$ be a Borel measurable function either non negative or integrable. Then
$$
E[g(X)]=\sum_{x_{i}} g\left(x_{i}\right) \times P\left(\left\{X=x_{i}\right\}\right) .
$$
Corollary 4.81 - Expected value of a function of an absolutely continuous r.v. (Karr, 1993, p. 115)
Let $X$ be an absolutely continuous r.v., and $g$ be a Borel measurable function either non negative or integrable. Then
$$
E[g(X)]=\int_{-\infty}^{+\infty} g(x) \times f_{X}(x) d x .
$$
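As a crude numerical illustration of Corollary 4.81 - the exponential example, the function $g$ and the quadrature step are our own choices - consider:

```python
# For X ~ Exponential(lam) and g(x) = x**2, E[g(X)] = 2 / lam**2;
# approximate the integral of g(x) f_X(x) by a left Riemann sum.
import math

lam = 2.0
g = lambda x: x**2
f = lambda x: lam * math.exp(-lam * x)   # p.d.f. of X on [0, +infty)

dx, upper = 1e-3, 20.0                   # truncation; the tail is negligible
approx = sum(g(i * dx) * f(i * dx) * dx for i in range(int(upper / dx)))
print(approx, 2 / lam**2)                # both approximately 0.5
```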
#### Functions of random vectors
When dealing with functions of random vectors, the only useful formulas are those referring to the expectation of functions of discrete and absolutely continuous random vectors.
These formulas will be used to obtain, for instance, what we shall call measures of (linear) association between r.v.
Theorem 4.82 - Expectation of a function of a discrete random vector (Karr, 1993, p. 116)
Let:
- $X_{1}, \ldots, X_{d}$ be discrete r.v., with values in the countable sets $C_{1}, \ldots, C_{d}$, respectively;
- $g: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be a Borel measurable function either non negative or integrable (i.e. $g\left(X_{1}, \ldots, X_{d}\right) \in L^{1}$).
Then
$$
E\left[g\left(X_{1}, \ldots, X_{d}\right)\right]=\sum_{x_{1} \in C_{1}} \cdots \sum_{x_{d} \in C_{d}} g\left(x_{1}, \ldots, x_{d}\right) \times P\left(\left\{X_{1}=x_{1}, \ldots, X_{d}=x_{d}\right\}\right) .
$$
Theorem 4.83 - Expectation of a function of an absolutely continuous random vector (Karr, 1993, p. 116)
Let:
- $\left(X_{1}, \ldots, X_{d}\right)$ be an absolutely continuous random vector with joint p.d.f. $f_{X_{1}, \ldots, X_{d}}\left(x_{1}, \ldots, x_{d}\right)$
- $g: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be a Borel measurable function either non negative or integrable.
Then
$$
\begin{aligned}
& E\left[g\left(X_{1}, \ldots, X_{d}\right)\right] \\
& \quad=\int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} g\left(x_{1}, \ldots, x_{d}\right) \times f_{X_{1}, \ldots, X_{d}}\left(x_{1}, \ldots, x_{d}\right) d x_{1} \ldots d x_{d}
\end{aligned}
$$
Exercise 4.84 - Expectation of a function of an absolutely continuous random vector
Prove Theorem 4.83 (Karr, 1993, pp. 116-117).
#### Functions of independent r.v.
When all the components of the random vector $\left(X_{1}, \ldots, X_{d}\right)$ are independent, the formula of $E\left[g\left(X_{1}, \ldots, X_{d}\right)\right]$ can be simplified.
The next results refer to two independent random variables $(d=2)$. The generalization for $d>2$ is straightforward.
Theorem 4.85 - Expectation of a function of two independent r.v. (Karr, 1993, p. 117) Let:
- $X$ and $Y$ be two independent r.v.
- $g: \mathbb{R}^{2} \rightarrow \mathbb{R}^{+}$ be a Borel measurable non negative function.
Then
$$
\begin{aligned}
E[g(X, Y)] & =\int_{\mathbb{R}}\left[\int_{\mathbb{R}} g(x, y) d F_{X}(x)\right] d F_{Y}(y) \\
& =\int_{\mathbb{R}}\left[\int_{\mathbb{R}} g(x, y) d F_{Y}(y)\right] d F_{X}(x) .
\end{aligned}
$$
Moreover, the expectation of the product of functions of independent r.v. is the product of their expectations. Also note that the product of two integrable r.v. need not be integrable.
Corollary 4.86 - Expectation of a function of two independent r.v. (Karr, 1993, p. 118)
Let:
- $X$ and $Y$ be two independent r.v.
- $g_{1}, g_{2}: \mathbb{R} \rightarrow \mathbb{R}$ be two Borel measurable functions either non negative or integrable.

Then $g_{1}(X) \times g_{2}(Y)$ is integrable and
$$
E\left[g_{1}(X) \times g_{2}(Y)\right]=E\left[g_{1}(X)\right] \times E\left[g_{2}(Y)\right]
$$
Exercise 4.87 - Expectation of a function of two independent r.v.
Prove Theorem 4.85 (Karr, 1993, p. 117) and Corollary 4.86 (Karr, 1993, p. 118).
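A quick simulation sketch of Corollary 4.86, with assumed distributions and functions:

```python
# X ~ Uniform(0,1) independent of Y ~ Exponential(1), g1(x) = x**2 and
# g2(y) = exp(-y); then E[g1(X) g2(Y)] should match E[g1(X)] E[g2(Y)].
import math
import random

random.seed(7)
n = 200_000
xs = [random.random() for _ in range(n)]
ys = [random.expovariate(1.0) for _ in range(n)]

lhs = sum(x**2 * math.exp(-y) for x, y in zip(xs, ys)) / n
rhs = (sum(x**2 for x in xs) / n) * (sum(math.exp(-y) for y in ys) / n)
print(lhs, rhs)   # both approximately (1/3) * (1/2) = 1/6
```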
#### Sum of independent r.v.
We are certainly not going to restate that $E(X+Y)=E(X)+E(Y)$ when $X$ and $Y$ are simple, non negative or integrable independent r.v. ${ }^{15}$
Instead, we are going to write the d.f. of the sum of two independent r.v. in terms of integrals with respect to the d.f. and define the convolution of d.f. ${ }^{16}$
Theorem 4.88 - D.f. of a sum of two independent r.v. (Karr, 1993, p. 118) Let $X$ and $Y$ be two independent r.v. Then
$$
F_{X+Y}(t)=\int_{\mathbb{R}} F_{X}(t-y) d F_{Y}(y)=\int_{\mathbb{R}} F_{Y}(t-x) d F_{X}(x) .
$$
Exercise 4.89 - D.f. of a sum of two independent r.v.
Prove Theorem 4.88 (Karr, 1993, p. 118).
Corollary 4.90 - D.f. of a sum of two independent discrete r.v. Let $X$ and $Y$ be two independent discrete r.v. Then
$$
F_{X+Y}(t)=\sum_{y} F_{X}(t-y) \times P(\{Y=y\})=\sum_{x} F_{Y}(t-x) \times P(\{X=x\}) .
$$
## Remark 4.91 - D.f. of a sum of two independent discrete r.v.
The previous formula is not preferable to the one we derived for the p.f. of $X+Y$ in Chapter 2 because it depends in fact on two sums...
Corollary 4.92 - D.f. of a sum of two independent absolutely continuous r.v. Let $X$ and $Y$ be two independent absolutely continuous r.v. Then
$$
F_{X+Y}(t)=\int_{-\infty}^{+\infty} F_{X}(t-y) \times f_{Y}(y) d y=\int_{-\infty}^{+\infty} F_{Y}(t-x) \times f_{X}(x) d x .
$$
Let us revisit an exercise from Chapter 2 to illustrate the use of Corollary 4.92.
${ }^{15}$ This result follows from the linearity of expectation.
${ }^{16}$ Recall that in Chapter 2 we derived expressions for the p.f. and the p.d.f. of the sum of two independent r.v.

Exercise 4.93 - D.f. of the sum of two independent absolutely continuous r.v.
Let $X$ and $Y$ be the durations of two independent system components set in what is called a stand by connection. ${ }^{17}$ In this case the system duration is given by $X+Y$.
(a) Derive the d.f. of $X+Y$, assuming that $X \sim \operatorname{Exponential}(\alpha)$ and $Y \sim \operatorname{Exponential}(\beta)$, where $\alpha, \beta>0$ and $\alpha \neq \beta$, and using Corollary 4.92.
(b) Prove that the associated p.d.f. equals $f_{X+Y}(z)=\frac{\alpha \beta\left(e^{-\beta z}-e^{-\alpha z}\right)}{\alpha-\beta}, z>0$.
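A simulation sketch for this exercise (the rates and the evaluation point $t$ are assumptions of ours): the empirical d.f. of $X+Y$ is compared, at a single point, with the d.f. obtained by integrating the p.d.f. in (b):

```python
# Integrating the p.d.f. in (b) from 0 to t gives
# F(t) = 1 - (alpha e^{-beta t} - beta e^{-alpha t}) / (alpha - beta).
import math
import random

random.seed(3)
alpha, beta, t = 1.0, 2.0, 1.5
n = 200_000
hits = sum(random.expovariate(alpha) + random.expovariate(beta) <= t
           for _ in range(n))
closed_form = 1 - (alpha * math.exp(-beta * t)
                   - beta * math.exp(-alpha * t)) / (alpha - beta)
print(hits / n, closed_form)   # the two values should nearly coincide
```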
Definition 4.94 - Convolution of d.f. (Karr, 1993, p. 119)
Let $X$ and $Y$ be independent r.v. Then
$$
\left(F_{X} \star F_{Y}\right)(t)=F_{X+Y}(t)=\int_{\mathbb{R}} F_{X}(t-y) d F_{Y}(y)=\int_{\mathbb{R}} F_{Y}(t-x) d F_{X}(x)
$$
is said to be the convolution of the d.f. $F_{X}$ and $F_{Y}$.
${ }^{17}$ At time 0 , only the component with duration $X$ is on. The component with duration $Y$ replaces the other one as soon as it fails.
## $4.4 \quad L^{p}$ spaces
Motivation $4.95-L^{p}$ spaces (Karr, 1993, p. 119)
While describing a r.v. in a partial way, we tend to deal with $E\left(X^{p}\right), p \in[1,+\infty)$, or a function of several such expected values. Needless to say, we have to guarantee that $E\left(|X|^{p}\right)$ is finite.
Definition $4.96-L^{p}$ spaces (Karr, 1993, p. 119)
The space $L^{p}$, for a fixed $p \in[1,+\infty)$, consists of all r.v. $X$ whose $p$ th absolute power is integrable, that is,
$$
E\left(|X|^{p}\right)<+\infty .
$$
Exercise 4.97 - Exponential distributions and $L^{p}$ spaces Let $X \sim \operatorname{Exponential}(\lambda), \lambda>0$.
Prove that $X \in L^{p}$, for any $p \in[1,+\infty)$.
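A numerical sanity check - not a proof - may be reassuring here; the closed form $E\left(X^{p}\right)=\Gamma(p+1) / \lambda^{p}$ used below is standard (see also Exercise 4.137), while the parameter values are arbitrary:

```python
# Monte Carlo estimate of E(X**p) for X ~ Exponential(lam) vs Gamma(p+1)/lam**p.
import math
import random

random.seed(11)
lam, p, n = 1.5, 2.7, 200_000
mc = sum(random.expovariate(lam)**p for _ in range(n)) / n
print(mc, math.gamma(p + 1) / lam**p)   # both approximately 1.4
```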
Exercise 4.98 - Pareto distributions and $L^{p}$ spaces Let $X \sim \operatorname{Pareto}(b, \alpha)$ i.e.
$$
f_{X}(x)= \begin{cases}\frac{\alpha b^{\alpha}}{x^{\alpha+1}}, & x \geq b \\ 0, & \text { otherwise }\end{cases}
$$
where $b>0$ is the minimum possible value of $X$ and $\alpha>0$ is called the Pareto index.
For which values of $p \in[1,+\infty)$ do we have $X \in L^{p}$?
### Key inequalities
What immediately follows is a table with an overview of a few extremely useful inequalities involving expectations.
Some of these inequalities are essential to prove certain types of convergence of sequences of r.v. in $L^{p}$ and uniform integrability (Resnick, 1999, p. 189) ${ }^{18}$ and provide answers to a few questions we compiled after the table.
Finally, we state and treat each inequality separately.
Proposition 4.99 - A few (moment) inequalities (Karr, 1993, p. 123; http://en.wikipedia.org)
| (Moment) inequality | Conditions | Statement of the inequality |
| :---: | :---: | :---: |
| Young | $\begin{array}{l}h: \mathbb{R}_{0}^{+} \rightarrow \mathbb{R}_{0}^{+} \text {continuous, strictly increasing, } \\ h(0)=0, h(+\infty)=+\infty, H(x)=\int_{0}^{x} h(y) d y \\ k \text { pointwise inverse of } h, K(x)=\int_{0}^{x} k(y) d y \\ a, b \in \mathbb{R}^{+}\end{array}$ | $a \times b \leq H(a)+K(b)$ |
| Hölder | $\begin{array}{l}X \in L^{p}, Y \in L^{q} \\ \text { where } p, q \in[1,+\infty): \frac{1}{p}+\frac{1}{q}=1\end{array}$ | $\begin{array}{l}E(\|X \times Y\|) \leq E^{\frac{1}{p}}\left(\|X\|^{p}\right) \times E^{\frac{1}{q}}\left(\|Y\|^{q}\right) \\ \left(X \times Y \in L^{1}\right)\end{array}$ |
| Cauchy-Schwarz | $X, Y \in L^{2}$ | $\begin{array}{l}E(\|X \times Y\|) \leq \sqrt{E\left(X^{2}\right) \times E\left(Y^{2}\right)} \\ \left(X \times Y \in L^{1}\right)\end{array}$ |
| Liapunov | $X \in L^{s}, 1 \leq r \leq s$ | $\begin{array}{l}E^{\frac{1}{r}}\left(\|X\|^{r}\right) \leq E^{\frac{1}{s}}\left(\|X\|^{s}\right) \\ \left(L^{s} \subseteq L^{r}\right)\end{array}$ |
| Minkowski | $X, Y \in L^{p}, p \in[1,+\infty)$ | $\begin{array}{l}E^{\frac{1}{p}}\left(\|X+Y\|^{p}\right) \leq E^{\frac{1}{p}}\left(\|X\|^{p}\right)+E^{\frac{1}{p}}\left(\|Y\|^{p}\right) \\ \left(X+Y \in L^{p}\right)\end{array}$ |
| Jensen | $g$ convex; $X, g(X) \in L^{1}$ | $g[E(X)] \leq E[g(X)]$ |
| | $g$ concave; $X, g(X) \in L^{1}$ | $g[E(X)] \geq E[g(X)]$ |
| Chebyshev | $\begin{array}{l}X \geq 0 \\ g \text { non negative and increasing, } a>0\end{array}$ | $P(\{X \geq a\}) \leq \frac{E[g(X)]}{g(a)}$ |
| (Chernoff) | $X \geq 0, a, t>0$ | $P(\{X \geq a\}) \leq \frac{E\left(e^{t X}\right)}{e^{t a}}$ |
| (Markov) | $\begin{array}{l}X \in L^{1}, a>0 \\ X \in L^{p}, a>0\end{array}$ | $\begin{array}{l}P(\{\|X\| \geq a\}) \leq \frac{E[\|X\|]}{a} \\ P(\{\|X\| \geq a\}) \leq \frac{E\left[\|X\|^{p}\right]}{a^{p}}\end{array}$ |
| (Chebyshev-Bienaymé) | $X \in L^{2}, a>0$ | $P(\{\|X-E(X)\| \geq a\}) \leq \frac{V(X)}{a^{2}}$ |
| | $X \in L^{2}, a>0$ | $P(\{\|X-E(X)\| \geq a \sqrt{V(X)}\}) \leq \frac{1}{a^{2}}$ |
| (Cantelli) | $X \in L^{2}, a>0$ | $P(\{\|X-E(X)\| \geq a\}) \leq \frac{2 V(X)}{a^{2}+V(X)}$ |
| (one-sided Chebyshev) | $X \in L^{2}, a>0$ | $P(\{X-E(X) \geq a \sqrt{V(X)}\}) \leq \frac{1}{1+a^{2}}$ |
${ }^{18}$ For a definition of uniform integrability see http://en.wikipedia.org/wiki/Uniform_integrability.
## Motivation 4.100 - A few (moment) inequalities
- Young - How can we relate the areas under (resp. above) an increasing function $h$ in the interval $[0, a]$ (resp. in the interval $\left[0, h^{-1}(b)\right]$) with the area of the rectangle with vertices $(0,0),(0, b),(a, 0)$ and $(a, b)$, where $b \in\left(0, \max _{x \in[0, a]} h(x)\right]$?
- Hölder/Cauchy-Schwarz - What are the sufficient conditions on r.v. $X$ and $Y$ to be dealing with an integrable product $X Y$ ?
- Liapunov - What happens to the spaces $L^{p}$ when $p$ increases in $[1,+\infty)$ ? Is it a decreasing (increasing) sequence of sets?
What happens to the norm of a r.v. in $L^{p},\|X\|_{p}=E^{\frac{1}{p}}\left(|X|^{p}\right)$ ? Is it an increasing (decreasing) function of $p \in[1,+\infty)$ ?
- Minkowski - What are the sufficient conditions on r.v. $X$ and $Y$ to be dealing with a sum $X+Y \in L^{p}$ ? Is $L^{p}$ a vector space?
- Jensen - Under what conditions we can relate $g[E(X)]$ and $E[g(X)]$ ?
- Chebyshev - When can we provide non trivial upper bounds for the tail probability $P(\{X \geq a\})$?
#### Young's inequality
The first inequality (not a moment inequality) is named after William Henry Young (1863-1942), an English mathematician, and can be used to prove Hölder's inequality.
Lemma 4.101 - Young's inequality (Karr, 1993, p. 119)
Let:
- $h: \mathbb{R}_{0}^{+} \rightarrow \mathbb{R}_{0}^{+}$be a continuous and strictly increasing function such that $h(0)=0$, $h(+\infty)=+\infty$;
- $k$ be the pointwise inverse of $h$;
- $H(x)=\int_{0}^{x} h(y) d y$ be the area under $h$ in the interval $[0, x]$;
- $K(x)=\int_{0}^{x} k(y) d y$ be the area above $h$ in the interval $\left[0, h^{-1}(x)\right]=[0, k(x)]$;
- $a, b \in \mathbb{R}^{+}$.
Then
$$
a \times b \leq H(a)+K(b) .
$$
Exercise 4.102 - Young's inequality (Karr, 1993, p. 119)
Prove Lemma 4.101, by using a graphical argument (Karr, 1993, p. 119).
## Remark 4.103 - A special case of Young's inequality
If we apply Young's inequality to $h(x)=x^{p-1}, p \in[1,+\infty)$ and consider
- $a$ and $b$ non negative real numbers,
- $q=1+\frac{1}{p-1} \in[1,+\infty)$ i.e. $\frac{1}{p}+\frac{1}{q}=1$
then
$$
a \times b \leq \frac{a^{p}}{p}+\frac{b^{q}}{q} .
$$
For the proof of this result see http://en.wikipedia.org/wiki/Young's_inequality, which states (4.67) as Young's inequality. See also Karr (1993, p. 120) for a reference to (4.67) as a consequence of Young's inequality as stated in (4.66).
#### Hölder's moment inequality
In mathematical analysis Hölder's inequality, named after the German mathematician Otto Hölder (1859-1937), is a fundamental inequality between integrals, an indispensable tool for the study of $L^{p}$ spaces and essential to prove Liapunov's and Minkowski's inequalities.
Interestingly enough, Hölder's inequality was first found by the British mathematician L.J. Rogers (1862-1933) in 1888, and discovered independently by Hölder in 1889.
Theorem 4.104 - Hölder's moment inequality (Karr, 1993, p. 120) Let
- $X \in L^{p}, Y \in L^{q}$, where $p, q \in[1,+\infty): \frac{1}{p}+\frac{1}{q}=1$.
Then
$$
\begin{aligned}
& X \times Y \in L^{1} \\
& E(|X Y|) \leq E^{\frac{1}{p}}\left(|X|^{p}\right) \times E^{\frac{1}{q}}\left(|Y|^{q}\right) .
\end{aligned}
$$
## Remarks 4.105 - Hölder's (moment) inequality
- The numbers $p$ and $q$ above are said to be Hölder conjugates of each other.
- For the detailed statement of Hölder inequality in measure spaces, check http://en.wikipedia.org/wiki/Hölder's_inequality. Two notable special cases follow...
- In case we are dealing with $S$, a measurable subset of $\mathbb{R}$ with the Lebesgue measure, and $f$ and $g$ are measurable real-valued functions on $S$ then Hölder's inequality reads as follows:
$$
\int_{S}|f(x) \times g(x)| d x \leq\left(\int_{S}|f(x)|^{p} d x\right)^{\frac{1}{p}} \times\left(\int_{S}|g(x)|^{q} d x\right)^{\frac{1}{q}} .
$$
- When we are dealing with $n$-dimensional Euclidean space and the counting measure, we have
$$
\sum_{k=1}^{n}\left|x_{k} \times y_{k}\right| \leq\left(\sum_{k=1}^{n}\left|x_{k}\right|^{p}\right)^{\frac{1}{p}} \times\left(\sum_{k=1}^{n}\left|y_{k}\right|^{q}\right)^{\frac{1}{q}},
$$
for all $\left(x_{1}, \ldots, x_{n}\right),\left(y_{1}, \ldots, y_{n}\right) \in \mathbb{R}^{n}$.
- For a generalization of Hölder's inequality involving $n$ (instead of 2) Hölder conjugates, see http://en.wikipedia.org/wiki/Hölder's_inequality.
## Exercise 4.106 - Hölder's moment inequality
Prove Theorem 4.104, by using the special case of Young's inequality (4.67), considering $a=\frac{|X|}{E^{\frac{1}{p}}\left(|X|^{p}\right)}$ and $b=\frac{|Y|}{E^{\frac{1}{q}}\left(|Y|^{q}\right)}$, taking expectations in (4.67), and using the fact that $\frac{1}{p}+\frac{1}{q}=1$ (Karr, 1993, p. 120).
#### Cauchy-Schwarz's moment inequality
A special case of Hölder's moment inequality - $p=q=2$ - is nothing but the Cauchy-Schwarz's moment inequality.
In mathematics, the Cauchy-Schwarz inequality ${ }^{19}$ is a useful inequality encountered in many different settings, such as linear algebra applied to vectors, in analysis applied to infinite series and integration of products, and in probability theory, applied to variances and covariances.
The inequality for sums was published by Augustin Cauchy in 1821, while the corresponding inequality for integrals was first stated by Viktor Yakovlevich Bunyakovsky in 1859 and rediscovered by Hermann Amandus Schwarz in 1888 (often misspelled "Schwartz").
Corollary 4.107 - Cauchy-Schwarz's moment inequality (Karr, 1993, p. 120) Let $X, Y \in L^{2}$. Then
$$
\begin{aligned}
& X \times Y \in L^{1} \\
& E(|X \times Y|) \leq \sqrt{E\left(|X|^{2}\right) \times E\left(|Y|^{2}\right)}
\end{aligned}
$$
## Remarks 4.108 - Cauchy-Schwarz's moment inequality
(http://en.wikipedia.org/wiki/Cauchy-Schwarz_inequality)
- In the Euclidean space $\mathbb{R}^{n}$ with the standard inner product, the Cauchy-Schwarz's inequality is
$$
\left(\sum_{i=1}^{n} x_{i} \times y_{i}\right)^{2} \leq\left(\sum_{i=1}^{n} x_{i}^{2}\right) \times\left(\sum_{i=1}^{n} y_{i}^{2}\right) .
$$
- The triangle inequality for the inner product is often shown as a consequence of the Cauchy-Schwarz inequality, as follows: given vectors $\underline{x}$ and $\underline{y}$, we have
$$
\begin{aligned}
\|x+y\|^{2} & =\langle x+y, x+y\rangle \\
& =\|x\|^{2}+2\langle x, y\rangle+\|y\|^{2} \\
& \leq\|x\|^{2}+2|\langle x, y\rangle|+\|y\|^{2} \\
& \leq\|x\|^{2}+2\|x\| \times\|y\|+\|y\|^{2} \\
& =(\|x\|+\|y\|)^{2} .
\end{aligned}
$$
${ }^{19}$ Also known as the Bunyakovsky inequality, the Schwarz inequality, or the Cauchy-Bunyakovsky-Schwarz inequality (http://en.wikipedia.org/wiki/Cauchy-Schwarz_inequality).

Exercise 4.109 - Confronting the squared covariance and the product of the variances of two r.v.
Prove that
$$
|\operatorname{cov}(X, Y)|^{2} \leq V(X) \times V(Y)
$$
where $X, Y \in L^{2}$ (http://en.wikipedia.org/wiki/Cauchy-Schwarz's_inequality).
#### Lyapunov's moment inequality
The spaces $L^{p}$ decrease as $p$ increases in $[1,+\infty)$. Moreover, $E\left(|X|^{p}\right)^{\frac{1}{p}}$ is an increasing function of $p \in[1,+\infty)$.
The following moment inequality is a special case of Hölder's inequality and is due to Aleksandr Mikhailovich Lyapunov (1857-1918), a Russian mathematician, mechanician and physicist. ${ }^{20}$
Corollary 4.110 - Lyapunov's moment inequality (Karr, 1993, p. 120) Let $X \in L^{s}$, where $1 \leq r \leq s$. Then
$$
\begin{aligned}
& L^{s} \subseteq L^{r} \\
& E^{\frac{1}{r}}\left(|X|^{r}\right) \leq E^{\frac{1}{s}}\left(|X|^{s}\right) .
\end{aligned}
$$
## Remarks 4.111 - Lyapunov's moment inequality
- Taking $s=2$ and $r=1$ we can conclude that
$$
E^{2}(|X|) \leq E\left(X^{2}\right)
$$
This result is not correctly stated in Karr (1993, p. 121) and can be also deduced from the Cauchy-Schwarz's inequality, as well as from Jensen's inequality, stated below.
- Rohatgi (1976, p. 103) states Liapunov's inequality in a slightly different way. It can be put as follows: for $X \in L^{k}, k \in[1,+\infty)$,
$$
E^{\frac{1}{k}}\left(|X|^{k}\right) \leq E^{\frac{1}{k+1}}\left(|X|^{k+1}\right)
$$
- The equality in (4.78) holds iff $X$ is a degenerate r.v., i.e. $X \stackrel{d}{=} c$, where $c$ is a real constant (Rohatgi, 1976, p. 103).

${ }^{20}$ Lyapunov is known for his development of the stability theory of a dynamical system, as well as for his many contributions to mathematical physics and probability theory (http://en.wikipedia.org/wiki/Aleksandr_Lyapunov).
## Exercise 4.112 - Lyapunov's moment inequality
Prove Corollary 4.110, by applying Hölder's inequality to $X^{\prime}=X^{r}$, where $X \in L^{r}$, and to $Y \stackrel{d}{=} 1$, and considering $p=\frac{s}{r}$ (Karr, 1993, pp. 120-121). ${ }^{21}$
#### Minkowski's moment inequality
The Minkowski's moment inequality establishes that the $L^{p}$ spaces are vector spaces.
Theorem 4.113 - Minkowski's moment inequality (Karr, 1993, p. 121)
Let $X, Y \in L^{p}, p \in[1,+\infty)$. Then
$$
\begin{aligned}
& X+Y \in L^{p} \\
& E^{\frac{1}{p}}\left(|X+Y|^{p}\right) \leq E^{\frac{1}{p}}\left(|X|^{p}\right)+E^{\frac{1}{p}}\left(|Y|^{p}\right) .
\end{aligned}
$$
## Remarks 4.114 - Minkowski's moment inequality
(http://en.wikipedia.org/wiki/Minkowski_inequality)
- The Minkowski inequality is the triangle inequality ${ }^{22}$ in $L^{p}$.
- Like Hölder's inequality, the Minkowski's inequality can be specialized to (sequences and) vectors by using the counting measure:
$$
\left(\sum_{k=1}^{n}\left|x_{k}+y_{k}\right|^{p}\right)^{\frac{1}{p}} \leq\left(\sum_{k=1}^{n}\left|x_{k}\right|^{p}\right)^{\frac{1}{p}}+\left(\sum_{k=1}^{n}\left|y_{k}\right|^{p}\right)^{\frac{1}{p}}
$$
for all $\left(x_{1}, \ldots, x_{n}\right),\left(y_{1}, \ldots, y_{n}\right) \in \mathbb{R}^{n}$.
## Exercise 4.115 - Minkowski's moment inequality
Prove Theorem 4.113, by applying the triangle inequality followed by Hölder's inequality, and using the facts that $q(p-1)=p$ and $1-\frac{1}{q}=\frac{1}{p}$ (Karr, 1993, p. 121).
${ }^{21}$ Rohatgi (1976, p. 103) provides an alternative proof.
${ }^{22}$ The real line is a normed vector space with the absolute value as the norm, and so the triangle inequality states that $|x+y| \leq|x|+|y|$, for any real numbers $x$ and $y$. The triangle inequality is useful in mathematical analysis for determining the best upper estimate on the size of the sum of two numbers, in terms of the sizes of the individual numbers (http://en.wikipedia.org/wiki/Triangle_inequality).
#### Jensen's moment inequality
Jensen's inequality, named after the Danish mathematician and engineer Johan Jensen (1859-1925), relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906.
Given its generality, the inequality appears in many forms depending on the context. In its simplest form the inequality states that
- the convex transformation of a mean is less than or equal to the mean after convex transformation.
It is a simple corollary that the opposite is true of concave transformations (http://en.wikipedia.org/wiki/Jensen's_inequality).
Theorem 4.116 - Jensen's moment inequality (Karr, 1993, p. 121) Let $g$ be convex and assume that $X, g(X) \in L^{1}$. Then
$$
g[E(X)] \leq E[g(X)]
$$
## Corollary 4.117 - Jensen's moment inequality for concave functions
Let $g$ be concave and assume that $X, g(X) \in L^{1}$. Then
$$
g[E(X)] \geq E[g(X)]
$$
Remarks 4.118 - Jensen's (moment) inequality (http://en.wikipedia.org/wiki/Jensen's_inequality)
- A proof of Jensen's inequality can be provided in several ways. However, it is worth analyzing an intuitive graphical argument based on the probabilistic case where $X$ is a real r.v.
Assuming a hypothetical distribution of $X$ values, one can immediately identify the position of $E(X)$ and its image $g[E(X)]$ in the graph.
Noticing that for convex mappings $Y=g(X)$ the corresponding distribution of $Y$ values is increasingly "stretched out" for increasing values of $X$, the expectation of $Y=g(X)$ will always shift upwards with respect to the position of $g[E(X)]$, and this "proves" the inequality.
- For a real convex function $g$, numbers $x_{1}, x_{2}, \ldots, x_{n}$ in its domain, and positive weights $a_{i}, i=1, \ldots, n$, Jensen's inequality can be stated as:
$$
g\left(\frac{\sum_{i=1}^{n} a_{i} \times x_{i}}{\sum_{i=1}^{n} a_{i}}\right) \leq \frac{\sum_{i=1}^{n} a_{i} \times g\left(x_{i}\right)}{\sum_{i=1}^{n} a_{i}} .
$$
The inequality is reversed if $g$ is concave.
- As a particular case, if the weights $a_{i}=1, i=1, \ldots, n$, then
$$
g\left(\frac{1}{n} \sum_{i=1}^{n} x_{i}\right) \leq \frac{1}{n} \sum_{i=1}^{n} g\left(x_{i}\right) \Leftrightarrow g(\bar{x}) \leq \overline{g(x)} .
$$
- For instance, considering $g(x)=\log (x)$, which is a concave function, we can establish the arithmetic mean-geometric mean inequality: ${ }^{23}$ for any list of $n$ non negative real numbers $x_{1}, x_{2}, \ldots, x_{n}$,
$$
\bar{x}=\frac{x_{1}+x_{2}+\ldots+x_{n}}{n} \geq \sqrt[n]{x_{1} \times x_{2} \times \ldots \times x_{n}}=m_{g} .
$$
Moreover, equality in (4.88) holds iff $x_{1}=x_{2}=\ldots=x_{n}$.
## Exercise 4.119 - Jensen's moment inequality (for concave functions)
Prove Theorem 4.116 (Karr, 1993, pp. 121-122), Corollary 4.117 and Equation (4.88).
Exercise 4.120 - Jensen's inequality and the distance between the mean and the median
Prove that for any r.v. having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation:
$$
|E(X)-\operatorname{med}(X)| \leq \sqrt{V(X)}
$$
by using Jensen's inequality twice - applied to the absolute value function and to the square root function ${ }^{24}$ (http://en.wikipedia.org/wiki/Chebyshev's_inequality).
${ }^{23}$ See http://en.wikipedia.org/wiki/AM-GM_inequality.
${ }^{24}$ In this last case we should apply the concave version of Jensen's inequality.
#### Chebyshev's inequality
Curiously, Chebyshev's inequality is named after the Russian mathematician Pafnuty Lvovich Chebyshev (1821-1894), although it was first formulated by his friend and French colleague Irénée-Jules Bienaymé (1796-1878), according to http://en.wikipedia.org/wiki/Chebyshev's_inequality.
In probability theory, the Chebyshev's inequality, ${ }^{25}$ in the most usual version - what Karr (1993, p. 122) calls the Bienaymé-Chebyshev's inequality - can be ultimately stated as follows:
- no more than $\frac{1}{k^{2}} \times 100 \%$ of the values of the r.v. $X$ are more than $k$ standard deviations away from the expected value of $X$.
Theorem 4.121 - Chebyshev's inequality (Karr, 1993, p. 122) Let:
- $X$ be a non negative r.v.;
- $g$ be a non negative and increasing function on $\mathbb{R}^{+}$;
- $a>0$.
Then
$$
P(\{X \geq a\}) \leq \frac{E[g(X)]}{g(a)} .
$$
Exercise 4.122 - Chebyshev's inequality
Prove Theorem 4.121 (Karr, 1993, p. 122).
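An illustration of Theorem 4.121 with an assumed example - $X \sim \operatorname{Exponential}(1)$, $g(x)=x^{2}$ (non negative and increasing on $\mathbb{R}^{+}$) and $a=3$:

```python
# P({X >= a}) <= E[g(X)] / g(a) = E(X**2) / a**2 = 2 / 9 for X ~ Exponential(1),
# while the exact tail probability is e^{-a}.
import math

a = 3.0
exact = math.exp(-a)     # P({X >= 3}) = e^{-3} for Exponential(1)
bound = 2.0 / a**2       # E(X^2) = 2 for Exponential(1)
print(exact, bound)      # 0.0498 <= 0.2222, as guaranteed
```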
Remarks 4.123 - Several cases of Chebyshev's inequality (Karr, 1993, p. 122)
- Chernoff's inequality ${ }^{26}$
$$
X \geq 0, a, t>0 \Rightarrow P(\{X \geq a\}) \leq \frac{E\left(e^{t X}\right)}{e^{t a}}
$$
- Markov's inequalities
$$
\begin{aligned}
& X \in L^{1}, a>0 \Rightarrow P(\{|X| \geq a\}) \leq \frac{E[|X|]}{a} \\
& X \in L^{p}, a>0 \Rightarrow P(\{|X| \geq a\}) \leq \frac{E\left[|X|^{p}\right]}{a^{p}}
\end{aligned}
$$
${ }^{25}$ Also known as Tchebysheff's inequality, Chebyshev's theorem, or the Bienaymé-Chebyshev's inequality (http://en.wikipedia.org/wiki/Chebyshev's_inequality).
${ }^{26}$ Karr (1993) does not mention this inequality. For more details see http://en.wikipedia.org/wiki/Chernoff's_inequality.
- Chebyshev-Bienaymé's inequalities
$$
\begin{aligned}
& X \in L^{2}, a>0 \Rightarrow P(\{|X-E(X)| \geq a\}) \leq \frac{V(X)}{a^{2}} \\
& X \in L^{2}, a>0 \Rightarrow P(\{|X-E(X)| \geq a \sqrt{V(X)}\}) \leq \frac{1}{a^{2}}
\end{aligned}
$$
- Cantelli's inequality
$$
X \in L^{2}, a>0 \Rightarrow P(\{|X-E(X)| \geq a\}) \leq \frac{2 V(X)}{a^{2}+V(X)}
$$
- One-sided Chebyshev's inequality
$X \in L^{2}, a>0 \Rightarrow P(\{X-E(X) \geq a \sqrt{V(X)}\}) \leq \frac{1}{1+a^{2}}$
According to http://en.wikipedia.org/wiki/Chebyshev's_inequality, the one-sided version of the Chebyshev inequality is also called Cantelli's inequality, and is due to the Italian mathematician Francesco Paolo Cantelli (1875-1966).
## Remark 4.124 - Chebyshev(-Bienaymé)'s inequality
(http://en.wikipedia.org/wiki/Chebyshev's_inequality)
The Chebyshev(-Bienaymé)'s inequality can be useful despite loose bounds because it applies to random variables of any distribution, and because these bounds can be calculated knowing no more about the distribution than the mean and variance.
## Exercise 4.125 - Chebyshev(-Bienaymé)'s inequality
Assume that we have a large body of text, for example articles from a publication and that we know that the articles are on average 1000 characters long with a standard deviation of 200 characters.
(a) Prove that from the Chebyshev(-Bienaymé)'s inequality we can then infer that the chance that a given article is between 600 and 1400 characters would be at least $75 \%$.
(b) The inequality is coarse: a more accurate guess would be possible if the distribution of the length of the articles is known.
Demonstrate that, for example, a normal distribution would yield a $75 \%$ chance of an article being between 770 and 1230 characters long.
(http://en.wikipedia.org/wiki/Chebyshev's_inequality.)
## Exercise 4.126 - Chebyshev(-Bienaymé)'s inequality
Let $X \sim \operatorname{Uniform}(0,1)$.
(a) Obtain $P\left(\left\{\left|X-\frac{1}{2}\right|<2 \sqrt{1 / 12}\right\}\right)$.
(b) Obtain a lower bound for $P\left(\left\{\left|X-\frac{1}{2}\right|<2 \sqrt{1 / 12}\right\}\right)$, by noting that $E(X)=\frac{1}{2}$ and $V(X)=\frac{1}{12}$. Compare this bound with the value you obtained in (a).
(Rohatgi, 1976, p. 101.)
## Exercise 4.127 - Meeting the Chebyshev(-Bienaymé)'s bounds exactly
Typically, the Chebyshev(-Bienaymé)'s inequality will provide rather loose bounds.

(a) Prove that these bounds cannot be improved upon for the r.v. $X$ with p.f.
$$
P(\{X=x\})= \begin{cases}P(\{X=-1\})=\frac{1}{2 k^{2}}, & x=-1 \\ P(\{X=0\})=1-\frac{1}{k^{2}}, & x=0 \\ P(\{X=1\})=\frac{1}{2 k^{2}}, & x=1 \\ 0, & \text { otherwise }\end{cases}
$$
where $k>1$, that is, $P(|X-E(X)| \geq k \sqrt{V(X)})=\frac{1}{k^{2}}$. (For more details see http://en.wikipedia.org/wiki/Chebyshev's_inequality.) ${ }^{27}$
(b) Prove that equality holds exactly for any r.v. $Y$ that is a linear transformation of $X .^{28}$
Remark 4.128 - Chebyshev(-Bienaymé)'s inequality and the weak law of large numbers (http://en.wikipedia.org/wiki/Chebyshev's_inequality)
Chebyshev(-Bienaymé)'s inequality is used for proving the following version of the weak law of large numbers: when dealing with a sequence of i.i.d. r.v., $X_{1}, X_{2}, \ldots$, with finite expected value and variance $\left(\mu, \sigma^{2}<+\infty\right)$,
$$
\lim _{n \rightarrow+\infty} P\left(\left\{\left|\bar{X}_{n}-\mu\right|<\epsilon\right\}\right)=1
$$
where $\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i}$. That is, $\bar{X}_{n} \stackrel{P}{\rightarrow} \mu$ as $n \rightarrow+\infty$.
${ }^{27}$ This is the answer to Exercise 4.36 from Karr (1993, p. 133).
${ }^{28}$ Inequality holds for any r.v. that is not a linear transformation of $X$ (http://en.wikipedia.org/wiki/Chebyshev's_inequality).

Exercise 4.129 - Chebyshev(-Bienaymé)'s inequality and the weak law of large numbers
Use Chebyshev(-Bienaymé)'s inequality to prove the weak law of large numbers stated in Remark 4.128.
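A small simulation - an illustration, not a proof - of the weak law in Remark 4.128, with Uniform$(0,1)$ summands chosen purely for convenience:

```python
# Empirical frequency of {|mean - mu| >= eps} versus the Chebyshev bound
# sigma^2 / (n eps^2) for i.i.d. Uniform(0,1) r.v. (mu = 1/2, sigma^2 = 1/12).
import random

random.seed(5)
eps, sigma2, mu = 0.05, 1 / 12, 0.5
for n in (10, 100, 1000):
    reps = 2000
    bad = sum(abs(sum(random.random() for _ in range(n)) / n - mu) >= eps
              for _ in range(reps))
    print(n, bad / reps, sigma2 / (n * eps**2))  # frequency stays below the bound
```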
Exercise 4.130 - Cantelli's inequality (Karr, 1993, p. 132, Exercise 4.30)
When does $P(\{|X-E(X)| \geq a\}) \leq \frac{2 V(X)}{a^{2}+V(X)}$ give a better bound than Chebyshev(-Bienaymé)'s inequality?
Exercise 4.131 - One-sided Chebyshev's inequality and the distance between the mean and the median
Use the one-sided Chebyshev's inequality to prove that for any r.v. having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation, i.e.
$$
|E(X)-\operatorname{med}(X)| \leq \sqrt{V(X)}
$$
(http://en.wikipedia.org/wiki/Chebyshev's_inequality).
### Moments
## Motivation 4.132 - Moments of r.v.
The nature of a r.v. can be partially described in terms of a number of features - such as the expected value, the variance, the skewness, the kurtosis, etc. - that can be written in terms of expectations of powers of $X$, the moments of a r.v.
#### Moments of r.v.
Definition 4.133 - $k$ th. moment and $k$ th. central moment of a r.v. (Karr, 1993, p. 123)
Let
- $X$ be a r.v. such that $X \in L^{k}$, for some $k \in \mathbb{N}$.
Then:
- the $k$ th. moment of $X$ is given by the Riemann-Stieltjes integral
$$
E\left(X^{k}\right)=\int_{-\infty}^{\infty} x^{k} d F_{X}(x)
$$
- similarly, the $k$ th. central moment of $X$ equals
$$
E\left\{[X-E(X)]^{k}\right\}=\int_{-\infty}^{+\infty}[x-E(X)]^{k} d F_{X}(x)
$$
Remarks 4.134 $-k$ th. moment and $k$ th. central moment of a r.v. (Karr, 1993, p. 123; http://en.wikipedia.org/wiki/Moment_(mathematics))
- The $k$ th. central moment exists under the assumption that $X \in L^{k}$ because $L^{k} \subseteq L^{1}$, for any $k \in \mathbb{N}$ (a consequence of Lyapunov's inequality).
- If the $k$ th. (central) moment exists ${ }^{29}$ so does the $(k-1)$ th. (central) moment, and all lower-order moments. This is another consequence of Lyapunov's inequality.
- If $X \in L^{1}$ the first moment is the expectation of $X$; the first central moment is thus null. In higher orders, the central moments are more interesting than the moments about zero.
${ }^{29}$ Or the $k$ th. moment about any point exists. Note that the $k$ th. central moment is nothing but the $k$ th. moment about $E(X)$.

Proposition 4.135 - Computing the $k$ th. moment of a non negative r.v.
If $X$ is a non negative r.v. and $X \in L^{k}$, for $k \in \mathbb{N}$, we can write the $k$ th. moment of $X$ in terms of the following Riemann integral:
$$
E\left(X^{k}\right)=\int_{0}^{\infty} k x^{k-1} \times\left[1-F_{X}(x)\right] d x .
$$
Exercise 4.136 - Computing the $k$ th. moment of a non negative r.v. Prove Proposition 4.135.
Exercise 4.137 - Computing the $k$ th. moment of a non negative r.v. Let $X \sim \operatorname{Exponential}(\lambda)$. Use Proposition 4.135 to prove that $E\left(X^{k}\right)=\frac{\Gamma(k+1)}{\lambda^{k}}$, for any $k \in \mathbb{N}$.
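A crude numerical check of Proposition 4.135 - the exponential example and the choice $k=3$ are our own:

```python
# E(X**3) = int_0^inf 3 x**2 (1 - F_X(x)) dx should equal Gamma(4) / lam**3
# for X ~ Exponential(lam), with 1 - F_X(x) = e^{-lam x}.
import math

lam, k = 2.0, 3
tail = lambda x: math.exp(-lam * x)   # 1 - F_X(x)

dx, upper = 1e-3, 20.0
integral = sum(k * (i * dx)**(k - 1) * tail(i * dx) * dx
               for i in range(int(upper / dx)))
print(integral, math.gamma(k + 1) / lam**k)   # both approximately 0.75
```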
Exercise 4.138 - The median of a r.v. and the minimization of the expected absolute deviation (Karr, 1993, p. 130, Exercise 4.1)
The median of the r.v. $X$, $\operatorname{med}(X)$, is such that $P(\{X \leq \operatorname{med}(X)\}) \geq \frac{1}{2}$ and $P(\{X \geq \operatorname{med}(X)\}) \geq \frac{1}{2}$.
Prove that if $X \in L^{1}$ then
$$
E(|X-\operatorname{med}(X)|) \leq E(|X-a|)
$$
for all $a \in \mathbb{R}$.
Exercise 4.139 - Minimizing the mean squared error (Karr, 1993, p. 131, Exercise 4.12)
Let $\left\{A_{1}, \ldots, A_{n}\right\}$ be a finite partition of $\Omega$. Suppose that we know which of $A_{1}, \ldots, A_{n}$ has occurred, and wish to predict whether some other event $B$ has occurred. Since we know the values of the indicator functions $\mathbf{1}_{A_{1}}, \ldots, \mathbf{1}_{A_{n}}$, it makes sense to use a predictor that is a function of them, namely a linear predictor of the form $Y=\sum_{i=1}^{n} a_{i} \times \mathbf{1}_{A_{i}}$, whose accuracy is assessed via the mean squared error:
$$
\operatorname{MSE}(Y)=E\left[\left(\mathbf{1}_{B}-Y\right)^{2}\right]
$$
Prove that the values $a_{i}=P\left(B \mid A_{i}\right), i=1, \ldots, n$, minimize $\operatorname{MSE}(Y)$.

Exercise 4.140 - Expectation of a r.v. with respect to a conditional probability function (Karr, 1993, p. 132, Exercise 4.20)
Let $A$ be an event such that $P(A)>0$.
Show that if $X$ is positive or integrable then $E(X \mid A)$, the expectation of $X$ with respect to the conditional probability function $P_{A}(B)=P(B \mid A)$, is given by
$$
E(X \mid A) \stackrel{\text { def }}{=} \frac{E(X ; A)}{P(A)},
$$
where $E(X ; A)=E\left(X \times \mathbf{1}_{A}\right)$ represents the expectation of $X$ over the event $A$.
#### Variance and standard deviation
Definition 4.141 - Variance and standard deviation of a r.v. (Karr, 1993, p. 124)
Let $X \in L^{2}$. Then:
- the 2nd. central moment is the variance of $X$,
$$
V(X)=E\left\{[X-E(X)]^{2}\right\}
$$
- the positive square root of the variance is the standard deviation of $X$,
$$
S D(X)=+\sqrt{V(X)} .
$$
Remark 4.142 - Computing the variance of a r.v. (Karr, 1993, p. 124) The variance of a r.v. $X \in L^{2}$ can also be expressed as
$$
V(X)=E\left(X^{2}\right)-E^{2}(X)
$$
which is more convenient than (4.100) for computational purposes.
Exercise 4.143 - The meaning of a null variance (Karr, 1993, p. 131, Exercise 4.19)
Prove that if $V(X)=0$ then $X \stackrel{\text { a.s. }}{=} E(X)$.
Exercise 4.144 - Comparing the variance of $X$ and $\min \{X, c\}$ (Karr, 1993, p. 133, Exercise 4.32)
Let $X$ be a r.v. such that $E\left(X^{2}\right)<+\infty$ and $c$ a real constant. Prove that
$$
V(\min \{X, c\}) \leq V(X) .
$$
Proposition 4.145 - Variance of the sum (or difference) of two independent r.v. (Karr, 1993, p. 124)
If $X, Y \in L^{2}$ are independent then
$$
V(X+Y)=V(X-Y)=V(X)+V(Y) .
$$
Exercise 4.146 - Expected values and variances of some important r.v. (Karr, 1993, pp. 125 and 130, Exercise 4.1)
Verify the entries of the following table.
| Distribution | Parameters | Expected value | Variance |
| :---: | :---: | :---: | :---: |
| Discrete Uniform $\left(\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}\right)$ | $\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}$ | $\frac{1}{n} \sum_{i=1}^{n} x_{i}$ | $\left(\frac{1}{n} \sum_{i=1}^{n} x_{i}^{2}\right)-\left(\frac{1}{n} \sum_{i=1}^{n} x_{i}\right)^{2}$ |
| $\operatorname{Bernoulli}(p)$ | $p \in[0,1]$ | $p$ | $p(1-p)$ |
| $\operatorname{Binomial}(n, p)$ | $n \in \mathbb{N} ; p \in[0,1]$ | $n p$ | $n p(1-p)$ |
| Hypergeometric $(N, M, n)$ | $\begin{array}{l}N \in \mathbb{N} \\ M \in \mathbb{N}, M \leq N \\ n \in \mathbb{N}, n \leq N\end{array}$ | $n \frac{M}{N}$ | $n \frac{M}{N}\left(1-\frac{M}{N}\right) \frac{N-n}{N-1}$ |
| $\operatorname{Geometric}(p)$ | $p \in[0,1]$ | $\frac{1}{p}$ | $\frac{1-p}{p^{2}}$ |
| $\operatorname{Poisson}(\lambda)$ | $\lambda \in \mathbb{R}^{+}$ | $\lambda$ | $\lambda$ |
| $\operatorname{NegativeBinomial}(r, p)$ | $r \in \mathbb{N} ; p \in[0,1]$ | $\frac{r}{p}$ | $\frac{r(1-p)}{p^{2}}$ |
| Uniform $(a, b)$ | $a, b \in \mathbb{R}, a<b$ | $\begin{array}{l}\frac{a+b}{2}\end{array}$ | $\begin{array}{l}\frac{(b-a)^{2}}{12}\end{array}$ |
| $\operatorname{Normal}\left(\mu, \sigma^{2}\right)$ | $\mu \in \mathbb{R} ; \sigma^{2} \in \mathbb{R}^{+}$ | $\mu$ | $\sigma^{2}$ |
| $\operatorname{Lognormal}\left(\mu, \sigma^{2}\right)$ | $\mu \in \mathbb{R} ; \sigma^{2} \in \mathbb{R}^{+}$ | $e^{\mu+\frac{1}{2} \sigma^{2}}$ | $\left(e^{\sigma^{2}}-1\right) e^{2 \mu+\sigma^{2}}$ |
| $\operatorname{Exponential}(\lambda)$ | $\lambda \in \mathbb{R}^{+}$ | $\frac{1}{\lambda}$ | $\frac{1}{\lambda^{2}}$ |
| $\operatorname{Gamma}(\alpha, \beta)$ | $\alpha, \beta \in \mathbb{R}^{+}$ | $\frac{\alpha}{\beta}$ | $\frac{\alpha}{\beta^{2}}$ |
| $\operatorname{Beta}(\alpha, \beta)$ | $\alpha, \beta \in \mathbb{R}^{+}$ | $\frac{\alpha}{\alpha+\beta}$ | $\frac{\alpha \beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}$ |
| $\operatorname{Weibull}(\alpha, \beta)$ | $\alpha, \beta \in \mathbb{R}^{+}$ | $\alpha \Gamma\left(1+\frac{1}{\beta}\right)$ | $\alpha^{2}\left[\Gamma\left(1+\frac{2}{\beta}\right)-\Gamma^{2}\left(1+\frac{1}{\beta}\right)\right]$ |
## Definition 4.147 - Normalized (central) moments of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
Let $X$ be a r.v. such that $X \in L^{k}$, for some $k \in \mathbb{N}$. Then:
- the normalized $k$ th. moment of the $X$ is the $k$ th. moment divided by $[S D(X)]^{k}$,
$$
\frac{E\left(X^{k}\right)}{[S D(X)]^{k}} ;
$$
- the normalized $k$ th. central moment of $X$ is given by
$$
\frac{E\left\{[X-E(X)]^{k}\right\}}{[S D(X)]^{k}} ;
$$
These normalized central moments are dimensionless quantities, which represent the distribution independently of any linear change of scale.
#### Skewness and kurtosis
## Motivation 4.148 - Skewness of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
Any r.v. $X \in L^{3}$ with a symmetric p.(d.)f. will have a null 3rd. central moment. Thus, the 3rd. central moment is a measure of the lopsidedness of the distribution.
## Definition 4.149 - Skewness of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
Let $X \in L^{3}$ be a r.v. Then the normalized 3rd. central moment is called the skewness - or skewness coefficient $(\mathrm{SC})$:
$$
S C(X)=\frac{E\left\{[X-E(X)]^{3}\right\}}{[S D(X)]^{3}} .
$$
## Remark 4.150 - Skewness of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
- A r.v. $X$ that is skewed to the left (the tail of the p.(d.)f. is longer on the left) will have a negative skewness.
- A r.v. that is skewed to the right (the tail of the p.(d.)f. is longer on the right) will have a positive skewness.
## Exercise 4.151 - Skewness of a r.v.
Prove that the skewness of:
(a) $X \sim \operatorname{Exponential}(\lambda)$ equals $S C(X)=2$
(http://en.wikipedia.org/wiki/Exponential_distribution);
(b) $X \sim \operatorname{Pareto}(b, \alpha)$ is given by $S C(X)=\frac{2(1+\alpha)}{\alpha-3} \sqrt{\frac{\alpha-2}{\alpha}}$, for $\alpha>3$
(http://en.wikipedia.org/wiki/Pareto_distribution).
## Motivation 4.152 - Kurtosis of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
The normalized 4th. central moment of any normal distribution is 3. Consequently, the normalized 4th. central moment is a measure of whether the distribution is tall and skinny or short and squat, compared to the normal distribution of the same variance.
## Definition 4.153 - Kurtosis of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
Let $X \in L^{4}$ be a r.v. Then the kurtosis - or kurtosis coefficient (KC) - is defined to be the normalized 4 th. central moment minus $3,{ }^{30}$
$$
K C(X)=\frac{E\left\{[X-E(X)]^{4}\right\}}{[S D(X)]^{4}}-3 .
$$
## Remarks 4.154 - Kurtosis of a r.v.
(http://en.wikipedia.org/wiki/Moment_(mathematics))
- If the p.(d.)f. of the r.v. $X$ has a peak at the expected value and long tails, the 4 th. moment will be high and the kurtosis positive. Bounded distributions tend to have low kurtosis.
- $K C(X)$ must be greater than or equal to $[S C(X)]^{2}-2$; equality only holds for Bernoulli distributions (prove!).
- For unbounded skew distributions not too far from normal, $K C(X)$ tends to be somewhere between $[S C(X)]^{2}$ and $2 \times[S C(X)]^{2}$.
${ }^{30}$ Some authors do not subtract three.
## Exercise 4.155 - Kurtosis of a r.v.
Prove that the kurtosis of:
(a) $X \sim \operatorname{Exponential}(\lambda)$ equals $K C(X)=6$
(http://en.wikipedia.org/wiki/Exponential_distribution);
(b) $X \sim \operatorname{Pareto}(b, \alpha)$ is given by $\frac{6\left(\alpha^{3}+\alpha^{2}-6 \alpha-2\right)}{\alpha(\alpha-3)(\alpha-4)}$ for $\alpha>4$
(http://en.wikipedia.org/wiki/Pareto_distribution).
#### Covariance
Motivation 4.156 - Covariance (and correlation) between two r.v.
It is crucial to obtain measures of how much two variables change together, namely absolute and relative measures of (linear) association between pairs of r.v.
Definition 4.157 - Covariance between two r.v. (Karr, 1993, p. 125)
Let $X, Y \in L^{2}$ be two r.v. Then the covariance between $X$ and $Y$ is equal to
$$
\begin{aligned}
\operatorname{cov}(X, Y) & =E\{[X-E(X)] \times[Y-E(Y)]\} \\
& =E(X Y)-E(X) E(Y)
\end{aligned}
$$
(this last formula is more useful for computational purposes, prove it!)
## Remark 4.158 - Covariance between two r.v.
(http://en.wikipedia.org/wiki/Covariance)
The units of measurement of the covariance between the r.v. $X$ and $Y$ are those of $X$ times those of $Y$.
## Proposition 4.159 - Properties of the covariance
Let $X, Y, Z \in L^{2}, X_{1}, \ldots, X_{n} \in L^{2}, Y_{1}, \ldots, Y_{n} \in L^{2}$, and $a, b \in \mathbb{R}$. Then:
1. $X \Perp Y \Rightarrow \operatorname{cov}(X, Y)=0$
2. $\operatorname{cov}(X, Y)=0 \nRightarrow X \Perp Y$
3. $\operatorname{cov}(X, Y) \neq 0 \Rightarrow X \not \Perp Y$
4. $\operatorname{cov}(X, Y)=\operatorname{cov}(Y, X)$ (symmetric operator!)
5. $\operatorname{cov}(X, X)=V(X) \geq 0$ and $V(X)=0 \Rightarrow X \stackrel{\text { a.s. }}{=} E(X)$ (positive semi-definite operator!)
6. $\operatorname{cov}(a X, b Y)=a b \operatorname{cov}(X, Y)$
7. $\operatorname{cov}(X+a, Y+b)=\operatorname{cov}(X, Y)$
8. $\operatorname{cov}(a X+b Y, Z)=a \operatorname{cov}(X, Z)+b \operatorname{cov}(Y, Z)$ (bilinear operator!)
9. $\operatorname{cov}\left(\sum_{i=1}^{n} X_{i}, \sum_{j=1}^{n} Y_{j}\right)=\sum_{i=1}^{n} \sum_{j=1}^{n} \operatorname{cov}\left(X_{i}, Y_{j}\right)$
10. $\operatorname{cov}\left(\sum_{i=1}^{n} X_{i}, \sum_{j=1}^{n} X_{j}\right)=\sum_{i=1}^{n} V\left(X_{i}\right)+2 \times \sum_{i=1}^{n} \sum_{j=i+1}^{n} \operatorname{cov}\left(X_{i}, X_{j}\right)$.
## Exercise 4.160 - Covariance
Prove properties 6 through 10 from Proposition 4.159.
Proposition 4.161 - Variance of some linear combinations of r.v. Let $X_{1}, \ldots, X_{n} \in L^{2}$. Then:
$$
\begin{aligned}
V\left(c_{1} X_{1}+c_{2} X_{2}\right) & =c_{1}^{2} V\left(X_{1}\right)+c_{2}^{2} V\left(X_{2}\right)+2 c_{1} c_{2} \operatorname{cov}\left(X_{1}, X_{2}\right) \\
V\left(X_{1}+X_{2}\right) & =V\left(X_{1}\right)+V\left(X_{2}\right)+2 \operatorname{cov}\left(X_{1}, X_{2}\right) \\
V\left(X_{1}-X_{2}\right) & =V\left(X_{1}\right)+V\left(X_{2}\right)-2 \operatorname{cov}\left(X_{1}, X_{2}\right) \\
V\left(\sum_{i=1}^{n} c_{i} X_{i}\right) & =\sum_{i=1}^{n} c_{i}^{2} V\left(X_{i}\right)+2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} c_{i} c_{j} \operatorname{cov}\left(X_{i}, X_{j}\right) .
\end{aligned}
$$
When we deal with uncorrelated r.v. - i.e., if $\operatorname{cov}\left(X_{i}, X_{j}\right)=0, \forall i \neq j$ - or with pairwise independent r.v. - that is, $X_{i} \Perp X_{j}, \forall i \neq j$ - we have:
$$
V\left(\sum_{i=1}^{n} c_{i} X_{i}\right)=\sum_{i=1}^{n} c_{i}^{2} V\left(X_{i}\right)
$$
And if, besides being uncorrelated or (pairwise) independent r.v., we have $c_{i}=1$, for $i=1, \ldots, n$, we get:
$$
V\left(\sum_{i=1}^{n} X_{i}\right)=\sum_{i=1}^{n} V\left(X_{i}\right),
$$
i.e. the variance of the sum of uncorrelated or (pairwise) independent r.v. is the sum of the individual variances.
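A simulation sketch of Proposition 4.161 under an assumed dependence structure:

```python
# With X1 ~ Uniform(0,1) and X2 = X1 + Uniform(0,1) (so X1, X2 are dependent),
# V(X1 + X2) should match V(X1) + V(X2) + 2 cov(X1, X2).
import random

random.seed(9)
n = 200_000
x1 = [random.random() for _ in range(n)]
x2 = [a + random.random() for a in x1]
s = [a + b for a, b in zip(x1, x2)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

print(var(s), var(x1) + var(x2) + 2 * cov(x1, x2))   # identical up to rounding
```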
#### Correlation
## Motivation 4.162 - Correlation between two r.v.
(http://en.wikipedia.org/wiki/Correlation_and_dependence)
The most familiar measure of dependence between two r.v. is (Pearson's) correlation. It is obtained by dividing the covariance between two variables by the product of their standard deviations.
Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. Moreover, correlations can also suggest possible causal, or mechanistic relationships.
Definition 4.163 - Correlation between two r.v. (Karr, 1993, p. 125)
Let $X, Y \in L^{2}$ be two r.v. Then the correlation ${ }^{31}$ between $X$ and $Y$ is given by
$$
\operatorname{corr}(X, Y)=\frac{\operatorname{cov}(X, Y)}{\sqrt{V(X) V(Y)}}
$$
## Remark 4.164 - Correlation between two r.v.
(http://en.wikipedia.org/wiki/Covariance)
Correlation is a dimensionless measure of linear dependence.
Definition 4.165 - Uncorrelated r.v. (Karr, 1993, p. 125)
Let $X, Y \in L^{2}$. Then if
$$
\operatorname{corr}(X, Y)=0
$$
$X$ and $Y$ are said to be uncorrelated r.v. ${ }^{32}$
Exercise 4.166 - Uncorrelated r.v. (Karr, 1993, p. 131, Exercise 4.14)
Give an example of r.v. $X$ and $Y$ that are uncorrelated but for which there is a function $g$ such that $Y=g(X)$.
Exercise 4.167 - Uncorrelated r.v. (Karr, 1993, p. 131, Exercise 4.18)
Prove that if $V, W \in L^{2}$ and $(V, W) \stackrel{d}{=}(-V, W)$ then $V$ and $W$ are uncorrelated.
${ }^{31}$ Also known as the Pearson's correlation coefficient (http://en.wikipedia.org/wiki/Correlation_and_dependence).
${ }^{32} X, Y \in L^{2}$ are said to be correlated r.v. if $\operatorname{corr}(X, Y) \neq 0$. Exercise 4.168 - Sufficient conditions to deal with uncorrelated sample mean and variance (Karr, 1993, p. 132, Exercise 4.28)
Let $X_{i} \stackrel{\text { i.i.d. }}{\sim} X, i=1, \ldots, n$, such that $E(X)=E\left(X^{3}\right)=0$.
Prove that the sample mean $\bar{X}=\frac{1}{n} \sum_{i=1}^{n} X_{i}$ and the sample variance $S^{2}=\frac{1}{n-1} \sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}$ are uncorrelated r.v.
## Proposition 4.169 - Properties of the correlation
Let $X, Y \in L^{2}$, and $a, b \in \mathbb{R}$. Then:
1. $X \Perp Y \Rightarrow \operatorname{corr}(X, Y)=0$
2. $\operatorname{corr}(X, Y)=0 \nRightarrow X \Perp Y$
3. $\operatorname{corr}(X, Y) \neq 0 \Rightarrow X \not \Perp Y$
4. $\operatorname{corr}(X, Y)=\operatorname{corr}(Y, X)$
5. $\operatorname{corr}(X, X)=1$
6. $\operatorname{corr}(a X, b Y)=\operatorname{corr}(X, Y)$, for $a b>0$
7. $-1 \leq \operatorname{corr}(X, Y) \leq 1$, for any pair of r.v. ${ }^{33}$
8. $\operatorname{corr}(X, Y)=-1 \Leftrightarrow Y \stackrel{\text { a.s. }}{=} a X+b, a<0$
9. $\operatorname{corr}(X, Y)=1 \Leftrightarrow Y \stackrel{\text { a.s. }}{=} a X+b, a>0$.
Exercise 4.170 - Properties of the correlation
Prove properties 7 through 9 from Proposition 4.169.
Exercise 4.171 - Negative linear association between three r.v. (Karr, 1993, p. 131, Exercise 4.17)
Prove that there are no r.v. $X, Y$ and $Z$ such that $\operatorname{corr}(X, Y)=\operatorname{corr}(Y, Z)=$ $\operatorname{corr}(Z, X)=-1$.
Remark 4.172 - Interpretation of the sign of a correlation
The sign of the correlation between $X$ and $Y$ should be interpreted as follows:
- if $\operatorname{corr}(X, Y)$ is "considerably" larger than zero (resp. smaller than zero) we can cautiously add that if $X$ increases then $Y$ "tends" to increase (resp. decrease).
${ }^{33}$ A consequence of Cauchy-Schwarz's inequality.
## Remark 4.173 - Interpretation of the size of a correlation
(http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient)
Several authors have offered guidelines for the interpretation of a correlation coefficient.
Others have observed, however, that all such criteria are in some ways arbitrary and should not be observed too strictly.
The interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.9 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences where there may be a greater contribution from complicating factors.
## Remark 4.174 - Correlation and linearity
(http://en.wikipedia.org/wiki/Correlation_and_dependence)
Properties 8 and 9 from Proposition 4.169 suggest that
- correlation "quantifies" the linear association between $X$ e $Y$.
Thus, if the absolute value of $\operatorname{corr}(X, Y)$ is very close to the unit we are tempted to add that the association between $X$ and $Y$ is "likely" to be linear.
However, the Pearson's correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship.
A classical illustration is given by the scatterplots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe. The four $y$ variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line $(y=3+0.5 x)$; however, the four distributions are very different.
The first pair seems to be distributed normally, and corresponds to what one would expect when considering two correlated variables under the assumption of normality. The second is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear, and the Pearson correlation coefficient is not relevant.
In the third case, the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.816.
Finally, the fourth example shows another case in which one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear.
## Remark 4.175 - Correlation and causality
(http://en.wikipedia.org/wiki/Correlation_and_dependence)
The conventional dictum that "correlation does not imply causation" means that correlation cannot be used to infer a causal relationship between the variables. ${ }^{34}$
This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations, where no causal process exists. Consequently, establishing a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).
[Omitted figure: several sets of $(x, y)$ points, with the correlation coefficient of $x$ and $y$ for each set. The correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle row), nor many aspects of nonlinear relationships (bottom row). The central figure of the middle row has a slope of 0, but in that case the correlation coefficient is undefined because the variance of $y$ is zero.]
${ }^{34}$ A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.
#### Moments of random vectors
Moments of random vectors are defined componentwise, pairwise, etc. Instead of an expected value (resp. variance) we shall deal with a mean vector (resp. covariance matrix).
Definition 4.176 - Mean vector and covariance matrix of a random vector (Karr, 1993, p. 126)
Let $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a $d$-dimensional random vector. Then provided that:
- $X_{i} \in L^{1}, i=1, \ldots, d$, the mean vector of $\underline{X}$ is the $d$-vector of the individual means, $\underline{\mu}=\left(E\left(X_{1}\right), \ldots, E\left(X_{d}\right)\right)$
- $X_{i} \in L^{2}, i=1, \ldots, d$, the covariance matrix of $\underline{X}$ is a $d \times d$ matrix given by $\Sigma=\left[\operatorname{cov}\left(X_{i}, X_{j}\right)\right]_{i, j=1, \ldots, d}$.
Proposition 4.177 - Properties of the covariance matrix of a random vector (Karr, 1993, p. 126)
Let $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a $d$-dimensional random vector with covariance matrix $\Sigma$. Then:
- the diagonal of $\Sigma$ has entries equal to $\operatorname{cov}\left(X_{i}, X_{i}\right)=V\left(X_{i}\right), i=1, \ldots, d$;
- $\Sigma$ is a symmetric matrix since $\operatorname{cov}\left(X_{i}, X_{j}\right)=\operatorname{cov}\left(X_{j}, X_{i}\right), i, j=1, \ldots, d$;
- $\Sigma$ is a positive-definite matrix, that is, $\sum_{i=1}^{d} \sum_{j=1}^{d} x_{i} \times \operatorname{cov}\left(X_{i}, X_{j}\right) \times x_{j}>0$, for every non-zero $d$-vector $\underline{x}=\left(x_{1}, \ldots, x_{d}\right)$.
Exercise 4.178 - Mean vector and covariance matrix of a linear combination of r.v. (matrix notation)
Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ a $d$-dimensional random vector;
- $\underline{\mu}=\left(E\left(X_{1}\right), \ldots, E\left(X_{d}\right)\right)$ the mean vector of $\underline{X}$;
- $\Sigma=\left[\operatorname{cov}\left(X_{i}, X_{j}\right)\right]_{i, j=1, \ldots, d}$ the covariance matrix of $\underline{X}$;
- $\underline{c}=\left(c_{1}, \ldots, c_{d}\right)$ a vector of weights.
By noting that $\sum_{i=1}^{d} c_{i} X_{i}=\underline{c}^{\top} \underline{X}$, verify that:
- $E\left(\sum_{i=1}^{d} c_{i} X_{i}\right)=\underline{c}^{\top} \underline{\mu}$;
- $V\left(\sum_{i=1}^{d} c_{i} X_{i}\right)=\underline{c}^{\top} \Sigma \underline{c}$.
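A quick numerical check of these identities (a minimal NumPy sketch; the particular $\underline{\mu}$, $\Sigma$ and $\underline{c}$ below are arbitrary illustrative choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed

mu = np.array([1.0, -1.0, 2.0])         # hypothetical mean vector
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])     # hypothetical covariance matrix
c = np.array([0.2, -0.7, 1.0])          # hypothetical weight vector

# Exact values from the matrix identities of Exercise 4.178.
print("E  =", c @ mu)                   # c^T mu
print("V  =", c @ Sigma @ c)            # c^T Sigma c

# Monte Carlo confirmation with a large multivariate normal sample.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
S = X @ c
print("E~ =", S.mean(), " V~ =", S.var(ddof=1))
```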
#### Multivariate normal distributions
Motivation 4.179 - Multivariate normal distribution (Tong, 1990, p. 1)
There are many reasons for the predominance of the multivariate normal distribution:
- it represents a natural extension of the univariate normal distribution and provides a suitable model for many real-life problems concerning vector-valued data;
- even if the original data cannot be fitted satisfactorily with a multivariate normal distribution, by the central limit theorem the distribution of the sample mean vector is asymptotically normal;
- the p.d.f. of a multivariate normal distribution is uniquely determined by the mean vector and the covariance matrix;
- zero correlation implies independence between two components of the random vector with multivariate normal distribution;
- the family of multivariate normal distributions is closed under linear transformations or linear combinations;
- the marginal distribution of any subset of components of a random vector with multivariate normal distribution is again multivariate normal;
- the conditional distribution in a multivariate normal distribution is multivariate normal.
Remark 4.180 - Multivariate normal distribution (Tong, 1990, p. 2)
Studies of the bivariate normal distribution seem to have begun in the middle of the nineteenth century, and moved forward in 1888 with F. Galton's (1822-1911) work on the applications of correlation analysis in genetics. In 1896, K. Pearson (1857-1936) gave a definitive mathematical formulation of the bivariate normal distribution.
The multivariate normal distribution was treated comprehensively for the first time in 1892 by F.Y. Edgeworth (1845-1926).
A random vector has a multivariate normal distribution if it is a linear transformation of a random vector with i.i.d. components with standard normal distribution (Karr, 1993, p. 126).

Definition 4.181 - Multivariate normal distribution (Karr, 1993, p. 126) Let:
- $\underline{\mu}=\left(\mu_{1}, \ldots, \mu_{d}\right) \in \mathbb{R}^{d}$
- $\Sigma=\left[\sigma_{i j}\right]_{i, j=1, \ldots, d}$ be a symmetric, positive-definite, non-singular $d \times d$ matrix $;^{35}$
Then the random vector $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ has a multivariate normal distribution with mean vector $\underline{\mu}$ and covariance matrix $\Sigma$ if
$$
\underline{X}=\Sigma^{\frac{1}{2}} \underline{Y}+\underline{\mu}
$$
where:
- $\underline{Y}=\left(Y_{1}, \ldots, Y_{d}\right)$ with $Y_{i} \stackrel{i . i . d .}{\sim} \operatorname{Normal}(0,1), i=1, \ldots, d$
- $\Sigma^{\frac{1}{2}}$ is the unique matrix satisfying $\left(\Sigma^{\frac{1}{2}}\right)^{\top} \times \Sigma^{\frac{1}{2}}=\Sigma$.
In this case we write $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$.
We can use Definition 4.181 to simulate a multivariate normal distribution as mentioned below.
Remark 4.182 - Simulating a multivariate normal distribution (Gentle, 1998, pp. 105-106)
Since $Y_{i} \stackrel{\text { i.i.d. }}{\sim} \operatorname{Normal}(0,1), i=1, \ldots, d$, implies that $\underline{X}=\Sigma^{\frac{1}{2}} \underline{Y}+\underline{\mu} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ we can obtain a $d$-dimensional pseudo-random vector from this multivariate normal distribution if we:
1. generate $d$ independent pseudo-random numbers, $y_{1}, \ldots, y_{d}$, from the standard normal distribution;
2. $\operatorname{assign} \underline{x}=\Sigma^{\frac{1}{2}} \underline{y}+\underline{\mu}$, where $\underline{y}=\left(y_{1}, \ldots, y_{d}\right)$.
Gentle (1998, p. 106) refers other procedures to generate pseudo-random numbers with multivariate normal distribution.
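A minimal NumPy sketch of this two-step procedure, using the Cholesky factor $\mathbf{L}$ (which satisfies $\mathbf{L} \mathbf{L}^{\top}=\Sigma$) in the role of the square root of $\Sigma$; the values of $\underline{\mu}$ and $\Sigma$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)         # arbitrary seed

mu = np.array([1.0, -2.0])              # hypothetical mean vector
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])          # hypothetical positive-definite covariance

# Step 1: d independent pseudo-random numbers from the standard normal.
d = len(mu)
y = rng.standard_normal(d)

# Step 2: x = Sigma^(1/2) y + mu, with the Cholesky factor L (L @ L.T == Sigma)
# playing the role of the square root of Sigma.
L = np.linalg.cholesky(Sigma)
x = L @ y + mu
print(x)
```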
${ }^{35}$ A $d \times d$ matrix $\mathbf{A}$ is called invertible or non-singular or non-degenerate if there exists a $d \times d$ matrix $\mathbf{B}$ such that $\mathbf{A B}=\mathbf{B A}=\mathbf{I}_{d}$, where $\mathbf{I}_{d}$ denotes the $d \times d$ identity matrix (http://en.wikipedia.org/wiki/Invertible_matrix).

Proposition 4.183 - Characterization of the multivariate normal distribution (Karr, 1993, pp. 126-127)
Let $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ where $\underline{\mu}=\left(\mu_{1}, \ldots, \mu_{d}\right)$ and $\Sigma=\left[\sigma_{i j}\right]_{i, j=1, \ldots, d}$. Then:
$$
\begin{aligned}
E\left(X_{i}\right) & =\mu_{i}, i=1, \ldots, d ; \\
\operatorname{cov}\left(X_{i}, X_{j}\right) & =\sigma_{i j}, i, j=1, \ldots, d ; \\
f_{\underline{X}}(\underline{x}) & =(2 \pi)^{-\frac{d}{2}}|\Sigma|^{-\frac{1}{2}} \times \exp \left[-\frac{1}{2}(\underline{x}-\underline{\mu})^{\top} \Sigma^{-1}(\underline{x}-\underline{\mu})\right],
\end{aligned}
$$
for $\underline{x}=\left(x_{1}, \ldots, x_{d}\right) \in \mathbb{R}^{d}$.
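The p.d.f. formula can be checked directly against a library implementation (a sketch assuming SciPy is available; $\underline{\mu}$, $\Sigma$ and the evaluation point are arbitrary choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])               # hypothetical mean vector
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])          # hypothetical covariance matrix
x = np.array([0.5, 0.5])                # arbitrary evaluation point

d = len(mu)
dev = x - mu
# The formula of Proposition 4.183.
pdf_formula = ((2 * np.pi) ** (-d / 2) * np.linalg.det(Sigma) ** (-0.5)
               * np.exp(-0.5 * dev @ np.linalg.solve(Sigma, dev)))

print(pdf_formula, multivariate_normal(mean=mu, cov=Sigma).pdf(x))
```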
## Exercise 4.184 - P.d.f. of a bivariate normal distribution
Let $\left(X_{1}, X_{2}\right)$ have a (non-singular) bivariate normal distribution with mean vector and covariance matrix
$$
\underline{\mu}=\left[\begin{array}{l}
\mu_{1} \\
\mu_{2}
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{cc}
\sigma_{1}^{2} & \rho \sigma_{1} \sigma_{2} \\
\rho \sigma_{1} \sigma_{2} & \sigma_{2}^{2}
\end{array}\right],
$$
respectively, where $|\rho|=\left|\operatorname{corr}\left(X_{1}, X_{2}\right)\right|<1$.
(a) Verify that the joint p.d.f. is given by
$$
\begin{aligned}
& f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right)=\frac{1}{2 \pi \sigma_{1} \sigma_{2} \sqrt{1-\rho^{2}}} \exp \left\{-\frac{1}{2\left(1-\rho^{2}\right)}\left[\left(\frac{x_{1}-\mu_{1}}{\sigma_{1}}\right)^{2}\right.\right. \\
& \left.\left.-2 \rho\left(\frac{x_{1}-\mu_{1}}{\sigma_{1}}\right)\left(\frac{x_{2}-\mu_{2}}{\sigma_{2}}\right)+\left(\frac{x_{2}-\mu_{2}}{\sigma_{2}}\right)^{2}\right]\right\},\left(x_{1}, x_{2}\right) \in \mathbb{R}^{2} .
\end{aligned}
$$
(b) Use Mathematica to plot this joint p.d.f. for $\mu_{1}=\mu_{2}=0$ and $\sigma_{1}^{2}=\sigma_{2}^{2}=1$, and at least five different values of the correlation coefficient $\rho$.
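For readers without Mathematica, a rough matplotlib equivalent of (b) (a sketch; the grid resolution and the five $\rho$ values are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def bivariate_pdf(x1, x2, rho):
    """Standard bivariate normal p.d.f. (zero means, unit variances)."""
    q = (x1 ** 2 - 2 * rho * x1 * x2 + x2 ** 2) / (1 - rho ** 2)
    return np.exp(-q / 2) / (2 * np.pi * np.sqrt(1 - rho ** 2))

x1, x2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
fig, axes = plt.subplots(1, 5, figsize=(20, 4))
for ax, rho in zip(axes, (-0.9, -0.5, 0.0, 0.5, 0.9)):
    ax.contourf(x1, x2, bivariate_pdf(x1, x2, rho))   # contour plot of the p.d.f.
    ax.set_title(f"rho = {rho}")
plt.show()
```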
Exercise 4.185 - Normally distributed r.v. with a non bivariate normal distribution
We have already mentioned that if two r.v. $X_{1}$ and $X_{2}$ both have a standard normal distribution this does not imply that the random vector $\left(X_{1}, X_{2}\right)$ has a joint normal distribution. $^{36}$
Prove that the pair $\left(X_{1}, X_{2}\right)$, where $X_{1} \sim \operatorname{Normal}(0,1)$, $X_{2}=X_{1}$ if $\left|X_{1}\right|>c$ and $X_{2}=-X_{1}$ if $\left|X_{1}\right|<c$, for some $c>0$, illustrates this fact.
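A simulation suggests what goes wrong: with this construction $X_{1}+X_{2}=0$ whenever $\left|X_{1}\right|<c$, so the linear combination $X_{1}+X_{2}$ has an atom at zero and cannot be normally distributed, whereas every linear combination of jointly normal r.v. is normal (a sketch; the value of $c$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)          # arbitrary seed
c = 1.0                                 # arbitrary threshold c > 0

x1 = rng.standard_normal(100_000)
x2 = np.where(np.abs(x1) > c, x1, -x1)  # the construction of Exercise 4.185

# Both marginals look standard normal...
print(x1.mean(), x1.std(), x2.mean(), x2.std())
# ...but X1 + X2 has an atom at 0, so (X1, X2) is not bivariate normal.
print("P(X1 + X2 = 0) =", np.mean(x1 + x2 == 0.0))
```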
${ }^{36}$ See http://en.wikipedia.org/wiki/Multivariate_normal_distribution.

In what follows we describe a few distributional properties of bivariate normal distributions and, more generally, multivariate normal distributions.
Proposition 4.186 - Marginal distributions/moments in the bivariate normal setting (Tong, 1990, p. 8, Theorem 2.1.1)
Let $\underline{X}=\left(X_{1}, X_{2}\right)$ be distributed according to a bivariate normal distribution with parameters
$$
\underline{\mu}=\left[\begin{array}{c}
\mu_{1} \\
\mu_{2}
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{cc}
\sigma_{1}^{2} & \rho \sigma_{1} \sigma_{2} \\
\rho \sigma_{1} \sigma_{2} & \sigma_{2}^{2}
\end{array}\right] .
$$
Then the marginal distribution of $X_{i}, i=1,2$, is normal. In fact,
$$
X_{i} \sim \operatorname{Normal}\left(\mu_{i}, \sigma_{i}^{2}\right), i=1,2 .
$$
The following figure ${ }^{37}$ shows the two marginal distributions of a bivariate normal distribution.
Consider the partitions of $\underline{X}, \underline{\mu}$ and $\Sigma$ given below,
$$
\underline{X}=\left[\begin{array}{l}
\underline{X}_{1} \\
\underline{X}_{2}
\end{array}\right], \quad \underline{\mu}=\left[\begin{array}{l}
\underline{\mu}_{1} \\
\underline{\mu}_{2}
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{cc}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{array}\right],
$$
where:
- $\underline{X}_{1}=\left(X_{1}, \ldots, X_{k}\right)$ is made up of the first $k<d$ components of $\underline{X}$;
- $\underline{X}_{2}=\left(X_{k+1}, \ldots, X_{d}\right)$ is made up of the remaining components of $\underline{X}$;
- $\underline{\mu}_{1}=\left(\mu_{1}, \ldots, \mu_{k}\right)$;
- $\underline{\mu}_{2}=\left(\mu_{k+1}, \ldots, \mu_{d}\right)$;
- $\Sigma_{11}=\left[\sigma_{i j}\right]_{i, j=1, \ldots, k}$; $\Sigma_{12}=\left[\sigma_{i j}\right]_{i=1, \ldots, k ; j=k+1, \ldots, d}$;
- $\Sigma_{21}=\Sigma_{12}^{\top}$;
- $\Sigma_{22}=\left[\sigma_{i j}\right]_{i, j=k+1, \ldots, d}$.

${ }^{37}$ Taken from http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_multinormal_distri.htm.
The following figure (where $d=p$) ${ }^{38}$ represents the covariance matrix of $\underline{X}_{1}, \Sigma_{11}$, which is just the upper left corner square submatrix of order $k$ of the original covariance matrix.
Theorem 4.187 - Marginal distributions/moments in the multivariate normal setting (Tong, 1990, p. 30, Theorem 3.3.1)
Let $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$. Then for every $k<d$ the marginal distributions of $\underline{X}_{1}$ and $\underline{X}_{2}$ are also multivariate normal:
$$
\begin{aligned}
& \underline{X}_{1} \sim \operatorname{Normal}_{k}\left(\underline{\mu}_{1}, \Sigma_{11}\right) \\
& \underline{X}_{2} \sim \operatorname{Normal}_{d-k}\left(\underline{\mu}_{2}, \Sigma_{22}\right),
\end{aligned}
$$
respectively.
The family of multivariate normal distributions is closed under linear transformations, as stated below.
Theorem 4.188 - Distribution/moments of a linear transformation of a bivariate normal random vector (Tong, 1990, p. 10, Theorem 2.1.2) Let:
- $\underline{X} \sim \operatorname{Normal}_{2}(\underline{\mu}, \Sigma) ;$
- $\mathbf{C}=\left[c_{i j}\right]$ be a $2 \times 2$ real matrix;
- $\underline{b}=\left(b_{1}, b_{2}\right)$ be a real vector.
Then
$$
\underline{Y}=\mathbf{C} \underline{X}+\underline{b} \sim \operatorname{Normal}_{2}\left(\mathbf{C} \underline{\mu}+\underline{b}, \mathbf{C} \Sigma \mathbf{C}^{\top}\right) .
$$
${ }^{38}$ Also taken from http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_multinormal_distri.htm.

Exercise 4.189 - Distribution/moments of a linear transformation of a bivariate normal random vector
(a) Prove that if in Theorem 4.188 we choose
$$
\mathbf{C}=\left[\begin{array}{cc}
\sigma_{1}^{-1} & 0 \\
0 & \sigma_{2}^{-1}
\end{array}\right]
$$
and $\underline{b}=-\mathbf{C} \underline{\mu}$, then $\underline{Y}$ is a bivariate normal variable with zero means, unit variances and correlation coefficient $\rho$ (Tong, 1990, p. 10).
(b) Now consider a linear transformation of $\underline{Y}, \underline{Y}^{*}$, by rotating the $x y$ axes by 45 degrees counterclockwise:
$$
\underline{Y}^{*}=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}
1 & -1 \\
1 & 1
\end{array}\right] \underline{Y}
$$
Verify that $\underline{Y}^{*}$ is a bivariate normal variable with zero means, variances $1-\rho$ and $1+\rho$ and null correlation coefficient (Tong, 1990, p. 10). Comment.
(c) Conclude that if $\underline{X} \sim \operatorname{Normal}_{2}(\underline{\mu}, \Sigma)$ such that $|\rho|<1$ then
$$
\left[\begin{array}{cc}
\frac{1}{\sqrt{1-\rho}} & 0 \\
0 & \frac{1}{\sqrt{1+\rho}}
\end{array}\right]\left[\begin{array}{cc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}\right]\left[\begin{array}{cc}
\sigma_{1}^{-1} & 0 \\
0 & \sigma_{2}^{-1}
\end{array}\right]\left[\begin{array}{c}
X_{1}-\mu_{1} \\
X_{2}-\mu_{2}
\end{array}\right] \sim \operatorname{Normal}_{2}\left(\underline{0}, \mathbf{I}_{2}\right)
$$
where $\underline{0}=(0,0)$ and $\mathbf{I}_{2}$ is the $2 \times 2$ identity matrix (Tong, 1990, p. 11).
(d) Prove that if $Z_{i} \stackrel{i . i . d .}{\sim} \operatorname{Normal}(0,1)$ then
$$
\left[\begin{array}{cc}
\sigma_{1} & 0 \\
0 & \sigma_{2}
\end{array}\right]\left[\begin{array}{cc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{array}\right]\left[\begin{array}{cc}
\sqrt{1-\rho} & 0 \\
0 & \sqrt{1+\rho}
\end{array}\right]\left[\begin{array}{c}
Z_{1} \\
Z_{2}
\end{array}\right] \sim \operatorname{Normal}_{2}(\underline{\mu}, \Sigma)
$$
i.e. we can obtain a bivariate normal distribution with any mean vector and (non-singular, positive semi-definite) covariance matrix through a transformation of two independent standard normal r.v. (Tong, 1990, p. 11).

Theorem 4.190 - Distribution/moments of a linear transformation of a multivariate normal distribution (Tong, 1990, p. 32, Theorem 3.3.3) Let:
- $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$
- $\mathbf{C}=\left[c_{i j}\right]$ be any given $k \times d$ real matrix;
- $\underline{b}$ be any $k \times 1$ real vector.
Then
$$
\underline{Y}=\mathbf{C} \underline{X}+\underline{b} \sim \operatorname{Normal}_{k}\left(\mathbf{C} \underline{\mu}+\underline{b}, \mathbf{C} \Sigma \mathbf{C}^{\top}\right)
$$
The family of multivariate normal distributions is closed not only under linear transformations, as stated in the previous theorem, but also under linear combinations.
Corollary 4.191 - Distribution/moments of a linear combination of the components of a multivariate normal random vector (Tong, 1990, p. 33, Corollary 3.3.3)
Let:
- $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ partitioned as in (4.126);
- $\mathbf{C}_{1}$ and $\mathbf{C}_{2}$ be two $m \times k$ and $m \times(d-k)$ real matrices, respectively.
Then
$$
\underline{Y}=\mathbf{C}_{1} \underline{X}_{1}+\mathbf{C}_{2} \underline{X}_{2} \sim \operatorname{Normal}_{m}\left(\underline{\mu}_{\underline{Y}}, \Sigma_{\underline{Y}}\right),
$$
where the mean vector and the covariance matrix of $\underline{Y}$ are given by
$$
\begin{aligned}
\underline{\mu}_{\underline{Y}} & =\mathbf{C}_{1} \underline{\mu}_{1}+\mathbf{C}_{2} \underline{\mu}_{2} \\
\Sigma_{\underline{Y}} & =\mathbf{C}_{1} \Sigma_{11} \mathbf{C}_{1}^{\top}+\mathbf{C}_{2} \Sigma_{22} \mathbf{C}_{2}^{\top}+\mathbf{C}_{1} \Sigma_{12} \mathbf{C}_{2}^{\top}+\mathbf{C}_{2} \Sigma_{21} \mathbf{C}_{1}^{\top},
\end{aligned}
$$
respectively.

The result that follows has already been proved in Chapter 3 and is a particular case of Theorem 4.193.
Corollary 4.192 - Correlation and independence in a bivariate normal setting (Tong, 1990, p. 8, Theorem 2.1.1)
Let $\underline{X}=\left(X_{1}, X_{2}\right) \sim \operatorname{Normal}_{2}(\underline{\mu}, \Sigma)$. Then $X_{1}$ and $X_{2}$ are independent iff $\rho=0$.
In general, r.v. may be uncorrelated but highly dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent.
Theorem 4.193 - Correlation and independence in a multivariate normal setting (Tong, 1990, p. 31, Theorem 3.3.2)
Let $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ partitioned as in (4.126). Then $\underline{X}_{1}$ and $\underline{X}_{2}$ are independent random vectors iff $\Sigma_{12}=\mathbf{0}_{k \times(d-k)}$ (equivalently, iff $\Sigma_{21}=\Sigma_{12}^{\top}=\mathbf{0}_{(d-k) \times k}$).
Corollary 4.194 - Linear combination of independent multivariate normal random vectors (Tong, 1990, p. 33, Corollary 3.3.4)
Let $\underline{X}_{1}, \ldots, \underline{X}_{N}$ be independent $\operatorname{Normal}_{d}\left(\underline{\mu}_{i}, \Sigma_{i}\right), i=1, \ldots, N$, random vectors. Then
$$
\underline{Y}=\sum_{i=1}^{N} c_{i} \underline{X}_{i} \sim \operatorname{Normal}_{d}\left(\sum_{i=1}^{N} c_{i} \underline{\mu}_{i}, \sum_{i=1}^{N} c_{i}^{2} \Sigma_{i}\right) .
$$
Proposition 4.195 - Independence between the sample mean vector and covariance matrix (Tong, 1990, pp. 47-48)
Let:
- $N$ be a positive integer;
- $\underline{X}_{1}, \ldots, \underline{X}_{N}$ be i.i.d. random vectors with a common $\operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ distribution, such that $\Sigma$ is positive definite;
- $\underline{\bar{X}}_{N}=\frac{1}{N} \sum_{t=1}^{N} \underline{X}_{t}=\left(\bar{X}_{1}, \ldots, \bar{X}_{d}\right)$ denote the sample mean vector, where $\bar{X}_{i}=\frac{1}{N} \sum_{t=1}^{N} X_{i t}$ and $X_{i t}$ the $i$ th component of $\underline{X}_{t}$;
- $\mathbf{S}_{N}=\left[S_{i j}\right]_{i, j=1, \ldots, d}$ denote the sample covariance matrix, where $S_{i j}=\frac{1}{N-1} \sum_{t=1}^{N}\left(X_{i t}-\bar{X}_{i}\right)\left(X_{j t}-\bar{X}_{j}\right)$.
Then $\underline{\bar{X}}_{N}$ and $\mathbf{S}_{N}$ are independent.
## Definition 4.196 - Mixed (central) moments
Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a random $d$-vector;
- $r_{1}, \ldots, r_{d} \in \mathbb{N}$.
Then:
- the mixed moment of order $\left(r_{1}, \ldots, r_{d}\right)$ of $\underline{X}$ is given by
$$
E\left[X_{1}^{r_{1}} \times \ldots \times X_{d}^{r_{d}}\right]
$$
and is also called a $\left(\sum_{i=1}^{d} r_{i}\right)$ th order moment of $\underline{X} ;{ }^{39}$
- the mixed central moment of order $\left(r_{1}, \ldots, r_{d}\right)$ of $\underline{X}$ is defined as
$$
E\left\{\left[X_{1}-E\left(X_{1}\right)\right]^{r_{1}} \times \ldots \times\left[X_{d}-E\left(X_{d}\right)\right]^{r_{d}}\right\} .
$$
Isserlis' theorem is a formula that allows one to compute mixed moments of the multivariate normal distribution with null mean vector ${ }^{40}$ in terms of the entries of its covariance matrix.
Remarks 4.197 - Isserlis' theorem (http://en.wikipedia.org/wiki/Isserlis'_theorem)
- In his original paper from 1918, Isserlis considered only the fourth-order moments, in which case the formula takes the form
$$
\begin{aligned}
E\left(X_{1} X_{2} X_{3} X_{4}\right)= & E\left(X_{1} X_{2}\right) \times E\left(X_{3} X_{4}\right)+E\left(X_{1} X_{3}\right) \times E\left(X_{2} X_{4}\right) \\
& +E\left(X_{1} X_{4}\right) \times E\left(X_{2} X_{3}\right),
\end{aligned}
$$
which can be written in terms of the covariances: $\sigma_{12} \times \sigma_{34}+\sigma_{13} \times \sigma_{24}+\sigma_{14} \times \sigma_{23}$. He also added that if $\left(X_{1}, \ldots, X_{2 n}\right)$ is a zero mean multivariate normal random vector, then
$$
\begin{aligned}
E\left(X_{1} \ldots X_{2 n-1}\right) & =0 \\
E\left(X_{1} \ldots X_{2 n}\right) & =\sum \prod E\left(X_{i} X_{j}\right),
\end{aligned}
$$
where the notation $\sum \prod$ means summing over all distinct ways of partitioning $X_{1}, \ldots, X_{2 n}$ into pairs.

${ }^{39}$ See for instance http://en.wikipedia.org/wiki/Multivariate_normal_distribution.

${ }^{40}$ Or mixed central moments of the difference between a multivariate normal random vector $\underline{X}$ and its mean vector.
- This theorem is particularly important in particle physics, where it is known as Wick's theorem.
- Other applications include the analysis of portfolio returns, quantum field theory, the generation of colored noise, etc.
## Theorem 4.198 - Isserlis' theorem
(http://en.wikipedia.org/wiki/Multivariate_normal_distribution) Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right) \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$;
- $E\left[\prod_{i=1}^{d}\left(X_{i}-\mu_{i}\right)^{r_{i}}\right]$ be the mixed central moment of order $\left(r_{1}, \ldots, r_{d}\right)$ of $\underline{X}$;
- $k=\sum_{i=1}^{d} r_{i}$.
Then:
- if $k$ is odd (i.e. $k=2 n-1, n \in \mathbb{N}$ )
$$
E\left[\prod_{i=1}^{d}\left(X_{i}-\mu_{i}\right)^{r_{i}}\right]=0 ;
$$
- if $k$ is even (i.e. $k=2 n, n \in \mathbb{N}$ )
$$
E\left[\prod_{i=1}^{d}\left(X_{i}-\mu_{i}\right)^{r_{i}}\right]=\sum \prod \sigma_{i j}
$$
where the $\sum \prod$ is taken over all allocations of the set $\{1,2, \ldots, 2 n\}$ into $n$ (unordered) pairs; that is, if you have a $k=2 n=6$ th order central moment, you will be summing the products of $n=3$ covariances.
## Exercise 4.199 - Isserlis' theorem
Let $\underline{X}=\left(X_{1}, \ldots, X_{4}\right) \sim \operatorname{Normal}_{4}\left(\underline{0}, \Sigma=\left[\sigma_{i j}\right]\right)$. Prove that
(a) $E\left(X_{i}^{4}\right)=3 \sigma_{i i}^{2}, i=1, \ldots, 4$,
(b) $E\left(X_{i}^{3} X_{j}\right)=3 \sigma_{i i} \sigma_{i j}, i, j=1, \ldots, 4$,
(c) $E\left(X_{i}^{2} X_{j}^{2}\right)=\sigma_{i i} \sigma_{j j}+2 \sigma_{i j}^{2}, i, j=1, \ldots, 4$,
(d) $E\left(X_{i}^{2} X_{j} X_{l}\right)=\sigma_{i i} \sigma_{j l}+2 \sigma_{i j} \sigma_{i l}, i, j, l=1, \ldots, 4$,
(e) $E\left(X_{i} X_{j} X_{l} X_{n}\right)=\sigma_{i j} \sigma_{l n}+\sigma_{i l} \sigma_{j n}+\sigma_{i n} \sigma_{j l}, i, j, l, n=1, \ldots, 4$
(http://en.wikipedia.org/wiki/Isserlis'_theorem).
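A Monte Carlo check of identity (c), for instance (a sketch; the covariance matrix below is an arbitrary positive-definite choice):

```python
import numpy as np

rng = np.random.default_rng(3)          # arbitrary seed

Sigma = np.array([[2.0, 0.8, 0.3, 0.1],
                  [0.8, 1.5, 0.4, 0.2],
                  [0.3, 0.4, 1.5, 0.3],
                  [0.1, 0.2, 0.3, 1.2]])    # hypothetical covariance matrix

X = rng.multivariate_normal(np.zeros(4), Sigma, size=1_000_000)
i, j = 0, 1                                 # check E(X_i^2 X_j^2)
print("Monte Carlo:", np.mean(X[:, i] ** 2 * X[:, j] ** 2))
print("Isserlis:   ", Sigma[i, i] * Sigma[j, j] + 2 * Sigma[i, j] ** 2)
```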
In passing from univariate to multivariate distributions, some essentially new features require our attention: these features are connected not only with relations among sets of variables, such as covariance and correlation, but also with regressions (conditional expected values) and, more generally, conditional distributions (Johnson and Kotz, 1969, p. 280).
Theorem 4.200 - Conditional distributions and regressions in the bivariate normal setting (Tong, 1990, p. 8, Theorem 2.1.1)
Let $\underline{X}=\left(X_{1}, X_{2}\right)$ be distributed according to a bivariate normal distribution with parameters
$$
\underline{\mu}=\left[\begin{array}{c}
\mu_{1} \\
\mu_{2}
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{cc}
\sigma_{1}^{2} & \rho \sigma_{1} \sigma_{2} \\
\rho \sigma_{1} \sigma_{2} & \sigma_{2}^{2}
\end{array}\right]=\left[\begin{array}{cc}
\sigma_{11} & \sigma_{12} \\
\sigma_{21} & \sigma_{22}
\end{array}\right] .
$$
If $|\rho|<1$ then
$$
X_{1} \mid\left\{X_{2}=x_{2}\right\} \sim \operatorname{Normal}\left(\mu_{1}+\frac{\rho \sigma_{1}}{\sigma_{2}}\left(x_{2}-\mu_{2}\right), \sigma_{1}^{2}\left(1-\rho^{2}\right)\right),
$$
i.e.
$$
X_{1} \mid\left\{X_{2}=x_{2}\right\} \sim \operatorname{Normal}\left(\mu_{1}+\sigma_{12} \sigma_{22}^{-1}\left(x_{2}-\mu_{2}\right), \sigma_{11}-\sigma_{12} \sigma_{22}^{-1} \sigma_{21}\right)
$$
Exercise 4.201 - Conditional distributions and regressions in the bivariate normal setting
Prove that (4.147) and (4.148) are equivalent.

The following figure ${ }^{41}$ shows the conditional distribution of $Y \mid\left\{X=x_{0}\right\}$ of a random vector $(X, Y)$ with a bivariate normal distribution.
The inverse Mills ratio is the ratio of the probability density function over the cumulative distribution function of a distribution and corresponds to a specific conditional expectation, as stated below.
## Definition 4.202 - Inverse Mills' ratio
(http://en.wikipedia.org/wiki/Inverse_Mills_ratio; Tong, 1990, p. 174)
Let $\underline{X}=\left(X_{1}, X_{2}\right)$ be a bivariate normal random vector with zero means, unit variances and correlation coefficient $\rho$. Then the conditional expectation
$$
E\left(X_{1} \mid\left\{X_{2}>x_{2}\right\}\right)=\rho \frac{\phi\left(x_{2}\right)}{\Phi\left(-x_{2}\right)}
$$
is often called the inverse Mills' ratio.
## Remark 4.203 - Inverse Mills' ratio
(http://en.wikipedia.org/wiki/Inverse_Mills_ratio)
A common application of the inverse Mills' ratio arises in regression analysis to take account of a possible selection bias.
Exercise 4.204 - Conditional distributions and the inverse Mills' ratio in the bivariate normal setting
Assume that $X_{1}$ represents the log-dose of insulin that has been administered and $X_{2}$ the decrease in blood sugar after a fixed amount of time. Also assume that $\left(X_{1}, X_{2}\right)$ has a bivariate normal distribution with mean vector and covariance matrix
$$
\underline{\mu}=\left[\begin{array}{c}
0.56 \\
53
\end{array}\right] \quad \text { and } \quad \Sigma=\left[\begin{array}{rr}
0.027 & 2.417 \\
2.417 & 407.833
\end{array}\right] .
$$
${ }^{41}$ Once again taken from http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_multinormal_distri.htm.

(a) Obtain the probability that the decrease in blood sugar exceeds 70, given that the log-dose of insulin administered is equal to 0.5.
(b) Determine the log-dose of insulin that has to be administered so that the expected value of the decrease in blood sugar equals 70.
(c) Obtain the expected value of the decrease in blood sugar, given that the log-dose of insulin administered exceeds 0.5.
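Part (a), for instance, reduces to a normal tail probability once the conditional distribution (4.148) is written down with the roles of $X_{1}$ and $X_{2}$ interchanged (a sketch assuming SciPy is available, using only the numbers given in the exercise):

```python
import numpy as np
from scipy.stats import norm

mu1, mu2 = 0.56, 53.0
s11, s12, s22 = 0.027, 2.417, 407.833   # entries of Sigma

# X2 | {X1 = 0.5} is normal with these moments (see (4.148), roles swapped).
m = mu2 + s12 / s11 * (0.5 - mu1)
v = s22 - s12 ** 2 / s11
print("P(X2 > 70 | X1 = 0.5) =", norm.sf(70.0, loc=m, scale=np.sqrt(v)))
```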
Theorem 4.205 - Conditional distributions and regressions in the multivariate normal setting (Tong, 1990, p. 35, Theorem 3.3.4)
Let $\underline{X} \sim \operatorname{Normal}_{d}(\underline{\mu}, \Sigma)$ partitioned as in (4.126). Then
$$
\underline{X}_{1} \mid\left\{\underline{X}_{2}=\underline{x}_{2}\right\} \sim \operatorname{Normal}\left(\underline{\mu}_{1}+\Sigma_{12} \Sigma_{22}^{-1}\left(\underline{x}_{2}-\underline{\mu}_{2}\right), \Sigma_{11}-\Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}\right) .
$$
Exercise 4.206 - Conditional distributions and regressions in the multivariate normal setting
Derive (4.148) from (4.151).
#### Multinomial distributions
The genesis and the definition of multinomial distributions follow.
## Motivation 4.207 - Multinomial distribution
(http://en.wikipedia.org/wiki/Multinomial_distribution)
The multinomial distribution is a generalization of the binomial distribution.
The binomial distribution is the probability distribution of the number of "successes" in $n$ independent Bernoulli trials, with the same probability of "success" on each trial.
In a multinomial distribution, the analog of the Bernoulli distribution is the categorical distribution, where each trial results in exactly one of some fixed finite number $d$ of possible outcomes, with probabilities $p_{1}, \ldots, p_{d}\left(p_{i} \in[0,1], i=1, \ldots, d\right.$, and $\left.\sum_{i=1}^{d} p_{i}=1\right)$, and there are $n$ independent trials.
Definition 4.208 - Multinomial distribution (Johnson and Kotz, 1969, p. 281)

Consider a series of $n$ independent trials, in each of which just one of $d$ mutually exclusive events $E_{1}, \ldots, E_{d}$ must be observed, and in which the probability of occurrence of event $E_{i}$ is equal to $p_{i}$ for each trial, with, of course, $p_{i} \in[0,1], i=1, \ldots, d$, and $\sum_{i=1}^{d} p_{i}=1$. Then the joint distribution of the r.v. $N_{1}, \ldots, N_{d}$, representing the numbers of occurrences of the events $E_{1}, \ldots, E_{d}$ (respectively) in the $n$ trials, is defined by
$$
P\left(\left\{N_{1}=n_{1}, \ldots, N_{d}=n_{d}\right\}\right)=\frac{n !}{\prod_{i=1}^{d} n_{i} !} \times \prod_{i=1}^{d} p_{i}^{n_{i}},
$$
for $n_{i} \in \mathbb{N}_{0}, i=1, \ldots, d$, such that $\sum_{i=1}^{d} n_{i}=n$. The random $d$-vector $\underline{N}=\left(N_{1}, \ldots, N_{d}\right)$ is said to have a multinomial distribution with parameters $n$ and $\underline{p}=\left(p_{1}, \ldots, p_{d}\right)$ - in short $\underline{N} \sim \operatorname{Multinomial}_{d-1}\left(n, \underline{p}=\left(p_{1}, \ldots, p_{d}\right)\right) .{ }^{42}$
Remark 4.209 - Special case of the multinomial distribution (Johnson and Kotz, 1969, p. 281)
Needless to say, we deal with the binomial distribution when $d=2$, i.e.
$$
\operatorname{Multinomial}_{2-1}(n, \underline{p}=(p, 1-p)) \stackrel{d}{=} \operatorname{Binomial}(n, p) .
$$
Curiously, J. Bernoulli, who worked with the binomial distribution, also used the multinomial distribution.
${ }^{42}$ The index $d-1$ follows from the fact that the r.v. $N_{d}$ (or any other component of $\underline{N}$ ) is redundant: $N_{d}=n-\sum_{i=1}^{d-1} N_{i}$
## Remark 4.210 - Applications of multinomial distribution
(http://controls.engin.umich.edu/wiki/index.php/Multinomial_distributions)
Multinomial systems are a useful analysis tool when a "success-failure" description is insufficient to understand the system. For instance, in chemical engineering applications, multinomial distributions are relevant to situations where there are more than two possible outcomes (temperature $=$ high, med, low).
A continuous form of the multinomial distribution is the Dirichlet distribution (http://en.wikipedia.org/wiki/Dirichlet_distribution). ${ }^{43}$
## Exercise 4.211 - Multinomial distribution (p.f.)
In a recent three-way election for a large country, candidate A received $20 \%$ of the votes, candidate $\mathrm{B}$ received $30 \%$ of the votes, and candidate $\mathrm{C}$ received $50 \%$ of the votes.
If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate $\mathrm{C}$ in the sample? (http://en.wikipedia.org/wiki/Multinomial_distribution)
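With SciPy the required probability is a single p.f. evaluation (the counts and probabilities below are those of the exercise):

```python
from scipy.stats import multinomial

# n = 6 voters; p = (0.20, 0.30, 0.50) for candidates A, B, C.
print(multinomial(n=6, p=[0.2, 0.3, 0.5]).pmf([1, 2, 3]))   # 60 * 0.2 * 0.09 * 0.125 = 0.135
```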
## Exercise 4.212 - Multinomial distribution
A runaway reaction occurs when the heat generation from an exothermic reaction exceeds the heat loss. Elevated temperature increases reaction rate, further increasing heat generation and pressure buildup inside the reactor. Together, the uncontrolled escalation of temperature and pressure inside a reactor may cause an explosion. The precursors to a runaway reaction - high temperature and pressure - can be detected by the installation of reliable temperature and pressure sensors inside the reactor. Runaway reactions can be prevented by lowering the temperature and/or pressure inside the reactor before they reach dangerous levels. This task can be accomplished by sending a cold inert stream into the reactor or venting the reactor.
Les is a process engineer at the Miles Reactor Company who has been assigned to work on a new reaction process. Using historical data from all the similar reactions that have been run before, Les has estimated the probabilities of each outcome occurring during the new process. The potential outcomes of the process include all permutations of the possible reaction temperatures (low and high) and pressures (low and high). He has combined this information into the table below:
${ }^{43}$ The Dirichlet distribution is in turn the multivariate generalization of the beta distribution.
| Outcome | Temperature | Pressure | Probability |
| :--- | :--- | :--- | :--- |
| 1 | high | high | 0.013 |
| 2 | high | low | 0.267 |
| 3 | low | high | 0.031 |
| 4 | low | low | 0.689 |
Worried about risk of runaway reactions, the Miles Reactor Company is implementing a new program to assess the safety of their reaction processes. The program consists of running each reaction process 100 times over the next year and recording the reactor conditions during the process every time. In order for the process to be considered safe, the process outcomes must be within the following limits:
| Outcome | Temperature | Pressure | Frequency |
| :--- | :--- | :--- | :--- |
| 1 | high | high | $n_{1}=0$ |
| 2 | high | low | $n_{2} \leq 20$ |
| 3 | low | high | $n_{3} \leq 2$ |
| 4 | low | low | $n_{4}=100-n_{1}-n_{2}-n_{3}$ |
Help Les predict whether or not the new process is safe by answering the following question: "What is the probability that the new process will meet the specifications of the new safety program?" (http://controls.engin.umich.edu/wiki/index.php/Multinomial_distributions).
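One way to answer is to sum the multinomial p.f. over every outcome vector allowed by the safety limits (a sketch assuming SciPy is available; the probabilities are taken from the first table):

```python
from scipy.stats import multinomial

p = [0.013, 0.267, 0.031, 0.689]        # outcomes 1-4 from the table
n = 100
dist = multinomial(n=n, p=p)

# Allowed outcomes: n1 = 0, n2 <= 20, n3 <= 2, n4 = 100 - n1 - n2 - n3.
prob = sum(dist.pmf([0, n2, n3, n - n2 - n3])
           for n2 in range(21) for n3 in range(3))
print("P(process meets the specifications) =", prob)
```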
Remark 4.213 - Multinomial expansion (Johnson and Kotz, 1969, p. 281)
If we recall the multinomial theorem ${ }^{44}$ then the expression of $P\left(\left\{N_{1}=n_{1}, \ldots, N_{d}=n_{d}\right\}\right)$ can be regarded as the coefficient of $\prod_{i=1}^{d} t_{i}^{n_{i}}$ in the multinomial expansion of
$$
\left(t_{1} p_{1}+\ldots+t_{d} p_{d}\right)^{n}=\sum_{\left(n_{1}, \ldots, n_{d}\right)} P\left(\left\{N_{1}=n_{1}, \ldots, N_{d}=n_{d}\right\}\right) \times \prod_{i=1}^{d} t_{i}^{n_{i}}
$$
where the summation is taken over all $\left(n_{1}, \ldots, n_{d}\right) \in\left\{\left(m_{1}, \ldots, m_{d}\right) \in \mathbb{N}_{0}^{d}: \sum_{i=1}^{d} m_{i}=n\right\}$ and $\underline{N} \sim$ Multinomial $_{d-1}\left(n, \underline{p}=\left(p_{1}, \ldots, p_{d}\right)\right)$.
${ }^{44}$ For any positive integer $d$ and any nonnegative integer $n$, we have
$$
\left(x_{1}+\ldots+x_{d}\right)^{n}=\sum_{\left(n_{1}, \ldots, n_{d}\right)} \frac{n !}{\prod_{i=1}^{d} n_{i} !} \times \prod_{i=1}^{d} x_{i}^{n_{i}},
$$
where the summation is taken over all $d$-vectors of nonnegative integer indices $n_{1}, \ldots, n_{d}$ such that the sum of all $n_{i}$ is $n$. As with the binomial theorem, quantities of the form $0^{0}$ which appear are taken to equal 1. See http://en.wikipedia.org/wiki/Multinomial_theorem for more details.
## Definition 4.214 - Mixed factorial moments
Let:
- $\underline{X}=\left(X_{1}, \ldots, X_{d}\right)$ be a random $d$-vector;
- $r_{1}, \ldots, r_{d} \in \mathbb{N}$.
Then the mixed factorial moment of order $\left(r_{1}, \ldots, r_{d}\right)$ of $\underline{X}$ is equal to
$$
\begin{aligned}
& E\left[X_{1}^{\left(r_{1}\right)} \times \ldots \times X_{d}^{\left(r_{d}\right)}\right] \\
& =E\left\{\left[X_{1}\left(X_{1}-1\right) \ldots\left(X_{1}-r_{1}+1\right)\right] \times \ldots \times\left[X_{d}\left(X_{d}-1\right) \ldots\left(X_{d}-r_{d}+1\right)\right]\right\} .
\end{aligned}
$$
Marginal moments and marginal central moments, and covariances and correlations between the components of a random vector can be written in terms of mixed (central/factorial) moments. This is particularly useful when we are dealing with the multinomial distribution.
Exercise 4.215 - Writing the variance and covariance in terms of mixed (central/factorial) moments
Write
(a) the marginal variance of $X_{i}$ and
(b) $\operatorname{cov}\left(X_{i}, X_{j}\right)$
in terms of mixed factorial moments.
Proposition 4.216 - Mixed factorial moments of a multinomial distribution (Johnson and Kotz, 1969, p. 284) Let:
- $\underline{N}=\left(N_{1}, \ldots, N_{d}\right) \sim \operatorname{Multinomial}_{d-1}\left(n, \underline{p}=\left(p_{1}, \ldots, p_{d}\right)\right)$;
- $r_{1}, \ldots, r_{d} \in \mathbb{N}$.
Then the mixed factorial moment of order $\left(r_{1}, \ldots, r_{d}\right)$ is equal to
$$
E\left[N_{1}^{\left(r_{1}\right)} \times \ldots \times N_{d}^{\left(r_{d}\right)}\right]=n^{\left(\sum_{i=1}^{d} r_{i}\right)} \times \prod_{i=1}^{d} p_{i}^{r_{i}},
$$
where $n^{\left(\sum_{i=1}^{d} r_{i}\right)}=\frac{n !}{\left(n-\sum_{i=1}^{d} r_{i}\right) !}$. From the general formula (4.157), we find the expected value of $N_{i}$, and the covariances and correlations between $N_{i}$ and $N_{j}$.
Corollary 4.217 - Mean vector, covariance and correlation matrix of a multinomial distribution (Johnson and Kotz, 1969, p. 284; http://en.wikipedia.org/wiki/Multinomial_distribution)

The expected value of $N_{i}$, the number of times the event $E_{i}$ is observed over the $n$ trials, is
$$
E\left(N_{i}\right)=n p_{i}
$$
for $i=1, \ldots, d$.
The covariance matrix is as follows. Each diagonal entry is the variance
$$
V\left(N_{i}\right)=n p_{i}\left(1-p_{i}\right)
$$
for $i=1, \ldots, d$. The off-diagonal entries are the covariances
$$
\operatorname{cov}\left(N_{i}, N_{j}\right)=-n p_{i} p_{j}
$$
for $i, j=1, \ldots, d, i \neq j$. All covariances are negative because, for fixed $n$, an increase in one component of a multinomial vector requires a decrease in another component. The covariance matrix is a $d \times d$ positive-semidefinite matrix of rank $d-1$.
The off-diagonal entries of the corresponding correlation matrix are given by
$$
\operatorname{corr}\left(N_{i}, N_{j}\right)=-\sqrt{\frac{p_{i} p_{j}}{\left(1-p_{i}\right)\left(1-p_{j}\right)}},
$$
for $i, j=1, \ldots, d, i \neq j .{ }^{45}$ Note that the number of trials drops out of this expression.
Exercise 4.218 - Mean vector, covariance and correlation matrices of a multinomial distribution
Use (4.157) to derive the entries of the mean vector, and the covariance and correlation matrices of a multinomial distribution.
Exercise 4.219 - Mean vector and correlation matrix of a multinomial distribution
Resume Exercise 4.211 and calculate the mean vector and the correlation matrix. Comment the values you have obtained for the off-diagonal entries of the correlation matrix.
${ }^{45}$ The diagonal entries of the correlation matrix are obviously equal to 1.

Proposition 4.220 - Marginal distributions in a multinomial setting (Johnson and Kotz, 1969, p. 281)
The marginal distribution of any $N_{i}, i=1, \ldots, d$, is Binomial with parameters $n$ and $p_{i}$. I.e.
$$
N_{i} \sim \operatorname{Binomial}\left(n, p_{i}\right)
$$
for $i=1, \ldots, d$.
(4.162) is a special case of a more general result.
Proposition 4.221 - Joint distribution of a subset of r.v. from a multinomial distribution (Johnson and Kotz, 1969, p. 281)
The joint distribution of any subset of $s(s=1, \ldots, d-1)$ r.v., say $N_{a_{1}}, \ldots, N_{a_{s}}$ of the $N_{j}$ 's, is also multinomial with an $(s+1)$ th r.v. equal to $N_{a_{s+1}}=n-\sum_{i=1}^{s} N_{a_{i}}$. In fact
$$
\begin{aligned}
& P\left(\left\{N_{a_{1}}=n_{a_{1}}, \ldots, N_{a_{s}}=n_{a_{s}}, N_{a_{s+1}}=n-\sum_{i=1}^{s} n_{a_{i}}\right\}\right) \\
& =\frac{n !}{\prod_{i=1}^{s} n_{a_{i}} ! \times\left(n-\sum_{i=1}^{s} n_{a_{i}}\right) !} \times \prod_{i=1}^{s} p_{a_{i}}^{n_{a_{i}}} \times\left(1-\sum_{i=1}^{s} p_{a_{i}}\right)^{n-\sum_{i=1}^{s} n_{a_{i}}},
\end{aligned}
$$
for $n_{a_{i}} \in \mathbb{N}_{0}, i=1, \ldots, s$, such that $\sum_{i=1}^{s} n_{a_{i}} \leq n$.
Proposition 4.222 - Some regressions and conditional distributions in the multinomial distribution setting (Johnson and Kotz, 1969, p. 284)
- The regression of $N_{i}$ on $N_{j}(j \neq i)$ is linear:
$$
E\left(N_{i} \mid N_{j}\right)=\left(n-N_{j}\right) \times \frac{p_{i}}{1-p_{j}}
$$
- The multiple regression of $N_{i}$ on $N_{b_{1}}, \ldots, N_{b_{r}}\left(b_{j} \neq i, j=1, \ldots, r\right)$ is also linear:
$$
E\left(N_{i} \mid\left\{N_{b_{1}}, \ldots, N_{b_{r}}\right\}\right)=\left(n-\sum_{j=1}^{r} N_{b_{j}}\right) \times \frac{p_{i}}{1-\sum_{j=1}^{r} p_{b_{j}}}
$$
- The random vector $\left(N_{a_{1}}, \ldots, N_{a_{s}}\right)$ conditional on an event referring to any subset of the remaining $N_{j}$ 's, say $\left\{N_{b_{1}}=n_{b_{1}}, \ldots, N_{b_{r}}=n_{b_{r}}\right\}$, also has a multinomial distribution. Its p.f. can be found in Johnson and Kotz (1969, p. 284).

Remark 4.223 - Conditional distributions and the simulation of a multinomial distribution (Gentle, 1998, p. 106)
The following conditional distributions taken from Gentle (1998, p. 106) suggest a procedure to generate pseudo-random vectors with a multinomial distribution:
- $N_{1} \sim \operatorname{Binomial}\left(n, p_{1}\right)$;
- $N_{2} \mid\left\{N_{1}=n_{1}\right\} \sim \operatorname{Binomial}\left(n-n_{1}, \frac{p_{2}}{1-p_{1}}\right)$;
- $N_{3} \mid\left\{N_{1}=n_{1}, N_{2}=n_{2}\right\} \sim \operatorname{Binomial}\left(n-n_{1}-n_{2}, \frac{p_{3}}{1-p_{1}-p_{2}}\right)$;
- ...
- $N_{d-1} \mid\left\{N_{1}=n_{1}, \ldots, N_{d-2}=n_{d-2}\right\} \sim \operatorname{Binomial}\left(n-\sum_{i=1}^{d-2} n_{i}, \frac{p_{d-1}}{1-\sum_{i=1}^{d-2} p_{i}}\right)$;
- $N_{d} \mid\left\{N_{1}=n_{1}, \ldots, N_{d-1}=n_{d-1}\right\} \stackrel{d}{=} n-\sum_{i=1}^{d-1} n_{i}$.
Thus, we can generate a pseudo-random vector from a multinomial distribution by sequentially generating independent pseudo-random numbers from the binomial conditional distributions stated above.
Gentle (1998, p. 106) refers other ways of generating pseudo-random vectors from a multinomial distribution.
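A direct transcription of this sequential scheme (a minimal NumPy sketch; the values of $n$ and $\underline{p}$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)          # arbitrary seed

def rmultinomial(n, p):
    """One multinomial draw via the sequential binomial conditionals."""
    counts, remaining, tail = [], n, 1.0
    for pi in p[:-1]:
        ni = rng.binomial(remaining, pi / tail)   # N_i | previous counts
        counts.append(ni)
        remaining -= ni
        tail -= pi                      # probability mass still unassigned
    counts.append(remaining)            # N_d = n - sum of the other counts
    return counts

print(rmultinomial(100, [0.2, 0.3, 0.5]))
```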
Remark 4.224 - Speeding up the simulation of a multinomial distribution (Gentle, 1998, p. 106)
To speed up the generation process, Gentle (1998, p. 106) recommends that we previously order the probabilities $p_{1}, \ldots, p_{d}$ in descending order - thus, getting the vector of probabilities $\left(p_{(d)}, \ldots, p_{(1)}\right)$, where $p_{(d)}=\max _{i=1, \ldots, d} p_{i}, \ldots$, and $p_{(1)}=\min _{i=1, \ldots, d} p_{i}$. Then we generate $d$ pseudo-random numbers with the following binomial distributions with parameters
1. $n$ and the largest probability of "success" $p_{(d)}$, say $n_{(d)}$,
2. $n-n_{(d)}$ and $\frac{p_{(d-1)}}{1-p_{(d)}}$, say $n_{(d-1)}$,
3. $n-n_{(d)}-n_{(d-1)}$ and $\frac{p_{(d-2)}}{1-p_{(d)}-p_{(d-1)}}$, say $n_{(d-2)}$,
4. ...
5. $n-\sum_{i=1}^{d-2} n_{(d+1-i)}$ and $\frac{p_{(2)}}{1-\sum_{i=1}^{d-2} p_{(d+1-i)}}$, say $n_{(2)}$, and finally
6. $\operatorname{assign} n_{(1)}=n-\sum_{i=1}^{d-1} n_{(d+1-i)}$.
Remark 4.225 - Speeding up the simulation of a multinomial distribution (http://en.wikipedia.org/wiki/Multinomial_distribution)
Assume the parameters $p_{1}, \ldots, p_{d}$ are already sorted in descending order (this is only to speed up computation and is not strictly necessary). Now, for each trial, generate a pseudo-random number from $U \sim \operatorname{Uniform}(0,1)$, say $u$. The resulting outcome is the event $E_{j}$, where
$$
\begin{aligned}
j & =\arg \min _{j^{\prime} \in\{1, \ldots, d\}}\left(\sum_{i=1}^{j^{\prime}} p_{i} \geq u\right) \\
& =F_{Z}^{-1}(u)
\end{aligned}
$$
with $Z$ an integer r.v. that takes values $1, \ldots, d$, with probabilities $p_{1}, \ldots p_{d}$, respectively. This is a sample for the multinomial distribution with $n=1$.
The absolute frequencies of events $E_{1}, \ldots, E_{d}$, resulting from $n$ independent repetitions of the procedure we just described, constitutes a pseudo-random vector from a multinomial distribution with parameters $n$ and $\underline{p}=\left(p_{1}, \ldots p_{d}\right)$.
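The per-trial scheme in code (a minimal NumPy sketch; `searchsorted` applied to the cumulative probabilities plays the role of $F_{Z}^{-1}$, and $\underline{p}$ is an arbitrary example already in descending order):

```python
import numpy as np

rng = np.random.default_rng(9)          # arbitrary seed

p = np.array([0.5, 0.3, 0.2])           # already sorted in descending order
n = 100

u = rng.uniform(size=n)                 # one uniform per trial
events = np.searchsorted(np.cumsum(p), u)     # index j-1 of the event E_j
counts = np.bincount(events, minlength=len(p))
print(counts)                           # pseudo-random Multinomial(n, p) vector
```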
## References
- Gentle, J.E. (1998). Random Number Generation and Monte Carlo Methods. Springer-Verlag. (QA298.GEN.50103)
- Johnson, N.L. and Kotz, S. (1969). Discrete Distributions. John Wiley \& Sons. (QA273-280/1.JOH.36178)
- Karr, A.F. (1993). Probability. Springer-Verlag.
- Resnick, S.I. (1999). A Probability Path. Birkhäuser. (QA273.4-.67.RES.49925)
- Rohatgi, V.K. (1976). An Introduction to Probability Theory and Mathematical Statistics. John Wiley \& Sons. (QA273-280/4.ROH.34909)
- Tong, Y.L. (1990). The Multivariate Normal Distribution. Springer-Verlag. (QA278.5-.65.TON.39685)
- Walrand, J. (2004). Lecture Notes on Probability Theory and Random Processes. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley.
## Chapter 5
## Convergence of sequences of random variables
Throughout this chapter we assume that $\left\{X_{1}, X_{2}, \ldots\right\}$ is a sequence of r.v. and $X$ is a r.v., and all of them are defined on the same probability space $(\Omega, \mathcal{F}, P)$.
Stochastic convergence formalizes the idea that a sequence of r.v. sometimes is expected to settle into a pattern. ${ }^{1}$ The pattern may for instance be that:
- there is a convergence of $X_{n}(\omega)$ in the classical sense to a fixed value $X(\omega)$, for each and every event $\omega$;
- the probability that the distance between $X_{n}$ and a particular r.v. $X$ exceeds any prescribed positive value decreases and converges to zero;
- the sequence formed by calculating the expected value of the (absolute or quadratic) distance between $X_{n}$ and $X$ converges to zero;
- the distribution of $X_{n}$ may "grow" increasingly similar to the distribution of a particular r.v. $X$.
Just as in analysis, we can distinguish among several types of convergence (Rohatgi, 1976, p. 240). Thus, in this chapter we investigate modes of convergence of sequences of r.v.:
- almost sure convergence $(\stackrel{a . s .}{\rightarrow})$;
- convergence in probability $(\stackrel{P}{\rightarrow})$;
- convergence in quadratic mean or in $L^{2}$ $(\stackrel{q . m .}{\rightarrow})$;
- convergence in $L^{1}$ or in mean $\left(\stackrel{L^{1}}{\rightarrow}\right)$;
- convergence in distribution $(\stackrel{d}{\rightarrow})$.

${ }^{1}$ See http://en.wikipedia.org/wiki/Convergence_of_random_variables.
It is important for the reader to be familiar with all these modes of convergence, with the ways they can be related, and with the applications of such results, and to understand their considerable significance in probability, statistics and stochastic processes.
### Modes of convergence
The first four modes of convergence ($\stackrel{*}{\rightarrow}$, where $*=$ a.s., $P$, q.m., $L^{1}$) pertain to the sequence of r.v. and to $X$ as functions of $\Omega$, while the fifth $(\stackrel{d}{\rightarrow})$ is related to the convergence of d.f. (Karr, 1993, p. 135).
#### Convergence of r.v. as functions on $\Omega$
Motivation 5.1 - Almost sure convergence (Karr, 1993, p. 135)
Almost sure convergence - or convergence with probability one - is the probabilistic version of pointwise convergence known from elementary real analysis.
Definition 5.2 - Almost sure convergence (Karr, 1993, p. 135; Rohatgi, 1976, p. 249)
The sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge almost surely to a r.v. $X$ if
$$
P\left(\left\{\omega: \lim _{n \rightarrow+\infty} X_{n}(\omega)=X(\omega)\right\}\right)=1 .
$$
In this case we write $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ (or $X_{n} \rightarrow X$ with probability 1 ).
## Remark 5.3 - Almost sure convergence
Equation (5.1) does not mean that $\lim _{n \rightarrow+\infty} P\left(\left\{\omega: X_{n}(\omega)=X(\omega)\right\}\right)=1$.
## Exercise 5.4 - Almost sure convergence
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of independent r.v. such that $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right), n \in \mathbb{N}$.
Prove that $X_{n} \stackrel{a . s .}{\nrightarrow} 0$, by deriving $P\left(\left\{X_{n}=0\right.\right.$, for every $\left.\left.m \leq n \leq n_{0}\right\}\right)$ and observing that this probability does not converge to 1 as $n_{0} \rightarrow+\infty$, for all values of $m$ (Rohatgi, 1976, p. 252, Example 9).

Motivation 5.5 - Convergence in probability (Karr, 1993, p. 135; http://en.wikipedia.org/wiki/Convergence_of_random_variables)
Convergence in probability essentially means that the probability that $\left|X_{n}-X\right|$ exceeds any prescribed, strictly positive value converges to zero.
The basic idea behind this type of convergence is that the probability of an "unusual" outcome becomes smaller and smaller as the sequence progresses.
Definition 5.6 - Convergence in probability (Karr, 1993, p. 136; Rohatgi, 1976, p. 243)
The sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge in probability to a r.v. $X$ - denoted by $X_{n} \stackrel{P}{\rightarrow} X-$ if
$$
\lim _{n \rightarrow+\infty} P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)=0
$$
for every $\epsilon>0$.
Remarks 5.7 - Convergence in probability (Rohatgi, 1976, p. 243; http://en.wikipedia.org/wiki/Convergence_of_random_variables)
- The definition of convergence in probability says nothing about the convergence of r.v. $X_{n}$ to r.v. $X$ in the sense in which it is understood in real analysis. Thus, $X_{n} \stackrel{P}{\rightarrow} X$ does not imply that, given $\epsilon>0$, we can find an $N$ such that $\left|X_{n}-X\right|<\epsilon$, for $n \geq N$.
Definition 5.6 speaks only of the convergence of the sequence of probabilities $P\left(\left|X_{n}-X\right|>\epsilon\right)$ to zero.
- Formally, Definition 5.6 means that
$$
\forall \epsilon, \delta>0, \exists N_{\delta}: P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)<\delta, \forall n \geq N_{\delta}
$$
- The concept of convergence in probability is used very often in statistics. For example, an estimator is called consistent if it converges in probability to the parameter being estimated.
- Convergence in probability is also the type of convergence established by the weak law of large numbers.
## Example 5.8 - Convergence in probability
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. such that $X_{i} \sim \operatorname{Uniform}(0, \theta)$, where $\theta>0$.
(a) Check if $X_{(n)}=\max _{i=1, \ldots, n} X_{i} \stackrel{P}{\rightarrow} \theta$.
- R.v.
$X_{i} \stackrel{i . i . d}{\sim} X, i \in \mathbb{N}$
$X \sim \operatorname{Uniform}(0, \theta)$
- D.f. of $X$
$F_{X}(x)= \begin{cases}0, & x<0 \\ \frac{x}{\theta}, & 0 \leq x \leq \theta \\ 1, & x>\theta\end{cases}$
- New r.v.
$X_{(n)}=\max _{i=1, \ldots, n} X_{i}$
- D.f. of $X_{(n)}$
$$
\begin{aligned}
F_{X_{(n)}}(x)= & {\left[F_{X}(x)\right]^{n} } \\
= & \begin{cases}0, & x<0 \\
\left(\frac{x}{\theta}\right)^{n}, & 0 \leq x \leq \theta \\
1, & x>\theta\end{cases}
\end{aligned}
$$
- Checking the convergence in probability $X_{(n)} \stackrel{P}{\rightarrow} \theta$
Making use of the definition of this type of convergence and capitalizing on the d.f. of $X_{(n)}$, we get, for every $\epsilon>0$ :
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left(\left|X_{(n)}-\theta\right|>\epsilon\right) & =1-\lim _{n \rightarrow+\infty} P\left(\theta-\epsilon \leq X_{(n)} \leq \theta+\epsilon\right) \\
& =1-\lim _{n \rightarrow+\infty}\left[F_{X_{(n)}}(\theta+\epsilon)-F_{X_{(n)}}(\theta-\epsilon)\right] \\
& = \begin{cases}1-\lim _{n \rightarrow+\infty}\left[1-\left(\frac{\theta-\epsilon}{\theta}\right)^{n}\right]=1-(1-0), & 0<\epsilon<\theta \\ 1-\lim _{n \rightarrow+\infty}[1-0]=1-1, & \epsilon \geq \theta\end{cases} \\
& =0 .
\end{aligned}
$$
## - Conclusion
$X_{(n)} \stackrel{P}{\rightarrow} \theta$.
Interestingly enough, $X_{(n)}$ is not only the maximum likelihood estimator of $\theta$ but also a consistent estimator of $\theta$ $\left(X_{(n)} \stackrel{P}{\rightarrow} \theta\right)$. However, $E\left[X_{(n)}\right]=n \theta /(n+1) \neq \theta$, i.e. $X_{(n)}$ is a biased estimator of $\theta$.
(b) Prove that $X_{(1: n)}=\min _{i=1, \ldots, n} X_{i} \stackrel{P}{\rightarrow} 0$.
- New r.v.
$X_{(1: n)}=\min _{i=1, \ldots, n} X_{i}$
- D.f. of $X_{(1: n)}$
$$
\begin{aligned}
& F_{X_{(1: n)}}(x)=1-\left[1-F_{X}(x)\right]^{n} \\
&= \begin{cases}0, & x<0 \\
1-\left(1-\frac{x}{\theta}\right)^{n}, & 0 \leq x \leq \theta \\
1, & x>\theta\end{cases}
\end{aligned}
$$
- Checking the convergence in probability $X_{(1: n)} \stackrel{P}{\rightarrow} 0$
For every $\epsilon>0$, we have
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left(\left|X_{(1: n)}-0\right|>\epsilon\right) & =1-\lim _{n \rightarrow+\infty}\left[F_{X_{(1: n)}}(\epsilon)-F_{X_{(1: n)}}(-\epsilon)\right] \\
& =1-\lim _{n \rightarrow+\infty} F_{X_{(1: n)}}(\epsilon) \\
& = \begin{cases}1-\lim _{n \rightarrow+\infty}\left[1-\left(1-\frac{\epsilon}{\theta}\right)^{n}\right]=1-(1-0), & 0<\epsilon<\theta \\ 1-1, & \epsilon \geq \theta\end{cases} \\
& =0 .
\end{aligned}
$$
- Conclusion
$X_{(1: n)} \stackrel{P}{\rightarrow} 0$.
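Both limits can be watched empirically (a simulation sketch; $\theta$, $\epsilon$, the sample sizes and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)          # arbitrary seed
theta, eps, reps = 2.0, 0.05, 10_000

for n in (10, 100, 1_000):
    x = rng.uniform(0.0, theta, size=(reps, n))
    # Empirical estimates of P(|X(n) - theta| > eps) and P(|X(1:n) - 0| > eps).
    p_max = np.mean(np.abs(x.max(axis=1) - theta) > eps)
    p_min = np.mean(np.abs(x.min(axis=1) - 0.0) > eps)
    print(f"n = {n:5d}   P(|max - theta| > eps) ~ {p_max:.4f}   "
          f"P(|min - 0| > eps) ~ {p_min:.4f}")
```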
Remark 5.9 - Chebyshev(-Bienaymé)'s inequality and convergence in probability
Chebyshev(-Bienaymé)'s inequality can be useful to prove that some sequences of r.v. converge in probability to a degenerate r.v. (i.e. a constant).

Example 5.10 - Chebyshev(-Bienaymé)'s inequality and convergence in probability
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Gamma}(n, n), n \in \mathbb{N}$. Prove that $X_{n} \stackrel{P}{\rightarrow} 1$, by making use of Chebyshev(-Bienaymé)'s inequality.
- R.v.
$X_{n} \sim \operatorname{Gamma}(n, n), n \in \mathbb{N}$
$E\left(X_{n}\right)=\frac{n}{n}=1$
$V\left(X_{n}\right)=\frac{n}{n^{2}}=\frac{1}{n}$
- Checking the convergence in probability $X_{n} \stackrel{P}{\rightarrow} 1$
The application of the definition of this type of convergence and Chebyshev(Bienaymé)'s inequality leads to
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left(\left|X_{n}-1\right|>\epsilon\right) & =\lim _{n \rightarrow+\infty} P\left(\left|X_{n}-E\left(X_{n}\right)\right| \geq \frac{\epsilon}{\sqrt{V\left(X_{n}\right)}} \sqrt{V\left(X_{n}\right)}\right) \\
& \leq \lim _{n \rightarrow+\infty} \frac{1}{\left(\frac{\epsilon}{\sqrt{\frac{1}{n}}}\right)^{2}} \\
& =\frac{1}{\epsilon^{2}} \lim _{n \rightarrow+\infty} \frac{1}{n} \\
& =0,
\end{aligned}
$$
for every $\epsilon>0$.
## - Conclusion
$X_{n} \stackrel{P}{\rightarrow} 1$.
Exercise 5.11 - Chebyshev(-Bienaymé)'s inequality and convergence in probability
Prove that $X_{(n)}=\max _{i=1, \ldots, n} X_{i}$, where $X_{i} \sim_{i . i . d .} \operatorname{Uniform}(0, \theta)$, is a consistent estimator of $\theta>0$, by using Chebyshev(-Bienaymé)'s inequality and the fact that $E\left[X_{(n)}\right]=\frac{n}{n+1} \theta$ and $V\left[X_{(n)}\right]=\frac{n}{(n+2)(n+1)^{2}} \theta^{2}$.
## Exercise 5.12 - Convergence in probability
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right), n \in \mathbb{N}$.
(a) Show that $X_{n} \stackrel{P}{\rightarrow} 0$, by obtaining $P\left(\left\{\left|X_{n}\right|>\epsilon\right\}\right)$, for $0<\epsilon<1$ and $\epsilon \geq 1$ (Rohatgi, 1976, pp. 243-244, Example 5).
(b) Verify that $E\left(X_{n}^{k}\right) \rightarrow E\left(X^{k}\right)$, where $k \in \mathbb{N}$ and $X \stackrel{d}{=} 0$.
Exercise 5.13 - Convergence in probability does not imply convergence of $k$ th. moments
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \stackrel{d}{=} n \times \operatorname{Bernoulli}\left(\frac{1}{n}\right), n \in \mathbb{N}$, i.e.
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1-\frac{1}{n}, & x=0 \\ \frac{1}{n}, & x=n \\ 0, & \text { otherwise. }\end{cases}
$$
Prove that $X_{n} \stackrel{P}{\rightarrow} 0$, however $E\left(X_{n}^{k}\right) \nrightarrow E\left(X^{k}\right)$, where $k \in \mathbb{N}$ and the r.v. $X$ is degenerate at 0 (Rohatgi, 1976, p. 247, Remark 3).
## Motivation 5.14 - Convergence in quadratic mean and in $L^{1}$
We have just seen that convergence in probability does not imply the convergence of moments, namely of orders 2 or 1.
Definition 5.15 - Convergence in quadratic mean or in $L^{2}$ (Karr, 1993, p. 136)

Let $X, X_{1}, X_{2}, \ldots$ belong to $L^{2}$. Then the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge to $X$ in quadratic mean (or in $L^{2}$) - denoted by $X_{n} \stackrel{q . m .}{\rightarrow} X$ (or $X_{n} \stackrel{L^{2}}{\rightarrow} X$) - if
$$
\lim _{n \rightarrow+\infty} E\left[\left(X_{n}-X\right)^{2}\right]=0
$$
Exercise 5.16 - Convergence in quadratic mean
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right)$.
Prove that $X_{n} \stackrel{\text { q.m. }}{\rightarrow} X$, where the r.v. $X$ is degenerate at 0 (Rohatgi, 1976, p. 247, Example 6).
Exercise 5.17 - Convergence in quadratic mean (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. with $P\left(\left\{X_{n}= \pm \frac{1}{n}\right\}\right)=\frac{1}{2}$.
Prove that $X_{n} \stackrel{\text { q.m. }}{\rightarrow} X$, where the r.v. $X$ is degenerate at 0 (Rohatgi, 1976, p. 252, Example 11). Exercise 5.18 - Convergence in quadratic mean implies convergence of 2nd. moments (Karr, 1993, p. 158, Exercise 5.6(a))
Show that $X_{n} \stackrel{\text { q.m. }}{\rightarrow} X \Rightarrow E\left(X_{n}^{2}\right) \rightarrow E\left(X^{2}\right)$ (Rohatgi, 1976, p. 248, proof of Theorem 8).
Exercise 5.19 - Convergence in quadratic mean of partial sums (Karr, 1993, p. 159, Exercise 5.11)
Let $X_{1}, X_{2}, \ldots$ be pairwise uncorrelated r.v. with mean zero and partial sums $S_{n}=\sum_{i=1}^{n} X_{i}$
Prove that if there is a constant $c$ such that $V\left(X_{i}\right) \leq c$, for every $i$, then $\frac{S_{n}}{n^{\alpha}} \stackrel{q . m .}{\rightarrow} 0$ for all $\alpha>\frac{1}{2}$.
Definition 5.20 - Convergence in mean or in $L^{1}$ (Karr, 1993, p. 136)

Let $X, X_{1}, X_{2}, \ldots$ belong to $L^{1}$. Then the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge to $X$ in mean (or in $L^{1}$) - denoted by $X_{n} \stackrel{L^{1}}{\rightarrow} X$ - if
$$
\lim _{n \rightarrow+\infty} E\left(\left|X_{n}-X\right|\right)=0
$$
Exercise 5.21 - Convergence in mean implies convergence of 1st. moments (Karr, 1993, p. 158, Exercise 5.6(b))
Prove that $X_{n} \stackrel{L^{1}}{\rightarrow} X \Rightarrow E\left(X_{n}\right) \rightarrow E(X)$.
#### Convergence in distribution
## Motivation 5.22 - Convergence in distribution
(http://en.wikipedia.org/wiki/Convergence_of_random_variables)
Convergence in distribution is very frequently used in practice; most often it arises from the application of the central limit theorem.
Definition 5.23 - Convergence in distribution (Karr, 1993, p. 136; Rohatgi, 1976, pp. 240-1)
The sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ converges to $X$ in distribution - denoted by $X_{n} \stackrel{d}{\rightarrow} X$ - if
$$
\lim _{n \rightarrow+\infty} F_{X_{n}}(x)=F_{X}(x),
$$
for all $x$ at which $F_{X}$ is continuous.
## Remarks 5.24 - Convergence in distribution
(http://en.wikipedia.org/wiki/Convergence_of_random_variables; Karr, 1993, p. 136; Rohatgi, 1976, p. 242)
- With this mode of convergence, we increasingly expect the next r.v. in the sequence to be better and better modeled by a given d.f., as seen in exercises 5.25 and 5.26.
- It must be noted that it is quite possible for a given sequence of d.f. to converge to a function that is not a d.f., as shown in Example 5.27 and Exercise 5.28.
- The requirement that only the continuity points of $F_{X}$ should be considered is essential, as we shall see in exercises 5.29 and 5.30.
- The convergence in distribution does not imply the convergence of corresponding p.(d.)f., as shown in Exercise 5.32. Sequences of absolutely continuous r.v. that converge in distribution to discrete r.v. (and vice-versa) are obvious illustrations, as shown in examples 5.31 and 5.33.
## Exercise 5.25 - Convergence in distribution
Let $X_{1}, X_{2}, \ldots, X_{n}$ be i.i.d. r.v. with common p.d.f.
$$
f(x)= \begin{cases}\frac{1}{\theta}, & 0<x<\theta \\ 0, & \text { otherwise }\end{cases}
$$
where $0<\theta<+\infty$, and $X_{(n)}=\max _{i=1, \ldots, n} X_{i}$.
Show that $X_{(n)} \stackrel{d}{\rightarrow} \theta$ (Rohatgi, 1976, p. 241, Example 2).
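A sketch of the argument, using $F_{X_{(n)}}(x)=\left[F_{X}(x)\right]^{n}$ for the maximum of $n$ i.i.d. r.v.:

$$
F_{X_{(n)}}(x)=\left(\frac{x}{\theta}\right)^{n} \rightarrow 0, \quad 0<x<\theta, \qquad F_{X_{(n)}}(x)=1, \quad x \geq \theta,
$$

so $F_{X_{(n)}}(x)$ converges, at every continuity point, to the d.f. of a r.v. degenerate at $\theta$.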
## Exercise 5.26 - Convergence in distribution (bis)
Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Bernoulli}\left(p_{n}\right), n=1,2, \ldots$;
- $X \sim \operatorname{Bernoulli}(p)$.
Prove that $X_{n} \stackrel{d}{\rightarrow} X$ iff $p_{n} \rightarrow p$.
Example 5.27 - A sequence of d.f. converging to a non d.f. (Murteira, 1979, pp. 330-331)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. with d.f.
$$
F_{X_{n}}(x)= \begin{cases}0, & x<-n \\ \frac{x+n}{2 n}, & -n \leq x<n \\ 1, & x \geq n\end{cases}
$$
Please note that $\lim _{n \rightarrow+\infty} F_{X_{n}}(x)=\frac{1}{2}, x \in \mathbb{R}$, as suggested by plotting a few terms of the sequence of d.f., for $n=1,10^{3}, 10^{6}$ (from top to bottom).
Consequently, the limit of the sequence of d.f. is not itself a d.f.
Exercise 5.28 - A sequence of d.f. converging to a non d.f.
Consider the sequence of d.f.
$$
F_{X_{n}}(x)= \begin{cases}0, & x<n \\ 1, & x \geq n\end{cases}
$$
where $F_{X_{n}}(x)$ is the d.f. of the r.v. $X_{n}$ degenerate at $x=n$.
Verify that $F_{X_{n}}(x)$ converges to a function (identically equal to 0!) which is not a d.f. (Rohatgi, 1976, p. 241, Example 1).

Exercise 5.29 - The requirement that only the continuity points of $F_{X}$ should be considered is essential
Let $X_{n} \sim$ Uniform $\left(\frac{1}{2}-\frac{1}{n}, \frac{1}{2}+\frac{1}{n}\right)$ and $X$ be a r.v. degenerate at $\frac{1}{2}$.
(a) Prove that $X_{n} \stackrel{d}{\rightarrow} X$ (Karr, 1993, p. 142).
(b) Verify that $F_{X_{n}}\left(\frac{1}{2}\right)=\frac{1}{2}$ for each $n$, and these values do not converge to $F_{X}\left(\frac{1}{2}\right)=1$. Is there any contradiction with the convergence in distribution previously proved? (Karr, 1993, p. 142.)
Exercise 5.30 - The requirement that only the continuity points of $F_{X}$ should be considered is essential (bis)
Let $X_{n} \sim \operatorname{Uniform}\left(0, \frac{1}{n}\right)$ and $X$ be a r.v. degenerate at 0.
Prove that $X_{n} \stackrel{d}{\rightarrow} X$, even though $F_{X_{n}}(0)=0$, for all $n$, and $F_{X}(0)=1$, that is, the convergence of d.f. fails at the point $x=0$ where $F_{X}$ is discontinuous (http://en.wikipedia.org/wiki/Convergence_of_random_variables).
Example 5.31 - Convergence in distribution does not imply convergence of corresponding p.(d.)f. (Murteira, 1979, p. 331)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Normal}\left(0, \frac{1}{n^{2}}\right)$.
An analysis of the representation of some terms of the sequence of d.f. (e.g. $n=1,10,50$, from left to right in the corresponding graph) and the notion of convergence in distribution lead us to conclude that $X_{n} \stackrel{d}{\rightarrow} X$, where $X \stackrel{d}{=} 0$, even though
$$
\lim _{n \rightarrow+\infty} F_{X_{n}}(0)=\lim _{n \rightarrow+\infty} \Phi\left(\frac{0-0}{\sqrt{\frac{1}{n^{2}}}}\right)=\Phi(0)=\frac{1}{2}
$$
and
$$
\lim _{n \rightarrow+\infty} F_{X_{n}}(x)=\left\{\begin{array}{cc}
0, & x<0 \\
\frac{1}{2}, & x=0 \\
1, & x>0
\end{array}\right.
$$
is not a d.f. (it is not right-continuous at $x=0$).
Note that $X \stackrel{d}{=} 0$, therefore the limit d.f. of the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is the Heaviside function, i.e. $F_{X}(x)=I_{[0,+\infty)}(x)$.
Exercise 5.32 - Convergence in distribution does not imply convergence of corresponding p.(d.)f.
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. with p.f.
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1, & x=2+\frac{1}{n} \\ 0, & \text { otherwise }\end{cases}
$$
(a) Prove that $X_{n} \stackrel{d}{\rightarrow} X$, where $X$ is a r.v. degenerate at 2.
(b) Verify that none of the p.f. $P\left(\left\{X_{n}=x\right\}\right)$ assigns any probability to the point $x=2$, for all $n$, and that $P\left(\left\{X_{n}=x\right\}\right) \rightarrow 0$ for all $x$ (Rohatgi, 1976, p. 242, Example 4).
Example 5.33 - A sequence of discrete r.v. that converges in distribution to an absolutely continuous r.v. (Rohatgi, 1976, p. 256, Exercise 10)

Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that $X_{n} \sim \operatorname{Geometric}\left(\frac{\lambda}{n}\right)$, where $n>\lambda>0$;
- $\left\{Y_{n}, n \in \mathbb{N}\right\}$ be a sequence of r.v. such that $Y_{n}=\frac{X_{n}}{n}$.
Show that $Y_{n} \stackrel{d}{\rightarrow}$ Exponential $(\lambda)$.
- R.v. $X_{n} \sim \operatorname{Geometric}\left(\frac{\lambda}{n}\right), n \in \mathbb{N}$
- P.f. of $X_{n}$ and $Y_{n}$
$P\left(X_{n}=x\right)=\left(1-\frac{\lambda}{n}\right)^{x-1} \times \frac{\lambda}{n}, x=1,2, \ldots$
$P\left(Y_{n}=y\right)=P\left(X_{n}=n y\right)=\left(1-\frac{\lambda}{n}\right)^{n y-1} \times \frac{\lambda}{n}, y=\frac{1}{n}, \frac{2}{n}, \ldots$

- D.f. of $Y_{n}$
$$
\begin{aligned}
F_{Y_{n}}(y) & =F_{X_{n}}(n y) \\
& = \begin{cases}0, & y<\frac{1}{n} \\
\sum_{x=1}^{[n y]} P\left(X_{n}=x\right), & y \geq \frac{1}{n}\end{cases}
\end{aligned}
$$
where $[n y]$ represents the integer part of the real number $n y$ and
$$
\begin{aligned}
\sum_{x=1}^{[n y]} P\left(X_{n}=x\right) & =\sum_{x=0}^{[n y]-1}\left(1-\frac{\lambda}{n}\right)^{x} \times \frac{\lambda}{n} \\
& =1-\left(1-\frac{\lambda}{n}\right)^{[n y]} .
\end{aligned}
$$
## - Checking the convergence in distribution
Let us remind the reader that $[n y]=n y-\epsilon$, for some $\epsilon \in[0,1)$. Thus:
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} F_{Y_{n}}(y) & =1-\lim _{n \rightarrow+\infty}\left(1-\frac{\lambda}{n}\right)^{[n y]} \\
& =1-\lim _{n \rightarrow+\infty}\left(1-\frac{\lambda}{n}\right)^{n y} \times \lim _{n \rightarrow+\infty}\left(1-\frac{\lambda}{n}\right)^{-\epsilon} \\
& =1-\left[\lim _{n \rightarrow+\infty}\left(1-\frac{\lambda}{n}\right)^{n}\right]^{y} \times 1 \\
& =1-e^{-\lambda y} \\
& =F_{\text {Exponential }(\lambda)}(y) .
\end{aligned}
$$
## - Conclusion
$Y_{n} \stackrel{d}{\rightarrow}$ Exponential $(\lambda)$.
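A minimal Monte Carlo sketch of this limit in Python (not part of the original notes; it assumes numpy is available):

```python
import numpy as np

# Compare the empirical d.f. of Y_n = X_n / n, with X_n ~ Geometric(lambda/n),
# to the Exponential(lambda) d.f. F(y) = 1 - exp(-lambda * y).
rng = np.random.default_rng(seed=1)
lam, n, y = 2.0, 1000, 1.5
x = rng.geometric(lam / n, size=100_000)   # support {1, 2, ...}
print(np.mean(x / n <= y))                 # empirical F_{Y_n}(y)
print(1 - np.exp(-lam * y))                # limit d.f.: ~0.9502
```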
Exercise 5.34 - A sequence of discrete r.v. that converges in distribution to an absolutely continuous r.v. (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of discrete r.v. such that $X_{n} \sim \operatorname{Uniform}\{0,1, \ldots, n\}$.
Prove that $Y_{n}=\frac{X_{n}}{n} \stackrel{d}{\rightarrow} \operatorname{Uniform}(0,1)$. ${ }^{2}$
${ }^{2}$ This result is very important in the generation of pseudo-random numbers from the $\operatorname{Uniform}(0,1)$ distribution by using computers, since these machines "deal" with discrete mathematics.

The following table condenses the definitions of convergence of sequences of r.v.
| Mode of convergence | Assumption | Defining condition |
| :--- | :--- | :--- |
| $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ (almost sure) | - | $P\left(\left\{\omega: X_{n}(\omega) \rightarrow X(\omega)\right\}\right)=1$ |
| $X_{n} \stackrel{P}{\rightarrow} X$ (in probability) | - | $P\left(\left\{\left\|X_{n}-X\right\|>\epsilon\right\}\right) \rightarrow 0$, for all $\epsilon>0$ |
| $X_{n} \stackrel{q . m}{\rightarrow} X$ (in quadratic mean) | $X, X_{1}, X_{2}, \ldots \in L^{2}$ | $E\left[\left(X_{n}-X\right)^{2}\right] \rightarrow 0$ |
| $X_{n} \stackrel{L^{1}}{\rightarrow} X$ (in $\left.L^{1}\right)$ | $X, X_{1}, X_{2}, \ldots \in L^{1}$ | $E\left(\left\|X_{n}-X\right\|\right) \rightarrow 0$ |
| $X_{n} \stackrel{d}{\rightarrow} X$ (in distribution) | - | $F_{X_{n}}(x) \rightarrow F_{X}(x)$, at continuity points $x$ of $F_{X}$ |
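To make the defining conditions concrete, here is a minimal Monte Carlo sketch in Python (an illustration added to these notes, assuming numpy): it estimates $P\left(\left\{\left|X_{n}\right|>\epsilon\right\}\right)$ for $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right)$, which should decay like $\frac{1}{n}$ (cf. Exercise 5.12).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
eps = 0.5
for n in [10, 100, 1000, 10_000]:
    x = rng.binomial(1, 1.0 / n, size=100_000)  # 10^5 replicates of X_n
    print(n, np.mean(np.abs(x) > eps))          # estimate of P({|X_n| > eps}) = 1/n
```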
Exercise 5.35 - Modes of convergence and uniqueness of limit (Karr, 1993, p. 158, Exercise 5.1)
Prove that for all five forms of convergence the limit is unique. In particular:
(a) if $X_{n} \stackrel{*}{\rightarrow} X$ and $X_{n} \stackrel{*}{\rightarrow} Y$, where $*=$ a.s., $P, q.m., L^{1}$, then $X \stackrel{\text { a.s. }}{=} Y$;
(b) if $X_{n} \stackrel{d}{\rightarrow} X$ and $X_{n} \stackrel{d}{\rightarrow} Y$, then $X \stackrel{d}{=} Y$.
Exercise 5.36 - Modes of convergence and the vector space structure of the family of r.v. (Karr, 1993, p. 158, Exercise 5.2)
Prove that, for $*=$ a.s., $P, q . m ., L^{1}$,
$$
X_{n} \stackrel{*}{\rightarrow} X \Leftrightarrow X_{n}-X \stackrel{*}{\rightarrow} 0,
$$
i.e. the four function-based forms of convergence are compatible with the vector space structure of the family of r.v.
#### Alternative criteria
The definition of almost sure convergence and its verification are far from trivial. More tractable criteria have to be stated...
Proposition 5.37 - Relating almost sure convergence and convergence in probability (Karr, 1993, p. 137; Rohatgi, 1976, p. 249)
$X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ iff
$$
\forall \epsilon>0, \lim _{n \rightarrow+\infty} P\left(\left\{\sup _{k \geq n}\left|X_{k}-X\right|>\epsilon\right\}\right)=0,
$$
i.e.
$$
X_{n} \stackrel{\text { a.s. }}{\rightarrow} X \quad \Leftrightarrow \quad Y_{n}=\sup _{k \geq n}\left|X_{k}-X\right| \stackrel{P}{\rightarrow} 0
$$
Remarks 5.38 - Relating almost sure convergence and convergence in probability (Karr, 1993, p. 137; Rohatgi, 1976, p. 250, Remark 6)
- Proposition 5.37 states an equivalent form of almost sure convergence that illuminates its relationship to convergence in probability.
- $X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$ means that,
$$
\forall \epsilon, \eta>0, \exists n_{0} \in \mathbb{N}: P\left(\left\{\sup _{k \geq n_{0}}\left|X_{k}\right|>\epsilon\right\}\right)<\eta
$$
Indeed, we can write, equivalently, that
$$
\lim _{n \rightarrow+\infty} P\left(\bigcup_{k \geq n}\left\{\left|X_{k}\right|>\epsilon\right\}\right)=0
$$
for $\epsilon>0$ arbitrary.
Exercise 5.39 - Relating almost sure convergence and convergence in probability
Prove Proposition 5.37 (Karr, 1993, p. 137; Rohatgi, 1976, p. 250).

Exercise 5.40 - Relating almost sure convergence and convergence in probability (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. with $P\left(\left\{X_{n}= \pm \frac{1}{n}\right\}\right)=\frac{1}{2}$.
Prove that $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$, where the r.v. $X$ is degenerate at 0, by using (5.16) (Rohatgi, 1976, p. 252).
Theorem 5.41 - Cauchy criterion (Rohatgi, 1976, p. 270)
$$
X_{n} \stackrel{a . s .}{\rightarrow} X \Leftrightarrow \lim _{n \rightarrow+\infty} P\left(\left\{\sup _{m}\left|X_{n+m}-X_{n}\right| \leq \epsilon\right\}\right)=1, \forall \epsilon>0 .
$$
## Exercise 5.42 - Cauchy criterion
Prove Theorem 5.41 (Rohatgi, 1976, pp. 270-2).
Definition 5.43 - Complete convergence (Karr, 1993, p. 138)
The sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge completely to $X$ if
$$
\sum_{n=1}^{+\infty} P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)<+\infty
$$
for every $\epsilon>0$.
The next results relate almost sure convergence to complete convergence, which is stronger than almost sure convergence and sometimes more convenient to establish (Karr, 1993, p. 137).
Proposition 5.44 - Relating almost sure convergence and complete convergence (Karr, 1993, p. 138)
$$
\sum_{n=1}^{+\infty} P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)<+\infty, \forall \epsilon>0 \Rightarrow X_{n} \stackrel{\text { a.s. }}{\rightarrow} X .
$$
Remark 5.45 - Relating almost sure convergence and complete convergence (Karr, 1993, p. 138)
$X_{n} \stackrel{P}{\rightarrow} X$ iff the probabilities $P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)$ converge to zero, while $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ if (but not only if) the convergence of the probabilities $P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)$ is fast enough that their sum, $\sum_{n=1}^{+\infty} P\left(\left\{\left|X_{n}-X\right|>\epsilon\right\}\right)$, is finite.

Exercise 5.46 - Relating almost sure convergence and complete convergence

Show Proposition 5.44, by using the (1st.) Borel-Cantelli lemma (Karr, 1993, p. 138).
Theorem 5.47 - Almost sure convergence of a sequence of independent r.v. (Rohatgi, 1976, p. 265)

Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of independent r.v. Then
$$
X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0 \Leftrightarrow \sum_{n=1}^{+\infty} P\left(\left\{\left|X_{n}\right|>\epsilon\right\}\right)<+\infty, \forall \epsilon>0 .
$$
Exercise 5.48 - Almost sure convergence of a sequence of independent r.v. Prove Theorem 5.47 (Rohatgi, 1976, pp. 265-6).
The definition of convergence in distribution is cumbersome because of the proviso regarding continuity points of the limit d.f. $F_{X}$. An alternative criterion follows.
Theorem 5.49 - Alternative criterion for convergence in distribution (Karr, 1993, p. 138)
Let $\mathbf{C}$ be the set of bounded, continuous functions $f: \mathbb{R} \rightarrow \mathbb{R}$. Then
$$
X_{n} \stackrel{d}{\rightarrow} X \quad \Leftrightarrow \quad E\left[f\left(X_{n}\right)\right] \rightarrow E[f(X)], \forall f \in \mathbf{C}
$$
Remark 5.50 - Alternative criterion for convergence in distribution (Karr, 1993, p. 138)
Theorem 5.49 provides a criterion for convergence in distribution which is superior to the definition of convergence in distribution in that one need not deal with continuity points of the limit d.f.
Exercise 5.51 - Alternative criterion for convergence in distribution Prove Theorem 5.49 (Karr, 1993, pp. 138-139).
Since, in the proof of Theorem 5.49, the continuous functions used to approximate indicator functions can be taken to be arbitrarily smooth, we can add a sufficient condition that guarantees convergence in distribution.

Corollary 5.52 - Sufficient condition for convergence in distribution (Karr, 1993, p. 139)
Let:
- $k$ be a fixed non-negative integer;
- $\mathbf{C}^{(k)}$ be the space of bounded, $k$-times uniformly continuously differentiable functions $f: \mathbb{R} \rightarrow \mathbb{R}$.
Then
$$
E\left[f\left(X_{n}\right)\right] \rightarrow E[f(X)], \forall f \in \mathbf{C}^{(k)} \Rightarrow X_{n} \stackrel{d}{\rightarrow} X .
$$
The next table summarizes the alternative criteria and sufficient conditions for almost sure convergence and convergence in distribution of sequences of r.v.
| Alternative criterion or sufficient condition | | Mode of convergence |
| :--- | :--- | :--- |
| $\forall \epsilon>0, \lim _{n \rightarrow+\infty} P\left(\left\{\sup _{k \geq n}\left\|X_{k}-X\right\|>\epsilon\right\}\right)=0$ | $\Leftrightarrow$ | $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ |
| $Y_{n}=\sup _{k \geq n}\left\|X_{k}-X\right\| \stackrel{P}{\rightarrow} 0$ | $\Leftrightarrow$ | $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ |
| $\lim _{n \rightarrow+\infty} P\left(\left\{\sup _{m}\left\|X_{n+m}-X_{n}\right\| \leq \epsilon\right\}\right)=1, \forall \epsilon>0$ | $\Leftrightarrow$ | $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ |
| $\sum_{n=1}^{+\infty} P\left(\left\{\left\|X_{n}-X\right\|>\epsilon\right\}\right)<+\infty, \forall \epsilon>0$ | $\Rightarrow$ | $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$ |
| $E\left[f\left(X_{n}\right)\right] \rightarrow E[f(X)], \forall f \in \mathbf{C}$ | $\Leftrightarrow$ | $X_{n} \stackrel{d}{\rightarrow} X$ |
| $E\left[f\left(X_{n}\right)\right] \rightarrow E[f(X)], \forall f \in \mathbf{C}^{(k)}$ for a fixed $k \in \mathbb{N}_{0}$ | $\Rightarrow$ | $X_{n} \stackrel{d}{\rightarrow} X$ |
We should also add that Grimmett and Stirzaker (2001, p. 310) state that if $X_{n} \stackrel{P}{\rightarrow} X$ and $P\left(\left\{\left|X_{n}\right| \leq k\right\}\right)=1$, for all $n$ and some $k$, then $X_{n} \stackrel{L^{r}}{\rightarrow} X$, for all $r \geq 1,{ }^{3}$ namely $X_{n} \stackrel{\text { q.m. }}{\rightarrow} X$ (which in turn implies $\left.X_{n} \stackrel{L^{1}}{\rightarrow} X\right)$.
${ }^{3}$ Let $X, X_{1}, X_{2}, \ldots$ belong to $L^{r}(r \geq 1)$. Then the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to converge to $X$ in $L^{r}$ - denoted by $X_{n} \stackrel{L^{r}}{\rightarrow} X$ - if $\lim _{n \rightarrow+\infty} E\left(\left|X_{n}-X\right|^{r}\right)=0$ (Grimmett and Stirzaker, 2001, p. 308).
### Relationships among the modes of convergence
Given the plethora of modes of convergence, it is natural to inquire how they are related: which implications always hold, and which hold only in the presence of additional assumptions (Karr, 1993, pp. 140 and 142).
#### Implications always valid
Proposition 5.53 - Almost sure convergence implies convergence in probability (Karr, 1993, p. 140; Rohatgi, 1976, p. 250)
$$
X_{n} \stackrel{a . s .}{\rightarrow} X \Rightarrow X_{n} \stackrel{P}{\rightarrow} X .
$$
Exercise 5.54 - Almost sure convergence implies convergence in probability Show Proposition 5.53 (Karr, 1993, p. 140; Rohatgi, 1976, p. 251).
Proposition 5.55 - Convergence in quadratic mean implies convergence in $L^{1}$ (Karr, 1993, p. 140)
$$
X_{n} \stackrel{q \cdot m .}{\rightarrow} X \Rightarrow X_{n} \stackrel{L^{1}}{\rightarrow} X .
$$
Exercise 5.56 - Convergence in quadratic mean implies convergence in $L^{1}$ Prove Proposition 5.55, by applying Cauchy-Schwarz's inequality (Karr, 1993, p. 140).
Proposition 5.57 - Convergence in $L^{1}$ implies convergence in probability (Karr, 1993, p. 141)
$$
X_{n} \stackrel{L^{1}}{\rightarrow} X \quad \Rightarrow \quad X_{n} \stackrel{P}{\rightarrow} X .
$$
Exercise 5.58 - Convergence in $L^{1}$ implies convergence in probability Prove Proposition 5.57, by using Chebyshev's inequality (Karr, 1993, p. 141).
Proposition 5.59 - Convergence in probability implies convergence in distribution (Karr, 1993, p. 141)
$$
X_{n} \stackrel{P}{\rightarrow} X \quad \Rightarrow \quad X_{n} \stackrel{d}{\rightarrow} X .
$$
## Exercise 5.60 - Convergence in probability implies convergence in distribution
Show Proposition 5.59 (Karr, 1993, p. 141).
Figure 5.1 shows that convergence in distribution is the weakest form of convergence, since it is implied by all other types of convergence studied so far.
$$
X_{n} \stackrel{q.m.}{\rightarrow} X \Rightarrow X_{n} \stackrel{L^{1}}{\rightarrow} X \Rightarrow X_{n} \stackrel{P}{\rightarrow} X \Rightarrow X_{n} \stackrel{d}{\rightarrow} X
$$

$$
X_{n} \stackrel{\text { a.s. }}{\rightarrow} X \Rightarrow X_{n} \stackrel{P}{\rightarrow} X \Rightarrow X_{n} \stackrel{d}{\rightarrow} X
$$
Figure 5.1: Implications always valid between modes of convergence.
Grimmett and Stirzaker (2001, p. 314) note that convergence in distribution is the weakest form of convergence for two reasons: it only involves d.f. and it makes no reference to an underlying probability space. ${ }^{4}$ However, convergence in distribution has a useful representation in terms of almost sure convergence, as stated in the next theorem.
Theorem 5.61 - Skorokhod's representation theorem (Grimmett and Stirzaker, 2001, p. 314)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v., $\left\{F_{1}, F_{2}, \ldots\right\}$ the associated sequence of d.f. and $X$ be a r.v. with d.f. $F$. If $X_{n} \stackrel{d}{\rightarrow} X$ then there is a probability space $\left(\Omega^{\prime}, \mathcal{F}^{\prime}, P^{\prime}\right)$ and r.v. $\left\{Y_{1}, Y_{2}, \ldots\right\}$ and $Y$ mapping $\Omega^{\prime}$ into $\mathbb{R}$ such that $\left\{Y_{1}, Y_{2}, \ldots\right\}$ and $Y$ have d.f. $\left\{F_{1}, F_{2}, \ldots\right\}$ and $F$, and $Y_{n} \stackrel{\text { a.s. }}{\rightarrow} Y$.
Remark 5.62 - Skorokhod's representation theorem (Grimmett and Stirzaker, 2001, p. 315)
Although $X_{n}$ may fail to converge to $X$ in any mode other than in distribution, there is a sequence of r.v. $\left\{Y_{1}, Y_{2}, \ldots\right\}$ such that $Y_{n}$ is identically distributed to $X_{n}$, for every $n$, which converges almost surely to a "copy" of $X$.
${ }^{4}$ Let us remind the reader that there is an equivalent formulation of convergence in distribution which involves d.f. alone: the sequence of d.f. $\left\{F_{1}, F_{2}, \ldots\right\}$ converges to the d.f. $F$ if $\lim _{n \rightarrow+\infty} F_{n}(x)=F(x)$ at each point $x$ where $F$ is continuous (Grimmett and Stirzaker, 2001, p. 190).
#### Counterexamples
Counterexamples to all implications among the modes of convergence (and more!) are condensed in Figure 5.2 and presented by means of several exercises.
$$
X_{n} \stackrel{q.m.}{\rightarrow} X \nLeftarrow X_{n} \stackrel{L^{1}}{\rightarrow} X \nLeftarrow X_{n} \stackrel{P}{\rightarrow} X \nLeftarrow X_{n} \stackrel{d}{\rightarrow} X
$$

$$
X_{n} \stackrel{\text { a.s. }}{\rightarrow} X \nLeftarrow X_{n} \stackrel{P}{\rightarrow} X, \qquad X_{n} \stackrel{\text { a.s. }}{\rightarrow} X \nLeftrightarrow X_{n} \stackrel{q.m.}{\rightarrow} X
$$
Figure 5.2: Counterexamples to implications among the modes of convergence.
Before proceeding with exercises, recall exercises 5.4 and 5.12, which pertain to the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$, where $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right), n \in \mathbb{N}$. In the first exercise we proved that $X_{n} \stackrel{\text { a.s. }}{\nrightarrow} 0$, whereas in the second one we concluded that $X_{n} \stackrel{P}{\rightarrow} 0$. Thus, combining these results we can state that $X_{n} \stackrel{P}{\rightarrow} 0 \nRightarrow X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$.
Exercise 5.63 - Almost sure convergence does not imply convergence in quadratic mean
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1-\frac{1}{n}, & x=0 \\ \frac{1}{n}, & x=n \\ 0, & \text { otherwise. }\end{cases}
$$
Prove that $X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$, and, hence, $X_{n} \stackrel{P}{\rightarrow} 0$ and $X_{n} \stackrel{d}{\rightarrow} 0$, but $X_{n} \stackrel{L^{1}}{\nrightarrow} 0$ and $X_{n} \stackrel{\text { q.m. }}{\nrightarrow} 0$ (Karr, 1993, p. 141, Counterexample a)).
Exercise 5.64 - Almost sure convergence does not imply convergence in quadratic mean (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1-\frac{1}{n^{r}}, & x=0 \\ \frac{1}{n^{r}}, & x=n \\ 0, & \text { otherwise, }\end{cases}
$$
where $r \geq 2$.
Prove that $X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$, but $X_{n} \stackrel{\text { q.m. }}{\nrightarrow} 0$ for $r=2$ (Rohatgi, 1976, p. 252, Example 10).

Exercise 5.65 - Convergence in quadratic mean does not imply almost sure convergence
Let $X_{n} \sim \operatorname{Bernoulli}\left(\frac{1}{n}\right)$.
Prove that $X_{n} \stackrel{\text { q.m. }}{\rightarrow} 0$, but $X_{n} \stackrel{\text { a.s. }}{\nrightarrow} 0$ (Rohatgi, 1976, p. 252, Example 9).
Exercise 5.66 - Convergence in $L^{1}$ does not imply convergence in quadratic mean
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. such that
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1-\frac{1}{n}, & x=0 \\ \frac{1}{n}, & x=\sqrt{n} \\ 0, & \text { otherwise. }\end{cases}
$$
Show that $X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$ and $X_{n} \stackrel{L^{1}}{\rightarrow} 0$, however $X_{n} \stackrel{\text { q.m. }}{\nrightarrow} 0$ (Karr, 1993, p. 141, Counterexample b)).
Exercise 5.67 - Convergence in probability does not imply almost sure convergence
For each positive integer $n$ there exist integers $m$ and $k$ (uniquely determined) such that
$$
n=2^{k}+m, m=0,1, \ldots, 2^{k}-1, k=0,1,2, \ldots
$$
Thus, for $n=1, k=m=0$; for $n=5, k=2, m=1$; and so on.
Define r.v. $X_{n}$, for $n=1,2, \ldots$, on $\Omega=[0,1]$ by
$$
X_{n}(\omega)= \begin{cases}2^{k}, & \frac{m}{2^{k}} \leq \omega<\frac{m+1}{2^{k}} \\ 0, & \text { otherwise. }\end{cases}
$$
Let the underlying probability measure on $\Omega$ be given by $P(I)=$ length of the interval $I \subseteq \Omega$. Thus,
$$
P\left(\left\{X_{n}=x\right\}\right)= \begin{cases}1-\frac{1}{2^{k}}, & x=0 \\ \frac{1}{2^{k}}, & x=2^{k} \\ 0, & \text { otherwise. }\end{cases}
$$
Prove that $X_{n} \stackrel{P}{\rightarrow} 0$, but $X_{n} \stackrel{\text { a.s. }}{\nrightarrow} 0$ (Rohatgi, 1976, pp. 251-2, Example 8).
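A small Python sketch (added here for illustration) of this classical "sliding intervals" construction: $P\left(\left\{X_{n} \neq 0\right\}\right)=2^{-k} \rightarrow 0$, yet every $\omega$ lands in infinitely many of the intervals, so the sample paths do not converge.

```python
import numpy as np

def X(n, omega):
    # unique decomposition n = 2**k + m, with m in {0, ..., 2**k - 1}
    k = int(np.log2(n))
    m = n - 2**k
    # X_n equals 2**k on [m / 2**k, (m + 1) / 2**k) and 0 elsewhere on [0, 1]
    return 2.0**k if m / 2**k <= omega < (m + 1) / 2**k else 0.0

omega = 0.3
print([n for n in range(1, 200) if X(n, omega) > 0])  # omega is hit again and again
```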
Exercise 5.68 - Convergence in distribution does not imply convergence in probability

Let $\left\{X_{2}, X_{3}, \ldots\right\}$ be a sequence of r.v. such that
$$
F_{X_{n}}(x)= \begin{cases}0, & x<0 \\ \frac{1}{2}-\frac{1}{n}, & 0 \leq x<1 \\ 1, & x \geq 1,\end{cases}
$$
i.e. $X_{n} \sim$ Bernoulli $\left(\frac{1}{2}+\frac{1}{n}\right), n=2,3, \ldots$
Prove that $X_{n} \stackrel{d}{\rightarrow} X$, where $X \sim \operatorname{Bernoulli}\left(\frac{1}{2}\right)$ and independent of any $X_{n}$, but $X_{n} \stackrel{P}{\nrightarrow} X$ (Karr, 1993, p. 142, Counterexample d)).
Exercise 5.69 - Convergence in distribution does not imply convergence in probability (bis)
Let $X, X_{1}, X_{2}, \ldots$ be identically distributed r.v. and let the joint p.f. of $\left(X, X_{n}\right)$ be $P\left(\left\{X=0, X_{n}=1\right\}\right)=P\left(\left\{X=1, X_{n}=0\right\}\right)=\frac{1}{2}$.
Prove that $X_{n} \stackrel{d}{\rightarrow} X$, but $X_{n} \stackrel{P}{\nrightarrow} X$ (Rohatgi, 1976, p. 247, Remark 2).
#### Implications of restricted validity
Proposition 5.70 - Convergence in distribution to a constant implies convergence in probability (Karr, 1993, p. 140; Rohatgi, 1976, p. 246)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. and $c \in \mathbb{R}$. Then
$$
X_{n} \stackrel{d}{\rightarrow} c \Rightarrow X_{n} \stackrel{P}{\rightarrow} c .
$$
Remark 5.71 - Convergence in distribution to a constant is equivalent to convergence in probability (Rohatgi, 1976, p. 246)
If we add to the previous result the fact that $X_{n} \stackrel{P}{\rightarrow} c \Rightarrow X_{n} \stackrel{d}{\rightarrow} c$, we can conclude that
$$
X_{n} \stackrel{P}{\rightarrow} c \Leftrightarrow X_{n} \stackrel{d}{\rightarrow} c .
$$
Exercise 5.72 - Convergence in distribution to a constant implies convergence in probability
Show Proposition 5.70 (Karr, 1993, p. 140).
Exercise 5.73 - Convergence in distribution to a constant implies convergence in probability (bis)
Let $\left(X_{1}, \ldots, X_{n}\right)$ be a random vector where $X_{i}$ are i.i.d. r.v. with common p.d.f.
$$
f_{X}(x)=\theta x^{-2} \times I_{[\theta,+\infty)}(x),
$$
where $\theta \in \mathbb{R}^{+}$.
(a) After having proved that
$$
F_{X_{(1: n)}}(x)=P\left(\min _{i=1, \ldots, n} X_{i} \leq x\right)=\left[1-(\theta / x)^{n}\right] \times I_{[\theta,+\infty)}(x),
$$
derive the following result: $X_{(1: n)} \stackrel{d}{\rightarrow} \theta$.
(b) Is $X_{(1: n)}$ a consistent estimator of $\theta$ ?
Definition 5.74 - Uniform integrability (Karr, 1993, p. 142)
A sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is uniformly integrable if $X_{n} \in L^{1}$ for each $n \in \mathbb{N}$ and if
$$
\lim _{a \rightarrow+\infty} \sup _{n} E\left(\left|X_{n}\right| ;\left\{\left|X_{n}\right|>a\right\}\right)=0 .
$$
Recall that the expected value of a r.v. $X$ over an event $A$ is given by $E(X ; A)=E\left(X \times \mathbf{1}_{A}\right)$.
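For instance (a quick check added to these notes), the sequence of Exercise 5.63 is not uniformly integrable: for every $a>0$ and every $n>a$,

$$
E\left(\left|X_{n}\right| ;\left\{\left|X_{n}\right|>a\right\}\right)=E\left(X_{n} ;\left\{X_{n}=n\right\}\right)=n \times \frac{1}{n}=1,
$$

so $\sup _{n} E\left(\left|X_{n}\right| ;\left\{\left|X_{n}\right|>a\right\}\right)=1$ for every $a$, and the limit as $a \rightarrow+\infty$ is 1, not 0; this is consistent with the failure of $L^{1}$ convergence in that exercise.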
Proposition 5.75 - Alternative criterion for uniform integrability (Karr, 1993, p. 143)

A sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ is uniformly integrable iff
- $\sup _{n} E\left(\left|X_{n}\right|\right)<+\infty$ and
- $\left\{X_{1}, X_{2}, \ldots\right\}$ is uniformly absolutely continuous: for each $\epsilon>0$ there is $\delta>0$ such that $\sup _{n} E\left(\left|X_{n}\right| ; A\right)<\epsilon$ whenever $P(A)<\delta$.
Proposition 5.76 - Combining convergence in probability and uniform integrability is equivalent to convergence in $L^{1}$ (Karr, 1993, p. 144)

Let $X, X_{1}, X_{2}, \ldots \in L^{1}$. Then
$$
X_{n} \stackrel{P}{\rightarrow} X \text { and }\left\{X_{1}, X_{2}, \ldots\right\} \text { is uniformly integrable } \Leftrightarrow X_{n} \stackrel{L^{1}}{\rightarrow} X \text {. }
$$
Exercise 5.77 - Combining convergence in probability and uniform integrability is equivalent to convergence in $L^{1}$
Prove Proposition 5.76 (Karr, 1993, p. 144).
Exercise 5.78 - Combining convergence in probability of the sequence of r.v. and convergence of sequence of the means implies convergence in $L^{1}$ (Karr, 1993, p. 160, Exercise 5.16)
Let $X, X_{1}, X_{2}, \ldots$ be positive r.v.
Prove that if $X_{n} \stackrel{P}{\rightarrow} X$ and $E\left(X_{n}\right) \rightarrow E(X)$, then $X_{n} \stackrel{L^{1}}{\rightarrow} X$.
Exercise 5.79 - Increasing character and convergence in probability combined imply almost sure convergence (Karr, 1993, p. 160, Exercise 5.15)
Show that if $X_{1} \leq X_{2} \leq \ldots$ and $X_{n} \stackrel{P}{\rightarrow} X$, then $X_{n} \stackrel{\text { a.s. }}{\rightarrow} X$.
Exercise 5.80 - Strictly decreasing and positive character and convergence in probability combined imply almost sure convergence (Rohatgi, 1976, p. 252, Theorem 13)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a strictly decreasing sequence of positive r.v.
Prove that if $X_{n} \stackrel{P}{\rightarrow} 0$ then $X_{n} \stackrel{\text { a.s. }}{\rightarrow} 0$.
### Convergence under transformations
Since the original sequence(s) of r.v. is (are) bound to be transformed, it is natural to inquire whether the modes of convergence are preserved under continuous mappings and algebraic operations of the r.v.
#### Continuous mappings
Only convergence almost surely, in probability and in distribution are preserved under continuous mappings (Karr, 1993, p. 145).
Theorem 5.81 - Preservation of $\{a . s ., P, d\}$-convergence under continuous mappings (Karr, 1993, p. 148)
Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. and $X$ a r.v.;
- $g: \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function.
Then
$$
X_{n} \stackrel{*}{\rightarrow} X \Rightarrow g\left(X_{n}\right) \stackrel{*}{\rightarrow} g(X), *=a . s ., P, d
$$
Exercise 5.82 - Preservation of $\{a . s ., P, d\}$-convergence under continuous mappings
Show Theorem 5.81 (Karr, 1993, p. 148).
#### Algebraic operations
With the exception of the convergence in distribution, addition is preserved by the modes of convergence of r.v. as functions on $\Omega$, as stated in the next theorem.
Theorem 5.83 - Preservation of $\left\{\right.$ a.s., $\left.P, q . m, L^{1}\right\}$-convergence under addition (Karr, 1993, p. 145)
Let $X_{n} \stackrel{*}{\rightarrow} X$ and $Y_{n} \stackrel{*}{\rightarrow} Y$, where $*=$ a.s., $P, q . m, L^{1}$. Then
$$
X_{n}+Y_{n} \stackrel{*}{\rightarrow} X+Y, *=a . s ., P, q . m, L^{1} .
$$
Remark 5.84 - Preservation of $\left\{\right.$ a.s., $\left.P, q.m, L^{1}\right\}$-convergence under addition

Under the conditions of Theorem 5.83,
- $X_{n} \pm Y_{n} \stackrel{*}{\rightarrow} X \pm Y, *=$ a.s., $P, q . m, L^{1}$.
Exercise 5.85 - Preservation of $\left\{\right.$ a.s., $\left.P, q . m, L^{1}\right\}$-convergence under addition Prove Theorem 5.83 (Karr, 1993, pp. 145-6).
Convergence in distribution is only preserved under addition if one of the limits is constant.
Theorem 5.86 - Slutsky's theorem or preservation of $d$-convergence under (restricted) addition (Karr, 1993, p. 146)
Let:
- $X_{n} \stackrel{d}{\rightarrow} X$
- $Y_{n} \stackrel{d}{\rightarrow} c, c \in \mathbb{R}$.
Then
$$
X_{n}+Y_{n} \stackrel{d}{\rightarrow} X+c
$$
Remarks 5.87 - Slutsky's theorem or preservation of $d$-convergence under (restricted) addition and subtraction
(http://en.wikipedia.org/wiki/Slutsky's_theorem; Rohatgi, 1976, p. 253)
- The requirement that $\left\{Y_{n}\right\}$ converges in distribution to a constant is important: if it were to converge to a non-degenerate random variable, Theorem 5.86 would no longer be valid.
- Theorem 5.86 remains valid if we replace all convergences in distribution with convergences in probability, because convergence in probability implies convergence in distribution.
- Moreover, Theorem 15 (Rohatgi, 1976, p. 253) reads as follows:
$$
X_{n} \stackrel{d}{\rightarrow} X, Y_{n} \stackrel{P}{\rightarrow} c, c \in \mathbb{R} \Rightarrow X_{n} \pm Y_{n} \stackrel{d}{\rightarrow} X \pm c
$$
In this statement, the condition $Y_{n} \stackrel{d}{\rightarrow} c, c \in \mathbb{R}$, of Theorem 5.86 was replaced with $Y_{n} \stackrel{P}{\rightarrow} c, c \in \mathbb{R}$. This is by no means a contradiction because these two conditions are equivalent according to Proposition 5.70.

Exercise 5.88 - Slutsky's theorem or preservation of $d$-convergence under (restricted) addition
Prove Theorem 5.86 (Karr, 1993, p. 146; Rohatgi, 1976, pp. 253-4).
As for the product, almost sure convergence and convergence in probability are preserved.
Theorem 5.89 - Preservation of $\{$ a.s., $P\}$-convergence under product (Karr, 1993, p. 147)
Let $X_{n} \stackrel{*}{\rightarrow} X$ and $Y_{n} \stackrel{*}{\rightarrow} Y$, where $*=$ a.s., $P$. Then
$$
X_{n} \times Y_{n} \stackrel{*}{\rightarrow} X \times Y, *=\text { a.s., } P .
$$
Exercise 5.90 - Preservation of $\{$ a.s., $P\}$-convergence under product Show Theorem 5.89 (Karr, 1993, p. 147).
Theorem 5.91 - (Non)preservation of q.m.-convergence under product (Karr, 1993, p. 147)
Let $X_{n} \stackrel{q . m .}{\rightarrow} X$ and $Y_{n} \stackrel{q . m .}{\rightarrow} Y$. Then
$$
X_{n} \times Y_{n} \stackrel{L^{1}}{\rightarrow} X \times Y .
$$
Remark 5.92 - (Non)preservation of q.m.-convergence under product (Karr, 1993, pp. 146-7)
Quadratic mean convergence of products does not hold in general, since $X \times Y$ need not belong to $L^{2}$ when $X$ and $Y$ do:
$$
X_{n} \stackrel{q.m.}{\rightarrow} X, Y_{n} \stackrel{q.m.}{\rightarrow} Y \quad \nRightarrow \quad X_{n} \times Y_{n} \stackrel{q.m.}{\rightarrow} X \times Y .
$$
However, the product of r.v. in $L^{2}$ belongs to $L^{1}$, and $L^{2}$ convergence of factors implies $L^{1}$ convergence of products.
Exercise 5.93 - (Non)preservation of q.m.-convergence under product Prove Theorem 5.91 (Karr, 1993, p. 147; Rohatgi, 1976, p. 254).

Convergence in distribution is preserved under product, provided that one limit factor is constant (Karr, 1993, p. 146).
Theorem 5.94 - Slutsky's theorem (bis) or preservation of $d$-convergence under (restricted) product (Karr, 1993, p. 147)

Let:
- $X_{n} \stackrel{d}{\rightarrow} X$;
- $Y_{n} \stackrel{d}{\rightarrow} c, c \in \mathbb{R}$.
Then
$$
X_{n} \times Y_{n} \stackrel{d}{\rightarrow} X \times c .
$$
Remark 5.95 - Slutsky's theorem or preservation of $d$-convergence under (restricted) product (Rohatgi, 1976, p. 253)
Theorem 15 (Rohatgi, 1976, p. 253) also states that
$$
\begin{aligned}
X_{n} \stackrel{d}{\rightarrow} X, Y_{n} \stackrel{P}{\rightarrow} c, c \in \mathbb{R} & \Rightarrow X_{n} \times Y_{n} \stackrel{d}{\rightarrow} X \times c \\
X_{n} \stackrel{d}{\rightarrow} X, Y_{n} \stackrel{P}{\rightarrow} c, c \in \mathbb{R} \backslash\{0\} & \Rightarrow \frac{X_{n}}{Y_{n}} \stackrel{d}{\rightarrow} \frac{X}{c} .
\end{aligned}
$$
(Discuss the validity of both results.)

The next table summarizes the preservation results of this section.

| Mode of convergence | Preservation under... | | |
| :---: | :---: | :---: | :---: |
| | Continuous mapping | Addition \& Subtraction | Product |
| $\stackrel{a.s.}{\rightarrow}$ (almost sure) | YES | YES | YES |
| $\stackrel{P}{\rightarrow}$ (in probability) | YES | YES | YES |
| $\stackrel{q.m.}{\rightarrow}$ (in quadratic mean) | No | YES | $\stackrel{L^{1}}{\rightarrow}$ |
| $\stackrel{L^{1}}{\rightarrow}$ (in $L^{1}$) | No | YES | No |
| $\stackrel{d}{\rightarrow}$ (in distribution) | YES | $\mathrm{RV}^{*}$ | $\mathrm{RV}^{*}$ |

* Restricted validity (RV): one of the summands/factors has to converge in distribution to a constant
Exercise 5.96 - Slutsky's theorem or preservation of $d$-convergence under (restricted) product
Prove Theorem 5.94 (Karr, 1993, pp. 147-8).

Example/Exercise 5.97 - Slutsky's theorem or preservation of $d$-convergence under (restricted) product
Consider the sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$, where $X_{n} \stackrel{\text { i.i.d. }}{\sim} X$, and let $\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i}$ and $S_{n}^{2}=\frac{1}{n-1} \sum_{i=1}^{n}\left(X_{i}-\bar{X}_{n}\right)^{2}$ be the sample mean and the sample variance of the first $n$ r.v.
(a) Show that
$$
\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1)
$$
for any $X \in L^{4}$.
- R.v.
$X_{i} \stackrel{i . i . d .}{\sim} X, i \in \mathbb{N}$
$X: E(X)=\mu, V(X)=\sigma^{2}=\mu_{2}, E\left[(X-\mu)^{4}\right]=\mu_{4}$, which are finite moments since $X \in L^{4}$.
- Auxiliary results
$$
\begin{aligned}
& E\left(\bar{X}_{n}\right)=\mu \\
& V\left(\bar{X}_{n}\right)=\frac{\sigma^{2}}{n}=\frac{\mu_{2}}{n} \\
& E\left(S_{n}^{2}\right)=\sigma^{2}=\mu_{2} \\
& V\left(S_{n}^{2}\right)=\left(\frac{n}{n-1}\right)^{2}\left[\frac{\mu_{4}-\mu_{2}^{2}}{n}-\frac{2\left(\mu_{4}-2 \mu_{2}^{2}\right)}{n^{2}}+\frac{2\left(\mu_{4}-3 \mu_{2}^{2}\right)}{n^{3}}\right] \text { (Murteira, 1980, p. 46). }
\end{aligned}
$$
- Asymptotic sample distribution of $\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}}$
To show that $\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1)$ it suffices to note that
$$
\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}}=\frac{\frac{\bar{X}_{n}-\mu}{\sigma / \sqrt{n}}}{\sqrt{\frac{S_{n}^{2}}{\sigma^{2}}}}
$$
prove that $\frac{\bar{X}_{n}-\mu}{\sigma / \sqrt{n}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1)$ and $\sqrt{\frac{S_{n}^{2}}{\sigma^{2}}} \stackrel{P}{\rightarrow} 1$, and then apply Slutsky's theorem as stated in (5.48).
## - Convergence in distribution of the numerator
It follows immediately from the Central Limit Theorem. ${ }^{5}$
${ }^{5}$ This well known theorem is thoroughly discussed by Karr (1993, pp. 190-196).
## - Convergence in probability of the denominator
By using the definition of convergence in probability and the Chebyshev(-Bienaymé) inequality, we get, for any $\epsilon>0$:
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left(\left|S_{n}^{2}-\sigma^{2}\right|>\epsilon\right) & =\lim _{n \rightarrow+\infty} P\left(\left|S_{n}^{2}-E\left(S_{n}^{2}\right)\right| \geq \frac{\epsilon}{\sqrt{V\left(S_{n}^{2}\right)}} \sqrt{V\left(S_{n}^{2}\right)}\right) \\
& \leq \lim _{n \rightarrow+\infty} \frac{1}{\left(\frac{\epsilon}{\sqrt{V\left(S_{n}^{2}\right)}}\right)^{2}} \\
& =\frac{1}{\epsilon^{2}} \lim _{n \rightarrow+\infty} V\left(S_{n}^{2}\right) \\
& =0,
\end{aligned}
$$
i.e. $S_{n}^{2} \stackrel{P}{\rightarrow} \sigma^{2}$.
Finally, note that convergence in probability is preserved under continuous mappings such as $g(x)=\sqrt{x / \sigma^{2}}$, hence
$$
S_{n}^{2} \stackrel{P}{\rightarrow} \sigma^{2} \Rightarrow \sqrt{\frac{S_{n}^{2}}{\sigma^{2}}} \stackrel{P}{\rightarrow} \sqrt{\frac{\sigma^{2}}{\sigma^{2}}}=1 .
$$
## - Conclusion
$$
\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1) .
$$
(b) Discuss the utility of this result.
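A Monte Carlo sketch of part (a) in Python (an added illustration, assuming numpy; the Exponential(1) parent, with $\mu=1$, is an arbitrary choice of a skewed distribution in $L^{4}$):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, reps = 200, 50_000
x = rng.exponential(1.0, size=(reps, n))   # skewed parent with mu = 1
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)                  # sample standard deviation S_n
t = (xbar - 1.0) / (s / np.sqrt(n))        # studentized mean
print(t.mean(), t.std())                   # should be close to 0 and 1
print(np.mean(t <= 1.96))                  # should be close to Phi(1.96) ~ 0.975
```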
Exercise 5.98 - Slutsky's theorem or preservation of $d$-convergence under (restricted) division
Let $X_{i} \stackrel{\text { i.i.d. }}{\sim} \operatorname{Normal}(0,1), i \in \mathbb{N}$.
Determine the limiting distribution of $W_{n}=\frac{U_{n}}{V_{n}}$, where
$$
\begin{aligned}
U_{n} & =\frac{\sum_{i=1}^{n} X_{i}}{\sqrt{n}} \\
V_{n} & =\frac{\sum_{i=1}^{n} X_{i}^{2}}{n},
\end{aligned}
$$
by proving that
$$
\begin{aligned}
& U_{n} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1) \\
& V_{n} \stackrel{d}{\rightarrow} 1
\end{aligned}
$$
(Rohatgi, 1976, p. 254, Example 12).

Exercise 5.99 - Slutsky's theorem or preservation of $d$-convergence under (restricted) division (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. with common distribution $\operatorname{Bernoulli}(p)$ and $\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i}$ the maximum likelihood estimator of $p$.
(a) Prove that
$$
\frac{\bar{X}_{n}-p}{\sqrt{\frac{\bar{X}_{n}\left(1-\bar{X}_{n}\right)}{n}}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1)
$$
(b) Discuss the relevance of this convergence in distribution.
### Convergence of random vectors
Before defining modes of convergence of a sequence of random $d$-vectors we need to recall the definition of the norm of a vector.
Definition 5.100 - $L^{2}$ (or Euclidean) and $L^{1}$ norms of $\underline{x}$ (Karr, 1993, p. 149)

Let $\underline{x} \in \mathbb{R}^{d}$ and $\underline{x}(i)$ its $i$th component. Then
$$
\begin{aligned}
\|\underline{x}\|_{L^{2}} & =\sqrt{\sum_{i=1}^{d} \underline{x}(i)^{2}} \\
\|\underline{x}\|_{L^{1}} & =\sum_{i=1}^{d}|\underline{x}(i)|
\end{aligned}
$$
denote the $L^{2}$ norm (or Euclidean norm) and the $L^{1}$ norm of $\underline{x}$, respectively.
Remark 5.101 - $L^{2}$ (or Euclidean) and $L^{1}$ norms of $\underline{x}$
(http://en.wikipedia.org/wiki/Norm_(mathematics)#Definition)

On $\mathbb{R}^{d}$, the intuitive notion of the length of the vector $\underline{x}$ is captured by its $L^{2}$ or Euclidean norm: this gives the ordinary distance from the origin to the point $\underline{x}$, a consequence of the Pythagorean theorem.
The Euclidean norm is by far the most commonly used norm on $\mathbb{R}^{d}$, but there are other norms, such as the $L^{1}$ norm on this vector space.
Definition 5.102 - Four modes of convergence (as functions on $\Omega$) of sequences of random vectors (Karr, 1993, p. 149)

Let $\underline{X}, \underline{X}_{1}, \underline{X}_{2}, \ldots$ be random $d$-vectors. Then the four modes of convergence $\underline{X}_{n} \stackrel{*}{\rightarrow} \underline{X}$, $*=$ a.s., $P, q.m., L^{1}$, are natural extensions of their counterparts in the univariate case:
- $\underline{X}_{n} \stackrel{\text { a.s. }}{\rightarrow} \underline{X}$ if $P\left(\left\{\omega: \lim _{n \rightarrow+\infty}\left\|\underline{X}_{n}(\omega)-\underline{X}(\omega)\right\|_{L^{1}}=0\right\}\right)=1$;
- $\underline{X}_{n} \stackrel{P}{\rightarrow} \underline{X}$ if $\lim _{n \rightarrow+\infty} P\left(\left\{\left\|\underline{X}_{n}-\underline{X}\right\|_{L^{1}}>\epsilon\right\}\right)=0$, for every $\epsilon>0$;
- $\underline{X}_{n} \stackrel{q.m.}{\rightarrow} \underline{X}$ if $\lim _{n \rightarrow+\infty} E\left(\left\|\underline{X}_{n}-\underline{X}\right\|_{L^{2}}^{2}\right)=0$;
- $\underline{X}_{n} \stackrel{L^{1}}{\rightarrow} \underline{X}$ if $\lim _{n \rightarrow+\infty} E\left(\left\|\underline{X}_{n}-\underline{X}\right\|_{L^{1}}\right)=0$.
Proposition 5.103 - Alternative criteria for the four modes of convergence of sequences of random vectors (Karr, 1993, p. 149)
$\underline{X}_{n} \stackrel{*}{\rightarrow} \underline{X}$, $*=$ a.s., $P, q.m., L^{1}$, iff the same kind of stochastic convergence holds for each component, i.e. $\underline{X}_{n}(i) \stackrel{*}{\rightarrow} \underline{X}(i)$, $*=$ a.s., $P, q.m., L^{1}$.

Remark 5.104 - Convergence in distribution of a sequence of random vectors (Karr, 1993, p. 149)
Due to the intractability of multi-dimensional d.f., convergence in distribution - unlike the four previous modes of convergence - has to be defined by taking advantage of the alternative criterion for convergence in distribution stated in Theorem 5.49.
Definition 5.105 - Convergence in distribution of a sequence of random vectors
(Karr, 1993, p. 149)
Let $\underline{X}, \underline{X}_{1}, \underline{X}_{2}, \ldots$ be random $d$-vectors. Then:
- $\underline{X}_{n} \stackrel{d}{\rightarrow} \underline{X}$ if $E\left[f\left(\underline{X}_{n}\right)\right] \rightarrow E[f(\underline{X})]$, for all bounded, continuous functions $f: \mathbb{R}^{d} \rightarrow$ $\mathbb{R}$.
Proposition 5.106 - A sufficient condition for the convergence in distribution of the components of a sequence of random vectors (Karr, 1993, p. 149)
Unlike the four previous modes of convergence, convergence in distribution of the sequence of random vectors implies, but need not be implied by, convergence in distribution of the components:
$$
\underline{X}_{n} \stackrel{d}{\rightarrow} \underline{X} \Rightarrow(\nLeftarrow) \underline{X}_{n}(i) \stackrel{d}{\rightarrow} \underline{X}(i)
$$
for each $i$.
A sequence of random vectors converges in distribution iff every linear combination of their components converges in distribution; this result constitutes the Cramér-Wold device.
Theorem 5.107 (Cramér-Wold device) - An alternative criterion for the convergence in distribution of a sequence of random vectors (Karr, 1993, p. 150)
Let $\underline{X}, \underline{X}_{1}, \underline{X}_{2}, \ldots$ be random $d$-vectors. Then
$$
\underline{X}_{n} \stackrel{d}{\rightarrow} \underline{X} \Leftrightarrow \underline{a}^{\top} \underline{X}_{n}=\sum_{i=1}^{d} \underline{a}(i) \times \underline{X}_{n}(i) \stackrel{d}{\rightarrow} \sum_{i=1}^{d} \underline{a}(i) \times \underline{X}(i)=\underline{a}^{\top} \underline{X},
$$
for all $\underline{a} \in \mathbb{R}^{d}$.
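For example (a standard application, sketched here), to establish a bivariate normal limit it suffices to check a univariate normal limit along every direction:

$$
\underline{X}_{n} \stackrel{d}{\rightarrow} \operatorname{Normal}_{2}(\underline{0}, \boldsymbol{\Sigma}) \quad \Leftrightarrow \quad \underline{a}^{\top} \underline{X}_{n} \stackrel{d}{\rightarrow} \operatorname{Normal}\left(0, \underline{a}^{\top} \boldsymbol{\Sigma}\, \underline{a}\right), \text { for all } \underline{a} \in \mathbb{R}^{2},
$$

since the linear combinations of a $\operatorname{Normal}_{2}(\underline{0}, \boldsymbol{\Sigma})$ vector are exactly the $\operatorname{Normal}\left(0, \underline{a}^{\top} \boldsymbol{\Sigma}\, \underline{a}\right)$ r.v. (with the degenerate case $\underline{a}=\underline{0}$ included).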
Exercise 5.108 - An alternative criterion for the convergence in distribution of a sequence of random vectors
Show Theorem 5.107 (Karr, 1993, p. 150).

As with sequences of r.v., convergence almost surely, in probability and in distribution are preserved under continuous mappings of sequences of random vectors.
Theorem 5.109 - Preservation of $\{a.s., P, d\}$-convergence under continuous mappings of random vectors (Karr, 1993, p. 148)

Let:

- $\left\{\underline{X}_{1}, \underline{X}_{2}, \ldots\right\}$ be a sequence of random $d$-vectors and $\underline{X}$ a random $d$-vector;
- $g: \mathbb{R}^{d} \rightarrow \mathbb{R}^{m}$ be a continuous mapping of $\mathbb{R}^{d}$ into $\mathbb{R}^{m}$.
Then
$$
\underline{X}_{n} \stackrel{*}{\rightarrow} \underline{X} \Rightarrow g\left(\underline{X}_{n}\right) \stackrel{*}{\rightarrow} g(\underline{X}), *=\text { a.s., } P, d .
$$
### Limit theorems for Bernoulli summands
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a Bernoulli process with parameter $p \in(0,1)$. In this section we study the asymptotic behavior of the Bernoulli counting process $\left\{S_{1}, S_{2}, \ldots\right\}$, where $S_{n}=$ $\sum_{i=1}^{n} X_{i} \sim \operatorname{Binomial}(n, p)$.
#### Laws of large numbers for Bernoulli summands
## Motivation 5.110 - Laws of large numbers
(http://en.wikipedia.org/wiki/Law_of_large_numbers; Murteira, 1979, p. 313)
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials (e.g. Bernoulli trials) should be close to the expected value, and will tend to become closer as more trials are performed.
For instance, when a FAIR coin is flipped once, the expected value of the number of heads is equal to one half. Therefore, according to the law of large numbers, the proportion of heads in a large number of coin flips should be roughly one half, as depicted by the usual plot of the running proportion of heads against the number of flips $n$.
This illustration suggests the following statement: $\frac{S_{n}}{n}=\bar{X}_{n}$ converges, in some sense, to $p=\frac{1}{2}$. In fact, if we use Chebyshev(-Bienaymé)'s inequality we can prove that
$$
\lim _{n \rightarrow+\infty} P\left(\left\{\left|\frac{S_{n}}{n}-p\right|>\varepsilon\right\}\right)=0
$$
that is, $\frac{S_{n}}{n} \stackrel{P}{\rightarrow} p=\frac{1}{2}$. (Show this result!) In addition, we can also prove that the proportion of heads after $n$ flips will almost surely converge to one half as $n$ approaches infinity, i.e., $\frac{S_{n}}{n} \stackrel{a . s .}{\rightarrow} p=\frac{1}{2}$. Similar convergences can be devised for the mean of $n$ i.i.d. r.v.
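In place of the missing figure, a minimal simulation sketch in Python (added to these notes, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
flips = rng.binomial(1, 0.5, size=100_000)     # fair coin: 1 = heads
for n in [10, 100, 1000, 10_000, 100_000]:
    print(n, flips[:n].mean())                 # S_n / n settles near p = 1/2
```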
The Indian mathematician Brahmagupta (598-668) and later the Italian mathematician Gerolamo Cardano (1501-1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers (LLN). The LLN was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's Theorem". In 1835, S.D. Poisson further described it under the name "La loi des grands nombres" ("The law of large numbers"). Thereafter, it was known under both names, but the "Law of large numbers" is most frequently used.
Other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli and Kolmogorov. These further studies have given rise to two prominent forms of the LLN:
- the weak law of large numbers (WLLN);
- the strong law of large numbers (SLLN);
These forms do not describe different laws but instead refer to different ways of describing the mode of convergence of the cumulative sample means to the expected value:
- the WLLN refers to a convergence in probability;
- the SLLN is concerned with an almost sure convergence;
Needless to say, the SLLN implies the WLLN.
Theorem 5.111 - Weak law of large numbers for Bernoulli summands (Karr, 1993, p. 151)
Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a Bernoulli process with parameter $p \in(0,1)$;
- $\frac{S_{n}}{n}=\bar{X}_{n}$ be the proportion of successes in the first $n$ Bernoulli trials.
Then
$$
\frac{S_{n}}{n} \stackrel{q . m .}{\rightarrow} p
$$
therefore
$$
\frac{S_{n}}{n} \stackrel{P}{\rightarrow} p
$$
## Exercise 5.112 - Weak law of large numbers for Bernoulli summands
Show Theorem 5.111, by calculating the limit of $E\left[\left(\frac{S_{n}}{n}-p\right)^{2}\right]$ (thus proving the convergence in quadratic mean) and combining Proposition 5.55 (which states that convergence in quadratic mean implies convergence in $L^{1}$) and Proposition 5.57 (which states that convergence in $L^{1}$ implies convergence in probability) (Karr, 1993, p. 151).
Theorem 5.113 - Strong law of large numbers for Bernoulli summands or Borel's SLLN (Karr, 1993, p. 151; Rohatgi, 1976, p. 273, Corollary 3)

Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a Bernoulli process with parameter $p \in(0,1)$;
- $\frac{S_{n}}{n}=\bar{X}_{n}$ be the proportion of successes in the first $n$ Bernoulli trials.
Then
$$
\frac{S_{n}}{n} \stackrel{a . s .}{\longrightarrow} p
$$
Exercise 5.114 - Strong law of large numbers for Bernoulli summands or Borel's SLLN
Prove Theorem 5.113 by: using Theorem 4.121 (Chebyshev's inequality) with $g(x)=x^{4}$ to set an upper bound on $P\left(\left\{\left|S_{n}-n p\right|>n \epsilon\right\}\right)$, which is of order $O\left(n^{-2}\right)$, ${ }^{6}$ thus proving that $\sum_{n=1}^{+\infty} P\left(\left\{\left|\frac{S_{n}}{n}-p\right|>\epsilon\right\}\right)<+\infty$, i.e., that the sequence $\left\{\frac{S_{1}}{1}, \frac{S_{2}}{2}, \ldots\right\}$ converges completely to $p$; finally, applying Proposition 5.44, which relates almost sure convergence and complete convergence (Karr, 1993, pp. 151-152).
Remark 5.115 - Weak and strong laws of large numbers for Bernoulli summands (http://en.wikipedia.org/wiki/Law_of_large_numbers; Karr, 1993, p. 152)
- Theorem 5.113 can be invoked to support the frequency interpretation of probability.
- The WLLN for Bernoulli summands states that for a specified large $n, \frac{S_{n}}{n}$ is likely to be near $p$. Thus, it leaves open the possibility that the event $\left\{\left|\frac{S_{n}}{n}-p\right|>\epsilon\right\}$, for any $\epsilon>0$, happens an infinite number of times, although at infrequent intervals.
The SLLN for Bernoulli summands shows that this almost surely will not occur. In particular, it implies that, with probability 1, for any $\epsilon>0$ the inequality $\left|\frac{S_{n}}{n}-p\right|<\epsilon$ holds for all large enough $n$.
${ }^{6}$ Let $f(x)$ and $g(x)$ be two functions defined on some subset of the real numbers. One writes $f(x)=O(g(x))$ as $x \rightarrow \infty$ iff there exist a positive real number $M$ and a real number $x_{0}$ such that $|f(x)| \leq M|g(x)|$ for all $x>x_{0}$ (http://en.wikipedia.org/wiki/Big_O_notation).

- Finally, the proofs of theorems 5.111 and 5.113 only involve the moments of $X_{i}$. Unsurprisingly, these two theorems can be reproduced for other sequences of i.i.d. r.v., namely those in $L^{2}$ (in the case of the WLLN) and in $L^{4}$ (for the SLLN), as we shall see in sections 5.6 and 5.7.
#### Central limit theorems for Bernoulli summands
Motivation 5.116 - Central limit theorems for Bernoulli summands (Karr, 1993, p. 152)
They essentially state that, in the Bernoulli summands case and for large $n$, $S_{n} \sim \operatorname{Binomial}(n, p)$ is such that $\frac{S_{n}-E\left(S_{n}\right)}{\sqrt{V\left(S_{n}\right)}}=\frac{S_{n}-n p}{\sqrt{n p(1-p)}}$ has approximately a standard normal distribution.
The local (resp. global) central limit theorem - also known as the DeMoivre-Laplace local (resp. global) limit theorem - provides an approximation to the p.f. (resp. d.f.) of $S_{n}$ in terms of the standard normal p.d.f. (resp. d.f.).
Theorem 5.117 - DeMoivre-Laplace local limit theorem (Karr, 1993, p. 153)

Let:
- $k_{n}=0,1, \ldots, n$
- $x_{n}=\frac{k_{n}-n p}{\sqrt{n p(1-p)}}=o\left(n^{1 / 6}\right) ;^{7}$
- $\phi(x)=\frac{1}{\sqrt{2 \pi}} e^{-x^{2} / 2}$ be the standard normal p.d.f.
Then
$$
\lim _{n \rightarrow+\infty} \frac{P\left(\left\{S_{n}=k_{n}\right\}\right)}{\frac{\phi\left(x_{n}\right)}{\sqrt{n p(1-p)}}}=1 .
$$
Exercise 5.118 - DeMoivre-Laplace local limit theorem Show Theorem 5.117 (Karr, 1993, p. 153).
${ }^{7}$ The relation $f(x)=o(g(x))$ is read as "$f(x)$ is little-o of $g(x)$". Intuitively, it means that $g(x)$ grows much faster than $f(x)$. Formally, it states $\lim _{x \rightarrow \infty} \frac{f(x)}{g(x)}=0$ (http://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation).

Remark 5.119 - DeMoivre-Laplace local limit theorem (Karr, 1993, p. 153)

The proof of Theorem 5.117 shows that the convergence in (5.67) is uniform in values of $k$ satisfying $|k-n p|=o\left(n^{2 / 3}\right)$. As a consequence, for large values of $n$ and values of $k_{n}$ not too different from $n p$,
$$
P\left(\left\{S_{n}=k_{n}\right\}\right) \simeq \frac{1}{\sqrt{n p(1-p)}} \times \phi\left[\frac{k_{n}-n p}{\sqrt{n p(1-p)}}\right],
$$
that is, the p.f. of $S_{n} \sim \operatorname{Binomial}(n, p)$ evaluated at $k_{n}$ can be properly approximated by the p.d.f. of a normal distribution, with mean $E\left(S_{n}\right)=n p$ and variance $V\left(S_{n}\right)=n p(1-p)$, evaluated at $\frac{k_{n}-n p}{\sqrt{n p(1-p)}}$.
Theorem 5.120 - DeMoivre-Laplace global limit theorem (Karr, 1993, p. 154; Murteira, 1979, p. 347)
Let $S_{n} \sim \operatorname{Binomial}(n, p), n \in \mathbb{N}$. Then
$$
\frac{S_{n}-n p}{\sqrt{n p(1-p)}} \stackrel{d}{\rightarrow} \operatorname{Normal}(0,1)
$$
Remark 5.121 - DeMoivre-Laplace global limit theorem (Karr, 1993, p. 155; Murteira, 1979, p. 347)
- Theorem 5.120 justifies the following approximation:
$$
P\left(S_{n} \leq x\right) \simeq \Phi\left[\frac{x-n p}{\sqrt{n p(1-p)}}\right]
$$
- According to Murteira (1979, p. 348), the well-known continuity correction was proposed by Feller in 1968 to improve the normal approximation to the binomial distribution, ${ }^{8}$ and can be written as:
$$
P\left(a \leq S_{n} \leq b\right) \simeq \Phi\left[\frac{b+\frac{1}{2}-n p}{\sqrt{n p(1-p)}}\right]-\Phi\left[\frac{a-\frac{1}{2}-n p}{\sqrt{n p(1-p)}}\right]
$$
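A quick numerical check of the approximation above in Python (an added sketch using only the standard library; the values $n=100$, $p=0.3$, $a=25$, $b=35$ are arbitrary):

```python
from math import comb, erf, sqrt

def Phi(x):
    # standard normal d.f. via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

n, p, a, b = 100, 0.3, 25, 35
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, b + 1))
sd = sqrt(n * p * (1 - p))
approx = Phi((b + 0.5 - n * p) / sd) - Phi((a - 0.5 - n * p) / sd)
print(exact, approx)   # normal approximation with continuity correction
```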
${ }^{8}$ However, http://en.wikipedia.org/wiki/Continuity_correction suggests that the continuity correction dates back to Feller, W. (1945). On the normal approximation to the binomial distribution. The Annals of Mathematical Statistics 16, pp. 319-329.

- The proof of the central limit theorem for summands (other than Bernoulli ones) involves a Taylor series expansion ${ }^{9}$ and requires dealing with the notion of the characteristic function of a r.v. ${ }^{10}$ Such a proof can be found in Murteira (1979, pp. 354-355); Karr (1993, pp. 190-196) devotes a whole section to this theorem.
## Exercise 5.122 - DeMoivre-Laplace global limit theorem
Show Theorem 5.120 (Karr, 1993, pp. 154-155).
#### The Poisson limit theorem
## Motivation 5.123 - Poisson limit theorem
(Karr, 1993, p. 155; http://en.wikipedia.org/wiki/Poisson_limit_theorem)
- In the two central limit theorems for Bernoulli summands, although $n \rightarrow+\infty$, the parameter $p$ remained fixed. These theorems provide useful approximations to binomial probabilities as long as the values of $p$ are close to neither zero nor one, and inaccurate ones otherwise.
- The Poisson limit theorem gives a Poisson approximation to the binomial distribution, under certain conditions, namely, it considers the effect of simultaneously allowing $n \rightarrow+\infty$ and $p=p_{n} \rightarrow 0$ with the proviso that $n \times p_{n} \rightarrow \lambda$, where $\lambda \in \mathbb{R}^{+}$. This theorem was obviously named after Siméon-Denis Poisson (1781-1840).
Theorem 5.124 - Poisson limit theorem (Karr, 1993, p. 155) Let:
- $\left\{Y_{1}, Y_{2}, \ldots\right\}$ be a sequence of r.v. such that $Y_{n} \sim \operatorname{Binomial}\left(n, p_{n}\right)$, for each $n$;
- $n \times p_{n} \rightarrow \lambda$, where $\lambda \in \mathbb{R}^{+}$.
Then
$$
Y_{n} \stackrel{d}{\rightarrow} \operatorname{Poisson}(\lambda)
$$
${ }^{9}$ The Taylor series of a real or complex function $f(x)$ that is infinitely differentiable in a neighborhood of a real (or complex number) $a$ is the power series written in the more compact sigma notation as $\sum_{n=0}^{+\infty} \frac{f^{(n)}(a)}{n !}(x-a)^{n}$, where $f^{(n)}(a)$ denotes the $n$th derivative of $f$ evaluated at the point $a$. In the case that $\mathrm{a}=0$, the series is also called a Maclaurin series (http://en.wikipedia.org/wiki/Taylor_series).
${ }^{10}$ For a scalar random variable $X$ the characteristic function is defined as the expected value of $e^{i t X}$, $E\left(e^{i t X}\right)$, where $i$ is the imaginary unit, and $t \in \mathbb{R}$ is the argument of the characteristic function (http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)).
## Example/Exercise 5.125 - Poisson limit theorem
(a) Consider $0<\lambda<n$ and let us verify that
$$
\lim _{\substack{n \rightarrow+\infty \\ p_{n} \rightarrow 0 \\ n p_{n}=\lambda \text { fixed}}}\binom{n}{x} p_{n}^{x}\left(1-p_{n}\right)^{n-x}=e^{-\lambda} \frac{\lambda^{x}}{x !} .
$$
- R.v.
$X_{n} \sim \operatorname{Binomial}\left(n, p_{n}\right)$
- Parameters
$n$
$$
p_{n}=\frac{\lambda}{n}(0<\lambda<n)
$$
- P.f.
$$
P\left(X_{n}=x\right)=\binom{n}{x}\left(\frac{\lambda}{n}\right)^{x}\left(1-\frac{\lambda}{n}\right)^{n-x}, \quad x=0,1, \ldots, n
$$
- Limit p.f.
For any $x \in\{0,1, \ldots, n\}$, we get
$$
\begin{aligned}
\lim _{n \rightarrow+\infty} P\left(X_{n}=x\right)= & \frac{\lambda^{x}}{x !} \times \lim _{n \rightarrow+\infty} \frac{n(n-1) \ldots(n-x+1)}{n^{x}} \\
& \times \lim _{n \rightarrow+\infty}\left(1+\frac{-\lambda}{n}\right)^{n} \times \lim _{n \rightarrow+\infty}\left(1-\frac{\lambda}{n}\right)^{-x} \\
= & \frac{\lambda^{x}}{x !} \times 1 \times e^{-\lambda} \times 1 \\
= & e^{-\lambda} \frac{\lambda^{x}}{x !} .
\end{aligned}
$$
- Conclusion
If the limit p.f. of $X_{n}$ coincides with the p.f. of $X \sim \operatorname{Poisson}(\lambda)$ then the same holds for the limit d.f. of $X_{n}$ and the d.f. of $X$. Hence
$$
X_{n} \stackrel{d}{\rightarrow} \operatorname{Poisson}(\lambda) .
$$
(b) Now, prove Theorem 5.124 (Karr, 1993, p. 155).
(c) Suppose that in an interval of length 1000, 500 points are placed randomly.
Use the Poisson limit theorem to prove that the p.f. of the number of points that will be placed in a sub-interval of length 10 can be approximated by
$$
e^{-5} \frac{5^{k}}{k !}
$$
(http://en.wikipedia.org/wiki/Poisson_limit_theorem).
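A numerical sketch of part (c) (not from the cited sources; SciPy assumed): each of the 500 points falls in the sub-interval independently with probability $p = 10/1000$, so the count is $\operatorname{Binomial}(500, 0.01)$, which the theorem approximates by $\operatorname{Poisson}(5)$.

```python
import numpy as np
from scipy.stats import binom, poisson

n, p = 500, 10 / 1000              # each point hits the sub-interval w.p. 0.01
lam = n * p                        # lambda = 5
ks = np.arange(15)
diff = np.abs(binom.pmf(ks, n, p) - poisson.pmf(ks, lam))
print(diff.max())                  # of the order of 1e-3: the approximation is close
```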
### Weak law of large numbers
Motivation 5.126 - Weak law of large numbers (Rohatgi, 1976, p. 257) Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. in $L^{2}$;
- $S_{n}=\sum_{i=1}^{n} X_{i}$ be the sum of the first $n$ terms of such a sequence.
In this section we are going to answer the next question in the affirmative:
- Are there constants $a_{n}$ and $b_{n}\left(b_{n}>0\right)$ such that $\frac{S_{n}-a_{n}}{b_{n}} \stackrel{P}{\rightarrow} 0$ ?
In other words, what follows are extensions of the WLLN for Bernoulli summands (Theorem 5.111), to other sequences of:
- i.i.d. r.v. in $L^{2}$;
- pairwise uncorrelated and identically distributed r.v. in $L^{2}$;
- pairwise uncorrelated r.v. in $L^{2}$;
- r.v. in $L^{2}$ with a specific variance behavior;
- i.i.d. r.v. in $L^{1}$.
Definition 5.127 - Obeying the weak law of large numbers (Rohatgi, 1976, p. 257)
Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v.;
- $S_{n}=\sum_{i=1}^{n} X_{i}, n=1,2, \ldots$;
Then $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to obey the weak law of large numbers (WLLN) with respect to the sequence of constants $\left\{b_{1}, b_{2}, \ldots\right\}\left(b_{n}>0, b_{n} \uparrow+\infty\right)$ if there is a sequence of real constants $\left\{a_{1}, a_{2}, \ldots\right\}$ such that
$$
\frac{S_{n}-a_{n}}{b_{n}} \stackrel{P}{\rightarrow} 0
$$
$a_{n}$ and $b_{n}$ are called centering and norming constants, respectively.

Remark 5.128 - Obeying the weak law of large numbers (Rohatgi, 1976, p. 257)

- The definition in Murteira (1979, p. 319) is a particular case of Definition 5.127 with $a_{n}=\sum_{i=1}^{n} E\left(X_{i}\right)$ and $b_{n}=n$.
- Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v., $\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i}$, and $\left\{Z_{1}, Z_{2}, \ldots\right\}$ be another sequence of r.v. such that $Z_{n}=\frac{S_{n}-a_{n}}{b_{n}}=\bar{X}_{n}-E\left(\bar{X}_{n}\right), n=1,2, \ldots$
Then $\left\{X_{1}, X_{2}, \ldots\right\}$ is said to obey the WLLN if $Z_{n} \stackrel{P}{\rightarrow} 0$.
Hereafter the convergence results are stated either in terms of $S_{n}$ or $\bar{X}_{n}$.
Theorem 5.129 - Weak law of large numbers, i.i.d. r.v. in $L^{2}$ (Karr, 1993, p. 152)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. in $L^{2}$ with common expected value $\mu$ and variance $\sigma^{2}$. Then
$$
\bar{X}_{n} \stackrel{q.m.}{\rightarrow} \mu
$$
therefore $\left\{X_{1}, X_{2}, \ldots\right\}$ obeys the WLLN:
$$
\begin{gathered}
\bar{X}_{n} \stackrel{P}{\rightarrow} \mu, \\
\text { i.e., } \frac{S_{n}-n \mu}{n} \stackrel{P}{\rightarrow} 0 .{ }^{11}
\end{gathered}
$$
Exercise 5.130 - Weak law of large numbers, i.i.d. r.v. in $L^{2}$
Prove Theorem 5.129, by mimicking the proof of the WLLN for Bernoulli summands.
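A simulation sketch of Theorem 5.129 (illustrative only; i.i.d. Exponential summands with mean $\mu = 2$ are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                                   # common mean of the i.i.d. summands
for n in (10, 100, 10_000):
    x = rng.exponential(scale=mu, size=n)  # i.i.d. Exponential r.v. with mean mu
    print(n, x.mean())                     # sample means concentrate around mu
```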
Exercise 5.131 - Weak law of large numbers, i.i.d. r.v. in $L^{2}$ (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. with common p.d.f.
$$
f(x)= \begin{cases}e^{-x+q}, & x>q \\ 0, & \text { otherwise }\end{cases}
$$
(a) Prove that $\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i} \stackrel{P}{\rightarrow} 1+q$.
(b) Show that $X_{(1: n)}=\min _{i=1, \ldots, n} X_{i} \stackrel{P}{\rightarrow} q .{ }^{12}$
A closer look at the proof of Theorem 5.129 leads to the conclusion that the r.v. need only be pairwise uncorrelated and identically distributed in $L^{2}$, since in this case we still have $V\left(\bar{X}_{n}\right)=\frac{\sigma^{2}}{n}$ (Karr, 1993, p. 152).
${ }^{11}$ As suggested by Rohatgi (1976, p. 258, Corollary 3).
${ }^{12}$ Use the d.f. of $X_{(1: n)}$.

Theorem 5.132 - Weak law of large numbers, pairwise uncorrelated and identically distributed r.v. in $L^{2}$ (Karr, 1993, p. 152; Rohatgi, 1976, p. 258)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of pairwise uncorrelated and identically distributed r.v. in $L^{2}$ with common expected value $\mu$ and variance $\sigma^{2}$. Thus, $\bar{X}_{n} \stackrel{q.m.}{\rightarrow} \mu,{ }^{13}$ and hence $\left\{X_{1}, X_{2}, \ldots\right\}$ obeys the WLLN:
$$
\bar{X}_{n} \stackrel{P}{\rightarrow} \mu .
$$
We can also drop the assumption that we are dealing with identically distributed r.v. as suggested by the following theorem.
Theorem 5.133 - Weak law of large numbers, pairwise uncorrelated r.v. in $L^{2}$ (Rohatgi, 1976, p. 258, Theorem 1)
Let
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of pairwise uncorrelated r.v. in $L^{2}$ with $E\left(X_{i}\right)=\mu_{i}$ and $V\left(X_{i}\right)=\sigma_{i}^{2}$
- $a_{n}=\sum_{i=1}^{n} \mu_{i}$;
- $b_{n}=\sum_{i=1}^{n} \sigma_{i}^{2}$.
If $b_{n} \rightarrow+\infty$ then
$$
\frac{S_{n}-a_{n}}{b_{n}}=\frac{\sum_{i=1}^{n} X_{i}-\sum_{i=1}^{n} \mu_{i}}{\sum_{i=1}^{n} \sigma_{i}^{2}} \stackrel{P}{\rightarrow} 0,
$$
i.e., $\left\{X_{1}, X_{2}, \ldots\right\}$ obeys the WLLN with respect to $b_{n}$.
Exercise 5.134 - Weak law of large numbers, pairwise uncorrelated r.v. in $L^{2}$

Prove Theorem 5.133 by applying Chebyshev(-Bienaymé)'s inequality (Rohatgi, 1976, p. 258).
Remark 5.135 - Weak law of large numbers, pairwise uncorrelated r.v. in $L^{2}$ A careful inspection of the proof of Theorem 5.133 (Rohatgi, 1976, p. 258) shows that the particular choice $b_{n}=\sum_{i=1}^{n} \sigma_{i}^{2}$ is not essential:

- Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of pairwise uncorrelated r.v. in $L^{2}$ with $E\left(X_{i}\right)=\mu_{i}$ and $V\left(X_{i}\right)=\sigma_{i}^{2}$, and let $a_{n}=\sum_{i=1}^{n} \mu_{i}$. If the norming constants $b_{n}$ satisfy

$$
\frac{V\left(S_{n}\right)}{b_{n}^{2}}=\frac{\sum_{i=1}^{n} \sigma_{i}^{2}}{b_{n}^{2}} \rightarrow 0
$$

then

$$
\frac{S_{n}-a_{n}}{b_{n}} \stackrel{P}{\rightarrow} 0 .
$$

(Beware that $b_{n}=\sqrt{\sum_{i=1}^{n} \sigma_{i}^{2}}=\sqrt{V\left(S_{n}\right)}$ does not suffice: the standardized sum $\frac{S_{n}-E\left(S_{n}\right)}{\sqrt{V\left(S_{n}\right)}}$ has variance one and, under the conditions of the central limit theorem, converges in distribution to a standard normal rather than in probability to zero.)

${ }^{13}$ Rohatgi (1976, p. 258, Corollary 1) does not mention this convergence in quadratic mean.
Theorem 5.133 can be further generalized: it suffices that the variance of the mean of the first $n$ terms, $\bar{X}_{n}$, behaves appropriately, as stated below.
Theorem 5.136 - WLLN and Markov's theorem (Murteira, 1979, p. 320) Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. in $L^{2}$. If
$$
\lim _{n \rightarrow+\infty} V\left(\bar{X}_{n}\right)=\lim _{n \rightarrow+\infty} \frac{1}{n^{2}} V\left(\sum_{i=1}^{n} X_{i}\right)=0
$$
then
$$
\bar{X}_{n}-E\left(\bar{X}_{n}\right) \stackrel{P}{\rightarrow} 0
$$
that is, $\left\{X_{1}, X_{2}, \ldots\right\}$ obeys the WLLN with respect to $b_{n}=n\left(a_{n}=\sum_{i=1}^{n} E\left(X_{i}\right)\right)$.
## Exercise 5.137 - WLLN and Markov's theorem
Show Theorem 5.136, by simply applying Chebyshev(-Bienaymé)'s inequality.
Remark 5.138 - (Special cases of) Markov's theorem (Murteira, 1979, pp. 320-321; Rohatgi, 1976, p. 258)
- The WLLN holds for a sequence of pairwise uncorrelated r.v., with common expected value $\mu$ and uniformly bounded variances $V\left(X_{n}\right)<k, n=1,2, \ldots$, where $k \in \mathbb{R}^{+} .{ }^{14}$
- The WLLN also holds for a sequence of pairwise uncorrelated and identically distributed r.v. in $L^{2}$, with common expected value $\mu$ and variance $\sigma^{2}$ (Theorem 5.132).
${ }^{14}$ This corollary of Markov's theorem is due to Chebyshev. Please note that when we are dealing with pairwise uncorrelated r.v., the condition (5.82) in Markov's theorem still reads: $\lim _{n \rightarrow+\infty} V\left(\bar{X}_{n}\right)=\lim _{n \rightarrow+\infty} \frac{1}{n^{2}} \sum_{i=1}^{n} V\left(X_{i}\right)=0$.

- Needless to say, the WLLN holds for any sequence of i.i.d. r.v. in $L^{2}$ (Theorem 5.129) and therefore $\bar{X}_{n}$ is a consistent estimator of $\mu$.
Moreover, according to http://en.wikipedia.org/wiki/Law_of_large_numbers, the assumption of finite variances $\left(V\left(X_{i}\right)=\sigma^{2}<+\infty\right)$ is not necessary; large or infinite variance will make the convergence slower, but the WLLN holds anyway, as stated in Theorem 5.143. This assumption is often used because it makes the proofs easier and shorter.
The next theorem provides a necessary and sufficient condition for a sequence of r.v. $\left\{X_{1}, X_{2}, \ldots\right\}$ to obey the WLLN.
Theorem 5.139 - An alternative criterion for the WLLN (Rohatgi, 1976, p. 258) Let:
- $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. (in $L^{2}$);
- $Y_{n}=\bar{X}_{n}-E\left(\bar{X}_{n}\right), n=1,2, \ldots$
Then $\left\{X_{1}, X_{2}, \ldots\right\}$ satisfies the WLLN with respect to $b_{n}=n\left(a_{n}=\sum_{i=1}^{n} E\left(X_{i}\right)\right)$, i.e.
$$
\bar{X}_{n}-E\left(\bar{X}_{n}\right) \stackrel{P}{\rightarrow} 0,
$$
iff
$$
\lim _{n \rightarrow+\infty} E\left(\frac{Y_{n}^{2}}{1+Y_{n}^{2}}\right)=0
$$
Remark 5.140 - An alternative criterion for the WLLN (Rohatgi, 1976, p. 259) Since condition (5.85) does not apply to the individual r.v. $X_{i}$, Theorem 5.139 is of limited use.
## Exercise 5.141 - An alternative criterion for the WLLN
Show Theorem 5.139 (Rohatgi, 1976, pp. 258-259).
## Exercise 5.142 - An alternative criterion for the WLLN (bis)
Let $\left(X_{1}, \ldots, X_{n}\right)$ be jointly normal and such that: $E\left(X_{i}\right)=0$ and $V\left(X_{i}\right)=1(i=1,2, \ldots)$; and,
$$
\operatorname{cov}\left(X_{i}, X_{j}\right)= \begin{cases}1, & i=j \\ \rho \in(-1,1), & |j-i|=1 \\ 0, & |j-i|>1 .\end{cases}
$$
Use Theorem 5.139 to prove that $\bar{X}_{n} \stackrel{P}{\rightarrow} 0$ (Rohatgi, 1976, pp. 259-260, Example 2).

Finally, the assumption that the r.v. belong to $L^{2}$ is dropped and we state a theorem due to the Soviet mathematician Aleksandr Yakovlevich Khinchin (1894-1959).
Theorem 5.143 - Weak law of large numbers, i.i.d. r.v. in $L^{1}$ (Rohatgi, 1976, p. 261)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. in $L^{1}$ with common finite mean $\mu .{ }^{15}$ Then $\left\{X_{1}, X_{2}, \ldots\right\}$ satisfies the WLLN with respect to $b_{n}=n\left(a_{n}=n \mu\right)$, i.e.
$$
\bar{X}_{n} \stackrel{P}{\rightarrow} \mu
$$
Exercise 5.144 - Weak law of large numbers, i.i.d. r.v. in $L^{1}$
Prove Theorem 5.143 (Rohatgi, 1976, p. 261).
Exercise 5.145 - Weak law of large numbers, i.i.d. r.v. in $L^{1}$ (bis)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of r.v. i.i.d. to $X \in L^{k}$, for some positive integer $k$, with common $k$th moment $E\left(X^{k}\right)$. Apply Theorem 5.143 to prove that:
(a) $\frac{1}{n} \sum_{i=1}^{n} X_{i}^{k} \stackrel{P}{\rightarrow} E\left(X^{k}\right) ;^{16}$
(b) if $k=2$ then $\frac{1}{n} \sum_{i=1}^{n} X_{i}^{2}-\left(\bar{X}_{n}\right)^{2} \stackrel{P}{\rightarrow} V(X)$ ${ }^{17}$ (Rohatgi, 1976, p. 261, Example 4).
Exercise 5.146 - Weak law of large numbers, i.i.d. r.v. in $L^{1}$ (bis bis) Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. with common p.d.f.
$$
f_{X}(x)= \begin{cases}\frac{1+\delta}{x^{2+\delta}}, & x \geq 1 \\ 0, & \text { otherwise }\end{cases}
$$
where $\delta>0 .{ }^{18}$ Show that $\bar{X}_{n} \stackrel{P}{\rightarrow} E(X)=\frac{1+\delta}{\delta}$ (Rohatgi, 1976, p. 261, Example 5).
${ }^{15}$ Please note that nothing is said about the variance; it need not be finite.
${ }^{16}$ This means that the $k$th sample moment, $\frac{1}{n} \sum_{i=1}^{n} X_{i}^{k}$, is a consistent estimator of $E\left(X^{k}\right)$ if the i.i.d. r.v. belong to $L^{k}$.
${ }^{17}$ I.e., the sample variance, $\frac{1}{n} \sum_{i=1}^{n} X_{i}^{2}-\left(\bar{X}_{n}\right)^{2}$, is a consistent estimator of $V(X)$ if we are dealing with i.i.d. r.v. in $L^{2}$.
${ }^{18}$ This is the p.d.f. of a $\operatorname{Pareto}(1,1+\delta)$ r.v.
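A simulation sketch of Exercise 5.146 (not from Rohatgi). For $\delta=1$ the Pareto r.v. has a finite mean but infinite variance, so only the $L^{1}$ law (Theorem 5.143) applies and convergence is slow; sampling uses the inverse d.f. $F^{-1}(u)=(1-u)^{-1 /(1+\delta)}$.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 1.0
u = rng.random(1_000_000)
x = (1.0 - u) ** (-1.0 / (1.0 + delta))   # Pareto(1, 1 + delta) via the inverse d.f.
print(x.mean(), (1 + delta) / delta)      # sample mean drifts towards E(X) = 2
```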
### Strong law of large numbers
This section is devoted to a few extensions of the SLLN for Bernoulli summands (or Borel's SLLN), Theorem 5.113. They refer to sequences of:
- i.i.d. r.v. in $L^{4}$;
- dominated i.i.d. r.v.;
- independent r.v. in $L^{2}$ with a specific variance behavior;
- i.i.d. r.v. in $L^{1}$.
Theorem 5.147 - Strong law of large numbers, i.i.d. r.v. in $L^{4}$ (Karr, 1993, p. 152; Rohatgi, 1976, pp. 264-265, Theorem 1)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. in $L^{4}$, with common expected value $\mu$. Then
$$
\bar{X}_{n} \stackrel{\text { a.s. }}{\rightarrow} \mu .
$$
Exercise 5.148 - Strong law of large numbers, i.i.d. r.v. in $L^{4}$
Prove Theorem 5.147, by following the same steps as in the proof of the SLLN for Bernoulli summands (Rohatgi, 1976, p. 265).
The proviso of a common finite fourth moment can be dropped if there is a degenerate r.v. that dominates the i.i.d. r.v. $X_{1}, X_{2}, \ldots$
Corollary 5.149 - Strong law of large numbers, dominated i.i.d. r.v. (Rohatgi, 1976, p. 265)

Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v., with common expected value $\mu$ and such that

$$
P\left(\left\{\left|X_{n}\right|<k\right\}\right)=1, \text { for all } n,
$$
where $k$ is a positive constant. Then
$$
\bar{X}_{n} \stackrel{\text { a.s. }}{\rightarrow} \mu .
$$
The next lemma is essential to prove yet another extension of Borel's SLLN.

Lemma 5.150 - Kolmogorov's inequality (Rohatgi, 1976, p. 268)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of independent r.v., with common null mean and variances $\sigma_{i}^{2}, i=1,2, \ldots$ Then, for any $\epsilon>0$,
$$
P\left(\left\{\max _{k=1, \ldots, n}\left|S_{k}\right|>\epsilon\right\}\right) \leq \frac{\sum_{i=1}^{n} \sigma_{i}^{2}}{\epsilon^{2}} .
$$
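A Monte Carlo sketch of Lemma 5.150 (an illustration, not a proof; standard normal summands and $\epsilon=15$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, trials = 50, 15.0, 20_000
x = rng.normal(size=(trials, n))                    # independent, mean 0, variance 1
max_abs = np.abs(np.cumsum(x, axis=1)).max(axis=1)  # max_k |S_k| per trial
print((max_abs > eps).mean(), "<=", n / eps**2)     # estimated LHS vs the bound 50/225
```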
## Exercise 5.151 - Kolmogorov's inequality
Show Lemma 5.150 (Rohatgi, 1976, pp. 268-269).
Remark 5.152 - Kolmogorov's inequality (Rohatgi, 1976, p. 269)
If we take $n=1$ then Lemma 5.150 can be written as $P\left(\left\{\left|X_{1}\right|>\epsilon\right\}\right) \leq \frac{\sigma_{1}^{2}}{\epsilon^{2}}$, which is Chebyshev's inequality.
The condition of dealing with i.i.d. r.v. in $L^{4}$ can be further relaxed as long as the r.v. are still independent and the variances of $X_{1}, X_{2}, \ldots$ have a specific behavior, as stated below.
Theorem 5.153 - Strong law of large numbers, independent r.v. in $L^{2}$ (Rohatgi, 1976, p. 272, Theorem 6)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of independent r.v. in $L^{2}$ with variances $\sigma_{i}^{2}, i=1,2, \ldots$, such that
$$
\sum_{i=1}^{+\infty} V\left(X_{i}\right)<+\infty .
$$
Then
$$
S_{n}-E\left(S_{n}\right) \text { converges a.s. }
$$
Exercise 5.154 - Strong law of large numbers, independent r.v. in $L^{2}$
Prove Theorem 5.153 by making use of Kolmogorov's inequality and Cauchy's criterion (Rohatgi, 1976, p. 272, Theorem 6).
Theorem 5.155 - Strong law of large numbers, i.i.d. r.v. in $L^{1}$, or Kolmogorov's SLLN (Karr, 1993, p. 188; Rohatgi, 1976, p. 274, Theorem 7)
Let $\left\{X_{1}, X_{2}, \ldots\right\}$ be a sequence of i.i.d. r.v. to $X$. Then
$$
\bar{X}_{n} \stackrel{\text { a.s. }}{\rightarrow} \mu
$$
iff $X \in L^{1}$, and then $\mu=E(X)$.

## Exercise 5.156 - Strong law of large numbers, i.i.d. r.v. in $L^{1}$, or Kolmogorov's SLLN
Show Theorem 5.155 (Karr, 1993, pp. 188-189; Rohatgi, 1976, pp. 274-275).
## References
- Grimmett, G.R. and Stirzaker, D.R. (2001). Probability and Random Processes (3rd. edition). Oxford University Press. (QA274.12-.76.GRI.30385 and QA274.12.76.GRI.40695 refer to the library code of the 1st. and 2nd. editions from 1982 and 1992, respectively.)
- Karr, A.F. (1993). Probability. Springer-Verlag.
- Murteira, B.J.F. (1979). Probabilidades e Estatística, Vol. 1. Editora McGraw-Hill de Portugal, Lda.
- Murteira, B.J.F. (1980). Probabilidades e Estatística, Vol. 2. Editora McGraw-Hill de Portugal, Lda. (QA273-280/3.MUR.34472, QA273-280/3.MUR.34475)
- Resnick, S.I. (1999). A Probability Path. Birkhäuser. (QA273.4-.67.RES.49925)
- Rohatgi, V.K. (1976). An Introduction to Probability Theory and Mathematical Statistics. John Wiley \& Sons. (QA273-280/4.ROH.34909)
| Textbooks |
# Properties and Applications of LU Decomposition
LU decomposition is a factorization of a square matrix into the product of a lower triangular matrix (L) and an upper triangular matrix (U). It is widely used in numerical analysis and linear algebra due to its stability and ease of implementation.
The main properties of LU decomposition are:
- Stability: with partial pivoting, LU decomposition is numerically stable in practice, so solutions computed from the factors retain the accuracy of the original problem.
- Simplicity: LU decomposition simplifies the solution of linear systems and matrix inversion.
- Computational efficiency: LU decomposition is computationally efficient for large matrices.
Applications of LU decomposition include:
- Solving linear systems: LU decomposition is used to solve linear systems of equations.
- Matrix inversion: LU decomposition is used to compute the inverse of a matrix.
- Data analysis: LU decomposition is used in various data analysis techniques, such as principal component analysis (PCA).
## Exercise
Consider the matrix A:
$$
A = \begin{bmatrix}
4 & 3 \\
6 & 5
\end{bmatrix}
$$
Perform LU decomposition on A and find the lower (L) and upper (U) triangular matrices.
# Matrix Factorization Techniques
Matrix factorization is a process of decomposing a matrix into a product of two or more matrices. There are several matrix factorization techniques, including:
- LU decomposition: A factorization of a square matrix into the product of a lower triangular matrix and an upper triangular matrix.
- QR decomposition: A factorization of a square matrix into the product of an orthogonal matrix and an upper triangular matrix.
- SVD (Singular Value Decomposition): A factorization of a rectangular matrix into the product of three matrices: an orthogonal matrix, a diagonal matrix, and the transpose of an orthogonal matrix.
- Cholesky decomposition: A factorization of a symmetric positive-definite matrix into the product of an upper triangular matrix and its transpose.
Each factorization technique has its own properties and applications. For example, LU decomposition is stable and efficient for solving linear systems, while SVD is useful for dimensionality reduction and data analysis.
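A short sketch showing how these factorizations look in SciPy (function names from `scipy.linalg`; the matrix is the one from the earlier exercise):

```python
import numpy as np
from scipy.linalg import lu, qr, svd, cholesky

A = np.array([[4.0, 3.0], [6.0, 5.0]])

P, L, U = lu(A)              # LU with partial pivoting: A = P @ L @ U
Q, R = qr(A)                 # QR: A = Q @ R
W, s, Vt = svd(A)            # SVD: A = W @ np.diag(s) @ Vt
S = A @ A.T                  # a symmetric positive-definite matrix built from A
C = cholesky(S)              # Cholesky: S = C.T @ C (C upper triangular)
```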
# Probing Techniques for Linear Algebra
Probing techniques in linear algebra are methods used to analyze and understand the properties of matrices. Some common probing techniques include:
- Determinant: The determinant is a scalar value that is used to determine the properties of a square matrix, such as its invertibility.
- Eigenvalues and eigenvectors: Eigenvalues and eigenvectors are used to analyze the behavior of a linear transformation and its effect on the space.
- Singular value decomposition (SVD): SVD is a factorization of a rectangular matrix into three matrices, which is useful for dimensionality reduction and data analysis.
- Principal component analysis (PCA): PCA is a technique used to reduce the dimensionality of a dataset while retaining its main characteristics.
Probing techniques are essential for understanding the properties and applications of matrices in various fields, such as data analysis, image processing, and machine learning.
# Implementing LU Decomposition in Python
To implement LU decomposition in Python, you can use the SciPy library (NumPy itself does not provide an LU routine), together with NumPy for creating and manipulating matrices. Here's an example of how to perform LU decomposition on a matrix:
```python
import numpy as np
from scipy.linalg import lu

# Create a matrix
A = np.array([[4.0, 3.0], [6.0, 5.0]])

# Perform LU decomposition with partial pivoting: A = P @ L @ U
P, L, U = lu(A)

# Print the permutation, lower and upper triangular matrices
print("P:")
print(P)
print("L:")
print(L)
print("U:")
print(U)
```
# Solving Linear Systems using LU Decomposition
Once you have the LU decomposition of a matrix, you can use it to solve linear systems efficiently. The general form of a linear system is:
$$
Ax = b
$$
where A is a square matrix, x is a vector of unknowns, and b is a vector of constants.
To solve the linear system using LU decomposition, follow these steps:
1. Perform LU decomposition on the matrix A.
2. Solve the system of linear equations using the resulting lower (L) and upper (U) triangular matrices.
In Python, you can use the SciPy library to solve linear systems with the LU factors (via `lu_factor` and `lu_solve`):
```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Create a matrix and a vector
A = np.array([[4.0, 3.0], [6.0, 5.0]])
b = np.array([1.0, 2.0])

# Perform LU decomposition (factored form plus pivot indices)
lu_piv = lu_factor(A)

# Solve the linear system using the LU factors
x = lu_solve(lu_piv, b)

# Print the solution
print("x:")
print(x)
```
# Eigenvalues and Eigenvectors with LU Decomposition
Eigenvalues and eigenvectors are important concepts in linear algebra that describe the behavior of a linear transformation. LU decomposition is not itself an eigenvalue algorithm; in practice, eigenvalues and eigenvectors are computed with dedicated routines (NumPy's `np.linalg.eig` uses QR-based methods), although triangular factorizations do appear inside related algorithms such as inverse iteration.

Here's how to find the eigenvalues and eigenvectors of a matrix in Python:
```python
import numpy as np

# Create a matrix
A = np.array([[4.0, 3.0], [6.0, 5.0]])

# Find the eigenvalues and eigenvectors (QR-based routine, not LU)
eigenvalues, eigenvectors = np.linalg.eig(A)

# Print the eigenvalues and eigenvectors
print("Eigenvalues:")
print(eigenvalues)
print("Eigenvectors:")
print(eigenvectors)
```
# Applications of LU Decomposition in Data Analysis
LU decomposition has various applications in data analysis, such as:
- Principal component analysis (PCA): PCA is a technique used to reduce the dimensionality of a dataset while retaining its main characteristics. In practice PCA relies on an eigendecomposition or SVD of the covariance matrix; LU-based solves can support related preprocessing steps such as whitening.
- Linear regression: LU decomposition can be used to solve the normal equations of a linear regression problem and thus compute the coefficients of the model.
- Clustering algorithms: LU decomposition can be used to compute the distance matrix and perform clustering on datasets.
These applications demonstrate the versatility and importance of LU decomposition in data analysis.
# Advanced Topics in Matrix Factorization and LU Decomposition
There are several advanced topics in matrix factorization and LU decomposition, including:
- Randomized methods for solving Ax = b: Randomized methods are used to solve large-scale linear systems efficiently.
- Numerical stability: Numerical stability is a crucial issue in matrix factorization and LU decomposition. Techniques to improve numerical stability include partial and complete pivoting.
- Parallel and distributed computing: Parallel and distributed computing techniques are used to accelerate matrix factorization and LU decomposition on multi-core and distributed systems.
These advanced topics provide a deeper understanding of matrix factorization and LU decomposition and their applications in various fields, such as data analysis, image processing, and machine learning.
# Conclusion and Further Reading
In this textbook, we have covered the fundamentals of LU decomposition, its properties and applications, and its implementation in Python. We have also explored advanced topics in matrix factorization and LU decomposition.
For further reading, consider the following resources:
- "Numerical Linear Algebra" by Trefethen and Bau
- "Matrix Computations" by Golub and Van Loan
- "Matrix Analysis" by Horn and Johnson
These books provide in-depth coverage of matrix factorization, LU decomposition, and their applications in various fields. | Textbooks |
Where do the laws of quantum mechanics break down in simulations?
As someone who holds a BA in physics I was somewhat scandalized when I began working with molecular simulations. It was a bit of a shock to discover that even the most detailed and computationally expensive simulations can't quantitatively reproduce the full behavior of water from first principles.
Previously, I had been under the impression that the basic laws of quantum mechanics were a solved problem (aside from gravity, which is usually assumed to be irrelevant at molecular scale). However, it seems that once you try to scale those laws up and apply them to anything larger or more complex than a hydrogen atom their predictive power begins to break down.
From a mathematics point of view, I understand that the wave functions quickly grow too complicated to solve and that approximations (such as Born-Oppenheimer) are required to make the wave functions more tractable. I also understand that those approximations introduce errors which propagate further and further as the time and spatial scales of the system under study increase.
What is the nature of the largest and most significant of these approximation errors? How can I gain an intuitive understanding of those errors? Most importantly, how can we move towards an ab-initio method that will allow us to accurately simulate whole molecules and populations of molecules? What are the biggest unsolved problems that are stopping people from developing these kinds of simulations?
quantum-mechanics simulation
tel
$\begingroup$ Er...whatever made you think that "the basic laws of quantum mechanics were a solved problem" was equivalent to being able to "reproduce the full behavior of water from first principles [in simulation]"? It's a thirteen body problem. $\endgroup$ – dmckee --- ex-moderator kitten Apr 23 '12 at 19:28
$\begingroup$ @dmckee see, this is exactly what I'm confused about. 13 body problem means no analytic solution, sure, but what's stopping us from coming up with a numerical solution of arbitrary accuracy? Is it simply that you hit the wall of what's computationally feasible? Are you already at the point where a computation requires the lifetime of a sun to complete? If so, what kinds of approximations can you make to simplify the problem? Can you understand these approximations on an intuitive level? Are there ways to improve the approximations, reduce the level of error they introduce? Break it down for me $\endgroup$ – tel Apr 23 '12 at 20:01
$\begingroup$ @dmckee as for what made me think that water should be simple in the first place... I blame the protein simulators. They made me dream of what was possible :) $\endgroup$ – tel Apr 23 '12 at 20:06
As far as I'm aware, the most accurate methods for static calculations are Full Configuration Interaction with a fully relativistic four-component Dirac Hamiltonian and a "complete enough" basis set. I'm not an expert in this particular area, but from what I know of the method, solving it using a variational method (rather than a Monte-Carlo based method) scales shockingly badly, since I think the number of Slater determinants you have to include in your matrix scales something like $O\left(\binom{n_{\text{orbs}}}{n_{e}}\right)$. (There's an article on the computational cost here.) The related Monte-Carlo methods and methods based off them using "walkers" and networks of determinants can give results more quickly, but as implied above, aren't variational. And are still hideously costly.
Approximations currently in practical use just for energies for more than two atoms include:
Born Oppenheimer, as you say: this is almost never a problem unless your system involves hydrogen atoms tunneling, or unless you're very near a state crossing/avoided crossing. (See, for example, conical intersections.) Conceptually, there are non-adiabatic methods for the wavefunction/density, including CPMD, and there's also Path-Integral MD which can account for nuclear tunneling effects.
Nonrelativistic calculations, and two-component approximations to the Dirac equation: you can get an exact two-component formulation of the Dirac equation, but more practically the Zeroth-Order Regular Approximation (see Lenthe et al, JChemPhys, 1993) or the Douglas-Kroll-Hess Hamiltonian (see Reiher, ComputMolSci, 2012) are commonly used, and often (probably usually) neglecting spin-orbit coupling.
Basis sets and LCAO: basis sets aren't perfect, but you can always make them more complete.
DFT functionals, which tend to attempt to provide a good enough attempt at the exchange and correlation without the computational cost of the more advanced methods below. (And which come in a few different levels of approximation. LDA is the entry-level one, GGA, metaGGA and including exact exchange go further than that, and including the RPA is still a pretty expensive and new-ish technique as far as I'm aware. There are also functionals which use differing techniques as a function of separation, and some which use vorticity which I think have application in magnetic or aromaticity studies.) (B3LYP, the functional some people love and some people love to hate, is a GGA including a percentage of exact exchange.)
Configuration Interaction truncations: CIS, CISD, CISDT, CISD(T), CASSCF, RASSCF, etc. These are all approximations to CI which assume the most important excited determinants are the least excited ones.
Multi-reference Configuration Interaction (truncations): Ditto, but with a few different starting reference states.
Coupled-Cluster method: I don't pretend to properly understand how this works, but it obtains similar results to Configuration Interaction truncations with the benefit of size-consistency (i.e. $E(H_{2}) \times 2 = E\left((H_{2})_{2}\right)$ at large separation).
For dynamics, many of the approximations refer to things like the limited size of a tractable system, and practical timestep choice -- it's pretty standard stuff in the numerical time simulation field. There's also temperature maintenance (see Nose-Hoover or Langevin thermostats). This is mostly a set of statistical mechanics problems, though, as I understand it.
Anyway, if you're physics-minded, you can get a pretty good feel for what's neglected by looking at the formulations and papers about these methods: most commonly used methods will have at least one or two papers that aren't the original specification explaining their formulation and what it includes. Or you can just talk to people who use them. (People who study periodic systems with DFT are always muttering about what different functionals do and don't include and account for.) Very few of the methods have specific surprising omissions or failure modes. The most difficult problem appears to be proper treatment of electron correlation, and anything above the Hartree-Fock method, which doesn't account for it at all, is an attempt to include it.
As I understand it, getting to the accuracy of Full relativistic CI with complete basis sets is never going to be cheap without dramatically reinventing (or throwing away) the algorithms we currently use. (And for people saying that DFT is the solution to everything, I'm waiting for your pure density orbital-free formulations.)
There's also the issue that the more accurate you make your simulation by including more contributions and more complex formulations, the harder it is to actually do anything with. For example, spin orbit coupling is sometimes avoided solely because it makes everything more complicated to analyse (but sometimes also because it has negligable effect), and the canonical Hartree-Fock or Kohn-Sham orbitals can be pretty useful for understanding qualitative features of a system without layering on the additional output of more advanced methods.
(I hope some of this makes sense, it's probably a bit spotty. And I've probably missed someone's favourite approximation or niggle.)
Aesin
The fundamental challenge of quantum mechanical calculations is that they do not scale very well—from what I recall, the current best-case scaling is approximately $O(N_e^{3.7})$, where $N_e$ is the number of electrons contained in the system. Thus, 13 water molecules will scale as having $N_e = 104$ electrons instead of just $N = 39$ atoms. (That's a factor of nearly 40.) For heavier atoms, the discrepancy becomes even greater.
The main issue will be that, in addition to increased computational horsepower, you will need to come up with better algorithms that can knock down the 3.7 exponent to something that is more manageable.
aeismail
$\begingroup$ Expand on this. What's the nature of the $O({{N}_{e}}^{3.7})$ algorithm? Who are the people working to improve it? How are they going about it? $\endgroup$ – tel Apr 23 '12 at 20:05
$\begingroup$ I really like and enjoy this discussion! $\endgroup$ – Open the way Apr 24 '12 at 12:27
$\begingroup$ My understanding is that quantum mechanics (or at least electronic structure theory) would be considered a solved problem if the most accurate methods scaled as O(N^3). The problem is that it is essentially only the worst methods, mean field approximations, that approach this scaling, and something like Full CI scales exponentially with the number of electrons (or more typically the basis functions). $\endgroup$ – Tyberius Apr 25 '18 at 21:45
The problem is broadly equivalent to the difference between classical computers and quantum computers. Classical computers work on single values at once, as only one future/history is possible for one deterministic input. However, a quantum computer can operate on every possible input simultaneously, because it can be put in a superposition of all the possible states.
In the same way, a classical computer has to calculate every property individually, but the quantum system it is simulating has all the laws of the universe to calculate all the properties simultaneously.
The problem is exacerbated by the way we have to pass data almost serially through a CPU, or at most a few thousand CPUs. By contrast, the universe has a nearly unlimited set of simultaneous calculations going on at the same time.
Consider as an example 3 electrons in a box. A computer has to pick a timestep (first approximation), and keep recalculating the interactions of each electron with each other electron, via a limited number of CPUs. In reality, the electrons have an unknowable number of real and virtual exchange particles in transit, being absorbed and emitted, as a continuous process. Every particle and point in space has some interaction going on, which would need a computer to simulate.
Simulation is really the art of choosing your approximations and your algorithms to model the subject as well as possible with the resources you have available. If you want perfection, I'm afraid it's the mathematics of spherical chickens in vacuums; we can only perfectly simulate the very simple.
Phil H
$\begingroup$ really nice "Simulation is really the art of choosing your approximations and your algorithms to model the subject as well as possible with the resources you have available" $\endgroup$ – Open the way Apr 25 '12 at 14:07
$\begingroup$ It is true that only spherical chicken fetishists care about perfection. The real question is what's stopping us from getting to "good enough"? For many problems of biological interest (i.e. every drug binding problem ever), accurate enough would be calculating the energies to within ~1 kT or so. This is sometimes referred to as "chemical accuracy". $\endgroup$ – tel Nov 3 '13 at 8:51
$\begingroup$ @tel: Depends on the area. For some things we have more accuracy in models than we can achieve in practice, e.g. modelling Hydrogen electron orbitals. For others, usually many-body, non-linear systems where multiple effects come into play we struggle to match experiment; quantum chemistry for things like binding energies (see Density Functional Theory), protein folding, these are places where we cannot yet reliably reproduce experiment with commonly available resources. Quantum computers of a reasonable size would do the job. $\endgroup$ – Phil H Nov 6 '13 at 12:25
I don't know if the following helps, but for me it was very insightful to visualize the scaling behavior of quantum systems:
The main problem comes from the fact that the Hilbert space of quantum states grows exponentially with the number of particles. This can be seen very easily in discrete systems. Think of a couple of potential wells that are connected to each other, say just two: well 1 and well 2. Now add bosons (e.g., Rubidium 87, just as an example), at first only one. How many possible basis vectors are there?
basis vector 1: boson in well 1
basis vector 2: boson in well 2
They can be written like $\left|1,0 \right\rangle$ and $\left|0,1 \right\rangle$
Now suppose the boson can hop (or tunnel) from one well to the other. The Hamiltonian that describes the system can then be written is matrix notation as
$$ \hat{H}=\pmatrix{ \epsilon_{1} & t \\ t & \epsilon_{2}} $$
where $\epsilon_{1,2}$ are just the energies of the boson in well 1 and 2, respectively, and t the tunneling amplitude. The complete solution of this system, i.e., the solution containing all the information necessary to compute the system's state at any given point of time (given an initial condition), is given by the eigenstates and eigenvalues. The eigenstates are linear superpositions of the basis vectors (in this case $\left|1,0 \right\rangle$ and $\left|0,1 \right\rangle$).
This problem is so simple that it can be solved by hand.
Now suppose we have more potential wells and more bosons, e.g., in the case of four wells with two bosons there are 10 different possibilities to distribute the bosons among the wells. Then the Hamiltonian would have 10x10=100 elements and 10 eigenstates.
One can quickly see that the number of eigenstates is given by the binomial coefficient: $$ \text{number of eigenstates}=\pmatrix{\text{number of wells} + \text{number of bosons} - 1 \\ \text{number of bosons}} $$
So even for "just" ten bosons and ten different potential wells (a very small system), we'd have 92,378 eigenstates. The size of the Hamiltonian is then $92,378^2$ (approximately 8.5 billion elements). In a computer they'd occupy (depending on your system) about 70 gigabytes of RAM and is therefore probably impossible to solve on most computers.
Now let's assume we have a continuous system (i.e. no potential wells, but free space) and 13 water molecules (for simplicity I treat each of them as a single particle). Now in a computer we can still model free space using many tiny potential wells (we discretize space... which is ok, as long as the relevant physics takes place on larger length scales than the discretization length). Let's say there are 100 different possible positions for each of the molecules in each of the x, y and z directions. So we end up with 100*100*100 = 1,000,000 little boxes. Then we'd have more than $2.7 \cdot 10^{53}$ basis vectors, the Hamiltonian would have almost $10^{107}$ elements, occupying so much space that we'd need all the particles from 10 million universes like ours just to encode that information.
Robert
One problem is that quantum mechanics suffers from the "curse of dimensionality": for most methods of solving PDEs, the number of basis functions needed to get a certain accuracy scales exponentially with the number of dimensions. Since an $n$-electron atom has $3n$ degrees of freedom, even relatively small systems require an enormous number of dimensions to simulate exactly.
Monte Carlo can be used to get around this problem, as the error scales like $\text{points}^{-\frac{1}{2}}$ regardless of the number of dimensions, but convergence is slow.
Density functional theory is another way to deal with this problem, but it's an approximation. It's a very good approximation in some cases, but in other cases it can be surprisingly bad.
I think a highly-accurate simulation of water was the topic of one of the very first and large simulations performed using the Jaguar supercomputer. You might want to look into this paper and their follow-up work (which, by the way, was a finalist for the Gordon-Bell prize in 2009):
"Liquid water: obtaining the right answer for the right reasons", Aprà, Rendell, Harrison, Tipparaju, deJong, Xantheas.
fcruz
This problem is solved by Density Functional Theory. The essence is replacing many-body degrees of freedom by several fields, one of them being the density of electrons. For a grand exposition see the Nobel lecture of one of the founders of DFT: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf
Artan
$\begingroup$ Could you give some context to the link you're providing? We discourage answers that only give a link without any sort of explanation, and these sorts of answers are deleted unless they are edited. $\endgroup$ – Geoff Oxberry Apr 24 '12 at 19:40
$\begingroup$ and by the way, you should really take care with "This problem is solved by ....". Since there are limits for DFT which somebody should mention $\endgroup$ – Open the way Apr 25 '12 at 8:59
$\begingroup$ DFT provides a very useful approximation, but does not 'solve' anything! It is not exact without exact functionals for the exchange and correlation, and even then does not yield the wavefunctions but the electron density. $\endgroup$ – Phil H Apr 25 '12 at 13:06
$\begingroup$ Many body QM does not break down as a theory, it is just NP hard. DFT is a theory with polynomial complexity that solves the electronic structure of all chemical elements with the same accuracy as first-principles QM. This is why it earned the Nobel Prize in chemistry. It has provided excellent results for large systems when compared to experiments. $\endgroup$ – Artan Apr 25 '12 at 19:31
$\begingroup$ You are wrong. DFT does not solve "the problem" with the same accuracy. It "solves" one particular case (ground state) by introducing completely unknown exchange-correlation functional. $\endgroup$ – Misha Sep 13 '13 at 9:43
| CommonCrawl |
Quantum Fourier transform
In quantum computing, the quantum Fourier transform (QFT) is a linear transformation on quantum bits, and is the quantum analogue of the discrete Fourier transform. The quantum Fourier transform is a part of many quantum algorithms, notably Shor's algorithm for factoring and computing the discrete logarithm, the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator, and algorithms for the hidden subgroup problem. The quantum Fourier transform was discovered by Don Coppersmith.[1]
The quantum Fourier transform can be performed efficiently on a quantum computer with a decomposition into the product of simpler unitary matrices. The discrete Fourier transform on $2^{n}$ amplitudes can be implemented as a quantum circuit consisting of only $O(n^{2})$ Hadamard gates and controlled phase shift gates, where $n$ is the number of qubits.[2] This can be compared with the classical discrete Fourier transform, which takes $O(n2^{n})$ gates (where $n$ is the number of bits), which is exponentially more than $O(n^{2})$.
The quantum Fourier transform acts on a quantum state vector (a quantum register), and the classical Fourier transform acts on a vector. Both types of vectors can be written as lists of complex numbers. In the quantum case it is a sequence of probability amplitudes for all the possible outcomes upon measurement (called basis states, or eigenstates). Because measurement collapses the quantum state to a single basis state, not every task that uses the classical Fourier transform can take advantage of the quantum Fourier transform's exponential speedup.
The best quantum Fourier transform algorithms known (as of late 2000) require only $O(n\log n)$ gates to achieve an efficient approximation.[3]
Definition
The quantum Fourier transform is the classical discrete Fourier transform applied to the vector of amplitudes of a quantum state, which usually has length $N=2^{n}$.
The classical Fourier transform acts on a vector $(x_{0},x_{1},\ldots ,x_{N-1})\in \mathbb {C} ^{N}$ and maps it to the vector $(y_{0},y_{1},\ldots ,y_{N-1})\in \mathbb {C} ^{N}$ according to the formula:
$y_{k}={\frac {1}{\sqrt {N}}}\sum _{n=0}^{N-1}x_{n}\omega _{N}^{-nk},\quad k=0,1,2,\ldots ,N-1,$
where $\omega _{N}=e^{\frac {2\pi i}{N}}$ and $\omega _{N}^{n}$ is an N-th root of unity.
Similarly, the quantum Fourier transform acts on a quantum state $ |x\rangle =\sum _{i=0}^{N-1}x_{i}|i\rangle $ and maps it to a quantum state $ \sum _{i=0}^{N-1}y_{i}|i\rangle $ according to the formula:
$y_{k}={\frac {1}{\sqrt {N}}}\sum _{n=0}^{N-1}x_{n}\omega _{N}^{nk},\quad k=0,1,2,\ldots ,N-1,$
(Conventions for the sign of the phase factor exponent vary; here the quantum Fourier transform has the same effect as the inverse discrete Fourier transform, and vice versa.)
Since $\omega _{N}^{n}$ is a rotation, the inverse quantum Fourier transform acts similarly but with:
$x_{n}={\frac {1}{\sqrt {N}}}\sum _{k=0}^{N-1}y_{k}\omega _{N}^{-nk},\quad n=0,1,2,\ldots ,N-1,$
In case that $|x\rangle $ is a basis state, the quantum Fourier Transform can also be expressed as the map
${\text{QFT}}:|x\rangle \mapsto {\frac {1}{\sqrt {N}}}\sum _{k=0}^{N-1}\omega _{N}^{xk}|k\rangle .$
Equivalently, the quantum Fourier transform can be viewed as a unitary matrix (or quantum gate) acting on quantum state vectors, where the unitary matrix $F_{N}$ is the DFT matrix
$F_{N}={\frac {1}{\sqrt {N}}}{\begin{bmatrix}1&1&1&1&\cdots &1\\1&\omega &\omega ^{2}&\omega ^{3}&\cdots &\omega ^{N-1}\\1&\omega ^{2}&\omega ^{4}&\omega ^{6}&\cdots &\omega ^{2(N-1)}\\1&\omega ^{3}&\omega ^{6}&\omega ^{9}&\cdots &\omega ^{3(N-1)}\\\vdots &\vdots &\vdots &\vdots &&\vdots \\1&\omega ^{N-1}&\omega ^{2(N-1)}&\omega ^{3(N-1)}&\cdots &\omega ^{(N-1)(N-1)}\end{bmatrix}}$
where $\omega =\omega _{N}$. For example, in the case of $N=4=2^{2}$ and phase $\omega =i$ the transformation matrix is
$F_{4}={\frac {1}{2}}{\begin{bmatrix}1&1&1&1\\1&i&-1&-i\\1&-1&1&-1\\1&-i&-1&i\end{bmatrix}}$
See also: Generalizations of Pauli matrices § Construction: The clock and shift matrices
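The matrix form above is easy to play with numerically. A minimal sketch (not part of the article) that builds $F_{N}$ with NumPy, checks unitarity, and compares it with NumPy's FFT routines (with the sign convention used here, $F_{N}$ matches the inverse DFT up to the $\sqrt{N}$ normalization):

```python
import numpy as np

def qft_matrix(N):
    """Dense QFT matrix with entries omega^(j*k)/sqrt(N), omega = exp(2*pi*i/N)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F4 = qft_matrix(4)
print(np.allclose(F4.conj().T @ F4, np.eye(4)))            # unitarity
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(F4 @ x, np.sqrt(4) * np.fft.ifft(x)))    # inverse-DFT convention
```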
Properties
Unitarity
Most of the properties of the quantum Fourier transform follow from the fact that it is a unitary transformation. This can be checked by performing matrix multiplication and ensuring that the relation $FF^{\dagger }=F^{\dagger }F=I$ holds, where $F^{\dagger }$ is the Hermitian adjoint of $F$. Alternately, one can check that orthogonal vectors of norm 1 get mapped to orthogonal vectors of norm 1.
From the unitary property it follows that the inverse of the quantum Fourier transform is the Hermitian adjoint of the Fourier matrix, thus $F^{-1}=F^{\dagger }$. Since there is an efficient quantum circuit implementing the quantum Fourier transform, the circuit can be run in reverse to perform the inverse quantum Fourier transform. Thus both transforms can be efficiently performed on a quantum computer.
Circuit implementation
The quantum gates used in the circuit of $n$ qubits are the Hadamard gate and the phase gate $R_{n}$, here in terms of $N=2^{n}$
$H={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}\qquad {\text{and}}\qquad R_{n}={\begin{pmatrix}1&0\\0&\omega _{N}\end{pmatrix}}$
with $\omega _{N}=e^{i2\pi /N}$ the $N$-th root of unity. The circuit is composed of $H$ gates and the controlled version of $R_{n}$.
An orthonormal basis consists of the basis states
$|0\rangle ,\ldots ,|2^{n}-1\rangle .$
These basis states span all possible states of the qubits:
$|x\rangle =|x_{1}x_{2}\ldots x_{n}\rangle =|x_{1}\rangle \otimes |x_{2}\rangle \otimes \cdots \otimes |x_{n}\rangle $
where, with tensor product notation $\otimes $, $|x_{j}\rangle $ indicates that qubit $j$ is in state $x_{j}$, with $x_{j}$ either 0 or 1. By convention, the basis state index $x$ is the binary number encoded by the $x_{j}$, with $x_{1}$ the most significant bit. The quantum Fourier transform can be written as the tensor product of a series of terms:
${\text{QFT}}(|x\rangle )={\frac {1}{\sqrt {N}}}\bigotimes _{j=1}^{n}\left(|0\rangle +\omega _{N}^{x2^{n-j}}|1\rangle \right).$
Using the fractional binary notation
$[0.x_{1}\ldots x_{m}]=\sum _{k=1}^{m}x_{k}2^{-k}.$
the action of the quantum Fourier transform can be expressed in a compact manner:
${\text{QFT}}(|x_{1}x_{2}\ldots x_{n}\rangle )={\frac {1}{\sqrt {N}}}\ \left(|0\rangle +e^{2\pi i\,[0.x_{n}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{n-1}x_{n}]}|1\rangle \right)\otimes \cdots \otimes \left(|0\rangle +e^{2\pi i\,[0.x_{1}x_{2}\ldots x_{n}]}|1\rangle \right).$
To obtain this state from the circuit depicted above, a swap operation of the qubits must be performed to reverse their order. At most $n/2$ swaps are required.[2]
Because the discrete Fourier transform, an operation on n qubits, can be factored into the tensor product of n single-qubit operations, it is easily represented as a quantum circuit (up to an order reversal of the output). Each of those single-qubit operations can be implemented efficiently using one Hadamard gate and a linear number of controlled phase gates. The first term requires one Hadamard gate and $(n-1)$ controlled phase gates, the next term requires one Hadamard gate and $(n-2)$ controlled phase gate, and each following term requires one fewer controlled phase gate. Summing up the number of gates, excluding the ones needed for the output reversal, gives $n+(n-1)+\cdots +1=n(n+1)/2=O(n^{2})$ gates, which is quadratic polynomial in the number of qubits.
Example
The quantum Fourier transform on three qubits, $F_{8}$ with $n=3,N=8=2^{3}$, is represented by the following transformation:
${\text{QFT}}:|x\rangle \mapsto {\frac {1}{\sqrt {8}}}\sum _{k=0}^{7}\omega ^{xk}|k\rangle ,$
where $\omega =\omega _{8}$ is an eighth root of unity satisfying $\omega ^{8}=\left(e^{\frac {i2\pi }{8}}\right)^{8}=1$.
The matrix representation of the Fourier transform on three qubits is:
$F_{8}={\frac {1}{\sqrt {8}}}{\begin{bmatrix}1&1&1&1&1&1&1&1\\1&\omega &\omega ^{2}&\omega ^{3}&\omega ^{4}&\omega ^{5}&\omega ^{6}&\omega ^{7}\\1&\omega ^{2}&\omega ^{4}&\omega ^{6}&1&\omega ^{2}&\omega ^{4}&\omega ^{6}\\1&\omega ^{3}&\omega ^{6}&\omega &\omega ^{4}&\omega ^{7}&\omega ^{2}&\omega ^{5}\\1&\omega ^{4}&1&\omega ^{4}&1&\omega ^{4}&1&\omega ^{4}\\1&\omega ^{5}&\omega ^{2}&\omega ^{7}&\omega ^{4}&\omega &\omega ^{6}&\omega ^{3}\\1&\omega ^{6}&\omega ^{4}&\omega ^{2}&1&\omega ^{6}&\omega ^{4}&\omega ^{2}\\1&\omega ^{7}&\omega ^{6}&\omega ^{5}&\omega ^{4}&\omega ^{3}&\omega ^{2}&\omega \\\end{bmatrix}}.$
The 3-qubit quantum Fourier transform can be rewritten as:
${\text{QFT}}(|x_{1},x_{2},x_{3}\rangle )={\frac {1}{\sqrt {8}}}\ \left(|0\rangle +e^{2\pi i\,[0.x_{3}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{2}x_{3}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{1}x_{2}x_{3}]}|1\rangle \right).$
The following sketch shows the respective circuit for $n=3$ (with reversed order of output qubits with respect to the proper QFT):
As calculated above, the number of gates used is $n(n+1)/2$ which is equal to $6$, for $n=3$.
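A state-vector sketch of this circuit (an illustration, not the article's own code): Hadamards and controlled phase shifts are applied qubit by qubit, the reversed output order is undone with a final permutation, and the result is checked against the dense matrix. Qubit 0 is taken as the most significant bit, matching the convention above.

```python
import numpy as np

def qft_via_circuit(state):
    """Hadamards + controlled phase shifts, then a final qubit-order reversal."""
    n = int(np.log2(state.size))
    idx = np.arange(state.size)
    bit = lambda q: (idx >> (n - 1 - q)) & 1          # qubit 0 = most significant bit
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.asarray(state, dtype=complex).reshape([2] * n)
    for j in range(n):
        # Hadamard on qubit j
        psi = np.moveaxis(np.tensordot(H, np.moveaxis(psi, j, 0), axes=1), 0, j)
        flat = psi.reshape(-1).copy()
        for k in range(j + 1, n):
            # controlled-R_{k-j+1}: phase exp(2*pi*i / 2^(k-j+1)) where both bits are 1
            mask = (bit(j) == 1) & (bit(k) == 1)
            flat[mask] *= np.exp(2j * np.pi / 2 ** (k - j + 1))
        psi = flat.reshape([2] * n)
    # undo the reversed output order (at most n//2 qubit swaps)
    return np.transpose(psi, list(range(n))[::-1]).reshape(-1)

# check against the dense matrix for n = 3 qubits
rng = np.random.default_rng(0)
n, N = 3, 8
x = rng.normal(size=N) + 1j * rng.normal(size=N)
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
print(np.allclose(qft_via_circuit(x), F @ x))  # expected: True
```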
Relation to quantum Hadamard transform
Using the generalized Fourier transform on finite (abelian) groups, there are actually two natural ways to define a quantum Fourier transform on an n-qubit quantum register. The QFT as defined above is equivalent to the DFT, which considers these n qubits as indexed by the cyclic group $\mathbb {Z} /2^{n}\mathbb {Z} $. However, it also makes sense to consider the qubits as indexed by the Boolean group $(\mathbb {Z} /2\mathbb {Z} )^{n}$, and in this case the Fourier transform is the Hadamard transform. This is achieved by applying a Hadamard gate to each of the n qubits in parallel.[4][5] Shor's algorithm uses both types of Fourier transforms, an initial Hadamard transform as well as a QFT.
References
1. Coppersmith, D. (1994). "An approximate Fourier transform useful in quantum factoring". Technical Report RC19642, IBM. arXiv:quant-ph/0201067.
2. Michael Nielsen and Isaac Chuang (2000). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press. ISBN 0-521-63503-9. OCLC 174527496.
3. Hales, L.; Hallgren, S. (November 12–14, 2000). "An improved quantum Fourier transform algorithm and applications". Proceedings 41st Annual Symposium on Foundations of Computer Science. pp. 515–525. CiteSeerX 10.1.1.29.4161. doi:10.1109/SFCS.2000.892139. ISBN 0-7695-0850-2. S2CID 424297.
4. Fourier Analysis of Boolean Maps– A Tutorial –, pp. 12-13
5. Lecture 5: Basic quantum algorithms, Rajat Mittal, pp. 4-5
• K. R. Parthasarathy, Lectures on Quantum Computation and Quantum Error Correcting Codes (Indian Statistical Institute, Delhi Center, June 2001)
• John Preskill, Lecture Notes for Physics 229: Quantum Information and Computation (CIT, September 1998)
External links
• Wolfram Demonstration Project: Quantum Circuit Implementing Grover's Search Algorithm
• Wolfram Demonstration Project: Quantum Circuit Implementing Quantum Fourier Transform
• Quirk: live online quantum Fourier transform
| Wikipedia |
Number of spanning trees in a grid
Given a $\sqrt{n}\times\sqrt{n}$ piece of the integer $\mathbb{Z}^2$ grid, define a graph by joining any two of these points at unit distance apart. How many spanning trees does this graph have (asymptotically as $n\to\infty$)?
Can you also say something about the triangular grid generated by $(1,0)$ and $(1/2,\sqrt{3}/2)$?
discrete-geometry co.combinatorics graph-theory
Konrad Swanepoel
$\begingroup$ The cause of beauty forces me to change the problem to the toroidal grid. Then you will have an action of $Z_m^2$ to simplify your calculations. I do not see how to show that tweaking the problem in this manner introduces only an insignificant error, though. $\endgroup$ – Boris Bukh Jan 6 '10 at 14:38
$\begingroup$ The triangular grid could also be changed to a torus by identifying opposite edges of a hexagonal piece. It's possible that one gets significantly more spanning trees in this way, but I have no proof either way. $\endgroup$ – Konrad Swanepoel Jan 6 '10 at 22:23
I think the best way to deal with grids is to find the general eigenfunction of the infinite grid, and then apply appropriate boundary conditions. This is an idea of Kenyon, Propp and Wilson; you can find an outline in the very last section of my Diplomarbeit.
They only do it for the square grid, as far as I remember, but I wouldn't be surprised if the very same Ansatz works with the triangular grid.
I think that Richard Kenyon also shows how to compute the asymptotics in "Long-range properties of spanning trees in Z^2" (you can find it on his homepage) but I didn't check.
A second trick that might be useful for the triangular grid (due to Knuth), is to observe that the dual of the grid is "almost" regular. You can choose to delete the vertex corresponding to the outer face in the Laplacian when applying the matrix tree theorem, and will get a very nice matrix, I suppose.
I just found a reference which proves the asymptotics for the triangular grid: On the entropy of spanning trees on a large triangular lattice. The formulas are gorgeous...
I should have remarked that the given reference contains (exact) expressions for the asymptotics of both lattices: the limit of $1/n \ln \tau(G_n)$, where $\tau(G_n)$ is the number of spanning trees of the graph with $n$ vertices, is
$$4/\pi\sum_{n\geq1} \sin(n\pi/2)/n^2 = 1.166 243 616\dots$$
for the square grid (due to Temperly 1972), and
$$5/\pi\sum_{n\geq1} \sin(n\pi/3)/n^2 = 1.615 329 736 097\dots$$
for the triangular grid (proved in the reference).
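Both constants are easy to confirm numerically; a minimal sketch, assuming only Python's standard library:

    import math

    def lattice_constant(c, p, terms=10**6):
        # c/pi * sum_{n >= 1} sin(n*pi/p) / n**2, truncated at `terms`
        s = sum(math.sin(n * math.pi / p) / n ** 2 for n in range(1, terms))
        return c / math.pi * s

    print(lattice_constant(4, 2))   # ~1.166243..., square grid (Temperley)
    print(lattice_constant(5, 3))   # ~1.615329..., triangular grid

Truncating at $10^6$ terms already reproduces the quoted digits to about five decimal places, since the tail of the series is bounded by $\sum_{n>N} 1/n^2 \approx 1/N$.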
answered Jan 6 '10 at 7:13
Martin Rubey
$\begingroup$ would be nice if somebody had the energy to work out a nice formula for the number of spanning trees in a triangular region of the triangular grid. $\endgroup$ – Martin Rubey Jan 7 '10 at 12:46
$\begingroup$ Or for a regular hexagonal region. This should be the shape giving the largest number of minimal spanning trees for n points in the triangular lattice. I'm not so sure about the square lattice, but here the maximum number of spanning trees among n points is probably for square shapes. $\endgroup$ – Konrad Swanepoel Jan 7 '10 at 20:56
You can (write a program to) form the graph Laplacian (for n reasonably small) and use the matrix-tree theorem to get the number of spanning trees. See
http://en.wikipedia.org/wiki/Kirchhoff%27s_theorem
The triangular grid is a bit trickier to handle both on paper or on a computer; you may find techniques for the graded lexicographic index described at
http://blog.eqnets.com/2009/10/06/a-graded-lexicographic-index-part-1/
and subsequent posts helpful in dealing with the triangular lattice.
EDIT: The answer is at http://www.oeis.org/A007341.
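Kirchhoff's theorem is easy to run directly; here is a minimal NumPy sketch (variable names are mine) that reproduces the first terms of A007341:

    import numpy as np

    def grid_spanning_trees(n):
        # Kirchhoff: number of spanning trees = any cofactor of the Laplacian
        idx = lambda i, j: i * n + j
        L = np.zeros((n * n, n * n))
        for i in range(n):
            for j in range(n):
                for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                    if i + di < n and j + dj < n:
                        a, b = idx(i, j), idx(i + di, j + dj)
                        L[a, a] += 1; L[b, b] += 1
                        L[a, b] -= 1; L[b, a] -= 1
        # delete one row and column; the 0 x 0 determinant for n = 1 is 1
        return round(np.linalg.det(L[1:, 1:]))

    print([grid_spanning_trees(n) for n in range(1, 5)])  # [1, 4, 192, 100352]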
Abhimanyu Pallavi Sudhir
Steve Huntsman
$\begingroup$ A computer won't help with asymptotics. But your link contains an exact formula, from which the asymptotics could be calculated. (I guess it is exponential.) $\endgroup$ – Konrad Swanepoel Dec 11 '09 at 11:54
Expanding on Steve Huntsman's answer, call the product which appears in A007341 f(n). That is,
$$f(n) = \prod_{k=0}^{n-1} {\prod_{l=0}^{n-1}}^\prime \left(2 - \cos {\pi k \over n} - \cos {\pi l \over n } \right)$$
where the $\prime$ on the second product indicates that we start at $l=1$ in the case $k = 0$. The number of interest here is $a(n) = 2^{n^2-1} f(n)/n^2$ .
The product is the exponential of a sum, so
$$\log f(n) = \sum_{k=0}^{n-1} {\sum_{l=0}^{n-1}}^\prime \log \left(2 - \cos {\pi k \over n} - \cos {\pi l \over n } \right).$$
This sum is, in turn, $n^2$ times a Riemann sum for the integral
$$ C = \int_0^1 \int_0^1 \log(2-\cos x\pi - \cos y\pi) \: dx \: dy $$
which I believe converges, although actually evaluating it numerically is tricky. If you believe that, then $\log f(n) \sim Cn^2$ as $n \to \infty$, and $\log a(n) \sim (C+\log 2) n^2$ as $n \to \infty$. From evaluating $f(n)$ for various $n$, it appears that $C$ is near $0.473$, $e^C$ is near $1.605$ and so we have
$$ a(n) \approx 3.21^{n^2} $$
where I write $p(n) \approx q(n)$ for $\log p(n)/\log q(n) \to 1$ as $n \to \infty$, i.e. $\log p(n) \sim \log q(n)$.
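The convergence of $\log f(n)/n^2$ to $C$ can be watched directly; a minimal sketch, assuming only the standard library:

    import math

    def log_f(n):
        # log of the double product above; the (k, l) = (0, 0) term is skipped
        total = 0.0
        for k in range(n):
            for l in range(1 if k == 0 else 0, n):
                total += math.log(2 - math.cos(math.pi * k / n)
                                    - math.cos(math.pi * l / n))
        return total

    for n in (10, 50, 200):
        print(n, log_f(n) / n ** 2)   # approaches C, roughly 0.473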
Michael Lugo
$\begingroup$ According to the paper by Glasser and Wu cited in Martin's answer, the constant near 3.21 is in fact exactly $\exp(\frac{4}{\pi}(1-1/3^2+1/5^2-1/7^2+\dots))$. $\endgroup$ – Konrad Swanepoel Jan 6 '10 at 21:35
$\begingroup$ So this shows that the asymptotics is in fact exponential in the number of vertices $n^2$. $\endgroup$ – Konrad Swanepoel Jan 6 '10 at 21:37
$\begingroup$ But the Glasser-Wu paper is working over the triangular lattice; this is for the square lattice. Is there some reason the constants for the two lattices should be equal? $\endgroup$ – Michael Lugo Jan 6 '10 at 22:13
$\begingroup$ The two constants are not the same, but Konrad's answer is for the square grid. See the updated version of Martin's answer. $\endgroup$ – David E Speyer Jan 7 '10 at 13:43
You might try looking at:
http://arxiv.org/pdf/0809.2551
Joseph Malkevitch
$\begingroup$ An interesting reference. Unfortunately they do not really look at the asymptotics as both height and width go to infinity. $\endgroup$ – Konrad Swanepoel Jan 6 '10 at 22:17
| CommonCrawl |
Compute the value of $x$ such that
$\left(1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}\cdots\right)\left(1-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\cdots\right)=1+\frac{1}{x}+\frac{1}{x^2}+\frac{1}{x^3}+\cdots$.
The sum of an infinite geometric series with first term $a$ and common ratio $r$ is $\frac{a}{1-r}$. Thus the sum of the first series is
$$\frac{1}{1-\frac{1}{2}}$$And the sum of the second series is
$$\frac{1}{1+\frac{1}{2}}$$Multiplying these, we get
$$\frac{1}{1-\left(\frac{1}{2}\right)^2}=\frac{1}{1-\frac{1}{4}}$$So $x=\boxed{4}$. | Math Dataset |
\begin{definition}[Definition:Iff]
The logical connective '''iff''' is a convenient shorthand for '''if and only if'''.
\end{definition} | ProofWiki |
How many prime numbers are between 30 and 40?
We test primes up to 5 as potential divisors and find that there are only $\boxed{2}$ primes, 31 and 37, between 30 and 40. | Math Dataset |
Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten
Schölkopf, B., Müller, K., Smola, A.
Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: explaining the training data by a learning machine of a suitable complexity. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results.
Input space versus feature space in kernel-based methods
Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A.
IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53.
Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., Arrowsmith, C. H.
Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)
Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain.
Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots
Balakrishnan, K., Bousquet, O., Honavar, V.
Adaptive Behavior, 7(2):173-216, 1999 (article)
SVMs for Histogram Based Image Classification
Chapelle, O., Haffner, P., Vapnik, V.
IEEE Transactions on Neural Networks, (9), 1999 (article)
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x},\mathbf{y})=e^{-\rho\sum_i |x_i^a-y_i^a|^{b}}$ with $a\leq 1$ and $b\leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels.
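A minimal sketch of this kernel, assuming NumPy (the parameter values below are arbitrary, not taken from the paper):

    import numpy as np

    def heavy_tailed_rbf(x, y, rho=1.0, a=0.5, b=1.0):
        # K(x, y) = exp(-rho * sum_i |x_i**a - y_i**a|**b), for histograms x, y >= 0
        return np.exp(-rho * np.sum(np.abs(x ** a - y ** a) ** b))

    x = np.array([0.2, 0.5, 0.3])   # toy normalized histograms
    y = np.array([0.1, 0.6, 0.3])
    print(heavy_tailed_rbf(x, y))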
Is imitation learning the route to humanoid robots?
Schaal, S.
Trends in Cognitive Sciences, 3(6):233-242, 1999, clmc (article)
This review will focus on two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It will be postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. This hope is justified because imitation learning channels research efforts towards three important issues: efficient motor learning, the connection between action and perception, and modular motor control in form of movement primitives. In order to make these points, first, a brief review of imitation learning will be given from the view of psychology and neuroscience. In these fields, representations and functional connections between action and perception have been explored that contribute to the understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution provided a first idea of the possible neural basis of imitation. Secondly, computational approaches to imitation learning will be described, initially from the perspective of traditional AI and robotics, and then with a focus on neural network models and statistical learning research. Parallels and differences between biological and computational approaches to imitation will be highlighted. The review will end with an overview of current projects that actually employ imitation learning for humanoid robots.
Segmentation of endpoint trajectories does not imply segmented control
Sternad, D., Schaal, S.
Experimental Brain Research, 124(1):118-136, 1999, clmc (article)
While it is generally assumed that complex movements consist of a sequence of simpler units, the quest to define these units of action, or movement primitives, still remains an open question. In this context, two hypotheses of movement segmentation of endpoint trajectories in 3D human drawing movements are re-examined: (1) the stroke-based segmentation hypothesis based on the results that the proportionality coefficient of the 2/3 power law changes discontinuously with each new "stroke", and (2) the segmentation hypothesis inferred from the observation of piecewise planar endpoint trajectories of 3D drawing movements. In two experiments human subjects performed a set of elliptical and figure-8 patterns of different sizes and orientations using their whole arm in 3D. The kinematic characteristics of the endpoint trajectories and the seven joint angles of the arm were analyzed. While the endpoint trajectories produced similar segmentation features as reported in the literature, analyses of the joint angles show no obvious segmentation but rather continuous oscillatory patterns. By approximating the joint angle data of human subjects with sinusoidal trajectories, and by implementing this model on a 7-degree-of-freedom anthropomorphic robot arm, it is shown that such a continuous movement strategy can produce exactly the same features as observed by the above segmentation hypotheses. The origin of this apparent segmentation of endpoint trajectories is traced back to the nonlinear transformations of the forward kinematics of human arms. The presented results demonstrate that principles of discrete movement generation may not be reconciled with those of rhythmic movement as easily as has been previously suggested, while the generalization of nonlinear pattern generators to arm movements can offer an interesting alternative to approach the question of units of action.
| CommonCrawl |
hidden markov model
A hidden Markov model (HMM) is a kind of statistical model that is a variation on the Markov chain. The key difference between a Markov chain and the hidden Markov model is that the state in the latter is not directly visible to an observer, even though the output is. A diagram of such a model (Figure 5.5) represents the numbered states as circles while the arcs represent the probability of jumping from state to state; notice that the probabilities sum to unity for each state. As a toy example, the model might use a red die, having six sides, labeled 1 through 6, together with a loaded die on which five sides are labeled 2 through 6 while the remaining seven sides are labeled 1; an observer sees only the sequence of rolled numbers, not which die produced them.
A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations — for instance, the conditional probability of seeing a particular observation (an asset return) given that the state (the market regime) is currently equal to $z_t$. In this instance the hidden, or latent, process is the underlying regime state, while the asset returns are the indirect noisy observations that are influenced by these states; the regimes themselves are not expected to change too quickly (consider regulatory changes and other slow-moving macroeconomic effects), and it is possible to utilise the $K \times K$ state transition matrix $A$ as before with the Markov model for that component of the model.
The emission model depends on the application. Notably, in a single-molecule experiment, the values of the signal that are observed when the molecule is in a particular hidden state are typically assumed to be distributed according to a normal distribution PDF (i.e., the observed signals will be a Gaussian mixture model). In profile models of DNA, each state holds some probability distribution of the sequences it favors (and emits according to the HMM); the match and insert states always emit a symbol, whereas the delete states are silent states without emission probabilities. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. (See the section below on extensions for other possibilities.) A likelihood principle may be implemented, described schematically as follows: maximize $L$ over all possibilities of $X_1=j_1, X_2=j_2, \ldots, X_n=j_n$.
Well, suppose you were locked in a room for several days and you were asked about the weather outside: we don't get to observe the actual sequence of states (the weather on each day), and must infer it from indirect evidence. In MATLAB notation, hmmdecode calculates probabilities for a sequence, or for a set of sequences, given transition and emission probabilities, while hmmviterbi calculates the most probable sequence of states given the series of observations (Prakash Nadkarni, in Clinical Research Computing, 2016).
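To make hmmviterbi's job concrete, here is a minimal Viterbi sketch in Python; the two weather states, the three observation symbols, and all probability values are invented for illustration and are not taken from the text above:

    import numpy as np

    states = ["Rainy", "Sunny"]          # hypothetical hidden states
    obs = [0, 2, 1]                      # three hypothetical observation symbols
    start = np.array([0.6, 0.4])         # invented initial distribution
    trans = np.array([[0.7, 0.3],        # invented P(next state | current state)
                      [0.4, 0.6]])
    emit = np.array([[0.1, 0.4, 0.5],    # invented P(observation | state)
                     [0.6, 0.3, 0.1]])

    # Viterbi: dynamic programme over log-probabilities of the best path
    V = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = V[:, None] + np.log(trans)   # scores[i, j]: end in j via i
        back.append(scores.argmax(axis=0))    # best predecessor of each state
        V = scores.max(axis=0) + np.log(emit[:, o])

    path = [int(V.argmax())]                  # most probable final state
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))        # walk the back-pointers
    print([states[s] for s in reversed(path)])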
| CommonCrawl |
the new kingdom synonym
The two-diode model of an illuminated solar cell is

$$J = J_{L} - J_{01}\left\{\exp\left[\frac{q(V + JR_{s})}{kT}\right] - 1\right\} - J_{02}\left\{\exp\left[\frac{q(V + JR_{s})}{2kT}\right] - 1\right\} - \frac{V + JR_{s}}{R_{shunt}},$$

where the ideality factors $n_1$ and $n_2$ of the two diodes are assumed to be equal to 1 and 2, respectively. Practical measurements of the illuminated equation are difficult, as small fluctuations in the light intensity overwhelm the effects of the second diode. A solar cell is a semiconductor PN junction diode, normally without an external bias, that provides electrical power to a load when illuminated (Figure 1).
Solar bypass diode: a solution for partial shading and soiling. A shaded or polluted solar photovoltaic cell is unable to pass as much current or voltage as an unaffected cell, and at night or in deep shade cells tend to draw current from the batteries rather than sending current to them, causing the batteries to lose charge. Bypass diodes are arranged in reverse bias between the positive and negative output terminals of the solar cells and have no effect on the output in normal operation. Preferably there would be one bypass diode per solar cell, but this is more expensive, so in practice there is one diode per small group of series-connected cells: in a 60-cell panel there would typically be a bypass diode in parallel with every 20 cells, and in a 72-cell panel with every 24 cells.
The theory of solar cells explains the process by which light energy in photons is converted into electric current when the photons strike a suitable semiconductor device. Solar cells are typically illuminated with sunlight and are intended to convert the solar energy into electrical energy; sunlight is incident from the top, on the front of the solar cell (a simple conventional solar cell structure is depicted in Figure 3.1). The derivation of the ideal diode equation is covered in many textbooks and lecture notes (e.g., "Ideal Diode Equation II + Intro to Solar Cells", Professor Mark Lundstrom, Electrical and Computer Engineering, Purdue University, 2/27/15; reading: Pierret, Semiconductor Device Fundamentals, pp. 235-259; outline: 1) review, 2) ideal diode equation, long base, 3) ideal diode equation, short base). The ideal diode model is one dimensional: the diode itself is three dimensional, but the n-type and p-type regions are assumed to be infinite sheets, so the properties change in only one dimension, which greatly simplifies the equations; for simplicity we assume a one-dimensional derivation, but the concepts can be extended to two- and three-dimensional notation and devices. The objective of this section is to take these concepts and mathematically derive the current-voltage characteristics seen externally; the basic steps are to solve for the carrier concentrations and currents in the quasi-neutral regions, and at the end of the section there are worked examples. Generally, it is very useful to connect intuition with a quantitative treatment.
The Ideal Diode Law is expressed as

$$I = I_{0}\left(e^{\frac{qV}{kT}} - 1\right),$$

where $I$ is the net current flowing through the diode; $I_0$ is the "dark saturation current", the diode leakage current density in the absence of light; $V$ is the applied voltage across the terminals of the diode; $q$ is the absolute value of the electron charge; $k$ is Boltzmann's constant; and $T$ is the absolute temperature (K). At 300 K, $kT/q = 25.85$ mV, the "thermal voltage". This expression only includes the ideal diode current, thereby ignoring recombination in the depletion region. In practice there are second-order effects, so that the diode does not follow the simple diode equation, and the ideality factor provides a way of describing them: for actual diodes the expression becomes

$$I = I_{0}\left(e^{\frac{qV}{nkT}} - 1\right),$$

where $n$ is the ideality factor, a number between 1 and 2 which typically increases as the current decreases. The "dark saturation current" $I_0$ is an extremely important parameter which differentiates one diode from another: it is a measure of the recombination in a device, and a diode with larger recombination will have a larger $I_0$. Changing the saturation current changes the turn-on voltage of the diode, while the ideality factor changes the shape of the curve. Plotting the diode equation suggests that increasing the ideality factor would increase the turn-on voltage; in reality this is not the case, since any physical effect that increases the ideality factor also substantially increases the dark saturation current $I_0$, so that a device with a high ideality factor would typically have a lower turn-on voltage. Similarly, mechanisms that change the ideality factor also impact the saturation current, so although a simulation may imply that the input parameters are independent, they are not. In real devices the saturation current is strongly dependent on the device temperature; the diode law for silicon shows the current changing with both voltage and temperature, and for a given current the curve shifts by approximately 2 mV/°C.
In the light, the photocurrent can be thought of as a constant current source which is added to the i-V characteristic of the diode. In the ideal case the current density $J$ is described by the Shockley diode equation [24]

$$J(V) = J_{sc} - J_{0}\left[\exp\left(\frac{eV}{kT}\right) - 1\right], \qquad (1)$$

where $V$ is the applied bias voltage (in the forward direction). Photocurrent in p-n junction solar cells flows in the diode reverse-bias direction, so the I-V equation of the solar cell is the diode equation minus the photocurrent; the short-circuit current $I_{sc}$ is the current at zero voltage, which equals $I_{sc} = -I_{ph}$. (A band diagram of a solar cell corresponding to very low current, very low voltage, and therefore very low illumination is shown in Figure 4.9, and the I-V curve of a PV cell in Figure 6.) The operation of actual solar cells is typically treated as a modification of this basic ideal diode equation, and the analysis from I-V characterization is performed with or without illumination. One of the most used solar cell models is the one-diode model, also known as the five-parameter model: it combines a photo-generated controlled current source $I_{PH}$, a diode described by the single-exponential Shockley equation [45], and a shunt resistance $R_{sh}$ and a series resistance $R_s$ modeling the power losses, giving

$$I = I_{L} - I_{0}\left[\exp\left(\frac{V + IR_{s}}{n N_{s} V_{th}}\right) - 1\right] - \frac{V + IR_{s}}{R_{sh}}.$$

The Lambert W-function, the inverse of the function $f(w) = w e^{w}$, can be used to rewrite this implicit equation in explicit form. In another common sign convention, the current through the solar cell is obtained from

$$I = I_{s}\left(e^{V_{a}/V_{t}} - 1\right) - I_{ph}, \qquad (4.8.1)$$

where $I_s$ is the saturation current of the diode and $I_{ph}$ is the photocurrent (which is assumed to be independent of the applied voltage $V_a$). From this equation it can be seen that the PV cell current is a function of itself, forming an algebraic loop, which can be solved conveniently in Simulink, or with a Newton-Raphson iterative technique programmed in a MATLAB script file. (In software, the solcore package exposes solcore.analytic_solar_cells.diode_equation.calculate_J02_from_Voc(J01, Jsc, Voc, T, R_shunt=1e15), which calculates $J_{02}$ from $J_{01}$, $J_{sc}$ and $V_{oc}$.) The usually taught theory of solar cells assumes an electrically homogeneous cell; a method to determine the optical diode ideality factor from PL measurements and compare it to electrical measurements in finished solar cells is also discussed — both parameters are immediate ingredients of the efficiency of a solar cell and can be determined from PL measurements, which allow fast feedback. Finally, the characteristics of industrial silicon solar cells will be reviewed and discussed.
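As a concrete illustration of solving the implicit single-diode equation numerically, here is a minimal sketch assuming NumPy and SciPy; every parameter value below is hypothetical, chosen only to produce a plausible curve, and the bracketing interval is specific to these values:

    import numpy as np
    from scipy.optimize import brentq

    # hypothetical single-cell parameters (N_s = 1); none are from the text above
    q, k = 1.602e-19, 1.381e-23
    T = 300.0
    Vth = k * T / q              # thermal voltage, about 25.85 mV at 300 K
    IL, I0 = 3.0, 1e-9           # photocurrent (A) and dark saturation current (A)
    n, Rs, Rsh = 1.3, 0.02, 50.0 # ideality factor, series and shunt resistance

    def cell_current(V):
        # solve the implicit equation f(I) = 0; f is strictly decreasing in I
        f = lambda I: (IL - I0 * (np.exp((V + I * Rs) / (n * Vth)) - 1)
                          - (V + I * Rs) / Rsh - I)
        return brentq(f, -10.0, 10.0)   # bracket chosen for these parameters

    for V in (0.0, 0.3, 0.5, 0.6):
        print(V, cell_current(V))       # traces the illuminated I-V curve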
| CommonCrawl |
\begin{document}
\title{ Reverse lexicographic squarefree initial ideals and Gorenstein Fano polytopes } \author{Hidefumi Ohsugi and Takayuki Hibi }
\subjclass[2010]{Primary 13P10; Secondary 52B20.} \keywords{Gr\"obner basis, Gorenstein Fano polytope, unimodular triangulation.}
\address{Hidefumi Ohsugi, Department of Mathematical Sciences, School of Science and Technology, Kwansei Gakuin University, Sanda, Hyogo, 669-1337, Japan} \email{[email protected]}
\address{Takayuki Hibi, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan} \email{[email protected]}
\maketitle
\begin{abstract} Via the theory of reverse lexicographic squarefree initial ideals of toric ideals, we give a new class of Gorenstein Fano polytopes (reflexive polytopes) arising from a pair of stable set polytopes of perfect graphs. \end{abstract}
\section*{Introduction} Recall that an {\em integral} convex polytope is a convex polytope all of whose vertices have integer coordinates. An integral convex polytope ${\mathcal P} \subset {\NZQ R}^{d}$ of dimension $d$ is called a {\em Fano polytope} if the origin of ${\NZQ R}^{d}$ is a unique integer point belonging to the interior of ${\mathcal P}$. We say that a Fano polytope ${\mathcal P} \subset {\NZQ R}^{d}$ is {\em Gorenstein} if the dual polytope ${\mathcal P}^{\vee}$ of ${\mathcal P}$ is again integral. (A Gorenstein Fano polytope is often called a {\em reflexive} polytope in the literature.) Gorenstein Fano polytopes are related with mirror symmetry and studied in a lot of areas of mathematics. See, e.g., \cite[\S 8.3]{CLS} and \cite{survey}. It is known that there are only finitely many Gorenstein Fano polytopes up to unimodular equivalence if the dimension is fixed. Classification results are known for low dimensional cases \cite{three, four}. On the other hand, one of the most important problem is to construct new classes of Gorenstein Fano polytopes. In the case of Gorenstein Fano {\em simplices}, there are nice results on classifications and constructions. See, e.g., \cite{r1, r2, r3} and their references. In order to find classes of Gorenstein Fano polytopes of high dimension which are not necessarily simplices, integral convex polytopes arising from some combinatorial objects are studied in several papers. For example, the following classes are known: \begin{itemize} \item Gorenstein Fano polytopes arising from the order polytopes of graded posets (Hibi \cite{Hibiorder}, revisited by Heged\"us--Kasprzyk \cite[Lemma 5.10]{HK}); \item Gorenstein Fano polytopes arising from the Birkhoff polytopes (appearing in many papers. See, e.g., Stanley's book \cite[I.13]{Sbook} and Athanasiadis \cite{ata}); \item Gorenstein Fano polytopes arising from directed graphs satisfying some conditions (Higashitani \cite{Higashi}); \item Centrally symmetric configurations (Ohsugi--Hibi \cite{CS}); \item The centrally symmetric polytope ${\mathcal O}(P)^{\pm}$ of the order polytope ${\mathcal O}(P)$ of a finite poset $P$ (Hibi--Matsuda--Ohsugi--Shibata \cite{HMOS}). \end{itemize} In the present paper, via the theory of Gr\"obner bases, we give a new class of Gorenstein Fano polytopes which is not necessarily a simplex. For any pair of perfect graphs $G_1$ and $G_2$ (here, $G_1 = G_2$ is possible) on $d$ vertices, we show that the convex hull of ${\mathcal Q}_{G_1} \cup - {\mathcal Q}_{G_2}$, where ${\mathcal Q}_{G_i}$ is the stable set polytope of $G_i$, is a Gorenstein Fano polytope of dimension $d$.
Note that there are a lot of pairs of perfect simple graphs on $d$ vertices\footnote{ See A052431 in ``The On-Line Encyclopedia of Integer Sequences,'' at {\tt http://oeis.org} } (Figure 1).
\begin{figure}
\caption{ Number of perfect graphs / pairs of perfect graphs }
\end{figure}
Any Gorenstein Fano polytope ${\mathcal P}$
in our class is {\em terminal}, i.e., each integer point belonging to the boundary of ${\mathcal P}$ is a vertex of ${\mathcal P}$. In particular, if both of two graphs are the complete (resp.~empty) graphs on $d$ vertices, then the Gorenstein Fano polytope has $2d$ (resp.~$2^{d+1}-2$) vertices. Thus, our class has enough size and variety comparing with the existing classes above.
Let ${\NZQ Z}_{\geq 0}$ denote the set of nonnegative integers. Let $A = [{\bf a}_{1}, \ldots, {\bf a}_{n}] \in {\NZQ Z}_{\geq 0}^{d \times n}$ and $B = [{\bf b}_{1}, \ldots, {\bf b}_{m}] \in {\NZQ Z}_{\geq 0}^{d \times m}$, where each ${\bf a}_{i}$ and each ${\bf b}_{j}$ is a nonzero column vector belonging to ${\NZQ Z}_{\geq 0}^{d}$. In Section $1$, after reviewing basic materials and notation on toric ideals, we introduce the concept that $A$ and $B$ are {\em of harmony}. Roughly speaking, Theorem \ref{squarefree} says that if $A$ and $B$ are of harmony and if the toric ideal of each of $A$ and $B$ possesses a reverse lexicographic squarefree initial ideal which enjoys certain properties, then the toric ideal of $[{\bf 0}, -B, A] \in {\NZQ Z}^{d \times (n + m+1)}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$. Working with the same situation as in Theorem \ref{squarefree}, Corollary \ref{Fano} guarantees that if the integral convex polytope ${\mathcal P} \subset {\NZQ R}^{d}$ which is the convex hull of $\{ - {\bf b}_{1}, \ldots, - {\bf b}_{m}, {\bf a}_{1}, \ldots, {\bf a}_{n} \}$ is a Fano polytope with ${\mathcal P} \cap {\NZQ Z}^{d} = \{ {\bf 0}, - {\bf b}_{1}, \ldots, - {\bf b}_{m}, {\bf a}_{1}, \ldots, {\bf a}_{n} \}$ and if there is a $d \times d$ minor $A'$ of $[-B, A]$ with $\det(A') = \pm1$, then ${\mathcal P}$ is Gorenstein.
The topic of Section $2$ is the incidence matrix $A_{\Delta}$ of a simplicial complex $\Delta$ on $[d] = \{ 1, \ldots, d \}$. It follows that if $\Delta$ and $\Delta'$ are simplicial complexes on $[d]$, then $A_{\Delta}$ and $A_{\Delta'}$ are of harmony. Following Theorem \ref{squarefree}, it is reasonable to study the problem of when the toric ideal of $A_{\Delta}$ satisfies the required condition on initial ideals of Theorem \ref{squarefree}. Somewhat surprisingly, Theorem \ref{compressed} says that $A_{\Delta}$ satisfies the required condition on initial ideals of Theorem \ref{squarefree} if and only if $\Delta$ coincides with the set $S(G)$ of stable sets of a perfect graph $G$ on $[d]$. A related topic on Gorenstein Fano polytopes arising from simplicial complexes will be studied (Theorem \ref{GORFANO}).
\section{Reverse lexicographic squarefree initial ideals} Let $K$ be a field and $K[{\bf t}, {\bf t}^{-1}, s] = K[t_{1}, t_{1}^{-1}, \ldots, t_{d}, t_{d}^{-1}, s]$ the Laurent polynomial ring in $d + 1$ variables over $K$. Given an integer $d \times n$ matrix $A = [{\bf a}_{1}, \ldots, {\bf a}_{n}]$, where ${\bf a}_{j} = [a_{1j}, \ldots, a_{dj}]^{\top}$, the transpose of $[a_{1j}, \ldots, a_{dj}]$, is the $j$th column of $A$, the {\em toric ring} of $A$ is the subalgebra $K[A]$ of $K[{\bf t}, {\bf t}^{-1}, s]$ which is generated by the Laurent polynomials ${\bf t}^{{\bf a}_{1}}s = t_{1}^{a_{11}} \cdots t_{d}^{a_{d1}}s, \ldots, {\bf t}^{{\bf a}_{n}}s = t_{1}^{a_{1n}} \cdots t_{d}^{a_{dn}}s$. Let $K[{\bf x}] = K[x_{1}, \ldots, x_{n}]$ denote the polynomial ring in $n$ variables over $K$ and define the surjective ring homomorphism $\pi : K[{\bf x}] \to K[A]$ by setting $\pi(x_{j}) = {\bf t}^{{\bf a}_{j}}s$ for $j = 1, \ldots, n$. The {\em toric ideal} of $A$ is the kernel $I_{A}$ of $\pi$. Every toric ideal is generated by binomials. (Recall that a polynomial $f \in K[{\bf x}]$ is a binomial if $f = u - v$,
where $u = \prod_{i=1}^{n} x_{i}^{a_{i}}$ and $v = \prod_{i=1}^{n} x_{i}^{b_{i}}$ are monomials with $\sum_{i=1}^{n} a_{i} = \sum_{i=1}^{n} b_{i}$.) Let $<$ be a monomial order on $K[{\bf x}]$ and ${\rm in}_{<}(I_{A})$ the initial ideal of $I_{A}$ with respect to $<$. We say that ${\rm in}_{<}(I_{A})$ is {\em squarefree} if ${\rm in}_{<}(I_{A})$ is generated by squarefree monomials. We refer the reader to \cite[Chapters 1 and 5]{dojoEN} for the information about Gr\"obner bases and toric ideals.
Let ${\NZQ Z}_{\geq 0}^{d}$ denote the set of integer column vectors $[a_{1}, \ldots, a_{d}]^{\top}$ with each $a_{i} \geq 0$. Given an integer vector ${\bf a} = [a_{1}, \ldots, a_{d}]^{\top} \in {\NZQ Z}^{d}$, let ${\bf a}^{(+)} = [a_{1}^{(+)}, \ldots, a_{d}^{(+)}]^{\top}, {\bf a}^{(-)} = [a_{1}^{(-)}, \ldots, a_{d}^{(-)}]^{\top} \in {\NZQ Z}_{\geq 0}^{d}$ where $a_{i}^{(+)} = \max \{0, a_{i} \}$ and $a_{i}^{(-)} = \max\{0, -a_{i}\}$. Note that ${\bf a} = {\bf a}^{(+)} - {\bf a}^{(-)}$ holds in general.
Let ${\NZQ Z}_{\geq 0}^{d \times n}$ denote the set of $d \times n$ integer matrices $(a_{ij})_{1 \leq i \leq d \atop 1 \leq j \leq n}$ with each $a_{ij} \geq 0$. Furthermore, if no column of $A \in {\NZQ Z}_{\geq 0}^{d \times n}$ is the zero vector ${\bf 0} = [0, \ldots, 0]^{\top} \in {\NZQ Z}^{d}$, then we introduce the $d \times (n + 1)$ integer matrix $A^{\sharp}$ which is obtained by adding the column ${\bf 0} \in {\NZQ Z}^{d}$ to $A$.
Now, given $A \in {\NZQ Z}_{\geq 0}^{d \times n}$ and $B \in {\NZQ Z}_{\geq 0}^{d \times m}$, we say that $A$ and $B$ are {\em of harmony} if the following condition is satisfied: Let ${\bf a}$ be a column of $A^{\sharp}$ and ${\bf b}$ that of $B^{\sharp}$, and write ${\bf c} = {\bf a} - {\bf b} \in {\NZQ Z}^{d}$ as ${\bf c} = {\bf c}^{(+)} - {\bf c}^{(-)}$. Then ${\bf c}^{(+)}$ is a column vector of $A^{\sharp}$ and ${\bf c}^{(-)}$ is a column vector of $B^{\sharp}$.
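To illustrate the definition (a routine example, not needed in what follows), let $d = 2$, $A = [{\bf e}_{1}, {\bf e}_{2}, {\bf e}_{1} + {\bf e}_{2}]$ and $B = [{\bf e}_{1}, {\bf e}_{2}]$, where ${\bf e}_{1}, {\bf e}_{2}$ are the standard unit vectors of ${\NZQ Z}^{2}$. For the columns ${\bf a} = {\bf e}_{1}$ of $A^{\sharp}$ and ${\bf b} = {\bf e}_{2}$ of $B^{\sharp}$ one has \[ {\bf c} = {\bf a} - {\bf b} = [1, -1]^{\top}, \, \, \, \, \, {\bf c}^{(+)} = {\bf e}_{1}, \, \, \, \, \, {\bf c}^{(-)} = {\bf e}_{2}, \] and ${\bf c}^{(+)}$ is a column of $A^{\sharp}$ while ${\bf c}^{(-)}$ is a column of $B^{\sharp}$. Checking the remaining pairs of columns of $A^{\sharp}$ and $B^{\sharp}$ in the same way shows that $A$ and $B$ are of harmony.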
\begin{Theorem} \label{squarefree} Let $A = [{\bf a}_{1}, \ldots, {\bf a}_{n}] \in {\NZQ Z}_{\geq 0}^{d \times n}$ and $B = [{\bf b}_{1}, \ldots, {\bf b}_{m}]\in {\NZQ Z}_{\geq 0}^{d \times m}$, where none of ${\bf a}_{i}$'s and ${\bf b}_{j}$'s is ${\bf 0} \in {\NZQ Z}^{d}$, be of harmony. Let $K[z, {\bf x}] = K[z, x_{1}, \ldots, x_{n}]$ and $K[z, {\bf y}] = K[z, y_{1}, \ldots, y_{m}]$ be the polynomial rings over a field $K$. Suppose that
${\rm in}_{<_A}(I_{A^{\sharp}}) \subset K[z, {\bf x}]$ and ${\rm in}_{<_B}(I_{B^{\sharp}}) \subset K[z, {\bf y}]$ are squarefree with respect to reverse lexicographic orders $<_A$ on $K[z, {\bf x}]$ and $<_B$ on $K[z, {\bf y}]$ respectively satisfying the conditions that \begin{itemize} \item $x_i <_A x_j$ \,if\, $\pi(x_i)$ \,divides\, $\pi(x_j)$; \item $z <_A x_k$ for $1 \leq k \leq n$, where $z$ corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A^{\sharp}$; \item $z <_B y_k$ for $1 \leq k \leq m$, where $z$ corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $B^{\sharp}$. \end{itemize} Let $[-B, A]$ denote the $d \times (n + m)$ integer matrix \[ [- {\bf b}_{1}, \ldots, - {\bf b}_{m}, {\bf a}_{1}, \ldots, {\bf a}_{n}]. \] Then the toric ideal $I_{[-B, A]^{\sharp}}$ of $[-B, A]^{\sharp}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $[-B, A]^{\sharp}$. \end{Theorem}
\begin{proof}
Let $K[[-B, A]^{\sharp}] \subset K[{\bf t}, {\bf t}^{-1}, s] = K[t_{1}, t_{1}^{-1}, \ldots, t_{d}, t_{d}^{-1}, s]$ be the toric ring of $[-B, A]^{\sharp}$ and $I_{[-B, A]^{\sharp}} \subset K[{\bold x}, {\bold y}, z] = K[x_1, \ldots, x_n, y_1, \ldots, y_m, z]$ the toric ideal of $[-B, A]^{\sharp}$. Recall that $I_{[-B, A]^{\sharp}}$ is the kernel of
$\pi : K[{\bold x}, {\bold y}, z] \to K[[-B, A]^{\sharp}]$ with
$\pi(z) = s$, $\pi(x_i) = {\bf t}^{{\bf a}_{i}}s$ for $i = 1, \ldots, n$ and $\pi(y_j) = {\bf t}^{ - {\bf b}_{j}}s$ for $j = 1, \ldots, m$.
Suppose that the reverse lexicographic orders $<_A$ and $<_B$ are induced by the orderings $z <_A x_n <_A \cdots <_A x_1$ and $z <_B y_m <_B \cdots <_B y_1$. Let $<_{\rm rev}$ be the reverse lexicographic order on $K[{\bold x}, {\bold y}, z]$ induced by the ordering \[ z < x_n < \cdots < x_1 < y_m < \cdots < y_1. \]
In general, if ${\bold a} = [a_{1}, \ldots, a_{d}]^{\top} \in {\NZQ Z}_{\geq 0}^{d}$, then ${\rm supp}({\bold a})$ is the set of those $1 \leq i \leq d$ with $a_{i} \neq 0$. Now, we introduce the following \[ {\mathcal E} = \{ \, (i,j) \, : \, 1 \leq i \leq n, \, 1 \leq j \leq m, \, {\rm supp} ({\bold a}_i) \cap {\rm supp} ({\bold b}_j) \neq \emptyset \, \}. \] Let ${\bf c} = {\bold a}_i - {\bold b}_j$ with $(i,j) \in {\mathcal E}$. Then ${\bf c}^{(+)} \neq {\bold a}_i$ and ${\bf c}^{(-)} \neq {\bold b}_j$. The hypothesis that $A$ and $B$ are of harmony guarantees that ${\bf c}^{(+)}$ is a column of $A^{\sharp}$ and ${\bf c}^{(-)}$ is a column of $B^{\sharp}$. It follows that $f = x_i y_j -u$ ($\neq 0$) belongs to $I_{[-B, A]^{\sharp}}$, where \[ u=\left\{ \begin{array}{cl} x_k y_{\ell} & \mbox{if } {\bf c}^{(+)} = {\bold a}_k \mbox{ and } {\bf c}^{(-)} = {\bold b}_\ell, \\ z y_\ell & \mbox{if } {\bf c}^{(+)} = {\bf 0} \mbox{ and } {\bf c}^{(-)} = {\bold b}_\ell, \\ x_k z& \mbox{if } {\bf c}^{(+)} = {\bold a}_k \mbox{ and } {\bf c}^{(-)} = {\bf 0}, \\ z^2 & \mbox{if } {\bf c}^{(+)} = {\bf c}^{(-)} = {\bf 0}. \end{array} \right. \] If $z$ divides $u$, then ${\rm in}_{<_{\rm rev}}(f) = x_iy_j$, where ${\rm in}_{<_{\rm rev}}(f)$ is the initial monomial of $f \in K[{\bf x}, {\bf y}, z]$. If $z$ cannot divide $u$, then, since $\pi(x_k)$ divides $\pi(x_i)$, one has $x_k <_A x_i$ and ${\rm in}_{<_{\rm rev}}(f) = x_iy_j$. Hence \[ \{ \, x_i y_j \, : \, (i,j) \in {\mathcal E} \, \} \subset {\rm in}_{<_{\rm rev}}(I_{[-B, A]^{\sharp}}). \]
Now, let ${\mathcal M}_{A}$ (resp. ${\mathcal M}_B$) be the minimal set of squarefree monomial generators of ${\rm in}_{<_A} (I_{A^\sharp})$ (resp. ${\rm in}_{<_B} (I_{B^\sharp})$). Suppose that ${\rm in}_{<_{\rm rev}} ( I_{[-B, A]^{\sharp}})$ cannot be generated by the set of squarefree monomials \[ {\mathcal M}= \{ \, x_i y_j \, : \, (i,j) \in {\mathcal E} \, \} \cup {\mathcal M}_A \cup {\mathcal M}_B \, \, \, ( \, \subset {\rm in}_{<_{\rm rev}}( I_{[-B, A]^{\sharp}}) \, ). \] The following fact (\cite[(0.1), p.~1914]{rootsystem}) on Gr\"obner bases is known:
A finite set ${\mathcal G}$ of binomials of $I_A$ is a Gr\"obner basis of $I_A$ with respect to $<$ if and only if $\pi(u) \neq \pi(v)$ for any monomials $u \notin ( {\rm in}_<(g) : g \in {\mathcal G})$ and $v \notin ( {\rm in}_<(g) : g \in {\mathcal G})$ with $u \neq v$.
\noindent Since ${\mathcal G}$ with ${\mathcal M} = \{ {\rm in}_<(f) : f \in {\mathcal G}\}$ is not a Gr\"obner basis, it follows that there exists a nonzero irreducible binomial $g = u - v$ belonging to $I_{[-B, A]^{\sharp}}$ such that each of $u$ and $v$ can be divided by none of the monomials belonging to ${\mathcal M}$. Write \[ u = \left(\, \prod_{p \in P} x_{p}^{i_p} \right) \left(\, \prod_{q \in Q} y_q^{j_q} \right), \, \, \, \, \, \, v = z^{\alpha} \left(\, \prod_{r \in R} x_{r}^{k_r} \right) \left(\, \prod_{s \in S} y_s^{\ell_s} \right), \] where $P$ and $R$ are subsets of $\{ 1, \ldots, n \}$, where $Q$ and $S$ are subsets of $\{ 1, \ldots, m \}$, where $\alpha$ is a nonnegative integer, and where each of $i_p, j_q, k_r, \ell_s$ is a positive integer. Since $g = u -v$ is irreducible, one has $P\cap R = Q \cap S = \emptyset$. Furthermore, the fact that each of $x_i y_j$ with $(i,j) \in {\mathcal E}$ can divide neither $u$ nor $v$ guarantees that \[ \left(\, \bigcup_{p \in P} {\rm supp} ({\bold a}_p) \right) \cap \left(\, \bigcup_{q \in Q} {\rm supp} ({\bold b}_q) \right) = \left(\, \bigcup_{r \in R} {\rm supp} ({\bold a}_r) \right) \cap \left(\, \bigcup_{s \in S} {\rm supp} ({\bold b}_s) \right) = \emptyset. \] Since $\pi (u) = \pi (v)$, it follows that \[ \sum_{p \in P} i_p {\bold a}_p = \sum_{r \in R} k_r {\bold a}_r, \, \, \, \, \, \sum_{q \in Q} j_q {\bold b}_q = \sum_{s \in S} \ell_s {\bold b}_s. \] Let $\gamma_P = \sum_{p \in P} i_p$, $\gamma_Q = \sum_{q \in Q} j_q$, $\gamma_R = \sum_{r \in R} k_r$, and $\gamma_S = \sum_{s \in S} \ell_s$. Then \[ \gamma_P + \gamma_Q = \alpha + \gamma_R + \gamma_S. \] Since $\alpha \geq 0$, it follows that either $\gamma_P \geq \gamma_R$ or $\gamma_Q \geq \gamma_S$. Let, say, $\gamma_P > \gamma_R$, then \[ h = \prod_{p \in P} x_{p}^{i_p} - z^{\gamma_P - \gamma_R} \left(\, \prod_{r \in R} x_{r}^{k_r} \right) \neq 0 \] belongs to $I_{[-B, A]^{\sharp}}$ and ${\rm in}_{<_{\rm rev}}(h) = \prod_{p \in P} x_{p}^{i_p}$ divides $u$, a contradiction. Hence $\gamma_P = \gamma_R$. Then the binomial \begin{eqnarray*} \label{binomial} h_{0} = \prod_{p \in P} x_{p}^{i_p} - \prod_{r \in R} x_{r}^{k_r} \end{eqnarray*} belongs to $I_{[-B, A]^{\sharp}}$. If $h_{0} \neq 0$, then either $\prod_{p \in P} x_{p}^{i_p}$ or $\prod_{r \in R} x_{r}^{k_r}$ must belong to ${\rm in}_{<_{\rm rev}}(I_{[-B, A]^{\sharp}})$. This contradicts the fact that each of $u$ and $v$ can be divided by none of the monomials belonging to ${\mathcal M}$. Hence $h_{0} = 0$ and $P = R = \emptyset$. Similarly, $Q = S = \emptyset$. Hence $\alpha = 0$ and $g = 0$. This contradiction guarantees that ${\mathcal M}$ is the minimal set of squarefree monomial generators of ${\rm in}_{<_{\rm rev}} (I_{[-B, A]^{\sharp}})$, as desired. \end{proof}
Given an integral convex polytope ${\mathcal P} \subset {\NZQ R}^{d}$, we write $A_{{\mathcal P}}$ for the integer matrix whose column vectors are those ${\bf a} \in {\NZQ Z}^{d}$ belonging to ${\mathcal P}$. The toric ring $K[A_{{\mathcal P}}]$ is often called the toric ring of ${\mathcal P}$. A triangulation $\Delta$ of ${\mathcal P}$ using the vertices belonging to ${\mathcal P} \cap {\NZQ Z}^{d}$ is {\em unimodular} if the normalized volume (\cite[p.~253]{dojoEN}) of each facet of $\Delta$ is equal to $1$ and is {\em flag} if every minimal nonface of $\Delta$ is an edge. It follows from \cite[Chapter 8]{Sturmfels} that if the toric ideal $I_{A_{{\mathcal P}}}$ of $A_{{\mathcal P}}$ possesses a squarefree initial ideal, then ${\mathcal P}$ possesses a unimodular triangulation. Furthermore, if $I_{A_{{\mathcal P}}}$ possesses an initial ideal generated by quadratic squarefree monomials, then ${\mathcal P}$ possesses a unimodular triangulation which is flag.
An integral convex polytope ${\mathcal P} \subset {\NZQ R}^{d}$ of dimension $d$ is called {\em Fano} if the origin of ${\NZQ R}^{d}$ is a unique integer point belonging to the interior of ${\mathcal P}$. We say that a Fano polytope ${\mathcal P}$ is {\em Gorenstein} if the dual polytope ${\mathcal P}^{\vee}$ of ${\mathcal P}$ is again integral (\cite{Batyrev}, \cite{Hibidual}). A {\em smooth} Fano polytope is a simplicial Fano polytope ${\mathcal P} \subset {\NZQ R}^{d}$ for which the $d$ vertices of each facet of ${\mathcal P}$ form a ${\NZQ Z}$-basis of ${\NZQ Z}^{d}$.
\begin{Lemma} \label{basiclemma} Let ${\mathcal P} \subset {\NZQ R}^{d}$ be an integral convex polytope of dimension $d$ for which ${\bf 0} \in {\NZQ Z}^{d}$ belongs to ${\mathcal P}$. Suppose that there is a $d \times d$ minor $A'$ of $A_{{\mathcal P}}$ with $\det(A') = \pm1$ and that $I_{A_{{\mathcal P}}}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_{{\mathcal P}}$. Then, for each facet ${\mathcal F}$ of ${\mathcal P}$ with ${\bf 0} \not\in {\mathcal F}$, one has ${\NZQ Z} {\mathcal F} ={\NZQ Z}^d$, where \[ {\NZQ Z} {\mathcal F}= \sum_{ {\bold a} \, \in \, {\mathcal F} \, \cap \, {\NZQ Z}^{d} }{\NZQ Z} {\bold a}, \] and the equation of the supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with ${\mathcal F} \subset {\mathcal H}$ is of the form \[ a_{1}z_{1} + \cdots + a_{d} z_{d} = 1 \] with each $a_{j} \in {\NZQ Z}$.
In particular if ${\mathcal P}$ is a Fano polytope, then ${\mathcal P}$ is Gorenstein. Furthermore if ${\mathcal P}$ is a simplicial Fano polytope, then ${\mathcal P}$ is a smooth Fano polytope. \end{Lemma}
\begin{proof} Let $\Delta$ be the {\em pulling triangulation} (\cite[p.~268]{dojoEN}) coming from a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_{{\mathcal P}}$. A crucial fact is that the origin of ${\NZQ R}^{d}$ belongs to each facet of $\Delta$. Let $F$ be a facet of $\Delta$ with the vertices ${\bf 0}, {\bf b}_{1}, \ldots, {\bf b}_{d}$ for which $\{ {\bf b}_{1}, \ldots, {\bf b}_{d} \} \subset {\mathcal F}$. The existence of a $d \times d$ minor $A'$ of $A_{{\mathcal P}}$ with
$\det(A') = \pm1$ guarantees that the normalized volume of $F$ coincides with $|\det(B)|$, where $B = [{\bf b}_{1}, \ldots, {\bf b}_{d}]$. Since $\Delta$ is unimodular, one has $\det(B) = \pm1$. Hence $\{ {\bf b}_{1}, \ldots, {\bf b}_{d} \}$ is a ${\NZQ Z}$-basis of ${\NZQ Z}^{d}$ and ${\NZQ Z} {\mathcal F} ={\NZQ Z}^d$ follows. Moreover the hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with each ${\bf b}_{j} \in {\mathcal H}$ is of the form $a_{1}z_{1} + \cdots + a_{d} z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$, as desired. \end{proof}
\begin{Corollary} \label{Fano} Work with the same situation as in Theorem \ref{squarefree}. Let ${\mathcal P} \subset {\NZQ R}^{d}$ be the integral convex polytope which is the convex hull of $\{ - {\bf b}_{1}, \ldots, - {\bf b}_{m}, {\bf a}_{1}, \ldots, {\bf a}_{n} \}$. Suppose that ${\bf 0} \in {\NZQ Z}^{d}$ belongs to the interior of ${\mathcal P}$
and that there is a $d \times d$ minor $A'$ of $A_{{\mathcal P}}$ with $\det(A') = \pm1$. Then ${\mathcal P}$ is a Gorenstein Fano polytope. Furthermore if ${\mathcal P}$ is a simplicial polytope, then ${\mathcal P}$ is a smooth Fano polytope. \end{Corollary}
\begin{Example} {\rm Let $A_1$ and $A_2$ be the following matrices: $$ A_1 = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix},\ \ A_2= \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{bmatrix}. $$ Then, $A_i$ and $A_j$ are of harmony and satisfy the condition in Theorem \ref{squarefree} for any $1 \leq i \le j \leq 2$. By Corollary \ref{Fano}, we have three Gorenstein Fano polygons. It is known that there are exactly 16 Gorenstein Fano polygons (\cite[p.382]{CLS}). } \end{Example}
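To make the first of these Gorenstein Fano polygons explicit (a routine verification, included only as an illustration), take the pair $(A_1, A_1)$. Then ${\mathcal P} \subset {\NZQ R}^{2}$ is the convex hull of $\{ \pm {\bf e}_{1}, \pm {\bf e}_{2} \}$ and its four facets lie on the hyperplanes \[ \pm z_{1} \pm z_{2} = 1, \] so that the dual polytope \[ {\mathcal P}^{\vee} = {\rm conv}\{ (\pm 1, \pm 1) \} = [-1, 1]^{2} \] is integral. Hence ${\mathcal P}$ is Gorenstein, in accordance with Corollary \ref{Fano}.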
\section{Convex polytopes arising from simplicial complexes} Let $[d] = \{1, \ldots, d \}$ and ${\bf e}_1, \ldots, {\bold e}_{d}$ the standard coordinate unit vectors of ${{\NZQ R}}^d$. Given a subset $W \subset [d]$, let $\rho(W) = \sum_{j \in W} {{\bold e}}_j \in {{\NZQ R}}^d$. In particular $\rho(\emptyset)$ is the origin of ${\NZQ R}^d$. Let $\Delta$ be a simplicial complex on the vertex set $[d]$. Thus $\Delta$ is a collection of subsets of $[d]$ with $\{ i \} \in \Delta$ for each $i \in [d]$ such that if $F \in \Delta$ and $F' \subset F$, then $F' \in \Delta$. In particular $\emptyset \in \Delta$. The {\em incidence matrix} $A_\Delta$ of $\Delta$ is the matrix whose columns are those $\rho(F)$ with $F \in \Delta$. We write ${\mathcal P}_\Delta \subset {\NZQ R}^{d}$ for the $(0, 1)$-polytope which is the convex hull of $\{ \, \rho(F) \; : \; F \in \Delta \, \}$ in ${\NZQ R}^d$. One has $\dim {\mathcal P}_\Delta = d$. It follows from the definition of simplicial complexes that
\begin{Lemma} \label{obvious} Let $\Delta$ and $\Delta'$ be simplicial complexes on $[d]$. Then $A_\Delta$ and $A_{\Delta'}$ are of harmony. \end{Lemma}
Following Lemma \ref{obvious} together with Theorem \ref{squarefree}, it is reasonable to study the problem of when the toric ideal $I_{A_\Delta}$ of a simplicial complex $\Delta$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$.
Let $\Delta$ be a simplicial complex on $[d]$.
Since $\{ i \} \in \Delta$ for each $i \in [d]$,
the $d \times d$ identity matrix is a $d \times d$ minor of $A_\Delta$. It then follows from Lemma \ref{basiclemma} that
\begin{Corollary} \label{COR} Let $\Delta$ be a simplicial complex on $[d]$. Suppose that $I_{A_{\Delta}}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_{\Delta}$. Then, for each facet ${\mathcal F}$ of ${\mathcal P}_{\Delta}$ with ${\bf 0} \not\in {\mathcal F}$, one has ${\NZQ Z} {\mathcal F} ={\NZQ Z}^d$ and the equation of the supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with ${\mathcal F} \subset {\mathcal H}$ is of the form $a_{1}z_{1} + \cdots + a_{d} z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$. \end{Corollary}
Let $G$ be a finite simple graph on $[d]$ and $E(G)$ the set of edges of $G$. (Recall that a finite graph $G$ is simple if $G$ possesses no loops and no multiple edges.) A subset $W \subset [d]$ is called {\em stable} if, for all $i$ and $j$ belonging to $W$ with $i \neq j$, one has $\{i,j\} \not\in E(G)$. Let $S(G)$ denote the set of stable sets of $G$. One has $\emptyset \in S(G)$ and $\{ i \} \in S(G)$ for each $i \in [d]$. Clearly $S(G)$ is a simplicial complex on $[d]$. The {\em stable set polytope} ${\mathcal Q}_G \subset {\NZQ R}^{d}$ of $G$ is the $(0, 1)$-polytope ${\mathcal P}_{S(G)} \subset {\NZQ R}^d$ arising from the simplicial complex $S(G)$. A finite simple graph $G$ is said to be {\em perfect} (\cite{sptheorem}) if, for any induced subgraph $H$ of $G$ including $G$ itself, the chromatic number of $H$ is equal to the maximal cardinality of cliques of $H$. (The chromatic number of $G$ is the smallest integer $t$ for which there exist stable sets $W_{1}, \ldots, W_{t}$ of $G$ with $[d] = W_{1} \cup \cdots \cup W_{t}$, and a clique of $G$ is a subset $W \subset [d]$ which is a stable set of the complementary graph $\overline{G}$ of $G$.) The complementary graph of a perfect graph is perfect (\cite{sptheorem}).
Recall that an integer matrix $A$ is {\em compressed} (\cite{compressed}, \cite{Sul}) if the initial ideal of the toric ideal $I_{A}$ is squarefree with respect to any reverse lexicographic order.
\begin{Example} \label{EXperfect} {\em Let $G$ be a perfect graph on $[d]$. Then $A_{\Delta}$, where $\Delta = S(G)$, is compressed (\cite[Example 1.3 (c)]{compressed}). Let $G$ and $G'$ be perfect graphs on $[d]$ and ${\mathcal Q} \subset {\NZQ R}^{d}$ be the Fano polytope which is the convex hull of ${\mathcal Q}_{G} \cup (- {\mathcal Q}_{G'})$. It then follows from Corollary \ref{Fano} together with Lemma \ref{obvious} that ${\mathcal Q}$ is Gorenstein. } \end{Example}
\begin{Lemma} \label{hole} Let $\Delta$ be one of the following simplicial complexes: \begin{enumerate} \item[\rm (i)] the simplicial complex on $[e]$ with the facets $[e] \setminus \{ i \}$, $1 \leq i \leq e$, where $e \geq 3$; \item[\rm (ii)] $S(G)$, where $G$ is an odd hole of length $2\ell + 1$, where $\ell \geq 2$; \item[\rm (iii)] $S(G)$, where $G$ is an odd antihole of length $2\ell + 1$, where $\ell \geq 2$; \end{enumerate} Let $<$ be any reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0}$ of $A_\Delta$. Then the initial ideal ${\rm in}_{<}(I_{A_{\Delta}})$ cannot be squarefree. (Recall that an odd hole is an induced odd cycle of length $\geq 5$ and an odd antihole is the complementary graph of an odd hole.) \end{Lemma}
\begin{proof} By virtue of Corollary \ref{COR}, it suffices to find a supporting hyperplane ${\mathcal H}$ of ${\mathcal P}_{\Delta}$ with ${\bf 0} \not\in {\mathcal H}$ for which ${\mathcal H} \cap {\mathcal P}_{\Delta}$ is a facet of ${\mathcal P}_{\Delta}$ such that the equation of ${\mathcal H}$ cannot be of the form $a_{1}z_{1} + \cdots + a_{d} z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$. In each of (i), (ii) and (iii), the equation of a desired hyperplane ${\mathcal H}$ is as follows: \begin{enumerate} \item[(i)] $\sum_{i=1}^{e} z_{i} = e - 1$; \item[(ii)] $\sum_{i=1}^{2\ell+1} z_{i} = \ell$; \item[(iii)] $\sum_{i=1}^{2\ell+1} z_{i} = 2$. \end{enumerate} In (i), it is easy to see that ${\mathcal H} \cap {\mathcal P}_{\Delta}$ is a facet of ${\mathcal P}_{\Delta}$. In each of (ii) and (iii), it is known (\cite{Padberg}, \cite{Wagler}) that ${\mathcal H} \cap {\mathcal P}_{\Delta}$ is a facet of ${\mathcal P}_{\Delta}$. \end{proof}
Let $B =[{\bf b}_1,\ldots, {\bf b}_m] \in {\NZQ Z}^{d \times m}$ be a submatrix of $A =[{\bf a}_1,\ldots, {\bf a}_n] \in {\NZQ Z}^{d \times n}$. Then, $K[B]$ is called a {\em combinatorial pure subring} of $K[A]$ if the convex hull of $\{{\bf b}_1,\ldots, {\bf b}_m\}$ is a face of the convex hull of $\{{\bf a}_1,\ldots, {\bf a}_n\}$. For any combinatorial pure subring $K[B]$ of $K[A]$, it is known that, if the initial ideal of $I_A$ is squarefree, then so is the corresponding initial ideal of $I_B$. See \cite{OHHcp, OHS} for details.
\begin{Lemma} \label{FACET} Let $\Delta$ be a simplicial complex on $[d]$ and $\Delta'$ an induced subcomplex of $\Delta$ which is one of (i), (ii) and (iii) of Lemma {\rm \ref{hole}}. Let $<$ be any reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$. Then the initial ideal ${\rm in}_{<}(I_{A_{\Delta}})$ cannot be squarefree. \end{Lemma}
\begin{proof} Let $\Delta'$ be the induced subcomplex of $\Delta$ on $V$, where $V \subset [d]$, and $<'$ the reverse lexicographic order induced by $<$. Lemma \ref{hole} says that ${\rm in}_{<'}(I_{A_{\Delta'}})$ cannot be squarefree. Since $\Delta'$ is an induced subcomplex of $\Delta$, it follows that ${\mathcal P}_{\Delta'}$ is a face of ${\mathcal P}_{\Delta}$. Thus $K[A_{\Delta'}]$ is a combinatorial pure subring of $K[A_{\Delta}]$ and hence ${\rm in}_{<}(I_{A_{\Delta}})$ cannot be squarefree, as required. \end{proof}
We are now in the position to state a combinatorial characterization of simplicial complexes $\Delta$ on $[d]$ for which the toric ideal $I_{A_{\Delta}}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$.
\begin{Theorem} \label{compressed} Let $\Delta$ be a simplicial complex on $[d]$. Then the following conditions are equivalent: \begin{enumerate} \item[{\rm (i)}] There exists a perfect graph $G$ on $[d]$ with $\Delta = S(G)$; \item[{\rm (ii)}] $A_\Delta$ is compressed; \item[{\rm (iii)}] $I_{A_\Delta}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$. \end{enumerate} \end{Theorem}
\begin{proof} In \cite[Example 1.3 (c)]{compressed}, (i) $\Rightarrow$ (ii) is proved. (See also \cite[\S 4]{GPT}.) Moreover, (ii) $\Rightarrow$ (iii) is trivial. Now, in order to show (iii) $\Rightarrow$ (i), we fix a reverse lexicographic order $<$ whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$.
\noindent {\bf (First Step)} Suppose that there is {\em no} finite simple graph $G$ on $[d]$ with $\Delta = S(G)$. Given a simplicial complex $\Delta$ on $[d]$, there is a finite simple graph $G$ on $[d]$ with $\Delta = S(G)$ if and only if $\Delta$ is flag, i.e., every minimal nonface of $\Delta$ consists of two elements. (See, e.g., \cite[Lemma 9.1.3]{HerHibi}. Note that $S(G)$ is the clique complex of the complementary graph of $G$.)
Let $\Delta$ be a simplicial complex which is not flag and $V \subset [d]$, where $|V| \geq 3$, a minimal nonface of $\Delta$. One has $V \setminus \{ i \} \in \Delta$ for all $i \in V$. Thus the induced subcomplex $\Delta'$ of $\Delta$ on $V$ coincides with the simplicial complex (i) of Lemma \ref{hole}.
\noindent {\bf (Second Step)} Let $G$ be a nonperfect graph on $[d]$ with $\Delta = S(G)$. The strong perfect graph theorem \cite{sptheorem} guarantees that $G$ possesses either an odd hole or an odd antihole. Thus $\Delta$ contains an induced subcomplex $\Delta'$ which coincides with either (ii) or (iii) of Lemma \ref{hole}.
As a result, Lemma \ref{FACET} says that $I_{A_\Delta}$ possesses no squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A_\Delta$. This completes the proof of (iii) $\Rightarrow$ (i). \end{proof}
\begin{Example} {\em Let $A \in {\NZQ Z}^{d \times n}$ be a matrix each of whose entries belongs to $\{ 0, 1 \}$ and $I_{A^{\sharp}}$ the toric ideal of $A^{\sharp}$. In general, even if $I_{A^{\sharp}}$ possesses a squarefree initial ideal with respect to a reverse lexicographic order whose smallest variable corresponds to the column ${\bf 0} \in {\NZQ Z}^{d}$ of $A^{\sharp}$, the matrix $A^{\sharp}$ may not be compressed. For example, if \begin{eqnarray*} A = \left[ \begin{array}{ccccccc} 1& 0& 1& 1& 0& 0&0\\ 1& 1& 0& 0& 0& 0&0\\ 0& 1& 1& 0& 0& 0&0\\ 0& 0& 0& 1& 1& 0&1\\ 0& 0& 0& 0& 1& 1&0\\ 0& 0& 0& 0& 0& 1&1\\ \end{array} \right], \end{eqnarray*} then $I_{A^{\sharp}}$ is generated by $x_{1}x_{3}x_{5}x_{7}- x_{2}x_{4}^{2}x_{6}$. Thus the initial ideal of $I_{A^{\sharp}}$ with respect to the reverse lexicographic order induced by the ordering \[ z < x_{2} < x_{1} < x_{3} < x_{4} < x_{5} < x_{6} < x_{7} \] is generated by $x_{1}x_{3}x_{5}x_{7}$, while the initial ideal of $I_{A^{\sharp}}$ with respect to the reverse lexicographic order induced by the ordering \[ z < x_{1} < x_{2} < x_{3} < x_{4} < x_{5} < x_{6} < x_{7} \] is generated by $x_{2}x_{4}^{2}x_{6}$. Even though $A^{\sharp}$ satisfies the condition (iii) of Theorem \ref{compressed}, the integer matrix $A^{\sharp}$ cannot be compressed. } \end{Example}
Apart from Theorem \ref{compressed}, we can ask when the convex polytope ${\mathcal P} \subset {\NZQ R}^{d}$ which is the convex hull of ${\mathcal P}_{\Delta} \cup (- {\mathcal P}_{\Delta'})$, where $\Delta$ and $\Delta'$ are simplicial complexes on $[d]$, is a Gorenstein Fano polytope.
\begin{Theorem} \label{GORFANO} Let $\Delta$ and $\Delta'$ be simplicial complexes on $[d]$ and ${\mathcal P} \subset {\NZQ R}^{d}$ the convex polytope which is the convex hull of ${\mathcal P}_{\Delta} \cup ( - {\mathcal P}_{\Delta'})$. Then ${\mathcal P}$ is a Gorenstein Fano polytope if and only if there exist perfect graphs $G$ and $G'$ on $[d]$ with $\Delta = S(G)$ and $\Delta' = S(G')$. \end{Theorem}
\begin{proof} The ``If'' part follows from Example \ref{EXperfect}. To see why the ``Only If'' part is true, suppose that either $\Delta$ is not flag or there is a nonperfect graph $G$ with $\Delta = S(G)$. Since ${\mathcal P} \subset {\NZQ R}^{d}$ is a Gorenstein Fano polytope, the equation of each supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ for which ${\mathcal H} \cap {\mathcal P}$ is a facet of ${\mathcal P}$ is of the form $a_{1}z_{1} + \cdots + a_{d}z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$.
Suppose that $\Delta$ is not flag and let $V \subset [d]$ with $|V| \geq 3$ be such that $V \setminus \{ i \} \in \Delta$ for all $i \in V$ and $V \notin \Delta$. Let, say, $V = [e]$ with $e \geq 3$. Then the hyperplane ${\mathcal H}' \subset {\NZQ R}^{d}$ defined by the equation $z_{1} + \cdots + z_{e} = e - 1$ is a supporting hyperplane of ${\mathcal P}$. Let ${\mathcal F}$ be a facet of ${\mathcal P}$ with ${\mathcal H}' \cap {\mathcal P} \subset {\mathcal F}$ and $a_{1}z_{1} + \cdots + a_{d}z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$ the equation of the supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with ${\mathcal F} \subset {\mathcal H}$. Since $\rho(V \setminus \{ i \}) \in {\mathcal H}$ for all $i \in V$, one has $\sum_{j \in [e] \setminus \{ i \}} a_{j} = 1$. Thus $(e - 1) (a_{1} + \cdots + a_{e}) = e$. Hence $a_{1} + \cdots + a_{e} \not\in {\NZQ Z}$, a contradiction.
Let $\Delta = S(G)$, where $G$ possesses an odd hole $C$ of length $2\ell + 1$ with the vertices, say, $1, \ldots, 2\ell + 1$, where $\ell \geq 2$. Then the hyperplane ${\mathcal H}' \subset {\NZQ R}^{d}$ defined by the equation $z_{1} + \cdots + z_{2\ell + 1} = \ell$ is a supporting hyperplane of ${\mathcal P}$. Let ${\mathcal F}$ be a facet of ${\mathcal P}$ with ${\mathcal H}' \cap {\mathcal P} \subset {\mathcal F}$ and $a_{1}z_{1} + \cdots + a_{d}z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$ the equation of the supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with ${\mathcal F} \subset {\mathcal H}$. The maximal stable sets of $C$ are \[ \{1, 3, \ldots, 2\ell - 1\}, \{2, 4, \ldots, 2\ell \}, \ldots, \{2\ell + 1, 2, 4, \ldots, 2\ell - 2\} \] and each $i \in [2\ell + 1]$ appears $\ell$ times in the above list. Since, for each maximal stable set $U$ of $C$, one has $\sum_{i \in U} a_{i} = 1$, it follows that $\ell(a_{1} + \cdots + a_{2\ell + 1}) = 2\ell + 1$. Hence $a_{1} + \cdots + a_{2\ell + 1} \not\in {\NZQ Z}$, a contradiction.
Let $\Delta = S(G)$, where $G$ possesses an odd antihole $C$ with the vertices, say, $1, \ldots, 2\ell + 1$, where $\ell \geq 2$. Then the hyperplane ${\mathcal H}' \subset {\NZQ R}^{d}$ defined by the equation $z_{1} + \cdots + z_{2\ell + 1} = 2$ is a supporting hyperplane of ${\mathcal P}$. Let ${\mathcal F}$ be a facet of ${\mathcal P}$ with ${\mathcal H}' \cap {\mathcal P} \subset {\mathcal F}$ and $a_{1}z_{1} + \cdots + a_{d}z_{d} = 1$ with each $a_{j} \in {\NZQ Z}$ the equation of the supporting hyperplane ${\mathcal H} \subset {\NZQ R}^{d}$ with ${\mathcal F} \subset {\mathcal H}$. The maximal stable sets of $C$ are \[ \{1, 2\}, \{2, 3\}, \ldots, \{2\ell+1, 1\} \] and each $i \in [2\ell + 1]$ appears twice in the above list. Since, for each maximal stable set $U$ of $C$, one has $\sum_{i \in U} a_{i} = 1$, it follows that $2(a_{1} + \cdots + a_{2\ell + 1}) = 2\ell + 1$. Hence $a_{1} + \cdots + a_{2\ell + 1} \not\in {\NZQ Z}$, a contradiction. \end{proof}
\noindent {\bf Acknowledgment.} The authors are grateful to an anonymous referee for useful suggestions and helpful comments.
\end{document} | arXiv |
Assume we have a calendrical system in which leap years happen every four years, no matter what. In a 150-year period, what is the maximum possible number of leap years?
Since 150 divided by 4 is 37.5, there are 37 blocks of 4 years in 150 years, plus two extra years. If we let one of those two extra years be a leap year, and we have one leap year in each of the 37 blocks, then we have 38 leap years. For example, we can choose a 150-year period that starts with a leap year: 1904-2053 is such a period, with 38 leap years $(1904, 1908, \ldots, 2052)$. Now we check that under no circumstance will 39 work. The optimal situation is to start with a leap year and end with a leap year. Leap years happen every four years, so if we start with year $x$, $x$ being a leap year, the $38^{\text{th}}$ leap year after $x$ is $x+4\times38 = x+152$, so including $x$ the period must contain a total of 153 years, which is greater than 150. Therefore no 150-year period will contain 39 leap years. Hence, the answer is $\boxed{38}$.
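As a quick sanity check (not part of the original solution), a brute-force search confirms the count; here we assume, without loss of generality, that the leap years are exactly the years divisible by 4:

```python
# Slide a 150-year window over a range of start years and count leap years
# (multiples of 4) in each window; the maximum over all windows is the answer.
best = max(
    sum(1 for year in range(start, start + 150) if year % 4 == 0)
    for start in range(1, 401)  # enough start years to realize every offset mod 4
)
print(best)  # 38
```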
Could JWST stay at L2 "forever"?
Using only reaction wheels powered by solar panel and the sunshield as a sail (in continuous active attitude control) to generate thrust from solar photon pressure in the desired direction, could JWST stay in its orbit around L2 "forever" (theoretically at least)?
In this case it couldn't fulfill its main objective, which is to be a space telescope pointing at distant objects for long exposure times. But this is a hypothetical question asking about its orbital dynamics.
Anyway, could this be a practical way to set JWST on "pause" for say 2 years, without burning fuel/ejecting mass to keep its orbit around L2?
orbital-mechanics station-keeping james-webb-telescope halo-orbit lissajous-orbit
qq jkztd
$\begingroup$ Here are some different, but related questions whose answers may contain information that is also helpful here: How will JWST manage solar pressure effects to maintain attitude and station keep it's unstable orbit? and also What happens to JWST after it runs out of propellant?. $\endgroup$
– uhoh
$\begingroup$ Reaction wheels have to be desaturated occasionally. That takes fuel. Solar radiation pressure is a hindrance on JWST rather than something the vehicle can use to its advantage, more than doubling the stationkeeping costs compared to a vehicle in a similar unstable orbit but without such a huge sunshield. $\endgroup$
– David Hammen
$\begingroup$ @DavidHammen If considering an hypothetical very high sail surface/mass ratio probe, meant to solely keep orbit at one Lagrange point, could desaturation of reaction wheel be made by shifting centre of mass (reaction wheel) coplanar to sail, inducing a counter-torque allowing wheel to slow down, thereby using no fuel? $\endgroup$
– qq jkztd
$\begingroup$ Ok I found this about solar sail attitude control and propulsion, which goes into the direction of even getting rid of reaction wheel system, and desaturation related issues. $\endgroup$
$\begingroup$ @DavidHammen I suggested without proof that with enough acrobatics (pairings of maneuvers) the station-keeping could be angular momentum neutral over time. If the momentum wheels could be fairly well centered near zero to begin with, then maybe unloading could be managed with torque from photon pressure as well, since the center of mass is offset from the sunshade. Also, I am not sure the photon pressure is really a hindrance. I think you just ride several km in front of the halo orbit as if you were leaving it towards Earth along the unstable manifold, but always being pushed back. $\endgroup$
According to Wikipedia, the delta-v requirements to stay at L1 or L2 are about 30-100 m/s per year. That seems quite high; more realistic values are around 5-16 m/s. The sun shield has an area of about 300 m^2. The thrust possible is about 0.00279664 N, assuming purely reflective. Mass of JWST is about 6200 kg. Putting all of that together, the possible delta-v is around 14 m/s per year, not quite enough to station keep. Also, this assumes fully reflective sun shields, pointed straight at the sun. I'm not sure what direction of thrust would actually be required to keep it at L2, but it probably wouldn't be straight on, thus reducing this further.
Bottom line, it might work, but would require some very careful placement of the shield to maintain the proper orientation.
EDIT: Per some new information, it turns out that my source was VERY misleading about the size: those dimensions are the diagonals of a diamond shape, not the sides of a rectangle. This paper has some interesting information, showing the area is actually closer to 160 m^2, with a station keeping budget of at most 2.25 m/s per year, taking everything into account. That means it could well be achievable. One of the biggest sources of uncertainty is the movement of the solar shield itself; if this was controlled, the required budget could likely be reduced significantly. The delta-v actually available from sunlight is closer to 6.7 m/s per year. Given sources that say 5-16 m/s is a typical stationkeeping value, it seems likely that, to a degree at least, JWST could be controlled by sunlight, although that is VERY difficult to tell without complex analysis.
PearsonArtPhoto ♦
$\begingroup$ That value of 30 to 100 m/s per year is a bogus number. Perhaps that's for EML1/EML2? This paper claims that "In recent years, typical annual station-keeping costs have been around 1.0 m/sec for ACE and WIND, and much less than that for SOHO." This paper, which addresses JSWT directly, estimates stationkeeping costs for JWST to be 2.43 m/s per year. $\endgroup$
$\begingroup$ Double checking, that "30 - 100 m/s per year" is completely bogus, even for EML1/EML2. Per this paper, the ARTEMIS satellites experienced stationkeeping costs in the range of 5 to 16 m/s per year. $\endgroup$
$\begingroup$ Good feedback, have improved. Now we just need to fix Wikipedia... $\endgroup$
– PearsonArtPhoto ♦
$\begingroup$ NASA says "Actual dimensions: 21.197 m x 14.162 m (69.5 ft x 46.5 ft)". I thought rectangle. Turns out that was a bad assumption. Huh. Interesting paper in any case, added quite a bit to my answer. $\endgroup$
$\begingroup$ @uhoh - My first comment says exactly that (JWST stationkeeping is 2.43 m/s/yr). Note that this is high for vehicles in pseudo-orbits about SEL1 or SEL2. In my second comment I was poking deeper at the dubious values in the wikipedia article referenced in the answer, addressing the question I raised in the first comment: Are those dubious wikipedia values for Earth-Moon L1/L2? The answer is no. I picked ARTEMIS specifically because for a while they were in pseudo-orbits about EML1/L2. The costs are considerably higher than for Sun-Earth L1/L2, but not in the 30-100 m/s/yr range. $\endgroup$
This paper by Heiligers et al. explores Earth-moon libration point orbits with the addition of solar sail thrusting. While it is of course not directly translatable to Sun-Earth L2 (JWST), the dynamics of libration point orbits in both systems are at least comparable. The study shows that an increase in stability can be acquired for some orbits (lunar L2 halo being one of them).
JWST is however not a typical solar sail spacecraft. These have much higher area/mass ratios and will produce more acceleration, together with a lower mass (I'm assuming also lower inertia) which means they can steer their sails much more effectively.
I would assume that the conclusions from the paper can be applied to the JWST as well, but the impact on the stability will probably be much smaller than in the case of a regular solar sail spacecraft.
Alexander Vandenberghe
$\begingroup$ That's a really beautiful paper! $\endgroup$
tl;dr: I think there could be room to do this. However, I don't think a conclusive answer can be had through analyses of magnitudes on envelope-backs. A real answer would only come from even more detailed Monte Carlo calculations than those already outlined in Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope. Sounds like a fun project!
Let's look at this systematically using well-sourced facts.
Thrust from photon pressure on sunshield
A photon's momentum $p$ is just its energy divided by the speed of light $E/c=h\nu/c$, so the force resulting in the perfect absorption of photons would be
$$F=\frac{dp}{dt}=\frac{1}{c}\frac{dE}{dt} = \frac{P}{c}$$
where $P$ is the total power of the light hitting the absorber, in units of Watts for example, and $A$ (used below) is the area of the absorber intercepting the incoming light field.
Since the sail is reflective rather than absorbing, there's a second beam of reflected light and a second force, and this one has a direction based on the orientation of the mirror. Let's just look at the magnitudes for now though.
Wikipedia gives the shape of the diamond-shaped sunshield as about 21 by 14 meters (the diagonals). That will have an area equal to half the product of the diagonals, or 147 m^2, agreeing nicely with Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope.
As shown in Figure 6, the effective area of the Sunshield in the Sunward direction can vary between 105 and 163 m², the range of allowed spacecraft attitudes that prevent the telescope from being exposed to stray light.
The solar constant is about 1360 W/m^2 at 1 AU, but the L2 area is 1% farther, so let's use
$$P_{max}=A \times 1330\ \mathrm{W/m^2} \approx 196\ \mathrm{kW}$$
to get
$$F_{max} = 2 \frac{P_{max}}{c} \approx 1.3\ \mathrm{mN}.$$
Acceleration is force/mass. Using 6500 kg from Wikipedia:
$$a_{max} = \frac{F_{max}}{m} \approx 2.0 \times 10^{-7}\ \mathrm{m/s^2}.$$
A year has about 31.6 million seconds, so that's 6.3 m/s per year of delta-v available in the +z direction if the shade points mostly back towards the Sun, and somewhat less if a bit of tilt is used if perpendicular acceleration is needed.
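For anyone who wants to reproduce the arithmetic, here is a quick back-of-the-envelope script (Python; the area, flux, and mass values are the ones assumed above, not official figures):

```python
# Photon-pressure thrust on a perfectly reflecting sunshield near SEL2.
A = 147.0       # sunshield area, m^2 (half the product of the 21 m and 14 m diagonals)
S = 1330.0      # solar flux at L2, W/m^2 (~1% farther from the Sun than 1 AU)
c = 2.998e8     # speed of light, m/s
m = 6500.0      # spacecraft mass, kg
year = 31.6e6   # seconds per year

P = A * S            # intercepted power: ~196 kW
F = 2.0 * P / c      # reflection doubles the momentum transfer: ~1.3 mN
a = F / m            # ~2.0e-7 m/s^2
print(P / 1e3, F * 1e3, a, a * year)  # kW, mN, m/s^2, m/s per year (~6.3)
```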
JWST's known station-keeping budget
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope tells us:
The results of the analysis show that the SK delta-V budget for a 10.5 year mission is 25.5 m/sec, or 2.43 m/sec per year. This SK budget is higher than the typical LPO SK budget of about 1 m/sec per year, but JWST presents challenges that other LPO missions do not face. The End-of-Box analysis was critical to the JWST mission, because it provided a realistic value for the SK delta-V budget when it was needed to establish a complete spacecraft mass budget.
So the sail provides more than double the magnitude of the station-keeping delta-v.
SOHO is an example of a spacecraft in a halo orbit (around L1) and per Roberts 2002 (from Is this what station keeping maneuvers look like, or just glitches in data? (SOHO via Horizons)) it uses a station-keeping strategy of only thrusting in the z direction (toward or away from the Sun). However, Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope tells us:
In LPO dynamics [it] is known that the x-y plane contains the stable and unstable directions, while the z direction is neutrally stable. Because JWST does not need to remain near a reference orbit, during SK maneuvers there is no need to thrust in the z direction, and the thrust vector is chosen to lie in the x-y plane.
This doesn't mean that in our non-telescope-mode, survival holding pattern we would also need the station-keeping (SK) thrust vector in the perpendicular x-y plane though. I propose that in survival mode, some combination of modulating the z-component and adding an x-y component by tilting and angling the sunshield within its safe limits would provide enough delta-v, and enough flexibility in its direction, to perform the station-keeping.
JWST will experience a steady delta-v of about 6 m/s per year due to the constant photon pressure of sunlight reflecting back from its sunshield.
While this is of course already figured into its orbit, this will mostly result in a halo orbit just slightly in front of (sunward-of) the halo orbit about L2 calculated without the effects of photon pressure. Here "slightly in front of" is probably of the order of a few kilometers or tens of kilometers only.
Aggressive tilting of the sunshield within safe limits can both modulate the +z acceleration, and add a component in the x-y plane
Rotating the spacecraft about the +z axis of the orbit in the rotating frame with a tilted sunshield will direct the component of the thrust within the x-y plane, though probably not enough to make up the full 2.4 m/s per year currently obtained from propulsive maneuvers every 21 days.
Momentum unloading
I haven't thought too much about how to do momentum unloading of JWST's momentum wheels using only solar photon pressure. The wheels will be needed to not only maintain attitude but also to execute regular tilts and rotations needed to direct the photon pressure for station-keeping.
As soon as the spacecraft tilts a bit, the center of the resulting photon pressure will no longer pass through the spacecraft's center of mass, so there will be at least some torque to work with.
It is possible that these attitude maneuvers can be designed in pairs to be angular momentum-neutral such that they naturally cancel each other in terms of rotations of the wheels over time.
The short answer is no. Lagrange points are saddle points, in topological terms. They are only quasi-stable, not actually stable. Solar radiation pressure is enough to cause a nutation and precession of the Wind spacecraft's spin axis over the course of a year (it forms an ellipse on a polar graph with a diameter of about 1 degree). That force alone would nudge any spacecraft out of place eventually, so no, it could not stay indefinitely.
There are also perturbations in gravitational fields that, over long time periods, would nudge a spacecraft off of any quasi-stable saddle point.
There is also dust from interplanetary and interstellar space that impacts spacecraft at hypersonic speeds causing small plasma plumes. The dust from interstellar space has a preferred direction, thus would slowly kick any spacecraft off of a quasi-stable saddle point.
Even with a spinning spacecraft, the orbit starts to exponentially decay after about a month. Wind performs only four station keeping maneuvers per year and each only costs ~4-10 cm/s of fuel (equivalent to something like 0.1 kg or less). ACE, by comparison, performs maneuvers every other week or so. The difference is that ACE is a sun-pointed spinner and Wind is an ecliptic spinner. ACE needs to point in a specific direction for communication with Earth, and also because one of its plasma instruments is failing (so they canted the spin axis a little to rely more heavily on the less damaged anodes).
One could, in principle, just wait 9 months and re-insert Wind about the Earth-Sun L1 point, but the longer you wait, the more expensive (fuel-wise) it becomes. If we waited 2 years, Wind (or any other Lagrange-point-orbiting spacecraft) would be in a heliocentric orbit about the Sun just like Earth. If the spacecraft was at L1(L2), the spacecraft would orbit the Sun faster(slower) than Earth. This is actually what the STEREO spacecraft do.
So no, if JWST didn't use fuel for 2 years, it would be in a heliocentric orbit about the Sun.
Fun Side Note: The JWST flight operations team came up with a new set of thrusting options to try and preserve fuel, and in early 2014 they presented their idea to me and the Wind team. They wanted to use Wind as a test run to see if the thruster maneuvers would not only work, but would also save fuel.
The old way was to only thrust when the spacecraft was on the Earth-Sun line and the thrusts would be aligned with the Earth-Sun line (well, as close as possible). The reason being, you do not want to apply a torque to either your orbit about the Lagrange point or your orbit around the Sun. The JWST team's idea was to thrust off the Earth-Sun line at angles to the Earth-Sun line. So I pointed out this would introduce torques into the system and I was concerned. They went and published a paper on it, found at doi:10.2514/6.2014-4304. Turns out, because the maneuvers only use a few to a few 10s of cm/s, the torques would be minimal and not critical, so we went forward with it. It reduced our typical fuel cost per maneuver by ~5-10%.
I still find this a little funny because at the time Wind had over 120 years worth of fuel left...
honeste_vivere
$\begingroup$ The paper you mention in the "Fun side" section can be downloaded from here. Interesting to note that the fuel mass of WIND is about the same as that of JWST (300Kg), but the dry mass ratio is 1:6. Can we advance a "back of an envelope" life-time for JWST to be ~ 120/6 years? $\endgroup$
– Ng Ph
Jan 8 at 21:19
Kalman filter
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.
This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich.[1][2][3][4] In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow.[5]
Kalman filtering[6] has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships.[7] Furthermore, Kalman filtering is a concept widely applied in time series analysis, in topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control[8][9] and can be used for trajectory optimization.[10] Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters[11] provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.[12]
The algorithm works as a two-phase process. In the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required.
Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear."[13] Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense. [14] It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.[15]
Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion,[16] and distributed sensor networks to develop distributed or consensus Kalman filtering.[17]
History
The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[18][19] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[20] It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program resulting in its incorporation in the Apollo navigation computer.[21]: 16
This Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
The Apollo computer used 2k of magnetic core RAM and 36k wire rope [...]. The CPU was built from ICs [...]. Clock speed was under 100 kHz [...]. The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable.
— Interview with Jack Crenshaw, by Matthew Reed, TRS-80.org (2009)
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.[22]
Overview of the calculation
Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.
Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
The relative certainty of the measurements and the current state estimate is an important consideration, and it is common to discuss the response of the filter in terms of the Kalman gain. The Kalman gain is the relative weight given to the measurements and the current state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter follows the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness.
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
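The weighted-average update is easiest to see in one dimension. The following is a minimal sketch (a hypothetical scalar example with made-up numbers) that fuses a prediction and a measurement according to their variances:

# Scalar fusion of a predicted value and a measurement (illustrative numbers).
x_pred, P_pred = 10.0, 4.0  # predicted state and its variance
z, R = 12.0, 1.0            # measurement and its variance

K = P_pred / (P_pred + R)           # scalar Kalman gain: weight of the measurement
x_new = x_pred + K * (z - x_pred)   # weighted average between prediction and measurement
P_new = (1 - K) * P_pred            # updated uncertainty is smaller than either input

print(x_new, P_new)  # 11.6 0.8

Because the measurement variance is smaller than the prediction variance, the new estimate (11.6) lies closer to the measurement (12) than to the prediction (10), and its variance (0.8) is smaller than both.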
Example application
As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.
For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.
Technical description and context
The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models,[23][24] and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory.
In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.
In the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations.
A wide variety of Kalman filters has since been developed, including Kalman's original formulation (now termed the "simple" Kalman filter), the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.
Underlying dynamic system model
Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of the Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)[25] and Hamilton (1994), Chapter 13.[26]
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step k (a concrete sketch follows the list):
• Fk, the state-transition model;
• Hk, the observation model;
• Qk, the covariance of the process noise;
• Rk, the covariance of the observation noise;
• and sometimes Bk, the control-input model as described below; if Bk is included, then there is also
• uk, the control vector, representing the controlling input into control-input model.
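As a concrete sketch of this specification (in Python with NumPy; all numeric values here are illustrative assumptions, not prescribed by the theory), a one-dimensional constant-velocity model might be set up as:

import numpy as np

dt = 1.0                                # time step (assumed)
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # Fk: state-transition model
H = np.array([[1.0, 0.0]])              # Hk: observe position only
Q = 0.01 * np.eye(2)                    # Qk: process-noise covariance (assumed)
R = np.array([[0.25]])                  # Rk: observation-noise covariance (assumed)
B = np.array([[0.5 * dt**2],
              [dt]])                    # Bk: optional control-input model
u = np.array([0.0])                     # uk: control vector (no control here)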
The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to
$\mathbf {x} _{k}=\mathbf {F} _{k}\mathbf {x} _{k-1}+\mathbf {B} _{k}\mathbf {u} _{k}+\mathbf {w} _{k}$
where
• Fk is the state transition model which is applied to the previous state xk−1;
• Bk is the control-input model which is applied to the control vector uk;
• wk is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution, ${\mathcal {N}}$, with covariance, Qk: $\mathbf {w} _{k}\sim {\mathcal {N}}\left(0,\mathbf {Q} _{k}\right)$.
At time k an observation (or measurement) zk of the true state xk is made according to
$\mathbf {z} _{k}=\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k}$
where
• Hk is the observation model, which maps the true state space into the observed space and
• vk is the observation noise, which is assumed to be zero mean Gaussian white noise with covariance Rk: $\mathbf {v} _{k}\sim {\mathcal {N}}\left(0,\mathbf {R} _{k}\right)$.
The initial state and the noise vectors at each step {x0, w1, ..., wk, v1, ..., vk} are all assumed to be mutually independent.
Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.[27][28]
Details
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation ${\hat {\mathbf {x} }}_{n\mid m}$ represents the estimate of $\mathbf {x} $ at time n given observations up to and including time m ≤ n.
The state of the filter is represented by two variables:
• ${\hat {\mathbf {x} }}_{k\mid k}$, the a posteriori state estimate mean at time k given observations up to and including time k;
• $\mathbf {P} _{k\mid k}$, the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate).
The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the a posteriori state estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices Hk).[29][30]
Predict
Predicted (a priori) state estimate ${\hat {\mathbf {x} }}_{k\mid k-1}=\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}+\mathbf {B} _{k}\mathbf {u} _{k}$
Predicted (a priori) estimate covariance $\mathbf {P} _{k\mid k-1}=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k}$
Update
Innovation or measurement pre-fit residual ${\tilde {\mathbf {y} }}_{k}=\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}$
Innovation (or pre-fit residual) covariance $\mathbf {S} _{k}=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}$
Optimal Kalman gain $\mathbf {K} _{k}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}$
Updated (a posteriori) state estimate ${\hat {\mathbf {x} }}_{k\mid k}={\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}{\tilde {\mathbf {y} }}_{k}$
Updated (a posteriori) estimate covariance $\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}$
Measurement post-fit residual ${\tilde {\mathbf {y} }}_{k\mid k}=\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k}$
The formula for the updated (a posteriori) estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any Kk is also shown.
A more intuitive way to express the updated state estimate (${\hat {\mathbf {x} }}_{k\mid k}$) is:
${\hat {\mathbf {x} }}_{k\mid k}=(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}){\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\mathbf {z} _{k}$
This expression is reminiscent of a linear interpolation, $x=(1-t)a+tb$ for $t\in [0,1]$. In our case:
• $t$ is the Kalman gain ($\mathbf {K} _{k}$), a matrix that takes values from $0$ (high error in the sensor) to $I$ (low error).
• $a$ is the value estimated from the model.
• $b$ is the value from the measurement.
This expression also resembles the alpha beta filter update step.
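The predict and update phases translate directly into code. The following is a minimal NumPy sketch of one cycle of the equations above; the function names are illustrative, not from any standard library:

import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    # Predicted (a priori) state estimate and covariance
    x = F @ x + (B @ u if B is not None else 0)
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    y = z - H @ x                        # innovation (pre-fit residual)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # optimal Kalman gain
    x = x + K @ y                        # updated (a posteriori) state estimate
    P = (np.eye(len(x)) - K @ H) @ P     # updated (a posteriori) covariance
    return x, P

One complete filter step is then kf_update(*kf_predict(x, P, F, Q), z, H, R); skipping the update when no observation is available, or applying several updates with different H and R, follows the scheme described above.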
Invariants
If the model is accurate, and the values for ${\hat {\mathbf {x} }}_{0\mid 0}$ and $\mathbf {P} _{0\mid 0}$ accurately reflect the distribution of the initial state values, then the following invariants are preserved:
${\begin{aligned}\operatorname {E} [\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}]&=\operatorname {E} [\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}]=0\\\operatorname {E} [{\tilde {\mathbf {y} }}_{k}]&=0\end{aligned}}$
where $\operatorname {E} [\xi ]$ is the expected value of $\xi $. That is, all estimates have a mean error of zero.
Also:
${\begin{aligned}\mathbf {P} _{k\mid k}&=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\\\mathbf {P} _{k\mid k-1}&=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\\\mathbf {S} _{k}&=\operatorname {cov} \left({\tilde {\mathbf {y} }}_{k}\right)\end{aligned}}$
so covariance matrices accurately reflect the covariance of estimates.
Estimation of the noise covariances Qk and Rk
Practical implementation of a Kalman filter is often difficult due to the difficulty of obtaining a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances.[31][32] The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License.[33] The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed.[34] The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to the autocovariance least-squares methods.
Optimality and performance
It follows from theory that the Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated) and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters.[35] Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose.[36] If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.[37][38]
Example application, technical
Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter.
Since $\mathbf {F} ,\mathbf {H} ,\mathbf {R} ,\mathbf {Q} $ are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space
$\mathbf {x} _{k}={\begin{bmatrix}x\\{\dot {x}}\end{bmatrix}}$
where ${\dot {x}}$ is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of ak that is normally distributed with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that
$\mathbf {x} _{k}=\mathbf {F} \mathbf {x} _{k-1}+\mathbf {G} a_{k}$
(there is no $\mathbf {B} u$ term since there are no known control inputs. Instead, ak is the effect of an unknown input and $\mathbf {G} $ applies that effect to the state vector) where
${\begin{aligned}\mathbf {F} &={\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}}\\[4pt]\mathbf {G} &={\begin{bmatrix}{\frac {1}{2}}{\Delta t}^{2}\\[6pt]\Delta t\end{bmatrix}}\end{aligned}}$
so that
$\mathbf {x} _{k}=\mathbf {F} \mathbf {x} _{k-1}+\mathbf {w} _{k}$
where
${\begin{aligned}\mathbf {w} _{k}&\sim N(0,\mathbf {Q} )\\\mathbf {Q} &=\mathbf {G} \mathbf {G} ^{\textsf {T}}\sigma _{a}^{2}={\begin{bmatrix}{\frac {1}{4}}{\Delta t}^{4}&{\frac {1}{2}}{\Delta t}^{3}\\[6pt]{\frac {1}{2}}{\Delta t}^{3}&{\Delta t}^{2}\end{bmatrix}}\sigma _{a}^{2}.\end{aligned}}$
The matrix $\mathbf {Q} $ is not full rank (it is of rank one if $\Delta t\neq 0$). Hence, the distribution $N(0,\mathbf {Q} )$ is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by
$\mathbf {w} _{k}\sim \mathbf {G} \cdot N\left(0,\sigma _{a}^{2}\right).$
At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also normally distributed, with mean 0 and standard deviation σz.
$\mathbf {z} _{k}=\mathbf {Hx} _{k}+\mathbf {v} _{k}$
where
$\mathbf {H} ={\begin{bmatrix}1&0\end{bmatrix}}$
and
$\mathbf {R} =\mathrm {E} \left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]={\begin{bmatrix}\sigma _{z}^{2}\end{bmatrix}}$
We know the initial starting state of the truck with perfect precision, so we initialize
${\hat {\mathbf {x} }}_{0\mid 0}={\begin{bmatrix}0\\0\end{bmatrix}}$
and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
$\mathbf {P} _{0\mid 0}={\begin{bmatrix}0&0\\0&0\end{bmatrix}}$
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
$\mathbf {P} _{0\mid 0}={\begin{bmatrix}\sigma _{x}^{2}&0\\0&\sigma _{\dot {x}}^{2}\end{bmatrix}}$
The filter will then prefer the information from the first measurements over the information already in the model.
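A minimal simulation of this truck example (a sketch; the noise magnitudes below are assumed for illustration) shows the filter tracking the true state from noisy position measurements:

import numpy as np

rng = np.random.default_rng(seed=0)
dt, sigma_a, sigma_z = 1.0, 0.2, 3.0          # assumed values
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

x_true = np.zeros(2)
x_est, P = np.zeros(2), np.zeros((2, 2))      # initial state known exactly

for _ in range(50):
    # Simulate: a random acceleration buffets the truck, GPS measures position
    a = rng.normal(0.0, sigma_a)
    x_true = F @ x_true + (G * a).ravel()
    z = H @ x_true + rng.normal(0.0, sigma_z, size=1)
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print(x_true, x_est)   # the estimate tracks the true position and velocity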
Asymptotic form
For simplicity, assume that the control input $\mathbf {u} _{k}=\mathbf {0} $. Then the Kalman filter may be written:
${\hat {\mathbf {x} }}_{k\mid k}=\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}+\mathbf {K} _{k}[\mathbf {z} _{k}-\mathbf {H} _{k}\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}].$
A similar equation holds if we include a non-zero control input. Gain matrices $\mathbf {K} _{k}$ evolve independently of the measurements $\mathbf {z} _{k}$. From above, the four equations needed for updating the Kalman gain are as follows:
${\begin{aligned}\mathbf {P} _{k\mid k-1}&=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k},\\\mathbf {S} _{k}&=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k},\\\mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1},\\\mathbf {P} _{k|k}&=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k|k-1}.\end{aligned}}$
Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices $\mathbf {K} _{k}$ to an asymptotic matrix $\mathbf {K} _{\infty }$ applies for conditions established in Walrand and Dimakis.[39] Simulations establish the number of steps to convergence. For the moving truck example described above, with $\Delta t=1$ and $\sigma _{a}^{2}=\sigma _{z}^{2}=\sigma _{x}^{2}=\sigma _{\dot {x}}^{2}=1$, simulation shows convergence in $10$ iterations.
Using the asymptotic gain, and assuming $\mathbf {H} _{k}$ and $\mathbf {F} _{k}$ are independent of $k$, the Kalman filter becomes a linear time-invariant filter:
${\hat {\mathbf {x} }}_{k}=\mathbf {F} {\hat {\mathbf {x} }}_{k-1}+\mathbf {K} _{\infty }[\mathbf {z} _{k}-\mathbf {H} \mathbf {F} {\hat {\mathbf {x} }}_{k-1}].$
The asymptotic gain $\mathbf {K} _{\infty }$, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance $\mathbf {P} _{\infty }$:[39]
$\mathbf {P} _{\infty }=\mathbf {F} \left(\mathbf {P} _{\infty }-\mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}+\mathbf {R} \right)^{-1}\mathbf {H} \mathbf {P} _{\infty }\right)\mathbf {F} ^{\textsf {T}}+\mathbf {Q} .$
The asymptotic gain is then computed as before.
$\mathbf {K} _{\infty }=\mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {R} +\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\right)^{-1}.$
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
${\hat {\mathbf {x} }}_{k+1}=\mathbf {F} {\hat {\mathbf {x} }}_{k}+\mathbf {B} \mathbf {u} _{k}+{\overline {\mathbf {K} }}_{\infty }[\mathbf {z} _{k}-\mathbf {H} {\hat {\mathbf {x} }}_{k}],$
where
${\overline {\mathbf {K} }}_{\infty }=\mathbf {F} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {R} +\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\right)^{-1}.$
This leads to an estimator of the form
${\hat {\mathbf {x} }}_{k+1}=(\mathbf {F} -{\overline {\mathbf {K} }}_{\infty }\mathbf {H} ){\hat {\mathbf {x} }}_{k}+\mathbf {B} \mathbf {u} _{k}+{\overline {\mathbf {K} }}_{\infty }\mathbf {z} _{k}.$
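Because the gain recursion is independent of the data, the asymptotic gain can also be approximated offline by iterating the covariance recursion to a fixed point instead of solving the discrete Riccati equation in closed form. A sketch, using the truck-example values $\Delta t=1$ and unit variances:

import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.array([[0.25, 0.5], [0.5, 1.0]])       # G G^T with dt = sigma_a = 1
R = np.array([[1.0]])

P = np.eye(2)
for _ in range(100):                          # iterate the discrete Riccati recursion
    P = F @ (P - P @ H.T @ np.linalg.inv(H @ P @ H.T + R) @ H @ P) @ F.T + Q

K_inf = P @ H.T @ np.linalg.inv(R + H @ P @ H.T)
print(K_inf)                                  # asymptotic Kalman gain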
Derivations
The Kalman filter can be derived as a generalized least squares method operating on previous data.[40]
Deriving the a posteriori estimate covariance matrix
Starting with our invariant on the error covariance Pk | k as above
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)$
substitute in the definition of ${\hat {\mathbf {x} }}_{k\mid k}$
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\mathbf {x} _{k}-\left({\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}{\tilde {\mathbf {y} }}_{k}\right)\right]$
and substitute ${\tilde {\mathbf {y} }}_{k}$
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-\left[{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]\right)$
and $\mathbf {z} _{k}$
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-\left[{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]\right)$
and by collecting the error vectors we get
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)-\mathbf {K} _{k}\mathbf {v} _{k}\right]$
Since the measurement error vk is uncorrelated with the other terms, this becomes
$\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]+\operatorname {cov} \left[\mathbf {K} _{k}\mathbf {v} _{k}\right]$
by the properties of vector covariance this becomes
$\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\operatorname {cov} \left(\mathbf {v} _{k}\right)\mathbf {K} _{k}^{\textsf {T}}$
which, using our invariant on Pk | k−1 and the definition of Rk becomes
$\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}\mathbf {K} _{k}^{\textsf {T}}$
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below.
Kalman gain derivation
The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is
$\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}$
We seek to minimize the expected value of the square of the magnitude of this vector, $\operatorname {E} \left[\left\|\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k|k}\right\|^{2}\right]$. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix $\mathbf {P} _{k|k}$. By expanding out the terms in the equation above and collecting, we get:
${\begin{aligned}\mathbf {P} _{k\mid k}&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}\right)\mathbf {K} _{k}^{\textsf {T}}\\[6pt]&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}\end{aligned}}$
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that
${\frac {\partial \;\operatorname {tr} (\mathbf {P} _{k\mid k})}{\partial \;\mathbf {K} _{k}}}=-2\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\right)^{\textsf {T}}+2\mathbf {K} _{k}\mathbf {S} _{k}=0.$
Solving this for Kk yields the Kalman gain:
${\begin{aligned}\mathbf {K} _{k}\mathbf {S} _{k}&=\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\right)^{\textsf {T}}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\\\Rightarrow \mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\end{aligned}}$
This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.
Simplification of the a posteriori error covariance formula
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that
$\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}$
Referring back to our expanded formula for the a posteriori error covariance,
$\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}$
we find the last two terms cancel out, giving
$\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}=(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k})\mathbf {P} _{k\mid k-1}$
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
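In code, the difference between the two updates is a single expression. A sketch of the Joseph-form update, which is safe for any gain:

import numpy as np

def joseph_update(P, K, H, R):
    # Joseph form: valid for any gain K, and numerically preserves the
    # symmetry and positive semi-definiteness of the covariance.
    A = np.eye(P.shape[0]) - K @ H
    return A @ P @ A.T + K @ R @ K.T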
Sensitivity analysis
The Kalman filtering equations provide an estimate of the state ${\hat {\mathbf {x} }}_{k\mid k}$ and its error covariance $\mathbf {P} _{k\mid k}$ recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[41] In the absence of reliable statistics or the true values of noise covariance matrices $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$, the expression
$\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}\mathbf {K} _{k}^{\textsf {T}}$
no longer provides the actual error covariance. In other words, $\mathbf {P} _{k\mid k}\neq E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]$. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices $\mathbf {F} _{k}$ and $\mathbf {H} _{k}$ that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by $\mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}^{a}$ respectively, whereas the design values used in the estimator are $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$ respectively. The actual error covariance is denoted by $\mathbf {P} _{k\mid k}^{a}$ and $\mathbf {P} _{k\mid k}$ as computed by the Kalman filter is referred to as the Riccati variable. When $\mathbf {Q} _{k}\equiv \mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}\equiv \mathbf {R} _{k}^{a}$, this means that $\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k}^{a}$. While computing the actual error covariance using $\mathbf {P} _{k\mid k}^{a}=E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]$, substituting for ${\widehat {\mathbf {x} }}_{k\mid k}$ and using the fact that $E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}^{a}$ and $E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}^{a}$, results in the following recursive equations for $\mathbf {P} _{k\mid k}^{a}$ :
$\mathbf {P} _{k\mid k-1}^{a}=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}^{a}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k}^{a}$
and
$\mathbf {P} _{k\mid k}^{a}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}^{a}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}^{a}\mathbf {K} _{k}^{\textsf {T}}$
While computing $\mathbf {P} _{k\mid k}$, by design the filter implicitly assumes that $E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}$ and $E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}$. The recursive expressions for $\mathbf {P} _{k\mid k}^{a}$ and $\mathbf {P} _{k\mid k}$ are identical except for the presence of $\mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}^{a}$ in place of the design values $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$ respectively. Research has been done to analyze the robustness of Kalman filter systems.[42]
Square root form
One problem with the Kalman filter is its numerical stability. If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite.
Positive definite matrices have the property that they have a triangular matrix square root P = S·ST. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[43]: 69 while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.[43][44]
The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter.[45] The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[46] Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk|k-1 that are associated with auxiliary observations in yk. The L·D·LT square-root filter requires orthogonalization of the observation vector.[44][45] This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[47]
Parallel form
The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä (2021).[48] The filter solution can then be retrieved by the use of a prefix sum algorithm which can be efficiently implemented on GPU.[49] This reduces the computational complexity from $O(N)$ in the number of time steps to $O(\log(N))$.
Relationship to recursive Bayesian estimation
The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.[50]
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).
Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
$p(\mathbf {x} _{k}\mid \mathbf {x} _{0},\dots ,\mathbf {x} _{k-1})=p(\mathbf {x} _{k}\mid \mathbf {x} _{k-1})$
Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
$p(\mathbf {z} _{k}\mid \mathbf {x} _{0},\dots ,\mathbf {x} _{k})=p(\mathbf {z} _{k}\mid \mathbf {x} _{k})$
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
$p\left(\mathbf {x} _{0},\dots ,\mathbf {x} _{k},\mathbf {z} _{1},\dots ,\mathbf {z} _{k}\right)=p\left(\mathbf {x} _{0}\right)\prod _{i=1}^{k}p\left(\mathbf {z} _{i}\mid \mathbf {x} _{i}\right)p\left(\mathbf {x} _{i}\mid \mathbf {x} _{i-1}\right)$
However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This results in the predict and update phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible $x_{k-1}$.
$p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)=\int p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)p\left(\mathbf {x} _{k-1}\mid \mathbf {Z} _{k-1}\right)\,d\mathbf {x} _{k-1}$
The measurement set up to time t is
$\mathbf {Z} _{t}=\left\{\mathbf {z} _{1},\dots ,\mathbf {z} _{t}\right\}$
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.
$p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k}\right)={\frac {p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)}{p\left(\mathbf {z} _{k}\mid \mathbf {Z} _{k-1}\right)}}$
The denominator
$p\left(\mathbf {z} _{k}\mid \mathbf {Z} _{k-1}\right)=\int p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)\,d\mathbf {x} _{k}$
is a normalization term.
The remaining probability density functions are
${\begin{aligned}p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)&={\mathcal {N}}\left(\mathbf {F} _{k}\mathbf {x} _{k-1},\mathbf {Q} _{k}\right)\\p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)&={\mathcal {N}}\left(\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right)\\p\left(\mathbf {x} _{k-1}\mid \mathbf {Z} _{k-1}\right)&={\mathcal {N}}\left({\hat {\mathbf {x} }}_{k-1},\mathbf {P} _{k-1}\right)\end{aligned}}$
The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for $\mathbf {x} _{k}$ given the measurements $\mathbf {Z} _{k}$ is the Kalman filter estimate.
Marginal likelihood
Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is
1. Sample a hidden state $\mathbf {x} _{0}$ from the Gaussian prior distribution $p\left(\mathbf {x} _{0}\right)={\mathcal {N}}\left({\hat {\mathbf {x} }}_{0\mid 0},\mathbf {P} _{0\mid 0}\right)$.
2. Sample an observation $\mathbf {z} _{0}$ from the observation model $p\left(\mathbf {z} _{0}\mid \mathbf {x} _{0}\right)={\mathcal {N}}\left(\mathbf {H} _{0}\mathbf {x} _{0},\mathbf {R} _{0}\right)$.
3. For $k=1,2,3,\ldots $, do
1. Sample the next hidden state $\mathbf {x} _{k}$ from the transition model $p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)={\mathcal {N}}\left(\mathbf {F} _{k}\mathbf {x} _{k-1}+\mathbf {B} _{k}\mathbf {u} _{k},\mathbf {Q} _{k}\right).$
2. Sample an observation $\mathbf {z} _{k}$ from the observation model $p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)={\mathcal {N}}\left(\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right).$
This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.
It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,
$p(\mathbf {z} )=\prod _{k=0}^{T}p\left(\mathbf {z} _{k}\mid \mathbf {z} _{k-1},\ldots ,\mathbf {z} _{0}\right)$,
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate ${\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}.$ Thus the marginal likelihood is given by
${\begin{aligned}p(\mathbf {z} )&=\prod _{k=0}^{T}\int p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {z} _{k-1},\ldots ,\mathbf {z} _{0}\right)d\mathbf {x} _{k}\\&=\prod _{k=0}^{T}\int {\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right){\mathcal {N}}\left(\mathbf {x} _{k};{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}\right)d\mathbf {x} _{k}\\&=\prod _{k=0}^{T}{\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {R} _{k}+\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\right)\\&=\prod _{k=0}^{T}{\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}\right),\end{aligned}}$
i.e., a product of Gaussian densities, each corresponding to the density of one observation zk under the current filtering distribution $\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}$. This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood $\ell =\log p(\mathbf {z} )$ instead. Adopting the convention $\ell ^{(-1)}=0$, this can be done via the recursive update rule
$\ell ^{(k)}=\ell ^{(k-1)}-{\frac {1}{2}}\left({\tilde {\mathbf {y} }}_{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}{\tilde {\mathbf {y} }}_{k}+\log \left|\mathbf {S} _{k}\right|+d_{y}\log 2\pi \right),$
where $d_{y}$ is the dimension of the measurement vector.[51]
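A sketch of this recursive update, computed from the innovation ${\tilde {\mathbf {y} }}_{k}$ and its covariance $\mathbf {S} _{k}$ that are already available in the update phase (the function name is illustrative):

import numpy as np

def log_marginal_step(ell, y, S):
    # Add one observation's contribution to the log marginal likelihood,
    # given the innovation y and the innovation covariance S.
    d_y = y.shape[0]
    _, logdet = np.linalg.slogdet(S)
    return ell - 0.5 * (y @ np.linalg.solve(S, y) + logdet + d_y * np.log(2.0 * np.pi))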
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input; however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most likely one can be found.
Information filter
In cases where the dimension of the observation vector y is bigger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:
${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {P} _{k\mid k}^{-1}\\{\hat {\mathbf {y} }}_{k\mid k}&=\mathbf {P} _{k\mid k}^{-1}{\hat {\mathbf {x} }}_{k\mid k}\end{aligned}}$
Similarly the predicted covariance and state have equivalent information forms, defined as:
${\begin{aligned}\mathbf {Y} _{k\mid k-1}&=\mathbf {P} _{k\mid k-1}^{-1}\\{\hat {\mathbf {y} }}_{k\mid k-1}&=\mathbf {P} _{k\mid k-1}^{-1}{\hat {\mathbf {x} }}_{k\mid k-1}\end{aligned}}$
as have the measurement covariance and measurement vector, which are defined as:
${\begin{aligned}\mathbf {I} _{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {R} _{k}^{-1}\mathbf {H} _{k}\\\mathbf {i} _{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {R} _{k}^{-1}\mathbf {z} _{k}\end{aligned}}$
The information update now becomes a trivial sum.[52]
${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {Y} _{k\mid k-1}+\mathbf {I} _{k}\\{\hat {\mathbf {y} }}_{k\mid k}&={\hat {\mathbf {y} }}_{k\mid k-1}+\mathbf {i} _{k}\end{aligned}}$
The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors.
${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {Y} _{k\mid k-1}+\sum _{j=1}^{N}\mathbf {I} _{k,j}\\{\hat {\mathbf {y} }}_{k\mid k}&={\hat {\mathbf {y} }}_{k\mid k-1}+\sum _{j=1}^{N}\mathbf {i} _{k,j}\end{aligned}}$
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.[52]
${\begin{aligned}\mathbf {M} _{k}&=\left[\mathbf {F} _{k}^{-1}\right]^{\textsf {T}}\mathbf {Y} _{k-1\mid k-1}\mathbf {F} _{k}^{-1}\\\mathbf {C} _{k}&=\mathbf {M} _{k}\left[\mathbf {M} _{k}+\mathbf {Q} _{k}^{-1}\right]^{-1}\\\mathbf {L} _{k}&=\mathbf {I} -\mathbf {C} _{k}\\\mathbf {Y} _{k\mid k-1}&=\mathbf {L} _{k}\mathbf {M} _{k}+\mathbf {C} _{k}\mathbf {Q} _{k}^{-1}\mathbf {C} _{k}^{\textsf {T}}\\{\hat {\mathbf {y} }}_{k\mid k-1}&=\mathbf {L} _{k}\left[\mathbf {F} _{k}^{-1}\right]^{\textsf {T}}{\hat {\mathbf {y} }}_{k-1\mid k-1}\end{aligned}}$
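A sketch of the measurement update in information form, showing how any number of measurements is fused by simple summation (the function name is illustrative):

import numpy as np

def information_update(Y, y, measurements):
    # measurements is an iterable of (H, R, z) triples; each contributes
    # its information matrix I_k and information vector i_k additively.
    for H, R, z in measurements:
        R_inv = np.linalg.inv(R)
        Y = Y + H.T @ R_inv @ H     # information matrix update
        y = y + H.T @ R_inv @ z     # information vector update
    return Y, y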
Fixed-lag smoother
The optimal fixed-lag smoother provides the optimal estimate of ${\hat {\mathbf {x} }}_{k-N\mid k}$ for a given fixed-lag $N$ using the measurements from $\mathbf {z} _{1}$ to $\mathbf {z} _{k}$.[53] It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:
${\begin{bmatrix}{\hat {\mathbf {x} }}_{t\mid t}\\{\hat {\mathbf {x} }}_{t-1\mid t}\\\vdots \\{\hat {\mathbf {x} }}_{t-N+1\mid t}\\\end{bmatrix}}={\begin{bmatrix}\mathbf {I} \\0\\\vdots \\0\\\end{bmatrix}}{\hat {\mathbf {x} }}_{t\mid t-1}+{\begin{bmatrix}0&\ldots &0\\\mathbf {I} &0&\vdots \\\vdots &\ddots &\vdots \\0&\ldots &\mathbf {I} \\\end{bmatrix}}{\begin{bmatrix}{\hat {\mathbf {x} }}_{t-1\mid t-1}\\{\hat {\mathbf {x} }}_{t-2\mid t-1}\\\vdots \\{\hat {\mathbf {x} }}_{t-N+1\mid t-1}\\\end{bmatrix}}+{\begin{bmatrix}\mathbf {K} ^{(0)}\\\mathbf {K} ^{(1)}\\\vdots \\\mathbf {K} ^{(N-1)}\\\end{bmatrix}}\mathbf {y} _{t\mid t-1}$
where:
• ${\hat {\mathbf {x} }}_{t\mid t-1}$ is estimated via a standard Kalman filter;
• $\mathbf {y} _{t\mid t-1}=\mathbf {z} _{t}-\mathbf {H} {\hat {\mathbf {x} }}_{t\mid t-1}$ is the innovation produced considering the estimate of the standard Kalman filter;
• the various ${\hat {\mathbf {x} }}_{t-i\mid t}$ with $i=1,\ldots ,N-1$ are new variables; i.e., they do not appear in the standard Kalman filter;
• the gains are computed via the following scheme:
$\mathbf {K} ^{(i+1)}=\mathbf {P} ^{(i)}\mathbf {H} ^{\textsf {T}}\left[\mathbf {H} \mathbf {P} \mathbf {H} ^{\textsf {T}}+\mathbf {R} \right]^{-1}$
and
$\mathbf {P} ^{(i)}=\mathbf {P} \left[\left(\mathbf {F} -\mathbf {K} \mathbf {H} \right)^{\textsf {T}}\right]^{i}$
where $\mathbf {P} $ and $\mathbf {K} $ are the prediction error covariance and the gains of the standard Kalman filter (i.e., $\mathbf {P} _{t\mid t-1}$).
If the estimation error covariance is defined so that
$\mathbf {P} _{i}:=E\left[\left(\mathbf {x} _{t-i}-{\hat {\mathbf {x} }}_{t-i\mid t}\right)^{*}\left(\mathbf {x} _{t-i}-{\hat {\mathbf {x} }}_{t-i\mid t}\right)\mid z_{1}\ldots z_{t}\right],$
then we have that the improvement on the estimation of $\mathbf {x} _{t-i}$ is given by:
$\mathbf {P} -\mathbf {P} _{i}=\sum _{j=0}^{i}\left[\mathbf {P} ^{(j)}\mathbf {H} ^{\textsf {T}}\left(\mathbf {H} \mathbf {P} \mathbf {H} ^{\textsf {T}}+\mathbf {R} \right)^{-1}\mathbf {H} \left(\mathbf {P} ^{(i)}\right)^{\textsf {T}}\right]$
Fixed-interval smoothers
The optimal fixed-interval smoother provides the optimal estimate of ${\hat {\mathbf {x} }}_{k\mid n}$ ($k<n$) using the measurements from a fixed interval $\mathbf {z} _{1}$ to $\mathbf {z} _{n}$. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.
Rauch–Tung–Striebel
The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.[54]
The forward pass is the same as the regular Kalman filter algorithm. These filtered a priori and a posteriori state estimates ${\hat {\mathbf {x} }}_{k\mid k-1}$, ${\hat {\mathbf {x} }}_{k\mid k}$ and covariances $\mathbf {P} _{k\mid k-1}$, $\mathbf {P} _{k\mid k}$ are saved for use in the backward pass (for retrodiction).
In the backward pass, we compute the smoothed state estimates ${\hat {\mathbf {x} }}_{k\mid n}$ and covariances $\mathbf {P} _{k\mid n}$. We start at the last time step and proceed backward in time using the following recursive equations:
${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid n}&={\hat {\mathbf {x} }}_{k\mid k}+\mathbf {C} _{k}\left({\hat {\mathbf {x} }}_{k+1\mid n}-{\hat {\mathbf {x} }}_{k+1\mid k}\right)\\\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k}+\mathbf {C} _{k}\left(\mathbf {P} _{k+1\mid n}-\mathbf {P} _{k+1\mid k}\right)\mathbf {C} _{k}^{\textsf {T}}\end{aligned}}$
where
$\mathbf {C} _{k}=\mathbf {P} _{k\mid k}\mathbf {F} _{k+1}^{\textsf {T}}\mathbf {P} _{k+1\mid k}^{-1}.$
${\hat {\mathbf {x} }}_{k\mid k}$ is the a posteriori state estimate of timestep $k$ and ${\hat {\mathbf {x} }}_{k+1\mid k}$ is the a priori state estimate of timestep $k+1$. The same notation applies to the covariance.
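A sketch of the backward pass, assuming for simplicity a constant transition matrix F and the filtered (a posteriori) and predicted (a priori) quantities saved from the forward pass:

import numpy as np

def rts_backward(xs_filt, Ps_filt, xs_pred, Ps_pred, F):
    # xs_pred[k], Ps_pred[k] are the a priori quantities for step k,
    # i.e. predicted from step k-1; the recursion runs backward in time.
    xs, Ps = list(xs_filt), list(Ps_filt)
    for k in range(len(xs_filt) - 2, -1, -1):
        C = Ps_filt[k] @ F.T @ np.linalg.inv(Ps_pred[k + 1])
        xs[k] = xs_filt[k] + C @ (xs[k + 1] - xs_pred[k + 1])
        Ps[k] = Ps_filt[k] + C @ (Ps[k + 1] - Ps_pred[k + 1]) @ C.T
    return xs, Ps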
Modified Bryson–Frazier smoother
An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman.[44] This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance.
The recursive equations are
${\begin{aligned}{\tilde {\Lambda }}_{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\mathbf {H} _{k}+{\hat {\mathbf {C} }}_{k}^{\textsf {T}}{\hat {\Lambda }}_{k}{\hat {\mathbf {C} }}_{k}\\{\hat {\Lambda }}_{k-1}&=\mathbf {F} _{k}^{\textsf {T}}{\tilde {\Lambda }}_{k}\mathbf {F} _{k}\\{\hat {\Lambda }}_{n}&=0\\{\tilde {\lambda }}_{k}&=-\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\mathbf {y} _{k}+{\hat {\mathbf {C} }}_{k}^{\textsf {T}}{\hat {\lambda }}_{k}\\{\hat {\lambda }}_{k-1}&=\mathbf {F} _{k}^{\textsf {T}}{\tilde {\lambda }}_{k}\\{\hat {\lambda }}_{n}&=0\end{aligned}}$
where $\mathbf {S} _{k}$ is the residual covariance and ${\hat {\mathbf {C} }}_{k}=\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}$. The smoothed state and covariance can then be found by substitution in the equations
${\begin{aligned}\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k}-\mathbf {P} _{k\mid k}{\hat {\Lambda }}_{k}\mathbf {P} _{k\mid k}\\\mathbf {x} _{k\mid n}&=\mathbf {x} _{k\mid k}-\mathbf {P} _{k\mid k}{\hat {\lambda }}_{k}\end{aligned}}$
or
${\begin{aligned}\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}{\tilde {\Lambda }}_{k}\mathbf {P} _{k\mid k-1}\\\mathbf {x} _{k\mid n}&=\mathbf {x} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}{\tilde {\lambda }}_{k}.\end{aligned}}$
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.
Minimum-variance smoother
The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear and that their parameters and the noise statistics are known precisely.[55] This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
${\begin{aligned}{\hat {\mathbf {x} }}_{k+1\mid k}&=(\mathbf {F} _{k}-\mathbf {K} _{k}\mathbf {H} _{k}){\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\mathbf {z} _{k}\\\alpha _{k}&=-\mathbf {S} _{k}^{-{\frac {1}{2}}}\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {S} _{k}^{-{\frac {1}{2}}}\mathbf {z} _{k}\end{aligned}}$
The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass $\beta _{k}$ may be calculated by operating the forward equations on the time-reversed $\alpha _{k}$ and time reversing the result. In the case of output estimation, the smoothed estimate is given by
${\hat {\mathbf {y} }}_{k\mid N}=\mathbf {z} _{k}-\mathbf {R} _{k}\beta _{k}$
Taking the causal part of this minimum-variance smoother yields
${\hat {\mathbf {y} }}_{k\mid k}=\mathbf {z} _{k}-\mathbf {R} _{k}\mathbf {S} _{k}^{-{\frac {1}{2}}}\alpha _{k}$
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in the literature.[56][57]
Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[58]
In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Frequency-weighted Kalman filters
Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.
Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let $\mathbf {y} -{\hat {\mathbf {y} }}$ denote the output estimation error exhibited by a conventional Kalman filter. Also, let $\mathbf {W} $ denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of $\mathbf {W} \left(\mathbf {y} -{\hat {\mathbf {y} }}\right)$ arises by simply constructing $\mathbf {W} ^{-1}{\hat {\mathbf {y} }}$.
The design of $\mathbf {W} $ remains an open question. One way of proceeding is to identify a system which generates the estimation error and set $\mathbf {W} $ equal to the inverse of that system.[59] This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.
Nonlinear filters
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both.
The most common variants of Kalman filters for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is more suitable depends on the non-linearity indices of the process and observation models.[60]
Extended Kalman filter
Main article: Extended Kalman filter
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions, provided those functions are differentiable.
${\begin{aligned}\mathbf {x} _{k}&=f(\mathbf {x} _{k-1},\mathbf {u} _{k})+\mathbf {w} _{k}\\\mathbf {z} _{k}&=h(\mathbf {x} _{k})+\mathbf {v} _{k}\end{aligned}}$
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
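As an illustration, here is a minimal sketch of one EKF predict–update cycle using a forward-difference Jacobian; the function names and the numerical-differentiation choice are assumptions made for the example, not a prescribed implementation:

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(f, h, x, P, u, z, Q, R):
    # Predict: propagate the state through f, the covariance through its Jacobian.
    F = jacobian(lambda s: f(s, u), x)
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state.
    H = jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```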
Unscented Kalman filter
When the state transition and observation models (that is, the predict and update functions $f$ and $h$) are highly nonlinear, the extended Kalman filter can give particularly poor performance,[61][62] because the covariance is propagated through a linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)[61] uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, and a new mean and covariance estimate are formed from the results. The resulting filter depends on how the transformed statistics of the UT are calculated and on which set of sigma points is used. It should be remarked that it is always possible to construct new UKFs in a consistent way.[63] For certain systems, the resulting UKF estimates the true mean and covariance more accurately,[64] as can be verified with Monte Carlo sampling or a Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (requiring complicated derivatives if done analytically, or being computationally costly if done numerically), if not an impossible one (if those functions are not differentiable).
Sigma points
For a random vector $\mathbf {x} =(x_{1},\dots ,x_{L})$, sigma points are any set of vectors
$\{\mathbf {s} _{0},\dots ,\mathbf {s} _{N}\}={\bigl \{}{\begin{pmatrix}s_{0,1}&s_{0,2}&\ldots &s_{0,L}\end{pmatrix}},\dots ,{\begin{pmatrix}s_{N,1}&s_{N,2}&\ldots &s_{N,L}\end{pmatrix}}{\bigr \}}$
attributed with
• first-order weights $W_{0}^{a},\dots ,W_{N}^{a}$ that fulfill
1. $\sum _{j=0}^{N}W_{j}^{a}=1$
2. for all $i=1,\dots ,L$: $E[x_{i}]=\sum _{j=0}^{N}W_{j}^{a}s_{j,i}$
• second-order weights $W_{0}^{c},\dots ,W_{N}^{c}$ that fulfill
1. $\sum _{j=0}^{N}W_{j}^{c}=1$
2. for all pairs $(i,l)\in \{1,\dots ,L\}^{2}:E[x_{i}x_{l}]=\sum _{j=0}^{N}W_{j}^{c}s_{j,i}s_{j,l}$.
A simple choice of sigma points and weights for $\mathbf {x} _{k-1\mid k-1}$ in the UKF algorithm is
${\begin{aligned}\mathbf {s} _{0}&={\hat {\mathbf {x} }}_{k-1\mid k-1}\\-1&<W_{0}^{a}=W_{0}^{c}<1\\\mathbf {s} _{j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}+{\sqrt {\frac {L}{1-W_{0}}}}\mathbf {A} _{j},\quad j=1,\dots ,L\\\mathbf {s} _{L+j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}-{\sqrt {\frac {L}{1-W_{0}}}}\mathbf {A} _{j},\quad j=1,\dots ,L\\W_{j}^{a}&=W_{j}^{c}={\frac {1-W_{0}}{2L}},\quad j=1,\dots ,2L\end{aligned}}$
where ${\hat {\mathbf {x} }}_{k-1\mid k-1}$ is the mean estimate of $\mathbf {x} _{k-1\mid k-1}$. The vector $\mathbf {A} _{j}$ is the jth column of $\mathbf {A} $ where $\mathbf {P} _{k-1\mid k-1}=\mathbf {AA} ^{\textsf {T}}$. Typically, $\mathbf {A} $ is obtained via Cholesky decomposition of $\mathbf {P} _{k-1\mid k-1}$. With some care the filter equations can be expressed in such a way that $\mathbf {A} $ is evaluated directly without intermediate calculations of $\mathbf {P} _{k-1\mid k-1}$. This is referred to as the square-root unscented Kalman filter.[65]
The weight of the mean value, $W_{0}$, can be chosen arbitrarily.
Another popular parameterization (which generalizes the above) is
${\begin{aligned}\mathbf {s} _{0}&={\hat {\mathbf {x} }}_{k-1\mid k-1}\\W_{0}^{a}&={\frac {\alpha ^{2}\kappa -L}{\alpha ^{2}\kappa }}\\W_{0}^{c}&=W_{0}^{a}+1-\alpha ^{2}+\beta \\\mathbf {s} _{j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}+\alpha {\sqrt {\kappa }}\mathbf {A} _{j},\quad j=1,\dots ,L\\\mathbf {s} _{L+j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}-\alpha {\sqrt {\kappa }}\mathbf {A} _{j},\quad j=1,\dots ,L\\W_{j}^{a}&=W_{j}^{c}={\frac {1}{2\alpha ^{2}\kappa }},\quad j=1,\dots ,2L.\end{aligned}}$
$\alpha $ and $\kappa $ control the spread of the sigma points. $\beta $ is related to the distribution of $x$.
Appropriate values depend on the problem at hand, but a typical recommendation is $\alpha =10^{-3}$, $\kappa =1$, and $\beta =2$. However, a larger value of $\alpha $ (e.g., $\alpha =1$) may be beneficial in order to better capture the spread of the distribution and possible nonlinearities.[66] If the true distribution of $x$ is Gaussian, $\beta =2$ is optimal.[67]
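A short sketch of sigma-point generation under the parameterization just given, using Cholesky factorization for $\mathbf {A} $ as mentioned earlier, might look as follows; the function name and defaults are illustrative:

```python
import numpy as np

def sigma_points(x_mean, P, alpha=1e-3, kappa=1.0, beta=2.0):
    """Sigma points and first/second-order weights for the generalized
    parameterization above. x_mean is the state estimate, P its covariance."""
    L = x_mean.size
    A = np.linalg.cholesky(P)                  # P = A A^T
    c = alpha * np.sqrt(kappa)                 # spread factor alpha*sqrt(kappa)
    pts = np.empty((2 * L + 1, L))
    pts[0] = x_mean
    for j in range(L):
        pts[1 + j] = x_mean + c * A[:, j]
        pts[1 + L + j] = x_mean - c * A[:, j]
    Wa = np.full(2 * L + 1, 1.0 / (2.0 * alpha**2 * kappa))
    Wc = Wa.copy()
    Wa[0] = (alpha**2 * kappa - L) / (alpha**2 * kappa)
    Wc[0] = Wa[0] + 1.0 - alpha**2 + beta
    return pts, Wa, Wc
```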
Predict
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
Given estimates of the mean and covariance, ${\hat {\mathbf {x} }}_{k-1\mid k-1}$ and $\mathbf {P} _{k-1\mid k-1}$, one obtains $N=2L+1$ sigma points as described in the section above. The sigma points are propagated through the transition function f.
$\mathbf {x} _{j}=f\left(\mathbf {s} _{j}\right)\quad j=0,\dots ,2L$.
The propagated sigma points are weighted to produce the predicted mean and covariance.
${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid k-1}&=\sum _{j=0}^{2L}W_{j}^{a}\mathbf {x} _{j}\\\mathbf {P} _{k\mid k-1}&=\sum _{j=0}^{2L}W_{j}^{c}\left(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\left(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)^{\textsf {T}}+\mathbf {Q} _{k}\end{aligned}}$
where $W_{j}^{a}$ are the first-order weights of the original sigma points, and $W_{j}^{c}$ are the second-order weights. The matrix $\mathbf {Q} _{k}$ is the covariance of the transition noise, $\mathbf {w} _{k}$.
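Using the sigma-point helper sketched above, the prediction step can be written compactly; again, this is an illustrative sketch rather than a reference implementation:

```python
import numpy as np

def ukf_predict(f, x_mean, P, Q, **sp_kwargs):
    """UKF prediction: propagate sigma points through f, then form the
    weighted mean and covariance (uses sigma_points from the sketch above)."""
    pts, Wa, Wc = sigma_points(x_mean, P, **sp_kwargs)
    X = np.array([f(s) for s in pts])          # x_j = f(s_j), j = 0..2L
    x_pred = Wa @ X                            # first-order (mean) weights
    d = X - x_pred
    P_pred = (Wc[:, None] * d).T @ d + Q       # second-order weights, plus Q_k
    return x_pred, P_pred
```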
Update
Given prediction estimates ${\hat {\mathbf {x} }}_{k\mid k-1}$ and $\mathbf {P} _{k\mid k-1}$, a new set of $N=2L+1$ sigma points $\mathbf {s} _{0},\dots ,\mathbf {s} _{2L}$ with corresponding first-order weights $W_{0}^{a},\dots ,W_{2L}^{a}$ and second-order weights $W_{0}^{c},\dots ,W_{2L}^{c}$ is calculated.[68] These sigma points are transformed through the measurement function $h$.
$\mathbf {z} _{j}=h(\mathbf {s} _{j}),\,\,j=0,1,\dots ,2L$.
Then the empirical mean and covariance of the transformed points are calculated.
${\begin{aligned}{\hat {\mathbf {z} }}&=\sum _{j=0}^{2L}W_{j}^{a}\mathbf {z} _{j}\\[6pt]{\hat {\mathbf {S} }}_{k}&=\sum _{j=0}^{2L}W_{j}^{c}(\mathbf {z} _{j}-{\hat {\mathbf {z} }})(\mathbf {z} _{j}-{\hat {\mathbf {z} }})^{\textsf {T}}+\mathbf {R_{k}} \end{aligned}}$
where $\mathbf {R} _{k}$ is the covariance matrix of the observation noise, $\mathbf {v} _{k}$. Additionally, the cross covariance matrix is also needed
${\begin{aligned}\mathbf {C_{xz}} &=\sum _{j=0}^{2L}W_{j}^{c}(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k|k-1})(\mathbf {z} _{j}-{\hat {\mathbf {z} }})^{\textsf {T}}.\end{aligned}}$
The Kalman gain is
${\begin{aligned}\mathbf {K} _{k}=\mathbf {C_{xz}} {\hat {\mathbf {S} }}_{k}^{-1}.\end{aligned}}$
The updated mean and covariance estimates are
${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid k}&={\hat {\mathbf {x} }}_{k|k-1}+\mathbf {K} _{k}(\mathbf {z} _{k}-{\hat {\mathbf {z} }})\\\mathbf {P} _{k\mid k}&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}{\hat {\mathbf {S} }}_{k}\mathbf {K} _{k}^{\textsf {T}}.\end{aligned}}$
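The corresponding update step, continuing the same sketch:

```python
import numpy as np

def ukf_update(h, x_pred, P_pred, z, R, **sp_kwargs):
    """UKF update: fresh sigma points are transformed through h and the
    prediction is corrected, mirroring the equations above."""
    pts, Wa, Wc = sigma_points(x_pred, P_pred, **sp_kwargs)
    Z = np.array([h(s) for s in pts])          # z_j = h(s_j)
    z_hat = Wa @ Z                             # empirical measurement mean
    dz = Z - z_hat
    dx = pts - x_pred
    S = (Wc[:, None] * dz).T @ dz + R          # empirical innovation covariance
    C = (Wc[:, None] * dx).T @ dz              # cross covariance C_xz
    K = C @ np.linalg.inv(S)                   # Kalman gain
    return x_pred + K @ (z - z_hat), P_pred - K @ S @ K.T
```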
Discriminative Kalman filter
When the observation model $p(\mathbf {z} _{k}\mid \mathbf {x} _{k})$ is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate
$p(\mathbf {z} _{k}\mid \mathbf {x} _{k})\approx {\frac {p(\mathbf {x} _{k}\mid \mathbf {z} _{k})}{p(\mathbf {x} _{k})}}$
where $p(\mathbf {x} _{k}\mid \mathbf {z} _{k})\approx {\mathcal {N}}(g(\mathbf {z} _{k}),Q(\mathbf {z} _{k}))$ for nonlinear functions $g,Q$. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations.
Under a stationary state model
${\begin{aligned}p(\mathbf {x} _{1})&={\mathcal {N}}(0,\mathbf {T} ),\\p(\mathbf {x} _{k}\mid \mathbf {x} _{k-1})&={\mathcal {N}}(\mathbf {F} \mathbf {x} _{k-1},\mathbf {C} ),\end{aligned}}$
where $\mathbf {T} =\mathbf {F} \mathbf {T} \mathbf {F} ^{\intercal }+\mathbf {C} $, if
$p(\mathbf {x} _{k}\mid \mathbf {z} _{1:k})\approx {\mathcal {N}}({\hat {\mathbf {x} }}_{k|k-1},\mathbf {P} _{k|k-1}),$
then given a new observation $\mathbf {z} _{k}$, it follows that[69]
$p(\mathbf {x} _{k+1}\mid \mathbf {z} _{1:k+1})\approx {\mathcal {N}}({\hat {\mathbf {x} }}_{k+1|k},\mathbf {P} _{k+1|k})$
where
${\begin{aligned}\mathbf {M} _{k+1}&=\mathbf {F} \mathbf {P} _{k|k-1}\mathbf {F} ^{\intercal }+\mathbf {C} ,\\\mathbf {P} _{k+1|k}&=(\mathbf {M} _{k+1}^{-1}+Q(\mathbf {z} _{k})^{-1}-\mathbf {T} ^{-1})^{-1},\\{\hat {\mathbf {x} }}_{k+1|k}&=\mathbf {P} _{k+1|k}(\mathbf {M} _{k+1}^{-1}\mathbf {F} {\hat {\mathbf {x} }}_{k|k-1}+Q(\mathbf {z} _{k})^{-1}g(\mathbf {z} _{k})).\end{aligned}}$
Note that this approximation requires $Q(\mathbf {z} _{k})^{-1}-\mathbf {T} ^{-1}$ to be positive-definite; in the case that it is not,
$\mathbf {P} _{k+1|k}=(\mathbf {M} _{k+1}^{-1}+Q(\mathbf {z} _{k})^{-1})^{-1}$
is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states[70] and can be used to build filters that are particularly robust to nonstationarities in the observation model.[71]
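A sketch of one discriminative-filter recursion, assuming the learned functions g(z) and Q(z) are supplied externally (for example, by a regression model); the positive-definiteness check mirrors the fallback described above, and all names are illustrative:

```python
import numpy as np

def dkf_step(F, C, T, g, Qfun, x, P, z):
    """One discriminative Kalman filter recursion, following the equations
    above; drops the T^{-1} term when positive-definiteness fails."""
    M = F @ P @ F.T + C
    Minv = np.linalg.inv(M)
    Qinv = np.linalg.inv(Qfun(z))
    Tinv = np.linalg.inv(T)
    prec = Minv + Qinv - Tinv
    # Require Q(z)^{-1} - T^{-1} positive definite; otherwise drop T^{-1}.
    if np.any(np.linalg.eigvalsh(Qinv - Tinv) <= 0):
        prec = Minv + Qinv
    P_new = np.linalg.inv(prec)
    x_new = P_new @ (Minv @ F @ x + Qinv @ g(z))
    return x_new, P_new
```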
Adaptive Kalman filter
Adaptive Kalman filters allow the filter to adapt to process dynamics that are not modeled in the process model $\mathbf {F} (t)$, as happens, for example, when a constant-velocity (reduced-order) Kalman filter is employed to track a maneuvering target.[72]
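The text does not fix a particular adaptation rule; one common heuristic (an assumption here, not the cited method) is to inflate the process noise when the normalized innovation grows too large:

```python
import numpy as np

def adapt_process_noise(innovation, S, Q, threshold=2.0, inflation=10.0):
    """If the normalized innovation squared exceeds a threshold, inflate Q
    so the filter reacts to unmodeled dynamics (e.g., a maneuver)."""
    nis = float(innovation @ np.linalg.inv(S) @ innovation) / innovation.size
    return inflation * Q if nis > threshold else Q
```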
Kalman–Bucy filter
Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous time version of Kalman filtering.[73][74]
It is based on the state space model
${\begin{aligned}{\frac {d}{dt}}\mathbf {x} (t)&=\mathbf {F} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {w} (t)\\\mathbf {z} (t)&=\mathbf {H} (t)\mathbf {x} (t)+\mathbf {v} (t)\end{aligned}}$
where $\mathbf {Q} (t)$ and $\mathbf {R} (t)$ represent the intensities (or, more accurately: the Power Spectral Density - PSD - matrices) of the two white noise terms $\mathbf {w} (t)$ and $\mathbf {v} (t)$, respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
${\begin{aligned}{\frac {d}{dt}}{\hat {\mathbf {x} }}(t)&=\mathbf {F} (t){\hat {\mathbf {x} }}(t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {K} (t)\left(\mathbf {z} (t)-\mathbf {H} (t){\hat {\mathbf {x} }}(t)\right)\\{\frac {d}{dt}}\mathbf {P} (t)&=\mathbf {F} (t)\mathbf {P} (t)+\mathbf {P} (t)\mathbf {F} ^{\textsf {T}}(t)+\mathbf {Q} (t)-\mathbf {K} (t)\mathbf {R} (t)\mathbf {K} ^{\textsf {T}}(t)\end{aligned}}$
where the Kalman gain is given by
$\mathbf {K} (t)=\mathbf {P} (t)\mathbf {H} ^{\textsf {T}}(t)\mathbf {R} ^{-1}(t)$
Note that in this expression for $\mathbf {K} (t)$ the covariance of the observation noise $\mathbf {R} (t)$ represents at the same time the covariance of the prediction error (or innovation) ${\tilde {\mathbf {y} }}(t)=\mathbf {z} (t)-\mathbf {H} (t){\hat {\mathbf {x} }}(t)$; these covariances are equal only in the case of continuous time.[75]
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
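A crude forward-Euler integration of the two differential equations, for illustration only (a real implementation would use a better integrator and enforce symmetry of P; the constant F, B, H, Q, R and the callable z and u are assumptions of the sketch):

```python
import numpy as np

def kalman_bucy_euler(F, B, H, Q, R, u, z, x0, P0, dt, n_steps):
    """Euler integration of the Kalman-Bucy state and covariance ODEs.
    u(t) and z(t) are callables returning the input and measurement."""
    x, P = x0.copy(), P0.copy()
    Rinv = np.linalg.inv(R)
    for k in range(n_steps):
        t = k * dt
        K = P @ H.T @ Rinv                       # K(t) = P H^T R^{-1}
        dx = F @ x + B @ u(t) + K @ (z(t) - H @ x)
        dP = F @ P + P @ F.T + Q - K @ R @ K.T   # Riccati ODE
        x, P = x + dt * dx, P + dt * dP
    return x, P
```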
Hybrid Kalman filter
Most physical systems are represented by continuous-time models, while measurements for state estimation are taken at discrete time instants by a digital processor. Therefore, the system model and measurement model are given by
${\begin{aligned}{\dot {\mathbf {x} }}(t)&=\mathbf {F} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {w} (t),&\mathbf {w} (t)&\sim N\left(\mathbf {0} ,\mathbf {Q} (t)\right)\\\mathbf {z} _{k}&=\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k},&\mathbf {v} _{k}&\sim N(\mathbf {0} ,\mathbf {R} _{k})\end{aligned}}$
where
$\mathbf {x} _{k}=\mathbf {x} (t_{k})$.
Initialize
${\hat {\mathbf {x} }}_{0\mid 0}=E\left[\mathbf {x} (t_{0})\right],\mathbf {P} _{0\mid 0}=\operatorname {Var} \left[\mathbf {x} \left(t_{0}\right)\right]$
Predict
${\begin{aligned}{\dot {\hat {\mathbf {x} }}}(t)&=\mathbf {F} (t){\hat {\mathbf {x} }}(t)+\mathbf {B} (t)\mathbf {u} (t){\text{, with }}{\hat {\mathbf {x} }}\left(t_{k-1}\right)={\hat {\mathbf {x} }}_{k-1\mid k-1}\\\Rightarrow {\hat {\mathbf {x} }}_{k\mid k-1}&={\hat {\mathbf {x} }}\left(t_{k}\right)\\{\dot {\mathbf {P} }}(t)&=\mathbf {F} (t)\mathbf {P} (t)+\mathbf {P} (t)\mathbf {F} (t)^{\textsf {T}}+\mathbf {Q} (t){\text{, with }}\mathbf {P} \left(t_{k-1}\right)=\mathbf {P} _{k-1\mid k-1}\\\Rightarrow \mathbf {P} _{k\mid k-1}&=\mathbf {P} \left(t_{k}\right)\end{aligned}}$
The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., $\mathbf {K} (t)=0$. The predicted state and covariance are calculated by solving a set of differential equations with the initial value equal to the estimate at the previous step.
For the case of linear time invariant systems, the continuous time dynamics can be exactly discretized into a discrete time system using matrix exponentials.
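One standard construction for this exact discretization is Van Loan's method; the sketch below uses SciPy's matrix exponential and assumes time-invariant F and Q:

```python
import numpy as np
from scipy.linalg import expm

def discretize(F, Q, dt):
    """Exact discretization of LTI dynamics via matrix exponentials
    (Van Loan's method); returns the discrete F_d and process noise Q_d."""
    n = F.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -F
    M[:n, n:] = Q
    M[n:, n:] = F.T
    E = expm(M * dt)
    Fd = E[n:, n:].T                 # equals expm(F * dt)
    Qd = Fd @ E[:n, n:]              # discrete process-noise covariance
    return Fd, Qd
```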
Update
${\begin{aligned}\mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}\right)^{-1}\\{\hat {\mathbf {x} }}_{k\mid k}&={\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\\\mathbf {P} _{k\mid k}&=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\end{aligned}}$
The update equations are identical to those of the discrete-time Kalman filter.
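Putting the pieces together, one hybrid predict–update cycle might be sketched as follows, reusing the discretize helper above; the zero-order-hold treatment of the input is an approximation made for brevity:

```python
import numpy as np

def hybrid_step(F, B, H, Q, R, u, x, P, z, dt):
    """One hybrid cycle: continuous-time prediction over dt (via the exact
    LTI discretization sketched above), then the discrete-time update."""
    Fd, Qd = discretize(F, Q, dt)
    x_pred = Fd @ x + dt * (B @ u)             # first-order input approximation
    P_pred = Fd @ P @ Fd.T + Qd
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(x_new.size) - K @ H) @ P_pred
    return x_new, P_new
```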
Variants for the recovery of sparse signals
The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works[76][77][78] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
Relation to Gaussian processes
Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.[79]
Applications
• Attitude and heading reference systems
• Autopilot
• Electric battery state of charge (SoC) estimation[80][81]
• Brain–computer interfaces[69][71][70]
• Chaotic signals
• Tracking and vertex fitting of charged particles in particle detectors[82]
• Tracking of objects in computer vision
• Dynamic positioning in shipping
• Economics, in particular macroeconomics, time series analysis, and econometrics[83]
• Inertial guidance system
• Nuclear medicine – single photon emission computed tomography image restoration[84]
• Orbit determination
• Power system state estimation
• Radar tracker
• Satellite navigation systems
• Seismology[85]
• Sensorless control of AC motor variable-frequency drives
• Simultaneous localization and mapping
• Speech enhancement
• Visual odometry
• Weather forecasting
• Navigation system
• 3D modeling
• Structural health monitoring
• Human sensorimotor processing[86]
See also
• Alpha beta filter
• Inverse-variance weighting
• Covariance intersection
• Data assimilation
• Ensemble Kalman filter
• Extended Kalman filter
• Fast Kalman filter
• Filtering problem (stochastic processes)
• Generalized filtering
• Invariant extended Kalman filter
• Kernel adaptive filter
• Masreliez's theorem
• Moving horizon estimation
• Particle filter estimator
• PID controller
• Predictor–corrector method
• Recursive least squares filter
• Schmidt–Kalman filter
• Separation principle
• Sliding mode control
• State-transition matrix
• Stochastic differential equations
• Switching Kalman filter
References
1. Stratonovich, R. L. (1959). Optimum nonlinear systems which bring about a separation of a signal with constant parameters from noise. Radiofizika, 2:6, pp. 892–901.
2. Stratonovich, R. L. (1959). On the theory of optimal non-linear filtering of random functions. Theory of Probability and Its Applications, 4, pp. 223–225.
3. Stratonovich, R. L. (1960) Application of the Markov processes theory to optimal filtering. Radio Engineering and Electronic Physics, 5:11, pp. 1–19.
4. Stratonovich, R. L. (1960). Conditional Markov Processes. Theory of Probability and Its Applications, 5, pp. 156–178.
5. Stepanov, O. A. (15 May 2011). "Kalman filtering: Past and present. An outlook from Russia. (On the occasion of the 80th birthday of Rudolf Emil Kalman)". Gyroscopy and Navigation. 2 (2): 105. doi:10.1134/S2075108711020076. S2CID 53120402.
6. Fauzi, Hilman; Batool, Uzma (15 July 2019). "A Three-bar Truss Design using Single-solution Simulated Kalman Filter Optimizer". Mekatronika. 1 (2): 98–102. doi:10.15282/mekatronika.v1i2.4991. S2CID 222355496.
7. Paul Zarchan; Howard Musoff (2000). Fundamentals of Kalman Filtering: A Practical Approach. American Institute of Aeronautics and Astronautics, Incorporated. ISBN 978-1-56347-455-2.
8. Lora-Millan, Julio S.; Hidalgo, Andres F.; Rocon, Eduardo (2021). "An IMUs-Based Extended Kalman Filter to Estimate Gait Lower Limb Sagittal Kinematics for the Control of Wearable Robotic Devices". IEEE Access. 9: 144540–144554. doi:10.1109/ACCESS.2021.3122160. ISSN 2169-3536. S2CID 239938971.
9. Kalita, Diana; Lyakhov, Pavel (December 2022). "Moving Object Detection Based on a Combination of Kalman Filter and Median Filtering". Big Data and Cognitive Computing. 6 (4): 142. doi:10.3390/bdcc6040142. ISSN 2504-2289.
10. Ghysels, Eric; Marcellino, Massimiliano (2018). Applied Economic Forecasting using Time Series Methods. New York, NY: Oxford University Press. p. 419. ISBN 978-0-19-062201-5. OCLC 1010658777.
11. Azzam, M. Abdullah; Batool, Uzma; Fauzi, Hilman (15 July 2019). "Design of an Helical Spring using Single-solution Simulated Kalman Filter Optimizer". Mekatronika. 1 (2): 93–97. doi:10.15282/mekatronika.v1i2.4990. S2CID 221855079.
12. Wolpert, Daniel; Ghahramani, Zoubin (2000). "Computational principles of movement neuroscience". Nature Neuroscience. 3: 1212–7. doi:10.1038/81497. PMID 11127840. S2CID 736756.
13. Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering. 82: 35–45. doi:10.1115/1.3662552. S2CID 1242324.
14. Humpherys, Jeffrey (2012). "A Fresh Look at the Kalman Filter". SIAM Review. 54 (4): 801–823. doi:10.1137/100799666.
15. Uhlmann, Jeffrey; Julier, Simon (2022). "Gaussianity and the Kalman Filter: A Simple Yet Complicated Relationship" (PDF). Journal de Ciencia e Ingeniería. 14 (1): 21–26. doi:10.46571/JCI.2022.1.2. S2CID 251143915.
16. Li, Wangyan; Wang, Zidong; Wei, Guoliang; Ma, Lifeng; Hu, Jun; Ding, Derui (2015). "A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks". Discrete Dynamics in Nature and Society. 2015: 1–12. doi:10.1155/2015/683701. ISSN 1026-0226.
17. Li, Wangyan; Wang, Zidong; Ho, Daniel W. C.; Wei, Guoliang (2019). "On Boundedness of Error Covariances for Kalman Consensus Filtering Problems". IEEE Transactions on Automatic Control. 65 (6): 2654–2661. doi:10.1109/TAC.2019.2942826. ISSN 0018-9286. S2CID 204196474.
18. Lauritzen, S. L. (December 1981). "Time series analysis in 1880. A discussion of contributions made by T.N. Thiele". International Statistical Review. 49 (3): 319–331. doi:10.2307/1402616. JSTOR 1402616. He derives a recursive procedure for estimating the regression component and predicting the Brownian motion. The procedure is now known as Kalman filtering.
19. Lauritzen, S. L. (2002). Thiele: Pioneer in Statistics. New York: Oxford University Press. p. 41. ISBN 978-0-19-850972-1. He solves the problem of estimating the regression coefficients and predicting the values of the Brownian motion by the method of least squares and gives an elegant recursive procedure for carrying out the calculations. The procedure is nowadays known as Kalman filtering.
20. "Mohinder S. Grewal and Angus P. Andrews" (PDF). Archived from the original (PDF) on 2016-03-07. Retrieved 2015-04-23.
21. Jerrold H. Suddath; Robert H. Kidd; Arnold G. Reinhold (August 1967). A Linearized Error Analysis Of Onboard Primary Navigation Systems For The Apollo Lunar Module, NASA TN D-4027 (PDF).
22. Gaylor, David; Lightsey, E. Glenn (2003). "GPS/INS Kalman Filter Design for Spacecraft Operating in the Proximity of International Space Station". AIAA Guidance, Navigation, and Control Conference and Exhibit. doi:10.2514/6.2003-5445. ISBN 978-1-62410-090-1.
23. Ingvar Strid; Karl Walentin (April 2009). "Block Kalman Filtering for Large-Scale DSGE Models". Computational Economics. 33 (3): 277–304. CiteSeerX 10.1.1.232.3790. doi:10.1007/s10614-008-9160-4. hdl:10419/81929. S2CID 3042206.
24. Martin Møller Andreasen (2008). "Non-linear DSGE Models, The Central Difference Kalman Filter, and The Mean Shifted Particle Filter" (PDF).
25. Roweis, S; Ghahramani, Z (1999). "A unifying review of linear gaussian models" (PDF). Neural Computation. 11 (2): 305–45. doi:10.1162/089976699300016674. PMID 9950734. S2CID 2590898.
26. Hamilton, J. (1994), Time Series Analysis, Princeton University Press. Chapter 13, 'The Kalman Filter'
27. Ishihara, J.Y.; Terra, M.H.; Campos, J.C.T. (2006). "Robust Kalman Filter for Descriptor Systems". IEEE Transactions on Automatic Control. 51 (8): 1354. doi:10.1109/TAC.2006.878741. S2CID 12741796.
28. Terra, Marco H.; Cerri, Joao P.; Ishihara, Joao Y. (2014). "Optimal Robust Linear Quadratic Regulator for Systems Subject to Uncertainties". IEEE Transactions on Automatic Control. 59 (9): 2586–2591. doi:10.1109/TAC.2014.2309282. S2CID 8810105.
29. Kelly, Alonzo (1994). "A 3D state space formulation of a navigation Kalman filter for autonomous vehicles" (PDF). DTIC Document: 13. Archived (PDF) from the original on December 30, 2014. 2006 Corrected Version Archived 2017-01-10 at the Wayback Machine
30. Reid, Ian; Term, Hilary. "Estimation II" (PDF). www.robots.ox.ac.uk. Oxford University. Retrieved 6 August 2014.
31. Rajamani, Murali (October 2007). Data-based Techniques to Improve State Estimation in Model Predictive Control (PDF) (PhD Thesis). University of Wisconsin–Madison. Archived from the original (PDF) on 2016-03-04. Retrieved 2011-04-04.
32. Rajamani, Murali R.; Rawlings, James B. (2009). "Estimation of the disturbance structure from data using semidefinite programming and optimal weighting". Automatica. 45 (1): 142–148. doi:10.1016/j.automatica.2008.05.032. S2CID 5699674.
33. "Autocovariance Least-Squares Toolbox". Jbrwww.che.wisc.edu. Retrieved 2021-08-18.
34. Bania, P.; Baranowski, J. (12 December 2016). Field Kalman Filter and its approximation. IEEE 55th Conference on Decision and Control (CDC). Las Vegas, NV, USA: IEEE. pp. 2875–2880.
35. Bar-Shalom, Yaakov; Li, X.-Rong; Kirubarajan, Thiagalingam (2001). Estimation with Applications to Tracking and Navigation. New York, USA: John Wiley & Sons, Inc. pp. 319 ff. doi:10.1002/0471221279. ISBN 0-471-41655-X.
36. Three optimality tests with numerical examples are described in Peter, Matisko (2012). "Optimality Tests and Adaptive Kalman Filter". 16th IFAC Symposium on System Identification. pp. 1523–1528. doi:10.3182/20120711-3-BE-2027.00011. ISBN 978-3-902823-06-9.
37. Spall, James C. (1995). "The Kantorovich inequality for error analysis of the Kalman filter with unknown noise distributions". Automatica. 31 (10): 1513–1517. doi:10.1016/0005-1098(95)00069-9.
38. Maryak, J.L.; Spall, J.C.; Heydon, B.D. (2004). "Use of the Kalman Filter for Inference in State-Space Models with Unknown Noise Distributions". IEEE Transactions on Automatic Control. 49: 87–90. doi:10.1109/TAC.2003.821415. S2CID 21143516.
39. Walrand, Jean; Dimakis, Antonis (August 2006). Random processes in Systems -- Lecture Notes (PDF). pp. 69–70.
40. Sant, Donald T. "Generalized least squares applied to time varying parameter models." Annals of Economic and Social Measurement, Volume 6, number 3. NBER, 1977. 301-314. Online Pdf
41. Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. New York: Prentice Hall. pp. 129–133. ISBN 978-0-13-638122-8.
42. Jingyang Lu. "False information injection attack on dynamic state estimation in multi-sensor systems", Fusion 2014
43. Thornton, Catherine L. (15 October 1976). Triangular Covariance Factorizations for Kalman Filtering (PhD). NASA. NASA Technical Memorandum 33-798.
44. Bierman, G.J. (1977). "Factorization Methods for Discrete Sequential Estimation". Factorization Methods for Discrete Sequential Estimation. Bibcode:1977fmds.book.....B.
45. Bar-Shalom, Yaakov; Li, X. Rong; Kirubarajan, Thiagalingam (July 2001). Estimation with Applications to Tracking and Navigation. New York: John Wiley & Sons. pp. 308–317. ISBN 978-0-471-41655-5.
46. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences (Third ed.). Baltimore, Maryland: Johns Hopkins University. p. 139. ISBN 978-0-8018-5414-9.
47. Higham, Nicholas J. (2002). Accuracy and Stability of Numerical Algorithms (Second ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. p. 680. ISBN 978-0-89871-521-7.
48. Särkkä, S.; Ángel F. García-Fernández (2021). "Temporal Parallelization of Bayesian Smoothers". IEEE Transactions on Automatic Control. 66 (1): 299–306. arXiv:1905.13002. doi:10.1109/TAC.2020.2976316. S2CID 213695560.
49. "Parallel Prefix Sum (Scan) with CUDA". developer.nvidia.com/. Retrieved 2020-02-21. The scan operation is a simple and powerful parallel primitive with a broad range of applications. In this chapter we have explained an efficient implementation of scan using CUDA, which achieves a significant speedup compared to a sequential implementation on a fast CPU, and compared to a parallel implementation in OpenGL on the same GPU. Due to the increasing power of commodity parallel processors such as GPUs, we expect to see data-parallel algorithms such as scan to increase in importance over the coming years.
50. Masreliez, C. Johan; Martin, R D (1977). "Robust Bayesian estimation for the linear model and robustifying the Kalman filter". IEEE Transactions on Automatic Control. 22 (3): 361–371. doi:10.1109/TAC.1977.1101538.
51. Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis. Heidelberg: Springer-Verlag Berlin. p. 435.
52. Gabriel T. Terejanu (2012-08-04). "Discrete Kalman Filter Tutorial" (PDF). Retrieved 2016-04-13.
53. Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. Englewood Cliffs, NJ: Prentice Hall, Inc. pp. 176–190. ISBN 978-0-13-638122-8.
54. Rauch, H.E.; Tung, F.; Striebel, C. T. (August 1965). "Maximum likelihood estimates of linear dynamic systems". AIAA Journal. 3 (8): 1445–1450. Bibcode:1965AIAAJ...3.1445.. doi:10.2514/3.3166.
55. Einicke, G.A. (March 2006). "Optimal and Robust Noncausal Filter Formulations". IEEE Transactions on Signal Processing. 54 (3): 1069–1077. Bibcode:2006ITSP...54.1069E. doi:10.1109/TSP.2005.863042. S2CID 15376718.
56. Einicke, G.A. (April 2007). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Transactions on Signal Processing. 55 (4): 1543–1547. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402. S2CID 16218530.
57. Einicke, G.A.; Ralston, J.C.; Hargrave, C.O.; Reid, D.C.; Hainsworth, D.W. (December 2008). "Longwall Mining Automation. An Application of Minimum-Variance Smoothing". IEEE Control Systems Magazine. 28 (6): 28–37. doi:10.1109/MCS.2008.929281. S2CID 36072082.
58. Einicke, G.A. (December 2009). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Transactions on Automatic Control. 54 (12): 2904–2908. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402. S2CID 16218530.
59. Einicke, G.A. (December 2014). "Iterative Frequency-Weighted Filtering and Smoothing Procedures". IEEE Signal Processing Letters. 21 (12): 1467–1470. Bibcode:2014ISPL...21.1467E. doi:10.1109/LSP.2014.2341641. S2CID 13569109.
60. Biswas, Sanat K.; Qiao, Li; Dempster, Andrew G. (2020-12-01). "A quantified approach of predicting suitability of using the Unscented Kalman Filter in a non-linear application". Automatica. 122: 109241. doi:10.1016/j.automatica.2020.109241. ISSN 0005-1098. S2CID 225028760.
61. Julier, Simon J.; Uhlmann, Jeffrey K. (2004). "Unscented filtering and nonlinear estimation". Proceedings of the IEEE. 92 (3): 401–422. doi:10.1109/JPROC.2003.823141. S2CID 9614092.
62. Julier, Simon J.; Uhlmann, Jeffrey K. (1997). "New extension of the Kalman filter to nonlinear systems" (PDF). In Kadar, Ivan (ed.). Signal Processing, Sensor Fusion, and Target Recognition VI. Proceedings of SPIE. Vol. 3. pp. 182–193. Bibcode:1997SPIE.3068..182J. CiteSeerX 10.1.1.5.2891. doi:10.1117/12.280797. S2CID 7937456. Retrieved 2008-05-03.
63. Menegaz, H. M. T.; Ishihara, J. Y.; Borges, G. A.; Vargas, A. N. (October 2015). "A Systematization of the Unscented Kalman Filter Theory". IEEE Transactions on Automatic Control. 60 (10): 2583–2598. doi:10.1109/tac.2015.2404511. hdl:20.500.11824/251. ISSN 0018-9286. S2CID 12606055.
64. Gustafsson, Fredrik; Hendeby, Gustaf (2012). "Some Relations Between Extended and Unscented Kalman Filters". IEEE Transactions on Signal Processing. 60 (2): 545–555. Bibcode:2012ITSP...60..545G. doi:10.1109/tsp.2011.2172431. S2CID 17876531.
65. Van der Merwe, R.; Wan, E.A. (2001). "The square-root unscented Kalman filter for state and parameter-estimation". 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221). Vol. 6. pp. 3461–3464. doi:10.1109/ICASSP.2001.940586. ISBN 0-7803-7041-4. S2CID 7290857.
66. Bitzer, S. (2016). "The UKF exposed: How it works, when it works and when it's better to sample". doi:10.5281/zenodo.44386.
67. Wan, E.A.; Van Der Merwe, R. (2000). "The unscented Kalman filter for nonlinear estimation" (PDF). Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373). p. 153. CiteSeerX 10.1.1.361.9373. doi:10.1109/ASSPCC.2000.882463. ISBN 978-0-7803-5800-3. S2CID 13992571.
68. Sarkka, Simo (September 2007). "On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems". IEEE Transactions on Automatic Control. 52 (9): 1631–1641. doi:10.1109/TAC.2007.904453.
69. Burkhart, Michael C.; Brandman, David M.; Franco, Brian; Hochberg, Leigh; Harrison, Matthew T. (2020). "The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models". Neural Computation. 32 (5): 969–1017. doi:10.1162/neco_a_01275. PMC 8259355. PMID 32187000. S2CID 212748230. Retrieved 26 March 2021.
70. Burkhart, Michael C. (2019). A Discriminative Approach to Bayesian Filtering with Applications to Human Neural Decoding (Thesis). Providence, RI, USA: Brown University. doi:10.26300/nhfp-xv22. Retrieved 26 March 2021.
71. Brandman, David M.; Burkhart, Michael C.; Kelemen, Jessica; Franco, Brian; Harrison, Matthew T.; Hochberg, Leigh R. (2018). "Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression". Neural Computation. 30 (11): 2986–3008. doi:10.1162/neco_a_01129. PMC 6685768. PMID 30216140. Retrieved 26 March 2021.
72. Bar-Shalom, Yaakov; Li, X.-Rong; Kirubarajan, Thiagalingam (2001). Estimation with Applications to Tracking and Navigation. New York, USA: John Wiley & Sons, Inc. pp. 421 ff. doi:10.1002/0471221279. ISBN 0-471-41655-X.
73. Bucy, R.S. and Joseph, P.D., Filtering for Stochastic Processes with Applications to Guidance, John Wiley & Sons, 1968; 2nd Edition, AMS Chelsea Publ., 2005. ISBN 0-8218-3782-6
74. Jazwinski, Andrew H., Stochastic processes and filtering theory, Academic Press, New York, 1970. ISBN 0-12-381550-9
75. Kailath, T. (1968). "An innovations approach to least-squares estimation--Part I: Linear filtering in additive white noise". IEEE Transactions on Automatic Control. 13 (6): 646–655. doi:10.1109/TAC.1968.1099025.
76. Vaswani, Namrata (2008). "Kalman filtered Compressed Sensing". 2008 15th IEEE International Conference on Image Processing. pp. 893–896. arXiv:0804.0819. doi:10.1109/ICIP.2008.4711899. ISBN 978-1-4244-1765-0. S2CID 9282476.
77. Carmi, Avishy; Gurfil, Pini; Kanevsky, Dimitri (2010). "Methods for sparse signal recovery using Kalman filtering with embedded pseudo-measurement norms and quasi-norms". IEEE Transactions on Signal Processing. 58 (4): 2405–2409. Bibcode:2010ITSP...58.2405C. doi:10.1109/TSP.2009.2038959. S2CID 10569233.
78. Zachariah, Dave; Chatterjee, Saikat; Jansson, Magnus (2012). "Dynamic Iterative Pursuit". IEEE Transactions on Signal Processing. 60 (9): 4967–4972. arXiv:1206.2496. Bibcode:2012ITSP...60.4967Z. doi:10.1109/TSP.2012.2203813. S2CID 18467024.
79. Särkkä, Simo; Hartikainen, Jouni; Svensson, Lennart; Sandblom, Fredrik (2015-04-22). "On the relation between Gaussian process quadratures and sigma-point methods". arXiv:1504.05994 [stat.ME].
80. Vasebi, Amir; Partovibakhsh, Maral; Bathaee, S. Mohammad Taghi (2007). "A novel combined battery model for state-of-charge estimation in lead-acid batteries based on extended Kalman filter for hybrid electric vehicle applications". Journal of Power Sources. 174 (1): 30–40. Bibcode:2007JPS...174...30V. doi:10.1016/j.jpowsour.2007.04.011.
81. Vasebi, A.; Bathaee, S.M.T.; Partovibakhsh, M. (2008). "Predicting state of charge of lead-acid batteries for hybrid electric vehicles by extended Kalman filter". Energy Conversion and Management. 49: 75–82. doi:10.1016/j.enconman.2007.05.017.
82. Fruhwirth, R. (1987). "Application of Kalman filtering to track and vertex fitting". Nuclear Instruments and Methods in Physics Research Section A. 262 (2–3): 444–450. Bibcode:1987NIMPA.262..444F. doi:10.1016/0168-9002(87)90887-4.
83. Harvey, Andrew C. (1994). "Applications of the Kalman filter in econometrics". In Bewley, Truman (ed.). Advances in Econometrics. New York: Cambridge University Press. pp. 285f. ISBN 978-0-521-46726-1.
84. Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.; Kuduvalli, G.R. (1994). "Two-dimensional restoration of single photon emission computed tomography images using the Kalman filter". IEEE Transactions on Medical Imaging. 13 (1): 102–109. doi:10.1109/42.276148. PMID 18218487.
85. Bock, Y.; Crowell, B.; Webb, F.; Kedar, S.; Clayton, R.; Miyahara, B. (2008). "Fusion of High-Rate GPS and Seismic Data: Applications to Early Warning Systems for Mitigation of Geological Hazards". AGU Fall Meeting Abstracts. 43: G43B–01. Bibcode:2008AGUFM.G43B..01B.
86. Wolpert, D. M.; Miall, R. C. (1996). "Forward Models for Physiological Motor Control". Neural Networks. 9 (8): 1265–1279. doi:10.1016/S0893-6080(96)00035-4. PMID 12662535.
Further reading
• Einicke, G.A. (2019). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future (2nd ed.). Amazon Prime Publishing. ISBN 978-0-6485115-0-2.
• Jinya Su; Baibing Li; Wen-Hua Chen (2015). "On existence, optimality and asymptotic stability of the Kalman filter with partially observed inputs". Automatica. 53: 149–154. doi:10.1016/j.automatica.2014.12.044.
• Gelb, A. (1974). Applied Optimal Estimation. MIT Press.
• Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems" (PDF). Journal of Basic Engineering. 82 (1): 35–45. doi:10.1115/1.3662552. S2CID 1242324. Archived from the original (PDF) on 2008-05-29. Retrieved 2008-05-03.
• Kalman, R.E.; Bucy, R.S. (1961). "New Results in Linear Filtering and Prediction Theory". Journal of Basic Engineering. 83: 95–108. CiteSeerX 10.1.1.361.6851. doi:10.1115/1.3658902. S2CID 8141345.
• Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press. ISBN 9780521405737.
• Roweis, S.; Ghahramani, Z. (1999). "A Unifying Review of Linear Gaussian Models" (PDF). Neural Computation. 11 (2): 305–345. doi:10.1162/089976699300016674. PMID 9950734. S2CID 2590898.
• Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience.
• Warwick, K. (1987). "Optimal observers for ARMA models". International Journal of Control. 46 (5): 1493–1503. doi:10.1080/00207178708933989.
• Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. ISBN 978-0-486-44981-4.
• Bozic, S.M. (1994). Digital and Kalman filtering. Butterworth–Heinemann.
• Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.
• Liu, W.; Principe, J.C.; Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.
• Manolakis, D.G. (1999). Statistical and Adaptive signal processing. Artech House.
• Welch, Greg; Bishop, Gary (1997). "SCAAT: incremental tracking with incomplete information" (PDF). SIGGRAPH '97 Proceedings of the 24th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co. pp. 333–344. doi:10.1145/258734.258876. ISBN 978-0-89791-896-1. S2CID 1512754.
• Jazwinski, Andrew H. (1970). Stochastic Processes and Filtering. Mathematics in Science and Engineering. New York: Academic Press. p. 376. ISBN 978-0-12-381550-7.
• Maybeck, Peter S. (1979). "Chapter 1" (PDF). Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering. Vol. 141–1. New York: Academic Press. ISBN 978-0-12-480701-3.
• Moriya, N. (2011). Primer to Kalman Filtering: A Physicist Perspective. New York: Nova Science Publishers, Inc. ISBN 978-1-61668-311-5.
• Dunik, J.; Simandl, M.; Straka, O. (2009). "Methods for Estimating State and Measurement Noise Covariance Matrices: Aspects and Comparison". 15th IFAC Symposium on System Identification, 2009. France. pp. 372–377. doi:10.3182/20090706-3-FR-2004.00061. ISBN 978-3-902661-47-0.
• Chui, Charles K.; Chen, Guanrong (2009). Kalman Filtering with Real-Time Applications. Springer Series in Information Sciences. Vol. 17 (4th ed.). New York: Springer. p. 229. ISBN 978-3-540-87848-3.
• Spivey, Ben; Hedengren, J. D.; Edgar, T. F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research. 49 (17): 7824–7831. doi:10.1021/ie9018116.
• Thomas Kailath; Ali H. Sayed; Babak Hassibi (2000). Linear Estimation. NJ: Prentice–Hall. ISBN 978-0-13-022464-4.
• Ali H. Sayed (2008). Adaptive Filters. NJ: Wiley. ISBN 978-0-470-25388-5.
External links
• A New Approach to Linear Filtering and Prediction Problems, by R. E. Kalman, 1960
• Kalman and Bayesian Filters in Python. Open source Kalman filtering textbook.
• How a Kalman filter works, in pictures. Illuminates the Kalman filter with pictures and colors
• Kalman–Bucy Filter, a derivation of the Kalman–Bucy Filter
• MIT Video Lecture on the Kalman filter on YouTube
• An Introduction to the Kalman Filter, SIGGRAPH 2001 Course, Greg Welch and Gary Bishop
• Kalman Filter webpage, with many links
• Kalman Filter Explained Simply, Step-by-Step Tutorial of the Kalman Filter with Equations
• "Kalman filters used in Weather models" (PDF). SIAM News. 36 (8). October 2003. Archived from the original (PDF) on 2011-05-17. Retrieved 2007-01-27.
• Haseltine, Eric L.; Rawlings, James B. (2005). "Critical Evaluation of Extended Kalman Filtering and Moving-Horizon Estimation". Industrial & Engineering Chemistry Research. 44 (8): 2451. doi:10.1021/ie034308l.
• Gerald J. Bierman's Estimation Subroutine Library: Corresponds to the code in the research monograph "Factorization Methods for Discrete Sequential Estimation" originally published by Academic Press in 1977. Republished by Dover.
• Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library: UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter.
• Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: Vehicle moving in 1D, 2D and 3D
• The Kalman Filter in Reproducing Kernel Hilbert Spaces A comprehensive introduction.
• Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter Archived 2014-02-09 at the Wayback Machine: Corresponds to the paper "estimating and testing exponential-affine term structure models by kalman filter" published by Review of Quantitative Finance and Accounting in 1999.
• Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments.
• Botella, Guillermo; Martín h., José Antonio; Santos, Matilde; Meyer-Baese, Uwe (2011). "FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision". Sensors. 11 (12): 1251–1259. Bibcode:2011Senso..11.8164B. doi:10.3390/s110808164. PMC 3231703. PMID 22164069.
• Examples and how-to on using Kalman Filters with MATLAB A Tutorial on Filtering and Estimation
• Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho
• Simo Särkkä (2013). "Bayesian Filtering and Smoothing". Cambridge University Press. Full text available on author's webpage https://users.aalto.fi/~ssarkka/.
• Consider an abrupt PN junction (at T = 300 K) shown in the figure.
• Network solution methods: nodal and mesh analysis; Network theorems: superposition, Thevenin's and Norton's, maximum power transfer; Wye-Delta transformation; Steady-state sinusoidal analysis using phasors; Time-domain analysis of simple linear circuits; Solution of network equations using the Laplace transform; Frequency-domain analysis of RLC circuits; Linear 2-port network parameters: driving point and transfer functions; State equations for networks.
• If EC is the lowest energy level of the conduction band, EV is the highest energy level of the valence band and EF is the Fermi level, which one of the following represents the energy band diagram for the biased N-type semiconductor?
• The donor doping concentration ND and the mobility of electrons μn are 10^16 cm^-3 and 1000 cm^2 V^-1 s^-1, respectively.
• The built-in potential of an abrupt p-n junction is 0.75 V. If its junction capacitance (CJ) at a reverse bias (VR) of 1.25 V is 5 pF, the value of CJ (in pF) when VR = 7.25 V is _________ (see the sketch after this list).
• As shown, a uniformly doped silicon (Si) bar of length L = 0.1 µm with a donor concentration ND = 10^16 cm^-3 is illuminated at x = 0 such that electron-hole pairs are generated at the rate GL = GL0 (1 - x/L), 0 ≤ x ≤ L, where GL0 = 10^17 cm^-3 s^-1. The hole lifetime is 10^-4 s, the electronic charge is q = 1.6 × 10^-19 C, the hole diffusion coefficient is DP = 1000 cm^2/s, and low-level injection conditions prevail.
• The main aim is to strengthen students' skills and knowledge in order to develop them into good electronics engineers. The focus throughout the course lies on the applications of these technologies.
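Several of the fill-in-the-blank items in this section reduce to single-formula calculations. The sketch below collects the standard textbook relations involved (the abrupt-junction capacitance scaling used by the item above, plus the cut-off wavelength and drift-current relations used by items further down); the printed numbers are illustrative checks, not an official answer key:

```python
import math

def cj_abrupt(cj1, vbi, vr1, vr2):
    """Abrupt junction: C_J is proportional to (V_bi + V_R)^(-1/2)."""
    return cj1 * math.sqrt((vbi + vr1) / (vbi + vr2))

def cutoff_wavelength_um(eg_ev):
    """lambda_c = h*c / E_g, using the shortcut h*c ~= 1.24 eV*um."""
    return 1.24 / eg_ev

def drift_current_density(n_cm3, v_cm_s, q=1.6e-19):
    """Drift current density J = q*n*v in A/cm^2 (use the saturated drift
    velocity when the field is beyond the velocity-saturation knee)."""
    return q * n_cm3 * v_cm_s

print(cj_abrupt(5.0, 0.75, 1.25, 7.25))   # -> 2.5 pF
print(cutoff_wavelength_um(1.1))          # -> ~1.13 um
```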
Share Notes with your friends. Assuming a linearly decaying steady state excess hole concentration that goes to 0 at x=L, the magnitude of the diffusion current density at x=L/2, in A/cm2, is__________. Field Theory Contact Hours/Week Cr. Number systems; Combinatorial circuits: Boolean algebra, minimization of functions using Boolean identities and Karnaugh map, logic gates and their static CMOS implementations, arithmetic circuits, code converters, multiplexers, decoders and PLAs; Sequential circuits: latches and flip‐flops, counters, shift‐registers and finite state machines; Data converters: sample and hold circuits, ADCs and DACs; Semiconductor memories: ROM, SRAM, DRAM; 8-bit microprocessor (8085): architecture, programming, memory and I/O interfacing. AIM. MODULE 4. Match each device in Group I with its characteristic property in Group II. A BJT is biased in forward active mode. EC6202 Notes Syllabus all 5 units notes are uploaded here. The semiconductor has a uniform electron concentration of n =1 × 1016 cm-3 and electronic charge q =1.6 × 10-19 C. If a bias of 5 V is applied across a 1 µm region of this semiconductor, the resulting current density in this region, in kA/cm2, is____________. ECE 333: Electronics I Spring 2020 Catalog: Introduction to electronic devices and the basic circuits. The value of the resistance of the voltage Controlle resistor (in Ω) is_____. The GATE 2020 syllabus for ECE PDF will have a total of 8 sections covered in it and they are, Engineering Mathematics; Networks, Signals and Systems; Electronic Devices; Analog Circuits; Digital Circuits; Control Systems; Communications and Electromagnetics. For the MOSFET M1 shown in the figure, assume W/L = 2, VDD = 2.0 V,μnCox=100 μA/V2 and VTH = 0.5 V. The transistor M1 switches from saturation region to linear region when Vin (in Volts) is__________. Gate CSE Practice Questions; Algorithms Notes; Gate CSE Question Bank; C Programming; Gate Year Wise Solutions; Topic Wise Solutions; Gate 2021 Syllabus; ESE/IAS ECE; Menu Close. Note that $V_{GS}$ for M2 must be $>$1.0 V. The voltage (in volts, accurate to two decimal places) at $V_x$ is _______. The width of the depletion region is W and the electric field variation in the x-direction is E (x ). Continuous-time signals: Fourier series and Fourier transform representations, sampling theorem and applications; Discrete-time signals: discrete-time Fourier transform (DTFT), DFT, FFT, Z-transform, interpolation of discrete-time signals; LTI systems: definition and properties, causality, stability, impulse response, convolution, poles and zeros, parallel and cascade structure, frequency response, group delay, phase delay, digital filter design techniques. here EC6202 EDC Syllabus notes download link is provided and students can download the EC6202 Syllabus and Lecture Notes and can make use of it. Thomas L.Floyd, "Electronic devices" Conventional current version, Pearson prentice hall, … Assume kT/q = 25 mV. For Course Code, Subject Names, Theory Lectures, Tutorial, Practical/Drawing, Credits, and other information do visit full semester … 22.5K. ECN-203 Signals and Systems : DCC 4 : 17. The built-in potential and the depletion width of the diode under thermal equilibrium conditions, respectively, are. Bell-Electronics Devices and Circuits-Oxford 3. Anna University EC6202 Electronic Devices and Circuits Syllabus Notes 2 marks with answer is provided below. 
Assuming that the reverse bias voltage is much greater than the built-in potentials of the diodes, the ratio C2/C1 of their reverse bias capacitances for the same applied reverse bias is _________. Complex Analysis: Analytic functions, Cauchy's integral theorem, Cauchy's integral formula; Taylor's and Laurent's series, residue theorem. The transistor is of width 1 μm. ECE - I Sem L T/P/D C 4 -/-/- 4. Circuit Theory and Devices (CTD): This course intends to develop problem-solving skills and an understanding of circuit theory through the application of techniques and principles of electrical circuit analysis to common circuit problems. The course deals with the op-amp, the diode, the bipolar junction transistor, and the field-effect transistor. The measured transconductance gm of an NMOS transistor operating in the linear region is plotted against the gate voltage VG at constant drain voltage VD. The current in an enhancement-mode NMOS transistor biased in saturation mode was measured to be 1 mA at a drain-source voltage of 5 V. When the drain-source voltage was increased to 6 V while keeping the gate-source voltage the same, the drain current increased to 1.02 mA. Computers as Components: Principles of Embedded Computer System Design, Wayne Wolf, Elsevier. The charge of an electron is 1.6 × 10^-19 C. The resistivity of the sample (in Ω-cm) is _______. The slope of the line can be used to estimate _________. The cut-off wavelength (in μm) of light that can be used for intrinsic excitation of a semiconductor material of bandgap Eg = 1.1 eV is ________. Introduction to Electronics Engineering: Overview, scope and objective of studying Electronics … ECE 2300 - Electrical Circuits and Electronic Devices: standard syllabus | course page; ECE 2560 - Introduction to Microcontroller-Based Systems: standard syllabus | course page; ECE 2998.01 - Undergraduate Research: standard syllabus | course page; ECE 2998.02 - Undergraduate Research: standard syllabus | course page. Electronic Devices and Circuits: previous year question papers with solutions for Electronic Devices and Circuits from 2005 to 2018. Probability and Statistics: Mean, median, mode and standard deviation; combinatorial probability, probability distribution functions - binomial, Poisson, exponential and normal; joint and conditional probability; correlation and regression analysis. A silicon bar is doped with donor impurities ND = 2.25 × 10^15 atoms/cm^3. The gate-source overlap capacitance is approximately _________; the source-body junction capacitance is approximately _________. A silicon PN junction is forward biased with a constant current at room temperature. A solar cell of area 1.0 cm^2, operating at 1.0 sun intensity, has a short-circuit current of 20 mA and an open-circuit voltage of 0.65 V. Assuming room-temperature operation and a thermal equivalent voltage of 26 mV, the open-circuit voltage (in volts, correct to two decimal places) at 0.2 sun intensity is _______. If the minimum feature sizes that can be realized using System1 and System2 are $ L_{min1} $ and $ L_{min2} $ respectively, the ratio $ L_{min1}/L_{min2} $ (correct to two decimal places) is __________. The two nMOS transistors are otherwise identical.
Numerical Methods: Solution of nonlinear equations, single and multi-step methods for differential equations, convergence criteria. Which of the following is correct? A depletion-type N-channel MOSFET is biased in its linear region for use as a voltage-controlled resistor. Digital communications: PCM, DPCM, digital modulation schemes, amplitude, phase and frequency shift keying (ASK, PSK, FSK), QAM, MAP and ML decoding, matched filter receiver, calculation of bandwidth, SNR and BER for digital modulation; fundamentals of error correction, Hamming codes; timing and frequency synchronization, inter-symbol interference and its mitigation; basics of TDMA, FDMA and CDMA. A MOSFET in saturation has a drain current of 1 mA for VDS = 0.5 V. If the channel length modulation coefficient is 0.05 V^-1, the output resistance (in kΩ) of the MOSFET is _________. Assume threshold voltage VTH = -0.5 V, VGS = 2.0 V, VDS = 5 V, W/L = 100, COX = 10^-8 F/cm^2 and μn = 800 cm^2/V·s. Red (R), Green (G) and Blue (B) Light Emitting Diodes (LEDs) were fabricated using p-n junctions of three different inorganic semiconductors having different band-gaps. The built-in voltages of the red, green and blue diodes are $V_R$, $V_G$ and $V_B$, respectively. An abrupt pn junction (located at x = 0) is uniformly doped on both p and n sides. At T = 300 K, the band gap and the intrinsic carrier concentration of GaAs are 1.42 eV and 10^6 cm^-3, respectively. Given kT/q = 0.026 V, $ D_n $ = 36 cm^2 s^-1, and D/μ = kT/q. The electron current density (in A/cm^2) at x = 0 is _________. The channel length modulation parameter λ (in V^-1) is ______. Contents Contact Hours 1. For a silicon diode with long P and N regions, the acceptor and donor impurity concentrations are 1 × 10^17 cm^-3 and 1 × 10^15 cm^-3, respectively. Assume that the diode in the figure has Von = 0.7 V, but is otherwise ideal. If VD is adjusted to be 2 V by changing the values of R and VDD, the new value of ID (in mA) is _________. For the MOSFETs shown in the figure, the threshold voltage |Vt| = 2 V and … When the temperature is increased by 10 °C, the forward bias voltage across the PN junction _________. A Zener diode, when used in voltage stabilization circuits, is biased in _________. For a BJT, the common base current gain α = 0.98 and the collector-base junction reverse bias saturation current ICO = 0.6 μA.
Small signal equivalent circuits of diodes, BJTs and MOSFETs; simple diode circuits: clipping, clamping and rectifiers; single-stage BJT and MOSFET amplifiers: biasing, bias stability, mid-frequency small signal analysis and frequency response; BJT and MOSFET amplifiers: multi-stage, differential, feedback, power and operational; simple op-amp circuits; active filters; sinusoidal oscillators: criterion for oscillation, single-transistor and op-amp configurations; function generators, wave-shaping circuits and 555 timers; voltage reference circuits; power supplies: ripple removal and regulation. ECN-291 Electronic Network Theory DCC 4 20. EEN-112 Electrical Science ESC 4 15. From the ECE Department ABET syllabus for ECE 327 (as of 2010): Course supervisor: Professor Steven B. Bibyk. Catalog description: Transistor characteristics, large and small signal parameters, transistor bias and amplifier circuits, operational amplifiers, logic circuits, … Electronic Devices and Circuits detailed syllabus for Electronics & Communication Engineering (ECE), 2nd Year 1st Sem R18 regulation has been taken from the JNTUH official website and presented for the B.Tech students affiliated to the JNTUH course structure. Group I lists four types of p-n junction diodes. Embedded and Real-Time Systems by L. Gopinath, S. Kanimozhi 5. The dependence of drift velocity of electrons on field in a semiconductor is shown below. Semiconductor Devices DCC 4 14. Biasing, small-signal and large-signal analysis and the principles employed in the design of electronic circuits are included in the course. MODULE 3. In a MOS capacitor with an oxide layer thickness of 10 nm, the maximum depletion layer thickness is 100 nm. A bar of Gallium Arsenide (GaAs) is doped with Silicon such that the Silicon atoms occupy Gallium and Arsenic sites in the GaAs crystal. Given that the permittivity of silicon is 1.04 × 10^-12 F/cm, the depletion width on the p-side and the maximum electric field in the depletion region, respectively, are _________. In a p+n junction diode under reverse bias, the magnitude of electric field is maximum at _________. A p+n junction has a built-in potential of 0.8 V. The depletion layer width at a reverse bias of 1.2 V is 2 µm. The value of x is __________. MODULE 4. The magnitude of the current i2 (in mA) is equal to ________. A region of negative differential resistance is observed in the current-voltage characteristics of a silicon PN junction if _________. Assume that the drain-to-source saturation voltage is much smaller than the applied drain-source voltage. The source voltage VSS is varied from 0 to VDD. Textbook(s): myDAQ, National Instruments. Which of the following statements about estimates for $g_m$ and $r_o$ is correct? Anna University Regulation 2017 Electrical and Electronics Engineering (EEE) 3rd … The electric field profile in the depletion region of a p-n junction in equilibrium is shown in the figure. Syllabus for B.Tech (ECE) Second Year: Revised & Proposed Syllabus of B.Tech in ECE (to be followed from the academic session, July 2011, i.e. for the students who were admitted in Academic Session 2010-2011). Consider a silicon sample doped with ND = 1×10^15/cm^3 donor atoms.
Which one of the following processes is preferred to form the gate dielectric (SiO2) of MOSFETs? Match each device in Group I with one of the options in Group II to indicate the bias condition of that device in its normal mode of operation. Let $g_{m1},\;g_{m2}$ be the transconductances and $r_{o1},\;r_{o2}$ be the output resistances of transistors M1 and M2, respectively. S1: For the Zener effect to occur, a very abrupt junction is required. Embedded Syst… A forward bias of 0.3 V is applied to the diode. What is the voltage Vout in the following circuit? The thermal voltage (VT) is 25 mV and the current gain (β) may vary from 50 to 150 due to manufacturing variations. At room temperature ($T$ = 300 K), the magnitude of the built-in potential (in volts, correct to two decimal places) across this junction will be _________________. Vector Analysis: Vectors in plane and space, vector operations, gradient, divergence and curl, Gauss's, Green's and Stokes' theorems. The revised syllabus for Anna University Chennai Electrical and Electronics Engineering, 2017 Regulation, is given below. Note: these notes follow the R09 syllabus book of JNTUH; in R13, the 8 units of the R09 syllabus are combined into 5 units. Check Electronic Devices and Circuits notes for GATE and Electronics & Communication Engineering exam preparation. The charge of an electron is 1.6 × 10^-19 C. The conductivity (in S cm^-1) of the silicon sample at 300 K is ______. Sheet of Intelligent Instrumentation (compulsory) for 8th Sem BE (ECE), BIT Mesra: this resource is about the syllabus of Intelligent Instrumentation, a compulsory paper of the 8th semester at BIT Mesra, ECE branch, with subject code EC8101, along with the tutorial sheet. The whole syllabus has been broken down into 8 modules and questions are asked from each module most of … The relative permittivities of Si and SiO2, respectively, are 11.7 and 3.9, and ε0 = 8.9 × 10^-12 F/m. At the junction, the approximate value of the peak electric field (in kV/cm) is _________. A piece of silicon is doped uniformly with phosphorus with a doping concentration of 10^16/cm^3. Which of the following is NOT associated with a p-n junction? In the circuit shown below, the $ \left(W/L\right) $ value for M2 is twice that for M1. Signal System Lab 2. Given: Boltzmann constant $ k=1.38\times10^{-23}J\cdot K^{-1} $, electronic charge $ q=1.6\times10^{-19} $ C. Assume 100% acceptor ionization. Anna University EC8353 Electron Devices and Circuits notes are provided below. The expected value of mobility versus doping concentration for silicon assuming full dopant ionization is shown below. RRB JE Electronics Engineering Syllabus 2019: the Railway Recruitment Board has vacancies for the recruitment of 13487 junior engineers (JE), covering many posts. The GATE 2021 syllabus for the Electronics and Communication Engineering (ECE) branch will be released by the official exam-conducting authority in September 2021. This section of the GATE syllabus includes questions based on Verbal Ability and Numerical Ability. Assume that n(x) = 10^15 e^(qαx/kT) cm^-3, with α = 0.1 V/cm and x expressed in cm.
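For illustration, the cut-off wavelength question quoted above follows from the standard relation λc = hc/Eg; with hc ≈ 1.24 eV·μm and Eg = 1.1 eV, this gives λc ≈ 1.24/1.1 ≈ 1.13 μm, so light of wavelength up to about 1.1 μm can excite carriers across the gap.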
As shown, two Silicon (Si) abrupt p-n junction diodes are fabricated with uniform donor doping concentrations of ND1 = 10^14 cm^-3 and ND2 = 10^16 cm^-3 in the n-regions of the diodes, and uniform acceptor doping concentrations of NA1 = 10^14 cm^-3 and NA2 = 10^16 cm^-3 in the p-regions of the diodes, respectively. Random processes: autocorrelation and power spectral density, properties of white noise, filtering of random signals through LTI systems; analog communications: amplitude modulation and demodulation, angle modulation and demodulation, spectra of AM and FM, superheterodyne receivers, circuits for analog communications; information theory: entropy, mutual information and channel capacity theorem. The GATE 2019 ECE syllabus contains Engineering Mathematics, Signals and Systems, Networks, Electronic Devices, Analog Circuits, Digital Circuits, Control Systems, Communications, Electromagnetics, and General Aptitude. Assuming complete impurity ionization, the equilibrium electron and hole concentrations are _________. An increase in the base recombination of a BJT will increase _________. In CMOS technology, shallow P-well or N-well regions can be formed using _________. EC 2251 Electronic Circuits II. Assuming 10 $\tau$ << T, where $\tau$ is the time constant of the circuit, the maximum and minimum values of the waveform are respectively _________. Basic control system components; feedback principle; transfer function; block diagram representation; signal flow graph; transient and steady-state analysis of LTI systems; frequency response; Routh-Hurwitz and Nyquist stability criteria; Bode and root-locus plots; lag, lead and lag-lead compensation; state variable model and solution of state equation of LTI systems.
Textbooks: Balbir Kumar and Shail B. Jain, "Electronic Devices and Circuits", PHI Learning Private Limited, 2nd edition, 2014; "Electronic Devices: Conventional Current Version", Pearson Prentice Hall.
| CommonCrawl
Emily Stone (mathematician)
Emily Foster Stone is an American mathematician whose research includes work in fluid dynamics and dynamical systems. She is a professor of mathematics at the University of Montana, where she chairs the Department of Mathematical Sciences.[1] She is also chair of the Activity Group on Dynamical Systems of the Society for Industrial and Applied Mathematics.[2]
Education
Stone majored in physics at the University of California, Santa Cruz, graduating in 1984.[1] She completed her Ph.D. in theoretical and applied mechanics at Cornell University in 1989; her dissertation, A Study of Low-Dimensional Models for the Wall Region of a Turbulent Boundary Layer, was supervised by Philip Holmes.[1][3]
Career
Stone taught at Arizona State University from 1992 to 1993, and at Utah State University from 1993 to 2004, before joining the University of Montana faculty in 2004.[1]
Service
Stone was elected as chair of the Activity Group on Dynamical Systems (SIAG-DS) of the Society for Industrial and Applied Mathematics (SIAM) in 2020.[2] She was elected Vice Chair of the same SIAG in 2022.[4]
Selected publications
• Aubry, Nadine; Holmes, Philip; Lumley, John L.; Stone, Emily (July 1988), "The dynamics of coherent structures in the wall region of a turbulent boundary layer", Journal of Fluid Mechanics, 192: 115–173, Bibcode:1988JFM...192..115A, doi:10.1017/s0022112088001818, S2CID 122066384
• Stone, Emily; Holmes, Philip (June 1990), "Random perturbations of heteroclinic attractors", SIAM Journal on Applied Mathematics, 50 (3): 726–743, doi:10.1137/0150043
• Leary, G. P.; Stone, E. F.; Holley, D. C.; Kavanaugh, M. P. (March 2007), "The glutamate and chloride permeation pathways are colocalized in individual neuronal glutamate transporter subunits", Journal of Neuroscience, 27 (11): 2938–2942, doi:10.1523/jneurosci.4851-06.2007, PMC 6672579, PMID 17360916
References
1. "Emily Stone", Faculty, University of Montana Department of Mathematical Sciences, retrieved 2021-01-12
2. "SIAM Activity Groups Election Results", SIAM News, Society for Industrial and Applied Mathematics, January 6, 2020, retrieved 2021-01-12
3. Emily Stone at the Mathematics Genealogy Project
4. "SIAM Activity Groups Election Results". SIAM News. Retrieved 2022-05-28.
| Wikipedia |
MRI-guided coupling for a focused ultrasound system using a top-to-bottom propagation
Marinos Yiannakou1,
George Menikou2,
Christos Yiallouras1,3 &
Christakis Damianou ORCID: orcid.org/0000-0003-0424-28511
Journal of Therapeutic Ultrasound volume 5, Article number: 6 (2017) Cite this article
A novel magnetic resonance imaging (MRI)-conditional coupling system was developed that accommodates a focused ultrasound (FUS) transducer. With this coupling system, the transducer can access targets from top to bottom. The intended clinical application is treatment of fibroids using FUS with the patient placed in supine position.
The coupling system was manufactured using a rapid prototyping device using acrylonitrile butadiene styrene (ABS) plastic. Coupling to a gel phantom was achieved using a water bag filled with degassed water. The FUS transducer was immersed in the water bag.
The coupling system was successfully tested for MRI compatibility using fast-gradient pulse sequences in a gel phantom. The robotic system with its new coupling system was evaluated for its functionality for creating discrete and multiple (overlapping) lesions in the gel phantom.
An MRI-conditional FUS coupling system integrated with an existing robotic system was developed that has the potential to create thermal lesions in targets using a top-to-bottom approach. This system has the potential to treat fibroid tumors with the patient lying in supine position.
The main available treatment options for uterine fibroids include hysterectomy, myomectomy, and uterine artery embolization. Hysterectomy is the primary option for resolving fibroid-associated symptoms. Uterine artery embolization (UAE) is a major treatment option for fibroids that was introduced in 1995 [1]. This treatment involves femoral artery catheterization and intra-arterial infusion of embolization particles. As a result, UAE produces ischemia of the fibroid uterus, significantly reducing the volume of fibroids [1]. Another treatment option for uterine fibroids involves hormonal manipulation. Gonadotrophin-releasing hormone (GnRH) is predominantly used for the temporary reduction of fibroid volume, sometimes by as much as 60% [2].
Another treatment option which is not widely accepted is cryoablation [3]. With cryoablation, the uterine fibroid is cooled to very low temperatures. This option is delivered using either laparoscopic or hysteroscopic access [4, 5].
Another option deployed recently is magnetic resonance imaging-guided focused ultrasound (MRgFUS) [6]. MRgFUS is an effective and completely noninvasive modality, and it may be used as a fertility-preserving option in some cases. The first treatment of uterine fibroids using MRgFUS was performed in 2003 by Stewart and colleagues [6]. The results of this study were promising and thus led to additional clinical trials, whose goal was to evaluate the efficacy of MRgFUS in a larger number of patients. These studies showed significant reduction of clinical symptoms; additionally, improvement in quality of life was reported at 6, 12, and 24 months. To date, more than 8500 patients have been treated with MRgFUS worldwide, with only few side effects reported [7]. Patient acceptance rates are evaluated as high [7]. In recent reports of clinical trials, with proper selection of the patient population, the reduction in volume is often more than 50% [8]. However, despite the good clinical results, one disadvantage of this procedure is the lengthy treatment time (i.e., low time efficiency), which is a significant problem when treating large fibroids.
The first MRgFUS commercial system (Exablate) to be used for the treatment of fibroids was developed by InSightec (Tirat Carmel, Israel) [9]. The system uses a phased-array transducer operating close to 1 MHz. The transducer is positioned close to the target using a robotic system. The whole system is integrated into the magnetic resonance imaging (MRI) table. Treatment is performed with the patient in the prone position and under light sedation, with active monitoring of vital signs.
In 2009, Philips developed the MRgFUS robotic system (Sonalleve, Philips Healthcare) that was Conformité Européenne (CE)-marked for the treatment of uterine fibroids [10, 11]. Treatments were performed using a phased-array 256-channel transducer (radius of curvature 12 cm, aperture 13 cm; operable at 1.2 MHz) equipped with a mechanical displacement device with 5 degrees of freedom (three linear and two angular). This system is able to perform volumetric ablation. With this system, the patient is also placed in prone position.
This study includes the conversion of a robotic system intended for brain applications to a robotic system that can be used for accessing fibroids. This is achieved by modifying the coupling system, thus allowing top-to-bottom coupling (axial in MRI). The system was evaluated in a gel phantom for producing discrete and overlapping lesions. The system uses a single-element transducer, which makes the system less complex and cost-effective compared to systems that use phased-array technology.
Coupling for fibroids
An existing positioning device with three axes (X, Y, Z) was used [12]. Figure 1 shows the existing robotic system dedicated for the brain. The coupling system was modified so that top-to-bottom access was possible. A modified arm was developed that was inserted in a coupling structure that makes coupling to fibroids (in this study, gel phantom). The three axes were driven by piezoelectric ultrasonic motors (USR60-S3N, Shinsei Kogyo Corp., Tokyo, Japan). Optical encoders were used (US Digital Corporation, Vancouver, WA 98684, USA). The encoder output was connected to the counter input of a data acquisition board USB 6251 (NI, Austin, USA). Figure 2 shows the developed coupling that can be used for a top-to-bottom access of ultrasound to targets. This prototype coupling system includes a transducer arm which is connected to the Z-axis, a holder for the water bag, and a base that holds the water bag holder. This coupling system is manually positioned to the patient. This structure includes a movable water bag (steps of 1 cm), a transducer holder, and an arm that is fixed to the existing robot. Figure 3 shows the concept of using the modified robotic system for access to the fibroids.
The existing robotic system dedicated for the brain
The developed coupling that can be used for a top-to-bottom access of ultrasound to targets
The concept of using the modified robotic system for access to the fibroids
The arm of the robotic system with the 1-MHz transducer was immersed inside the water bag. The water bag was filled with degassed water. The transducer was placed above the gel phantom, at a distance such that the beam focus was located in the middle of the gel phantom. The phantom was wrapped with the GPFLEX coil (USA Instruments, Cleveland, OH, USA), which was used for all the imaging studies.
FUS system
The effectiveness of the system was evaluated by creating lesions in a polyacrylamide gel phantom (ONDA Corporation, Sunnyvale, CA, USA). The FUS system consists of an RF amplifier (RFG 750W, JJA Instruments, Seattle, WA, USA) and a spherical transducer made from piezoelectric ceramic (Sonic Concepts, USA). The transducer operates at 1.14 MHz and has a focal length of 10 cm and a diameter of 3 cm. An acoustical power of 20 W was applied in continuous mode for 60 s; with this exposure, the goal was to acquire temperature maps and test the MR thermometry without damaging the gel permanently. In another exposure that creates lesions, the power used was 30 W for 30 s. With this transducer and focal depth, at 30 W, lesions are created with exposures of 20 s or longer. The heating of the system was evaluated in the gel phantom. Degassed water was placed between the transducer, the water bag, and the gel phantom, thus providing good acoustical coupling between the gel phantom and the FUS transducer. The attenuation of the gel, as reported by the manufacturer, is 0.6 dB/cm at 1 MHz [13].
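As a rough plausibility check (a sketch, not part of the published protocol), the intensity reaching the focus can be estimated from the quoted gel attenuation; the 5 cm path length below is an assumed value for a focus placed near the middle of the phantom.

    # Sketch: one-way intensity loss through the gel from the quoted attenuation.
    alpha_db_per_cm = 0.6    # gel attenuation at 1 MHz (manufacturer value [13])
    path_cm = 5.0            # assumed propagation depth to the focus

    loss_db = alpha_db_per_cm * path_cm
    transmitted_fraction = 10 ** (-loss_db / 10.0)   # intensity ratio I/I0
    print(f"loss = {loss_db:.1f} dB, I/I0 = {transmitted_fraction:.2f}")

With these numbers, the 3 dB path loss means about half of the incident intensity survives the trip to the focus, which helps explain why source powers of 20-30 W are used.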
The robotic FUS system was tested in a 1.5-T MR system (Signa, General Electric, Fairfield, CT, USA) using a lumbar spine coil (USA instruments, Cleveland, OH, USA).
MR thermometry
The temperature elevation during FUS exposures was estimated using the proton resonance frequency (PRFS) shift method [14]. This method relates the phase shift derived from the frequency shift of the MR signal due to the local temperature elevation (ΔT). This relationship is described by the following:
$$ \varDelta T=\frac{\varphi (T)-\varphi \left({T}_0\right)}{\gamma \alpha {B}_0\,\mathrm{TE}}, $$
where φ(T) and φ(T0) are the absolute phases of the MR signal at the final and starting temperatures T and T0, respectively; γ is the gyromagnetic ratio; α is the PRF change coefficient (0.01 ppm/°C); B0 is the magnetic field strength; and TE is the echo time.
The spoiled gradient echo sequence (SPGR) was used for thermometry: repetition time (TR) 38.5 ms, TE 20 ms, bandwidth (BW) 15 kHz, matrix 128 × 128, slice thickness 10 mm, and number of excitations (NEX) 1. The temporal resolution of thermometry was about 12 s. Phase maps were reconstructed by calculating the phase on a pixel-by-pixel basis after combining pixel data from the real and imaginary channels. Although the scanner was capable of directly producing phase-image reconstructions, the applied intra-scan gradient non-linearity corrections induce phase interpolation problems. All of the image processing was performed with custom-made software developed in MATLAB (MathWorks, Natick, USA). Temperature color-coded maps were produced by adjusting the color map (blue to red) over the range of minimum to maximum region of interest (ROI) temperature. Figure 4 shows the flowchart of the software that estimates temperature using the PRFS method.
Software flowchart for estimating MR thermometry
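A minimal sketch of the phase-to-temperature step described above, written here in Python rather than the authors' MATLAB; the function and array names are assumptions, while the constants follow the values given in the text (α = 0.01 ppm/°C, B0 = 1.5 T, TE = 20 ms).

    import numpy as np

    GAMMA = 2 * np.pi * 42.58e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
    ALPHA = 0.01e-6               # PRF change coefficient, 0.01 ppm/degC
    B0 = 1.5                      # main field strength, tesla
    TE = 20e-3                    # echo time, seconds

    def temperature_change(phase_ref, phase_hot):
        """Temperature change map (degC) from baseline and heated phase maps (rad)."""
        # Subtracting on the unit circle keeps the difference wrapped to (-pi, pi].
        dphi = np.angle(np.exp(1j * (phase_hot - phase_ref)))
        return dphi / (GAMMA * ALPHA * B0 * TE)

With these constants, one radian of phase difference corresponds to roughly 12.5 °C, so the SPGR phase maps comfortably resolve the heating produced by the 20 W/60 s exposures.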
High-resolution MR imaging was performed to visualize the FUS lesions in the gel phantom using T2-weighted fast spin echo (FSE) sequence (imaging parameters, TR 2500 ms, TE 60 ms, slice thickness 3 mm, matrix 256 × 256, field of view (FOV) 16 cm, NEX 3, and echo train length (ETL) 8).
Figure 5 shows a T2-weighted (T2-W) FSE MR image of the robotic system's coupling to the gel. In this image, the transducer-water bag-gel phantom arrangement is shown, demonstrating the excellent coupling to the gel phantom.
MR image using T2-W FSE of the coupling to the gel of the robotic system
Figure 6 shows the axial temperature maps produced by this transducer. The acoustical power of 20 W was applied for 60 s. During the first five images, the FUS transducer was activated. Having observed the focal beam in a plane perpendicular to the transducer face, the next step was to evaluate the temperature maps in a plane parallel to the transducer face. Figure 7 shows the coronal temperature maps produced by this transducer. The acoustical power of 20 W was applied for 60 s. During the first five images, FUS was activated.
Axial temperature maps produced by this transducer in a plane perpendicular to the face of the transducer. The acoustical power of 20 W was applied for 60 s
Coronal temperature maps produced by this transducer in a plane parallel to the face of the transducer. The acoustical power of 20 W was applied for 60 s
Figure 8 shows the MR images (using T2-W FSE) of three discrete thermal lesions created in the gel phantom by moving the X stage of the robotic system. The acoustical power used was 30 W for 30 s, and the spatial step between lesions was 5 mm. With this transducer and focal depth, a 30 W/30 s exposure produces a lesion width of 3.3 mm and a lesion length of 24 mm (maximum temperature recorded was 82 °C); a 30 W/60 s exposure produces a lesion width of 4.2 mm and a lesion length of 28.4 mm (maximum temperature recorded was 94 °C). At higher power, the lesion size does not change much, since conduction carries the heat away; moreover, a higher power would cause the temperature to exceed 100 °C (the tissue boiling point). Figure 9 shows the MR image (using T2-W FSE) of the three discrete thermal lesions of Fig. 8 in the axial plane, demonstrating the penetration deep into the gel (plane perpendicular to the transducer face).
MR images (using T2-W FSE) of three discrete thermal lesions created in the gel phantom by moving the X stage of the robotic system. The acoustical power used was 30 W for 30 s. The spatial step between lesions was 5 mm
MR image (using T2-W FSE) of the three discrete thermal lesions of Fig. 7 in axial plane demonstrating the penetration deep in the gel (plane perpendicular to the transducer face)
Figure 10 shows the MR images (using T2-W FSE) of four overlapping thermal lesions created in the gel phantom by moving the X and Y stages of the robotic system in a 2 × 2 square grid. The acoustical power used was 30 W for 30 s. Because the lesion width is close to 3 mm (see Fig. 8), a spatial step of 3 mm between lesions was used to obtain overlapping lesions. This figure clearly demonstrates the effectiveness of the positioning device for creating large lesions for the purpose of thermal ablation.
MR images (using T2-W FSE) of four overlapping thermal lesions created in the gel phantom by moving the X and Y stages of the robotic system in a 2 × 2 square grid. The acoustical power used was 30 W for 30 s. The spatial step between lesions was 3 mm
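The overlapping-lesion experiment amounts to stepping the X and Y stages over a raster grid and sonicating at each node. The sketch below illustrates such a plan; move_stage and sonicate are hypothetical placeholders for the robot and RF-generator commands, while the grid size, the 3 mm step, and the 30 W/30 s exposure match the experiment.

    import itertools

    STEP_MM = 3.0            # grid step; lesion width at 30 W / 30 s is about 3.3 mm
    GRID_X, GRID_Y = 2, 2    # 2 x 2 square grid, as in Figure 10
    POWER_W, DURATION_S = 30.0, 30.0

    def move_stage(x_mm, y_mm):
        """Hypothetical placeholder for commanding the X and Y piezoelectric stages."""
        pass

    def sonicate(power_w, duration_s):
        """Hypothetical placeholder for driving the transducer via the RF amplifier."""
        pass

    for ix, iy in itertools.product(range(GRID_X), range(GRID_Y)):
        move_stage(ix * STEP_MM, iy * STEP_MM)
        sonicate(POWER_W, DURATION_S)

Because the step equals the lesion width, adjacent lesions overlap and merge into one large ablated region.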
Our study was inspired by several clinical trials that have shown that MRgFUS destroys fibroids [6, 7]. In these studies, the FUS transducer is placed in a water container which is integrated in the MRI table. The coupling with this technology is bottom to top, and the patient lies on the table in the prone position. With the existing technologies, the focal beam is moved using phased-array technology, which is very complicated and expensive. In this study, we presented an alternative MRgFUS technology which is based on mechanical movement of a single-element spherically focused transducer. The proposed technology is simpler and cost-effective. The coupling to the target is top to bottom, and the patient may lie on the table in supine position. The major challenge of this technology is the coupling of the transducer to the target. This challenge has already been solved by several studies. The Theraclion system makes contact with the thyroid or the breast [15] with a transducer placed on top of the target, reporting a more comfortable patient placement. This same concept was shown in another study performed by our group [16].
The major advantage of the proposed robot is that the patient can be placed on the MRI table in supine position. In the current systems, the patients are placed in the prone position. Since sometimes the treatment procedure can be long, lasting up to 3 hours per session [17, 18], the proposed system could provide better comfort for the patients. Additionally, the proposed system can access multiple anatomical locations.
Based on the American Society for Testing and Materials (ASTM) document F2503 described by Stoianovici et al. [19], the proposed system is classified as MRI-conditional because of the use of the FUS transducer, piezoelectric motors, and optical encoders. The piezoelectric motors, the transducer, and the optical encoders require the use of electricity and therefore, the system is MRI-conditional. Pneumatic systems [19] on the other hand are classified as MRI safe, because no electricity is used.
The proposed technology is a continuation of other MRgFUS technologies designed by our group for various applications. Such systems were developed for brain ablation using three Cartesian axes [12], prostate ablation using one linear and one angular axis [20, 21], and gynecological tumor ablation using one linear and one angular axis [22].
The FUS system produced lesions in gels successfully. The length and width of these lesions can be easily controlled by varying the intensity and time of exposure. These discrete and overlapping lesions were produced using the X and Y axes. The appearance of these lesions was demonstrated using MRI and proved that the linear stages moved with great accuracy. The degree of accuracy of the linear stages was also demonstrated in other articles of our group (for example, [12, 20, 21]). In future experiments, we plan to use thicker gel phantoms in order to create lesions in a model which is closer to the size of the anatomy involved in the case of fibroids.
In the future, registration marks should be placed in the proposed system in order to transfer the position of the transducer or arm into the MRI images. Additionally, an appropriate MRI coil should be selected: currently, the lumbar spine coil was used (the best option available to us), but a better signal can be received if a dedicated coil is placed in proximity to the target. The proposed system, which is modular, can be easily modified to explore other applications with a top-to-bottom coupling arrangement (for example, breast or thyroid cancer). Finally, for better maneuverability, the proposed robot can be enhanced by the addition of extra motion stages to reach the performance reported by the FUSBOT robotic system [23].
ABS:
Acrylonitrile butadiene styrene
ASTM:
American Society for Testing and Materials
CE:
Conformité Européenne
ETL:
Echo train length
FSE:
Fast spin echo
FUS:
Focused ultrasound
GnRH:
Gonadotrophin-releasing hormone
MRgFUS:
Magnetic resonance-guided focused ultrasound system
NEX:
Number of excitations
PRFS:
Proton resonance frequency shift
ROI:
Region of interest
SPGR:
Spoiled gradient echo sequence
T2-W:
T2-weighted
UAE:
Uterine artery embolization
Ravina JH, Ciraru-Vigneron N, Bouret JM, Herbreteau D, Houdart E, Aymard A, Merland JJ. Arterial embolisation to treat uterine myomata. Lancet. 1995;346(8976):671–2.
Weeks AD, Wilkinson N, Arora DS, Duffy SR, Wells M, Walker JJ. Menopausal changes in the myometrium: an investigation using a GnRH agonist model. Int J Gynecol Pathol. 1999;18(3):226–32.
Olive D, Rutherford T, Zreik T, Palter S. Cryomyolysis in the conservative treatment of uterine fibroids. J Am Assoc Gynecol Laparosc. 1996;3(4):S36.
Zreik TG, Rutherford TJ, Palter SF, Troiano RN, Williams E, Brown JM, Olive DL. Cryomyolysis, a new procedure for the conservative treatment of uterine fibroids. J Am Assoc Gynecol Laparosc. 1998;5(1):33–8.
Zupi E, Piredda A, Marconi D, Townsend D, Exacoustos C, Arduini D, Szabolcs B. Directed laparoscopic cryomyolysis: a possible alternative to myomectomy and/or hysterectomy for symptomatic leiomyomas. Am J Obstet Gynecol. 2004;190(3):639–43.
Stewart EA, Gedroyc WMW, Tempany CMC, Quade BJ, Inbar Y, Ehrenstein T, Shushan A, Hindley JT, Goldin RD, David M, Sklair M, Rabinovici J. Focused ultrasound treatment of uterine fibroid tumors: safety and feasibility of a noninvasive thermoablative technique. Am J Obstet Gynecol. 2003;189(1):48–54.
Hesley GK, Felmlee JP, Gebhart JB, Dunagan KT, Gorny KR, Kesler JB, Brandt KR, Glantz JN, Gostout BS. Noninvasive treatment of uterine fibroids: early Mayo Clinic experience with magnetic resonance imaging-guided focused ultrasound. Mayo Clin Proc. 2006;81(7):936–42.
Mikami K, Murakami T, Okada A, Osuga K, Tomoda K, Nakamura H. Magnetic resonance imaging-guided focused ultrasound ablation of uterine fibroids: early clinical experience. Radiat Med. 2008;26(4):198–205.
Rabinovici J, David M, Fukunishi H, Morita Y, Gostout BS, Stewart EA. Pregnancy outcome after magnetic resonance-guided focused ultrasound surgery (MRgFUS) for conservative treatment of uterine fibroids. Fertil Steril. 2010;93(1):199–209.
Ikink ME, Voogt MJ, Verkooijen HM, Lohle PNM, Schweitzer KJ, Franx A, Mali WPTM, Bartels LW, van den Bosch MAAJ. Mid-term clinical efficacy of a volumetric magnetic resonance-guided high-intensity focused ultrasound technique for treatment of symptomatic uterine fibroids. Eur Radiol. 2013;23(11):3054–61.
Huisman M, van den Bosch MAAJ. MR-guided high-intensity focused ultrasound for noninvasive cancer treatment. Cancer Imaging. 2011;11:S161–6.
Mylonas N, Damianou C. MR compatible positioning device for guiding a focused ultrasound system for the treatment of brain diseases. Int J Med Robot. 2014;10(1):1–10.
Onda Corp. http://www.ondacorp.com/products_hifusol_phantoms.shtml. Accessed 1 Jan 2017.
Rieke V, Butts Pauly K. MR thermometry. J Magn Reson Imaging. 2008;27(2):376–90.
Kovatcheva R, Guglielmina J-N, Abehsera M, Boulanger L, Laurent N, Poncelet E. Ultrasound-guided high-intensity focused ultrasound treatment of breast fibroadenoma—a multicenter experience. J Ther Ultrasound. 2015;3:1.
Damianou C, Ioannides K, Milonas N. Positioning device for MRI-guided high intensity focused ultrasound system. Int J Comput Assist Radiol Surg. 2008;2(6):335–45.
Abdullah B, Subramaniam R, Omar S, Wragg P, Ramli N, Wui A, Lee C, Yusof Y. Magnetic resonance-guided focused ultrasound surgery (MRgFUS) treatment for uterine fibroids. Biomed Imaging Interv J. 2010;6(2):e15.
Rueff LE, Raman SS. Clinical and technical aspects of MR-guided high intensity focused ultrasound for treatment of symptomatic uterine fibroids. Semin Intervent Radiol. 2013;30(4):347–53.
Stoianovici D, Kim C, Srimathveeravalli G, Sebrecht P, Petrisor D, Coleman J, Solomon SB, Hricak H. MRI-safe robot for endorectal prostate biopsy. IEEE ASME Trans Mechatron. 2013;19(4):1289–99.
Yiallouras C, Mylonas N, Damianou C. MRI-compatible positioning device for guiding a focused ultrasound system for transrectal treatment of prostate cancer. Int J Comput Assist Radiol Surg. 2014;9(4):745–53.
Yiallouras C, Ioannides K, Dadakova T, Pavlina M, Bock M, Damianou C. Three-axis MR-conditional robot for high-intensity focused ultrasound for treating prostate diseases transrectally. J Ther Ultrasound. 2015;3:2.
Epaminonda E, Drakos T, Kalogirou C, Theodoulou M, Yiallouras C, Damianou C. MRI guided focused ultrasound robotic system for the treatment of gynaecological tumors. Int J Med Robot. 2016;12(1):46–52.
Chauhan S, Amir H, Chen G, Hacker A, Michel MS, Koehrmann KU. Intra-operative feedback and dynamic compensation for image-guided robotic focal ultrasound surgery. Comput Aided Surg. 2008;13(6):353–68. doi:10.3109/10929080802586825.
This work was supported by the Project PROFUS E! 6620. PROFUS is implemented within the framework of the EUROSTARS Program and is co-funded by the European Community and the Research Promotion Foundation, under the EUROSTARS Cyprus Action of the EUREKA Cyprus Program (Project Code: EUREKA/EUSTAR/0311/01).
The data will not be shared, because all the data is MRI data, which is of huge size and most of it is provided in the manuscript as images.
MY carried out the design of the positioning system. MY and CY carried out the design of the coupling for FUS using top-to-bottom propagation. GM designed and developed the MR thermometry software in MATLAB and did all the MRI work. CD performed the evaluation of the MRgFUS coupling for a focused ultrasound system using a top-to-bottom propagation system. All authors read and approved the final manuscript.
This article does not contain any studies with human participants or animals performed by any of the authors.
Cyprus University of Technology, Limassol, Cyprus
Marinos Yiannakou, Christos Yiallouras & Christakis Damianou
City, University of London, London, UK
George Menikou
MEDSONIC LTD, Limassol, Cyprus
Christos Yiallouras
Marinos Yiannakou
Christakis Damianou
Correspondence to Christakis Damianou.
Yiannakou, M., Menikou, G., Yiallouras, C. et al. MRI-guided coupling for a focused ultrasound system using a top-to-bottom propagation. J Ther Ultrasound 5, 6 (2017). https://doi.org/10.1186/s40349-017-0087-x
Accepted: 06 January 2017 | CommonCrawl |
\begin{definition}[Definition:Monoid Homomorphism]
Let $\left({S, \circ}\right)$ and $\left({T, *}\right)$ be monoids.
Let $\phi: S \to T$ be a mapping such that $\circ$ has the morphism property under $\phi$.
That is, $\forall a, b \in S$:
:$\phi \left({a \circ b}\right) = \phi \left({a}\right) * \phi \left({b}\right)$
Suppose further that $\phi$ preserves identities, i.e.:
:$\phi \left({e_S}\right) = e_T$
Then $\phi: \left({S, \circ}\right) \to \left({T, *}\right)$ is a monoid homomorphism.
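For instance, the mapping $n \mapsto 2^n$ yields a monoid homomorphism from $\left({\N, +}\right)$ to $\left({\N_{>0}, \times}\right)$, since:
:$2^{a + b} = 2^a \times 2^b$
and:
:$2^0 = 1$
where $1$ is the identity of $\left({\N_{>0}, \times}\right)$.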
\end{definition} | ProofWiki |
\begin{document}
\date{} \title{{\Large\bf Discretization of continuous frame}}
\author{{\normalsize\sc A. Fattahi, H. Javanshiri}}
\maketitle \normalsize
\begin{abstract}
In this paper we consider the notion of a continuous frame of subspace and define a new concept of continuous frame, entitled {\it continuous atomic resolution of identity}, for an arbitrary Hilbert space ${\cal H}$, which has a countable reconstruction formula. Among other results, we characterize the relationship between this new concept and other known continuous frames. Finally, we state and prove stability results under perturbation for this concept. \footnote{2000 {\it Mathematics Subject Classification}: 42C15, 46C99, 94A12, 46B25, 47A05.

{\it Key words}: Bounded operator, Hilbert space, continuous frame, atomic resolution of identity.} \end{abstract}
\section{\large\bf Introduction and Preliminaries}
As we know, frames are more flexible tools than bases for encoding information, and so they are a suitable replacement for bases in a Hilbert space ${\cal H}$.

Finding a representation of $f\in{\cal H}$ as a linear combination of the elements of a frame is the main goal of discrete frame theory. But for a continuous frame, which is a natural generalization of the
discrete one, this is not straightforward.
However, one of the applications of frames is in wavelet theory. The practical implementation of the wavelet transform in signal processing requires the selection of a discrete set of points in the transform space. Indeed, all formulas must generally be evaluated numerically, and a computer is an intrinsically discrete object. But this operation must be performed in such a way that no information is lost. So efforts have been made to find methods to discretize classical continuous frames for use in applications like signal processing, the numerical solution of PDEs, simulation, and modelling; see for example [1, 8]. In particular, the discrete wavelet transform and Gabor frames are prominent examples and have proven to be very successful tools for certain applications. Since the problem of discretization is so important, it would be nice to have a general method for this purpose. For example, Ali, Antoine, and Gazeau in [1] asked for conditions which ensure that a certain sampling of a continuous frame $\{\psi_x\}_{x\in X}$ yields a discrete frame $\{\psi_{x_i}\}_{i\in I}$ (see also [9]).
In recent years, harmonic and functional analysts have shown considerable interest in the frame of subspace problem for separable Hilbert spaces; see \cite{ck}, \cite{ak}, \cite{a} and \cite{af} and the references therein. The frame of subspace was first introduced by P. Casazza and G. Kutyniok in \cite{ck}. They present a reconstruction formula $f=\sum_{i\in I}\nu_i^2S^{-1}\pi_{W_i}(f)$ for frames of subspace. The continuous frame of subspace is a natural generalization of the
discrete frame of subspace.

As we expect, for a discrete frame of subspace every element in ${\cal H}$ has an expansion in terms of the frame. But in the continuous case the expansion is with respect to a Bochner integral, which is not desirable. Therefore, discretization of the continuous frame of subspace is also very important.
Suppose that the measure $\mu$, which appears in the integral defining a continuous frame, is Radon or discontinuous. (Note that there exist infinitely many positive finite discontinuous measures on a locally compact space $X$ which are not the counting measure.) Then $\{x\in X: \mu(\{x\})\neq 0\}$ is a nonempty set, and we may investigate conditions under which every fixed element $f\in{\cal H}$ has a countable subfamily $J_f$ of $X$ with the frame property for $f$. This leads us to define the {\it uca-resolution of identity} (Definition 2.1),
which is a generalization of the resolution of identity (\cite{ck}, Definition 3.24) and of the atomic resolution of identity (\cite{ak}) to arbitrary Hilbert spaces (separable or nonseparable). We then show that in this concept many basic properties of the discrete case can be derived within this more general context. In fact, the uca-resolution of identity helps us to investigate continuous frames which admit a discretization, because under some extra conditions every uca-resolution of identity provides a continuous frame of subspace, and conversely. This means that the relationship between the uca-resolution of identity and known continuous frames, such as frames of subspace, is very tight.
Assume ${\cal H}$ to be a Hilbert space and $X$ be a locally compact Hausdorff space endowed with a positive Radon or discontinuous measure $\mu$. Let ${\cal W}=\{W_x\}_{x\in X}$ be a family of closed subspaces in ${\cal H}$ and let $\omega:X\rightarrow [0,\infty)$ be a measurable mapping such that $\omega\neq 0$ almost everywhere (a.e.). We say that ${\cal W}_\omega=\{(W_x,\omega(x))\}_{x\in X}$ is a continuous frame of subspace for ${\cal H}$, if;\\\\ (a) the mapping $x\mapsto \pi_{W_x}$ is weakly measurable;\\ (b) there exist constants $0<A,B<\infty$ such that
$$A\|f\|^2\leq\int_X{\omega(x)}^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\leq B\|f\|^2~~~~~(1)$$
for all $f\in{\cal H}$.
The numbers $A$ and $B$ are called the continuous frame of subspace bounds. If ${\cal W}_\omega$ satisfies only the upper inequality in $(1)$, then we say that it is a continuous Bessel frame of subspace with bound $B$. Note that if $X$ is a countable set and $\mu$ is the counting measure, then we obtain the usual definition of a (discrete) frame of subspace.
For each continuous Bessel frame of subspace ${\cal W}_\omega=\{(W_x,\omega(x))\}_{x\in X}$, if we define the representation space associated with ${\cal W}_\omega$ by
$L^2(X,{\cal H},{\cal W}_\omega)=\{\varphi:X\rightarrow {\cal H}|~\varphi ~\hbox{is~measurable},\\~\varphi (x)\in W_x~\hbox{and}~\int_X
\|\varphi(x)\|^2\;d\mu(x)<\infty\}$, then $L^2(X,{\cal H},{\cal W}_\omega)$ with the inner product given by $$\big<\varphi,\psi\big>=\int_X \big<\varphi(x),\psi(x)\big>\;d\mu(x),~~~~~~~\hbox{for all}~ \varphi, \psi\in L^2(X,{\cal H},{\cal W}_\omega),$$ is a Hilbert space. Also, the synthesis operator $T_{{\cal W}_\omega}:L^2(X,{\cal H},{\cal W}_\omega)\rightarrow {\cal H}$ is define by $$\big<T_{{\cal W}_\omega}(\varphi),f\big>=\int_X\omega(x)\big<\varphi(x),f\big>\;d\mu(x),$$ for all $\varphi\in L^2(X,{\cal H},{\cal W}_\omega)$ and $f\in{\cal H}$. Its adjoint operator is $T_{{\cal W}_\omega}^*:{\cal H}\rightarrow L^2(X,{\cal H},{\cal W}_\omega)$; $T_{{\cal W}_\omega}^*(f)=\omega\pi_{{\cal W}_\omega}(f)$. For more details see \cite{af}.
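For completeness, the adjoint formula can be checked directly from the definition; the only ingredient is that $\varphi(x)\in W_x$, so that $\big<\varphi(x),f\big>=\big<\varphi(x),\pi_{W_x}(f)\big>$ for almost every $x\in X$: \begin{eqnarray*} \big<T_{{\cal W}_\omega}(\varphi),f\big>&=&\int_X\omega(x)\big<\varphi(x),f\big>\;d\mu(x)\\ &=&\int_X\big<\varphi(x),\omega(x)\pi_{W_x}(f)\big>\;d\mu(x)\\ &=&\big<\varphi,\omega\pi_{{\cal W}_\omega}(f)\big>, \end{eqnarray*} which is exactly the stated formula for $T_{{\cal W}_\omega}^*$.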
Now, we give two immediate consequences of the above discussion. First, we have the following characterization of continuous Bessel frames of subspace in terms of their synthesis operators, as in discrete frame theory; see \cite{a}.
\begin{theorem} A family ${\cal W}_\omega$ is a continuous Bessel frame of subspace with Bessel bound $B$ for ${\cal H}$ if and only if the synthesis operator $T_{{\cal W}_\omega}$ is a well-defined bounded operator and
$\|T_{{\cal W}_\omega}\|\leq\sqrt{B}$. \end{theorem}
Also, by an argument similar to the proof of (\cite{a}, Theorem 2.6), we have the following characterization of continuous frames of subspace:
\begin{theorem} The following
conditions are equivalent:\\\\ {\rm(a)} ${\cal W}_\omega=\{(W_x,\omega(x))\}_{x\in X}$ is a continuous frame of subspace for ${\cal H}$;\\ {\rm(b)} The synthesis operator $T_{{\cal W}_\omega}$ is a bounded, linear operator from $L^2(X,{\cal H},{\cal W}_\omega)$ onto ${\cal H}$;\\ {\rm(c)} The analysis operator $T^*_{{\cal W}_\omega}$ is injective with closed range.
\end{theorem}
If ${\cal W}_\omega$ is a continuous frame of subspace for ${\cal H}$ with frame bounds $A, B$, then we define the frame of subspace operator $S_{{\cal W}_\omega}$ for ${\cal W}_\omega$ by $$S_{{\cal W}_\omega}(f)=T_{{\cal W}_\omega}T^*_{{\cal W}_\omega}(f),~~~~~~~~f\in{\cal H},$$ which is a positive, self-adjoint, invertible operator on ${\cal H}$ with $A\cdot Id_{\cal H}\leq S_{{\cal W}_\omega}\leq B\cdot Id_{\cal H}.$
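Indeed, for every $f\in{\cal H}$ a direct computation from the definitions gives $$\big<S_{{\cal W}_\omega}(f),f\big>=\big<T^*_{{\cal W}_\omega}(f),T^*_{{\cal W}_\omega}(f)\big>=\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x),$$ so the inequalities in $(1)$ are precisely the operator inequalities $A\cdot Id_{\cal H}\leq S_{{\cal W}_\omega}\leq B\cdot Id_{\cal H}$.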
\section{\large\bf Main result}
To establish a relationship between discrete and continuous frames of subspace, we generalize the concepts of continuous frame and resolution of identity to an arbitrary Hilbert space ${\cal H}$. For this purpose, we introduce summation in a possibly uncountable form. Let ${\cal H}$ be a Hilbert space and $\{T_x\}_{x\in X}$ be a family of bounded operators on it. If we now let $\Gamma$ be the collection of all finite subsets of $X$, then $\Gamma$ is a directed set ordered under inclusion.
Let $f$ be a fixed element of the Hilbert space ${\cal H}$. Define the sum $S(f)$ of the family $\{T_x(f)\}_{x\in X}$ as the limit $$S(f)=\sum_{x\in X}T_x(f)=\lim \{\sum_{x\in \gamma}T_x(f):~\gamma\in\Gamma\}.$$ If this limit exists, we say that the family $\{T_x(f)\}_{x\in X}$ is unconditionally summable. It is easy to see that the family $\{T_x(f)\}_{x\in X}$ is unconditionally summable if and only if for each $\varepsilon>0$, there exists a finite subset $\gamma_0\in\Gamma$ such that
$$\|\sum_{x\in \gamma_1}T_x(f)-\sum_{x\in \gamma_2}T_x(f)\|<\varepsilon,$$ for each $\gamma_1, \gamma_2>\gamma_0$. Therefore for each $\varepsilon>0$, there is a finite subset $\gamma_0$ of $X$ such that
$$\|T_x(f)\|<\varepsilon$$ for all $x\in X\setminus\gamma_0$. Hence, for a fixed element $f\in{\cal H}$, if $\{T_x(f)\}_{x\in X}$ is unconditionally summable, then for each integer $n\geq 1$ the set $\{x\in X:~\|T_x(f)\|\geq 1/n\}$ is contained in a finite set, and consequently
$J_f = \{x\in X:~T_x(f)\neq 0\}=\bigcup_{n\geq 1}\{x\in X:~\|T_x(f)\|\geq 1/n\}$ is countable.
\begin{definition} Let ${\cal H}$ be a Hilbert space and let $\omega:X\rightarrow [0,\infty)$ be a measurable mapping such that $\omega\neq 0$ almost everywhere. We say that a family of bounded operators $\{T_x\}_{x\in X}$ on ${\cal H}$ is an unconditional continuous atomic resolution {\rm(}uca-resolution{\rm)} of the identity with respect to $\omega$ for ${\cal H}$, if there exist positive real numbers $C$ and $D$ such that for all $f\in{\cal H}$,\\\\ {\rm(a)} ~the mapping $x\mapsto T_{x}$ is weakly measurable;\\
{\rm(b)}~$ C\|f\|^2\leq\int_{X}\omega(x)^2\| T_x(f)\|^2d\mu(x)\leq D\| f\|^2;$\\ {\rm(c)}~$f=\sum_{x\in X} T_x(f).$ \end{definition}
The optimal values of $C$ and $D$ are called the uca-resolution of the identity bounds. It follows from the definition and the uniform boundedness principle that ${\rm sup}_{x\in X}\|T_x\|<\infty$.\\
\begin{remark}
{\rm(a) If $f\in{\cal H}$ satisfies (c), then, as we mentioned above,
there is a countable measurable subset $J_f$ (depending on $f$) of $X$ such that $$T_x(f)=0,$$
for all $x\in X\setminus J_f.$ So $$\int_X \omega(x)^2\|
T_x(f)\|^2d\mu(x)=\sum_{j\in J_f}\omega(j)^2\|
T_j(f)\|^2\;\mu(\{j\})$$ and condition (b) transforms to
$$ C\|f\|^2\leq\sum_{j\in J_f}\omega(j)^2\|
T_j(f)\|^2\;\mu(\{j\})\leq D\| f\|^2$$ (b) If ${\cal H}$ is a separable Hilbert space with an orthonormal basis $\{e_n\}_{n=1}^{\infty}$, then by condition (c), for each $n$ there exists a countable measurable subset $J_n$ of $X$ such that $$T_x(e_n)=0,$$ for all $x\in X\setminus J_n.$ So, we can find a countable subset $J=\bigcup_{n=1}^\infty J_n$ of $X$ such that $$T_x(f)=0,$$ for all $f\in{\cal H}$ and $x\in X\setminus J$, and we have
$$\int_X \omega(x)^2\| T_x(f)\|^2d\mu(x)=\sum_{j\in J}\omega(j)^2\| T_j(f)\|^2\;\mu(\{j\}).$$ Therefore, if ${\cal H}$ is a separable Hilbert space, Definition 2.1 and Definition 3.1 in \cite{ak} coincide. } \end{remark}
From now on ${\cal H}$ is a Hilbert space with orthonormal basis $\{e_\lambda\}_{\lambda\in\Lambda}$ and $X$ is a locally compact Hausdorff space endowed with a positive Radon or discontinuous measure $\mu$, and $\omega:X\rightarrow [0,\infty)$ is a measurable mapping such that $\omega\neq 0$ almost everywhere. For a fixed element $f\in{\cal H}$, by [7] there exists a countable subset $J$ of $\Lambda$ such that $\big<f,e_\lambda\big>=0$ for all $\lambda\in\Lambda\setminus J$.
The following is an important example of a uca-resolution compatible with Definition 2.1; note that this example does not satisfy the definitions of resolution of the identity and of atomic resolution of the identity stated in \cite{ck} and \cite{ak}, respectively.
\begin{example} {\rm Let ${\cal H}$ be a Hilbert space with an orthonormal basis $\{e_\lambda\}_{\lambda\in\Lambda}$. If, we consider $\Lambda$ as a locally compact space with discrete topology and measurable space endowed with counting measure, then the family $\{T_\lambda\}_{\lambda\in\Lambda}$ of bounded operators on ${\cal H}$, defined by $$T_\lambda(f)=\big<e_\lambda,f\big>e_\lambda,~~~~~~~~~\hbox{for all}~ f\in{\cal H} ~\hbox{and}~ \lambda\in\Lambda,$$ is an uca-resolution of identity for ${\cal H}$}. \end{example}
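For the reader's convenience, let us sketch why this family satisfies Definition 2.1 (say with $\omega\equiv 1$, a normalisation fixed here only for concreteness). Since $\mu$ is the counting measure, Parseval's identity gives
$$\int_{\Lambda}\|T_\lambda(f)\|^2 d\mu(\lambda)=\sum_{\lambda\in\Lambda}|\big<e_\lambda,f\big>|^2=\|f\|^2,$$
so condition (b) holds with $C=D=1$; moreover, with the inner product convention above, $\sum_{\lambda\in\Lambda}T_\lambda(f)$ is the usual Fourier expansion of $f$, which converges unconditionally, so condition (c) holds; finally, weak measurability of $\lambda\mapsto T_\lambda$ is automatic for the discrete topology.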
In the next theorem we show that every uca-resolution of the identity for ${\cal H}$ provides a continuous frame of subspace.
\begin{theorem} Let $\{T_x\}_{x\in X}$ be a family of bounded operators on ${\cal H}$ and for each $x\in X$, set $W_x=\overline{T_x({\cal H})}$. Suppose that there exist $D>0$ and $R>0$ such that the following conditions hold:\\ {\rm(a)} $f=\sum_{x\in X}\omega(x)^2T_x(f)\mu(\{x\});$\\
{\rm(b)} $\int_X\omega(x)^2\|\pi_{W_x}(f)-T_x(f)\|^2\;d\mu(x)\leq R\|f\|^2;$\\
{\rm(c)} $\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x)\leq D\|f\|^2,$\\ for all $f\in{\cal H}$. Then $\{(W_x,\omega(x))\}_{x\in X}$ is a continuous frame of subspace for ${\cal H}$. \end{theorem} {\noindent Proof.} Let $f$ be a fixed element of ${\cal H}$. As mentioned in Remark 2.2(a), there exists a countable subset $J_f$ of $X$ such that $$\omega(x)^2T_x(f)\mu(\{x\})=0,$$ for all $x\in X\setminus J_f$, and
$$\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x)=\sum_{x\in X}\omega(x)^2\|T_x(f)\|^2\mu(\{x\}).$$ So we can use the Cauchy-Schwarz inequality and compute as follows \begin{eqnarray*}
\|f\|^4&=&(\langle f,\sum_{x\in X}\omega(x)^2T_x(f)\mu(\{x\})\rangle)^2\\ &=&(\sum_{x\in X}\omega(x)\langle\sqrt{\mu(\{x\})}\;f,\omega(x)\sqrt{\mu(\{x\})}\;T_x(f)\rangle)^2\\ &=&(\sum_{x\in X}\omega(x)\langle\sqrt{\mu(\{x\})}\;\pi_{W_x}(f),\omega(x)\sqrt{\mu(\{x\})}\;T_x(f)\rangle)^2\\
&\leq&(\sum_{x\in X}\omega(x)\|\sqrt{\mu(\{x\})}\;\pi_{W_x}(f)\|\|\omega(x)\sqrt{\mu(\{x\})}\;T_x(f)\|)^2\\
&\leq&(\sum_{x\in X}\omega(x)^2\|\pi_{W_x}(f)\|^2\mu(\{x\}))(\sum_{x\in X}\|\omega(x)\sqrt{\mu(\{x\})}\;T_x(f)\|^2)\\
&\leq&(\int_{x\in X}\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x))(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))\\
&\leq&D\|f\|^2(\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)). \end{eqnarray*} Also, by Minkowski's inequality in $L^2(\mu)$ together with conditions (b) and (c), \begin{eqnarray*}
(\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x))^{\frac{1}{2}} &\leq& (\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^{\frac{1}{2}}+(\int_X\omega(x)^2\|\pi_{W_x}(f)-T_x(f)\|^2\;d\mu(x))^{\frac{1}{2}}\\ &\leq&(\sqrt{D}+\sqrt{R})\|f\|, \end{eqnarray*} that is, \begin{eqnarray*} \int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\leq D(1+\sqrt{\frac{R}{D}})^2\|f\|^2, \end{eqnarray*} so the assertion holds.$
\Box$\\
Casazza and Kutyniok in \cite{ck} introduced an interesting example of an atomic resolution of the identity. In the next theorem we obtain the corresponding uca-resolution of the identity, which is a converse of Theorem 2.4.
\begin{theorem} Let $\{(W_x,\omega(x))\}_{x\in X}$ be a continuous Bessel frame of subspace for ${\cal H}$ with Bessel bound $D$, and for each $x\in X$, let $T_x:{\cal H}\rightarrow W_x$ be a bounded operator such that $T_x\pi_{W_x}=T_x$. Also assume that for each $f\in{\cal H}$ $$f=\sum_{x\in X}\omega(x)^2T_x(f)\mu(\{x\}).$$ Then for all $f\in{\cal H}$ we have
$$\frac{1}{D}\|f\|^2\leq\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x)\leq DE\|f\|^2,$$
where $E={\rm sup}_{x\in X}\|T_x\|^2$. \end{theorem} {\noindent Proof.} Arguing as in the proof of Theorem 2.4, we obtain
$$\frac{1}{D}\|f\|^2\leq\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x).$$ Also we have \begin{eqnarray*}
\frac{1}{D}\|f\|^2&\leq&\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x)\\
&=&\int_X\omega(x)^2\|T_x\pi_{W_x}(f)\|^2\;d\mu(x)\\
&\leq&\int_X\omega(x)^2\|T_x\|^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\\
&\leq&E\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\leq DE\|f\|^2. \end{eqnarray*} Whence, for each $f\in{\cal H}$
$$\frac{1}{D}\|f\|^2\leq\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x)\leq DE\|f\|^2,$$ as required.$
\Box$\\
\begin{proposition} Let $\{W_x\}_{x\in X}$ be a family of closed subspaces of the Hilbert space ${\cal H}$ such that the mapping $x\mapsto \pi_{W_x}$ is weakly measurable. Also suppose $\omega$ is a bounded map and the following conditions hold for all $f\in{\cal H}$:\\ {\rm(a)} There exists $C>0$ such that $$\int_X\parallel\pi_{W_x}(f)\parallel^2d\mu(x)\leq \frac{1}{C}\parallel f\parallel^2;$$\\ {\rm(b)} $f=\sum_{x\in X}\omega(x)\pi_{W_x}(f)\mu(\{x\})$.\\ Then $\{(W_x,\omega(x))\}_{x\in X}$ is a continuous frame of subspace for ${\cal H}$. \end{proposition}
{\noindent{Proof.}} By condition (a) we see that
$$\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\leq\frac{\sup_{x\in X}\omega(x)^2}{C}\|f\|^2, ~~~~~~~(f\in{\cal H}).$$ Condition (b) implies that for a fixed element $f$ of ${\cal H}$
$$\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)=\sum_{x\in X}\omega(x)^2\|\pi_{W_x}(f)\|^2\mu(\{x\}),$$ and
$$\int_X\|\pi_{W_x}(f)\|^2\;d\mu(x)=\sum_{x\in X}\|\pi_{W_x}(f)\|^2\mu(\{x\}).$$ Now, since the family $\{\omega(x)\mu(\{x\})\pi_{W_x}(f)\}_{x\in X}$ is unconditionally summable, we can use the Cauchy-Schwarz inequality and compute as follows
\begin{eqnarray*}
\|f\|^4&=&(\langle\sum_{x\in X}\omega(x)\mu(\{x\})\pi_{W_x}(f),f\rangle)^2\\
&=&(\sum_{x\in X}\omega(x)\mu(\{x\})\|\pi_{W_x}(f)\|^2)^2\\
&\leq&(\sum_{x\in X}\omega(x)^2\mu(\{x\})\|\pi_{W_x}(f)\|^2)(\sum_{x\in X}\|\pi_{W_x}(f)\|^2\mu(\{x\}))\\
&\leq& \frac{1}{C}\|f\|^2(\sum_{x\in X}\omega(x)^2\|\pi_{W_x}(f)\|^2\mu(\{x\})) \end{eqnarray*} Thus
$$C\|f\|^2\leq\sum_{x\in X}\omega(x)^2\|\pi_{W_x}(f)\|^2\mu(\{x\})=\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)$$ for all $f\in{\cal H}$, and this completes the proof.$
\Box$\\
In the following proposition we give a reconstruction formula for continuous frames of subspace in a special case.
\begin{proposition} Let $\{W_x\}_{x\in X}$ be a family of pairwise orthogonal closed subspaces of the Hilbert space ${\cal H}$. If $\{(W_x,\omega(x))\}_{x\in X}$ is a continuous frame of subspace for ${\cal H}$ with bounds $C, D$, then for each $f\in{\cal H}$ $$f=\sum_{x\in X}\pi_{W_x}(f).$$ The converse is true if $\omega$ is bounded and there exists $C>0$ such that
$$\int_X\|\pi_{W_x}(f)\|^2\;d\mu(x)\leq\frac{1}{C}\|f\|^2,$$ for all $f\in{\cal H}$. \end{proposition}
{\noindent Proof.} Let $\{(W_x,\omega(x))\}_{x\in X}$ be a continuous frame of subspace. First, we note that for each $f\in{\cal H}$, by the Hahn-Banach Theorem and the orthogonality of the family $\{W_x\}_{x\in X}$, there exists a sequence $\{f_n\}$ in ${\cal H}$ such that $f_n\longrightarrow f$ and for each $n$ we have the equality $$f_n=\sum_{x\in X}\pi_{W_x}(f_n).$$
Now we define $S_\gamma(f)=\sum_{x\in\gamma}\pi_{W_x}(f)$, where $\gamma$ is an arbitrary finite subset of $X$ and $f\in{\cal H}$. Therefore \begin{eqnarray*}
C\|S_\gamma(f)-f\|^2&\leq&\int_X\omega(x)^2\|\pi_{W_x}(S_\gamma(f)-f)\|^2\;d\mu(x)\\
&\leq&\int_X\omega(x)^2\|\pi_{W_x}(f)\|^2\;d\mu(x)\\
&\leq&D\|f\|^2. \end{eqnarray*} By replacing $f$ with $f_n-f$ we obtain
$$\|S_\gamma(f_n-f)-(f_n-f)\|\leq\sqrt{\frac{D}{C}}\|f_n-f\|.$$
Since $S_\gamma(f_n)\longrightarrow f_n$ along the net $\Gamma$ for each fixed $n$, and $\|S_\gamma(f-f_n)\|\leq\|S_\gamma(f-f_n)-(f-f_n)\|+\|f-f_n\|$, we get $\limsup_{\gamma}\|S_\gamma(f)-f\|\leq (2+\sqrt{\frac{D}{C}})\|f_n-f\|$ for every $n$; letting $n\to\infty$ gives $f=\sum_{x\in X}\pi_{W_x}(f)$. The converse holds by Proposition 2.6.$
\Box$\\
Now we want to show that, given a uca-resolution of the identity, each $f\in{\cal H}$ has a new countable reconstruction formula. First we need the following lemma: \begin{lemma} Let $\{T_x\}_{x\in X}$ be an uca-resolution of the identity with respect to the weight $\omega$ for ${\cal H}$ with bounds $C$ and $D$, and let $\{f_i\}_{i\in I}$ be a frame sequence. Then there exists a countable subset $J$ of $X$ such that $\{\omega(j)\sqrt{\mu (\{j\})}T^*_j(f_i)\}_{i\in I, j\in J}$ is a frame for $\overline{\hbox{span}}\{f_i\}_{i\in I}$. \end{lemma} {\noindent {Proof.}} If we set $J_i=\{x\in X:~T_x(f_i)\neq 0\}$, then by the definition of a uca-resolution of the identity, $J_i$ is a countable and measurable subset of $X$. Now, set $J=\bigcup_{i\in I}J_i$. So $J$ is a countable and measurable subset of $X$, and for each $f\in\overline{\hbox{span}}\{f_i\}_{i\in I}$ and $x\in X\setminus J$ we have $$T_x(f)=0.$$ Hence we see that for each $f\in\overline{\hbox{span}}\{f_i\}_{i\in I}$
$$C\|f\|^2\leq \sum_{j\in J}\omega^2(j)\mu(\{j\})\|T_j(f)\|^2\leq D\|f\|^2,$$ and $$f=\sum_{j\in J} T_j(f),$$ and these series converge unconditionally.
Now, suppose that $A$ and $B$ are frame bounds of $\{f_i\}_{i\in I}$. For each $f\in\overline{\hbox{span}}\{f_i\}_{i\in I}$ we have
$$A\sum_{j\in J}\omega^2(j)\mu(\{j\})\|T_j(f)\|^2\leq\sum_{j\in J}\sum_{i\in I}|<\omega^2(j)\mu(\{j\})T_j(f),f_i>|^2$$
$$~~~~~~~~~~~~~~~~~~~~~~~\leq B \sum_{j\in J}\omega^2(j)\mu(\{j\})\|T_j(f)\|^2,$$ and therefore
$$AC\|f\|^2\leq A\sum_{j\in J}\omega^2(j)\mu(\{j\})\|T_j(f)\|^2$$
$$~~~~~~~~~~~~~~~~~~~~~~~~\leq \sum_{j\in J}\sum_{i\in I}|<f,\omega^2(j)\mu(\{j\})T^*_j(f_i)>|^2$$
$$~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq B\sum_{j\in J}\omega^2(j)\mu(\{j\})\|T_j(f)\|^2\leq BD\|f\|^2,$$ and this completes the proof.$
\Box$\\
\begin{theorem} Let $\{T_x\}_{x\in X}$ be an uca-resolution of the identity with respect to the weight $\omega$ for ${\cal H}$ with bounds $C$ and $D$. Then for each $f\in{\cal H}$, there exists a countable subset $I$ {\rm(}depending on $f${\rm)} of $X$, such that we have the following reconstruction formula $$f=\sum_{i\in I}\omega^2(i)\mu(\{i\})S^{-1}T^*_iT_i(f)=\sum_{i\in I}\omega^2(i)\mu(\{i\})T^*_iT_iS^{-1}(f),$$
where $S$ is the frame operator of the frame sequence constructed in the proof. \end{theorem} {\noindent Proof.} Let $f$ be a fixed element of the Hilbert space ${\cal H}$. Set $${\cal H}_f=\overline{\hbox{span}}\{e_j\}_{j\in J},$$ where $J=\{j\in\Lambda:~\big<e_j,f\big>\neq 0\}$ is a countable subset of $\Lambda$. Then, by Lemma 2.8, there is a countable subset $I$ of $X$ such that the sequence $\{\omega(i)\sqrt{\mu (\{i\})}T^*_i(e_j)\}_{i\in I, j\in J}$ is a frame for ${\cal H}_f$.\\ Now, if $S\in B({\cal H})$ denotes the frame operator of $\{\omega(i)\sqrt{\mu (\{i\})}T^*_i(e_j)\}_{i\in I, j\in J}$, then we have $$S(f)=\sum_{i\in I}\sum_{j\in J}\big<f,\omega(i)\sqrt{\mu (\{i\})}T^*_i(e_j)\big>\omega(i)\sqrt{\mu (\{i\})}T^*_i(e_j)$$
$$=\sum_{i\in I}\omega^2(i)\mu(\{i\})T^*_i(\sum_{j\in J}\big<T_i(f),e_j\big>e_j)~~~~~~~~~~~$$ $$=\sum_{i\in I}\omega^2(i)\mu(\{i\})T^*_iT_i(f).~~~~~~~~~~~~~~~~$$ Hence, the reconstruction formula follows immediately from the invertibility of the operator $S$.$
\Box$\\
In the rest of the paper we consider the stability of uca-resolutions of the identity under perturbation. First, let us state and prove the following useful lemma.
\begin{lemma} Let $\{T_x\}_{x\in X}$ and $\{S_x\}_{x\in X}$ be two families of bounded operators on ${\cal H}$, and suppose there exists $0<\lambda<1$ such that for all finite subsets $I$ of $X$
$$\|\sum_{i\in I}(T_i-S_i)(f)\|\leq\lambda\|\sum_{i\in I}T_i(f)\|~~~~~~{\rm(}f\in{\cal H}{\rm)}~~~~{\rm(}1{\rm)}.$$ If $\{(T_x,\omega(x))\}_{x\in X}$ is an uca-resolution of the identity then we have the following reconstruction formula $$f=\sum_{x\in X}S_xS^{-1}(f)~~~~~~{\rm(}f\in{\cal H}{\rm)}$$ where $S$ is an invertible operator on ${\cal H}$. \end{lemma} {\noindent Proof.} Let $f\in{\cal H}$ and let $I$ be a finite subset of $X$. Note that
$$\|f-\sum_{i\in I}S_i(f)\|\leq\|f-\sum_{i\in I}T_i(f)\|+\|\sum_{i\in I}T_i(f)-\sum_{i\in I}S_i(f)\|.$$ Therefore, by inequality (1), we have
$$\|f-\sum_{i\in I}S_i(f)\|\leq\|f-\sum_{i\in I}T_i(f)\|+\lambda\|\sum_{i\in I}T_i(f)\|~~~~~~{\rm(}2{\rm)}.$$ Hence, the family $\{S_x(f)\}_{x\in X}$ is unconditionally summable. Now, we define $S:{\cal H}\rightarrow{\cal H}$ by $S(f)=\sum_{x\in X}S_x(f)$. By inequality (2), and using that $\{(T_x,\omega(x))\}_{x\in X}$ is assumed to be an uca-resolution of the identity, $S$ is well defined and we have
$$\|f-S(f)\|\leq\lambda\|f\|,$$
for all $f\in{\cal H}$. So $\|{\rm id}_{\cal H}-S\|\leq\lambda<1$, and therefore $S$ is an invertible operator on ${\cal H}$. Hence for all $f\in{\cal H}$ we have $$\sum_{x\in X}S_xS^{-1}(f)=SS^{-1}(f)=f,$$ and this completes the proof.$
\Box$\\
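For later reference, the invertibility step above can be made quantitative by the standard Neumann series (a routine observation which we record here):
$$S^{-1}=\sum_{k=0}^{\infty}({\rm id}_{\cal H}-S)^k,
~~~~~~~~\|S^{-1}\|\leq\sum_{k=0}^{\infty}\lambda^k=\frac{1}{1-\lambda},$$
and similarly $\|S\|\leq 1+\lambda$.\\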
\begin{definition} Let $\{T_x\}_{x\in X}$ and $\{S_x\}_{x\in X}$ be two families of bounded operators on ${\cal H}$, and let $\omega:X\rightarrow [0,\infty)$ be a measurable map such that $\omega(x)\neq 0$ almost everywhere. Suppose that $0\leq\lambda_1, \lambda_2<1$, and $\varphi:X\rightarrow [0,\infty)$ is an arbitrary positive map such that $\int_X\varphi(x)^2\;d\mu(x)<\infty$. If
$$\|\omega(x)(T_x-S_x)(f)\|\leq\lambda_1\|\omega(x)T_x(f)\|+\lambda_2\|\omega(x)S_x(f)\|+\varphi(x)\|f\|$$ for all $f\in{\cal H}$ and $x\in X$, then we say that $\{(S_x,\omega(x))\}_{x\in X}$ is a $(\lambda_1, \lambda_2, \varphi)$-perturbation of $\{(T_x,\omega(x))\}_{x\in X}$. \end{definition}
From now on let $\{S_x\}_{x\in X}$ be a family of bounded operators on ${\cal H}$ such that the mapping $x\mapsto S_{x}(f)$ is weakly measurable. Then for each bounded operator $S:{\cal H}\rightarrow{\cal H}$, the map $x\mapsto S_{x}S(f)$ is weakly measurable. Hence, by Lemma 2.10, we have the following theorem.
\begin{theorem} Let $\{(T_x,\omega(x))\}_{x\in X}$ be an uca-resolution of identity for ${\cal H}$ with bounds $C$ and $D$, and let $\{(S_x,\omega(x))\}_{x\in X}$ be a $(\lambda_1, \lambda_2, \varphi)$-perturbation of $\{(T_x,\omega(x))\}_{x\in X}$ for some $0\leq\lambda_1,\lambda_2<1$. Moreover assume that $(1-\lambda_1)\sqrt{C}-(\int_X\varphi(x)^2\;d\mu(x))^{\frac{1}{2}}>0$ and for some $0\leq\lambda<1$
$$\|\sum_{i\in I}(T_i-S_i)(f)\|\leq\lambda\|\sum_{i\in I}T_i(f)\|~~~~~~~~~{\rm(}f\in{\cal H}{\rm)},$$ for all finite subsets $I$ of $X$. Then there exists an invertible operator $S$ on ${\cal H}$ such that $\{(S_xS^{-1},\omega(x))\}_{x\in X}$ is an uca-resolution of the identity on ${\cal H}$. \end{theorem} {\noindent Proof.} First it should be noted that, by Lemma 2.10, there exists an invertible operator $S$ on ${\cal H}$ such that the family $\{S_xS^{-1}\}_{x\in X}$ satisfies condition (c) of Definition 2.1. Also, by the Open Mapping Theorem and the Closed Graph Theorem, there exist $A>0$ and $B>0$ such that
$$A\|f\|\leq\|S^{-1}(f)\|\leq B\|f\|$$ for all $f\in{\cal H}$.\\
Now, for $f\in{\cal H}$ we obtain\\\\
$(\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x))^\frac{1}{2}\leq(\int_X\omega(x)^2(\|T_x(f)\|+\|(T_x-S_x)(f)\|)^2\;d\mu(x))^\frac{1}{2}$\\\\
$\leq(\int_X((\omega(x)^2(\|T_x(f)\|+\lambda_1\|T_x(f)\|+\lambda_2\|S_x(f)\|))+\varphi(x)\|f\|)^2\;d\mu(x))^\frac{1}{2}$\\\\
$\leq(1+\lambda_1)(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^\frac{1}{2}
+\lambda_2(\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x))^\frac{1}{2}
+\|f\|(\int_X\varphi(x)^2\;d\mu(x))^\frac{1}{2}.$\\\\
Hence
$$\int_X\omega(x)^2\|S_xS^{-1}(f)\|^2\;d\mu(x)\leq(\frac{(1+\lambda_1)\sqrt{D}+(\int_X\varphi(x)^2\;d\mu(x))^\frac{1}{2}}{1-\lambda_2})^2B^2\|f\|^2.$$\\\\ To prove the lower bound, first we observe that
$$\|f\|^2\leq\frac{1}{C}\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x),$$ for all $f\in{\cal H}$. Therefore, by the triangle inequality we have\\\\
$(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^\frac{1}{2}-(\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x))^\frac{1}{2}
\leq(\int_X\|\omega(x)(T_x-S_x)(f)\|^2)^\frac{1}{2}$\\\\
$\leq\lambda_1(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^\frac{1}{2}
+\lambda_2(\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x))^\frac{1}{2}
\\\\+\frac{1}{\sqrt{C}}(\int_X\varphi(x)^2\;d\mu(x))^\frac{1}{2}(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^\frac{1}{2}.$\\\\ Hence $$(\frac{1-\lambda_1-\frac{1}{\sqrt{C}}(\int_X\varphi(x)^2\;d\mu(x))^\frac{1}{2}}{1+\lambda_2})
(\int_X\omega(x)^2\|T_x(f)\|^2\;d\mu(x))^\frac{1}{2}\leq(\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x))^\frac{1}{2}.$$ So
$$(\frac{(1-\lambda_1)\sqrt{C}-(\int_X\varphi(x)^2\;d\mu(x))^\frac{1}{2}}{1+\lambda_2})^2A^2\|f\|^2\leq\int_X\omega(x)^2\|S_xS^{-1}(f)\|^2\;d\mu(x),$$\\ as required.$
\Box$\\
\begin{remark} {\rm Suppose $\{T_x\}_{x\in X}$ and $\{S_x\}_{x\in X}$ are two families of bounded operators on ${\cal H}$. If $\{(T_x,\omega(x))\}_{x\in X}$ is a uca-resolution of the identity, then by the Cauchy-Schwarz inequality we have \begin{eqnarray*}
|\big<T_xS_x(f),g\big>|&=&|\big<S_x(f),T^*_x(g)\big>|\\
&\leq&\|S_x(f)\|\|T^*_x\|\|g\|\\
&\leq&\|S_x(f)\|\|g\|\sup_{x\in X}\|T_x\|, \end{eqnarray*} for all $f,g\in{\cal H}$ and $x\in X$. Hence, for each $f\in{\cal H}$ and $x\in X$
$$\|T_xS_x(f)\|\leq\|S_x(f)\|E,$$
where $E=\sup_{x\in X}\|T_x\|$. } \end{remark}
\begin{theorem} Let $\{(T_x,\omega(x))\}_{x\in X}$ be an uca-resolution of the identity for ${\cal H}$ with bounds $C$ and $D$, and let $\{S_x\}_{x\in X}$ be a family of bounded operators on ${\cal H}$ such that
$$\int_X\omega(x)^2\|S_x(f)\|^2\;d\mu(x)\leq D\|f\|^2,$$ for all $f\in{\cal H}$. Suppose that $\varphi:X\rightarrow [0,\infty)$ is a positive map, and there exist $0<\lambda_1, \lambda_2<1$ such that
$$\|\omega(x)f-\omega(x)^2T_xS_x(f)\|\leq\lambda_1\|\omega(x)T_x(f)\|+\lambda_2\|\omega(x)^2T_xS_x(f)\|+\varphi(x)\|f\|$$ for all $f\in{\cal H}$ and $x\in X$. Also assume that
$$\|\sum_{i\in I}(T_i-S_i)(f)\|\leq\lambda\|\sum_{i\in I}T_i(f)\|$$ for all finite subsets $I$ of $X$ and all $f\in{\cal H}$, where $0<\lambda<1$. If $\int_X\varphi(x)\;d\mu(x)<\infty$ and $0<(\int_X\omega(x)^2d\mu(x))^{\frac{1}{2}}-\lambda_1\sqrt{D}-(\int_X\varphi(x)^2d\mu(x))^{\frac{1}{2}}<\infty$, then there exists an invertible operator $S$ on ${\cal H}$ such that $\{(S_xS^{-1},\omega(x))\}_{x\in X}$ is an uca-resolution of the identity on ${\cal H}$. \end{theorem} {\noindent Proof.} For $f\in{\cal H}$ we have\\\\
$\|f\|(\int_X\omega(x)^2\;d\mu(x))^{\frac{1}{2}}\leq
(\int_X(\|\omega(x)f-\omega(x)^2T_xS_x(f)\|+\|\omega(x)^2T_xS_x(f)\|)^2\;d\mu(x))^{\frac{1}{2}}$\\\\ $\leq
(\int_X\|\omega(x)f-\omega(x)^2T_xS_x(f)\|^2d\mu(x))^{\frac{1}{2}}+(\int_X\|\omega(x)^2T_xS_x(f)\|^2d\mu(x))^{\frac{1}{2}}$\\\\ $\leq
(\int_X(\lambda_1\|\omega(x)T_x(f)\|+\lambda_2\|\omega(x)^2T_xS_x(f)\|+\varphi(x)\|f\|)^2d\mu(x))^{\frac{1}{2}}$\\\\ $+
(\int_X\|\omega(x)^2T_xS_x(f)\|^2d\mu(x))^{\frac{1}{2}}$\\\\
$\leq\lambda_1\sqrt{D}\|f\|+(1+\lambda_2)(\int_X\omega(x)^2\|T_xS_x(f)\|^2d\mu(x))^{\frac{1}{2}}+\|f\|(\int_X\varphi(x)^2d\mu(x))^{\frac{1}{2}}$\\\\
$\leq\lambda_1\sqrt{D}\|f\|+(1+\lambda_2)E(\int_X\omega(x)^2\|S_x(f)\|^2d\mu(x))^{\frac{1}{2}}+\|f\|(\int_X\varphi(x)^2d\mu(x))^{\frac{1}{2}}$\\\\
where $E=\sup_{x\in X}\|T_x\|$. Therefore
$$\|f\|\frac{(\int_X\omega(x)^2d\mu(x))^{\frac{1}{2}}-\lambda_1\sqrt{D}-(\int_X\varphi(x)^2d\mu(x))^{\frac{1}{2}}}{E(1+\lambda_2)}
\leq(\int_X\omega(x)^2\|S_x(f)\|^2d\mu(x))^{\frac{1}{2}}.$$ Now, by Lemma 2.10 and arguing as in the proof of Theorem 2.12, the assertion holds.$
\Box$\\
\footnotesize
\noindent (Abdolmajid Fattahi) Department of Mathematics, Razi University, Kermanshah, Iran.\\ E-mail address: [email protected] \& [email protected]\\\\ (H. Javanshiri) Department of Mathematical Sciences, Isfahan University of Technology, Isfahan 84156-83111, Iran.\\ E-mail address: [email protected] \& [email protected]
\end{document}
Quantizing Klein-Gordon via Lie Groups
I'm trying to understand second quantization of the Klein-Gordon equation, as explained in, say, standard books like Peskin and Schroeder, but using the language of Lie (representation) theory. In a sense I just want to be able to attach the correct mathematical words to each step of the process, e.g. this step is talking about a Heisenberg Lie algebra, but we interpret it as embedded in the Lie algebra of a Poincare group because of another step, that step goes to the dual Lie algebra, etc... but it's a bit confusing as to what's actually going on.
Motivating this question are Woit's notes on QM/QFT, pages 438, 441, 446, I've given the motivating quotes below.
My hope is that the process is something like:
Manifold: Minkowski Space
Lie Group: Poincare Group
Lie Algebra: Poincare Algebra (this gives us a Lie-algebra representation of the (Poincare) Lie Group).
Universal Enveloping Algebra: (Just to be able to form a Casimir)
Casimir Element: Klein-Gordon Operator
Casimir Eigenvalue Equation: Klein-Gordon Equation where eigenfunctions are just elements of the Poincare Lie algebra, and these are irreducible representations of the Poincare group, (which explains why wave functions are operators and avoids a classical scalar field interpretation)
Solution Method: Go to the dual Lie algebra (take a Fourier transform), solve as an algebraic equation, then invert to find $\hat{\psi}(x)$. But since this is a representation of the dual Poincare group (page 438 comment below), we need to randomly form quadratic combinations of the fields $\hat{\psi}(x), \hat{\pi}(x)$ and integrate over them (to get the Hamiltonian), and then commute this with those $\hat{\psi}(x), \hat{\pi}(x)$ fields (even though they all represent different spaces), and this gives us a solution to the Klein-Gordon equation... This is supposed to be motivated by thinking about intertwining operators (to find equivalent representations), but it really makes no sense to me...
Note this process completely ignores classical field theory, it is group theoretical only: the classical wave equation only arises as the Casimir of the Poincare group, spitting out irreducible representations, with functions expressed in a basis of these giving a wave function, and the structure of the whole process just follows the same process you follow in Lie theory when looking at a Lie group to, say, prove it is simple/semi-simple...
However Woit's notes seem to say something different, pages 438, 441 and 446 seem to indicate the process is more like:
Manifold: Phase space of solutions (scalar fields) to the classical Klein-Gordon PDE
Lie Group: Poincare Group (acting on functions in the phase space)
Lie Algebra: Not the Poincare Algebra (acting on functions in the phase space), but instead the representation of a (random) Heisenberg algebra derived from the symplectic structure of the phase space, which we (randomly) use to find irreducible representations of the Poincare group, by applying the Casimir of a totally different Lie algebra coming from the Poincare group,
Universal Enveloping Algebra: ...
Casimir Element: Not the Klein-Gordon Operator acting on functions in the phase space coming from the Poincare group Lie algebra, but instead the Klein-Gordon operator acting on "unitary representation of a Heisenberg Lie algebra on $M \oplus R$, where M is the dual of the space of solutions of the Klein-Gordon equation",
Casimir Eigenvalue Equation: Eigenfunctions are representations of the solutions of the classical Klein-Gordon equation, the dual of the space of solutions of the Klein-Gordon equation, but (p.446) they are also irreducible representations of the Poincare group somehow.
Solution Method: Treat the wave function $$\hat{\phi}(x) = \dfrac{1}{(2\pi)^{3/2}} \int_{\mathbb{R}^3} (a(\vec{p})e^{i \vec{p} \cdot \vec{x}} + a^{+}(\vec{p})e^{- i \vec{p}\cdot \vec{x}})\dfrac{d^3\vec{p}}{\sqrt{2\omega_{\vec{p}}}} $$ along with the commutation relations $$[\hat{\phi},\hat{\pi}] = ..., [\hat{\phi},\hat{\phi}] = ..., $$ as a unitary representation of a Heisenberg Lie algebra on $M \oplus R$, where M is the dual of the space of solutions of the Klein-Gordon equation. Then to find a representation of the Poincare group from this representation of a completely different space, we form quadratic combinations of $\hat{\phi},\hat{\pi}$ and integrate to form the Hamiltonian (integrating so that we are representing the original Poincare group I guess) and then randomly commuting the Hamiltonian representing the original Poincare group with representations of a dual Lie algebra.
This process assumes the classical Klein-Gordon theory (for no reason), seems to mix up the spaces on which everything is done, and use this crazy Heisenberg Lie algebra coming from a representation of a function space to describe the Poincare group acting on the original phase space. It should look obvious and coherent, I just don't see it.
The quotes motivating all this are:
The real scalar quantum field operators are the operator-valued distributions defined by
$$\hat{\phi}(x) = \dfrac{1}{(2\pi)^{3/2}} \int_{\mathbb{R}^3} (a(\vec{p})e^{i \vec{p} \cdot \vec{x}} + a^{+}(\vec{p})e^{- i \vec{p}\cdot \vec{x}})\dfrac{d^3\vec{p}}{\sqrt{2\omega_{\vec{p}}}} $$
The commutation relations
$$[\hat{\phi},\hat{\pi}] = ..., [\hat{\phi},\hat{\phi}] = ..., $$
can be interpreted as the relations of a unitary representation of a Heisenberg Lie algebra on $M \oplus R$, where M is the dual of the space of solutions of the Klein-Gordon equation.
The complex scalar quantum field operators are the operator-valued distributions that provide a representation of the infinite-dimensional Heisenberg algebra given by the linear functions on the phase space of solutions to the complexified Klein-Gordon equation. This representation will be on a state space describing both particles and antiparticles.
which interpret the wave functions $\hat{\phi}$ as representations of a Heisenberg Lie algebra on the space of solutions of Klein-Gordon, and
Just as for non-relativistic quantum fields, the theory of free relativistic scalar quantum fields starts by taking as phase space an infinite dimensional space of solutions of an equation of motion. Quantization of this phase space proceeds by constructing field operators which provide a representation of the corresponding Heisenberg Lie algebra, using an infinite dimensional version of the Bargmann-Fock construction. In both cases the equation of motion has a representation-theoretical significance: it is an eigenvalue equation for the Casimir operator of a group of space-time symmetries, picking out an irreducible representation of that group. In the non-relativistic case the Laplacian was the Casimir and the symmetry group was the Euclidean group E(3). In the relativistic case the Casimir operator is the Klein-Gordon operator, and the space-time symmetry group is the Poincare group. The Poincare group acts on the phase space of solutions to the Klein-Gordon equation, preserving the Poisson bracket. One can thus use the same methods as in the finite-dimensional case to get a representation of the Poincare group by intertwining operators for the Heisenberg Lie algebra representation (the representation given by the field operators). These methods give a representation of the Lie algebra of the Poincare group in terms of quadratic combinations of the field operators.
which views the K-G equation as coming from a Casimir, talks about how this Heisenberg Lie algebra can be used to find representations of the Poincare group and how one should form quadratic combinations of fields, motivated by thinking about intertwining operators...
My questions are - based on the two outlines and Woit's quotes I've given above:
What is wrong with the first outline I have given?
e.g. Why is it wrong to take Minkowski space as the manifold, to ignore the classical Klein-Gordon equation, and to set up an eigenvalue equation where the eigenfunctions are Lie algebra operators?
How do we clean up the second outline and make it look coherent?
e.g. I have almost certainly misunderstood it, and not understood the link between the Poincare algebra coming from the Lie group and the Heisenberg Lie algebra coming from the dual of the space of solutions of K-G, let alone the link between the Poincare group and that Heisenberg Lie algebra, and why one can get the Poincare group from the Heisenberg Lie algebra.
What is going on with this quadratic combinations ~ Intertwining operator ~ Hamiltonian business?
e.g. it seems like this step is done to find a representation of the Poincare group even though you use representations of the dual, then you commutator this with those dual representations as if they're all in the same space anyway which is random, and even thinking to combine quadratic combinations comes out of nowhere. What's going on here? (A concrete version of the computation I mean is written out after these questions.)
Can a simpler and clearer explanation of this process be given?
e.g. In other words, what exactly is the process from start to finish?
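To make question 3 concrete, here is the standard free-field computation as I understand it (standard conventions, so treat the normalisations with a grain of salt): the quadratic combination

$$H = \int_{\mathbb{R}^3} \omega_{\vec{p}}\, a^{+}(\vec{p})\, a(\vec{p})\, d^3\vec{p}, \qquad [a(\vec{p}), a^{+}(\vec{p}\,')] = \delta^3(\vec{p}-\vec{p}\,'),$$

satisfies

$$[H, a^{+}(\vec{p})] = \omega_{\vec{p}}\, a^{+}(\vec{p}), \qquad [H, a(\vec{p})] = -\omega_{\vec{p}}\, a(\vec{p}),$$

so that $e^{iHt}\hat{\phi}(\vec{x})e^{-iHt}$ reproduces the time-translated field, i.e. the unitary generated by the quadratic expression acts as an intertwiner implementing time translation on the Heisenberg algebra representation. I can verify this computation directly; what I can't see is the representation-theoretic logic that predicts it.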
References: Woit "Quantum Theory, Groups and Representations: An Introduction"
quantum-field-theory
klein-gordon
quantisation
asked Aug 20, 2015 in Theoretical Physics by bolbteppa (120 points)
The group you should use is the semidirect product of the Heisenberg-Weyl group for the fields and the Poincare group acting on the field coordinates, and the corresponding Lie algebra. The Heisenberg algebra then provides the linear fields, and the Poincare generators can be expressed as quadratic expressions in these fields. The Klein-Gordon differential operator appears as Casimir.
commented Aug 21, 2015 by Arnold Neumaier (13,989 points)
\begin{document}
\maketitle
\begin{abstract} In this article, we study a class of lattice random variables in the domain of attraction of an $\alpha$-stable random variable with index $\alpha \in (0,2)$ which satisfy a truncated fractional Edgeworth expansion. Our results include studying the class of such fractional Edgeworth expansions under simple operations, providing concrete examples; sharp rates of convergence to an $\alpha$-stable distribution in a local central limit theorem; Green's function expansions; and finally fluctuations of a class of discrete stochastic PDE's driven by the heavy-tailed random walks belonging to the class of fractional Edgeworth expansions. \end{abstract}
\section{Introduction and overview of the results}
Edgeworth expansions refer to asymptotic expansions of the cumulative distribution function (CDF) $F_X(\cdot)$ of properly centered and rescaled random variables $X$ (under some moment assumptions) which control the error, see e.g. \cite{edge} and references therein. Although the central limit theorem characterises the limiting distribution and the Berry-Ess\'een theorem quantifies uniform error bounds, they are not able to capture important quantities of the distribution such as skewness or kurtosis. Those quantities can be seen, again under some higher moment assumptions, in Edgeworth expansions which are polynomial series of the CDF with coefficients related to the cumulants. As a consequence, one can obtain local central limit theorem convergence rates and potential kernel expansions.
If the sequence of random variables is in the domain of attraction of an $\alpha$-stable distribution with $\alpha \in (0,2)$, then the variance and even the mean might not exist, hence Edgeworth expansions in the classical sense are not well defined.
To overcome the lack of moments of order greater or equal to $\alpha$ for random variables in the domain of attraction of an $\alpha$-stable distribution, other quantities were studied. For instance, Bergstr\"om introduced in the 1950s the concept of \textit{pseudo-moments} and \textit{difference moments}, see \cite{bergstrom1953distribution}. Pseudo-moments are not useful in the context of integer-valued random variables since they are infinite, see e.g. Lemma 2.7 in \cite{christoph1992convergence}.
Let us explain the concept of difference-moments in the following. Denote by $F_{\bar{X}}(\cdot)$, the CDF of a random variable in the domain of attraction of an $\alpha$-stable random variable $\bar{X}$, $k \ge \alpha$ be an integer and $F_X(\cdot)$ the CDF of a (continuous) random variable $X$. Although the k-th moment is not finite, it is possible to define for some $r > \alpha$ the quantity
\begin{equation}\label{eq:diff-moment}
\chi_r= r\int_{\bb R} |x|^{r-1} |H(x)|dx < \infty, \end{equation}
where $H(\cdot)$ is defined as $H(x):=F_X(x)-F_{\bar{X}}(x)$.
In this case, we have that for every integer $k < r$, the quantity
$\eta_k:= -k\int_{\bb R} x^{k-1} dH(x)$ is well defined, implying the following expansion for the characteristic function of $X$ in terms of the characteristic function of $\bar{X}$
\begin{equation}\label{eq:arbitrary-exp}
\phi_X(\theta)= \phi_{\bar{X}}(\theta)
+
\sum_{k = 0}^{\lfloor r \rfloor} \eta_k\frac{(i\theta)^k}{k!}
+
\mc O(|\theta|^r)
\text{ as } \theta \to 0. \end{equation}
In case the quantity $\eqref{eq:diff-moment}$ is not well-defined, similar expansions could be obtained using \textit{integral difference moments}, see \cite{christoph1992convergence} and references therein (particularly \cite{christoph1984asymptotic} for the case of integer-valued random variables).
Let us emphasize two shortcomings of the methods considered above. The first is that they only allow integer powers of $\theta$, and therefore the difference $(\phi_X-\phi_{\bar{X}})$ needs to be differentiable up to a high order for the sum in the expansion to be non-trivial. The second shortcoming comes from the necessity of closed expressions for $p_X(\cdot)$ and/or $p_{\bar{X}}(\cdot)$ in order to apply the methods above. This translates into two possible restrictions: i) restricting the choice of stable distribution to one of the cases for which we have an analytic expression for $p_{\bar{X}}(\cdot)$; or ii) restricting the choice of discrete random walks to a specific discretisation of $p_{\bar{X}}(\cdot)$ given by
\begin{equation}\label{eq:dist-from-integral}
p_X(x):=\int_{x-1/2}^{x+1/2} p_{\bar{X}}(y)dy. \end{equation}
\subsection*{Overview of the results} In this article, we seek to generalise such classes and provide simple examples with explicit distributions. In particular, we will study \textit{fractional Edgeworth expansions} of the characteristic function of an integer-valued $X$ (instead of the corresponding CDF), generalizing the difference moments approach to not necessarily positive integer powers in the expansion. More precisely, assume that the common characteristic function of the collection of integer-valued random variables $(X_n)_{n\geq 0}$ satisfies the following expansion with respect to $\alpha \in (0,2)$ and regularity set $R_{\alpha} \subset (\alpha, 2+\alpha):$
\begin{align}\label{intro-phi}
\phi_X(\theta) = 1
- \mu_\alpha |\theta|^\alpha
+ i\mu^\prime_\alpha \sgn(\theta) |\theta|^\alpha
+ \sum_{\beta \in R_\alpha} \mu_\beta |\theta|^{\beta}
+ \sum_{\beta \in R_\alpha} i\mu^\prime_\beta \sgn(\theta)|\theta|^{\beta}
+ \mc O\Big(|\theta|^{2+\alpha}\Big) \end{align}
as $ |\theta|\longrightarrow 0$ with constants $\mu_\alpha >0, \mu_\alpha^\prime, \mu_{\beta}, \mu_\beta^\prime \in \bb R$. We will refer to the constants $\mu_\beta,\mu^\prime_\beta$ for $\beta \in R_\alpha \cup \{\alpha\}$ as the \textit{fictional moments of order $\beta$} of $X$ (or of $p_X(\cdot)$). Notice that \eqref{intro-phi} implies that the distribution associated to $\phi_X$ is in the domain of attraction of an $\alpha$-stable distribution. The concept of the regularity set $R_{\alpha}$ is roughly inspired by the \textit{index set} $A$, which appears in the definition of \textit{regularity structures} in \cite{hairer2015}, where $A$ encodes possible ``homogeneities" on the several levels of (ir)regularity of the objects being studied.
The truncation at level $2+\alpha$ is somewhat arbitrary and related to the concrete examples which we will discuss. We say that $p_X(\cdot)$ is admissible if its characteristic function satisfies \eqref{intro-phi}, we will denote this by $p_X \in \mc A$.
Note that the term fractional cumulant appeared for the first time in the physics paper \cite{hazut2015fractional}, which considers symmetric random variables whose density is defined as the inverse Fourier transform of an infinite series of fractional powers of $|\theta|$. Notice that their results are based on a precise infinite series for the characteristic function rather than an approximated expression.
Our results can be categorized into four types of contributions. We will sketch informally the main results for each contribution below.
\subsubsection*{1. Class of admissible probability mass functions}
In Section \ref{sec:example} we will study properties of the class of admissible probability mass functions or distributions for short. In particular, we will show in Lemma \ref{lem:closed-by-operations} that the class is closed under operations such as convolution and taking convex combinations, i.e. for $p_1, p_2 \in \mathcal{A}$ we have that $p_1*p_2\in \mathcal{A}$ and $\delta p_1+(1-\delta)p_2 \in \mathcal{A}$ for $\delta \in [0,1]$. This will be used in Proposition \ref{prop:adm-are-enough} to prove that any stable random variable $\bar{X}$ with parameters $(\alpha, \beta, \gamma, \varrho)$ (see Definition~\ref{def:stable}) can be approximated by an admissible distribution, i.e. there exists $p_X \in \mathcal{A}$ such that for the corresponding characteristic function we have that \[
\phi_X(\theta) = \phi_{\bar{X}}(\theta) + o(|\theta|^{\alpha}) \] for $\alpha \in (0,2)\setminus \{1\}$ and $\theta$ small enough. The main example is discussed in Proposition~\ref{prop:asymp-phi-alpha}. We consider the transition probability of a heavy-tailed random walk given by the transition probability \[
p_{\alpha}(x,y) = \frac{c_{\alpha}}{|x-y|^{1+\alpha}}, \, \, x\neq y, \alpha \in (0,2) \] and show that $p_{\alpha} \in \mathcal{A}$, $R_{\alpha} = \{2\}$ and determine the fictional moments of the expansion.
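To indicate where this expansion comes from: for $\alpha\in(0,2)\setminus\{1\}$ the characteristic function of $p_\alpha$ is a sum of two polylogarithms, whose behaviour at $\theta=0$ is classical; at least formally,
\[
\phi_{p_\alpha}(\theta)
= c_{\alpha}\Big(\operatorname{Li}_{1+\alpha}(e^{i\theta})+\operatorname{Li}_{1+\alpha}(e^{-i\theta})\Big)
= 1+\frac{\Gamma(-\alpha)\cos(\tfrac{\pi\alpha}{2})}{\zeta(1+\alpha)}\,|\theta|^{\alpha}
-\frac{\zeta(\alpha-1)}{2\zeta(1+\alpha)}\,\theta^{2}
+\mc O(\theta^{4}),
\]
with $c_\alpha=(2\zeta(1+\alpha))^{-1}$; since $\Gamma(-\alpha)\cos(\pi\alpha/2)<0$ on $(0,2)\setminus\{1\}$, this is consistent with $\mu_\alpha>0$. We stress that the displayed constants are a heuristic sketch; the precise statement (including the case $\alpha=1$) is the content of Proposition~\ref{prop:asymp-phi-alpha}.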
\subsubsection*{2. Local central limit theorem} Central limit theorems and local central limit theorems (LCLT) are fundamental results in probability theory. There exists a vast literature providing different types of LCLT results (or local stable limit theorems) in the stable setting with explicit and implicit convergence rates, e.g.~\cite{basu1976local, basu1979non, BergerNote, cara, Gnedenko, ibra, Mineka,rvaceva1961domains, stone1965local}.
To our knowledge, the best explicit non-uniform convergence rate for 1d absolutely continuous $X$ was proven in \cite{dattatraya1994non}, where the author showed under some integrability conditions on the characteristic function, that for any $\alpha \in (0,2)$:
\begin{equation} \label{eq:previous-lclt}
|x|^\alpha \left| p^n_{X}(x) - p^n_{\bar{X}} (x)\right|
\leq C n^{\gamma}, \end{equation}
where $\bar{X}$ is the stable distribution with index $\alpha$, with $\gamma=1- \frac{2}{\alpha}$ if $\alpha \in [1,2)$ and $\gamma = 1- \frac{1}{\alpha}$ if $\alpha \in (0,1)$. As for uniform bounds in $x$, without further assumptions on the law of the step distributions, one can use classical results of convergence of random variables (such as in \cite{rvaceva1961domains, stone1965local}) which imply that \begin{equation}\label{eq:previous-unif}
n^{\frac{1}{\alpha}}\left|
p^n_{X}(x) - p^n_{\bar{X}} (x)\right| = o (1). \end{equation} Our main result concerning sharp LCLT convergence rates is Theorem \ref{thm:lclt}:
\[
\sup_{x\in \bb Z} \left |p^n_{X}(x) - p^n_{\bar{X}} (x)\right|
\le C n^{-\frac{\beta_1+1-\alpha}{\alpha}} \]
for some positive constant $C$, where $\beta_1 = \beta_1(R_{\alpha})$ depends on the regularity set $R_{\alpha}$. In fact, we obtain in Corollary \ref{corol:assymp-repairable} that $\beta_1=2\alpha$ in the case that $\mu_{\beta}=0$ for all $\beta \notin \{\alpha, 2\}$. Depending on the sign of $\mu_2$, either the original or the limiting distribution has to be modified by the distribution of a finite-variance random variable with the ``correct" variance in order to preserve the strong convergence rate. The introduced error is of order $\mathcal{O}(n^{-\frac{1}{\alpha} + (1-\frac{2}{\alpha})})$ which will vanish as $n\rightarrow \infty$.
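For instance, plugging the value $\beta_1=2\alpha$ of the repaired case into the rate above gives
\[
n^{-\frac{\beta_1+1-\alpha}{\alpha}}=n^{-\frac{\alpha+1}{\alpha}}=n^{-1-\frac{1}{\alpha}},
\]
which improves on the uniform bound $o(n^{-1/\alpha})$ in \eqref{eq:previous-unif} by a full factor of order $n^{-1}$.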
The modification idea is natural and has shown to be very fruitful for example in \cite{Frometa2018} where the authors used it to obtain better convergence rates of a truncated Green's function in $\bb Z^2$.
\subsubsection*{3. Green's function estimates} Concerning discrete potential kernel or Green's function behaviour there has been some asymptotic estimates obtained in \cite{amir2017, Bass,BergerNote, berger2019, Williamson} and \cite{Widom} in the continuum. In \cite{Williamson}, the author proves that for $\alpha \in (0,2)$ the discrete potential kernel
(see Definition~\ref{def:potential-green}) is asymptotic to $\|x\|^{d-\alpha}L(|x|)$ where $L(\cdot)$ is a slowly varying function, whereas \cite{berger2019} obtains similar asymptotics for processes on $\mathbb{Z}^d$ with index $\alpha =(\alpha_1,\dots,\alpha_d)$ and $\alpha \in (0,2]^d$, $d\geq 1$. In Theorem \ref{thm:Green-adm} we prove for $\alpha \in [1,2)$ that, with $\delta := \min( R_\alpha )$, there exist explicit constants $C_{\alpha}, C_0, C_{\delta}$ such that as $|x|\rightarrow \infty$: \begin{itemize} \item[(i)] If $\delta < 2\alpha-1$, then
\[
a_X(0,x)= C_\alpha |x|^{\alpha-1}
+
C_{\delta} |x|^{2\alpha-\delta-1}
+ o (|x|^{2\alpha-\delta-1}),
\] \item[(ii)] if $\delta > 2\alpha-1$, then
\[
a_X(0,x)= C_\alpha |x|^{\alpha-1}
+C_0+
o(1),
\] \item[(iii)] if $\delta = 2\alpha-1$, then
\[
a_X(0,x)= C_\alpha |x|^{\alpha-1} +
C_{\delta}\log |x| + \mc O (1)
\]
as $x \to \infty$. \end{itemize} In particular, for $R_{\alpha} \in \{\varnothing, \{2\}\}$, we provide in Theorems \ref{thm:Green} and \ref{thm:green-1} more explicit asymptotic expansions. We also provide similar results for the Green's function (see Definition~\ref{def:potential-green}) in the regime $\alpha \in (2/3,1)$ in Theorem~\ref{thm:green-less1}.
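To see how this trichotomy plays out in a concrete case, consider the walk $p_{\alpha}(\cdot)$ above, for which $\delta=\min(R_\alpha)=2$: case (i) applies when $2<2\alpha-1$, that is $\alpha>3/2$; case (ii) applies when $\alpha\in[1,3/2)$; and the logarithmic case (iii) occurs precisely at $\alpha=3/2$, where the expansion reads $a_X(0,x)= C_{3/2}|x|^{1/2}+C_{2}\log|x|+\mc O(1)$.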
The proofs of the potential kernel bounds are original and they exploit the asymptotics of the characteristic function together with H\"older continuity instead of using the LCLT as a starting point like in the classical case \cite{Limic10}.
We believe our estimates would be useful to derive rates of convergence of the hitting measure from discrete to the continuous case. This convergence was first proven in \cite{kesten1961theorem}, but as far as the authors are aware, has no quantitative estimates. Quantitative estimates could allow us to study limit shape theorems for growth models, such as the internal diffusion limited aggregation (iDLA) driven by long-range random walks.
\subsubsection*{4. Fluctuations of the scaling limit of fractional Gaussian fields}
Finally, as an application of the fractional Edgeworth expansion, we turn to discrete stochastic partial differential equations.
Consider fields $\Xi^m$ which solve the equation
\[ \begin{cases}
\mc L^m \Xi^m(x) = \xi^m(x) - \sum_{z \in \bb T_m } \xi^m(z), & \text{ if } x \in \bb T_m \\
\sum_{x \in \bb T_m} \Xi^m(x)=0 \end{cases} \]
where $\bb T_m = 2\pi((-m/2,m/2]\cap\bb Z)$ is the discrete torus, $\mc L^m$ is the generator of the random walk obtained by periodising the distribution $p_X(\cdot)$ around $\bb T_m$, and $(\xi^m(x))_x$ is a collection of i.i.d. random variables with finite variance. We know that $\Xi^m$ (when renormalised) converges to a fractional Gaussian field $\Xi_{\alpha} $ of order $\alpha$, i.e.\ the solution of the equation in the continuum where the noise is substituted by the white noise on the torus, \cite{chiarini2021odometer}. We will show in Theorem~\ref{thm:converge-of-fields-elip} that, if the order of the first non-trivial fractional cumulant of $p_X$ is sufficiently small, then for an appropriate coupling between $\Xi^m$ and $\Xi_{\alpha}$, \[ C_1 m^{\beta - \alpha} (\Xi^m - C_2 \Xi_{\alpha}) \longrightarrow \Xi_{2\alpha -\beta} \] in probability as $m\rightarrow \infty$ in some appropriate Sobolev space $H^{-s}(\bb T)$, for some chosen $s$ depending on $\alpha, \beta$ and $\bb T$ the continuous torus.
Furthermore, we present similar results for the field satisfying the parabolic version of the equation above in Theorem \ref{thm:converge-of-fields-parab}.
The novelties of the paper include the construction and exploration of important properties of fractional Edgeworth expansions for stable random variables. We study the effect of having non-trivial fictional moments of higher orders and obtain sharp convergence rates by developing a repairing approach of the corresponding distributions. Besides that, we believe that the application given in Section~\ref{sec:second-order-conv} opens the possibility of similar results for general continuous objects constructed as the scaling limit of fractional Gaussian fields, such as Gaussian multiplicative chaos, parabolic Anderson model, and solutions to non-linear equations.
\subsection*{Structure of the article}
In Section \ref{sec:def}, we provide the setting and introduce necessary definitions. In Section \ref{sec:results} we state our main Theorems. The subsequent Section \ref{sec:general} contains a discussion about the results and possible generalizations in different directions. Section \ref{sec:example} deals with determining the expansion of the characteristic function for an explicit example of a long-range random walk and showing that it falls into the class $\mathcal{A}$ we consider in this article. Section \ref{sec:lclt} contains all proofs regarding LCLT's and in Section \ref{sec:Green} we demonstrate estimates on the discrete Green functions/potential kernels. Finally, Section~\ref{sec:second-order-conv} deals with fluctuations of the scaling limits of fractional Gaussian fields. Some technical lemmas are postponed to the Appendix.
\subsection*{Acknowledgments} The authors would like to thank S. Fr\'ometa for conversations about early versions of this article. We would also like to thank the anonymous referees, whose suggestions helped to significantly improve this article. L. Chiarini was financially supported by CAPES and the NWO grant OCENW.KLEIN.083. M. Jara was funded by the ERC Horizon 2020 grant 715734, the CNPq grant 305075/2017-9 and the FAPERJ grant E-29/203.012/201.
\section{Definitions}\label{sec:def}
In this section, we will introduce all necessary notation and define the main objects. We will denote by $\bb T = (-\pi,\pi]$ the one-dimensional torus. Given $z \in \bb R$ and $r >0$, we write $B_r(z)$ to denote the interval $(z-r/2,z+r/2]$ around $z$ with length $r$. For $f,g:\mathbb{R} \rightarrow \mathbb{R}$ we write \[ f(x) \lesssim g(x) \] if there exists a constant $C>0$, which does not depend on $x$, such that $f(x) \leq C g(x)$, analogously for $\gtrsim$.
The functions $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and ceiling functions, respectively. We will write $\bb Z^+:=\{0\}\cup \bb N$.
Given finite sets of positive real numbers $A,B \subset \bb R^+$, we define its sum by
\[
A+B :=\left\{ a+b: a \in A, b \in B \right\}, \]
and
\[
\spann ( A ) := \left\{ \sum_{a \in A} l_a a : l_a \in \bb Z^+, a \in A \right\}. \]
Let $C^{k,\gamma} (\bb R)$ denote the space of functions on $\bb R$ with $k \ge 0$ derivatives such that the $k$-th derivative is $\gamma$-H\"older continuous with $\gamma \in (0,1]$. We will denote by $C^{k,\gamma} (\bb T)$ the subspace of $C^{k,\gamma} (\bb R)$ composed of $2\pi$-periodic functions. The notation $f \in C^{k,\gamma-} (\bb T)$ will be used for $f \in C^{k,\gamma- \varepsilon}(\bb T)$ for $\varepsilon \in (0,\gamma)$ sufficiently small. Similarly we will use the short notation $f(x) = \mc O (|x|^{\beta \pm })$ for $f(x) = \mc O(|x|^{\beta \pm \varepsilon})$ for all $\varepsilon >0$ sufficiently small, where $\mc O (\cdot)$ is the standard big-O notation.
We will call $\mc F(f) $ the Fourier transform of $f$ given by \[
\mc F (f)(\theta) := \int_{\bb R} f(x) e^{i \theta \cdot x} dx \] for $\theta \in \bb R$ resp.~$\mc F_{\bb T}$ for $k\in \bb N$ \[
\mc F_{\bb T} (f) (k) := \int_{\bb T} f(x) e^{i k \cdot x} dx. \]
Let $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~integer-valued random variables defined on some common probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Denote by $p_X(\cdot)$ the probability distribution of $X$, with support in $\bb Z$.
We write shorthand $X$ instead of $X_i$ when we refer to one single random variable. Call $S_n:=\sum_{i=1}^n X_i$ its sum and abbreviate by $p^n_X(\cdot)$ the corresponding probability distribution. Denote by
\[
\phi_X(\theta) := \bb E\Big[ e^{i \theta \cdot X} \Big], \, \, \, \theta \in \mathbb{R}
\] its common characteristic function.
\begin{definition}\label{def:stable} For all $\alpha \in (0,2)\setminus \{1\}$, $\beta \in [-1,1]$, $\gamma \in (0,\infty)$, and $\varrho \in (-\infty,\infty)$, call $\bar{X}$ the stable random variable with parameters $(\alpha,\beta,\gamma,\varrho)$ if its characteristic function is given by \begin{align*}
\phi_{\alpha,\beta,\gamma,\varrho}(\theta)=\exp \left(i \theta \varrho-|\gamma \theta|^{\alpha}\left(1-i \beta \sgn(\theta) \tan\left(\frac{\pi\alpha}{2}\right)\right)\right), \end{align*} for every $\theta \in \bb R$. \end{definition} We will call $\bar{X}$ a \textit{symmetric stable random variable} with index $\alpha \in (0,2)$ for short, if
\begin{equation}\label{eq:def-symmetric-cont}
\phi_{\bar{X}} (\theta) = e^{-\mu_{\alpha} |\theta|^{\alpha}}. \end{equation}
Observe that in that case, $\gamma$ satisfies $\gamma=(\mu_{\alpha})^{1/\alpha}$. Let $p_{\bar{X}}(\cdot)$ denote its density and $p^n_{\bar{X}}(\cdot)$ its $n$-fold convolution.
For $\alpha = 1$, we will only consider the case $\beta=0$, given by the expression
\begin{align*}
\phi_{1,0 ,\gamma,\varrho}(\theta)=\exp \left(i \theta \varrho-|\gamma \theta|\right). \end{align*} In the following let us define the class of random variables which we will consider in this article.
\begin{definition} \label{regularity} Let $\alpha \in (0,2]$ and let $R_\alpha \subset (\alpha,2+\alpha)\cup \left(\bb N \cap (2+\alpha)\right)$ be a finite set. Denote by $p_X(\cdot)$ the probability distribution of a random variable $X$ with support in $\bb Z$. We say that $X$ admits a fractional Edgeworth expansion with index $\alpha$ and regularity set $R_\alpha$, if its corresponding characteristic function $\phi_X(\theta)$ satisfies the following expansion \begin{align} \label{def:char-alpha}
\phi_X(\theta) = 1
- \mu_\alpha |\theta|^\alpha
+ i\mu^\prime_\alpha \sgn(\theta) |\theta|^\alpha
+ \sum_{\beta \in R_\alpha} \mu_\beta |\theta|^{\beta}
+ \sum_{\beta \in R_\alpha} i\mu^\prime_\beta \sgn(\theta)|\theta|^{\beta}
+ \mc O\Big(|\theta|^{2+\alpha}\Big) \end{align}
as $|\theta| \longrightarrow 0$, for constants $\mu_\alpha >0$ and $\mu^\prime_\alpha, \mu_\beta, \mu^\prime_\beta \in \bb R$, for all $\beta \in R_\alpha$. For $\alpha=1$, we further require the law of $p_X(\cdot)$ to be symmetric. The constants $\mu_\beta,\mu_\beta^\prime$ are referred to as the \textit{fictional moments of order $\beta$ of $p_X(\cdot)$}. Equivalently we will also simply say that $p_X(\cdot)$ is \textit{admissible} or $p_X \in \mc A$. \end{definition}
We will always assume that the set $R_\alpha$ is optimal, i.e, $|\mu_\beta|+|\mu^\prime_\beta|>0$ for all $\beta \in R_\alpha$. It is important to recall that the constants $\mu_{\alpha}, \mu^\prime_\alpha,\mu_\beta,\mu^\prime_\beta$, given in the definition above, depend on the law of $p_X(\cdot)$.
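For instance, if $p_X(\cdot)$ is symmetric, then $\phi_X$ is real and even, so all the odd (imaginary) coefficients in \eqref{def:char-alpha} vanish:
\[
\mu^\prime_\alpha=\mu^\prime_\beta=0 \quad \text{for all } \beta\in R_\alpha;
\]
this is the situation for our main example below.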
Using the expansion given in \eqref{def:char-alpha} and the Taylor polynomial of $\log (1+t)$ for $|t|<1$, setting $t:=\phi_X(\theta)-1$, we get that $\phi_X(\cdot)$ can be written as
\begin{align} \label{def:char-alpha-2}
\phi_X(\theta) = e^{-\mu_\alpha |\theta|^\alpha
+
r_X(\theta)
+\mc O(|\theta|^{2+\alpha})},\quad\text{as } |\theta|\longrightarrow 0, \end{align}
where
\[
r_{X}(\theta)
=
\sum_{\beta \in J_\alpha} \kappa_\beta|\theta|^{\beta}
+
\sum_{\beta \in \{\alpha\} \cup J_\alpha} i \sgn(\theta) \kappa_\beta^\prime |\theta|^{\beta}, \]
and the coefficients $\kappa_\beta$ are combinations of coefficients coming from the expansion of the logarithm and the powers $|\theta|^{\alpha}$
resp.~$|\theta|^{\beta}$.
We will refer to $\kappa_\beta$ for all $\beta \in J_\alpha$ as the \textit{fractional cumulants of} $p_X(\cdot)$.
Furthermore, let \begin{equation}\label{def:Jalpha}
J_\alpha := \spann (R^+_{\alpha} ) \cap (\alpha,2+\alpha), \end{equation} where $R^+_{\alpha} := R_\alpha\cup \{\alpha\}$. In a similar way we define $J_\alpha^+:= J_\alpha \cup \{2+\alpha\}$.
Remark that if $R_{\alpha}=\emptyset$ we have that $J_{\alpha} = \alpha \bb N \cap (\alpha, 2+\alpha)$ and therefore, in general we have $\beta_1 :=\min (J^+_{\alpha}) \leq 2\alpha$.
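To illustrate the bookkeeping, consider a symmetric $p_X \in \mc A$ with $\alpha\in(1,2)$ and $R_\alpha=\{2\}$ (the computation for general $R_\alpha$ is analogous). Then $J_\alpha=\{2,2\alpha\}$, and expanding the logarithm gives
\[
\log \phi_X(\theta)
= -\mu_\alpha|\theta|^{\alpha}+\mu_2|\theta|^{2}-\frac{\mu_\alpha^{2}}{2}\,|\theta|^{2\alpha}
+\mc O\big(|\theta|^{2+\alpha}\big),
\]
so that $\kappa_2=\mu_2$ and $\kappa_{2\alpha}=-\mu_\alpha^2/2$, while the cross term $\mu_\alpha\mu_2|\theta|^{2+\alpha}$ and all higher terms are absorbed into the error.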
Our \textit{regularity set} $R_{\alpha}$ is a finite collection of powers of $|\theta|$ in the expansion of the characteristic function, up to orders which are strictly smaller than $2+\alpha$.
Our main example of an admissible distribution with index $\alpha \in (0,2)$ and $R_{\alpha}=\{2\}$ is given by \begin{equation} \label{def:palpha}
p_{\alpha}(x) :=
\begin{cases}
c_\alpha |x|^{-(1+\alpha)}, &\text{ if } x \neq 0,\\
0,&\text{ if } x=0,
\end{cases} \end{equation} where $c_{\alpha}$ is the normalising constant. We will discuss this example and many others in Section \ref{sec:example}.
An example of a distribution which is \textit{not admissible} is $p_\alpha(\cdot)$, defined in \eqref{def:palpha} with $\alpha=2$. In fact, in this case the characteristic function has the expansion \[
\phi_X(\theta) = 1- \mu_2|\theta|^2 \log (|\theta|) + \mc O (|\theta|^2). \] In \cite{nandori2011recurrence}, the author discusses some properties of this particular example including its recurrence and LCLT estimates.
In order to explore the idea of improving rates of convergence of a given random variable, we will concentrate on a particular subset of admissible distributions. We will subdivide the class of admissible distributions into a subclass with regularity sets $R_{\alpha}\in \{ \varnothing, \{2\}\}$ and a subclass with general $R_{\alpha}$. The first subclass will be further subdivided into three classes which have different asymptotic behaviour as $n \rightarrow \infty$.
\begin{definition} Let $X$ be a symmetric random variable such that $p_X \in \mc A$ and $R_\alpha \in \{\emptyset,\{2\}\}$. Then we say that $p_X(\cdot)$ belongs to one of the following three classes: \begin{enumerate}
\item[(i)] \textit{repaired} if $R_\alpha =\emptyset$
\item[(ii)] \textit{locally repairable} if $R_\alpha=\{2\}$ and $\mu_2 >0$
\item[(iii)] \textit{asymptotically repairable} if $R_\alpha=\{2\}$ and $\mu_2 <0$. \end{enumerate} \end{definition}
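As an illustration, for the walk $p_\alpha(\cdot)$ from \eqref{def:palpha} the heuristic constants sketched in the introduction give $\mu_2=-\zeta(\alpha-1)/(2\zeta(1+\alpha))$; since $\zeta$ is negative on $(-1,0)\cup(0,1)$ while $\zeta(1+\alpha)>1$, this suggests $\mu_2>0$ for every $\alpha\in(0,2)\setminus\{1\}$, i.e.\ that $p_\alpha(\cdot)$ is locally repairable (the rigorous computation is carried out in Section~\ref{sec:example}).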
A \textit{locally repairable} probability distribution $p_X(\cdot)$ can be \textit{repaired} by convolving it with a simple discrete random variable with variance $2|\mu_2|$ which plays the part of a \textit{repairer}. Analogously, we can repair an \textit{asymptotically repairable} probability distribution $p_X(\cdot)$. This repairing is not performed on $p_X(\cdot)$ itself. Instead, we repair its asymptotic distribution $p_{\bar{X}}(\cdot)$ by convolving $\bar{X}$ with a normal random variable with variance $2|\mu_2|$. In both cases, the aim is to change either the original random variable $X$ or its stable limit $\bar{X}$ in order to cancel the contribution from $\mu_2$.
\begin{definition} \label{def:repairer} Let $p_X(\cdot)$ be admissible with index $\alpha \in (0,2)$ with regularity set $R_{\alpha}\in \{ \varnothing, \{2\}\}$ and let $\mu_2$ be the constant defined in the expansion of $\phi_X(\cdot)$. \begin{itemize} \item[(i)] If $p_X(\cdot)$ is locally repairable, we call the repairer an
independent random variable $Z$ with probability distribution given by
\begin{equation} \label{eq:def-repairer}
p_Z(x) =
\begin{cases}
\frac{\mu_2}{M^2}, & \text{ if } |x|=M\\
1-\frac{2\mu_2}{M^2}, & \text{ if } x=0\\
0, & \text{ otherwise,}
\end{cases} \end{equation}
where $M = \lceil \sqrt{2 \mu_2}\rceil \in \bb N$.
\item[(ii)] If $p_X(\cdot)$ is asymptotically repairable, we call an asymptotic repairer a random variable $\bar{Z}$ such that $\bar{Z} \sim \mathcal{N}(0, 2|\mu_2|)$. We will assume that $\bar{Z}$ is independent of $\bar{X}$, whose characteristic function is given by \eqref{eq:def-symmetric-cont}.
\end{itemize} \end{definition} By construction, the characteristic function of a repairer $Z$ has the expansion
\begin{align*}
\phi_{Z}(\theta)
&=
1- \mu_2 |\theta|^2 + \mc O (\theta^4), \qquad\text{ as } |\theta| \longrightarrow 0. \end{align*}
It is easy to see that $p_{X+Z}(\cdot) = p_X*p_Z(\cdot)$ is in fact repaired. The asymptotic repairer $\bar{Z}$ is such that the characteristic function of $\bar{X}+\bar{Z}$ is equal to
\[
\phi_{\bar{X}+ \bar{Z}}(\theta) = e^{-\mu_{\alpha} |\theta|^{\alpha} -\mu_2 |\theta|^2}. \]
Note that in both cases we do not change the limiting distribution of $n^{-1/\alpha} S_n$. Indeed, this modification will introduce an error of order $\mathcal{O}(n^{1- \frac{3}{\alpha}})$ which vanishes as $n\rightarrow \infty$.
Let us remark that alternatively one could \textit{repair} by taking a convex combination as in \cite{Frometa2018}. Different repairing methods might be more convenient depending on the context.
The idea of repairing random variables is reminiscent of the \textit{Lindeberg principle}; see \cite{linde} for the original article and \cite{lindestable} for a proof of the stable law by the Lindeberg principle. In the classical setting, its main idea is to explore the effect of replacing each discrete random variable with a Gaussian one with the correct mean and variance. In our case, however, we are trying to match the discrete and the continuous random variables up to a higher-order (fictional) moment. This is done by perturbing the law of either the discrete or the continuous random variable by a random variable which is strictly more regular (in the sense that it is integrable up to a higher order). We believe that this can be extended to match not only one extra fictional moment (as in the case of the repairable random variables) but any finite Edgeworth expansion. In Section~\ref{sec:general}, we discuss this idea further.
Finally, let us define the potential kernel for a random walk whose transition probability $p_X(\cdot) := p_X(\cdot, \cdot)$ is admissible with index $\alpha \in [1,2)$ and regularity set $R_{\alpha}$.
\begin{definition} \label{def:potential-green} Let $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables such that $p_X(\cdot)$ is admissible. Call $S_n=\sum_{i=1}^n X_i$ and $p^n_X(\cdot, \cdot)$ its transition probability. If $(S_n)_{n \in \bb N}$ is recurrent, we define the \textit{potential kernel} of $p_X(\cdot)$ as
\[
a_{X}(0,x) = \sum_{n=0}^{\infty}(p_X^n(0,x) - p_X^n(0,0)), \, \, \text{ for } x\in \bb Z. \]
If the random walk $(S_n)_{n \in \bb N}$ is transient, we define the \textit{Green's function} of $p_X(\cdot)$ as
\[
g_{X}(0,x) = \sum_{n=0}^{\infty}p_X^n(0,x), \, \, \text{ for } x\in \bb Z. \]
Henceforth, we will abbreviate $p_X(x):= p_X(0,x)$. We will also write $a_X(x)=a_{X}(0,x)$ and $g_X(x)=g_{X}(0,x)$. \end{definition}
We will need a few more definitions to study the scaling limits of discrete PDEs. For $m \ge 1$, let $\bb T_m:= 2\pi \bb Z_m = 2\pi((-m/2,m/2]\cap \bb Z)$. For $f_1,f_2 \in L^2(\bb T)$, denote their inner product by
\begin{align*}
\langle f_1,f_2 \rangle:=
\langle f_1,f_2 \rangle_{L^2(\bb T)} =
\frac{1}{2\pi}
\int_{\bb T} f_1(x) \overline{f_2(x)} dx. \end{align*}
We extend this notation to the case in which $f_1$ is a distribution and $f_2$ is a suitable test function, interpreting $\langle f_1,f_2 \rangle$ as the action of $f_1$ on $f_2$. We then abuse the notation to also describe the inner product of two functions in $\ell^2(\bb T_m)$ as
\begin{align*}
\langle f_1,f_2 \rangle =
\langle f_1,f_2 \rangle_{\ell^2(\bb T_m)} =
\frac{1}{2\pi m}
\sum_{x \in \bb T_m}
f_1(x) \overline{f_2(x)}. \end{align*}
This abuse of notation is justified by the fact that if $f_1, f_2\in C(\bb T)$ and $f_i^m$ denotes the restriction of $f_i$ to $\bb T_m$, then $\langle f_1^m, f_2^m \rangle_{\ell^2(\bb T_m)}\to \langle f_1, f_2 \rangle_{L^2(\bb T)}$. Moreover, both spaces admit orthonormal bases of Fourier functions. That is, the collection $\{{\bf e}_k\}_{k \in \bb Z}$ given by ${\bf e}_k(x):= \exp\left(i k\cdot x\right)$ is an orthonormal basis of $L^2(\bb T)$, and the collection $\{{\bf e}^m_k\}_{k \in \bb Z_m}$, where ${\bf e}^m_k$ is the restriction of ${\bf e}_k$ to $\bb T_m$, is an orthonormal basis of $\ell^2(\bb T_m)$.
For $s \ge 0$, consider the Hilbert space $H^{s}(\bb T)$ induced by the following inner product
\begin{equation*}
\langle f,g \rangle_{H^{s}} :=
\langle f, g \rangle_{L^2}
+
\sum_{k \in \bb Z}
\mc F_{\bb T}(f)(k)
\overline{\mc F_{\bb T}(g)(k)}
|k|^{4s}. \end{equation*}
The subspace of functions which satisfy $\mc F_{\bb T}(f)(0)= \int_{\bb T} f(x)dx = 0 $ is denoted by $H^s_0$. When studying $H^s_0$, we will use the norm induced by
\begin{equation*}
\langle f,g \rangle_{H^{s}_0} :=
\sum_{k \in \bb Z}
\mc F_{\bb T}(f)(k)
\overline{\mc F_{\bb T}(g)(k)}
|k|^{4s}, \end{equation*}
which is equivalent to the $H^s$-norm on $H^s_0$. We also consider the space $H^{-s}$, the dual space of $H^{s}_0$, with norm given by
\begin{equation*}
\|f\|^2_{H^{-s}} :=
\sum_{k \in \bb Z\setminus \{0\}}
|\mc F_{\bb T}(f)(k)|^2
|k|^{-4s}. \end{equation*}
Finally, for any Hilbert space $H$ and $T>0$, we denote by $L^2([0,T],H)$ the Hilbert space of functions $f:[0,T] \to H$ with the norm given by
\begin{equation}\label{eq:L20T-norm}
\|f\|^2_{L^2([0,T],H)} :=
\int_{0}^T
\|f(t)\|^2_H dt, \end{equation}
where $\|\cdot\|_H$ is the norm of $H$.
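As a simple illustration of these conventions, and assuming, consistently with the pairing above, that $\mc F_{\bb T}(f)(k)=\langle f,{\bf e}_k\rangle$, note that a Dirac delta $\delta_x$ has Fourier coefficients of constant modulus $|\mc F_{\bb T}(\delta_x)(k)|=\frac{1}{2\pi}$, so that
\[
\|\delta_x\|^2_{H^{-s}}
=\frac{1}{4\pi^2}\sum_{k\in\bb Z\setminus\{0\}}|k|^{-4s}<\infty
\quad\text{if and only if}\quad s>\frac{1}{4}.
\]
This is the reason why objects built from point masses, such as the discrete fields of Section~\ref{sec:fluctuations-of-scaling-limits-of-discrete-stochastic-pdes}, are naturally measured in $H^{-s}$ for $s$ large enough.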
\section{Results}\label{sec:results}
\subsection{Local central limit theorem} In this section, we state our results regarding LCLTs for heavy-tailed i.i.d.~random variables with admissible probability distributions.
\begin{theorem} \label{thm:lclt} Let $\alpha \in (0,2)$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables with admissible law $p_X(\cdot)$. Let furthermore $p_{\bar{X}}(\cdot)$ denote the law of the $\alpha$-stable random variable satisfying Definition \ref{def:stable} to whose domain of attraction $X$ belongs. Then we have that,
\begin{equation*}
\sup_{x \in \bb Z}
|{p}^n_X(x)- p^n_{\bar{X}}(x)| \lesssim
n^{- \frac{\beta_{1}+1-\alpha }{\alpha}}, \end{equation*} where $\beta_1=\min(J^+_{\alpha})$.
\end{theorem}
\begin{corollary}\label{corol:assymp-repairable} Let $\alpha \in (0,2)$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of symmetric i.i.d.~random variables with asymptotically repairable law $p_X(\cdot)$. Let furthermore $p_{\bar{X}}(\cdot)$ denote the law of the $\alpha$-stable random variable to whose domain of attraction $X$ belongs. Then \begin{equation*}
\sup_{x \in \bb Z}
|{p}^n_{X}(x)- p^n_{\bar{X} + \bar{Z}}(x)| \lesssim
n^{- (1+\frac{1}{\alpha})}. \end{equation*} \end{corollary}
The next theorem provides a more detailed expansion of the error obtained in the previous result.
\begin{theorem}\label{prop:lclt} Let $p_X(\cdot)$ and $p_{\bar{X}}(\cdot)$ be distributions satisfying the assumptions of Theorem~\ref{thm:lclt}. Then, there exists a collection of constants $\{C_{\beta},\beta \in J_\alpha\}$ s.t. for all $x\in \bb Z$,
\begin{equation}
\left| p^n_X(x)- p^n_{\bar{X}}(x)
- \sum_{\beta \in J_\alpha}
C_\beta\frac{u_\beta\left( \frac{x}{n^{1/\alpha}} \right)}{n^{(1+\beta-\alpha)/\alpha}}
- \sum_{\beta \in J_\alpha}
C^\prime_\beta\frac{u^\prime_\beta\left( \frac{x}{n^{1/\alpha}} \right)}{n^{(1+\beta-\alpha)/\alpha}}
\right|
\lesssim
n^{-\frac{3}{\alpha}},
\label{eq:thm-lclt-beyond}
\end{equation}
\label{thm:lclt-beyond}
where
\begin{equation}\label{eq:inte}
u_\beta(x):=
\frac{1}{2\pi}
\int_{\bb R}
|\theta|^{\beta}
\phi_{\bar{X}}(\theta)
e^{-i x \theta }
d\theta
\text{ and }
u^\prime_\beta(x):=
\frac{1}{2\pi}
\int_{\bb R}
\sgn(\theta) |\theta|^{\beta}
\phi_{\bar{X}}(\theta)
e^{-i x \theta }
d\theta.
\end{equation} \end{theorem} A careful analysis of the functions $u_\beta,u^\prime_\beta$ shows that
\begin{equation}
|u_\beta(x)|,|u^\prime_\beta(x)| \lesssim
\frac{1}{|x|^{\alpha+\beta+1}}
\label{eq:bound-uj}. \end{equation}
This bound is significantly weaker than its analogue, Theorem 2.3.7 in \cite{Limic10}, which applies when $X$ has at least four finite moments. There, the integrands in \eqref{eq:inte} are given by $g_\beta(\theta):=\theta^\beta e^{-c |\theta|^2}$, and therefore the $g_\beta(\cdot)$ can be seen as Schwartz functions with rapidly decaying derivatives.
A simple triangle inequality leads us to the following corollary.
\begin{corollary}\label{coro}
Under the conditions of Theorem \ref{prop:lclt}, calling
$\beta_2 := \min ( {J}^+_{\alpha}\setminus \{\beta_1\} )$, we have that
\[
p^n_X(x)- p^n_{\bar{X}}(x)
= \sum_{\beta \in J_\alpha}
C_\beta\frac{u_\beta\left( \frac{x}{n^{1/\alpha}} \right)}{n^{(1+\beta-\alpha)/\alpha}}
+ \sum_{\beta \in J_\alpha}
C^\prime_\beta\frac{u^\prime_\beta\left( \frac{x}{n^{1/\alpha}} \right)}{n^{(1+\beta-\alpha)/\alpha}}
+ \mc O \left( n^{-\frac{3}{\alpha}} \right). \] In particular, we have that \begin{equation*}
\sup_{x\in \bb Z} \left|
p^n_X(x)- p^n_{\bar{X}}(x)
\right|
\lesssim
n^{-\frac{(\beta_1+1-\alpha)}{\alpha}} \end{equation*} and \begin{equation*}
\left|
p^n_X(x)- p^n_{\bar{X}}(x)
\right|
\lesssim \left (n^{-\frac{(\beta_2+1-\alpha)}{\alpha}} \right )
\vee
\left ( n^{2} |x|^{-(\alpha+\beta_1+1)} \right ). \end{equation*} \end{corollary}
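Let us indicate where the factor $n^{2}$ in the last bound comes from: combining \eqref{eq:bound-uj} with the normalisation $x/n^{1/\alpha}$ gives
\[
\frac{1}{n^{(1+\beta_1-\alpha)/\alpha}}
\left|u_{\beta_1}\left(\frac{x}{n^{1/\alpha}}\right)\right|
\lesssim
\frac{1}{n^{(1+\beta_1-\alpha)/\alpha}}
\cdot \frac{n^{(\alpha+\beta_1+1)/\alpha}}{|x|^{\alpha+\beta_1+1}}
= \frac{n^{2}}{|x|^{\alpha+\beta_1+1}}.
\]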
Note that from Corollary \ref{coro} we can deduce that the rate of convergence given in Theorem~\ref{thm:lclt} is optimal. Indeed, for any $\alpha \in (0,2)$, take $\beta_1 \in (\alpha,2 \wedge 2\alpha)$. Using Lemma~\ref{lem:closed-by-operations} and Proposition~\ref{prop:asymp-phi-alpha-asymmetric} for $\alpha$ and $\beta_1$, we see that $p_X(\cdot):=p_{\alpha}*p_{\beta_1}(\cdot)$ is admissible with index $\alpha$ and regularity set $\{\beta_1, 2\}$. Applying Corollary~\ref{coro}, the error term in Theorem~\ref{prop:lclt} at $x=0$ is given by
\begin{equation*}
u_{\beta_1}(0)n^{-\gamma}+ o \left(n^{-\gamma}\right) \end{equation*}
where $\gamma =\frac{\beta_1 + 1-\alpha}{\alpha}$, and $u_{\beta_1}(0)>0$.
Notice that this reinforces the idea that repairing improves the convergence behaviour. If $p_X(\cdot)$ is repaired, then $\beta_1 = \min \{ 2\alpha, 2+\alpha \} = 2\alpha$ (which is at least $2$ when $\alpha \ge 1$), leading to $\gamma = \frac{\alpha+1}{\alpha}$. For $\alpha \geq 1$, if $p_X(\cdot)$ is locally repairable, we have that $\beta_1=\min \{2, 2\alpha, 2+\alpha \}=2$. Without repairing, the best uniform bound we can get is
\begin{equation*}
\left|
p^n_X(x)- p^n_{\bar{X}}(x)
\right|
\lesssim
n^{1-3/\alpha}, \end{equation*}
which is much weaker than the bound in Theorem \ref{thm:lclt} for repaired distributions, especially for $\alpha$ close to 2. Theorem \ref{thm:lclt} states that repairing a probability distribution preserves the convergence rates. Note that for $\alpha < 1$ we have $\beta_1 < 2$, so repairing will not provide convergence bounds better than the one in Corollary \ref{coro}.
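To make the comparison concrete, take $\alpha = 3/2$: without repairing, the uniform rate above is $n^{1-3/\alpha}=n^{-1}$, while after repairing $\beta_1=2\alpha=3$, and Theorem~\ref{thm:lclt} gives the rate $n^{-(\beta_1+1-\alpha)/\alpha}=n^{-5/3}$.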
In Section \ref{sec:general}, we discuss how one could potentially repair a distribution using heavy-tailed random variables instead of random variables with finite variance.
\subsection{Potential kernel estimates for long-range random walks}\label{sec:res_pot_kernel}
Theorem \ref{thm:Green} presents potential kernel estimates for long-range random walks with admissible law $p_X(\cdot)$. It exemplifies that repairing distributions yields good potential kernel expansions. This will be proven in Section \ref{sec:Green}. Note that the results in this section hold for $\alpha \in (2/3,2)$; for further considerations on $\alpha \le 2/3$, we refer to Section \ref{sec:general}. Unfortunately, as our techniques require several cancellations, we are only able to proceed in the symmetric case.
We first treat the cases $\alpha \in (1,2)$ and $\alpha=1$ separately for the subclass described by $R_{\alpha}\in \{ \varnothing, \{2\}\}$. To start, we give bounds for repaired distributions when $\alpha \in (1,2)$, where we obtain an expansion with an error which vanishes as $|x|\rightarrow \infty$. After that, we compute all terms of the expansion for locally and asymptotically repairable distributions up to order $\mc O (1)$. Then, we present the general admissible case, in which we obtain the first and second terms of the expansion, which depend on $\delta:=\min(R_\alpha)$. Finally, we look at the case $\alpha \in (2/3,1)$.
\begin{theorem}\label{thm:Green} Let $\alpha \in (1,2)$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables with common admissible distribution $p_X(\cdot)$ with index $\alpha$ and regularity set $R_{\alpha}\in \{ \varnothing, \{2\}\}$. \begin{itemize} \item[(i)] Assume that $p_X(\cdot)$ is repaired, then there exist constants
$C_{0}, C_{\alpha} \in \bb R$ such that
\[
a_X(x)= C_\alpha |x|^{\alpha-1} + C_0+
\mc O(|x|^{\frac{\alpha-2}{3}+})
\]
as $|x| \to \infty$, where
\[
C_\alpha = \frac{1}{\pi \mu_\alpha}
\int_{0}^\infty \frac{\cos (\theta)-1}{\theta^\alpha}d\theta
\]
and
\[
C_{0}
= - \frac{\pi^{1-\alpha}}{2 \pi \mu_\alpha(\alpha-1)}+
\frac{1}{\pi} \int_{0}^{\pi}
\frac{\phi_X (\theta) -\left( 1-\mu_\alpha \theta^\alpha \right)}{ \mu_\alpha \theta^\alpha (1- \phi_X(\theta))}
d\theta.
\] \item[(ii)] Assume that $p_X(\cdot)$ is locally or asymptotically repairable. Let $m_\alpha:= \lceil \frac{\alpha-1}{2-\alpha}\rceil-1$, then there
exist constants $C^\prime_0,C_1,\dots,C_{m_\alpha+1}$ such that
\[
a_X(x)= C_\alpha |x|^{\alpha-1}
+
\sum_{m=1}^{m_\alpha} C_m |x|^{(\alpha-1) - m (2-\alpha)}
+
C^\prime_0 \log{|x|}
+
\mc O(1)
\]
as $|x| \to \infty$, where for $1 \le m \le m_\alpha+1$
\[
C_m := \frac{\mu_2^m}{\pi \mu_\alpha^{m+1}}
\int_{0}^\infty \theta^{m(2-\alpha)-\alpha} (\cos(\theta)-1 ) d\theta,
\]
and the sum is zero if $m_\alpha =0$. Moreover,
\begin{equation*}
C^\prime_0:=
\begin{cases}
0,& \text{if } \frac{2}{2-\alpha} \not \in \bb N \\
C_{m_\alpha+1},& \text{if } \frac{2}{2-\alpha} \in \bb N.
\end{cases}
\end{equation*} \end{itemize} \end{theorem}
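For concreteness: for $\alpha=3/2$ one has $m_\alpha=\lceil 1\rceil-1=0$, so the polynomial sum in (ii) is empty, while $\frac{2}{2-\alpha}=4\in\bb N$, so the logarithmic term is present with $C^\prime_0=C_{1}$. For $\alpha=17/10$, instead, $m_\alpha=\lceil 7/3\rceil-1=2$, so two polynomial correction terms appear, and $\frac{2}{2-\alpha}=\frac{20}{3}\notin\bb N$, so $C^\prime_0=0$.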
Note that $m_\alpha \rightarrow \infty$ as $\alpha \rightarrow 2$; therefore, repairing (whenever possible) becomes more relevant for values of $\alpha$ close to $2$. The following theorem treats the general admissible case.
\begin{theorem}\label{thm:Green-adm} Let $\alpha \in (1,2)$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables with common admissible distribution $p_X(\cdot)$ with index $\alpha$ and regularity set $R_\alpha$. Let $\delta := \min( R_\alpha )$ and
\[
C_\alpha = \frac{1}{\pi \mu_\alpha}
\int_{0}^\infty \frac{ \cos (\theta) -1}{\theta^\alpha}d\theta.
\] \begin{itemize} \item[(i)] If $\delta < 2\alpha-1$, then there
exists a constant $C_{\delta}$ such that
\[
a_X(x)= C_\alpha |x|^{\alpha-1}
+
C_{\delta} |x|^{2\alpha-\delta-1}
+ o (|x|^{2\alpha-\delta-1})
\]
as $|x| \to \infty$, where
\[
C_{\delta}= \frac{\mu_{\delta}}{\pi \mu_\alpha}
\int_{0}^\infty \theta^{\delta-2\alpha} (\cos(\theta) -1) d\theta.
\] \item[(ii)] If $\delta > 2\alpha-1$, then there
exists a constant $C_0$ such that
\[
a_X(x)= C_\alpha |x|^{\alpha-1}
+C_0+
o(1)
\]
as $|x| \to \infty$, where
\[
C_{0}
= -\frac{\pi^{1-\alpha}}{2 \pi \mu_\alpha(\alpha-1)}+
\frac{1}{\pi} \int_{0}^{\pi}
\frac{\phi_X (\theta) -\left( 1-\mu_\alpha \theta^\alpha \right)}{ \mu_\alpha \theta^\alpha (1- \phi_X(\theta))}
d\theta.
\] \item[(iii)] If $\delta = 2\alpha-1$, then there
exists a constant $C_{\delta}$ such that
\[
a_X(x)= C_\alpha |x|^{\alpha-1} +
C_{\delta}\log |x| + \mc O (1)
\]
as $|x| \to \infty$, where
\[
C_{\delta}:= \frac{\mu_{\delta}}{\pi \mu_\alpha} \int_{0}^\pi \frac{\cos(\theta)-1}{\theta} d\theta.
\]
\end{itemize} \end{theorem}
Finally, we include the result for the potential kernel for $\alpha=1$, when $R_{\alpha}\in \{ \varnothing, \{2\}\}$.
\begin{theorem} \label{thm:green-1}
Let $\alpha =1$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables with common admissible law $p_X(\cdot)$ and $R_{\alpha}\in \{ \varnothing, \{2\}\}$.
Then
\[
a_{X}(x) = - \frac{1}{\pi \mu_1} \log(|x|)
+ C_0 +
o \left( 1 \right),
\]
where
\[
C_0:=
\frac{\gamma + \log (\pi)}{\pi\mu_1}
+
\frac{1}{\pi}
\int_{0}^{\pi} \left( \frac{1}{1-\phi_X(\theta) } - \frac{1}{\mu_1 |\theta|} \right)d\theta
\]
and $\gamma$ is the Euler-Mascheroni constant.
Additionally, if $p_X(\cdot)$ is repaired, we have that the term $o (1)$ is in
fact of order $\mc O \left( |x|^{-\frac{1}{3}+} \right)$. \end{theorem}
\begin{theorem} \label{thm:green-less1}
Let $\alpha \in (2/3,1)$ and $(X_i)_{i\in \mathbb{N}}$ be a sequence of i.i.d.~random variables with common repaired law $p_X(\cdot)$ and $R_{\alpha} = \varnothing$.
Then
\[
g_{X}(x) = C_\alpha |x|^{\alpha-1} +
\mc O (|x|^{-\alpha \frac{2-\alpha}{2+\alpha}-}).
\] \end{theorem}
\subsection{Fluctuations of discrete stochastic PDEs} \label{sec:fluctuations-of-scaling-limits-of-discrete-stochastic-pdes}
As an application of our results, in this section we study second-order fluctuations of fractional Gaussian fields.
We will work on the torus $\bb T=(-\pi,\pi]$. To keep the definitions consistent with the literature on fractional Gaussian fields, in this section we will restrict ourselves to random walks with symmetric transition probabilities.
Let $\Xi_s=(\Xi_s(x))_{x\in \bb T}$ denote the so-called fractional Gaussian field of order $s \in \bb R$ (or $s$-FGF for short) on the torus, i.e., the solution of the elliptic equation \begin{equation}\label{eq:def-continuous-field}
\begin{cases}
-(-\Delta)^{s/2}_{\bb T}\Xi_{s}(x) = \xi(x) - \langle \xi, 1 \rangle & \text{ if } x \in \bb T \\
\phantom{-} \int_{\bb T} \Xi_{s}(x) dx =0,
\end{cases} \end{equation}
where $\xi=(\xi(x))_{x\in \bb T}$ is the white noise defined by $\langle\xi, f\rangle \sim \mathcal{N}(0,\|f\|^2_{L^2(\bb T)})$ for $f\in C^{\infty}(\bb T)$, and $(-\Delta)_{\bb T}^{s/2}$ is the fractional Laplacian on the torus, defined in terms of the Fourier transform by
\begin{equation*}
\mc F_{\bb T}( (-\Delta)^{s/2}_{\bb T} f)(\theta)
=
|\theta|^{s}\,\mc F_{\bb T}(f)(\theta), \end{equation*}
for a given function $f \in C^\infty(\bb T)$. We can then extend this operator to any distribution on $\bb T$ via duality. For more information on fractional Gaussian fields in a more general context, we suggest \cite{Lodhia2016}.
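Under these conventions, a formal computation identifies the law of $\Xi_s$ through its Fourier coefficients: for $k \neq 0$, equation \eqref{eq:def-continuous-field} reads $-|k|^{s}\,\hat{\Xi}_s(k)=\hat{\xi}(k)$, where $\hat{\xi}(k):=\langle \xi,{\bf e}_k\rangle$ is a centred Gaussian with $\bb E|\hat{\xi}(k)|^2=\|{\bf e}_k\|^2_{L^2(\bb T)}=1$. Consequently,
\[
\bb E \|\Xi_s\|^2_{H^{-r}}
= \sum_{k \in \bb Z\setminus\{0\}} |k|^{-2s-4r},
\]
which is finite whenever $2s+4r>1$; in particular, $\Xi_s$ is a genuine random distribution in $H^{-r}(\bb T)$ for every $r>(1-2s)/4$.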
We will compare this field with its discrete counterpart. To do so, let $p_X(\cdot)$ be the transition probability of a symmetric admissible random walk on $\bb Z$ with index $\alpha$; by symmetry, the constants $\mu_\beta^\prime$ vanish. In particular, let us write the characteristic function of $X$ as
\begin{equation}\label{eq:char-func-for-fields}
\phi_X(\theta)=1-\mu_\alpha |\theta|^\alpha+ \mu_\beta |\theta|^\beta +\mc O(|\theta|^\gamma), \end{equation}
for $\alpha, \beta, \gamma$ satisfying
\begin{equation}\label{eq:range-field}
\alpha \in (0,2], \alpha < \beta < \gamma \le 2+\alpha,\text{ and }\beta <\alpha +1. \end{equation}
We embed such distributions on the discrete torus $\bb T_m:= 2\pi \bb Z_m = 2\pi((-m/2,m/2]\cap \bb Z)$ by defining
\begin{equation}\label{eq:periodisation}
p^{(m)}(x,y) = p_{X,m}(0,y-x):= \sum_{z \in \bb Z} p_X\left(\frac{y-x + 2\pi m z}{2\pi}\right), \end{equation} for $x,y \in \bb T_m$. From this, we construct $\mc L^{m}$ the generator of the random walk, that is,
\begin{equation}\label{eq:def-generator}
\mc L^m f(x): = m^\alpha\sum_{y \in \bb T_m} p^{(m)}(x,y) (f(y)-f(x)), \end{equation}
where the factor $m^\alpha$ is precisely the scaling required for $\mc L^{m}$ to converge to $-\mu_\alpha (-\Delta)_{\bb T}^{\alpha/2}$ as $m\rightarrow \infty$.
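At a purely formal level, ignoring the periodisation in \eqref{eq:periodisation}, this scaling can be read off the expansion \eqref{eq:char-func-for-fields}: acting on the Fourier mode ${\bf e}^m_k$, the generator produces the symbol
\[
m^\alpha\big(\phi_X(k/m)-1\big)
= -\mu_\alpha|k|^\alpha + \mu_\beta |k|^\beta m^{\alpha-\beta} + \mc O\big(|k|^\gamma m^{\alpha-\gamma}\big),
\]
which converges, for each fixed $k$, to $-\mu_\alpha|k|^\alpha$, the Fourier multiplier of $-\mu_\alpha(-\Delta)^{\alpha/2}_{\bb T}$. The subleading term of order $m^{\alpha-\beta}$ is precisely the source of the fluctuations studied below.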
Now, for a given $m \ge 1$, we define the discrete fractional Gaussian field $h^m=(h^m(x))_{x\in \bb T_m}$ as the solution to
\begin{equation}\label{eq:def-discrete-field-1}
\begin{cases}
\mc L^{m}h^m(x) = \xi^m(x) - \frac{1}{m}\sum_{z \in \bb T_m} \xi^m (z), & \text{ if } x\in \bb T_m \\
\sum_{x \in \bb T_m}h^m(x)=0,
\end{cases} \end{equation}
where $\{\xi^m(x)\}_{x \in \bb T_m}$ is a collection of i.i.d. $\mc N(0,1)$-random variables. We can then associate a distribution $\Xi^m$ to $h^m$ on $\bb T_m$ by
\begin{equation}\label{eq:def-discrete-field-2}
\Xi^m:= m^{-1/2}\sum_{x \in \bb T_m} \delta_x h^m(x), \end{equation}
where $\delta_x$ denotes the Dirac delta function at $x$.
\begin{theorem}\label{thm:converge-of-fields-elip} Let $h^m$ be the solution of \eqref{eq:def-discrete-field-1}, let $\Xi^m$ be defined in \eqref{eq:def-discrete-field-2}, and let the parameters $\alpha, \beta, \gamma$ satisfy \eqref{eq:range-field}. Then, we can couple $\Xi^m$ and $\Xi_\alpha$ in such a way that
\begin{equation}\label{eq:thm-conv-fields}
m^{\beta-\alpha} \frac{(\mu_\alpha)^2}{\mu_\beta}
\left(\Xi^m - \frac{1}{\mu_\alpha}\Xi_\alpha\right)
\longrightarrow
\Xi_{2\alpha-\beta}
\text{ in probability in }
H^{-s}(\bb T), \end{equation}
for all $s > s_0:= \max\{(3+2\alpha)/4, (\alpha-\beta)/2 + (\beta-\alpha)\vee (\gamma-\beta)\}$ as $m\rightarrow \infty$. \end{theorem}
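The scaling $m^{\beta-\alpha}$, the constant $(\mu_\alpha)^2/\mu_\beta$ and the limiting field can all be guessed from a formal computation in Fourier variables, coupling the discrete and continuous noises so that they share the coefficients $\hat{\xi}(k)$. Using the expansion of the symbol of $\mc L^m$ above, for $k \neq 0$,
\[
\hat{\Xi}^m(k)
\approx \frac{\hat{\xi}(k)}{-\mu_\alpha|k|^\alpha+\mu_\beta|k|^\beta m^{\alpha-\beta}}
= -\frac{\hat{\xi}(k)}{\mu_\alpha|k|^\alpha}
\left(1+\frac{\mu_\beta}{\mu_\alpha}|k|^{\beta-\alpha}m^{\alpha-\beta}+\dots\right),
\]
and since $\hat{\Xi}_\alpha(k)=-\hat{\xi}(k)|k|^{-\alpha}$, one finds
\[
m^{\beta-\alpha}\frac{(\mu_\alpha)^2}{\mu_\beta}
\left(\hat{\Xi}^m(k)-\frac{1}{\mu_\alpha}\hat{\Xi}_\alpha(k)\right)
\approx -\frac{\hat{\xi}(k)}{|k|^{2\alpha-\beta}}
= \hat{\Xi}_{2\alpha-\beta}(k),
\]
in agreement with \eqref{eq:thm-conv-fields}. The actual proof has to control the periodisation and the discretisation of the noise, which is where the threshold $s_0$ arises.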
Similarly, we will describe the parabolic counterpart of the result above. Consider $(\zeta^m(t,x))_{t\geq 0, x\in \bb T_m}$ to be the solution of the discrete stochastic heat equation (SHE)
\begin{equation}\label{eq:parabolic-discrete}
\begin{cases}
d\zeta^m(t,x)
=
\mc L^{m} \zeta^m(t,x) dt + d\xi^m(t,x), & t>0, x\in \bb T_m \\ \zeta^m(0,\cdot):=\zeta^m_0:= Z_0 \mid_{\bb T_m}, & t=0
\end{cases} \end{equation}
where $(\xi^m(\cdot,x))_{x \in \bb T_m}$ is a family of independent Brownian motions indexed by $\bb T_m$, $Z_0$ is a smooth function on the continuous torus, and $Z_0 \mid_{\bb T_m}$ is its restriction to the discretised torus $\bb T_m$.
Again, we compare it with its continuous counterpart given by
\begin{equation}\label{eq:parabolic-conti}
\begin{cases}
dZ_\alpha(t,x)
=
-\mu_\alpha(-\Delta)^{\alpha/2}Z_\alpha(t,x)dt + d\xi(t,x), & t>0, x\in \bb T \\ Z_\alpha(0,\cdot):=Z_0(x), & t=0
\end{cases} \end{equation}
which satisfies
\begin{align*}
\hat{Z}_\alpha(t,k) :=
\hat{Z}_0(k)e^{-\mu_\alpha|k|^\alpha t} +
\int_{0}^t e^{-\mu_\alpha|k|^\alpha (t-s)}
\hat{\xi}(ds,k), \end{align*}
where the Fourier transform is taken only in the space coordinate. Consider the field
\begin{equation}\label{eq:parabolic-discrete-field}
Z^m(t,\cdot)
=
m^{-1/2}
\sum_{x \in \bb T_m}
\zeta^m(t,x)\delta_x. \end{equation}
Then, the following theorem holds.
\begin{theorem}\label{thm:converge-of-fields-parab} Let $\zeta^m$ solve \eqref{eq:parabolic-discrete} and $Z^m$ be defined in Equation \eqref{eq:parabolic-discrete-field}. Then, we can couple $Z^m$ and $Z_\alpha$ in such a way that for every $T>0$,
\begin{equation}\label{eq:conv-fields-parab}
m^{\beta-\alpha}
\left(
Z^m - Z_{\alpha}
\right)
\longrightarrow
Z_{Err}
\text{ in probability in }
L^2([0,T],H^{-s}) \end{equation}
for all $s > s_0:= \max \{2\beta-\alpha,\gamma-\alpha\}$ as $m\rightarrow \infty$, and where the field $Z_{Err}$ can be characterised by its Fourier coefficients
\begin{equation}\label{eq:Fourier-of-parab-error}
\hat{Z}_{Err}(t,k):=
-\mu_\beta|k|^\beta\hat{Z}_0(k)e^{-\mu_\alpha |k|^\alpha t}t
- \mu_\beta\int_{0}^t e^{-\mu_\alpha |k|^\alpha (t-s)} |k|^\beta (t-s)\hat{\xi}(ds,k), \end{equation}
and
\begin{equation*}
\hat{Z}_{Err}(t,0)\equiv 0, \end{equation*}
where $(\hat{\xi}(\cdot,k))_{k \in \bb Z \setminus \{0\}}$ is a family of independent Brownian motions. \end{theorem}
Notice that this is \textbf{not} the solution of a linear stochastic differential equation. In fact, $Z_{Err}(t,\cdot)\to 0$ as $t\to 0$ for any initial condition. However, the initial condition has a non-trivial influence on $Z_{Err}(t,\cdot)$ for positive times, showing that the characterisation of the fluctuations is not simple.
A natural follow-up question concerns the fluctuations around non-linear equations. We will discuss ideas that could be used to analyse such cases in the next section.
\begin{remark} To evaluate fluctuations, we used the norm $H^{-s}$ in the space variable, which is not sensitive to the $0$-th Fourier coefficient of the test function. However, notice that due to our choice of discretisation, $\widehat{Z}^m(t,0)\equiv\hat{Z}(t,0)$ for every $m$, and therefore $\hat{Z}_{Err}(t,0)\equiv 0$. A similar consideration applies to the elliptic case. \end{remark}
\section{Discussion and generalisations of our results} \label{sec:general}
In this section we quickly discuss possible generalisations and limitations of our results and techniques. \subsection*{Admissible distributions, regularity sets $R_{\alpha}$ and error terms}
As mentioned in the introduction, the order $2+\alpha$ is chosen because it appears naturally in the examples we study, see Section~\ref{sec:example}. If one is only interested in sharp convergence rates for the LCLT, expansions up to an error term of order $\mc O (|\theta|^{2\alpha})$ are enough; all of our other results, however, benefit from the higher-order terms. Regarding the potential kernel estimates in Section \ref{sec:res_pot_kernel}, choosing an error of order $\mc O(|\theta|^{2+\alpha})$ improves the expansion compared to choosing $\mc O (|\theta|^{2\alpha})$. Furthermore, if we assumed Edgeworth expansions to orders beyond $2+\alpha$, we would also be able to generalise our results.
Remark that random variables with support in $\bb Z$ and finite fourth moment have an admissible distribution with index $\alpha=2$ and $R_{\alpha} \subset \{1,2,3\}$. Both LCLT and potential kernel estimates for such random variables are well understood, see \cite{Limic10}. For this reason, we concentrate on the case $\alpha \in (0,2)$.
The class of admissible probability distributions is closed under natural operations such as convex sum and convolution, see Lemma~\ref{lem:closed-by-operations}.
\subsection*{Higher dimensions and asymmetric Green's functions}
We can use an approach similar to the one used here - together with the multidimensional version of the Euler-Maclaurin formula (see \cite{karshon2003euler}) and Faà di Bruno's formula - to show that the random variable $X$ with distribution $p_X(x)=c_{\alpha,d}|x|^{-(d+\alpha)} \1_{x \neq 0}$ admits a fractional Edgeworth expansion in any dimension $d$.
Although we can obtain such expansions, we are not able to use the same analysis to control the signs of the constants $\mu_{1+\alpha}$ and $\mu_{2}$ that appear in the expansion, and therefore we cannot use the techniques given here to improve rates of convergence and related results. However, both the LCLT and the fluctuation results generalise to this case with almost no additional changes.
Unfortunately, our results do not generalise to Green's function estimates for $d\ge 2$ and $\alpha \in (0,2)$ without further assumptions on the degree of continuity of the remainder of the function $\tilde{h}_X(\cdot)$. We would need to guarantee that the remainder would decay faster than $\|x\|^{\alpha-d}$, which is the first order term.
The same limitation applies to $\alpha <2/3$ and $d=1$: the degree of continuity of $\phi_X(\cdot)$ becomes too low to guarantee that its Fourier transform decays faster than $|x|^{\alpha-1}$.
One could try to extend ideas from the proof of Theorem 1.4 in \cite{Frometa2018} to tackle the cases $d\geq 2$ and/or $\alpha < 1$. There, the authors prove a detailed expansion of the Green's function in $d=2$, $\alpha \in (0,2)$, for a truncated long-range random walk.
Regarding asymmetry of the random walk, other methods would be required, as the integrands we need to control lose smoothness at $\theta=0$.
\subsection*{Further repairers}
In this article we only studied repairers for probability distributions $p_X(\cdot)$ which are $\alpha$-admissible with regularity set $R_\alpha = \left\{ 2 \right\}$. However, suppose that $p_X(\cdot)$ is an admissible distribution, let $\delta:=\min ( R_\alpha )$ and assume that $\mu_{\delta}>0$, so that we are in the locally repairable case. We could then define a repairer $Z$ as a random variable with admissible distribution $p_Z(\cdot)$ of index $\delta$, chosen so that the coefficient multiplying $|\theta|^{\delta}$ in the expansion of the characteristic function of $Z$ is the negative of the coefficient $\mu_{\delta}$ multiplying $|\theta|^{\delta}$ in the expansion of the characteristic function of $X$.
Then $\min (R'_{\alpha} ) >\delta$, where $R'_{\alpha}$ is the regularity set of $X+Z$. Hence repairing would allow us to obtain more precise estimates on the potential kernel beyond the constant order of the error. A similar idea could be used to improve the rates of convergence in the LCLT for distributions such that $\min (R_{\alpha}) < 2\alpha$, by performing multiple repairs to cancel each of the terms in $r_X(\theta)$.
If the constant $\mu_\delta$ is negative, one can instead elaborate a similar notion of asymptotic repairer, again in the spirit of adapting the Lindeberg principle by matching the fictional moments/fractional cumulants.
\subsection*{Non-lattice walks/ Random variables in $\bb R$}
We believe that a combination of the ideas of the present paper and of \cite{stone1965local} would be enough to prove our results in the context of non-lattice walks and absolutely continuous random variables, possibly under a further integrability assumption on the characteristic function. However, we cannot say the same about the potential kernel estimates: there we rely on the fact that, on the torus, smoothness implies decay of the Fourier coefficients. This relation fails on the whole real line, as the functions in the integrand are not smooth at zero.
\subsection*{Discrete fractional fields and fluctuations of discrete stochastic heat equations} In \cite{Cipriani2016, chiarini2021odometer}, the authors proved that, after suitable rescaling, the discrete fractional Gaussian fields, respectively driven by the simple random walk and by a random walk with power-law decay, converge in distribution to their suitable continuous counterparts. In fact, they proved it in any dimension and only assuming that the family $\{\xi^{m}(x)\}_{x \in \bb T_m}$ is i.i.d. (not necessarily normal) with finite variance. Theorem \ref{thm:converge-of-fields-elip} shows that the fluctuations around such fields happen on the scale $m^{\alpha-\beta}$. We can also characterise this fluctuation as another fractional Gaussian field, of parameter $2\alpha-\beta$. Notice that the exponent $2\alpha-\beta$ matches precisely the second-order term in the potential kernel expansion given in Theorem~\ref{thm:Green-adm}.
This leads us to think that similar results should hold in the whole space (rather than on the torus). The reason for which we chose the torus is that the field is well-defined for any $\alpha \in (0,2)$ and any dimension $d \ge 1$ on the discrete torus, whereas the equivalent discrete fractional Gaussian field on the full space only exists if $d >\alpha$, a regime for which our Green's function bounds are not strong enough to identify the correct order of fluctuations.
The assumption in Theorem \ref{thm:converge-of-fields-elip} of $\beta<\alpha+1$ is due to the fact that
\begin{equation*}
m^{\beta-\alpha}
\left\langle\xi,
{\bf e}_k(\cdot)
-
\sum_{x \in \bb T_m}{\bf e}_k(x)\1_{B_m(x)}(\cdot)
\right\rangle \end{equation*}
has variance of order $\mc O\left(m^{2(\beta-\alpha-1)}\right)$ as $m \to \infty$ for each fixed $k$. One can push an expansion past this restriction by renormalising the discrete fields. This renormalised version should incorporate Taylor polynomials of higher orders in the discretised noise. After that, one can then follow our strategy to extend the proof.
In fact, assuming a full expansion of the characteristic function $\phi_X$ (say, for instance, in the case of the simple random walk) would allow us to reiterate this argument and write a discrete fractional Gaussian field as a (possibly infinite) series of continuous Gaussian fields of decreasing regularity. Again, this reminds us of the decompositions in terms of regularity used in Hairer's theory. Perhaps an easier analogy is a Taylor expansion: each term describes variations of smaller amplitude, but the term itself (the derivative, in the case of the Taylor expansion) becomes more irregular as we look at further terms in the expansion.
\subsection*{Global in time fluctuations} One can easily improve the result in Theorem~\ref{thm:converge-of-fields-parab} to unbounded time intervals $[0,\infty)$ by introducing a weight density $\omega: [0,\infty) \to [0,\infty)$ satisfying $\int_{0}^{\infty} \omega(t) \max\{t^{2},t^3\}dt < \infty$ and looking at the space $L^2_\omega([0,\infty),H^{-s})$. This decay is chosen to guarantee that the equivalent quantities to the norms of $A_m(t,k)$ and $B_m(t,k)$ (according to the notation of the proof of Theorem~\ref{thm:converge-of-fields-parab}) remain finite when we integrate $t$ over $[0,\infty)$.
\subsection*{Fluctuations around solutions of non-linear SPDEs}
An interesting topic would be to study the fluctuations of discrete non-linear SPDEs. A very informal idea is to use the Da Prato-Debussche argument \cite{da2003strong}. Consider the simple equation
\begin{equation*}
-(-\Delta)^{\alpha/2}X_\alpha =
\eta X^2_\alpha
+\xi \end{equation*}
in the torus, where $\xi$ is the spatial white noise and $\eta \in \bb R$. Depending on the value of $\alpha$, this equation is not well-posed, since we do not expect $X_\alpha$ to be a function, but only a distribution. However, we can try to write $X_\alpha = \Xi_\alpha + v_\alpha$, where $\Xi_\alpha$ is as in \eqref{eq:def-continuous-field}, so that $\Xi_\alpha$ absorbs the irregularity of the noise. We notice that $v_\alpha$ satisfies the formal equation
\begin{equation*}
-(-\Delta)^{\alpha/2}v_\alpha =
\eta(\Xi_\alpha)^2
+
2\eta \Xi_\alpha v_\alpha
+\eta v_\alpha^2 \end{equation*}
which we expect to be more regular, as long as we manage to deal with the quadratic term; the latter can be given a formal meaning via Wick renormalisation. The equation then makes sense as long as $\eta$ is small enough and $\alpha$ is large enough to perform a fixed point argument.
One could follow the same idea for
\begin{equation*} \mc L^m X_m =
\eta X_m^2
+
\xi^m, \end{equation*}
where we take $\xi^m$ to be i.i.d. random variables obtained by the same coupling we use later in this article; here we take $\mu_\alpha=1$ to simplify the exposition. By decomposing $X_m = \Xi^m + v^m$ and using Theorem~\ref{thm:converge-of-fields-elip}, we expect that one can also prove that $m^{\beta-\alpha}(v^m-v_\alpha)$ converges to the solution of a PDE of the form
\begin{equation*}
-(\Delta)^{\alpha/2} v_{Err}
=
\eta \Xi_\alpha \cdot \Xi_{2\alpha-\beta}
+
(2 \eta v_\alpha + U + 2 \eta \Xi_\alpha )
v_{Err}, \end{equation*}
after suitable renormalisation, where $U$ is an operator related to the functions $u_\beta$ appearing in Theorem~\ref{prop:lclt}. This equation is solvable via the Da Prato-Debussche argument as long as $\alpha-\beta$ is sufficiently small. As mentioned above, for larger values of $\alpha-\beta$, one needs to add derivatives of the white noise, making the equation too irregular. However, we expect this could still be solvable via more modern techniques of SPDEs. Similar ideas hold for the parabolic version, and for higher dimensions. We intend to study those results in future articles.
\section{Class of admissible and repairable distributions}\label{sec:example}
In this section, we will discuss a few examples and an explicit construction of admissible probability distributions with index $\alpha \in (0,2)$.
\subsection{Basic properties of admissible distributions}\label{subsec:previously-known-admissible-distributions}
We start by stating simple properties of admissible distributions which will be useful for the \textit{repairing process} of distributions.
\begin{lemma}\label{lem:closed-by-operations} Let $p_{X_1}(\cdot)$ and $p_{X_2}(\cdot)$ be admissible distributions of independent random variables $X_1$ and $X_2$ with indices $\alpha_1,\alpha_2\in (0,2]$, $\alpha_1\le \alpha_2$, and regularity sets $R_{\alpha_1}$, $R_{\alpha_2}$ respectively. We have that their convolution,
\[
p_X (x):=p_{X_1}*p_{X_2}(x) \]
is admissible with index $\alpha_1$ and regularity set
\[
R^\prime_{\alpha_1} \subset (R_{\alpha_1}+R^+_{\alpha_2}) \cap(0,2+\alpha_1). \]
Let $\tilde{X}:=UX_1+(1-U)X_2$ where $U$ is a Bernoulli r.v.~with parameter $q \in [0,1]$, independent from $X_1$ and $X_2$, with distribution $p_{\tilde{X}}(\cdot)$. We have that \begin{equation}\label{eq:alt-repair}
p_{\tilde{X}}(x):=q\cdot p_{X_1}(x)+(1-q)p_{X_2}(x) \end{equation} is admissible with index $\alpha_1$ (for each $q$) and regularity set \[
R^*_\alpha \subset \Big (R_{\alpha_1} \cup R_{\alpha_2}^+ \Big)\cap(0,2+\alpha_1). \] \end{lemma}
\begin{proof} This follows from the relations $\phi_X(\theta)= \phi_{X_1}(\theta)\cdot \phi_{X_2}(\theta)$ and $\phi_{\tilde{X}}(\theta)= q\phi_{X_1}(\theta) + (1-q) \phi_{X_2}(\theta)$. \end{proof}
We can only describe the regularity sets as subsets since there might be cancellations due to the convolution or convex combinations.
\subsection{Old and new examples of admissible distributions}\label{subsec:explicit-examples-of-admissible-distributions}
Before we move to simpler examples, let us point out that the result of \cite{christoph1984asymptotic}, in which the characteristic function of certain random walks in the domain of attraction of a stable distribution is computed, is covered by our class of admissible distributions.
\begin{proposition}\label{lem:Christoph} Let $p$ be given by \eqref{eq:dist-from-integral}. Then $p$ is admissible and its regularity set satisfies $R_\alpha \subset \bb N \cap (0,2+\alpha)$. \end{proposition}
\begin{proof}
This follows from Example~2.15 and Theorem 2.22 in \cite{christoph1984asymptotic}. \end{proof}
The next result will be used in a constructive proof of the existence of admissible distributions in Proposition~\ref{prop:adm-are-enough}. It is worth mentioning that we could simplify the proof if we were not interested in the signs of the constants, in particular of $\mu_2$.
\begin{proposition}\label{prop:asymp-phi-alpha-asymmetric} Let $\alpha \in (0,2)\setminus\{1\}$ and define
\begin{equation}\label{eq:palpha-asymmetric}
p_{\alpha,+}(x) :=
\frac{c^+_\alpha}{x^{1+\alpha}} \1_{\{x>0\}} \end{equation}
where $c^{+}_\alpha=1/\zeta(1+\alpha)$ is the normalising constant and $\zeta$ is the Riemann zeta function. Then the distribution $p_{\alpha,+}(\cdot)$ is admissible with index $\alpha$ and the characteristic function of the corresponding random variable $X$ satisfies
\begin{equation}\label{eq:char-alpha+}
\phi_{\alpha,+}(\theta) = 1- \mu_\alpha|\theta|^\alpha
+i\mu^\prime_1\theta + \mu_2\theta^2 + i\mu^\prime_3\theta^3 +\mc O(|\theta|^{2+\alpha}), \end{equation}
where $\mu^\prime_1,\mu_2,\mu^\prime_3 \in \bb R$. \end{proposition}
One can easily see that the asymmetric distribution with $\alpha=1$ is not admissible. This is because the characteristic function $\phi_{1,+}(\cdot)$ satisfies
\begin{align*}
\phi_{1,+}(\theta) :=
1-\mu_1|\theta|
+i{\mu}^\prime_{1}|\theta|\log|\theta|
+o(|\theta|\log|\theta|) \text{ as } \theta \longrightarrow 0, \end{align*}
for some real constants $\mu_1,{\mu}^\prime_1$. In the symmetric case, the $\log$-term is purely imaginary and disappears when summed with its complex conjugate. Indeed, the following proposition treats the symmetric counterpart of Proposition~\ref{prop:asymp-phi-alpha-asymmetric}.
\begin{proposition}\label{prop:asymp-phi-alpha} The distribution $p_\alpha$ given in \eqref{def:palpha} for $\alpha \in (0,2)$ is admissible with index $\alpha$ and locally repairable. Let $\phi_\alpha(\theta)$ be given by \begin{equation}\label{eq:char-function}
\phi_\alpha(\theta)
=
c_\alpha \sum_{x \in \bb Z\backslash{\{0\}} } \frac{ e^{i x\theta}}{|x|^{1+\alpha}}. \end{equation} \begin{enumerate} \item Let $\alpha \neq 1$, then $\phi_\alpha$ satisfies
\[
\phi_{\alpha}(\theta) =
1 - \mu_{\alpha}|\theta|^{\alpha} + \mu_2 |\theta|^2
+ \mc O(|\theta|^{2+\alpha}) \text{ as } |\theta| \rightarrow 0 \]
with coefficients $\mu_{\alpha}, \mu_2$ given by
\[
\mu_{\alpha} = -2 c_{\alpha} \cos \left ( \frac{\pi \alpha}{2}\right ) \Gamma(-\alpha) \quad \text{ and }\quad
\mu_2
>0. \]
\item In the case $\alpha=1$, we have that
\[
\phi_{1}(\theta) =
1 - \frac{3}{\pi}|\theta| + \frac{3}{2\pi^2}|\theta|^2 +
\mc O(|\theta|^{3}) \text{ as } |\theta| \rightarrow 0. \] \end{enumerate} \end{proposition}
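Before turning to the proofs, we record a quick numerical sanity check of the $\alpha=1$ expansion above; it plays no role in the arguments, and is only a minimal sketch assuming a standard Python environment with NumPy available.
\begin{verbatim}
import numpy as np

c1 = 3 / np.pi**2   # normalising constant c_1 = 1/(2*zeta(2))
x = np.arange(1, 2_000_000, dtype=np.float64)  # truncated lattice sum

for theta in (0.5, 0.1, 0.02):
    # phi_1(theta) = c_1 * sum_{x != 0} e^{i x theta}/|x|^2
    #             = 2*c_1 * sum_{x >= 1} cos(x*theta)/x^2
    phi = 2 * c1 * np.sum(np.cos(x * theta) / x**2)
    expansion = 1 - (3/np.pi)*theta + (3/(2*np.pi**2))*theta**2
    print(theta, phi - expansion)  # small, at truncation-error level
\end{verbatim}
The printed differences are of the order of the truncation error of the sum, consistent with the expansion in the proposition.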
We will first prove Proposition~\ref{prop:asymp-phi-alpha-asymmetric} to explain our strategy to estimate such functions.
\begin{proof}[Proof of Proposition~\ref{prop:asymp-phi-alpha-asymmetric}] To prove this statement, we will use the Euler-Maclaurin formula \cite{Apo}, which states that for a given smooth function $f \in C^{\infty}(\bb R)$, we have that
\begin{equation}\label{eq:euler-mac}
\sum_{x=1}^M f(x) - \int_{1}^M f(x)dx =
\frac{f(1)+f(M)}{2} + R_{\alpha, +}^M, \end{equation}
where $M \in \bb N$ and the remainder term $R^M_{\alpha,+}$ can be computed explicitly by \[
R_{\alpha, +}^M = \int_1^M f^{\prime}(z)P_1(z)dz, \]
and $P_1(x)=B_1(x-\lfloor x\rfloor)$, where $B_1(\cdot)$ is the first Bernoulli polynomial, so that $P_1$ is the periodised function $P_1(x)=\left(x - \lfloor x \rfloor \right)- \frac{1}{2}$. We will apply this formula to the function $f(x) = \frac{1-\exp(i\theta x)}{|x|^{1+\alpha}}$. Using that $\phi(-\theta)=\overline{\phi(\theta)}$, we can assume that $\theta >0$. As we take $M \to \infty$, the left-hand side of \eqref{eq:euler-mac} becomes \begin{equation}\label{eq:lhs-euler-mac}
\frac{1-\phi_{\alpha,+}(\theta)}{c^{+}_{\alpha}} -
\int_{1}^\infty \frac{1-\exp (i\theta x)}{x^{1+\alpha}}dx, \end{equation} where $c^{+}_\alpha$ is the normalising constant used in the definition of $p_{\alpha,+}(\cdot)$. By the change of variables $z=x \theta$ in the above integral, we get
\begin{equation}\label{eq:integral-alpha-plus}
\frac{1-\phi_{\alpha,+}(\theta)}{c^{+}_{\alpha}} -
\theta^\alpha
\int_{\theta}^\infty \frac{1-\exp(iz)}{z^{1+\alpha}}dz. \end{equation}
To analyse the integral in a systematic manner, we add and subtract counter terms at the singularity $0$ (notice that this is only necessary for $\alpha>1$)
\begin{align*}
\theta^\alpha\int_{\theta}^\infty \frac{1-\exp(iz)}{z^{1+\alpha}}dz
& =
\theta^\alpha\int_{1}^\infty \frac{1-\exp(iz)}{z^{1+\alpha}}dz
+
\theta^\alpha\int_{\theta}^1 \frac{1+iz-\exp(iz)}{z^{1+\alpha}}dz
+
i\frac{\theta-|\theta|^\alpha}{1-\alpha} . \end{align*}
Using a Taylor expansion of $e^{iz}$ around $z=0$, we get
\begin{align*}
\theta^\alpha
\int_{\theta}^\infty \frac{1-\exp(iz)}{z^{1+\alpha}}dz &=
C^{+}_\alpha|\theta|^\alpha
-
\sum_{k=1}^3 \frac{(i\theta)^k}{k!(k-\alpha)} + \mc O(\theta^4), \end{align*}
where
\begin{align*}
C^{+}_\alpha
&=
- \cos\left(\frac{\pi \alpha}{2}\right)\Gamma(-\alpha)
\left(1
+
i \tan\left(\frac{\pi \alpha}{2}\right) \right), \end{align*}
The constant was evaluated by means of the Mellin transform (and analytic continuation for the case $\alpha >1$); finally, set $\mu_{\alpha}=c^{+}_{\alpha}C^+_{\alpha}$.
Now, we turn to the right-hand side of \eqref{eq:euler-mac}. Note that $f(M) \rightarrow 0$ as $M\rightarrow \infty$. Hence
\begin{align}
\lim_{M\rightarrow \infty}
\frac{f(1)+f(M)}{2} + R^M_{\alpha} & =
\frac{1}{2}(1-\exp(i\theta))
+ R^\infty_{\alpha}(\theta) \nonumber \\ &=
-\sum_{k=1}^3 \frac{(i\theta)^k}{2(k!)}
+ R^\infty_\alpha
+\mc O(\theta^4) \label{eq:prop5.4} \end{align}
where
\begin{align}\label{def:r-infity+}
R^\infty_{\alpha} = -\theta^{1+\alpha}
\int_{\theta}^\infty
\left(\frac{iz \exp(iz)+ (1+\alpha)(1-\exp(iz) )}{z^{2+\alpha}}\right)
P_1\Big(\frac{z}{\theta}\Big) dz. \end{align}
The proof now follows from Lemma~\ref{lem:ap-R-alpha+} and from identifying the fictional moments by collecting the corresponding coefficients in \eqref{eq:prop5.4}. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:asymp-phi-alpha}] We start analogously to the previous proof. By using \eqref{eq:euler-mac} for $f(x)=(1-\cos(\theta x))x^{-1-\alpha}$ and taking $M \to \infty$, we get
\begin{equation}\label{def:r-infity}
\frac{1-\phi_{\alpha}(\theta)}{2c_{\alpha}} -
\int_{1}^\infty \frac{1-\cos (\theta z)}{z^{1+\alpha}}dz
=
\frac{1}{2}(1-\cos(\theta))
+
R^\infty_\alpha \end{equation}
where $R^\infty_\alpha$ is the Euler-Maclaurin error. In Lemma~\ref{lem:ap-R-alpha} we estimate this term in more detail; there we obtain \begin{align*}
R^\infty_\alpha(\theta) = K_2\theta^2
+\mc O(\theta^{2+\alpha}), \end{align*} where $K_2$ is a constant depending on $\alpha$ which is defined in \eqref{eq:asymp-of-R-2}. By setting
\begin{equation}\label{eq:def-mu2}
\mu_2
=
2c_\alpha
\left(
\frac{1}{2(2-\alpha)} - \frac{1}{4} - K_2
\right) \end{equation}
the statement for $\alpha \neq 1$ follows from applying Lemma~\ref{lem:Right-sign}.
For the case $\alpha=1$ the analysis becomes much simpler. This is because the first-order term in Equation \eqref{asymp-of-R} vanishes and, since $\alpha=1$, the terms $\theta^{1+\alpha}$ and $\theta^2$ collapse into a single term. The normalising constant is equal to $c_1 = \frac{1}{2\zeta(2)} = \frac{3}{\pi^2}$.
Again, using Euler-Maclaurin we get that, for $\theta >0$ \begin{equation}\label{eq:euler-mac-alpha-1}
\frac{1-\phi_1(\theta)}{2 c _1}
-
\int_{1}^\infty \frac{1-\cos(\theta x)}{x^{2}}dx
=
\frac{1-\cos(\theta)}{2}
+R^\infty_1, \end{equation} where the remainder term is of order \[
R^\infty_1 =\int_1^\infty \frac{d}{dx}\left( \frac{1-\cos(\theta x)}{x^{2}} \right) P_1(x)dx = \mathcal{O}(\theta^3). \]
Since
\[
\int_{0}^\infty \frac{1-\cos(z)}{z^{2}}dz
=
\frac{\pi}{2}, \]
we can write
\begin{align*}
\theta \int_\theta^\infty \frac{ 1-\cos(z) }{z^{2}}dz
&=
\theta \int_0^\infty \frac{1-\cos(z)}{z^{2}}dz
-
\theta \int_0^\theta \frac{1-\cos(z)}{z^{2}}dz
\\ &=
\frac{\pi}{2}\theta
-
\frac{1}{2}\theta^2 + \mc O (\theta^4) \end{align*}
where in the last line we used a simple Taylor expansion. Collecting all coefficients corresponding to the powers of $\theta$ we obtain the result. \end{proof}
\subsection{A criterion for admissibility}\label{subsec:a-criteria-for-admissibility}
It is natural to wonder whether our techniques can be applied to examples for which the limits $\lim_{x \to \pm\infty} |x|^{1+\alpha}\cdot p_X(x)$ are not well-defined.
A very natural criterion comes from tail bounds on the cumulative distribution function,
\begin{equation}\label{eq:general-condition-1}
1- F_{X}(x) \stackrel{x \to \infty}{=}
\frac{c_{+}}{|x|^{\alpha}}
+
\sum_{\beta \in S_\alpha}\frac{c_{\beta,+}}{|x|^{\beta}} +
o\left(\frac{1}{|x|^{4}}\right) \end{equation}
and
\begin{equation}\label{eq:general-condition-2}
F_{X}(x) \stackrel{x \to -\infty}{=} \frac{c_{-}}{|x|^{\alpha}} +
\sum_{\beta \in S_\alpha}\frac{c_{\beta,-}}{|x|^{\beta}}
+
o\left(\frac{1}{|x|^{4}}\right) \end{equation}
for a finite set $S_{\alpha} \subset (0,2+\alpha)$, two positive constants $c_+$ and $c_-$, and arbitrary real constants $c_{\beta,+},c_{\beta,-}$, $\beta \in S_\alpha$.
\begin{lemma}\label{lem:general-tail}
Let $\alpha \in (1,2)$ and let $p_X(\cdot)$ be the distribution given by
\begin{equation*}
p_X(x):=F_X(x)-F_X(x-1)
\end{equation*}
where $F_X(\cdot)$ satisfies estimate~\eqref{eq:general-condition-1}. Then we have that $p_X(\cdot)$ is admissible with regularity set $R_\alpha \subset (\bb Z \cup \{1+\alpha\}) \cap (0,2+\alpha).$ \end{lemma}
\begin{proof}
We will concentrate on the case $\tilde{p}_\alpha(x)= \tilde{p}_\alpha(|x|)=\frac{1}{2}\left(\frac{1}{(|x|-1)^{\alpha}}-\frac{1}{|x|^{\alpha}}\right)$ for $x \not \in \{-1,0,1\}$, which corresponds to $c_+=c_-=1$ with vanishing error term. The non-symmetric case can be dealt with in a very similar manner. We will comment on the presence of an error term later on.
The proof is essentially the same as that of Proposition~\ref{prop:asymp-phi-alpha}, plus a summation by parts. Indeed, recall that for two sequences $\{f_k\}_{k \ge 1}$ and $\{g_k\}_{k \ge 1}$ we have, for any $M \in \bb N$,
\begin{equation}\label{eq:summation-by-parts}
\sum_{k = 1}^M
f_k [g_{k+1}-g_k]
=
f_M g_{M+1}- f_1 g_1
-
\sum_{k = 1}^M
g_{k+1} [f_{k+1} - f_k]. \end{equation}
By taking $f_k=(1-\cos(\theta k))$, $g_{k}=F_X(k-1)-1$ and $M \to \infty$ , we get
\begin{align*}
2(1-\tilde{\phi}_\alpha(\theta)) & =
\sum_{x \in \bb N} (1-\cos(\theta x))\tilde{p}_\alpha(x) \\& =
(1-\cos(\theta))(1-F_X(1))
+
\sum_{x \in \bb N} [\cos(\theta(x+1))-\cos(\theta x)] (1-F_X(x)) \\& =
(1-\cos(\theta))(1-F_X(1))
+
(\cos(\theta)-1)
\sum_{x \in \bb N} \frac{\cos(\theta x)}{x^\alpha}
+
\sin(\theta)
\sum_{x \in \bb N} \frac{\sin(\theta x)}{x^\alpha}. \end{align*}
The proof now follows from analysing each of the infinite sums by using Euler-Maclaurin, just as before. Notice that the leading term, as $\theta \to 0$, of both series is of order $\mc O(|\theta|^{\alpha-1})$, but the prefactors $(1-\cos(\theta))$ and $\sin(\theta)$ help to recover the original rate. The condition $\alpha>1$ is required in order for the functions $\theta \mapsto \int_{[\theta,\infty)} \frac{\cos(\theta x)}{|x|^{\alpha}}dx $ and $\theta \mapsto \int_{[\theta,\infty)} \frac{\sin(\theta x)}{|x|^{\alpha}} dx$ to be well-defined.
The error bound can be dealt with by defining
\begin{align*}
\phi_{E_\alpha}(\theta) & =
(1-\cos(\theta))J(1)
+
(\cos(\theta)-1)
\sum_{x \in \bb N} \cos(\theta x)J(x)
+
\sin(\theta)
\sum_{x \in \bb N} \sin(\theta x)J(x), \end{align*}
where $J(x):=(1-F_X(x))-|x|^{-\alpha} = c_{\delta,\pm} |x|^{-\delta} + o(|x|^{-\delta})$ for some $\delta \in (\alpha,4]$ as $|x| \to \infty$. We can then analyse $\phi_{E_\alpha}$ using estimates analogous to those in the first part of this proof, with $\delta$ in place of $\alpha$. Notice that, as we are not assuming positivity of the constants $c_{\beta,+}, c_{\beta,-}$, the function $\phi_{E_\alpha}$ may not be a characteristic function of a random variable; however, this is irrelevant for this proof. We iterate this argument until we have exhausted the set $S_{\alpha}$, leaving us with an error of order
\begin{align*}
\phi_{E^*}(\theta) & =
(1-\cos(\theta))J^* (1)
+
(\cos(\theta)-1)
\sum_{x \in \bb N} \cos(\theta x)J^*(x)
+
\sin(\theta)
\sum_{x \in \bb N} \sin(\theta x)J^*(x), \end{align*}
where $|J^*(x)| = o(|x|^{-4})$, at which point we can just differentiate the expression above in order to obtain the desired bounds. \end{proof}
\begin{remark}\label{rem:dom-attraction} Remember that a distribution is in the domain of attraction of an $\alpha$-stable distribution if, and only if, \begin{equation*}
\lim_{x\to \infty} (1- F_{X}(x))|x|^{\alpha} = c_{+} \in[0,\infty) \end{equation*}
and
\begin{equation*}
\lim_{x\to -\infty} F_{X}(x)|x|^{\alpha} = c_{-} \in [0,\infty) \end{equation*}
with $c_++c_- > 0$, see e.g. \cite{Taqqu}. Therefore, the condition given by Lemma~\ref{lem:general-tail} is a natural quantitative version of this characterisation. \end{remark}
The reason why we concentrate on the case where the density decays like a power law (rather than the complement of the cumulative distribution function) is that the former can be used for any $\alpha$, and also in any dimension - see Section~\ref{sec:general}.
\subsection{Construction of admissible distributions in the domain of attraction of any given stable distribution} \label{subsec:construction-of-admissible-distributions-in-the-domain-of-attraction-of-any-given-stable-distribution}
We end this section by giving a constructive proof of the existence of an admissible distribution in the domain of normal attraction of any given stable distribution (at least for $\alpha \neq 1$). In fact, such a construction will be derived from power-law distributions. It shows that the class of examples provided in this article is large.
\begin{proposition}\label{prop:adm-are-enough} Let $\bar{X}$ be a stable random variable with parameters $(\alpha,\beta,\gamma,\varrho)$ as in Definition \ref{def:stable}.
There exists a distribution $p_X \in \mathcal{A}$ such that the corresponding characteristic functions satisfy
\begin{equation}\label{eq:adm-are-enough}
\phi_{\alpha,\beta,\gamma,\varrho}(\theta)
=
\phi_{X}(\theta)
+ o\left(|\theta|^\alpha\right). \end{equation}
The same holds for $\alpha=1, \beta = 0$ and $\gamma \in \bb R^{+},\varrho \in \bb R$. \end{proposition}
\begin{proof} The construction is simple and proceeds by modifying the characteristic function so as to add one parameter at a time, i.e.~$\beta$, $\gamma$ and $\varrho$. We will restrict ourselves to the case $\alpha \neq 1$, as it is more elaborate.
We start by defining a distribution $p_{W_1}(\cdot)$ with the correct parameters $\alpha$ and $\beta$; this can be done by choosing an appropriate convex combination of totally asymmetric random variables. Let $p_{\alpha,+}(\cdot)$ be the probability distribution given in Proposition~\ref{prop:asymp-phi-alpha-asymmetric} and define $p_{\alpha, -}(\cdot):= p_{\alpha,+}(-\cdot)$. Now, let $q_1= \frac{\beta+1}{2}$ and note that $q_1\in [0,1]$. The distribution \begin{equation*}
p_{W_1}:= q_1 \cdot p_{\alpha,+} + (1-q_1)\cdot p_{\alpha,-} \end{equation*}
belongs to $\mc A$ due to Lemma~\ref{lem:closed-by-operations}. Moreover,
\begin{equation*}
\log(\phi_{W_1}(\theta))
=
i \beta \theta -|(\mu_\alpha)^{1/\alpha} \theta|^{\alpha}\left(1-i \beta \sgn(\theta) \tan\left(\frac{\pi\alpha}{2}\right)\right)
+ o(|\theta|^\alpha), \end{equation*}
due to Proposition~\ref{prop:asymp-phi-alpha-asymmetric}.
Now we will define an admissible distribution $p_{W_2}$ with the correct parameters $\alpha,\beta$ and $\gamma$. This will be done in two steps. First, we define $M:= \left\lceil \frac{\gamma}{(\mu_\alpha)^{1/\alpha}}\right\rceil \ge 1$ and consider
\begin{equation*}
p_{W^\prime_2}
:=
\underbrace{p_{W_1}
*
\dots
*
p_{W_1}
}_{M \text{ times }}. \end{equation*}
We have $p_{W^\prime_2} \in \mc A$ and
\begin{align*}
\log(\phi_{W^\prime_2}(\theta))
=
i M \beta \mu'_1\theta -|M\cdot(\mu_\alpha)^{1/\alpha} \theta|^{\alpha}\left(1-i \beta \sgn(\theta) \tan\left(\frac{\pi\alpha}{2}\right)\right)
+ o(|\theta|^\alpha). \end{align*}
Now, define $q_2:= \frac{\gamma}{ M \cdot(\mu_\alpha)^{1/\alpha}}\in [0,1]$. The distribution
\begin{equation*}
p_{W_2}(x):= q_2 \cdot p_{W^\prime_2}(x)+ (1-q_2)\cdot\delta_0(x) \end{equation*}
is admissible and satisfies \begin{equation*}
\log (\phi_{W_2}(\theta))
:=
i \gamma^\alpha \mu_1'\beta\theta -| \gamma\theta|^{\alpha}\left(1-i \beta \sgn(\theta) \tan\left(\frac{\pi\alpha}{2}\right)\right)
+ o(|\theta|^\alpha), \end{equation*} thanks to the asymptotics of $\phi_{W_2^\prime}$ around $0$ and the Taylor expansion of the functions $z \mapsto e^z, z \mapsto \log(1+z)$ around $z=0$.
Finally, we recover the drift parameter $\varrho$ by setting $D = \left\lceil \varrho- q_2 M \beta \mu_1'\right\rceil$ and $q_3 \in [0,1]$ such that
\begin{align*}
q_3 (D-1) + (1-q_3)D= \varrho- q_2 M \beta \mu_1'. \end{align*}
Then, it is easy to show that
\begin{equation*}
p_{X}:= p_{W_2}*(q_3\cdot\delta_{D-1}+ (1-q_3)\cdot\delta_{D}) \end{equation*}
belongs to $\mc A$ and satisfies the relation \eqref{eq:adm-are-enough}. \end{proof}
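As an illustration, for a symmetric target ($\beta=0$, $\varrho=0$) the construction simplifies considerably: $q_1=1/2$, so that $p_{W_1}=\frac{1}{2}(p_{\alpha,+}+p_{\alpha,-})$ is symmetric, the drift corrections vanish, and only the convolution and convex-combination steps adjusting the scale parameter $\gamma$ remain.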
\section{Proofs of Local Central Limit Theorems}\label{sec:lclt}
In this section we will prove Theorems \ref{thm:lclt} and \ref{prop:lclt}.
\begin{proof}[Proof of Theorem \ref{thm:lclt}] We will prove cases (i) and (iii), since case (ii) is a corollary of case (i).\\ \noindent \textbf{Case (i): $p_X(\cdot)$ asymmetric}\\ Consider a sequence $(X_i)_{i\in \mathbb{N}}$ of i.i.d.~random variables with admissible law $p_X(\cdot)$ and a sequence $(\bar{X}_i)_{i\in \mathbb{N}}$ of i.i.d.~$\alpha$-stable random variables with law $p_{\bar{X}}(\cdot)$, where $p_X$ is in the domain of normal attraction of $p_{\bar{X}}$. Let $S_n =\sum_{i=1}^n X_i$ resp.~$\bar{S}_n=\sum_{i=1}^n \bar{X}_i$, with probability distributions denoted by $p^n_X(\cdot)$ resp.~$p^n_{\bar{X}}(\cdot)$.
We want to compare the probability distributions $p^n_{X}(\cdot)$ and $p^n_{\bar{X}}(\cdot)$ using their representation in terms of inverse Fourier transforms. More precisely we have that
\[
p^n_{X}(x)
=
\frac{1}{2 \pi}
\int_{-\pi}^\pi \phi^n_X (\theta) e^{-i x \theta} d\theta
\]
resp. \[
p^n_{\bar{X}}(x)
= \frac{1}{2\pi} \int_{-\infty}^\infty
\phi_{\bar{X}}(\theta) e^{-i \theta \cdot x} d\theta. \]
Using a change of variable formula, we get
\[
p^n_X(x)
=
\frac{1}{2 \pi n ^{1/\alpha}}
\int_{-\pi n^{1/\alpha}}^{\pi n^{1/\alpha}}
\phi^n_X \Bigg( \frac{\theta}{n^{1/\alpha}} \Bigg)
e^{ -i x\frac{\theta}{ n ^{1/\alpha}}} d\theta. \]
Given $\varepsilon>0$, notice that $\sup_{ \theta \in \bb T\setminus
[-\varepsilon,\varepsilon]}|\phi_X (\theta)|<1$, as $X$ is supported in $\bb Z$, see \cite[Lemma 2.3.2]{Limic10}. Hence we get
\[
p^n_X(x)
=
\frac{1}{2 \pi n ^{1/\alpha}}
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
\phi^n_X \Bigg( \frac{\theta}{n^{1/\alpha}} \Bigg)
e^{ - i x \frac{ \theta}{ n ^{1/\alpha}}} d\theta
+ \mc O (e^{-cn}) \]
for some positive constant $c>0$. Analogously, we have that
\begin{align*}
{p}^n_{\bar{X}}(x)
& = \frac{1}{ 2 \pi n^{1/\alpha}}
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
\phi^n_{\bar{X}}\left(\frac{\theta}{n^{1/\alpha}}\right)
e^{- i x \frac{ \theta}{n^{1/\alpha}}} d\theta
+
\frac{1}{ 2 \pi n^{1/\alpha}}
\int_{|\theta|> \varepsilon n^{1/\alpha}}
\phi^n_{\bar{X}}\left(\frac{\theta}{n^{1/\alpha}}\right)
e^{- i x \frac{\theta}{n^{1/\alpha}}} d\theta.
\end{align*}
One can easily check that,
\[
\int_{|\theta|> \varepsilon n^{1/\alpha}}
\phi^n_{\bar{X}}\left(\frac{\theta}{n^{1/\alpha}}\right)
e^{-\frac{i x \theta}{n^{1/\alpha}}} d\theta
= \mc O \left(e^{-c' n}\right), \]
for some constant $c'>0$. Writing $\phi^n_X \Big(\frac{\theta}{n^{1/\alpha}}\Big) = [1+F_n(\theta)] \phi_{\bar{X}}(\theta)$, we can concentrate our efforts on bounding
\begin{align}
\label{thm:lclt-int-to-bound}
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
F_n(\theta)
\phi^n_{\bar{X}}\left(\frac{\theta}{n^{1/\alpha}}\right)
e^{-\frac{i x \theta}{n^{1/\alpha}}} d\theta. \end{align}
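Regarding the two exponentially small error terms above, we sketch the standard estimates. For the first one, $\rho := \sup_{\theta \in \bb T\setminus[-\varepsilon,\varepsilon]}|\phi_X(\theta)|<1$ yields a bound of order $n^{1/\alpha}\rho^n$. For the second one, the drift and asymmetry parameters of $\phi_{\bar{X}}$ only contribute to its phase, so $\big|\phi^n_{\bar{X}}\big(\frac{\theta}{n^{1/\alpha}}\big)\big| = e^{-\mu_\alpha|\theta|^\alpha}$ and
\[
\bigg| \int_{|\theta|> \varepsilon n^{1/\alpha}}
\phi^n_{\bar{X}}\Big(\frac{\theta}{n^{1/\alpha}}\Big)
e^{-\frac{i x \theta}{n^{1/\alpha}}}\, d\theta \bigg|
\le \int_{|\theta|> \varepsilon n^{1/\alpha}} e^{-\mu_\alpha|\theta|^\alpha}\, d\theta
\le e^{-\frac{\mu_\alpha}{2}\varepsilon^\alpha n} \int_{\bb R} e^{-\frac{\mu_\alpha}{2}|\theta|^\alpha}\, d\theta,
\]
so one may take $c' = \frac{\mu_\alpha}{2}\varepsilon^\alpha$.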
Remember the notation $\beta_1:=\min(J_\alpha)$ where $J_\alpha := \spann (R_\alpha)$. Now, we use
\[
|F_n(\theta)|
\lesssim n^{1-\frac{\beta_1}{\alpha}}|\theta|^{\beta_1} \]
for $|\theta| < \varepsilon n^{1/\alpha}$ (possibly for a smaller value of $\varepsilon$). With this, we get \[ \begin{split}
\left| p^n_X(x) - p^n_{\bar{X}}(x) \right| & =
\Big |\frac{1}{ 2\pi n^{1/\alpha}}\int_{|\theta|<\varepsilon n^{1/\alpha}}
\phi^n_{\bar{X}}\left(\frac{\theta}{n^{1/\alpha}}\right)
F_n(\theta)d\theta\Big| + \mc O(e^{-c^\prime n})
\\ & \lesssim
\frac{1}{n^{\frac{\beta_1+1-\alpha}{\alpha}}} \underbrace{\int_{|\theta|<\varepsilon n^{1/\alpha}}
e^{- \mu_\alpha |\theta|^\alpha}
|\theta|^{\beta_1}
d\theta}_{\mc O (1)}
+ \mc O(e^{-c^\prime n}) \end{split} \] where the integral indicated by the underbrace is bounded uniformly in $n$.
\noindent \textbf{Case (iii): $p_X(\cdot)$ asymptotically repairable}\\ We will prove the statement in a similar manner, so we will only highlight the main differences. Write
\begin{align*}
p^n_{\bar{X}+ \bar{Z}}(x) &=
\frac{1}{2 \pi}
\int_{-\infty}^\infty e^{-n \mu_\alpha |\theta|^\alpha-n \mu_2 |\theta|^2}
e^{-i x \theta} d\theta \\&=
\frac{1}{2 \pi n^{1/\alpha}}
\int_{-\infty}^\infty e^{- \mu_\alpha |\theta|^\alpha-n^{(1-2/\alpha)} \mu_2 |\theta|^2}
e^{-\frac{i x \theta}{n^{1/\alpha}} } d\theta \end{align*}
and write $\phi^n_X \Big(\frac{\theta}{n^{1/\alpha}}\Big) =
[1+F_n(\theta)]\exp\left( - \mu_\alpha |\theta|^\alpha-n^{(1-2/\alpha)} \mu_2
|\theta|^2\right)$. Notice that $1-\frac{2}{\alpha}<0$.
One can easily check that,
\[
\int_{|\theta|> \varepsilon n^{ 1/\alpha}}
e^{- \mu_\alpha |\theta|^\alpha-n^{1- \frac{2}{\alpha}} \mu_2 |\theta|^2}
e^{- i x \frac{\theta}{ n^{1/\alpha}}} d\theta
= \mc O (e^{-cn}), \]
for some constant $c>0$. The statement will follow once we bound
\[
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
F_n(\theta)e^{- \mu_\alpha |\theta|^\alpha-n^{1- \frac{2}{\alpha}} \mu_2 |\theta|^2}
e^{- i x \frac{\theta}{ n^{ 1/\alpha}}} d\theta \lesssim n^{-1/\alpha}. \]
Analogously to before, for $|\theta|\le \varepsilon n^{1/\alpha}$ we have \[
|F_n(\theta)| \lesssim \frac{|\theta|^{2\alpha}}{n}. \] This proves the claim. \end{proof}
We proceed with the proof of Theorem \ref{prop:lclt}. \begin{proof}[Proof of Theorem \ref{prop:lclt}] In this case, we are only dealing with symmetric distributions. Using similar ideas as in the proof of Theorem \ref{thm:lclt}, and assuming again that $\theta >0$, we write
\[
p^n_X(x)
=
\frac{1}{2 \pi n ^{1/\alpha}}
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
[1+F_n(\theta)]e^{- \mu_\alpha |\theta|^\alpha}
e^{ - i x \theta n ^{-\frac{1}{\alpha}}} d\theta
+ \mc O (e^{-cn^{1/\alpha}})
\]
for some positive constant $c >0$. We have that
\begin{equation}\label{eq:exp-Fn}
F_n(\theta)
=
\sum_{\beta \in J_\alpha} C_\beta \frac{n}{n^{\beta/\alpha}}|\theta|^\beta +
\sum_{\beta \in J_\alpha} C^\prime_\beta \frac{n}{n^{\beta/\alpha}}\sgn(\theta)|\theta|^\beta +
\mc O \left( \frac{|\theta|^{2+\alpha}}{n^{2/\alpha}} \right), \end{equation} where we used the Taylor polynomial of
\begin{align*}
t \mapsto e^{ \sum_{\beta \in R_{\alpha}} n^{1-\beta/\alpha} \mu_{\beta} |t|^{\beta} +
\sum_{\beta \in R_{\alpha}} n^{1-\beta/\alpha} \mu^\prime_{\beta} \sgn(t)|t|^{\beta}
} \end{align*}
truncated at level $\mc O \left(\frac{t^{2+\alpha}}{n^{2/\alpha}}\right)$.
Define
\begin{equation*}
u_\beta(x):=
\frac{1}{2\pi}
\int_{\bb R}
|\theta|^{\beta} \phi_{\bar{X}}(\theta)
e^{ix \theta}
d\theta,
\text{ and }
u^\prime_\beta(x):=
\frac{1}{2\pi}
\int_{\bb R}
\sgn(\theta) |\theta|^{\beta} \phi_{\bar{X}}(\theta)
e^{ix \theta}
d\theta,
\end{equation*}
hence we have that for $|\theta| < \varepsilon n^{1/\alpha}$
\begin{align*}
\Bigg|
&
p^n_X(x)- p^n_{\bar{X}}(x)-
\sum_{\beta\in J_\alpha} C_\beta
\frac{u_\beta\left( \frac{x}{n^{1/\alpha}} \right)}{n^{(1+\beta-\alpha)/\alpha}}
\Bigg|
\\ & \lesssim
\int_{-\varepsilon n^{1/\alpha}}^{\varepsilon n^{1/\alpha}}
\frac{
|\theta|^{2+\alpha} e^{-\mu_\alpha |\theta|^\alpha}}{ n^{3/\alpha}}
d\theta
+
\sum_{\beta \in J_\alpha} (|C_\beta|+|C^\prime_\beta|)
\int_{ \bb R \setminus [-\varepsilon n ^{1/\alpha},\varepsilon n^{1/\alpha}]}
\frac{ |\theta|^{\beta} e^{-\mu_\alpha |\theta|^\alpha}}{ n^{(1+\beta-\alpha)/\alpha}}
d\theta, \\ & \lesssim
n^{-3/ \alpha}
+
\mc O \left( e^{-c n} \right) \end{align*}
for some $c>0$ and $n$ large enough. \end{proof}
\section{Proofs for Green function/potential kernel expansion}\label{sec:Green}
In this section we will develop the potential kernel estimates stated in Theorems \ref{thm:Green} and \ref{thm:green-1}. The strategy will be to use detailed knowledge of the expansion of $\phi_X(\cdot)$, rather than the LCLT, as was done for the equivalent problem in the classical case in \cite{Limic10}. Again, here we are restricting ourselves to the symmetric case.
\begin{proof}[Proof of Theorem \ref{thm:Green}] \textbf{Case (i) $p_X(\cdot)$ repaired} \\ Let us evaluate the expression
\[
a_{X}(x) = \frac{1}{2\pi} \int_{-\pi}^\pi \frac{1}{1-\phi_X(\theta)}
(\cos(\theta x)-1)d\theta. \]
The idea is to compare $a_X(x)$ with the potential kernel $a_{\bar{X}}(\cdot)$ of a symmetric stable process $(\bar{X}_t)_{t\geq 0}$ with multiplicative constant $\mu_{\alpha}$ whose characteristic function is given by $\phi_{\bar{X}_t}(\theta) = e^{-\mu_\alpha t |\theta|^\alpha}$.
This is more convenient since it can be explicitly computed.
Since $(t,\theta) \mapsto e^{-\mu_\alpha t |\theta|^\alpha}(\cos(\theta x)-1)$ is in $L^1(\bb R_+ \times \bb R)$, Fubini's theorem gives
\begin{align*}
a_{\bar{X}}(x)
&=
\frac{1}{2\pi} \int_{\bb R}\int_{0}^\infty e^{-t \mu_\alpha |\theta|^\alpha }dt
(\cos (\theta x) - 1) d\theta
\\&=
\left(
\frac{1}{2\pi \mu_\alpha}\int_{\mathbb{R}} \frac{1}{ |\theta|^\alpha}
(\cos (\theta) - 1) d\theta
\right)
|x|^{\alpha-1} \end{align*}
which gives the expression for the constant $C_\alpha$; the last equality follows from the substitution $\theta \mapsto \theta/|x|$. We write
\[
a_X(x) = a_{\bar{X}}(x)
+ \underbrace{\left( a_{X}(x) - \tilde{a}_{\bar{X}}(x) \right)}_A
- \underbrace{\left( a_{\bar{X}}(x) - \tilde{a}_{\bar{X}}(x) \right)}_B, \]
where
\[
\tilde{a}_{\bar{X}}(x) :=
\frac{1}{2\pi \mu_\alpha} \int_{-\pi}^\pi\frac{1}{ |\theta|^\alpha}
(\cos (\theta x) - 1) d\theta. \]
The remainder of the proof is divided into two parts: estimating the term in $A$ by using H\"older continuity, and then the term in $B$ by using an interplay of the Fourier transform on the torus $\bb T$ and on $\bb R$, plus a trick involving a dyadic partition of unity.
We start by analysing the term \begin{align*}
a_{X}(x) - \tilde{a}_{\bar{X}}(x) &=
\frac{1}{2\pi} \int_{-\pi}^{\pi}
\left( \frac{1}{ 1-\phi_X(\theta)}- \frac{1}{ \mu_\alpha |\theta|^\alpha}\right)
(\cos (\theta x) - 1) d\theta
\\ &=
\frac{1}{2\pi} \int_{-\pi}^{\pi}
\frac{h_X(\theta)}{ \mu_\alpha |\theta|^\alpha (1- \phi_X(\theta))}
(\cos (\theta x) - 1) d\theta, \end{align*}
where
\begin{equation*}
h_X(\theta) :=
\phi_X(\theta)-\left( 1-\mu_\alpha |\theta|^\alpha \right)
= \mc O (|\theta|^{2+\alpha}) \end{equation*}
since $p_X(\cdot)$ is repaired.
It is important to notice that $h_X(\theta)$ is in $C^{1,\alpha-1-}(\bb T)$ due to Lemma \ref{lemma-app-phi-smooth} and the continuity of $\theta \mapsto 1-\mu_\alpha |\theta|^{\alpha}$. Denote $ \tilde{h}_X(\theta) := \frac{h_X(\theta)}{ \mu_\alpha |\theta|^\alpha (1- \phi_X(\theta))}$, which is in $L^1(\bb T)$, as $1-\phi_X(\theta)\neq 0$ for all $\theta \in \bb T \setminus\left\{ 0 \right\}$, again due to the fact that $X$ is supported in $\bb Z$.
Hence, we write for $A$
\begin{align*}
a_{X}(x) - \tilde{a}_{\bar{X}}(x)
&=
-\frac{1}{2\pi} \int_{-\pi}^{\pi}
\tilde{h}_X(\theta) d\theta
+
\underbrace{\frac{1}{2\pi} \int_{-\pi}^{\pi}
\tilde{h}_X(\theta) \cos (\theta x)
d\theta}_{I(x)}. \end{align*}
The first integral in the r.h.s.~is finite and does not depend on $x$.
We will show that the second integral on the r.h.s.~above is of order $\mathcal{O}(|x|^{ \frac{\alpha-2}{3+\varepsilon}})$.
This estimate is based on the fact that such integrals are Fourier coefficients of a function in $C^{0, \frac{2-\alpha}{3+\varepsilon}}(\bb T)$ for some $\varepsilon>0$ small enough.
We write
\[
f_1(\theta) :=
\frac{h_X(\theta)}{|\theta|^{2\alpha}} \]
and
\[
f_2(\theta):=
\frac{|\theta|^{\alpha}\left( \mu_\alpha |\theta|^\alpha - h_X(\theta) \right)}{|\theta|^{2\alpha}}=
\mu_{\alpha} - \frac{h_X(\theta)}{|\theta|^{\alpha}}. \]
Now, we use Lemma \ref{lem:app-holder-quocient} to determine the degree of H\"older continuity of $f_1(\cdot)$ and $f_2(\cdot)$.
For $f_1(\cdot)$ we can choose $\beta = \alpha-1-\varepsilon$ for any $\varepsilon \in (0,\alpha-1)$, $\beta_0=2+\alpha$ and $\beta_1=2\alpha$ to obtain that $f_{1}(\cdot)$ is H\"older continuous with exponent $\alpha_1 = \frac{2-\alpha}{3+\varepsilon}$ for $\alpha > 1$. For $f_2(\cdot)$, we can choose $\beta = \alpha-1-\varepsilon$, $\beta_0=2+\alpha$ and $\beta_1=\alpha$, which yields an exponent $\alpha_2=\frac{2}{3 +\varepsilon}$. Since $f_2(\cdot)$ is bounded away from 0 we have that the reciprocal $1/f_2(\cdot)$ is H\"older continuous of order $\alpha_2$ as well.
Therefore the product $f_1(\cdot) \cdot \frac{1}{f_2(\cdot)}$ is H\"older continuous of order $\alpha_1 \wedge \alpha_2=\alpha_1$.
This implies that $I(x) = \mc{O} (|x|^{-\alpha_1})$, see \cite[Theorem 3.3.9]{grafakos2008classical}.
For the second part of the proof, we estimate the term $B=a_{\bar{X}}(x) - \tilde{a}_{\bar{X}}(x)$. To do so, let $\varphi \in C^\infty(\bb{R})$ be a symmetric cutoff function such that $\varphi \equiv 1$ in $\bb R\setminus \left[-\pi+\eta,\pi-\eta \right]$ for some arbitrarily small $\eta>0$, and such that $\varphi \equiv 0$ in $\left[ -\pi+2\eta,\pi-2\eta \right]$. We then have
\begin{align*}
2 \pi\mu_\alpha
\left[
a_{\bar{X}}(x) - \tilde{a}_{\bar{X}}(x)
\right]
&=
\int_{\bb R \setminus \bb T} \frac{1}{|\theta|^\alpha}
(\cos (\theta x) - 1) d\theta \\ & =
-\underbrace{\int_{\bb R \setminus [-\pi,\pi]} \frac{1}{|\theta|^\alpha} d\theta}_{\frac{2\pi^{1-\alpha}}{\alpha-1}}
+
\underbrace{\int_{\bb R} \varphi(\theta) \frac{1}{|\theta|^\alpha}
\cos (\theta x) d\theta}_{J_1(x)} \\ & \phantom{=}
+ \underbrace{\int_{\bb R} \left[ \1_{\{|\theta|>\pi\}}-\varphi(\theta) \right] \frac{1}{|\theta|^\alpha}
\cos (\theta x) d\theta}_{J_2(x)}. \end{align*}
The constant $-\frac{\pi^{1-\alpha}}{\pi \mu_{\alpha}(\alpha-1)}$ gives the second contribution to $C_0$. We write $J_1(x)= \mc F\left(
\frac{\varphi(\cdot)}{|\cdot|^\alpha} \right)(x)$.
In order to analyse $J_1(x)$ we need to use a dyadic partition of unity to show that this term decays faster than any polynomial.
Let $\psi_{-1},\psi_0$ be two radial functions such that $\psi_{-1} \in C_c^{\infty}(B_{\pi}(0))$ and $\psi_0 \in C_c^{\infty}(B_{2\pi}(0) \setminus B_{\pi}(0))$. These satisfy
\begin{equation}
\label{eq:dy-part}
1 \equiv \psi_{-1}(\theta)
+ \sum_{j=0}^\infty \underbrace{\psi_0(2^{-j}\theta)}_{=:\psi_j(\theta)}. \end{equation} Such functions exist by Proposition 2.10 in \cite{Chemin}; this is an application of Littlewood-Paley theory.
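For concreteness, we sketch one standard construction (with the radii adjusted as needed): take a radial $\chi \in C_c^{\infty}(B_{2\pi}(0))$ with $\chi \equiv 1$ on $B_{\pi}(0)$, and set $\psi_{-1}:=\chi$ and $\psi_0(\theta):=\chi(\theta/2)-\chi(\theta)$. The sum in \eqref{eq:dy-part} then telescopes:
\[
\psi_{-1}(\theta) + \sum_{j=0}^{J} \psi_0(2^{-j}\theta) = \chi(2^{-J-1}\theta) \longrightarrow 1
\qquad \text{as } J \to \infty,
\]
for every fixed $\theta$.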
Define
\[
\varrho(\theta):=\frac{\varphi(\theta)}{|\theta|^\alpha}\psi_{-1}(\theta) \qquad\text{and}\qquad
\nu(\theta):=\frac{\varphi(\theta)}{|\theta|^\alpha}\psi_0(\theta)
\equiv \frac{1}{|\theta|^\alpha}\psi_0(\theta), \]
where, in the identity, we used that $\varphi \equiv 1$ on $\supp (\psi_0)$. We have that both $\varrho,\nu\in C^\infty_c (\bb R)$ and therefore their Fourier transforms decay faster than any polynomial, that is, for any $N>1$, we have that
\begin{equation}
\label{eq:fast-decay}
\mc F(\nu)(x),\mc F(\varrho)(x) = \mc O(|x|^{-N}). \end{equation}
The fact that we can exchange the infinite sum with the Fourier transform is a result of the dominated convergence theorem.
Multiply both sides of \eqref{eq:dy-part} by
$\varphi(\theta)/|\theta|^{\alpha}$, compute $\mc F$, and use the scaling property of the Fourier transform to get
\begin{equation}
\label{eq:exp-J1}
J_1(x) = \mc F (\varrho)(x)+ \sum_{j=0}^\infty 2^{(1-\alpha)j }\mc F (\nu)(2^jx). \end{equation}
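To verify \eqref{eq:exp-J1}, note that $\varphi \equiv 1$ on $\supp(\psi_j)$ for every $j\ge 0$, so that
\[
\frac{\varphi(\theta)}{|\theta|^\alpha}\,\psi_0(2^{-j}\theta)
= 2^{-j\alpha}\, \nu(2^{-j}\theta)
\qquad\text{and}\qquad
\mc F\big(\nu(2^{-j}\cdot)\big)(x) = 2^{j}\,\mc F(\nu)(2^{j}x),
\]
which together produce the factor $2^{(1-\alpha)j}$.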
By using \eqref{eq:fast-decay} and \eqref{eq:exp-J1}, we get that $J_1(x)=\mc O(|x|^{-N})$ for all $N\ge 1$. Finally we estimate $J_2(x)$
\begin{align*}
J_2(x) &=
\int_{-\pi}^\pi
\left[ \1_{[|\theta|>\pi]}-\varphi(\theta) \right] \frac{1}{|\theta|^\alpha}
\cos (\theta x) d\theta \\&=
- \int_{-\pi}^\pi
\varphi(\theta) \frac{1}{|\theta|^\alpha}
\cos (\theta x) d\theta \end{align*}
where we used that $\varphi\equiv 1$ for $|\theta|>\pi$. We can write $J_2(x)=\mc F_{\bb T}(g)(x)$ with $g(\theta):=-\varphi(\theta)|\theta|^{-\alpha}$. Notice that $g$ is $C^{0,1}(\bb T)$, and therefore $J_2(x)$ decays as $\mc O(|x|^{-1})$, which is faster than $\mc O(|x|^{\frac{\alpha-2}{3+\varepsilon}})$ because $\alpha\in (1,2)$. This concludes the proof of the second part. Note that alternatively we could have interpreted the integral $a_{\bar{X}}(\cdot) - \tilde{a}_{\bar{X}}(\cdot)$ as a generalised hypergeometric function and studied its series expansion, which is more implicit. We preferred this more explicit approach as it seems more feasible to generalise to higher dimensions.
\noindent \textbf{Case (ii) $p_X(\cdot)$ locally or asymptotically repairable}\\ Here we follow a similar idea as in case (i). Write again
\begin{equation*}
a_X(x) =
\left( a_X(x)- \tilde{a}_{\bar{X}}(x) \right)
-\left( a_{\bar{X}}(x) - \tilde{a}_{\bar{X}}(x)\right)
+\tilde{a}_{\bar{X}}(x). \end{equation*}
The last two terms are exactly the same as in the proof of (i).
However, the first term behaves differently due to the presence of $\mu_2 |\theta|^2$. We have that
\begin{equation}
\label{eq:idea-thm-Greengen}
\frac{1}{(1-\phi_X(\theta))}-\frac{1}{\mu_\alpha |\theta|^\alpha}
=
\frac{h_X(\theta)}{\mu_\alpha |\theta|^{\alpha}(1-\phi_X(\theta))}
=
\mc O \left( |\theta|^{2-2\alpha} \right) \end{equation}
as $ |\theta|\rightarrow 0$, which blows up slower than $\mc O(|\theta|^{-\alpha})$ for any $\alpha < 2$. The main idea is to perform a telescopic sum together with expression \eqref{eq:idea-thm-Greengen} until we get a function in $L^1(\bb T)$, which will require exactly $m_\alpha$ iterations.
Note that in this proof we are only interested in characterising the potential kernel up to a constant order; therefore, we will not need to use information on the degree of continuity of a remainder term as in previous proofs. Instead, we will compute the first $m_\alpha$ terms by hand and use that the remainder is in $L^1(\bb T)$, for which an application of the Riemann-Lebesgue Lemma \cite[Proposition 3.3.1]{grafakos2008classical} will be enough.
We have
\begin{align*}
a_X(x)
&
- \tilde{a}_{\bar{X}}(x) =
\frac{1}{2\mu_\alpha \pi}
\int_{-\pi}^{\pi} \frac{h_X(\theta)}{|\theta|^\alpha (1-\phi_X(\theta))}
\left( \cos \left(\theta x \right)-1 \right) d\theta. \end{align*}
For $\alpha < 3/2$ we have that $m_\alpha=0$ and $\tilde{h}_X(\cdot):=\frac{h_X(\cdot)}{|\cdot|^\alpha\left( 1-\phi_X(\cdot) \right)}$ is in $L^1(\bb T)$. Hence,
\begin{align*}
a_X(x)
- \tilde{a}_{\bar{X}}(x)
=
\frac{1}{2\mu_\alpha \pi}\int_{-\pi}^\pi \tilde{h}_X(\theta)
\cos \left(\theta x \right)d\theta
-
\frac{1}{2\mu_\alpha \pi}\int_{-\pi}^\pi \tilde{h}_X(\theta)
d\theta. \end{align*}
The second term on the r.h.s.~is a constant, whereas the first vanishes as $|x|\rightarrow \infty$ as before.
For the case $\alpha \in (\frac{3}{2}, \frac{5}{3})$ the proof is analogous to the proof of (i): we compare the integral to its counterpart with $1-\phi_X(\theta)$ substituted by $\mu_\alpha|\theta|^\alpha$ in the denominator. Notice that we have not yet covered the case $\alpha=3/2$ which is given at the end of the proof. Here we have:
\begin{align*}
a_X(x) - \tilde{a}_{\bar{X}}(x) &:=
\underbrace{\frac{\mu_2}{2(\mu_\alpha)^2 \pi}
\int_{-\pi}^{\pi}
\frac{h_X(\theta)}{|\theta|^{2\alpha}}
\left( \cos \left(\theta x \right) -1 \right) d\theta}_{I(x)} \\ & \phantom{=}+ \underbrace{\frac{1}{2\mu_\alpha\pi}
\int_{-\pi}^{\pi}
\left(
\frac{h_X(\theta)}{|\theta|^\alpha (1-\phi_X(\theta))}
-\frac{\mu_2 h_X(\theta)}{\mu_\alpha|\theta|^{2\alpha}}
\right)
\left( \cos \left(\theta x \right) -1 \right) d\theta}_{R_0(x)}. \end{align*}
The last remainder term $R_0(x)$ is of order $\mc O(1)$ as $|x|\longrightarrow \infty$ for any $\alpha < 2$, again due to the fact that we can interpret it as the Fourier transform of a $L^1(\bb T)$ function.
Since we assumed $\alpha > \frac{3}{2}$,
$\theta\mapsto |\theta|^{2-2\alpha}\left( \cos \left(\theta x \right) -1 \right)$ is in $L^1(\bb R)$ and therefore
\begin{align*}
I(x) & =
|x|^{2\alpha-3} \frac{\mu_2}{2 (\mu_\alpha)^2 \pi} \int_{-\pi x}^{\pi x}
|\theta|^{2-2\alpha} \left( \cos(\theta)-1 \right)d\theta \\ & \phantom{=} +
\frac{\mu_2}{2(\mu_\alpha)^2\pi}
\int_{-\pi}^{\pi}
\frac{h_X(\theta)- |\theta|^2}{|\theta|^{2\alpha}}
\left( \cos \left(\theta x \right) -1 \right) d\theta \\ &=
\underbrace{|x|^{2\alpha-3} \frac{\mu_2}{2 (\mu_\alpha)^2 \pi} \int_{-\infty}^{\infty}
|\theta|^{2-2\alpha} \left( \cos(\theta)-1 \right)d\theta}_{I_1(x)} \\ & \phantom{=} -
\underbrace{|x|^{2\alpha-3}\frac{\mu_2}{2 (\mu_\alpha)^2 \pi}
\int_{\bb R \setminus [-\pi x, \pi x]}
|\theta|^{2-2\alpha} \left( \cos(\theta)-1 \right)d\theta}_{R_{1,1}(x)} \\ & \phantom{=} + \underbrace{\frac{\mu_2}{2(\mu_\alpha)^2\pi}
\int_{-\pi}^{\pi}
\frac{h_X(\theta)-|\theta|^2}{|\theta|^{2\alpha}}
\left( \cos \left(\theta x \right) -1 \right) d\theta}_{R_{1,2}(x)}. \end{align*}
Both terms $R_{1,1},R_{1,2} = \mc O(1)$ as $|x| \longrightarrow \infty$, since
\begin{equation*}
|x|^{2\alpha-3}
\left|
\int_{\bb R \setminus [-\pi x, \pi x]}
|\theta|^{2-2\alpha} \left( \cos(\theta)-1 \right)d\theta
\right|
=
\mc O (1), \end{equation*}
for any $\alpha <2$. More generally, let $\alpha \in (1,2)$ with $2/ (2-\alpha) \not \in \bb N$; we write
\begin{align}
\label{eq:genGreen-exp}
\int_{-\pi}^{\pi} \frac{h_X(\theta)
}{|\theta|^\alpha (1-\phi_X(\theta))}
& \left( \cos \left(\theta x \right)-1 \right) d\theta \\& = \nonumber
\sum_{m=1}^{m_\alpha} \underbrace{\int_{-\pi}^{\pi}
\frac{\mu_2^m}{\mu_\alpha^m} \frac{\left( h_X(\theta) \right)^m}{|\theta|^{(m+1)\alpha}}
\left( \cos \left(\theta x \right)-1 \right) d\theta}_{I_m(x)} \\&\phantom{=}+ \underbrace{\int_{-\pi}^{\pi}
\frac{\mu_2^{m_\alpha+1}}{\mu_\alpha^{m_\alpha+1}}
\frac{\left( h_X(\theta) \right)^{m_\alpha+1}}{|\theta|^{( m_\alpha+1 ) \cdot\alpha}(1-\phi_X(\theta))}
\left( \cos \left(\theta x \right)-1 \right) d\theta}_{R(x)} \nonumber \\& = \nonumber
\sum_{m=1}^{m_\alpha}
I_m(x) +
R(x). \nonumber \end{align}
We chose $m_\alpha = \lceil \frac{\alpha-1}{2-\alpha} \rceil-1$ as the minimal value of $m$ such that \[
\frac{(h_X(\theta))^{m_\alpha +1} }{(1-\phi_X(\theta))|\theta|^{(m_\alpha+1)\alpha}} \in L^1(\bb T). \]
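Indeed, using that $h_X(\theta)=\mc O(|\theta|^2)$ and $1-\phi_X(\theta) \asymp \mu_\alpha|\theta|^\alpha$ near the origin, the function above behaves like $|\theta|^{(m_\alpha+1)(2-\alpha)-\alpha}$ around $0$, which is integrable precisely when $(m_\alpha+1)(2-\alpha)-\alpha>-1$, that is, when $m_\alpha+1>\frac{\alpha-1}{2-\alpha}$. Since $2/(2-\alpha)\notin \bb N$ implies $\frac{\alpha-1}{2-\alpha}\notin \bb N$, the minimal such integer is $\lceil \frac{\alpha-1}{2-\alpha} \rceil$, in accordance with the choice above.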
Analogously as before we argue that $R(x)=\mc O(1)$ as $|x| \longrightarrow \infty$.
Finally, for $m \le m_\alpha$ we have
\[
\frac{h_X^m(\theta)}{\mu_\alpha^m |\theta|^{m \alpha} (1-\phi_X(\theta))}
=
\frac{\mu_2^m}{\mu_\alpha^{m+1}}|\theta|^{m(2-\alpha)-\alpha} +
\mc O \left( |\theta|^{m(2-\alpha)-1} \right), \]
and as $\alpha<2$, we have that ${m(2-\alpha)-1}>-1$, using a change of variable we get
\begin{align*}
I_m(x) &=
\frac{\mu_2^m}{\mu_{\alpha}^{m}}
\int_{-\pi}^\pi
|\theta|^{m(2-\alpha)-\alpha} \left( \cos(\theta x)-1 \right)
d\theta
+
\mc O(1) \\&=
|x|^{(\alpha-1) - m(2-\alpha)}
\frac{\mu_2^m}{\mu_{\alpha}^{m}}
\int_{-\infty}^\infty
|\theta|^{m(2-\alpha)-\alpha} \left( \cos(\theta )-1 \right)
d\theta \\ & \phantom{=}-
|x|^{(\alpha-1) - m(2-\alpha)}
\frac{\mu_2^m}{\mu_{\alpha}^{m}}
\int_{ \bb R \setminus [-\pi |x|, \pi |x|]}
|\theta|^{m(2-\alpha)-\alpha} \left( \cos(\theta )-1 \right)
d\theta
+
\mc O(1). \end{align*}
where the first integral in the second line is finite because $m \le m_\alpha < \frac{\alpha-1}{2-\alpha}$. Again, notice that the last integral is of order $\mc O(1)$ as $|x|\longrightarrow \infty$.
Finally, if $2/ (2-\alpha) \in \bb N$ (which includes the $\alpha=3/2$ case), we have that \[
\frac{(h_X(\theta))^{m_\alpha +1} }{(1-\phi_X(\theta))|\theta|^{(m_\alpha+1)\alpha}} -
\frac{\mu_2^{m_\alpha+1}}{\mu_\alpha^{m_\alpha+2}|\theta|} \in L^{1}(\bb T). \] Now, we proceed like before, but also taking into account the contribution of the integral \[
\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\cos (x\theta)-1}{|\theta|} d\theta
=
\frac{1}{\pi}\int_{0}^{\pi |x|} \frac{\cos (\theta)-1}{\theta} d\theta \] and using Lemma \ref{lem:app-int}. This concludes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:green-1}] We will only prove the repaired case, as the other case is just an adaptation of the arguments of Theorem \ref{thm:Green} case \textit{(ii)} together with the considerations we will present here.
Instead of comparing $a_X(\cdot)$ with both $a_{\bar{X}}(\cdot)$ and $\tilde{a}_{\bar{X}}(\cdot)$, we will only compare $a_X(\cdot)$ and $\tilde{a}_{\bar{X}}(\cdot)$. That is, we have
\[
a_{X}(x) := \frac{1}{2\pi} \int_{-\pi}^\pi \frac{1}{1-\phi_X(\theta)} (\cos(\theta x)-1)d\theta \]
and we define
\[
\tilde{a}_{\bar{X}}(x) := \frac{1}{2\pi} \int_{-\pi}^\pi \frac{1}{\mu_1|\theta|} (\cos(\theta x)-1)d\theta. \]
Write now
\[
a_{X}(x) :=\tilde{a}_{\bar{X}}(x)+\left(a_{X}(x)-\tilde{a}_{\bar{X}}(x)\right). \]
To evaluate the second term, we use a very similar approach to the one in the proof of Theorem \ref{thm:Green}. Using the second part of the statement of Lemma \ref{lem:app-holder-quocient}, we get that
$g:\theta \mapsto \frac{1}{\mu_1|\theta|} - \frac{1}{1-\phi_X(\theta)}$ is in $C^{0,1/3-}(\bb T)$. Indeed, by writing
\begin{equation*}
g(\theta)
=
\frac{f_1(\theta)}{\mu_1-f_2(\theta)} \end{equation*}
where $f_1(\theta):= h_X(\theta)/|\theta|^2$ and $f_2(\theta)=h_X(\theta)/|\theta|$ and applying the second statement of Lemma~\ref{lem:app-holder-quocient} for $f_1$ and $f_2$, we get that $f_1 \in C^{0,1/3-}(\bb T)$ and $f_2 \in C^{0,2/3-}(\bb T)$; by taking the minimum of the regularities, we recover the desired result.
It remains to evaluate $\tilde{a}_{\bar{X}}(x)$. Note that
\[
\tilde{a}_{\bar{X}}(x) = \frac{1}{\pi \mu_1} \int_0^{\pi |x|} \frac{\cos(\theta)-1}{\theta} d\theta. \]
Again, using Lemma \ref{lem:app-int}, we conclude the result. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:green-less1}] This proof is similar to the one of Theorem~\ref{thm:Green}. By writing
\begin{equation*}
g_X(x)=\frac{1}{2\pi} \int_{\bb T} \frac{1}{1-\phi_X(\theta)} \cos(\theta x)d\theta \end{equation*}
and comparing it to
\begin{equation*}
g_{\bar{X}}(x)=\frac{1}{2\pi} \int_{\bb R} \frac{1}{\mu_\alpha |\theta|^\alpha} \cos(\theta x)d\theta
\quad
\text{ and }
\quad
\tilde{g}_{\bar{X}}(x)=\frac{1}{2\pi} \int_{-\pi}^\pi \frac{1}{\mu_\alpha |\theta|^\alpha} \cos(\theta x)d\theta, \end{equation*}
we can obtain the error bound by using the second statement of Lemma~\ref{lem:app-holder-quocient}. \end{proof}
\section{Fluctuations of Gaussian Fields driven by admissible random walks} \label{sec:second-order-conv}
\subsection{Proof of Theorem~\ref{thm:converge-of-fields-elip}}\label{subsec:elliptic-case}
Before we proceed to the proof, we define the required coupling. To do so, we define $\{\xi^m(x)\}_{m \in \bb N, x \in \bb T_m}$ by taking
\begin{equation}\label{def:discrete-wnoise}
\xi^m(x):=\frac{m^{1/2}}{2\pi} \langle\xi, \1_{B_{2\pi/m}(x)}\rangle, \end{equation}
where $\xi$ is the same realisation of the white-noise used in the definition of $\Xi_\alpha$. This allows us to define $\Xi^m - \Xi_\alpha$.
It is easy to show that $\{{\bf e}_k^m\}_{k \in \bb Z_m}$ forms an orthonormal basis of eigenfunctions of $\mc L^m$, where we remember that ${\bf e}_k^m = {\bf e}_k \mid_{\bb T_m}$ with ${\bf e}_k:= \exp (i k \cdot x)$, $x\in \bb T_m$. Moreover, simple computations show that the eigenvalue of $\mc L^m$ associated to ${\bf e}^m_k$ is precisely given by
\begin{equation}\label{eq:eigenvalues}
-\lambda^{m}_{k} = 1-\phi_X\left(\frac{k}{m}\right) \end{equation}
for each $k \in \bb Z$. Consider the Green's function $g_m(\cdot,\cdot)$ associated to the generator on the torus $\bb T_m$, i.e., the solution of the equation \begin{equation*}
\begin{cases}
\left( \mc L^m g_{m}(x,\cdot)\right)(y) = \delta_{x}(y) & \text{ if } y\in \bb T_m, \\
\sum_{y \in \bb T_m}g_{m}(x,y)=0.
\end{cases} \end{equation*} Simple computations show that
\begin{equation*}
g_{m}(x,y)=\frac{1}{2\pi m}\sum_{k \in \bb Z_m\setminus \{0\}}
\frac{{\bf e}^{m}_k(x) \overline{{\bf e}^{m}_k(y)}}{\lambda^{m}_k}. \end{equation*}
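As a brief sanity check of \eqref{eq:eigenvalues} (a sketch, assuming, as in the definition of $\mc L^m$, that $(\mc L^m f)(x)=\sum_{z \in \bb Z} p_X(z)\big(f(x+\tfrac{z}{m})-f(x)\big)$): we have
\[
(\mc L^m {\bf e}^m_k)(x)
= {\bf e}^m_k(x) \sum_{z \in \bb Z} p_X(z)\big(e^{i k z/m}-1\big)
= -\Big(1-\phi_X\Big(\frac{k}{m}\Big)\Big)\, {\bf e}^m_k(x).
\]
The formula for $g_{m}$ then follows by expanding $\delta_x$ minus its mean in this eigenbasis, the prefactor $\frac{1}{2\pi m}$ coming from the normalisation of $\{{\bf e}^m_k\}$.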
We can explicitly write $\Xi^m(x)$ as
\begin{equation*}
\Xi^m(x) = g_{m} * (\xi^m - \langle \xi^m, 1\rangle)(x) \end{equation*}
where $*$ denotes the usual convolution. Likewise, we can write $\Xi_{\alpha}$ as a convolution (in the distributional sense) as
\begin{equation*}
\Xi_{\alpha}(x) = g_\alpha * (\xi - \langle \xi, 1\rangle)(x) \end{equation*}
where $g_\alpha$ is the Green function associated to $-(-\Delta)_{\bb T}^{\alpha/2}$, which is given by
\begin{equation}\label{eq:cont-green}
g_{\alpha}(x,y)=\frac{1}{2\pi}\sum_{k \in \bb Z\setminus \{0\}}
\frac{{\bf e}_k(x) \overline{{\bf e}_k(y)}}{|k|^{\alpha}}. \end{equation}
\begin{proof}[Proof of Theorem~\ref{thm:converge-of-fields-elip}] The proof uses the coupling of the white noise and the explicit Green's function identities provided above. Define
\begin{equation*}
\Xi^m_{\alpha,\beta,Err}:=
m^{\beta-\alpha} \left(\Xi^m-\frac{1}{\mu_\alpha}\Xi_{\alpha}\right)
- \frac{\mu_\beta}{(\mu_\alpha)^2}\Xi_{2\alpha-\beta}. \end{equation*}
We analyse each Fourier coefficient of this field and write
\begin{equation}\label{eq:field-error-split} \langle \Xi^m_{\alpha,\beta,Err}, {\bf e}_k \rangle = \underbrace{ m^{\beta-\alpha} \langle\xi, {\bf e}_k\rangle
\left(
\frac{1}{m^\alpha \lambda^{m}_k}
-
\frac{1}{\mu_\alpha |k|^\alpha}
-
m^{\alpha-\beta}\frac{\mu_\beta}{(\mu_\alpha)^2 |k|^{2\alpha-\beta}}
\right)}_{A_1(m,k)} + \underbrace{ m^{\beta-\alpha}\frac{ \langle\xi,\tilde{{\bf e}}^m_k-{\bf e}_k\rangle }{m^\alpha \lambda^{m}_k},}_{A_2(m,k)} \end{equation}
where $\tilde{{\bf e}}^m_k=\sum_{z \in \bb T_m}{\bf e}_k(z)\1_{B_{2\pi/m}(z)}$. We can show that
\begin{align}\label{eq:A1-fields}
\nonumber
\bb E
\left(|A_1(m,k)|^2\right)
& =
m^{2\beta-2\alpha} \left(
\frac{m^\alpha h_X\left(\frac{k}{m}\right)}{
\mu_\alpha |k|^\alpha\left(\mu_\alpha |k|^\alpha + m^\alpha h_X\left(\frac{k}{m}\right)\right)}
-
\frac{\mu_\beta m^{\alpha-\beta}}{(\mu_\alpha)^2 |k|^{2\alpha-\beta}} \right)^2
\\& = \nonumber
|k|^{2\beta-4\alpha} \left(
\mc O\left(\frac{|k|^{\gamma-\beta}}{m^{\gamma-\beta}}\right) +
\mc O\left(\frac{|k|^{\beta-\alpha}}{m^{\beta-\alpha}}\right) \right)^2 \\& =
\mc O\left(\frac{|k|^{2\delta_1 + 2\beta - 4\alpha}}{m^{2\delta_2}}\right)
\end{align}
for all $k \in \bb Z_m$, with $\delta_1:=(\gamma-\beta)\vee(\beta-\alpha)$ and $\delta_2:=(\gamma-\beta)\wedge(\beta-\alpha)$.
On the other hand, we have that
\begin{equation}\label{eq:A2-fields}
\bb E (|A_2(m,k)|^2)
\le C
m^{2(\beta-\alpha-1)}
|k|^{2-2\alpha}. \end{equation}
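A minimal sketch of \eqref{eq:A2-fields}, assuming the usual $L^2$-isometry normalisation of the white noise: $\bb E\big(|\langle \xi, \tilde{{\bf e}}^m_k-{\bf e}_k\rangle|^2\big) = \|\tilde{{\bf e}}^m_k-{\bf e}_k\|^2_{L^2} \lesssim (|k|/m)^2$, since $|{\bf e}_k(y)-{\bf e}_k(z)| \le |k||y-z| \lesssim |k|/m$ on each block $B_{2\pi/m}(z)$, while $m^\alpha \lambda^m_k \gtrsim |k|^\alpha$ uniformly for $k \in \bb Z_m$; combining the two estimates gives \eqref{eq:A2-fields} up to constants.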
Therefore, we can see that
\begin{align*} &
\bb E \left(\|\Xi^m_{\alpha,\beta,Err}\|^2_{H^{-s}}\right) \\& =
\sum_{k \in \bb Z \setminus \{0\}}
\bb E(|\langle \Xi^m_{\alpha,\beta,Err}, {\bf e}_k \rangle|^2)
|k|^{-4s} \\& =
\sum_{k \in \bb Z_m \setminus \{0\}}
\bb E(|\langle \Xi^m_{\alpha,\beta,Err}, {\bf e}_k \rangle|^2)
|k|^{-4s}
+
\sum_{k \in \bb Z \setminus \bb Z_m}
\bb E|\langle \Xi^m_{\alpha,\beta,Err}, {\bf e}_k \rangle|^2
|k|^{-4s}
\\& \le
Cm^{-2\delta_2}
+
Cm^{2(\beta-\alpha) -1}
+
C
m^{2\beta-4\alpha-4s+1}
+
C
m^{2\beta-4\alpha-4s+1} \end{align*}
which goes to $0$ as long as $s > s_0$ and $\beta < \alpha +1$. Applying Chebyshev's inequality, we recover the convergence in probability. \end{proof}
\subsection{Proof of Theorem~\ref{thm:converge-of-fields-parab}} \label{subsec:parabolic-case}
Again, we need to make a few observations before giving the proof. By interpreting $\zeta^m$ as a map $t \mapsto \zeta^m(t,\cdot) \in \ell^2(\bb T_m)$, we can look at $\hat{\zeta}^{m}(t,k)$, the $k$-th Fourier coefficient of $\zeta^m(t,\cdot)$. Notice that due to linearity
\begin{align*}
d\hat{\zeta}^m(t,k) & :=
\frac{1}{m}
\sum_{x \in \bb T_m}
d\zeta^m(t,x)\, {\bf e}^m_{-k}(x) \\& =
\frac{1}{m}
\left(
\sum_{x \in \bb T_m}
m^\alpha\mc L^{m} \zeta^m(t,x) {\bf e}^m_{-k}(x) \right)dt
+
\frac{1}{m}
\sum_{x \in \bb T_m}
{\bf e}^m_{-k}(x)d\xi^m(t,x) \\& =
-m^\alpha \lambda^m_k\hat{\zeta}^m(t,k)dt
+
d\hat{\xi}^m(t,k) \end{align*}
where $\{\hat{\xi}^m(\cdot,k)\}_{k \in \bb Z_m}$ is also a collection of i.i.d. Brownian motions. Notice that $\hat{Z}^m(t,k) = \hat{\zeta}^m (t,k)$. Then we use It\^o's formula to get that the term $\hat{Z}^m(t,k)$ can be written as
\begin{align*}
\hat{Z}^m(t,k) := \hat{Z}^m_0(k)e^{-m^\alpha\lambda_k^m t} +
\int_{0}^t e^{-m^\alpha\lambda_k^m (t-s)}
\hat{\xi}^m(ds,k). \end{align*}
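This is the standard variation-of-constants representation; a quick verification: applying It\^o's formula to $s \mapsto e^{m^\alpha \lambda^m_k s}\, \hat{\zeta}^m(s,k)$ gives
\[
d\big( e^{m^\alpha \lambda^m_k s}\, \hat{\zeta}^m(s,k) \big)
= e^{m^\alpha \lambda^m_k s}\, d\hat{\xi}^m(s,k),
\]
and integrating over $[0,t]$ yields the formula above.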
Now, we construct a coupling between the continuous and discrete versions by taking
\begin{equation*}
\xi^m(t,x):= m^{1/2} \langle \xi(t,\cdot), \1_{B_{2\pi/m}(x)} \rangle \end{equation*}
where we are abusing the notation as $\xi^m(t,x)$ is not a function in $t$ (but rather a distribution). We can write
\begin{align*}
\hat{Z}^m(t,k) := m^{1/2}\hat{Z}^m_0(k)e^{-m^\alpha\lambda_k^m t} +
m^{1/2}\xi ( f_{m,k}(t,\cdot,\cdot) ) \end{align*}
where $f_{m,k}(t,s,y):= \1_{[0,t]}(s)e^{-m^\alpha \lambda_k^m (t-s)} \tilde{{\bf e}}^m_k(y)$ and $\tilde{{\bf e}}^m_k$ is the same as in \eqref{eq:field-error-split}.
By running a similar argument as the elliptic one, we have that
\begin{equation}\label{eq:conv-parabollic-field}
m^{\beta-\alpha}\left(\frac{Z^{m}}{m^{1/2}} - Z_\alpha\right)- Z_{Err} \longrightarrow 0 \text{ in probability } \end{equation}
where the convergence happens in $L^2([0,T],H^{-s})$ for any $s >\max\{2\beta-\alpha,\gamma-\alpha\}$ and any $T>0$, and where $Z_{Err}$ is characterised by its Fourier coefficients as
\begin{align*}
\hat{Z}_{Err}(t,k):=
-\mu_\beta|k|^\beta\hat{Z}_0(k)e^{-\mu_\alpha |k|^\alpha t}t
- \mu_\beta\int_{0}^t e^{-\mu_\alpha |k|^\alpha (t-s)} |k|^\beta (t-s)\hat{\xi}(ds,k). \end{align*}
\begin{proof}[Proof of Theorem~\ref{thm:converge-of-fields-parab}] Due to the linearity of the problem, we can deal with the initial condition separately, an analysis that follows similarly to the one of the forcing term. Therefore, we assume that $Z_0 \equiv 0 $. Again, we examine each Fourier mode independently and write
\begin{align*}
\frac{\hat{Z}^m(t,k)}{m^{1/2}}:= \xi \left( \1_{[0,t]}(\cdot)e^{-m^\alpha \lambda_k^m (t-\cdot)} {\bf e}_k(\cdot)\right)
+ \xi \left( \1_{[0,t]}(\cdot)e^{-m^\alpha \lambda_k^m (t-\cdot)} (\tilde{{\bf e}}^m_k(\cdot)-{\bf e}_k(\cdot))\right). \end{align*}
From this we examine
\begin{align*}
m^{\beta-\alpha}&\left(\frac{\hat{Z}^{m}(t,k)}{m^{1/2}} - \hat{Z}_\alpha(t,k)\right)- \hat{Z}_{Err}(t,k)
= \\ &
\xi \left( \1_{[0,t]}(\cdot)
\left(m^{\beta-\alpha}\left(e^{-m^\alpha \lambda_k^m (t-\cdot)}
-
e^{-\mu_\alpha|k|^\alpha (t-\cdot)}\right)
+
\mu_\beta |k|^\beta e^{-\mu_\alpha|k|^\alpha (t-\cdot)}\right)(t-\cdot)
{\bf e}_k(\cdot)\right) \\ & \phantom{=}
+m^{\beta-\alpha} \xi \left( \1_{[0,t]}(\cdot)e^{-m^\alpha \lambda_k^m (t-\cdot)} (\tilde{{\bf e}}^m_k(\cdot)-{\bf e}_k(\cdot))\right) \\& = A_m(t,k) + B_m(t,k). \end{align*}
Now fix $T>0$; we have the following bound on the second moment of the second term:
\begin{align*}
\int_{0}^T \bb E ( (B_m(t,k))^2 ) dt & =
m^{2(\beta-\alpha)} \int_{0}^T \frac{1-e^{-2 m^\alpha \lambda^m_k t}}{2 m^\alpha \lambda^m_k } \|\tilde{{\bf e}}^m_k-{\bf e}_k\|^2_{L^2} dt \\& \lesssim
T m^{2(\beta-\alpha-1)} |k|^{2-\alpha}. \end{align*}
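The equality in the first line above is the It\^o isometry (a minimal sketch, for each fixed $t$ and $k$): since the integrand is deterministic,
\[
\bb E\Bigg[\Big(\int_0^t e^{-m^\alpha\lambda^m_k (t-s)}\,\hat{\xi}^m(ds,k)\Big)^2\Bigg]
= \int_0^t e^{-2 m^\alpha\lambda^m_k (t-s)}\,ds
= \frac{1-e^{-2 m^\alpha \lambda^m_k t}}{2 m^\alpha \lambda^m_k },
\]
and the factor $m^{2(\beta-\alpha)}\|\tilde{{\bf e}}^m_k-{\bf e}_k\|^2_{L^2}$ comes from the definition of $B_m(t,k)$.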
As for the first term, using Taylor expansion we have that
\begin{align*}
& \int_{0}^T \bb E (A_m(t,k)^2 ) dt
=
\int_{0}^T
\int_{0}^t
e^{-2\mu_\alpha |k|^\alpha(t-s)}
\left(
m^{\beta-\alpha}
\left( e^{(-m^\alpha \lambda_k^m + \mu_\alpha |k|^\alpha)(t-s)}
- 1\right)
+
\mu_\beta |k|^\beta (t-s)
\right)^2
ds
dt
\\& =
\int_{0}^T
\int_{0}^t
e^{-2\mu_\alpha |k|^\alpha(t-s)}
\left(
\mc O \left(m^{\beta-\gamma}|k|^\gamma(t-s)\right)
+
\mc O \left(m^{\alpha- \beta}|k|^{2\beta}
(t-s)^2\right)
\right)^2
ds
dt \\& \lesssim
(1\vee T^{3}) |k|^{s_1-\alpha} m^{-s_2} \end{align*}
where $s_1 =\max \{\gamma, 2\beta\} $ and $s_2:=\max\{\gamma-\beta,\beta-\alpha\}$; the proof now follows from another application of Chebyshev's inequality and the triangle inequality.
\end{proof}
\appendix
\section{Evaluation of some special integrals}
\begin{lemma}\label{lem:ap-R-alpha+} For $\alpha \in (0,2)\setminus \{1\}$, let $R^\infty_{\alpha,+}$ be defined as in \eqref{def:r-infity+}. Then, there exist real constants $K_1, K_2, K_3$ such that
\begin{equation} \label{asymp-of-R+}
R^\infty_{\alpha,+}(\theta) =
\sum_{k=1}^3 i^k K_k \theta^k
+ \mc O(|\theta|^{2+\alpha}). \end{equation}
\end{lemma}
The proof of this lemma is very similar to that of the next one, which refers to the symmetric case. However, in the symmetric case, we need to be more careful, as we will be interested in showing that the distribution $p_\alpha(\cdot)$ is not only admissible, but repairable. That means we will need to control the signs of certain constants. To avoid repeating ourselves, we will present only the proof of Lemma~\ref{lem:ap-R-alpha}.
\begin{lemma}\label{lem:ap-R-alpha} For $\alpha \in (0,2)\setminus \{1\}$, we have that $R^\infty_\alpha$ defined in \eqref{def:r-infity} satisfies
\begin{equation} \label{asymp-of-R}
R^\infty_\alpha(\theta) = K_2|\theta|^2
+\mc O(|\theta|^{2+\alpha}) \end{equation}
where
\begin{align} \label{eq:asymp-of-R-2}
K_2
=
\frac{1-\alpha}{2}
\Bigg(
&\left( \frac{2^{2-\alpha}-1}{2-\alpha}-\frac{3(2^{1-\alpha}-1)}{2(1-\alpha)}\right)
\\ &+
\frac{1}{2\Gamma(\alpha)}
\sum_{m=1}^\infty (-1)^m (\zeta(m+\alpha)-1) \frac{m \Gamma(m+\alpha)}{\Gamma(m+2)(m+2)}
\Bigg).
\nonumber \end{align}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:ap-R-alpha}] Recall that, for $\theta >0$,
\begin{align*}
R^\infty_\alpha = \theta^{1+\alpha}
\int_{\theta}^\infty
\Big(\frac{z \sin(z) - (1+\alpha)(1-\cos(z))}{z^{2+\alpha}}\Big)
P_1\Big(\frac{z}{\theta}\Big) dz \end{align*}
and $P_1(x)=\left(x - \lfloor x \rfloor \right)- \frac{1}{2}$. Note that this integral is finite. Indeed, one can prove this by observing that $|P_1(z)|\le \frac{1}{2}$. We shall now divide the integral in $R_\alpha^\infty$ into two parts, one going from $\theta$ to $1$ and the other from $1$ to $\infty$, as we will use different techniques to bound them.
\begin{align*}
R_\alpha^\infty &= \underbrace{\theta^{1+\alpha}
\int_{\theta}^1
\frac{z \sin(z) - (1+\alpha)(1-\cos(z))}{z^{2+\alpha}}
P_1\Big(\frac{z}{\theta}\Big) dz}_{I_1}
\\ &\phantom{=}+ \underbrace{\theta^{1+\alpha}
\int_{1}^\infty
\frac{z \sin(z) - (1+\alpha)(1-\cos(z))}{z^{2+\alpha}}
P_1\Big(\frac{z}{\theta}\Big) dz}_{I_2}. \end{align*}
We start by analysing $I_2$ and proving that $I_2 = \mc O(|\theta|^{2+\alpha})$,
\begin{align*}
I_2 &=
\theta^{1+\alpha}\int_{1}^\infty
\frac{z \sin(z)- (1+\alpha)(1-\cos(z))}{z^{2+\alpha}}
P_1\Big(\frac{z}{\theta}\Big) dz. \end{align*}
For convenience, we assume that $\theta^{-1}\in \bb N$. To treat the general case we need to compare the expressions for $\theta^{-1}$ and $\lfloor\theta^{-1}\rfloor$.
In this case, we can write the integral above as
\begin{align*}
I_2 &= \theta^{1+\alpha} \sum_{k=1/\theta}^\infty \int_{k\theta}^{(k+1)\theta}
g(z)\left(\frac{z}{\theta}-k-\frac{1}{2} \right)dz, \end{align*}
where $g(z):=\frac{z \sin(z)- (1+\alpha)(1-\cos(z))}{z^{2+\alpha}}$. Now, we will use that $\int_{k\theta}^{(k+1)\theta} P_1\left(\frac{z}{\theta}\right)dz =0$ and add and subtract the term $g(k\theta)$ in each summand. Hence
\begin{align*}
|I_2| &=
|\theta|^{1+\alpha}
\left |
\sum_{k=1/\theta}^\infty \int_{k\theta}^{(k+1)\theta}
(g(z)-g(k\theta))\left(\frac{z}{\theta}-k-\frac{1}{2} \right)dz
\right| \\ & \le
|\theta|^{1+\alpha}
\sum_{k=1/\theta}^\infty \sup_{y \in [k\theta,(k+1)\theta]} |g^\prime(y)|
\int_{k\theta}^{(k+1)\theta}
|z-k\theta|\left|\frac{z}{\theta}-k-\frac{1}{2} \right|dz \\ & \le
\frac{1}{4} |\theta|^{3+\alpha}\sum_{k=1/\theta}^\infty \sup_{y \in [k\theta,(k+1)\theta]} |g^\prime(y)|, \end{align*}
where we used in the second inequality both a change of variables and that $|z-k\theta|\le \theta$.
For $z>0$, we have
\[
g^{\prime}(z)=\frac{\cos(z)}{z^{1+\alpha}}-2(1+\alpha)\frac{\sin(z)}{z^{2+\alpha}}+(1+\alpha)(2+\alpha)\frac{1-\cos(z)}{z^{3+\alpha}} \]
and therefore there is a constant $C_\alpha$ that only depends on $\alpha$, such that
\[
|g^{\prime}(z)| \le \frac{C_\alpha}{z^{1+\alpha}} \]
which implies
\[
\sup_{[k\theta,(k+1)\theta]}|g^\prime(z)| \le \frac{C_\alpha}{(\theta k )^{1+\alpha}}. \]
We can now use this in the estimate of $|I_2|$ to get
\[
|I_2| \le
C \theta^{2}
\sum_{k=1/\theta}^\infty \frac{1}{k^{1+\alpha}} \lesssim
|\theta|^{2+\alpha} \]
and $I_2 = \mathcal{O}(|\theta|^{2+\alpha})$.
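For completeness, we record the integral comparison behind the last step:
\[
\sum_{k= 1/\theta}^\infty \frac{1}{k^{1+\alpha}}
\le \theta^{1+\alpha} + \int_{1/\theta}^\infty \frac{du}{u^{1+\alpha}}
= \theta^{1+\alpha} + \frac{\theta^{\alpha}}{\alpha}
\lesssim \theta^{\alpha}.
\]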
Now, for $I_1$, we use Taylor expansion of the function $h(z) = z \sin (z) - (1+\alpha)(1-\cos (z))$ to get
\[
h(z)=\frac{1-\alpha}{2}z^2 -\frac{3-\alpha}{24}z^4 + r(z), \]
where $r(z)=\mc O(z^6)$. We get
\begin{align*}
I_1
&=
\theta^{1+\alpha}
\frac{1-\alpha}{2}
\int_{\theta}^1
\frac{1}{z^{\alpha}}
P_1\Big(\frac{z}{\theta}\Big) dz
-
\theta^{1+\alpha}
\frac{3-\alpha}{24}
\int_{\theta}^1
z^{2-\alpha}
P_1\Big(\frac{z}{\theta}\Big) dz
\\ & \phantom{=}
+
\theta^{1+\alpha}
\int_{\theta}^1
\frac{r(z)}{z^{2+\alpha}}
P_1\Big(\frac{z}{\theta}\Big) dz
\\ &= \frac{1-\alpha}{2}I_{1,1}-\frac{3-\alpha}{24}I_{1,2}+I_{1,3}. \end{align*}
Again we examine each of the terms separately. We start with the last one. For this, notice that $r(\cdot)$ is a $C^\infty([-1,1])$ function, as it is the difference of two such functions. Moreover, we know that
$\tilde{r}(z):=\frac{r(z)}{z^{2+\alpha}} = \mc O(|z|^{4-\alpha})$, and therefore, applying Lemma \ref{lem:app-holder-quocient}, we have that $\tilde{r}(\cdot)$ is in $C^{0, \frac{4-\alpha}{6}-}([-1,1])$. Now we can proceed like we did for $I_2$ to get that $I_{1,3}$ is of order $\mc O(\theta^{2+\alpha})$.
Assuming again that $\theta^{-1} \in \bb N$, the first integral $I_{1,1}$ can be written as
\begin{align*}
I_{1,1}
&
=
\theta^{1+\alpha}
\sum_{k=1}^{\lfloor \frac{1}{\theta} \rfloor -1 }
\int_{k\theta}^{(k+1)\theta}
\frac{1}{z^{\alpha}}
\Big(\frac{z}{\theta}- k - \frac{1}{2}\Big)dz
\\ &=
\theta^{2}
\sum_{k=1}^{\lfloor \frac{1}{\theta} \rfloor -1 }
k^{2-\alpha}\left[
\frac{\Big(1+\frac{1}{k}\Big)^{2-\alpha}-1}{2-\alpha}
-
\Big(1+\frac{1}{2k}\Big)
\frac{\Big(1+\frac{1}{k}\Big)^{1-\alpha}-1}{1-\alpha}\right]. \end{align*}
We now split the terms with $k=1$ and $k>1$,
\begin{align}\label{eq:I11}
I_{1,1}
&
\nonumber
=
\theta^2 \left( \frac{2^{2-\alpha}-1}{2-\alpha}-\frac{3(2^{1-\alpha}-1)}{2(1-\alpha)}\right)
\\&+
\theta^{2}
\sum_{k=2}^{\lfloor \frac{1}{\theta} \rfloor -1 }
k^{2-\alpha}\left[
\frac{\Big(1+\frac{1}{k}\Big)^{2-\alpha}-1}{2-\alpha}
-
\Big(1+\frac{1}{2k}\Big)
\frac{\Big(1+\frac{1}{k}\Big)^{1-\alpha}-1}{1-\alpha}\right]. \end{align}
Use now the full Taylor series of both $(1+x)^{2-\alpha}$ and $(1+x)^{1-\alpha}$, where we are taking $x= \frac{1}{k} \in(0,1)$, to exploit the cancellations. For a fixed $k>1$, the expression inside the square brackets in the last summation is
\begin{align*}
\frac{1}{2-\alpha}\sum_{j=1}^{\infty} \frac{(2-\alpha)_j}{j!} \frac{1}{k^j}
-
\frac{1}{1-\alpha}\sum_{j=1}^{\infty} \frac{(1-\alpha)_j}{j!} \frac{1}{k^j}
-
\frac{1}{2(1-\alpha)}\sum_{j=1}^{\infty} \frac{(1-\alpha)_j}{j!} \frac{1}{k^{j+1}} \end{align*}
where $(x)_j:= x (x-1)\dots (x-j+1)$. By grouping the powers of $\frac{1}{k}$ together, we can check by hand that the coefficients of $\frac{1}{k}$ and $\frac{1}{k^2}$ are zero. Moreover, by a simple change of variable on the last sum, we have that the sum above equals \begin{align*}
\sum_{j=3}^{\infty}
\left(\frac{(1-\alpha)_{j-1}}{j!}
-
\frac{(-\alpha)_{j-1}}{j!}
-\frac{(-\alpha)_{j-2}}{2(j-1)!}
\right)
\frac{1}{k^j}, \end{align*}
where we used that $\frac{(x)_j}{x}=(x-1)_{j-1}$ and $(x)_{j+1}=(x)_j (x-j)$. Rewriting this expression in terms of Gamma functions, we have that the last term of \eqref{eq:I11} is equal to
\begin{equation}\label{eq:sumI2}
\theta^{2}
\sum_{k=2}^{\lfloor \frac{1}{\theta} \rfloor -1 }
\sum_{j=3}^\infty
k^{2-\alpha-j}
\frac{(j-2)\Gamma(1-\alpha)}{2 j! \Gamma( - \alpha - j +3)}. \end{equation}
From the reflection formula for the Gamma function and a change of variables $m = j-2$, we get
\begin{align*} \eqref{eq:sumI2}
=
\frac{\theta^2}{2\Gamma(\alpha)}
\sum_{k=2}^{\lfloor \frac{1}{\theta} \rfloor -1 }
\sum_{m=1}^\infty (-1)^m k^{-\alpha-m} \frac{m \Gamma(m+\alpha)}{(m+2)!}. \end{align*}
Now, using Euler-Maclaurin again, one can easily prove that for $\alpha \in (0,2)$ and $m\ge 1$, \begin{align}
\label{eq:bound-inc-zeta}
\left |\sum_{k=2}^{\lfloor \frac{1}{\theta} \rfloor -1}k^{-\alpha-m}
-
(\zeta(m+\alpha)-1) + \frac{\theta^{m+\alpha-1}}{m+\alpha-1} \right|
\le C \theta^{m+\alpha} \end{align} where the constant $C$ does not depend on $m$ or $\alpha$. Therefore there exists an explicit constant $K_2$ such that
\[
I_{1,1} = K_2 |\theta|^2 + \mathcal{O}(|\theta|^{2+\alpha}). \]
Finally, we can show in an analogous way that $I_{1,2} = \mathcal{O}(|\theta|^4)$. For the case $\alpha=1$ we proceed in a similar way. We need to evaluate \[
R^\infty_1 = \theta^{2}
\int_{\theta}^\infty
\Big(\frac{z \sin(z)- 2(1-\cos(z))}{z^{3}}\Big)
P_1\Big(\frac{z}{\theta}\Big) dz. \]
Using similar ideas as before, together with the fact that $z \sin(z)- 2(1-\cos(z))=\mc O(z^4)$ when
$|z| \to 0$ (instead of the order $\mc O (z^2)$ that we got for the case $\alpha \in (1,2)$), we conclude the proof. \end{proof}
It is worth explaining why we do not simply use the triangle inequality to bound the series representation by taking
\begin{align*}
|I_{1,1}|
&
\le
\theta^{1+\alpha}
\sum_{k=1}^{\lfloor \frac{1}{\theta} \rfloor -1 }
\int_{k\theta}^{(k+1)\theta}
\left|
\frac{1}{z^{\alpha}}
\Big(\frac{z}{\theta}- k - \frac{1}{2}\Big)
\right|
dz, \end{align*}
or something similar. This strategy is simply not good enough to guarantee that the constant $\mu_2$ studied in the next lemma is positive for all $\alpha$. Indeed, applying the bounds above would only be enough to show that $\mu_2$ is positive in an interval of the form $(\alpha_0,2)$ with $\alpha_{0} > 0$. Although more technical, we preferred to keep a consistent approach for the proof to avoid having yet more cases.
\begin{lemma}\label{lem:Right-sign}
For $\alpha \in (0,2)\setminus\{1\}$, the constant $\mu_2$ defined in \eqref{eq:def-mu2} is positive. \end{lemma}
\begin{proof} We start by focusing on the case $\alpha>1$. Recall the expression \eqref{eq:asymp-of-R-2} for $K_2$. As $\alpha >1$, for $m \ge 1$, we have $m+\alpha >2$ and therefore \begin{align*}
\zeta(m+\alpha)-1
&=
\frac{1}{2^{m+\alpha}} + \frac{1}{3^{m+\alpha}} \sum_{k\ge 3} \Big(\frac{3}{k}\Big)^{m+\alpha}
\\ & \le
\frac{1}{2^{m+\alpha}} + \frac{1}{3^{m+\alpha}} \sum_{k\ge 3} \Big(\frac{3}{k}\Big)^2
\\ & \le
\frac{1}{2^{m+\alpha}}\Bigg(1+ 9\Big(\zeta(2)-\frac{5}{4}\Big)\Bigg)
\le \frac{5}{2^{m+\alpha}}, \end{align*} where $\zeta(z)$ is the zeta-function. Moreover, using Gautschi's inequality for the ratio of two Gamma functions, see e.g.~\cite{Gau}, we can write \[
(m+2)^{\alpha-2}< \frac{\Gamma(m+\alpha)}{\Gamma(m+2)} <(m+1)^{\alpha-2} < m^{\alpha-2}. \]
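For transparency, let us record the numerics behind the bound on $\zeta(m+\alpha)-1$ above: in the last step of that display we used $3^{-(m+\alpha)} \le 2^{-(m+\alpha)}$, and numerically
\[
1+ 9\Big(\zeta(2)-\frac{5}{4}\Big) = 1 + 9\Big(\frac{\pi^2}{6}-\frac{5}{4}\Big) \approx 4.55 \le 5.
\]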
The upper bound on $K_2$ will follow from the lower bound on $\frac{K_2}{1-\alpha}$. We remove all even summands $m$ in the definition of $K_2$ and bound further
\begin{align*}
\frac{2 K_2}{1-\alpha} & \ge
\Bigg(
\Bigg( \frac{2^{2-\alpha}-1}{2-\alpha}-\frac{3(2^{1-\alpha}-1)}{2(1-\alpha)}\Bigg)
\\ &
\phantom{\left( = \right)}-
\frac{1}{2\Gamma(\alpha)}
\sum_{m=0}^\infty (\zeta(2m+1+\alpha)-1)
\frac{(2m+1) \Gamma(2m+1+\alpha)}{\Gamma(2m+3)(2m+3)}
\Bigg) \\& \ge
\Bigg(
\Bigg( \frac{2^{2-\alpha}-1}{2-\alpha}-\frac{3(2^{1-\alpha}-1)}{2(1-\alpha)}\Bigg)
-
\frac{5}{2\Gamma(\alpha)}
\sum_{m=0}^\infty \frac{(2m+2)^{\alpha-2}}{2^{2m+1+\alpha}}
\Bigg) \\& \ge
\Bigg(
\Big( \frac{2^{2-\alpha}-1}{2-\alpha}-\frac{3(2^{1-\alpha}-1)}{2(1-\alpha)}\Big)
-
\frac{5}{12\Gamma(\alpha)} \Bigg). \end{align*}
Call $u: (0,2) \rightarrow \mathbb{R}$ the map \begin{equation} \label{eq:bound-u} t \mapsto \frac{1-t}{2} \Bigg(
\Bigg( \frac{2^{2- t}-1}{2- t}-\frac{3(2^{1-t}-1)}{2(1-t)}\Bigg)
-
\frac{5}{12\Gamma(t)}
\Bigg) \end{equation} which is increasing for $t>1$; simple analysis shows that $u(t)$ is bounded from above by $\frac{1}{4}$. Now we collect all previous contributions to the constant $\mu_2$ and show that the sum above cannot flip the sign. We conclude that \begin{align*}
\mu_2
&= 2c_\alpha \Bigg(
\frac{1}{2(2-\alpha)} - \frac{1}{4} - K_2
\Bigg) > \frac{(\alpha-1) c_{\alpha}}{2-\alpha} \end{align*}
is positive for $\alpha>1$.
For $\alpha<1$, the strategy is similar, only this time, we proceed to get a function $\tilde{u}(\cdot)$ similar to \eqref{eq:bound-u} but bounding $\frac{K_2}{2(1-\alpha)}$ from below (as $1-\alpha$ is now positive). \end{proof}
\begin{lemma} \label{lem:app-int} Let $z \in [1,\infty)$ and define \[
\Cin(z):= \int_0^z \frac{1-\cos(t)}{t}dt. \] We have that \[
\Cin(z) = \log (z) + \gamma + \mc O (z^{-1}) \] as $z \longrightarrow \infty$, where $\gamma$ is the Euler-Mascheroni constant. \end{lemma} \begin{proof} By defining \[
\Ci(z) := -\int_z^\infty \frac{\cos(t)}{t} dt \] the linearity of the integral implies that \[
\Cin(z)= \log (z) - \Ci(z) - \int_1^\infty \frac{\cos t}{t} dt
+ \int_0^1 \frac{1-\cos t}{t}dt. \] The exact values of the last two integrals are not relevant for us, but their combination above is known to equal $\gamma$. Therefore, \[
\Cin(z) = -\Ci(z) + \log (z) + \gamma. \] Finally we conclude the proof by noting that $\Ci(z) = \mc O(z^{-1})$ as $z \to \infty$, which follows from a single integration by parts. \end{proof}
\section{Continuity estimates}
\begin{lemma}\label{lem:app-holder-quocient}
Let $f \in C^{1,\beta}(I)$ for a closed interval $I$ containing the origin.
Additionally, suppose that
\[
f(x) = \mc O\left( |x|^{\beta_0} \right)
\text{ as } |x| \longrightarrow 0
\]
for some $\beta_0 \ge 1 + \beta$.
Let $1<\beta_1 < \beta_0 $ and define the function
\[
h(x):= \frac{f(x)}{|x|^{\beta_1}}.
\]
Then we have that the function $h$ is in $C^{0,\bar{\beta}}(I)$ where $\bar{\beta}= \frac{\beta_0-\beta_1}{\beta_0-\beta}$.
If instead, we have that $f \in C^{0,\beta}(I)$ for some $\beta \in (0,1)$, and $0<\beta_1<\min\{1, \beta_0\}$, we get that $h \in C^{0,\bar{\beta}}(I)$ with $\bar{\beta} := \min\{\beta(1-\beta_1/\beta_0),\beta_0-\beta_1,1,\beta_1\}$. \end{lemma}
\begin{proof}
We will prove the first claim; the second can be proved analogously. Let $x,y \in I$ and assume, without loss of generality, that $|x|<|y|$. Then
\begin{align*}
\left|
\frac{f(x)}{|x|^{\beta_1}}
-
\frac{f(y)}{|y|^{\beta_1}}
\pm
\frac{f(x)}{|y|^{\beta_1}}
\right| &=
\left|
\frac{f(x)}{|x|^{\beta_1}}
\left (\frac{|y|^{\beta_1} - |x|^{\beta_1}}{|y|^{\beta_1}} \right )
+
\frac{f(x)-f(y)}{|y|^{\beta_1}}
\right| \\& \lesssim
|x|^{\beta_0-\beta_1}
\frac{\left||y|^{\beta_1} - |x|^{\beta_1}\right|}{|y|^{\beta_1}}
+
\frac{|f(x)-f(y)|}{|y|^{\beta_1}}. \end{align*}
Now use that for real numbers $A,B> C >0$ and $\delta \in [0,1]$, we have $C \le A^\delta B^{1-\delta}$. Regarding the first term on the right hand side, notice that \[
\left||y|^{\beta_1} - |x|^{\beta_1}\right| \lesssim \min\{ |y|^{\beta_1}, |y|^{\beta_1-1} |x-y|\} \]
so choosing $A=|y|^{\beta_1}, B= |y|^{\beta_1-1}|x-y|$ and $\delta=\beta_0-\beta_1$ we can easily see that \[
|x|^{\beta_0-\beta_1}
\frac{\left||y|^{\beta_1} - |x|^{\beta_1}\right|}{|y|^{\beta_1}} \lesssim |x-y|^{\delta} \leq |x-y|^{\bar{\beta}}. \]
To bound the second term, remark that $|f'(z)| \leq C |y|^{\beta}$ for all
$|z|\leq |y|$ since $f'\in C^{0,\beta}(I)$ and $f'(0)=0$, so
\[
|f(x)-f(y)| \lesssim \min\{ |y|^{\beta_0}, |y|^{\beta} |x-y|\} \]
and again choosing $A=|y|^{\beta_0}$, $B=|y|^{\beta} |x-y|$ and $\delta=1-\bar{\beta}$ (so that the exponent of $|y|$ vanishes and the power of $|x-y|$ is exactly $\bar{\beta}$), the claim follows. \end{proof}
\begin{lemma}
If $p_X(\cdot)$ is admissible with index $\alpha \in (1,2)$, then $\phi_X(\cdot)$
is in $C^{1,\alpha-1-}(\bb T)$.
If $p_X(\cdot)$ is admissible with index $1$, then $\phi_X$ is $C^{0,1-}(\bb T)$.
\label{lemma-app-phi-smooth} \end{lemma} \begin{proof}
Notice that $p_X(\cdot)$ being admissible implies that it is in the domain of attraction of an $\alpha$-stable distribution. Therefore $\bb E[|X|^\beta]<\infty$ for all $\beta\in (0,\alpha)$ and $p_X(x) \lesssim |x|^{-\alpha+}$. Now, we just write $p_X(\cdot)$ as the inverse Fourier transform of
\[
\mc F_{\bb T}(\phi_X)(-x) = p_X(x).
\]
Then use the classical relation between the continuity of a function and the
decay of its Fourier coefficients, see \cite[Proposition 3.3.12]{grafakos2008classical}, to conclude the proof. \end{proof}
\end{document}
Eco-evolutionary modelling of microbial syntrophy indicates the robustness of cross-feeding over cross-facilitation
G. Boza, G. Barabás, I. Scheuring & I. Zachar

Scientific Reports volume 13, Article number: 907 (2023)

Subject: Evolutionary ecology
Syntrophic cooperation among prokaryotes is ubiquitous and diverse. It relies on unilateral or mutual aid that may be both catalytic and metabolic in nature. Hypotheses of eukaryotic origins claim that mitochondrial endosymbiosis emerged from mutually beneficial syntrophy of archaeal and bacterial partners. However, there are no other examples of prokaryotic syntrophy leading to endosymbiosis. One potential reason is that when externalized products become public goods, they incite social conflict due to selfish mutants that may undermine any mutualistic interactions. To rigorously evaluate these arguments, here we construct a general mathematical framework of the ecology and evolution of different types of syntrophic partnerships. We do so both in a general microbial and in a eukaryogenetic context. Studying the case where partners cross-feed on each other's self-inhibiting waste, we show that cooperative partnerships will eventually dominate over selfish mutants. By contrast, systems where producers actively secrete enzymes that cross-facilitate their partners' resource consumption are not robust against cheaters over evolutionary time. We conclude that cross-facilitation is unlikely to provide an adequate syntrophic origin for endosymbiosis, but that cross-feeding mutualisms may indeed have played that role.
Microbial interactions include a wide range of mechanisms that shape not only the locally interacting pair but often the whole community or the larger ecosystem through externalized products1,2. Metabolite-based cooperation, syntrophy, is often crucial for the stable coexistence of microbial communities3,4,5. The term syntrophy ("co-feeding") gradually increased in scope to cover a diverse range of both trophic and catalytic interactions, leading to unidirectional or mutual aid6 that, in general, allow a community to survive in environments where individuals cannot7. Differences in the specific mechanisms of syntrophy likely have fundamentally different consequences for the eco-evolutionary dynamics of species.
While syntrophy is ubiquitous in the prokaryotic domain (likely responsible for the unculturability of many prokaryotes3,8,9), partnerships stop at ectosymbioses, never achieving true endosymbiosis via physical integration10. It is intriguing that we do not find further examples of purely prokaryotic endosymbioses (i.e., not embedded in a eukaryotic overhost), other than the singular putative mitochondriogenetic origin11. While there are many analogies to the endosymbiotic origin of mitochondria, rendering eukaryogenesis a perhaps less unique major evolutionary transition12, it is perfectly valid and relevant to ask why prokaryotic syntrophy has not led to magnitudes more endosymbiotic integrations over ~ 4 billion years (that we know of). After all, multicellularity has evolved multiple times, independently. Besides metabolic compatibility and adaptive superiority, the process likely required a long and stable period during which species could coevolve without interruption from third parties. It is unknown whether and which types of syntrophy can be stably maintained for a prolonged time, withstanding the inevitable invasion of cheaters and other biotic and abiotic disturbances, especially regarding eukaryogenetic scenarios. We set out to evaluate the ecological and evolutionary robustness of different syntrophic mechanisms via mathematical modelling. Investigations may not only provide insight into prokaryotic integration (or the apparent lack of it) but could improve our understanding of the singularity of eukaryogenesis.
The broadest definition of syntrophy covers all metabolic cooperation that positively affects the population growth of another species13. Formally, cooperation is equivalent to cross-catalysed replication of molecules in chemical systems not involving cells, e.g. ribozymes catalysing the replication of other ribozymes. Between (cellular) organisms cooperation has the same second-order kinetics as cross-catalysis between chemical replicators14, without the fast association-dissociation dynamics of true chemical catalysts, since in this case species aid each other via externalized molecules. These may directly serve as nutrients for the partner (e.g. living on the byproduct of another species13,15; called here cross-feeding in the narrow sense, Fig. 1A) or they may facilitate each other indirectly by e.g. enabling resources (via, e.g., digestive enzymes5; called collaborative feeding, or more generally cross-facilitation, Fig. 1B). Collaborative feeding occurs when two distinct lineages secrete the same extracellular enzyme, a public good, and together increase the specific activity of that enzyme, rendering an otherwise inaccessible resource accessible, facilitating both partners' growth5, or via the benefits delivered by another form of facilitation, transportation, or protection mutualisms16,17,18. Microbial interaction models often focus solely on the phenomenological level of species and fail to capture the fundamentally distinct nature of the different mechanisms involved19. Note that, while cross-fed metabolites or external enzymes may convey a cooperative benefit, neither catalyses the reproduction of partners directly. The difference lies in their inhibitory effect, production cost, and reusability.
Models of cross-feeding (A) and cross-facilitation (B). Species \(N_i\) consumes resource \(R_i\) and produces \(X_i\) (grey arrows). (A) Trophic cross-feeding. Product \(X_i\) represents a self-inhibiting (red curve) waste material of species \(N_i\) that can be consumed by species \(N_j\), serving directly as food (blue arrows). (B) Enzymatic cross-facilitation or collaborative-feeding. Product \(X_i\) of species \(N_i\) is an enzyme providing cooperative help (e.g. extracellular digestion of resources, formally equivalent to any form of indirect aid of a protective mutualism) to partner species and to itself by enhancing the consumption of resources \(R_i\) (dashed yellow arrows). In both cases, a mutant species \(N_3\) (red cell) may appear that inherits the properties and interactions of \(N_2\) (yellow cell) but does not produce anything.
Most microorganisms are found to be auxotrophic, lacking essential pathways, depending on extracellular sources of amino acids, vitamins, and cofactors20, implying nutritional cross-feeding3. A diversity of externalized molecules may transmit cooperative effects, e.g., digestive enzymes4,5,21,22,23, signals24, protective matrix materials25, siderophores26,27, metallophores, biosurfactants28, antibiotics29, amino acids, vitamins, and other cofactors (for a review, see5). The common feature of exchanged products is the private or collective benefit they exert, acting as private, semi-private, or common goods30,31. Metabolic benefits are harnessed either by directly consuming produced metabolites (e.g., nutritional mutualism, waste consumption13), or via the catalytic effects of products that remain reusable (e.g. extracellular enzymes5). For further examples, see Supplementary Table S1.
A particularly important type of nutritional cross-feeding is the detoxification of inhibitory molecules, such as waste13 or reactive oxygen species20. Waste can accumulate in prohibitive concentrations internally, therefore disposing of it is beneficial for the producer (low cost). A partner that consumes the waste can effectively alleviate the external inhibitory effect on the producer, facilitating its growth. Mutual byproduct consumption may therefore lead to reciprocal cooperation, or mutualism32. A textbook example is that of methanogenic syntrophies: bacteria ferment lactate, producing hydrogen that inhibits fermentation unless the archaeal partner consumes it to reduce carbon dioxide to methane33.
A fundamentally different mechanism of syntrophic cooperation is mediated by products that provide indirect benefit, as they are not consumed by partners (hence we call it cross-facilitation, to differentiate it from nutritional cross-feeding and to follow the general term used in ecology for positive interactions34). Species can produce reusable catalytic factors that benefit not only the producer but the whole community. Extracellular enzymes that degrade complex substrates to forms that can be picked up by the producer and its neighbours4 can improve resource consumption5,23,35. For large molecules, the cost of production and excretion can be substantial, especially compared to waste disposal. Protection mutualisms have similar effects, where factors produced by community members serve as non-consumable common goods (the protective matrix of a biofilm24,25, bacteriocins serving as growth-inhibiting antimicrobial compounds24, or extracellular detoxification and the neutralization of inhibitory substances, e.g., antibiotics17,36). Different (potentially prokaryotic) species may combine their extracellular enzymatic activities cooperatively37 to achieve enhanced growth and enable new niches38. Hybrid examples, with both cross-feeding and cross-facilitation, also exist16, e.g. in biofilms24. For a list of various examples, see Supplementary Information SI 1.
Public goods may benefit species, but they also generate social conflict and attract cheaters that do not invest in production while benefiting from the goods23,39. Numerous experiments have demonstrated that mutant strains or emerging ecotypes can stably coexist within the community (clonal cooperation1) due to e.g. the differential use of resources40 and cross-feeding41, especially when such division of metabolic labour is engineered42,43, often demonstrating higher fitness or productivity3,41,44,45. However, such compatibility between strains represents a best-case scenario. Selfish mutants contributing nothing but competing more effectively for resources are more likely to appear46, and they may ultimately win over strains that invest in a costly cooperative act47. Emergent cheater strategies have been observed in various experimental systems, including biofilm formation by P. aeruginosa, where biofilm thickness and health were reduced by non-contributing strains48, and P. fluorescens, where the survival of the whole biofilm was sabotaged by such defecting types49. Non-producers exploiting public resources can turn saved production costs into higher growth rates and may outcompete producers35, leading to the 'tragedy of the commons'50 and the collapse of the community24,51. Consequently, all forms of syntrophy are prone to disruption by cheater mutants that reduce their costs at the expense of producers.
Despite the obvious differences between cross-feeding and cross-facilitation, it is not trivial which of them can lead to an evolutionarily stable mutualism that could (in principle) account for the (endo)symbiotic integration of prokaryotic partners. We ask whether, and which, syntrophy can be stable against free-riders. Here we provide a mathematical model to investigate the ecological and evolutionary dynamics and robustness of symmetric and asymmetric cross-feeding and cross-facilitation between two unspecified microbial species in syntrophic interaction, with the potential for mutants to appear (Fig. 1). Based on our analysis, cross-facilitation appears to be an unlikely (but not impossible) candidate for serving as the syntrophic origin of a stable partnership or endosymbiosis. Cross-feeding mutualisms, however, may indeed have played that role.
We have modelled a theoretical partnership of unrelated microbial species 1 and 2 that belong to different guilds with different metabolic needs and are therefore limited by two independent resources \({R}_{1}\) and \({R}_{2}\), hence their coexistence is guaranteed. They secrete specific metabolites \({X}_{1}\) and \({X}_{2}\) into the environment (interpreted either as energy-rich waste or as enzymes), which benefit themselves and possibly the other species (Fig. 1). We assume linear consumer-resource dynamics with fast resources52. That is, resource uptake is fast compared to cellular growth, so resource concentrations are considered to be in steady state on the time scale of the consumers' dynamics. We also assume a well-mixed environment where externalized products are immediately available to everyone. Therefore, any product is potentially a publicly available good. In the case of cross-feeding, the secreted metabolite is waste that may self-inhibit (Fig. 1A). In collaborative feeding (a case of cross-facilitation), species secrete enzymes that catalyse reactions in the environment, improving resource consumption for themselves and other species (Fig. 1B). A cheater mutant species 3 (technically an ecotype rather than a bona fide new biological species) can potentially emerge that lives on the same resource as its parent but invests less, or even nothing, in production to increase its survival rate (or invests more into production at the expense of survival). While cooperative strains may also emerge1, we deliberately chose a worst-case scenario to test the stability of partnerships under the harshest conditions.
Based on these characteristics of the system, we have built a family of simple mathematical models to assess which types of syntrophy and interaction network topology are more likely to yield an evolutionarily stable, pairwise symbiosis. These serve as the first formal models of syntrophy that explicitly test a crucial aspect of the metabolic interactions considered to have been relevant at the onset of eukaryogenesis. See Methods for the mathematical details, Fig. 2 for some typical time series produced by the models, and Table S2 for parameters.
Time evolution of tripartite microbial systems with cross-feeding or collaborative-feeding interactions, depending on interaction type and per-capita mortality rate (\(d\)). The x-axis represents time, the y-axis represents species density. In both interaction types, only two species can stably coexist, the third one going extinct at equilibrium. Whenever the mutant species (red) has a smaller mortality than its parent (\(d_{3}<d_{2}\), top row), it can invade, causing the extinction of the resident parent (yellow). If the mutant's mortality is larger (bottom row), it cannot invade. Parameters are \(\left\{b_{1}=b_{2}=b_{3}=1,r_{1}=r_{2}=1,{\delta }_{1}={\delta }_{2}=0.1,k_{1}=k_{2}=k_{3}=0.1,c_{1}=c_{2}=c_{3}=1,d_{1}=d_{2}=0.01,m_{1}=m_{2}=m_{3}=1,s_{11}=s_{12}=s_{13}=s_{21}=s_{22}=s_{23}=s_{31}=s_{32}=s_{33}=0.1,g_{1}=g_{2}=g_{3}=1,h_{1}=h_{2}=h_{3}=1,w_{1}=w_{2}=w_{3}=1\right\}\); parameters are explained in Table S2.
Cross-feeding
Coexistence of cross feeders
First, we examine byproduct cross-feeding [Fig. 1A, Eq. (4)]. Byproduct metabolites often accumulate internally, stalling the metabolism of the organism. Hence disposing of them is beneficial, while retaining them internally is costly13,53. In the simplest case, we omit the potential of disposed waste for self-inhibition (\({h}_{i}=0\) for all species \(i\)) to make analytical investigations simpler. Without metabolic cross-feeding (\({g}_{i}=0\) for all \(i\)), species 1 reaches its carrying capacity independently of the other species, while species 2 and 3 become complete competitors for resource \({R}_{2}\). As a result, the competitor which utilises the resource more effectively, i.e., the one with the lower critical resource requirement \({R}^{*}={d}_{i}/{b}_{i}\), will exclude the other (\({R}^{*}\)-rule54).
Cross-feeding couples the dynamics of the species, rendering the analysis more complex. When self-inhibition is negligible (\({h}_{1}={h}_{2}=0\)) but species cross-feed (\({g}_{1},{g}_{2}>0\)), we show that species 1 and 2 are in stable coexistence when their net growth rates are sufficiently high; otherwise both species go extinct (Supplementary Information SI 3). Assuming that species can grow independently of each other (i.e., cross-feeding is facultative) and under biologically realistic conditions, the dynamics always lead to stable coexistence.
Next, we include self-inhibition by the waste metabolites. While secretion can effectively dispose of waste, the waste can still accumulate externally, potentially causing self-inhibition. For instance, hydrogen-producing bacteria cannot grow due to the inhibiting effect of accumulating hydrogen when it is not consumed by methanogenic partners13. With self-inhibition taken into account (\({h}_{1},{h}_{2}>0\)), the dynamics of species 1 and 2 remain qualitatively unchanged despite the more complex dynamical equations. Following the analysis of the simpler case without self-inhibition, we conjecture that a single, globally stable internal fixed point always exists, assuming realistic parameter combinations and positive growth without the partner (SI 5). Extensive numerical simulations indicate that this is indeed the case: species densities tend to a globally stable internal fixed point if net growth rates are sufficiently high (SI 4).
These results indicate that once a pair of species has established mutual cross-feeding, they remain in stable coexistence against (small) perturbations in density or parameter values. Next, we examine how stable a partnership is against cheating mutants that do not produce compounds benefiting or inhibiting anyone.
Invasion of mutants
Here, we investigate whether a mutant species can invade the community at equilibrium. We assume that the mutant (species 3, a rare mutant of species 2) is unable to efficiently dispose of its waste product, hence it must pay a cost compared to species 1 and 2, which excrete their waste metabolites (Fig. 1A). Accordingly, the mortality rate of species 3 is larger than that of species 2 (\({d}_{3}>{d}_{2}\)). Because of the symmetry of the model, it does not matter which resident species (1 or 2) generates the mutant.
We map out when the mutant species 3 can invade the partnership of species 1 and 2 in stable coexistence. We also examine the reverse case of species 2 being the invading mutant of species 3, with species 3 being the stable partner of species 1 (SI 6). Depending on model parameters, we determine the direction of evolution (i.e., whether species 3 replaces species 2 because species 3 can invade while species 2 cannot, or vice versa, or whether species 2 and 3 coexist because of mutual invasion). To make the analysis tractable, we again assume no self-inhibition (\({h}_{1}={h}_{2}=0\)).
Apart from assuming that withholding \({X}_{2}\) increases the death rate of species 3 compared to species 2 (\({d}_{3}>{d}_{2}\)), we assume that the conversion efficiencies of resources and byproducts remain the same for the mutant (\({b}_{2}={b}_{3}\), \({g}_{2}={g}_{3}\)). In this case species 3 cannot invade the pair of species 1 and 2, while species 2 successfully invades the pair of species 1 and 3. That is, species 3 cannot coexist with species 2, and the species (1, 2) subsystem is resistant against the invasion of selfish mutants (SI 6). It is natural to assume that an increased rate of byproduct uptake \(g\) correlates with a higher per-capita death rate \(d\), so these two traits are in a positive trade-off (there are plenty of examples of such trade-offs between microbial traits55,56). Accordingly, if the selfish mutant 3′ utilizes the byproduct more efficiently than the resident (\({g}_{3}^{\prime}>{g}_{2}\)), it must pay an even larger cost, realized as an even higher mortality rate (\({d}_{3}^{\prime}>{d}_{3}>{d}_{2}\)). Depending on the trade-off between mortality rate \(d\) and conversion efficiency \(g\), one of the species can exclude the other, or they can mutually invade each other, in which case all three species coexist (SI 6). The evolution of these correlated traits is studied in more detail using adaptive dynamics in the next section.
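The invasion criterion underlying these results can be stated explicitly. With \({h}_{1}={h}_{2}={h}_{3}=0\), let \(({\widehat{n}}_{1},{\widehat{n}}_{2})\) denote the equilibrium of the resident pair; evaluating Eq. (4) for the rare mutant gives its initial per-capita growth rate (its invasion fitness) as
$${\lambda }_{3}=\left({b}_{3}{r}_{2}-{d}_{3}\right)+{g}_{3}\frac{{k}_{1}{\widehat{n}}_{1}}{{w}_{2}{\widehat{n}}_{2}+{\delta }_{1}}-{b}_{3}{c}_{2}{\widehat{n}}_{2},$$
and species 3 invades if and only if \({\lambda }_{3}>0\). With \({b}_{3}={b}_{2}\) and \({g}_{3}={g}_{2}\), \({\lambda }_{3}\) equals the resident's per-capita growth rate at equilibrium (which is zero) minus \({d}_{3}-{d}_{2}\); hence \({\lambda }_{3}=-({d}_{3}-{d}_{2})<0\) whenever \({d}_{3}>{d}_{2}\), and the cheater cannot invade.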
Adding self-inhibition by waste (\({h}_{1},{h}_{2}>0\)), we observe the same qualitative behaviour in numerical simulations. See Fig. 3 for the characteristic time evolution of the various cases. If the subsystem of species (1, 2) has a stable internal fixed point (a theorem for \({h}_{1}={h}_{2}=0\) and a conjecture otherwise, which nevertheless appears to hold numerically), then one can prove for \({h}_{1},{h}_{2}>0\), \({h}_{2}={h}_{3}\) that species 3 cannot invade if \({d}_{3}>{d}_{2}\). At the same time, species 2 can invade the species (1, 3) subsystem (SI 6).
Time evolution of a cross-feeding partnership with waste product inhibition with a mutant species appearing (red), depending on the directionality of cross-feeding (\(g\)) and mortality (\(d\)). We assume that species 3 (a mutant of species 2, yellow) inherits the resource utilization of species 2 (\(g_{3}=g_{2}\), except in the first column). The x-axis represents time, the y-axis represents density on a logarithmic scale. Species 1 and 2 cause the mutant species 3 to go extinct whenever \(d_{2}<d_{3}\), even if there is no cross-feeding at all. That is, the apparent survival of producers against cheaters is not because of cross-feeding but because of the higher assumed mortality for cheaters. When there is cross-feeding between species 1 and 2 (\(g_{1}=g_{2}=1,g_{3}=0\)), coexistence of species 1 and 2 is independent of the ratio of \(d_{1,2}/d_{3}\). That is, species 3 has no chance of persisting, even if it has much smaller mortality than the others. When there is asymmetric cross-feeding such that species 2,3 cannot feed on \(X_{1}\) (\(g_{1}=1,g_{2}=g_{3}=0\)), or species 1 cannot feed on \(X_{2}\) (\(g_{1}=0,g_{2}=g_{3}=1\)), or everyone can cross-feed (\(g_{1}=g_{2}=g_{3}=1\)), species 3 can replace species 2 only if it has smaller death rate \(d_{3}<d_{2}\). Results are qualitatively the same when inhibition is omitted (\(h_{1}=h_{2}=h_{3}=0\)). Parameters are \(\left\{r_{1}=r_{2}=1,c_{1}=c_{2}=c_{3}=1,b_{1}=1,b_{2}=b_{3}=0.1,d_{1}=d_{2}=0.01,{\delta }_{1}={\delta }_{2}=0.1,k_{1}=k_{2}=1,h_{1}=h_{2}=h_{3}=1,w_{1}=w_{2}=w_{3}=1\right\}\).
Adaptive dynamics
To simulate the evolution of cross-feeding, we implemented a numerical version of adaptive dynamics57 for our model. We assume that the consumption efficiency \(g=g(z)\) and the mortality rate \(d=d(z)\) both depend on an underlying trait \(z\) and are in trade-off. Assuming that the trait is determined by many genes, we expect mutations to incur only small changes in trait value. The trade-off between \(g\) and \(d\), governed by [Eq. (7)], ensures that a higher byproduct consumption rate can only be attained at the cost of increased mortality. In our adaptive dynamics model, we check whether mutants that differ only slightly in their trait value from a resident species can invade and replace the resident (for details, see SI 7, 8).
When only species 2 evolves without inhibition (SI 7), we observe that species 2 experiences directional selection towards a trait value \(z\) that provides the best compromise between gaining a benefit from cross-feeding and avoiding an excessively large mortality rate. This evolutionary state of mutual cross-feeding is both locally and globally stable against the invasion of other mutants, even against potential cheaters whose trait values are far from the final trait value of species 2. We also checked what happens when both species evolve, and when inhibition by waste is imposed (SI 8), arriving at qualitatively identical results (Fig. 4).
Evolutionary trajectories of resident and mutant cross-feeding species with inhibition, throughout successive generations of invasions in case only one (A) or both species can mutate (B). Adaptive dynamics simulations start from different initial mutant trait values (\(z\)). Trait value (y-axis) is shown against generations (x-axis) for both mutant classes. Colours correspond to trait value, opacity to the relative equilibrium density of the various species present in the actual population. (A) Only species 2 can mutate, species 1 is fixed (orange line at \(z_{1}=0.2\)). Evolutionary trajectories of species 2 converge to either the equilibrium trait value at around \(z\approx 0.82\) (in case the starting trait is larger than about 0.1), or to the one at \(z=0\). (B) Both species may evolve. To achieve mutually positive trait values (implying cross-feeding), species 1 must have a starting trait over 0.2. Trajectories starting from around this critical \(z\approx 0.2\) may end up at either equilibrium due to stochasticity. When both species have positive equilibrium trait values, mutual cross-feeding evolves. In case one (or both) species converge to \(z=0\), there is no cross-feeding as the trait value defines low mortality with negligible cross-feeding efficiency. Parameters are \(\left\{t_{inv}={10}^{4},n_{inv}={10}^{-2},{\mu }_{SD}={10}^{-2},n_{\theta }={10}^{-3},G=300,z_{1}=0.2,b_{1}=1,b_{2}=0.1,c_{1}=1,c_{2}=0.1,k_{1}=1,k_{2}=0.1,r_{1}=r_{2}=1,{\delta }_{1}={\delta }_{2}=0.1,w_{1}=w_{2}=1,h_{1}=h_{2}=1,\overline{z}=0.5,\sigma =0.2,\eta =0.1\right\}\).
Cross-facilitation
Coexistence of collaborative feeders
Next, we examine collaborative-feeding, a specific case of cross-facilitation [Fig. 1B, Eq. (6)]. We assume that the extracellular metabolic product \({X}_{i}\) is an enzyme that has evolved to improve the producer's resource consumption21,23. Consequently, producing and secreting this molecule is costly. On the other hand, the enzyme, when externalized, benefits not only the producer but potentially everyone else in the vicinity, improving their resource consumption efficiency.
The subsystem of species 1 and 2 persists only if \({r}_{i}{b}_{i}-{d}_{i}>0\) for both (see SI 9, 11 for analytical considerations and SI 10 for numerical support). The mutant species 3 does not produce the enzyme (\({k}_{3}=0\)), hence it does not pay any production or secretion cost: \({r}_{2}{b}_{3}-{d}_{3}>{r}_{2}{b}_{2}-{d}_{2}\). As a result, species 3 does not contribute to the public good but still benefits from \({X}_{1}\). Since the mutant does not invest in production, it generally wins over species 2, a realization of the tragedy of the commons50. It is easy to show that if the enzymatic efficiencies of species 2 and 3 are (roughly) identical (\({s}_{22}={s}_{32}\), \({s}_{21}={s}_{31}\)), then species 3 always excludes species 2, because its per-capita growth rate is always larger than that of species 2 (\(\frac{1}{{n}_{3}}\frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}>\frac{1}{{n}_{2}}\frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}\) for every \({n}_{1},{n}_{2}>0\)).
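This exclusion can be verified directly from Eq. (6). Assuming, as a minimal case, that the cheater differs from the producer only in the saved cost (\({b}_{3}={b}_{2}\), \({d}_{3}<{d}_{2}\)) while sharing its enzymatic parameters (\({s}_{31}={s}_{21}\), \({s}_{32}={s}_{22}\), \({m}_{3}={m}_{2}\)), the catalytic terms of the two strains are identical for any enzyme concentrations and cancel in the difference of per-capita growth rates:
$$\frac{1}{{n}_{3}}\frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}-\frac{1}{{n}_{2}}\frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}={\rho }_{2}\left({b}_{3}-{b}_{2}\right)+\left({d}_{2}-{d}_{3}\right)={d}_{2}-{d}_{3}>0.$$
The cheater therefore outgrows the producer at all positive densities, independently of how much enzyme is present.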
We note here that if there is some exclusively private benefit of producing an enzyme (or if not producing it incurs an extra cost, as in the case of waste), then the above simple selection dynamics no longer holds. However, since we assume a well-mixed system, such private benefits can be ignored. After species 2 has gone extinct, the dynamics of species 1 becomes independent of species 3. Thus, species 1 reaches its equilibrium concentration \({\widehat{n}}_{1}>0\) if its net growth rate is positive (\({r}_{1}{b}_{1}-{d}_{1}>0\)). Consequently, after species 1 reaches equilibrium, the dynamics of species 3 depends only on its own density, which also leads to a positive equilibrium concentration if \({r}_{2}{b}_{3}-{d}_{3}>0\). This effectively means that mutual catalytic aid is not stable against the invasion of cheaters. After the invasion of the selfish species 3, species 1 coexists with the selfish invader. Imagine now that a new mutant of species 1 (species 1′) emerges, which does not produce enzyme 1 but utilizes it as effectively as species 1. By the same argument as above, species 1′ will outcompete species 1. Consequently, the reciprocal catalytic help ultimately disappears.
Next, we apply the same adaptive dynamics procedure to cross-facilitation as we did to cross-feeders. Since \({X}_{i}\) is an enzyme, it is beneficial to anyone that can access it, while producing it is costly. We therefore assume a trade-off between mortality \(d\left(z\right)\) and production rate \(k\left(z\right)\), common among microbes58, which ensures that a larger cost is paid in relative growth when more enzyme is produced per unit time [SI 12, Eq. (S9)]. Results indicate that cross-facilitation is disrupted whenever there is a potential for cheaters to appear (Fig. 5).
Adaptive dynamics of cross-facilitating species. (A) Only species 2 can evolve, species 1 is fixed (blue line at \(z_{1}=1\)). (B) Both species may evolve. In both cases, species evolve toward \(z\approx 0\), where mortality is \(d\approx 0\) and the production rate \(k\) of the common good is also \(\approx 0\). Parameters: \(\left\{t_{inv}=5 \cdot {10}^{5},n_{inv}={10}^{-2},{\mu }_{SD}={10}^{-2},n_{\theta }={10}^{-2},G=600,z_{1}=1,b_{1}=1,b_{2}=0.1,c_{1}=1,c_{2}=0.1,r_{1}=r_{2}=1,{\delta }_{1}={\delta }_{2}=0.1,m_{1}=m_{2}=1,s_{11}=s_{12}=s_{21}=s_{22}=0.1,\overline{z}=0.5,\sigma =0.2,\eta =0.1\right\}\). Species at or below \(z=0\) have \(d=0\); they therefore become effectively identical copies of each other, leading to their neutral coexistence. For further details, see Fig. 4.
We also investigate the cross-facilitation situation in which enzymes are species-specific with different functions, each facilitating the digestion of a single resource only while making it available for consumption by all species. This leads to qualitatively similar results (Fig. S15 in SI 13), supporting our claim that it is the costly nature of (enzyme) production that leads to the ultimate demise of cooperation in metabolic communities. The same holds for any hybrid case, where one species cross-feeds the other with waste while the other produces an enzyme that benefits both (Fig. 7(4)): cross-facilitation is lost at the evolutionary equilibrium (SI 14 and Fig. S17).
Microbial communities are widespread in almost every habitat on Earth. Interactions among species are dominantly mediated by externalized metabolites5. Facultative syntrophies and auxotrophies are both common, where partners facultatively or obligately depend on products of others6. The latter may be the cause of the unculturability of many prokaryotes3,8,59. The ubiquity of metabolic cooperation among microbes indicates that partnerships of complementary metabolisms can easily form, even without prior coevolutionary history60.
However, genome-scale metabolic networks indicate that, despite the likelihood of metabolic compatibility among species, compatibility is insufficient to compensate for the increased costs of satisfying two biomass requirements instead of one, leading to reduced growth of the pair relative to free-living competitors61. Metabolically cooperating bacteria (even obligately dependent ones) can regain autonomous metabolisms, which disrupts cooperation62. When obligate dependence evolves, species can only survive if their partners survive too. Thus, partner-dependent strains are generally more exposed to the ecological and evolutionary changes affecting their symbionts than autonomous lineages not depending (or not obligately) on partners62,63. It is likely that exclusive partnership can only evolve if the environment is stable enough to allow prolonged cooperative coupling and dependence without biotic or abiotic disturbances.
A recent comparative analysis suggests that mutualistic interactions are rare in natural microbial communities2 (but see18), except in highly stressful but stable environments, where the common stress factor forces species to collaborate16. Modelled communities having more auxotrophic strains were less robust to ecological disturbance63. On the other hand, mutualism-dominated communities may occupy more diverse niches and are more resilient to abiotic perturbations (e.g. nutrient changes) while being more susceptible to invasion as opposed to competitive communities64. Experiments have demonstrated that multi-member communities with complex interaction topologies tend to reduce to a few core species, and removing a keystone species further reduces the community to a single pair65.
Maintaining a partnership of prokaryotes dependent on externalized metabolites is especially challenging, as there are no sophisticated mechanisms for partner recognition and partner-specific close contact10. Partners can be exchanged for functionally equivalent ones without detrimental effects, as recent in vitro experiments demonstrate for archaea59 and microbial communities in general66. Such dependence on the partner's functional profile combined with tolerance of taxonomic change is likely common to all prokaryotes. These factors may explain why endosymbiosis is virtually unknown among free-living prokaryotes despite the ubiquity of syntrophy10 (prokaryotic endosymbioses exist67, but provide limited analogy to eukaryotic origins10). The singular putative example is the origin of eukaryotes and mitochondria59,68,69.
According to syntrophic hypotheses of mitochondrial origins, endosymbiosis emerged from the mutually beneficial, metabolite-mediated syntrophy of prokaryotic partners69,70,71 (also see72). Reconstructed metabolisms of ancient partners may even support their presumed early cooperation68. These hypotheses, in general, assume different ancestral metabolisms for host and symbiont, and that they belong to different domains. Asgards, close to the eukaryotic branch, are metabolically versatile and have the ability to grow lithoautotrophically, producing H2 from amino-acid degradation59,68,69,73. Alternative, mitochondria-late hypotheses assume that the interaction started out as physical and exploitative, where mutualism did not play a critical initial role and was established only later, if at all74,75.
Syntrophic hypotheses assume product/waste syntrophy between the ancestral host and symbiont, where product(s) of one partner are directly metabolized by the other partner (Fig. 1A). A common assumption is that the partners exchanged hydrogen in one or the other direction (hydrogen hypothesis68,70, Fig. 6A; reverse flow hypothesis69, Fig. 6B). According to the latter, the ancestral host may have generated reducing equivalents utilized by the bacterial partner in the form of hydrogen, small reduced inorganic or organic compounds, or by direct electron transfer69. It is in the producer's interest to dispose of its waste, as otherwise it could accumulate to inhibitory amounts13,53,76. If this waste is consumed by a partner, both species can benefit, jointly performing a reaction that would otherwise be thermodynamically unfavourable for either of them separately72. While the consumer's act of feeding benefits the producer, this benefit does not stem from a reciprocal exchange of metabolites. In this asymmetrical setup (called flow-through syntrophy76), material flows in one direction and producers can unilaterally control consumers further down the chain. While a methanogenic host for mitochondria has been ruled out, methanogenic archaea are fitting examples of flow-through cross-feeding, as they are responsible for the efficient removal of hydrogen and formate produced by primary fermenters in the absence of other terminal electron acceptors13,77.
Comparison of various syntrophic eukaryogenetic hypotheses. The updated hydrogen hypothesis68 (A) and the reverse flow hypothesis69 (B) are examples of unidirectional (flow-through) byproduct consumption. The sulphur-cycling hypothesis78 (C) is an example of symmetric recycle-type cross-feeding. (D) The collaborative-feeding model envisages a partner that secretes external enzymes that catalytically benefit both itself and any partner by making resources available to feed on (a case of cross-facilitation). The figure is based on71 (where a third partner is postulated, which we ignore in our model).
Syntrophic interactions can also be symmetric (reciprocal), when both species pass on metabolites that convey the benefit to the other. Sulphur cycling via oxidation and reduction between Sulphurospirillum and Chlorobium enables a rapid exchange whose rate depends on the (small) amount of sulphur (and the rate of its regeneration), which has a catalytic effect on both species (called recycle syntrophy76). According to the sulphur-cycling hypothesis (Fig. 6C), ancestral eukaryogenetic partners cycled sulphur repeatedly, in effect using it as an electron carrier between the two organisms78 (or in a tripartite endosymbiosis71). While modern phylogenomic results do not support mitochondria deriving from Rhodospirillaceae, the hypothesis serves as an example of symmetric cross-feeding, in contrast to unidirectional syntrophies.
Alternatively, one may assume that the initial interaction between partners was not direct feeding on leaked metabolites but was mediated by secretions that provided indirect benefit for both parties via cross-facilitation (Fig. 1B). Most prokaryotic species live in surface-adhered, multispecies biofilms79 where all forms of syntrophies, nutritional and catalytic, are common24. For example, a catabolic exoenzyme of one species makes a resource, e.g. cellulose, available for everyone23,35 (Fig. 6D). This catalytic help is formally equivalent to the effect of a product that provides protection for everyone and remains in the vicinity (e.g., biofilm matrix24,25, antibiotics29, or antibiotic-degrading enzymes24). We can envisage an early mitochondriogenetic partnership that benefited from a protective environment generated by a partner. However, whether the ancestral host and mitochondria co-evolved in a surface-bound community or as free-living cells is presently unknown.
While syntrophic hypotheses have gained considerable support due to the improved characterization of the closest Asgard archaeal relatives68,69,73,80,81,82, modelling the initial interaction remains grossly neglected. Assumed chemical compatibility and the potential for cooperation do not necessarily entail stable (ecological) coexistence of species, much less long-term (evolutionary) stability against cheater mutants (cf.61). For endosymbiosis, especially prokaryotic endosymbiosis, without phagocytosis or a nucleus, long-term stable coevolution is necessary. Without modelling the early evolutionary ecology of these systems, evolutionary claims inevitably remain superficial (as we have already argued11).
In this paper, we have designed a set of formal mathematical models of microbial interactions to compare the ecological and evolutionary stability and potential of different syntrophies. Our models are not particular to the prokaryotic domain, and the species may represent any unicellular organism. We have analysed which interaction type (nutritional cross-feeding or collaborative-feeding syntrophy) and interaction network topology (mutual or unilateral) is more robust against cheaters. We have examined from which type evolution could traverse to other types and which one can enable exclusively pairwise stable mutualism—the cornerstone of endosymbiosis and, particularly, mitochondrial origins.
According to our results, there is a fundamental difference between trophic and catalytic interactions mediated by products in microbial relationships, yielding different dynamics and different ecological and evolutionary outcomes. We have demonstrated that symmetric byproduct cross-feeding (mutual consumption of waste) is the most stable interaction of two species, and that asymmetric, unidirectional cross-feeding likely evolves toward a symmetric, reciprocal interaction.
Our main conclusion is that nutritional cross-feeding can be stable both in the short and the long term: selfish mutants cannot generally invade the mutual pair once it has been established. Interactions are immediately beneficial for both parties, as the invested cost of disposing of the waste is repaid by the partner removing the inhibitor of growth. Cheaters stealing public goods do not affect this private benefit. Therefore, as we have shown, evolution leads to an increased efficiency of using the byproduct of the partner. This kind of stable, mutual partnership rests on our assumptions of independent resources for two (and only two) species and a stable flux of both resources. From a strictly dynamical point of view, the more symmetric sulphur cycling78 is more stable than the hydrogen70 or reverse flow hypotheses69, but this comparison ignores all other relevant aspects (such as phylogenetic affiliations and absolute energetic costs and benefits).
Our second conclusion is that while a cross-facilitative interaction is stable ecologically, it can easily be destroyed by selfish mutants. If producing the enzyme is costly and it does not provide any significant private benefit (but benefits everyone), then cross-facilitation is susceptible to exploitation, and a selfish mutant can invade and destroy the interaction, as we have demonstrated, in line with what the 'tragedy of the commons'50 implies. Consequently, it has been argued that a private benefit for the producer is necessary for cooperators to successfully withstand the invasion of cheaters23,83. In our model of cross-facilitation, there is no additional private benefit, while in cross-feeding, the private benefit is the disposal of toxic waste.
A trivial mechanism to maintain the evolutionary stability of cooperation is spatial aggregation84,85. Microbes rarely interact exclusively in well-mixed environments; rather, they form dense, spatially structured, inhomogeneous communities in most habitats. In such aggregations (e.g. biofilms), interactions are localized and neighbourhoods are stable for longer periods1,86, which may increase the private benefit of catalytic products. Cooperative groups are less susceptible to cheaters, and the longer timespan of interactions may facilitate evolutionary stability of both cross-feeding and collaborative feeding, allowing auxotrophic dependencies to develop. While we have demonstrated here the potential of cross-feeding for stable symbiosis and eukaryogenesis, collaborative feeding (or more generally, cross-facilitation) may also have been relevant to such processes when facilitating factors (like spatial inhomogeneities) are provided, but this claim must be examined more scrupulously via modelling.
Carefully constructed mathematical models may expose hidden assumptions or nontrivial dynamics, which verbal models may miss or obscure. Our models demonstrate that ecologically stable coexistence of a syntrophic pair does not ensure their evolutionary stability (just as metabolic compatibility does not necessarily entail increased growth or synergies61). In the cross-facilitation case in particular, an initially mutualistic interaction degrades over evolutionary time as selfish cheaters and free-riders invade the system. Most of the syntrophic hypotheses of mitochondrial origin assume an asymmetric setup at the origin. As our results demonstrate, nutritional cross-feeding, even unilateral, can lead to stable syntrophic mutualism. However, it remains an open question whether the initial mitochondriogenetic syntrophy was symmetric71,78 or asymmetric69,70.
On a final note, one cannot exclude that non-mutual syntrophy led to endosymbiosis, i.e. that symbiont uptake came before mutualism. However, one can convincingly argue against it, as we already did10: once inside the host, genetic, dynamical and cell-cycle synchronization issues readily arise that would provide ample cause for exploited parties to disrupt the partnership. We believe that if the initial interaction was metabolic syntrophy, the merger likely happened after cross-feeding became symmetric and mutual (unless there were other factors in play, e.g. phagocytosis74). If syntrophy was already mutually beneficial, then it is in both parties' interest to sort out dynamical issues (via e.g. central control) when becoming physically integrated. We do not see examples of either scenario, but both should be tested in the lab.
We model the dynamics of interacting species via resource-consumer dynamics. There are two primary resources \({R}_{1}\) and \({R}_{2}\) with concentrations \({\rho }_{1}\) and \({\rho }_{2}\), and three consumer species \({N}_{1}\), \({N}_{2}\), and \({N}_{3}\) with densities \({n}_{1}\), \({n}_{2}\), and \({n}_{3}\). Species 1 consumes resource \({R}_{1}\) only, while species 2 and 3 compete for \({R}_{2}\); that is, the two partners neither depend on each other nor compete directly for resources. In the eukaryogenetic context it is generally assumed that the ancestral partners had substantially different metabolisms, relying on different resources, as they belonged to different domains. Additionally, species 1 produces \({X}_{1}\) with concentration \({x}_{1}\), and species 2 produces a different product \({X}_{2}\) with concentration \({x}_{2}\). Species 3 is a potential selfish mutant of species 2, feeding on \({R}_{2}\) (Fig. 3).
In the case of cross-feeding, \({X}_{i}\) is a byproduct waste metabolite that is consumed by the other species (Fig. 1A). For cross-facilitation, \({X}_{i}\) is an enzyme that is not consumed but remains reusable within the local environment19. We assume that the catalytic product is costly to produce, because it is a larger molecule that requires active secretion by the producer (e.g., catalytic enzymes that digest resources externally4,21,22,23). For the sake of simplicity, we assume \({X}_{i}\) is an external enzyme that improves resource consumption for all species (Fig. 1B). It is unlikely that such a costly enzyme is externalized for the sole benefit of others, hence we assume that there is always some private benefit associated with production (see the anammox community in Table S1 for a counterexample). Otherwise, it would always be easy for selfish mutants to benefit from the product without reciprocation, which would leave the producer to pay the full cost of production without any hope of benefit, leading to the disruption of cooperation24,51.
Figure 7 displays the potential evolutionary transitions that can happen in a syntrophic pair. We have explicitly investigated transitions \(1\leftrightarrow 3\) and \(2\leftrightarrow 5\) (SI 8 and 12). We note that all parameters used in the model are assumed to be nonnegative (for parameters, see Supplementary Table S2).
Possible evolutionary transitions (thick black arrows) between different syntrophic interaction topologies of cross-feeding (1, 3, 4) and collaborative feeding (cross-facilitation) (2, 5, 4). Grey and blue arrows indicate trophic interactions, red curves indicate inhibitory and dashed yellow arrows catalytic interactions. There are other possible cases not displayed, but we assume that further topologies are isomorphic to the displayed ones up to a symmetric re-labelling of species and interactions. We omit cases where a species simultaneously exerts both a trophic and a catalytic effect on the partner, as that would require multiple products of the same species and thus lead to topologies that are not isomorphic (up to a re-labelling of species) to any of the depicted ones. Different mutants can appear, inheriting the interaction topology of either species 1 or 2. Panel 1 corresponds to flow-through syntrophy, while panel 3 may represent the recycle type if the end products \(x_{1}\) and \(x_{2}\) represent different states of the same molecule and are stoichiometrically coupled (cf. Fig. 6). Transformative (trophic) interactions (normal arrows) are not expected to evolve into catalytic ones, or vice versa. We have explicitly tested transitions \(1 \leftrightarrow 3\) and \(2\leftrightarrow 5\) in the main text (and \(1\leftrightarrow 4\) in Supplementary Information SI 14).
Model of cross-feeding
We can safely assume fast resource dynamics compared to the dynamics of the species. From this, the resource densities \({\rho }_{1}\) and \({\rho }_{2}\) can be expressed directly as
$$\begin{aligned} {\rho }_{1}&={r}_{1}-{c}_{1}{n}_{1}, \\ {\rho }_{2}&={r}_{2}-{c}_{2}{n}_{2}-{c}_{3}{n}_{3}.\end{aligned}$$
Here \({r}_{i}\) is the maximum, unconsumed equilibrium resource density, and the \({c}_{i}{n}_{i}\) terms measure how much of the resource is locked up in the biomass of species \(i\). In turn, the abundances follow simple consumer-resource dynamics based on the model of52, augmented with the growth benefit received from the metabolic byproducts, with conversion efficiency \({g}_{i}\) for species \(i\). Furthermore, byproduct \({X}_{i}\) is considered a waste product that can accumulate and inhibit the metabolism of species \(i\) in proportion to the concentration of \({X}_{i}\). Therefore, the mortality rate of species \(i\) is increased by \({h}_{i}{x}_{i}\), where all the \({h}_{i}\) are positive constants that describe the strength of inhibition. Thus, the dynamical equations for the three species are as follows:
$$\begin{aligned} \frac{\mathrm{d}{n}_{1}}{\mathrm{d}t} & ={n}_{1}\left({b}_{1}{\rho }_{1}+{g}_{1}{x}_{2}-{d}_{1}-{h}_{1}{x}_{1}\right), \\ \frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}& ={n}_{2}\left({b}_{2}{\rho }_{2}+{g}_{2}{x}_{1}-{d}_{2}-{h}_{2}{x}_{2}\right), \\\frac{\mathrm{d}{n}_{3}}{\mathrm{d}t} &={n}_{3}\left({b}_{3}{\rho }_{2}+{g}_{3}{x}_{1}-{d}_{3}-{h}_{3}{x}_{2}\right), \end{aligned}$$
where \({b}_{i}\) is the conversion constant of resources into the reproduction of species \(i\), and \({d}_{i}\) is the natural death rate of species \(i\). Here we assume that, apart from \({\rho }_{i}\), \({x}_{j}\) is used as an additional resource for species \(i\). Note that species 3 not only competes with species 2 for \({\rho }_{2}\), but may also benefit from the product \({x}_{1}\) produced by species 1. Also, for any species \(i\), the combination \({b}_{i}=0\) and \({g}_{i}>0\) represents obligate dependence on the partner. We ignore this situation, assuming that species are initially free-living and do not depend on obligate metabolic partners.
Due to the fast resource dynamics, we can substitute the expressions for the resources [Eq. (1)] into the above system and rearrange:
$$\begin{aligned} \frac{\mathrm{d}{n}_{1}}{\mathrm{d}t}&={n}_{1}\left(\left({b}_{1}{r}_{1}-{d}_{1}\right)+{g}_{1}{x}_{2}-{b}_{1}{c}_{1}{n}_{1}-{h}_{1}{x}_{1}\right), \\ \frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}&={n}_{2}\left(\left({b}_{2}{r}_{2}-{d}_{2}\right)+{g}_{2}{x}_{1}-{b}_{2}{c}_{2}{n}_{2}-{b}_{2}{c}_{3}{n}_{3}-{h}_{2}{x}_{2}\right), \\ \frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}&={n}_{3}\left(\left({b}_{3}{r}_{2} -{d}_{3}\right)+{g}_{3}{x}_{1}-{b}_{3}{c}_{2}{n}_{2}-{b}_{3}{c}_{3}{n}_{3}-{h}_{3}{x}_{2}\right).\end{aligned}$$
In turn, the metabolite dynamics read
$$\begin{aligned} \frac{\mathrm{d}{x}_{1}}{\mathrm{d}t}&={k}_{1}{n}_{1}-\left({w}_{2}{n}_{2}{x}_{1}+{w}_{3}{n}_{3}{x}_{1}\right)-{\delta }_{1}{x}_{1}, \\ \frac{\mathrm{d}{x}_{2}}{\mathrm{d}t}&={k}_{2}{n}_{2}-{w}_{1}{n}_{1}{x}_{2}-{\delta }_{2}{x}_{2},\end{aligned}$$
where \({k}_{i}\) is the production rate of the byproduct of species \(i\), \({w}_{i}\) is the rate at which species \(i\) consumes the partner's byproduct, and \({\delta }_{i}\) is the rate of decomposition. If we additionally assume fast dynamics for the \({x}_{i}\) as well, we can set \(\frac{\mathrm{d}{x}_{i}}{\mathrm{d}t}=0\) to get the following quasi-equilibrium equations:
$$\begin{aligned} {x}_{1}&=\frac{{k}_{1}{n}_{1}}{ {w}_{2}{n}_{2}+{w}_{3}{n}_{3}+{\delta }_{1}}, \\ {x}_{2}&=\frac{{k}_{2}{n}_{2}}{{w}_{1}{n}_{1}+{\delta }_{2}}.\end{aligned}$$
Substituting these back into [Eq. (3)] yields:
$$\begin{aligned}\frac{\mathrm{d}{n}_{1}}{\mathrm{d}t}&={n}_{1}\left(\left({b}_{1}{r}_{1}-{d}_{1}\right)+{g}_{1}\frac{{k}_{2}{n}_{2}}{{w}_{1}{n}_{1}+{\delta }_{2}}-{b}_{1}{c}_{1}{n}_{1}-{h}_{1}\frac{{k}_{1}{n}_{1}}{{w}_{2}{n}_{2}+{w}_{3}{n}_{3}+{\delta }_{1}}\right), \\ \frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}&={n}_{2}\left(\left({b}_{2}{r}_{2}-{d}_{2}\right)+{g}_{2}\frac{{k}_{1}{n}_{1}}{{w}_{2}{n}_{2}+{w}_{3}{n}_{3}+{\delta }_{1}}-{b}_{2}{c}_{2}{n}_{2}-{b}_{2}{c}_{3}{n}_{3}-{h}_{2}\frac{{k}_{2}{n}_{2}}{{w}_{1}{n}_{1}+{\delta }_{2}}\right), \\ \frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}&={n}_{3}\left(\left({b}_{3}{r}_{2}-{d}_{3}\right)+{g}_{3}\frac{{k}_{1}{n}_{1}}{{w}_{2}{n}_{2}+{w}_{3}{n}_{3}+{\delta }_{1}}-{b}_{3}{c}_{2}{n}_{2}-{b}_{3}{c}_{3}{n}_{3}-{h}_{3}\frac{{k}_{2}{n}_{2}}{{w}_{1}{n}_{1}+{\delta }_{2}}\right). \end{aligned}$$
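For readers who wish to experiment with Eq. (4), the reduced system is straightforward to integrate numerically. The following Python sketch is our own illustration (not the published SI 15 code); parameter names follow Table S2, values follow the Fig. 2 caption, and the mutant's mortality \(d_{3}\) is our choice, set above \(d_{2}\) so that, in line with the results above, the cheater fails to invade.

```python
# Minimal numerical sketch of the reduced cross-feeding model [Eq. (4)].
# Values follow the Fig. 2 caption; d3 > d2 is our illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(b1=1.0, b2=1.0, b3=1.0, r1=1.0, r2=1.0, d1=0.01, d2=0.01, d3=0.02,
         c1=1.0, c2=1.0, c3=1.0, g1=1.0, g2=1.0, g3=1.0, h1=1.0, h2=1.0,
         h3=1.0, k1=0.1, k2=0.1, w1=1.0, w2=1.0, w3=1.0, delta1=0.1, delta2=0.1)

def rhs(t, n, p):
    n1, n2, n3 = n
    # quasi-equilibrium byproduct concentrations, cf. the expressions above
    x1 = p['k1'] * n1 / (p['w2'] * n2 + p['w3'] * n3 + p['delta1'])
    x2 = p['k2'] * n2 / (p['w1'] * n1 + p['delta2'])
    dn1 = n1 * ((p['b1'] * p['r1'] - p['d1']) + p['g1'] * x2
                - p['b1'] * p['c1'] * n1 - p['h1'] * x1)
    dn2 = n2 * ((p['b2'] * p['r2'] - p['d2']) + p['g2'] * x1
                - p['b2'] * p['c2'] * n2 - p['b2'] * p['c3'] * n3 - p['h2'] * x2)
    dn3 = n3 * ((p['b3'] * p['r2'] - p['d3']) + p['g3'] * x1
                - p['b3'] * p['c2'] * n2 - p['b3'] * p['c3'] * n3 - p['h3'] * x2)
    return [dn1, dn2, dn3]

# residents near equilibrium, cheater mutant introduced at low density
sol = solve_ivp(rhs, (0, 2000), [0.5, 0.5, 1e-3], args=(p,), rtol=1e-8)
print(sol.y[:, -1])  # n3 decays towards 0 while n1 and n2 coexist
```

Setting \(d_{3}<d_{2}\) instead reproduces the replacement of species 2 by the mutant seen in the top row of Fig. 2.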
Model of cross-facilitation
In the catalytic case, metabolite \({X}_{i}\) represents a catalytic factor (such as an enzyme) that is costly to externalize but has a positive effect on resource consumption both for the producer and its competitor. Consequently, all products help all species, and they do not impose (self-)inhibition. For a different catalytic topology where enzyme \({X}_{i}\) enables resource \({R}_{i}\) only but makes it available for all species, see SI 13.
Motivated by Michaelis–Menten enzyme kinetics, we use the standard saturation functions for enzymes. We rely on the fast dynamics of resources, just as before, using [Eq. (1)]. Accordingly, the equations for cross-facilitation on \({X}_{1},{X}_{2}\) are:
$$\begin{aligned}\frac{\mathrm{d}{n}_{1}}{\mathrm{d}t}&={n}_{1}\left({\rho }_{1}\left({b}_{1}+\frac{{s}_{11}{x}_{1}}{{m}_{1}+{x}_{1}}+\frac{{s}_{12}{x}_{2}}{{m}_{1}+{x}_{2}}\right)-{d}_{1}\right), \\ \frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}&={n}_{2}\left({\rho }_{2}\left({b}_{2}+\frac{{s}_{21}{x}_{1}}{{m}_{2}+{x}_{1}}+\frac{{s}_{22}{x}_{2}}{{m}_{2}+{x}_{2}}\right)-{d}_{2}\right), \\ \frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}&={n}_{3}\left({\rho }_{2}\left({b}_{3}+\frac{{s}_{31}{x}_{1}}{{m}_{3}+{x}_{1}}+\frac{{s}_{32}{x}_{2}}{{m}_{3}+{x}_{2}}\right)-{d}_{3}\right),\end{aligned}$$
where \({m}_{i}\) is the Michaelis constant of species \(i\) (the enzyme concentration at which the rate enhancement is at half-maximum), and \({s}_{ij}\) is the maximum enhancement of the resource consumption of species \(i\) due to enzyme \(j\). Since enzymes are not consumed, their concentrations are determined by the balance of production and spontaneous decay. The former is proportional to the density of the producer strains, the latter to the actual concentration of the enzyme:
$$\begin{aligned}\frac{\mathrm{d}{x}_{1}}{\mathrm{d}t}&={k}_{1}{n}_{1}-{\delta }_{1}{x}_{1}, \\ \frac{\mathrm{d}{x}_{2}}{\mathrm{d}t}&={k}_{2}{n}_{2}+{k}_{3}{n}_{3}-{\delta }_{2}{x}_{2}.\end{aligned}$$
We again assume fast dynamics for \({X}_{i}\), leading to \({x}_{1}=\frac{{k}_{1}}{{\delta }_{1}}{n}_{1}, {x}_{2}=\frac{{k}_{2}{n}_{2}+{k}_{3}{n}_{3}}{{\delta }_{2}}\) as above. Substituting these into [Eq. (5)], the model is as follows:
$$\begin{aligned}\frac{\mathrm{d}{n}_{1}}{\mathrm{d}t}&={n}_{1}\left({(r}_{1}-{c}_{1}{n}_{1})\left({b}_{1}+\frac{{s}_{11}{k}_{1}{n}_{1}}{{m}_{1}{\delta }_{1}+{k}_{1}{n}_{1}}+\frac{{s}_{12}\left({k}_{2}{n}_{2}+{k}_{3}{n}_{3}\right)}{{m}_{1}{\delta }_{2}+{k}_{2}{n}_{2}+{k}_{3}{n}_{3}}\right)-{d}_{1}\right), \\ \frac{\mathrm{d}{n}_{2}}{\mathrm{d}t}&={n}_{2}\left({(r}_{2}-{c}_{2}{n}_{2}-{c}_{3}{n}_{3})\left({b}_{2}+\frac{{s}_{21}{k}_{1}{n}_{1}}{{m}_{2}{\delta }_{1}+{k}_{1}{n}_{1}}+\frac{{s}_{22}\left({k}_{2}{n}_{2}+{k}_{3}{n}_{3}\right)}{{m}_{2}{\delta }_{2}+{k}_{2}{n}_{2}+{k}_{3}{n}_{3}}\right)-{d}_{2}\right), \\ \frac{\mathrm{d}{n}_{3}}{\mathrm{d}t}&={n}_{3}\left({(r}_{2}-{c}_{2}{n}_{2}-{c}_{3}{n}_{3})\left({b}_{3}+\frac{{s}_{31}{k}_{1}{n}_{1}}{{m}_{3}{\delta }_{1}+{k}_{1}{n}_{1}}+\frac{{s}_{32}\left({k}_{2}{n}_{2}+{k}_{3}{n}_{3}\right)}{{m}_{3}{\delta }_{2}+{k}_{2}{n}_{2}+{k}_{3}{n}_{3}}\right)-{d}_{3}\right).\end{aligned}$$
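A matching sketch for the reduced cross-facilitation model [Eq. (6)] only requires a different right-hand side; the helper below follows the same conventions and caveats as the cross-feeding code above (our illustration, not the published SI 15 code). A non-producing cheater is obtained by setting \(k_{3}=0\) together with the saved cost \(d_{3}<d_{2}\), which reproduces the exclusion of the producer derived above.

```python
# Right-hand side of the reduced cross-facilitation model [Eq. (6)];
# integrate with solve_ivp exactly as in the cross-feeding sketch above.
def rhs_facilitation(t, n, p):
    n1, n2, n3 = n
    e1 = p['k1'] * n1                    # quasi-equilibrium x1, times delta1
    e2 = p['k2'] * n2 + p['k3'] * n3     # quasi-equilibrium x2, times delta2
    def boost(si1, si2, mi):
        # saturating growth enhancement received from the two enzyme pools
        return (si1 * e1 / (mi * p['delta1'] + e1)
                + si2 * e2 / (mi * p['delta2'] + e2))
    rho1 = p['r1'] - p['c1'] * n1
    rho2 = p['r2'] - p['c2'] * n2 - p['c3'] * n3
    dn1 = n1 * (rho1 * (p['b1'] + boost(p['s11'], p['s12'], p['m1'])) - p['d1'])
    dn2 = n2 * (rho2 * (p['b2'] + boost(p['s21'], p['s22'], p['m2'])) - p['d2'])
    dn3 = n3 * (rho2 * (p['b3'] + boost(p['s31'], p['s32'], p['m3'])) - p['d3'])
    return [dn1, dn2, dn3]
```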
To simulate the evolution of cross-feeding and cross-facilitation between the different species, we implemented a numerical version of adaptive dynamics57. First, we assume that there is an underlying trait, \(z\), whose value determines the death rate \(d\) and the efficiency of byproduct consumption \(g\) (or, in the case of cross-facilitation, the production efficiency \(k\)). Second, we assume that there is a trade-off between these quantities: a higher byproduct consumption (or enzyme production) efficiency can only be attained at the cost of an increased mortality rate. Such a trade-off can be implemented via the following equations (for details, see SI 7):
$$\begin{aligned}g\left(z\right)&=\frac{1}{2}\left(1+\mathrm{tanh}\left(\frac{z-\overline{z}}{\sigma }\right)\right), \\ d\left(z\right)&=\eta \mathrm{max}\left(0,z\right).\end{aligned}$$
With these choices, the uptake rate \(g(z)\) varies between 0 and 1, approaching 0 for very large negative \(z\) and approaching 1 for very large positive \(z\), following a sigmoidal curve (Fig. S5). This function therefore expresses the fact that uptake rates cannot be increased ad infinitum: there are diminishing returns on increasing \(z\) beyond some point. In turn, the death rate \(d(z)\) increases linearly with \(z\) as long as \(z\) is positive; for \(z\le 0\) it stays at its lowest biologically meaningful value of 0.
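In code, the trade-off pair of Eq. (7) amounts to two short functions (our sketch; \(\overline{z}\), \(\sigma\) and \(\eta\) default to the values of the Fig. 4 caption):

```python
import numpy as np

def g(z, zbar=0.5, sigma=0.2):
    # sigmoidal uptake efficiency of Eq. (7), saturating at 1 for large z
    return 0.5 * (1.0 + np.tanh((z - zbar) / sigma))

def d(z, eta=0.1):
    # mortality of Eq. (7): linear for z > 0, floored at 0 otherwise
    return eta * np.maximum(0.0, z)
```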
Simulated evolution then proceeds by first initializing species 1 and 2 with different \(z\) values, with species 2 starting out with a low \(z\) (implying negligibly low cross-feeding). In the basic version of the simulation, only species 2 evolves; we checked what happens when both species evolve and obtained qualitatively the same results. We then generate a random mutant whose trait is similar to that of the original species 2 and introduce it into the community at a low initial density. We then run the dynamics until we reach equilibrium, at which point we remove those species whose densities dropped below an extinction threshold. Of the remaining mutants, we pick one, randomly mutate its trait value, and introduce a new mutant with that trait at a small invasion density—and so on. After several iterations of this procedure, we end up with a community potentially consisting of several species. For further details, see SI 7, 8 and 12. The code to reproduce the adaptive dynamics results is included as supplementary material (see SI 15).
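The loop itself is compact. The sketch below mirrors the verbal description (the published SI 15 code remains authoritative); `community_rhs` is our own multi-strain extension of Eq. (4), in which all species-2-type strains share resource \(R_{2}\) and, as a simplifying assumption, the pooled waste \(X_{2}\) self-inhibits every strain equally, while each strain's uptake and mortality follow \(g(z)\) and \(d(z)\) from Eq. (7) above. Default values follow the Fig. 4 caption.

```python
from scipy.integrate import solve_ivp  # reuses np, g and d from the snippet above

def community_rhs(n, z, z1=0.2, b1=1.0, b2=0.1, r1=1.0, r2=1.0, c1=1.0,
                  c2=0.1, k1=1.0, k2=0.1, w=1.0, delta1=0.1, delta2=0.1,
                  h1=1.0, h2=1.0):
    # species 1 (fixed trait z1) is n[0]; n[1:] are species-2 strains, traits z
    n1, ns = n[0], np.asarray(n[1:])
    x1 = k1 * n1 / (w * ns.sum() + delta1)   # waste of species 1
    x2 = k2 * ns.sum() / (w * n1 + delta2)   # pooled waste of the strains
    dn1 = n1 * (b1 * (r1 - c1 * n1) + g(z1) * x2 - d(z1) - h1 * x1)
    dns = ns * (b2 * (r2 - c2 * ns.sum()) + g(z) * x1 - d(z) - h2 * x2)
    return np.concatenate([[dn1], dns])

rng = np.random.default_rng(1)
z = np.array([0.0])              # strain traits (species 1 does not evolve)
n = np.array([0.5, 0.5])         # densities: species 1, then the strains
t_inv, n_inv, mu_sd, n_theta, G = 1e4, 1e-2, 1e-2, 1e-3, 300

for _ in range(G):
    parent = rng.integers(len(z))            # mutate a surviving strain
    z = np.append(z, z[parent] + rng.normal(0.0, mu_sd))
    n = np.append(n, n_inv)                  # introduce the mutant rarely
    sol = solve_ivp(lambda t, y: community_rhs(y, z), (0, t_inv), n, rtol=1e-8)
    n = sol.y[:, -1]
    keep = n[1:] > n_theta                   # prune strains below threshold
    z, n = z[keep], np.concatenate([n[:1], n[1:][keep]])
```

Tracking \(z\) across generations reproduces trajectories of the kind shown in Fig. 4A.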
All data generated or analysed during this study can be reproduced with the code provided as a set of Supplementary Information files (see SI 15).
The code to reproduce all the data (and figures) generated and analysed during this study (up to a change in the seed of the pseudorandom number generators) is provided as a set of Supplementary Information files (see SI 15).
Nadell, C. D., Drescher, K. & Foster, K. R. Spatial structure, cooperation and competition in biofilms. Nat. Rev. Microbiol. 14, 589–600 (2016).
Palmer, J. D. & Foster, K. R. Bacterial species rarely work together. Science 376, 581–582 (2022).
Pande, S. & Kost, C. Bacterial unculturability and the formation of intercellular metabolic networks. Trends Microbiol. 25, 349–361 (2017).
Nadell, C. D., Xavier, J. B. & Foster, K. R. The sociobiology of biofilms. FEMS Microbiol. Rev. 33, 206–224 (2009).
Fritts, R. K., McCully, A. L. & McKinlay, J. B. Extracellular metabolism sets the table for microbial cross-feeding. Microbiol. Mol. Biol. Rev. 85, 135 (2021).
D'Souza, G. et al. Ecology and evolution of metabolic cross-feeding interactions in bacteria. Nat. Prod. Rep. 35, 455–488 (2018).
Libby, E., Hébert-Dufresne, L., Hosseini, S.-R. & Wagner, A. Syntrophy emerges spontaneously in complex metabolic systems. PLoS Comput. Biol. 15, e1007169 (2019).
Staley, J. T. & Konopka, A. Measurement of in situ activities of nonphotosynthetic microorganisms in aquatic and terrestrial habitats. Annu. Rev. Microbiol. 39, 321–346 (1985).
Zachar, I. Closing the energetics gap. Nat. Ecol. Evol. https://doi.org/10.1038/s41559-022-01839-3 (2022).
Zachar, I. & Boza, G. Endosymbiosis before eukaryotes: mitochondrial establishment in protoeukaryotes. Cell. Mol. Life Sci. 77, 3503–3523. https://doi.org/10.1007/s00018-020-03462-6 (2020).
Zachar, I. & Szathmáry, E. Breath-giving cooperation: critical review of origin of mitochondria hypotheses. Biol. Direct 12, 19. https://doi.org/10.1186/s13062-017-0190-5 (2017).
Booth, A. & Doolittle, W. F. Eukaryogenesis, how special really?. Proc. Natl. Acad. Sci. 112, 10278–10285 (2015).
Morris, B. E. L., Henneberger, R., Huber, H. & Moissl-Eichinger, C. Microbial syntrophy: Interaction for the common good. FEMS Microbiol. Rev. 37, 384–406 (2013).
Szathmáry, E. On the propagation of a conceptual error concerning hypercycles and cooperation. J. Syst. Chem. 4, 2208 (2013).
Seth, E. C. & Taga, M. E. Nutrient cross-feeding in the microbial world. Front. Microbiol. 5, 350 (2014).
Piccardi, P., Vessman, B. & Mitri, S. Toxicity drives facilitation between 4 bacterial species. Proc. Natl. Acad. Sci. 116, 15979–15984 (2019).
Yurtsev, E. A., Conwill, A. & Gore, J. Oscillatory dynamics in a bacterial cross-protection mutualism. Proc. Natl. Acad. Sci. 113, 6236–6241 (2016).
Kehe, J. et al. Positive interactions are common among culturable bacteria. Sci. Adv. 7, 45 (2021).
Momeni, B., Xie, L. & Shou, W. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions. eLife 6, e25051 (2017).
Zengler, K. & Zaramela, L. S. The social network of microorganisms: How auxotrophies shape complex communities. Nat. Rev. Microbiol. 16, 383–390 (2018).
Koschwanez, J. H., Foster, K. R. & Murray, A. W. Sucrose utilization in budding yeast as a model for the origin of undifferentiated multicellularity. PLoS Biol. 9, e1001122 (2011).
Ciofu, O., Beveridge, T. J., Kadurugamuwa, J., Walther-Rasmussen, J. & Høiby, N. Chromosomal beta-lactamase is packaged into membrane vesicles and secreted from Pseudomonas aeruginosa. J. Antimicrob. Chemother. 45, 9–13 (2000).
Xenophontos, C., Harpole, W. S., Küsel, K. & Clark, A. T. Cheating promotes coexistence in a two-species one-substrate culture model. Front. Ecol. Evol. 9, 78006 (2022).
West, S. A., Diggle, S. P., Buckling, A., Gardner, A. & Griffin, A. S. The social lives of microbes. Annu. Rev. Ecol. Evol. Syst. 38, 53–77 (2007).
Flemming, H.-C. & Wingender, J. The biofilm matrix. Nat. Rev. Microbiol. 8, 623–633 (2010).
Kümmerli, R. & Brown, S. P. Molecular and regulatory properties of a public good shape the evolution of cooperation. Proc. Natl. Acad. Sci. 107, 18921–18926 (2010).
Griffin, A. S., West, S. A. & Buckling, A. Cooperation and competition in pathogenic bacteria. Nature 430, 1024–1027 (2004).
Kramer, J., Özkaya, Ö. & Kümmerli, R. Bacterial siderophores in community and host interactions. Nat. Rev. Microbiol. 18, 152–163 (2019).
van der Meij, A., Worsley, S. F., Hutchings, M. I. & van Wezel, G. P. Chemical ecology of antibiotic production by actinomycetes. FEMS Microbiol. Rev. 41, 392–416 (2017).
Kümmerli, R., Schiessl, K. T., Waldvogel, T., McNeill, K. & Ackermann, M. Habitat structure and the evolution of diffusible siderophores in bacteria. Ecol. Lett. 17, 1536–1544 (2014).
Jautzus, T., van Gestel, J. & Kovács, Á. T. Complex extracellular biology drives surface competition in Bacillus subtilis. ISME J. 16, 2320–2328 (2022).
Sachs, J. L., Mueller, U. G., Wilcox, T. P. & Bull, J. J. The evolution of cooperation. Q. Rev. Biol. 79, 135–160 (2004).
Hillesland, K. L. & Stahl, D. A. Rapid evolution of stability and productivity at the origin of a microbial mutualism. Proc. Natl. Acad. Sci. 107, 2124–2129 (2010).
Bruno, J. F., Stachowicz, J. J. & Bertness, M. D. Inclusion of facilitation into ecological theory. Trends Ecol. Evol. 18, 119–125 (2003).
Gore, J., Youk, H. & van Oudenaarden, A. Snowdrift game dynamics and facultative cheating in yeast. Nature 459, 253–256 (2009).
Sorg, R. A. et al. Collective resistance in microbial communities by intracellular antibiotic deactivation. PLoS Biol. 14, e2000631 (2016).
Karray, F. et al. Extracellular hydrolytic enzymes produced by halophilic bacteria and archaea isolated from hypersaline lake. Mol. Biol. Rep. 45, 1297–1309 (2018).
Datta, M. S., Sliwerska, E., Gore, J., Polz, M. F. & Cordero, O. X. Microbial interactions lead to rapid micro-scale successions on model marine particles. Nat. Commun. 7, 11965 (2016).
Tarnita, C. E. The ecology and evolution of social behavior in microbes. J. Exp. Biol. 220, 18–24 (2017).
Özkaya, Ö., Xavier, K. B., Dionisio, F. & Balbontín, R. Maintenance of microbial cooperation mediated by public goods in single- and multiple-trait scenarios. J. Bacteriol. 199, 22 (2017).
Yang, D.-D. et al. Fitness and productivity increase with ecotypic diversity among Escherichia coli strains that coevolved in a simple, constant environment. Appl. Environ. Microbiol. 86, 8 (2020).
Pande, S. et al. Fitness and stability of obligate cross-feeding interactions that emerge upon gene loss in bacteria. ISME J. 8, 953–962 (2013).
Zhou, K., Qiao, K., Edgar, S. & Stephanopoulos, G. Distributing a metabolic pathway among a microbial consortium enhances production of natural products. Nat. Biotechnol. 33, 377–383 (2015).
Harcombe, W. R., Chacón, J. M., Adamowicz, E. M., Chubiz, L. M. & Marx, C. J. Evolution of bidirectional costly mutualism from byproduct consumption. Proc. Natl. Acad. Sci. 115, 12000–12004 (2018).
Summers, Z. M. et al. Direct exchange of electrons within aggregates of an evolved syntrophic coculture of anaerobic bacteria. Science 330, 1413–1415 (2010).
Maddamsetti, R., Lenski, R. E. & Barrick, J. E. Adaptation, clonal interference, and frequency-dependent interactions in a long-term evolution experiment with Escherichia coli. Genetics 200, 619–631 (2015).
Gerrish, P. J. & Lenski, R. E. The fate of competing beneficial mutations in an asexual population. Genetica 102, 127–144 (1998).
Popat, R. et al. Quorum-sensing and cheating in bacterial biofilms. Proc. R. Soc. B 279, 4765–4771 (2012).
Rainey, P. B. & Rainey, K. Evolution of cooperation and conflict in experimental bacterial populations. Nature 425, 72–74 (2003).
Hardin, G. Tragedy of the commons. Science 162, 1243 (1968).
West, S. A., Cooper, G. A., Ghoul, M. B. & Ten Griffin, A. S. recent insights for our understanding of cooperation. Nat. Ecol. Evol. 5, 419–430 (2021).
MacArthur, R. Species packing and competitive equilibrium for many species. Theor. Popul. Biol. 1, 1–11 (1970).
Oliveira, N. M., Niehus, R. & Foster, K. R. Evolutionary limits to cooperation in microbial communities. Proc. Natl. Acad. Sci. 111, 17941–17946 (2014).
Tilman, D. Resource Competition and Community Structure. Monographs in Population Biology, Vol. 17 (Princeton University Press, 1982).
Ferenci, T. Trade-off mechanisms shaping the diversity of bacteria. Trends Microbiol. 24, 209–223 (2016).
Rozen, D. E., Philippe, N., de Visser, J. A., Lenski, R. E. & Schneider, D. Death and cannibalism in a seasonal environment facilitate bacterial coexistence. Ecol. Lett. 12, 34–44 (2009).
Brännström, Å., Johansson, J. & von Festenberg, N. The Hitchhiker's Guide to Adaptive Dynamics. Games 4, 304–328 (2013).
Article MATH Google Scholar
Ramin, K. I. & Allison, S. D. Bacterial tradeoffs in growth rate and extracellular enzymes. Front. Microbiol. 10, 2956 (2019).
Imachi, H. et al. Isolation of an archaeon at the prokaryote–eukaryote interface. Nature 577, 519–525 (2020).
Wintermute, E. H. & Silver, P. A. Emergent cooperation in microbial metabolism. Mol. Syst. Biol. 6, 407 (2010).
Libby, E., Kempes, C. & Okie, J. Metabolic compatibility and the rarity of prokaryote endosymbioses. BioRxiv https://doi.org/10.1101/2022.04.14.488272 (2022).
Pauli, B., Oña, L., Hermann, M. & Kost, C. Obligate mutualistic cooperation limits evolvability. Nat. Commun. 13, 27630 (2022).
Oña, L. & Kost, C. Cooperation increases robustness to ecological disturbance in microbial cross-feeding networks. Ecol. Lett. 25, 1410–1420 (2022).
Machado, D. et al. Polarization of microbial communities between competitive and cooperative metabolism. Nat. Ecol. Evol. 5, 195–203 (2021).
Mee, M. T., Collins, J. J., Church, G. M. & Wang, H. H. Syntrophic exchange in synthetic microbial communities. Proc. Natl. Acad. Sci. 111, E2149–E2156 (2014).
Goldford, J. E. et al. Emergent simplicity in microbial community assembly. Science 361, 469–474 (2018).
McCutcheon, J. P. The genomics and cell biology of host-beneficial intracellular infections. Annu. Rev. Cell Dev. Biol. 37, 115–142 (2021).
Sousa, F. L., Neukirchen, S., Allen, J. F., Lane, N. & Martin, W. F. Lokiarchaeon is hydrogen dependent. Nat. Microbiol. 1, 5 (2016).
Spang, A. et al. Proposal of the reverse flow model for the origin of the eukaryotic cell based on comparative analyses of Asgard archaeal metabolism. Nat. Microbiol. 4, 1138–1148 (2019).
Martin, W. & Müller, M. The hydrogen hypothesis for the first eukaryote. Nature 392, 37–41 (1998).
López-García, P. & Moreira, D. The Syntrophy hypothesis for the origin of eukaryotes revisited. Nat. Microbiol. 5, 655–667 (2020).
Mills, D. B. et al. Eukaryogenesis and oxygen in Earth history. Nat. Ecol. Evol. 6, 520–532 (2022).
Liu, Y. et al. Expanded diversity of Asgard archaea and their relationships with eukaryotes. Nature 593, 553–557 (2021).
Zachar, I., Szilágyi, A., Számadó, S. & Szathmáry, E. Farming the mitochondrial ancestor as a model of endosymbiotic establishment by natural selection. Proc. Natl. Acad. Sci. USA. 115, E1504–E1510. https://doi.org/10.1073/pnas.1718707115 (2018).
Cavalier-Smith, T. & Chao, E.E.-Y. Multidomain ribosomal protein trees and the planctobacterial origin of neomura (eukaryotes, archaebacteria). Protoplasma https://doi.org/10.1007/s00709-019-01442-7 (2020).
Searcy, D. G. Nutritional syntrophies and consortia as models for the origin of mitochondria. Symb. Mech. Model Syst. 1, 163–183. https://doi.org/10.1007/0-306-48173-1_10 (2002).
Müller, N., Timmers, P., Plugge, C. M., Stams, A. J. M. & Schink, B. Syntrophy in methanogenic degradation. Endosymb. Methanog. Archaea 1, 153–192. https://doi.org/10.1007/978-3-319-98836-8_9 (2018).
Searcy, D. G. Metabolic integration during the evolutionary origin of mitochondria. Cell Res. 13, 229–238 (2003).
Flemming, H.-C. & Wuertz, S. Bacteria and archaea on Earth and their abundance in biofilms. Nat. Rev. Microbiol. 17, 247–260 (2019).
Spang, A. et al. Asgard archaea are the closest prokaryotic relatives of eukaryotes. PLoS Genet. 14, e1007080 (2018).
Burns, J. A., Pittis, A. A. & Kim, E. Gene-based predictive models of trophic modes suggest Asgard archaea are not phagocytotic. Nat. Ecol. Evol. 2, 697–704 (2018).
Seitz, K. W. et al. Asgard archaea capable of anaerobic hydrocarbon cycling. Nat. Commun. 10, 1 (2019).
Jimenez, P. & Scheuring, I. Density-dependent private benefit leads to bacterial mutualism. Evolution 75, 1619–1635. https://doi.org/10.1111/evo.14241 (2021).
Preussger, D., Giri, S., Muhsal, L. K., Oña, L. & Kost, C. Reciprocal fitness feedbacks promote the evolution of mutualistic cooperation. Curr. Biol. 30, 3580-3590.e7 (2020).
Monaco, H. et al. Spatial-temporal dynamics of a microbial cooperative behavior resistant to cheating. Nat. Commun. 13, 3580 (2022).
Yanni, D., Márquez-Zacarias, P., Yunker, P. J. & Ratcliff, W. C. Drivers of spatial structure in social microbial communities. Curr. Biol. 29, 545–550 (2019).
The authors acknowledge support from the Hungarian Scientific Research Fund under grant numbers NKFIH #132250 (G.B.), #128289 (G.B., I.S.), #140901 (I.Z.), and #129848 (I.Z.), from the MTA Bolyai János Scholarship and from the ÚNKP Bolyai + Scholarship #BO/00570/22/8 (I.Z.), and from the Volkswagen Stiftung initiative "Leben?-Ein neuer Blick der Naturwissenschaften auf die grundlegenden Prinzipien des Lebens" (I.Z.). This research has received funding from the Horizon 2020 Programme of the European Union within the FindingPheno project under grant agreement No 952914 (G.B., I.S.).
These authors contributed equally: G. Boza and G. Barabás.
Institute of Evolution, MTA Centre for Ecological Research, Budapest, Hungary
G. Boza, G. Barabás, I. Scheuring & I. Zachar
ASA Program, International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria
G. Boza
Centre for Social Sciences, Budapest, Hungary
Division of Ecological and Environmental Modeling, Linköping University, Linköping, Sweden
G. Barabás
Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Budapest, Hungary
I. Zachar
Parmenides Foundation, Centre for the Conceptual Foundation of Science, Pullach Im Isartal, Germany
I. Scheuring
I.Z. and G.B. conceived the idea, Gy.B., G.B., I.S. and I.Z. conceptualized and designed the model, I.S., Gy.B. and I.Z. analyzed the models analytically, Gy.B. designed the adaptive dynamics model, Gy.B. and I.Z. implemented and ran the adaptive dynamics experiments, G.B. collected and evaluated examples, G.B. and I.Z. made the figures. I.Z. prepared the supplementary code files that can reproduce all data. All authors contributed to drafting, writing, and editing of the manuscript.
Correspondence to I. Zachar.
Boza, G., Barabás, G., Scheuring, I. et al. Eco-evolutionary modelling of microbial syntrophy indicates the robustness of cross-feeding over cross-facilitation. Sci Rep 13, 907 (2023). https://doi.org/10.1038/s41598-023-27421-w
Analysis of a Polycomb Group Protein Defines Regions That Link Repressive Activity on Nucleosomal Templates to In Vivo Function
Ian F. G. King, Richard B. Emmons, Nicole J. Francis, Brigitte Wild, Jürg Müller, Robert E. Kingston, Chao-ting Wu
Ian F. G. King
Department of Molecular Biology, Massachusetts General Hospital, Boston, Massachusetts 02114; Department of Genetics, Harvard Medical School, Boston, Massachusetts 02115
Richard B. Emmons
Department of Genetics, Harvard Medical School, Boston, Massachusetts 02115
Nicole J. Francis
Brigitte Wild
EMBL, Gene Expression Programme, Meyerhofstrasse 1, 69120 Heidelberg, Germany
Jürg Müller
Robert E. Kingston
For correspondence: [email protected] [email protected]
Chao-ting Wu
DOI: 10.1128/MCB.25.15.6578-6591.2005
Polycomb group (PcG) genes propagate patterns of transcriptional repression throughout development. The products of several such genes are part of Polycomb repressive complex 1 (PRC1), which inhibits chromatin remodeling and transcription in vitro. Genetic and biochemical studies suggest the product of the Posterior sex combs (Psc) gene plays a central role in both PcG-mediated gene repression in vivo and PRC1 activity in vitro. To dissect the relationship between the in vivo and in vitro activities of Psc, we identified the lesions associated with 11 genetically characterized Psc mutations and asked how the corresponding mutant proteins affect Psc activity on nucleosomal templates in vitro. Analysis of both single-mutant Psc proteins and recombinant complexes containing mutant protein revealed that Psc encodes at least two functions, complex formation and the inhibition of remodeling and transcription, which require different regions of the protein. There is an excellent correlation between the in vivo phenotypes of mutant Psc alleles and the structure and in vitro activities of the corresponding proteins, suggesting that the in vitro activities of PRC1 reflect essential functions of Psc in vivo.
The genes of the Polycomb group (PcG) and trithorax group (trxG) maintain the spatially restricted patterns of homeotic gene expression in the embryo. For the most part, PcG proteins propagate patterns of transcriptional repression, while trxG proteins maintain patterns of transcriptional activation (6, 20, 43, 56). Accordingly, mutations in PcG genes cause derepression of homeotic genes, and those in trxG genes fail to maintain transcriptional activation, both of which result in homeotic transformations. We are interested in understanding the mechanism by which PcG genes maintain silencing.
Genetic and biochemical studies have shown that PcG proteins function as part of multiprotein complexes that act to repress gene expression. Repression has been proposed to occur by creating a chromatin structure refractory to remodeling and transcription, by directly inhibiting the general transcription machinery, or by a combination of mechanisms (9, 15, 19, 22, 30, 36, 38, 40, 44, 47). Two functionally distinct PcG complexes have been purified from Drosophila embryos, ESC/E(Z) and Polycomb Repressive Complex 1 (PRC1). The ESC/E(Z) complex can methylate histone H3 at lysine 27 through the E(Z) SET domain (10, 13, 31, 41), and this methylation has been proposed to act as a molecular mark that targets other PcG complexes to the correct location (10, 11, 13, 31, 41). PRC1 contains stoichiometric amounts of the PcG proteins Polycomb (Pc), Polyhomeotic (Ph), dRing1 (dRing1/Sce) (24, 26), and Posterior sex combs (Psc) as well as a number of other proteins (51, 52). Unlike ESC/E(Z), PRC1 has no known enzymatic activities. However, in vitro studies show that it can inhibit both chromatin remodeling by the hSWI/SNF complex (22, 52) and transcription of a chromatin template by RNA Pol II (30). A PRC1 core complex (PCC) consisting of Pc, Ph, dRing1, and Psc can recapitulate the in vitro inhibition activities of PRC1, suggesting these four proteins confer the basic activities of PRC1. Remarkably, the inhibitory activities of this core complex can be recapitulated by high concentrations of Psc alone, suggesting that Psc plays a central role in mediating PCC function. Although Ph can also inhibit chromatin remodeling and transcription in our in vitro assays, it does so less efficiently than does Psc, suggesting that its contribution is minor compared to that of Psc.
Psc consists of 1,603 amino acids and contains an N-terminal region of 246 amino acids that is homologous to the products of the vertebrate proto-oncogene bmi-1 and the tumor suppressor gene mel-18, as well as the Drosophila genes Suppressor 2 of zeste [Su(z)2] and lethal (3)73Ah (7, 28, 59). This region of homology contains a C3HC4 Ring finger motif and a helix-turn-helix-turn-helix motif (HTHTH) and, in the case of Su(z)2, has been shown to be involved in locus-specific binding of polytene chromosomes (48, 53). Studies of bmi-1 and mel-18 have implicated the Ring finger in subnuclear localization and cellular transformation (12), and Ring finger motifs in other proteins have been demonstrated to function as E3 ubiquitin or SUMO ligases (18). The HTHTH motif has been implicated in skeletal transformation due to overexpression of Bmi-1 (3) and transcriptional repression by tethered Bmi-1 (12). Two-hybrid studies in yeast suggest that both of these protein motifs may be involved in mediating protein-protein interactions between Psc, Pc, and Ph (32); however, these interactions have not been addressed within the context of an intact complex. Outside of the N-terminal homology region, Psc has no recognizable structural motifs and no obvious homology to other proteins. However, the C-terminal regions of Psc and Su(z)2 have similar amino acid compositions, suggesting they may contain a conserved function that cannot be easily discerned from the amino acid sequence (7, 59). This function may involve transcriptional repression or protein-protein interactions, as has been suggested by transgene analyses of Su(z)2 (9). Interestingly, Psc appears to share functional similarities with Su(z)2 and lies adjacent to it in the Su(z)2 Complex [Su(z)2-C] (for example, see references 5, 45, 54, 57, and 62).
Psc has been genetically characterized, and 11 ethylmethane sulfonate (EMS)-induced alleles have been grouped into two phenotypic classes (62). The first is made up of six strong loss-of-function alleles (Psc1, Psch27, Psc1445, Psce25, Psce24, and Psce23) that are lethal when heterozygous with one another as well as with deficiencies for the locus. The second class consists of moderate loss-of-function alleles (Psch28, Psch30, Psc1433, Psce22, and Pscepb) that show various degrees of viability when heterozygous with each other, strong loss-of-function alleles, or deficiencies. Based on several genetic assays, the two classes of alleles can be ordered into a phenotypic series progressing from most to least severe: Psc1 > [Psch27, Psc1445, Psce25] > Psce24 > Psce23 > Psch28 ≫ Psch30 ≫ [Psc1433, Psce22, Pscepb] (alleles within brackets cannot be ordered relative to each other). Three aspects of this series merit further discussion.
First, Psc1 displays complex genetic behavior. Although considered loss-of-function with regard to viability, it also displays dominant and gain-of-function attributes (for example, see references 1, 29, 61, and 62). Second, Psch28 represents a transition point between the strong and moderate alleles (62). Like the moderate alleles, it shows variable viability when heterozygous with other moderate alleles and at least one strong allele, Psce23. On the other hand, it resembles the strong alleles by its lethality when heterozygous with deficiencies. Third, moderate alleles display intragenic complementation when heterozygous with certain members of the strong class (62). This complementation suggests that Psc contains at least two genetically separable functional domains.
To characterize the structure of Psc, we sequenced the EMS-induced alleles. Most contain nonsense mutations and collectively form a deletion series that reveals an excellent correlation with the phenotypic series. To determine the relationship between the in vitro inhibitory activities of Psc and PRC1 and their in vivo function, we characterized the mutant proteins with regard to their in vitro impact on chromatin remodeling and transcription, both as individual subunits and in the context of the PCC. Our studies delineate distinct functional regions within Psc that are important for its in vitro inhibitory activities as well as its in vivo functions, suggesting that the in vitro activities are a good indication of the role of Psc in vivo.
Drosophila stocks. Psc alleles and the marked chromosomes and stocks that carry them have been described previously (33, 42, 62). The CyO-19 green fluorescent protein (GFP)-marked balancer chromosome (CyO, P{w[+mC]=GAL4-Kr.C}DC3, P{w[+mC]=UAS-GFP.S65T}DC7) was obtained from the Bloomington Stock Center.
Molecular analysis of Psc alleles. Psc alleles were kept in stock heterozygous with the CyO-19 balancer chromosome. To obtain genomic DNA for sequencing, 2-h egg lay collections were made from each heterozygous stock, aged 16 to 20 h, and then homozygous mutant embryos (non-GFP) were collected in groups of 25, placed in 1.5-ml Eppendorf tubes, and frozen at −80°C. Genomic preparations were made from each batch of embryos using the Berkeley Drosophila Genome Project protocol, and then PCR was performed using primer sets designed to amplify each exon as well as at least 50 bp of flanking intronic DNA. PCR products were gel purified and cleaned with the QIAGEN gel extraction kit, and then double-stranded sequence was collected across all exons via direct sequencing from the gel-purified fragments using the Dana Farber/Harvard Cancer Center sequencing center. Because of their different genetic backgrounds, the sequences of Psc1 (GenBank DQ022536), Psc1433 (DQ022537), and Psc1445 (DQ022538) were compared to the Psc cDNA (GenBank X59275) (7), while the sequences of the remaining alleles (DQ022539 to DQ022546) were compared to a Psc cDNA compiled from genomic sequence (AE003820.3). These reference sequences contain polymorphisms, resulting in ∼20 amino acid differences between the encoded proteins. Furthermore, because the polymorphisms include in-frame deletions and insertions, the respective amino acid sequences do not align precisely. We describe all mutant proteins using the amino acid numbering associated with X59275, because this sequence was used in our baculovirus vectors (see below). Psc1, Psc1433, and Psc1445 share three amino acid polymorphisms (K519N, A545T, and S577A) not present in either reference sequence, while Psc1433 and Psc1445 also have a Q1167L polymorphism. Psch28 has two missense mutations (I762V and G1041E) located C-terminal to a stop codon, and several alleles have silent nucleotide changes (R.B.E., data not shown).
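For readers who want to make the allele-calling step concrete, the comparison reduces to aligning each conceptually translated allele against the reference protein and reporting differences in the K111*/C268Y notation used throughout this paper. The following minimal Python sketch illustrates the idea on invented toy sequences; it assumes pre-aligned, equal-length protein strings and is not the pipeline actually used in this study.

def describe_changes(reference: str, mutant: str) -> list[str]:
    # Report amino acid differences between two pre-aligned protein sequences.
    # '*' denotes a stop codon, so e.g. 'K111*' would be a nonsense mutation
    # and 'C268Y' a missense mutation.
    changes = []
    for pos, (ref_aa, mut_aa) in enumerate(zip(reference, mutant), start=1):
        if ref_aa != mut_aa:
            changes.append(f"{ref_aa}{pos}{mut_aa}")
    return changes

# Toy example (hypothetical sequences): one missense and one nonsense change.
print(describe_changes("MKTACDE", "MKYAC*E"))  # ['T3Y', 'D6*']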
Western analyses. Approximately 50 embryos collected from flies heterozygous or homozygous for Psc mutations were suspended in 6× sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) sample buffer (6% SDS, 400 mM Tris [pH 6.8], 30% glycerol, 30 mM EDTA, 0.0012% bromphenol blue) and boiled for 5 min before loading on an 8% SDS-PAGE gel. Blots of proteins from Psc1445 homozygous embryos were stained with Amido Black (Sigma) after blotting. Psc was visualized using a mixture of two monoclonal anti-Psc antibodies, 6E8 and 7E10, which were a gift from P. N. Adler (37, 54).
Protein expression and purification. Mutagenesis was carried out using the QuikChange in vitro mutagenesis kit (Stratagene). The cDNA sequence for Psc was inserted into the pFastBac vector for baculovirus and mutagenized directly (Psc1, Psch27, Psce24) or replaced with segments of the cDNA that had been subcloned into pBluescript and mutagenized (all other alleles). All mutations were verified by sequencing, and the Psce23 mutation was verified further by directly sequencing baculoviral DNA. A FLAG tag was placed at the N termini of Psc proteins as indicated below. Psc and PCC proteins were expressed in Sf9 cells and purified as described previously (22). All purifications used buffers with 10 μM ZnCl2. Washes contained 0.05% NP-40. Proteins were eluted in BC300 with 0.4 mg/ml FLAG peptide and 10 μM ZnCl2.
Functional assays. Wild-type and mutant Psc and PCC were assayed for inhibition of chromatin remodeling as previously described (22). Reactions contained between 0.01 ng and 1 ng end-labeled 5S array and 9 to 10 ng unlabeled 5S array templates. Psc and PCC were diluted in BC300 with 1 mg/ml bovine serum albumin and incubated with the template for 20 min before addition of hSwi/Snf. The assays for inhibition of transcription were carried out as previously described (30), using 10 ng of 5S array templates.
Analyses of Psc proteins in wing imaginal disks. cDNA fragments encoding PSC1-572, PSC1-872, or PSC456-1603 were cloned into the CaSpeR-hs vector to obtain hs-Psc1-572, hs-Psc1-872, and hs-Psc456-1603 (plasmid maps are available from J.M. on request), and several independent transformant lines were isolated for each construct. Transgenic lines carrying a hs-Psc construct have been described previously (5). To determine the ability of the transgenes to repress homeotic gene activity in clones of cells homozygous for Su(z)21.b8, a deficiency that removes Psc and Su(z)2 (5, 8), the following strains were made: FRT42 Su(z)21.b8/CyO; hs-Psc, FRT42 Su(z)21.b8/CyO; hs-Psc1-572, FRT42 Su(z)21.b8/CyO; hs-Psc1-872, and FRT42 Su(z)21.b8/CyO; hs-Psc456-1603. Marked clones of Su(z)21.b8 mutant cells were then generated by crossing flies from these strains to hs-flp; FRT42 hs-nGFP and inducing clones during the first instar stage. Clones were analyzed 96 h after induction by staining with antibodies against Abd-B and GFP as previously described (5). Rescue function of the hs-Psc transgenes was analyzed by applying a 1-h heat shock every 12 h over a 96-h period, starting at the time of clone induction. In animals carrying the hs-Psc or the hs-Psc1-872 transgene, Abd-B was tightly silenced in all Su(z)21.b8 clones in every animal analyzed. In animals carrying the hs-Psc1-572 or hs-Psc456-1603 transgene, misexpression of Abd-B in Su(z)21.b8 mutant clones was indistinguishable from the misexpression observed in Su(z)21.b8 mutant clones in animals lacking a hs-Psc transgene. At least two independent transgene insertions were analyzed with each construct and found to give the same result.
Expression of wild-type and mutant forms of Psc in embryos was induced as follows. For Fig. 7B, transgenic embryos were heat shocked five times (20-min heat shocks at 37°C every 2 h; first heat shock applied between 2 and 4 h of development) prior to fixation and stained with antibody against Abd-B. For Fig. 7C, a single 30-min heat shock at 37°C was applied, and total embryo extracts were prepared for analysis 30 min after completion of the heat shock.
Structures of the Psc alleles correlate with their phenotypic classifications. We identified the molecular lesions associated with 11 EMS-induced alleles of Psc: Psc1, Psch27, Psc1445, Psce25, Psce24, Psce23, Psch28, Psch30, Psc1433, Psce22, and Pscepb (33, 42, 62). Genomic DNA was isolated from homozygous mutant embryos, amplified by PCR, sequenced directly, and analyzed as described in Materials and Methods. Most alleles contained nonsense mutations predicted to truncate the Psc protein. Eight encode simple truncations: Psch27 (K111*), Psc1 (E521*), Psch28 (K760*), Psch30 (Q910*), Psc1433 (Q1075*), Psce22 (Q1098*), Pscepb (Q1098*), and Psc1445 (Q1453*). Although Psce22 and Pscepb encode the same nonsense mutation, they likely represent independent mutational events, since they were derived from separate mutageneses (62). Psc1, Psch28, Psc1433, and Psc1445 have additional amino acid changes that represent either polymorphisms or changes that are not translated (Materials and Methods). One allele, Psce24 (M373K and E378*), contains both missense and nonsense mutations, and another, Psce23 (C268Y), carries a missense mutation in the C3HC4 Ring finger domain. Finally, Psce25 had no changes in the coding region. These results are summarized in Fig. 1A.
Molecular analysis of mutant Psc proteins. (A) Sequence analysis. Diagrams depict wild-type Psc (black), proteins of the moderate alleles (red), strong alleles (green), and engineered deletion constructs (white). Products of strong alleles that are likely associated with expression defects are shown in light green, and asterisks indicate locations of missense amino acid changes. The homology region (gray shading), Ring finger (dark stripes), and HTHTH motifs (light stripes) are indicated. (B) Western analysis of animals heterozygous for mutant Psc alleles. (C) Western analysis of embryos homozygous for Psc1445 or Psce23. Equal protein loading was confirmed by Amido Black staining (for Psc1445) or by Western blotting using an antibody to TATA binding protein (TBP) (for Psce23; Santa Cruz). (D and E) Recombinant Psc proteins corresponding to mutant alleles (∼400 ng) (D) or engineered deletions (∼500 ng) (E) were separated on 4 to 15%-gradient SDS-PAGE gels and stained with colloidal Coomassie. An asterisk (D and E) indicates contaminating heat shock protein 70 (Hsp70).
To verify the sequencing results, Pscmutant/+ embryos were collected for each mutant allele, and total embryonic protein was examined by Western analysis using anti-Psc monoclonal antibodies (37). The Western blots showed that embryos heterozygous for seven of the alleles (Psce24, Psc1, Psch28, Psch30, Psc1433, Psce22, and Pscepb) expressed altered proteins consistent with the size predicted by the sequence data (Fig. 1B). Altered bands were not seen for the remaining four alleles, Psch27, Psc1445, Psce23, and Psce25 (Fig. 1B). This observation was not surprising for Psch27, even though Psch27 is transcriptionally competent (2) and is expected to produce a truncated protein, since its corresponding protein is predicted to lack the homology region used to generate the anti-Psc monoclonal antibodies (37). Altered bands were also not expected for Psc1445, Psce23, and Psce25, since they were predicted to make proteins of wild-type, or nearly wild-type, size. Our findings are in agreement with those of previous studies of protein sizes for Psc1, Psc1433, and Psc1445 (37).
To assess expression of Psc1445, Psce23, and Psce25, homozygous mutant embryos were collected for each allele and analyzed by Western blotting. Homozygous Psce23 embryos expressed approximately normal levels of a protein of the size expected for full-length Psc (Fig. 1C), suggesting that the phenotype of Psce23 results from the alteration in its Ring finger motif. In contrast, embryos homozygous for either Psc1445 or Psce25 did not express detectable levels of Psc (Fig. 1C), although recovery of protein from Psce25 embryos was too low to determine conclusively whether this allele produces a protein (data not shown). We suggest that the phenotypes of Psc1445 and Psce25 might be due to lack of synthesis or instability of the protein. Association of Psce25 with a posttranslational defect is especially likely, since Psce25 is transcriptionally competent (2).
The deletion series formed by the following truncation alleles, Psch27, Psce24, Psc1, Psch28, Psch30, Psc1433, Psce22, and Pscepb, shows an excellent correlation with the phenotypic series: the strong loss-of-function alleles encode the shortest proteins, while the moderate alleles encode the longest. Notably, the transition between the moderate and strong truncation alleles occurs at amino acid 760, which is the truncation point of the transition allele Psch28. All of the remaining moderate alleles have nonsense mutations located after this amino acid. With the exception of Psc1445 and Psce25, which do not express detectable levels of protein in vivo, all of the strong alleles have mutations located in the N-terminal 521 amino acids of the protein. Thus, the different classes of Psc alleles are associated with distinct mutant protein structures.
A central region of Psc is important for in vitro inhibition of both chromatin remodeling and transcription. To determine how mutations in Psc affect its in vitro activities as a single subunit, we expressed mutant Psc proteins using a baculovirus expression system, purified them by immunoaffinity chromatography via the FLAG epitope that was engineered at the N terminus of each protein (Fig. 1D), and then determined their ability to inhibit both chromatin remodeling and transcription. We tested mutant proteins corresponding to the following eight alleles: Psc1, Psch27, Psce24, Psce23, Psch28, Psch30, Psc1433, and Pscepb. With the exception of Psch27, whose expression pattern is not known, each of these alleles expressed mutant protein in embryos (Fig. 1B and C). Psce22 was not tested because the mutant protein it produces is predicted to be identical to that encoded by Pscepb, and neither Psc1445 nor Psce25 was tested because they do not express detectable levels of protein in vivo.
First, we tested whether the mutant proteins could inhibit chromatin remodeling by the human SWI/SNF complex by using a restriction enzyme accessibility assay (22). hSWI/SNF makes a HhaI site within a nucleosomal template accessible to digestion in an ATP-dependent manner, thereby providing a quantitative assay for chromatin remodeling (35), and PRC1, PCC, and Psc are all able to inhibit this remodeling reaction (22). End-labeled nucleosomal templates were preincubated with purified mutant Psc proteins for 20 min, and then HhaI, hSWI/SNF, and ATP were added. Reactions were incubated for 1 h, at which point the extent of HhaI digestion of the nucleosomal template was measured by resolving end-labeled products on an agarose gel and quantifying the proportion of undigested template (Fig. 2A).
In vitro activities of mutant Psc proteins. (A) Inhibition of chromatin remodeling by mutant Psc. End-labeled nucleosomal arrays were incubated with 0 nM, 4 nM, 8 nM, 16 nM, or 32 nM Psc, and the extent of chromatin remodeling by hSwi/Snf was determined by quantifying the increase in HhaI accessibility. (The first and third lanes of each panel demonstrate the extent of chromatin remodeling in the absence of Psc.) Lanes with no hSwi/Snf contain 0 nM and 32 nM Psc. (B) Summary of inhibition of chromatin remodeling by mutant Psc. Percent inhibition was calculated as described previously (22):
\[
\text{percent inhibition} = \frac{\bigl(\text{fraction uncut with hSwi/Snf and Psc or PCC} - \text{fraction uncut with hSwi/Snf}\bigr) \times 100}{\text{fraction uncut without hSwi/Snf} - \text{fraction undigested with hSwi/Snf}}
\]
Bars represent the mean activity at 16 nM Psc; values were compared by Student's t test, and asterisks indicate values that are statistically different from wild-type values (P < 0.01). Note that titrations were carried out for all proteins in this figure and Fig. 5; the bar graph shows a concentration in the linear range for activity of wild-type Psc. (C) Comparison of inhibition of chromatin remodeling by Psc proteins representing the different phenotypic classes. (D) Inhibition of transcription by mutant Psc. Nucleosomal array templates were incubated with 0 nM, 6 nM, 12 nM, 24 nM, or 48 nM Psc before addition of Gal4-VP16, HeLa nuclear extract, and nucleoside triphosphates. (E) Summary of inhibition of transcription by mutant Psc. Results are presented as percentages of transcription activity in the absence of Psc. Bars represent the mean activity at 12 nM Psc; values were compared by Student's t test, and asterisks indicate values that are statistically different from wild-type values (P < 0.01). (F) Comparison of inhibition of transcription by Psc proteins representing the different phenotypic classes. All error bars represent standard errors (n ≥ 2).
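As a worked illustration of the percent-inhibition formula above, the short Python sketch below computes the value from the three uncut fractions measured in the restriction enzyme accessibility assay. The input fractions are invented placeholders, not data from this figure.

def percent_inhibition(uncut_with_psc, uncut_with_swisnf, uncut_no_swisnf):
    # Numerator: additional protection from HhaI digestion conferred by
    # Psc (or PCC) over hSwi/Snf alone, scaled to a percentage.
    numerator = (uncut_with_psc - uncut_with_swisnf) * 100
    # Denominator: the dynamic range of the assay, i.e., the maximal
    # protection possible relative to remodeling by hSwi/Snf alone.
    denominator = uncut_no_swisnf - uncut_with_swisnf
    return numerator / denominator

# Hypothetical fractions: 20% of template remains uncut with hSwi/Snf alone,
# 90% remains uncut with no hSwi/Snf, and 55% remains uncut when Psc is added.
print(percent_inhibition(0.55, 0.20, 0.90))  # approximately 50% inhibition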
Mutant Psc proteins representing the simple truncations can be divided into two groups, those that inhibit chromatin remodeling and those that do not. Mutant proteins corresponding to the moderate alleles Psch30, Psc1433, and Pscepb had inhibition activities indistinguishable from that of the wild type (Fig. 2A to C). In contrast, those corresponding to the strong alleles Psch27 and Psce24 were significantly impaired in their ability to inhibit chromatin remodeling (Fig. 2A to C). Interestingly, mutant protein corresponding to the Psch28 transition allele was unable to inhibit chromatin remodeling, highlighting the similarity between Psch28 and the strong alleles. Mutant Psc1 protein also failed to inhibit remodeling, suggesting that it behaves like proteins representing the strong loss-of-function alleles. Finally, mutant Psce23 protein displayed wild-type inhibition activity, indicating that an intact Ring finger motif is not essential for in vitro inhibition of chromatin remodeling.
Second, we tested the ability of mutant Psc proteins to inhibit transcription. Mutant proteins were incubated with a chromatin template, and then the Gal4-VP16 transcriptional activator, HeLa nuclear extract, and nucleoside triphosphates were added. Transcription levels in each reaction were analyzed by performing primer extension on the reaction product and quantifying the resulting products as described previously (30).
The activities of the mutant proteins paralleled those observed in our assays of chromatin remodeling. Mutant proteins corresponding to the moderate alleles Psch30, Psc1433, and Pscepb again had inhibition activities that were not statistically distinguishable from that of the wild type (Fig. 2D to F), while mutant proteins corresponding to the strong alleles Psch27 and Psce24, as well as the Psch28 transition allele and Psc1, were significantly impaired in their ability to inhibit transcription of a chromatin template (Fig. 2D to F). Once again, the Psce23 protein had an activity similar to that of wild-type Psc (Fig. 2D to F). These observations suggest that with the exception of Psce23, there is a straightforward correlation between the in vitro inhibition activities of the truncation alleles and their phenotypic class (Fig. 2C and F).
As part of a parallel study of Psc activity, we made three deletion constructs of the Psc cDNA (Fig. 1A), each of which was engineered to have a FLAG epitope at the N terminus. Two encode C-terminal deletions, called PSC1-872 and PSC1-572, which consist of the first 872 and 572 amino acids of Psc, respectively. The third encodes an N-terminal deletion, PSC456-1603, which contains amino acids 456 to 1603. The construct breakpoints were chosen prior to our sequence analysis of the Psc alleles, so it was fortuitous that the breakpoints of the first two constructs happened to flank the nonsense mutation of the transition allele Psch28.
We expressed and purified these proteins (Fig. 1E) and then tested their abilities to inhibit chromatin remodeling (Fig. 3A and B) and transcription of a chromatin template (Fig. 3C). PSC1-872 had activities that were mildly reduced relative to those of wild-type Psc in both assays, while the activities of PSC1-572 were severely reduced (Fig. 3A to C). PSC456-1603 was also able to inhibit chromatin remodeling and transcription with activities similar to those of wild-type Psc (Fig. 3A to C), indicating that the sequences C-terminal to 456 are sufficient for in vitro inhibitory activities. These data suggest that a portion of Psc that is important for full inhibition lies within or overlaps the region between amino acids 456 and 872. Since PSC1-572 and the protein corresponding to Psch28 (K760*) are also unable to support significant inhibition, the region between amino acids 760 and 872 might be of particular importance.
In vitro activity of mutant Psc proteins of engineered deletion constructs. (A) Inhibition of chromatin remodeling by mutant Psc (concentrations as indicated in panel B). All lanes are from the same experiment and are arranged so that identical concentrations of Psc are vertically aligned. The titration of PSC1-572 was extended to concentrations higher than that for the other proteins because of the lower activity of PSC1-572. (B) Summary of inhibition of chromatin remodeling by mutant Psc. Standard error is shown. (C) Inhibition of transcription by mutant Psc (6, 12, 24, 48 nM).
Mutant Psc proteins can alter in vitro activity of the PCC. Since Psc functions as part of a multiprotein complex (23, 49, 51, 52), we expected that mutant Psc proteins would affect the structure and/or activity of the complex. Accordingly, we used a recombinant PCC consisting of Pc, Ph, dRing1, and mutant Psc to test individual mutant Psc proteins for their ability to allow the core complex to form and inhibit chromatin remodeling. Insect cells were coinfected with four baculovirus constructs designed to express the mutant Psc protein (without FLAG epitope tags), wild-type Pc and dRing1, and Ph carrying an N-terminal FLAG epitope tag. FLAG-Ph and associated proteins were purified by immunoaffinity chromatography from nuclear extracts prepared from the coinfected cells. Since the Psch27 protein cannot be detected with available antibodies, we used a FLAG-tagged Psch27 protein (and Ph without the FLAG tag) in experiments with this protein.
We found that mutant Psc proteins corresponding to the moderate alleles Psch30, Psc1433, and Pscepb, the transition allele Psch28, and the strong alleles Psc1, Psce24, and Psce23 all allowed complex formation (Fig. 4A). To confirm that these mutant core complexes had stoichiometric amounts of each of the four subunits, we examined their composition by colloidal Coomassie staining and Western analysis (Fig. 4A and B). PCC subunits were found at approximately equal levels in each of the mutant complexes (Fig. 4A and B). Mutant protein corresponding to Psch27 was impaired for complex formation: Pc, dRing1, and Ph copurified with the FLAG-tagged Psch27 protein in substoichiometric amounts. This defect in complex formation might have been more severe than observed, because apparent read-through of the nonsense codon at position 111 produced small amounts of full-length FLAG-tagged Psc (data not shown). While it is possible that the FLAG tag on the Psch27 protein contributed to poor complex formation, we have not previously seen adverse effects of this tag on formation of PCC recombinant complexes. We also found that while PSC1-872 and PSC1-572 allowed complex formation, PSC456-1603 did not (Fig. 4C, D, and E). PSC456-1603 was tested for complex formation by coinfecting FLAG-tagged PSC456-1603 with Ph, Pc, and dRing1 and then identifying the proteins that copurified with FLAG-PSC456-1603. Decreased amounts of Ph, dRing1, and Pc were seen in the eluates (Fig. 4E, lane E). Taken together, these data indicate that efficient complex formation can be achieved with the N-terminal 572 amino acids of Psc. Assuming that complex formation with the Psce24 protein does not require the missense mutation at amino acid position 373, our data suggest that complex formation may be possible with as little as the N-terminal 378 amino acids. This would be consistent with the inability of PSC456-1603 to support complex formation.
Formation of PCC with mutant Psc proteins. (A) Approximately 800 ng PCC containing mutant Psc proteins was separated on a 4 to 15%-gradient SDS-PAGE gel and stained with colloidal Coomassie. The identity of the Psc subunit of the complex is indicated [e.g., PCC (e23) for PCC containing Psce23 protein]. Positions of mutant Psc (circles) and other subunits are indicated. *, degradation product of Ph; **, Hsp70. (B) Western blots to assess relative stoichiometry of PCC subunits in complexes formed with mutant Psc (1 pmol in blots with anti-Pc or anti-dRing1; 100 fmol in blots with anti-Psc or anti-FLAG). PCC containing the Psce24 or Psc1 protein was probed with the 6E8 antibody, while all others were probed with the 7E10 antibody. Because equal amounts of protein were loaded in each lane, intensities of bands for each antibody should be similar in all lanes if PCC subunit stoichiometries are similar in all complexes. The lower region of the gel probed with 7E10 is not shown because it was probed with another antibody. (C) PCC (1 μg) carrying mutant Psc proteins was separated on a 4 to 15%-gradient SDS-PAGE gel and stained with colloidal Coomassie. (D) Two hundred nanograms PCC carrying mutant Psc proteins was separated on 8% SDS-PAGE gels, and the Western blots were probed with antibodies against PCC subunits or with an anti-FLAG antibody to detect Psc. (E) Insect cells were infected with viruses encoding FLAG-tagged PSC456-1603 and other PCC subunits, and immunoaffinity purification was performed on nuclear extracts from infected cells. Input (I), flow-through (FT), and elution (E) samples were analyzed for the presence of PCC subunits by Western blotting.
Core complexes containing mutant Psc proteins were then tested for their ability to inhibit chromatin remodeling and transcription in vitro. These assays are complicated, because PCC subunits other than Psc, such as Ph, might contribute to the activity of a complex containing a mutant Psc protein (22, 30). Nevertheless, as detailed below, we found that some mutant Psc proteins compromised PCC activity.
In comparison to the ability of wild-type PCC to inhibit chromatin remodeling, we found that the activities of core complex preparations containing proteins of the moderate alleles Psch30, Psc1433, and Pscepb were indistinguishable from that of the wild type, and those of preparations containing proteins of the transition allele Psch28 or the strong alleles Psc1 and Psce24 were significantly impaired (Fig. 5A to C). We also found that complexes prepared with PSC1-872 were similar to the wild type, while those containing PSC1-572 were relatively ineffective at inhibiting chromatin remodeling (Fig. 6A and B). Interestingly, although complexes containing the Psc1 or PSC1-572 proteins had reduced activity, they still inhibited chromatin remodeling better than did either of these proteins alone (compare Fig. 2B to 5B and 3B to 6B). This suggests either that the context of a complex enhanced the activity of mutant Psc subunits or that other PCC subunits, such as Ph, contributed activity in our assays (22, 30). Consistent with the latter possibility, removing Ph from PCC carrying PSC1-572 or Psce24 protein further reduces the activity of the complex (N.J.F., unpublished result). Finally, whereas the Psce23 protein was similar to the wild type when assayed as a single subunit (Fig. 2A to C), core complex preparations containing this mutant protein were less active than preparations of wild-type PCC (Fig. 5A to C). This reduced activity of complexes formed with the Psce23 protein might partially explain the strong loss-of-function phenotype of Psce23 in vivo.
In vitro activities of PCC carrying mutant Psc proteins. (A) Inhibition of chromatin remodeling by mutant PCC. PCC (0, 5, 10, 20, or 40 nM) carrying wild-type or mutant Psc was incubated with end-labeled nucleosomal array and then tested for chromatin remodeling as described in the legend to Fig. 2A. (B) Summary of inhibition of chromatin remodeling by wild-type and mutant PCC at 12 nM; activity and statistical comparisons were calculated as in Fig. 2B. (C) Comparison of inhibition of chromatin remodeling by PCC carrying Psc proteins representing the different phenotypic classes. (D) Inhibition of transcription by mutant PCC (0, 1.7, 3.5, 7, and 14 nM) was assayed as described in Fig. 2D. (E) Summary of inhibition of transcription by wild-type and mutant PCC at 7 nM. Results are presented as a percentage of transcription activity in the absence of PCC in each reaction, and statistical comparisons were calculated as described for Fig. 2E. (F) Comparison of inhibition of transcription by PCC carrying Psc proteins representing the different phenotypic classes. Standard errors are shown (n = 2 to 8).
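The statistical test invoked in these figure legends is a two-sample Student's t test on replicate inhibition values, with P < 0.01 marked by an asterisk. A minimal sketch of that comparison, using invented placeholder replicates rather than the data behind the figures, could look as follows.

from scipy import stats

# Hypothetical replicate percent-inhibition values (placeholders only).
wild_type_pcc = [62.0, 58.5, 60.2, 61.1]  # PCC with wild-type Psc
mutant_pcc = [21.3, 25.7, 19.8, 23.4]     # PCC with a strong-class mutant Psc

result = stats.ttest_ind(wild_type_pcc, mutant_pcc)
print(f"t = {result.statistic:.2f}, P = {result.pvalue:.4f}")
# Values with P < 0.01 versus wild type would be flagged with an asterisk.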
In vitro activities of PCC carrying mutant Psc proteins of engineered deletion constructs. (A) Inhibition of chromatin remodeling by mutant PCC (0.5 to 32 nM; see panel B). All lanes are from the same experiment and are arranged so that identical concentrations of PCC are vertically aligned. The titration of PCC carrying PSC1-572 was extended to concentrations higher than those for the other PCC preparations because of the lower activity of PCC containing PSC1-572. (B) Summary of inhibition of chromatin remodeling assays by mutant PCC. Standard errors are shown. (C) Inhibition of transcription by mutant PCC (6, 16, 48 nM).
We next measured the ability of PCC containing mutant Psc proteins to inhibit transcription and found a pattern of activities similar to those found in our assays of chromatin remodeling, although there was more overlap among the allele classes in this assay. Here, complexes containing protein corresponding to the moderate allele Pscepb but not Psch30 or Psc1433 were less active than those containing wild-type Psc. Because the protein encoded by this allele was active in all of the other assays, the importance of this reduced activity is unclear. Complexes containing proteins corresponding to Psch28, Psc1, or Psce24 were less active than those containing wild-type Psc, although the difference was significant only for Psce24 (Fig. 5D to F). Our data further indicated that preparations containing PSC1-572 were impaired for inhibition of transcription, while complexes containing PSC1-872 were able to inhibit transcription (Fig. 6C). Finally, preparations containing mutant Psce23 proteins inhibited transcription to a level comparable to that of preparations containing proteins corresponding to the moderate alleles (Fig. 5D to F). Thus, while these results are generally consistent with the normal function of moderate alleles and impaired function of strong truncation alleles observed in other assays, they highlight the possibility that there are differences among the members within each class (see Discussion).
N-terminal 872-amino-acid fragment of Psc can maintain homeotic gene repression. Maintenance of homeotic gene repression is a key function of the PcG genes, and several of the Psc alleles described above fail to maintain repression (5, 55, 57). Here, we use transgenic technology to determine the capacity of the construct-derived PSC1-872, PSC1-572, and PSC456-1603 proteins to maintain homeotic gene repression in vivo. Our strategy rested on previous studies showing that absence of Psc and Su(z)2 in clones of cells in the wing imaginal disk results in the derepression of homeotic genes, including Abd-B. Importantly, this derepression can be prevented by expression of wild-type Psc and Su(z)2 (5) or by expression of wild-type Psc alone (Fig. 7A). Using Abd-B expression levels as our assay for repression, we therefore induced clones lacking Psc and Su(z)2 in wing imaginal disks and determined whether transgenes expressing PSC1-872, PSC1-572, or PSC456-1603 would be able to maintain homeotic gene repression (see Materials and Methods).
Silencing function of wild-type and mutant Psc proteins in vivo. (A) Wing imaginal disks immunostained for Abd-B (red) and GFP (green), which serves as a marker for wild-type cells. Abd-B is not expressed in wild-type cells (top left) but is strongly misexpressed in clones lacking Psc and Su(z)2 (marked by lack of GFP signal) in the absence of a Psc transgene (no TG); the mutant clones also display a tumor-like phenotype (5). Transgenes expressing wild-type Psc or PSC1-872 restore silencing of Abd-B and prevent the tumor-like phenotype in the mutant clones. In comparison, PSC1-572 and PSC456-1603 are ineffective. Psc− Su(z)2− mutant clones sort out from the surrounding wild-type tissue and form vesicles; in the "no TG," "hs-Psc1-572," and "hs-PSC456-1603" images, all cells in each of the clones express Abd-B, but not all cells are visible in this single confocal section. (B) Ectopic expression of Abd-B in some cells in the nervous system (arrows) (not all cells showing ectopic expression are visible in this focal plane) when PSC1-572, but not wild-type Psc, PSC1-872, or PSC456-1603, is overexpressed in embryos. Note that under the conditions used to express Psc proteins in imaginal disks in the experiment shown in Fig. 7A (see Materials and Methods), expression of PSC1-572 does not result in ectopic expression of Abd-B protein in wild-type (GFP-positive) imaginal disk cells. (C) Expression of mutant forms of Psc in transgenic embryos. Western blot analysis of embryonic extracts (EE) from transgenic embryos expressing PSC1-572, PSC1-872, and PSC456-1603, respectively. Embryos were not heat shocked (−hs) or were heat shocked (+hs) for 30 min and then were incubated for another 30 min prior to extract preparation. In each case, a Psc protein signal of the expected size is present only in extracts from heat-shocked embryos; the corresponding recombinant Psc proteins (RP), purified from baculovirus, were loaded as size references. The blot was stripped and reprobed with an antibody against alpha-tubulin to demonstrate comparable loading of extracts. We could not detect the endogenous wild-type Psc protein, suggesting that transgene-encoded Psc proteins are expressed at significantly higher levels.
We found that while PSC1-872 restored repression to wing disks lacking Psc and Su(z)2, PSC1-572 and PSC456-1603 could not (Fig. 7A). All three proteins were expressed at comparable levels as measured in embryos (Fig. 7C). These data suggest that a function necessary for the repression of Abd-B in wing imaginal cells in these assays is encoded within the N-terminal 872 amino acids of Psc and that a region of Psc that is essential for this function is located C-terminal to amino acid 572. The failure of PSC456-1603 to maintain repression might have resulted from the absence of a region important for repression, from an inability to support complex formation, or from an inability to appropriately target. Interestingly, in embryos, overexpression of PSC1-572, but not of wild-type Psc, PSC1-872, or PSC456-1603, led to ectopic activation of Abd-B expression in some cells in the nervous system, even in the presence of wild-type Psc (Fig. 7B). This suggests that PSC1-572 may act in a dominant-negative manner to interfere with the repression of homeotic genes.
Our molecular analysis of the genetically characterized alleles of Psc identifies at least two regions important for the function of this key PcG protein. Eight of the alleles, Psch27, Psce24, Psc1, Psch28, Psch30, Psc1433, Psce22, and Pscepb, form a deletion series spanning most of the protein (Fig. 1). We find a direct correlation between the lengths of the proteins encoded by these mutations and their abilities to inhibit chromatin remodeling and transcription in vitro as single subunits: mutant proteins 910 amino acids in length or longer display activity similar to that of the wild type in both assays, while those that are shorter than, or equal to, 760 amino acids in length support minimal activity (Fig. 2 and 5). Additional analysis of the mutant proteins derived from three deletion constructs of Psc, PSC1-872, PSC1-572, and PSC456-1603, is consistent with these data and highlights the importance of a region lying between amino acids 456 and 872 (Fig. 3 and 6). Thus, a critical aspect of the in vitro inhibitory activities of Psc may lie, at least in part, between amino acids 760 and 872. We also identified a region of Psc that is important for formation of the core complex, PCC. This region is located N-terminal to amino acid 572 and may be contained in as little as the N-terminal 378 amino acids (Fig. 4).
Our in vivo clonal analyses are consistent with these in vitro observations (Fig. 7). The ability of PSC1-872, but not PSC1-572 or PSC456-1603, to repress Abd-B in wing imaginal disks points to two regions of Psc, one lying in part between amino acids 572 and 872 and the other N-terminal to amino acid 456, that are important for in vivo maintenance of Abd-B repression. The location of these domains is consistent with the in vitro roles of the region between amino acids 760 and 872 and the region N-terminal to amino acid 572 for in vitro inhibition activities and complex formation, respectively.
The in vitro behavior of Psc mutant proteins as single subunits is generally indicative of their effect on PCC activity. Complexes containing mutant Psc proteins that are active as single subunits in vitro are generally more active than complexes containing mutant proteins that are only minimally active as single subunits. Nevertheless, the capacity of the latter class of complexes to support activity at all is significant and suggests either that the residual activity of minimally active Psc proteins can be enhanced in the context of a complex, perhaps through interactions with other subunits, and/or that subunits other than Psc can contribute to PCC activity. With regard to the latter possibility, we have found that Ph inhibits chromatin remodeling and transcription as a single subunit, although less efficiently than does Psc (22, 30), and that removal of Ph from complexes that contain PSC1-572 or the Psce24 protein reduces activity (N.J.F., unpublished).
The ability of the Psce23 protein to inhibit both chromatin remodeling and transcription in vitro demonstrates that an intact Ring finger is not essential for these activities (Fig. 2). This is consistent with the ability of PSC456-1603, which lacks the Ring finger altogether, to support in vitro inhibition activities at wild-type levels (Fig. 3 and 6). In contrast, incorporation of the Psce23 protein into PCC decreases in vitro inhibition activities, suggesting that in the context of the complex the Ring finger becomes important in vitro (Fig. 5). This diminished activity of PCC containing the Psce23 protein is intriguing in light of our observation that as a single subunit, the Psce23 protein approximates wild-type Psc activity in in vitro assays even though in vivo Psce23 is a loss-of-function, recessive lethal allele; the lethality of Psce23 may rest in part on disrupting normal intermolecular interactions, such as those within PRC1. Alternatively, it is possible that the Psce23 mutation, which disrupts a conserved cysteine in the Ring finger, disrupts an E3 ubiquitin or SUMO ligase activity, although such an activity has yet to be demonstrated for Psc or its homologues.
Recently, a separate study showed that PCC has the ability to compact chromatin, probably through the bridging of nucleosomes (21). Interestingly, this activity can also be carried out by Psc alone. Some of the mutant proteins studied here were also characterized for their ability to compact chromatin, and those with reduced compaction activity also showed reduced inhibition of chromatin remodeling and transcription in vitro. Thus, PSC1-572 is impaired for chromatin compacting activity, while PSC1-872 and PSC456-1603 behave as does full-length Psc. One interpretation of these observations suggests that the internucleosomal interactions leading to chromatin compaction contribute to the inhibition of chromatin remodeling and transcription. Given that the mutations corresponding to the mutant proteins we have characterized also impair Psc function in vivo, it is possible that similar interactions with chromatin are important for the gene silencing function of Psc in vivo.
Our assays using purified and recombinant components also suggest that several functions of Psc do not require histone modifications or histone variants. In contrast, recent data suggest that histone modifications, histone-modifying enzymes, and histone variants might be involved in silencing by PRC1 (for example, see references 16, 17, 58, and 60). For example, complexes containing PRC1 components, particularly dRing1 and its mammalian homologues, can ubiquitylate H2A (60), although this has not yet been demonstrated with either purified PRC1 or PCC. It will be interesting to determine how these modifications might alter Psc targeting or efficiency in vivo.
In vivo genetic behavior of mutant Psc alleles suggests additional functional domains. The parallel between the in vivo behavior of mutant Psc alleles and the in vitro activities of the corresponding mutant proteins is striking. Mutant proteins displaying significant in vitro activity correspond to four alleles of the moderate class (Psch30, Psc1433, Psce22, and Pscepb), and those that are essentially inactive correspond to alleles of the strong class (Psc1, Psch27, and Psce24). Furthermore, protein corresponding to the transitional Psch28 allele behaves in vitro as would protein corresponding to the strong class of alleles and falls, by length, at the interface between proteins of the moderate class and those of the strong class (Fig. 1). These observations suggest that the in vitro assays serve as a reasonable proxy for in vivo Psc function. Below, we extend our analysis by considering our data in light of prior genetic analyses.
Our data indicate that the region of Psc lying C-terminal to amino acid 872 is essential neither for in vitro activity nor for the maintenance of HOX gene silencing in our imaginal disk assays. Nevertheless, the four moderate alleles whose protein products are expected to be truncated at or beyond amino acid 910 (Psch30, Psc1433, Psce22, and Pscepb) show a loss of viability when hemizygous in vivo (62). In addition, the reduction in viability may correlate with the length of the proteins they encode. The hemizygous viabilities of Psce22, Pscepb, Psc1433, Psch30, and Psch28 are 30 to 55%, 15 to 33%, 6 to 9%, 1 to 2%, and 0% of the wild-type level, respectively. While some of these differences in viability may be attributed to variations in genetic background, the differences between Psc1433, Psch30, and Psch28 likely reflect real changes in Psc function, since these three alleles differ by other genetic criteria as well (see below) (62).
The progressive loss of viability might be due to an increasing instability of the Psc protein as its length decreases. This interpretation, however, is not easily compatible with the observation that moderate alleles have gain-of-function characteristics and support intragenic complementation (see below). Another possibility is that the region of Psc C-terminal to amino acid 760 encodes multiple functions, which are differentially deleted by the moderate alleles. Alternatively, there may be a single key function distributed throughout the C-terminal half of Psc such that mere quantity of the region can influence function. Nonlocalized activity, which is consistent with the amino acid composition of the region, has also been proposed as a mode of action for the analogous region of Su(z)2 (53). In light of these possibilities, the reduced in vitro activities of proteins representing the moderate alleles may be a true reflection of the reduced viability associated with these alleles.
A second feature of the moderate alleles that substantiates the importance of the C-terminal end of Psc is the homeotic transformation of wing to haltere tissue in adults that are hemizygous for a moderate allele or heterozygous for two such alleles (62). This phenotype suggests that the region of Psc lying C-terminal to amino acid 1098 provides an activity that is essential for normal development. Since the homeotic phenotype is gain of function, possibly antimorphic, it may be that moderate allele proteins give rise to an activity that is normally suppressed, directly or indirectly, by the C-terminal end of Psc. The possibility that one domain of Psc influences another is consistent with previous genetic analyses of Psc as well as Su(z)2 (48, 53, 62).
A third observation concerns the zeste gene, whose product is implicated in maintenance of gene activation and repression and is present in PRC1 (for example, see references 14, 27, 39, and 51). Here, we focus on z1, a mutation of zeste that represses expression of the white gene (46), and the ability of Psc alleles to directly or indirectly alter this repression (61, 62). All six strong alleles suppress the z1 phenotype, indicating that wild-type Psc promotes repression of the white gene by z1. Since Psch28 does not modify the z1 phenotype, it seems that the region of Psc lying between amino acids 521 and 760 is important for repression by z1. In contrast, all moderate alleles except the transition allele Psch28 enhance the z1 phenotype and indicate that loss of material beyond amino acid 1098 directly or indirectly augments repression of the white gene by z1. Furthermore, since Psch30 enhances silencing while Psch28 does not, enhancement may require the region between amino acids 760 and 910. It may be that this region confers gain-of-function silencing activities that are normally kept in check by the region lying C-terminal to amino acid 1098. The potential gain-of-function aspect of enhancement is consistent with the gain-of-function homeotic activity of the moderate alleles.
Intriguingly, Psc1 is a strong dominant suppressor of the z1 phenotype and appears to antagonize wild-type Psc in a Psc1/+ animal (61, 62). This antimorphic nature may derive from abnormal activity of the homology region, which is expected to be intact in the product of Psc1 but disrupted in or lacking from that of Psce24, Psce23, and Psch27, none of which displays a similar dominant phenotype. If so, our data suggest that amino acids 521 through 760 keep the homology region in check. This interpretation is consistent with the in vivo dominant-negative nature of PSC1-572 and studies suggesting that the homology region of Su(z)2 may be modulated by a region lying C-terminal to it (53). Our data may also pertain to one of the most intriguing aspects of z1, which is that its ability to silence white is enhanced when white is physically paired with a homologue (43). As such, it is possible that the regions identified here as important for z1-mediated silencing might include those that contribute to homologue pairing in vivo. (Also see reference 34.)
Finally, our understanding of Psc may be further clarified by observations of intragenic complementation (62). The most dramatic example involves Psce23 and Psch30. Heterozygosity of Psce23 or Psch30 with a deficiency for the Su(z)2-C results in 0% or 1 to 2% viability, respectively. In contrast, Psce23/Psch30 animals show 100% viability. Psce23 also supports complementation with Psch28, although at a much-reduced rate; while Psch28 shows 0% viability when heterozygous with a deficiency for Su(z)2-C, Psce23/Psch28 animals show 3 to 15% viability. These different levels of complementation highlight the significance of the region lying between amino acids 760 and 910.
What is the mechanism of this complementation? It may be that the presence of two Psc mutant proteins results in two correspondingly different PRC1 complexes, which together provide all essential PRC1 function; the mutant Psce23 protein could provide C-terminal functions, while the mutant Psch30 and Psch28 proteins could provide N-terminal activity, especially with regard to Ring finger function. Alternatively, the presence of two mutant Psc proteins within a single PRC1 complex, possibly as a heterodimer, might allow each to compensate for the other's deficiencies. It is not known whether a single PRC1 complex contains one, two, or even more copies of Psc, since only relative stoichiometries, not absolute numbers, have been determined for the proteins in complexes isolated from Drosophila (51). Evidence that Psc has the capacity to dimerize comes from two-hybrid studies with Psc, Pc, and Ph (32), as well as studies of Mel-18 and Bmi-1 (4, 25, 50). These studies suggest that the Ring finger may play a role in dimerization. However, since Psce23 carries a mutation in the Ring finger, if complementation involves heterodimerization, then either the Psce23 mutation does not preclude heterodimerization or heterodimerization occurs through another means, perhaps through the HTHTH motif or via another protein, such as dRing1 (34).
In short, our findings emphasize the importance of the C-terminal two-thirds of Psc in defining Psc function in vivo and in vitro and the N-terminal region in complex formation in vitro. These studies show that the C-terminal region plays a key role in inhibition of chromatin remodeling and transcription on nucleosomal templates and suggest that these inhibitory functions in vitro might relate to in vivo function of the protein. It is intriguing that this region of Psc is not conserved in the mammalian homologs Bmi-1 and Mel-18 and does not show obvious similarity to any protein other than that of the related Su(z)2 gene. This suggests that the presumed function of the C-terminal portion of Psc in maintaining silenced states during development is novel or, perhaps, is supplied in other organisms by proteins or domains that have yet to be identified.
We thank P. Adler for anti-Psc antibodies and J. Bateman, A. Lee, J. Lokere, A. Moran, S. Ou, B. Williams, and the Dana Farber/Harvard Cancer Center sequencing facility for discussion and technical assistance.
This work was supported by the NIH (R.E.K., C.-T.W., and R.B.E.) and the Charles King Trust (N.J.F.).
Received 24 January 2005.
Returned for modification 8 March 2005.
Accepted 5 May 2005.
Adler, P. N., E. C. Martin, J. Charlton, and K. Jones. 1991. Phenotypic consequences and genetic interactions of a null mutation in the Drosophila Posterior Sex Combs gene. Dev. Genet. 12:349-361.
Ali, J., and W. Bender. 2004. Cross-regulation among the polycomb group genes in Drosophila melanogaster. Mol. Cell. Biol. 24:7737-7747.
Alkema, M. J., M. Bronk, E. Verhoeven, A. Otte, L. J. van't Veer, A. Berns, and M. van Lohuizen. 1997. Identification of Bmi1-interacting proteins as constituents of a multimeric mammalian polycomb complex. Genes Dev. 11:226-240.
Alkema, M. J., J. Jacobs, J. W. Voncken, N. A. Jenkins, N. G. Copeland, D. P. Satijn, A. P. Otte, A. Berns, and M. van Lohuizen. 1997. MPc2, a new murine homolog of the Drosophila polycomb protein is a member of the mouse polycomb transcriptional repressor complex. J. Mol. Biol. 273:993-1003.
Beuchle, D., G. Struhl, and J. Muller. 2001. Polycomb group proteins and heritable silencing of Drosophila Hox genes. Development 128:993-1004.
Brock, H. W., and M. van Lohuizen. 2001. The Polycomb group—no longer an exclusive club? Curr. Opin. Genet. Dev. 11:175-181.
Brunk, B. P., E. C. Martin, and P. N. Adler. 1991. Drosophila genes Posterior Sex Combs and Suppressor two of zeste encode proteins with homology to the murine bmi-1 oncogene. Nature 353:351-353.
Brunk, B. P., E. C. Martin, and P. N. Adler. 1991. Molecular genetics of the Posterior sex combs/Suppressor 2 of zeste region of Drosophila: aberrant expression of the Suppressor 2 of zeste gene results in abnormal bristle development. Genetics 128:119-132.
Bunker, C. A., and R. E. Kingston. 1994. Transcriptional repression by Drosophila and mammalian Polycomb group proteins in transfected mammalian cells. Mol. Cell. Biol. 14:1721-1732.
Cao, R., L. Wang, H. Wang, L. Xia, H. Erdjument-Bromage, P. Tempst, R. S. Jones, and Y. Zhang. 2002. Role of histone H3 lysine 27 methylation in Polycomb-group silencing. Science 298:1039-1043.
Cao, R., and Y. Zhang. 2004. The functions of E(Z)/EZH2-mediated methylation of lysine 27 in histone H3. Curr. Opin. Genet. Dev. 14:155-164.
Cohen, K. J., J. S. Hanna, J. E. Prescott, and C. V. Dang. 1996. Transformation by the Bmi-1 oncoprotein correlates with its subnuclear localization but not its transcriptional suppression activity. Mol. Cell. Biol. 16:5527-5535.
Czermin, B., R. Melfi, D. McCabe, V. Seitz, A. Imhof, and V. Pirrotta. 2002. Drosophila enhancer of Zeste/ESC complexes have a histone H3 methyltransferase activity that marks chromosomal Polycomb sites. Cell 111:185-196.
Dejardin, J., and G. Cavalli. 2004. Chromatin inheritance upon Zeste-mediated Brahma recruitment at a minimal cellular memory module. EMBO J. 23:857-868.
Dellino, G. I., Y. B. Schwartz, G. Farkas, D. McCabe, S. C. Elgin, and V. Pirrotta. 2004. Polycomb silencing blocks transcription initiation. Mol. Cell 13:887-893.
de Napoles, M., J. E. Mermoud, R. Wakao, Y. A. Tang, M. Endoh, R. Appanah, T. B. Nesterova, J. Silva, A. P. Otte, M. Vidal, H. Koseki, and N. Brockdorff. 2004. Polycomb group proteins Ring1A/B link ubiquitylation of histone H2A to heritable gene silencing and X inactivation. Dev. Cell 7:663-676.
Fang, J., T. Chen, B. Chadwick, E. Li, and Y. Zhang. 2004. Ring1b-mediated H2A ubiquitination associates with inactive X chromosomes and is involved in initiation of X inactivation. J. Biol. Chem. 279:52812-52815.
Fang, S. M., K. L. Lorick, J. P. Jensen, and A. M. Weissman. 2003. RING finger ubiquitin protein ligases: implications for tumorigenesis, metastasis, and for molecular targets in cancer. Semin. Cancer Biol. 13:5-14.
Fitzgerald, D. P., and W. Bender. 2001. Polycomb group repression reduces DNA accessibility. Mol. Cell. Biol. 21:6585-6597.
Francis, N. J., and R. E. Kingston. 2001. Mechanisms of transcriptional memory. Nat. Rev. Mol. Cell Biol. 2:409-421.
Francis, N. J., R. E. Kingston, and C. L. Woodcock. 2004. Chromatin compaction by a polycomb group protein complex. Science 306:1574-1577.
Francis, N. J., A. J. Saurin, Z. Shao, and R. E. Kingston. 2001. Reconstitution of a functional core polycomb repressive complex. Mol. Cell 8:545-556.
Franke, A., M. DeCamillis, D. Zink, N. Cheng, H. W. Brock, and R. Paro. 1992. Polycomb and polyhomeotic are constituents of a multimeric protein complex in chromatin of Drosophila melanogaster. EMBO J. 11:2941-2950.
Fritsch, C., D. Beuchle, and J. Muller. 2003. Molecular and genetic analysis of the Polycomb group gene Sex combs extra/Ring in Drosophila. Mech. Dev. 120:949-954.
Fujisaki, S., Y. Ninomiya, H. Ishihara, M. Miyazaki, R. Kanno, T. Asahara, and M. Kanno. 2003. Dimerization of the Polycomb-group protein Mel-18 is regulated by PKC phosphorylation. Biochem. Biophys. Res. Commun. 300:135-140.
Gorfinkiel, N., L. Fanti, T. Melgar, E. Garcia, S. Pimpinelli, I. Guerrero, and M. Vidal. 2004. The Drosophila Polycomb group gene Sex combs extra encodes the ortholog of mammalian Ring1 proteins. Mech. Dev. 121:449-462.
Hur, M. W., J. D. Laney, S. H. Jeon, J. Ali, and M. D. Biggin. 2002. Zeste maintains repression of Ubx transgenes: support for a new model of Polycomb repression. Development 129:1339-1343.
Irminger-Finger, I., and R. Nothiger. 1995. The Drosophila melanogaster gene lethal(3)73Ah encodes a ring finger protein homologous to the oncoproteins MEL-18 and BMI-1. Gene 163:203-208.
Jurgens, G. 1985. A group of genes controlling the spatial expression of the bithorax complex of Drosophila. Nature 316:153-155.
King, I. F., N. J. Francis, and R. E. Kingston. 2002. Native and recombinant polycomb group complexes establish a selective block to template accessibility to repress transcription in vitro. Mol. Cell. Biol. 22:7919-7928.
Kuzmichev, A., K. Nishioka, H. Erdjument-Bromage, P. Tempst, and D. Reinberg. 2002. Histone methyltransferase activity associated with a human multiprotein complex containing the Enhancer of Zeste protein. Genes Dev. 16:2893-2905.
Kyba, M., and H. W. Brock. 1998. The Drosophila polycomb group protein Psc contacts Ph and Pc through specific conserved domains. Mol. Cell. Biol. 18:2712-2720.
Lasko, P. F., and M. L. Pardue. 1988. Studies of the genetic organization of the vestigial microregion of Drosophila melanogaster. Genetics 120:495-502.
Lavigne, M., N. J. Francis, I. F. King, and R. E. Kingston. 2004. Propagation of silencing: recruitment and repression of naive chromatin in trans by Polycomb repressed chromatin. Mol. Cell 13:1-20.
Logie, C., and C. L. Peterson. 1997. Catalytic activity of the yeast SWI/SNF complex on reconstituted nucleosome arrays. EMBO J. 16:6772-6782.
Lund, A. H., and M. van Lohuizen. 2004. Polycomb complexes and silencing mechanisms. Curr. Opin. Cell Biol. 16:239-246.
Martin, E. C., and P. N. Adler. 1993. The Polycomb group gene Posterior Sex Combs encodes a chromosomal protein. Development 117:641-655.
McCall, K., and W. Bender. 1996. Probes of chromatin accessibility in the Drosophila bithorax complex respond differently to Polycomb-mediated repression. EMBO J. 15:569-580.
Mulholland, N. M., I. F. King, and R. E. Kingston. 2003. Regulation of Polycomb group complexes by the sequence-specific DNA binding proteins Zeste and GAGA. Genes Dev. 17:2741-2746.
Muller, J. 1995. Transcriptional silencing by the Polycomb protein in Drosophila embryos. EMBO J. 14:1209-1220.
Muller, J., C. M. Hart, N. J. Francis, M. L. Vargas, A. Sengupta, B. Wild, E. L. Miller, M. B. O'Connor, R. E. Kingston, and J. A. Simon. 2002. Histone methyltransferase activity of a Drosophila Polycomb group repressor complex. Cell 111:197-208.
Nusslein-Volhard, C., H. Kluding, and G. Jurgens. 1985. Genes affecting the segmental subdivision of the Drosophila embryo. Cold Spring Harbor Symp. Quant. Biol. 50:145-154.
Orlando, V. 2003. Polycomb, epigenomes, and control of cell identity. Cell 112:599-606.
Paro, R. 1990. Imprinting a determined state into the chromatin of Drosophila. Trends Genet. 6:416-421.
Pelegri, F., and R. Lehmann. 1994. A role of polycomb group genes in the regulation of gap gene expression in Drosophila. Genetics 136:1341-1353.
Pirrotta, V. 1991. The genetics and molecular biology of zeste in Drosophila melanogaster. Adv. Genet. 29:301-348.
Pirrotta, V. 1998. Polycombing the genome: PcG, trxG, and chromatin silencing. Cell 93:333-336.
Platero, J. S., E. J. Sharp, P. N. Adler, and J. C. Eissenberg. 1996. In vivo assay for protein-protein interactions using Drosophila chromosomes. Chromosoma 104:393-404.
Rastelli, L., C. S. Chan, and V. Pirrotta. 1993. Related chromosome binding sites for zeste, suppressors of zeste and Polycomb group proteins in Drosophila and their dependence on Enhancer of zeste function. EMBO J. 12:1513-1522.
Satijn, D. P., and A. P. Otte. 1999. RING1 interacts with multiple Polycomb-group proteins and displays tumorigenic activity. Mol. Cell. Biol. 19:57-68.
Saurin, A. J., Z. Shao, H. Erdjument-Bromage, P. Tempst, and R. E. Kingston. 2001. A Drosophila Polycomb group complex includes Zeste and dTAFII proteins. Nature 412:655-660.
Shao, Z., F. Raible, R. Mollaaghababa, J. R. Guyon, C. T. Wu, W. Bender, and R. E. Kingston. 1999. Stabilization of chromatin structure by PRC1, a Polycomb complex. Cell 98:37-46.
Sharp, E. J., N. A. Abramova, W. J. Park, and P. N. Adler. 1997. The conserved HR domain of the Drosophila suppressor 2 of zeste [Su(z)2] and murine bmi-1 proteins constitutes a locus-specific chromosome binding domain. Chromosoma 106:70-80.
Sharp, E. J., E. C. Martin, and P. N. Adler. 1994. Directed overexpression of suppressor 2 of zeste and Posterior Sex Combs results in bristle abnormalities in Drosophila melanogaster. Dev. Biol. 161:379-392.
Simon, J., A. Chiang, and W. Bender. 1992. Ten different Polycomb group genes are required for spatial control of the abdA and AbdB homeotic products. Development 114:493-505.
Simon, J. A., and J. W. Tamkun. 2002. Programming off and on states in chromatin: mechanisms of Polycomb and trithorax group complexes. Curr. Opin. Genet. Dev. 12:210-218.
Soto, M. C., T. B. Chou, and W. Bender. 1995. Comparison of germline mosaics of genes in the Polycomb group of Drosophila melanogaster. Genetics 140:231-243.
van der Knaap, J. A., B. R. Kumar, Y. M. Moshkin, K. Langenberg, J. Krijgsveld, A. J. Heck, F. Karch, and C. P. Verrijzer. 2005. GMP synthetase stimulates histone H2B deubiquitylation by the epigenetic silencer USP7. Mol. Cell 17:695-707.
van Lohuizen, M., M. Frasch, E. Wientjens, and A. Berns. 1991. Sequence similarity between the mammalian bmi-1 proto-oncogene and the Drosophila regulatory genes Psc and Su(z)2. Nature 353:353-355.
Wang, H., L. Wang, H. Erdjument-Bromage, M. Vidal, P. Tempst, R. S. Jones, and Y. Zhang. 2004. Role of histone H2A ubiquitination in Polycomb silencing. Nature 431:873-878.
Wu, C.-T., R. S. Jones, P. F. Lasko, and W. M. Gelbart. 1989. Homeosis and the interaction of zeste and white in Drosophila. Mol. Gen. Genet. 218:559-564.
Wu, C. T., and M. Howe. 1995. A genetic analysis of the Suppressor 2 of zeste complex of Drosophila melanogaster. Genetics 140:139-181.
Molecular and Cellular Biology Jul 2005, 25 (15) 6578-6591; DOI: 10.1128/MCB.25.15.6578-6591.2005
BMB Reports
Korean Society for Biochemistry and Molecular Biology (생화학분자생물학회)
eISSN: 1976-670X
Life Science > Developmental/Neuronal Biology
BMB Reports is an international journal devoted to the very rapid dissemination of timely and significant results in diverse fields of biochemistry and molecular biology. Novel findings in the area of genomics, proteomics, metabolomics, bioinformatics, and systems biology are also considered for publication. For speedy publication of novel knowledge, we aim to offer a first decision to the authors in less than 3 weeks from the submission date. BMB Reports is an open access, online-only journal. The journal publishes peer-reviewed Original Articles and Contributed Mini Reviews.
Submission: http://submit.bmbreports.org/ (indexed in KSCI, KCI, SCOPUS, and SCIE)
Control of asymmetric cell division in early C. elegans embryogenesis: teaming-up translational repression and protein degradation
Hwang, Sue-Yun;Rose, Lesilee S. 69
https://doi.org/10.5483/BMBRep.2010.43.2.069
Asymmetric cell division is a fundamental mechanism for the generation of body axes and cell diversity during early embryogenesis in many organisms. During intrinsically asymmetric divisions, an axis of polarity is established within the cell and the division plane is oriented to ensure the differential segregation of developmental determinants to the daughter cells. Studies in the nematode Caenorhabditis elegans have contributed greatly to our understanding of the regulatory mechanisms underlying cell polarity and asymmetric division. However, much remains to be elucidated about the molecular machinery controlling the spatiotemporal distribution of key components. In this review we discuss recent findings that reveal intricate interactions between translational control and targeted proteolysis. These two mechanisms of regulation serve to carefully modulate protein levels and reinforce asymmetries, or to eliminate proteins from certain cells.
Mouse phenogenomics, toolbox for functional annotation of human genome
Kim, Il-Yong;Shin, Jae-Hoon;Seong, Je-Kyung 79
Mouse models are crucial for the functional annotation of the human genome. Gene modification techniques, including gene targeting and gene trap in the mouse, have provided powerful tools in the form of genetically engineered mice (GEM) for understanding the molecular pathogenesis of human diseases. Several international consortia and programs are under way to deliver mutations in every gene in the mouse genome. The information from studying these GEM can be shared through international collaboration. However, there are many limitations in utility because not all human genes have been knocked out in the mouse, and GEM are not yet phenotypically characterized in standardized ways, which is required for sharing and evaluating data from GEM. The recent improvement in mouse genetics has now moved the bottleneck in mouse functional genomics from the production of GEM to the systematic mouse phenotype analysis of GEM. Enhanced, reproducible and comprehensive mouse phenotype analysis has thus emerged as a prerequisite for effectively engaging the phenotyping bottleneck. In this review, current information on systematic mouse phenotype analysis and an issue-oriented perspective will be provided.
Macrophage inhibitory cytokine-1 transactivates ErbB family receptors via the activation of Src in SK-BR-3 human breast cancer cells
Park, Yun-Jung;Lee, Han-Soo;Lee, Jeong-Hyung 91
The function of macrophage inhibitory cytokine-1 (MIC-1) in cancer remains controversial, and its signaling pathways remain poorly understood. In this study, we demonstrate that MIC-1 induces the transactivation of EGFR, ErbB2, and ErbB3 through the activation of c-Src in SK-BR-3 breast cancer cells. MIC-1 induced significant phosphorylation of EGFR at Tyr845, ErbB2 at Tyr877, and ErbB3 at Tyr1289 as well as Akt and p38, Erk1/2, and JNK mitogen-activated protein kinases (MAPKs). Treatment of SK-BR-3 cells with MIC-1 increased the phosphorylation level of Src at Tyr416, and induced invasiveness of those cells. Inhibition of c-Src activity resulted in the complete abolition of MIC-1-induced phosphorylation of the EGFR, ErbB2, and ErbB3, as well as invasiveness and matrix metalloproteinase (MMP)-9 expression in SK-BR-3 cells. Collectively, these results show that MIC-1 may participate in the malignant progression of certain cancer cells through the activation of c-Src, which in turn may transactivate ErbB-family receptors.
Pig large tumor suppressor 2 (Lats2), a novel gene that may regulate the fat reduction in adipocyte
Liu, Qiuyue;Gu, Xiaorong;Zhao, Yiqiang;Zhang, Jin;Zhao, Yaofeng;Meng, Qingyong;Xu, Guoheng;Hu, Xiaoxiang;Li, Ning 97
Clenbuterol, a $\beta_2$-adrenoceptor agonist, has been proven to be a powerful repartition agent that can decrease fat deposition. Based on results from our previous cDNA microarray experiment on pig clenbuterol administration, a novel up-regulated EST was full-length cloned (4859 bp encoding 1041 amino acids) and found to be the pig homolog of large tumor suppressor 2 (Lats2). We mapped pig Lats2 to chromosome 11p13-14 by using FISH, and western blotting demonstrated that pig Lats2 protein was most abundant in adipose tissue. In Drosophila, the Lats2 ortholog was reported to be a key component of the Hippo pathway, which regulates cell differentiation and growth. Here, we show that pig Lats2 exhibits an expression pattern inverse to that of YAP1, another member of the Hippo pathway that positively regulates cell growth and proliferation, during the differentiation of 3T3-L1 preadipocytes. Our results suggest that Lats2 may be involved in the Hippo pathway, regulating fat reduction by inhibiting adipocyte differentiation and growth.
Response and transcriptional regulation of rice SUMOylation system during development and stress conditions
Chaikam, Vijay;Karlson, Dale T. 103
Modification of proteins by the reversible covalent addition of the small ubiquitin like modifier (SUMO) protein has important consequences affecting target protein stability, sub-cellular localization, and protein-protein interactions. SUMOylation involves a cascade of enzymatic reactions, which resembles the process of ubiquitination. In this study, we characterized the SUMOylation system from an important crop plant, rice, and show that it responds to cold, salt and ABA stress conditions on a protein level via the accumulation of SUMOylated proteins. We also characterized the transcriptional regulation of individual SUMOylation cascade components during stress and development. During stress conditions, the majority of the SUMO cascade components are transcriptionally down-regulated. SUMO conjugate proteins and SUMO cascade component transcripts accumulated differentially in various tissues during plant development, with the highest levels in reproductive tissues. Taken together, these data suggest a role for SUMOylation in rice development and stress responses.
On-off controllable RNA hybrid expression vector for yeast three-hybrid system
Bak, Geunu;Hwang, Se-Won;Ko, Ye-Rim;Lee, Jung-Min;Kim, Young-Mi;Kim, Kyung-Hwan;Hong, Soon-Kang;Lee, Young-Hoon 110
The yeast three-hybrid system (Y3H), a powerful method for identifying RNA-binding proteins, still suffers from many false positives, due mostly to RNA-independent interactions. In this study, we attempted to efficiently identify false positives by introducing a tetracycline operator (tetO) motif into the RPR1 promoter of an RNA hybrid expression vector. We successfully developed a tight tetracycline-regulatable RPR1 promoter variant containing a single tetO motif between the transcription start site and the A-box sequence of the RPR1 promoter. Expression from this tetracycline-regulatable RPR1 promoter in the presence of tetracycline-response transcription activator (tTA) was positively controlled by doxycycline (Dox), a derivative of tetracycline. This on-off control runs opposite to the general knowledge that Dox negatively regulates tTA. This positively controlled RPR1 promoter system can therefore efficiently eliminate RNA-independent false positives commonly observed in the Y3H system by directly monitoring RNA hybrid expression.
Transcript profiling of expressed sequence tags from intramuscular fat, longissimus dorsi muscle and liver in Korean cattle (Hanwoo)
Lim, Da-Jeong;Lee, Seung-Hwan;Cho, Yong-Min;Yoon, Du-Hak;Shin, Youn-Hee;Kim, Kyu-Won;Park, Hye-Sun;Kim, Hee-Bal 115
A large data set of Hanwoo (Korean cattle) ESTs was analyzed to obtain differential gene expression results for the following three libraries: intramuscular fat, longissimus dorsi muscle and liver. To better understand the gene expression profiles, we identified differentially expressed genes (DEGs) via digital gene expression analysis. Hierarchical clustering of genes was performed according to their relative abundance within the six separate groups (Hanwoo fat versus non-Hanwoo fat, Hanwoo muscle versus non-Hanwoo muscle and Hanwoo liver versus non-Hanwoo liver), producing detailed patterns of gene expression. We determined the quantitative traits associated with the highly expressed genes. We also provide the first list of putative regulatory elements associated with differential tissue expression in Hanwoo cattle. In addition, we conducted an evolutionary analysis suggesting that a subset of genes accelerated in the bovine lineage is strongly correlated with expression in Hanwoo muscle.
Neuronal differentiation and developmental characteristics in the dentate gyrus of staggerer mutant mice
Yi, Sun-Shin;Hwang, In-Koo;Shin, Jae-Hoon;Baek, Sung-Hee;Yoon, Yeo-Sung;Seong, Je-Kyung 122
Homozygous staggerer ($RORa^{sg/sg}$) mice show severe ataxia caused by cerebellar degeneration. Decreased and dysfunctional Rora is a main cause of this neurologic phenotype. The phenotype of staggerer mice has been well characterized in the cerebellum; however, little has been reported about the cerebrum, even though staggerer is expressed not merely in the cerebellum but also in the hippocampus, thalamus, cortex, and olfactory bulb. The expressions of Ki67, doublecortin (DCX), and NeuN, which are markers of cell proliferation, neuronal differentiation, and mature neurons, respectively, were measured by immunohistochemistry in the dentate gyrus of staggerer mice in order to uncover whether the staggerer mutation affects the dentate gyrus. The immunoreactivities of DCX and NeuN were significantly reduced in the dentate gyrus of staggerer mice compared with normal controls, while Ki67 immunoreactivity was largely unchanged. These results suggest that the staggerer mutation influences neuronal differentiation and development not only in the cerebellum but also in the dentate gyrus.
PI(3,4,5)P3 regulates the interaction between Akt and B23 in the nucleus
Kwon, Il-Sun;Lee, Kyung-Hoon;Choi, Joung-Woo;Ahn, Jee-Yin 127
Phosphatidylinositol (3,4,5)-triphosphate ($PIP_3$) is a lipid second messenger that employs a wide range of downstream effector proteins for the regulation of cellular processes, including cell survival, polarization and proliferation. One of the most well characterized cytoplasmic targets of $PIP_3$, serine/threonine protein kinase B (PKB)/Akt, promotes cell survival by directly interacting with nucleophosmin (NPM)/B23, the nuclear target of $PIP_3$. Here, we report that nuclear $PIP_3$ competes with Akt to preferentially bind B23 in the nucleoplasm. Mutation of Arg23 and Arg25 in the PH domain of Akt prevents binding to $PIP_3$, but does not disrupt the Akt/B23 interaction. However, treatment with phosphatases PTEN or SHIP abrogates the association between Akt and B23, indicating that nuclear $PIP_3$ regulates the Akt/B23 interaction by controlling the concentration and subcellular dynamics of these two proteins.
Mitochondrial DNA analysis of ancient human bones excavated from Nukdo island, S.Korea
Kim, Ae-Jin;Kim, Ki-Jeong;Choi, Jee-Hye;Choi, Eun-Ha;Jung, Yu-Jin;Min, Na-Young;Lkhagvasuren, Gavaachimed;Rhee, Sang-Myung;Kim, Jae-Hyun;Noh, Maeng-Seok;Park, Ae-Ja;Kim, Kyung-Yong;Kang, Yoon-Sung;Lee, Kwang-Ho;Kim, Keun-Cheol 133
We have performed analyses using ancient DNA extracted from 25 excavated human bones, estimated to date from around the 1st century B.C. Ancient human bones were obtained from Nukdo Island, which is located off the Korean peninsula of East Asia. We made concerted efforts to extract ancient DNA of high quality and to obtain reproducible PCR products, as this was a primary consideration for an undertaking of this scope. We performed PCR amplifications for several regions of the mitochondrial DNA and could determine mitochondrial haplogroups for 21 ancient DNA samples. Genetic information from mitochondrial DNA belonged to super-haplogroup M, haplogroup D or its sub-haplogroups (D4 or D4b), which are distinctively found in East Asians, including Koreans and Japanese. The dendrogram and principal component analysis based on haplogroup frequencies revealed that the Nukdo population was close to those of East Asians and clearly distinguished from populations of other regions. Considering that Nukdo is geographically isolated in the southern part of the Korean peninsula and is a site of commercial importance with neighboring countries, these results may reflect genetic continuity in the habitation and migration of ethnic groups who lived in this particular area in the past. Therefore, we suggest that phylogenetic analyses of ancient DNA have significant advantages for clarifying the origins and migrations of ethnic groups, or human races.
High glucose induces differentiation and adipogenesis in porcine muscle satellite cells via mTOR
Yue, Tao;Yin, Jingdong;Li, Fengna;Li, Defa;Du, Min 140
The present study investigated whether the mammalian target of rapamycin (mTOR) signal pathway is involved in the regulation of high glucose-induced intramuscular adipogenesis in porcine muscle satellite cells. High glucose (25 mM) dramatically increased intracellular lipid accumulation in cells during the 10-day adipogenic differentiation period. The expressions of CCAAT/enhancer binding protein-$\alpha$ (C/EBP-$\alpha$) and fatty acid synthase (FAS) protein were gradually enhanced during the 10-day duration, while mTOR phosphorylation and sterol-regulatory-element-binding protein (SREBP)-1c protein were induced on day 4. Moreover, inhibition of mTOR activity by rapamycin resulted in a reduction of SREBP-1c protein expression and adipogenesis in cells. Collectively, our findings suggest that the adipogenic differentiation of porcine muscle satellite cells, and the subsequent extensive adipogenesis triggered by high glucose, is initiated by the mTOR signal pathway through the activation of SREBP-1c protein. This previously uncharacterized process suggests a cellular mechanism that may be involved in ectopic lipid deposition in skeletal muscle during type 2 diabetes.
Induction of insulin receptor substrate-2 expression by Fc fusion to exendin-4 overexpressed in E. coli: a potential long-acting glucagon-like peptide-1 mimetic
Kim, Jae-Woo;Kim, Kyu-Tae;Ahn, You-Jin;Jeong, Hee-Jeong;Jeong, Hyeong-Yong;Ryu, Seung-Hyup;Lee, Seung-Yeon;Lee, Chang-Woo;Chung, Hye-Shin;Jang, Sei-Heon 146
Exendin-4 (Ex-4), a peptide secreted from the salivary glands of the Gila monster lizard, can increase pancreatic $\beta$-cell growth and insulin secretion by activating glucagon-like peptide-1 receptor. In this study, we expressed a fusion protein consisting of exendin-4 and the human immunoglobulin heavy chain (Ex-4/IgG-Fc) in E. coli and explored its potential therapeutic use for the treatment of insulin-resistant type 2 diabetes. Here, we show that the Ex-4/IgG-Fc fusion protein induces expression of insulin receptor substrate-2 in rat insulinoma INS-1 cells. Our findings therefore suggest that Ex-4/IgG-Fc overexpressed in E. coli could be used as a potential, long-acting glucagon-like peptide-1 mimetic. | CommonCrawl |
\begin{document}
\title{Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness}
\begin{abstract}
The recent advances in machine learning in various fields of applications can be largely attributed to the rise of deep learning (DL) methods and architectures. Although DL is a key technology behind autonomous cars, image processing, speech recognition, etc., a notorious problem remains the lack of theoretical understanding of DL and the related interpretability and (adversarial) robustness issues. Understanding the specifics of DL, as compared to, say, other forms of nonlinear regression methods or statistical learning, is interesting from a mathematical perspective, but at the same time it is of crucial importance in practice: treating neural networks as mere black boxes might be sufficient in certain cases, but many applications require waterproof performance guarantees and a deeper understanding of what could go wrong and why it could go wrong.
It is probably fair to say that, despite being mathematically well founded as a method to approximate complicated functions, DL is mostly still more like modern alchemy that is firmly in the hands of engineers and computer scientists. Nevertheless, it is evident that certain specifics of DL that could explain its success in applications demand systematic mathematical approaches. In this work, we review robustness issues of DL and particularly bridge concerns and attempts from approximation theory to statistical learning theory. Further, we review Bayesian Deep Learning as a means for uncertainty quantification and rigorous explainability. \end{abstract}
\section{Introduction}
According to \citet[p.~2]{wheeler2016}, machine learning is a ``marriage of statistics and computer science that began in artificial intelligence''. While statistics deals with the question of what can be inferred from data given an appropriate statistical model, computer science is concerned with the design of algorithms to solve a given computational problem that would be intractable without the help of a computer.
Artificial intelligence and, specifically, machine learning have undergone substantial developments in recent years that have led to a huge variety of successful applications, most of which would not have been possible with alternative approaches. In particular, advances in deep learning (i.e. machine learning relying on deep neural networks) have revolutionized many fields, leading, for instance, to impressive achievements in computer vision (e.g. image classification, image segmentation, image generation), natural language processing (semantic text understanding, text categorization and text creation, automatic question answering) and reinforcement learning (agents and games, high-dimensional optimization problems); cf.~\citet{sarker2021review} and the references therein.
Moreover, deep learning is nowadays increasingly applied in multiple scientific branches as an acceptable tool for conducting inference from simulated or collected data. For example, in the medical field, the development of drugs \citep{ma2015deep} or the analysis of tomography \citep{bubba2019learning} are enhanced with deep learning. In molecular simulations, ground-state properties of organic molecules are predicted \citep{faber2017prediction}, equilibrium energies of molecular systems are learnt \citep{noe2019boltzmann} or multi-electron Schr\"odinger equations are solved \citep{hermann2020deep}. Speaking of which, the numerical treatment of high-dimensional partial differential equations with neural networks has undergone vast improvements \citep{weinan2017deep, nusken2021solving}, allowing for applications in almost all sciences. In biology, cell segmentation and classification have been studied with certain convolutional neural networks \citep{ronneberger2015u}, in signal processing speech separation is approached with temporal versions of these \citep{richter2020speech}, and in finance relevant stock pricing models are solved with deep learning \citep{germain2021neural}. In remote sensing, temporal recurrent neural networks are for instance used for crop classification \citep{russwurm2018multi} and image segmentation promises automatic understanding of the increasing amount of available satellite data \citep{zhu2017deep}. The list of successful deep learning applications is long and there are many more fields in which they have made significant contributions and still promise exciting advances that we shall omit here for the sake of brevity.
It is probably fair to say that, like statistics, deep learning (or machine learning in general) aims at drawing inferences from data. But unlike statistics, it avoids being overly explicit regarding the underlying model assumptions. In statistics, either the model assumptions or the complete model are set prior to making inferences, whereas the neural networks in deep learning are mostly seen as black boxes that are essentially able to `learn' the model. In this sense, deep learning delegates what \citet[\S 72, p.~374]{reichenbach1949prob} called the ``problem of the reference class'' to a computer algorithm, namely, the problem of deciding what model class to use when making a prediction of a particular instance or when assigning a probability to a particular event. While this might be understandable -- or even desirable -- from the user's point of view, it poses risks and might bring dangerous side-effects: \begin{itemize}
\item In most of the applied deep learning models, there is a lack of explainability, meaning that even though their inference from data might work well, the mechanisms behind the predictions are not well understood. As the ambition in all sciences is to understand causal relationships rather than pure correlations, this might neither be satisfying nor lead to deeper understanding in the corresponding fields.
\item Without understanding the details of a model, potential robustness issues might not be realized either. For example, who guarantees that certain deep learning achievements easily translate to slightly shifted data settings and how can we expect neural network training runs to converge consistently?
\item Finally, often the ambition of a prediction model to generalize to unseen data is stated on an `average' level and we cannot make robust statements on unexpected events, which might imply dangerous consequences in risk-sensitive applications. In general, there is no reliable measure for prediction (un-)certainty, which might lead to blind beliefs in the model output. \end{itemize}
Even when it comes to the success stories of deep learning, many achievements and properties of the models can simply not be explained theoretically, e.g. why does one of the most naive optimization attempts, stochastic gradient descent, work so well, why do models often generalize well even though they are powerful enough to simply memorize the training data and why can high-dimensional problems be addressed particularly efficiently? Not only is it important from a practical point of view to understand these phenomena theoretically, as a deeper understanding might motivate and drive novel approaches leading to even more successful results in practice, but it is also important for getting a grip on the epistemology of machine learning algorithms. This then might also advance pure `trial and error' strategies for architectural improvements of neural networks that sometimes seem to work mostly due to extensive hyperparameter finetuning and favorable data set selections; cf.~\citep{wang2019convergence}. \par
In this article, we will argue that relying on the tempting black box character of deep learning models can be dangerous and it is important to further develop a deeper mathematical understanding in order to obtain rigorous statements that will make applications more sound and more robust. We will demonstrate that there are still many limitations in the application of artificial intelligence, but mathematical analysis promises prospects that might at least partially overcome these limitations.
We further argue that, if one accepts that explainable DL must not be understood in the sense of the deductive-nomological model of scientific explanation, Bayesian probability theory can provide a means to explain DL in a precise statistical (abductive) sense.
In fact, a comprehensive theory should guide us towards coping with the potential drawbacks of neural networks, e.g. the lack of understanding why certain network architectures work better than others, the risk of overfitting data, i.e. not performing well on unseen data, or the lack of knowledge on the prediction confidences, in particular, leading to overconfident predictions on data far from the training data set.
Even though we insist that understanding deep learning is a holistic endeavor that comprises the theoretical (e.g. approximation) properties of artificial neural networks in combination with the practical numerical algorithms that are used to train them, we refrain from going beyond the mathematical framework and exploring the epistemological implications of this framework. The epistemology of machine learning algorithms is a relatively new and dynamic field of research, and we refer to recent papers by \citet{wheeler2016} and \citet{sterkenburg2021schurz}, and the references given there.
\subsection{Definitions and first principles}
We can narrow down the definition of machine learning to one line by saying that its main intention is to identify functions that map input data $x \in \mathcal{X}$ to output data $y \in \mathcal{Y}$ in some \textit{good} way, where $\mathcal{X}$ and $\mathcal{Y}$ are suitable spaces, often identified with $\R^d$ and $\R$, respectively. In other words, the task is to find a function $f:\mathcal{X} \to \mathcal{Y}$ such that \begin{equation} \label{eq: f(x) = y}
f(x) = y. \end{equation}
To illustrate, let us provide two stereotypical examples that appear in practice. In a classification task, for instance, $x \in \mathcal{X}$ could represent an image (formalized as a matrix of pixels, or, in a flattened version, as a vector $x \in \R^d$) and $y \in \mathcal{Y} = \{1, \dots, K \}$ could be a class describing the content of the image. In a regression task, on the other hand, one tries to predict real numbers from the input data, e.g. given historical weather data and multiple measurements, one could aim to predict how much it will rain tomorrow and $y\in \mathcal{Y} = \R_{\ge 0}$ would be the amount of rain in milliliters. \par
From our simple task definition above, two questions arise immediately: \begin{enumerate}
\item How do we design (i.e. find) the function $f$?
\item How do we measure performance, i.e. how do we quantify deviations from the desired fit in \eqref{eq: f(x) = y}? \end{enumerate} Relating to question 1, it is common to rely on parametrized functions $f(x)=f_\theta(x)$, for which a parameter vector $\theta \in \R^p$ specifies the actual function. Artificial neural networks (ANNs) like deep neural networks are examples of such parametrized functions which enjoy specific beneficial properties, for instance in terms of approximation and optimization as we will detail later on. The characterizing feature of (deep) neural networks is that they are built by (multiple) concatenations of nonlinear and affine-linear maps:
\begin{definition}[Neural network, e.g.~\cite{berner2021modern,higham2019deep}] \label{def_NN} We define a \textit{feed-forward neural network} $\Phi_\sigma:\R^d \to \R^m$ with $L$ layers by \begin{equation} \Phi_\sigma(x) = A_L \sigma(A_{L-1} \sigma(\cdots \sigma(A_1 x + b_1) \cdots) + b_{L-1}) + b_L, \end{equation} with matrices $A_l \in \R^{n_{l} \times n_{l-1}}$, vectors $b_l \in \R^{n_l}, 1 \le l \le L$, and a nonlinear activation function $\sigma: \R \to \R$ that is applied componentwise. Clearly, $n_0=d$ and $n_L=m$, and the collection of matrices $A_l$ and vectors $b_l$, called \emph{weights} and \emph{biases}, comprises the learnable parameters $\theta$. \end{definition}
In practice, one often chooses $\sigma(x) = \max\{ x, 0\}$ or $\sigma(x) = (1 + e^{-x})^{-1}$, since their (sub)derivatives can be explicitly computed and they enjoy a universal approximation property \citep{barron1993approximation,cybenko1989approximation}.
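To make \Cref{def_NN} concrete, the following minimal sketch (in Python with NumPy) evaluates such a network with the ReLU activation $\sigma(x) = \max\{x, 0\}$; the layer widths and the random initialization are illustrative assumptions on our part, not choices prescribed by the definition.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    # Alternating affine maps and componentwise activations,
    # with no activation after the final affine map.
    a = x
    for A, b in zip(weights[:-1], biases[:-1]):
        a = relu(A @ a + b)
    return weights[-1] @ a + biases[-1]

# Example: L = 3 layers with widths (n0, n1, n2, n3) = (2, 16, 16, 1).
rng = np.random.default_rng(0)
dims = [2, 16, 16, 1]
weights = [rng.standard_normal((dims[l + 1], dims[l])) for l in range(3)]
biases = [rng.standard_normal(dims[l + 1]) for l in range(3)]
print(forward(np.ones(2), weights, biases))
\end{verbatim}
The learnable parameter vector $\theta$ simply collects all entries of \texttt{weights} and \texttt{biases}.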
Even though the organization of an ANN in layers is partly inspired by biological neural networks, the analogy between ANNs and the human brain is questionable and often misleading when it comes to understanding the specifics of machine learning algorithms, such as its ability to generalize \citep{wichmann2018humans}, and it will therefore play no role in what follows. We rather regard an ANN as a handy representation of the parametrized function $f_\theta$ that enjoys certain mathematical properties that we will discuss subsequently. (Note that closeness in function space does not necessarily imply closeness in parameter space and vice versa as has been pointed out in \citet[Sec.~2]{elbrachter2019degenerate}.) Clearly, alternative constructions besides the one stated in \Cref{def_NN} are possible and frequently used, depending on the problem at hand.
\subsection{Probabilistic modelling and mathematical perspectives} \label{sec: probabilistic modelling}
Now, for actually tuning the parameter vector $\theta$ in order to identify a good fit as indicated in \eqref{eq: f(x) = y}, the general idea in machine learning is to rely on training data $(x_n, y_n)_{n=1}^N \subset \mathcal{X} \times \mathcal{Y}$. For this, we define a \textit{loss function} $\ell:\mathcal{Y} \times \mathcal{Y} \to \R_{\ge 0}$ that measures how much our predictions, i.e. function outputs $f(x_n)$, deviate from their targets $y_n$. Given the training sample, our algorithm can now aim to minimize the \textit{empirical loss} \begin{equation} \label{eq: empirical loss}
\mathcal{L}_N(f) = \frac{1}{N}\sum_{n=1}^N \ell\left(f(x_n), y_n\right), \end{equation} i.e. an empirical average over all data points. Relating to question 2 from above, however, it turns out that it is not constructive to measure approximation quality by how well the function $f$ can fit the available training data, but rather to focus on the ability of $f$ to generalize to yet unseen data. To this end, the perspective of statistical learning theory assumes that the data is distributed according to an (unknown) probability distribution $\mathbb{P}$ on $\mathcal{X} \times \mathcal{Y}$. The training data points $x_n$ and $y_n$ should then be seen as realizations of the random variables $X$ and $Y$, which admit a joint probability distribution, so \begin{equation}
(X, Y) \sim \mathbb{P}. \end{equation} We further assume that all pairs $(x_n,y_n)$ are distributed identically and independently from one another (i.i.d.). The expectation over all random (data) variables of this loss is then called expected loss, defined as \begin{equation}
\mathcal{L}(f) = \E\!\left[\ell\left(f(X), Y\right) \right], \end{equation} where the expectation $\E\left[\cdot\right]$ is understood as the average over all possible data points $(X,Y)$. The expected loss measures how well the function $f$ performs on data from $\mathbb{P}$ \textit{on average}, assuming that the data distribution does not change after training. It is the general intention in machine learning to have the expected loss as small as possible.
\begin{example} To fix ideas, let us consider a toy example in $d=1$. We assume that the true function is given by $f(x) = \sin (2\pi x)$ and that the data $x$ is distributed uniformly on the interval $[0, 2]$. In \Cref{fig: sin joint density} we display the function $f$ along with $N=100$ randomly drawn data points $(x_n, y_n)_{n=1}^N$, where $y_n$ is once given by the deterministic mapping $y_n = f(x_n)$ and once by the stochastic mapping $y_n = f(x_n) + \eta_n$, where $\eta_n \sim \mathcal{N}(0, 0.01)$ indicates noise, by denoting $\mathcal{N}(\mu, \sigma^2)$ a normal (i.e. Gaussian) distribution with mean $\mu$ and variance $\sigma^2$. The stochastic mapping induces the probability measure $\mathbb{P}$, i.e. the joint distribution of the random variables $(X, Y) \in \mathcal{X} \times \mathcal{Y}$, which we plot approximately in the right panel. Note that (even for simple toy problems) $\mathbb{P}$ can usually not be written down analytically. \begin{figure}
\caption{We plot a given function $f(x) = \sin(2 \pi x)$ (in gray) along with data points (in orange) given either by a deterministic or stochastic mapping in the first two panels. The right panel shows an approximation of the measure $\mathbb{P}$ for the stochastic case.}
\label{fig: sin joint density}
\end{figure} \end{example}
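The toy example can be reproduced in a few lines; here is a minimal sketch in which the squared loss and the deliberately poor candidate predictor $g \equiv 0$ are our own illustrative choices. The expected loss, which is not available in closed form in general, is approximated by a large i.i.d. sample.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2.0 * np.pi * x)

# Training sample of size N = 100 from the stochastic mapping:
N = 100
x_train = rng.uniform(0.0, 2.0, size=N)
y_train = f(x_train) + rng.normal(0.0, 0.1, size=N)  # noise variance 0.01

g = lambda x: np.zeros_like(x)  # candidate predictor (illustrative)

# Empirical loss L_N(g) under the squared loss:
empirical_loss = np.mean((g(x_train) - y_train) ** 2)

# Monte Carlo approximation of the expected loss L(g):
x_big = rng.uniform(0.0, 2.0, size=10**6)
y_big = f(x_big) + rng.normal(0.0, 0.1, size=10**6)
expected_loss = np.mean((g(x_big) - y_big) ** 2)

# Both are close to E[sin(2 pi X)^2] + 0.01 = 0.51.
print(empirical_loss, expected_loss)
\end{verbatim}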
For a further analysis, let us give names to three different functions that minimize a given corresponding loss (assuming for simplicity that all minima are attained, even though they may not be unique): \begin{equation}
f^B \in \argmin_{f \in \mathcal{M}(\mathcal{X}, \mathcal{Y})} \mathcal{L}(f), \qquad f^* \in \argmin_{f \in \mathcal{F}}\mathcal{L}(f), \qquad \widehat{f}_N \in \argmin_{f\in \mathcal{F}} \mathcal{L}_N(f). \end{equation} The first quantity, $f^B$, is the theoretically optimal function among all mathematically reasonable (or: \emph{measurable}) functions (cf. Appendix \ref{sec:bayesopt}), denoted here by the set $\mathcal{M}(\mathcal{X}, \mathcal{Y})$, the second quantity, $f^*$, is the optimal function in a specified function class $\mathcal{F}$ (e.g. the class of neural networks), and the third quantity, $\widehat{f}_N$, is the function that minimizes the empirical error on the training data.
With regard to the second quantity above, finding a suitable function class $\mathcal{F}$ requires balancing two conflicting goals: on the one hand, the function class should be sufficiently rich to enjoy the \emph{universal approximation property}, i.e. the ability to represent any theoretically optimal function $f^B$ up to a sufficiently small approximation error that is still considered acceptable.\footnote{What is considered an acceptable approximation error depends on the problem at hand.} On the other hand, the function class should not be overly complex, in order to avoid overfitting which may lead to a function $f$ (e.g. a classifier) that poorly generalizes beyond known data.
Let us make this point more precise, and let us say that we have some training algorithm that has produced a function $f$ on the training data $(x_n, y_n)_{n=1}^N$ (see Appendix \ref{sec:SGD} for details).
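As a concrete illustration of such a training algorithm, the following sketch runs plain mini-batch stochastic gradient descent with hand-coded backpropagation for a one-hidden-layer network on the toy data above; the architecture (32 tanh units), step size, batch size and iteration count are illustrative assumptions rather than recommendations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N = 100
x = rng.uniform(0.0, 2.0, size=(N, 1))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, size=(N, 1))

h = 32  # hidden width
W1, b1 = rng.standard_normal((1, h)), np.zeros(h)
W2, b2 = rng.standard_normal((h, 1)), np.zeros(1)
lr, batch = 0.05, 10

for step in range(20000):
    idx = rng.integers(0, N, size=batch)  # random mini-batch
    xb, yb = x[idx], y[idx]
    a = np.tanh(xb @ W1 + b1)             # hidden layer
    pred = a @ W2 + b2
    err = 2.0 * (pred - yb) / batch       # d(mean squared error)/d(pred)
    gW2, gb2 = a.T @ err, err.sum(0)
    da = (err @ W2.T) * (1.0 - a**2)      # chain rule through tanh
    gW1, gb1 = xb.T @ da, da.sum(0)
    W1, b1 = W1 - lr * gW1, b1 - lr * gb1
    W2, b2 = W2 - lr * gW2, b2 - lr * gb2

train_loss = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print(train_loss)  # should approach the noise level 0.01 if SGD succeeds
\end{verbatim}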
We can decompose the deviation of the function $f$ from the theoretically optimal solution $f^B$ into four different terms that correspond to three different error contributions -- generalization, optimization and approximation error: \begin{equation} \label{eq: error decomposition}
\mathcal{L}(f) - \mathcal{L}(f^B) = \underbrace{\mathcal{L}(f) - \mathcal{L}_N(f)}_{\text{generalization error}} + \underbrace{\mathcal{L}_N(f) - \mathcal{L}_N({f}^*)}_{\text{optimization error}} + \underbrace{\mathcal{L}_N({f}^*) - \mathcal{L}(f^*)}_{\text{generalization error}} + \underbrace{\mathcal{L}(f^*) - \mathcal{L}(f^B)}_{\text{approximation error}}. \end{equation} Specifically, if we set $f=\widehat{f}_N$, the above decomposition reveals what is known as the bias-variance tradeoff, namely, the decomposition of the total error (as measured in terms of the loss) into a contribution that stands for the ability of the function $f^*\in\mathcal{F}$ to best approximate the truth $f^B$ (bias) and a contribution that represents the ability to estimate the approximant $f^*$ from finitely many observations (variance), namely\footnote{Here we loosely understand the word `truth' in the sense of empirical adequacy following the seminal work of van Fraassen \citet[p.~12]{vanFraasen1980scientific}, which means that we consider the function $f^B$ to be empirically adequate, in that there is no other function (e.g. classifier or regression function) that has a higher likelihood relative to all unseen data in the world; see also \citet{hanna1983adequacy}. The term `truth' is typical jargon in the statistical learning literature, and one should not take it as a scientific realist's position.} \[ \mathcal{L}(\widehat{f}_N) - \mathcal{L}(f^B) = \underbrace{\mathcal{L}(\widehat{f}_N) - \mathcal{L}(f^*)}_{\text{estimation error (variance)}} + \underbrace{\mathcal{L}(f^*) - \mathcal{L}(f^B)}_{\text{approximation error (bias)}}. \] We should stress that it is not yet fully understood in which cases overfitting leads to poor generalization and prediction properties of an ANN, as there are cases in which models with many (nonzero) parameters that are perfectly fitted to noisy training data may still have good generalization skills; cf.~\citep{bartlett2020overfitting} or Section \ref{sec:generalmemory} below for further explanation.
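This tradeoff is easy to observe numerically if we replace neural networks by the simpler class of polynomials of fixed degree (an illustrative substitution on our part): small degrees incur a large approximation error (bias), while large degrees fit the noise in the training sample and generalize poorly (variance).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(2.0 * np.pi * x)

def sample(n):
    x = rng.uniform(0.0, 2.0, size=n)
    return x, f(x) + rng.normal(0.0, 0.1, size=n)

x_train, y_train = sample(30)
x_test, y_test = sample(10**5)  # large sample as proxy for the expected loss

for degree in [1, 3, 9, 20]:
    coef = np.polyfit(x_train, y_train, degree)  # empirical risk minimizer
    train = np.mean((np.polyval(coef, x_train) - y_train) ** 2)
    test = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(degree, train, test)
\end{verbatim}
Typically the training loss decreases monotonically with the degree, whereas the test loss first decreases and then increases again, mirroring the decomposition above.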
A practical challenge of any function approximation and any learning algorithm is to minimize the expected loss by only using a finite amount of training data, but without knowing the underlying data distribution $\mathbb{P}$. In fact, one can show that there is no universal learning algorithm that works well for every data distribution (\emph{no free lunch theorem}). Instead, any learning algorithm (e.g. for classification) with robust error bounds must necessarily be accompanied by a priori regularity conditions on the underlying data distribution, e.g.~\citep{berner2021modern,shalev2014understanding,wolpert1996}.\footnote{As a consequence, deep learning neither solves Reichenbach's reference class problem nor gives any hint toward a solution of the problem of induction; it is rather an instance in favor of the Duhem-Quine thesis, in that any learning algorithm that generalizes well from seen data must rely on appropriate background knowledge \citep[p.~44]{quine1951}; cf.~\citet{sterkenburg2019putnam}.}
\par
Let us come back to the loss decomposition \eqref{eq: error decomposition}. The three types of errors hint at different perspectives that are important in machine learning from a mathematical point of view: \begin{enumerate}
\item Generalization: How can we guarantee generalizing to unseen data while relying only on a finite amount of training data?
\item Function approximation: Which neural network architectures do we choose in order to gain good approximation qualities (in particular in high-dimensional settings)?
\item Optimization: How do we optimize a complicated, nonconvex function, like a neural network? \end{enumerate}
Besides these three, there are more aspects that cannot be read off from equation \eqref{eq: error decomposition}, but turn out to become relevant in particular in certain practical applications. Let us stress the following two:
\begin{enumerate} \setcounter{enumi}{3}
\item Numerical stability and robustness: How can we design neural networks and corresponding algorithms that exhibit some numerical stability and are robust to certain perturbations?
\item Interpretability and uncertainty quantification: How can we explain the input-output behavior of certain complicated, potentially high-dimensional function approximations and how can we quantify uncertainty in neural network predictions? \end{enumerate}
In this article, we will argue that perspectives 4 and 5 are often overlooked, but particularly relevant for a discussion of the limitations and prospects of machine learning. Along these lines, we will see that there are promising novel developments and ideas that advance the aspiration to put deep learning onto more solid ground in the future.
The article is organized as follows. In \Cref{sec: ANNs - oddities} we will review some aspects of neural networks, admittedly in a non-exhaustive manner, where in particular Sections \ref{sec:generalmemory}--\ref{sec:regularize} will correspond to perspectives 1--3 stated above. \Cref{sec: adversarial attacks} will then demonstrate why (non-)robustness issues in deep learning are particularly relevant for practical applications, as illustrated by adversarial attacks in \Cref{sec:imageattack}. We will argue in \Cref{sec: worst-case scenarios} that successful adversarial attacks on (deep) neural networks require careful thinking about worst-case analyses and uncertainty quantification. \Cref{sec: adversarial attacks} therefore relates to perspectives 4 and 5 from above. Next, \Cref{sec: Bayesian perspective} will introduce the Bayesian perspective as a principled framework to approach some of the robustness issues raised before. After introducing Bayesian neural networks, we will discuss computational approaches in \Cref{sec: BNN in practice} and review further challenges in \Cref{sec: challenges for BNNs}. Finally, in \Cref{sec: conclusion} we will draw a conclusion.
\section{Deep neural networks: oddities and some specifics} \label{sec: ANNs - oddities}
One of the key questions regarding machine learning with (deep) neural networks is related to their ability to generalize beyond the data used in the training step (cf. perspective 1 in \Cref{sec: probabilistic modelling}). The idea here is that a trained ANN applies the regularities found in the training data (i.e. in past observations) to future or unobserved data, assuming that these regularities are persistent. Without dwelling on technical details, it is natural to understand the training of a neural network from a probabilistic viewpoint, with the trained ANN being a collection of functions that is characterized by a probability distribution over the parameter space, rather than by a single function.
This viewpoint is in accordance with how the training works in practice, since training an ANN amounts to minimizing the empirical loss given some training data, as stated in equation \eqref{eq: empirical loss}, and this minimization is commonly done by some form of stochastic gradient descent (SGD) in the high-dimensional \emph{loss landscape}\footnote{The empirical risk $J_N(\theta)=\mathcal{L}_N(f_\theta)$, considered as a function of the parameters $\theta$ is often called the \emph{loss landscape} or \emph{energy landscape}.}, i.e. batches of the full training set are selected randomly during the training iterations (see also \Cref{sec:SGD}).
As a consequence, the outcome of the training is a random realization of the ANN and one can assign a probability distribution to the trained neural network.
\subsection{Generalization, memorization and benign overfitting}\label{sec:generalmemory}
If we think of the parametrized function that represents a trained neural network as a random variable, it is natural to assign a probability measure $Q(f)$ to every regression function $f$.
So, let $Q^B=Q(f^B)$ be the target probability distribution (i.e.~the truth), $Q^*=Q(f^*)$ the best approximation, and $\widehat{Q}_N=Q(\widehat{f}_N)$ the distribution associated with the $N$ training points that are assumed to be randomly drawn from $\mathbb{P}$.
We call $f(t)\in\mathcal{F}$ the function approximation that is obtained after running the parameter fitting until time $t$ (see Sec.~\ref{sec:regularize} and Appendix \ref{sec:SGD} below for further details) -- $f(t)$ therefore models the training for a specified amount of training iterations. Ideally, one would like to see that $Q(f(t))$ resembles either the truth $Q^B$ or its best approximation $Q^*$ as the training proceeds; however, it has been shown that trained networks often memorize (random) training data in that \cite[Thm.~6]{yang2021generalization} \[ \lim_{t\to\infty} Q(f(t)) = \widehat{Q}_N\,. \] In this case, the training lets the model learn the data, which amounts to memorizing facts, without a pronounced ability to generate knowledge. It is interesting to note that this behavior is consistently observed when the network is trained on a completely random relabelling of the true data, in which case one would not expect outstanding generalization capabilities of the trained ANN \citep{zhang2021understanding}. Finally, it may also happen that $Q(f(t))$ does not converge to $\widehat{Q}_N$ at all, in which case it diverges and thus gives no information whatsoever about the truth. \par
A phenomenon that is related to memorizing the training data and that is well known in statistical learning is called \textit{overfitting}. It amounts to the trained function fitting the available data (too) well, while not generalizing to unseen data, as illustrated in the bottom left panel of \Cref{fig: underfitting overfitting}. The classical viewpoint in statistics is that when the function has far more parameters than there are data points (as is common with deep neural networks) and if the training time is too large, overfitting might happen, as illustrated in \Cref{fig:training}. An indication of overfitting can be that the generalization error grows strongly while the empirical risk is driven almost to zero. To prevent this, instead of increasing the number of training steps, $t$, while the training data remains the same, one can resort to early stopping. It has been shown (e.g. \cite[Cor.~7]{yang2021generalization}) that the empirical distribution can be close to the truth (in which case the ANN generalizes well) if the training is stopped after a sufficiently long, but not too long training phase. Figure \ref{fig:training} shows the typical shape of the discrepancy between the trained network and the truth.\par
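To illustrate early stopping, the following minimal NumPy sketch trains an overparametrized linear model by gradient descent and halts once a validation loss (acting as a proxy for the true risk) stops improving; the toy data and all constants are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 200                                     # far more parameters than data
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p) / np.sqrt(p)
y = X @ w_true + 0.5 * rng.standard_normal(n)
X_val = rng.standard_normal((n, p))
y_val = X_val @ w_true + 0.5 * rng.standard_normal(n)

w = np.zeros(p)
best_val, best_w, bad, patience = np.inf, w.copy(), 0, 50
for t in range(20000):
    w -= 1e-3 * 2 / n * X.T @ (X @ w - y)          # gradient step on empirical risk
    val = np.mean((X_val @ w - y_val) ** 2)        # validation proxy for true risk
    if val < best_val:
        best_val, best_w, bad = val, w.copy(), 0
    else:
        bad += 1
        if bad > patience:                         # stop before severe overfitting
            break
print(t, best_val)
\end{verbatim}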
However, it turns out that there are also cases of benign overfitting, in which an ANN shows remarkable generalization properties, even though it is essentially fitting the noise in the training data. The phenomenon of benign overfitting, also known by the name of \emph{double descent}, describes the empirical observation that the generalization error, as measured by the true risk, decreases again as the number of parameters is increased -- despite severe overfitting (see Figure \ref{fig:doubledescent}). Note that there is no contradiction between the double descent phenomenon and the traditional U-shaped risk curve shown in Figure \ref{fig:training}, as they hold under different circumstances and the double descent requires pushing the number of parameters beyond a certain (fairly large) threshold.
\begin{figure}
\caption{Different examples of good and bad fits in the classical regression scheme: While a perfect fit to the training data may either lead to a high-fidelity model on the training data that has no (upper left panel) or very little (lower left panel) predictive power, underfitting leads to a low-fidelity model on the training data (upper right panel). A good fit (lower right panel) is indicated by a compromise between model-fidelity and predictive power.}
\label{fig: underfitting overfitting}
\end{figure}
\begin{figure}
\caption{Traditional risk curve: schematic sketch of the generalization error of a generic deep neural network for a fixed amount of training data as a function of the training time $t$; see \citep{yang2021generalization} for details.}
\label{fig:training}
\end{figure}
It has been conjectured that this phenomenon is related to a certain low rank property of the data covariance; nevertheless a detailed theoretical understanding of the double descent curve for a finite amount of training data is still lacking as the available approximation results cannot be applied in situations in which the number of parameters is much higher than the number of data points. Interestingly, double descent has also been observed for linear regression problems or kernel methods, e.g. \citep{bartlett2020overfitting,mei2021double}. Thus it does not seem to be a unique feature of ANNs; whether or not it is a more typical phenomenon for ANNs is an open question though \citep{belkin2019tradeoff}; see also \citet{opper1990perceptron} for an early reference in which the double descent feature of ANNs has been first described (for some models even multiple descent curves are conjectured \citep{chen2020multiple, liang2020multiple}).
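Since double descent appears already for linear models on random features, it can be reproduced in a few lines. The following NumPy sketch fits min-norm least squares on random ReLU features and typically shows the risk peak near the interpolation threshold $p\approx n$; the concrete model and constants are our own illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n, n_test = 10, 100, 2000
w = rng.standard_normal(d); w /= np.linalg.norm(w)

def data(m):
    X = rng.standard_normal((m, d))
    return X, X @ w + 0.3 * rng.standard_normal(m)

X, y = data(n)
X_te, y_te = data(n_test)

for p in [10, 50, 90, 100, 110, 200, 1000]:        # number of random ReLU features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    F, F_te = np.maximum(X @ W, 0), np.maximum(X_te @ W, 0)
    beta = np.linalg.pinv(F) @ y                   # min-norm least-squares fit
    print(p, np.mean((F_te @ beta - y_te) ** 2))   # risk typically peaks near p = n
\end{verbatim}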
\begin{figure}
\caption{Risk curve with benign overfitting: highly overparametrized ANNs often exhibit the double descent phenomenon when the number of parameters exceeds the number of data points. The leftmost vertical dashed line shows the optimal model complexity (for given observation data), beyond which the model is considered overparametrized. The rightmost vertical dashed line marks the interpolation threshold at which the model can exactly fit all data points.}
\label{fig:doubledescent}
\end{figure}
\subsection{Curse of dimensionality}
An important aspect of function approximation (and therefore related to perspective 2 stated in \Cref{sec: probabilistic modelling}) is the question of how complicated the function $f_\theta$ or, equivalently, how rich the function class $\mathcal{F}$ needs to be. This becomes particularly interesting if the state space is high-dimensional, where one faces a notorious challenge known as the \emph{curse of dimensionality}. It describes the phenomenon that approximating a target function $f^B$ or the corresponding probability distribution $Q^B=Q(f^B)$ when $\mathcal{X}$ is high-dimensional (i.e. when the number of degrees of freedom is large) requires a huge amount of training data to determine a regression function $f_\theta$ that is able to approximate the target.
As a rule of thumb, approximating a function $f^B$ on $\mathcal{X}=\R^d$ or the associated probability measure $Q^B$ with an accuracy of $\epsilon$ needs about \[N=\epsilon^{-\Omega(d)}\] sample points in order to determine roughly the same number of a priori unknown parameters $\theta$, thereby admitting an exponential dependence on the dimension.\footnote{Here we use the Landau notation $\Omega(d)$ to denote a function of $d$ that asymptotically grows like $\alpha\cdot d$ for some constant $\alpha>0$; often $\alpha=1,2$.}
It is easy to see that the number of parameters needed and the size of the training set become astronomical for real-world tasks.
As an example, consider the classification of handwritten digits. The MNIST database (Modified National Institute of Standards and Technology database) contains a dataset of about 60\,000 handwritten digits that are stored in digital form as $28\times 28$ pixel greyscale images \citep{lecun1998mnist}. If we store only the greyscale values for every image as a vector, then the dimension of every such vector will be $28^2=784$. By today's standards, this is considered a small system, yet it is easy to see that training a network with about $10^{784}$ parameters and roughly the same number of training data points is simply not feasible, especially as the training set contains fewer than $10^5$ data points.
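The arithmetic behind this statement is trivial to reproduce; the following snippet merely evaluates the rule of thumb $N=\epsilon^{-d}$ for $\epsilon=0.1$.
\begin{verbatim}
eps_inv = 10                         # accuracy eps = 0.1
for d in [2, 10, 784]:               # 784 = dimension of an MNIST image vector
    N = eps_inv ** d                 # N = eps^(-d) sample points (exact integer)
    print(f"d = {d}: N has {len(str(N))} digits")
\end{verbatim}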
In practice, the number of ANN parameters and the number of data points needed to train a network can be much smaller. In some cases, this inherent complexity reduction present in deep learning can be mathematically understood. Clearly, when the target function is very smooth, symmetric or concentrated, it is possible to approximate it with a parametric function having a smaller number of parameters. The class of functions that can be approximated by an ANN without an exponentially large number of parameters, however, is considerably larger; for example, Barron-regular functions that form a fairly large class of relevant functions can be approximated by ANNs in arbitrary dimension with a number of parameters that is independent of the dimension \citep[Thm.~3]{barron1993approximation}; there are, moreover, results that show that it is possible to express \emph{any} labelling of $N$ data points in $\R^d$ by an ANN with two layers and in total $p=2N+d$ parameters \citep[Thm.~1]{zhang2021understanding}; cf.~\citep{devore2021approx}. In general, however, the quite remarkable expressivity of deep neural networks with a relatively small number of parameters and even smaller training sets is still not well understood \citep[Sec.~4]{berner2021modern}.\footnote{Here, `relatively small' must be understood with respect to the dimension of the training data set. An ANN that was successfully trained on MNIST data may still have several hundred million or even billions of parameters; nevertheless, the number of parameters is small compared to what one would expect from an approximation theory perspective, namely $10^{784}$. However, it is large compared to the minimum number of parameters needed to fit the data, which in our example would be $p=2\cdot 60\,000+784=120\,784$, hence an ANN with good generalization capacities is typically severely overfitting, especially if we keep in mind that the effective dimension of the MNIST images, which contain about 90\% black pixels, is considerably smaller.}
\subsection{Stochastic optimization as implicit regularization}\label{sec:regularize}
Let us finally discuss an aspect related to the optimization of ANNs (cf. perspective 3 in \Cref{sec: probabilistic modelling}) that interestingly offers a connection to function approximation as well. Here, the typical situation is that no a priori information whatsoever about the function class to which $f^B$ belongs is available. A conventional way then to control the number of parameters and to prevent overfitting is to add a regularization term to the loss function that forces the majority of the parameters to be zero or close to zero and hence effectively reduces the number of parameters \citep{tibshirani1996lasso}. Even though regularization can improve the generalization capabilities, it has been found to be neither necessary nor sufficient for controlling the generalization error \citep{geron2017handson}.
Instead, surprisingly, there is (in some situations provable) evidence that SGD introduces an implicit regularization to the empirical risk minimization that is not present in the exact (i.e.~deterministic) gradient descent \citep{ali2020implicit,roberts2021sgd}. A possible explanation of this effect is that the inexact gradient evaluation of SGD introduces some noise that prevents the minimization algorithm from getting stuck in a bad local minimum. It has been observed that the effect is more pronounced when the variance of the gradient approximation is larger, in other words: when the approximation has a larger sampling error \citep{keskar2016large}. A popular, though controversial explanation is that noisier SGD tends to favor wider or flatter local minima of the loss landscape that are conventionally associated with better generalization capabilities of the trained ANN \citep{dinh2017sharp,hochreiter1997flat}. How to unambiguously characterize the `flatness' of local minima with regard to their generalization capacities, however, is still an open question. Furthermore, it should be noted that too much variance in the gradient estimation is not favorable either, as it might lead to slower training convergence, and it will be interesting to investigate how to account for this tradeoff; cf.~\citep{bottou2018optimization, richter2020vargrad}.
\begin{example} To illustrate the implicit regularization of an overfitted ANN by SGD, we consider the true function $f(x) = \sin(2 \pi x)$ and create $N=100$ noisy data points according to $y_n = f(x_n) + 0.15 \,\eta_n$, where $x_n$ is uniformly distributed in the interval $[0,2]$ (symbolically: $x_n\sim \mathcal{U}([0, 2])$) and $\eta_n \sim \mathcal{N}(0, 1)$. We choose a fully connected NN with three hidden layers (i.e. $L=4$), each with $10$ neurons.
We train the network once with (full-batch) gradient descent and once with stochastic gradient descent, randomly choosing a batch of size $N_b = 10$ in each gradient step. In \Cref{fig: GD vs. SGD} we can see that running gradient descent on the noisy data leads to overfitting, whereas stochastic gradient descent seems to have some implicit regularizing effect.
\begin{figure}
\caption{We consider a fully connected neural network (blue) that has been trained on $N=100$ noisy data points (orange), once by gradient descent and once by stochastic gradient descent, and compare it to the ground truth function (grey).}
\label{fig: GD vs. SGD}
\end{figure} \end{example}
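A minimal PyTorch sketch of the experiment described in the example above might look as follows; since the example does not specify the activation function, the optimizer or the number of iterations, the choices below ($\tanh$ activations, plain SGD, $5000$ steps) are our own assumptions.
\begin{verbatim}
import math, torch
import torch.nn as nn

torch.manual_seed(0)
N, sigma = 100, 0.15
x = 2 * torch.rand(N, 1)                          # x_n ~ U([0, 2])
y = torch.sin(2 * math.pi * x) + sigma * torch.randn(N, 1)

def train(batch_size, steps=5000, lr=1e-2):
    # three hidden layers with 10 neurons each (L = 4)
    net = nn.Sequential(nn.Linear(1, 10), nn.Tanh(),
                        nn.Linear(10, 10), nn.Tanh(),
                        nn.Linear(10, 10), nn.Tanh(),
                        nn.Linear(10, 1))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randperm(N)[:batch_size]      # batch_size = N gives plain GD
        loss = ((net(x[idx]) - y[idx]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

net_gd  = train(batch_size=N)                     # (full-batch) gradient descent
net_sgd = train(batch_size=10)                    # stochastic gradient descent
\end{verbatim}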
We have provided a potpourri of aspects related to the three perspectives \textit{generalization}, \textit{function approximation} and \textit{optimization}, demonstrating subtleties of deep learning that have partly been understood with the help of rigorous mathematical analysis, while still leaving many open questions for future research. In the following, let us move towards perspectives 4 and 5 that we have stated in \Cref{sec: probabilistic modelling}. In particular, the following chapter will argue that relying on classical statistical learning theory might not be sufficient in certain practical applications and additional effort and analysis are needed in order to make deep learning more robust.
\section{Sensitivity and (non-)robustness of neural networks} \label{sec: adversarial attacks} So far we have measured the performance of prediction models in an `average' sense. In particular, we have stated that the goal of a machine learning algorithm is to minimize the expected loss \begin{equation}
\mathcal{L}(f) = \E\!\left[\ell(f(X), Y) \right], \end{equation} where the deviations between predictions and ground truth data are averaged over the (unknown) probability distribution $\mathbb{P}$. Statements from statistical learning theory therefore usually rest on the implicit assumption that future data comes from the same distribution and is hence similar to that encountered during training (cf. \Cref{sec: probabilistic modelling}). This perspective might often be valid in practice, but falls short of atypical data in the sense of having a small likelihood, which makes such an occurrence a rare event or a \emph{large deviation}. Especially in safety-critical applications one might not be satisfied with average-case guarantees, but rather strives for worst-case analyses or at least for an indication of the certainty of a prediction (which we will come back to in the next section). Moreover, it is known that models like neural networks are particularly sensitive with respect to the input data, implying that very small, barely detectable changes of the data can drastically change the output of a prediction model -- a phenomenon that is not respected by an analysis based on expected losses.
\subsection{Adversarial attacks}\label{sec:imageattack}
An extreme illustration of the sensitivity of neural networks can be noted in \textit{adversarial attacks}, where input data is manipulated in order to mislead the algorithm.\footnote{This desire to mislead the algorithm is in accordance with Popper's dictum that we are essentially learning from our mistakes. As \citet[p.~324]{popper1984suche}) mentions in the seminal speech \emph{Duldsamkeit und intellektuelle Verantwortlichkeit} on the occasion of receiving the \emph{Dr. Leopold Lucas Price of the University of Tübingen} on the 26th May 1981: ``[\ldots] es ist die spezifische Aufgabe des Wissenschaftlers, nach solchen Fehlern zu suchen. Die Feststellung, daß eine gut bewährte Theorie oder ein viel verwendetes praktisches Verfahren fehlerhaft ist, kann eine wichtige Entdeckung sein.''} Here the idea is to add very small and therefore barely noticeable perturbations to the data in such a way that a previously trained prediction model then provides very different outputs. In a classification problem this could for instance result in suggesting different classes for almost identical input data. It has gained particular attention in image classification, where slightly changed images can be misclassified, even though they appear identical to the original image for the human eye, e.g.~\citep{brown2017adversarial,goodfellow2014explaining,kurakin2016adversarial}.
Adversarial attacks can be constructed in many different ways, but the general idea is usually the same. We discuss the example of a trained classifier: given a data point $x\in \R^d$ and a trained neural network $f_\theta$, we add some minor change $\delta \in \R^d$ to the input data $x$, such that $f_\theta(x + \delta)$ predicts a wrong class. One can distinguish between targeted and untargeted adversarial attacks, where either the wrong class is specified or the misclassification to any arbitrary (wrong) class is aimed at. We focus on the former strategy as it turns out to be more powerful. Since the perturbation is supposed to be small (e.g. for the human eye), it is natural to minimize the perturbation $\delta$ in some suitable norm (e.g. the Euclidean norm or the maximum norm) while constraining the classifier to assign a wrong label $\widetilde{y} \neq y$ to the perturbed data and imposing an additional box constraint. In the relevant literature (e.g.~\citet{szegedy2014intriguing}), an adversarial attack is constructed as the solution to the following optimization problem: \begin{equation}\label{eq:aabox}
\text{minimize } \| \delta \| \qquad \text{subject to} \qquad f_\theta(x + \delta) = \widetilde{y}\quad\text{and} \quad x+\delta\in[0,1]^d\,. \end{equation} Note that we have the hidden constraint $f_\theta(x)=y$, where $y\neq \widetilde{y}$ and the input variables have been scaled such that $x\in[0,1]^d$. In order to have an implementable version of this procedure, one usually considers a relaxation of (\ref{eq:aabox}) that can be solved with (stochastic) gradient descent-like methods in $\delta$; see e.g.~\citep{carlini2017towards}.
Roughly speaking, generating an adversarial attack amounts to doing a (stochastic) gradient descent in the data rather than the parameters, with the aim of finding the closest possible input $\widetilde{x}$ to $x$ that gets wrongly classified and to analyze what went wrong.\footnote{Again, quoting \citet[p.~325]{popper1984suche}: ``Wir müssen daher dauernd nach unseren Fehlern Ausschau halten. Wenn wir sie finden, müssen wir sie uns einprägen; sie nach allen Seiten analysieren, um ihnen auf den Grund zu gehen.''.}
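The following PyTorch sketch shows one simplified relaxation of \eqref{eq:aabox}: the hard constraint $f_\theta(x+\delta)=\widetilde{y}$ is replaced by a weighted cross-entropy term with trade-off parameter $c$, and the box constraint is enforced by clamping. This is only a sketch in the spirit of the relaxations discussed in the literature, which often use more refined losses and constraint handling; the function name and hyperparameters are our own.
\begin{verbatim}
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, c=1.0, steps=200, lr=1e-2):
    """Minimize ||delta||^2 + c * CE(model(x + delta), target) over delta.

    x:      batch of inputs scaled to [0, 1]
    target: desired (wrong) labels, shape (batch,)
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)       # box constraint
        loss = (delta ** 2).sum() + c * F.cross_entropy(model(x_adv), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta).detach().clamp(0.0, 1.0)
\end{verbatim}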
\begin{example}[Adversarial attack to image classification] Let us provide an example of an adversarial attack in image classification. For this we use the Inception-v3 model from \citep{szegedy2016rethinking}, which is pretrained on $1000$ fixed classes. For the image in the left panel of \Cref{fig: adversarial attack} a class is predicted that seems close to what is in fact displayed. We then compute a small perturbation $\delta$, displayed in the central panel, with the goal of getting a different classification result. The right panel displays the perturbed image $x+\delta$, which notably looks indistinguishable from the original image, yet gets classified wrongly with the same Inception-v3 model. The displayed probabilities are the so-called softmax outputs of the neural network for the predicted classes and they represent some sort of certainty scores.
\begin{figure}
\caption{The original image of Thomas Bayes in the left panel gets reasonably classified (``cloak''), whereas the right picture is the result of an adversarial attack and therefore gets misclassified (as ``mosque'').}
\label{fig: adversarial attack}
\end{figure} \end{example}
\subsection{Including worst-case scenarios and marginal cases} \label{sec: worst-case scenarios} Adversarial attacks demonstrate that neural networks might not be robust with respect to unexpected input data and the next question naturally is how this issue can be addressed. In fact, multiple defense strategies have been developed in recent years in order to counteract attacks, while it is noted that a valid evaluation of defenses against adversarial examples turns out to be difficult, since one can often find additional attack strategies afterwards that have not been considered in the evaluation \citep{carlini2019evaluating}. One obvious idea for making neural networks more robust is to integrate adversarial attacks into the training process, e.g. by considering the minimization \begin{equation} \min_\theta \E\!\left[ \max_{\delta \in \Delta}\ell(f_{\theta}(X + \delta), Y)\right] , \end{equation}
where $\Delta = \{ \delta : \| \delta \| \le \varepsilon\}$ is some specified perturbation range \citep{madry2018towards,wang2019convergence}. Depending on the application, however, solving this min-max problem to convergence can be cumbersome. At present, the study of adversarial attacks is a prominent research topic with many questions still open (e.g. the role of regularization \citep{roth2019adversarial}), and it has already become apparent that principles that hold for the average case scenario might not be valid in worst-case settings anymore; cf.~\citep[Sec.~4]{ilyas2019adversarial}. To give an example, there is empirical evidence that overfitting might be more harmful when adversarial attacks are present, in that overparametrized deep NNs that are robust against adversarial attacks may not exhibit the typical double descent phenomenon when the training is continued beyond the interpolation threshold (cf.~Figure \ref{fig:doubledescent}); instead they show a slight increase of the generalization risk when validated against test data, i.e. their test performance degrades, which is at odds with the observations made for standard deep learning algorithms based on empirical risk minimization \citep{rice2020overfitting}.
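A common way to approximate the inner maximization is projected gradient descent (PGD) \citep{madry2018towards}; the following PyTorch sketch is a simplified variant with our own hyperparameters.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Approximate the inner maximization by projected gradient ascent
    within the perturbation range ||delta||_inf <= eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # ascent step on the loss
            delta.clamp_(-eps, eps)               # projection onto Delta
        delta.grad.zero_()
    return delta.detach()

# One adversarial training step (model optimizer `opt` assumed given):
#   opt.zero_grad()
#   F.cross_entropy(model(x + pgd_perturbation(model, x, y)), y).backward()
#   opt.step()
\end{verbatim}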
Another way to address adversarial attacks is to incorporate uncertainty estimates in the models and hope that those then indicate whether perturbed (or out of sample) data occurs. Note that the question as to whether some new data is considered typical or not (i.e. an outlier or a marginal case) depends on the parameters of the trained neural network, which are random, in that they depend on the random training data. As a principled way of uncertainty quantification we will introduce the Bayesian perspective and Bayesian Neural Networks (BNNs) in the next section. We claim that these can be viewed as a more robust deep learning paradigm, which promises fruitful advances, backed up by some already existing theoretical results and empirical evidence. In relation to adversarial attacks, there have been multiple reports of successful attack detection \citep{rawat2017adversarial} and improved defenses \citep{feinman2017detecting,liu2018adv,zimmermann2019comment} when relying on BNNs. In fact, there is clear evidence of increasing prediction uncertainty with growing attack strength, indicating the usefulness of the provided uncertainty scores. On the theoretical side, it can be shown that in the (large data and overparametrized) limit BNN posteriors are robust against gradient-based adversarial attacks \citep{carbone2020robustness}.
\section{The Bayesian perspective} \label{sec: Bayesian perspective}
In the previous chapter we demonstrated and discussed the potential non-robustness of neural networks related, for example, to small changes of input data by adversarial attacks. A connected inherent problem is that neural networks usually \textit{don't know when they don't know}, meaning that there is no reliable quantification of prediction uncertainty.\footnote{Freely adapted from the infamous 2002 speech of the former U.S.~Secretary of Defense, Donald Rumsfeld: ``We [\ldots] know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.''} In this chapter we will argue that the Bayesian perspective is well suited as a principled framework for uncertainty quantification, thus holding the promise of making machine learning models more robust; see \citep{neal1995bayesian} for an overview.
We have argued that classical machine learning algorithms often act as black boxes, i.e.~without making predictions interpretable and without indicating any level of confidence. Given that all models are learnt from a finite amount of data, this seems rather naive and it is in fact desirable that algorithms should be able to indicate a degree of uncertainty whenever not `enough' data have been present during training (keeping in mind, however, that this endeavor still leaves certain aspects of interpretability, such as post-hoc explanations, open \citep[Sec.~3]{du2020posthoc}). To this end, the Bayesian credo is the following: we start with some beforehand (\textit{a priori}) given uncertainty of the prediction model $f$. In the next step, when the model is trained on data, this uncertainty will get `updated' such that predictions `close' to already seen data points become more certain. In mathematical terms, the idea is to assume a prior probability distribution $p(\theta)$ over the parameter vector $\theta$ of the prediction model rather than a fixed value as in the classical case. We then condition this distribution on the fact that we have seen a training data set $\mathcal{D} = (x_n, y_n)_{n=1}^N$.
The computation of conditional probabilities is governed by Bayes' theorem, yielding the \textit{posterior probability} $p(\theta | \mathcal{D})$, namely by \begin{equation} \label{eq: Bayes thm parameters}
p(\theta | \mathcal{D}) = \frac{p(\mathcal{D} | \theta)p(\theta)}{p(\mathcal{D})}, \end{equation}
where $p(\mathcal{D} | \theta)$ is the likelihood of seeing data $\mathcal{D}$ given the parameter vector $\theta$ and $p(\mathcal{D}) = \int_{\R^p} p(\mathcal{D}, \theta) \mathrm d\theta$ is the normalizing constant, sometimes called evidence, assuring that $p(\theta | \mathcal{D})$ is indeed a probability density. The posterior probability can be interpreted as an updated distribution over the parameters given the data $\mathcal{D}$. Assuming that we can sample from it, we can then make subsequent predictions on unseen data $x$ by \begin{equation} \label{eq: prediction from BNN}
f(x) = \int_{\R^p} f_\theta(x) p(\theta | \mathcal{D}) \mathrm d \theta
\approx \frac{1}{K} \sum_{k=1}^K f_{\theta^{(k)}}(x) \end{equation}
where $\theta^{(1)},\ldots,\theta^{(K)}$ are i.i.d.~samples drawn from the Bayesian posterior $p(\theta | \mathcal{D})$, i.e. we average predictions of multiple neural networks, each of which having parameters drawn from the posterior distribution.
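To illustrate \eqref{eq: prediction from BNN} in a setting where the posterior is available in closed form, the following NumPy sketch performs exact Bayesian linear regression and Monte Carlo-averages the sampled predictions; the model dimensions and all constants are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Prior theta ~ N(0, I/alpha); likelihood y | theta ~ N(X theta, I/beta).
n, p, alpha, beta = 50, 3, 1.0, 25.0
X = rng.standard_normal((n, p))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.standard_normal(n) / np.sqrt(beta)

Sigma = np.linalg.inv(alpha * np.eye(p) + beta * X.T @ X)  # posterior covariance
mu = beta * Sigma @ X.T @ y                                # posterior mean

K, x_new = 1000, rng.standard_normal(p)
thetas = rng.multivariate_normal(mu, Sigma, size=K)        # theta^(k) ~ p(theta|D)
preds = thetas @ x_new                                     # f_{theta^(k)}(x_new)
print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f}")
\end{verbatim}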
\begin{figure}
\caption{We display the evaluation of a BNN by showing its mean prediction function (in dark blue) and a set of two standard deviations from it (in light blue), compared to the ground truth (in gray). Our BNN is either untrained (left panel) or has seen $N=5$ (central panel) or $N=100$ (right panel) data points during training.}
\label{fig: BNN on data from sine function}
\end{figure}
\begin{example}[BNN based on different amounts of data] Let us say we want to learn the function $f(x) = \sin(2\pi x)$ and have a certain amount of training data $\mathcal{D} = (x_n, y_n)_{n=1}^N$ available, where the label is given by $y_n = f(x_n) + \eta_n$, with noise $\eta_n \sim \mathcal{N}(0, 0.01)$. Intuitively, the fitted neural network should be closer to the true function as well as more certain in its predictions the more data points are available. We consider a BNN trained with a mean field variational inference approach and a Gaussian prior on the parameters (see next section for details). In \Cref{fig: BNN on data from sine function} we display the mean prediction function as defined in \eqref{eq: prediction from BNN} as well as a confidence set defined by two standard deviations on the sample predictions. In the left panel we display the evaluation before training, i.e. without relying on any available data points, and note that the average prediction function is rather arbitrary and the uncertainty rather high, as expected. The central panel repeats the same evaluation, where now the BNN is trained on $N = 5$ data points. We can see an improved prediction function and a decreased uncertainty. Finally, the right panel displays a BNN trained on $N = 100$ data points, where now the mean prediction function is quite close to the true function and the uncertainty almost vanishes close to the data points, yet remains large wherever no training data was available. The BNN is therefore able to output reasonable uncertainty scores, depending on which data was available during training. \end{example}
\subsection{Bayesian neural networks in practice} \label{sec: BNN in practice}
Even though simple at first glance, the Bayes formula \eqref{eq: Bayes thm parameters} is non-trivial from a computational point of view and cannot, in almost all cases, be computed analytically. The challenging term is $p(\mathcal{D})$, for which, given the nested structure of neural networks, the integral has to be approximated numerically. Classical numerical integration, however, is infeasible too, due to the high dimension of the parameter vector $\theta$. We therefore have to resort to alternative approaches that aim to approximate the posterior distribution $p(\theta | \mathcal{D})$.
An asymptotically exact method for creating samples from any (suitable) probability distribution is called Hamiltonian Monte Carlo (also: Hybrid Monte Carlo), which is based on ideas from statistical physics and the observation that certain dynamical systems admit an equilibrium state that can be identified with the posterior probability that we seek to compute \citep{neal2011mcmc}. For our purposes this approach seems to be the method of choice when aiming for high approximation quality; however, it does not scale well to high dimensions and is therefore practically useless for state-of-the-art neural networks. A similar idea is to exploit the so-called Langevin dynamics in combination with subsampling of the data points \citep{welling2011bayesian,zhang2020amagold}. This method scales much better, but it is biased since the data subsampling perturbs the stationary distribution. A quite different approach is called \textit{dropout}, which builds the posterior approximation into the neural network architecture and implicitly trains multiple models at the same time \citep{gal2016dropout}. Finally, another popular method is based on variational inference, where the true posterior is approximated within a family of simpler probability densities, e.g. multidimensional Gaussians with diagonal covariance matrices \citep{blundell2015weight}. Depending on the approximation class, this method scales well, but approximation quality cannot be guaranteed.
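As an illustration of the dropout-based approximation \citep{gal2016dropout}, the following PyTorch sketch keeps dropout active at prediction time and averages $K$ stochastic forward passes; the architecture and dropout rate are our own illustrative choices.
\begin{verbatim}
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
                    nn.Linear(64, 1))

def mc_dropout_predict(net, x, K=100):
    """Average K stochastic forward passes with dropout kept active."""
    net.train()                      # .train() keeps the Dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(K)])
    return preds.mean(0), preds.std(0)   # predictive mean and uncertainty score
\end{verbatim}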
Each of the methods mentioned above has advantages and disadvantages and many questions are still open. As a general remark, there is indeed repeated evidence that, ignoring the approximation challenges for the moment, the Bayesian framework works well in principle for quantifying the prediction uncertainties of neural networks. Additionally, there are indications, based on empirical studies \citep{izmailov2021bayesian}, that the overall model performance might be improved when relying on predictions from BNNs in contrast to deterministic ANNs. On the other hand, many of the approximation steps that lead to a BNN are not well understood theoretically, and one can demonstrate empirically that they often lead to posterior approximations that are not accurate, e.g. \citep{foong2019expressiveness}. Some of those failures seem to be due to systematic simplifications in the approximating family \citep{yao2019quality}. This phenomenon gets more severe, while at the same time harder to spot, when the neural networks are large, i.e. when the parameter vector is very high-dimensional. An accepted opinion seems to be that whenever BNNs do not work well, it is not the Bayesian paradigm that is to blame, but rather the inability to approximate it well \citep{gal2018sufficient}. At the same time, however, there are works such as \citep{farquhar2020liberty} that claim that for certain neural network architectures simplified approximation structures get better the bigger (and in particular the deeper) the model is.
\subsection{Challenges and prospects for Bayesian Neural Networks} \label{sec: challenges for BNNs}
The previous section sought to argue that there is great potential in using BNNs in practice; however, many questions, both from a theoretical and practical point of view, are still open.
A natural formulation of BNNs can be based on free energy as a loss function that has been discussed in connection with a formal account of curiosity and insight in terms of Bayesian inference (see \citet{firston2017curiosity}): while the expected loss or risk in deep learning can be thought of as an energy that describes the goodness-of-fit of a trained ANN to some given data (where minimum energy amounts to an optimal fit), the free energy contains an additional entropy term that accounts for the inherent parameter uncertainty and has the effect of smoothing the energy landscape. The result is a trade-off between an accurate fit, which bears the risk of overfitting, and reduced model complexity (i.e.~Occam's razor). From the perspective of statistical inference, e.g.~\citep{freeEnergyLN}, the free energy has the property that its unique minimizer in the space of probability measures is the sought Bayesian posterior \citep{hartmann2017vari}. Selecting a BNN by free energy minimization therefore generates a model that, \emph{on average}, provides the best explanation for the data at hand, and thus it can be thought of as making an inference to the best explanation in the sense of \citet{harman1965}; cf. also \citep{mcauliffe2015}.
Evidently, the biggest challenge seems to be a computational one: how can we approximate posterior distributions of large neural networks both well and efficiently?
But even if the minimizer, i.e. the Bayesian posterior, can be approximated, the evaluation of posterior accuracy (e.g. from the shape of the free energy in the neighborhood of the minimizer) is still difficult and one usually does not have clear guarantees. Furthermore, neural networks keep getting larger, and more efficient methods that can cope with ever higher dimensionality are needed.
Regarding the benefits of BNNs, there is an open debate on how much performance gains they actually bring in practice; cf.~\citep{wenzel2020good}. Uncertainty quantification, on the other hand, is valuable enough to continue the Bayesian endeavor, eventually allowing for safety-critical applications or potentially improving active and continual learning.
\section{Concluding remarks} \label{sec: conclusion}
The recent progress in artificial intelligence is undeniable and the related improvements in various applications are impressive. This article, however, provides only a snapshot of the current state of deep learning, and we have demonstrated that many phenomena that are intimately connected are still not well understood from a theoretical point of view. We have further argued that this lack of understanding not only slows down further systematic developments of practical algorithms, but also bears risks that become particularly apparent in safety-critical applications. While inspecting deep learning from the mathematical angle, we have highlighted five perspectives that allow for a more systematic treatment, offering already some novel explanations of striking observations and bringing up valuable questions for future research (cf. \Cref{sec: probabilistic modelling}).
We have in particular emphasized the influence of the numerical methods on the performance of a trained neural network and touched upon the aspect of numerical stability, motivated by the observation that neural networks are often not robust (e.g. with respect to unexpected input data or adversarial attacks) and do not provide any reliable measure for uncertainty quantification. As a principled framework that might tackle those issues, we have presented the Bayesian paradigm and in particular Bayesian neural networks, which provide a natural way of quantifying epistemic uncertainties. In theory, BNNs promise to overcome certain robustness issues and many empirical observations are in line with this hope; however, they also bring additional computational challenges, connected mainly to the sampling of high dimensional probability distributions. The existing methods addressing this issue are neither sufficiently understood theoretically nor do they produce sufficiently good (scalable) results in practice, so that persistent usage in applications is often infeasible. We believe that the theoretical properties of BNNs (or ANNs in general) cannot be fully understood without understanding the numerical algorithms used for training and optimisation. Future research should therefore aim at improving these numerical methods in connection with rigorous approximation guarantees.
Moreover, this article argued that many of the engineering-style improvements and anecdotes related to deep learning need systematic mathematical analyses in order to foster a solid basis for artificial intelligence\footnote{This view addresses the skeptical challenge of Ali Rahimi who gave a presentation at NeurIPS Conference in 2017 with the title ``Machine learning has become alchemy''. According to Rahimi, machine learning and alchemy both work to a certain degree, but the lack of theoretical understanding and interpretability of machine learning models is a major cause for concern.}. Rigorous mathematical inspection has already led to notable achievements in recent years, and in addition to an ever enhancing handcrafting of neural network architectures, the continuation of this theoretical research will be the basis for further substantial progress in machine learning. We therefore conclude with a quote from Vladimir \citet[p.~X]{vapnik1999nature}, one of the founding fathers of modern machine learning: ``I heard reiteration of the following claim: Complex theories do not work, simple algorithms do. [...] I would like to demonstrate that in this area of science a good old principle is valid: Nothing is more practical than a good theory.''
\par
\textbf{Acknowledgements.} This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 ‘Scaling Cascades in Complex Systems’ (project A05, project number 235221301) as well as by the Energy Innovation Center (project numbers 85056897 and 03SF0693A) with funds from the Structural Development Act (Strukturstärkungsgesetz) for coal-mining regions.
\appendix
\section{Training of artificial neural networks} \label{sec:SGD}
Let $\mathcal{F}$ be the set of neural networks $f_\theta=\Phi_\sigma$ of a certain predefined topology (i.e.~with a given number of concatenated activation functions, interconnection patterns, etc.) that we want to train. Suppose we have $N$ data points $(x_1, y_1),\ldots,(x_N,y_N)$ where, for simplicity, we assume that $y_n=f(x_n)$ is deterministic. For example, we may think of every $x_n$ as having a unique label $y_n=\pm 1$.
Training an ANN amounts to solving the regression problem \[ f_\theta(x_n)\approx y_n \] for all $n=1,\ldots,N$. Specifically, we seek $\theta\in\Theta$ that minimizes the empirical risk (also: loss landscape) \[ J_N(\theta) = \frac{1}{N}\sum_{n=1}^N \ell(f_\theta(x_n),y_n) \]
over some potentially high-dimensional parameter set $\Theta$.\footnote{Recall that we call the empirical risk $J_N$ when considered as a function of parameters $\theta$ and $\mathcal{L}_N$ when considered as a function of functions.} There are few cases in which the risk minimization problem has an explicit and unique solution if the number of independent data points is large enough. One such case in which an explicit solution is available is when $f_\theta(x)=\theta^\top x$ is linear and $\ell(z,y)=|z-y|^2$ is quadratic. This is the classical linear regression problem.
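For completeness: in this linear case, provided the data matrix has full rank, the unique minimizer is given explicitly by the normal equations, \[ \widehat{\theta}_N = \Big(\sum_{n=1}^N x_n x_n^\top\Big)^{-1}\sum_{n=1}^N y_n\,x_n\,. \]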
For ANNs, an explicit solution is neither available nor unique, and an approximation to $\widehat{f}_N\approx f^*$ must be computed by a suitable iterative numerical method. One such numerical method is called gradient descent \[ \theta_{k+1} = \theta_k - \eta_k\nabla J_N(\theta_k)\,,\quad k=0,1,2,3,\ldots\,, \] where $\eta_0,\eta_1,\eta_2,\ldots$ is a sequence of step sizes, called the \emph{learning rate}, that tends to zero asymptotically. For a typical ANN and a typical loss function, the derivative (i.e. the gradient) \[ \nabla J_N(\theta) = \frac{1}{N}\sum_{n=1}^N \nabla_\theta \ell(f_\theta(x_n),y_n) \] with respect to the parameter $\theta$ can be computed by what is called \emph{backpropagation}, essentially relying on the chain rule of differential calculus; see, e.g. \citep{higham2019deep}. Since the number of training points, $N$, is typically very large, evaluating the gradient that is a sum of $N$ terms is computationally demanding; therefore, the sum over the training data is replaced by a sum over a random, usually small subsample of the training data. This means that, for fixed $\theta$, the derivative $\nabla J_N(\theta)$ is replaced by an approximation $\nabla \widehat{J}_N(\theta)$ that is random; the approximation has no systematic error, i.e. it equals the true derivative on average, but it deviates from the true derivative by a random amount (that may not even be small, but that is zero on average). As a consequence, we can rewrite our gradient descent as follows: \begin{equation}\label{SME} \theta_{k+1} = \theta_k - \eta_k\nabla \widehat{J}_N(\theta_k) + \zeta_k\,,\quad k=0,1,2,3,\ldots\,, \end{equation} where $\zeta_k$ is the random error invoked by substituting $\nabla J_N(\theta)$ with $\nabla\widehat{J}_N(\theta)$. Since $\zeta_k$ is unknown as it depends on the true derivative $\nabla J_N(\theta_k)$ at stage $k$ that cannot be easily computed, the noise term in (\ref{SME}) is ignored in the training procedure, which leads to what is called \emph{stochastic gradient descent (SGD)}: \begin{equation}\label{SGD} \theta_{k+1} = \theta_k - \eta_k\nabla \widehat{J}_N(\theta_k)\,,\quad k=0,1,2,3,\ldots\,. \end{equation} Since the right hand side in (\ref{SGD}) is random by virtue of the randomly chosen subsample that is used to approximate the true gradient, the outcome of the SGD algorithm after, say, $t$ iterations will always be random.
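For concreteness, a minimal NumPy implementation of the recursion (\ref{SGD}) for the linear regression problem above reads as follows; the batch size and the learning-rate schedule are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 5
X = rng.standard_normal((N, p))
theta_true = rng.standard_normal(p)
y = X @ theta_true + 0.1 * rng.standard_normal(N)

theta, N_b = np.zeros(p), 32                       # batch size N_b << N
for k in range(5000):
    idx = rng.integers(0, N, N_b)                  # random subsample of the data
    grad = 2 / N_b * X[idx].T @ (X[idx] @ theta - y[idx])  # random gradient estimate
    eta_k = 1e-2 / (1 + 1e-3 * k)                  # decreasing learning rate
    theta -= eta_k * grad
print(np.linalg.norm(theta - theta_true))          # close to, but not exactly, zero
\end{verbatim}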
As a consequence, training an ANN for given data and for a fixed number of training steps, $t$, multiple times will never produce the same regression function $f_\theta$, but instead a random collection of regression functions. This justifies the idea of a trained neural network as a probability distribution $Q(f(t))=Q(f_{\theta(t)})$ rather than a unique function $f(t)=f_{\theta(t)}$ that represents its random state after $t$ training steps.
We should stress that, typically, SGD does not converge to the optimal solution (if it converges at all), but rather finds a suboptimal local optimum (if any). From the perspective of mathematical optimization, it is one of the big mysteries of deep learning that, despite being only a random and suboptimal solution, the predictions made by the resulting trained network are often surprisingly good \cite[Sec.~1.3]{berner2021modern}.
In trying to reveal the origin of this phenomenon, SGD has been analyzed using asymptotic arguments, e.g.~\citep{weinan2017sme,weinan2019sme,mandt2015continuous}. These methods rely on limit theorems, e.g.~\citep{kushner2003stochastic}, to approximate the random noise term in (\ref{SME}), and they are suitable to understand the performance in the large data setting. However, they are unable to address the case of finite, not to mention sparse, training data. Recently, the finite data situation has been analyzed using backward error analysis, and there is empirical evidence that SGD incorporates an implicit regularization which favors shallow minimization paths that lead to broader minima and (hence) to more robust ANNs \citep{barrett2020implicit,smith2021origin,soudry2018implicit}.
\section{Optimal prediction and Bayes classifier}\label{sec:bayesopt}
For prediction tasks, when the ANN is supposed to predict a quantity $y\in\R$ based on an input $x\in\R^d$, the generalization error is typically measured in the sense of the mean square error (MSE), with the quadratic loss \[ \ell(f(x),y) = (f(x)-y)^2\,. \] Let \[ \mathrm{sgn}(z) = \begin{cases} 1\,, & z > 0 \\ 0 \,, & z=0 \\ -1\,, & z<0\, \end{cases} \] be the sign function. Then, for binary classification tasks with $y\in\{-1,1\}$ and a classifier $f(x)=\mathrm{sgn}(h(x))$ for some function $h\colon\R^d\to\R$, the quadratic loss reduces to what is called the 0-1 loss (up to a multiplicative constant): \[ \frac{1}{4}\ell(f(x),y) = \mathbf{1}_{(-\infty,0]}(yh(x)) = \begin{cases} 0\,, & f(x)=y\\ 1\,, & \textrm{else}\,. \end{cases} \] In this case $\mathcal{L}(f) = \mathbb{P}(Y\neq f(X))$ is simply the probability of misclassification. We define the \emph{regression function} \[
g(x) = \E[Y|X=x] \] to be the conditional expectation of $Y$ given the observation $X=x$. Then, using the properties of the conditional expectation, the MSE can be decomposed in a Pythagorean type fashion as \begin{align*}
\E[(f(X)-Y)^2] & =\E[(f(X)-g(X) + g(X) - Y)^2] \\
& = \E[(f(X)-g(X))^2] + 2\E[(f(X)-g(X))(g(X) - Y)] + \E[(g(X) - Y)^2]\\
& = \E[(f(X)-g(X))^2] + \E[(g(X) - Y)^2]\,. \end{align*} The cross-term disappears since, by the tower property of the conditional expectation, \begin{align*}
\E[(f(X)-g(X))(g(X) - Y)] & = \E[\E[(f(X)-g(X))(g(X) - Y)|X]]\\
& = \E[\E[(f(X)-g(X))g(X)|X]] - \E[\E[(f(X)-g(X))Y|X]]\\
& = \E[(f(X)-g(X))g(X)] - \E[(f(X)-g(X))\E[Y|X]]\\
& = 0 \,. \end{align*} As a consequence, we have for all functions $f$: \[ \mathcal{L}(f) = \E[(f(X)-g(X))^2] + \E[(g(X) - Y)^2] \ge \E[(g(X) - Y)^2] \] where equality is attained if and only if $f=g$. The findings can be summarized in the following two statements that hold with probability one:\footnote{If a statement is said to hold \emph{with probability one} or \emph{almost surely}, this means that it is true upon ignoring events of probability zero.} \begin{itemize}
\item[(1)] The regression function is the (almost surely unique) minimizer of the MSE, i.e. we have $g=f^B$ with
\[
f^B \in \argmin_{f \in \mathcal{M}(\mathcal{X}, \mathcal{Y})} \E[(f(X)-Y)^2]\,.
\]
\item[(2)] The MSE can be decomposed as
\[
\mathcal{L}(f) = \E[(f(X)-\E[Y|X])^2] + \mathcal{L}^*\,,
\]
where the \emph{Bayes risk} $\mathcal{L}^*=\mathcal{L}(f^B)$ measures the variance of $Y$ for given $X=x$ around its \emph{optimal prediction}
\[f^B(x)=\E[Y|X=x]\,.\] \end{itemize}
The reasoning carries over to the classification task with $Y\in\{-1,1\}$, in which case \[g(x)=\mathbb{P}(Y=1|X=x)-\mathbb{P}(Y=-1|X=x)\] and the \emph{optimal classifier} or \emph{Bayes classifier} can be shown to be \[ f^B(x) = \mathrm{sgn}(g(x)) = \begin{cases}
1\,, & \mathbb{P}(Y=1|X=x) > \mathbb{P}(Y=-1|X=x)\\ -1\,, & \mathbb{P}(Y=1|X=x) < \mathbb{P}(Y=-1|X=x)\,. \end{cases} \]
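A simple sanity check of these statements: for the mixture model $\mathbb{P}(Y=\pm 1)=\tfrac12$ and $X\,|\,Y=y\sim\mathcal{N}(y,1)$, the Bayes classifier is $f^B(x)=\mathrm{sgn}(x)$ and the Bayes risk equals $\Phi(-1)\approx 0.159$; the following NumPy snippet confirms this empirically.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Mixture model: Y = +/-1 with probability 1/2 each, X | Y=y ~ N(y, 1).
n = 200_000
y = rng.choice([-1, 1], size=n)
x = y + rng.standard_normal(n)
emp_risk = np.mean(np.sign(x) != y)     # empirical risk of f^B(x) = sgn(x)
print(emp_risk, norm.cdf(-1.0))         # both close to the Bayes risk ~ 0.159
\end{verbatim}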
\end{document} | arXiv |
Analysis of stochastic Lanczos quadrature for spectrum approximation
Tyler Chen, Thomas Trogdon, Shashanka Ubaru
The cumulative empirical spectral measure (CESM) $\Phi[\mathbf{A}] : \mathbb{R} \to [0,1]$ of an $n\times n$ symmetric matrix $\mathbf{A}$ is defined as the fraction of eigenvalues of $\mathbf{A}$ less than a given threshold, i.e., $\Phi[\mathbf{A}](x) := \sum_{i=1}^{n} \frac{1}{n} {\large\unicode{x1D7D9}}[ \lambda_i[\mathbf{A}]\leq x]$. Spectral sums $\operatorname{tr}(f[\mathbf{A}])$ can be computed as the Riemann–Stieltjes integral of $f$ against $\Phi[\mathbf{A}]$, so the task of estimating CESM arises frequently in a number of applications, including machine learning. We present an error analysis for stochastic Lanczos quadrature (SLQ). We show that SLQ obtains an approximation to the CESM within a Wasserstein distance of $t \: | \lambda_{\text{max}}[\mathbf{A}] - \lambda_{\text{min}}[\mathbf{A}] |$ with probability at least $1-\eta$, by applying the Lanczos algorithm for $\lceil 12 t^{-1} + \frac{1}{2} \rceil$ iterations to $\lceil 4 ( n+2 )^{-1}t^{-2} \ln(2n\eta^{-1}) \rceil$ vectors sampled independently and uniformly from the unit sphere. We additionally provide (matrix-dependent) a posteriori error bounds for the Wasserstein and Kolmogorov–Smirnov distances between the output of this algorithm and the true CESM. The quality of our bounds is demonstrated using numerical experiments.
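For illustration, the following is a compact NumPy sketch of the procedure analyzed in the paper: Lanczos applied to random test vectors, with Gauss quadrature nodes and weights obtained from the eigendecomposition of the tridiagonal matrix, averaged into an approximate CESM. It is only a sketch under our own simplifications (Gaussian test vectors, which match uniform sphere samples after Lanczos' normalization; full reorthogonalization; no breakdown handling; illustrative parameter choices).

import numpy as np

def lanczos(A, v, k):
    """k Lanczos steps; returns the tridiagonal coefficients (alpha, beta)."""
    n = len(v)
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)                # no breakdown handling
        Q[:, j + 1] = w / beta[j]
    return alpha, beta[:k - 1]

def slq(A, k=20, n_vec=10, seed=0):
    """Quadrature nodes/weights whose weighted CDF approximates the CESM."""
    rng = np.random.default_rng(seed)
    nodes, weights = [], []
    for _ in range(n_vec):
        a, b = lanczos(A, rng.standard_normal(A.shape[0]), k)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        theta, S = np.linalg.eigh(T)
        nodes.append(theta)
        weights.append(S[0, :] ** 2 / n_vec)       # Gauss quadrature weights
    return np.concatenate(nodes), np.concatenate(weights)

# demo: approximate CESM of a random symmetric matrix at x = 0
n = 300
M = np.random.default_rng(1).standard_normal((n, n))
A = (M + M.T) / np.sqrt(2 * n)
nodes, weights = slq(A)
print(weights[nodes <= 0].sum(), np.mean(np.linalg.eigvalsh(A) <= 0))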
Chen, T., Trogdon, T. & Ubaru, S. (2021). Analysis of stochastic Lanczos quadrature for spectrum approximation. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1728-1739. Available from https://proceedings.mlr.press/v139/chen21s.html.
Eureka Math Algebra 2 Module 3 End of Module Assessment Answer Key
May 7, 2021 / By Prasanna
Engage NY Eureka Math Algebra 2 Module 3 End of Module Assessment Answer Key
For parts (a) to (c),
Sketch the graph of each pair of functions on the same coordinate axes showing end behavior and intercepts, and
Describe the graph of g as a series of transformations of the graph of f.
a. f(x) = \(2^{x}\) and g(x) = \(2^{-x}\) + 3
The graph of g is the graph of f reflected across the y-axis and translated vertically 3 units.
b. f(x) = \(3^{x}\) and g(x) = \(9^{x-2}\)
Answers will vary. For example, because \(9^{x-2} = 3^{2(x-2)}\), the graph of g is the graph of f scaled horizontally by a factor of \(\frac{1}{2}\) and translated horizontally by 2 units to the right.
c. f(x) = \(\log_{2}(x)\) and g(x) = \(\log_{2}((x-1)^{3})\)
Most likely answer: Because \(\log_{2}((x-1)^{3}) = 3\log_{2}(x-1)\), the graph of g is the graph of f scaled vertically by a factor of 3 and translated horizontally by 1 unit to the right.
Consider the graph of f(x) = \(8^{x}\). Let g(x) = f(\(\frac{1}{3}\)x + \(\frac{2}{3}\)) and h(x) = 4f(\(\frac{x}{3}\)).
a. Describe the graphs of g and h as transformations of the graph of f.
The graph of g is the graph of f with a horizontal scaling by a factor of 3 and a horizontal translation 2 units to the left. The graph of h is the graph of f scaled vertically by a factor of 4 and horizontally by a factor of 3.
b. Use the properties of exponents to show why the graphs of the functions g(x) = f(\(\frac{1}{3}\)x + \(\frac{2}{3}\)) and h(x) = 4f(\(\frac{x}{3}\)) are the same.
Since g(x) = \(8^{\frac{x}{3}+\frac{2}{3}}\) and h(x) = \(4 \cdot 8^{\frac{x}{3}}\), and \(8^{\frac{x}{3}+\frac{2}{3}} = 8^{\frac{2}{3}} \cdot 8^{\frac{x}{3}} = 4 \cdot 8^{\frac{x}{3}}\) for any real number x, and their domains are the same, the functions g and h are equivalent. Therefore, they have identical graphs.
The graphs of the functions f(x) = ln(x) and g(x) = \(\log_{2}(x)\) are shown to the right.
a. Which curve is the graph of f, and which curve is the graph of g? Explain.
The blue curve on top is the graph of g, and the green curve on the bottom is the graph of f. The two functions can be compared by converting both of them to the same base. Since f(x) = ln(x), and g(x) = \(\frac{1}{\ln (2)}\) . ln(x), and 1 < \(\frac{1}{\ln (2)}\), the graph of g is a vertical stretch of the graph of f; thus, the graph of g is the blue curve.
b. Describe the graph of g as a transformation of the graph of f.
The graph of g is a vertical scaling of the graph of f by a scale factor greater than one.
c. By what factor has the graph of f been scaled vertically to produce the graph of g? Explain how you know.
By the change of base formula, g(x) = \(\frac{1}{\ln (2)}\) . ln(x); thus, the graph of g is a vertical scaling of the graph of f by a scale factor of \(\frac{1}{\ln (2)}\).
Gwyneth is conducting an experiment. She rolls 1,000 dice simultaneously and removes any that have a six showing. She then rerolls all of the dice that remain and again removes any that have a six showing. Gwyneth does this over and over again—rerolling the remaining dice and then removing those that land with a six showing.
a. Write an exponential function f of the form f(n) = \(a \cdot b^{cn}\) for any real number n ≥ 0 that could be used to model the average number of dice she could expect on the nth roll if she ran her experiment a large number of times.
f(n) = 1000\(\left(\frac{5}{6}\right)^{n}\)
b. Gwyneth computed f(12) = 112.15… using the function f. How should she interpret the number 112.15… in the context of the experiment?
The value f(12) = 112.15 means that if she was to run her experiment over and over again, the model predicts that the average number of dice left after the 12th roll would be approximately 112.15.
c. Explain the meaning of the parameters in your function f in terms of this experiment.
The number 1,000 represents the initial amount of dice. The number \(\frac{5}{6}\) represents the
fraction of dice remaining from the previous roll each time she rolls the dice; there is a \(\frac{5}{6}\) probability that any die will not land on a 6, so we would predict that after each roll about \(\frac{5}{6}\) of the dice would remain.
d. Describe in words the key features of the graph of the function f for n ≥ 0. Be sure to describe where the function is increasing or decreasing, where it has maximums and minimums (if they exist), and the end behavior.
This function is decreasing for all n ≥ 0. The maximum is the starting number of dice, 1,000. There is no minimum. The graph decreases at a decreasing rate, with f(n) → 0 as n → ∞.
e. According to the model, on which roll does Gwyneth expect, on average, to find herself with only one die remaining? Write and solve an equation to support your answer to this question.
1000\(\left(\frac{5}{6}\right)^{n}\) = 1
\(\left(\frac{5}{6}\right)^{n} = \frac{1}{1000}\)
\(\log\left(\left(\frac{5}{6}\right)^{n}\right) = \log\left(\frac{1}{1000}\right)\)
\(n \log\left(\frac{5}{6}\right) = -3\)
\(n = -\frac{3}{\log\left(\frac{5}{6}\right)}\)
n ≈ 37.89
According to the model, Gwyneth should have 1 die remaining, on average, on the 38th roll.
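These two answers can be spot-checked with a short Python sketch (an editorial addition, not part of the answer key): compute the roll at which the model predicts one die remains, and compare the model's value for the 12th roll against a direct simulation of the experiment.

```python
import math
import random

f = lambda n: 1000 * (5 / 6) ** n        # model: expected dice after n rolls

# Part (e): solve 1000(5/6)^n = 1, i.e. n = -3 / log(5/6).
n_star = -3 / math.log10(5 / 6)
print(round(n_star, 2))                   # ~37.89, so the 38th roll

# Part (b): Monte Carlo check of f(12) ~ 112.15.
def remaining_after(n_rolls, dice=1000):
    count = dice
    for _ in range(n_rolls):
        # each remaining die shows a six with probability 1/6 and is removed
        count -= sum(random.random() < 1 / 6 for _ in range(count))
    return count

avg = sum(remaining_after(12) for _ in range(500)) / 500
print(round(avg, 2), round(f(12), 2))     # both near 112
```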
f. For all of the values in the domain of f, is there any value for which f will predict an average number of 0 dice remaining? Explain why or why not. Be sure to use the domain of the function and the graph to support your reasoning.
The graph of this function shows the end behavior approaching 0 as the number of rolls increases. This function will never predict an average number of 0 dice remaining because a function of the form f(n) = \(a \cdot b^{cn}\) never takes on the value of zero. Even as the number of trials becomes very large, the value of f will be a positive number.
Suppose the table below represents the results of one trial of Gwyneth's experiment.
g. Let g be the function that is defined exactly by the data in the table (i.e., g(0) = 1000, g(1) = 840, g(2) = 692, and so forth, up to g(28) = 0). Describe in words how the graph of g looks different from the graph off. Be sure to use the domain of g and the domain of f to justify your description.
The domain of g is the set of integers from 0 to 28. The graph of g is discrete (a set of unconnected points), while the graph of f is continuous (a curve). The graph of g has an x-intercept at (28, 0), and the graph of f has no x-intercept. The graphs of both functions decrease, but the graph of f is exponential, which means that the ratio \(\frac{f(x+1)}{f(x)}=\frac{5}{6}\) for all x in the domain of f. The function values for g do not have this property.
h. Gwyneth runs her experiment hundreds of times, and each time she generates a table like the one in part (f). How are these tables similar to the function f? How are they different?
The data in each table will follow the same basic pattern as f (i.e., decreasing from 1,000 eventually to 0 with a common ratio between rolls of about \(\frac{5}{6}\)).
Different:
(1) Each table has a discrete domain and an integer range. (2) Each table will eventually reach 0 after some finite number of rolls, whereas the function f can never take on the value of 0.
Find the inverse g for each function f.
a. f(x) = \(\frac{1}{2}\)x – 3
y = \(\frac{1}{2}\)x – 3
x = \(\frac{1}{2}\)y – 3
x + 3 = \(\frac{1}{2}\)y
y = 2x + 6
g(x) = 2x + 6
b. f(x) = \(\frac{x+3}{x-2}\)
y = \(\frac{x+3}{x-2}\)
x = \(\frac{y+3}{y-2}\)
xy -2x = y + 3
y(x – 1) = 2x + 3
y = \(\frac{2 x+3}{x-1}\)
g(x) = \(\frac{2 x+3}{x-1}\)
c. f(x) = \(2^{3x} + 1\)
y = \(2^{3x} + 1\)
x = \(2^{3y} + 1\)
x – 1 = \(2^{3y}\)
\(\log_{2}(x - 1) = 3y\)
y = \(\frac{1}{3}\log_{2}(x - 1)\)
g(x) = \(\frac{1}{3}\log_{2}(x - 1)\)
d. f(x) = \(e^{x-3}\)
y = \(e^{x-3}\)
x = \(e^{y-3}\)
ln(x) = y – 3
y = ln(x) + 3
g(x) = ln(x) + 3
e. f(x) = log(2x + 3)
y = log(2x + 3)
x = log(2y + 3)
2y + 3 = \(10^{x}\)
y = \(\frac{1}{2} \cdot 10^{x} - \frac{3}{2}\)
g(x) = \(\frac{1}{2} \cdot 10^{x} - \frac{3}{2}\)
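All five inverses can be verified numerically; the following Python sketch (an editorial addition, not part of the answer key) checks that g(f(x)) = x and f(g(x)) = x at a sample point lying in every domain involved.

```python
import math

# (f, g) pairs from parts (a) through (e)
pairs = [
    (lambda x: 0.5 * x - 3,            lambda x: 2 * x + 6),
    (lambda x: (x + 3) / (x - 2),      lambda x: (2 * x + 3) / (x - 1)),
    (lambda x: 2 ** (3 * x) + 1,       lambda x: math.log2(x - 1) / 3),
    (lambda x: math.exp(x - 3),        lambda x: math.log(x) + 3),
    (lambda x: math.log10(2 * x + 3),  lambda x: 0.5 * 10 ** x - 1.5),
]
x = 4.0                                # in the domain of every function above
for f, g in pairs:
    assert abs(g(f(x)) - x) < 1e-9 and abs(f(g(x)) - x) < 1e-9
print("all inverse pairs check out at x =", x)
```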
Dani has $1,000 in an investment account that earns 3% per year, compounded monthly.
a. Write a recursive sequence for the amount of money in her account after n months.
\(a_{1} = 1000\), \(a_{n+1} = a_{n}(1.0025)\)
b. Write an explicit formula for the amount of money in the account after n months.
\(a_{n} = 1000(1.0025)^{n}\)
c. Write an explicit formula for the amount of money in her account after t years.
b(t) = \(1000(1.0025)^{12t}\)
d. Boris also has $1,000, but in an account that earns 3% per year, compounded yearly. Write an explicit formula for the amount of money in his account after t years.
V(t) = \(1000(1.03)^{t}\)
e. Boris claims that the equivalent monthly interest rate for his account would be the same as Dani's. Use the expression you wrote in part (d) and the properties of exponents to show why Boris is incorrect.
Boris is incorrect because the formula for the amount of money in the account, based on a monthly rate, is given by V(t) = \(1000(1+i)^{12t}\), where i is the monthly interest rate. The expression from part (d), \(1000(1.03)^{t}\), is equivalent to \(1000(1.03)^{\frac{12t}{12}} = 1000(1.03)^{\frac{1}{12} \cdot 12t} = 1000\left((1.03)^{\frac{1}{12}}\right)^{12t}\) by properties of exponents. Therefore, if the expressions must be equivalent, the quantity \((1.03)^{\frac{1}{12}} \approx 1.00247\) means that his monthly rate is about 0.247%, which is less than the 0.25% given by Dani's formula.
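A one-line computation confirms the comparison (an editorial check, not part of the answer key):

```python
# Boris's equivalent monthly growth rate under 3% compounded yearly
monthly_equiv = 1.03 ** (1 / 12) - 1
print(round(monthly_equiv, 5))   # 0.00247, less than Dani's monthly rate of 0.0025
```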
Show that \(\sum_{k=0}^{n-1} a r^{k} = a\left(\frac{1-r^{n}}{1-r}\right)\) where r ≠ 1.
We recall the identity \(1 - r^{n} = (1-r)(1 + r + r^{2} + r^{3} + \cdots + r^{n-1})\) with r ≠ 1 and positive integer n. Dividing both sides of the identity by 1 – r gives
\(\frac{1-r^{n}}{1-r} = 1 + r + r^{2} + r^{3} + \cdots + r^{n-1}\)
Therefore, by factoring out the common factor a and substituting, we get the formula
\(a + ar + ar^{2} + \cdots + ar^{n-1} = a(1 + r + r^{2} + \cdots + r^{n-1})\)
= \(a\left(\frac{1-r^{n}}{1-r}\right)\)
Sami opens an account and deposits $100 into it at the end of each month. The account earns 2% per year compounded monthly. Let \(S_{n}\) denote the amount of money in her account at the end of n months (just after she makes a deposit). For example, \(S_{1}\) = 100 and \(S_{2}\) = 100(1 + \(\frac{0.02}{12}\)) + 100.
a. Write a geometric series for the amount of money in the account after 3, 4, and 5 months.
b. Find a recursive description for Sn.
\(S_{1} = 100\), \(S_{n} = S_{n-1}\left(1 + \frac{0.02}{12}\right) + 100\)
c. Find an explicit function for Sn, and use it to find S12.
\(S_{n} = \frac{100\left(1-\left(1+\frac{0.02}{12}\right)^{n}\right)}{1-\left(1+\frac{0.02}{12}\right)}\)
\(S_{12}\) ≈ 1211.06
d. When will Sami have at least $5,000 in her account? Show work to support your answer.
An algebraic solution is shown, but students could also solve this equation numerically.
The solution is approximately 48.07. At the end of the 49th month, she will have over $5000 in her account. It will take about 4 years and 2 days.
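Both answers can be reproduced with a short Python sketch (an editorial addition):

```python
import math

q = 1 + 0.02 / 12                              # monthly growth factor
S = lambda n: 100 * (1 - q ** n) / (1 - q)     # explicit formula from part (c)
print(round(S(12), 2))                         # ~1211.06

# Part (d): solve S(n) = 5000 for n, i.e. q^n = 1 - 5000(1 - q)/100.
n = math.log(1 - 5000 * (1 - q) / 100, q)
print(round(n, 2))                             # ~48.07, so the end of the 49th month
```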
Beatrice decides to deposit $100 per month at the end of every month in a bank with an annual interest rate of 5.5% compounded monthly.
a. Write a geometric series to show how much she will accumulate in her account after one year.
100\(\left(1+\frac{0.055}{12}\right)^{11}\) + 100\(\left(1+\frac{0.055}{12}\right)^{10}\) + … + 100\(\left(1+\frac{0.055}{12}\right)^{1}\) + 100
b. Use the formula for the sum of a geometric series to calculate how much she will have in the bank after five years if she keeps on investing $100 per month.
\(100\left(\frac{1-\left(1+\frac{0.055}{12}\right)^{60}}{1-\left(1+\frac{0.055}{12}\right)}\right)\) ≈ 6888.08
She will have approximately $6,888.08 in her account.
Nina has just taken out a car loan for $12,000. She will pay an annual interest rate of 3% through a series of monthly payments for 60 months, which she pays at the end of each month. The amount of money she has left to pay on the loan at the end of the nth month can be modeled by the function
f(n) = 86248 – 74248\((1.0025)^{n}\) for 0 ≤ n ≤ 60.
At the same time as her first payment (at the end of the first month), Nina placed $100 into a separate investment account that earns 6% per year compounded monthly. She placed $100 into the account at the end of each month thereafter. The amount of money in her savings account at the end of the nth month can be modeled by the function g(n) = 20000\((1.005)^{n}\) – 20000 for n ≥ 0.
a. Use the functions f and g to write an equation whose solution could be used to determine when Nina will have saved enough money to pay off the remaining balance on her car loan.
86248 – 74248\((1.0025)^{n}\) = 20000\((1.005)^{n}\) – 20000
b. Use a calculator or computer to graph f and g on the same coordinate plane. Sketch the graphs below, labeling intercepts and indicating end behavior on the sketch. Include the coordinates of any intersection points.
c. How would you interpret the end behavior of each function in the context of this situation?
For g, the end behavior as x → ∞ indicates that the amount in the account will continue to grow over time at an exponential rate. For f, the end behavior as x → ∞ is not meaningful in this situation since the loan will be paid off after 60 months.
d. What does the intersection point mean in the context of this situation? Explain how you know.
The intersection point indicates the time when the amount left on Nina's loan is approximately equal to the amount she has saved. It is only approximate because the actual amounts are compounded/paid/deposited at the end of the month.
e. After how many months will Nina have enough money saved to pay off her car loan? Explain how you know.
After 39.34 months, or at the end of the 40th month, she will have enough money saved to pay off the loan completely. The amount in her savings account at the end of the 40th month will be slightly more than $4,336, and the amount she has left on her loan will be slightly less than $4,336.
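The intersection can be located numerically with a few lines of Python (an editorial check of the answer above):

```python
f = lambda n: 86248 - 74248 * 1.0025 ** n      # remaining loan balance
g = lambda n: 20000 * 1.005 ** n - 20000       # savings balance

n = 0
while g(n) < f(n):                             # first month where savings cover the loan
    n += 1
print(n, round(f(n), 2), round(g(n), 2))       # 40, with the crossing near n = 39.3
```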
Each function below models the growth of three different trees of different ages over a fixed time interval.
Tree A:
f(t) = \(15(1.69)^{\frac{t}{2}}\), where t is time in years since the tree was 15 feet tall, f(t) is the height of the tree in feet, and 0 ≤ t ≤ 4.
Tree B:
[Table of heights over time; the values referenced below include g(0) = 5 and g(4) = 10.2, with constant first differences.]
Tree C:
The graph of h is shown, where t is years since the tree was 5 feet tall, and h(t) is the height in feet after t years.
a. Classify each function f, g, and h as linear or nonlinear. Justify your answers.
The function f is exponential because it is of the form f(t) = \(ab^{ct}\), so f is nonlinear. The table for function g appears to represent a linear function because its first differences are constant. The function h is nonlinear and appears to be an exponential function with common ratio 1.5.
b. Use the properties of exponents to show that Tree A has a percent rate of change of growth of 30% per year.
The percent rate of change for f is 30% because \((1.69)^{\frac{t}{2}} = \left((1.69)^{\frac{1}{2}}\right)^{t} = (1.3)^{t} = (1 + 0.3)^{t}\).
c. Which tree, A or C, has the greatest percent rate of change of growth? Justify your answer.
The percent rate of change for f is 30% by part (b). The percent rate of change for h is 50% because the common ratio is \(\frac{7.5}{5}=\frac{11.25}{7.5}\) = 1.5. Tree C has the greatest percent rate of change of growth.
d. Which function has the greatest average rate of change over the interval [0,4], and what does that mean in terms of tree heights?
The average rate of change of f on [0,4] is \(\frac{15(1.69)^{\frac{4}{2}}-15}{4}\) ≈ 7. The average rate of change of g on the interval [0,4] is \(\frac{10.2-5}{4}\) = 1.3. The average rate of change of h, estimating from the graph, is approximately \(\frac{25.5-5}{4}\) = 5.125. The function with the greatest average rate of change is f, which means that over that four-year period, Tree A grew more feet per year on average than the other two trees.
e. Write formulas for functions g and h, and use them to confirm your answer to part(c).
The functions are given by g(t) = 5 + 1.3t and h(t) = \(5\left(\frac{3}{2}\right)^{t}\). The average rate of change for g can be computed as \(\frac{g(4)-g(0)}{4}=\frac{10.2-5}{4}\) = 1.3, and the average rate of change for h on the interval [0,4] is \(\frac{h(4)-h(0)}{4}=\frac{25.3125-5}{4}\) = 5.078125.
f. For the exponential models, if the average rate of change of one function over the interval [0,4] is greater than the average rate of change of another function on the same interval, is the percent rate of change also greater? Why or why not?
No. The average rate of change of f is greater than that of h over the interval of [0,4], but the percent rate of change of h is greater. The average rate of change is the rate of change over a specific interval, which varies with the interval chosen. The percent rate of change of an exponential function is the percent increase or decrease between the value of the function at x and the value of the function at x + 1 for any real number x; the percent rate of change is constant for an exponential function.
Identify which functions are exponential. For the functions that are exponential, use the properties of exponents to identify the percent rate of change, and classify the functions as indicating exponential growth or decay.
a. f(x) = \(3(1 - 0.4)^{-x}\)
Since \((1 - 0.4)^{-x} = \left((0.6)^{-1}\right)^{x} = \left(\left(\frac{6}{10}\right)^{-1}\right)^{x} = \left(\frac{10}{6}\right)^{x} \approx (1.667)^{x}\), the percent rate of change for f is approximately 67%, which indicates exponential growth.
b. g(x) = \(\frac{3}{4^{x}}\)
Since \(\frac{3}{4^{x}} = 3\left(\frac{1}{4}\right)^{x} = 3(0.25)^{x} = 3(1 - 0.75)^{x}\), the percent rate of change for g is -75%, which indicates exponential decay.
c. k(x) = \(3x^{0.4}\)
Since \(3x^{0.4} = 3x^{\frac{4}{10}} = 3\sqrt[10]{x^{4}}\), this is not an exponential function.
d. h(x) = \(3^{\frac{x}{4}}\)
Since \(3^{\frac{x}{4}} = \left(3^{\frac{1}{4}}\right)^{x} \approx (1.316)^{x}\), the percent growth rate is approximately 31.6%, which indicates exponential growth.
A patient in a hospital needs to maintain a certain amount of a medication in her bloodstream to fight an infection. Suppose the initial dosage is 10 mg, and the patient is given an additional maintenance dose of 4 mg every hour. Assume that the amount of medication in the bloodstream is reduced by 25% every hour.
a. Write a function for the amount of the initial dosage that remains in the bloodstream after n hours.
b(n) = \(10(0.75)^{n}\)
b. Complete the table below to track the amount of medication from the maintenance dose in the patient's bloodstream for the first five hours.
c. Write a function that models the total amount of medication in the bloodstream after n hours.
d(n) = \(10(0.75)^{n} + 4\left(\frac{1-0.75^{n}}{1-0.75}\right)\)
d. Use a calculator to graph the function you wrote in part (c). According to the graph, will there ever be more than 16 mg of the medication present in the patient's bloodstream after each dose is administered?
According to the graph of the function in part (c), the amount of the medication approaches 16 mg as n → ∞ but never reaches or exceeds it.
e. Rewrite this function as the difference of two functions (one a constant function and the other an exponential function), and use that difference to justify why the amount of medication in the patient's bloodstream will not exceed 16 mg after each dose is administered.
Rewriting this function gives d(n) = 16 – \(6(0.75)^{n}\). Thus, the function is the difference between a constant function and an exponential function that is approaching a value of 0 as n increases. The amount of the medication will always be 16 reduced by an ever-decreasing quantity.
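The equivalence of the two forms of d, and the 16 mg bound, can be checked numerically (an editorial addition, not part of the answer key):

```python
d_sum = lambda n: 10 * 0.75 ** n + 4 * (1 - 0.75 ** n) / (1 - 0.75)   # part (c)
d_diff = lambda n: 16 - 6 * 0.75 ** n                                  # part (e)
for n in (1, 5, 10, 24):
    print(n, round(d_sum(n), 4), round(d_diff(n), 4))   # identical, always below 16
```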
\begin{definition}[Definition:Series (Electronics)]
Two or more components in an electrical circuit are connected in '''series''' {{iff}} the current through them flows along a single path through the circuit.
{{finish|Needs to be formalised, with the concept of an electrical circuit being mapped by means of a graph. This will need plenty of work.}}
Category:Definitions/Electronics
\end{definition}
Han, X.Y.; Xu, Z.R.; Wang, Y.Z.; Tao, X.; Li, W.F.
This experiment was conducted to investigate the effect of dietary cadmium levels on weight gain, nutrient digestibility and the retention of iron, copper and zinc in tissues of growing pigs. A total of one hundred and ninety-two crossbred pigs (barrows, Duroc$\times$Landrace$\times$Yorkshire, 27.67$\pm$1.33 kg average initial body weight) were randomly allotted to four treatments. Each treatment had three replicates with 16 pigs per pen. The corn-soybean basal diets were supplemented with 0, 0.5, 5.0 and 10.0 mg/kg cadmium, respectively, and the feeding experiment lasted for eighty-three days. Cadmium chloride was used as the cadmium source. The results showed that pigs fed the diet containing 10.0 mg/kg cadmium had lower ADG and FCR than any other treatment (p<0.05). Apparent digestibility of protein in the 10.0 mg/kg cadmium-treated group was lower than that of the other groups (p<0.05). There was lower iron retention in some tissues of the 5.0 mg/kg and 10.0 mg/kg cadmium treatments (p<0.05). However, pigs fed the diet containing 10.0 mg/kg cadmium had higher copper content in most tissues than any other group (p<0.05). There was a significant increase in zinc retention in the kidney of the 10.0 mg/kg cadmium group (p<0.05), and zinc concentrations in the lymphaden, pancreas and heart of the 10.0 mg/kg cadmium treatment were lower than those of the control (p<0.05). This study indicated that a relatively high cadmium level (10.0 mg/kg) could decrease pig growth performance and change the retention of iron, copper and zinc in most tissues during an extended cadmium exposure period.
What exactly is pumping length in pumping lemma?
Pumping Lemma: For any regular language $\mathbb{L}$, there exists an integer $n$ such that for all $x\in \mathbb{L}$ with $|x|\geq n$, there exist $u, v, w \in \Sigma^*$ such that $x = uvw$, and
$|uv| \leq n$
$|v| > 0$
$uv^iw \in \mathbb{L}$ for all $i \geq 0$
My question is, what exactly is the "Pumping Length" $n$ here? While proving that a language is regular or non-regular, do I have to assume the value of $n$, or do I have to calculate it? If so, how do I calculate it?
regular-languages
finite-automata
regular-expressions
pumping-lemma
edited May 7, 2022 at 9:55
asked May 7, 2022 at 4:07
Pratik Hadawale
The pumping length $n$ must be assumed to be arbitrary - you can't fix it to be a particular value. The pumping lemma is used to prove that a given language is nonregular, and it is a proof by contradiction.
The idea behind proofs that use the pumping lemma is as follows. To prove a given language $L$ is nonregular, by way of contradiction, assume that $L$ is regular. Then there must be a machine on $n$ states accepting $L$, for some positive integer $n$. You can then show (this is essentially what the proof of the pumping lemma does) that such a machine must also accept other strings, i.e. $L$ must also contain these other strings (these other strings are the pumped strings, which have the form $uv^iw$). However, it can be shown that some of these pumped strings do not belong to the given language, and so we have a contradiction. This contradiction implies that the assumption you started with (that there is a machine with $n$ states accepting $L$) is false, i.e. there is no machine on $n$ states accepting $L$. Since $n$ was chosen to be arbitrary, you've shown that there does not exist a machine on any number of states accepting $L$, i.e. there does not exist a machine accepting $L$.
answered May 7, 2022 at 8:01
Ashwin Ganesan
When using the pumping lemma, you assume such an $n$ exists, under the assumption that the language is regular. This $n$, no matter what it is, must exist, since it can be taken to be the number of states of a DFA for the language. You do not, however, need to know its exact value; it must exist if the language is regular.
You can see this in the proof of the pumping lemma.
Russel
The "Pumping Length" "n" exists because you can write a finite automata that classifies all strings up to a fixed, finite length in any way it wants to.
Your finite automata can have (in effect) a lookup table, and decide where each string goes (accepted or not accepted).
But, you cannot make an infinite lookup table. The program/machine/expression for recognizing regular languages only has so much state. How much state isn't known -- because you can make really, really large finite automata -- so we cannot fix an "n" that works for all machines.
Instead, it is saying that for every fixed finite automata, there is a length of string long enough that the finite automata cannot maintain an arbitrary lookup table. As we don't know how big the automata is, we don't know how long this string has to be.
You could write an algorithm that takes a given regular language (assuming it is described reasonably) and produces the point where it cannot hope to maintain a lookup table. As an example: if your recognizing machine was a finite automata description, once your string is longer than the number of states in the finite automata, it must have formed a loop (visited a state twice).
If we break the recognized string into 3 parts (the part before we start the loop, the part that walks the loop, and the part afterwards), we can repeat the loop part of the string any number of times.
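To make the loop argument concrete, here is a small Python sketch (not part of the original answer) that, given the transition table of a toy DFA, finds the first repeated state along a word and returns the corresponding u, v, w split; the three-state DFA used is a made-up example.

```python
def find_pumpable_split(dfa, start, word):
    # Locate the first repeated state along the run of `word`; by pigeonhole
    # this happens within (number of states) letters, giving |uv| <= n, |v| > 0.
    seen = {start: 0}
    state = start
    for i, ch in enumerate(word, 1):
        state = dfa[(state, ch)]
        if state in seen:                     # the run revisits a state: a loop
            j = seen[state]
            return word[:j], word[j:i], word[i:]
        seen[state] = i
    return None                               # word shorter than the state count

# Toy DFA over {a, b} with 3 states, accepting strings whose length % 3 == 0.
dfa = {(s, c): (s + 1) % 3 for s in range(3) for c in "ab"}
u, v, w = find_pumpable_split(dfa, 0, "abab")
print(repr(u), repr(v), repr(w))   # '' 'aba' 'b': u v^i w ends in the same state for all i
```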
But in this proof, and in general, we don't have the finite automata.
But we do know if the language is regular, then such an "n" exists.
So when proving a language non-regular, if you can prove that for every single n, the pumping lemma doesn't hold for any string of your choice, you win.
On the other hand, if you prove the pumping lemma holds for a given string in the language, that does not prove the language is regular.
This kind of mental gymnastics is tricky, honestly.
You can rephrase this as a challenge-response. To prove a language is non-regular, you need to make a proof machine.
This proof machine must accept a value "n". It must then produce a string x in the language with |x| >= n.
Then it must accept any subdivision of x into uvw with |v| > 0 and |uv| <= n. From that, it has to show that uv^iw is not in the language for some i >= 0.
The choice of "n" and the "uvw" are hostile, not something you get to pick. Your "proof machine" must work regardless of what values it is challenged with.
If you can do this, you have proven the language is non-regular.
Yakk
A regular language has a finite state machine, and therefore it has a finite state machine with a minimum number of states n. If we examine the first n letters of an input string, we encounter n+1 states: one after each of 0, 1, 2, …, n letters. Since the machine has only n distinct states, there must be a state S that we encountered after x letters and again after y letters with x < y <= n, so in the state S the next y-x letters brought us back to the same state S. Whether we read those y-x letters 0, 1, 2, or any number of times, we end up back in the same state S.
And if you look carefully, that's exactly what the pumping lemma says. The pumping length is any number greater than or equal to the number of states in the smallest finite state machine for the language. (Except that we usually don't know that number of states, and we usually use the pumping lemma to show that the language is irregular, in which case the number of states and the pumping length don't exist.)
gnasher729
In this new article series QuantStart returns to the discussion of pricing derivative securities, a topic which was covered a few years ago on the site through an introduction to stochastic calculus.
Imanol Pérez, a PhD researcher in Mathematics at Oxford University, and an expert guest contributor to QuantStart will mathematically describe the Black-Scholes model for options pricing in this article and then subsequently outline its limitations in future posts.
Derivatives are financial instruments whose price depends on the performance of some underlying asset or assets. The world of financial derivatives is very complex, and derivatives can be very different from each other. In this article we will focus on a particular type of derivatives, options, and we will show how stochastic analysis can be used to find the fair price, a notion that will be made clear later, of a class of options.
European call options with underlying asset $S$, maturity time $T$ and exercise price $K$ give the contract owner the right (but not the obligation) to buy one share of $S$ at the price $K$ at time $T$.
Similarly, European put options give the contract owner the right (but not the obligation) to sell one share of $S$ at the price $K$ at time $T$.
If the price of the asset at maturity satisfies $S_T > K$, then it is in the interest of the owner of a European call option to buy the asset at the price $K$. If the price at time $T$ is lower than the exercise price, buying the asset at the price $K$ is against the interest of the option owner, since the asset can be bought at a cheaper price directly in the market. Therefore, the option owner will earn $\Phi_c(S_T)$, with $\Phi_c(x):=\max(x-K, 0)$. Similarly, one can check that the owner of a European put option will receive $\Phi_p(S_T)$, with $\Phi_p(x):=\max(K-x, 0)$, from the option contract.
After defining these contracts, one would like to answer the following question: If $\Pi_t$ represents the price of a European option (either a put or call option), how can one find a fair value for $\Pi_t$?
The market is assumed to consist of a risk-free asset $B$ and the stock $S$, whose dynamics are given by $$dB_t = rB_t\,dt, \qquad dS_t = \alpha(t, S_t) S_t\,dt + \sigma(t, S_t) S_t\,dW_t,$$ where $r$ is the short rate of interest, $\alpha$ is the local mean rate of return of the stock, $\sigma$ is its volatility and $W$ is a Brownian motion. We will assume that $r$ is constant and $\alpha$ and $\sigma$ are functions of $t$ and $S_t$. As we see, we will assume that $S$ follows a geometric Brownian motion, which was mentioned in this article.
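As an aside, these dynamics are straightforward to simulate; the following Python sketch uses an Euler–Maruyama discretization with constant $\alpha$ and $\sigma$ (a simplifying assumption made here for illustration, not something the article fixes at this point).

```python
import numpy as np

def simulate_gbm(s0, alpha, sigma, T, n_steps, seed=0):
    # Euler-Maruyama discretization of dS_t = alpha*S_t dt + sigma*S_t dW_t
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))    # Brownian increment over dt
        s[i + 1] = s[i] + alpha * s[i] * dt + sigma * s[i] * dw
    return s

path = simulate_gbm(s0=100.0, alpha=0.05, sigma=0.2, T=1.0, n_steps=252)
```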
An investor can hold a portfolio $h_t=(h_t^1, h_t^2, h_t^3)$. The value $h_t^1$ will correspond to the money invested in the risk free asset $B$, $h_t^2$ the money invested in the stock $S$ and $h_t^3$ the money invested in the option contract. A negative value indicates short selling. We will also denote by $V^h_t$ the wealth of the portfolio $h$ at time $t$.
The conditions above rule out arbitrage, that is, the possibility of making a positive amount of money without investing any money and without taking any risk.
At maturity, the option pays its owner the amount $\Phi(S_T)$, where $\Phi$ can be either $\Phi_c$ or $\Phi_p$, although more general contingent claims $\Phi$ can be used as well.
The Black–Scholes model is a simple model that can be very useful, but it has many limitations and it should therefore be treated with care. In subsequent articles we shall see some of the limitations of the model, and how one could solve them in order to obtain a model that adjusts better to reality. | CommonCrawl |
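For orientation, under the additional assumption of constant $r$ and $\sigma$ the model admits the standard closed-form prices for European calls and puts; the short Python sketch below implements that textbook formula. It is illustrative only and is not derived in this article.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf   # standard normal CDF

def black_scholes(S, K, T, r, sigma, kind="call"):
    # Closed-form Black-Scholes price, assuming constant r and sigma.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * N(d1) - K * exp(-r * T) * N(d2)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

print(round(black_scholes(100, 100, 1.0, 0.05, 0.2), 2))   # ~10.45
```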
Real-time tracking of self-reported symptoms to predict potential COVID-19
Brief Communication
Cristina Menni, Ana M. Valdes, Maxim B. Freidin, Carole H. Sudre, Long H. Nguyen, David A. Drew, Sajaysurya Ganesh, Thomas Varsavsky, M. Jorge Cardoso, Julia S. El-Sayed Moustafa, Alessia Visconti, Pirro Hysi, Ruth C. E. Bowyer, Massimo Mangino, Mario Falchi, Jonathan Wolf, Sebastien Ourselin, Andrew T. Chan, Claire J. Steves & Tim D. Spector
Nature Medicine volume 26, pages 1037–1040 (2020)
Subject terms: Respiratory signs and symptoms
A total of 2,618,862 participants reported their potential symptoms of COVID-19 on a smartphone-based app. Among the 18,401 who had undergone a SARS-CoV-2 test, the proportion of participants who reported loss of smell and taste was higher in those with a positive test result (4,668 of 7,178 individuals; 65.03%) than in those with a negative test result (2,436 of 11,223 participants; 21.71%) (odds ratio = 6.74; 95% confidence interval = 6.31–7.21). A model combining symptoms to predict probable infection was applied to the data from all app users who reported symptoms (805,753) and predicted that 140,312 (17.42%) participants are likely to have COVID-19.
COVID-19 is an acute respiratory illness caused by the novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Since its outbreak in China in December 2019, over 2,573,143 cases have been confirmed worldwide (as of 21 April 2020; https://www.worldometers.info/coronavirus/). Although many people have presented with flu-like symptoms, widespread population testing is not yet available in most countries, including the United States (https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/testing-in-us.html) and United Kingdom1. Thus, it is important to identify the combination of symptoms most predictive of COVID-19, to help guide recommendations for self-isolation and prevent further spread of the disease2.
Case reports and mainstream media articles from various countries indicate that a number of patients with diagnosed COVID-19 developed anosmia (loss of smell)3,4. Mechanisms of action for the SARS-CoV-2 viral infection causing anosmia have been postulated5,6. Other studies indicate that a number of infected individuals present anosmia in the absence of other symptoms7,8, suggesting that this symptom could be used as screening tool to help identify people with potential mild cases who could be recommended to self-isolate9.
We investigated whether loss of smell and taste is specific to COVID-19 in 2,618,862 individuals who used an app-based symptom tracker10 (Methods). The symptom tracker is a free smartphone application that was launched in the United Kingdom on 24 March 2020, and in the United States on 29 March 2020. It collects data from both asymptomatic and symptomatic individuals and tracks in real time how the disease progresses by recording self-reported health information on a daily basis, including symptoms, hospitalization, reverse-transcription PCR (RT-PCR) test outcomes, demographic information and pre-existing medical conditions.
Between 24 March and 21 April 2020, 2,450,569 UK and 168,293 US individuals reported symptoms through the smartphone app. Of the 2,450,569 participants in the United Kingdom, 789,083 (32.2%) indicated having one or more potential symptoms of COVID-19 (Table 1). In total, 15,638 UK and 2,763 US app users reported having had an RT-PCR SARS-CoV-2 test, and having received the outcome of the test. In the UK cohort, 6,452 participants reported a positive test and 9,186 participants had a negative test. In the cohort from the United Kingdom, of the 6,452 participants who tested positive for SARS-CoV-2, 4,178 (64.76%) reported loss of smell and taste, compared with 2,083 out of 9,186 participants (22.68%) who tested negative (odds ratio (OR) = 6.40; 95% confidence interval (CI) = 5.96–6.87; P < 0.0001 after adjusting for age, sex and body mass index (BMI)). We replicated this result in the US subset of participants who had been tested for SARS-CoV-2 (adjusted OR = 10.01; 95% CI = 8.23–12.16; P < 0.0001) and combined the adjusted results using inverse variance fixed-effects meta-analysis (OR = 6.74; 95% CI = 6.31–7.21; P < 0.0001).
Table 1 Characteristics of the study population
We re-ran logistic regressions adjusting for age, sex and BMI to identify other symptoms besides anosmia that might be associated with being infected by SARS-CoV-2. All ten symptoms queried (fever, persistent cough, fatigue, shortness of breath, diarrhea, delirium, skipped meals, abdominal pain, chest pain and hoarse voice) were associated with testing positive for COVID-19 in the UK cohort, after adjusting for multiple testing (Fig. 1a). In the US cohort, only loss of smell and taste, fatigue and skipped meals were associated with a positive test result.
Fig. 1: Association between symptoms and SARS-CoV-2 infection, and ROCs for prediction of the risk of a positive test.
a, Association between symptoms and the odds ratio of SARS-CoV-2 infection in 15,638 UK and 2,763 US participants who were tested via RT-PCR. Error bars represent 95% CIs. b,c, ROCs for prediction in the UK test set (b) and US validation set (c) of the risk of a positive test for SARS-CoV-2, using the following self-reported symptoms and traits: persistent cough, fatigue, skipped meals, loss of smell and taste, sex and age. Values for AUC, sensitivity (SE), specificity (SP), positive predictive value (PPV) and negative predictive value (NPV) are shown, with 95% CIs in parentheses.
We performed stepwise logistic regression in the UK cohort, by randomly dividing it into training and test sets (ratio: 80:20) to identify independent symptoms most strongly correlated with COVID-19, adjusting for age, sex and BMI. A combination of loss of smell and taste, fatigue, persistent cough and loss of appetite resulted in the best model (with the lowest Akaike information criterion). We therefore generated a linear model for symptoms that included loss of smell and taste, fatigue, persistent cough and loss of appetite to obtain a symptoms prediction model for COVID-19:
$$\begin{array}{l}{\rm{Prediction}}\,{\rm{model}} = - 1.32 - \left( {0.01 \times{\rm{age}}} \right)\\ + \left( {0.44 \times{\rm{sex}}} \right) + (1.75 \times{\rm{loss}}\,{\rm{of}}\,{\rm{smell}}\,{\rm{and}}\,{\rm{taste}})\\ + \left( {0.31 \times{\rm{severe}}\,{\rm{or}}\,{\rm{significant}}\,{\rm{persistent}}\,{\rm{cough}}} \right)\\ + \left( {0.49 \times{\rm{severe}}\,{\rm{fatigue}}} \right) + \left( {0.39 \times{\rm{skipped}}\,{\rm{meals}}} \right)\end{array}$$
where all symptoms are coded as 1 if the person self-reports the symptom and 0 if not. The sex feature is also binary, with 1 indicative of male participants and 0 representing females. The obtained value is then transformed into predicted probability using exp(x)/(1 + exp(x)) transformation followed by assigning cases of predicted COVID-19 for probabilities >0.5 and controls for probabilities <0.5.
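For readers who want to apply the published coefficients, the model is easy to evaluate directly; the following Python sketch (an illustration, not the authors' code) computes the linear predictor and the logistic transformation described above.

```python
from math import exp

def covid_probability(age, male, anosmia, cough, fatigue, skipped_meals):
    # Linear predictor with the published coefficients; binary inputs are 0/1.
    x = (-1.32 - 0.01 * age + 0.44 * male + 1.75 * anosmia
         + 0.31 * cough + 0.49 * fatigue + 0.39 * skipped_meals)
    return exp(x) / (1 + exp(x))             # logistic transformation

p = covid_probability(age=45, male=0, anosmia=1, cough=1, fatigue=0, skipped_meals=0)
print(round(p, 3), "predicted COVID-19" if p > 0.5 else "predicted not COVID-19")
```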
In the UK test set, the prediction model had a sensitivity of 0.65 (0.62–0.67), a specificity of 0.78 (0.76–0.80), an area under the curve (AUC) of the receiver operating characteristic curve (ROC) (that is, ROC-AUC) of 0.76 (0.74–0.78), a positive predictive value of 0.69 (0.66–0.71) and a negative predictive value of 0.75 (0.73–0.77) (Fig. 1b). A cross-validation ROC-AUC was 0.75 (0.74–0.76) in the 15,638 UK users who were tested for SARS-CoV-2. In this model, the strongest predictor was loss of smell and taste (Fig. 1a). Excluding loss of smell and taste from the model resulted in reduced sensitivity (0.33 (0.30–0.35)) but increased specificity (0.84 (0.83–0.86)). We also computed the ROC-AUC with stratification for sex and age groups and found that the results were similar in all groups, with no significant differences between strata, suggesting that our model works similarly within different sex and age groups. We validated the model in the US cohort and found an ROC-AUC of 0.76 (0.74–0.78), a sensitivity of 0.66 (0.62–0.69), a specificity of 0.83 (0.82–0.85), a positive predictive value of 0.58 (0.55–0.62) and a negative predictive value 0.87 (0.86–0.89) (Fig. 1c).
We also queried whether the association between loss of smell and taste and COVID-19 was influenced by mainstream media reports. We assessed the correlation between loss of smell and taste and being COVID-19 positive in different date ranges: (1) 24 March to 3 April 2020, following a number of reports in the UK mainstream media (for example, ref. 11) reporting anosmia as a symptom of COVID-19; (2) the week of 4–10 April 2020; and (3) from 11–21 April 2020. In the United Kingdom, the OR (95% CI) values for the associations of self-reported loss of smell and taste and a positive test for COVID-19 across these periods were 4.98 (4.47–5.56), 6.64 (5.75–7.68) and 10.40 (9.08–11.91), respectively, suggesting that awareness of loss of smell and taste as symptoms of COVID-19 in the UK has increased following media reports. However, this association was not found in the US cohorts: 24 March to 3 April: 8.13 (5.18–12.78); 4–10 April: 12.30 (8.96–16.90); 11–21 April: 9.13 (6.73–12.38).
Finally, we applied the predictive model to the 805,753 UK and US symptom-reporting individuals who had not been tested for COVID-19 and found that, according to our model, 140,312 (116,400–164,224) of these 805,753 participants (17.42% (14.45–20.39%)) reporting some symptoms were likely to be infected by the virus, representing 5.36% as a proportion of the overall responders to the app.
We report that loss of smell and taste is a potential predictor of COVID-19 in addition to other, more established, symptoms including high temperature and a new, persistent cough. COVID-19 appears to cause problems of smell receptors in line with many other respiratory viruses, including previous coronaviruses thought to account for 10–15% of cases of anosmia7,9.
We also identify a combination of symptoms, including anosmia, fatigue, persistent cough and loss of appetite, that together might identify individuals with COVID-19.
A major limitation of the current study is the self-report nature of the data included, which cannot replace physiological assessments of olfactory and gustatory function or nucleotide-based testing for SARS-CoV-2. Both false negative and false positive reports could be included in the dataset12, and because of the way the questions are asked, gustatory and olfactory losses are conflated. Second, at present, we do not know whether anosmia was acquired before or after other COVID-19 symptoms, or during the illness or afterwards. This information could become available as currently healthy users track symptom development over time. As more accurate tests become available, we have the ability to optimize our model. One caveat of our study is that the individuals on which the model was trained are not representative of the general population because performing tests for SARS-CoV-2 is not random. Testing is more likely to be done if an individual develops severe symptoms requiring hospitalization, if an individual has been known to have had contact with people who have tested positive for SARS-CoV-2 infection, in health workers, and if an individual has traveled in an area of high risk of exposure. Therefore, our results may overestimate the number of expected positive cases of SARS-CoV-2 infection. Additionally, volunteers using the app are a self-selected group who might not be fully representative of the general population. Another limitation is the potential effect that mainstream media coverage of loss of smell and taste and COVID-19 might have had on app responses. We found that these reports might have influenced UK responders, for whom there was a temporal trend in the strength of the association. However, there was no such association in the US cohort; therefore, we conclude that regardless of any bias introduced by mainstream media reports, the association between COVID-19 and loss of smell and taste remains strong.
Our work suggests that loss of sense of smell and taste could be included as part of routine screening for COVID-19 and should be added to the symptom list currently developed by the World Health Organization (www.who.int/health-topics/coronavirus). A detailed study on the natural history of broader COVID-19 symptoms, especially according to timing and frequency, will help us to understand the usefulness of symptom tracking and modeling, and to identify probable clusters of infection.
Study setting and participants
The COVID Symptom Study smartphone-based app (previously known as COVID Symptom Tracker) was developed by Zoe Global, in collaboration with King's College London and Massachusetts General Hospital, and was launched in the United Kingdom on 24 March 2020 and in the United States on 29 March 2020. After 3 weeks, it had reached 2,618,862 users. It enables the capture of self-reported information related to COVID-19, as described previously10. The survey questions are available in Supplementary Table 1. On first use, the app records self-reported location, age and core health risk factors. With continued use and notifications, participants provide daily updates on symptoms, health care visits, COVID-19 testing results and whether they are self-quarantining or seeking health care, including the level of intervention and related outcomes. Individuals without apparent symptoms are also encouraged to use the app.
The King's College London Ethics Committee approved the ethics for the app, and all users provided consent for non-commercial use. An informal consultation with TwinsUK members over email and social media before the app was launched found that they were overwhelmingly supportive of the project. The US protocol was approved by the Partners Human Research Committee.
Data from the app were downloaded to a server and only records where the self-reported characteristics fell within the following ranges were utilized for further analysis: age: 16–90 years (18 years in the United States); height: 110–220 cm; weight: 40–200 kg; BMI: 14–45 kg m−2; and temperature: 35–42 °C. The individuals whose data were included to develop and test the prediction model were those who had completed the report for symptoms in the app and who declared that they had been tested for SARS-CoV-2 by RT-PCR and received the result. Only individuals who answered at least nine of the ten symptom questions, and who answered about loss of smell and taste, were included.
Baseline characteristics are presented as the number (percentage) for categorical variables and the mean (standard deviation) for continuous variables. Multivariate logistic regression adjusting for age, sex and BMI was applied to investigate the correlation between loss of smell and taste and COVID-19 in 15,638 UK users of the symptom tracker app who were also tested in the laboratory for SARS-CoV-2 (6,452 UK individuals tested positive and 9,186 tested negative). The results were replicated in 726 US individuals who tested positive and 2,037 US individuals who tested negative. We then randomly split the UK sample into training and test sets with a ratio of 80:20. In the training set, we performed stepwise logistic regression combining forward and backward algorithms, to identify other symptoms associated with COVID-19 independent of loss of smell and taste. We included in the model ten other symptoms (fever, persistent cough, fatigue, shortness of breath, diarrhea, delirium, skipped meals, abdominal pain, chest pain and hoarse voice) as well as age, sex and BMI, and chose as the best model the one with the lowest Akaike information criterion. We then assessed the performance of the model both in the test set and via tenfold cross-validation in the entire UK sample of 15,638 individuals using the R package cvAUC13.
For our predictive model, using the R packages pROC and epiR, we further computed the AUC (that is, the overall diagnostic performance of the model), sensitivity (positivity in disease; that is, the proportion of subjects who have the target condition (reference standard positive) and give positive test results) and specificity (negativity in health; that is, the proportion of subjects without the target condition (reference standard negative) who give negative model results).
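In confusion-matrix terms (TP/FP true/false positives, TN/FN true/false negatives), these definitions reduce to the standard formulas:

$$ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}. $$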
Finally, we applied the predictive model to the 805,753 individuals reporting symptoms who had not had a SARS-CoV-2 test, to estimate the percentage of individuals reporting some COVID-19 symptoms who were likely to be infected by the virus. The proportion of estimated infections was calculated repeatedly by sampling the dataset (with replacement) to obtain the 95% CIs.
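A minimal sketch of this resampling step, continuing the Python sketch above; the 0.5 probability cut-off and the `df_untested` data frame are assumptions for illustration (the excerpt does not state the threshold actually used):

```python
# Bootstrap (sampling with replacement) a 95% CI for the predicted fraction
# of likely infections among untested symptomatic users. `model` comes from
# the sketch above; `df_untested` and the 0.5 cut-off are assumptions.
import numpy as np

probs = model.predict_proba(df_untested[PREDICTORS])[:, 1]
point = (probs > 0.5).mean()

rng = np.random.default_rng(0)
boot = [(rng.choice(probs, size=probs.size, replace=True) > 0.5).mean()
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated fraction infected: {point:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```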
Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data collected in the app are being shared with other health researchers through the NHS-funded Health Data Research UK (HDRUK)/SAIL consortium, housed in the UK Secure e-Research Platform (UKSeRP) in Swansea. Anonymized data collected by the symptom tracker app can be shared with bona fide researchers via HDRUK, provided the request is made according to their protocols and is in the public interest (see https://healthdatagateway.org/detail/9b604483-9cdc-41b2-b82c-14ee3dd705f6). US investigators are encouraged to coordinate data requests through the COPE Consortium (www.monganinstitute.org/cope-consortium). Data updates can be found at https://covid.joinzoe.com.
The app code is publicly available from https://github.com/zoe/covid-tracker-react-native.
Whittington, A. M. et al. Coronavirus: rolling out community testing for COVID-19 in the NHS. BMJ Opinion https://blogs.bmj.com/bmj/2020/02/17/coronavirus-rolling-out-community-testing-for-covid-19-in-the-nhs/ (2020).
Rossman, H. et al. A framework for identifying regional outbreak and spread of COVID-19 from one-minute population-wide surveys. Nat. Med. https://doi.org/10.1038/s41591-020-0857-9 (2020).
Gane, S. B., Kelly, C. & Hopkins, C. Isolated sudden onset anosmia in COVID-19 infection. A novel syndrome? Rhinology https://doi.org/10.4193/Rhin20.114 (2020).
Iacobucci, G. Sixty seconds on… anosmia. Br. Med. J. 368, m1202 (2020).
Brann, D. et al. Non-neuronal expression of SARS-CoV-2 entry genes in the olfactory system suggests mechanisms underlying COVID-19-associated anosmia. Preprint at bioRxiv https://www.biorxiv.org/content/10.1101/2020.03.25.009084v3 (2020).
Sungnak, W. et al. SARS-CoV-2 entry factors are highly expressed in nasal epithelial cells together with innate immune genes. Nat. Med. https://doi.org/10.1038/s41591-020-0868-6 (2020).
Hopkins, C. & Kumar, N. Loss of Sense of Smell as Marker of COVID-19 Infection (ENT UK, 2020).
Eliezer, M. et al. Sudden and complete olfactory loss function as a possible symptom of COVID-19. JAMA Otolaryngol. Head Neck Surg. https://doi.org/10.1001/jamaoto.2020.0832 (2020).
Spinato, G. et al. Alterations in smell or taste in mildly symptomatic outpatients with SARS-CoV-2 infection. J. Am. Med. Assoc. https://doi.org/10.1001/jama.2020.6771 (2020).
Drew, D. et al. Rapid implementation of mobile technology for real-time epidemiology of COVID-19. Science https://science.sciencemag.org/content/early/2020/05/04/science.abc0473/tab-article-info (2020).
Roberts, M. Coronavirus: are loss of smell and taste key symptoms? BBC News (1 April 2020).
Oleszkiewicz, A., Kunkel, F., Larsson, M. & Hummel, T. Consequences of undetected olfactory loss for human chemosensory communication and well-being. Phil. Trans. R. Soc. Lond. B Biol. Sci. 375, 20190265 (2020).
LeDell, E., Petersen, M. & van der Laan, M. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates. Electron J. Stat. 9, 1583–1607 (2015).
We express our sincere thanks to all of the participants who entered data into the app, including study volunteers enrolled in cohorts within the Coronavirus Pandemic Epidemiology (COPE) consortium. We thank the staff of Zoe Global, the Department of Twin Research at King's College London and the Clinical and Translational Epidemiology Unit at Massachusetts General Hospital for tireless work in contributing to the running of the study and data collection. We thank E. Segal and his laboratory for helpful input. This work was supported by Zoe Global. The Department of Twin Research receives grants from the Wellcome Trust (212904/Z/18/Z) and Medical Research Council/British Heart Foundation Ancestry and Biological Informative Markers for Stratification of Hypertension (AIMHY; MR/M016560/1), and support from the European Union, the Chronic Disease Research Foundation, Zoe Global, the NIHR Clinical Research Facility and the Biomedical Research Centre (based at Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London). C.M. is funded by the Chronic Disease Research Foundation and by the Medical Research Council AIM HY project grant. A.M.V. is supported by the National Institute for Health Research Nottingham Biomedical Research Centre. C.H.S. received an Alzheimer's Society Junior Fellowship (AS-JF-17-011). S.O. and M.J.C. are funded by the Wellcome/EPSRC Centre for Medical Engineering (WT203148/Z/16/Z) and Wellcome Flagship Programme (WT213038/Z/18/Z). A.T.C. is the Stuart and Suzanne Steele MGH Research Scholar and is a team leader for the Stand Up to Cancer Foundation. A.T.C., L.H.N. and D.A.D. are supported by an Evergrande COVID-19 Response Fund Award through the Massachusetts Consortium on Pathogen Readiness (MassCPR). This work was also supported by the UK Research and Innovation London Medical Imaging & Artificial Intelligence Centre for Value-Based Healthcare.
These authors contributed equally: Cristina Menni, Ana M. Valdes.
These authors jointly supervised this work: Claire J. Steves, Tim D. Spector.
Department of Twin Research and Genetic Epidemiology, King's College London, London, UK
Cristina Menni, Ana M. Valdes, Maxim B. Freidin, Julia S. El-Sayed Moustafa, Alessia Visconti, Pirro Hysi, Ruth C. E. Bowyer, Massimo Mangino, Mario Falchi, Claire J. Steves & Tim D. Spector
Academic Rheumatology, Clinical Sciences, Nottingham City Hospital, Nottingham, UK
Ana M. Valdes
School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
Carole H. Sudre, Thomas Varsavsky, M. Jorge Cardoso & Sebastien Ourselin
Clinical and Translational Epidemiology Unit, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
Long H. Nguyen, David A. Drew & Andrew T. Chan
Zoe Global, London, UK
Sajaysurya Ganesh & Jonathan Wolf
NIHR Biomedical Research Centre at Guy's and St Thomas' Foundation Trust, London, UK
Massimo Mangino
C.M., A.M.V., J.W., C.J.S. and T.D.S. conceived of and designed the experiments. C.M., M.B.F., C.H.S., S.G., and A.M.V. analyzed the data. T.V., S.O., M.J.C., R.C.E.B., A.V., J.S.E.S.-M., P.H., M.M., M.F., D.A.D., A.T.C. and L.H.N. contributed reagents, materials and/or analysis tools. C.M. and A.M.V. wrote the manuscript. All authors revised the manuscript.
Correspondence to Cristina Menni or Tim D. Spector.
T.D.S. and A.M.V. are consultants to Zoe Global. S.G. and J.W. are employees of Zoe Global.
Peer review information Jennifer Sargent was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Menni, C., Valdes, A.M., Freidin, M.B. et al. Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat Med 26, 1037–1040 (2020). https://doi.org/10.1038/s41591-020-0916-2
Issue Date: July 2020
Measuring the effect of node aggregation on community detection
Yérali Gandica, Adeline Decuyper, Christophe Cloquet, Isabelle Thomas & Jean-Charles Delvenne
The nodes of a complex network are often aggregated, whether deliberately or not, owing to technical, ethical or legal limitations, or for privacy reasons. A common example is geographic position: one may uncover communities in a network of places, or of individuals identified with their typical geographical position, and then aggregate these places into larger entities, such as municipalities, thus obtaining another network. The communities found in the networks obtained at various levels of aggregation may exhibit various degrees of similarity, from full alignment to complete independence. This is akin to the problem of ecological and atomic fallacies in statistics, or to the Modifiable Areal Unit Problem in geography.
We identify the class of community detection algorithms most suitable to cope with node aggregation, and develop an index for aggregability, capturing to which extent the aggregation preserves the community structure. We illustrate its relevance on real-world examples (mobile phone and Twitter reply-to networks). Our main message is that any node-partitioning analysis performed on aggregated networks should be interpreted with caution, as the outcome may be strongly influenced by the level of the aggregation.
One of the most efficient ways to analyze a complex network is by partitioning the nodes into blocks, shedding light on the internal structure of the network. For example, community detection seeks to decompose a network into blocks of nodes with many internal edges, and few edges falling between the blocks. This approach allows analyzing large networks as a sum of dense but weakly interconnected subnetworks. In the last couple of decades, the wider and wider availability of various large network-shaped data has pushed the need for new formalisations of the task of community detection, and for more efficient algorithms [1].
Often, one and the same situation can be modelled by several networks of interest, where nodes represent entities at different levels of abstraction. For example, at one level a node may represent an individual person, and at another level it may represent the aggregation of several persons sharing an attribute, for example belonging to the same age class or living in the same municipality. In this case, the edge between two aggregation classes is typically weighted as the sum of the weights of all edges linking the individuals across the two classes. The reasons for considering an aggregated network rather than a disaggregated one are many. For instance, only the aggregated network may be available to the researcher due to privacy reasons, or due to limited resources (e.g. only aggregate flows may be accessible to the measurement, or the disaggregated network may be too large to handle for a given community detection algorithm). The aggregated network may also be more relevant for a given analysis, because the aggregation removes possible noise present at the individual level and creates statistically robust entities.
In all these situations, it is natural to wonder whether the communities computed on different levels of aggregation will be comparable in any way. This is the question that we explore in this article.
That some statistical patterns, for instance correlations, may differ starkly when computed either on a dataset or on an aggregated version of the same dataset is well known in statistical sciences. Extrapolating observations on categories of individuals to the individuals themselves is generically called an ecological fallacy, with Simpson's paradox [2, 3] or Robinson's paradox [4] as well-known examples. In geographical sciences, a particular form of such fallacy is called the Modifiable Areal Unit Problem (MAUP). In the earliest detected occurrence of MAUP, Gehlke and Biehl [5] showed that the value of the correlation coefficient of geolocalised features was influenced by the size of the spatial units used in their analyses. Openshaw further showed that the results of quantitative spatial models and statistics may depend highly on the size and shape of the basic spatial units used [6]. This problem has been broadly studied and is the object of an extensive literature; see [7] for a review. The atomic fallacy can be seen as the bias generated by extrapolating patterns present at the individual level to the level of the group to which those individuals or their geographical entities belong.
To the best of our knowledge, however, the impact of atomic and ecological fallacies on community detection has not been considered in the literature, despite its high relevance in practical applications. This is a gap that we aim to fill in the present paper, by measuring quantitatively the impact of node aggregation on the community structure in networks. We first present a theoretical argument showing that some community detection methods are more robust than others to node aggregation, in that whenever the communities found optimal by the method on the finer network happen to be unambiguously aggregated, the aggregated communities are also found optimal by the community detection method. Then, we introduce the aggregability index, a quantitative proxy for the robustness of the community structure of a given network with respect to given node aggregation classes.
We illustrate our considerations on two real-life examples. Both compare networks of places, where the nodes are geographical areas, and the edges represent interactions between areas. In these examples it is easy to generate a series of aggregated networks by merging the places into larger and larger areas, either according to an administrative hierarchy (districts, municipalities, counties, etc.) or according to coarser and coarser square grids. In the first real-life example, each node is the mean position of a Twitter user in Belgium, and edges count the reply-to tweets between two such users. Aggregated versions of this network are produced by merging the positions into larger and larger administrative units or grid cells, and merging the edges accordingly. In the second example, the nodes are mobile phone towers in and around Brussels (the capital city of Belgium), and the edges count the number of phones calls between two towers. Aggregated versions of this network are also produced by successive merging of the nodes and edges similarly to the Twitter case. The quantitative tools we introduce allow to observe that the Twitter networks exhibit significantly different community structures at different levels of aggregation, while the mobile phone networks' communities are relatively insensitive to aggregation.
Edge-counting objective functions for optimal partitioning
Partitioning the nodes of a network is often performed by optimising an objective function that assigns a real number to each partition. We characterise a class of objective functions that preserve optimality of the community partition under aggregation whenever possible, as we now define.
Assume we want to detect communities in a weighted, undirected graph G, understood as a (non-overlapping) partition \(\mathcal{C}\) of the nodes of G. Let us assume that we are also interested in optimising a certain criterion, capturing structural patterns of interest, typically high density of edges inside the communities and low density across communities. Other criteria are also possible: for instance, one may want to detect core-periphery structure or general stochastic block models [8–10]. We want to underline here that there is a variety of possible criteria whose relevance is strongly dependent on the network and the application. For instance, some methods integrate a resolution parameter that imposes a preference for small or large communities [11, 12]. Some methods based on comparison with a generative model for the graph are highly dependent on the choice of the model [13]. Even more broadly, different goals for community detection may lead to entirely different objective functions [14, 15]. As many of those methods proceed by optimising a 'goodness' criterion, we talk of "the optimal partition" to denote the communities found to be optimal for the criterion of interest; we suppose for simplicity that the partition is unique and can be discovered effectively, although in practice most algorithms are only heuristics.
Assume moreover that a graph \(G'\) is obtained from the aggregation of the nodes and edges of G, following a partition \(\mathcal{P}\) of the nodes of G. In other words, if \(\mathcal{P}\) partitions nodes of G into k "aggregation classes", then \(G'\) has k nodes. The weight of the edge (if any) between node i and node j of \(G'\) is the sum of the weights of all edges of G, between nodes in the corresponding aggregation class I and aggregation class J of the partition \(\mathcal{P}\). In particular, node i of \(G'\) has a self-loop aggregating the weight of all the edges inside the corresponding aggregation class I, representing the interactions between different nodes of the same class. In summary, the weights in \(G'\) are given by
$$ w_{ij}=\sum_{u \in I, v \in J, u \neq v} w_{uv}, $$
where \(w_{uv}\) is the weight of the edge between nodes u and v, in the initial disaggregated network G. In all cases, we insist that node aggregation as considered in this paper leads to weighted aggregated graphs, even when the original graphs are unweighted.
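To make Eq. (1) concrete, here is a short Python/networkx sketch of the aggregation step (our own rendering, not code from the paper). `classes` is any mapping from nodes of G to aggregation-class labels; edges whose endpoints share a class accumulate into a self-loop:

```python
import networkx as nx

def aggregate(G, classes):
    """Aggregate weighted graph G according to the node->class map `classes`,
    summing edge weights; intra-class weight ends up on a self-loop."""
    Gp = nx.Graph()
    Gp.add_nodes_from(set(classes.values()))
    for u, v, data in G.edges(data=True):
        i, j = classes[u], classes[v]
        w = data.get("weight", 1.0)
        if Gp.has_edge(i, j):
            Gp[i][j]["weight"] += w
        else:
            Gp.add_edge(i, j, weight=w)
    return Gp
```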
In general, we want to understand the relationship between the communities of G and \(G'\). Those communities live on different graphs, so a direct comparison is not possible. We can nevertheless "lift" the communities of \(G'\) back to G, by replacing each node in \(C'\) with its aggregation class in G. Indeed, a community of \(G'\) is a set of nodes of \(G'\), each of which represents an aggregation class of G. To state it more formally, if \(f_{\mathcal{P}}:\operatorname{Nodes}(G) \to \operatorname{Nodes}(G')\) is the aggregation function relating every node of the original graph G to its corresponding node in the aggregated graph \(G'\), then a community \(C' \subset \operatorname{Nodes}(G')\) is lifted back to \(f^{-1}(C')\), which is a notation for the set \(\{x \in \operatorname{Nodes}(G) : f(x) \in C'\}\). Doing so for each community of \(G'\), we obtain a partition of the nodes of G, which we denote \(f^{-1}(\mathcal{C}')\), and which we call the "lifting" of \(\mathcal{C}'\). This partition can now be compared to \(\mathcal{C}\).
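The lifting \(f^{-1}(\mathcal{C}')\) is equally direct to express in code; continuing the sketch above:

```python
def lift(communities_aggregated, classes):
    """Lift a partition of the aggregated graph back to the original graph.

    communities_aggregated: dict mapping each node of G' to a community label.
    classes: dict mapping each node of G to its node of G' (the map f).
    Returns a dict mapping each node of G to a community label.
    """
    return {u: communities_aggregated[classes[u]] for u in classes}
```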
A specific case of interest is when we want to know the communities of G while we only have access to \(G'\). Clearly, the best scenario is when the aggregation classes are subsets of the optimal communities in G, i.e. when each community is a union of aggregation classes. In this case, the aggregation transforms unambiguously the optimal community partition \(\mathcal{C}\) of G into a (possibly non-optimal) community partition \(\mathcal{C}'\) of \(G'\). From the knowledge of the community partition \(\mathcal{C}'\) of \(G'\), it is then possible to recover \(\mathcal{C}\) as \(f^{-1}(\mathcal{C}')\), the lifting of \(\mathcal{C}'\). If, moreover, the community structure \(\mathcal{C}'\) is also optimal in \(G'\), then we have a natural way to recover the community structure of the original G: first compute \(\mathcal{C}'\) as the optimal community structure of \(G'\), then lift it to \(\mathcal{C}\). However, whether \(\mathcal{C}'\) is indeed optimal for \(G'\) depends on the criterion used to define '(optimal) communities'.
This can be guaranteed if the objective function, evaluated on a given graph G and a proposed community partition \(\mathcal{C}\), only depends on the graph \(G''\), defined as the graph obtained by aggregating G with respect to the partition \(\mathcal{C}\) of the nodes. In other words, we require that the objective function depends only on the total weight of all edges between any pair of communities (including from a community to itself), but not on the way those edges are distributed inside a community or between communities. We call such a function an edge-counting function.
This natural result is proved simply. Since we assume that \(G'\) is obtained from G by aggregation with respect to a partition \(\mathcal{P}\), and that the partition \(\mathcal{C}\) is coarser than \(\mathcal{P}\), then the aggregation of \(G'\) with respect to \(\mathcal{C}'\) coincides with the aggregation of G with respect to \(\mathcal{C}\). Therefore, the edge-counting objective function takes the same value for \((G,\mathcal{C})\) and \((G',\mathcal{C}')\). Thus if \(\mathcal{C}\) is optimal for G then \(\mathcal{C}'\) is also optimal for \(G'\).
Despite its simplicity, this first result suggests that some methods of the literature are more appropriate than others in presence of node aggregation. Such edge-counting criteria include modularity [16], the Hamiltonian given by Potts models [17], linearised partition stability [12], Infomap's description length [18], conductance [19], Normalised Cuts [20], and their natural extension to weighted graphs.
On the other hand, methods based on counting paths rather than edges depend on the way edges are distributed inside a community and not only the number of edges or total weights. Such methods include Markov clustering [21], Walktrap [22], partition stability [12], etc., and should be used with the greatest caution in case of aggregated data.
Different aggregations lead to different community structures
Even an edge-counting objective function cannot preserve the community structure in the context of arbitrary aggregation classes. Assume, for instance, that the aggregation classes are chosen randomly, every node being attributed uniformly at random to one of the classes. Then, it is reasonable to assume that the aggregated graph will behave like a complete graph with all edges of similar weight. Such a graph is expected to exhibit either no community structure, or communities created only by the small random fluctuations in the weights, retaining no information from the optimal communities of G.
One can also generate examples where well-chosen classes generate a graph with an entirely different, yet relevant, community structure. See Fig. 1 for an illustration on a toy 4-node network, and two aggregated 2-node networks, whose communities lift back to different community structures on the fine-scale network. These partitions may or may not coincide with the community structure computed directly on the fine-scale 4-node network, depending on the criterion for detecting communities. Here we do not specify an explicit community detection criterion, but it is reasonable to assume that if the self-loop (omitted for clarity in Fig. 1) on each node of a 2-node aggregated network is heavy enough compared to the internode link, then the criterion will find the partition into two one-node communities as optimal. Suppose, for example, that on each aggregating partition (either by colour or by shape) of the figure, the community detection criterion is such that the 2-community partition is optimal. Suppose as well that the community partition criterion finds the same-colour communities to be optimal on the 4-node network. We see that this partition is 'orthogonal' to the partition that would be lifted from the communities on the (bottom) same-shape aggregated network. Yet all community partitions are 'correct' and relevant for their respective networks: one should refrain from thinking that the aggregation leads to the 'wrong' communities.
Community detection over two examples of aggregations of a same 4-node network. Self-loops in aggregated networks are omitted for clarity. We assume that the community detection criterion is such that each aggregated network admits the trivial two-community partition as optimal (which typically occurs when the self-loops are heavy enough for the community detection criterion to split the nodes into separate communities). The community structure on each aggregated network lifts to two possible partitions on the 4-node network, communities based on the same colour and on the same shape
A more general example is built with the Kronecker product of an \(n_{1}\)-node graph \(G_{1}\) and an \(n_{2}\)-node graph \(G_{2}\). In the product graph \(G_{1} \otimes G_{2}\), whose node set is the Cartesian product of the two individual node sets, a node \((i,j)\) is connected to the node \((i',j')\) if i and \(i'\) are neighbours in \(G_{1}\), as well as j and \(j'\) in \(G_{2}\). If the graphs are weighted, then the weight on an edge in the product graph is simply the product of the weights in the corresponding edges in \(G_{1}\) and \(G_{2}\). The product graph can be aggregated in two natural ways, in one that retrieves \(G_{1}\) as aggregated graph, and another one that retrieves \(G_{2}\). Assume that the fine-grained network is \(G_{1} \otimes G_{2}\). Both aggregated graphs \(G_{1}\) and \(G_{2}\) may have a significant community structure, thus the community detection on both aggregations will provide interesting, distinct insights on the underlying fine-grained network.
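In networkx this construction corresponds to the tensor (categorical) product. A small sketch, reusing `aggregate()` from above with arbitrary example graphs; aggregating along the first coordinate recovers a weighted version of \(G_{1}\):

```python
import networkx as nx

G1 = nx.karate_club_graph()  # arbitrary example factors
G2 = nx.cycle_graph(4)

# (i, j) ~ (i', j') iff i ~ i' in G1 and j ~ j' in G2, as in the text.
G = nx.tensor_product(G1, G2)

# Aggregate along the first coordinate: each class is {(i, j) : j in G2}.
classes_first = {node: node[0] for node in G}
G1_weighted = aggregate(G, classes_first)  # a weighted version of G1
```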
A real-life analogy would involve, for instance, aggregating a social network according either to geographical location (e.g., counties), or to age class: both may exhibit relevant community structures explaining on the one hand which counties interact together, and on the other hand which age classes interact together. Both community structures can be lifted back to the social network. If all age groups are equally present in each location, then those two partitions of the social network, although both interesting in their own right, are 'orthogonal' to each other as in the examples above. Thus, at least one of them will differ sharply from communities found directly on the social network.
In summary, different aggregations of the original network may induce community structures on the original network that are completely disaligned with one another, without necessarily being 'wrong', and that are either similar or dissimilar to the community structure computed directly on the original graph.
The aggregability index
Between the two extreme situations, where the aggregating partition is either completely aligned with the community structure in G or completely orthogonal to it, one finds intermediate situations where node aggregation is expected to perturb community detection to a greater or lesser extent.
We propose a metric capturing to what extent node aggregation will preserve community detection, by introducing the aggregability index, η, as the fraction of the information required to identify the community of a randomly chosen node that is provided by knowledge of its aggregation class:
$$ \eta = \frac{I(\mathcal{C} ; \mathcal{P})}{H(\mathcal{C})}. $$
Here \(H(\mathcal{C})\) is the Shannon entropy of the community partition, defined in the following way. As a thought experiment, pick a node uniformly at random in G. The community of the node is a random variable with Shannon entropy \(H(\mathcal{C}) \triangleq - \sum_{C\in \mathcal{C}} P(C) \log P(C)\), with probability \(P(C)\) of a community C being proportional to its number of nodes. Similarly, \(I(\mathcal{C} ; \mathcal{P})\) is the Shannon mutual information between the community in the partition \(\mathcal{C}\) and the random aggregation class in \(\mathcal{P}\) of a randomly picked node of G.
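A direct estimate of η from two node labelings (a sketch; any consistent logarithm base works, and natural logarithms are used in both numerator and denominator here):

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def aggregability_index(communities, classes):
    """communities, classes: equal-length label sequences over the nodes."""
    i_cp = mutual_info_score(communities, classes)       # I(C;P) in nats
    _, counts = np.unique(communities, return_counts=True)
    h_c = entropy(counts / counts.sum())                 # H(C) in nats
    return i_cp / h_c
```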
Our newly defined aggregability index, η, ranges from 0 to 1. In the \(\eta =0\) limit, the aggregation classes are independent of the communities in \(\mathcal{C}\), which implies that each node is aggregated with nodes from other communities. In particular, the community structure \(\mathcal{C}'\) that we may compute in the aggregated network, once lifted back to the initial graph G, forms communities which are unions of aggregation classes, thus also independent of the communities in \(\mathcal{C}\). In short, using the notations above, we can write \(I(\mathcal{C} ; f^{-1}(\mathcal{C}'))=0\).
In the \(\eta =1\) limit, the aggregation classes are subsets of the communities, thus any edge-counting criterion will preserve the community structure. In short, we write \(\mathcal{C} = f^{-1}(\mathcal{C}')\).
Between these extreme situations, the lifted communities \(f^{-1}(\mathcal{C}')\) are neither independent of nor fully aligned with \(\mathcal{C}\). In this case, we observe, due to the fact that \(f^{-1}(\mathcal{C}')\) is a coarser partition than the aggregation partition \(\mathcal{P}\), that \(I(\mathcal{C} ; f^{-1}(\mathcal{C}')) \leq I(\mathcal{C} ; \mathcal{P})\) (in application of the so-called data-processing inequality in information theory). In summary, we have in all cases:
$$ \eta \geq \frac{I(\mathcal{C} ; f^{-1}(\mathcal{C}')) }{H(\mathcal{C})}, $$
which confirms η as a 'best-case' estimate of the closeness between the community structure on the original graph G and its aggregation \(G'\).
We may relax (3) to make it more symmetric in \(\mathcal{C}\) and \(\mathcal{C}'\), by increasing the denominator:
$$ \eta \geq \frac{I(\mathcal{C} ; f^{-1}(\mathcal{C}')) }{H(\mathcal{C})+H(f^{-1}(\mathcal{C}'))}, $$
which can be written equivalently as
$$ \eta \geq \frac{1}{2} \operatorname{NMI}\bigl(\mathcal{C},f^{-1} \bigl(\mathcal{C}'\bigr)\bigr), $$
where NMI denotes a popular way to measure the similarity between two partitions, explained in the Methods (see Eq. 9). Note that if the aggregating partition is very coarse, with a few large aggregation classes, then we expect that \(H(f^{-1}(\mathcal{C}')) \ll H(\mathcal{C})\), and Eq. 4 is not much more conservative than Eq. 3. If, on the other hand, we only have a few nodes in each aggregation class and \(H(\mathcal{C})\) is relatively large, then we may heuristically expect \(H(\mathcal{C}) \approx H(\mathcal{C}')\), and it is more relevant to write
$$ \eta \gtrsim \operatorname{NMI}\bigl(\mathcal{C},f^{-1}\bigl( \mathcal{C}'\bigr)\bigr). $$
There is no reason that these inequalities should always be tight. Assume, for instance, that exactly one aggregation class overlaps two communities \(C_{1}\) and \(C_{2}\) (so that \(\eta <1\), if only by a little). Then in the aggregated network, the node resulting from this aggregation class will create edges whose weight typically depends on the density of the two communities \(C_{1}\) and \(C_{2}\). Thus, if \(C_{1}\) and \(C_{2}\) are sparse enough, the links so created in the aggregated network may be negligible, so that the optimal community structure will not be modified, and the ideal situation \(\mathcal{C} = f^{-1}(\mathcal{C}')\) that holds for \(\eta =1\) and edge-counting criteria still holds. If, on the other hand, the aggregation class cuts into dense communities, this will result in heavy weights in the aggregated network that might disrupt significantly the overall community structure. We therefore expect that a network that is heterogeneous in terms of density of links may be potentially more fragile to aggregation, in terms of community structure. In Section SA.2 of Additional file 1, we investigate the behaviour of the aggregability index η in synthetic graphs with planted communities of heterogeneous densities.
In the next sections, we show empirically how the aggregability index η correlates with the NMI distance between the optimal partitions found for the original and aggregated networks on two datasets. Albeit embedded in the same geographical area (Belgium), these two case studies will reveal different behaviours with respect to aggregation. In both cases, we know a network G, aggregate it according to administrative units or regular squares, compute the aggregability index and observe the distortion of the communities found to be optimal in the new (aggregated) networks.
We now describe the datasets, the definition of community and the way to compare partitions in an empirical approach. Both datasets are localised on parts of Belgium. See Section SA.1 of Additional file 1 for a visualisation and description of the territory.
Twitter networks
Our first dataset is composed of 291,552 tweets geolocalised on the Belgian territory between 18,327 Twitter users, obtained as described in Additional file 1, Section SA.2. From this dataset we build a network \(N_{0}\) as follows. The nodes are the users, and the weighted edges count the number of reply-to tweets between the two users (without taking the directionality into account, in order to keep the graph undirected). Each node is associated to a position, obtained as the barycentre of positions of the user recorded in each sent tweet. In this way we see \(N_{0}\) as a network linking positions together. By the means of how the dataset was collected, those positions are spread over the Belgian territory.
A list of aggregated networks was created from \(N_{0}\). The territory of Belgium is divided into 589 municipalities, and used to be divided into 2,675 smaller municipalities until a merge took place in 1979. We first build two aggregated versions, where nodes represent former (\(N_{fm}\)) and current (\(N_{m}\)) municipalities, respectively, by merging all nodes of \(N_{0}\) positioned in the same (former or current) municipality. Edges are merged accordingly, receiving a weight that aggregates the weights of all corresponding edges of \(N_{0}\).
We also applied a regular grid of 125 m square cells onto the Belgian territory, and merged into a single node all nodes of \(N_{0}\) positioned in the same cell, creating the aggregating network \(N_{125}\). Increasingly coarser square grids of cell size 250 m to 32 km, were used in the same way to create the aggregated networks \(N_{250}\) to \(N_{32k}\) respectively. The number of nodes and edges are described in Table S1 of Additional file 1 (Section SA.3).
Phone networks
Our second dataset counts the numbers of phone calls between towers in the territory of Brabant, a former administrative unit (province) of 111 municipalities including and surrounding Brussels, the capital of Belgium. The derived undirected network, called \(M_{0}\), is composed of 1,168 nodes (towers). A weighted edge between two towers counts the number of communications between the towers in either direction, for a total of 13M communications over the network. As each tower is associated with a precise position, one may again consider \(M_{0}\) as a network between places. We may aggregate those places into municipalities, thus forming the network \(M_{m}\), or into cells of regular size 125 m to 32 km, creating the networks \(M_{125}\) to \(M_{32k}\), as for the Twitter dataset. See Table S2 of Additional file 1, Section SA.3, for the number of nodes and edges of each network.
Linearised stability maximisation
Communities are intuitively meant here as sets of strongly interconnected nodes with comparatively few connections between the communities. Among the many formalisations of this concept, one of the most popular is modularity [23], quantifying the goodness of a given partition \(\mathcal{C}\) of nodes as
$$ Q_{\mathcal{C}}=\frac{1}{2m} \sum_{C \in \mathcal{C}} \sum _{i,j \in C} \biggl(A_{ij} - \frac{k_{i} k_{j}}{2m} \biggr), $$
where m is the sum of all weights of the networks' edges, and \(k_{i}\) represents the (weighted) degree of node i. \(A_{ij}\) is the weighted adjacency matrix of the network, and C (\(\in \mathcal{C}\)) represents a community of the partition.
We use a generalisation, called linearised partition stability [12], or equivalently Potts model [17], which introduces a resolution parameter ρ varying from 0 to ∞ as follows:
$$ r_{\mathrm{lin}}(\rho ,\mathcal{C}) = (1-\rho ) + \rho \frac{1}{2m} \sum _{C \in \mathcal{C}} \sum_{i,j \in C} \biggl( A_{ij} - \frac{1}{\rho } \frac{k_{i} k_{j}}{2m}\biggr). $$
At \(\rho =0\), single nodes are optimal as communities, while partitions with larger communities emerge for increasing values of ρ, until a single community is optimal at \(\rho \to \infty \). For \(\rho =1\), the linearised stability is the modularity, \(r_{\mathrm{lin}}(1,\mathcal{C})=Q_{\mathcal{C}}\). The resolution parameter ρ is hereafter called timescale, because linearised stability is formally derived in [12] as capturing the ability of incumbent communities to retain the flow of a diffusion of random walkers across the network for a timescale of the order of ρ. The original Potts model [17] uses the parameter \(\gamma = 1/\rho \). Like most community detection criteria, linearised stability is NP-hard to optimise except for extreme values of ρ, and we use the Louvain method [24, 25] as a heuristic.
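For reference, recent versions of networkx ship a Louvain implementation with a resolution parameter; under the correspondence \(\gamma = 1/\rho\) noted above, a timescale-ρ optimisation can be sketched as follows (our mapping onto the library's interface, not the authors' code):

```python
import networkx as nx

def louvain_at_timescale(G, rho, seed=0):
    # networkx's `resolution` is the Potts gamma; gamma = 1/rho, and
    # rho = 1 recovers plain modularity optimisation.
    return nx.community.louvain_communities(
        G, weight="weight", resolution=1.0 / rho, seed=seed)
```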
Whenever appropriate, we will use the linearised stability method to detect communities, because it is an edge-counting criterion, because it includes an extremely popular criterion (modularity, for \(\rho =1\)) as a special case, and because it allows adapting the timescale parameter ρ in order to create partitionings on different networks with the same or similar number of communities. There are certainly many methods of merits sharing the same properties. Our goal in the Results section is not to find the most sociologically relevant Twitter or phone call communities in Belgium, but illustrate how partitions found with an edge-counting criterion are modified in presence of aggregation. Therefore, the various arguments in favor or against the practical significance of the communities delivered by one or another method are not relevant here.
Normalised mutual information for comparing partitions
We compute the normalised mutual information [26], between the two partitions \(\mathcal{C}\) and \(\mathcal{D}\) of the same set of nodes, to evaluate how similar they are, as
$$ \operatorname{NMI}(\mathcal{C},\mathcal{D})= \frac{I(\mathcal{C};\mathcal{D})}{( H(\mathcal{C})+H(\mathcal{D}) )/2}, $$
where \(I(\mathcal{C};\mathcal{D})\) denotes the mutual information between the two partitions, i.e. between the set in \(\mathcal{C}\) and the set in \(\mathcal{D}\) containing a randomly picked node of the graph. Note that in this article, the sets of nodes belonging to a partition are either called 'communities' (if found by a community detection algorithm) or 'aggregation classes' (if defining a way to aggregate the network).
Similarly, \(H(\mathcal{C})\) or \(H(\mathcal{D})\) denotes the Shannon entropy of each partition, i.e., the Shannon entropy of the set containing a randomly picked node of the graph. The NMI takes values between 0, for independent (thus maximally dissimilar) partitions, and 1, for identical partitions.
In our case, we also want to be able to compare community partitions at different levels of aggregation, let us say, for example, the optimal partitions \(\mathcal{C}\) and \(\mathcal{D}\) of networks \(N_{0}\) and \(N_{125}\), respectively. In this case, we lift the communities of \(N_{125}\) into communities of \(N_{0}\), replacing each node of \(N_{125}\) by its aggregation class in \(N_{0}\). We call \(\mathcal{D}'\) this partition of the nodes of \(N_{0}\). We then compare the two partitions \(\mathcal{C}\) and \(\mathcal{D}'\) with the quantity \(\operatorname{NMI}(\mathcal{C},\mathcal{D}')\), which we will also sometimes denote \(\operatorname{NMI}(\mathcal{C},\mathcal{D})\) by abuse of notation.
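Combining the pieces, the cross-level comparison can be sketched as follows; scikit-learn's NMI with arithmetic-mean normalisation matches Eq. (9), and `lift()` refers to the sketch given earlier:

```python
from sklearn.metrics import normalized_mutual_info_score

def compare_partitions(part_fine, part_aggregated, classes):
    """NMI between a partition of G (dict node -> label) and a partition of
    the aggregated graph G' (dict node-of-G' -> label), lifted back to G."""
    lifted = lift(part_aggregated, classes)
    nodes = sorted(classes)
    return normalized_mutual_info_score(
        [part_fine[u] for u in nodes],
        [lifted[u] for u in nodes],
        average_method="arithmetic")
```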
In the following, we illustrate on the two real-life datasets the concepts explained above on toy networks. Specifically, we show how the aggregation process over the Twitter and phone call networks strongly affects the community partition in the former case, and mildly so in the latter. We also show how the magnitude of this distortion, as the aggregation grid becomes coarser and coarser, correlates with the proposed aggregability index.
Figure 2-a shows the communities extracted from the network \(N_{m}\) of municipalities, using a timescale \(\rho =1\). Each figure from 2-b to 2-f shows the spatial footprint of one community of individual Twitter users. We have used a timescale \(\rho =10\), in order to illustrate the case with a number of communities (namely 5) comparable to the 7 communities of the \(N_{m}\) network. The colour intensity in each municipality represents the proportion of users positioned in this municipality who belong to the community being represented.
Communities detected in Twitter networks. In (a), the communities extracted from the network \(N_{m}\) of municipalities, using a timescale \(\rho =1\). From (b) to (f), each panel shows the spatial regions belonging to a different community, from a community detection performed on the individual (non-aggregated) Twitter users with a scale parameter chosen to yield a partition into 5 communities. We have used a timescale \(\rho =10\), in order to illustrate the case with a number of communities (namely 5) comparable to the 7 communities of the \(N_{m}\) network. The colour intensity in each municipality represents the proportion of users positioned in this municipality who belong to the community being represented
Some communities of \(N_{0}\) (for example those represented on Figs. 2-b and 2-c) show a remarkable geographical dispersion, and in particular do not seem to match any community of \(N_{m}\) (only community 4 in Fig. 2-e seems to match a community in the network of municipalities in Fig. 2-a, namely the dark blue one).
In order to analyse quantitatively the effect of aggregating data, we systematically test different levels of spatial aggregation, all at the same timescale parameter \(\rho =1\). In other words, for the next analysis, we look at the maximum modularity communities, as approximated by the Louvain method.
Figure 3 shows communities at different levels of aggregation: municipalities, former (smaller) municipalities and square cells of size 1 km, 2 km, 4 km, 8 km. As the aggregation classes become larger and larger they step over several communities forcing their re-arrangement and giving rise to different partitionings.
Communities detected in the Twitter network aggregated into grids of square cells of different sizes (a–b, d, f), aggregated at the level of former municipalities, \(N_{fm}\), (c), and at the level of current municipalities in Belgium, \(N_{m}\), (e). The timescale parameter ρ is set to 1. As the aggregation classes become larger and larger they step over several communities forcing a rearrangement of the communities, resulting in another partition
We can see that as the nodes are increasingly aggregated, some communities gathering distant places, such as the light green community in Figs. 3-a to 3-c, are re-arranged into geographically localised communities (light green in Fig. 3-f).
White areas represent the physical space where no event has been recorded. At the finest level (\(N_{0}\)), nodes are represented as a single point (the average position of a user), thus almost all space is white. As the aggregation scale increases, the white space is progressively removed, being merged with neighbouring space with non-zero activity. We observe that this effect is more visible in areas with low levels of activity, such as the southern part of the country.
The normalised mutual information (NMI) between the disaggregated network \(N_{0}\) and several aggregated networks is depicted on Fig. 5. Starting with the first level of aggregation (125 m), we observe that the NMI already drops rather steeply, even though there is some fit (NMI ≈0.7) between the communities displayed by aggregated units of 125 m and the non-aggregated ones. Values of NMI continue to decrease with the size of the aggregation.
Mobile phone networks
Figure 4 shows the communities found at the disaggregated level of towers \(M_{0}\) (note that although towers are characterised by a single point, for the visual depiction we represent each tower by its associated Voronoi polygon), and the aggregated level of municipalities \(M_{m}\). The normalised mutual information, NMI, between community partitions found on networks \(M_{0}\) and \(M_{m}\) is 0.64. Thus, the similarity between the communities found on the two levels of aggregation is higher than the similarity observed in the Twitter network between the disaggregated network of users (\(N_{0}\)) and the aggregated versions (see Fig. 5). In Fig. 5 we also notice that the NMI between the communities found on \(M_{0}\) and versions aggregated with larger and larger cells is consistently higher than in the case of the Twitter dataset.
Communities detected in the mobile phone networks, at the disaggregated level of towers \(M_{0}\) (note that although towers are characterised by a single point, for the visual depiction we represent each tower by its associated Voronoi polygon), and the aggregated level of municipalities \(M_{m}\). The normalised mutual information, NMI, between partitions of network \(M_{0}\) and \(M_{m}\) is 0.64. Thus, the similarity between the communities found on the two levels of aggregation is higher than the similarity observed in the Twitter network between the disaggregated network of users (\(N_{0}\)), and the aggregated versions (see Fig. 5). The timescale parameter ρ is set to 0.75, as suggested by another study on the same dataset [27]
Circles show the evolution of the normalised mutual information, NMI, between communities found in the original network prior to aggregation and communities found in aggregated networks at several square sizes, lifted back to the original network (as explained in Methods). Squares show the evolution of the aggregability index, η, between communities and aggregating partitions at the finest level, compared for the same sizes as before. For the Twitter dataset (in blue), the initial level corresponds to user centroids and the timescale is kept at \(\rho =1\). For the mobile phone dataset (in pink), the initial level corresponds to towers. The timescale ρ was kept constant at a value of 0.75
Aggregability index and NMI
In Fig. 5, we compare, for both datasets, the results of community detection on the original network (\(N_{0}\) or \(M_{0}\)) with communities found on the networks of square cells of sides 125 m, 250 m, 500 m, 1 km, 2 km, 4 km, 8 km, 16 km and 32 km. We also plot the aggregability indices, comparing the community structure found on the original networks (\(N_{0}\) or \(M_{0}\)) with the aggregating partitions into square cells of sides 125 m, 250 m, 500 m, 1 km, 2 km, 4 km, 8 km, 16 km and 32 km.
The aggregability index, η, requires the knowledge of the partition into communities and of the partition into aggregation classes at the finest level, but not of the communities that are deemed to be relevant to the aggregated graph. It measures to what extent every aggregation class is a subset of a single community, which is a sufficient condition for the community structure to be left invariant by the (edge-counting) community detection method, as argued in this paper.
The shape of the η and NMI curves in Fig. 5 is in line with the following facts:
If \(\eta =1\) then \(\operatorname{NMI}=1\) (because we use an edge-counting criterion for detecting communities),
For a small aggregation scale we expect from Eq. 6 that \(\eta \gtrsim \operatorname{NMI}\),
For further aggregation scales, we know from Eq. 5 that \(\eta \geq \operatorname{NMI}/2\).
Low values of η can be seen as a warning signal that communities on the aggregated network (once lifted back to the original network) will necessarily be significantly different from the original communities. In Fig. 5 we observe that the value of η for mobile phone calls stays remarkably steady up to an aggregation scale of 1 or 2 km, while the η value for the Twitter dataset dips comparatively faster, and so does the NMI between the community partitions at different scales, as expected.
The fact that the NMI curve of the Twitter dataset drops significantly faster than the η curve shows that Eqs. 5 and 6 need not be tight. In line with the arguments we discuss above (below Eq. 6), one reason for this discrepancy may be a strong heterogeneity of the data in terms of density, as Figs. 2 and 3 suggest.
In this paper, we have studied the impact of data aggregation on community detection in networks. We have shown on theoretical and empirical examples that data aggregation can preserve the community structure, destroy it, or highlight another relevant community structure. We have identified a class of methods able to preserve the community structure whenever it is aligned with the aggregation classes. We have defined an aggregability index that measures how aligned the community structure is with the aggregation classes.
The article has been structured as a proof of concept. The examples have focused on the most standard notion of communities, as highly interconnected sets of nodes. Communities were computed with one of the most popular quality functions, namely modularity and its multiscale extension. We focused on aggregating geographical coordinates into spatial units of increasing size, in line with the well-known Modifiable Areal Unit Problem in geography.
Nevertheless, from the theoretical considerations, we see that the conclusions may be potentially relevant for different notions of partitioning (e.g. stochastic block modelling) with various aggregation criteria, according to any node metadata such as age, school, etc.
Therefore, broadly speaking, we see our investigation as a warning to data scientists grappling with networks on several levels of aggregation. Our message is that the results of their analyses may depend starkly on the level and nature of the aggregation.
We chose two datasets behaving differently with respect to aggregation, as an illustration of our proposed parameter, the aggregability index. The fact that these two datasets are geographic in nature is incidental in our study, whose scope includes in principle any kind of network and their aggregations. Nonetheless, this might indicate a potential privileged applicability to space-embedded networks. Examples of networks embedded in space abound, and the interaction between their structure and the way they unfold in space has triggered some interesting developments; see for example [28, 29]. As to explaining why the two datasets behave differently with respect to the same aggregation strategy, one can only formulate hypotheses, whose investigation is beyond the scope of this paper, and may involve the analysis of other datasets with other community detection methods on the same geographical area [30]. While the mobile phone calls dataset is shaped by the condition of previous social interaction, this constraint is absent, or present to a lesser extent, in the Twitter dataset. Further differences between the datasets include the heterogeneous density of events in the Twitter network, and the different geographic area (Belgium or surroundings of Brussels). Even more importantly, the mobile phone network's nodes at the finest scale are towers, which already aggregate a large number of users.
The present study is certainly not without caveats. For instance, in many cases it may be that the full network is inaccessible to the measurement (such as in the human brain connectomes, only available in aggregated form), or too large for most community detection algorithms. In this case, a computation of the aggregability index η may not be available. Also, in many cases the aggregated network is available with weights on the edges that do not represent the sum of all interactions between all nodes of the aggregation classes, but only a thresholded version of it, for instance. Ways to cope with this may be a focus of further research.
m :
Sum of the weights of all edges of the network
MAUP:
Modifiable Areal Unit Problem
NMI:
Normalised mutual information
\(N_{0}\) :
Disaggregated network of Twitter users
\(N_{fm}\) :
Network of former municipalities
\(N_{m}\) :
Network of municipalities
\(N_{p}\) :
Networks of Twitter users aggregated into cells of size p meters
\(M_{0}\) :
Disaggregated network of towers
\(M_{p}\) :
Networks of towers aggregated into cells of size p meters
This work was supported by Innoviris (project Anticipate—Prospective Research 88 BRU-NET), Federation Wallonia-Brussels (Concerted Research Action ARC 14/19-060), and Flagship European Research Area Network (FLAG-ERA) Joint Transnational Call FuturICT 2.0.
The anonymised phone call datasets used in this paper cannot be made publicly available due to a privacy contract signed between the authors and the phone company in order to avoid privacy issues. The Twitter dataset analysed during the current study is available from the corresponding author on reasonable request.
Center for Operations Research and Econometrics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
Yérali Gandica, Adeline Decuyper, Christophe Cloquet, Isabelle Thomas & Jean-Charles Delvenne
Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
Yérali Gandica, Christophe Cloquet & Jean-Charles Delvenne
Poppy, Jette, Belgium
Christophe Cloquet
All authors contributed to designing the research, performing the research, and writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Yérali Gandica.
The authors declare no competing interests.
Four sections are included: SA.1. Description of the territory, SA.2. Study of the aggregability index on synthetic data, SA.3. Methodology, SA.4. Tables. (PDF 2.1 MB)
Gandica, Y., Decuyper, A., Cloquet, C. et al. Measuring the effect of node aggregation on community detection. EPJ Data Sci. 9, 6 (2020). https://doi.org/10.1140/epjds/s13688-020-00223-0
Community detection
Twitter data
Phone call data
Pareto distribution
The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power law probability distribution that is used in description of social, scientific, geophysical, actuarial, and many other types of observable phenomena.
If X is a random variable with a Pareto (Type I) distribution,[1] then the probability that X is greater than some number x, i.e. the survival function (also called tail function), is given by
$$\overline{F}(x)=\Pr(X>x)=\begin{cases}\left(\dfrac{x_\mathrm{m}}{x}\right)^{\alpha} & x\geq x_\mathrm{m},\\ 1 & x<x_\mathrm{m},\end{cases}$$
where $x_\mathrm{m}$ is the (necessarily positive) minimum possible value of $X$, and $\alpha$ is a positive parameter. The Pareto Type I distribution is characterized by a scale parameter $x_\mathrm{m}$ and a shape parameter $\alpha$, which is known as the tail index. When this distribution is used to model the distribution of wealth, the parameter $\alpha$ is called the Pareto index.
Cumulative distribution function
From the definition, the cumulative distribution function of a Pareto random variable with parameters α and xm is
$$F_X(x)=\begin{cases}1-\left(\dfrac{x_\mathrm{m}}{x}\right)^{\alpha} & x\geq x_\mathrm{m},\\ 0 & x<x_\mathrm{m}.\end{cases}$$
When plotted on linear axes, the distribution assumes the familiar J-shaped curve which approaches each of the orthogonal axes asymptotically. All segments of the curve are self-similar (subject to appropriate scaling factors). When plotted in a log-log plot, the distribution is represented by a straight line.
Probability density function
It follows (by differentiation) that the probability density function is
$$f_X(x)=\begin{cases}\dfrac{\alpha x_\mathrm{m}^{\alpha}}{x^{\alpha+1}} & x\geq x_\mathrm{m},\\ 0 & x<x_\mathrm{m}.\end{cases}$$
Moments and characteristic function
The expected value of a random variable following a Pareto distribution is
$$E(X)=\begin{cases}\infty & \alpha\leq 1,\\ \dfrac{\alpha x_\mathrm{m}}{\alpha-1} & \alpha>1.\end{cases}$$
The variance of a random variable following a Pareto distribution is
$$\operatorname{Var}(X)=\begin{cases}\infty & \alpha\in(1,2],\\ \left(\dfrac{x_\mathrm{m}}{\alpha-1}\right)^{2}\dfrac{\alpha}{\alpha-2} & \alpha>2.\end{cases}$$
(If α ≤ 1, the variance does not exist.)
The raw moments are
$$\mu_n'=\begin{cases}\infty & \alpha\leq n,\\ \dfrac{\alpha x_\mathrm{m}^{n}}{\alpha-n} & \alpha>n.\end{cases}$$
The moment generating function is only defined for non-positive values t ≤ 0 as
$$M(t;\alpha,x_\mathrm{m})=E\left[e^{tX}\right]=\alpha(-x_\mathrm{m}t)^{\alpha}\,\Gamma(-\alpha,-x_\mathrm{m}t),$$
$$M(0;\alpha,x_\mathrm{m})=1.$$
The characteristic function is given by
$$\varphi(t;\alpha,x_\mathrm{m})=\alpha(-ix_\mathrm{m}t)^{\alpha}\,\Gamma(-\alpha,-ix_\mathrm{m}t),$$
where Γ(a, x) is the incomplete gamma function.
Conditional distributions
The conditional probability distribution of a Pareto-distributed random variable, given the event that it is greater than or equal to a particular number $x_1$ exceeding $x_\mathrm{m}$, is a Pareto distribution with the same Pareto index $\alpha$ but with minimum $x_1$ instead of $x_\mathrm{m}$.
A characterization theorem
Suppose $X_1, X_2, X_3, \dotsc$ are independent identically distributed random variables whose probability distribution is supported on the interval $[x_\mathrm{m},\infty)$ for some $x_\mathrm{m}>0$. Suppose that for all $n$, the two random variables $\min\{X_1,\dotsc,X_n\}$ and $(X_1+\dotsb+X_n)/\min\{X_1,\dotsc,X_n\}$ are independent. Then the common distribution is a Pareto distribution.
The geometric mean (G) is[2]
$$G=x_\mathrm{m}\exp\left(\frac{1}{\alpha}\right).$$
The harmonic mean (H) is[2]
$$H=x_\mathrm{m}\left(1+\frac{1}{\alpha}\right).$$
Generalized Pareto distributions
There is a hierarchy [1][3] of Pareto distributions known as Pareto Type I, II, III, IV, and Feller–Pareto distributions.[1][3][4] Pareto Type IV contains Pareto Type I–III as special cases. The Feller–Pareto[3][5] distribution generalizes Pareto Type IV.
Pareto types I–IV
The Pareto distribution hierarchy is summarized in the next table comparing the survival functions (complementary CDF).
When μ = 0, the Pareto distribution Type II is also known as the Lomax distribution.[6]
In this section, the symbol xm, used before to indicate the minimum value of x, is replaced by σ.
Survival functions $\overline{F}(x)=1-F(x)$ of the Pareto distributions:
Type I: $\left[\dfrac{x}{\sigma}\right]^{-\alpha}$, support $x>\sigma$, parameters $\sigma>0$, $\alpha$
Type II: $\left[1+\dfrac{x-\mu}{\sigma}\right]^{-\alpha}$, support $x>\mu$, parameters $\mu\in\mathbb{R}$, $\sigma>0$, $\alpha$
Lomax: $\left[1+\dfrac{x}{\sigma}\right]^{-\alpha}$, support $x>0$, parameters $\sigma>0$, $\alpha$
Type III: $\left[1+\left(\dfrac{x-\mu}{\sigma}\right)^{1/\gamma}\right]^{-1}$, support $x>\mu$, parameters $\mu\in\mathbb{R}$, $\sigma,\gamma>0$
Type IV: $\left[1+\left(\dfrac{x-\mu}{\sigma}\right)^{1/\gamma}\right]^{-\alpha}$, support $x>\mu$, parameters $\mu\in\mathbb{R}$, $\sigma,\gamma>0$, $\alpha$
The shape parameter α is the tail index, μ is location, σ is scale, γ is an inequality parameter. Some special cases of Pareto Type (IV) are
$$P(\mathrm{IV})(\sigma,\sigma,1,\alpha)=P(\mathrm{I})(\sigma,\alpha),$$
$$P(\mathrm{IV})(\mu,\sigma,1,\alpha)=P(\mathrm{II})(\mu,\sigma,\alpha),$$
$$P(\mathrm{IV})(\mu,\sigma,\gamma,1)=P(\mathrm{III})(\mu,\sigma,\gamma).$$
The finiteness of the mean, and the existence and the finiteness of the variance depend on the tail index α (inequality index γ). In particular, fractional δ-moments are finite for some δ > 0, as shown in the table below, where δ is not necessarily an integer.
Moments of Pareto I–IV distributions (case μ = 0)
Type I: $E[X]=\dfrac{\sigma\alpha}{\alpha-1}$ for $\alpha>1$; $E[X^\delta]=\dfrac{\sigma^\delta\alpha}{\alpha-\delta}$ for $\delta<\alpha$
Type II: $E[X]=\dfrac{\sigma}{\alpha-1}$ for $\alpha>1$; $E[X^\delta]=\dfrac{\sigma^\delta\,\Gamma(\alpha-\delta)\,\Gamma(1+\delta)}{\Gamma(\alpha)}$ for $-1<\delta<\alpha$
Type III: $E[X]=\sigma\,\Gamma(1-\gamma)\,\Gamma(1+\gamma)$ for $-1<\gamma<1$; $E[X^\delta]=\sigma^\delta\,\Gamma(1-\gamma\delta)\,\Gamma(1+\gamma\delta)$ for $-\gamma^{-1}<\delta<\gamma^{-1}$
Type IV: $E[X]=\dfrac{\sigma\,\Gamma(\alpha-\gamma)\,\Gamma(1+\gamma)}{\Gamma(\alpha)}$ for $-1<\gamma<\alpha$; $E[X^\delta]=\dfrac{\sigma^\delta\,\Gamma(\alpha-\gamma\delta)\,\Gamma(1+\gamma\delta)}{\Gamma(\alpha)}$ for $-\gamma^{-1}<\delta<\alpha/\gamma$
Feller–Pareto distribution
Feller[3][5] defines a Pareto variable by the transformation $U=Y^{-1}-1$ of a beta random variable $Y$, whose probability density function is
$$f(y)=\frac{y^{\gamma_1-1}(1-y)^{\gamma_2-1}}{B(\gamma_1,\gamma_2)},\qquad 0<y<1;\ \gamma_1,\gamma_2>0,$$
where B( ) is the beta function. If
$$W=\mu+\sigma(Y^{-1}-1)^{\gamma},\qquad \sigma>0,\ \gamma>0,$$
then $W$ has a Feller–Pareto distribution $FP(\mu,\sigma,\gamma,\gamma_1,\gamma_2)$.[1]
If $U_1\sim\Gamma(\delta_1,1)$ and $U_2\sim\Gamma(\delta_2,1)$ are independent Gamma variables, another construction of a Feller–Pareto (FP) variable is[7]
$$W=\mu+\sigma\left(\frac{U_1}{U_2}\right)^{\gamma}$$
and we write $W\sim FP(\mu,\sigma,\gamma,\delta_1,\delta_2)$. Special cases of the Feller–Pareto distribution are
$$FP(\sigma,\sigma,1,1,\alpha)=P(\mathrm{I})(\sigma,\alpha),$$
$$FP(\mu,\sigma,1,1,\alpha)=P(\mathrm{II})(\mu,\sigma,\alpha),$$
$$FP(\mu,\sigma,\gamma,1,1)=P(\mathrm{III})(\mu,\sigma,\gamma),$$
$$FP(\mu,\sigma,\gamma,1,\alpha)=P(\mathrm{IV})(\mu,\sigma,\gamma,\alpha).$$
Pareto originally used this distribution to describe the allocation of wealth among individuals, since it seemed to show rather well the way that a larger portion of the wealth of any society is owned by a smaller percentage of the people in that society. He also used it to describe the distribution of income.[8] This idea is sometimes expressed more simply as the Pareto principle or the "80-20 rule", which says that 20% of the population controls 80% of the wealth.[9] However, the 80-20 rule corresponds to a particular value of α, and in fact, Pareto's data on British income taxes in his Cours d'économie politique indicate that about 30% of the population had about 70% of the income. A plot of the probability density function (PDF) shows that the "probability" or fraction of the population that owns a small amount of wealth per person is rather high, and then decreases steadily as wealth increases. (Note that the Pareto distribution is not realistic for the lower end of the wealth range; in fact, net worth may even be negative.) This distribution is not limited to describing wealth or income; it applies to many situations in which an equilibrium is found in the distribution of the "small" to the "large". The following examples are sometimes seen as approximately Pareto-distributed:
The sizes of human settlements (few cities, many hamlets/villages)[10]
File size distribution of Internet traffic which uses the TCP protocol (many smaller files, few larger ones)[10]
Hard disk drive error rates[11]
Clusters of Bose–Einstein condensate near absolute zero[12]
The values of oil reserves in oil fields (a few large fields, many small fields)[10]
The length distribution of jobs assigned to supercomputers (a few large ones, many small ones)
The standardized price returns on individual stocks [10]
[Figure: fitted cumulative Pareto (Lomax) distribution to maximum one-day rainfalls using CumFreq; see also distribution fitting.]
Sizes of sand particles [10]
Sizes of meteorites
Numbers of species per genus (there is subjectivity involved: the tendency to divide a genus into two or more increases with the number of species in it)
Areas burnt in forest fires
Severity of large casualty losses for certain lines of business such as general liability, commercial auto, and workers compensation.[13][14]
In hydrology the Pareto distribution is applied to extreme events such as annually maximum one-day rainfalls and river discharges. The blue picture illustrates an example of fitting the Pareto distribution to ranked annually maximum one-day rainfalls showing also the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.
Relation to other distributions
Relation to the exponential distribution
The Pareto distribution is related to the exponential distribution as follows. If X is Pareto-distributed with minimum xm and index α, then
$$Y=\log\left(\frac{X}{x_\mathrm{m}}\right)$$
is exponentially distributed with rate parameter α. Equivalently, if Y is exponentially distributed with rate α, then
$$x_\mathrm{m}e^{Y}$$
is Pareto-distributed with minimum xm and index α.
This can be shown using the standard change of variable techniques:
$$\Pr(Y<y)=\Pr\left(\log\left(\frac{X}{x_\mathrm{m}}\right)<y\right)=\Pr(X<x_\mathrm{m}e^{y})=1-\left(\frac{x_\mathrm{m}}{x_\mathrm{m}e^{y}}\right)^{\alpha}=1-e^{-\alpha y}.$$
The last expression is the cumulative distribution function of an exponential distribution with rate α.
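This equivalence is easy to check numerically. Here is a minimal sketch (assuming NumPy; the parameter values and sample size are arbitrary illustrative choices) that exponentiates exponential draws and compares the empirical tail with the Pareto survival function:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, x_m, n = 2.5, 1.5, 200_000      # arbitrary illustrative values

y = rng.exponential(scale=1.0 / alpha, size=n)   # Y ~ Exponential(rate = alpha)
x = x_m * np.exp(y)                               # X = x_m * e^Y should be Pareto(x_m, alpha)

for t in [2.0, 3.0, 5.0]:
    print(f"P(X > {t}): empirical {np.mean(x > t):.4f}  vs  (x_m/t)^alpha {(x_m / t) ** alpha:.4f}")
```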
Relation to the log-normal distribution
Note that the Pareto distribution and log-normal distribution are alternative distributions for describing the same types of quantities. One of the connections between the two is that they are both the distributions of the exponential of random variables distributed according to other common distributions, respectively the exponential distribution and the normal distribution.
Relation to the generalized Pareto distribution
The Pareto distribution is a special case of the generalized Pareto distribution, which is a family of distributions of similar form, but containing an extra parameter in such a way that the support of the distribution is either bounded below (at a variable point), or bounded both above and below (where both are variable), with the Lomax distribution as a special case. This family also contains both the unshifted and shifted exponential distributions.
The Pareto distribution with scale $x_\mathrm{m}$ and shape $\alpha$ is equivalent to the generalized Pareto distribution with location $\mu=x_\mathrm{m}$, scale $\sigma=x_\mathrm{m}/\alpha$ and shape $\xi=1/\alpha$. Vice versa, one can recover the Pareto distribution from the GPD by $x_\mathrm{m}=\sigma/\xi$ and $\alpha=1/\xi$.
Relation to Zipf's law
Pareto distributions are continuous probability distributions. Zipf's law, also sometimes called the zeta distribution, may be thought of as a discrete counterpart of the Pareto distribution.
Relation to the "Pareto principle"
The "80-20 law", according to which 20% of all people receive 80% of all income, and 20% of the most affluent 20% receive 80% of that 80%, and so on, holds precisely when the Pareto index is α = log4(5) = log(5)/log(4), approximately 1.161. This result can be derived from the Lorenz curve formula given below. Moreover, the following have been shown[15] to be mathematically equivalent:
Income is distributed according to a Pareto distribution with index α > 1.
There is some number $0\leq p\leq 1/2$ such that $100p\,\%$ of all people receive $100(1-p)\,\%$ of all income, and similarly for every real (not necessarily integer) $n>0$, $100p^n\,\%$ of all people receive $100(1-p)^n\,\%$ of all income.
This does not apply only to income, but also to wealth, or to anything else that can be modeled by this distribution.
This excludes Pareto distributions in which 0 < α ≤ 1, which, as noted above, have infinite expected value, and so cannot reasonably model income distribution.
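From the Lorenz curve formula given below, the share of total income held by the richest fraction $q$ of the population is $q^{1-1/\alpha}$. A minimal sketch (assuming NumPy) checks that $\alpha=\log(5)/\log(4)$ reproduces the 80-20 rule and its self-similar iterates:

```python
import numpy as np

alpha = np.log(5) / np.log(4)      # ≈ 1.161, the "80-20" Pareto index

def top_share(q, alpha):
    # Share of total income held by the richest fraction q of the population,
    # obtained from the Lorenz curve: 1 - L(1-q) = q^(1 - 1/alpha).
    return q ** (1.0 - 1.0 / alpha)

print(top_share(0.2, alpha))        # 0.8  -> the top 20% receive 80% of all income
print(top_share(0.2 ** 2, alpha))   # 0.64 -> the top 4% (20% of 20%) receive 64% (80% of 80%)
```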
Lorenz curve and Gini coefficient
Lorenz curves for a number of Pareto distributions. The case α = ∞ corresponds to perfectly equal distribution (G = 0) and the α = 1 line corresponds to complete inequality (G = 1)
The Lorenz curve is often used to characterize income and wealth distributions. For any distribution, the Lorenz curve L(F) is written in terms of the PDF f or the CDF F as
$$L(F)=\frac{\int_{x_\mathrm{m}}^{x(F)}x\,f(x)\,dx}{\int_{x_\mathrm{m}}^{\infty}x\,f(x)\,dx}=\frac{\int_0^F x(F')\,dF'}{\int_0^1 x(F')\,dF'}$$
where x(F) is the inverse of the CDF. For the Pareto distribution,
$$x(F)=\frac{x_\mathrm{m}}{(1-F)^{1/\alpha}}$$
and the Lorenz curve is calculated to be
$$L(F)=1-(1-F)^{1-\frac{1}{\alpha}}.$$
Although the numerator and denominator in the expression for $L(F)$ diverge for $0\leq\alpha<1$, their ratio does not, yielding $L=0$ in these cases, which yields a Gini coefficient of unity. Examples of the Lorenz curve for a number of Pareto distributions are shown in the graph on the right.
The Gini coefficient is a measure of the deviation of the Lorenz curve from the equidistribution line, which is the line connecting $[0,0]$ and $[1,1]$, shown in black ($\alpha=\infty$) in the Lorenz plot on the right. Specifically, the Gini coefficient is twice the area between the Lorenz curve and the equidistribution line. The Gini coefficient for the Pareto distribution is then calculated (for $\alpha\geq 1$) to be
$$G=1-2\left(\int_0^1 L(F)\,dF\right)=\frac{1}{2\alpha-1}$$
(see Aaberge 2005).
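The closed form $G=1/(2\alpha-1)$ can be sanity-checked by integrating the Lorenz curve numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

for alpha in [1.5, 2.0, 5.0]:
    F = np.linspace(0.0, 1.0, 1_000_001)
    L = 1.0 - (1.0 - F) ** (1.0 - 1.0 / alpha)    # Lorenz curve of the Pareto distribution
    gini_numeric = 1.0 - 2.0 * np.trapz(L, F)      # G = 1 - 2 * (area under the Lorenz curve)
    print(f"alpha = {alpha}: numeric {gini_numeric:.5f}, closed form {1.0 / (2.0 * alpha - 1.0):.5f}")
```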
The likelihood function for the Pareto distribution parameters $\alpha$ and $x_\mathrm{m}$, given a sample $x=(x_1,x_2,\dotsc,x_n)$, is
$$L(\alpha,x_\mathrm{m})=\prod_{i=1}^{n}\alpha\frac{x_\mathrm{m}^{\alpha}}{x_i^{\alpha+1}}=\alpha^n x_\mathrm{m}^{n\alpha}\prod_{i=1}^{n}\frac{1}{x_i^{\alpha+1}}.$$
Therefore, the logarithmic likelihood function is
$$\ell(\alpha,x_\mathrm{m})=n\ln\alpha+n\alpha\ln x_\mathrm{m}-(\alpha+1)\sum_{i=1}^{n}\ln x_i.$$
It can be seen that $\ell(\alpha,x_\mathrm{m})$ is monotonically increasing in $x_\mathrm{m}$; that is, the greater the value of $x_\mathrm{m}$, the greater the value of the likelihood function. Hence, since $x_i\geq x_\mathrm{m}$ for all $i$, we conclude that
$$\widehat{x}_\mathrm{m}=\min_i x_i.$$
To find the estimator for α, we compute the corresponding partial derivative and determine where it is zero:
$$\frac{\partial\ell}{\partial\alpha}=\frac{n}{\alpha}+n\ln x_\mathrm{m}-\sum_{i=1}^{n}\ln x_i=0.$$
Thus the maximum likelihood estimator for α is:
$$\widehat{\alpha}=\frac{n}{\sum_i\left(\ln x_i-\ln\widehat{x}_\mathrm{m}\right)}.$$
The expected statistical error is:[16]
$$\sigma=\frac{\widehat{\alpha}}{\sqrt{n}}.$$
Malik (1970)[17] gives the exact joint distribution of $(\hat{x}_\mathrm{m},\hat{\alpha})$. In particular, $\hat{x}_\mathrm{m}$ and $\hat{\alpha}$ are independent, $\hat{x}_\mathrm{m}$ is Pareto with scale parameter $x_\mathrm{m}$ and shape parameter $n\alpha$, and $\hat{\alpha}$ has an inverse-gamma distribution with shape and scale parameters $n-1$ and $n\alpha$, respectively.
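A minimal sketch of these estimators on simulated data (assuming NumPy; the true parameter values are arbitrary). NumPy's `pareto` sampler draws the Lomax form, so it is shifted and scaled here to obtain Pareto Type I:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_m_true, n = 3.0, 2.0, 50_000               # arbitrary illustrative values

x = x_m_true * (1.0 + rng.pareto(alpha_true, size=n))    # Pareto Type I sample

x_m_hat = x.min()                                        # MLE of the scale parameter
alpha_hat = n / np.sum(np.log(x) - np.log(x_m_hat))      # MLE of the shape parameter
stderr = alpha_hat / np.sqrt(n)                          # asymptotic standard error

print(f"x_m_hat = {x_m_hat:.4f} (true {x_m_true}), "
      f"alpha_hat = {alpha_hat:.4f} +/- {stderr:.4f} (true {alpha_true})")
```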
The characteristic curved "long tail" shape seen when the distribution is plotted on a linear scale masks the underlying simplicity of the function: on a log-log plot it becomes a straight line with negative gradient. Indeed, it follows from the formula for the probability density function that, for $x\geq x_\mathrm{m}$,
$$\log f_X(x)=\log\left(\alpha\frac{x_\mathrm{m}^{\alpha}}{x^{\alpha+1}}\right)=\log(\alpha x_\mathrm{m}^{\alpha})-(\alpha+1)\log x.$$
Since $\alpha$ is positive, the gradient $-(\alpha+1)$ is negative.
Random sample generation
Random samples can be generated using inverse transform sampling. Given a random variate U drawn from the uniform distribution on the unit interval (0, 1], the variate T given by
$$T=\frac{x_\mathrm{m}}{U^{1/\alpha}}$$
is Pareto-distributed.[18] If U is uniformly distributed on [0, 1), it can be exchanged with (1 − U).
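In code the recipe is a one-liner; a minimal sketch assuming NumPy:

```python
import numpy as np

def pareto_sample(x_m, alpha, size, seed=None):
    """Draw Pareto(x_m, alpha) variates by inverse transform sampling."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)          # U on [0, 1); use 1 - U, which lies in (0, 1]
    return x_m / (1.0 - u) ** (1.0 / alpha)

print(pareto_sample(x_m=1.0, alpha=2.0, size=5, seed=42))
```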
Bounded Pareto distribution
The bounded (or truncated) Pareto distribution has three parameters: $\alpha$, $L$ and $H$. As in the standard Pareto distribution, $\alpha$ determines the shape; $L$ denotes the minimal value, and $H$ denotes the maximal value.
The probability density function is
$$\frac{\alpha L^{\alpha}x^{-\alpha-1}}{1-\left(\frac{L}{H}\right)^{\alpha}}$$
where $L\leq x\leq H$, and $\alpha>0$.
Generating bounded Pareto random variables
If $U$ is uniformly distributed on $(0,1)$, then applying the inverse-transform method[19] to
$$U=\frac{1-L^{\alpha}x^{-\alpha}}{1-\left(\frac{L}{H}\right)^{\alpha}}$$
gives
$$x=\left(-\frac{UH^{\alpha}-UL^{\alpha}-H^{\alpha}}{H^{\alpha}L^{\alpha}}\right)^{-1/\alpha},$$
which is bounded Pareto-distributed.
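A direct transcription into Python (assuming NumPy); the inversion is written in the algebraically equivalent form $x=\left((1-U)L^{-\alpha}+UH^{-\alpha}\right)^{-1/\alpha}$:

```python
import numpy as np

def bounded_pareto_sample(L, H, alpha, size, seed=None):
    """Draw bounded Pareto(alpha, L, H) variates by inverse transform sampling."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return ((1.0 - u) * L ** -alpha + u * H ** -alpha) ** (-1.0 / alpha)

x = bounded_pareto_sample(L=1.0, H=10.0, alpha=2.0, size=100_000, seed=0)
print(x.min(), x.max())   # every sample lies within [L, H]
```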
Symmetric Pareto distribution
The symmetric Pareto distribution can be defined by the probability density function:[20]
$$f(x;\alpha,x_\mathrm{m})=\begin{cases}\tfrac{1}{2}\,\alpha x_\mathrm{m}^{\alpha}|x|^{-\alpha-1} & |x|>x_\mathrm{m},\\ 0 & \text{otherwise}.\end{cases}$$
It has a similar shape to a Pareto distribution for x > xm and is mirror symmetric about the vertical axis.
Bradford's law
Pareto analysis
Pareto efficiency
Pareto interpolation
Power law probability distributions
Traffic generation model
1. Arnold, B. C. (1983). Pareto Distributions. Fairland, MD: International Co-operative Publishing House.
2. Johnson, N. L., Kotz, S., Balakrishnan, N. (1994). Continuous Univariate Distributions, Vol. 1. Wiley Series in Probability and Statistics.
3. Johnson, Kotz, and Balakrishnan (1994), (20.4).
5. Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. II (2nd ed.). New York: Wiley. "The densities (4.3) are sometimes called after the economist Pareto. It was thought (rather naïvely from a modern statistical standpoint) that income distributions should have a tail with a density ~ Ax−α as x → ∞."
6. Lomax, K. S. (1954). Business failures. Another example of the analysis of failure data. Journal of the American Statistical Association, 49, 847–852.
Pareto, Vilfredo. Cours d'Économie Politique: Nouvelle édition par G.-H. Bousquet et G. Busino. Librairie Droz, Geneva, 1964, pp. 299–345.
For a two-quantile population, where approximately 18% of the population owns 82% of the wealth, the Theil index takes the value 1.
Kleiber and Kotz (2003), p. 94.
http://www.cs.bgu.ac.il/~mps042/invtransnote.htm
Pareto, V. (1965). "La Courbe de la Repartition de la Richesse" (originally published in 1896). In: Busino, G. (ed.), Oeuvres Complètes de Vilfredo Pareto. Geneva: Librairie Droz, pp. 1–5.
Pareto, V. (1895). La legge della domanda. Giornale degli Economisti, 10, 59–68. English translation in Rivista di Politica Economica, 87 (1997), 691–700.
Pareto, V. (1897). Cours d'économie politique. Lausanne: Ed. Rouge.
Aaberge, R. (2005). "Gini's Nuclear Family". In: International Conference to Honor Two Eminent Social Scientists, May 2005.
syntraf1.c is a C program to generate synthetic packet traffic with bounded Pareto burst size and exponential interburst time.
"Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes" / Mark E. Crovella and Azer Bestavros.
Weisstein, Eric W., "Pareto distribution", MathWorld.
Maximum Likelihood Estimation — why it is used despite being biased in many cases
Maximum likelihood estimation often results in biased estimators (e.g., its estimate of the variance is biased for the Gaussian distribution).
What then makes it so popular? Why exactly is it used so much? Also, what in particular makes it better than the alternative approach -- method of moments?
Also, I noticed that for the Gaussian, a simple scaling of the MLE estimator makes it unbiased. Why is this scaling not a standard procedure? I mean: why is it not routine, after computing the MLE, to find the scaling needed to make the estimator unbiased? The standard practice seems to be the plain computation of the MLE estimates, except of course for the well-known Gaussian case, where the scaling factor is well known.
normal-distribution maximum-likelihood method-of-moments
MinajMinaj
$\begingroup$ There are many, many alternatives to ML, not just the method of moments--which also tends to produce biased estimators, by the way. What you might want to ask instead is "why would anybody want to use an unbiased estimator?" A good way to start researching this issue is a search on bias-variance tradeoff. $\endgroup$ – whuber♦ Nov 22 '15 at 16:28
$\begingroup$ As whuber pointed out, there is no intrinsic superiority in being unbiased. $\endgroup$ – Xi'an Nov 22 '15 at 16:38
$\begingroup$ I think @whuber means "why would anybody want to use a biased estimator?" It doesn't take much work to convince someone that an unbiased estimator may be a reasonable one. $\endgroup$ – Cliff AB Nov 22 '15 at 17:31
$\begingroup$ See en.wikipedia.org/wiki/… for an example where the only unbiased estimator is certainly not one you'd want to use. $\endgroup$ – Scortchi♦ Nov 22 '15 at 19:24
$\begingroup$ @Cliff I intended to ask the question in its more provocative, potentially more mysterious form. Lurking behind this is the idea that there are many ways to evaluate the quality of an estimator and many of them have nothing to do with bias. From that point of view, it is most natural to ask why someone would propose an unbiased estimator. See glen_b's answer for more from this point of view. $\endgroup$ – whuber♦ Nov 23 '15 at 16:44
Unbiasedness isn't necessarily especially important on its own.
Aside a very limited set of circumstances, most useful estimators are biased, however they're obtained.
If two estimators have the same variance, one can readily mount an argument for preferring an unbiased one to a biased one, but that's an unusual situation to be in (that is, you may reasonably prefer unbiasedness, ceteris paribus -- but those pesky ceteris are almost never paribus).
More typically, if you want unbiasedness you'll be adding some variance to get it, and then the question would be why would you do that?
Bias is how far the expected value of my estimator will be too high on average (with negative bias indicating too low).
When I'm considering a small sample estimator, I don't really care about that. I'm usually more interested in how far wrong my estimator will be in this instance - my typical distance from right... something like a root-mean-square error or a mean absolute error would make more sense.
So if you like low variance and low bias, asking for say a minimum mean square error estimator would make sense; these are very rarely unbiased.
Bias and unbiasedness is a useful notion to be aware of, but it's not an especially useful property to seek unless you're only comparing estimators with the same variance.
ML estimators tend to be low-variance; they're usually not minimum MSE, but they often have lower MSE than modifying them to be unbiased (when you can do it at all) would give you.
As an example, consider estimating variance when sampling from a normal distribution: $\hat{\sigma}^2_\text{MMSE} = \frac{S^2}{n+1}$, $\hat{\sigma}^2_\text{MLE} = \frac{S^2}{n}$, $\hat{\sigma}^2_\text{Unb} = \frac{S^2}{n-1}$ (indeed the MMSE estimator of the variance always has a larger denominator than $n-1$).
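A quick simulation makes the ordering visible. A minimal sketch (assuming NumPy; here $S^2$ is the sum of squared deviations from the sample mean, as above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2, reps = 10, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
S2 = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # sum of squared deviations

for name, denom in [("MMSE (n+1)", n + 1), ("MLE (n)", n), ("unbiased (n-1)", n - 1)]:
    est = S2 / denom
    print(f"{name}: bias {est.mean() - sigma2:+.4f}, MSE {((est - sigma2) ** 2).mean():.4f}")
```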
Glen_b♦Glen_b
$\begingroup$ +1. Is there any intuition for (or perhaps some theory behind) your second-before last paragraph? Why do ML estimators tend to be low-variance? Why do they often have lower MSE than the unbiased estimator? Also, I am amazed to see the expression for MMSE estimator of variance; somehow I have never encountered it before. Why is it so rarely used? And does it have anything to do with shrinkage? It seems that it is "shrunk" from unbiased towards zero, but I am confused by that as I am used to thinking about shrinkage only in the multivariate context (along the lines of James-Stein). $\endgroup$ – amoeba Nov 25 '15 at 1:16
$\begingroup$ @amoeba MLEs are generally functions of sufficient statistics, and at least asymptotically minimum variance unbiased, so you expect them to be low variance in large samples, typically achieving the CRLB in the limit; this is often reflected in smaller samples. $\:$ MMSE estimators are generally shrunk toward zero because that reduces variance (and hence a small amount of bias toward 0 introduced by a small shrinkage will typically reduce MSE). $\endgroup$ – Glen_b♦ Nov 25 '15 at 4:16
$\begingroup$ @Glen_b, great answer (I keep coming back to it). Would you have an explanation or a reference for $\hat{\sigma}^2_\text{MMSE} = \frac{S^2}{n+1}$ being the minimum MSE estimator? $\endgroup$ – Richard Hardy Apr 26 '18 at 8:56
$\begingroup$ Also, does that imply the ML estimator of variance is not a minimum-variance estimator? Otherwise the minimum MSE estimator would be some weighted average (with positive weights) of the MLE and the unbiased estimator, but now it is outside that range. I could ask this as a separate question if you think it makes sense. $\endgroup$ – Richard Hardy Apr 26 '18 at 9:12
$\begingroup$ Found a whole derivation in a Wikipedia article on MSE, I guess that explain all of it. $\endgroup$ – Richard Hardy Apr 26 '18 at 13:02
MLE yields the most likely value of the model parameters, given the model and the data at hand -- which is a pretty attractive concept. Why would you choose parameter values that make the data observed less probable when you can choose the values that make the data observed the most probable across any set of values? Would you wish to sacrifice this feature for unbiasedness? I do not say the answer is always clear, but the motivation for MLE is pretty strong and intuitive.
Also, MLE may be more widely applicable than method of moments, as far as I know. MLE seems more natural in cases of latent variables; for example, a moving average (MA) model or a generalized autoregressive conditional heteroskedasticity (GARCH) model can be directly estimated by MLE (by directly I mean it is enough to specify a likelihood function and submit it to an optimization routine) -- but not by method of moments (although indirect solutions utilizing the method of moments may exist).
Richard HardyRichard Hardy
$\begingroup$ +1. Of course, there are plenty of cases when you don't want the most likely estimate, such as Gaussian Mixture Models (i.e. unbounded likelihood). In general, a great answer to help intuition of MLE's. $\endgroup$ – Cliff AB Nov 22 '15 at 21:03
$\begingroup$ (+1) But I think you need to add a definition of the "most likely" parameter value as that given which the data is most probable to be quite clear. Other intuitively desirable properties of an estimator unrelated to its long-term behaviour under repeated sampling might include its not depending on how you parametrize a model, & its not producing impossible estimates of the true parameter value. $\endgroup$ – Scortchi♦ Nov 23 '15 at 12:40
$\begingroup$ Think there's still a risk of "most likely"'s being read as "most probable". $\endgroup$ – Scortchi♦ Nov 23 '15 at 13:05
$\begingroup$ @RichardHardy: They're not at all alike. Most likely, the sun has gone out. Most probably, it hasn't. $\endgroup$ – user2357112 Nov 23 '15 at 17:36
$\begingroup$ @dsaxton: Statisticians have been differentiating the likelihood of a parameter value given the data from the probability of the data given a parameter value for nearly a century - see Fisher (1921) "On the 'probable error of a correlation", Metron, 1, pp 3-32 & Pawitan (2013), In All Likelihood: Statistical Modelling & Inference Using Likelihood - so even though the terms are synonymous in ordinary usage it seems a bit late now to object. $\endgroup$ – Scortchi♦ Nov 24 '15 at 15:23
Actually, the scaling of the maximum likelihood estimates in order to obtain unbiased estimates is a standard procedure in many estimation problems. The reason for that is that the mle is a function of the sufficient statistics and so by the Rao-Blackwell theorem if you can find an unbiased estimator based on sufficient statistics, then you have a Minimum Variance Unbiased Estimator.
I know that your question is more general than that but what I mean to emphasize is that key concepts are intimately related to the likelihood and estimates based on it. These estimates might not be unbiased in finite samples but they are asymptotically so and moreover they are asymptotically efficient, i.e. they attain the Cramer-Rao bound of variance for unbiased estimators, which might not always be the case for the MOM estimators.
JohnKJohnK
To answer your question of why the MLE is so popular, consider that although it can be biased, it is consistent under standard conditions. In addition, it is asymptotically efficient, so at least for large samples, the MLE is likely to do as well as or better than any other estimator you may cook up. Finally, the MLE is found by a simple recipe: take the likelihood function and maximize it. In some cases, that recipe may be hard to follow, but for most problems it is not. Plus, once you have this estimate, we can derive the asymptotic standard errors right away using the Fisher information. Without using the Fisher information, it is often really hard to derive the error bounds.
This is why MLE estimation is very often the go-to estimator (unless you're a Bayesian); it's simple to implement and likely to be just as good as, if not better than, anything else you would need to do more work to cook up.
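To make the recipe concrete, here is a minimal sketch (assuming NumPy and SciPy) that maximizes a normal log-likelihood numerically; it recovers the closed-form MLEs, including the biased $\hat\sigma^2$ with denominator $n$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, size=1_000)

def neg_log_lik(theta):
    mu, log_sigma = theta                 # optimize log(sigma) to keep sigma positive
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * log_sigma

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0]))
mu_hat, sigma2_hat = res.x[0], np.exp(2.0 * res.x[1])
print(mu_hat, sigma2_hat)                 # ≈ x.mean() and ((x - x.mean())**2).mean()
```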
Cliff ABCliff AB
$\begingroup$ Can you please elaborate as to how it compares to the method of moments, as this seems to be an important part of the OP? $\endgroup$ – Antoni Parellada Nov 22 '15 at 17:41
$\begingroup$ as pointed out by whuber, the MOM estimators are also biased, so there's not a "unbiased-ness" advantage to the MOM estimators. Also, when the MOM and MLE estimators disagree, the MLE tends to have lower MSE. But this answer is really about why MLE's tend to be the default, rather than a direct comparison to other methods. $\endgroup$ – Cliff AB Nov 22 '15 at 18:19
$\begingroup$ @AntoniParellada There is an interesting thread in comparing MLE and MoM, stats.stackexchange.com/q/80380/28746 $\endgroup$ – Alecos Papadopoulos Nov 22 '15 at 18:54
I'd add that sometimes (often) we use an MLE estimator because that's what we got, even if in an ideal world it wouldn't be what we want. (I often think of statistics as being like engineering, where we use what we got, not what we want.) In many cases it's easy to define and solve for the MLE, and then get a value using an iterative approach. Whereas for a given parameter in a given situation there may be a better estimator (for some value of "better"), but finding it may require being very clever; and when you're done being clever, you still only have the better estimator for that one particular problem.
eac2222eac2222
$\begingroup$ Out of curiosity, what's an example of what (in the ideal world) you would want? $\endgroup$ – Glen_b♦ Nov 24 '15 at 23:11
$\begingroup$ @Glen_b: Dunno. Unbiased, lowest variance, easy to compute in closed form? When you first learn the estimators for least-squares regression, life seems simpler than it turns out to be. $\endgroup$ – eac2222 Nov 30 '15 at 14:06
\begin{document}
\title[From visco to perfect plasticity]{From visco to perfect plasticity in thermoviscoelastic materials}
\author{Riccarda Rossi}
\address{R.\ Rossi, DIMI, Universit\`a degli studi di Brescia, via Branze 38, 25133 Brescia - Italy} \email{riccarda.rossi\,@\,unibs.it}
\thanks{The author has been partially supported by the Gruppo Nazionale per l'Analisi Matematica, la
Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM)}
\date{March 16, 2018} \maketitle
\begin{abstract} We consider a thermodynamically consistent model for thermoviscoplasticity. For the related PDE system, coupling the heat equation for the absolute temperature, the momentum balance with viscosity and inertia for the displacement variable, and the flow rule for the plastic strain, we
propose two weak sol\-va\-bi\-li\-ty concepts, `entropic' and `weak energy' solutions, where the highly nonlinear heat equation is
suitably formulated. Accordingly, we prove two existence results by passing to the limit in a carefully devised time discretization scheme.
\par
Furthermore, we study the asymptotic behavior of weak energy solutions as the rate of the external data becomes slower and slower, which amounts to taking the vanishing-viscosity and inertia limit of the system. We prove their convergence to a global energetic solution to the Prandtl-Reuss model for perfect plasticity, whose evolution is `energetically' coupled to that of the (spatially constant) limiting temperature. \end{abstract} \noindent \textbf{2010 Mathematics Subject Classification:} 35Q74, 74H20, 74C05, 74C10, 74F05. \par \noindent \textbf{Key words and phrases:} Thermoviscoplasticity, Entropic Solutions, Time Discretization, Vanishing-Viscosity Analysis,
Perfect Plasticity, Rate-Independent Systems, (Global) Energetic Solutions.
\section{\bf Introduction} \noindent Over the last decade, the mathematical study of rate-independent systems has received strong impulse. This is undoubtedly due to their ubiquity in several branches of continuum mechanics, see \cite{Miel05ERIS, MieRouBOOK}, but also to the manifold challenges posed by the analysis of rate-independent evolution. In particular, its intrinsically nonsmooth (in time) character makes it necessary to resort to suitable weak solvability notions: First and foremost, the concept of \emph{(global) energetic solution}, developed in \cite{MieThe99MMRI,MieThe04RIHM, Miel05ERIS}, cf.\ also
the notion of \emph{quasistatic evolution}, first introduced for models of crack propagation, cf.\ e.g.\ \cite{DMToa02, DFT05}. \par Alternative solution concepts for rate-independent models have been subsequently proposed, on the grounds that the \emph{global stability} condition prescribed by the energetic notion fails to accurately describe the behavior of the system at jump times, as soon as the driving energy is nonconvex. Among the various selection criteria of mechanically feasible concepts, let us mention here the \emph{vanishing-viscosity} approach, pioneered in \cite{EfeMie06RILS} and subsequently developed in the realm of \emph{abstract} rate-independent systems in \cite{MRS09,MRS12,MRS13} and, in parallel, in the context of specific models in crack propagation and damage, cf.\ e.g.\ \cite{ToaZan06?AVAQ,KnMiZa07?ILMC,LazzaroniToader,KnRoZa13VVAR}, as well as in plasticity, see e.g.\ \cite{DalDesSol11,DalDesSol12,BabFraMor12, FrSt2013}. In all of these applications, the evolution of the displacement variable is governed by the elastic equilibrium equation (with no viscosity or inertial terms), which is coupled to the rate-independent flow rule for the internal parameter describing the mechanical phenomenon under consideration. In the `standard' vanishing-viscosity approach,
the viscous term, regularizing the temporal evolution and then sent to zero, is added \emph{only} to the flow rule.
\par
Let us mention that, in turn, for certain models a \emph{rate-dependent} flow rule seems more mechanically feasible, cf.\ e.g.\
\cite{ZRSRZ06TDIT} in the frame of plasticity. Nonetheless, here we are interested in the vanishing-viscosity approach to rate-independence.
More specifically, we focus on
the extension of this approach to \emph{coupled} systems.
Recent papers have started to address this issue for systems coupling the evolution of the displacement and of the internal variable. In the context of the rate-dependent model,
\emph{both} the displacements and the internal variable are subject to viscous dissipation (and possibly to inertia in the momentum balance), and the vanishing-viscosity limit is taken \emph{both} in the momentum balance, and in the flow rule. The very first paper initiating this analysis
is \cite{DMSca14QEPP}, obtaining a \emph{perfect plasticity} (rate-independent) system in the limit of dynamic processes. We also quote \cite{Scala14}, where
this kind of approach was developed in the realm of a model for delamination, as well as \cite{MRS14}, tackling the analysis of
\emph{abstract, finite-dimensional} systems where the viscous terms vanish with different rates.
\par The model for small-strain associative elastoplasticity with the Prandtl-Reuss flow rule (without hardening) for the plastic strain, chosen in \cite{DMSca14QEPP} to pioneer the `full vanishing-viscosity' approach, has been extensively studied. In fact, the existence theory for perfect plasticity is by now classical, dating back to \cite{Johnson76, Suquet81,Kohn-Temam83}, cf.\ also \cite{Temam83}. It was revisited in \cite{DMDSMo06QEPL} in the framework of the aforementioned concept of (global) energetic solution to rate-independent systems, with the existence result established by passing to the limit in time-incremental minimization problems; a fine study of the flow rule for the plastic strain was also carried out in \cite{DMDSMo06QEPL}. This variational approach has apparently given new impulse to the analysis of perfect plasticity, extended to the case of heterogeneous materials in
\cite{Sol09,FraGia2012, Sol14,Sol15}; we also quote \cite{BMR12} on the vanishing-hardening approximation of the Prandtl-Reuss model.
\par
In \cite{DMSca14QEPP}, first of all an existence result for a dynamic viscoelastoplastic system approximating the perfectly plastic one, featuring viscosity and inertia in the momentum balance, and
viscosity in the flow rule for the plastic tensor, has been obtained. Secondly, the authors have analyzed its behavior as the rate of the external data
becomes slower and slower: with a suitable rescaling, this amounts to taking the vanishing-viscosity and inertia limit of the system.
They have shown that the (unique) solutions to the viscoplastic system converge, up to a subsequence, to a (global) energetic solution of the perfectly plastic system.
\par
In this paper, we aim to
\textbf{use the model for perfect plasticity as a \emph{case study}} for the vanishing-viscosity analysis of rate-dependent systems that also
\emph{encompass thermal effects}. To our knowledge, this is the first paper where the vanishing-viscosity analysis in
a fully rate-dependent, and temperature-dependent, system has been performed.
\par
Indeed, the analysis of systems with a \emph{mixed} rate-dependent/rate-independent character, coupling the
\emph{rate-dependent}
evolution of the
(absolute) temperature and of the displacement/deformation variables with the \emph{rate-independent} flow rule
of an internal variable, has been initiated in \cite{Roub10TRIP}, and subsequently particularized to various mechanical models.
While referring to \cite[Chap.\ 5]{MieRouBOOK}
for a survey of these types of systems, we mention here the perfect plasticity and damage models studied in \cite{Roub-PP} and in \cite{LRTT}, respectively.
In the latter paper,
a vanishing-viscosity analysis (as the rate of the external loads and heat sources tends to zero) for the \emph{mixed} rate-dependent/independent damage model,
has been performed.
\par Instead, here the (approximating) thermoviscoplastic system will feature a \emph{rate-dependent} flow rule for the plastic strain, and thus will be entirely
rate-dependent.
\begin{itemize}
\item
First of all, we will focus on the analysis of the rate-dependent system. Exploiting the techniques from \cite{Rocca-Rossi}, we will obtain two existence results,
which might be interesting in their own right,
for two notions of solutions of the thermoviscoplastic system, referred to as `entropic' and `weak energy'.
Our proofs will be carried out by passing to the limit in a carefully tailored time discretization scheme.
\item
Secondly, in the case of `weak energy' solutions we will perform the vanishing-viscosity asymptotics, obtaining a system where the
evolution of the displacement and of the elastic and plastic strains, in the sense of
(global) energetic solutions, is coupled to that of the (spatially constant) temperature variable.
In fact,
we could address this singular limit also for
entropic solutions, but the resulting formulation of the limiting rate-independent system would be less meaningful due to
the too weak character of the entropic solution notion, cf.\ also Remark \ref{rmk:2weak} ahead.
\end{itemize}
\par
Let us now get further insight into our analysis, first in the visco-, and then in the perfectly plastic cases.
\subsection{The thermoviscoplastic system} \label{ss:1.1} The reference configuration is a bounded, open, Lipschitz domain $\Omega\subset \mathbb{R}^d$, $d\in \{2, 3\}$, and we consider
the evolution of the system in a time interval $(0,T)$. Within the small-strain approximation, the momentum balance features the linearized strain tensor $\sig{u}= \tfrac12 \left( \nabla u+ \nabla u^\bot \right)$, decomposed as \begin{equation} \label{decomp-intro} \sig u = e+p \qquad \text{ in } \Omega \times (0,T), \end{equation} with $e \in \bbM_\mathrm{sym}^{d \times d}$ (the space of symmetric $(d{\times}d)$-matrices) and $p \in \bbM_\mathrm{D}^{d \times d}$ (the space of symmetric
$(d{\times}d)$-matrices with null trace)
the elastic and plastic strains, respectively. In accord with the Kelvin-Voigt rheology for materials subject to thermal expansion, the stress is given by \begin{align} \label{stress} \sigma = \mathbb{D} \dot{e} + \mathbb{C}(e - \mathbb{E}\vartheta), \end{align} with $\vartheta$ the absolute temperature, and the elasticity, viscosity, and thermal expansion tensors $\mathbb{C},\, \mathbb{D},\, \mathbb{E}$ depending on the space variable $x$ (which shall be overlooked in this Introduction for simplicity of exposition), symmetric, $\mathbb{C}$ and $\mathbb{D}$ positive definite. Then, we consider the following PDE system: \begin{subequations} \label{plast-PDE} \begin{align} & \label{heat} \dot{\vartheta} - \mathrm{div}(\kappa(\vartheta)\nabla \vartheta) =H+ \mathrm{R}(\vartheta,\dot{p}) + \dot{p}: \dot{p}+ \mathbb{D} \dot{e} : \dot{e} -\vartheta \mathbb{C}\mathbb{E} : \dot{e} && \text{ in } \Omega \times (0,T), \\ \label{mom-balance}
&\rho \ddot{u} - \mathrm{div}\sigma = F && \text{ in } \Omega \times (0,T), \\ & \label{flow-rule} \partial_{\dot{p}} \mathrm{R}(\vartheta,\dot{p}) + \dot{p} \ni \sigma_{\mathrm{D}} && \text{ in } \Omega \times (0,T). \end{align} \end{subequations}
The heat equation \eqref{heat} features as heat conductivity coefficient a nonlinear function $\kappa \in \mathrm{C}^0(\mathbb{R}^+)$, which shall be
supposed with a suitable growth. In the momentum balance \eqref{mom-balance}, $\rho>0$ is the (constant, for simplicity) mass density.
The evolution of the plastic strain $p$ is given by the flow rule \eqref{flow-rule}, where
$\sigma_\mathrm{D}$ is the deviatoric part of the stress $\sigma$, and
the dissipation potential
$\mathrm{R} :\mathbb{R}^+\times \bbM_\mathrm{D}^{d\times d} \to [0,+\infty) $ is lower semicontinuous,
and associated with a multifunction
$K :\mathbb{R}^+ \rightrightarrows \bbM_\mathrm{D}^{d\times d}$, with values
in the compact and convex subsets of $\bbM_\mathrm{D}^{d\times d}$, via the relation
\[
R(\vartheta, \dot{p}) = \sup_{\pi \in K(\vartheta)} \pi{:} \dot{p} \qquad \text{for all } (\vartheta, \dot p) \in \mathbb{R}^+\times \bbM_\mathrm{D}^{d\times d}
\]
(the dependence of $K$ and $R$ on $x \in \Omega$ is overlooked within this section).
Namely, for every $\vartheta \in \mathbb{R}^+$ the potential $R(\vartheta, \cdot)$ is the support function of the convex and compact set $K(\vartheta)$, which can be interpreted as the domain of
viscoelasticity, allowed to depend on $x\in \Omega$ as well as on the temperature variable.
In fact, $R(\vartheta, \cdot)$ is the Fenchel-Moreau conjugate of the indicator function $I_{K(\vartheta)}$, and thus \eqref{flow-rule} (where $\partial_{\dot p}$ denotes the subdifferential in the sense of convex analysis w.r.t.\ the variable $\dot p$) rephrases as
\begin{equation}
\label{flow-rule-rewritten}
\begin{aligned}
\dot{p} \in \partial I_{K(\vartheta)}(\sigma_\mathrm{D} {-} \dot p)
\
\Leftrightarrow \ \dot{p} = \sigma_{\mathrm{D}} - \mathrm{P}_{K(\vartheta)} (\sigma_{\mathrm{D}} )
\quad
\text{ in } \Omega \times (0,T),
\end{aligned}
\end{equation}
with $ \mathrm{P}_{K(\vartheta)} $ the projection operator onto $K(\vartheta)$.
The PDE system \eqref{plast-PDE} is supplemented by the boundary conditions \begin{subequations} \label{bc} \begin{align} & \label{bc-u-1} \sigma \nu = g && \text{ on } \Gamma_\mathrm{Neu} \times (0,T), \\ & \label{bc-u-2} u= w && \text{ on } \Gamma_\mathrm{Dir} \times (0,T), \\ & \label{bc-teta} \kappa(\vartheta)\nabla \vartheta \nu = h && \text{ on } \partial\Omega \times (0,T), \end{align} \end{subequations} where $\nu$ is the external unit normal to $\partial\Omega$, with $ \Gamma_\mathrm{Neu} $ and $\Gamma_\mathrm{Dir}$ its Neumann and Dirichlet parts. The body is subject to the volume force $F$, to the applied traction $g$ on $\Gamma_\mathrm{Neu}$, and solicited by a displacement field $w$ applied on $\Gamma_\mathrm{Dir}$, while $H$ and $h$ are bulk and surface (positive) heat sources, respectively. \par A PDE system with the same structure as (\ref{plast-PDE}, \ref{bc}) was proposed in \cite{Roub-PP} to model the thermodynamics of perfect plasticity: i.e., a heat equation akin to \eqref{heat} and the momentum balance \eqref{mom-balance} were coupled to the \emph{rate-independent version} of the flow rule \eqref{flow-rule}, cf.\ \eqref{flow-rule-RIP} below. While the mixed rate-dependent/independent system in \cite{Roub-PP} calls for a completely different analysis from our own, the modeling discussion developed in \cite[Sec.\ 2]{Roub-PP} can be easily adapted to system
(\ref{plast-PDE}, \ref{bc}) to show
its compliance with the first and second principle of thermodynamics. In particular, let us stress that, due to the presence of the \emph{quadratic} terms $ \dot{p}: \dot{p}$, $\mathbb{D} \dot{e} : \dot{e} $, and $\vartheta \mathbb{C}\mathbb{E} : \dot{e} $ on the right-hand side of \eqref{heat}, system (\ref{plast-PDE}, \ref{bc}) is \underline{thermodynamically consistent}. \par The analysis of (the Cauchy problem associated with) system (\ref{plast-PDE}, \ref{bc}) poses some significant mathematical difficulties: \begin{description} \item[\textbf{(1)}]
First and foremost, its nonlinear character, and in particular the quadratic terms on the r.h.s.\ of \eqref{heat}, which is thus only estimated in $L^1((0,T) \times \Omega)$ as soon as $\dot p$ and $\dot e$ are estimated in $L^2((0,T) \times \Omega;\bbM_\mathrm{D}^{d\times d})$ and
$L^2((0,T) \times \Omega;\bbM_\mathrm{sym}^{d\times d})$, respectively. Because of this, on the one hand obtaining suitable estimates of the temperature variable turns out to be challenging. On the other hand,
appropriate weak formulations of \eqref{heat} are called for.
\end{description}
\par
In the one-dimensional case, existence results have been obtained for thermodynamically consistent (visco)\-plasticity models with hysteresis in \cite{KS97, KSS02, KSS03}. In higher dimensions, suitable adjustments of the toolbox by \textsc{Boccardo \& Gallou\"et}
\cite{Boccardo-Gallouet89} to handle the heat equation with $L^1$/measure data have been devised in a series of recent papers on thermoviscoelasticity with rate-dependent/independent plasticity. In particular, we quote \cite{Roub-Bartels-1}, dealing with a (rate-dependent) thermoviscoplastic model, where thermal expansion effects are neglected, as well as \cite{Roub-Bartels-2}, addressing rate-independent plasticity with hardening coupled with thermal effects, with the stress tensor given by $
\sigma = \mathbb{D} \sig{u_t} + \mathbb{C} e - \mathbb{C} \mathbb{E} \vartheta $, and finally \cite{Roub-PP}, handling the thermodynamics of perfect plasticity.
Let us point out that, in the estimates developed in \cite{Roub-Bartels-2, Roub-PP}, a crucial role is played by a sort of `compatibility condition' between the growth exponents of the ($\vartheta$-dependent) heat capacity coefficient multiplying $\vartheta_t$, and of the heat conduction coefficient $\kappa(\vartheta)$. This allows for Boccardo-Gallou\"et type estimates, drawn from \cite{Roub10TRIP}. In the recent \cite{HMS17}, the analysis of the heat equation with $L^1$ right-hand side has been handled without growth conditions on the abovementioned coefficients by resorting to maximal parabolic regularity arguments, made possible by the crucial ansatz that the viscous contribution to $\sigma$ features $\sig{\dot u}$, in place of $\dot e$ as in \eqref{stress}.
\par
Here we will instead stay with \eqref{stress}, which is more consistent with \emph{perfect plasticity}. While supposing that the heat capacity coefficient is constant
(cf.\ also Remark \ref{rmk:in-LRTT} ahead),
we will develop different arguments to derive estimates on the temperature variable based
on a growth condition
for the heat conduction coefficient. In this, we will follow in
the footsteps of \cite{FPR09, Rocca-Rossi}, analyzing thermodynamically consistent models for phase transitions and for damage, respectively. Namely, we shall suppose that
\begin{equation}
\label{heat-cond-intro}
\kappa(\vartheta) \sim \vartheta^\mu \qquad \text{with } \mu>1.
\end{equation} We shall exploit \eqref{heat-cond-intro}
upon testing \eqref{heat} by a suitable negative power of $\vartheta$ (all calculations can be rendered rigorous at the level of a time discretization scheme). In this way, we will deduce a crucial estimate for $\vartheta$ in $L^2(0,T;H^1(\Omega))$.
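Let us illustrate, at a purely formal level, the mechanism behind this estimate: testing \eqref{heat} by $\vartheta^{\alpha-1}$ with $\alpha \in [(2{-}\mu)^+,1)$ (observe that $\alpha-1<0$), integrating in time, and exploiting the bounds for $\dot e$, $\dot p$, and for $\vartheta$ in $L^\infty(0,T;L^1(\Omega))$ provided by the basic energy estimates, one arrives at
\begin{equation*}
(1-\alpha) \int_0^T \int_\Omega \kappa(\vartheta)\, \vartheta^{\alpha-2}\, |\nabla \vartheta|^2 \;\!\mathrm{d} x \;\!\mathrm{d} t \leq C\,.
\end{equation*}
Since \eqref{heat-cond-intro} gives $\kappa(\vartheta)\, \vartheta^{\alpha-2} \geq c\, \vartheta^{\mu+\alpha-2}$, this provides an estimate for $\nabla (\vartheta^{(\mu+\alpha)/2})$ in $L^2((0,T){\times}\Omega;\mathbb{R}^d)$ with $\mu+\alpha \geq 2$, which, combined with the above $L^\infty(0,T;L^1(\Omega))$-bound for $\vartheta$, leads to the desired $L^2(0,T;H^1(\Omega))$-estimate. Under \eqref{heat-cond-intro} we will address the weak solvability of (\ref{plast-PDE}, \ref{bc}) in terms of the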
`entropic' notion of solution,
proposed in the framework of models for heat conduction in fluids, cf.\ e.g.\ \cite{Feireisl2007,BFM2009}, and later used to weakly formulate models for phase change \cite{FPR09} and, among other applications,
for damage in thermoviscoelastic materials \cite{Rocca-Rossi}. In the framework of our plasticity system,
this solution concept features the weak formulation of the momentum balance \eqref{mom-balance} and the flow rule \eqref{flow-rule}, stated a.e.\ in $\Omega \times (0,T)$, coupled with \begin{itemize} \item[-] the \emph{entropy inequality}
\begin{equation} \label{entropy-ineq-intro} \begin{aligned}
& \int_s^t \int_\Omega \log(\vartheta) \dot{\varphi} \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_s^t \int_\Omega \left( \kappa(\vartheta) \nabla \log(\vartheta) \nabla \varphi - \kappa(\vartheta) \frac\varphi\vartheta \nabla \log(\vartheta) \nabla \vartheta\right) \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & \leq
\int_\Omega \log(\vartheta(t)) \varphi(t) \;\!\mathrm{d} x - \int_\Omega \log(\vartheta(s)) \varphi(s) \;\!\mathrm{d} x \\ & \quad
- \int_s^t \int_\Omega \left( H+ \mathrm{R}(\vartheta,\dot{p}) + |\dot{p}|^2+ \mathbb{D} \dot{e} : \dot{e} -\vartheta \mathbb{B} : \dot{e} \right) \frac{\varphi}\vartheta \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_s^t \int_{\partial\Omega} h \frac\varphi\vartheta \;\!\mathrm{d} S \;\!\mathrm{d} r \end{aligned} \end{equation} with $\varphi$ a sufficiently regular, \emph{positive} test function,
\item[-] the
\emph{total energy inequality} \begin{equation} \label{total-enid-intro} \begin{aligned} &
\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(t), e(t)) \\
& \leq \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(s), e(s)) + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u{-} \dot w} \;\!\mathrm{d} r +\int_s^t \int_\Omega H \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \;\!\mathrm{d} S \;\!\mathrm{d} r \\ & \begin{aligned} \quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) & + \int_s^t \int_\Omega \sigma: \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r \end{aligned}
\end{aligned} \end{equation}
involving the total load $\calL$ associated with the external forces $F$ and $g$, and the energy functional $
\mathcal{E}(\vartheta, e): = \int_\Omega \vartheta \;\!\mathrm{d} x + \int_\Omega \tfrac12 \mathbb{C} e{:} e \;\!\mathrm{d} x\,.
$
\end{itemize} Both \eqref{entropy-ineq-intro} and \eqref{total-enid-intro} are required to hold for almost all $t \in (0,T]$ and almost all $s\in (0,t)$, and for $s=0$. \par While referring to \cite{FPR09,Rocca-Rossi} for more details and to Sec.\ \ref{ss:2.2} for a formal derivation of \eqref{entropy-ineq-intro}--\eqref{total-enid-intro}, let us point out here that this solution concept reflects the thermodynamic consistency of the model, since it corresponds to the requirement that the system should satisfy the second and first principles of thermodynamics.
From an analytical viewpoint, observe that the entropy inequality \eqref{entropy-ineq-intro} has the advantage that all the
quadratic terms on the right-hand side of \eqref{heat} appear multiplied by the negative factor $-\varphi/\vartheta$, with $\varphi$ a positive test function. This
allows for upper semicontinuity arguments in the limit passage in a suitable approximation of \eqref{entropy-ineq-intro}--\eqref{total-enid-intro}. Furthermore, despite its weak character, \emph{weak-strong uniqueness} results can seemingly be obtained for the entropic formulation, cf.\ e.g.\ \cite{Fei-Nov} in the context of the Navier-Stokes-Fourier system modeling heat conduction in fluids.
\begin{description} \item[\textbf{(2)}] An additional analytical challenge is related to handling a non-zero applied traction $g$ on the Neumann part of the boundary $\Gamma_\mathrm{Neu}$. This results in the term $ \int_0^T\pairing{}{H^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u} \;\!\mathrm{d} t $ on the r.h.s.\ of \eqref{total-enid-intro}, whose time-discrete version is, in fact, the starting point in the derivation of all of the a priori estimates. The estimate of this term is delicate, since it would in principle involve the $H^1 (\Omega;\mathbb{R}^d)$-norm of $\dot {u} $, which is not controlled by the left-hand side of \eqref{total-enid-intro}. An integration by parts in time shifts the problem to estimating the $H^1 (\Omega;\mathbb{R}^d)$-norm of $u$, but the l.h.s.\ of
\eqref{total-enid-intro} only controls the $L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})$-norm of $e$. Observe that this is ultimately due to the form \eqref{stress} of the stress $\sigma$.
\end{description}
To overcome this problem, we will impose that the data $F$ and $g$ comply with a suitable \emph{safe load} condition, see also Remark \ref{rmk:diffic-1-test}.
\par
Finally,
\begin{description} \item[\textbf{(3)}] the presence of adiabatic effects in the momentum balance, accounted for by the thermal expansion term coupling it with the heat equation, leads to yet another technical problem. In fact, the estimate of the term $ \int_0^T \int_\Omega \vartheta \mathbb{C} \mathbb{E}{:} \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} t $ contributing to the integral $\int_0^T \int_\Omega \sigma: \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} t $ on the r.h.s.\ of \eqref{total-enid-intro} calls for suitable assumptions on the Dirichlet loading $w$, since the l.h.s.\ of \eqref{total-enid-intro} only controls the $L^1(\Omega)$-norm of $\vartheta$, cf.\ again Remark \ref{rmk:diffic-1-test}. \end{description} \par As already mentioned, we will tackle the existence analysis for the entropic formulation of system (\ref{plast-PDE}, \ref{bc}) by approximation via time discretization. In particular, following in the footsteps of \cite{Rocca-Rossi}, we will carefully devise our time-discretization scheme in such a way that the approximate solutions obtained by interpolation of the discrete ones fulfill discrete versions of the entropy and total energy inequalities, in addition to the discrete momentum balance and flow rule. We will then obtain a series of a priori estimates allowing us to deduce suitable compactness information on the approximate solutions, and thus to pass to the limit. \par In this way, under the basic growth condition \eqref{heat-cond-intro} on $\kappa$ and under appropriate assumptions on the data, also tailored to the technical problems
\textbf{(2)}\&\textbf{(3)}, we will prove our first main result, \textbf{\underline{Theorem \ref{mainth:1}}}, stating the existence of entropic
solutions to the Cauchy problem for system (\ref{plast-PDE}, \ref{bc}).
\par
Under a more stringent growth condition on $\kappa$,
we will prove in \textbf{\underline{Theorem \ref{mainth:2}}}
an existence result for an enhanced notion of solution. Instead of the entropy and total energy inequalities, this concept features
\begin{itemize} \item[-] a `conventional' weak formulation of the heat equation \eqref{heat}, namely
\begin{equation} \label{eq-teta-intro} \begin{aligned}
\pairing{}{}{\dot\vartheta}{\varphi} + \int_\Omega \kappa(\vartheta) \nabla \vartheta\nabla\varphi \;\!\mathrm{d} x
= \int_\Omega \left(H+ \mathrm{R}(\vartheta, \dot p) + \dot p : \dot p + \mathbb{D} \dot e : \dot e - \vartheta \mathbb{C} \mathbb{E} : \dot{e} \right) \varphi \;\!\mathrm{d} x + \int_{\partial\Omega} h \varphi \;\!\mathrm{d} S \end{aligned} \end{equation}
for all test functions $\varphi \in W^{1,p}(\Omega)$, with $\vartheta \in W^{1,1}(0,T; W^{1,p}(\Omega)^*)$,
$\kappa(\vartheta) \nabla \vartheta \in L^{p'}((0,T){\times}\Omega;\mathbb{R}^d)$,
and $p>1$ sufficiently large, and
\item[-] the \emph{total energy balance}, i.e.\ \eqref{total-enid-intro} as an equality.
\end{itemize}
In view of this, we will refer to these improved solutions as `weak energy' solutions.
\subsection{The perfectly plastic system} In investigating the vanishing-viscosity and inertia limit of system
(\ref{plast-PDE}, \ref{bc}), we shall confine the discussion to the asymptotic behavior of a family of \emph{weak energy solutions}. In this setup, we will extend the analysis developed in \cite{DMSca14QEPP} to the \emph{temperature-dependent} and \emph{spatially heterogeneous} cases, i.e.\ with the tensors $\mathbb{C},\, \mathbb{D},\, \mathbb{E}$, and the elastic domain $K$, depending on $x\in \Omega$. However, we will drop the dependence of $K$ on the (spatially discontinuous) temperature variable $\vartheta$ due to technical difficulties in the handling of the plastic dissipation potential, see Remark \ref{rmk:added-sol} ahead. \par Mimicking \cite{DMSca14QEPP},
we will supplement the thermoviscoplastic system with rescaled data $F^\varepsilon, \, g^\varepsilon, \, w^\varepsilon, \, H^\varepsilon, \, h^\varepsilon$, with $\mathfrak{f}^\varepsilon (t) = \mathfrak{f}(\varepsilon t)$,
for $t \in [0,T/\varepsilon]$ and for $\mathfrak{f} \in \{ F,\, g, \, w,\, H,\, h\}$. Correspondingly, we will consider a family $(\vartheta^\varepsilon, u^\varepsilon, e^\varepsilon, p^\varepsilon)_\varepsilon$ of weak energy solutions
to (the Cauchy problem for) system (\ref{plast-PDE}, \ref{bc}), defined on $ [0,T/\varepsilon]$.
We will further rescale them in such a way that they are defined on $[0,T]$, by setting $ \vartheta_\varepsilon (t) = \vartheta^\varepsilon (t/\varepsilon)$, and defining analogously $u_\varepsilon$, $e_\varepsilon$, $p_\varepsilon$ and the data $F_\varepsilon, \, g_\varepsilon, \, w_\varepsilon, \, H_\varepsilon, \, h_\varepsilon$.
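Observe that, formally, the chain rule gives
\begin{equation*}
\dot{\vartheta}^\varepsilon(t/\varepsilon) = \varepsilon\, \dot{\vartheta}_\varepsilon(t), \qquad \dot{e}^\varepsilon(t/\varepsilon) = \varepsilon\, \dot{e}_\varepsilon(t), \qquad \dot{p}^\varepsilon(t/\varepsilon) = \varepsilon\, \dot{p}_\varepsilon(t), \qquad \ddot{u}^\varepsilon(t/\varepsilon) = \varepsilon^2\, \ddot{u}_\varepsilon(t) \qquad \text{for } t \in [0,T].
\end{equation*}
In particular, the $1$-positive homogeneity of $\mathrm{R}(\vartheta,\cdot)$ yields $\mathrm{R}(\vartheta_\varepsilon, \varepsilon \dot{p}_\varepsilon) = \varepsilon\, \mathrm{R}(\vartheta_\varepsilon, \dot{p}_\varepsilon)$, while the $0$-positive homogeneity of the subdifferential $\partial_{\dot p} \mathrm{R}(\vartheta, \cdot)$ explains why the dissipative term in the rescaled flow rule below carries no factor $\varepsilon$.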
Hence, the functions $(\vartheta_\varepsilon, u_\varepsilon, e_\varepsilon, p_\varepsilon)$ are \emph{weak energy} solutions of the rescaled system \begin{subequations} \label{plast-PDE-rescal} \begin{align} & \label{heat-rescal} \varepsilon\dot{\vartheta} - \mathrm{div}(\kappa(\vartheta)\nabla \vartheta) =H+\varepsilon \mathrm{R}(\vartheta,\dot{p}) + \varepsilon^2\dot{p}: \dot{p}+ \varepsilon^2\mathbb{D} \dot{e} : \dot{e} -\varepsilon\vartheta \mathbb{C}\mathbb{E}_\varepsilon : \dot{e} && \text{ in } \Omega \times (0,T), \\ \label{mom-balance-rescal}
&\rho \varepsilon^2 \ddot{u} - \mathrm{div}\left( \varepsilon\mathbb{D} \dot{e} + \mathbb{C}(e - \mathbb{E}_\varepsilon\vartheta) \right) = F && \text{ in } \Omega \times (0,T), \\ & \label{flow-rule-rescal} \partial_{\dot{p}} \mathrm{R}(\vartheta,\dot{p}) + \varepsilon \dot{p} \ni \left( \varepsilon\mathbb{D} \dot{e} + \mathbb{C}(e - \mathbb{E}_\varepsilon\vartheta) \right)_{\mathrm{D}} && \text{ in } \Omega \times (0,T), \end{align} \end{subequations} supplemented with the boundary conditions \eqref{bc} featuring the rescaled data $g_\varepsilon, \, w_\varepsilon, \, h_\varepsilon$. Observe that we will let the thermal expansion tensors vary with $\varepsilon$. \par For technical reasons expounded at length in Section \ref{s:6}, we will address the asymptotic analysis of system (\ref{plast-PDE-rescal}, \ref{bc}) only under the assumption that the tensors $\mathbb{E}_\varepsilon$ scale in a suitable way with $\varepsilon$, namely\
\begin{equation} \label{scaling-intro}
\mathbb{E}_\varepsilon = \varepsilon^\beta \mathbb{E} \quad \text{ with a given } \mathbb{E} \in \bbM_\mathrm{sym}^{d\times d} \text{ and } \beta>\frac12. \end{equation} Under \eqref{scaling-intro}, the \emph{formal} limit of system (\ref{plast-PDE-rescal}, \ref{bc}) then consists of \begin{itemize} \item[-] the stationary heat equation \begin{equation} \label{stat-heat-intro} - \mathrm{div}(\kappa(\vartheta)\nabla \vartheta) =H \qquad \text{ in } \Omega \times (0,T), \end{equation}
supplemented with the Neumann condition \eqref{bc-teta};
\item[-] the system for perfect plasticity
\begin{subequations} \label{RIP-PDE} \begin{align} \label{mom-balance-RIP}
& - \mathrm{div}\sigma = F && \text{ in } \Omega \times (0,T), \\ & \label{flow-rule-RIP} \partial_{\dot{p}} \mathrm{R}(\Theta,\dot{p}) \ni \sigma_{\mathrm{D}} && \text{ in } \Omega \times (0,T), \end{align} with the boundary conditions \eqref{bc-u-1} and \eqref{bc-u-2}, complemented by the kinematic admissibility condition and Hooke's law \begin{align} & \label{decomp}\sig u = e + p && \text{ in } \Omega \times (0,T), \\ & \label{stress-RIP} \sigma = \mathbb{C}e && \text{ in } \Omega \times (0,T). \end{align} \end{subequations} \end{itemize} In fact, system \eqref{RIP-PDE} has to be weakly formulated in function spaces reflecting
the fact that the plastic strain $p$ is only a Radon measure on $\Omega$, and so is $\sig{u}$ (so that the displacement variable $u$ is only a function of bounded deformation), and that, in principle, we only have $\mathrm{BV}$-regularity for $t\mapsto p(t)$. \par Our asymptotic result, \textbf{\underline{Theorem \ref{mainth:3}}}, states that, under suitable conditions on the data $(F_\varepsilon, \, g_\varepsilon, \, w_\varepsilon, \, H_\varepsilon, \, h_\varepsilon)_\varepsilon$, up to a subsequence the functions $(\vartheta_\varepsilon, u_\varepsilon, e_\varepsilon, p_\varepsilon)_\varepsilon$ converge as $\varepsilon \downarrow 0$ to a quadruple $(\Theta, u,e,p)$
such that \begin{enumerate} \item $\Theta$ is constant in space, \item $(u,e,p)$ comply with the \emph{(global) energetic formulation} of system \eqref{RIP-PDE}, consisting of a global stability condition and of an energy balance; \item there additionally holds a balance between the energy dissipated through changes of the plastic strain
and the thermal energy
on almost every sub-interval of $(0,T)$, i.e. \begin{equation} \label{gift} \int_\Omega \Theta(t) \;\!\mathrm{d} x - \int_\Omega \Theta(s) \;\!\mathrm{d} x =\mathrm{Var}(p;[s,t])+\int_s^t \int_\Omega \mathsf{H} \;\!\mathrm{d} x \;\!\mathrm{d} r +\int_s^t \int_{\partial\Omega} \mathsf{h} \;\!\mathrm{d} S \;\!\mathrm{d} r
\text{ for almost all } s< t \in (0,T), \end{equation} with $\mathsf{H}$ and $\mathsf{h}$ the limiting heat sources. \end{enumerate} \noindent
Observe that \eqref{gift} couples the evolution of the temperature $\Theta$ to that of $p$, and thus of the solution triple $(u,e,p)$.
\par
Finally, based on the arguments from \cite{DMSca14QEPP}, in Theorem \ref{mainth:3} we will also obtain that $(u,e,p)$ are, ultimately, \emph{absolutely continuous} as functions of time. This is a special feature of the perfectly plastic system, already observed in \cite{DMDSMo06QEPL}. It is in accordance with the time regularity results proved in \cite{MieThe04RIHM} for energetic solutions to rate-independent systems driven by uniformly convex energy functionals. It is in fact because of the `convex character' of the problem that we retrieve \emph{(global) energetic} solutions, upon taking the vanishing-viscosity and inertia limit, cf.\ also \cite[Prop.\ 7]{MRS09}. \par Also in view of the similar vanishing-viscosity analysis developed in \cite{LRTT} in the context of a thermodynamically consistent model for damage, we expect to obtain a different kind of solution when performing the same analysis for thermomechanical systems driven by nonconvex (mechanical) energies. We plan to address these studies in the future.
\paragraph{\bf Plan of the paper.}
In \underline{Section \ref{s:2}} we establish all the assumptions on the thermoviscoplastic system (\ref{plast-PDE}, \ref{bc}) and its data, introduce the two solvability concepts we will address, and state our two existence results, Theorems \ref{mainth:1} \& \ref{mainth:2}. \underline{Section \ref{s:3}} is devoted to the analysis of the time discretization scheme for (\ref{plast-PDE}, \ref{bc}). In \underline{Section \ref{s:4}} we pass to the time-continuous limit and conclude the proofs of Thms.\ \ref{mainth:1} \& \ref{mainth:2},
also relying on a novel, Helly-type compactness result, cf.\ Thm.\ \ref{th:mie-theil} ahead.
In \underline{Section \ref{s:5}} we set up the limiting perfectly plastic system and give its (global) energetic formulation. The vanishing-viscosity and inertia analysis is carried out in \underline{Section \ref{s:6}} with Theorem \ref{mainth:3}, whose proof also relies on some Young measure tools recapitulated in the Appendix.
\begin{notation}[General notation] \label{not:2.1} \upshape In what follows, $\mathbb{R}^+$ shall stand for $(0,+\infty)$. We will denote by $\bbM^{d\times d}$
the space of $d{\times} d$
matrices. We consider $\bbM^{d\times d}$ endowed with the Frobenius inner product
$\eta : \xi : = \sum_{i j} \eta_{ij}\xi_{ij}$ for two matrices $\eta = (\eta_{ij})$ and $\xi = (\xi_{ij})$, which induces the matrix norm $|\cdot|$. $\bbM_\mathrm{sym}^{d\times d}$ stands for the subspace of symmetric matrices, and $\bbM_\mathrm{D}^{d\times d}$ for the subspace of symmetric matrices with null trace. In fact, $\bbM_\mathrm{sym}^{d\times d} = \bbM_\mathrm{D}^{d\times d} \oplus \mathbb{R} I$ ($I$ denoting the identity matrix), since every $\eta \in \bbM_\mathrm{sym}^{d\times d}$ can be written as \[ \eta = \eta_\mathrm{D}+ \frac{\mathrm{tr}(\eta)}d I \] with $\eta_\mathrm{D}$ the orthogonal projection of $\eta$ into $\bbM_\mathrm{D}^{d\times d} $. We will refer to $\eta_\mathrm{D}$ as the deviatoric part of $\eta$. \par With the symbol $\odot$ we will denote the symmetrized tensor product of two vectors $a,\, b \in \mathbb{R}^d$, defined as the symmetric matrix with entries $ \frac{a_ib_j + a_j b_i}2$. Note that the trace $\mathrm{tr}(a \odot b)$ coincides with the scalar product $a \cdot b$. \par Given a Banach space $X$ we shall
use the symbol $\pairing{}{X}{\cdot}{\cdot}$ for the duality pairing between $X^*$ and $X$; if $X$ is a Hilbert space, $(\cdot,\cdot)_X$ will stand for its inner product. To avoid overburdening notation, we shall often write $\| \cdot\|_X$ both for the norm on $X$, and on the product space $X \times \ldots \times X$. With the symbol $\overline{B}_{1,X}(0)$ we will denote the closed unitary ball in $X$. We shall denote by the symbols
\[ \text{(i)} \ \mathrm{B}([0,T]; X), \, \qquad \text{(ii)} \ \mathrm{C}^0_{\mathrm{weak}}([0,T];X), \, \qquad \text{(iii)} \ \mathrm{BV} ([0,T]; X)
\]
the spaces of functions from $[0,T]$ with values in $ X$ that are defined at \emph{every} $t \in [0,T]$ and: (i) are measurable on $[0,T]$; (ii) are \emph{weakly} continuous on $[0,T]$; (iii) have bounded variation on $[0,T]$.
\par Finally, we shall use the symbols $c,\,c',\, C,\,C'$, etc., whose meaning may vary even within the same line, to denote various positive constants depending only on known quantities. Furthermore, the symbols $I_i$, $i = 0, 1,... $, will be used as place-holders for several integral terms (or sums of integral terms) popping up in the various estimates: we warn the reader that we will not be self-consistent with the numbering, so that, for instance, the symbol $I_1$ will occur several times with different meanings. \end{notation}
\paragraph{\bf Acknowledgements.} I am grateful to the two anonymous referees for reading this paper very carefully and for several constructive suggestions.
\section{\bf Main results for the thermoviscoplastic system} \label{s:2} First, in Section \ref{ss:2.1}, for the thermoviscoplastic system (\ref{plast-PDE}, \ref{bc}) we establish all the basic assumptions on the reference configuration $\Omega$, on the tensors $\mathbb{C}, \, \mathbb{D},\, \mathbb{E}$, on the set of admissible stresses $K$ (and, consequently, on the dissipation potential $\mathrm{R}$), on the external data $H,\, h, \, F,\, g$, and $w$, and on the initial data $(\vartheta_0, \, u_0, \, \dot{u}_0, \, e_0, \, p_0)$. In Section \ref{ss:5.1} later on, we will revisit and strengthen some of these conditions in order to deal with the limiting perfectly plastic system. In view of this, to distinguish the two sets of assumptions, we will label them by indicating the number of the section (i.e., $2$ for the thermoviscoplastic, and $5$ for the perfectly plastic, system). \par Second, in Sec.\ \ref{ss:2.2} we introduce the weak solvability concepts for the (Cauchy problem associated with the) viscoplastic system (\ref{plast-PDE}, \ref{bc}), and state our existence results in Sec.\ \ref{ss:2.3}.
\subsection{Setup} \label{ss:2.1} \paragraph{{\em The reference configuration}.} Let $\Omega \subset \mathbb{R}^d$, $d\in \{2,3\}$, be a bounded domain, with Lipschitz boundary; we set $Q: = (0,T) \times \Omega $.
The boundary $\partial\Omega $ is given by \begin{equation} \label{Omega-s2} \tag{2.$\Omega$} \begin{gathered} \partial \Omega = \Gamma_\mathrm{Dir} \cup \Gamma_\mathrm{Neu} \cup \partial\Gamma \quad \text{ with $\Gamma_\mathrm{Dir}, \,\Gamma_\mathrm{Neu}, \, \partial\Gamma$ pairwise disjoint,} \\ \text{
$\Gamma_\mathrm{Dir}$ and $\Gamma_\mathrm{Neu}$ relatively open in $\partial\Omega$, and $ \partial\Gamma$ their relative boundary in $\partial\Omega$,}
\\ \text{
with Hausdorff measure $\calH^{d-1}(\partial\Gamma)=0$.}
\end{gathered}
\end{equation}
We will denote by $|\Omega|$ the Lebesgue measure of $\Omega$.
On the Dirichlet part $\Gamma_\mathrm{Dir}$, for which we assume $\calH^{d-1}(\Gamma_\mathrm{Dir})>0$, we shall prescribe the displacement, while on $\Gamma_\mathrm{Neu}$ we will impose a Neumann condition. The trace of a function $v$ on $\Gamma_\mathrm{Dir}$ or $\Gamma_\mathrm{Neu}$ shall still be denoted by the symbol $v$.
The symbol $H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)$ shall indicate the subspace of functions
of $H^1(\Omega;\mathbb{R}^d)$ with null trace on $\Gamma_\mathrm{Dir}$.
The symbol $W_\mathrm{Dir}^{1,p}(\Omega;\mathbb{R}^d)$, $p>1,$ shall denote the analogous $W^{1,p}$-space.
In what follows, we shall extensively use Korn's inequality (cf.\ \cite{GeySu86}): for every $1<p<\infty$
there exists a constant $C_K =C_K(\Omega, p)>0$
such that there holds \begin{equation} \label{Korn}
\| u \|_{W^{1,p}(\Omega;\mathbb{R}^d)} \leq C_K \| \sig u \|_{L^p (\Omega;\bbM_\mathrm{sym}^{d \times d})} \qquad \text{for all } u \in W_\mathrm{Dir}^{1,p}(\Omega;\mathbb{R}^d)\,. \end{equation}
Finally, we will use the notation \begin{equation} \label{label-added}
W_+^{1,p}(\Omega):= \left\{\zeta \in W^{1,p}(\Omega)\, : \ \zeta(x) \geq 0 \quad \text{for a.a. } x \in \Omega \right\}, \quad \text{ and analogously for } W_-^{1,p}(\Omega). \end{equation}
\paragraph{{\em Kinematic admissibility and stress}.} First of all, let us formalize the decomposition of the linearized strain $ \sig u $ as the sum of the elastic and the plastic strain. Given a function $w \in H^1(\Omega;\mathbb{R}^d)$, we say that a triple $(u,e,p)$ is \emph{kinematically admissible with boundary datum $w$}, and write $(u,e,p) \in \mathcal{A}(w)$, if \begin{subequations} \label{kin-adm} \begin{align} & u \in H^1(\Omega;\mathbb{R}^d), \quad e \in L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \quad p \in L^2(\Omega;\bbM_\mathrm{D}^{d\times d}), \\ & \sig u = e+p \quad \text{a.e.\ in }\, \Omega, \\ & u = w \quad \text{on } \Gamma_\mathrm{Dir}. \end{align} \end{subequations} \par
The elasticity, viscosity, and thermal expansion tensors are symmetric and fulfill
\begin{equation} \label{elast-visc-tensors} \tag{2.$\mathrm{T}$} \begin{gathered} \mathbb{C} , \, \mathbb{D} \in L^\infty(\Omega; \mathrm{Lin}(\bbM_\mathrm{sym}^{d \times d})), \quad \mathbb{E} \in L^\infty(\Omega; \bbM_\mathrm{sym}^{d \times d})\,, \text{ and }
\\
\exists\, C_{\mathbb{C}}^1,\, C_{\mathbb{C}}^2, \, C_{\mathbb{D}}^1, \, C_{\mathbb{D}}^2>0 \ \ \text{for a.a. } x \in \Omega \ \ \forall\, A \in \bbM_\mathrm{sym}^{d\times d} \, : \quad \begin{cases}
& C_{\mathbb{C}}^1 |A|^2 \leq \mathbb{C}(x) A : A \leq C_{\mathbb{C}}^2 |A|^2,
\\
& C_{\mathbb{D}}^1 |A|^2 \leq \mathbb{D}(x) A : A \leq C_{\mathbb{D}}^2 |A|^2,
\end{cases}
\end{gathered} \end{equation}
where $\mathrm{Lin}(\bbM_\mathrm{sym}^{d \times d})$ denotes the space of linear operators from $\bbM_\mathrm{sym}^{d \times d}$ to $\bbM_\mathrm{sym}^{d \times d}$. Observe that with \eqref{elast-visc-tensors} we also encompass in our analysis the case of an anisotropic and inhomogeneous material. Throughout the paper, we will use the short-hand notation \begin{equation} \label{tensor-B} \mathbb{B}: = \mathbb{C} \mathbb{E} \end{equation} for the $(d{\times}d)$-matrix arising from the multiplication of $\mathbb{C}$ and $\mathbb{E}$. \begin{remark} \label{rmk:lorosi} \upshape In \cite{DMSca14QEPP} the viscosity tensor $\mathbb{D}$ was assumed (constant in space and) positive semidefinite, only: In particular, the case $\mathbb{D} \equiv 0$ was encompassed in the existence and vanishing-viscosity analysis. We are not able to extend our own analysis in this direction, though. In fact, the coercivity condition required on $\mathbb{D}$ (joint with $\sig{\dot u} = \dot e + \dot p$, following from kinematic admissibility), will play a crucial role in estimating the term $\iint \vartheta \mathbb{B}{:} \sig{\dot u} \;\!\mathrm{d} x \;\!\mathrm{d} t$, which arises from
the mechanical energy balance \eqref{mech-enbal} ahead. \end{remark}
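To give a rough idea of the role of this coercivity: by kinematic admissibility and Young's inequality one has, formally, for every $\delta>0$,
\begin{equation*}
\left| \int_\Omega \vartheta\, \mathbb{B} : \sig{\dot u} \;\!\mathrm{d} x \right| = \left| \int_\Omega \vartheta\, \mathbb{B} : (\dot e + \dot p) \;\!\mathrm{d} x \right| \leq \delta \int_\Omega \left( |\dot e|^2 + |\dot p|^2 \right) \;\!\mathrm{d} x + \frac1{2\delta} \int_\Omega |\mathbb{B}|^2\, \vartheta^2 \;\!\mathrm{d} x\,,
\end{equation*}
and for $\delta$ sufficiently small the first integral on the right-hand side can be absorbed into the dissipative terms $\mathbb{D} \dot e : \dot e$ and $|\dot p|^2$ on the left-hand side of \eqref{mech-enbal}, thanks to the lower bound $\mathbb{D} A : A \geq C_{\mathbb{D}}^1 |A|^2$ from \eqref{elast-visc-tensors}; this absorption is clearly unavailable when $\mathbb{D} \equiv 0$.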
\paragraph{{\em External heat sources}.} For the volume and boundary heat sources $H$ and $h$ we require \begin{align}
\label{heat-source}
\tag{2.$\mathrm{H}_1$}
& H \in L^1(0,T;L^1(\Omega)) \cap L^2 (0,T; H^1(\Omega)^*), && H\geq 0 \quad\hbox{a.e. in } Q\,,
\\
\label{dato-h}
\tag{2.$\mathrm{H}_2$}
& h \in L^1 (0,T; L^2(\partial \Omega)), && h \geq 0 \quad\hbox{a.e. in } (0,T)\times \partial \Omega\,. \end{align} Indeed, the positivity of $H$ and $h$ is necessary for obtaining the strict positivity of the temperature $\vartheta$. \paragraph{{\em Body force and traction}.} Our basic conditions on the volume force $F$ and the assigned traction $g$ are \begin{equation} \label{data-displ}
\tag{2.$\mathrm{L}_1$} F\in L^2(0,T; H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*), \qquad g \in L^2(0,T; H_{00,\Gamma_\mathrm{Dir}}^{1/2}(\Gamma_\mathrm{Neu}; \mathbb{R}^d)^*), \end{equation} recalling that $ H_{00,\Gamma_\mathrm{Dir}}^{1/2}(\Gamma_\mathrm{Neu}; \mathbb{R}^d)$ is the space of functions $\gamma \in H^{1/2} (\Gamma_\mathrm{Neu};\mathbb{R}^d)$ such that there exists $\tilde\gamma \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)$ with $\tilde\gamma = \gamma $ in $\Gamma_\mathrm{Neu}$. \par Furthermore, for technical reasons that will be expounded in Remark \ref{rmk:diffic-1-test} ahead (cf.\ also the text preceding the proof of Proposition \ref{prop:aprio}), \underline{in order to allow for a non-zero traction $g$}, also for the viscoplastic system we will need to require a \emph{uniform safe load} type condition,
which usually occurs in the analysis of perfectly plastic systems, cf.\ Sec.\ \ref{s:5} later on. Namely, we impose that there exists a function $\varrho: [0,T] \to L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})$ solving for almost all $t\in (0,T)$ the following elliptic problem \[
\begin{cases} - \mathrm{div}(\varrho(t)) = F(t) & \text{in } \Omega, \\ \varrho(t) \nu = g(t) & \text{on } \Gamma_\mathrm{Neu} \end{cases} \] such that \begin{equation} \label{safe-load}
\tag{2.$\mathrm{L}_2$} \varrho \in W^{1,1}(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})) \qquad \text{and} \qquad {\varrho}_\mathrm{D}
\in L^1(0,T;L^\infty (\Omega; \bbM_\mathrm{D}^{d\times d}))\,. \end{equation}
\par Indeed, condition \eqref{safe-load} will enter into play only starting from the derivation of a priori estimates on the approximate solutions to the viscoplastic system, uniform with respect to the time discretization parameter $\tau$. When not explicitly using \eqref{safe-load}, to shorten notation we will incorporate the volume force $F$ and the traction $g$ into the total load induced by them, namely the function $\mathcal{L}: (0,T) \to H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*$ given at $t\in (0,T)$ by \begin{equation} \label{total-load} \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\mathcal{L}(t)}{u}: = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{F(t)}{u} + \pairing{}{H_{00,\Gamma_\mathrm{Dir}}^{1/2}(\Gamma_\mathrm{Neu}; \mathbb{R}^d)}{g(t)}{u} \qquad \text{for all } u \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d), \end{equation} which fulfills $\mathcal{L} \in L^2(0,T; H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*)$ in view of \eqref{data-displ}.
\paragraph{{\em Dirichlet loading}.} Finally, we will suppose that the hard device $w$ to which the body is subject on $\Gamma_\mathrm{Dir}$ is the trace on $\Gamma_\mathrm{Dir}$ of a function, denoted by the same symbol, fulfilling \begin{equation} \label{Dirichlet-loading} \tag{2.$\mathrm{W}$} w \in L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d)) \cap W^{2,1} (0,T;H^1(\Omega;\mathbb{R}^d)) \cap H^2(0,T; L^2(\Omega;\mathbb{R}^d))\,. \end{equation} We postpone to Remark \ref{rmk:diffic-1-test} some explanations on the use of, and need for, conditions \eqref{Dirichlet-loading}. Let us only mention here that the requirement $w\in L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d))$ could be replaced by asking for $\mathbb{B}{:} \sig w=0$ a.e.\ in $Q$, as imposed, e.g., in \cite{Roub-PP}.
\paragraph{{\em The weak formulation of the momentum balance}.} The variational formulation of \eqref{mom-balance}, supplemented with the boundary conditions \eqref{bc-u-1} and \eqref{bc-u-2}, reads \begin{equation} \label{w-momentum-balance} \begin{aligned} \rho\int_\Omega \ddot{u}(t) v \;\!\mathrm{d} x + \int_\Omega \left(\mathbb{D} \dot{e}(t) + \mathbb{C} e(t) - \vartheta(t) \mathbb{B} \right): \sig v \;\!\mathrm{d} x & = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\calL(t)}{v} \\ & \qquad \text{for all } v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d), \ \text{for a.a. } t \in (0,T)\,. \end{aligned} \end{equation} We will often use the short-hand notation $-\mathrm{div}_{\mathrm{Dir}}$ for the elliptic operator defined by \begin{equation} \label{div-Gdir} \pairing{}{ H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d) }{-\mathrm{div}_{\mathrm{Dir}}(\sigma)}{v}: = \int_\Omega \sigma : \sig v \;\!\mathrm{d} x \qquad \text{for all } v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d) \,. \end{equation}
\paragraph{{\em The plastic dissipation}.} Prior to stating our precise assumptions on the multifunction $K: \Omega \times \mathbb{R}^+ \rightrightarrows \bbM_\mathrm{D}^{d \times d}$, following \cite{Castaing-Valadier77} let us recall the notions of measurability, lower semicontinuity, and upper semicontinuity, for a general multifunction $ \mathsf{F} : X \rightrightarrows Y$. Although the definitions and results given in \cite{Castaing-Valadier77} cover much more general situations, for simplicity here we shall confine the discussion to the case of a topological measurable space $(X, \mathscr{M})$, and a (separable) Hilbert space $Y$. For a set $B \subset Y$, we define \[ \mathsf{F}^{-1}(B): = \{ x\in X\, : \ \mathsf{F}(x) \cap B \neq \emptyset\}. \]
We say that \begin{subequations} \label{multifuncts-props} \begin{align} & \label{meas} \text{$\mathsf{F}$ is measurable if for every open subset $U \subset Y$, $ \mathsf{F}^{-1}(U) \in \mathscr{M}$;} \\ & \label{lsc} \text{$\mathsf{F}$ is lower semicontinuous if for every open set $U\subset Y$, the set $ \mathsf{F}^{-1}(U)$ is open; }\\ & \label{usc} \text{$\mathsf{F}$ is upper semicontinuous if for every open set $U\subset Y$, the set $\{ x \in X\, : \ \mathsf{F}(x) \subset U \}$ is open.} \end{align} \end{subequations} Finally, $\mathsf{F}$ is continuous if it is both lower and upper semicontinuous. \par Let us now turn back to the multifunction $K : \Omega \times \mathbb{R}^+ \rightrightarrows \bbM_\mathrm{D}^{d \times d}$. We suppose that \begin{equation} \label{measutab-cont-K} \tag{2.$\mathrm{K}_1$} \begin{aligned} & K : \Omega \times \mathbb{R}^+ \rightrightarrows \bbM_\mathrm{D}^{d \times d} && \text{ is measurable w.r.t.\ the variables $(x,\vartheta)$,} \\ & K(x, \cdot) : \mathbb{R}^+ \rightrightarrows \bbM_\mathrm{D}^{d \times d} && \text{ is continuous} \quad \text{for almost all } x \in \Omega. \end{aligned} \end{equation} Furthermore, we require that \begin{equation} \label{elastic-domain} \tag{2.$\mathrm{K}_2$} \begin{aligned} K(x,\vartheta) \text{ is a convex and compact set in } \bbM_\mathrm{D}^{d \times d} \qquad \text{for all } \vartheta \in \mathbb{R}^+, \text{ for almost all } x \in \Omega, \\ \exists\, 0<c_r<C_R \quad \text{for a.a. } x \in \Omega, \ \forall\, \vartheta \in \mathbb{R}^+ \, : \quad B_{c_r}(0) \subset K(x,\vartheta) \subset B_{C_R}(0). \end{aligned} \end{equation} \par Therefore, the support function associated with the multifunction $K$, i.e. \begin{equation} \label{1-homogeneous-dissip} \mathrm{R}: \Omega \times \mathbb{R}^+ \times \bbM_\mathrm{D}^{d \times d} \to [0,+\infty) \quad \text{defined by } \mathrm{R}(x,\vartheta, \dot p ): = \sup_{\pi \in K(x,\vartheta)} \pi : \dot p \end{equation} is positive, with $\mathrm{R}(x,\vartheta, \cdot) : \bbM_\mathrm{D}^{d \times d} \to [0,+\infty) $ convex and $1$-positively homogeneous for almost all $x \in \Omega$ and for all $\vartheta \in \mathbb{R}^+$. By the first of \eqref{measutab-cont-K}, the function $\mathrm{R}: \Omega \times \mathbb{R}^+ \times \bbM_\mathrm{D}^{d \times d} \to [0,+\infty) $ is measurable. Moreover, by the second of \eqref{measutab-cont-K}, in view of \cite[Thms.\ II.20, II.21]{Castaing-Valadier77} (cf.\ also \cite[Prop.\ 2.4]{Sol09}) the function \begin{subequations} \label{hypR} \begin{align} & \label{hypR-lsc} \mathrm{R}(x,\cdot, \cdot): \mathbb{R}^+ \times \bbM_\mathrm{D}^{d \times d} \to [0,+\infty) \text{ is (jointly) lower semicontinuous}, \intertext{for almost all $x \in \Omega$, i.e.\ $\mathrm{R}$ is a \emph{normal integrand}, and} & \label{hypR-cont} \mathrm{R}(x,\cdot, \dot p): \mathbb{R}^+ \to \mathbb{R}^+ \text{ is continuous for every $\dot p\in \bbM_\mathrm{D}^{d\times d}$}. \end{align} \end{subequations} Finally, it follows from the second of \eqref{elastic-domain} that \begin{subequations} \label{cons-lin-growth} \begin{equation} \label{linear-growth}
c_r|\dot p| \leq \mathrm{R}(x,\vartheta, \dot p ) \leq C_R |\dot p| \qquad \text{for all } (\vartheta, \dot p) \in \mathbb{R}^+ \times \bbM_\mathrm{D}^{d \times d} \text{ for almost all }x \in \Omega\,, \end{equation} and that \begin{equation} \label{bounded-subdiff} \partial_{\dot p} \mathrm{R}(x,\vartheta, \dot{p}) \subset \partial_{\dot p} \mathrm{R}(x,\vartheta, 0) = K(x,\vartheta) \subset B_{C_R}(0) \qquad \text{for all } (\vartheta, \dot p) \in \mathbb{R}^+ \times \bbM_\mathrm{D}^{d \times d} \quad \text{for almost all }x \in \Omega. \end{equation} \end{subequations} \par Finally, we also introduce the \emph{plastic dissipation potential} $\calR:L^1(\Omega; \mathbb{R}^+) \times L^1(\Omega;\bbM_\mathrm{D}^{d\times d})$ given by \begin{equation} \label{plastic-dissipation-functional} \calR(\vartheta, \dot p): = \int_\Omega \mathrm{R}(x,\vartheta(x), \dot p(x)) \;\!\mathrm{d} x\,. \end{equation}
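By way of illustration, the classical von Mises yield criterion fits into this setting: denoting by $\sigma_\mathrm{Y}: \Omega \times \mathbb{R}^+ \to [c_r, C_R]$ a yield stress coefficient (a symbol used here for illustrative purposes only), measurable in $x$ and continuous in $\vartheta$, one takes
\begin{equation*}
K(x,\vartheta): = \left\{ \pi \in \bbM_\mathrm{D}^{d\times d}\, : \ |\pi| \leq \sigma_\mathrm{Y}(x,\vartheta) \right\}, \qquad \text{so that} \qquad \mathrm{R}(x,\vartheta, \dot p) = \sigma_\mathrm{Y}(x,\vartheta)\, |\dot p|\,,
\end{equation*}
and conditions \eqref{measutab-cont-K} and \eqref{elastic-domain} are satisfied.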
\paragraph{{\em The plastic flow rule}.} Taking into account the $1$-positive homogeneity of $\mathrm{R}(x,\vartheta, \cdot)$, which yields the following characterization of $\partial_{\dot p} \mathrm{R}(x,\vartheta, \dot{p}): \bbM_\mathrm{D}^{d\times d} \rightrightarrows \bbM_\mathrm{D}^{d\times d}$: \begin{equation} \label{characterization-subdiff} \zeta \in \partial_{\dot p} \mathrm{R}(x,\vartheta, \dot{p}) \ \ \Leftrightarrow \ \ \begin{cases} & \zeta : \eta \leq \mathrm{R}(x,\vartheta, \eta) \quad \text{for all } \eta \in \bbM_\mathrm{D}^{d\times d} \\ & \zeta : \dot{p} = \mathrm{R}(x,\vartheta, \dot p), \end{cases}
\ \ \Leftrightarrow \ \begin{cases}
& \zeta \in \partial_{\dot p} \mathrm{R}(x,\vartheta, 0) = K(x,\vartheta),
\\
& \zeta : \dot{p} \geq \mathrm{R}(x,\vartheta, \dot p), \end{cases} \end{equation}
the plastic flow rule \begin{equation} \label{pl-flow} \partial_{\dot p} \mathrm{R}(x,\vartheta(t,x), \dot{p}(t,x)) + \dot{p}(t,x) \ni \sigma_\mathrm{D}(t,x) \qquad \text{for a.a. } (t,x) \in Q, \end{equation} reformulates as \begin{equation} \label{reform-pl-flow} \begin{cases}
\left( \sigma_\mathrm{D}(t,x) - \dot{p}(t,x) \right) : \eta \leq \mathrm{R}(x,\vartheta(t,x), \eta) \quad \text{for all } \eta \in \bbM_\mathrm{D}^{d\times d}
\\
\left( \sigma_\mathrm{D}(t,x) - \dot{p}(t,x) \right) : \dot{p}(t,x) \geq \mathrm{R}(x,\vartheta(t,x), \dot{p}(t,x)) \end{cases} \qquad \text{for a.a. } (t,x) \in Q\,. \end{equation} \paragraph{{\em Cauchy data}.} We will supplement the thermoviscoplastic system with initial data \begin{subequations} \label{Cauchy-data} \begin{align} \label{initial-teta} &
\vartheta_0 \in L^1(\Omega), \text{ fulfilling the strict positivity condition } \exists\, \vartheta_*>0: \ \inf_{x\in \Omega} \vartheta_0(x) \geq \vartheta_*,
\\ & \label{initial-u} u_0 \in H_\mathrm{Dir}^{1} (\Omega;\mathbb{R}^d), \ \dot{u}_0 \in L^2 (\Omega;\mathbb{R}^d), \\ & \label{initial-p} e_0 \in L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \quad p_0 \in L^2(\Omega;\bbM_\mathrm{D}^{d\times d}) \quad \text{such that } (u_0, e_0, p_0 ) \in \mathcal{A}(w(0))\,. \end{align} \end{subequations} \subsection{Weak solvability concepts for the thermoviscoplastic system} \label{ss:2.2} Throughout this section, we shall suppose that the functions $\mathbb{C}, \ldots, \mathrm{R}$, the data $H,\ldots, w$, and the initial data $(\vartheta_0, \, u_0, \dot{u}_0, e_0, p_0)$ fulfill the conditions stated in Section \ref{ss:2.1}. We now motivate the weak solvability concepts for the (Cauchy problem associated with the) viscoplastic system
(\ref{plast-PDE}, \ref{bc}) with some heuristic calculations. \paragraph{{\em Heuristics for entropic and weak solutions to system
(\ref{plast-PDE}, \ref{bc})}.} As already mentioned in the Introduction, we shall formulate the heat equation \eqref{heat} by means of an entropy inequality and a total energy inequality, featuring the stored energy of the system. The latter is given by the sum of the internal and of the elastic energies, i.e. \begin{equation} \label{stored-energy} \calE(\vartheta, u, e, p)= \calE(\vartheta, e):= \calF(\vartheta) + \calQ(e) \quad \text{with } \quad \begin{cases} \calF(\vartheta) : = \int_\Omega \vartheta \;\!\mathrm{d} x, \\ \calQ(e): =
\frac12 \int_\Omega \mathbb{C} e: e \;\!\mathrm{d} x\,.
\end{cases} \end{equation} \par Let us formally derive (in particular, without specifying the needed regularity on the solution quadruple $(\vartheta,u,e,p)$) the total energy inequality (indeed, we will formally obtain a total energy \emph{balance}), starting from the energy estimate associated with system (\ref{plast-PDE}, \ref{bc}). The latter consists in testing the momentum balance by $\dot u - \dot w$, the heat equation by $1$, and the plastic flow rule by $\dot p$, adding the resulting relations and integrating in space and over a generic interval $(s,t) \subset (0,T)$. More precisely, the test of \eqref{mom-balance} and of \eqref{flow-rule} yields, after some elementary calculations, \begin{equation} \label{intermediate-mech-enbal} \begin{aligned} &
\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x + \int_s^t\int_\Omega \left( \mathbb{D} \dot{e} + \mathbb{C} e - \vartheta \mathbb{B} \right) : \sig{\dot{u}} \;\!\mathrm{d} x \;\!\mathrm{d} r
+ \int_s^t \int_\Omega \left( |\dot p|^2 {+} \mathrm{R}(\vartheta, \dot p) \right) \;\!\mathrm{d} x \;\!\mathrm{d} r \\
& = \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u- \dot w} \;\!\mathrm{d} r + \int_s^t \int_\Omega \left( \mathbb{D} \dot{e} + \mathbb{C} e - \vartheta \mathbb{B} \right) : \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r \\ &\quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) +
\int_s^t \int_\Omega \sigma_\mathrm{D} : \dot{p} \;\!\mathrm{d} x \;\!\mathrm{d} r\,. \end{aligned} \end{equation} Now, taking into account that $ \sig{\dot u} =\dot e + \dot p$ by the kinematical admissibility condition, rearranging some terms one has that \[ \begin{aligned}
\int_s^t\int_\Omega \left( \mathbb{D} \dot{e} + \mathbb{C} e - \vartheta \mathbb{B} \right) : \sig{\dot{u}} \;\!\mathrm{d} x \;\!\mathrm{d} r = & \int_s^t \int_\Omega \left( \mathbb{D} \dot e : \dot e + \mathbb{C} \dot e : e \right) \;\!\mathrm{d} x \;\!\mathrm{d} r
- \int_s^t \int_\Omega \vartheta \mathbb{B} : \dot e \;\!\mathrm{d} x \;\!\mathrm{d} r
\\
& \quad + \int_s^t \int_\Omega \left( \mathbb{D} \dot e+ \mathbb{C} e - \vartheta \mathbb{B} \right) : \dot p \;\!\mathrm{d} x \;\!\mathrm{d} r \,. \end{aligned} \] Substituting this in \eqref{intermediate-mech-enbal} and noting that $ \int_s^t \int_\Omega \left(\mathbb{D} \dot e + \mathbb{C} e - \vartheta \mathbb{B} \right) : \dot{p} \;\!\mathrm{d} x \;\!\mathrm{d} r = \int_s^t \int_\Omega \sigma_\mathrm{D} : \dot{p} \;\!\mathrm{d} x \;\!\mathrm{d} r $, so that the last term on the right-hand side of \eqref{intermediate-mech-enbal} cancels out, we get
the \emph{mechanical energy balance}, featuring the kinetic and dissipated energies \begin{equation} \label{mech-enbal} \begin{aligned} &
\dddn{\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x}{kinetic} +\dddn{ \int_s^t\int_\Omega \left( \mathbb{D} \dot e: \dot e + |\dot p|^2 \right) \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \calR(\vartheta, \dot p) \;\!\mathrm{d} r }{dissipated}
+ \calQ(e(t)) \\
& = \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x + \calQ(e(s)) + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u{-} \dot w} \;\!\mathrm{d} r + \int_s^t \int_\Omega \vartheta \mathbb{B} : \dot e \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & \quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) + \int_s^t \int_\Omega \left( \mathbb{D} \dot{e} + \mathbb{C} e - \vartheta \mathbb{B} \right) : \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r,
\end{aligned} \end{equation} which will also play a significant role in our analysis. \par Summing this with the heat equation tested by $1$ and integrated in time and space gives, after cancellation of some terms, the \emph{total energy balance} \begin{equation} \label{total-enbal} \begin{aligned} &
\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(t), e(t)) \\
& = \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(s), e(s)) + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u{-} \dot w} \;\!\mathrm{d} r +\int_s^t \int_\Omega H \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \;\!\mathrm{d} S \;\!\mathrm{d} r \\ & \quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) + \int_s^t \int_\Omega \sigma: \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r
\,.\end{aligned} \end{equation}
As for the \emph{entropy inequality}, let us only mention that it can be formally obtained by multiplying the heat equation \eqref{heat} by $\varphi/\vartheta$, with $\varphi $ a smooth and \emph{positive} test function. Integrating in space and over a generic interval $(s,t) \subset (0,T)$ leads to the identity \begin{equation} \label{formal-entropy-eq} \begin{aligned}
& \int_s^t \int_\Omega \partial_t \log(\vartheta) \varphi \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_\Omega \left( \kappa(\vartheta) \nabla \log(\vartheta) \nabla \varphi - \kappa(\vartheta) \frac\varphi\vartheta \nabla \log(\vartheta) \nabla \vartheta \right) \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & = \int_s^t \int_\Omega \left( H+ \mathrm{R}(\vartheta,\dot{p}) + |\dot{p}|^2+ \mathbb{D} \dot{e} : \dot{e} -\vartheta \mathbb{B} : \dot{e} \right) \frac{\varphi}\vartheta \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \frac\varphi\vartheta \;\!\mathrm{d} S \;\!\mathrm{d} r\,. \end{aligned} \end{equation} The entropic solution concept given in Definition \ref{def:entropic-sols} below will feature the inequality version of \eqref{formal-entropy-eq}, where the first term on the left-hand side is integrated by parts in time, as well as the inequality version of \eqref{total-enbal}.
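For the reader's convenience, let us detail how the second integral on the left-hand side of \eqref{formal-entropy-eq} arises: integrating by parts, and taking into account that the boundary contribution, in view of \eqref{bc-teta}, gives rise to the last integral in \eqref{formal-entropy-eq}, one finds
\begin{equation*}
-\int_\Omega \mathrm{div}(\kappa(\vartheta)\nabla \vartheta)\, \frac{\varphi}{\vartheta} \;\!\mathrm{d} x = \int_\Omega \kappa(\vartheta) \nabla \vartheta \cdot \nabla \Big(\frac{\varphi}{\vartheta}\Big) \;\!\mathrm{d} x - \int_{\partial\Omega} h\, \frac{\varphi}{\vartheta} \;\!\mathrm{d} S = \int_\Omega \left( \kappa(\vartheta) \nabla \log(\vartheta) \nabla \varphi - \kappa(\vartheta) \frac{\varphi}{\vartheta} \nabla \log(\vartheta) \nabla \vartheta \right) \;\!\mathrm{d} x - \int_{\partial\Omega} h\, \frac{\varphi}{\vartheta} \;\!\mathrm{d} S\,,
\end{equation*}
where we have used that $\nabla \big(\tfrac{\varphi}{\vartheta}\big) = \tfrac{\nabla \varphi}{\vartheta} - \varphi\, \tfrac{\nabla \vartheta}{\vartheta^2}$ and $\nabla \log(\vartheta) = \tfrac{\nabla \vartheta}{\vartheta}$.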
\begin{definition}[Entropic solutions to the thermoviscoplastic system] \label{def:entropic-sols}
Given initial data $(\vartheta_0,u_0, \dot{u}_0, e_0, p_0)$ fulfilling \eqref{Cauchy-data}, we call a quadruple $(\vartheta,u,e,p)$ an \emph{entropic solution} to the Cauchy problem for system (\ref{plast-PDE}, \ref{bc}), if \begin{subequations} \label{regularity} \begin{align} \label{reg-teta} & \vartheta \in L^2(0,T; H^1(\Omega))\cap L^\infty(0,T;L^1(\Omega)), \\ & \label{reg-log-teta} \log(\vartheta) \in L^2(0,T; H^1(\Omega)), \\ & \label{reg-u} u \in H^1(0,T; H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)) \cap W^{1,\infty}(0,T; L^2(\Omega;\mathbb{R}^d)) \cap H^2(0,T; H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)^*), \\
& \label{reg-e} e \in H^1(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})), \\ & \label{reg-p} p \in H^1(0,T; L^2(\Omega;\bbM_\mathrm{D}^{d\times d})), \end{align} \end{subequations} $(u,e,p)$ comply with
the initial conditions
\begin{subequations}
\label{initial-conditions} \begin{align}
\label{iniu} & u(0,x) = u_0(x), \ \ \dot{u}(0,x) = \dot{u}_0(x) & \text{for a.a. }\, x \in
\Omega,
\\
\label{inie} & e(0,x) = e_0(x) & \text{for a.a. }\, x \in
\Omega,
\\
\label{inichi} & p(0,x) = p_0(x) & \text{for a.a. }\, x \in
\Omega, \end{align} \end{subequations}
(while the initial condition for $\vartheta$ is implicitly formulated in \eqref{entropy-ineq} and \eqref{total-enineq} below), and with \begin{itemize} \item[-] the \emph{strict positivity} of $\vartheta$: \begin{equation} \label{teta-strict-pos} \exists\, \bar\vartheta>0 \ \text{for a.a. } (t,x) \in Q\, : \quad \vartheta(t,x) > \bar\vartheta; \end{equation} \item[-] the \emph{entropy inequality}, to hold for almost all $t \in (0,T]$ and almost all $s\in (0,t)$, and for $s=0$ (where $\log(\vartheta(0))$ is to be understood as $\log(\vartheta_0)$), \begin{equation} \label{entropy-ineq} \begin{aligned}
& \int_s^t \int_\Omega \log(\vartheta) \dot{\varphi} \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_s^t \int_\Omega \left( \kappa(\vartheta) \nabla \log(\vartheta) \nabla \varphi - \kappa(\vartheta) \frac\varphi\vartheta \nabla \log(\vartheta) \nabla \vartheta\right) \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & \leq
\int_\Omega \log(\vartheta(t)) \varphi(t) \;\!\mathrm{d} x - \int_\Omega \log(\vartheta(s)) \varphi(s) \;\!\mathrm{d} x \\ & \quad
- \int_s^t \int_\Omega \left( H+ \mathrm{R}(\vartheta,\dot{p}) + |\dot{p}|^2+ \mathbb{D} \dot{e} : \dot{e} -\vartheta \mathbb{B} : \dot{e} \right) \frac{\varphi}\vartheta \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_s^t \int_{\partial\Omega} h \frac\varphi\vartheta \;\!\mathrm{d} S \;\!\mathrm{d} r \end{aligned} \end{equation} for all $\varphi $ in $L^\infty ([0,T]; W^{1,\infty}(\Omega)) \cap H^1 (0,T; L^{6/5}(\Omega))$, with $\varphi \geq 0$; \item[-] the \emph{total energy inequality}, to hold for almost all $t \in (0,T]$ and almost all $s\in (0,t)$, and for $s=0$: \begin{equation} \label{total-enineq} \begin{aligned} &
\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(t), e(t)) \\
& \leq \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta(s), e(s)) + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u{-} \dot w} \;\!\mathrm{d} r +\int_s^t \int_\Omega H \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \;\!\mathrm{d} S \;\!\mathrm{d} r \\ & \begin{aligned} \quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) & + \int_s^t \int_\Omega \sigma: \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r, \end{aligned}
\end{aligned} \end{equation} where
for $s=0$ we read $\vartheta(0)=\vartheta_0$, with the stress $\sigma$ given by the constitutive equation
\begin{equation}
\label{stress-consti}
\sigma= \mathbb{D}\dot{e} + \mathbb{C} e - \vartheta \mathbb{B} \qquad \text{a.e.\ in } Q;
\end{equation}
\item[-] the \emph{kinematic admissibility} condition \begin{equation} \label{kin-admis} (u(t,x), e(t,x), p(t,x)) \in \mathcal{A}(w(t,x)) \qquad \text{for a.a. } (t,x) \in Q; \end{equation} \item[-] the weak formulation \eqref{w-momentum-balance} of the \emph{momentum balance}; \item[-] the \emph{plastic flow rule} \eqref{pl-flow}. \end{itemize} \end{definition} \begin{remark} \label{rmk:feasibility} \upshape Observe that with the entropy inequality \eqref{entropy-ineq} we are tacitly claiming that, in addition to \eqref{reg-teta} and \eqref{reg-log-teta}, the temperature variable has the following summability properties \[
\kappa(\vartheta) | \nabla \log(\vartheta)|^2 \varphi \in L^1(Q), \qquad \kappa(\vartheta) \nabla \log(\vartheta) \in L^1(Q) \] for every positive admissible test function $\varphi$. In fact, we shall retrieve the above properties (and improve the second one, cf.\ \eqref{further-logteta} ahead), within the proof of Theorem \ref{mainth:1}. Furthermore, note that the integral $ \int_\Omega \log(\vartheta(t)) \varphi(t) \;\!\mathrm{d} x$ makes sense for almost all $t\in(0,T)$, since the estimate \[
|\log(\vartheta(t,x))|\leq \vartheta(t,x) + \frac1{\vartheta(t,x)} \leq \vartheta(t,x) + \frac1{\bar\vartheta} \qquad \text{for a.a. } (t,x) \in Q, \] (with the second inequality due to \eqref{teta-strict-pos}), and the fact that $\vartheta \in L^\infty (0,T; L^1(\Omega))$, guarantee that $\log(\vartheta) \in L^\infty (0,T; L^1(\Omega))$ itself. Finally, the requirement that $\varphi \in H^1 (0,T; L^{6/5}(\Omega))$ ensures that $\int_s^t \int_\Omega \log(\vartheta) \dot{\varphi} \;\!\mathrm{d} x \;\!\mathrm{d} r $ is a well-defined integral, since $\log(\vartheta) \in L^2(0,T;L^6(\Omega))$ by \eqref{reg-log-teta}. \par We refer to \cite[Rmk\ 2.6]{Rocca-Rossi} for a thorough discussion on the consistency between the entropic and the standard, weak formulation of the heat equation \eqref{heat}. Still, we may mention here that, to obtain the latter from the former formulation, one should test the entropy inequality by $\varphi= \vartheta$. Therefore, $\vartheta$ should have enough regularity as to make it an admissible test function for
\eqref{entropy-ineq}. \end{remark} \par In our second solvability concept for the initial-boundary value problem associated with system \eqref{plast-PDE}, the temperature has the enhanced time regularity \eqref{enh-teta-W11} below, which allows us to give an improved variational formulation of the heat equation \eqref{heat}. Observe that, in \cite{Rocca-Rossi} this solution notion was referred to as \emph{weak}. In this paper we will instead prefer the term \emph{weak energy solution}, in order to highlight the validity of the total energy \emph{balance} on \emph{every} interval $[s,t]\subset [0,T]$, cf.\ Corollary \ref{cor:total-enid} below. \begin{definition}[Weak energy solutions to the thermoviscoplastic system] \label{def:weak-sols}
Given initial data $(\vartheta_0,u_0, \dot{u}_0, e_0, p_0)$ fulfilling \eqref{Cauchy-data}, we call a quadruple $(\vartheta,u,e,p)$ a \emph{weak energy solution} to the Cauchy problem for system
(\ref{plast-PDE}, \ref{bc}), if \begin{itemize} \item[-]
in addition to the regularity and summability properties \eqref{regularity}, there holds
\begin{equation}
\label{enh-teta-W11}
\vartheta \in W^{1,1}(0,T; W^{1,\infty}(\Omega)^*),
\end{equation} \item[-] in addition to the initial conditions \eqref{initial-conditions}, $\vartheta $ complies with \begin{equation} \label{initeta} \vartheta(0) = \vartheta_0 \qquad \text{ in } W^{1,\infty}(\Omega)^*. \end{equation} \item[-] in addition to the strict positivity \eqref{teta-strict-pos}, the kinematic admissibility \eqref{kin-admis}, the weak momentum balance \eqref{w-momentum-balance}, and the flow rule \eqref{pl-flow}, $(\vartheta,u,e,p)$ comply for almost all $t \in (0,T)$ with the following weak formulation of the heat equation \begin{equation} \label{eq-teta} \begin{aligned}
& \pairing{}{W^{1,\infty}(\Omega)}{\dot\vartheta}{\varphi} + \int_\Omega \kappa(\vartheta) \nabla \vartheta\nabla\varphi \;\!\mathrm{d} x \\ & = \int_\Omega \left(H+
\mathrm{R}(\vartheta, \dot p) + |\dot p |^2 + \mathbb{D} \dot e : \dot{e} - \vartheta \mathbb{B} : \dot{e} \right) \varphi \;\!\mathrm{d} x + \int_{\partial\Omega} h \varphi \;\!\mathrm{d} S \quad \text{for all } \varphi \in W^{1,\infty}(\Omega). \end{aligned} \end{equation}
\end{itemize} \end{definition} Along the lines of Remark \ref{rmk:feasibility}, we may observe that, underlying the weak formulation \eqref{eq-teta} is the property $\kappa(\vartheta) \nabla \vartheta \in L^1(Q;\mathbb{R}^d)$, which shall be in fact (slightly) improved in Theorem \ref{mainth:2}. \par We conclude the section with the following result, under the (tacitly assumed) conditions from Sec.\ \ref{ss:2.1}. \begin{lemma} \label{cor:total-enid} \begin{enumerate} \item Let $(\vartheta, u,e,p)$ be either an \emph{entropic} or a \emph{weak energy solution} to (the Cauchy problem for) system (\ref{plast-PDE}, \ref{bc}). Then, the functions $(\vartheta, u,e,p)$ comply with the mechanical energy balance \eqref{mech-enbal} for every $0\leq s \leq t \leq T$. \item Let $(\vartheta, u,e,p)$ be a \emph{weak energy solution} to (the Cauchy problem for) system (\ref{plast-PDE}, \ref{bc}). Then, the total energy \emph{balance} \begin{equation} \label{total-enbal-delicate} \begin{aligned} &
\frac{\rho}2 \int_\Omega |\dot{u}(t)|^2 \;\!\mathrm{d} x + \pairing{}{W^{1,\infty}(\Omega)}{\vartheta(t)}{1} + \calQ(e(t)) \\
& = \frac{\rho}2 \int_\Omega |\dot{u}(s)|^2 \;\!\mathrm{d} x + \pairing{}{W^{1,\infty}(\Omega)}{\vartheta(s)}{1} + \calQ(e(s)) + \int_s^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}}{\dot u{-} \dot w} \;\!\mathrm{d} r +\int_s^t \int_\Omega H \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \;\!\mathrm{d} S \;\!\mathrm{d} r \\ & \quad +\rho \left( \int_\Omega \dot{u}(t) \dot{w}(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}(s) \dot{w}(s) \;\!\mathrm{d} x - \int_s^t \int_\Omega \dot{u}\ddot w \;\!\mathrm{d} x \;\!\mathrm{d} r \right) + \int_s^t \int_\Omega \sigma: \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r
\end{aligned} \end{equation} holds for all $0 \leq s \leq t \leq T$. \end{enumerate} \end{lemma} \noindent Observe that, since $\vartheta\in L^\infty(0,T; L^1(\Omega))$, there holds $ \pairing{}{W^{1,\infty}(\Omega)}{\vartheta(t)}{1} = \int_\Omega \vartheta(t) \;\!\mathrm{d} x = \calF(\vartheta(t))$ for almost all $t\in (0,T)$ and for $t=0$. For such $t$, \eqref{total-enbal-delicate}
may be thus rewritten in terms of the stored energy $\calE$ from \eqref{stored-energy}. \begin{proof} The energy balance \eqref{mech-enbal} follows from testing the momentum balance \eqref{w-momentum-balance} by $\dot u-\dot w$, the plastic flow rule by $\dot p$, adding the resulting relations, and integrating in time. \par As for \eqref{total-enbal-delicate}, it is sufficient to test the weak formulation \eqref{eq-teta} of the heat equation by $\varphi =1$, integrate in time taking into account that $\vartheta \in W^{1,1}(0,T; W^{1,\infty}(\Omega)^*) $, and add the resulting identity to \eqref{mech-enbal}. \end{proof}
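\noindent
For the reader's convenience, let us make the last step explicit: since $\nabla \varphi = 0$ for the constant test function $\varphi \equiv 1$, the $\kappa$-term in \eqref{eq-teta} disappears, and \eqref{eq-teta} reduces to
\begin{equation*}
\pairing{}{W^{1,\infty}(\Omega)}{\dot\vartheta}{1} = \int_\Omega \left( H + \mathrm{R}(\vartheta, \dot p) + |\dot p|^2 + \mathbb{D} \dot e : \dot e - \vartheta \mathbb{B} : \dot e \right) \;\!\mathrm{d} x + \int_{\partial\Omega} h \;\!\mathrm{d} S \qquad \text{for a.a.\ } t \in (0,T),
\end{equation*}
which, integrated on $[s,t]$ and added to \eqref{mech-enbal}, indeed yields \eqref{total-enbal-delicate}.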
\subsection{Existence results for the thermoviscoplastic system} \label{ss:2.3} Our first result states the existence of entropic solutions, under a mild growth condition on the thermal conductivity $\kappa$.
For shorter notation, in the statement below we shall write $(2.\mathrm{H})$ in place of \eqref{heat-source}, \eqref{dato-h}, and analogously $(2.\mathrm{L})$, $(2.\mathrm{K})$. \begin{maintheorem} \label{mainth:1} Assume \eqref{Omega-s2}, \eqref{elast-visc-tensors}, $(2.\mathrm{H})$, $(2.\mathrm{L})$,
\eqref{Dirichlet-loading}, and $(2.\mathrm{K})$.
In addition, suppose that \begin{equation} \label{hyp-K} \tag{2.$\kappa_1$} \begin{aligned} & \text{the function } \kappa: \mathbb{R}^+ \to \mathbb{R}^+ \ \text{ is
continuous and} \\ & \exists \, c_0, \, c_1>0, \ \mu>1 \ \ \forall\vartheta\in \mathbb{R}^+\, :\quad c_0 (1+ \vartheta^{\mu}) \leq \kappa(\vartheta) \leq c_1 (1+\vartheta^{\mu})\,. \end{aligned} \end{equation} Then, for every $(\vartheta_0, u_0, \dot{u}_0, e_0, p_0) $ satisfying \eqref{Cauchy-data} there exists an entropic solution $(\vartheta,u,e,p)$ such that, in addition,
$\vartheta$ complies with the positivity property \begin{equation} \label{strong-strict-pos}
\vartheta(t,x) \geq \bar\vartheta := \left( \bar{c} T + \frac1{\vartheta_*}\right)^{-1} \quad \text{for almost all $(t,x) \in Q$}, \end{equation} where
$\vartheta_*>0$ is from \eqref{initial-teta} and $\bar{c} := \frac{|\mathbb{B}|^2}{2 C_\mathbb{D}^1}$, with $C_\mathbb{D}^1>0$ from \eqref{elast-visc-tensors}. Finally, there holds \begin{equation} \label{further-logteta} \begin{gathered} \log(\vartheta) \in L^\infty (0,T;L^p(\Omega)) \quad \text{for all } 1 \leq p <\infty, \\ \kappa(\vartheta)\nabla \log(\vartheta) \in L^{1+\bar\delta}(Q;\mathbb{R}^d) \text{ with $ \bar\delta = \frac{\alpha}\mu $ and $\alpha \in [(2-\mu)^+, 1)$, and } \qquad \\ \kappa(\vartheta)\nabla \log(\vartheta) \in L^1(0,T;X) \quad \text{with } X= \left \{ \begin{array}{lll}
L^{2-\eta}(\Omega;\mathbb{R}^d) & \text{ for all } \eta \in (0,1] & \text{if } d=2, \\ L^{3/2-\eta}(\Omega;\mathbb{R}^d) & \text{ for all } \eta \in (0,1/2] & \text{if } d=3, \end{array} \right.
\end{gathered} \end{equation} with $(2-\mu)^+ = \max \{ (2{-}\mu), 0\}$. Therefore, the entropy inequality \eqref{entropy-ineq} in fact holds for all positive test functions $\varphi \in L^\infty ([0,T]; W^{1,d+\epsilon}(\Omega)) \cap H^1 (0,T; L^{6/5}(\Omega))$, for every $\epsilon>0$.
\end{maintheorem}
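\noindent
Let us illustrate, in a purely formal way, where the threshold $\bar\vartheta$ in \eqref{strong-strict-pos} comes from (the rigorous argument is developed on the time-discrete level, cf.\ \eqref{discr-strict-pos} below and the proof of \cite[Lemma 4.4]{Rocca-Rossi}): estimating the thermal expansion term on the right-hand side of \eqref{heat} via Young's inequality as
\begin{equation*}
\vartheta\, \mathbb{B} : \dot e \leq \frac{C_\mathbb{D}^1}{2}\, |\dot e|^2 + \frac{|\mathbb{B}|^2}{2 C_\mathbb{D}^1}\, \vartheta^2 \leq \frac12\, \mathbb{D} \dot e : \dot e + \bar{c}\, \vartheta^2
\end{equation*}
and neglecting the remaining contributions (all nonnegative in view of the assumptions of Sec.\ \ref{ss:2.1}), one is led to compare $\vartheta$ with the spatially constant solution of the Cauchy problem $\dot{v} = -\bar{c}\, v^2$, $v(0) = \vartheta_*$, namely $v(t) = \left( \bar{c}\, t + \tfrac1{\vartheta_*} \right)^{-1} \geq \left( \bar{c}\, T + \tfrac1{\vartheta_*} \right)^{-1} = \bar\vartheta$ for $t \in [0,T]$.
\par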
\noindent
The enhanced summability for $\log(\vartheta)$ in \eqref{further-logteta} ensues from the bound $\vartheta \in L^\infty(0,T;L^1(\Omega))$, combined with the fact that for every $p \in [1,\infty)$
there exists $C_p>0$ such that
\[
|\log(\vartheta)|^p\leq \vartheta +C_p \qquad \text{for all } \vartheta \geq \bar\vartheta.
\] \begin{remark} \label{rmk:in-LRTT} \upshape In \cite{LRTT} we proved an existence result for a PDE system modeling \emph{rate-independent } damage in thermoviscoelastic materials, featuring a temperature equation with the same structure as \eqref{heat}. Also in that context we obtained a strict positivity property with the same constant as in \eqref{strong-strict-pos}. Moreover, we showed that, if the heat source function $H$ and the initial temperature $\vartheta_0$ fulfill \[ H(t,x) \geq H_*>0 \ \text{for a.a. } (t,x) \in Q \ \text{ and } \ \vartheta_0(x) \geq \sqrt{H_*/\bar{c}} \ \text{for a.a. } x \in \Omega, \] with $\bar{c}>0$ from \eqref{strong-strict-pos}, then the enhanced positivity property \begin{equation} \label{enh-strict-pos}
\vartheta(t,x) \geq \max\{ \bar\vartheta, \sqrt{H_*/\bar{c}}\} \quad \text{for a.a. } (t,x) \in Q \end{equation} holds. In the setting of the thermoviscoplastic system \eqref{plast-PDE}, too, it would be possible to prove \eqref{enh-strict-pos}. Observe that, choosing suitable data for the heat equation, the threshold $\max\{ \bar\vartheta, \sqrt{H_*/\bar{c}}\}$, and thus the temperature, may be tuned to stay above a given constant. Choosing such a constant as the so-called \emph{Debye temperature} (cf., e.g., \cite[Sec.\ 4.2, p.\ 761]{Wed97LPC}), according to the Debye model one can thus justify the assumption that the heat capacity is constant. \end{remark}
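\noindent
In the same formal spirit as the sketch following Theorem \ref{mainth:1}, the threshold $\sqrt{H_*/\bar{c}}$ in \eqref{enh-strict-pos} may be read off the comparison ODE
\begin{equation*}
\dot{v} = H_* - \bar{c}\, v^2, \qquad v(0) \geq \sqrt{H_*/\bar{c}}\,:
\end{equation*}
since $\dot{v} \geq 0$ as soon as $v \leq \sqrt{H_*/\bar{c}}$, the stationary state $\sqrt{H_*/\bar{c}}$ acts as a barrier, and solutions issued from above it stay above it for all times; this is reflected in the condition $\vartheta_0 \geq \sqrt{H_*/\bar{c}}$ above.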
\par Under a more stringent growth condition on $\kappa$, we obtain the existence of weak energy solutions. \begin{maintheorem} \label{mainth:2} Assume \eqref{Omega-s2}, \eqref{elast-visc-tensors}, $(2.\mathrm{H})$, $(2.\mathrm{L})$,
\eqref{Dirichlet-loading}, $(2.\mathrm{K})$, and
\eqref{hyp-K}. In addition, suppose that
the exponent $\mu$ in \eqref{hyp-K} fulfills \begin{equation} \label{hyp-K-stronger} \tag{2.$\kappa_2$} \begin{cases} \mu \in (1,2) & \text{if } d=2, \\ \mu \in \left(1, \frac53\right) & \text{if } d=3. \end{cases} \end{equation} Then, for every $(\vartheta_0, u_0, \dot{u}_0, e_0, p_0) $ satisfying \eqref{Cauchy-data} there exists a weak energy solution $(\vartheta,u,e,p)$ to the Cauchy problem for system (\ref{plast-PDE}, \ref{bc}) satisfying
\eqref{strong-strict-pos}--\eqref{further-logteta}, as well as
\begin{equation}
\label{further-k-teta}
\nabla (\hat{\kappa}(\vartheta) ) \in L^{1+\tilde{\delta}}(Q) \text{ for some $\tilde\delta \in \left(0,\frac13 \right)$},
\end{equation}
with $\hat{\kappa}$ a primitive of $\kappa$.
Therefore,
\eqref{eq-teta} in fact holds for all test functions $\varphi \in W^{1,1+1/{\tilde\delta}}(\Omega)$ and, ultimately, $\vartheta $ has the enhanced regularity $\vartheta \in W^{1,1}(0,T; W^{1,1+1/{\tilde\delta}}(\Omega)^*)$.
\end{maintheorem} As will be clear from the proof of Thm.\ \ref{mainth:2}, in the case $d=3$ the exponent $ \tilde \delta $ is in fact given by $ \tilde \delta = \frac{2-3\mu+3\alpha}{3(\mu-\alpha+2)} $ for all $ \alpha \in (\bar\alpha, 1) $ with $\bar\alpha: = \max\{ \mu-\frac23, (2-\mu)^+\}$. The condition $\mu <\frac53$ for $d=3$ in fact ensures that it is possible to choose $\alpha<1$ with $\alpha > \mu-\frac23$. Also, note that for every $\alpha $ in the prescribed range we have that $\tilde \delta<\tfrac13$, so that $1+\tfrac1{\tilde\delta} >4$. This yields \begin{equation} \label{for-later-reference} W^{1,1+1/{\tilde\delta}}(\Omega) \subset L^\infty(\Omega) \qquad \text{for } d \in \{2,3\}, \end{equation} so that every $\varphi\in W^{1,1+1/{\tilde\delta}}(\Omega)$ can multiply the $L^1$-r.h.s.\ of the heat equation \eqref{heat} and, moreover, has trace in $L^2(\partial\Omega)$. Therefore, $W^{1,1+1/{\tilde\delta}}(\Omega)$ is an admissible space of test functions for \eqref{eq-teta}. Clearly, in the case $d=2$ as well one can explicitly compute $\tilde \delta$, exploiting the condition $\mu <2$, leading to a better range of indices. \par
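\noindent
We also record that \eqref{for-later-reference} is nothing but an instance of the Sobolev--Morrey embedding: since $\Omega$ is a bounded Lipschitz domain (cf.\ \eqref{Omega-s2}) and $q := 1+\tfrac1{\tilde\delta} > 4 > d$, there holds
\begin{equation*}
W^{1,q}(\Omega) \subset C^{0,1-d/q}(\overline\Omega) \subset L^\infty(\Omega)\,.
\end{equation*}
\par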
The proofs of Theorems \ref{mainth:1} and \ref{mainth:2}, developed in Section \ref{s:4}, shall result from passing to the limit in a carefully tailored time discretization scheme of the thermoviscoplastic system (\ref{plast-PDE}, \ref{bc}), analyzed in detail in Section \ref{s:3}.
\section{Analysis of the thermoviscoplastic system: time discretization} \label{s:3} The analysis of the time-discrete scheme for system (\ref{plast-PDE}, \ref{bc}) shall often follow the lines of that developed for the phase transition/damage system analyzed in \cite{Rocca-Rossi} (cf.\ also the proof of \cite[Thm.\ 2.7]{LRTT}). Therefore, to avoid overburdening the exposition we will not fully develop all the arguments, but frequently refer to \cite{Rocca-Rossi, LRTT} for all details. \par In the statement of all the results of this section we will always tacitly assume the conditions on the problem data from Section \ref{ss:2.1}. \par Given an equidistant partition of $[0,T]$, with time-step $\tau>0$ and nodes $t_\tau^k:=k\tau$, $k=0,\ldots,K_\tau$, we approximate the data $F$, $g$, $H$, and $h$
by local means as follows \begin{equation} \label{local-means} \begin{gathered} \Ftau{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} F(s)\;\!\mathrm{d} s\,, \quad g_\tau^{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} g(s)\;\!\mathrm{d} s\,,
\quad
\gtau{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} H(s) \;\!\mathrm{d} s\,, \quad \htau{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} h(s) \;\!\mathrm{d} s \end{gathered} \end{equation} for all $k=1,\ldots, K_\tau$. From the terms $\Ftau{k}$ and $\gtau{k}$ one then defines the elements $ \Ltau{k}$, which are the local-mean approximations of $\calL$. Hereafter, given elements $(v_{\tau}^k)_{k=1,\ldots, K_\tau}$ in a Banach space $B$, we will use the notation \[ \Dtau{k} v: = \frac{v_{\tau}^{k} - v_{\tau}^{k-1}}\tau, \qquad \Ddtau{k} v: = \frac{\vtau{k} -2 \vtau{k-1} + \vtau{k-2}}{\tau^2}. \] \par We construct discrete solutions to system (\ref{plast-PDE}, \ref{bc}) by recursively solving an elliptic system, cf.\ the forthcoming Problem \ref{prob:discrete}, where the weak formulation of the discrete heat equation features the function space
\begin{equation} \label{X-space}
X:= \{ \theta \in H^1(\Omega)\, : \ \kappa(\theta) \nabla \theta \nabla v \in L^1(\Omega) \text{ for all } v \in H^1 (\Omega)\},
\end{equation}
and, for $k \in \{1,\ldots, K_\tau\}$, the elliptic operator
\begin{equation}
{A}^k: X \to H^1(\Omega)^* \text{ defined by } \pairing{}{H^1(\Omega)}{ {A}^k(\theta) }{v}:=
\int_\Omega \kappa(\theta) \nabla \theta \nabla v \;\!\mathrm{d} x - \int_{\partial \Omega} \htau{k} v \;\!\mathrm{d} S\,. \end{equation} We also mention in advance that, for technical reasons connected both with the proof of existence of discrete solutions to Problem \ref{prob:discrete} (cf.\ the upcoming Lemma \ref{l:exist-approx-discr}), and with the rigorous derivation of
a priori estimates on them (cf.\ Remark \ref{rmk:comments} below), it will be necessary to add the regularizing term $-\tau \mathrm{div}(|\etau k|^{\gamma- 2} \etau k)$ to the discrete momentum equation, as well as the term $\tau |\ptau k|^{\gamma-2} \ptau k$ to the discrete plastic flow rule, with $\gamma>4$. That is why we will seek discrete solutions with $\etau k\in L^{\gamma} (\Omega;\bbM_{\mathrm{sym}}^{d\times d}) $ and $\ptau k \in L^{\gamma} (\Omega; \bbM_{\mathrm{D}}^{d\times d})$, giving $\sig{\utau{k}} \in L^{\gamma} (\Omega;\bbM_{\mathrm{sym}}^{d\times d})$ by the kinematic admissibility condition and thus, via Korn's inequality \eqref{Korn}, $\utau k \in W_\mathrm{Dir}^{1,\gamma} (\Omega;\mathbb{R}^d)$. Because of these regularizations, it will be necessary to supplement the discrete system with approximate initial data \begin{subequations} \label{complete-approx-e_0} \begin{equation} \label{approx-e_0} \begin{aligned} & (e_\tau^0)_\tau \subset L^{\gamma} (\Omega;\bbM_{\mathrm{sym}}^{d\times d}) & \text{ such that }
\lim_{\tau\downarrow 0} \tau^{1/\gamma} \| e_\tau^0\|_{L^{\gamma} (\Omega;\bbM_{\mathrm{sym}}^{d\times d})} =0
& \text{ and } e_\tau^0 \to e_0 \text{ in $L^{2} (\Omega;\bbM_{\mathrm{sym}}^{d\times d})$}, \\ & (p_\tau^0)_\tau \subset L^{\gamma} (\Omega;\bbM_{\mathrm{D}}^{d\times d}) & \text{ such that }
\lim_{\tau\downarrow 0} \tau^{1/\gamma} \| p_\tau^0\|_{L^{\gamma} (\Omega;\bbM_{\mathrm{D}}^{d\times d})} =0
& \text{ and } p_\tau^0 \to p_0 \text{ in $L^{2} (\Omega;\bbM_{\mathrm{D}}^{d\times d})$}. \end{aligned} \end{equation} By consistency with the kinematic admissibility condition at time $t=0$, we will also approximate the initial datum $u_0$ with a family $(u_\tau^0)_\tau \subset W^{1,\gamma} (\Omega;\mathbb{R}^d)$ such that \begin{equation} \label{approx-e_0-3}
\lim_{\tau\downarrow 0} \tau^{1/\gamma} \| u_\tau^0\|_{W^{1,\gamma} (\Omega;\mathbb{R}^d)} =0 \qquad \text{ and } \qquad u_\tau^0 \to u_0 \text{ in $H^{1} (\Omega;\mathbb{R}^d)$}. \end{equation} \end{subequations} In connection with the regularization of the discrete momentum balance, we will have to approximate the Dirichlet loading $w$ by a family $(w_\tau)_\tau \subset \mathbf{W} \cap W^{1,1}(0,T; W^{1,\gamma}(\Omega;\mathbb{R}^d))$, where we have used the place-holder $ \mathbf{W}:= L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d)) \cap W^{2,1} (0,T;H^1(\Omega;\mathbb{R}^d)) \cap H^2(0,T; L^2(\Omega;\mathbb{R}^d))$. We will require that \begin{equation} \label{discr-w-tau} w_\tau \to w \text{ in } \mathbf{W} \text{ as } \tau \downarrow 0, \quad \text{ as well as } \quad
\exists\, \alpha_w \in \left(0,\frac1\gamma\right) \text{ s.t. } \sup_{\tau>0} \tau^{\alpha_w} \| \sig{\dot{w}_\tau}\|_{L^\gamma (\Omega;\bbM_\mathrm{sym}^{d\times d})} \leq C<\infty\,. \end{equation} We will then consider the discrete data \[
\wtau{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} w_\tau(s)\;\!\mathrm{d} s\,. \]
\begin{problem} \label{prob:discrete} Let $\gamma>4$. Starting from \begin{align}\label{discr-Cauchy} \tetau{0}:=\vartheta_{0}, \qquad \utau{0}:=u^{0}_\tau,\qquad
\utau{-1}:=u_\tau^{0}-\tau \dot{u}_0, \qquad \etau{0}: = e_\tau^0, \qquad \ptau{0}:=p_\tau^{0} \end{align}
with $\vartheta_0$ and $\dot{u}_0$ from \eqref{Cauchy-data} and $(u_\tau^{0}, e_\tau^{0}, p_\tau^{0})$ from \eqref{complete-approx-e_0},
find
$\{(\tetau{k}, \utau{k}, \etau{k}, \ptau{k})\}_{k=1}^{K_\tau} \subset X \times W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d) \times L^{\gamma} (\Omega;\bbM_{\mathrm{sym}}^{d\times d}) \times L^{\gamma} (\Omega; \bbM_{\mathrm{D}}^{d\times d})$ fulfilling for all $k=1,\ldots, K_\tau$ \begin{subequations} \label{syst:discr} \begin{itemize} \item[-] the discrete heat equation \begin{equation} \label{discrete-heat} \begin{aligned}
&\Dtau{k}\vartheta + A^k (\tetau{k}) \\ & = \gtau{k} + \mathrm{R}\left(\tetau{k-1}, \Dtau{k} p \right) + \left| \Dtau{k} p \right|^2+ \mathbb{D} \Dtau{k} e : \Dtau{k} e -\tetau{k} \mathbb{B} : \Dtau k e \quad \text{in } H^1(\Omega)^*; \end{aligned} \end{equation} \item[-] the kinematic admissibility $(\utau{k}, \etau{k}, \ptau{k}) \in \mathcal{A}(\wtau k)$ (in the sense of \eqref{kin-adm}); \item[-] the discrete momentum balance \begin{equation} \label{discrete-momentum} \rho\int_\Omega \Ddtau k u v \;\!\mathrm{d} x + \int_\Omega \sitau{k} : \sig v \;\!\mathrm{d} x = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\Ltau k }{v} \quad
\text{for all } v \in W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d); \end{equation} \item[-] the discrete plastic flow rule \begin{equation}
\label{discrete-plastic}\zetau k + \Dtau kp + \tau |\ptau {k}|^{\gamma-2}\ptau k = \sidevtau{k}, \quad \text{with } \zetau k \in \partial_{\dot p} \mathrm{R} \left(\tetau{k-1},\Dtau k p \right), \qquad \text{a.e.\ in } \Omega, \end{equation} \end{itemize} \end{subequations} where we have used the place-holder $
\sitau{k}:= \mathbb{D} \Dtau k e + \mathbb{C} \etau k + \tau |\etau{k}|^{\gamma-2} \etau k- \tetau k \mathbb{B}\,. $ \end{problem}
\begin{remark}[Main features of the time-discretization scheme] \upshape \label{rmk:comments}
Observe that the discrete heat equation \eqref{discrete-heat} is coupled with the momentum balance \eqref{discrete-momentum} through the implicit term $\tetau{k} $, which therefore contributes to the stress $\sitau{k}$ in \eqref{discrete-momentum} and in \eqref{discrete-plastic}. This makes the time discretization scheme \eqref{syst:discr}
fully implicit, as it is not possible to decouple any of the equations from the others. In turn, the `implicit coupling' between the heat equation and the momentum balance is crucial for the argument leading to the (strict) positivity of the discrete temperatures: we refer to the proof of \cite[Lemma 4.4]{Rocca-Rossi} for all details. In fact, the time discretization schemes in \cite{Roub-Bartels-2,Roub-PP} are fully implicit as well, again in view of the positivity of the temperature (though the arguments there are different, based on the approach via the enthalpy transformation in the heat equation).
\par
The role of the terms $-\tau \mathrm{div}(|\etau k|^{\gamma- 2} \etau k)$ and $\tau |\ptau k|^{\gamma-2} \ptau k$, added to the discrete momentum equation and plastic flow rule, respectively, is to `compensate' the quadratic terms on the right-hand side of \eqref{discrete-heat}. More precisely, they ensure that the pseudomonotone operator by means of which we will reformulate
our approximation of
system \eqref{syst:discr}, cf.\ \eqref{pseudo-monot} ahead, is coercive, in particular w.r.t.\ the $H^1(\Omega)$-norm in the variable $\vartheta$. This will allow us to apply a result from the theory of pseudomonotone operators in order to obtain the existence of solutions to \eqref{pseudo-monot} and, a fortiori, to \eqref{syst:discr}. \end{remark} \begin{proposition}[Existence of discrete solutions] \label{prop:exist-discr} Under the growth condition \eqref{hyp-K}, Problem \ref{prob:discrete} admits a solution $\{(\tetau{k}, \utau{k}, \etau{k}, \ptau{k})\}_{k=1}^{K_\tau}$. Furthermore, any solution to Problem \ref{prob:discrete} fulfills \begin{equation} \label{discr-strict-pos} \tetau{k}\geq \bar{\vartheta}>0 \qquad \text{for all } k =1, \ldots, K_\tau, \end{equation} with $\bar{\vartheta}$ from \eqref{strong-strict-pos}.
\end{proposition} Along the lines of \cite{Rocca-Rossi, LRTT}, we will prove the existence of a solution to Problem \ref{prob:discrete} by \begin{enumerate} \item constructing an approximate problem where the thermal conductivity coefficient $\kappa$ is truncated and, accordingly, so are the occurrences of $\vartheta$ in the thermal expansion terms coupling the discrete heat and momentum equations, cf.\ system \eqref{syst:approx-discr} below; \item proving the existence of a solution to the approximate discrete problem by resorting to a general existence result from \cite{Roub05NPDE} for elliptic systems featuring pseudomonotone operators; \item passing to the limit with the truncation parameter. \end{enumerate} As the statement of Proposition \ref{prop:exist-discr} suggests, the positivity property \eqref{discr-strict-pos}
can be proved for \emph{all} discrete solutions to Problem \ref{prob:discrete} (i.e.\ not only for those deriving from the aforementioned
approximation procedure). Since its proof can be carried out by repeating the arguments for positivity in
\cite{Rocca-Rossi, LRTT}, we choose to omit it and refer to these papers for all details. We shall instead focus on the existence argument, dwelling in some detail on the parts which differ from \cite{Rocca-Rossi, LRTT}.
\par
The \underline{proof of Proposition \ref{prop:exist-discr}} will be split into several steps:
\\ \textbf{Step $1$: existence for the approximate discrete system.} We introduce the truncation operator \begin{equation} \label{def-truncation-m} \mathcal{T}_M : \mathbb{R} \to \mathbb{R}, \qquad \mathcal{T}_M(r):= \left\{ \begin{array}{ll} -M & \text{if } r <-M, \\
r & \text{if } |r| \leq M, \\ M & \text{if } r >M, \end{array} \right. \end{equation} and define \begin{align} & \label{def-k-m} \kappa_M(r):= \kappa(\mathcal{T}_M(r)) := \left\{ \begin{array}{ll} \kappa(-M) & \text{if } r <-M, \\
\kappa(r) & \text{if } |r| \leq M, \\ \kappa(M) & \text{if } r >M, \end{array} \right. \\ & \label{M-operator} {A}_M^k: H^1(\Omega) \to H^1(\Omega)^*
\text{ by } \pairing{}{H^1(\Omega)} { {A}_M^k(\theta)}{v}:= \int_\Omega \kappa_M(\theta) \nabla \theta \nabla v \;\!\mathrm{d} x - \int_{\partial \Omega} \htau{k}v \;\!\mathrm{d} S. \end{align} For later use, we observe that, thanks to \eqref{hyp-K} there still holds $\kappa_M(r) \geq c_{0} $ for all $r \in \mathbb{R}$, and therefore \begin{equation} \label{ellipticity-retained} \forall\, \delta>0 \ \ \exists\, C_\delta>0\, \ \forall\, \theta \in H^1(\Omega)\, : \qquad
\pairing{}{H^1(\Omega)}{ \mathcal{A}_M^{k} (\theta)}{\theta} \geq c_0 \int_\Omega |\nabla \theta|^2 \;\!\mathrm{d} x - \delta \|\theta\|_{L^2(\partial\Omega)}^2 - C_\delta \| \htau{k} \|_{L^2(\partial\Omega)}^2\,.
\end{equation} \par The approximate version of system \eqref{syst:discr} reads (to avoid overburdening notation, for the time being we will not highlight the dependence of the solution quadruple on the truncation parameter $M$):
\begin{subequations} \label{syst:approx-discr} \begin{align} \label{heat:approx-discr} &
\begin{aligned}
&\Dtau{k}\vartheta + A_M^k (\tetau{k}) \\ & = \gtau{k} + \mathrm{R}\left(\tetau{k-1}, \Dtau{k} p \right) + \left| \Dtau{k} p \right|^2+ \mathbb{D} \Dtau{k} e : \Dtau{k} e -\mathcal{T}_M(\tetau{k}) \mathbb{B} : \Dtau k e \quad \text{in } H^1(\Omega)^*; \end{aligned} \\ & \label{mom:approx-discr} \rho\int_\Omega \Ddtau k {u} v \;\!\mathrm{d} x + \int_\Omega \sitau{k} : \sig v \;\!\mathrm{d} x = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\Ltau k }{v} \quad \text{for all } v \in W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d);
\\
& \label{plasr:approx-discr}
\zetau{k} + \Dtau kp + \tau |\ptau {k}|^{\gamma-2}\ptau k = \simdevtau{k} ,\quad \text{with } \zetau k \in \partial_{\dot p} \mathrm{R} \left(\tetau{k-1},\Dtau k p \right), \qquad \text{a.e.\ in } \Omega,
\end{align}
\end{subequations}
coupled with the kinematic admissibility \begin{equation} \label{discr-kin-adm} (\utau{k}, \etau{k}, \ptau{k}) \in \mathcal{A}(\wtau k), \end{equation} where now \[
\sitau{k}:= \mathbb{D} \Dtau k e + \mathbb{C} \etau k + \tau |\etau{k}|^{\gamma-2} \etau k- \calT_M(\tetau k) \mathbb{B}\,. \] \par The following result states the existence of solutions to system \eqref{syst:approx-discr} for $k\in \{1,\ldots, K_\tau\} $ fixed: in its proof, we make use of the higher order terms added to the discrete momentum equation and plastic flow rule. \begin{lemma} \label{l:exist-approx-discr} Under the growth condition \eqref{hyp-K}, there exists $\bar\tau>0$ such that for $0<\tau< \bar \tau$ and for every $k=1,\ldots, K_\tau$ there exists a solution $(\tetau{k}, \utau{k}, \etau{k}, \ptau{k}) \in H^1(\Omega) \times W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d) \times L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d}) \times L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d}) $ to system \eqref{syst:approx-discr}, such that $\tetau k$ complies with the positivity property \eqref{discr-strict-pos}. \end{lemma} \begin{proof} The positivity \eqref{discr-strict-pos} follows from the same argument developed in the proof of \cite[Lemma 4.4]{Rocca-Rossi}. As for existence: For fixed $k \in \{1,\ldots, K_\tau\}$, we reformulate system \eqref{syst:approx-discr}, coupled with \eqref{discr-kin-adm}, as \begin{equation} \label{pseudo-monot} \partial\Psi_k (\tetau k , \utau k -\wtau k , \ptau k) +\mathscr{A}_k (\tetau k , \utau k -\wtau k , \ptau k ) \ni \mathscr{B}_k, \end{equation} where the elliptic operator $\mathscr{A}_k : \mathbf{B} \to \mathbf{B}^*$, with $\mathbf{B}: = H^1(\Omega) \times W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d) \times L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d})$, is given component-wise by \begin{subequations} \label{oper-scrA} \begin{align} & \label{oper-scrA-1}\begin{aligned}
\mathscr{A}_k^1 (\vartheta, \tilde u, p): = & \vartheta+ A_M^k(\vartheta) - \mathrm{R}(\tetau {k-1}, p-\ptau{k-1})- \frac1\tau |p|^2 - \frac2\tau p : \ptau{k-1} \\ &\quad -\frac1\tau \mathbb{D} \left(\sig{\tilde u + \wtau k} -p \right){:} \left(\sig{\tilde u + \wtau k} -p \right) -\frac2\tau \mathbb{D} \left(\sig{\tilde u + \wtau k} -p \right){:}\etau{k-1}
\\ & \quad+ \mathcal{T}_M (\vartheta) \mathbb{B} \left(\sig{\tilde u + \wtau k} -p - \etau{k-1} \right), \end{aligned} \\ & \label{oper-scrA-2}\begin{aligned}
\mathscr{A}_k^2 (\vartheta, \tilde u, p): = \rho (\tilde u -\wtau k) - \mathrm{div}_{\mathrm{Dir}}\Big( & \tau \mathbb{D} \left( \sig{\tilde u + \wtau k} -p \right) +\tau^2 \mathbb{C} \left( \sig{\tilde u + \wtau k} -p \right) \\ & + \tau^3 \left| \sig{\tilde u + \wtau k} -p \right|^{\gamma-2} \left( \sig{\tilde u + \wtau k} -p \right) -\tau^2 \mathcal{T}_M(\vartheta) \mathbb{B} \Big), \end{aligned} \\ & \label{oper-scrA-3}\begin{aligned}
\mathscr{A}_k^3(\vartheta, \tilde u, p): = p + \tau^2 |p|^{\gamma-2} p &
- \Big( \mathbb{D} \left( \sig{\tilde u + \wtau k} -p \right) +\tau \mathbb{C} \left( \sig{\tilde u + \wtau k} -p \right) \\ & \qquad \qquad + \tau^2 \left| \sig{\tilde u + \wtau k} -p \right|^{\gamma-2} \left( \sig{\tilde u + \wtau k} -p \right) -\tau \mathcal{T}_M(\vartheta) \mathbb{B} \Big)_\mathrm{D}, \end{aligned} \end{align} with $-\mathrm{div}_{\mathrm{Dir}}$ defined by \eqref{div-Gdir},
\end{subequations} while the vector $\mathscr{B}_k \in \mathbf{B}^*$ on the right-hand side of \eqref{pseudo-monot} has components \begin{subequations} \label{vector-B} \begin{align} & \label{vect-B-1}
\mathscr{B}_k^1: = \gtau k + \frac1\tau |\ptau{k-1}|^2 + \frac1\tau \mathbb{D} \etau{k-1}: \etau{k-1}, \\ & \label{vect-B-2} \mathscr{B}_k^2: = \Ltau k +2\rho\utau{k-1} - \rho\utau{k-2} - \mathrm{div}_{\mathrm{Dir}}(\tau \mathbb{D} \etau{k-1}), \\ & \label{vect-B-3} \mathscr{B}_k^3: = \ptau{k-1} - (\mathbb{D} \etau{k-1})_\mathrm{D}, \end{align} \end{subequations}
and $\partial \Psi_k : \mathbf{B} \rightrightarrows \mathbf{B}^*$ is the subdifferential of the lower semicontinuous and convex potential $ \Psi_k(\vartheta, \tilde u, p): = \mathrm{R}(\tetau{k-1}, p -\ptau{k-1}). $
We shall therefore prove the existence of a solution to the abstract subdifferential inclusion \eqref{pseudo-monot} by applying the existence result \cite[Cor.\ 5.17]{Roub05NPDE}, which amounts to verifying that $\mathscr{A}_k : \mathbf{B} \to \mathbf{B}^*$ is coercive and pseudomonotone. The latter property means that (cf., e.g., \cite{Roub05NPDE}) it is bounded and
fulfills the following for every sequence $(\eta_m)_m \subset \mathbf{B}$ and all $\eta,\, \zeta \in \mathbf{B}$:
\begin{equation}
\label{def-pseudomon}
\left.
\begin{array}{rr} & \eta_m \rightharpoonup \eta,
\\
& \limsup_{m\to\infty} \pairing{}{\mathbf{B}}{\mathscr{A}_k (\eta_m)}{\eta_m-\eta} \leq 0
\end{array}
\right\} \ \Rightarrow
\pairing{}{\mathbf{B}}{\mathscr{A}_k (\eta)}{\eta-\zeta} \leq \liminf_{m\to\infty} \pairing{}{\mathbf{B}}{\mathscr{A}_k (\eta_m)}{\eta_m-\zeta}\,.
\end{equation}
\par
To check coercivity, we compute
\begin{equation}
\label{rough-coerc-1}
\begin{aligned}
\pairing{}{\mathbf{B}}{\mathscr{A}_k (\vartheta, \tilde u, p)}{(\vartheta,\tilde u, p)} & = \pairing{}{H^1(\Omega)}{\mathscr{A}_k^1 (\vartheta, \tilde u, p)}{\vartheta} + \pairing{}{W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d)}{\mathscr{A}_k^2 (\vartheta, \tilde u, p)}{\tilde u} + \int_\Omega \mathscr{A}_k^3 (\vartheta, \tilde u, p) : p \;\!\mathrm{d} x
\\ & \stackrel{(1)}{\geq} \|\vartheta\|_{L^2(\Omega)}^2 + c_0 \|\nabla \vartheta\|_{L^2(\Omega)}^2 + \rho \| \tilde u \|_{L^2(\Omega)}^2 + \left( \tau C_\mathbb{D}^1 + \tau^2 C_\mathbb{C}^1 \right)
\| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^2(\Omega)}^2 \\ & \quad + \tau^3 \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^\gamma(\Omega)}^\gamma + \| p \|_{L^2(\Omega)}^2 + \tau^2 \| p \|_{L^\gamma(\Omega)}^\gamma + I_1 + I_2 +I_3,
\end{aligned} \end{equation}
where (1) follows from \eqref{elast-visc-tensors} and \eqref{ellipticity-retained}. Taking into account \eqref{linear-growth}, again \eqref{elast-visc-tensors}, and the fact that $|\calT_M(\vartheta) | \leq M $ a.e.\ in $\Omega$, we have \begin{subequations} \label{est_I-terms} \begin{equation} \label{est-I-1} \begin{aligned}
I_1 & = -\delta \| \vartheta \|_{L^2(\partial\Omega)}^2 - C_\delta \| \htau k \|_{L^2(\partial\Omega)}^2 \\ & \quad - C_R\| p -\ptau{k-1}\|_{L^2(\Omega)} \|\vartheta\|_{L^2(\Omega)} - \frac1\tau \| p \|_{L^4(\Omega)}^2 \| \vartheta\|_{L^2(\Omega)} -\frac2\tau \|\ptau{k-1}\|_{L^4(\Omega)}\| p \|_{L^4(\Omega)} \| \vartheta\|_{L^2(\Omega)} \\ & \quad - \frac{C_\mathbb{D}^2}\tau \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^4(\Omega)}^2 \| \vartheta\|_{L^2(\Omega)} - C \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^4(\Omega)} \| \etau {k-1}\|_{L^4(\Omega)} \| \vartheta\|_{L^2(\Omega)} \\ & \quad - C \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^2(\Omega)}^2 - C \| \etau {k-1}\|_{L^2(\Omega)}^2 \,, \end{aligned} \end{equation}
with $\delta>0$ to be specified later, as well as \begin{equation} \label{est-I-2} \begin{aligned}
I_2 & = -\rho \| \tilde u \|_{L^2(\Omega)} \| \wtau k \|_{L^2(\Omega)} - C \| \sig{\tilde u} + \sig{\wtau k} -p \|_{L^2(\Omega)} \|\sig{\wtau k} -p \|_{L^2(\Omega)}\\ & \qquad -\tau^3 \int_\Omega |\sig{\tilde u} + \sig{\wtau k} -p|^{\gamma-1} |\sig{\wtau k} -p | \;\!\mathrm{d} x - C \int_\Omega |\sig{\tilde u}| \;\!\mathrm{d} x, \end{aligned} \end{equation} and \begin{equation} \label{est-I-3} \begin{aligned}
I_3 & =- C \| \sig{\tilde u} + \sig{\wtau k} -p\|_{L^2(\Omega)} \| p \|_{L^2(\Omega)} -\tau^2\int_\Omega | \sig{\tilde u} + \sig{\wtau k} -p|^{\gamma-1} |p| \;\!\mathrm{d} x - C \int_\Omega |p|\;\!\mathrm{d} x\,. \end{aligned} \end{equation} \end{subequations} Now, with straightforward calculations it is possible to absorb the negative terms $I_1, \, I_2,\, I_3$ into the positive terms on the right-hand side of \eqref{rough-coerc-1}: without entering into details, let us only observe that, for example, the sixth term on the right-hand side of \eqref{est-I-1}
can be estimated by means of Young's inequality as \[ \begin{aligned}
- \frac{C_\mathbb{D}^2}\tau \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^4(\Omega)}^2 \| \vartheta\|_{L^2(\Omega)} &
\geq
-\delta \| \vartheta\|_{L^2(\Omega)}^2 - C \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^4(\Omega)}^4
\\ & \geq -\delta \| \vartheta\|_{L^2(\Omega)}^2
- \frac{\tau^3}2 \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^\gamma(\Omega)}^\gamma - C,
\end{aligned}
\] using that $\gamma>4$. The fourth term can be dealt with in the same way, so that one of the resulting terms is absorbed into $\tau^2 \| p \|_{L^\gamma(\Omega)}^\gamma$. The other terms contributing to $I_1$, $I_2$, and $I_3$ can be handled analogously. Let us now observe that the positive terms on the right-hand side of \eqref{rough-coerc-1} bound the desired norms of $\vartheta$, $\tilde u$, $ p$. Indeed, also taking into account that, again by Young's inequality \[
\| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^2(\Omega)}^2 \geq c \| \sig{\tilde u} \|_{L^2(\Omega)}^2 - C \| \sig{\wtau k} \|_{L^2(\Omega)}^2 - \frac{\tau^2}4\|p \|_{L^\gamma(\Omega)}^\gamma - C, \] and repeatedly using the well-known estimate $(a+b)^{\gamma} \leq 2^{\gamma-1} (a^\gamma+b^\gamma)$ for all $a,\, b \in [0,+\infty)$, which gives \[ \begin{aligned}
\tau^3 \| \sig{\tilde u} +\sig{\wtau k} - p \|_{L^\gamma(\Omega)}^\gamma +\frac{ \tau^2}4 \| p \|_{L^\gamma(\Omega)}^\gamma & \geq \frac{\tau^3}{2^{\gamma-1}} \| \sig{\tilde u} +\sig{\wtau k} \|_{L^\gamma(\Omega)}^\gamma +\left( \frac{ \tau^2}4-\tau^3\right) \| p \|_{L^\gamma(\Omega)}^\gamma
\\ & \geq \frac{\tau^3}{2^{2\gamma-2}} \| \sig{\tilde u} \|_{L^\gamma(\Omega)}^\gamma +\frac{ \tau^2}8 \| p \|_{L^\gamma(\Omega)}^\gamma - \frac{\tau^3}{2^{\gamma-1}} \|\sig{\wtau k} \|_{L^\gamma(\Omega)}^\gamma \end{aligned} \] (where we have also used that, for $\tau < \bar\tau := 1/8$, there holds $\tau^2/8 \geq \tau^3 $), we end up with \[
\pairing{}{\mathbf{B}}{\mathscr{A}_k (\vartheta, \tilde u, p)}{(\vartheta,\tilde u, p)} \geq
c \left( \|\vartheta\|_{H^1(\Omega)}^2 + \| \tilde u \|_{L^2(\Omega)}^2 + \| \sig{\tilde u} \|_{L^2(\Omega)}^2 + \| \sig{\tilde u} \|_{L^\gamma(\Omega)}^\gamma + \| p \|_{L^2(\Omega)}^2 + \| p \|_{L^\gamma(\Omega)}^\gamma \right) - C \] for two positive constants $c$ and $C$, depending on $\tau$, on $M$, and on $w$. Thanks to Korn's inequality \eqref{Korn}, this shows the coercivity of $\mathscr{A}_k$. Its pseudomonotonicity \eqref{def-pseudomon} ensues from standard arguments.
Indeed, one can observe that
$\mathscr{A}_k$ is the sum of bounded, radially continuous, monotone mappings (cf.\ e.g.\ \cite[Def.\ 2.3]{Roub05NPDE}), which
are pseudomonotone by \cite[Lemma\ 2.9]{Roub05NPDE}, and of totally continuous mappings; perturbations of pseudomonotone mappings by totally continuous ones remain pseudomonotone, cf.\ \cite[Cor.\ 2.12]{Roub05NPDE}. Therefore, we are in a position to apply \cite[Cor.\ 5.17]{Roub05NPDE} and thus conclude
the existence of solutions to system \eqref{syst:approx-discr}. \end{proof} \textbf{Step $2$: a priori estimates on the solutions of the approximate discrete system.} Let now \[ (\tetaum k, \utaum k, \etaum k,\ptaum k)_{M} \]
be a family of solutions to system \eqref{syst:approx-discr}. The following result collects a series of a priori estimates uniform w.r.t.\ the parameter $M$ (but not w.r.t.\ $\tau$): a crucial ingredient to derive them will be a discrete version of the total energy inequality \eqref{total-enineq}, cf.\ \eqref{discr-total-ineq} below, featuring the discrete total energy \begin{equation} \label{discr-total-energy}
\calE_\tau (\vartheta,e,p) : = \int_\Omega \vartheta \;\!\mathrm{d} x +\frac12 \int_\Omega \mathbb{C} e: e \;\!\mathrm{d} x +\frac\tau\gamma \int_\Omega \left(|e|^\gamma + |p|^\gamma \right) \;\!\mathrm{d} x\,. \end{equation}
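\noindent
Observe that, in view of \eqref{complete-approx-e_0}, the regularizing contribution of the approximate initial data to the discrete total energy \eqref{discr-total-energy} is negligible as $\tau \downarrow 0$: indeed,
\begin{equation*}
\frac\tau\gamma \int_\Omega \left( |e_\tau^0|^\gamma + |p_\tau^0|^\gamma \right) \;\!\mathrm{d} x = \frac1\gamma \left( \tau^{1/\gamma} \| e_\tau^0 \|_{L^{\gamma}(\Omega;\bbM_{\mathrm{sym}}^{d\times d})} \right)^\gamma + \frac1\gamma \left( \tau^{1/\gamma} \| p_\tau^0 \|_{L^{\gamma}(\Omega;\bbM_{\mathrm{D}}^{d\times d})} \right)^\gamma \longrightarrow 0 \qquad \text{as } \tau \downarrow 0\,.
\end{equation*}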
\begin{lemma} \label{l:aprio-M} Let $ k \in \{1,\ldots, K_\tau\} $ and $\tau \in (0,\bar \tau)$ be fixed. Under the growth condition \eqref{hyp-K}, the solution quadruple $(\tetaum k, \utaum k, \etaum k,\ptaum k) $ to \eqref{syst:approx-discr} satisfies \begin{equation} \label{discr-total-ineq} \begin{aligned}
& \frac{\rho}2 \int_\Omega \left|\frac{\utaum k - \utau{k-1}}{\tau} \right|^2 \;\!\mathrm{d} x + \calE_\tau (\tetaum k,\etaum k,\ptaum k) \\ & \leq \frac{\rho}2 \int_\Omega \left| \frac{\utau{k-1} - \utau{k-2}}\tau \right |^2 \;\!\mathrm{d} x + \calE_\tau (\tetau {k-1},\etau {k-1},\ptau{k-1}) + \tau \int_\Omega \gtau k \;\!\mathrm{d} x + \tau \int_{\partial\Omega} \htau{k} \;\!\mathrm{d} S
\\ & \quad + \tau \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{\Ltau k}{\frac{\utaum k - \utau{k-1}}{\tau} {-} \Dtau kw}+ \tau \int_\Omega \simtau {k} : \sig{\Dtau kw} \;\!\mathrm{d} x \\ & \quad +\rho \int_\Omega \left( \frac{\utaum k - \utau{k-1}}{\tau} - \Dtau {k-1} u \right) \Dtau kw \;\!\mathrm{d} x\,. \end{aligned} \end{equation} Moreover, there exists a constant $C>0$ such that for all $M>0$ \begin{subequations} \label{estimates-M-indep} \begin{align} & \label{est-M-indep1}
\| \tetaum k\|_{L^1(\Omega)} + \| \utaum k \|_{L^2(\Omega;\mathbb{R}^d)} + \| \etaum k \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \leq C, \\ & \label{est-M-indep2}
\tau^{1/\gamma} \| \utaum k \|_{W^{1,\gamma} (\Omega;\mathbb{R}^d)}
+ \tau^{1/\gamma} \| \etaum k \|_{L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d})}
+ \tau^{1/\gamma} \| \ptaum k \|_{L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d})} \leq C, \\ & \label{est-M-indep3}
\| \tetaum k \|_{H^1(\Omega)} \leq C, \\ & \label{est-M-indep4}
\| \zetaum{k} \|_{L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d})} \leq C, \end{align} \end{subequations} where $\zetaum k \in \partial_{\dot p} \mathrm{R} (\tetau{k-1}, (\ptaum{k}{ -} \ptau{k-1})/{\tau})$ fulfills \eqref{plasr:approx-discr}. \end{lemma} \begin{proof} Inequality \eqref{discr-total-ineq} follows by multiplying \eqref{heat:approx-discr} by $\tau$ and integrating it in space, testing \eqref{mom:approx-discr} by $\utaum k -\wtau k - (\utau {k-1} - \wtau {k-1})$, and testing \eqref{plasr:approx-discr} by $\ptaum k - \ptau{k-1}$. We add the resulting relations and develop the following estimates \begin{subequations} \label{est-mimick} \begin{align} & \label{est-mimick-1} \begin{aligned} &
\frac{\rho}{\tau^2} \int_\Omega \left(\utaum {k} {-} \utau{k-1} {-} (\utau{k-1}{ -} \utau{k-2} ) \right) (\utaum{k} {- } \utau{k-1} ) \;\!\mathrm{d} x \\ & \geq \frac{\rho}2 \left\| \frac{\utaum k - \utau{k-1}}{\tau} \right\|_{L^2(\Omega)}^2 - \frac{\rho}2 \left\| \frac{\utau {k-1} - \utau{k-2}}{\tau} \right\|_{L^2(\Omega)}^2, \end{aligned} \\ & \label{est-mimick-2} \begin{aligned} & \int_\Omega \mathbb{D} \left(\frac{\etaum k - \etau{k-1}}\tau \right){:} \sig{\utaum k - \utau{k-1}} \;\!\mathrm{d} x \\ & = \tau \int_\Omega \mathbb{D} \frac{\etaum k - \etau{k-1}}\tau {:}\frac{\etaum k - \etau{k-1}}\tau \;\!\mathrm{d} x +\int_\Omega \mathbb{D} \frac{\etaum k - \etau{k-1}}\tau{ :} (\ptaum k{ -} \ptau {k-1}) \;\!\mathrm{d} x, \end{aligned} \\ & \label{est-mimick-3} \begin{aligned} \int_\Omega \mathbb{C} \etaum k {:} \sig{\utaum k - \utau{k-1}} \;\!\mathrm{d} x & = \int_\Omega \mathbb{C} \etaum {k} : (\etaum k - \etau{k-1}) + \mathbb{C} \etaum {k} : (\ptaum k {-} \ptau{k-1}) \;\!\mathrm{d} x \\ & \geq \int_\Omega \left( \tfrac12 \mathbb{C} \etaum {k} {:} \etaum {k} - \tfrac12 \mathbb{C} \etau{k-1}{:} \etau{k-1} +\mathbb{C} \etaum {k} : (\ptaum k {-} \ptau{k-1}) \right) \;\!\mathrm{d} x
\,, \end{aligned} \\ & \label{est-mimick-4} \begin{aligned} &
\int_\Omega |\etaum k|^{\gamma-2} \etaum k : \sig{\utaum k - \utau{k-1}} \;\!\mathrm{d} x
\\ & = \int_\Omega |\etaum k|^{\gamma-2} \etaum k : (\etaum k {-} \etau{k-1}) \;\!\mathrm{d} x + \int_\Omega |\etaum k|^{\gamma-2} \etaum k : (\ptaum k {-} \ptau{k-1}) \;\!\mathrm{d} x
\\ & \quad \geq\int_\Omega \left( \tfrac1{\gamma} |\etaum k|^{\gamma} {-} \tfrac1{\gamma} |\etau {k-1}|^\gamma {+} |\etaum k|^{\gamma-2} \etaum k : (\ptaum k{ -} \ptau{k-1}) \right) \;\!\mathrm{d} x \,. \end{aligned} \end{align} \end{subequations} Observe that \eqref{est-mimick-2}--\eqref{est-mimick-4} mimic the calculations on the time-continuous level leading to \eqref{total-enbal} and in fact rely on the kinematic admissibility condition. The terms on the right-hand side of \eqref{est-mimick-2} cancel with the fourth term on the r.h.s.\ of \eqref{heat:approx-discr}, multiplied by $\tau$, and with the analogous term deriving from \eqref{plasr:approx-discr}, tested by $\ptaum{k} - \ptau{k-1}$. In the same way, the last terms on the r.h.s.\ of \eqref{est-mimick-3} and
\eqref{est-mimick-4} cancel with the ones coming from \eqref{plasr:approx-discr}. In fact, it can be easily checked that, with the exception of $\tau \gtau k$, all the terms on the r.h.s.\ of \eqref{heat:approx-discr} cancel out: for instance, $\tau \int_\Omega \mathrm{R} (\tetau{k-1}, \ptaum {k} - \ptau{k-1}) \;\!\mathrm{d} x $ cancels with the term $ \int_\Omega \zetaum k {:} ( \ptaum k {-} \ptau {k-1}) \;\!\mathrm{d} x $ in view of \eqref{characterization-subdiff}. In this way, we conclude \eqref{discr-total-ineq}.
\par
In order to derive estimates \eqref{est-M-indep1}--\eqref{est-M-indep2}, we observe that the first four terms on the right-hand side of \eqref{discr-total-ineq} are bounded, depending on the quantities
$ \| \utau{k-1}\|_{L^2(\Omega;\mathbb{R}^d)} $, $\calE_\tau (\tetau {k-1},\etau {k-1},\ptau{k-1}), $
$\|\gtau k\|_{L^1(\Omega)}$, $ \| \htau{k} \|_{L^2(\partial\Omega)}$,
whereas the remaining ones can be controlled by the ones on the left-hand side. In fact,
we have \[ \begin{aligned}
&
\begin{aligned}
\left| \tau \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{\Ltau k}{\frac{\utaum k - \utau{k-1}}{\tau} {-} \Dtau kw } \right| & \stackrel{(1)}{\leq} \delta \| \utaum k {-} \wtau k \|_{H^1(\Omega;\mathbb{R}^d)}^2 + \delta \| \utau { k-1}{-} \wtau{k-1} \|_{H^1(\Omega;\mathbb{R}^d)}^2 + C_\delta \| \Ltau k \|_{H^1(\Omega;\mathbb{R}^d)^*}^2 \\ & \stackrel{(2)}{\leq} \delta C_K \| \sig{\utaum k}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d} )}^2 + C \\ & \stackrel{(3)}{\leq} 2 \delta C_K^2 \| {\etaum k}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d} )}^2 + 2 \delta C_K \| {\ptaum k}\|_{L^2(\Omega;\bbM_\mathrm{D}^{d\times d} )}^2+C, \end{aligned} \\
& \begin{aligned}
& \left| \tau \int_\Omega \simtau {k} : \sig{\Dtau kw} \;\!\mathrm{d} x \right| \\ & \stackrel{(4)}{\leq} \frac{\delta}\tau \|\etaum k - \etau{k-1}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + \delta \tau \|\etaum k \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + C_\delta \| \sig{\Dtau kw} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 \\ & \quad + C \| \sig{\Dtau kw} \|_{L^\infty(\Omega;\bbM_\mathrm{sym}^{d\times d})} \left( \int_\Omega | \tetaum k| + |\etaum{k}|^{\gamma-1} \;\!\mathrm{d} x \right) \end{aligned} \\
& \begin{aligned} \left| \rho \int_\Omega \left( \frac{\utaum k - \utau{k-1}}{\tau} {-} \Dtau {k-1} u \right) \Dtau kw \;\!\mathrm{d} x \right| & \leq \frac{\rho}4 \int_\Omega \left|\frac{\utaum k - \utau{k-1}}{\tau} \right|^2 \;\!\mathrm{d} x + \frac{\rho}4\| \Dtau {k-1} u\|_{L^2(\Omega;\mathbb{R}^d)}^2 \\ & \qquad + \rho \| \Dtau {k} w\|_{L^2(\Omega;\mathbb{R}^d)}^2\,, \end{aligned} \end{aligned} \]
where $\delta>0$ in (1) and in the other estimates is an arbitrary positive constant, to be specified later, while (2) ensues from Korn's inequality \eqref{Korn} and from the bounds on the quantities $\| \Ltau k \|_{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)^*}$,
$\| \wtau{k}\|_{H^{1}(\Omega;\mathbb{R}^d)}$, $\| \wtau{k-1}\|_{H^{1}(\Omega;\mathbb{R}^d)}$, $ \| \utau{k-1}\|_{H^1(\Omega;\mathbb{R}^d)} $ , and (3) from the kinematic admissibility condition. For (4) we have used that
$\simtau{k}:= \mathbb{D} \Dtau k e + \mathbb{C} \etau k + \tau |\etau{k}|^{\gamma-2} \etau k- \calT_M(\tetau k) \mathbb{B}\,,$ as well as the fact that $\|\calT_M(\tetaum k )\|_{L^1(\Omega)} \leq \| \tetaum k \|_{L^1(\Omega)}$. It is now immediate to check that the terms on the right-hand sides of the above estimates are either bounded, due to our assumptions, or can be absorbed into the left-hand side of \eqref{discr-total-ineq}, suitably tuning the positive constant $\delta$.
All in all, we conclude that \[
\int_\Omega \left|\frac{\utaum k - \utau{k-1}}{\tau} \right|^2 \;\!\mathrm{d} x + \calE_\tau (\tetaum k,\etaum k,\ptaum k)\leq C \] for a constant independent of $M$. Estimates \eqref{est-M-indep1} and \eqref{est-M-indep2} then ensue, also taking into account Korn's inequality. \par Estimate \eqref{est-M-indep3} is proved in two steps, by testing \eqref{heat:approx-discr} first by $\calT_M(\tetaum k)$, and secondly by $\tetaum k$. We refer to the proof of \cite[Lemma 4.4]{Rocca-Rossi} for all the calculations. \par Estimate \eqref{est-M-indep4} follows from the fact that $\zetaum k \in \partial_{\dot p} \mathrm{R}(\tetau {k-1}, (\ptaum{k} {-}\ptau{k-1})/\tau)$ and from \eqref{bounded-subdiff}. \end{proof} \textbf{Step $3$: limit passage in the approximate discrete system.} With the following result we conclude the proof of Proposition \ref{prop:exist-discr}. From now on, we suppose that $M\in \mathbb{N}\setminus\{0\}$. \begin{lemma} \label{l:3.6}
Let $ k \in \{1,\ldots, K_\tau\} $ and $\tau\in (0,\bar\tau)$ be fixed. Under the growth condition \eqref{hyp-K}, there exist a (not relabeled) subsequence
of $(\tetaum k, \utaum k, \etaum k,\ptaum k)_{M}$ and of $(\zetaum k)_M$, and a quadruple $(\tetau k, \utau k, \etau k, \ptau k) \in H^1(\Omega) \times W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d) \times L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d}) \times L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d}) $ and $\zetau k \in L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d}) $, such that the following convergences hold as $M\to\infty$
\begin{subequations}
\label{conves-as-M}
\begin{align}
\label{conves-as-M-teta}
& \tetaum k \rightharpoonup \tetau k && \text{in } H^1(\Omega),
\\
\label{conves-as-M-u}
& \utaum k \to \utau k && \text{in } W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d),
\\
\label{conves-as-M-e}
& \etaum k \to \etau k && \text{in } L^{\gamma}(\Omega;\bbM_\mathrm{sym}^{d\times d}),
\\
\label{conves-as-M-p}
& \ptaum k \to \ptau k && \text{in } L^{\gamma}(\Omega;\bbM_\mathrm{D}^{d\times d}),
\\
\label{conves-as-M-zeta}
& \zetaum k \weaksto \zetau k && \text{in } L^{\infty}(\Omega;\bbM_\mathrm{D}^{d\times d}),
\end{align} \end{subequations} and the quintuple $(\tetau k, \utau k, \etau k, \ptau k,\zetau k)$ fulfills system \eqref{syst:discr}. \end{lemma} \begin{proof} It follows from estimates \eqref{estimates-M-indep} that convergences \eqref{conves-as-M-teta}, \eqref{conves-as-M-zeta}, and the weak versions of \eqref{conves-as-M-u}--\eqref{conves-as-M-p} hold as $M\to\infty$, along a suitable subsequence. Moreover, there exist $\varepsilon_\tau^k \in L^{\gamma/(\gamma{-}1)}(\Omega;\bbM_\mathrm{sym}^{d\times d})$ and $\pi_\tau^k \in
L^{\gamma/(\gamma{-}1)}(\Omega;\bbM_\mathrm{D}^{d\times d})$ such that
\[
|\etaum k |^{\gamma-2} \etaum k \rightharpoonup \varepsilon_\tau^k \quad \text{in } L^{\gamma/(\gamma{-}1)}(\Omega;\bbM_\mathrm{sym}^{d\times d}), \qquad \qquad |\ptaum k |^{\gamma-2} \ptaum k \rightharpoonup \pi_\tau^k \quad \text{in } L^{\gamma/(\gamma{-}1)}(\Omega;\bbM_\mathrm{D}^{d\times d})\,.
\]
Furthermore, from \eqref{conves-as-M-teta} one deduces that $\tetaum k \to \tetau k $ strongly in $L^{3\mu+6 -\rho}(\Omega)$ for all $\rho \in (0, 3\mu+5]$. Hence, it is not difficult to conclude that
\begin{equation}
\label{strong-conv-trunc}
\calT_M(\tetaum k) \to \tetau k \quad \text{ in $L^{3\mu+6 -\rho}(\Omega) $ for all $\rho \in (0, 3\mu+5] $.}
\end{equation}
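\noindent
For instance, one may argue as follows: for $0 < \rho' < \rho \leq 3\mu+5$, on the set $\{ \tetaum k > M \}$ there holds $|\calT_M(\tetaum k) - \tetaum k| \leq \tetaum k$, so that
\begin{equation*}
\| \calT_M(\tetaum k) - \tetaum k \|_{L^{3\mu+6-\rho}(\Omega)}^{3\mu+6-\rho} \leq \int_{\{\tetaum k > M\}} |\tetaum k|^{3\mu+6-\rho} \;\!\mathrm{d} x \leq \frac{1}{M^{\rho-\rho'}}\, \| \tetaum k \|_{L^{3\mu+6-\rho'}(\Omega)}^{3\mu+6-\rho'} \longrightarrow 0 \qquad \text{as } M \to\infty,
\end{equation*}
the latter norm being bounded uniformly w.r.t.\ $M$; hence $\calT_M(\tetaum k)$ and $\tetaum k$ share the same strong limit $\tetau k$.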
\par
With these convergences at hand, it is possible to pass to the limit in \eqref{mom:approx-discr}--\eqref{plasr:approx-discr} and prove that the functions $(\tetau k, \utau k, \etau k, \ptau k, \zetau k, \varepsilon_\tau^k, \pi_\tau^k)$ fulfill
\begin{equation} \label{not-still-complete} \begin{aligned} &
\rho \Ddtau k u -\mathrm{div}_{\mathrm{Dir}}( \bar{\sigma}_\tau^k ) = \Ltau k \qquad \text{in } H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*,
\\& \zetau k + \Dtau kp + \pi_\tau^k = ( \bar{\sigma}_\tau^k) _\mathrm{D} \qquad \text{a.e.\ in } \Omega, \end{aligned}
\end{equation}
with
$ \bar{\sigma}_\tau^k = \mathbb{D} \Dtau k e + \mathbb{C} \etau k + \varepsilon_\tau^k - \tetau k \mathbb{B} $.
In order to conclude the discrete momentum equation and plastic flow rule, it thus remains to show that
\begin{equation}
\label{identifications-to-show}
\varepsilon_\tau^k = |\etau k |^{\gamma-2} \etau k, \qquad \pi_\tau^k = |\ptau k |^{\gamma-2} \ptau k, \qquad \zetau k \in \partial_{\dot p} \mathrm{R}(\tetau {k-1}, \ptau k - \ptau{k-1}) \quad \text{a.e.\ in } \Omega.
\end{equation}
With this aim, on the one hand we observe that
\begin{equation}
\label{limsup_M}
\begin{aligned} &
\limsup_{M\to \infty} \left( \int_\Omega \zetaum k {:} \ptaum {k} \;\!\mathrm{d} x +\tau \int_\Omega |\ptaum k |^\gamma \;\!\mathrm{d} x + \tau \int_\Omega |\etaum k |^\gamma \;\!\mathrm{d} x \right) \\ & \stackrel{(1)}{\leq} \limsup_{M\to \infty} \left ( - \int_\Omega \frac{\ptaum k - \ptau{k-1}}\tau : \ptaum{k} \;\!\mathrm{d} x + \int_\Omega \simdevtau k : \ptaum{k} \;\!\mathrm{d} x +\tau \int_\Omega |\etaum k |^\gamma \;\!\mathrm{d} x \right) \\ & \stackrel{(2)}{\leq} - \int_\Omega \frac{\ptau k - \ptau{k-1}}\tau : \ptau{k} \;\!\mathrm{d} x + \limsup_{M\to \infty}\int_\Omega \dddn{\simtau k : \sig{\utaum k}}{$=\simtau k : \sig{\utaum k{-}\wtau k} +\simtau k{:} \sig{\wtau k}$} - \simtau k : \etaum k + \tau |\etaum k |^\gamma \;\!\mathrm{d} x \\ &
\begin{aligned}
\stackrel{(3)}{= } - \int_\Omega \frac{\ptau k - \ptau{k-1}}\tau : \ptau{k} \;\!\mathrm{d} x + \limsup_{M\to \infty} \Big( & - \int_\Omega \rho \frac{\utaum k - 2\utau{k-1} + \utau {k-2}}{\tau^2} (\utaum{k}{-}\wtau k) \;\!\mathrm{d} x \\ & \quad + \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{\Ltau k}{\utaum{k} {-}\wtau k} +\int_\Omega \simtau k{:} \sig{\wtau k} \;\!\mathrm{d} x \\ & \quad -\int_\Omega \left( \mathbb{D} \frac{\etaum k - \etau{k-1}}{\tau} {+} \mathbb{C} \etaum{k} { -} \calT_M (\tetaum k) \mathbb{B} \right){:} \etaum k \;\!\mathrm{d} x \Big) \end{aligned} \\ &
\begin{aligned}
\stackrel{(4)}{\leq}
& - \int_\Omega \frac{\ptau k - \ptau{k-1}}\tau : \ptau{k} \;\!\mathrm{d} x - \rho \int_\Omega \Ddtau k u (\utau k{-} \wtau k) \;\!\mathrm{d} x + \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{\Ltau k}{\utau{k} {-}\wtau k} +\int_\Omega \sitau k {:} \sig{\wtau k} \;\!\mathrm{d} x \\ & -\int_\Omega \left( \mathbb{D} \frac{\etau k - \etau{k-1}}{\tau} {+ }\mathbb{C} \etau{k} {-} \tetau k \mathbb{B} \right){:} \etau k \;\!\mathrm{d} x
\end{aligned} \\ & \stackrel{(5)}{=} \int_\Omega \zetau k {:} \ptau {k} \;\!\mathrm{d} x + \int_\Omega |\pi_\tau^k|^\gamma \;\!\mathrm{d} x + \int_\Omega |\varepsilon_\tau^k| ^\gamma \;\!\mathrm{d} x.
\end{aligned}
\end{equation} In \eqref{limsup_M},
(1) follows from testing \eqref{plasr:approx-discr} by $\ptaum k$, (2) from the weak convergence $\ptaum k \rightharpoonup \ptau k $ in $L^2(\Omega;\bbM_\mathrm{D}^{d\times d})$ and the discrete admissibility condition, (3) from rewriting the term $\int_\Omega \simtau k : \sig{\utaum k{-}\wtau k} \;\!\mathrm{d} x $ in terms of \eqref{mom:approx-discr} tested
by $\utaum k{-} \wtau k$, and from using the explicit expression of $\simtau k$ (which leads to the cancelation of the term $\int_\Omega \tau | \etaum k|^\gamma \;\!\mathrm{d} x$), (4) from the previously proved convergences via lower semicontinuity arguments, and (5) from repeating the above calculations
in the frame of system \eqref{not-still-complete}, fulfilled by the limiting seventuple
$(\tetau k, \utau k, \etau k, \ptau k, \zetau k, \varepsilon_\tau^k, \pi_\tau^k)$.
On the other hand, we have that
\begin{equation}
\label{liminf_M}
\begin{aligned}
\liminf_{M\to \infty} \int_\Omega \zetaum k {:} \ptaum {k} \;\!\mathrm{d} x \geq \int_\Omega \zetau k {:} \ptau {k} \;\!\mathrm{d} x, \qquad & \liminf_{M\to \infty} \int_\Omega |\ptaum k |^\gamma \;\!\mathrm{d} x \geq \int_\Omega |\pi_\tau^k|^\gamma \;\!\mathrm{d} x, \\ & \liminf_{M\to \infty} \int_\Omega |\etaum k |^\gamma \;\!\mathrm{d} x \geq \int_\Omega |\varepsilon_\tau^k| ^\gamma \;\!\mathrm{d} x\,, \end{aligned}
\end{equation}
where the second and the third inequalities follow from the weak convergence of $(\ptaum k)_M$ and $(\etaum k)_M$ to $\pi_\tau^k$ and $\varepsilon_\tau^k$, whereas the first inequality ensues from
\[
\begin{aligned}
\liminf_{M\to \infty} \left( \int_\Omega \zetaum k {:} \ptaum {k} {-} \zetau k {:} \ptau {k} \right) \;\!\mathrm{d} x & \geq \liminf_{M\to \infty} \int_\Omega \zetaum k {:} ( \ptaum {k} {-} \ptau k) \;\!\mathrm{d} x + \liminf_{M\to \infty} \int_\Omega (\zetaum k {-} \zetau k) { :} \ptau {k} \;\!\mathrm{d} x \\ & \stackrel{(1)}{\geq} \liminf_{M\to \infty}\int_\Omega \left( \mathrm{R}(\tetau{k-1}, \ptaum{k} - \ptau{k-1}) - \mathrm{R}(\tetau{k-1}, \ptau{k} - \ptau{k-1}) \right) \;\!\mathrm{d} x \stackrel{(2)}{\geq} 0 \end{aligned}
\]
with (1) due to the fact that $\zetaum k \in \partial_{\dot p} \mathrm{R}(\tetau{k-1}, \ptaum{k} - \ptau{k-1}) $ and from $\zetaum k \weaksto \zetau k$ in $L^\infty (\Omega;\bbM_\mathrm{D}^{d\times d})$ as $M\to\infty$, and (2) following from the lower semicontinuity w.r.t.\ the weak $L^2(\Omega;\bbM_\mathrm{D}^{d\times d})$-convergence of the integral functional $ p \mapsto \int_\Omega \mathrm{R} (\tetau{k-1}, p - \ptau{k-1}) \;\!\mathrm{d} x $.
Combining \eqref{limsup_M} and \eqref{liminf_M} we obtain that \[ \left\{ \begin{array}{lll} \lim_{M\to\infty} \int_\Omega \zetaum k {:} \ptaum {k} \;\!\mathrm{d} x = \int_\Omega \zetau k {:} \ptau {k} \;\!\mathrm{d} x &\stackrel{(1)}{ \Rightarrow} & \zetau k \in \partial_{\dot p} \mathrm{R}(\tetau{k-1}, \ptau{k} - \ptau{k-1}) \quad \text{a.e.\ in } \Omega, \\
\lim_{M\to \infty} \int_\Omega |\ptaum k |^\gamma \;\!\mathrm{d} x = \int_\Omega |\pi_\tau^k|^\gamma \;\!\mathrm{d} x & \Rightarrow & \ptaum k \to \pi_\tau^k \qquad \text{in } L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d}),
\\
\lim_{M\to \infty} \int_\Omega |\etaum k |^\gamma \;\!\mathrm{d} x = \int_\Omega |\varepsilon_\tau^k|^\gamma \;\!\mathrm{d} x & \Rightarrow & \etaum k \to \varepsilon_\tau^k \qquad \text{in } L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d})\,,
\end{array} \right. \] with (1) due to Minty's trick, cf.\ also \cite[Lemma 1.3, p. 42]{barbu76}.
Hence we conclude convergences \eqref{conves-as-M-e}--\eqref{conves-as-M-p} (and \eqref{conves-as-M-u}, via the kinematic admissibility $\sig{\utaum k } = \etaum k + \ptaum k$ and Korn's inequality), as well as \eqref{identifications-to-show}. Therefore $(\tetau k, \utau k, \etau k, \ptau k,\zetau k) $ fulfills the discrete momentum balance \eqref{discrete-momentum} and flow rule \eqref{discrete-plastic}. \par Exploiting convergences \eqref{conves-as-M} we pass to the limit as $M\to \infty$ on the right-hand side of \eqref{heat:approx-discr}. In order to take the limit of the elliptic operator on the left-hand side, we repeat the argument from the proof of \cite[Lemma 4.4]{Rocca-Rossi}. Namely, we observe that, due to convergence \eqref{strong-conv-trunc}, $\kappa_M(\tetaum k) = \kappa(\calT_M(\tetaum k)) \to \kappa(\tetau k)$ in $L^q(\Omega)$ for all $1\leq q<3+\tfrac6{\mu}$, and combine this with the fact that $\nabla \tetaum k \rightharpoonup \nabla \tetau k $ in $L^2(\Omega)$, and with the fact that, by comparison in \eqref{heat:approx-discr}, $({A}_M^k(\tetaum k))_M$ is bounded in $H^1(\Omega)^*$. All in all, we conclude that ${A}_M^k(\tetaum k) \rightharpoonup {A}^k(\tetau k)$ in $H^1(\Omega)^*$ as $M\to\infty$, yielding the discrete heat equation \eqref{discrete-heat}. \end{proof}
\section{Proof of Theorems \ref{mainth:1} and \ref{mainth:2}} \label{s:4} In the statements of all of the results of this section, leading to the proofs of Thms.\ \ref{mainth:1} \& \ref{mainth:2}, we will always tacitly assume the conditions on the problem data from Section \ref{ss:2.1}.
\par We start by fixing some notation for the approximate solutions. \begin{notation}[Interpolants]
\upshape For a given Banach space $B$ and a $K_\tau$-tuple $( \mathfrak{h}_\tau^k )_{k=0}^{K_\tau} \subset B$, we introduce
the left-continuous and right-continuous piecewise constant, and the piecewise linear interpolants
of the values $\{ \mathfrak{h}_\tau^k \}_{k=0}^{K_\tau}$, i.e.\ \[ \left. \begin{array}{llll} & \piecewiseConstant {\mathfrak{h}}{\tau}: (0,T] \to B & \text{defined by} & \piecewiseConstant {\mathfrak{h}}{\tau}(t): = \mathfrak{h}_\tau^k, \\ & \upiecewiseConstant {\mathfrak{h}}{\tau}: (0,T] \to B & \text{defined by} & \upiecewiseConstant {\mathfrak{h}}{\tau}(t) := \mathfrak{h}_\tau^{k-1}, \\
& \piecewiseLinear {\mathfrak{h}}{\tau}: (0,T] \to B & \text{defined by} &
\piecewiseLinear {\mathfrak{h}}{\tau}(t): =\frac{t-t_\tau^{k-1}}{\tau} \mathfrak{h}_\tau^k + \frac{t_\tau^k-t}{\tau}\mathfrak{h}_\tau^{k-1} \end{array} \right\}
\qquad \text{for $t \in (t_\tau^{k-1}, t_\tau^k]$,} \] setting $\piecewiseConstant {\mathfrak{h}}{\tau}(0)= \upiecewiseConstant {\mathfrak{h}}{\tau}(0)= \piecewiseLinear {\mathfrak{h}}{\tau}(0): = \mathfrak{h}_\tau^0$. We also introduce the piecewise linear interpolant of the values $\{ \Dtau k {\mathfrak{h}} = \tfrac{\mathfrak{h}_\tau^k - \mathfrak{h}_{\tau}^{k-1}}\tau\}_{k=1}^{K_\tau}$ (which are the values taken by the piecewise constant function $\piecewiseLinear {\dot{\mathfrak{h}}}{\tau}$), viz. \[ \pwwll {\mathfrak{h}}{\tau}: (0,T) \to B \ \text{ defined by } \ \pwwll {\mathfrak{h}}{\tau}(t) :=\frac{(t-t_\tau^{k-1})}{\tau} \Dtau k {\mathfrak{h}} + \frac{(t_\tau^k-t)}{\tau} \Dtau {k-1} {\mathfrak{h}} \qquad \text{for $t \in (t_\tau^{k-1}, t_\tau^k]$.} \] Note that $ {\partial_t \pwwll {{\mathfrak{h}}}{\tau}}(t) = \Ddtau k {\mathfrak{h}} $ for $t \in (t_\tau^{k-1}, t_\tau^k]$.
Furthermore, we denote by $\piecewiseConstant{\mathsf{t}}{\tau}$ and by $\upiecewiseConstant{\mathsf{t}}{\tau}$ the left-continuous and right-continuous piecewise constant interpolants associated with the partition, i.e.
$\piecewiseConstant{\mathsf{t}}{\tau}(t) := t_\tau^k$ if $t_\tau^{k-1}<t \leq t_\tau^k $ and $\upiecewiseConstant{\mathsf{t}}{\tau}(t):= t_\tau^{k-1}$ if $t_\tau^{k-1} \leq t < t_\tau^k $. Clearly, for every $t \in (0,T)$ we have $\piecewiseConstant{\mathsf{t}}{\tau}(t) \downarrow t$ and $\upiecewiseConstant{\mathsf{t}}{\tau}(t) \uparrow t$ as $\tau\to 0$. \end{notation}
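\noindent
For later use, we also record the elementary relation linking the above interpolants: for every $t \in (t_\tau^{k-1}, t_\tau^k]$ there holds
\begin{equation*}
\piecewiseConstant {\mathfrak{h}}{\tau}(t) - \piecewiseLinear {\mathfrak{h}}{\tau}(t) = (t_\tau^k - t)\, \piecewiseLinear {\dot{\mathfrak{h}}}{\tau}(t), \qquad \text{whence} \qquad \| \piecewiseConstant {\mathfrak{h}}{\tau} - \piecewiseLinear {\mathfrak{h}}{\tau} \|_{L^\infty(0,T;B)} \leq \tau\, \sup_{k} \| \Dtau k {\mathfrak{h}} \|_B\,,
\end{equation*}
so that the piecewise constant and the piecewise linear interpolants of a given family share their limit as $\tau \downarrow 0$, as soon as the discrete time derivatives are suitably bounded.
\par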
In view of \eqref{heat-source}, \eqref{dato-h}, and \eqref{data-displ} it is easy to check that the piecewise constant interpolants $(\piecewiseConstant H{\tau} )_{\tau}$, $(\piecewiseConstant h{\tau} )_{\tau}$, and $(\piecewiseConstant \calL{\tau})_\tau$ of the values $\gtau{k}$, $\htau{k}$, and $\Ltau k $, cf.\ \eqref{local-means}, fulfill as $\tau \downarrow 0$ \begin{subequations} \label{convs-interp-data} \begin{align} \label{converg-interp-g} & \piecewiseConstant H{\tau} \to H
\text{ in $L^1(0,T;L^1(\Omega))\cap L^2(0,T;H^1(\Omega)^*)$,}
\\
\label{converg-interp-h} & \piecewiseConstant h{\tau} \to h
\text{ in $L^1(0,T;L^2(\partial\Omega))$,}
\\
\label{converg-interp-L} & \piecewiseConstant \calL{\tau} \to \calL \text{ in $ L^2(0,T; H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*).$}
\end{align} Furthermore, it follows from \eqref{Dirichlet-loading} and \eqref{discr-w-tau} that \begin{equation} \label{converg-interp-w} \begin{gathered} \piecewiseConstant w\tau \to w \quad \text{in $L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d))$}, \qquad \piecewiseLinear w\tau \to w \quad \text{in $W^{1,p} (0,T; H^1(\Omega;\mathbb{R}^d))$ for all } 1 \leq p<\infty, \\ \pwwll w\tau \to w \quad \text{in $W^{1,1}(0,T; H^1(\Omega;\mathbb{R}^d)) \cap H^1(0,T; L^2(\Omega;\mathbb{R}^d))$}, \\
\sup_{\tau>0} \tau^{\alpha_w} \| \sig{\piecewiseLinear{\dot w}{\tau}}\|_{L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d})} \leq C <\infty \text{ with } \alpha_w \in (0,\tfrac1\gamma)\,.
\end{gathered} \end{equation} \end{subequations}
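\par Let us also record the standard approximation property of local means underlying \eqref{convs-interp-data}, a classical fact which we state here without proof: for every Banach space $B$,
\begin{equation*}
f \in L^p(0,T;B), \ 1\leq p<\infty \qquad \Longrightarrow \qquad \piecewiseConstant f\tau \to f \ \text{ in } L^p(0,T;B) \ \text{ as } \tau \downarrow 0,
\end{equation*}
where $\piecewiseConstant f\tau$ denotes the left-continuous piecewise constant interpolant of the local means $f_\tau^k := \tfrac1\tau \int_{t_\tau^{k-1}}^{t_\tau^k} f(s) \;\!\mathrm{d} s$.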
\par We now reformulate the discrete system \eqref{syst:discr} in terms of the approximate solutions constructed by interpolating the discrete solutions $(\tetau k,\utau k, \etau k,\ptau k)_{k=1}^{K_\tau}$. Namely, we have \begin{subequations} \label{syst-interp} \begin{align} & \label{eq-teta-interp} \begin{aligned} & \partial_t \piecewiseLinear \vartheta{\tau}(t)
+ \mathcal{A}^{\frac{\bar{\mathsf{t}}_\tau(t)}{\tau}}( \piecewiseConstant{\vartheta}\tau (t) ) \\ &\quad = \piecewiseConstant H{\tau}(t)+ \mathrm{R}(\upiecewiseConstant \vartheta\tau(t), \piecewiseLinear{\dot p}\tau (t)) + |\piecewiseLinear{\dot p}\tau (t)|^2 + \mathbb{D} \piecewiseLinear{\dot e}\tau(t) {:} \piecewiseLinear{\dot e}\tau(t) - \piecewiseConstant \vartheta\tau(t) \mathbb{B} : \piecewiseLinear{\dot e}\tau(t),
\quad \text{in $H^1(\Omega)^*$,} \end{aligned} \\ &
\label{eq-u-interp}
\begin{aligned} \rho\int_\Omega \partial_t\pwwll {u}{\tau}(t) v \;\!\mathrm{d} x + \int_\Omega \piecewiseConstant\sigma \tau(t) {:} \sig{v} \;\!\mathrm{d} x = \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{\piecewiseConstant\calL \tau(t)}{v} \quad \text{for all } v \in W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d),
\end{aligned}\\ & \label{eq-p-interp} \begin{aligned}
\piecewiseConstant\zeta\tau(t)+ \piecewiseLinear{\dot p}\tau (t) + \tau | \piecewiseConstant p\tau(t)|^{\gamma-2} \piecewiseConstant p\tau(t) = (\piecewiseConstant\sigma \tau(t) )_\mathrm{D} \quad \text{a.e.\ in }\, \Omega
\end{aligned} \end{align} for almost all $t\in (0,T)$, with $\piecewiseConstant\zeta\tau \in \partial_{\dot p} \mathrm{R} (\upiecewiseConstant \vartheta\tau, \piecewiseLinear{\dot p}\tau ) $ a.e.\ in $Q$, and where we have used the notation \begin{equation} \label{sigma-interp}
\piecewiseConstant\sigma\tau: = \mathbb{D} \piecewiseLinear {\dot e}\tau + \mathbb{C} \piecewiseConstant e\tau + \tau |\piecewiseConstant e\tau|^{\gamma-2} \piecewiseConstant e\tau - \piecewiseConstant\vartheta\tau \mathbb{B}. \end{equation} \end{subequations} \par We now show that the approximate solutions fulfill the approximate versions of the entropy inequality \eqref{entropy-ineq},
of the
total energy inequality \eqref{total-enineq}, and of the mechanical energy (in)equality
\eqref{mech-enbal}.
These discrete inequalities will have a pivotal role in the derivation of a priori estimates on the approximate solutions. Moreover, we will take their limit in order to obtain the entropy and total energy inequalities prescribed by the notion of \emph{entropic solution}, cf.\ Definition \ref{def:entropic-sols}.
\par
For stating the discrete entropy inequality \eqref{entropy-ineq-discr} below, we need to introduce \emph{discrete} test functions. For technical reasons, we will need to pass to the limit with test functions enjoying a slightly stronger time regularity than that required by Def.\ \ref{def:entropic-sols}. Namely, we fix a positive test function $\varphi $, with $\varphi \in \rmC^0 ([0,T]; W^{1,\infty}(\Omega)) \cap H^1 (0,T; L^{6/5}(\Omega)) $. We set
\begin{equation}
\label{discrete-tests-phi}
\varphi_\tau^k:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} \varphi(s)\;\!\mathrm{d} s \qquad \text{for } k=1, \ldots, K_\tau,
\end{equation} and consider the piecewise constant and linear interpolants $\piecewiseConstant \varphi\tau$ and $\piecewiseLinear \varphi\tau$ of the values $(\varphi_\tau^k)_{k=1}^{K_\tau}$. It can be shown that the following convergences hold as $\tau \to 0$ \begin{equation} \label{convergences-test-interpolants} \piecewiseConstant \varphi\tau \to \varphi \quad \text{ in } L^\infty (0,T; W^{1,\infty}(\Omega)) \text{ and } \quad \partial_t \piecewiseLinear \varphi\tau \to \partial_t \varphi \quad \text{ in } L^2 (0,T; L^{6/5}(\Omega)). \end{equation} Observe that the first convergence property easily follows from the fact that the map $\varphi: [0,T]\to W^{1,\infty}(\Omega)$ is uniformly continuous.
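Indeed, for $t \in (t_\tau^{k-1}, t_\tau^k]$ definition \eqref{discrete-tests-phi} gives
\begin{equation*}
\| \piecewiseConstant \varphi\tau(t) - \varphi(t) \|_{W^{1,\infty}(\Omega)} = \bigg\| \frac1\tau \int_{t_\tau^{k-1}}^{t_\tau^k} \big( \varphi(s) - \varphi(t) \big) \;\!\mathrm{d} s \bigg\|_{W^{1,\infty}(\Omega)} \leq \sup_{|s-r|\leq \tau} \| \varphi(s) - \varphi(r) \|_{W^{1,\infty}(\Omega)},
\end{equation*}
and the right-hand side tends to zero as $\tau \to 0$, uniformly w.r.t.\ $t \in [0,T]$.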
\begin{lemma}[Discrete entropy, mechanical, and total energy inequalities] \label{lemma:discr-enid}
The interpolants of the discrete solutions
$(\tetau{k}, \utau{k}, \etau k, \ptau{k} )_{k=1}^{K_\tau}$ to Problem \ref{prob:discrete}
fulfill
\begin{itemize}
\item[-]
the \emph{discrete} entropy inequality
\begin{equation}\label{entropy-ineq-discr} \begin{aligned} & \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \log(\upiecewiseConstant\vartheta\tau (r)) \piecewiseLinear {\dot\varphi}\tau (r) \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \kappa(\piecewiseConstant\vartheta\tau (r)) \nabla \log(\piecewiseConstant\vartheta\tau(r)) \nabla \piecewiseConstant\varphi\tau(r) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \leq \int_\Omega \log(\piecewiseConstant\vartheta\tau(t))
\piecewiseConstant\varphi\tau(t) \;\!\mathrm{d} x - \int_\Omega \log(\piecewiseConstant\vartheta\tau(s)) \piecewiseConstant\varphi\tau(s) \;\!\mathrm{d} x
- \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \kappa(\piecewiseConstant \vartheta\tau(r)) \frac{\piecewiseConstant\varphi\tau(r)}{\piecewiseConstant \vartheta\tau(r)} \nabla \log(\piecewiseConstant \vartheta\tau(r)) \nabla \piecewiseConstant \vartheta\tau (r) \;\!\mathrm{d} x \;\!\mathrm{d} r\\ & \quad
-
\int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \left( \piecewiseConstant H{\tau} (r) + \mathrm{R}(\upiecewiseConstant \vartheta\tau(r), \piecewiseLinear{\dot p}\tau(r)) + |\piecewiseLinear{\dot p}\tau(r)|^2 + \mathbb{D} \piecewiseLinear{\dot e}\tau (r) {:} \piecewiseLinear{\dot e}\tau(r) - \piecewiseConstant\vartheta\tau(r) \mathbb{B} {:} \piecewiseLinear{\dot e}\tau(r) \right)
\frac{\piecewiseConstant\varphi\tau(r)}{\piecewiseConstant\vartheta\tau(r)} \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \quad - \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_{\partial\Omega} \piecewiseConstant h{\tau} (r) \frac{\piecewiseConstant\varphi\tau(r)}{\piecewiseConstant\vartheta\tau(r)} \;\!\mathrm{d} S \;\!\mathrm{d} r \end{aligned} \end{equation} for all $0 \leq s \leq t \leq T$ and for all $\varphi \in \mathrm{C}^0 ([0,T]; W^{1,\infty}(\Omega)) \cap H^1 (0,T; L^{6/5}(\Omega)) $ with $\varphi \geq 0$; \item[-] the \emph{discrete} total energy inequality for all $ 0 \leq s \leq t \leq T$, viz. \begin{equation} \label{total-enid-discr} \begin{aligned} &
\frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau (t)|^2 \;\!\mathrm{d} x + \calE_\tau(\piecewiseConstant\vartheta\tau(t),\piecewiseConstant e\tau(t),\piecewiseConstant p\tau(t)) \\ & \leq \frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau (s)|^2 \;\!\mathrm{d} x
+ \calE_\tau(\piecewiseConstant\vartheta\tau(s),\piecewiseConstant e \tau(s),\piecewiseConstant p\tau(s)) + \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\piecewiseConstant \calL \tau (r) }{\piecewiseLinear{\dot u}\tau (r) {-} \piecewiseLinear{ \dot w}\tau (r)} \;\!\mathrm{d} r
\\ & \quad
+ \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \left( \int_\Omega \piecewiseConstant H\tau \;\!\mathrm{d} x +
\int_{\partial\Omega} \piecewiseConstant h\tau \;\!\mathrm{d} S \right) \;\!\mathrm{d} r \\ & \quad
+\rho \int_\Omega \piecewiseLinear{\dot u}\tau(t) \piecewiseLinear{\dot w }\tau(t) \;\!\mathrm{d} x - \rho \int_\Omega \piecewiseLinear{\dot u}\tau(s) \piecewiseLinear{\dot w }\tau(s) \;\!\mathrm{d} x
- \rho \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseLinear{\dot u}\tau(r-\tau) \partial_t \pwwll{ w}{\tau} (r) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \quad + \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)}\int_\Omega \piecewiseConstant \sigma\tau (r) {:} \sig{\piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} x \;\!\mathrm{d} r
\end{aligned} \end{equation}
with the discrete total energy functional $\calE_\tau$ from
\eqref{discr-total-energy};
\item[-] the \emph{discrete} mechanical energy inequality for all $ 0 \leq s \leq t \leq T$, viz.
\begin{equation} \label{mech-ineq-discr} \begin{aligned} &
\frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau (t)|^2 \;\!\mathrm{d} x
+ \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \left( \mathbb{D} \piecewiseLinear{\dot e}\tau(r) {:} \piecewiseLinear{\dot e}\tau(r) + \mathrm{R}(\upiecewiseConstant \vartheta\tau(r), \piecewiseLinear{\dot p}\tau(r)) + |\piecewiseLinear{\dot p}\tau(r)|^2 \right) \;\!\mathrm{d} x \;\!\mathrm{d} r + \frac12\int_\Omega \mathbb{C} \piecewiseConstant e\tau(t) {:} \piecewiseConstant e\tau(t) \;\!\mathrm{d} x \\
& \qquad + \frac{\tau}\gamma\int_\Omega \left( |\piecewiseConstant e\tau(t)|^\gamma + |\piecewiseConstant p\tau(t)|^\gamma \right) \;\!\mathrm{d} x \\ & \leq
\frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau (s)|^2 \;\!\mathrm{d} x
+ \frac12\int_\Omega \mathbb{C} \piecewiseConstant e\tau(s) {:} \piecewiseConstant e\tau(s) \;\!\mathrm{d} x
+ \frac{\tau}\gamma\int_\Omega \left( |\piecewiseConstant e\tau(s)|^\gamma + |\piecewiseConstant p\tau(s)|^\gamma \right) \;\!\mathrm{d} x
\\
&\quad +
\int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\piecewiseConstant \calL \tau (r) }{\piecewiseLinear{\dot u}\tau (r){-} \piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} r + \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant\vartheta\tau(r) \mathbb{B} {:} \piecewiseLinear{\dot e}\tau \;\!\mathrm{d} x \;\!\mathrm{d} r
+\rho \int_\Omega \piecewiseLinear{\dot u}\tau(t) \piecewiseLinear{\dot w }\tau(t) \;\!\mathrm{d} x
\\ & \quad
- \rho \int_\Omega \piecewiseLinear{\dot u}\tau(s) \piecewiseLinear{\dot w }\tau(s) \;\!\mathrm{d} x
- \rho \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseLinear{\dot u}\tau(r-\tau) \partial_t \pwwll{w}\tau (r) \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_{\piecewiseConstant{\mathsf{t}}{\tau}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)}\int_\Omega \piecewiseConstant \sigma\tau (r) {:} \sig{\piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} x \;\!\mathrm{d} r\,.
\end{aligned} \end{equation}
\end{itemize}
\end{lemma}
Below we will only outline the argument for Lemma \ref{lemma:discr-enid}, referring to the proof of the analogous \cite[Prop.\ 4.8]{Rocca-Rossi} for most of the details. Let us only mention in advance that we will make use of the
\emph{discrete by-part integration} formula, holding for all $K_\tau$-tuples $\{\mathfrak{h}_\tau^k \}_{k=0}^{K_\tau} \subset B,\, \{ v_\tau^k \}_{k=0}^{K_\tau} \subset B^*$ in a given Banach space $B$: \begin{equation} \label{discr-by-part} \sum_{k=1}^{K_\tau} \tau \pairing{}{B}{\vtau{k}}{\Dtau{k}{\mathfrak{h}}} = \pairing{}{B}{\vtau{K_\tau}}{\mathfrak{h}_\tau^{K_\tau}} -\pairing{}{B}{\vtau{0}}{\mathfrak{h}_\tau^{0}} -\sum_{k=1}^{K_\tau}\tau\pairing{}{B}{\Dtau{k}{v} }{ \mathfrak{h}_\tau^{k-1}}\,, \end{equation} as well as of the following inequality, satisfied by any \underline{concave}
(differentiable)
function $\psi: \mathrm{dom}(\psi) \to \mathbb{R}$:
\begin{equation}
\label{inequality-concave-functions}
\psi(x) - \psi(y) \leq \psi'(y) (x-y) \qquad \text{for all } x,\, y \in \mathrm{dom}(\psi).
\end{equation} \begin{proof}[Sketch of the proof of Lemma \ref{lemma:discr-enid}] The entropy inequality \eqref{entropy-ineq-discr} follows from testing \eqref{discrete-heat} by $\frac{\varphi_\tau^k}{\tetau{k}}$, for $k \in \{1,\ldots,K_\tau\}$ fixed, with $\varphi \in \mathrm{C}^0 ([0,T]; W^{1,\infty}(\Omega)) \cap H^1 (0,T; L^{6/5}(\Omega)) $ an arbitrary positive test function. Observe that $\frac{\varphi_\tau^k}{\tetau{k}} \in H^1(\Omega)$, as $\tetau k \in H^1(\Omega)$ is bounded away from zero by a strictly positive constant, cf.\ \eqref{discr-strict-pos}. We then obtain \[ \begin{aligned} &
\int_\Omega \left( \gtau{k} + \mathrm{R}\left(\tetau{k-1}, \Dtau{k} p \right) + \left| \Dtau{k} p \right|^2+ \mathbb{D} \Dtau{k} e : \Dtau{k} e -\tetau{k} \mathbb{B} : \Dtau k e \right) \frac{\varphi_\tau^k}{\tetau{k}} \;\!\mathrm{d} x + \int_{\partial\Omega} \htau k \frac{\varphi_\tau^k}{\tetau{k}} \;\!\mathrm{d} S \\ & = \int_\Omega \frac{\tetau k - \tetau{k-1}}{\tau} \frac{\varphi_\tau^k}{\tetau{k}} \;\!\mathrm{d} x + \int_\Omega \kappa(\tetau k) \nabla \tetau{k} \nabla \left( \frac{\varphi_\tau^k}{\tetau{k}} \right) \;\!\mathrm{d} x \\ & \stackrel{(1)}{\leq}
\int_\Omega \frac{\log(\tetau k) - \log(\tetau{k-1})}{\tau} \varphi_\tau^k \;\!\mathrm{d} x + \int_\Omega \left( \frac{\kappa(\tetau k)}{\tetau k} \nabla \tetau{k} \nabla \varphi_\tau^k - \frac{\kappa(\tetau k)}{|\tetau k|^2} |\nabla \tetau{k}|^2 \varphi_\tau^k \right) \;\!\mathrm{d} x \end{aligned} \] where (1) follows from \eqref{inequality-concave-functions} with $\psi=\log$.
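More precisely, \eqref{inequality-concave-functions} with $\psi = \log$, $x = \tetau{k-1}$, and $y = \tetau{k}$ gives, upon multiplication by the nonnegative factor $\varphi_\tau^k/\tau$,
\begin{equation*}
\frac{\tetau k - \tetau{k-1}}{\tau} \, \frac{\varphi_\tau^k}{\tetau{k}} \leq \frac{\log(\tetau k) - \log(\tetau{k-1})}{\tau} \, \varphi_\tau^k \qquad \text{a.e.\ in } \Omega,
\end{equation*}
while the remaining terms in (1) simply arise from the product rule $\nabla \big( \tfrac{\varphi_\tau^k}{\tetau{k}} \big) = \tfrac{1}{\tetau k} \nabla \varphi_\tau^k - \tfrac{\varphi_\tau^k}{|\tetau k|^2} \nabla \tetau k$.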
Then, one sums the above inequality, multiplied by $\tau$, over $k=m,\ldots, j$, for any pair of indices $m,\, j \in \{1, \ldots, K_\tau\}$, and uses the discrete by-part integration formula \eqref{discr-by-part} to deal with the term $ \sum_{k=m}^j \frac{\log(\tetau k) - \log(\tetau{k-1})}{\tau} \varphi_\tau^k\,. $ This leads to \eqref{entropy-ineq-discr}. \par As for the discrete total energy inequality, with the very same calculations developed in the proof of Lemma \ref{l:aprio-M}, one shows that the solution quadruple $(\tetau k, \utau k,\etau k, \ptau k)$ to system \eqref{syst:discr} fulfills the energy inequality \eqref{discr-total-ineq}. Note that the two inequalities, i.e.\ the one for system \eqref{syst:discr} and inequality \eqref{discr-total-ineq} for the truncated version \eqref{syst:approx-discr} of \eqref{syst:discr}, in fact coincide since they neither involve the elliptic operator in the discrete heat equation, nor the thermal expansion terms coupling the heat equation and the momentum balance, which are the terms affected by the truncation procedure. Then, \eqref{total-enid-discr} ensues by adding \eqref{discr-total-ineq} over the index $k =m,\ldots, j$, for any pair of indices $m,\, j \in \{1,\ldots, K_\tau\}$. \par The mechanical energy inequality \eqref{mech-ineq-discr} is derived by subtracting from \eqref{total-enid-discr} the discrete heat equation \eqref{discrete-heat} multiplied by $\tau$ and integrated over $\Omega$. \end{proof}
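\par Let us also point out, for completeness, that \eqref{discr-by-part} is a purely algebraic (Abel-type) rearrangement: since $\tau \Dtau{k}{\mathfrak{h}} = \mathfrak{h}_\tau^k - \mathfrak{h}_\tau^{k-1}$, there holds
\begin{equation*}
\sum_{k=1}^{K_\tau} \pairing{}{B}{\vtau{k}}{\mathfrak{h}_\tau^k - \mathfrak{h}_\tau^{k-1}} = \sum_{k=1}^{K_\tau} \Big( \pairing{}{B}{\vtau{k}}{\mathfrak{h}_\tau^{k}} - \pairing{}{B}{\vtau{k-1}}{\mathfrak{h}_\tau^{k-1}} \Big) - \sum_{k=1}^{K_\tau} \pairing{}{B}{\vtau{k} - \vtau{k-1}}{\mathfrak{h}_\tau^{k-1}},
\end{equation*}
where the first sum on the right-hand side telescopes to $\pairing{}{B}{\vtau{K_\tau}}{\mathfrak{h}_\tau^{K_\tau}} - \pairing{}{B}{\vtau{0}}{\mathfrak{h}_\tau^{0}}$, while $\vtau{k} - \vtau{k-1} = \tau \Dtau{k}{v}$.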
\subsection{A priori estimates} \label{ss:3.2}
The following result collects the a priori estimates on the approximate solutions of system \eqref{syst-interp}. Let us mention in advance that, following in the footsteps of \cite{Rocca-Rossi}, we shall derive from the discrete entropy inequality \eqref{entropy-ineq-discr} a \emph{weak version} of the estimate on the total variation of
$\log(\piecewiseConstant \vartheta \tau)$, cf.\ \eqref{aprio_Varlog} and \eqref{var-notation} below, which will play a crucial role in the compactness arguments for the approximate temperatures $(\piecewiseConstant \vartheta \tau)_\tau$. \begin{proposition} \label{prop:aprio} Assume \eqref{hyp-K}. Then, there exists a constant $S>0$ such that for all $\tau>0$ the following estimates hold \begin{subequations} \label{aprio} \begin{align} & \label{aprioU1}
\|\piecewiseConstant u{\tau}\|_{L^\infty(0,T;H^1(\Omega;\mathbb{R}^d))}
\leq S, \\ & \label{aprioU2}
\|\piecewiseLinear u{\tau}\|_{H^1(0,T; H^1(\Omega;\mathbb{R}^d) ) \cap W^{1,\infty}(0,T; L^2(\Omega;\mathbb{R}^d))}
\leq S, \\
& \label{aprioU3} \|\pwwll u{\tau}
\|_{L^2(0,T; H^1(\Omega;\mathbb{R}^d) ) \cap L^\infty (0,T; L^2(\Omega;\mathbb{R}^d)) \cap W^{1,\gamma/(\gamma-1)}(0,T;W^{1,\gamma}(\Omega;\mathbb{R}^d)^*)} \leq S, \\ & \label{aprioE1}
\|\piecewiseConstant e{\tau}\|_{L^\infty(0,T;L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq S, \\ & \label{aprioE2}
\|\piecewiseLinear e{\tau}\|_{H^1(0,T;L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq S, \\ & \label{aprioE3}
\tau^{1/\gamma}\|\piecewiseConstant e{\tau}\|_{L^\infty(0,T;L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq S, \\ & \label{aprioP1}
\|\piecewiseConstant p{\tau}\|_{L^\infty(0,T;L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))} \leq S, \\ & \label{aprioP2}
\|\piecewiseLinear p{\tau}\|_{H^1(0,T;L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))} \leq S, \\ & \label{aprioP3}
\tau^{1/\gamma}\|\piecewiseConstant p{\tau}\|_{L^\infty(0,T;L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d}))} \leq S, \\
& \label{log-added}
\| \log(\piecewiseConstant \vartheta{\tau}) \|_{L^2 (0,T; H^1(\Omega))}
\leq S, \\ & \label{aprio6-discr}
\|\piecewiseConstant
\vartheta{\tau}\|_{L^2(0,T; H^1(\Omega)) \cap L^{\infty}(0,T;L^{1}(\Omega))} \leq S, \\ & \label{est-temp-added?-bis}
\| (\piecewiseConstant \vartheta\tau)^{(\mu+\alpha)/2} \|_{L^2(0,T; H^1(\Omega))}, \, \| (\piecewiseConstant \vartheta\tau)^{(\mu-\alpha)/2} \|_{L^2(0,T; H^1(\Omega))} \leq C \quad \text{for all } \alpha \in [(2{-}\mu)^+, 1), \\ & \label{aprio_Varlog}
\sup_{\varphi \in W^{1,d+\epsilon}(\Omega), \ \| \varphi \|_{W^{1,d+\epsilon}(\Omega)}\leq 1} \mathrm{Var}(\pairing{}{W^{1,d+\epsilon}(\Omega)}{\log(\piecewiseConstant\vartheta\tau)}{\varphi}; [0,T]) \leq S\ \quad \text{for every } \epsilon>0.
\end{align} Furthermore, if $\kappa$ fulfills \eqref{hyp-K-stronger}, there holds in addition \begin{align} \label{aprio7-discr} &
\sup_{\tau>0} \| \piecewiseLinear \vartheta{\tau} \|_{\mathrm{BV} ([0,T]; W^{1,\infty} (\Omega)^*)} \leq S. \end{align} \end{subequations} \end{proposition} \noindent The starting point in the proof is the discrete total energy inequality \eqref{total-enid-discr}, giving rise to the second of \eqref{aprioU2}, the second of \eqref{aprioU3}, \eqref{aprioE1}, \eqref{aprioE3}, \eqref{aprioP3}, and the second of \eqref{aprio6-discr}: we will detail the related calculations, in particular showing how the terms arising from the external forces $F$ and $g$, and those involving the Dirichlet loading $w$ can be handled. Let us also refer to the upcoming Remark \ref{rmk:diffic-1-test} for more comments. \par
The \emph{dissipative} estimates, i.e.\ the first of \eqref{aprioU2} and \eqref{aprioU3}, \eqref{aprioE2}, and \eqref{aprioP2}, then follow from the discrete mechanical energy inequality \eqref{mech-ineq-discr}. The remaining estimates on the approximate temperature can be performed with the very same arguments as in the proof of \cite[Prop.\ 4.10]{Rocca-Rossi}, to which we shall refer for all details. \begin{proof} \textbf{First a priori estimate:} We write the total energy inequality \eqref{total-enid-discr} for $s=0$ and estimate the terms on its right-hand side: \begin{equation} \label{very-1st-step} \begin{aligned}
\frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau (t)|^2 \;\!\mathrm{d} x + \calE_\tau(\piecewiseConstant\vartheta\tau(t),\piecewiseConstant e\tau(t), \piecewiseConstant p\tau(t)) \leq I_1+I_2+I_3+I_4+I_5+I_6, \end{aligned} \end{equation} with \[ I_1 =
\frac{\rho}2 \int_\Omega | \piecewiseLinear{\dot u}\tau(0)|^2 \;\!\mathrm{d} x
+ \calE_\tau(\piecewiseConstant \vartheta\tau(0),\piecewiseConstant e \tau(0), \piecewiseConstant p\tau(0)) \leq C
\]
thanks to \eqref{initial-teta}, \eqref{complete-approx-e_0}, and \eqref{discr-Cauchy}. To estimate $I_2$ we use the safe load condition \eqref{safe-load}, namely
\begin{equation}
\label{here:safe-loads}
\begin{aligned}
I_2 & = \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{ \piecewiseConstant \calL \tau (r) }{\piecewiseLinear{\dot u}\tau (r){-} \piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} r
\\ & = \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{ \piecewiseConstant F \tau (r) }{\piecewiseLinear{\dot u}\tau (r) {-} \piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} r + \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_{00,\Gamma_\mathrm{Dir}}^{1/2}(\Gamma_\mathrm{Neu};\mathbb{R}^d)}{ \piecewiseConstant g \tau (r) }{\piecewiseLinear{\dot u}\tau (r) {-} \piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} r
\\ & = \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant{\varrho}\tau (r) : \left( \sig{\piecewiseLinear{\dot u}\tau (r)} {-} \sig{\piecewiseLinear {\dot w}\tau(r)} \right) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ &
\stackrel{(1)}{=} \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant{\varrho}\tau (r) : {\piecewiseLinear{\dot e}\tau (r)} \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant{\varrho}\tau (r): {\piecewiseLinear{\dot p}\tau (r)} \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant{\varrho}\tau (r) : \sig{\piecewiseLinear {\dot w}\tau(r) }\;\!\mathrm{d} x \;\!\mathrm{d} r
\\ &
\doteq I_{2,1} + I_{2,2} +I_{2,3}
\end{aligned}
\end{equation}
where $\piecewiseConstant F\tau, \, \piecewiseConstant g\tau,\, \piecewiseConstant \varrho\tau, \, \piecewiseLinear \varrho\tau$ denote the interpolants of the values of $F$, of $g$, and of $\varrho$, respectively. Equality (1) follows from the kinematic admissibility condition $\sig{\piecewiseLinear{\dot u}\tau} = \piecewiseLinear{\dot e}\tau + \piecewiseLinear {\dot p}\tau$.
Observe that, thanks to \eqref{safe-load}, there holds
\begin{equation}
\label{converg-rho-interp}
\| \piecewiseConstant \varrho\tau \|_{L^\infty(0,T;L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} + \| \piecewiseLinear \varrho\tau \|_{W^{1,1}(0,T;L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))}+
\| (\piecewiseConstant \varrho\tau)_\mathrm{D} \|_{L^1(0,T;L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d}))} \leq C\,.
\end{equation}
Now, using the discrete by-part integration formula \eqref{discr-by-part} we see that
\[
\begin{aligned}
I_{2,1} & = - \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseLinear{\dot \varrho}\tau (r) {:} \upiecewiseConstant e{\tau}(r) \;\!\mathrm{d} x \;\!\mathrm{d} r
+\int_\Omega \piecewiseConstant{\varrho}\tau(t) : \piecewiseConstant e\tau(t) \;\!\mathrm{d} x - \int_\Omega \piecewiseConstant{\varrho}\tau(0): {\piecewiseConstant e\tau(0)} \;\!\mathrm{d} x
\\
& \stackrel{(2)}{\leq} \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseLinear{\dot \varrho}\tau (r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \| \upiecewiseConstant e{\tau}(r)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r +\frac{C_\mathbb{C}^1}{16} \| \piecewiseConstant e\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2+ C \| \piecewiseConstant \varrho\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 +C
\end{aligned}
\]
where estimate (2) follows from Young's inequality.
The choice of the coefficient $\tfrac{C_\mathbb{C}^1}{16}$ will allow us to absorb the second term into the left-hand side of \eqref{very-1st-step}, taking into account
the coercivity property \eqref{elast-visc-tensors} of $\mathbb{C}$, which ensures that $\calE_\tau(\piecewiseConstant\vartheta\tau(t),\piecewiseConstant e\tau(t), \piecewiseConstant p\tau(t))$ on the left-hand side of \eqref{very-1st-step} bounds $ \| \piecewiseConstant e\tau(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2$.
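Explicitly, the absorption relies on the weighted Young inequality $ab \leq \epsilon a^2 + \tfrac1{4\epsilon} b^2$ with $\epsilon = \tfrac{C_\mathbb{C}^1}{16}$, namely
\begin{equation*}
\int_\Omega \piecewiseConstant{\varrho}\tau(t) : \piecewiseConstant e\tau(t) \;\!\mathrm{d} x \leq \frac{C_\mathbb{C}^1}{16} \, \| \piecewiseConstant e\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + \frac{4}{C_\mathbb{C}^1} \, \| \piecewiseConstant \varrho\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2\,.
\end{equation*}
Keeping track of the absorbed coefficients, the two contributions of size $\tfrac{C_\mathbb{C}^1}{16}$ (here and in the estimate of $I_{2,2,2}$ below) and the contribution of size $\tfrac{C_\mathbb{C}^1}{8}$ (in the estimate of $I_5$ below) leave the coefficient $\big( \tfrac12 - \tfrac1{16} - \tfrac1{16} - \tfrac18 \big) C_\mathbb{C}^1 = \tfrac14 C_\mathbb{C}^1$ in front of $\| \piecewiseConstant e\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2$, consistently with the final estimate below.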
As for $I_{2,2}$, using the discrete flow rule \eqref{eq-p-interp} and taking into account the expression of $(\piecewiseConstant \sigma\tau)_\mathrm{D}$ we gather
\[
\begin{aligned}
I_{2,2} & = \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega ( \piecewiseConstant{\varrho}\tau (r))_\mathrm{D} {:} \left( \mathbb{D} \piecewiseLinear{\dot e}\tau(r) + \mathbb{C} \piecewiseConstant e\tau(r) + \tau|\piecewiseConstant e\tau(r) |^{\gamma-2} \piecewiseConstant e\tau(r) - \piecewiseConstant\vartheta\tau(r) \mathbb{B} - \piecewiseConstant\zeta\tau(r) - \tau | \piecewiseConstant p\tau(r)|^{\gamma-2} \piecewiseConstant p\tau(r) \right) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \doteq I_{2,2,1} + I_{2,2,2} + I_{2,2,3} + I_{2,2,4} + I_{2,2,5} + I_{2,2,6}
\end{aligned}
\]
and we estimate the above terms as follows. First, for $I_{2,2,1}$ we resort to the by-parts integration formula \eqref{discr-by-part} with the very same calculations as in the estimate of the integral term $I_{2,1}$. Second, we estimate
\[
I_{2,2,2} \leq \frac{C_\mathbb{C}^1}{16} \| \piecewiseConstant e\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2+ C \| \piecewiseConstant \varrho\tau(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2\,.
\] In the estimate of $I_{2,2,3}$ we use H\"older's and Young's inequalities
\[
I_{2,2,3} \leq \frac{\tau \gamma} 2 \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| (\piecewiseConstant \varrho\tau (r))_\mathrm{D} \|_{L^\gamma (\Omega;\bbM_\mathrm{D}^{d\times d})}^\gamma \;\!\mathrm{d} r +\frac{ \tau }{2\gamma} \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant e\tau (r)\|_{L^\gamma (\Omega;\bbM_\mathrm{sym}^{d\times d})}^\gamma \;\!\mathrm{d} r\,.
\]
For $I_{2,2,4}$ we resort to
estimate \eqref{converg-rho-interp} for $ (\piecewiseConstant \varrho\tau)_\mathrm{D}$ in $L^1(0,T; L^\infty (\Omega;\bbM_\mathrm{D}^{d\times d}))$, so that \[
I_{2,2,4} \leq C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| (\piecewiseConstant \varrho\tau (r))_\mathrm{D} \|_{L^\infty (\Omega;\bbM_\mathrm{D}^{d\times d})} \| \piecewiseConstant \vartheta\tau(r) \|_{L^1(\Omega)} \;\!\mathrm{d} r; \]
again, this term will be estimated via Gronwall's inequality,
taking into account that $\calE_\tau(\piecewiseConstant\vartheta\tau(t),\piecewiseConstant e\tau(t), \piecewiseConstant p\tau(t))$ on the left-hand side of \eqref{very-1st-step} bounds $\| \piecewiseConstant\vartheta\tau(t)\|_{L^1(\Omega)}$.
Finally, since $\| \piecewiseConstant \zeta \tau (t) \|_{L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d})} \leq C_R$ thanks to \eqref{bounded-subdiff}, we find that $I_{2,2,5} \leq C_R \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant \varrho\tau (r) \|_{L^1 (\Omega;\bbM_\mathrm{D}^{d\times d})} \;\!\mathrm{d} r \leq C$
by \eqref{converg-rho-interp},
while with H\"older's inequality we have
\[
I_{2,2,6} \leq \frac{\tau \gamma} 2 \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| (\piecewiseConstant \varrho\tau (r))_\mathrm{D} \|_{L^\gamma (\Omega;\bbM_\mathrm{D}^{d\times d})}^\gamma \;\!\mathrm{d} r +\frac{ \tau }{2\gamma} \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant p\tau (r)\|_{L^\gamma (\Omega;\bbM_\mathrm{D}^{d\times d})}^\gamma \;\!\mathrm{d} r\,.
\]
This concludes the estimation of $I_{2,2}$.
Finally, we have
\[
I_{2,3} \leq \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant{\varrho}\tau (r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \| \sig{\piecewiseLinear {\dot w}\tau(r)} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r \leq C
\]
in view of \eqref{converg-rho-interp} and \eqref{converg-interp-w}, which provides a bound for $\piecewiseLinear w \tau$, and we have thus handled all
the terms contributing to $I_2$. We also have \[ \begin{aligned} I_3 = \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \left( \int_\Omega \piecewiseConstant H\tau \;\!\mathrm{d} x +
\int_{\partial\Omega} \piecewiseConstant h\tau \;\!\mathrm{d} S \right) \;\!\mathrm{d} r \leq \| \piecewiseConstant H\tau \|_{L^1(0,T; L^1(\Omega))} + \| \piecewiseConstant h\tau \|_{L^1(0,T; L^2(\partial\Omega))} \leq C,
\end{aligned}
\] due to \eqref{convs-interp-data}; \[ \begin{aligned}
I_4 & = \rho \int_\Omega \piecewiseLinear{\dot u}\tau(t) \piecewiseLinear{\dot w }\tau(t) \;\!\mathrm{d} x - \rho \int_\Omega \dot{u}_0 \piecewiseLinear{\dot w }\tau(0) \;\!\mathrm{d} x
- \rho \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseLinear{\dot u}\tau(r-\tau) \partial_t \pwwll{ w}\tau (r) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \stackrel{(1)}{\leq}
C + \frac{\rho}8 \int_\Omega | \piecewiseLinear{\dot u}\tau (t)|^2 \;\!\mathrm{d} x + 2\rho \int_\Omega | \piecewiseLinear{\dot w}\tau (t)|^2 \;\!\mathrm{d} x + \rho \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)-\tau} \| \piecewiseLinear{\dot u}\tau(s) \|_{L^2(\Omega;\mathbb{R}^d)} \| \partial_t\pwwll{w}\tau(s+\tau) \|_{L^2(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} s, \end{aligned} \]
where (1) follows from \eqref{initial-u},
\eqref{discr-Cauchy}, and \eqref{converg-interp-w},
and we tacitly extend $\piecewiseLinear {\dot u}\tau $ by zero to the interval $(-\tau,0)$. Moreover,
\begin{equation}
\label{here:w} \begin{aligned}
I_ 5 &=
\int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)}\int_\Omega \left( \mathbb{D} \piecewiseLinear{\dot e}\tau (r) + \mathbb{C} \piecewiseConstant e\tau (r) -\piecewiseConstant\vartheta\tau(r) \mathbb{B} \right) {:} \sig{\piecewiseLinear{\dot w}\tau (r) } \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \stackrel{(2)}{\leq} \int_\Omega \mathbb{D} \piecewiseConstant e\tau(t) : \sig{\piecewiseLinear{\dot w}\tau (t) } \;\!\mathrm{d} x - \int_\Omega \mathbb{D} \piecewiseConstant e\tau(0) : \sig{\piecewiseLinear{\dot w}\tau (0) } \;\!\mathrm{d} x - \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \mathbb{D} \piecewiseConstant e\tau(r-\tau): \sig{\partial_t \pwwll w\tau(r)} \;\!\mathrm{d} x \;\!\mathrm{d} r \\
& \quad + C_\mathbb{C}^2 \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant e\tau (r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \;\!\mathrm{d} r + C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant\vartheta\tau(r) \|_{L^1(\Omega)} \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^\infty (\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r
\\
& \stackrel{(3)}{\leq} C + \frac{C_\mathbb{C}^1}8 \int_\Omega |\piecewiseConstant e\tau(t) |^2 \;\!\mathrm{d} x + C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t) } \left( \| \sig{\partial_t \pwwll w\tau(r)}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) }+ \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \right) \| \piecewiseConstant e\tau(r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \;\!\mathrm{d} r \\ & \quad+ C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant\vartheta\tau(r) \|_{L^1(\Omega)} \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^\infty (\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r
\end{aligned}
\end{equation}
where (2) follows from integrating by parts the term $\iint \mathbb{D} \piecewiseLinear{\dot e}\tau {:} \sig{\piecewiseLinear{\dot w}\tau }$ (again, setting $\piecewiseConstant e\tau \equiv 0 $ on $(-\tau, 0)$), and (3) by Young's inequality, with the coefficient $ \frac{C_\mathbb{C}^1}8 $ chosen in such a way as to absorb the second term
on the right-hand side into the left-hand side of \eqref{very-1st-step}.
Collecting all of the above estimates and taking into account the coercivity properties of $\calE_ \tau$, as well as the bounds provided by \eqref{converg-rho-interp} and \eqref{converg-interp-w}, we get \[ \begin{aligned} &
\frac38 {\rho} \int_\Omega | \piecewiseLinear{\dot u}\tau (t)|^2 \;\!\mathrm{d} x + \| \piecewiseConstant\vartheta\tau(t)\|_{L^1(\Omega)} + \frac{1}4 C_{\mathbb{C}}^1 \| \piecewiseConstant e\tau(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + \frac\tau{2\gamma }
\| \piecewiseConstant e\tau(t)\|_{L^\gamma(\Omega;\bbM_\mathrm{sym}^{d\times d})}^\gamma + \frac\tau{2\gamma}
\| \piecewiseConstant p\tau(t)\|_{L^\gamma(\Omega;\bbM_\mathrm{D}^{d\times d})}^\gamma \\ &
\leq C +
\int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseLinear{\dot \varrho}\tau (r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \| \upiecewiseConstant e{\tau}(r)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r +
C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| (\piecewiseConstant \varrho\tau (r))_\mathrm{D} \|_{L^\infty (\Omega;\bbM_\mathrm{D}^{d\times d})} \| \piecewiseConstant \vartheta\tau(r) \|_{L^1(\Omega)} \;\!\mathrm{d} r\\ &
\quad
+ \rho \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)-\tau} \| \partial_t\pwwll{w}\tau(s+\tau) \|_{L^2(\Omega;\mathbb{R}^d)} \| \piecewiseLinear{\dot u}\tau(s) \|_{L^2(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} s \\ & \quad
+ \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t) } \left( \| \sig{\partial_t \pwwll w\tau(r)}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) }+ \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \right) \| \piecewiseConstant e\tau(r) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \;\!\mathrm{d} r \\ & \quad+ C \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \sig{\piecewiseLinear{\dot w}\tau (r) } \|_{L^\infty (\Omega;\bbM_\mathrm{sym}^{d\times d})} \| \piecewiseConstant\vartheta\tau(r) \|_{L^1(\Omega)} \;\!\mathrm{d} r \,. \end{aligned} \]
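The above inequality now has the structure $a(t) \leq C + \int_0^{t} m(r) \, a(r) \;\!\mathrm{d} r$, where $a$ collects (quantities equivalent to) the terms on the left-hand side, and where $m \in L^1(0,T)$ is nonnegative and bounded in $L^1(0,T)$ uniformly w.r.t.\ $\tau$ thanks to \eqref{converg-rho-interp} and \eqref{converg-interp-w}. The version of Gronwall's lemma sufficient for our purposes is thus the standard integral one (valid for, say, nonnegative $a \in L^\infty(0,T)$):
\begin{equation*}
a(t) \leq C + \int_0^t m(r) \, a(r) \;\!\mathrm{d} r \ \text{ for a.a. } t \in (0,T) \qquad \Longrightarrow \qquad a(t) \leq C \exp\bigg( \int_0^T m(r) \;\!\mathrm{d} r \bigg) \ \text{ for a.a. } t \in (0,T)\,.
\end{equation*}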
Applying this form of Gronwall's lemma,
we conclude that \[
\|\piecewiseLinear{\dot u}\tau \|_{L^\infty(0,T; L^2(\Omega;\mathbb{R}^d))} + \calE_\tau(\piecewiseConstant\vartheta\tau(t),\piecewiseConstant e\tau(t), \piecewiseConstant p\tau(t)) \leq C, \] whence the second of \eqref{aprioU2}, the second of \eqref{aprioU3}, \eqref{aprioE1}, \eqref{aprioE3}, \eqref{aprioP3}, and the second of \eqref{aprio6-discr}. \begin{remark} \upshape \label{rmk:diffic-1-test} \noindent The safe load condition \eqref{safe-load} is crucial for handling $ \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)} { \piecewiseConstant \calL \tau }{\piecewiseLinear{\dot u}\tau {-} \piecewiseLinear{\dot w}\tau} \;\!\mathrm{d} r $ on the r.h.s.\ of \eqref{very-1st-step}, cf.\ \eqref{here:safe-loads}. In fact, this term involves the \emph{dissipative} variable $\piecewiseLinear{\dot u}\tau $, whose $L^2(\Omega;\mathbb{R}^d)$-norm, \emph{only}, is estimated by
the r.h.s. of \eqref{very-1st-step}. Condition \eqref{safe-load} then allows us to rewrite the above integral in terms of the functions $\piecewiseConstant \varrho \tau$ and $\piecewiseLinear {\dot e}\tau$,
$\piecewiseLinear {\dot p}\tau$, and the resulting integrals are then treated via integration by parts, leading to quantities that can be controlled by the l.h.s.\ of \eqref{very-1st-step}.
\par
Without \eqref{safe-load}, the term $ \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \pairing{}{H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)}{ \piecewiseConstant \calL \tau }{\piecewiseLinear{\dot u}\tau {-} \piecewiseLinear{\dot w}\tau } \;\!\mathrm{d} r $ could be treated only by supposing that $g \equiv 0$, and that $F\in L^2(Q;\mathbb{R}^d)$.
\par
The estimates for the term $ \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)}\int_\Omega \piecewiseConstant\sigma\tau {:} \sig{\piecewiseLinear{\dot w}\tau } \;\!\mathrm{d} x \;\!\mathrm{d} r $, cf.\ \eqref{here:w}, unveil the role of the condition $w\in L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d))$, which allows us to control the term $ \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)}\int_\Omega
\piecewiseConstant\vartheta\tau \mathbb{B}{:} \sig{\piecewiseLinear{\dot w}\tau } \;\!\mathrm{d} x \;\!\mathrm{d} r $ exploiting the $L^1(\Omega)$-bound provided by the l.h.s.\ of \eqref{very-1st-step}. Alternatively, one could
impose some sort of `compatibility' between the thermal expansion tensor $\mathbb{B} = \mathbb{C} \mathbb{E}$ and the Dirichlet loading $w$, by requiring that $\mathbb{B} {:} \sig{\dot w} \equiv 0$, cf.\
\cite{Roub-PP}.
Analogously, the condition $w \in W^{2,1}(0,T;H^1(\Omega;\mathbb{R}^d))$ has been used in the estimation of the term $I_5$, cf.\ \eqref{here:w}.
\end{remark} \textbf{Second a priori estimate:} we test \eqref{discrete-heat} by $(\tetau{k})^{\alpha-1}$, with $\alpha \in (0,1)$, thus obtaining \begin{equation} \label{ad-est-temp1} \begin{aligned} &
\int_\Omega \left( \gtau{k} + \mathrm{R}\left(\tetau{k-1}, \Dtau{k} p \right) + \left| \Dtau{k} p \right|^2+ \mathbb{D} \Dtau{k} e : \Dtau{k} e \right) (\tetau{k})^{\alpha-1} \;\!\mathrm{d} x \\ & \quad - \int_\Omega \kappa(\tetau k) \nabla \tetau k \nabla (\tetau{k})^{\alpha-1} \;\!\mathrm{d} x + \int_{\partial\Omega} \htau k (\tetau{k})^{\alpha-1} \;\!\mathrm{d} S \\ & \leq \int_\Omega \left( \frac1{\alpha}\frac{(\tetau k)^\alpha - (\tetau {k-1})^\alpha}{\tau} + \tetau{k} \mathbb{B} : \Dtau k e (\tetau{k})^{\alpha-1} \right) \;\!\mathrm{d} x \end{aligned} \end{equation} where we have applied the concavity inequality \eqref{inequality-concave-functions}, with the choice $\psi(\vartheta)=\tfrac1{\alpha}\vartheta^\alpha$, to estimate the term $\frac1\tau \int_\Omega (\tetau{k} {-} \tetau{k-1}) (\tetau k)^{\alpha-1} \;\!\mathrm{d} x $. Therefore, multiplying by $\tau$, summing over the index $k$ and neglecting some positive terms on the left-hand side of \eqref{ad-est-temp1}, we obtain
for all $t \in (0,T]$
\begin{equation}
\label{ad-est-temp2}
\begin{aligned}
&
\frac{4(1-\alpha)}{\alpha^2} \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \kappa(\piecewiseConstant \vartheta{\tau}) |\nabla ((\piecewiseConstant \vartheta{\tau})^{\alpha/2}) |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s + \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega
C_\mathbb{D}^1 | \piecewiseLinear {\dot e}\tau|^2 (\piecewiseConstant \vartheta \tau)^{\alpha-1}
\;\!\mathrm{d} x \;\!\mathrm{d} s \\ & \leq I_1+I_2+I_3,
\end{aligned}
\end{equation}
with
\begin{equation}
\label{est-temp-I1}
I_1 = \frac1\alpha\int_\Omega (\piecewiseConstant \vartheta\tau(t) )^{\alpha} \;\!\mathrm{d} x \leq \frac1\alpha \| \piecewiseConstant \vartheta\tau\|_{L^\infty (0,T; L^1(\Omega))} + C \leq C
\end{equation}
via Young's inequality (using that $\alpha \in (0,1)$) and the second of \eqref{aprio6-discr}; similarly $ I_2= - \frac1\alpha\int_\Omega (\vartheta_0)^{\alpha} \;\!\mathrm{d} x \leq \frac1\alpha \| \vartheta_0\|_{L^1(\Omega)} + C$, whereas
\begin{equation}
\label{est-temp-I3}
I_3= \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant \vartheta\tau(s) \mathbb{B} : \piecewiseLinear{\dot e}\tau(s) ( \piecewiseConstant \vartheta\tau(s) )^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} s \leq
\frac{ C_\mathbb{D}^1 }4 \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \piecewiseLinear {\dot e}\tau|^2 (\piecewiseConstant \vartheta \tau)^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} s + C \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega (\piecewiseConstant \vartheta \tau)^{\alpha+1} \;\!\mathrm{d} x \;\!\mathrm{d} s\,.
\end{equation}
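The splitting in \eqref{est-temp-I3} is again an instance of the weighted Young inequality, upon writing the integrand as the product of $(\piecewiseConstant \vartheta\tau)^{(\alpha-1)/2} | \piecewiseLinear{\dot e}\tau |$ and $(\piecewiseConstant \vartheta\tau)^{(\alpha+1)/2} |\mathbb{B}|$; pointwise, a.e.\ in $Q$,
\begin{equation*}
\piecewiseConstant \vartheta\tau \, \mathbb{B} : \piecewiseLinear{\dot e}\tau \, (\piecewiseConstant \vartheta\tau)^{\alpha-1} \leq \frac{C_\mathbb{D}^1}{4} \, | \piecewiseLinear{\dot e}\tau |^2 \, (\piecewiseConstant \vartheta\tau)^{\alpha-1} + \frac{|\mathbb{B}|^2}{C_\mathbb{D}^1} \, (\piecewiseConstant \vartheta\tau)^{\alpha+1}\,.
\end{equation*}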
All in all, absorbing the first term on the right-hand side of \eqref{est-temp-I3} into the left-hand side of \eqref{ad-est-temp2} and taking into account the growth condition \eqref{hyp-K} on $\kappa$, which, by elementary calculations, yields
that
\begin{equation}
\label{needs-be-observed}
\int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \kappa(\piecewiseConstant \vartheta{\tau}) |\nabla ((\piecewiseConstant \vartheta{\tau})^{\alpha/2}) |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s \stackrel{(1)}{\geq} c \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \piecewiseConstant \vartheta \tau|^{\mu+\alpha-2} | \nabla \piecewiseConstant \vartheta \tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s = c \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \nabla (\piecewiseConstant \vartheta \tau)^{(\mu+\alpha)/2} |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s,
\end{equation}
we conclude from \eqref{ad-est-temp2} that
\begin{equation}
\label{ad-est-temp3}
c \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \nabla (\piecewiseConstant \vartheta \tau)^{(\mu+\alpha)/2} |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s \leq C + C \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega (\piecewiseConstant \vartheta \tau)^{\alpha+1} \;\!\mathrm{d} x \;\!\mathrm{d} s\,.
\end{equation}
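Concerning inequality (1) in \eqref{needs-be-observed}: recalling that \eqref{hyp-K} encodes, in particular, the lower bound $\kappa(\vartheta) \geq c_0 \vartheta^\mu$ for some $c_0>0$, a pointwise computation gives
\begin{equation*}
\kappa(\piecewiseConstant \vartheta{\tau}) \, |\nabla ((\piecewiseConstant \vartheta{\tau})^{\alpha/2}) |^2 = \kappa(\piecewiseConstant \vartheta{\tau}) \, \frac{\alpha^2}{4} \, (\piecewiseConstant \vartheta\tau)^{\alpha-2} | \nabla \piecewiseConstant \vartheta\tau |^2 \geq c_0 \, \frac{\alpha^2}{(\mu+\alpha)^2} \, \big| \nabla \big( (\piecewiseConstant \vartheta \tau)^{(\mu+\alpha)/2} \big) \big|^2 \qquad \text{a.e.\ in } Q,
\end{equation*}
whence \eqref{needs-be-observed} with, e.g., $c = c_0 \, \alpha^2 (\mu+\alpha)^{-2}$ (adjusting the constant in the intermediate step accordingly).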
From now on, the calculations follow exactly the same lines as those developed in \cite[(3.8)--(3.12)]{Rocca-Rossi} for the analogous estimate, in turn based on the ideas from \cite{FPR09}. While referring to \cite{Rocca-Rossi} for all details, let us just give the highlights. Setting $ \piecewiseConstant \xi\tau : = (\piecewiseConstant \vartheta\tau \vee 1)^{(\mu+\alpha)/2}$, we deduce from \eqref{ad-est-temp3} the following inequality
\begin{equation}
\label{ad-est-temp4}
\begin{aligned}
\int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \nabla \piecewiseConstant \xi\tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s & \leq C + C \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant \xi\tau\|_{L^q(\Omega)}^q \;\!\mathrm{d} s
\\ &
\leq C +
\frac12 \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \nabla \piecewiseConstant \xi\tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s +
C
\int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant \xi\tau\|_{L^r(\Omega)}^s \;\!\mathrm{d} s + C \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \| \piecewiseConstant \xi\tau\|_{L^r(\Omega)}^q \;\!\mathrm{d} s,
\end{aligned}
\end{equation}
with $q \in [1,6)$ satisfying $\tfrac{\mu+\alpha}{2} \geq \tfrac{\alpha+1}{q}$.
The very last estimate ensues from
the Gagliardo-Nirenberg inequality, which in fact yields
\begin{equation}
\label{GN}
\| \piecewiseConstant \xi\tau \|_{L^q(\Omega)} \leq C_{\mathrm{GN}} \| \nabla \piecewiseConstant \xi\tau \|_{L^2(\Omega;\mathbb{R}^d)}^\theta \| \piecewiseConstant \xi\tau \|_{L^r(\Omega)}^{1-\theta} + C \| \piecewiseConstant \xi\tau \|_{L^r(\Omega)} \qquad \text{for } \theta \in (0,1) \text{ s.t. } \frac1q= \frac\theta 6 +\frac{1-\theta}r
\end{equation}
with $r \in [1,q]$. The exponent $s$ in \eqref{ad-est-temp4} is then determined by $q$ and $r$ (through the interpolation exponent $\theta$) via \eqref{GN} and Young's inequality.
In \cite{Rocca-Rossi} it is shown that the exponents
$q$ and $r$ can be chosen in such a way as to have $ \| \piecewiseConstant \xi\tau\|_{L^\infty(0,T; L^r(\Omega))} \leq C \| \piecewiseConstant \vartheta\tau\|_{L^\infty (0,T; L^1(\Omega))} + C \leq C $ thanks to the second of \eqref{aprio6-discr}. In particular, one has to impose that $1\leq r \leq \frac2{\mu+\alpha}$.
Inserting this into \eqref{ad-est-temp4} one concludes that $\| \nabla \piecewiseConstant \xi\tau \|_{L^2(0,T; L^2(\Omega;\mathbb{R}^d))} \leq C$. All in all, this argument yields a bound for $ \piecewiseConstant \xi\tau$ in $L^2(0,T; H^1(\Omega)) \cap L^\infty (0,T; L^r(\Omega))$. Since $ \piecewiseConstant \xi\tau = (\piecewiseConstant \vartheta\tau \vee 1)^{(\mu+\alpha)/2}$, we ultimately conclude that
\begin{equation}
\label{est-temp-added?}
\| (\piecewiseConstant \vartheta\tau)^{(\mu+\alpha)/2} \|_{L^2(0,T; H^1(\Omega)) \cap L^\infty (0,T; L^r(\Omega))} \leq C.
\end{equation} Then, from inequality (1) in \eqref{needs-be-observed} we deduce that
$ \int_0^{T} \int_\Omega | \nabla \piecewiseConstant \vartheta\tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s \leq C $
provided that
\begin{equation}
\label{restriction-alpha}
\mu+\alpha-2 \geq 0 \ \text{whence the constraints} \ \alpha>0 \text{ and } \alpha \geq 2-\mu.
\end{equation}
From the constraint $\frac2{\mu+\alpha} \geq 1$ (needed to allow for $r \geq 1$, cf.\ above) combined with the constraint $\frac{\mu+\alpha}2 \geq 1$ from \eqref{restriction-alpha} we deduce that $\frac{\mu+\alpha}2 = 1$. Ultimately, $r=1$ and $\alpha = 2-\mu$.
Thus,
\eqref{est-temp-added?} yields
the first of \eqref{aprio6-discr}.
Interpolating between the two estimates in \eqref{aprio6-discr} via the Gagliardo-Nirenberg inequality gives
\begin{equation}
\label{Gagliardo-Nir}
\| \piecewiseConstant \vartheta\tau \|_{L^h (Q)} \leq C \quad \text{with } h=\frac83 \text{ if } d=3 \text{ and } h=3 \text{ if } d=2.
\end{equation}
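For $d=3$, the exponent $h = \tfrac83$ results from the elementary interpolation $\| \vartheta \|_{L^{8/3}(\Omega)} \leq \| \vartheta \|_{L^1(\Omega)}^{1/4} \| \vartheta \|_{L^6(\Omega)}^{3/4}$ combined with the embedding $H^1(\Omega) \subset L^6(\Omega)$:
\begin{equation*}
\int_0^T \| \piecewiseConstant \vartheta\tau \|_{L^{8/3}(\Omega)}^{8/3} \;\!\mathrm{d} t \leq \| \piecewiseConstant \vartheta\tau \|_{L^\infty(0,T;L^1(\Omega))}^{2/3} \int_0^T \| \piecewiseConstant \vartheta\tau \|_{L^{6}(\Omega)}^{2} \;\!\mathrm{d} t \leq C
\end{equation*}
by \eqref{aprio6-discr}; for $d=2$ one uses instead the two-dimensional Gagliardo-Nirenberg inequality $\| v \|_{L^3(\Omega)} \leq C \| v \|_{H^1(\Omega)}^{2/3} \| v \|_{L^1(\Omega)}^{1/3}$.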
Estimate \eqref{log-added} follows from taking into account that \[
\| \log(\piecewiseConstant \vartheta\tau)\|_{L^2(0,T; H^1(\Omega))} \leq C \left( 1+ \|\piecewiseConstant \vartheta\tau\|_{L^2(0,T; H^1(\Omega))} \right) \] thanks to the strict positivity \eqref{discr-strict-pos}. For later use, let us point out that, in the end, we recover the bound \[
\| (\piecewiseConstant \vartheta\tau)^{(\mu{+}\alpha)/2} \|_{L^2(0,T;H^1(\Omega))} \leq C \] for arbitrary $\alpha \in (0,1)$. For this, it is sufficient to observe that the second term on the right-hand side of \eqref{ad-est-temp3} now fulfills \[
\int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega (\piecewiseConstant \vartheta \tau)^{\alpha+1} \;\!\mathrm{d} x \;\!\mathrm{d} s \leq C \]
thanks to estimate \eqref{Gagliardo-Nir}. Hence, \eqref{ad-est-temp3} yields $ \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \nabla (\piecewiseConstant \vartheta \tau)^{(\mu+\alpha)/2} |^2 \;\!\mathrm{d} x \;\!\mathrm{d} s\leq C$, whence the $L^2(0,T;H^1(\Omega))$-bound for $ (\piecewiseConstant \vartheta\tau)^{(\mu{+}\alpha)/2}$ via the Poincar\'e inequality. Then, taking into account that $ (\piecewiseConstant \vartheta\tau)^{(\mu-\alpha)/2} \leq (\piecewiseConstant \vartheta\tau)^{(\mu+\alpha)/2} +1 $ a.e.\ in $Q$ and using that
\[
\int_\Omega |\nabla (\piecewiseConstant \vartheta\tau)^{(\mu-\alpha)/2}|^2 \;\!\mathrm{d} x = C \int_\Omega ( \piecewiseConstant \vartheta\tau)^{\mu-\alpha - 2} | \nabla \piecewiseConstant \vartheta \tau|^2 \;\!\mathrm{d} x \leq \frac{C}{\bar\vartheta^{2\alpha}}
\int_\Omega ( \piecewiseConstant \vartheta\tau)^{\mu+\alpha - 2} | \nabla \piecewiseConstant \vartheta \tau|^2 \;\!\mathrm{d} x \leq C,
\]
we conclude estimate \eqref{est-temp-added?-bis}.
\par\noindent \textbf{Third a priori estimate:} We consider the mechanical energy inequality \eqref{mech-ineq-discr} written for $s=0$. We estimate the terms on its right-hand side by the very same calculations developed in the \emph{First a priori estimate} for the right-hand side terms of \eqref{total-enid-discr}. We also use that \[
\int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \piecewiseConstant\vartheta\tau \mathbb{B} {:} \piecewiseLinear{\dot e}\tau \;\!\mathrm{d} x \;\!\mathrm{d} r \leq \delta \int_0^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega | \piecewiseLinear{\dot e}\tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} r + C_\delta \| \piecewiseConstant \vartheta \tau \|_{L^2(0,T; L^2(\Omega))}^2 \]
via Young's inequality, with the constant $\delta>0$ chosen in such a way as to absorb the term $\iint | \piecewiseLinear{\dot e}\tau |^2$ into the left-hand side of \eqref{mech-ineq-discr}. Since $ \| \piecewiseConstant \vartheta \tau \|_{L^2(0,T; L^2(\Omega))}^2 \leq C$ by the previously proved \eqref{aprio6-discr}, we ultimately conclude that the terms on the right-hand side of \eqref{mech-ineq-discr} are all bounded, uniformly w.r.t.\ $\tau$. This leads to \eqref{aprioE2} and \eqref{aprioP2}, whence \eqref{aprioP1}, as well as \eqref{aprioU1} and the first of \eqref{aprioU2} by kinematic admissibility. \par\noindent \textbf{Fourth a priori estimate:} It follows from estimates \eqref{aprioE1}, \eqref{aprioE2}, \eqref{aprioE3}, and \eqref{aprio6-discr} that the stresses $(\piecewiseConstant \sigma\tau )_\tau$ are uniformly bounded in $L^{\gamma/(\gamma{-}1)}(Q; \bbM_\mathrm{sym}^{d\times d})$. Therefore, also taking into account \eqref{converg-interp-L}, a comparison argument in the discrete momentum balance \eqref{eq-u-interp} yields that the derivatives $(\partial_t \pwwll u\tau)_\tau$ are bounded in $L^{\gamma/(\gamma{-}1)} (0,T;W^{1,\gamma}(\Omega;\mathbb{R}^d)^*)$, whence the third of \eqref{aprioU3}. \par\noindent \textbf{Fifth a priori estimate:} We will now sketch the argument for \eqref{aprio_Varlog}, referring to the proof of \cite[Prop.\ 4.10]{Rocca-Rossi} for all details. Indeed, let us fix a partition $0 =\sigma_0 < \sigma_1 < \ldots < \sigma_J =T$ of the interval $[0,T]$. From the discrete entropy inequality \eqref{entropy-ineq-discr} written on the interval $[\sigma_{i-1},\sigma_i]$
and for a \emph{constant-in-time} test function
we deduce that
\begin{equation}
\label{ell-i-pos-neg}
\begin{aligned}
&
\int_\Omega (\log(\piecewiseConstant\vartheta\tau (\sigma_{i})) - \log(\piecewiseConstant\vartheta\tau (\sigma_{i-1})) ) \varphi \;\!\mathrm{d} x + \Lambda_{i,\tau} (\varphi) \geq 0 && \text{for all } \varphi \in W_+^{1,d+\epsilon}(\Omega),
\\
&
\int_\Omega ( \log(\piecewiseConstant\vartheta\tau (\sigma_{i-1}))- \log(\piecewiseConstant\vartheta\tau (\sigma_{i}))) \varphi \;\!\mathrm{d} x - \Lambda_{i,\tau} (\varphi) \geq 0 && \text{for all } \varphi \in W_-^{1,d+\epsilon}(\Omega),
\end{aligned}
\end{equation}
where we have used the place-holder
\begin{equation}
\label{pl-ho-varphi} \begin{aligned} \Lambda_{i,\tau} (\varphi) =
& \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \kappa(\piecewiseConstant\vartheta\tau) \nabla \log(\piecewiseConstant\vartheta\tau) \nabla \varphi \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \mathbb{B} {:} \piecewiseLinear{\dot e}\tau \varphi \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \quad - \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \kappa(\piecewiseConstant\vartheta\tau) \frac{\varphi}{\piecewiseConstant\vartheta\tau} \nabla (\log(\piecewiseConstant\vartheta\tau)) \nabla \piecewiseConstant\vartheta\tau \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_{\partial\Omega} \piecewiseConstant h{\tau} \frac{\varphi}{\piecewiseConstant\vartheta\tau} \;\!\mathrm{d} S \;\!\mathrm{d} r
\\
&\quad - \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \left(\piecewiseConstant H\tau+ \mathrm{R}(\upiecewiseConstant \vartheta\tau, \piecewiseLinear {\dot p}\tau) + | \piecewiseLinear {\dot p}\tau|^2 + \mathbb{D} \piecewiseLinear{\dot e}\tau{:} \piecewiseLinear{\dot e}\tau \right)
\frac{\varphi}{\piecewiseConstant\vartheta\tau} \;\!\mathrm{d} x \;\!\mathrm{d} r. \end{aligned} \end{equation} Arguing as in the proof of \cite[Prop.\ 4.10]{Rocca-Rossi}, from \eqref{ell-i-pos-neg} we deduce that \begin{equation} \label{genialata} \begin{aligned} &
\sum_{i=1}^J \left| \pairing{}{W^{1,d+\epsilon}(\Omega)}{\log(\piecewiseConstant\vartheta\tau (\sigma_{i})) - \log(\piecewiseConstant\vartheta\tau (\sigma_{i-1})) }{\varphi} \right| \\ & \leq \sum_{i=1}^J
\int_\Omega (\log(\piecewiseConstant\vartheta\tau (\sigma_{i})) - \log(\piecewiseConstant\vartheta\tau (\sigma_{i-1}))) |\varphi| \;\!\mathrm{d} x
+ \Lambda_{i,\tau} (|\varphi|) + |\Lambda_{i,\tau} (\varphi^+)| + |\Lambda_{i,\tau} (\varphi^-)| \end{aligned} \end{equation} for all $\varphi \in W^{1,d+\epsilon}(\Omega)$. Then, we infer the bound \eqref{aprio_Varlog} by
estimating the terms on the right-hand side of \eqref{genialata}, uniformly w.r.t.\ $\varphi$.
In particular, to handle the second, fourth, and fifth integral terms arising from $\Lambda_{i,\tau} (\varphi)$ (cf.\
\eqref{pl-ho-varphi}), we use the previously proved estimates \eqref{aprioE2}, \eqref{aprioP2},
as well as the bounds provided by \eqref{converg-interp-g} and \eqref{converg-interp-h} on $\piecewiseConstant H\tau$ and $\piecewiseConstant h \tau$, cf.\ \cite{Rocca-Rossi} for all details. Let us only comment on the estimates for the first and third integral terms
on the r.h.s.\ of \eqref{pl-ho-varphi}.
We remark that for every $\varphi \in W^{1,d+\epsilon}(\Omega)$ we have
\begin{equation}
\label{est-kappa-teta-log}
\begin{aligned}
& \left| \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \kappa(\piecewiseConstant\vartheta\tau) \nabla \log(\piecewiseConstant\vartheta\tau) \nabla \varphi \;\!\mathrm{d} x \;\!\mathrm{d} r \right| \\
& \stackrel{(1)}{\leq} C \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \left(| \piecewiseConstant\vartheta\tau |^{\mu-1} | \nabla \piecewiseConstant\vartheta\tau | + \frac1{\piecewiseConstant\vartheta\tau} | \nabla \piecewiseConstant\vartheta\tau | \right) |\nabla \varphi| \;\!\mathrm{d} x \;\!\mathrm{d} r
\\
& \stackrel{(2)}{\leq} C \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega |\left( \piecewiseConstant\vartheta\tau \right)^{(\mu+\alpha-2)/2} \nabla \piecewiseConstant\vartheta\tau | \left( \piecewiseConstant\vartheta\tau \right)^{(\mu-\alpha)/2} |\nabla \varphi| + \frac1{\bar\vartheta} | \nabla \piecewiseConstant\vartheta\tau | |\nabla \varphi| \;\!\mathrm{d} x \;\!\mathrm{d} r
\\
& \stackrel{(3)}{\leq} C \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \|\left( \piecewiseConstant\vartheta\tau \right)^{(\mu+\alpha-2)/2} \nabla \piecewiseConstant\vartheta\tau \|_{L^2(\Omega;\mathbb{R}^d)} \, \| \left( \piecewiseConstant\vartheta\tau \right)^{(\mu-\alpha)/2} \|_{L^{d^\star}(\Omega)} \| \nabla \varphi \|_{L^{d+\epsilon}(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} r \\ & \quad + C \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \| \nabla \piecewiseConstant\vartheta\tau \|_{L^2(\Omega;\mathbb{R}^d)} \|\nabla \varphi\|_{L^2(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} r \end{aligned}
\end{equation}
where (1) follows from the growth condition \eqref{hyp-K} on $\kappa$, (2) from the discrete positivity property \eqref{discr-strict-pos}, and (3) from H\"older's inequality, in view of the continuous embedding
\begin{equation}
\label{Sobolev-embedding}
H^1(\Omega) \subset L^{d^\star}(\Omega) \quad \text{with } d^\star \begin{cases}
\in [1,\infty) & \text{if } d=2,
\\
= 6 & \text{if } d=3.
\end{cases}
\end{equation}
Observe that, as a consequence, only the regularity $\varphi \in W^{1,d+\epsilon}(\Omega)$, with $\epsilon>0$, is needed.
Then, we use estimates \eqref{aprio6-discr} and \eqref{est-temp-added?-bis} to bound the terms on the r.h.s.\ of \eqref{est-kappa-teta-log}. As for the third term on the r.h.s.\ of \eqref{pl-ho-varphi}, we use that
\[
\begin{aligned}
&
\left| \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \kappa(\piecewiseConstant\vartheta\tau) \frac{\varphi}{\piecewiseConstant\vartheta\tau} \nabla (\log(\piecewiseConstant\vartheta\tau)) \nabla \piecewiseConstant\vartheta\tau \;\!\mathrm{d} x \;\!\mathrm{d} r \right| \\
& \stackrel{(4)}{\leq} C \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega \left(| \piecewiseConstant\vartheta\tau |^{\mu-2} | \nabla \piecewiseConstant\vartheta\tau |^2 + \frac1{\bar\vartheta^2} | \nabla \piecewiseConstant\vartheta\tau |^2\right) | \varphi| \;\!\mathrm{d} x \;\!\mathrm{d} r
\\
& \stackrel{(5)}{\leq} C \| \varphi \|_{L^\infty(\Omega)} \int_{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_{i-1})}^{\piecewiseConstant{\mathsf{t}}{\tau}(\sigma_i)} \int_\Omega | \piecewiseConstant\vartheta\tau |^{\mu+\alpha-2} | \nabla \piecewiseConstant\vartheta\tau |^2 + | \nabla \piecewiseConstant\vartheta\tau |^2 \;\!\mathrm{d} x \;\!\mathrm{d} r, \end{aligned}
\]
with (4) due to \eqref{hyp-K} and the positivity property \eqref{discr-strict-pos}, and (5) following from the estimate $| \piecewiseConstant\vartheta\tau |^{\mu-2} \leq | \piecewiseConstant\vartheta\tau |^{\mu+\alpha-2}+1 $, combined with the fact that $\varphi \in W^{1,d+\epsilon}(\Omega) \subset L^\infty(\Omega)$. Again, we conclude via the bounds \eqref{aprio6-discr} and \eqref{est-temp-added?-bis}. \par\noindent \textbf{Sixth a priori estimate:} Under the stronger condition \eqref{hyp-K-stronger}, we multiply the discrete heat equation \eqref{discrete-heat} by a function $\varphi \in W^{1,\infty}(\Omega)$. Integrating in space we thus obtain for almost all $t\in (0,T)$ \begin{equation} \label{analog-7th-est}
\left|\int_\Omega \piecewiseLinear{\dot \vartheta}\tau (t) \varphi \;\!\mathrm{d} x \right|
\leq \left| \int_\Omega \kappa(\piecewiseConstant \vartheta\tau(t)) \nabla \piecewiseConstant \vartheta\tau(t) \nabla \varphi \;\!\mathrm{d} x \right| + \left| \int_\Omega \piecewiseConstant J{\tau}(t) \varphi \;\!\mathrm{d} x \right| + \left| \int_{\partial\Omega} \piecewiseConstant h\tau(t) \varphi \;\!\mathrm{d} S\right| \doteq I_1+I_2+I_3\,, \end{equation}
where we have used the place-holder $ \piecewiseConstant J{\tau}(t): = \piecewiseConstant H\tau(t) + \mathrm{R}\left(\upiecewiseConstant\vartheta\tau(t), \piecewiseLinear {\dot p}\tau(t) \right) + \left| \piecewiseLinear {\dot p}\tau(t) \right|^2+ \mathbb{D} \piecewiseLinear {\dot e}\tau(t) : \piecewiseLinear {\dot e}\tau(t) -\piecewiseConstant\vartheta\tau(t) \mathbb{B} : \piecewiseLinear {\dot e}\tau(t) $. Now, in view of \eqref{converg-interp-g} for $\piecewiseConstant H\tau$ and of estimates
\eqref{aprioE2}, \eqref{aprioP2}, and \eqref{aprio6-discr}, it is clear that
\[
I_2\leq \mathcal{J}_\tau (t) \| \varphi\|_{L^\infty(\Omega)} \quad \text{with $\mathcal{J}_\tau (t): = \| \piecewiseConstant J\tau(t)\|_{L^1(\Omega)} $.}
\] Observe that the family $(\mathcal{J}_\tau)_\tau$ is uniformly bounded in $L^1(0,T)$. The third term on the r.h.s.\ of \eqref{analog-7th-est} is analogously bounded thanks to \eqref{converg-interp-h}. As for the first one, we use that \[
I_1\leq C\|(\piecewiseConstant \vartheta\tau)^{(\mu-\alpha+2)/2} \|_{L^2(\Omega)} \| (\piecewiseConstant \vartheta\tau)^{(\mu+\alpha-2)/2} \nabla \piecewiseConstant \vartheta\tau\|_{L^2(\Omega;\mathbb{R}^d)} \|\nabla \varphi\|_{L^\infty (\Omega;\mathbb{R}^d)}
+ C \| \nabla \piecewiseConstant \vartheta\tau\|_{L^2(\Omega;\mathbb{R}^d)} \|\nabla \varphi\|_{L^2 (\Omega;\mathbb{R}^d)}, \] based on the growth condition \eqref{hyp-K} on $\kappa$. By \eqref{hyp-K-stronger} we have
$\mu <5/3$ if $d=3$, and $\mu<2$ if $d=2$. Since $\alpha$ can be chosen arbitrarily close to $1$, from \eqref{Gagliardo-Nir}
we gather that $(\piecewiseConstant \vartheta\tau)^{(\mu-\alpha+2)/2}$ is bounded in $L^2(Q)$: indeed, \eqref{Gagliardo-Nir} provides a bound for $\piecewiseConstant\vartheta\tau$ in $L^h(Q)$ for all $h<8/3$ if $d=3$, and for all $h<3$ if $d=2$, while $\mu-\alpha+2<8/3$ (respectively, $\mu-\alpha+2<3$) as soon as $\alpha>\mu-\frac23$ (respectively, $\alpha>\mu-1$). Therefore, also taking into account \eqref{est-temp-added?} and \eqref{aprio6-discr}, we infer that $I_1\leq \mathcal{K}_\tau(t) \| \varphi\|_{W^{1,\infty}(\Omega)}$ with $(\mathcal{K}_\tau)_\tau$ bounded in $L^1(0,T)$. Hence,
estimate \eqref{aprio7-discr}
follows. \end{proof}
\subsection{Passage to the limit} \label{ss:3.3} In this section, we conclude the proof of Theorems \ref{mainth:1} \& \ref{mainth:2}.
First of all, from the a priori estimates obtained in Proposition \ref{prop:aprio} we deduce the convergence (along a subsequence, in suitable topologies) of the approximate solutions, to a quadruple $(\vartheta,u,e,p)$. In the proofs of Thm.\ \ref{mainth:1} (\ref{mainth:2}, respectively), we then proceed to show that $(\vartheta,u,e,p)$ is an \emph{entropic} (a \emph{weak energy}, respectively) solution to (the Cauchy problem for) system (\ref{plast-PDE}, \ref{bc}), by passing to the limit in the approximate system \eqref{syst-interp}, and in the discrete entropy and total energy inequalities. Let us mention that, in order to recover the kinematic admissibility, the weak momentum balance, and the plastic flow rule, we will follow an approach different from that developed in \cite{DMSca14QEPP}. The latter paper exploited a reformulation of the (discrete) momentum balance and flow rule in terms of a mechanical energy balance, and a variational inequality, based on the results from \cite{DMDSMo06QEPL}. Let us point out that
it would be possible to repeat this argument in the present setting as well. Nonetheless, the limit passage procedure that we will develop in Step 2 of the proof of Thm.\ \ref{mainth:1} will lead us to conclude, via careful $\limsup$-arguments, additional strong convergences that will allow us to take the limit of the quadratic terms on the r.h.s.\ of the heat equation \eqref{heat}. \par Prior to our compactness statement for the sequence of approximate solutions, we recall here a compactness result, akin to the Helly Theorem and tailored to the bounded variation type estimate \eqref{aprio_Varlog}, which will have a pivotal role in establishing the convergence properties for (a subsequence of) the approximate temperatures. Theorem \ref{th:mie-theil} below was proved in \cite{Rocca-Rossi}, cf.\ Thm.\ A.5 therein, with the exception of convergence \eqref{enhSav}. We will give its proof in the Appendix, and in doing so we will briefly recapitulate the argument for \cite[Thm.\ A.5]{Rocca-Rossi}. Since in the proof we shall resort to a compactness result from the theory of Young measures, also invoked in the proof of Thm.\ \ref{mainth:3} ahead, we shall recall this result, together with some basics of the theory, in the Appendix. \begin{theorem} \label{th:mie-theil} Let $\mathbf{V}$ and $\mathbf{Y}$ be two (separable) reflexive Banach spaces such that $\mathbf{V} \subset \mathbf{Y}^*$ continuously. Let
$(\ell_k)_k \subset L^p (0,T;\mathbf{V}) \cap \mathrm{B} ([0,T];\mathbf{Y}^*)$ be bounded in $L^p (0,T;\mathbf{V}) $ and suppose in addition that \begin{align} \label{ell-n-0} & \text{$(\ell_k(0))_k\subset \mathbf{Y}^*$ is bounded}, \\ & \label{BV-bound} \exists\, C>0 \ \ \forall\, \varphi \in \overline{B}_{1,\mathbf{Y}}(0) \ \ \forall\, k \in \mathbb{N}\, : \quad
\mathrm{Var}(\pairing{}{\mathbf{Y}}{\ell_k}{ \varphi}; [0,T] ) \leq C, \end{align} where, for given $\ell \in \mathrm{B}([0,T];\mathbf{Y}^*)$ and $\varphi \in \mathbf{Y}$ we set \begin{equation} \label{var-notation} \begin{aligned} \mathrm{Var}(\pairing{}{\mathbf{Y}}{\ell}{ \varphi}; [0,T] ) : = \sup \{ \sum_{i=1}^J
\left |\pairing{}{\mathbf{Y}}{\ell(\sigma_{i})}{ \varphi} - \pairing{}{\mathbf{Y}}{\ell(\sigma_{i-1})}{ \varphi} \right| \, : \ 0 =\sigma_0 < \sigma_1 < \ldots < \sigma_J =T \} \,. \end{aligned} \end{equation} \par Then, there exist a (not relabeled) subsequence
$(\ell_{k})_k$
and a function $\ell \in L^p (0,T;\mathbf{V}) \cap L^\infty (0,T; \mathbf{Y}^*) $
such that as $k\to \infty$
\begin{align}
\label{weak-LpB}
&
\ell_{k} \weaksto \ell \quad \text{ in } L^p (0,T;\mathbf{V}) \cap L^\infty (0,T;\mathbf{Y}^*),
\\ \label{weak-ptw-B} & \ell_{k}(t) \rightharpoonup \ell(t) \quad \text{ in } \mathbf{V}\quad \text{for a.a. } t \in (0,T).
\end{align} \par Furthermore, for almost all $t \in (0,T)$ and any sequence $(t_k)_k \subset [0,T]$ with $t_k \to t$ there holds \begin{equation} \label{enhSav}
\ell_{k}(t_k) \rightharpoonup \ell(t) \qquad \text{ in $\mathbf{Y}^*$. }
\end{equation} \end{theorem}
\par We are now in the position to prove the following compactness result where, in particular, we show that, along a subsequence, the sequences $(\piecewiseConstant\vartheta\tau)_\tau$ and $(\upiecewiseConstant\vartheta\tau)_\tau$ converge, in suitable topologies, to the \emph{same} limit $\vartheta$. This is not a trivial consequence of the obtained a priori estimates, as no bound on the total variation of the functions $\piecewiseConstant\vartheta\tau$ is available. Rather, it stems from the `generalized $\mathrm{BV}$' estimate \eqref{aprio_Varlog}, via the convergence property \eqref{enhSav} from Theorem \ref{th:mie-theil}. \begin{lemma}[Compactness] \label{l:compactness} Assume \eqref{hyp-K}. Then, for any sequence $\tau_k \downarrow 0$ there exist a (not relabeled) subsequence and a quintuple $(\vartheta, u, e, p,\zeta)$ such that the following convergences hold \begin{subequations} \label{convergences-cv} \begin{align} & \label{cvU1}
\piecewiseLinear u{\tau_k} \weaksto u && \text{ in $H^1(0,T; H^1(\Omega;\mathbb{R}^d)) \cap W^{1,\infty}(0,T;L^2(\Omega;\mathbb{R}^d))$,} \\ & \label{cvU2}
\piecewiseConstant u{\tau_k},\, \upiecewiseConstant u{\tau_k} \to u &&
\text{ in $L^\infty(0,T;H^{1-\epsilon}(\Omega;\mathbb{R}^d)) $ for all $\epsilon \in (0,1]$,}
\\
&
\label{cvU3}
\piecewiseLinear u{\tau_k} \to u && \text{ in $\mathrm{C}^0([0,T];H^{1-\epsilon}(\Omega;\mathbb{R}^d)) $ for all $\epsilon \in (0,1]$,}
\\
&
\label{cvU3-bis} \pwwll{u}{\tau_k} \to \dot {u} && \text{ in $\mathrm{C}_{\mathrm{weak}}^0([0,T];L^2(\Omega;\mathbb{R}^d)) \cap L^2(0,T; H^{1-\epsilon}(\Omega;\mathbb{R}^d))$ for all $\epsilon \in (0,1]$,} \\ & \label{cvU4} \partial_t \pwwll {u}{\tau_k} \rightharpoonup \ddot u &&
\text{ in $L^{\gamma/(\gamma-1)}(0,T;W^{1,\gamma}(\Omega;\mathbb{R}^d)^*) $,}
\\ & \label{cvE1} \piecewiseConstant e{\tau_k} \weaksto e && \text{ in $L^\infty(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$,} \\ & \label{cvE2} \piecewiseLinear e{\tau_k} \rightharpoonup e && \text{ in $H^1(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$,} \\ & \label{cvE3-bis} \piecewiseLinear e{\tau_k} \to e && \text{ in $\rmC_{\mathrm{weak}}^0([0,T]; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$,} \\ & \label{cvE3}
\tau|\piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} \to 0 && \text{ in $L^{\infty}(0,T; L^{\gamma/(\gamma{-}1)}(\Omega; \bbM_\mathrm{sym}^{d\times d}))$,}
\\ & \label{cvP1} \piecewiseConstant p{\tau_k} \weaksto p && \text{ in $L^\infty(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$,} \\ & \label{cvP2} \piecewiseLinear p{\tau_k} \rightharpoonup p && \text{ in $H^1(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$,} \\ & \label{cvP3-bis} \piecewiseLinear p{\tau_k} \to p && \text{ in $\rmC_{\mathrm{weak}}^0([0,T]; L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))$,} \\ & \label{cvP3}
\tau|\piecewiseConstant p{\tau_k}|^{\gamma-2} \piecewiseConstant p{\tau_k} \to 0 && \text{ in $L^{\infty}(0,T; L^{\gamma/(\gamma{-}1)}(\Omega; \bbM_\mathrm{D}^{d\times d}))$,} \\
&
\label{cvT1}
\piecewiseConstant \vartheta{\tau_k}\rightharpoonup \vartheta && \text{ in $L^2 (0,T; H^1(\Omega))$},
\\ & \label{cvT2} \log(\piecewiseConstant\vartheta{\tau_k}) \weaksto \log(\vartheta) && \text{ in } L^2 (0,T; H^1(\Omega)) \cap L^\infty (0,T; W^{1,d+\epsilon}(\Omega)^*) \quad \text{for every } \epsilon>0, \\ & \label{cvT4}
\log(\piecewiseConstant\vartheta{\tau_k}(t)) \rightharpoonup \log(\vartheta(t)) && \text{ in $H^1(\Omega)$ for almost all $t \in (0,T)$,} \\ & \label{cvuT4}
\log(\upiecewiseConstant\vartheta{\tau_k}(t)) \rightharpoonup \log(\vartheta(t)) && \text{ in $H^1(\Omega)$ for almost all $t \in (0,T)$,} \\
& \label{cvT5} \piecewiseConstant\vartheta{\tau_k}\to \vartheta && \text{ in $L^h(Q)$ for all $h\in [1,8/3)$ if $d=3$ and all $h\in [1, 3)$ if $d=2$,} \\ & \label{cvuT5} \upiecewiseConstant\vartheta{\tau_k}\to \vartheta && \text{ in $L^h(Q)$ for all $h\in [1,8/3)$ if $d=3$ and all $h\in [1, 3)$ if $d=2$,} \\ & \label{cvT8} (\piecewiseConstant\vartheta{\tau_k})^{(\mu+\alpha)/2}\rightharpoonup \vartheta^{(\mu+\alpha)/2} && \text{ in $L^2(0,T;H^1(\Omega))$ for every $\alpha \in [(2{-}\mu)^+,1)$, } \\ & \label{cvT9} (\piecewiseConstant\vartheta{\tau_k})^{(\mu-\alpha)/2}\rightharpoonup \vartheta^{(\mu-\alpha)/2} && \text{ in $L^2(0,T;H^1(\Omega))$ for every $\alpha \in [(2{-}\mu)^+,1)$, } \\ & \label{cvZ} \piecewiseConstant\zeta{\tau_k} \weaksto \zeta && \text{ in $L^\infty (Q;\bbM_\mathrm{D}^{d \times d})$}. \end{align} The triple $(u,e,p)$ complies with the kinematic admissibility condition \eqref{kin-admis}, while $\vartheta$ also fulfills \begin{equation} \label{additional-teta} \vartheta \in L^\infty (0,T; L^1(\Omega)) \text{ and }
\vartheta \geq \bar{\vartheta} \text{ a.e.\ in $Q$}
\end{equation}
with $\bar{\vartheta}$ from \eqref{discr-strict-pos}. \par Furthermore, under condition \eqref{hyp-K-stronger} we also have $\vartheta \in \mathrm{BV} ([0,T]; W^{1,\infty} (\Omega)^*)$, and \begin{align} & \label{cvT6} \piecewiseConstant\vartheta{\tau_k} \to \vartheta && \text{ in } L^2 (0,T; Y) \text{ for all $Y$ such that $H^1(\Omega) \Subset Y \subset W^{1,\infty} (\Omega)^*$}, \\ & \label{cvT7} \piecewiseConstant\vartheta{\tau_k}(t) \weaksto \vartheta(t) && \text{ in } W^{1,\infty} (\Omega)^* \text{ for all } t \in [0,T]. \end{align} \end{subequations} \end{lemma} Let us mention beforehand that, in the proof of Thm.\ \ref{mainth:1} we will obtain further convergence properties for the sequences of approximate solutions,
cf.\ also Remark \ref{rmk:energy-conv} ahead. \begin{proof}[Sketch of the proof] Convergences \eqref{cvU1}--\eqref{cvU3}, \eqref{cvE1}--\eqref{cvE3-bis}, \eqref{cvP1}--\eqref{cvP3-bis}, and \eqref{cvZ} follow from the a priori estimates in Proposition \ref{prop:aprio} via well known weak and strong compactness results (cf.\ e.g.\ \cite{Simon87}), also taking into account that \begin{equation} \label{stability-e-k}
\| \piecewiseConstant e{\tau_k} {-} \piecewiseLinear e{\tau_k}\|_{L^\infty (0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq C \tau_k^{1/2} \to 0 \qquad \text{ as $k\to\infty$}, \end{equation} and the analogous relations involving $\piecewiseConstant p{\tau_k},\, \piecewiseLinear p{\tau_k}$, etc.
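For the reader's convenience, let us recall the elementary computation behind \eqref{stability-e-k}: for $t \in ((n{-}1)\tau_k, n\tau_k]$ there holds $\piecewiseConstant e{\tau_k}(t) - \piecewiseLinear e{\tau_k}(t) = (n\tau_k - t)\, \piecewiseLinear{\dot e}{\tau_k}(t)$, and $\piecewiseLinear{\dot e}{\tau_k}$ is constant on each such interval. Hence
\[
\| \piecewiseConstant e{\tau_k}(t) - \piecewiseLinear e{\tau_k}(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \leq \tau_k \, \| \piecewiseLinear{\dot e}{\tau_k}(t) \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} = \tau_k^{1/2} \| \piecewiseLinear{\dot e}{\tau_k} \|_{L^2((n-1)\tau_k, n\tau_k; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq C \tau_k^{1/2},
\]
where the last bound follows from estimate \eqref{aprioE2}.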
Passing to the limit as $k\to\infty$ in the discrete kinematic admissibility condition $(\piecewiseConstant u{\tau_k}(t), \piecewiseConstant e{\tau_k}(t), \piecewiseConstant p{\tau_k}(t)) \in {\mathcal A}(\piecewiseConstant w{\tau_k}(t))$ for a.a.\ $t \in (0,T)$, also in view of convergence \eqref{converg-interp-w} for $\piecewiseConstant w{\tau_k}$, we conclude that the triple $(u,e,p)$ is admissible. In view of estimate \eqref{aprioU3} for $(\pwwll u{\tau_k})_k$, again by the Aubin-Lions type compactness results from \cite{Simon87} we conclude that there exists $v$ such that $\pwwll u{\tau_k} \to v$ in $ L^2 (0,T;H^{1-\epsilon}(\Omega;\mathbb{R}^d)) \cap \rmC_{\mathrm{weak}}^0 ([0,T]; L^2(\Omega;\mathbb{R}^d))$ for every $\epsilon \in (0,1]$. Taking into account that \begin{equation} \label{stability}
\| \pwwll u{\tau_k} - \piecewiseLinear {\dot u}{\tau_k}\|_{L^\infty
(0,T; W^{1,\gamma}(\Omega;\mathbb{R}^d)^* )}\leq \tau_k^{1/\gamma} \| \partial_t \pwwll
{u}{\tau_k} \|_{L^{\gamma/(\gamma{-}1)} (0,T;W^{1,\gamma}(\Omega;\mathbb{R}^d)^*)} \leq S {\tau_k}^{1/\gamma},
\end{equation}
we conclude that $v = \dot u$, whence \eqref{cvU3-bis}.
It then follows from \eqref{stability} that \begin{equation} \label{dotu-quoted-later} \piecewiseLinear {\dot u}{\tau_k}(t) \rightharpoonup \dot{u}(t) \qquad \text{in }L^2(\Omega;\mathbb{R}^d) \quad \text{for every } t \in [0,T]. \end{equation}
Moreover, thanks to \eqref{stability} we
identify the weak limit of $\partial_t \pwwll {u}{\tau_k}$ in $L^{\gamma/(\gamma-1)}(0,T;W^{1,\gamma}(\Omega;\mathbb{R}^d)^*) $ with $\ddot u$, and \eqref{cvU4} ensues. In order to prove \eqref{cvE3} (an analogous argument yields \eqref{cvP3}), it is sufficient to observe that
\[
\| \tau |\piecewiseConstant{e}{\tau_k}|^{\gamma-2} \piecewiseConstant{e}{\tau_k} \|_{L^\infty(0,T; L^{\gamma/(\gamma{-}1)}(\Omega; \bbM_\mathrm{sym}^{d\times d}))} = \tau^{1/\gamma} \left( \tau^{1/\gamma} \| \piecewiseConstant{e}{\tau_k} \|_{L^\infty(0,T; L^{\gamma}(\Omega; \bbM_\mathrm{sym}^{d\times d}))} \right)^{\gamma{-}1} \to 0
\]
thanks to estimate \eqref{aprioE3}.
\par
For the convergences of the functions $(\piecewiseConstant \vartheta{\tau_k})_k$, we briefly recap the arguments from the proof of \cite[Lemma 5.1]{Rocca-Rossi}. On account of estimates \eqref{log-added} and \eqref{aprio_Varlog} we can apply the compactness
Theorem \ref{th:mie-theil} to the functions $\ell_k = \log(\piecewiseConstant\vartheta{\tau_k})$, in the setting of the spaces $\mathbf{V}= H^1(\Omega)$, $\mathbf{Y}= W^{1,d+\epsilon}(\Omega)$, and with $p=2$. Hence we conclude that, up to a subsequence, the functions $\log(\piecewiseConstant\vartheta{\tau_k})$ weakly$^*$ converge to some $\lambda $ in $ L^2 (0,T; H^1(\Omega)) \cap L^\infty(0,T; W^{1,d+\epsilon}(\Omega)^*)$ for all $\epsilon>0$, i.e.\ \eqref{cvT2}, and that $\log(\piecewiseConstant\vartheta{\tau_k}(t)) \rightharpoonup \lambda(t)$ in $H^1(\Omega)$ for almost all $t\in (0,T)$, i.e.\ \eqref{cvT4}. Therefore, up to a further subsequence we have $\log(\piecewiseConstant\vartheta{\tau_k}) \to \lambda$ almost everywhere in $Q$. Thus, $ \piecewiseConstant\vartheta{\tau_k} \to \vartheta:= e^{\lambda} $ almost everywhere in $Q$. Convergences \eqref{cvT1} and \eqref{cvT5} then follow from estimates \eqref{aprio6-discr} and \eqref{Gagliardo-Nir}, respectively: for \eqref{cvT5}, in particular, the a.e.\ convergence combines with the bound in $L^{h'}(Q)$, $h<h'$, provided by \eqref{Gagliardo-Nir}, so that Vitali's convergence theorem applies.
An immediate lower semicontinuity argument combined with estimate \eqref{aprio6-discr} allows us to conclude \eqref{additional-teta}; the strict positivity of $\vartheta$ follows from \eqref{discr-strict-pos}. Concerning convergence \eqref{cvT8}, we use \eqref{cvT5} to deduce that $(\piecewiseConstant \vartheta{\tau_k})^{(\mu+\alpha)/2} \to \vartheta^{(\mu+\alpha)/2} $ in $L^{2h/(\mu+\alpha)}(Q)$ for $h$ as in \eqref{cvT5}. Since $(\piecewiseConstant \vartheta{\tau_k})^{(\mu+\alpha)/2} $ is itself bounded in $L^2(0,T;H^1(\Omega))$ by estimate \eqref{est-temp-added?-bis}, \eqref{cvT8} ensues, and so does \eqref{cvT9} by a completely analogous argument. \par Let us now address convergences \eqref{cvuT4} and \eqref{cvuT5} for the sequence $(\upiecewiseConstant \vartheta{\tau_k})_k$. On the one hand, observe that estimates \eqref{log-added}--\eqref{aprio_Varlog} also hold for $(\upiecewiseConstant \vartheta{\tau_k})_k$. Therefore, we may apply
Thm.\ \ref{th:mie-theil} to the functions $ \log(\upiecewiseConstant\vartheta{\tau_k}) $ and conclude that there exists $\underline \lambda $ such that $\log(\upiecewiseConstant\vartheta{\tau_k}) \weaksto \underline \lambda$ in $ L^2 (0,T; H^1(\Omega)) \cap L^\infty(0,T; W^{1,d+\epsilon}(\Omega)^*)$ for all $\epsilon>0$, as well as $\log(\upiecewiseConstant\vartheta{\tau_k}(t)) \rightharpoonup \underline\lambda(t)$ in $H^1(\Omega)$ for almost all $t\in (0,T)$. On the other hand, since $\upiecewiseConstant \vartheta{\tau_k} (t) = \piecewiseConstant \vartheta{\tau_k}(t-\tau_k)$ for almost all $ t \in (0,T)$, from \eqref{enhSav}
we conclude that $ \log(\upiecewiseConstant \vartheta{\tau_k} (t) ) \rightharpoonup \log(\vartheta(t)) $ in $W^{1,d+\epsilon}(\Omega)^*$ for almost all $t \in (0,T)$. Hence we identify $\underline\lambda(t) = \log(\vartheta(t))$ for almost all $t \in (0,T)$. Then, convergences \eqref{cvuT4} and \eqref{cvuT5} ensue from the very same arguments as for the sequence $(\piecewiseConstant\vartheta{\tau_k})_k$ (in fact, the analogue
of \eqref{cvT2} also holds for $\log(\upiecewiseConstant \vartheta{\tau_k})_k$). \par Finally, under condition \eqref{hyp-K-stronger}, we can also count on the $\mathrm{BV}$-estimate \eqref{aprio7-discr} for $(\piecewiseConstant\vartheta\tau)_\tau$. We may then apply \cite[Lemma 7.2]{DMDSMo06QEPL}, which generalizes the classical Helly Theorem to functions with values in the dual of a separable Banach space, and conclude the pointwise convergence \eqref{cvT7}. Convergence \eqref{cvT6} follows from estimate
\eqref{aprio7-discr} combined with \eqref{aprio6-discr}, via an Aubin-Lions type compactness result for $\mathrm{BV}$-functions (see, e.g., \cite[Chap.\ 7, Cor.\ 4.9]{Roub05NPDE}). \end{proof} \par We are now in the position to develop the \underline{\bf proof of Theorem \ref{mainth:1}}. Let $(\tau_k)$ be a vanishing sequence of time steps, and let \[ (\piecewiseConstant\vartheta{\tau_k}, \upiecewiseConstant\vartheta{\tau_k}, \piecewiseLinear{\vartheta}{\tau_k}, \piecewiseConstant u{\tau_k}, \piecewiseLinear u{\tau_k}, \pwwll u{\tau_k}, \piecewiseConstant e{\tau_k}, \piecewiseLinear e{\tau_k}, \piecewiseConstant p{\tau_k}, \piecewiseLinear p{\tau_k}, \piecewiseConstant \zeta{\tau_k})_k \] be a sequence of solutions to the approximate PDE system \eqref{syst-interp} for which the convergences stated in Lemma \ref{l:compactness} hold, with limit quintuple $(\vartheta,u,e,p,\zeta)$. We will pass to the limit in the time-discrete versions of the momentum balance and of the plastic flow rule, in the discrete entropy inequality and in the discrete total energy inequality, to conclude that $(\vartheta,u,e,p)$ is an entropic solution to the thermoviscoplastic system
in the sense of Def.\ \ref{def:entropic-sols}. \par \noindent \emph{Step $0$: ad the initial conditions \eqref{initial-conditions} and the kinematic admissibility \eqref{kin-admis}.} It was shown in Lemma \ref{l:compactness} that the limit triple $(u,e,p)$ is kinematically admissible. Passing to the limit in the initial conditions \eqref{discr-Cauchy}, on account of \eqref{complete-approx-e_0} and of the pointwise convergences \eqref{cvU3},
\eqref{cvE3-bis}, \eqref{cvP3-bis}, and \eqref{dotu-quoted-later},
we conclude that the triple $(u,e,p)$ complies with the initial conditions
\eqref{initial-conditions}. \par\noindent \emph{Step $1$: ad the momentum balance \eqref{w-momentum-balance}.} Thanks to convergences \eqref{cvE1}--\eqref{cvE3} and \eqref{cvT1} we have that \begin{equation} \label{cvS}
\piecewiseConstant \sigma{\tau_k} = \mathbb{D} \piecewiseLinear{\dot e}{\tau_k} + \mathbb{C} \piecewiseConstant e{\tau_k} + \tau |\piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} - \piecewiseConstant{\vartheta}{\tau_k} \mathbb{B} \rightharpoonup \sigma = \mathbb{D} \dot e +\mathbb{C} e - \vartheta \mathbb{B} \qquad \text{ in $L^{\gamma/(\gamma-1)} (Q;\bbM_\mathrm{sym}^{d\times d})$.} \end{equation}
Combining this with convergence \eqref{cvU4} and with \eqref{converg-interp-L} for $(\piecewiseConstant \calL{\tau_k})_k$, we pass to the limit in the discrete momentum balance \eqref{eq-u-interp} and conclude that $(\vartheta,u,e)$ fulfill \eqref{w-momentum-balance} with test functions in $W_\mathrm{Dir}^{1,\gamma}(\Omega;\mathbb{R}^d)$.
By comparison in \eqref{w-momentum-balance} we conclude that $\ddot{u} \in L^2(0,T; H_{\mathrm{Dir}}^1(\Omega;\mathbb{R}^d)^*)$, whence \eqref{reg-u}. Moreover, a density argument yields that
\eqref{w-momentum-balance} holds with test functions in $H_{\mathrm{Dir}}^1(\Omega;\mathbb{R}^d)$. This concludes the proof of the momentum
balance.
\par\noindent \emph{Step $2$: ad the plastic flow rule \eqref{pl-flow}.} Convergences \eqref{cvP2}--\eqref{cvP3}, \eqref{cvZ}, and \eqref{cvS} ensure that the functions $(\vartheta, e,p,\zeta)$ fulfill \begin{equation} \label{prelim-flow-rule} \zeta +\dot p = \sigma_\mathrm{D} \qquad \text{a.e.\ in }\, Q. \end{equation} In order to conclude \eqref{pl-flow} it remains to show that $\zeta \in \partial_{\dot p} \mathrm{R}(\vartheta, \dot p)$ a.e.\ in $Q$, which can be reformulated via \eqref{characterization-subdiff}. In turn, the latter relations are equivalent to \begin{equation} \label{def-subdiff} \begin{cases} \iint_Q \zeta{:} \eta \;\!\mathrm{d} x \;\!\mathrm{d} t \leq \int_0^T \calR(\vartheta(t), \eta(t)) \;\!\mathrm{d} t \quad \text{for all } \eta \in L^2(Q;\bbM_{\mathrm{D}}^{d\times d}), \\
\iint_Q \zeta{:} \dot{p} \;\!\mathrm{d} x \;\!\mathrm{d} t \geq \int_0^T \calR(\vartheta(t), \dot{p}(t)) \;\!\mathrm{d} t.
\end{cases} \end{equation} To obtain \eqref{def-subdiff} we will pass to the limit in the analogous relations satisfied at level $k$, namely \begin{equation} \label{def-subdiff-k} \begin{cases} \iint_Q \piecewiseConstant\zeta{\tau_k} {:} \eta \;\!\mathrm{d} x \;\!\mathrm{d} t \leq \int_0^T \calR(\upiecewiseConstant\vartheta{\tau_k}(t), \eta(t)) \;\!\mathrm{d} t \quad \text{for all } \eta \in L^2(Q;\bbM_{\mathrm{D}}^{d\times d}), \\ \iint_Q \piecewiseConstant\zeta{\tau_k} {:} \piecewiseLinear { \dot p}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \geq \int_0^T \calR(\upiecewiseConstant\vartheta{\tau_k}(t), \piecewiseLinear { \dot p}{\tau_k}(t)) \;\!\mathrm{d} t. \end{cases} \end{equation}
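Let us briefly recall why the two conditions in \eqref{def-subdiff} (and, at the discrete level, in \eqref{def-subdiff-k}) do characterize the subdifferential inclusion: here one uses that $\mathrm{R}(\vartheta,\cdot)$ is convex and positively $1$-homogeneous, as encoded in \eqref{characterization-subdiff}. Indeed, if $\zeta \in \partial_{\dot p} \mathrm{R}(\vartheta,\dot p)$, i.e.
\[
\mathrm{R}(\vartheta,\eta) \geq \mathrm{R}(\vartheta,\dot p) + \zeta : (\eta - \dot p) \qquad \text{for all } \eta \in \bbM_\mathrm{D}^{d\times d},
\]
then the choices $\eta = 0$ and $\eta = 2\dot p$ yield $\zeta : \dot p = \mathrm{R}(\vartheta,\dot p)$, whence $\zeta : \eta \leq \mathrm{R}(\vartheta,\eta)$ for every $\eta$; conversely, combining the two conditions gives back the subdifferential inequality. Integration over $Q$, together with a localization argument in the first condition, then leads to \eqref{def-subdiff}.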
With this aim, we use conditions \eqref{hypR} on the dissipation metric $\mathrm{R}$. In order to pass to the limit in the first of \eqref{def-subdiff-k} for a fixed $\eta \in L^2(Q;\bbM_{\mathrm{D}}^{d\times d})$, we use convergence \eqref{cvZ} for $(\piecewiseConstant\zeta{\tau_k})_k$, and the fact that \[ \lim_{k\to\infty} \iint_Q \mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}, \eta) \;\!\mathrm{d} x \;\!\mathrm{d} t = \iint_Q \mathrm{R}(\vartheta, \eta) \;\!\mathrm{d} x \;\!\mathrm{d} t. \]
The latter limit passage follows from convergence \eqref{cvuT5} for $\upiecewiseConstant\vartheta{\tau_k}$ which, combined with the continuity property \eqref{hypR-cont}, gives that $\mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}, \eta) \to \mathrm{R}(\vartheta,\eta)$ almost everywhere in $Q$. Then we use
the dominated convergence theorem, taking into account that for every $k\in \mathbb{N}$ we have $ \mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}, \eta) \leq C_R |\eta|$ a.e.\ in $Q$ thanks to \eqref{linear-growth}. \par As for the second inequality in \eqref{def-subdiff-k},
we use \eqref{hypR-lsc} and the convexity of the map $\dot p \mapsto \mathrm{R}(\vartheta, \dot p)$, combined with convergences \eqref{cvP2} and \eqref{cvuT5}, to conclude via the Ioffe theorem \cite{Ioff77LSIF} that \begin{equation}\label{lscR} \liminf_{k\to \infty} \int_0^T \calR (\upiecewiseConstant \vartheta{\tau_k}(t), \piecewiseLinear {\dot p}{\tau_k}(t)) \;\!\mathrm{d} t \geq \int_0^T \calR(\vartheta(t), \dot{p}(t)) \;\!\mathrm{d} t\,. \end{equation} Secondly, we show that \begin{equation} \label{limsup-cond} \limsup_{k\to \infty} \iint_Q \piecewiseConstant\zeta{\tau_k} {:} \piecewiseLinear { \dot p}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \leq \iint_Q \zeta {:} { \dot p} \;\!\mathrm{d} x \;\!\mathrm{d} t. \end{equation} For \eqref{limsup-cond} we repeat the same argument developed to obtain \eqref{identifications-to-show} in the proof of Lemma \ref{l:3.6}, and observe that \[ \begin{aligned} &
\limsup_{k\to \infty} \left( \iint_Q \piecewiseConstant\zeta{\tau_k} {:} \piecewiseLinear { \dot p}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t + \iint_Q |\piecewiseLinear { \dot p}{\tau_k}|^2 \;\!\mathrm{d} x \;\!\mathrm{d} t +
\frac{\tau_k}\gamma \int_\Omega |\piecewiseConstant p{\tau_k}(T)|^\gamma \;\!\mathrm{d} x +
\iint_Q \mathbb{D} \piecewiseLinear{\dot e}{\tau_k}{:} \piecewiseLinear{\dot e}{\tau_k}\;\!\mathrm{d} x \;\!\mathrm{d} t \right)
\\ & \stackrel{(1)}{=} \limsup_{k\to \infty} \left( \iint_Q \left( \mathbb{D} \piecewiseLinear{\dot e}{\tau_k} + \mathbb{C} \piecewiseConstant e{\tau_k} +\tau | \piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} - \piecewiseConstant\vartheta{\tau_k} \mathbb{B} \right){:} \piecewiseLinear { \dot p}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t + \iint_Q \mathbb{D} \piecewiseLinear{\dot e}{\tau_k}{:} \piecewiseLinear{\dot e}{\tau_k}\;\!\mathrm{d} x \;\!\mathrm{d} t \right) + \dddn{\lim_{k\to\infty} \frac{\tau_k}{\gamma} \int_\Omega |p_{\tau_k}^0|^\gamma \;\!\mathrm{d} x }{$=0$} \\ &
\stackrel{(2)}\leq \limsup_{k\to \infty} \dddn{\iint_Q \left( \mathbb{D} \piecewiseLinear{\dot e}{\tau_k} + \mathbb{C} \piecewiseConstant e{\tau_k} +\tau | \piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} - \piecewiseConstant\vartheta{\tau_k} \mathbb{B} \right){:} \sig{\piecewiseLinear { \dot u}{\tau_k} - \piecewiseLinear {\dot w}{\tau_k}} \;\!\mathrm{d} x \;\!\mathrm{d} t }{$= \int_0^T \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*}{ \piecewiseConstant \calL{\tau_k} }{ \piecewiseLinear {\dot u}{\tau_k} {-}\piecewiseLinear {\dot w}{\tau_k}}\;\!\mathrm{d} t - \rho\iint_Q \partial_t \pwwll u{\tau_k} ( \piecewiseLinear {\dot u}{\tau_k} {-}\piecewiseLinear {\dot w}{\tau_k}) \;\!\mathrm{d} x\;\!\mathrm{d} t $ }+ \limsup_{k\to \infty} \iint_Q \piecewiseConstant{\sigma}{\tau_k} {:} \sig{\piecewiseLinear {\dot w}{\tau_k}} \;\!\mathrm{d} x \;\!\mathrm{d} t \\ & \quad - \liminf_{k\to \infty} \iint_Q \left( \mathbb{C} \piecewiseConstant e{\tau_k} +\tau | \piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} - \piecewiseConstant\vartheta{\tau_k} \mathbb{B} \right) {:} \piecewiseLinear { \dot e}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \\ & \stackrel{(3)}\leq \int_0^T \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*}{ \calL}{ \dot{u} {-} \dot w} \;\!\mathrm{d} t - \iint_Q\left( \rho \ddot{u} ( \dot u {-}\dot w) {+} \sigma{:} \sig{\dot w} {+} \mathbb{C} e {:} \dot e {- }\vartheta \mathbb{B} {:} \dot e \right) \;\!\mathrm{d} x \;\!\mathrm{d} t\\ & \stackrel{(4)}= \iint_Q \zeta{:} \dot p \;\!\mathrm{d} x \;\!\mathrm{d} t +
\iint_Q |\dot p|^2 \;\!\mathrm{d} x \;\!\mathrm{d} t + \iint_Q \mathbb{D} \dot e{:} \dot e \;\!\mathrm{d} x \;\!\mathrm{d} t, \end{aligned} \] where (1) follows from testing the discrete flow rule \eqref{eq-p-interp} by $\piecewiseLinear {\dot p}{\tau_k}$, (2) from the kinematic admissibility condition, yielding $\piecewiseLinear {\dot p}{\tau_k} = \sig{\piecewiseLinear{\dot u}{\tau_k}} - \piecewiseLinear {\dot e}{\tau_k} = \sig{\piecewiseLinear{\dot u}{\tau_k}{-} \piecewiseLinear{\dot w}{\tau_k}} - \piecewiseLinear {\dot e}{\tau_k} + \sig{\piecewiseLinear{\dot w}{\tau_k}}$, which also leads to the cancellation of the term $\iint_Q \mathbb{D} \piecewiseLinear{\dot e}{\tau_k}{:} \piecewiseLinear{\dot e}{\tau_k}$, and from condition \eqref{approx-e_0} on the sequence $(p_{\tau_k}^0)_k$.
The limit passage in (3) follows
\begin{itemize}
\item
from convergence \eqref{converg-interp-L} for $(\piecewiseConstant \calL{\tau_k})_k$,
\item
from \eqref{cvU1},
\item
from convergence \eqref{converg-interp-w} for $(\piecewiseLinear w{\tau_k})_k$,
combined with the stress convergence \eqref{cvS} and with \eqref{cvE3}, which yield
\[
\int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \tau |\piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} : \sig{\piecewiseLinear {\dot w}{\tau_k}} \;\!\mathrm{d} x \;\!\mathrm{d} r
= \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega \tau^{1-\alpha_w} |\piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} : \tau^{\alpha_w}\sig{\piecewiseLinear {\dot w}{\tau_k}} \;\!\mathrm{d} x \;\!\mathrm{d} r \to 0,
\] so that
\begin{equation}
\label{houston}
\lim_{k\to\infty} \int_{0}^{\piecewiseConstant{\mathsf{t}}{\tau}(t)} \int_\Omega
\piecewiseConstant \sigma{\tau_k} {:} \sig{\piecewiseLinear {\dot w}{\tau_k}} \;\!\mathrm{d} x \;\!\mathrm{d} r
= \int_0^t \int_\Omega \sigma {:} \sig{\dot w} \;\!\mathrm{d} x \;\!\mathrm{d} r\,,
\end{equation} \item
from \eqref{A-below}: \begin{equation} \label{A-below} \begin{aligned} &
\limsup_{k\to\infty}\left( -\iint_Q \rho \partial_t \pwwll u{\tau_k} ( \piecewiseLinear {\dot u}{\tau_k} {-} \piecewiseLinear {\dot w}{\tau_k} ) \;\!\mathrm{d} x\;\!\mathrm{d} t \right) \\ & \leq - \liminf_{k\to\infty} \tfrac\rho2 \int_\Omega |\piecewiseLinear {\dot u}{\tau_k} (T)|^2 \;\!\mathrm{d} x +\tfrac\rho2 \int_\Omega |\piecewiseLinear {\dot u}{\tau_k} (0)|^2 \;\!\mathrm{d} x - \lim_{k\to\infty} \rho \iint_Q \partial_t \pwwll u{\tau_k} \piecewiseLinear {\dot w}{\tau_k} \;\!\mathrm{d} x\;\!\mathrm{d} t \\ & \stackrel{(A)}{\leq} -\tfrac\rho2 \int_\Omega | \dot u(T)|^2 \;\!\mathrm{d} x +\tfrac\rho2 \int_\Omega |\dot{u}_0|^2 \;\!\mathrm{d} x -\rho \iint_Q \ddot{u} \dot w \;\!\mathrm{d} x \;\!\mathrm{d} t \end{aligned} \end{equation} with (A) due to \eqref{cvU4}, \eqref{converg-interp-w}, and \eqref{dotu-quoted-later}, \item from \eqref{B-below}:
\begin{equation} \label{B-below} \begin{aligned} & \begin{aligned} -\liminf_{k\to\infty} \iint_Q \mathbb{C} \piecewiseConstant e{\tau_k} {:} \piecewiseLinear { \dot e}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t & \leq - \liminf_{k\to\infty} \int_\Omega \tfrac12 \mathbb{C} \piecewiseConstant e{\tau_k} (T){:} \piecewiseConstant e{\tau_k} (T) \;\!\mathrm{d} x + \int_\Omega \tfrac12 \mathbb{C} e_0{:} e_0 \;\!\mathrm{d} x \\ & \stackrel{(B)}{\leq}- \int_\Omega \tfrac12 \mathbb{C} e(T){:} e(T) \;\!\mathrm{d} x + \int_\Omega \tfrac12 \mathbb{C} e_0{:} e_0 \;\!\mathrm{d} x, \end{aligned} \\ & -\liminf_{k\to\infty}
\iint_Q \tau | \piecewiseConstant e{\tau_k}|^{\gamma-2} \piecewiseConstant e{\tau_k} {:} \piecewiseLinear { \dot e}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \leq - \liminf_{k\to\infty} \int_\Omega \tfrac\tau\gamma | \piecewiseConstant e{\tau_k} (T)|^\gamma\;\!\mathrm{d} x + \lim_{k \to\infty} \int_\Omega \tfrac{\tau_k}\gamma | e_{\tau_k}^0|^\gamma \;\!\mathrm{d} x \stackrel{(C)}{\leq} 0, \\ & \lim_{k\to\infty} \iint_Q \piecewiseConstant\vartheta{\tau_k} \mathbb{B} {:} \piecewiseLinear { \dot e}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \stackrel{(D)}{ = } \iint_Q \vartheta \mathbb{B} {:} \dot e \;\!\mathrm{d} x \;\!\mathrm{d} t, \end{aligned} \end{equation} with (B) due to \eqref{cvE3-bis}, (C) due to \eqref{cvE3} and \eqref{approx-e_0}, and (D) due to \eqref{cvE2} and \eqref{cvT5}. \end{itemize}
Finally, (4) follows from testing \eqref{w-momentum-balance} by $\dot u - \dot w$, and \eqref{prelim-flow-rule} by $\dot p$. From the thus obtained $\limsup$-inequality, arguing in the very same way as in the proof of Lemma \ref{l:3.6}, we conclude that \begin{equation} \label{strong-convergences} \begin{aligned} & \lim_{k \to\infty} \iint_Q \piecewiseConstant\zeta{\tau_k} {:} \piecewiseLinear { \dot p}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t = \iint_Q \zeta {:} { \dot p} \;\!\mathrm{d} x \;\!\mathrm{d} t, \\ & \piecewiseLinear{\dot p}{\tau_k} \to \dot p && \text{ in } L^2(Q;\bbM_\mathrm{D}^{d\times d}), \\ & \piecewiseLinear{\dot e}{\tau_k} \to \dot e && \text{ in } L^2(Q;\bbM_\mathrm{sym}^{d\times d})\,. \end{aligned} \end{equation} Hence, combining the first of \eqref{strong-convergences} with \eqref{lscR}, we take the limit in the second inequality in \eqref{def-subdiff-k}.
All in all, we deduce \eqref{def-subdiff}. Hence, the functions $(\vartheta, e,p,\zeta)$ fulfill the plastic flow rule \eqref{pl-flow}. \par\noindent \emph{Step $3$: enhanced convergences.} For later use, observe that \eqref{strong-convergences} give \begin{equation} \label{cvE3-quater} \piecewiseLinear{e}{\tau_k} \to e \quad \text{ in } H^1(0,T;L^2(\Omega; \bbM_\mathrm{sym}^{d\times d}))\,, \qquad \piecewiseLinear{p}{\tau_k} \to p \quad \text{ in } H^1(0,T;L^2(\Omega; \bbM_\mathrm{D}^{d\times d}))\,. \end{equation} Moreover, by the kinematic admissibility condition we deduce the strong convergence of $\sig{\piecewiseLinear {\dot u}{\tau_k}}$ in $L^2(Q;\bbM_\mathrm{sym}^{d\times d})$,
hence, by Korn's inequality,
\begin{equation} \label{cvU-quater} \piecewiseLinear u{\tau_k} \to u \quad \text{ in } H^1(0,T; H^1(\Omega;\mathbb{R}^d)). \end{equation} Finally, repeating the $\limsup$ argument leading to \eqref{strong-convergences} on a generic interval $[0,t]$, we find that \begin{equation} \label{cvU-quinque} \piecewiseLinear{\dot u}{\tau_k}(t) \to \dot{u}(t) \quad \text{ in } L^2(\Omega;\mathbb{R}^d) \quad \text{ for every } t \in [0,T]. \end{equation} All in all, also on account of \eqref{cvE3} and \eqref{cvP3}, we have that convergence \eqref{cvS} improves to a strong one.
Therefore, from \eqref{eq-p-interp} we deduce that
\[
\piecewiseConstant \zeta{\tau_k} = (\piecewiseConstant \sigma{\tau_k})_\mathrm{D} - \piecewiseLinear{\dot{p}}{\tau_k} - \tau_k |\piecewiseConstant p{\tau_k}|^{\gamma-2} \piecewiseConstant p{\tau_k} \to \sigma_\mathrm{D} - \dot p = \zeta \qquad \text{a.e.\ in } Q. \]
We will use this to pass to the limit in the pointwise inequality \[ \piecewiseConstant \zeta{\tau_k}(x,t){:} \left( \dot{p}(x,t){-} \piecewiseLinear{\dot{p}}{\tau_k}(x,t) \right) +\mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}(x,t), \piecewiseLinear{\dot{p}}{\tau_k}(x,t)) \leq \mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}(x,t), \dot p(x,t)) \qquad \text{for a.a. } (x,t)\in Q. \] Indeed, in view of \eqref{strong-convergences}, which gives (up to a further, not relabeled, extraction) $\lim_{k\to\infty} \piecewiseConstant \zeta{\tau_k}{:} ( \dot{p}{-} \piecewiseLinear{\dot{p}}{\tau_k}) =0 $ a.e.\ in $Q$, of convergence \eqref{cvuT5} for $\upiecewiseConstant \vartheta{\tau_k}$, and of the continuity property \eqref{hypR-cont}, from the above inequality we conclude that \[ \limsup_{k\to\infty} \mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}(x,t), \piecewiseLinear{\dot{p}}{\tau_k}(x,t)) \leq \mathrm{R}(\vartheta(x,t), \dot p(x,t)) \qquad \text{for a.a. } (x,t) \in Q. \] Combining this with the lower semicontinuity inequality which derives from \eqref{hypR-lsc}, we ultimately have that $\mathrm{R}(\upiecewiseConstant \vartheta{\tau_k}, \piecewiseLinear{\dot{p}}{\tau_k}) \to \mathrm{R}(\vartheta, \dot p) $ a.e.\ in $Q$, hence \begin{equation} \label{dominated-R}
\mathrm{R}(\upiecewiseConstant\vartheta{\tau_k}, \piecewiseLinear {\dot p}{\tau_k}) \to \mathrm{R}(\vartheta, \dot p) \quad \text{ in } L^2 (Q)
\end{equation}
by the dominated convergence theorem. \par\noindent \emph{Step $4$: ad the entropy inequality \eqref{entropy-ineq}.} Let us fix a positive test function $\varphi \in \rmC^0 ([0,T]; W^{1,\infty}(\Omega)) \cap H^1(0,T; L^{6/5}(\Omega))$ for \eqref{entropy-ineq}, and approximate it with the discrete test functions from \eqref{discrete-tests-phi}: their interpolants $\piecewiseConstant \varphi\tau, \, \piecewiseLinear \varphi\tau$ comply with convergences \eqref{convergences-test-interpolants} and enter the discrete entropy inequality \eqref{entropy-ineq-discr}, in which we now pass to the limit. We take the limit of the first integral term on the left-hand side of \eqref{entropy-ineq-discr} based on convergence \eqref{cvT2} for $\log(\upiecewiseConstant\vartheta{\tau_k})$. \par For the second integral term, we will prove that \begin{equation} \label{weak-nabla-logteta} \kappa (\piecewiseConstant\vartheta{\tau_k}) \nabla \log(\piecewiseConstant\vartheta{\tau_k}) \rightharpoonup \kappa(\vartheta) \nabla \log(\vartheta) \qquad \text{ in }
L^{1+\bar\delta}(Q;\mathbb{R}^d) \text{ with $ \bar\delta = \frac{\alpha}\mu $ and $\alpha \in [(2-\mu)^+, 1)$.} \end{equation}
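Let us first record the elementary arithmetic behind the exponent $\bar\delta = \alpha/\mu$: setting $r := 1 + \bar\delta = (\mu+\alpha)/\mu$, one has
\[
2 - r = \frac{\mu - \alpha}{\mu}, \qquad \text{whence} \qquad \frac{2r}{2-r} = \frac{2(\mu+\alpha)}{\mu - \alpha},
\]
which is exactly the integrability that will be obtained below for $(\piecewiseConstant\vartheta{\tau_k})^{(\mu-\alpha)/2}$.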
First of all, let us prove that $(\kappa (\piecewiseConstant\vartheta{\tau_k}) \nabla \log(\piecewiseConstant\vartheta{\tau_k}))_k $ is bounded in $L^{1+\bar\delta}(Q;\mathbb{R}^d) $. To this aim, we argue as in the proof of the \emph{Fifth a priori estimate} from Prop.\ \ref{prop:aprio} and observe that \[
\left| \kappa (\piecewiseConstant\vartheta{\tau_k}) \nabla \log(\piecewiseConstant\vartheta{\tau_k}) \right| \leq C
\left(|\piecewiseConstant\vartheta{\tau_k}|^{\mu-1} +\frac1{\bar \vartheta} \right) |\nabla \piecewiseConstant\vartheta{\tau_k} | \qquad \text{a.e.\ in } \, Q, \] by the growth condition \eqref{hyp-K} and the positivity \eqref{strong-strict-pos}. Let us now focus on the first term on the r.h.s.: with H\"older's inequality we have that, for a positive exponent $r$, \[ \begin{aligned}
\iint_Q \left(|\piecewiseConstant\vartheta{\tau_k}|^{\mu-1} |\nabla \piecewiseConstant\vartheta{\tau_k} | \right)^r \;\!\mathrm{d} x \;\!\mathrm{d} t &
\leq \| ( |\piecewiseConstant\vartheta{\tau_k}|^{(\mu-\alpha)/2} )^r\|_{L^{2/(2{-}r)}(Q)}
\|( |\piecewiseConstant\vartheta{\tau_k}|^{(\mu+\alpha-2)/2} |\nabla\piecewiseConstant\vartheta{\tau_k}| )^r\|_{L^{2/r}(Q;\mathbb{R}^d)} \\ & \leq C \| ( |\piecewiseConstant\vartheta{\tau_k}|^{(\mu-\alpha)/2} )^r\|_{L^{2/(2{-}r)}(Q)}, \end{aligned} \] where the second inequality follows from the estimate for
$ |\piecewiseConstant\vartheta{\tau_k}|^{(\mu+\alpha-2)/2} \nabla\piecewiseConstant\vartheta{\tau_k} $ in $L^2(Q;\mathbb{R}^d)$ thanks to \eqref{est-temp-added?-bis}. The latter also yields a bound for $ (\piecewiseConstant\vartheta{\tau_k})^{(\mu+\alpha)/2} $ in $L^{2}(Q)$, hence an estimate for $ (\piecewiseConstant\vartheta{\tau_k})^{(\mu-\alpha)/2} $ in $L^{2(\mu+\alpha)/(\mu-\alpha)}(Q)$. Therefore, for $r = (\mu+\alpha)/\mu = 1+\alpha/\mu$ we obtain that $ \| ( |\piecewiseConstant\vartheta{\tau_k}|^{(\mu-\alpha)/2} )^r\|_{L^{2/(2{-}r)}(Q)} \leq C$, and the estimate for $\kappa (\piecewiseConstant\vartheta{\tau_k}) \nabla \log(\piecewiseConstant\vartheta{\tau_k}) $ follows. For the proof of convergence \eqref{weak-nabla-logteta}, relying on convergences \eqref{cvT1}--\eqref{cvT5}, we refer to \cite[Thm.\ 1]{Rocca-Rossi}. Therefore we conclude the first of \eqref{further-logteta}. \par To take the limit in the right-hand side terms in the entropy inequality \eqref{entropy-ineq-discr}, for the first two integrals we use convergence \eqref{cvT4} combined with \eqref{convergences-test-interpolants}. A lower semicontinuity argument also based on the Ioffe theorem \cite{Ioff77LSIF} and on convergences \eqref{convergences-test-interpolants}, \eqref{cvT2}, and \eqref{cvT5} gives that \[ \begin{aligned} \limsup_{k\to\infty} \left( - \int_{\piecewiseConstant{\mathsf{t}}{\tau_k}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau_k}(t)} \int_\Omega \kappa(\piecewiseConstant \vartheta{\tau_k}) \frac{\piecewiseConstant\varphi{\tau_k}}{\piecewiseConstant \vartheta{\tau_k}} \nabla \log(\piecewiseConstant \vartheta{\tau_k}) \nabla \piecewiseConstant \vartheta{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} r \right)
& = - \liminf_{k\to\infty} \int_{\piecewiseConstant{\mathsf{t}}{\tau_k}(s)}^{\piecewiseConstant{\mathsf{t}}{\tau_k}(t)} \int_\Omega \kappa(\piecewiseConstant \vartheta{\tau_k}) \piecewiseConstant\varphi{\tau_k} | \nabla \log(\piecewiseConstant \vartheta{\tau_k}) |^2 \;\!\mathrm{d} x \;\!\mathrm{d} r \\
& \leq - \int_s^t \int_\Omega \kappa(\vartheta) \varphi
|\nabla \log(\vartheta)|^2\;\!\mathrm{d} x \;\!\mathrm{d} r, \end{aligned} \] which allows us to deal with the third integral term on the r.h.s.\ of \eqref{entropy-ineq-discr}.
We take the limit of the fourth integral term taking into account convergences \eqref{converg-interp-g}, \eqref{cvT5},
which yields
\[
\frac1{\piecewiseConstant\vartheta{\tau_k}} \to \frac1\vartheta \qquad \text{ in } L^p(Q) \quad \text{for all } 1 \leq p<\infty,
\]
since $\left| \frac1{\piecewiseConstant\vartheta{\tau_k}} \right| \leq \frac1{\bar\vartheta}$ a.e.\ in $Q$, as well as
the previously established strong convergences \eqref{strong-convergences} and \eqref{dominated-R}.
Finally,
since $\nabla \left( \tfrac1{\piecewiseConstant\vartheta{\tau_k}} \right) =
- \tfrac{\nabla \piecewiseConstant\vartheta{\tau_k}}{|\piecewiseConstant\vartheta{\tau_k}|^2}$, combining \eqref{strong-strict-pos} with estimate
\eqref{aprio6-discr} we infer that $( \tfrac1{\piecewiseConstant\vartheta{\tau_k}} )_k$ is bounded in $L^2(0,T;H^1(\Omega))$.
All in all, we have
\begin{equation}
\label{weak-1-teta}
\frac1{\piecewiseConstant\vartheta{\tau_k}} \rightharpoonup \frac1\vartheta \qquad \text{ in } L^2(0,T;H^1(\Omega)), \end{equation} which allows us to pass to the limit in the fifth integral term, in combination with convergence \eqref{converg-interp-h}.
\par
Ultimately, we establish the summability $\kappa(\vartheta) \nabla \log(\vartheta) \in L^{1}(0,T;X)$, with $X$ from \eqref{further-logteta}, by combining the fact that $\vartheta^{(\mu+\alpha-2)/2}\nabla\vartheta \in L^2(Q;\mathbb{R}^d)$ thanks to convergence \eqref{cvT8} with the information that $\vartheta^{(\mu-\alpha)/2} \in L^2(0,T; H^1(\Omega))$ by \eqref{cvT9}, and by arguing in the very same way as in the proof of the \emph{Fifth a priori estimate} from Prop.\ \ref{prop:aprio}. In view of \eqref{further-logteta}, the entropy inequality \eqref{entropy-ineq}
in fact makes sense for all positive test functions $\varphi $ in $H^1(0,T; L^{6/5}(\Omega)) \cap L^\infty(0,T;W^{1,d+\epsilon}(\Omega))$ with $\epsilon>0$. Therefore, with a density argument we conclude it for this larger test space.
\par\noindent \emph{Step $5$: ad the total energy inequality \eqref{total-enineq}.} It is deduced by passing to the limit in the discrete total energy inequality \eqref{total-enid-discr}. For the first integral term on the left-hand side, we use that $\piecewiseLinear {\dot u}{\tau_k}(t) \rightharpoonup \dot u(t)$ in $L^2(\Omega;\mathbb{R}^d)$ for all $t\in [0,T]$, cf.\ \eqref{dotu-quoted-later}. For the second term we observe that
$\liminf_{k\to\infty} \calE_{\tau_k} (\piecewiseConstant\vartheta{\tau_k}(t), \piecewiseConstant e{\tau_k}(t)) \geq \calE (\vartheta(t), e(t))$ for \emph{almost all} $t\in (0,T)$ by convergence \eqref{cvT5} for $\piecewiseConstant\vartheta{\tau_k}$ and by \eqref{cvE3-quater}, combined with \eqref{stability-e-k}. The limit passage on the right-hand side, for almost all $s \in (0,t)$, follows from \eqref{cvU-quater}, again \eqref{cvT5} and \eqref{cvE3-quater}, from \eqref{houston},
and from the convergences \eqref{convs-interp-data} for the interpolants $(\piecewiseConstant H{\tau_k})_k, \, (\piecewiseConstant h{\tau_k})_k, \, (\piecewiseConstant \calL{\tau_k})_k, \, (\piecewiseConstant w{\tau_k})_k$. \par\noindent This concludes the proof of Theorem \ref{mainth:1}.
\QED
\par
\noindent
We now briefly sketch the \underline{\bf proof of Theorem \ref{mainth:2}}. The limit passage in the discrete momentum balance and in the plastic flow rule, cf.\ \eqref{eq-u-interp}
and \eqref{eq-p-interp}, follows from the arguments in the proof of Thm.\ \ref{mainth:1}.
\par
As for the heat equation, we shall as a first step prove that the limit quadruple $(\vartheta, u,e,p)$ complies with
\begin{equation} \begin{aligned} \label{eq-teta-interm} & \pairing{}{W^{1,\infty}(\Omega)}{\vartheta(t)}{\varphi(t)} -\int_0^t\int_\Omega \vartheta \varphi_t \;\!\mathrm{d} x \;\!\mathrm{d} s +\int_0^t \int_\Omega \kappa(\vartheta) \nabla \vartheta\nabla\varphi \;\!\mathrm{d} x \;\!\mathrm{d} s \\ &\quad = \int_\Omega \vartheta_0 \varphi(0) \;\!\mathrm{d} x +
\int_0^t\int_\Omega \left(H+ \mathrm{R}(\vartheta, \dot p)+ |\dot p|^2 +\mathbb{D} \dot e {:} \dot e -\vartheta \mathbb{B} {:} \dot e \right)\varphi \;\!\mathrm{d} x \;\!\mathrm{d} s + \int_0^t \int_{\partial\Omega} h \varphi \;\!\mathrm{d} S \;\!\mathrm{d} s \end{aligned} \end{equation} for all test functions $ \varphi\in \rmC^0([0,T]; W^{1,\infty}(\Omega))\cap H^1(0,T;L^{6/5}(\Omega)) $ and for all $ t\in (0,T]$. With this aim, we pass to the limit in the approximate temperature equation \eqref{eq-teta-interp}, tested by the approximate test functions from \eqref{discrete-tests-phi}, where we integrate by parts in time the term $\int_0^t \int_\Omega \piecewiseLinear{\dot \vartheta}{\tau_k} \piecewiseConstant{\varphi}{\tau_k}\;\!\mathrm{d} x \;\!\mathrm{d} r$. For this limit passage, we exploit convergences \eqref{convergences-test-interpolants}
as well as
\eqref{cvT6} for $(\upiecewiseConstant \vartheta{\tau_k})_k$ and \eqref{cvT7}.
\par
For the limit passage in the term $\iint_Q \kappa(\piecewiseConstant \vartheta{\tau_k}) \nabla \piecewiseConstant \vartheta{\tau_k} \nabla \piecewiseConstant \varphi{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t$ we prove that
$\kappa(\piecewiseConstant \vartheta{\tau_k}) \nabla \piecewiseConstant \vartheta{\tau_k} \rightharpoonup \kappa(\vartheta) \nabla \vartheta$ in $L^{1+\tilde\delta}(Q;\mathbb{R}^d)$, with $\tilde \delta>0$ given by \eqref{further-k-teta}. Let us check the bound
\begin{equation}
\label{bound-tilde-delta}
\| \kappa(\piecewiseConstant \vartheta{\tau_k}) \nabla \piecewiseConstant \vartheta{\tau_k} \|_{L^{1+\tilde\delta}(Q;\mathbb{R}^d)} \leq C,
\end{equation} by again resorting to estimates \eqref{aprio6-discr} and \eqref{est-temp-added?-bis}. Indeed, by \eqref{hyp-K} we have that
\begin{equation}
\label{4.47}
| \kappa(\piecewiseConstant \vartheta{\tau_k}) \nabla \piecewiseConstant \vartheta{\tau_k} | \leq C| \piecewiseConstant \vartheta{\tau_k}|^{(\mu-\alpha+2)/2} | \piecewiseConstant \vartheta{\tau_k}|^{(\mu+\alpha-2)/2} |\nabla \piecewiseConstant \vartheta{\tau_k} |+ C | \nabla \piecewiseConstant \vartheta{\tau_k} | \qquad \text{a.e.\ in }\, Q,
\end{equation}
and we estimate the first term on the r.h.s.\ by observing that $| \piecewiseConstant \vartheta{\tau_k}|^{(\mu+\alpha-2)/2} |\nabla \piecewiseConstant \vartheta{\tau_k} |$ is bounded in $L^2(Q)$ thanks to \eqref{est-temp-added?-bis}. On the other hand, in the case $d=3$, to which we confine the discussion, by interpolation arguments $\piecewiseConstant\vartheta{\tau_k}$ is bounded in $L^h(Q)$ for every $1 \leq h<\frac83$. Therefore, for $\alpha>\mu-\frac23$ (so that $\mu-\alpha+2<\frac83$), the functions $(| \piecewiseConstant \vartheta{\tau_k}|^{(\mu-\alpha+2)/2})_k$ are bounded in $L^r(Q)$ with $1\leq r< \frac{16}{3(\mu-\alpha+2)}$. Then, \eqref{bound-tilde-delta} follows from \eqref{4.47} via the H\"older inequality. The corresponding weak convergence can be proved
arguing in the very same way as in the proof of \cite[Thm.\ 2]{Rocca-Rossi}, to which we refer the reader.
Therefore we conclude that
$\kappa(\vartheta) \nabla \vartheta \in L^{1+\tilde\delta}(Q;\mathbb{R}^d)$. Observe that $\kappa(\vartheta) \nabla \vartheta = \nabla (\hat{\kappa}(\vartheta))$ thanks to \cite{Marcus-Mizel}. Since $\hat{\kappa}(\vartheta)$ itself is a function in $L^{1+\tilde\delta}(Q) $ (for $d=3$, this follows from the fact that $\hat{\kappa}(\vartheta) \sim \vartheta^{\mu+1} \in L^{h/(\mu+1)}(Q)$ for every $1 \leq h<\frac83$), we conclude \eqref{further-k-teta}.
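For instance, in the case $d=3$ the above H\"older argument quantifies $\tilde\delta$ as follows: the product of a factor bounded in $L^r(Q)$, with $1 \leq r < \frac{16}{3(\mu-\alpha+2)}$, and of a factor bounded in $L^2(Q)$ is bounded in $L^s(Q)$ with
\[
\frac1s = \frac1r + \frac12 > \frac{3(\mu-\alpha+2)}{16} + \frac12\,,
\]
and $s>1$ (i.e.\ $\tilde\delta = s-1>0$) is achievable precisely when $\mu-\alpha+2 < \frac83$, i.e.\ for $\alpha > \mu - \frac23$, consistently with the choice of $\alpha$ made above.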
\par
The limit passage on the r.h.s.\ of the discrete heat equation \eqref{eq-teta-interp} results from \eqref{converg-interp-g}, from \eqref{cvT6}, the strong convergences \eqref{strong-convergences}, and \eqref{dominated-R}.
\par
All in all, we obtain \eqref{eq-teta-interm}, whence for every $\varphi \in W^{1,\infty}(\Omega)$ and every $0 \leq s \leq t \leq T$
\[
\begin{aligned}
& \pairing{}{W^{1,\infty}(\Omega)}{\vartheta(t)-\vartheta(s)}{\varphi} \\ & = -\int_s^t \int_\Omega \kappa(\vartheta) \nabla \vartheta\nabla\varphi \;\!\mathrm{d} x \;\!\mathrm{d} r +
\int_s^t\int_\Omega \left(H+ \mathrm{R}(\vartheta, \dot p)+ |\dot p|^2 +\mathbb{D} \dot e {:} \dot e -\vartheta \mathbb{B} {:} \dot e \right) \varphi \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_s^t \int_{\partial\Omega} h \varphi \;\!\mathrm{d} S \;\!\mathrm{d} r\,. \end{aligned} \]
From this we easily conclude
the enhanced regularity \eqref{enh-teta-W11}. Thanks to \cite[Thm.\ 7.1]{DMDSMo06QEPL}, the absolutely continuous function $\vartheta: [0,T] \to W^{1,\infty}(\Omega)^*$ admits at almost all $t\in (0,T)$ the derivative $\dot{\vartheta}(t)$, which turns out to be the limit as $h \to 0$ of the incremental quotients $\frac{\vartheta(t+h)-\vartheta(t)}h$, w.r.t.\ the weak$^*$-topology of $W^{1,\infty}(\Omega)^*$. Therefore, the enhanced weak formulation of the heat equation \eqref{eq-teta} follows.
\par
Recall (cf.\ the comments following the statement of Thm.\ \ref{mainth:2}) that $\tilde{\delta}$ is small enough to ensure that $W^{1,1+1/\tilde{\delta}}(\Omega) \subset L^\infty(\Omega)$. Therefore, the terms on the r.h.s.\ of the heat equation \eqref{heat} can be multiplied by test functions $\varphi \in W^{1,1+1/\tilde{\delta}}(\Omega)$.
Thanks to \eqref{further-k-teta}, also the second term on the l.h.s.\ of \eqref{heat} admits such test functions. Therefore by comparison we conclude that $\dot{\vartheta} \in L^1(0,T; W^{1,1+1/\tilde{\delta}}(\Omega)^*)$ and, with a density argument, extend the weak formulation \eqref{eq-teta} to this (slightly) larger space of test functions. This finishes the proof of Theorem \ref{mainth:2}. \QED \begin{remark}[Energy convergences for the approximate solutions] \label{rmk:energy-conv} \upshape As a by-product of the proofs of Theorems \ref{mainth:1} and \ref{mainth:2}, we improve convergences \eqref{convergences-cv} of the approximate solutions to an entropic/weak energy solution of the thermoviscoplastic system. More specifically, it follows from \eqref{cvT5} and \eqref{strong-convergences}--\eqref{cvU-quinque} that we have the convergence of the kinetic energies \[
\frac{\rho}2 \int_\Omega |\piecewiseLinear {\dot u}{\tau_k}(t)|^2 \;\!\mathrm{d} x \to \frac{\rho}2 \int_\Omega |\dot u(t)|^2 \;\!\mathrm{d} x \qquad \text{for all } t \in [0,T], \] of the dissipated energies \[ \int_0^T \int_\Omega \mathbb{D} \piecewiseLinear {\dot e}{\tau_k}{:} \piecewiseLinear {\dot e}{\tau_k} \;\!\mathrm{d} x \;\!\mathrm{d} t \to \int_0^T \int_\Omega \mathbb{D} \dot e{:} \dot e \;\!\mathrm{d} x \;\!\mathrm{d} t, \qquad \int_0^T \calR (\upiecewiseConstant \vartheta{\tau_k}, \piecewiseLinear {\dot p}{\tau_k}) \;\!\mathrm{d} t \to \int_0^T \calR (\vartheta, \dot p) \;\!\mathrm{d} t, \] and of the thermal and mechanical energies \[ \calF(\piecewiseConstant \vartheta{\tau_k}(t)) \to \calF(\vartheta(t)) \quad \text{for a.a. } t \in (0,T), \qquad \calQ(\piecewiseConstant e{\tau_k}) \to \calQ(e) \qquad \text{uniformly in } [0,T]. \] \end{remark}
\section{\bf Setup for the perfectly plastic system} \label{s:5} As already mentioned in the Introduction, in the vanishing-viscosity limit of the thermoviscoplastic system we will obtain the \emph{(global) energetic formulation} for the perfectly plastic system, coupled with the stationary limit of the heat equation. Prior to performing this asymptotic analysis, in this section we gain further insight into the concept of energetic solution for perfect plasticity. \par For the energetic formulation to be fully meaningful, in Sec.\ \ref{ss:5.1} we need to strengthen the assumptions, previously given in Section \ref{ss:2.1},
on the reference configuration $\Omega$,
on the elasticity tensor $\mathbb{C}$,
and on the multifunction of elastic domains $\Omega \ni x \rightrightarrows K(x) \subset \bbM_\mathrm{D}^{d\times d}$ (indeed,
we will drop the dependence of $K$ on the spatially and temporally nonsmooth variable $\vartheta$).
Instead, we will weaken the regularity requirements on the Dirichlet loading $w$.
\par Preliminarily, let us recall some basic facts about the space of functions with bounded deformation in $\Omega$. \paragraph{{\em The space $\mathrm{BD}(\Omega;\mathbb{R}^d)$}} It is defined by \begin{equation} \mathrm{BD}(\Omega;\mathbb{R}^d): = \{ u \in L^1(\Omega;\mathbb{R}^d)\, : \ \sig{u} \in {\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d}) \}, \end{equation}
with ${\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d})$ the space of Radon measures on $\Omega$ with values in $\bbM_\mathrm{sym}^{d\times d}$, endowed with the norm $\| \lambda\|_{{\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d}) }: = |\lambda|(\Omega)$, where $|\lambda|$ denotes the total variation of the measure $\lambda$. Recall that, by the Riesz representation theorem,
${\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d})$ can be identified with the dual of the space $\mathrm{C}_0(\Omega;\bbM_\mathrm{sym}^{d\times d})$ of the continuous functions $\varphi: \Omega \to \bbM_\mathrm{sym}^{d\times d}$ such that the sets $\{ |\varphi|\geq c\}$ are compact for every $c>0$. The space $ \mathrm{BD}(\Omega;\mathbb{R}^d)$ is endowed with the graph norm \[
\| u \|_{\mathrm{BD}(\Omega;\mathbb{R}^d)}: = \| u \|_{L^1(\Omega;\mathbb{R}^d)}+ \| \sig{u}\|_{{\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d}) }, \] which makes it a Banach space. It turns out that $\mathrm{BD}(\Omega;\mathbb{R}^d)$ is the dual of a normed space, cf.\ \cite{Temam-Strang80}. \par In addition to the strong convergence induced by $\norm{\cdot}{\mathrm{BD}(\Omega;\mathbb{R}^d)}$, this duality defines a notion of weak$^*$ convergence on $\mathrm{BD}(\Omega;\mathbb{R}^d)$:
a sequence $(u_k)_k$ converges weakly$^*$ to $u$ in $\mathrm{BD}(\Omega;\mathbb{R}^d)$ if $u_k\rightharpoonup u$ in $L^1(\Omega;\mathbb{R}^d)$ and $\sig{u_k}\weaksto \sig{u}$ in $ {\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d})$. Every bounded sequence in $\mathrm{BD}(\Omega;\mathbb{R}^d)$ has a weakly$^*$ converging subsequence and, furthermore, a subsequence converging weakly in $L^{d/(d{-}1)}(\Omega;\mathbb{R}^d)$ and strongly in $L^{p}(\Omega;\mathbb{R}^d)$ for every $1\leq p <\frac d{d-1}$. \par
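A prototypical example, illustrating why $\mathrm{BD}(\Omega;\mathbb{R}^d)$ is the natural displacement space in the present context, is a piecewise rigid displacement with a jump: taking, for instance, $\Omega = (-1,1)^d$, a fixed vector $a \in \mathbb{R}^d$, and denoting by $\mathrm{e}_1 = (1,0,\ldots,0)$ the first vector of the canonical basis, the function
\[
u(x) := \begin{cases} a & \text{if } x_1 > 0, \\ 0 & \text{if } x_1 \leq 0, \end{cases} \qquad \text{fulfills} \qquad \sig{u} = (a \odot \mathrm{e}_1)\, \mathscr{H}^{d-1} \llcorner \{x_1 = 0\},
\]
so that $u \in \mathrm{BD}(\Omega;\mathbb{R}^d) \setminus W^{1,1}(\Omega;\mathbb{R}^d)$: the strain concentrates on the Lebesgue-negligible jump set, which is precisely the kind of slip surface that the perfectly plastic evolution has to accommodate.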
Finally, we recall that for every $u \in \mathrm{BD}(\Omega;\mathbb{R}^d)$ the trace $u|_{\partial\Omega}$ is well defined as an element in $L^1(\partial\Omega;\mathbb{R}^d)$, and that (cf.\ \cite[Prop.\ 2.4, Rmk.\ 2.5]{Temam83}) a Poincar\'e-type inequality holds: \begin{equation} \label{PoincareBD} \exists\, C>0 \ \ \forall\, u \in \mathrm{BD}(\Omega;\mathbb{R}^d)\, : \ \ \norm{u}{L^1(\Omega;\mathbb{R}^d)} \leq C \left( \norm{u}{L^1(\Gamma_\mathrm{Dir};\mathbb{R}^d)} + \norm{\sig u}{{\mathrm M}(\Omega;\bbM_\mathrm{sym}^{d\times d})}\right)\,. \end{equation}
\subsection{Setup} \label{ss:5.1} \par Let us now detail the basic assumptions on the data of the perfectly plastic system. We postpone to the end of Section \ref{ss:5.2} a series of comments on the outcome of the conditions given below, as well as on the possibility of weakening some of them.
\paragraph{{\em The reference configuration}.} For technical reasons related to the definition of the stress-strain duality, cf.\ Remark \ref{rmk:stress-strain-dual} later on, in addition to conditions \eqref{Omega-s2} required in Sec.\ \ref{ss:2.1}, we will suppose from now on that \begin{equation} \label{bdriesC2} \tag{5.$\Omega$} \partial\Omega \text{ and } \partial\Gamma \text{ are of class } \rmC^2\,. \end{equation} The latter requirement means that for every $x \in \partial\Gamma$ there exists a $ \rmC^2$-diffeomorphism defined in an open neighborhood of $x$ that maps $\partial\Omega$ into a $(d{-}1)$-dimensional plane, and $\partial\Gamma$ into a $(d{-}2)$-dimensional plane. \paragraph{{\em Kinematic admissibility and stress}.} Given a function $w\in H^1(\Omega;\mathbb{R}^d)$, we say that a triple $(u,e,p)$ is
\emph{kinematically admissible with boundary datum $w$ for the perfectly plastic system} (\emph{kinematically admissible}, for short), and write $(u,e,p) \in \mathcal{A}_{\mathrm{BD}}(w)$, if \begin{subequations} \label{kin-adm-BD} \begin{align} & \label{kin-adm-BD-a} u \in \mathrm{BD}(\Omega;\mathbb{R}^d), \quad e \in L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \quad p \in {\mathrm M}(\Omega {\cup} \Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}), \\ & \label{kin-adm-BD-b} \sig u = e+p, \\ \label{kin-adm-BD-c} & p = (w -u) \odot \nu \mathscr{H}^{d-1} \quad \text{on } \Gamma_\mathrm{Dir}. \end{align} \end{subequations}
Observe that
\eqref{kin-adm-BD-a} reflects the fact that the plastic strain is now a measure that can concentrate on Lebesgue-negligible sets. Furthermore, \eqref{kin-adm-BD-c} relaxes the Dirichlet condition $w=u$ on $\Gamma_\mathrm{Dir}$ imposed by the kinematic admissibility condition \eqref{kin-adm} and represents a plastic slip (mathematically described by the singular part of the measure $p$) occurring on $\Gamma_\mathrm{Dir}$. It can be checked that $\mathcal{A}(w) \subset \mathcal{A}_{\mathrm{BD}}(w)$. In the proof of Theorem \ref{mainth:3} we will make use of the following closedness property, proved in \cite[Lemma 2.1]{DMDSMo06QEPL}. \begin{lemma} \label{l:closure-kin-adm} Assume \eqref{Omega-s2} and \eqref{bdriesC2}. Let $(w_k)_k\subset H^1(\Omega;\mathbb{R}^d)$ and $(u_k,e_k,p_k )\in \mathcal{A}_{\mathrm{BD}}(w_k)$ for every $k\in \mathbb{N}$. Assume that \begin{equation} \label{ptw-convs} \begin{aligned} & w_k\rightharpoonup w_\infty \text{ in } H^1(\Omega;\mathbb{R}^d), \qquad u_k\weaksto u_\infty \text{ in } \mathrm{BD}(\Omega;\mathbb{R}^d), \\ & e_k\rightharpoonup e_\infty \text{ in } L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \qquad p_k \weaksto p_\infty \text{ in } \mathrm{M}(\Omega \cup \Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}). \end{aligned} \end{equation} Then, $(u_\infty, e_\infty, p_\infty) \in \mathcal{A}_{\mathrm{BD}}(w_\infty)$. \end{lemma} \par In the perfectly plastic system, the stress is given by $\sigma = \mathbb{C} e$, cf.\ \eqref{stress-RIP}. Following \cite{DMDSMo06QEPL, Sol09, FraGia2012, Sol14}, in addition to \eqref{elast-visc-tensors}, we suppose that for almost all $x\in \Omega$ the elastic tensor $\mathbb{C}(x) \in \mathrm{Lin}(\bbM_\mathrm{sym}^{d\times d})$ maps the orthogonal spaces $\bbM_\mathrm{D}^{d\times d}$ and $\mathbb{R} I$ into themselves. Namely, there exist functions
\begin{equation} \label{bbC-s:5} \tag{5.$\mathrm{T}$} \mathbb{C}_\mathrm{D} \in L^\infty (\Omega; \mathrm{Lin}(\bbM_\mathrm{sym}^{d\times d}) ) \text{ and } \eta \in L^\infty(\Omega;\mathbb{R}^+) \text{ s.t. } \forall\, A \in \bbM_\mathrm{sym}^{d\times d} \ \ \mathbb{C}(x)A = \mathbb{C}_\mathrm{D}(x) A_\mathrm{D} + \eta(x) \mathrm{tr}(A) I, \end{equation} with $I $ the identity matrix. \par
\paragraph{{\em Body force, traction, and Dirichlet loading}.} Following in the footsteps of \cite{DMDSMo06QEPL}, we enhance our conditions on $F$ and $g$ (cf.\ \eqref{data-displ}), by requiring that \begin{equation} \label{data-displ-s5}
\tag{5.$\mathrm{L}_1$}
F \in \mathrm{AC} ([0,T]; L^d(\Omega;\mathbb{R}^d)),\qquad g \in \mathrm{AC} ([0,T]; L^\infty (\Gamma_\mathrm{Neu};\mathbb{R}^d)). \end{equation} Therefore, for every $t\in [0,T]$ the element $\calL(t)$ defined by \eqref{total-load} belongs to $\mathrm{BD}(\Omega;\mathbb{R}^d)^*$, and moreover (see \cite[Rmk.\ 4.1]{DMDSMo06QEPL}) $\dot{\calL}(t)$ exists in $ \mathrm{BD}(\Omega;\mathbb{R}^d)^*$
for almost all $t\in (0,T)$.
Furthermore, we shall strengthen the safe load condition from \eqref{safe-load} to
\begin{equation}
\label{safe-load-s5}
\varrho \in W^{1,1}(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})) \quad \text{ and } \varrho_\mathrm{D} \equiv 0,
\end{equation}
cf.\ Remark \ref{rmk:honest} below.
\begin{remark}
\label{rmk:honest}
\upshape
The second condition in \eqref{safe-load-s5} is required only for consistency with
the upcoming conditions \eqref{safe-load-eps} on the stresses $(\varrho_\varepsilon)_\varepsilon$, associated via the safe load condition with the forces $(F_\varepsilon)_\varepsilon$ and $(g_\varepsilon)_\varepsilon$ for the thermoviscoplastic systems approximating the perfectly plastic one. The feasibility of \eqref{safe-load-s5} is, however, a completely open issue. Hence, we might as well have confined the discussion to the case in which the body force $F$ and the assigned traction $g$ are null. We have chosen not to do so because
\eqref{safe-load-s5} is the natural counterpart to \eqref{safe-load-eps}.
\end{remark}
Further, we consider the body to be subject to a hard device $w$ on the Dirichlet boundary $\Gamma_\mathrm{Dir}$, for which we suppose \begin{equation} \label{Dir-load-Sec5} \tag{5.$\mathrm{W}$} w \in \mathrm{AC}([0,T]; H^1(\Omega;\mathbb{R}^d)), \end{equation} which is a weaker requirement than \eqref{Dirichlet-loading}. \paragraph{{\em The plastic dissipation.}} Since the plastic strain (and, accordingly, the plastic strain rate) is a measure on $\Omega \cup \Gamma_\mathrm{Dir}$, from now on we will suppose that the multifunction $K$ is defined on $\Omega \cup \Gamma_\mathrm{Dir}$. Furthermore, following \cite{Sol09}, we will require that \begin{equation} \label{ass-K1-sec-5} \tag{5.$\mathrm{K}_1$} K : \Omega \cup \Gamma_\mathrm{Dir} \rightrightarrows \bbM_\mathrm{D}^{d\times d} \text{ is continuous } \end{equation} in the sense specified by \eqref{multifuncts-props} and that \begin{equation} \label{ass-K2-sec-5} \tag{5.$\mathrm{K}_2$} \begin{aligned} &
K(x) \ \text{ is a convex and compact subset of $\bbM_\mathrm{D}^{d\times d}$ for all } x \in \Omega \cup \Gamma_\mathrm{Dir} \text{ and} \\ & \exists\, 0<c_r<C_R \ \ \ \forall\, x \in \Omega \cup \Gamma_\mathrm{Dir}\, : \ \ \ B_{c_r}(0) \subset K(x) \subset B_{C_R}(0).
\end{aligned} \end{equation} In order to state the stress constraint $\sigma_\mathrm{D} \in K$ a.e.\ in $\Omega$
in a more compact form, we also introduce the set \begin{equation} \label{realization-K} \calK(\Omega): = \{ \zeta \in L^2(\Omega; \bbM_\mathrm{D}^{d\times d})\, : \zeta(x) \in K(x) \ \text{for a.a. } x \in \Omega\}. \end{equation} We will denote by $\mathrm{P}_{\mathcal{K}(\Omega)} : L^2(\Omega; \bbM_\mathrm{D}^{d\times d}) \to L^2(\Omega; \bbM_\mathrm{D}^{d\times d})$ the projection operator onto the closed convex set $\mathcal{K}(\Omega)$ induced by
the projection operators onto the sets $K(x)$, namely, for a given $\sigma \in L^2(\Omega; \bbM_\mathrm{D}^{d\times d})$, \begin{equation} \label{proj-K-Omega} \xi = \mathrm{P}_{\mathcal{K}(\Omega)}(\sigma) \qquad \text{if and only if} \qquad \xi(x) = \mathrm{P}_{K(x)}(\sigma(x)) \ \text{for a.a. } x \in \Omega. \end{equation} \par We introduce the support function $ \mathrm{R}: (\Omega \cup \Gamma_\mathrm{Dir}) \times \bbM_\mathrm{D}^{d\times d} \to [0,+\infty) $ associated with the multifunction $K$. In order to define the related dissipation potential, we have to resort to the theory of convex functions of measures \cite{Goffman-Serrin64,Reshetnyak68}, since the tensor $p$ and its rate $\dot p$ are now Radon measures on $\Omega \cup \Gamma_\mathrm{Dir}$. Therefore, with every $\dot{p} \in \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d})$ (for convenience, we will keep the notation $\dot p$ for the independent variable in the plastic dissipation potential) we associate the nonnegative Radon measure $\overline{\mathrm{R}}(\dot{p})$ defined by \begin{equation} \label{Radon-meas-R}
\overline{\mathrm{R}}(\dot{p})(B): = \int_B \mathrm{R} \left(x, \frac{\dot p}{|\dot p|} (x) \right) \;\!\mathrm{d} |\dot p|(x) \qquad \text{for every Borel set } B \subset \Omega{\cup} \Gamma_\mathrm{Dir}, \end{equation}
with ${\dot p}/|\dot p|$ the Radon--Nikod\'ym derivative of the measure $\dot p$ w.r.t.\ its total variation $|\dot p|$. We then consider the \emph{plastic dissipation potential} \begin{equation} \label{pl-diss-pot} \calR: \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d}) \to [0,+\infty) \text{ defined by } \calR(\dot p): = \overline{\mathrm{R}}(\dot{p})(\Omega{\cup} \Gamma_\mathrm{Dir}). \end{equation} Observe that the definition of the functional $\calR$ on $\mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d}) $ is consistent with that given on $ L^1(\Omega;\bbM_\mathrm{sym}^{d\times d})$ in \eqref{plastic-dissipation-functional} (in the case in which the yield surface $K$ does not depend on $\vartheta$), namely
\begin{equation} \label{consistency-measure-L1} \calR(\dot p) = \int_\Omega \mathrm{R}(x,\dot p(x)) \;\!\mathrm{d} x \qquad \text{ if } \dot p \in L^1(\Omega;\bbM_\mathrm{sym}^{d\times d}). \end{equation} This justifies the abuse of notation for $\mathcal{R}$. \par It follows from the lower semicontinuity of the multifunction $x \rightrightarrows K(x)$ that its support function $ \mathrm{R} $ is lower semicontinuous on $(\Omega \cup \Gamma_\mathrm{Dir}) \times \bbM_\mathrm{D}^{d\times d}$.
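For later use, we also record the explicit expression of $\mathrm{R}$ as a support function, together with the two-sided bound that it inherits from \eqref{ass-K2-sec-5}:
\[
\mathrm{R}(x,\xi) = \sup_{\sigma \in K(x)} \sigma {:} \xi, \qquad c_r |\xi| \leq \mathrm{R}(x,\xi) \leq C_R |\xi| \qquad \text{for all } (x,\xi) \in (\Omega {\cup} \Gamma_\mathrm{Dir}) \times \bbM_\mathrm{D}^{d\times d}.
\]
Since $\left| \tfrac{\dot p}{|\dot p|}\right| = 1$ $|\dot p|$-a.e., the latter bound combined with \eqref{Radon-meas-R} and \eqref{pl-diss-pot} yields
\[
c_r\, |\dot p|(\Omega {\cup} \Gamma_\mathrm{Dir}) \leq \calR(\dot p) \leq C_R\, |\dot p|(\Omega {\cup} \Gamma_\mathrm{Dir}) \qquad \text{for every } \dot p \in \mathrm{M}(\Omega {\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{D}^{d\times d}),
\]
i.e.\ $\calR$ is equivalent to the total variation.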
Since $\mathrm{R}(x,\cdot)$ is also convex and $1$-homogeneous, Reshetnyak's Theorem (cf., e.g., \cite[Thm.\ 2.38]{AmFuPa05FBVF}) applies to ensure that the functional $\calR$ from \eqref{pl-diss-pot} is lower semicontinuous w.r.t.\ the weak$^*$-topology on $\mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d}) $. Accordingly, the induced total variation functional, defined for every function $p : [0,T] \to \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d})$ by \begin{equation} \mathrm{Var}_{\calR}(p; [a,b]) : = \sup\left \{ \sum_{i=1}^N \calR (p(t_i) - p(t_{i-1})) \, : \ a= t_0<t_1< \ldots< t_{N-1} < t_N = b \right\} \end{equation} for $ [a,b] \subset [0,T]$, is lower semicontinuous w.r.t.\ the pointwise (in time) convergence of $p$ in the weak$^*$ topology of $\mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d})$. \begin{remark} \upshape \label{rmk:added-sol} The dependence of the constraint set on a (spatially and/or temporally) discontinuous additional variable poses considerable difficulties in
handling the plastic dissipation potential, in that Reshetnyak's Theorem is no longer applicable. To bypass this, in the non-associative plasticity models considered in, e.g., \cite{BabFraMor12,DalDesSol11}, this additional variable has been mollified; very recently, in \cite{CO17} a Reshetnyak-type lower semicontinuity result has been obtained in the case in which the plastic dissipation potential also depends on a damage variable $z\in \mathrm{B}([0,T];W^{1,d}(\Omega))$ (with $d$ the space dimension of the problem). \par Here we prefer to avoid mollifying the temperature variable, and the result from \cite{CO17} does not apply due to the poor regularity of $\vartheta$. That is why we have dropped the dependence of $K$ on $\vartheta$. \end{remark} \par Finally, we recall \cite[Thm.\ 7.1]{DMDSMo06QEPL}, stating that for every $p \in \mathrm{AC}([0,T]; \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d})) $, there exists \[ \dot{p}(t): = \mathrm{weak}^*-\lim_{h \to 0} \frac{p(t+h) - p(t)}{h} \qquad \text{for a.a. } t \in (0,T), \] where the limit is w.r.t.\ the weak$^*$-topology of $\mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir}; \bbM_\mathrm{sym}^{d\times d})$. Moreover, there holds \begin{equation} \label{consistency-dotp} \mathrm{Var}_{\calR}(p; [a,b]) = \int_a^b \calR(\dot p(t)) \;\!\mathrm{d} t \qquad \text{for all } [a,b] \subset [0,T], \end{equation} cf.\ \cite[Thm.\ 7.1]{DMDSMo06QEPL} and \cite[Thm.\ 3.6]{Sol09}. \paragraph{{\em Cauchy data}.} We will supplement the perfectly plastic system with initial data \begin{subequations} \label{Cauchy-data-s5} \begin{align} & \label{initial-u-s5} u_0 \in \mathrm{BD}(\Omega;\mathbb{R}^d), \\ & \label{initial-p-s5} e_0 \in L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \quad p_0 \in \mathrm{M}(\Omega{\cup}\Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}) \quad \text{such that } (u_0, e_0, p_0 ) \in \mathcal{A}_{\mathrm{BD}}(w(0))\,. \end{align} \end{subequations}
\subsection{Energetic solutions to the perfectly plastic system} \label{ss:5.2} Throughout this section we will tacitly suppose the validity of conditions \eqref{bdriesC2}, \eqref{bbC-s:5}, \eqref{data-displ-s5}, \eqref{Dir-load-Sec5}, and
\eqref{ass-K1-sec-5}--\eqref{ass-K2-sec-5}. We are now in a position to give the notion of energetic solution (or \emph{quasistatic evolution}) for the perfectly plastic system (in the isothermal case).
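Recall that, throughout, $\calQ$ denotes the quadratic elastic energy; consistently with the elastic contribution $\frac12 \int_\Omega \mathbb{C} e_\varepsilon(t) {:} e_\varepsilon(t) \;\!\mathrm{d} x$ featuring in the total energy balance \eqref{total-enbal-rescal} ahead, it is given by
\[
\calQ(e) := \frac12 \int_\Omega \mathbb{C} e : e \;\!\mathrm{d} x \qquad \text{for } e \in L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}).
\]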
\begin{definition}[Global energetic solutions to the perfectly plastic system] \label{def:en-sols-PP} Given initial data $(u_0,e_0,p_0)$ fulfilling \eqref{Cauchy-data-s5}, we call a triple $(u,e,p)$ a \emph{global energetic solution} to the Cauchy problem for system \eqref{RIP-PDE}, with boundary datum $w$ on $\Gamma_\mathrm{Dir}$, if \begin{subequations} \label{reg-eneerg-sols} \begin{align} & \label{reg-u-RIP} u \in \mathrm{BV}([0,T]; \mathrm{BD}(\Omega;\mathbb{R}^d)), \\ & \label{reg-e-RIP} e \in \mathrm{BV}([0,T]; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})), \\ & \label{reg-p-RIP} p \in \mathrm{BV}([0,T]; \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d})), \end{align} \end{subequations} $(u,e,p)$ comply with the initial conditions \begin{equation} \label{Cauchy-RIP} u(0,x)= u_0(x), \qquad e(0,x)= e_0(x), \qquad p(0,x)= p_0(x) \qquad \text{for a.a. } x \in \Omega, \end{equation} and with the following conditions \emph{for every} $t \in [0,T]$: \begin{itemize} \item[-] \emph{kinematic admissibility}: $(u(t), e(t), p(t)) \in \mathcal{A}_\mathrm{BD}(w(t))$; \item[-]
\emph{global stability}: \begin{equation} \label{glob-stab-5} \tag{$\mathrm{S}$} \calQ(e(t)) - \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(t)}{u(t)} \leq \calQ(\tilde e) + \calR(\tilde p - p(t)) - \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(t)}{\tilde u} \qquad \text{for all } (\tilde u, \tilde e, \tilde p) \in \mathcal{A}_\mathrm{BD}(w(t)) \end{equation}
(recall definition \eqref{total-load} of the total loading function $\calL$); \item[-]
\emph{energy balance}: \begin{equation} \label{enbal-5} \tag{$\mathrm{E}$} \begin{aligned} \calQ(e(t)) + \mathrm{Var}_\calR (p; [0,t]) & = \calQ(e_0) + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s - \int_0^t \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL}{\dot w} \;\!\mathrm{d} s \\ & \quad + \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(t)}{u(t)} - \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(0)}{u_0} - \int_0^t \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\dot{\calL}}{u} \;\!\mathrm{d} s
\end{aligned} \end{equation} with the stress $\sigma$ given by $\sigma(t) = \mathbb{C} e(t)$ for every $t \in [0,T]$. \end{itemize} \end{definition} \begin{remark} \label{rmk:save-the-day} \upshape It follows from \cite[Thm.\ 4.4]{DMDSMo06QEPL} (cf.\ also \cite[Rmk.\ 5]{DMSca14QEPP}), that \eqref{enbal-5} is equivalent to the condition that \begin{equation} \label{alternat-enbal-RIS} \begin{aligned} & p \in \mathrm{BV}([0,T]; \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d})), \\ & \begin{aligned} & \calQ(e(t)) + \mathrm{Var}_\calR (p; [0,t]) - \int_\Omega \varrho(t) : (e(t) {-} \sig{w(t)} ) \;\!\mathrm{d} x \\
& = \calQ(e_0) - \int_\Omega \varrho(0) : (e_0 {-} \sig{w(0)} ) \;\!\mathrm{d} x + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s -\int_0^t\int_\Omega \dot{\varrho}: (e{-} \sig{w}) \;\!\mathrm{d} x \;\!\mathrm{d} s \end{aligned} \end{aligned} \end{equation} for every $t\in [0,T]$. \end{remark} \noindent In the above definition, for consistency with the standard concept of energetic solution, we have required only $\mathrm{BV}$-time regularity for the functions $(e,p)$ (and, accordingly, for $u$). On the other hand, an important feature of perfect plasticity is that, due to its \emph{convex character}, the maps $t\mapsto u(t), \, t \mapsto e(t), \, t \mapsto p(t)$ turn out to be \emph{absolutely continuous} on $[0,T]$. In fact,
the following result ensures that, if a triple $(u,e,p)$ complies with \eqref{glob-stab-5} and \eqref{enbal-5} \emph{at almost all} $t \in (0,T)$, then it satisfies said conditions \emph{for every} $t\in [0,T]$, and in addition the maps $t\mapsto u(t), \, t \mapsto e(t), \, t \mapsto p(t)$ are absolutely continuous. The proof of Thm.\ \ref{th:dm-scala}
below follows from that for
\cite[Thm.\ 5]{DMSca14QEPP}, since the argument carries over to the present case of a spatially-dependent dissipation metric $\mathrm{R}$. \begin{theorem} \label{th:dm-scala} Let $S \subset [0,T]$ be a set of full measure containing $0$. Let $(u,e,p) : S \to \mathrm{BD}(\Omega;\mathbb{R}^d) \times L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})\times \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d})$ be measurable and bounded functions satisfying the Cauchy conditions \eqref{Cauchy-RIP} with a triple $(u_0, e_0, p_0) $ as in \eqref{Cauchy-data-s5} and fulfilling the stability condition \eqref{glob-stab-5} at time $t=0$, as well as the kinematic admissibility, the global stability condition, and the energy balance \emph{for every} $t \in S$. Suppose in addition that $p\in \mathrm{BV}([0,T]; \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d})).$ \par Then, the pair $(u,e)$ extends to an absolutely continuous function $(u,e) \in \mathrm{AC} ([0,T]; \mathrm{BD}(\Omega;\mathbb{R}^d) \times L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$. Moreover, $p \in \mathrm{AC}([0,T]; \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d}))$ and the triple $(u,e,p)$ is a global energetic solution to the perfectly plastic system in the sense of Definition \ref{def:en-sols-PP}. \end{theorem} In the proof of the forthcoming Theorem \ref{mainth:3}, we will also make use of the following result, first obtained in \cite[Thm.\ 3.6]{DMDSMo06QEPL} in the homogeneous case, and extended to a spatially-dependent yield surface in \cite[Thm.\ 3.10]{Sol09}. \begin{lemma} \label{l:for-stability} Let $S \subset [0,T]$ be a set of full measure containing $0$. Let $(u,e,p) : S \to \mathrm{BD}(\Omega;\mathbb{R}^d) \times L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})\times \mathrm{M}(\Omega; \bbM_\mathrm{D}^{d\times d})$ fulfill the kinematic admissibility at $t \in S$. Then, the following conditions are equivalent
\begin{compactitem} \item[-] $(u(t), e(t),p(t))$ comply with the global stability condition \eqref{glob-stab-5} at $t$; \item[-] the stress $\sigma(t) = \mathbb{C} e(t)$ satisfies $\sigma(t) \in \calK(\Omega)$ and the boundary value problem
\begin{equation} \label{BVprob-stress}
\begin{cases}
-\mathrm{div}_{\mathrm{Dir}}(\sigma(t)) = F(t) & \text{in } \Omega,
\\
\sigma(t) \nu = g(t) & \text{on } \Gamma_\mathrm{Neu},
\end{cases}
\end{equation}
with the operator $-\mathrm{div}_{\mathrm{Dir}}$ from \eqref{div-Gdir}.
\end{compactitem} \end{lemma} \begin{remark} \label{rmk:stress-strain-dual} \upshape In the proof of Thm.\ \ref{mainth:3}, Lemma \ref{l:for-stability} will play a pivotal role in the argument for the global stability condition \eqref{glob-stab-5}. In the spatially homogeneous case addressed in \cite{DMDSMo06QEPL}, the proof of the analogue of Lemma \ref{l:for-stability} relies on a careful definition of the duality between the (deviatoric part of the) stress $\sigma$, which is typically not continuous as a function of the space variable, and the strain $\sig u$, as well as the plastic strain $p$, which in turn are just measures. In particular, the regularity conditions \eqref{bdriesC2} on $\partial\Omega$ and $\partial\Gamma$ entail the validity of an integration-by-parts formula (cf.\ \cite[Prop.\ 2.2]{DMDSMo06QEPL}), which is at the core of the proof of \cite[Thm.\ 3.6]{DMDSMo06QEPL}. Another crucial point is the validity of the inequality (between measures) \begin{equation} \label{Radon-meas-ineqs} \overline{\mathrm{R}}(\dot{p}) \geq [\sigma_\mathrm{D}{:} \dot{p}], \end{equation} where $\overline{\mathrm{R}}(\dot{p})$ is the Radon measure defined by \eqref{Radon-meas-R}, and the measure $ [\sigma_\mathrm{D}{:} \dot{p}]$ (we refer to \cite[Sec.
2]{DMDSMo06QEPL} for its definition), `surrogates' the duality between $\sigma_\mathrm{D}$ and $\dot{p}$.
\par
In \cite{Sol09} it was shown that, if $K: \Omega{\cup}\Gamma_\mathrm{Dir} \rightrightarrows \bbM_\mathrm{D}^{d\times d}$ is continuous, then \eqref{Radon-meas-ineqs} holds also in the spatially heterogeneous case; based on this, Lemma \ref{l:for-stability} was derived (cf.\ \cite[Thm.\ 3.10]{Sol09}).
\par
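In this connection, let us point out that, at least for plastic strain rates $\dot p \in L^2(\Omega;\bbM_\mathrm{D}^{d\times d})$, the measure $[\sigma_\mathrm{D}{:}\dot p]$ coincides with $\sigma_\mathrm{D}{:} \dot p \;\!\mathrm{d} x$, and \eqref{Radon-meas-ineqs} reduces to the elementary pointwise bound
\[
\sigma_\mathrm{D}(x) {:} \dot p(x) \leq \sup_{\sigma \in K(x)} \sigma {:} \dot p(x) = \mathrm{R}(x, \dot p(x)) \qquad \text{for a.a. } x \in \Omega,
\]
valid as soon as $\sigma_\mathrm{D}(x) \in K(x)$ for almost all $x \in \Omega$; the duality theory of \cite{DMDSMo06QEPL, Sol09} serves precisely to extend this bound to measure-valued plastic strain rates.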
However, it was observed in \cite{FraGia2012} that the continuity of $K$ is quite a restrictive condition for applications to heterogeneous materials. The authors carried out the analysis under a much weaker, and mechanically more feasible, set of conditions on the multifunction $K$ by adopting a slightly different approach to the proof of existence of `quasistatic evolutions'. In particular, their argument for obtaining the stability condition \eqref{glob-stab-5} did not rely on Lemma \ref{l:for-stability}; rather, it was based on the construction of a suitable recovery sequence in the time discrete-to-continuous limit passage. Unfortunately, it seems to us that, for the asymptotic analysis developed in the upcoming Section \ref{s:6}, this argument could not be exploited to recover \eqref{glob-stab-5} in the vanishing-viscosity and inertia limit. That is why we stick to the continuity requirement \eqref{ass-K1-sec-5} on $K$.
\end{remark}
\section{From the thermoviscoplastic to the perfectly plastic system} \label{s:6}
In this section we address the limiting behavior of \emph{weak energy solutions} to the thermoviscoplastic system (\ref{plast-PDE}, \ref{bc}) as the external loads $F,\, g, \, w$ and the heat sources $H,\, h$ vary more and more slowly. Accordingly, we will rescale time by a factor $\varepsilon>0$. Before detailing our conditions on the data $F,\, g,\, w,\, H$, and $h$ for performing the vanishing-viscosity analysis of system (\ref{plast-PDE}, \ref{bc}), let us specify that, already at the ``viscous'' level, we will confine the discussion to the case in which
\[
\text{ the elastic domain $K$ does not depend on $\vartheta$ but only on $x \in \Omega$, and fulfills \eqref{ass-K1-sec-5}--\eqref{ass-K2-sec-5},}
\]
cf.\ Remark \ref{rmk:added-sol}.
Moreover, we will suppose that the
thermal expansion tensor
also depends on $\varepsilon$, i.e.\
$\mathbb{E} = \mathbb{E}_\varepsilon$, with the scaling given by \eqref{scaling-intro}.
Hence, the tensors $\mathbb{B}_\varepsilon: = \mathbb{C} \mathbb{E}_\varepsilon$ have the form
\begin{equation}
\label{scalingbbB}
\mathbb{B}_\varepsilon = \varepsilon^{\beta} \mathbb{B} \qquad \text{with } \beta >\frac12 \text{ and } \mathbb{B} \in L^\infty (\Omega; \mathbb{R}^{d\times d}).
\end{equation}
We postpone to Remark \ref{rmk:conds-forces}
ahead
some comments on the role of condition \eqref{scalingbbB}. \par Let $(H_\varepsilon, h_\varepsilon,F_\varepsilon, g_\varepsilon, w_\varepsilon)_\varepsilon$ be a family of data for system (\ref{plast-PDE}, \ref{bc}), and let us rescale them by the factor $\varepsilon>0$, thus introducing \[ H^\varepsilon(t) : = H_\varepsilon \left( \varepsilon t \right), \quad
h^\varepsilon(t) : = h_\varepsilon \left( \varepsilon t \right), \quad F^\varepsilon(t) : = F_\varepsilon \left( \varepsilon t \right), \quad g^\varepsilon(t) : = g_\varepsilon \left( \varepsilon t \right),\quad
w^\varepsilon(t) : = w_\varepsilon \left( \varepsilon t \right) \qquad t \in \left[0,\frac T\varepsilon \right]. \] Correspondingly, we denote by $(\vartheta^\varepsilon, u^\varepsilon, e^\varepsilon, p^\varepsilon)$ a weak energy solution to the thermoviscoplastic system, with the tensor $ \mathbb{B}_\varepsilon $ from \eqref{scalingbbB}, starting from initial data $(\vartheta^0_\varepsilon, u^0_\varepsilon, e^0_\varepsilon, p^0_\varepsilon)$: under the conditions on the functions $(H_\varepsilon, h_\varepsilon,F_\varepsilon, g_\varepsilon, w_\varepsilon)_\varepsilon$ and the data $(\vartheta^0_\varepsilon, u^0_\varepsilon, e^0_\varepsilon, p^0_\varepsilon)_\varepsilon$ stated in Sec.\ \ref{ss:2.1}, the existence of $(\vartheta^\varepsilon, u^\varepsilon, e^\varepsilon, p^\varepsilon)$ is ensured by Thm.\ \ref{mainth:2}. We further rescale the functions $(\vartheta^\varepsilon, u^\varepsilon, e^\varepsilon, p^\varepsilon)$ in such a way as to have them defined on the interval $[0,T]$, by setting \[ \vartheta_\varepsilon(t): = \vartheta^\varepsilon \left( \frac t \varepsilon\right), \quad u_\varepsilon(t): = u^\varepsilon \left( \frac t \varepsilon\right), \quad e_\varepsilon(t): = e^\varepsilon \left( \frac t \varepsilon\right), \quad p_\varepsilon(t): = p^\varepsilon \left( \frac t \varepsilon\right), \qquad t \in [0,T]. \] \par For later reference, here we state the defining properties of weak energy solutions in terms of the rescaled quadruple $(\vartheta_\varepsilon, u_\varepsilon, e_\varepsilon, p_\varepsilon)$, taking into account the improved formulation of the heat equation provided in Theorem \ref{mainth:2}. In addition to the kinematic admissibility $\sig{u_\varepsilon}= e_\varepsilon + p_\varepsilon$, we have: \begin{subequations} \label{weak-form-eps} \begin{compactitem} \item[-] strict positivity: $\vartheta_\varepsilon \geq \bar\vartheta>0$ a.e.\ in $\Omega$, with $\bar\vartheta$ given by \eqref{strong-strict-pos}; \item[-] weak formulation of the heat equation, for almost all $t \in (0,T)$ and all test functions $\varphi \in W^{1,1+1/\tilde{\delta}}(\Omega)$, with $\tilde\delta>0$ such that \eqref{further-k-teta} holds: \begin{equation} \label{eq-teta-eps} \begin{aligned}
&\varepsilon \pairing{}{W^{1,1+1/\tilde{\delta}}(\Omega)}{\dot{\vartheta}_\varepsilon(t)}{\varphi} + \int_\Omega \kappa(\vartheta_\varepsilon(t)) \nabla \vartheta_\varepsilon(t)\nabla\varphi \;\!\mathrm{d} x \\ & = \int_\Omega \left(H_\varepsilon(t)+ \varepsilon\mathrm{R}(x,\dot p_\varepsilon (t)) + \varepsilon^2 \dot{p}_\varepsilon(t) : \dot{p}_\varepsilon(t) + \varepsilon^2\mathbb{D} \dot{e}_\varepsilon (t) :\dot{e}_\varepsilon (t) -\varepsilon \vartheta_\varepsilon (t) \mathbb{B}_\varepsilon : \dot{e}_\varepsilon (t) \right) \varphi \;\!\mathrm{d} x \\ & \quad + \int_{\partial\Omega} h_\varepsilon (t) \varphi \;\!\mathrm{d} S;
\end{aligned} \end{equation} \item[-] weak momentum balance, for almost all $t \in (0,T)$ and all test functions $v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)$: \begin{equation} \label{w-momentum-balance-eps} \begin{aligned} \rho \varepsilon^2 \int_\Omega \ddot{u}_\varepsilon(t) v \;\!\mathrm{d} x + \int_\Omega \left(\mathbb{D} \varepsilon \dot{e}_\varepsilon(t) + \mathbb{C} e_\varepsilon(t) - \vartheta_\varepsilon(t) \mathbb{B}_\varepsilon \right): \sig v \;\!\mathrm{d} x & = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\calL_\varepsilon(t)}{v} \end{aligned} \end{equation} with $\calL_\varepsilon$ defined from $F_\varepsilon$ and $g_\varepsilon$ as in \eqref{total-load}; \item[-] the plastic flow rule \eqref{flow-rule-rescal}, rewritten (cf.\ \eqref{flow-rule-rewritten}) for later use as \begin{equation} \label{pl-flow-rule-eps} (\sigma_\eps)_\dev -\varepsilon {\dot p}_\varepsilon = \mathrm{P}_{\mathcal{K}(\Omega)}((\sigma_{\eps})_\dev) \qquad \text{a.e.\ in }\, Q, \end{equation} with $\sigma_\eps = \varepsilon\mathbb{D} \dot{e}_\varepsilon + \mathbb{C} e_\varepsilon - \vartheta_\varepsilon \mathbb{B}_\varepsilon$ and the projection operator $ \mathrm{P}_{\mathcal{K}(\Omega)} $ from \eqref{proj-K-Omega}.
\end{compactitem} We also record the \begin{compactitem} \item[-] (rescaled) mechanical energy balance
\begin{equation} \label{mech-enbal-rescal} \begin{aligned} &
\frac{\rho \varepsilon^2}2 \int_\Omega |\dot{u}_\varepsilon(t)|^2 \;\!\mathrm{d} x + \varepsilon\int_0^t\int_\Omega \mathbb{D} \dot{e}_\varepsilon: \dot{ e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r
+ \frac{\varepsilon}2 \int_0^t\int_\Omega |\dot{p}_\varepsilon|^2 \;\!\mathrm{d} x \;\!\mathrm{d} r + \frac1{2\varepsilon} \int_0^t d^2((\sigma_{\eps})_\dev, \calK(\Omega)) \;\!\mathrm{d} r \\ & \quad + \int_0^t \calR( \dot{p}_\varepsilon) \;\!\mathrm{d} r
+ \calQ(e_\varepsilon(t)) \\
& = \frac{\rho \varepsilon^2}2 \int_\Omega |\dot{u}_\varepsilon^0|^2 \;\!\mathrm{d} x + \calQ(e_\varepsilon^0) + \int_0^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}_\varepsilon}{\dot{u}_\varepsilon{-} \dot{ w}_\varepsilon} \;\!\mathrm{d} r + \int_0^t \int_\Omega \vartheta_\varepsilon \mathbb{B}_\varepsilon : \dot{ e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & \quad +\rho\varepsilon^2 \left( \int_\Omega \dot{u}_\varepsilon(t) \dot{w}_\varepsilon(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}_\varepsilon^0 \dot{w}_\varepsilon(0) \;\!\mathrm{d} x - \int_0^t \int_\Omega \dot{u}_\varepsilon\ddot{w}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r \right) + \int_0^t \int_\Omega \sigma_\eps : \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r
\end{aligned} \end{equation} for every $t \in [0,T]$, where we have used \eqref{pl-flow-rule-eps}, yielding that \[
\varepsilon | \dot{p}_\varepsilon|^2 = \frac{\varepsilon}2 | \dot{p}_\varepsilon|^2 + \frac{1}{2\varepsilon}\dddn{ | (\sigma_\eps)_\dev {-} \mathrm{P}_{\mathcal{K}(\Omega)}((\sigma_{\eps})_\dev) |^2}{$= d^2((\sigma_{\eps})_\dev, \calK(\Omega)) $ } \qquad \text{a.e.\ in } \, Q, \] with $d(\cdot, \calK(\Omega))$ the distance function from the closed and convex set $\calK(\Omega)$. \end{compactitem} Finally, adding \eqref{mech-enbal-rescal} with \eqref{eq-teta-eps} tested by $\frac1{\varepsilon}$ we obtain the
\begin{equation} \label{total-enbal-rescal} \begin{aligned} &
\frac{\rho \varepsilon^2}2 \int_\Omega |\dot{u}_\varepsilon(t)|^2 \;\!\mathrm{d} x +\pairing{}{W^{1,\infty}}{\vartheta_\varepsilon(t)}{1} +\frac12 \int_\Omega \mathbb{C} e_\varepsilon(t){:} e_\varepsilon(t) \;\!\mathrm{d} x \\
& = \frac{\rho\varepsilon^2}2 \int_\Omega |\dot{u}_\varepsilon^0|^2 \;\!\mathrm{d} x +\mathcal{E}(\vartheta_\varepsilon^0, e_\varepsilon^0) + \int_0^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}_\varepsilon}{\dot {u}_\varepsilon{-} \dot{w}_\varepsilon} \;\!\mathrm{d} r +\int_0^t \int_\Omega\frac1\varepsilon H_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_0^t \int_{\partial\Omega} \frac1\varepsilon h_\varepsilon \;\!\mathrm{d} S \;\!\mathrm{d} r \\ & \quad +\rho\varepsilon^2 \left( \int_\Omega \dot{u}_\varepsilon(t) \dot{w}_\varepsilon(t) \;\!\mathrm{d} x - \int_\Omega \dot{u}_\varepsilon^0 \dot{w}_\varepsilon(0) \;\!\mathrm{d} x - \int_0^t \int_\Omega \dot{u}_\varepsilon\ddot{w}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r \right) + \int_0^t \int_\Omega \sigma_\eps : \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r \end{aligned} \end{equation} for every $t \in [0,T]$. \end{compactitem} \end{subequations} Indeed, as in the proof of Theorems \ref{mainth:1} and \ref{mainth:2}, \eqref{total-enbal-rescal} will be the starting point in the derivation of the priori estimates, \emph{uniform} w.r.t.\ the parameter $\varepsilon$, on the functions $(\vartheta_\varepsilon, u_\varepsilon, e_\varepsilon, p_\varepsilon)_\varepsilon$, under the following \paragraph{\emph{Hypotheses on the data $(H_\varepsilon, h_\varepsilon, F_\varepsilon, g_\varepsilon, w_\varepsilon)_\varepsilon$ and on the initial data $(\vartheta_\varepsilon^0, u_\varepsilon^0, \dot{u}_\varepsilon^0, e_\varepsilon^0, p_\varepsilon^0)_\varepsilon$.} } Since it will be necessary to start with \eqref{total-enbal-rescal} in this temperature-dependent setting, we shall have to assume that the families of data $(H_\varepsilon)_\varepsilon$ and $(h_\varepsilon)_\varepsilon$ converge to zero in the sense
that there exists a constant $\overline{C}>0$ such that for every $\varepsilon>0$
\[
\|H_\varepsilon\|_{L^1(0,T; L^1(\Omega))} \leq \overline{C}\varepsilon, \qquad \|h_\varepsilon\|_{L^1(0,T; L^1(\partial\Omega))} \leq \overline{C}\varepsilon.
\]
We will in fact need to strengthen this in order to pass to the limit, as $\varepsilon\downarrow 0$, in \eqref{total-enbal-rescal}, by assuming that there exist $\mathsf{H}\in L^1(0,T; L^1(\Omega)), \ \mathsf{h} \in L^1(0,T; L^1(\partial\Omega))$ such that \begin{subequations} \label{data-eps} \begin{equation} \label{heat-sources-eps}
\frac{1}{\varepsilon} H_\varepsilon \rightharpoonup \mathsf{H} \text{ in } L^1(0,T; L^1(\Omega)), \quad \frac{1}{\varepsilon} h_\varepsilon \rightharpoonup \mathsf{h} \text{ in } L^1(0,T; L^1(\partial\Omega)). \end{equation} As for the body and surface forces, for every $\varepsilon>0$ the functions $F_\varepsilon$ and $g_\varepsilon$ have to comply with \eqref{data-displ} and the safe-load condition \eqref{safe-load}, with associated stresses $\varrho_\varepsilon$. We impose that there exist $F$ and $g $ as in \eqref{data-displ-s5} to which $(F_\varepsilon)_\varepsilon$ and $(g_\varepsilon)_\varepsilon$ converge in topologies that we choose not to specify, and that the sequence $(\varrho_\varepsilon)_\varepsilon$ suitably converges to the stress $\varrho$ from \eqref{safe-load-s5} (hence, with $\varrho_\mathrm{D} \equiv 0$) associated with $F$ and $g$, namely \begin{equation} \label{safe-load-eps} \begin{aligned}
&
\varrho_\varepsilon \to \varrho \qquad \text{in } W^{1,1}(0,T; L^2(\Omega; \bbM_\mathrm{sym}^{d\times d})),
\\ &
\| (\varrho_\varepsilon )_\mathrm{D} \|_{L^1(0,T; L^\infty (\Omega; \bbM_\mathrm{sym}^{d\times d}))} \leq \overline{C}\varepsilon, \qquad
\| ({\varrho}_\varepsilon)_\mathrm{D} \|_{W^{1,1}(0,T; L^2 (\Omega; \bbM_\mathrm{sym}^{d\times d}))} \leq \overline{C}\varepsilon\,. \end{aligned} \end{equation} For later use, let us record here that, since $\pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\calL_\varepsilon(t)}{v} = \int_\Omega \varrho_\varepsilon(t) {:} \sig{v} \;\!\mathrm{d} x $ for every $v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)$ by the safe load condition, and analogously for $\dot{\calL}_\varepsilon$, it follows from \eqref{safe-load-eps} that \begin{equation} \label{total-load-eps-bound} \calL_\varepsilon \to \calL \quad \text{in } W^{1,1} (0,T;H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*), \end{equation} with $\calL$ the total load associated with $F$ and $g$.
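In fact, the verification of \eqref{total-load-eps-bound} reduces to an elementary duality estimate, which we spell out for the reader's convenience: for almost all $t \in (0,T)$,
\[
\| \calL_\varepsilon(t) - \calL(t)\|_{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)^*} = \sup_{\substack{v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d) \\ \| v \|_{H^1(\Omega;\mathbb{R}^d)} \leq 1}} \left| \int_\Omega \left( \varrho_\varepsilon(t) - \varrho(t)\right) {:} \sig{v} \;\!\mathrm{d} x \right| \leq \| \varrho_\varepsilon(t) - \varrho(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})},
\]
and the analogous estimate for $\dot{\calL}_\varepsilon - \dot{\calL}$, combined with the first of \eqref{safe-load-eps}, gives the $W^{1,1}$-convergence in \eqref{total-load-eps-bound}.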
\par Furthermore, we impose that the Dirichlet loadings $(w_\varepsilon)_\varepsilon \subset L^1(0,T; W^{1,\infty} (\Omega;\mathbb{R}^d)) \cap W^{2,1} (0,T;H^1(\Omega;\mathbb{R}^d)) \cap H^2(0,T; L^2(\Omega;\mathbb{R}^d)) $ (cf.\ \eqref{Dirichlet-loading}) fulfill
\begin{equation} \label{est-w-eps}
\varepsilon \| \dot{w}_\varepsilon \|_{L^\infty (0,T;H^1(\Omega;\mathbb{R}^d))} \leq \overline{C}, \quad \varepsilon \|\ddot{w}_\varepsilon \|_{L^1(0,T;H^1(\Omega;\mathbb{R}^d))} \leq \overline{C}, \quad \varepsilon^{1/2} \| \sig{\dot{w}_\varepsilon} \|_{L^1(0,T;L^\infty(\Omega;\bbM_\mathrm{sym}^{d\times d}))} \leq \overline{C},
\end{equation} and that there exists $w \in H^1(0,T; H^1(\Omega;\mathbb{R}^d))$ (cf.\ \eqref{Dir-load-Sec5}) such that \begin{equation} \label{conv-Dir-loads} w_\varepsilon \to w \quad \text{in } H^1(0,T; H^1(\Omega;\mathbb{R}^d)). \end{equation} Finally, for the Cauchy data $(\vartheta_\varepsilon^0, u_\varepsilon^0, \dot{u}_\varepsilon^0, e_\varepsilon^0, p_\varepsilon^0)_\varepsilon$ we impose
the convergences
\begin{equation} \label{convs-init-data} \begin{aligned} &
\varepsilon \| \dot{u}_\varepsilon^0\|_{L^2(\Omega;\mathbb{R}^d)} \to 0, \quad \vartheta_\varepsilon^0 \rightharpoonup \vartheta_0 \text{ in } L^1(\Omega), \\ & u_\varepsilon^0 \weaksto u_0 \text{ in } \mathrm{BD}(\Omega;\mathbb{R}^d), \qquad e_\varepsilon^0 \to e_0 \text{ in } L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}), \qquad p_\varepsilon^0 \weaksto p_0 \text{ in } \mathrm{M}(\Omega{\cup}\Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}). \end{aligned} \end{equation} Observe that, since $(u_\varepsilon^0, e_\varepsilon^0, p_\varepsilon^0) \in \mathcal{A}(w_\varepsilon(0)) \subset \mathcal{A}_{\mathrm{BD}}(w_\varepsilon(0))$, convergences \eqref{conv-Dir-loads} and \eqref{convs-init-data}, combined with Lemma \ref{l:closure-kin-adm}, ensure that the triple $(u_0,e_0,p_0) $ is in $ \mathcal{A}_{\mathrm{BD}}(w(0))$. \end{subequations} \par We are now in a position to give our asymptotic result, stating the convergence (along a sequence $\varepsilon_k \downarrow 0$) of a family of solutions to the thermoviscoplastic system to a quadruple $(\Theta, u,e,p)$ such that $(u,e,p)$ is an energetic solution to the plastic system, while the limit temperature $\Theta$ is constant in space, but still time-dependent. Furthermore, we find that $(\Theta,u,e,p)$ fulfills a further energy balance, cf.\ \eqref{anche-la-temp} ahead, from which we deduce a balance between the energy dissipated by the plastic strain and the thermal energy.
\begin{maintheorem} \label{mainth:3} Let the reference configuration $\Omega$ and the elasticity tensor $\mathbb{C} $ comply with \eqref{Omega-s2},
\eqref{bdriesC2} and \eqref{elast-visc-tensors}, \eqref{bbC-s:5}, respectively. Moreover, assume
\eqref{ass-K1-sec-5} and \eqref{ass-K2-sec-5}.
Let $(\vartheta_\varepsilon, u_\varepsilon,e_\varepsilon,p_\eps)_\varepsilon$ be a family of \emph{weak energy solutions} to the rescaled thermoviscoplastic systems (\ref{plast-PDE-rescal}, \ref{bc}), with heat conduction coefficient $\kappa$ fulfilling \eqref{hyp-K} and \eqref{hyp-K-stronger}, tensors $\mathbb{B}_\varepsilon$ satisfying \eqref{scalingbbB}, with data $(H_\varepsilon, h_\varepsilon, F_\varepsilon, g_\varepsilon, w_\varepsilon)_\varepsilon$ fulfilling conditions \eqref{data-eps}, and initial data $(\vartheta_\varepsilon^0, u_\varepsilon^0, e_\varepsilon^0, p_\varepsilon^0)_\varepsilon$ converging as in \eqref{convs-init-data} to a triple $(u_0,e_0,p_0)$ fulfilling the stability condition \eqref{glob-stab-5} at time $t=0$. \par Then, for every vanishing sequence $(\varepsilon_k )_k $ there exist a (not relabeled) subsequence $(\vartheta_{\varepsilon_k}, u_{\eps_k},e_{\varepsilon_k},p_{\eps_k})_k$ and
$\Theta \in L^\infty(0,T;L^\infty(\Omega))$,
$ u \in L^\infty (0,T; \mathrm{BD}(\Omega;\mathbb{R}^d) )$, $ e \in L^\infty(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))$, $ p \in \mathrm{BV}([0,T]; \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}) )$ such that \begin{enumerate} \item the following convergences hold as $k\to\infty$: \begin{subequations} \label{convs-eps} \begin{align} \label{convs-eps-teta} & \teta_{\eps_k} \rightharpoonup \Theta&& \text{ in } L^h(Q)
&& \text{ for every } h
\in \begin{cases}
[1,3] & \text{ if } d =2,
\\
[1,8/3] & \text{ if } d=3,
\end{cases} \\
\label{convs-eps-u} & u_{\eps_k}(t) \weaksto u(t) && \text{ in } \mathrm{BD}(\Omega;\mathbb{R}^d) && \text{for a.a. } t\in (0,T), \\ \label{convs-eps-e} & e_{\varepsilon_k}(t) \to e(t) && \text{ in } L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) && \text{for a.a. } t\in (0,T), \\ \label{convs-eps-p} & p_{\varepsilon_k}(t) \weaksto p(t) && \text{ in } \mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}) && \text{for every } t\in [0,T]; \end{align} \end{subequations} \item $\Theta $ is strictly positive and constant in space; \item $(u,e,p)$ is a \emph{global energetic solution to the perfectly plastic system}, with initial and boundary data $(u_0,e_0,p_0)$ and $w$, and the enhanced time regularity \begin{equation} \label{sono-AC} u\in \mathrm{AC} ([0,T]; \mathrm{BD}(\Omega;\mathbb{R}^d)), \qquad e \in \mathrm{AC} ([0,T]; L^2(\Omega;\bbM_{\mathrm{sym}}^{d\times d})), \qquad p \in \mathrm{AC} ([0,T];
\mathrm{M}(\Omega{\cup} \Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}));
\end{equation} \item the quadruple $(\Theta,u,e,p)$ fulfills the additional energy balance \begin{equation} \label{anche-la-temp} \begin{aligned} \calE(\Theta(t), e(t)) & = \calE(\vartheta_0, e_0) +\int_0^t \int_\Omega \mathsf{H} \;\!\mathrm{d} x \;\!\mathrm{d} s +\int_0^t \int_{\partial\Omega} \mathsf{h} \;\!\mathrm{d} S \;\!\mathrm{d} s + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s
\\ & \quad
-\int_0^t \pairing{}{H^1(\Omega;\mathbb{R}^d)}{\calL}{\dot w} \;\!\mathrm{d} s + \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(t)}{u(t)} \\ & \quad - \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\calL(0)}{u_0} - \int_0^t \pairing{}{\mathrm{BD}(\Omega;\mathbb{R}^d)}{\dot{\calL}}{u} \;\!\mathrm{d} s,
\end{aligned} \end{equation} for almost all $t\in (0,T)$, and therefore there holds \begin{equation} \label{balance-of-dissipations} \begin{aligned} &
|\Omega| (\Theta(t) - \Theta(s))= \calF(\Theta(t)) - \calF(\Theta(s)) \\ & = \mathrm{Var}_{\calR}(p;[s,t]) +\int_s^t \int_\Omega \mathsf{H} \;\!\mathrm{d} x \;\!\mathrm{d} r +\int_s^t \int_{\partial\Omega} \mathsf{h} \;\!\mathrm{d} S \;\!\mathrm{d} r
\qquad \text{for almost all } s, t \in (0,T) \text{ with } s \leq t.
\end{aligned} \end{equation} \end{enumerate} \end{maintheorem} \noindent Observe that, by virtue of convergence \eqref{convs-eps-teta}, the limiting temperature $\Theta$ inherits the strict positivity property $\Theta(t) \geq \bar\vartheta>0$ for almost all $t\in (0,T)$. \begin{remark} \label{rmk:2weak} \upshape Under the same conditions as for Thm.\ \ref{mainth:3}, the vanishing-viscosity and inertia analysis for \emph{entropic} solutions to the rescaled thermoviscoplastic system (\ref{plast-PDE-rescal}, \ref{bc}) would lead to a considerably weaker formulation of the limiting system. Indeed, the energy balance \eqref{anche-la-temp} would be replaced by an upper energy estimate. Accordingly, it would no longer be possible to deduce \eqref{balance-of-dissipations}, which provides information on the evolution of the limiting temperature. \end{remark} \begin{remark}[An alternative scaling condition on the heat conduction coefficient $\kappa$] \upshape \label{rmk:alternative-scaling} For the vanishing-viscosity and inertia analysis carried out in the frame of the damage system analyzed in \cite{LRTT}, a scaling condition on the heat conduction coefficients $\kappa_\varepsilon$, allowed to depend on $\varepsilon$,
was exploited in place of \eqref{scalingbbB}. Namely, it was supposed that \begin{equation} \label{scaling-lrtt} \kappa_\varepsilon(\vartheta) = \frac1{\varepsilon^2} \kappa(\vartheta) \qquad \text{with } \kappa \in \rmC^0(\mathbb{R}^+) \text{ satisfying \eqref{hyp-K-stronger}}. \end{equation} This reflects the view that, for the limit system, if a change of heat is caused at some spot in the material, then heat must be conducted all over the body with infinite speed. In fact, \eqref{scaling-lrtt}, too, led us to show that the limit temperature is constant in space, as in the present case. \par This scaling condition was combined with the requirement that the Dirichlet boundary $\Gamma_\mathrm{Dir}$ coincides with the whole $\partial\Omega$, and that the Dirichlet loading $w$ is null, in order to deduce \begin{compactenum} \item the convergence (along a subsequence) of the temperatures $\vartheta_\varepsilon$ to a spatially constant function $\Theta$; \item the strong convergence $\varepsilon e(\dot{u}_\varepsilon) \to 0 $ in $L^2(Q;\bbM_\mathrm{sym}^{d\times d})$, by means of a careful argument strongly relying on the homogeneous character of the Dirichlet boundary conditions. \end{compactenum} In this way, in \cite{LRTT} we bypassed one of the major difficulties in the asymptotic analysis,
namely the presence of the
thermal expansion term $ \iint \teta_\eps \mathbb{B}{:}\dot{e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} t $ on the r.h.s.\ of the rescaled mechanical energy balance, which in turn is the starting point for the derivation of a priori estimates uniform w.r.t.\ $\varepsilon$ for the dissipative variables $\dot{e}_\varepsilon$ and $\dot{p}_\varepsilon$. \par In the present context, we have decided not to develop the approach based on condition \eqref{scaling-lrtt}. In fact, it would have forced us to take null Dirichlet loadings for the limit perfectly plastic system, and this, in combination with the strong safe load condition \eqref{safe-load-s5}, would have been too restrictive. \end{remark} We will develop the proof of Theorem \ref{mainth:3} in the ensuing Sections \ref{ss:6.1} and \ref{ss:6.2}. \subsection{A priori estimates} \label{ss:6.1} We start by deriving a series of a priori estimates, \emph{uniform} w.r.t.\ the parameter $\varepsilon$, for \emph{a distinguished class} of weak energy solutions to system (\ref{plast-PDE-rescal}, \ref{bc}). In fact, in the derivation of these estimates we will perform the same tests as in the proof of Prop.\ \ref{prop:aprio}, in particular the test of the heat equation by $\teta_\eps^{\alpha-1}$, with $\alpha \in [2-\mu, 1)$, $\alpha>0$. Since $\teta_\eps^{\alpha-1}$ is not an admissible test function for the rescaled heat equation \eqref{eq-teta-eps} due to its insufficient spatial regularity, the calculations related to this test can be rendered rigorous only at the time-discrete level, and the resulting a priori estimates in fact only hold for the \underline{weak energy solutions
arising from the time discretization scheme}. That is why, in Proposition \ref{prop:aprio-eps} below, we will only claim that \emph{there exists} a family of weak energy solutions for which suitable a priori estimates hold.
\begin{proposition}[A priori estimates uniform w.r.t.\ $\varepsilon$] \label{prop:aprio-eps} Assume \eqref{Omega-s2},
\eqref{bdriesC2} and \eqref{elast-visc-tensors}, \eqref{bbC-s:5}. Assume conditions \eqref{hyp-K} and \eqref{hyp-K-stronger} on $\kappa$, \eqref{scalingbbB} on the tensors $\mathbb{B}_\varepsilon$, and \eqref{data-eps} on the data $(H_\varepsilon, h_\varepsilon, F_\varepsilon, g_\varepsilon, w_\varepsilon)_\varepsilon$ and $(\vartheta_\varepsilon^0, u_\varepsilon^0, e_\varepsilon^0, p_\varepsilon^0)_\varepsilon$. \par Then, there exist a constant $C>0$ and a family $(\vartheta_\varepsilon, u_\varepsilon,e_\varepsilon,p_\eps)_\varepsilon$ of \emph{weak energy solutions} to the rescaled thermoviscoplastic systems
(\ref{plast-PDE-rescal}, \ref{bc}), such that the following estimates hold uniformly w.r.t.\ the parameter $\varepsilon>0$: \begin{subequations} \label{aprio-eps} \begin{align} & \label{aprio-eps-u}
\| \sig{u_\varepsilon}\|_{L^\infty (0,T; L^1(\Omega;\bbM_\mathrm{sym}^{d\times d}))} + \varepsilon^{1/2} \| \sig{\dot{u}_\varepsilon}\|_{L^2 (Q;\bbM_\mathrm{sym}^{d\times d})}
\\ & \qquad \nonumber + \varepsilon \| \dot{u}_\varepsilon\|_{L^\infty(0,T;L^2(\Omega;\mathbb{R}^d))} + \varepsilon^2 \| \ddot{u}_\varepsilon \|_{L^2(0,T; H_\mathrm{Dir}^{1}(\Omega;\mathbb{R}^d)^*)} \leq C, \\ & \label{aprio-eps-e}
\| e_\varepsilon\|_{L^\infty (0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))} + \varepsilon^{1/2} \| \dot{e}_\varepsilon\|_{L^2 (Q;\bbM_\mathrm{sym}^{d\times d})} \leq C, \\ & \label{aprio-eps-p}
\| p_\varepsilon\|_{L^\infty (0,T; L^1(\Omega;\bbM_\mathrm{sym}^{d\times d}))} + \| \dot{p}_\varepsilon\|_{L^1 (Q;\bbM_\mathrm{sym}^{d\times d})} + \varepsilon^{1/2} \| \dot{p}_\varepsilon\|_{L^2 (Q;\bbM_\mathrm{sym}^{d\times d})} \leq C, \\ & \label{aprio-eps-dist}
\frac1{\varepsilon^{1/2}} \| d((\sigma_\eps)_\mathrm{D}, \mathcal{K}(\Omega)) \|_{L^2(0,T)} \leq C, \\ & \label{aprio-eps-teta}
\| \teta_\eps\|_{L^\infty (0,T; L^1(\Omega))}
+ \|\teta_\eps\|_{L^h(Q)} + \frac1{\varepsilon^{1/2}}\|\nabla \teta_\eps\|_{L^2(Q;\mathbb{R}^d)}
\leq C \qquad \text{for } h = \begin{cases}
3 & \text{if } d=2,
\\
\frac83 & \text{if } d=3.
\end{cases}
\end{align}
\end{subequations}
\end{proposition} \begin{proof} We will follow the outline of the proof of Prop.\ \ref{prop:aprio}, referring to it for all details. \par \noindent
\textbf{First a priori estimate:} We start from the rescaled total energy balance \eqref{total-enbal-rescal} and estimate the terms on its right-hand side. It follows from \eqref{convs-init-data} that $\varepsilon^2 \| \dot{u}_\varepsilon^0 \|_{L^2(\Omega;\mathbb{R}^d)}^2 \leq C$ and $\calE (\vartheta_\varepsilon^0, e_\varepsilon^0) \leq C$. As for the third term on the r.h.s., we use the safe load condition, yielding
\begin{equation} \label{will-be-quoted} \begin{aligned}
\int_0^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}_\varepsilon}{\dot {u}_\varepsilon{-} \dot{w}_\varepsilon} \;\!\mathrm{d} r & = \int_0^t \int_\Omega \varrho_\varepsilon{:} \left( \sig{\dot{u}_\varepsilon}{-} \sig{\dot{w}_\varepsilon} \right) \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \stackrel{(1)}{=} \int_0^t \int_\Omega \varrho_\varepsilon {:} \dot{e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_0^t \int_\Omega \varrho_\varepsilon : \dot{p}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_0^t \int_\Omega \varrho_\varepsilon {:} \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ &
\stackrel{(2)}{=} - \int_0^t \int_\Omega \dot{\varrho}_\varepsilon {:} e_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r
+ \int_\Omega \varrho_\varepsilon(t) {:} e_\varepsilon(t) \;\!\mathrm{d} x - \int_\Omega \varrho_\varepsilon(0) {:} e_\varepsilon^0 \;\!\mathrm{d} x + \int_0^t \int_\Omega (\varrho_\varepsilon)_\mathrm{D} {:} \dot{p}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r
\\
& \qquad
-
\int_0^t \int_\Omega \varrho_\varepsilon {:} \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r
\\ & \doteq I_1 + I_2+I_3+I_4 +I_5
\end{aligned}
\end{equation}
with (1) due to the kinematic admissibility condition, and (2) following from integration by parts, and the fact that $ \dot{p}_\varepsilon \in \bbM_\mathrm{D}^{d \times d}$ a.e.\ in $Q$.
We estimate \[ \begin{aligned}
& \left| I_1\right| \leq \int_0^t \| \dot{\varrho_\varepsilon} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \| e_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r,
\\ & \left| I_2 \right| \stackrel{(3)} \leq C \| e_\varepsilon(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \leq \frac{C_{\mathbb{C}}^1}{16} \| e_\varepsilon(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + C, \\ &
\left| I_3 \right| \stackrel{(4)} \leq C \| e_\varepsilon^0\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}\leq C, \end{aligned} \]
where (3) and (4) follow from the bound provided for $\| \varrho_\varepsilon \|_{L^\infty (0,T;L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))}$ by condition \eqref{safe-load-eps}, and from \eqref{convs-init-data}. Instead, for $I_4$ we use the plastic flow rule \eqref{pl-flow-rule-eps}, rewritten as $ \varepsilon \dot{p}_\varepsilon = (\sigma_\varepsilon)_\mathrm{D} - \zeta_\varepsilon$, with $\zeta_\varepsilon \in \partial_{\dot p} \mathrm{R}(\cdot, \dot{p}_\varepsilon) $ a.e.\ in $Q$. Then, \[ I_4 = \int_0^t \int_\Omega \tfrac 1\varepsilon (\varrho_\varepsilon)_\mathrm{D}{:} ( \sigma_\varepsilon)_\mathrm{D} \;\!\mathrm{d} x \;\!\mathrm{d} r - \int_0^t \int_\Omega \tfrac 1\varepsilon (\varrho_\varepsilon)_\mathrm{D}{:} \zeta_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r \doteq I_{4,1} + I_{4,2}, \]
and
\[
I_{4,1} = \int_0^t \int_\Omega \tfrac 1\varepsilon (\varrho_\varepsilon)_\mathrm{D}{:} \left( \varepsilon \mathbb{D} \dot{e}_\varepsilon + \mathbb{C} e_\varepsilon-\teta_\eps \mathbb{B}_\varepsilon \right)_\mathrm{D} \;\!\mathrm{d} x \;\!\mathrm{d} r \doteq I_{4,1,1} + I_{4,1,2} + I_{4,1,3} \] with \[ \begin{aligned} I_{4,1,1} & = - \int_0^t \int_\Omega (\dot{\varrho}_\varepsilon)_\mathrm{D}{:} \mathbb{D} e_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_\Omega (\varrho_\varepsilon(t))_\mathrm{D}{:}\mathbb{D} e_\varepsilon(t) \;\!\mathrm{d} x - \int_\Omega (\varrho_\varepsilon(0))_\mathrm{D}{:}\mathbb{D} e_\varepsilon^0 \;\!\mathrm{d} x \\ &
\leq C \int_0^t \tfrac 1\varepsilon \| (\dot{\varrho}_\varepsilon)_\mathrm{D} \|_{L^2(\Omega;\bbM_\mathrm{D}^{d\times d})}\| e_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r + \frac C{\varepsilon^2} \| ({\varrho}_\varepsilon)_\mathrm{D} \|_{L^\infty(0,T;L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))}^2 \\ & \quad + \frac{C_{\mathbb{C}}^1}{16} \| e_\varepsilon(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 + C \| e_\varepsilon^0 \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2, \end{aligned} \] and, analogously, \[ \begin{aligned} &
\left| I_{4,1,2} \right| \leq C \int_0^t \tfrac 1\varepsilon \| (\varrho_\varepsilon)_\mathrm{D} \|_{L^2(\Omega;\bbM_\mathrm{D}^{d\times d})}\| e_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r, \\ &
\left| I_{4,1,3} \right| \leq C \int_0^t \tfrac 1\varepsilon \| (\varrho_\varepsilon)_\mathrm{D} \|_{L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d})} \| \teta_\eps\|_{L^1(\Omega)} \;\!\mathrm{d} r\,. \end{aligned} \]
Instead, for the term $I_{4,2}$ we use that $| \zeta_\varepsilon | \leq C_R$ by \eqref{elastic-domain}, so that $|I_{4,2}| \leq \tfrac {C_R}{\varepsilon} \| (\varrho_\varepsilon)_\mathrm{D} \|_{L^1(Q;\bbM_\mathrm{D}^{d\times d})}$. Finally, \[
\left| I_5 \right| \leq \int_0^t \| \varrho_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}\| \sig{\dot{w}_\varepsilon} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r \leq C \] thanks to \eqref{safe-load-eps} and \eqref{conv-Dir-loads}. The fourth and the fifth terms on the r.h.s.\ of \eqref{total-enbal-rescal} are bounded thanks to condition \eqref{heat-sources-eps}. We estimate the sixth
term by
\[
\frac{\rho\varepsilon^2}4 \int_\Omega| \dot{u}_\varepsilon(t)|^2 \;\!\mathrm{d} x + \rho \varepsilon \int_0^t \| \dot{u}_\varepsilon\|_{L^2(\Omega;\mathbb{R}^d)} \varepsilon \|\ddot{w}_\varepsilon \|_{L^2(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} r + C + C\varepsilon^2 \| \dot{w}_\varepsilon \|_{L^\infty (0,T;L^2(\Omega;\mathbb{R}^d))}^2
\]
where we have also used \eqref{convs-init-data}.
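For the reader's convenience, we spell out the first of these contributions, which arises from Young's inequality applied to $\rho\varepsilon^2 \int_\Omega \dot{u}_\varepsilon(t)\, \dot{w}_\varepsilon(t) \;\!\mathrm{d} x$:
\[
\rho\varepsilon^2 \int_\Omega \dot{u}_\varepsilon(t)\, \dot{w}_\varepsilon(t) \;\!\mathrm{d} x \leq \frac{\rho\varepsilon^2}4 \int_\Omega |\dot{u}_\varepsilon(t)|^2 \;\!\mathrm{d} x + \rho\varepsilon^2 \int_\Omega |\dot{w}_\varepsilon(t)|^2 \;\!\mathrm{d} x\,;
\]
the first summand on the right-hand side is absorbed into the left-hand side of \eqref{total-enbal-rescal}, while the second one is bounded, uniformly w.r.t.\ $\varepsilon$, thanks to the first of \eqref{est-w-eps}.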
As for the last term on the r.h.s.\ of \eqref{total-enbal-rescal},
arguing in the very same way as in the proof of Prop.\ \ref{prop:aprio}, we estimate
\[ \int_0^t \int_\Omega (\varepsilon \mathbb{D} \dot{e}_\varepsilon + \mathbb{C} e_\varepsilon -\teta_\eps \mathbb{B}_\varepsilon) : \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r \doteq I_{6,1}+ I_{6,2}+I_{6,3}, \] with \[ \begin{aligned}
I_{6,1} & = - \int_0^t \int_\Omega \varepsilon \mathbb{D} e_\varepsilon : \sig{\ddot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r + \int_\Omega \varepsilon \mathbb{D} e_\varepsilon(t) : \sig{\dot{w}_\varepsilon(t)} \;\!\mathrm{d} x - \int_\Omega \varepsilon \mathbb{D} e_\varepsilon^0 : \sig{\dot{w}_\varepsilon(0)} \;\!\mathrm{d} x \\ & \leq \int_0^T \| e_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \varepsilon \| \sig{\ddot{w}_\varepsilon}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r + \frac{C_{\mathbb{C}}^1}{16} \| e_\varepsilon(t)\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}^2 +C \varepsilon^2 \| \sig{\dot{w}_\varepsilon} \|_{L^\infty(0,T; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}))}^2 + C, \end{aligned} \] where we have again used \eqref{convs-init-data}. We also have \[ \begin{aligned} &
\left| I_{6,2} \right| \leq C \int_0^t \| e_\varepsilon \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \| \sig{\dot{w}_\varepsilon} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r, \\ &
\left| I_{6,3} \right| \leq C\int_0^t \| \teta_\eps\|_{L^1(\Omega)} \varepsilon^{1/2} \| \sig{\dot{w}_\varepsilon} \|_{L^\infty(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r, \end{aligned} \] thanks to the scaling \eqref{scalingbbB} of the tensors $\mathbb{B}_\varepsilon$.
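For clarity, we spell out how the scaling exponent $\beta>\frac12$ enters the above bound for $I_{6,3}$: since $\mathbb{B}_\varepsilon = \varepsilon^\beta \mathbb{B}$ and $\varepsilon^{\beta-1/2}\leq 1$ for $\varepsilon \in (0,1]$,
\[
\left| I_{6,3} \right| = \left| \int_0^t \int_\Omega \teta_\eps\, \varepsilon^{\beta} \mathbb{B} : \sig{\dot{w}_\varepsilon} \;\!\mathrm{d} x \;\!\mathrm{d} r \right| \leq \varepsilon^{\beta -1/2} \| \mathbb{B}\|_{L^\infty(\Omega;\mathbb{R}^{d\times d})} \int_0^t \| \teta_\eps\|_{L^1(\Omega)}\, \varepsilon^{1/2} \| \sig{\dot{w}_\varepsilon} \|_{L^\infty(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r\,;
\]
the resulting integral is bounded, uniformly w.r.t.\ $\varepsilon$, thanks to the previously obtained estimate for $\| \teta_\eps \|_{L^\infty(0,T;L^1(\Omega))}$ and to the third of \eqref{est-w-eps}.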
\par All in all, taking into account the bounds provided by conditions \eqref{data-eps}, we obtain \[ \begin{aligned} &
\varepsilon^2 \int_\Omega | \dot{u}_\varepsilon(t)|^2 \;\!\mathrm{d} x + \int_\Omega \teta_\eps(t) \;\!\mathrm{d} x + \int_\Omega |e_\varepsilon(t)|^2 \;\!\mathrm{d} x \\ & \begin{aligned}
\leq C
+ C \int_0^t \Big( & \| \dot{\varrho}_\varepsilon\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} {+}\tfrac1{\varepsilon} \|( \dot{\varrho}_\varepsilon)_\mathrm{D}\|_{L^2(\Omega;\bbM_\mathrm{D}^{d\times d})}
{+}\tfrac1\varepsilon \| (\varrho_\varepsilon)_\mathrm{D}\|_{L^2(\Omega;\bbM_\mathrm{D}^{d\times d})}
\\
&
{+} \| \sig{\dot{w}_\varepsilon} \|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} {+} \varepsilon \|\sig{\ddot{w}_\varepsilon}\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})}\Big) \| e_\varepsilon\|_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})} \;\!\mathrm{d} r
\end{aligned}
\\ & \quad + C \int_0^t \varepsilon \| \ddot{w}_\varepsilon\|_{L^2(\Omega;\mathbb{R}^d)} \varepsilon \| \dot{u}_\varepsilon\|_{L^2(\Omega;\mathbb{R}^d)} \;\!\mathrm{d} r + C \int_0^t \left( \tfrac1\varepsilon \| (\varrho_\varepsilon)_\mathrm{D}\|_{L^\infty(\Omega;\bbM_\mathrm{D}^{d\times d})} + \varepsilon^{1/2} \| \sig{\dot{w}_\varepsilon }\|_{L^\infty(\Omega;\bbM_\mathrm{sym}^{d\times d})} \right) \|\teta_\eps\|_{L^1(\Omega)} \;\!\mathrm{d} r. \end{aligned} \] Applying Gronwall's Lemma and again exploiting \eqref{data-eps},
we obtain $\varepsilon \| \dot{u}_\varepsilon\|_{L^\infty(0,T;L^2(\Omega;\mathbb{R}^d))} + \sup_{t\in [0,T]}\calE(\teta_\eps(t), e_\varepsilon(t)) \leq C$, whence the third bound in \eqref{aprio-eps-u}, and the first bounds in \eqref{aprio-eps-e} and \eqref{aprio-eps-teta}. \par \noindent \textbf{Second a priori estimate:} We (formally) test the rescaled heat equation \eqref{eq-teta-eps} by $\teta_\eps^{\alpha-1}$ and integrate over $(0,t)$, thus retrieving the (formally written) analogue of \eqref{ad-est-temp2}, namely \begin{equation} \label{eps-analog} \begin{aligned} &
c \int_0^t \int_\Omega \kappa(\teta_\eps) | \nabla (\teta_\eps^{\alpha/2})|^2 \;\!\mathrm{d} x \;\!\mathrm{d} r + \varepsilon^2 C_{\mathbb{D}}^2 \int_0^t \int_\Omega |\dot{e}_\varepsilon|^2 \teta_\eps^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & \leq \varepsilon \int_0^t \int_\Omega \dot{\vartheta}_\varepsilon \teta_\eps^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} r + \varepsilon \int_0^t \int_\Omega \teta_\eps \mathbb{B}_\varepsilon{:} \dot{e}_\varepsilon \teta_\eps^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} r \\ & = \frac\varepsilon\alpha \int_\Omega (\teta_\eps(t))^\alpha \;\!\mathrm{d} x - \frac\varepsilon\alpha \int_\Omega(\vartheta_\varepsilon^0)^\alpha \;\!\mathrm{d} x + \varepsilon \int_0^t \int_\Omega \teta_\eps \varepsilon^\beta \mathbb{B}{:} \dot{e}_\varepsilon \teta_\eps^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} r \doteq I_1+I_2+I_3 \end{aligned} \end{equation} in view of the scaling \eqref{scalingbbB} for $\mathbb{B}_\varepsilon$.
The first two integral terms on the r.h.s.\ can be treated in the same way as in \eqref{est-temp-I1}, taking into account the previously proved bound for $\| \teta_\eps\|_{L^\infty(0,T;L^1(\Omega))}$. We thus obtain \begin{equation} \label{eps-analog-1} I_1+I_2 \leq C\varepsilon\,. \end{equation}
Again, we estimate \[ I_3 \leq
\frac{\varepsilon^2 C_{\mathbb{D}}^2 }4 \int_0^t \int_\Omega |\dot{e}_\varepsilon|^2 \teta_\eps^{\alpha-1} \;\!\mathrm{d} x \;\!\mathrm{d} r + C \varepsilon^{2\beta} \int_0^t \int_\Omega \teta_\eps^{\alpha+1} \;\!\mathrm{d} x \;\!\mathrm{d} r\,. \] While the first term in the above formula is absorbed into the l.h.s.\ of \eqref{eps-analog}, the second one is handled by the very same arguments as in the proof of Prop.\ \ref{prop:aprio}. In this way, also taking into account \eqref{eps-analog-1}, we obtain \begin{equation} \label{L2H1-ests}
\|\nabla \teta_\eps\|_{L^2(Q;\mathbb{R}^d)}^2 + \| \nabla(\teta_\eps)^{(\mu{+}\alpha)/2}\|_{L^2(Q;\mathbb{R}^d)}^2
+ \| \nabla(\teta_\eps)^{(\mu{-}\alpha)/2}\|_{L^2(Q;\mathbb{R}^d)}^2 \leq C\varepsilon + C'\varepsilon^{2\beta},
\end{equation} whence, in particular, the third bound in \eqref{aprio-eps-teta}. The second bound follows from interpolation, cf.\ \eqref{Gagliardo-Nir}.
\par \noindent \textbf{Third a priori estimate:} We now address the (rescaled) mechanical energy balance \eqref{mech-enbal-rescal}. The scaling \eqref{scalingbbB} of $\mathbb{B}_\varepsilon$ yields for the third integral term on the right-hand side \begin{equation} \label{scaling-BBeps}
\left| \int_0^t \int_\Omega \teta_\eps \mathbb{B}_\varepsilon {:} \dot{e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r \right| \leq
\int_0^t \int_\Omega \teta_\eps |\mathbb{B} | \varepsilon^{1/2}|\dot{e}_\varepsilon| \;\!\mathrm{d} x \;\!\mathrm{d} r \leq\int_0^t \int_\Omega |\teta_\eps|^2 \;\!\mathrm{d} x \;\!\mathrm{d} r + \frac{\varepsilon}4 \int_0^t \int_\Omega |\dot{e}_\varepsilon|^2 \;\!\mathrm{d} x \;\!\mathrm{d} r, \end{equation} so that the latter term can be absorbed into the left-hand side. The remaining terms on the r.h.s.\ are handled by the very same calculations developed for the \emph{First a priori estimate}. Therefore, from the bounds for the terms on the l.h.s.\ of \eqref{mech-enbal-rescal}, we conclude the second of \eqref{aprio-eps-e}, as well as \eqref{aprio-eps-p} and thus, by kinematic admissibility, the first two bounds in \eqref{aprio-eps-u}. We also infer \eqref{aprio-eps-dist}. \par \noindent \textbf{Fourth a priori estimate:} The last bound in \eqref{aprio-eps-u} follows from a comparison argument in the rescaled momentum balance \eqref{w-momentum-balance-eps}, taking into account the previously proved estimates, as well as the uniform bound \eqref{total-load-eps-bound} for $\calL_\varepsilon$. \end{proof}
\begin{remark}
\label{rmk:conds-forces} \upshape Condition \eqref{safe-load-eps}, imposing that the functions $(\varrho_\varepsilon)_\mathrm{D}$ tend to zero (w.r.t.\ suitable norms), has been crucial to compensate for the blowup of the bounds for $\dot{p}_\varepsilon$ in the estimate of the term $\int_0^t \int_\Omega (\varrho_\varepsilon)_\mathrm{D} \dot{p}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} r $ contributing to $ \int_0^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}_\varepsilon}{\dot{u}_\varepsilon} \;\!\mathrm{d} r $ on the right-hand side of the total energy balance \eqref{total-enbal-rescal}.
A close perusal of the calculations for handling $ \int_0^t \pairing{}{H_\mathrm{Dir}^1 (\Omega;\mathbb{R}^d)}{\mathcal{L}_\varepsilon}{\dot {u}_\varepsilon} \;\!\mathrm{d} r $ reveals that taking the tractions $g_\varepsilon$ to be null would not have allowed us to avoid \eqref{safe-load-eps} either (unlike for the thermoviscoplastic system, cf.\ Remark \ref{rmk:diffic-1-test}). \par For estimating the term $ \iint \teta_\eps \mathbb{B} {:} \varepsilon^{1/2}\dot{e}_\varepsilon \;\!\mathrm{d} x \;\!\mathrm{d} t $ it would in fact be sufficient that the thermal expansion tensors $\mathbb{B}_\varepsilon$ scale like $\varepsilon^{1/2}$: as we will see in the proof of Theorem \ref{mainth:3}, the (slightly) stronger scaling condition from \eqref{scalingbbB} is necessary for the limit passage as $\varepsilon\downarrow 0$ in the mechanical energy equality. \end{remark} \subsection{Proof of Theorem \ref{mainth:3}} \label{ss:6.2} We split the arguments into several steps. \par\noindent \emph{Step $0$: compactness.} It follows from \eqref{aprio-eps-teta} that $\nabla \teta_{\eps_k} \to 0$ in $L^2(Q;\mathbb{R}^d)$. Therefore, also taking into account the other bounds in \eqref{aprio-eps-teta}, we infer that, up to a subsequence, the functions $(\teta_{\eps_k})_k $ weakly converge to a spatially constant strictly positive function $\Theta \in L^h(Q)$, with $h $ as in
\eqref{convs-eps-teta}. In fact, we find that $\Theta \in L^\infty(0,T;L^\infty(\Omega))$
since for every $ t \in (0,T)$ and (sufficiently small) $r>0$ \[
\int_{t-r}^{t+r} \| \Theta\|_{L^1(\Omega)} \;\!\mathrm{d} s \leq \liminf_{k\to\infty} \int_{t-r}^{t+r} \| \teta_{\eps_k}\|_{L^1(\Omega)} \;\!\mathrm{d} s \leq 2 r C, \]
where the first inequality follows from $\teta_{\eps_k} \rightharpoonup \Theta$ in $L^1(Q) $ and the second estimate from bound \eqref{aprio-eps-teta}. Then, it suffices to take the limit as $r\downarrow 0$ at every Lebesgue point of the function $t\mapsto \| \Theta(t)\|_{L^1(\Omega)} = \Theta(t) |\Omega|$.
On account of the continuous embedding $L^1(\Omega;\bbM_\mathrm{D}^{d\times d}) \subset \mathrm{M}(\Omega{\cup}\Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d})$ we gather from \eqref{aprio-eps-p} that the functions $p_\eps$ have uniformly bounded variation in $ \mathrm{M}(\Omega{\cup}\Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d})$. Therefore, a generalization of the Helly Theorem for functions with values in the dual of a separable space, cf.\ \cite[Lemma 7.2]{DMDSMo06QEPL}, yields that there exists $p\in \mathrm{BV}([0,T]; \mathrm{M}(\Omega{\cup}\Gamma_\mathrm{Dir};\bbM_\mathrm{D}^{d\times d}))$ such that convergence \eqref{convs-eps-p} holds and, by the lower semicontinuity of the variation functional $\mathrm{Var}_\calR$, that \begin{equation} \label{lsc-Var-R}
\mathrm{Var}_{\calR}(p; [a,b]) \leq \liminf_{k\to\infty} \mathrm{Var}_{\calR}(p_{\eps_k}; [a,b]) \qquad \text{for every } [a,b]\subset [0,T]. \end{equation}
For later use, we remark that, in view of estimate \eqref{aprio-eps-p} on $(\dot{p}_\varepsilon)_\varepsilon$, \begin{equation} \label{vanno-a-zero-p} \varepsilon_k \int_0^T \calR(\dot{p}_{\varepsilon_k}) \;\!\mathrm{d} t \to 0, \qquad \varepsilon_k \dot{p}_{\varepsilon_k} \to 0 \text{ in } L^2(Q; \bbM_\mathrm{D}^{d\times d}). \end{equation} In fact, we even have that \begin{equation} \label{vanno-a-zero-deb-p}
\varepsilon_k^{1/2} \dot{p}_{\varepsilon_k} \rightharpoonup 0 \text{ in } L^2(Q; \bbM_\mathrm{D}^{d\times d}). \end{equation} Indeed, by \eqref{aprio-eps-p} there exists $\varpi \in L^2(Q; \bbM_\mathrm{D}^{d\times d})$ such that $ \varepsilon_k^{1/2} \dot{p}_{\varepsilon_k} \rightharpoonup \varpi$ in $ L^2(Q; \bbM_\mathrm{D}^{d\times d})$. We now show that $\varpi \equiv 0$. With this aim, we observe that, on the one hand the weak convergence in $ L^2(Q; \bbM_\mathrm{D}^{d\times d})$ entails that
\[ \int_\Omega \xi(x) \left( \int_0^t \varepsilon_k^{1/2} \dot{p}_{\varepsilon_k}(s,x) \;\!\mathrm{d} s \right) \;\!\mathrm{d} x \to \int_\Omega \xi(x) \left( \int_0^t \varpi(s,x) \;\!\mathrm{d} s \right) \;\!\mathrm{d} x \] for every $t \in (0,T)$ and
$\xi \in L^2(\Omega; \bbM_\mathrm{D}^{d\times d})$, i.e.\ $\int_0^t \varepsilon_k^{1/2} \dot{p}_{\varepsilon_k} \;\!\mathrm{d} s \rightharpoonup \int_0^t \varpi \;\!\mathrm{d} s $ in $L^2(\Omega; \bbM_\mathrm{D}^{d\times d})$. On the other hand, we have that \[
\left\| \int_0^t \varepsilon_k^{1/2} \dot{p}_{\varepsilon_k} \;\!\mathrm{d} r \right\|_{L^1(\Omega; \bbM_\mathrm{D}^{d\times d})} = \left\| \varepsilon_k^{1/2} p_{\varepsilon_k} (t) - \varepsilon_k^{1/2} p_{\varepsilon_k}^0 \right\|_{L^1(\Omega; \bbM_\mathrm{D}^{d\times d})} \to 0 \] in view of estimate \eqref{aprio-eps-p}.
Hence, \eqref{vanno-a-zero-deb-p} ensues.
\par Up to a further subsequence, we have \begin{equation} \label{convs-e-linfty} e_{\varepsilon_k} \weaksto e \quad \text{ in $L^\infty (0,T; L^2(\Omega; \bbM_\mathrm{sym}^{d\times d}))$.} \end{equation} Due to \eqref{aprio-eps-e}, with the same arguments as for \eqref{vanno-a-zero-deb-p} we have that \begin{equation} \label{vanno-a-zero-e}
\varepsilon_k \dot{e}_{\varepsilon_k} \to 0 \text{ in } L^2(Q; \bbM_\mathrm{sym}^{d\times d}), \qquad \varepsilon_k^{1/2} \dot{e}_{\varepsilon_k} \rightharpoonup 0 \text{ in } L^2(Q; \bbM_\mathrm{sym}^{d\times d}). \end{equation} \par
We combine the estimate for $\sig{u_\varepsilon}$ in $L^\infty (0,T; L^1(\Omega; \bbM_\mathrm{sym}^{d\times d}))$ with the fact that the trace of $u_\varepsilon $ on $\Gamma_\mathrm{Dir}$ (i.e., the trace of $w_\varepsilon$) is bounded in $L^\infty(0,T;L^1(\Gamma_\mathrm{Dir};\mathbb{R}^d))$ thanks to \eqref{conv-Dir-loads}. Then, via the Poincar\'e-type inequality \eqref{PoincareBD} we conclude that $(u_\varepsilon)_\varepsilon$ is bounded in $L^\infty (0,T; \mathrm{BD}(\Omega;\mathbb{R}^d))$, which embeds continuously into $L^\infty (0,T; L^{d/(d{-}1)} (\Omega;\mathbb{R}^d))$. Therefore, up to a subsequence \begin{equation} \label{convs-u-linfty} u_{\varepsilon_k} \weaksto u \quad \text{ in } L^\infty (0,T; L^{d/(d{-}1)} (\Omega;\mathbb{R}^d)). \end{equation} Again via inequality \eqref{PoincareBD} combined with estimate \eqref{est-w-eps} on $\dot{w}_\varepsilon$, we deduce from the estimate for $\varepsilon^{1/2} \sig{\dot{u}_\varepsilon}$ in $ L^2(Q; \bbM_\mathrm{sym}^{d\times d}) $ that $ \varepsilon^{1/2} \dot{u}_\varepsilon$ is bounded in $L^2 (0,T; \mathrm{BD}(\Omega;\mathbb{R}^d))$, hence in $L^2(0,T; L^{d/(d{-}1)} (\Omega;\mathbb{R}^d))$. Therefore, taking into account \eqref{convs-u-linfty}, we get that \[ \varepsilon^{1/2} \dot{u}_{\varepsilon_k} \rightharpoonup 0 \quad \text{ in } L^2 (0,T; L^{d/(d{-}1)} (\Omega;\mathbb{R}^d)). \] Thus, \eqref{aprio-eps-u} also yields \begin{equation} \label{vanno-a-zero-u}
\varepsilon_k \dot{u}_{\varepsilon_k} \weaksto 0 \text{ in } L^\infty(0,T; L^2(\Omega;\mathbb{R}^d)), \qquad \varepsilon_k^{2} \ddot{u}_{\varepsilon_k} \rightharpoonup 0 \text{ in } L^2(0,T; H_{\mathrm{Dir}}^1(\Omega;\mathbb{R}^d)^*). \end{equation} \par\noindent \emph{Step $1$: ad the global stability condition \eqref{glob-stab-5} for almost all $t\in (0,T)$.} We will exploit Lemma \ref{l:for-stability} and check that \begin{enumerate} \item the stress $\sigma$ belongs to the elastic domain $\mathcal{K}(\Omega)$; \item it complies with the boundary value problem \eqref{BVprob-stress}; \item the triple $(u,e,p)$ is kinematically admissible. \end{enumerate} \par \textbf{Ad (1):} It follows from the scaling \eqref{scalingbbB} of the tensors $\mathbb{B}_\varepsilon$ and from estimate \eqref{aprio-eps-teta} on $(\teta_\eps)_\varepsilon$ that the term $\teta_\eps \mathbb{B}_\varepsilon $ strongly converges to $0$ in $L^2(Q;\bbM_\mathrm{sym}^{d\times d})$. Therefore, also taking into account convergences \eqref{convs-e-linfty} and
\eqref{vanno-a-zero-e}, we deduce that
\begin{equation}
\label{stress-eps-k}
\sigma_{\varepsilon_k} \rightharpoonup \sigma = \mathbb{C} e \qquad \text{ in $L^2 (Q; \bbM_\mathrm{sym}^{d\times d})$.}
\end{equation}
Hence,
\[ \int_0^T d^2( \sigma_\mathrm{D}, \mathcal{K}(\Omega)) \;\!\mathrm{d} t \leq \liminf_{k\to\infty} \int_0^T d^2( (\sigma_{\varepsilon_k})_\mathrm{D}, \mathcal{K}(\Omega)) \;\!\mathrm{d} t =0,
\]
where the last equality follows from estimate \eqref{aprio-eps-dist} deduced from the (rescaled) mechanical energy balance \eqref{mech-enbal-rescal}. Therefore, the limit stress $\sigma$ complies with the admissibility condition $\sigma(t) \in \mathcal{K}(\Omega)$ for almost all $t\in (0,T)$.
\par
\textbf{Ad (2):}
Exploiting convergence \eqref{total-load-eps-bound} for the loads $(\calL_{\varepsilon_k})_k$ and \eqref{vanno-a-zero-u} for the inertial terms $(\ddot{u}_{\varepsilon_k})_k$, we can pass to the limit in the rescaled momentum balance \eqref{w-momentum-balance-eps} and deduce that $\sigma$ complies with
\[
\int_\Omega \sigma(t) {:} \sig{v} \;\!\mathrm{d} x = \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\calL(t)}{v} = \int_\Omega F(t) v \;\!\mathrm{d} x + \int_{\Gamma_\mathrm{Neu}} g(t) v \;\!\mathrm{d} S\qquad \text{for a.a. } t \in (0,T) \text{ and all } v \in H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d),
\]
whence \eqref{BVprob-stress}.
\par
\textbf{Ad (3):}
In order to prove that $(u(t), e(t), p(t)) \in \mathcal{A}_{\mathrm{BD}} (w(t))$ we will make use of the closedness property guaranteed by Lemma \ref{l:closure-kin-adm}, and pass to the limit in the condition $(u_\varepsilon(t), e_{\varepsilon}(t), p_\eps(t)) \in \mathcal{A}(w_\varepsilon(t)) \subset \mathcal{A}_{\mathrm{BD}} (w_\varepsilon(t))$ for almost all $t\in (0,T)$. However, we cannot directly apply Lemma \ref{l:closure-kin-adm} as, at the moment, we cannot count on \emph{pointwise-in-time} convergences for the functions $(u_{\varepsilon_k})_k$ and $(e_{\varepsilon_k})_k$. In order to extract more information from the weak convergences \eqref{convs-e-linfty} and \eqref{convs-u-linfty}, we resort to the Young measure compactness result stated in the upcoming Theorem
\ref{thm.balder-gamma-conv}. Indeed, up to a further extraction, with the sequence $(u_{\varepsilon_k}, e_{\varepsilon_k})_k$, bounded in $L^\infty (0,T; \mathbf{X})$ with $\mathbf{X} = L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d) \times L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}),$ we can associate a limiting Young measure $ \bfmu\in \mathscr{Y}(0,T; \mathbf{X})$ such that for almost all $t\in (0,T)$ the probability measure $\mu_t$ is concentrated on the set $\boldsymbol{L}_t $
of the limit points of $(u_{\varepsilon_k}(t), e_{\varepsilon_k}(t))_k$ w.r.t.\ the weak topology of $\mathbf{X}$, and we have the following representation formulae for the limits $u$ and $e$ (cf.\ \eqref{eq:35}) \[ (u(t), e(t) ) = \int_{\mathbf{X}} (\mathsf{u}, \mathsf{e}) \;\!\mathrm{d} \mu_t(\mathsf{u}, \mathsf{e}) \qquad \text{for a.a. } t \in (0,T). \] Furthermore, for almost all $t\in (0,T)$ let us consider the \emph{marginals} of $\mu_t$, namely the probability measures $\mu_t^1$ on $L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)$, and $\mu_t^2$ on $L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})$, defined by taking the push-forwards of $\mu_t$ through the projection maps $\pi_1: \mathbf{X} \to L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)$, and $\pi_2: \mathbf{X} \to L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})$, i.e.\ $\mu_t^i =( \pi_i)_\#\mu_t $ for $i=1,2,$ with $( \pi_i)_\#\mu_t$ defined by $( \pi_i)_\#\mu_t(B): = \mu_t (\pi_i^{-1}(B))$ for every $B \subset L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d) $ and $B\subset L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})$, respectively. Therefore, \begin{subequations}
\label{baycenters}
\begin{equation} u(t) = \pi_1 \left( \int_{\mathbf{X}} (\mathsf{u}, \mathsf{e}) \;\!\mathrm{d} \mu_t(\mathsf{u}, \mathsf{e}) \right) = \int_{\mathbf{X}} \pi_1(\mathsf{u}, \mathsf{e}) \;\!\mathrm{d} \mu_t(\mathsf{u}, \mathsf{e}) = \int_{ L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)} \mathsf{u} \;\!\mathrm{d} \mu_t^1(\mathsf{u}), \end{equation} and, analogously,
\begin{equation} e(t) = \int_{L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) } \mathsf{e} \;\!\mathrm{d} \mu_t^2(\mathsf{e}). \end{equation} \end{subequations} By \eqref{e:concentration} in
Theorem \ref{thm.balder-gamma-conv}, the measure $\mu_t^1$ ($\mu_t^2$, respectively) is concentrated on $\boldsymbol{U}_t: = \pi_1(\boldsymbol{L}_t)$, the set of the weak-$L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)$ limit points of $(u_{\varepsilon_k}(t))_k$ (on $\boldsymbol{E}_t: = \pi_2(\boldsymbol{L}_t)$, the set of the weak-$L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) $ limit points of $(e_{\varepsilon_k}(t))_k$, respectively).
We now combine \eqref{baycenters} with the following information on the sets $\boldsymbol{U}_t$ and $\boldsymbol{E}_t$. Namely,
we have (1):
\begin{subequations}
\label{viva_Y_meas}
\begin{equation}
\label{vYm1}
\boldsymbol{U}_t \subset \mathrm{BD}(\Omega;\mathbb{R}^d) \qquad \text{for a.a. } t \in (0,T).
\end{equation}
Indeed, pick $\mathsf{u} \in \boldsymbol{U}_t $ and a subsequence $u_{\varepsilon_{k_j}^t} (t)$, possibly depending on $t$, such that $u_{\varepsilon_{k_j}^t} (t)\rightharpoonup \mathsf{u} $ in $ L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)$. Since $(u_\varepsilon)_\varepsilon$ is bounded in $L^\infty(0,T; \mathrm{BD}(\Omega;\mathbb{R}^d))$, we may suppose that the sequence $(u_{\varepsilon_{k_j}^t} )$ is bounded in $\mathrm{BD}(\Omega;\mathbb{R}^d)$ and, a fortiori, weakly$^*$-converges to $\mathsf{u}$ in $ \mathrm{BD}(\Omega;\mathbb{R}^d)$, whence \eqref{vYm1}.
Ultimately,
\[
u(t) = \int_{ L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)} \mathsf{u} \;\!\mathrm{d} \mu_t^1(\mathsf{u}) = \int_{\mathrm{BD}(\Omega;\mathbb{R}^d)} \mathsf{u} \;\!\mathrm{d} \mu_t^1(\mathsf{u})\,.
\]
Furthermore, (2):
\begin{equation}
\label{vYm2}
\sig{\mathsf{u}} = \mathsf{e} + p(t) \qquad \text{for every } (\mathsf{u}, \mathsf{e}) \in \boldsymbol{U}_t \times \boldsymbol{E}_t \text{ and } \text{for a.a. } t \in (0,T).
\end{equation}
This follows from passing to the limit in the kinematic admissibility condition $\sig{u_{\varepsilon_k}(t)} = e_{\varepsilon_k}(t)+ p_{\varepsilon_k}(t)$, taking into account the pointwise convergence \eqref{convs-eps-p}.
Finally, (3):
\begin{equation}
\label{vYm3} p(t) = (w(t){-}\mathsf{u}) {\otimes} \nu\mathscr{H}^{d-1} \qquad \text{ on } \Gamma_\mathrm{Dir} \qquad \text{for every } \mathsf{u} \in \boldsymbol{U}_t \text{ and } \text{for a.a. } t \in (0,T), \end{equation} \end{subequations}
which ensues from
Lemma \ref{l:closure-kin-adm}, also taking into account convergence \eqref{conv-Dir-loads} for $(w_\varepsilon)_\varepsilon$.
\par Then, integrating \eqref{vYm2} w.r.t.\ the measure $\mu_t $, using that
\[
\int_{\mathbf{X}} \sig{\mathsf{u}} \;\!\mathrm{d} \mu_t (\mathsf{u}, \mathsf{e}) = \sig{ \int_{\mathbf{X}} \mathsf{u} \;\!\mathrm{d} \mu_t(\mathsf{u}, \mathsf{e})} = \sig{\int_{ L^{d/{(d{-}1)}}(\Omega;\mathbb{R}^d)} \mathsf{u} \;\!\mathrm{d} \mu_t^1(\mathsf{u}) } = \sig{u(t)}
\]
by the linearity of the operator $\sig{\cdot}$, and arguing analogously for the other terms in \eqref{vYm2}, we conclude that $\sig{u(t)} = e(t)+p(t)$. The boundary condition on $\Gamma_\mathrm{Dir}$ follows from integrating \eqref{vYm3}.
This concludes the proof of the kinematic admissibility condition, and thus of \eqref{glob-stab-5}, \emph{for almost all $t \in (0,T)$.}
\par\noindent
\emph{Step $2$: ad the upper energy estimate in \eqref{enbal-5} for almost all $t\in (0,T)$.} We shall now prove the inequality $\leq $ in \eqref{enbal-5}. With this aim, we pass to the limit in the (rescaled) mechanical energy balance \eqref{mech-enbal-rescal}, integrated on a generic interval $(a,b) \subset (0,T)$. Taking into account that the first four terms on the l.h.s.\ are positive, we have that \begin{equation} \label{u-e-est} \begin{aligned} \liminf_{k\to\infty} \int_a^b (\text{l.h.s.\ of \eqref{mech-enbal-rescal}}) \;\!\mathrm{d} t & \geq \liminf_{k\to\infty} \int_a^b \int_0^t \calR(\dot{p}_{\varepsilon_k}) \;\!\mathrm{d} s \;\!\mathrm{d} t + \liminf_{k\to\infty} \int_a^b \calQ(e_{\varepsilon_k}(t)) \;\!\mathrm{d} t \\ & \geq \int_a^b \mathrm{Var}_\calR (p; [0,t]) \;\!\mathrm{d} t + \int_a^b \calQ(e(t)) \;\!\mathrm{d} t. \end{aligned} \end{equation} The first $\liminf$-inequality follows from the fact that \[ \liminf_{k\to\infty} \int_0^t \calR(\dot{p}_{\varepsilon_k}) \;\!\mathrm{d} s \stackrel{\eqref{consistency-dotp}}{ =} \liminf_{k\to\infty} \mathrm{Var}_\calR (p_{\eps_k}; [0,t]) \stackrel{\eqref{lsc-Var-R}}{\geq }\mathrm{Var}_\calR (p; [0,t]) \quad \text{for every } t \in [0,T] \] and from the Fatou Lemma. The second one for the elastic energy is due to the weak convergence \eqref{convs-e-linfty} for the sequence $(e_{\varepsilon_k})_k$. \par As for the r.h.s.\ of \eqref{mech-enbal-rescal}, we have that \begin{equation} \label{l-e-est} \begin{aligned} \lim_{k\to\infty} \int_a^b (\text{r.h.s.\ of \eqref{mech-enbal-rescal}}) \;\!\mathrm{d} t & = \int_a^b \Big( \calQ(e_0) - \int_\Omega \varrho(0) : (e_0 {-} \sig{w(0)} ) \;\!\mathrm{d} x + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s \\
& \qquad + \int_\Omega \varrho(t) : (e(t) {-} \sig{w(t)} ) \;\!\mathrm{d} x -\int_0^t\int_\Omega \dot{\varrho}: (e{-} \sig{w}) \;\!\mathrm{d} x \;\!\mathrm{d} s \Big) \;\!\mathrm{d} t\,. \end{aligned} \end{equation}
In fact, the term $ \tfrac{\rho \varepsilon_k^2}2 \int_\Omega |\dot{u}_{\varepsilon_k}^0|^2 \;\!\mathrm{d} x $ on the r.h.s.\ of \eqref{mech-enbal-rescal} tends to zero by \eqref{convs-init-data}. For the term $\iint \pairing{}{}{\calL_{\varepsilon_k}}{\dot{u}_{\varepsilon_k} {-} \dot{w}_{\varepsilon_k}} $ we use the safe-load condition, yielding \[ \int_a^b \int_0^t \pairing{}{H_\mathrm{Dir}^1(\Omega;\mathbb{R}^d)}{\calL_{\varepsilon_k}}{\dot{u}_{\varepsilon_k} {-} \dot{w}_{\varepsilon_k}} \;\!\mathrm{d} s \;\!\mathrm{d} t = \int_a^b \int_0^t \int_\Omega \varrho_{\varepsilon_k}{:} \sig{\dot{u}_{\varepsilon_k}}\;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t - \int_a^b \int_0^t \int_\Omega \varrho_{\varepsilon_k} {:} \sig{\dot{w}_{\varepsilon_k}}\;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t\,. \] In order to pass to the limit in the first integral term, we replace $\sig{\dot{u}_{\varepsilon_k}} $ by $\dot{e}_{\varepsilon_k} + \dot{p}_{\varepsilon_k}$ via kinematic admissibility, and integrate by parts the term featuring $ \varrho_{\varepsilon_k} {:} \dot{e}_{\varepsilon_k}$, thus obtaining the sum of four integrals, cf.\ equality (2) in \eqref{will-be-quoted}. Referring to the notation $I_1, \ldots, I_4$ for the terms contributing to \eqref{will-be-quoted}, we find that \[ \begin{array}{llll} & \int_a^b I_1 \;\!\mathrm{d} t & \stackrel{(1)}{\to} & - \int_a^b \int_0^t \int_\Omega \dot{\varrho}{:} e \;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t, \\ & \int_a^b I_2 \;\!\mathrm{d} t & \stackrel{(2)}{\to} & \int_a^b \int_\Omega \varrho(t){:} e(t) \;\!\mathrm{d} x \;\!\mathrm{d} t,
\\ & \int_a^b I_3 \;\!\mathrm{d} t & \stackrel{(3)}{\to} & - \int_a^b \int_\Omega \varrho(0){:} e(0) \;\!\mathrm{d} x \;\!\mathrm{d} t, \\ & \int_a^b I_4 \;\!\mathrm{d} t & \stackrel{(4)}{\to} & 0 \end{array} \] as $k\to\infty$, with convergences (1) \& (2) due to the first of \eqref{safe-load-eps} combined with \eqref{convs-e-linfty}, while (3) follows from \eqref{safe-load-eps} together with \eqref{convs-init-data}. Finally, (4) ensues from \[
|I_4| = \left| \int_0^t
\int_\Omega \varrho_{\varepsilon_k}{:} \dot{p}_{\varepsilon_k} \;\!\mathrm{d} x \;\!\mathrm{d} s \right| \leq \varepsilon_k^{1/2} \tfrac1{\varepsilon_k} \| (\varrho_{\varepsilon_k})_\mathrm{D} \|_{L^2(0,t;L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))} \varepsilon_k^{1/2} \| \dot{p}_{\varepsilon_k} \|_{L^2(0,t;L^2(\Omega;\bbM_\mathrm{D}^{d\times d}))} \leq C \varepsilon_k^{1/2} \to 0 \] where the last estimate is a consequence of \eqref{safe-load-eps} and of estimate \eqref{aprio-eps-p} for $ \dot{p}_{\varepsilon_k}$. Finally, again thanks to \eqref{safe-load-eps} together with \eqref{conv-Dir-loads}, we find that \[ \begin{aligned} &
- \int_a^b \int_0^t \int_\Omega \varrho_{\varepsilon_k} {:} \sig{\dot{w}_{\varepsilon_k}}\;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t \\ & \begin{aligned}
\to & - \int_a^b \int_0^t \int_\Omega \varrho {:} \sig{\dot{w}}\;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t \\ &
= \int_a^b \int_\Omega \varrho(t) : \sig{w(t)} \;\!\mathrm{d} x \;\!\mathrm{d} t - \int_a^b \int_\Omega \varrho(0) : \sig{w(0)} \;\!\mathrm{d} x \;\!\mathrm{d} t + \int_a^b \int_0^t \int_\Omega \dot{\varrho} : \sig{w}\;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t\,, \end{aligned} \end{aligned} \] the last equality due to integration by parts. \par To pass to the limit in the fourth integral term on the r.h.s.\ of \eqref{mech-enbal-rescal} we use that \[
\left| \int_0^t
\int_\Omega \vartheta_{\varepsilon_k} \mathbb{B}_{\varepsilon_k} {:} \dot{e}_{\varepsilon_k} \;\!\mathrm{d} x \;\!\mathrm{d} s \right| \leq \varepsilon_k^\beta \| \vartheta_{\varepsilon_k} \|_{L^2(Q)} \| \dot{e}_{\varepsilon_k} \|_{L^2(Q;\bbM_\mathrm{sym}^{d\times d})} \stackrel{(2)}{\leq} C \varepsilon_k^{\beta-\tfrac12} \to 0 \] for all $t\in [0,T]$, with (2) following from the scaling \eqref{scalingbbB} for $\mathbb{B}_\varepsilon$, and estimates \eqref{aprio-eps-e} and \eqref{aprio-eps-teta}. The fifth integral term on the r.h.s.\ of \eqref{mech-enbal-rescal} tends to zero thanks to estimate \eqref{aprio-eps-u} for $(\dot{u}_{\varepsilon_k})_k$ and to convergence \eqref{conv-Dir-loads} for $(w_{\varepsilon_k})_k$.
Combining \eqref{stress-eps-k}
with \eqref{conv-Dir-loads} we finally show that \[ \int_a^b \int_0^t \int_\Omega \sigma_{\varepsilon_k} {:} \sig{\dot{w}_{\varepsilon_k}} \;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t \to \int_a^b \int_0^t \int_\Omega \sigma {:} \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s \;\!\mathrm{d} t\,. \] In view of all of the above convergences, \eqref{l-e-est} ensues. \par Combining \eqref{u-e-est} and \eqref{l-e-est} we obtain for every $ (a,b)\subset (0,T)$ \[ \begin{aligned} \int_a^b \left( \calQ(e(t)) {+} \mathrm{Var}_\calR (p; [0,t]) \right) \;\!\mathrm{d} t & \leq \int_a^b \Big( \calQ(e_0) - \int_\Omega \varrho(0) : (e_0 {-} \sig{w(0)} ) \;\!\mathrm{d} x + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s \\
& \qquad + \int_\Omega \varrho(t) : (e(t) {-} \sig{w(t)} ) \;\!\mathrm{d} x -\int_0^t\int_\Omega \dot{\varrho}: (e{-} \sig{w}) \;\!\mathrm{d} x \;\!\mathrm{d} s \Big) \;\!\mathrm{d} t\,. \end{aligned} \] Then, by the arbitrariness of $(a,b)\subset [0,T]$, we conclude that for almost all $t\in (0,T)$ there holds \begin{equation} \label{u-e-t-foraat} \begin{aligned}
\calQ(e(t)) {+} \mathrm{Var}_\calR (p; [0,t]) & \leq
\calQ(e_0) - \int_\Omega \varrho(0) : (e_0 - \sig{w(0)} ) \;\!\mathrm{d} x + \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s \\
& \qquad + \int_\Omega \varrho(t) : (e(t) - \sig{w(t)} ) \;\!\mathrm{d} x -\int_0^t\int_\Omega \dot{\varrho}: (e{-} \sig{w}) \;\!\mathrm{d} x \;\!\mathrm{d} s.
\end{aligned}
\end{equation}
\emph{Step $3$: ad the lower energy estimate in \eqref{enbal-5} for almost all $t\in (0,T)$.} We use a by now standard argument (cf.\ \cite{DMFT05, Miel05ERIS}), combining the stability condition \eqref{glob-stab-5} with the previously proved momentum balance \eqref{w-momentum-balance} to deduce that the converse of inequality \eqref{u-e-t-foraat} holds at almost all $t\in (0,T)$. We refer to the proof of \cite[Thm.\ 6]{DMSca14QEPP} for all details. \par\noindent \emph{Step $4$: conclusion of the proof.} It follows from Steps $1$--$3$ that the triple $(u,e,p)$ complies with the kinematic admissibility and the global stability conditions, as well as with the energy balance, at every $t \in S$, with $S\subset [0,T]$ a set of full measure containing $0$. We are then in the position to apply Thm.\ \ref{th:dm-scala} and conclude that $(u,e,p)$ is a global energetic solution to the perfectly plastic system with the enhanced time regularity \eqref{sono-AC}. \par We also conclude enhanced convergences for the sequences $(u_{\varepsilon_k})$ and $(e_{\varepsilon_k})$ by observing that \[ \limsup_{k\to \infty} \int_a^b \left( \text{l.h.s.\ of \eqref{mech-enbal-rescal}} \right) \;\!\mathrm{d} t \leq \limsup_{k\to \infty} \int_a^b \left( \text{r.h.s.\ of \eqref{mech-enbal-rescal}} \right) \;\!\mathrm{d} t \stackrel{(1)}{=} \int_a^b \left( \text{r.h.s.\ of \eqref{enbal-5}} \right) \;\!\mathrm{d} t \stackrel{(2)}{=} \int_a^b \left( \text{l.h.s.\ of \eqref{enbal-5}} \right) \;\!\mathrm{d} t \] where (1) follows from the limit passage arguments in Step $2$ and (2) from the energy balance \eqref{enbal-5}.
Arguing in the very same way as in the proof of Lemma \ref{l:3.6} and Thm.\ \ref{mainth:1}, we conclude that \begin{subequations} \label{evvai-teta} \begin{equation} \label{strong-cvs-eps-1} \begin{aligned} & \lim_{k\to\infty}
\int_a^b \frac{\rho {\varepsilon_k}^2}2 \int_\Omega |\dot{u}_{\varepsilon_k}(t)|^2 \;\!\mathrm{d} x \;\!\mathrm{d} t =0 && \text{whence} && \varepsilon_k \dot{u}_{\varepsilon_k} (t) \to 0 \text{ in } L^2(\Omega;\mathbb{R}^d), \\ & \lim_{k\to\infty}
\int_a^b {\varepsilon_k}\int_0^t\int_\Omega \mathbb{D} \dot{e}_{\varepsilon_k}: \dot{ e}_{\varepsilon_k} \;\!\mathrm{d} x \;\!\mathrm{d} r \;\!\mathrm{d} t =0 && \text{whence} && \varepsilon_k^{1/2} \dot{e}_{\varepsilon_k} \to 0 \text{ in } L^2(0,t; L^2(\Omega;\bbM_\mathrm{sym}^{d\times d})),
\\ &
\lim_{k\to\infty} \int_a^b
{\varepsilon_k}\int_0^t\int_\Omega |\dot{p}_{\varepsilon_k}|^2 \;\!\mathrm{d} x \;\!\mathrm{d} r \;\!\mathrm{d} t =0 && \text{whence} && \varepsilon_k^{1/2} \dot{p}_{\varepsilon_k} \to 0 \text{ in }
L^2(0,t; L^2(\Omega;\bbM_\mathrm{D}^{d\times d})) \end{aligned} \end{equation}
for almost all $t\in (0,T)$, as well as the convergence \begin{equation} \label{strong-cvs-eps-2} \begin{aligned} & \int_a^b\calQ(e_{\varepsilon_k}(t)) \;\!\mathrm{d} t \to \int_a^b\calQ(e(t)) \;\!\mathrm{d} t \quad \text{ whence } \quad e_{\varepsilon_k} (t) \to e(t) \text{ in } L^2(\Omega;\bbM_\mathrm{sym}^{d\times d}) \ \text{for a.a. } t \in (0,T).
\end{aligned} \end{equation} \end{subequations} We use \eqref{evvai-teta} to conclude \eqref{convs-eps-e}. With the very same arguments as in the proof of \cite[Thm.\ 6]{DMSca14QEPP} we also infer the pointwise convergence \eqref{convs-eps-u}. \par Furthermore, exploiting \eqref{evvai-teta}, the weak convergence \eqref{convs-eps-teta} for $(\vartheta_{\varepsilon_k})_k$, and the arguments from Step $2$,
we pass to the limit in the (rescaled) total energy balance \eqref{total-enbal-rescal}, integrated on an arbitrary interval $(a,b) \subset (0,T)$. We thus have \begin{equation} \label{NEW-A}
\lim_{k\to\infty} \int_a^b \left( \tfrac{\rho\varepsilon_k^2}2 \int_\Omega |\dot{u}_{\varepsilon_k}|^2 \;\!\mathrm{d} x {+} \calE(\vartheta_{\varepsilon_k}(t), e_{\varepsilon_k}(t)) \right) \;\!\mathrm{d} t = \int_a^b \calE(\vartheta(t), e(t)) \;\!\mathrm{d} t, \end{equation} whereas, also taking into account \eqref{heat-sources-eps} and \eqref{convs-init-data}, arguing as in Step $2$ we find that \begin{equation} \label{NEW-B} \begin{aligned} \lim_{k\to\infty} \int_a^b \left( \text{r.h.s.\ of \eqref{total-enbal-rescal}} \right) \;\!\mathrm{d} t
& = \int_a^b \Big( \calE(\vartheta_0, e_0)
+\int_0^t \int_\Omega \mathsf{H} \;\!\mathrm{d} x \;\!\mathrm{d} s
+\int_0^t \int_{\partial\Omega} \mathsf{h} \;\!\mathrm{d} S \;\!\mathrm{d} s
\\
& \qquad
- \int_\Omega \varrho(0) : (e_0 {-} \sig{w(0)} ) \;\!\mathrm{d} x
+ \int_0^t \int_\Omega \sigma : \sig{\dot{w}} \;\!\mathrm{d} x \;\!\mathrm{d} s
\\
& \qquad
+ \int_\Omega \varrho(t) : (e(t) {-} \sig{w(t)} ) \;\!\mathrm{d} x -\int_0^t\int_\Omega \dot{\varrho}: (e{-} \sig{w}) \;\!\mathrm{d} x \;\!\mathrm{d} s \Big) \;\!\mathrm{d} t\,.
\end{aligned} \end{equation} Combining \eqref{NEW-A} and \eqref{NEW-B} and using the arbitrariness of the interval $(a,b)$, we conclude the energy balance \eqref{anche-la-temp}. A comparison between \eqref{anche-la-temp} and \eqref{enbal-5} yields \eqref{balance-of-dissipations}.
This concludes the proof of Theorem \ref{mainth:3}. \QED \appendix \section{Auxiliary compactness results} \label{s:a-1}
\noindent
The proof of Theorem \ref{th:mie-theil}, and the argument in Step $1$ of the proof of Thm.\ \ref{mainth:3}, hinge
on a compactness argument drawn from the theory of
\emph{parameterized} (or \emph{Young}) measures with values in an infinite-dimensional space.
Hence,
for the reader's convenience, we preliminarily collect here the definition
of Young measure with values in a \underline{reflexive} Banach space $\mathbf{X}$. We then
recall
the Young measure
compactness result from \cite{MRS2013},
extending to the frame of the
weak topology classical results within Young measure theory (see e.g.\ \cite[Thm.\,1]{Bald84GALS}, \cite[Thm.\,16]{Valadier90}). \par Preliminarily, let us fix some notation: We denote by $\mathscr{L}_{(0,T)}$ the $\sigma$-algebra of the Lebesgue measurable subsets of the interval $(0,T)$ and, given
a reflexive Banach space $\mathbf{X}$,
by $\mathscr B(\mathbf{X})$ its Borel $\sigma$-algebra.
\begin{definition}[\bf (Time-dependent) Young measures]
\label{parametrized_measures}
A \emph{Young measure} in the space $\mathbf{X}$
is a family
$\bfmu:=\{\mu_t\}_{t \in (0,T)} $ of Borel probability measures
on $\mathbf{X}$
such that the map on $(0,T)$ \begin{equation} \label{cond:mea} t \mapsto \mu_{t}(A) \quad \mbox{is}\quad {\mathscr{L}_{(0,T)}}\mbox{-measurable} \quad \text{for all } A \in \mathscr{B}(\mathbf{X}). \end{equation} We denote by $\mathscr{Y}(0,T; \mathbf{X})$ the set of all Young measures in $\mathbf{X} $. \end{definition}
The following result subsumes only part of the statements of \cite[Theorems A.2, A.3]{MRS2013}. We have in fact extracted the crucial findings of these results for the purposes of Theorem \ref{th:mie-theil}, and also for the proof of Thm.\ \ref{mainth:3}. They concern the characterization of the limit points in the weak topology of $L^p (0,T;\mathbf{X})$, $p \in (1,+\infty] $, of a bounded sequence $(\ell_n)_n \subset L^p (0,T;\mathbf{X})$. Every limit point arises as the barycenter of the limiting Young measure $\bfmu=(\mu_t)_{t\in (0,T)}$
associated with (a suitable subsequence $(\ell_{n_k})_k$ of) $(\ell_n)_n$. In turn, for almost all $t\in (0,T)$
the support of the measure $\mu_t$ is concentrated in the set of limit points of $(\ell_{n_k}(t))_k$ with respect to the weak topology of $\mathbf{X}$. \begin{theorem}{\cite[Theorems A.2, A.3]{MRS2013}} \label{thm.balder-gamma-conv}
Let $p>1$ and let
$(\ell_n)_n \subset L^p (0,T;\mathbf{X})$ be a bounded sequence. Then, there exist a subsequence $(\ell_{n_k})_k$ and a Young measure $\bfmu=\{\mu_t\}_{t \in (0,T)} \in \mathscr{Y}(0,T; \mathbf{X})$ such that for a.a. $t \in (0,T)$ \begin{equation} \label{e:concentration} \begin{gathered}
\mbox{$ \mu_{t} $ is
concentrated on
the set
$
\boldsymbol{L}_t: = \bigcap_{s=1}^{\infty}\overline{\big\{\ell_{n_k}(t)\,: \ k\ge s\big\}}^{\mathrm{weak}{\text{-}\mathbf{X}}}$}
\end{gathered}
\end{equation} of the limit points of the sequence $(\ell_{n_k}(t))$ with respect to the weak topology of $\mathbf{X}$ and,
setting
\[
\ell(t):=\int_{\mathbf{X}} l \, \;\!\mathrm{d} \mu_t (l) \qquad \text{for a.a. } t \in (0,T)\,,
\] there holds \begin{equation}
\label{eq:35} \ell_{n_k} \rightharpoonup \ell \ \ \text{ in $L^p (0,T;\mathbf{X})$} \qquad \text{as } k \to \infty \end{equation} with $\rightharpoonup$ replaced by $\weaksto$ if $p=\infty$. \par Furthermore, if $\mu_t = \delta_{\ell(t)}$ for almost all $t\in (0,T)$, then, up to the extraction of a further subsequence, \begin{equation} \label{singleton-appendix} \ell_{n_k} (t)\rightharpoonup \ell(t) \quad \text{in } \mathbf{X} \quad \text{for a.a. } t \in (0,T). \end{equation} \end{theorem} \par We are now in the position to develop the \underline{\textbf{proof of \eqref{enhSav} in Theorem \ref{th:mie-theil}}} (recall that the other items in the statement have been proved in \cite[Thm.\ A.5]{Rocca-Rossi}). Following the outline developed in \cite{Rocca-Rossi} for Thm.\ A.5 therein, we split the argument in some steps. \\ \underline{\textbf{Claim $1$:}} \emph{Let $ F \subset \overline{B}_{1,\mathbf{Y}}(0)$ be countable and dense in $ \overline{B}_{1,\mathbf{Y}}(0)$. There exist a subsequence $(\ell_{n_k})_k$ of $(\ell_n)_n$, a negligible set $\bar{J }\subset (0,T)$, and for every $\varphi \in F$ a function
$\mathscr{L}_\varphi: [0,T] \to \mathbb{R}$ such that the following convergences hold as $k\to\infty$ for every $ \varphi \in F$: \begin{align} \label{mie-th-conv} & \pairing{}{\mathbf{Y}}{\ell_{n_k}(t)}{\varphi} \to \mathscr{L}_\varphi(t)
\quad \text{for every } t \in [0,T], \\ & \label{mie-th-conv-enh} \pairing{}{\mathbf{Y}}{\ell_{n_k}(t_k)}{\varphi} \to \mathscr{L}_\varphi(t)
\quad \text{for every } t \in [0,T] {\setminus} \bar{J} \text{ and for every } (t_k)_k \subset [0,T] \text{ with } t_k \to t.
\end{align}
} Convergence \eqref{mie-th-conv} was already obtained in the proof of \cite[Thm.\ A.5]{Rocca-Rossi}, therefore we will only focus on the proof of \eqref{mie-th-conv-enh}. With every $\varphi \in \overline{B}_{1,\mathbf{Y}}(0)$ we associate the monotone functions $\mathcal{V}_{n}^{\varphi}: [0,T] \to [0,+\infty)$ defined by $\mathcal{V}_{n}^{\varphi}(t) := \mathrm{Var}(\pairing{}{\mathbf{Y}}{\ell_n}{ \varphi}; [0,t] ) $ for every $t\in [0,T]$. Let now $F \subset \overline{B}_{1,\mathbf{Y}}(0)$ be countable and dense and let us consider the family of functions $(\mathcal{V}_{n}^{\varphi})_{n\in \mathbb{N}, \, \varphi \in F}$ and the associated distributional derivatives $(\nu_n^{\varphi})_{n\in \mathbb{N}, \, \varphi \in F}$, in fact Radon
measures on $[0,T]$.
It follows from estimate \eqref{BV-bound}, combined with a diagonalization procedure based on the countability of
$F$, that
there exist a sequence of indices $(n_k)_k$ and for every $\varphi \in F$ a Radon measure $\nu_\infty^\varphi$, such that $\nu_{n_k}^\varphi \weaksto \nu_\infty^\varphi$ as $k\to\infty$. Set $ \mathcal{V}_\infty^{\varphi}(t) : = \nu_\infty^\varphi([0,t])$ for every $t \in [0,T]$. Since the function $ \mathcal{V}_\infty^{\varphi}$
is monotone, it has an at most countable jump set (i.e., the set of atoms of the measure $ \nu_\infty^\varphi$), which we denote by $J_\varphi$. The set $\bar{J}:= \cup_{\varphi \in F} J_\varphi$ is still countable. \par
In order to show that \eqref{mie-th-conv-enh} holds, let us fix $\varphi \in F$. The sequence $(\pairing{}{\mathbf{Y}}{\ell_{n_k}(t_k)}{\varphi})_k$ is bounded for every $\varphi \in F$ and therefore it admits a subsequence (not relabeled, possibly depending on $\varphi$), converging to some $\bar{\ell}_{\varphi} \in \mathbb{R}$. Observe that \[ \begin{aligned}
| \bar{\ell}_{\varphi} - \mathscr{L}_\varphi(t) | = \lim_{k\to\infty} | \pairing{}{\mathbf{Y}}{\ell_{n_k}(t_k)}{\varphi} - \pairing{}{\mathbf{Y}}{\ell_{n_k}(t)}{\varphi} | & \stackrel{(1)}{\leq} \limsup_{k\to\infty} \mathrm{Var}(\pairing{}{\mathbf{Y}}{\ell_{n_k}}{ \varphi}; [t,t_k] ) \\ & = \limsup_{k\to\infty} \nu_{n_k}^\varphi ( [t,t_k] ) \stackrel{(2)}{\leq} \nu_\infty^\varphi(\{t\}) \stackrel{(3)}{=} 0, \end{aligned} \] where (1) follows from supposing (without loss of generality) that $t \leq t_k$ for $k $ sufficiently big, (2) from the upper semicontinuity property of weak$^*$ convergence of measures, and (3) from the fact that $t \notin \bar{J}$ is not an atom for the measure $\nu_\infty^\varphi$. Therefore $\bar{\ell}_{\varphi} = \mathscr{L}_\varphi(t)$ and, a fortiori, one has convergence \eqref{mie-th-conv-enh} along the \emph{whole} sequence of indexes $(n_k)_k$. \par\noindent \underline{\textbf{Claim $2$:}} \emph{Let
$(\ell_{n_k})_k$ be a (not relabeled) subsequence of the sequence from Claim $1$, with which a limiting Young measure $\bfmu= \{\mu_t\}_{t \in (0,T)} \in \mathscr{Y}(0,T; \mathbf{V})$ is associated according to Theorem \ref{thm.balder-gamma-conv}. Then, there exists a negligible set $N\subset (0,T)$ such that for every $t \in (0,T) \setminus N$ the probability measure $\mu_t$ is a Dirac mass $\delta_{\ell(t)}$, with $\ell(t) \in \mathbf{V}$ fulfilling \begin{equation} \label{ident-ell} \pairing{}{\mathbf{Y}}{\ell(t)}{\varphi} = \mathscr{L}_\varphi(t) \qquad \text{for every } \varphi \in F, \end{equation}
and \eqref{weak-ptw-B} holds as $k\to\infty$.} \\ We refer to the proof of \cite[Thm.\ A.5]{Rocca-Rossi} for this Claim. \par\noindent \underline{\textbf{Claim $3$:}} \emph{Set $J: = N {\cup} \bar{J}$. For every $t \in [0,T] \setminus J$ and for every $ (t_k)_k \subset [0,T] $ with $ t_k \to t $ there holds $ \ell_{n_k}(t_k) \rightharpoonup \ell(t)$ in $\mathbf{Y}^*$.} \\ Indeed, the sequence $( \ell_{n_k}(t_k))_k$ is bounded in $\mathbf{Y}^*$, and therefore it admits a (not relabeled) subsequence weakly converging in $\mathbf{Y}^*$ to some $\bar\ell$. It follows from \eqref{mie-th-conv-enh} and \eqref{ident-ell} that $ \pairing{}{\mathbf{Y}}{\bar \ell }{\varphi}= \mathscr{L}_\varphi(t) = \pairing{}{\mathbf{Y}}{\ell(t)}{\varphi} $ for every $\varphi \in F$. Since $F$ is dense in $ \overline{B}_{1,\mathbf{Y}}(0)$, we then conclude that $\bar \ell$ and $\ell(t)$ coincide on all the elements in $ \overline{B}_{1,\mathbf{Y}}(0)$. Hence $\bar \ell = \ell(t)$ in $\mathbf{Y}^*$ and the desired claim follows. This concludes the proof of \eqref{enhSav}. \QED
\end{document}
3D Spherical Plot
Is there an easier way to do it? It seems to me like a PlotVectorField2D(3D) command would be very useful. Making a Contour Plot. How to plot a 3D sphere in MATLAB (3D plotting). Lecture 4: Diffusion: Fick's second law. Today's topics: • Learn how to deduce Fick's second law, and understand its basic meaning, in comparison with the first law. A point P in the plane can be uniquely described by its distance to the origin, r = dist(P, O), and the angle θ, with 0 ≤ θ < 2π. • Learn how to apply the second law in several practical cases, including homogenization and interdiffusion in carburization of steel, where diffusion plays a dominant role. The plots show clearly the nodal planes of the functions. It is another 3D graphing software on this list. The spherical coordinates r, θ, and φ correspond to the radius, the polar angle, and the azimuthal angle, respectively. The x-Axis/Box unit number indicates how far apart the scale marks are on the x-axis, and how far apart scale marks are on the viewing cube box edges that are parallel to the x-axis. Around the time of the 1.0 release, some 3D plotting utilities were built on top of matplotlib's 2D display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization. If there are only two free parameters for a 3D plot, then. In geography. Spherical harmonics example: plot spherical harmonics on the surface of the sphere, as well as a 3D polar plot. type=1: automatically rescales 3D boxes with extreme aspect ratios; the boundaries are specified by the value of the optional argument ebox. Spherical harmonics can be drawn, plotted or represented with a Computer Algebra System such as Mathematica by using the built-in functions SphericalPlot3D[] and SphericalHarmonicY[]. Example gallery: spherical and Cartesian grids; spherical polar plots with 3dplot; stacked discs in 3D; steradian cone in sphere; stereographic and cylindrical map projections; Sudoku 3D cube. b) Write the cone in spherical coordinates. 0001) sage: P = plot3d (f,(-3, 3),(-3, 3.
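To make the sphere recipe above concrete, here is a minimal Python sketch with NumPy and Matplotlib (our own illustration, not code from any of the tools quoted above): sample the two spherical angles on a grid, convert to Cartesian coordinates, and hand the result to plot_surface.

    import numpy as np
    import matplotlib.pyplot as plt

    # Sample the polar angle theta in [0, pi] and the azimuthal angle phi in [0, 2*pi].
    theta, phi = np.meshgrid(np.linspace(0, np.pi, 60), np.linspace(0, 2*np.pi, 120))
    r = 1.0  # unit sphere

    # Spherical -> Cartesian conversion (physics convention: theta polar, phi azimuthal).
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)

    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(x, y, z, color='lightsteelblue')
    ax.set_box_aspect((1, 1, 1))  # equal axis scaling so the sphere is not squashed
    plt.show()

MATLAB's sphere function, mentioned below, produces an analogous (x, y, z) triple for use with surf or mesh.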
From these we calculate a set of x, y, and z that are the coordinates of points on a surface. A Polar Plot is not a native Excel chart type, but it can be built using a relatively simple combination of Donut and XY Scatter chart types. They should also be the eigenfunctions of the Laplacian so that they represent wave-like patterns and that the associated transform is closely related to the normal Fourier transform. The Maps JavaScript API geometry library provides utility functions for the computation of geometric data on the surface of the Earth. Spherical Harmonics. In the simplest case, only the Z coordinates of the points are defined. 3D surface plot of regular data (41*29 points here); 3D surface plot of irregular data (5 points here); creating vertex, grid and surface models in Cartesian, cylindrical and spherical coordinate systems in Visual Data. rayshader is an open source package for producing 2D and 3D data visualizations in R. I've been reading the manual and can't find an answer. Use Wolfram|Alpha to generate plots of functions, equations and inequalities in one, two and three dimensions. The stride arguments are only used by default if in the 'classic' mode. 4 nm and they are dispersed homogeneously within the. SPHERE3D plots 3D data on a spherical surface. In Mathematica, the spherical harmonics are implemented as SphericalHarmonicY[l,m,theta,phi]; in MATLAB they are not built in and will be defined as complex functions in SH.m. Figure 6: Earth mapped using a polar spherical roll-over-theta positioning system. - Rotate and zoom in the graphs in real-time. Energy Levels 4.1 Bound problems 4. Its rich set of features includes: - Plot 2D & 3D functions - Plot implicit equations - Plot parametric equations - Plot inequalities - Plot 3D scatter points - Plot contour graphs - Generate tables of values - Cartesian coordinates - Polar coordinates - Cylindrical coordinates - Spherical coordinates - Import csv & excel coordinates - Import. The Y_{ℓ,m}'s are complex-valued.
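Where, as in the simplest case mentioned above, only the Z coordinates over a rectangular (x, y) grid are given, the surface plot needs no angles at all. A minimal Python sketch (ours; the sample function is arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    # A rectangular (x, y) grid; the surface is determined by its Z values alone.
    x, y = np.meshgrid(np.linspace(-8, 8, 200), np.linspace(-8, 8, 200))
    r = np.sqrt(x**2 + y**2)
    z = np.sinc(r / np.pi)  # np.sinc(t) = sin(pi*t)/(pi*t), so this is sin(r)/r

    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(x, y, z, cmap='viridis', rstride=2, cstride=2)
    plt.show()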
The Matlab function 'sphere' generates the x-, y-, and z-coordinates of a unit sphere for use with 'surf' and 'mesh'. The spherical harmonics describe non-symmetric solutions to problems with spherical symmetry. But there are instances when you know you can be better at storytelling by using 3D plots. I am trying to plot points on the map using the Mapping toolbox. The two are the Local Coordinate System and the World Coordinate System. PyNGL is a Python interface to the high quality 2D scientific visualizations in the NCAR Command Language (NCL). In 3D space, the Cartesian coordinates (x. Before writing a Python program to plot the density, note that I use matplotlib, numpy, and scipy throughout: LAPLACE'S EQUATION IN SPHERICAL COORDINATES. They arise from solving the angular portion of Laplace's equation in spherical coordinates using separation of variables. Conversion between Cartesian and cylindrical coordinates: x = r cos θ, y = r sin θ, z = z; conversely r = √(x² + y²), tan θ = y/x, z = z. They arise in many practical situations, notably atomic orbitals, particle scattering processes and antenna radiation patterns. This again allows us to compare the relationship of three variables rather than just two. The third column shows the reconstruction up to l_max = 6, which already captures rather well the overall shape. How to Create Datum Points in Creo Parametric. Datum points are most commonly used for creating other datum entities, such as an axis, plane or curve. Every point in space is assigned a set of spherical coordinates of the form (ρ, θ, φ). In case you're not in a sorority or fraternity, ρ is the lowercase Greek letter rho […]. For details, type help set mapping. I have spherical data (a function depending on $\theta$ and $\phi$) that I want to plot using a color scale on a sphere. 3D Graphics in MATLAB: we'll introduce different types of plotting in 3D. Plot Stacked Contour Plots. I'd like to plot it so that each element of that list is using a different color (red. Graph a Cartesian surface or space curve. The surfaces can be defined as functions of a 2D grid. e) Write the two-sheeted hyperboloid in spherical coordinates. Spherical Waves from a Point Source. The leftmost plot shows the generated image, the center shows a 3D render of the image, taking intensity values as height of a 3D surface, and the right one shows the shape index (s). Then check the checkbox in the upper-left corner of this object to plot the default vector field. The fifth 3d orbital, \(3d_{z^2}\), has a distinct shape even though it is mathematically equivalent to the others. Do you have any plans to extend 3D capabilities to allow representing 3D parametric equations? Also, like 2D polar coordinates, in conjunction with 3D it would be nice to be able to represent equations in cylindrical or spherical coordinates. Darling, Nicholas R. H is the cylinder height. The Kriging tool fits a mathematical function to a specified number of points, or all points within a specified radius, to determine the output value for each location.
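Since, as noted above, the spherical harmonics are built into Mathematica but not into MATLAB, it is worth recording that in Python they come with SciPy. A hedged sketch of the classic lobed plot (radius = |Re Y|; mind SciPy's argument order):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import cm
    from scipy.special import sph_harm

    l, m = 3, 2
    theta, phi = np.meshgrid(np.linspace(0, np.pi, 100), np.linspace(0, 2*np.pi, 200))
    Y = sph_harm(m, l, phi, theta)  # SciPy order: (m, l, azimuthal, polar)

    rho = np.abs(Y.real)            # radius of the lobes
    x = rho * np.sin(theta) * np.cos(phi)
    y = rho * np.sin(theta) * np.sin(phi)
    z = rho * np.cos(theta)

    ax = plt.figure().add_subplot(projection='3d')
    norm = plt.Normalize(Y.real.min(), Y.real.max())
    ax.plot_surface(x, y, z, facecolors=cm.RdYlGn(norm(Y.real)))  # red negative, green positive
    plt.show()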
over a 2-D rectangular region in the x-y plane. On a linear spherical flow plot, data exhibiting a negative unit slope are indicative of spherical flow conditions. I don't have programs like Mathematica or Maple that plot vector fields out of the box; the ones I use are Maxima and Scilab for symbolic/numeric work, and none of them can easily plot 3D vector fields, so if someone can plot this one for me, I will appreciate it: $$ \left( \frac{\cot (\theta)}{r^3},\frac{1}{r^3},0 \right)$$ It's in spherical coordinates. Use spherical coordinates. License GPL-2, Depends rgl, NeedsCompilation no, Repository CRAN. generates a 3D plot with a spherical radius r as a function of spherical coordinates θ and ϕ. Generate a 2D plot of a polynomial function (the interval notation of {x, min, max} defines the domain). In the spherical coordinate system the polar or colatitudinal angle varies from 0 to π and the azimuthal or.
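In the SphericalPlot3D style just described — the spherical radius r given as a function of θ and ϕ — the same effect in Python is one conversion away (a sketch, ours; the sample radius function is arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    theta, phi = np.meshgrid(np.linspace(0, np.pi, 150), np.linspace(0, 2*np.pi, 300))
    r = 1 + 0.3 * np.sin(3 * phi) * np.sin(theta)  # any nonnegative r(theta, phi)

    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(x, y, z, cmap='coolwarm')
    plt.show()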
At the lower part of the model there is a good coincidence between surfaces, lines and textures, while the upper part. The 50 Examples. 3D surface plot of regular data (41*29 points here); 3D surface plot of irregular data (5 points here); creating vertex, grid and surface models in Cartesian, cylindrical and spherical coordinate systems in Visual Data. The S2PLOT library was written in C and can be used with C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX. Cartesian and Polar Coordinates. 3D Wave Equation and Plane Waves / 3D Differential Operators. Overview and motivation: we now extend the wave equation to three-dimensional space and look at some basic solutions to the 3D wave equation, which are known as plane waves. Spherical Coordinates. They arise in many practical situations, notably atomic orbitals, particle scattering processes and antenna radiation patterns. The spherical coordinate system is also commonly used in 3D game development to rotate the camera around the player's position. Spherical projections are used to display three dimensional directional data by projecting the surface of a sphere, or hemisphere, onto a plane. This can help you visualize the plot from different angles. A "spherical lens" is a lens whose surface has the shape of (part of) the surface of a sphere. When you hit the calculate button, the demo will calculate the value of the expression over the x and y ranges provided and then plot the result as a surface. a: 1D seismic data; 3C seismic data; 4C seismic data; m: 1-D seismic data; 3D seismic data; 4D seismic. 62/428,029 filed Nov. 6 Wave equation in spherical polar coordinates. We now look at solving problems involving the Laplacian in spherical polar coordinates. There are 2 angles which set the segment of the sphere's shell over which pressure data will be.
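The camera-orbit use of spherical coordinates in 3D games, mentioned above, boils down to the same conversion. A minimal sketch (function and parameter names are ours):

    import numpy as np

    def orbit_camera(target, radius, azimuth, elevation):
        # Camera position on a sphere around `target`; angles in radians.
        x = target[0] + radius * np.cos(elevation) * np.cos(azimuth)
        y = target[1] + radius * np.cos(elevation) * np.sin(azimuth)
        z = target[2] + radius * np.sin(elevation)
        return np.array([x, y, z])

    # Mouse-drag deltas typically just increment azimuth/elevation:
    eye = orbit_camera(np.zeros(3), radius=5.0, azimuth=0.8, elevation=0.3)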
However,a small amount of code allows us to draw one using the identities between Cartesian and polar coordinate systems:. This means we need to introduce a new variable j in order to describe the rotation of the pendulum around the z -axis. A 3D plot will be created. 0 release, some three-dimensional plotting utilities were built on top of Matplotlib's two-dimensional display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization. The absolute values are meaningless because the functions are not normalized and accordingly the normalization factors are omitted from their definitions. Here are some results. The spherical coordinate system is also commonly used in 3D game development to rotate the camera around the player's position. In this example, a spherical harmonic is rendered, where the fill hue represents the complex phase angle. See the documentation for spherical_to_xyz. Make wire-framed surfaces 3D] 3D surface plots [MATLAB: surf, shading, surfc, surfl, surfnorm,. Can any one please help me to get a smooth line in Spherical 3D plot. In my plot code I used the With[] function to change coordinates. of MD, College Park, MD 20742 ABSTRACT Capturing and reproducing 3D audio has several applica-tions. Compass Labels on Polar Axes. 4 and later). We have two solutions for the 3D harmonic oscillator, one is in Cartesian coordinate, and the other is in spherical coordinate. 3D spherical histogram. In fact, I'm one of those who try to avoid adding too much to a plot that the plot loses its own plot. The agreement was described using Bland and Altman plots. If you want to draw arbitrary parametric surfaces in spherical coordinates go to Parametric Surfaces in Spherical Coordinates. With an easy-to-use syntax, his code supports the creation of several standard (and some custom) plot types, including surface plots (with and without lines, and with and without contours) and mesh plots (with and without contours). Set at the dawn of time, when prehistoric creatures and woolly mammoths roamed the earth, Early Man tells the story of Dug, along with sidekick Hognob as they unite his tribe against a mighty enemy Lord Nooth and his Bronze Age City to save their home. Spherical Coordinates. We fabricate a polycaprolactone (PCL) scaffold with interconnecting pores and uniform porosity for cell ingrowth using a 3D plotting system. m In the spherical coordinate system the polar or colatitudinal angle varies as 0 to pi and the azimuthal or. The origin of the spherical node becomes clear if we examine the wave equation, which includes a (2 - ρ) term. But for a simple sphere, the value of the drag coefficient varies widely with Reynolds number as shown on the figure at the top of this page. The plots show clearly the nodal planes of the functions. b) Write the cone in spherical coordinates. The original reason for developing this was to convert video content from the LadyBug-3 camera (spherical projection) to a suitable image for a 360 cylindrical display. Examples: sinc exp dome. I have spherical datas (function depending on $\theta$ and $\phi$ that I want to plot using a color scale on a sphere). Kinematic Synthesis of Planar, Spherical and Spatial Linkages Recent results in the design of 4R closed chains use the "configuration manifold" of the movement of the coupler link to guide the selection of task positions, such that they are consistent with smooth movement of the linkage. How to make interactive 3D surface plots in R. 
SphericalDensityPlot3D is a plotting routine, that makes a density plot on a spherical surface e. e) Write the two-sheeted hyperboloid in spherical coordinates. Mayavi is a general 3D visualisation tool, for which python wrappers are available. m permits the creation of 3d plots under octave by means of geomview There are also some test/examples and a minimal help. To plot multiple sets of coordinates on the same set of axes, specify at least one of X , Y , or Z as a matrix and the others as vectors. 2 Trend and plunge: The. Spherical well scattering Consider the 3D spherical potential well: V(r) = 0 for r > a V(r) = -V 0 for r < a. Spherical coordinates are used — with slight variation — to measure latitude, longitude, and altitude on the most important sphere of them all, the planet Earth. 0 release, some 3D plotting utilities were built on top of matplotlib's 2D display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization. Change the relationship between surface plot data and the colormap. completely round, like a ball. In this example, a spherical harmonic is rendered, where the fill hue represents the complex phase angle. Typically, because it is simpler, the radiation patterns are plotted in 2-d. © 2016 CPM Educational Program. The diagram shows a cross-section through this spherical space. The wireframe mesh is plotted using rectangles. Plotting on the Sphere Grady Wright Contents Longitude-Latitude plots Plots using the Hammer projection 3D plots on the sphere 3D Plots from triangulations: Vector elds Plots in Longitude-Latitude 3D plots on the sphere In this tutorial we review some techniques for plotting scalar-vauled functions and vector elds on the surface of the sphere. An Example of Plotting Spheres in Matlab This example will produce this 3-D plot. 3D Vector Plotter. The concrete form of the angular and radial. This is the distance from the origin to the point and we will require \(\rho \ge 0\). The surfaces can be defined as functions of a 2D grid. All this is about the new plotting module and mostly covers the matplotlib backend. In the 3D coordinate system there is a third axis, and in equations there is a third variable. We just take the magnitude of the vector (aka the distance of the point from the origion) and we are done. Labriola, and Edith Mathiowitz and provisional application No. Plot a revolution around an axis. for plotting some angular dependence of a function The usage is intuitively and very similar to a normal 2D-density plot and besides many other options there is also the possibilty to apply contours to the plot. The coords option allows the user to alter this coordinate system. Again we'll use inline plotting, though it can be useful to skip the "inline" backend to allow interactive manipulation of the plots. a) Write the sphere in spherical coordinates. Use the commands mentioned above and below to create your variables x, y or x,y,z. The user can choose to input the spherical coordinates in degrees or radians. 5 mmol g −1 was obtained for 13X zeolite beads. In a spherical pressure wave of radius , the energy of the wavefront is spread out over the spherical surface area. EXAMPLES: sage: def f (x, y): return math. The 'Excel 3D Scatter Plot' macros may not be sold or offered for sale, or included with another software product offered for sale. Tutorial for Mathematica & Wolfram Language. The wireframe mesh is plotted using rectangles. 
First , here are a few 2D parametric curves (made with Mathematica) : Now for other types of curves. You get the function values of z by using element by element operations on matrices xx and yy. Yes looked at the manual. After plotting the third sphere, execute the command hidden off. To a good first approximation, wave energy is conserved as it propagates through the air. Spherical trig has some pleasing geometry and seems quite useful. in this step, you can choose the most convenient 3d coordinate system according to the nature of your 3d objects. The Y ℓ,m 's are complex valued. The plots show clearly the nodal planes of the functions. This module has some limitations and is not actively developed anymore. Privacy Policy. I have a set of (x,y,z) data that I have converted to spherical coordinates. In Mathematica program the spherical harmonics are implemented as SphericalHarmonicY[l,m,theta,phi]; in matlab they are not built in and will be defined as complex functions in SH. meshgrid to make 2D arrays for PHI and THETA instead of R and THETA (or what the 3D polar plot example calls P). Thank you for the help. Multivariable Calculus Tools Home. Visual Math 4D is a graphical calculator that allows you to visualize and solve your mathematical equations. The spherical coordinate system is a coordinate system for representing geometric figures in three dimensions using three coordinates,$ (\rho,\phi,\theta)$, where$\rho$ represents the radial distance of a point from a fixed origin,$\phi$ represents the zenith angle from the positive z-axis. There are many interesting equation plots , I'll try to show some examples. I am trying to plot points on the map using the Mapping toolbox. When you enter coordinate values, you indicate a point's distance and its direction (+ or -) along the X, Y, and Z axes relative to the coordinate system origin (0,0,0). The Radius controls the spherical radius. completely round, like a ball. Use the commands mentioned above and below to create your variables x, y or x,y,z. Cartesian to Spherical coordinates. Define to be the azimuthal angle in the - plane from the x-axis with (denoted when referred to as the longitude ), to be the polar angle (also known as the zenith angle and colatitude, with where is the latitude ) from the positive z-axis with , and to be distance ( radius ) from a point to the origin. I just learned 3d rectangular graphs in pgf-plots, and now I need to graph in spherical coordinates. Using spherical coordinates $(\rho,\theta,\phi)$, sketch the surfaces defined by the equation $\rho=1$, $\rho=2$, and $\rho=3$ on the same plot. The x-Axis/Box unit number indicates how far apart the scale marks are on the x -axis, and how far apart scale marks are on the viewing cube box edges that are parallel to the x -axis. Uses the reversed version of the YlGnBu color map. ) Change the size and dimensions of any object. We typically think of r as a function of theta and phi, where theta is the familiar angle in the x-y plane from polar coordinates and phi is the angle from the north pole. 3-D plotting in cylindrical & spherical coordinates. January 21, 2013 at 10:16 AM Nyrath. 4)) and pyglet. Labriola, and Edith Mathiowitz and provisional application No. This process allows AVARTANA METAL POWDERS to produce spherical, not oxidised particles with a homogeneous chemical com- position. LAPLACE'S EQUATION IN SPHERICAL COORDINATES. 
3d parametric plot (cos t, sin 2t, sin 3t) Draw a parametric surface in three dimensions: 3d parametric plot (cos u, sin u + cos v, sin v), u=0 to 2pi, v=0 to 2pi. 1 and LiveGraphics3D by Martin Kraus. A Polar Plot is not a native Excel chart type, but it can be built using a relatively simple combination of Donut and XY Scatter chart types. This means we need to introduce a new variable j in order to describe the rotation of the pendulum around the z -axis. In this example, a spherical harmonic is rendered, where the fill hue represents the complex phase angle. shagC: Computes spherical harmonic analysis of a scalar field on a gaussian grid via spherical harmonics. Cartesian, spherical and cylindrical coordinates can be transformed into each other. Translation of an excerpt of a fourth century geometry text. This means that:( 3. 6 Cylindrical and Spherical Coordinates A) Review on the Polar Coordinates The polar coordinate system consists of the origin O;the rotating ray or half line from O with unit tick. To use the application, you need Flash Player 6 or higher. Plots in 2D. Then check the checkbox in the upper-left corner of this object to plot the default vector field. To customize the size of this region, adjust the horizontal and vertical dividers between panes. A plot of a function expressed in spherical coordinates, with radius as a function of angles and. Polar plots can be drawn using SphericalPlot3D[r, phi, phimin, phimax, theta, thetamin, thetamax]. The spherical coordinate system is also commonly used in 3D game development to rotate the camera around the player's position. we deign Beautiful Big PLot Corner House plan Design with , 3 story, on Ground floor have 3 Bed with closet + attached Bath, with stylish partition, also have rent out 1st floor feature, also have 1 guest bed room on first floor with seprate way, on ground floor can partition with double height entrance with Bridge way , in this project 7 Bed room with new feature like a BBQ area, Fire place. The same pattern in Figure 1 is plotted in Figure 2. SphericalPlot3D [ r , { θ , θ min , θ max } , { ϕ , ϕ min , ϕ max } ] generates a 3D spherical plot over the specified ranges of spherical coordinates. Is there something like this that I am unaware of?. 3D spherical histogram. I just learned 3d rectangular graphs in pgf-plots, and now I need to graph in spherical coordinates. Visualizing the spherical harmonics Visualising the spherical harmonics is a little tricky because they are complex and defined in terms of angular co-ordinates, $(\theta, \phi)$. Ideally a spherical list plot would be ideal, but it seems like the current list plot is only cartesian. Maths Geometry Graph plot vector. This example requires scipy. With Applications to Electrodynamics. I think I can use spherical co-ordinates as my Euler angles. This means we need to introduce a new variable j in order to describe the rotation of the pendulum around the z -axis. My Question is about acquiring plot in spherical coordinates. Plotting on the Sphere Grady Wright Contents Longitude-Latitude plots Plots using the Hammer projection 3D plots on the sphere 3D Plots from triangulations: Vector elds Plots in Longitude-Latitude 3D plots on the sphere In this tutorial we review some techniques for plotting scalar-vauled functions and vector elds on the surface of the sphere. The Spherical Grapher applet asks for a maximum value for r to give a viewing window. 1 degree steps. 
For 2D plotting there are basic line plots, series based plots such as stacked/clustered bars, statistical plots such as histograms and box plots, and contour plots. Download Flash Player. meshgrid to make 2D arrays for PHI and THETA instead of R and THETA (or what the 3D polar plot example calls P). The fifth 3d orbital, \(3d_{z^2}\), has a distinct shape even though it is mathematically equivalent to the others. The LUT is actually stored as a numpy 1801*3601 2D array indexed by theta and phi respectively in 0. EXAMPLES: sage: def f (x, y): return math. The leftmost plot shows the generated image, the center shows a 3D render of the image, taking intensity values as height of a 3D surface, and the right one shows the shape index (s). Simple way how vizualize 3D charts, plots, graphs and other XYZ coordinates in Excel. The Matlab function 'sphere' generates the x-, y-, and z-coordinates of a unit sphere for use with 'surf' and 'mesh'. Spherical harmonics are very tricky to visualise in 3D. Deep Learning 3D Shape Surfaces Using Geometry Images 225 [11] (see Fig. Canyon example Retrieve radar data from the NASA and plot a view of the Grand Canyon landscape. | CommonCrawl |
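A minimal matplotlib sketch along the lines above; the radius function r(θ, φ) here is an arbitrary illustrative choice, not taken from any of the sources quoted in these notes:

import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, np.pi, 100)        # polar angle, 0..pi
phi = np.linspace(0, 2 * np.pi, 200)      # azimuthal angle, 0..2*pi
THETA, PHI = np.meshgrid(theta, phi)      # 2D grids over the angles, not over r

# Example radius: unit sphere modulated by an angular harmonic.
R = 1 + 0.3 * np.sin(3 * THETA) * np.cos(4 * PHI)

# Standard spherical-to-Cartesian conversion.
X = R * np.sin(THETA) * np.cos(PHI)
Y = R * np.sin(THETA) * np.sin(PHI)
Z = R * np.cos(THETA)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_box_aspect((1, 1, 1))              # equal aspect ratio so the sphere is not distorted
plt.show()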
\begin{document}
\title{Quantitative Estimates in Reiterated Homogenization}
\author{Weisheng Niu\thanks{Supported by the NSF of China (11971031, 11701002).} , Zhongwei Shen\thanks{Supported in part by NSF grant DMS-1856235.}, and Yao Xu \thanks{Supported by China Postdoctoral Science Foundation (2019TQ0339).} } \maketitle \pagestyle{plain}
\begin{abstract} This paper investigates quantitative estimates in the homogenization of second-order elliptic systems with periodic coefficients that oscillate on multiple separated scales. We establish large-scale interior and boundary Lipschitz estimates down to the finest microscopic scale via iteration and rescaling arguments. We also obtain a convergence rate in the $L^2$ space by the reiterated homogenization method.
\noindent \textbf{Keywords}: Reiterated homogenization; Convergence rates; Large-scale regularity estimates
\noindent{\textbf{AMS Subject Classification (2010)}: 35B27, 74Q05 }
\end{abstract}
\section{Introduction}\label{section-1}
In this paper we investigate quantitative estimates in the homogenization of elliptic systems with periodic coefficients that oscillate on multiple separated scales. More precisely, consider the $m\times m$ elliptic system in divergence form, \begin{equation}\label{eq11} \mathcal{L}_\varepsilon (u_\varepsilon) =F \end{equation} in a bounded domain $\Omega\subset \mathbb{R}^d$ $(d\ge 2)$, where \begin{equation}\label{operator}
\mathcal{L}_\varepsilon= -\text{\rm div} \big( A^\varepsilon (x)\nabla \big)= -\text{\rm div} \big( A(x, x/\varepsilon_1 , x/\varepsilon_2, \dots, x/\varepsilon_n )\nabla \big),
\end{equation} and $\{ 0< \varepsilon_n <\varepsilon_{n-1}<\cdots< \varepsilon_1< 1 \}$ represents a set of $n$ ordered lengthscales, all depending on a single parameter $\varepsilon$. We assume that the coefficient tensor $A=A(x, y_1, y_2, \dots, y_n)$ is real, bounded measurable, and satisfies the ellipticity condition, \begin{align}
\|A\|_{L^\infty( \mathbb{R}^{d\times (n+1)})}\leq \frac{1}{\mu} \quad \text{ and } \quad
\mu |\xi|^2\le \langle A \xi, \xi \rangle \label{elcon} \end{align} for any $\xi \in \mathbb{R}^{m\times d}$, where $\mu>0$, and the periodicity condition \begin{align} A(x,y_1+z_1, \cdots, y_n+z_n)=A(x,y_1,\cdots,y_n) \quad\text{for any } (z_1, \cdots, z_n)\in \mathbb{Z}^{d\times n} \label{pcon}. \end{align} We also impose the H\"older continuity condition on $A$: there exist constants $L\ge 0$ and $ 0<\theta \leq 1$ such that \begin{align}\label{lipcon}
|A(x,y_1,\cdots,y_{n-1},y_n)-A(x',y_1',\cdots,y'_{n-1},y_n)|\leq L \Big\{|x-x'|+\sum_{\ell =1}^{n-1}
|y_\ell -y'_\ell | \Big\}^\theta \end{align} for $ x,x', y_1, \dots, y_n, y_1^\prime, \dots, y_{n-1}^\prime\in \mathbb{R}^d$. Note that no continuity condition is needed for the last variable $y_n$.
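For instance, all of the conditions \eqref{elcon}, \eqref{pcon}, and \eqref{lipcon} (with $\theta=1$) are satisfied by a separated product
\begin{equation*}
A(x, y_1, \dots, y_n) = a_0(x)\, a_1(y_1) \cdots a_{n-1}(y_{n-1})\, b(y_n)\, I,
\end{equation*}
where $I$ is the identity matrix, $a_0, a_1, \dots, a_{n-1}$ are Lipschitz and bounded between two positive constants, $a_1, \dots, a_{n-1}$ and $b$ are 1-periodic, and $b$ is merely bounded measurable with $b\ge c_0>0$.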
Homogenization problems with multiscale structures were first considered in the 1930s
by Bruggeman \cite{b1935}.
In the case where $\varepsilon_k=\varepsilon^k$ for $1\le k\le n$, the qualitative homogenization theory for $\mathcal{L}_\varepsilon$ in (\ref{operator}) was established in the 1970s by Bensoussan, Lions, and Papanicolaou \cite{lions1978}. Let $u_\varepsilon$ be a weak solution of the Dirichlet problem, \begin{equation}\label{DP} \mathcal{L}_\varepsilon (u_\varepsilon) =F \quad \text{ in } \Omega \quad \text{ and } \quad u_\varepsilon =f \quad \text{ on } \partial\Omega. \end{equation} Assume that $A$ satisfies (\ref{elcon})-(\ref{pcon}) and some continuity condition. It is known that $u_\varepsilon$ converges weakly in $H^1(\Omega)$ to the solution $u_0$ of the homogenized problem, \begin{equation}\label{DP-0} \mathcal{L}_0 (u_0) =F \quad \text{ in } \Omega \quad \text{ and } \quad u_0 =f \quad \text{ on } \partial\Omega, \end{equation} where $\mathcal{L}_0 =-\text{\rm div} \big( \widehat{A} (x)\nabla \big)$ is a second-order elliptic operator. The effective tensor $\widehat{A}(x)$ is obtained by homogenizing separately and successively the different scales, starting from the finest one $\varepsilon_n $, as follows. One fixes $(x, y_1, \dots, y_{n-1})$ and homogenizes the last variable $y_n=x/\varepsilon_n $
in $A_n=A(x, y_1, \dots, y_n)$
to obtain $A_{n-1} (x, y_1, \dots, y_{n-1})$.
Repeat the same procedure on $A_{n-1}$ to obtain $A_{n-2}$, and continue until one arrives at $A_0(x)$, which is $\widehat{A}(x)$. This process, in which at each step the standard homogenization is performed on an operator with a parameter,
is referred to in \cite{lions1978} as reiterated homogenization.
For more recent work in the reiterated homogenization theory and its applications, we refer the reader to \cite{Allaire1996Multiscale,lions2000,lions2001,luk2002,nonrei2005,past07, luk2008, past2013, past2016} and their references. In particular, using the method of multiscale convergence,
Allaire and Briane \cite{Allaire1996Multiscale} obtained qualitative results for $\mathcal{L}_\varepsilon$
in a general case
under the condition of separation of scales,
\begin{equation}\label{s-condition}
\varepsilon_1\to 0 \quad \text{ and } \quad
\varepsilon_{k+1}/\varepsilon_k \to 0\quad \text{ for } 1\le k\le n-1 , \text{ as } \varepsilon\to 0.
\end{equation}
This paper is devoted to the quantitative homogenization theory for the operator $\mathcal{L}_\varepsilon$ and concerns problems of convergence rates and large-scale regularity estimates. We point out that in the case $n=1$, where $A^\varepsilon (x)=A(x/\varepsilon)$ or $A(x, x/\varepsilon)$, major progress has been made in quantitative homogenization in recent years. We refer the reader to \cite{al87, suslinaD2013, klsj2013, klsc2014, armstrongcpam2016, shenan2017, nsx, shennote2017} and their references
for the periodic case, and to \cite{Gloria2015, armstrongan2016, armstrongar2016, Gloria2017, armstrong-book} and their references for quantitative homogenization in the stochastic setting. The primary purpose of this paper is to extend quantitative estimates in periodic homogenization for $n=1$ to the case $n>1$, where the operator $\mathcal{L}_\varepsilon$ is used to model a composite medium with several microscopic scales.
Our main results are given in the following two theorems. We establish the large-scale interior and boundary Lipschitz estimates down to the finest scale $\varepsilon_n$, assuming that the scales $0<\varepsilon_n<\varepsilon_{n-1} < \cdots< \varepsilon_1< \varepsilon_0=1$ are well-separated in the sense that there exists a positive integer $N$ such that \begin{equation}\label{w-s-cond} \left( \frac{\varepsilon_{k+1}}{\varepsilon_k} \right)^N \le \frac{\varepsilon_k}{\varepsilon_{k-1}} \quad \text{ for } 1\le k\le n-1. \end{equation} In particular, this includes the case where $\varepsilon_k=\varepsilon^{\lambda_k}$ with
$\lambda_0 =0 < \lambda_1< \lambda_2< \dots < \lambda_n<\infty$ and $0<\varepsilon\le 1$, but excludes the case $(\varepsilon_1, \varepsilon_2)=(\varepsilon, \varepsilon ( |\log \varepsilon | +1)^{-1} )$.
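Indeed, for $\varepsilon_k=\varepsilon^{\lambda_k}$ and $0<\varepsilon\le 1$, the condition (\ref{w-s-cond}) reads $\varepsilon^{N(\lambda_{k+1}-\lambda_k)} \le \varepsilon^{\lambda_k-\lambda_{k-1}}$, i.e., $N(\lambda_{k+1}-\lambda_k)\ge \lambda_k-\lambda_{k-1}$, which holds for any integer $N\ge \max_{1\le k\le n-1} (\lambda_k-\lambda_{k-1})/(\lambda_{k+1}-\lambda_k)$. In the excluded example one would need $(|\log \varepsilon|+1)^{-N}\le \varepsilon$, which fails for every fixed $N$ as $\varepsilon \to 0$.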
\begin{theorem}\label{lipth} Suppose that $A$ satisfies conditions \eqref{elcon}, \eqref{pcon}, and \eqref{lipcon} for some $0<\theta \le 1$. Also assume that $0<\varepsilon_n<\varepsilon_{n-1}<\dots< \varepsilon_1 <\varepsilon_0=1$ and (\ref{w-s-cond}) holds. For $B_R=B(x_0, R)$ with $0<\varepsilon_n < R\le 1$, let $u_\varepsilon\in H^1(B_R; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon ( u_\varepsilon) =F$ in $B_R$,
where $F\in L^p(B_R; \mathbb{R}^m)$ for some $p>d$.
Then for $0<\varepsilon_n \leq r< R$, \begin{align}\label{relipth}
\left(\fint_{B_r} |\nabla u_\varepsilon|^2\right)^{1/2}\leq C \left\{ \left(\fint_{B_R}
| \nabla u_\varepsilon|^2\right)^{1/2} +
R\left(\fint_{B_R} |F |^p\right)^{1/p}\right\}, \end{align} where $C$ depends at most on $d$, $n$, $m$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon}, and $N$ in (\ref{w-s-cond}). \end{theorem}
Let $\Omega$ be a bounded domain in $\mathbb{R}^d$. Define $D_r=D(x_0, r)=B(x_0, r)\cap \Omega$ and $\Delta_r=\Delta (x_0, r)=B(x_0, r) \cap \partial\Omega$, where $x_0\in \partial\Omega$ and $0<r<\text{\rm diam} (\Omega)$.
\begin{theorem}\label{blipth} Assume that $A$ and $(\varepsilon_1, \varepsilon_2, \dots, \varepsilon_n)$ satisfy the same conditions as in Theorem \ref{lipth}. Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$. Let $u_\varepsilon\in H^1(D_R; \mathbb{R}^m)$ be a weak solution to $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $D_R$ and $u_\varepsilon=f$ on $\Delta_R$, where $\varepsilon_n < R\leq 1$, $F\in L^p(D_R; \mathbb{R}^m)$ for some $p>d$, and $f\in C^{1,\nu}(\Delta_R)$ for some $0<\nu\le \alpha$.
Then for $0<\varepsilon_n \leq r< R$, \begin{align}\label{reblipth}
\left(\fint_{D_r} |\nabla u_\varepsilon|^2\right)^{1/2}\leq C \left\{ \left(\fint_{D_R}
| \nabla u_\varepsilon|^2\right)^{1/2} +
R\left(\fint_{D_R} |F |^p\right)^{1/p}+ R^{-1} \|f \|_{{C}^{1,\nu}(\Delta_R)}\right\}, \end{align} where $C$ depends at most on $d$, $m$, $n$, $\mu$, $p$, $\nu$, $(\theta, L)$ in \eqref{lipcon}, $N$ in (\ref{w-s-cond}), and $\Omega$. \end{theorem}
\begin{remark} {\rm Under the additional assumption that $A=A(x, y_1, \dots, y_n)$ is also H\"older continuous in $y_n$, estimates (\ref{relipth}) and (\ref{reblipth}) imply the uniform pointwise interior and boundary Lipschitz estimates for $u_\varepsilon$, respectively. To see this, one introduces a dummy variable $y_{n+1}$
and considers the tensor $\widetilde{A}(x, y_1,\dots, y_n, y_{n+1})=A(x, y_1, \dots, y_n)$. Since $\varepsilon_{n+1}$ may be arbitrarily small, it follows that the inequalities (\ref{relipth}) and (\ref{reblipth}) hold for any $0<r<R\le 1$. By letting $r\to0$ we see that $|\nabla u_\varepsilon (x_0)|$ is bounded by the right-hand sides of the inequalities. } \end{remark}
\begin{remark} {\rm In the case $A^\varepsilon(x) =A(x/\varepsilon)$, Theorems \ref{lipth} and \ref{blipth} were proved by Avellaneda and Lin in a seminal paper \cite{al87} by using a compactness method. The boundary Lipschitz estimate in Theorem \ref{blipth} was extended in \cite{klsj2013} to solutions with Neumann conditions. Also see \cite{armstrongcpam2016} for operators with almost-periodic coefficients and \cite{armstrongan2016, armstrongar2016} for large-scale Lipschitz estimates in stochastic homogenization. Our results for $n>1$ are new even in the case $A^\varepsilon (x)=A(x/\varepsilon, x/\varepsilon^2)$. } \end{remark}
We now describe our approach to the proof of Theorem \ref{lipth}; the same approach works equally well for Theorem \ref{blipth}. The proof is divided into two steps. In the first step we prove the estimate (\ref{relipth}) for the case $\varepsilon_1 \le r<R\le 1$. To do this, we use a general approach developed in \cite{armstrongan2016} by Armstrong and Smart (also see \cite{ armstrongcpam2016,armstrongar2016}), which reduces the large-scale Lipschitz estimates to a problem of approximating solutions of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ by solutions of $\mathcal{L}_0 (u_0)=F$ in the $L^2$ norm. Given $u_\varepsilon$, to find a good approximation $u_0$, we use the idea of reiterated homogenization and introduce a (finite) sequence of approximations as follows. One first approximates $u_\varepsilon$ by solutions of $-\text{\rm div} \big( A_{n-1}^\varepsilon (x)\nabla u_{\varepsilon, n-1}\big)=F$, where $A_{n-1}^\varepsilon (x) =A_{n-1} (x, x/\varepsilon_1, \dots, x/\varepsilon_{n-1} )$ and $A_{n-1} (x, y_1, \dots, y_{n-1})$ is the effective tensor for $A_n=A(x, y_1, \dots, y_{n-1}, y_n)$, with $(x, y_1, \dots, y_{n-1})$ fixed as parameters. The function $u_{\varepsilon, n-1}$ is then approximated by a solution of $-\text{\rm div} \big( A_{n-2}^\varepsilon (x) \nabla u_{\varepsilon, n-2} \big)=F $, where $A^\varepsilon_{n-2} (x) =A_{n-2} (x, x/\varepsilon_1, \dots, x/\varepsilon_{n-2} )$ and $A_{n-2}(x, y_1, \dots, y_{n-2})$ is the effective tensor for \newline $A_{n-1} (x, y_1, \dots, y_{n-2}, y_{n-1})$, with $(x, y_1, \dots, y_{n-2})$ fixed. Continue the process until one reaches the tensor $A_0(x)=\widehat{A} (x)$. By an induction argument on $n$, to carry out the process above, it suffices to consider the special case where $n=1$ and $A^\varepsilon(x)=A(x, x/\varepsilon)$. Moreover, by using a convolution in the $x$ variable, one may assume that $A=A(x, y)$ is Lipschitz continuous in $x\in \mathbb{R}^d$. We point out that even though the case $A^\varepsilon (x)=A(x/\varepsilon)$ has been well studied, new techniques are needed for the case $A^\varepsilon(x) =A(x, x/\varepsilon)$ to derive estimates with sharp bounding constants depending
explicitly on $\|\nabla_x A\|_\infty$. For otherwise, the results would not be useful in the induction argument.
In the second step, a rescaling argument, together with another induction argument, is used to reach the finest scale $\varepsilon_n$.
We mention that the condition (\ref{w-s-cond}) is only used in the first step.
Without this condition, our argument yields
estimates (\ref{relipth}) and (\ref{reblipth}) for
\begin{equation}\label{l-range}
\varepsilon_1 + \left( \varepsilon_2/\varepsilon_1 +\cdots + \varepsilon_n/\varepsilon_{n-1}\right)^N \le r<R\le 1,
\end{equation}
where $N\ge 1$, with bounding constants $C$ depending on $N$.
See Remark \ref{remark-6.1}.
As a byproduct of the first step described above, we show that if $A^\varepsilon(x)=A(x, x/\varepsilon)$, then \begin{equation} \label{rate-basic}
\| u_\varepsilon -u_0\|_{L^2(\Omega)}
\le C \varepsilon\left\{ 1+ \|\nabla_x A\|_\infty +\varepsilon \|\nabla_x A\|_\infty^2 \right\}
\big( \| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \big) \end{equation} for $0<\varepsilon<1$, where $C$ depends only on $d$, $m$, $\mu$, and $\Omega$ (see Lemma \ref{LE61}). Estimate (\ref{rate-basic}) improves a similar estimate in \cite{xndcds2019}, where a general case $A^\varepsilon (x)=A(x, \rho(x) /\varepsilon)$ was considered by the first and third authors. It also leads to the following theorem on the $L^2$ convergence rate for the operator $\mathcal{L}_\varepsilon$.
\begin{theorem} \label{tco}
Let $\Omega$ be a bounded $C^{1,1}$ domain in $\mathbb{R}^d$.
Assume that
$A$ satisfies \eqref{elcon}, \eqref{pcon}, and \eqref{lipcon} with $\theta=1$.
Let $\mathcal{L}_\varepsilon$ be given by (\ref{operator})
with $0<\varepsilon_n < \varepsilon_{n-1}< \dots< \varepsilon_1<1$.
For $F\in L^2(\Omega; \mathbb{R}^m)$ and
$f\in H^{3/2}(\partial\Omega; \mathbb{R}^m)$,
let $u_{\varepsilon} \in H^{1}(\Omega;\mathbb{R}^{m})$
be the solution of \eqref{DP}
and $u_0$ the solution of the homogenized problem (\ref{DP-0}).
Then
\begin{align}\label{tre1}
\|u_{\varepsilon}-u_{0}\|_{L^2(\Omega)}
\leq C \big\{ \varepsilon_1
+\varepsilon_2/\varepsilon_1
+\cdots +
\varepsilon_{n}/\varepsilon_{n-1}
\big\} \| u_0\|_{H^2(\Omega)},
\end{align}
where $C$ depends at most on $d$, $m$, $n$, $\mu$, $L$,
and $\Omega$. \end{theorem}
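For instance, in the classical case $\varepsilon_k=\varepsilon^k$ of \cite{lions1978}, each of the quantities $\varepsilon_1, \varepsilon_2/\varepsilon_1, \dots, \varepsilon_n/\varepsilon_{n-1}$ equals $\varepsilon$, and (\ref{tre1}) reduces to the familiar first-order rate $\|u_\varepsilon -u_0\|_{L^2(\Omega)}\le C n \varepsilon \| u_0\|_{H^2(\Omega)}$.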
In the case $A^\varepsilon=A(x/ \varepsilon, x/\varepsilon^2)$, the estimate (\ref{tre1}) was proved in \cite{past2013} (also see \cite{past07, past2016}). As indicated in \cite{past2016}, one may extend the proof to the general case considered in Theorem \ref{tco}. However, the error estimates of the multiscale expansions for the case $n=2$ in \cite{past2013} are already quite involved, and their extension to the case $n>2$ is not so obvious. Our proof of (\ref{tre1}), which is based on the idea of reiterated homogenization, seems to be natural and is much simpler conceptually.
The paper is organized as follows. In Section 2 we give the definition of the effective tensor $\widehat{A}(x) $ as well as the tensors $A_k (x, y_1, \dots, y_k)$ for $ 1\le k\le n$, mentioned earlier. We also introduce a smoothing operator and prove two estimates needed in the following sections. The proof of (\ref{rate-basic}) is given in Section 3 and that of Theorem \ref{tco} in Section 4. In Section 5 we establish an approximation theorem, using the results in Section 3. Sections 6 and 7 are devoted to the proofs of Theorems \ref{lipth} and \ref{blipth}, respectively.
For notational simplicity we will assume $m=1$ in the rest of the paper. However, no fact particular to the scalar case is ever used, and all results and proofs extend readily to the case $m>1$, that is, to elliptic systems. We will use $\fint_E u$ to denote the $L^1$ average of $u$ over the set $E$; i.e., $\fint_E u=\frac{1}{|E|} \int_E u$. A function is said to be 1-periodic in $y_k\in \mathbb{R}^d$ if it is periodic in $y_k$ with respect to $\mathbb{Z}^d$. Finally, the summation convention is used throughout.
\section{Preliminaries}\label{section-2}
\subsection{Effective coefficients} \label{section-2.1}
Suppose $A=A(x, y_1, \dots, y_n)$ satisfies conditions (\ref{elcon}) and (\ref{pcon}). To define the effective matrix $\widehat{A} =\widehat{A} (x)$ in the homogenized operator $ \mathcal{L}_0 =-\text{\rm div} \big(\widehat{A} (x)\nabla \big), $ we introduce a sequence of $d\times d$ matrices, \begin{equation} \label{A-n} A_\ell =A_\ell (x, y_1, \dots, y_\ell) \quad \text{ for } 0\le \ell \le n, \end{equation} which are 1-periodic in $(y_1, \dots, y_\ell)\in \mathbb{R}^{d\times \ell}$ and satisfy the ellipticity condition, \begin{equation}\label{ellipticity-1}
\|A_\ell \|_{L^\infty(\mathbb{R}^{d\times (\ell +1)})} \le \mu_1 \quad \text{ and } \quad
\mu |\xi|^2\le \langle A_\ell \xi, \xi \rangle \end{equation} for $\xi\in \mathbb{R}^d$, where $\mu_1>0$ depends only on $d$, $n$ and $\mu$. To this end, we let $ A_n (x, y_1, \cdots, y_n)=A(x, y_1, \dots, y_n). $ Suppose $A_\ell$ has been given for some $1 \le \ell \le n$. For a.e. $(x, y_1, \dots, y_{\ell-1})\in \mathbb{R}^{d\times \ell }$ fixed,
we solve the elliptic cell problem, \begin{equation}\label{cell-1} \left\{ \aligned & -\text{\rm div}_y \big( A_\ell (x, y_1, \dots, y_{\ell-1}, y ) \nabla_y \chi_\ell ^j \big) =\text{\rm div}_y \big( A_\ell (x, y_1, \dots, y_{\ell-1}, y ) \nabla_y y ^j \big) \quad \text{ in } \mathbb{T}^d,\\ & \chi_\ell ^j=\chi_\ell ^j (x, y_1, \cdots, y_{\ell-1}, y ) \text{ is 1-periodic in } y,\\
& \int_{\mathbb{T}^d} \chi_\ell ^j (x, y_1, \dots, y_{\ell-1}, y )\, dy =0 \endaligned \right. \end{equation} for $1\le j\le d$, where $y^j$ denotes the $j$th component of $y \in \mathbb{R}^d$. Since $A_\ell$ is 1-periodic in $(y_1, \dots, y_\ell)$, so is the corrector $\chi_\ell (x, y_1, \dots, y_{\ell-1}, y_\ell)=(\chi_\ell^1, \cdots, \chi_\ell^d)$. We now define \begin{equation}\label{A-ell} A_{\ell-1} (x, y_1, \dots, y_{\ell-1}) =\fint_{\mathbb{T}^d} \Big( A_\ell (x, y_1, \dots, y_\ell) +A_\ell (x, y_1, \dots, y_\ell) \nabla_{y_\ell} \chi_\ell \Big) dy_\ell. \end{equation} Clearly, $A_{\ell-1}$ is 1-periodic in $(y_1, \dots, y_{\ell-1})$. It is also well known that $A_{\ell-1}$ satisfies the ellipticity condition (\ref{ellipticity-1}) \cite{lions1978}. As a result, by induction, we obtain the matrix $A_\ell$ for $0\le \ell \le n$. In particular,
$\widehat{A} (x) =A_0(x)$ is the effective matrix for the operator $\mathcal{L}_\varepsilon$ in (\ref{operator}).
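Although we assume $d\ge 2$ throughout, the recursion (\ref{A-ell}) is transparent in the illustrative case $d=m=1$: the cell problem (\ref{cell-1}) forces $A_\ell \big( 1+\partial_y \chi_\ell^1 \big)$ to be constant in $y$, and the normalization of $\chi_\ell^1$ then gives the harmonic mean
\begin{equation*}
A_{\ell-1} (x, y_1, \dots, y_{\ell-1}) =\left( \int_0^1 \big[ A_\ell (x, y_1, \dots, y_{\ell-1}, y_\ell) \big]^{-1} dy_\ell \right)^{-1}.
\end{equation*}
Iterating from $\ell=n$ down to $\ell=1$ shows that in one dimension $\widehat{A}(x)$ is simply the harmonic mean of $A(x, \cdot)$ over $(y_1, \dots, y_n)\in [0,1]^n$.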
\begin{theorem}\label{lem2.2} Suppose $A$ satisfies conditions (\ref{elcon}) and (\ref{pcon}). Also assume that as a function of $(x, y_1, \dots, y_{n-1})$, $ A\in C(\mathbb{R}^{d \times n}; L^\infty(\mathbb{R}^d)). $ Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$.
Let $u_\varepsilon$ be a weak solution of the Dirichlet problem (\ref{DP}),
with $F\in H^{-1}(\Omega)$ and $f\in H^{1/2}(\partial\Omega)$.
Then,
if $\varepsilon\to 0$ and
$(\varepsilon_1, \varepsilon_2, \dots, \varepsilon_n)$ satisfies the condition (\ref{s-condition}),
$u_\varepsilon$ converges weakly in $H^1(\Omega)$ to the solution $u_0$ of
the homogenized problem \eqref{DP-0}. \end{theorem}
Theorem \ref{lem2.2}, whose proof may be found in \cite{lions1978,Allaire1996Multiscale}, is not used in this paper. In fact, by approximating the coefficients, our quantitative result in Theorem \ref{tco} provides another proof of Theorem \ref{lem2.2}.
It follows by the energy estimate as well as Poincar\'e's inequality that \begin{equation}\label{3.000} \fint_{\mathbb{T}^d}
|\nabla_y \chi_\ell (x, y_1, \dots, y_{\ell-1}, y_\ell )|^2\, dy_\ell +\fint_{\mathbb{T}^d}
|\chi_\ell (x, y_1, \dots, y_{\ell-1}, y_\ell )|^2\, dy_\ell \le C \end{equation} for a.e. $(x, y_1, \dots, y_{\ell-1}) \in \mathbb{R}^{d\times \ell }$, where $1 \le \ell \le n$ and $C$ depends only on $d$, $n$ and $\mu$. The next lemma gives the H\"older estimates for $\chi_\ell$ and $A_\ell$ under the H\"older continuity condition on $A$.
\begin{lemma}\label{le2.0}
Suppose $A$ satisfies conditions (\ref{elcon}), (\ref{pcon}), and (\ref{lipcon})
for some $\theta \in (0, 1]$ and $L\ge 0$.
Then
\begin{equation}\label{est-corr}
\aligned
& \| \chi_\ell (x, y_1, \dots, y_{\ell-1}, \cdot)
-\chi_\ell (x^\prime, y_1^\prime, \dots, y_{\ell-1}^\prime, \cdot )\|_{H^1(\mathbb{T}^d)}\\
&\qquad\qquad
\le CL \big( |x-x^\prime| +|y_1-y_1^\prime|
+\cdots + |y_{\ell-1} -y_{\ell-1}^\prime| \big)^\theta,\\
& | A_{\ell-1} (x, y_1, \dots, y_{\ell-1})
-A_{\ell-1} (x^\prime, y_1^\prime, \dots, y^\prime_{\ell-1}) |\\
&
\qquad\qquad
\le
CL \big( |x-x^\prime| +|y_1-y_1^\prime|
+\cdots + |y_{\ell-1} -y_{\ell-1}^\prime| \big)^\theta
\endaligned
\end{equation}
for $1\le \ell \le n$,
where $C$ depends only on $d$, $n$, $\theta$ and $\mu$.
\end{lemma}
\begin{proof}
It suffices to prove (\ref{est-corr}) for $\ell=n$.
The rest follows by induction. Note that for $(x, y_1, \dots, y_{n-1}), (x^\prime, y_1^\prime, \dots, y_{n-1}^\prime ) \in \mathbb{R}^{d\times n}$ fixed, \begin{align*} &-\text{\rm div}_{y}\Big( A (x, y_1, \dots, y_{n-1}, y) \nabla_{y} \big( \chi_n^j (x,y_1,\dots, y_{n-1}, y) - \chi_n^j (x^\prime, y_1^\prime, \dots, y_{n -1}^\prime, y )\big)\Big)\\ & = \text{div}_{y} \Big( \big({A}(x,y_1, \dots, y_{n-1}, y) -A(x^\prime, y_1^\prime, \dots, y_{n-1}^\prime, y) \big) \nabla_y \big(y^j + \chi^j_n (x^\prime, y_1^\prime, \dots, y^\prime_{n-1},y)\big) \Big). \end{align*} The estimate for the corrector $\chi_n$ in (\ref{est-corr}) follows readily from the usual energy estimate and (\ref{lipcon}). In view of (\ref{A-ell}) we may deduce the estimate for $A_{n-1}$ in (\ref{est-corr}) by using (\ref{lipcon}) and the estimate of $\chi_{n}$ in (\ref{est-corr}). \end{proof}
\subsection{ An $\varepsilon$-smoothing operator}
Fix a function $\varphi \in C_{0}^{\infty}(B(0,1/2))$ such that $\varphi\geq 0$ and $\int_{\mathbb{R}^{d}}\varphi dx=1$.
For functions of the form $g^\varepsilon (x)= g(x, x/\varepsilon)$,
we introduce a smoothing operator $S_\varepsilon$, defined by \begin{align}\label{smoothing}
S_\varepsilon (g^\varepsilon)(x)
=\int_{\mathbb{R}^d}
g(z, x/\varepsilon)\varphi_\varepsilon (x-z) dz , \end{align}
where $\varphi _{\varepsilon }(z)=\varepsilon^{- d}\varphi (z / \varepsilon)$.
Note that the smoothing is only done to the slow variable $x$.
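Two degenerate cases may help fix ideas. If $g(x, y)=g(y)$ depends only on the fast variable, then $S_\varepsilon (g^\varepsilon)=g^\varepsilon$, since $\int_{\mathbb{R}^d} \varphi_\varepsilon =1$; if $g(x, y)=g(x)$, then $S_\varepsilon (g^\varepsilon)=\varphi_\varepsilon * g$ is the usual mollification.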
\begin{lemma}\label{s-lemma-1}
Let $1\le p<\infty$.
Suppose that $h=h(x,y)$ is 1-periodic in $y$ and $h\in L^\infty(\mathbb{R}^d_x; L^p(\mathbb{T}^d_y))$.
Then for any $f\in L^p(\mathbb{R}^d)$,
\begin{equation}\label{s-1}
\| S_\varepsilon (h^\varepsilon f)\|_{L^p(\mathbb{R}^d)}
\le C \| f\|_{L^p(\mathbb{R}^d)}
\sup_{x\in \mathbb{R}^d}
\left(\fint_{\mathbb{T}^d} |h(x, y)|^p\, dy\right)^{1/p},
\end{equation}
where $h^\varepsilon(x) =h (x, x/\varepsilon)$ and $C$ depends only on $d$ and $p$.
\end{lemma}
\begin{proof}
It follows by H\"older's inequality and the assumption $\int_{\mathbb{R}^d} \varphi=1$ that
$$
|S_\varepsilon (h^\varepsilon f) (x)|^p
\le \int_{\mathbb{R}^d}
|h(z, x/\varepsilon)|^p | f(z)|^p \varphi_\varepsilon (x-z)\, dz.
$$
This, together with Fubini's Theorem, gives
$$
\aligned
\int_{\mathbb{R}^d}
| S_\varepsilon (h^\varepsilon f)|^p\, dx
&\le \int_{\mathbb{R}^d}
|f(z)|^p \int_{\mathbb{R}^d}
\varphi_\varepsilon (x-z) |h(z, x/\varepsilon)|^p\, dx \, dz\\
& \le \| f\|_{L^p(\mathbb{R}^d)}^p
\sup_{z\in \mathbb{R}^d}
\int_{\mathbb{R}^d}
\varphi_\varepsilon (x-z) |h(z, x/\varepsilon)|^p\, dx \\
& \le C \| f\|_{L^p(\mathbb{R}^d)}^p\sup_{z\in \mathbb{R}^d}
\fint_{B(z, \varepsilon/2)}
|h(z, x/\varepsilon)|^p\, dx.
\endaligned
$$
Using the periodicity of $h(x, y)$ in the second variable,
it is easy to see that
$$
\sup_{z\in \mathbb{R}^d}
\fint_{B(z, \varepsilon/2)}
|h(z, x/\varepsilon)|^p\, dx \le C
\sup_{x\in \mathbb{R}^d}
\fint_{\mathbb{T}^d} |h(x, y)|^p\, dy,
$$
which finishes the proof.
\end{proof}
\begin{lemma}\label{s-lemma-2} Let $1\le p\le \infty$. Suppose that $h=h(x, y)\in L^\infty(\mathbb{R}^d \times \mathbb{R}^d)$ and $\nabla_x h\in L^\infty(\mathbb{R}^d \times \mathbb{R}^d)$. Then for any $f\in W^{1, p} (\mathbb{R}^d)$, \begin{equation}\label{s-2}
\| h^\varepsilon f -S_\varepsilon (h^\varepsilon f) \|_{L^p(\mathbb{R}^d)} \le C \varepsilon
\Big\{ \|\nabla_x h\|_\infty \| f\|_{L^p(\mathbb{R}^d)}
+ \| h\|_\infty
\|\nabla f \|_{L^p(\mathbb{R}^d)} \Big\}, \end{equation} where $h^\varepsilon (x)=h(x, x/\varepsilon)$ and $C$ depends only on $d$ and $p$. \end{lemma}
\begin{proof} Write $$ h^\varepsilon (x) f(x) -S_\varepsilon (h^\varepsilon f) (x) = \int_{\mathbb{R}^d} \big( h(x, x/\varepsilon) f(x) - h(z, x/\varepsilon) f(z) \big) \varphi_\varepsilon (x-z)\, dz, $$ which leads to $$
|h^\varepsilon (x) f(x)
-S_\varepsilon (h^\varepsilon f) (x)|
\le C \fint_{B(x, \varepsilon/2)}
| h(x, x/\varepsilon) f(x) - h(z, x/\varepsilon) f(z)|\, dz. $$ We now apply the inequality, \begin{equation}\label{ineq} \fint_{B(x,\varepsilon/2)}
|u(z)-u(x)|\, dz \le C \int_{B(x, \varepsilon/2 )}
\frac{|\nabla u(z)|}{|z-x|^{d-1}}\, dz, \end{equation} where $C$ depends only on $d$. This gives $$ \aligned
& |h^\varepsilon (x) f(x)
-S_\varepsilon (h^\varepsilon f) (x)|\\
& \le C \|\nabla_x h\|_\infty \int_{B(x, \varepsilon/2)}
\frac{|f(z)|}{|z-x|^{d-1}}\, dz
+ C \| h\|_\infty \int_{B(x, \varepsilon/2)}
\frac{|\nabla_z f(z)|}{|z-x|^{d-1}}\, dz. \endaligned $$ It follows that \begin{equation}\label{s-3} \aligned \int_{\mathbb{R}^d}
|h^\varepsilon f -S_\varepsilon (h^\varepsilon f) | |F|\, dx
&\le C \|\nabla_x h \|_\infty \int_{\mathbb{R}^d} \left(\int_{B(x, \varepsilon/2)}
\frac{| f(z)||F(x)|}{|z-x|^{d-1}} dz \right) dx\\ &\qquad
+ C \| h \|_\infty \int_{\mathbb{R}^d} \left(\int_{B(x, \varepsilon/2)}
\frac{| \nabla_z f(z)||F(x)|}{|z-x|^{d-1}} dz \right) dx. \endaligned \end{equation} Finally, we note that the operator defined by $$ Tg (x) =\int_{B(x, \varepsilon/2)}
\frac{g(z)}{|z-x|^{d-1}}\, dz $$
is bounded on $L^p(\mathbb{R}^d)$ and $\|Tg\|_{L^p(\mathbb{R}^d)}
\le C\varepsilon \| g \|_{L^p(\mathbb{R}^d)}$ for $1\le p \le \infty$. Thus, if $1\le p\le\infty$ and $q=p^\prime$, $$ \int_{\mathbb{R}^d}
|h^\varepsilon f -S_\varepsilon (h^\varepsilon f) | |F|\, dx
\le C \varepsilon \| F\|_{L^q(\mathbb{R}^d)} \Big\{
\|\nabla_x h\|_\infty
\| f\|_{L^p(\mathbb{R}^d)}
+ \| h\|_\infty
\| \nabla f \|_{L^p(\mathbb{R}^d)} \Big\}, $$ from which the inequality (\ref{s-2}) follows by duality. \end{proof}
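Although it plays no role in the proofs, the $O(\varepsilon)$ rate in (\ref{s-2}) is easy to observe numerically. The following minimal Python sketch, with hypothetical choices of $h$, $f$, and $\varphi$ and simple Riemann-sum quadrature, computes the discrete $L^2$ norm of $h^\varepsilon f -S_\varepsilon (h^\varepsilon f)$ in dimension $d=1$; halving $\varepsilon$ roughly halves the reported error.
\begin{verbatim}
import numpy as np

# Hypothetical smooth data: h(x, y) is 1-periodic in y and Lipschitz in x.
def h(x, y): return (2.0 + np.sin(2.0 * np.pi * y)) * np.cos(x)
def f(x):    return np.exp(-x * x)

t = np.linspace(-0.5, 0.5, 801)[1:-1]   # mollifier nodes in (-1/2, 1/2)
phi = np.exp(-1.0 / (0.25 - t * t))     # smooth bump supported in (-1/2, 1/2)
dt = t[1] - t[0]
phi /= phi.sum() * dt                   # normalize so that int phi = 1

x = np.linspace(-4.0, 4.0, 2001)
dx = x[1] - x[0]
for eps in (0.2, 0.1, 0.05, 0.025):
    # S_eps(h^eps f)(x) = int h(x - eps*t, x/eps) f(x - eps*t) phi(t) dt,
    # i.e. the convolution acts on the slow variable only.
    z = x[:, None] - eps * t[None, :]
    S = (h(z, (x / eps)[:, None]) * f(z) * phi).sum(axis=1) * dt
    err = h(x, x / eps) * f(x) - S
    print(eps, np.sqrt((err * err).sum() * dx))   # discrete L^2 error
\end{verbatim}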
\section{Convergence rate $(n=1)$}\label{section-3}
In this section we consider a simple case, where $n=1$ and \begin{equation}\label{op-simple} \mathcal{L}_\varepsilon =-\text{\rm div} \big( A(x, x/\varepsilon)\nabla \big). \end{equation} The matrix $A=A(x,y)$ satisfies the ellipticity condition (\ref{elcon}) and is 1-periodic in $y\in \mathbb{R}^d$. We also assume that \begin{equation}\label{lip-simple}
\|\nabla_x A\|_\infty
=\|\nabla_x A\|_{L^\infty(\mathbb{R}_x^d \times \mathbb{R}^d_y)} <\infty. \end{equation} Recall that $$ \widehat{A}(x)=\fint_{\mathbb{T}^d} \Big( A(x, y) + A(x, y) \nabla_y \chi (x, y) \Big) dy, $$ where the corrector $\chi (x, y)= (\chi^1(x, y), \dots, \chi^d (x, y))$ is given by the cell problem (\ref{cell-1}) with $\ell =n=1$. Note that by (\ref{est-corr}), \begin{equation}\label{3.0}
\|\nabla_x \widehat{A} \|_\infty
\le C \| \nabla_x A \|_\infty, \end{equation} and \begin{equation}\label{3.00} \fint_{\mathbb{T}^d}
\big( |\nabla_x \nabla_y \chi (x, y)|^2
+|\nabla_x \chi(x, y)|^2\big) dy
\le C \|\nabla_x A\|^2_\infty, \end{equation} where $C$ depends only on $d$ and $\mu$.
Define \begin{equation}\label{B} B(x, y)= A(x, y)+A(x, y) \nabla_y \chi(x, y)-\widehat{A} (x). \end{equation} The $d\times d$ matrix $B(x, y)=(b_{ij} (x, y)) $ is 1-periodic in $y$ and \begin{equation}\label{3.81} \fint_{\mathbb{T}^d}
|B(x, y)|^2\, dy\le C, \end{equation} where $C$ depends only on $d$ and $\mu$. In view of (\ref{3.0})-(\ref{3.00}) we obtain \begin{equation}\label{est-B} \fint_{\mathbb{T}^d}
|\nabla_x B(x, y)|^2\, dy
\le C \|\nabla_x A\|_\infty^2. \end{equation} By the definitions of $\widehat{A} (x) $ and $\chi(x, y)$, it follows that \begin{equation}\label{3.01} \int_{\mathbb{T}^d} b_{ij} (x, y)\, dy=0 \quad \text{ and } \quad \frac{\partial}{\partial y^i} b_{ij} (x, y)=0 \end{equation} for each $x\in \mathbb{R}^d$ (the index $i$ is summed from $1$ to $d$), where we have used the notation $y=(y^1, \cdots, y^d)\in \mathbb{R}^d$. Indeed, the first identity in (\ref{3.01}) is immediate from (\ref{A-ell}) (with $\ell=1$), while the second is a restatement of the cell problem (\ref{cell-1}), since $\frac{\partial}{\partial y^i} b_{ij} =\text{\rm div}_y \big( A\nabla_y (y^j+\chi^j) \big)=0$.
\begin{lemma}\label{lemma-3.1} There exist functions $\phi(x, y)=(\phi_{kij} (x, y)) $ with $1\le k, i, j\le d$ such that $\phi $ is 1-periodic in $y$, \begin{equation}\label{3.02} \phi_{kij}=-\phi_{ikj} \quad \text{ and } \quad b_{ij} (x, y) =\frac{\partial }{\partial y^k} \phi_{kij} (x, y). \end{equation} Moreover, $\int_{\mathbb{T}^d} \phi (x, y) dy=0$, and \begin{equation}\label{3.03} \aligned \fint_{\mathbb{T}^d}
|\nabla_y \phi(x, y)|^2\, dy
+\fint_{\mathbb{T}^d} |\phi(x, y)|^2\, dy & \le C,\\ \fint_{\mathbb{T}^d}
|\nabla_x \nabla_y \phi(x, y)|^2\, dy
+\fint_{\mathbb{T}^d} |\nabla_x \phi(x, y) |^2 \, dy & \le C \|\nabla_x A\|_\infty^2 , \endaligned \end{equation} where $C$ depends only on $d$ and $\mu$. \end{lemma}
\begin{proof} Using (\ref{3.01}),
the flux correctors $\phi_{kij}$ are constructed in the same manner as in the case $A=A(y)$ (see e.g. \cite{shennote2017}). Indeed, for each $x$ fixed, one solves the cell problem $$ \left\{ \aligned & \Delta_y f_{ij} (x, y)= b_{ij} (x, y) \quad \text{ in } \mathbb{T}^d,\\ & f_{ij} (x, y) \text{ is 1-periodic in } y, \endaligned \right. $$ which is solvable since $b_{ij}(x, \cdot)$ has mean zero by (\ref{3.01}), and sets $$ \phi_{kij} (x, y) =\frac{\partial}{\partial y^k} f_{ij} (x, y) -\frac{\partial}{\partial y^i} f_{kj} (x, y). $$ The skew-symmetry $\phi_{kij}=-\phi_{ikj}$ is clear from the definition, and $\frac{\partial}{\partial y^k} \phi_{kij} =\Delta_y f_{ij} -\frac{\partial}{\partial y^i} \big( \frac{\partial}{\partial y^k} f_{kj} \big) =b_{ij}$, because $\frac{\partial}{\partial y^k} f_{kj}$ is harmonic in $\mathbb{T}^d$ by (\ref{3.01}) and hence constant in $y$. The first inequality in (\ref{3.03}) follows by using the $L^2$ estimate and (\ref{3.81}). To see the second, one uses (\ref{est-B}).
Let $u_\varepsilon$ be a weak solution of the Dirichlet problem (\ref{DP}) and $u_0$ the solution of the homogenized problem (\ref{DP-0}). Let \begin{equation}\label{w} w_\varepsilon =u_\varepsilon -u_0 -\varepsilon S_\varepsilon (\eta_\varepsilon \chi^\varepsilon \nabla u_0 ), \end{equation} where $\chi^\varepsilon (x)=\chi(x, x/\varepsilon)$ and the operator $S_\varepsilon$ is defined by (\ref{smoothing}). The cut-off function $\eta_\varepsilon $ in (\ref{w}) is chosen so that $\eta_\varepsilon \in C_0^\infty(\Omega)$, $0\le \eta_\varepsilon\le 1$, $$ \aligned & \eta_\varepsilon (x)=1\quad \text{ if } x\in \Omega \text{ and dist} (x,\partial\Omega) \ge 4\varepsilon,\\ & \eta_\varepsilon (x)=0 \quad \text{ if dist} (x, \partial\Omega)\le 3\varepsilon, \endaligned $$
and $|\nabla \eta_\varepsilon| \le C \varepsilon^{-1}$. Define \begin{equation}\label{O-t} \Omega_t =\big\{ x\in \Omega: \ \text{ dist}(x, \partial\Omega)< t \big\}. \end{equation}
The following lemma was proved in \cite{shenzhu2017} for the case $A^\varepsilon=A(x/\varepsilon)$. The case $A^\varepsilon =A(x, \rho(x) /\varepsilon)$ for stratified structures was considered in \cite{xndcds2019} by the first and third authors. Also see \cite{Xu-nonlinear} for the nonlinear case. The estimate (\ref{bl-1}) is sharper than the similar estimates in \cite{xndcds2019, Xu-nonlinear}.
\begin{lemma}\label{lemma-3.3}
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$.
Let $w_\varepsilon$ be defined by (\ref{w}).
Then for any $\psi\in H^1_0(\Omega)$, \begin{equation}\label{bl-1} \aligned
& \Big|\int_\Omega A^\varepsilon
\nabla w_\varepsilon\cdot \nabla \psi dx\Big| \\ & \leq
C\varepsilon \|\nabla\psi\|_{L^2(\Omega)} \Big\{
\|\nabla_x A\|_\infty
\|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla^2 u_0\|_{L^2(\Omega\setminus\Omega_{3\varepsilon})} \Big\} \\ &\qquad\qquad + C
\|\nabla \psi \|_{L^2(\Omega_{5\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})},
\endaligned
\end{equation} where $A^\varepsilon =A(x, x/\varepsilon)$ and $C$ depends only on $d$, $\mu$, and $\Omega$. \end{lemma}
\begin{proof} Using $\mathcal{L}_\varepsilon (u_\varepsilon)=\mathcal{L}_0 (u_0)$, we obtain \begin{equation}\label{3.10} \aligned \mathcal{L}_\varepsilon (w_\varepsilon) &= \text{\rm div} \big[ ( A^\varepsilon-\widehat{A}) \nabla u_0 \big] +\text{\rm div} \big[ A^\varepsilon S_\varepsilon \big( \eta_\varepsilon (\nabla_y \chi)^\varepsilon \nabla u_0 \big) \big]\\ & \quad + \varepsilon\, \text{\rm div} \big[ A^\varepsilon S_\varepsilon \big( (\nabla \eta_\varepsilon ) \chi^\varepsilon \nabla u_0\big) \big] +\varepsilon\, \text{\rm div} \big[ A^\varepsilon S_\varepsilon \big( \eta_\varepsilon (\nabla_x \chi)^\varepsilon \nabla u_0\big) \big]\\ &\quad + \varepsilon\, \text{\rm div} \big[ A^\varepsilon S_\varepsilon \big( \eta_\varepsilon \chi^\varepsilon \nabla^2 u_0\big) \big]. \endaligned \end{equation} The last three terms in the right-hand side of (\ref{3.10}) are easy to handle. Let $B(x, y)$ be given by (\ref{B}). To deal with the first two terms, we write the sum of them as \begin{equation}\label{3.11} I_1 +I_2 +\text{\rm div} \big[ S_\varepsilon \big( \eta_\varepsilon B^\varepsilon \nabla u_0 \big) \big], \end{equation} where $B^\varepsilon =B(x, x/\varepsilon)$, and \begin{equation}\label{3.12} \aligned I_1 &= \text{\rm div} \big[ (A^\varepsilon -\widehat{A} ) \nabla u_0 -S_\varepsilon \big( (A^\varepsilon -\widehat{A} ) \eta_\varepsilon \nabla u_0 \big) \big],\\ I_2 &= \text{\rm div} \big[ A^\varepsilon S_\varepsilon \big( \eta_\varepsilon (\nabla_y \chi)^\varepsilon \nabla u_0 \big) -S_\varepsilon \big( \eta_\varepsilon A^\varepsilon (\nabla_y \chi)^\varepsilon \nabla u_0 \big) \big]. \endaligned \end{equation} It follows from (\ref{3.10})-(\ref{3.12}) that \begin{equation}\label{3.13} \aligned
& \Big|\int_\Omega A^\varepsilon
\nabla w_\varepsilon\cdot \nabla \psi dx\Big| \\
&
\le \int_\Omega
\big| (A^\varepsilon -\widehat{A} ) \nabla u_0
-S_\varepsilon \big( (A^\varepsilon -\widehat{A} ) \eta_\varepsilon \nabla u_0 \big) \big| |\nabla \psi|\, dx\\ &\qquad + \int_\Omega
\big| A^\varepsilon S_\varepsilon \big( \eta_\varepsilon (\nabla_y \chi)^\varepsilon \nabla u_0 \big)
-S_\varepsilon \big( \eta_\varepsilon A^\varepsilon (\nabla_y \chi)^\varepsilon \nabla u_0 \big) \big| |\nabla \psi|\, dx\\
&\qquad +\Big|
\int_\Omega S_\varepsilon \big( \eta_\varepsilon B^\varepsilon \nabla u_0 \big) \cdot \nabla \psi \, dx \Big|\\ & \qquad + C \varepsilon \int_\Omega
|S_\varepsilon \big( (\nabla \eta_\varepsilon ) \chi^\varepsilon \nabla u_0\big) | |\nabla \psi|\, dx\\ & \qquad + C \varepsilon \int_\Omega
|S_\varepsilon \big( \eta_\varepsilon (\nabla_x \chi)^\varepsilon \nabla u_0\big)| |\nabla \psi|\, dx\\ & \qquad + C \varepsilon \int_\Omega
|S_\varepsilon \big( \eta_\varepsilon \chi^\varepsilon \nabla^2 u_0\big)| |\nabla \psi|\, dx\\ &=J_1+\dots + J_6, \endaligned \end{equation} for any $\psi\in H_0^1 (\Omega)$. We estimate $J_i, i=1, \dots, 6$ separately.
To bound $J_4$, we use the Cauchy inequality and (\ref{s-1}) to obtain \begin{equation}\label{3.14} \aligned J_4
&\le C \varepsilon \| S_\varepsilon \big( (\nabla \eta_\varepsilon ) \chi^\varepsilon \nabla u_0\big) \|_{L^2(\Omega)}
\|\nabla \psi\|_{L^2(\Omega_{5\varepsilon} )} \\
&\le C \varepsilon \| (\nabla \eta_\varepsilon ) \nabla u_0\|_{L^2(\Omega)}
\|\nabla \psi \|_{L^2(\Omega_{5\varepsilon} )}\\ &\le C
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}
\|\nabla \psi\|_{L^2(\Omega_{5\varepsilon} )}, \endaligned \end{equation} where we have used the estimate for $\chi (x, y)$ in (\ref{3.000}). In view of the estimate for $\nabla_x \chi(x, y)$ in (\ref{3.00}), the same argument also shows that \begin{equation}\label{3.15} J_5 +J_6
\le C \varepsilon \|\nabla \psi\|_{L^2(\Omega)} \big\{
\|\nabla_x A\|_\infty
\|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla^2 u_0\|_{L^2(\Omega\setminus \Omega_{3\varepsilon})}\big\}. \end{equation}
Next, to bound $J_3$, we use the flux correctors $\phi_{kij}$ given by Lemma \ref{lemma-3.1}. Note that by using the second equation in (\ref{3.02}), $$ \aligned &\eta_\varepsilon (x-z) b_{ij}(x-z, x/\varepsilon) \frac{\partial u_0}{\partial x^j} (x-z)\\ & =\varepsilon \eta_\varepsilon (x-z) \frac{\partial}{\partial x^k} \Big\{ \phi_{kij} (x-z, x/\varepsilon) \Big\} \frac{\partial u_0}{\partial x^j} (x-z)\\ & \qquad -\varepsilon \eta_\varepsilon (x-z) \frac{\partial \phi_{kij}}{\partial x^k} (x-z, x/\varepsilon) \frac{\partial u_0}{\partial x^j} (x-z)\\ &= \varepsilon \frac{\partial}{\partial x^k} \Big\{ \eta_\varepsilon (x-z) \phi_{kij} (x-z, x/\varepsilon) \frac{\partial u_0}{\partial x^j} (x-z) \Big\}\\ & \qquad -\varepsilon \frac{\partial}{\partial x^k} \Big\{ \eta_\varepsilon (x-z) \Big\} \phi_{kij} (x-z, x/\varepsilon) \frac{\partial u_0}{\partial x^j} (x-z) \\ & \qquad -\varepsilon \eta_\varepsilon (x-z) \frac{\partial \phi_{kij}}{\partial x^k} (x-z, x/\varepsilon) \frac{\partial u_0}{\partial x^j} (x-z)\\ & \qquad -\varepsilon \eta_\varepsilon (x-z) \phi_{kij} (x-z, x/\varepsilon) \frac{\partial^2 u_0}{\partial x^j\partial x^k} (x-z). \endaligned $$ It follows that \begin{equation}\label{3.16}
\aligned J_3 & =\varepsilon \Big| \int_\Omega \frac{\partial}{\partial x^k} S_\varepsilon \left(\eta_\varepsilon \phi_{kij}^\varepsilon \frac{\partial u_0}{\partial x^j} \right) \frac{\partial \psi}{\partial x^i}\, dx
-\int_\Omega S_\varepsilon ( (\nabla \eta_\varepsilon) \phi^\varepsilon \nabla u_0) \cdot \nabla \psi\, dx\\ & \qquad - \int_\Omega S_\varepsilon (\eta_\varepsilon (\nabla_x \phi)^\varepsilon \nabla u_0) \cdot \nabla \psi\, dx
- \int_\Omega S_\varepsilon (\eta_\varepsilon \phi^\varepsilon \nabla^2 u_0) \cdot \nabla \psi\, dx \Big|. \endaligned \end{equation} By using the skew-symmetry property of $\phi_{kij}$ in (\ref{3.02}) and integration by parts we may show that the first term in the right-hand side of (\ref{3.16}) is zero, if $\psi\in C_0^\infty(\Omega)$. The same is true for any $\psi\in H_0^1(\Omega)$ by a simple density argument. The remaining terms in the right-hand side of (\ref{3.16}) may be handled as in the case of $J_4$, but using estimates of $\phi$ and $\nabla_x \phi$ in (\ref{3.03}). As a result, we obtain \begin{equation}\label{3.17} \aligned J_3
& \le C
\|\nabla \psi\|_{L^2(\Omega_{5\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}\\ &\qquad
+ C \varepsilon \|\nabla \psi\|_{L^2(\Omega)} \Big\{
\|\nabla_x A\|_\infty
\|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla^2 u_0\|_{L^2(\Omega \setminus \Omega_{3\varepsilon})} \Big\}. \endaligned \end{equation}
It remains to estimate $J_1$ and $J_2$. Note that \begin{equation}\label{3.18} \aligned J_1
& \le C \int_\Omega |\nabla u_0| |1-\eta_\varepsilon | |\nabla \psi|\, dx + \int_\Omega
|(A^\varepsilon -\widehat{A}) \eta_\varepsilon \nabla u_0 -S_\varepsilon \big( (A^\varepsilon-\widehat{A} ) \eta_\varepsilon
\nabla u_0 \big)| \, |\nabla \psi |\, dx\\ &=J_{11} +J_{12}. \endaligned \end{equation} By the Cauchy inequality, \begin{equation}\label{3.19} J_{11}
\le C \|\nabla \psi\|_{L^2(\Omega_{4\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}. \end{equation} To bound $J_{12}$, we use (\ref{s-3}) to obtain $$ \aligned J_{12}
& \le C \|\nabla_x A\|_\infty
\int_\Omega |\nabla \psi (x)| \int_{B(x,\varepsilon)}
\frac{\eta_\varepsilon (z) |\nabla u_0 (z)|}{|z-x|^{d-1}}\, dzdx\\ & \qquad
+ C \int_\Omega |\nabla \psi(x)| \int_{B(x, \varepsilon)}
\frac{|\nabla \eta_\varepsilon (z)| |\nabla u_0(z)|
+ \eta_\varepsilon (z) |\nabla^2 u_0 (z)|}{|z-x|^{d-1}}\, dz dx. \endaligned $$ As in the proof of Lemma \ref{s-lemma-2}, this yields that \begin{equation}\label{3.20} \aligned J_{12}
& \le C \varepsilon \|\nabla_x A\|_\infty
\|\nabla \psi\|_{L^2(\Omega)} \|\nabla u_0\|_{L^2(\Omega)}
+ C \|\nabla \psi\|_{L^2(\Omega_{5\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}\\ &\qquad\qquad + C \varepsilon
\|\nabla\psi\|_{L^2(\Omega)}
\|\nabla^2 u_0\|_{L^2(\Omega\setminus \Omega_{3\varepsilon})}. \endaligned \end{equation}
Finally, to bound $J_2$, we observe that $$ \aligned J_2
& \le C\int_\Omega \fint_{B(x, \varepsilon)}
| A(x, x/\varepsilon) -A(z, x/\varepsilon)| \eta_\varepsilon (z)
|\nabla_y \chi (z, x/\varepsilon)|
|\nabla u_0 (z)| |\nabla \psi ( x)|\, dz dx\\
&\le C \varepsilon \|\nabla_x A\|_\infty \int_\Omega \fint_{B(x, \varepsilon )} \eta_\varepsilon (z)
|\nabla_y \chi (z, x/\varepsilon)|
|\nabla u_0 (z)| |\nabla \psi ( x)|\, dz dx\\
& \le C\varepsilon \|\nabla_x A\|_\infty
\|\nabla \psi\|_{L^2(\Omega)}
\|\fint_{B(x, \varepsilon )}
|\nabla_y \chi(z, x/\varepsilon)| \eta_\varepsilon (z) |\nabla u_0 (z)|\, dz \|_{L^2(\Omega)}\\ &\le
C\varepsilon \|\nabla_x A\|_\infty
\|\nabla \psi\|_{L^2(\Omega)}
\Big\| \left( \fint_{B(x, \varepsilon )}
|\nabla_y \chi(z, x/\varepsilon)|^2 \eta_\varepsilon (z) |\nabla u_0 (z)|^2 \, dz \right)^{1/2} \Big\|_{L^2(\Omega)}, \endaligned $$ where we have used the Cauchy inequality for the last two inequalities. By using Fubini's Theorem and (\ref{3.000}) we see that $$
\Big\| \left( \fint_{B(x, \varepsilon)}
|\nabla_y \chi(z, x/\varepsilon)|^2 \eta_\varepsilon (z) |\nabla u_0 (z)|^2 \, dz \right)^{1/2} \Big\|_{L^2(\Omega)}
\le C \|\nabla u_0\|_{L^2(\Omega)}. $$ This gives $$
J_2\le C\varepsilon \|\nabla_x A\|_\infty
\|\nabla \psi\|_{L^2(\Omega)}
\|\nabla u_0\|_{L^2(\Omega)}, $$ and completes the proof. \end{proof}
The next theorem provides an error estimate in $H^1(\Omega)$.
\begin{theorem}\label{thm-3} Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. Assume that $A$ satisfies the same conditions as in Lemma \ref{lemma-3.3}. Let $w_\varepsilon$ be defined by (\ref{w}). Then \begin{equation}\label{H-1-est}
\| w_\varepsilon\|_{H^1(\Omega)}
\le C \varepsilon^{1/2} \| u_0\|^{1/2}_{H^2(\Omega)} \| \nabla u_0\|^{1/2}_{L^2(\Omega)}
+C\varepsilon \| u_0\|_{H^2(\Omega)}
+ C \varepsilon \|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)} \end{equation} for $0<\varepsilon <1$, where $C$ depends only on $d$, $\mu$ and $\Omega$. \end{theorem}
\begin{proof}
Note that $w_\varepsilon\in H_0^1(\Omega)$ and $\| w_\varepsilon\|_{H^1(\Omega)} \le C \|\nabla w_\varepsilon\|_{L^2(\Omega)}$. By taking $\psi =w_\varepsilon$ in (\ref{bl-1}) and using the ellipticity condition of $A$, we obtain \begin{equation}\label{L-2-bl}
\| w_\varepsilon\|_{H^1(\Omega)} \le C \varepsilon \big\{
\|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla^2 u_0\|_{L^2(\Omega\setminus \Omega_{3\varepsilon} )} \big\}
+ C \| \nabla u_0\|_{L^2(\Omega_{4\varepsilon})}. \end{equation} This, together with the inequality \begin{equation}\label{bl-est}
\| v\|_{L^2(\Omega_t)}
\le C t^{1/2} \| v\|_{L^2(\Omega)}^{1/2} \| v\|^{1/2}_{H^1(\Omega)} \end{equation} for $t>0$ and $v\in H^1(\Omega)$, where $\Omega_t$ is defined by (\ref{O-t}), gives (\ref{H-1-est}). \end{proof}
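\begin{remark} {\rm For the reader's convenience, we record the short computation by which (\ref{bl-est}) yields the first term in (\ref{H-1-est}): applying (\ref{bl-est}) componentwise with $v=\nabla u_0$ and $t=4\varepsilon$ gives $$ \|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})} \le C \varepsilon^{1/2} \|\nabla u_0\|^{1/2}_{L^2(\Omega)} \|\nabla u_0\|^{1/2}_{H^1(\Omega)} \le C \varepsilon^{1/2} \| u_0\|^{1/2}_{H^2(\Omega)} \|\nabla u_0\|^{1/2}_{L^2(\Omega)}, $$ while the remaining terms in (\ref{L-2-bl}) are bounded directly by $C\varepsilon \|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)} + C \varepsilon \| u_0\|_{H^2(\Omega)}$. } \end{remark}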
\begin{remark} {\rm Let $\Omega$ be a bounded Lipschitz domain. Let $u_\varepsilon$, $u_0$ and $w_\varepsilon$ be the same as in Theorem \ref{thm-3}. Observe that $$ \aligned
\| u_\varepsilon -u_0\|_{L^2(\Omega)}
&\le \| w_\varepsilon\|_{L^2(\Omega)}
+ \varepsilon \| S_\varepsilon \big( \eta_\varepsilon \chi^\varepsilon \nabla u_0\big) \|_{L^2(\Omega)}\\
&\le C \| w_\varepsilon\|_{H^1(\Omega)}
+ C \varepsilon \| \nabla u_0\|_{L^2(\Omega)}, \endaligned $$ where we have used (\ref{s-1}). This, together with (\ref{L-2-bl}), yields \begin{equation}\label{L-2-estimate} \aligned
\| u_\varepsilon -u_0\|_{L^2(\Omega)}
& \le C \varepsilon (\|\nabla_x A\|_\infty +1) \|\nabla u_0\|_{L^2(\Omega)}
+ C \varepsilon \|\nabla^2 u_0\|_{L^2(\Omega\setminus \Omega_{3\varepsilon})}\\ & \qquad\qquad
+ C \|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}, \endaligned \end{equation} where $C$ depends only on $d$, $\mu$ and $\Omega$. Estimate (\ref{L-2-estimate}) is not sharp, but will be useful in the proof of Theorems \ref{lipth} and \ref{blipth}. } \end{remark}
\begin{remark} {\rm Let $\Omega$ be a bounded $C^{1, 1}$ domain in $\mathbb{R}^d$. Let $w_\varepsilon$ be defined by (\ref{w}), where $u_\varepsilon$ and $u_0$ have the same data $F$ and $f$. Then \begin{equation}\label{r-3.1}
\| w_\varepsilon\|_{H^1(\Omega)}
\le C \varepsilon^{1/2} \Big\{
1 + \|\nabla_x A\|_\infty^{1/2}
+\varepsilon^{1/2} \|\nabla_x A\|_\infty \Big\} \left(
\| F\|_{L^2(\Omega)}
+ \| f\|_{H^{3/2}(\partial\Omega)} \right), \end{equation} where $C$ depends only on $d$, $\mu$ and $\Omega$. This follows from (\ref{H-1-est}), the energy estimate $$
\| u_0\|_{H^1(\Omega)} \le C \left(
\| F\|_{L^2 (\Omega)}
+ \| f\|_{H^{1/2}(\partial\Omega)} \right), $$ and the $H^2$ estimate for $\mathcal{L}_0$, \begin{equation}
\| u_0\|_{H^2(\Omega)}
\le C ( \|\nabla_x A\|_\infty +1) \left(
\| F\|_{L^2(\Omega)}
+ \| f\|_{H^{3/2}(\partial\Omega)} \right), \end{equation} where $C$ depends only on $d$, $\mu$ and $\Omega$. } \end{remark}
\section{Proof of Theorem \ref{tco}}\label{section-4}
The proof of Theorem \ref{tco} is based on an approach of homogenization with a parameter. We start with the case $n=1$ and $A^\varepsilon =A(x, x/\varepsilon)$, considered in the last section.
\begin{lemma}\label{LE61}
Let $\Omega$ be a bounded $C^{1,1}$ domain in $\mathbb{R}^d$.
Assume that $A=A(x,y)$ is 1-periodic in $y$
and satisfies conditions \eqref{elcon} and \eqref{lip-simple}.
Let $u_\varepsilon$ be a weak solution of (\ref{DP}), with
$\mathcal{L}_\varepsilon =-\text{\rm div} \big(A(x, x/\varepsilon)\nabla \big)$,
and $u_0$ the solution of (\ref{DP-0}) with the same data $F\in L^2(\Omega)$ and $f\in H^{3/2}(\partial\Omega)$.
Then
\begin{equation}\label{4.1}
\|u_\varepsilon-u_0\|_{L^{2}(\Omega)}
\leq C \varepsilon
\Big\{
1 +
\|\nabla_x A\|_\infty
+\varepsilon \|\nabla_x A\|^2 _\infty \Big\}
\left( \| F\|_{L^2(\Omega)}
+ \|f\|_{H^{3/2}(\partial\Omega)}\right)
\end{equation}
for $0<\varepsilon<1$, where $C$ depends only on $d$, $n$, $\mu$ and $\Omega$. \end{lemma}
\begin{proof} Let $w_\varepsilon$ be given by (\ref{w}). It follows from (\ref{s-1}) that $$
\|S_\varepsilon ( \eta_\varepsilon \chi^\varepsilon \nabla u_0)\|_{L^{2}(\Omega)}\leq C\|\nabla u_0\|_{L^2(\Omega)}. $$
Thus it suffices to show that $\|w_\varepsilon\|_{L^2(\Omega)}$ is bounded by the right-hand side of (\ref{4.1}). This is done by using (\ref{bl-1}) and a duality argument,
as in \cite{suslinaD2013}. Let $ A^*(x,y)$ denote the adjoint of ${A}(x,y)$. Note that $A^*(x,y)$ satisfies the same conditions as ${A}(x,y)$.
We denote the corresponding correctors and flux correctors by $ \chi^*(x,y)$ and $\psi^*(x,y)$, respectively.
Its matrix of effective coefficients is given by
$\widehat{A^*}=(\widehat{A})^*$, the adjoint of $\widehat{A}$.
For $G\in C_c^\infty(\Omega)$, let $v_\varepsilon$ be the weak solution of the following Dirichlet problem, \begin{equation}\label{PLE612} \begin{cases}
- \text{div}\left( {A}^*(x, x/\varepsilon) \nabla v_{\varepsilon} (x) \right)=G \quad\text{in } \Omega, \\
v_{\varepsilon}=0 \quad \text{on } \partial\Omega,
\end{cases} \end{equation} and $v_0$ the homogenized solution. Define
\begin{align*} \widetilde{ w}_{\varepsilon}(x)=&v_\varepsilon-v_0-\varepsilon S_\varepsilon \big( \widetilde{\eta}_\varepsilon (\chi^*)^\varepsilon \nabla v_0\big), \end{align*} where $(\chi^*)^\varepsilon =\chi^* (x, x/\varepsilon)$ and $\widetilde{\eta}_\varepsilon \in C_0^\infty(\Omega)$ is a cut-off function such that $0\le \widetilde{\eta}_\varepsilon \le 1$, $$ \widetilde{\eta}_\varepsilon (x)=1 \text{ in } \Omega\setminus\Omega_{10\varepsilon},\quad \widetilde{\eta}_\varepsilon (x)=0 \text{ in } \Omega_{ 8 \varepsilon }, $$
and $|\nabla \widetilde{\eta}_\varepsilon | \le C \varepsilon^{-1}$. Note that \begin{align}
\Big|\int_\Omega w_\varepsilon\cdot G\, dx\Big|
&=\Big|\int_\Omega {A}^\varepsilon (x)\nabla w_\varepsilon\cdot\nabla v_\varepsilon \, dx\Big|\nonumber\\
&\leq \Big|\int_\Omega {A}^\varepsilon(x)\nabla w_\varepsilon\cdot\nabla \widetilde{w}_\varepsilon \, dx\Big|
+\Big|\int_\Omega {A}^\varepsilon(x)\nabla w_\varepsilon\cdot\nabla v_0 \, dx\Big|\nonumber\\
&\qquad
+\varepsilon \Big|\int_\Omega {A}^\varepsilon(x)\nabla w_\varepsilon\cdot
\nabla\big[ S_\varepsilon \big( \widetilde{\eta}_\varepsilon
(\chi^*)^\varepsilon \nabla v_0\big) \big] dx\Big|\nonumber\\
&\doteq J_1+J_2+J_3.\label{PLE613} \end{align} We estimate $J_1$, $J_2$, and $J_3$ separately.
By using the Cauchy inequality and (\ref{r-3.1}), we obtain \begin{equation}\label{4.11} \aligned J_1
& \le C \|\nabla w_\varepsilon \|_{L^2(\Omega)}
\|\nabla \widetilde{w}_\varepsilon \|_{L^2(\Omega)}\\
& \le C \varepsilon \Big\{ 1 + \|\nabla_x A \|_\infty
+\varepsilon \|\nabla_x A\|^2 _\infty \Big\}
\left( \| F\|_{L^2(\Omega)}
+ \| f\|_{H^{3/2}(\partial\Omega)}\right)
\| G \|_{L^2(\Omega)}, \endaligned \end{equation} where we have also used the estimate \begin{align}\label{4.12}
\| \widetilde{w}_\varepsilon\|_{H^1_0(\Omega)} \leq C \varepsilon^{1/2} \Big\{
1 + \|\nabla_x A\|_\infty^{1/2} +
\varepsilon^{1/2} \|\nabla_x A\|_\infty\Big\}
\| G\|_{L^2(\Omega)}.
\end{align}
The proof of (\ref{4.12}) is the same as that of (\ref{r-3.1}).
Next, we use (\ref{bl-1}) to obtain
\begin{equation}\label{4.13}
\aligned
J_2
&\le C \varepsilon \|\nabla v_0\|_{L^2(\Omega)}
\Big\{ \|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)}
+\|\nabla^2 u_0\|_{L^2(\Omega)} \Big\}\\
&\qquad\qquad
+C \|\nabla v_0\|_{L^2(\Omega_{5\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}.
\endaligned
\end{equation}
Note that by (\ref{bl-est}),
$$
\|\nabla v_0\|_{L^2(\Omega_{5\varepsilon})}
\|\nabla u_0\|_{L^2(\Omega_{4\varepsilon})}
\le C \varepsilon \| \nabla v_0\|_{L^2(\Omega)}^{1/2} \| v_0\|_{H^2(\Omega)}^{1/2}
\| \nabla u_0\|_{L^2(\Omega)}^{1/2}
\| u_0\|_{H^2(\Omega)}^{1/2}.
$$
This, together with (\ref{4.13}) and the energy estimates and $H^2$ estimates for $\mathcal{L}_0$ and $\mathcal{L}_0^*$,
gives
\begin{equation}\label{4.14}
J_2 \le C \varepsilon (1+\|\nabla_x A\|_\infty)
\left( \| F\|_{L^2(\Omega)}
+ \| f\|_{H^{3/2}(\partial\Omega)}\right)
\| G \|_{L^2(\Omega)}.
\end{equation}
The estimate of $J_3$ is similar to that of $J_2$. By (\ref{bl-1}) we see that $$ J_3
\le C\varepsilon^2 \| \nabla\big[ S_\varepsilon \big( \widetilde{\eta}_\varepsilon
(\chi^*)^\varepsilon \nabla v_0\big) \big]\|_{L^2(\Omega)}
\Big\{
\|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)}
+\|\nabla^2 u_0\|_{L^2(\Omega)} \Big\},
$$
where we have used the fact that $\widetilde{\eta}_\varepsilon=0$ on $\Omega_{8\varepsilon}$.
Note that by (\ref{s-1}),
$$
\aligned
& \| \nabla\big[ S_\varepsilon \big( \widetilde{\eta}_\varepsilon
(\chi^*)^\varepsilon \nabla v_0\big) \big]\|_{L^2(\Omega)}\\
&\le \| S_\varepsilon \big[
(\nabla \widetilde{\eta}_\varepsilon ) (\chi^*)^\varepsilon \nabla v_0 \big]\|_{L^2(\Omega)}
+ \| S_\varepsilon \big[ \widetilde{\eta}_\varepsilon (\nabla_x \chi^* )^\varepsilon \nabla v_0 \big] \|_{L^2(\Omega)}\\ &\qquad
+ \varepsilon^{-1} \| S_\varepsilon \big[ \widetilde{\eta}_\varepsilon (\nabla_y \chi^*)^\varepsilon \nabla v_0\big]\|_{L^2(\Omega)}
+ \| S_\varepsilon \big[ \widetilde{\eta}_\varepsilon (\chi^*)^\varepsilon \nabla^2 v_0\big] \|_{L^2(\Omega)}\\
&\le C \varepsilon^{-1} \|\nabla v_0\|_{L^2(\Omega)}
+ C \|\nabla^2 v_0\|_{L^2(\Omega)}.
\endaligned
$$
It follows that $$
\aligned J_3 & \le C \varepsilon \Big\{ \|\nabla v_0\|_{L^2(\Omega)}
+ \varepsilon \|\nabla^2 v_0\|_{L^2(\Omega)} \Big\}
\Big\{ \|\nabla_x A\|_\infty \|\nabla u_0\|_{L^2(\Omega)}
+ \|\nabla^2 u_0\|_{L^2(\Omega)} \Big\}\\
&\le C \varepsilon (1+\|\nabla_x A\|_\infty) (1+\varepsilon \|\nabla_x A\|_\infty)
\left( \| F\|_{L^2(\Omega)}
+ \| f\|_{H^{3/2}(\partial\Omega)}\right)
\| G \|_{L^2(\Omega)}. \endaligned $$ By combining the estimates of $J_1, J_2$ and $J_3$ we obtain $$ \aligned
& \Big|\int_{\Omega}w_\varepsilon\cdot G \, dx\Big|\\ & \le C \varepsilon
\Big\{
1 +
\|\nabla_x A\|_\infty
+\varepsilon \|\nabla_x A\|^2 _\infty \Big\} \big( \| F \|_{L^2(\Omega)} + \|f\|_{H^{3/2}(\partial\Omega)} \big) \|G\|_{L^2(\Omega)}, \endaligned $$ from which the desired estimate for $w_\varepsilon$ follows by duality. \end{proof}
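\begin{remark} {\rm For completeness, we record the duality step used in the last line: since $C_c^\infty(\Omega)$ is dense in $L^2(\Omega)$, $$ \| w_\varepsilon \|_{L^2(\Omega)} =\sup\left\{ \Big|\int_\Omega w_\varepsilon \cdot G\, dx \Big| :\ G\in C_c^\infty(\Omega),\ \| G\|_{L^2(\Omega)}\le 1 \right\}, $$ and taking the supremum over such $G$ in the inequality above gives the desired bound on $\| w_\varepsilon\|_{L^2(\Omega)}$, and hence (\ref{4.1}). } \end{remark}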
We are now in a position to give the proof of Theorem \ref{tco}.
\begin{proof}[\bf Proof of Theorem \ref{tco}] We prove the theorem by using an induction argument on $n$. The case $n=1$ follows directly from Lemma \ref{LE61}. Suppose that the theorem is true for some $n-1$. To prove the theorem for $n$, let $u_\varepsilon$ be a weak solution of the Dirichlet problem (\ref{DP}) and $u_0$ the solution of the homogenized problem (\ref{DP-0}) with the same data $(F, f)$. Let $v_\varepsilon$ be the weak solution to \begin{equation}\label{DP-I} -\text{\rm div} \big( A_{n-1} (x, x/\varepsilon_1, \dots, x/\varepsilon_{n-1}) \nabla v_\varepsilon \big) =F \quad \text{ in } \Omega \quad \text{ and } \quad v_\varepsilon =f \quad \text{ on } \partial\Omega, \end{equation} where $A_{n-1}$ is defined by (\ref{A-ell}) with $\ell=n$ and $A_n=A$. Note that $$
\|\nabla_{x, y_1, \dots, y_{n-2}} A_{n-1} \|_\infty
\le C \|\nabla_{x, y_1, \dots, y_{n-1}} A\|_\infty \le CL. $$ By the induction assumption, \begin{equation}\label{ind}
\| v_\varepsilon -u_0\|_{L^2(\Omega)} \le C \big\{ \varepsilon_1 +\varepsilon_2/\varepsilon_1 +\cdots + \varepsilon_{n-1} /\varepsilon_{n-2} \big\} \big\{
\| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \big\}, \end{equation} where $C$ depends only on $d$, $n$, $\mu$, $L$ and $\Omega$.
To bound $\| u_\varepsilon -v_\varepsilon\|_{L^2(\Omega)}$, we use Lemma \ref{LE61}. For each $0<\varepsilon<1$ fixed, we let $$ E(x, y)=A(x, x/\varepsilon_1, \dots, x/\varepsilon_{n-1} , y). $$ Then $$ A(x, x/\varepsilon_1, \dots, x/\varepsilon_n) =E(x, x/\varepsilon_n). $$ Note that $$
\|\nabla_x E\|_\infty \le C L \varepsilon_{n-1} ^{-1}, $$
where we have used the assumption that $0<\varepsilon_n<\varepsilon_{n-1} < \cdots< \varepsilon_1<1$.
By Lemma \ref{LE61}, we obtain
$$
\aligned
\| u_\varepsilon -v_\varepsilon\|_{L^2(\Omega)}
& \le C \varepsilon_n
\left\{ 1+ \|\nabla_x E\|_\infty +\varepsilon_n \| \nabla_x E\|^2_\infty \right\}
\left\{ \| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \right\}\\
& \le C \varepsilon_n
\left\{ 1+ L \varepsilon_{n-1}^{-1} + L^2 \varepsilon_n \varepsilon_{n-1}^{-2} \right\}
\left\{ \| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \right\}\\ &\le C (1+L)^2 \varepsilon_n \varepsilon_{n-1}^{-1}
\left\{ \| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \right\}.
\endaligned
$$ This, together with (\ref{ind}), gives (\ref{tre1}). \end{proof}
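\begin{remark} {\rm Explicitly, the last step in the proof above is the triangle inequality: $$ \| u_\varepsilon -u_0\|_{L^2(\Omega)} \le \| u_\varepsilon -v_\varepsilon\|_{L^2(\Omega)} + \| v_\varepsilon -u_0\|_{L^2(\Omega)} \le C \big\{ \varepsilon_1 +\varepsilon_2/\varepsilon_1 +\cdots +\varepsilon_n/\varepsilon_{n-1} \big\} \big\{ \| F\|_{L^2(\Omega)} + \| f\|_{H^{3/2}(\partial\Omega)} \big\}. $$ } \end{remark}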
\section{Approximation}\label{section-5}
In preparation for the proofs of Theorems \ref{lipth} and \ref{blipth}, we establish several results on the approximation of solutions of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ by solutions of $\mathcal{L}_0 (u_0)=F$ in this section. We start with a simple case, where $n=1$ and $A=A(x, y)$ is Lipschitz continuous in $x$.
\begin{lemma}\label{lemma-5.1}
Suppose $A=A(x, y)$ satisfies (\ref{elcon}) and is 1-periodic in $y$. Also assume that $\|\nabla_x A\|_\infty<\infty$. Let $\mathcal{L}_\varepsilon =-\text{\rm div} \big( A(x, x/\varepsilon)\nabla \big)$ and $u_\varepsilon$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $B_{2r}=B(x_0, 2r)$, where $\varepsilon\le r\le 1$ and $F\in L^2(B_{2r})$. Then there exists a weak solution to $\mathcal{L}_0 (u_0)=F$ in $B_r$ such that \begin{equation}\label{5.01} \aligned
& \left(\fint_{B_r} |u_\varepsilon -u_0 |^2 \right)^{1/2}\\ & \le C \left\{ \left(\frac{\varepsilon}{r}\right)^\sigma
+\varepsilon \|\nabla_x A\|_\infty \right\} \left\{
\left(\fint_{B_{2r}} |u_\varepsilon|^2\right)^{1/2}
+ r^2 \left(\fint_{B_{2r}} |F|^2 \right)^{1/2} \right\} , \endaligned \end{equation} where $\sigma>0$ and $C$ depends only on $d$ and $\mu$. \end{lemma}
\begin{proof} By rescaling we may assume $r=1$. To see this, we note that if $-\text{\rm div} \big( A(x, x/\varepsilon)\nabla u_\varepsilon\big) =F$ in $B_{2r}$
and $v (x)= u_\varepsilon (rx)$, then $-\text{\rm div} \big( \widetilde{A} (x, x/\delta) \nabla v\big)=G$ in $B_2$, where $\widetilde{A} (x, y)=A(rx, y)$, $\delta=\varepsilon/r$, and $G(x)=r^2 F(rx)$. Also, observe that $\|\nabla_x \widetilde{A} \|_\infty= r \|\nabla_x A\|_\infty$.
Now, suppose that $-\text{\rm div} \big( A(x, x/\varepsilon)\nabla u_\varepsilon\big) =F$ in $B_{2}$. Let $u_0\in H^1(B_{3/2})$ be the weak solution to $$ \mathcal{L}_0 (u_0)=F \quad \text{ in } B_{3/2} \quad \text{ and } \quad u_0=u_\varepsilon \quad \text{ on } \partial B_{3/2}. $$ Note that $u_0-u_\varepsilon\in H^1_0(B_{3/2})$ and $$ \mathcal{L}_\varepsilon (u_0 -u_\varepsilon)=\text{\rm div} \big( (\widehat{A} -A^\varepsilon ) \nabla u_\varepsilon \big) \quad \text{ in } B_{3/2}. $$ It follows from the Meyers' estimates that $$
\fint_{B_{3/2}} |\nabla (u_\varepsilon-u_0) |^q
\le C \fint_{B_{3/2}} |\nabla u_\varepsilon|^q $$ for some $q >2$ and $C>0$, depending only on $d$ and $\mu$. This, together with the Meyers' estimate, $$
\left(\fint_{B_{3/2}} |\nabla u_\varepsilon|^q\right)^{1/q}
\le C \left(\fint_{B_2} |u_\varepsilon|^2\right)^{1/2}
+ C \left(\fint_{B_2} |F|^2\right)^{1/2}, $$ gives \begin{equation}\label{5.02}
\left(\fint_{B_{3/2}} |\nabla u_0|^q\right)^{1/q} \le
C \left(\fint_{B_2} |u_\varepsilon|^2\right)^{1/2}
+ C \left(\fint_{B_2} |F|^2\right)^{1/2}. \end{equation} Also, by the interior $H^2$ estimate for $\mathcal{L}_0$, \begin{equation}\label{5.03} \fint_{B(z,\rho)}
|\nabla^2 u_0|^2 \le C \fint_{B(z, 2\rho)}
|F|^2
+ C \big( \|\nabla_x A\|_\infty^2 +\rho^{-2} \big)
\fint_{B(z, 2\rho)} |\nabla u_0|^2, \end{equation} where $B(z, 2\rho)\subset B_2$, we may deduce that \begin{equation}\label{5.04} \aligned \int_{B_{(3/2) -t}}
|\nabla^2 u_0|^2\, dx
& \le C \int_{B_{3/2}}
|F|^2\, dx
+ C \|\nabla_x A\|^2_\infty \int_{B_{3/2}} |\nabla u_0 |^2\, dx\\ &\qquad\qquad + C \int_{B_{(3/2)-(t/2)}}
\frac{ |\nabla u_0 (x) |^2\, dx }{|\text{\rm dist} (x, \partial B_{3/2})|^2 } \endaligned \end{equation} for $0<t<1$. By H\"older's inequality, the last term in the right-hand side of (\ref{5.04}) is bounded by $$
C t^{-\frac{2}{q}-1} \left(\int_{B_{3/2}} |\nabla u_0|^q \right)^{2/q}. $$ In view of (\ref{5.02}) and (\ref{5.04}) we obtain \begin{equation}\label{5.05} \int_{B_{(3/2) -t}}
|\nabla^2 u_0|^2\, dx
\le C \left\{
t^{-\frac{2}{q}-1} + \|\nabla_x A\|^2_\infty\right\} \left\{
\fint_{B_2} |F|^2
+\fint_{B_2} |u_\varepsilon|^2 \right\} \end{equation} for $0<t<1$, where $C$ depends only on $d$ and $\mu$.
Finally, to finish the proof, we use the estimate (\ref{L-2-estimate}) to obtain $$ \aligned
\int_{B_{3/2}} |u_\varepsilon -u_0|^2
& \le C \varepsilon^2 (\|\nabla_x A\|^2 _\infty +1) \int_{B_{3/2}} |\nabla u_0|^2
+ C \varepsilon^2 \int_{|x|<\frac{3}{2}-3\varepsilon}
|\nabla^2 u_0|^2\\ &\qquad\qquad
+ C \int_{\frac{3}{2}-4\varepsilon < |x|<\frac{3}{2}} |\nabla u_0|^2. \endaligned $$ We bound the second term in the right-hand side of the inequality above by using (\ref{5.05}), and the third term by using H\"older's inequality and (\ref{5.02}). It follows that $$
\int_{B_{3/2}} |u_\varepsilon -u_0|^2 \le C \Big\{ \varepsilon^{1-\frac{2}{q}}
+\varepsilon^2 \|\nabla_x A\|_\infty^2 \Big\}
\left\{ \int_{B_2} |u_\varepsilon|^2 +\int_{B_2} |F|^2 \right\}. $$ This gives the estimate (\ref{5.01}) with $r=1$ and $\sigma = \frac12 -\frac{1}{q}>0$. \end{proof}
The next lemma deals with the case $n=1$ and $A=A(x, y)$ is H\"older continuous in $x$, \begin{equation}\label{H-C}
|A(x, y)-A(x^\prime, y)|\le L |x-x^\prime|^\theta \quad \text{ for any } x, x^\prime \in \mathbb{R}^d, \end{equation} where $L\ge 0$ and $\theta\in (0,1)$.
\begin{lemma}\label{lemma-5.2} Suppose $A=A(x, y)$ satisfies (\ref{elcon}), (\ref{H-C}), and is 1-periodic in $y$. Let $\mathcal{L}_\varepsilon =-\text{\rm div} \big( A(x, x/\varepsilon)\nabla \big)$ and $u_\varepsilon$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $B_{2r}=B(x_0, 2r)$, where $\varepsilon\le r\le 1$ and $F\in L^2 (B_{2r})$. Then there exists a weak solution to $\mathcal{L}_0 (u_0)=F$ in $B_r$ such that \begin{equation}\label{5.21} \aligned
& \left(\fint_{B_r} |u_\varepsilon -u_0 |^2 \right)^{1/2}\\ & \le C \left\{ \left(\frac{\varepsilon}{r}\right)^\sigma +\varepsilon^\theta L \right\} \left\{
\left(\fint_{B_{2r}} |u_\varepsilon|^2\right)^{1/2}
+ r^2 \left(\fint_{B_{2r}} |F|^2 \right)^{1/2} \right\} , \endaligned \end{equation} where $\sigma>0$ depends only on $d$ and $\mu$. The constant $C$ depends only on $d$, $\mu$ and $\theta$. \end{lemma}
\begin{proof} As in the proof of Lemma \ref{lemma-5.1}, by rescaling, we may assume $r=1$. We also assume that $\varepsilon^\theta L<1$; for otherwise the inequality is trivial.
By using a convolution in the $x$ variable we may find a matrix $\widetilde{A}=\widetilde{A}(x, y)$ such that $\widetilde{A}$ satisfies the ellipticity condition (\ref{elcon}), is 1-periodic in $y$, and \begin{equation}\label{5.22}
\| A-\widetilde{A} \|_\infty \le C L\varepsilon ^\theta \quad \text{ and } \quad
\|\nabla_x \widetilde{A}\|_\infty \le C L \varepsilon ^{\theta-1}, \end{equation} where $C$ depends only on $d$ and $\theta$. Let $v_\varepsilon$ be the weak solution to \begin{equation} -\text{\rm div} \big( \widetilde{A}(x, x/\varepsilon)\nabla v_\varepsilon\big) =F \quad \text{ in } B_{3/2} \quad \text{ and } \quad v_\varepsilon =u_\varepsilon \quad \text{ on } \partial B_{3/2}. \end{equation} By the energy estimate as well as the first inequality in (\ref{5.22}), $$ \aligned \fint_{B_{3/2}}
|\nabla (u_\varepsilon -v_\varepsilon)|^2 &\le C (L\varepsilon^\theta)^2
\fint_{B_{3/2}} |\nabla u_\varepsilon|^2\\ &\le C (L \varepsilon^\theta)^2 \left\{
\fint_{B_2} |u_\varepsilon|^2
+ \fint_{B_2} |F|^2 \right\}, \endaligned $$ where we have used the Caccioppoli inequality for the last step. This, together with Poincar\'e's inequality, gives \begin{equation}\label{5.23} \left(\fint_{B_{3/2}}
| u_\varepsilon -v_\varepsilon|^2 \right)^{1/2} \le C L \varepsilon ^\theta \left\{
\left(\fint_{B_2} |u_\varepsilon|^2 \right)^{1/2}
+ \left(\fint_{B_2} |F|^2\right)^{1/2}\right\}. \end{equation} Next, we apply Lemma \ref{lemma-5.1} (and its proof) to the operator $-\text{\rm div} \big(\widetilde{A}(x, x/\varepsilon)\nabla \big)$. Let $\widetilde{A}_0(x) $ denote the matrix of effective coefficients for $\widetilde{A}(x, y)$. It follows that there exists $v_0 \in H^1(B_{5/4})$ such that $-\text{\rm div} \big( \widetilde{A}_0 (x)\nabla v_0)=F$ in $B_{5/4}$, and \begin{equation}\label{5.24} \aligned \left(\fint_{B_{5/4}}
|v_\varepsilon -v_0|^2 \right)^{1/2}
& \le C \big\{ \varepsilon^\sigma + \varepsilon^\theta L \big\} \left\{ \left(\fint_{B_{3/2}}
|v_\varepsilon|^2\right)^{1/2}
+ \left(\fint_{B_{3/2}} |F|^2 \right)^{1/2} \right\}\\ & \le C \big\{ \varepsilon^\sigma + \varepsilon^\theta L \big\} \left\{ \left(\fint_{B_{2}}
|u_\varepsilon|^2\right)^{1/2}
+ \left(\fint_{B_{2}} |F|^2 \right)^{1/2} \right\}, \endaligned \end{equation} where we have used the second inequality in (\ref{5.22}) as well as (\ref{5.23}).
Finally, let $u_0$ be the weak solution to $\mathcal{L}_0 (u_0)=F$ in $B_1 $ and $u_0=v_0$ on $\partial B_1$. Observe that by the first inequality in (\ref{5.22}), $$
\| \widetilde{A}_0 - \widehat{A} \|_\infty \le C \varepsilon^\theta L, $$ where $C$ depends only on $d$ and $\mu$. It then follows by Poincar\'e's inequality that $$ \aligned
\int_{B_1} |u_0 -v_0|^2 &\le C
\int_{B_1}|\nabla (u_0-v_0)|^2\\ &\le C (\varepsilon^\theta L)^2
\int_{B_1} |\nabla v_0|^2\\ &\le C(\varepsilon^\theta L)^2
\left\{ \int_{B_{5/4}} |v_0|^2 +\int_{B_2} |F|^2 \right\}\\ &\le C (\varepsilon^\theta L)^2
\left\{ \int_{B_2} |u_\varepsilon|^2 +\int_{B_2} |F|^2 \right\}, \endaligned $$ where we have used Caccioppoli's inequality for the third inequality and (\ref{5.24}) for the fourth. This, together with (\ref{5.23}) and (\ref{5.24}), gives (\ref{5.21}) for $r=1$. \end{proof}
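\begin{remark} {\rm For clarity, we spell out the final step. By the triangle inequality and the fact that averages over $B_1$ are dominated, up to a dimensional constant, by those over $B_{5/4}$ and $B_{3/2}$, $$ \left(\fint_{B_1} |u_\varepsilon -u_0|^2\right)^{1/2} \le C \left(\fint_{B_{3/2}} |u_\varepsilon -v_\varepsilon|^2\right)^{1/2} + C \left(\fint_{B_{5/4}} |v_\varepsilon -v_0|^2\right)^{1/2} + \left(\fint_{B_1} |v_0 -u_0|^2\right)^{1/2}, $$ and each term in the right-hand side is bounded by $C\big\{ \varepsilon^\sigma +\varepsilon^\theta L \big\}$ times the data, by (\ref{5.23}), (\ref{5.24}), and the last estimate above, respectively. } \end{remark}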
We are now ready to handle the general case, where $n\ge 1$ and \begin{equation}\label{op-5} \mathcal{L}_\varepsilon=-\text{\rm div} \big( A(x, x/\varepsilon_1, \dots, x/\varepsilon_n)\nabla \big) \end{equation} with $0<\varepsilon_n<\varepsilon_{n-1}< \cdots< \varepsilon_1<1$.
\begin{theorem}\label{theorem-5.1} Suppose that $A=A(x, y_1, \dots, y_n)$ satisfies conditions (\ref{elcon}), (\ref{pcon}), and (\ref{lipcon}) for some $\theta\in (0, 1]$ and $L\ge 0$. Let $\mathcal{L}_\varepsilon $ be given by (\ref{op-5}) and $u_\varepsilon$ a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $B_{tr} =B (x_0, t r)$ for some $t>1$, where $\varepsilon_1 \le r\le 1$ and $F\in L^2(B_{tr})$. Then there exists $u_0\in H^1(B_r)$ such that $\mathcal{L}_0 (u_0)=F$ in $B_r$ and \begin{equation}\label{5.30} \aligned
\left(\fint_{B_r} |u_\varepsilon -u_0|^2 \right)^{1/2} & \le C \left\{ \left(\frac{\varepsilon_1}{r} \right)^\sigma +\left (\varepsilon_1 + \varepsilon_2/\varepsilon_1 +\cdots + \varepsilon_n /\varepsilon_{n-1} \right)^\theta L \right\}\\ & \qquad\qquad
\cdot \left\{ \left(\fint_{B_{tr}} |u_\varepsilon|^2 \right)^{1/2}
+ r^2 \left(\fint_{B_{tr}} |F|^2 \right)^{1/2} \right\}, \endaligned \end{equation} where $\sigma>0$ depends only on $d$ and $\mu$. The constant $C$ depends only on $d$, $n$, $\mu$, $t$, and $\theta$. \end{theorem}
\begin{proof} We prove the theorem by an induction argument on $n$. The case $n=1$ with $t=2$ is given by Lemma \ref{lemma-5.2}. The proof for the general case $t>1$ is similar. Now suppose the theorem is true for $n-1$. To show it is true for $n$, let $u_\varepsilon$ be a weak solution to $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $B_{t r}$, where $\mathcal{L}_\varepsilon$ is given by (\ref{op-5}). Fix $\varepsilon>0$ and consider the matrix $$ E (x, y)=A(x, x/\varepsilon_1, \dots, x/\varepsilon_{n-1} , y). $$ Note that $E$ satisfies the ellipticity condition (\ref{elcon}) and is 1-periodic in $y$. Moreover, we have \begin{equation}\label{5.300}
| E (x, y) - E (x^\prime, y)|
\le C \varepsilon_{n-1}^{-\theta} L |x-x^\prime|^\theta \quad \text{ for any } x, x^\prime \in \mathbb{R}^d, \end{equation} where $C$ depends only on $d$ and $n$. Also recall that the matrix of effective coefficients for $E (x, y)$ is given by $$ A_{n-1} (x, x/\varepsilon_1 , \cdots, x/\varepsilon_{n-1} ), $$ where $A_{n-1}(x, y_1, \cdots, y_{n-1})$ is given by (\ref{A-ell}) with $\ell =n$ and $A_n=A$. Let $1<s<t$. By the theorem for the case $n=1$,
there exists $v_\varepsilon\in H^1(B_{s r})$ such that $$ -\text{\rm div} \big( A_{n-1} (x, x/\varepsilon_1, \dots, x/\varepsilon_{n-1} )\nabla v_\varepsilon \big) =F \quad \text{ in } B_{ s r}, $$ and \begin{equation}\label{5.31} \aligned \left(\fint_{B_{s r}}
|u_\varepsilon -v_\varepsilon|^2\right)^{1/2}
& \le C \left\{ \left(\frac{\varepsilon_n }{r}\right)^\sigma + \left(\frac{\varepsilon_n}{\varepsilon_{n-1}} \right)^\theta L \right\}\\ &\qquad\qquad \cdot
\left\{ \left(\fint_{B_{t r}} |u_\varepsilon|^2 \right)^{1/2}
+ r^2 \left(\fint_{B_{t r}} |F|^2 \right)^{1/2} \right\}. \endaligned \end{equation} By induction assumption there exists $u_0\in H^1(B_r)$ such that $\mathcal{L}_0 (u_0)=F$ in $B_r$ and \begin{equation}\label{5.32} \aligned
\left(\fint_{B_r} |v_\varepsilon -u_0|^2 \right)^{1/2} & \le C \left\{ \left(\frac{\varepsilon_1 }{r} \right)^\sigma + (\varepsilon_1+ \varepsilon_2/\varepsilon_1 +\cdots + \varepsilon_{n-1}/\varepsilon_{n-2} )^\theta L \right\}\\ & \qquad\qquad
\cdot \left\{ \left(\fint_{B_{s r}} |v_\varepsilon|^2 \right)^{1/2}
+ r^2 \left(\fint_{B_{s r}} |F|^2 \right)^{1/2} \right\}. \endaligned \end{equation} Estimate (\ref{5.30}) follows readily from (\ref{5.31}) and (\ref{5.32}). \end{proof}
\begin{remark} {\rm Let $\delta=\varepsilon_1 +\varepsilon_2/\varepsilon_1 +\cdots + \varepsilon_{n-1}/\varepsilon_{n-2} $. It follows from Theorem \ref{theorem-5.1} (with $t=2$) that for $\delta\le r<1$, \begin{equation}\label{remark-5.10} \left(\fint_{B_r}
|u_\varepsilon -u_0|^2 \right)^{1/2} \le C \left(\frac{\delta}{r } \right)^\sigma \left\{ \left(\fint_{B_{2r}}
|u_\varepsilon|^2 \right)^{1/2}
+ r^2 \left(\fint_{B_{2r}} |F|^2\right)^{1/2} \right\}, \end{equation} where $\sigma>0$ depends only on $d$, $\mu$ and $\theta$. The constant $C$ depends at most on $d$, $n$, $\mu$ and $(\theta, L)$. Suppose that $(\varepsilon_1, \varepsilon_2, \dots, \varepsilon_n)$ satisfies the condition (\ref{w-s-cond}). Then $\delta \le C \varepsilon_1^\beta$ for some $\beta>0$ depending only on $n$ and $N$. This, together with (\ref{remark-5.10}), implies that for $\varepsilon_1\le r< 1$, \begin{equation}\label{remark-5.1} \left(\fint_{B_r}
|u_\varepsilon -u_0|^2 \right)^{1/2} \le C \left(\frac{\varepsilon_1}{r } \right)^\rho \left\{ \left(\fint_{B_{2r}}
|u_\varepsilon|^2 \right)^{1/2}
+ r^2 \left(\fint_{B_{2r}} |F|^2\right)^{1/2} \right\}, \end{equation} where $\rho>0$ depends only on $d$, $n$, $\mu$, $\theta$, and $N$. } \end{remark}
\section{Large-scale interior estimates}\label{section-6}
This section focuses on large-scale interior estimates for $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ and gives the proof of Theorem \ref{lipth}. Throughout this section we assume that $\mathcal{L}_\varepsilon$ is given by (\ref{operator}) and $A=A(x, y_1, \dots, y_n)$ satisfies (\ref{elcon}), (\ref{pcon}), and (\ref{lipcon}) for some $\theta\in (0, 1]$ and $L\ge 0$. We also assume that $0<\varepsilon_n<\varepsilon_{n-1}<\dots< \varepsilon_1<1$ and the condition (\ref{w-s-cond}) of well-separation is satisfied.
We start with estimates of solutions of $\mathcal{L}_0 (u_0)=F$. Let $\mathcal{P}$ denote the set of linear functions.
\begin{lemma}\label{liple2}
Let $u_0\in H^1(B_r)$ be a solution to $\mathcal{L}_0(u_0)=F$ in $B_r=B(0, r)$,
where $0<r\leq 1$ and $ F\in L^p(B_r)$ for some $p>d$.
Define
\begin{align}\label{reliple2}
G(r; u_0)=\frac{1}{r}\inf_{P\in \mathcal{P}}\left\{\left(\fint_{B_r}|u_0-P|^2\right)^{1/2}+ r^{1+\vartheta }|\nabla P| \right\}
+r \left(\fint_{B_r}|F|^p\right)^{1/p},
\end{align}
where $\vartheta=\min\{\theta, 1-d/p\}$.
Then there exists $t \in(0, 1/8)$, depending only on $d$, $\mu$, $p$ and $(\theta, L)$ in \eqref{lipcon},
such that
$$
G(t r; u_0)\leq \frac{1}{2}G(r; u_0).
$$ \end{lemma}
\begin{proof} Let $ P_0= x \cdot \nabla u_0(0) +u_0(0)$. Then
\begin{equation}
\label{pliplem0}
\aligned
G(tr; u_0)&\leq \frac{1}{tr} \|u_0-P_0\|_{L^\infty(B_{tr})} +tr\left(\fint_{B_{tr}}| F |^p\right)^{1/p}+(tr)^\vartheta |\nabla u_0(0)|\\
&\leq (tr)^\vartheta\| \nabla u_0\|_{C^{0,\vartheta}(B_{tr})} +tr\left(\fint_{B_{tr}}|F |^p\right)^{1/p}+(tr)^\vartheta | \nabla u_0(0)| \\
&= (tr)^\vartheta\| \nabla (u_0-P)\|_{C^{0,\vartheta}(B_{tr})} +tr\left(\fint_{B_{tr}}|F|^p\right)^{1/p}+(tr)^\vartheta |\nabla u_0(0)|
\endaligned
\end{equation}
for any $P\in \mathcal{P}$.
Note that
\begin{equation}
\label{pliplem1}
\aligned
tr\left(\fint_{B_{tr}}|F|^p\right)^{1/p}\leq C t^{1-d/p} r \left(\fint_{B_{r}}|F|^p\right)^{1/p}.
\endaligned
\end{equation}
By interior Lipschitz estimates for $u_0$, we may deduce that
\begin{equation}\label{pliplem2}
\aligned
|\nabla u_0(0)| &\leq \frac{C}{r} \left(\fint_{B_r} |u_0-b|^2\right)^{1/2} +Cr\left(\fint_{B_r}|F|^p\right)^{1/p} \\
&\leq \frac{C}{r} \left(\fint_{B_r} |u_0-P|^2\right)^{1/2}+ \frac{C}{r}\left(\fint_{B_r} |P-b|^2\right)^{1/2} +Cr\left(\fint_{B_r}|F|^p\right)^{1/p}\\
&\leq \frac{C}{r} \left(\fint_{B_r} |u_0-P|^2\right)^{1/2} +C |\nabla P|+ Cr\left(\fint_{B_r}|F|^p\right)^{1/p},
\endaligned
\end{equation}
where $b= P(0) $. Also, note that
\begin{align*} -\text{div} \big(\widehat{A} \nabla (u_0-P)\big)= F +\text{div} \big([\widehat{A} -\widehat{A} (0)] \nabla P\big) \quad \text{ in } B_r. \end{align*} By $C^{1,\vartheta}$ estimates for the elliptic operator $\mathcal{L}_0$, we obtain that for $0<t<1/2$, \begin{equation}\label{pliplem3} \aligned
\|\nabla (u_0-P)\|_{C^{0,\vartheta}(B_{tr})}
&\leq \|\nabla (u_0-P)\|_{C^{0,\vartheta}(B_{r/2})} \\
&\leq \frac{C}{r^{1+\vartheta}} \Big( \fint_{B_r} |u_0-P|^2\Big)^{1/2} +C r^{-\vartheta}\| [\widehat {A}-\widehat{A}(0)] \nabla P \|_{L^\infty(B_r)} \\
&\quad+ C \| [\widehat{A}-\widehat{A}(0)] \nabla P \|_{C^{0,\vartheta}(B_r)}+ Cr^{1-\vartheta} \Big(\fint_{B_r}|F|^p\Big)^{1/p} \\
&\leq \frac{C}{r^{1+\vartheta}} \Big( \fint_{B_r} |u_0-P|^2\Big)^{1/2} + C |\nabla P | + Cr^{1-\vartheta} \Big(\fint_{B_r}|F|^p\Big)^{1/p}. \endaligned \end{equation} By using \eqref{pliplem1}--\eqref{pliplem3} to bound the right-hand side of \eqref{pliplem0}, we obtain $$ G(tr; u_0) \leq C t^\vartheta G(r; u_0)$$ for some constant $C$ depending only on $d$, $\mu$, $p$ and $(\theta, L)$ in \eqref{lipcon}. The desired result follows by choosing $t$ so small that $Ct^\vartheta\le 1/2$. \end{proof}
\begin{lemma}\label{liple3}
Let $u_{\varepsilon}\in H^1(B_1)$ be a solution of $\mathcal{L}_{\varepsilon} (u_{\varepsilon})=F$ in $B_1$,
where $F\in L^p(B_1)$ for some $p>d$ and $\varepsilon\in (0, 1/4)$.
For $0<r\leq 1$, we define
\begin{align}
\label{defh}
\begin{split}
&H(r)=\frac{1}{r}\inf_{P\in \mathcal{P} }\left\{\left(\fint_{B_r}|u_{\varepsilon}-P|^2\right)^{1/2}
+r^{1+\vartheta} |\nabla P| \right\}
+r\left(\fint_{B_r}|F|^p\right)^{1/p},\\
&\Phi(r)=\frac{1}{r}\inf_{b\in \mathbb{R}} \left(\fint_{B_r}|u_{\varepsilon}-b|^2\right)^{1/2}+r\left(\fint_{B_r}|F |^2\right)^{1/2}.
\end{split}
\end{align}
Let $t \in(0, 1/8)$ be given by Lemma \ref{liple2}.
Then for $r\in (\varepsilon_1, 1/2]$,
\begin{align}\label{liple3re}
H(t r)\leq \frac{1}{2}H(r)+C\left(\frac{\varepsilon_1 }{r}\right)^\rho \Phi(2 r),
\end{align}
where $\rho>0$ and $C$ depend at most on $d$, $n$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon},
and $N$ in (\ref{w-s-cond}).
\end{lemma}
\begin{proof}
For any fixed $r\in (\varepsilon_1, 1/2 ]$, let $u_0$ be the solution to $\mathcal{L}_0 (u_0)=F$ in $B_r$, given in Theorem \ref{theorem-5.1}.
By the definitions of $G, H$ and $\Phi$, we have
\begin{align*}
H(t r)&\leq \frac{1}{t r}\left(\fint_{B_{t r}}|u_{\varepsilon}-u_0|^2\right)^{1/2}+G(t r; u_0)\\
&\leq \frac{1}{t r}\left(\fint_{B_{t r}}|u_{\varepsilon}-u_0|^2\right)^{1/2}+\frac{1}{2}G(r; u_0)\\
&\leq \frac{C}{ r}\left(\fint_{B_{r}}|u_{\varepsilon}-u_0|^2\right)^{1/2} +\frac{1}{2}H(r)\\
&\leq C\left(\frac{ \varepsilon_1 }{r}\right)^\rho
\left\{\frac{1}{r}\left( \fint_{B_{2 r}}|u_{\varepsilon}-b|^2\right)^{1/2}+r \left(\fint_{B_{2r}} | F |^2\right)^{1/2}\right\}+\frac{1}{2}H(r)
\end{align*} for any $b\in \mathbb{R}$, where we have used Lemma \ref{liple2} and (\ref{remark-5.1}) in the second and last inequalities, respectively. \end{proof}
The following lemma can be found in \cite[p.155]{shenan2017}.
\begin{lemma}\label{liple4}
Let $H(r)$ and $h(r)$ be two nonnegative continuous functions on the interval $(0,1]$ and let $t \in (0,1/4)$. Assume that \begin{align}\label{Lip_cond_1}
\max_{r\leq t\leq 2r} H(t)\leq C_0H(2r), ~~~~~\max_{r\leq t, s\leq 2r} | h(t)-h(s)|\leq C_0H(2r), \end{align} for any $r\in [\delta, 1/2]$, and also \begin{align}\label{Lip_cond_2} H( t r) \leq \frac{1}{2} H(r) + C_0 \omega (\delta/r)\left\{ H(2r)+h(2r)\right\}, \end{align} for any $r\in [\delta, 1/2]$, where $\omega$ is a nonnegative increasing function on $[0, 1]$ such that $\omega(0)=0$ and \begin{align}\label{Lip_cond_3} \int_0^1 \frac{\omega(s)}{s} ds<\infty. \end{align} Then \begin{align}\label{Lip_es_H} \max_{\delta\leq r\leq 1} \left\{H(r)+h(r)\right\}\leq C \left\{H(1) +h(1)\right\}, \end{align} where $C$ depends only on $C_0$, $\theta_0$ and $\omega$. \end{lemma}
The next lemma gives the large-scale Lipschitz estimate down to the scale $\varepsilon_1$.
\begin{lemma}\label{th41}
Let $u_{\varepsilon}\in H^1(B_1)$ be a solution to $\mathcal{L}_{\varepsilon} (u_{\varepsilon})=F$ in $B_1$,
where $B_1=B(x_0,1) $ and $F\in L^p(B_1)$ for some $p>d\ge 2$.
Then for $\varepsilon_1 \leq r< 1$, \begin{align}\label{reth41}
\left(\fint_{B_r} |\nabla u_{\varepsilon}|^2\right)^{1/2}\leq C \left\{ \left(\fint_{B_1}
| \nabla u_{\varepsilon}|^2\right)^{1/2} +
\left(\fint_{B_1} |F |^p\right)^{1/p}\right\}, \end{align} where $C$ depends only on $d$, $n$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon}, and $N$ in (\ref{w-s-cond}). \end{lemma}
\begin{proof} By translation we may assume $x_0=0$. Let $P_r, b_r$ be a linear function and constant achieving the infimum in \eqref{defh}.
In particular,
$$
H(r)=\frac{1}{r}
\left(\fint_{B_r}|u_{\varepsilon}-P_r|^2\right)^{1/2}+r^\vartheta
|\nabla P_r| +r\Big(\fint_{B_r}|F|^p\Big)^{1/p}.
$$
Let $h(r)=|\nabla P_r|$. It follows by Poincar\'{e}'s inequality that \begin{align*}
\Phi(2r)\leq H(2r)+\frac{1}{r}\inf_{b\in \mathbb{R }} \left( \fint_{B_{2r}}|P_{2r}-b|^2\right)^{1/2}\leq H(2r)+Ch(2r).
\end{align*} This, combined with \eqref{liple3re}, gives
\eqref{Lip_cond_2} with $\omega(t)=t^\rho$, which satisfies \eqref{Lip_cond_3}.
For $t\in[r, 2r]$, it is obvious that $H(t)\leq CH(2r)$. Furthermore, observe that \begin{align*}
|h(t)-h(s)| &= |\nabla (P_t-P_s)|\leq \frac{C}{r} \left( \fint_{B_r}|P_t-P_s|^2\right)^{1/2}\\
&\leq \frac{C}{r} \left( \fint_{B_r}|u_{\varepsilon}-P_t|^2\right)^{1/2}+\frac{C}{r} \left( \fint_{B_r}|u_{\varepsilon}-P_s|^2\right)^{1/2}\\
&\leq \frac{C}{t} \left( \fint_{B_t}|u_{\varepsilon}-P_t|^2\right)^{1/2}+\frac{C}{s} \left( \fint_{B_s}|u_{\varepsilon}-P_s|^2\right)^{1/2}\\ &\leq C\{H(t)+H(s)\} \\ &\leq CH(2r) \end{align*}
for all $t, s\in[r, 2r]$, which is exactly the condition \eqref{Lip_cond_1}.
Thanks to \eqref{Lip_es_H}, we obtain that \begin{align}
\frac{1}{r}\inf_{b\in \mathbb{R}}\left( \fint_{B_r}|u_{\varepsilon}-b|^2\right)^{1/2}&\leq H(r)+\frac{1}{r}\inf_{b\in \mathbb{R}}\left( \fint_{B_r}|P_r-b|^2\right)^{1/2}\nonumber\\
&\leq C\{H(r)+h(r)\}\nonumber\\
&\leq C\{H(1)+h(1)\}\nonumber\\
&\leq C\left\{\left( \fint_{B_1}|u_{\varepsilon}|^2\right)^{1/2}+ \left( \fint_{B_1}|F|^p\right)^{1/p}\right\},\label{preth411} \end{align} for any $r\in [\varepsilon_1, 1/2]$, where for the last step the following observation is used, \begin{align*}
h(1)&\leq C\left( \fint_{B_1}|P_1|^2\right)^{1/2}\\&\leq C\left(\fint_{B_1}|u_{\varepsilon}-P_1|^2\right)^{1/2}+C\left(\fint_{B_1}|u_{\varepsilon}|^2\right)^{1/2}\\
&\leq CH(1)+C\left(\fint_{B_1}|u_{\varepsilon}|^2\right)^{1/2}. \end{align*}
The estimate \eqref{reth41} follows readily from \eqref{preth411} by Poincar\'e and Caccioppoli's inequalities. \end{proof}
We are now ready to prove Theorem \ref{lipth}.
\begin{proof}[\textbf{Proof of Theorem \ref{lipth}}]
The proof uses an induction on $n$ and relies on Lemma \ref{th41} and a rescaling argument.
The case $n=1$ follows directly from Lemma \ref{th41} by translation and dilation.
Assume the theorem is true for $n-1$.
Suppose
$$
-\text{\rm div} \big( A(x, x/\varepsilon_1, \dots, x/\varepsilon_n) \nabla u_\varepsilon \big)
=F \quad \text{ in }
B_R=B(x_0, R)
$$
for some $0<R\le 1$.
We need to show that
\begin{equation}\label{Lip-6}
\left(\fint_{B_r} |\nabla u_\varepsilon |^2 \right)^{1/2}
\le C \left\{ \left(\fint_{B_R} |\nabla u_\varepsilon |^2 \right)^{1/2}
+ R \left(\fint_{B_R} |F |^p \right)^{1/p}\right\}, \end{equation} for $\varepsilon_n \le r<R\le 1$. By translation and dilation we may assume that $x_0=0$ and $R=1$. Note that the case $(1/8)\le r< R=1$ is trivial. If $\varepsilon_n < r\le (1/8)$, we may cover the ball $B(0, r)$ with a finite number of balls $B(x_\ell, \varepsilon_n )$, where $x_\ell\in B(0, r)$. Consequently, it suffices to prove (\ref{Lip-6}) for the case $r=\varepsilon_n $ and $R=1$. We further note that by Lemma \ref{th41}, the estimate (\ref{Lip-6}) holds for $r=\varepsilon_1$ and $R=1$.
To reach the finest scale $\varepsilon_n$, we let $w(x)= u_\varepsilon (\varepsilon_1 x)$. Then $$ -\text{\rm div} \big( E(x, x/(\varepsilon_2\varepsilon_1^{-1}), \dots, x/(\varepsilon_n \varepsilon_1^{-1})) \nabla w\big) =H \quad \text{ in } B_1, $$ where $H(x)=\varepsilon_1^{2} F(\varepsilon_1 x)$ and $$
E(x, y_2, \dots, y_n) =A(\varepsilon_1 x, x, y_2, \dots, y_n). $$ Observe that the matrix $E$ satisfies (\ref{elcon}) and is 1-periodic in $(y_2, \dots, y_n)$. It also satisfies the smoothness condition (\ref{lipcon})
with the same constants $\theta$ and $L$ as for $A$.
Furthermore, the $(n-1)$ scales $(\varepsilon_2\varepsilon_1^{-1}, \dots, \varepsilon_n \varepsilon_1^{-1})$
satisfy the condition (\ref{w-s-cond}) of well-separation.
Thus, by the induction assumption, \begin{equation}\label{Lip-7}
\left(\fint_{B_r} |\nabla w|^2 \right)^{1/2}
\le C \left\{ \left(\fint_{B_1} |\nabla w|^2 \right)^{1/2}
+ \left(\fint_{B_1} |H|^p \right)^{1/p} \right\},
\end{equation}
for $r=\varepsilon_n/\varepsilon_1 $. By a change of variables it follows that (\ref{Lip-6}) holds for $r=\varepsilon_n $ and $R=\varepsilon_1$. This, combined with the inequality for $r=\varepsilon_1$ and $R=1$, implies that (\ref{Lip-6}) holds for $r=\varepsilon_n $ and $R=1$. The proof is complete. \end{proof}
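\begin{remark} {\rm The change of variables used in the last step is elementary but worth recording. Since $w(x)=u_\varepsilon (\varepsilon_1 x)$, we have $\nabla w (x)=\varepsilon_1 (\nabla u_\varepsilon)(\varepsilon_1 x)$ and $H(x)=\varepsilon_1^2 F(\varepsilon_1 x)$, so that $$ \left(\fint_{B_r} |\nabla w|^2 \right)^{1/2} =\varepsilon_1 \left(\fint_{B_{\varepsilon_1 r}} |\nabla u_\varepsilon|^2 \right)^{1/2} \quad \text{ and } \quad \left(\fint_{B_1} |H|^p \right)^{1/p} =\varepsilon_1^2 \left(\fint_{B_{\varepsilon_1}} |F|^p \right)^{1/p}. $$ Dividing (\ref{Lip-7}) by $\varepsilon_1$ then gives (\ref{Lip-6}) with $r=\varepsilon_n$ and $R=\varepsilon_1$. } \end{remark}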
\begin{remark}\label{remark-6.1} {\rm It follows from the proof of Theorem \ref{lipth} that without the condition (\ref{w-s-cond}), the estimate (\ref{relipth}) continues to hold if $$ \varepsilon_1 +\left( \varepsilon_2/\varepsilon_1 + \cdots + \varepsilon_{n}/\varepsilon_{n-1} \right)^N \le r< R \le 1, $$ for any $N\ge 1$. In this case the constant $C$ in (\ref{relipth}) also depends on $N$. The case $N=1$ follows by using (\ref{remark-5.10}) in the place of (\ref{remark-5.1}). The general case is proved by an induction argument on $N$. Suppose the claim is true for some $N\ge 1$. Assume that $\beta=\varepsilon_2/\varepsilon_1+\cdots + \varepsilon_n/\varepsilon_{n-1}\ge \varepsilon_1$ (for otherwise, there is nothing to prove). Let $w(x)=u_\varepsilon (\beta x)$. Then $-\text{\rm div} \big( E(x, x/(\beta^{-1} \varepsilon_1 ), \dots, x/(\beta^{-1}\varepsilon_n ) ) \nabla w \big) =H$, where $E(x, y_1, \dots y_n)=A(\beta x, y_1, \dots, y_n)$. By the induction assumption, the inequality (\ref{Lip-7}) holds for $\beta^{-1} \varepsilon_1 +\beta^N<r<1$. By a change of variables we obtain (\ref{relipth}) for $\varepsilon_1 +\beta^{N+1} \le r< R=\beta$. This, together with the estimate for the case $N=1$, gives (\ref{relipth}) for $\varepsilon_1 +\beta^{N+1}\le r< R\le 1$. } \end{remark}
\section{Large-scale boundary Lipschitz estimates}\label{section-7}
This section is devoted to the large-scale boundary Lipschitz estimate and contains the proof of Theorem \ref{blipth}. Throughout the section we assume that $\mathcal{L}_\varepsilon$ is given by (\ref{operator}) with $A=A(x, y_1, \dots, y_n)$ satisfying conditions (\ref{elcon}), (\ref{pcon}) and (\ref{lipcon}) for some $0< \theta \le 1$. The condition (\ref{w-s-cond}) is also imposed.
Let $\psi: \mathbb{R}^{d-1}\rightarrow \mathbb{R}$ be a $C^{1,\alpha }$ function with
\begin{align} \label{cbd}
\psi(0)=0 \quad \text{ and } \quad
\|\nabla \psi \|_\infty + \|\nabla\psi\|_{C^{0,\alpha}(\mathbb{R}^{d-1})}\leq M.
\end{align} Set \begin{align}\label{drder} \begin{split}
&Z_r=Z(r,\psi)=\left\{ (x',x_d)\in \mathbb{R}^d: |x'|<r \text{ and } \psi(x')<x_d< 10(M+10) r \right\},\\
&I_r=I (r,\psi)=\left\{ (x',\psi(x'))\in \mathbb{R}^d: |x'|<r \right\}. \end{split} \end{align} For $f\in C^{1,\alpha}(I_r)$ with $0<\alpha<1$,
we introduce a scaling-invariant norm,
\begin{equation}\label{C-norm}
\|f\|_{{C}^{1,\alpha}(I_r)}=\|f\|_{L^\infty(I_r)} + r\|\nabla_{\tan} f\|_{L^\infty(I_r)}+ r^{1+\alpha} \|\nabla_{\tan} f\|_{C^{0,\alpha}(I_r)},
\end{equation} where $\nabla_{\tan} f$ denotes the tangential gradient of $f$ and $$
\| g\|_{C^{0,\alpha}(I_r)}= \sup_{x,y\in I_r, x\neq y} \frac{|g(x)-g (y)|}{|x-y|^\alpha}. $$
\begin{theorem}\label{th51} Let $u_{\varepsilon}\in H^1(Z_R)$ be a weak solution of $\mathcal{L}_{\varepsilon} ( u_{\varepsilon}) =F$ in $Z_R$ and $u_{\varepsilon}=f$ on $I_R$, where $0<\varepsilon_n <R\leq 1$, $F\in L^p(Z_R)$ for some $p>d$, and $f\in C^{1,\alpha}(I_R)$. Then for $\varepsilon_n \leq r< R$, \begin{align}\label{reth51}
\left(\fint_{Z_r} |\nabla u_\varepsilon|^2\right)^{1/2}\leq C \left\{ \left(\fint_{Z_R}
| \nabla u_\varepsilon|^2\right)^{1/2} + R^{-1} \|f\|_{C^{1,\alpha}(I_R)} +
R\left(\fint_{Z_R} |F |^p\right)^{1/p}\right\}, \end{align} where $C$ depends at most on $d$, $n$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon}, $N$ in (\ref{w-s-cond}), and $(\alpha, M)$ in \eqref{cbd}. \end{theorem}
Theorem \ref{blipth} follows readily from Theorem \ref{th51} by translation and a suitable rotation of the coordinate system. To prove Theorem \ref{th51}, we use the same approach as in
the proof of Theorem \ref{th41}.
We will provide only a sketch of the proof for Theorem \ref{th51}.
First, we point out that the rescaling argument, which is used extensively for interior estimates,
works equally well in the case of boundary estimates.
Indeed, suppose $\mathcal{L}_\varepsilon (u_\varepsilon)=F $ in $Z(r, \psi) $ and
$u_\varepsilon=f$ on $I (r, \psi) $ for some $0<r \le 1$.
Let $v(x)=u_\varepsilon (rx)$.
Then
$$
-\text{\rm div} \big( \widetilde{A} (x, x/(\varepsilon_1 r^{-1}), \dots, x/(\varepsilon_n r^{-1})) \nabla v \big) =G \quad \text{ in }
Z(1, \psi_r) \quad \text{ and} \quad v=g \quad \text{ on } I (1, \psi_r), $$ where $\widetilde{A} (x, y_1, \dots, y_n)=A(rx, y_1, \dots, y_n)$, $G(x)=r^2 F(rx)$, $g(x)= f(rx)$, and $\psi_r (x^\prime) =r^{-1} \psi(rx^\prime)$.
Since $\nabla \psi_r (x^\prime) =\nabla \psi (rx^\prime)$ and $0<r\le 1$,
the function $\psi_r$ satisfies the condition (\ref{cbd}) with the same $M$.
Also, note that $\| f\|_{C^{1, \alpha} (I (r, \psi))}= \| g\|_{C^{1, \alpha} (I (1, \psi_r))}$.
As a result, it suffices to prove Theorem \ref{th51} for $R=1$.
Next, we establish an approximation result in the place of (\ref{remark-5.1}). Define $$
\| f\|_{C^1(I_r)}
=\|f\|_{L^\infty(I_r)} + r \|\nabla_{\tan} f\|_{L^\infty (I_r)}. $$
\begin{theorem}\label{apth2} Let $u_{\varepsilon}\in H^1(Z_{2r}) $ be a weak solution of $\mathcal{L}_{\varepsilon} (u_{\varepsilon})=F$ in $Z_{2r}$
and $u_{\varepsilon}=f$ on $I_{2r}$, where $0< \varepsilon \le r\leq 1 $.
Then there exists $u_0\in H^1(Z_r)$ such that $\mathcal{L}_0 (u_0)=F$ in $Z_r$,
$u_0=f$ on $I_{r}$, and
\begin{equation}\label{reapth2}
\aligned
&\left(\fint_{Z_r}|u_{\varepsilon}-u_0|^2\right)^{1/2}\\
&\leq C \left(\frac{\varepsilon_1}{r}\right)^\rho \left\{\left(\fint_{Z_{2r}}|u_{\varepsilon} |^2\right)^{1/2}+r^2\left(\fint_{Z_{2r}}| F |^2\right)^{1/2}+ \|f\|_{C^{1}(I_{2r})}\right\}.
\endaligned
\end{equation}
The constants $\rho\in (0, 1)$ and $C>0$ depend at most on $d$, $n$, $\mu$, $(\theta, L)$ in \eqref{lipcon},
$N$ in (\ref{w-s-cond}), and $(\alpha, M)$ in \eqref{cbd}. \end{theorem}
\begin{proof} The proof of (\ref{reapth2}) is similar to that of (\ref{remark-5.1}).
\noindent{Step 1.}\ \ Assume that $n=1$, $\mathcal{L}_\varepsilon =-\text{\rm div} \big(A(x, x/\varepsilon)\nabla \big)$ and $A(x, y)$ is Lipschitz continuous in $x$. Suppose that $\mathcal{L}_\varepsilon(u_\varepsilon)=F$ in $Z_{2r}$ and $u_\varepsilon=f$ on $I_{2r}$. Show that there exists $u_0\in H^1(Z_r)$ such that $\mathcal{L}_0 (u_0)=F$ in $Z_r$, $u_0=f$ on $I_r$, and \begin{equation}\label{5.01-b} \aligned
& \left(\fint_{Z_r} |u_\varepsilon -u_0|^2\right)^{1/2}\\ &\le C \left\{ \left(\frac{\varepsilon}{r} \right)^\sigma
+\varepsilon \|\nabla_x A\|_\infty \right\}
\left\{ \left(\fint_{Z_{2r}} |u_\varepsilon|^2\right)^{1/2}
+ r^2 \left(\fint_{Z_{2r}} |F|^2\right)^{1/2} +\| f\|_{C^1( I_{2r}) } \right\}. \endaligned \end{equation}
The proof of (\ref{5.01-b}) is similar to that of (\ref{5.01}). By rescaling we may assume $r=1$. Let $u_0$ be the weak solution of $$ \mathcal{L}_0(u_0)=F \quad \text{ in } \Omega \quad \text{ and } \quad u_0=u_\varepsilon \quad \text{ on } \partial \Omega, $$ where $\Omega=Z_{3/2}$. By using (\ref{L-2-estimate}), we obtain $$
\int_\Omega |u_\varepsilon -u_0|^2
\le C \varepsilon^2 (\|\nabla_x A\|_\infty^2 +1) \int_\Omega |\nabla u_0|^2
+ C \varepsilon^2 \int_{\Omega\setminus \Omega_{3\varepsilon}} |\nabla^2 u_0|^2
+ C \int_{\Omega_{4\varepsilon} } |\nabla u_0|^2. $$ The rest of the proof is the same as the proof of Lemma \ref{lemma-5.1}, using interior $H^2$ estimates for $\mathcal{L}_0$ as well as Meyers' estimates for $\mathcal{L}_\varepsilon$, \begin{equation}\label{Meyer-7}
\left(\fint_{Z_{3/2}} |\nabla u_\varepsilon|^q \right)^{1/q}
\le C \left\{ \left(\fint_{Z_2} |u_\varepsilon|^2\right)^{1/2}
+ \| f\|_{C^1(I_2)}
+ \left(\fint_{Z_2} |F|^2\right)^{1/2} \right\} \end{equation} for some $q>2$, depending only on $d$, $\mu$ and $M$.
\noindent{Step 2.}\ \ Assume $n=1$ and $A(x, y)$ is H\"older continuous in $x$. Suppose $\mathcal{L}_\varepsilon(u_\varepsilon)=F$ in $Z_{2r}$ and $u_\varepsilon=f$ on $I_{2r}$. Show that there exists $u_0\in H^1(Z_r)$ such that $\mathcal{L}_0 (u_0)=F$ in $Z_r$, $u_0=f$ on $I_r$, and \begin{equation}\label{5.21-b} \aligned
& \left(\fint_{Z_r} |u_\varepsilon -u_0|^2\right)^{1/2}\\ &\le C \left\{ \left(\frac{\varepsilon}{r} \right)^\sigma +\varepsilon^\theta L \right\}
\left\{ \left(\fint_{Z_{2r}} |u_\varepsilon|^2\right)^{1/2}
+ r^2 \left(\fint_{Z_{2r}} |F|^2\right)^{1/2} +\| f\|_{C^1( I_{2r}) } \right\}. \endaligned \end{equation} As in the case of (\ref{5.21}), the estimate (\ref{5.21-b}) follows from (\ref{5.01-b}) by approximating $A(x, y)$ in the $x$ variable.
\noindent{Step 3.} As in the interior case, the case $n>1$ follows from (\ref{5.21-b}) by an induction argument on $n$. \end{proof}
The following two lemmas will be used in the place of Lemmas \ref{liple2} and \ref{liple3}. Recall that $\mathcal{P}$ denotes the set of linear functions in $\mathbb{R}^d$.
\begin{lemma}\label{bll2}
Let $u_0\in H^1(Z_r)$ be a weak solution of
$ \mathcal{L}_0 (u_0) =F \text{ in } Z_r$ and $u_0=f$ on $I_r$,
where $0<r\leq 1, F\in L^p(Z_r)$ for some $p>d$, and $f\in C^{1,\alpha}(I_r)$
for some $0<\alpha<1$. Define
$$
\aligned
\mathcal{ G }(r; u_0)&= \inf_{P\in \mathcal{P}}\frac{1}{r}
\left\{\left(\fint_{Z_r}|u_0-P|^2\right)^{1/2} +r^{1+\vartheta} | \nabla P|
+ \|f-P\|_{C^{1,\alpha}(I_r)} \right\} \\
&\qquad
+r \left(\fint_{Z_r}|F|^p\right)^{1/p},
\endaligned
$$
where $\vartheta=\min\{\theta,\alpha, 1-d/p\}.$
Then there exists $t \in(0, 1/8)$, depending only on $d$, $n$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon}, and $(\alpha, M)$ in \eqref{cbd}, such that, $$\mathcal{G}(t r; u_0)\leq \frac{1}{2}\mathcal{G}(r; u_0). $$ \end{lemma}
\begin{proof} The proof is similar to that of Lemma \ref{liple2}. Let $P_0(x)= \nabla u_0(0)\cdot x+u_0(0)$. Then for $0<t< (1/8)$,
\begin{equation}\label{pbll21}
\aligned
\mathcal{ G}(tr; u_0)
&\leq C (tr)^\vartheta \big\{\|\nabla u_0\|_{C^{0,\vartheta}( Z_{tr} )}
+ | \nabla u_0(0)| \big\} + tr \left(\fint_{ Z_{tr} }|F|^p\right)^{1/p}\\
&\le C (tr)^\vartheta \big\{\|\nabla (u_0-P)\|_{C^{0,\vartheta}( Z_{tr} )}
+ | \nabla u_0(0)-\nabla P | +|\nabla P| \big\}\\
&\qquad\qquad\qquad
+ tr \left(\fint_{ Z_{tr} }|F|^p\right)^{1/p}
\endaligned
\end{equation}
for any $P\in \mathcal{P}$, where we have used the fact that $\nabla P$ is constant.
Note that
\begin{align*} -\text{div} \big(\widehat{A} \nabla (u_0-P)\big)= F +\text{div} \big([\widehat{A}-\widehat{A} (0)] \nabla P\big) ~\text{ in } Z_r. \end{align*} By boundary $C^{1,\vartheta}$ estimates for the operator $\mathcal{L}_0$ in $C^{1,\alpha}$ domains, it follows that for $0<t<(1/8)$, \begin{equation}\label{pbll22} \aligned
& \|\nabla (u_0-P)\|_{C^{0,\vartheta}(Z_{tr})} +\| \nabla (u_0 -P)\|_{L^\infty(Z_{tr})}\\
&
\leq \|\nabla (u_0-P)\|_{C^{0,\vartheta}(Z_{r/2})} + \| \nabla (u_0 -P)\|_{L^\infty(Z_{r/2})}\\ & \leq \frac{C}{r^{1+\vartheta}}
\left( \fint_{Z_r} |u_0-P|^2\right)^{1/2} + C |\nabla P| + Cr^{1-\vartheta} \left(\fint_{Z_r}|F|^p\right)^{1/p} + \frac{C}{r^{1+\vartheta}}
\| f-P\|_{C^{1, \alpha} (I_r)}
\endaligned \end{equation} for any $P\in \mathcal{P}$. This, together with (\ref{pbll21}), implies that $\mathcal{G}(tr; u_0)\le C t^{\vartheta} \mathcal{G} (r; u_0)$. To complete the proof, we choose $t$ so small that $Ct^{\vartheta}\le (1/2)$. \end{proof}
\begin{lemma}\label{bll3}
Let $u_{\varepsilon}\in H^1(Z_1)$ be a weak solution of $\mathcal{L}_{\varepsilon} (u_{\varepsilon}) =F$ in $Z_1$ and
$u_\varepsilon=f$ on $I_1$,
where $0<\varepsilon< (1/4)$,
$F\in L^p(Z_1)$ for some $p>d$ and $f\in C^{1, \alpha}(I_1)$ for some $\alpha>0$.
For $0<r\leq 1$, define
\begin{align}
\label{defhh}
\begin{split}
\mathcal{H}(r) &=\inf_{P\in \mathcal{P}}\frac{1}{r}
\left\{ \left(\fint_{Z_r}|u_{\varepsilon}-P|^2\right)^{1/2} +r^{1+\vartheta} |\nabla P|
+ \|f-P\|_{{C}^{1,\alpha}(I_r)} \right\}\\
&\qquad\qquad\qquad\qquad
+r \left(\fint_{Z_r}|F|^p\right)^{1/p},\\ \Upsilon(r)=\inf_{b\in \mathbb{R} } \ &\frac{1}{r}
\left\{\left(\fint_{Z_r}|u_{\varepsilon}-b|^2\right)^{1/2}+ \|f-b\|_{{C}^{1,\alpha}(I_r)}\right\}
+r\left(\fint_{Z_r}|F|^2\right)^{1/2}.
\end{split}
\end{align}
Let $t \in(0, 1/8)$ be given by Lemma \ref{bll2}. Then for any $r\in [\varepsilon_1, 1/2]$,
\begin{align}\label{bll3re}
\mathcal{ H}(t r)\leq \frac{1}{2}\mathcal{H}(r)+C\left(\frac{\varepsilon_1}{r}\right)^\rho \Upsilon(2r),
\end{align}
where $\rho>0$ and $C>0$ depend at most on $d$, $n$, $\mu$, $p$, $(\theta, L)$ in \eqref{lipcon}, $N$ in (\ref{w-s-cond}), and $(\alpha, M)$ in (\ref{cbd}).
\end{lemma}
\begin{proof} We omit the proof, which is the same as that of Lemma \ref{liple3}. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{th51}}] With Theorem \ref{apth2}, Lemmas \ref{bll2} and \ref{bll3} at our disposal, Theorem \ref{th51} follows from Lemma \ref{liple4} in the same manner as in the case of Theorem \ref{th41}. We omit the details. \end{proof}
\noindent Weisheng Niu \\ School of Mathematical Science, Anhui University, Hefei, 230601, CHINA\\ E-mail:[email protected]\\
\noindent Zhongwei Shen\\ Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506, USA.\\ E-mail: [email protected]\\
\noindent Yao Xu \\ Institute of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, CHINA\\ E-mail: [email protected]\\
\noindent\today
\end{document}
Logarithms and powers
Part of the Oxford MAT Livestream
These are the solutions for the Logarithms and powers worksheet.
Simplify $(2^3)^4$ and $(2^4)^3$ and $2^42^3$ and $2^32^4$.
$(2^3)^4=2^{12}$. $(2^4)^3=2^{12}$. $2^4 2^3=2^7$. $2^3 2^4=2^7$.
Solve $x^{-2}+4x^{-1}+3=0$.
This is a quadratic for $\frac{1}{x}$ with solutions $\frac{1}{x}=-1$ or $\frac{1}{x}=-3$.
So $x=-1$ or $x=-\frac{1}{3}$.
Alternatively, we could multiply both sides by $x^2$ and solve the quadratic that we get.
Solve $\log_x (x^2)=x^3$.
The left-hand side is just $2$ so we want $2=x^3$.
So $x=\sqrt[3]{2}$.
Solve $\log_{x+5}(6x+22)=2$.
Take $(x+5)$ to the power of each side to get $6x+22=(x+5)^2$.
Expand the square and rearrange for $x^2+4x+3=0$.
The solutions are $x=-1$ or $x=-3$.
Check these solutions; $\log_4(16)=2$ and $\log_2(4)=2$.
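If you want to sanity-check answers like these numerically, a couple of lines of Python will do, using the change-of-base formula $\log_b a=\frac{\ln a}{\ln b}$ (the helper name below is just for illustration):

```python
import math

def log_base(base, value):
    # change of base: log_base(value) = ln(value) / ln(base)
    return math.log(value) / math.log(base)

for x in (-1, -3):
    # check that log_{x+5}(6x+22) = 2 at each candidate solution
    print(x, log_base(x + 5, 6 * x + 22))  # both print 2.0 (up to rounding)
```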
Simplify $\log_{10} 3+\log_{10} 4$ into a single term.
This is $\log_{10} 12$.
Let $a=\ln 2$ and $b=\ln 5$. Write the following in terms of $a$ and $b$.\[\ln 1024, \quad \ln 40, \quad \ln \sqrt{\frac{2}{5}}, \quad \ln \frac{1}{10}, \quad\ln 1.024.\]
$\ln 1024=\ln\left(2^{10}\right)=10\ln 2=10a$.
$\ln 40=\ln 8+\ln 5=3a+b$.
$\ln \sqrt{2/5}=\frac{1}{2}\ln 2/5=\frac{1}{2}\left(a-b\right)$.
$\ln (1/10)=-\ln 10=-\ln2-\ln 5=-a-b$.
$\ln 1.024=\ln 1024+\ln 1/1000=10a +3(-a-b)=7a-3b$.
There are other solutions, partly because $b=a\times\log_2 5$.
Expand $\left(e^x+e^{-x}\right)\left(e^y-e^{-y}\right)+\left(e^x-e^{-x}\right)\left(e^y+e^{-y}\right)$.
$e^{x+y}+e^{y-x}-e^{x-y}-e^{-x-y}+e^{x+y}-e^{y-x}+e^{x-y}-e^{-x-y}$.
That's $2e^{x+y}-2e^{-x-y}$.
Expand $\left(e^x+e^{-x}\right)\left(e^y+e^{-y}\right)+\left(e^x-e^{-x}\right)\left(e^y-e^{-y}\right)$.
$e^{x+y}+e^{y-x}+e^{x-y}+e^{-x-y}+e^{x+y}-e^{y-x}-e^{x-y}+e^{-x-y}$.
That's $2e^{x+y}+2e^{-x-y}$.
Solve $2^x=3$. Solve $0.5^x=3$. Solve $4^x=3$.
$2^x=3$ is what it means for $x$ to be $\log_2 3$.
If $0.5^x=3$ then $2^{-x}=3$ so $x=-\log_2 3$. Alternatively, just write down $x=\log_{0.5}3$.
If $4^x=3$ then $2^{2x}=3$ so $x=\frac{1}{2}\log_2 3$. Alternatively, just write down $x=\log_4 3$.
For which values of $x$ (if any) does $1^x=1$? For which values of $x$ (if any) does $1^x=3$?
$1^x=1$ is true for all real $x$.
$1^x$ is never equal to 3 for real $x$.
For which values of $b$ (if any) does $0^b=0$? For which values of $a$ (if any) does $a^0=0$?
$0^b=0$ for any real $b>0$.
$a^0$ is never $0$.
MAT questions
MAT 2015 Q1H
That's a very strange logarithm. The only way I can think of to get rid of it is to write \[4-5x^2-6x^3=(x^2+2)^2.\]
Now this is a polynomial question (but we should be careful to go back and check our solutions in the original equation).
The polynomial rearranges to $x^4+6x^3+9x^2=0$.
Either $x=0$ or $x^2+6x+9=0$ which has a repeated root $x=-3$.
Let's quickly check these solutions $\log_2(4)=2$ and $\log_{11}(4-45+162)=\log_{11}121=2$.
There are two distinct solutions.
The answer is (c).
MAT 2017 Q1I
Let's simplify each term. $\log_b\left(\left(b^x\right)^x\right)=x\log_b \left(b^x\right)=x^2\log_b b =x^2$.
Also, $\log_a\left(\frac{c^x}{b^x}\right)=x\log_a(c/b)=x\left(\log_a c-\log_a b\right)$.
And the last term is $-\log_a b\log_a c$.
So this is a quadratic. Even better, we can factorise it \[x^2+\left(\log_a c-\log_a b\right)x-\log_a b\log_a c=\left(x-\log_a b\right)\left(x+\log_a c\right).\]
Those roots are the same number if $\log_a b=-\log_a c$, which happens when $c=1/b$.
The answer is (d).
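As a quick numerical sanity check of that factorisation (not part of the official solution), we can pick some values for $a$, $b$, $c$ and compare the roots of the quadratic with $\log_a b$ and $-\log_a c$:

```python
import math
import numpy as np

a, b, c = 2.0, 8.0, 5.0
log_ab = math.log(b, a)  # log_a b = 3
log_ac = math.log(c, a)  # log_a c = 2.3219...

# quadratic x^2 + (log_a c - log_a b) x - (log_a b)(log_a c)
roots = np.roots([1.0, log_ac - log_ab, -log_ab * log_ac])
print(np.sort(roots.real))        # [-2.3219...  3.0]
print(sorted([-log_ac, log_ab]))  # the same two numbers
```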
MAT 2013 Q1F
The only way that we can have $\log_b a=2$ is if $a=b^2$. Similarly, we must have $c-3=b^3$ and $c+5=a^2$.
These simultaneous equations aren't linear, but we can do our best to solve them.
For example, we must have $c+5=b^4$ and $c-3=b^3$, so $b^4-b^3=8$. How many solutions does that have?
Let's sketch $b^3(b-1)$ for $b>0$ (since we're told that $b$ is positive).
This is negative between $0$ and $1$, and after that it's positive and increasing (because $(b-1)$ and $b^3$ are both positive and increasing there). So the equation $b^3(b-1)=8$ has exactly one positive solution. In fact, that solution is $b=2$, but what matters for this question is that the solution is unique.
Now look back at the other equations. We've got $a=b^2=4$ and $c=3+b^3=11$. Let's check the original logarithms. \[ \log_2 4=2,\qquad \log_2 (11-3)=3,\qquad \log_4 (11+5)=2\]
These equations specify $a$ uniquely (it's 4).
The answer is (a).
MAT 2013 Q1J
This is some new notation. It appears in the question in the expression $[2^x]$, which mean the largest integer less than or equal to $2^x$.
When $x=0$, the term $[2^x]$ is 1. It stays at 1 until we get to a point with $2^x=2$, because then the largest integer less than or equal to $2^x$ will be 2 instead. That happens when $x=1$. Then $2^x$ keeps increasing, but the largest integer under it is still 2, until we get to a point with $2^x=3$.
Time for a logarithm; that happens when $x=\log_2 3$. There's a pattern here; the function takes each integer value between $1$ and $2^n-1$ (it doesn't quite get to $2^n$ until right at the end of the interval, when $x=n$).
When we integrate this function from 0 to $n$, we're calculating the area under the graph. That's made of a series of rectangles, and the sum of the areas of those rectangles is \[ 1+(\log_2 3-1)\times 2+ (\log_2 4-\log_2 3)\times 3 +(\log_2 5-\log_2 4)\times 4+\dots + (n-\log_2(2^n-1))\times \left(2^n-1\right) \]
There's a pattern here (add some $\log_2 k$ then subtract slightly more $\log_2 k$) and it simplifies to \[ -1-\log_2 3-\log_2 4-\dots -\log_2 (2^n-1)+n(2^n-1)\]
That's $-\log_2\left(\left(2^n-1\right)!\right)+n\left(2^n-1\right)$ which is not quite one of the options.
But $\log_2 2^n=n$ so we can re-write this as $-\log_2 \left(\left(2^n\right)!\right)+n2^n$.
The answer is (b).
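As a sanity check on that simplification (a small case we can compute directly, not asked for in the question), take $n=2$: \[\int_0^2 [2^x]\,\mathrm{d}x = 1+(\log_2 3-1)\times 2+(2-\log_2 3)\times 3=5-\log_2 3,\] while the closed form $-\log_2\left(\left(2^n\right)!\right)+n2^n$ gives $-\log_2 24+8=8-3-\log_2 3=5-\log_2 3$, as expected.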
I really like logarithms, but in lots of these questions getting rid of the logarithm is a good thing to do!
In that last question, we simplified things by using laws of logarithms and also by using the $k!$ notation for $k\times(k-1)\times\dots\times 2\times1$, which is a very compact way to write that product.
Elevation, azimuth, and polarization estimation with nested electromagnetic vector-sensor arrays via tensor modeling
Ming-Yang Cao1,2,
Xingpeng Mao ORCID: orcid.org/0000-0002-7905-22621,2 &
Lei Huang3
In this paper, we address the joint estimation problem of elevation, azimuth, and polarization with a nested array consisting of complete six-component electromagnetic vector-sensors (EMVS). Taking advantage of the tensor permutation, we convert the sample covariance matrix of the received data into a tensorial form which provides enhanced degrees-of-freedom. Moreover, the parameter estimation issue with the proposed model boils down to a Vandermonde-constrained Canonical Polyadic Decomposition problem. The structured least squares estimation of signal parameters via rotational invariance techniques is tailored for joint, auto-paired elevation, azimuth, and polarization estimation, ending up with a computationally efficient method that avoids exhaustive searching over the spatial and polarization regions. Furthermore, a sufficient uniqueness analysis of our proposed approach is addressed, and the stochastic Cramér-Rao bound for underdetermined parameter estimation is derived. Simulation results are given to verify the effectiveness of the proposed method.
The electromagnetic vector-sensor (EMVS) has been widely used in a variety of applications such as localization, tracking, and beamforming [1–3]. A complete six-component EMVS contains three identical, spatially co-located, orthogonally oriented electric dipoles and three magnetic loops that can measure all six components of the electromagnetic field [4, 5]. The model of the EMVS was investigated in [6]. Different from scalar-sensor arrays, EMVS arrays that are composed of multiple EMVSs with particular configurations can estimate both the direction-of-arrival (DOA), i.e., elevation and azimuth, and the polarization of incident sources. This polarization diversity brings several advantages, such as resolving sources from the same DOA as long as they have different polarization states, providing better resolution ability, offering extra degrees-of-freedom (DOFs), and improving estimation performance [7–10].
In order to utilize the aforementioned benefits, DOA and polarization estimation with EMVS arrays can be cast as a multiple-parameter estimation problem which, however, turns out to be much more complicated than the scalar-sensor array case. Usually, such problems demand a time-consuming multi-dimensional searching procedure [11]. Also, the auto-pairing issue of the different parameters cannot be neglected. Moreover, the steering matrix of an EMVS array always has an irregular structure, which aggravates the difficulty of the parameter estimation techniques. Several matrix-based DOA and polarization estimation methods have been proposed for the EMVS array. Most of them are extensions of existing DOA estimators for scalar-sensor arrays that take the physical structure of EMVSs into consideration. In [6], a vector cross-product approach to estimate the Poynting vector of the sources was presented. Eigenvector-based parameter estimators such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) were developed in [12, 13], utilizing either spatial or temporal invariance properties. These approaches show superior resolution abilities and estimation accuracy with tolerable computational burdens.
The multi-dimensional model, i.e., the tensor model, was first linked with array processing under the framework of the Canonical Polyadic Decomposition (CPD) in [14]. Interestingly, tensor modeling techniques turn the curse of multi-dimensional problems into a blessing, achieving several benefits such as auto-pairing of parameters and relaxed uniqueness conditions, to name a few. Subsequently, tensor-based approaches were introduced to the EMVS array, whose received data embodies a multi-dimensional structure. In [15], the identifiability was analyzed for an EMVS array. More general CPD-based approaches for EMVS arrays were developed in [16–19]. In parallel, the higher-order singular value decomposition (HOSVD), a tensor-based low-rank approximation method, generalizes the concept of the matrix-based SVD. HOSVD-based parameter estimators are able to provide higher estimation accuracy [20, 21]. The so-called tensor-MUSIC approaches were developed for the EMVS array in [22, 23], which require exhaustive multi-dimensional searching procedures.
Compared with the uniform linear array (ULA), difference co-arrays such as nested and co-prime arrays are able to provide more DOFs [24–26]. The nested array constructs virtual arrays that obtain as many as \(\mathcal {O}(M^{2})\) DOFs with M physical sensors, which greatly improves the identifiability as well as the resolvability and reduces mutual coupling effects between sensors. The stochastic Cramér-Rao bound (CRB) for the case where the number of sources exceeds the number of sensors was derived separately in [27–29]. It is worth noting that the aforementioned methods are only suitable for uncorrelated sources.
A nested array equipped with EMVSs was first developed by Han et al. in [30]. The nested EMVS array provides more DOFs and better resolvability than the EMVS ULA. However, the DOA and polarization vectors were not decoupled, meaning that the nested EMVS array requires prior knowledge of the polarization state before performing the MUSIC method in order to avoid exhaustive searching. A tensor model for the nested EMVS array that efficiently decouples the DOA and polarization was constructed in [31]. This method divides the nested EMVS array into several subarrays with respect to different polarization states. Then, it builds a tensor composed of multiple local covariances, followed by the CPD to obtain the DOA and polarization estimates.
In this paper, we propose a novel methodology for tensor modeling and parameter estimation by utilizing the relationship between tensor permutation and the array structure. The proposed approach has several novelties and advantages: (1) A tensor model of the nested EMVS array is established through the property of tensor permutation, which differs from the complex procedure proposed in [31]. (2) The proposed estimator is capable of achieving the auto-pairing of DOA and polarization state. (3) It is proved through a uniqueness condition analysis that the proposed method guarantees more DOFs. (4) An underdetermined stochastic CRB for the nested EMVS array is derived as a benchmark for performance evaluation.
The remainder of this paper is organized as follows. Section 2 introduces the basic mathematical notations and constructs the tensor model of a nested EMVS array. In Section 3, we briefly review Han's tensor modeling method and then propose our tensor modeling approach through multilinear algebra. In Section 4, we devise the CPD and the structured least squares (SLS) ESPRIT method for joint DOA and polarization estimation. In Section 5, we analyze the sufficient uniqueness condition of the proposed method and derive the underdetermined stochastic CRB of the nested EMVS array. In Section 6, we give simulation results to demonstrate the effectiveness of the proposed method. Section 7 concludes this work.
Problem formulation
Definition 1
Tensor vectorization A tensor vectorization of an N-dimensional tensor \(\boldsymbol {\mathcal {A}}\in \mathbb {C}^{I_{1}\times I_{2}\times \cdots \times I_{N}}\) is denoted as \(\text {vec}(\boldsymbol {\mathcal {A}})\in \mathbb {C}^{\prod _{n=1}^{N}I_{n}\times 1}\) where vec(·) represents the vectorization operator.
Property 1
If an N-dimensional tensor \(\boldsymbol {\mathcal {A}}\in \mathbb {C}^{I_{1}\times \ldots \times I_{N}}\) could be expressed as the outer product of a sequence of vectors \(\mathbf {a}_{n}\in \mathbb {C}^{I_{n}\times 1}, n=1,2,\ldots,N\), namely,
$$ \boldsymbol{\mathcal{A}}=\mathbf a_{1}\circ\mathbf{a}_{2}\circ\cdots\circ\mathbf a_{N} $$
where ∘ denotes the outer product, then its tensor vectorization has the structure
$$ \text{vec}(\boldsymbol{\mathcal{A}})=\mathbf{a}_{N}\odot\mathbf{a}_{N-1}\odot\cdots\odot\mathbf{a}_{1} $$
where ⊙ represents the Khatri-Rao product.
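As a quick numerical illustration of this property (a minimal Python/NumPy sketch; we assume the column-major vectorization convention in which the first index varies fastest, and all variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for n in (2, 3, 4))

# Rank-one tensor A = a ∘ b ∘ c, i.e., A[i, j, k] = a[i] * b[j] * c[k]
A = np.einsum('i,j,k->ijk', a, b, c)

# vec(A) = c ⊙ b ⊙ a; for vectors, the Khatri-Rao product coincides
# with the Kronecker product
assert np.allclose(A.reshape(-1, order='F'), np.kron(c, np.kron(b, a)))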
Definition 2
Tensor matrization A matrix unfolding of an N-dimensional tensor \(\boldsymbol {\mathcal {A}}\in \mathbb {C}^{I_{1}\times I_{2}\times \cdots \times I_{N}}\) along the n-mode is denoted as \(\boldsymbol {\mathcal {A}}_{(n)}\in \mathbb {C}^{I_{n}\times I_{n+1}\ldots I_{N}I_{1}\ldots I_{n-1}}\).
Definition 3
The n-mode tensor-matrix product The n-mode product of a tensor \(\boldsymbol {\mathcal {A}}\in \mathbb {C}^{I_{1}\times I_{2}\times \cdots \times I_{N}}\) and a matrix \(\mathbf {D}\in \mathbb {C}^{J_{n} \times I_{n}}\) along the nth mode is given by
$$\begin{array}{*{20}l} &\boldsymbol{\mathcal{C}}\stackrel{\triangle}{=}\boldsymbol{\mathcal{A}}\times_{n}\mathbf{D} \\ \notag &c_{i_{1}, i_{2}, \ldots, i_{n-1}, j_{n},i_{n+1},\ldots,i_{N}} = \sum_{i_{n}=1}^{I_{n}} a_{i_{1}, i_{2}, \ldots, i_{n}, \ldots, i_{N}}d_{j_{n}, i_{n}} \end{array} $$
where \(\boldsymbol {\mathcal {C}}\in \mathbb {C}^{I_{1}\times \ldots \times I_{n-1}\times J_{n}\times I_{n+1}\times \ldots \times I_{N}}\) and \(\boldsymbol {\mathcal {C}}_{(n)}=\mathbf {D}\boldsymbol {\mathcal {A}}_{(n)}\).
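The following sketch illustrates Definition 3 and the identity \(\boldsymbol {\mathcal {C}}_{(n)}=\mathbf {D}\boldsymbol {\mathcal {A}}_{(n)}\); the column ordering of the unfolding below is one common convention (the paper does not fix it explicitly, and the identity holds for any fixed consistent choice):

import numpy as np

def unfold(T, n):
    # n-mode unfolding: mode-n fibers become columns; the remaining modes
    # are ordered cyclically as n+1, ..., N, 1, ..., n-1 (cf. Definition 2)
    order = [n] + list(range(n + 1, T.ndim)) + list(range(n))
    return np.transpose(T, order).reshape(T.shape[n], -1, order='F')

def mode_n_product(T, D, n):
    # Contract the n-th mode of T with the second index of D (Definition 3)
    return np.moveaxis(np.tensordot(D, T, axes=(1, n)), 0, n)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5, 6))
D = rng.standard_normal((3, 5))
C = mode_n_product(A, D, 1)                  # C has shape (4, 3, 6)
assert np.allclose(unfold(C, 1), D @ unfold(A, 1))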
Signal model
A six-component EMVS consists of six spatially co-located antennas, i.e., three orthogonal electric dipoles and three orthogonal magnetic loops. We adopt \(\mathbf {e}_{k}\stackrel{\triangle}{=}[e_{x_{k}},e_{y_{k}},e_{z_{k}}]^{T}\) and \(\mathbf {h}_{k}\stackrel{\triangle}{=}[h_{x_{k}},h_{y_{k}},h_{z_{k}}]^{T}\) to denote the kth source's electromagnetic field characterized by the electric and magnetic triads along the x-, y-, and z-axes, respectively. The diagram of an EMVS in Cartesian coordinates is shown in Fig. 1.
Diagram of a six-component EMVS
We assume that there are no mutual coupling effects within each EMVS. Thus, the physical polarization vector of the kth source observed by an EMVS is a collection of \(\mathbf {e}_{k}\) and \(\mathbf {h}_{k}\):
$$\begin{array}{*{20}l} \mathbf{p}_{k}&\stackrel{\triangle}{=} \left[ e_{x_{k}} \ e_{y_{k}} \ e_{z_{k}} \ h_{x_{k}} \ h_{y_{k}} \ h_{z_{k}}\right]^{T} \notag\\ \stackrel{\triangle}{=}& \left[ \begin{array}{cccc} \cos\phi_{k}\cos\theta_{k} & -\sin\phi_{k} \\ \sin\phi_{k}\cos\theta_{k} & \cos\phi_{k} \\-\sin\theta_{k} & 0 \\ -\sin\phi_{k} & -\cos\phi_{k}\cos\theta_{k} \\ \cos\phi_{k} & -\sin\phi_{k}\cos\theta_{k} \\ 0 & \sin\theta_{k} \end{array} \!\!\right]\left[\begin{array}{cccc} \sin\gamma_{k}e^{j\eta_{k}} \\ \cos\gamma_{k} \end{array}\right] \notag \\ =& \boldsymbol{\Xi}_{k}\boldsymbol{\zeta}_{k} \end{array} $$
where \(\theta_{k}\in[0,\pi)\), \(\phi_{k}\in(0,2\pi]\), \(\gamma_{k}\in[0,\pi/2)\), and \(\eta_{k}\in[-\pi,\pi)\) denote the kth source's elevation measured from the positive vertical z-axis, azimuth, auxiliary polarization angle, and polarization phase difference, respectively. The (·)T stands for the transpose. Throughout this work, we assume that different sources have different polarization states.
Then, the normalized Poynting vector \(\mathbf {g}_{k}\) is given by
$$\begin{array}{*{20}l} \mathbf{g}_{k}\stackrel{\triangle}{=}&\frac{\mathbf{e}_{k}}{||\mathbf{e}_{k}||}\times\frac{\mathbf{h}_{k}^{*}}{||\mathbf{h}_{k}||}\notag \\\stackrel{\triangle}{=}&\left[\begin{array}{c} \mu_{k} \\ \nu_{k} \\ \omega_{k} \end{array}\right] = \left[\begin{array}{c} \sin\theta_{k}\cos\phi_{k} \\ \sin\theta_{k}\sin\phi_{k} \\ \cos\theta_{k} \end{array}\right] \end{array} $$
where \((\cdot)^{*}\), \(\times\), and \(||\cdot||\) represent the complex conjugation, cross product, and \(\ell_{2}\)-norm, respectively. We use \(\mu_{k}\), \(\nu_{k}\), and \(\omega_{k}\) for the direction-cosine functions along the x-, y-, and z-axes, respectively.
In this paper, we consider a typical two-level nested array composed of two concatenated ULAs with different inner EMVS spacings. The EMVS linear array is placed along the y-axis with a total of M EMVSs, as illustrated in Fig. 2. The small ULA has \(M_{1}\) sensors with a half-wavelength spacing, whereas the large one has \(M_{2}\) sensors with an intersensor spacing of \((M_{1}+1)\frac {\lambda }{2}\). Thus, the positions of the EMVSs in the nested array are given as
$$ \mathbf z=\frac{\lambda}{2}[1,2,\ldots,M_{1},M_{1}+1,2M_{1}+2,\ldots,M_{2}(M_{1}+1)]^{T}. $$
A nested linear array consisting of two concatenated ULAs with \(M_{1}\) and \(M_{2}\) EMVSs, respectively
According to [24], the DOFs for a nested array with identical scalar-sensors could be determined as
$$\begin{array}{@{}rcl@{}} \bar{M}=\left\{\begin{array}{ll} \frac{M^{2}-2}{2}+M &\textrm{if \textit{M} is even}\\ \frac{M^{2}-1}{2}+M & \textrm{if \textit{M} is odd}. \end{array}\right. \end{array} $$
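This count is easy to verify numerically for the geometry used later in the simulations (\(M_{1}=M_{2}=3\)); the sketch below is our own illustration, with positions in units of \(\lambda/2\):

import numpy as np

M1, M2 = 3, 3
# Two-level nested positions in lambda/2 units: [1 2 3 4 8 12], cf. (6)
z = np.concatenate([np.arange(1, M1 + 1), (M1 + 1) * np.arange(1, M2 + 1)])
lags = np.unique(z[:, None] - z[None, :])   # difference co-array
M = M1 + M2
M_bar = (M**2 - 2) // 2 + M if M % 2 == 0 else (M**2 - 1) // 2 + M
assert len(lags) == M_bar == 23             # a filled virtual ULA of lags
assert np.array_equal(lags, np.arange(-11, 12))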
Assume that there are K narrowband far-field completely polarized signals impinging on this array. As a result, the spatial steering vector of the nested EMVS array for the kth source is given by
$$ \mathbf{a}(\theta_{k})\stackrel{\triangle}{=}\left[e^{j\frac{2\pi}{\lambda} z_{1}\sin\theta_{k}},e^{j\frac{2\pi}{\lambda} z_{2}\sin\theta_{k}},\ldots,e^{j\frac{2\pi}{\lambda} z_{M}\sin\theta_{k}}\right]^{T} $$
where λ denotes the wavelength of the sources and \(z_{k}\) denotes the kth element of \(\mathbf z\). Note that (8) only captures the spatial relationship among all M EMVSs in the array, without taking the physical properties of the EMVS into account. The tth sample vector of the whole array is given as
$$\begin{array}{*{20}l} \mathbf{y}(t)=&\sum_{k=1}^{K}s_{k}(t)(\mathbf{a}(\theta_{k})\odot\mathbf{p}_{k})+\mathbf{n}(t) \end{array} $$
$$\begin{array}{*{20}l} =&(\mathbf{A}\odot\mathbf P)\mathbf{s}(t)+\mathbf{n}(t) \end{array} $$
$$\begin{array}{*{20}l} =&\mathbf{A}_{\mathrm{p}}\mathbf{s}(t)+\mathbf{n}(t) \end{array} $$
where \(\mathbf {A}_{\mathrm {p}}\in \mathbb {C}^{6M\times K}\) represents the steering matrix of the whole nested EMVS array, and \(\mathbf s(t)\) and \(\mathbf n(t)\) represent the K signals' waveforms at time t and the additive temporally and spatially white Gaussian noise, respectively. Moreover,
$$\begin{array}{*{20}l} \mathbf{A}&\stackrel{\triangle}{=}[\mathbf{a}(\theta_{1}),\ldots,\mathbf{a}(\theta_{K})] \end{array} $$
$$\begin{array}{*{20}l} \mathbf P&\stackrel{\triangle}{=}[\mathbf p_{1},\ldots,\mathbf p_{K}] \end{array} $$
In this work, we assume that the sources are uncorrelated stationary white Gaussian processes. Meanwhile, the additive noise obeys an independent and identically distributed (IID) Gaussian distribution, i.e., \(\mathbf n(t) \sim \mathcal{CN}(\mathbf 0, \sigma _{n}^{2}\mathbf I)\) with \(\sigma _{n}^{2}\) being the noise variance. Furthermore, the signal and noise are uncorrelated. The covariance matrix is calculated as
$$\begin{array}{*{20}l} \mathbf R &=\mathbb{E}\left[\mathbf y(t)\mathbf y^{H}(t)\right] \\ &=\mathbf{A}_{\mathrm p}\mathbb{E}\left[\mathbf s(t) \mathbf s^{H}(t)\right]\mathbf{A}_{\mathrm p}^{H}+\mathbb{E}\left[\mathbf n(t) \mathbf n^{H}(t)\right] \\ &=\mathbf{A}_{\mathrm p}\mathbf D\mathbf{A}_{\mathrm p}^{H} + \sigma^{2}_{n} \mathbf I_{6M} \end{array} $$
where \(\mathbf D\) denotes the source covariance matrix and \(\mathbb {E}[\cdot ]\) represents the mathematical expectation. Since the sources are statistically uncorrelated, we have
$$ \mathbf D = \text{diag}(\mathbf d) $$
where \(\mathbf d=\left [\sigma ^{2}_{1},\sigma ^{2}_{2},\ldots,\sigma ^{2}_{K}\right ]^{T}\) with \(\sigma _{k}^{2}\) being the power of the kth source, and diag(·) denotes the diagonalization operator that forms a K×1 vector into a K×K square matrix with its elements on the main diagonal and zeroes elsewhere. Note that the covariance matrix cannot be obtained exactly since the number of samples is finite. Instead, we use the sample covariance matrix (SCM)
$$ \hat{\mathbf R}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)\mathbf{y}^{H}(t). $$
For simplicity, we use the covariance matrix in the derivation of the proposed method. However, it should be kept in mind that the SCM is different from the covariance matrix. As a consequence, the estimated source covariance matrix will not have a perfectly diagonal structure. This phenomenon may degrade the performance of the proposed method, as we will see in the simulations later.
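To make the data model concrete, the following Python sketch (our illustration, not the authors' code) synthesizes snapshots according to (9) and forms the SCM of (16); the geometry matches Fig. 2 and the source parameters anticipate the simulation setup used later.

import numpy as np

rng = np.random.default_rng(2)
n_pos = np.array([1, 2, 3, 4, 8, 12])       # EMVS positions in lambda/2 units
M, K, T, sigma_n = len(n_pos), 2, 500, 0.1

def xi_matrix(theta, phi):
    # The 6 x 2 matrix Xi(theta, phi) from (4)
    ct, st, cp, sp = np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)
    return np.array([[cp * ct, -sp],
                     [sp * ct,  cp],
                     [-st,      0.0],
                     [-sp,     -cp * ct],
                     [ cp,     -sp * ct],
                     [ 0.0,     st]])

def p_vec(theta, phi, gamma, eta):
    # Polarization vector p = Xi zeta, cf. (4)
    zeta = np.array([np.sin(gamma) * np.exp(1j * eta), np.cos(gamma)])
    return xi_matrix(theta, phi) @ zeta

thetas, phis = np.deg2rad([30.0, 40.0]), np.deg2rad([30.0, 60.0])
gammas, etas = np.deg2rad([20.0, 45.0]), np.deg2rad([15.0, 30.0])

# A_p = A ⊙ P: column k equals a(theta_k) ⊗ p_k (sensor-major), cf. (11)
Ap = np.stack([np.kron(np.exp(1j * np.pi * n_pos * np.sin(th)),
                       p_vec(th, ph, ga, et))
               for th, ph, ga, et in zip(thetas, phis, gammas, etas)], axis=1)

S = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)
N = sigma_n * ((rng.standard_normal((6 * M, T))
                + 1j * rng.standard_normal((6 * M, T))) / np.sqrt(2))
Y = Ap @ S + N                              # snapshots, cf. (9)
R_hat = Y @ Y.conj().T / T                  # sample covariance matrix, cf. (16)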
By vectorizing the covariance matrix, an aperture-enlarged array is obtained, and its observation is given as
$$\begin{array}{*{20}l} \text{vec}(\mathbf R)=&(\mathbf{A}_{\mathrm p}^{*}\odot\mathbf{A}_{\mathrm p})\mathbf d + \sigma_{n}^{2}\mathbf 1 \end{array} $$
$$\begin{array}{*{20}l} =&(\mathbf{A}^{*}\odot\mathbf P^{*}\odot\mathbf{A}\odot\mathbf P)\mathbf d + \sigma_{n}^{2}\mathbf 1 \end{array} $$
where \(\mathbf 1=\text{vec}(\mathbf I_{6M})\). Note that the steering matrix \(\mathbf {A}_{\mathrm p}^{*}\odot \mathbf {A}_{\mathrm p}\in \mathbb {C}^{36M^{2}\times K}\) has a larger size, which extends the array aperture. However, the equivalent sources \(\mathbf d\) become coherent since \(\mathbf s(t)\) is a stationary process. To circumvent this problem, several rank restoration methods, e.g., the spatial-smoothing technique and compressive sensing-based approaches, have been suggested. These strategies might not be appropriate for the nested EMVS array, because the aperture-extended array does not have the ULA structure, which prohibits the application of the spatial-smoothing technique. Note also that the spatial matrix \(\mathbf A\) and the polarization matrix \(\mathbf P\) are arranged alternately as depicted in (18), i.e., they are not decoupled. As a matter of fact, the EMVS array can be described by a multi-dimensional model. This thereby motivates us to establish the tensor model for the observations of the EMVS array.
Tensor modeling
Han's tensor modeling approach
The measurement matrix of the whole EMVS array at the tth sample is obtained through the matrization of (9) as
$$\begin{array}{*{20}l} \mathbf{Y}(t) =& \text{mat}(\mathbf y(t)) \end{array} $$
$$\begin{array}{*{20}l} =&\sum_{k=1}^{K}s_{k}(t)(\mathbf{a}(\theta_{k})\circ\mathbf{p}_{k})+\mathbf{N}(t) \end{array} $$
$$\begin{array}{*{20}l} =&\boldsymbol{\mathcal{A}}\times_{3}\mathbf s(t)+\mathbf N(t) \end{array} $$
where mat(·) denotes the matrization operator, \(\mathbf {Y}(t)\in \mathbb {C}^{M\times 6}\), and \(\boldsymbol {\mathcal {A}}\in \mathbb {C}^{M\times 6\times K}\) represents the steering tensor as shown in Fig. 3.
The steering tensor of a six-component EMVS
The 3-mode matrix unfolding of the steering tensor is equivalent to the steering matrix of EMVS as
$$ \boldsymbol{\mathcal{A}}^{T}_{(3)}=\mathbf{A}_{\mathrm p}. $$
Based on [30], the four-dimensional covariance tensor \(\boldsymbol {\mathcal {R}}\) is constructed as
$$ \boldsymbol{\mathcal{R}}=\mathbb{E}[\mathbf Y \circ \mathbf Y^{*}]\in\mathbb{C}^{M\times 6\times M\times 6}. $$
We could regard (23) as a tensor extension of the covariance matrix. To further investigate the structure of \(\boldsymbol {\mathcal {R}}\), each element in \(\boldsymbol {\mathcal {R}}\) could be expressed as
$$ r_{i_{1}i_{2}i_{3}i_{4}}=\mathbb{E}\left[\mathbf{Y}_{i_{1}i_{2}}(t)\mathbf{Y}_{i_{3}i_{4}}^{*}(t)\right]. $$
To be specific, we have
$$ \boldsymbol{\mathcal{R}}=\mathbb{E}[(\boldsymbol{\mathcal{A}}\times_{3}\mathbf s+\mathbf N)\circ(\boldsymbol{\mathcal{A}}\times_{3}\mathbf s+\mathbf N)^{*}]. $$
Then, performing the 2-mode matrization of \(\boldsymbol {\mathcal {R}}\) yields
$$\begin{array}{*{20}l} \mathbf R_{\text{ten}}&=\boldsymbol{\mathcal{R}}_{(2)}^{T} \notag \\ &=\boldsymbol{\mathcal{A}}_{\text{nest}}\times_{3}\mathbf d+\sigma_{n}^{2}\mathbf{E}_{n} \end{array} $$
where \(\boldsymbol {\mathcal {A}}_{\text {nest}}\in \mathbb {C}^{6M^{2}\times 6\times K}\) stands for the steering tensor and
$$ \mathbf{E}_{n}=\left[ \begin{array}{llll} \text{vec}(\mathbf I_{M}) & & & \\ & \text{vec}(\mathbf I_{M}) & &\\ & & \ddots & \\ & & & \text{vec}(\mathbf I_{M}) \end{array}\right]. $$
Comparing with (17), (26) has a similar form but with a nested steering tensor of size \(6M^{2}\times 6\times K\). Since the DOA and polarization state are coupled in the steering tensor, we take its horizontal slices as
$$ \boldsymbol{\mathcal{A}}_{\text{nest}} = \left[\boldsymbol{\mathcal{A}}_{\text{nest}}^{1},\ldots,\boldsymbol{\mathcal{A}}_{\text{nest}}^{6}\right] $$
where \(\boldsymbol {\mathcal {A}}_{\text {nest}}^{l}\in \mathbb {C}^{6M^{2}\times K}, l=1,\ldots,6\), represents the sub-nested steering matrix associated with the lth polarization state in \(\mathbf p_{k}\). The polarization state and the spatial structure are coupled in the first dimension of the lth sub-nested steering matrix, and it has the following form:
$$\begin{array}{*{20}l} \boldsymbol{\mathcal{A}}_{\text{nest}}^{l}=&\boldsymbol{\mathcal{A}}_{(3)}^{H}\odot\mathbf{A}_{l} \notag \\ =&\mathbf{A}_{\mathrm p}^{*}\odot\mathbf{A}_{l} \end{array} $$
and \(\mathbf{A}_{l}\) denotes one slice of the nested steering tensor associated with the lth polarization state
$$ \mathbf{A}_{l}=\left[ \begin{array}{ccc} p_{1l}e^{-jr_{1}\pi\sin\theta_{1}} &\ldots& p_{Kl}e^{-jr_{1}\pi\sin\theta_{K}} \\ p_{1l}e^{-jr_{2}\pi\sin\theta_{1}} &\ldots& p_{Kl}e^{-jr_{2}\pi\sin\theta_{K}} \\ \vdots & \ddots & \vdots \\ p_{1l}e^{-jr_{M}\pi\sin\theta_{1}} &\ldots& p_{Kl}e^{-jr_{M}\pi\sin\theta_{K}} \end{array}\right]. $$
where \(\mathbf{A}_{l}\in \mathbb {C}^{M\times K}, l=1,2,\ldots,6\), and \(r_{m}\) stands for the mth sensor position in units of \(\lambda/2\). In a word, the tensor modeling of the nested EMVS array can be regarded as folding one of the polarization dimensions of the matrix-based steering matrix into a higher dimension. Note that the spatial and polarization dimensions of the nested steering tensor are still coupled.
Proposed tensor modeling method
Different from the existing tensor modeling methods in [30] and [31], we introduce the property of tensor permutation to establish a tensor model of the nested EMVS array.
Definition 4
Tensor permutation Consider an N-dimensional tensor; a permutation operator of this tensor is denoted as \(\pi=[\pi_{1},\pi_{2},\ldots,\pi_{N}]\), where \(\pi_{n}\in\{1,2,\ldots,N\}, n=1,2,\ldots,N\). An N-dimensional tensor \(\boldsymbol {\mathcal {C}}\in \mathbb {C}^{I_{1}\times I_{2}\ldots \times I_{N}}\) with indexes \(i_{1},i_{2},\ldots,i_{N}\) after permutation leads to \(\boldsymbol {\mathcal {C}}_{\pi }\in \mathbb {C}^{I_{\pi _{1}}\times I_{\pi _{2}}\ldots \times I_{\pi _{N}}}\):
$$ \pi :(i_{1},i_{2},\ldots,i_{N})\mapsto(i_{\pi_{1}},i_{\pi_{2}},\ldots,i_{\pi_{N}}) $$
For example, we define a permutation as \(\pi=[3,2,1]\). Then, a three-dimensional tensor \(\boldsymbol {\mathcal {C}}\in \mathbb {C}^{I_{1}\times I_{2}\times I_{3}}\) after permutation is \(\boldsymbol {\mathcal {C}}_{\pi }\in \mathbb {C}^{I_{3}\times I_{2}\times I_{1}}\), as shown in Fig. 4.
Diagram of a three-dimensional tensor permutation
Property 2
If an N-dimensional tensor \(\boldsymbol {\mathcal {C}}\in\mathbb{C}^{I_{1}\times \cdots \times I_{N}}\) could be expressed as an outer product of N vectors, we have the following expression after the permutation \(\pi=[\pi_{1},\pi_{2},\ldots,\pi_{N}]\):
$$ \boldsymbol{\mathcal{C}}_{\pi}=\mathbf c_{\pi_{1}}\circ\mathbf c_{\pi_{2}}\circ\ldots\circ\mathbf c_{\pi_{N}} $$
This property is proved by using Property 1 and the definition of tensor permutation. It represents a special case for rank-one tensors and is based on the fact that tensor permutation maintains the inter-relationship of the factor vectors.
Recalling (17), we obtain
$$\begin{array}{*{20}l} \text{vec}(\mathbf R)&=(\mathbf{A}^{*}\odot\mathbf P^{*}\odot\mathbf{A}\odot\mathbf P)\mathbf d+\sigma_{n}^{2}\mathbf 1 \notag \\ &=\sum_{k=1}^{K}\sigma_{k}^{2}\mathbf{a}^{*}(\theta_{k})\odot\mathbf p_{k}^{*}\odot\mathbf{a}(\theta_{k})\odot\mathbf p_{k}+\sigma_{n}^{2}\mathbf 1. \end{array} $$
According to Property 1, we formulate (33) into a four-dimensional tensor as
$$ \boldsymbol{\mathcal{G}}_{\text{nest}}=\sum_{k=1}^{K}\sigma_{k}^{2}\mathbf{a}^{*}(\theta_{k})\circ\mathbf p^{*}_{k}\circ\mathbf{a}(\theta_{k})\circ\mathbf p_{k}+\boldsymbol{\mathcal{N}}_{\text{nest}} $$
where \(\boldsymbol {\mathcal {G}}_{\text {nest}}\in \mathbb {C}^{M\times 6\times M\times 6}\). Then, by setting \(\pi:(1,3,2,4)\), \(\boldsymbol {\mathcal {G}}_{\text {nest}}\) after the permutation \(\pi\) becomes
$$ \boldsymbol{\mathcal{G}}_{\pi}=\sum_{k=1}^{K}\sigma_{k}^{2}\mathbf{a}^{*}(\theta_{k})\circ\mathbf{a}(\theta_{k})\circ\mathbf p^{*}_{k}\circ\mathbf p_{k}+\boldsymbol{\mathcal{N}}_{\pi}. $$
Performing 1-mode unfolding of \(\boldsymbol {\mathcal {G}}_{\pi }\in \mathbb {C}^{M\times M\times 6\times 6}\) into a three-dimensional tensor yields
$$ \boldsymbol{\mathcal{G}}_{\pi}^{(1)}=\sum_{k=1}^{K}\sigma_{k}^{2}(\mathbf{a}^{*}(\theta_{k})\odot\mathbf{a}(\theta_{k}))\circ\mathbf p^{*}_{k}\circ\mathbf p_{k}+\boldsymbol{\mathcal{N}}_{\pi} $$
where \(\boldsymbol {\mathcal {G}}_{\pi }^{(1)}\in \mathbb {C}^{M^{2}\times 6\times 6}\). The spatial and polarization vectors are now arranged in order instead of being placed alternately as in (33). Up to this step, a three-dimensional tensor with decoupled spatial and polarization factors has been constructed. However, \(\mathbf a^{*}(\theta_{k})\odot \mathbf a(\theta_{k})\) does not obey the Vandermonde structure and incurs several repeated spatial phase factors, as pointed out in [30]. Here, we directly remove the repeated ones to reduce the dimension of \(\mathbf a^{*}(\theta_{k})\odot \mathbf a(\theta_{k})\) while keeping the DOFs unchanged. The selection and permutation matrices are defined as \(\mathbf J_{s}\) and \(\mathbf J_{p}\), respectively. Then, we operate them on \(\boldsymbol {\mathcal {G}}_{\pi }^{(1)}\) to remove the repeated spatial phase factors and arrange the remaining ones in order as
$$\begin{array}{*{20}l} \boldsymbol{\mathcal{G}}&=\boldsymbol{\mathcal{G}}_{\pi}^{(1)}\times_{1}\mathbf J_{s}\times_{1}\mathbf J_{p} \end{array} $$
$$\begin{array}{*{20}l} &=\sum_{k=1}^{K}\sigma_{k}^{2}\mathbf b(\theta_{k})\circ\mathbf p^{*}_{k}\circ\mathbf p_{k}+\boldsymbol{\mathcal{N}} \end{array} $$
where \(\boldsymbol {\mathcal {G}}\in \mathbb {C}^{\bar {M}\times 6\times 6}\) and
$$ \mathbf b(\theta_{k})=\left[e^{-j\pi \frac{\bar{M}-1}{2}\sin\theta_{k}},\ldots,1,\ldots,e^{j\pi \frac{\bar{M}-1}{2}\sin\theta_{k}}\right]^{T}. $$
Note that the selection and permutation matrices are determined by the spatial structure of the nested array and thus can be calculated offline once.
To be specific, each element in \(\boldsymbol {\mathcal {G}}\) is expressed as
$$ g_{i_{1}i_{2}i_{3}}=\sum_{k=1}^{K}\sigma_{k}^{2}b_{i_{1}k}p^{*}_{i_{2}k}p_{i_{3}k}+\sigma_{n}^{2}\delta_{i_{1}i_{2}i_{3}} $$
where \(i_{1}=1,\ldots,\bar {M}, i_{2}=1,\ldots,6, i_{3}=1,\ldots,6\), and δ stands for the Kronecker delta-type indicator such that
$$ \delta_{i_{1}i_{2}i_{3}}= \left\{\begin{array}{ll} 1 \quad &i_{1}=\frac{\bar{M}+1}{2} \quad \text{and} \quad i_{2}=i_{3}\\ 0&\text{elsewhere} \end{array}\right.. $$
Since we already know the positions of the noise terms, as given in (41), the noise can easily be eliminated from \(\boldsymbol {\mathcal {G}}\) to further improve the signal-to-noise ratio (SNR) in real-world applications. Performing the SVD of the SCM in (16), the singular value matrix is given as \(\mathbf \Sigma =\text {diag}(\hat {\sigma }_{1}^{2},\ldots,\hat {\sigma }_{6M}^{2})\). The singular values are ordered decreasingly as \(\hat {\sigma }_{1}^{2}\geq \cdots \geq \hat {\sigma }_{K}^{2}>\hat {\sigma }_{K+1}^{2}=\ldots =\hat {\sigma }_{6M}^{2}\). Thus, an estimate of the noise power is given as
$$ \hat{\sigma}^{2}_{n}=\frac{1}{6M-K}\sum_{k=K+1}^{6M}\hat{\sigma}_{k}^{2}. $$
It should be emphasized that although the proposed tensor model has a similar form to that in [31], the physical properties behind the two models are quite different. In our proposed model, the matrix factors represent the spatial and polarization states while dropping the temporal information. Note that the powers of the K sources are nuisance parameters which do not influence the estimates of the other parameters.
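In code, the whole construction of \(\boldsymbol {\mathcal {G}}\) amounts to one reshape, one transpose (the permutation \(\pi\)), and an averaging over repeated spatial lags; the sketch below continues from the data-generation example above (the index conventions are ours, and the averaging plays the role of \(\mathbf J_{s}\) and \(\mathbf J_{p}\)):

import numpy as np

# R_hat[6*m1 + j1, 6*m2 + j2] viewed as the four-way tensor of (34)
G4 = R_hat.reshape(M, 6, M, 6)              # indices (m1, j1, m2, j2)
G_pi = G4.transpose(0, 2, 1, 3)             # permutation pi: (m1, m2, j1, j2)

diff = n_pos[:, None] - n_pos[None, :]      # spatial lag of every sensor pair
lags = np.unique(diff)                      # 23 distinct lags: -11, ..., 11
G = np.stack([G_pi[diff == lag].mean(axis=0) for lag in lags])
M_bar = len(lags)
# Each slice G[l] ≈ sum_k sigma_k^2 e^{j pi lags[l] sin(theta_k)} p_k p_k^H,
# plus the sigma_n^2 I term on the zero-lag slice, cf. (40)-(41)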
DOA and polarization estimation
Elevation estimation
In this subsection, the elevation estimation is achieved first. Then, we use the results to obtain the azimuth and polarization estimates in the next subsection. The proposed tensor modeling approach constructs a three-dimensional tensor which is able to exploit the spatial correlation structure inherent in the EMVS array data. To this end, we use \(\mathbf B\), \(\mathbf P^{*}\), and \(\mathbf P\) to stand for the three matrix factors. Thus, we could use the CPD to estimate these factors directly. The CPD of \(\boldsymbol {\mathcal {G}}\) achieves the estimates of \(\mathbf B\) and \(\mathbf P\) up to permutation and scaling [14]. The Alternating Least Squares (ALS) algorithm is usually applied to conduct the general CPD without prior knowledge of the structure of the matrix factors. It is worth noting that since \(\mathbf B\) obeys the Vandermonde structure, the problem at hand could be solved through the Vandermonde-constrained CPD (VCPD) [17], which has a more relaxed uniqueness condition.
The noiseless matrix form of \(\boldsymbol {\mathcal {G}}\) is obtained by its 1-mode unfolding, which is given by
$$ \mathbf G=\mathbf B\mathbf D\mathbf H^{T} $$
$$\begin{array}{*{20}l} \mathbf B=&[\mathbf b(\theta_{1}),\mathbf b(\theta_{2}),\ldots,\mathbf b(\theta_{K})] \end{array} $$
$$\begin{array}{*{20}l} \mathbf H =& \mathbf P^{*}\odot\mathbf P. \end{array} $$
Considering (43), we divide \(\mathbf G\) into L overlapping submatrices, each of size \(M_{s}\times 36\), to obtain the spatial smoothing of \(\mathbf G\) along the spatial dimension. Firstly, we define the lth selection matrix as
$$ \mathbf J_{l}=[\mathbf 0_{M_{s}\times (l-1)} \ \mathbf I_{M_{s}} \ \mathbf 0_{M_{s}\times (L-l)}], \quad 1\leq l\leq L. $$
The augmented covariance Gs is then constructed as
$$\begin{array}{*{20}l} \mathbf G_{s}&=[\mathbf J_{1}\mathbf G,\ldots,\mathbf J_{L}\mathbf G]\in\mathbb{C}^{M_{s}\times 36L} \end{array} $$
$$\begin{array}{*{20}l} &=\mathbf B_{s}\left[\mathbf D\mathbf H^{T},\boldsymbol{\Phi}\mathbf D\mathbf H^{T},\ldots,\boldsymbol{\Phi}^{L-1}\mathbf D\mathbf H^{T}\right] \end{array} $$
where \(M_{s}=\bar {M}-L+1\) and
$$\begin{array}{*{20}l} \boldsymbol{\Phi}\stackrel{\triangle}{=}&\text{diag}\left(e^{-j\pi\sin\theta_{1}},\ldots,e^{-j\pi\sin\theta_{K}}\right) \end{array} $$
$$\begin{array}{*{20}l} \mathbf B_{s}=&[\mathbf I_{M_{s}} \ \mathbf 0_{M_{s}\times (\bar{M}-M_{s})}]\mathbf B. \end{array} $$
Through the construction of the augmented covariance matrix, we restore the rank of \(\mathbf G\) to achieve a better estimate of the signal subspace. Performing the SVD of \(\mathbf G_{s}\), a low-rank approximation is obtained as
$$ \text{SVD}(\mathbf G_{s})=\left[ \begin{array}{lll} \hat{\mathbf{U}}_{s} & \hat{\mathbf{U}}_{n} \end{array}\right] \left[ \begin{array}{llll} \hat{\boldsymbol{\Sigma}}_{s} & \mathbf{0} \\ \mathbf{0} & \hat{\boldsymbol{\Sigma}}_{n} \end{array}\right] \left[\begin{array}{cc} \hat{\mathbf{V}}_{s} & \hat{\mathbf{V}}_{n} \end{array}\right]^{H}. $$
It follows from (50) and (51) that
$$ \hat{\mathbf{U}}_{s} =\hat{\mathbf B}_{s}\mathbf T $$
where T represents a full-rank matrix. To proceed, we need to define the selection matrices Js1 and Js2, which are defined as
$$\begin{array}{*{20}l} \mathbf{J}_{s1}\stackrel{\triangle}{=}&\left[\mathbf{I}_{M_{s}-1} \quad \mathbf{0}_{(M_{s}-1)\times 1}\right] \notag \\ \mathbf{J}_{s2}\stackrel{\triangle}{=}&\left[\mathbf{0}_{(M_{s}-1)\times 1} \quad \mathbf{I}_{M_{s}-1} \right]. \end{array} $$
Recall that the steering matrix \(\mathbf B_{s}\) obeys the Vandermonde structure, which indicates that it has the spatial invariance property. The elevation is estimated by applying (52) and (53):
$$ \mathbf{J}_{s1}\hat{\mathbf{U}}_{s}\boldsymbol{\Phi}=\mathbf{J}_{s2}\hat{\mathbf{U}}_{s}. $$
However, there exist subspace perturbations on both sides. Here, we use the total least squares (TLS) solution to eliminate the influence of the subspace perturbation as follows:
$$ (\mathbf{J}_{s1}\mathbf U_{s}+\Delta\mathbf U_{s1})\Phi=\mathbf{J}_{s2}\mathbf U_{s}+\Delta\mathbf U_{s2} $$
where \(\Delta\mathbf U_{s1}\) and \(\Delta\mathbf U_{s2}\) represent the perturbations of the signal subspace on both sides. Note that the left- and right-hand sides of (55) are highly structured and share many elements, which in turn means that they share the same noise. In order to suppress the noise shared by the sub-arrays, the structured least squares ESPRIT (SLS-ESPRIT) is tailored for enhancing the performance of the elevation finding. In particular, defining \(\mathbf U_{s}^{\text {SLS}}=\mathbf U_{s}+\Delta \mathbf U_{s}\) and the residual matrix \(\mathbf F\left (\mathbf U_{s}^{\text {SLS}},\Phi \right)=\mathbf {J}_{s1}\mathbf U_{s}^{\text {SLS}}\Phi -\mathbf {J}_{s2}\mathbf U_{s}^{\text {SLS}}\), the SLS-ESPRIT method attempts to minimize
$$ \min_{\mathbf U_{s},\Phi}\left\|\left[\begin{array}{c} \mathbf F(\mathbf U_{s}^{\text{SLS}},\Phi) \\ \gamma\Delta\mathbf U_{s} \end{array}\right]\right\| $$
where γ is a weighting factor which is used to avoid trivial solutions. Note that we use an iterative method to solve (56). The rth iteration step is formulated as
$$\begin{array}{*{20}l} \mathbf F&\left(\mathbf U_{s,r+1}^{\text{SLS}},\Phi_{r+1}\right) \\ &=\mathbf F\left(\mathbf U_{s,r}^{\text{SLS}}+\Delta\mathbf U_{s,r}^{\text{SLS}},\Phi_{r}+\Delta\Phi_{r}\right) \end{array} $$
$$\begin{array}{*{20}l} &=\mathbf{J}_{s1}\left(\mathbf U_{s,r}^{\text{SLS}}+\Delta\mathbf U_{s,r}^{\text{SLS}}\right)\left(\Phi_{r}+\Delta\Phi_{r})\right) \\ &\quad-\mathbf{J}_{s2}\left(\mathbf U_{s,r}^{\text{SLS}}+\Delta\mathbf U_{s,r}^{\text{SLS}}\right) \end{array} $$
$$\begin{array}{*{20}l} &=\mathbf F\left(\mathbf U_{s,r}^{\text{SLS}},\Phi_{r}\right) \\ &+\mathbf{J}_{s1}\left(\mathbf U_{s,r}^{\text{SLS}}\Delta\Phi_{r}+\Delta\mathbf U_{s,r}^{\text{SLS}}\Phi_{r}\right)-\mathbf{J}_{s2}\Delta\mathbf U_{s,r}^{\text{SLS}}. \end{array} $$
Neglecting the second-order term of (58), we have
$$ \min_{\Delta \mathbf U_{s,r}^{\text{SLS}},\Delta\Phi_{r}}\left\|\mathbf{A}_{s} \left[\begin{array}{c} \Delta\Phi_{r} \\ \Delta\mathbf U_{s,r}^{\text{SLS}} \end{array}\right] + \left[\begin{array}{c} \mathbf F\left(\mathbf U_{s,r}^{\text{SLS}},\Phi_{r}\right) \\ \gamma\Delta \mathbf U_{s,r}^{\text{SLS}} \end{array}\right] \right\| $$
where
$$ \mathbf{A}_{s} = \left[\begin{array}{cc} \mathbf{J}_{s1}\mathbf U_{s,r}^{\text{SLS}} & \Phi_{r}\mathbf{J}_{s1}-\mathbf{J}_{s2} \\ \mathbf 0 & \gamma\mathbf I \end{array}\right]. $$
Thus, the updates of \(\Phi_{r}\) and \(\mathbf U_{s,r}^{\text {SLS}}\) can be expressed in closed form as
$$ \left[\begin{array}{c} \Delta\Phi_{r} \\ \Delta\mathbf U_{s,r}^{\text{SLS}} \end{array}\right] =-\mathbf{A}_{s}^{\dagger} \left[\begin{array}{cc} \mathbf{F}\left(\mathbf{U}_{s,r}^{\text{SLS}},\Phi_{r}\right) \\ \gamma\Delta \mathbf U_{s,r}^{\text{SLS}} \end{array}\right] $$
where (·)† represents the pseudo-inverse operator. After several iterations, the solution of (56) is achieved. Although the proposed SLS-VCPD method is not optimal, it realizes a tradeoff between computational burden and estimation accuracy.
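A compact sketch of the elevation step, (46)-(54), continuing from the construction of \(\boldsymbol {\mathcal {G}}\) above; for brevity it uses the plain LS solution of the shift-invariance relation in place of the iterative SLS refinement (56)-(62), which would further polish these estimates:

import numpy as np

G1 = G.reshape(M_bar, 36)                   # 1-mode unfolding of G, cf. (43)
L = (M_bar + 1) // 2                        # smoothing parameter (our choice)
Ms = M_bar - L + 1
Gs = np.hstack([G1[l:l + Ms] for l in range(L)])   # augmented matrix, cf. (47)

Us = np.linalg.svd(Gs)[0][:, :K]            # signal subspace, cf. (51)-(52)
# Shift invariance J_s1 Us Phi = J_s2 Us, here solved in the plain LS sense
Psi = np.linalg.pinv(Us[:-1]) @ Us[1:]
theta_hat = np.sort(np.arcsin(np.angle(np.linalg.eigvals(Psi)) / np.pi))
print(np.rad2deg(theta_hat))                # ≈ [30, 40] for the example data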
Azimuth and polarization joint estimation
In the above subsection, the estimates of the elevations have been determined. This allows us to construct \(\mathbf B(\hat {\boldsymbol {\theta }})\), leading to the LS estimate of \(\mathbf H\) as
$$ \hat{\mathbf H}=\mathbf G^{T}\left(\mathbf B(\hat{\boldsymbol{\theta}})^{T}\right)^{\dagger} $$
where \(\hat {\mathbf H}\) consists of K vectors, that is
$$ \hat{\mathbf H}=[ \hat{\mathbf h}_{1},\hat{\mathbf h}_{2},\ldots,\hat{\mathbf h}_{K}]. $$
Recalling (45), the kth column of \(\mathbf H\) is indeed an outer product of polarization vectors. Dividing \(\hat {\mathbf h}_{k}\) into six equally sized concatenated vectors and stacking them column by column to form a square matrix, we have
$$ \text{mat}(\hat{\mathbf h}_{k})=\lambda_{k}\hat{\mathbf p}_{k}\circ\hat{\mathbf p}_{k}^{*}\in\mathbb{C}^{6\times 6} $$
Performing the SVD of \(\text{mat}(\hat{\mathbf h}_{k})\), an approximate estimate of the polarization vector is obtained as
$$ \hat{\mathbf p}_{k} = \hat{\mathbf u}_{k} $$
where \(\hat {\mathbf u}_{k}\) stands for the singular vector associated with the largest singular value.
We have now achieved an estimate of the polarization vector. According to (4), the estimated Poynting vector of the kth source is
$$ \hat{\mathbf g}_{k}=\frac{\hat{\mathbf{e}}_{k}}{||\hat{\mathbf{e}}_{k}||}\times\frac{\hat{\mathbf{h}}_{k}^{*}}{||\hat{\mathbf{h}}_{k}||} $$
It follows that the estimated azimuth is calculated as
$$ \hat{\phi}_{k}=\arctan (\hat{\nu}_{k}/\hat{\mu}_{k}) $$
Substituting \(\hat{\theta}_{k}\) and \(\hat{\phi}_{k}\) into (4), we achieve the estimate \(\hat{\boldsymbol{\Xi}}_{k}\). The polarization parameters associated with the kth signal are hence calculated as
$$\begin{array}{*{20}l} \hat{\gamma}_{k}&=\arctan\frac{\hat{\zeta}_{k1}}{\hat{\zeta}_{k2}} \end{array} $$
$$\begin{array}{*{20}l} \hat{\eta}_{k}&=\angle\hat{\zeta}_{k1} \quad k=1,2,\ldots,K \end{array} $$
$$ \hat{\boldsymbol{\zeta}}_{k}=[ \hat{\zeta}_{k1} \,\, \hat{\zeta}_{k2}]^{T} = \hat{\boldsymbol{\Xi}}_{k}^{\dagger}\hat{\mathbf p}_{k}. $$
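The remaining steps can be sketched as follows, continuing the running example: a rank-one fit per source, the Poynting vector for the azimuth, and a pseudo-inverse for \((\gamma_{k},\eta_{k})\). Since \(\hat{\mathbf p}_{k}\) is recovered only up to a phase, phase-invariant variants of (68)-(69) are used below (our adaptation), and the reshape convention is our assumption.

import numpy as np

B = np.stack([np.exp(1j * np.pi * lags * np.sin(th)) for th in theta_hat], axis=1)
H_hat = np.linalg.pinv(B) @ G1              # k-th row ≈ h_k^T, cf. (62)

for k, th in enumerate(theta_hat):
    Hk = H_hat[k].reshape(6, 6)             # ≈ lambda_k p_k p_k^H, cf. (64)
    p = np.linalg.svd(Hk)[0][:, 0]          # dominant singular vector, cf. (65)
    e, h = p[:3], p[3:]
    g = np.cross(e / np.linalg.norm(e), h.conj() / np.linalg.norm(h))  # cf. (66)
    phi = np.arctan2(g[1].real, g[0].real)  # azimuth, cf. (67)
    zeta = np.linalg.pinv(xi_matrix(th, phi)) @ p                      # cf. (70)
    gamma = np.arctan(np.abs(zeta[0]) / np.abs(zeta[1]))   # phase-invariant (68)
    eta = np.angle(zeta[0]) - np.angle(zeta[1])            # phase-invariant (69)
    print(np.rad2deg([th, phi, gamma, eta]))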
Note that the DOA and polarization associated with the same source are auto-paired since they share the same eigenvector. Thus, no extra pairing step is needed. Table 1 summarizes the DOA and polarization estimation procedure.
Table 1 Algorithm of Joint Elevation-Azimuth-Polarization Estimation
Identifiability and CRB
Identifiability
In this subsection, we first review the sufficient uniqueness condition for the EMVS array from the CPD perspective. Note that an N-dimensional tensor is defined as a rank-one tensor if it can be written as an outer product of N vectors. Thus, \(\boldsymbol {\mathcal {G}}\) is rank-K since K is the minimal number of rank-one tensors whose sum gives (38). Consider a three-dimensional tensor \(\boldsymbol {\mathcal {X}}\in \mathbb {C}^{I_{1}\times I_{2}\times I_{3}}\) of rank K, which means that it is a sum of K outer products of three vectors and could be expressed elementwise as
$$ x_{i_{1}i_{2}i_{3}}=\sum_{k=1}^{K}a_{i_{1}k}b_{i_{2}k}c_{i_{3}k}. $$
The uniqueness condition of a tensor under the CPD relies on Kruskal's condition. The Kruskal rank, termed kr, is defined as the maximum number J such that every J columns of a matrix are linearly independent, and it is denoted as \(kr(\mathbf A)=J\). Kruskal's condition provides a sufficient uniqueness condition for a three-dimensional tensor. That is, if the three matrix factors satisfy
$$ kr(\mathbf{A})+kr(\mathbf B)+kr(\mathbf C)\geq 2K+2, $$
then \(\mathbf A\), \(\mathbf B\), and \(\mathbf C\) are unique up to scaling and permutation [14].
For a complete six-component EMVS, its observation data always has \(kr(\mathbf P)\geq 3\). Thus, for an M-element EMVS ULA, the upper bound of the uniqueness condition when the sources are uncorrelated gives
$$ K\leq M+\text{rank}(\mathbf P)-2. $$
Usually, it holds that \(\text{rank}(\mathbf P)\geq 4\) for arbitrary DOA and polarization parameters. So the maximum number of identifiable sources is
$$ K\leq M+2. $$
Comparing with the scalar-sensor ULA, the EMVS array can resolve more sources. Now consider the nested EMVS array: the sufficient uniqueness condition with the proposed model (38) yields
$$ kr(\mathbf B)+kr(\mathbf P^{*})+kr(\mathbf P)\geq 2K+2. $$
Since \(\mathbf B\) is a Vandermonde matrix, it is straightforward to obtain \(kr(\mathbf B)=\bar {M}\). Also, the manifold of a six-component EMVS is free of rank-2 ambiguity, which indicates that \(kr(\mathbf P)\geq 3\) [32]. Substituting \(kr(\mathbf P)=3\) into (76), the upper bound of identifiability with the CPD is
$$ K\leq\frac{\bar{M}}{2}+2. $$
Comparing with (75), the nested EMVS array obviously provides more DOFs. However, the uniqueness condition can be further relaxed if we take the Vandermonde structure of \(\mathbf B\) and the ESPRIT method into consideration together. According to the results in [17], we directly obtain the upper bound of identifiability after applying spatial smoothing as
$$ K\leq\min(6(\bar{M}-L+1),6L). $$
It should be noted that the proposed method perhaps cannot resolve the maximum number of sources given by the upper bound, since there exist errors between the SCM and the covariance matrix when the number of samples is finite and the SNR is not sufficiently high. For more discussions on this phenomenon, readers may refer to [27].
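As a quick numerical reading of this bound for the simulated geometry (\(\bar M=23\)), the short sketch below scans the smoothing parameter L (our illustration):

M_bar = 23
bound = {L: min(6 * (M_bar - L + 1), 6 * L) for L in range(1, M_bar + 1)}
L_star = max(bound, key=bound.get)
print(L_star, bound[L_star])   # L = 12 gives K <= 72, versus K <= M + 2 = 8 for the EMVS ULA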
Underdetermined CRB
The stochastic CRB for the EMVS array model has been well discussed in the literature. However, the existing CRB cannot be directly applied to the underdetermined case in which the number of sources exceeds the number of EMVSs. Under the assumption that the source covariance matrix is diagonal, we extend the stochastic CRB for underdetermined DOA estimation with scalar-sensor arrays to the EMVS array case. Indeed, the derived CRB generalizes the existing results, so that the scalar-sensor array can be regarded as a special case of the EMVS array. For simplicity, the variance of the noise is assumed to be known. The parameter vector composed of the DOAs, polarizations, and source powers is defined as
$$ \boldsymbol{\zeta} = [\boldsymbol{\theta}^{T},\boldsymbol{\phi}^{T},\boldsymbol{\gamma}^{T},\boldsymbol{\eta}^{T},\mathbf d^{T}]^{T} $$
According to [33], the (m,n)th element of the Fisher information matrix (FIM) is given by
$$ \text{FIM}_{mn}=T\text{tr}\left[\frac{\partial\mathbf R}{\partial\zeta_{m}}\mathbf R^{-1}\frac{\partial\mathbf R}{\partial\zeta_{n}}\mathbf R^{-1}\right] $$
where tr(·) represents the trace operator. Since \(\text{tr}(\mathbf A\mathbf B)=\text{vec}(\mathbf A^{T})^{T}\text{vec}(\mathbf B)\) and \(\text{vec}(\mathbf A\mathbf X\mathbf B)=(\mathbf B^{T}\otimes \mathbf A)\text{vec}(\mathbf X)\), we have
$$ \text{FIM}_{mn}=T\left[\frac{\partial\mathbf r}{\partial\zeta_{m}}\right]^{H}\left(\mathbf R^{-T}\otimes\mathbf R^{-1}\right)\left[\frac{\partial\mathbf r}{\partial\zeta_{n}}\right] $$
where \(\mathbf r\stackrel{\triangle}{=}\text{vec}(\mathbf R)\) and ⊗ denotes the Kronecker product. The matrix of first-order derivatives of \(\mathbf r\) with respect to the elements of \(\boldsymbol\zeta\) is defined as
$$\begin{array}{*{20}l} \frac{\partial\mathbf r}{\partial \boldsymbol{\zeta}}&=\left[\frac{\partial\mathbf r}{\partial \theta_{1}},\ldots,\frac{\partial\mathbf r}{\partial \theta_{K}},\frac{\partial\mathbf r}{\partial \phi_{1}},\ldots,\frac{\partial\mathbf r}{\partial \phi_{K}}, \right. \\ &\left.\frac{\partial\mathbf r}{\partial \gamma_{1}},\ldots,\frac{\partial\mathbf r}{\partial \gamma_{K}},\frac{\partial\mathbf r}{\partial \eta_{1}},\ldots,\frac{\partial\mathbf r}{\partial \eta_{K}},\frac{\partial\mathbf r}{\partial \sigma_{1}^{2}},\ldots,\frac{\partial\mathbf r}{\partial \sigma_{K}^{2}}\right]. \end{array} $$
According to (33), it is straightforward to get
$$ \frac{\partial\mathbf r}{\partial \boldsymbol{\zeta}}=\left[\mathbf{A}_{\text{nest}}^{(1)}(\mathbf I_{4} \otimes \mathbf D),\ \mathbf{A}_{\text{nest}}\right] $$
where \(\mathbf{A}_{\text{nest}}=\mathbf{A}_{\mathrm p}^{*}\odot\mathbf{A}_{\mathrm p}\), \(\mathbf{A}_{\text{nest}}^{(1)}=\mathbf{A}^{(1)^{*}}_{\mathrm p}\odot\mathbf{A}_{\mathrm p}+\mathbf{A}^{*}_{\mathrm p}\odot\mathbf{A}^{(1)}_{\mathrm p}\), and
$$\begin{array}{*{20}l} \mathbf{A}^{(1)}_{\mathrm p}&=\left[\frac{\partial\mathbf{a}_{1}}{\partial \theta_{1}},\ldots,\frac{\partial\mathbf{a}_{K}}{\partial \theta_{K}},\frac{\partial\mathbf{a}_{1}}{\partial \phi_{1}},\ldots,\frac{\partial\mathbf{a}_{K}}{\partial \phi_{K}}, \right. \\ &\left.\frac{\partial\mathbf{a}_{1}}{\partial \gamma_{1}},\ldots,\frac{\partial\mathbf{a}_{K}}{\partial \gamma_{K}},\frac{\partial\mathbf{a}_{1}}{\partial \eta_{1}},\ldots,\frac{\partial\mathbf{a}_{K}}{\partial \eta_{K}}\right] \end{array} $$
with \(\mathbf a_{k}\) denoting the kth column of \(\mathbf A_{\mathrm p}\).
To proceed, we need to define the following matrices
$$\begin{array}{*{20}l} \mathbf W &= \left(\mathbf R^{-T}\otimes\mathbf R^{-1}\right)^{1/2} \\ \mathbf Q_{1} &= \mathbf W\mathbf{A}_{\text{nest}}^{(1)}(\mathbf I_{4} \otimes \mathbf D) \notag \\ \mathbf Q_{2} &= \mathbf W\mathbf{A}_{\text{nest}}. \end{array} $$
Then, the FIM can be expressed in a compact form as
$$ \text{FIM}=T \left[\begin{array}{cc} \mathbf Q_{1}^{H}\mathbf Q_{1} & \mathbf Q_{1}^{H}\mathbf Q_{2} \\ \mathbf Q_{2}^{H}\mathbf Q_{1} & \mathbf Q_{2}^{H}\mathbf Q_{2} \end{array}\right]. $$
The underdetermined stochastic CRB of \(\boldsymbol\theta,\boldsymbol\phi,\boldsymbol\gamma,\boldsymbol\eta\) with the nested EMVS array is calculated as
$$ \text{CRB}=\frac{1}{T}\text{binv}(\mathbf Q_{1}^{H}\Pi_{\mathbf Q_{2}}^{\perp}\mathbf Q_{1}) $$
where \(\Pi _{\mathbf Q_{2}}^{\perp }=\mathbf I-\mathbf Q_{2}(\mathbf Q_{2}^{H}\mathbf Q_{2})^{-1}\mathbf Q_{2}^{H}\), and binv(·) stands for the blockwise inverse operator which inverts the K×K blocks on the main block diagonal of the matrix in (87).
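A numerical sketch of (80)-(87) (our own illustration, not the authors' code): the steering derivatives are formed by finite differences on the columns of \(\mathbf A_{\mathrm p}\), unit source powers are assumed, and the projection form \(\mathbf Q_{1}^{H}\Pi^{\perp}_{\mathbf Q_{2}}\mathbf Q_{1}\) is evaluated through the equivalent Schur complement of the FIM. It reuses the names from the data-generation sketch above.

import numpy as np

def ap_col(th, ph, ga, et):
    # Column of A_p: spatial steering ⊗ polarization vector, cf. (9)-(11)
    return np.kron(np.exp(1j * np.pi * n_pos * np.sin(th)), p_vec(th, ph, ga, et))

params = np.stack([thetas, phis, gammas, etas])   # 4 x K table of angles
d = np.ones(K)                                    # source powers (illustrative)
Ap_ = np.stack([ap_col(*params[:, k]) for k in range(K)], axis=1)
R = Ap_ @ np.diag(d) @ Ap_.conj().T + sigma_n**2 * np.eye(6 * M)

# Columns of dr/dzeta: 4K angle parameters, then the K powers, cf. (82)-(84)
eps, cols = 1e-6, []
for i in range(4):
    for k in range(K):
        pp = params.copy(); pp[i, k] += eps
        da = (ap_col(*pp[:, k]) - Ap_[:, k]) / eps          # finite difference
        # d vec(d_k a_k a_k^H) = d_k (conj(a_k) ⊗ da + conj(da) ⊗ a_k)
        cols.append(d[k] * (np.kron(Ap_[:, k].conj(), da)
                            + np.kron(da.conj(), Ap_[:, k])))
for k in range(K):
    cols.append(np.kron(Ap_[:, k].conj(), Ap_[:, k]))       # power derivatives
Dr = np.stack(cols, axis=1)

Ri = np.linalg.inv(R)
FIM = T * (Dr.conj().T @ np.kron(Ri.T, Ri) @ Dr).real       # cf. (86)
G11, G12, G22 = FIM[:4*K, :4*K], FIM[:4*K, 4*K:], FIM[4*K:, 4*K:]
# Full inverse shown; binv in (87) would invert the K x K diagonal blocks
crb = np.linalg.inv(G11 - G12 @ np.linalg.inv(G22) @ G12.T)
print(np.sqrt(np.diag(crb)[:K]))   # root-CRB of the elevations, in radians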
Simulation results and discussions
This section gives various numerical examples to show the performance of the proposed SLS-VCPD method. First, we evaluate the root mean square error (RMSE) performance of our proposed method for elevation, azimuth, and polarization estimation. Second, we compare the proposed approach with state-of-the-art tensor-based schemes on the EMVS array. Throughout all simulations, a nested EMVS array with M=6 EMVSs is adopted, as depicted in Fig. 2. These EMVSs are placed at the positions \(\mathbf z=\frac {\lambda }{2}[1,2,3,4,8,12]^{T}\).
The RMSE of elevation estimation is defined as
$$ \text{RMSE}_{\theta}=\sqrt{\mathbb{E}\left\{\frac{1}{K}\sum_{k=1}^{K}(\hat{\theta}_{k}-\theta_{k})^{2}\right\}}. $$
Note that the RMSEs of the azimuth and polarization are computed by the same formula with the corresponding variables. In addition, the SNR is defined as \(10\text {log}_{10}\left (\|\mathbf {A}_{\mathrm p}\mathbf {S} \|_{F}^{2}/ \| \mathbf {N} \|_{F}^{2}\right)\), where ||·||F represents the Frobenius norm.
In Figs. 5 and 6, we present the RMSE performance of the elevation, azimuth, and polarization estimates. The parameters of the two sources are set as θ=[30∘,40∘], ϕ=[30∘,60∘], γ=[20∘,45∘], and η=[15∘,30∘]. It is seen that the RMSE performance improves when the SNR or the number of samples increases.
DOA estimation performance analysis of SLS-VCPD method versus SNR with different number of samples. A nested EMVS array is equipped with M=6 sensors. The number of sources is set as K=2
Polarization estimation performance analysis of SLS-VCPD method versus SNR with different number of samples. A nested EMVS array is equipped with M=6 sensors. The number of sources is set as K=2
In Fig. 7, we study the influence of the number of iterations on the proposed SLS-VCPD method. In this simulation, the parameters of four sources are randomly chosen as θ=[49.0∘,3.8∘,44.8∘,74.3∘], ϕ=[103.1∘,101.4∘,33.2∘,67.7∘], γ=[14.7∘,46.9∘,65.2∘,33.7∘], and η=[−67.0∘,−101.7∘,−114.2∘,30.5∘]. The number of received samples is set as T=100. The curves of SLS-VCPD with iteration numbers n=1 and n=3 are also plotted for comparison. We observe that the proposed SLS-VCPD method outperforms the LS-VCPD scheme, as expected. Also, the performance of SLS-VCPD remains unchanged as the number of iterations increases, which indicates that the proposed SLS-VCPD method has converged within one iteration.
Performance analysis of the influence of the iteration number on the RMSE versus SNR for four signals with M=6
Overdetermined case
In this subsection, we focus on the elevation estimation performance in the overdetermined case. To examine the performance of the nested EMVS array, two array configurations are considered, i.e., the ULA and the nested array. The number of elements in the EMVS ULA is set as M=6 with K=2 impinging sources. We compare the CPD with the EMVS ULA, namely ULA-CPD [18], with the tensor-MUSIC [30], SS-CPD [31], and proposed SLS-VCPD methods using the nested EMVS array. Since the tensor-MUSIC, SS-CPD, and SLS-VCPD methods are all constructed on the nested array, we omit the label "nested array" for simplicity. Note that the SLS-VCPD method only uses one iteration step. The smoothing parameter is set as L=2 for both the SS-CPD and SLS-VCPD methods. For comparison, the polarization states of the sources are assumed known a priori for the tensor-MUSIC method in order to reduce its computational burden. Also, the stochastic CRB with the nested EMVS array is plotted as a benchmark. The number of Monte-Carlo trials for each simulation is set as 1000.
In Fig. 8, we compare the elevation RMSEs of all methods while varying the SNR from -4 dB to 20 dB for a fixed T=50 samples. The parameters of the two sources are the same as those in Fig. 5. It is seen that the SLS-VCPD approach results in the lowest RMSE among the aforementioned schemes. The nested EMVS array offers more DOFs compared with the EMVS ULA; thus, we can observe better RMSE performance. However, the RMSEs of the parameter estimation methods with the nested EMVS array converge to a nonzero value, since errors exist in (17) when the source covariance matrix is not strictly diagonal [27]. In addition, we also give the average CPU run time of each method for comparison. The ULA-CPD, tensor-MUSIC, SS-CPD, and SLS-VCPD methods demand 0.23 s, 3.31 s, 0.47 s, and 0.51 s, respectively. It can be seen that the proposed SLS-VCPD method has a modest computational complexity among the four methods. The tensor-MUSIC method requires multi-dimensional searching over the whole parameter space, which needs the most computations.
RMSE versus SNR for two sources with M=6 and T=50
In Fig. 9, we examine the elevation RMSEs of all methods while varying the number of samples from T=20 to T=200 with fixed SNR=10 dB. The parameters of the sources remain the same as those in Fig. 5. It can be observed that the SLS-VCPD algorithm is clearly advantageous compared with the other methods.
RMSE versus the number of samples for two sources with M=6 and SNR=10 dB
In Fig. 10, we compare the elevation resolvabilities of all the above approaches; T=50 samples are used and 500 Monte-Carlo trials have been carried out. Two closely spaced sources are assumed to impinge on the array with θ=[20∘,22∘], ϕ=[30∘,30∘], γ=[0∘,15∘], and η=[15∘,30∘]. The resolvable condition defined for the elevation estimation is given as [34]
$$ | \hat{\theta}_{k} - \theta_{k} | < \frac{ |\theta_{1} - \theta_{2}| }{2} $$
Resolution ability versus SNR for two spatial closely located sources with M=6, and the number of samples is set as T=50
where \(\hat {\theta }_{k}, k=1,2\), represents the estimated elevations of the two sources. We can see that the SLS-VCPD method provides the best resolvability among all the schemes. The SLS-VCPD, ULA-CPD, and SS-CPD methods attain a probability of successful resolution of one when SNR ≥12 dB. However, the tensor-MUSIC method fails to provide reliable resolution even when the SNR becomes sufficiently large.
Underdetermined case
In this subsection, we consider the underdetermined case, where the number of sources is larger than the number of elements in the original EMVS array. Recall that an M-element EMVS ULA can resolve up to M+2 sources. Here, we consider an M=6 element nested EMVS array with K=10 impinging sources. In this scenario, the ULA-CPD method [15, 18] cannot resolve these sources. The parameters of the ten sources are randomly chosen. We examine the performance of the ALS-CPD with our proposed model (38), the SS-CPD, and the SLS-VCPD methods based on the nested EMVS array. The tensor-MUSIC method is no longer included, since the polarization states are very difficult to know a priori when the number of sources is so large; also, its computational burden becomes intolerable.
Figure 11 shows the elevation RMSEs of the three methods versus SNR for T=2000 samples. It is obvious that the SLS-VCPD method offers the lowest RMSE, especially when SNR <8 dB. When SNR >16 dB, the ALS-CPD, SS-CPD, and SLS-VCPD schemes have almost the same RMSE performance. The RMSE curves saturate as the SNR becomes sufficiently high. This phenomenon is caused by the approximation in (17), since the estimated source covariance cannot be strictly diagonal when the number of samples is finite, as discussed in the overdetermined case.
RMSE versus SNR for ten sources with M=6 and T=2000
In Fig. 12, we study the elevation RMSEs versus the number of samples with SNR=10 dB. The proposed approach outperforms the other two algorithms when T<1600. Moreover, it is observed that the SLS-VCPD and SS-CPD methods result in almost the same performance when the number of samples is greater than 1600.
RMSE versus the number of samples for ten sources with M=6 and SNR=10 dB
A tensor modeling method for elevation, azimuth, and polarization estimation with nested electromagnetic vector-sensor arrays has been proposed. Since the signals are uncorrelated, we build the SCM into a CP model through tensor permutation, whereby the spatial and polarization information is separated. Then, the SLS-VCPD method is implemented on this model to calculate the elevation and azimuth. Next, the polarization is estimated based on the structure of the vector-sensor.
The issue of joint elevation, azimuth, and polarization estimation in the underdetermined case with the nested EMVS array has been addressed in this work. This array can be modeled as a high-dimensional tensor. The property of tensor permutation is introduced to separate the spatial and polarization vectors, which are originally coupled in the steering matrix. This allows us to develop a tensor model of the nested EMVS array with an extended spatial aperture. Furthermore, the decoupling of the spatial and polarization factors enables an efficient computational method for auto-paired parameter estimation, avoiding exhaustive multi-parameter searching. We also investigate the uniqueness condition offered by the proposed SLS-VCPD approach. Besides, the underdetermined stochastic CRB with the nested EMVS array is derived. Numerical examples confirm the superiority of our proposed method.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
ALS:
Alternating least squares
CRB:
Cramér-Rao bound
CPD:
Canonical polyadic decomposition
DOA:
Direction-of-arrival
DOF:
Degree-of-freedom
EMVS:
Electromagnetic vector-sensor
ESPRIT:
Estimation of signal parameters via rotational invariance techniques
HOSVD:
Higher-order singular value decomposition
IID:
Independent and identically distributed
MUSIC:
Multiple signal classification
RMSE:
Root mean square error
SCM:
Sample covariance matrix
SLS:
Structured least square
ULA:
Uniform linear array
VCPD:
Vandermonde constrained CPD
S. Chintagunta, P. Ponnusamy, 2D-DOD and 2D-DOA estimation using the electromagnetic vector sensors. Signal Process. 147:, 163–172 (2018).
H. Hou, X. Mao, Y. Liu, Oblique projection for direction-of-arrival estimation of hybrid completely polarised and partially polarised signals with arbitrary polarimetric array configuration. IET Signal Process. 11:, 893–900 (2017).
C. C. Ko, J. Zhang, A. Nehorai, Separation and tracking of multiple broadband sources with one electromagnetic vector sensor. IEEE Trans. Aerosp. Electron. Syst. 38(3), 1109–1116 (2002).
J. F. Bull, Field probe for measuring vector components of an electromagnetic field. U.S. Patent 5300885A (1994).
G. F. Hatke, Performance analysis of the SuperCART antenna array. MIT Lincoln Lab., Cambridge, MA, Proj. Rep. AST-22 (1992).
A. Nehorai, E. Paldi, Vector-sensor array processing for electromagnetic source localization. IEEE Trans. Signal Process. 42(2), 376–398 (1994).
J. Li, P. Stoica, Efficient parameter estimation of partially polarized electromagnetic waves. IEEE Trans. Signal Process. 42(11), 3114–3125 (1994).
D. Rahamim, J. Tabrikian, R. Shavit, Source localization using vector sensor array in a multipath environment. IEEE Trans. Signal Process. 52(11), 3096–3103 (2004).
S. Miron, N. Le Bihan, J. Mars, Quaternion-MUSIC for vector-sensor array processing. IEEE Trans. Signal Process. 54(4), 1218–1229 (2006).
X. Yuan, Estimating the DOA and the polarization of a polynomial- phase signal using a single polarized vector-sensor. IEEE Trans. Signal Process. 60(3), 1270–1282 (2012).
Q. Cheng, Y. Hua, Performance analysis of the MUSIC and PENCIL-MUSIC algorithms for diversely polarized array. IEEE Trans. Signal Process. 42(11), 3150–3165 (1994).
M. D. Zoltowski, K. T. Wong, ESPRIT-based 2-D direction finding with a sparse uniform array of electromagnetic vector sensors. IEEE Trans. Signal Process. 48(8), 2195–2204 (2000).
K. T. Wong, M. D. Zoltowski, Uni-vector-sensor ESPRIT for multisource azimuth, elevation, and polarization estimation. IEEE Trans. Antennas Propag. 45(10), 1467–1474 (1997).
N. D. Sidiropoulos, R. Bro, G. B. Giannakis, Parallel factor analysis in sensor array processing. IEEE Trans. Signal Process. 48(8), 2377–2388 (2000).
X. Guo, S. Miron, D. Brie, S. Zhu, X. Liao, A CANDECOMP/PARAFAC perspective on uniqueness of DOA estimation using a vector sensor array. IEEE Trans. Signal Process. 59(7), 3475–3481 (2011).
M. Sørensen, L. De Lathauwer, Multiple invariance ESPRIT for nonuniform linear arrays: a coupled canonical polyadic decomposition approach. IEEE Trans. Signal Process. 64(4), 3693–3704 (2016).
M. Sørensen, L. De Lathauwer, Blind signal separation via tensor decomposition with Vandermonde factor: canonical polyadic decomposition. IEEE Trans. Signal Process. 61(22), 5507–5519 (2013).
S. Miron, Y. Song, D. Brie, K. Wong, Multilinear direction finding for sensor-array with multiple scales of invariance. IEEE Trans. Aerosp. Electron. Syst. 51(3), 2057–2070 (2015).
M. Yang, J. Ding, B. Chen, X. Yuan, A multiscale sparse array of spatially spread electromagnetic-vector-sensors for direction finding and polarization estimation. IEEE access. 6:, 9807–9818 (2018).
L. de Lathauwer, B. de Moor, J. Vanderwalle, A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000).
F. Roemer, M. Haardt, G. Del Galdo, Analytical performance assessment of multi-dimensional matrix-and tensor-based ESPRIT-type algorithms. IEEE Trans. Signal Process. 62(10), 2611–2625 (2014).
X. Gong, Z. Liu, Y. Xu, M. L. Ahmad, Direction-of-arrival estimation via twofold mode-projection. Signal Process. 89(5), 831–842 (2009).
P. Forster, G. Ginolhac, M. Boizard, Derivation of the theoretical performance of a Tensor MUSIC algorithm. Signal Process. 129:, 97–105 (2016).
P. Pal, P. P. Vaidyanathan, Nested array: a novel approach to array processing with enhanced degrees of freedom. IEEE Trans. Signal Process. 58(8), 4167–4181 (2010).
P. P. Vaidyanathan, P. Pal, Sparse sensing with co-prime samplers and arrays. IEEE Trans. Signal Process. 59(2), 573–586 (2011).
X. Han, T. Shu, J. He, W. Yu, Polarization-angle-frequency estimation with linear nested vector sensors. IEEE Access. 6:, 36916–36926 (2018).
M. Wang, A. Nehorai, Coarrays, MUSIC, and the Cramér- Rao bound. IEEE Trans. Signal Process. 65(4), 933–946 (2017).
A. Koochakzadeh, P. Pal, Cramér–Rao bounds for underdetermined source localization. IEEE Signal Process. Lett.23(7), 919–923 (2016).
C. -L. Liu, P. P. Vaidyanathan, Cramér-Rao bounds for coprime and other sparse arrays, which find more sources than sensors. Dig. Signal Process. 61:, 43–61 (2017).
K. Han, A. Nehorai, Nested vector-sensor array processing via tensor modeling. IEEE Trans. Signal Process. 62(10), 2542–2553 (2014).
R. Wei, D. Li, J. Zhang, A novel PARAFAC model for processing the nested vector-sensor array. Sensors. 18(11), 3708 (2018).
K. -C. Tan, K. -C. Ho, A. Nehorai, Linear independence of steering vectors of an electromagnetic vector sensor. IEEE Trans. Signal Process. 44(12), 3099–3107 (1996).
B. Ottersten, P. Stoica, R. Roy, Covariance matching estimation techniques for array signal processing applications. Dig. Signal Process. 8(3), 185–210 (1998).
H. L. Van Trees, In Optimum Array Processing (Wiley, New York, 2002).
This work was supported by the Key Program of National Natural Science Foundation of China (61831009, U1713217).
School of Electronics and Information Engineering, Harbin Institute of Technology, Xidazhi Street, Harbin, 150001, China
Ming-Yang Cao & Xingpeng Mao
Key Laboratory of Marine Environmental Monitoring and Information Processing, Ministry of Industry and Information Technology, Xidazhi Street, Harbin, 150001, China
College of Information Engineering, Shenzhen University, Nanhai Road, Shenzhen, 518060, China
Lei Huang
Ming-Yang Cao
Xingpeng Mao
MYC proposed the method and wrote the manuscript. XPM advised on the method and checked the simulations. LH revised the manuscript. The authors read and approved the final manuscript.
Correspondence to Xingpeng Mao.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cao, MY., Mao, X. & Huang, L. Elevation, azimuth, and polarization estimation with nested electromagnetic vector-sensor arrays via tensor modeling. J Wireless Com Network 2020, 153 (2020). https://doi.org/10.1186/s13638-020-01764-8
Nested array
Tensor decomposition
Cramér-Rao bound (CRB)
Radar and Sonar Networks
Advances in Difference Equations
December 2017, 2017:277
The modified two-dimensional Toda lattice with self-consistent sources
Gegenhasi
First Online: 11 September 2017
In this paper, we derive the Grammian determinant solutions to the modified two-dimensional Toda lattice, and then we construct the modified two-dimensional Toda lattice with self-consistent sources via the source generation procedure. We show the integrability of the modified two-dimensional Toda lattice with self-consistent sources by presenting the Casoratian and Grammian structures of its N-soliton solution. It is also demonstrated that the commutativity between the source generation procedure and the Bäcklund transformation is valid for the two-dimensional Toda lattice.
modified two-dimensional Toda lattice equation; source generation procedure; Grammian determinant; Casorati determinant
37K10 37K40
The two-dimensional Toda lattice, which can be regarded as a spatial discretization of the KP equation, takes the following form:
$$ \frac{\partial^{2}}{\partial x \partial s} \ln(V_{n}+1)=V_{n+1}+V_{n-1}-2V _{n}, $$
where \(V_{n}\) denotes \(V(n,x,s)\). We use the above notation throughout the paper. Under the dependent variable transformation
$$ V_{n}=\frac{\partial^{2}}{\partial x \partial s}\ln f_{n}, $$
equation (1) is transformed into the bilinear form [1, 2]:
$$ D_{x}D_{s}f_{n}\cdot f_{n}=2\bigl(e^{D_{n}}f_{n}\cdot f_{n}-f^{2}_{n} \bigr), $$
where the bilinear operators are defined by [2]
$$\begin{aligned}& D_{x}^{m}D_{t}^{n}f\cdot g= \frac{\partial^{m}}{\partial y^{m}}\frac{ \partial^{n}}{\partial s^{n}}f(x+y,t+s)g(x-y,t-s)\bigg|_{s=0,y=0}, \\& e^{D_{n}}f_{n}\cdot g_{n}=f_{n+1}g_{n-1}. \end{aligned}$$
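As an illustrative aside (not part of the original paper), the following sympy sketch implements the bilinear operator exactly as defined above and then checks the bilinear form (3) on the one-soliton tau function \(f_{n}=1+e^{Px+Ws+Qn}\), for which (3) forces the dispersion relation \(PW=4\sinh^{2}(Q/2)\). The helper name `hirota_D` and the symbols \(P\), \(W\), \(Q\), \(N\) are ours, and the code variable `t` plays the role of \(s\).

```python
# Minimal sympy sketch of the Hirota bilinear operator defined above,
# plus a check of the bilinear 2D Toda equation (3) on a one-soliton tau
# function. Illustrative only; symbol and helper names are ours.
import sympy as sp

x, t, y, s = sp.symbols('x t y s')          # t stands in for the s of the text

def hirota_D(f, g, m=0, n=0):
    """D_x^m D_t^n f.g = d^m/dy^m d^n/ds^n f(x+y,t+s) g(x-y,t-s) at y=s=0."""
    expr = f.subs({x: x + y, t: t + s}) * g.subs({x: x - y, t: t - s})
    if m:
        expr = sp.diff(expr, y, m)
    if n:
        expr = sp.diff(expr, s, n)
    return expr.subs({y: 0, s: 0})

# Sanity check: D_x e^{px} . e^{qx} = (p - q) e^{(p+q)x}.
p, q = sp.symbols('p q')
print(sp.simplify(hirota_D(sp.exp(p*x), sp.exp(q*x), m=1) - (p - q)*sp.exp((p + q)*x)))

# One-soliton check of (3): f_n = 1 + e^{P x + W t + Q N} satisfies
# D_x D_s f.f = 2(f_{n+1} f_{n-1} - f^2) iff P W = 4 sinh^2(Q/2).
P, W, Q, N = sp.symbols('P W Q N')
f = 1 + sp.exp(P*x + W*t + Q*N)
lhs = hirota_D(f, f, m=1, n=1)
rhs = 2*(f.subs(N, N + 1)*f.subs(N, N - 1) - f**2)
residual = (lhs - rhs).subs(W, 4*sp.sinh(Q/2)**2/P)
print(sp.simplify(residual.rewrite(sp.exp)))    # -> 0
```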
It is shown in [2, 3] that the two-dimensional Toda lattice equation possesses the following bilinear Bäcklund transformation:
$$\begin{aligned}& D_{x}f_{n+1}\cdot f'_{n}=- \frac{1}{\lambda}f_{n}f'_{n+1}+\nu f _{n+1}f'_{n}, \end{aligned}$$
$$\begin{aligned}& D_{s}f_{n}\cdot f'_{n}=\lambda f_{n+1}f'_{n-1}-\mu f_{n}f'_{n}, \end{aligned}$$
where λ, μ, ν are arbitrary constants. Equations (4)-(5) are transformed into the following nonlinear form:
$$\begin{aligned}& \frac{\partial}{\partial x}u_{n}=(\mu+u_{n}) (v_{n}-v_{n+1}), \end{aligned}$$
$$\begin{aligned}& \frac{\partial}{\partial s}v_{n}=(\nu+v_{n}) (u_{n-1}-u_{n}), \end{aligned}$$
through the dependent variable transformation \(u_{n}=\frac{\partial}{ \partial s}\ln(\frac{f_{n}}{f'_{n}})\), \(v_{n}=-\frac{\partial}{\partial x}\ln(\frac{f_{n}}{f'_{n-1}})\). Equations (4)-(5) or (6)-(7) are called the modified two-dimensional Toda lattice [2, 3]. The solutions \(V_{n}\) of the two-dimensional Toda lattice (1) and \(u_{n}\), \(v_{n}\) of the modified two-dimensional Toda lattice (6)-(7) are connected through a Miura transformation [2].
Soliton equations with self-consistent sources model many important physical processes. For example, the KdV equation with self-consistent sources describes the interaction of long and short capillary-gravity waves [4]. The KP equation with self-consistent sources describes the interaction of a long wave with a short-wave packet propagating on the \(x,y\) plane at an angle to each other [5, 6]. Since the pioneering work of Mel'nikov [7], many soliton equations with self-consistent sources have been studied via inverse scattering methods [7, 8, 9, 10, 11], Darboux transformation methods [12, 13, 14, 15, 16, 17], and Hirota's bilinear method together with the Wronskian technique [18, 19, 20, 21, 22, 23, 24].
In [25], a new algebraic method, called the source generation procedure, is proposed to construct and solve the soliton equations with self-consistent sources both in continuous and discrete cases. The source generation procedure has been successfully applied to many \((2+1)\)-dimensional continuous and discrete soliton equations such as the Ishimori-I equation [26], the semi-discrete Toda equation [27], the modified discrete KP equation [28], and others. The purpose of this paper is to construct the modified two-dimensional Toda lattice with self-consistent sources via the source generation procedure and clarify the determinant structure of N-soliton solution for the modified two-dimensional Toda lattice with self-consistent sources.
The paper is organized as follows. In Section 2, we derive the Grammian solution to the modified two-dimensional Toda lattice equation and then construct the modified two-dimensional Toda lattice equation with self-consistent sources. In Section 3, the Casoratian formulation of the N-soliton solution for the modified two-dimensional Toda lattice with self-consistent sources is given. Section 4 is devoted to showing that the commutativity of the source generation procedure and the Bäcklund transformation is valid for the two-dimensional Toda lattice. We end this paper with a conclusion and discussion in Section 5.
2 The modified two-dimensional Toda lattice equation with self-consistent sources
The N-soliton solution in Casoratian form for the modified two-dimensional Toda lattice equation (4)-(5) is given in [2] and [29]. In this section, we first derive the Grammian formulation of the N-soliton solution for the modified two-dimensional Toda lattice equation, and then we construct the modified two-dimensional Toda lattice equation with self-consistent sources via the source generation procedure.
If we choose \(\lambda=1\), \(\nu=\mu=0\), then the modified two-dimensional Toda lattice (4)-(5) becomes
$$\begin{aligned}& \bigl(D_{x}e^{\frac{1}{2}D_{n}}+e^{-\frac{1}{2}D_{n}}\bigr)f_{n} \cdot f'_{n}=0, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{s}-e^{D_{n}}\bigr)f_{n}\cdot f'_{n}=0. \end{aligned}$$
The modified two-dimensional Toda lattice (8)-(9) has the following Grammian determinant solution:
$$\begin{aligned}& f_{n}=\det \biggl\vert c_{ij}+(-1)^{n} \int_{-\infty}^{x}\phi_{i}(n)\psi _{j}(-n)\,dx \biggr\vert _{1\leq i,j \leq N}= \vert M \vert , \end{aligned}$$
$$\begin{aligned}& f'_{n}(n,x,s)= \left \vert \textstyle\begin{array}{@{}c@{\quad}c@{}} M & \Phi(n) \\ \Psi(n)^{T} & -\phi_{N+1}(n) \end{array}\displaystyle \right \vert , \end{aligned}$$
$$\begin{aligned}& \Phi(n)=\bigl(-\phi_{1}(n),\ldots,-\phi_{N}(n) \bigr)^{T}, \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \Psi(n)&=\biggl(c_{N+1,1}+(-1)^{n} \int_{-\infty}^{x}\phi_{N+1}(n)\psi _{1}(-n)\,dx,\ldots, \\ &\quad c_{N+1,N}+ \int_{-\infty}^{x}(-1)^{n}\phi_{N+1}(n) \psi_{N}(-n)\,dx\biggr)^{T}, \end{aligned} \end{aligned}$$
in which the \(\phi_{i}(n)\) denote \(\phi_{i}(n,x,s)\) and the \(\psi_{i}(-n)\) denote \(\psi_{i}(-n,x,s)\) for \(i=1,\ldots,N+1\). In addition, \(c_{ij}\) (\(1\leq i,j \leq N+1\)) are arbitrary constants and \(\phi_{i}(n)\), \(\psi_{i}(-n)\) (\(i=1,\ldots,N+1\)) satisfy the following dispersion relations:
$$\begin{aligned}& \frac{\partial\phi_{i}(n)}{\partial x}= \phi_{i}(n+1),\quad \quad\frac{ \partial\psi_{i}(-n)}{\partial x}= \psi_{i}(-n+1), \end{aligned}$$
$$\begin{aligned}& \frac{\partial\phi_{i}(n)}{\partial s}= -\phi_{i}(n-1),\quad \quad\frac{ \partial\psi_{i}(-n)}{\partial s}= - \psi_{i}(-n-1). \end{aligned}$$
The Grammian determinants \(f_{n}\) in (10) and \(f'_{n}\) in (11) can be expressed in terms of the following Pfaffians:
$$\begin{aligned}& f_{n}=\bigl(a_{1},\ldots,a_{N},a^{*}_{N}, \ldots,a^{*}_{1}\bigr)=(\star), \end{aligned}$$
$$\begin{aligned}& f'_{n}=\bigl(a_{1},\ldots,a_{N+1},d^{*}_{0},a_{N}^{*}, \ldots,a^{*} _{1}\bigr)=\bigl(a_{N+1},d^{*}_{0}, \star\bigr), \end{aligned}$$
where the Pfaffian elements are defined by
$$\begin{aligned}& \bigl(a_{i},a^{*}_{j}\bigr)_{n}=c_{ij}+(-1)^{n} \int_{-\infty}^{x}\phi_{i}(n)\psi_{j}(-n)\,dx, \end{aligned}$$
$$\begin{aligned}& \bigl(d^{*}_{m},a_{i}\bigr)= \phi_{i}(n+m),\bigl(d_{m},a^{*}_{j} \bigr)=(-1)^{n+m}\psi _{j}(-n+m), \end{aligned}$$
$$\begin{aligned}& (a_{i},a_{j})_{n}=\bigl(a^{*}_{i},a^{*}_{j} \bigr)_{n}=(d_{m},d_{k})=\bigl(d_{m},d ^{*}_{k}\bigr)=\bigl(d^{*}_{m},d^{*}_{k} \bigr)=0, \end{aligned}$$
in which \(i,j=1,\ldots,N+1\) and k, m are integers.
Using the dispersion relations (14)-(15), we can compute the following differential and difference formulas for the Pfaffians (16)-(17):
$$\begin{aligned}& f_{n+1,x}=\bigl(d_{-1},d^{*}_{1}, \star\bigr), \qquad f_{n+1}=(\star)+\bigl(d_{-1},d^{*}_{0}, \star\bigr), \end{aligned}$$
$$\begin{aligned}& f_{ns}=\bigl(d_{-1},d^{*}_{-1}, \star\bigr),\quad\quad f'_{nx}=\bigl(a_{N+1},d^{*}_{1}, \star \bigr), \quad\quad f'_{n-1}=\bigl(a_{N+1},d^{*}_{-1}, \star\bigr) \end{aligned}$$
$$\begin{aligned}& f'_{n+1}=\bigl(a_{N+1},d^{*}_{1}, \star\bigr)+ \bigl(a_{N+1},d_{-1},d^{*}_{o},d ^{*}_{1},\star\bigr), \end{aligned}$$
$$\begin{aligned}& f'_{ns}=\bigl(a_{N+1},d_{-1},d^{*}_{-1},d^{*}_{0}, \star\bigr)-\bigl(a_{N+1},d^{*} _{-1},\star\bigr). \end{aligned}$$
Substituting equations (21)-(24) into the modified two-dimensional Toda lattice (8)-(9) gives the following two Pfaffian identities:
$$\begin{aligned}& \bigl(d_{-1},d^{*}_{1},\star\bigr) \bigl(a_{N+1},d^{*}_{0},\star\bigr)- \bigl(d_{-1},d^{*} _{0},\star\bigr) \bigl(a_{N+1},d^{*}_{1},\star\bigr)+(\star) \bigl(a_{N+1},d_{-1},d^{*} _{0},d^{*}_{1}, \star\bigr)=0, \\& \bigl(d_{-1},d^{*}_{0},\star\bigr) \bigl(a_{N+1},d^{*}_{-1},\star\bigr)- \bigl(d_{-1},d^{*} _{-1},\star\bigr) \bigl(a_{N+1},d^{*}_{0},\star\bigr)+(\star) \bigl(a_{N+1},d_{-1},d^{*} _{-1},d^{*}_{0}, \star\bigr)=0. \end{aligned}$$
In order to construct the modified two-dimensional Toda lattice with self-consistent sources, we change the Grammian determinant solutions (10)-(11) into the following form:
$$\begin{aligned}& f(n,x,s)=\det \biggl\vert \gamma_{ij}(s)+(-1)^{n} \int_{-\infty}^{x}\phi_{i}(n)\psi_{j}(-n)\,dx \biggr\vert _{1\leq i,j \leq N}= \vert F \vert , \end{aligned}$$
$$\begin{aligned}& f'_{n}(n,x,s)= \left \vert \textstyle\begin{array}{@{}c@{\quad}c@{}} F & \Phi(n) \\ \Psi(n)^{T} & -\phi_{N+1}(n) \end{array}\displaystyle \right \vert , \end{aligned}$$
where Nth column vectors \(\Phi(n)\), \(\Psi(n)\) are given in (12)-(13) and \(\phi_{i}(n)\), \(\psi_{i}(-n)\) (\(i=1,\ldots, {N+1}\)) also satisfy the dispersion relations (14)-(15). In addition, \(\gamma_{ij}(s)\) satisfies
$$\begin{aligned} \gamma_{ij}(s) = \textstyle\begin{cases} \gamma_{i}(s), & i=j\text{ and } 1\leq i \leq K \leq N, \\ c_{ij}, & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
with \(\gamma_{i}(s)\) being an arbitrary function of s and K being a positive integer.
The Grammian determinants \(f_{n}\) in (25) and \(f'_{n}\) in (26) can be expressed by means of the following Pfaffians:
$$\begin{aligned}& f_{n}=\bigl(1,\ldots,N,N^{*},\ldots,1^{*} \bigr)=(\cdot), \end{aligned}$$
$$\begin{aligned}& f'_{n}=\bigl(1,\ldots,N+1,d^{*}_{0},N^{*}, \ldots,1^{*}\bigr)=\bigl(N+1,d^{*} _{0},\cdot \bigr), \end{aligned}$$
$$\begin{aligned}& \bigl(i,j^{*}\bigr)_{n}=\gamma_{ij}(s)+(-1)^{n} \int_{-\infty}^{x}\phi_{i}(n)\psi_{j}(-n)\,dx,\quad\quad \bigl(i^{*},j^{*} \bigr)_{n}=0, \end{aligned}$$
$$\begin{aligned}& \bigl(d^{*}_{m},i\bigr)=\phi_{i}(n+m),\quad\quad \bigl(d_{m},j^{*}\bigr)=(-1)^{n+m}\psi _{j}(-n+m),\quad\quad (i,j)_{n}=0, \end{aligned}$$
$$\begin{aligned}& (d_{m},i)=\bigl(d^{*}_{m},j^{*} \bigr)=(d_{m},d_{k})=\bigl(d_{m},d^{*}_{k} \bigr)=\bigl(d^{*} _{m},d^{*}_{k} \bigr)=0, \end{aligned}$$
It is easy to show that the functions \(f(n,x,s)\), \(f'(n,x,s)\) given in (28)-(29) still satisfy equation (8). However, they no longer satisfy (9); instead, they satisfy the following new equation:
$$ D_{s}f_{n}\cdot f'_{n}-f_{n+1}f'_{n-1}=- \sum_{j=1}^{K}g_{n}^{(j)}h _{n}^{(j)}, $$
where the new functions \(g_{n}^{(j)}\), \(h_{n}^{(j)}\) are given by
$$\begin{aligned}& g_{n}^{(j)}=\sqrt{\dot{\gamma}_{j}(t)}\bigl(1, \ldots,N,d^{*}_{0},N ^{*},\ldots, \hat{j^{*}},\ldots,1^{*}\bigr), \end{aligned}$$
$$\begin{aligned}& h_{n}^{(j)}=\sqrt{\dot{\gamma}_{j}(t)}\bigl(1, \ldots,\hat{j},\ldots ,N+1,N^{*},\ldots,1^{*}\bigr), \end{aligned}$$
where \(j=1,\ldots,K\) and the dot denotes the derivative of \(\gamma_{j}\) with respect to its argument. Furthermore, we can show that \(f_{n}\), \(f'_{n}\), \(g_{n}^{(j)}\), \(h_{n}^{(j)}\) (\(j=1,\ldots,K\)) satisfy the following 2K equations:
$$\begin{aligned}& \bigl(D_{x}e^{\frac{1}{2}D_{n}}+e^{-\frac{1}{2}D_{n}}\bigr)f\cdot g_{n}^{(j)}=0, \quad j=1,\ldots,K, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}e^{\frac{1}{2}D_{n}}+e^{-\frac{1}{2}D_{n}}\bigr)h_{n}^{(j)} \cdot f'_{n}=0, \quad j=1,\ldots,K. \end{aligned}$$
In fact, we have the following differential and difference formulas for \(f_{n}\) in (28), \(f'_{n}\) in (29) and \(g_{n}^{(j)}\), \(h _{n}^{(j)}\) (\(j=1,\ldots,K\)) by employing the dispersion relations (14)-(15):
$$\begin{aligned}& \begin{aligned}[b] f_{ns}&=\bigl(d_{-1},d^{*}_{-1}, \cdot\bigr) \\ &\quad{} +\sum_{j=1}^{K}\dot{ \gamma}_{j}(s) \bigl(1,\ldots,\hat{i},\ldots,N,N^{*}, \ldots,\hat{i^{*}},\ldots,1^{*}\bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f'_{ns}&=\bigl(N+1,d_{-1},d^{*}_{-1},d^{*}_{0}, \cdot\bigr)-\bigl(N+1,d^{*}_{-1}, \cdot\bigr) \\ &\quad {}+\sum_{i=1}^{K}\dot{ \gamma}_{i}(s) \bigl(N+1,d^{*}_{0},1,\ldots, \hat{i}, \ldots,N,N^{*},\ldots,\hat{i^{*}}, \ldots,1^{*}\bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& f_{n+1}=(\cdot)+\bigl(d_{-1},d^{*}_{0}, \cdot\bigr),\quad\quad f'_{n-1}=\bigl(N+1,d^{*}_{-1}, \cdot\bigr), \end{aligned}$$
$$\begin{aligned}& g^{(j)}_{n-1}=\sqrt{\dot{\gamma}_{j}(t)}\bigl(1, \ldots,N,d^{*}_{-1},N ^{*},\ldots, \hat{j^{*}},\ldots,1^{*}\bigr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] g_{n-1,x}^{(j)}&=\sqrt{\dot{\gamma}_{j}(t)}\bigl[ \bigl(1,\ldots,N,d^{*} _{0},N^{*},\ldots, \hat{j^{*}},\ldots,1^{*}\bigr) \\ &\quad{} +\bigl(1,\ldots,N,d_{0},d^{*}_{0},d^{*}_{-1},N^{*}, \ldots,\hat{j^{*}}, \ldots,1^{*}\bigr)\bigr], \end{aligned} \end{aligned}$$
$$\begin{aligned}& f_{n-1}=(\cdot)-\bigl(d_{0},d^{*}_{-1}, \cdot\bigr),\quad\quad f_{nx}=\bigl(d_{0},d^{*}_{0}, \cdot\bigr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] h^{(j)}_{n+1}&=\sqrt{\dot{\gamma}_{j}(t)}\bigl[ \bigl(1,\ldots,\hat{j}, \ldots,N+1,N^{*},\ldots,1^{*}\bigr) \\ &\quad{} +\bigl(1,\ldots,\hat{j},\ldots,N+1,d_{-1},d^{*}_{0}N^{*}, \ldots,1^{*}\bigr)\bigr], \end{aligned} \end{aligned}$$
$$\begin{aligned}& h^{(j)}_{n+1,x}=\sqrt{\dot{\gamma}_{j}(t)} \bigl(1,\ldots,\hat{j}, \ldots,N+1,d_{-1},d^{*}_{1},N^{*}, \ldots,1^{*}\bigr), \end{aligned}$$
$$\begin{aligned}& f'_{nx}=\bigl(N+1,d^{*}_{1}, \cdot\bigr), \end{aligned}$$
$$\begin{aligned}& f'_{n+1}=\bigl(N+1,d^{*}_{1}, \cdot\bigr)+ \bigl(N+1,d_{-1},d^{*}_{0},d^{*}_{1}, \cdot\bigr), \end{aligned}$$
where \(\hat{\ }\) indicates deletion of the letter under it.
Substitution of equations (38)-(47) into equations (33), (36)-(37) gives the following Pfaffian identities:
$$\begin{aligned}& \bigl[\bigl(d_{-1},d^{*}_{-1},\cdot\bigr) \bigl(N+1,d^{*}_{0},\cdot\bigr)-(\cdot) \bigl(N+1,d _{-1},d^{*}_{-1},d^{*}_{0}, \cdot\bigr)-\bigl(d_{-1},d^{*}_{0},\cdot\bigr) \bigl(N+1,d ^{*}_{-1},\cdot\bigr)\bigr], \\& \quad{} +\sum_{j=1}^{K}\dot{ \gamma}_{j}(s)\bigl[\bigl(1,\ldots,N+1,d^{*}_{0},N^{*}, \ldots,1^{*}\bigr) \bigl(1,\ldots,\hat{i},\ldots,N,N^{*}, \ldots,\hat{i^{*}}, \ldots,1^{*}\bigr) \\& \quad{} -(\cdot) \bigl(1,\ldots,\hat{i},\ldots,N+1,d^{*}_{0},N^{*}, \ldots,i^{*}, \ldots,1^{*}\bigr) \\& \quad{} +\bigl(1,\ldots,N,d^{*}_{0},N^{*}, \ldots,\hat{i^{*}},\ldots,1^{*}\bigr) \bigl(1, \ldots, \hat{i},\ldots,N+1,N^{*},\ldots,1^{*}\bigr)\bigr]=0, \\& \bigl(d_{0},d^{*}_{0},\cdot\bigr) \bigl(1, \ldots,N,d^{*}_{-1},N^{*},\cdot, \hat{j^{*}},\ldots,1^{*}\bigr) \\& \quad{} -(\cdot) \bigl(1,\ldots,N,d_{0},d^{*}_{0},d^{*}_{-1},N^{*}, \cdot, \hat{j^{*}},\ldots,1^{*}\bigr) \\& \quad{} -\bigl(d_{0},d^{*}_{-1},\cdot \bigr) \bigl(1,\ldots,N,d^{*}_{0},N^{*},\cdot, \hat{j^{*}},\ldots,1^{*}\bigr)=0, \end{aligned}$$
$$\begin{aligned}& \bigl(N+1,d^{*}_{0},\cdot\bigr) \bigl(1,\ldots,\hat{i}, \ldots,N+1,d_{-1},d^{*} _{1},N^{*}, \ldots,1^{*}\bigr) \\& \quad{} -\bigl(N+1,d^{*}_{1},\cdot\bigr) \bigl(1, \ldots,\hat{i},\ldots,N+1,d_{-1},d^{*} _{0},N^{*}, \ldots,1^{*}\bigr) \\& \quad{} +\bigl(N+1,d_{-1},d^{*}_{0},d^{*}_{1}, \cdot\bigr) \bigl(1,\ldots,\hat{i},\ldots,N+1,N ^{*}, \ldots,1^{*}\bigr)=0, \end{aligned}$$
respectively. Therefore, equations (8), (33), (36)-(37) constitute the modified two-dimensional Toda lattice with self-consistent sources, and this system possesses the Grammian determinant solution (28)-(29), (34)-(35).
Through the dependent variable transformation
$$ u_{n}=\frac{f_{n+1}f'_{n-1}}{f_{n}f'_{n}}, \quad\quad v_{n}=-\frac{\partial}{ \partial x}\ln \biggl(\frac{f_{n}}{f'_{n-1}}\biggr),\quad\quad G_{n}^{(j)}= \frac{g_{n}^{(j)}}{f _{n}},\quad\quad H_{n}^{(j)}=\frac{h_{n}^{(j)}}{f'_{n}}, $$
the bilinear modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) can be transformed into the following nonlinear form:
$$\begin{aligned}& \frac{\partial}{\partial x}u_{n}=u_{n}(v_{n}-v_{n+1}), \end{aligned}$$
$$\begin{aligned}& \frac{\partial}{\partial s}v_{n}=v_{n}(u_{n-1}-u_{n})+v_{n} \sum_{j=1}^{K}\bigl[u_{n}G_{n}^{(j)}H_{n}^{(j)}-u_{n-1}G_{n-1}^{(j)}H_{n-1} ^{(j)}\bigr], \end{aligned}$$
$$\begin{aligned}& \frac{\partial}{\partial x}G_{n-1}^{(j)}+G_{n}^{(j)}u_{n}v_{n}=0, \quad j=1,\ldots,K, \end{aligned}$$
$$\begin{aligned}& \frac{\partial}{\partial x}H_{n+1}^{(j)}+H_{n}^{(j)}u_{n}v_{n+1}=0, \quad j=1,\ldots,K. \end{aligned}$$
When we take \(G_{n}^{(j)}=H_{n}^{(j)}=0\), \(j=1,\ldots,K\) in (49)-(52), the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52) is reduced to the nonlinear modified two-dimensional Toda lattice (6)-(7) with \(\lambda=1\), \(\nu=\mu=0\).
If we choose
$$ \begin{gathered} \phi_{i}(n)=e^{\xi_{i}},\quad\quad \psi_{i}(-n)=(-1)^{n}e^{\eta_{i}}, \\ \xi _{i}=e^{q_{i}}x+q_{i}n-e^{-q_{i}}t,\quad\quad \eta_{i}=-e^{Q_{i}}x-Q_{i}n+e^{-Q _{i}}t, \end{gathered} $$
where \(i=1,2,\ldots,N+1\) in the Grammian determinants (25)-(26), (34)-(35), then we obtain the N-soliton solution of the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37). Here \(q_{i}\), \(Q_{i}\) (\(i=1,2, \ldots,N+1\)) are arbitrary constants.
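As a quick illustrative check (ours, not the paper's), the following sympy lines verify that this exponential choice satisfies the dispersion relations (14)-(15), with the \(t\) appearing in \(\xi_{i}\) identified with the evolution variable \(s\) of (15); the helper `shift` is an assumption of ours.

```python
# Illustrative sympy check that phi(n) = e^{xi}, xi = e^q x + q n - e^{-q} t,
# satisfies the dispersion relations (14)-(15) (with t playing the role of s).
import sympy as sp

x, t, n, q = sp.symbols('x t n q')
phi = sp.exp(sp.exp(q)*x + q*n - sp.exp(-q)*t)

shift = lambda expr, k: expr.subs(n, n + k)            # n -> n + k

print(sp.simplify(sp.diff(phi, x) - shift(phi, 1)))    # d phi/dx - phi(n+1) -> 0
print(sp.simplify(sp.diff(phi, t) + shift(phi, -1)))   # d phi/ds + phi(n-1) -> 0
```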
For example, if we take \(K=1\), \(N=1\) and
$$ \phi_{1}(n)=e^{\xi_{1}},\quad\quad \phi_{2}(n)=e^{\xi_{2}},\quad\quad \psi_{1}(-n)=(-1)^{n}e^{\eta _{1}},\quad\quad \gamma_{1}(t)= \frac{e^{2a(t)}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad c_{21}=0, $$
where \(a(t)\) is an arbitrary function of t, then we have
$$\begin{aligned}& f_{n}(x,n,t)=\frac{e^{2a(t)}}{e^{q_{1}}-e^{Q_{1}}}\bigl(1+e^{\xi_{1}+\eta _{1}-2a(t)}\bigr), \end{aligned}$$
$$\begin{aligned}& f'_{n}(x,n,t)=-\frac{e^{2a(t)+\xi_{2}}}{e^{q_{1}}-e^{Q_{1}}}\biggl(1+ \frac{e ^{q_{2}}-e^{q_{1}}}{e^{q_{2}}-e^{Q_{1}}}e^{\xi_{1}+\eta_{1}-2a(t)}\biggr), \end{aligned}$$
$$\begin{aligned}& g^{(1)}_{n}(x,n,t)=-\sqrt{ \frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q1}}}e^{\xi_{1}+a(t)}, \end{aligned}$$
$$\begin{aligned}& h^{(1)}_{n}(x,n,t)= \sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q1}}} \frac{1}{e^{q_{2}}-e ^{Q_{1}}}e^{\xi_{2}-\eta_{1}+a(t)}. \end{aligned}$$
Therefore, the one-soliton solution of the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52) is given by
$$\begin{aligned}& u_{n}(x,n,t)=\frac{e^{-q_{2}}(1+e^{q_{1}-Q_{1}}e^{\xi_{1}+\eta _{1}-2a(t)})(1+\frac{e ^{q_{2}}-e^{q_{1}}}{e^{q_{2}}-e^{Q_{1}}}e^{Q_{1}-q_{1}}e^{\xi_{1}+\eta _{1}-2a(t)})}{(1+e^{\xi_{1}+\eta_{1}-2a(t)})(1+\frac{e^{q_{2}}-e^{q _{1}}}{e^{q_{2}}-e^{Q_{1}}}e^{\xi_{1}+\eta_{1}-2a(t)})}, \end{aligned}$$
$$\begin{aligned}& v_{n}(x,n,t)=-\frac{\partial}{\partial x}\ln\biggl(\frac{1+e^{\xi_{1}+ \eta_{1}-2a(t)}}{-e^{\xi_{2}}(1+\frac{e^{q_{2}}-e^{q_{1}}}{e^{q_{2}}-e ^{Q_{1}}}e^{Q_{1}-q_{1}}e^{\xi_{1}+\eta_{1}-2a(t)})}\biggr), \end{aligned}$$
$$\begin{aligned}& G^{(1)}_{n}(x,n,t)=-\sqrt{2\dot{a}(t) \bigl(e^{q_{1}}-e^{Q1} \bigr)}\frac{e ^{\xi_{1}-a(t)}}{1+e^{\xi_{1}+\eta_{1}-2a(t)}}, \end{aligned}$$
$$\begin{aligned}& H^{(1)}_{n}(x,n,t)=\frac{-\sqrt{2\dot{a}(t)(e^{q_{1}}-e^{Q1})}}{e ^{q_{2}}-e^{Q_{1}}}\frac{e^{-\eta_{1}-a(t)}}{1+\frac{e^{q_{2}}-e^{q _{1}}}{e^{q_{2}}-e^{Q_{1}}}e^{\xi_{1}+\eta_{1}-2a(t)}}. \end{aligned}$$
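For readers who want to see the profile, here is a small numerical sketch (ours) that evaluates the one-soliton \(u_{n}\) of (59) along the lattice; the parameter values and the constant choice \(a(t)=0\) are purely illustrative assumptions.

```python
# Numeric evaluation of the one-soliton u_n in (59). Parameter values and
# a(t) = 0 are illustrative assumptions, not taken from the paper.
import numpy as np

q1, Q1, q2 = 0.8, -0.5, 1.5          # hypothetical soliton parameters
a, x, t = 0.0, 0.0, 0.0
n = np.arange(-15, 16)

E = np.exp((np.exp(q1) - np.exp(Q1))*x + (q1 - Q1)*n
           + (np.exp(-Q1) - np.exp(-q1))*t - 2*a)
kappa = (np.exp(q2) - np.exp(q1)) / (np.exp(q2) - np.exp(Q1))

u = (np.exp(-q2) * (1 + np.exp(q1 - Q1)*E) * (1 + kappa*np.exp(Q1 - q1)*E)
     / ((1 + E) * (1 + kappa*E)))

for ni, ui in zip(n, u):
    print(f'n = {ni:3d}   u_n = {ui:.6f}')   # flat background with a localized bump
```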
If we take \(K=1\), \(N=2\) and
$$\begin{aligned}& \phi_{1}(n)=e^{\xi_{1}},\quad\quad \phi_{2}(n)=e^{\xi_{2}},\quad\quad \phi_{3}(n)=e^{\xi _{3}},\quad\quad \psi_{1}(-n)=(-1)^{n}e^{\eta_{1}},\quad\quad \psi_{2}(-n)=(-1)^{n}e^{\eta_{2}}, \\& \gamma_{1}(t)=\frac{e^{2a(t)}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad \gamma _{2}(t)=\frac{1}{e^{q_{2}}-e^{Q_{2}}},\quad\quad c_{12}=0,\quad\quad c_{21}=0,\quad\quad c_{31}=0, \\& c_{32}=0, \end{aligned}$$
we derive
$$\begin{aligned}& \begin{aligned}[b] f_{n}(x,n,t)&=\frac{e^{2a(t)}}{(e^{q_{1}}-e^{Q_{1}})(e^{q_{2}}-e^{Q _{2}})}\biggl(1+e^{\xi_{1}+\eta_{1}-2a(t)}+e^{\xi_{2}+\eta_{2}} \\ &\quad{} +\frac{(e^{q_{1}}-e^{q_{2}})(e^{Q_{1}}-e^{Q_{2}})}{(e^{q_{1}}-e^{Q _{2}})(e^{Q_{1}}-e^{q_{2}})}e^{\xi_{1}+\eta_{1}+\xi_{2}+\eta_{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f'_{n}(x,n,t)&=-\frac{e^{\xi_{3}+2a(t)}}{(e^{q_{1}}-e^{Q_{1}})(e^{q _{2}}-e^{Q_{2}})}\biggl(1+ \frac{e^{q_{3}}-e^{q_{1}}}{e^{q_{3}}-e^{Q_{1}}}e ^{\xi_{1}+\eta_{1}-2a(t)}+\frac{e^{q_{3}}-e^{q_{2}}}{e^{q_{3}}-e^{Q _{2}}}e^{\xi_{2}+\eta_{2}} \\ &\quad{}+\frac{(e^{q_{1}}-e^{q_{2}})(e^{Q_{2}}-e^{Q_{1}})(e^{q_{3}}-e^{q_{2}})(e ^{q_{3}}-e^{q_{1}})}{(e^{q_{1}}-e^{Q_{2}})(e^{q_{2}}-e^{Q_{1}})(e^{q _{3}}-e^{Q_{2}})(e^{q_{3}}-e^{Q_{1}})}e^{\xi_{1}+\eta_{1}+\xi_{2}+\eta _{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& g^{(1)}_{n}(x,n,t)=\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}} \frac{e^{\xi_{1}+a(t)}}{e^{q_{2}}-e^{Q_{2}}}\biggl(1+\frac{e^{q_{1}}-e ^{q_{2}}}{e^{q_{1}}-e^{Q_{2}}}e^{\xi_{2}+\eta_{2}}\biggr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] h^{(1)}_{n}(x,n,t)&=-\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}} \frac{e^{\xi_{3}+\eta_{1}+a(t)}}{(e^{q_{2}}-e^{Q_{2}})(e^{q _{3}}-e^{Q_{1}})} \\ &\quad{}\times \biggl(1+ \frac{(e^{q_{2}}-e^{q_{3}})(e^{Q_{1}}-e^{Q_{2}})}{(e^{Q_{2}}-e^{q _{3}})(e^{Q_{1}}-e^{q_{2}})}e^{\xi_{2}+\eta_{2}}\biggr) \end{aligned} . \end{aligned}$$
Substituting functions (63)-(66) into the dependent variable transformations (48), we obtain the two-soliton solution of the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52).
3 Casorati determinant solution to the modified two-dimensional Toda lattice equation with self-consistent sources
In Section 2, we showed that the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) possesses the Grammian determinant solution (25), (26), (34), (35). In this section, we derive the Casoratian formulation of the N-soliton solution for the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37).
The modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) has the following Casorati determinant solution:
$$\begin{aligned}& f_{n}=\det \bigl\vert \psi_{i}(n+j-1) \bigr\vert _{1\leq i,j \leq N}=(d_{0}, \ldots,d_{N-1},N,\ldots,1), \end{aligned}$$
$$\begin{aligned}& f'_{n}=\det \bigl\vert \psi_{i}(n+j-1) \bigr\vert _{1\leq i,j \leq N+1}=(d _{0},\ldots,d_{N},N+1, \ldots,1), \end{aligned}$$
$$\begin{aligned}& g^{(j)}_{n}=\sqrt{\dot{\gamma}_{j}(t)}(d_{0}, \ldots,d_{N},N, \ldots,1,\alpha_{j}), \end{aligned}$$
$$\begin{aligned}& h^{(j)}_{n}=\sqrt{\dot{\gamma}_{j}(t)}(d_{0}, \ldots,d_{N-1},N+1, \ldots,\hat{j},\ldots,1), \end{aligned}$$
where \(\psi_{i}(n+m)=\phi_{i1}(n+m)+(-1)^{i-1}C_{i}(s)\phi_{i2}(n+m)\) (\(m=0, \ldots,N\)) and
$$\begin{aligned} C_{i}(s) = \textstyle\begin{cases} \gamma_{i}(s), & 1\leq i \leq K \leq N+1, \\ \gamma_{i}, & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
with \(\gamma_{i}(s)\) being an arbitrary function of s and K, N being positive integers. In addition, \(\phi_{i1}(n)\), \(\phi_{i2}(n)\) satisfy the following dispersion relations:
$$\begin{aligned} \frac{\partial\phi_{ij}(n)}{\partial x}= \phi_{ij}(n+1),\qquad\frac{ \partial\phi_{ij}(n)}{\partial s}= - \phi_{ij}(n-1), \quad j=1,2, \end{aligned}$$
and the Pfaffian elements are defined by
$$\begin{aligned}& (d_{m},i)=\psi_{i}(n+m), \qquad (d_{m}, \alpha_{i})=\phi_{i2}(n+m), \end{aligned}$$
$$\begin{aligned}& (d_{m},d_{l})=(i,j)=0, \qquad (\alpha_{i},j)=( \alpha_{i},\alpha_{j})=0, \end{aligned}$$
in which \(i,j=1,\ldots,N+1\) and m, l are integers.
We can derive the following dispersion relations for \(\psi_{i}(n)\) (\(i=1, \ldots,N+1\)) from equations (72):
$$\begin{aligned}& \frac{\partial\psi_{i}(n)}{\partial x}= \psi_{i}(n+1), \end{aligned}$$
$$\begin{aligned}& \frac{\partial\psi_{i}(n)}{\partial s}= -\psi_{i}(n-1)+(-1)^{i-1} \dot{C}_{i}(s)\phi_{i2}(n). \end{aligned}$$
Applying the dispersion relations (75)-(76), we can calculate the following differential and difference formulas for the Casorati determinants (67)-(70):
$$\begin{aligned}& f_{n+1,x}=(d_{1},\ldots,d_{N-1},d_{N+1},N, \ldots,1), \end{aligned}$$
$$\begin{aligned}& f_{n+1}=(d_{1},\ldots,d_{N},N, \ldots,1),\quad\quad f_{n-1}=(d_{-1},\ldots,d _{N-2},N, \ldots,1) \end{aligned}$$
$$\begin{aligned}& f'_{nx}=(d_{0},\ldots,d_{N-1},d_{N+1},N+1, \ldots,1), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f_{n,s}&=-(d_{-1},d_{1},\ldots,d_{N-1},N, \ldots,1) \\ &\quad{} +\sum_{j=1}^{K}\dot{ \gamma}_{j}(t) (d_{0},\ldots,d_{N-1},N,\ldots, \hat{j},\ldots,1,\alpha_{j}), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f'_{n,s}&=-(d_{-1},d_{1}, \ldots,d_{N},N+1,\ldots,1) \\ &\quad{} +\sum_{j=1}^{K}\dot{ \gamma}_{j}(t) (d_{0},\ldots,d_{N},N+1,\ldots, \hat{j},\ldots,1,\alpha_{j}), \end{aligned} \end{aligned}$$
$$\begin{aligned}& f'_{n+1}=(d_{1},\ldots,d_{N+1},N+1, \ldots,1), \end{aligned}$$
$$\begin{aligned}& f'_{n-1}=(d_{-1},d_{1}, \ldots,d_{N-1},N+1,\ldots,1), \end{aligned}$$
$$\begin{aligned}& g^{(j)}_{n}=\sqrt{\dot{\gamma}_{j}(t)}(d_{-1}, \ldots,d_{N},N, \ldots,1,\alpha_{j}), \end{aligned}$$
$$\begin{aligned}& h^{(j)}_{n+1}=\sqrt{\dot{\gamma}_{j}(t)}(d_{1}, \ldots,d_{N},N+1, \ldots,\hat{j},\ldots,1), \end{aligned}$$
$$\begin{aligned}& f_{nx}=(d_{0},\ldots,d_{N-2},d_{N},N, \ldots,1), \end{aligned}$$
$$\begin{aligned}& g^{(j)}_{n,x}=\sqrt{\dot{\gamma}_{j}(t)}(d_{-1}, \ldots,d_{N-2},d _{N},N,\ldots,1,\alpha_{j}), \end{aligned}$$
$$\begin{aligned}& h^{(j)}_{n+1,x}=\sqrt{\dot{\gamma}_{j}(t)}(d_{1}, \ldots,d_{N-1},d _{N+1},N+1,\ldots,\hat{j},\ldots,1). \end{aligned}$$
By substituting equations (77)-(88) into the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37), we obtain the following Pfaffian identities, respectively:
$$\begin{aligned}& (d_{1},\ldots,d_{N-1},d_{N+1},N,\ldots,1) (d_{0},\ldots,d_{N},N+1, \ldots,1) \\& \quad{} -(d_{1},\ldots,d_{N},N,\ldots,1) (d_{0},\ldots,d_{N-1},d_{N+1},N+1, \ldots,1) \\& \quad{} +(d_{0},\ldots,d_{N-1},N,\ldots,1) (d_{1},\ldots,d_{N+1},N+1,\ldots ,1)=0, \\& \bigl[-(d_{-1},d_{1},\ldots,d_{N-1},N, \ldots,1) (d_{0},\ldots,d_{N},N+1, \ldots,1) \\& \quad{} +(d_{0},\ldots,d_{N-1},N,\ldots,1) (d_{-1},d_{1},\ldots,d_{N},N+1, \ldots,1) \\& \quad{} -(d_{1},\ldots,d_{N},N,\ldots,1) (d_{-1},\ldots,d_{N-1},N+1,\ldots ,1)\bigr] \\& \quad{} +\sum_{j=1}^{K}\dot{ \gamma}_{j}(s)\bigl[(d_{0},\ldots,d_{N-1},N, \ldots ,\hat{j},\ldots,1,\alpha_{j}) (d_{0}, \ldots,d_{N},N+1,\ldots,1) \\& \quad{} -(d_{0},\ldots,d_{N},N+1,\ldots, \hat{j},\ldots,1,\alpha_{j}) (d_{0}, \ldots,d_{N-1},N, \ldots,1) \\& \quad{} +(d_{0},\ldots,d_{N},N,\ldots,1, \alpha_{j}) (d_{0},\ldots,d_{N-1},N+1, \ldots, \hat{j},\ldots,1)\bigr]=0, \\& (d_{0},\ldots,d_{N-2},d_{N},N,\ldots,1) (d_{-1},\ldots,d_{N-1},N, \ldots,1,\alpha_{j}) \\& \quad{} -(d_{0},\ldots,d_{N-1},N,\ldots,1) (d_{-1},\ldots,d_{N-2},d_{N},N, \ldots,1, \alpha_{j}) \\& \quad{} +(d_{-1},\ldots,d_{N-2},N,\ldots,1) (d_{0},\ldots,d_{N},N,\ldots,1, \alpha_{j})=0, \end{aligned}$$
$$\begin{aligned}& (d_{1},\ldots,d_{N-1},d_{N+1},N+1,\ldots, \hat{j},\ldots,1) (d_{0}, \ldots,d_{N},N+1,\ldots,1) \\& \quad{} -(d_{1},\ldots,d_{N},N+1,\ldots, \hat{j},\ldots,1) (d_{0},\ldots,d _{N-1},d_{N+1},N+1, \ldots,1) \\& \quad{} +(d_{0},\ldots,d_{N-1},N+1,\ldots, \hat{j},\ldots,1) (d_{1},\ldots,d _{N+1},N+1,\ldots,1)=0, \end{aligned}$$
respectively. □
In order to obtain the one-soliton solution of the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52), we take \(N=1\), \(K=1\) and
$$\begin{aligned}& \phi_{11}=\frac{e^{\xi_{1}}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad \phi_{12}=e^{-\eta _{1}},\quad\quad \phi_{21}=-\frac{e^{\xi_{2}}}{e^{q_{2}}-e^{Q_{1}}}, \\& \gamma_{1}(t)=\frac{e^{a(t)}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad \gamma_{2}=0, \end{aligned}$$
in the Casoratian determinants (67)-(70). Here \(\xi_{i}\), \(\eta_{i}\) (\(i=1,2\)) are given in (53) and \(a(t)\) is an arbitrary function of t. Hence we obtain
$$\begin{aligned}& f_{n}(x,n,t)=\frac{e^{2a(t)-\eta1}}{e^{q_{1}}-e^{Q_{1}}}\bigl(1+e^{\xi _{1}+\eta_{1}-2a(t)}\bigr), \end{aligned}$$
$$\begin{aligned}& f'_{n}(x,n,t)=- \frac{e^{2a(t)+\xi_{2}-\eta_{1}}}{e^{q_{1}}-e^{Q_{1}}}\biggl(1+ \frac{e^{q _{2}}-e^{q_{1}}}{e^{q_{2}}-e^{Q_{1}}}e^{\xi_{1}+\eta_{1}-2a(t)}\biggr), \end{aligned}$$
$$\begin{aligned}& g^{(1)}_{n}(x,n,t)= \sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q1}}}e^{\xi_{1}-\eta_{1}+a(t)}, \end{aligned}$$
$$\begin{aligned}& h^{(1)}_{n}(x,n,t)=-\sqrt{ \frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q1}}} \frac{e^{\xi_{2}+a(t)}}{e ^{q_{2}}-e^{Q_{1}}}. \end{aligned}$$
Substituting functions (89)-(92) into the dependent variable transformations (48), we get a one-soliton solution of the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52) given in (59)-(62).
If we take \(N=2\), \(K=1\) and
$$\begin{aligned}& \phi_{11}=\frac{e^{\xi_{1}}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad \phi_{12}=e^{-\eta _{1}},\quad\quad \phi_{21}=-\frac{e^{\xi_{2}}}{e^{q_{2}}-e^{Q_{1}}},\\& \phi_{22}=e ^{\eta_{2}},\quad\quad \phi_{31}=\frac{e^{\xi_{3}}}{e^{q_{3}}-e^{Q_{1}}}, \\& \gamma_{1}(t)=\frac{e^{a(t)}}{e^{q_{1}}-e^{Q_{1}}},\quad\quad \gamma_{2}=- \frac{1}{e ^{q_{2}}-e^{Q_{1}}},\quad\quad \gamma_{3}=0, \end{aligned}$$
in the Casoratian determinants (67)-(70), we get
$$\begin{aligned}& \begin{aligned}[b] f_{n}(x,n,t)&=\frac{(e^{Q_{1}}-e^{Q_{2}})e^{2a(t)-\eta1-\eta2}}{(e ^{q_{2}}-e^{Q_{1}})(e^{q_{1}}-e^{Q_{1}})}\biggl(1+\frac{e^{q_{1}}-e^{Q_{2}}}{e ^{Q_{1}}-e^{Q_{2}}}e^{\xi_{1}+\eta_{1}-2a(t)}+ \frac{e^{Q_{1}}-e^{q _{2}}}{e^{Q_{1}}-e^{Q_{2}}}e^{\xi_{2}+\eta_{2}} \\ &\quad{} +\frac{e^{q_{1}}-e^{q_{2}}}{e^{Q_{1}}-e^{Q_{2}}}e^{\xi_{1}+\eta_{1}+ \xi_{2}+\eta_{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f'_{n}(x,n,t)&=\frac{(e^{Q_{1}}-e^{Q_{2}})(e^{q_{3}}-e^{Q_{2}})e^{2a(t)+ \xi_{3}-\eta_{1}-\eta _{2}}}{(e^{q_{2}}-e^{Q_{1}})(e^{q_{1}}-e^{Q_{1}})}\biggl(1+ \frac{(e ^{q_{1}}-e^{Q_{2}})(e^{q_{1}}-e^{q_{3}})}{(e^{Q_{1}}-e^{Q_{2}})(e^{Q _{1}}-e^{q_{3}})}e^{\xi_{1}+\eta_{1}-2a(t)}\hspace{-20pt} \\ &\quad{}+ \frac{(e^{q_{2}}-e^{Q_{1}})(e^{q_{2}}-e^{q_{3}})}{(e^{Q_{1}}-e^{Q _{2}})(e^{q_{3}}-e^{Q_{2}})}e^{\xi_{2}+\eta_{2}} \\ &\quad{} +\frac{(e^{q_{1}}-e ^{q_{2}})(e^{q_{1}}-e^{q_{3}})(e^{q_{3}}-e^{q_{2}})}{(e^{Q_{1}}-e^{Q _{2}})(e^{Q_{1}}-e^{q_{3}})(e^{q_{3}}-e^{Q_{2}}))}e^{\xi_{1}+\eta_{1}+ \xi_{2}+\eta_{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& g^{(1)}_{n}(x,n,t)=\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}}e^{a(t)+\xi_{1}-\eta_{1}-\eta_{2}} \biggl(\bigl(e^{q_{1}}-e^{q_{2}}\bigr)e^{\xi _{2}+\eta_{2}}+ \frac{(e^{Q_{2}}-e^{Q_{1}})(e^{q_{1}}-e^{Q_{2}})}{e ^{q_{2}}-e^{Q_{1}}}\biggr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] h^{(1)}_{n}(x,n,t)&=\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}}e^{a(t)-\eta_{1}-\eta_{2}} \biggl(\frac{e^{Q_{2}}-e^{q_{3}}}{(e^{q _{3}}-e^{Q_{1}})(e^{q_{2}}-e^{Q_{1}})}e^{\xi_{3}+\eta_{1}} \\ &\quad{} + \frac{e^{q_{2}}-e^{q_{3}}}{(e^{q_{3}}-e^{Q_{1}})(e^{q_{2}}-e^{Q_{1}})}e ^{\xi_{2}+\xi_{3}+\eta_{1}+\eta_{2}}\biggr). \end{aligned} \end{aligned}$$
We introduce five constants \(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\), \(\epsilon _{1}\), \(\epsilon_{2}\) satisfying
$$\begin{aligned}& e^{\delta_{1}}=e^{Q_{2}}-e^{q_{1}},\quad\quad e^{\epsilon_{1}}= \frac{1}{e^{Q _{2}}-e^{Q_{1}}},\quad\quad e^{\delta_{3}}=e^{Q_{2}}-e^{q_{3}},\quad\quad e^{\delta_{2}+ \epsilon_{2}}= \frac{e^{Q_{1}}-e^{q_{2}}}{e^{Q_{1}}-e^{Q_{2}}}, \end{aligned}$$
and take
$$\begin{aligned}& \tilde{\xi}_{1}=\xi_{1}+\delta_{1},\quad\quad \tilde{ \xi}_{2}=\xi_{2}+\delta _{2},\quad\quad \tilde{ \xi}_{3}=\xi_{3}+\delta_{3},\quad\quad \tilde{ \eta}_{1}=\eta_{1}+ \epsilon_{1},\quad\quad \tilde{ \eta}_{2}=\eta_{2}+\epsilon_{2}, \end{aligned}$$
then equations (93)-(96) become
$$\begin{aligned}& \begin{aligned}[b] f_{n}(x,n,t)&=\frac{(e^{Q_{1}}-e^{Q_{2}})e^{\epsilon_{1}+\epsilon _{2}}e^{-\tilde{\eta}_{1}-\tilde{\eta}_{2}+2a(t)}}{(e^{q_{2}}-e^{Q _{1}})(e^{q_{1}}-e^{Q_{1}})}\biggl(1+e^{\tilde{\xi}_{1}+\tilde{\eta}_{1}-2a(t)}+e ^{\tilde{\xi}_{2}+\tilde{\eta}_{2}} \\ &\quad{} +\frac{(e^{q_{1}}-e^{q_{2}})(e^{Q_{1}}-e^{Q_{2}})}{(e^{q_{1}}-e^{Q _{2}})(e^{Q_{1}}-e^{q_{2}})}e^{\tilde{\xi}_{1}+\tilde{\eta}_{1}+ \tilde{\xi}_{2}+\tilde{\eta}_{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] f'_{n}(x,n,t)&=-\frac{(e^{Q_{1}}-e^{Q_{2}})e^{\epsilon_{1}+\epsilon _{2}}e^{\tilde{\xi}_{3}-\tilde{\eta}_{1}-\tilde{\eta}_{2}+2a(t)}}{(e ^{q_{2}}-e^{Q_{1}})(e^{q_{1}}-e^{Q_{1}})}\biggl(1+ \frac{e^{q_{3}}-e^{q_{1}}}{e ^{q_{3}}-e^{Q_{1}}}e^{\tilde{\xi}_{1}+\tilde{\eta}_{1}-2a(t)}+\frac{e ^{q_{3}}-e^{q_{2}}}{e^{q_{3}}-e^{Q_{2}}}e^{\tilde{\xi}_{2}+ \tilde{\eta}_{2}}\hspace{-20pt} \\ &\quad{} +\frac{(e^{q_{1}}-e^{q_{2}})(e^{Q_{2}}-e^{Q_{1}})(e^{q_{3}}-e^{q_{2}})(e ^{q_{3}}-e^{q_{1}})}{(e^{q_{1}}-e^{Q_{2}})(e^{q_{2}}-e^{Q_{1}})(e^{q _{3}}-e^{Q_{2}})(e^{q_{3}}-e^{Q_{1}})}e^{\tilde{\xi}_{1}+ \tilde{\eta}_{1}+\tilde{\xi}_{2}+\tilde{\eta}_{2}-2a(t)}\biggr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& g^{(1)}_{n}(x,n,t)=\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}} \frac{e^{\epsilon_{1}+\epsilon_{2}}(e^{Q_{1}}-e^{Q_{2}})e^{ \tilde{\xi}_{1}-\tilde{\eta}_{1}-\tilde{\eta}_{2}+a(t)}}{e^{q_{2}}-e ^{Q_{1}}}\biggl(1+\frac{e^{q_{1}}-e^{q_{2}}}{e^{q_{1}}-e^{Q_{2}}}e^{ \tilde{\xi}_{2}+\tilde{\eta}_{2}}\biggr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] h^{(1)}_{n}(x,n,t)&=\sqrt{\frac{e^{2\dot{a}(t)}}{e^{q_{1}}-e^{Q _{1}}}} \frac{(e^{Q_{2}}-e^{Q_{1}})e^{\epsilon_{1}+\epsilon_{2}}e^{ \tilde{\xi}_{3}-\tilde{\eta}_{2}+a(t)}}{(e^{q_{2}}-e^{Q_{1}})(e^{q _{3}}-e^{Q_{1}})} \\ &\quad{}\times \biggl(1+\frac{(e^{q_{2}}-e^{q_{3}})(e^{Q_{1}}-e^{Q_{2}})}{(e^{Q_{2}}-e^{q_{3}})(e ^{Q_{1}}-e^{q_{2}})}e^{\tilde{\xi}_{2}+\tilde{\eta}_{2}} \biggr). \end{aligned} \end{aligned}$$
Substituting the above functions (97)-(100) into the dependent variable transformation (48), we rederive the two-soliton solution of the nonlinear modified two-dimensional Toda lattice with self-consistent sources (49)-(52) obtained in Section 2.
4 Commutativity of the source generation procedure and Bäcklund transformation
In this section, we show that the commutativity of the source generation procedure and Bäcklund transformation holds for the two-dimensional Toda lattice. For this purpose, we derive another form of the modified two-dimensional Toda lattice with self-consistent sources which is the Bäcklund transformation for the two-dimensional Toda lattice with self-consistent sources given in [25].
We have shown that the Casorati determinants \(f_{n}\), \(f'_{n}\), \(g^{(j)} _{n}\), \(h^{(j)}_{n}\) given in (67)-(70) satisfy the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37). Now we take
$$\begin{aligned}& F_{n}=f_{n}=\det \bigl\vert \psi_{i}(n+j-1) \bigr\vert _{1\leq i,j \leq N}=(d _{0},\ldots,d_{N-1},N, \ldots,1), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] F'_{n}&=f'_{n-1}=\det \bigl\vert \psi_{i}(n+j-1) \bigr\vert _{1\leq i,j \leq N+1} \\ &= (d_{-1}, \ldots,d_{N-1},N+1,\ldots,1), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] G^{(j)}_{n}&=\sqrt{2}g^{(j)}_{n-1}=\sqrt{2 \dot{\gamma}_{j}(t)}(d _{-1},\ldots,d_{N-1},N, \ldots,1,\alpha_{j}), \\ &\quad j=1,\ldots,K, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] H^{\prime(j)}_{n}&=\sqrt{2}h^{(j)}_{n}=\sqrt{2 \dot{\gamma}_{j}(t)}(d _{0},\ldots,d_{N-1},N+1, \ldots,\hat{j},\ldots,1), \\ &\quad j=1,\ldots,K, \end{aligned} \end{aligned}$$
and we introduce two new fields
$$\begin{aligned}& G^{\prime(j)}_{n}=\sqrt{2\dot{\gamma}_{j}(t)}(d_{-2}, \ldots,d_{N-1},N+1, \ldots,1,\alpha_{j}),\quad j=1,\ldots,K, \end{aligned}$$
$$\begin{aligned}& H^{(j)}_{n}=\sqrt{2\dot{\gamma}_{j}(t)}(d_{1}, \ldots,d_{N-1},N, \ldots,\hat{j},\ldots,1),\quad j=1,\ldots,K, \end{aligned}$$
where the Pfaffian elements are defined in (67)-(74).
In [25], the authors prove that the Casorati determinants \(F_{n}\), \(G^{(j)}_{n}\), \(H^{(j)}_{n}\) solve the following two-dimensional Toda lattice with self-consistent sources:
$$\begin{aligned}& \bigl(D_{x}D_{s}-2e^{D_{n}}+2\bigr)F_{n} \cdot F_{n}=-\sum_{j=1}^{K}e^{D_{n}}G _{n}^{(j)}H_{n}^{(j)}, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}+e^{-D_{n}}\bigr)F_{n}\cdot G_{n}^{(j)}=0,\quad j=1,\ldots,K, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}+e^{-D_{n}}\bigr)H_{n}^{(j)} \cdot F_{n} =0,\quad j=1,\ldots,K. \end{aligned}$$
It is not difficult to show that the Casorati determinants \(F'_{n}\), \(G^{\prime(j)}_{n}\), \(H^{\prime(j)}_{n}\) give another solution to the two-dimensional Toda lattice with self-consistent sources (107)-(109).
Furthermore, we can verify that the Casorati determinants \(F_{n}\), \(F'_{n}\), \(G ^{(j)}_{n}\), \(G^{\prime(j)}_{n}\), \(H^{(j)}_{n}\), \(H^{\prime(j)}_{n}\) given in (101)-(106) satisfy the following bilinear equations:
$$\begin{aligned}& 2\bigl(D_{s}e^{-1/2D_{n}}-e^{1/2D_{n}}\bigr)F_{n} \cdot F'_{n}=-\sum_{j=1}^{K}e ^{1/2D_{n}}G_{n}^{(j)}\cdot H^{\prime(j)}_{n}, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}+e^{-D_{n}}\bigr)F_{n}\cdot F'_{n}=0,\quad j=1,\ldots,K, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}+e^{-D_{n}}\bigr)H_{n}^{(j)} \cdot H^{\prime(j)}_{n} =0,\quad j=1,\ldots ,K, \end{aligned}$$
$$\begin{aligned}& \bigl(D_{x}+e^{-D_{n}}\bigr)G_{n}^{(j)} \cdot G^{\prime(j)}_{n} =0,\quad j=1,\ldots ,K, \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] e^{1/2D_{n}}F_{n}\cdot H^{\prime(j)}_{n}&=e^{-1/2D_{n}}F_{n} \cdot H^{\prime(j)} _{n}-e^{-1/2D_{n}}H_{n}^{(j)} \cdot F'_{n}, \\ &\quad j=1,\ldots,K, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] e^{1/2D_{n}} G_{n}^{(j)}\cdot F'_{n}&=e^{-1/2D_{n}}G_{n}^{(j)} \cdot F'_{n}-e^{-1/2D_{n}}F_{n}\cdot G^{\prime(j)}_{n}, \\ &\quad j=1,\ldots,K, \end{aligned} \end{aligned}$$
which is another form of the modified two-dimensional Toda lattice with self-consistent sources. It is proved in [30] that equations (110)-(115) constitute the Bäcklund transformation for the two-dimensional Toda lattice with self-consistent sources (107)-(109). Therefore, the commutativity of the source generation procedure and the Bäcklund transformation is valid for the two-dimensional Toda lattice.
5 Conclusion and discussion
In this paper, Grammian solutions to the modified two-dimensional Toda lattice are presented. From the Grammian solutions, the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) is produced via the source generation procedure. We show that the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) is resolved into determinant identities by presenting its Grammian and Casorati determinant solutions. We also construct another form of the modified two-dimensional Toda lattice with self-consistent sources (110)-(115), which is the Bäcklund transformation for the two-dimensional Toda lattice with self-consistent sources derived in [25].
Now we show that the modified two-dimensional Toda lattice has a continuum limit to the mKP equation [2, 31], and that the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37) yields the mKP equation with self-consistent sources derived in [32] through a continuum limit. For this purpose, we take
$$\begin{aligned}& D_{n}=2\epsilon D_{X}-2\epsilon^{2}D_{Y},\quad\quad D_{x}=\epsilon^{2}D_{Y}+ \frac{3}{2} \epsilon D_{X},\quad\quad D_{s}=-\frac{16}{3} \epsilon^{3}D_{T}, \\& f(n,x,s)=F(X,Y,T),\quad\quad f'(n,x,s)=F'(X,Y,T), \end{aligned}$$
in the modified two-dimensional Toda lattice (8)-(9), and compare the \(\epsilon^{2}\) order in (8), and the \(\epsilon^{3}\) order in (9), then we obtain the mKP equation [2, 31]:
$$\begin{aligned}& \bigl(D_{Y}+D^{2}_{X}\bigr)F\cdot F' =0, \\& \bigl(D^{3}_{X}-4D_{T}-3D_{X}D_{Y} \bigr)F\cdot F' =0, \end{aligned}$$
where F, \(F'\) denote \(F(X,Y,T)\), \(F'(X,Y,T)\), respectively.
By taking
$$\begin{aligned}& D_{n}=2\epsilon D_{X}-2\epsilon^{2}D_{Y},\quad\quad D_{x}=\epsilon^{2}D_{Y}+ \frac{3}{2} \epsilon D_{X},\quad\quad D_{s}=\frac{4}{3} \epsilon^{3}D_{T}, \\& f(n,x,s)=F(X,Y,T),\quad\quad g^{(j)}(n,x,s)=\frac{2\sqrt{3}}{3} \epsilon^{\frac{3}{2}}G_{j}(X,Y,T), \\& f'(n,x,s)=F'(X,Y,T),\quad\quad h^{(j)}(n,x,s)= \frac{2\sqrt{3}}{3} \epsilon^{\frac{3}{2}}H_{j}(X,Y,T), \end{aligned}$$
for \(j=1,\ldots,K\) in the modified two-dimensional Toda lattice with self-consistent sources (8), (33), (36)-(37), and comparing the \(\epsilon^{2}\) order in (8), (36)-(37), and the \(\epsilon^{3}\) order in (33), we obtain the mKP equation with self-consistent sources [32]:
$$\begin{aligned}& \bigl(D_{Y}+D^{2}_{X}\bigr)F\cdot F' =0, \\& \bigl(D_{T}-3D_{X}D_{Y}+D^{3}_{X} \bigr)F\cdot F' =-\sum_{j=1}^{K}G_{j}H_{j}, \\& \bigl(D_{Y}+D^{2}_{X}\bigr)F \cdot G_{j} =0, \quad j=1,\ldots,K, \\& \bigl(D_{Y}+D^{2}_{X}\bigr)H_{j} \cdot F' =0, \quad j=1,\ldots,K, \end{aligned}$$
where F, \(F'\), \(G_{j}\), \(H_{j}\) denote \(F(X,Y,T)\), \(F'(X,Y,T)\), \(G_{j}(X,Y,T)\), \(H_{j}(X,Y,T)\) for \(j=1,\ldots,K\), respectively.
Recently, generalized Wronskian (Casorati) determinant solutions have been constructed for continuous and discrete soliton equations [33, 34, 35, 36, 37, 38, 39]. Besides soliton solutions, a broader class of solutions, such as rational solutions, negatons, positons and complexitons, is obtained from the generalized Wronskian (Casorati) determinant solutions [33, 34, 35, 36, 37, 38]. In [39], a general Casoratian formulation is presented for the two-dimensional Toda lattice equation, from which various examples of Casoratian-type solutions are derived. It would be interesting to construct the two-dimensional Toda lattice equation with self-consistent sources having a generalized Casorati determinant solution via the source generation procedure. This would yield a broader class of solutions, such as negaton, positon, and complexiton-type solutions, of the two-dimensional Toda lattice equation with self-consistent sources.
This work was supported by the Natural Science Foundation of Inner Mongolia Autonomous Region (Grant no. 2016MS0115), the National Natural Science Foundation of China (Grants no. 11601247 and 11605096).
The author declares that she has no competing interests.
The author has contributed solely to the writing of this paper. She read and approved the manuscript.
Hirota, R, Ohta, Y, Satsuma, J: Solutions of the Kadomtsev-Petviashvili equation and the two-dimensional Toda equations. J. Phys. Soc. Jpn. 57, 1901-1904 (1988)
Hirota, R: The Direct Method in Soliton Theory. Cambridge University Press, Cambridge (2004)
Hirota, R: Exact solution to 2n-wave interaction. J. Phys. Soc. Jpn. 57, 436-441 (1988)
Leon, J, Latifi, A: Solutions of an initial-boundary value problem for coupled nonlinear waves. J. Phys. A 23, 1385-1403 (1990)
Mel'nikov, VK: A direct method for deriving a multi-soliton solution for the problem of interaction of waves on the \(x,y\) plane. Commun. Math. Phys. 112, 639-652 (1987)
Mel'nikov, VK: Interaction of solitary waves in the system described by the Kadomtsev-Petviashvili equation with a self-consistent source. Commun. Math. Phys. 126, 201-215 (1989)
Mel'nikov, VK: On equations for wave interactions. Lett. Math. Phys. 7, 129-136 (1983)
Mel'nikov, VK: Integration of the Korteweg-de Vries equation with a source. Inverse Probl. 6, 233-246 (1990)
Mel'nikov, VK: Integration of the nonlinear Schrödinger equation with a source. Inverse Probl. 8, 133-147 (1992)
Zeng, YB, Ma, WX, Lin, RL: Integration of the soliton hierarchy with self-consistent sources. J. Math. Phys. 41, 5453-5489 (2000)
Lin, RL, Zeng, YB, Ma, WX: Solving the KdV hierarchy with self-consistent sources by inverse scattering method. Physica A 291, 287-298 (2001)
Zeng, YB, Ma, WX, Shao, YJ: Two binary Darboux transformations for the KdV hierarchy with self-consistent sources. J. Math. Phys. 42, 2113-2128 (2001)
Zeng, YB, Shao, YJ, Ma, WX: Integrable-type Darboux transformation for the mKdV hierarchy with self-consistent sources. Commun. Theor. Phys. 38, 641-648 (2002)
Xiao, T, Zeng, YB: Generalized Darboux transformations for the KP equation with self-consistent sources. J. Phys. A, Math. Gen. 37, 7143-7162 (2004)
Liu, XJ, Zeng, YB: On the Toda lattice equation with self-consistent sources. J. Phys. A, Math. Gen. 38, 8951-8965 (2005)
Zeng, YB, Shao, YJ, Xue, WM: Negaton and positon solutions of the soliton equation with self-consistent sources. J. Phys. A, Math. Gen. 36, 5035-5043 (2003)
Ma, WX: Complexiton solutions of the Korteweg-de Vries equation with self-consistent sources. Chaos Solitons Fractals 26, 1453-1458 (2005)
Hase, Y, Hirota, R, Ohta, Y, Satsuma, J: Soliton solutions of the Mel'nikov equations. J. Phys. Soc. Jpn. 58, 2713-2720 (1989)
Matsuno, Y: Bilinear Bäcklund transformation for the KdV equation with a source. J. Phys. A, Math. Gen. 24, 273-277 (1991)
Hu, XB: Nonlinear superposition formula of the KdV equation with a source. J. Phys. A, Math. Gen. 24, 5489-5497 (1991)
Matsuno, Y: KP equation with a source and its soliton solutions. J. Phys. A, Math. Gen. 23, 1235-1239 (1990)
Zhang, DJ: The N-soliton solutions for the modified KdV equation with self-consistent sources. J. Phys. Soc. Jpn. 71, 2649-2656 (2002)
Deng, SF, Chen, DY, Zhang, DJ: The multisoliton solutions of the KP equation with self-consistent sources. J. Phys. Soc. Jpn. 72, 2184-2192 (2003)
Gegenhasi, Hu, XB: On an integrable differential-difference equation with a source. J. Nonlinear Math. Phys. 13, 183-192 (2006)
Hu, XB, Wang, HY: Construction of dKP and BKP equations with self-consistent sources. Inverse Probl. 22, 1903-1920 (2006)
Hu, J, Hu, XB, Tam, HW: Ishimori-I equation with self-consistent sources. J. Math. Phys. 50, 053510 (2009)
Wang, HY: Integrability of the semi-discrete Toda equation with self-consistent sources. J. Math. Anal. Appl. 330, 1128-1138 (2007)
Gegenhasi, Bai, XR: On the modified discrete KP equation with self-consistent sources. J. Nonlinear Math. Phys. 24, 224-238 (2017)
Zhao, JX, Gegenhasi, Hu, XB: Commutativity of Pfaffianization and Bäcklund transformations: two differential-difference systems. J. Phys. Soc. Jpn. 78, 064005 (2009)
Wang, HY, Hu, XB, Gegenhasi: 2D Toda lattice equation with self-consistent sources: Casoratian type solutions, bilinear Bäcklund transformation and Lax pair. J. Comput. Appl. Math. 202, 133-143 (2007)
Jimbo, M, Miwa, T: Solitons and infinite dimensional Lie algebras. Publ. Res. Inst. Math. Sci. 19, 943-1001 (1983)
Deng, SF: The multisoliton solutions for the mKPI equation with self-consistent sources. J. Phys. A, Math. Gen. 39, 14929-14945 (2006)
Sirianunpiboon, S, Howard, SD, Roy, SK: A note on the Wronskian form of solutions of the KdV equation. Phys. Lett. A 134, 31-33 (1988)
Ma, WX, You, Y: Solving the Korteweg-de Vries equation by its bilinear form: Wronskian solutions. Trans. Am. Math. Soc. 357, 1753-1778 (2004)
Ma, WX: Complexiton solutions to the Korteweg-de Vries equation. Phys. Lett. A 301, 35-44 (2002)
Maruno, K, Ma, WX, Oikawa, M: Generalized Casorati determinant and positon-negaton type solutions of the Toda lattice equation. J. Phys. Soc. Jpn. 73, 831-837 (2004)
Zhang, DJ: Singular solutions in Casoratian form for two differential-difference equations. Chaos Solitons Fractals 23, 1333-1350 (2005)
Ma, WX, Maruno, K: Complexiton solutions of the Toda lattice equation. Physica A 343, 219-237 (2004)
Ma, WX: An application of the Casoratian technique to the 2D Toda lattice equation. Mod. Phys. Lett. B 22, 1815-1825 (2008)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
Gegenhasi Adv Differ Equ (2017) 2017: 277. https://doi.org/10.1186/s13662-017-1347-3
Accepted 31 August 2017
Publisher Name Springer International Publishing
Mathematical Biosciences and Engineering, 2017, 14(1): 339-358. doi: 10.3934/mbe.2017022
A male-female mathematical model of human papillomavirus (HPV) in African American population
Najat Ziyadi
Department of Mathematics, Morgan State University, Baltimore, MD 21251, USA
We introduce mathematical human papillomavirus (HPV) epidemic models (with and without vaccination) for African American females (AAF) and African American males (AAM) with "fitted" logistic demographics and use these models to study the HPV disease dynamics. US Census Bureau data on AAF and AAM aged 16 years and older from 2000 to 2014 are used to "fit" the logistic demographic models. We compute the basic reproduction number, $\mathcal{R}_0$, and use it to show that $\mathcal{R}_0$ is less than 1 in the African American (AA) population with or without implementation of an HPV vaccination program. Furthermore, we find that adopting an HPV vaccination policy in the AAF and AAM populations lowers $\mathcal{R}_0$ and the number of HPV infections. Sensitivity analysis is used to illustrate the impact of each model parameter on the basic reproduction number.
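To illustrate the next-generation-matrix computation of $\mathcal{R}_0$ mentioned in the abstract, here is a minimal sketch for a generic two-sex SIS model in which females are infected by males and vice versa. It is not the paper's AAF/AAM HPV model; the rates `beta_f`, `beta_m`, `gamma_f`, `gamma_m` are hypothetical placeholders.

```python
# Minimal sketch of the next-generation-matrix computation of R_0 for a
# generic two-sex SIS model. NOT the paper's AAF/AAM HPV model; all rate
# values are hypothetical placeholders.
import numpy as np

beta_f, beta_m = 0.30, 0.25      # cross-sex transmission rates (assumed)
gamma_f, gamma_m = 0.20, 0.20    # recovery/exit rates (assumed)

def R0(beta_f, beta_m, gamma_f, gamma_m):
    F = np.array([[0.0, beta_f], [beta_m, 0.0]])   # new-infection matrix
    V = np.diag([gamma_f, gamma_m])                # transition matrix
    K = F @ np.linalg.inv(V)                       # next-generation matrix
    return max(abs(np.linalg.eigvals(K)))          # spectral radius

r0 = R0(beta_f, beta_m, gamma_f, gamma_m)
print('R_0 =', r0)  # equals sqrt(beta_f*beta_m/(gamma_f*gamma_m)) for this toy model

# Normalized forward sensitivity index of R_0 w.r.t. beta_f via finite
# differences; analytically it is 1/2 for this toy model.
h = 1e-6
sens = (R0(beta_f + h, beta_m, gamma_f, gamma_m) - r0) / h * (beta_f / r0)
print('sensitivity index w.r.t. beta_f ~', sens)
```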
Copyright Info: © 2017, Najat Ziyadi, licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
\begin{definition}[Definition:Basis (Topology)/Specification of Topology]
As a way of specifying a particular topology on a set, we can say:
:''Let $\tau$ be the topology which has the sets ... as a basis.''
In such a case, those sets should all satisfy the axioms for a synthetic basis $(\text B 1)$ and $(\text B 2)$.
\end{definition}
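For a finite set, the convention above can be made computational. The following is a minimal Python sketch (ours; the sample set and basis are illustrative assumptions) that checks the synthetic-basis axioms $(\text B 1)$ and $(\text B 2)$ and then generates the topology whose open sets are unions of basis elements.

```python
# Minimal sketch (finite sets only): given a candidate basis B on a set X,
# check the synthetic-basis axioms (B1), (B2) and generate the topology
# whose open sets are arbitrary unions of basis elements.
from itertools import combinations

X = frozenset({1, 2, 3})
B = [frozenset({1}), frozenset({2, 3}), frozenset({3})]   # illustrative basis

# (B1): the basis covers X.
assert frozenset().union(*B) == X

# (B2): every point of an intersection of two basis sets lies in some
# basis set contained in that intersection.
for U, V in combinations(B, 2):
    for p in U & V:
        assert any(p in W and W <= (U & V) for W in B)

# Topology generated by B: all unions of subfamilies of B (the empty union
# gives the empty set; the covering union gives X).
tau = set()
for r in range(len(B) + 1):
    for sub in combinations(B, r):
        tau.add(frozenset().union(*sub))
print(sorted(sorted(s) for s in tau))
```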